\begin{document} \small \pagenumbering{arabic} \title{ECH spectrum of some prequantization bundles} \author{Guanheng Chen} \date{} \maketitle \thispagestyle{empty} \begin{abstract} A prequantization bundle is a circle bundle over a symplectic surface with negative Euler class. A connection $1$-form induces a natural contact form on it. The purpose of this note is to compute the ECH spectrum of the prequantization bundles of the sphere and the torus. Our proof relies on computations of the ECH cobordism maps induced by the associated line bundles. \end{abstract} \section{Introduction and main results} Let $Y$ be a closed three-manifold equipped with a contact form $\lambda$ such that $\lambda \wedge d\lambda >0$. M. Hutchings introduces a sequence of numerical invariants \begin{equation*} 0< c_1(Y, \lambda) \le c_2(Y, \lambda) \le c_3(Y, \lambda) \le \cdots \le \infty \end{equation*} associated to $(Y, \lambda)$, which he calls the \textbf{ECH spectrum} \cite{H3}. The ECH spectrum is a powerful tool for studying four-dimensional symplectic embedding problems, and in these applications, computations of the ECH spectrum play a key role. When $Y$ is the boundary of a domain in $\mathbb{R}^4$, many computations have been achieved by M. Hutchings, D. Cristofaro-Gardiner, K. Choi, D. Frenkel and V. G. B. Ramos \cite{H3, CCFHV, DCG}. Beyond boundaries of domains in $\mathbb{R}^4$, B. Ferreira, V. G. B. Ramos and A. Vicente recently gave computations for the unit disk subbundle of the cotangent bundle of the sphere \cite{FRV, FRV2}. The purpose of this paper is to compute the ECH spectrum for some prequantization bundles. Roughly speaking, a prequantization bundle is a circle bundle over a symplectic surface with negative Euler class. A holomorphic curve in its symplectization has a certain $S^1$-symmetry due to the fibration structure. Based on this observation, J. Nelson and M.
Weiler compute the embedded contact homology of the prequantization bundles \cite{NW} (based on D. Farris's PhD thesis \cite{Fa}). Their computations play a crucial role in our proof. The definition of the prequantization bundles is as follows. Let $(\Sigma, \omega_{\Sigma})$ be a closed surface with a volume form. Assume that the class $[\omega_{\Sigma}] \in H^2(\Sigma, \mathbb{R})$ is integral, i.e., it lies in the image of $H^2(\Sigma, \mathbb{Z})$. Let $\pi_E: E \to \Sigma$ be a complex line bundle with $c_1(E) =-[\omega_{\Sigma}]$. Then $E$ is called a \textbf{prequantization line bundle}. Let $e :=\langle c_1(E), [\Sigma]\rangle$ denote the degree of $E$. Fix a Hermitian metric $h$ and a Hermitian connection $1$-form $A_{\nabla}$ such that $\frac{i}{2\pi} F_{A_{\nabla}} = -\omega_{\Sigma}$, where $ F_{A_{\nabla}} $ is the curvature of $ {A_{\nabla}} $. This gives rise to a global angular form $\alpha_{\nabla} \in \Omega^1(E -\Sigma, \mathbb{R})$. Under a unitary trivialization $U \times \mathbb{C}$, $\alpha_{\nabla} $ is of the form $\frac{1}{2\pi} (d\theta - i A_{\nabla} \vert_U)$, where $d \theta$ is the angular form of $\mathbb{C}$ and $ A_{\nabla} \vert_U$ is an $i\mathbb{R}$-valued $1$-form. Therefore, we have $d\alpha_{\nabla} = \pi_E^*\omega_{\Sigma}$ over $E-\Sigma$. A natural symplectic form on $E$ is defined by \begin{equation*} \Omega:=\pi_E^*\omega_{\Sigma} + d(\rho^2 \alpha_{\nabla}), \end{equation*} where $\rho$ is the radial coordinate of $E$ defined by the metric $h$. Extend $\Omega$ over the zero section $\Sigma$ by \begin{equation*} d(\rho^2 \alpha_{\nabla}) \vert_{fiber} :=(\mbox{area form of $\mathbb{C}$})/ \pi \mbox{ and } d(\rho^2 \alpha_{\nabla})(T\Sigma, \cdot) :=0. \end{equation*} Let $\pi: Y:=\{ \rho =1\} \to \Sigma$ be the unit circle subbundle of $E$. Since $$\Omega = 2\rho d \rho \wedge \alpha_{\nabla} + (\rho^2 +1) d\alpha_{\nabla}$$ away from $\Sigma$, the Liouville vector field is $Z=\frac{1+\rho^2}{2 \rho^2}\rho \partial_{\rho}$.
Hence, $\Omega$ induces a contact form $\lambda = \Omega(Z, \cdot) =2 \alpha_{\nabla} $ on $Y.$ The contact manifold $(Y, \lambda)$ is called the \textbf{prequantization bundle} of $(\Sigma, \omega_{\Sigma})$. Our main results are as follows. \begin{thm} \label{thm0} Suppose that $\Sigma$ is the two-sphere. Then for any $k \ge 0$, the $k$-th ECH capacity of $(Y, \lambda)$ is $$c_k(Y, \lambda) = 2d|e|,$$ where $d$ is the unique nonnegative integer such that $$2d +d|e|(d-1) \le 2k \le 2d + d|e|(d+1).$$ \end{thm} \begin{thm} \label{thm2} Suppose that $\Sigma$ is the two-torus. Then for any $k \ge 1$, the $k$-th ECH capacity of $(Y, \lambda)$ satisfies $$2d_-|e|\le c_k(Y, \lambda)\le 2d_+|e|,$$ where $d_{-}$ and $d_+$ are, respectively, the minimal and maximal integers such that there exist nonnegative integers $m_+,m_-, m_1,m_2$ satisfying the following properties: \begin{equation} \label{eq20} \begin{split} &d^2|e| +m_+-m_-=2k,\\ &m_++m_1+m_2+m_-=d|e| \mbox{ and } m_1, m_2 \in \{0, 1\}. \end{split} \end{equation} Moreover, we have either $d_+=d_-$ or $d_- =d_+-1$. \end{thm} For some special $k$ and $e$, we can improve the inequalities in Theorem \ref{thm2} to equalities. \begin{corollary} Suppose that $\Sigma$ is the two-torus. Then the following assertions hold: \begin{enumerate} \item If $|e| \ge 2$, then $c_1(Y, \lambda) =2|e|$. \item Suppose that $e =-1$. If $k$ cannot be written as $ \frac{ n(n-1)}{2}$ for some positive integer $n$, then we have \begin{equation*} c_k(Y, \lambda) =2 \lfloor \sqrt{2k+ \frac{1}{4}} + \frac{1}{2} \rfloor |e|, \end{equation*} where $ \lfloor x \rfloor $ denotes the largest integer that is less than or equal to $x$. \end{enumerate} \end{corollary} \begin{proof} From the relations (\ref{eq20}), it is easy to check that $d_{\pm}$ satisfy \begin{equation*} d_{\pm}(d_{\pm}-1)|e| \le 2k \mbox{ and } d_{\pm}(d_{\pm}+1)|e| \ge 2k.
\end{equation*} If $|e| \ge 2$ and $k=1$, then $d=1$ is the only positive integer satisfying these two inequalities. Therefore, $c_1(Y, \lambda) =2|e|$. Suppose that $|e|=1$. Solving the inequalities $d(d-1) \le 2k \le d(d+1) $, we get \begin{equation*} \sqrt{2k+ \frac{1}{4}} - \frac{1}{2} \le d \le \sqrt{2k+ \frac{1}{4}} + \frac{1}{2}. \end{equation*} Let $d_{max} = \sqrt{2k+ \frac{1}{4}} + \frac{1}{2}$ and $d_{min} = \sqrt{2k+ \frac{1}{4}} -\frac{1}{2}$. Note that $d_{max} =d_{min} +1$. Since $d$ is an integer, we have $\lceil d_{min}\rceil \le d\le \lfloor d_{max} \rfloor $. The assumption that $k \ne \frac{ n(n-1)}{2}$ for any $n \in \mathbb{N}$ implies that $d_{max}$ is not an integer. Hence, we have $d_{max} = \lfloor d_{max} \rfloor + r $ for some $0<r<1$. Then \begin{equation*} \lceil d_{min}\rceil = \lceil \lfloor d_{max} \rfloor + r -1 \rceil =\lfloor d_{max} \rfloor. \end{equation*} Therefore, we have $d_{\pm} = \lfloor d_{max} \rfloor= \lceil d_{min}\rceil $. \end{proof} \begin{remark} If we consider $Y=\{\rho =c\}$, then the induced contact form on $Y$ is $\lambda_{c} =(1+c^2) \alpha_{\nabla} =\frac{1+c^2}{2} \lambda$. Then $c_k(Y, \lambda_c) = \frac{1+c^2}{2} c_k(Y, \lambda)$. \end{remark} Let $DE:=\{ \rho \le 1 \}$ be the unit disk subbundle of $E$. Then $(DE, \Omega)$ forms a natural symplectic filling of $(Y, \lambda)$. The proofs of Theorem \ref{thm0} and Theorem \ref{thm2} rely on computations of the ECH cobordism maps induced by $(DE,\Omega)$. Note that the contact form $\lambda$ is degenerate. To define the ECH group and the cobordism map, we follow \cite{NW} and perturb $\lambda$ by a perfect Morse function $H:\Sigma \to \mathbb{R}$. Suitable modifications are also made to $\Omega$. The results are denoted by $\lambda_{\varepsilon}$ and $\Omega_{\varepsilon}$ respectively. The details are given in Section \ref{section31}. \begin{thm}\label{thm1} Assume that $(A_{\nabla}, H)$ satisfies the condition (\ref{eq8}).
For any $0<\varepsilon \ll 1 $, let $\lambda_{\varepsilon}$ and $\Omega_{\varepsilon}$ be the perturbations of $\lambda$ and $\Omega$ defined in Section \ref{section31}. Fix $\Gamma \in \mathbb{Z}_{|e|}$. For any positive integer $M$ such that $ M \equiv \Gamma \mod |e|$ and $2M <L_{\varepsilon}$, there exists $A \in H_2(DE, Y, \mathbb{Z})$ such that $\partial A = \Gamma$ and the ECH cobordism map $$ECH^L(DE, \Omega_{\varepsilon}, A ): ECH^L(Y, \lambda_{\varepsilon}, \Gamma) \to \mathbb{F}$$ maps $[e_-^M]$ to $1$ and maps $[e_+^{M-|e|}]$ to zero, where $e_-$ and $e_+$ are the Reeb orbits corresponding to the minimum and maximum of $H$. \end{thm} \begin{remark} Actually, when $\Sigma$ is the sphere, there is a standard way to compute the $U$ map on $ECH^{L_{\varepsilon}}(Y, \lambda_{\varepsilon})$, and this can be used to compute $c_k(Y, \lambda_{\varepsilon})$. The method is given by Hutchings in his computation of the ECH of the three-sphere (Proposition 4.1 of \cite{H4}). Ferreira, Ramos and Vicente use these arguments to compute the ECH spectrum of the cotangent bundle of the sphere \cite{FRV}. Here is a sketch of the argument. By Nelson and Weiler's index computations (Proposition 3.5 of \cite{NW}), one can show that the holomorphic curves $\mathcal{C}$ contributing to the $U$ map are holomorphic cylinders. These cylinders have degree either zero or one. If the degree of $\mathcal{C}$ is zero, then it corresponds to an index $2$ Morse flow line on $\mathbb{S}^2$. If the degree of $\mathcal{C}$ is one, then under the natural holomorphic structure of $E$, $\mathcal{C}$ is a meromorphic section of $E$ with poles and zeros. Combining these facts, one can show that \begin{enumerate} \item $U(e_+^{i} e_-^{j}) = e_+^{i-1} e_-^{j+1} $ (counting degree zero holomorphic cylinders); \item $U(e_-^{j}) = e_+^{j-|e|}$ (counting degree one holomorphic cylinders). \end{enumerate} Here $e_{\pm}$ are defined in Section \ref{section31}.
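The two counting rules above can be iterated by hand. As a sanity check (not part of the proof), the following Python sketch applies the rules starting from $e_-^{d|e|}$ and confirms that the number of $U$ iterations $k$ needed to reach the empty set satisfies $2k = 2d + d|e|(d-1)$, which is the lower bound in Theorem \ref{thm0}; the encoding of a generator $e_+^i e_-^j$ as a pair $(i, j)$ is our own bookkeeping device.

```python
# Toy bookkeeping for the U map rules sketched above (not part of the proof).
# A generator e_+^i e_-^j is stored as the pair (i, j); abs_e is the degree |e|.
def apply_U(gen, abs_e):
    i, j = gen
    if i > 0:                  # rule 1: degree-zero cylinders, e_+^i e_-^j -> e_+^{i-1} e_-^{j+1}
        return (i - 1, j + 1)
    return (j - abs_e, 0)      # rule 2: degree-one cylinders, e_-^j -> e_+^{j-|e|}

def steps_to_empty(d, abs_e):
    """Number of U iterations taking e_-^{d|e|} to the empty orbit set."""
    gen, k = (0, d * abs_e), 0
    while gen != (0, 0):
        gen = apply_U(gen, abs_e)
        k += 1
    return k

# The step count matches the lower bound 2d + d|e|(d-1) = 2k in Theorem thm0.
for d in range(6):
    for abs_e in range(1, 5):
        k = steps_to_empty(d, abs_e)
        assert 2 * k == 2 * d + d * abs_e * (d - 1)
```

The loop never leaves the set of generators of the form $e_+^i e_-^j$, reflecting the fact that only the two rules above occur in this grading range.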
\end{remark} \begin{remark} \label{remark1} Our alternative method of computing $c_k(Y, \lambda_{\varepsilon})$ relies on computations of the ECH cobordism maps (Theorem \ref{thm1}). Compared to the standard way mentioned in the above remark, our approach may seem a little strange or unnecessary. However, our computations of the ECH cobordism maps may be of independent interest, and there are some advantages to our method. We want to emphasize that we do not impose any constraint on the genus of $\Sigma$ in Theorem \ref{thm1}. The only place where we use the assumption $\Sigma=\mathbb{S}^2$ in Theorem \ref{thm0} is in showing that $U^k[e_-^{d|e|}] = [\emptyset]$ in this case. Our approach does not require understanding all the index $2$ holomorphic curves contributing to the $U$ map. It may help us avoid using the $S^1$-invariant domain-dependent almost complex structures and the Morse-Bott computations when we compute the $g(\Sigma) \ge 1$ cases. For computing the $U$ map, Nelson and Weiler have a heuristic idea about this (see page 11 of \cite{NW}), but filling in the details should be a difficult task. If one could prove $U^k[e_-^{d|e|}] = [\emptyset]$ for a subsequence $\{d_n\}_{n=1}^{\infty}$ when $g(\Sigma) \ge 1$ (using Nelson and Weiler's idea or another approach), then Theorem \ref{thm0} could be generalized to the higher-genus case, or at least one could obtain estimates as in Theorem \ref{thm2}. So far we cannot see any evidence for why this should be true, but it should be easier to verify than to compute the full $U$ map. In the case that $\Sigma$ is the torus, Corollary 1.13 in \cite{NW} suggests that there are only two distinct $U$-sequences. Combining this result with some additional computations of the $U$ map, we obtain Theorem \ref{thm2} even though we still do not know whether $U^k[e_-^{d|e|}] = [\emptyset]$ holds. So our approach seems easier to generalize.
\end{remark} An immediate application of Theorem \ref{thm0} and Theorem \ref{thm2} is an upper bound on the Gromov width of $(DE, \Omega)$. The \textbf{Gromov width} of a symplectic four-manifold $(X, \Omega_X)$ is defined by \begin{equation*} w_{Gr}(X, \Omega_X): = \sup\{a \in \mathbb{R}_{\ge 0} \vert \exists \mbox{ embedding } \varphi: B^4(a) \to X \mbox{ such that } \varphi^*\Omega_X=\omega_{std} \}, \end{equation*} where $B^4(a) =\{(z_1, z_2) \in \mathbb{C}^2 \vert \pi|z_1|^2 + \pi|z_2|^2 \le a\}$ is the four-ball of radius $\sqrt{a/\pi}$ and $\omega_{std}$ is the standard symplectic form of $\mathbb{R}^4$. \begin{corollary} Suppose that $\Sigma$ is the sphere or the torus. In the case that $\Sigma$ is the torus, we also assume that $|e| \ge 2$. Then the Gromov width of $(DE, \Omega)$ satisfies $w_{Gr}(DE, \Omega) \le 2|e|. $ \end{corollary} \begin{proof} For any small number $0<\epsilon \ll 1$, let $a_{\epsilon} = w_{Gr}(DE, \Omega) - \epsilon$. By definition, we have a symplectic embedding $\varphi: (B(a_{\epsilon}), \omega_{std}) \to (DE, \Omega)$. Then $(DE- \varphi(B(a_{\epsilon})), \Omega)$ is a weakly exact symplectic cobordism from $(Y, \lambda)$ to $(\partial B(a_{\epsilon}), \lambda_{std})$. The first ECH capacity of $(\partial B(a_{\epsilon}), \lambda_{std})$ is $a_{\epsilon}$. By Proposition 4.7 in \cite{H3}, we have \begin{equation*} a_{\epsilon}= c_1(B(a_{\epsilon})) \le c_1(Y,\lambda)=2|e|. \end{equation*} We get the result by taking $\epsilon \to 0$. \end{proof} \paragraph{Idea of the proof} In the case that $\Sigma$ is the sphere, the ECH group is very simple (Proposition \ref{lem8}). Then we can rewrite the $k$-th ECH capacity as \begin{equation*} c_k(Y, f\lambda ) =\inf \{L \in \mathbb{R} \vert i_L: ECH_{2k}^L(Y, f\lambda,0) \to ECH_{2k}(Y, f\lambda,0) \mbox{ is nonzero} \}.
\end{equation*} After the Morse-Bott perturbation, the Reeb orbits with action less than $L_{\varepsilon}$ correspond to the critical points of the Morse function $H$. According to Farris, Nelson and Weiler's results \cite{Fa, NW}, the differential vanishes on $ECC^{L_{\varepsilon}}(Y, \lambda_{\varepsilon})$. Moreover, for each grading $2k$, there exists only one ECH generator $\alpha_k$. Thus, it is natural to guess that the $k$-th ECH capacity is the action of the ECH generator $\alpha_k$ in $ECC^{L_{\varepsilon}}(Y, \lambda_{\varepsilon}, 0)$. However, this may not be the case: $\alpha_k$ could be the boundary of a linear combination of orbit sets whose actions are larger than $L_{\varepsilon}$ and whose orbits are not covers of the fibers over the critical points of $H$. In that case, $c_k(Y, \lambda_{\varepsilon}) \ge L_{\varepsilon}$, and we would get $c_k(Y, \lambda) = \infty$ by taking $\varepsilon \to 0$. To rule out this possibility, our strategy is to compute $ECH^L(DE, \Omega_{\varepsilon}, A )([\alpha_k])$. If $ECH^L(DE, \Omega_{\varepsilon}, A )([\alpha_k]) \ne 0$, then the above possibility cannot happen because we have the following diagram: $$\begin{CD} ECH^L(Y, \lambda_{\varepsilon}, 0)@> ECH^L(DE,\Omega_{\varepsilon}, A)>> \mathbb{F} \\ @VV i_L V @VV id V\\ ECH(Y, \lambda_{\varepsilon}, 0) @> ECH(DE,\Omega_{\varepsilon}, A) >> \mathbb{F}. \end{CD}$$ In the case that $\Sigma$ is the torus, most of the arguments are the same; the main difference is that the ECH group is more complicated. Thanks to Corollary 1.13 in \cite{NW}, we know that there are only two generators in some particular grading. Some extra computations on the $U$ map between these generators lead to Theorem \ref{thm2}. The method of computing $ECH^L(DE, \Omega_{\varepsilon}, A )([\alpha_k])$ is more or less the same as in \cite{GHC1}. Choose an almost complex structure such that the fiber of $E$ at the minimum of $H$ is holomorphic.
For some fixed $A \in H_2(DE, Y, \mathbb{Z})$, we show that the covers of the fiber give the only holomorphic current with $I=0$ in the relative class $A$. Then we obtain the result by applying the correspondence between solutions to the Seiberg-Witten equations and holomorphic curves (\cite{CG2}). \paragraph{Coefficients} We use $\mathbb{F} =\mathbb{Z}/2 \mathbb{Z}$ coefficients throughout this note. \section{Preliminaries} In this section, we give a quick review of embedded contact homology (abbreviated as ECH). For more details, please refer to \cite{H4}. Let $ Y$ be a closed $3$-dimensional manifold equipped with a nondegenerate contact form $\lambda$. The contact structure of $Y$ is denoted by $\xi: =\ker \lambda$. The \textbf{Reeb vector field} $R$ of $(Y, \lambda)$ is characterized by the conditions $\lambda(R)=1$ and $d\lambda(R, \cdot)=0$. A \textbf{Reeb orbit} is a smooth map $\gamma: \mathbb{R}_{\tau} / T \mathbb{Z} \to Y $ satisfying the ODE $\partial_{\tau} \gamma =R \circ \gamma$ for some $T>0$. The number $T$ is the \textbf{action} of $\gamma$, which can alternatively be defined by \begin{equation*} \mathcal{A}_{\lambda}(\gamma) :=T= \int_{\gamma} \lambda. \end{equation*} Given $L \in \mathbb{R}$, the contact form $\lambda$ is called $L$-\textbf{nondegenerate} if all Reeb orbits with action less than $L$ are nondegenerate. Given an $L$-nondegenerate contact form $\lambda$, a Reeb orbit with action less than $L$ is either elliptic, positive hyperbolic, or negative hyperbolic. An \textbf{orbit set} $\alpha=\{(\alpha_i, m_i)\}$ is a finite set of Reeb orbits, where the $\alpha_i$ are distinct, nondegenerate, irreducible embedded Reeb orbits and the $m_i$ are positive integers. In the rest of the paper, we write an orbit set in the multiplicative notation $\alpha=\Pi_i \alpha_i^{m_i}$ instead. An orbit set $\alpha$ is called an \textbf{ECH generator} if $m_i=1$ whenever $\alpha_i$ is a hyperbolic orbit.
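To illustrate the definitions just given, here is a minimal Python sketch (the orbit names and action values are hypothetical, chosen only for illustration) that checks the ECH generator condition and computes the action of an orbit set, using the additivity $\mathcal{A}_{\lambda}(\Pi_i \alpha_i^{m_i}) = \sum_i m_i \mathcal{A}_{\lambda}(\alpha_i)$.

```python
# Illustrative sketch: an orbit set is a dict {orbit_name: multiplicity}; the
# type and action of each embedded orbit are given separately (made-up data).
def is_ECH_generator(orbit_set, orbit_type):
    # m_i = 1 is required whenever alpha_i is hyperbolic (positive or negative).
    return all(m == 1 or not orbit_type[a].endswith("hyperbolic")
               for a, m in orbit_set.items())

def action(orbit_set, orbit_action):
    # Action is additive over the orbit set: sum of m_i * A(alpha_i).
    return sum(m * orbit_action[a] for a, m in orbit_set.items())

types   = {"e-": "elliptic", "e+": "elliptic", "h1": "positive hyperbolic"}
actions = {"e-": 2.0, "e+": 2.1, "h1": 2.05}
assert is_ECH_generator({"e-": 3, "h1": 1}, types)      # elliptic powers allowed
assert not is_ECH_generator({"h1": 2}, types)           # hyperbolic power not allowed
assert abs(action({"e-": 3, "h1": 1}, actions) - 8.05) < 1e-9
```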
The following definition will be used in the computations of the cobordism maps. In fact, the elliptic Reeb orbits considered in our cases are of one of the following two types. \begin{definition} (see \cite{H5} Definition 4.1) Fix $L>0$. Let $\gamma$ be an embedded elliptic orbit with action $\mathcal{A}_{\lambda}(\gamma) < L$. \begin{itemize} \item $\gamma$ is called $L$-positive elliptic if the rotation number $\theta $ is in $ (0, \frac{\mathcal{A}_{\lambda}(\gamma) }{L}) \mod 1$. \item $\gamma$ is called $L$-negative elliptic if the rotation number $\theta $ is in $ ( -\frac{\mathcal{A}_{\lambda}(\gamma) }{L},0) \mod 1$. \end{itemize} \end{definition} \paragraph{The ECH index} Fix $\Gamma \in H_1(Y, \mathbb{Z})$. Given orbit sets $\alpha=\Pi_i \alpha_i^{m_i}$ and $\beta=\Pi_j \beta_j^{n_j}$ on $Y$ with $[\alpha]=[\beta] =\Gamma$, let $H_2(Y, \alpha ,\beta)$ be the set of 2-chains $Z$ such that $\partial Z = \sum_i m_i \alpha_i - \sum_j n_j \beta_j$, modulo boundaries of 3-chains. An element of $H_2(Y, \alpha ,\beta)$ is called a \textbf{relative homology class}. Note that the set $H_2(Y, \alpha ,\beta)$ is an affine space over $H_2(Y, \mathbb{Z})$. Given $Z \in H_2(Y, \alpha ,\beta)$ and trivializations $\tau$ of $\xi\vert_{\alpha}$ and $\xi \vert_{\beta}$, the ECH index is defined by \begin{equation*} I(\alpha, \beta, Z) := c_{\tau}(Z) + Q_{\tau}(Z) + \sum_i \sum\limits_{p=1}^{m_i} CZ_{\tau}(\alpha_i^{p})- \sum_j \sum\limits_{q=1}^{n_j} CZ_{\tau}(\beta_j^{q}), \end{equation*} where $c_{\tau}(Z)$ and $Q_{\tau}(Z)$ are respectively the relative Chern number and the relative self-intersection number (see \cite{H4} and \cite{H2}), and $CZ_{\tau}$ is the Conley--Zehnder index. The ECH index $I$ depends only on the orbit sets $\alpha$, $\beta$ and the relative homology class $Z$.
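The Conley--Zehnder sums in the ECH index are explicitly computable for elliptic orbits. For an elliptic orbit whose linearized return map is rotation by angle $2\pi\theta$ with respect to $\tau$ (with $\theta$ irrational), the standard formula is $CZ_{\tau}(\gamma^p) = 2\lfloor p\theta \rfloor + 1$; this formula is not derived in this note, but the following Python sketch shows how the sums $\sum_{p=1}^{m} CZ_{\tau}(\gamma^p)$ behave for $L$-positive and $L$-negative orbits.

```python
import math

# Standard formula for an elliptic orbit with rotation number theta (irrational,
# measured with respect to the trivialization tau): CZ_tau(gamma^p) = 2*floor(p*theta) + 1.
def cz_elliptic(theta, p):
    return 2 * math.floor(p * theta) + 1

# Total Conley-Zehnder contribution of gamma^m to the ECH index:
# sum_{p=1}^{m} CZ_tau(gamma^p).
def cz_sum(theta, m):
    return sum(cz_elliptic(theta, p) for p in range(1, m + 1))

# An L-positive example: theta slightly above 0 gives CZ = +1 for each cover.
assert cz_sum(0.01, 3) == 3
# An L-negative example: theta slightly below 0 gives CZ = -1 for each cover.
assert cz_sum(-0.01, 3) == -3
```

For larger rotation numbers the individual terms grow: e.g. $\theta = 0.7$ gives $CZ_{\tau}(\gamma), CZ_{\tau}(\gamma^2), CZ_{\tau}(\gamma^3) = 1, 3, 5$.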
\paragraph{Holomorphic currents} An almost complex structure $J$ on $(\mathbb{R} \times Y, d(e^s \lambda))$ is called \textbf{admissible} if $J$ is $\mathbb{R}$-invariant, $J(\partial_s) =R$, $J(\xi) =\xi$ and $J \vert_{\xi}$ is $d \lambda$-compatible. We denote the set of admissible almost complex structures by $\mathcal{J}(Y, \lambda)$. A \textbf{$J$-holomorphic current} from $\alpha$ to $\beta$ is a formal sum $\mathcal{C} =\sum_a d_a C_a$, where the $C_a$ are distinct irreducible simple holomorphic curves and the $d_a$ are positive integers, such that $\mathcal{C}$ is asymptotic to $\alpha$ as a current as $s \to \infty $ and asymptotic to $\beta$ as a current as $s \to -\infty.$ Fix $Z \in H_2(Y, \alpha, \beta)$. Let $\mathcal{M}^J(\alpha, \beta, Z)$ denote the moduli space of holomorphic currents with relative homology class $Z$. Let $C$ be a $J$-holomorphic curve in $\mathbb{R} \times Y $ whose positive ends are asymptotic to $\alpha= \Pi_i \alpha_i^{m_i}$ and whose negative ends are asymptotic to $\beta= \Pi_j \beta_j^{n_j}$. For each $i$, let $k_i$ denote the number of ends of $C$ at $\alpha_i$, and let $\{p_{ia}\}^{k_i}_{a=1}$ denote their multiplicities. Likewise, for each $j$, let $l_j$ denote the number of ends of $C$ at $\beta_j$, and let $\{q_{jb}\}_{b=1}^{l_j}$ denote their multiplicities. The set of numbers $\{p_{ia}\}^{k_i}_{a=1}$ modulo order is called the \textbf{partition} of $C$ at $\alpha_i$. The \textbf{Fredholm index} of $C$ is defined by \begin{eqnarray*} {\rm ind} C := -\chi(C) + 2 c_{\tau}(C) + \sum\limits_i \sum\limits_{a=1}^{k_i} \mu_{\tau} (\alpha^{p_{ia}}_i) - \sum\limits_j \sum\limits_{b=1}^{l_j} \mu_{\tau} (\beta^{q_{jb}}_j), \end{eqnarray*} where $c_{\tau}(C)$ is the relative Chern number of $\xi$ over $C$ and $\mu_{\tau}$ is the Conley--Zehnder index. By \cite{H1, H2}, if $C$ is a simple holomorphic curve, then we have $I(C) \ge {\rm ind}(C)$. Moreover, equality holds if and only if $C$ is embedded and satisfies the \textbf{ECH partition conditions}. The general definition of the ECH partition conditions is quite complicated.
Here we only present the examples that will be considered in our proof. Suppose that $C$ has no negative ends and has positive ends at covers of a Reeb orbit $\gamma$ with total multiplicity $m$. If $C$ satisfies the {ECH partition conditions}, then the partition at $\gamma$ is \begin{itemize} \item $(1, ..., 1)$ if $\gamma$ is $L$-positive elliptic and $m$ satisfies $\mathcal{A}_{\lambda}(\gamma^m) <L$; \item $(m)$ if $\gamma$ is $L$-negative elliptic and $m$ satisfies $\mathcal{A}_{\lambda}(\gamma^m) <L$; \item $(1, ..., 1)$ if $\gamma$ is positive hyperbolic. \end{itemize} \paragraph{ECH group} Fix a class $\Gamma \in H_1(Y, \mathbb{Z})$. The chain group $ECC(Y, \lambda, \Gamma)$ is the free $\mathbb{F}$-module generated by the ECH generators with homology class $\Gamma$. Fix a generic $J \in \mathcal{J}(Y, \lambda)$. The differential is defined by \begin{equation*} \langle\partial \alpha, \beta\rangle: =\sum_{Z \in H_2(Y, \alpha, \beta), I(Z) =1} \# (\mathcal{M}^{J}(\alpha, \beta, Z)/ \mathbb{R}). \end{equation*} Then $ECH(Y, \lambda, \Gamma)$ is the homology of the chain complex $(ECC(Y, \lambda, \Gamma), \partial)$. Given $L >0$, define $ECC^L(Y, \lambda, \Gamma)$ to be the submodule generated by the ECH generators with $\mathcal{A}_{\lambda}<L$. Note that the differential $\partial$ decreases the action. Therefore, $ECC^L(Y, \lambda, \Gamma)$ is a subcomplex and its homology is well defined, denoted by $ECH^L(Y, \lambda, \Gamma)$. The group $ECH^L(Y, \lambda, \Gamma)$ is called \textbf{filtered ECH}. The inclusion induces a homomorphism \begin{equation*} i_L : ECH^L(Y, \lambda, \Gamma) \to ECH(Y, \lambda, \Gamma). \end{equation*} \paragraph{ECH spectrum} ECH is also equipped with a homomorphism $$U: ECH(Y, \lambda, \Gamma) \to ECH(Y, \lambda, \Gamma),$$ called the \textbf{$U$ map}. It is defined by counting $I=2$ holomorphic currents passing through a fixed point. If $b_1(Y)=0$, then there is only one element $Z \in H_2(Y, \alpha, \beta)$. So we write $I(\alpha, \beta)=I(Z)$ instead.
Then we define a $\mathbb{Z}$-grading on $ ECH(Y, \lambda, 0)$ to be the ECH index relative to the empty set $\emptyset$. More precisely, for any orbit set $\alpha$ with $[\alpha]=0$, define its grading by \begin{equation} \label{eq10} gr(\alpha):= I(\alpha, \emptyset). \end{equation} In the general case, the $\mathbb{Z}$-grading is well defined provided that $c_1(\xi)+2PD(\Gamma)$ is torsion. The $U$ map is a degree $-2$ map with respect to this grading. \begin{remark} In the case that $Y$ is a prequantization bundle, by Lemma 3.11 of \cite{NW}, the ECH index $I(\alpha, \beta, Z)$ is independent of $Z$. If $[\alpha]=0$, then the grading (\ref{eq10}) is well defined. \end{remark} There is a canonical element $[\emptyset] \in ECH(Y, \lambda, 0)$ which is represented by the empty orbit set. The class $[\emptyset] $ is called the \textbf{contact invariant.} We remark that $[\emptyset] \ne 0$ if $(Y, \lambda)$ admits a symplectic filling. So the contact invariant of the prequantization bundle $\pi: Y \to \Sigma$ is nonzero. Assume that $\lambda$ is nondegenerate. For $k \in \mathbb{Z}_{\ge 1}$, the \textbf{$k$-th ECH capacity} is defined by \begin{equation*} c_k(Y, \lambda) : = \inf\{L \in \mathbb{R}\vert \exists \sigma \in ECH^L(Y, \lambda, 0) \mbox{ such that } U^k (\sigma)= [\emptyset]\}. \end{equation*} If $\lambda$ is degenerate, define $c_k(Y, \lambda) : = \lim_{n \to \infty} c_k(Y, f_n\lambda), $ where $f_n:Y\to \mathbb{R}_{>0}$ is a sequence of smooth functions such that $f_n\lambda$ is nondegenerate and $f_n$ converges to $1$ in the $C^0$ topology. \paragraph{Cobordism maps} Let $(X, \Omega_X)$ be a weakly exact symplectic cobordism from $(Y_+, \lambda_+)$ to $(Y_-, \lambda_-)$. Let $(\overline{X}, \Omega_X)$ denote the symplectic completion. The ECH index, Fredholm index and holomorphic currents can be defined similarly in the cobordism setting (see \cite{H2}). Also, the ECH index inequality still holds.
Fix a relative class $A \in H_2(X, \partial X, \mathbb{Z})$ such that $\partial A =\Gamma_+ -\Gamma_-$. Suppose that $\lambda_{\pm}$ are $\max\{L, L+\rho(A)\}$-nondegenerate. Here $\rho: H_2(X,\partial X, \mathbb{Z}) \to \mathbb{R}$ is the homomorphism defined by $\rho(A) : =\int_A \Omega_X -\int_{\partial{A_+}} \lambda_+ + \int_{\partial{A_-}} \lambda_-$. Hutchings and Taubes define a canonical homomorphism \cite{HT} \begin{equation*} ECH^L(X, \Omega_X, A): ECH^L(Y_+, \lambda_+, \Gamma_+) \to ECH^{L+ \rho(A)}(Y_-, \lambda_-, \Gamma_-). \end{equation*} This homomorphism is called a \textbf{cobordism map}. If $\lambda_{\pm}$ are nondegenerate, then we can take $L \to \infty$ and obtain a cobordism map on the whole ECH \begin{equation*} ECH(X, \Omega_X, A): ECH(Y_+, \lambda_+, \Gamma_+) \to ECH(Y_-, \lambda_-, \Gamma_-). \end{equation*} The cobordism map $ECH^L(X, \Omega_X, A)$ is defined by counting the gauge classes of solutions to the Seiberg-Witten equations. We will not provide any details about Seiberg-Witten theory; we refer the readers to the book of P. Kronheimer and T. Mrowka \cite{KM}. Hutchings and Taubes show that $ECH^L(X, \Omega_X, A)$ satisfies the holomorphic curve axioms (see \cite{HT}). Roughly speaking, this means that if the cobordism map is nonvanishing, then there exists a holomorphic current. In some special cases, C. Gerig enhances the holomorphic curve axioms: he shows that there is a 1-1 correspondence between the holomorphic currents and the gauge classes of solutions to the Seiberg-Witten equations \cite{CG, CG2}. In other words, the cobordism map is actually defined by counting holomorphic curves. We will show that this is the case in our situations. \section{Computations of the cobordism maps} \label{section3} In this section, we prove Theorem \ref{thm1}. We \textbf{do not} assume that $\Sigma$ is the sphere or the torus.
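Before turning to the computations, note that the closed formula of Theorem \ref{thm0} is easy to evaluate directly. The following Python sketch (a numerical illustration, not part of the proof) finds, for given $k$ and $|e|$, the unique nonnegative integer $d$ with $2d + d|e|(d-1) \le 2k \le 2d + d|e|(d+1)$ and returns $c_k(Y, \lambda) = 2d|e|$.

```python
# Numerical evaluation of the formula in Theorem thm0 (sphere case): c_k = 2d|e|,
# where d >= 0 is the unique integer with 2d + d|e|(d-1) <= 2k <= 2d + d|e|(d+1).
def sphere_capacity(k, abs_e):
    d = 0
    while not (2*d + d*abs_e*(d - 1) <= 2*k <= 2*d + d*abs_e*(d + 1)):
        d += 1
    return 2 * d * abs_e

# For |e| = 1 the contact form is twice the standard one on the three-sphere,
# so the sequence is twice the ECH capacities 0, 1, 1, 2, 2, 2, ... of the unit ball.
assert [sphere_capacity(k, 1) for k in range(6)] == [0, 2, 2, 4, 4, 4]
```

The loop terminates because, as noted after Theorem \ref{thm0}, the intervals $[2d + d|e|(d-1),\, 2d + d|e|(d+1)]$ of even integers cover all even nonnegative integers.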
\subsection{Perturbations}\label{section31} Before we go ahead, we need to clarify the perturbations made on the contact form $\lambda$ and the symplectic form $\Omega$. \paragraph{Morse-Bott perturbations} Note that the contact form $\lambda$ is Morse--Bott. The Reeb orbits are iterations of the fibers of $\pi: Y \to \Sigma$. Following Farris, Nelson and Weiler's approach \cite{Fa, NW}, we perturb the contact form by a perfect Morse function $H: \Sigma \to \mathbb{R}$. More precisely, define \begin{equation*} \lambda_{\varepsilon}: = (1+{\varepsilon}\pi^*H) \lambda, \end{equation*} where $0 < \varepsilon \ll1 $ is a small fixed number. Let $e_-$, $e_+$ and $\{h_i\}_{i=1}^{2g}$ denote the fibers over the minimum, the maximum and the saddle points of $H$ respectively. These are simple Reeb orbits of $\lambda_{\varepsilon}$. Moreover, $e_{\pm}$ are elliptic orbits and the $h_i$ are positive hyperbolic orbits. For any $0<\varepsilon \ll1$, there exists a constant $L_{\varepsilon}$ such that $ \lambda_{\varepsilon}$ is $L_{\varepsilon}$-nondegenerate and the covers of $e_{\pm}$ and $\{h_i\}_{i=1}^{2g}$ are the only Reeb orbits of $(Y, \lambda_{\varepsilon})$ with action less than $L_{\varepsilon}$. According to the computations in Lemma 3.9 of \cite{NW}, $e_+$ is $L_{\varepsilon}$-positive elliptic and $e_-$ is $L_{\varepsilon}$-negative elliptic. We remark that $L_{\varepsilon} \to \infty$ as $\varepsilon \to 0$. For an orbit set $\alpha=e_{+}^{m_+}h_1^{m_1}\cdots h_{2g}^{m_{2g}} e_-^{m_-}$, its action is \begin{equation*} \mathcal{A}_{\lambda_{\varepsilon}}(\alpha) =2M + \varepsilon \left(m_+ \pi^*H(e_+) + m_- \pi^*H(e_-)+ \sum_{i=1}^{2g} m_i \pi^*H(h_i) \right), \end{equation*} where $M =m_- + m_++ \sum_{i=1}^{2g} m_i.$ By Lemma 3.7 in \cite{NW}, we have $H_1(Y, \mathbb{Z}) = \mathbb{Z}^{2g} \oplus \mathbb{Z}_{|e|}$. The homology class of each fiber of $\pi: Y \to \Sigma$ is $1 \mod |e|$ in the $\mathbb{Z}_{|e|}$ summand of $H_1(Y, \mathbb{Z})$.
Therefore, $M =d|e| + \Gamma$ for some integer $d\ge 0 $, where $\Gamma =[\alpha] \in \mathbb{Z}_{|e|}$ is identified with its representative in $\{0, 1, \dots, |e|-1\}$. \paragraph{$(L, \delta)$-flat approximation} Fix an admissible almost complex structure $J \in \mathcal{J}(Y, \lambda_{\varepsilon})$. To ensure that the ECH generators are in 1-1 correspondence with the solutions to the Seiberg-Witten equations, we need to perturb $(\lambda_{\varepsilon}, J)$ such that it has certain standard forms in $\delta$-neighborhoods of the Reeb orbits with action less than $L$. Moreover, the modifications do not change the ECH chain complex. The result of the perturbation of $(\lambda_{\varepsilon}, J)$ is called an \textbf{$(L, \delta)$-flat approximation}; it was introduced by Taubes \cite{T2}. The $(L, \delta)$-flat approximation is close to the original pair in the $C^0$ topology. Suppose that $(\lambda, J)$ is $(L, \delta)$-flat. Let $\gamma$ be an elliptic orbit with $\mathcal{A}_{\lambda}(\gamma) =l_{\gamma} <L$. The standard form of $(\lambda, J)$ near $\gamma$ means that there exists a tubular neighbourhood $S_t^1 \times D_z$ of $\gamma$ such that: \begin{itemize} \item \begin{equation*} \frac{2\pi}{l_{\gamma}} \lambda = (1- \vartheta |z|^2)dt + \frac{i}{2}(z d\bar{z} -\bar{z}dz), \end{equation*} where $\vartheta$ is the rotation number of $\gamma$. \item $T^{1, 0}(\mathbb{R}_s \times Y)$ is spanned by $ds + i\lambda $ and $dz -i\vartheta z dt$.
\end{itemize} If $(\lambda, J)$ is $(L, \delta)$-flat, then we have a canonical isomorphism (Theorem 4.2 of \cite{T2}) \begin{equation} \label{eq16} \Psi: ECC^L_*(Y, \lambda, \Gamma) \to CM_L^{-*}(Y, \lambda, \mathfrak{s}_{\Gamma}) \end{equation} between the ECH chain complex and the Seiberg-Witten chain complex, where $CM_L^{-*}(Y, \lambda, \mathfrak{s}_{\Gamma})$ is the Seiberg-Witten chain complex defined in \cite{HT}, and $\mathfrak{s}_{\Gamma}$ is the spin-c structure such that $c_1(\mathfrak{s}_{\Gamma}) = c_1(\xi)+ 2PD(\Gamma)$. By Lemma 3.6 of \cite{HT}, there exists a preferred homotopy $\{(\lambda_{\varepsilon}^t, J^t)\}_{t\in [0, 1]}$ such that $(\lambda_{\varepsilon}^0, J^0) = (\lambda_{\varepsilon}, J)$ and $(\lambda_{\varepsilon}^1, J^1)$ is $(L, \delta)$-flat. Moreover, $(\lambda_{\varepsilon}^t, J^t)$ is independent of $t$ outside the $\delta$-neighborhoods of the Reeb orbits with action less than $L$. For gluing purposes, we assume that $\{(\lambda_{\varepsilon}^t, J^t)\}_{t\in [0, 1]}$ is independent of $t$ near $t =0, 1.$ Now we describe the preferred homotopy near $e_-$. We make a further choice of the connection $A_{\nabla}$ and the Morse function $H$ as follows. Let $U_z$ be a neighbourhood of the minimum of $H$ such that $\omega_{\Sigma} \vert_U =\frac{i}{2\pi}dz \wedge d\bar{z}$. Fix a local trivialization $U_z \times \mathbb{C}_w$. We choose the connection $A_{\nabla}$ and the Morse function $H$ such that \begin{equation} \label{eq8} A_{\nabla} \vert_U=\frac{1}{2}(\bar{z} dz - zd\bar{z} ) \mbox{ and } H=|z|^2. \end{equation} By (\ref{eq8}), we have \begin{equation*} \pi \lambda_{\varepsilon} = (1+\varepsilon |z|^2) d\theta + \frac{i}{2}(1+ \varepsilon |z|^2) (z d\bar{z}-\bar{z} dz). \end{equation*} Note that $ \lambda_{\varepsilon}$ is very close to the standard form except for the extra term $ \varepsilon \frac{i |z|^2}{2 } (z d\bar{z}-\bar{z} dz).$ Now we can write down the homotopy $\lambda_{\varepsilon}^t$ explicitly near $e_-$.
Let $\chi :\mathbb{R} \to \mathbb{R}$ be a cutoff function such that $\chi=1$ when $s \ge \frac{3}{4}$ and $\chi=0$ when $s \le \frac{1}{2}$. Define \begin{equation} \label{eq14} \lambda^t_{\varepsilon} =\frac{1}{\pi} (1+\varepsilon |z|^2) d\theta + \frac{i}{2\pi} (z d\bar{z}-\bar{z} dz) + \frac{i \varepsilon}{2\pi} \left( \chi(t) \chi(\frac{|z|}{2\delta}) + 1- \chi(t) \right) |z|^2 (z d\bar{z}-\bar{z} dz) . \end{equation} \paragraph{Symplectic completion of $(DE, \Omega)$} We regard $E$ as a symplectic completion of $(DE, \Omega)$ in the following way. Define $r:= \frac{1}{2}(1 + \rho^2)$. Then we have a symplectomorphism \begin{equation}\label{eq1} (E-\Sigma, \Omega) \cong (\mathbb{R}_{(r>\frac{1}{2})} \times Y, d(r\lambda)). \end{equation} Sometimes we identify the conical end $(\mathbb{R}_r \times Y, d(r\lambda))$ with the cylindrical end $(\mathbb{R}_s \times Y, d(e^s\lambda))$ via the change of coordinates $r=e^s. $ We modify the symplectic form $\Omega$ so that it is adapted to the perturbation of $\lambda$. Let $\varepsilon(r)$ be a nondecreasing cutoff function such that $\varepsilon(r)=\varepsilon \ll 1$ when $r\ge \frac{4}{5}$ and $\varepsilon(r)=0$ when $r \le \frac{3}{4}$. Define $\lambda_{\varepsilon(r)}: =(1+ \varepsilon(r) \pi^*H) \lambda. $ Under the identification (\ref{eq1}), we define \begin{equation} \label{eq15} \Omega_{\varepsilon} = \begin{cases} \Omega, & r < \frac{3}{4}, \\ d(r \lambda_{\varepsilon(r)}) , & \frac{3}{4} \le r \le e^{-\epsilon},\\ d (r \lambda^{1+\epsilon^{-1}\log r}_{\varepsilon} ), & e^{-\epsilon} \le r\le 1,\\ d (r \lambda^{1}_{\varepsilon} ), & r\ge 1, \end{cases} \end{equation} where $\lambda^{t}_{\varepsilon}$ is the homotopy between $\lambda_{\varepsilon}$ and the $(L, \delta)$-flat approximation. If $\varepsilon$ is small enough, then $\Omega_{\varepsilon}$ is still symplectic. To simplify the notation, we still use $\lambda_{\varepsilon}$ to denote $\lambda_{\varepsilon}^1$.
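As a remark (a routine verification we include for completeness), the identification (\ref{eq1}) can be checked directly. Since the Liouville vector field is $Z=\frac{1+\rho^2}{2 \rho^2}\rho \partial_{\rho}$, the induced contact form is $\lambda = (\iota_Z \Omega)\vert_Y = 2\alpha_{\nabla}\vert_Y$, and under $r= \frac{1}{2}(1 + \rho^2)$ we have \begin{equation*} d(r\lambda) = d\left((1+\rho^2) \alpha_{\nabla}\right) = 2\rho \, d\rho \wedge \alpha_{\nabla} + (1+\rho^2) \, d\alpha_{\nabla} = \Omega \end{equation*} on $E-\Sigma$, which agrees term by term with the expression for $\Omega$ away from the zero section.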
\paragraph{Almost complex structures on $E$} An $\Omega_{\varepsilon}$-compatible almost complex structure $J$ is \textbf{cobordism admissible} if $J=J_+$ for some $J_+ \in \mathcal{J}(Y, \lambda_{\varepsilon})$ over the cylindrical end. We choose $J$ such that \begin{enumerate} [label=\textbf{ J.\arabic*}] \item \label{J1} $(\lambda_{\varepsilon}, J_+)\vert_{\mathbb{R}_{s \ge 0} \times Y}$ is an $(L, \delta)$-flat approximation. \item \label{J2} Recall the neighbourhood $U_z \times \mathbb{C}_w$ of $e_-$. $J(r \partial_r) = f(r)\partial_{\theta}$ and $J(\partial_{z}) = i \partial_z$ along the fiber $\{0\} \times \mathbb{C}$, where $f(r)$ is a positive function such that $f(r) = 1$ when $r \ge e^{-\epsilon}$ and $f(r) = \frac{r}{2r-1}$ when $r$ is close to $\frac{1}{2}$. The latter assumption on $f$ is equivalent to $J(\rho \partial_{\rho}) =\partial_{\theta}$; this implies that $J$ is well defined on the whole fiber $\{0\} \times \mathbb{C}$. \item The zero section $\Sigma$ is $J$-holomorphic. \end{enumerate} By the constructions (\ref{eq14}) and (\ref{eq15}), a choice of $J$ satisfying \ref{J1} and \ref{J2} is always possible. An almost complex structure $J$ is called \textbf{generic} if all simple holomorphic curves, except for the closed ones, are Fredholm regular. We assume that $J$ is generic, unless stated otherwise. With this choice of $J$, the fiber $C_{e_-} =\{0\} \times \mathbb{C}$ is holomorphic because $J(TC_{e_-} ) \subset TC_{e_-} .$ Moreover, $C_{e_-}$ is asymptotic to $e_-$ under the identification (\ref{eq1}). We remark that $C_{e_-}$ is Fredholm regular for any $J$. This follows directly from C. Wendl's automatic transversality theorem \cite{Wen} and the index computation in Lemma \ref{lem2}. The fiber $C_{e_-}$ plays the same role as the horizontal section in \cite{GHC1}. \subsection{Moduli space of holomorphic currents} \paragraph{Computing the ECH index} The first task in studying the holomorphic currents is to compute their index.
The computations here follow an argument similar to that in \cite{GHC1}, where the author computes the ECH index of relative homology classes in an elementary Lefschetz fibration. This approach coincides with Nelson and Weiler's methods (see Remark \ref{remark2}). Let $\alpha=e_{+}^{m_+}h_1^{m_1} \cdots h_{2g}^{m_{2g}} e_-^{m_-}$ be an orbit set. Let $M:=m_+ + m_1 + \cdots + m_{2g}+ m_-$. Let $H_2(DE, \alpha)$ denote the set of relative homology classes. It is an affine space over $H_2(DE, \mathbb{Z})$. Let $C_{e_{\pm}}$ and $C_{h_i}$ denote the fibers over the critical points corresponding to $e_{\pm}$ and $h_i$. For $\alpha$, we define a relative homology class $Z_{\alpha}$ represented by \begin{equation*} m_+ C_{e_+} + \sum_{i=1}^{2g}m_i C_{h_i}+ m_- C_{e_-}. \end{equation*} Since $H_2(DE, \mathbb{Z}) \cong H_2(\Sigma, \mathbb{Z})$ is generated by $\Sigma$, any other relative homology class can be written as $Z_{\alpha} + d [\Sigma]$. Fix a relative homology class $A \in H_2(DE, Y, \mathbb{Z})$ and consider the subset $\{Z \in H_2(DE, \alpha): [Z] =A \}$. Note that it is an affine space over $\ker j_*$, where $j_*$ is the map in the following exact sequence \begin{equation*} \begin{split} &\to H_2(Y, \mathbb{Z}) \xrightarrow{ i_*}H_2(DE, \mathbb{Z}) \xrightarrow{ j_*} H_2(DE, Y, \mathbb{Z}) \xrightarrow{ \partial_*} H_1(Y, \mathbb{Z}) \to. \end{split} \end{equation*} By Lemma 3.7 in \cite{NW}, $H_2(Y, \mathbb{Z}) $ is spanned by the classes represented by $\pi^{-1}(\eta)$, where $\eta$ is a simple closed curve in $\Sigma$. The surface $\pi^{-1}(\eta)$ is the boundary of $\pi_E^{-1}(\eta) \cap DE$ in $DE$. Hence, $i_*$ is the zero map and $\ker j_*=0$. Therefore, a relative class $A \in H_2(DE, Y , \mathbb{Z})$ and an orbit set $\alpha$ determine a unique relative homology class $Z_{\alpha, A} \in H_2(DE, \alpha)$.
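For instance, take $\alpha = e_-^M$ and $A=[Z_{e_-^M}]$. The classes in $H_2(DE, e_-^M)$ are exactly $Z_{e_-^M} + d[\Sigma]$ with $d \in \mathbb{Z}$, and since $\ker j_* =0$, their images $A + d \, j_*[\Sigma]$ in $H_2(DE, Y, \mathbb{Z})$ are pairwise distinct. Hence $Z_{e_-^M, A} = Z_{e_-^M}$; that is, fixing $A$ forces $d=0$. This is precisely the class appearing in Lemma \ref{lem3} below.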
\begin{lemma}\label{lem1} Given an orbit set $\alpha=e_{+}^{m_+}h_1^{m_1}\cdots h_{2g}^{m_{2g}} e_-^{m_-}$, the ECH index of a relative class $Z_{\alpha}+ d[\Sigma] \in H_2(DE, \alpha)$ is \begin{equation*} I(Z_{\alpha}+d[\Sigma])=M+m_+-m_- + 2dM +d^2 e +de + d\chi(\Sigma). \end{equation*} \end{lemma} \begin{proof} Let $p$ be a critical point of $H$, $\gamma_p=\pi^{-1}(p)$ and $C_p=\pi^{-1}_E(p)$. We fix a constant trivialization as in \cite{NW}. More precisely, a trivialization of $T_p\Sigma$ can be lifted to a trivialization of $\xi\vert_{\gamma_p(t)}$. Under this trivialization, by Lemma 3.9 of \cite{NW}, we have \begin{equation*} CZ_{\tau}(\gamma_p^k) = ind_p H-1, \end{equation*} where $ind_p H$ is the Morse index at $p$. Regard the connection $A_{\nabla}$ as a map $A_{\nabla}: TE \to E$. Then it induces a splitting $TE=TE^{hor}\oplus TE^{vert}$, where $TE^{hor} : = \ker A_{\nabla}$ and $TE^{vert} := \ker d\pi_E$. The trivialization of $T_p\Sigma$ can also be lifted to $T^{hor} E \vert_{C_p}$. In particular, $c_{\tau}(T^{hor} E \vert_{C_p})=0$. Note that $T^{hor} E \vert_{C_p}$ can be identified with the normal bundle $N_{C_p}$ of $C_p$. By the definition of $Q_{\tau}$, we obtain $Q_{\tau}(C_p)=c_{\tau}(N_{C_p})=0$. The section $r \partial_r -i\partial_{\theta}$ can be extended to a section $\psi$ of $T^{vert} E \vert_{C_p} =TC_p$. We choose the extension so that $\psi= \rho \partial_{\rho}- i\partial_{\theta}$ near $r =\frac{1}{2}$, and so that $\psi$ is nonvanishing except at the zero section. Hence, the relative Chern number is $$c_{\tau}( T^{vert} E \vert_{C_p})= \#\psi^{-1}(0) = \#( \rho \partial_{\rho}- i\partial_{\theta})^{-1}(0)=1.$$ Therefore, we have \begin{equation*} I(C_p)=c_{\tau}(TE\vert_{C_p}) + Q_{\tau} (C_p) + CZ_{\tau}(\gamma_p)=ind_p H. \end{equation*} For $p\ne q$, the fibers $C_p, C_q$ are obviously disjoint. Hence, $Q_{\tau}(C_p, C_q)=0$.
Therefore, the formula for $I(Z_{\alpha})$ follows from the facts that the relative Chern number is additive and $Q_{\tau}$ is quadratic. By the definition of the ECH index, we have \begin{equation}\label{eq2} I(Z_{\alpha}+ d[\Sigma]) =I(Z_{\alpha}) + 2d Z_{\alpha} \cdot \Sigma +I(d\Sigma)= M-m_- + m_+ + 2dM +I(d\Sigma). \end{equation} Under the identification $T^{hor}E=\pi_E^* T\Sigma$ and $T^{vert}E=\pi_E^*E$, we have \begin{equation*} <c_1(TE), \Sigma>=<c_1(T^{hor}E), \Sigma> + <c_1(T^{vert}E), \Sigma>=\chi(\Sigma) + e. \end{equation*} The self-intersection number of $\Sigma$ is \begin{equation*} \Sigma\cdot \Sigma =\#\psi^{-1}(0)=<c_1(E), \Sigma>=e, \end{equation*} where $\psi$ is a generic section of $E$. In sum, we have \begin{equation}\label{eq3} I(d\Sigma)=d\chi(\Sigma) + de + d^2e. \end{equation} Combining (\ref{eq2}) and (\ref{eq3}), we get the result. \end{proof} \begin{remark} \label{remark2} Our computations here are equivalent to Nelson and Weiler's index computations \cite{NW} in the following sense. Let $I(\alpha, \beta)$ denote the index formula in Proposition 3.5 of \cite{NW}. One can use Lemma \ref{lem1} and the additivity of the ECH index to recover the index formula $I(\alpha, \beta)$ as follows. Let $Z_{\alpha} + d_{\alpha} [\Sigma] \in H_2(DE, \alpha)$, $Z_{\beta} + d_{\beta} [\Sigma] \in H_2(DE, \beta)$ and $Z \in H_2(Y, \alpha, \beta)$ be relative homology classes such that $Z_{\alpha} + d_{\alpha} [\Sigma] =Z \#(Z_{\beta} + d_{\beta} [\Sigma])$. By an argument similar to Proposition 4.4 of \cite{NW} (see (\ref{eq12}) for the details in the current setting), one can show that $d_{\alpha} -d_{\beta} =(M-N)/|e|$. By direct computations, we have $$I(\alpha, \beta) =I(Z) = I(Z_{\alpha} + d_{\alpha} [\Sigma] ) -I(Z_{\beta} + d_{\beta} [\Sigma]). $$ Conversely, we can also deduce Lemma \ref{lem1} easily by applying Nelson and Weiler's index computations as follows. Suppose that $[\alpha]=0$ and $M=k|e|$. Let $Z_0=Z_{\alpha} +k[\Sigma]$.
Note that $Z_0 \cdot [\Sigma]=0$. We can represent $Z_0$ by a surface $S_0$ which is disjoint from $\Sigma$. Under the identification (\ref{eq1}), $S_0$ represents a relative homology class in $H_2(Y, \alpha, \emptyset)$. In particular, we have $I(Z_0)=I(S_0)=I(\alpha, \emptyset)$. Then one can apply the index ambiguity formula (\ref{eq2}) to derive the result for a general $Z=Z_0 +d[\Sigma]$. \end{remark} \begin{lemma} \label{lem2} Let $C$ be a holomorphic curve with relative class $[C]=Z_{\alpha}+ d[\Sigma] $. Then the Fredholm index is \begin{equation} ind C=2g(C)-2 + h(C) + 2e_+(C)+ 2M+2d\chi(\Sigma) +2de, \end{equation} where $h(C)$ is the number of ends at hyperbolic orbits and $e_+(C)$ is the number of ends at covers of $ e_+$. \end{lemma} \begin{proof} The proof follows directly from the definition and the calculations in Lemma \ref{lem1}. \end{proof} \paragraph{Holomorphic currents without closed components} We first study the case where the holomorphic current $\mathcal{C}$ contains no closed components. Also, we assume that the holomorphic curves are asymptotic to orbit sets with $\mathcal{A}_{\lambda_{\varepsilon}} < L_{\varepsilon}$, unless stated otherwise. For a holomorphic curve in $E$, we define its degree, which is an analog of Definition 4.1 in \cite{NW}. Let $\mathcal{C} \in \mathcal{M}^J(\alpha)$ be a holomorphic current represented by a holomorphic map $ u: \dot{F} \to E$, where $\dot{F} =F -\Gamma $, $F$ is a closed Riemann surface (possibly disconnected) and $\Gamma$ is the set of punctures. Since $\pi_E \circ u$ maps the punctures to the critical points of $H$, we can extend $\pi_E \circ u$ to a map $\pi_E \circ u: F \to \Sigma$. Then we have a well-defined degree $deg(\pi_E \circ u)$. Define $deg(\mathcal{C}) :=deg(\pi_E \circ u)$; it is called the \textbf{degree} of $\mathcal{C}$. Alternatively, we can define $deg(\mathcal{C})$ to be the unique integer such that $[\mathcal{C}] =Z_{\alpha} +deg(\mathcal{C}) [\Sigma]$.
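As a quick consistency check of Lemmas \ref{lem1} and \ref{lem2}, take $\alpha = e_-^M$ and the class $Z_{e_-^M}$, so that $d=0$, $m_-=M$ and $m_+=m_1=\cdots=m_{2g}=0$. Lemma \ref{lem1} gives \begin{equation*} I(Z_{e_-^M}) = M + 0 - M = 0, \end{equation*} and for the fiber $C_{e_-}$, a plane with $M=1$, $g(C)=h(C)=e_+(C)=0$ and $d=0$, Lemma \ref{lem2} gives $ind C_{e_-} = -2+2 = 0$. These values agree with the equalities $I(MC_{e_-})=0$ and $ind C_{e_-}=0$ used in Lemma \ref{lem3}.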
\begin{lemma} \label{lem11} For a generic almost complex structure $J$, let $\mathcal{C}$ be a $J$-holomorphic current without closed components. Then we have $deg(\mathcal{C}) \ge 0$. \end{lemma} \begin{proof} Write $\mathcal{C}=\sum_a d_a C_a.$ Since $deg(\mathcal{C}) = \sum_ad_adeg(C_a)$, it suffices to prove the conclusion for an irreducible simple holomorphic curve $C$ with at least one end. Let $d=deg(C)$. Assume that $d \le -1$. Then Lemma \ref{lem1} implies that \begin{equation*} I(C) \le |d|(1-|d|)|e| + d\chi(\Sigma) \le d \chi(\Sigma). \end{equation*} If $\chi(\Sigma)=2$, then $I(C) \le -2$. If $\chi(\Sigma)\le 0$, then $ind C \ge 2d\chi(\Sigma) + 2de> d\chi(\Sigma)$ by Lemma \ref{lem2}. In both cases, this violates the ECH inequality $I(C) \ge ind C \ge 0$. \end{proof} By the above lemma, we assume that $deg(C)\ge 0$ throughout. To deal with holomorphic currents with multiply covered components, we need the following self-intersection number, which appears in the ECH inequality. \begin{definition}[Definition 4.7 \cite{H5}] For two simple holomorphic curves $C, C'$ which are asymptotic to orbit sets with action less than $L$, define an integer $C \star C'$ as follows. \begin{itemize} \item If $C$ and $C'$ are distinct, then $C \star C'$ is the algebraic count of intersections of $C$ and $C'$. By intersection positivity, we have $C \star C' \ge 0$, with equality if and only if $C$ and $C'$ are disjoint. \item If $C$ and $C'$ are the same curve, then define \begin{equation*} 2C\star C=2g(C)-2+ h(C) + ind C + 2e_L(C) + 4 \delta(C), \end{equation*} where $e_L(C)$ is the total multiplicity of all elliptic orbits in $\alpha$ that are $L$-negative, and $\delta (C)$ is the count of singularities of $C$ with positive integer weights. We have $\delta(C) \ge 0$, with equality if and only if $C$ is embedded. \end{itemize} \end{definition} Let $\mathcal{C}=\sum_{a} d_a C_a$ and $\mathcal{C}'=\sum_b d_b' C_b'$.
By Proposition 4.8 of \cite{H5}, we have \begin{equation} \label{eq11} I(\mathcal{C}+ \mathcal{C}') \ge I(\mathcal{C}) +I(\mathcal{C}') +2 \mathcal{C} \star \mathcal{C'}, \end{equation} where $\mathcal{C} \star \mathcal{C'} = \sum_a \sum_b d_a d_b' C_a \star C_b'. $ \begin{lemma} \label{lem9} Let $C$ be an irreducible simple holomorphic curve with at least one end. If $C \star C < 0$, then $I(kC)\ge 2$ for any $k \ge 1$. In particular, $I(\mathcal{C}) \ge 0$ for any holomorphic current $\mathcal{C}$ without closed components. \end{lemma} \begin{proof} Assume that $C \star C <0$. By Lemma \ref{lem2}, we know that $h(C)+ ind C$ is a nonnegative even integer. Therefore, we have $g(C)=h(C) =ind C =e_L(C) =\delta(C)=0$. The condition $h(C)=e_L(C)=0$ forces $M=m_+$. By Lemma \ref{lem2}, $ind C=0$ implies that \begin{equation} \label{eq4} M=d|e|-d\chi(\Sigma)+ 1 -e_+(C). \end{equation} Write the relative homology class of $C$ as $Z_{e_{+}^M} + d[\Sigma]$; then $[kC]= Z_{e_+^{kM}} + dk[\Sigma]$. Note that $C \cdot \Sigma=M + de \ge 0$ by intersection positivity; then $M \ge d|e|$. If $\chi(\Sigma)>0$, then \begin{equation*} \begin{split} I(kC)&= 2kM +d^2k^2e + dke+dk\chi(\Sigma) +2dk^2M\\ &\ge 2kM + dk \chi(\Sigma) + k^2d^2|e| -dk|e| \ge 2kM \ge 2. \end{split} \end{equation*} If $\chi(\Sigma) \le 0$, then by Lemma \ref{lem2} and Equation (\ref{eq4}), we have \begin{equation*} \begin{split} I(kC)&= 2kM +d^2k^2e + dke+dk\chi(\Sigma) +2dk^2M\\ &=kM + k(1+d|e|-d\chi(\Sigma)-e_+(C)) +d^2k^2 e + dk e+ dk\chi(\Sigma) \\ &+ dk^2M + dk^2(d|e|-d\chi(\Sigma) +1 -e_+(C))\\ &=k(M-e_+(C)) + k + dk^2(M-e_+(C)) -d^2k^2 \chi(\Sigma) + dk^2 \ge k \ge 1. \end{split} \end{equation*} Note that $I(kC)$ is even. Hence, we get $I(kC) \ge 2$. Write $\mathcal{C} =\sum_a d_aC_a + \sum_{b} d_b' C_b'$ such that $C_a \star C_a\ge 0$ and $C_b'\star C_b'<0$.
By Inequality (\ref{eq11}), we have \begin{equation}\label{eq5} \begin{split} I(\mathcal{C}) \ge& \sum_a I(d_aC_a) + \sum_b I(d_b'C_b') + 2\sum_{a,b} d_a d_b' C_a \star C_b' + 2\sum_{a\ne a'} d_a d_{a'} C_a \star C_{a'} + 2\sum_{b\ne b'} d_b' d_{b'}' C_b' \star C_{b'}'\\ \ge & \sum_a d_aI(C_a) + \sum_a d_a(d_a-1)C_a \star C_a + \sum_b I(d_b'C_b') + 2\sum_{a,b} d_a d_b' C_a \star C_b' \\ +& 2\sum_{a\ne a'} d_a d_{a'} C_a \star C_{a'} + 2\sum_{b\ne b'} d_b' d_{b'}' C_b' \star C_{b'}' \ge 0.\\ \end{split} \end{equation} \end{proof} A simple holomorphic curve $C$ is called a \textbf{special holomorphic plane} if $I(C) =indC =0$ and $C$ is an embedded plane whose positive end is asymptotic to $e_-$ with multiplicity 1. This is a counterpart of Definition 3.15 in \cite{CG}. \begin{lemma} \label{lem4} Assume that $C$ is not closed. If $I(C)=indC =C\star C =0$, then $C$ is a special holomorphic plane. \end{lemma} \begin{proof} Note that $C \star C=0$ forces $\delta(C)=0$, i.e., $C$ is embedded. It is easy to check that $C $ satisfies one of the following properties: \begin{enumerate} \item $h(C)=e_L(C)=0$ and $g(C)=1$; \item $h(C)=2$ and $g(C)=e_L(C)=0$; \item $h(C)=g(C)=0$ and $e_L(C)=1$. \end{enumerate} Write $d =deg (C)$. By Lemmas \ref{lem1} and \ref{lem2}, we have \begin{equation} \label{eq6} \begin{split} 2I(C)-ind(C)=2m_+ - 2m_- +4dM -2d^2|e| +2 -2g(C)-h(C) -2e_+(C). \end{split} \end{equation} Since $I(C)=ind(C)$, the ECH partition condition implies that $e_+(C)=m_+$. Also, $e_L(C) =0$ is equivalent to $m_-=0$. In the first two cases, we have \begin{equation*} \begin{split} 0=4dM -2d^2|e| \ge 2d M \ge 0. \end{split} \end{equation*} The last step comes from the intersection positivity of holomorphic curves: $C \cdot \Sigma =M-d |e| \ge 0 $. Hence, we have either $M=0$ or $d=0$. If $d=0$, then the formula in Lemma \ref{lem1} still implies that $M=0$. We get a contradiction, since we have assumed that $C$ is not closed. In the last case, $m_-=1$ for $C$.
By Equation (\ref{eq6}), we still get $d=0$. The formula in Lemma \ref{lem1} and $ I(C)=0$ imply that $m_+=0$. Hence, $C$ is a holomorphic plane with one end at $e_-$, i.e., it is a special holomorphic plane. \end{proof} \begin{lemma} \label{lem5} Let $\mathcal{C}\in \mathcal{M}^J(\alpha)$ be a holomorphic current with $I(\mathcal{C}) =i$, $i=0$ or $1$. If $i=1$, we also assume that $\alpha$ is an ECH generator. Then $\mathcal{C} =\mathcal{C}_{emb} \cup \mathcal{C}_{spec}$, where $\mathcal{C}_{emb}$ is embedded with $I(\mathcal{C}_{emb})=ind \mathcal{C}_{emb}=i$ and $\mathcal{C}_{spec}$ consists of special holomorphic planes. \end{lemma} \begin{proof} Write $\mathcal{C} =\sum_a d_aC_a + \sum_{b} d_b' C_b'$ as in Lemma \ref{lem9}. By Lemma \ref{lem9} and Inequality (\ref{eq5}), we must have $d_b'=0$ because $I(d_b'C_b')\ge 2$. In the case that $I(\mathcal{C})=0$, we have $I(C_a)=0$ for any $a$. Also, $d_a=1$ unless $C_a \star C_a=0$. The ECH equality implies that $ind C_a=0$ and $\delta(C_a)=0$ as well. If $d_a>1$, then $C_a \star C_a=0$. By Lemma \ref{lem4}, $C_a$ is a special holomorphic plane. In the case that $I(\mathcal{C})=1$, we have $I(C_a) \le 1$. If $I(C_a)=0$ for all $a$, then the ECH index equality and Lemma \ref{lem2} imply that each $C_a$ has an even number of ends at hyperbolic orbits. Since $\alpha$ is an ECH generator, we know that $\alpha$ contains an even number of distinct simple hyperbolic orbits. By Lemma \ref{lem1}, $I(\mathcal{C})= 0\mod 2$; we get a contradiction. Therefore, there exists $C_{a_0}$ with $I(C_{a_0})=ind C_{a_0} =1$. Inequality (\ref{eq5}) implies that such an $a_0$ is unique and $d_{a_0}=1$. For any other $a$, we also have $I(C_a)=ind C_a= \delta(C_a)=0$. Moreover, $d_a=1$ unless $C_a \star C_a=0$. In both cases, $\mathcal{C}$ is a union of embedded curves and covers of special holomorphic planes. \end{proof} \paragraph{Closed holomorphic curves} Now we begin to consider the holomorphic currents that contain closed holomorphic curves.
We first need to figure out which closed holomorphic curves can exist in $E$. \begin{lemma} \label{lem7} The zero section $\Sigma$ is the unique simple closed holomorphic curve in $E$. \end{lemma} \begin{proof} Suppose that we have a simple closed holomorphic curve $C$ which is different from $\Sigma$. Since $H_2(DE, \mathbb{Z})$ is generated by $[\Sigma]$, we must have $[C] =k [\Sigma]$. For energy reasons, we have $k \ge 1$. However, $C\cdot \Sigma =k[\Sigma] \cdot [\Sigma] =ke <0$, which contradicts the intersection positivity of holomorphic curves. \end{proof} \begin{lemma} \label{lem6} Let $\mathcal{C} \in \mathcal{M}^J(\alpha, Z_{\alpha})$ be a holomorphic current. Then $\mathcal{C}$ contains no closed components. \end{lemma} \begin{proof} By Lemma \ref{lem7}, we can write $\mathcal{C} = \mathcal{C}_0 + k \Sigma$, where $\mathcal{C}_0 $ has no closed components and $k \ge 0$. Then $ \mathcal{C} \cdot\Sigma =M + deg(\mathcal{C}_0) e +k e =Z_{\alpha}\cdot [\Sigma] = M$. By Lemma \ref{lem11}, we have $k=deg(\mathcal{C}_0) =0$. \end{proof} \begin{lemma} The moduli space $\mathcal{M}^J(e_-^M, Z_{e_-^M})$ is a finite set. \end{lemma} \begin{proof} Let $\mathcal{C}_{\infty} =\{\mathcal{C}^{0},...,\mathcal{C}^{N}\}$ be a broken holomorphic curve which is the limit of a sequence of holomorphic currents $ \{\mathcal{C}_{n}\}_{n=1}^{\infty}$ in $\mathcal{M}^J(e_-^M, Z_{e_-^M})$, where $\mathcal{C}^{0} \in \mathcal{M}^J(\alpha_0)$, $\mathcal{C}^{i} \in \mathcal{M}^{J_+}(\alpha_i, \alpha_{i-1})$ and $\alpha_N =e_-^M$. We claim that $\mathcal{C}^0$ has no closed components. Then the rest of the proof is the same as Proposition 3.13 in \cite{CG}. We omit the details here. To prove the claim, we need to show that the degree is also additive. The argument here is the same as Proposition 4.4 in \cite{NW}.
Note that the energies of the holomorphic currents are \begin{equation} \label{eq12} \begin{split} &\int_{\mathcal{C}_n \cap DE} \Omega_{\varepsilon} + \int_{\mathcal{C}_n \cap \mathbb{R}_{s \ge 0} \times Y} d\lambda_{\varepsilon} = M + deg(\mathcal{C}_n) |e|\\ &\int_{\mathcal{C}^0 \cap DE} \Omega_{\varepsilon} + \int_{\mathcal{C}^0 \cap \mathbb{R}_{s \ge 0} \times Y} d\lambda_{\varepsilon} = M_{\alpha_0} + \varepsilon \pi^* H(\alpha_0) + deg(\mathcal{C}^0) |e|\\ & \int_{\mathcal{C}^i \cap \mathbb{R}_{s} \times Y} d\lambda_{\varepsilon} =2(M_{\alpha_i} - M_{\alpha_{i-1}}) + \varepsilon \left( \pi^*H(\alpha_i) - \pi^*H(\alpha_{i-1}) \right), \end{split} \end{equation} where $M_{\alpha_i}$ is the total multiplicity of $\alpha_i$, $\pi^*H(\alpha_i)$ is short for $m_+^i \pi^*H(e_+) +m_-^i \pi^*H( e_-) + \sum^{2g}_{j=1} m_j^i \pi^*H(h_j)$, and $m_{\pm}^i, $ $m_j^i$ are the multiplicities of $e_{\pm}, h_j $ in $\alpha_i$. The total energy is additive: \begin{equation*} \begin{split} \int_{\mathcal{C}_n \cap DE} \Omega_{\varepsilon} + \int_{\mathcal{C}_n \cap \mathbb{R}_{s \ge 0} \times Y} d\lambda_{\varepsilon} =\int_{\mathcal{C}^0 \cap DE} \Omega_{\varepsilon} + \int_{\mathcal{C}^0 \cap \mathbb{R}_{s \ge 0} \times Y} d\lambda_{\varepsilon} + \sum_{i=1}^N\int_{\mathcal{C}^i \cap \mathbb{R}_{s} \times Y} d\lambda_{\varepsilon}. \end{split} \end{equation*} By Equations (\ref{eq12}), we have \begin{equation*} deg(\mathcal{C}_n) |e|=deg(\mathcal{C}^0)|e| + \sum_{i=1}^N (M_{\alpha_i} - M_{\alpha_{i-1}}) =\sum_{i=0}^N deg(\mathcal{C}^i)|e|. \end{equation*} The second equality follows from $deg(\mathcal{C}^i)|e| =M_{\alpha_i} - M_{\alpha_{i-1}}$ (Proposition 4.4 of \cite{NW}). Recall that $ deg(\mathcal{C}_n) =0$ because its relative homology class is $Z_{e_-^M}$. Proposition 4.4 of \cite{NW} also implies that $deg(\mathcal{C}^i) \ge 0$ for $i=1,\dots,N$. By Lemma \ref{lem11}, we have $deg(\mathcal{C}^i) = 0$ for $i=0,\dots,N$. By Lemma \ref{lem6}, $\mathcal{C}^0$ has no closed components.
\end{proof} \begin{comment} \begin{theorem} then for a generic almost complex structure $J$, we have a well--defined homomorphism \begin{equation*} ECH^L(E, F_{\varepsilon}^*\Omega_E)_J: ECH^L(Y, \lambda_{\varepsilon}) \to \mathbb{Z}. \end{equation*} \end{theorem} \begin{proof} The proof is essential the same as before. Let $\alpha $ be an ECH generator. The moduli space $\mathcal{M}_0^J(\alpha)$ is a finite set with orientation can be proof by the same argument in Gerig's paper. Hence, we can define a map in chain level by \begin{equation*} ECC^L(E, F_{\varepsilon}^*\Omega_E)_J\alpha=\#\mathcal{M}^J_0(\alpha). \end{equation*} To see this is a chain map, let $\mathcal{C}$ be a broken holomorphic curves come from the limit of curves in $\mathcal{M}^J_1(\alpha)$. Then $\mathcal{C}$ is either a curve in $\mathcal{M}^J_1(\alpha)$ or consists of \begin{itemize} \item A holomorphic curve with $I=1$ in the top level; \item A holomorphic curve with $I=0$ in the cobordism level; \item Connectors in the middle level. \end{itemize} We can apply HT's gluing argument, $ECC^L(E, F_{\varepsilon}^*\Omega_E)_J$ is a chain map. \end{proof} \end{comment} \paragraph{Uniqueness} We show that $MC_{e_-}$ is the unique holomorphic current in the moduli space $\mathcal{M}^J(e_-^M, Z_{e_-^M})$. The energy constraint argument in \cite{GHC1} does not work in the current situation. We use the argument in Lemma \ref{lem7} instead. To this end, we need to apply R. Siefring's intersection theory for punctured holomorphic curves. In \cite{RS}, Siefring defines an intersection pairing $C\bullet C'$ for punctured holomorphic curves, where $C$ and $C'$ are simple holomorphic curves. Here we do not need the precise definition of $\bullet $; we only use the following facts: \begin{itemize} \item The intersection pairing $C\bullet C'$ is invariant under homotopies of asymptotically cylindrical maps. \item (Theorem 2.2 of \cite{RS}) If $C$ and $C'$ are distinct, then $C\bullet C' \ge 0$.
\item (Theorem 2.3 of \cite{RS}) In the case that $C=C'$, the self-intersection number is defined by \begin{equation*} C\bullet C=2(\delta(C)+ \delta_{\infty}(C)) +\frac{1}{2} (2g(C)-2 + ind C + \# \Gamma_{even})+ (\bar{\sigma}(C)-\#\Gamma), \end{equation*} where $\Gamma$ denotes the set of punctures, $\Gamma_{even}$ is the set of punctures which are asymptotic to Reeb orbits with even Conley-Zehnder index, and $\delta_{\infty}(C)$ is an algebraic count of ``hidden'' singularities at infinity. According to the definition, if all the ends of $C$ are asymptotic to distinct simple orbits, then $\delta_{\infty}(C)$ and $\bar{\sigma}(C)-\#\Gamma$ vanish. \end{itemize} \begin{lemma} \label{lem3} The moduli space $\mathcal{M}^J(e_-^M, Z_{e_-^M})$ consists of exactly one element. \end{lemma} \begin{proof} Note that $MC_{e_-} \in \mathcal{M}^J(e_-^M, Z_{e_-^M})$, so the moduli space is nonempty. Moreover, $I(MC_{e_-}) = I(Z_{e_-^M})=0$. Let $\mathcal{C}=\sum_a d_a C_a \in \mathcal{M}^J(e_-^M, Z_{e_-^M}) $. By Lemma \ref{lem6}, $\mathcal{C}$ has no closed components. By Inequality (\ref{eq5}), we have $I(C_a)=ind C_a=\delta(C_a)=0$ for every $a$. Also note that $deg(C_a)=0$ for any $a$. Lemma \ref{lem2} forces $M_a=1$, $g(C_a)=0$ and $h(C_a) =e_+(C_a)=0$. In sum, each $C_a$ is a special holomorphic plane. By our choice of $J$, the fiber $C_{e_-} $ is a holomorphic plane with $I(C_{e_-}) =ind C_{e_-}=0$. By the third fact, we have $C_{e_-} \bullet C_{e_-}=-1$. If there exists another special plane $C_a $ other than $C_{e_-}$, then $C_{e_-} \bullet C_a \ge 0$ by the second fact. Note that $C_a$ is homotopic to $C_{e_-}$ as an asymptotically cylindrical map because $\pi_E(C_{e_-} -C_a)$ is trivial in $\pi_2(\Sigma)$. Therefore, $0 \le C_a \bullet C_{e_-}=C_{e_-}\bullet C_{e_-}=-1$. We get a contradiction.
\end{proof} \begin{comment} \begin{lemma} Suppose that $\pi_E : E \to B $ satisfies one of the following conditions: \begin{enumerate} \item $\chi(B)=0$; \item $\chi(B)<0$ and $|e|> -\chi(B)$. \end{enumerate} Let $\mathcal{C} \in \mathcal{M}^J_0(\alpha)$ be a holomorphic current without closed components. Assume that the homology class of $[\alpha]=\Gamma$ is torsion. \begin{enumerate} \item If $2\Gamma > -\chi(B)$, then $deg(\mathcal{C})=0$. \item If $2\Gamma \le -\chi(B)$, then $deg(\mathcal{C})=1$. \item If $2\Gamma =-\chi(B)$ and $deg(\mathcal{C})=1$, then $\mathcal{C}$ is an irreducible embedded holomorphic curve. Moreover, $\mathcal{C} \cdot B =0$. \end{enumerate} \end{lemma} \begin{proof} Suppose that the relative homology class of $\mathcal{C}$ is $Z_{\alpha}+ dB$. By the ECH index formula and $M \ge d|e|$, we have \begin{equation} \label{eq7} \begin{split} I(\mathcal{C})&= M+m_+-m_- + 2dM +d^2 e +de + d\chi(B)\ge d((M-|e|+ \chi(B)). \end{split} \end{equation} Hence, $I=0$ implies that $d=1$. By the assumption that $[\alpha]=\Gamma \in \{0,1 \dots |e|-1\}$, we have $M=k|e| + \Gamma$ for some $k \ge 0$. Since $M \ge |e|$, we have $k \ge 1$. We must have $k =1$, otherwise, the right hand side of \ref{eq7} is positive. In sum, $M=|e|+ \Gamma$. Then $\mathcal{C} \cdot B =M-|e|=\Gamma$. Note that if $2\Gamma > -\chi(B)$, then $I(\mathcal{C})>0$. Write $\mathcal{C}=\sum_a d_a C_a.$ The formula $1=deg(\mathcal{C}) = \sum_ad_adeg(C_a)$ implies that one $deg(C_{a_0}) =1$ and $deg(C_a)=0$ for all other $a\ne a_0$. By Lemma \ref{lem3}, $C_a$ is the fiber $C_-$. Therefore, we can write $\mathcal{C} =C_0 + d_- C_-$. In the case that $\Gamma =0$ and $g(B)=1$, we have $M=m_-$ and $\mathcal{C}\cdot B=0$. Hence, $0=\mathcal{C}\cdot B\ge d_-C_-\dot B=d_-$. We know that $\mathcal{C}=C_0$ is an irreducible embedded holomorphic curve. 
\end{proof} \end{comment} \begin{proof} [Proof of Theorem \ref{thm1}] Let $A \in H_2(DE, Y, \mathbb{Z})$ be the relative class represented by $[Z_{e_-^M}].$ Recall that $Z_{e_-^M}$ is the only relative homology class $Z$ in $H_2(DE, e_-^M)$ such that $[Z]=A$. Since $(\lambda_{\varepsilon}, J_+)$ is an $(L, \delta)$-flat approximation, we have the bijection (\ref{eq16}) between the ECH generators and the gauge classes of the Seiberg-Witten solutions. Let $\mathfrak{c}_{e_-^M} =\Psi(e_-^{M})$. The ECH cobordism map is defined by (Definition 5.9 of \cite{HT}) \begin{equation*} ECC^L(DE, \Omega_{\varepsilon}, A)({e_-^M}) =\#\mathfrak{M}(\mathfrak{c}_{e_-^M}, \mathfrak{s}_A), \end{equation*} where $ \mathfrak{M}(\mathfrak{c}_{e_-^M}, \mathfrak{s}_A)$ is the moduli space of solutions to the Seiberg-Witten equations on $E$ which are asymptotic to $\mathfrak{c}_{e_-^M}$ (see (4.15) of \cite{HT}), $\mathfrak{s}_A$ is the spin-c structure such that $c_1(\mathfrak{s}_A) =c_1(K_{DE}^{-1}) +2PD_{DE}(A)$ and $K_{DE}^{-1}$ is the canonical line bundle. By Theorem 4.2 of \cite{CG2} and Lemma \ref{lem3}, we have $$\#\mathfrak{M}(\mathfrak{c}_{e_-^M}, \mathfrak{s}_A) = \# \mathcal{M}^J(e_-^M, Z_{e_-^M}) =1.$$ Because $e_-^M$ is a cycle, we have $ECH^L(DE, \Omega_{\varepsilon}, A)([{e_-^M}]) =1.$ To see that $ECH^L(DE, \Omega_{\varepsilon}, A)([{e_+^{M-|e|}}]) =0,$ by the holomorphic curve axioms (see Theorem 1.9 of \cite{HT}), it suffices to show that the moduli space $\overline{\mathcal{M}^J(e_+^{M-|e|}, Z_{e_+^{M-|e|}, A})}$ is empty, where $Z_{e_+^{M-|e|}, A} \in H_2(E, e_+^{M-|e|})$ is the unique relative homology class determined by $A$. Let $\mathcal{C}= \mathcal{C}_0 + k\Sigma$ be a holomorphic current in this moduli space, where $\mathcal{C}_0$ has no closed components and $k \ge 0$. Then \begin{equation*} \mathcal{C} \cdot \Sigma = M-|e| + deg(\mathcal{C}_0)e +ke =A \cdot \Sigma =Z_{e_-^M} \cdot \Sigma =M. \end{equation*} Hence, $deg(\mathcal{C}_0) +k=-1$.
This contradicts Lemma \ref{lem11}. \end{proof} \section{Proof of Theorem \ref{thm0} and Theorem \ref{thm2}} \subsection{Sphere case} In this subsection, we assume $\Sigma =\mathbb{S}^2$. It is well known that the diffeomorphism type of $Y$ is the lens space $L(|e|, 1)$. The ECH of $Y$ (as an $\mathbb{F}$-module) has been computed by Nelson and Weiler (Example 1.3 of \cite{NW}). However, we still need the $U$-module structure of $ECH(Y,\lambda_{\varepsilon}, 0)$, which we obtain by using Taubes's isomorphism ``ECH=HM'' \cite{T2, Te2,Te3,Te4,Te5} and the computations of P. Kronheimer, T. Mrowka, P. Ozsv\'{a}th and Z. Szab\'{o} in \cite{KMOS}. \begin{prop} \label{lem8} The ECH of the lens space $Y \cong L(|e|, 1)$ is \begin{equation} \label{eq9} ECH_*(Y, \lambda, 0 ) = \begin{cases} \mathbb{F} , & *=2k \mbox{ and } k \ge 0,\\ 0, & else, \end{cases} \end{equation} where the $\mathbb{Z}$ grading is defined by (\ref{eq10}). Moreover, $U: ECH_{2k}(Y, \lambda, 0 ) \to ECH_{2k-2}(Y, \lambda, 0 )$ is an isomorphism for $k \in \mathbb{Z}_{\ge 1}$. Also, $ECH_0(Y, \lambda, 0) $ is spanned by $[\emptyset]$. \end{prop} \begin{proof} The isomorphism (\ref{eq9}) is just the sphere case of Theorem 1.1 in \cite{NW}. It remains to show that the $U$ map is an isomorphism. By Taubes's series of papers \cite{T2, Te2,Te3,Te4,Te5}, we have a canonical isomorphism $ECH_*(Y, \lambda, 0) \cong \widehat{HM}^{-*}(Y, \mathfrak{s}_{\xi}) $ as a $U$-module. Since $L(|e|, 1)$ admits a metric with positive scalar curvature, by Proposition 2.2 and Corollary 2.12 of \cite{KMOS}, we have $\widehat{HM}(Y, \mathfrak{s}_{\xi}) \cong \mathbb{F}[U^{-1}, U]/ \mathbb{F}[U]. $ Therefore, $U$ is an isomorphism when the grading is at least two. \end{proof} \begin{proof}[Proof of Theorem \ref{thm0}] Let $d_0$ be the maximal integer such that $A_{\lambda_{\varepsilon}} (e_-^{d_0|e|}) <L_{\varepsilon}$. Let $k_0=gr(e_-^{d_0|e|})$. Let $f: Y \to (0,1]$ be a function such that $f\lambda_{\varepsilon}$ is nondegenerate.
By Proposition \ref{lem8} and the following commutative diagram \begin{equation*} \begin{CD} ECH_{2k}^L(Y, f\lambda_{\varepsilon}, 0)@> U^k>> ECH_{0}^L(Y, f\lambda_{\varepsilon}, 0) \cong \mathbb{F} [\emptyset] \\ @VV i_L V @VV id V\\ ECH_{2k}(Y, f\lambda_{\varepsilon}, 0) @> U^k >> ECH_{0}(Y, f\lambda_{\varepsilon}, 0) \cong \mathbb{F} [\emptyset], \end{CD} \end{equation*} it is easy to show that the $k$-th ECH capacity is \begin{equation*} c_k(Y, f\lambda_{\varepsilon}) =\inf \{L \in \mathbb{R} \vert i_L: ECH_{2k}^L(Y, f\lambda_{\varepsilon},0) \to ECH_{2k}(Y, f\lambda_{\varepsilon},0) \mbox{ is nonvanishing} \}. \end{equation*} Assume that $0<f<1$. Then we have an exact symplectic cobordism \begin{equation*} (X_f, \Omega_{X_f}) =(\{ (s, y) \in \mathbb{R} \times Y: f\le e^s \le 1\}, d(e^s \lambda_{\varepsilon})). \end{equation*} Let $(DE,\Omega'_{\varepsilon}) $ be the symplectic cobordism such that $(DE,\Omega_{\varepsilon}) = (DE,\Omega'_{\varepsilon}) \circ (X_f, \Omega_{X_f})$. By Theorem 1.9 in \cite{HT}, we have the following diagram \begin{equation}\label{eq17} \begin{CD} ECH_{2k_0}^L(Y, \lambda_{\varepsilon}, 0) @> ECH^L(DE,\Omega_{\varepsilon}, A) >> \mathbb{F}\\ @VV ECH^L(X_f, \Omega_{X_f})V @VV id V\\ ECH_{2k_0}^L(Y, f\lambda_{\varepsilon}, 0)@> ECH^L(DE,\Omega_{\varepsilon}', A)>> \mathbb{F} \\ @VV i_L V @VV id V\\ ECH_{2k_0}(Y, f\lambda_{\varepsilon}, 0) @> ECH(DE,\Omega_{\varepsilon}', A) >> \mathbb{F} \end{CD} \end{equation} Take $L=\mathcal{A}_{\lambda_{\varepsilon}}(e_-^{d_0|e|}) + \delta =2d_0|e| +\delta$, where $\delta$ is a sufficiently small positive number. By Theorem \ref{thm1} and (\ref{eq9}), we know that all the arrows are nonzero. Therefore, we have $c_{k_0}(Y, f\lambda_{\varepsilon}) \le 2d_0|e| +\delta$. If $f=1$ somewhere, we replace $f$ by $(1-\epsilon)f$, run the same argument, and then let $\epsilon \to 0$; this yields the same result.
Since $f$ is arbitrary, we have $$c_{k_0}(Y, \lambda_{\varepsilon}) \le 2d_0|e| + \delta < L_{\varepsilon}.$$ Because $c_{k}(Y, \lambda_{\varepsilon}) $ is nondecreasing with respect to $k$, we have $c_{k}(Y, \lambda_{\varepsilon}) < L_{\varepsilon}$ for any $1\le k\le k_0$. As a result, there is a class $\sigma \in ECH_{2k}^{L} (Y, \lambda_{\varepsilon}, 0)$ satisfying $U^{k} (\sigma) =[ \emptyset]$ for any $c_{k}(Y, \lambda_{\varepsilon})<L<L_{\varepsilon}$. In \cite{NW}, Nelson and Weiler show that there is a bijection between the nonnegative integers $k$ and the pairs $(m_-, m_+)$ satisfying $m_-+m_+ \equiv 0 \mod |e|$. Therefore, there is a unique pair $(m_-, m_+)$ satisfying \begin{equation} \label{eq13} \begin{split} & m_-+m_+ =d|e| \\ & 2k= gr(e_-^{m_-}e_+^{m_+}) =2d+ d^2|e| + m_+ - m_-. \end{split} \end{equation} Therefore, we have \begin{equation*} ECH_{2k}^{L} (Y, \lambda_{\varepsilon}, 0) = \begin{cases} <e_-^{m_-}e_+^{m_+}> & \mbox{ when } \mathcal{A}_{\lambda_{\varepsilon}}(e_-^{m_-}e_+^{m_+})<L< L_{\varepsilon} \\ 0 & \mbox{ when } L<\mathcal{A}_{\lambda_{\varepsilon}}(e_-^{m_-}e_+^{m_+}). \end{cases} \end{equation*} Thus, we must have \begin{equation*} c_{k}(Y, \lambda_{\varepsilon}) = \mathcal{A}_{\lambda_{\varepsilon}}(e_-^{m_-}e_+^{m_+}) = 2d|e| +O(\varepsilon). \end{equation*} By the relation (\ref{eq13}), we get $2d+d|e|(d-1) \le 2k \le 2d+d|e|(d+1)$. It is easy to show that the nonnegative integer $d$ is unique provided that it exists. Then $$c_k(Y, \lambda) =\lim_{\varepsilon \to 0}c_k(Y, \lambda_{\varepsilon}) =2d|e|. $$ Since the integer $k_0 =k_0(\varepsilon) \to \infty $ as $\varepsilon \to 0$, the conclusion holds for any $k \in \mathbb{Z}_{\ge 1}$. \end{proof} \subsection{Torus case} To prove Theorem \ref{thm2}, we first need to compute the $U$ map for some ECH generators. The computations are parallel to Lemma 4.6 of \cite{NW}. Let $\mathbf{z} =\{z_1,\ldots,z_k\}$ be $k$ distinct marked points in $Y$ away from the Reeb orbits.
Let $\mathcal{M}_{i}^J(\alpha, \beta)_{\mathbf{z}}$ denote the moduli space of ECH index $i$ holomorphic currents passing through the marked points $\mathbf{z}$. By the same argument as in Lemma 2.6 of \cite{HT3}, $\mathcal{M}_{2k}^J(\alpha, \beta)_{\mathbf{z}}$ is a finite set for a generic almost complex structure. In the case $k=1$, the count of this moduli space is used to define the $U$ map. By an argument similar to Proposition 3.25 of \cite{CG}, we can define $U^k$ at the chain level by \begin{equation*} U^k \alpha :=\sum_{\beta} \# \mathcal{M}_{2k}^J(\alpha, \beta)_{\mathbf{z}} \beta. \end{equation*} \begin{lemma} \label{lem12} Fix a positive integer $k$. Let $J$ be an admissible almost complex structure. Then $\mathcal{M}_{2k}^J(e_+^k, e_-^k)_{\mathbf{z}}$ consists of $k$ distinct index $2$ holomorphic cylinders passing through the marked points $\mathbf{z}$. Moreover, we have $<U^k e_+^k, e_-^k>=1$. \end{lemma} \begin{proof} Note that $I(e_+^k, e_-^k) =2k$. So $<U^k e_+^k, e_-^k>$ is defined by counting $\mathcal{M}_{2k}^J(e_+^k, e_-^k)_{\mathbf{z}}$. Let $\mathcal{C} =\sum_a d_a C_a$ be a holomorphic current in $\mathcal{M}_{2k}^J(e_+^k, e_-^k)_{\mathbf{z}}$. If $C_a$ passes through $l_a \ge 0 $ marked points, then $ind C_a \ge 2l_a$. By Hutchings's ECH index inequality (see \cite{H1, H2} for example), we have \begin{equation*} \begin{split} 2k=I(\mathcal{C}) &\ge \sum_a d_aI(C_a)\\ &\ge \sum_a d_a \left(ind C_a + 2 \delta(C_a)\right) \\ &\ge \sum_{a, l_a \ge 1} 2l_a+ \sum_{a, l_a \ge 1} 2(d_a-1) l_a + \sum_{a } 2 d_a \delta(C_a) . \end{split} \end{equation*} Since $\sum_a l_a=k$, we must have $d_a=1$, $I(C_a) = ind C_a =2l_a$ if $l_a \ge 1$, and $C_a$ is a trivial cylinder if $l_a=0$. By Proposition 4.4 of \cite{NW}, the degrees of the $C_a$ are nonnegative and additive. Hence, we must have $deg(C_a) =0$ because their sum is $deg(\mathcal{C}) = (k-k)/|e|=0$. Consequently, we have $C_a \in \mathcal{M}^J(e_+^{m_a}, e_-^{m_a})$.
By Nelson and Weiler's index formula (Proposition 1.5 of \cite{NW}) and the ECH partition condition, we have \begin{equation*} \begin{split} &2l_a= ind C_a = 2g(C_a) -2 + 4m_a, \\ & 2l_a= I(C_a) = 2m_a. \end{split} \end{equation*} The second equation gives $l_a=m_a$, and then the first one gives $g(C_a)=1-m_a$; since $g(C_a) \ge 0$ and $l_a \ge 1$, we have $g(C_a)=0$ and $l_a =m_a =1$, i.e., $C_a$ is a holomorphic cylinder from $e_+$ to $e_-$ passing through a marked point. Consequently, there are $k$ holomorphic cylinders and there are no trivial cylinders. By Proposition 4.7 of \cite{NW}, there is a bijection between $\mathcal{M}_{2k}^J(e_+^k, e_-^k)_{\mathbf{z}}$ and the moduli space of Morse flow lines passing through the marked points. For each marked point, there is exactly one index $2$ Morse flow line passing through it. Therefore, $<U^k e_+^k, e_-^k>= \#\mathcal{M}_{2k}^J(e_+^k, e_-^k)_{\mathbf{z}}=1$. \end{proof} Let $2k_0= gr(e_+^M) =d(d+1)|e|$, where $M=d|e|$. Let $i_L^f:=i_L \circ ECH^L(X_f, \Omega_{X_f})$, where $i_L $ and $ECH^L(X_f, \Omega_{X_f})$ are the homomorphisms in the diagram (\ref{eq17}). \begin{lemma} \label{lem19} Assume that $\Sigma$ is the two-torus. Let $L\in \mathbb{R}$ be such that $\mathcal{A}_{\lambda_{\varepsilon}}(e_-^{M+|e|})<L<L_{\varepsilon}$. Then $i^f_L(e_+^M) \ne 0$ and $i_L^f(e_-^{M+|e|}) \ne 0$. Moreover, $i^f_L(e_+^M) \ne i_L^f(e_-^{M+|e|}).$ In particular, $i_L^f$ is an isomorphism. \end{lemma} \begin{proof} By Corollary 1.13 of \cite{NW} and its proof, we know that $e_+^{M}$ and $e_-^{M+|e|}$ are the only ECH generators in $ECC_{2k_0}^{L_{\varepsilon}}(Y, \lambda_{\varepsilon}, 0)$. Similarly, $e_+^{M-|e|}$ and $e_-^{M}$ are the only ECH generators in $ECC_{2k_0-2M}^{L_{\varepsilon}}(Y, \lambda_{\varepsilon}, 0)$. By Lemma \ref{lem12}, we have \begin{equation*} U^M(e_+^{M}) = e_-^M + a e_+^{M-|e|} \end{equation*} for some $a \in \mathbb{F}$. By Theorem \ref{thm1} and the diagram (\ref{eq17}), we have $i_L^f(e_-^M) \ne 0$ and $i_L^f(e_-^{M+|e|}) \ne 0$.
If $i^f_L(e_+^M)=0$, then \begin{equation*} 0= U^M \circ i^f_L(e_+^M)=i^f_L\circ U^M (e_+^M) =i^f_L( e_-^M + a e_+^{M-|e|}). \end{equation*} Then we must have $a =1$ and $i^f_L( e_-^M) = i_L^f( e_+^{M-|e|})$. By Theorem \ref{thm1} and the diagram (\ref{eq17}), we obtain \begin{equation} \label{eq18} \begin{split} 1= & ECH^L(DE, \Omega_{\varepsilon}, A)(e_-^M) \\ =& ECH(DE, \Omega'_{\varepsilon}, A)\circ i_L^f (e_-^{M})\\ =& ECH(DE, \Omega'_{\varepsilon}, A)\circ i_L^f (e_+^{M-|e|})\\ =& ECH^L(DE, \Omega_{\varepsilon}, A)(e_+^{M-|e|}) =0. \end{split} \end{equation} This contradiction implies that $i^f_L(e_+^M) \ne 0$. Replacing $e_-^M$ and $e_+^{M-|e|}$ in (\ref{eq18}) by $e_-^{M+|e|}$ and $e_+^M$ respectively, the same argument implies that $i^f_L(e_+^M) \ne i_L^f(e_-^{M+|e|}).$ According to Corollary 1.13 of \cite{NW}, we know that $dim_{\mathbb{F}}ECH_{2k_0}(Y, f\lambda_{\varepsilon}, 0) =2$. Therefore, $i_L^f$ is an isomorphism. \end{proof} By definition, $c_k(Y, \lambda) =\infty$ if we cannot find $\sigma \in ECH(Y, \lambda, 0)$ such that $U^k\sigma =[\emptyset]$. The existence of such a class is guaranteed by results of J. B. Etnyre \cite{JBE} and Y. Eliashberg \cite{YE}. \begin{lemma} Suppose that $f\lambda_{\varepsilon}$ is nondegenerate. There exists a sequence of classes $\sigma_{2k} \in ECH(Y, f\lambda_{\varepsilon}, 0)$ such that $gr(\sigma_{2k}) =2k$ and $U^{k} (\sigma_{2k}) =[\emptyset]$. \end{lemma} \begin{proof} According to Theorem 1.1 in \cite{JBE} or Theorem 1.3 of \cite{YE}, we can find a symplectic cap of $(Y, f\lambda_{\varepsilon})$. Here a symplectic cap is a weakly exact symplectic cobordism from the empty set to $(Y, f\lambda_{\varepsilon})$. Then we obtain a weakly exact symplectic cobordism $(X, \Omega_X)$ from $(S^3, \lambda'_{std})$ to $(Y, f\lambda_{\varepsilon})$ by removing a small Darboux ball, where $\lambda_{std}'$ is a nondegenerate perturbation of the standard contact form $\lambda_{std}$.
By Proposition \ref{lem8} (or Hutchings's computations in Sections 3.7 and 4.1 of \cite{H4}), we know that $ECH_{2k}(S^3, \lambda_{std}', 0)=<\eta_{2k}>$ and $U^k (\eta_{2k}) =[\emptyset]$, where $k \in \mathbb{Z}_{\ge1}$. Define $$\sigma_{2k} = ECH(X, \Omega_X, 0)(\eta_{2k}) \in ECH(Y, f\lambda_{\varepsilon}, 0). $$ By Theorem 2.4 in \cite{H3}, we have $ECH(X, \Omega_X, 0)([\emptyset]) =[\emptyset]$. Therefore, $\sigma_{2k} \ne 0$: if $\sigma_{2k}=0$, then \begin{equation*} 0=U^k\sigma_{2k} =U^k \circ ECH(X, \Omega_X, 0)(\eta_{2k}) =ECH(X, \Omega_X, 0)\circ U^k (\eta_{2k}) =[\emptyset], \end{equation*} a contradiction. Since the $U$ map is a degree $-2$ map and $gr([\emptyset]) =0$, we must have $gr(\sigma_{2k}) =2k$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm2}] Let $\sigma_{2k_0} \in ECH_{2k_0}(Y, f\lambda_{\varepsilon}, 0)$ be the class such that $U^{k_0} (\sigma_{2k_0}) =[\emptyset]$, where $2k_0= gr(e_+^M) =d(d+1)|e|$. According to Nelson and Weiler's computations, we have \begin{equation*} ECH_{2k_0}^{L} (Y, \lambda_{\varepsilon}, 0) = \begin{cases} <e_+^{M}> \oplus<e_-^{M+|e|}> & \mbox{ when } \mathcal{A}_{\lambda_{\varepsilon}}(e_-^{M+|e|})<L< L_{\varepsilon} \\ <e_+^{M}> & \mbox{ when } \mathcal{A}_{\lambda_{\varepsilon}}(e_+^{M})<L< \mathcal{A}_{\lambda_{\varepsilon}}(e_-^{M+|e|}) \\ 0 & \mbox{ when } L< \mathcal{A}_{\lambda_{\varepsilon}}(e_+^{M}). \end{cases} \end{equation*} Then Lemma \ref{lem19} and the definition of $c_{k_0}(Y, \lambda_{\varepsilon}, 0)$ imply that \begin{equation*} \mathcal{A}_{\lambda_{\varepsilon}}(e_+^{M}) \le c_{k_0}(Y, \lambda_{\varepsilon}, 0) \le \mathcal{A}_{\lambda_{\varepsilon}}(e_-^{M+|e|}). \end{equation*} Taking $\varepsilon \to 0$, we have $ 2d|e| \le c_{k_0}(Y, \lambda, 0) \le 2|e|(d+1)$, where $d$ and $k_0$ satisfy the relation $2k_0=d(d+1)|e|$.
For any $1\le k\le k_0$, the monotonicity of the ECH spectrum implies that $c_{k}(Y, \lambda_{\varepsilon}, 0) \le 2|e|(d+1) + O(\varepsilon)<L_{\varepsilon}$. Let \begin{equation*} \begin{split} & \mathcal{A}^+ : = \max\{ \mathcal{A}_{\lambda_{\varepsilon}}(\alpha): gr(\alpha) =2k \},\\ &\mathcal{A}^- : = \min\{ \mathcal{A}_{\lambda_{\varepsilon}}(\alpha): gr(\alpha) =2k \}. \end{split} \end{equation*} If $L> \mathcal{A}^+$, again, according to Nelson and Weiler's computations, $ECH_{2k}^L(Y, \lambda_{\varepsilon}, 0)$ is generated by the ECH generators $\alpha=e_-^{m_-}h_1^{m_1}h_2^{m_2}e_+^{m_+}$ satisfying \begin{equation} \label{eq19} \begin{split} &gr(\alpha)=d^2|e| +m_+-m_-=2k\\ &m_++m_1+m_2+m_-=d|e| \mbox{ and } m_1, m_2 \in \{0, 1\} \end{split} \end{equation} for some $d \in \mathbb{Z}_{\ge1}$. Also, $ECH_{2k}^L(Y, \lambda_{\varepsilon}, 0)$ vanishes if $L<\mathcal{A}^-$. Therefore, we have $$\mathcal{A}^-\le c_k(Y, \lambda_{\varepsilon}) \le \mathcal{A}^+.$$ Let $d_-$ and $d_+$ be the minimal and the maximal integers satisfying the relation (\ref{eq19}), respectively. Then \begin{equation*} 2d_-|e|\le c_k(Y, \lambda) =\lim_{\varepsilon \to 0} c_k(Y, \lambda_{\varepsilon}) \le 2d_+|e|. \end{equation*} As $\varepsilon$ tends to zero, we can take $k_0 \to \infty$. Therefore, the above inequality holds for any $k \in \mathbb{Z}_{\ge 1}$. We claim that either $d_+=d_-$ or $d_-=d_+-1$. To see this, assume that $d_- \ne d_+$. This implies $d_- \le d_+ -1$. Note that \begin{equation*} d_{\pm}(d_{\pm}-1)|e| \le 2k \le d_{\pm}(d_{\pm}+1)|e| \end{equation*} from (\ref{eq19}). Therefore, we have $d_+^2-d_+ \le d_-^2+ d_- \le d_-^2+ d_+ -1 $. This is equivalent to $d_+ -1\le d_-$. Hence, we must have $d_-=d_+-1$. \end{proof} Shenzhen University \verb| E-mail address: [email protected]| \end{document}
\begin{document} \title {A new class of optimal four-point methods with \\ convergence order 16 for solving nonlinear equations} \author{Somayeh Sharifi$^a$\thanks{[email protected]} \and Mehdi Salimi$^b$\thanks{Corresponding author: [email protected]} \and Stefan Siegmund$^b$\thanks{[email protected]}\and Taher Lotfi$^c$\thanks{[email protected]}} \date{} \maketitle \begin{center} $^{a}$Young Researchers and Elite Club, Hamedan Branch, Islamic Azad University, Hamedan, Iran\\ $^{b}$Center for Dynamics, Department of Mathematics, Technische Universit{\"a}t Dresden, 01062 Dresden, Germany\\ $^{c}$Department of Mathematics, Hamedan Branch, Islamic Azad University, Hamedan, Iran\\ \end{center} \begin{abstract}\noindent We introduce a new class of optimal iterative methods without memory for approximating a simple root of a given nonlinear equation. The proposed class uses four function evaluations and one first derivative evaluation per iteration and is therefore optimal in the sense of Kung and Traub's conjecture. We present the construction, convergence analysis and numerical implementations, as well as comparisons of accuracy and basins of attraction between our method and existing optimal methods for several test problems. \noindent \textbf{Keywords}: Simple root, four-step iterative method, Kung and Traub conjecture, optimal order of convergence, computational efficiency. \end{abstract} \section{Introduction} Solving nonlinear equations is a basic and extremely important task in all fields of science and engineering. One can distinguish between two general approaches to solving nonlinear equations numerically, namely one-step and multi-step methods. Multi-step methods overcome some computational issues encountered with one-step iterative methods; typically they allow us to achieve greater accuracy with the same number of function evaluations.
In this context an unproved conjecture by Kung and Traub \cite{Kung} plays a central role: it states that a multi-step method without memory which uses $n+1$ function evaluations per iteration can achieve a convergence order of at most $2^{n}$; methods attaining this bound are called optimal. Motivated by this conjecture, many optimal two-step and three-step methods have been presented. However, because of the complexity of their construction and development, optimal four-point methods are rare and remain an active research problem. Prominent optimal two-point methods have been introduced by Jarratt \cite{Jarrat}, King \cite{King} and Ostrowski \cite{Ostrowski}. Some optimal three-point methods have been proposed by Chun and Lee \cite{Chun1}, Cordero et al.\ \cite{Cordero5}-\cite{Cordero22}, Khattri and Steihaug \cite{Khattri}, Lotfi et al.\ \cite{Lotfi0}-\cite{Lotfi1}, Petkovic et al.\ \cite{Petkovic1,Petkovic2} and Sharma et al.\ \cite{Sharma1}. Neta \cite{Neta0} has presented methods with convergence orders $8$ and $16$. Babajee and Thukral \cite{Babajee} developed a four-point method with convergence order $16$ based on the King family of methods. In \cite{Geum1}-\cite{Geum3} Geum and Kim provided three methods with convergence order 16 by using weight functions. We construct a new optimal class of four-point methods without memory which uses five function evaluations per iteration. The paper is organized as follows: Section 2 is devoted to the construction of the new optimal class. We first construct classes of optimal two-point and three-point methods and then utilize them in the first three steps of our new method. The section also includes a convergence analysis of all these methods. Numerical performance and comparisons with other methods are illustrated in Section 3. A conclusion is provided in Section 4. \section{Main results: \\ Construction, error and convergence analysis} This section deals with the construction, error analysis and convergence analysis of our method.
First, we introduce an optimal two-point class (this class contains no originality; we review it here only for ease of reference in constructing the next two classes), then an optimal three-point class is presented, and, finally, our optimal four-point class is developed. \subsection{Construction of an optimal two-point class} In this section we construct a new optimal two-point method for solving nonlinear equations which employs Newton's one-point method \cite{Ostrowski,Traub} and suitable weight functions for evaluations at two points. Newton's method \cite{Ostrowski,Traub} \begin{equation}\label{a1} \begin{split} y_{n}&=x_n-\dfrac{f(x_n)}{f'(x_n)}, \quad (n \in \mathbb{N}_0 = \{0, 1, \dots\}) , \end{split} \end{equation} where $x_0$ denotes the initial approximation of $x^{*}$, is of convergence order two. To increase the order of convergence, we add one Newton step to the method \eqref{a1} to get \begin{equation}\label{a2} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ x_{n+1}=y_n-\dfrac{f(y_n)}{f'(y_n)}. \end{cases} \end{equation} The method \eqref{a2} uses four function evaluations to achieve order four; therefore it is not an optimal two-point method. We modify \eqref{a2} by approximating $f'(y_n)$ by \begin{equation*} f'(y_n) \approx \frac{f'(x_n)}{G(t_n)}, \end{equation*} using only the values $f(x_n)$, $f(y_n)$, and $f'(x_n)$. More precisely, we use the abbreviation $t_n=\frac{f(y_n)}{f(x_n)}$ and utilize Mathematica \cite{Hazrat} to carefully choose the weight function $G : \mathbb{R} \rightarrow \mathbb{R}$ from a class of admissible functions such that the scheme \begin{equation}\label{a3} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ x_{n+1}=y_n- G(t_n) \cdot\dfrac{f(y_n)}{f'(x_n)}, \end{cases} \end{equation} is of order four.
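As a quick illustration (ours, not part of the original presentation), the scheme \eqref{a3} takes only a few lines to implement. The cubic weight below is one admissible representative: it satisfies $G(0)=1$ and $G'(0)=2$ as required by Theorem \ref{theorem1}, and its remaining coefficients encode $G''(0)=10$ and $G'''(0)=-36$, a choice that is also compatible with the conditions imposed later by the three-point class. The function names are ours.

```python
import math

def G(t):
    # One admissible polynomial weight: G(0) = 1 and G'(0) = 2 give
    # fourth order; the coefficients 5 and -6 encode G''(0) = 10 and
    # G'''(0) = -36 (our choice of representative, not the only one).
    return 1 + 2*t + 5*t**2 - 6*t**3

def two_point(f, fprime, x0, tol=1e-14, max_iter=25):
    """Scheme (a3): a Newton predictor followed by a weighted corrector,
    using only f(x_n), f(y_n) and f'(x_n) per iteration."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        y = x - fx / dfx              # Newton step
        fy = f(y)
        t = fy / fx
        x = y - G(t) * fy / dfx       # corrected step
    return x

# Example: approximate the simple root sqrt(2) of f(x) = x^2 - 2.
root = two_point(lambda x: x*x - 2, lambda x: 2*x, 1.5)
```

Starting from $x_0=1.5$, a few iterations already reproduce $\sqrt{2}$ to machine precision, in line with the fourth-order convergence established below.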
\begin{thm}\label{theorem1} Assume that $ f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ is four times continuously differentiable and has a simple zero $x^{*}\in D$, and that $G \in C^{3}(\mathbb{R})$. If \begin{equation*} \begin{split} G_{0} &=1, \quad G_{1}=2,\quad |G_2|<\infty, \end{split} \end{equation*} where $G_{i}=\frac{d^i G(t_n)}{dt_n^i}|_{0}$ for $i=0,1,\ldots$, and the initial point $x_{0}$ is sufficiently close to $x^{*}$, then the sequence $\{x_n\}$ defined by \eqref{a3} converges to $x^{*}$ and the order of convergence is four. \end{thm} \begin{proof} Let $e_{n}:=x_{n}-x^{*}$, $e_{n,y}:=y_{n}-x^{*}$, and $c_{n}:=\frac{f^{(n)}(x^{*})}{n!f^{'}(x^{*})}$ for $n \in \mathbb{N}$. Using the fact that $f(x^{*})=0$, the Taylor expansion of $f$ at $x^*$ yields \begin{equation}\label{a4} f(x_{n}) = f^{'}(x^*)(e_{n} + c_{2}e_{n}^{2}+c_{3}e_{n}^{3}+c_{4}e_{n}^{4})+O(e_{n}^{5}), \end{equation} and expanding $f'$ at $x^*$ we get \begin{equation}\label{a5} f^{'}(x_{n})=f^{'}(x^{*})(1+2c_{2}e_{n}+3c_{3}e_{n}^{2}+4c_{4}e_{n}^{3}+5c_{5}e_{n}^{4})+O(e_{n}^{5}). \end{equation} Therefore \begin{equation*}\label{a6} \dfrac{f(x_{n})}{f'(x_{n})}=e_{n}-c_{2}e_{n}^{2}+2(c_{2}^{2}-c_{3})e_{n}^{3}+(-4c_2^3+7c_2c_3-3c_4)e_n^4+O(e_n^5), \end{equation*} and hence \begin{equation*}\label{a7} e_{n,y}= y_n-x^*=c_{2}e_{n}^{2}+ O(e_n^3). \end{equation*} For $f(y_n)$, we also have \begin{equation}\label{a8} f(y_{n})=f^{'}(x^{*})\left(e_{n,y} + c_{2}e_{n,y}^{2}+c_{3}e_{n,y}^{3}+c_4e_{n,y}^{4}\right)+O(e_{n,y}^{5}). \end{equation} By (\ref{a4}) and (\ref{a8}), we obtain \begin{equation}\label{a9} t_n=\dfrac{f(y_n)}{f(x_n)} =c_2 e_n+(-3c_2^2+2c_3)e_n^2+(8c_2^3-10c_2c_3+3c_4)e_n^3+O(e_n^4), \end{equation} and expanding $G$ at $0$ yields \begin{equation}\label{a10} G(t_n)=G_{0}+G_{1}t_n+\frac{1}{2}G_{2}t_n^2+O(t_n^3).
\end{equation} By substituting (\ref{a4})-(\ref{a10}) into (\ref{a3}), we obtain \begin{equation*} e_{n+1}=x_{n+1}-x^*=R_2e_n^2+R_3e_n^3+R_4e_n^4+O(e_n^5), \end{equation*} where \begin{equation*} \begin{split} R_2&=-c_2(-1+G_{0}),\\ R_3&=-c_2^2(-2+G_{1}),\\ R_4&=-c_2c_3+c_2^3(5-\frac{1}{2}G_2). \end{split} \end{equation*} In general $R_{4}\neq 0$; however, by setting $R_2 = R_3 = 0$, the convergence order becomes four. Sufficient conditions are given by the following set of equations \begin{equation*} \begin{array}{lrl} G_{0} =1 & \quad \Rightarrow & R_2 = 0 , \\[1ex] G_{1}=2 & \Rightarrow & R_3=0 , \\[1ex] |G_{2}|<\infty & \Rightarrow & R_4\neq0 , \end{array} \end{equation*} and the error equation becomes \begin{equation*} e_{n+1}=R_{4}e_n^4+O(e_n^5), \end{equation*} which finishes the proof. \end{proof} \subsection{Construction of an optimal three-point class} To increase the order of convergence, we add one Newton step to the method \eqref{a3} to get \begin{equation}\label{b1} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n-G(t_n)\cdot\dfrac{f(y_n)}{f'(x_n)},\\ x_{n+1}=z_n-\dfrac{f(z_n)}{f'(z_n)}. \end{cases} \end{equation} The method \eqref{b1} uses five function evaluations and is therefore not an optimal three-point method. We modify \eqref{b1} by approximating $f'(z_n)$ by \begin{equation*} f'(z_n) \approx \frac{f'(x_n)}{H(t_n, s_n, u_n)}, \end{equation*} using only the values $f(x_n)$, $f(y_n)$, $f(z_n)$ and $f'(x_n)$.
More precisely, we use the abbreviations $t_n=\frac{f(y_n)}{f(x_n)}$, $s_n=\frac{f(z_n)}{f(y_n)}$, $u_n=\frac{f(z_n)}{f(x_n)}$ and utilize Mathematica \cite{Hazrat} to carefully choose the weight function $H : \mathbb{R}^3\rightarrow \mathbb{R}$ from a class of admissible functions such that the scheme \begin{equation}\label{b2} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n- G(t_n) \cdot\dfrac{f(y_n)}{f'(x_n)},\\ x_{n+1}=z_n-H(t_n, s_n, u_n)\cdot\dfrac{f(z_n)}{f'(x_n)}, \end{cases} \end{equation} is of order eight. \begin{thm}\label{theorem2} Assume that $ f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ is eight times continuously differentiable and has a simple zero $x^{*}\in D$, and that $G\in C^{3}(\mathbb{R})$ and $H:\mathbb{R}^3\rightarrow \mathbb{R}$ are sufficiently often differentiable. If \begin{equation*} \begin{split} & G_{2} =10, \quad G_{3}=-36,\\ & H_{0, 0, 0}=1, \quad H_{1, 0 ,0}=2, \quad H_{0, 1, 0}=1,\\ & H_{2,0,0}=12, \quad H_{0,0,1}=4, \quad H_{1,1,0}=0, \end{split} \end{equation*} where $H_{i,j,k}=\frac{\partial^{i+j+k}H(t_n,s_n,u_n)}{\partial t_n^i\partial s_n^j\partial u_n^k}|_{(0,0,0)}$ for $i,j,k=0,1,2,3,\ldots$, and the initial point $x_{0}$ is sufficiently close to $x^{*}$, then the sequence $\{x_n\}$ defined by \eqref{b2} converges to $x^{*}$ and the order of convergence is eight. \end{thm} \begin{proof} Let $e_{n}:=x_{n}-x^{*}$, $e_{n,y}:=y_{n}-x^{*}$, $e_{n,z}:=z_{n}-x^{*}$ and $c_{n}:=\frac{f^{(n)}(x^{*})}{n!f^{'}(x^{*})}$ for $n \in \mathbb{N}$. Using the fact that $f(x^{*})=0$, the Taylor expansion of $f$ at $x^*$ yields \begin{equation}\label{b3} f(x_{n}) = f^{'}(x^*)(e_{n} + c_{2}e_{n}^{2}+c_{3}e_{n}^{3}+\ldots+c_{8}e_{n}^{8})+O(e_{n}^{9}), \end{equation} and expanding $f'$ at $x^*$ we get \begin{equation}\label{b4} f^{'}(x_{n})=f^{'}(x^{*})(1+2c_{2}e_{n}+3c_{3}e_{n}^{2}+\ldots+9c_{9}e_{n}^{8})+O(e_{n}^{9}).
\end{equation} Therefore \begin{equation*}\label{c5} \begin{split} \dfrac{f(x_{n})}{f'(x_{n})}&=e_{n}-c_{2}e_{n}^{2}+2(c_{2}^{2}-c_{3})e_{n}^{3}+(-4c_2^3+7c_2c_3-3c_4)e_n^4+(8c_2^4-20c_2^2c_3+6c_3^2\\ &+10c_2c_4-4c_5)e_n^5+(-16c_2^5+52c_2^3c_3-28c_2^2c_4+17c_3c_4+c_2(-33c_3^2+13c_5))e_n^6\\ &+2(16c_2^6-64c_2^4c_3-9c_3^3+36c_2^3c_4-46c_2c_3c_4+6c_4^2+9c_2^2(7c_3^2-2c_5)+11c_3c_5)e_n^7\\ &+(-64c_2^7+304c_2^5c_3-176c_2^4c_4+348c_2^2c_3c_4+c_4(-75c_3^2+31c_5)\\ &+c_2^3(-408c_3^2+92c_5)+c_2(135c_3^3-64c_4^2-118c_3c_5))e_n^8+O(e_n^9)\\ \end{split} \end{equation*} and hence \begin{equation*}\label{c6} \begin{split} e_{n,y}=y_n-x^*= &c_{2}e_{n}^{2}-2(c_{2}^{2}-c_{3})e_{n}^{3}+(4c_2^3-7c_2c_3+3c_4)e_n^4-(8c_2^4-20c_2^2c_3+6c_3^2+10c_2c_4-4c_5)e_n^5\\ &-(-16c_2^5+52c_2^3c_3-28c_2^2c_4+17c_3c_4+c_2(-33c_3^2+13c_5))e_n^6\\ &-2(16c_2^6-64c_2^4c_3-9c_3^3+36c_2^3c_4-46c_2c_3c_4+6c_4^2+9c_2^2(7c_3^2-2c_5)+11c_3c_5)e_n^7\\ &-(-64c_2^7+304c_2^5c_3-176c_2^4c_4+348c_2^2c_3c_4+c_4(-75c_3^2+31c_5)\\ &+c_2^3(-408c_3^2+92c_5)+c_2(135c_3^3-64c_4^2-118c_3c_5))e_n^8+O(e_n^9)\\ \end{split} \end{equation*} For $f(y_n)$, we also have \begin{equation}\label{b7} f(y_{n})=f^{'}(x^{*})\left(e_{n,y} + c_{2}e_{n,y}^{2}+c_{3}e_{n,y}^{3}+\ldots+c_8e_{n,y}^{8}\right)+O(e_{n,y}^{9}). \end{equation} According to the proof of Theorem \ref{theorem1}, we have \begin{equation*} \begin{split} e_{n,z}=z_n-x^{*}&=(-c_2c_3)e_{n}^{4}+(20c_2^4+2c_2^2c_3-2c_3^2-2c_2c_4)e_n^5+(-218c_2^5+156c_2^3c_3\\ &+3c_2^2c_4-7c_3c_4+c_2(6c_3^2-3c_5))e_n^6+2(730c_2^6-1006c_2^4c_3+2c_3^3\\ &+118c_2^3c_4+8c_2c_3c_4-3c_4^2-5c_3c_5+2c_2^2(115c_3^2-c_5))e_n^7\\ &+(-7705c_2^7+15424c_2^5c_3-2946c_2^4c_4+1393c_2^2c_3c_4+c_4(14c_3^2-17c_5)\\ &+35c_2^3(-211c_3^2+9c_5)+5c_2(121c_3^3+2c_4^2+4c_3c_5))e_n^8+O(e_n^9). \end{split} \end{equation*} Also for $f(z_n)$, we get \begin{equation}\label{b8} f(z_{n})=f^{'}(x^{*})\left(e_{n,z} + c_{2}e_{n,z}^{2}+c_{3}e_{n,z}^{3}+\ldots+c_8e_{n,z}^{8}\right)+O(e_{n,z}^{9}). 
\end{equation} By (\ref{b3}) and (\ref{b7}), we obtain \begin{equation}\label{c8} \begin{split} t_n=\frac{f(y_n)}{f(x_n)}&=c_2 e_n+(-3c_2^2+2c_3)e_n^2+(8c_2^3-10c_2c_3+3c_4)e_n^3+(-20c_2^4+37c_2^2c_3\\ &-8c_3^2-14c_2c_4+4c_5)e_n^4+(48c_2^5-118c_2^3c_3+51c_2^2c_4-22c_3c_4+c_2(55c_3^2\\ &-18c_5))e_n^5+(-112c_2^6+344c_2^4c_3+26c_3^3-163c_2^3c_4+150c_2c_3c_4-15c_4^2\\ &-28c_3c_5+c_2^2(-252c_3^2+65c_5))e_n^6+O(e_n^7). \end{split} \end{equation} \\ By (\ref{b7}) and (\ref{b8}), we obtain \begin{equation}\label{c9} \begin{split} s_n=\frac{f(z_n)}{f(y_n)}&=-c_3e_n^2+(20c_2^3-2c_4)e_n^3+(-178c_2^4+121c_2^2c_3-c_3^2-c_2c_4-3c_5)e_n^4+(1004c_2^5\\ &-1286c_2^3c_3+184c_2^2c_4-6c_3c_4+c_2(240c_3^2-2c_5))e_n^5+O(e_n^6). \end{split} \end{equation} \\ By (\ref{b3}) and (\ref{b8}), we get \begin{equation}\label{c10} \begin{split} u_n=\frac{f(z_n)}{f(x_n)}&=-c_2c_3e_n^3+(20c_2^4+3c_2^2c_3-2c_3^2-2c_2c_4)e_n^4+(-238c_2^5+153c_2^3c_3+5c_2^2c_4\\ &-7c_3c_4+c_2(9c_3^2-3c_5))e_n^5+O(e_n^6). \end{split} \end{equation} Expanding $H$ at $(0,0,0)$ yields \begin{equation}\label{b12} \begin{split} H(t_n,s_n,u_n)&=H_{0,0,0}+t_nH_{1,0,0}+s_nH_{0,1,0}+u_nH_{0,0,1}+t_ns_nH_{1,1,0}+t_nu_nH_{1,0,1}\\ &+s_nu_nH_{0,1,1}+t_ns_nu_nH_{1,1,1}+\frac{t_n^2}{2}H_{2,0,0}+O(t_n^3, s_n^2, u_n^2). \end{split} \end{equation} By substituting (\ref{b3})-(\ref{b12}) into (\ref{b2}), we obtain \begin{equation*} e_{n+1}=x_{n+1}-x^*=R_4e_n^4+R_5e_n^5+R_6e_n^6+R_7e_n^7+R_8e_n^8+O(e_n^9), \end{equation*} where \begin{equation*} \begin{split} R_4&=\frac{1}{2}c_2\left(2c_3+c_2^2(-10+G_2)\right)(-1+H_{0,0,0}),\\ R_5&=c_2^2c_3\left(-2+H_{1,0,0}\right),\\ R_6&=\frac{1}{2}c_2c_3\left(-2c_3\left(-1+H_{0,1,0}\right)+c_2^2\left(-12+H_{2,0,0}\right)\right),\\ R_7&=\frac{-1}{6}c_2^2c_3\left(c_2^2\left(36+G_3\right)+6c_3\left(-4+H_{0,0,1}+H_{1,1,0}\right)\right),\\ R_8&=c_2c_3\left(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4\right).
\end{split} \end{equation*} In general $R_{8}\neq 0$; by setting $R_4 = R_5 =R_6 = R_7 = 0$, the convergence order becomes eight. Sufficient conditions are given by the following set of equations \begin{equation*} \begin{array}{lrl} H_{0,0,0}=1, \quad G_2=10, & \quad \Rightarrow & R_4 = 0 , \\[1ex] H_{1,0,0}=2, & \Rightarrow & R_5=0 , \\[1ex] H_{0,1,0}=1, \quad H_{2,0,0}=12, & \quad \Rightarrow & R_6 = 0 , \\[1ex] G_3=-36, \quad H_{0,0,1}=4, \quad H_{1,1,0}=0, \quad & \Rightarrow & R_7=0 , \end{array} \end{equation*} and the error equation becomes \begin{equation*} e_{n+1}=R_{8}e_n^8+O(e_n^9), \end{equation*} which finishes the proof. \end{proof} \subsection{Main contribution:\\ Construction of an optimal four-point class} This section contains the main contribution. To increase the order of convergence, we add one Newton step to the method \eqref{b2} to get \begin{equation}\label{c1} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n-G(t_n)\cdot\dfrac{f(y_n)}{f'(x_n)},\\ w_{n}=z_n-H(t_n,s_n,u_n)\cdot\dfrac{f(z_n)}{f'(x_n)},\\ x_{n+1}=w_n-\dfrac{f(w_n)}{f'(w_n)}. \end{cases} \end{equation} The method \eqref{c1} uses six function evaluations and is therefore not an optimal four-point method. We modify \eqref{c1} by approximating $f'(w_n)$ by \begin{equation*} f'(w_n) \approx \frac{f'(x_n)}{I(t_n)+J(s_n)+K(u_n)+L(t_n,u_n)+M(p_n,q_n,r_n)+N(t_n,s_n,u_n,r_n)}, \end{equation*} using only the values $f(x_n)$, $f(y_n)$, $f(z_n)$, $f(w_n)$ and $f'(x_n)$.
More precisely, we use the abbreviations $t_n=\frac{f(y_n)}{f(x_n)}$, $s_n=\frac{f(z_n)}{f(y_n)}$, $u_n=\frac{f(z_n)}{f(x_n)}$, $p_n=\frac{f(w_n)}{f(x_n)}$, $q_n=\frac{f(w_n)}{f(y_n)}$, $r_n=\frac{f(w_n)}{f(z_n)}$ and utilize Mathematica \cite{Hazrat} to carefully choose the weight functions $I,J,K:\mathbb{R}\rightarrow \mathbb{R}$, $L:\mathbb{R}^2\rightarrow\mathbb{R}$, $M:\mathbb{R}^3\rightarrow\mathbb{R}$ and $N:\mathbb{R}^4 \rightarrow \mathbb{R}$ from a class of admissible functions such that the scheme \begin{equation}\label{c2} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n- G(t_n) \cdot\dfrac{f(y_n)}{f'(x_n)},\\ w_{n}=z_n-H(t_n, s_n, u_n)\cdot\dfrac{f(z_n)}{f'(x_n)},\\ x_{n+1}=w_n-\left(I(t_n)+J(s_n)+K(u_n)+L(t_n,u_n)+M(p_n,q_n,r_n)+N(t_n,s_n,u_n,r_n)\right)\cdot\dfrac{f(w_n)}{f'(x_n)}, \end{cases} \end{equation} is of order $16$. \begin{thm}\label{theorem3} Assume that $f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ is $16$ times continuously differentiable and has a simple zero $x^{*}\in D$ and let $I,J,K:\mathbb{R}\rightarrow \mathbb{R}$, $L:\mathbb{R}^2\rightarrow\mathbb{R}$, $M:\mathbb{R}^3\rightarrow\mathbb{R}$ and $N:\mathbb{R}^4 \rightarrow \mathbb{R}$ be sufficiently differentiable functions. 
If \begin{equation*} \begin{split} &I_{0}=0,\quad I_{1}=2,\quad I_{2}=12,\\ &J_{0}=0,\quad J_{1}=1,\quad J_{2}=0,\quad J_{3}=-6,\\ &K_{0}=1,\quad K_{1}=4,\quad K_{2}=-8,\\ &L_{0,0}=0,\quad L_{1,0}=0,\quad L_{1,1}=1,\quad L_{2,0}=0,\quad L_{0,1}=0,\\ &L_{3,0}=0, \quad L_{2,1}=12, \quad L_{3,1}=12, \quad L_{0,2}=0, \quad L_{1,2}=-20,\\ &H_{0,1,1}=0, \quad H_{1,1,1}=0,\\ &M_{0,0,0}=0,\quad M_{1,0,0}=8, \quad M_{0,1,0}=2, \quad M_{0,0,1}=1,\\ &N_{0,0,0,0}=N_{1,0,0,0}=N_{0,1,0,0}=N_{0,0,1,0}=N_{0,0,0,1}=N_{2,0,0,0}=N_{1,1,0,0}\\ &\quad \quad \quad =N_{0,2,0,0}=N_{1,0,1,0}=N_{2,1,0,0}=N_{3,1,0,0}=N_{2,2,0,0}=N_{0,1,0,1}\\ &\quad \quad \quad =N_{3,0,0,0}=N_{4,0,0,0}=N_{3,0,1,0}=N_{1,1,0,1}=N_{2,1,1,0}=N_{3,0,0,1}\\ &\quad \quad \quad =N_{0,0,1,1}=N_{3,2,0,0}=N_{4,1,0,0}=N_{1,2,0,0}=N_{2,0,1,0}=N_{1,1,1,0}=0,\\ & N_{1,0,0,1}=2,\quad N_{2,0,0,1}=12, \quad N_{0,2,1,0}=-8,\quad N_{4,0,1,0}=576, \quad N_{0,1,1,0}=2,\\ \end{split} \end{equation*} where $I_{i}=\frac{d^{i}I(t_n)}{d t_n^i}|_{0}$, $J_{i}=\frac{d^{i}J(s_n)}{d s_n^i}|_{0}$ and $K_{i}=\frac{d^{i}K(u_n)}{d u_n^i}|_{0}$, $L_{i,j}=\frac{\partial^{i+j}L(t_n,u_n)}{\partial t_n^i\partial u_n^j}|_{(0,0)}$, $M_{i,j,k}=\frac{\partial^{i+j+k}M(p_n,q_n,r_n)}{\partial p_n^i\partial q_n^j\partial r_n^k}|_{(0,0,0)}$ and $N_{i,j,k,l}=\frac{\partial^{i+j+k+l}N(t_n,s_n,u_n,r_n)}{\partial t_n^i\partial s_n^j\partial u_n^k\partial r_n^l}|_{(0,0,0,0)}$ for $i,j,k,l=0,1,2,3,\ldots$, and the initial point $x_{0}$ is sufficiently close to $x^{*}$, then the sequence $\{x_n\}$ defined by \eqref{c2} converges to $x^{*}$ and the order of convergence is $16$. \end{thm} \begin{proof} Let $e_{n}:=x_{n}-x^{*}$, $e_{n,y}:=y_{n}-x^{*}$, $e_{n,z}:=z_{n}-x^{*}$, $e_{n,w}:=w_{n}-x^{*}$ and $c_{n}:=\frac{f^{(n)}(x^{*})}{n!f^{'}(x^{*})}$ for $n \in \mathbb{N}$.
Using the fact that $f(x^{*})=0$, Taylor's expansion of $f$ at $x^*$ yields \begin{equation}\label{c3} f(x_{n})=f^{'}(x^*)(e_{n} + c_{2}e_{n}^{2}+c_{3}e_{n}^{3}+\ldots+c_{16}e_{n}^{16})+O(e_{n}^{17}), \end{equation} and expanding $f'$ at $x^*$ we get \begin{equation}\label{c4} f^{'}(x_{n})=f^{'}(x^{*})(1+2c_{2}e_{n}+3c_{3}e_{n}^{2}+\ldots+17c_{17}e_{n}^{16})+O(e_{n}^{17}). \end{equation} Therefore \begin{equation*}\label{c5} \begin{split} \dfrac{f(x_{n})}{f'(x_{n})}&=e_{n}-c_{2}e_{n}^{2}+2(c_{2}^{2}-c_{3})e_{n}^{3}+(-4c_2^3+7c_2c_3-3c_4)e_n^4+(8c_2^4-20c_2^2c_3+6c_3^2\\ &+10c_2c_4-4c_5)e_n^5+2(-16c_2^5+52c_2^3c_3-28c_2^2c_4+17c_3c_4+c_2(-33c_3^2+13c_5))e_n^6\\ &+\ldots+(-64c_2^7+304c_2^5c_3-176c_2^4c_4+348c_2^2c_3c_4+c_2^3(-408c_3^2+92c_5)\\ &+c_4(-75c_3^2+31c_5)+c_2(135c_3^3-64c_4^2-118c_3c_5))e_n^8+O(e_n^9), \end{split} \end{equation*} so, \begin{equation*}\label{c6} \begin{split} e_{n,y}= y_n-x^*&=c_{2}e_{n}^{2}+(-2c_2^2+2c_3)e_n^3+(4c_2^3-7c_2c_3+3c_4)e_n^4+(-8c_2^4+20c_2^2c_3-6c_3^2\\ &-10c_2c_4+4c_5)e_n^5+2(16c_2^5-52c_2^3c_3+28c_2^2c_4-17c_3c_4+c_2(33c_3^2-13c_5))e_n^6\\ &+\ldots+(64c_2^7-304c_2^5c_3+176c_2^4c_4-348c_2^2c_3c_4+c_2^3(408c_3^2-92c_5)\\ &+c_4(75c_3^2-31c_5)+c_2(-135c_3^3+64c_4^2+118c_3c_5))e_n^8+O(e_n^9). \end{split} \end{equation*} For $f(y_n)$, we also have \begin{equation}\label{c7} f(y_{n})=f^{'}(x^{*})\left(e_{n,y} + c_{2}e_{n,y}^{2}+c_{3}e_{n,y}^{3}+\ldots+c_{16}e_{n,y}^{16}\right)+O(e_{n,y}^{17}). \end{equation} According to the proof of Theorem \ref{theorem2}, we have \begin{equation*} \begin{split} e_{n,z}=z_n-x^{*}&=(-c_2c_3)e_{n}^{4}+(20c_2^4+2c_2^2c_3-2c_3^2-2c_2c_4)e_n^5+(-218c_2^5+156c_2^3c_3\\ &+3c_2^2c_4-7c_3c_4+c_2(6c_3^2-3c_5))e_n^6+2(730c_2^6-1006c_2^4c_3+2c_3^3\\ &+118c_2^3c_4+8c_2c_3c_4-3c_4^2-5c_3c_5+2c_2^2(115c_3^2-c_5))e_n^7\\ &+(-7705c_2^7+15424c_2^5c_3-2946c_2^4c_4+1393c_2^2c_3c_4+c_4(14c_3^2-17c_5)\\ &+35c_2^3(-211c_3^2+9c_5)+5c_2(121c_3^3+2c_4^2+4c_3c_5))e_n^8+O(e_n^9). 
\end{split} \end{equation*} For $f(z_n)$, we also get \begin{equation}\label{cc7} f(z_{n})=f^{'}(x^{*})\left(e_{n,z} + c_{2}e_{n,z}^{2}+c_{3}e_{n,z}^{3}+\ldots+c_{16}e_{n,z}^{16}\right)+O(e_{n,z}^{17}). \end{equation} By (\ref{c3}) and (\ref{c7}), we obtain \begin{equation}\label{c8} \begin{split} t_n=\frac{f(y_n)}{f(x_n)}&=c_2 e_n+(-3c_2^2+2c_3)e_n^2+(8c_2^3-10c_2c_3+3c_4)e_n^3+(-20c_2^4+37c_2^2c_3\\ &-8c_3^2-14c_2c_4+4c_5)e_n^4+(48c_2^5-118c_2^3c_3+51c_2^2c_4-22c_3c_4+c_2(55c_3^2\\ &-18c_5))e_n^5+(-112c_2^6+344c_2^4c_3+26c_3^3-163c_2^3c_4+150c_2c_3c_4-15c_4^2\\ &-28c_3c_5+c_2^2(-252c_3^2+65c_5))e_n^6+(256c_2^7-944c_2^5c_3+480c_2^4c_4-693c_2^2c_3c_4\\ &+c_2^3(952c_3^2-207c_5)+c_4(105c_3^2-38c_5)+2c_2(-114c_3^3+51c_4^2+95c_3c_5))e_n^7\\ &+(-576c_2^8+2480c_2^6c_3-1336c_2^5c_4+2660c_2^3c_3c_4+6c_2c_4(-156c_3^2+43c_5)\\ &+c_2^4(-3200c_3^2+607c_5)+3c_2^2(418c_3^3-159c_4^2-292c_3c_5)+3(-24c_3^4+47c_3c_4^2\\ &+44c_3^2c_5-8c_5^2))e_n^8+O(e_n^9). \end{split} \end{equation} By (\ref{c7}) and (\ref{cc7}), we obtain \begin{equation}\label{c9} \begin{split} s_n=\frac{f(z_n)}{f(y_n)}&=-c_3e_n^2+(20c_2^3-2c_4)e_n^3+(-178c_2^4+121c_2^2c_3-c_3^2-c_2c_4-3c_5)e_n^4+(1004c_2^5\\ &-1286c_2^3c_3+184c_2^2c_4-6c_3c_4+c_2(240c_3^2-2c_5))e_n^5+(-4567c_2^6+8541c_2^4c_3\\ &+155c_3^3+1863c_2^3c_4+725c_2c_3c_4-7c_4^2-10c_3c_5+\frac{1}{2}c_2^2(-6866c_3^2+492c_5))e_n^6\\ &+\frac{1}{6}(109500c_2^7-269472c_2^5c_3+72756c_2^4c_4-59376c_2^2c_3c_4+c_2^3(172248c_3^2-14616c_5)\\ &+12c_4(348c_3^2-11c_5)+c_2(-24048c_3^3+3276c_4^2+5796c_3c_5))e_n^7+(-66359c_2^8\\ &+204034c_2^6c_3-1725c_3^4-63159c_2^5c_4+81203c_2^3c_3c_4+1038c_3c_4^2+923c_3^2c_5-17c_5^2\\ &+c_2c_4(-17234c_3^2+1453c_5)+c_2^4(-181979c_3^2+15700c_5)+2c_2^2(23804c_3^3-3559c_4^2\\ &-6452c_3c_5))e_n^8+O(e_n^9).
\end{split} \end{equation} By (\ref{c3}) and (\ref{cc7}), we have \begin{equation}\label{c10} \begin{split} u_n=\frac{f(z_n)}{f(x_n)}&=-c_2c_3e_n^3+(20c_2^4+3c_2^2c_3-2c_3^2-2c_2c_4)e_n^4+(-238c_2^5+153c_2^3c_3+5c_2^2c_4\\ &-7c_3c_4+c_2(9c_3^2-3c_5))e_n^5+(1698c_2^6-2185c_2^4c_3+6c_3^3+231c_2^3c_4+26c_2c_3c_4\\ &-6c_4^2-10c_3c_5+7c_2^2(64c_3^2+c_5))e_n^6+(-9403c_2^7+17847c_2^5c_3-3197c_2^4c_4\\ &+1359c_2^2c_3c_4+c_4(23c_3^2-17c_5)+c_2^3(-7985c_3^2+308c_5)+2c_2(295c_3^3+9c_4^2\\ &+17c_3c_5))e_n^7+(44503c_2^8-111251c_2^6c_3+292c_3^4+25635c_2^5c_4-23315c_2^3c_3c_4\\ &+27c_3c_4^2+382c_2^4(203c_3^2-11c_5)+28c_3^2c_5-12c_5^2+2c_2c_4(1344c_3^2+23c_5)\\ &+c_2^2(-14492c_3^3+1031c_4^2+1816c_3c_5))e_n^8+O(e_n^9), \end{split} \end{equation} According to the proof of Theorem \ref{theorem2}, we have \begin{equation}\label{cc6} \begin{split} e_{n,w}= w_n-x^*&=c_2c_3(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4)e_n^8+2(120c_2^8+344c_2^6c_3-37c_2^4c_3^2+c_3^4\\ &-22c_2^5c_4-30c_2^3c_3c_4+5c_2c_3^2c_4+c_2^2(-40c_3^3+c_4^2+c_3c_5))e_n^9+(10296c_2^9+5059c_2^7c_3\\ &-1622c_2^6c_4+452c_2^4c_3c_4-19c_3^3c_4+c_2^2c_4(456c_3^2-7c_5)+c_2^3(236c_3^3+63c_4^2\\ &+91c_3c_5))e_n^{10}+O(e_n^{11}). \end{split} \end{equation} For $f(w_n)$, we also obtain \begin{equation}\label{ccc7} f(w_{n})=f^{'}(x^{*})\left(e_{n,w} + c_{2}e_{n,w}^{2}+c_{3}e_{n,w}^{3}+\ldots+c_{16}e_{n,w}^{16}\right)+O(e_{n,w}^{17}). \end{equation} By (\ref{c3}) and (\ref{ccc7}), we have \begin{equation}\label{c11} \begin{split} p_n=\frac{f(w_n)}{f(x_n)}&=c_2c_3(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4)e_n^7+(-240c_2^8-700c_2^6c_3+60c_2^4c_3^2-2c_3^4\\ &+44c_2^5c_4+61c_2^3c_3c_4-10c_2c_3^2c_4+c_2^2(81c_3^3-2c_4^2-2c_3c_5))e_n^8+(10536c_2^9\\ &+5759c_2^7c_3-1666c_2^6c_4+391c_2^4c_3c_4-19c_3^3c_4+c_2^2c_4(467c_3^2-7c_5)+c_2^5(-8121c_3^2\\ &+76c_5)+c_2c_3(151c_3^3-26c_4^2-17c_3c_5)+c_2^3(141c_3^3+65c_4^2+93c_3c_5))e_n^9+O(e_n^{10}). 
\end{split} \end{equation} By (\ref{c7}) and (\ref{ccc7}), we get \begin{equation}\label{c12} \begin{split} q_n=\frac{f(w_n)}{f(y_n)}&=-c_3(-12c_2^4-14c_2^2c_3+c_3^2+c_2c_4)e_n^6-2(120c_2^7+332c_2^5c_3-39c_2^3c_3^2-22c_2^4c_4\\ &-29c_2^2c_3c_4+4c_3^2c_4+c_2(-25c_3^3+c_4^2+c_3c_5))e_n^7+(9816c_2^8+4151c_2^6c_3-1534c_2^5c_4\\ &+449c_2^3c_3c_4+c_2c_4(275c_3^2-7c_5)+c_2^4(-6551c_3^2+76c_5)+c_3(41c_3^3-19c_4^2-13c_3c_5)\\ &+c_2^2(288c_3^3+59c_4^2+87c_3c_5))e_n^8+O(e_n^9). \end{split} \end{equation} By (\ref{cc7}) and (\ref{ccc7}), we obtain \begin{equation}\label{c13} \begin{split} r_n=\frac{f(w_n)}{f(z_n)}&=(-12c_2^4-14c_2^2c_3+c_3^2+c_2c_4)e_n^4+2(384c_2^5-58c_2^3c_3c_3-30c_2^2c_4+6c_3c_4\\ &+2c_2(-25c_3^2+c_5))e_n^5+(-4271c_2^6+3691c_2^4c_3-42c_3^3-78c_2^3c_4-177c_2c_3c_4+7c_4^2\\ &+10c_3c_5-c_2^2(148c_3^2+45c_5))e_n^6+2(15680c_2^7-23481c_2^5c_3+2760c_2^4c_4-237c_2^2c_3c_4\\ &+c_2^3(7182c_3^2-51c_5)+c_4(-102c_3^2+11c_5)-c_2(126c_3^3+75c_4^2+125c_3c_5))e_n^7\\ &+(-182548c_2^8+384369c_2^6c_3-191c_3^4-68503c_2^5c_4+43033c_2^3c_3c_4-321c_3c_4^2\\ &-278c_3^2c_5+17c_5^2-c_2c_4(1316c_3^2+417c_5)+c_2^4(-213934c_3^2+7347c_5)\\ &+c_2^2(28040c_3^3-398c_4^2-673c_3c_5)e_n^8+O(e_n^9). 
\end{split} \end{equation} Expanding $I,J, K, L,M$ and $N$ at $0$ in $\mathbb{R}, \mathbb{R}^2, \mathbb{R}^3$ and $\mathbb{R}^4$, respectively, yield \begin{equation}\label{c14} I(t_n)=I_{0}+t_n I_{1}+ \frac{t_n^2}{2}I_{2}+O(t_n^3), \end{equation} \begin{equation}\label{c15} J(s_n)=J_{0}+s_n J_{1}+ \frac{s_n^2}{2}J_{2}+\frac{s_n^3}{6}J_{3}+O(s_n^4), \end{equation} \begin{equation}\label{c16} K(u_n)=K_{0}+u_nK_{1}+\frac{u_n^2}{2}K_{2}+ O(u_n^3), \end{equation} \begin{equation}\label{c17} \begin{split} L(t_n,u_n)&=L_{0,0}+t_nL_{1,0}+u_nL_{0,1}+t_nu_nL_{1,1}+\frac{t_n^2}{2}L_{2,0}+\frac{u_n^2}{2}L_{0,2}+\frac{t_n u_n^2}{2}L_{1,2}\\ &+\frac{t_n^2u_n}{2}L_{2,1}+\frac{t_n^2u_n^2}{4}L_{2,2}+\frac{t_n^3}{6}L_{3,0}+\frac{t_n^3u_n}{6}L_{3,1}+\frac{t_n^3u_n^2}{12}L_{3,2}+O(t_n^4,u_n^3), \end{split} \end{equation} \begin{equation}\label{c18} \begin{split} M(p_n,q_n,r_n)&=M_{0,0,0}+p_nM_{1,0,0}+q_nM_{0,1,0}+r_nM_{0,0,1}+p_nq_nM_{1,1,0}+p_nr_nM_{1,0,1}+p_nq_nr_nM_{1,1,1}\\ &+q_nr_nM_{0,1,1}+\frac{p_n^2}{2}M_{2,0,0}+\frac{q_n^2}{2}M_{0,2,0}+\frac{r_n^2}{2}M_{0,0,2}+\frac{p_nr_n^2}{2}M_{1,0,2}+\frac{p_n^2r_n}{2}M_{2,0,1}\\ &+\frac{p_n^2q_n}{2}M_{2,1,0}+\frac{p_n q_n^2}{2}M_{1,2,0}+\frac{p_n r_n^2}{2}M_{1,0,2}+\frac{q_nr_n^2}{2}M_{0,1,2}+\frac{q_n^2r_n}{2}M_{0,2,1}\\ &+\frac{p_nq_nr_n^2}{2}M_{1,1,2}+\frac{p_nq_n^2r_n}{2}M_{1,2,1}+\frac{p_n^2q_nr_n}{2}M_{2,1,1}+\frac{q_n^2r_n^2}{4}M_{0,2,2}+\frac{p_nq_n^2r_n^2}{4}M_{1,2,2}\\ &+\frac{p_n^2r_n^2}{4}M_{2,0,2}+\frac{p_n^2q_nr_n^2}{4}M_{2,1,2}+\frac{p_n^2q_n^2}{4}M_{2,2,0}+\frac{p_n^2q_n^2r_n}{4}M_{2,2,1}+\frac{p_n^2q_n^2r_n^2}{8}M_{2,2,2}\\ &+O(p_n^3,q_n^3,r_n^3),\\ \end{split} \end{equation} and, \begin{equation}\label{c19} \begin{split} N(t_n,s_n,u_n,r_n)&=N_{0,0,0,0}+t_nN_{1,0,0,0}+s_nN_{0,1,0,0}+u_nN_{0,0,1,0}+r_nN_{0,0,0,1}+t_ns_nN_{1,1,0,0}\\ &+t_nu_nN_{1,0,1,0}+t_nr_nN_{1,0,0,1}+u_nr_nN_{0,0,1,1}+s_nu_nN_{0,1,1,0}+s_nr_nN_{0,1,0,1}\\ &+t_ns_nu_nN_{1,1,1,0}+t_ns_nr_nN_{1,1,0,1}+s_nu_nr_nN_{0,1,1,1}+t_nu_nr_nN_{1,0,1,1}\\ 
&+t_ns_nu_nr_nN_{1,1,1,1}+\frac{t_n^2}{2}N_{2,0,0,0}+\frac{s_n^2}{2}N_{0,2,0,0}+\frac{s_n^2r_n}{2}N_{0,2,0,1}+\frac{s_n^2u_n}{2}N_{0,2,1,0}\\ &+\frac{s_n^2u_nr_n}{2}N_{0,2,1,1}+\frac{t_ns_n^2}{2}N_{1,2,0,0}+\frac{t_ns_n^2r_n}{2}N_{1,2,0,1}+\frac{t_ns_n^2u_n}{2}N_{1,2,1,0}\\ &+\frac{t_n^2r_n}{2}N_{2,0,0,1}+\frac{t_n^2u_n}{2}N_{2,0,1,0}+\frac{t_n^2s_n}{2}N_{2,1,0,0}+\frac{t_n^2u_nr_n}{2}N_{2,0,1,1}\\ &+\frac{t_n^2s_nr_n}{2}N_{2,1,0,1}+\frac{t_n^2s_nu_n}{2}N_{2,1,1,0}+\frac{t_n^2s_nu_nr_n}{2}N_{2,1,1,1}\\ &+\frac{t_n^2s_n^2}{4}N_{2,2,0,0}+\frac{t_n^2s_n^2r_n}{4}N_{2,2,0,1}+\frac{t_n^2s_n^2u_n}{4}N_{2,2,1,0}+\frac{t_n^2s_n^2u_nr_n}{4}N_{2,2,1,1}\\ &+\frac{t_n^3}{6}N_{3,0,0,0}+\frac{t_n^3r_n}{6}N_{3,0,0,1}+\frac{t_n^3u_n}{6}N_{3,0,1,0}+\frac{t_n^3u_nr_n}{6}N_{3,0,1,1}\\ &+\frac{t_n^3s_n}{6}N_{3,1,0,0}+\frac{t_n^3s_nr_n}{6}N_{3,1,0,1}+\frac{t_n^3s_nu_n}{6}N_{3,1,1,0}+\frac{t_n^3s_nu_nr_n}{6}N_{3,1,1,1}\\ &+\frac{t_n^3s_n^2}{12}N_{3,2,0,0}+\frac{t_n^3s_n^2r_n}{12}N_{3,2,0,1}+\frac{t_n^3s_n^2u_n}{12}N_{3,2,1,0}+\frac{t_n^3s_n^2u_nr_n}{12}N_{3,2,1,1}\\ &+\frac{t_n^4}{24}N_{4,0,0,0}+\frac{t_n^4r_n}{24}N_{4,0,0,1}+\frac{t_n^4u_n}{24}N_{4,0,1,0}+\frac{t_n^4s_n}{24}N_{4,1,0,0}\\ &+\frac{t_n^4s_nr_n}{24}N_{4,1,0,1}+\frac{t_n^4s_nu_n}{24}N_{4,1,1,0}+\frac{t_n^4s_nu_nr_n}{24}N_{4,1,1,1}+\frac{t_n^4s_n^2}{48}N_{4,2,0,0}\\ &+\frac{t_n^4s_n^2r_n}{48}N_{4,2,0,1}+\frac{t_n^4s_n^2u_n}{48}N_{4,2,1,0}+\frac{t_n^4s_n^2u_nr_n}{48}N_{4,2,1,1}+O(t_n^5,s_n^3,u_n^2,r_n^2).\\ \end{split} \end{equation} By substituting (\ref{c3})-(\ref{c19}) into (\ref{c2}), we obtain \begin{equation*} \begin{split} e_{n+1}=x_{n+1}-x^*&=R_8e_n^8+R_9e_n^9+R_{10}e_n^{10}+R_{11}e_n^{11}+R_{12}e_n^{12}\\ &+R_{13}e_n^{13}+R_{14}e_n^{14}+R_{15}e_n^{15}+R_{16}e_n^{16}+O(e_n^{17}), \end{split} \end{equation*} where \begin{equation*} \begin{split}
R_8&=-c_2c_3\left(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4\right)\left(-1+I_{0}+J_{0}+K_{0}+L_{0,0}+M_{0,0,0}+N_{0,0,0,0}\right)\\ \\ R_9&=c_2^2c_3\left(-12c_2^4-14c_2^2c_3+c_3^2+c_2c_4\right)\left(-2+I_{1}+L_{1,0}+N_{1,0,0,0}\right),\\ \\ R_{10}&=\frac{-1}{2}c_2c_3\left(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4\right)\\ &\big(-2c_3\left(-1+J_{1}+N_{0,1,0,0}\right)+c_2^2(-12+I_{2}+L_{2,0}+N_{2,0,0,0})\big),\\ \end{split} \end{equation*} \begin{equation*} \begin{split} R_{11}&=\frac{-1}{6}c_2^2c_3\left(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4\right)\big(-6c_3\left(-4+K_1+L_{0,1}+N_{0,0,1,0}+N_{1,1,0,0}\right)\\ &+c_2^2\left(L_{3,0}+N_{3,0,0,0}\right)\big),\\ \\ R_{12}&=\frac{1}{24}c_2c_3\left(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4\right)\\ &(-24c_2c_4(-1+M_{0,0,1}+N_{0,0,0,1})-12c_3^2(J_2+2(-1+M_{0,0,1}+N_{0,0,0,1})+N_{0,2,0,0})\\ &+12c_2^2c_3(2(-15+L_{1,1}+14M_{0,0,1}+14N_{0,0,0,1}+N_{1,0,1,0})+N_{2,1,0,0})\\ &+c_2^4(288(-1+M_{0,0,1}+N_{0,0,0,1})-N_{4,0,0,0})),\\ \\ R_{13}&=\frac{1}{6}c_2^2c_3(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4)\\ &(72c_2^4(-2+N_{1,0,0,1})-6c_2c_4(-2+N_{1,0,0,1})+3c_3^2(2H_{0,1,1}-2(-4+N_{0,1,1,0}+N_{1,0,0,1})-N_{1,2,0,0})\\ &+c_2^2c_3(3(-68+L_{2,1}+28N_{1,0,0,1}+N_{2,0,1,0})+N_{3,1,0,0})),\\ \end{split} \end{equation*} \begin{equation*} \begin{split} R_{14}&=\frac{1}{24}c_2c_3(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4)\\ &(24c_2c_3c_4(-2+M_{0,1,0}+N_{0,1,0,1})+4c_3^3(J_3+6(-1+M_{0,1,0}+N_{0,1,0,1}))\\ &+144c_2^6(-12+N_{2,0,0,1})-12c_2^3c_4(-12+N_{2,0,0,1})\\ &-6c_2^2c_3^2(2(-60+K_2+L_{0,2}-2H_{1,1,1}+28M_{0,1,0}+28N_{0,1,0,1}+2N_{1,1,1,0}+N_{2,0,0,1})+N_{2,2,0,0})\\ &+c_2^4c_3(4(-372+L_{3,1}-72M_{0,1,0}-72N_{0,1,0,1}+42N_{2,0,0,1}+N_{3,0,1,0})+N_{4,1,0,0})),\\ \end{split} \end{equation*} \begin{equation*} \begin{split} R_{15}&=\frac{1}{24}c_2^2c_3(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4)\\ &(24c_2c_3c_4(-8+M_{1,0,0}+N_{0,0,1,1}+N_{1,1,0,1})+12c_3^3(-8+2M_{1,0,0}+2N_{0,0,1,1}+N_{0,2,1,0}+2N_{1,1,0,1})\\ 
&+48c_2^6N_{3,0,0,1}-4c_2^3c_4N_{3,0,0,1}-2c_2^2c_3^2(6(L_{1,2}+4(-51+7M_{1,0,0}+7N_{0,0,1,1}+7N_{1,1,0,1})+N_{2,1,1,0}\\ &+2N_{3,0,0,1}+N_{3,2,0,0}))+c_2^4c_3(-288(-6+M_{1,0,0}+N_{0,0,1,1}+N_{1,1,0,1})+56N_{3,0,0,1}+N_{4,0,1,0})),\\ \end{split} \end{equation*} \begin{equation*} \begin{split} R_{16}&=\frac{-1}{48}c_2c_3(12c_2^4+14c_2^2c_3-c_3^2-c_2c_4)\\ &(24c_3^4(-2+M_{0,0,2}+N_{0,2,0,1})+24c_2c_3^2c_4(-6+2M_{0,0,2}+N_{0,2,0,1})\\ &-24c_2^3c_3c_4(-88+28M_{0,0,2}+2N_{1,0,1,1}+N_{2,1,0,1})-24c_2^2(2c_3c_5-c_4^2(-2+M_{0,0,2})\\ &+c_3^3(-86+28M_{0,0,2}+14N_{0,2,0,1}+2N_{1,0,1,1}+N_{1,2,1,0}+6N_{1,2,1,0}+N_{2,1,0,1}))\\ &+4c_2^6c_3(36(-145+56M_{0,0,2}+4N_{1,0,1,1}+2N_{2,1,0,1})-7N_{4,0,0,1})\\ &+24c_2^8(144(-2+M_{0,0,2})-N_{4,0,0,1})+2c_2^5c_4(576-288M_{0,0,2}+N_{4,0,0,1})+c_2^4c_3^2\\ &(2(6L_{2,2}+4(6(86M_{0,0,2}-6N_{0,2,0,1}+7(-45+2N_{1,0,1,1}+N_{2,1,0,1}))+N_{3,1,1,0})+N_{4,0,0,1})+N_{4,2,0,0})). \end{split} \end{equation*} In general $R_{16}\neq 0$; however, by setting $R_8 = R_9 = R_{10} = R_{11} = R_{12} = R_{13} = R_{14} = R_{15} = 0$, the convergence order becomes $16$.
Sufficient conditions are given by the following set of equations \begin{equation*} \begin{array}{lrl} K_0=1, \quad I_0=J_0=L_{0,0}=M_{0,0,0}=N_{0,0,0,0}=0, & \Rightarrow & R_8 = 0, \\[4ex] I_1=2, \quad L_{1,0}=N_{1,0,0,0}=0, & \Rightarrow & R_9=0, \\[4ex] I_2=12, \quad J_1=1, \quad L_{2,0}=N_{0,1,0,0}=N_{2,0,0,0}=0, & \Rightarrow & R_{10} = 0, \\[4ex] K_1=4, \quad L_{0,1}=L_{3,0}=N_{0,0,1,0}=N_{1,1,0,0}=N_{3,0,0,0}=0, & \Rightarrow & R_{11}=0, \\[4ex] L_{1,1}=1, \quad M_{0,0,1}=1, \quad J_2=N_{0,0,0,1}=N_{0,2,0,0}=N_{4,0,0,0}=N_{1,0,1,0}=N_{2,1,0,0}=0, & \Rightarrow & R_{12}=0, \\[4ex] L_{2,1}=12, \quad N_{1,0,0,1}=N_{0,1,1,0}=2, \quad H_{0,1,1}=N_{1,2,0,0}=N_{2,0,1,0}=N_{3,1,0,0}=0, & \Rightarrow & R_{13}=0, \\[4ex] J_3=-6, \quad K_2=-8,\quad L_{3,1}=12, \quad M_{0,1,0}=2, \quad N_{2,0,0,1}=12, \\[1ex] L_{0,2}=H_{1,1,1}=N_{0,1,0,1}=N_{1,1,1,0}=N_{2,2,0,0}=N_{3,0,1,0}=N_{4,1,0,0}=0 & \Rightarrow & R_{14}=0, \\[4ex] L_{1,2}=-20, \quad M_{1,0,0}=8,\quad N_{0,2,1,0}=-8, \quad N_{4,0,1,0}=576, \\[1ex] N_{0,0,1,1}=N_{1,1,0,1}=N_{3,0,0,1}=N_{2,1,1,0}=N_{3,2,0,0}=0 & \Rightarrow & R_{15}=0, \\[4ex] L_{2,2}=12, \quad M_{0,0,2}=2,\quad N_{1,0,1,1}=30, \quad N_{1,2,1,0}=-18, \\[1ex] N_{0,2,0,1}=N_{2,1,0,1}=N_{4,0,0,1}=N_{3,1,1,0}=N_{4,2,0,0}=0 & \Rightarrow & R_{16}\neq0, \end{array} \end{equation*} and the error equation becomes \begin{equation*} e_{n+1}=\left(-c_2^2c_3^2(12c_2^4-c_3^2-c_2c_4)(93c_2^5-c_3c_4-c_2c_5)\right)e_n^{16}+O(e_n^{17}), \end{equation*} which finishes the proof. \end{proof} \section{Numerical results} \subsection{Numerical implementation and comparison} In this section, three concrete methods of each of the families (\ref{b2}) and (\ref{c2}) are tested on a number of nonlinear equations.
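Before listing the concrete methods, it may help to see the four-point scheme \eqref{c2} in executable form. The Python sketch below is illustrative only (it is not part of the convergence analysis): it uses the polynomial weight functions of \eqref{d1} below, which satisfy the derivative conditions of Theorem \ref{theorem3}, and applies the scheme to the test equation $x^3+4x^2-10=0$, whose simple root is $x^*\approx 1.36523001$.

```python
def scheme_c2(f, fp, x0, tol=1e-12, max_iter=5):
    """Illustrative sketch of the four-point scheme (c2) with the
    polynomial weight functions of (d1).  Note that only one derivative
    evaluation f'(x_n) is reused in all four substeps."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = fp(x)
        y = x - fx / d                                  # Newton substep
        fy = f(y)
        if fy == 0.0:                                   # residual guards avoid 0/0
            return y
        t = fy / fx
        z = y - (-6*t**3 + 5*t**2 + 2*t + 1) * fy / d   # G(t_n)
        fz = f(z)
        if fz == 0.0:
            return z
        s, u = fz / fy, fz / fx
        w = z - (1 + 2*t + 4*u + 6*t**2 + s) * fz / d   # H(t_n, s_n, u_n)
        fw = f(w)
        if fw == 0.0:
            return w
        p, q, r = fw / fx, fw / fy, fw / fz
        weight = (6*t**2 + 2*t                          # I(t_n)
                  - s**3 + s + 1                        # J(s_n)
                  + 4*u - 4*u**2                        # K(u_n)
                  + t*u + 6*t**2*u + 2*t**3*u - 10*t*u**2   # L(t_n, u_n)
                  + r + 2*q + 8*p                       # M(p_n, q_n, r_n)
                  + 2*t*r + 2*s*u + 6*t**2*r - 4*s**2*u + 24*t**4*u)  # N
        x = w - weight * fw / d
    return x

x_star = scheme_c2(lambda x: x**3 + 4*x**2 - 10,
                   lambda x: 3*x**2 + 8*x, x0=1.3)
```

In double precision a single step from $x_0=1.3$ already reaches the root to machine accuracy, so the residual guards simply stop the iteration; the full $16$th-order behaviour is only visible in multi-precision arithmetic, as in the tables below.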
To obtain a high accuracy and avoid the loss of significant digits, we employed multi-precision arithmetic with 7000 significant decimal digits in the programming package of Mathematica 8.\\ \begin{method}\label{mm1} Weight functions $G$ and $H$ in (\ref{b2}) are given by \begin{equation}\label{dd1} \begin{split} &G(t_n)=-6t_n^3+5t_n^2+2t_n+1,\\ &H(t_n,s_n,u_n)=1+2t_n+4u_n+6t_n^2+s_n, \end{split} \end{equation} where $t_n=\frac{f(y_n)}{f(x_n)}$, $s_n=\frac{f(z_n)}{f(y_n)}$ and $u_n=\frac{f(z_n)}{f(x_n)}$. These functions satisfy the given conditions in Theorems \ref{theorem1} and \ref{theorem2}, so \begin{equation}\label{dd2} \begin{cases} y_{n}=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n- (-6t_n^3+5t_n^2+2t_n+1) \cdot\dfrac{f(y_n)}{f'(x_n)},\\ x_{n+1}=z_n-(1+2t_n+4u_n+6t_n^2+s_n)\cdot\dfrac{f(z_n)}{f'(x_n)}. \end{cases} \end{equation} \end{method} \begin{method}\label{mmmm0} The method by H.T. Kung and J.F. Traub \cite{Kung} is given by \begin{equation}\label{KT0} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_n=y_n-G_f(x_n),\\ x_{n+1}=z_n-f^2(x_n)f(y_n)H_f(x_n,y_n,z_n),\\ \end{cases} \end{equation} where\\ \\ \begin{equation*} \begin{split} G_f(x_n)&=\frac{f^2(x_n)f(y_n)}{f'(x_n)(f(x_n)-f(y_n))^2},\\ H_f(x_n,y_n,z_n)&=G_f(x_n)\\ &\big(\frac{-1}{f^2(x_n)(f(x_n)-f(z_n))}+\frac{f(y_n)-f(x_n)}{f(x_n)f(y_n)(f(x_n)-f(z_n))^2}\\ &+\frac{1}{(f(y_n)-f(z_n))(f(x_n)-f(z_n))^2}\big). \end{split} \end{equation*} \end{method} \begin{method}\label{mmmm11} The method by B. 
Neta \cite{Neta0} is given by \begin{equation}\label{NNN} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_n=y_n-\dfrac{f(x_n)+Af(y_n)}{f(x_n)+(A-2)f(y_n)}\dfrac{f(y_n)}{f'(x_n)}, \quad A\in \mathbb{R},\\ x_{n+1}=y_n+\delta_1f^2(x_n)+\delta_2f^3(x_n), \end{cases} \end{equation} where\\ \\ \begin{equation*} F_y=f(y_n)-f(x_n), \quad F_z=f(z_n)-f(x_n), \end{equation*} \begin{equation*} \zeta_y=\dfrac{1}{F_y} \left(\dfrac{y_n-x_n}{F_y}-\dfrac{1}{f'(x_n)}\right), \zeta_z=\dfrac{1}{F_z} \left(\dfrac{z_n-x_n}{F_z}-\dfrac{1}{f'(x_n)}\right), \end{equation*} \begin{equation*} \delta_1=\zeta_y+\delta_2F_y, \quad \quad \delta_2=-\dfrac{\zeta_y-\zeta_z}{F_y-F_z}, \end{equation*} \end{method} \begin{method} The method by Khattri and Steihaug \cite{Khattri} is given by \begin{equation}\label{kh1} \begin{cases} y_n=x_n-\frac{f(x_n)}{f'(x_n)},\\ z_n=y_n-\frac{f(y_n)}{\frac{x_n-y_n+\alpha f(x_n)}{(x_n-y_n)\alpha}-\frac{(x_n-y_n)f(x_n+\alpha f(x_n))}{(x_n-y_n+\alpha f(x_n))\alpha f(x_n)}-\frac{(2x_n-2y_n+\alpha f(x_n))f(y_n)}{(x_n-y_n)(x_n-y_n+\alpha f(x_n))}},\quad \alpha\in \mathbb{R},\\ x_{n+1}=z_n-\frac{f(z_n)}{H_1f(x_n)+H_2f(x_n+\alpha f(x_n))+H_3f(y_n)+H_4f(z_n)}, \end{cases} \end{equation} where \begin{equation*} \begin{split} H_1&=\frac{(y_n-z_n)(z_n-x_n-\alpha f(x_n))}{\alpha f(x_n)+(y_n-x_n)(z_n-x_n)},\\ H_2&=\frac{-x_ny_n+x_nz_n+y_nz_n-z_n^2}{\alpha f(x_n)(-\alpha f(x_n)+y_n-x_n)(\alpha f(x_n)+x_n-z_n)},\\ H_3&=-\frac{(x_n+\alpha f(x_n))x_n-(x_n+\alpha f(x_n))z_n-x_n z_n+z_n^2}{(\alpha f(x_n)+x_n-y_n)(y_n-x_n)(y_n-z_n)},\\ H_4&=-\frac{(x_n+\alpha f(x_n))x_n+(x_n+\alpha f(x_n))y_n+x_n y_n-2(x_n+\alpha f(x_n))z_n-2x_nz_n-2y_nz_n+3z_n^2}{(\alpha f(x_n)+x_n-z_n)(z_n-x_n)(z_n-y_n)}. 
\end{split} \end{equation*} \end{method} \begin{method}\label{m1} Weight functions $G$, $H$, $I$, $J$, $K$, $L$, $M$ and $N$ in (\ref{c2}) are given by \begin{equation}\label{d1} \begin{split} &G(t_n)=-6t_n^3+5t_n^2+2t_n+1,\\ &H(t_n,s_n,u_n)=1+2t_n+4u_n+6t_n^2+s_n,\\ &I(t_n)=6t_n^2+2t_n, \quad J(s_n)=-s_n^3+s_n+1, \quad K(u_n)=4u_n-4u_n^2,\\ &L(t_n, u_n)=t_nu_n+6t_n^2u_n+2t_n^3u_n-10t_nu_n^2, \quad M(p_n,q_n,r_n)=r_n+2q_n+8p_n,\\ &N(t_n,s_n,u_n,r_n)=2t_nr_n+2s_nu_n+6t_n^2r_n-4s_n^2u_n+24t_n^4u_n,\\ \end{split} \end{equation} where $t_n=\frac{f(y_n)}{f(x_n)}$, $s_n=\frac{f(z_n)}{f(y_n)}$, $u_n=\frac{f(z_n)}{f(x_n)}$, $p_n=\frac{f(w_n)}{f(x_n)}$, $q_n=\frac{f(w_n)}{f(y_n)}$ and $r_n=\frac{f(w_n)}{f(z_n)}$. These functions satisfy the given conditions in Theorems \ref{theorem1}, \ref{theorem2}, and \ref{theorem3}, so \begin{equation}\label{d2} \begin{cases} y_{n}=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n- (-6t_n^3+5t_n^2+2t_n+1) \cdot\dfrac{f(y_n)}{f'(x_n)},\\ w_{n}=z_n-(1+2t_n+4u_n+6t_n^2+s_n)\cdot\dfrac{f(z_n)}{f'(x_n)},\\ x_{n+1}=w_n-\big[1+6t_n^2+2t_n-s_n^3+s_n+4u_n-4u_n^2\\ \quad \quad +t_nu_n+6t_n^2u_n+2t_n^3u_n-10t_nu_n^2+r_n+2q_n+8p_n\\ \quad \quad +2t_nr_n+2s_nu_n+6t_n^2r_n-4s_n^2u_n+24t_n^4u_n \big]\cdot\dfrac{f(w_n)}{f'(x_n)}. 
\end{cases} \end{equation} \end{method} \begin{method}\label{m2} Weight functions $G$, $H$, $I$, $J$, $K$, $L$, $M$, and $N$ in (\ref{c2}) are given by \begin{equation}\label{d3} \begin{split} &G(t_n)=t_n^2(5-7t_n)+(2t_n+1)(t_n^3+1)-2t_n^4,\\ &H(t_n,s_n,u_n)=(1+s_n)+(6+u_n^2)(u_n+t_n^2)+2(t_n-u_n),\\ &I(t_n)=(1+t_n)(2t_n+t_n^2)+t_n^2(3-t_n), \quad J(s_n)=\frac{s_n+s_n^2-s_n^3}{1+s_n}, \quad K(u_n)=\frac{1+5u_n}{1+u_n},\\ &L(t_n, u_n)=t_nu_n+6t_n^2u_n+\frac{2t_n^3u_n-10t_nu_n^2}{1+t_nu_n}, \quad M(p_n,q_n,r_n)=2(p_n+q_n)+\frac{6p_n+r_n}{1+p_n},\\ &N(t_n,s_n,u_n,r_n)=8t_n^2r_n-4s_n^2u_n-2t_n^3r_n+\frac{2s_nu_n+2t_nr_n+24t_n^4u_n+2t_ns_nu_n}{1+t_n},\\ \end{split} \end{equation} where $t_n=\frac{f(y_n)}{f(x_n)}$, $s_n=\frac{f(z_n)}{f(y_n)}$, $u_n=\frac{f(z_n)}{f(x_n)}$, $p_n=\frac{f(w_n)}{f(x_n)}$, $q_n=\frac{f(w_n)}{f(y_n)}$ and $r_n=\frac{f(w_n)}{f(z_n)}$. These functions satisfy the given conditions in Theorems \ref{theorem1}, \ref{theorem2}, and \ref{theorem3}, so \begin{equation}\label{d4} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n- (t_n^2(5-7t_n)+(2t_n+1)(t_n^3+1)-2t_n^4) \cdot\dfrac{f(y_n)}{f'(x_n)},\\ w_{n}=z_n-\left((1+s_n)+(6+u_n^2)(u_n+t_n^2)+2(t_n-u_n)\right)\cdot\dfrac{f(z_n)}{f'(x_n)},\\ x_{n+1}=w_n-\big[(1+t_n)(2t_n+t_n^2)+3t_n^2-t_n^3+8t_n^2r_n-4s_n^2u_n-2t_n^3r_n+t_nu_n+6t_n^2u_n+2(p_n+q_n)\\ \quad \quad+\frac{1+5u_n}{1+u_n}+\frac{2t_n^3u_n-10t_nu_n^2}{1+t_nu_n}+\frac{6p_n+r_n}{1+p_n}+\frac{s_n+s_n^2-s_n^3}{1+s_n}+\frac{2s_nu_n+2t_nr_n+24t_n^4u_n+2t_ns_nu_n}{1+t_n}\big]\cdot\dfrac{f(w_n)}{f'(x_n)}. 
\end{cases} \end{equation} \end{method} \begin{method}\label{m3} Weight functions $G$, $H$, $I$, $J$, $K$, $L$, $M$, and $N$ in (\ref{c2}) are given by \begin{equation}\label{d5} \begin{split} &G(t_n)=(1+t_n^2)(1+2t_n+2t_n^2)+t_n^2(2-8t_n-2t_n^2),\\ &H(t_n,s_n,u_n)=4u_n-5s_n+(6+s_n^3)(t_n^2+s_n)+(1+u_n^3)(1+2t_n),\\ &I(t_n)=(1+t_n)(2t_n+t_n^3)+t_n^2(4-t_n-t_n^2), \quad J(s_n)=-2s_n^2+\frac{s_n+2s_n^2}{1+s_n^2},\\ &K(u_n)=1+6u_n-\frac{2u_n+6u_n^2}{1+u_n}, \quad L(t_n, u_n)=t_nu_n+\frac{2t_n^3u_n-10t_nu_n^2+6t_n^2u_n}{1+2t_nu_n},\\ &M(p_n,q_n,r_n)=\frac{1+2p_n+2q_n}{1-r_n}+\frac{6p_n}{1+q_n}-1,\\ &N(t_n,s_n,u_n,r_n)=2t_nr_n+2s_nu_n+24t_n^4u_n+\frac{6t_n^2r_n+6t_n^3r_n-4s_n^2u_n}{1+t_n},\\ \end{split} \end{equation} where $t_n=\frac{f(y_n)}{f(x_n)}$, $s_n=\frac{f(z_n)}{f(y_n)}$, $u_n=\frac{f(z_n)}{f(x_n)}$, $p_n=\frac{f(w_n)}{f(x_n)}$, $q_n=\frac{f(w_n)}{f(y_n)}$ and $r_n=\frac{f(w_n)}{f(z_n)}$. These functions satisfy the given conditions in Theorems \ref{theorem1}, \ref{theorem2}, and \ref{theorem3}, so \begin{equation}\label{d6} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n- ((1+t_n^2)(1+2t_n+2t_n^2)+t_n^2(2-8t_n-2t_n^2)) \cdot\dfrac{f(y_n)}{f'(x_n)},\\ w_{n}=z_n-\left(4u_n-5s_n+(6+s_n^3)(t_n^2+s_n)+(1+u_n^3)(1+2t_n)\right)\cdot\dfrac{f(z_n)}{f'(x_n)},\\ x_{n+1}=w_n-\big[(1+t_n)(2t_n+t_n^3)+4t_n^2-t_n^3-t_n^4-2s_n^2+6u_n+2t_nr_n+2s_nu_n+24t_n^4u_n+t_nu_n\\ \quad \quad +\frac{2t_n^3u_n-10t_nu_n^2+6t_n^2u_n}{1+2t_nu_n}+\frac{1+2p_n+2q_n}{1-r_n}+\frac{6p_n}{1+q_n}-\frac{2u_n+6u_n^2}{1+u_n}+\frac{s_n+2s_n^2}{1+s_n^2}+\frac{6t_n^2r_n+6t_n^3r_n-4s_n^2u_n}{1+t_n}\big]\cdot\dfrac{f(w_n)}{f'(x_n)}. \end{cases} \end{equation} \end{method} \begin{method}\label{mmm0} The method by H.T. Kung and J.F.
Traub \cite{Kung} is given by \begin{equation}\label{KT} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_n=y_n-G_f(x_n),\\ s_n=z_n-f^2(x_n)f(y_n)H_f(x_n,y_n,z_n),\\ x_{n+1}=s_n+\frac{f^2(x_n)f(y_n)f(z_n)}{f(x_n)-f(s_n)}\left(H_f(x_n,y_n,z_n)-\frac{K_f(x_n,y_n,z_n)-L_f(x_n,y_n,z_n,s_n)}{f(x_n)-f(s_n)}\right), \end{cases} \end{equation} where\\ \\ \begin{equation*} \begin{split} G_f(x_n)&=\frac{f^2(x_n)f(y_n)}{f'(x_n)(f(x_n)-f(y_n))^2},\\ H_f(x_n,y_n,z_n)&=G_f(x_n)\\ &\big(\frac{-1}{f^2(x_n)(f(x_n)-f(z_n))}+\frac{f(y_n)-f(x_n)}{f(x_n)f(y_n)(f(x_n)-f(z_n))^2}\\ &+\frac{1}{(f(y_n)-f(z_n))(f(x_n)-f(z_n))^2}\big),\\ K_f(x_n,y_n,z_n)&=\frac{f(x_n)(f(y_n)-f(z_n))(f(x_n)-f(y_n))-f^2(x_n)f(y_n)}{f'(x_n)(f(x_n)-f(z_n))(f(x_n)-f(y_n))^2(f(y_n)-f(z_n))},\\ L_f(x_n,y_n,z_n,s_n)&=\frac{G_f(x_n)(f(z_n)-f(s_n))-\left(f(y_n)f^2(x_n)H_f(x_n,y_n,z_n)\right)(f(y_n)-f(z_n))}{(f(y_n)-f(s_n))(f(y_n)-f(z_n))(f(z_n)-f(s_n))}. \end{split} \end{equation*} \end{method} \begin{method}\label{mmm11} The method by B. 
Neta \cite{Neta0} is given by \begin{equation}\label{NNNN} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_n=y_n-\dfrac{f(x_n)+Af(y_n)}{f(x_n)+(A-2)f(y_n)}\dfrac{f(y_n)}{f'(x_n)}, \quad A\in \mathbb{R},\\ s_n=y_n+\delta_1f^2(x_n)+\delta_2f^3(x_n),\\ x_{n+1}=y_n+\theta_1f^2(x_n)+\theta_2f^3(x_n)+\theta_3f^4(x_n), \end{cases} \end{equation} where\\ \\ \begin{equation*} F_y=f(y_n)-f(x_n), \quad F_z=f(z_n)-f(x_n), \quad F_s=f(s_n)-f(x_n), \end{equation*} \begin{equation*} \zeta_y=\dfrac{1}{F_y} \left(\dfrac{y_n-x_n}{F_y}-\dfrac{1}{f'(x_n)}\right), \zeta_z=\dfrac{1}{F_z} \left(\dfrac{z_n-x_n}{F_z}-\dfrac{1}{f'(x_n)}\right), \zeta_s=\dfrac{1}{F_s} \left(\dfrac{s_n-x_n}{F_s}-\dfrac{1}{f'(x_n)}\right), \end{equation*} \begin{equation*} \delta_1=\zeta_y+\delta_2F_y, \quad \quad \delta_2=-\dfrac{\zeta_y-\zeta_z}{F_y-F_z},\quad \quad \gamma_1=\dfrac{\zeta_s-\zeta_z}{F_s-F_z},\quad \quad \gamma_2=\dfrac{\zeta_y-\zeta_z}{F_y-F_z}, \end{equation*} \begin{equation*} \theta_1=\zeta_s+\theta_2F_s-\theta_3F_s^2, \quad \theta_2=\gamma_1+\theta_3(F_s+F_z), \quad \theta_3=\dfrac{\gamma_1-\gamma_2}{F_s-F_y}. \end{equation*} \end{method} \begin{method}\label{m4} The method by Y. H. Geum and Y. I. Kim \cite{Geum1} is given by \begin{equation}\label{d7} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n- K_f(u_n) \cdot\dfrac{f(y_n)}{f'(x_n)},\\ s_{n}=z_n-H_f(u_n, v_n, w_n)\cdot\dfrac{f(z_n)}{f'(x_n)},\\ x_{n+1}=s_n-W_f(u_n,v_n,w_n,t_n)\cdot\dfrac{f(s_n)}{f'(x_n)}.
\end{cases} \end{equation} where \begin{equation*} \begin{split} &K_f(u_n)=\dfrac{1+\beta u_n+\lambda u_n^2}{1+(\beta-2)u_n+\mu u_n^2},\\ &H_f(u_n,v_n,w_n)=\dfrac{1+a u_n +b v_n+\gamma w_n}{1+c u_n+d v_n+\sigma w_n},\\ &W_f(u_n,v_n,w_n,t_n)=\dfrac{1+B_1u_n+B_2v_nw_n}{1+B_3v_n+B_4w_n+B_5t_n+B_6v_nw_n}+G(u_n,w_n), \end{split} \end{equation*} and $u_n=\frac{f(y_n)}{f(x_n)}$, $v_n=\frac{f(z_n)}{f(y_n)}$, $w_n=\frac{f(z_n)}{f(x_n)}$ and $t_n=\frac{f(s_n)}{f(z_n)}$.\\ \begin{equation*} \begin{split} &a=2, \quad b=0, \quad c=0, \quad d=-1, \quad \gamma=2+\sigma, \quad \lambda=-9+\frac{5\beta}{2},\quad \mu=-4+\frac{\beta}{2}, \\ &B_1=2, \quad B_2=2+\sigma, \quad B_3=-1, \quad B_4=-2, \quad B_5=-1, \quad B_6=2(1+\sigma), \end{split} \end{equation*} with the weight function \begin{equation}\label{d8} \begin{split} G(u_n,w_n)&=\frac{-1}{2}\left(u_nw_n\left(6+12u_n+u_n^2(24-11\beta)+u_n^3\phi_1+4\sigma\right)\right)+\phi_2w_n^2,\\ &\beta=2, \quad \sigma=-2, \quad \phi_1=11\beta^2-66\beta+136, \quad \phi_2=2u_n(\sigma^2-2\sigma-9)-4\sigma-6.\\ \end{split} \end{equation} \end{method} \begin{method}\label{m5} The method by Y. H. Geum and Y. I. Kim \cite{Geum2} is given by \begin{equation}\label{d9} \begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\ z_{n}=y_n- K_f(u_n) \cdot\dfrac{f(y_n)}{f'(x_n)},\\ s_{n}=z_n-H_f(u_n, v_n, w_n)\cdot\dfrac{f(z_n)}{f'(x_n)},\\ x_{n+1}=s_n-W_f(u_n,v_n,w_n,t_n)\cdot\dfrac{f(s_n)}{f'(x_n)}.
\end{cases} \end{equation} where \begin{equation*} \begin{split} &K_f(u_n)=\dfrac{1+\beta u_n+\lambda u_n^2}{1+(\beta-2)u_n+\mu u_n^2},\\ &H_f(u_n,v_n,w_n)=\dfrac{1+a u_n+b v_n +\gamma w_n}{1+c u_n+dv_n+\sigma w_n},\\ &W_f(u_n,v_n,w_n,t_n)=\dfrac{1+A_1u_n}{1+A_2v_n+A_3w_n+A_4t_n}+G(u_n,v_n,w_n), \end{split} \end{equation*} where $u_n=\frac{f(y_n)}{f(x_n)}$, $v_n=\frac{f(z_n)}{f(y_n)}$, $w_n=\frac{f(z_n)}{f(x_n)}$ and $t_n=\frac{f(s_n)}{f(z_n)}$, \begin{equation*} \begin{split} &a=2, \quad b=0, \quad c=0, \quad d=-1,\\ &\gamma=2+\sigma, \quad \lambda=-9+\frac{5\beta}{2}, \quad \mu=-4+\frac{\beta}{2},\\ &A_1=2, \quad A_2=-1, \quad A_3=-2, \quad A_4=-1, \end{split} \end{equation*} with the weight function \begin{equation}\label{d10} \begin{split} G(u_n,v_n,w_n)&=-6u_n^3v_n+6w_n^2-4u_n^4(3v_n+17w_n)+u_n(2v_n^2+4v_n^3+w_n-2w_n^2),\\ \beta&=0, \quad \sigma=-2. \end{split} \end{equation} \end{method} We test our proposed methods \eqref{d2}, \eqref{d4}, and \eqref{d6} on the functions $f_1, \dots, f_6$ described in Table \ref{table1}; the table also lists the exact roots $x^*$, computed using the \texttt{FindRoot} command of Mathematica \cite[pp.\ 158--160]{Hazrat}, and the initial approximations $x_0$. For the methods \eqref{d2}, \eqref{d4}, and \eqref{d6} with the weight functions \eqref{d1}, \eqref{d3}, and \eqref{d5}, respectively, Tables \ref{table2}-\ref{table4} display the errors $|x_n - x^*|$ for $n=1,2,3$ and the \emph{computational order of convergence (coc)} \cite{Fer}, approximated by \begin{equation*} \mathrm{coc} \approx \frac{\ln|(x_{n+1}-x^{*})/(x_{n}-x^{*})|}{\ln|(x_{n}-x^{*})/(x_{n-1}-x^{*})|}.
\end{equation*} \begin{table}[!ht] \begin{center} \begin{tabular}{l c c} \hline Test function $f_n$ & root $x^{*}$ & initial approximation $x_0$ \\ \hline $f_1(x) = \ln (1+x^2)+e^{x}\sin x$ & $0$ & $0.03$ \\[0.5ex] $f_2(x) = \frac{-x}{100}+\sin x$ & $0$ & $0.5$ \\[0.5ex] $ f_3(x)=x \ln (1+x \sin x)+ e^{-1+x^2+ x \cos x}\sin \pi x $ & $ 0 $ & $ 0.01 $ \\[0.5ex] $ f_4(x)=1+e^{2+x-x^2}+x^3-\cos(1+x) $& $ -1 $ & $ -0.3 $ \\[0.5ex] $ f_5(x)=(1-\sin x^2)\frac{x^2+1}{x^3+1}+x\ln(x^2-\pi+1)-\frac{1+\pi}{1+\sqrt{\pi^3}}$&$\sqrt{\pi}$&$1.7$ \\[0.5ex] $ f_6(x)=(1+x^2)\cos(\frac{\pi x}{2})+\frac{\ln(x^2+2x+2)}{1+x^2}$&$-1$&$-1.1$ \\ \hline \end{tabular} \end{center} \vspace*{-3ex} \caption{Test functions $f_1, \dots, f_6$, root $x^*$ and initial approximation $x_0$.\label{table1}} \end{table} To obtain a high accuracy and avoid loss of significant digits, we employ multi-precision arithmetic with $6000$ significant decimal digits in Mathematica 8. \begin{table}[!ht] \begin{center} \begin{tabular}{c c c c c } \hline $f_n$ & $~~~~\vert x_{1} - x^{*} \vert~~~~$ & $~~~~\vert x_{2} - x^{*} \vert~~~~$ & $~~~~\vert x_{3} - x^{*} \vert~~~~$ & coc\\ \hline $f_1$ & $0.380e-20$ & $0.126e-319$ & $0.276e-5111$ & $16.0000$ \\ $f_2$ & $0.104e-10$ & $0.265e-192$ & $0.211e-3279$ & $17.0000$ \\ $f_3$ & $0.450e-28$& $0.303e-449$ & $0.561e-7188$ & $16.0000$ \\ $f_4$ & $0.609e-8$ & $0.465e-136$ & $0.630e-2186$ & $16.0000$\\ $f_5$ & $0.246e-14$ & $0.276e-230$ & $0.169e-3685$ & $16.0000$\\ $f_6$ & $0.142e-17$ & $0.482e-283$ & $0.139e-4530$ & $16.0000$\\ \hline \end{tabular} \end{center} \vspace*{-3ex} \caption{Errors and coc for method \eqref{d2}. 
\label{table2}} \end{table} \hspace{0.5cm} \begin{table}[!ht] \begin{center} \begin{tabular}{c c c c c } \hline $f_n$ & $\vert x_{1} - x^{*} \vert$ & $\vert x_{2} -x^{*} \vert$ & $\vert x_{3} - x^{*} \vert$ & coc \\ \hline $f_1$ & $0.144e-19$ & $0.193e-310$ & $0.222e-4964$ & $16.0000$ \\ $f_2$ & $0.301e-23$ & $0.339e-451$ & $0.336e-8582$ & $19.0000$ \\ $f_3$ & $0.405e-28$ & $0.515e-450$ & $0.239e-7200$ & $16.0000$ \\ $f_4$ & $0.628e-8$ & $0.276e-135$ & $0.561e-2173$ & $16.0000$\\ $f_5$ & $0.224e-14$ & $0.526e-231$ & $0.456e-3697$ & $16.0000$\\ $f_6$ & $0.186e-17$ & $0.322e-281$ & $0.202e-4501$ & $16.0000$\\ \hline \end{tabular} \end{center} \vspace*{-3ex} \caption{Errors and coc for method \eqref{d4}. \label{table3}} \end{table} \hspace{0.5cm} \begin{table}[h!] \begin{center} \begin{tabular}{ c c c c c } \hline $f_n$ & $\vert x_{1} - x^{*} \vert$ & $\vert x_{2} - x^{*} \vert$ & $\vert x_{3} - x^{*} \vert$ & coc \\ \hline $f_1$ & $0.389e-20$ & $0.931e-321$ & $0.107e-5130$ & $16.0000$ \\ $f_2$ & $0.414e-22$ & $0.1220e-385$ & $0.117e-6565$ & $17.0000$ \\ $f_3$ & $0.936e-29$ & $0.865e-461$ & $0.243e-7373$ & $16.0000$ \\ $f_4$ & $0.554e-8$ & $0.756e-136$ & $0.108e-2181$ & $16.0000$\\ $f_5$ & $0.119e-14$ & $0.116e-235$ & $0.760e-3772$ & $16.0000$\\ $f_6$ & $0.125e-17$ & $0.926e-285$ & $0.745e-4559$ & $16.0000$\\ \hline \end{tabular} \end{center} \vspace*{-3ex} \caption{Errors and coc for method \eqref{d6} .\label{table4}} \end{table} Tables \ref{table2}-\ref{table4} show that methods \eqref{d2}, \eqref{d4}, and \eqref{d6} support the convergence analysis given in the previous sections. In Tables \ref{table5}-\ref{table7}, we compare our three-point method \eqref{dd2} with the methods \eqref{KT0}, \eqref{NNN} and \eqref{kh1} and our four-point methods \eqref{d2}, \eqref{d4} and \eqref{d6} with the methods \eqref{KT}, \eqref{NNNN}, \eqref{d7} and \eqref{d9}. 
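The coc values reported in the tables follow mechanically from the approximation formula displayed above. A minimal Python helper (illustrative only) makes the computation explicit; the synthetic error sequence in the example mimics an exactly $16$th-order iteration $e_{n+1}=e_n^{16}$:

```python
from math import log

def coc(errors):
    """Approximate computational order of convergence.

    errors[n] = |x_n - x*|; one estimate is produced for every interior
    index n, via ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    return [log(errors[n + 1] / errors[n]) / log(errors[n] / errors[n - 1])
            for n in range(1, len(errors) - 1)]

# A model sequence with e_{n+1} = e_n^16 reproduces coc = 16.
estimates = coc([0.5, 0.5**16, 0.5**256])
```

For a model sequence $e_{n+1}=e_n^p$ the estimate recovers $p$ exactly; in practice the errors must themselves be computed in multi-precision arithmetic, since values such as $|x_3-x^*|\sim 10^{-5000}$ underflow double precision.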
\begin{table}[!ht] \begin{center} \begin{tabular}{ c c c c c c} \hline Method & Weight function & $\vert x_{1}- x^{*}\vert$ & $\vert x_{2} - x^{*} \vert$ & $\vert x_{3} - x^{*} \vert$ & coc \\ \hline $(\ref{dd2})$ &(\ref{dd1})& $~~0.834e-13$ & $0.112e-121~~$ & $0.160e-1101$ &$9.0000$\\ $(\ref{KT0})$ &-& $~~0.316e-11$ & $0.165e-95~~$ & $0.918e-770$ &$8.0000$\\ $(\ref{NNN})$ &-& $~~0.445e-11$ & $0.380e-94~~$ & $0.108e-758$ &$8.0000$\\ $(\ref{kh1})$ &-& $~~0.704e-11$ & $0.658e-92~~$ & $0.384e-740$ &$8.0000$\\ $(\ref{d2})$ &(\ref{d1})& $~~0.844e-25$ & $0.659e-435~~$ & $0.257e-7406$ &$17.0000$\\ $(\ref{d4})$ &(\ref{d3})& $~~0.102e-24$ &$0.229e-433~~$ & $0.215e-7380$& $17.0000$ \\ $(\ref{d6})$ &(\ref{d5})& $~~0.371e-25$ &$0.429e-465~~$ & $0.571e-8384$& $18.0000$ \\ $(\ref{KT})$ &-& $~~0.114e-22$ &$0.323e-374~~$ & $0.546e-5999$& $16.0000$ \\ $(\ref{NNNN})$ &-& $~~0.226e-22$ &$0.403e-369~~$ & $0.429e-5917$& $16.0000$ \\ $(\ref{d7})$ &(\ref{d8})& $~~0.235e-22$ &$0.106e-368~~$ & $0.316e-5910$& $16.0000$ \\ $(\ref{d9})$ &(\ref{d10})& $~~0.310e-21$ &$0.600e-350~~$ &$0.226e-5609$& $16.0000$\\\hline \end{tabular} \end{center} \vspace*{-3ex} \caption{Comparison for $f(x) = \ln(1-x+x^2)+4\sin (1-x)$, zero $x^{*}=1$ and initial $x_{0}=1.1$.\label{table5}} \end{table} \hspace{1cm} \begin{table}[h!] 
\begin{center} \begin{tabular}{ c c c c c c} \hline Method & Weight function & $\vert x_{1} -x^{*} \vert$ & $\vert x_{2} - x^{*} \vert$ & $\vert x_{3} - x^{*} \vert$ & coc \\ \hline $(\ref{dd2})$ &(\ref{dd1})& $~~0.125e-10$ & $0.888e-85~~$ & $0.545e-678$ &$8.0000$\\ $(\ref{KT0})$ &-& $~~0.414e-9$ & $0.961e-72~~$ & $0.811e-573$ &$8.0000$\\ $(\ref{NNN})$ &-& $~~0.597e-9$ & $0.273e-70~~$ & $0.533e-561$ &$8.0000$\\ $(\ref{kh1})$ &-& $~~0.629e-9$ & $0.393e-70~~$ & $0.929e-560$ &$8.0000$\\ $(\ref{d2})$ &(\ref{d1})& $~~0.201e-9$ & $0.510e-148~~$ & $0.142e-2365$ &$16.0000$\\ $(\ref{d4})$ &(\ref{d3})& $~~0.210e-9$ &$0.807e-148~~$ & $0.183e-2362$& $16.0000$ \\ $(\ref{d6})$ &(\ref{d5})& $~~0.389e-20$ &$0.931e-321~~$ & $0.107e-5130$& $16.0000$ \\ $(\ref{KT})$ &-& $~~0.883e-18$ &$0.897e-282~~$ & $0.114e-4505$& $16.0000$ \\ $(\ref{NNNN})$ &-& $~~0.183e-17$ &$0.254e-276~~$ & $0.470e-4418$& $16.0000$ \\ $(\ref{d7})$ &(\ref{d8})& $~~0.274e-10$ &$0.682e-162~~$ & $0.150e-2587$& $16.0000$ \\ $(\ref{d9})$ &(\ref{d10})& $~~0.115e-9$ &$0.377e-149~~$ & $0.621e-2381$& $16.0000$\\\hline \end{tabular} \end{center} \vspace*{-3ex} \caption{Comparison for $f(x) = \ln(1+x^2)+e^{x}\sin x$, zero $x^{*}=0$ and initial $x_{0}=0.1$.\label{table6}} \end{table} \hspace{1cm} \begin{table}[h!] 
\begin{center} \begin{tabular}{ c c c c c c} \hline Method & Weight function & $\vert x_{1} -x^{*} \vert$ & $\vert x_{2} - x^{*} \vert$ & $\vert x_{3} - x^{*} \vert$ & coc \\ \hline $(\ref{dd2})$ &(\ref{dd1})& $~~0.361e-13$ & $0.209e-106~~$ & $0.270e-852$ &$8.0000$\\ $(\ref{KT0})$ &-& $~~0.230e-12$ & $0.396e-99~~$ & $0.303e-793$ &$8.0000$\\ $(\ref{NNN})$ &-& $~~0.349e-12$ & $0.168e-97~~$ & $0.485e-780$ &$8.0000$\\ $(\ref{kh1})$ &-& $~~0.477e-17$ & $0.463e-141~~$ & $0.366e-1133$ &$8.0000$\\ $(\ref{d2})$ &(\ref{d1})& $~~0.499e-25$ & $0.113e-400~~$ & $0.536e-6411$ &$16.0000$\\ $(\ref{d4})$ &(\ref{d3})& $~~0.406e-25$ &$0.345e-402~~$ & $0.256e-6435$& $16.0000$ \\ $(\ref{d6})$ &(\ref{d5})& $~~0.825e-26$ &$0.242e-414~~$ & $0.760e-6631$& $16.0000$ \\ $(\ref{KT})$ &-& $~~0.211e-24$ &$0.152e-390~~$ & $0.780e-6249$& $16.0000$ \\ $(\ref{NNNN})$ &-& $~~0.484e-24$ &$0.209e-384~~$ & $0.310e-6150$& $16.0000$ \\ $(\ref{d7})$ &(\ref{d8})&$~~0.253e-24$ &$0.173e-389~~$ &$0.372e-6232$& $16.0000$\\ $(\ref{d9})$ &(\ref{d10})& $~~0.454e-22$ &$0.108e-350~~$ &$0.123e-5608$& $16.0000$\\\hline \end{tabular} \end{center} \vspace*{-3ex} \caption{Comparison for $f(x) = \frac{-2}{27}(9\sqrt{2}+7\sqrt{3})+\sqrt{1-x^2}+(1+x^3)\cos (\frac{\pi x}{2})$, zero $x^{*}=\frac{1}{3}$ and initial $x_{0}=0.35$.\label{table7}} \end{table} It can be observed from Tables \ref{table5}-\ref{table7} that, for the presented examples, our proposed three-point method \eqref{dd2} is comparable to and competitive with the methods \eqref{KT0}, \eqref{NNN} and \eqref{kh1}, and that our proposed four-point methods \eqref{d2}, \eqref{d4} and \eqref{d6} are likewise comparable to and competitive with the methods \eqref{KT}, \eqref{NNNN}, \eqref{d7} and \eqref{d9}. \subsection{Dynamic behavior} In this section, we compare the iterative methods in the complex plane by using basins of attraction.
Studying the dynamic behavior of the rational functions associated with an iterative method, by means of basins of attraction, gives important information about the convergence and stability of the scheme \cite{Soleymani2}. Neta et al.\ have compared various methods for solving nonlinear equations with multiple roots by comparing their basins of attraction \cite{Neta}, and Scott et al.\ have compared several methods for approximating simple roots \cite{Scott}. Moreover, a number of iterative root-finding methods were compared from a dynamical point of view by Amat et al. \cite{Amat1}-\cite{Amat4}, Neta et al. \cite{Neta1}, \cite{Neta2}, Stewart \cite{Stewart}, and Vrscay and Gilbert \cite{Vrscay}. To this end, some basic concepts are briefly recalled. Let $G:\mathbb{C} \to \mathbb{C}$ be a rational map on the complex plane. For $z\in \mathbb{C}$, we define its orbit as the set $orb(z)=\{z,\,G(z),\,G^2(z),\dots\}$. A point $z_0 \in \mathbb{C}$ is called a periodic point with minimal period $m$ if $G^m(z_0)=z_0$, where $m$ is the smallest integer with this property. A periodic point with minimal period $1$ is called a fixed point. A fixed point $z_0$ is called attracting if $|G'(z_0)|<1$, repelling if $|G'(z_0)|>1$, and neutral otherwise. The Julia set of a nonlinear map $G(z)$, denoted by $J(G)$, is the closure of the set of its repelling periodic points. The complement of $J(G)$ is the Fatou set $F(G)$, where the basins of attraction of the different roots lie \cite{Babajee}, \cite{Cordero5}. We use basins of attraction to compare the iteration algorithms: approximating the basins is a way to visually comprehend how an algorithm behaves as a function of the starting point.
From the dynamical point of view, we take a $256 \times 256$ grid of the square $[-3,3]\times[-3,3]\subset \mathbb{C}$ and assign a color to each grid point $z_0$ according to the simple root to which the corresponding orbit of the iterative method starting from $z_0$ converges; we mark the point as black if the orbit does not converge to a root, in the sense that after at most 100 iterations its distance to every root is larger than $10^{-3}$. In this way, we distinguish the attraction basins by their colors for the different methods. \begin{table}[!ht] \begin{center} \begin{tabular}{l c} \hline Test Problems $p_n$ & Roots \\ \hline $p_1(z) = z^2+1 $ & $i,\quad -i$ \\[0.5ex] $p_2(z) = z^3+z $ & $0,\quad i,\quad -i$ \\[0.5ex] $p_3(z) = z^3+z^2-1 $ & $ -0.877439+0.744862i, \quad -0.877439-0.744862i,\quad 0.7548878 $ \\ \hline \end{tabular} \end{center} \vspace*{-3ex} \caption{Test Problems $p_1(z), p_2(z), p_3(z)$ and roots.\label{table8}} \end{table} For the test problem $p_1(z)$, with its roots given in Table \ref{table8}, the results are presented in Figures \ref{fig:figure1}-\ref{fig:figure5}. For test problems $p_2(z)$ and $p_3(z)$, the results are shown in Figures \ref{fig:figure6}-\ref{fig:figure10} and Figures \ref{fig:figure11}-\ref{fig:figure15}, respectively. As a result, the method (\ref{d4}) (see Figures \ref{fig:figure1}, \ref{fig:figure6} and \ref{fig:figure11}) seems to produce larger basins of attraction than the methods (\ref{d7}) and (\ref{d9}) (see Figures \ref{fig:figure4}, \ref{fig:figure5}, \ref{fig:figure9}, \ref{fig:figure10}, \ref{fig:figure14} and \ref{fig:figure15}) and smaller basins of attraction than the methods (\ref{KT}) and (\ref{NNNN}) (see Figures \ref{fig:figure2}, \ref{fig:figure3}, \ref{fig:figure7}, \ref{fig:figure8}, \ref{fig:figure12} and \ref{fig:figure13}).
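The grid-coloring procedure described above can be sketched in a few lines. In the following illustrative Python snippet (ours, not the code used to produce the figures), Newton's method stands in for the weight-function methods compared in the text, applied to $p_1(z)=z^2+1$:

```python
def newton_basins(f, df, roots, n=256, box=3.0, max_iter=100, tol=1e-3):
    """Color each point of an n-by-n grid over [-box,box]^2 in C by the
    index of the root its Newton orbit converges to, or -1 (black) if it
    stays farther than tol from every root after max_iter iterations."""
    step = 2 * box / (n - 1)
    colors = []
    for i in range(n):
        row = []
        for j in range(n):
            z = complex(-box + j * step, -box + i * step)
            for _ in range(max_iter):
                d = df(z)
                if d == 0:
                    break                     # derivative vanished
                z = z - f(z) / d
                if abs(z.real) > 1e12 or abs(z.imag) > 1e12:
                    break                     # diverging orbit
            row.append(next((k for k, r in enumerate(roots)
                             if abs(z - r) < tol), -1))
        colors.append(row)
    return colors

# p1(z) = z^2 + 1 with roots ±i: the basins are the two half-planes
cols = newton_basins(lambda z: z*z + 1, lambda z: 2*z, [1j, -1j], n=64)
```

For this example, starting points in the upper (resp.\ lower) half-plane are colored by the root $i$ (resp.\ $-i$), while orbits that never come within $10^{-3}$ of a root are marked $-1$ (black), approximating the Julia set on the real axis.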
Note that points might belong to no basin of attraction; these are starting points for which the methods do not converge, approximated and visualized by black points. These exceptional points constitute the so-called Julia set of the corresponding methods. \begin{figure} \caption{Method (\ref{d4}) for $p_1(z)$.\label{fig:figure1}} \caption{Method (\ref{KT}) for $p_1(z)$.\label{fig:figure2}} \caption{Method (\ref{NNNN}) for $p_1(z)$.\label{fig:figure3}} \end{figure} \begin{figure} \caption{Method (\ref{d7}) for $p_1(z)$.\label{fig:figure4}} \caption{Method (\ref{d9}) for $p_1(z)$.\label{fig:figure5}} \end{figure} \begin{figure} \caption{Method (\ref{d4}) for $p_2(z)$.\label{fig:figure6}} \caption{Method (\ref{KT}) for $p_2(z)$.\label{fig:figure7}} \caption{Method (\ref{NNNN}) for $p_2(z)$.\label{fig:figure8}} \end{figure} \begin{figure} \caption{Method (\ref{d7}) for $p_2(z)$.\label{fig:figure9}} \caption{Method (\ref{d9}) for $p_2(z)$.\label{fig:figure10}} \end{figure} \begin{figure} \caption{Method (\ref{d4}) for $p_3(z)$.\label{fig:figure11}} \caption{Method (\ref{KT}) for $p_3(z)$.\label{fig:figure12}} \caption{Method (\ref{NNNN}) for $p_3(z)$.\label{fig:figure13}} \end{figure} \begin{figure} \caption{Method (\ref{d7}) for $p_3(z)$.\label{fig:figure14}} \caption{Method (\ref{d9}) for $p_3(z)$.\label{fig:figure15}} \end{figure} \section{Conclusion} We introduce a new optimal class of four-point methods without memory for approximating a simple root of a given nonlinear equation. Our proposed methods use five function evaluations per iteration, and therefore support the Kung--Traub conjecture. Error and convergence analyses are carried out. Numerical examples show that our methods work and can compete with other methods in the same class. We also used basins of attraction to compare the iteration algorithms. \end{document}
\begin{document} \title{Gradient Schemes for linear and non-linear elasticity equations} \author{J\'er\^ome Droniou} \address{School of Mathematical Sciences, Monash University, Victoria 3800, Australia.} \author{Bishnu P. Lamichhane} \address{School of Mathematical \& Physical Sciences, University of Newcastle, University Drive, NSW 2308, Callaghan, Australia.} \email{[email protected],[email protected]} \date{19 November 2013} \begin{abstract} The Gradient Scheme framework provides a unified analysis setting for many different families of numerical methods for diffusion equations. We show in this paper that the Gradient Scheme framework can be adapted to elasticity equations, and provides error estimates for linear elasticity and convergence results for non-linear elasticity. We also establish that several classical and modern numerical methods for elasticity are embedded in the Gradient Scheme framework, which allows us to obtain convergence results for these methods in cases where the solution does not satisfy the full $H^2$-regularity or for non-linear models.
\end{abstract} \subjclass[2010]{65N12, 65N15, 65N30} \keywords{elasticity equations, linear, non-linear, numerical methods, convergence analysis, Gradient Schemes} \maketitle \section{Introduction}\label{sec:intro} We are interested in the numerical approximation of the (possibly non-linear) elasticity equation \begin{equation}\label{eq:elas-strong} \begin{array}{ll} -{\rm div}(\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})))=\mathbf{F}\,,&\mbox{in }\Omega,\\ \mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})= \frac{\nabla \bar{\mathbf{u}} + (\nabla \bar{\mathbf{u}})^T}{2}\,,&\mbox{in }\Omega,\\ \bar{\mathbf{u}} = 0\,,&\mbox{on }{\Gamma_D}\,,\\ \mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))\mathbf{n}=\mathbf{g}\,,&\mbox{on }{\Gamma_N}, \end{array} \end{equation} where $\Omega\subset \mathbf{R}^d$ is the body subjected to the force field $\mathbf{F}$, $\mathbf{n}$ is the unit normal to $\partial\Omega$ pointing outward from $\Omega$, ${\Gamma_D}$ and ${\Gamma_N}$ are subsets of $\partial\Omega$ on which the body is respectively fixed and subjected to traction, $\mbox{\boldmath{$\sigma$}}$ and $\mbox{\boldmath{$\varepsilon$}}$ are the second-order stress and strain tensors, $\bar{\mathbf{u}}=(\bar u_i)_{i=1,\ldots,d}:\Omega\to\mathbf{R}^d$ describes the local displacements and the gradient is written in columns: $\nabla\bar{\mathbf{u}}= (\partial_j \bar u_i)_{i,j=1,\ldots,d}$.
This formulation of elasticity equations covers a number of classical models: \begin{itemize} \item the linear elasticity model with $\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}))=\mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}(\mathbf{u})$, in which $\mathbb{C}$ is a $4$th-order stiffness tensor, \item the damage models of \cite{CCC10b} with $\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}))=(1-D(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}))) \mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}(\mathbf{u})$, where the damage index $D$ is a scalar function, \item the non-linear Hencky-von Mises elasticity model \cite{NEC86} in which $\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}))=\widetilde{\lambda}({\rm dev}(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u})))\mathop{\rm tr}(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}))\mathbf{I} +2\widetilde{\mu}({\rm dev}(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u})))\mbox{\boldmath{$\varepsilon$}}(\mathbf{u})$, where $\widetilde{\lambda}$ and $\widetilde{\mu}$ are the non-linear Lam\'e coefficients, $\mathop{\rm tr}$ is the trace operator and ${\rm dev}(\btau)=(\btau-\frac{1}{2}\mathop{\rm tr}(\btau)\mathbf{I}): (\btau-\frac{1}{2}\mathop{\rm tr}(\btau)\mathbf{I})$ is the deviatoric operator. \end{itemize} Convergence of conforming Finite Element methods for the linear elasticity problem can be obtained by using standard techniques \cite{Cia78,BS94}. This convergence analysis covers the case when the solution does not possess full $H^2$-regularity. However, convergence analysis for non-conforming Finite Element methods is most often done using the full $H^2$-regularity of the solution \cite{BS92,BS94,BCR04,LRW06,BH06}. Similarly, the convergence of numerical methods for non-linear elasticity models only seems to have been established for conforming approximations (i.e.
the space(s) of approximate solutions are subspaces of the space(s) of continuous solutions, whether a displacement or a several-field formulation is chosen) and assuming the full $H^2$-regularity of the solution \cite{GS02,CD04,BM05}. The Gradient Scheme framework is a setting, based on a few discrete elements and properties, which has been recently developed to analyse numerical methods for a vast number of diffusion models: linear or non-linear, local or non-local, stationary or transient models, etc. (see \cite{EYM11,DRO12,EYM13-2,EYM11-2,DRO13}). This framework is also currently being extended to the linear poroelasticity equation; see \cite{AEL13}. It has been shown that a number of well-known methods for diffusion equations are Gradient Schemes \cite{EYM12,DRO12,EYM13,DRO13}: Galerkin methods (including conforming Finite Element methods), Mixed Finite Element methods, Hybrid Mimetic Mixed methods (including Hybrid Finite Volumes, Mimetic Finite Differences and Mixed Finite Volumes), Discrete Duality Finite Volume methods, etc. Moreover, the Gradient Scheme framework enables convergence analysis of all these numerical methods, for all the aforementioned models, under very unrestrictive assumptions. The key feature of Gradient Schemes is that they provide a unified framework for the convergence analysis of many different numerical schemes for linear and non-linear diffusion equations without assuming the full $H^2$-regularity of the solution. In practice, the full $H^2$-regularity is not achieved due to the non-convexity of the domain, corner singularities, discontinuities of the stiffness tensor, non-smooth data and mixed boundary conditions. The aim of this paper is to extend the Gradient Scheme framework to linear and non-linear elasticity models, thus showing that all the advantages of this analysis framework can be applied to classical numerical techniques developed for elasticity equations. The paper is organised as follows.
In the next section, we introduce the notion of Gradient Discretisations, used to define Gradient Schemes for \eqref{eq:elas-strong}. We also state the three properties, \emph{consistency}, \emph{limit-conformity} and \emph{coercivity}, that a Gradient Discretisation must satisfy in order to lead to a stable and convergent numerical scheme. In Section \ref{sec:lin}, we first analyse the convergence of Gradient Schemes for linear elasticity equations, providing an error estimate under very weak regularity assumptions on the data and the solution. We then carry out the convergence analysis for fully non-linear models, proving the convergence of the approximate solutions under the same unrestrictive assumptions. Section \ref{sec:ex} is devoted to the study of some examples of Gradient Schemes. We show in particular that many schemes for elasticity equations, including methods developed to handle the nearly incompressible limit and acute bending, fall into the framework of Gradient Schemes, and that our convergence analysis -- for both linear and non-linear models -- therefore applies to them. Some conclusions of the paper are summarised in the final section. \section{Definition of Gradient Schemes for the elasticity equation}\label{sec:gs} Our general assumptions on the data are as follows.
\begin{equation}\label{hyp:omega} \begin{array}{l} \Omega\mbox{ is a connected open subset of $\mathbf{R}^d$ ($d\ge 1$) with Lipschitz boundary,}\\ \mbox{${\Gamma_D}$ and ${\Gamma_N}$ are disjoint subsets of $\partial\Omega$ such that $\partial\Omega= {\Gamma_D}\cup{\Gamma_N}$ and}\\ \mbox{${\Gamma_D}$ has a non-zero $(d-1)$-dimensional measure}, \end{array} \end{equation} \begin{equation}\label{hyp:fg} \mathbf{F}\in \mathbf{L}^2(\Omega)\,,\quad \mathbf{g}\in \mathbf{L}^2({\Gamma_N}) \end{equation} (where $\mathbf{L}^2(X)=(L^2(X))^d$) and, denoting by $\mathcal S_{d\times d}$ the set of symmetric $d\times d$ tensors, \begin{equation}\label{hyp:stress} \begin{array}{l} \displaystyle \mbox{\boldmath{$\sigma$}}:(x,\btau)\in\Omega\times \mathcal S_{d\times d} \mapsto \mbox{\boldmath{$\sigma$}}(x,\btau)\in \mathcal S_{d\times d}\mbox{ is a Carath\'eodory}\\ \displaystyle\mbox{function (i.e. measurable w.r.t. $x$ and continuous w.r.t. $\btau$) and}\\ \exists \sigma^*,\sigma_*>0\mbox{ such that, for a.e. }x\in\Omega\,,\; \forall \btau,\bomega\in \mathcal S_{d\times d}\,,\\ \quad\begin{array}{l@{\qquad}l} |\mbox{\boldmath{$\sigma$}}(x,\btau)|\le \sigma^*|\btau|+\sigma^*&\mbox{(growth)},\\ \mbox{\boldmath{$\sigma$}}(x,\btau):\btau\ge \sigma_*|\btau|^2&\mbox{(coercivity)},\\ (\mbox{\boldmath{$\sigma$}}(x,\btau)-\mbox{\boldmath{$\sigma$}}(x,\bomega)):(\btau-\bomega)\ge 0&\mbox{(monotonicity)}, \end{array} \end{array} \end{equation} where, for $\btau,\bomega\in \mathbf{R}^{d\times d}$, $\btau:\bomega=\sum_{i,j=1}^d \btau_{ij}\bomega_{ij}$ and $|\btau|^2=\btau:\btau$. In the following, we also denote by $\cdot$ and $|\cdot|$ the Euclidean product and norm on $\mathbf{R}^d$. \begin{remark} Note that the linear elasticity and the Hencky-von Mises models both satisfy these assumptions (see \cite[Lemma 4.1]{BAR02} for a proof of the monotonicity of the Hencky-von Mises model).
One can also see that the damage model $\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}))=(1-D(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u})))\mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}(\mathbf{u})$ satisfies \eqref{hyp:stress} if $1-D(\boldsymbol{\xi})=f(|\boldsymbol{\xi}|)$ where, for some $0<\underline{d}\le \overline{d}$, $f$ is continuous $[0,\infty)\to [\underline{d},\overline{d}]$ and such that $s\in[0,\infty)\mapsto sf(s)$ is non-decreasing. \label{rem:dam}\end{remark} Under these assumptions, and defining $\mathbf{H}^1(\Omega)=H^1(\Omega)^d$, $\gamma:\mathbf{H}^1(\Omega)\to \mathbf{L}^{2}(\partial\Omega)$ the trace operator and $\mathbf{H}^1_{{\Gamma_D}}(\Omega)= \{\mathbf{v}\in \mathbf{H}^1(\Omega)\,:\,\gamma(\mathbf{v})=0\mbox{ on ${\Gamma_D}$}\}$, the weak formulation of \eqref{eq:elas-strong} is \begin{equation}\label{eq:elas-weak} \begin{array}{l} \displaystyle\mbox{Find $\bar{\mathbf{u}}\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$ such that, for any $\mathbf{v} \in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$},\\ \begin{array}{rcl}\displaystyle \int_\Omega \mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x)):\mbox{\boldmath{$\varepsilon$}}(\mathbf{v})(x){\rm d} x&=&\displaystyle \int_\Omega \mathbf{F}(x)\cdot\mathbf{v}(x){\rm d} x\\ &&\displaystyle +\int_{{\Gamma_N}}\mathbf{g}(x)\cdot\gamma(\mathbf{v})(x){\rm d} S(x). \end{array} \end{array} \end{equation} Gradient Schemes for such equations are based on Gradient Discretisations, which consist in introducing a discrete space, gradient, trace and reconstructed function, and using those to approximate \eqref{eq:elas-weak}. The following definitions are adapted to elasticity equations, and to non-homogeneous mixed boundary conditions, from the theory developed in \cite{EYM11,DRO12} for diffusion equations with homogeneous Dirichlet boundary conditions.
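The three structural conditions in \eqref{hyp:stress} are easy to test numerically for a given law. The sketch below (our illustration, with hypothetical Lam\'e values $\lambda=\mu=1$ and $d=2$) checks growth, coercivity and monotonicity for the linear isotropic law $\mbox{\boldmath{$\sigma$}}(\btau)=\lambda\mathop{\rm tr}(\btau)\mathbf{I}+2\mu\btau$ on random symmetric tensors, with the constants $\sigma^*=\lambda d+2\mu$ and $\sigma_*=2\mu$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, d = 1.0, 1.0, 2          # hypothetical Lamé coefficients, d = 2

def sigma(tau):
    """Linear isotropic law: sigma(tau) = lam*tr(tau)*I + 2*mu*tau."""
    return lam * np.trace(tau) * np.eye(d) + 2 * mu * tau

def frob(a):
    """Frobenius norm |a| = sqrt(a:a)."""
    return np.sqrt(np.sum(a * a))

for _ in range(1000):
    a, b = rng.standard_normal((2, d, d))
    t, w = (a + a.T) / 2, (b + b.T) / 2        # random symmetric tensors
    # growth: |sigma(tau)| <= (lam*d + 2*mu)*|tau|
    assert frob(sigma(t)) <= (lam * d + 2 * mu) * frob(t) + 1e-12
    # coercivity: sigma(tau):tau >= 2*mu*|tau|^2   (uses lam >= 0)
    assert np.tensordot(sigma(t), t) >= 2 * mu * frob(t)**2 - 1e-12
    # monotonicity: (sigma(tau) - sigma(omega)):(tau - omega) >= 0
    assert np.tensordot(sigma(t) - sigma(w), t - w) >= -1e-12
print("all three conditions hold on 1000 random samples")
```

For a linear law the monotonicity check is of course equivalent to coercivity applied to $\btau-\bomega$; for the non-linear laws above it is a genuinely separate condition.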
\begin{definition}[Gradient Discretisation for the elasticity equation] \label{def:grad-disc}~\\ A Gradient Discretisation ${\mathcal{D}}$ for Problem \eqref{eq:elas-strong} is ${\mathcal{D}} =(\X{{\mathcal{D}}},\Pi_{\mathcal{D}},{\mathcal T}_{{\mathcal{D}}},\nabla_{\mathcal{D}})$, where: \begin{enumerate} \item the set of discrete unknowns $\X{{\mathcal{D}}}$ is a finite dimensional vector space on $\mathbf{R}$ whose definition includes the null trace condition on ${\Gamma_D}$, \item the linear mapping $\Pi_{\mathcal{D}}~:~\X{{\mathcal{D}}}\to \mathbf{L}^2(\Omega)$ is the reconstruction of the approximate function, \item the linear mapping ${\mathcal T}_{{\mathcal{D}}}:\X{{\mathcal{D}}}\to \mathbf{L}^2({\Gamma_N})$ is a discrete trace operator, \item the linear mapping $\nabla_{\mathcal{D}}~:~\X{{\mathcal{D}}}\to \mathbf{L}^2(\Omega)^{d}$ is the discrete gradient operator. It must be chosen such that $\Vert \cdot \Vert_{{\mathcal{D}}} := \Vert \nabla_{\mathcal{D}} \cdot \Vert_{\mathbf{L}^2(\Omega)^{d}}$ is a norm on $\X{{\mathcal{D}}}$. \end{enumerate} \end{definition} Once a Gradient Discretisation is available, the related Gradient Scheme consists in writing the weak formulation \eqref{eq:elas-weak} with the continuous spaces and operators replaced by their discrete counterparts.
\begin{definition}[Gradient Scheme for the elasticity equation]\label{def:grad-scheme}~\\ If ${\mathcal{D}} = (\X{{\mathcal{D}}},\Pi_{\mathcal{D}},{\mathcal T}_{{\mathcal{D}}},\nabla_{\mathcal{D}})$ is a Gradient Discretisation in the sense of Definition \ref{def:grad-disc}, then we define the related \emph{Gradient Scheme} for \eqref{eq:elas-strong} by \begin{equation}\label{grad-scheme} \begin{array}{l} \displaystyle \mbox{Find $\mathbf{u}\in \X{{\mathcal{D}}}$ such that, $\forall \mathbf{v}\in \X{{\mathcal{D}}}$,}\\[0.5em] \begin{array}{rcl} \displaystyle \int_\Omega \mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{u})(x)):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})(x){\rm d} x&=&\displaystyle \int_\Omega \mathbf{F}(x)\cdot\Pi_{\mathcal{D}}\mathbf{v}(x){\rm d} x\\ &&\displaystyle\qquad+\int_{{\Gamma_N}}\mathbf{g}(x)\cdot{\mathcal T}_{\mathcal{D}}(\mathbf{v})(x){\rm d} S(x) \end{array} \end{array}\end{equation} where $\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})=\frac{\nabla_{\mathcal{D}} \mathbf{v}+(\nabla_{\mathcal{D}} \mathbf{v})^T}{2}$. \end{definition} The definitions of consistency, limit-conformity and compactness of Gradient Discretisations for Equation \eqref{eq:elas-strong} are the same as for diffusion equations, taking into account the fact that functions are vector- or tensor-valued in the elasticity model. The consistency of a sequence of Gradient Discretisations ensures that any function in the energy space can be approximated, along with its gradient, by discrete functions.
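As a concrete toy instance of Definitions \ref{def:grad-disc} and \ref{def:grad-scheme} (our illustration, not taken from the examples of Section \ref{sec:ex}), take $d=1$, $\Omega=(0,1)$, ${\Gamma_D}=\{0\}$, ${\Gamma_N}=\{1\}$, $\mathbf{g}=0$ and conforming $\mathbb{P}_1$ finite elements: $\X{{\mathcal{D}}}$ collects the nodal values vanishing at $x=0$, $\Pi_{\mathcal{D}}$ is the piecewise-linear reconstruction and $\nabla_{\mathcal{D}}$ the broken derivative. For the identity stress law and $\mathbf{F}=1$, the Gradient Scheme \eqref{grad-scheme} reduces to the usual $\mathbb{P}_1$ system:

```python
import numpy as np

def gradient_scheme_1d(n):
    """Conforming P1 instance of a Gradient Scheme on (0,1):
    -(u')' = 1, u(0) = 0 (Gamma_D), u'(1) = 0 (Gamma_N, g = 0).
    X_D = nodal values at x_1..x_n (node 0 is fixed, encoding the
    null trace condition), nabla_D = broken gradient, Pi_D = the
    piecewise-linear reconstruction."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n, n))            # discrete bilinear form
    F = np.zeros(n)                 # source tested against Pi_D v
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    fe = np.array([h / 2, h / 2])   # exact element load for F = 1
    for e in range(n):              # element e has nodes e and e+1
        for a, i in enumerate((e - 1, e)):   # dof = node - 1; -1 is fixed
            if i < 0:
                continue
            F[i] += fe[a]
            for b, j in enumerate((e - 1, e)):
                if j >= 0:
                    K[i, j] += ke[a, b]
    return x, np.concatenate(([0.0], np.linalg.solve(K, F)))

x, u = gradient_scheme_1d(16)
print(np.max(np.abs(u - (x - x**2 / 2))))   # nodally exact up to round-off
```

For this right-hand side the discrete solution coincides with the exact solution $u(x)=x-x^2/2$ at the nodes, a classical property of $\mathbb{P}_1$ elements for the 1D Poisson problem.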
\begin{definition}[Consistency] \label{def:cons} Let ${\mathcal{D}}$ be a Gradient Discretisation in the sense of Definition \ref{def:grad-disc}, and let $S_{{\mathcal{D}}}:\mathbf{H}^1_{{\Gamma_D}}(\Omega)\to [0,+\infty)$ be defined by \begin{equation} \begin{array}{l} \displaystyle \forall \boldsymbol{\varphi}\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)\,,\\[0.5em] \displaystyle S_{{\mathcal{D}}}(\boldsymbol{\varphi}) = \min_{\mathbf{v}\in \X{{\mathcal{D}}}}\big\{\Vert \Pi_{\mathcal{D}} \mathbf{v} - \boldsymbol{\varphi}\Vert_{\mathbf{L}^2(\Omega)} + \Vert \nabla_{\mathcal{D}} \mathbf{v} -\nabla\boldsymbol{\varphi}\Vert_{\mathbf{L}^2(\Omega)^{d}}\big\}. \end{array} \label{def:sdisc} \end{equation} A sequence $({\mathcal{D}}_m)_{m\in\mathbb N}$ of Gradient Discretisations is said to be \textbf{consistent} if, for all $\boldsymbol{\varphi}\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$, $S_{{\mathcal{D}}_m}(\boldsymbol{\varphi})\to 0$ as $m\to\infty$. \end{definition} The limit-conformity of a sequence of Gradient Discretisations ensures that the dual of the discrete gradient behaves as an approximation of the divergence operator. We let \[ \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N}) = \{\btau\in \mathbf{L}^2(\Omega)^{d}\,:\, {\rm div}\,\btau\in \mathbf{L}^{2}(\Omega)\,,\;\gamma_\mathbf{n}(\btau)\in\mathbf{L}^2({\Gamma_N})\} \] where $\gamma_\mathbf{n}(\btau)$ is the normal trace of $\btau$. This normal trace is well defined in $\mathbf{H}^{-1/2}(\partial\Omega)$ if $\btau\in \mathbf{L}^2(\Omega)^d$ and ${\rm div}(\btau)\in \mathbf{L}^2(\Omega)$ (\footnote{The divergence of a tensor $\btau$ is taken row by row, i.e. if $\btau=(\btau_{i,j})_{i,j=1,\ldots,d}$ then ${\rm div}(\btau)=(\sum_{j=1}^d\partial_j \btau_{i,j})_{i=1,\ldots,d}$. This definition is consistent with our definition of $\nabla$ by columns, in the sense that $-{\rm div}$ is the formal dual operator of $\nabla$.}).
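To make $S_{\mathcal{D}}$ concrete in a simple case (our illustration): for a conforming $\mathbb{P}_1$ discretisation of $\Omega=(0,1)$ with ${\Gamma_D}=\{0\}$, taking $\mathbf{v}$ to be the nodal interpolant of $\boldsymbol{\varphi}$ already yields an upper bound on $S_{{\mathcal{D}}}(\boldsymbol{\varphi})$, and this bound decays like $O(h)$ under mesh refinement (the gradient term dominates). A sketch, with composite two-point Gauss quadrature standing in for the exact $L^2$ norms:

```python
import numpy as np

def consistency_bound(phi, dphi, n):
    """Upper bound on S_D(phi) for conforming P1 on a uniform mesh of
    (0,1): choose v in X_D as the nodal interpolant of phi (admissible
    since phi(0) = 0) and evaluate the two L2 terms of S_D with
    composite two-point Gauss quadrature."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    v = phi(x)
    g = np.diff(v) / h                               # broken gradient
    e_fun = e_grad = 0.0
    for off in (0.5 - 0.5 / np.sqrt(3), 0.5 + 0.5 / np.sqrt(3)):
        q = x[:-1] + off * h                         # one Gauss point per cell
        pi_v = v[:-1] + g * (q - x[:-1])             # P1 reconstruction at q
        e_fun += 0.5 * h * np.sum((pi_v - phi(q)) ** 2)
        e_grad += 0.5 * h * np.sum((g - dphi(q)) ** 2)
    return np.sqrt(e_fun) + np.sqrt(e_grad)

phi = lambda x: np.sin(np.pi * x / 2)                # phi(0) = 0
dphi = lambda x: (np.pi / 2) * np.cos(np.pi * x / 2)
vals = [consistency_bound(phi, dphi, n) for n in (8, 16, 32)]
print(vals)    # decreases roughly like O(1/n)
```

Successive halvings of $h$ roughly halve the bound, consistent with first-order accuracy of $\mathbb{P}_1$ interpolation in the $H^1$ norm.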
\begin{definition}[Limit-conformity] \label{def:lim-conf} Let ${\mathcal{D}}$ be a Gradient Discretisation in the sense of Definition \ref{def:grad-disc}. We define $W_{{\mathcal{D}}}:\mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d\to [0,+\infty)$ by \begin{equation} \begin{array}{l@{}l} \displaystyle \forall \btau\in \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d\,,\\ \displaystyle W_{{\mathcal{D}}}(\btau) = \mathop{\max_{\mathbf{v}\in \X{{\mathcal{D}}}}}_{\mathbf{v}\neq 0} \frac{1}{\Vert \mathbf{v} \Vert_{{\mathcal{D}}}}&\displaystyle \Bigg| \int_\Omega \big(\nabla_{\mathcal{D}} \mathbf{v}(x):\btau(x) + \Pi_{\mathcal{D}} \mathbf{v}(x) \cdot {\rm div}(\btau)(x)\big) {\rm d} x\\ &\displaystyle \quad-\int_{{\Gamma_N}}\gamma_\mathbf{n}(\btau)(x)\cdot{\mathcal T}_{\mathcal{D}}(\mathbf{v})(x){\rm d} S(x)\Bigg|. \end{array} \label{def:wdisc}\end{equation} A sequence $({\mathcal{D}}_m)_{m\in\mathbb N}$ of Gradient Discretisations is said to be \textbf{limit-conforming} if, for all $\btau\in \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d$, $W_{{\mathcal{D}}_m}(\btau)\to 0$ as $m\to\infty$. \end{definition} The definition of coercivity of Gradient Discretisations for the elasticity equation starts in the same way as for diffusion equations. However, since the natural energy estimate for elasticity equations is not on $\nabla\bar{\mathbf{u}}$ but on $\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})$, as in the continuous case we must add to it some discrete form of Korn's inequality. \begin{definition}[Coercivity] \label{def:coer} Let ${\mathcal{D}}$ be a Gradient Discretisation in the sense of Definition \ref{def:grad-disc}.
We define $C_{\mathcal{D}}$ (maximum of the norms of the linear mappings $\Pi_{\mathcal{D}}$ and ${\mathcal T}_{\mathcal{D}}$) by \begin{equation} C_{\mathcal{D}} = \max_{\mathbf{v}\in \X{{\mathcal{D}}}\setminus\{0\}}\left(\frac {\Vert \Pi_{\mathcal{D}} \mathbf{v}\Vert_{\mathbf{L}^2(\Omega)}} {\Vert \mathbf{v} \Vert_{{\mathcal{D}}}},\frac {\Vert {\mathcal T}_{\mathcal{D}} \mathbf{v}\Vert_{\mathbf{L}^2({\Gamma_N})}} {\Vert \mathbf{v} \Vert_{{\mathcal{D}}}}\right) \label{def:Cdisc}\end{equation} and $K_{\mathcal{D}}$ (constant of the discrete Korn inequality) by \begin{equation} K_{\mathcal{D}} = \max_{\mathbf{v}\in \X{{\mathcal{D}}}\setminus\{0\}}\frac {\Vert \mathbf{v}\Vert_{{\mathcal{D}}}} {\Vert \mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}) \Vert_{\mathbf{L}^2(\Omega)^{d}}}. \label{def:Kdisc}\end{equation} A sequence $({\mathcal{D}}_m)_{m\in\mathbb N}$ of Gradient Discretisations is said to be \textbf{coercive} if there exists $C_P>0$ such that $C_{{\mathcal{D}}_m}+K_{{\mathcal{D}}_m} \le C_P$ for all $m\in\mathbb N$. \end{definition} The definition of $C_{\mathcal{D}}$ gives the following discrete Poincar\'e inequality: \begin{equation}\label{ineq:poincare} \forall \mathbf{v}\in \X{{\mathcal{D}}}\,:\quad \Vert \Pi_{\mathcal{D}} \mathbf{v}\Vert_{\mathbf{L}^2(\Omega)} \le C_{\mathcal{D}} \Vert \nabla_{\mathcal{D}} \mathbf{v} \Vert_{\mathbf{L}^2(\Omega)^{d}}. \end{equation} \begin{remark}[Non-homogeneous Dirichlet boundary conditions] Non-homogeneous Dirichlet boundary conditions $\bar{\mathbf{u}}=\mathbf{h}$ can also be considered in \eqref{eq:elas-strong} and in the framework of Gradient Schemes, upon introducing an interpolation operator and modifying the definition of limit-conformity to take this interpolation operator into account. See \cite{DRO13} for diffusion equations.
\end{remark} \begin{remark} Although it does not seem to relate to any elasticity model we know of, we could also handle a dependency of $\mbox{\boldmath{$\sigma$}}$ on $\bar{\mathbf{u}}$, i.e. $\mbox{\boldmath{$\sigma$}}=\mbox{\boldmath{$\sigma$}}(x,\bar{\mathbf{u}},\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))$, upon adding a compactness property of Gradient Discretisations (see \cite{DRO12} for the handling of such lower-order terms in diffusion equations). \end{remark} \section{Convergence analysis} \subsection{Linear case}\label{sec:lin} We assume here that the relationship between the strain and the stress is linear, and thus given by a $4$th-order stiffness tensor: \begin{equation} \begin{array}{l} \mbox{There exists a measurable $\mathbb{C}:\Omega\to \mathbf{R}^{d^4}$ such that $\mbox{\boldmath{$\sigma$}}(x,\btau)=\mathbb{C}(x)\btau$ and}\\ \exists \sigma^*,\sigma_*>0\mbox{ s.t., for a.e. }x\in\Omega\,,\;\forall \btau,\bomega\in \mathbf{R}^{d\times d}\,,\\ \begin{array}{ll} \mathbb{C}(x)\btau:\bomega = \btau : \mathbb{C}(x)\bomega\mbox{ and } (\mathbb{C}(x)\btau)^T=\mathbb{C}(x)\btau^T\,, &\mbox{(symmetry)},\\ |\mathbb{C}(x)|\le \sigma^*&\mbox{(bound)},\\ \mathbb{C}(x)\btau:\btau\ge \sigma_*|\btau|^2&\mbox{(coercivity)}. \end{array} \end{array} \label{hyp:stiff}\end{equation} \begin{remark} These assumptions imply \eqref{hyp:stress} and cover the classical linear elasticity model $\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}))=\lambda \mathop{\rm tr}(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}))\mathbf{I} +2\mu\mbox{\boldmath{$\varepsilon$}}(\mathbf{u})$ (i.e. the Hencky-von Mises model with Lam\'e coefficients not depending on $\mathbf{u}$).
\end{remark} In this linear setting, the Gradient Scheme \eqref{grad-scheme} takes the form \begin{equation}\label{grad-scheme:lin} \begin{array}{l} \displaystyle \mbox{Find $\mathbf{u}\in \X{{\mathcal{D}}}$ such that, $\forall \mathbf{v}\in \X{{\mathcal{D}}}$,}\\[0.5em] \begin{array}{ll} \displaystyle \int_\Omega \mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{u})(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})(x){\rm d} x&\displaystyle = \int_\Omega \mathbf{F}(x)\cdot\Pi_{\mathcal{D}}\mathbf{v}(x){\rm d} x\\ &\displaystyle\qquad\qquad+\int_{{\Gamma_N}}\mathbf{g}(x)\cdot{\mathcal T}_{\mathcal{D}}(\mathbf{v})(x){\rm d} S(x). \end{array} \end{array}\end{equation} The proof of the following error estimate is an adaptation of similar estimates established in \cite{EYM11} for linear diffusion equations with homogeneous Dirichlet boundary conditions. \begin{theorem}[Error estimate of the Gradient Scheme for linear elasticity]\label{thm:error-est} We assume that \eqref{hyp:omega}, \eqref{hyp:fg} and \eqref{hyp:stiff} hold and we let $\bar{\mathbf{u}}$ be the solution to \eqref{eq:elas-weak} (note that $\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))= \mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}) \in \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d$ since $\mathbf{F}\in \mathbf{L}^2(\Omega)$ and $\mathbf{g}\in \mathbf{L}^2({\Gamma_N})$).
If ${\mathcal{D}}$ is a Gradient Discretization in the sense of Definition \ref{def:grad-disc} then the Gradient Scheme \eqref{grad-scheme:lin} has a unique solution $\mathbf{u}_{\mathcal{D}}$ and it satisfies: \begin{align} & \Vert \nabla \bar{\mathbf{u}} - \nabla_{\mathcal{D}} \mathbf{u}_{\mathcal{D}}\Vert_{\mathbf{L}^2(\Omega)^{d}} \le \frac{K_{\mathcal{D}}^2}{\sigma_*} W_{\mathcal{D}}(\mathbb{C} \mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})) + \left(\frac{K_{\mathcal{D}}^2 \sigma^*}{\sigma_*}+1\right)S_{\mathcal{D}}(\bar{\mathbf{u}}), \label{err:grad}\\ & \Vert \bar{\mathbf{u}} -\Pi_{\mathcal{D}} \mathbf{u}_{\mathcal{D}}\Vert_{\mathbf{L}^2(\Omega)} \le \frac{C_{\mathcal{D}} K_{\mathcal{D}}^2}{\sigma_*} W_{\mathcal{D}}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})) +\left(\frac{C_{\mathcal{D}} K_{\mathcal{D}}^2 \sigma^*}{\sigma_*}+1\right) S_{\mathcal{D}}(\bar{\mathbf{u}}), \label{err:u} \end{align} where $S_{\mathcal{D}}$, $W_{\mathcal{D}}$, $C_{\mathcal{D}}$ and $K_{\mathcal{D}}$ are defined by \eqref{def:sdisc}, \eqref{def:wdisc}, \eqref{def:Cdisc} and \eqref{def:Kdisc}. \end{theorem} \begin{proof} Let us first notice that if we prove \eqref{err:grad} for any solution $\mathbf{u}_{\mathcal{D}}$ to the Gradient Scheme \eqref{grad-scheme:lin}, then the existence and uniqueness of this solution follows. Indeed, \eqref{grad-scheme:lin} defines a square linear system and if $\mathbf{F}=0$ and $\mathbf{g}=0$ (meaning that $\bar{\mathbf{u}}=0$) then \eqref{err:grad} shows that the only solution to this system is $0$, since $||\nabla_{\mathcal{D}}\cdot||_{\mathbf{L}^2(\Omega)^{d}}$ is a norm on $\X{{\mathcal{D}}}$. Hence, this system is invertible and \eqref{grad-scheme:lin} has a solution for any right-hand side functions $\mathbf{F}$ and $\mathbf{g}$ satisfying \eqref{hyp:fg}. Let us now prove the error estimates.
Since $\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})\in \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d$, the definition of $W_{\mathcal{D}}$ gives, for any $\mathbf{v}\in \X{{\mathcal{D}}}$, \begin{equation} \label{proof:err1} \begin{array}{r@{}l} \displaystyle ||\nabla_{\mathcal{D}} \mathbf{v}||_{\mathbf{L}^2(\Omega)^{d}}&W_{\mathcal{D}}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))\\[0.5em] \ge&{}\displaystyle \left\vert \int_\Omega \big(\nabla_{\mathcal{D}} \mathbf{v}(x):\mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x) +\Pi_{\mathcal{D}}\mathbf{v}(x)\cdot{\rm div}(\mathbb{C} \mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))(x)\big){\rm d} x\right.\\[0.5em] &\displaystyle \left. -\int_{{\Gamma_N}}\gamma_\mathbf{n}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))(x)\cdot {\mathcal T}_{\mathcal{D}}(\mathbf{v})(x){\rm d} S(x)\right\vert\\[0.5em] \ge&{}\displaystyle \left\vert \int_\Omega \big(\nabla_{\mathcal{D}} \mathbf{v}(x):\mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x) -\Pi_{\mathcal{D}}\mathbf{v}(x)\cdot \mathbf{F}(x)\big){\rm d} x\right.\\[0.5em] &\displaystyle \left.-\int_{{\Gamma_N}}\mathbf{g}(x)\cdot {\mathcal T}_{\mathcal{D}}(\mathbf{v})(x){\rm d} S(x)\right\vert.
\end{array} \end{equation} By symmetry of $\mathbb{C}$ we have $\mathbb{C} \mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}):\nabla_{\mathcal{D}}\mathbf{v} =\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})$ and \eqref{proof:err1} therefore gives, since $\mathbf{u}_{\mathcal{D}}$ is a solution to \eqref{grad-scheme:lin}, \begin{multline}\label{eq:rap} ||\nabla_{\mathcal{D}} \mathbf{v}||_{\mathbf{L}^2(\Omega)^{d}} W_{\mathcal{D}}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))\\ \ge\left\vert \int_\Omega \mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})(x) -\mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{u}_{\mathcal{D}})(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})(x){\rm d} x\right\vert. \end{multline} Defining, for all $\varphi\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$, \begin{equation}\label{def:PD} P_{{\mathcal{D}}} \varphi = \mathop{\rm argmin}_{\mathbf{w}\in \X{{\mathcal{D}}}}\big\{\Vert \Pi_{\mathcal{D}} \mathbf{w} -\varphi\Vert_{\mathbf{L}^2(\Omega)} + \Vert \nabla_{\mathcal{D}} \mathbf{w} - \nabla\varphi\Vert_{\mathbf{L}^2(\Omega)^{d}}\big\} \end{equation} and recalling the definition \eqref{def:sdisc} of $S_{\mathcal{D}}$, we have \begin{equation} ||\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})-\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(P_{\mathcal{D}}\bar{\mathbf{u}})||_{\mathbf{L}^2(\Omega)^{d}} \le ||\nabla\bar{\mathbf{u}}-\nabla_{\mathcal{D}} (P_{\mathcal{D}}\bar{\mathbf{u}})||_{\mathbf{L}^2(\Omega)^{d}} \le S_{\mathcal{D}}(\bar{\mathbf{u}}).
\label{proof:err1.1}\end{equation} Using the bound of $\mathbb{C}$ in \eqref{hyp:stiff} and Estimate \eqref{eq:rap}, we deduce \begin{equation} \begin{array}{r@{}l} \displaystyle\Bigg\vert \int_\Omega \mathbb{C}(x)&\displaystyle\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(P_{\mathcal{D}}\bar{\mathbf{u}}-\mathbf{u}_{\mathcal{D}})(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})(x) {\rm d} x\Bigg\vert\\[0.5em] \le&{}\displaystyle \left\vert \int_\Omega \mathbb{C}(x)(\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(P_{\mathcal{D}}\bar{\mathbf{u}})-\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})(x) {\rm d} x\right\vert\\[1em] &{}\displaystyle+\left\vert \int_\Omega \mathbb{C}(x)(\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})-\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{u}_{\mathcal{D}}))(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})(x) {\rm d} x\right\vert\\[1em] \le&\displaystyle\; \sigma^*S_{\mathcal{D}}(\bar{\mathbf{u}})||\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})||_{\mathbf{L}^2(\Omega)^{d}} +||\nabla_{\mathcal{D}} \mathbf{v}||_{\mathbf{L}^2(\Omega)^{d}}W_{\mathcal{D}}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))\\[0.5em] \le&{}\displaystyle\; ||\nabla_{\mathcal{D}} \mathbf{v}||_{\mathbf{L}^2(\Omega)^{d}} \big(W_{\mathcal{D}}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})) +\sigma^*S_{\mathcal{D}}(\bar{\mathbf{u}})\big).
\end{array} \label{proof:err1.2} \end{equation} Plugging $\mathbf{v}=P_{\mathcal{D}}\bar{\mathbf{u}}-\mathbf{u}_{\mathcal{D}}\in \X{{\mathcal{D}}}$ in \eqref{proof:err1.2} and using the coercivity of $\mathbb{C}$ gives \begin{multline}\label{proof:err2} \sigma_*||\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(P_{\mathcal{D}}\bar{\mathbf{u}}-\mathbf{u}_{\mathcal{D}})||_{\mathbf{L}^2(\Omega)^{d}}^2\\ \le ||\nabla_{\mathcal{D}} (P_{\mathcal{D}}\bar{\mathbf{u}}-\mathbf{u}_{\mathcal{D}})||_{\mathbf{L}^2(\Omega)^{d}} \big(W_{\mathcal{D}}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})) +\sigma^*S_{\mathcal{D}}(\bar{\mathbf{u}})\big). \end{multline} By definition \eqref{def:Kdisc} of $K_{\mathcal{D}}$, we have \[ ||\nabla_{\mathcal{D}}(P_{\mathcal{D}}\bar{\mathbf{u}}-\mathbf{u}_{\mathcal{D}})||_{\mathbf{L}^2(\Omega)^{d}}\le K_{\mathcal{D}} ||\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(P_{\mathcal{D}}\bar{\mathbf{u}}-\mathbf{u}_{\mathcal{D}})||_{\mathbf{L}^2(\Omega)^{d}} \] and \eqref{proof:err2} thus leads to \begin{equation}\label{proof:err3} ||\nabla_{\mathcal{D}} (P_{\mathcal{D}}\bar{\mathbf{u}})-\nabla_{\mathcal{D}}\mathbf{u}_{\mathcal{D}}||_{\mathbf{L}^2(\Omega)^{d}} \le \frac{K_{\mathcal{D}}^2}{\sigma_*} \big(W_{\mathcal{D}}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})) +\sigma^*S_{\mathcal{D}}(\bar{\mathbf{u}})\big) \end{equation} and the proof of \eqref{err:grad} is concluded thanks to \eqref{proof:err1.1}.
The Poincar\'e inequality \eqref{ineq:poincare} and \eqref{proof:err3} also give \[ ||\Pi_{\mathcal{D}} (P_{\mathcal{D}}\bar{\mathbf{u}})-\Pi_{\mathcal{D}} \mathbf{u}_{\mathcal{D}}||_{\mathbf{L}^2(\Omega)} \le \frac{C_{\mathcal{D}} K_{\mathcal{D}}^2}{\sigma_*} \big(W_{\mathcal{D}}(\mathbb{C}\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})) +\sigma^*S_{\mathcal{D}}(\bar{\mathbf{u}})\big), \] and the estimate $||\Pi_{\mathcal{D}} (P_{\mathcal{D}}\bar{\mathbf{u}})-\bar{\mathbf{u}}||_{\mathbf{L}^2(\Omega)} \le S_{\mathcal{D}}(\bar{\mathbf{u}})$ concludes the proof of \eqref{err:u}. \end{proof} The following corollary is a straightforward consequence of Theorem \ref{thm:error-est}. \begin{corollary}[Convergence of Gradient Schemes for linear elasticity] We assume that \eqref{hyp:omega}, \eqref{hyp:fg} and \eqref{hyp:stiff} hold. We denote by $\bar{\mathbf{u}}$ the solution to \eqref{eq:elas-weak}. If $({\mathcal{D}}_m)_{m\in\mathbb N}$ is a sequence of Gradient Discretizations in the sense of Definition \ref{def:grad-disc}, which is consistent (Definition \ref{def:cons}), limit-conforming (Definition \ref{def:lim-conf}) and coercive (Definition \ref{def:coer}), and if $\mathbf{u}_m\in \X{{\mathcal{D}}_m}$ is the solution to the Gradient Scheme \eqref{grad-scheme:lin} with ${\mathcal{D}}={\mathcal{D}}_m$, then, as $m\to\infty$, $\Pi_{{\mathcal{D}}_m} \mathbf{u}_m\to \bar{\mathbf{u}}$ strongly in $\mathbf{L}^2(\Omega)$ and $\nabla_{{\mathcal{D}}_m}\mathbf{u}_m \to \nabla\bar{\mathbf{u}}$ strongly in $\mathbf{L}^2(\Omega)^{d}$. \end{corollary} \begin{remark} This result is valid under no additional regularity assumption on the data or $\bar{\mathbf{u}}$. It holds in particular if $\partial\Omega$ has singularities or if $\mathbb{C}$ is discontinuous with respect to $x$, which corresponds to a body made of several different materials with interfaces (see e.g. \cite{BIS09}).
However, for most Gradient Schemes (and under reasonable assumptions on the mesh/discretisation), there exists $C>0$ not depending on ${\mathcal{D}}$ such that \begin{eqnarray*} \forall \varphi\in \mathbf{H}^2(\Omega)\cap \mathbf{H}^1_0(\Omega)\,,\;&& S_{\mathcal{D}}(\varphi)\le Ch_{\mathcal{D}}||\varphi||_{\mathbf{H}^2(\Omega)}\,,\\ \forall \btau\in \mathbf{H}^1(\Omega)^{d}\,,\;&& W_{\mathcal{D}}(\btau)\le Ch_{\mathcal{D}}||\btau||_{\mathbf{H}^1(\Omega)^{d}}, \end{eqnarray*} where $h_{\mathcal{D}}$ measures the scheme's precision (e.g. some mesh size). For such Gradient Schemes and when $\bar{\mathbf{u}}\in \mathbf{H}^2(\Omega)$ and $\mathbb{C}$ is Lipschitz continuous, Theorem \ref{thm:error-est} gives an $\mathcal O(h_{\mathcal{D}})$ error estimate for the approximation of $\bar{\mathbf{u}}$ and its gradient. We note that the solution is $H^2$-regular when we have a pure Dirichlet problem on a convex polygonal or polyhedral domain \cite{BS92,KMR01}. \end{remark} \subsection{Non-linear case} In the non-linear case, error estimates cannot be provided in general but convergence of the Gradient Scheme \eqref{grad-scheme} can still be proved without additional regularity assumptions on the data. \begin{theorem}[Convergence of Gradient Schemes for non-linear elasticity]\label{th:convnl} Assume that \eqref{hyp:omega}, \eqref{hyp:fg} and \eqref{hyp:stress} hold and let $({\mathcal{D}}_m)_{m\in\mathbb N}$ be a sequence of Gradient Discretizations in the sense of Definition \ref{def:grad-disc}, which is consistent (Definition \ref{def:cons}), limit-conforming (Definition \ref{def:lim-conf}) and coercive (Definition \ref{def:coer}).
Then, for any $m\in\mathbb N$ there exists at least one solution $\mathbf{u}_m\in \X{{\mathcal{D}}_m}$ to the Gradient Scheme \eqref{grad-scheme} with ${\mathcal{D}}={\mathcal{D}}_m$ and, up to a subsequence, as $m\to\infty$, $\Pi_{{\mathcal{D}}_m}\mathbf{u}_m$ converges weakly in $\mathbf{L}^2(\Omega)$ to some $\bar{\mathbf{u}}$ solution of \eqref{eq:elas-weak} and $\nabla_{{\mathcal{D}}_m}\mathbf{u}_m$ converges weakly in $\mathbf{L}^2(\Omega)^{d}$ to $\nabla\bar{\mathbf{u}}$. Moreover, if we assume that $\mbox{\boldmath{$\sigma$}}$ is strictly monotone in the following sense: \begin{equation}\label{hyp:strictmonotone} \mbox{For a.e. $x\in\Omega$, for all $\btau\not=\bomega$ in $\mathcal S_{d\times d}$}\,,\; (\mbox{\boldmath{$\sigma$}}(x,\btau)-\mbox{\boldmath{$\sigma$}}(x,\bomega)):(\btau-\bomega)>0 \end{equation} then, along the same subsequence, $\Pi_{{\mathcal{D}}_m}\mathbf{u}_m\to \bar{\mathbf{u}}$ strongly in $\mathbf{L}^2(\Omega)$ and $\nabla_{{\mathcal{D}}_m}\mathbf{u}_m\to \nabla\bar{\mathbf{u}}$ strongly in $\mathbf{L}^2(\Omega)^{d}$. \end{theorem} \begin{remark} If the sequence of Gradient Discretisations $({\mathcal{D}}_m)_{m\in\mathbb N}$ is \emph{compact} as defined in \cite{DRO12}, then the convergence of $\Pi_{{\mathcal{D}}_m}\mathbf{u}_m$ is strong even if the strict monotonicity \eqref{hyp:strictmonotone} is not satisfied. \end{remark} \begin{remark} Should the solution to \eqref{eq:elas-weak} be unique, classical arguments also show that the convergences of $(\mathbf{u}_m)_{m\in\mathbb N}$ in the senses described in Theorem \ref{th:convnl} hold for the whole sequence, not only for a subsequence. \end{remark} \begin{remark} We do not need to assume the existence of a solution to the non-linear elasticity model \eqref{eq:elas-weak}. The technique of convergence analysis we use in fact establishes this existence.
\end{remark} \begin{remark} The strict monotonicity assumption \eqref{hyp:strictmonotone} is satisfied by the Hencky-von Mises model (see \cite[Lemma 4.1]{BAR02}), and by the damage model when the function $f$ defined in Remark \ref{rem:dam} is such that $s\in[0,\infty)\mapsto sf(s)$ is \emph{strictly} increasing. \end{remark} \begin{proof} The proof follows the techniques used in \cite{DRO12} for the non-linear elliptic problem with homogeneous Dirichlet boundary conditions. We adapt those techniques to deal with the non-linear elasticity models with mixed non-homogeneous boundary conditions. In the following steps, we sometimes drop the index $m$ in ${\mathcal{D}}_m$ to simplify the notation. \textbf{Step 1}: A priori estimates and existence of a solution to the scheme. Let us take a scalar product $\langle\cdot,\cdot\rangle$ on $\X{{\mathcal{D}}}$, with associated norm $N(\cdot)$, and let us define $T:\X{{\mathcal{D}}}\to \X{{\mathcal{D}}}$ and $L\in \X{{\mathcal{D}}}$ by: for all $\mathbf{w},\mathbf{v}\in \X{{\mathcal{D}}}$, \[ \langle T(\mathbf{w}),\mathbf{v}\rangle=\int_\Omega \mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{w})(x)): \mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v})(x){\rm d} x \] and \[ \langle L,\mathbf{v}\rangle= \int_\Omega \mathbf{F}(x)\cdot\Pi_{\mathcal{D}}\mathbf{v}(x){\rm d} x+\int_{{\Gamma_N}}\mathbf{g}(x)\cdot{\mathcal T}_{\mathcal{D}}(\mathbf{v})(x){\rm d} S(x). \] Then Assumption \eqref{hyp:stress} ensures that $T$ is continuous and that \begin{equation}\label{prnl:est1} \langle T(\mathbf{w}),\mathbf{w}\rangle \ge \sigma_*||\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{w})||_{\mathbf{L}^2(\Omega)^{d}}^2 \ge\sigma_* K_{{\mathcal{D}}}^{-2} ||\mathbf{w}||_{\mathcal{D}}^2.
\end{equation} Since all norms are equivalent on $\X{{\mathcal{D}}}$, we also have $||\mathbf{w}||_{\mathcal{D}}\ge m_{\mathcal{D}} N(\mathbf{w})$ for some $m_{\mathcal{D}}>0$ and this shows that $\lim_{N(\mathbf{w})\to\infty}\frac{\langle T(\mathbf{w}),\mathbf{w}\rangle}{N(\mathbf{w})}=+\infty$. By \cite[Theorem 3.3 (p.19)]{DEI85}, we see that $T$ is onto and therefore that there exists $\mathbf{u}\in \X{{\mathcal{D}}}$ such that $T(\mathbf{u})=L$, which precisely states that $\mathbf{u}$ is a solution to \eqref{grad-scheme}. {}From \eqref{prnl:est1} and the definition \eqref{def:Cdisc} of $C_{\mathcal{D}}$, we also deduce that $\mathbf{u}$ satisfies \begin{multline*} ||\mathbf{u}||_{\mathcal{D}}^2 \le \frac{K_{\mathcal{D}}^2}{\sigma_*}\langle T(\mathbf{u}),\mathbf{u}\rangle =\frac{K_{\mathcal{D}}^2}{\sigma_*}\langle L,\mathbf{u}\rangle\\ \le \frac{K_{\mathcal{D}}^2}{\sigma_*}||\mathbf{F}||_{\mathbf{L}^2(\Omega)}||\Pi_{\mathcal{D}}\mathbf{u}||_{\mathbf{L}^2(\Omega)} +\frac{K_{\mathcal{D}}^2}{\sigma_*}||\mathbf{g}||_{\mathbf{L}^2({\Gamma_N})}||{\mathcal T}_{\mathcal{D}}\mathbf{u}||_{\mathbf{L}^2({\Gamma_N})}\\ \le \left(\frac{C_{\mathcal{D}} K_{\mathcal{D}}^2}{\sigma_*}||\mathbf{F}||_{\mathbf{L}^2(\Omega)} +\frac{C_{\mathcal{D}} K_{\mathcal{D}}^2}{\sigma_*}||\mathbf{g}||_{\mathbf{L}^2({\Gamma_N})}\right)||\mathbf{u}||_{\mathcal{D}}, \end{multline*} that is to say \begin{equation}\label{prnl:est2} ||\mathbf{u}||_{\mathcal{D}} \le \frac{C_{\mathcal{D}} K_{\mathcal{D}}^2}{\sigma_*}||\mathbf{F}||_{\mathbf{L}^2(\Omega)} +\frac{C_{\mathcal{D}} K_{\mathcal{D}}^2}{\sigma_*}||\mathbf{g}||_{\mathbf{L}^2({\Gamma_N})}. \end{equation} \textbf{Step 2}: Weak convergences.
By Estimate \eqref{prnl:est2}, $(||\mathbf{u}_m||_{{\mathcal{D}}_m})_{m\in\mathbb N}$ is bounded and Lemma \ref{lem:weakconv} below therefore shows that there exists $\bar{\mathbf{u}}\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$ such that, up to a subsequence, \begin{equation}\label{prnl:conv} \ba \displaystyle \Pi_{{\mathcal{D}}_m}\mathbf{u}_m\to \bar{\mathbf{u}}\mbox{ weakly in $\mathbf{L}^2(\Omega)$}\,,\\ \displaystyle \nabla_{{\mathcal{D}}_m}\mathbf{u}_m\to \nabla\bar{\mathbf{u}}\mbox{ weakly in $\mathbf{L}^2(\Omega)^{d}$ and}\\ \displaystyle {\mathcal T}_{{\mathcal{D}}_m}\mathbf{u}_m\to \gamma(\bar{\mathbf{u}})\mbox{ weakly in $\mathbf{L}^2({\Gamma_N})$.}\\ \end{array} \end{equation} Let us now prove that $\bar{\mathbf{u}}$ is a solution to \eqref{eq:elas-weak}. Assumption \eqref{hyp:stress} and the bound on $\nabla_{{\mathcal{D}}_m}\mathbf{u}_m$ show that $(\mbox{\boldmath{$\sigma$}}(\cdot,\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)))_{m\in\mathbb N}$ is symmetric-valued and bounded in $\mathbf{L}^2(\Omega)^{d}$. There exists therefore a symmetric-valued $\btau\in \mathbf{L}^2(\Omega)^{d}$ such that, up to a subsequence, \begin{equation}\label{prnl:conv1} \mbox{\boldmath{$\sigma$}}(\cdot,\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m))\to \btau\mbox{ weakly in $\mathbf{L}^2(\Omega)^{d}$}. \end{equation} Let $\varphi\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$. Then $P_{{\mathcal{D}}_m}\varphi$ defined by \eqref{def:PD} belongs to $\X{{\mathcal{D}}_m}$ and, by consistency of $({\mathcal{D}}_m)_{m\in\mathbb N}$, $\Pi_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\varphi)\to \varphi$ strongly in $\mathbf{L}^2(\Omega)$ and $\nabla_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\varphi) \to \nabla\varphi$ strongly in $\mathbf{L}^2(\Omega)^{d}$.
By Lemma \ref{lem:weakconv}, we also deduce that ${\mathcal T}_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\varphi) \to \gamma(\varphi)$ weakly in $\mathbf{L}^2({\Gamma_N})$. The convergence \eqref{prnl:conv1} then allows us to pass to the limit in \eqref{grad-scheme} with $\mathbf{v}=P_{{\mathcal{D}}_m}\varphi$ as a test function and we obtain \begin{equation}\label{prnl:conv2} \int_\Omega \btau(x):\mbox{\boldmath{$\varepsilon$}}(\varphi)(x){\rm d} x=\int_\Omega \mathbf{F}(x)\cdot\varphi(x){\rm d} x +\int_{{\Gamma_N}}\mathbf{g}(x)\cdot\gamma(\varphi)(x){\rm d} S(x). \end{equation} We now use the monotonicity assumption on $\mbox{\boldmath{$\sigma$}}$ and Minty's trick \cite{MIN63,LER65} to prove that $\btau=\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))$. We first notice that, plugging $\mathbf{v}=\mathbf{u}_m$ in \eqref{grad-scheme} and using \eqref{prnl:conv} and \eqref{prnl:conv2}, \begin{multline} \label{prnl:conv3} \int_\Omega \mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x)):\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x){\rm d} x\\ =\int_\Omega \mathbf{F}(x)\cdot\Pi_{{\mathcal{D}}_m}\mathbf{u}_m(x){\rm d} x+\int_{{\Gamma_N}}\mathbf{g}(x)\cdot {\mathcal T}_{{\mathcal{D}}_m}\mathbf{u}_m(x){\rm d} S(x)\\ \longrightarrow\int_\Omega \mathbf{F}(x)\cdot \bar{\mathbf{u}}(x){\rm d} x+\int_{{\Gamma_N}}\mathbf{g}(x)\cdot \gamma(\bar{\mathbf{u}})(x){\rm d} S(x) =\int_\Omega \btau(x):\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x){\rm d} x. \end{multline} Let us now take any symmetric-valued $\bomega\in \mathbf{L}^2(\Omega)^{d}$.
The monotonicity of $\mbox{\boldmath{$\sigma$}}$ shows that \[ A_m:=\int_\Omega \big[\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x))- \mbox{\boldmath{$\sigma$}}(x,\bomega(x))\big]:\big[\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x)-\bomega(x)\big]{\rm d} x\ge 0. \] After developing $A_m$, we can use \eqref{prnl:conv}, \eqref{prnl:conv1} and \eqref{prnl:conv3} to pass to the limit and we find \begin{equation}\label{prnl:conv24} \lim_{m\to\infty}A_m= \int_\Omega \big[\btau(x)-\mbox{\boldmath{$\sigma$}}(x,\bomega(x))\big] :\big[\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x)-\bomega(x)\big]{\rm d} x\ge 0. \end{equation} Minty's trick then consists in making suitable choices of $\bomega$ in this inequality. Applying it to $\bomega=\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})+\alpha\mathbf{\Delta}$ for some symmetric-valued $\mathbf{\Delta}\in \mathbf{L}^2(\Omega)^{d}$, dividing by $\alpha$ and letting $\alpha\to 0^{\pm}$ (thanks to Assumption \eqref{hyp:stress}), we obtain \[ \int_\Omega \big[\btau(x)-\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x))\big] :\mathbf{\Delta}(x){\rm d} x=0, \] which proves, with $\mathbf{\Delta}=\btau-\mbox{\boldmath{$\sigma$}}(\cdot,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))$, that \begin{equation}\label{prnl:ident} \btau=\mbox{\boldmath{$\sigma$}}(\cdot,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})). \end{equation} Together with \eqref{prnl:conv2} this shows that $\bar{\mathbf{u}}$ satisfies \eqref{eq:elas-weak}. \textbf{Step 3}: Strong convergences under strict monotonicity. We now assume that \eqref{hyp:strictmonotone} holds and we first prove the strong convergence of the strain tensors.
We define \[ f_m=\big[\mbox{\boldmath{$\sigma$}}(\cdot,\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m))- \mbox{\boldmath{$\sigma$}}(\cdot,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}}))\big] :\big[\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)-\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})\big]. \] The function $f_m$ is non-negative and, by \eqref{prnl:conv24} with $\bomega=\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})$ and the identity \eqref{prnl:ident}, we see that $\lim_{m\to\infty}\int_\Omega f_m(x){\rm d} x=0$. The sequence $(f_m)_{m\in\mathbb N}$ thus converges to $0$ in $L^1(\Omega)$, and therefore also a.e. on $\Omega$ up to a subsequence. Let us take $x\in\Omega$ such that this a.e. convergence holds at $x$. {}From the coercivity and growth of $\mbox{\boldmath{$\sigma$}}$, developing the products in $f_m(x)$ gives \begin{multline*} f_m(x)\ge \sigma_*|\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x)|^2 -2\sigma^*|\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x)|\, |\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x)|\\ -|\mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x))|\,|\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x)|. \end{multline*} Since the right-hand side is quadratic in $|\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x)|$ and $(f_m(x))_{m\in\mathbb N}$ is bounded, we deduce that the sequence $(\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x))_{m\in\mathbb N}$ is bounded. If $\mathbf{L}_x$ is one of its adherence values then, by passing to the limit in the definition of $f_m(x)$, we see that \[ 0=\big[\mbox{\boldmath{$\sigma$}}(x,\mathbf{L}_x)- \mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x))\big] :\big[\mathbf{L}_x-\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x)\big]. \] By \eqref{hyp:strictmonotone}, this forces $\mathbf{L}_x=\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x)$.
The bounded sequence $(\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)(x))_{m\in\mathbb N}$ only has $\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x)$ as adherence value, and the whole sequence therefore converges to this value. We have therefore established that $\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)\to \mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})$ a.e. on $\Omega$. Using then \eqref{prnl:conv3} and \eqref{prnl:ident} and defining \[ F_m=\mbox{\boldmath{$\sigma$}}(\cdot,\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)):\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)\ge 0, \] we see that \[ \lim_{m\to\infty}\int_\Omega F_m(x){\rm d} x= \int_\Omega \mbox{\boldmath{$\sigma$}}(x,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x)):\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})(x){\rm d} x. \] But since $F_m\to \mbox{\boldmath{$\sigma$}}(\cdot,\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})):\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})$ a.e. on $\Omega$ and is non-negative, we can apply Lemma \ref{lem:convpositive} below to deduce that $(F_m)_{m\in\mathbb N}$ converges in $L^1(\Omega)$. This sequence is therefore equi-integrable in $L^1(\Omega)$ and, by the coercivity property of $\mbox{\boldmath{$\sigma$}}$, this proves that $(\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m))_{m\in\mathbb N}$ is equi-integrable in $\mathbf{L}^2(\Omega)^{d}$. As this sequence converges a.e. on $\Omega$ to $\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})$, Vitali's theorem shows that \begin{equation}\label{prnl:conv5} \mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)\to \mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})\mbox{ strongly in $\mathbf{L}^2(\Omega)^{d}$}.
\end{equation} We then consider $P_{{\mathcal{D}}_m}\bar{\mathbf{u}}\in \X{{\mathcal{D}}_m}$ and write, by definition \eqref{def:Kdisc} of $K_{\mathcal{D}}$, \[ ||\nabla_{{\mathcal{D}}_m}\mathbf{u}_m-\nabla_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\bar{\mathbf{u}})||_{\mathbf{L}^2(\Omega)^{d}} \le K_{{\mathcal{D}}_m}||\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(\mathbf{u}_m)-\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\bar{\mathbf{u}})||_{\mathbf{L}^2(\Omega)^{d}}. \] Since $\nabla_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\bar{\mathbf{u}})$ and $\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\bar{\mathbf{u}})$ strongly converge in $\mathbf{L}^2(\Omega)^d$ to $\nabla\bar{\mathbf{u}}$ and $\mbox{\boldmath{$\varepsilon$}}(\bar{\mathbf{u}})$, we can pass to the limit in this estimate by using the coercivity of $({\mathcal{D}}_m)_{m\in\mathbb N}$ and \eqref{prnl:conv5} and we deduce that $\nabla_{{\mathcal{D}}_m}\mathbf{u}_m\to \nabla\bar{\mathbf{u}}$ strongly in $\mathbf{L}^2(\Omega)^{d}$. The definition \eqref{def:Cdisc} of $C_{{\mathcal{D}}_m}$ then gives \[ ||\Pi_{{\mathcal{D}}_m}\mathbf{u}_m-\Pi_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\bar{\mathbf{u}})||_{\mathbf{L}^2(\Omega)} \le C_{{\mathcal{D}}_m} ||\nabla_{{\mathcal{D}}_m}\mathbf{u}_m-\nabla_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\bar{\mathbf{u}})||_{\mathbf{L}^2(\Omega)^{d}} \] and, since $\Pi_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\bar{\mathbf{u}})\to \bar{\mathbf{u}}$ strongly in $\mathbf{L}^2(\Omega)$, passing to the limit in this estimate proves the strong convergence in $\mathbf{L}^2(\Omega)$ of $\Pi_{{\mathcal{D}}_m}\mathbf{u}_m$ to $\bar{\mathbf{u}}$. \end{proof} \begin{remark} We saw in the proof that ${\mathcal T}_{{\mathcal{D}}_m}\mathbf{u}_m\to \gamma(\bar{\mathbf{u}})$ weakly in $\mathbf{L}^2({\Gamma_N})$.
If the interpolation $P_{\mathcal{D}}$ defined by \eqref{def:PD} satisfies, for any $\varphi\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$, ${\mathcal T}_{{\mathcal{D}}_m}(P_{{\mathcal{D}}_m}\varphi)\to \gamma(\varphi)$ strongly in $\mathbf{L}^2({\Gamma_N})$ as $m\to\infty$, the same reasoning as the one used at the end of the proof shows that, in case of strict monotonicity of $\mbox{\boldmath{$\sigma$}}$, ${\mathcal T}_{{\mathcal{D}}_m}\mathbf{u}_m\to \gamma(\bar{\mathbf{u}})$ strongly in $\mathbf{L}^2({\Gamma_N})$. \end{remark} \begin{lemma}\label{lem:weakconv} Let $({\mathcal{D}}_m)_{m\in\mathbb N}$ be a sequence of Gradient Discretizations in the sense of Definition \ref{def:grad-disc}, which is limit-conforming (Definition \ref{def:lim-conf}) and coercive (Definition \ref{def:coer}). For any $m\in\mathbb N$ we take $\mathbf{v}_m\in \X{{\mathcal{D}}_m}$. If $(||\mathbf{v}_m||_{{\mathcal{D}}_m})_{m\in\mathbb N}$ is bounded then there exists $\mathbf{v}\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$ such that, up to a subsequence, $\Pi_{{\mathcal{D}}_m}\mathbf{v}_m\to \mathbf{v}$ weakly in $\mathbf{L}^2(\Omega)$, $\nabla_{{\mathcal{D}}_m}\mathbf{v}_m\to \nabla\mathbf{v}$ weakly in $\mathbf{L}^2(\Omega)^d$ and ${\mathcal T}_{{\mathcal{D}}_m}\mathbf{v}_m\to \gamma(\mathbf{v})$ weakly in $\mathbf{L}^2({\Gamma_N})$. \end{lemma} \begin{proof} The coercivity of $({\mathcal{D}}_m)_{m\in\mathbb N}$ and the bound on $||\mathbf{v}_m||_{{\mathcal{D}}_m}$ show that the sequences $||\Pi_{{\mathcal{D}}_m}\mathbf{v}_m||_{\mathbf{L}^2(\Omega)}$, $||\nabla_{{\mathcal{D}}_m}\mathbf{v}_m||_{\mathbf{L}^2(\Omega)^d}$ and $||{\mathcal T}_{{\mathcal{D}}_m}\mathbf{v}_m||_{\mathbf{L}^2({\Gamma_N})}$ remain bounded.
There exists therefore $\mathbf{v}\in \mathbf{L}^2(\Omega)$, $\bomega\in \mathbf{L}^2(\Omega)^d$ and $\mathbf{w}\in \mathbf{L}^2({\Gamma_N})$ such that, up to a subsequence, \begin{equation}\label{lem:weakconv1} \ba \displaystyle \Pi_{{\mathcal{D}}_m}\mathbf{v}_m\to \mathbf{v}\mbox{ weakly in $\mathbf{L}^2(\Omega)$}\,,\quad \nabla_{{\mathcal{D}}_m}\mathbf{v}_m\to \bomega\mbox{ weakly in $\mathbf{L}^2(\Omega)^{d}$ and}\\ \displaystyle {\mathcal T}_{{\mathcal{D}}_m}\mathbf{v}_m\to \mathbf{w}\mbox{ weakly in $\mathbf{L}^2({\Gamma_N})$.}\\ \end{array} \end{equation} These convergences and the limit-conformity of $({\mathcal{D}}_m)_{m\in\mathbb N}$ show that, for any $\btau\in \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d$, \begin{multline*} \left|\int_\Omega \bomega(x):\btau(x)+\mathbf{v}(x)\cdot{\rm div}(\btau)(x){\rm d} x - \int_{{\Gamma_N}}\gamma_\mathbf{n}(\btau)(x)\cdot\mathbf{w}(x){\rm d} S(x)\right|\\ =\lim_{m\to\infty} \left|\int_\Omega \nabla_{{\mathcal{D}}_m}\mathbf{v}_m(x):\btau(x)+\Pi_{{\mathcal{D}}_m}\mathbf{v}_m(x)\cdot{\rm div}(\btau)(x){\rm d} x\right.\\ \left. - \int_{{\Gamma_N}}\gamma_\mathbf{n}(\btau)(x)\cdot {\mathcal T}_{{\mathcal{D}}_m}(\mathbf{v}_m)(x){\rm d} S(x)\right|\\ \le \lim_{m\to\infty}\big[||\mathbf{v}_m||_{{\mathcal{D}}_m}W_{{\mathcal{D}}_m}(\btau)\big]=0. \end{multline*} Hence, for any $\btau\in \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d$, \begin{equation}\label{lem:weakconv2} \int_\Omega \bomega(x):\btau(x)+\mathbf{v}(x)\cdot{\rm div}(\btau)(x){\rm d} x- \int_{{\Gamma_N}}\gamma_\mathbf{n}(\btau)(x)\cdot\mathbf{w}(x){\rm d} S(x)=0. \end{equation} Applied with $\btau\in C^\infty_c(\Omega)^{d\times d}$, this relation shows that \begin{equation}\label{lem:weakconv3} \nabla \mathbf{v}=\bomega\mbox{ in the sense of distributions on $\Omega$}, \end{equation} and thus that $\mathbf{v}\in \mathbf{H}^1(\Omega)$.
By using \eqref{lem:weakconv2} with $\btau\in \mathbf{H}^1(\Omega)^{d}\subset \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d$ and by integrating by parts, we obtain \[ \int_{\partial\Omega} \gamma_\mathbf{n}(\btau)(x)\cdot \gamma(\mathbf{v})(x){\rm d} S(x) - \int_{{\Gamma_N}}\gamma_\mathbf{n}(\btau)(x)\cdot\mathbf{w}(x){\rm d} S(x)=0. \] As the set $\{\gamma_\mathbf{n}(\btau)\,:\,\btau\in \mathbf{H}^1(\Omega)^{d}\}$ is dense in $\mathbf{L}^2(\partial\Omega)$, we deduce from this that $\gamma(\mathbf{v})=0$ on ${\Gamma_D}$ and that \begin{equation}\label{lem:weakconv4} \gamma(\mathbf{v})=\mathbf{w}\mbox{ on ${\Gamma_N}$.} \end{equation} Thus, $\mathbf{v}\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$ and \eqref{lem:weakconv1}, \eqref{lem:weakconv3} and \eqref{lem:weakconv4} conclude the proof. \end{proof} The proof of the following lemma is classical \cite{DRO06,EYM09}. \begin{lemma}\label{lem:convpositive} Let $(F_m)_{m\in\mathbb N}$ be a sequence of non-negative measurable functions on $\Omega$ which converges a.e. on $\Omega$ to $F$ and such that $\int_\Omega F_m(x){\rm d} x\to \int_\Omega F(x){\rm d} x$. Then $F_m\to F$ in $L^1(\Omega)$. \end{lemma} \section{Examples of Gradient Schemes}\label{sec:ex} In all the following examples, we assume that ${\Gamma_D}$ has non-zero measure and is such that a Korn inequality holds on $\mathbf{H}^1_{{\Gamma_D}}(\Omega)$ \cite{BS94,Cia88}. This is actually a necessary condition for \emph{coercive} and \emph{consistent} sequences of Gradient Discretisations to exist. \subsection{Standard displacement-based formulation} All (conforming) Galerkin methods are Gradient Schemes.
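To make this claim concrete, here is a minimal numerical sketch (ours, not from the paper) of the simplest conforming Gradient Scheme: P1 finite elements for the scalar one-dimensional analogue $-(E u')'=F$ on $(0,1)$ with $u(0)=u(1)=0$, so that $\X{{\mathcal{D}}}$ is the space of interior nodal values, $\Pi_{\mathcal{D}}$ the piecewise-linear reconstruction and $\nabla_{\mathcal{D}}$ its broken derivative. The function name and the one-point-per-cell quadrature are illustrative choices.

```python
import numpy as np

# Toy conforming Gradient Scheme: P1 elements for -(E u')' = F on (0,1),
# u(0) = u(1) = 0 (pure Dirichlet, so the Gamma_N term of the scheme drops).
# Unknowns are the interior nodal values; unknown i corresponds to node i+1.
def solve_gradient_scheme(n, E=lambda x: 1.0 + x, F=lambda x: 1.0):
    h = 1.0 / n
    x_mid = (np.arange(n) + 0.5) * h        # one quadrature point per cell
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for k in range(n):                      # assemble cell by cell
        Ek = E(x_mid[k])
        for i in (k - 1, k):                # the two nodes of cell k
            if not 0 <= i < n - 1:
                continue                    # skip boundary nodes
            gi = (1.0 if i == k else -1.0) / h   # P1 gradient on cell k
            b[i] += F(x_mid[k]) * 0.5 * h        # midpoint rule for Pi_D v
            for j in (k - 1, k):
                if not 0 <= j < n - 1:
                    continue
                gj = (1.0 if j == k else -1.0) / h
                A[i, j] += Ek * gi * gj * h
    return np.linalg.solve(A, b)            # unique solution by coercivity
```

With $E\equiv 1$ and $F\equiv 1$ the assembled system is the classical tridiagonal one and the discrete solution reproduces the nodal values of the exact solution $u(x)=x(1-x)/2$.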
If $(\mathbf{V}_n)_{n\in\mathbb N}$ is a sequence of finite-dimensional subspaces of $\mathbf{H}^1_{{\Gamma_D}}(\Omega)$ such that $\cup_{n\ge 1}\mathbf{V}_n$ is dense in $\mathbf{H}^1_{{\Gamma_D}}(\Omega)$, then by letting $\X{{\mathcal{D}}_n}=\mathbf{V}_n$, $\Pi_{{\mathcal{D}}_n}={\rm Id}$, ${\mathcal T}_{{\mathcal{D}}_n}=\gamma$ and $\nabla_{{\mathcal{D}}_n}=\nabla$, we obtain a sequence of Gradient Discretisations whose corresponding Gradient Schemes are Galerkin approximations of \eqref{eq:elas-strong}. This sequence of Gradient Discretisations is obviously \emph{consistent} (that is, $\overline{\cup_{n\in\mathbb N}\mathbf{V}_n}=\mathbf{H}^1_{{\Gamma_D}}(\Omega)$), \emph{limit-conforming} (as it is a conforming approximation, $W_{{\mathcal{D}}_n}=0$ for any $n$) and \emph{coercive} (since Poincar\'e's and Korn's inequalities hold in $\mathbf{H}^1_{{\Gamma_D}}(\Omega)$). This is in particular the case for conforming Finite Element approximations based on spaces $\mathbf{V}_h$ built on quasi-uniform partitions $\mathcal{T}_h$ of $\Omega$ (made of quadrilaterals, hexahedra or simplices \cite{Bra01,QV94}). But non-conforming methods are also included in the framework of Gradient Schemes. For example, the Crouzeix-Raviart scheme falls within this framework, with the discrete gradient defined as the classical ``broken gradient''. Consistency, limit-conformity and Poincar\'e's inequality for this scheme are established in \cite{DRO13}, and it is known that if ${\Gamma_D}=\partial\Omega$ then a uniform Korn inequality holds. This inequality fails for general ${\Gamma_D}$ \cite{FAL90}, but it is satisfied for higher order non-conforming methods (whose continuity conditions through the edges involve both the zeroth and first order moments) \cite{KNO00}. The consistency, limit-conformity and Poincar\'e's inequality for such methods can be easily established as for the Crouzeix-Raviart method.
\subsection{Stabilised nodal strain formulation}

We consider a nodal strain formulation as presented in \cite{FB81,PS06,Lam09p} and built on a conforming Finite Element space $\mathbf{V}_h$. Associated with the primal mesh $\mathcal{T}_h$, we let $\mathcal{T}^*_h$ be the dual mesh consisting of dual volumes, where a dual volume is associated with a vertex of $\mathcal{T}_h$ and is constructed as follows. Let $\{T^{\mathbi{x}_i}_j\}_{j=1}^{M_i}\subset \mathcal{T}_h$ be the set of all elements touching the vertex $\mathbi{x}_i$, and $\{E^{\mathbi{x}_i}_j\}_{j=1}^{N_i}$ the set of edges or faces touching $\mathbi{x}_i$. Then the dual volume associated with the vertex $\mathbi{x}_i$ is the polygonal or polyhedral region joining all the barycentres of $\{T^{\mathbi{x}_i}_j\}_{j=1}^{M_i}$ and $\{E^{\mathbi{x}_i}_j\}_{j=1}^{N_i}$. Let $S^*_h$ be the space of vector-valued piecewise constant functions with respect to the dual mesh $\mathcal{T}_h^*$. Defining the linear form
\[
\ell(\mathbf{v}_h) = \int_\Omega \mathbf{F}(x)\cdot\mathbf{v}_h(x){\rm d} x +\int_{{\Gamma_N}}\mathbf{g}(x)\cdot\gamma(\mathbf{v}_h)(x){\rm d} S(x),
\]
the stabilised nodal strain formulation, for a constant stiffness tensor $\mathbb{C}$, is to find $\mathbf{u}_h\in \mathbf{V}_h$ such that, for any $\mathbf{v}_h\in \mathbf{V}_h$,
\[
\int_{\Omega} \Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x):\mathbb{C} \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x+ \int_{\Omega} \mathbb{D}(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)-\Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h))(x): \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x =\ell(\mathbf{v}_h)
\]
where $\Pi^*_h$ is the orthogonal projection onto $S^*_h$ and $\mathbb{D}$ is a constant stabilisation (symmetric positive definite) tensor.
By the properties of the orthogonal projection and since $\mathbb{C}$ and $\mathbb{D}$ are constant, this can be recast as
\begin{equation}\label{stab-nodal}
\begin{array}{l}
\displaystyle \mbox{Find $\mathbf{u}_h\in \mathbf{V}_h$ such that, $\forall \mathbf{v}_h\in \mathbf{V}_h$,}\\[0.5em]
\displaystyle \int_{\Omega} \mathbb{C} \Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x):\Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x\\
\displaystyle\qquad\qquad+ \int_{\Omega} \mathbb{D}(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)-\Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h))(x): (\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)-\Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h))(x){\rm d} x =\ell(\mathbf{v}_h).
\end{array}
\end{equation}
We will take this formulation as the definition of the stabilised nodal strain formulation in the case where $\mathbb{C}$ and $\mathbb{D}$ are not constant (in which case we assume that $\mathbb{D}$ satisfies Assumption \eqref{hyp:stiff}). Let us now construct a Gradient Discretisation ${\mathcal{D}}=(\X{{\mathcal{D}}},\Pi_{\mathcal{D}}, {\mathcal T}_{\mathcal{D}},\nabla_{\mathcal{D}})$ such that this formulation is identical to the corresponding Gradient Scheme \eqref{grad-scheme:lin}. We start by defining $\X{{\mathcal{D}}}$ and the operators $\Pi_{\mathcal{D}}:\X{{\mathcal{D}}}\to \mathbf{L}^2(\Omega)$ and ${\mathcal T}_{\mathcal{D}}:\X{{\mathcal{D}}}\to \mathbf{L}^2({\Gamma_N})$ by
\begin{equation}\label{def:gsnodal}
\X{{\mathcal{D}}}=\mathbf{V}_h\,,\;\Pi_{\mathcal{D}} \mathbf{v}_h=\mathbf{v}_h\mbox{ and } {\mathcal T}_{\mathcal{D}} \mathbf{v}_h=\gamma(\mathbf{v}_h)_{|{\Gamma_N}}\mbox{ for all $\mathbf{v}_h\in \X{{\mathcal{D}}}$}.
\end{equation}
With these choices, $\ell(\mathbf{v}_h)$ is the right-hand side of \eqref{grad-scheme:lin} and we therefore just need to find a discrete gradient $\nabla_{\mathcal{D}}$ such that the left-hand side of \eqref{grad-scheme:lin} is equal to the left-hand side of \eqref{stab-nodal}. We first notice that, by \eqref{hyp:stiff} on $\mathbb{C}$ and $\mathbb{D}$, for a.e. $x$ the linear mappings $\mathbb{C}(x),\mathbb{D}(x):\mathbf{R}^{d\times d}\to\mathbf{R}^{d\times d}$ are symmetric positive definite with respect to the inner product ``$:$'' and thus $\mathbb{C}(x)^{-1/2}$ and $\mathbb{D}(x)^{1/2}$ make sense. We can therefore define $\nabla_{\mathcal{D}}:\X{{\mathcal{D}}}\to \mathbf{L}^2(\Omega)^d$ by
\begin{equation}\label{def:gradnodal}
\nabla_{\mathcal{D}}\mathbf{v}_h = \Pi^*_h \nabla\mathbf{v}_h + \mathbb{C}^{-1/2}\mathbb{D}^{1/2}(\nabla\mathbf{v}_h-\Pi^*_h\nabla\mathbf{v}_h).
\end{equation}
By the assumptions on $\mathbb{C}$ and $\mathbb{D}$ and Lemma \ref{lem:root}, this gives
\[
\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h)=\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h) + \mathbb{C}^{-1/2}\mathbb{D}^{1/2}(\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)-\Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)).
\]
Assuming that $\mathbb{C}$ and $\mathbb{D}$ are piecewise constant on $\mathcal{T}_h^*$, we can then compute
\begin{eqnarray}
\lefteqn{\int_\Omega\mathbb{C}(x) \mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{u}_h)(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h)(x){\rm d} x}&&\nonumber\\
&=&\int_\Omega \mathbb{C}(x)\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x):\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x\nonumber\\
&&+\int_\Omega \mathbb{C}(x)\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x):\mathbb{C}^{-1/2}(x)\mathbb{D}^{1/2}(x) (\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)-\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)) {\rm d} x \label{T2}\\
&&+\int_\Omega \mathbb{C}(x)\mathbb{C}^{-1/2}(x)\mathbb{D}^{1/2}(x) (\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x)-\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x)):\Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x\label{T3}\\
&&+\int_\Omega \mathbb{C}(x)\mathbb{C}^{-1/2}(x)\mathbb{D}^{1/2}(x) (\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x)-\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x))\nonumber\\
&&\qquad\qquad:\mathbb{C}^{-1/2}(x)\mathbb{D}^{1/2}(x) (\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)-\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)) {\rm d} x.
\nonumber
\end{eqnarray}
But, since $\mathbb{C}$, $\mathbb{D}$ and $\Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)$ are constant on each cell in $\mathcal{T}^*_h$ and since
\[
\Pi_h^*\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)=\frac{1}{{\rm meas}(K)}\int_K \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x
\]
on $K\in \mathcal{T}_h^*$, we have
\[
\eqref{T2}=\sum_{K\in\mathcal{T}_h^*}\mathbb{C}_{|K} \Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)_{|K}:\mathbb{C}^{-1/2}_{|K}\mathbb{D}^{1/2}_{|K} \int_K \left(\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)-\Pi^*_h\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)\right){\rm d} x=0.
\]
Similarly, \eqref{T3} vanishes and, by using the symmetry of $\mathbb{C}$ and $\mathbb{D}$, we end up with
\begin{eqnarray*}
\lefteqn{\int_\Omega\mathbb{C}(x) \mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{u}_h)(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h)(x){\rm d} x}&&\\
&=&\int_\Omega \mathbb{C}(x)\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x):\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x\\
&&+ \int_\Omega \mathbb{D}(x)(\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x)-\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x)) :(\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)-\Pi^*_h \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)) {\rm d} x,
\end{eqnarray*}
which precisely states that the left-hand sides of \eqref{grad-scheme:lin} and \eqref{stab-nodal} coincide. Thus, under the assumption that $\mathbb{C}$ and $\mathbb{D}$ are piecewise constant on $\mathcal{T}_h^*$, the stabilised nodal strain formulation \eqref{stab-nodal} is the Gradient Scheme, for the linear elasticity equation, corresponding to the Gradient Discretisation defined by \eqref{def:gsnodal}--\eqref{def:gradnodal}.
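The cancellations used above can be checked numerically. The following self-contained NumPy sketch (where the quadrature points, weights and cells form an illustrative discrete model of $L^2$, chosen by us for the example) verifies both the closed-form square root \eqref{form:root} of the technical lemmas and the identity between the left-hand sides of \eqref{grad-scheme:lin} and \eqref{stab-nodal} for constant $\mathbb{C}$ and $\mathbb{D}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
vecI = np.eye(d).reshape(-1)

def iso(lam, mu):
    """Isotropic tensor  A tau = lam tr(tau) I + 2 mu tau,  as a d^2 x d^2 matrix on vec(tau)."""
    return 2*mu*np.eye(d*d) + lam*np.outer(vecI, vecI)

def mat_power(A, p):
    """p-th power of a symmetric positive definite matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

C, D = iso(2.0, 1.0), iso(0.5, 0.3)          # constant stiffness and stabilisation tensors
R = mat_power(C, -0.5) @ mat_power(D, 0.5)   # C^{-1/2} D^{1/2}

# Closed-form square root of an isotropic tensor (formula (form:root)):
lam, mu = 2.0, 1.0
alpha = (np.sqrt(2*mu + lam*d) - np.sqrt(2*mu)) / d
assert np.allclose(mat_power(C, 0.5), iso(alpha, np.sqrt(2*mu)/2))

# Discrete L^2 model: quadrature points with weights w, grouped into "dual cells".
n = 12
cells = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
w = rng.uniform(0.5, 1.5, n)

def proj(a):
    """Weighted cell-average: the orthogonal projection Pi*_h onto piecewise constants."""
    out = np.empty_like(a)
    for K in cells:
        out[K] = np.average(a[K], axis=0, weights=w[K])
    return out

def form(A, a, b):
    """Bilinear form  sum_p w_p (A a_p) : b_p  (":" is the Euclidean product of vec's)."""
    return sum(wp * (A @ ap) @ bp for wp, ap, bp in zip(w, a, b))

def rand_strain():
    M = rng.standard_normal((n, d, d))
    return ((M + M.transpose(0, 2, 1)) / 2).reshape(n, d*d)

eu, ev = rand_strain(), rand_strain()
# eps_D(v) = Pi* eps(v) + C^{-1/2} D^{1/2} (eps(v) - Pi* eps(v))
eDu = proj(eu) + (eu - proj(eu)) @ R.T
eDv = proj(ev) + (ev - proj(ev)) @ R.T

lhs = form(C, eDu, eDv)
rhs = form(C, proj(eu), proj(ev)) + form(D, eu - proj(eu), ev - proj(ev))
assert abs(lhs - rhs) < 1e-9
```

The cross terms vanish exactly because a constant tensor maps piecewise-constant fields to piecewise-constant fields, while $\mbox{\boldmath{$\varepsilon$}}-\Pi^*_h\mbox{\boldmath{$\varepsilon$}}$ has zero weighted average on each cell.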
\begin{remark}\label{ns:stiff}
If $\mathbb{C}$ or $\mathbb{D}$ are not piecewise constant on $\mathcal{T}_h^*$, then by replacing them with $\Pi_h^*\mathbb{C}$ and $\Pi_h^*\mathbb{D}$ in the stabilised nodal strain formulation \eqref{stab-nodal} and in the definition \eqref{def:gradnodal} of the discrete gradient, the stabilised nodal strain formulation is the Gradient Scheme \eqref{grad-scheme:lin} in which $\mathbb{C}$ is replaced with $\Pi_h^*\mathbb{C}$.
\end{remark}

\subsubsection{Consistency, limit-conformity and coercivity}\label{sec:propstabnodal}

Let us consider a sequence $(\mathbf{V}_{h_n})_{n\in\mathbb N}$ of conforming Finite Element spaces on meshes $(\mathcal{T}_{h_n})_{n\in\mathbb N}$ with $h_n\to 0$. We prove here that if ${\mathcal{D}}_n$ is the Gradient Discretisation given by \eqref{def:gsnodal}--\eqref{def:gradnodal} for $\mathbf{V}_{h_n}$ then, under the classical quasi-uniformity assumptions on $(\mathcal{T}_{h_n})_{n\in\mathbb N}$, the sequence $({\mathcal{D}}_n)_{n\in\mathbb N}$ is consistent, limit-conforming and coercive. The key point is to notice that the definition \eqref{def:gradnodal} of the discrete gradient can be recast as
\begin{equation}\label{def:gradnodal2}
\nabla_{\mathcal{D}}\mathbf{v}_h = \nabla\mathbf{v}_h + ( \mathbb{C}^{-1/2}\mathbb{D}^{1/2}-{\rm Id})(\nabla\mathbf{v}_h-\Pi^*_h\nabla\mathbf{v}_h) =\nabla\mathbf{v}_h + \mathcal L_h \nabla\mathbf{v}_h
\end{equation}
where $\mathcal L_h=(\mathbb{C}^{-1/2}\mathbb{D}^{1/2}-{\rm Id})({\rm Id}-\Pi^*_h):\mathbf{L}^2(\Omega)^d \to \mathbf{L}^2(\Omega)^d$ has a norm bounded independently of $h$ and converges pointwise to $0$. Let us first consider the consistency property.
For any $\boldsymbol{\varphi}\in \mathbf{H}^1_{{\Gamma_D}}(\Omega)$, by quasi-uniformity of the sequence of meshes, there exists $\mathbf{v}_n\in \mathbf{V}_{h_n}=\X{{\mathcal{D}}_n}$ such that $\mathbf{v}_n=\Pi_{{\mathcal{D}}_n}\mathbf{v}_n\to \boldsymbol{\varphi}$ in $\mathbf{L}^2(\Omega)$ and $\nabla \mathbf{v}_n\to \nabla\boldsymbol{\varphi}$ in $\mathbf{L}^2(\Omega)^d$. We have
\[
||\mathcal L_{h_n}\nabla\mathbf{v}_n||_{\mathbf{L}^2(\Omega)^d}\le ||\mathcal L_{h_n}||_{\mathbf{L}^2(\Omega)^d\to \mathbf{L}^2(\Omega)^d}||\nabla\mathbf{v}_n-\nabla \boldsymbol{\varphi}||_{\mathbf{L}^2(\Omega)^d} +||\mathcal L_{h_n}\nabla\boldsymbol{\varphi}||_{\mathbf{L}^2(\Omega)^d}
\]
and, by the properties of $\mathcal L_{h_n}$, both terms on the right-hand side tend to $0$. Combined with \eqref{def:gradnodal2}, this proves that $\nabla_{{\mathcal{D}}_n}\mathbf{v}_n\to \nabla\boldsymbol{\varphi}$ in $\mathbf{L}^2(\Omega)^d$, which concludes the proof of the consistency of $({\mathcal{D}}_n)_{n\in\mathbb N}$.

Coercivity follows from the following comparisons between $\nabla$, $\nabla_{{\mathcal{D}}_n}$ and $\mbox{\boldmath{$\varepsilon$}}$, $\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_n}$: there exist $\ctel{cst1},\ctel{cst2}>0$ not depending on $n$ such that, for any $\mathbf{v}\in \mathbf{V}_{h_n}=\X{{\mathcal{D}}_n}$,
\begin{eqnarray}
\label{comp:grad} &&\cter{cst1}||\nabla_{{\mathcal{D}}_n}\mathbf{v}||_{\mathbf{L}^2(\Omega)^d}\le ||\nabla\mathbf{v}||_{\mathbf{L}^2(\Omega)^d} \le \cter{cst2} ||\nabla_{{\mathcal{D}}_n}\mathbf{v}||_{\mathbf{L}^2(\Omega)^d}\,,\\
\label{comp:strain} &&\cter{cst1}||\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_n}(\mathbf{v})||_{\mathbf{L}^2(\Omega)^d}\le ||\mbox{\boldmath{$\varepsilon$}}(\mathbf{v})||_{\mathbf{L}^2(\Omega)^d} \le \cter{cst2} ||\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}_n}(\mathbf{v})||_{\mathbf{L}^2(\Omega)^d}.
\end{eqnarray}
Indeed, with these two estimates, the coercivity of $({\mathcal{D}}_n)_{n\in\mathbb N}$ is a straightforward consequence of the Poincar\'e, trace and Korn inequalities in $\mathbf{H}^1_{{\Gamma_D}}(\Omega)$. Since the proofs of \eqref{comp:grad} and \eqref{comp:strain} are similar, we only consider the first one. Using $||\Pi^*_{h_n}\nabla\mathbf{v}||_{\mathbf{L}^2(\Omega)^d}\le ||\nabla\mathbf{v}||_{\mathbf{L}^2(\Omega)^d}$, \eqref{def:gradnodal2} immediately gives the first inequality in \eqref{comp:grad}. To establish the second one, we notice, by applying $\Pi_{h_n}^*$ to \eqref{def:gradnodal}, that $\Pi_{h_n}^*\nabla_{{\mathcal{D}}_n}\mathbf{v}=\Pi_{h_n}^*\nabla\mathbf{v}$, which, plugged back into \eqref{def:gradnodal}, gives
\[
\nabla\mathbf{v} = \Pi^*_{h_n}\nabla_{{\mathcal{D}}_n}\mathbf{v} + \mathbb{D}^{-1/2}\mathbb{C}^{1/2}\big(\nabla_{{\mathcal{D}}_n}\mathbf{v} - \Pi^*_{h_n} \nabla_{{\mathcal{D}}_n}\mathbf{v}\big).
\]
The second estimate of \eqref{comp:grad} follows by taking the $\mathbf{L}^2(\Omega)^d$ norm of this equality and using once more the fact that the orthogonal projection $\Pi^*_{h_n}$ has norm $1$. Limit-conformity is then easy to establish.
For any $\btau\in \mathbf{H}_{{\rm div}}(\Omega,{\Gamma_N})^d$ and any $\mathbf{v}\in \mathbf{V}_{h_n}=\X{{\mathcal{D}}_n}$, by using \eqref{def:gradnodal2} we have
\begin{eqnarray}
\lefteqn{\Bigg| \int_\Omega \big(\nabla_{{\mathcal{D}}_n} \mathbf{v}(x):\btau(x) + \Pi_{{\mathcal{D}}_n} \mathbf{v}(x) \cdot {\rm div}(\btau)(x)\big) {\rm d} x -\int_{{\Gamma_N}}\gamma_\mathbf{n}(\btau)(x)\cdot{\mathcal T}_{{\mathcal{D}}_n}(\mathbf{v})(x){\rm d} S(x)\Bigg|}\nonumber\\
&\le &\Bigg| \int_\Omega \big(\nabla \mathbf{v}(x):\btau(x) + \mathbf{v}(x) \cdot {\rm div}(\btau)(x)\big) {\rm d} x -\int_{{\Gamma_N}}\gamma_\mathbf{n}(\btau)(x)\cdot \gamma(\mathbf{v})(x){\rm d} S(x)\Bigg|\nonumber\\
&&+\Bigg|\int_\Omega \mathcal L_{h_n}\nabla\mathbf{v}(x):\btau(x){\rm d} x\Bigg|=T_1+T_2. \label{limconf:1}
\end{eqnarray}
By conformity of $\mathbf{V}_{h_n}$ we have $T_1=0$. Thanks to \eqref{comp:grad} and denoting by $\mathcal L_{h_n}^{\star}=({\rm Id}-\Pi_{h_n}^*)(\mathbb{D}^{1/2}\mathbb{C}^{-1/2}-{\rm Id})$ the adjoint operator of $\mathcal L_{h_n}$, we can write
\begin{eqnarray*}
T_2&=&\Bigg|\int_\Omega \nabla\mathbf{v}(x) :\mathcal L_{h_n}^{\star}\btau(x){\rm d} x\Bigg|\\
&\le& ||\nabla \mathbf{v}||_{\mathbf{L}^2(\Omega)^d}||\mathcal L_{h_n}^{\star}\btau||_{\mathbf{L}^2(\Omega)^d} \le \cter{cst2} ||\nabla_{{\mathcal{D}}_n} \mathbf{v}||_{\mathbf{L}^2(\Omega)^d}||\mathcal L_{h_n}^{\star}\btau||_{\mathbf{L}^2(\Omega)^d}.
\end{eqnarray*}
Plugged into \eqref{limconf:1}, this estimate on $T_2$ shows that $W_{{\mathcal{D}}_n}(\btau)\le \cter{cst2}||\mathcal L_{h_n}^{\star}\btau||_{\mathbf{L}^2(\Omega)^d}$. As $\mathcal L_{h_n}^{\star}\to 0$ pointwise as $n\to\infty$, this concludes the proof of the limit-conformity of $({\mathcal{D}}_n)_{n\in\mathbb N}$.
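Both arguments above only use two properties of $\mathcal L_{h_n}$, a norm bounded independently of $h$ and pointwise convergence to $0$, and both come from the factor ${\rm Id}-\Pi^*_h$. A small self-contained sketch (a 1D interval with uniform cells, chosen only for the illustration) shows the first-order decay of $||({\rm Id}-\Pi^*_h)f||_{L^2}$ for a fixed smooth $f$:

```python
import numpy as np

# ||f - Pi*_h f||_{L^2} -> 0 for a fixed smooth f, while ||Pi*_h|| = 1 for every h:
# here Pi*_h is the L^2-orthogonal projection onto piecewise constants on (0,1).
f = lambda s: np.sin(2*np.pi*s)
errs = []
for n in [4, 16, 64, 256]:                   # number of cells; h = 1/n
    x = (np.arange(10*n) + 0.5) / (10*n)     # 10 quadrature points per cell
    k = np.floor(x * n).astype(int)          # cell index of each quadrature point
    fx = f(x)
    means = np.bincount(k, weights=fx) / np.bincount(k)   # cell averages = Pi*_h f
    errs.append(np.sqrt(np.mean((means[k] - fx)**2)))
errs = np.array(errs)
assert np.all(errs[1:] < errs[:-1] / 2)      # first-order decay of ||(Id - Pi*_h) f||
```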
\begin{remark}
Reference \cite{PS06} provides an $\mathcal O(h)$ error estimate for \eqref{stab-nodal} under very strong assumptions on the solution to the continuous equation \eqref{eq:elas-strong}, namely that this solution belongs to $C^2(\overline{\Omega})$. Embedding \eqref{stab-nodal} into the Gradient Scheme framework allowed us to establish the same error estimate under no regularity assumption on the exact solution (see Theorem \ref{thm:error-est}); contrary to what is written in \cite[p848]{PS06}, the smoothness of the solution is therefore not required for the error analysis of the method.
\end{remark}

\begin{remark}\label{rem:nl}
As a consequence of these properties and of Theorem \ref{th:convnl}, we deduce that the Gradient Discretisation \eqref{def:gsnodal}--\eqref{def:gradnodal} coming from the stabilised nodal strain formulation of the linear elasticity equations can be used to define a ``stabilised nodal strain formulation for \emph{non-linear} elasticity'' \eqref{grad-scheme}, and gives a converging scheme for these equations. In this case, the tensors $\mathbb{C}$ and $\mathbb{D}$ in \eqref{def:gradnodal} should be chosen according to the considered non-linear equation, e.g. by selecting linear tensors with Lam\'e coefficients of the correct order of magnitude with respect to the non-linear model.
\end{remark}

\begin{remark}
We can also construct the ``nodal stabilised'' Gradient Discretisation ${\mathcal{D}}$ by \eqref{def:gsnodal}--\eqref{def:gradnodal} starting from a non-conforming Finite Element discretisation $\mathbf{V}_h$ (or, for that matter, any initial Gradient Discretisation built on a polygonal discretisation of $\Omega$ as defined in \cite{DRO13}).
In this case, the preceding reasoning shows that if $(\mathbf{V}_{h_n})_{n\in\mathbb N}$ is consistent, limit-conforming and coercive then the corresponding sequence of nodal stabilised Gradient Discretisations $({\mathcal{D}}_n)_{n\in\mathbb N}$ is also consistent, limit-conforming and coercive.
\end{remark}

\subsection{Hu-Washizu-based formulation on quadrilateral meshes}

We now consider a Finite Element method based on a modified Hu-Washizu formulation \cite{LRW06} for quadrilateral meshes. We start with the statically condensed displacement-based formulation of \cite{LRW06}, which takes the following form: find $\mathbf{u}_h \in \mathbf{V}_h$ such that
\begin{equation} \label{eq:disab}
\int_\Omega P_{S_h} \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x) : \mathbb{C}_h P_{S_h} \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x){\rm d} x = \ell(\mathbf{v}_h), \quad \forall \mathbf{v}_h \in \mathbf{V}_h,
\end{equation}
where $\mathbf{V}_h$ is the standard conforming Finite Element space constructed from piecewise bilinear polynomials on a reference element, $P_{S_h}$ is the $L^2$ orthogonal projection onto the discrete space of stress $S_h$, and $\mathbb{C}_h$ is some positive-definite symmetric operator approximating the classical linear elasticity tensor $\mathbb{C}$ with constant Lam\'e coefficients, $\mathbb{C}\mbox{\boldmath{$\tau$}}=\lambda\mathop{\rm tr}(\mbox{\boldmath{$\tau$}})\mathbf{I}+2\mu\mbox{\boldmath{$\tau$}}$. We note that the space of stress $S_h\subset \mathbf{L}^2(\Omega)^d$ is defined element-wise: no continuity condition is imposed on its elements across the boundaries of the cells of $\mathcal{T}_h$. Various Finite Element methods designed to alleviate locking effects are derived from this formulation \cite{LRW06,DLRW06}.
Among them, the most popular methods are the assumed enhanced strain method of Simo and Rifai \cite{SR90}, the strain gap method of Romano, Marrotti de Sciarra and Diaco \cite{RMD01}, and the mixed enhanced strain method of Kasper and Taylor \cite{KT00}. We now consider the action of the operator $\mathbb{C}_h$ on a tensor $\mbox{\boldmath{$d$}}_h= P_{S_h}\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)$ as derived in \cite{LRW06}. We use an orthogonal decomposition of $S_h$ in the form
\[
S_h=S_h^c \oplus S_h^t,
\]
where
\[
S_h^c := \{ \mbox{\boldmath{$\tau$}} \in S_h \ | \ \mathbb{C} \mbox{\boldmath{$\tau$}} \in S_h \}
\]
and $S_h^t$ is the orthogonal complement of $S_h^c$. We consider the case where the operator $\mathbb{C}_h$ is expressed as \cite{LRW06}
\begin{equation}\label{def:Ch}
\mathbb{C}_h \mbox{\boldmath{$d$}}_h = \mathbb{C} P_{S_h^c} \mbox{\boldmath{$d$}}_h + \theta P_{S_h^t} \mbox{\boldmath{$d$}}_h
\end{equation}
where $P_{S_h^c}$ and $P_{S_h^t}$ are the orthogonal projections onto $S_h^c$ and $S_h^t$, and $\theta>0$ is a constant depending only upon the Lam\'e coefficients $\lambda,\mu$ of $\mathbb{C}$ and upon the parameter $\alpha>0$ of the modified three-field Hu-Washizu formulation \cite{LRW06}. When the modified Hu-Washizu formulation is equivalent to the Hellinger-Reissner formulation, $\theta$ does not depend on $\alpha$.
\begin{remark}
The expression for the action of $\mathbb{C}_h$ is obtained in \cite{LRW06} using Voigt notation for tensors. However, we give here the expression for the discrete space of stress using the full tensor notation, so that we have
\[
P_{S_h} \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h) = \frac{1}{2} \left(P_{S_h} (\nabla \mathbf{u}_h) + P_{S_h} (\nabla \mathbf{u}_h)^T\right).
\]
\end{remark}
For simplicity of presentation, we restrict ourselves to the two-dimensional case, where $\mbox{\boldmath{$d$}}_h$ is a $2\times 2$ tensor.
We consider three choices for $S_h$, where this space is generated (through conformal transformations) from bases $S_{\Box}$ defined on $\hat{K}:=(-1,1)^2$. Let these three choices be denoted by $S_h^i$ and $S^i_{\Box}$, $1 \leq i \leq 3$:
\begin{eqnarray*}
S_{\Box}^1 := \left[\begin{array}{cc} \mbox{span}\{1,\hat y\} & \mbox{span}\{1\} \\ \mbox{span}\{1\}&\mbox{span}\{1, \hat x\} \end{array}\right], \quad S_{\Box}^2 := \left[\begin{array}{cc} \mbox{span}\{1,\hat y\} &\mbox{span}\{1,\hat x,\hat y \}\\ \mbox{span}\{1,\hat x,\hat y\}&\mbox{span}\{1, \hat x \} \end{array}\right],
\end{eqnarray*}
and
\begin{eqnarray*}
S_{\Box}^3 := \left[\begin{array}{cc} \mbox{span}\{1\} &\mbox{span} \{1, \hat x,\hat y\}\\ \mbox{span}\{1, \hat x,\hat y\} &\mbox{span}\{1\} \end{array}\right].
\end{eqnarray*}
While the spherical part of the stress might be polluted by checkerboard modes, as in the case of the $Q_1-P_0$ element, it is proved in \cite{LRW06} that the error in displacement satisfies a $\lambda$-independent \emph{a priori} error estimate. Let us now prove that if $S_h=S_h^i$ for some $1\le i\le 3$ then \eqref{eq:disab} is a Gradient Scheme. We define
\begin{equation}\label{huw:grad}
\begin{array}{c}
\displaystyle \X{{\mathcal{D}}}=\mathbf{V}_h\,,\quad \Pi_{\mathcal{D}}\mathbf{v}_h=\mathbf{v}_h\,,\quad {\mathcal T}_{\mathcal{D}}\mathbf{v}_h=\gamma(\mathbf{v}_h)_{|{\Gamma_N}} \mbox{ and }\\
\displaystyle\nabla_{\mathcal{D}} \mathbf{v}_h = P_{S_h^c}\nabla\mathbf{v}_h + \sqrt{\theta}\,\mathbb{C}^{-1/2} P_{S_h^t}\nabla \mathbf{v}_h.
\end{array}
\end{equation}
We note that, by symmetry of $\mathbb{C}$, $S_h^c$ and $S_h^t$ are closed under transposition and therefore the projections onto those spaces commute with the transposition.
By Lemma \ref{lem:root}, the definition of $\nabla_{\mathcal{D}}$ thus shows that
\begin{equation}\label{huw:strain}
\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}} (\mathbf{v}_h) = P_{S_h^c}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h) + \sqrt{\theta}\,\mathbb{C}^{-1/2} P_{S_h^t}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h).
\end{equation}
We now prove that the Gradient Scheme corresponding to the Gradient Discretisation ${\mathcal{D}}=(\X{{\mathcal{D}}},\Pi_{\mathcal{D}},{\mathcal T}_{\mathcal{D}},\nabla_{\mathcal{D}})$ is precisely the Hu-Washizu scheme \eqref{eq:disab}. Let us first start with a lemma.
\begin{lemma}\label{lem:stab}
For any of the choices $S^i_h$ ($1\le i\le 3$) described above and for any linear elasticity tensor $\mathbb{D}$, $(S^i_h)^c$ is closed under $\mathbb{D}$, that is, $\mathbb{D} \btau\in (S^i_h)^c$ whenever $\btau \in (S^i_h)^c$. In particular,
\begin{equation}\label{eq:orth}
\forall \btau,\bomega\in \mathbf{L}^2(\Omega)^d\,,\quad \int_\Omega \mathbb{D}P_{(S^i_h)^c}\btau(x):P_{(S^i_h)^t}\bomega(x){\rm d} x=0.
\end{equation}
\end{lemma}
\begin{proof}
If $\mbox{\boldmath{$\tau$}}\in (S_h^i)^c$ then $\mathop{\rm tr}(\mbox{\boldmath{$\tau$}})\mathbf{I} =\lambda^{-1}(\mathbb{C}\mbox{\boldmath{$\tau$}}-2\mu\mbox{\boldmath{$\tau$}})\in S_h^i$. The definitions of $S^i_h$ then show, by examining the coefficients $(1,1)$ and $(2,2)$ of $\mathop{\rm tr}(\mbox{\boldmath{$\tau$}})\mathbf{I}$, that $\mathop{\rm tr}(\mbox{\boldmath{$\tau$}})\in\mbox{span}\{1,\hat{y}\}\cap\mbox{span}\{1,\hat{x}\}=\mbox{span}\{1\}$ and thus that $\mathop{\rm tr}(\mbox{\boldmath{$\tau$}})$ is constant.
By Lemma \ref{lem:comp}, we see that $\mathbb{C}\mathbb{D}$ is a linear elasticity tensor with some Lam\'e coefficients $(\alpha,\beta)$ and therefore $\mathbb{C}\mathbb{D}\mbox{\boldmath{$\tau$}} =\alpha\mathop{\rm tr}(\mbox{\boldmath{$\tau$}})\mathbf{I} + 2\beta\mbox{\boldmath{$\tau$}}$. The second term on this right-hand side clearly belongs to $S_h^i$ and, since $\mathop{\rm tr}(\mbox{\boldmath{$\tau$}})$ is constant, the first term also belongs to $S_h^i$ (which contains $\mbox{span}\{\mathbf{I}\}$). Hence, $\mathbb{D}\mbox{\boldmath{$\tau$}}\in (S_h^i)^c$ whenever $\mbox{\boldmath{$\tau$}}\in (S_h^i)^c$. Formula \eqref{eq:orth} is a consequence of this and of the orthogonality of $S_h^c$ and $S_h^t$.
\end{proof}

We now consider the left-hand side of \eqref{grad-scheme:lin}. Using \eqref{eq:orth} with $\mathbb{D}=\mathbb{C}^{1/2}$ (which is a linear elasticity tensor by Lemma \ref{lem:comp}), the cross-products involving $\mathbb{C}^{1/2}P_{S_h^c}$ and $P_{S_h^t}$ which appear when plugging \eqref{huw:strain} into \eqref{grad-scheme:lin} vanish and we obtain
\begin{multline}\label{huw:lhs1}
\int_\Omega \mathbb{C}(x)\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{u}_h)(x):\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h)(x){\rm d} x \\
= \int_\Omega \mathbb{C}(x)P_{S_h^c}\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x): P_{S_h^c}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x +\int_\Omega \theta P_{S_h^t}\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x): P_{S_h^t}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x.
\end{multline}
Using now the definition \eqref{def:Ch} of $\mathbb{C}_h$ and the orthogonality property \eqref{eq:orth} with $\mathbb{D}=\mathbb{C}$, the left-hand side of \eqref{eq:disab} can be written
\begin{multline} \label{huw:lhs2}
\int_\Omega \mathbb{C}_h P_{S_h} \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x):P_{S_h} \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x) {\rm d} x \\
= \int_\Omega \left[\mathbb{C}(x) P_{S_h^c} \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x)+\theta P_{S_h^t}\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x)\right] :\left[P_{S_h^c} \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x) +P_{S_h^t} \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x)\right]{\rm d} x\\
= \int_\Omega \mathbb{C}(x) P_{S_h^c} \mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x):P_{S_h^c} \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x +\int_\Omega\theta P_{S_h^t}\mbox{\boldmath{$\varepsilon$}}(\mathbf{u}_h)(x):P_{S_h^t} \mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)(x){\rm d} x.
\end{multline}
Equations \eqref{huw:lhs1} and \eqref{huw:lhs2} show that the left-hand sides of the Gradient Scheme \eqref{grad-scheme:lin} and of the Hu-Washizu formulation \eqref{eq:disab} are identical. As the right-hand sides of these equations are trivially identical (by definition of $\Pi_{\mathcal{D}}$ and ${\mathcal T}_{\mathcal{D}}$), this shows that the statically condensed Hu-Washizu formulation \cite{LRW06} is the Gradient Scheme corresponding to the Gradient Discretisation defined by \eqref{huw:grad}. Let us now see that the Gradient Discretisation \eqref{huw:grad} satisfies the properties defined in Section \ref{sec:gs}. The coercivity is again a consequence of \eqref{comp:grad} and \eqref{comp:strain}, which we can prove in the following way.
First, since the norms of $P_{S_h^c}$ and $P_{S_h^t}$ are bounded by $1$, the definition \eqref{huw:grad} of $\nabla_{\mathcal{D}}$ and the property \eqref{huw:strain} of $\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}$ immediately give the first inequalities in \eqref{comp:grad} and \eqref{comp:strain}. We then write, from \eqref{huw:strain},
\begin{equation}\label{huw:coer}
\mathbb{C}^{1/2}\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h)= \mathbb{C}^{1/2}P_{S_h^c}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h) +\sqrt{\theta}\, P_{S_h^t}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h).
\end{equation}
By Lemmas \ref{lem:stab} and \ref{lem:comp}, we have $\mathbb{C}^{1/2}P_{S_h^c}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)\in S_h^c$ and \eqref{huw:coer} thus shows that $P_{S_h^c}\mathbb{C}^{1/2}\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h)=\mathbb{C}^{1/2}P_{S_h^c}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)$ and $P_{S_h^t}\mathbb{C}^{1/2}\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h) =\sqrt{\theta}\, P_{S_h^t}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)$. This allows us to write
\begin{eqnarray*}
P_{S_h}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)&=& P_{S_h^c}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)+P_{S_h^t}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)\\
&=&\mathbb{C}^{-1/2}P_{S_h^c}\mathbb{C}^{1/2}\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h) +\theta^{-1/2}P_{S_h^t}\mathbb{C}^{1/2}\mbox{\boldmath{$\varepsilon$}}_{\mathcal{D}}(\mathbf{v}_h).
\end{eqnarray*}
This relation shows that $||P_{S_h}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)||_{\mathbf{L}^2(\Omega)^d}\le \ctel{huw:cst}||\mbox{\boldmath{$\varepsilon$}}_{{\mathcal{D}}}(\mathbf{v}_h)||_{\mathbf{L}^2(\Omega)^d}$ with $\cter{huw:cst}$ not depending on $h$ or $\mathbf{v}_h$.
Since it can be proved (see \cite{LRW06}) that $||\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)||_{\mathbf{L}^2(\Omega)^d} \le \ctel{huw:cst2}||P_{S_h}\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)||_{\mathbf{L}^2(\Omega)^d}$ with $\cter{huw:cst2}$ not depending on $h$ or $\mathbf{v}_h$, the second inequality in \eqref{comp:strain} follows immediately. The second inequality in \eqref{comp:grad} can then be established by using the continuous Korn inequality $||\nabla\mathbf{v}_h||_{\mathbf{L}^2(\Omega)^d}\le \ctel{huw:cst3}||\mbox{\boldmath{$\varepsilon$}}(\mathbf{v}_h)||_{\mathbf{L}^2(\Omega)^d}$ and the second inequality of \eqref{comp:strain} that we just established. To establish the consistency and limit-conformity of the Gradient Discretisation, we notice that
\begin{equation}\label{huw:grad2}
\nabla_{\mathcal{D}} \mathbf{v}_h = \nabla\mathbf{v}_h + (P_{S_h^c}-{\rm Id})\nabla\mathbf{v}_h +\sqrt{\theta}\,\mathbb{C}^{-1/2}P_{S_h^t}\nabla \mathbf{v}_h = \nabla\mathbf{v}_h + \mathcal L_h\nabla\mathbf{v}_h
\end{equation}
where $\mathcal L_h=P_{S_h^c}-{\rm Id}+\sqrt{\theta}\,\mathbb{C}^{-1/2}P_{S_h^t}: \mathbf{L}^2(\Omega)^d\to \mathbf{L}^2(\Omega)^d$ is a self-adjoint operator (because $\sqrt{\theta}\,\mathbb{C}^{-1/2}$ is constant) whose norm is bounded independently of $h$. As $S_h^c$ always contains the set $S_h^0$ of constant tensors and $P_{S_h^0}\to {\rm Id}$ pointwise as $h\to 0$, we have $P_{S_h^c}=P_{S_h^c}({\rm Id}-P_{S_h^0})+P_{S_h^0}\to {\rm Id}$ and $P_{S_h^t}=P_{S_h^t}({\rm Id}-P_{S_h^0})\to 0$ pointwise as $h\to 0$. Hence, $\mathcal L_h\to 0$ pointwise as $h\to 0$. Expression \eqref{huw:grad2} then allows us to prove the consistency and limit-conformity of the Gradient Discretisation \eqref{huw:grad} by using the same techniques as in Section \ref{sec:propstabnodal}.
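Lemma \ref{lem:stab} and the orthogonality relation \eqref{eq:orth} can be made concrete with a small NumPy computation: we build $S^1_\Box$ on a quadrature grid of $\hat K$ (the grid, tolerances and sample tensors are illustrative assumptions of ours), compute $S^c$ as the kernel of $({\rm Id}-P_S)\mathbb{C}$ restricted to $S$, and check the closure of $S^c$ under an elasticity tensor together with \eqref{eq:orth}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric quadrature grid on the reference square (-1,1)^2, uniform weights.
t = np.linspace(-0.9, 0.9, 7)
X, Y = np.meshgrid(t, t)
x, y, q = X.ravel(), Y.ravel(), X.size
one = np.ones(q)

# S^1_box: entry-wise spaces [span{1,y}, span{1}; span{1}, span{1,x}].
# A tensor field is a vector of R^{4q} (q points, row-major 2x2 entries).
def field(entry, f):
    T = np.zeros((q, 4)); T[:, entry] = f
    return T.ravel()

S = np.column_stack([field(0, one), field(0, y),    # tau_11 in span{1, y}
                     field(1, one), field(2, one),  # constant off-diagonal entries
                     field(3, one), field(3, x)])   # tau_22 in span{1, x}
P_S = S @ np.linalg.solve(S.T @ S, S.T)             # orthogonal projector onto S

vecI = np.eye(2).ravel()
def iso(lam, mu):
    """Pointwise isotropic tensor tau -> lam tr(tau) I + 2 mu tau, as a 4x4 matrix on vec(tau)."""
    return 2*mu*np.eye(4) + lam*np.outer(vecI, vecI)

def apply_pw(A, v):
    """Apply a constant 4x4 tensor A at every quadrature point of a field v."""
    return (v.reshape(q, 4) @ A.T).ravel()

C = iso(2.0, 1.0)
# S^c = { tau in S : C tau in S }: kernel of (Id - P_S) C restricted to S.
M = np.column_stack([(np.eye(4*q) - P_S) @ apply_pw(C, s) for s in S.T])
_, sv, Vt = np.linalg.svd(M, full_matrices=False)
Sc = S @ Vt[sv < 1e-8].T
assert Sc.shape[1] == 4            # here S^c is the 4-dimensional space of constant tensors
P_Sc = Sc @ np.linalg.solve(Sc.T @ Sc, Sc.T)
P_St = P_S - P_Sc                  # projector onto the orthogonal complement S^t inside S

D = iso(0.7, 1.3)                  # an arbitrary linear elasticity tensor
# Lemma (lem:stab): S^c is closed under D ...
for s in Sc.T:
    assert np.linalg.norm((np.eye(4*q) - P_Sc) @ apply_pw(D, s)) < 1e-8
# ... and (eq:orth): D P_{S^c} tau is L^2-orthogonal to P_{S^t} omega.
tau, omg = rng.standard_normal(4*q), rng.standard_normal(4*q)
assert abs(apply_pw(D, P_Sc @ tau) @ (P_St @ omg)) < 1e-6
```

In this computation $S^c$ turns out to be exactly the four-dimensional space of constant tensors, consistently with the proof of Lemma \ref{lem:stab}.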
\begin{remark} The same construction can be made when $\mathbb{C}$ is only piecewise constant on $\mathcal{T}_h$. \end{remark} \begin{remark} In contrast to \cite{BCR04,LRW06}, the convergence result of Theorem \ref{thm:error-est} is obtained for the Hu-Washizu scheme without assuming the full $H^2$-regularity of the solution. Moreover, as in Remark \ref{rem:nl}, this construction also gives a converging Hu-Washizu-based scheme for non-linear elasticity equations. \end{remark} \subsection{Technical lemmas} \begin{lemma}\label{lem:comp} If $\mathbb{C}_1$ and $\mathbb{C}_2$ are linear elasticity tensors in $\mathbf{R}^d$ with Lam\'e coefficients $(\lambda_1,\mu_1)$ and $(\lambda_2,\mu_2)$, then, for any $\btau\in \mathbf{R}^{d\times d}$, \begin{equation}\label{form:comp} \mathbb{C}_1\mathbb{C}_2\btau = (\lambda_1\lambda_2 d +2\mu_1\lambda_2+2\mu_2\lambda_1) \mathop{\rm tr}(\btau)\mathbf{I}+4\mu_1\mu_2 \btau. \end{equation} If $\mathbb{C}$ is a linear elasticity tensor with Lam\'e coefficients $(\lambda,\mu)$, then \begin{equation}\label{form:root} \mathbb{C}^{1/2}\btau = \frac{\sqrt{2\mu+\lambda d}-\sqrt{2\mu}}{d}\mathop{\rm tr}(\btau)\mathbf{I} +\sqrt{2\mu}\btau. \end{equation} \end{lemma} \begin{proof} Formula \eqref{form:comp} is obtained by straightforward computation, and Formula \eqref{form:root} by looking for $\mathbb{C}^{1/2}$ as a linear elasticity tensor with coefficients $(\alpha,\beta)$ such that $\mathbb{C}^{1/2}\mathbb{C}^{1/2}=\mathbb{C}$, which boils down from \eqref{form:comp} to solving $\alpha^2d+4\alpha\beta=\lambda$ and $4\beta^2=2\mu$.
\end{proof} \begin{lemma}\label{lem:root} If $\mathbb{E}:(\mathbf{R}^{d\times d},:)\to (\mathbf{R}^{d\times d},:)$ is symmetric positive definite and satisfies, for all $\btau\in \mathbf{R}^{d\times d}$, $(\mathbb{E}\btau)^T=\mathbb{E}\btau^T$, then $\mathbb{E}^{1/2}$ also satisfies this property. \end{lemma} \begin{proof} Let $\mathcal L:\mathbf{R}^{d\times d}\to \mathbf{R}^{d\times d}$ be the endomorphism $\mathcal L \btau=(\mathbb{E}^{1/2}\btau^T)^T$. Using $\btau:\bomega=\btau^T:\bomega^T$ and the symmetric positive definite character of $\mathbb{E}^{1/2}$, it is easy to check that $\mathcal L$ is symmetric positive definite. Moreover, by assumption on $\mathbb{E}$, $\mathcal L^2\btau=(\mathbb{E}^{1/2}[(\mathbb{E}^{1/2}\btau^T)^T]^T)^T =(\mathbb{E}^{1/2}\mathbb{E}^{1/2}\btau^T)^T=(\mathbb{E}\btau^T)^T= \mathbb{E}\btau$. Hence, $\mathcal L$ is the symmetric positive definite square root $\mathbb{E}^{1/2}$ of $\mathbb{E}$ and thus $\mathbb{E}^{1/2}\btau^T = \mathcal L(\btau^T)=(\mathbb{E}^{1/2}\btau)^T$, which completes the proof. \end{proof} \section{Conclusion} In this work, we developed the Gradient Scheme framework for linear and non-linear elasticity equations. We proved that this framework enables error estimates (for linear equations) and convergence analysis (for non-linear equations) of numerical methods under very few assumptions. In particular, these results hold without assuming the full $H^2$-regularity of the exact solution, which can be lost in the cases of composite materials or strongly non-linear models. We showed that many classical and modern numerical schemes developed in the literature for elasticity equations are actually Gradient Schemes. We even established that some three-field schemes, based on a modified Hu-Washizu formulation and designed to be stable in the quasi-incompressible limit, are also Gradient Schemes after being recast in a displacement-only formulation by static condensation.
Since Gradient Schemes are seamlessly applicable to both linear and non-linear equations, embedding into this framework numerical methods originally developed for linear elasticity also allowed us to show how to adapt those methods to non-linear elasticity, while retaining good stability and convergence properties. \end{document}
\begin{document} \title{\bf Estimation and Selection Properties of the LAD Fused Lasso Signal Approximator} \date{} \author{\begin{tabular}{c} Xiaoli Gao\footnote{Correspondence: 106 Petty Building, Greensboro, NC 27412. Email: x\[email protected]}\\ \emph{Department of Mathematics and Statistics}\\ \emph{University of North Carolina at Greensboro} \end{tabular}} \titlepage \maketitle \begin{center} \begin{minipage}{130mm} \begin{center}{\bf Abstract}\end{center} The fused lasso is an important method for signal processing when the hidden signals are sparse and blocky. It is often used in combination with the squared loss function. However, the squared loss is not suitable for heavy-tailed error distributions, nor is it robust against outliers, which arise often in practice. The least absolute deviations (LAD) loss provides a robust alternative to the squared loss. In this paper, we study the asymptotic properties of the fused lasso estimator with the LAD loss for signal approximation. We refer to this estimator as the LAD fused lasso signal approximator, or LAD-FLSA. We investigate the estimation consistency properties of the LAD-FLSA and provide sufficient conditions under which the LAD-FLSA is sign consistent. We also construct an unbiased estimator for the degrees of freedom of the LAD-FLSA for any given tuning parameters. Both simulation studies and real data analysis are conducted to illustrate the performance of the LAD-FLSA and the effect of the unbiased estimator of the degrees of freedom. \end{minipage} \end{center} {Keywords:} Estimation consistency; Jump selection consistency; Block selection consistency; Degrees of freedom; Fused lasso; Least absolute deviations; Sign consistency. \setcounter{equation}{0} \section{Introduction} High-dimensional data arise in many fields including signal processing, image de-noising and genomic and genetic studies.
When a model is sparse and has certain known structures, penalized methods have been widely used since they can incorporate known structures into penalty functions in a natural way and can do estimation and variable selection simultaneously. A biological example is the analysis of copy-number variations in a human genome. In this problem, we are interested in detecting the changes in copy numbers based on data from comparative genomic hybridization (CGH) arrays. For instance, Snijders et al. (2001) studied a CGH array consisting of $2400$ bacterial artificial chromosome (BAC) clones, where the log base 2 intensity ratios at all clones are measured. In Figure 1, we plot sample CGH copy number data on chromosomes 1--4 from cell line GM 13330. The data set has two characteristics: 1) there are only a small number of chromosomal locations where the copy numbers of genes change, that is, the underlying signals are sparse; 2) the adjacent markers tend to have similar observations, i.e., the signals are blocky. In a signal approximation model with sample size $n$, the $i$th observation $y_i$ is considered to be a realization of the true signal $\mu^{0}_{i}$ plus random noise $\veps_i$, \bel{signal approximation model} y_i=\mu^{0}_{i} +\veps_i, \ \ i=1, \cdots, n. \eel In many cases, the true signal vector $\bmu^0=(\mu^0_{1},\cdots,\mu^0_{n})'$ is both blocky and sparse, meaning that the intensities of true signals within each block are the same and most blocks consist of zero signals. The goal here is to find a solution not only to recover all the change points, but also to identify the nonzero blocks. We can use the lasso penalty to enforce a sparse solution by penalizing the $\ell_1$ norm of the signals $\|\bmu\|_1\equiv\sum_{i=1}^n |\mu_i|$, and use the fusion (total variation) penalty to enforce a blocky solution by penalizing $\|\bmu\|_{\rm TV}\equiv\sum_{i=2}^n |\mu_i-\mu_{i-1}|$.
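These two penalties are simple to evaluate; a minimal sketch on a hypothetical blocky, sparse signal (all values are illustrative):

```python
import numpy as np

def l1_penalty(mu):
    """Lasso penalty ||mu||_1 = sum_i |mu_i|, encouraging sparsity."""
    return float(np.sum(np.abs(mu)))

def tv_penalty(mu):
    """Fusion (total variation) penalty ||mu||_TV = sum_{i>=2} |mu_i - mu_{i-1}|,
    encouraging a blocky, piecewise-constant solution."""
    return float(np.sum(np.abs(np.diff(mu))))

# hypothetical sparse, blocky signal with two nonzero blocks
mu = np.array([0, 0, 0, 2, 2, 2, 0, 0, -1, -1], dtype=float)
print(l1_penalty(mu))  # 8.0: three entries of |2| plus two of |-1|
print(tv_penalty(mu))  # 5.0: jumps of size 2, 2 and 1
```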
The combination of these two penalties results in the fused lasso (FL) penalty (Tibshirani et al., 2005). For detecting change points in copy number variations, Tibshirani and Wang (2008) proposed to use the fused lasso with a squared loss function. We refer to this approach as the least squares fused lasso signal approximator (LS-FLSA). For $\lamn = (\lmone, \lmtwo)$, the LS-FLSA seeks to find $\hbmu^{\ell_2}_n(\lamn)= (\hmu^{\ell_2}_1(\lamn),\cdots,\hmu^{\ell_2}_n(\lamn))'$ that minimizes \bel{tw-ls flsa model} \sum_{i=1}^n(y_i-\mu_i)^2 +\lmone\sum_{i=1}^n |\mu_i|+\lmtwo\sum_{i=2}^n|\mu_i-\mu_{i-1}|, \eel where $\lmone$ and $\lmtwo$ are two nonnegative penalty parameters. Recently, Rinaldo (2009) studied the selection properties of the LS-FLSA and adaptive LS-FLSA under the block partition assumption on the underlying signal. Several authors have also studied the properties of related procedures. For example, Mammen and van de Geer (1997) studied the rate of convergence in bounded variation function classes of the parameter functions; Harchaoui and L\'{e}vy-Leduc (2010) investigated the estimation properties of change points using the total variation penalty; Boysen et al. (2009) studied the asymptotic properties of jump-penalized least-squares regression aiming at approximating a regression function by piecewise-constant functions. These studies significantly advanced our understanding of the LS-FLSA and fusion penalized LS methods in the context of signal detection and nonparametric estimation. However, all these results are obtained for methods with the least squares loss and/or require a normality assumption on the errors. The LS-FLSA is easily affected by outliers, which arise often in practice, for example, in CGH copy number variation data. A more robust fused lasso signal approximator can be constructed by using the LAD loss, which we shall refer to as the LAD-FLSA.
For any given $\lamn=(\lmone, \lmtwo)$, the LAD-FLSA is defined as \bel{lad-flsa model} \hbmu_n(\lamn)=\argmin_{\bmu\in \cR_n}\left\{\sum_{i=1}^n|y_i-\mu_i| +\lmone\sum_{i=1}^n |\mu_i|+\lmtwo\sum_{i=2}^n|\mu_i-\mu_{i-1}|\right\}. \eel The convex minimizer $\hbmu_n$ in \eqref{lad-flsa model} has been applied to detect copy number variation breakpoints in Gao and Huang (2010a). However, its theoretical properties have not been studied. In this paper, we seek to answer the following questions about the LAD-FLSA: (1) how close can $\hbmu_n$ be to the true model $\bmu^0$ asymptotically? (2) how accurately can $\hbmu_n$ recover the true nonzero blocks with a large probability when $\bmu^0$ is both sparse and blocky? (3) what is the degrees of freedom of the LAD-FLSA? The contributions of this paper are as follows. \begin{itemize} \item We show that the LAD-FLSA is rate consistent if the number of blocks is relatively small. \item We provide sufficient conditions under which the LAD-FLSA is able to recover the block patterns and distinguish nonzero blocks from zero ones correctly with high probability. That is, the LAD-FLSA can determine all the jumps, identify all the nonzero blocks, and also distinguish the positive signals from negative ones under some conditions. \item In terms of model complexity measures, we justify that the number of nonzero blocks generated from a LAD-FLSA estimate is an unbiased estimator of the degrees of freedom of the LAD-FLSA. \item Without the assumption of Gaussian or sub-Gaussian random error, our studies can be widely applied for signal detection in signal processing when the random noise does not follow a nice distribution or the signal-to-noise ratio is low. \end{itemize} The rest of the paper is organized as follows. We list some notations in Section 2. We study the estimation consistency and sign consistency properties of the LAD-FLSA in Sections 3 and 4, respectively.
In Section 5 we derive an unbiased estimator of the degrees of freedom of the LAD-FLSA. In Section 6 we conduct simulation studies and real data analysis to demonstrate the performance of the LAD-FLSA. We also verify the effect of the unbiased estimator of the degrees of freedom numerically in this section. Section 7 concludes the paper with some discussions. All the technical proofs are postponed to the Appendix. \section{Preliminaries} Suppose the true model $\bmu^0=(\mu^0_1,\cdots,\mu^0_n)'$ includes $J_0$ blocks and there exists a unique vector $\bnu^0=(\nu^0_1, \cdots,\nu^0_{J_0})$ such that \bel{truemodel} \mu_i^0=\sum_{j=1}^{J_0} \nu_j^0~ I(i \in \cB_j^0), \eel where $\{\cB_1^0, \cdots, \cB_{J_0}^0\}$ is a mutually exclusive block partition of $\{1,\cdots, n\}$ generated from $\bmu^0$. Based on the block partition, we can rewrite $\{1,\cdots,n\}$ as $\{i_0,\cdots,i_1-1, i_1,\cdots, i_2-1, \cdots,i_{J_0-1},\cdots, i_{J_0}-1\}$, where $1=i_0<i_1<\cdots<i_{J_0-1} \le i_{J_0}-1=n$ and $\{i_{j-1},\cdots,i_j-1\}$ gives the $j$th block set $\cB_j^0$. We denote the jump set in the true model as $\cJ^0$. Then $\cJ^0=\{i_1,\cdots,i_{J_0-1}\}$ and $J_0=|\cJ^0|+1$, where $|\cJ^0|$ is the cardinality of $\cJ^0$. Let $\cK^0=\cK(\bmu^0)=\{1\le j\le J_0: \nu_j^0\neq 0\}$ be the set of nonzero blocks of $\bmu^0$ and the number of nonzero blocks $K_0=|\cK^0|$. We now list the following notations that will be used throughout the paper, some of which are adopted from Rinaldo (2009). \begin{itemize} \item For the true model $\bmu^0$ defined in \eqref{truemodel}, we introduce the following notations (I--IV): \begin{itemize} \item [(I)] $b_j^0=|\cB_j^0|$, the size of the block set $\cB_j^0$ for $1\le j\le J_0$; \item [(II)] $b^0_{\min}=\min_{1\le j\le J_0} b_j^0$, the smallest block size; \item [(III)] $a_n=\min_{i\in \cJ^0} |\mu_i^0-\mu_{i-1}^0|$, the smallest jump; \item [(IV)] $\rho_n=\min_{j\in\cK^0} |\nu_j^0|$, the smallest nonzero signal intensity.
\end{itemize} \item The corresponding notations for a LAD-FLSA estimate $\hbmu_n$ in \eqref{lad-flsa model} are as follows: \begin{itemize} \item [(V)] $\widehat \cJ=\cJ(\hbmu_n)$, $\Jhat=J(\hbmu_n)$, $ \hcB_j=\cB_j(\hbmu_n)$ with $\widehat b_j=|\hcB_j|$ and $\hnu_j=\nu_j(\hbmu_n)$ for $1\le j\le \Jhat$ are the estimated jump set, number of blocks, block partitions of $\{1,\cdots, n\}$ with corresponding block sizes and the associated unique vector generated from $\hbmu_n$; \item [(VI)] $\widehat{\cK}=\cK(\hbmu_n)=\{1\le j\le \Jhat: \hnu_j\neq 0\}$ is the set of estimated nonzero blocks and $\Khat=|\widehat{\cK}|$. \end{itemize} \end{itemize} \section{Estimation consistency of LAD-FLSA estimators}\label{est-con} In this section, we study the estimation consistency of the LAD-FLSA $\hbmu_n$. We first consider the following conditions: \begin{itemize} \item[(A1)] Error assumption: random errors $\veps_i$'s in model \eqref{signal approximation model} are independent and identically distributed with median $0$, and have a density $f$ that is continuous and positive in a neighborhood of $0$. \item[(A2)] Block number assumption: the true block number $J_0<M_1\Lambda_n$ for a constant $M_1 > 0$, where $\Lambda_n=\max\{16n/(\lmtwo^2-2n^2\lmone^2), n/(\lmtwo-n\lmone)\}+1$ with $\lmtwo^2>2n^2\lmone^2$. \end{itemize} In (A1), we only require that the random errors have a density that is continuous and positive in a neighborhood of zero and have median zero. This is a much weaker condition than the Gaussian random error assumption required in Harchaoui and L\'{e}vy-Leduc (2010) and Rinaldo (2009). Indeed, (A1) allows all heavy-tail distributions of the errors, including the Cauchy distribution whose moments do not exist. Condition (A2) requires that the number of blocks in the underlying model can increase with $n$, but at a slower rate than $O(\Lambda_n)$.
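The quantities (I--IV) and the jump set $\cJ^0$ can be read off mechanically from any piecewise-constant vector; a small illustrative helper (the signal values and function names are hypothetical):

```python
import numpy as np

def block_structure(mu):
    """Recover the block structure of a piecewise-constant signal:
    jump set {i : mu_i != mu_{i-1}} (1-indexed, as in the text), block
    intensities nu, block sizes b_j, and the 1-indexed set K of nonzero blocks."""
    mu = np.asarray(mu, dtype=float)
    jumps = [i + 1 for i in range(1, len(mu)) if mu[i] != mu[i - 1]]
    bounds = [1] + jumps + [len(mu) + 1]          # i_0, i_1, ..., plus a sentinel
    nu = [float(mu[b - 1]) for b in bounds[:-1]]  # intensity of each block
    sizes = [bounds[j + 1] - bounds[j] for j in range(len(bounds) - 1)]
    K = [j + 1 for j, v in enumerate(nu) if v != 0]
    return jumps, nu, sizes, K

mu0 = [0, 0, 2, 2, 2, 0, -1, -1]                  # hypothetical true signal
jumps, nu, sizes, K = block_structure(mu0)
# jumps = [3, 6, 7], nu = [0.0, 2.0, 0.0, -1.0], sizes = [2, 3, 1, 2], K = [2, 4]
a_n = min(abs(mu0[i - 1] - mu0[i - 2]) for i in jumps)  # smallest jump: 1
rho_n = min(abs(nu[j - 1]) for j in K)                  # smallest nonzero intensity: 1.0
```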
As a by-product, the tuning parameter for jumps, $\lmtwo$, grows much faster than the tuning parameter for signal intensities, $\lmone$. This is a reasonable assumption since the true model is block-wise, that is, the number of nonzero jumps can be much smaller than the number of the nonzero signals. For example, if the number of jumps is $O((\log(n))^{1/2})$, then we can let $\lmtwo=n^{1/2}(\log(n))^{-1/4}$ and $\lmone=n^{-1}$. In order to study the asymptotic properties of the LAD-FLSA estimator $\hbmu_n$ in \eqref{lad-flsa model}, we first investigate its LS-FLSA approximation, \bel{ls-flsa model} \tbmu_n(\lamn)=\argmin_{\bmu}\left\{\sum_{i=1}^n(z_i-(f(0))^{1/2}\mu_i)^2 +\lmone\sum_{i=1}^n |\mu_i| +\lmtwo \sum_{i=2}^n|\mu_i-\mu_{i-1}|\right\}, \eel where $z_i=(f(0))^{1/2}\mu^0_i+\eta_i$ with $\eta_i=(4f(0))^{-1/2}\sgn(\veps_i)$ for $1\le i\le n$ consist of pseudo-signal data. Thus, all estimates listed in (V--VI) have analogues generated from $\tbmu_n$. We now provide some rate upper bounds for the number of blocks generated from $\tbmu_n$ in \eqref{ls-flsa model} and $\hbmu_n$ in \eqref{lad-flsa model}, respectively. \begin{lemma}\label{lemma dmax} Under (A1), we have (i) $\widetilde J\le 16n/(\lmtwo^2-2n^2\lmone^2)+1$, provided $\lmtwo^2>2n^2\lmone^2$ and (ii) $\widehat J\le n/(\lmtwo-n\lmone)+1$, provided $\lmtwo>n\lmone$. In addition, if (A2) holds, we also have (iii) $\widetilde J+\widehat J+J_0<(M_1+2)\Lambda_n$, where both $M_1$ and $\Lambda_n$ are defined in (A2). \end{lemma} The proof of Lemma \ref{lemma dmax} is given in the Appendix. Lemma \ref{lemma dmax} gives upper bounds for the number of blocks associated with $\tbmu_n$ and $\hbmu_n$. We can interpret the bounds in (i) and (ii) as the maximal dimension of any linear space to which $\tbmu_n$ and $\hbmu_n$ may belong, respectively.
Similarly, (iii) provides us a unified rate upper bound for the dimension of any linear space to which $\hbmu_n$, $\tbmu_n$ and $\bmu^0$ can belong. Lemma \ref{lemma dmax} is useful in obtaining the estimation consistencies of $\hbmu_n$ and $\tbmu_n$. Furthermore, it is important to notice that the upper bounds in Lemma \ref{lemma dmax} are mainly affected by the rate of $\lmtwo$, which is reasonable since the number of jumps in an FLSA model is mainly determined by $\lmtwo$. Denote $\|\bmu\|_n^2=\sum_{i=1}^n \mu_i^2 /n$ and $\|\bmu\|_2^2=\sum_{i=1}^n \mu_i^2.$ Below we present the estimation properties of $\tbmu_n$ in \eqref{ls-flsa model}. \begin{lemma} \label{ls-flsa consistency} Suppose (A1-A2) hold. Then there exists a constant $0<c<1$, such that $$\bP\left(\|\tbmu_n-\bmu^0\|_n \ge \alpha_n \right) \le \Lambda_n \exp\{ \Lambda_n\log n -(1-c)^2(f(0)/2) n\alpha_n^2\},$$ where $\Lambda_n$ is defined in (A2) and $\alpha_n=1/(c\sqrt{f(0)}) [\lmone+2\lmtwo +((M_1+1)\Lambda_n/n)^{1/2}]$. Furthermore, if we let $\alpha_n=(2M_2 \Lambda_n (\log n)/n)^{1/2}$ and choose $\lmone$ and $\lmtwo$ such that $\lmone+2\lmtwo=c\sqrt{f(0)}\alpha_n-((M_1+1)\Lambda_n/n)^{1/2}$ for a constant $M_2>1/(f(0)(1-c)^{2})$, then $$ \bP\left(\|\tbmu_n-\bmu^0\|_n \ge \alpha_n\right) \le \Lambda_n n^{\{1-M_2f(0)(1-c)^2\}\Lambda_n}.$$ \end{lemma} The proof of Lemma 2 is given in the Appendix. Lemma \ref{ls-flsa consistency} gives us the estimation consistency result for a pseudo LS-FLSA $\tbmu_n$ (using pseudo data $z_i$'s and bounded noises $\eta_i$'s). It is worthwhile to point out that even though we only report the consistency result for a pseudo LS-FLSA estimator $\tbmu_n$ with bounded noises $\eta_i$'s in Lemma 2, we can obtain a similar consistency result for the regular LS-FLSA estimator \eqref{tw-ls flsa model} under the assumption of Gaussian noises without much extra work.
Thus, the estimation consistency properties of the LS signal approximator with the total variation penalty in Harchaoui and L\'{e}vy-Leduc (2010) can also be obtained from Lemma 2 by taking $\lmtwo=0$ and $\Lambda_n=K_{\max}$. The consistency result of $\tbmu_n$ in Lemma \ref{ls-flsa consistency} plays an important role in deriving the corresponding estimation consistency result of $\hbmu_n$ in the following Theorem \ref{lad-flsa consistency}. \begin{theorem}\label{lad-flsa consistency} Suppose (A1) and (A2) hold. Then there exists a constant $0<c<1$ such that $$\bP\left(\|\hbmu_n-\bmu^0\|_n \ge \gamma_n \right) \le \Lambda_n \exp\{ \Lambda_n\log n -(1-c)^2(f(0)/8) n\gamma_n^2\} +(8/f(0))(\Lambda_n/(n\gamma_n^2))^{1/2},$$ where $\Lambda_n$ is defined in (A2) and $\gamma_n=2/(c\sqrt{f(0)}) [\lmone+2\lmtwo +((M_1+1)\Lambda_n/n)^{1/2}]$. Furthermore, if we let $\gamma_n=(8M_3 \Lambda_n (\log n)/n)^{1/2}$ for a constant $M_3>1/(f(0)(1-c)^2)$ and choose $\lmone$ and $\lmtwo$ such that $\lmone+2\lmtwo=(c\sqrt{f(0)}/2)\gamma_n-((M_1+1)\Lambda_n/n)^{1/2}$, then $$ \bP\left(\|\hbmu_n-\bmu^0\|_n \ge \gamma_n\right) \le \Lambda_n n^{-\{M_3f(0)(1-c)^2-1\} \Lambda_n}+O\left(1/\sqrt{\log n}\right).$$ \end{theorem} The proof of Theorem \ref{lad-flsa consistency} is given in the Appendix. Theorem \ref{lad-flsa consistency} implies that the LAD-FLSA $\hbmu_n$ can be consistent for estimating $\bmu^0$ at the rate of $O\left((\Lambda_n (\log n)/ n)^{1/2}\right)$. Furthermore, if the number of blocks in true signals is bounded, the rate of convergence can be stated more explicitly as in the following Corollary~\ref{corollary con}. \begin{corollary}\label{corollary con} Suppose (A1) holds and there exists $J_{\max}>0$ such that $J_0 \le J_{\max}$.
Then $$ \bP\left( \{\max(\Jhat,\widetilde J)<J_{\max}\} \cap \{\|\hbmu_n-\bmu^0\|_n \ge \theta_n\} \right) \le J_{\max} n^{-c_{2M} J_{\max}}+O\left(1/\sqrt{\log n}\right)$$ for $\theta_n=(8M J_{\max} (\log n)/n)^{1/2}$ and $\lmone+2\lmtwo=(c_{1M} J_{\max} (\log n)/n)^{1/2}-(J_{\max}/n)^{1/2}$. Here $M>1/((1-c)^2f(0))$ is a constant, $c_{1M}=(2Mc^2f(0))^{1/2}$ and $c_{2M}=f(0)M(1-c)^2-1$. \end{corollary} Corollary \ref{corollary con} says that $\hbmu_n$ is consistent for estimating $\bmu^0$ at the rate $O\left( ( J_{\max} (\log n)/n)^{1/2} \right)$ if the numbers of both true and estimated jumps are bounded above. This convergence rate can be compared to $n^{-1/2}$, which is argued by Yao and Yu (1989) to be optimal for LS estimators of the levels of a step function. Notice that if $\lim_{n\to\infty}\bP(\widehat\cJ=\cJ^0)=1$, then $\sum_{i=1}^n(\hmu_i-\mu^0_i)^2=\sum_{j=1}^{J_0}b_j^0(\hnu_j-\nu^0_j)^2 \ge b_{\min}^0\sum_{j=1}^{J_0}(\hnu_j-\nu^0_j)^2$ for large $n$ almost surely. Thus Corollary \ref{corollary con} implies that, for large $n$, \bel{v-con} \bP\left(\{\widehat\cJ=\cJ^0\} \cap \{\|\hbnu_n-\bnu^0\|_2 \ge (8M J_{\max} (\log n)/ b_{\min}^0)^{1/2}\}\right) \le J_{\max} n^{-\{f(0)M(1-c)^2-1\}J_{\max}},\eel where the convergence rate is affected by $b^0_{\min}$. Therefore, $\hbnu_n$ can converge to $\bnu^0$ in the $\ell_2$ norm at rate $O\left((J_{\max}(\log n)/b_{\min}^0)^{1/2}\right)$. In other words, a block estimator $\hbnu_n$ converges faster to the true model $\bnu^0$ when the block sizes are larger. \section{Block sign consistency of LAD-FLSA}\label{sign con} In this section, we study the sign consistency of the LAD-FLSA. The sign consistency has been studied by Zhao and Yu (2006) and Gao and Huang (2010b) for both the LS-Lasso and LAD-Lasso in high-dimensional linear regression settings.
It is a stronger result than variable selection consistency since it requires not only that the variables be selected correctly, but also that their signs be estimated correctly with high probability. In light of the block structure in the hidden signals, we consider the selection consistency and sign consistency for jumps and blocks separately. \begin{definition}\label{jump selection consistency} $\hbmu_n$ is {\it jump selection consistent} if $$ \lim_{n\to \infty} \bP\left(\{\Jhat = J_0\} \bigcap\{ \cap_{1\le j\le J_0}\{\hcB_j=\cB_j^0 \}\}\right)=1. $$ \end{definition} \begin{definition}\label{jump sign consistency} $\hbmu_n$ is {\it jump sign consistent} if $$ \lim_{n\to \infty} \bP\left(\{\widehat \cJ = \cJ^0\} \bigcap \{\sgn(\hmu_i-\hmu_{i-1} )=\sgn(\mu_i^0-\mu_{i-1}^0) , \forall i\in \cJ^0\}\right)=1. $$ \end{definition} Definition \ref{jump selection consistency} requires that $\hbmu_n$ partition the signals into blocks correctly with probability converging to one. Definition \ref{jump sign consistency} is a stronger requirement since it requires that $\hbmu_n$ find not only all the jumps, but also the jump directions (up/down) correctly. A jump selection consistent estimator can recover the jump set $\cJ^0$ correctly with high probability, but does not tell us which blocks have nonzero intensities. In other words, there may exist $\delta>0$ and $1\le j\le J_0$ such that $P(\{\hnu_j\neq 0\} \cap \{\nu_j^0=0\})>\delta$ for a jump sign consistent $\hbmu_n$. We now define the block selection consistency and the block sign consistency in Definitions \ref{block selection consistency} and \ref{block sign consistency}. The latter is a stronger definition since it requires the signs of the signals to be recovered correctly. \begin{definition}\label{block selection consistency} $\hbmu_n$ is {\it block selection consistent} if $$ \lim_{n\to \infty} \bP\left( \{\widehat \cJ = \cJ^0 \} \bigcap \{\widehat{\cK} = \cK^0\}\right)=1.
$$ \end{definition} \begin{definition}\label{block sign consistency} $\hbmu_n$ is {\it block sign consistent} if $$ \lim_{n\to \infty} \bP\left(\{\widehat \cJ = \cJ^0\} \bigcap \{\widehat{\cK} = \cK^0\} \bigcap \{\sgn(\hnu_j)=\sgn(\nu_j^0), \forall j\in J_0\}\right)=1. $$ \end{definition} \subsection{Jump selection consistency} For $\lmone=0$, the LAD-FLSA becomes a LAD signal approximator using only the total variation penalty (LAD-FSA), defined as \bel{LAD-FSA} \hbmu_n^{\rm F}(\lm_{2n})= \hbmu_n^{\rm FL}(0,\lm_{2n})=\argmin\left\{\sum_{i=1}^n|y_i-\mu_i| +\lambda_{2n}\sum_{i=2}^n|\mu_i-\mu_{i-1}|\right\}. \eel Suppose $\hbmu_n^{\rm F}(\lm_{2n})$ can do the block partition correctly. Then we expect to sort out those nonzero blocks by increasing $\lmone$ slowly from $0$. So we first investigate the jump selection consistency of $\hbmu_n^{\rm F}(\lm_{2n})$. Below we list some conditions on the smallest jump and the smallest block size of the true model in \eqref{signal approximation model} and \eqref{truemodel} for the jump sign consistency. Recall that $b_{\min}^0$ and $a_n$ are defined in (II) and (III) in Section 2. \begin{itemize} \item[(B1)] (a) $\lmtwo\to \infty$; (b) there exists a $\delta>0$, such that $ \lmtwo (\log(n-J_0))^{-1/2}>(1+\delta)/2$. \item [(B2)] (a) $ (b^0_{\min})^{1/2}a_n\to \infty$; (b) there exists $\delta>0$, such that $ (b^0_{\min}/\log(J_0))^{1/2}a_n>3(1+\delta)/(\sqrt{2}f(0))$ for sufficiently large $n$. \item [(B3)] $\lmtwo<(f(0)/3) b_{\min}^0 a_n$ for sufficiently large $n$. \end{itemize} Here (B1) and (B3) indicate that $\lmtwo$ increases with $n$ faster than $O\left((\log(n-J_0))^{1/2}\right)$ but slower than $O(b_{\min}^0 a_n)$. (B2-a) requires that either the smallest jump or the smallest size of all blocks in the true model should be large enough so that $\{1,\cdots,n\}$ can be partitioned into different blocks correctly. (B2-b) strengthens (B2-a) by providing a lower bound.
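The finite-$n$ parts of (B1)--(B3) can be screened numerically for candidate parameters; a hedged sketch (all numerical values are hypothetical, and the purely asymptotic parts (B1-a) and (B2-a) cannot be checked at a single $n$):

```python
import math

def check_B_conditions(n, J0, b_min, a_n, lam2, f0, delta=0.1):
    """Check the finite-n parts (B1-b), (B2-b), (B3) of the jump sign
    consistency conditions; f0 = f(0) is the error density at zero
    (e.g. 1/pi for standard Cauchy errors)."""
    B1b = lam2 / math.sqrt(math.log(n - J0)) > (1 + delta) / 2
    B2b = math.sqrt(b_min / math.log(J0)) * a_n > 3 * (1 + delta) / (math.sqrt(2) * f0)
    B3 = lam2 < (f0 / 3) * b_min * a_n
    return B1b and B2b and B3

# hypothetical example: Cauchy errors, a few large blocks, unit jumps
print(check_B_conditions(n=10000, J0=10, b_min=500, a_n=1.0,
                         lam2=30.0, f0=1 / math.pi))   # True
print(check_B_conditions(n=10000, J0=10, b_min=500, a_n=1.0,
                         lam2=100.0, f0=1 / math.pi))  # False: (B3) fails
```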
Conditions (B1-B3) provide helpful guidance for finding an optimal tuning parameter in model \eqref{LAD-FSA}. When the above conditions are satisfied, the LAD-FSA estimator $\hbmu_n^{\rm F}(\lm_{2n})$ can group all signals into different blocks correctly with a large probability. \begin{theorem}\label{theorem-consistency-fsa} Consider the signal approximation model \eqref{signal approximation model} with the true model \eqref{truemodel}. A LAD-FSA estimator $\hbmu_n^{\rm F}(\lm_{2n})$ is jump sign consistent under (A1) and (B1-B3). \end{theorem} The proof of Theorem \ref{theorem-consistency-fsa} is postponed to the Appendix. Theorem~\ref{theorem-consistency-fsa} tells us that we can apply a LAD-FSA approach to recover not only the true jumps, but also their signs correctly with high probability if the true hidden signal vector is blocky and the tuning parameter $\lmtwo$ is chosen appropriately. \subsection{Block selection consistency} We have seen that a LAD-FSA solution can be jump selection consistent for the blocky hidden signal vector under some conditions. In many cases, the true signal vector includes some zero blocks, which cannot be separated from nonzero ones using the LAD-FSA approach since the total variation penalty only shrinks adjacent differences but not the signals themselves. The additional lasso penalty of the FLSA can force the estimates of some block intensities to be exactly zero. We are interested in finding a LAD-FLSA solution that not only recovers the true jumps, but also finds the zero blocks and keeps only the nonzero ones with a large probability. Ultimately, we need to study how to choose the tuning parameters $\lmone$ and $\lmtwo$ appropriately, such that the LAD-FLSA is block selection consistent. When the true block model in \eqref{truemodel} is also sparse, we need the following additional conditions to separate nonzero blocks from zero ones.
\begin{itemize} \item[(C1):] (a) $\lmone (b_{\min}^0)^{1/2}\to \infty$ when $n\to \infty$; (b) there exists $\delta>0$, such that $\lmone(b^0_{\min}/\log(J_0-K_0))^{1/2}>4\sqrt{2}(1+\delta)$. \item [(C2):] $\lmtwo/b_{\min}^0 <\lmone/8$ for sufficiently large $n$. \item [(C3):] (a) $\rho_n(b^0_{\min})^{1/2} \to \infty$ when $n\to \infty$; (b) there exists $\delta>0$ such that $\rho_n (b^0_{\min}/\log(K_0))^{1/2}>2\sqrt{2}(1+\delta)/f(0)$. \item [(C4):] $\lmtwo/b^0_{\min}<f(0)\rho_n/3$ for sufficiently large $n$. \item [(C5):] $\lmone<f(0)\rho_n/2$ for sufficiently large $n$. \end{itemize} Here Conditions (C1) and (C2) indicate that either $\lmone$ or the smallest block size $b^0_{\min}$ should grow with $n$, with a lower bound provided in (C1-b), since $\lmtwo$ grows with $n$ by (B1). In particular, if $\lmone$ is relatively small, as allowed by (C5), $b^0_{\min}$ must be large enough. (C4) and (C5) provide a lower bound for the smallest nonzero signal $\rho_n$ when $n$ is large. The above interpretations are consistent with (C3-a), which requires that either the block size or the true nonzero signal intensities be large enough so that the nonzero blocks can be separated from zero ones. In other words, if $\rho_n$ is relatively small, it becomes harder to separate nonzero blocks from zero ones. However, it is still possible to distinguish those nonzero blocks if the block size is large enough, since more observations can be used to estimate $\nu_j^0$ within the $j$th block. (C3-b) provides an upper bound on the number of nonzero blocks. It is worthwhile to point out that even though these conditions seem complicated, some of them can be redundant. For instance, (C2) and (C5) can be used to derive a smaller upper bound than the one in (C4). Thus, if both (C2) and (C5) are satisfied, (C4) can be redundant. One can see that (C3-a) can also be redundant if (C5) and (C1-a) hold.
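Similarly, the finite-$n$ parts of (C1)--(C5) can be screened numerically; a hedged sketch with hypothetical parameter values (the asymptotic parts (C1-a) and (C3-a) are not checkable at a single $n$):

```python
import math

def check_C_conditions(lam1, lam2, b_min, rho_n, J0, K0, f0, delta=0.1):
    """Finite-n parts of (C1)-(C5), which separate nonzero blocks from
    zero ones; f0 = f(0) is the error density at zero."""
    C1b = lam1 * math.sqrt(b_min / math.log(J0 - K0)) > 4 * math.sqrt(2) * (1 + delta)
    C2 = lam2 / b_min < lam1 / 8
    C3b = rho_n * math.sqrt(b_min / math.log(K0)) > 2 * math.sqrt(2) * (1 + delta) / f0
    C4 = lam2 / b_min < f0 * rho_n / 3
    C5 = lam1 < f0 * rho_n / 2
    return all([C1b, C2, C3b, C4, C5])

# a hypothetical parameter set: Cauchy errors, large blocks, strong signals
print(check_C_conditions(lam1=0.5, lam2=30.0, b_min=500, rho_n=5.0,
                         J0=10, K0=4, f0=1 / math.pi))  # True
```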
\begin{theorem}\label{theorem-consistency-flsa} Under (A1), (B1-B3) and (C1-C5), a LAD-FLSA solution is block sign consistent. \end{theorem} Theorem \ref{theorem-consistency-flsa} tells us that the LAD-FLSA can first recover the block patterns of hidden signals by detecting all the true jumps, and then sort out the nonzero blocks. Furthermore, with a very large probability, those nonzero blocks are identified correctly to have either positive or negative signals. Thus, the LAD-FLSA is justified to be a promising approach for signal processing when the true hidden signal vector is both blocky and sparse and the observed data are contaminated by outliers. The proof of Theorem \ref{theorem-consistency-flsa} is provided in the Appendix. {\it Remark 1: The block assumption of the true model in \eqref{truemodel} is crucial in our study. If the model is grouped, but not blocky, the fused lasso might be misleading since the fusion term is used to generate the block-wise solution. Some other techniques such as the group lasso (Yuan and Lin, 2006) or the smooth lasso (Hebiri, 2008) can be more useful to generate the corresponding group sparsity structure.} {\it Remark 2: The relaxation of the Gaussian or sub-Gaussian random error assumption is important since it is very common to see some contaminated data in signal processing, especially when repeated measurements are not available. Some normalization methods such as Loess have been used in preprocessing the real data in order to improve the robustness of the LS-FLSA. However, those techniques may over-smooth the data and then generate some false negatives.} \subsection{Additional remarks on asymptotic properties} We provide two additional comments on the asymptotic results obtained in Sections 3 and 4. \noindent{\it Remark 3: An LAD-FLSA may not reach the estimation consistency and sign consistency simultaneously.} The rate estimation consistency in Theorem \ref{lad-flsa consistency} holds for $\lm_{1n}+2\lm_{2n}=O((\log (n)/n)^{1/2})$.
However, from (B1-b) and (C2), we know that one of the sufficient conditions for the sign consistency in Theorem \ref{theorem-consistency-flsa} requires $\lm_{kn}>O((\log (n))^{1/2})$ for $k=1, 2$. So an LAD-FLSA may not be able to reach both the estimation consistency and the sign consistency simultaneously. However, this claim is not theoretically justified since all conditions assumed in both Theorems \ref{est-con} and \ref{theorem-consistency-flsa} are sufficient. \noindent {\it Remark 4: The weak irrepresentable condition is not necessary for the jump point detection consistency in Theorem \ref{theorem-consistency-fsa}.} To understand Remark 4, we transform the signal approximation model in \eqref{LAD-FSA} into a Lasso representation. Consider a linear regression model \bel{lasso general} y_i=\sum_{j=1}^p x_{ij} \beta_j +\veps_i, \quad 1\le i\le n, \eel where $(y_i, x_{i1},\cdots,x_{ip})$ and $\bbeta=(\beta_1,\cdots,\beta_p)' $ represent the observed data and the coefficient vector. A Lasso solution (Tibshirani, 1996) of $\bbeta$ is $$ \hbbeta (\lm)=\argmin\left\{(1/2)\sum_{i=1}^n (y_i-\sum_{j=1}^p x_{ij}\beta_j)^2 +\lm \sum_{j=1}^p|\beta_j|\right\}. $$ If we further divide the coefficient vector as $\bbeta=(\bbeta_{\bone}', \bbeta_{\btwo}')'$, where $\bbeta_{\bone}$ includes the nonzero coefficients and $\bbeta_{\btwo}$ includes zeros only, and correspondingly write $\bX=(\bX_{\bone},\bX_{\btwo})$ and let $\bs_{\bone}=\sgn(\bbeta_{\bone})$ consist of the signs of the nonzero coefficients in the true model, then the {\it weak irrepresentable condition} on the design matrix $\bX$ means \bel{wic} |\bX'_{\btwo}\bX_{\bone}(\bX'_{\bone}\bX_{\bone})^{-1}\bs_{\bone}|<\bone, \eel where $\bone$ is a vector with all elements equal to 1.
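For a given design and true coefficient vector, the weak irrepresentable condition \eqref{wic} is straightforward to check numerically. Below is a minimal sketch (the helper name and the two toy designs are our own, not from the paper):

```python
import numpy as np

def weak_irrepresentable(X, beta_true):
    """Check |X2' X1 (X1' X1)^{-1} s1| < 1 elementwise, where the split of X
    into (X1, X2) is driven by the zero pattern of the true coefficients."""
    nz = beta_true != 0
    X1, X2 = X[:, nz], X[:, ~nz]
    s1 = np.sign(beta_true[nz])
    lhs = np.abs(X2.T @ X1 @ np.linalg.solve(X1.T @ X1, s1))
    return bool(np.all(lhs < 1)), lhs

# Orthogonal design: X2'X1 = 0, so the condition holds trivially.
ok, _ = weak_irrepresentable(np.eye(4), np.array([1.0, -1.0, 0.0, 0.0]))

# A null column perfectly aligned with (twice) a signal column violates it.
X_bad = np.array([[1.0, 2.0], [0.0, 0.0]])
bad, _ = weak_irrepresentable(X_bad, np.array([1.0, 0.0]))
```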
Naturally, we can write the LAD-FSA in \eqref{LAD-FSA} as a Lasso solution of $\bnu=(\nu_1,\cdots,\nu_n)'$, \bel{lasso model} \hbnu_n^{\rm F}(\lm_{2n})=\argmin\left\{\|\by-\bZ\bnu\|_2 +\lambda_{2n}\sum_{i=2}^n|\nu_i|\right\}, \eel where $\nu_1=\mu_1$, $\nu_i=\mu_i-\mu_{i-1}$ for $2\le i\le n$ and $\bZ$ is the lower triangular design matrix with nonzero entries equal to 1. Zhao and Yu (2006) proved that the weak irrepresentable condition is a necessary condition for a Lasso solution of \eqref{lasso general} to be sign consistent under two regularity conditions. We list the result in the following Lemma \ref{lem:wic}. \begin{lemma}\label{lem:wic} (Zhao and Yu, 2006) Suppose two regularity conditions are satisfied for the design matrix $\bX$: (1) there exists a positive definite matrix $C$ such that the covariance matrix $\bX'\bX/n\to C$ as $n\to \infty$, and (2) $\max_{1\le i\le n}\bx_i'\bx_i/n \to 0$ as $n\to \infty.$ Then the Lasso is general sign consistent, $\lim_{n\to \infty} P(\exists \lm\ge 0, \sgn(\hbbeta(\lm))=\sgn(\bbeta_{\bzero}))=1$, only if there exists $N$ such that $\bX$ satisfies the weak irrepresentable condition for $n > N$. Here $\bbeta_{\bzero}$ is the true coefficient vector. \end{lemma} Unfortunately, it is easy to verify that the design matrix $\bZ$ in \eqref{lasso model} does not satisfy the weak irrepresentable condition. For example, if we consider a signal approximation data set with only five observations where $\mu_1\neq\mu_2\neq\mu_3=\mu_4=\mu_5$, then the first row vector of $\bZ'_{\btwo}\bZ_{\bone}(\bZ'_{\bone}\bZ_{\bone})^{-1}$ is $(0,0,1)'$. Thus \eqref{wic} is violated. However, there is no contradiction between the sign consistency result in Theorem \ref{theorem-consistency-fsa} and Lemma \ref{lem:wic} since both regularity conditions in Lemma \ref{lem:wic} are violated for the design matrix $\bZ$. Suppose $\rho_1\le \cdots \le \rho_n$ are the eigenvalues of $\bZ'\bZ/n$.
Then we know that (a) $\rho_1<1/(3n)\to 0$ and $\rho_n>4n^{1/2}\to \infty$ when $n\to \infty$, and in addition, (b) $\max_{1\le i\le n} \bz_i'\bz_i/n =1$. \section{ Degrees of freedom of LAD-FLSA}\label{GDF} It is crucial to seek appropriate $\lmone$ and $\lmtwo$ in (\ref{lad-flsa model}). A large $\lmone$ will force all coefficients to zero, while a large $\lmtwo$ will force all jumps to zero. The conditions on $\lmone$ and $\lmtwo$ in Sections \ref{est-con} and \ref{sign con} provide some guidance in choosing the rates of the two tuning parameters to obtain a well-behaved LAD-FLSA estimate. This section helps us to choose two optimal tuning parameters from the model selection point of view. For given $\lm_1$ and $\lm_2$, a LAD-FLSA approach is a modeling procedure including both model selection and model fitting. The complexity of a modeling procedure is defined as the generalized degrees of freedom (df) and measured by the sum of the sensitivities of the predicted values. See Ye (1998) and Gao and Fang (2011) for discussions of the df for a modeling procedure under the $\ell_2$ and $\ell_1$ loss functions, respectively. For $1\le i\le n$, let $\hmu_i({\by;\lm_1, \lm_2})$ be a LAD-FLSA fitted value of $y_i$ for any given $\lm_1$ and $\lm_2$. The degrees of freedom of a LAD-FLSA approach are defined as \bel{Def:gdf} {\rm df}(\lm_1,\lm_2) =\sum_{i=1}^n \partial \rE[\hmu_i({\by;\lm_1, \lm_2})]/\partial y_i. \eel In Theorem \ref{thm-unbiased-fl}, we provide an unbiased estimator of df$(\lm_1,\lm_2)$ in \eqref{Def:gdf} for a LAD-FLSA modeling procedure. \begin{theorem}\label{thm-unbiased-fl} Consider a LAD-FLSA modeling procedure defined by \eqref{signal approximation model}, \eqref{lad-flsa model} and \eqref{truemodel}. For any fixed positive $\lm_1$ and $\lm_2$, we have \bel{eq-df-gfl} \rE[|\widehat{\cK}(\lm_1,\lm_2)|]={\rm df}(\lambda_1, \lambda_2).
\eel \end{theorem} Theorem \ref{thm-unbiased-fl} indicates that the number of nonzero blocks, $|\widehat{\cK}(\lm_1,\lm_2)|$, is an unbiased estimator of the degrees of freedom of a LAD-FLSA modeling procedure with any given $\lm_1$ and $\lm_2$. The numerical demonstration and the theoretical proof are provided in Section \ref{sec-sim-gdf} and the Appendix, respectively. In fact, such an unbiased estimator in \eqref{eq-df-gfl} can also be observed from Theorem 2 of Li and Zhu (2008). For example, if $\lm_1=0$, for any $\lm_2>0$, the LAD-FLSA reduces to a LAD-LASSO solution of $\bw$ ($w_i=\mu_i-\mu_{i-1}$) for $2\le i\le n$. Then $ \sum_{i=1}^n\partial \widehat y_i(0,\lm_2)/\partial y_i=|\widehat \cJ(0,\lm_2)|. $ Once $\lm_2>0$ is fixed, the block partition is determined. Then for $\lm_1>0$, the LAD-FLSA becomes a LASSO model of $\bnu$ ($\bnu$ is the block intensity vector). Therefore, $ \sum_{i=1}^n \partial \widehat y_i(\lm_1,\lm_2)/\partial y_i=|\widehat \cK(\lm_1,\lm_2)|. $ The results in Theorem \ref{thm-unbiased-fl} can be used to choose two optimal tuning parameters from the model selection point of view. Let the $y_i^0$'s denote new observations generated from the same mechanism that generates the $y_i$'s. The prediction error of $\hbmu(\by;\lambda_1, \lambda_2)$ is defined as \bel{loss} \rE_0\{\sum_{i=1}^n |\hmu_i(\lambda_1, \lambda_2)-y_i^{0}|\}, \eel where $\rE_0$ is taken over the $y_i^0$'s. From Theorem \ref{thm-unbiased-fl}, we can estimate the prediction error \eqref{loss} by $$ \sum_{i=1}^n |y_i-\hmu_i(\lambda_1, \lambda_2)| +|\widehat{\cK}(\lm_1,\lm_2)|. $$ Thus some existing model selection criteria can be modified to choose an optimal combination of tuning parameters.
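The plug-in prediction-error estimate above only needs the fitted values and the nonzero-block count. A minimal sketch (the function name is ours; the inputs are assumed to come from a fitted LAD-FLSA model):

```python
import numpy as np

def prediction_error_estimate(y, mu_hat, n_nonzero_blocks):
    """sum_i |y_i - mu_hat_i| + |K_hat(lam1, lam2)|: absolute loss plus the
    unbiased df estimate from Theorem thm-unbiased-fl."""
    y, mu_hat = np.asarray(y, float), np.asarray(mu_hat, float)
    return float(np.sum(np.abs(y - mu_hat)) + n_nonzero_blocks)

# A perfect fit with two estimated nonzero blocks is still charged 2 df.
err = prediction_error_estimate([1.0, 2.0, 2.0], [1.0, 2.0, 2.0], 2)
```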
For instance, we can extend the AICR (Ronchetti, 1985), BIC (Schwarz, 1978) and GCV (Wahba, 1990) to the LAD-FLSA as follows, \bel{eq:aicr} \begin{array}{ll} &{\rm AICR:}\quad \sum_{i=1}^n|y_i -\hmu_i(\lm_1,\lm_2)| +|\widehat{\cK}(\lm_1,\lm_2)|,\\ &{\rm BIC:}\quad \sum_{i=1}^n|y_i -\hmu_i(\lm_1,\lm_2)| +|\widehat{\cK}(\lm_1,\lm_2)|\log(n)/2,\\ &{\rm GCV:} \quad\sum_{i=1}^n|y_i -\hmu_i(\lm_1,\lm_2)|/[1-|\widehat{\cK}(\lm_1,\lm_2)|/n]. \end{array} \eel \section{Numerical studies} In this section, we first use some simulation studies and a real data analysis to demonstrate the performance of the LAD-FLSA approach in recovering the true hidden signals. Then we verify Theorem \ref{thm-unbiased-fl} numerically using a sample copy number data set. \subsection{Recovery of hidden signals}\label{sec-sim-con} We illustrate the performance of the LAD-FLSA by modifying the block example studied in both Donoho and Johnstone (1995) and Harchaoui and L\'{e}vy-Leduc (2010), where the signal vector is only blocky but not sparse. We choose ${\bf t}=(.1, .23, .65, .76, .9)'$ and ${\bf h}=(1.5, -3, 4.3, -3.1, -2)'$ and round $\sum_j h_j(1 + \sgn (i/n-t_j))/2$ to the nearest integers to get $\mu_i^0$ for $1\le i\le n$. Then the generated true hidden signal vector, $$\bmu^0=(\bzero_{p_1}', \btwo_{p_2}', -\btwo_{p_3}', {\bf 3}_{p_4}', \bzero_{p_5}', -\btwo'_{n-q})',$$ is blocky and sparse with four nonzero blocks and two zero ones. Here $q=p_1+\cdots+ p_5$. The observed data are generated from model \eqref{signal approximation model} by simulating the $\veps_i$'s from 1) the normal distribution with mean $0$ and standard deviation $\sigma$, 2) the double exponential distribution with center $0$ and standard deviation $\sigma$, and 3) the standard Cauchy distribution with a multiplier $0.1\sigma$. Similar to Harchaoui and L\'{e}vy-Leduc (2010), we consider weak, mild and strong noises by setting $\sigma=0.1$, $0.5$ and $1$ in all three types of distributions.
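The block signal described above can be generated directly from ${\bf t}$ and ${\bf h}$. A minimal sketch of the stated recipe (our own code; boundary indices with $i/n=t_j$ are not exercised):

```python
import numpy as np

def block_signal(n, t=(.1, .23, .65, .76, .9), h=(1.5, -3, 4.3, -3.1, -2)):
    """mu_i^0 = round( sum_j h_j * (1 + sgn(i/n - t_j)) / 2 ), i = 1..n."""
    i = np.arange(1, n + 1)
    mu = sum(hj * (1 + np.sign(i / n - tj)) / 2 for hj, tj in zip(h, t))
    return np.round(mu)  # piecewise constant: 0, 2, -2, 3, 0, -2

mu0 = block_signal(1000)
```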
In Figure \ref{fig:signaldata}, we plot a sample data set generated from 2), where the observed data, the true hidden signals and the LAD-FLSA estimates are plotted in gray, black and red, respectively. The data are standardized as $y_i/s(\by)$ and analyzed using the LAD-FLSA approach in \eqref{lad-flsa model}, where $s(\by)$ is the standard deviation of $(y_1,\cdots,y_n)'$. We choose ``optimal'' $\lm_1$ and $\lm_2$ by minimizing the BIC in \eqref{eq:aicr} for $0<\lm_1<0.5$ with increments of $0.01$ and $(n/\log(n))^{1/2}<\lm_{2n}<n^{1/2}$ with increments of $0.1$, respectively. To demonstrate the robustness properties of the LAD-FLSA approach, we also report the simulation results from the LS-FLSA approach in \eqref{tw-ls flsa model}. For each model, we illustrate the variable selection effect using CFR+6, the ratio of either recovering $\bmu^0$ correctly or over-fitting the model by including six additional noise points over $1000$ replicates. We choose ``six'' here since we have six blocks in the true model. We also report JUMP, the average number (with standard deviation) of jumps over $1000$ replicates. A jump is counted only if the adjacent difference is at least $0.1$. We demonstrate the estimation effects by computing the least absolute relative error (LARE) as follows: \bel{eq:lare} {\rm LARE}(\hbmu_n,\bmu^0)=\dfrac{\sum_{i=1}^n|\hmu_i-\mu_i^0|}{\sum_{i=1}^n|\mu_i^0|}. \eel The simulation results for sample sizes $n=1000$ and $5000$ are reported in Table \ref{sim results}, where we can see that the LAD-FLSA approach has a much better performance than the LS-FLSA for both strong and mild signal noises. When the signal noises are weak, the LAD-FLSA still has some advantages over the LS-FLSA, especially when the data are contaminated by Cauchy distributed noises.
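The LARE criterion in \eqref{eq:lare} is a one-line computation; a minimal sketch (our own helper name):

```python
import numpy as np

def lare(mu_hat, mu0):
    """Least absolute relative error between an estimate and the true signal."""
    mu_hat, mu0 = np.asarray(mu_hat, float), np.asarray(mu0, float)
    return float(np.sum(np.abs(mu_hat - mu0)) / np.sum(np.abs(mu0)))
```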
For example, for the Cauchy error with $\sigma=0.1$, the LAD-FLSA recovers the true signal vector exactly at a ratio of 87\% for $n=1000$ and 92\% for $n=5000$, while the LS-FLSA only recovers the true model exactly 49\% and 78\% of the time. \subsection{BAC array}\label{sec-bac} In Section 1 we introduced a sample BAC CGH data set, where the observation at each entry for cell line GM 13330 is the log $2$ fluorescence ratio from all $23$ chromosomes resulting from the BAC experiment, sorted in the order of the clone locations on the genome. The purpose of the study is to detect the locations where there are significant deletions or amplifications. As a demonstration of the effect of the LAD-FLSA applied to copy number analysis, we only analyze the data from chromosomes 1--4 with $129$, $67$, $83$ and $167$ markers, respectively. Since the log $2$ ratios at many markers are observed to be around $0$ and the data may also have some spatial dependence properties, it is reasonable to assume the true hidden signals to be both sparse and blocky. We analyze each chromosome independently by using both the LAD-FLSA in \eqref{lad-flsa model} and the LS-FLSA in \eqref{tw-ls flsa model}. The tuning parameters are chosen in the same way as in Section \ref{sec-sim-con}. The final estimates from both methods on all $4$ chromosomes are plotted together in Figure 1. The LAD-FLSA estimates (top panel) provide four blocks with one amplification region in chromosome 1 and one deletion region in chromosome 4. Besides the two variation regions detected by the LAD-FLSA, the LS-FLSA estimates (bottom panel) also show an amplification at a single point in chromosome 2, which is not confirmed by the spectral karyotyping in Snijders et al. (2001). \subsection{Effect of the unbiased estimator of the degrees of freedom}\label{sec-sim-gdf} We now conduct some simulations based upon the sample BAC array studied in Section \ref{sec-bac} to examine Theorem \ref{thm-unbiased-fl} numerically.
To illustrate the effect of the unbiased estimator of the degrees of freedom, we only take chromosome $1$ with $129$ locations as an example. The sample data can be seen in Figure 1. We generate $500$ Monte Carlo simulations based on the same hypothetical model $$y_i^0=y_i+\veps^0_i, ~~i=1, \cdots, 129,$$ where the $y_i$'s are the observations at the $129$ locations and the $\veps^0_i$'s are independent normal with mean $0$ and standard deviation $0.1\sigma^*$, where $\sigma^*$ is the standard deviation of $\by$. For each combination $(\lm_1, \lm_2)$ with $0<\lm_1,\lm_2\le 1$, we record $\widehat{\mbox{df}}(\lm_1,\lm_2)$ from $|\widehat{\cK}(\lm_1,\lm_2)|$, and compute the true $\mbox{df}(\lm_1,\lm_2)$ defined in \eqref{Def:gdf} using the Monte Carlo simulation from Algorithm 1 in Ye (1998). In Figure 2, we plot $\widehat{\mbox{df}}(\lm_1,\lm_2)$ of the LAD-FLSA estimate for every combination of $0<\lm_1,\lm_2\le 1$ with increments of $0.05$. The averages of df$(\lm_1,\lm_2)$ and $|\widehat \cK(\lm_1,\lm_2)|$ over the $500$ repetitions are reported in Figure \ref{fig:CGHdf}. Those simulation results show that the number of estimated nonzero blocks $|\widehat \cK(\lm_1,\lm_2)|$ is numerically a promising estimate of df$(\lm_1,\lm_2)$, especially when the number of estimated nonzero blocks does not deviate seriously from the true one. \section{Concluding remarks} In this paper, we study the asymptotic properties of the LAD signal-approximation approach using the fused lasso penalty. By assuming the true model to be both blocky and sparse, we investigate both the estimation consistency and the sign consistency of the LAD-FLSA estimator. In terms of estimation consistency, the consistency rate is optimal up to a logarithmic factor if the dimension of any linear space to which the true model and its estimates belong is bounded from above.
In terms of sign consistency, we justify that a LAD-FLSA approach can not only recover the true block pattern but also distinguish the nonzero blocks from the zero ones correctly with high probability under reasonable conditions. In fact, those jump selection and block selection consistency results can be made stronger by matching the corresponding signs correctly with a large probability. Thus, by choosing the two tuning parameters $\lm_1$ and $\lm_2$ properly, we can reach a well-behaved LAD-FLSA estimate to recover the true hidden signal vector under random noise. The consistency results in this paper extend the theoretical properties of the LS-FSA in Harchaoui and L\'{e}vy-Leduc (2010) and the LS-FLSA in Rinaldo (2009) to the LAD signal approximation, which broadens the study of signal approximation using linear regression when the random error does not follow a Gaussian distribution. Furthermore, we demonstrate that the number of estimated nonzero blocks is an unbiased estimator of the degrees of freedom of the LAD-FLSA. Thus, the existing model selection criteria can be extended to the LAD-FLSA for choosing the tuning parameters. As in many recent studies, our results are proved for penalty parameters that satisfy the conditions stated in the theorems. It is not clear whether the penalty parameters selected using data-driven procedures satisfy those conditions. However, our numerical study shows a satisfactory finite-sample performance of the LAD-FLSA. In particular, we note that the tuning parameters selected based on the BIC seem sufficient for our simulated data. This is an important and challenging problem that requires further investigation, but is beyond the scope of the current paper. Also, a basic assumption required in our results is that the random error terms $\varepsilon_i$ in (\ref{signal approximation model}) are independent.
Since the observations $y_1, \ldots, y_n$ are in a natural order in this model, for example, copy number variation data based on genetic markers are ordered according to their chromosomal locations, it would be interesting to study the behavior of the LAD-FLSA allowing for certain dependence structures in the error terms. \section *{Appendix} \noindent{\bf Proof of Lemma \ref{lemma dmax}} \noindent Let $\widetilde w_i=\widetilde w_i(\lmone,\lmtwo)=\tmu_i-\tmu_{i-1}$ and $\widehat w_i=\widehat w_i(\lmone,\lmtwo)=\hmu_i-\hmu_{i-1}$ be the LS-FLSA and LAD-FLSA estimates of the $i$th jump in \eqref{ls-flsa model} and \eqref{lad-flsa model}, respectively. Using the Karush--Kuhn--Tucker (KKT) conditions of \eqref{ls-flsa model}, we get \bes -2\sqrt{f(0)} (z_i-\sqrt{f(0)} \sum_{j=1}^i\widetilde w_j)+\lmone \sum_{k=i}^n \sgn(\sum_{j=1}^k \widetilde w_j)=-\lmtwo \sgn(\widetilde w_i) &{\rm if}~\widetilde w_i\neq 0. \ees Then \bes \lmtwo^2\widetilde {\cJ}\le 8f(0)\sum_{i=1}^n(z_i-\sqrt{f(0)}\sum_{j=1}^i\widetilde w_j)^2+2\lmone^2 n^2 \widetilde {\cJ}. \ees Thus \bel{cj1}|\widetilde {\cJ}|\le 8f(0)(\lmtwo^2-2 n^2\lmone^2)^{-1}\sum_{i=1}^n z_i^2\le 16f(0)n/(\lmtwo^2-2 n^2\lmone^2).\eel Using the KKT equations of \eqref{lad-flsa model}, we have \bes -\sgn (y_i-\sum_{j=1}^i\widehat w_j)+\lmone \sum_{k=i}^n \sgn(\sum_{j=1}^k \widehat w_j)=-\lmtwo \sgn(\widehat w_i) &{\rm if}~\widehat w_i\neq 0. \ees Then \bel{cj2} |\widehat {\cJ}|\le n/(\lmtwo-\lmone n). \eel Finally, combining \eqref{cj1}, \eqref{cj2} and (A2), (iii) holds, which completes the proof of Lemma \ref{lemma dmax}. $\Box$ \noindent{\bf Proof of Lemma \ref{ls-flsa consistency} } \noindent From the definition of $\tbmu_n$ in \eqref{ls-flsa model}, we have \bel{eq-con-1} \begin{array}{ll} \sum_{i=1}^n(z_i-\sqrt{f(0)}\tmu_i)^2&\le \sum_{i=1}^n(z_i-\sqrt{f(0)}\mu_i^0)^2 \\ &+\lmone \sum_{i=1}^n[|\mu_i^0|-|\tmu_i|] +\lmtwo\sum_{i=2}^n[|\mu_i^0-\mu_{i-1}^0|-|\tmu_i-\tmu_{i-1}|].
\end{array} \eel From the triangle inequality, \eqref{eq-con-1} becomes \bel{eq-con-2} \begin{array}{ll} &f(0)\sum_{i=1}^n (\tmu_i-\mu^0_i)^2 \\ &\quad\le 2\sqrt{f(0)} \sum_{i=1}^n \eta_i(\tmu_i-\mu_i^0)+ \lmone\sum_{i=1}^n|\tmu_i-\mu_i^0|+\lmtwo\sum_{i=2}^n[|\mu_i^0-\mu_{i-1}^0|-|\tmu_i-\tmu_{i-1}|]\\ &\quad\le 2\sqrt{f(0)} \sum_{i=1}^n \eta_i(\tmu_i-\mu_i^0)+ (\lmone+2\lmtwo)\sum_{i=1}^n|\tmu_i-\mu_i^0|. \end{array} \eel The rest of the proof is similar to the proof of Proposition 2 in Harchaoui and L\'{e}vy-Leduc (2010). For $\bmu\in \cR^n$, we define $$G(\bmu)=2\sqrt{f(0)}\sum_{i=1}^n\eta_i(\mu_i-\mu_i^0)/\|\bmu-\bmu^0\|_2.$$ Thus \eqref{eq-con-2} becomes $$ f(0)\sum_{i=1}^n(\tmu_i-\mu_i^0)^2 \le (\lmone+2\lmtwo) \sqrt{n}\|\tbmu_{n}-\bmu^0\|_2 +G(\tbmu_{n})\|\tbmu_{n}-\bmu^0\|_2. $$ Then, \bel{eq-con-3} \sqrt{f(0)}\|\tbmu_{n}-\bmu^0\|_2\le (\lmone+2\lmtwo) \sqrt{n}+ G(\tbmu_{n}). \eel Let $\{S_K\}$ be the collection of all $K$-dimensional linear spaces to which $\tbmu_n$ may belong. From Lemma \ref{lemma dmax}, $1\le K\le \Lambda_n$. From \eqref{eq-con-3}, for any $\delta_n>0$, \bel{eq-con-4} \begin{array}{ll} \bP(\|\tbmu_n-\bmu^0\|_2\ge \delta_n) &\le \bP(G(\tbmu_{n})\ge \sqrt{f(0)}\delta_n-(\lmone+2\lmtwo) \sqrt{n} )\\ & \le \sum_{K=1}^{\Lambda_n} n^K \bP\left(\sup_{\bmu\in S_K} G(\bmu)\ge \sqrt{f(0)}\delta_n-(\lmone+2\lmtwo) \sqrt{n} \right). \end{array} \eel Notice that $\bE(G(\bmu))=0$ and $\Var(G(\bmu))=1$. As a consequence of Cirel'son, Ibragimov and Sudakov's (1976) inequality, \bel{eq-con-5} \bP\{\sup_{\bmu \in S_K} G(\bmu)\ge \bE\left[\sup_{\bmu \in S_K} G(\bmu)\right]+z\} \le \exp\{-z^2/2\} {\rm ~for~ any~} z>0. \eel Consider the collection $\{S_K\}$. Let $\Omega$ be the $D$-dimensional space to which $\bmu-\bmu^0$ belongs and $\bpsi_1,\cdots,\bpsi_D$ be an orthonormal basis of it.
\bel{eq-con-6} \begin{array}{ll} \sup_{\bmu \in S_K} G(\bmu)&\le \sup_{\omega \in \Omega}\dfrac{2\sqrt{f(0)}\sum_{i=1}^n \eta_i \omega_i}{\sqrt{n}\|\omega\|_n}\\ &=\sup_{\bf a \in \cR^D}\dfrac{2\sqrt{f(0)}\sum_{i=1}^n \eta_i (\sum_{j=1}^D a_j\psi_{j,i})} {\sqrt{n}\|\sum_{j=1}^D a_j {\bpsi_j}\|_n}\\ &=\sup_{\bf a \in \cR^D}\dfrac{2\sqrt{f(0)}\sum_{j=1}^D a_j (\sum_{i=1}^n \eta_i\psi_{j,i})} {(\sum_{j=1}^D a_j^2)^{1/2}}\\ &\le 2\sqrt{f(0)}\left(\sum_{j=1}^D (\sum_{i=1}^n \eta_i\psi_{j,i})^2 \right)^{1/2}, \end{array} \eel where the last ``$\le$'' is obtained using the Cauchy-Schwarz inequality. From (A2) and (i) in Lemma \ref{lemma dmax}, there exists $M_1>0$ such that $D< (M_1+1)\Lambda_n$. Then by taking expectations on both sides of \eqref{eq-con-6}, we have \bel{eq-con-7} \begin{array}{ll} \bE[\sup_{\bmu \in S_K} G(\bmu)] &\le 2\sqrt{f(0)} \bE\left [\left (\sum_{j=1}^D \left (\sum_{i=1}^n \eta_i\psi_{j,i}\right )^2 \right )^{1/2}\right ] \\ & \le 2\sqrt{f(0)} \left ( \sum_{j=1}^D \bE \left[ \left(\sum_{i=1}^n \eta_i\psi_{j,i}\right )^2 \right] \right )^{1/2}\\ &\le \sqrt{D} \le ((M_1+1)\Lambda_n)^{1/2}. \end{array} \eel Combining \eqref{eq-con-5} and \eqref{eq-con-7}, we get \bel{eq-con-8} \bP\left(\sup_{\bmu \in S_K} G(\bmu)\ge ((M_1+1)\Lambda_n)^{1/2}+z\right) \le \exp\{-z^2/2\}. \eel Let $0<c<1$ be such that $$ c\sqrt{f(0)}\delta_n=(\lmone+2\lmtwo)\sqrt{n}+((M_1+1)\Lambda_n)^{1/2}. $$ Then we can choose a positive $z=\sqrt{f(0)}\delta_n-(\lmone+2\lmtwo)\sqrt{n}-((M_1+1)\Lambda_n)^{1/2}$ in \eqref{eq-con-8}. Combining \eqref{eq-con-4} and \eqref{eq-con-8}, \bel{eq-con-9} \begin{array}{ll} \bP(\|\tbmu_{n}-\bmu^0\|_2\ge \delta_n) &\le \Lambda_n\exp\{\Lambda_n \log n -(1/2)[\sqrt{f(0)}\delta_n-(\lmone+2\lmtwo)\sqrt{n}-((M_1+1)\Lambda_n)^{1/2}]^2\}\\ &\le \Lambda_n\exp\{\Lambda_n \log n -(1/2)(1-c)^2f(0)\delta_n^2\}.
\end{array} \eel For $\alpha_n=\delta_n/\sqrt{n}$, we have $$ \bP(\|\tbmu_n-\bmu^0\|_n \ge \alpha_n) \le \Lambda_n \exp\{\Lambda_n \log n -(1/2)(1-c)^2f(0) n\alpha_n^2\}. $$ Thus the first part of Lemma~\ref{ls-flsa consistency} holds. Furthermore, if we also have $\alpha_n=\{2M_2 \Lambda_n (\log n)/ n\}^{1/2}$, then $$ \bP(\|\tbmu_n-\bmu^0\|_n \ge \sqrt {2M_2 \Lambda_n (\log n)/ n}) \le \Lambda_n n^{\{1-M_2f(0)(1-c)^2\}\Lambda_n}, $$ which completes the proof. $\Box$ \noindent {\bf Proof of Theorem \ref{lad-flsa consistency}} \noindent Define $$ L_n(\bmu)=n^{-1}\left[\sum_{i=1}^n|y_i-\mu_i|-\sum_{i=1}^n|y_i-\mu^0_i| +\lmone\sum_{i=1}^n|\mu_i|+\lmtwo\sum_{i=2}^n|\mu_i-\mu_{i-1}|\right] $$ and $$ M_n(\bmu)=n^{-1}\left[f(0)\sum_{i=1}^n(\mu_i-\mu_i^0)^2-\sum_{i=1}^n\sgn(\veps_i)(\mu_i-\mu_i^0) +\lmone\sum_{i=1}^n|\mu_i|+\lmtwo\sum_{i=2}^n|\mu_i-\mu_{i-1}|\right]. $$ Then $\hbmu_n=\argmin\{L_n(\bmu)\}$ and $\tbmu_n=\argmin\{M_n(\bmu)\}$. Define $R_{ni}=R_{ni}(\mu_i,\veps_i)=|\veps_i-(\mu_i-\mu_i^0)|-|\veps_i|+\sgn(\veps_i)(\mu_i-\mu_i^0)$ and $\xi_{ni}=R_{ni}-\bE[R_{ni}]$. Following Gao and Huang (2010b), we can verify \bel{eq-con-lad-1} |L_n(\bmu)-M_n(\bmu)|=\allsum\xi_{ni}/n+\tau_n (\bmu,\bmu^0), \eel where $\tau_n (\bmu,\bmu^0)=o(\|\bmu-\bmu^0\|_n^2)$. For any $\delta>0$, we define $S_{\delta}=\{\bmu:\|\bmu-\tbmu_n\|_2 \le \delta\}$, $S^d_{\delta}=\{\bmu:\|\bmu-\tbmu_n\|_2 = \delta\}$ and $S^0_{\delta}=\{\bmu:\|\bmu-\bmu^0\|_2 \le \delta\}$. We define $$ \Delta_n(\delta)=\sup_{\bmu\in S_{\delta}}|L_n(\bmu)-M_n(\bmu)| $$ and $$ h_n(\delta)=\inf_{\bmu\in S^d_{\delta}}(M_n(\bmu)-M_n(\tbmu_n)). $$ We have \bel{eq-con-lad-2} \begin{array}{ll} M_n(\bmu)-M_n(\tbmu_n)&=f(0)\|\bmu-\tbmu_n\|_n^2+2n^{-1}f(0)\allsum(\mu_i-\tmu_i)(\tmu_i-\mu_i^0) +n^{-1}\allsum\sgn(\veps_i)(\tmu_i-\mu_i)\\ &+n^{-1}\lmone\allsum[|\mu_i|-|\tmu_i|]+n^{-1}\lmtwo\sum_{i=2}^n[|\mu_i-\mu_{i-1}|-|\tmu_i-\tmu_{i-1}|]. 
\end{array} \eel Since $\partial M_n(\bmu)/\partial \mu_i\vert_{\mu_i=\tmu_i}=0$, $$ 2n^{-1}f(0)(\tmu_i-\mu_i^0)-n^{-1}\sgn(\veps_i)+n^{-1}\lmtwo[\sgn(\tmu_i-\tmu_{i-1})-\sgn(\tmu_{i+1}-\tmu_i)]=0. $$ Multiplying both sides by $(\mu_i-\tmu_i)$ and taking sums, \eqref{eq-con-lad-2} becomes $$ \begin{array}{ll} M_n(\bmu)-M_n(\tbmu_n) &=f(0)\|\bmu-\tbmu_n\|_n^2 +n^{-1}\lmone\allsum[|\tmu_i|-\mu_i\sgn(\tmu_i)-(|\tmu_i|-|\mu_i|)]\\ &\quad +n^{-1}\lmtwo\sum_{i=2}^n[ \sgn(\tmu_{i+1}-\tmu_i)(\mu_i-\tmu_i) + \sgn(\tmu_{i}-\tmu_{i-1})(\tmu_i-\mu_i)]\\ & \quad + n^{-1} \lmtwo\sum_{i=2}^n[|\mu_i-\mu_{i-1}|-|\tmu_i-\tmu_{i-1}|]\\ &>f(0)\|\bmu-\tbmu_n\|_n^2\\ &\quad +n^{-1}\lmtwo\sum_{i=2}^n[ \sgn(\tmu_{i+1}-\tmu_i)(\mu_i-\tmu_i) + \sgn(\tmu_{i}-\tmu_{i-1})(\tmu_i-\mu_i)]\\ & \quad + n^{-1} \lmtwo\sum_{i=2}^n[|\mu_i-\mu_{i-1}|-|\tmu_i-\tmu_{i-1}|]. \end{array} $$ Then for any $\kappa_{n}>0$, we have $$ h_n(\kappa_{n})>f(0)\kappa_{n}^2/n.$$ From the Convex Minimization Theorem in Hjort and Pollard (1993), we have \bel{eq-con-lad-5} \begin{array}{ll} & \bP(\|\hbmu_n-\tbmu_n\|_2 \ge \kappa_{n}) \\ &\quad\le \bP(\Delta_n(\kappa_{n})>h_n(\kappa_{n})/2) \\ &\quad\le \bP\left(\sup_{\bmu\in S_{\kappa_{n}}}n^{-1}|\allsum \xi_{ni}|\ge f(0)\kappa_{n}^2/(2n)\right) +\bP\left(\sup_{\bmu\in S_{\kappa_{n}}}|\tau_n(\bmu,\bmu^0)|\ge f(0)\kappa_{n}^2/(2n) \right). \end{array} \eel Suppose $r_n=o(1)$ and $\tau_n=\tau_n(\tbmu_n,\bmu^0)=r_n\|\tbmu_n-\bmu^0\|_n^2$. Then \bel{eq-con-lad-6} \begin{array}{ll} &\lim_{n\to \infty}\bP\left(\sup_{\bmu\in S_{\kappa_{n}}}|\tau_n(\bmu,\bmu^0)|\ge f(0)\kappa_{n}^2/(2n) \right) \\ &\quad\le \lim_{n\to \infty} \bP\left(\sup_{\bmu\in S_{\kappa_{n}}}2|r_n|\|\tbmu_n-\bmu^0\|_n^2\ge f(0)\kappa_{n}^2/(4n) \right) \\ &\quad\quad+\lim_{n\to \infty}\bP\left(\sup_{\bmu\in S_{\kappa_{n}}}2|r_n|\|\bmu-\tbmu_n\|_n^2\ge f(0)\kappa_{n}^2/(4n) \right) \\ & =\lim_{n\to \infty} \bP\left(\sup_{\bmu\in S_{\kappa_{n}}}|r_n|\|\tbmu_n-\bmu^0\|_n^2\ge f(0)\kappa_{n}^2/(8n) \right). \end{array} \eel Combining \eqref{eq-con-lad-1}, \eqref{eq-con-lad-5} and \eqref{eq-con-lad-6}, \bel{eq-con-lad-3} \begin{array}{ll} \bP(\|\hbmu_n-\tbmu_n\|_2 \ge \kappa_{n}) &\le \bP\left(\sup_{\bmu\in S_{\kappa_{n}}}\sum_{i=1}^n\xi_{ni}/n >f(0)\kappa_{n}^2/(2n) \right)\\ &\quad + \bP\left(\sup_{\bmu\in S_{\kappa_{n}}}|r_n|\|\tbmu_{n}-\bmu^0\|_n^2\ge f(0)\kappa_{n}^2/(8n) \right)+o(1)\\ &\le \bP\left(\sup_{\bmu\in S^0_{\kappa_{n}'}}|\sum_{i=1}^n\xi_{ni}/n| >f(0)\kappa_{n}^2/(2n) \right) + \bP\left(\|\tbmu_{n}-\bmu^0\|_2\ge \kappa_{n}\right), \end{array} \eel where $\kappa_{n}'=2\kappa_{n}$. Let the $u_i$'s be a Rademacher sequence and $\bpsi_1,\cdots,\bpsi_D$ be an orthonormal basis of a $D$-dimensional space to which $\bmu-\bmu^0$ belongs. Using the Contraction Theorem in Ledoux and Talagrand (1991) and also the Cauchy-Schwarz inequality, we have $$ \begin{array}{ll} \bE\left[\sup_{\bmu\in S^0_{\kappa_{n}'}}|\sum_{i=1}^n\xi_{ni}/n|\right] &=(8/n)\bE\left[\sup_{\bmu\in S^0_{\kappa_{n}'}}|\sum_{i=1}^n u_i(\mu_i-\mu_i^0)|\right]\\ &\le(8/n)\bE\left[\sup_{{\bf a}\in \cR^D}\sup_{\bmu\in S^0_{\kappa_{n}'}}|\sum_{j=1}^D a_j\sum_{i=1}^n u_i(\psi_{j,i})|\right]\\ &\le(8/n)\bE\left[\sup_{{\bf a}\in \cR^D}\sup_{\bmu\in S^0_{\kappa_{n}'}}(\sum_{j=1}^D a_j^2)^{1/2} (\sum_{j=1}^D (\sum_{i=1}^n u_i\psi_{j,i})^2)^{1/2}\right]\\ &\le (8/n) \sup_{\bmu\in S^0_{\kappa_{n}'}}\sqrt{D} (\sum_{j=1}^D a_j^2)^{1/2}\\ &= (8/n) \sup_{\bmu\in S^0_{\kappa_{n}'}}\sqrt{D}\|\bmu-\bmu^0\|_2\\ &\le 16\sqrt{\Lambda_n}\kappa_{n}/n. \end{array} $$ \bel{eq-con-lad-4} \begin{array}{ll} \bP\left(\sup_{\bmu\in S^0_{\kappa_{n}'}}|\sum_{i=1}^n\xi_{ni}/n| >f(0)\kappa_{n}^2/(2n) \right) &\le \bE\left[\sup_{\bmu\in S^0_{\kappa_{n}'}}|\sum_{i=1}^n\xi_{ni}/n|\right]/ (f(0)\kappa_{n}^2/(2n) )\\ &\le 32\sqrt{\Lambda_n} / (f(0)\kappa_{n}). \end{array} \eel Let $\gamma_n=\kappa_{n}/\sqrt{n}$.
Combining \eqref{eq-con-lad-3} and \eqref{eq-con-lad-4}, $$ \begin{array}{ll} \bP(\|\hbmu_n-\bmu^0\|_n \ge \gamma_n) & \le \bP(\|\hbmu_n-\tbmu_n\|_2 \ge \kappa_{n}/2) + \bP(\|\tbmu_n-\bmu^0\|_2 \ge \kappa_{n}/2) \\ &\le 32\sqrt{\Lambda_n} / (f(0)\gamma_n \sqrt{n})+ 2\bP\left(\|\tbmu_{n}-\bmu^0\|_n\ge \gamma_n/2\right)\\ &\le 32\sqrt{ \Lambda_n} / (f(0)\gamma_n \sqrt{n})+ 2\Lambda_n\exp\{\Lambda_n\log n-(1-c)^2f(0)n\gamma_n^2/8\}. \end{array} $$ The last ``$\le$'' follows from \eqref{eq-con-lad-4} and Lemma \ref{ls-flsa consistency} by choosing $\gamma_n=2(c\sqrt{f(0)})^{-1}[\lmone+2\lmtwo+((M_1+1)\Lambda_n/n)^{1/2}]$. Thus the first part of Theorem \ref{lad-flsa consistency} holds. Furthermore, if we let $\lmone+2\lmtwo=[2c^2f(0)M_3\Lambda_n(\log n)/n]^{1/2}-[(M_1+1)\Lambda_n/n]^{1/2}$ for $M_3>1/((1-c)^2f(0))$ and $\gamma_n=(8M_3\Lambda_n(\log n)/n)^{1/2}$, then $ \sqrt{\Lambda_n} / (f(0)\gamma_n \sqrt{n})=O(1/\sqrt{\log n})$. Thus, $$ \bP(\|\hbmu_n-\bmu^0\|_n \ge \gamma_n)\le O(1/\sqrt{\log n})+2\Lambda_n\exp\{\Lambda_n\log n(1-M_3(1-c)^2f(0))\}. $$ $\Box$ \noindent{\bf Proof of Corollary \ref{corollary con}} \noindent If we replace the upper bound of the maximal dimension of any linear space to which $\hbmu_n$, $\tbmu_n$ or $\bmu^0$ belongs by $J_{\max}$ in the proof of Theorem \ref{lad-flsa consistency}, we can obtain the consistency result in Corollary \ref{corollary con}. We do not repeat the proof here. $\Box$ \noindent{\bf Proof of Theorem \ref{theorem-consistency-fsa}} \noindent Suppose the vector $\bmu$ has $J$ blocks and $\{\cB_1,\cdots,\cB_J\}$ is the corresponding unique block partition. Let $\nu_j$ be the intensity of the $j$th block for $1\le j\le J$.
From Lemma A.1 in Rinaldo (2009), the subdifferential of the total variation penalty is \bel{subdifferential} \partial \left(\lmtwo \sum_{j=2}^J |\nu_j-\nu_{j-1}|\right)= \left\{ \begin{array}{ll} -\lmtwo \sgn(\nu_{j+1}-\nu_j),& j=1 \\ \lmtwo (\sgn(\nu_{j}-\nu_{j-1})-\sgn(\nu_{j+1}-\nu_j)),& 1<j<J \\ \lmtwo \sgn(\nu_{j}-\nu_{j-1}),& j=J \end{array} \right., \eel where $\sgn(x)=1$, $0$, $-1$ when $x>0$, $x=0$, $x<0$, respectively. We define $c_j^0$ and $\widehat c_j$ as the subdifferentials \eqref{subdifferential} evaluated at $\bnu^0$ and $\hbnu_n$, respectively, scaled by the corresponding block sizes. In other words, we have \bel{c0} c^0_j=\left\{ \begin{array}{ll} -\lmtwo \sgn(\nu^0_{j+1}-\nu^0_j)/b^0_j,& j=1 \\ \lmtwo (\sgn(\nu^0_{j}-\nu^0_{j-1})-\sgn(\nu^0_{j+1}-\nu^0_j))/ b^0_j,& 1<j<J_0 \\ \lmtwo \sgn(\nu^0_{j}-\nu^0_{j-1})/ b^0_j,& j=J_0 \end{array} \right. \eel and \bel{chat} \widehat c_j=\left\{ \begin{array}{ll} -\lmtwo \sgn(\hnu_{j+1}-\hnu_j)/\widehat b_j,& j=1 \\ \lmtwo (\sgn(\hnu_{j}-\hnu_{j-1})-\sgn(\hnu_{j+1}-\hnu_j))/\widehat b_j,& 1<j<\Jhat \\ \lmtwo \sgn(\hnu_{j}-\hnu_{j-1})/\widehat b_j,& j=\Jhat \end{array} \right.. \eel For an estimate $\hbmu_n$, we let $\widehat\cB_{j(i)}$ be the estimated block containing $i$, that is, the $\hmu_i$ are all the same for $i\in \widehat\cB_{j(i)}$. Let $\widehat b_{j(i)}=|\widehat\cB_{j(i)}|$ be the size of $\widehat\cB_{j(i)}$. Then $\cB^0_{j(i)}$ ($b^0_{j(i)}$) is the corresponding true block set (size). From notations (I) and (V) in Section 2, we have $b^0_{j(i)}=|\cB^0_{j(i)}|$ for $1\le j\le J_0$. From the KKT conditions, $\widehat {\bmu}^F$ is a LAD-FSA solution if and only if \bel{kkt} \left\{ \begin{array}{ll} \sum_{k\in \widehat \cB_{j(i)}} \sgn(y_k- \hmu_i)= \widehat b_{j(i)} \widehat c_{j(i)} &\quad{\rm if}~i\in \widehat\cJ \\ |\sum_{k\in \widehat\cB_{j(i)}} \sgn(y_k-\hmu_i)|<2\lmtwo & \quad {\rm if}~i\notin \widehat\cJ \end{array} \right..
\eel Let $\hmu_i$ and $\mu_i^0$ satisfy \begin{equation} \label{hmuj} \left\{ \begin{array}{ll} \hmu_i=\mu_i^0+(2f(0)b_{j(i)}^0)^{-1}\left(\sum_{k\in \cB_{j(i)}^0} \nolimits \sgn(\veps_k)- b^0_{j(i)} c^0_{j(i)}+ \widehat h_i\right) &\quad \forall i\in \cJ^0\\ \hmu_i=\hmu_{i-1} &\quad\forall i\notin \cJ^0. \end{array} \right. \end{equation} Here $\widehat h_i$ is a remainder term satisfying the stochastic equicontinuity property; more specifically, \bel{equicontinuity} |(b_{j(i)}^0)^{-1/2}\widehat h_{i}|=O_p(1), ~\forall {1\le i\le n}. \eel In fact, $\widehat h_i=2f(0)b^0_{j(i)} (\hmu_i-\mu_i^0)+ \sum_{k\in \cB^0_{j(i)}}\bE[r_k^i] +\sum_{ k\in \cB^0_{j(i)}}\varsigma_k^i$ with $r_k^i=\sgn(\veps_k-(\hmu_i-\mu_i^0) )-\sgn(\veps_k)$ and $\varsigma_k^i=r_k^i-\bE_{\bveps}[r_k^i]$ for $k\in \cB^0_{j(i)}$. Define the difference vector $\bw=(w_1,\cdots, w_n)'$ with $w_1=\mu_1$ and $w_i=\mu_i -\mu_{i-1}$ for $2\le i\le n$. If $\sgn(\widehat w_i)=\sgn(w_i^0)$ for all $i\in \cJ^0$, then \eqref{kkt} holds for $\hbmu_n$ in \eqref{hmuj}. Thus, $\hbmu_n$ is a LAD-FSA solution.
Define $$\cR_{\lmtwo}\equiv \{\widehat\cJ=\cJ^0 \}\cap \{\sgn(\widehat w_i^F)=\sgn(w_i^0), ~\forall i\in \cJ^0\}.$$ Then $\cR_{\lmtwo}$ holds if \begin{subequations} \begin{empheq}[left=\empheqlbrace]{align} &\sgn(\widehat w_i)=\sgn(w_i^0) & \forall i\in \cJ^0\label{ktt-1}\\ & \arrowvert\sum_{k\in \widehat\cB_{j(i)}} \nolimits\sgn(y_k-\hmu_i)\arrowvert<2\lmtwo & {\rm if}~i\notin \cJ^0 \label{kkt-2} \end{empheq} \end{subequations} It is easy to verify that $\sgn(\widehat w_i)=\sgn(w_i^0), \forall i\in \cJ^0$ holds if \bel{hmuj3} |\sgn(\widehat w_i)(w_i^0-\widehat w_i)|<|w_i^0|,~{\rm for~}i\in \cJ^0. \eel Plugging $\hbmu_n$ from \eqref{hmuj} into \eqref{hmuj3} and \eqref{kkt-2} and using the triangle inequality, we see that $\cR_{\lmtwo}$ holds if \begin{equation}\label{Rlam2n-1} \begin{array}{l} \max_{i\in \cJ^0}|(b_{j(i)}^0)^{-1}\sum_{k\in \cB^0_{j(i)}} \sgn(\veps_k)- (b_{j(i-1)}^0)^{-1}\sum_{k\in \cB^0_{j(i-1)}} \sgn(\veps_k)|/|w_i^0|+ \\ \quad\max_{i\in \cJ^0} |(b_{j(i)}^0)^{-1}\widehat h_{i}-(b_{j(i-1)}^0)^{-1}\widehat h_{i-1}|/|w_i^0|+ \max_{i\in \cJ^0}|c_{j(i)}^0-c_{j(i-1)}^0|/|w_i^0| \\ <2f(0) \end{array} \end{equation} and \begin{equation}\label{Rlam2n-2} \max_{i\notin \cJ^0} |\sgn(\veps_i)-\sgn(\veps_{i-1})+\widehat h_{i}-\widehat h_{i-1}|<4\lmtwo. \end{equation} We have $$\bE[\sgn(\veps_i)-\sgn(\veps_{i-1})]=0$$ and $$\Var[\sgn(\veps_i)-\sgn(\veps_{i-1})]=2 ~{\rm for ~}2\le i\le n,$$ and for $2\le i_1, i_2 \le n$, $$\Cov(\sgn(\veps_{i_1})-\sgn(\veps_{{i_1}-1}), \sgn(\veps_{i_2})-\sgn(\veps_{{i_2}-1}) )=\left\{\begin{array}{ll} -1, & |{i_1}-{i_2}|=1,\\ 0, & |{i_1}-{i_2}|>1. \end{array}\right.$$ Suppose $d_i^*$ are independent copies of $N(0,2)$. Then we have $$\begin{array}{ll} \bP(I_4)&\equiv \bP(\max_{i\notin \cJ^0} |\sgn(\veps_i)-\sgn(\veps_{i-1})|>2\lmtwo)\\ &\quad\le \bP(\max_{i\notin \cJ^0}|d_i^*|> 2\lmtwo)\\ &\quad\le 2\exp\{-4\lmtwo^2+\log|\cJ_0^c|\} \end{array} $$ where the first ``$\le$'' follows from Slepian's inequality and the second from Chernoff's bound.
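The moment claims above are elementary: for continuous errors the $\sgn(\veps_i)$ are i.i.d.\ $\pm1$ with equal probability, and successive differences share one sign in common. A quick numerical confirmation (ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.sign(rng.standard_normal(200_000))  # iid +/-1 signs of continuous errors
d = s[1:] - s[:-1]                         # sgn(eps_i) - sgn(eps_{i-1})

mean = d.mean()                 # E[d_i] = 0
var = d.var()                   # Var[d_i] = 2
lag1 = np.mean(d[1:] * d[:-1])  # covariance at |i_1 - i_2| = 1, equals -1
lag2 = np.mean(d[2:] * d[:-2])  # covariance at |i_1 - i_2| = 2, equals 0
```

The lag-1 covariance is $-1$ because $d_i$ and $d_{i-1}$ share the single term $\sgn(\veps_{i-1})$, once with each sign; terms two or more apart are independent.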
Then $\bP(I_4)=o(1)$ if the conditions in (B1) hold. Define $$X_i=(2f(0)b_{j(i)}^0)^{-1}\sum_{k\in \cB^0_{j(i)}} \sgn(\veps_k)-(2f(0)b_{j(i-1)}^0)^{-1}\sum_{k\in \cB^0_{j(i-1)}} \sgn(\veps_k), ~\forall i\in \cJ^0.$$ Then $\bE[X_i]=0$ and $\max_{i\in \cJ^0}\Var[X_i]\le (2f(0)b_{j(i)}^0)^{-1}$. Consider independent copies $X_i^*\sim N(0, (2f(0)b_{j(i)}^0)^{-1} ), i\in \cJ^0$. We have $$ \begin{array}{ll} \bP(I_1)&\equiv \bP(\max_{i\in \cJ^0}|(b_{j(i)}^0)^{-1}\sum_{k\in \cB^0_{j(i)}} \sgn(\veps_k)- (b_{j(i-1)}^0)^{-1}\sum_{k\in \cB^0_{j(i-1)}} \sgn(\veps_k)|>2f(0)a_n/3)\\ &\quad \le \bP(\max_{i\in \cJ^0}|X_i^*|>a_n/3)\le 2\exp\{-2b^0_{\min}f^2(0)a_n^2/9+\log|\cJ^0|\}. \end{array} $$ Thus $\bP(I_1)=o(1)$ if the conditions in (B2) hold. Since $\max_{i\in \cJ^0}|c_{j(i)}^0-c_{j(i-1)}^0| \le 2\lmtwo /b_{\min}^0$, from (B3), $$ \bP(I_2)\equiv \bP(\max_{i\in \cJ^0}|c_{j(i)}^0-c_{j(i-1)}^0|> 2f(0) a_n/3)=0. $$ Furthermore, we have $$ \bP(I_5)\equiv \bP( \max_{i\notin \cJ^0} |\widehat h_i-\widehat h_{i-1}|>2\lmtwo)=0. $$ From \eqref{equicontinuity}, we have $$ \begin{array}{ll} \bP(I_3)&\equiv \bP(\max_{i\in \cJ^0} |(w_i^0b_{j(i)}^0)^{-1}\widehat h_{i}-(w_i^0b_{j(i-1)}^0)^{-1}\widehat h_{i-1}|> 2f(0)/3)\\ &\le \bP(\max_{i\in \cJ^0} |(b_{j(i)}^0)^{-1/2}\widehat h_{i}-(b_{j(i-1)}^0)^{-1/2}\widehat h_{i-1}|> 2f(0)(b_{\min}^0)^{1/2}a_n/3)\\ &=o(1). \end{array} $$ Then from \eqref{Rlam2n-1} and \eqref{Rlam2n-2}, we get $$ \bP(\cR_{\lmtwo}^c)\le \bP(I_1)+\bP(I_2)+\bP(I_3)+\bP(I_4)+\bP(I_5) \to 0 ~{\rm when~} n\to \infty. $$ $\Box$ \noindent{\bf Proof of Theorem \ref{theorem-consistency-flsa}} \noindent From Theorem \ref{theorem-consistency-fsa}, we know that if (A1) and (B1-B3) hold, the LAD-FLSA selects all jumps with probability tending to $1$. Thus, we can prove the main results based on the true block partition.
By the KKT conditions, $\hbnu_n$ is a LAD-FLSA solution if and only if \bel{kkt-flsa} \left\{ \begin{array}{ll} \sum_{k\in \cB^0_{j}} \sgn(y_k- \hnu_j)+ \widehat b_j \widehat c_j= \lmone \widehat b_j \sgn(\hnu_j) &{\rm if}~\hnu_j\neq 0 \\ |\sum_{k\in \cB^0_{j}} \sgn(y_k-\hnu_j)+ \widehat b_j \widehat c_j|<\lmone \widehat b_j & {\rm if}~\hnu_j= 0 \end{array} \right.. \eel Let $\hnu_j$ and $\nu_j^0$ satisfy \begin{equation} \label{hnuj} \left\{ \begin{array}{ll} \hnu_j=\nu_j^0+(2f(0)b_{j}^0)^{-1}\left(\sum_{i\in \cB_{j}^0} \sgn(\veps_i)+ b^0_j c_j^0-\lmone b_j^0 \sgn(\nu_j^0)+\widehat h_{j}\right) & \quad \forall~ j\in \cK^0 \\ \hnu_j=0 & \quad\forall~ j\notin \cK^0. \end{array} \right. \end{equation} Here, by abuse of notation, $\widehat h_{j}$ is a remainder term satisfying the stochastic equicontinuity property \bel{equicontinuity2} |(b_j^0)^{-1/2}\widehat h_{j}|=O_p(1), ~\forall~ j. \eel In fact, $\widehat h_{j}=2f(0)b^0_{j} (\hnu_j-\nu_j^0)+\sum_{ i\in \cB^0_{j}}\bE[r_i^j] +\sum_{i\in \cB^0_{j}}\varsigma_i^j$, with $r_i^j=\sgn(\veps_i-(\hnu_j-\nu_j^0) )-\sgn(\veps_i)$ and $\varsigma_i^j=r_i^j-\bE_{\bveps}[r_i^j]$ for $i\in \cB^0_j$ and $1\le j\le J_0+1$. If $\{\sgn(\hnu_j)=\sgn(\nu_j^0), \forall ~j\in \cK^0\}$, then $\hbnu_n$ in \eqref{hnuj} satisfies \eqref{kkt-flsa}, and therefore is a LAD-FLSA solution. Define an event $$\cR_{n}=\cR(\lmone,\lmtwo)\equiv\{\widehat{\cK}=\cK^0\}\cap\{\sgn(\hnu_j)=\sgn(\nu_j^0), \forall ~j\in \cK^0\}.$$ Then $\cR_n$ holds if \bel{flsa-kkt2} \left\{ \begin{array}{ll} \sgn(\hnu_j)=\sgn(\nu_j^0), & \forall ~j\in \cK^0 \\ |\sum_{k\in \cB^0_{j}} \sgn(y_k-\hnu_j)+ \widehat b_j \widehat c_j|<\lmone \widehat b_j & \forall ~j\notin \cK^0 \\ \end{array} \right.. \eel We can verify that $\sgn(\widehat \nu_j)=\sgn(\nu_j^0), \forall j\in \cK^0$ holds if $|\sgn(\hnu_j)(\nu_j^0-\widehat \nu_j)|<|\nu_j^0|,~{\rm for~}j\in \cK^0.
$ Therefore, from \eqref{hnuj} and \eqref{flsa-kkt2}, $\cR_n$ holds if \bel{hnuj3} \left\{ \begin{array}{ll} |\sum_{i\in \cB^0_j} \sgn(\veps_i) +b_j^0c_j^0-\lmone b_j^0 \sgn(\nu_j^0)+\widehat h_j|<2f(0)b_j^0|\nu_j^0|& \forall~ j\in \cK^0\\ |\sum_{i\in \cB^0_j} \sgn(\veps_i) +b_j^0c_j^0+\widehat h_j|<\lmone b_j^0& \forall ~j\notin \cK^0 \end{array} \right.. \eel Thus we have \bel{R1n} \begin{array}{ll} \bP(\cR_n^c) &\le \bP(\max_{j\in \cK^0} |\sum_{i\in \cB_j^0} \sgn(\veps_i)|> 2f(0) \min_{j\in\cK^0} b_j^0 \min_{j\in\cK^0} |\nu_j^0|/4)\\ &+ \bP(\max_{j\in \cK^0} |c_j^0| > 2f(0) \min_{j\in\cK^0}|\nu_j^0|/4 )\\ &+\bP(\max_{j\in \cK^0} |\lmone \sgn(\nu_j^0)|> 2f(0) \min_{j\in\cK^0} |\nu_j^0|/4 )\\ &+ \bP(\max_{j\in \cK^0}|\widehat h_j/(b_j^0\nu_j^0)|>f(0) /2)\\ &+\bP(\max_{j\notin \cK^0} |\sum_{i\in \cB_j^0} \sgn(\veps_i)|>\lmone \min_{j\in\cK^0} b_j^0/3 )\\ &+\bP(\max_{j\notin \cK^0} |c_j^0|> \lmone/3)\\ &+\bP(\max_{j\notin \cK^0}|\widehat h_j/b_j^0|>\lmone /3)\\ &\equiv\bP(S_1)+\bP(S_2)+\bP(S_3)+\bP(S_4)+\bP(S_5)+\bP(S_6)+\bP(S_7). \end{array} \eel Let $Z_j=\sum_{i \in \cB_j^0} \sgn(\veps_i)/b_j^0$. Then $\bE[Z_j]=0$ and $\Var(Z_j)=1/b_j^0$. The $Z_j$'s are independent and sub-Gaussian. From (C3), we have $$ \bP(S_1) \le 2K_0 \exp\{-b_{\min}^0f^2(0)\rho_n^2/8\}=o(1).$$ We can verify that $\bP(S_2)=o(1)$ from (C4), $\bP(S_3)=o(1)$ from (C5) and $\bP(S_6)=o(1)$ from (C2). From (C1), $$\bP(S_5)\le 2(J_0-K_0)\exp\{-b_{\min}^0\lmone^2/32\}=o(1).$$ Furthermore, we have $\bP(S_7)=o(1)$ and $\bP(S_4)= o(1)$. From \eqref{R1n}, we have $\bP(\cR_n) \to 1$ when $n\to \infty$, which completes the proof. $\Box$ The rest of the Appendix is devoted to the proof of Theorem \ref{thm-unbiased-fl}. Recall that $\bw$ is the vector of jump coefficients, with $w_i=\mu_i-\mu_{i-1}$ for $2\le i\le n$, and $\bnu=(\nu_1,\cdots, \nu_J)'$ is the vector of block coefficients. From Proposition 3 in Rosset and Zhu (2007), we have the following results on the LAD-FLSA solution.
\begin{lemma}\label{piecewise} \begin{itemize} \item[(i)] For $\lm_1=0$, there exists a set of values of $\lm_2$, $$0=\lm_{2,0}<\lm_{2,1}<\cdots<\lm_{2,m_2}<\lm_{2,m_2+1}=\infty,$$ such that $\widehat \bw(0, \lm_{2,k})$ for $1\le k\le m_2$ is not uniquely defined, the set of optimal solutions at each $\lm_{2,k}$ is a straight line in $\cR^n$, and for any $\lm_2\in (\lm_{2,k}, \lm_{2,k+1})$, the solution $\widehat \bw(0, \lm_2)$ is constant. \item[(ii)] For each $\lm_{2,k}$, $1\le k\le m_2$, above, there exists a set of values of $\lm_1$, $$0=\lm_{1,0}<\lm_{1,1}<\cdots<\lm_{1,m_1}<\lm_{1,m_1+1}=\infty,$$ such that $\hbnu(\lm_{1,j}, \lm_{2,k})$ for $1\le j\le m_1$ is not uniquely defined, the set of optimal solutions at each $\lm_{1,j}$ is a straight line in $\cR^n$, and for any $\lm_1\in (\lm_{1,j}, \lm_{1,j+1})$, the solution $\hbnu(\lm_{1}, \lm_{2,k})$ is constant. \end{itemize} \end{lemma} In Lemma~\ref{piecewise}, if we define the $\lm_{2,k}$, $1\le k \le m_2$, from (i) as the transition points for $\bw$ and $\cN_{0, \lm_2}$ as the set of $\by\in \cR^n$ such that $\lm_{2}$ is a transition point for $\bw$, then the jump set $\cJ(0,\lm_2)$ only changes at those $\lm_{2,k}$'s. Furthermore, let $\lm_2=\lm_{2,k}$ for some $1\le k \le m_2$. If we also define the $\lm_{1,j}$, $1\le j \le m_1$, from (ii) as the transition points for $\bnu$ and $\cN_{\lm_{1}, \lm_{2}}$ as the set of $\by\in \cR^n$ such that $\lm_{1}$ is a transition point for $\bnu$, then the set of nonzero blocks, $\cK(\lm_{1},\lm_{2})$, only changes at the $\lm_{1,j}$'s or $\lm_{2,k}$'s, and $\cN_{\lm_1,\lm_2}$ is contained in a finite collection of hyperplanes in $\cR^n$.
From Lemma \ref{piecewise}, we know that for any given $\by\in\cR^n/\cN_{\lm_{1}, \lm_2}$, $\hbmu(\lm_{1},\lm_2)$ is fixed, and $\{1, 2, \cdots, n\}$ is divided into two sets, $\cE_{\by,\lm_{1},\lm_2}$ and $\cE_{\by,\lm_{1},\lm_2}^c$, where $\cE_{\by,\lm_{1},\lm_2}=\{1\le i\le n:~ y_i-\hnu_{j(i)}=0, \hnu_{j(i)}\neq 0\} $ and $j(i)$ specifies the block containing $\hmu_i$. Thus, we have \bel{event} |\cE_{\by,\lm_1,\lm_2}|=|\cK(\lm_1,\lm_2)|. \eel \begin{lemma}\label{continuous} For any $\lm_1>0$ and $\lm_2>0$, if $\by\in\cR^n/\cN_{\lm_{1}, \lm_2}$, then $\hbnu(\lm_1,\lm_2,\by)$ is a continuous function of $\by$, and thus $\cE_{\by,\lm_1,\lm_2}$ is locally constant. \end{lemma} \noindent {\bf Proof of Lemma~\ref{continuous}} \noindent Let $L(\bmu, \by)$ denote the function $$L(\bmu, \by)=\sum_{i=1}^n|y_i-\mu_i|+\lm_1\sum_{i=1}^n|\mu_i|+\lm_2\sum_{i=2}^n|\mu_i-\mu_{i-1}|. $$ Since $\by\in\cR^n/\cN_{\lm_1,\lm_2}$, $\widehat \bnu$ does not change, by Lemma \ref{piecewise}. For any $\by_0\in\cR^n/\cN_{\lm_1,\lm_2}$ and any sequence $\{\by_m\}$ such that $\{\by_m\}\to \by_0$, we want to prove that $\hbnu(\lm_1,\lm_2, \by_m)\to \hbnu(\lm_1,\lm_2, \by_0)$. This is equivalent to proving $\hbmu(\lm_1,\lm_2, \by_m)\to \hbmu(\lm_1,\lm_2, \by_0)$. Because $\|\hbmu(\lm_1,\lm_2,\by)\|_1\leq\|\hbmu(0,0,\by)\|_1=\|\by\|_1$, $\hbmu(\lm_1,\lm_2,\by)$ is bounded. Thus we only need to check that for every converging subsequence of $\{\by_{m}\}$, say $\{\by_{m_k}\}$, we have $\hbmu(\lm_1,\lm_2, \by_{m_k})\to \hbmu(\lm_1,\lm_2, \by_0).$ Suppose that $\hbmu(\lm_1,\lm_2, \by_{m_k})\to \check\bmu(\lm_1,\lm_2)$ when $m_k\to \infty$. Let $\Delta(\bmu,\by,\by')=L(\bmu, \by)-L(\bmu, \by')$.
On the one hand, we have \bes &&L(\hbmu(\lm_1,\lm_2, \by_{0}),\by_{0})\\ &=& L(\hbmu(\lm_1,\lm_2, \by_0),\by_{m_k})+\Delta(\hbmu(\lm_1,\lm_2, \by_0),\by_{0},\by_{m_k}) \\ &\ge& L(\hbmu(\lm_1,\lm_2, \by_{m_k}),\by_{m_k})+\Delta(\hbmu(\lm_1,\lm_2, \by_0),\by_{0},\by_{m_k})\\ &=& L(\hbmu(\lm_1,\lm_2, \by_{m_k}),\by_{0})+\Delta(\hbmu(\lm_1,\lm_2, \by_{m_k}),\by_{m_k},\by_{0}) +\Delta(\hbmu(\lm_1,\lm_2, \by_0),\by_{0},\by_{m_k}). \ees On the other hand, we have \bes &&\Delta(\hbmu(\lm_1,\lm_2, \by_{m_k}),\by_{m_k},\by_{0}) +\Delta(\hbmu(\lm_1,\lm_2, \by_0),\by_{0},\by_{m_k})\\ &=&\sum_{i=1}^n[|y_{i,m_k}-\hmu_i(\lm_1,\lm_2, \by_{m_k})|- |y_{i,0}-\hmu_i(\lm_1,\lm_2, \by_{m_k})|\\ &&+|y_{i,0} -\hmu_i(\lm_1,\lm_2, \by_{0})|- |y_{i,m_k}-\hmu_i(\lm_1,\lm_2, \by_{0})|]\\ &\le& 2\sum_{i=1}^n|y_{i,m_k}-y_{i,0}|\to 0 ~{\rm when~} k\to \infty. \ees Thus $ L(\hbmu(\lm_1,\lm_2, \by_{0}),\by_{0}) \ge \lim_{k\to \infty} L(\hbmu(\lm_1,\lm_2, \by_{m_k}),\by_{0}) = L(\check\bmu(\lm_1,\lm_2,\by_{0}),\by_{0})$. Since $\hbmu(\lm_1,\lm_2, \by_{0})$ is the unique minimizer of $L(\bmu,\by_{0})$, we have $\check\bmu(\lm_1,\lm_2,\by_{0})=\hbmu(\lm_1,\lm_2, \by_{0})$. $\Box$ \noindent{\bf Proof of Theorem~\ref{thm-unbiased-fl}} \noindent From \eqref{event} and Lemma \ref{continuous}, when neither $\lm_1$ nor $\lm_2$ is a transition point, there exists $\eps>0$ such that for every $\by'\in {\rm Ball}(\by, \eps)$, $\cE_{\by',\lm_1,\lm_2}=\cE_{\by,\lm_1,\lm_2}$. Thus, $\partial \hnu_{j(i)}(\lm_1,\lm_2)/\partial y_i =1$ if $i\in \cE_{\by,\lm_1,\lm_2}$ and $\partial \hnu_{j(i)}(\lm_1,\lm_2)/\partial y_i =0$ if $i\notin \cE_{\by,\lm_1,\lm_2}$. Overall, we have $\sum_{i=1}^n \partial\hnu_{j(i)}(\lm_1,\lm_2)/\partial y_i=|\cE_{\by,\lm_1,\lm_2}|= |\widehat \cK(\lm_1,\lm_2)|$ for $\by \in \cR^n/\cN_{\lm_1,\lm_2}$. Since $\cN_{\lm_1,\lm_2}$ is contained in a finite collection of hyperplanes, we can obtain the conclusion by taking the expectation.
$\Box$ \begin{figure} \caption{Copy Number data set from the GM 13330 BAC CGH array. The top and bottom panels give outputs from the LAD-FLSA and LS-FLSA, respectively. Both observed data (gray dots) and estimates (dark solid lines) from chromosomes 1--4 are plotted. Data from different chromosomes are separated by gray vertical lines. } \label{fig:signal example} \end{figure} \begin{figure} \caption{The estimated degrees of freedom of LAD-FLSA for every combination of $\lm_1$ and $\lm_2$ for the chromosome 1 data from the GM 13330 BAC array. } \label{fig:signal example} \end{figure} \begin{figure} \caption{Hypothetical model from 500 Monte Carlo simulations of 129 markers for chromosome 1 data from the GM 13330 BAC array. It shows that the estimated number of nonzero blocks $|\cK(\lm_1,\lm_2)|$ is very close to the true $D(\mathcal{M})$.} \label{fig:CGHdf} \end{figure} \begin{figure} \caption{Example of observed data (grey) with true hidden signals (black) and LAD-FLSA estimates (red). There are $6$ blocks with $4$ nonzero ones. Random noise $\veps_i$'s are generated from double exponential distributions with center $0$ and scale $0.5/\sqrt{2}$.} \label{fig:signaldata} \end{figure} \begin{table} [h] \begin{center} \caption {Simulation results for Section \ref{sec-sim-con}.
} {\small \begin{tabular} {c c c c c c c c c }\label{sim results} \\ \hline\hline & & & \multicolumn{3}{c} {$n=1000 $} & \multicolumn{3}{c} {$n=5000$} \\ \cline{4-6} \cline{7-9} $\epsilon_i$ & $\sigma$ & {\bf Model} & LARE$^1$& CFR+6$^2$& JUMP$^3$ & LARE &CFR+6 & JUMP \\ \hline\hline \multirow{6}{*}{\rotatebox{90}{\mbox{Normal}}} & & LAD-FLSA &0.197 &89\%(17\%) &7.12(1.32) &0.173 & 82\%(15\%) &7.24(1.33) \\ & \raisebox{1.3ex}[0pt]{1.0} & LS-FLSA & 0.035 &18\%(3\%) & 7.82(1.47) &0.021 &5\%(0\%) &7.7(1.48)\\ \cline{3-9} & & LAD-FLSA &0.098 &97\%(32\%) & 5.59 (0.75) &0.087 & 96\%(22\%) &5.54(0.72) \\ & \raisebox{1.3ex}[0pt]{0.5} & LS-FLSA & 0.016 &48\%(13\%) & 5.68(0.74) &0.007 &57\%(7\%) &5.61(0.74)\\ \cline{3-9} & & LAD-FLSA & 0.019 &100\%(93\%) &5.00(0.00) &0.017 & 100\%(94\%) &5.00(0.00) \\ & \raisebox{1.3ex}[0pt]{0.1} & LS-FLSA & 0.013 &100\%(93\%) & 5.00(0.00) &0.003 &100\%(94\%) &5.00(0.00) \\ \cline{3-9} \hline \multirow{6}{*}{\rotatebox{90}{\mbox{Double Exp.}}} & & LAD-FLSA &0.154 &88\% (22\%) &7.42(1.54) &0.128 & 89\%(25\%) &7.18(1.35) \\ & \raisebox{1.3ex}[0pt]{1.0} & LS-FLSA & 0.031 &12\%(0\%) & 7.42(1.42) &0.021 &3\%(1\%) &7.31(1.43)\\ \cline{3-9} & & LAD-FLSA &0.077 &97\%(34\%) &5.95(0.90) &0.064 & 100\%(41\%) &5.74(0.86) \\ & \raisebox{1.3ex}[0pt]{0.5} & LS-FLSA & 0.016 &57\%(12\%) &5.73(0.78) &0.007 &62\%(19\%) &5.68(0.87)\\ \cline{3-9} & & LAD-FLSA & 0.015 &100\%(97\%) &5.00(0.00) &0.013 & 100\%(90\%) &5.00(0.00) \\ & \raisebox{1.3ex}[0pt]{0.1} & LS-FLSA & 0.013 &100\%(97\%) & 5.00(0.00) &0.003 &100\%(89\%) &5.00(0.00) \\ \cline{3-9} \hline \multirow{6}{*}{\rotatebox{90}{\mbox{Cauchy}}} & & LAD-FLSA &0.048 &87\%(56\%) &6.12(1.07) &0.029 & 82\%(45\%) &6.14(0.95) \\ & \raisebox{1.3ex}[0pt]{1.0} & LS-FLSA & 0.239 &17\%(4\%) & 16.37(5.38) &0.275 &2\%(0\%) &59.41(14.38)\\ \cline{3-9} & & LAD-FLSA &0.028 &99\%(70\%) &5.56(0.86) &0.015 & 87\%(66\%) &5.59(0.78) \\ & \raisebox{1.3ex}[0pt]{0.5} & LS-FLSA & 0.120 &39\%(17\%) & 10.67(3.62) &0.132 &15\%(3\%)
&32.05(8.51)\\ \cline{3-9} & & LAD-FLSA & 0.007 &95\%(92\%) &5.18(0.46) &0.003 & 96\%(87\%) &5.17(0.40) \\ & \raisebox{1.3ex}[0pt]{0.1} & LS-FLSA & 0.029 &94\%(78\%) & 6.30(1.34) &0.023 &87\%(49\%) &10.14(3.13) \\ \cline{3-9} \hline\hline \multicolumn{9}{l} {\footnotesize{NOTE 1: LARE is the least absolute relative ratio defined in \eqref{eq:lare}.}}\\ \multicolumn{9}{l} {\footnotesize{NOTE 2: CFR+6 is the ratio of recovering $\hmu^0$ correctly or with at most six additional false positives}}\\ \multicolumn{9}{l} {\footnotesize{ \quad\quad\quad\quad (correctly fitted ratio). }}\\ \multicolumn{9}{l} {\footnotesize{NOTE 3: JUMP is the average number (standard deviation) of the number of jumps.}} \end{tabular}} \end{center} \end{table} \end{document}
\begin{document} \title[The semi-global isometric embedding]{The semi-global isometric embedding of surfaces with curvature changing signs stably} \author[W. Cao]{Wentao Cao} \address{Institut f\"{u}r Mathematik, Universit\"{a}t Leipzig, D-04109, Leipzig, Germany} \email{[email protected]} \thanks{The research is supported by the ERC Grant Agreement No. 724298. The paper was completed while the author was a postdoc at the Max Planck Institute for Mathematics in the Sciences, so the author warmly thanks the institute for its hospitality and the research environment it provided. The author also thanks the anonymous referee for his/her careful reading and helpful comments, which contributed to an improvement of the initial manuscript.} \subjclass[2010]{Primary 35M12, 53A05, 53C21.} \date{} \begin{abstract} A semi-global isometric embedding of abstract surfaces with Gaussian curvature changing sign to any finite order is obtained through solving the Darboux equation. \end{abstract} \maketitle \section{Introduction} Isometric embedding is a classical problem in differential geometry. Nash obtained two famous results on this problem. In \cite{nash1954}, he showed that any smooth $n$ dimensional Riemannian manifold $(\mathcal{M}^n, g)$ can be $C^1$ isometrically embedded into the Euclidean space $\mathbb{R}^{n+2}$ by a technique later called convex integration, which has also been applied to non-uniqueness problems in PDEs. Besides, in \cite {nash1956} he also proved that any sufficiently smooth metric can be isometrically embedded into the Euclidean space $\mathbb{R}^N$, with $N$ depending on $n$ and larger than $s_n=\frac{n(n+1)}{2}$, by applying his implicit function theorem, a powerful tool for handling the loss of regularity.
However, it is believed that the best dimension of the target space for the local smooth isometric embedding of $\mathcal{M}^n$ is $N=s_n.$ Janet in \cite{Janet} and Cartan in \cite{Cartan} independently proved that any analytic $n$ dimensional Riemannian manifold admits a local analytic isometric embedding in $\mathbb{R}^{s_n}.$ In particular, when $n=2,$ Schlaefli conjectured, and later Yau in \cite{Yau} re-posed the problem, that any smooth surface always admits a local smooth isometric embedding in $\mathbb{R}^3.$ In this case, the problem can be formulated as solving the Gauss-Codazzi system or the Darboux equation (see \cite{HH}). The type of the equations depends on the signs of the Gaussian curvature $K$ of the given surface. When $K>0,$ the equations are of elliptic type. Many mathematicians use the Darboux equation to handle this problem. It also has a close relation with the Weyl problem when the manifold is $\mathbb{S}^2$ (see \cite{w}); a proof can be found in \cite{n}. Readers can also see \cite{gl, hz,lin0} for the $K\geq0$ case. On the other hand, when $K<0$ or $K\leq0,$ the equations are hyperbolic or degenerate hyperbolic equations. Instead of the Darboux equation, the Gauss-Codazzi system is usually used to tackle the isometric embedding problem in this case; see \cite{CSW,Christoforou,H}. Some difficulties arise when $K$ changes sign, because the equations are of mixed type. Lin made a breakthrough in \cite{Lin} by applying the theory of symmetric positive systems and obtained that a sufficiently smooth isometric embedding exists when the Gaussian curvature satisfies $$K(0)=0, \nabla K(0)\neq0.$$ Then Han improved the regularity of the embedding in \cite{Han}, and later he also showed the existence of a local isometric embedding when the Gaussian curvature changes sign stably, i.e.\ when $K$ vanishes to finite order across a curve, in \cite{han}.
For the semi-global isometric embedding, Dong first showed in \cite{Dong} that there exists an isometric embedding parametrized on $[0, 2\pi]\times[-\delta,\delta]$ mapping into $\mathbb{R}^3$ provided that the positive constant $\delta$ is small enough and $K$ satisfies $$K(x_1, 0)=0, \quad \partial_2K\,\Gamma^2_{11}<0, \text{ on } [0, 2\pi]\times\{0\},$$ together with two other compatibility conditions. Here $(x_1,x_2)$ is the parameter for the surface and $\Gamma^2_{11}$ is one Christoffel symbol. We call an embedding semi-global if it is periodic in one variable and locally defined in the other variable. He proved this result through the Darboux equation and a method similar to \cite{hong1987}. In \cite{li}, Li obtained a semi-global isometric embedding under the same conditions as Dong but defined only on $[0, 2\pi]\times[0,\delta].$ In the present paper, we obtain a semi-global isometric embedding for the case where the Gaussian curvature changes sign stably, i.e.\ vanishes to any finite order $2\alpha-1$ across a closed curve, by solving the Darboux equation; see Theorem \ref{t:anyorder}. Our main approach is the framework of symmetric positive systems, which was fully studied in \cite{Fried} and improved by Gu in \cite{Gu}. Surprisingly, our a priori estimate for the linearized equation does not depend on the vanishing order $\alpha,$ while the regularity of the final embedding does depend on $\alpha.$ We remark that our arguments cannot be extended to yield a semi-global smooth isometric embedding of a smooth metric, since the $H^\infty$ estimate seems hard to obtain within the present framework. Our future plan is to consider the smooth case. The rest of this paper consists of two sections. In Section \ref{positive}, we present the general theory of symmetric positive systems. Main Theorem \ref{t:anyorder} and its proof are given in Section \ref{stable}.
\section{ Symmetric positive system}\label{positive} The theory of symmetric positive systems was given in \cite{Fried} and later improved in \cite{Gu}; it is an effective approach to proving existence for differential equations of mixed type. We will apply the theory to obtain the a priori estimate for the linearized equation of the Darboux equation in Section \ref{stable}. In this section, we give the framework of symmetric positive systems, whose details can be found in \cite{Fried, Gu, HH}. Consider the first order linear differential equations \begin{equation}\label{e:general} \mathcal{L}U=A\frac{\partial U}{\partial t}+B\frac{\partial U}{\partial x}+CU=F \end{equation} defined on a rectangle $\Omega$ centered at $0$, where $U:\Omega\mapsto\mathbb{R}^n$ is the unknown vector function, $A, B, C$ are given $n\times n$ real matrix-valued functions, and $F:\Omega\mapsto\mathbb{R}^n$ is any given vector function. Besides, the boundary condition for \eqref{e:general} is given as follows \begin{equation}\label{e:genrealboundary} \Xi U=0, \quad \text{ on } \partial \Omega, \end{equation} where $\Xi$ is also a given $n\times n$ real matrix-valued function. If $A$ and $B$ are symmetric, \eqref{e:general} is called symmetric, and \eqref{e:general} is called a symmetric positive system if the following matrix is positive definite, \begin{equation*} \Theta=C+C^T-\partial_t A-\partial_x B. \end{equation*} Here $C^T$ stands for the transpose matrix of $C.$ Under the assumption that \eqref{e:general} is symmetric positive, it is easy to bound the $L^2$ norm of $U$ by integrating by parts after multiplying \eqref{e:general} by $U$.
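For completeness, this basic estimate can be sketched as follows (a standard computation, recorded here under the assumption that the boundary term is non-negative, which is exactly what the admissible boundary condition guarantees):

```latex
% Multiply \eqref{e:general} by U^T; the symmetry of A and B gives
\begin{align*}
2U^T\mathcal{L}U
  &=\partial_t\big(U^TAU\big)+\partial_x\big(U^TBU\big)
    +U^T\big(C+C^T-\partial_tA-\partial_xB\big)U\\
  &=\partial_t\big(U^TAU\big)+\partial_x\big(U^TBU\big)+U^T\Theta U.
\end{align*}
% Integrating over \Omega and applying the divergence theorem, with
% \Upsilon=An_1+Bn_2 and (n_1,n_2) the outward normal,
\begin{equation*}
\int_{\partial\Omega}U^T\Upsilon U\,ds+\int_{\Omega}U^T\Theta U\,dt\,dx
  =2\int_{\Omega}U^TF\,dt\,dx.
\end{equation*}
```

If the boundary term is non-negative and $\Theta\ge\theta_0 I$ for some $\theta_0>0$, the Cauchy--Schwarz inequality yields $\theta_0\|U\|^2_{L^2}\le 2\|U\|_{L^2}\|F\|_{L^2}$, i.e.\ $\|U\|_{L^2}\le (2/\theta_0)\|F\|_{L^2}$.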
To get a differentiable solution of \eqref{e:general} we must estimate the derivatives of $U.$ An easy calculation gives us \begin{equation*} \begin{split} &A\partial_t(\partial_t U)+B\partial_x(\partial_t U)+C\partial_t U+\partial_t A\partial_t U+\partial_t B\partial_x U+\partial_t CU=\partial_t F,\\ &A\partial_t(\partial_x U)+B\partial_x(\partial_x U)+C\partial_x U+\partial_x A\partial_t U+\partial_x B\partial_x U+\partial_x CU=\partial_x F. \end{split} \end{equation*} We note that the above equations and \eqref{e:general} still form a symmetric system of equations for the $3n$ unknown functions $\{U, \partial_t U, \partial_x U\}.$ However, this symmetric system need not be positive, and the associated boundary conditions are not necessarily homogeneous. Observing that the tangential derivatives satisfy the homogeneous boundary conditions, Friedrichs in \cite{Fried} introduced a set of first order differential operators $D_\sigma$ of the form $$D_0=I, \quad D_1=\frac{\partial}{\partial x}, \quad D_\sigma=d^1_\sigma\frac{\partial}{\partial t}+d^2_\sigma, \quad \sigma=2, 3, $$ where the $d^1_\sigma$ are smooth $n\times n$ diagonal matrices and the $d^2_\sigma$ are any smooth $n\times n$ matrices. Furthermore, we assume that the set of differential operators is complete, i.e. any tangential operator $D$ can be expressed as $D=C^\sigma D_\sigma$ with an $n\times n$ matrix $C^0$ and scalar functions $C^\sigma$ ($\sigma\neq0$). On the other hand, let $(n_1, n_2)$ be the outward normal vector and $\Upsilon=An_1+Bn_2$; then the boundary condition \eqref{e:genrealboundary} is said to be admissible if at every point of the boundary $\partial\Omega$, the plane $\Xi U=0$ is a maximal non-negative plane of the quadratic form $U\cdot\Upsilon U$.\par Moreover, the following formula \begin{equation}\label{e:diff} D_\sigma\mathcal{L}=\mathcal{L}D_\sigma+p_\sigma^\tau D_\tau+\theta_\sigma\mathcal{L} \end{equation} holds for the set of differential operators.
Here $p_\sigma^\tau, \theta_\sigma$ are all $n\times n$ matrices. Let $\underset{1}{U}$ be the set of $4n$ unknown functions $\{U, D_1U, D_2U, D_3U\}$ and define $\underset{1}{\mathcal{L}}$ as the differential operator acting on $\underset{1}{U}$ as follows: \begin{equation*} \underset{1}{\mathcal{L}}\underset{1}{U}=\{\mathcal{L}U+p_0^\tau U_\tau, \mathcal{L}U_1+p_1^\tau U_\tau,\cdots,\mathcal{L}U_3+p_3^\tau U_\tau\} \end{equation*} with $U_\sigma=D_\sigma U.$ Set \begin{equation*} \theta\underset{1}{U}=\{\theta_0U, \theta_1U_1, \theta_2U_2, \theta_3U_3\}. \end{equation*} Then we can derive the first enlarged system \begin{equation}\label{e:1enlarge} \underset{1}{\mathcal{L}}\underset{1}{U}=(D-\theta)F \end{equation} with notations \begin{equation*} \begin{split} &DF=\{D_0F, D_1F, D_2F, D_3 F\},\\ &\theta F=\{\theta_0F, \theta_1F, \theta_2F, \theta_3 F\}. \end{split} \end{equation*} Indeed, for any operator $D_\sigma,$ we have \begin{equation*} D_\sigma F=D_\sigma\mathcal{L}U=\mathcal{L}D_\sigma U+p_\sigma^\tau D_\tau U+\theta_\sigma\mathcal{L}U=\underset{1}{\mathcal{L}}D_\sigma U+\theta_\sigma F. \end{equation*} On the other hand, if the boundary operator is assumed to satisfy the following formula \begin{equation}\label{e:diffboundary} D_\sigma \Xi U=\Xi D_\sigma U+q_\sigma^0D_0U, \end{equation} and we furthermore set $$\underset{1}{\Xi}\underset{1}{U}=\{\Xi U+q_0^0U, \Xi U_1+q_1^0U,\cdots, \Xi U_3+q_3^0U\},$$ then the enlarged boundary condition is \begin{equation}\label{e:1enlargeboundary} \underset{1}{\Xi}\underset{1}{U}=0. \end{equation} Hence, if $F\in C^1(\bar{\Omega})$ and $U\in C^2(\bar{\Omega}),$ then \eqref{e:1enlarge} and \eqref{e:1enlargeboundary} are satisfied. If the linear differential operator $\mathcal{L}$ and the boundary operator $\Xi$ satisfy \eqref{e:diff} and \eqref{e:diffboundary} respectively, we say that \eqref{e:general}-\eqref{e:genrealboundary} can be enlarged.
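To see where the matrices $p_\sigma^\tau$ and $\theta_\sigma$ in \eqref{e:diff} come from, one can work out the simplest case $D_1=\partial_x$ explicitly, assuming for this illustration only that $A$ is invertible (so that $\partial_tU=A^{-1}(\mathcal{L}U-B\partial_xU-CU)$):

```latex
\begin{align*}
D_1\mathcal{L}U
 &=\mathcal{L}D_1U+\partial_xA\,\partial_tU+\partial_xB\,\partial_xU+\partial_xC\,U\\
 &=\mathcal{L}D_1U
   +\big(\partial_xB-\partial_xA\,A^{-1}B\big)D_1U
   +\big(\partial_xC-\partial_xA\,A^{-1}C\big)D_0U
   +\partial_xA\,A^{-1}\,\mathcal{L}U,
\end{align*}
```

so in this case $p_1^1=\partial_xB-\partial_xA\,A^{-1}B$, $p_1^0=\partial_xC-\partial_xA\,A^{-1}C$, $p_1^\tau=0$ for $\tau=2,3$, and $\theta_1=\partial_xA\,A^{-1}$.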
Friedrichs in \cite{Fried} shows that such a set of differential operators can be constructed for any symmetric positive system \eqref{e:general}. Thus, we can also derive any $s$-enlarged system. From \cite{Fried} and \cite{Gu}, a powerful lemma about the existence of differentiable solutions to the boundary value problem \eqref{e:general}-\eqref{e:genrealboundary} can be stated as follows. \begin{lemma}\label{l:exist} Assume that the system \eqref{e:general} is positive and any $s$-enlarged system of \eqref{e:general} is also positive. Besides, the boundary condition \eqref{e:genrealboundary} is noncharacteristic and admissible, $A, B, C\in C^{s+1},$ and $F\in H_s.$ Then there exists a strong solution $U\in H_s.$ \end{lemma} Here $H_s$ denotes the normed space with norm defined by $$|||V|||_s^2=\sum_{0\leq l\leq s}\|D_{\sigma_1}\cdots D_{\sigma_l}V\|_{L^2}^2.$$ We also use $\|\cdot\|_s$ and $|\cdot|_s$ to denote the norms of the Sobolev space $H^s$ and of $C^s$, respectively. The proof of Lemma \ref{l:exist} can be found in \cite{Gu}. \section{Main theorem and its proof}\label{stable} Let the given sufficiently smooth Riemannian metric of a surface be $$g=g_{ij}dx_idx_j,$$ and consider the isometric embedding problem in the neighbourhood of a closed curve $\Lambda=[0, 2\pi]\times\{0\}$ on the surface, i.e. to seek a surface $\vec{r}$ defined in $I_\delta=[0, 2\pi]\times[-\delta, \delta]$ such that $$\vec{r}=(p, q, z)(x_1, x_2): I_\delta\rightarrow \mathbb{R}^3,\quad g=d\vec{r}^2.$$ \subsection{Necessary conditions} Under the geodesic coordinate system based on the curve $\Lambda$, the metric can be reduced to the following form \begin{equation}\label{e:metric} g=B^2(x_1, x_2)dx_1^2+dx_2^2, \quad B(x_1, 0)=1, \quad \partial_{2}B(x_1, 0)=k_g, \end{equation} where $B(x_1, x_2)$ is a sufficiently smooth function and $k_g$ is the geodesic curvature of $\Lambda$. We also denote $(\partial_{x_1}, \partial_{x_2})=(\partial_1, \partial_2)$.
Since $\Lambda$ is a closed curve, $B(x_1, x_2)$ is $2\pi$ periodic with respect to $x_1$. In geodesic coordinates, the Christoffel symbols are \begin{equation}\label{e:christoffel} \begin{split} &\Gamma^1_{11}=\frac{\partial_1B}{B}, \quad \Gamma^1_{12}=\frac{\partial_2B}{B}, \quad \Gamma^1_{22}=0,\\ &\Gamma^2_{11}=-B\partial_2B, \quad \Gamma^2_{12}=\Gamma^2_{22}=0. \end{split} \end{equation} As derived in \cite{Dong}, the functions $p, q, z$ must satisfy \begin{align*} &dp^2+dq^2=g-dz^2\\ =&(g_{11}-(\partial_1z)^2)dx_1^2+2(g_{12}-\partial_1z\partial_2z)dx_1dx_2+ (g_{22}-(\partial_2z)^2)dx_2^2. \end{align*} Then it is not hard to derive the following Darboux equation satisfied by $z$: \begin{equation}\label{e:darboux} \begin{split} 0=&\det(\partial_{ij}z-\Gamma^k_{ij}\partial_kz)-K\det(g_{ij})(1-g^{ij}\partial_iz\partial_jz)\\ =&(\partial_{11}z-\Gamma^1_{11}\partial_1z-\Gamma^2_{11}\partial_2z)\partial_{22}z-(\partial_{12}z-\Gamma^1_{12}\partial_1z)^2\\ &-K\det(g_{ij})(1-g^{ij}\partial_iz\partial_jz) \end{split} \end{equation} with $g^{ij}\partial_iz\partial_jz<1.$ Here the matrix $(g^{ij})$ is the inverse of the metric matrix $(g_{ij})$ and we have used \eqref{e:christoffel}. Finally, the isometric embedding problem is formulated as solving the Darboux equation \eqref{e:darboux}. Similarly to \cite{Dong} or \cite{li}, we can also derive the following theorem about the necessary conditions for our desired embeddings. \begin{theorem}\label{t:necessary} For any sufficiently smooth isometric embedding $\vec{r}=(p, q, z)(x_1, x_2)$ of $g$ on $I_\delta$ into $\mathbb{R}^3$ with $z=O(x_2^{\alpha+1})$ for some integer $\alpha\geq1,$ we have \begin{align} &k_g\partial_2^{2\alpha-1}K>0, \text{ on } \Lambda,\label{e:curvature}\\ &\int_0^{2\pi}k_gdx_1=2\pi, \label{e:compati-1}\\ &\int_0^{2\pi}\exp\left({\sqrt{-1}}\int_0^{x_1}k_g\,ds\right)dx_1=0.
\label{e:compati-2} \end{align} \end{theorem} \begin{proof} Since $\vec{r}$ is an isometric embedding on $I_\delta$ and $z=O(x_2^{\alpha+1})$, we take $2\alpha-1$ partial derivatives of \eqref{e:darboux} with respect to $x_2$, set $x_2=0$, and obtain, at $x_2=0,$ \begin{equation*} -\Gamma^2_{11}(\partial_2^{\alpha+1}z)^2=\partial_2^{2\alpha-1}K\det(g_{ij})\neq0, \end{equation*} which implies $$\Gamma^2_{11}(x_1, 0)\neq0, \quad \partial_2^{\alpha+1}z\neq0,\quad(\Gamma^2_{11}\partial_2^{2\alpha-1}K)(x_1, 0)<0.$$ Hence, using \eqref{e:metric} and \eqref{e:christoffel}, one can easily derive \eqref{e:curvature}. Using the fact that $\Lambda$ is a closed curve and following the same procedure as in \cite{Dong}, we can also obtain \eqref{e:compati-1} and \eqref{e:compati-2}. \end{proof} \subsection{Statement of main theorem} We first give a definition. \begin{definition} A surface is called an $\alpha$-surface if it is $2\pi$ periodic with respect to $x_1$ and satisfies \eqref{e:curvature}-\eqref{e:compati-2}. \end{definition} Then we state our semi-global isometric embedding theorem. \begin{theorem}[\bf{Main Theorem}]\label{t:anyorder} Given an $\alpha$-surface prescribed with a sufficiently smooth ($C^{s_*}$, $s_*\geq 2\alpha+31$) metric $g$, we can find a small positive constant $\delta$ and a sufficiently smooth ($C^{s}$, $4\leq s\leq \tfrac{4}{7} (s_*-2\alpha)-4$) isometric embedding $$\vec{r}=(p, q, z)(x_1, x_2): I_\delta\rightarrow \mathbb{R}^3,$$ such that $g=d\vec{r}^2.$ \end{theorem} \subsection{Proof of main theorem} We divide the proof into four steps. \emph{Step 1. Initial approximate solution. } Without loss of generality, we can assume that the Gaussian curvature is $$K(x_1, x_2)=x_2^{2\alpha-1}K_0(x_1, x_2), \quad K_0(x_1, x_2)\neq0, \text{ on } \Lambda.$$ Then \begin{equation}\label{e:k0} k_gK_0(x_1, 0)>0.
\end{equation} Let our desired solution to the Darboux equation take the form $$z(x_1, x_2)=x_2^{\alpha+1}(a(x_1)+w(x_1, x_2)),$$ and then an easy calculation yields \begin{equation}\label{e:zderivatives} \begin{split} &\partial_1z=x_2^{\alpha+1}(a'+\partial_1w), \quad \partial_2z=x_2^\alpha(\alpha+1)(a+w)+x_2^{\alpha+1}\partial_2w,\\ &\partial_{11}z=x_2^{\alpha+1}(a''+\partial_{11}w), ~~~\partial_{12}z=x_2^\alpha(\alpha+1)(a'+\partial_1w)+x_2^{\alpha+1}\partial_{12}w,\\ &\partial_{22}z=(\alpha+1)\alpha x_2^{\alpha-1}(a+w)+2(\alpha+1)x_2^\alpha\partial_2w+x_2^{\alpha+1}\partial_{22}w. \end{split} \end{equation} Plugging \eqref{e:zderivatives} into \eqref{e:darboux} and then dividing the resulting equation by $x_2^{2\alpha}$, we get \begin{equation}\label{e:initial-1} \begin{split} &[x_2\partial_{11}w+x_2a''-\Gamma_{11}^1x_2(a'+\partial_1w)-\Gamma^2_{11}((\alpha+1)(a+w)+x_2\partial_2w)] \\ &\cdot[x_2\partial_{22}w+2(\alpha+1)\partial_2w+(\alpha+1)\alpha(a+w)x_2^{-1}]\\ &-[x_2\partial_{12}w+(\alpha+1)(a'+\partial_1w)-\Gamma^1_{12}x_2(a'+\partial_1w)]^2 \\ &-x_2^{-1}K_0\det(g_{ij})\big[1-g^{11}(x_2^{\alpha+1}(a'+\partial_1w))^2\\ &\qquad-g^{22}(x_2^\alpha(\alpha+1)(a+w)+x_2^{\alpha+1}\partial_2w)^2\\ &\qquad-2g^{12}(x_2^\alpha(\alpha+1)(a+w)+x_2^{\alpha+1}\partial_2w)(x_2^{\alpha+1}(a'+\partial_1w))\big]=0.
\end{split} \end{equation} To construct $a(x_1),$ let $w=0$ in \eqref{e:initial-1}; then we get \begin{equation}\label{e:initial-2} \begin{split} &[-\Gamma_{11}^1x_2a'-\Gamma^2_{11}(\alpha+1)a] \cdot[(\alpha+1)\alpha ax_2^{-1}]-[(\alpha+1)a'-\Gamma^1_{12}x_2a']^2\\ &-x_2^{-1}K_0\det(g_{ij})\big[1-g^{11}(x_2^{\alpha+1}a')^2-g^{22}(x_2^\alpha(\alpha+1)a)^2-2g^{12}(\alpha+1)a\,a'x_2^{2\alpha+1}\big]=0. \end{split} \end{equation} Furthermore, taking $x_2=0$ in \eqref{e:initial-2} leads to the requirement that $$-\Gamma^2_{11}(\alpha+1)^2\alpha a^2x_2^{-1}-x_2^{-1}K_0\det(g_{ij})=0$$ hold at $x_2=0.$ Thus we take \begin{equation}\label{e:a} a(x_1)=\sqrt{\frac{-K_0\det(g_{ij})}{\Gamma^2_{11}\alpha(\alpha+1)^2}}(x_1, 0) =\sqrt{\frac{K_0\det(g_{ij})}{k_g\alpha(\alpha+1)^2}}(x_1, 0). \end{equation} It is easy to see from \eqref{e:k0} that $a(x_1)$ is well defined. After choosing $a(x_1),$ we rescale $$x_1=y_1, \quad x_2=\varepsilon^2 y_2, \quad w=\varepsilon u.$$ Then we can rewrite the Darboux equation as follows: \begin{equation}\label{e:initial-3} \begin{split} \mathcal{F}(u, \varepsilon)= &[\varepsilon^3y_2\partial_{11}u+\varepsilon^2y_2a''-\Gamma_{11}^1\varepsilon^2y_2(a'+\varepsilon\partial_1u)\\ &\quad -\Gamma^2_{11}((\alpha+1)(a+\varepsilon u)+\varepsilon y_2\partial_2u)] \\ &\cdot[y_2\partial_{22}u+2(\alpha+1)\partial_2u+(\alpha+1)\alpha(a+\varepsilon u)(\varepsilon y_2)^{-1}]\\ &-\varepsilon[\varepsilon y_2\partial_{12}u+(\alpha+1)(a'+\varepsilon\partial_1u)-\Gamma^1_{12}\varepsilon^2y_2(a'+\varepsilon\partial_1u)]^2 \\ &-(\varepsilon y_2)^{-1}K_0\det(g_{ij})\big[1-g^{11}((\varepsilon^2y_2)^{\alpha+1}(a'+\varepsilon\partial_1u))^2\\ &\quad-g^{22}((\varepsilon^2y_2)^\alpha(\alpha+1)(a+\varepsilon u)+(\varepsilon^2y_2)^{\alpha+1}\varepsilon^{-1}\partial_2u)^2\\ &\quad-2g^{12}(\varepsilon^2y_2)^{2\alpha+1}((\alpha+1)(a+\varepsilon u)+\varepsilon y_2\partial_2u)(a'+\varepsilon\partial_1u)\big]=0.
\end{split} \end{equation} Here we still use $\partial_i$ to denote derivatives with respect to $y_i.$ Utilizing the definition of $a(x_1),$ it is easy to find $$\mathcal{F}(0, \varepsilon)=\varepsilon F_0(y_1, \varepsilon^2y_2),$$ where $F_0(x_1,x_2)$ is a smooth function of $x_1, x_2.$ Note that $a(x_1)\in C^{s_*-2\alpha-1} $ and $K_0(x_1, x_2)\in C^{s_*-2\alpha-1} $ due to $g\in C^{s_*}$ and $K\in C^{s_*-2}.$ Hence $F_0(y_1, \varepsilon^2y_2)\in C^{s_*-2\alpha-3}$, since $a''(y_1)$ in $F_0(y_1, \varepsilon^2y_2)$ is of class $C^{s_*-2\alpha-3}.$ \emph{Step 2. Linearisation.} Linearising $\mathcal{F}$ at $u$ gives \begin{equation*} \mathcal{F}'(u)\phi=\underset{i, j=1,2}{\sum}a_{ij}\partial_{ij}\phi+\sum_{i=1,2}a_i\partial_i\phi+a_0\phi, \end{equation*} in which \begin{align*} a_{11}=&\varepsilon^3y_2[y_2\partial_{22}u+2(\alpha+1)\partial_2u+(\alpha+1)\alpha(a+\varepsilon u)(\varepsilon y_2)^{-1}]\\ \doteq&\varepsilon^2[a(\alpha+1)\alpha+\varepsilon \tilde{b}_{11}],\\ a_{12}=&-\varepsilon y_2[\varepsilon^2y_2\partial_{12}u+\Gamma^1_{11}\varepsilon^3y_2(a'+\varepsilon\partial_1u)+\varepsilon(\alpha+1)a'\\ &+\varepsilon\Gamma^2_{11}((\alpha+1)(a+\varepsilon u)+\varepsilon y_2\partial_2u)+\varepsilon^2(\alpha+1)\partial_1u]\\ \doteq&\varepsilon^2y_2\tilde{b}_{12},\\ a_{22}=&y_2[\varepsilon^3y_2\partial_{11}u+\Gamma^1_{11}\varepsilon^2y_2(a'+\varepsilon\partial_1u)+\Gamma^2_{11}(\alpha+1)(a+\varepsilon u) +\Gamma^2_{11}\varepsilon y_2\partial_2u]\\ \doteq&y_2[\Gamma^2_{11}(\alpha+1)a+\varepsilon \tilde{b}_{22}], \end{align*} and \begin{align*} a_1=&\varepsilon^3y_2\Gamma_{11}^1\frac{a_{11}}{\varepsilon^3y_2} +[\varepsilon^2(\alpha+1)+\Gamma^1_{11}\varepsilon^4y_2]\frac{a_{12}}{\varepsilon y_2}+h.o.t.\\ \doteq&\varepsilon^2\tilde{b}_1,\\ a_2=&\varepsilon\Gamma^2_{11}y_2\frac{a_{11}}{\varepsilon^3y_2}+2(\alpha+1)\frac{a_{22}}{y_2}
+h.o.t.\\ \doteq&\Gamma_{11}^2(\alpha+1)(3\alpha+2)a+\varepsilon \tilde{b}_2,\\ a_0=&\varepsilon(\alpha+1)\Gamma^2_{11}\frac{a_{11}}{\varepsilon^3y_2}+\frac{(\alpha+1)\alpha a}{\varepsilon y_2}\frac{a_{22}}{y_2}\\ \doteq&\frac{1}{y_2}[2\Gamma^2_{11}a(\alpha+1)^2\alpha+\varepsilon \tilde{b}_0], \end{align*} where $h.o.t.$ stands for higher order terms with respect to $\varepsilon$. Next, we divide $\mathcal{F}'(u)\phi$ by $\Gamma^2_{11}a(\alpha+1)$ and make the further change of variables \begin{equation*} t=y_2, \quad x=\frac{2\pi}{\int^{2\pi}_0\sqrt{-(\alpha+1)k_ga}\,dx_1} \int_0^{y_1}\sqrt{-(\alpha+1)k_ga}\,dx_1, \end{equation*} where we have assumed $\Gamma^2_{11}(x_1, 0)=-k_g>0$ (otherwise one can divide $\mathcal{F}'(u)\phi$ by $-\Gamma^2_{11}a(\alpha+1)$ and make a similar transformation). We finally get the following linear differential operator: \begin{equation*} \begin{split} \mathcal{L}(u)\phi=&\varepsilon^2(\alpha+\varepsilon b_{11})\partial^2_{xx}\phi+\varepsilon^2tb_{12}\partial^2_{xt}\phi+t(1+\varepsilon b_{22})\partial^2_{tt}\phi\\ &+\varepsilon^2b_1\partial_x\phi+(3\alpha+2+\varepsilon b_2)\partial_t\phi+[2(\alpha^2+\alpha)+\varepsilon b_0]\frac{\phi}{t}, \end{split} \end{equation*} where the $b_{ij}, b_k$ $(i, j=1,2;\ k=0,1,2)$ are all bounded smooth functions, linear with respect to $u, t\partial_tu, t^2\partial^2_{tt}u, \partial_xu,\partial^2_{xx}u, t\partial_{xt}^2u$, similar to the $\tilde{b}_{ij}, \tilde{b}_k.$ \emph{Step 3. A priori estimates.} For any given smooth function $f(x, t),$ we study the boundary value problem \begin{equation*} \begin{split} &\mathcal{L}(u)\phi=f(x, t), \quad (x, t)\in G=[0, 2\pi]\times[-2,2],\\ &\phi(x, 2)=0, \ x\in[0, 2\pi]; \quad \phi(0, t)=\phi(2\pi, t), \ |t|\leq2, \end{split} \end{equation*} and derive estimates for its solutions.
Let $$U=(u_1, u_2, u_3)^T=e^{\gamma t}(\partial_t\phi, \frac{\phi}{t}, \varepsilon\partial_x\phi),$$ and $F=(e^{\gamma t}f, 0, 0)^T.$ Then the linear equation with its boundary conditions can be transformed into the following boundary value problem: \begin{equation}\label{e:linear} \begin{split} &LU=A\partial_t U+B\partial_x U+CU=F,\\ &u_2(x, 2)=u_3(x, 2)=0, \ x\in[0, 2\pi], \\ &U(0, t)=U(2\pi, t), \ |t|\leq 2, \end{split} \end{equation} where \begin{equation*} \begin{split} &A=\left( \begin{array}{ccc} t(1+\varepsilon b_{22})~&~0~&~0\\ 0~&~\beta t~&~0\\ 0~&~0~&~-(\alpha+\varepsilon b_{11}) \end{array} \right),\\ &B=\left( \begin{array}{ccc} \varepsilon^2b_{12}t~&~0~&~\varepsilon(\alpha+\varepsilon b_{11})\\ 0~&~0~&~0\\ \varepsilon(\alpha+\varepsilon b_{11})~&~0~&~0 \end{array} \right),\\ &C=\left( \begin{array}{ccc} 3\alpha+2-\gamma t+\varepsilon b_2-\varepsilon\gamma t b_{22}~&~ 2(\alpha^2+\alpha)+\varepsilon b_0~&~\varepsilon b_1\\ -\beta~&~\beta(1-\gamma t)~&~0\\ 0~&~0~&~\gamma(\alpha+\varepsilon b_{11}) \end{array} \right) \end{split} \end{equation*} with positive constants $\beta, \gamma$ to be determined. In this transformation, we have used the following two simple identities: \begin{equation*} \begin{split} \beta t\partial_t(e^{\gamma t}\frac{\phi}{t})&=\beta t(\gamma e^{\gamma t}\frac{\phi}{t}+e^{\gamma t}\frac{\partial_t\phi}{t}-e^{\gamma t}\frac{\phi}{t^2})\\ &=\beta e^{\gamma t}\partial_t\phi+\beta(\gamma t-1)e^{\gamma t}\frac{\phi}{t},\\ \partial_t(e^{\gamma t}\varepsilon\partial_x\phi)&=\gamma(e^{\gamma t}\varepsilon\partial_x\phi)+\varepsilon\partial_x(e^{\gamma t}\partial_t\phi). \end{split} \end{equation*} We shall show that every $s$-th enlarged system of \eqref{e:linear} is symmetric positive, so that we can use Lemma \ref{l:exist} to show existence and derive estimates. It is easy to see that $LU=F$ is a symmetric system.
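Before turning to positivity, the two identities used in the transformation can be verified symbolically; the following sketch (ours, using \texttt{sympy}) checks both, with the factor $\varepsilon$ carried on both sides of the second identity.

```python
import sympy as sp

t, beta, gamma, eps = sp.symbols('t beta gamma epsilon', positive=True)
x = sp.symbols('x')
phi = sp.Function('phi')(x, t)

# First identity: beta*t * d/dt(e^{gamma t} phi/t)
lhs1 = beta * t * sp.diff(sp.exp(gamma * t) * phi / t, t)
rhs1 = (beta * sp.exp(gamma * t) * sp.diff(phi, t)
        + beta * (gamma * t - 1) * sp.exp(gamma * t) * phi / t)
assert sp.simplify(lhs1 - rhs1) == 0

# Second identity: d/dt(e^{gamma t} eps phi_x)
#   = gamma e^{gamma t} eps phi_x + eps d/dx(e^{gamma t} phi_t)
lhs2 = sp.diff(sp.exp(gamma * t) * eps * sp.diff(phi, x), t)
rhs2 = (gamma * sp.exp(gamma * t) * eps * sp.diff(phi, x)
        + eps * sp.diff(sp.exp(gamma * t) * sp.diff(phi, t), x))
assert sp.simplify(lhs2 - rhs2) == 0
```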
To prove positivity, first let $\varepsilon=0$; then \eqref{e:linear} becomes $$L^0U=A^0\partial_t U+C^0U=F,$$ where \begin{equation*} \begin{split} &A^0=\left( \begin{array}{ccc} t~&~0~&~0\\ 0~&~\beta t~&~0\\ 0~&~0~&~-\alpha \end{array} \right),\\ &C^0=\left( \begin{array}{ccc} 3\alpha+2-\gamma t~&~2(\alpha^2+\alpha)~&~0\\ -\beta~&~\beta(1-\gamma t)~&~0\\ 0~&~0~&~\gamma\alpha \end{array} \right). \end{split} \end{equation*} Furthermore, a simple calculation leads to $$ \underset{0}{\Theta}^0=C^0+C^{0T}-\partial_t A^0 =\left( \begin{array}{ccc} 2(3\alpha+2-\gamma t)-1~&~ 2(\alpha^2+\alpha)-\beta~&~0\\ 2(\alpha^2+\alpha)-\beta~&~2\beta(1-\gamma t)-\beta~&~0\\ 0~&~0~&~2\gamma\alpha \end{array} \right). $$ Hence we can take $$\beta=2(\alpha^2+\alpha), \quad 0<\gamma<\frac{1}{4} \ (\text{e.g. } \gamma=\frac{1}{8})$$ to make $\underset{0}{\Theta}^0$ positive definite. Assuming $|u|_4\leq1,$ we can choose $\varepsilon_1$ small enough so that for $0<\varepsilon\leq\varepsilon_1,$ $$\underset{0}{\Theta}=C+C^T-\partial_t A-\partial_x B$$ is positive definite. On the other hand, similar to \cite{Dong}, we introduce the following differential operators $$\mathfrak{D}=\{D_0=I, D_1=\partial_x, D_2=\xi_2(t)\partial_t, D_3=\xi_3(t)(t-2)\partial_t\},$$ which form a complete system of tangential differential operators on $G$ provided $\xi_2+\xi_3=1$ on $G$, $\xi_2=1$ when $t<1/2,$ and $\xi_3=1$ when $t>1.$ It is not hard to check that $\det A(x, 2)\neq0$ for $x\in[0, 2\pi]$ and $0<\varepsilon\leq\varepsilon_2$ with $\varepsilon_2$ small enough; thus $t=2$ is not a characteristic boundary for the system \eqref{e:linear}.
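The choice $\beta=2(\alpha^2+\alpha)$ makes the off-diagonal entries of $\underset{0}{\Theta}^0$ vanish, so positivity reduces to the three diagonal entries; this can be checked numerically on the whole strip $t\in[-2,2]$ (a sketch with \texttt{numpy}; the sampled range of $\alpha$ is our choice):

```python
import numpy as np

def theta0(alpha, t, gamma=1/8):
    """The matrix Theta^0 = C^0 + (C^0)^T - d_t A^0 with beta = 2(alpha^2+alpha)."""
    beta = 2 * (alpha**2 + alpha)
    return np.array([
        [2*(3*alpha + 2 - gamma*t) - 1, 2*(alpha**2 + alpha) - beta, 0],
        [2*(alpha**2 + alpha) - beta,   2*beta*(1 - gamma*t) - beta, 0],
        [0,                             0,                           2*gamma*alpha],
    ])

# Positive definiteness for small alpha and all t in [-2, 2].
for alpha in range(1, 5):
    for t in np.linspace(-2, 2, 81):
        assert np.linalg.eigvalsh(theta0(alpha, t)).min() > 0
```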
Moreover, the $s$-th enlarged system of \eqref{e:linear} is \begin{equation}\label{senlarge2} \begin{split} LD_{\sigma_1}\cdots D_{\sigma_s}&=\sum_\mu P_{\sigma_a}^\mu D_{\sigma_1}\cdots D_{\sigma_{a-1}}D_\mu D_{\sigma_{a+1}}\cdots D_{\sigma_s}+\prod^s_{i=1}(D_{\sigma_i}-Q_{\sigma_i})L\\ &+\sum_{r\leq s-1}D^{q_1}Q_{\sigma_1}\cdots D^{q_{l-1}}Q_{\sigma_{l-1}}D^{q_l}P_{\sigma_l}^\mu D_\mu D_{\sigma_{l+1}}\cdots D_{\sigma_r}, \end{split} \end{equation} with integers satisfying $q_1+\cdots+q_l\geq1$ and $q_1+\cdots+q_l+1+r-l\geq s.$ Here $P^\mu_\sigma, Q$ are smooth matrices. The positivity of \eqref{senlarge2} is determined by $$\underset{s}{\Theta}=\text{diag}(\underset{0}{\Theta}^0,\cdots, \underset{0}{\Theta}^0)+ \text{diag}(m_1\partial_t A, \cdots, m_{r_0}\partial_t A)$$ with $r_0=3\cdot 4^s$ and integers $m_j\in[0, s], 1\leq j\leq r_0.$ Hence, $\underset{s}{\Theta}$ is positive definite provided that $\varepsilon$ is small enough and $|u|_4\leq1$. Having established the positivity of every $s$-th enlarged system, Lemma \ref{l:exist} guarantees the existence of solutions to \eqref{e:linear}. Using the relation between $H_s$ and $H^s$ together with the Sobolev embedding theorem, and following \cite{Dong}, one obtains the a priori estimate for the linearised boundary value problem \begin{equation}\label{e:estimate} \|U\|_s\leq C_s(\|F\|_s+\zeta(s)\|u\|_{s+3}\|F\|_2), \end{equation} with $\zeta(s)=1$ when $s\geq3$ and $\zeta(s)=0$ when $0\leq s\leq2.$ \emph{Step 4. Iteration and seeking $\vec{r}$.} Comparing \eqref{e:estimate} with the a priori estimate (34) in \cite{Dong}, one finds that the two estimates are the same. Hence we can use the Nash-Moser iteration scheme in \cite{Dong} to construct a sequence of approximate solutions.
After taking $0<\varepsilon\leq\varepsilon_3$ with $\varepsilon_3$ small, we are able to show its convergence to a function $U\in H^{\tilde{s}}$ such that $$\|U\|_{\tilde{s}}\leq C\varepsilon\|F_0\|_{\tilde{s}_*}$$ with $$ 6\leq\tilde{s}<\frac{4}{7}\tilde{s}_*, \quad 28\leq\tilde{s}_*=s_*-2\alpha-3. $$ Hence our desired solution satisfies $z\in C^s$ with $s=\tilde{s}-2$, and then $$4\leq s\leq\frac{4}{7}(s_*-2\alpha)-4,\quad s_*\geq2\alpha+31.$$ Following Section 4 of \cite{Dong}, we can then recover $p, q$; here the two compatibility conditions \eqref{e:compati-1}--\eqref{e:compati-2} are used to make $p, q, z$ periodic in $x_1$. Therefore we complete the proof of Theorem \ref{t:anyorder} after taking $\delta=2\varepsilon_0$ with $\varepsilon_0=\min\{\varepsilon_i, i=1, 2, 3\}.$ \end{document}
\begin{document} \maketitle \thispagestyle{first} \setcounter{page}{629} \begin{abstract} \vskip 3mm In this talk we discuss the relations between representations of algebraic groups and principal bundles on algebraic varieties, especially in characteristic $p$. We quickly review the notions of stable and semistable vector bundles and principal $G$-bundles, where $G$ is any semisimple group. We define the notion of a low height representation in characteristic $p$ and outline a proof of the theorem that a bundle induced from a semistable bundle by a low height representation is again semistable. We include applications of this result to the following questions in characteristic $p$: 1) Existence of the moduli spaces of semistable $G$-bundles on curves. 2) Rationality of the canonical parabolic for nonsemistable principal bundles on curves. 3) Luna's \'etale slice theorem. We outline an application of a recent result of Hashimoto to study the singularities of the moduli spaces in (1) above, as well as the question of when these spaces specialize correctly from characteristic $0$ to characteristic $p$. We also discuss the results of Laszlo-Beauville-Sorger and Kumar-Narasimhan on the Picard group of these spaces. This is combined with the work of Hara and Srinivas-Mehta to show that these moduli spaces are $F$-split for $p$ very large. We conclude by listing some open problems, in particular the problem of refining the bounds on the primes involved. \vskip 4.5mm \noindent {\bf 2000 Mathematics Subject Classification:} 22E46, 14D20. \noindent {\bf Keywords and Phrases:} Semistable bundles, Low-height representations. \end{abstract} \vskip 12mm \section{Some Definitions} \vskip-5mm \hspace{5mm} We begin with some basic definitions.\\ Let $V$ be a vector bundle on a smooth projective curve $X$ of genus $g$ over an algebraically closed field (in any characteristic).
\noindent {\bf Definition 1.1:} {\it $V$ is \emph{stable} (respectively \emph{semistable}) if for all proper subbundles $W$ of $V$, we have $$\mu(W) \stackrel{\mathrm{def}}{=} \deg W/\mathrm{rk}\, W \ < (\leq)$$ $$\mu(V) \stackrel{\mathrm{def}}{=} \deg V/\mathrm{rk}\, V.$$ } \noindent For integers $r$ and $d$ with $r > 0$, one constructs the moduli spaces $U^s(r,d)$ (resp. $U(r,d)$) of stable (semistable) vector bundles of rank $r$ and degree $d$, using Geometric Invariant Theory (G.I.T.). If the ground field is ${\mathbb C}$, the complex numbers, one has the following basic result (for genus of $X \ge 2$): \noindent {\bf Theorem 1.2:} {\it Let $V$ have degree $0$. Then $V$ is stable $\Leftrightarrow V \simeq V_{\sigma}$ for some irreducible representation $\sigma : \pi_1(X) \to U(n)$.} This is due to Narasimhan-Seshadri. Note that $H \to X$ is a principal $\pi_1(X)$-fibration, where $H$ is the upper half-plane. Any $\sigma : \pi_1(X) \to GL(n, {\mathbb C})$ gives a vector bundle of rank $n$ on $X$, namely $V_\sigma = H \times^{\pi_1(X)} {\mathbb C}^n$. \noindent {\bf Remark 1.3:} It follows from Theorem 1.2 that if $V$ is a semistable bundle on a curve $X$ over ${\mathbb C}$, then $\otimes^n(V), S^n(V)$, in fact any bundle induced from $V$, is again semistable. By the Lefschetz principle, this holds over any algebraically closed field of characteristic $0$. \noindent {\bf Remark 1.4:} In general, a subbundle $W$ of a vector bundle $V$ is a reduction of the structure group of the principal bundle of $V$ to a maximal parabolic of $GL(n)$, $n = \mathrm{rank}\ V$. This is in turn equivalent to a section $\sigma$ of the associated fibre space: $$E \times^{GL(n)} \ \ GL(n)/P. $$ Now let $X$ be a smooth curve and $E\stackrel{\pi}{\rightarrow} X$ a principal $G$-bundle on $X$, where $G$ is a semisimple (or even a reductive) group in any characteristic.
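To make Definition 1.1 concrete in the simplest case: for a direct sum of line bundles $V=\bigoplus_i {\mathcal O}(d_i)$ on a curve, every subbundle has slope at most $\max_i d_i$, so $V$ is semistable exactly when all the $d_i$ are equal. A minimal illustration (the helper function is ours, not from the text):

```python
from fractions import Fraction

def slope(deg, rk):
    # mu = deg / rk, kept exact with rationals
    return Fraction(deg, rk)

def is_semistable_split(degrees):
    """Semistability test for V = O(d_1) + ... + O(d_n) on a curve:
    the maximal-degree line subbundle must not exceed the slope of V."""
    mu_V = slope(sum(degrees), len(degrees))
    return max(degrees) <= mu_V  # holds iff all degrees are equal

assert is_semistable_split([2, 2, 2])
assert not is_semistable_split([1, -1])   # O(1) destabilizes O(1) + O(-1)
```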
\noindent {\bf Definition 1.5:} $E$ is stable (semistable) $\Leftrightarrow$ for all maximal parabolics $P$ of $G$ and all sections $\sigma$ of $E(G/P)$, we have $\deg \sigma^\# T_{\pi} > 0$ (resp. $\ge 0$), where $T_{\pi}$ is the relative tangent bundle of $E(G/P) \stackrel{\pi}{\rightarrow} X$. Over ${\mathbb C}$, we have the following [18]: \noindent {\bf Theorem 1.6:} {\it $E \to X$ is stable $\Leftrightarrow E \simeq E_\sigma$ for some irreducible representation $\sigma : \pi_1(X) \rightarrow K$, the maximal compact of $G$}. The analogue of Remark 1.3 is valid in this general situation. \noindent {\bf Remark 1.7:} One can analogously define stable and semistable vector bundles and principal bundles on normal projective varieties of dimension $>1$. Again, in characteristic $0$, bundles induced from semistable bundles continue to be semistable. \noindent {\bf Remark 1.8:} In characteristic $p$, bundles induced from semistable bundles need not be semistable in general [7]. In this lecture we shall examine some conditions under which this does hold, and also discuss some applications to the moduli spaces of principal $G$-bundles on curves. \section{Low height representations} \vskip-5mm \hspace{5mm} Here we introduce the basic notion of a low height representation in characteristic $p$. Let $f:G \to SL(n) = SL(V)$ be a representation of $G$ in char $p$, $G$ being reductive. Fix a Borel $B$ and a maximal torus $T$ in $G$. Let $L(\lambda_i), 1 \leq i \leq m$, be the simple $G$-modules occurring in the Jordan-H\"older filtration of $V$. Write each $\lambda_i$ as $\displaystyle{\sum_{j}} q_{ij} \alpha_j$, where $\{ \alpha_j \}$ is the system of simple roots corresponding to $B$ and $q_{ij} \in {\mathbb Q} \ \forall i,j$. Define $ht\, \lambda_i = \displaystyle{\sum_j} q_{ij}$.
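For $G=SL(n)$ the coefficients $q_{ij}$ of a dominant weight can be read off from the inverse Cartan matrix of type $A_{n-1}$ (which is symmetric). The following sketch (ours, with \texttt{numpy}) computes the heights of the fundamental weights this way and checks them against the closed form $i(n-i)/2$, so that $2\,ht(\omega_i) = i(n-i)$, the height of the $i$-th exterior power of the standard module.

```python
import numpy as np

def fundamental_weight_heights(n):
    """Heights of the fundamental weights of SL(n) (type A_{n-1}):
    write omega_i = sum_j q_{ij} alpha_j via the inverse Cartan matrix,
    then ht(omega_i) = sum_j q_{ij}."""
    m = n - 1
    # Cartan matrix of A_{n-1}: 2 on the diagonal, -1 on the off-diagonals
    C = 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    Cinv = np.linalg.inv(C)
    return [Cinv[i].sum() for i in range(m)]  # row i gives omega_{i+1}

n = 6
for i, h in enumerate(fundamental_weight_heights(n), start=1):
    # 2 ht(omega_i) = i(n - i)
    assert abs(2*h - i*(n - i)) < 1e-9
```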
Then one has the basic [9,20]: \noindent {\bf Definition 2.1:} {\it $f$ is a low-height representation of $G,$ or $V$ is a low-height module over $G$, if $2\, ht (\lambda_i) < p \ \forall i$.} \noindent {\bf Remark 2.2:} If $2\, ht (\lambda_i)< p \ \forall i$, then it easily follows that $V$ is a completely reducible $G$-module. In fact, for any subgroup $\Gamma$ of $G$, $V$ is completely reducible over $\Gamma \Leftrightarrow \Gamma$ itself is completely reducible in $G$. By definition, an abstract subgroup $\Gamma$ of $G$ is completely reducible in $G \Leftrightarrow$ for any parabolic $P$ of $G$, if $\Gamma$ is contained in $P$ then $\Gamma$ is contained in a Levi component $L$ of $P$. These results were proved by Serre [20] using the notion of a saturated subgroup of $G$. In general, denote $\sup_i (2\, ht\, \lambda_i)$ by $ht_GV$. If $V$ is the standard $SL(n)$-module, then $ht_{SL(n)} \wedge^i (V)= i (n-i), 1 \leq i \leq n-1$. More generally, $ht_G (V_1 \otimes V_2) = ht_G V_1+ht_G V_2$. The following theorem is the key link between low-height representations and semistability of induced bundles [9]: \noindent {\bf Theorem 2.3:} \emph{Let $E \to X$ be a semistable $G$-bundle, where $G$ is semisimple and the base $X$ is a normal projective variety. Let $f:G \to SL(n)$ be a low-height representation. Then the induced bundle $E (SL(n))$ is again semistable}. The proof is an interplay between the results of Bogomolov, Kempf, Rousseau and Kirwan in G.I.T. on one hand and the results of Serre mentioned earlier on the other. The group scheme $E(G)$ over $X$ acts on $E(SL(n)/P)$; let $\sigma$ be a section of the latter. Consider the function field $K$ of $X$ and its algebraic closure $\overline{K}$. Then $E(G)_{\overline{K}}$ acts on $E(SL(n)/P)_{\overline{K}}$, and $\sigma$ gives a $K$-rational point of the latter. There are two possibilities: \begin{enumerate} \item[1)] $\sigma$ is G.I.T. semistable.
In this case, one can easily prove that $\deg \sigma^{\#} T_{\pi} \geq 0$. \item[2)] $\sigma$ is G.I.T. unstable, i.e., not semistable. Let $P (\sigma)$ be the Kempf-Rousseau parabolic for $\sigma$, which is defined over $\overline{K}$. For $\deg \sigma^{\#} T_{\pi}$ to be $\geq 0$ it is {\it sufficient} that $P (\sigma)$ be defined over $K$. Note that since $V$ is a low-height representation of $G$, one has $p \geq h$, the Coxeter number of $G$. One then has the following ([20]). \end{enumerate} \noindent {\bf Proposition 2.4:} \emph{If $p \geq h$, there is a unique $G$-invariant isomorphism $\log: G^u \to \underline{g}_{{\rm nilp}}$, where $G^u$ is the unipotent variety of $G$ and $\underline{g}_{{\rm nilp}}$ is the nilpotent variety of $\underline{g} =$ Lie $G$}. Proposition 2.4 is used in \noindent {\bf Proposition 2.5:} \emph{Let $H$ be any semisimple group and $W$ a low-height representation of $H$. Let $W_1 \subset W$ and assume that $\exists X \in$ Lie $H$, $X$ nilpotent, such that $X \in$ Lie (Stab $(W_1))$. Then in fact one has $X \in$ Lie [Stab $(W_1)_{{\rm red}}]$}. Along with some facts from G.I.T., Proposition 2.5 enables us to prove that $P (\sigma)$ is in fact defined over $K$, thus finishing the sketch of the proof of Theorem 2.3. See also Ramanathan-Ramanan [19]. One application of low-height representations is in the proof of a conjecture of Behrend on the rationality of the canonical parabolic, or instability parabolic. If $V$ is a nonsemistable bundle on a variety $X$, then one can show that there exists a flag $V^\cdot$, $$0=V_0 \subset V_1 \subset V_2 \subset \cdots \subset V_n =V$$ of subbundles of $V$ with the properties: \begin{enumerate} \item[(1)] Each $V_i / V_{i-1}$ is semistable and $\mu \ (V_i/V_{i-1})> \mu \ (V_{i+1} /V_i), 1 \leq i \leq n-1$. \item[(2)] The flag $V^\cdot$ as in (1) is unique and infinitesimally unique; in particular, $V^\cdot$ is defined over any field of definition of $X$ and $V$.
Such a flag corresponds to a reduction to a parabolic $P$ of $GL(n)$, and properties (1) and (2) may be expressed as follows: the {\it elementary} vector bundles on $X$ associated to $P$ all have positive degree and $H^0(X, E(\underline{g}) / E(\underline{p}))=0$, where $\underline{g} = \mbox{Lie} \ GL(n)$ and $\underline{p} = \mbox{Lie} \ P$. \end{enumerate} One may ask whether there is such a canonical reduction for a nonsemistable principal $G$-bundle $E \rightarrow X$. Such a reduction was first asserted by Ramanathan [18], and then by Atiyah-Bott [1], both over ${\mathbb C}$ and both without proofs. It was Behrend [5] who first proved the existence and uniqueness of the canonical reduction to the instability parabolic in all characteristics. Further, Behrend conjectured that $H^0(X, E(\underline{g})/ E(\underline{p}))=0$. In characteristic zero, one can check that all three definitions of the instability parabolic coincide and that Behrend's conjecture is valid. In characteristic $p$, one uses low-height representations to show the equality of the three definitions and prove Behrend's conjecture [14]. \noindent {\bf Theorem 2.6:} \emph{Let $E \rightarrow X$ be a nonsemistable principal $G$-bundle in char $p$. Assume that $p > 2\dim G$. Then all three definitions coincide and further we have $H^0(X, E(\underline{g})/E(\underline{p})) =0$, where $\underline{p} = \mbox{Lie} \ P$ and $P$ is the instability parabolic}.\\ Theorem 2.6 is useful, among other things, for classifying principal $G$-bundles on ${\mathbb P}^1$ and ${\mathbb P}^2$ in characteristic $p$.\\ If $V$ is a finite-dimensional representation of a semisimple group $G$ (in any characteristic), then the G.I.T. quotient $V//G$ parametrizes the closed orbits in $V$. Now, let the characteristic be zero and let $v_0 \in V$ have a closed orbit.
Then Luna's \'etale slice theorem says that there exists a locally closed non-singular subvariety $S$ of $V$ such that $v_0 \in S$ and $S // G_{v_0}$ is isomorphic to $V//G$, locally at $v_0$, in the \'etale topology. Here $G_{v_0}$ is the stabilizer of $v_0$. The proof uses the fact that $G_{v_0}$ is a reductive subgroup of $G$ (not necessarily connected!), hence $V$ is a completely reducible $G_{v_0}$-module. In characteristic $p$, one has to assume that $V$ is a low-height representation of $G$. Then the conclusion of Luna's \'etale slice theorem is still valid: to be more precise, let $V$ be a low-height representation of $G$ and let $v_0 \in V$ have a closed orbit. Put $H =$ Stab $(v_0)$. The essential point, as in characteristic 0, is to prove the complete reducibility of $V$ over $H$. Using the low-height assumption, one shows that every nilpotent $X \in \mbox{Lie} \ H$ can be integrated to a homomorphism $G_a \rightarrow H$ with tangent vector $X$. Further, under the hypothesis of low height, one shows that $H_{\mbox{red}}$ is a saturated subgroup of $G$ and $(H_{\mbox{red}} : H^0_{\mbox{red}})$ is prime to $p$. This shows that $V$ is a completely reducible $H_{\mbox{red}}$-module. One also shows that $H_{\mbox{red}}$ is a normal subgroup of $H$ with $H/H_{\mbox{red}}$ a finite group of multiplicative type, i.e. a finite subgroup of a torus. Now the complete reducibility of $V$ over $H$ follows easily [11]. Just as in characteristic zero, one deduces the existence of a smooth $H$-invariant subvariety $S$ of $V$ such that $v_0 \in S$ and $S // H$ is locally isomorphic to $V //G$ at $v_0$. This result is used in the construction of the moduli space $M_G$ described in the next section. \section{Construction of the moduli spaces} \vskip-5mm \hspace{5mm} The moduli spaces of semistable $G$-bundles on curves were first constructed by Ramanathan over ${\mathbb C}$ [16,17], then by Faltings and Balaji-Seshadri in characteristic 0 [3,6].
There are three main points in Ramanathan's construction: \begin{enumerate} \item If $E \rightarrow X$ is semistable, then the adjoint bundle $E(\underline{g})$ is semistable. \item If $E \rightarrow X$ is polystable, then $E(\underline{g})$ is also polystable. \item A semisimple Lie algebra in char 0 is rigid. \end{enumerate} The construction of $M_G$ in char $p$ was carried out in [2,15]. We describe the method of [15] first: points (1) and (2) are handled by Theorem 2.3 and the following [11]: \noindent {\bf Theorem 3.1:} \emph{Let $E \rightarrow X$ be a polystable $G$-bundle over a curve in char $p$. Let $\sigma : G \rightarrow SL(n) = SL(V)$ be a representation such that all the exterior powers $\wedge^i V, 1 \leq i \leq n-1$, are low-height representations. Then the induced bundle $E(V)$ is also polystable}. The proof uses Luna's \'etale slice theorem in char $p$ and Theorem 2.3. Now one takes a total family $T$ of semistable $G$-bundles on $X$ and takes the good quotient of $T$ to obtain $M_G$ in char $p$. Theorem 3.1 is used to identify the closed points of $M_G$ with the isomorphism classes of polystable $G$-bundles, just as in char 0. The semistable reduction theorem is proved by lifting to characteristic 0 and then applying Ramanathan's proof (in which (3) above plays a crucial role). This construction follows Ramanathan very closely and, as is clear, one has to make low-height assumptions as in Theorem 3.1. The method of [2] follows the one in [3] with some technical and conceptual changes. One chooses an embedding $G \rightarrow SL(n)$ and a representation $W$ of $SL(n)$ such that (1) $G$ is the stabilizer of some $w_0 \in W$, and (2) $W$ is a ``low separable index representation'' of $SL(n)$, i.e., all stabilizers are reduced and $W$ is low-height over $SL(n)$. The semistable reduction theorem is proved using the theory of Bruhat-Tits. Here also suitable low-height assumptions have to be made.
\section{Singularities and specialization of the moduli spaces} \vskip-5mm \hspace{5mm} We first discuss the singularities of $M_G$, assuming throughout that $G$ is simply connected. In char 0, $M_G$ has rational singularities; this follows from Boutot's theorem. In char $p$, the following theorem due to Hashimoto [8] is relevant: \noindent {\bf Theorem 4.1:} \emph{Let $V$ be a representation of $G$ such that all the symmetric powers $S^n(V)$ have a good filtration. Then the ring of invariants $[S^\cdot(V)]^G$ is strongly $F$-regular}. Strong $F$-regularity is a notion from the theory of tight closure in commutative algebra. We just note that if a geometric domain is strongly $F$-regular then it is normal, Cohen-Macaulay, $F$-split and has ``rational-like'' singularities. Now let $t \in M_G$ be the ``worst point'', i.e., the trivial $G$-bundle on $X$. The local ring $({\mathcal O}_{M_G}, t)^\wedge$ is isomorphic to $(S^\cdot(W)// G)^\wedge$, where $W$ is the direct sum of $g$ copies of $\underline{g}$, with $G$ acting diagonally. If $p$ is a good prime for $G$, then Hashimoto's theorem implies that ${\mathcal O}_{M_G,t}$ is strongly $F$-regular. The other points of $M_G$ are not so well understood. This would require a detailed study of the automorphism groups of polystable bundles, both in char 0 and char $p$, and of the invariants of their slice representations. This is necessary also to study the specialization problem, i.e., whether $M_G$ in char 0 specializes to $M_G$ in char $p$. One has to show that the invariants of the slice representations in char 0 specialize to the invariants in char $p$. However, for $G = SL(n)$, the situation is much simpler. One can write down the automorphism group of a polystable bundle and its representation on the local moduli space. Consequently, one expects the moduli spaces to specialize correctly and the local rings of $M_G$ to be strongly $F$-regular in all positive characteristics. We briefly discuss Pic $M_G$ in char 0.
It follows from [4,10] that $M_G$ has the following properties in char 0: \begin{enumerate} \item Pic $M_G \simeq {\mathbb Z}$. \item $M_G$ is a normal, projective, Gorenstein variety with rational singularities and with $-K$ ample. \end{enumerate} Now let $X$ be a normal, Cohen-Macaulay variety in char 0. It is proved in [13], in response to a conjecture of Karen Smith, that if $X$ has rational singularities, then the reduction of $X$ mod $p$ is $F$-rational for all large $p$. This result, together with (1) and (2) above, implies that $M_G$ reduced mod $p$ is $F$-split for all large $p$. We cannot give effective bounds on the primes involved. One partial result is known in this direction [12]. \noindent {\bf Acknowledgement:} I would like to thank my colleagues S.~Ilangovan, A.J.~Parame\-swaran and S.~Subramanian for their help in preparing this report and T.T.~Nayya and H for constant help and encouragement. \label{lastpage} \end{document}
\begin{document} \begin{center} {\LARGE \bf The uniform normal form} \\ \mbox{} \\ {\LARGE \bf of a linear mapping} \end{center} \begin{center} \mbox{} \\ Richard Cushman\footnotemark \mbox{} \\ Dedicated to my friend and colleague Arjeh Cohen on his retirement \end{center} \footnotetext{Department of Mathematics and Statistics, University of Calgary, \\ Calgary, Alberta, Canada, T2N 1N4.} Let $V$ be a finite dimensional vector space over a field $\mathrm{k}$ of characteristic $0$. Let $A:V \rightarrow V$ be a linear mapping of $V$ into itself with characteristic polynomial ${\chi }_A$. The goal of this paper is to give a normal form for $A$, which yields a better description of its structure than the classical companion matrix. This normal form does not use a factorization of ${\chi }_A$ and requires only operations in the field $\mathrm{k}$ to compute. \section{Semisimple linear mappings} \label{sec1} We begin by giving a well known criterion to determine whether the linear mapping $A$ is semisimple, that is, whether every $A$-invariant subspace of $V$ has an $A$-invariant complementary subspace. Suppose that we can factor ${\chi }_A$, that is, find monic irreducible polynomials ${ \{ {\pi }_i \} }^m_{i=1}$, which are pairwise relatively prime, such that ${\chi }_A = {\prod }^m_{i=1} {\pi }^{n_i}_i$, where $n_i \in {\mathbb{Z}}_{\ge 1}$. Then \begin{displaymath} {\chi}'_A =\sum^m_{j=1} (n_j {\pi }^{n_j-1}_j {\pi }'_j) {\prod}_{i\ne j}{\pi }^{n_i}_i = \big( {\prod }^m_{\ell =1} {\pi }^{n_{\ell }-1}_{\ell } \big) \big( \sum^m_{j=1} (n_j {\pi }'_j) \prod_{i\ne j}{\pi }_i \big) . \end{displaymath} Therefore the greatest common divisor of ${\chi }_A$ and its derivative ${\chi }'_A$ is the polynomial $d= {\prod }^m_{\ell =1} {\pi }^{n_{\ell }-1}_{\ell }$. The polynomial $d$ can be computed using the Euclidean algorithm.
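Both steps, the gcd with the derivative and the division, require only field operations and can be carried out without factoring, for instance in \texttt{sympy} (a sketch; the example polynomial is ours):

```python
import sympy as sp

x = sp.symbols('x')

# chi = (x^2+1)^2 (x-1)^3 : irreducible factors pi_1 = x^2+1, pi_2 = x-1,
# but we never tell the computation that.
chi = sp.expand((x**2 + 1)**2 * (x - 1)**3)

d = sp.gcd(chi, sp.diff(chi, x))   # d = prod pi_l^{n_l - 1}, via Euclid
p = sp.quo(chi, d, x)              # square-free part p = prod pi_l

assert sp.expand(d - (x**2 + 1)*(x - 1)**2) == 0
assert sp.expand(p - (x**2 + 1)*(x - 1)) == 0
```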
Thus the square free factorization of ${\chi }_A$ is the polynomial $p = {\prod }^m_{\ell =1} {\pi }_{\ell } = {\chi }_A/d $, which can be computed without knowing a factorization of ${\chi }_A$. The goal of the next discussion is to prove \noindent \textbf{Claim 1.1} The linear mapping $A:V \rightarrow V$ is semisimple if $p(A) =0 $ on $V$. Let $p = {\prod }^m_{j =1} {\pi }_j$ be the square free factorization of the characteristic polynomial ${\chi }_A $ of $A$. We now decompose $V$ into $A$-invariant subspaces. For each $1\le j \le m$ let $V_j = \{ v \in V \mid {\pi }_j (A)v = 0 \} $. Then $V_j$ is an $A$-invariant subspace of $V$. For if $v \in V_j$, then ${\pi }_j(A)Av = A{\pi }_j(A)v =0$, that is, $Av \in V_j$. The following argument shows that $V = {\bigoplus }^m_{j=1} V_j$. Because the polynomials ${\prod }_{i\ne j}{\pi}_i$, $1 \le j \le m$, have greatest common divisor $1$, there are polynomials $f_j$, $1 \le j \le m$ such that $1 = \sum^m_{j=1} f_j \, \big( \prod_{i\ne j}{\pi }_i \big)$. Therefore every vector $v \in V$ can be written as \begin{displaymath} v = \sum^m_{j=1} f_j(A) \, \big( \prod_{i\ne j}{\pi }_i(A)v \big) = \sum^m_{j=1}f_j(A)v_j. \end{displaymath} Since ${\pi }_j(A) \big( \prod_{i\ne j}{\pi }_i(A)v \big) = p(A)v =0$, the vector $v_j \in V_j$. Therefore $V = \sum^m_{j=1}V_j$. If for $i \ne j$ we have $w \in V_i \cap V_j$, then for some polynomials $F_i$ and $G_j$ we have $ 1 = F_i {\pi }_i + G_j {\pi }_j$, because ${\pi }_i$ and ${\pi }_j$ are relatively prime. Consequently, $w = F_i(A){\pi }_i(A)w +G_j(A){\pi }_j(A)w =0$. So $V = \sum^m_{j=1} \oplus V_j$. $\square $ We now prove \noindent \textbf{Lemma 1.2} For each $1 \le j \le m$ there is a basis of the $A$-invariant subspace $V_j$ such that the matrix of $A$ is block diagonal. \noindent \textbf{Proof.} Let $W$ be a minimal dimensional proper $A$-invariant subspace of $V_j$ and let $w$ be a nonzero vector in $W$.
Then there is a minimal positive integer $r$ such that $A^r w \in {\mathop{\rm span}\nolimits }_{\mathrm{k} }\{ w, \, Aw, \, \ldots , \, A^{r-1}w \} = U$. We assert: the vectors ${\{ A^iw \} }^{r-1}_{i=0}$ are linearly independent. Suppose that there are $a_i \in \mathrm{k} $ for $0 \le i \le r-1$, not all zero, such that $0 = a_0w + a_1Aw + \cdots + a_{r-1}A^{r-1}w$. Let $t \le r-1$ be the largest index such that $a_t \ne 0$. So $A^tw = -\frac{a_{t-1}}{a_t} A^{t-1}w - \cdots - \frac{a_0}{a_t} w$, that is, $A^t w \in {\mathop{\rm span}\nolimits }_{\mathrm{k} } \{ w, \, \ldots , \, A^{t-1}w \} $ and $t <r $. This contradicts the definition of the integer $r$. Thus the index $t$ does not exist. Hence $a_i =0$ for every $0 \le i \le r-1$, that is, the vectors ${\{ A^iw \} }^{r-1}_{i=0}$ are linearly independent. The subspace $U$ of $W$ is $A$-invariant, for \begin{align} A(\sum^{r-1}_{j=0} b_j A^jw) & = \sum^{r-2}_{j=0} b_j A^{j+1}w +b_{r-1}A^r w, \, \, \, \mbox{where $b_j \in \mathrm{k}$} \notag \\ &\hspace{-.5in} = \sum^{r-1}_{j=1}b_{j-1}A^jw +b_{r-1}(\sum^{r-1}_{\ell =0} a_{\ell}A^{\ell }w), \quad \mbox{since $A^rw \in U$} \notag \\ & \hspace{-.5in} = b_{r-1}a_0w +\sum^{r-1}_{j=1}(b_{j-1}+b_{r-1}a_j)A^jw \in U. \notag \end{align} Next we show that there is a monic polynomial $\mu $ of degree $r$ such that $\mu (A) =0$ on $U$. With respect to the basis ${\{ A^iw \} }^{r-1}_{i=0}$ of $U$ we can write $A^rw = -a_0w - \cdots - a_{r-1}A^{r-1}w$. So $\mu (A)w =0$, where \begin{equation} \mu (\lambda ) = a_0 +a_1\lambda + \cdots + a_{r-1}{\lambda }^{r-1} +{\lambda }^r . \label{eq-twos1} \end{equation} Since $\mu (A)A^iw = A^i(\mu (A)w) =0$ for every $0 \le i \le r-1$, it follows that $\mu (A) = 0$ on $U$. By the minimality of the dimension of $W$ the subspace $U$ cannot be proper. But $U \ne \{ 0 \}$, since $w \in U$. Therefore $U = W$. Since $U \subseteq V_j$, we obtain ${\pi }_j(A) u =0 $ for every $u \in U$.
Because ${\pi }_j$ is irreducible, the preceding statement shows that ${\pi }_j$ is the minimum polynomial of $A$ on $U$. Thus ${\pi }_j$ divides $\mu $. Suppose that $\deg {\pi }_j =s < \deg \mu = r$. Then $A^s u^{\prime } \in {\mathop{\rm span}\nolimits }_k \{ u^{\prime }, \, \ldots \, A^{s-1}u^{\prime } \} = Y$ for some nonzero vector $u^{\prime }$ in $U$. By minimality, $Y = U$. But $\dim Y = s < \dim U = r$, which is a contradiction. Thus ${\pi }_j = \mu $. Note that the matrix of $A|U$ with respect to the basis ${\{ A^iw \} }^{r-1}_{i=0}$ is the $r \times r$ companion matrix \begin{equation} C_r = \mbox{\footnotesize $ \left( \begin{array}{lllcc} 0 & \cdots & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ \vdots & 1 & \ddots & \vdots & \vdots \\ \vdots & & \ddots & 0 & -a_{r-2} \\ 0 & \cdots & \cdots & 1 & -a_{r-1} \end{array} \right) $,} \label{eq-twostars1} \end{equation} where ${\pi }_j = a_0 +a_1\lambda + \cdots + a_{r-1}{\lambda }^{r-1} +{\lambda }^r$. Suppose that $U \ne V_j$. Then there is a nonzero vector $w' \in V_j \setminus U$. Let $r'$ be the smallest positive integer such that $A^{r'}w' \in {\mathop{\rm span}\nolimits }_{\mathrm{k} } \{ w', \, Aw', \, \ldots , \,$ $A^{r'-1}w' \} = U'$. Then by the argument in the preceding paragraph, $U'$ is a minimal $A$-invariant subspace of $V_j$ of dimension $r' =r$, whose minimal polynomial is ${\pi }_j$. Suppose that $U' \cap U \ne \{ 0 \} $. Then $U' \cap U$ is a proper $A$-invariant subspace of $U'$. By minimality $U' \cap U = U'$, that is, $U \subseteq U'$. But $r = \dim U = \dim U' = r'$. So $U = U'$. Thus $w'\in U'$ and $w' \notin U$, which is a contradiction. Therefore $U' \cap U = \{ 0 \} $. If $U \oplus U' \ne V_j$, we repeat the above argument. 
Using $U \oplus U'$ instead of $U$, after a finite number of repetitions we have $V_j = \sum^{\ell }_{i=1}\oplus U_i$, where for every $1 \le i \le \ell $ the subspace $U_i$ of $V_j$ is $A$-invariant with basis ${\{ A^k u_i \} }^{r-1}_{k =0}$ and the minimal polynomial of $A|U_i$ is ${\pi }_j$. With respect to the basis ${\{ A^k u_i \}}^{(\ell , r-1)}_{(i,k) =(1,0)}$ of $V_j$ the matrix of $A$ is $\mathrm{diag}(C_r, \ldots , C_r)$, which is block diagonal. $\square $ For each $1 \le j \le m $ applying lemma 1.2 to $V_j$ and using the fact that $V = \sum^m_{j=1} \oplus V_j$ we obtain \noindent \textbf{Corollary 1.3} There is a basis of $V$ such that the matrix of $A$ is block diagonal. \noindent \textbf{Proof of claim 1.1} Suppose that $U$ is an $A$-invariant subspace of $V$. Then by corollary 1.3, there is a basis ${\varepsilon }_U$ of $U$ such that the matrix of $A|U$ is block diagonal. By corollary 1.3 there is a basis ${\varepsilon }_V$ of $V$ which extends the basis ${\varepsilon }_U$ such that the matrix of $A$ on $V$ is block diagonal. Let $W$ be the subspace of $V$ with basis ${\varepsilon }_W = {\varepsilon }_V \setminus {\varepsilon }_U$. The matrix of $A|W$ is block diagonal. Therefore $W$ is $A$-invariant and $V = U \oplus W$ by construction. Consequently, $A$ is semisimple. $\square $ \section{The Jordan decomposition of ${\bf A}$} \label{sec2} Here we give an algorithm for finding the Jordan decomposition of the linear mapping $A$, that is, we find commuting semisimple and nilpotent linear maps $S$ and $N$ whose sum is $A$. The algorithm we present uses only the characteristic polynomial ${\chi }_A$ of $A$ and does {\em not} require that we know {\em any} of its factors. Our argument follows that of \cite{burgoyne-cushman}. Let $p$ be the square free factorization of ${\chi }_A$. Let $M$ be the smallest positive integer such that ${\chi }_A$ divides $p^M$. Then $M \le \deg {\chi }_A$. Assume that $\deg {\chi }_A \ge 2$, for otherwise $S = A$.
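For orientation, here is a compact computational sketch of what this section produces, in a standard Newton-iteration form rather than the exact polynomial recursion developed below: with $p$ the square free factorization of ${\chi }_A$ and $h$ a B\'ezout cofactor satisfying $h\,p^{(1)} \equiv 1 \pmod{p}$, the iteration $S \mapsto S - h(S)\,p(S)$ reaches the semisimple part in finitely many steps, since ${p(A)}^M = 0$. The sketch uses sympy, and the test matrix is an illustrative choice, not one from the text:

```python
import sympy as sp

x = sp.symbols('x')

def poly_at(expr, M):
    """Evaluate the polynomial expr in x at the square matrix M (Horner)."""
    R = sp.zeros(M.rows, M.rows)
    for c in sp.Poly(expr, x).all_coeffs():
        R = R * M + c * sp.eye(M.rows)
    return R

def jordan_decomposition(A):
    """Return (S, N) with A = S + N, S semisimple, N nilpotent, SN = NS."""
    chi = A.charpoly(x).as_expr()
    p = sp.quo(chi, sp.gcd(chi, sp.diff(chi, x)), x)  # square-free part
    # extended Euclid: s*p + t*p' = 1, so t inverts p' modulo p
    s, t, one = sp.gcdex(p, sp.diff(p, x))
    S = A
    while poly_at(p, S) != sp.zeros(A.rows, A.rows):  # Newton step
        S = S - poly_at(t, S) * poly_at(p, S)
    return S, A - S

A = sp.Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 2]])  # illustrative matrix
S, N = jordan_decomposition(A)
```

For this matrix the loop stops after one step, with $S = \mathrm{diag}(1,1,2)$ and $N$ the strictly upper triangular remainder; only exact field operations are used, matching the remark that no factorization of ${\chi }_A$ is needed.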
Write \begin{equation} S = A + \sum^{M-1}_{j=1}r_j(A){p(A)}^j, \label{eq-ones2} \end{equation} where $r_j$ is a polynomial whose degree is less than the degree of $p$. From the fact that ${\chi }_A$ divides $p^M$, it follows that ${p(A)}^M =0$. We want to determine $S$ in the form (\ref{eq-ones2}) so that \begin{equation} p(S) =0. \label{eq-twos2} \end{equation} From claim 1.1 it follows that $S$ is semisimple. We have to find the polynomials $r_j$ in (\ref{eq-ones2}) so that equation (\ref{eq-twos2}) holds. We begin by using the Taylor expansion of $p$. If (\ref{eq-ones2}) holds, then \begin{align} p(S) & = p\Big( A + \sum^{M-1}_{j=1}r_j(A){p(A)}^j \Big) \notag \\ & = p(A) + \sum^{M-1}_{i=1} p^{(i)}(A) \Big( \sum^{M-1}_{j=1}r_j(A){p(A)}^j \Big)^i, \notag \\ & \hspace{.5in}\parbox[t]{3.5in}{where $p^{(i)}$ is $\frac{1}{i!}$ times the ${\rm i}^{\rm th}$ derivative of $p$} \notag \\ & = p(A) + \sum^{M-1}_{i=1}\sum^{M-1}_{k=1} c_{k,i}\, {p(A)}^k p^{(i)}(A). \label{eq-twostars2} \end{align} Here $c_{k,i}$ is the coefficient of $z^k$ in $(r_1z + \cdots +\, r_{M-1}z^{M-1})^i$. Note that $c_{k,i} =0$ if $k<i$. A calculation shows that when $k \ge i$ we have \begin{equation} c_{k,i} = \sum_{\stackrel{{\alpha }_1 + \cdots + {\alpha}_{k-1}=i}{{\alpha }_1 + 2{\alpha }_2 + \cdots + (k-1){\alpha }_{k-1}= k}} \frac{i!}{{\alpha }_1! \cdots {\alpha }_{k-1}!} r^{{\alpha }_1}_1 \cdots r^{{\alpha }_{k-1}}_{k-1} . \label{eq-threestars2} \end{equation} Interchanging the order of summation in (\ref{eq-twostars2}) we get \begin{displaymath} p(S) = p(A) + \sum^{M-1}_{i=1}\Big( r_i(A)p^{(1)}(A) + e_i(A) \Big) {p(A)}^i, \end{displaymath} where $e_1=0$ and for $i \ge 2$ we have $e_i = \sum^{i}_{j=2} c_{i,j} p^{(j)}$. Note that $e_i$ depends on $r_1, \ldots , r_{i-1}$, because of (\ref{eq-threestars2}). Suppose that we can find polynomials $r_i$ and $b_i$ such that \begin{equation} r_ip^{(1)} + e_i = b_i p -b_{i-1}, \label{eq-threes2} \end{equation} for every $1 \le i \le M-1$. Here $b_0 =1$.
Then \begin{displaymath} \sum^{M-1}_{i=1}\big( r_i(A)p^{(1)}(A) + e_i(A) \big) {p(A)}^i = \sum^{M-1}_{i=1} \big( b_i(A)p(A) - b_{i-1}(A) \big) {p(A)}^i \, = \, -p(A), \end{displaymath} since ${p(A)}^M =0$ and $b_0 =1$, which implies $p(S) =0$, see (\ref{eq-twostars2}). We now construct polynomials $r_i$ and $b_i$ so that (\ref{eq-threes2}) holds. We do this by induction. Since $p$ is square free, the polynomials $p$ and $p^{(1)}$ have no common nonconstant factors, so their greatest common divisor is the constant polynomial $1$. Therefore by the Euclidean algorithm there are polynomials $g$ and $h$ with the degree of $h$ being less than the degree of $p$ such that \begin{equation} gp - hp^{(1)} = 1. \label{eq-fours2} \end{equation} \par Let $r_1 =h$ and $b_1 =g$. Using the fact that $b_0=1$ and $e_1=0$, we see that equation (\ref{eq-fours2}) is the same as equation (\ref{eq-threes2}) when $i=1$. Let $d_1 = 0$ and $q_0 = q_1 =0$. Now suppose that $n \ge 2$. By induction suppose that the polynomials $r_1, \ldots \, , r_{n-1}$, $e_1, \ldots , e_{n-1}$, $q_1, \ldots \, , q_{n-1}$ and $b_1, \ldots \, , b_{n-1}$ are known and that $r_i$ and $b_i$ satisfy (\ref{eq-threes2}) for every $1 \le i \le n-1$. Using the fact that the polynomials $r_1, \ldots , r_{n-1}$ are known, from formula (\ref{eq-threestars2}) we can calculate the polynomial $e_n = \sum^n_{j=2}c_{n,j}\, p^{(j)}$. For $n \ge 2$ define the polynomial $d_n$ by \begin{equation} d_n = q_{n-1} + h \sum^n_{i=1} g^{n-i} e_i. \label{eq-fives2} \end{equation} Note that the polynomials $q_{n-1}$, $g = b_1 $, $h= r_1$, and $e_i$ for $1 \le i \le n$ are already known: all but $e_n$ by the induction hypothesis, and $e_n$ from the preceding calculation. Thus the right hand side of (\ref{eq-fives2}) is known and hence so is $d_n$. Now define the polynomials $q_n$ and $r_n$ by dividing $d_n$ by $p$ with remainder, namely \begin{equation} d_n = q_n p + r_n . \label{eq-sixs2} \end{equation} Clearly, $q_n$ and $r_n$ are now known.
Next for $n\ge 2$ define the polynomial $b_n$ by \begin{equation} b_n = -p^{(1)}q_n + g\sum^n_{i=1}g^{n-i}e_i. \label{eq-sevens2} \end{equation} Since the polynomials $p^{(1)}$, $q_n$, $g = b_1$, and $e_i$ for $1 \le i \le n$ are known, the polynomial $b_n$ is known. We now show that equation (\ref{eq-threes2}) holds. \noindent \textbf{Proof.} We have already checked that (\ref{eq-threes2}) holds when $n=1$. By induction we assumed that it holds for every $1 \le i \le n-1$. Using the definition of $b_n$ (\ref{eq-sevens2}) and the induction hypothesis we compute \begin{align} b_n p - b_{n-1} & = \Big[ -p^{(1)}pq_n + pg\sum^n_{i=1} g^{n-i}e_i \Big] - \Big[ -p^{(1)}q_{n-1} + g\sum^{n-1}_{i=1}g^{n-1-i}e_i \Big] \notag \\ & \hspace{-.5in} = - p^{(1)}(q_n p - q_{n-1}) + pg\sum^n_{i=1}g^{n-i}e_i -\sum^{n-1}_{i=1}g^{n-i}e_i \notag \\ & \hspace{-.5in} = -p^{(1)}(-r_n + d_n-q_{n-1}) +(hp^{(1)}+1)\sum^n_{i=1}g^{n-i}e_i - \sum^{n-1}_{i=1}g^{n-i}e_i, \notag \\ & \hspace{.5in} \mbox{using (\ref{eq-fours2}) and (\ref{eq-sixs2})} \notag \\ & \hspace{-.5in} = p^{(1)}r_n -hp^{(1)}\sum^n_{i=1}g^{n-i}e_i + hp^{(1)}\sum^n_{i=1}g^{n-i}e_i + \sum^n_{i=1}g^{n-i}e_i - \sum^{n-1}_{i=1}g^{n-i}e_i, \notag \\ &\hspace{.5in} \mbox{using (\ref{eq-fives2})} \notag \\ & \hspace{-.5in}= p^{(1)}r_n +e_n. \tag*{$\square $} \end{align} This completes the construction of the polynomial $r_n$ in (\ref{eq-ones2}). Repeating this construction until $n = M-1$ we have determined the semisimple part $S$ of $A$. The commuting nilpotent part of $A$ is $N = A-S$. $\square $ \section{Uniform normal form} \label{sec3} In this section we give a description of the uniform normal form of a linear map $A$ of $V$ into itself. We assume that the Jordan decomposition of $A$ into its commuting semisimple and nilpotent summands $S$ and $N$, respectively, is known. \subsection{Nilpotent normal form} \label{sec3subsec1} In this subsection we find the Jordan normal form for a nilpotent linear transformation $N$.
Recall that a linear transformation $N:V \rightarrow V$ is said to be {\sl nilpotent of index} $n$ if there is an integer $n \ge 1$ such that $N^{n-1} \not = 0$ but $N^n = 0$. Note that the index of nilpotency $n$ need not be equal to $\dim V$. Suppose that for some positive integer ${\ell } \ge 1$ there is a nonzero vector $v$, which lies in $\ker N^{\ell } \setminus \ker N^{{\ell }-1}$. The set of vectors $\{ v, Nv, \ldots \, , N^{{\ell }-1}v \} $ is a \emph{Jordan chain} of \emph{length} ${\ell }$ with \emph{generating vector} $v$. The space $V^{\ell }$ spanned by the vectors in a given Jordan chain of length ${\ell }$ is an $N$-\emph{cyclic subspace} of $V$. Because $N^{\ell }v =0$, the subspace $V^{\ell }$ is $N$-invariant. Since $\ker N|V^{\ell } = {\mathop{\rm span}\nolimits }_{\mathrm{k} } \{ N^{{\ell }-1}v \} $, the mapping $N|V^{\ell }$ has exactly one eigenvector corresponding to the eigenvalue $0$. \noindent \textbf{Claim 3.1.1} The vectors in a Jordan chain are linearly independent. \noindent \textbf{Proof.} Suppose not. Then $0 = \sum^{{\ell }-1}_{i=0} {\alpha }_i\, N^iv$, where not every ${\alpha }_i \in \mathrm{k} $ is zero. Let $i_0$ be the smallest index for which ${\alpha }_{i_0} \not = 0$. Then \begin{equation} 0 = {\alpha }_{i_0}\, N^{i_0}v + \cdots \, + {\alpha }_{{\ell }-1} \, N^{{\ell }-1}v . \label{eq-sec4ss1one} \end{equation} Applying $N^{{\ell }-1-i_0}$ to both sides of (\ref{eq-sec4ss1one}) gives $0 = {\alpha }_{i_0}N^{{\ell }-1}v$. By hypothesis $v\not \in \ker N^{{\ell }-1}$, that is, $N^{{\ell }-1}v \ne 0$. Hence ${\alpha }_{i_0} =0$. This contradicts the definition of the index $i_0$. Therefore ${\alpha }_i =0$ for every $0 \le i \le \ell -1$. Thus the vectors ${\{ N^iv \}}^{\ell -1}_{i=0}$, which span the $N$-cyclic subspace $V^{\ell }$, are linearly independent.
$\square $ With respect to the \emph{standard basis} $\{ N^{{\ell }-1}v, N^{\ell-2}v, \ldots \, , Nv, v \}$ of $V^{\ell }$ the matrix of $N|V^{\ell }$ is the ${\ell } \times {\ell }$ matrix \begin{displaymath} \mbox{{\footnotesize $\left( \begin{array}{ccccc} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 &\ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0 \\ \vdots & \vdots & \vdots & \ddots & 1 \\ 0 & 0 & \cdots & \cdots & 0 \\ \end{array} \right) $},} \end{displaymath} which is a \emph{Jordan block} of size ${\ell }$. We want to show that $V$ can be decomposed into a direct sum of $N$-cyclic subspaces. In fact, we show that there is a basis of $V$, whose elements are given by a dark dot $\bullet $ or an open dot $\circ $ in the diagram below such that the arrows give the action of $N$ on the basis vectors. Such a diagram is called the \emph{Young diagram of} $N$. \begin{displaymath} \begin{array}{l} \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\vdots}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{circ}\\ \phantom{circ}\\ \phantom{x}\\ \end{array} \hspace{-1cm} \begin{array}{l} \begin{array}{l} \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\vdots}\\ \phantom{\bullet}\\ \phantom{x}\\ \end{array} \\ \phantom{\uparrow}\\ \phantom{circ}\\ \end{array} \hspace{-0.5cm} \begin{array}{l} \begin{array}{l} \phantom{\bullet}\\ \end{array}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\vdots}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{circ}\\ \end{array} \hspace{-0.4cm} \begin{array}{cllllllll} \bullet&\bullet&\bullet&\bullet&\circ&\circ&\circ\\ \uparrow&\uparrow&\uparrow&\uparrow\\ \bullet&\bullet&\bullet&\bullet\\ \uparrow&\uparrow&\uparrow&\uparrow\\ \bullet&\bullet&\bullet&\circ\\ \vdots&\vdots&\vdots\\ \bullet&\bullet&\circ&\\ \uparrow&\uparrow\\ \circ&\circ \end{array} \end{displaymath} 
\par \noindent \hspace{1.25in}\parbox[t]{4.5in}{Figure 3.1.1. The Young diagram of $N$.} Note that the columns of the Young diagram of $N$ are Jordan chains with generating vector given by an open dot. The black dots form a basis for the image of $N$, whereas the open dots form a basis for a complementary subspace in $V$. The dots on or above the ${\ell }^{\rm th}$ row form a basis for $\ker N^{\ell }$ and the black dots in the first row form a basis for $\ker N \cap \mathrm{im}\, N$. Let $r_{\ell }$ be the number of dots in the ${\ell }^{\rm th}$ row. Then $r_{\ell } = \dim \ker N^{\ell } - \dim \ker N^{\ell -1}$. Thus the Young diagram of $N$ is unique. \noindent \textbf{Claim 3.1.2} There is a basis of $V$ that realizes the Young diagram of $N$. \noindent \textbf{Proof}. Our proof follows that of Hartl \cite{hartl}. We use induction on the dimension of $V$. Since $\dim \ker N > 0$, it follows that $\dim \mathrm{im}\, N < \dim V$. Thus by the induction hypothesis, we may suppose that $\mathrm{im}\, N$ has a basis which is the union of $p$ Jordan chains $\{ w_i, Nw_i, \ldots \, , N^{m_i}w_i \} $, each of length $m_i+1$. The vectors ${\{ N^{m_i}w_i \} }^p_{i=1}$ lie in $\mathrm{im}\, N \cap \ker N$ and in fact form a basis of this subspace. Since $\ker N$ may be larger than $\mathrm{im}\, N \cap \ker N$, choose vectors $\{ y_1, \ldots \, , y_q \} $, where $q$ is a nonnegative integer, such that $\{ N^{m_1}w_1, \ldots \, , N^{m_p}w_p, $ $y_1, \ldots \, , y_q \} $ form a basis of $\ker N$. Since $w_i \in \mathrm{im}\, N$ there is a vector $v_i$ in $V$ such that $w_i = Nv_i$. We assert that the $p$ Jordan chains \begin{displaymath} \{ v_i, Nv_i, \ldots \, , N^{m_i+1}v_i \} = \{ v_i, w_i, Nw_i, \ldots \, , N^{m_i}w_i \} \end{displaymath} each of length $m_i +2$ together with the $q$ vectors $\{ y_j \} $, which are Jordan chains of length $1$, form a basis of $V$. To see that they span $V$, let $v \in V$. Then $Nv \in \mathrm{im}\, N$.
Using the basis of $\mathrm{im}\, N$ given by the induction hypothesis, we may write \begin{displaymath} Nv = \sum^p_{i=1}\sum^{m_i}_{\ell =0} {\alpha }_{i\ell } N^{\ell } w_i \, = \, N \Big( \sum^p_{i=1}\sum^{m_i}_{\ell =0} {\alpha }_{i\ell } N^{\ell }v_i \Big). \end{displaymath} Consequently, \begin{displaymath} v - \sum^p_{i=1}\sum^{m_i}_{\ell =0} {\alpha }_{i\ell } N^{\ell } v_i = \sum^p_{i=1}{\beta }_i N^{m_i+1}v_i + \sum^q_{\ell =1}{\gamma }_{\ell } y_{\ell }, \end{displaymath} since the vectors \begin{displaymath} \{ N^{m_1}w_1, \ldots \, , N^{m_p}w_p, y_1, \ldots \, , y_q \} = \{ N^{m_1+1}v_1, \ldots \, , N^{m_p+1}v_p, y_1, \ldots \, , y_q \} \end{displaymath} form a basis of $\ker N$. Linear independence is a consequence of the following counting argument. The number of vectors in the Jordan chains is \begin{align} \sum^p_{i=1}(m_i + 2) +q = \sum^p_{i=1}(m_i+1) +(p+q) \, = \, \dim \mathrm{im}\, N + \dim \ker N \, = \, \dim V. \tag*{$\square $} \end{align} We note that finding the generating vectors of the Young diagram of $N$, or equivalently the Jordan normal form of $N$, involves solving linear equations with coefficients in the field $\mathrm{k} $ and thus only operations in the field $\mathrm{k} $. \subsection{Some facts about $S$} \label{sec3subsec2} We now study the semisimple part $S$ of $A$. \noindent \textbf{Lemma 3.2.1} $V = \ker S \oplus \mathrm{im}\, S$. Moreover the characteristic polynomial ${\chi }_S(\lambda )$ of $S$ is the product of ${\lambda }^n$, where $n = \dim \ker S$, and ${\chi}_{S|\mathrm{im}\, S}$, the characteristic polynomial of $S|\mathrm{im}\, S$. Note that ${\chi }_{S|\mathrm{im}\, S} (0) \ne 0$. \noindent \textbf{Proof.} $\ker S$ is an $S$-invariant subspace of $V$. Since $Sv =0$ for every $v \in $ $\ker S$, the characteristic polynomial of $S|\ker S$ is ${\lambda }^n$. Because $S$ is semisimple, there is an $S$-invariant subspace $Y$ of $V$ such that $V = \ker S \oplus Y$.
The linear mapping $S|Y:Y \rightarrow Y$ is invertible, for if $Sy=0$ for some $y \in Y$, then $S(y+u) =0$ for every $u \in \ker S$. Therefore $y+u \in \ker S$, which implies that $y \in \ker S \cap Y = \{ 0 \} $, that is, $y=0$. So $S|Y$ is invertible. If $y \in Y$, then $y = S\big( (S|Y)^{-1}y \big) \in \mathrm{im}\, S$. Thus $Y \subseteq \mathrm{im}\, S$. But $\dim \mathrm{im}\, S = \dim V - \dim \ker S = \dim Y$. So $Y = \mathrm{im}\, S$. Since $\ker S \cap \mathrm{im}\, S = \{0 \} $, we see that $\lambda $ does not divide the polynomial ${\chi }_{S|\mathrm{im}\, S}(\lambda )$. Consequently, ${\chi }_{S|\mathrm{im}\, S}(0) \ne 0$. Since $V = \ker S \oplus \mathrm{im}\, S$, where $\ker S$ and $\mathrm{im}\, S$ are $S$-invariant subspaces of $V$, we obtain \begin{align} {\chi }_S(\lambda ) & = {\chi }_{\ker S}(\lambda ) \cdot {\chi }_{S|\mathrm{im}\, S}(\lambda ) = {\lambda }^n{\chi }_{S|\mathrm{im}\, S}(\lambda ). \tag*{$\square $} \end{align} \noindent \textbf{Lemma 3.2.2} The subspaces $\ker S$ and $\mathrm{im}\, S$ are $N$-invariant and hence $A$-invariant. \noindent \textbf{Proof.} Suppose that $x \in \mathrm{im}\, S$. Then there is a vector $v \in V$ such that $x = Sv$. So $Nx = N(Sv) = S(Nv) \in \mathrm{im}\, S$. In other words, $\mathrm{im}\, S$ is an $N$-invariant subspace of $V$. Because $\mathrm{im}\, S$ is also $S$-invariant and $A = S+N$, it follows that $\mathrm{im}\, S$ is an $A$-invariant subspace of $V$. Suppose that $x \in \ker S$, that is, $Sx =0$. Then $S(Nx) = N(Sx) =0$. So $Nx \in \ker S$. Therefore $\ker S$ is an $N$-invariant and hence $A$-invariant subspace of $V$. $\square $ \subsection{Description of uniform normal form} \label{sec3subsec3} We now describe the uniform normal form of the linear mapping $A:V \rightarrow V$, using both its semisimple and nilpotent parts.
Since $A|\ker S = N|\ker S$, we can apply the discussion of \S 3.1 to obtain a basis of $\ker S$ which realizes the Young diagram of $N|\ker S$, which, say, has $r$ columns. For $1 \le {\ell } \le r$ let $F_{q_{\ell }}$ be the space spanned by the generating vectors of Jordan chains of $N|\ker S$ in $\ker S$ of length $m_{\ell }$. By lemma 3.2.1 $A|\mathrm{im}\, S$ is a linear mapping of $\mathrm{im}\, S$ into itself with invertible semisimple part $S|\mathrm{im}\, S$ and commuting nilpotent part $N|\mathrm{im}\, S$. Using the discussion of \S 3.1, for every $r+1 \le {\ell } \le p$ let $F_{q_{\ell}}$ be the space spanned by the generating vectors of the Jordan chains of $N|\mathrm{im}\, S$ in $\mathrm{im}\, S$ of length $m_{\ell }$, which occur in the remaining $p-r$ columns of the Young diagram of $N|\mathrm{im}\, S$. Now we prove \noindent \textbf{Claim 3.3.1} For each $1 \le {\ell } \le p$ the space $F_{q_{\ell }}$ is $S$-invariant. \noindent \textbf{Proof.} Let $v^{\ell } \in F_{q_{\ell }}$. Then $\{ v^{\ell }, \, N v^{\ell }, \ldots , N^{m_{{\ell }}-1}v^{\ell } \} $ is a Jordan chain in the Young diagram of $N$ of length $m_{\ell }$ with generating vector $v^{\ell }$. For each $1 \le {\ell } \le r$ we have $F_{q_{\ell }} \subseteq \ker S$. So trivially $F_{q_{\ell }}$ is $S$-invariant, because $S =0$ on $F_{q_{\ell }}$. Now suppose that $r+1 \le {\ell } \le p$. Then $F_{q_{\ell }} \subseteq \mathrm{im}\, S $ and $S|\mathrm{im}\, S$ is invertible. Furthermore, suppose that for some ${\alpha }_j \in \mathrm{k} $ with $0 \le j \le m_{\ell }-1$ we have $0 = \sum^{m_{\ell }-1}_{j=0} {\alpha }_j N^j(Sv^{\ell })$. Then $0 = S\big( \sum^{m_{\ell }-1}_{j=0}{\alpha }_j N^jv^{\ell } \big)$, because $S|\mathrm{im}\, S$ and $N|\mathrm{im}\, S$ commute. Since $S|\mathrm{im}\, S$ is invertible, the preceding equality implies $0 = \sum^{m_{\ell }-1}_{j=0}{\alpha }_j N^jv^{\ell }$. Consequently, by claim 3.1.1 we obtain ${\alpha }_j =0 $ for every $0 \le j \le m_{\ell }-1$.
In other words, $\{ Sv^{\ell }, \, N(Sv^{\ell }), \ldots , N^{m_{\ell }-1}(Sv^{\ell }) \} $ is a Jordan chain of $N|\mathrm{im}\, S$ in $\mathrm{im}\, S$ of length $m_{\ell }$ with generating vector $Sv^{\ell }$. So $Sv^{\ell } \in F_{q_{\ell }}$. Thus $F_{q_{\ell }}$ is an $S$-invariant subspace of $\mathrm{im}\, S$ and hence is an $S$-invariant subspace of $V$, since $V = \mathrm{im}\, S \oplus \ker S$. $\square $ An $A$-invariant subspace $U$ of $V$ is \emph{uniform} of \emph{height} $m-1$ if $N^{m-1}U \ne \{ 0 \} $ but $N^m U =\{0 \} $ and $\ker N^{m-1} \cap U = NU$. For each $1 \le {\ell } \le r$ let $U^{q_{\ell }}$ be the space spanned by the vectors in the Jordan chains of length $m_{\ell }$ in the Young diagram of $N|\ker S$ and for $r+1 \le \ell \le p$ let $U^{q_{\ell }}$ be the space spanned by the vectors in the Jordan chains of length $m_{\ell }$ in the Young diagram of $N|\mathrm{im}\, S$. \noindent \textbf{Claim 3.3.2} For each $1 \le {\ell } \le p$ the subspace $U^{q_{\ell }}$ is uniform of height $m_{\ell }-1$. \noindent \textbf{Proof.} By definition $U^{q_{\ell }} = F_{q_{\ell }} \oplus NF_{q_{\ell }} \oplus \cdots \oplus N^{m_{\ell }-1}F_{q_{\ell }}$. Since $N^{m_{\ell }}F_{q_{\ell }} =\{ 0 \} $ but $N^{m_{\ell }-1}F_{q_{\ell }} \ne \{ 0 \} $, the subspace $U^{q_{\ell }}$ is $A$-invariant and of height $m_{\ell }-1$. To show that $U^{q_{\ell }}$ is uniform we need only show that $\ker N^{m_{\ell }-1} \cap U^{q_{\ell }} \subseteq NU^{q_{\ell }}$, since the inclusion of $NU^{q_{\ell }}$ in $\ker N^{m_{\ell }-1}$ follows from the fact that $N^{m_{\ell }}F_{q_{\ell }} =0$. Suppose that $u \in \ker N^{m_{\ell }-1} \cap U^{q_{\ell }}$. Then for every $0 \le i \le m_{\ell }-1$ there are unique vectors $f_i \in F_{q_{\ell }}$ such that $u = f_0 + Nf_1 + \cdots + N^{m_{\ell }-1}f_{m_{\ell }-1}$. Since $u \in \ker N^{m_{\ell }-1}$ we get $0 = N^{m_{\ell }-1}u = N^{m_{\ell }-1}f_0$.
If $f_0 \ne 0$, then the preceding equality contradicts the fact that $f_0$ is a generating vector of a Jordan chain of $N$ of length $m_{\ell }$. Therefore $f_0 =0$, which means that $u = N(f_1 + \cdots + N^{m_{\ell }-2}f_{m_{\ell }-1}) \in NU^{q_{\ell }}$. This shows that $\ker N^{m_{\ell }-1} \cap U^{q_{\ell }} \subseteq NU^{q_{\ell }}$. Hence $\ker N^{m_{\ell }-1} \cap U^{q_{\ell }} = NU^{q_{\ell }}$, that is, the subspace $U^{q_{\ell }}$ is uniform of height $m_{\ell }-1$. $\square $ Now we give an explicit description of the uniform normal form of the linear mapping $A$. For each $1 \le {\ell } \le p$ let ${\chi }_{S|F_{q_{\ell }}}$ be the characteristic polynomial of $S$ on $F_{q_{\ell }}$. From the fact that every summand in $U^{q_{\ell }} = F_{q_{\ell }} \oplus NF_{q_{\ell }} \oplus \cdots \oplus N^{m_{\ell }-1}F_{q_{\ell }}$ is $S$-invariant, it follows that the characteristic polynomial ${\chi }_{S|U^{q_{\ell }}}$ of $S$ on $U^{q_{\ell }}$ is ${\chi }^{m_{\ell }}_{S|F_{q_{\ell }}}$. Since $V = \sum^p_{\ell =1}\oplus U^{q_{\ell }}$, we obtain ${\chi }_S = \prod^p_{\ell =1}{\chi }^{m_{\ell }}_{S|F_{q_{\ell }}}$. Choose a basis ${\{ u^{\ell }_j \} }^{q_{\ell }}_{j=1}$ of $F_{q_{\ell }}$ so that the matrix of $S|F_{q_{\ell }}$ is the $q_{\ell } \times q_{\ell }$ companion matrix $C_{q_{\ell }}$ (\ref{eq-twostars1}) associated to the characteristic polynomial ${\chi }_{S|F_{q_{\ell }}}$. When $1 \le {\ell } \le r$ the companion matrix $C_{q_{\ell }}$ is $0$ since $S|F_{q_{\ell }} =0$.
With respect to the basis ${\{ u^{\ell }_j, \, Nu^{\ell }_j, \ldots , N^{m_{\ell }-1}u^{\ell }_j \} }^{q_{\ell }}_{j=1} $ of $U^{q_{\ell }}$ the matrix of $A|U^{q_{\ell }}$ is the $m_{\ell }q_{\ell } \times m_{\ell }q_{\ell }$ matrix \begin{displaymath} D_{m_{\ell }q_{\ell }} = \mbox{\footnotesize $\left( \begin{array}{cccccl} C_{q_{\ell }} & 0 & 0 & \cdots & \cdots & 0 \\ I & C_{q_{\ell }} & 0 & \cdots & \vdots & 0 \\ 0 & I & \ddots & & \vdots & \vdots \\ \vdots & & \ddots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & I & C_{q_{\ell }} & 0 \\ 0 & \cdots & \cdots & 0 & I & C_{q_{\ell }} \end{array} \right) .$} \end{displaymath} Since $V = \sum^p_{{\ell }=1} \oplus U^{q_{\ell }}$, the matrix of $A$ is $\mathrm{diag}\, (D_{m_1q_1}, \ldots , D_{m_pq_p})$ with respect to the basis ${\{ u^{\ell }_j, \, Nu^{\ell }_j, \ldots , N^{m_{\ell }-1}u^{\ell }_j \} }^{(q_{\ell }, p)}_{(j, \ell ) = (1,1)}$. We call the preceding matrix the \emph{uniform normal form} for the linear map $A$ of $V$ into itself. We note that this normal form can be computed using only operations in the field $\mathrm{k} $ of characteristic $0$. Using the uniform normal form of $A$ we obtain a factorization of its characteristic polynomial ${\chi }_A$ over the field $\mathrm{k} $. \noindent \textbf{Corollary 3.3.3} ${\chi }_A (\lambda )= \prod^p_{\ell =1}{\chi }^{m_{\ell }}_{S|F_{q_{\ell }}}(\lambda ) = {\lambda }^n \, \prod^p_{\ell = r+1}{\chi }^{m_{\ell }}_{S|F_{q_{\ell }}}(\lambda )$, where $n = \sum^r_{\ell =1} m_{\ell }q_{\ell } = \dim \ker S$. \end{document}
\begin{document} \title{A note on saturation for Berge-$G$ hypergraphs} \author{Maria Axenovich\thanks{Karlsruhe Institute of Technology, Karlsruhe, Germany}\and Christian Winter\thanks{Karlsruhe Institute of Technology, Karlsruhe, Germany}\thanks{Research supported in part by Talenx stipendium.}} \maketitle \begin{abstract} For a graph $G=(V,E)$, a hypergraph $H$ is called {\it Berge-$G$} if there is a hypergraph $H'$, isomorphic to $H$, so that $V(G)\subseteq V(H')$ and there is a bijection $\phi: E(G) \rightarrow E(H')$ such that for each $e\in E(G)$, $e \subseteq \phi(e)$. The set of all Berge-$G$ hypergraphs is denoted $\ensuremath{\mathcal{B}}(G)$. A hypergraph $H$ is called Berge-$G$ {\it saturated} if it does not contain any subhypergraph from $\ensuremath{\mathcal{B}}(G)$, but adding any new hyperedge of size at least $2$ to $H$ creates such a subhypergraph. Since each Berge-$G$ hypergraph contains $|E(G)|$ hyperedges, it follows that each Berge-$G$ saturated hypergraph must have at least $|E(G)|-1$ hyperedges. We show that for each graph $G$ that is not a certain star and for any $n\geq |V(G)|$, there are Berge-$G$ saturated hypergraphs on $n$ vertices with exactly $|E(G)|-1$ hyperedges. This determines the smallest number of hyperedges in a Berge-$G$ saturated hypergraph exactly. \end{abstract} \section{Introduction} For a graph $G=(V,E)$, a hypergraph $H$ is called {\it Berge-$G$} if there is a hypergraph $H'$, isomorphic to $H$, so that $V(G)\subseteq V(H')$ and there is a bijection $\phi: E(G) \rightarrow E(H')$ such that for each $e\in E(G)$, $e \subseteq \phi(e)$. The set of all Berge-$G$ hypergraphs is denoted $\ensuremath{\mathcal{B}}(G)$. Here, for a graph or a hypergraph $F$, we shall always denote the vertex set of $F$ as $V(F)$ and the edge set of $F$ as $E(F)$. A copy of a graph $F$ in a graph $G$ is a subgraph of $G$ isomorphic to $F$. When clear from context, we shall drop the word ``copy" and just say that there is an $F$ in $G$.
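Since the Berge-$G$ property is a finite combinatorial condition (a copy of $G$ together with distinct representative hyperedges), it can be verified by brute force on small instances. The following Python sketch is ours and purely illustrative; it runs in exponential time, which is fine only for tiny examples:

```python
from itertools import permutations

def contains_berge(G_edges, H_edges):
    """Does the hypergraph with hyperedge list H_edges contain a Berge
    copy of the graph with edge list G_edges?  We search for an injection
    of V(G) into V(H) and distinct hyperedges phi(e) with e inside phi(e)."""
    verts = sorted({v for e in G_edges for v in e})
    H_verts = sorted({v for h in H_edges for v in h})
    if len(H_verts) < len(verts):
        return False
    for image in permutations(H_verts, len(verts)):
        vmap = dict(zip(verts, image))
        gedges = [{vmap[a], vmap[b]} for a, b in G_edges]
        # injective assignment edge -> hyperedge (distinct representatives)
        for assign in permutations(range(len(H_edges)), len(gedges)):
            if all(e <= set(H_edges[i]) for e, i in zip(gedges, assign)):
                return True
    return False
```

For instance, the triangle $K_3$ is Berge-contained in the three-hyperedge example discussed later in this section, while no hypergraph with fewer than $|E(G)|$ hyperedges can contain a Berge-$G$.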
Several classical questions regarding Berge-$G$ hypergraphs have been considered. Among those are extremal numbers for Berge-$G$ hypergraphs measuring the largest number of hyperedges or the largest weight of hypergraphs on $n$ vertices that contain no subhypergraph from $\ensuremath{\mathcal{B}}(G)$, see for example \cite{GP, G, GMT, PTTW}. In addition, Ramsey numbers for Berge-$G$ hypergraphs have been considered in \cite{AG, GYS, GYLSS}. In this paper, we consider a saturation problem. Let $\ensuremath{\mathcal{F}}$ be a class of hypergraphs with edges of size at least two. A hypergraph $\ensuremath{\mathcal{H}}$ is called $\ensuremath{\mathcal{F}}$ {\it saturated} if it does not contain any subhypergraph isomorphic to a member of $\ensuremath{\mathcal{F}}$, but adding any new hyperedge of size at least $2$ to $\ensuremath{\mathcal{H}}$ creates such a subhypergraph. The saturation problem for families of $k$-uniform hypergraphs has been treated by Pikhurko \cite{P}, see also \cite{P1}. Pikhurko \cite{P} proved, in particular, that for any $k$-uniform hypergraph $G$ there is an $n$-vertex $k$-uniform hypergraph $H$ that is $\{G\}$ saturated and has $O(n^{k-1})$ edges. This extends a result of K\'aszonyi and Tuza \cite{KT} who proved this fact for $k=2$, i.e., for graphs. See also a survey of Faudree et al. \cite{FFS}. Here $\{G\}$ saturated means that $H$ has no subhypergraph isomorphic to $G$ but adding any new hyperedge of size $k$ creates such a subhypergraph. This result is asymptotically tight for some $G$. The determination of a smallest size for $\{G\}$-saturated hypergraphs remains open in general. In the same setting of $k$-uniform hypergraphs, English et al. \cite{EGMT} proved that there are $\ensuremath{\mathcal{B}}_k(G)$ saturated hypergraphs on $n$ vertices and $O(n)$ hyperedges, where $\ensuremath{\mathcal{B}}_k(G)$ is the set of all $k$-uniform Berge-$G$ hypergraphs, $3\leq k\leq 5$. See also English et al.
\cite{EGKMS} for Berge saturation results on some special graphs. \\ We restrict our attention to the non-uniform case and Berge-$G$ hypergraphs. For $n\geq |V(G)|$, let the {\it saturation number} for a Berge-$G$ hypergraph be defined as $${\rm sat}(n, \ensuremath{\mathcal{B}}(G))= \min \{|E(\ensuremath{\mathcal{H}})|: ~ \ensuremath{\mathcal{H}} \mbox{ is a } \ensuremath{\mathcal{B}}(G) \mbox{ saturated hypergraph on } n \mbox{ vertices} \}.$$ Observe that for any nontrivial graph $G$, $${\rm sat}(n, \ensuremath{\mathcal{B}}(G))\geq |E(G)|-1.$$ Since no Berge-$G$ hypergraph has hyperedges of sizes less than $2$, we can assume that all hypergraphs considered have hyperedges of sizes at least $2$. We further assume that graphs considered have no isolated vertices. The following is the main result of this paper: \begin{theorem}\label{thm:main} Let $G=(V, E)$ be a graph with no isolated vertices, $n\geq |V(G)|$, and $m=|E(G)|-1$. Then $$ {\rm sat}(n, \ensuremath{\mathcal{B}}(G))= \begin{cases} |E(G)|, & \mbox{ if } G \mbox{ is a star on at least four edges},\\ |E(G)| -1, & \mbox{ otherwise.} \end{cases}$$ Moreover, if $G_1$ is a star on at least $4$ edges and $G_2$ is any other graph, then $\ensuremath{\mathcal{H}}_t(n)$ and $\ensuremath{\mathcal{H}}(n, m)$ are Berge-$G_1$ and Berge-$G_2$ saturated hypergraphs, respectively. \end{theorem} \noindent For a positive integer $n$, let $[n]=\{1, 2, \ldots, n\}$. We shorten $\{i,j\}$ as $ij$ when clear from context. If $F$ is a hypergraph and $e$ is a hyperedge, we denote by $F+e$ and $F-e$ the hypergraphs obtained by adding $e$ to $F$ and deleting $e$ from $F$, respectively.\\ \noindent {\bf Construction of a hypergraph $\ensuremath{\mathcal{H}}_t(n)$:}\\ Let $n$ and $t$ be positive integers, $t\leq n$.
Let $ \ensuremath{\mathcal{H}}_t(n) = ([n],\{[n], [n]-\{1\}, [n]-\{2\},\ldots, [n]-\{t-3\}, [t-3]\}).$\\ \noindent {\bf Construction of a set system $H'(n,m)$ and a hypergraph $\ensuremath{\mathcal{H}}(n, m)$:}\\ Let $n$ and $m$ be positive integers, $m\leq \binom{n}{2}$. Let $x= \min\{ m-1, n\}$. Let $V'$ be a set of singletons, $V'\subseteq \{\{i\}\colon i\in[n]\}$, $|V'|=x$. Let $E'$ be an edge-set of an almost regular graph (the degrees of vertices differ by at most one) on the vertex set $[n]$, such that $|E'|=m - x-1$. Let $H' (n,m)= \{\varnothing\} \cup V' \cup E'$.\\ Informally, we build a set system $H'(n,m)$ of $m$ sets on the ground set $[n]$ by first picking an empty set, then as many singletons as possible, and then pairs, so that the pairs form an edge-set of an almost regular graph.\\ Let $$\ensuremath{\mathcal{H}}=\ensuremath{\mathcal{H}}(n,m)= ([n], \{ [n]-E: ~ E\in H'(n,m)\}).$$ Note that $|E(\ensuremath{\mathcal{H}})|=m$ and each hyperedge of $\ensuremath{\mathcal{H}}$ has size $n$, $n-1$, or $n-2$.\\ \noindent \textbf {Examples.} \\ If $n=4$ and $m=4$, we have: \begin{eqnarray*} H'(4,4)& = &\{ \varnothing, \{1\}, \{2\}, \{3\}\},\\ E(\ensuremath{\mathcal{H}}(4,4))& = & \{ [4], \{2,3,4\}, \{1,3,4 \}, \{1,2,4\}\}. \end{eqnarray*} \noindent If $n=5$ and $m = 8$, we have $$H'(5,8)= \{ \varnothing, \{1\}, \{2\}, \{3\}, \{4\}, \{5\}, \{1,2\}, \{3,4\}\},$$ $$E(\ensuremath{\mathcal{H}}(5,8)) = \{ [5], \{2,3,4,5\}, \{1,3,4,5 \}, \{1,2, 4,5\}, \{1,2,3,5\}, \{1,2, 3,4\}, \{3,4, 5\}, \{1,2,5\}\}.$$ Let $H$ be a Berge-$G$ hypergraph. We call a copy $G'$ of $G$, where $V(G')\subseteq V(H)$ and the edges of $G'$ are contained in distinct hyperedges of $H$, an {\it underlying graph} of the Berge-$G$ hypergraph $H$. For example, if $G'$ is a triangle on vertices $1,2,3$, then the hypergraph $(\{1,2, 3, 4\}, \{\{1,2\}, \{2, 3, 4\}, \{1,2, 3, 4\}\})$ is Berge-$K_3$ and $G'$ is an underlying graph of this Berge-$K_3$ hypergraph.
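Both constructions are concrete enough to be generated by machine. The following Python sketch (the function names are ours, not from the paper) builds $\ensuremath{\mathcal{H}}_t(n)$, $H'(n,m)$ and $\ensuremath{\mathcal{H}}(n,m)$, choosing the pairs of $E'$ greedily so that the pair graph stays almost regular; it reproduces the $n=4$, $m=4$ example above.

```python
from itertools import combinations

def H_t(n, t):
    # H_t(n) = ([n], {[n], [n]-{1}, ..., [n]-{t-3}, [t-3]}): t-1 hyperedges
    ground = frozenset(range(1, n + 1))
    return ([ground] + [ground - {i} for i in range(1, t - 2)]
            + [frozenset(range(1, t - 2))])

def H_prime(n, m):
    # H'(n,m): the empty set, then x = min(m-1, n) singletons, then
    # m - x - 1 pairs chosen so that they form an almost regular graph
    x = min(m - 1, n)
    sets = [frozenset()] + [frozenset({i}) for i in range(1, x + 1)]
    deg = {v: 0 for v in range(1, n + 1)}
    chosen = set()
    while len(chosen) < m - x - 1:
        # greedily take an unused pair with the smallest endpoint degrees,
        # which keeps the degrees of the pair graph within one of each other
        a, b = min((p for p in combinations(range(1, n + 1), 2)
                    if p not in chosen),
                   key=lambda p: (deg[p[0]] + deg[p[1]], p))
        chosen.add((a, b))
        deg[a] += 1
        deg[b] += 1
    return sets + [frozenset(p) for p in sorted(chosen)]

def H(n, m):
    # hyperedges of H(n,m) are the complements [n] - E for E in H'(n,m)
    ground = frozenset(range(1, n + 1))
    return [ground - s for s in H_prime(n, m)]

# reproduces the first example above
assert H(4, 4) == [frozenset({1, 2, 3, 4}), frozenset({2, 3, 4}),
                   frozenset({1, 3, 4}), frozenset({1, 2, 4})]
```

For $n=5$, $m=8$ the greedy choice also returns the pairs $\{1,2\}$ and $\{3,4\}$ of the second example, and for $\ensuremath{\mathcal{H}}_t(n)$ one can check numerically that every vertex has degree exactly $t-2$, the fact used in the proof of Lemma 1 below.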
\section{Proof of the main theorem} Let $S_t$ denote a star on $t$ vertices. \begin{lemma} Let $t\ge 5$, $n\geq t$. Then ${\rm sat}(n,\ensuremath{\mathcal{B}}(S_t))=t-1= |E(S_t)|$. \label{lem:35} \end{lemma} \begin{proof} To show the lower bound, assume first that there is a hypergraph $\ensuremath{\mathcal{H}}$ with $t-2$ hyperedges and vertex set $[n]$ that is Berge-$S_t$ saturated. Since the maximum degree of any member of $\ensuremath{\mathcal{B}}(S_t)$ is at least $t-1$, we have that the maximum degree of $\ensuremath{\mathcal{H}}+e$ for any new edge $e$ of size at least $2$ is at least $t-1$. We have that $\ensuremath{\mathcal{H}}$ has at least $|V(S_t)|=t\geq 5$ vertices. Assume first that $\ensuremath{\mathcal{H}}$ contains an edge of size $2$, say $12$. Then any vertex in $\{3, \ldots, n\}$ does not belong to this edge, so it has degree at most $t-3$. Thus, for any $i,j\in \{3, \ldots, n\}$, $i\neq j$, the maximum degree of $\ensuremath{\mathcal{H}}+ij$ is at most $t-2$, implying that $ij \in E(\ensuremath{\mathcal{H}})$. Since the edge $12$ was chosen arbitrarily, we can conclude that $\ensuremath{\mathcal{H}}$ contains all edges of size $2$. Thus $\ensuremath{\mathcal{H}}$ has at least $\binom{n}{2} \geq \binom{t}{2} > t-2$ edges, a contradiction. Therefore $\ensuremath{\mathcal{H}}$ has no hyperedges of size $2$. Assume next that all but at most one vertex, say $n$, belong to all hyperedges of $\ensuremath{\mathcal{H}}$. Thus each hyperedge contains the set $[n-1]$, implying that each hyperedge is either $[n-1]$ or $[n]$, a contradiction to the fact that there are $t-2\geq 3$ distinct hyperedges in $E(\ensuremath{\mathcal{H}})$. Hence, there are two vertices, say $1$ and $2$, each with degree at most $t-3$. We know that $12 \not\in E(\ensuremath{\mathcal{H}})$ and that $\ensuremath{\mathcal{H}}+12$ has maximum degree at most $t-2$, a contradiction. Thus $\ensuremath{\mathcal{H}}$ is not $\ensuremath{\mathcal{B}}(S_t)$ saturated.
\\ For the upper bound, we show that $\ensuremath{\mathcal{H}}_t(n)$ is a $\ensuremath{\mathcal{B}}(S_t)$-saturated hypergraph. Recall that $\ensuremath{\mathcal{H}}= \ensuremath{\mathcal{H}}_t(n)= ([n],\{[n], [n]-\{1\}, [n]-\{2\},\ldots, [n]-\{t-3\}, [t-3]\}).$ Note that each vertex of $\ensuremath{\mathcal{H}}$ has degree $t-2$. Thus $\ensuremath{\mathcal{H}}$ is $\ensuremath{\mathcal{B}}(S_t)$-free. Let $e\subseteq [n]$ be a set of size at least $2$ such that $e\not\in E(\ensuremath{\mathcal{H}})$. Let $i,j \in e$, $i\neq j$. We shall show that $\ensuremath{\mathcal{H}}+e$ contains a Berge-$S_t$ subhypergraph.\\ Case 1. $i, j\in [t-3]$, without loss of generality $i=1$, $j=2$. Then the pairs $1n, 1(n-1), 13, \ldots, 1(t-2), 12$ are contained in $[n], [n]-\{2\},\ldots, [n]-\{t-3\}, [t-3], e$, respectively, and form an underlying graph of Berge-$S_t$ in $\ensuremath{\mathcal{H}}+e$.\\ Case 2. $i$ or $j$ is not in $[t-3]$. Let, without loss of generality, $i=n$. Then, without loss of generality, $j=n-1$ or $j=1$. Then the pairs $n2, n3, \ldots, n(t-2)$ are contained in $[n]-\{1\}, [n]-\{2\},\ldots, [n]-\{t-3\}$, respectively, and the pairs $1n, (n-1)n$ are contained in $[n], e$ or $e, [n]$, respectively. Thus all these $t-1$ pairs form an underlying graph of Berge-$S_t$ in $\ensuremath{\mathcal{H}}+e$. \end{proof} \vskip 1cm \begin{proof}[Proof of Theorem \ref{thm:main}] First we consider some special graphs: stars on at most three edges and a triangle. For the upper bounds on ${\rm sat}(n,\ensuremath{\mathcal{B}}(G))$ for $G=S_2, S_3, S_4, K_3$, consider the following hypergraphs in order for $n\geq 2$, $n\geq 3$, $n\geq 4$, and $n\geq 3$, respectively: $([n], \emptyset), ([n], \{[n]\}), ([n], \{[n], [n]-\{1\}\}), ([n], \{[n], [n]-\{1\}\})$. It is easy to see that these hypergraphs are saturated for the respective Berge hypergraphs. Thus, for $G$ being one of these graphs, ${\rm sat}(n,\ensuremath{\mathcal{B}}(G))\leq |E(G)|-1$.
Since the lower bound on ${\rm sat}(n,\ensuremath{\mathcal{B}}(G))$ is trivially $|E(G)|-1$, the theorem holds in this case. Lemma \ref{lem:35} implies that the theorem holds for all other stars.\\ From now on, let $G$ be a non-empty graph which is neither a star nor a $K_3$. Let $n$ be the number of vertices in $G$, $n\geq 4$. We shall further assume that $G$ has no isolated vertices and that $V(G)=[n]$. Let $m=|E(G)|-1$. We shall prove that $\ensuremath{\mathcal{H}}=\ensuremath{\mathcal{H}}(n,m)$ as defined in the introduction is a Berge-$G$ saturated hypergraph, i.e., such that it does not contain any member of $\ensuremath{\mathcal{B}}(G)$ as a subhypergraph and such that for any new hyperedge $e$ of size at least two, $\ensuremath{\mathcal{H}}+e$ contains a Berge-$G$ subhypergraph. In fact, instead of $\ensuremath{\mathcal{H}}(n,m)$ we shall mostly be using the system $H'(n,m)$ also defined in the introduction. Note that $\ensuremath{\mathcal{H}}$ does not contain any member of $\ensuremath{\mathcal{B}}(G)$ since $\ensuremath{\mathcal{H}}$ has $|E(G)|-1$ edges. \\ Consider $e$, $e\subseteq [n]$, $|e|\geq 2$, $e\not\in E(\ensuremath{\mathcal{H}})$. Let $\{i,j\}\subseteq e$, $i\neq j$. Relabel the vertices of $G$ such that $ij \in E(G)$ and $i$ is a vertex of maximum degree in $G$. We shall show that $\ensuremath{\mathcal{H}}$ is Berge-$(G-ij)$, thus showing that $\ensuremath{\mathcal{H}}+e$ is Berge-$G$. We shall prove one of the following equivalent statements: \\ \noindent (i) there is a bijection $\phi$ between $E(G-ij)$ and $E(\ensuremath{\mathcal{H}})$ such that $e' \subseteq \phi(e')$ for any $e'\in E(G-ij)$,\\ (ii) there is a bijection $f$ between $E(G-ij)$ and $H'=H'(n,m)$ such that for each $e'\in E(G-ij)$, $e' \cap f(e') = \varnothing$,\\ (iii) there is a perfect matching in a bipartite graph $F$ with one part $A= E(G) - \{ij\}$ and the other part $B=H'$ such that $e'\in A=E(G) - \{ij\}$ and $e''\in B= H'$ are adjacent in $F$ iff $e'\cap e'' = \varnothing$.
\\ One can see that (i) and (ii) are equivalent by defining $\phi(e')$ to be $[n]-f(e')$. The equivalence of (ii) and (iii) is clear since $|A|=|B|$. Next, we shall prove (iii).\\ In each of the cases below, we assume that there is no perfect matching in $F$; thus, by Hall's theorem, there is a set $S\subseteq A$ such that $|N_F(S)|<|S|$. Let $Q = B \setminus N_F(S)$. We see that each element of $Q$ intersects each edge in $S$. Let $G_S$ be the subgraph of $G$ with edge set $S$. Since each element of $Q$ has size one or two, $G_S$ has a vertex cover of size one or two. Thus $G_S$ is either a star, a triangle, or an edge-disjoint union of two stars. Clearly, $\emptyset$ is not in $Q$. Assume some singleton, say $\{1\}$, is in $Q$. Then $S$ forms a star with center $1$. Then all singletons $\{2\}, \{3\}, \ldots$ and $\emptyset$ are in $N_F(S)$. If $S\neq A$, i.e., $|E(G)|-1>|S|$, then $|N_F(S)|\geq |S|$, a contradiction to our assumption on $S$. If $S=A$, i.e., $G$ is a union of a star and an edge $ij$, then, since $i$ is a vertex of maximum degree in $G$, we see that $G$ is a star, a contradiction. Thus we can assume that $Q$ contains only two-element sets; in particular, $H'$ has two-element sets and thus, by definition of $H'$, $|H'| >n+1$. Finally, since the empty set and all singletons are not in $Q$, they are in $N_F(S)$, so $|N_F(S)|\geq n$. Thus $|S|\geq n+1$, and in particular, $S$ does not form a star. We observed earlier that we could assume that $G$ is not a star. \\\\ {\bf Case 1.} $G$ is a union of two stars. We already excluded the case that $G$ is a star, so assume that $G$ is an edge-disjoint union of two stars with different centers. If one of the stars has at most two edges, then $|E(G)| \leq n+1$, and $|S|\leq n$, a contradiction. Thus each of the stars has at least $3$ edges. Note that $G$ has at most $2n-1$ edges.
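Condition (iii) is easy to verify by machine on small instances: build the bipartite graph $F$ with the disjointness adjacency and run any augmenting-path matching algorithm. The following Python sketch uses our own toy instance (not one from the paper): $G=C_4$ on $[4]$, so $m=|E(G)|-1=3$, $H'(4,3)=\{\varnothing,\{1\},\{2\}\}$, and $ij=12$ is the removed edge.

```python
def has_perfect_matching(A, B, adjacent):
    # Kuhn's augmenting-path algorithm on the bipartite graph (A, B)
    match = {}  # index in B -> index in A

    def try_augment(a, seen):
        for b in range(len(B)):
            if b not in seen and adjacent(A[a], B[b]):
                seen.add(b)
                if b not in match or try_augment(match[b], seen):
                    match[b] = a
                    return True
        return False

    return all(try_augment(a, set()) for a in range(len(A)))

# Toy instance for (iii): A = E(G) - {ij} for G = C_4, B = H'(4,3);
# e' in A and e'' in B are adjacent iff they are disjoint.
A = [frozenset(p) for p in [(2, 3), (3, 4), (1, 4)]]
B = [frozenset(), frozenset({1}), frozenset({2})]
print(has_perfect_matching(A, B, lambda x, y: not (x & y)))  # -> True
```

A perfect matching exists here, matching the claim of the theorem that $\ensuremath{\mathcal{H}}(n,m)$ plus any new hyperedge contains a Berge-$G$ subhypergraph.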
In particular, since there are $n$ singletons and an empty set in $H'$ and $|H'|\leq 2n-1$, we have that $E'$, the set of pairs from $H'$, has size at most $n-2$ and thus the graph on edge set $E'$ has maximum degree at most $2$. This implies that in the graph with edge set $E'$, every vertex has a non-adjacent vertex. Let $k$ be a vertex such that $ik\not\in E'$. Relabel the vertices of $G$ such that $ij$ is an edge of $G$, and $i$ and $k$ are the centers of the stars whose union is $G$, and $j\neq k$. Since $ik\not\in E'$, it follows that $ik \not\in Q$. Since each pair from $Q$ forms a vertex cover of $G_S$, there is a pair different from $ik$ that forms a vertex cover of $G_S$. Since $ik$ is a vertex cover of $G$, it is a vertex cover of $G_S$. Thus $G_S$ has two distinct vertex covers of size $2$. Then $G_S$ is a subgraph of a triangle with possibly some further edges incident to the same vertex of the triangle, or a subgraph of a $C_4$. This implies that $|S|\leq n$, a contradiction.\\ \textbf {Case 2.} $G$ is not a union of two stars. If $|Q|=1$, then $|N_F(S)|=|B|-1= |A|-1$. Since $|S|> |N_F(S)|= |A|-1$ and $S\subseteq A$, we have that $S=A$, hence $G_S= G - ij$. Since there is a vertex cover of $G_S$ of size $2$, we have that $G_S= G-ij$ is a union of two stars $S', S''$, so $G$ is a union of two stars and an edge incident to a vertex of maximum degree of $G$. If the maximum degree of $G$ is at least four, then $i$ is a center of one of $S'$, $S''$. Thus $G$ is a union of two stars, a contradiction. If the maximum degree of $G$ is at most $3$, then $|E(G)| \leq 7$. On the other hand, $m=|H'| \ge n+2$. Thus $n+3\le |E(G)|\le 7$. Thus $n=|V(G)| \leq 4$ and for each such choice of $n$ we reach a contradiction by the fact that $n+3\leq |E(G)|$. If $Q$ contains two disjoint edges, say $12$ and $34$, then $G_S$ can only be a subgraph of a $4$-cycle $13241$. So, $|S|\leq 4\leq n$, a contradiction to our assumption that $|S|\geq n+1$.
\\ Thus $Q$ contains edges that either form a star on at least three edges or a subgraph of a triangle. If the edges of $Q$ form a star on at least three edges, say $12, 13, 14, \ldots$, then $S$ forms a star with center $1$, a contradiction. If the edges of $Q$ form a triangle, say $123$, then each edge of $S$ must intersect each of $12$, $23$, and $13$, so $S\subseteq \{12, 13, 23\}$ and $|S|\leq 3$, a contradiction. Thus $Q$ contains exactly two adjacent edges, say $12$ and $13$. It follows that $S$ forms a star with center $1$ and possibly an edge $23$. Then $|S|\leq n$, a contradiction. Hence, there is a perfect matching in $F$ and thus $\ensuremath{\mathcal{H}}$ is Berge-$G$ saturated. \end{proof} \section{Conclusions} In this note, we completely determine ${\rm sat}(n, \ensuremath{\mathcal{B}}(G))$ for any $n\geq |V(G)|$ and show in particular that this function does not depend on $n$. There are many variations of saturation numbers for non-uniform hypergraphs that could be considered. Among those are functions optimising the total weight of a saturated hypergraph, i.e., the sum of cardinalities of all hyperedges, or functions optimising the size of a saturated multihypergraph. These have been considered by the second author in \cite{W}. One particularly interesting variation considered in \cite{W} is the following notion of saturation: a hypergraph $\ensuremath{\mathcal{H}}$ is called strongly $\ensuremath{\mathcal{F}}$ saturated with respect to a family of hypergraphs $\ensuremath{\mathcal{F}}$ if $\ensuremath{\mathcal{H}}$ does not contain any member of $\ensuremath{\mathcal{F}}$ as a subhypergraph, but replacing any hyperedge $e$ of $\ensuremath{\mathcal{H}}$ with $e\cup \{v\}$ for any vertex $v\not\in e$ such that $e\cup \{v\} \notin E(\ensuremath{\mathcal{H}})$ creates such a member of $\ensuremath{\mathcal{F}}$.\\ \noindent {\bf Acknowledgements~~~} We thank Casey Tompkins for useful discussions and carefully reading the manuscript. \end{document}
\begin{document} \title{A Combinatorial Formula for Test Functions \ with Pro-$p$ Iwahori Level Structure} \begin{abstract} The Test Function Conjecture due to Haines and Kottwitz predicts that the geometric Bernstein center is a source of test functions required by the Langlands-Kottwitz method for expressing the local semisimple Hasse-Weil zeta function of a Shimura variety in terms of automorphic L-functions. Haines and Rapoport found an explicit formula for such test functions in the Drinfeld case with pro-$p$ Iwahori level structure. This article generalizes the Haines-Rapoport formula for the Drinfeld case to a broader class of split groups. The main theorem presents a new formula for test functions with pro-$p$ Iwahori level structure, which can be computed through some combinatorics on Coxeter groups. Explicit descriptions of the test function in certain low-rank general linear and symplectic group examples are included. \end{abstract} \section{Introduction} The Hasse-Weil zeta function of an algebraic variety defined over a number field is an important object of study in modern number theory connected to several guiding problems. A goal of the Langlands Program is to express these zeta functions in terms of automorphic L-functions. Much can be said about the zeta functions in the case of Shimura varieties due to the contributions of many mathematicians, and in particular, the \emph{Langlands-Kottwitz method} outlines a rigorous strategy for studying local factors in the Euler product of a zeta function of a Shimura variety. Although the Langlands-Kottwitz method and the Test Function Conjecture of Haines and Kottwitz serve as the motivation for this work, very little of the technology involved with that theory will be used in what is to come. For a complete explanation, see the survey article \cite{haines2005}, while the Test Function Conjecture is precisely stated in \cite{haines2014}, Conjecture 4.30. 
We will focus instead on a single aspect of a certain identity involving the semisimple trace of Frobenius on the $\ell$-adic cohomology of a Shimura variety, which must be established in the course of following the Langlands-Kottwitz approach. Here is the formula as stated in \cite{haines2014}, Section 6.1: $$ {\rm{tr}}^{\rm{ss}}\left(\Phi_p^r, H_c^\bullet(Sh_{K_p} \otimes_E \bar{\mathbb{Q}}_p, \bar{\mathbb{Q}}_\ell)\right) = \sum_{(\gamma_0; \gamma, \delta)} c(\gamma_0; \gamma, \delta) {\rm{O}}_\gamma(1_{K^p}) {\rm{TO}}_{\delta \theta} (\phi_r). $$ For our purposes, we can limit our attention to the term ${\rm{TO}}_{\delta \theta} (\phi_r)$ in the trace formula, which is a twisted orbital integral defined by $$ {\rm{TO}}_{\delta \theta}(\phi_r) = \int_{G_{\delta \theta}^\circ (F) \setminus G(F_r)} \phi_r \Big(g^{-1} \delta \theta(g)\Big) d\bar{g}, $$ where \begin{itemize} \item $G$ is a connected reductive group over a $p$-adic field $F$, \item $F_r / F$ is a degree $r$ unramified extension, \item $\theta$ generates the Galois group ${\rm{Gal}}(F_r/F)$, \item $\delta$ is an element of $G(F_r)$ whose norm in $G(F)$ is semisimple, \item $G_{\delta \theta} (F) = \{ g \in G(F_r) \mid g^{-1} \delta \theta (g) = \delta\}$, with identity component $G_{\delta \theta}^\circ (F)$, \item $d\bar{g}$ is a quotient Haar measure, and \item $\phi_r$ is a locally-constant compactly-supported $K_{p^r}$-biinvariant function on $G(F_r)$. \end{itemize} See \cite{haines2012}, Section 6.2, for a complete explanation of ${\rm{TO}}_{\delta \theta} (\phi_r)$. The function $\phi_r$ is called a \emph{test function}, and it is the focal point of this article. The aforementioned Test Function Conjecture predicts that the Bernstein center of $G$ is a source of test functions that satisfy the above trace formula; however, we do not directly address the Conjecture. 
Instead, we consider functions defined via the Bernstein center in the case of split connected reductive groups with connected center, with pro-$p$ Iwahori level structure, in which case the function is denoted $\phi_{r,1}$, and then we develop a combinatorial formula for a closely related function $\phi_{r,1}^{\rm{aug}}$ whose twisted orbital integrals match those of $\phi_{r,1}$, that is, ${\rm{TO}}_{\delta\theta} (\phi_{r,1}) = {\rm{TO}}_{\delta\theta}(\phi_{r,1}^{\rm{aug}}).$ Let us conclude this introduction with some motivation for proving explicit formulas for test functions arising from the Bernstein center and an overview of previous work. This area grew out of an effort to understand nearby cycles of Shimura varieties with parahoric level structure. At first, calculations were done using geometric methods, but eventually conjectures were formulated using functions coming from the Bernstein center. Then it became useful to have explicit versions of the test functions to compare with the geometric calculations. We refer the reader to the survey article \cite{haines2005} for additional background and history. A test function $\phi_r = q^{r\ell(t_\mu)/2}(Z_{V_\mu} \ast 1_{K_r})$ is defined in terms of a distribution $Z_{V_\mu}$ in the Bernstein center and a level structure group $K_r$, which is a compact open subgroup of $G(F_r)$. As the subgroup $K_r$ changes, so do both the coefficients of the test function $\phi_r$ and its support. For example, suppose $K_r = G(\mathcal{O}_{F_r})$ is a hyperspecial maximal compact subgroup. Then $\phi_r$ is just $1_{K_r \varpi^\mu K_r}$. So the coefficients of the test function are 0 or 1 and the support is the double $K_r$-coset corresponding to $\mu$. If $K_r = I_r$ is an Iwahori subgroup, the situation becomes more complex.
Now the test function $\phi_r$ is supported on the \emph{$\mu$-admissible set} and the coefficient of $\phi_r$ at an admissible element $w$ involves the polynomial $R_{w, t_{\lambda(w)}}(q)$ coming from Kazhdan-Lusztig theory~\cite{haines2000b}. This paper addresses the case where $K_r = I_r^+$ is a pro-$p$ Iwahori subgroup, following the earlier work of Haines and Rapoport, which considered the Drinfeld case for this level structure \cite{haines-rapoport2012}. In this case, the coefficients of $\phi_r$ are far more complicated than in the case of Iwahori level structure and the support is stratified by elements which are products of elements in the $\mu$-admissible set with certain elements in the set of $k_r$-points of a split maximal torus. Finally, Scholze~\cite{scholze2011} discovered explicit test function formulas in the $GL_2$ case for deeper level structure groups and subsequently opened new directions of research into the Langlands-Kottwitz method~\cite{scholze2013a}. \emph{Acknowledgements.} This article was the author's Ph.D. thesis at the University of Maryland, College Park. I thank my advisor, Thomas Haines, for his guidance during this period. \subsection{Summary of this paper} \label{section::summary-of-thesis} We highlight key definitions and results, while pointing out the various hypotheses assumed along the way. Background material can be found in Section~\ref{section::preliminaries}. The group $G$ is a split connected reductive algebraic group with connected center defined over a $p$-adic field $F$, which includes the cases of general linear groups and general symplectic groups. We fix a choice of Borel subgroup $B$, which in turn determines a split maximal torus $T \subset B$. Now define the Iwahori subgroup $I$ to be the subgroup of $G(\mathcal{O}_F)$ whose reduction modulo $\varpi$ is $B(k_F)$. Let $\mu$ be a dominant minuscule cocharacter of $T$.
Given a degree $r$ unramified extension $F_r/F$, the $F_r$-points of $G$ shall be denoted $G_r$. Let $q$ denote the order of the residue field $k_F$; the residue field $k_r$ of $F_r$ then has order $q^r$. We will often use the difference $Q_r = q^{-r/2} - q^{r/2}$ in what follows. Our group $G$ has a dual group $\widehat{G}$ defined over $\mathbb{C}$ corresponding to the dual root datum of $G$. There exists a highest-weight representation $(r_\mu, V_\mu)$ of $\widehat{G}$ determined by our chosen $\mu$. By the theory of the stable Bernstein center $\mathfrak{Z}^{\rm{st}}(G)$, there is an element $Z_{V_\mu}$ in $\mathfrak{Z}^{\rm{st}}(G)$ that maps an infinitesimal character $(\lambda)_{\widehat{G}}$ on the Weil group $W_F$ to the semisimple trace of Frobenius on $V_\mu$ (Proposition~\ref{prop::reg-func-semisimple-trace}). Assuming the LLC+ conjecture, described in \cite{haines2014}, Section 5.2, the distribution $Z_{V_\mu}$ can be viewed as an element of the usual Bernstein center $\mathfrak{Z}(G)$. All of this is tied together in Definition~\ref{defin::test-function}, which is stated for a general test function. The discussion at the start of Section~\ref{section::depth-zero-characters} specializes that definition to the case where the level structure group is the pro-unipotent radical $I_r^+$ of an Iwahori subgroup $I_r$ of $G_r$. So we come to consider the test function $$ \phi_{r,1} = q^{r \ell(t_\mu)/2} \left(Z_{V_\mu} \ast 1_{I_r^+} \right). $$ The function $\phi_{r,1}$ lies in the center of the Hecke algebra $\mathcal{H}(G_r, I_r^+)$. This algebra is related to Hecke algebras $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$, each of which is determined by a depth-zero character $\chi_r$ on $T(\mathcal{O}_r)$ obtained by composing a depth-zero character $\chi : T(\mathcal{O}_F) \rightarrow \mathbb{C}^\times$ with the norm $N_r : T(\mathcal{O}_r) \rightarrow T(\mathcal{O}_F)$.
Because $T(k_r) \cong I_r / I_r^+$, by Proposition~\ref{prop::iwahori-prop-quotient}, the character $\chi_r$ can be extended to a character $\rho_{\chi_r}$ on the Iwahori subgroup $I_r$ that is trivial on $I_r^+$. Section~\ref{section::depth-zero-characters} is devoted to objects and results, such as these, associated to depth-zero characters. Definition~\ref{endoscopic-element-definition} builds on the LLC for Tori to associate an ``endoscopic element'' $\kappa_\chi$ in $\widehat{T}(\mathbb{C})$ to each depth-zero character $\chi$ on $T(\mathcal{O}_F)$. Proposition~\ref{prop::dz-endoscopic-elements-equal-kernel} characterizes these endoscopic elements as the kernel $K_{q-1}$ of the endomorphism on $\widehat{T}(\mathbb{C})$ given by $\kappa \mapsto \kappa^{q-1}$. Section~\ref{section::first-formula} takes advantage of the work with depth-zero characters to prove $$ \phi_{r,1} = [I_r : I_r^+]^{-1} q^{r\ell(t_\mu)/2} \sum_{\xi \in T(k_r)^\vee} Z_{V_\mu} \ast e_\xi, $$ where $e_\xi$ is an idempotent in the Hecke algebra $\mathcal{H}(G)$ and $\xi$ is a depth-zero character on $T(\mathcal{O}_r)$. It turns out that we can ignore certain terms in this sum when viewing $\phi_{r,1}$ as a test function to be plugged into a twisted orbital integral. This is Lemma~\ref{lemma::orbital-intergrals-zero}: \begin{unnumlemma}{\rm{(Haines)}} Suppose $\xi \in T(k_r)^\vee$ is not a norm, that is, there is no $\chi \in T(k_F)^\vee$ such that $\xi = \chi \circ N_r$. Then all twisted orbital integrals at $\theta$-semisimple elements vanish on functions in $\mathcal{H}(G_r, I_r, \rho_\xi)$. \end{unnumlemma} In light of this lemma, we define a new function $$ \phi_{r,1}^{\rm{aug}} = [I_r : I_r^+]^{-1} q^{r\ell(t_\mu)/2} \sum_{\chi \in T(k_F)^\vee} Z_{V_\mu} \ast e_{\chi_r} $$ whose twisted orbital integrals satisfy ${\rm{TO}}_{\delta\theta}(\phi_{r,1}) = {\rm{TO}}_{\delta\theta}(\phi_{r,1}^{\rm{aug}})$.
\textbf{As we shall see, the main theorem is a combinatorial formula for $\phi_{r,1}^{\rm{aug}}$ rather than $\phi_{r,1}$.} But because the twisted orbital integrals of these functions agree, this formula may as well be a formula for the test function. \begin{unnumrmk} It is possible to give a definition of $\phi_{r,1}^{\rm{aug}}$ that does not invoke LLC+ by using the LLC for Tori. Instead of using the distribution $Z_{V_\mu}$ to define functions $Z_{V_\mu} \ast e_{\chi_r}$ in the center of $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$, we can define $Z_{V_\mu} \ast e_{\chi_r}$ to be the function in the center of that Hecke algebra which acts by semisimple trace of Frobenius on the Bernstein block of $\tilde{\chi}_r$. See also Remark~\ref{rmk::clarify-dependence-on-llc+}. \end{unnumrmk} Roche's theory of Hecke algebra isomorphisms shows us how to rewrite a function in the center of $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ as a sum of \emph{Bernstein functions} in the center of an Iwahori-Hecke algebra associated to an endoscopic group $H_{\chi_r}$; however, we must make some assumptions about $G$ in order to apply this theory without making restrictions on ${\rm{char}}(k_F)$. These are explained in Remark~\ref{rmk::roche-assumptions}. Haines's formula for Bernstein functions attached to dominant minuscule cocharacters leads to a more concrete formula for $\phi_{r,1}^{\rm{aug}}$ by introducing Kazhdan-Lusztig $\widetilde{R}$-polynomials into the expression.
The end result of Chapter 2, Proposition~\ref{prop::first-explicit-formula}, is an explicit formula for the coefficients $\phi_{r,1}^{\rm{aug}}(I_r^+ s w I_r^+)$, with $\gamma_{N_r s}$ as in Lemma~\ref{lemma::defin-of-gamma-nrs}: \begin{unnumprop} Given a pair $(s,w) \in T(k_r) \times \widetilde{W}$, the coefficient $\phi_{r,1}^{\rm{aug}} (I_r^+ s w I_r^+)$ can be rewritten as a sum over endoscopic elements in $\widehat{T}(\mathbb{C})$ which arise from depth-zero characters $\chi \in T(k_F)^\vee$: $$ \phi_{r,1}^{\rm{aug}} (I_r^+ s w I_r^+) = [I_r: I_r^+]^{-1} \sum_{\kappa_\chi \in K_{q-1}} \gamma_{N_r s}(\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2} \widetilde{R}_{w,t_{\lambda(w)}}^\chi(Q_r). $$ \end{unnumprop} The formula of Proposition~\ref{prop::first-explicit-formula} defines an element of the Hecke algebra $\mathcal{H}(G_r, I_r^+)$, and so there is no doubt that the function exists, subject to the various hypotheses in place. On the other hand, the purpose for considering this function involves the conjectural existence of a distribution $Z_{V_\mu}$ in the stable Bernstein center. When $G=GL_n$, this distribution is known to exist and embeds into $\mathfrak{Z}(G)$. Hence in at least one important example of a split connected reductive group with connected center, the function $\phi_{r,1}$ appearing in the Test Function Conjecture exists, and its twisted orbital integrals agree with those of the function $\phi_{r,1}^{\rm{aug}}$, whose coefficients are specified by the formula in Proposition~\ref{prop::first-explicit-formula}. Chapter 3 begins the process of simplifying this formula. The first simplification comes from studying the set of endoscopic elements $\kappa_\chi$ such that $Z_{V_\mu} \ast e_{\chi_r}(w) \neq 0$ for a fixed $w \in \widetilde{W}$. An element $\kappa$ in $\widehat{T}(\mathbb{C})$ is ``relevant'' to $w = t_\lambda \bar{w}$ if $\lambda(\kappa) = 1$ and $\bar{w}\kappa = \kappa$.
Then we prove in Proposition~\ref{prop::relevant-group}: \begin{unnumprop} The elements $\kappa \in \widehat{T}(\mathbb{C})$ relevant to a fixed $w \in \widetilde{W}$ form a closed subgroup called the \textbf{relevant subgroup} $S_w$. \end{unnumprop} We subsequently define a subgroup $S_{w,J} \subseteq S_w$ for any root subsystem $J$ of the ambient system $\Phi(G,T)$ in Definition~\ref{defin::dzrelgrpJ}. The $S_{w,J}$ are diagonalizable algebraic subgroups of $\widehat{T}$ defined over $\mathbb{C}$, hence it is natural to consider their character groups $\chars{S_{w,J}}$. Section~\ref{section::lattices-in-char-groups} culminates in the definition of a lattice $L_{w,J} \subset \chars{\widehat{T}}$ such that $\chars{S_{w,J}} = \chars{\widehat{T}}/L_{w,J}$. Whereas the groups $S_{w,J}$ are (infinite) algebraic groups, the groups needed for the formula are finite subgroups $S_{w,J}^{\rm{dz}} = S_{w,J} \cap K_{q-1}$ of $\widehat{T}(\mathbb{C})$. In Section~\ref{section::finite-critical-grps}, we define a finite group $A_{w,J, k_F} \subset T(k_F)$ using the lattice $L_{w,J}$. Together, this data describes what happens to a certain sum that will appear in the proof of the main theorem. This is Proposition~\ref{prop::sum-over-group}: \begin{unnumprop} Let $s \in T(k_r)$, and define $\gamma_{N_r s}$ as above. Then $$\sum_{\kappa_\chi \in S_{w,J}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1} = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J, k_F} \\ \vert S_{w,J}^{\rm{dz}} \vert, & {\rm{otherwise}}. \end{cases} $$ \end{unnumprop} The second simplification of the formula for $\phi_{r,1}^{\rm{aug}}$ comes from the theory of the $\widetilde{R}$-polynomials, defined by Kazhdan and Lusztig, using a formula for these polynomials due to Dyer based on the Bruhat graph and reflection orderings. This is the content of Chapter 4.
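The vanishing dichotomy in the proposition above is an instance of finite character orthogonality: a character summed over a finite abelian group gives the group order if it is trivial on the group, and $0$ otherwise. Here is a toy numerical check with roots of unity (our own setup; the cyclic group merely stands in for the finite groups in the proposition):

```python
import cmath

def character_sum(N, a):
    # Sum of kappa^a over the group mu_N of N-th roots of unity:
    # equals N when the character kappa -> kappa^a is trivial on mu_N
    # (i.e. N divides a), and 0 otherwise.
    return sum(cmath.exp(2j * cmath.pi * k * a / N) for k in range(N))

print(round(abs(character_sum(6, 12))))  # trivial on mu_6 -> 6
print(round(abs(character_sum(6, 4))))   # nontrivial -> 0
```

The two outcomes mirror the two cases of the displayed sum over $\kappa_\chi$.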
Section~\ref{section::reflection-orderings} defines the notion of a reflection ordering $\prec$ on the positive roots of the root system $\Phi$ associated to the Weyl group $W$. The set of paths $u\stackrel{\Delta}{\longrightarrow} v$, for $u$ and $v$ in $W$, through the Bruhat graph whose edges are increasing with respect to $\prec$ is denoted $B_\Phi^\prec(u,v)$. This information determines the $\widetilde{R}$-polynomials according to Theorem~\ref{dyer-R-polynomial-formula}: \begin{unnumthm}{\rm{(Dyer)}} Let $\widetilde{W}_{\chi}$ be the extended affine Weyl group of $H_{\chi_r}$, and let $\prec$ be a reflection ordering on $W_{\chi, \rm{aff}}$. Let $Q_r = q^{-r/2} - q^{r/2}$. For any $u, v \in \widetilde{W}_{\chi}$ such that $u \leq_\chi v$ in Bruhat order, $$\widetilde{R}^\chi_{u,v} (Q_r) = \sum_{\Delta \in B_{\Phi_{\chi, \rm{aff}}}^\prec(u, v)} Q_r^{\ell(\Delta)}.$$ \end{unnumthm} The main objective of Chapter 4 is to rewrite this formula for use in the proof of the main theorem. Suppose we start with an element $w$ in the extended affine Weyl group $\widetilde{W}$. Such an element has the form $w = t_\lambda \bar{w}$ where $t_\lambda$ is the translation element for the coweight $\lambda$ and $\bar{w}$ is an element of the finite Weyl group. When we want to emphasize that a translation element is the ``translation part'' of some $w \in \widetilde{W}$, we write $t_{\lambda(w)}$. Now suppose further that $w$ is $\mu$-admissible: Haines and Pettet~\cite{haines-pettet2002} showed that such elements satisfy $w \leq t_{\lambda(w)}$ in the Bruhat order on $\widetilde{W}$. Proposition~\ref{prop::finite-reflections-in-interval} shows that for any $\prec$-increasing path from $w$ to $t_{\lambda(w)}$, all edges of the path correspond to \emph{finite} reflections: \begin{unnumprop} Let $\mu$ be a dominant minuscule coweight of $\Phi$, and let $(W,S)$ be the finite Weyl group of $\Phi$ inside the affine Weyl group $(W_{\rm aff}, S_{\rm aff})$. 
Let $T$ be the set of reflections in $W$. Consider a $\mu$-admissible element $w \leq t_{\lambda (w)}$. There exists a length-zero element $\sigma$ in $\widetilde{W}$ such that $w, t_{\lambda(w)} \in \sigma W_{\rm{aff}}$. Let $w \stackrel{\Delta}{\longrightarrow} t_{\lambda (w)}$ be any path in the Bruhat graph $\Omega_{(W_{\rm{aff}}, S_{\rm{aff}})}$. Each reflection in the edge set $E(\Delta) = \{t_1, \ldots, t_n\}$ belongs to $T$. \end{unnumprop} As a consequence of this proposition, we see that $B_{\Phi(G,T)_{\rm{aff}}}^\prec(w, t_{\lambda(w)})$ has a $\prec$-preserving bijection to a certain interval $B_{\Phi(G,T)}^\prec(w_\lambda^{-1}\bar{w}, w_\lambda^{-1})$ whose members are paths with edges only in the \emph{finite} Weyl group. See Proposition~\ref{bruhat-interval-isomorphism}. Moreover, for each path $\Delta \in B_{\Phi(G,T)}^\prec(w, t_{\lambda(w)})$, Lemma~\ref{path-root-system-lemma} shows how to construct a root system $J_\Delta \subset \Phi(G,T)$. All of this comes together in a ``stratified'' version of Dyer's formula, presented in Corollary~\ref{cor::finite-interval-r-polynomial}, which applies to the polynomials $\widetilde{R}_{w, t_{\lambda(w)}}^J(Q_r)$, defined in Chapter~3 (see discussion following Lemma~\ref{lemma::equality-on-strata}): \begin{unnumcor} Let $w = t_\lambda \bar{w} \in {\rm{Adm}}_{G_r}(\mu)$ and $J \subseteq \Phi$. Then $$ \widetilde{R}_{w,t_\lambda}^J (Q_r) = \sum_{J^\prime \subseteq J}\; \sum_{\substack{\Delta \in B_\Phi^\prec (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} Q_r^{\ell(\Delta)}. $$ \end{unnumcor} Finally, in Chapter 5 we come to the main result. Several lemmas employ the results from Chapters 3 and 4 to rewrite our original formula for $\phi_{r,1}^{\rm{aug}}(I_r^+ s w I_r^+)$.
The proof of the theorem requires the following assumptions on $G$ (see Remark~\ref{rmk::roche-assumptions}): \begin{enumerate} \item $G$ is a split connected reductive group with connected center, \item The derived group $G_{\rm{der}}$ is simply-connected, and \item $W_\chi = W_\chi^\circ$. \end{enumerate} For $w \in \widetilde{W}$, $s \in T(k_r)$ and $J \subseteq \Phi$, define a symbol $\delta (s, w, J)$ by $$ \delta(s, w, J) = \begin{cases} \ 0, & {\rm{if}}\ w\notin {\rm{Adm}}_{G_r}(\mu) \\ \ 0, & {\rm{if}}\ w \in {\rm{Adm}}_{G_r}(\mu)\ {\rm{and}}\ N_r(s) \notin A_{w, J, k_F} \\ \ 1, & {\rm{if}}\ w \in {\rm{Adm}}_{G_r}(\mu)\ {\rm{and}}\ N_r(s) \in A_{w, J, k_F}. \end{cases} $$ Let $S_{w, J_\Delta}^{\rm{tors}}$ be the torsion subgroup of $S_{w, J_\Delta}$. Then Theorem~\ref{main-theorem} ties everything together: \begin{maintheorem} Let $w \in \widetilde{W}$ and $s \in T(k_r)$. Let $d$ be the rank of $T$. Fix a reflection ordering $\prec$ on $\Phi$, and set $c(\Delta) = \left[\ell(w, t_\mu) - \ell(\Delta)\right]/2$. The coefficient of $\phi_{r,1}^{\rm{aug}}$ at the $I_r^+$-double coset of $(s,w)$ is given by $$ (-1)^d \sum_{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1}\vert (q-1)^{d-{\rm{rank}}(J_\Delta) -1} q^{r c(\Delta)} (1-q^r)^{\ell(\Delta)-d}. $$ \end{maintheorem} Corollary~\ref{cor::drinfeld-case} shows how to recover the formula for the Drinfeld case obtained by Haines and Rapoport as a special case of the Main Theorem. Section~\ref{section::implementation-remarks} follows this with remarks on using the formula for calculations and a description of how the formula can be implemented in software. We conclude with two sections that discuss features of some data gathered by computer for cases where $G$ is a general linear group or a general symplectic group. Tables located in the Appendix present the data in full.
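Since Section~\ref{section::implementation-remarks} discusses implementing the formula in software, the arithmetic skeleton of the Main Theorem's coefficient can be sketched as follows. This is our own schematic paraphrase, with every group-theoretic ingredient ($\delta(s,w,J_\Delta)$, the torsion counts, ${\rm{rank}}(J_\Delta)$, and the path lengths $\ell(\Delta)$) supplied as hypothetical precomputed data rather than computed from the root system:

```python
def main_theorem_coefficient(paths, d, q, r, ell_w_tmu):
    """Assemble the Main Theorem coefficient from precomputed path data.

    Each entry of `paths` describes one increasing path Delta and carries
    the (hypothetical, precomputed) keys:
      'delta'   -- the indicator delta(s, w, J_Delta), 0 or 1,
      'torsion' -- |S_{w, J_Delta}^{tors} intersected with K_{q-1}|,
      'rank_J'  -- rank(J_Delta),
      'ell'     -- the path length ell(Delta).
    """
    total = 0
    for p in paths:
        c = (ell_w_tmu - p['ell']) // 2  # c(Delta) = [ell(w, t_mu) - ell(Delta)] / 2
        total += (p['delta'] * p['torsion']
                  * (q - 1) ** (d - p['rank_J'] - 1)
                  * q ** (r * c)
                  * (1 - q ** r) ** (p['ell'] - d))
    return (-1) ** d * total

# A made-up two-path example (all inputs hypothetical):
paths = [
    {'delta': 1, 'torsion': 2, 'rank_J': 0, 'ell': 2},
    {'delta': 0, 'torsion': 1, 'rank_J': 1, 'ell': 4},
]
assert main_theorem_coefficient(paths, d=2, q=3, r=1, ell_w_tmu=4) == 12
```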
\subsection{Preliminaries} \label{section::preliminaries} We conclude this Introduction by introducing some terms and notation concerning reductive algebraic groups defined over non-archimedean local fields. As the main purpose of this section is to introduce notation and supplementary facts, almost all of the details are left out, but references are given. Several unrelated concepts will be represented by similar symbols in this paper; for example, we use $W$ for a finite Weyl group and $W_F$ for a Weil group of a local field $F$; we use $T$ for both a split maximal torus of $G$ and the set of reflections in a Weyl group; and an Iwahori subgroup of an algebraic group $G$ defined over $F$ is denoted $I$, while the inertia subgroup of a Galois group $\rm{Gal}(\bar{F}/F)$ is denoted $I_F$. \subsubsection{$p$-adic fields} The following material on $p$-adic fields and related ideas has been drawn from the book by Serre~\cite{serre1979} and the article by Tate~\cite{tate1979} found in the Corvallis proceedings. The symbol $F$ shall always refer to a non-archimedean local field, which is sometimes also referred to as a \textbf{$p$-adic field}. Let $\mathcal{O}_F$ and $k_F$ denote its ring of integers and its residue field, respectively. The cardinality of $k_F$ shall be denoted $q$. Fix an algebraic closure $\bar{F} \supset F$. The \textbf{Galois group} ${\rm{Gal}}(\bar{F}/F)$ is the profinite topological group of automorphisms of $\bar{F}$ which fix $F$. Let $\varpi$ be a uniformizer of $\mathcal{O}_F$. Given an element $x = u\varpi^n$, for $u$ a unit, define ${\rm{val}}(x) = n$. A standard convention is to set ${\rm{val}}(0) = \infty$. Thus we get a map ${\rm{val}}: \mathcal{O}_F \rightarrow \mathbb{N} \cup \{\infty\}$. For each $r \in \mathbb{N}$, there exists a degree $r$ \textbf{unramified extension} $F_r \supset F$ (see~\cite{serre1979}, III.5), which is unique inside $\bar{F}$.
Given such an extension, the algebraic integers inside $F_r$ are denoted $\mathcal{O}_r$ and its residue field is written $k_r$. Its Galois group ${\rm{Gal}}(F_r/F)$ is isomorphic to $\mathbb{Z}/r\mathbb{Z}$. Define the \textbf{norm map} $N_r : F_r \rightarrow F$ by $$ N_r (z) = \prod_{i = 0}^{r-1} \theta^i(z), $$ where $\theta$ is a generator of the cyclic group ${\rm{Gal}}(F_r/F)$. The following definition of a \textbf{Weil group} is drawn from \cite{tate1979}, Section 1.4.1. A Weil group for $F$ is a group $W_F$ embedded in ${\rm{Gal}}(\bar{F}/F)$ whose closure is the Galois group itself. Let $\hat{k} = \cup_{E/F} k_E$ be the union of all residue fields for finite extensions $E/F$. Then $W_F$ consists of the elements which act on $\hat{k}$ by an integer power of the Frobenius, i.e., $x \mapsto x^{q^n}$, for $x \in \hat{k}$ and some $n \in \mathbb{Z}$. The Weil group contains a \textbf{geometric Frobenius element} $\Phi_F$ and fits into a short exact sequence $$ 1 \longrightarrow I_F \longrightarrow W_F \longrightarrow \mathbb{Z} \longrightarrow 1, $$ where $I_F$ is the \textbf{inertia subgroup}. It is a fact that a Weil group exists for every $p$-adic field, and moreover, this group is unique up to isomorphism. One of the consequences of Local Class Field Theory (see \cite{serre1979}, XIII) is the existence of a \textbf{reciprocity map} $\tau_F : W_F \rightarrow F^\ast$. For any finite extension $E/F$, there is an isomorphism $r_E: E^\ast \rightarrow W_E^{\rm{ab}}$. Tate goes on to define the \textbf{Weil-Deligne group} $W_F^\prime$ associated to $W_F$ in~\cite{tate1979}, Definition 4.1.1. The representation theory of Weil-Deligne groups is a major feature of the Local Langlands Conjecture, but we will not need the details in what follows. \subsubsection{Root systems} We briefly introduce the notions of a root system and its Weyl group, the latter of which is a type of Coxeter group. Much more will be said about Coxeter groups in Section~\ref{section::coxeter-group-background}.
There are many excellent references for this subject, such as Humphreys~\cite{humphreys1990}, Bourbaki~\cite{bourbaki4-6}, and the more recent book by Bj{\"o}rner and Brenti~\cite{bjorner-brenti2005}, which focuses on combinatorics. Let $V$ be a Euclidean space endowed with an inner product $\langle\ ,\ \rangle$. Given a vector $\alpha \in V$, define the reflection $s_\alpha$ with respect to the hyperplane in $V$ orthogonal to $\alpha$. That is, $$ s_\alpha (x) = x - \frac{2\langle x, \alpha\rangle}{\langle \alpha,\alpha\rangle}\alpha. $$ A \textbf{root system} $\Phi$ is a finite set of vectors in $V$ such that, for every $\alpha \in \Phi$, $\Phi \cap \mathbb{R}\alpha = \{\alpha, -\alpha\}$ and $s_\alpha (\Phi) = \Phi$. A root system can be partitioned into a disjoint union of positive and negative roots, written $\Phi = \Phi^+ \cup \Phi^-$, by choosing a basis $\Delta \subset \Phi$. The reflections $s_\alpha$ generate the \textbf{Weyl group} $W = \langle s_\alpha \mid \alpha \in \Phi \rangle$ of $\Phi$; in fact, the simple reflections $s_\alpha$ for $\alpha \in \Delta$ already suffice. All Coxeter groups admit a partial ordering called the Bruhat order. Furthermore, Coxeter groups have a length function $\ell$. A difference of lengths $\ell(v) - \ell(u)$ is sometimes written $\ell(u,v)$ as in~\cite{bjorner-brenti2005}. Given a root $\alpha \in \Phi$, define its \textbf{coroot} $\alpha^\vee$ by $$ \alpha^\vee = \frac{2\alpha}{\langle \alpha, \alpha\rangle}. $$ The set of such coroots forms the \textbf{dual root system} $\Phi^\vee$. A \textbf{weight} of $\Phi$ is a vector $\lambda \in V$ such that $\langle \lambda, \alpha^\vee\rangle \in \mathbb{Z}$ for all $\alpha \in \Phi$. The set of weights forms a lattice, which we sometimes denote $X$. A $\textbf{coweight}$ of $\Phi$ is $\eta \in V$ such that $\langle\eta, \alpha\rangle \in \mathbb{Z}$ for all $\alpha \in \Phi$; these too form a lattice, sometimes denoted $Y$.
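The reflection and coroot formulas displayed above can be checked concretely. The following sketch is ours (the $A_2$ coordinates are one standard planar realization, not taken from the text); it verifies that $s_\alpha$ is an involution sending $\alpha$ to $-\alpha$, and that the pairing of a root against a coroot is an integer:

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(x, alpha):
    """s_alpha(x) = x - (2<x, alpha>/<alpha, alpha>) alpha."""
    c = 2 * inner(x, alpha) / inner(alpha, alpha)
    return tuple(xi - c * ai for xi, ai in zip(x, alpha))

def coroot(alpha):
    """alpha^vee = 2 alpha / <alpha, alpha>."""
    c = 2 / inner(alpha, alpha)
    return tuple(c * ai for ai in alpha)

# Simple roots of type A_2, realized in the plane at 120 degrees:
a1 = (1.0, 0.0)
a2 = (-0.5, 3 ** 0.5 / 2)

assert reflect(a1, a1) == (-1.0, 0.0)            # s_alpha(alpha) = -alpha
assert reflect(reflect(a2, a1), a1) == a2        # s_alpha is an involution
assert abs(inner(a2, coroot(a1)) + 1) < 1e-12    # Cartan integer <a2, a1^vee> = -1
```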
Given an irreducible root system $\Phi$, there is an associated \textbf{affine root system} $\Phi_{\rm{aff}}$ which has a corresponding Coxeter group $W_{\rm{aff}}$ called the \textbf{affine Weyl group}. Suppose $(W,S)$ is the (finite) Weyl group of $\Phi$. Let $\alpha_0$ denote the highest root of $\Phi$, and let $s_0 = t_{\alpha_0^\vee} s_{\alpha_0}$. Then the set of Coxeter generators of the affine Weyl group is $S_{\rm{aff}} = S \cup \{s_0\}$. The affine Weyl group can be further enlarged to the \textbf{extended affine Weyl group} $\widetilde{W}$, defined as the semidirect product $\widetilde{W} = Y \rtimes W$. See Macdonald's book~\cite{macdonald2003}, Section 2.1, for additional details. The affine reflections fix hyperplanes in $V$, which chop the space up into \textbf{alcoves}, as described in \cite{bourbaki4-6}, Chapter 5. A choice of basis for $\Phi$ determines a Weyl chamber of $V$, and we choose the \textbf{fundamental alcove} $\mathcal{C}$ as the alcove in this chamber whose closure contains the origin. In the situation laid out in Section~\ref{section::summary-of-thesis}, consider the apartment corresponding to $T$ in the Bruhat-Tits building of $G$. (See \cite{tits1979} for more information about buildings.) Our choice of Borel subgroup $B \supset T$ determines a basis of the root system $\Phi(G, T)$, and $\mathcal{C}$ is the unique alcove in the $B$-positive Weyl chamber inside the apartment whose closure contains the origin. The affine Weyl group is generated by reflections through the walls of $\mathcal{C}$. Let $\Omega[\mathcal{C}]$ denote the subgroup of $\widetilde{W}$ which stabilizes $\mathcal{C}$. Then we get a second realization of $\widetilde{W}$ as the semidirect product $W_{\rm{aff}} \rtimes \Omega[\mathcal{C}]$. \subsubsection{Reductive algebraic groups over $p$-adic fields} So far we have encountered Galois and Weil groups, along with several variants of Weyl groups.
We conclude these preliminaries by giving some definitions pertaining to linear algebraic groups, i.e., Zariski closed subgroups of a general linear group viewed as an algebraic variety, focusing on the case of (split) reductive groups defined over local fields. For the general theory of linear algebraic groups, see for example the books by Borel~\cite{borel1991} and Humphreys~\cite{humphreys1975}. The structure theory of reductive groups over local fields is highly developed; see the survey by Tits~\cite{tits1979} as a starting point. Let $G$ denote a connected reductive algebraic group that is split over $F$. Its group of $F$-rational points, denoted $G(F)$, has a neighborhood basis of compact open subgroups; and moreover, $G(F)$ is unimodular, hence we may speak of a choice of Haar measure on $G(F)$. As in Section~\ref{section::summary-of-thesis}, we fix a Borel subgroup $B$ and let $T$ denote the split maximal torus inside $B$. The pair $(G,T)$ determines a root system $\Phi = \Phi(G,T)$ whose positive roots are denoted $\Phi^+$. Let $U$ denote the unipotent radical of $B$. Then $$ B = TU = T \prod_{\alpha \in \Phi^+} U_\alpha, $$ where each root subgroup $U_\alpha$ is normalized by $T$; see \cite{tits1979} Section 1. The \textbf{Iwahori subgroup} $I$ with respect to this configuration is the preimage of $B(k_F)$ in $G(\mathcal{O}_F)$ under the ``mod $\varpi$'' map. The maximal pro-unipotent subgroup of $I$ is called its \textbf{pro-unipotent radical} $I^+$. The subgroup $I^+$ is a pro-$p$ group. The \textbf{character group} of $T$ is $\chars{T} = {\rm{Hom}}_F(T, \mathbb{G}_m)$. This group can be thought of as the weight lattice of $\Phi(G,T)$. On the other hand, the \textbf{cocharacter group} $\cochars{T} = {\rm{Hom}}_F(\mathbb{G}_m, T)$ is the coweight lattice of $\Phi(G,T)$.
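For orientation, consider the standard example (well known, and not specific to this thesis) of $G = GL_n$ with $T$ its diagonal torus. Then $\chars{T} \cong \mathbb{Z}^n$ and $\cochars{T} \cong \mathbb{Z}^n$ via $$ \chi_a({\rm{diag}}(t_1, \ldots, t_n)) = \prod_{i=1}^n t_i^{a_i}, \qquad \lambda_b(z) = {\rm{diag}}(z^{b_1}, \ldots, z^{b_n}), $$ and the composite $(\chi_a \circ \lambda_b)(z) = z^{\sum_i a_i b_i}$ exhibits the perfect pairing $\langle a, b \rangle = \sum_i a_i b_i$ between the weight and coweight lattices.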
A cocharacter $\lambda \in X_* (T)$ is \emph{dominant} if $\langle \lambda, \alpha \rangle \ge 0$ for all $\alpha \in \Phi^+$, and it is \emph{minuscule} if $\langle \lambda, \alpha \rangle \in \{ -1, 0, 1\}$ for all $\alpha \in \Phi$. (See also~\cite{bourbaki4-6}, VI.1 Exercise 24.) Throughout this paper, the letter $\mu$ will be reserved for a dominant, minuscule cocharacter in $\cochars{T}$ with respect to $\Phi(G,T)$. The set of dominant cocharacters is written $\cochars{T}_{\rm{dom}}$. Our torus $T$ determines a unique \textbf{dual torus} $\widehat{T}$ defined over $\mathbb{C}$ whose character group $\chars{\widehat{T}}$ is the free abelian group $\cochars{T}$. Viewing the split connected reductive group $G$ over $F$ as an $\mathcal{O}_F$-affine group scheme, we obtain data related to the above for each unramified extension $F_r/F$. We write $G_r$ for $G(F_r)$, $I_r$ for the corresponding Iwahori subgroup, etc. Given an unramified extension $F_r/F$, there is a norm $N_r : F_r \rightarrow F$ as described above. The Galois action of ${\rm{Gal}}(F_r/F)$ on $F_r$ extends to an action on $T(\mathcal{O}_r)$, and the associated norm (the product over the Galois conjugates) is also written $N_r : T(\mathcal{O}_r) \rightarrow T(\mathcal{O}_F)$. Observe that for each character $\xi : T(\mathcal{O}_F) \rightarrow \mathbb{C}^\times$, we may use the norm to get a new character $\xi_r = \xi \circ N_r$. This is an important type of character on $T(\mathcal{O}_r)$. \section{Test functions with pro-$p$ Iwahori level structure} \label{chapter::test-functions} As stated in the Introduction, our main result is a new formula for the coefficients of a function $\phi_{r,1}^{\rm{aug}}$ whose twisted orbital integrals agree with those of the test function $\phi_{r,1}$ with pro-$p$ Iwahori level structure, at least in the cases of general linear groups and general symplectic groups.
This chapter summarizes the results needed to define test functions as they arise from the Bernstein center, before going on to develop a first explicit formula for $\phi_{r,1}^{\rm{aug}}$ using various data about depth-zero characters. Subsequent chapters will translate this version of the formula into something based on the combinatorics of Coxeter groups. \subsection{Background on representation theory} \label{section::background-rep-theory} This section collects definitions and results concerning smooth representations of reductive algebraic groups defined over a non-archimedean local field. We also give some background on representations of Weil groups, which involves defining the Langlands dual group of $G$, in order to explain the Local Langlands Correspondence. For reference, we recommend the Fields Institute book~\cite{cunningham-nevins2009}, which introduces smooth representations of $p$-adic groups, representations of Weil groups, and the Local Langlands Correspondence in a single volume. \subsubsection{Smooth representations of $p$-adic groups} In this section, let $G$ be a connected reductive algebraic group defined over a $p$-adic field $F$. Let $V$ be a complex vector space. \begin{defin} A \textbf{smooth representation} of $G(F)$ is a homomorphism \\ ${\pi: G(F) \rightarrow {\rm{Aut}}(V)}$ such that for every $v \in V$ there exists a compact open subgroup $K \subset G(F)$ such that $v \in V^K$, where $$ V^K = \{v \in V \mid \pi(k)\cdot v = v, \forall k \in K\}. $$ The category of smooth representations of $G$ is denoted $\mathfrak{R}(G)$. \end{defin} Let $C_c^\infty (G)$ denote the space of locally constant, compactly supported functions $f : G(F) \rightarrow \mathbb{C}$. If $K$ is a compact open subgroup of $G(F)$, let $\mathcal{H}(G,K)$ denote the $K$-biinvariant functions in $C_c^\infty(G)$, that is, $f \in \mathcal{H}(G,K)$ satisfies $$ f(k_1 x k_2) = f(x),\ {\rm{for}}\ k_1, k_2 \in K\ {\rm{and}}\ x \in G.
$$ The \textbf{Hecke algebra} $\mathcal{H}(G)$ is the union $\cup_K \mathcal{H}(G,K)$ ranging over all compact open subgroups $K$ of $G(F)$, equipped with a convolution integral: $$ (f \ast h)(x) = \int_G f(g)h(g^{-1}x) dg, $$ where $dg$ is a fixed normalization of Haar measure. It is well-known that $\mathfrak{R}(G)$ is equivalent to the category of non-degenerate left $\mathcal{H}(G)$-modules. \begin{defin} Let $G$ be a connected reductive group with maximal torus $T$, and let $I$ be the Iwahori subgroup as specified above. The \textbf{Iwahori-Hecke algebra} $\mathcal{H}(G, I)$ is the set of functions $f \in C_c^\infty(G)$ which are constant on $I$-double cosets, with an algebra structure given by convolution. \end{defin} Let $P = MN$ be a Levi decomposition of a parabolic subgroup $P$ of $G(F)$. Given a representation $(\sigma,V)$ of $M$, we have an \textbf{induced representation} $\rm{Ind}_P^G (\sigma)$ in $\mathfrak{R}(G)$. The representation space of $\rm{Ind}_P^G (\sigma)$ is $$ \{f: G \rightarrow V \mid f(hg) = \sigma(h) f(g),\: \forall h \in P, g \in G\}. $$ The case where $P$ is a Borel subgroup, in which case it is denoted $B$, is particularly important. Here $B = TU$ for a maximal torus $T$, and so forming induced representations is a method for producing smooth representations of $G$ from characters on a torus. In fact, we generally consider \emph{normalized} induced representations, where $\sigma$ is twisted by a square root of the modulus function $\delta_P$. We write $i_P^G (\sigma) = \rm{Ind}_P^G(\delta_P^{1/2}\sigma)$. The category $\mathfrak{R}(G)$ can be further understood in terms of induced representations arising from supercuspidal representations of Levi subgroups. (This is part of Harish-Chandra's philosophy of cusp forms.) Much of the following terminology comes from the theory of Bushnell-Kutzko types~\cite{bushnell-kutzko1997}, though our presentation mostly follows~\cite{roche2009}, Section 1.7.
Let $M$ be a Levi subgroup of $G$, and let $\sigma$ be a supercuspidal representation of $M$. The pair $(M, \sigma)$ is called a \textbf{cuspidal pair}. There is a conjugacy relation on cuspidal pairs: Given $g \in G$, let $L = gMg^{-1}$ and define ${^g}\sigma$ by ${^g}\sigma(x) = \sigma(g^{-1}x g)$; then the resulting cuspidal pair $(L, {^g}\sigma)$ belongs to $(M, \sigma)_G$, the $G$-conjugacy class of $(M, \sigma)$. It turns out to be more useful to consider the coarser \textbf{inertial equivalence} relation: The pairs $(M, \sigma)$ and $(L, \tau)$ are equivalent if there exists $g \in G$ such that $L = gMg^{-1}$ and $\tau \cong {^g}\sigma \otimes \eta$ for some unramified character $\eta$ on the group $L$. Let $\mathfrak{s} = [M, \sigma]_G$ denote an inertial equivalence class, and call the set of all inertial equivalence classes for $G$, denoted $\mathfrak{B}(G)$, the \textbf{Bernstein spectrum}. Each inertial equivalence class $\mathfrak{s}$ gives rise to a full subcategory $\mathfrak{R}_{\mathfrak{s}} (G)$ of $\mathfrak{R}(G)$. The objects are described in terms of subquotients of induced representations. Specifically, let $\Pi$ be a smooth representation of $G$. Then $\Pi \in \mathfrak{R}_{\mathfrak{s}} (G)$ if and only if every irreducible subquotient $\pi$ of $\Pi$ has inertial support equal to $\mathfrak{s}$, i.e., if there is a cuspidal pair $(M, \sigma) \in \mathfrak{s}$ such that $\pi$ is a subquotient of $i_P^G(\sigma\eta)$ for $P = MN$ a Levi decomposition and $\eta$ an unramified character on $M$. \begin{thm} \emph{(Bernstein Decomposition)} The category of smooth representations of $G$ decomposes as $$ \mathfrak{R}(G) = \prod_{\mathfrak{s} \in \mathfrak{B}(G)} \mathfrak{R}_{\mathfrak{s}} (G). $$ \end{thm} \begin{proof} This result is originally due to Bernstein. See also~\cite{roche2009}, Theorem 1.7.3.1. 
\end{proof} \subsubsection{The Local Langlands Correspondence} \label{section::LLC} The primary reference for this section is Borel's article on automorphic L-functions~\cite{borel1979} from the Corvallis proceedings. The \textbf{L-group} of a connected reductive group $G$ is ${^L}G = \widehat{G} \rtimes W_F$, where $\widehat{G}$ is the connected reductive group defined over $\mathbb{C}$ determined by the dual root datum $(\cochars{T}, \Phi^\vee, \chars{T}, \Phi)$. When $G$ is split, the action of the Weil group on $\widehat{G}$ is trivial; so in this case we may use $\widehat{G}$ in place of ${^L}G$. In the representation theory of the Weil-Deligne group $W_F^\prime$, the L-group plays the role that the automorphism group of a vector space plays in ordinary representation theory; that is, we consider homomorphisms \\ $\varphi: W_F^\prime \rightarrow {^L}G$. More specifically, we consider ``admissible'' homomorphisms in the sense specified in \cite{haines2014}, Section 4. Following the discussion in \cite{haines2014} Section 5.1, we restrict an admissible homomorphism $\varphi$ on $W_F^\prime$ along the proper embedding $W_F \hookrightarrow W_F^\prime$ to get an admissible homomorphism $\lambda$ on the Weil group, where here ``admissible'' is in the sense of the footnote on p. 131 of \cite{haines2014}. The $\widehat{G}$-conjugacy class of an admissible homomorphism $\lambda$ on $W_F$, denoted $(\lambda)_{\widehat{G}}$, is called an \textbf{infinitesimal character}. The \emph{Local Langlands Correspondence} (LLC) predicts a finite-to-one relationship between the set of $\widehat{G}$-conjugacy classes of admissible homomorphisms of the Weil-Deligne group into the $L$-group, written $\Phi(G/F)$, and the set of smooth irreducible representations of $G(F)$, written $\Pi(G/F)$, satisfying desiderata given in \cite{borel1979}, which we will not recall here. Given $\pi \in \Pi(G/F)$, its \textbf{Langlands parameter} in $\Phi(G/F)$ is denoted $\varphi_\pi$. The LLC is a theorem in several cases important for our present purpose.
First, it is a theorem for all tori, as we recall in Section~\ref{section::llc-for-tori}. Also, in a major breakthrough, Harris and Taylor proved the LLC for $GL_n$, which is among the split connected reductive groups with connected center being considered here. \subsection{Test functions via the Bernstein center} The precise statement of the Test Function Conjecture of Haines and Kottwitz relies on a significant amount of machinery from the study of bad reduction of Shimura varieties that will not be covered here; however, we will provide enough detail to define the test function $\phi_r$ at the heart of the conjecture. The primary reference for this section is~\cite{haines2014}. Although the Bernstein center is initially defined in categorical terms, there are three concrete ways to describe it, each of which will play into the present approach to test functions. The first part of this section explains each of these alternative descriptions. Then we apply this theory to define the test functions. The definition relies on the LLC+ conjecture in order to embed a distribution in the \emph{stable} Bernstein center $\mathfrak{Z}^{\rm{st}}(G)$ as an element of the usual Bernstein center $\mathfrak{Z}(G)$. We emphasize that our present objective is to study the test function $\phi_r$ in the context of the Test Function Conjecture and not to explain the conjecture itself. Several important concepts and objects are mentioned in this section with only enough exposition to lead us to a definition of $\phi_r$. The reader is encouraged to read the relevant parts of \cite{haines2014} for the full story. \subsubsection{The Bernstein Center} \begin{defin} The \textbf{Bernstein center} $\mathfrak{Z}(G)$ of a connected reductive algebraic group $G$ defined over a $p$-adic field is the center of the category $\mathfrak{R}(G)$, i.e., the endomorphism ring of the identity functor.
An element $\xi \in \mathfrak{Z}(G)$ is a family of morphisms $\xi_A: A \rightarrow A$ such that for any morphism $f: A\rightarrow B$ the following diagram commutes: { \large \[ \xymatrix{ A \ar[d]_{\xi_A} \ar[r]^f & B \ar[d]^{\xi_B} \\ A \ar[r]^f & B } \] } \end{defin} The first concrete realization of $\mathfrak{Z}(G)$ is as an algebra of certain distributions. A \textbf{distribution} is a linear map $D: C_c^\infty(G) \rightarrow \mathbb{C}$. Given $f \in C_c^\infty(G)$, one can define a new function $D \ast f$; see \cite{haines2014} Section 3.1. If $D$ is ``essentially compact,'' then $D \ast f \in C_c^\infty(G)$. Lemma 4.1 and Corollary 4.2 of \cite{haines2014} show that the set of $G$-invariant, essentially compact distributions on $G$ forms a commutative and associative $\mathbb{C}$-algebra $(\mathcal{D}(G)_{\rm{ec}}^G, \ast)$. Second, $\mathfrak{Z}(G)$ is isomorphic to an inverse limit of centers of Hecke algebras. Given a compact open subgroup $J$ of $G$, consider the center $\mathcal{Z}(G, J)$ of the Hecke algebra $\mathcal{H}(G, J)$. This is an algebra under convolution, and we choose a Haar measure $dx_J$ such that $\rm{vol}_{dx_J} (J) = 1$. Let $1_J$ denote the characteristic function of the subgroup $J$. For $J^\prime \subset J$, there is a corresponding morphism of algebras $\mathcal{Z}(G, J^\prime) \rightarrow \mathcal{Z}(G, J)$ given by $z_{J^\prime} \mapsto z_{J^\prime} \ast_{dx_{J^\prime}} 1_J$. So we can form the inverse limit $\varprojlim_J \mathcal{Z}(G, J)$; it is a fact that $\mathfrak{Z}(G) \cong \varprojlim_J \mathcal{Z}(G, J)$. The final realization of the Bernstein center uses the inertial equivalence classes defined in Section~\ref{section::background-rep-theory}. For $\mathfrak{s} = [M, \sigma]_G \in \mathfrak{B}(G)$, let $$ \mathfrak{X}_\mathfrak{s} = \{ (L, \tau)_G \mid (L, \tau)_G \sim (M, \sigma)_G\}, $$ that is, the set of $G$-conjugacy classes of cuspidal pairs encompassed by the inertial equivalence class $\mathfrak{s}$.
Now define a disjoint union $$ \mathfrak{X}_G = \coprod_{\mathfrak{s} \in \mathfrak{B}(G)} \mathfrak{X}_{\mathfrak{s}}. $$ This set can be given a variety structure. The Bernstein center is isomorphic to the ring of regular functions $\mathbb{C}[\mathfrak{X}_G]$. \begin{thm} In summary, the Bernstein center $\mathfrak{Z}(G)$ satisfies the following isomorphisms, $$ \mathfrak{Z}(G) \cong (\mathcal{D}(G)_{\rm{ec}}^G, \ast) \cong \varprojlim_J \mathcal{Z}(G, J) \cong \mathbb{C}[\mathfrak{X}_G]. $$ \end{thm} \begin{proof} This is all in \cite{haines2014} Section 3. \end{proof} Recall that given a distribution $Z \in \mathfrak{Z}(G)$ and a compact open subgroup $J \subset G$, the element $Z \ast 1_J$ belongs to the Hecke algebra $\mathcal{H}(G,J)$. As such, we can study the action of $Z \ast 1_J$ on $\pi^J$ for a representation $\pi \in \mathfrak{R}(G)$ viewed as a $\mathcal{H}(G)$-module. The cuspidal support of an irreducible smooth representation $\pi \in \mathfrak{R}(G)$ is a point in the variety $\mathfrak{X}_G$. Viewing $Z \in \mathfrak{Z}(G)$ as a regular function on this variety, we may define a scalar $Z(\pi)$ as the value of $Z$ at this point. \begin{prop} Let $\pi$ be a finite-length smooth representation. For every compact open subgroup $J \subset G$, $Z \ast 1_J$ acts on $\pi^J$ by $Z(\pi)$. \end{prop} \begin{proof} This statement is \cite{haines2014} Corollary 4.3(a). \end{proof} \subsubsection{Definition of a test function} Test functions may be defined in terms of distributions coming from the Bernstein center. We construct a particular distribution $Z_{V_\mu}$ in the \emph{stable} Bernstein center $\mathfrak{Z}^{\rm{st}}(G)$ attached to a representation $(r_\mu, V_\mu)$ of the Langlands dual group ${^L}G$ determined by a dominant minuscule cocharacter $\mu$ in $\cochars{T}$. The test function is obtained by convolving this distribution with the characteristic function of the level structure subgroup of $G$.
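Convolution against the normalized characteristic function of a compact open subgroup has a transparent finite model: in $\mathcal{H}(G)$, the identity $1_J \ast 1_J = {\rm{vol}}(J)\, 1_J$ makes ${\rm{vol}}(J)^{-1} 1_J$ an idempotent. The sketch below (ours, with counting measure on a finite cyclic group playing the role of Haar measure) checks the finite-group analogue:

```python
def convolve(f, h, G, add, inv):
    """(f * h)(x) = sum_{g in G} f(g) h(g^{-1} x), counting measure on a finite group."""
    return {x: sum(f[g] * h[add(inv(g), x)] for g in G) for x in G}

n = 6
G = list(range(n))                  # the cyclic group Z/6, written additively
add = lambda a, b: (a + b) % n
inv = lambda a: (-a) % n

K = {0, 3}                          # a subgroup of Z/6
e_K = {x: 1 / len(K) if x in K else 0.0 for x in G}   # normalized indicator

# e_K * e_K = e_K: the normalized indicator is an idempotent for convolution.
assert convolve(e_K, e_K, G, add, inv) == e_K
```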
Recall from Section~\ref{section::LLC} that restricting an admissible homomorphism $\varphi$ on $W_F^\prime$ gives us an admissible homomorphism $\lambda$ on $W_F$. \begin{defin} \label{defin::semisimple-trace} Let $(r,V)$ be a complex, finite-dimensional representation of ${^L}G$. Given a geometric Frobenius element $\Phi \in W_F$ and an admissible homomorphism $\lambda: W_F \rightarrow {^L}G$, define the \textbf{semisimple trace} by $$ {\rm{tr}}^{\rm{ss}}(\lambda(\Phi), V) = {\rm{tr}}(r\lambda(\Phi), V^{r\lambda (I_F)}). $$ \end{defin} An infinitesimal character $(\lambda)_{\widehat{G}}$ defines an element of a certain variety $\mathfrak{Y}$ (see~\cite{haines2014} Chapter 5). Assuming the LLC+ conjecture \cite{haines2014}, Section 5.2, the Bernstein variety $\mathfrak{X}_G$ has a quasi-finite surjection onto $\mathfrak{Y}$ when $G$ is split. \begin{prop} \label{prop::reg-func-semisimple-trace} The map $\lambda \mapsto {\rm{tr}}^{\rm{ss}}(\lambda(\Phi), V)$ defines an element $Z_V \in \mathfrak{Z}^{\rm{st}}(G)$ as a regular function on $\mathfrak{Y}$ given by $$ Z_V((\lambda)_{\widehat{G}}) = {\rm{tr}}^{\rm{ss}}(\lambda(\Phi), V). $$ \end{prop} \begin{proof} This is part of Proposition 4.28 of~\cite{haines2014}. \end{proof} \begin{rmk} If our split connected reductive group $G$ defined over $F$ satisfies the LLC+ conjecture, then there is an injective homomorphism $\mathfrak{Z}^{\rm{st}}(G) \rightarrow \mathfrak{Z}(G)$. In this case, the distributions $Z_V$ may be viewed as elements of $\mathfrak{Z}(G)$. \end{rmk} \begin{thm} \emph{(The Theorem of the Highest Weight)} Let $G$ be a reductive linear algebraic group over an algebraically closed field $K$. An irreducible, finite-dimensional $K$-representation has a unique $\textbf{highest weight}$. Every dominant weight of the root system of $G$ is the highest weight of such a representation, which is unique up to isomorphism. \end{thm} \begin{proof} See~\cite{humphreys1975}, Theorem 31.3.
Although the reference states the theorem for semisimple groups, the argument can be applied to reductive groups. \end{proof} Thus, given a dominant minuscule cocharacter $\mu \in \cochars{T}$, there exists a highest weight representation $(r_\mu, V_{\mu}) \in {\rm{Rep}}(\widehat{G})$, which is unique up to isomorphism. \begin{defin} \label{defin::test-function} Let $G$ be a split connected reductive group defined over a non-archimedean local field $F$ with split maximal torus $T$. Consider a degree $r$ unramified extension $F_r/F$. Denote the $F_r$-rational points by $G_r$. The residue field of $F_r$ has cardinality $q^r$. Let $\mu$ be a dominant cocharacter of $T$ and $K_r$ a compact open subgroup of $G_r$. Let $t_\mu$ denote the translation element in $\widetilde{W}$ corresponding to $\mu$. Finally, define the \textbf{test function with $K_r$-level structure} for $(G_r, \mu)$ to be $$ \phi_r = q^{r\ell(t_\mu)/2}(Z_{V_\mu} \ast 1_{K_r}) $$ \end{defin} Observe that $\phi_r$ lies in $\mathcal{Z}(G_r,K_r)$ by the theory of the Bernstein center. The terminology ``$K_r$-level structure'' routinely appears in articles on the theory of Shimura varieties with bad reduction at a place dividing $p$. The group we have called $K_r$ corresponds to the compact open subgroup $K_p \subset G(\mathbb{Q}_p)$, which is a part of a Shimura datum. As described in the Introduction, the Test Function Conjecture~\cite{haines2014}, Conjecture 4.30, predicts that $\phi_r$ can be used to prove a certain formula for the semi-simple Lefschetz number via the Langlands-Kottwitz method. \begin{defin} The \textbf{test function with $I_r^+$-level structure} is $$ \phi_{r,1} = q^{r\ell(t_\mu)/2}(Z_{V_\mu} \ast 1_{I_r^+}). $$ \end{defin} \subsection{Data associated to a depth-zero character} \label{section::depth-zero-characters} We now begin preparations to rewrite $\phi_{r,1}$ via depth-zero characters on $T(\mathcal{O}_r)$. 
As we shall see in the next section, $\phi_{r,1}$ can be expressed as a sum indexed by the group of depth-zero characters $\xi$ on $T(\mathcal{O}_r)$ by considering certain idempotents in the Hecke algebra $\mathcal{H}(G_r)$. \begin{rmk} \label{rmk::warning-on-different-weyl-grp-notation} Many of the definitions and results in this section can be found in {\rm{\cite{roche1998}}}; however, we have followed the notation used in {\rm{\cite{haines-rapoport2012}}}. The main danger of confusion lies with the variants of Weyl groups. Specifically, we use $W$ for the finite Weyl group and $\widetilde{W}$ for the extended affine Weyl group, whereas Roche denotes these groups $\overline{W}$ and $W$, respectively. \end{rmk} \subsubsection{First properties} \label{section::first-properties-dz-chars} \begin{defin} Let $T$ be a split maximal torus in $G$. A \textbf{depth-zero character} on $T(\mathcal{O}_F)$ is a smooth character $\chi: T(\mathcal{O}_F) \rightarrow \mathbb{C}^{\times}$ that factors through $T(k_F)$. The resulting character $T(k_F) \rightarrow \mathbb{C}^{\times}$ is also denoted $\chi$. Similarly, a depth-zero character on $T(\mathcal{O}_r)$ is a character $\xi$ that factors through $T(k_r)$. Let $T(k_F)^\vee$ and $T(k_r)^\vee$ denote the sets of depth-zero characters on the groups $T(\mathcal{O}_F)$ and $T(\mathcal{O}_r)$, respectively. \end{defin} Next, we will associate a root system $\Phi_\chi$, called a \textbf{$\chi$-root system}, to a character $\chi$ on $T(\mathcal{O}_F)$. For this construction, $\chi$ need not have depth zero. \begin{prop} The set $\Phi_\chi = \{ \alpha \in \Phi \mid {\chi \circ \alpha^\vee\vert}_{{\mathcal{O}_F^\times}} = 1\}$ is a root system. \end{prop} \noindent This statement also appears in Roche~\cite{roche1998} and Goldstein's thesis~\cite{goldstein1990}.
\begin{proof} We say that a subset $J$ of a root system $\Phi$ is \emph{closed} if it satisfies the following condition: If $\alpha, \beta \in J$ and $\alpha + \beta \in \Phi$, then $\alpha + \beta \in J$. A subset $J$ of $\Phi$ is \emph{symmetric} if $\alpha \in J$ implies $-\alpha \in J$. Following Bourbaki~\cite{bourbaki4-6}, it suffices to show that $\Phi_\chi$ is a closed, symmetric subset of $\Phi$. \emph{Closed:} Suppose $\alpha, \beta \in \Phi_\chi$ and $\alpha + \beta \in \Phi$. By direct calculation, $$ \chi \circ (\alpha+\beta)^\vee(x) = \left(\chi(\alpha^\vee(x))\right)\left(\chi(\beta^\vee(x))\right) = 1. $$ \emph{Symmetric:} Suppose $\alpha \in \Phi_\chi$. Then $\chi \circ (-\alpha^\vee) (x) = \chi(\alpha^\vee(x)^{-1}) = 1$. \end{proof} \begin{lemma} Let $F_r/F$ be an unramified extension of local fields. The norm map $N_r : F_r \rightarrow F$ restricts to a surjective map $N_r : \mathcal{O}_r^\times \rightarrow \mathcal{O}_F^\times$. \end{lemma} \begin{proof} See \cite{serre1979}, Proposition V.2.3 and its corollary. \end{proof} Recall that the norm induces a map $N_r : T(\mathcal{O}_r) \rightarrow T(\mathcal{O}_F)$. Given $\chi \in T(k_F)^\vee$ and an unramified extension $F_r/F$, define a new character $\chi_r = \chi \circ N_r$ on $T(\mathcal{O}_r)$. \begin{lemma} The character $\chi_r = \chi \circ N_r : T(\mathcal{O}_r) \longrightarrow \mathbb{C}^\times$ has depth zero. \end{lemma} \begin{proof} The norm $N_r : \mathcal{O}_r^\times \rightarrow \mathcal{O}_F^\times$ descends to $N_r : k_r \rightarrow k_F$ as explained in \cite{serre1979}, Section V.2. This induces a commutative diagram on points of the torus: { \[ \xymatrix{ T(\mathcal{O}_r) \ar[d] \ar[r]^{N_r} & T(\mathcal{O}_F) \ar[d] \\ T(k_r) \ar[r]^{N_r} & T(k_F) } \] } Because $\chi$ is depth zero, it factors through $T(k_F)$. 
Therefore $\chi \circ N_r : T(\mathcal{O}_r) \rightarrow \mathbb{C}^\times$ factors through $T(k_r)$ by composing the induced map $\chi : T(k_F) \rightarrow \mathbb{C}^\times$ with the lower route through the above diagram. \end{proof} \begin{prop} \label{prop::root-system-equality} Let $\chi : T(\mathcal{O}_F) \rightarrow \mathbb{C}^\times$ be a depth-zero character, and let $\chi_r$ be the associated depth-zero character on $T(\mathcal{O}_r)$. Then $\Phi_\chi = \Phi_{\chi_r}$ as subsystems of the ambient root system $\Phi = \Phi(G,T)$. \end{prop} \begin{proof} For any $\alpha \in \Phi$, consider the cocharacter $\alpha^\vee$ in $\cochars{T}$. Since $G$ is split, the maximal torus $T$ is $F$-isomorphic to a direct product $\mathbb{G}_m \times \cdots \times \mathbb{G}_m$ of rank equal to ${\rm{rank}}(T)$. Galois groups act coordinate-wise on $T(F_r)$ and $T(F)$, and the following diagram commutes: { \[ \xymatrix{ F_r^\times \ar[d]_{\alpha^\vee} \ar[r]^{N_r} & F^\times \ar[d]^{\alpha^\vee} \\ T(F_r) \ar[r]^{N_r} & T(F) } \] } Therefore, we have an equality $$ (\chi_r \circ \alpha^\vee)(z) = (\chi \circ \alpha^\vee) (N_r(z)) $$ for any $\alpha \in \Phi$ and $z \in \mathcal{O}_r^\times$. The norm map $N_r : F_r \rightarrow F$ restricts to a surjection $N_r : \mathcal{O}_r^\times \rightarrow \mathcal{O}_F^\times$. It is then clear that $\chi_r(\alpha^\vee(z)) = 1$ if and only if $\chi(\alpha^\vee(N_r (z))) = 1$. So, if $\alpha \in \Phi_\chi$, we can conclude that $\chi_r(\alpha^\vee(z)) = 1$ for all $z \in \mathcal{O}_r^\times$, i.e., that $\alpha \in \Phi_{\chi_r}$. Similar logic gives the reverse inclusion. \end{proof} \begin{prop} \label{prop::iwahori-prop-quotient} Let $T$ be the split maximal torus in a fixed Borel subgroup $B$ of $G$. Let $I \subset G(\mathcal{O}_F)$ and $I_r \subset G(\mathcal{O}_r)$ be the Iwahori subgroups that map onto $B(k_F)$ and $B(k_r)$, respectively, modulo $\varpi$. Then there is an isomorphism ${T(k_F) \cong I/I^+}$, and similarly $T(k_r) \cong I_r/I_r^+$.
\end{prop} \begin{proof} The isomorphism is a consequence of the factorization of $I$ (resp. $I_r$) into a product of torus elements and unipotent elements. See for example Goldstein's thesis \cite{goldstein1990}, Chapter 2. \end{proof} Using $T(k_F) \cong I/I^+$, we can extend a depth-zero character $\chi$ on $T(k_F)$ to a character $\rho_\chi$ on $I$ which is trivial on $I^+$. There is a character $\rho_{\chi_r}$ on $I_r$ similarly derived from $\chi_r$. We conclude these opening remarks by defining some variants of the Weyl group associated to a depth-zero character. What follows is essentially reproduced from \cite{haines-rapoport2012}, Section 9.1. Let $N_G(T)$ be the normalizer of $T(F)$ in $G(F)$. Recall that the Weyl group of $\Phi$ is $W = N_G(T)/T(F)$, and its extended affine Weyl group is $\widetilde{W} = N_G(T)/T(\mathcal{O}_F)$. The groups $N_G(T)$, $W$ and $\widetilde{W}$ all act on the depth-zero characters by conjugation, e.g., for $w \in W$ and $t \in T(\mathcal{O}_F)$, define the $W$-action by ${^w}\chi(t) = \chi(w^{-1}t w)$. Let $$ W_\chi = \{ w \in W \mid {^w}\chi = \chi\}. $$ From the definitions, there are surjections $N_G(T) \rightarrow \widetilde{W} \rightarrow W$, and $W_\chi$ is a subgroup of $W$. Let $\widetilde{W}_\chi$ be the preimage of $W_\chi$ in $\widetilde{W}$, and let $N_\chi$ be the preimage of $W_\chi$ in $N_G(T)$. Let $\Phi_{\chi, {\rm{aff}}} = \{ a = \alpha + k \mid \alpha \in \Phi_\chi, k \in \mathbb{Z}\}$ be the affine root system arising from $\Phi_\chi$. Let $W_\chi^\circ = \langle s_\alpha \mid \alpha \in \Phi_\chi \rangle$ and $W_{\chi, {\rm{aff}}} = \langle s_a \mid a \in \Phi_{\chi, {\rm{aff}}}\rangle$. We close these first properties by stating Lemma 9.1.1 of \cite{haines-rapoport2012}, whose proof is due to Roche~\cite{roche1998}.
\begin{lemma} \label{lemma::chi-weyl-groups} \begin{enumerate} \item The group $W_{\chi, {\rm{aff}}}$ is a Coxeter group, whose set of generators $S_{\chi, {\rm{aff}}}$ consists of the reflections associated to the simple roots of $\Phi_{\chi, {\rm{aff}}}$. \item There is a canonical decomposition $\widetilde{W}_\chi = W_{\chi, {\rm{aff}}} \rtimes \Omega_\chi$, where $\Omega_\chi$ is the subgroup of $\widetilde{W}_\chi$ which stabilizes the base alcove of $\Phi_\chi$. The Bruhat order $\leq_\chi$ and length function $\ell_\chi$ of $W_{\chi, {\rm{aff}}}$ can be extended to $\widetilde{W}_\chi$ such that $\Omega_\chi$ consists of the length-zero elements. \item If $W_\chi^\circ = W_\chi$, then $W_{\chi, {\rm{aff}}}$ (resp. $\widetilde{W}_\chi$) is the affine (resp. extended affine) Weyl group associated to $\Phi_\chi$. \end{enumerate} \end{lemma} \subsubsection{Hecke algebras and their isomorphisms} Given a character $\xi$ on $T(k_r)$, extend it to $\rho_\xi$ on $I_r$ using $T(k_r) \cong I_r / I_r^+$ as before. We define the subalgebra $\mathcal{H}(G_r, I_r, \rho_{\xi}) \subset \mathcal{H}(G_r)$ consisting of functions $f$ such that $$ f(xgy) = \rho_{\xi}(x)^{-1} f(g) \rho_{\xi}(y)^{-1} $$ for all $x, y \in I_r$ and $g \in G_r$. Roche refers to such an $f$ as a \emph{$\rho_{\xi}^{-1}$-spherical function}. Iwahori and Matsumoto~\cite{iwahori-matsumoto1965} gave an explicit presentation for certain Iwahori-Hecke algebras, which generalizes to algebras such as $\mathcal{H}(G_r, I_r, \rho_{\xi})$ as described in works by Goldstein~\cite{goldstein1990}, Morris~\cite{morris1993}, and Roche~\cite{roche1998}. Roche introduced an approach to Hecke algebra isomorphisms using endoscopic groups, which is advantageous in the present situation; however, Goldstein's isomorphism would also be sufficient, as it specifically covers the case of depth-zero characters for split reductive groups. We begin by introducing the Hecke algebra attached to a general Coxeter group.
These groups will be denoted $(\mathcal{W}, \mathcal{S})$ to differentiate them from the finite Weyl group $(W,S)$ of $G$. The Hecke algebra is defined by making a parameter choice for the following general construction: \begin{thm} Let $(\mathcal{W},\mathcal{S})$ be a Coxeter system and $A$ a commutative ring with unity. There is a unique associative $A$-algebra $\mathcal{H}$ based on a free $A$-module $\mathcal{E}$ having basis $T_w$ for $w \in \mathcal{W}$, with parameters $a_s, b_s \in A$ for each $s \in \mathcal{S}$, subject to the relations $$ T_s T_w =\begin{cases} T_{sw}, & {\rm{if}}\ \ell(sw) > \ell(w) \\ a_s T_w + b_s T_{sw}, & {\rm{otherwise}}. \end{cases} $$ \end{thm} \begin{proof} See \cite{humphreys1990}, Sections 7.1-7.3. \end{proof} If we set $a_s = q-1$ and $b_s = q$ for all $s \in \mathcal{S}$, then we get the \textbf{Hecke algebra} $\mathcal{H}(\mathcal{W},\mathcal{S})$ as in \cite{humphreys1990}, Section 7.4. If $W_{\rm{aff}}$ is the affine Weyl group of $G$, then the Iwahori-Matsumoto isomorphism for the Iwahori-Hecke algebra is $$ \mathcal{H}(G, I) \cong \mathcal{H}(W_{\rm{aff}}, S_{\rm{aff}})\ \tilde{\otimes}\ \mathbb{C}[\Omega], $$ where the notation $\tilde{\otimes}$ refers to a twisted tensor product, with multiplication on simple tensors given by $$ (T_w \otimes T_\sigma) \cdot (T_{w^\prime} \otimes T_{\sigma^\prime}) = T_w T_{\sigma w^\prime \sigma^{-1}} \otimes T_{\sigma \sigma^\prime} $$ for $w, w^\prime \in W_{\rm{aff}}$ and $\sigma, \sigma^\prime \in \Omega$. The isomorphism $\widetilde{W} \cong W_{\rm{aff}} \rtimes \Omega$ enables us to view the simple tensors $T_w \otimes T_\sigma$ as a basis for $\mathcal{H}(G,I)$ indexed by $\widetilde{W}$. Following Roche, there are two ways to generalize this isomorphism.
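Before turning to these generalizations, it may help to record the simplest instance of the relations above (a standard computation, included only for orientation). With the parameter choice $a_s = q-1$, $b_s = q$, taking $w = s$ gives a quadratic relation and, with it, an inversion formula: $$ T_s^2 = (q-1)T_s + qT_e, \qquad T_s^{-1} = q^{-1}T_s - (1-q^{-1})T_e, $$ as one verifies by multiplying the two expressions together. In particular each $T_s$, and hence each basis element $T_w$, is invertible; it is this inversion that later gives rise to the $\widetilde{R}$-polynomials.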
One version of the Hecke algebra isomorphism (see \cite{roche1998}, Theorem 6.3) shows directly that $$ \mathcal{H}(G_r, I_r, \rho_{\chi_r}) \stackrel{\sim}{\longrightarrow} \mathcal{H}(W_{\chi_r, {\rm{aff}}}, S_{\chi_r, {\rm{aff}}})\ \tilde{\otimes}\ \mathbb{C}[\Omega_{\chi_r}]. $$ However, we will follow the second approach, which defines an endoscopic group $H_{\chi_r}$ and shows that $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ is isomorphic to the Iwahori-Hecke algebra of this endoscopic group with a suitably chosen Iwahori subgroup $I_{H_r}$. Then the original Iwahori-Matsumoto isomorphism gives a presentation of the Hecke algebra in terms of a basis indexed by $\widetilde{W}_{\chi_r}$. Before explaining Hecke algebra isomorphisms according to Roche, we make the following remark to clarify what assumptions are being made. \begin{rmk} \label{rmk::roche-assumptions} The following assumptions will be in force from now on: \begin{enumerate} \item As before, $G$ is a split connected reductive group with connected center, \item The derived group $G_{\rm{der}}$ is simply-connected (see Section~\ref{section::lattices-in-char-groups}), and \item $W_\chi = W_\chi^\circ$. \end{enumerate} Under these conditions, we may avoid the restrictions on ${\rm{char}}(k_F)$ made by Roche in~\cite{roche1998} to prove Hecke algebra isomorphisms for characters with positive depth. Other restrictions are needed to ensure $W_\chi = W_\chi^\circ$. The theory of Hecke algebra isomorphisms associated to depth-zero characters holds without any restriction on ${\rm{char}}(k_F)$. Proposition~\ref{prop::w-chi-equality} states that $W_\chi = W_\chi^\circ$ holds for general linear groups and general symplectic groups without restrictions; hence these two important examples satisfy the above criteria. \end{rmk} \begin{prop} \label{prop::w-chi-equality} Suppose $G$ is a split connected reductive group with connected center defined over $F$.
If $G = GL_n$ or $G = GSp_{2n}$, then $W_\chi = W_\chi^\circ$ without restriction on residue characteristic. \end{prop} \begin{proof} The proof for all split connected groups with connected center can be extracted from pages 395-397 of \cite{roche1998}, but this comes at the cost of some restrictions on ${\rm{char}}(k_F)$. The cases of $GL_n$ and $GSp_{2n}$ are proved to be independent of such restrictions in an unpublished manuscript of Haines and Stroh. \end{proof} We are ready to give Roche's definition of the endoscopic group $H_{\chi_r}$. In fact, Roche defines two groups $\tilde{H}_{\chi_r}$ and $H_{\chi_r}$; however, if $G$ has connected center and $W_\chi = W_\chi^\circ$, then $\tilde{H}_{\chi_r} = H_{\chi_r}$. See~\cite{roche1998}, Section 8, for the complete story. Let $H_{\chi_r}$ be the split connected reductive group over $\mathcal{O}_r$ associated to the root datum $(\chars{T}, \Phi_{\chi_r}, \cochars{T}, \Phi_{\chi_r}^\vee)$. By Proposition~\ref{prop::root-system-equality}, $\Phi_{\chi_r} = \Phi_\chi$. Consequently, $W_\chi^\circ$ is the Weyl group for $H_{\chi_r}$, while $\widetilde{W}_\chi$ is its extended affine Weyl group by Lemma~\ref{lemma::chi-weyl-groups}. We may assume $T$ is the split maximal torus inside $H_{\chi_r}$, and there is an Iwahori subgroup $I_{H_r} \subset H_{\chi_r}$ determined by the positive roots $\Phi_{\chi}^+$. Thus we come to consider the Iwahori-Hecke algebra $\mathcal{H}(H_{\chi_r}, I_{H_r})$. When considering this algebra, we normalize Haar measure for the convolution integral such that ${\rm{vol}}(I_{H_r}) = 1$. \begin{thm} The algebras $\mathcal{H}(H_{\chi_r}, I_{H_r})$ and $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ are isomorphic via a family of support-preserving isomorphisms. \end{thm} \begin{proof} This is \cite{roche1998}, Theorem 8.2. \end{proof} We make a specific choice of isomorphism among the family established by the theorem, following the presentation in Haines-Rapoport~\cite{haines-rapoport2012}, Section 9.
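Before doing so, two extreme cases may help orient the reader; both are immediate from the definitions and are not needed in what follows. If $\chi$ is the trivial character, then $\Phi_\chi = \Phi$, so $H_{\chi_r}$ has the same root datum as $G$ and the theorem above reduces to the classical Iwahori-Matsumoto setting. If instead $\chi \circ \alpha^\vee \vert_{\mathcal{O}_F^\times} \neq 1$ for every root $\alpha$, then $\Phi_\chi = \emptyset$, $H_{\chi_r} = T$, and $I_{H_r} = T(\mathcal{O}_r)$. For example, when $G = GL_2$ with its diagonal torus and $\chi = (\chi_1, \chi_2)$, we have $\chi(\alpha^\vee(x)) = \chi_1(x)\chi_2(x)^{-1}$ for the positive root $\alpha$, so $\Phi_\chi = \Phi$ if $\chi_1 = \chi_2$ and $\Phi_\chi = \emptyset$ otherwise.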
Recall that $N_\chi$ is the inverse image of $\widetilde{W}_\chi$ under the surjective map $N_G(T) \rightarrow \widetilde{W}$. \begin{lemma} \label{lemma::existence-breve-chi} Let $\tilde{\chi}_r$ denote an extension of $\chi_r$ to $T(F_r)$. Then $\tilde{\chi}_r$ extends to a character $\breve{\chi}_r$ on $N_{\chi_r}$ if and only if $\tilde{\chi}_r$ is $W_{\chi_r}$-invariant. \end{lemma} \begin{proof} This is \cite{haines-rapoport2012}, Lemma 9.2.3. \end{proof} For a fixed choice of uniformizer $\varpi$, Remark 9.2.4 loc. cit. defines a specific $W_{\chi_r}$-invariant extension of $\chi_r$, called the \emph{$\varpi$-canonical extension}, by $$ \tilde{\chi}_r^\varpi (\nu(\varpi) t_0) = \chi_r(t_0), $$ for all $\nu \in \cochars{T}$ and $t_0 \in T(\mathcal{O}_r)$. \textbf{From now on let $\breve{\chi}_r$ be the character on $N_{\chi_r}$ determined by $\tilde{\chi}_r^\varpi$ according to Lemma~\ref{lemma::existence-breve-chi}.} For each $w \in \widetilde{W}_\chi$, fix a choice of $n_w \in N_\chi$ such that $n_w \mapsto w$ under the surjection $N_\chi \rightarrow \widetilde{W}_\chi$. Still following Haines-Rapoport, define $[I_r n_w I_r]_{\breve{\chi}_r}$ in $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ to be the ``unique element in $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ which is supported on $I_r n_w I_r$ and whose value at $n_w$ is $\breve{\chi}_r^{-1}(n_w)$. Note that $[I_r n_w I_r]_{\breve{\chi}_r}$ depends only on $w$, not on the choice of $n \in N_\chi$ mapping to $w \in \widetilde{W}_\chi$.'' (\cite{haines-rapoport2012}, p. 766.) Recall that Haar measure on $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ is normalized so that ${\rm{vol}}(I_r) = 1$, while the measure on $\mathcal{H}(H_{\chi_r}, I_{H_r})$ is normalized so that ${\rm{vol}}(I_{H_r}) = 1$.
\begin{lemma} \label{lemma::hecke-algebra-mapping-coefficients} The isomorphism $\Psi_{\breve{\chi}_r}: \mathcal{H}(G_r, I_r, \rho_{\chi_r}) \stackrel{\sim}{\longrightarrow} \mathcal{H}(H_{\chi_r}, I_{H_r})$ maps $$ q^{-r\ell(w)/2}[I_r n_w I_r]_{\breve{\chi}_r} \longmapsto q^{-r\ell_\chi(w)/2}[I_{H_r} n_w I_{H_r}]. $$ \end{lemma} \begin{proof} This is quoted from \cite{haines-rapoport2012}, Theorem 9.3.1, but the result comes from \cite{roche1998}. \end{proof} The isomorphism $\Psi_{\breve{\chi}_r}$ will be referred to as ``the'' Hecke algebra isomorphism between these algebras. It depends on the choice of $\varpi$ used to construct $\breve{\chi}_r$. Let $\tilde{\chi} : T(F) \rightarrow \mathbb{C}^\times$ be an extension of $\chi$. There is a corresponding inertial equivalence class $\mathfrak{s}_\chi = [T(F), \tilde{\chi}]_G$, and hence a Bernstein component $\mathfrak{R}_{\mathfrak{s}_\chi} (G)$, which we refer to as a \emph{depth-zero principal series component} of $\mathfrak{R}(G)$. If $\chi = 1$, i.e., the trivial character, the inertial class is denoted $\mathfrak{s}_1$. The component $\mathfrak{R}_{\mathfrak{s}_1}(G)$ is the \emph{unramified principal series component}. \begin{prop} \label{prop::equivalence-of-repn-cat} The isomorphism $\Psi_{\breve{\chi}_r}$ sets up an equivalence of categories $\mathfrak{R}_{\mathfrak{s}_\chi} (G_r) \cong \mathfrak{R}_{\mathfrak{s}_1} (H_{\chi_r})$ under which $i_{B_r}^{G_r}(\tilde{\chi}_r^\varpi \eta)$ corresponds to $i_{B_{H_r}}^{H_{\chi_r}}(\eta)$, where $\eta$ is an unramified character. \end{prop} \begin{proof} This statement is Proposition 9.3.3(2) in \cite{haines-rapoport2012}. \end{proof} Using the statements and results from Roche and Haines-Rapoport, we have established an isomorphism between the Hecke algebra $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ and the Iwahori-Hecke algebra $\mathcal{H}(H_{\chi_r}, I_{H_r})$. 
Then by the Iwahori-Matsumoto isomorphism, $$ \mathcal{H}(H_{\chi_r}, I_{H_r}) \cong \mathcal{H}(W_{{\chi_r}, {\rm{aff}}}, S_{{\chi_r}, {\rm{aff}}})\ \tilde{\otimes}\ \mathbb{C}[\Omega_{\chi_r}]. $$ The right-hand side of this expression is called a \emph{twisted affine Hecke algebra}. It has a basis $\{T_w\}_{w \in \widetilde{W}_\chi}$, where each of the $T_w$ is invertible. Recall from Proposition~\ref{prop::root-system-equality} that $\Phi_\chi = \Phi_{\chi_r}$. It follows that the Coxeter systems $(W_{\chi_r, {\rm{aff}}}, S_{\chi_r, {\rm{aff}}})$ and $(W_{\chi, {\rm{aff}}}, S_{\chi, {\rm{aff}}})$ are identical; however, we could make different parameter choices $a_s = q^{r} -1, b_s = q^r$ and $a_s = q-1, b_s = q$, respectively, for $s \in S_{\chi, {\rm{aff}}}$, which would yield two different twisted affine Hecke algebras that share a common basis $\{T_w\}$ indexed by $\widetilde{W}_{\chi_r}$. In what follows, we would like to work with data defined in terms of characters $\chi \in T(k_F)^\vee$. If we set the parameters $a_s = q^{r} -1, b_s = q^r$ with respect to the Coxeter system $(W_{\chi, {\rm{aff}}}, S_{\chi, {\rm{aff}}}) = (W_{\chi_r, {\rm{aff}}}, S_{\chi_r, {\rm{aff}}})$, then we get the isomorphism $$ \mathcal{H}(H_{\chi_r}, I_{H_r}) \cong \mathcal{H}_r(W_{{\chi}, {\rm{aff}}}, S_{{\chi}, {\rm{aff}}})\ \tilde{\otimes}\ \mathbb{C}[\Omega_{\chi}], $$ where the subscript ``r'' in $\mathcal{H}_r$ is meant to remind the reader that the parameters for the Hecke algebra of the Coxeter system depend on $r$, even though we are working with the basis indexed by $\widetilde{W}_\chi$. The inversion formula for the basis elements determines the $\widetilde{R}$-polynomials of Kazhdan-Lusztig theory for the group $(W_{\chi, {\rm{aff}}}, S_{\chi, {\rm{aff}}})$. This notion extends to the extended affine Weyl group $\widetilde{W}_\chi$. \begin{defin} \label{defin::r-poly-from-hecke-algebras} Let $Q_r = q^{-r/2} - q^{r/2}$.
For the twisted affine Hecke algebra $\mathcal{H}_r$ associated to $\widetilde{W}_\chi$, there is a family of polynomials $\widetilde{R}^{\chi}_{x,y}(Q_r)$, with $x, y \in \widetilde{W}_\chi$, determined by the inversion formula for a normalized basis element $\tilde{T}_{w,r} = q^{-r\ell_\chi(w)/2} T_w$ in $\mathcal{H}_r$. The polynomials are defined by $$ \tilde{T}_{w^{-1}, r}^{-1} = \sum_{x \in \widetilde{W}_\chi} \widetilde{R}^{\chi}_{x,w}(Q_r) \tilde{T}_{x,r}. $$ \end{defin} We will revisit these polynomials in Chapter 4, where they are defined in terms of an abstract Coxeter system. \subsubsection{The LLC for Tori} \label{section::llc-for-tori} The Local Langlands Correspondence is a theorem in the case of tori. While the LLC is true for general tori, we consider only the split case. Therefore ${^L}T$ can be viewed as $\widehat{T}$, the torus determined by the dual root datum. \begin{thm} \label{thm::LLC-for-Tori} \emph{(LLC for Tori: Split Case)} Let $T$ be a split torus over $F$ and $W_F$ the Weil group of $F$. Then there is a natural bijection $${\rm{Hom}}(W_F, \widehat{T}(\mathbb{C})) \cong {\rm{Hom}}(T(F), \mathbb{C}^\times).$$ Denote the Langlands parameter of a character $\xi : T(F) \rightarrow \mathbb{C}^\times$ by $\varphi_{\xi} : W_F \rightarrow \widehat{T}(\mathbb{C}).$ \end{thm} \begin{proof} For details see Yu~\cite{yu2009}. \end{proof} Let $\chi : T(\mathcal{O}_F) \rightarrow \mathbb{C}^\times$ be a depth-zero character, and let $\tilde{\chi}$ denote an extension to $T(F)$. Let $\tau_F$ denote the Artin reciprocity map from local class field theory.
For any $\nu \in \chars{\widehat{T}} = \cochars{T}$, Theorem~\ref{thm::LLC-for-Tori} yields the following commutative diagram: { \[ \xymatrix{ W_F \ar[d]_{\nu\circ\tau_F} \ar[r]^{\varphi_{\tilde{\chi}}} & \widehat{T}(\mathbb{C}) \ar[d]^\nu \\ T(F) \ar[r]^{\tilde{\chi}} & \mathbb{C}^\times } \] } \begin{lemma} Consider $\chi : T(\mathcal{O}_F) \rightarrow \mathbb{C}^\times$ and an extension $\tilde{\chi}$ to $T(F)$. \begin{enumerate} \item $\tilde{\chi}$ is an unramified character if and only if its Langlands parameter $\varphi_{\tilde{\chi}}$ is trivial on the inertia subgroup $I_F \subset W_F$. \item The restriction of $\varphi_{\tilde{\chi}}$ to $I_F$ depends only on $\chi$. This restriction is denoted $\varphi_{\chi}$. \item $\chi$ is depth-zero if and only if $\varphi_{\chi}$ is trivial on the subgroup $\tau_F^{-1} (1 + \varpi \mathcal{O}_F) \subset I_F$. In this case, $\chi$ is determined by the value of $\varphi_\chi$ on an element $x \in I_F$ whose image $\tilde{x} \in \mathcal{O}_F^\times$ under $\tau_F$ projects to a generator of the multiplicative group of the residue field $k_F$. \end{enumerate} \end{lemma} \noindent We will give a proof of the lemma; however, these statements also appear in Roche's article \cite{roche1998} in the discussion following his Theorem 8.2. \begin{proof} Recall that a character is \emph{unramified} if it is trivial on $$ {^\circ}T = \{ t \in T(F) \mid {\rm{val}}_F (\nu(t)) = 0, \forall \nu \in X^*(T)\}. $$ For split groups over non-archimedean local fields, this group is the maximal compact open subgroup ${^\circ}T = T(\mathcal{O}_F)$. The Artin map gives a surjection $\tau_F : I_F \rightarrow \mathcal{O}_F^\times$. Suppose $\tilde{\chi}$ is an unramified character. If $z \in I_F$, then $\nu(\tau_F(z)) \in T(\mathcal{O}_F)$ for every $\nu \in \cochars{T}$, so the commutative diagram above gives $\nu(\varphi_{\tilde{\chi}}(z)) = \tilde{\chi}(\nu(\tau_F(z))) = 1$ for all $\nu \in \cochars{T} = \chars{\widehat{T}}$. Therefore, $\varphi_{\tilde{\chi}} (z) = 1$. Conversely, suppose $t \in T(\mathcal{O}_F)$.
Then it can be written as a product $t = \prod_i t_i = \prod_i \nu_i(\tau_F(z_i))$ for some $z_i \in I_F$ and cocharacters $\nu_i \in \cochars{T}$, because the $\mathcal{O}_F$-points of $T$ are generated by the images of its cocharacters applied to $\mathcal{O}_F^\times$. We have $$ \tilde{\chi}(t) = \prod_i \tilde{\chi}(\nu_i(\tau_F(z_i))) = \prod_i \nu_i(\varphi_{\tilde{\chi}} (z_i)) = 1, $$ since $\varphi_{\tilde{\chi}}$ is trivial on each $z_i \in I_F$ by assumption. This proves the first statement. To prove the second statement, consider any two extensions $\tilde{\chi}_1$, $\tilde{\chi}_2$ to $T(F)$. We will show that the parameters $\varphi_{\tilde{\chi}_1}$ and $\varphi_{\tilde{\chi}_2}$ agree on the inertia subgroup, i.e., $\varphi_{\tilde{\chi}_1} \vert_{I_F} = \varphi_{\tilde{\chi}_2}\vert_{I_F}$. Choose any $z \in I_F$. Then for all $\nu \in \cochars{T} = \chars{\widehat{T}}$ and $i=1,2$: $$ \nu(\varphi_{\tilde{\chi}_i}(z)) = \tilde{\chi}_i (\nu(\tau_F(z))). $$ Since $\tau_F$ maps $I_F$ onto $\mathcal{O}_F^\times$, $\nu(\tau_F(z)) \in T(\mathcal{O}_F)$. Therefore, $\tilde{\chi}_i (\nu(\tau_F(z))) = \chi(\nu(\tau_F(z)))$ for $i = 1,2$. Translating this back into a statement about Langlands parameters gives $$ \nu(\varphi_{\tilde{\chi}_1}(z)) = \chi(\nu(\tau_F(z))) = \nu(\varphi_{\tilde{\chi}_2}(z)). $$ Since this holds for all $\nu$, we conclude $\varphi_{\tilde{\chi}_1} \vert_{I_F} = \varphi_{\tilde{\chi}_2}\vert_{I_F}$, as our choice of $z \in I_F$ was arbitrary. Finally, recall that $\chi$ is depth-zero if it factors through $T(k_F)$. Pick any $z$ in $\tau_F^{-1} (1 + \varpi \mathcal{O}_F)$. For all $\nu \in \chars{\widehat{T}}$, $\nu(\varphi_{\chi} (z)) = \chi (\nu(\tilde{z}))$, where $\tilde{z} = \tau_F (z)$ belongs to $1 + \varpi \mathcal{O}_F$. Thus if $\chi$ is depth-zero, then $\nu(\varphi_{\chi} (z)) = 1$ for all $\nu$, i.e., $\varphi_{\chi} (z) = 1$.
The relation $\nu(\varphi_{\chi} (z)) = \chi (\nu(\tilde{z}))$ also implies the converse direction, namely that if $\varphi_\chi$ is trivial on $\tau_F^{-1} (1 + \varpi \mathcal{O}_F) \subset I_F$ then $\chi$ is a depth-zero character. Choose $x \in I_F$ such that $\tilde{x} = \tau_F(x) \in \mathcal{O}_F^\times$ projects to a generator of $k_F^\ast$ under the reduction map. Since $\chi$ factors through $T(k_F)$ and the image of $\tilde{x}$ generates $k_F^\ast$, for any $t \in T(\mathcal{O}_F)$ there is an expression $$ \chi(t) = \chi\left( \prod_i \nu_i (\tilde{x}^{n_i}) \right) = \prod_i \chi(\nu_i (\tilde{x}))^{n_i}, $$ where the $\nu_i$ are a basis of $\cochars{T}$ and the first equality holds after projecting to $T(k_F)$. Therefore, we know the value $\chi(t)$ if we know the values $\chi(\nu_i(\tilde{x}))$. This proves the third statement of the lemma. \end{proof} \begin{defin} \label{endoscopic-element-definition} Fix a choice of $x \in I_F$ such that $\tau_F(x) \in \mathcal{O}_F^\times$ projects to a generator of $k_F^\ast$, and consider any depth-zero character $\chi$ on $T(\mathcal{O}_F)$. Define the \textbf{endoscopic element} in $\widehat{T}(\mathbb{C})$ associated to $\chi$ by $\kappa_\chi = \varphi_\chi (x) $. \end{defin} \begin{cor} Fix a choice of $x \in I_F$ such that $\tau_F(x) \in \mathcal{O}_F^\times$ projects to a generator of $k_F^\ast$. The (finite) group of depth-zero characters $T(k_F)^\vee$ is isomorphic to the group of endoscopic elements $\kappa_\chi$ determined by $x$. \end{cor} \begin{prop} \label{prop::dz-endoscopic-elements-equal-kernel} Let $q$ denote the cardinality of the residue field $k_F$. Let $K_{q-1}$ denote the kernel of the map $\widehat{T}(\mathbb{C}) \rightarrow \widehat{T}(\mathbb{C})$ defined by $\kappa \mapsto \kappa^{q-1}$. The group of depth-zero endoscopic elements is $K_{q-1}$; that is, there is $\chi \in T(k_F)^\vee$ such that $\kappa = \kappa_\chi$ if and only if $\kappa \in K_{q-1}$. \end{prop} \begin{proof} Suppose $\kappa = \kappa_\chi$ for some $\chi \in T(k_F)^\vee$.
Then $\kappa_\chi = \varphi_\chi(x)$, where $x \in I_F$ is the fixed element such that $\tilde{x} = \tau_F(x)$ descends to a generator of $k_F^\ast$. For any $\nu \in \cochars{T} = \chars{\widehat{T}}$, $$ \chi(\nu(\tilde{x}))^{q-1} = \nu(\kappa_\chi^{q-1}). $$ Because $\chi$ descends to a character on $T(k_F)$, the values $\chi(\nu(\tilde{x}))$ are all $(q-1)$-th roots of unity. Therefore, $\nu(\kappa_\chi^{q-1}) = 1$ for all $\nu \in \chars{\widehat{T}}$, which implies $\kappa_\chi^{q-1} = 1$. That is, $\kappa_\chi \in K_{q-1}$. Now suppose we start with $\kappa \in K_{q-1}$. We will produce a depth-zero character $\chi$ on $T(k_F)$ such that $\kappa = \kappa_\chi$. Since $T$ is split, $\widehat{T}$ is also split and so is isomorphic to a product of copies of $\mathbb{G}_m$. We think of $\kappa = {\rm{diag}}(\kappa_1, \ldots, \kappa_d)$, where each coordinate $\kappa_i$ is a $(q-1)$-th root of unity. Fix a choice of $x \in I_F$ such that $\tau_F(x)$ descends to a generator $\zeta$ of the cyclic group $k_F^\ast$, and fix a primitive $(q-1)$-th root of unity $\zeta_{\mathbb{C}} \in \mathbb{C}^\times$. Now $\kappa_i = \zeta_{\mathbb{C}}^{n_i}$ for some $0 \le n_i < q-1$. For any $\nu \in \cochars{T}$, the element $\nu(\tau_F(x))$ can be projected into $T(k_F)$ and thought of as a diagonal matrix ${\rm{diag}}(\zeta^{r_1}, \ldots, \zeta^{r_d})$. Let $\chi_0$ be the depth-zero character such that $$ \chi_0(\nu(\tau_F(x))) = \prod_{i=1}^d \zeta_{\mathbb{C}}^{r_i n_i}. $$ Then by construction, for any $\nu \in \cochars{T} = \chars{\widehat{T}}$ we have $$ \chi_0(\nu(\tau_F(x))) = \nu(\kappa). $$ By Theorem~\ref{thm::LLC-for-Tori}, applied to any extension of $\chi_0$, we have $\chi_0(\nu(\tau_F(z))) = \nu(\varphi_{\chi_0}(z))$ for all $z \in I_F$. So in particular $\nu(\varphi_{\chi_0}(x)) = \nu(\kappa)$ for all $\nu$. This shows $\kappa = \kappa_{\chi_0}$. \end{proof} \begin{rmk} We sometimes use the notation $\widehat{T}(\mathbb{C})^{\rm{dz}}$ in place of $K_{q-1}$ to remind the reader of the connection with depth-zero endoscopic elements.
\end{rmk} Later, we will encounter values $\chi^{-1}_r (s)$ for depth-zero $\chi$ and $s \in T(k_r)$. The following lemma rephrases this scalar in terms of endoscopic elements. \begin{lemma} \label{lemma::defin-of-gamma-nrs} Let $s \in T(k_r)$. Then there exists a character $\gamma_{N_r s} \in \chars{\widehat{T}}$ such that $\chi_r (s) = \gamma_{N_r s}(\kappa_\chi)$ in $\mathbb{C}^\times$ for all $\chi \in T(k_F)^\vee$. \end{lemma} \begin{proof} Let $\tilde{x} = \tau_F(x)$, where $x \in I_F$ is the element determining the correspondence between $\chi$ and $\kappa_\chi$. Let $\{\omega_i\}$ be a basis for $\cochars{T}$. There are integers $n_i$ such that $N_r(s) = \prod_i \omega_i(\tilde{x})^{n_i}$ in $T(k_F)$. Let $\gamma_{N_r s} = \sum_i n_i\omega_i$. Then $$ \chi_r(s) = \chi(N_r(s)) = \prod_i \chi(\omega_i (\tilde{x}))^{n_i} = \left(\sum_i n_i \omega_i\right)(\kappa_\chi) = \gamma_{N_r s} (\kappa_\chi), $$ where the third equality uses $\chi(\omega_i(\tilde{x})) = \omega_i(\kappa_\chi)$. \end{proof} \subsection{A first formula for $\phi_{r,1}$} \label{section::first-formula} It appears difficult to compute a test function, in the sense of the Langlands-Kottwitz method, from the definition given in terms of distributions in the Bernstein center. Recall that the test function is written $\phi_{r,1}$ when it has $I_r^+$-level structure. This section shows how to use the various data coming from depth-zero characters to give a more explicit formula for a function $\phi_{r,1}^{\rm{aug}}$ whose twisted orbital integrals are identical to those of the test function $\phi_{r,1}$, as described in the Introduction. The first step is to write $\phi_{r,1} = q^{r \ell(t_\mu)/2} (Z_{V_\mu} \ast 1_{I_r^+})$ as a sum indexed by the depth-zero characters of $T(\mathcal{O}_r)$. Its twisted orbital integrals vanish on the summands corresponding to characters $\xi \in T(k_r)^\vee$ which are not norms of characters in $T(k_F)^\vee$.
The Hecke algebra isomorphism $\Psi_{\breve{\chi}_r}: {\mathcal{H}(G_r, I_r, \rho_{\chi_r}) \stackrel{\sim}{\longrightarrow} \mathcal{H}(H_{\chi_r}, I_{H_r})}$ shows that summands indexed by norms $\chi_r = \chi \circ N_r$ map to sums of Bernstein functions in the center of $\mathcal{H}(H_{\chi_r}, I_{H_r})$. We apply an explicit formula for Bernstein functions attached to dominant minuscule cocharacters to get the formula for $\phi_{r,1}^{\rm{aug}}$. \subsubsection{Definition of $\phi_{r,1}^{\rm{aug}}$} Recall from Section~\ref{section::first-properties-dz-chars} that a depth-zero character $\xi \in T(k_r)^\vee$ extends to a character $\rho_\xi$ on $I_r$ which is trivial on $I_r^+$. Similarly, recall that $1_K$ refers to the characteristic function of a subgroup $K \subseteq G_r$. Let us extend this notation: $$ 1_K^{\rho} (x) = \begin{cases} \rho(x)^{-1}, & {\rm{if}}\ x \in K, \\ 0, & {\rm{otherwise.}} \end{cases} $$ \begin{lemma} \label{lemma::idempotents} For $\xi \in T(k_r)^\vee$, define $e_\xi = {\rm{vol}}(I_r)^{-1} 1_{I_r}^{\rho_\xi}$ in $\mathcal{H}(G_r)$. The elements $e_{\xi}$ are idempotents satisfying $1_{I_r^+} = \sum_{\xi \in T(k_r)^\vee} e_{\xi}$. \end{lemma} \begin{proof} Let $dz$ denote Haar measure on $G_r$ normalized such that ${\rm{vol}}(I_r^+) = 1$. We will compute $e_\xi \ast e_\xi$ from the definition of the convolution integral, namely $$ (e_\xi \ast e_\xi)(g) = \int_{G_r} e_\xi(gz^{-1})e_\xi(z) dz. $$ Notice that if $g \notin I_r$, then $(e_\xi \ast e_\xi)(g) = 0$: if $z \notin I_r$, then $e_\xi(z) = 0$ in the integrand; otherwise $e_\xi(gz^{-1}) = 0$. Assuming $g \in I_r$, the integral becomes $$ (e_\xi \ast e_\xi)(g) = \int_{I_r} \rho_\xi(gz^{-1})^{-1} \rho_\xi(z)^{-1} {\rm{vol}}(I_r)^{-2} dz. $$ Since $\rho_\xi(gz^{-1})^{-1} \rho_\xi(z)^{-1} = \rho_\xi(g)^{-1}\rho_\xi(z)\rho_\xi(z)^{-1} = \rho_\xi(g)^{-1}$, we get $$ (e_\xi \ast e_\xi)(g) = \rho_\xi(g)^{-1}{\rm{vol}}(I_r)^{-2}\int_{I_r} dz.
$$ It follows that $e_\xi \ast e_\xi = e_\xi$, verifying the first claim. Next we show that $1_{I_r^+} = \sum_{\xi \in T(k_r)^\vee} e_{\xi}$. First, observe that if $z \notin I_r$, then both sides are zero. Second, suppose $z \in I_r\backslash I_r^+$. Then $1_{I_r^+} (z) = 0$, and $$ \sum_{\xi \in T(k_r)^\vee} e_{\xi}(z) = \rm{vol}(I_r)^{-1} \sum_{\xi \in T(k_r)^\vee} \rho_\xi(z)^{-1}. $$ But $\sum_{\xi \in T(k_r)^\vee} \rho_\xi(z)^{-1} = 0$: we can find $\xi_0 \in T(k_r)^\vee$ such that $\rho_{\xi_0} (z) \neq 1$ and write $$ \sum_{\xi \in T(k_r)^\vee} \rho_\xi(z)^{-1} = \sum_{\xi \in T(k_r)^\vee} (\rho_{\xi_0}\rho_\xi)(z)^{-1} = \rho_{\xi_0}(z)^{-1} \sum_{\xi \in T(k_r)^\vee} \rho_\xi(z)^{-1}, $$ and since $\rho_{\xi_0}(z)^{-1} \neq 1$, the sum must vanish. Thus we are reduced to looking at $z \in I_r^+$. Here we have $$ \sum_{\xi \in T(k_r)^\vee} e_\xi(z) = \sum_{\xi \in T(k_r)^\vee} {\rm{vol}}(I_r)^{-1} = \vert T(k_r)\vert [I_r : I_r^+]^{-1} = 1. $$ \end{proof} \begin{lemma} \label{lemma::orbital-intergrals-zero} Suppose $\xi \in T(k_r)^\vee$ is not a norm, that is, there is no $\chi \in T(k_F)^\vee$ such that $\xi = \chi \circ N_r$. Then all twisted orbital integrals at $\theta$-semisimple elements vanish on functions in $\mathcal{H}(G_r, I_r, \rho_\xi)$. \end{lemma} \begin{proof} This is Lemma 10.0.4 of~\cite{haines2012}. \end{proof} \begin{cor} \label{cor::equality-of-orbital-integrals} The function $\phi_{r,1}aug \in C_c^\infty(G_r)$ defined by $$ \phi_{r,1}aug = [I_r : I_r^+]^{-1} q^{r\ell(t_{\mu})/2} \sum_{\chi \in T(k_F)^\vee} Z_{V_\mu} \ast e_{\chi_r}, $$ satisfies ${\rm{TO}}_{\delta\theta} (\phi_{r,1}) = {\rm{TO}}_{\delta\theta}(\phi_{r,1}aug)$. \end{cor} \begin{proof} Recall that Haar measure on $\mathcal{H}(G_r, I_r^+)$ is normalized such that ${\rm{vol}}(I_r^+) = 1$, while Haar measure on $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ is normalized to have ${\rm{vol}}(I_r) = 1$.
The function $\phi_{r,1}aug \in \mathcal{H}(G_r, I_r^+)$ is defined by summing up functions $Z_{V_\mu} \ast e_{\chi_r} \in \mathcal{H}(G_r, I_r, \rho_{\chi_r})$; thus we must account for the different normalizations when rewriting $\phi_{r,1}$ using Lemma~\ref{lemma::idempotents}. This gives the intermediate result $$ {\rm{TO}}_{\delta\theta} (\phi_{r,1}) = {\rm{TO}}_{\delta\theta}\left([I_r : I_r^+]^{-1} q^{r\ell(t_{\mu})/2} \sum_{\xi \in T(k_r)^\vee} Z_{V_\mu} \ast e_{\xi}\right). $$ But Lemma~\ref{lemma::orbital-intergrals-zero} says that ${\rm{TO}}_{\delta\theta} (Z_{V_\mu} \ast e_{\xi}) = 0$ if $\xi$ is not a norm, which means there is no $\chi \in T(k_F)^\vee$ such that $\xi = \chi_r$. This shows ${\rm{TO}}_{\delta\theta} (\phi_{r,1}) = {\rm{TO}}_{\delta\theta}(\phi_{r,1}aug)$. \end{proof} \begin{rmk} \label{rmk::clarify-dependence-on-llc+} Recall that the definition of a test function $\phi_r$ invokes the LLC+ conjecture to view the distribution $Z_{V_\mu}$ as an element of the Bernstein center. Let us give an unconditional definition for $\phi_{r,1}^\prime$ that agrees with Definition~\ref{defin::test-function} whenever LLC+ holds. For each $\chi_r$, define $Z_{V_\mu} \ast e_{\chi_r}$ to be the unique function in the center $\mathcal{Z}(G_r, I_r, \rho_{\chi_r})$ which acts on the $\rho_{\chi_r}$-isotypical component of all $i_{B_r}^{G_r}(\tilde{\chi}_r)$ in $\mathfrak{R}_{[T, \tilde{\chi}_r]}(G_r)$ by the scalar ${\rm{tr}}^{\rm{ss}}(\varphi_{\tilde{\chi}_r}(\Phi),V_\mu)$. Of course, the Langlands parameter $\varphi_{\tilde{\chi}_r}$ exists by the LLC for Tori. Sum these functions as before to get a new definition of $\phi_{r,1}^\prime$ that satisfies ${\rm{TO}}_{\delta\theta} (\phi_{r,1}^\prime) = {\rm{TO}}_{\delta\theta}(\phi_{r,1}aug)$ as in Corollary~\ref{cor::equality-of-orbital-integrals}.
\end{rmk} \subsubsection{Bernstein functions for dominant minuscule cocharacters} \label{section::bernstein-functions} The center of an Iwahori-Hecke algebra can be described in terms of \emph{Bernstein functions} indexed by dominant cocharacters of the maximal torus. In certain cases, such as the case of dominant and minuscule cocharacters of interest here, Haines proved explicit formulas for Bernstein functions \cite{haines2000}, \cite{haines2000b}, \cite{haines-pettet2002}. Our presentation states Haines's formula for the Iwahori-Hecke algebra $\mathcal{H}(H_{\chi_r}, I_{H_r})$. Recall that $\mathcal{H}(H_{\chi_r}, I_{H_r})$ is isomorphic to the twisted affine Hecke algebra $\mathcal{H}_r (W_{\chi, {\rm{aff}}}, S_{\chi, {\rm{aff}}})\ \tilde{\otimes}\ \mathbb{C}[\Omega_\chi]$, which we sometimes denote $\mathcal{H}_r$. In the notation of Definition~\ref{defin::r-poly-from-hecke-algebras}, this algebra has a normalized basis $\{\tilde{T}_{w,r} \mid w \in \widetilde{W}_\chi\}$. If $w = t_\lambda$ is a translation element, we write $\widetilde{T}_{\lambda,r}$ instead of $\widetilde{T}_{t_\lambda, r}$. For any $\lambda \in \cochars{T}$, there exist dominant $\lambda_1, \lambda_2 \in \cochars{T}$ such that $\lambda = \lambda_1 - \lambda_2$. Let $\Theta_{\lambda,r} = \widetilde{T}_{\lambda_1, r} \widetilde{T}_{\lambda_2, r}^{-1}.$ This element is independent of the choice of $\lambda_1$ and $\lambda_2$. \begin{defin} For $\lambda \in \cochars{T}$, the \textbf{Bernstein function} attached to a Weyl orbit $M_\chi = W_\chi\cdot\lambda$ is defined by $$ z_{M_\chi,r} = \sum_{\eta \in M_\chi} \Theta_{\eta,r}. $$ If $\lambda$ is dominant, we denote the function associated to $M_\chi = W_\chi\cdot\lambda$ by $z_{\lambda,r}$. \end{defin} \begin{thm} The center of $\mathcal{H}(H_{\chi_r}, I_{H_r})$ is the free $\mathbb{Z}[q^{r/2}, q^{-r/2}]$-module generated by the elements $z_{\lambda, r}$, where $\lambda$ ranges over all dominant cocharacters of $T$. 
\end{thm} \begin{proof} The theorem is due to Bernstein, but this statement is \cite{haines2000}, Theorem 2.3, adapted to the algebra $\mathcal{H}(H_{\chi_r}, I_{H_r})$. \end{proof} \begin{defin} \label{defin::mu-admissible-set} Let $\mu$ be a dominant coweight of $\Phi$. An element $w \in \widetilde{W}$ is called \textbf{$\mu$-admissible} if $w \leq t_\lambda$ for some $\lambda$ in the $W$-orbit of $\mu$. \end{defin} When $\Phi$ is the root system of $G_r$, the $\mu$-admissible set is denoted ${\rm{Adm}}_{G_r}(\mu)$. Similarly, if $\mu_\chi$ is a dominant coweight of $\Phi_\chi$, the root system of $H_{\chi_r}$ under our hypotheses, elements $w \in \widetilde{W}_\chi$ such that $w \leq_\chi t_\lambda$ for $\lambda \in W_\chi \mu_\chi$ form the set ${\rm{Adm}}_{H_r}(\mu_\chi)$. \begin{prop} \label{prop::supp-bern-fns} Let $\mu_\chi$ be a dominant, minuscule cocharacter of $T$ with respect to $\Phi_\chi$. The support of $z_{\mu_\chi, r}$ equals ${\rm{Adm}}_{H_r}(\mu_\chi)$ as subsets of $\widetilde{W}_\chi$. \end{prop} \begin{proof} Apply Proposition 4.6 of~\cite{haines2000b} to the Iwahori-Hecke algebra $\mathcal{H}(H_{\chi_r}, I_{H_r})$, which has a basis indexed by $\widetilde{W}_\chi$, the extended affine Weyl group of $H_{\chi_r}$. \end{proof} For any $w \in \widetilde{W}$, let $\lambda(w)$ denote the cocharacter of $T$ such that $w = t_{\lambda(w)} \bar{w}$ via the isomorphism $\widetilde{W} \cong \cochars{T} \rtimes W$. \begin{prop} Suppose $\mu$ is a minuscule cocharacter and $w = t_{\lambda(w)} \bar{w} \in \widetilde{W}$ is $\mu$-admissible. Then $w \leq t_{\lambda(w)}$. \end{prop} \begin{proof} This is \cite{haines-pettet2002}, Corollary 3.5. \end{proof} \begin{thm} \label{thm::bernstein-fn-formula} Let $\mu_\chi$ be a dominant minuscule cocharacter of $T$ with respect to $\Phi_\chi$. Set $Q_r = q^{-r/2} - q^{r/2}$.
The Bernstein function $z_{\mu_\chi, r}$ in the center of $\mathcal{H}(H_{\chi_r}, I_{H_r})$ is given by the formula $$ z_{\mu_\chi, r} = \sum_{w \in {\rm{Adm}}_{H_r}(\mu_\chi)} \widetilde{R}^\chi_{w, t_{\lambda(w)}}(Q_r) \tilde{T}_{w,r}. $$ \end{thm} \begin{proof} This is Theorem 4.3 of~\cite{haines2000b} applied in the case of $\mathcal{H}(H_{\chi_r}, I_{H_r})$. A different proof appears in~\cite{haines-pettet2002}. \end{proof} \subsubsection{An explicit formula via Bernstein functions} The material on Hecke algebra isomorphisms and Bernstein functions developed in earlier sections can be used to give an explicit formula for the coefficients of $\phi_{r,1}aug$. We begin by expressing the image of $Z_{V_\mu} \ast e_{\chi_r}$ under the Hecke algebra isomorphism $\Psi_{\breve{\chi}_r}$ in terms of Bernstein functions in the center of $\mathcal{H}(H_{\chi_r}, I_{H_r})$. \begin{lemma} \label{lemma::chi-weights-disjoint-union} Let $\rm{Wt}(\chi) = \{ \lambda \in W\mu \mid \lambda(\kappa_\chi) = 1\}$. Let $\{\mu_\chi^i\}$ denote the subset of $W\mu$ consisting of elements which are dominant and minuscule with respect to $\Phi_\chi$. Then $\rm{Wt}(\chi)$ is the disjoint union of the orbits $W_\chi \mu_\chi^i$. \end{lemma} \begin{proof} The orbits $W_\chi \mu_\chi^i$ are all disjoint, because there is a unique dominant element in each $W_\chi$-orbit. We have $W\mu \supseteq \coprod_i W_\chi \mu_\chi^i$, because each $\mu_\chi^i$ lies in $W\mu$ and $W_\chi \subseteq W$. On the other hand, given an element $\eta \in W\mu$, there must exist some index $k$ such that $\eta \in W_\chi \mu_\chi^k$; just consider the orbit $W_\chi \eta$, which must contain a unique element dominant with respect to $\Phi_\chi$. So $W\mu = \coprod_i W_\chi \mu_\chi^i$. Now pick any $\lambda \in {\rm{Wt}}(\chi)$. Then $\lambda \in W_\chi \mu_\chi^k$ for some $k$. For any other $\eta \in W_\chi \mu_\chi^k$, there is $u \in W_\chi$ such that $\eta = u^{-1}\lambda$.
But $u \in W_\chi$ satisfies $u\kappa_\chi = \kappa_\chi$, so: $$ \eta (\kappa_\chi) = (u^{-1}\lambda)(\kappa_\chi) = \lambda(u \kappa_\chi) = \lambda(\kappa_\chi) = 1. $$ This shows that any $W_\chi$-orbit containing an element of ${\rm{Wt}}(\chi)$ lies entirely within ${\rm{Wt}}(\chi)$. It follows that ${\rm{Wt}}(\chi)$ is a finite, disjoint union of $W_\chi$-orbits. \end{proof} \begin{lemma} \label{lemma:mu-chi-admissible-sets-disjoint} Suppose ${\rm{Wt}}(\chi) = \coprod_i W_\chi \mu_\chi^i$ as in Lemma~\ref{lemma::chi-weights-disjoint-union}. Then the $\mu_\chi^i$-admissible sets ${\rm{Adm}}_{H_{\chi_r}}(\mu_\chi^i)$ are disjoint subsets of $\widetilde{W}$. \end{lemma} \begin{proof} The translation elements $t_\lambda$ for $\lambda \in W\mu$ may be written $t_\lambda = \sigma_\lambda \bar{w}_\lambda \in \widetilde{W}$ for some length-zero element $\sigma_\lambda \in \Omega_\chi$ and $\bar{w}_\lambda \in W_{\chi, {\rm{aff}}}$. The $\sigma_\lambda$ are unique. Choose any $\mu_\chi^i$. It is dominant and minuscule with respect to $\Phi_\chi$, hence any $w \in {\rm{Adm}}_{H_{\chi_r}}(\mu_\chi^i)$ satisfies $w \leq_\chi t_{\lambda(w)}$. By definition of Bruhat order on extended affine Weyl groups, this means the length-zero part of $w$ is $\sigma_{\lambda(w)}$. But since the $\sigma_\lambda$ are unique and the $W_\chi \mu_\chi^i$ orbits are disjoint, we cannot have $w \in {\rm{Adm}}_{H_{\chi_r}}(\mu_\chi^k)$ for any $k \neq i$. \end{proof} We shall sometimes speak of $w \in \widetilde{W}$ as though it were an element of $G_r$ by using a set-theoretic embedding of the extended affine Weyl group into $N_G(T)(F_r)$, which depends on a choice of uniformizer $\varpi$ in $F_r$. The following definition comes from \cite{haines-rapoport2012}, Section 2. \begin{defin} \label{defin::set-theoretic-embedding} For $w \in \widetilde{W}$, we have $w = t_\lambda \bar{w}$ for $\lambda \in \cochars{T}$ and $\bar{w} \in W$. Let $\varpi$ be the fixed uniformizer in $F_r$, and set $\varpi^\lambda = \lambda(\varpi)$.
The set-theoretic embedding $i_\varpi : \widetilde{W} \hookrightarrow G_r$ is defined by $t_\lambda \mapsto \varpi^{-\lambda}$ and mapping $\bar{w}$ to a fixed representative in $N_{G(\mathcal{O}_r)} (T(\mathcal{O}_r))$. \end{defin} \begin{lemma} Let $I_r$ be an Iwahori subgroup of $G_r$ with pro-unipotent radical $I_r^+$. Then $G_r$ decomposes into disjoint double-cosets indexed by pairs $(s,w) \in T(k_r) \times \widetilde{W}$, $$ G_r = \coprod_{(s,w)} I_r^+ sw I_r^+. $$ \end{lemma} \begin{proof} This is a variant of the Bruhat-Tits decomposition which describes $G_r$ in terms of $I_r$-double cosets. The proof of this decomposition uses the relation $T(k_r) \cong I_r/I_r^+$; see \cite{haines-rapoport2012}, Section 2, for an explanation. Both the statement and proof depend on the set-theoretic embedding of $\widetilde{W}$ into $G(F_r)$ given in Definition~\ref{defin::set-theoretic-embedding}. \end{proof} I thank my advisor, Thomas Haines, for supplying the following lemma and its proof. \begin{lemma} \label{lemma::image-of-phi-r-chi} Let $\chi_r$ be a depth-zero character on $T(\mathcal{O}_r)$ arising from a depth-zero character $\chi$ on $T(\mathcal{O}_F)$, and fix the $\varpi$-canonical extensions $\tilde{\chi}_r^\varpi$ and $\breve{\chi}_r$ to $T(F_r)$ and $N_G(T)(F_r)$ respectively. Let $\varpi^{-\lambda} \bar{w}$ be an element of $N_G(T)(F_r)$, which is the image of $w = t_\lambda \bar{w}$ under the set-theoretic embedding $\widetilde{W} \hookrightarrow G(F_r)$. For simplicity, assume $W_{\chi_r} = W_{\chi_r}^\circ$. \begin{enumerate} \item If $\varpi^{-\lambda} \bar{w}$ lies in the support of any non-zero function $\phi \in \mathcal{H}(G_r, I_r, \rho_{\chi_r})$, then ${^{\bar{w}}}\chi = \chi$, or equivalently, $\bar{w} \kappa_\chi = \kappa_\chi$. \item Suppose $\mu \in \cochars{T} = \chars{\widehat{T}}$ is dominant and minuscule. 
Under the Hecke algebra isomorphism $\Psi_{\breve{\chi}_r}$, $Z_{V_\mu} \ast e_{\chi_r}$ goes to the sum $\sum_{\mu_\chi} z_{\mu_\chi, r}$, where $\mu_\chi$ ranges over dominant representatives of $W_\chi$-orbits of $\nu \in W\mu$ such that $\nu(\kappa_\chi) = 1$. \item If $\varpi^{-\lambda} \bar{w}$ lies in the support of any function of the form $Z_{V_\mu} \ast e_{\chi_r}$ then $\lambda(\kappa_\chi) = 1$. \item If $Z_{V_\mu} \ast e_{\chi_r} (\varpi^{-\lambda}\bar{w}) \neq 0$, then $t_\lambda \bar{w} \in {\rm{Adm}}_{G_r}(\mu)$. \end{enumerate} \end{lemma} \begin{proof} (T. Haines) To prove the first statement, we may assume $\phi = [I_r \varpi^{-\lambda} \bar{w} I_r]_{\breve{\chi}_r}$. For any $t \in T(\mathcal{O}_r)$ we have $$ \chi_r^{-1}(t) \phi(\varpi^{-\lambda} \bar{w}) = \phi(\varpi^{-\lambda} \bar{w} ({^{\bar{w}^{-1}}}t)) = {^{\bar{w}}}\chi_r^{-1} (t) \phi(\varpi^{-\lambda} \bar{w}). $$ Since $\phi(\varpi^{-\lambda} \bar{w}) \neq 0$, we conclude $\chi_r (t) = {^{\bar{w}}}\chi_r (t)$ for all $t \in T(\mathcal{O}_r)$, i.e., ${^{\bar{w}}}\chi_r = \chi_r$. Now let us prove the second statement. The function $Z_{V_\mu} \ast e_{\chi_r}$ lies in the center of the Hecke algebra $\mathcal{H}(G_r, I_r, \rho_{\chi_r})$ and acts by a non-zero scalar only on the $\rho_{\chi_r}$-isotypical components of representations in the Bernstein block for the inertial class $[T, \tilde{\chi}_r^\varpi]_G$. In fact, for any unramified character $\eta$ on $T(F_r)$, unwinding the definition shows that it acts on $i_{B_r}^{G_r} (\tilde{\chi}_r^\varpi)^{\rho_{\chi_r}}$ by the scalar $$ {\rm{Tr}}(r_\mu \varphi_{\tilde{\chi}_r^\varpi \eta} (\Phi), V_\mu^{\kappa_\chi}). $$ By~\cite{haines-rapoport2012}, Lemma 10.1.1, the image $\Psi_{\breve{\chi}_r} (Z_{V_\mu} \ast e_{\chi_r})$ acts by this scalar on the Iwahori-fixed vectors $i_{B_{H_r}}^{H_{\chi_r}} (\eta)^{I_{H_r}}$ in the unramified principal series of $H_{\chi_r}$. 
In order to prove $\Psi_{\breve{\chi}_r} (Z_{V_\mu} \ast e_{\chi_r}) = \sum_{\mu_\chi} z_{\mu_\chi, r}$, it will suffice to show that $\sum_{\mu_\chi} z_{\mu_\chi, r}$ acts by this same scalar on such representations, in which case the functions are equal because they determine the same regular function on the Bernstein variety $\mathfrak{X}_{H_r}$. As $\widehat{T}$-representations we have $V_\mu^{\kappa_\chi} = \oplus_{\mu_\chi} V_{\mu_\chi}^{H_{\chi_r}}$, the sum of highest weight representations for the dual group of $H_{\chi_r}$, hence the scalar can be written as $$ \sum_{\mu_\chi} {\rm{Tr}}(r_{\mu_\chi} \varphi_{\tilde{\chi}_r^\varpi \eta} (\Phi), V_{\mu_\chi}^H). $$ Note that $\varphi_{\tilde{\chi}_r^\varpi \eta} (\Phi) = \varphi_\eta (\Phi)$. View the function $z_{\mu_\chi, r}$ as a regular function on the variety $\widehat{T}/W_\chi$, whose points correspond to $W_\chi$-invariant unramified characters, $$ z_{\mu_\chi,r} : \eta \mapsto \sum_{\lambda \in W_\chi \mu_\chi} \lambda(\eta). $$ Looking at the weight space decomposition, $\sum_{\lambda \in W_\chi \mu_\chi} \lambda(\eta) = {\rm{Tr}}(r_{\mu_\chi} \varphi_{\eta} (\Phi), V_{\mu_\chi}^H)$. Summing over the $\mu_\chi$ shows that $\sum_{\mu_\chi} z_{\mu_\chi, r}(\eta) = {\rm{Tr}}(r_\mu \varphi_{\tilde{\chi}_r^\varpi \eta} (\Phi), V_\mu^{\kappa_\chi}).$ Now use the second statement and the fact that $\Psi_{\breve{\chi}_r}$ is support-preserving to see that $\varpi^{-\lambda} \bar{w}$ must lie in the support of some $z_{\mu_\chi, r}$. Since $\mu_\chi$ is minuscule, we must have in that case that $\lambda \in W_\chi \mu_\chi$. But then $\lambda(\kappa_\chi) = 1$ by Lemma~\ref{lemma::chi-weights-disjoint-union}. Finally, suppose that $Z_{V_\mu} \ast e_{\chi_r} (\varpi^{-\lambda}\bar{w}) \neq 0$. Using earlier work and the support-preserving property of Hecke algebra isomorphisms, we have that $z_{\mu_\chi, r} (w) \neq 0$ for some $\mu_\chi$.
The support of $z_{\mu_\chi,r}$ is ${\rm{Adm}}_{H_{\chi_r}}(\mu_\chi)$, i.e., $w \in {\rm{Adm}}_{H_{\chi_r}}(\mu_\chi)$. So it is enough to show that ${\rm{Adm}}_{H_{\chi_r}}(\mu_\chi) \subset {\rm{Adm}}_{G_r}(\mu)$ as subsets of $\widetilde{W}$. The base alcove for a (based) root system $\Phi$ is the set of points in $\cochars{T} \otimes \mathbb{R}$ on which all positive affine roots take positive values. Since the positive affine roots attached to $\Phi_\chi$ are a subset of the positive affine roots for $\Phi$, we deduce an inclusion of base alcoves $\textbf{a} \subset \textbf{a}_\chi$ associated to $\Phi$ and $\Phi_\chi$, respectively. Furthermore, every alcove for $\Phi$ is contained in a unique alcove for $\Phi_\chi$. Given $w = t_\lambda \bar{w} \in {\rm{Adm}}_{H_{\chi_r}}(\mu_\chi)$, we know that $w \leq_\chi t_\lambda$, thus there is a sequence of alcoves $w\textbf{a}_\chi = \textbf{a}_\chi^{0}, \textbf{a}_\chi^{1}, \ldots, \textbf{a}_\chi^{l} = t_\lambda \textbf{a}_\chi$, such that if $\textbf{a}_\chi^{i-1}$ and $\textbf{a}_\chi^{i}$ are separated by an affine hyperplane $H$, then $\textbf{a}_\chi$ and $\textbf{a}_\chi^{i-1}$ are on the same side of $H$. From this we get a sequence of alcoves $w\textbf{a} = \textbf{a}^{0}, \textbf{a}^{1}, \ldots, \textbf{a}^{l} = t_\lambda \textbf{a}$, such that whenever $\textbf{a}^{i-1}$ and $\textbf{a}^{i}$ are separated by an affine hyperplane $H$, then $\textbf{a}$ and $\textbf{a}^{i-1}$ are on the same side of $H$. It follows that $w \leq t_\lambda$, which implies $w \in {\rm{Adm}}_{G_r}(\mu)$. \end{proof} The function $\phi_{r,1}aug$ is an element of the Hecke algebra $\mathcal{H}(G_r, I_r^+)$, hence it can be described by specifying the complex values taken on $I_r^+$-double cosets of $G_r$ indexed by pairs in $T(k_r) \times \widetilde{W}$. The values $\phi_{r,1}aug(I_r^+ sw I_r^+)$ are called the \textbf{coefficients} of $\phi_{r,1}aug$.
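For orientation, consider the smallest nontrivial case; this is a standard example, included here only as an illustration. For $G = GL_2$ and $\mu = (1,0)$, write $\tau = t_{(1,0)}\bar{s}$ for the length-zero element of $\widetilde{W}$, where $\bar{s}$ is the nontrivial finite Weyl element. Then the admissible set indexing the coefficients is $$ {\rm{Adm}}_{GL_2}(\mu) = \{\, t_{(1,0)},\ t_{(0,1)},\ \tau \,\}, $$ since $\tau \leq t_{(1,0)}$ and $\tau \leq t_{(0,1)}$ are the only nontrivial Bruhat relations below the two translations. More generally, for the Drinfeld cocharacter $\mu = (1,0,\ldots,0)$ of $GL_d$, the admissible set has $2^d - 1$ elements.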
The following proposition gives an explicit formula for the coefficients of $\phi_{r,1}aug$, which is sufficient for our purposes by Corollary~\ref{cor::equality-of-orbital-integrals}. We will revisit this proposition in Chapter 5, where we use the results of the intervening chapters to develop a combinatorial formula using the following formula as a starting point. \begin{prop} \label{prop::first-explicit-formula} Given a pair $(s,w) \in T(k_r) \times \widetilde{W}$, the coefficient $\phi_{r,1}aug (I_r^+ sw I_r^+)$ can be rewritten as a sum over endoscopic elements in $\widehat{T}(\mathbb{C})$ which arise from depth-zero characters $\chi \in T(k_F)^\vee$: $$ \phi_{r,1}aug (I_r^+ sw I_r^+) = [I_r: I_r^+]^{-1} \sum_{\kappa_\chi \in K_{q-1}} \gamma_{N_r s}(\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2} \widetilde{R}_{w,t_{\lambda(w)}}^\chi(Q_r). $$ \end{prop} \begin{proof} The proof is a straightforward application of the work done throughout this chapter. We start with the definition of $\phi_{r,1}aug$ from Corollary~\ref{cor::equality-of-orbital-integrals}, $$ \phi_{r,1}aug = [I_r : I_r^+]^{-1} q^{r\ell(t_\mu)/2} \sum_{\chi \in T(k_F)^\vee} Z_{V_\mu} \ast e_{\chi_r}. $$ Then by Lemma~\ref{lemma::image-of-phi-r-chi}, $$ \phi_{r,1}aug = [I_r : I_r^+]^{-1} q^{r\ell(t_\mu)/2} \sum_{\chi \in T(k_F)^\vee} \sum_{\mu_\chi} \Psi_{\breve{\chi}_r}^{-1} (z_{\mu_\chi, r}). $$ The orbits $W_\chi \mu_\chi$ are disjoint as $\mu_\chi$ ranges over the set of elements in $W\mu$ which are dominant and minuscule with respect to $\Phi_\chi$. The $\mu_\chi$-admissible sets in $\widetilde{W}$ are disjoint by Lemma~\ref{lemma:mu-chi-admissible-sets-disjoint}, hence the supports of each $z_{\mu_\chi, r}$ are disjoint. Thus for each $w \in {\rm{Adm}}_{G_r}(\mu)$, there is a unique $\mu_\chi \in W\mu$ such that $w \in \rm{supp}(z_{\mu_\chi, r})$. 
Therefore, $$ \phi_{r,1}aug (I_r^+ sw I_r^+) = [I_r : I_r^+]^{-1} q^{r\ell(t_\mu)/2} \sum_{\chi \in T(k_F)^\vee} \Psi_{\breve{\chi}_r}^{-1} (z_{\mu_\chi, r})(sw). $$ Now, we apply Haines's formula for $z_{\mu_\chi, r}$ and Lemma~\ref{lemma::hecke-algebra-mapping-coefficients}. Recall that $$ z_{\mu_\chi, r} (w) = \widetilde{R}_{w, t_{\lambda(w)}}^\chi (Q_r) \widetilde{T}_{w,r}(w), $$ and $\widetilde{T}_{w,r}(w) = q^{-r\ell_\chi(w)/2} [I_{H_r} w I_{H_r}] (w).$ Therefore, $$ \Psi_{\breve{\chi}_r}^{-1} (z_{\mu_\chi, r})(sw) = q^{-r\ell(w)/2 + r\ell_\chi(w)/2} q^{-r\ell_\chi(w)/2} \widetilde{R}_{w, t_{\lambda(w)}}^\chi (Q_r) [I_r^+ sw I_r^+]_{\breve{\chi}_r} (sw). $$ But $[I_r^+ sw I_r^+]_{\breve{\chi}_r} (sw) = \chi^{-1}_r(s)$. So, we conclude that $$ \phi_{r,1}aug (I_r^+ sw I_r^+) = [I_r : I_r^+] ^{-1} \sum_{\chi \in T(k_F)^\vee} \chi_r^{-1}(s) q^{r(\ell(t_{\lambda(w)})-\ell(w))/2}\widetilde{R}_{w, t_{\lambda(w)}}^\chi (Q_r), $$ where we have used that the conjugates of a translation element all have the same length; that is, $\ell(t_\mu) = \ell(t_\lambda)$ for all $\lambda \in W\mu$, and in particular $\ell(t_{\lambda(w)}) = \ell(t_\mu)$. The set of depth-zero characters $T(k_F)^\vee$ is in bijective correspondence with the set of endoscopic elements $\kappa_\chi \in \widehat{T}(\mathbb{C})$ arising from depth-zero characters; this bijection is determined by a fixed element of the inertia subgroup $x \in I_F$ whose image $\tau_F(x)$ projects to a generator of $k_F^\ast$. Proposition~\ref{prop::dz-endoscopic-elements-equal-kernel} shows that this subset of $\widehat{T}(\mathbb{C})$ equals $K_{q-1}$. Following the notation in \cite{bjorner-brenti2005}, we write $\ell(w, t_{\lambda(w)})$ for the difference in lengths $\ell(t_{\lambda(w)}) - \ell(w)$. Finally, for each $s \in T(k_r)$ the equation $\gamma_{N_r s}(\kappa_\chi) = \chi_r (s)$ holds for all depth-zero characters $\chi$.
In conclusion, $$ \phi_{r,1}aug (I_r^+ sw I_r^+) = [I_r : I_r^+]^{-1} \sum_{\kappa_\chi \in K_{q-1}} \gamma_{N_r s} (\kappa_\chi)^{-1} q^{r\ell(w, t_{\lambda(w)})/2} \widetilde{R}_{w, t_{\lambda(w)}}^\chi (Q_r). $$ \end{proof} \section{Groups of endoscopic elements in the dual torus} Consider the endoscopic elements $\kappa_\chi$ in $\widehat{T}(\mathbb{C})$ arising from depth-zero characters on $T(\mathcal{O}_F)$, which index the summation in our formula for $\phi_{r,1}aug$. We shall see that it is useful to determine the $\kappa_\chi$ such that $Z_{V_\mu} \ast e_{\chi_r} (w) \neq 0$ for a fixed $w \in \widetilde{W}$. We begin by defining a closed subgroup $S_w \subset \widehat{T}(\mathbb{C})$ associated to a fixed $w = t_{\lambda} \bar{w} \in \widetilde{W}$. This ``relevant subgroup'' is an \emph{infinite} diagonalizable algebraic group. In exchange for working with a more complex object, we gain access to the theory of diagonalizable groups and tori defined over an algebraically closed field. The endoscopic elements needed for the combinatorial formula for $\phi_{r,1}aug$ comprise the ``depth-zero relevant subgroup'' $S_w^{\rm{dz}}$, which is a finite subgroup of $\widehat{T}(\mathbb{C})$. The final section of this chapter contains results to be used later in the statement and proof of the main theorem in Chapter~\ref{chapter::main-theorem}. First, we identify an analogue of the ``critical index torus'' used by Haines and Rapoport in the Drinfeld case; this object is used to determine which $s \in T(k_r)$ contribute to nontrivial coefficients $\phi_{r,1}aug(I_r^+ sw I_r^+)$ in terms of the $k_F$-points of a subtorus of $T$. Second, we look at the order of certain subgroups $S_{w,J}^{\rm{dz}} \subseteq S_w^{\rm{dz}}$ which arise from root sub-systems $J \subseteq \Phi$.
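To fix ideas, here is a minimal example of the kind of group that arises; it is included only as an illustration. Take $T$ to be the diagonal torus of $GL_2$, so that $\widehat{T}(\mathbb{C}) = (\mathbb{C}^\times)^2$, and let $w = t_{(2,0)}$, so $\bar{w} = 1$ and $\lambda = (2,0)$. The conditions $\bar{w}\kappa = \kappa$ and $\lambda(\kappa) = 1$, made precise below, then give $$ S_w = \{ (\kappa_1, \kappa_2) \in (\mathbb{C}^\times)^2 \mid \kappa_1^2 = 1 \} \cong \mu_2 \times \mathbb{G}_m, $$ an infinite diagonalizable group which is not a torus: its character group $\chars{\widehat{T}}/\langle (2,0) \rangle \cong \mathbb{Z}/2 \oplus \mathbb{Z}$ has torsion, so $S_w$ is disconnected.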
\subsection{Background on diagonalizable algebraic groups} \label{section::diag-groups-background} Let us recall various properties of diagonalizable algebraic groups over an algebraically closed field, that is, linear algebraic groups which are isomorphic to a closed subgroup of the diagonal torus in some general linear group. In fact, we will only consider such groups over $\mathbb{C}$. \begin{defin} An algebraic group over $\mathbb{C}$ is \textbf{diagonalizable} if it is isomorphic to a closed subgroup of the diagonal group $D(n, \mathbb{C})$ for some $n$. \end{defin} Recall that a connected diagonalizable group is a torus. \begin{thm} Let $G$ be a diagonalizable group over $\mathbb{C}$. Then $G = A \times H$, where $A$ is a torus over $\mathbb{C}$ and $H$ is a finite group. \end{thm} \begin{proof} See \cite{humphreys1975}, Section 16.2. \end{proof} Let $D$ be a diagonalizable group defined over $\mathbb{C}$. Recall that the multiplicative group $\mathbb{G}_m$ consists of the nonzero elements of the affine space $\mathbb{A}^1$ equipped with the group law $(x,y) \mapsto xy$. The group of $\mathbb{C}$-rational points of $\mathbb{G}_m$ is $\mathbb{C}^\times$. The \textbf{character group} of $D$ is defined by $\chars{D} = {\rm{Hom}}_{\mathbb{C}}(D, \mathbb{C}^\times)$, while its \textbf{cocharacter group} is $\cochars{D} = {\rm{Hom}}_{\mathbb{C}}(\mathbb{C}^\times, D)$. \begin{defin} A \textbf{lattice} is a free subgroup of $\chars{D}$ or $\cochars{D}$ generated by a linearly independent set. \end{defin} When considering a torus we will sometimes choose a basis for the cocharacter group $\cochars{T}$ and then give coordinates in terms of that basis. For example, we write $\mu = (1,0, \ldots, 0)$ for the cocharacter $\mu$ of the diagonal torus in $GL_d$ given by the formula $$ \mu(z) = {\rm{diag}}(z,1,\ldots,1).
$$ \begin{thm} \label{thm::cat-antiequiv} There is a categorical anti-equivalence between diagonalizable algebraic groups and abelian groups, which arises from the contravariant functor sending a diagonalizable group $D$ to its character group $\chars{D}$. \end{thm} \begin{proof} See \cite{waterhouse1979}, Section 2.2, for example. \end{proof} The following corollaries follow directly from Theorem~\ref{thm::cat-antiequiv}. \begin{cor} Let $D$ be a diagonalizable algebraic group over $\mathbb{C}$. If its character group $\chars{D}$ has torsion, then $D$ is not connected, i.e., $D$ is not a torus. \end{cor} \begin{cor} \label{cor::contain-by-annihilation} Let $D$ be a diagonalizable subgroup of $\widehat{T}(\mathbb{C})$, and let $L$ be the lattice in $\chars{\widehat{T}}$ such that $\chars{D} = \chars{\widehat{T}}/L$. Then $\kappa \in \widehat{T}(\mathbb{C})$ is annihilated by $L$ if and only if $\kappa \in D$. \end{cor} We conclude this background section with two useful lemmas. \begin{lemma} \label{lemma::chars-separate-points} Let $D$ be a diagonalizable group. Given two distinct points $x$ and $y$ in $D$, there exists a character $\eta \in \chars{D}$ such that $\eta(x) \neq \eta(y)$. \end{lemma} \begin{proof} See \cite{humphreys1975}, Section 16.1. \end{proof} \begin{lemma} \label{lemma::chars-of-intersected-groups} Suppose $D_1$ and $D_2$ are diagonalizable subgroups of $\widehat{T}(\mathbb{C})$ such that $\chars{D_i} = \chars{\widehat{T}}/L_i$ for lattices $L_1$ and $L_2$. Then the character group of $D_1 \cap D_2$ is $$ \chars{D_1 \cap D_2} = \chars{\widehat{T}}/\langle L_1, L_2 \rangle. $$ \end{lemma} \begin{proof} The group $D_1 \cap D_2$ is the largest common subgroup of $D_1$ and $D_2$. Under the categorical anti-equivalence between diagonalizable groups and their character groups, its character group $\chars{D_1 \cap D_2}$ is the largest common quotient of $\chars{D_1}$ and $\chars{D_2}$. 
But as $\langle L_1, L_2\rangle$ is the smallest lattice containing both $L_1$ and $L_2$, we conclude $\chars{D_1 \cap D_2} = \chars{\widehat{T}}/\langle L_1, L_2\rangle$. \end{proof} \subsection{The relevant group of an admissible element} For any $w \in \widetilde{W}$, there is an expression $w = t_{\lambda} \bar{w}$ obtainable via the isomorphism $\widetilde{W} \cong \cochars{T} \rtimes W$. We give certain conditions based on the data $\lambda$ and $\bar{w}$ to define a diagonalizable group $S_w$ in $\widehat{T}(\mathbb{C})$. This infinite group contains a finite subgroup $S_w^{\rm{dz}}$ consisting of those $\kappa \in S_w$ such that $\kappa = \kappa_\chi$ for some depth-zero character $\chi$ on $T(\mathcal{O}_F)$. Given a root sub-system $J \subseteq \Phi$, there are analogous groups $S_{w,J}$ and $S_{w,J}^{\rm{dz}}$. We specify lattices $L_{w,J} \subset \chars{\widehat{T}}$ such that $\chars{S_{w,J}} = \chars{\widehat{T}}/L_{w,J}$, thereby giving information about $S_{w,J}$ through Theorem~\ref{thm::cat-antiequiv}. \subsubsection{Definition of $S_w$ and $S_{w,J}$} \begin{lemma} Let $\bar{w} \in W$. Then $\kappa_{^{\bar{w}}\chi} = \bar{w}\kappa_\chi$ for all depth-zero characters $\chi$ on $T(\mathcal{O}_F)$. If in addition $\bar{w} \in W_\chi$, then $\bar{w}\kappa_\chi = \kappa_\chi$. \end{lemma} \begin{proof} Given $\bar{w} \in W$, the LLC for Tori implies ${^{\bar{w}}} \chi (\nu(\tau_F(x))) = \nu(\kappa_{{^{\bar{w}}} \chi})$ for all $\nu \in \cochars{T} = \chars{\widehat{T}}$. On the other hand, $$ {^{\bar{w}}}\chi (\nu(\tau_F(x))) = \chi (\bar{w}^{-1} \nu(\tau_F(x))) = (\bar{w}^{-1}\cdot \nu)(\kappa_\chi) = \nu(\bar{w}\kappa_\chi). $$ Thus $\nu(\bar{w}\kappa_\chi) = \nu(\kappa_{{^{\bar{w}}}\chi})$ for all $\nu$. We conclude that $\bar{w}\kappa_\chi = \kappa_{{^{\bar{w}}}\chi}$ by Lemma~\ref{lemma::chars-separate-points}. For the final statement, apply the definition $W_\chi = \{\bar{w} \in W \mid {^{\bar{w}}\chi} = \chi\}$.
\end{proof} \begin{defin} \label{defin::relevant-endoscopic-elements} An endoscopic element $\kappa \in \widehat{T}(\mathbb{C})$ is \textbf{relevant} to $w \in {\widetilde{W}}$ if $\bar{w}\kappa = \kappa$ and $\lambda(\kappa) = 1$. \end{defin} \begin{prop} \label{prop::relevant-group} The elements $\kappa \in \widehat{T}(\mathbb{C})$ relevant to a fixed $w \in \widetilde{W}$ form a closed subgroup called the \textbf{relevant subgroup} $S_w$. \end{prop} \begin{proof} First, we verify that relevant $\kappa$ satisfy the group axioms. It is obvious that $\kappa = 1$, corresponding to the trivial character, belongs to $S_w$ for any $w \in \widetilde{W}$. If ${\kappa}_1, {\kappa}_2 \in S_w$, then it is enough to observe that $$ \bar{w}\cdot ({\kappa}_1 {\kappa}_2) = (\bar{w}\cdot \kappa_1)(\bar{w}\cdot\kappa_2) = {\kappa}_1{\kappa}_2 $$ and $$ \lambda({\kappa}_1{\kappa}_2) = \lambda({\kappa}_1)\lambda({\kappa}_2) = 1. $$ Closure under inverses follows similarly: $\bar{w}\cdot \kappa^{-1} = \kappa^{-1}$ and $\lambda(\kappa^{-1}) = \lambda(\kappa)^{-1} = 1$. This subgroup is \emph{closed} because it is the intersection of $\rm{ker}(\lambda)$ and the fixed-point set of $\bar{w}$, each of which is a Zariski-closed subset of $\widehat{T}(\mathbb{C})$. \end{proof} The relevant subgroup for an element $w$ is not a torus in general. See Example~\ref{example::rel-group-not-a-torus} which uses an explicit realization of the character group $\chars{S_w}$ to produce torsion elements. \begin{lemma} \label{lemma::support-of-phi-r-chi} Let $w \in \widetilde{W}$. Then $Z_{V_\mu} \ast e_{\chi_r} (w) \neq 0$ implies $w \in {\rm{Adm}}_{G_r}(\mu)$ and that $\kappa_\chi$ is relevant to $w$. \end{lemma} \begin{proof} This is a rephrasing of two statements proved in Lemma~\ref{lemma::image-of-phi-r-chi}. \end{proof} \begin{defin} The subgroup of $S_w$ consisting of $\kappa_\chi$ associated to $\chi \in T(k_F)^\vee$ will be called the \textbf{depth-zero relevant subgroup} of $w$. It is denoted $S_w^{\rm{dz}}$.
\end{defin} Lemma~\ref{lemma::support-of-phi-r-chi} says that if $Z_{V_\mu} \ast e_{\chi_r}(w) \neq 0$ then $w$ is $\mu$-admissible and $\kappa_\chi \in S_w^{\rm{dz}}$. \begin{defin} \label{defin::dzrelgrpJ} Let $J$ be a root sub-system of $\Phi$. For $\alpha \in J$, consider $\alpha^\vee$ as a character on $\widehat{T}$. Define $S_{w,J} \subseteq S_w$ by $$ S_{w,J} = S_w \cap \left(\bigcap_{\alpha \in J} \rm{ker}(\alpha^\vee)\right). $$ Let $S_{w,J}^{\rm{dz}}$ denote the subset of $S_{w,J}$ whose elements arise as endoscopic elements corresponding to $\chi \in T(k_F)^\vee$. \end{defin} \begin{prop} \label{prop::set-version-of-dzrelgrpJ} Let $w \in \widetilde{W}$, and fix a sub-system $J \subseteq \Phi$. Then $$ S_{w,J}^{\rm{dz}} = \{ \kappa_\chi \in S_w^{\rm{dz}} \mid \Phi_\chi \supseteq J\}. $$ \end{prop} \begin{proof} If $\kappa_\chi \in S_{w,J}^{\rm{dz}}$, then for all $\alpha \in J$ we have $\alpha^\vee (\kappa_\chi) = 1$. It follows that $\chi \circ \alpha^\vee (\tilde{x}) = 1$. By definition this means $\alpha \in \Phi_\chi$, so in total, $J \subseteq \Phi_\chi$. Conversely, if $\kappa_\chi \in S_w^{\rm{dz}}$ satisfies $\Phi_\chi \supseteq J$, then $\chi \circ \alpha^\vee (\tilde{x}) = 1$ for all $\alpha \in J$. Again, $\alpha^\vee (\kappa_\chi) = 1$, so that $\kappa_\chi \in \cap_{\alpha \in J} \rm{ker}(\alpha^\vee)$, and hence $\kappa_\chi \in S_{w,J}^{\rm{dz}}$. \end{proof} \subsubsection{Lattices in character groups} \label{section::lattices-in-char-groups} The relevant subgroup $S_w$ will guide us to an analogue of the critical index torus defined for the Drinfeld case in~\cite{haines-rapoport2012}, Section 6.4. This comes about by realizing the character groups $\chars{S_w}$ and $\chars{S_{w,J}}$ as quotients of $\chars{\widehat{T}}$ by specifying certain lattices. The categorical anti-equivalence between diagonalizable groups and their character groups enables us to take advantage of the definition of $S_{w,J}$ as the intersection of two diagonalizable groups.
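Since every statement in this subsection reduces to integer linear algebra, the lattice quotients can be checked by machine. The following Python sketch is our own illustration, not notation from the text: characters of $\widehat{T} \cong \mathbb{G}_m^n$ are encoded as integer vectors of length $n$, a lattice $L \subseteq \chars{\widehat{T}}$ is given by a list of generating rows, and the invariant factors of the quotient $\chars{\widehat{T}}/L$ are read off from the Smith normal form.

```python
def smith_invariant_factors(mat):
    """Invariant factors d_1 | d_2 | ... of the integer matrix `mat`,
    computed by reduction to Smith normal form with elementary row and
    column operations.  Adequate for the small matrices appearing here."""
    A = [row[:] for row in mat]
    m, n = len(A), len(A[0]) if A else 0
    t = 0
    while t < min(m, n):
        # choose a nonzero entry of smallest absolute value as the pivot
        nz = [(abs(A[i][j]), i, j)
              for i in range(t, m) for j in range(t, n) if A[i][j]]
        if not nz:
            break
        _, pi, pj = min(nz)
        A[t], A[pi] = A[pi], A[t]
        for row in A:
            row[t], row[pj] = row[pj], row[t]
        p = A[t][t]
        # shrink entries of the pivot row/column that p does not divide
        if any(A[i][t] % p for i in range(t + 1, m)) or \
           any(A[t][j] % p for j in range(t + 1, n)):
            for i in range(t + 1, m):
                q = A[i][t] // p
                A[i] = [a - q * b for a, b in zip(A[i], A[t])]
            for j in range(t + 1, n):
                q = A[t][j] // p
                for row in A:
                    row[j] -= q * row[t]
            continue  # a strictly smaller pivot now exists
        # p divides its row and column: clear them exactly
        for i in range(t + 1, m):
            q = A[i][t] // p
            A[i] = [a - q * b for a, b in zip(A[i], A[t])]
        for j in range(t + 1, n):
            q = A[t][j] // p
            for row in A:
                row[j] -= q * row[t]
        # enforce d_t | (remaining submatrix) before advancing
        bad = next((i for i in range(t + 1, m)
                    for j in range(t + 1, n) if A[i][j] % p), None)
        if bad is not None:
            A[t] = [a + b for a, b in zip(A[t], A[bad])]
            continue
        t += 1
    return [abs(A[i][i]) for i in range(t)]

# GL_4 with w-bar = (132): rows are the relations w(nu) - nu on the
# coordinate (co)characters, together with mu = (1,1,0,0).
L_w = [[-1, 0, 1, 0],   # e_3 - e_1
       [1, -1, 0, 0],   # e_1 - e_2
       [0, 1, -1, 0],   # e_2 - e_3
       [1, 1, 0, 0]]    # mu
```

Running this on the lattice $L_w$ above (the situation of Example~\ref{example::rel-group-not-a-torus}) returns invariant factors $(1,1,2)$, i.e.\ a quotient $\mathbb{Z} \oplus \mathbb{Z}/2$; on the $GSp_4$ relation matrix $A_{J_5}$ computed below it returns $(1,1)$, confirming a torsion-free quotient.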
Let $K_J = \bigcap_{\alpha \in J} \rm{ker}(\alpha^\vee)$. In the following diagram, arrows in the left diamond are inclusions while arrows in the right diamond are quotients. { \[ \xymatrix{ & \widehat{T}(\mathbb{C})\\ S_w \ar[ur] && K_J \ar[ul] \\ & S_{w,J} \ar[ul] \ar[ur] } \ \xymatrix{ \\ \longleftrightarrow \\ } \ \xymatrix{ & \chars{\widehat{T}} \ar[dl] \ar[dr] \\ \chars{S_w} \ar[dr] && \chars{K_J} \ar[dl] \\ & \chars{S_{w,J}} } \] } Each quotient of $\chars{\widehat{T}}$ corresponds to the lattice by which it is formed. The lattice needed to form the quotient $\chars{S_{w,J}}$ is the lattice generated by the lattices corresponding to $\chars{S_w}$ and $\chars{K_J}$. \begin{lemma} \label{lattice-equality-lemma} Let $w = t_\lambda \bar{w}$ be a $\mu$-admissible element in $\widetilde{W}$. Define a lattice $L_w = \langle w(\nu) - \nu \mid \nu \in \chars{\widehat{T}} \rangle$, where $w(\nu) = \lambda + \bar{w}(\nu)$. Then $$L_w = \langle \lambda,\: \bar{w} (\nu) - \nu \mid \nu \in \chars{\widehat{T}} \rangle.$$ \end{lemma} \noindent The $L_w$ so defined is the same as the lattice studied in Section 6.4 of \cite{haines-rapoport2012}. \begin{proof} It is immediately obvious that $L_w \subseteq \langle \lambda,\: \bar{w} \nu - \nu \vert\ \nu \in \chars{\widehat{T}} \rangle.$ Let us consider the reverse inclusion. If we choose $\nu = 0$, then $w(0) - 0 = \lambda$ belongs to $L_w$. On the other hand, choosing $\nu = \lambda$ yields $w(\lambda) - \lambda = \bar{w}\lambda$. For any choice of $\nu \in \chars{\widehat{T}}$, $$ w(\nu + \lambda) - (\nu + \lambda) = \bar{w}(\nu) + \bar{w}(\lambda) - \nu. $$ But we have just seen that $\bar{w}(\lambda) \in L_w$, hence $\bar{w}(\nu) - \nu$ lies in $L_w$ for all $\nu \in \chars{\widehat{T}}$. It follows that $L_w \supseteq \langle \lambda,\: \bar{w} \nu - \nu \vert\ \nu \in \chars{\widehat{T}} \rangle.$ \end{proof} Let $A$ be a group and $M$ be an $A$-module.
The \emph{module of coinvariants} of $M$ is defined as $$ M_A = M/\langle gm - m \mid m \in M,\; g \in A\rangle. $$ This module satisfies the following universal property: if $N$ is another $A$-module on which $A$ acts trivially, then any $A$-module homomorphism $M \rightarrow N$ factors uniquely through $M_A$. \begin{prop} \label{characters-of-relevant-subgroup-proposition} Let $w = t_\lambda \bar{w}$ be $\mu$-admissible. Then $$\chars{S_w} = \chars{\widehat{T}} / L_w = \chars{\widehat{T}}_{\langle\bar{w}\rangle} / \langle \lambda \rangle.$$ \end{prop} \begin{proof} Let $K$ be the intersection of kernels $$ K = {\rm{ker}}(\lambda) \cap \left(\bigcap_{\nu \in \chars{\widehat{T}}} {\rm{ker}}(\bar{w}(\nu) - \nu)\right). $$ We contend that this set equals $S_w$ in $\widehat{T}(\mathbb{C})$. Recall that $S_w$ is defined as the subgroup of $\widehat{T}(\mathbb{C})$ comprising those $\kappa$ such that $\lambda(\kappa) = 1$ and $\bar{w} \kappa = \kappa$. Thus all $\kappa \in S_w$ lie in ${\rm{ker}}(\lambda)$, and for each $\nu \in \chars{\widehat{T}}$, $$ \Big((\bar{w}\cdot\nu)(\kappa)\Big)\nu(\kappa)^{-1} = \nu(\bar{w}^{-1}\kappa)\nu(\kappa)^{-1} = \nu(\kappa)\nu(\kappa)^{-1} = 1. $$ So $\kappa \in {\rm{ker}}(\bar{w}(\nu)-\nu)$ for all $\nu \in \chars{\widehat{T}}$. It follows that $S_w \subseteq K$. Conversely, if we choose some $\kappa_0 \in K$, then $\lambda(\kappa_0) = 1$ from the definition, while for each character $\nu$ on $\widehat{T}(\mathbb{C})$, $\nu(\bar{w}^{-1} \kappa_0) = \nu(\kappa_0).$ By Lemma~\ref{lemma::chars-separate-points}, $\bar{w}^{-1} \kappa_0 = \kappa_0$. We conclude that $\kappa_0 \in S_w$. This proves $S_w = K$.
For a character $\eta \in \chars{\widehat{T}}$, Lemma~\ref{lemma::kernel-lattice-relationship} implies $\chars{{\rm{ker}}\;\eta}= \chars{\widehat{T}} / \langle\eta\rangle.$ Lemma~\ref{lemma::chars-of-intersected-groups} shows that the character group of an intersection of kernels equals the quotient of $\chars{\widehat{T}}$ by the lattice generated by the characters whose kernels were intersected. In this case, $$ \chars{K} = \chars{\widehat{T}}/\langle \lambda, \bar{w}(\nu) - \nu \mid \nu \in \chars{\widehat{T}}\rangle. $$ Applying $K = S_w$ and Lemma~\ref{lattice-equality-lemma}, we get the desired result: $\chars{S_w} = \chars{\widehat{T}}/L_w.$ \end{proof} \begin{example} \label{example::rel-group-not-a-torus} Let $G = GL_4$ and $\mu = (1,1,0,0)$. Let $w = t_{\mu}(132)$, so that $\bar{w} = (132)$ is a permutation in the symmetric group $W \cong S_4$. Then the module of coinvariants $\cochars{T}_{\langle \bar{w} \rangle}$ is generated by $\bar{\varepsilon}_1, \bar{\varepsilon}_4$, where the $\varepsilon_i$ are the coordinate cocharacters of $\cochars{T}$, which form a basis, and $\bar{\varepsilon}_i$ is the image of $\varepsilon_i$ in the module of coinvariants. Consider $$ \chars{S_w} = \cochars{T}_{\langle \bar{w} \rangle} / \langle \mu \rangle. $$ Then as elements of $\chars{S_w}$, $$ 2\bar{\varepsilon}_1 = \bar{\varepsilon}_1 + \bar{\varepsilon}_2 = \bar{\mu} = 0. $$ Thus $\chars{S_w}$ has 2-torsion in this case, and so $S_w$ is not a torus. \end{example} \begin{lemma} \label{lemma::kernel-lattice-relationship} For any root subsystem $J \subseteq \Phi$, let $K_J = \cap_{\alpha \in J} \rm{ker}(\alpha^\vee)$ and let $L_J = \mathbb{Z}\langle \alpha^\vee \mid \alpha \in J\rangle.$ Then $\chars{K_J} = \chars{\widehat{T}}/L_J.$ \end{lemma} \begin{proof} Choose any $\alpha \in J$. This determines a short exact sequence $$ 1 \longrightarrow {\rm{ker}}(\alpha^\vee) \longrightarrow \widehat{T} \longrightarrow \mathbb{G}_m \longrightarrow 1.
$$ The corresponding exact sequence for character groups is $$ 0 \longleftarrow \chars{{\rm{ker}}(\alpha^\vee)} \longleftarrow \chars{\widehat{T}} \stackrel{f}{\longleftarrow} \mathbb{Z} \longleftarrow 0, $$ where $f:1\mapsto \alpha^\vee$. It follows that $\chars{{\rm{ker}}(\alpha^\vee)} = \chars{\widehat{T}}/\mathbb{Z}\alpha^\vee$. Now consider all $\alpha \in J$ at once. Repeated application of Lemma~\ref{lemma::chars-of-intersected-groups} shows that $\chars{K_J} = \chars{\widehat{T}}/L_J.$ \end{proof} \begin{cor} \label{cor::chars-of-swj} Let $L_{w,J}$ be the lattice of $\chars{\widehat{T}}$ generated by $L_w$ and $L_J$ as defined above. Then $\chars{S_{w,J}} = \chars{\widehat{T}}/L_{w,J}.$ \end{cor} A root sub-system $J \subseteq \Phi$ determines a subgroup $W_J = \langle s_\alpha \mid \alpha \in J^+\rangle$ of $W$. \begin{prop} \label{prop::structure-of-lwj} Let $w = t_\lambda \bar{w} \in {\rm{Adm}}_{G_r}(\mu)$ and $J \subseteq \Phi$. If $\bar{w} \in W_J$, then $L_{w,J} = \langle \lambda, \alpha^\vee \mid \alpha \in J^+\rangle$. \end{prop} \begin{proof} By its construction, $L_{w,J} = \langle \lambda + \bar{w}(\nu) - \nu, \alpha^\vee \mid \nu \in \chars{\widehat{T}}, \alpha \in J^+\rangle$. Lemma~\ref{lattice-equality-lemma} implies $L_{w,J} = \langle \lambda, \bar{w}(\nu) - \nu, \alpha^\vee \mid \nu \in \chars{\widehat{T}}, \alpha \in J^+\rangle$, so that $L_{w,J}$ clearly contains $\langle \lambda, \alpha^\vee \mid \alpha \in J^+\rangle$.\ In order to prove the reverse inclusion, it is enough to show that $\bar{w}(\nu) - \nu$ lies in the span of the coroots of $J^+$ for all $\nu \in \chars{\widehat{T}}$. Since $W_J$ is a reflection subgroup, we can find an expression $\bar{w} = s_1 \cdots s_m$ where the $s_i$ are reflections in $W_J$. Choose any $\nu \in \chars{\widehat{T}}$. 
Observe that $$ (s_1 \cdots s_m)(\nu) = (s_2 \cdots s_m)(\nu) - \langle (s_2\cdots s_m)(\nu), \alpha_1\rangle \alpha_1^\vee, $$ where $\alpha_1$ is the positive root in $J$ corresponding to the reflection $s_1$. Of course, a similar formula can be applied to the term $(s_2 \cdots s_m)(\nu)$, leading to the expression $$ \bar{w}(\nu) = \nu - \sum_{k=1}^m \langle (s_{k+1} \cdots s_m)(\nu), \alpha_{k}\rangle \alpha_{k}^\vee, $$ with the convention that $s_{m+1} \cdots s_m = \mathrm{id}$. Therefore, $\bar{w}(\nu) - \nu \in \langle \alpha^\vee \mid \alpha \in J^+ \rangle.$ \end{proof} The \textbf{fundamental group} of a reductive algebraic group is defined by $$ \pi_1(G) = \cochars{T}/\mathbb{Z}\Phi^\vee. $$ A reductive algebraic group $H$ is \textbf{simply-connected} if $\pi_1 (H) = 1$. \begin{cor} \label{cor::rank-of-swj} Suppose that $G$ satisfies the assumptions of Remark~\ref{rmk::roche-assumptions}, in particular, that $G_{\rm{der}}$ is simply connected. Let $w = t_\lambda \bar{w}$ be $\mu$-admissible, and let $J \subseteq \Phi$ be a root subsystem. Suppose $\bar{w} \in W_J$. Then ${\rm{rank}}(S_{w,J}) = {\rm{rank}}(\widehat{T}) - {\rm{rank}}(J) - 1$. \end{cor} \begin{proof} By assuming that $G_{\rm{der}}$ is simply connected, we have that $\cochars{T} / \mathbb{Z}\Phi^\vee$ is torsion-free. So any torsion element of $\cochars{T} / \mathbb{Z}J^\vee$ must come from the quotient $\mathbb{Z}\Phi^\vee / \mathbb{Z}J^\vee$. We claim that $\lambda \notin \mathbb{Z}\Phi^\vee$. Fix a basis $\Delta$ of $\Phi$. Without loss of generality, we may assume that $\lambda$ is dominant because the coroot lattice is stable under the $W$-action. A nonzero dominant minuscule coweight (such as $\lambda$, which lies in the $W$-orbit of the minuscule coweight $\mu$) is in particular a fundamental coweight, which corresponds to some simple root $\alpha_i \in \Delta$. For any $\alpha_j \in \Delta$, $\langle \lambda, \alpha_j\rangle = \delta_{ij}$. Suppose that $\lambda \in \mathbb{Z}\Phi^\vee$, so that we have an expression $$ \lambda = \sum_{\alpha_j \in \Delta} c_{\alpha_j} \alpha_j^\vee,\ {\rm{where}}\ c_{\alpha_j} \in \mathbb{Z}.
$$ We apply the simple reflection $s_i$ to $\lambda$ in two ways. First, $$ s_i \lambda = \lambda - \langle \lambda, \alpha_i \rangle \alpha_i^\vee = \lambda - \alpha_i^\vee. $$ On the other hand, $$ s_i \lambda = s_i \Big( \sum_{\alpha_j \in \Delta} c_{\alpha_j} \alpha_j^\vee\Big) = \sum_{\alpha_j \in \Delta} c_{\alpha_j} s_i \alpha_j^\vee = -c_{\alpha_i}\alpha_i^\vee + \sum_{\alpha_j \in \Delta \setminus \{\alpha_i\}} d_{\alpha_j} \alpha_j^\vee, $$ where we have used that $s_i$ permutes the simple coroots $\alpha_j^\vee$ for $j \neq i$. The coefficients $d_{\alpha_j}$ are the permuted $c_{\alpha_j}$. Now combine these two different forms of $s_i \lambda$ to get a relation $$ (2c_{\alpha_i} - 1)\alpha_i^\vee + \sum_{\alpha_j \in \Delta \setminus \{\alpha_i\}} (c_{\alpha_j} - d_{\alpha_j}) \alpha_j^\vee = 0. $$ This is a linear combination of the basis elements of $\mathbb{Z}\Phi^\vee$, hence all coefficients must be zero. This implies $2c_{\alpha_i} = 1$, which is impossible since $c_{\alpha_i} \in \mathbb{Z}$. Because $\lambda \notin \mathbb{Z}\Phi^\vee$, it is not a torsion element of $\cochars{T} / \mathbb{Z}J^\vee$, and consequently $\chars{\widehat{T}}/L_{w,J} = \left(\chars{\widehat{T}}/L_J\right)/\mathbb{Z}\lambda$ has rank equal to $\Big({\rm{rank}}(\widehat{T}) - {\rm{rank}}(J) - 1\Big)$, because ${\rm{rank}}(L_J) = {\rm{rank}}(J)$. \end{proof} \begin{rmk} \label{rmk::assume-gder-sc} The assumption that $G_{\rm{der}}$ be simply connected still admits the $GL_n$ and $GSp_{2n}$ cases. Moreover, Corollary~\ref{cor::rank-of-swj} is only used to find the rank of certain $S_{w, J}$ at the very end of the proof of the main theorem (Theorem~\ref{main-theorem}); the rest of the proof is independent of this assumption. \end{rmk} When $G$ is $GL_n$, $GSp_{4}$ or $GSp_{6}$, we can give a precise description of the structure of $\chars{S_{w,J}}$ as a finitely generated abelian group.
More specifically, in these cases we can say something about how torsion appears, rather than only finding the rank of the group. \begin{lemma} \label{lemma::rel-grp-gln-gsp2n} Suppose $G=GL_n$, $GSp_{4}$ or $GSp_{6}$. Consider $S_{w,J}$ for $\mu$-admissible $w$ and a root subsystem $J \subseteq \Phi$. The Smith form of a relation matrix presenting $\chars{S_{w,J}}$ has a single invariant factor $c_1$, and $$ \chars{S_{w,J}} \cong \begin{cases} \mathbb{Z}^{{\rm{rank}}(\widehat{T}) - {\rm{rank}}(J)-1}, & {\rm{if}}\ c_1 = \pm 1,\\ \mathbb{Z}^{{\rm{rank}}(\widehat{T}) - {\rm{rank}}(J)-1} \times \mathbb{Z}/c_1 \mathbb{Z}, & {\rm{otherwise}}. \end{cases} $$ \end{lemma} \begin{proof} We will handle the $GL_n$ case separately from $GSp_4$ and $GSp_6$. \noindent\textbf{Case 1: $G = GL_n$.} The quotient $\cochars{T}/\mathbb{Z}J^\vee$ is the fundamental group of an endoscopic group of $GL_n$. All endoscopic groups of $GL_n$ have the property that their derived groups are simply connected, so $\cochars{T}/\mathbb{Z}J^\vee$ is torsion-free for all $J \subseteq \Phi$. We show how to write down an explicit set of generators. Let $\varepsilon_1, \ldots, \varepsilon_n$ be the coordinate generators of $\cochars{T}$. Coroots $\alpha^\vee \in J^\vee$ have the form $\alpha_{uv}^\vee = \varepsilon_u - \varepsilon_v$. The image of $\varepsilon_i$ in $\cochars{T}/\mathbb{Z}J^\vee$ is denoted $\bar{\varepsilon}_i$. Let $J = J_1 \coprod \cdots \coprod J_r$ be the irreducible decomposition of $J$. Let $i_k$ be the minimal index among those $i$ such that $\langle \varepsilon_{i}, \alpha \rangle \neq 0$ for some $\alpha \in J_k$. Then if $\alpha_{uv} \in J_k$, we have $\bar{\varepsilon}_{i_k} = \bar{\varepsilon}_{u} = \bar{\varepsilon}_{v}$ in the quotient. Let $A$ denote the set of indices $t$ such that $\langle \varepsilon_t, \alpha \rangle = 0$ for all $\alpha \in J$. Then $\cochars{T}/\mathbb{Z}J^\vee$ is generated by the $\bar{\varepsilon}_{i_k}$ and the $\bar{\varepsilon}_t$.
Relabel this generating set such that $$ \cochars{T}/\mathbb{Z}J^\vee = \langle \bar{\varepsilon}_{j_1}, \ldots, \bar{\varepsilon}_{j_m} \rangle. $$ Let $\lambda = \lambda(w)$ be the translation part of $w$. Then $\lambda = \sum_{\ell=1}^t \varepsilon_{i_\ell}$ in $\cochars{T}$. Let $I_\lambda = \{i_1, \ldots, i_t\}$. There is a map $$ f: I_\lambda \rightarrow \{j_1, \ldots, j_m\} $$ defined by sending $i \in I_\lambda$ to the index $j_k$ such that $\bar{\varepsilon}_i = \bar{\varepsilon}_{j_k}$ in $\cochars{T}/\mathbb{Z}J^\vee$. Set $c_k = \vert f^{-1}(j_k)\vert$ for $1 \le k \le m$, so $\bar{\lambda} = \sum_{k=1}^m c_k \bar{\varepsilon}_{j_k}$. Corollary~\ref{cor::chars-of-swj} and Proposition~\ref{prop::structure-of-lwj} show that $$ \chars{S_{w,J}} = \Big(\cochars{T}/\mathbb{Z}J^\vee\Big) / \langle \lambda \rangle. $$ Thus $\bar{\lambda} = \sum_{k=1}^m c_k \bar{\varepsilon}_{j_k}$ can be thought of as a $1 \times m$ relation matrix among the generators of $\cochars{T}/\mathbb{Z}J^\vee$. The resulting Smith form, which describes the finitely generated abelian group $\chars{S_{w,J}}$, has a single invariant factor $c_1$ and rank equal to $\Big({\rm{rank}}(\widehat{T}) - {\rm{rank}}(J) - 1\Big)$ by Corollary~\ref{cor::rank-of-swj}. \noindent\textbf{Case 2: $G = GSp_{4}$ or $GSp_{6}$.} In this case, $$ \cochars{T} = \langle \varepsilon_0, \varepsilon_1, \ldots, \varepsilon_{n} \rangle, $$ where $\varepsilon_0$ is the similitude cocharacter. The positive coroots have one of three forms for $i < j$: $\varepsilon_i - \varepsilon_j$, $\varepsilon_i + \varepsilon_j$ or $\varepsilon_i$. For these low-rank groups, one can check all possible $J^\vee$ to confirm that $\cochars{T}/\mathbb{Z}J^\vee$ is torsion-free and generated by the images of a subset of the $\varepsilon_i$.
There are six possible $J^\vee$ for $GSp_4$: \begin{align*} J_1^\vee &= \{ \pm (\varepsilon_1 - \varepsilon_2)\} & J_4^\vee &= \{ \pm \varepsilon_2 \} \\ J_2^\vee &= \{ \pm (\varepsilon_1 + \varepsilon_2)\} & J_5^\vee &= \{ \pm \varepsilon_1\} \coprod \{ \pm \varepsilon_2\} \\ J_3^\vee &= \{ \pm \varepsilon_1 \} & J_6^\vee &= \Phi^\vee \\ \end{align*} Each system gives a relation matrix $A_J$ among the generators $\varepsilon_0, \ldots, \varepsilon_n$. For example, in the case of $J_5^\vee$ the matrix is \[ A_{J_5} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \] where the columns correspond to generators of $\cochars{T}$ and each row is a relation coming from a positive coroot. This matrix has invariant factors $(1,1)$, hence $\cochars{T} / \mathbb{Z}J_5^\vee \cong \mathbb{Z}^{3-2} = \mathbb{Z}$ is torsion-free. The relation matrices for $J_1^\vee, \ldots, J_4^\vee$ consist of a single row and have invariant factor $(1)$, so each $\cochars{T} / \mathbb{Z}J_i^\vee$ is torsion-free for $i = 1,\ldots, 4$. Meanwhile, we know $\cochars{T} / \mathbb{Z}\Phi^\vee$ is torsion-free because $GSp_4$ has a simply-connected derived group. We use the same approach for $GSp_6$. There are thirty possible $J^\vee$, which split into seven families organized by the type of $J^\vee$. We list each type, followed by the number of members in the family, and then a representative $J^\vee$ of the family.
\begin{align*} {\rm{Type}}\ A_1 &\ (9) & J^\vee &= \{\pm (\varepsilon_1 - \varepsilon_2)\} \\ {\rm{Type}}\ A_1 \times A_1 &\ (9) & J^\vee &= \{\pm (\varepsilon_1 + \varepsilon_2)\} \coprod \{\pm \varepsilon_3\} \\ {\rm{Type}}\ A_2 &\ (4) & J^\vee &= \{\pm (\varepsilon_1 + \varepsilon_2), \pm (\varepsilon_1 + \varepsilon_3), \pm (\varepsilon_2 - \varepsilon_3)\} \\ {\rm{Type}}\ C_2 &\ (3) & J^\vee &= \{\pm (\varepsilon_1 - \varepsilon_2), \pm (\varepsilon_1 + \varepsilon_2), \pm \varepsilon_1, \pm \varepsilon_2\} \\ {\rm{Type}}\ A_1 \times A_1 \times A_1 &\ (1) & J^\vee &= \{\pm \varepsilon_1\} \coprod \{\pm \varepsilon_2\} \coprod \{\pm \varepsilon_3\} \\ {\rm{Type}}\ C_2\times A_1 &\ (3) & J^\vee &= \{\pm (\varepsilon_1 - \varepsilon_2), \pm (\varepsilon_1 + \varepsilon_2), \pm \varepsilon_1, \pm \varepsilon_2\} \coprod \{\pm \varepsilon_3\} \\ {\rm{Type}}\ C_3 &\ (1) & J^\vee &= \Phi^\vee \\ \end{align*} Now we can construct relation matrices $A_J$ as before to compute the Smith form of $\cochars{T} / \mathbb{Z}J^\vee$ in each case. It turns out that in every case, all invariant factors are units, which means the quotient is torsion-free. Here are some examples.
Let $J^\vee = \{\pm (\varepsilon_1 + \varepsilon_2), \pm (\varepsilon_1 + \varepsilon_3), \pm (\varepsilon_2 - \varepsilon_3)\}$: \[ A_{J} = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -1 \\ \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \] Let $J^\vee = \{\pm (\varepsilon_1 - \varepsilon_2), \pm (\varepsilon_1 + \varepsilon_2), \pm \varepsilon_1, \pm \varepsilon_2\} \coprod \{\pm \varepsilon_3\}$: \[ A_{J} = \begin{bmatrix} 0 & 1 & -1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \] Since $\cochars{T}/\mathbb{Z}J^\vee$ is torsion-free in all cases for $GSp_4$ and $GSp_6$, we can now proceed as in the $GL_n$ case. Consider the image of $\lambda = \sum_{\ell=1}^t \varepsilon_{i_\ell}$ in $\cochars{T}/\mathbb{Z}J^\vee$ to get a relation among the generators of the quotient, then take the Smith form. \end{proof} \subsection{Analogues of the critical index torus} The final section of this chapter determines an analogue of the group of $k_F$-points of the ``critical index torus'' $T_{S(w)}$ defined in the Drinfeld case~\cite{haines-rapoport2012}, Section 6.4, wherein this subtorus is defined by specifying that a coordinate in the diagonal torus $T$ is always 1 if that coordinate does not belong to a certain subset of indices $S(w)$. There are important differences between the Drinfeld case and the more general case being considered here. First, in the general case we cannot find a \emph{subtorus} to play the role of $T_{S(w)}$, though we can approximate its behavior through certain finite subgroups of $T(k_F)$.
Second, whereas the support of $\phi_{r,1}^{\rm{aug}}$ can be described in terms of the $\mu$-admissible set and the single group $T_{S(w)}(k_F)$ in the Drinfeld case, the general case admits the possibility that multiple finite groups $A_{w,J,k_F}$ are needed to completely describe those $s \in T(k_r)$ whose norms $N_r(s)$ contribute to nonzero coefficients $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+)$. This second point will be elaborated in Chapter 5. \subsubsection{Stratification of $S_w^{\rm{dz}}$ by $\chi$-root systems} \label{subsection::stratification-by-chi-root-systems} Recall that $\widehat{T}(\mathbb{C})^{\rm{dz}}$ is the finite subgroup of $\widehat{T}(\mathbb{C})$ comprising the depth-zero endoscopic elements $\kappa_\chi$ associated to $\chi \in T(k_F)^\vee$. Consider any $\kappa_\chi \in \widehat{T}(\mathbb{C})^{\rm{dz}}$, not necessarily relevant to $w$. The corresponding character $\chi$ determines a $\chi$-root system $\Phi_\chi \subseteq \Phi$, which may be reducible. We stratify $\widehat{T}(\mathbb{C})^{\rm{dz}}$ by defining $$ \widehat{T}(J) = \left\{\kappa_\chi \in \widehat{T}(\mathbb{C})^{\rm{dz}} \mid J = \Phi_\chi\right\}, $$ which yields $$\widehat{T}(\mathbb{C})^{\rm{dz}} = \coprod_{J \subseteq \Phi} \widehat{T}(J).$$ Now any subset $A \subseteq \widehat{T}(\mathbb{C})^{\rm{dz}}$ can be stratified by setting $A(J) = A \cap \widehat{T}(J)$ for each $J \subseteq \Phi$, thus we come to consider the strata $S_w^{\rm{dz}}(J)$ of the depth-zero relevant group. \begin{cor} \label{cor::union-of-strata} With notation as above, $$ S_{w,J}^{\rm{dz}} = \coprod_{J \subseteq J^\prime \subseteq \Phi} S_w^{\rm{dz}}(J^\prime). $$ \end{cor} \begin{proof} This is a consequence of Proposition~\ref{prop::set-version-of-dzrelgrpJ}. \end{proof} \begin{lemma} \label{lemma::equality-on-strata} Consider a stratum $S_w^{\rm{dz}}(J)$.
For all characters $\chi \in T(k_F)^\vee$ such that $\kappa_\chi \in S_w^{\rm{dz}}(J)$: the groups $W_\chi^\circ$ are all isomorphic, the length functions $\ell_\chi$ agree in the obvious sense, and the polynomials $\widetilde{R}_{u,v}^\chi(Q_r)$ are identical for all $u, v \in \widetilde{W}_\chi$. \end{lemma} \begin{proof} The key point is that all $\kappa_\chi \in S_w^{\rm{dz}}(J)$ have the same $\chi$-root system $\Phi_\chi \subseteq \Phi$. Consequently, the groups $W_\chi^\circ$ are all identical, and so are the length functions $\ell_\chi$. Finally, the Hecke algebras $\mathcal{H}(W_{\chi, {\rm{aff}}}, S_{\chi, {\rm{aff}}})$ all determine the same $\widetilde{R}^\chi$-polynomials. \end{proof} The common length function and $\widetilde{R}^\chi$-polynomials associated to points in the stratum $S_{w}^{\rm{dz}}(J)$ are sometimes denoted $\ell_J$ and $\widetilde{R}^J$, respectively. \subsubsection{Definition of finite critical groups} \label{section::finite-critical-grps} The critical index torus $T_{S(w)}$ defined in the Drinfeld case is first understood in terms of geometric data concerning the special fiber of the Shimura variety. Haines and Rapoport prove that $T_{S(w)}$ is the particular subtorus of $T$ corresponding to the lattice $L_w \subset \cochars{T}$, where $L_w$ is defined as in Lemma~\ref{lattice-equality-lemma}. The group of $k_F$-points $T_{S(w)}(k_F)$ plays a role in determining the support of $\phi_{r,1}^{\rm{aug}}$ in the Drinfeld case. See also Section~\ref{section::the-drinfeld-case}. In the more general case considered here, the analogous lattices $L_{w,J}$ do not necessarily correspond to subtori of $T$, because $\cochars{T}/L_{w,J}$ is not always torsion-free. \begin{defin} \label{defin::finite-critical-groups} Let $w$ be $\mu$-admissible, and let $J \subseteq \Phi$ be a root subsystem. The lattice $L_{w,J}$ from Corollary~\ref{cor::chars-of-swj} can be viewed as a lattice in $\cochars{T}$.
Define a subgroup of $T(k_F)$ by $$ A_{w,J,k_F} = \langle \nu(k_F^\ast) \:\vert\: \nu \in L_{w,J}\rangle. $$ This is the \textbf{finite critical group} of $w$ and $J$ with respect to $k_F$. \end{defin} \begin{lemma} Let $\chi$ be a depth-zero character on $T(k_F)$. Then $\kappa_\chi$ lies in $S_{w,J}^{\rm{dz}}$ if and only if the restriction of $\chi$ to $A_{w,J,k_F}$ is the trivial character. \end{lemma} \begin{proof} Suppose $\kappa_\chi \in S_{w,J}^{\rm{dz}}$. We have $\eta(\kappa_\chi) = 1$ for all $\eta \in L_{w,J}$ because $\kappa_\chi \in S_{w,J}$ and $\chars{S_{w,J}} = \chars{\widehat{T}}/L_{w,J}$. Thus viewing $\eta$ as an element of $\cochars{T}$, we have $\chi(\eta(\tilde{x})) = 1$ for any generator $\tilde{x} = \tau_F(x)$ of $k_F^\ast$, since $\chi(\eta(\tilde{x})) = \eta(\kappa_\chi)$. But $A_{w,J,k_F}$ is generated by the elements $\eta(\tilde{x})$, so $\chi$ is trivial on this subgroup of $T(k_F)$. Conversely, suppose $\chi$ restricts to the trivial character on $A_{w,J,k_F}$. For any generator $\tilde{x}$ of $k_F^\ast$ such that $\tilde{x} = \tau_F (x)$ for $x \in I_F$, the hypothesis implies $\chi(\eta(\tilde{x})) = 1$ for $\eta \in L_{w,J}$. As before, $\chi(\eta(\tilde{x})) = \eta(\kappa_\chi)$, hence $\eta(\kappa_\chi) = 1$ for all $\eta \in L_{w,J}$. Therefore $\kappa_\chi \in S_{w,J}^{\rm{dz}}$ by Corollary~\ref{cor::contain-by-annihilation}. \end{proof} \subsubsection{Sums over groups of endoscopic elements} Let $N_r : T(k_r) \rightarrow T(k_F)$ be the norm map. Lemma~\ref{lemma::defin-of-gamma-nrs} showed that given $s \in T(k_r)$, we can attach an element $\gamma_{N_r s} \in \cochars{T}$ to $N_r(s) \in T(k_F)$. The purpose of this section is to determine the possible values of sums $$ \sum_{\kappa_\chi \in S_{w,J}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1}. $$ Such sums will appear in Chapter 5 as a result of grouping terms in an expression for $\phi_{r,1}^{\rm{aug}}$.
The finite critical groups introduced in Definition~\ref{defin::finite-critical-groups} are the key determining factor for whether a sum is zero. \begin{prop} \label{prop::sum-over-group} Let $s \in T(k_r)$, and define $\gamma_{N_r s}$ as above. Then $$\sum_{\kappa_\chi \in S_{w,J}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1} = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J, k_F} \\ \vert S_{w,J}^{\rm{dz}} \vert, & {\rm{otherwise}}. \end{cases} $$ \end{prop} \begin{proof} First, suppose there exists $\kappa_0 \in S_{w,J}^{\rm{dz}}$ such that $\gamma_{N_r s} (\kappa_0) \neq 1$. Then $$ \sum_{\kappa_\chi \in S_{w,J}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1} = \sum_{\kappa_\chi \in S_{w,J}^{\rm{dz}}} \gamma_{N_r s}(\kappa_0 \kappa_\chi)^{-1} = \gamma_{N_r s}(\kappa_0)^{-1} \sum_{\kappa_\chi \in S_{w,J}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1}, $$ which implies $\sum_{\kappa_\chi \in S_{w,J}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1} = 0$ because $\gamma_{N_r s}(\kappa_0)^{-1} \neq 1$. But $\gamma_{N_r s}(\kappa_\chi) \neq 1$ for some $\kappa_\chi \in S_{w,J}^{\rm{dz}}$ if and only if $N_r(s) \notin A_{w, J, k_F}$. The second case is clear: if $N_r(s) \in A_{w,J,k_F}$, then $\gamma_{N_r s}(\kappa_\chi) = 1$ for every $\kappa_\chi \in S_{w,J}^{\rm{dz}}$, so each term of the sum equals $1$. \end{proof} \section{Combinatorics on increasing paths in a Bruhat graph} \label{chapter::combinatorics-on-paths} The explicit formula for Bernstein functions attached to dominant minuscule cocharacters applied in Chapter 2 introduced a term to our formula connected with the $\widetilde{R}$-polynomials defined by Kazhdan and Lusztig. In fact, we must work with polynomials $\widetilde{R}_{w, t_{\lambda(w)}}^J(Q_r)$ for $w \in {\rm{Adm}}_{G_r}(\mu)$ and a $\chi$-root system $J \subseteq \Phi$. This chapter applies a formula for $\widetilde{R}$-polynomials due to Dyer in order to rewrite $\widetilde{R}_{w, t_{\lambda(w)}}^J(Q_r)$ in a new way, which will simplify our formula for the coefficients of $\phi_{r,1}^{\rm{aug}}$.
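Before any path combinatorics, note that $\widetilde{R}$-polynomials can be computed directly from the standard recurrence (\cite{bjorner-brenti2005}, Chapter 5): choosing $s$ with $vs < v$, one has $\widetilde{R}_{u,v} = \widetilde{R}_{us,vs}$ if $us < u$, and $\widetilde{R}_{u,v} = \widetilde{R}_{us,vs} + q\widetilde{R}_{u,vs}$ if $us > u$. The following Python sketch is our own illustration for the finite symmetric group only (all names are hypothetical); it represents elements of $S_N$ as permutation tuples and returns the coefficients of $\widetilde{R}_{u,v}(q)$ as (degree, coefficient) pairs.

```python
from functools import lru_cache

N = 3                      # work in the symmetric group S_N
E = tuple(range(N))        # identity permutation

def compose(p, q):         # (p . q)(i) = p[q[i]]; right factor acts first
    return tuple(p[i] for i in q)

def length(p):             # Coxeter length = number of inversions
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

def gen(k):                # simple reflection: swap positions k, k+1
    s = list(range(N))
    s[k], s[k + 1] = s[k + 1], s[k]
    return tuple(s)

GENS = [gen(k) for k in range(N - 1)]

@lru_cache(maxsize=None)
def R_tilde(u, v):
    """Coefficients ((degree, coeff), ...) of R~_{u,v}(q), via the
    standard recurrence: pick s with vs < v; then R~_{u,v} = R~_{us,vs}
    if us < u, and R~_{us,vs} + q R~_{u,vs} if us > u."""
    if u == v:
        return ((0, 1),)
    if length(u) >= length(v):
        return ()                       # R~_{u,v} = 0 unless u < v
    s = next(g for g in GENS if length(compose(v, g)) < length(v))
    us, vs = compose(u, s), compose(v, s)
    if length(us) < length(u):
        return R_tilde(us, vs)
    coeffs = dict(R_tilde(us, vs))
    for d, c in R_tilde(u, vs):         # the  q * R~_{u,vs}  term
        coeffs[d + 1] = coeffs.get(d + 1, 0) + c
    return tuple(sorted(coeffs.items()))
```

For $W = S_3$ this gives $\widetilde{R}_{e,w_0}(q) = q + q^3$; under Dyer's formula the coefficients count increasing paths in the Bruhat graph from $e$ to $w_0$, here one single-edge path (as $w_0$ is itself a reflection) and one path of length three.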
Our first order of business is to recall some definitions and results pertaining to reflection subgroups and reflection orderings on Coxeter groups. Then, we will take a closer look at the behavior of these objects in the special case of the Bruhat interval between an affine Weyl group element and its translation part, before going on to prove the desired modification to Dyer's formula in the final section. \subsection{Background on Coxeter groups} \label{section::coxeter-group-background} Recall that a \textbf{Coxeter system} is a pair $(\mathcal{W},\mathcal{S})$ composed of a group $\mathcal{W}$ generated by a set of involutions $\mathcal{S}$ subject to the relations $(s_i s_j)^{m(i,j)} = 1$, where $m(i,i) = 1$ and $m(i,j) \in \mathbb{Z}_{\geq 2} \cup \{\infty\}$ for $i \neq j$, with no relation imposed when $m(i,j) = \infty$. The \textbf{length} of an element $w \in \mathcal{W}$ is denoted $\ell(w)$. A \textbf{finite} Coxeter group is one for which the group $\mathcal{W}$ is finite, and a \textbf{finite rank} Coxeter group has $\vert \mathcal{S} \vert < \infty$. Weyl groups are finite Coxeter groups, while affine Weyl groups are infinite. Both types of groups have finite rank. Essentially all of the material in this first section is covered in the book by Bj{\"o}rner and Brenti~\cite{bjorner-brenti2005} or in the papers of Dyer cited throughout. \subsubsection{Bruhat order and Bruhat graphs} The following statement is Definition 2.1.1 in~\cite{bjorner-brenti2005}. \begin{defin} Let $(\mathcal{W}, \mathcal{S})$ be a Coxeter system and let $$ \mathcal{T} = \{wsw^{-1} \mid w \in \mathcal{W}, s \in \mathcal{S}\} = \bigcup_{w \in \mathcal{W}} w\mathcal{S}w^{-1} $$ be its set of \textbf{reflections}. Let $u, w \in \mathcal{W}$. Then \begin{enumerate} \item $u\stackrel{t}{\rightarrow}w$ means that $u^{-1}w = t \in \mathcal{T}$ and $\ell(u) < \ell(w)$. \item $u\rightarrow w$ means that $u\stackrel{t}{\rightarrow}w$ for some $t \in \mathcal{T}$.
\item $u \leq w$ means that there exist $u_i \in \mathcal{W}$ such that $$u = u_0 \rightarrow u_1 \rightarrow \cdots \rightarrow u_k = w.$$ \end{enumerate} The \emph{\textbf{Bruhat graph}} $\Omega_{(\mathcal{W},\mathcal{S})}$ is the directed graph whose vertices are the elements of $\mathcal{W}$ and whose edges are given by $u \rightarrow w$. \emph{\textbf{Bruhat order}} is the partial order relation $u \leq w$ on the set $\mathcal{W}$. \end{defin} For elements $x \leq y$ in a Coxeter system $(\mathcal{W}, \mathcal{S})$, a \textbf{path} $\Delta$ from $x$ to $y$, also written $x\stackrel{\Delta}\longrightarrow y$, is a sequence of edges $x = x_0 \rightarrow x_1 \rightarrow \cdots \rightarrow x_k = y$ in $\Omega_{(\mathcal{W},\mathcal{S})}$ connecting the vertices $x$ and $y$. Let $B_{\mathcal{W}}(x,y)$ denote the set of all paths $x\stackrel{\Delta}\longrightarrow y$ through $\Omega_{(\mathcal{W}, \mathcal{S})}$. If $W$ is a Weyl group associated to a root system $\Phi$, this set instead may be written $B_\Phi(x,y)$. \subsubsection{Reflection subgroups of Coxeter groups} \begin{defin} Let $\mathcal{W}$ be a Coxeter group with set of reflections $\mathcal{T}$. Any subgroup $\mathcal{W}^\prime \subset \mathcal{W}$ satisfying $\mathcal{W}^\prime = \langle \mathcal{W}^\prime \cap \mathcal{T} \rangle$ is called a \textbf{reflection subgroup} of $\mathcal{W}$. \end{defin} The following is Definition 3.1 of~\cite{dyer1990}. \begin{defin} \label{defin::coxeter-gens-of-refl-subgrp} Let $\mathcal{W}$ be a Coxeter group. For $w \in \mathcal{W}$, let $$ N(w) = \{ t \in \mathcal{T} \mid \ell(tw) < \ell(w)\}. $$ If $\mathcal{W}^\prime$ is a subgroup of $\mathcal{W}$, let $$ \Sigma(\mathcal{W}^\prime) = \Big\{ t \in \mathcal{T} \mid N(t) \cap \mathcal{W}^\prime = \{t\}\Big\}. $$ \end{defin} \begin{thm} \label{reflection-subgroups-are-coxeter-systems} Let $\mathcal{W}^\prime$ be a reflection subgroup of a Coxeter system $(\mathcal{W}, \mathcal{S})$ and let $\mathcal{S}^\prime = \Sigma(\mathcal{W}^\prime)$.
Then \begin{enumerate} \item $\mathcal{W}^\prime \cap \mathcal{T} = \bigcup_{u \in \mathcal{W}^\prime} u \mathcal{S}^\prime u^{-1}$, and \item $(\mathcal{W}^\prime, \mathcal{S}^\prime)$ is a Coxeter system. \end{enumerate} \end{thm} \begin{proof} This is Theorem 3.3 of~\cite{dyer1990}, where the result is established for general reflection systems. The fact that a reflection subgroup of a Coxeter group is itself a Coxeter group was proved independently by Deodhar \cite{deodhar1989} (see the ``Main Theorem'' proved in Section 3 of the cited paper). \end{proof} Dyer~\cite{dyer1991} further proved that the Bruhat graph associated to any reflection subgroup $\mathcal{W}^\prime$ of $(\mathcal{W}, \mathcal{S})$ embeds as a full subgraph of the Bruhat graph $\Omega_{(\mathcal{W}, \mathcal{S})}$. Let $\Omega_{(\mathcal{W}, \mathcal{S})}(\mathcal{W}^\prime)$ denote the full subgraph of $\Omega_{(\mathcal{W}, \mathcal{S})}$ on the vertex set $\mathcal{W}^\prime$. \begin{thm} \label{thm::bruhat-graph-of-reflection-subgroups} Let $\mathcal{W}^\prime$ be a reflection subgroup of $(\mathcal{W}, \mathcal{S})$ and set $\mathcal{S}^\prime = \Sigma(\mathcal{W}^\prime)$. \begin{enumerate} \item $\Omega_{(\mathcal{W}^\prime, \mathcal{S}^\prime)} = \Omega_{(\mathcal{W}, \mathcal{S})} (\mathcal{W}^\prime)$. \item For any $x \in \mathcal{W}$, there exists a unique $x_0 \in \mathcal{W}^\prime x$ such that the map ${\mathcal{W}^\prime \rightarrow \mathcal{W}^\prime x}$ defined by $w \mapsto wx_0$, for $w \in \mathcal{W}^\prime$, is an isomorphism of directed graphs $\Omega_{(\mathcal{W}, \mathcal{S})} (\mathcal{W}^\prime) \rightarrow \Omega_{(\mathcal{W}, \mathcal{S})} (\mathcal{W}^\prime x)$. \end{enumerate} \end{thm} \begin{proof} This is Theorem 1.4 of~\cite{dyer1991}. \end{proof} Suppose $(\mathcal{W}, \mathcal{S})$ is a finite-rank Coxeter system, i.e., $\mathcal{S}$ has finite cardinality. This is the case for all Coxeter groups considered in this article.
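Because Definition~\ref{defin::coxeter-gens-of-refl-subgrp} and Theorem~\ref{reflection-subgroups-are-coxeter-systems} are purely combinatorial, they can be verified directly in small examples. The following Python sketch (an illustration only, not part of the formal development) checks both statements in the symmetric group $S_4$, with permutations encoded as tuples of $0$-indexed images; the reflection subgroup $\mathcal{W}^\prime = \langle (0\;3),\, (1\;2) \rangle$ is our own choice of example.

```python
from itertools import permutations

def length(w):
    # Coxeter length of a permutation = its number of inversions
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def mult(u, v):
    # (uv)(i) = u(v(i)); permutations are tuples of 0-indexed images
    return tuple(u[v[i]] for i in range(len(v)))

def inv(u):
    out = [0] * len(u)
    for i, x in enumerate(u):
        out[x] = i
    return tuple(out)

n = 4
W = list(permutations(range(n)))
# reflections in S_n are the transpositions (permutations moving exactly 2 points)
T = {t for t in W if sum(1 for i in range(n) if t[i] != i) == 2}

def N(w):
    # N(w) = { t in T : l(tw) < l(w) }, as in Dyer's Definition 3.1
    return {t for t in T if length(mult(t, w)) < length(w)}

def Sigma(Wprime):
    # canonical Coxeter generators of a reflection subgroup
    return {t for t in T if N(t) & Wprime == {t}}

def subgroup(gens):
    # close the identity under left multiplication by the generators
    elems = {tuple(range(n))}
    frontier = set(elems)
    while frontier:
        frontier = {mult(g, h) for g in gens for h in frontier} - elems
        elems |= frontier
    return elems

t03, t12 = (3, 1, 2, 0), (0, 2, 1, 3)   # the transpositions (0 3) and (1 2)
Wp = subgroup([t03, t12])               # reflection subgroup of type A1 x A1
Sp = Sigma(Wp)
# Theorem, part (1): W' \cap T is the union of W'-conjugates of Sigma(W')
conjugates = {mult(mult(u, s), inv(u)) for u in Wp for s in Sp}
```

Here $\Sigma(\mathcal{W}^\prime)$ recovers exactly the two generating transpositions, and part (1) of the theorem reduces to the equality $\mathcal{W}^\prime \cap \mathcal{T} = \{(0\;3), (1\;2)\}$.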
There is a root system $\Phi_{\mathcal{W}}$ associated to any such $(\mathcal{W}, \mathcal{S})$ arising from the standard geometric representation of $\mathcal{W}$. See, for example, \cite{bjorner-brenti2005}, Section 4.4. \begin{lemma} \label{lemma::reflection-subgroup-gives-root-subsystem} Let $W$ be a Weyl group associated to a root system $\Phi$ and let $W^\prime$ be any reflection subgroup of $W$. Then the root system $\Phi_{W^\prime}$ is a sub-system of $\Phi$. \end{lemma} \begin{proof} Per~\cite{bjorner-brenti2005}, the root system $\Phi=\Phi_W$ of $(W,S)$ is equal to $$ \Phi = \{w(\alpha_s) \mid w\in W, s \in S\}, $$ where the $\alpha_s$ form a basis for the ambient Euclidean space, whose dimension equals $\vert S \vert$. The reflection subgroup $W^\prime$ forms a Coxeter system $(W^\prime, \Sigma(W^\prime))$, hence there exists a root system $\Phi_{W^\prime}$ as above. For each $s^\prime \in \Sigma(W^\prime)$, the root $\alpha_{s^\prime}$ equals $u(\alpha_s)$ for some $u \in W$ and $s \in S$. Therefore, for any $w^\prime \in W^\prime$ and $s^\prime \in \Sigma(W^\prime)$, the root $w^\prime(\alpha_{s^\prime})$ has the form $u(\alpha_s)$ for some $u \in W$ and $s \in S$. This proves $\Phi_{W^\prime} \subseteq \Phi$. \end{proof} \subsubsection{Reflection orderings} \label{section::reflection-orderings} Dyer introduced the notion of a reflection ordering in \cite{dyer1993}. Our presentation also draws from Sections 5.2 and 5.3 of~\cite{bjorner-brenti2005}. \begin{defin} Let $(\mathcal{W}, \mathcal{S})$ be a finite-rank Coxeter system, and let $\Phi_{\mathcal{W}}$ be its associated root system.
A total ordering $\prec$ on the (possibly infinite) set of positive roots $\Phi_{\mathcal{W}}^+$ is a \textbf{reflection ordering} if for all $\alpha, \beta \in \Phi_{\mathcal{W}}^+$ and $\lambda, \mu \in \mathbb{R}_{> 0}$ such that $\lambda \alpha + \mu \beta \in \Phi_{\mathcal{W}}^+$, we have that either $$\alpha \prec \lambda \alpha + \mu \beta \prec \beta$$ or $$ \beta \prec \lambda \alpha + \mu \beta \prec \alpha.$$ \end{defin} The bijection between the positive roots $\Phi_{\mathcal{W}}^+$ and the set of reflections $\mathcal{T}$ in $\mathcal{W}$ means that a reflection ordering induces a total ordering on $\mathcal{T}$. \begin{prop} Let $(\mathcal{W}, \mathcal{S})$ be a finite-rank Coxeter system, and let $\Phi_{\mathcal{W}}$ be its associated root system. Then there exists a reflection ordering on $\Phi_{\mathcal{W}}^+$. \end{prop} \begin{proof} This first appeared in \cite{dyer1993}, (2.1) - (2.3), and an alternative proof is given in \cite{bjorner-brenti2005}, Proposition 5.2.1. \end{proof} We emphasize that Dyer's theory holds for both finite and infinite Coxeter groups. In what follows, we will repeatedly discuss reflection orderings on reflections in a finite Weyl group \emph{and} reflection orderings on affine reflections in an affine Weyl group. For a finite Weyl group $W$ (resp. an affine Weyl group $W_{\rm{aff}}$), the root system $\Phi_{\mathcal{W}}$ coincides with the root system of $W$ (resp. the affine root system of $W$). See~\cite{humphreys1990}, Sections 6.4 - 6.5. \begin{defin} Let $(\mathcal{W}, \mathcal{S})$ be a finite-rank Coxeter system, and fix a reflection ordering $\prec$ on $\Phi_{\mathcal{W}}^+$. Given a path $$\Delta = \{w_0, w_1, \ldots, w_n\}$$ from $u = w_0$ to $v = w_n$ through the Bruhat graph for $(\mathcal{W},\mathcal{S})$, define the \textbf{edge set} of $\Delta$ by $$ E(\Delta) = \{ w_{i-1}^{-1} w_i \mid 1 \leq i \leq n\}.
$$ The \textbf{descent set} of $\Delta$ with respect to the reflection ordering $\prec$ is defined by $$ D(\Delta; \prec) = \Big\{ i \in \{1,\ldots, n-1\} : w_i^{-1} w_{i+1} \prec w_{i-1}^{-1} w_i\Big\}. $$ \end{defin} Given a fixed reflection ordering $\prec$ on the reflections of $\mathcal{W}$ and $x \leq y$ in the Bruhat order on $\mathcal{W}$, we denote the set of $\prec$-increasing paths from $x$ to $y$ by $$ B_{\mathcal{W}}^\prec (x,y) = \{ \Delta \in B_{\mathcal{W}}(x,y) \mid D(\Delta; \prec) = \emptyset \}. $$ \subsection{Bruhat intervals for admissible elements} This section shows that any path from a $\mu$-admissible $w$ in $\widetilde{W}$ to its translation part $t_{\lambda(w)}$ consists solely of reflections appearing in the finite Weyl group $W$. \begin{rmk} In this section and the next, we will sometimes write $B_{\Phi_{\rm{aff}}}(w, t_{\lambda(w)})$ when working with $w$ and $t_{\lambda(w)}$ in $\widetilde{W}$---as opposed to $W_{\rm{aff}}$. Let us justify this apparent misuse of notation. Recall how Bruhat order works in the extended affine Weyl group: if $w \leq t_{\lambda(w)}$, then there exists a length-zero element $\sigma \in \widetilde{W}$ such that $w, t_{\lambda(w)} \in \sigma W_{\rm{aff}}$, and $\sigma^{-1} w \leq \sigma^{-1} t_{\lambda(w)}$ in the Bruhat order on $W_{\rm{aff}}$. We are simply writing $B_{\Phi_{\rm{aff}}}(w, t_{\lambda(w)})$ instead of $B_{\Phi_{\rm{aff}}}(\sigma^{-1} w, \sigma^{-1} t_{\lambda(w)})$. \end{rmk} \begin{prop} \label{prop::finite-reflections-in-interval} Let $\mu$ be a dominant minuscule coweight of $\Phi$, and let $(W,S)$ be the finite Weyl group of $\Phi$ inside the affine Weyl group $(W_{\rm aff}, S_{\rm aff})$. Let $T$ be the set of reflections in $W$. Consider a $\mu$-admissible element $w \leq t_{\lambda (w)}$. There exists a length-zero element $\sigma$ in $\widetilde{W}$ such that $w, t_{\lambda(w)} \in \sigma W_{\rm{aff}}$.
Let $w \stackrel{\Delta}\longrightarrow t_{\lambda (w)}$ be any path in the Bruhat graph $\Omega_{(W_{\rm{aff}}, S_{\rm{aff}})}$. Each reflection in the edge set $E(\Delta) = \{t_1, \ldots t_n\}$ belongs to $T$. \end{prop} \begin{proof} Let $\mathcal{C}$ denote the base alcove. Recall that $W_{\rm{aff}}$ acts simply transitively on alcoves (see \cite{humphreys1990}, Section 4.5), and let $A_u = u \cdot \mathcal{C}$ for $u \in W_{\rm{aff}}$. Because $w$ belongs to ${\rm{Adm}}(\mu)$, it can be written $w = t_\lambda \bar{w}$ with $w \leq t_\lambda$. (N.B. the $\lambda$ here is $\lambda(w)$ by definition.) We claim that $A_w$ and $A_{t_\lambda}$ both contain $\lambda$ in their closures. Observe that the fundamental alcove $\mathcal{C}$ and the alcove $\bar{w}\mathcal{C}$ both have the origin in their closure, because $\bar{w}$ is an element of the finite Weyl group. Translating by $\lambda$ carries these alcoves to $A_{t_\lambda} = t_\lambda \mathcal{C}$ and $A_w = t_\lambda \bar{w}\mathcal{C}$, and carries the origin to $\lambda$, which therefore lies in the closure of both. A path $w \stackrel{\Delta}\longrightarrow t_{\lambda}$ is a sequence of (affine) reflections $t_1, \ldots, t_n$ such that $$w < wt_1 < w t_1 t_2 < \cdots < wt_1 \cdots t_n = t_{\lambda}.$$ Now we further claim that $\lambda$ lies on every hyperplane crossed along this path. Let $H_i$ be a hyperplane crossed by going from $A_{w t_1 \ldots t_{i-1}}$ to $A_{w t_1\ldots t_i}$. All such $H_i$ weakly separate $A_w$ from $A_{t_\lambda}$. Suppose $\lambda$ did not lie in some $H_i$. Then it would be strictly on one side of $H_i$. This is a contradiction, because $\lambda$ belongs to the closure of both $A_w$ and $A_{t_\lambda}$, and these alcoves lie weakly on opposite sides of $H_i$. As a matter of notation, given $u \in \widetilde{W}$, let $^u t_i = u t_i u^{-1}$.
Then we can rewrite the above sequence as $$w < {^w t_1} w < {^{wt_1}t_2} {^w t_1}w < \cdots <{^{wt_1\cdots t_{n-1}} t_n} \cdots {^{wt_1}t_2} {^w t_1} w = t_{\lambda}.$$ The argument above shows that the hyperplane for each affine reflection ${^{wt_1\cdots t_{i-1}} t_i}$ passes through the point $\lambda$. Therefore, the corresponding reflection fixes this point, $$ {^{wt_1\cdots t_{i-1}} t_i} (\lambda) = \lambda. $$ Since $w^{-1} = \bar{w}^{-1} t_{-\lambda}$, the preceding equation can be rewritten $$ t_1 \cdots t_i \cdots t_1 \bar{w}^{-1}t_{-\lambda} (\lambda) = \bar{w}^{-1}t_{-\lambda}(\lambda). $$ Using $\bar{w}^{-1} (0) = 0$, we conclude that for each $1 \leq i \leq n$, $$t_1 \cdots t_i \cdots t_1 (0) = \bar{w}^{-1} (0) = 0.$$ An affine reflection fixes the origin if and only if its translation part is trivial. So the reflection $t_1 \cdots t_i \cdots t_1$ is in the finite Weyl group. It follows that each $t_i$ belongs to $T$. \end{proof} \begin{lemma} \label{lemma::minimal-coset-rep} Let $w = t_\lambda \bar{w}$ be $\mu$-admissible. There is an element $w_\lambda \in W$ such that $t_\lambda w_\lambda$ has minimal length in the coset $t_\lambda W$, and moreover, for any $x \in W$ $$ \ell(t_\lambda w_\lambda x) = \ell(t_\lambda w_\lambda) + \ell(x). $$ Finally, for any $x, y \in W$, $t_\lambda w_\lambda x \leq t_\lambda w_\lambda y$ if and only if $x \leq y$. \end{lemma} \begin{proof} Because we know $w \leq t_\lambda$, there is a length-zero element $\sigma \in \widetilde{W}$ such that $w, t_\lambda \in \sigma W_{\rm{aff}}$. We may and do think of $t_\lambda$ and $w$ as elements of $W_{\rm{aff}}$ by multiplying each on the left by $\sigma^{-1}$. Since $W$ is a finite group, it is clearly possible to attain a minimal value in the set $\{ \ell(t_\lambda x) \mid x \in W\}$. Let $w_\lambda$ denote an element of $W$ such that $\ell(t_\lambda w_\lambda)$ is minimal.
In fact, this $w_\lambda$ is unique by the theory of minimal coset representatives applied to the quotient $W_{\rm{aff}}/W$; see \cite{bjorner-brenti2005} Corollary 2.4.5 for example. By subadditivity of the length function, for any $x \in W$ the lengths satisfy $$ \ell(t_\lambda w_\lambda x) \leq \ell(t_\lambda w_\lambda) + \ell(x). $$ Suppose $\ell(t_\lambda w_\lambda x) < \ell(t_\lambda w_\lambda) + \ell(x)$. Let $S_{\rm{aff}} = \{s_0, s_1, \ldots, s_r\},$ and choose reduced expressions $t_\lambda w_\lambda = s_{i_1}\cdots s_{i_n}$ and $x = s_{j_1}\cdots s_{j_m}$; the letters $s_{j_\bullet}$ lie in $S$, because $W$ is a standard parabolic subgroup of $W_{\rm{aff}}$. By assumption of strict inequality, there is an index $j_k$ such that $$ \ell(t_\lambda w_\lambda s_{j_1}\cdots s_{j_{k-1}} s_{j_k}) < \ell(t_\lambda w_\lambda s_{j_1}\cdots s_{j_{k-1}}). $$ By the Exchange Condition, multiplying $t_\lambda w_\lambda s_{j_1}\cdots s_{j_{k-1}}$ by $s_{j_k}$ must delete some letter in the expression $s_{i_1}\cdots s_{i_n} s_{j_1}\cdots s_{j_{k-1}}.$ Since the expression for $x$ is reduced, $s_{j_k}$ must delete some $s_{i_p}$. But then $$ s_{i_1}\cdots \widehat{s_{i_p}} \cdots s_{i_n} = t_\lambda w_\lambda s_{j_1}\cdots s_{j_k} s_{j_{k-1}} \cdots s_{j_1} $$ lies in the coset $t_\lambda W$ and has length at most $n - 1 < \ell(t_\lambda w_\lambda)$, contradicting the minimality of $\ell(t_\lambda w_\lambda)$. Therefore, $\ell(t_\lambda w_\lambda x) = \ell(t_\lambda w_\lambda) + \ell(x).$ The final statement follows from this length equality together with the subword property of Bruhat order. Indeed, by the equality just proved, concatenating the reduced expression $s_{i_1}\cdots s_{i_n}$ for $t_\lambda w_\lambda$ with a reduced expression for any $x \in W$ yields a reduced expression for $t_\lambda w_\lambda x$. If $x \leq y$, then a reduced word for $x$ occurs as a subword of any reduced word for $y$, and the corresponding subword of the concatenated word for $t_\lambda w_\lambda y$ exhibits $t_\lambda w_\lambda x \leq t_\lambda w_\lambda y$. The converse is the standard fact that left multiplication by the minimal length representative of a coset induces an isomorphism of Bruhat orders between $W$ and $t_\lambda W$; this follows from the theory of quotients in Section 2.4 of \cite{bjorner-brenti2005}. \end{proof} \begin{prop} \label{bruhat-interval-isomorphism} Let $w = t_\lambda \bar{w} \in {\rm{Adm}}_{G_r}(\mu)$. Given a reflection ordering $\prec$ on $W_{\rm{aff}}$, let $\prec$ also denote its restriction to $W$.
There is an explicit $\prec$-preserving bijection between $ B_{\Phi_{\rm{aff}}}(w, t_\lambda)$, whose paths go through $\Omega_{(W_{\rm{aff}}, S_{\rm{aff}})}$, and $B_\Phi(w_\lambda^{-1}\bar{w}, w_\lambda^{-1})$, whose paths go through $\Omega_{(W,S)}$. \end{prop} \begin{proof} Choose any $\Delta \in B_{\Phi_{\rm{aff}}}(w, t_\lambda)$, with edge set $E(\Delta) = \{t_1, \ldots, t_n\}$. That is, $$ w < wt_1 < w t_1 t_2 < \cdots < w t_1 t_2 \cdots t_{n-1} < t_\lambda. $$ By the final statement of Lemma~\ref{lemma::minimal-coset-rep}, this chain of inequalities holds if and only if the following chain also holds, $$ w_\lambda^{-1} \bar{w} < w_\lambda^{-1} \bar{w} t_1 < \cdots < w_\lambda^{-1} \bar{w} t_1\cdots t_{n-1} < w_\lambda^{-1}. $$ Proposition~\ref{prop::finite-reflections-in-interval} shows that the $t_i$ are reflections in the finite Weyl group. Hence the new chain of inequalities defines a path $\Delta^\prime \in B_\Phi (w_\lambda^{-1} \bar{w}, w_\lambda^{-1})$. The edge sets of each path are identical, and moreover, the edges appear in the same order. Therefore, we have a $\prec$-preserving bijection between the two Bruhat intervals. \end{proof} \subsection{A stratified formula for $\widetilde{R}$-polynomials} The preceding section showed that the set of paths between a $\mu$-admissible element and its translation part increasing with respect to a reflection ordering $\prec$ is in bijection with the $\prec$-increasing paths between certain finite Weyl group elements. In this section, we show that the polynomials $\widetilde{R}_{w, t_{\lambda(w)}}^J(Q_r)$ appearing in the formula for the coefficients of $\phi_{r,1}^{\rm{aug}}$ can be written as a sum indexed by these paths. \subsubsection{Dyer's formula} This subsection is devoted to justifying Theorem~\ref{dyer-R-polynomial-formula}, Dyer's $\widetilde{R}$-polynomial formula, in the context of this article.
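Before giving the formal definitions, it may help to see the formula verified by brute force in the smallest interesting case. The following Python sketch (illustrative only; the choice of group and ordering is ours) works in the Weyl group of type $A_2$, i.e., the symmetric group $S_3$: it computes $R_{e, w_0}$ from the standard recursion (Theorem 5.1.1 of~\cite{bjorner-brenti2005}, restated below), enumerates all paths from $e$ to $w_0$ in the Bruhat graph, and checks the Bj{\"o}rner--Brenti form of Dyer's formula for the reflection ordering $\alpha_1 \prec \alpha_1 + \alpha_2 \prec \alpha_2$, with $\hat{Q} = q^{1/2} - q^{-1/2}$ evaluated exactly at $q = 9$.

```python
from fractions import Fraction
from itertools import permutations

def length(w):
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

def mult(u, v):
    return tuple(u[v[i]] for i in range(3))

W = list(permutations(range(3)))
T = [t for t in W if sum(1 for i in range(3) if t[i] != i) == 2]  # reflections
S = [t for t in T if length(t) == 1]                              # simple ones
e, w0 = (0, 1, 2), (2, 1, 0)

def R(u, v, q):
    # the R-polynomial recursion, evaluated at a numeric value of q
    if u == v:
        return 1
    if length(u) >= length(v):          # then u is not below v in Bruhat order
        return 0
    s = next(t for t in S if length(mult(v, t)) < length(v))  # right descent of v
    if length(mult(u, s)) < length(u):  # s is also a right descent of u
        return R(mult(u, s), mult(v, s), q)
    return q * R(mult(u, s), mult(v, s), q) + (q - 1) * R(u, mult(v, s), q)

def paths(u, v):
    # all directed paths u -> ... -> v in the Bruhat graph, as reflection sequences
    if u == v:
        return [[]]
    return [[t] + rest for t in T
            if length(u) < length(mult(u, t)) <= length(v)
            for rest in paths(mult(u, t), v)]

# reflection ordering alpha_1 < alpha_1 + alpha_2 < alpha_2, transported to
# reflections: s1 = (0 1), s1 s2 s1 = (0 2), s2 = (1 2)
order = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]
increasing = [p for p in paths(e, w0)
              if all(order.index(p[i]) < order.index(p[i + 1])
                     for i in range(len(p) - 1))]

# Dyer: R_{e,w0}(q) = q^{l(e,w0)/2} * sum over increasing paths of Qhat^{l(Delta)}
k = Fraction(3)                 # evaluate at q = k^2, so q^{1/2} = k exactly
q, Qhat = k * k, k - 1 / k
assert R(e, w0, q) == k ** (length(w0) - length(e)) * sum(Qhat ** len(p)
                                                          for p in increasing)
```

In this case there are five paths from $e$ to $w_0$ in the Bruhat graph, of which exactly two are increasing (of lengths $1$ and $3$), recovering $\widetilde{R}_{e, w_0}(x) = x^3 + x$ and $R_{e,w_0}(q) = (q-1)(q^2 - q + 1)$.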
We begin by explaining how the polynomials are defined for a general Coxeter system, then we compare this definition to the definition of $\widetilde{R}$-polynomials arising from the inversion formula for basis elements of a twisted affine Hecke algebra (see Definition~\ref{defin::r-poly-from-hecke-algebras}). Given a Coxeter system $(\mathcal{W},\mathcal{S})$ and $w \in \mathcal{W}$, the \textbf{right descent set} of $w$ is defined as $$ D_R(w) = \{s \in \mathcal{S} \mid \ell(ws) < \ell(w)\}. $$ \begin{thm} Let $(\mathcal{W}, \mathcal{S})$ be a Coxeter system. There is a unique family of polynomials $\{R_{u,v}(q^r)\}_{u,v \in \mathcal{W}}$ satisfying the following conditions: \begin{enumerate} \item $R_{u,v}(q^r) = 0$ if $u\nleq v$, \item $R_{u,v}(q^r) = 1$ if $u = v$, \item If $s \in D_R(v)$, then $$ R_{u,v}(q^r) = \begin{cases} R_{us,vs}(q^r), & {\rm{if}}\ s \in D_R(u), \\ q^rR_{us,vs}(q^r) + (q^r-1)R_{u,vs}(q^r), & {\rm{if}}\ s \notin D_R(u). \end{cases} $$ \end{enumerate} \end{thm} \begin{proof} This is Theorem 5.1.1 of~\cite{bjorner-brenti2005}. \end{proof} \begin{prop} Let $u, v \in \mathcal{W}$, and let $\hat{Q}_r = q^{r/2} - q^{-r/2}$. There exists a unique polynomial $\widetilde{R}_{u,v} \in \mathbb{N}[x]$ such that $$R_{u,v}(q^r) = q^{r\ell(u,v)/2} \widetilde{R}_{u,v}(\hat{Q}_r).$$ \end{prop} \begin{proof} This is Proposition 5.3.1 of~\cite{bjorner-brenti2005}. \end{proof} Recall that in Chapter 2 we set $Q_r = q^{-r/2} - q^{r/2}$. In the definitions given for the Hecke algebra case, the $R$-polynomials and $\widetilde{R}$-polynomials are related by $$ (-1)^{\ell(u)} (-1)^{\ell(v)} R_{u,v}(q^r) = q^{r\ell(u,v)/2} \widetilde{R}_{u,v}(Q_r), $$ for $u$ and $v$ in the extended affine Weyl group $\widetilde{W}$, whereas in the definition of the polynomials for general Coxeter groups we have $$ R_{u,v}(q^r) = q^{r\ell(u,v)/2} \widetilde{R}_{u,v}(\hat{Q}_r).
$$ \begin{lemma} In the notation defined above, $$ (-1)^{\ell(u)} (-1)^{\ell(v)} \widetilde{R}_{u,v} (Q_r) = \widetilde{R}_{u,v}(\hat{Q}_r). $$ \end{lemma} \begin{proof} Notice that $\hat{Q}_r = -Q_r$, so it suffices to show that the sign $(-1)^{\ell(u)} (-1)^{\ell(v)}$ is exactly the sign produced by rewriting each term of the polynomial in the variable $\hat{Q}_r$. It will help to rewrite $(-1)^{\ell(u)} (-1)^{\ell(v)} = (-1)^{\ell(v) - \ell(u)} = (-1)^{\ell(u,v)}$. Next, we recall two facts about $\widetilde{R}$-polynomials arising from the inversion formula for Hecke algebra basis elements. First, ${\rm{deg}}\big(\widetilde{R}_{u,v}(Q_r)\big) = \ell(u,v)$; see~\cite{haines2000b}, Lemma 2.5. Second, the powers of $Q_r$ in $\widetilde{R}_{u,v} (Q_r)$ all have the same parity. This follows from Theorem~\ref{dyer-R-polynomial-formula}, whose proof is independent of the current argument. Suppose we have two paths $\Delta_1, \Delta_2 \in B_{\Phi_{\chi, \rm{aff}}}^\prec(u, v)$, such that the product of edges of $\Delta_1$ is $t_1\cdots t_a$, while the product of edges of $\Delta_2$ is $u_1 \cdots u_b$. Then $$u^{-1}v = t_1\cdots t_a = u_1 \cdots u_b.$$ It is a general fact about Coxeter groups that in this situation $a - b$ must be even; but $a = \ell(\Delta_1)$ and $b = \ell(\Delta_2)$. An $\widetilde{R}$-polynomial has coefficients $c_n$ in $\mathbb{N}$. Multiplying a term of degree $\ell(u,v) - 2n$ through by $(-1)^{\ell(u,v)}$ gives $$ (-1)^{\ell(u,v)} c_n Q_r^{\ell(u,v) - 2n} = c_n \hat{Q}_r^{\ell(u,v)-2n}, $$ since $\hat{Q}_r^{\ell(u,v)-2n} = (-1)^{\ell(u,v)-2n} Q_r^{\ell(u,v)-2n} = (-1)^{\ell(u,v)} Q_r^{\ell(u,v)-2n}$. This completes the proof. \end{proof} Finally, we can state Dyer's formula~\cite{dyer1993} using the notation of Chapter 2. \begin{thm} \label{dyer-R-polynomial-formula} Let $\widetilde{W}_{\chi}$ be the extended affine Weyl group of $H_{\chi_r}$, and let $\prec$ be a reflection ordering on the reflections in $W_{\chi, \rm{aff}}$. Let $Q_r = q^{-r/2} - q^{r/2}$.
For any $u, v \in \widetilde{W}_{\chi}$ such that $u \leq_\chi v$ in Bruhat order, $$\widetilde{R}^\chi_{u,v} (Q_r) = \sum_{\Delta \in B_{\Phi_{\chi, \rm{aff}}}^\prec(u, v)} Q_r^{\ell(\Delta)}.$$ \end{thm} \begin{proof} This statement is Theorem 5.3.4 of~\cite{bjorner-brenti2005}, applied to the case of $\widetilde{R}$-polynomials arising from $\mathcal{H}(H_{\chi_r}, I_{H_r})$ viewed as a twisted affine Hecke algebra. \end{proof} \subsubsection{Modifications to Dyer's formula} Suppose $J$ is a root sub-system of $\Phi$. Let $W_J = \langle s_\alpha \mid \alpha \in J^+\rangle$ be the reflection subgroup of $W$ associated to $J$. The group $W_{J,{\rm{aff}}}$ is the corresponding affine Weyl group. Given a reflection ordering $\prec$ on reflections in $W_{\rm{aff}}$, we induce a reflection ordering on reflections in $W_{J, {\rm{aff}}}$ by restricting the ordering on $\Phi_{\rm{aff}}^+$ to $J_{\rm{aff}}^+$. Let $w = t_\lambda \bar{w} \in {\rm{Adm}}_{G_r}(\mu)$, and $J \subseteq \Phi$. There is a length-zero element $\sigma$ stabilizing the base alcove $\mathcal{C}$ such that $w$ and $t_\lambda$ lie in $\sigma W_{\rm{aff}}$, so we may speak of paths $w \stackrel{\Delta}{\longrightarrow} t_\lambda$ through the Bruhat graph $\Omega_{(W_{\rm{aff}}, S_{\rm{aff}})}$. We consider Bruhat intervals $$ B_{J_{{\rm{aff}}}}^{\prec} (w, t_\lambda) = \{ w \stackrel{\Delta}{\longrightarrow} t_\lambda \mid E(\Delta) \subset W_{J,{\rm{aff}}},\ D(\Delta, \prec) = \emptyset \}. $$ \begin{lemma} \label{path-root-system-lemma} For any path $\Delta$ in $B_{J_{\rm{aff}}}^{\prec} (w, t_\lambda)$, there is a unique minimal root subsystem $J_\Delta \subseteq J$ such that all reflections $t_i \in E(\Delta)$ lie in $W_{J_\Delta}$. \end{lemma} \begin{proof} Fix a path $\Delta \in B_{J_{\rm{aff}}}^{\prec} (w, t_{\lambda})$. By Proposition~\ref{prop::finite-reflections-in-interval}, the edges of all paths in $B_{J_{\rm{aff}}}^{\prec} (w, t_{\lambda})$ are finite reflections.
Observe that $$W^\prime =\ \bigcap_{\substack{E(\Delta) \subset V \leq W_J}} V \ =\ \langle t_i \mid t_i \in E(\Delta) \rangle,$$ where $V$ ranges over the subgroups of $W_J$ containing $E(\Delta)$, is the smallest subgroup of $W_J$ containing all of the edges in $\Delta$. Since $W^\prime$ is generated by reflections, it is a reflection subgroup, and Theorem~\ref{reflection-subgroups-are-coxeter-systems} shows that $W^\prime$ is itself a Coxeter group. Therefore, by Lemma~\ref{lemma::reflection-subgroup-gives-root-subsystem}, there is an associated root system $J_\Delta$ whose positive roots are in bijection with the reflections of $W^\prime$; minimality and uniqueness of $J_\Delta$ follow from the minimality of $W^\prime$. \end{proof} \begin{prop} \label{stratified-r-polynomial-formula} Let $w=t_\lambda \bar{w} \in {\rm{Adm}}_{G_r}(\mu)$. The polynomial $\widetilde{R}_{w,t_\lambda}^J (Q_r)$, defined with respect to the reflection subgroup $(W_J, \Sigma_J)$, can be rewritten as $$ \widetilde{R}_{w,t_\lambda}^J (Q_r) = \sum_{J^\prime \subseteq J}\; \sum_{\substack{\Delta \in B_{J}^{\prec} (w,t_\lambda)\\ J_\Delta = J^\prime}} Q_r^{\ell(\Delta)}. $$ \end{prop} \begin{proof} This is a direct consequence of Dyer's formula and the preceding lemmas. \end{proof} \begin{lemma} \label{invariance-of-paths} Let $w = t_\lambda \bar{w} \in {\rm{Adm}}_{G_r}(\mu)$. For any chain of root systems $J^\prime \subseteq J \subseteq \Phi$, there is an equality of sets $$ \{\Delta \in B_J^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1}) \mid J_\Delta = J^\prime \} = \{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1}) \mid J_\Delta = J^\prime\}. $$ \end{lemma} \begin{proof} The reflection subgroups $W_J$ and $W_{J^\prime}$ can be realized as Coxeter groups $(W_J, \Sigma_J)$ and $(W_{J^\prime}, \Sigma_{J^\prime})$. Let $\Omega_{(W,S)}$ denote the Bruhat graph of $(W,S)$ and use analogous notation for graphs of reflection subgroups. The key observation comes from Theorem~\ref{thm::bruhat-graph-of-reflection-subgroups}: The Bruhat graph of a reflection subgroup is equal to the full subgraph of the Bruhat graph of an ambient Coxeter group having vertices in the reflection subgroup.
Symbolically, this says: $$ \Omega_{(W,S)} (W_{J^\prime}) = \Omega_{(W_{J^\prime}, \Sigma_{J^\prime})} = \Omega_{(W_J, \Sigma_J)} (W_{J^\prime}). $$ Therefore, the set of paths through $\Omega_{(W,S)}$ associated to $J^\prime$ equals the set of paths through $\Omega_{(W_J, \Sigma_J)}$ associated to $J^\prime$. \end{proof} \begin{cor} \label{cor::finite-interval-r-polynomial} Let $w = t_\lambda \bar{w} \in {\rm{Adm}}_{G_r}(\mu)$ and $J \subseteq \Phi$. Then $$ \widetilde{R}_{w,t_\lambda}^J (Q_r) = \sum_{J^\prime \subseteq J}\; \sum_{\substack{\Delta \in B_\Phi^\prec (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} Q_r^{\ell(\Delta)}. $$ \end{cor} \begin{proof} Apply Lemma~\ref{invariance-of-paths} to rewrite the formula of Proposition~\ref{stratified-r-polynomial-formula} with $B_\Phi^\prec (w, t_\lambda)$ instead of $B_J^\prec (w, t_\lambda)$. Then apply Proposition~\ref{bruhat-interval-isomorphism}. \end{proof} \section{The combinatorial formula and example calculations} \label{chapter::main-theorem} Let us recall what we have done leading up to this final chapter. First, we gave an abstract definition of a test function $\phi_r$ via the Bernstein center and then explicitly described the test function in the case of $I_r^+$-level structure for split connected reductive groups with connected center by applying results on depth-zero characters and invoking Haines's formula for Bernstein functions attached to dominant minuscule cocharacters. We then embarked on two (mostly) independent paths. By looking more closely at the depth-zero endoscopic elements $\kappa_\chi$, we obtained information on which $s \in T(k_r)$ characterize the nonzero coefficients of $\phi_{r,1}^{\rm{aug}}$, and we simplified the sum in the first explicit formula through a stratification process indexed by $\chi$-root systems. This stratification by $\chi$-root systems was also employed in the subsequent chapter to adapt a formula of Dyer concerning $\widetilde{R}$-polynomials.
All of these threads will now come together into the main theorem. The proof of the Main Theorem amounts to showing that the various stratifications and corresponding summations behave in a compatible way. By swapping sums and subsequently reorganizing terms, objects introduced in Chapters 3 and 4 begin to appear in the formula. Finally, all but one summation drops out of the formula for $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+)$. This remaining sum is over a well-studied combinatorial set connected with the Bruhat order of the \emph{ambient} Weyl group. After proving the formula, we make some remarks about using it in practice. The chapter concludes with some calculations of interest to the study of Shimura varieties in the cases of $GL_n$ and $GSp_{2n}$. \subsection{The Main Theorem} Throughout this section, let $G$ be a split connected reductive group with connected center and whose derived group $G_{\rm{der}}$ is simply-connected, and assume $W_\chi^\circ = W_\chi$ (see Remark~\ref{rmk::roche-assumptions}). Fix a split maximal torus $T \subset G$ of rank $d$. Let $\mu$ be a dominant minuscule cocharacter of $T$. Finally, choose a reflection ordering $\prec$ on $\Phi(G,T)$. Recall that Proposition~\ref{prop::first-explicit-formula} presented our first explicit formula for the coefficients of $\phi_{r,1}^{\rm{aug}}$, the function used in place of ${\phi_{r,1} = q^{\ell(t_{\mu})/2}(Z_{V_\mu} \ast 1_{I_r^+})}$ when computing twisted orbital integrals: $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = [I_r: I_r^+]^{-1} \sum_{\kappa_\chi \in K_{q-1}} \gamma_{N_r s} (\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2} \widetilde{R}_{w,t_{\lambda(w)}}^\chi(Q_r). $$ \subsubsection{Proof of the main theorem} The notation $I_r^+ sw I_r^+$ treats $w \in \widetilde{W}$ as an element of $G_r$ using the set-theoretic embedding $\widetilde{W} \hookrightarrow G_r$ defined in Definition~\ref{defin::set-theoretic-embedding}.
\begin{lemma} For $w \in {\rm{Adm}}_{G_r}(\mu)$ and $s \in T(k_r)$, we have $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = [I_r: I_r^+]^{-1} \sum_{J \subseteq \Phi} \sum_{\kappa_\chi \in S_w^{\rm{dz}}(J)} \gamma_{N_r s}(\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2} \widetilde{R}_{w,t_{\lambda(w)}}^J(Q_r). $$ \end{lemma} \begin{proof} According to Lemma~\ref{lemma::support-of-phi-r-chi}, $Z_{V_\mu} \ast e_{\chi_r} (w) = 0$ if $\kappa_\chi$ is not relevant to $w$, i.e., if $\kappa_\chi \notin S_w^{\rm{dz}}$. Using this, we may replace the index set $K_{q-1}$ in the formula from Proposition~\ref{prop::first-explicit-formula} with $S_w^{\rm{dz}}$. Then $\phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+)$ equals $$ [I_r: I_r^+]^{-1} \sum_{\kappa_\chi \in S_w^{\rm{dz}}} \gamma_{N_r s} (\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2} \widetilde{R}_{w,t_{\lambda(w)}}^\chi(Q_r). $$ Next, stratify $S_w^{\rm{dz}}$ as described in Section~\ref{subsection::stratification-by-chi-root-systems}, i.e., $S_w^{\rm{dz}} = \coprod_{J \subseteq \Phi} S_w^{\rm{dz}}(J)$, so that the coefficient equals $$ [I_r: I_r^+]^{-1} \sum_{J \subseteq \Phi} \sum_{\kappa_\chi \in S_w^{\rm{dz}}(J)} \gamma_{N_r s} (\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2} \widetilde{R}_{w,t_{\lambda(w)}}^\chi(Q_r). $$ Finally, recall that the polynomials $\widetilde{R}_{w,t_{\lambda(w)}}^\chi(Q_r)$ are identical for all $\kappa_\chi$ in $S_w^{\rm{dz}}(J)$ by Lemma~\ref{lemma::equality-on-strata}. Specifically, they are the polynomial $\widetilde{R}_{w,t_{\lambda(w)}}^J(Q_r)$ with $J = \Phi_\chi$. This completes the proof.
\end{proof} \begin{lemma} \label{lemma::formula-after-chapter-4} For $w \in {\rm{Adm}}_{G_r}(\mu)$ and $s \in T(k_r)$, $\phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+)$ equals $$ [I_r: I_r^+]^{-1} \sum_{J \subseteq \Phi} \sum_{\kappa_\chi \in S_w^{\rm{dz}}(J)} \gamma_{N_r s}(\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2} \sum_{J^\prime \subseteq J}\; \sum_{\substack{\Delta \in B_\Phi^\prec (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} Q_r^{\ell(\Delta)}. $$ \end{lemma} \begin{proof} Replace $\widetilde{R}_{w,t_{\lambda(w)}}^J(Q_r)$ with the formula from Corollary~\ref{cor::finite-interval-r-polynomial}. \end{proof} For each path $\Delta$ in $B_\Phi^\prec(w, t_{\lambda(w)})$, we defined a root system $J_\Delta \subseteq \Phi$, which in turn determines a diagonalizable subgroup $S_{w, J_\Delta}$ of $\widehat{T}(\mathbb{C})$. Let $S_{w, J_\Delta}^{\rm{tors}}$ denote the torsion subgroup of $S_{w, J_\Delta}$. If $w = t_{\lambda(w)}$, then $B_\Phi^\prec(w, t_{\lambda(w)})$ contains only a single element, the ``empty path,'' which has length zero and trivial root system $J_\Delta = \emptyset$; see also Corollary~\ref{cor::main-theorem-codim-0}. Let $w \in \widetilde{W}$, $s \in T(k_r)$, and $J \subseteq \Phi$. Definition~\ref{defin::finite-critical-groups} introduced the finite critical groups $A_{w,J,k_F}$, and we saw in Proposition~\ref{prop::sum-over-group} that there are consequences of $N_r (s)$ belonging to $A_{w,J,k_F}$ or not. Define the symbol $\delta(s, w, J)$ by $$ \delta(s, w, J) = \begin{cases} 0, & {\rm{if}}\ w\notin {\rm{Adm}}_{G_r}(\mu) \\ 0, & {\rm{if}}\ w \in {\rm{Adm}}_{G_r}(\mu)\ {\rm{and}}\ N_r(s) \notin A_{w, J, k_F} \\ 1, & {\rm{if}}\ w \in {\rm{Adm}}_{G_r}(\mu)\ {\rm{and}}\ N_r(s) \in A_{w, J, k_F}. \end{cases} $$ Recall that the main theorem holds under the conditions of Remark~\ref{rmk::roche-assumptions}. \begin{thm} \label{main-theorem} Let $w \in \widetilde{W}$ and $s \in T(k_r)$. Let $d$ be the rank of $T$.
Fix a reflection ordering $\prec$ on $\Phi$, and set $c(\Delta) = \left[\ell(w, t_\mu) - \ell(\Delta)\right]/2$. The coefficient of $\phi_{r,1}^{\rm{aug}}$ on the double coset $I_r^+ sw I_r^+$ is given by $$ (-1)^d \sum_{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1}\vert (q-1)^{d-{\rm{rank}}(J_\Delta) -1} q^{r c(\Delta)} (1-q^r)^{\ell(\Delta)-d}. $$ \end{thm} \begin{proof} If $w \notin {\rm{Adm}}_{G_r}(\mu)$, we know $\phi_{r,1}^\prime(I_r^+ sw I_r^+) = 0$ for all $s \in T(k_r)$ by combining Corollary~\ref{cor::equality-of-orbital-integrals} and Lemma~\ref{lemma::image-of-phi-r-chi}. Thus we assume $w \in {\rm{Adm}}_{G_r}(\mu)$, and our starting point is the version of the formula given in Lemma~\ref{lemma::formula-after-chapter-4}. The first phase of the proof involves rearranging the four summations therein. First, observe that the sum indexed by endoscopic elements in the stratum $S_w^{\rm{dz}}(J)$ does not depend on the choice of $J^\prime \subseteq J$. Exchanging the corresponding summations and moving all terms to the innermost quantity shows that $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+)$ equals $$ [I_r: I_r^+]^{-1} \sum_{J \subseteq \Phi} \;\sum_{J^\prime \subseteq J}\;\sum_{\kappa_\chi \in S_w^{\rm{dz}}(J)}\;\sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} \gamma_{N_r s}(\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2}Q_r^{\ell(\Delta)}. $$ Next, we rewrite the first two sums of this expression as follows: instead of summing over $J \subseteq \Phi$ and then over $J^\prime \subseteq J$, first sum over $J^\prime \subseteq \Phi$ and then over $J \supseteq J^\prime$.
The new expression is $$ [I_r: I_r^+]^{-1} \sum_{J^\prime \subseteq \Phi} \;\sum_{J \supseteq J^\prime}\;\sum_{\kappa_\chi \in S_w^{\rm{dz}}(J)}\;\sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} \gamma_{N_r s}(\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2}Q_r^{\ell(\Delta)}. $$ The innermost sum does not depend on the choice of $J$ containing a fixed $J^\prime$. Therefore we may move it through the two adjacent summations. That is, \begin{multline*} [I_r: I_r^+]^{-1} \sum_{J^\prime \subseteq \Phi} \;\sum_{J \supseteq J^\prime}\;\sum_{\kappa_\chi \in S_w^{\rm{dz}}(J)}\;\sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} \gamma_{N_r s}(\kappa_\chi)^{-1} q^{r\ell(w,t_{\lambda(w)})/2}Q_r^{\ell(\Delta)} \\ = [I_r: I_r^+]^{-1} \sum_{J^\prime \subseteq \Phi} \sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} q^{r\ell(w,t_{\lambda(w)})/2} Q_r^{\ell(\Delta)} \left( \sum_{J \supseteq J^\prime}\;\sum_{\kappa_\chi \in S_w^{\rm{dz}}(J)} \gamma_{N_r s}(\kappa_\chi)^{-1}\right). \end{multline*} Corollary~\ref{cor::union-of-strata} simplifies the quantity in parentheses: $$ [I_r: I_r^+]^{-1} \sum_{J^\prime \subseteq \Phi} \sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} q^{r\ell(w,t_{\lambda(w)})/2} Q_r^{\ell(\Delta)} \left(\sum_{\kappa_\chi \in S_{w, J^\prime}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1}\right) $$ The second phase of the proof simplifies the preceding expression by applying our results about paths through the Bruhat graph and sums of character values. Recall that $\ell(t_\mu) = \ell(t_\lambda)$ for all $\lambda \in W\mu$, so that we may work with $\ell(w, t_\mu)$ in all cases rather than $\ell(w, t_{\lambda(w)})$. 
Observe that the difference of lengths $$ \ell(w, t_\mu) = \ell(t_\mu) - \ell(w) $$ has the same parity as every path length $\ell(\Delta)$. (This is a rephrasing of our earlier statement that the degrees of the terms in $\widetilde{R}$-polynomials all have the same parity.) Thus $c(\Delta) = \left[\ell(w, t_\mu) - \ell(\Delta)\right]/2$ is a nonnegative integer. We also apply Corollary~\ref{prop::sum-over-group} to the quantity $\left(\sum_{\kappa_\chi \in S_{w, J^\prime}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1}\right)$, which implies $$ \left(\sum_{\kappa_\chi \in S_{w, J^\prime}^{\rm{dz}}} \gamma_{N_r s}(\kappa_\chi)^{-1}\right) = \delta(s,w,J^\prime) \vert S_{w, J^\prime}^{\rm{dz}} \vert. $$ Finally, use that $I_r/I_r^+ \cong T(k_r)$ and that $T$ is a split maximal torus to see $$ [I_r : I_r^+] = (q^r -1)^d. $$ The result is that $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+)$ equals $$ (-1)^d \sum_{J^\prime \subseteq \Phi}\;\sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J^\prime}} \delta(s,w,J^\prime) \vert S_{w, J^\prime}^{\rm{dz}} \vert q^{rc(\Delta)} (1-q^r)^{\ell(\Delta)-d}. $$ For simplicity of notation, relabel all $J^\prime$ as $J$: $$ (-1)^d \sum_{J \subseteq \Phi}\;\sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J}} \delta(s,w,J) \vert S_{w, J}^{\rm{dz}}\vert q^{rc(\Delta)} (1-q^r)^{\ell(\Delta)-d}. $$ The third phase of the proof simplifies the double summation in the previous expression. Recall that there is a well-defined root subsystem $J_\Delta \subseteq \Phi$ associated to each path $\Delta$ in $B_\Phi^\prec(w_\lambda^{-1}\bar{w}, w_\lambda^{-1})$. This relationship partitions the $\prec$-increasing paths: $$ B_\Phi^\prec (w_\lambda^{-1}\bar{w}, w_\lambda^{-1}) = \coprod_{J \subseteq \Phi} \left\{ \Delta \in B_\Phi^\prec (w_\lambda^{-1}\bar{w}, w_\lambda^{-1}) \mid J_\Delta = J\right\}.
$$ For a fixed $J \subseteq \Phi$, suppose that $J_\Delta \neq J$ for all $\Delta \in B_\Phi^\prec(w_\lambda^{-1}\bar{w}, w_\lambda^{-1})$. Then $$ \delta(s, w,J) \vert S_{w, J}^{\rm{dz}}\vert \sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J}} q^{rc(\Delta)} (1-q^r)^{\ell(\Delta)-d} = 0. $$ On the other hand, for any $J$ where there exist paths $\Delta$ such that $J_\Delta = J$, we have an equality \begin{multline*} \delta(s, w,J) \vert S_{w, J}^{\rm{dz}}\vert \sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J}} q^{rc(\Delta)} (1-q^r)^{\ell(\Delta)-d} \\ = \sum_{\substack{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})\\ J_\Delta = J}} \delta(s, w,J_\Delta) \vert S_{w, J_\Delta}^{\rm{dz}} \vert q^{rc(\Delta)} (1-q^r)^{\ell(\Delta)-d} \end{multline*} \noindent So we have shown that $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+)$ equals $$ (-1)^d \sum_{\Delta \in B_\Phi^{\prec} (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})} \delta(s, w,J_\Delta) \vert S_{w, J_\Delta}^{\rm{dz}} \vert q^{rc(\Delta)} (1-q^r)^{\ell(\Delta)-d}, $$ because $B_\Phi^\prec(w_\lambda^{-1}\bar{w}, w_\lambda^{-1})$ splits as a disjoint union indexed by $J \subseteq \Phi$. Finally, let us rewrite $\vert S_{w, J_\Delta}^{\rm{dz}} \vert$. The diagonalizable group $S_{w, J_\Delta} \subseteq \widehat{T}(\mathbb{C})$ factors into a direct product of a torus $S_{w, J_\Delta}^\circ$ and a torsion subgroup $S_{w,J_\Delta}^{\rm{tors}}$. Because the depth-zero endoscopic elements in $\widehat{T}(\mathbb{C})$ are exactly the elements of the kernel $K_{q-1}$ of the endomorphism on $\widehat{T}(\mathbb{C})$ given by multiplication by $(q-1)$, $$ S_{w, J_\Delta}^{\rm dz} = (S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1}) \times (S_{w, J_\Delta}^\circ \cap K_{q-1}).
$$ But since ${\rm{rank}}(S_{w, J_\Delta}) = d - {\rm{rank}}(J_\Delta) - 1$ by Corollary~\ref{cor::rank-of-swj}, and $S_{w, J_\Delta}^\circ$ is the connected component, we must have $\vert S_{w, J_\Delta}^\circ \cap K_{q-1} \vert = (q-1)^{d - {\rm{rank}}(J_\Delta) - 1}$; so, $$ \vert S_{w, J_\Delta}^{\rm{dz}} \vert = \vert S_{w, J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert (q-1)^{d - {\rm{rank}}(J_\Delta) - 1}. $$ This completes the proof of the main theorem. \end{proof} \subsubsection{The Drinfeld Case} \label{section::the-drinfeld-case} The formula for test functions in the Drinfeld case found by Haines and Rapoport is a special case of Theorem~\ref{main-theorem}. Their expression depends on the ``set of critical indices'' $S(w)$ associated to a $\mu$-admissible element $w \in \widetilde{W}$ and the corresponding subtorus $T_{S(w)}$ of the split maximal torus $T$ in $G$. Recall the Drinfeld case data: $G = GL_d$, $\mu = (1,0,\dots,0)$, and $k_F \cong \mathbb{F}_p$. Let $e_1 = (1, 0, \ldots)$, $e_2 = (0,1,0,\ldots)$, and so on. \begin{defin} In the Drinfeld case, for $w \in {\rm{Adm}}_{G_r}(\mu)$, the set of \textbf{critical indices} is the subset $S(w) \subseteq \{1, \ldots, d\}$ given by $$ S(w) = \{j \mid w \leq t_{e_j}\}. $$ The subtorus $T_{S(w)}$ consists of the elements ${\rm{diag}}(t_1, \ldots, t_d) \in T$ such that $t_i = 1$ for all $i \notin S(w)$. See Section 6 of~\emph{\cite{haines-rapoport2012}} for more details. \end{defin} \begin{cor} \emph{(Haines-Rapoport, \cite{haines-rapoport2012} 12.2.1)} \label{cor::drinfeld-case} With respect to the Haar measure $dx$ on $G_r$ which gives $I_r^+$ volume equal to 1, the function $\phi_{r,1}^{\rm{aug}}$ is given by the formula $$ \phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = \begin{cases} 0,\ {\rm{if}}\ w \notin {\rm{Adm}}_{G_r}(\mu) \\ 0,\ {\rm{if}}\ w \in {\rm{Adm}}_{G_r}(\mu)\ {\rm{and}}\ N_r(s) \notin T_{S(w)}(k_F) \\ (-1)^{d} (p-1)^{d-\vert S(w)\vert} (1-p^r)^{\vert S(w)\vert-d-1},\ {\rm{otherwise}}.
\\ \end{cases} $$ \end{cor} \begin{proof} Admissible elements in the Drinfeld case have the form $$ w = t_{e_m} (m m_{k-1} \cdots m_1), $$ where $m > m_{k-1} > \cdots > m_1$. The proof of \cite{haines2000b}, Proposition 5.2, shows that, in this case, $$ \widetilde{R}_{w, t_{\lambda(w)}}(Q) = Q^{\ell(w, t_{\lambda(w)})}.$$ But Theorem~\ref{dyer-R-polynomial-formula} gives this polynomial in terms of the set $B_\Phi^\prec (w, t_{\lambda(w)})$ for any choice of reflection ordering $\prec$. It follows that $B_\Phi^\prec (w, t_{\lambda(w)})$ consists of a single path of length $\ell(w, t_{\lambda(w)})$. Therefore, if $w \in {\rm{Adm}}_{G_r}(\mu)$ we have $$ \phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = (-1)^{d} (1-q^r)^{-d} \delta(s,w,\Phi) \vert S_w^{\rm{dz}} \vert (1-q^r)^{\ell(w,t_{\lambda(w)})}. $$ Haines and Rapoport show that $\ell(w,t_{\lambda}) = \vert S(w)\vert - 1$. It is clear that in the Drinfeld case $S_w$ is always a torus, hence the subtorus $T_{S(w)}$ defined above fits into an exact sequence $$ 1 \rightarrow T_{S(w)} \rightarrow T \rightarrow Q_w \rightarrow 1, $$ such that $Q_w$ is dual to $\widehat{T}/S_w$. We conclude that $\vert S_w^{\rm{dz}} \vert = (p-1)^{d-\vert S(w)\vert}$ by the relations imposed by $\lambda(\kappa) = 1$ and $\bar{w}(\kappa) = \kappa$. Now apply the results of Chapter 3: if $w \in {\rm{Adm}}_{G_r}(\mu)$ and $N_r(s) \in T_{S(w)}(k_F)$, then we have $$ \phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = (-1)^{d} (p-1)^{d-\vert S(w)\vert} (1-q^r)^{\vert S(w)\vert-d-1}, $$ and the coefficient is zero otherwise. \end{proof} \subsubsection{Relationship to test functions with Iwahori level structure} \label{section::rel-to-fn-iwahori-level} Let us say something about the relationship between the formula for the coefficients of $\phi_{r,1}^{\rm{aug}}$, which is sufficient for computing twisted orbital integrals of the test function $\phi_{r,1}$, and the coefficients of the test function $\phi_{r,0}$ with \emph{Iwahori} level structure.
Let $\phi_{r,0} = q^{r\ell(t_\mu)/2} z_{\mu, r}$ be the Kottwitz function, where $\mu$ is a dominant minuscule cocharacter of $G$ as usual and $z_{\mu, r}$ is a Bernstein function in the center of $\mathcal{H}(G_r, I_r)$ as defined in Section~\ref{section::bernstein-functions}. Haines's formula for Bernstein functions of minuscule cocharacters shows that $$ \phi_{r,0} = q^{r\ell(t_\mu)/2} \sum_{w \in {\rm{Adm}}_{G_r}(\mu)} \widetilde{R}_{w, t_{\lambda(w)}}(Q_r) \tilde{T}_{w, r}, $$ where again notation is the same as in Chapter~\ref{chapter::test-functions}. Its coefficients are $$ \phi_{r,0}(I_r w I_r) = q^{r\ell(t_\mu)/2} q^{-r\ell(w)/2} \widetilde{R}_{w, t_{\lambda(w)}}(Q_r). $$ The term $q^{-r\ell(w)/2}$ comes from the normalization of basis elements in the Iwahori-Matsumoto presentation of $\mathcal{H}(G_r, I_r)$. Dyer's formula for $\widetilde{R}$-polynomials, discussed in Chapter~\ref{chapter::combinatorics-on-paths}, implies $$ \phi_{r,0}(I_r w I_r) = q^{r\ell(t_\mu)/2} q^{-r\ell(w)/2} \sum_{\Delta \in B_{\Phi_{\rm{aff}}}^\prec (w, t_{\lambda(w)})} Q_r^{\ell(\Delta)}. $$ Let $c(\Delta) = [\ell(w, t_\mu) - \ell(\Delta)]/2$ as in Theorem~\ref{main-theorem} to get $$ \phi_{r,0}(I_r w I_r) = \sum_{\Delta \in B_{\Phi_{\rm{aff}}}^\prec (w, t_{\lambda(w)})} q^{rc(\Delta)} (1-q^r)^{\ell(\Delta)}. $$ So we see that the formula for the coefficients of $\phi_{r,0}$ has a similar structure to that for the coefficients of $\phi_{r,1}^{\rm{aug}}$. The latter can be written as $$ \phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = [I_r : I_r^+]^{-1} \sum_{\Delta \in B_{\Phi_{\rm{aff}}}^\prec (w, t_{\lambda(w)})} \delta(s, w, J_\Delta) \vert S_{w, J_\Delta}^{\rm{dz}} \vert q^{rc(\Delta)} (1-q^r)^{\ell(\Delta)}. $$ \subsection{Remarks on applying the formula} \label{section::implementation-remarks} Now that we have established the combinatorial formula for the coefficients of $\phi_{r,1}^{\rm{aug}}$, let us say a few words about calculating values of the formula.
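Before turning to special cases, it is convenient to note that candidate coefficients can be checked with exact rational arithmetic. The following plain-Python sketch (not the SageMath code actually used) evaluates one contribution per path from the data $\big(\vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1}\vert, A, B, C\big)$, assuming $\delta(s,w,J_\Delta) = 1$; the sample data are those that arise for the length-zero element of $(GL_4, \mu=(1,1,0,0))$ in the next section, where the template has no overall sign because $(-1)^4 = 1$.

```python
from fractions import Fraction

def template_term(q, r, torsion, A, B, C):
    # One path's contribution, assuming delta(s, w, J_Delta) = 1:
    # |S^tors ∩ K_{q-1}| * (q-1)^A * q^(rB) * (1-q^r)^C, as an exact rational
    q = Fraction(q)
    return torsion * (q - 1)**A * q**(r * B) * (1 - q**r)**C

# Length-zero element of (GL_4, mu = (1,1,0,0)): two paths, with table data
# (torsion, A, B, C) = (1, 1, 1, -2) for Delta_1 and (2, 0, 0, 0) for Delta_2.
def coeff(q, r):
    return template_term(q, r, 1, 1, 1, -2) + template_term(q, r, 2, 0, 0, 0)

# Compare with the closed form (q-1) q^r (1-q^r)^{-2} + 2 at q = 3, r = 2
q, r = 3, 2
closed_form = (q - 1) * Fraction(q)**r / (1 - Fraction(q)**r)**2 + 2
print(coeff(q, r), closed_form)
```

The same helper evaluates every case below once the per-path data $(A, B, C)$ and the torsion order are read off a table row.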
\subsubsection{Some special cases} Understanding the $\prec$-increasing paths through the Bruhat graph is an important first step in calculating results with Theorem~\ref{main-theorem}. In some cases, $B_{\Phi_{\rm{aff}}}^\prec(w, t_{\lambda(w)})$ has a very simple description. The exercises following Chapter 5 of \cite{bjorner-brenti2005} describe $\widetilde{R}_{u,v}(Q)$ for elements $u, v \in W$ such that $\ell(u,v) \leq 4$; the following two corollaries consider the cases $\ell(w, t_{\lambda(w)}) = 0$ and $\ell(w, t_{\lambda(w)}) = 1$. \begin{cor} \label{cor::main-theorem-codim-0} Suppose $w = t_\lambda$ for $\lambda \in W \mu$. Then $$ \phi_{r,1}^{\rm{aug}}(I_r^+ s t_\lambda I_r^+) = \begin{cases} (-1)^d (q-1)^{d-1} (1-q^r)^{-d}, & {\rm{if}}\ N_r(s) \in A_{w, \emptyset, k_F} \\ 0, & {\rm{otherwise}}. \end{cases} $$ \end{cor} \begin{proof} There are no non-trivial $\prec$-increasing paths from $t_\lambda$ to itself. Therefore, the formula reduces to $$ [I_r : I_r^+]^{-1} \sum_{\kappa_\chi \in S_w^{\rm{dz}}} 1 = (-1)^d (1 - q^r)^{-d} \vert S_{w}^{\rm{dz}}\vert. $$ The relation $\lambda (\kappa_\chi) = 1$ is the only restriction on depth-zero endoscopic elements. If we view $\kappa_\chi = {\rm{diag}}(\kappa_1, \ldots, \kappa_d) \in \widehat{T}(\mathbb{C})$, then this restriction allows for a free choice of all $\kappa_i$ except for one coordinate, which is determined by the relation. It follows that $S_w$ is a torus, hence $\vert S_w^{\rm{dz}} \vert = (q-1)^{d-1}$. \end{proof} \begin{cor} \label{cor::main-theorem-codim-1} Suppose $w = t_\lambda x$ for a reflection $x \in W$ such that $\ell(w, t_{\lambda}) = 1$. Then $$ \phi_{r,1}^{\rm{aug}}(I_r^+ s w I_r^+) = \begin{cases} (-1)^d \vert S_{w, J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert (q-1)^{d-2} (1-q^r)^{1-d}, & {\rm{if}}\ N_r(s) \in A_{w,J_\Delta, k_F} \\ 0, & {\rm{otherwise}}. \end{cases} $$ \end{cor} \begin{proof} The set $B_{\Phi_{\rm{aff}}}^\prec(w, t_\lambda)$ contains a single path $\Delta = \{w, wx\}$.
It is clear that $J_\Delta$ is a rank 1 system determined by the root $\alpha \in \Phi$ such that $x = s_\alpha$. Applying the formula shows that $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+)$ equals $$ (-1)^d \vert S_{w, J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert (q-1)^{d-2} (1-q^r)^{1-d}, $$ because $[\ell(w, t_{\lambda}) - \ell(\Delta)]/2 = [1 - 1]/2 = 0$. \end{proof} \subsubsection{Implementation details} Let us describe how to implement the formula in software. The data presented in the next sections were computed using SageMath~\cite{sagemath}; however, this process should be feasible in any mathematics package with a robust implementation of Coxeter groups and finitely generated abelian groups. The first order of business is to enumerate the $\mu$-admissible set, because if $w \notin {\rm{Adm}}(\mu)$ then $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = 0$. By definition, $$ {\rm{Adm}}(\mu) = \{w \in \widetilde{W} \mid w \le t_\lambda\ {\rm{for\ some}}\ \lambda \in W\mu\}. $$ Therefore, we can employ the following naive algorithm: \begin{enumerate} \item Determine the orbit $W\mu$. \item For every $\lambda \in W\mu$ and every $\bar{w} \in W$, compute $t_\lambda \bar{w}$. \item If $t_\lambda \bar{w} \le t_\lambda$, then $w = t_\lambda \bar{w} \in {\rm{Adm}}(\mu)$. \end{enumerate} This approach is fast enough to handle small-rank cases, but since $\vert W \vert = (n+1)!$ for type $A_n$ systems, the exhaustive strategy quickly becomes infeasible as the rank grows. The algorithm now proceeds independently for each $w = t_\lambda \bar{w} \in {\rm{Adm}}(\mu)$: compute $\ell(w)$ and $\ell(t_{\lambda})$; this determines the codimension $\ell(w, t_{\lambda})$. Next, we must find the minimal length coset representative of $t_\lambda$ with respect to the finite Weyl group, i.e., find $w_\lambda \in W$ such that $t_\lambda w_\lambda \in \widetilde{W}$ has minimal length.
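Step 1 of the naive algorithm, determining the orbit $W\mu$, is elementary: in type $A$ the finite Weyl group permutes the coordinates of a cocharacter. Here is a minimal Python sketch (a stand-in for the SageMath routines actually used; steps 2 and 3 require the affine Bruhat order and are not reproduced):

```python
from itertools import permutations

def weyl_orbit(mu):
    """Orbit of a cocharacter mu = (mu_1, ..., mu_d) under the finite Weyl
    group of GL_d, which acts by permuting coordinates."""
    return {tuple(mu[i] for i in sigma) for sigma in permutations(range(len(mu)))}

# For GL_4 and mu = (1,1,0,0) the orbit is the set of zero-one vectors with
# exactly two ones, of which there are binom(4,2) = 6.
orbit = weyl_orbit((1, 1, 0, 0))
print(sorted(orbit))
print(len(orbit))  # 6
```

Testing $t_\lambda \bar{w} \le t_\lambda$ for each $\lambda$ in this orbit and each $\bar{w} \in W$ then yields ${\rm{Adm}}(\mu)$.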
Once the inverse $w_\lambda^{-1}$ is in hand, we can focus our attention on the finite group $W$ and the elements $w_\lambda^{-1} \bar{w}$ and $w_\lambda^{-1}$. In order to compute $B_\Phi^\prec (w_\lambda^{-1}\bar{w}, w_\lambda^{-1})$, we need to choose a reflection ordering $\prec$ for the finite Weyl group. Fortunately, one of Dyer's results provides a straightforward algorithm for making a consistent choice across all root system types and for all ranks. \begin{prop} \label{prop::refl-order-from-high-root} Let $(\mathcal{W},\mathcal{S})$ be a finite Coxeter system with longest element $w_0$, and let $\mathcal{T} = \{t_1, \ldots, t_n\}$ be the set of reflections in $\mathcal{W}$. Then the total ordering $\prec$ on $\mathcal{T}$ such that $t_1 \prec \ldots \prec t_n$ is a reflection ordering if and only if there is a reduced expression $w_0 = s_1 \ldots s_n$, where $s_i \in \mathcal{S}$, such that $t_i = s_1 \ldots s_{i-1} s_i s_{i-1} \ldots s_1$. \end{prop} \begin{proof} This is \cite{dyer1993}, Proposition 2.13. \end{proof} We come now to the main combinatorial part of the algorithm: enumerating the $\prec$-increasing paths through the Bruhat graph of the finite Weyl group. This is hard insofar as the Bruhat graph of $(W,S)$ grows rapidly in complexity as the rank of the group increases, but the basic problem has been studied due to the connection with Kazhdan-Lusztig theory. The naive approach of creating the full Bruhat graph and then considering all paths is very expensive even in small examples. Instead, we take advantage of the relative scarcity of reflections in $W$ compared to $\vert W \vert$. For example, there are $\frac{n(n+1)}{2}$ reflections in a Weyl group of type $A_n$ while the group has order $(n+1)!$. For elements $u, v \in W$, enumerate $B_\Phi^\prec(u, v)$ as follows: \begin{enumerate} \item Let $C$ denote the set of ``candidate paths'' in the Bruhat graph $\Omega_{(W,S)}$.
This set is initially empty and will be built up as the algorithm proceeds. \item For each reflection $t \in T$, if $u < ut$ in Bruhat order, add $\{u, ut\}$ to $C$. \item For each reflection $t \in T$, and each candidate path $\Delta^\prime \in C$, with $x$ equal to the last vertex in $\Delta^\prime$ and $e$ the last edge in $\Delta^\prime$, check whether $x < xt$ and $e \prec t$. If both conditions are satisfied, add the path obtained by appending $xt$ to $\Delta^\prime$ to $C$. \item Iterate the previous step until paths of length $\ell(u,v)$ have been considered. \item Finally, for each $\Delta^\prime \in C$, if the last vertex of $\Delta^\prime$ equals $v$, then $\Delta^\prime$ is a $\prec$-increasing path in $\Omega_{(W,S)}$ from $u$ to $v$. \end{enumerate} While this algorithm is not necessarily efficient, the work is manageable: the procedure runs for at most $\ell(u,v)$ rounds, and each round tries at most $\vert T \vert$ extensions per candidate path. Once we have enumerated $B_{\Phi}^\prec(w_\lambda^{-1} \bar{w}, w_\lambda^{-1})$, we can read off the statistics $c(\Delta)$ and $\ell(\Delta)$ for each path. The edges of the path determine the root system $J_\Delta$: it is the intersection of all root subsystems of $\Phi$ which contain $E(\Delta)$. For each path $\Delta \in B_\Phi^\prec (w_\lambda^{-1} \bar{w}, w_\lambda^{-1})$, the character group $\chars{S_{w, J_\Delta}}$ is a finitely generated abelian group. It corresponds to $S_{w, J_\Delta}$ under the categorical anti-equivalence between diagonalizable algebraic groups and their character groups. Therefore, if we find the invariant factors for $\chars{S_{w, J}}$, as a finitely generated abelian group, then we get a description of $S_{w, J}$. We know that $\chars{S_{w, J_\Delta}} = \cochars{T} / L_{w, J_\Delta}$, and $L_{w, J_\Delta}$ is generated by $\lambda$ and $\alpha^\vee$ for $\alpha \in J_\Delta^+$ (whenever $\bar{w} \in W_{J_\Delta}$, which is true here).
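The reflection-ordering construction of Proposition~\ref{prop::refl-order-from-high-root} and the path-enumeration procedure above can be sketched in plain Python for type $A$, where reflections are transpositions, length is inversion count, and the Bruhat-graph edge $x \to xt$ is characterized by $\ell(xt) > \ell(x)$. This is a simplified stand-in for the SageMath implementation actually used; as a check, it reproduces the two paths in $B_\Phi^\prec(e, s_{2312})$ from Section~\ref{section::worked-example-gl4-1100}.

```python
def compose(a, b):
    # (a * b)(k) = a[b[k]]; permutations are tuples of 0-indexed values
    return tuple(a[b[k]] for k in range(len(a)))

def length(p):
    # Coxeter length in type A = number of inversions
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def reflection_order(n, word):
    # word: a reduced word for w_0 as a list of simple-reflection indices (1-based).
    # t_i = s_{w_1} ... s_{w_{i-1}} s_{w_i} s_{w_{i-1}} ... s_1 is the conjugate
    # of s_{w_i} by the prefix p, i.e. the transposition (p(w_i), p(w_i + 1)).
    p, order = tuple(range(n)), []
    for a in word:
        i, j = p[a - 1], p[a]
        order.append(tuple(sorted((i + 1, j + 1))))   # root alpha_{ij}, 1-based
        s = list(range(n)); s[a - 1], s[a] = s[a], s[a - 1]
        p = compose(p, tuple(s))
    return order

def right_mult(x, t):
    # x * (i j): swaps the entries in positions i and j of x (1-based positions)
    i, j = t
    y = list(x); y[i - 1], y[j - 1] = y[j - 1], y[i - 1]
    return tuple(y)

def increasing_paths(u, v, order):
    # All prec-increasing Bruhat-graph paths u -> v: each step multiplies by a
    # reflection that increases length, with strictly increasing edge labels.
    found = []
    def dfs(x, last, edges):
        if x == v:
            found.append(edges)
            return
        for idx in range(last + 1, len(order)):
            y = right_mult(x, order[idx])
            if length(y) > length(x):
                dfs(y, idx, edges + [order[idx]])
    dfs(u, -1, [])
    return found

# Reflection ordering for A_3 from w_0 = s_1 s_2 s_3 s_1 s_2 s_1:
order = reflection_order(4, [1, 2, 3, 1, 2, 1])
print(order)  # [(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)], i.e. lexicographic

# B_Phi^prec(e, s_2312): s_2312 = (1 3)(2 4) has one-line notation [3,4,1,2]
e, target = (0, 1, 2, 3), (2, 3, 0, 1)
paths = increasing_paths(e, target, order)
print(sorted(len(p) for p in paths))  # [2, 4]: the paths Delta_1 and Delta_2
```

The edge sets returned are $\{(1,3),(2,4)\}$ and $\{(1,2),(1,3),(2,3),(2,4)\}$, matching $E(\Delta_1)$ and $E(\Delta_2)$ below under the identification $s_{121} = (1\,3)$, $s_{232} = (2\,4)$.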
Suppose we choose a generating set for $\cochars{T}$ and find the coordinates for $\lambda$ and the coroots $\alpha^\vee$ in terms of these generators. This is sufficient data to compute the invariant factors of $\chars{S_{w, J_{\Delta}}}$. If this group has any torsion, then take the additional step of computing $\vert S_{w, J_{\Delta}}^{\rm{tors}} \cap K_{q-1} \vert$ by comparing the order of the torsion elements with the $(q-1)$-th roots of unity. This concludes the mathematical considerations for implementing the calculations in software. The calculations reported on in the next two sections and in the Appendix were done through a combination of by-hand calculation and SageMath. Results were obtained on a single 1.1 GHz core using approximately 800 MB of RAM in the largest cases; however, once the up-front work of computing the $\mu$-admissible set is done, the computation could be trivially parallelized across many cores. \subsection{Results for general linear groups} Let $F$ be a $p$-adic field. The $F$-points of the general linear group $GL_d$ are $$ GL_d(F) = \{ g \in M_{d,d} (F) \mid {\rm{det}}(g) \neq 0 \}, $$ where $M_{d, d}(F)$ is the ring of $d \times d$ matrices with coefficients in $F$. We fix a split maximal torus $T = \{{\rm{diag}}(t_1, \ldots, t_d) \mid t_i \in \mathbb{G}_m\}$ in $GL_d$. Since $GL_d$ is self-dual, we can identify $T$ with its dual $\widehat{T}$ so that $$ \widehat{T}(\mathbb{C}) = \{ \kappa = {\rm{diag}}(\kappa_1, \ldots, \kappa_d) \mid \kappa_i \in \mathbb{C}^\times\}. $$ The character group $\chars{\widehat{T}}$ is generated by the coordinate projections $\varepsilon_i (\kappa) = \kappa_i$, which can also be viewed as cocharacters of $T$. The root system $\Phi = \Phi(G,T)$ is of type $A_{d-1}$, and its positive roots are $\alpha_{ij} = \varepsilon_i - \varepsilon_j$ for $1 \leq i < j \leq d$.
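In the $GL_d$ setting the invariant-factor computation just described can be carried out with determinantal divisors: the gcd $d_k$ of all $k \times k$ minors of a matrix whose rows generate $L_{w,J}$ yields the invariant factors $f_k = d_k/d_{k-1}$ of $\cochars{T}/L_{w,J}$. A self-contained sketch (a stand-in for SageMath's finitely-generated-abelian-group machinery); the sample lattices, spanned by $\lambda = (1,1,0,0)$ together with coroots $e_i - e_j$, are the ones that arise for the length-zero element in the next section:

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(m):
    # integer determinant by cofactor expansion (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**c * m[0][c] * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def invariant_factors(rows, n):
    # Invariant factors of Z^n / L, where L is spanned by the given row vectors:
    # d_k = gcd of all k x k minors, f_k = d_k / d_{k-1}, and the quotient is
    # Z/f_1 x ... x Z/f_r x Z^(n - r) with r = rank of the row lattice.
    factors, prev = [], 1
    for k in range(1, min(len(rows), n) + 1):
        minors = [det([[rows[i][j] for j in cols] for i in sel])
                  for sel in combinations(range(len(rows)), k)
                  for cols in combinations(range(n), k)]
        d = reduce(gcd, (abs(m) for m in minors), 0)
        if d == 0:
            break
        factors.append(d // prev)
        prev = d
    return factors

def coroot(i, j):  # e_i - e_j in Z^4, 0-indexed
    return [1 if k == i else -1 if k == j else 0 for k in range(4)]

# L_{w, J} with J = Phi for (GL_4, mu = (1,1,0,0)): lambda plus all positive coroots
L_full = [[1, 1, 0, 0]] + [coroot(i, j) for i in range(4) for j in range(i + 1, 4)]
print(invariant_factors(L_full, 4))  # [1, 1, 1, 2]: quotient is Z/2Z

# J = {±alpha_13, ±alpha_24}: a rank-3 lattice, so the quotient is free, i.e. Z
L_1 = [[1, 1, 0, 0], coroot(0, 2), coroot(1, 3)]
print(invariant_factors(L_1, 4))     # [1, 1, 1]
```

If an invariant factor $f_k > 1$ appears, the torsion contribution $\vert S_{w,J}^{\rm{tors}} \cap K_{q-1} \vert$ is then read off by comparing each $\mathbb{Z}/f_k\mathbb{Z}$ with the $(q-1)$-th roots of unity, i.e. it contributes $\gcd(f_k, q-1)$.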
\subsubsection{Example: $GL_4(F)$, $\mu = (1,1,0,0)$} \label{section::worked-example-gl4-1100} Let us give a detailed overview of how to compute the coefficients of $\phi_{r,1}^{\rm{aug}}$ when $G=GL_4$ and $\mu = (1, 1, 0, 0)$. This case is small enough for us to work through the details, but we will also explain how to read the results out of the tables found in the Appendix, to help the reader understand the data in larger examples. Although this is the smallest example different from the Drinfeld case, we will see several interesting distinctions between the coefficients in Corollary~\ref{cor::drinfeld-case} and the coefficients described below. First, let us consider how data specific to this case fill in some of the values in the formula of Theorem~\ref{main-theorem}. The split maximal torus $T$ has rank equal to 4. Translation elements $t_\lambda$ in the $W$-orbit of $t_\mu$ have length $\ell(t_\lambda) = 4$. Therefore, the formula becomes $$ \sum_{\Delta \in B_{\Phi_{\rm{aff}}}^{\prec} (w, t_{\lambda(w)})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1}\vert (q-1)^{A(\Delta)} q^{r B(w, \Delta)} (1-q^r)^{C(\Delta)} $$ where $$ \begin{cases} A(\Delta) = 4 - {\rm{rank}}(J_\Delta) - 1 \\ B(w, \Delta) = [(4 - \ell(w)) - \ell(\Delta)]/2 \\ C(\Delta) = \ell(\Delta) - 4 \end{cases} $$ The root system is type $A_3$, so it has six positive roots, and the longest element of the finite Weyl group is $w_0 = s_{123121}$, where $s_1$, $s_2$ and $s_3$ correspond to the simple positive roots. The notation $s_{i_1 \cdots i_n}$ is shorthand for the product $s_{i_1} \cdots s_{i_n}$. Using Proposition~\ref{prop::refl-order-from-high-root}, we compute the reflection ordering, $$ s_1 \prec s_{121} \prec s_{12321} \prec s_2 \prec s_{232} \prec s_3, $$ which in terms of positive roots is $$ \alpha_{12} \prec \alpha_{13} \prec \alpha_{14} \prec \alpha_{23} \prec \alpha_{24} \prec \alpha_{34}.
$$ It is also useful to note that the rank of $\Phi$ equals 3, so that we know any $\chi$-root system $J_\Delta$ of rank 3 must equal $\Phi$. The $\mu$-admissible set ${\rm{Adm}}_{G_r}(\mu)$ contains 33 elements, according to the formula in \cite{haines2000}, Proposition 8.2; the highest length for $w \in {\rm{Adm}}_{G_r}(\mu)$ is $\ell(w) = 4$. We will group the calculations according to these lengths. \noindent\textbf{Length 0:} There is a unique $\mu$-admissible element such that $\ell(w) = 0$: $w = t_\mu s_{2312}$. The minimal coset representative $w_\lambda$ equals $\bar{w}^{-1} = s_{2312}$. Calculations show that $$ B_\Phi^\prec(e, s_{2312}) = \{\Delta_1, \Delta_2\}, $$ where $E(\Delta_1) = \{s_{121},\: s_{232}\}$ and $E(\Delta_2) = \{s_1,\: s_{121},\: s_2,\: s_{232}\}$. The reflection subgroups of $W$ corresponding to these paths are \begin{itemize} \item $W_{J_{\Delta_1}} = \{e, s_{121}, s_{232}, s_{2312}\}$ \item $W_{J_{\Delta_2}} = W$ \end{itemize} Hence $J_{\Delta_1} = \{\pm\alpha_{13}, \pm\alpha_{24}\}$ and $J_{\Delta_2} = \Phi$. So as an intermediate result we have $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \delta(s, w, J_{\Delta_1}) \vert S_{w,J_{\Delta_1}}^{\rm{dz}}\vert q^r (1-q^r)^{-2} + \delta(s, w, J_{\Delta_2}) \vert S_{w,J_{\Delta_2}}^{\rm{dz}} \vert (1-q^r)^{0}. $$ Elements $\kappa \in S_{w, J_{\Delta_1}}$ satisfy $\alpha_{13}^\vee (\kappa) = \alpha_{24}^\vee (\kappa) = \mu (\kappa) = 1$, hence they are subject to constraints $\kappa_1 \kappa_2 = 1$, $\kappa_1 = \kappa_3$ and $\kappa_2 = \kappa_4$; meanwhile $\kappa \in S_{w, J_{\Delta_2}}$ satisfy $\kappa_1 \kappa_2 = 1$ and $\kappa_1 = \kappa_2 = \kappa_3 = \kappa_4$ because $\alpha^\vee (\kappa) = 1$ for all $\alpha \in \Phi$.
Therefore, \begin{itemize} \item $S_{w, J_{\Delta_1}} = \{ {\rm{diag}}(\kappa_1, \kappa_1^{-1}, \kappa_1, \kappa_1^{-1}) \in \widehat{T}(\mathbb{C}) \mid \kappa_1 \in \mathbb{C}^\times\}$ \item $S_{w, J_{\Delta_2}} = \{ {\rm{diag}}(a, a, a, a) \in \widehat{T}(\mathbb{C}) \mid a^2 = 1,\ a \in \mathbb{C}^\times \}$ \end{itemize} The group $S_{w, J_{\Delta_1}}$ is a torus with rank equal to 1, while $S_{w, J_{\Delta_2}}$ is a torsion group of order 2. Assuming ${\rm{char}}(k_F) \neq 2$, we have $\vert S_{w, J_{\Delta_2}}^{\rm{tors}} \cap K_{q-1} \vert = 2.$ Here is a second intermediate result, assuming ${\rm{char}}(k_F) \neq 2$: $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \delta(s, w, J_{\Delta_1}) (q-1) q^r (1-q^r)^{-2} + 2 \delta(s, w, J_{\Delta_2}). $$ Now we describe the finite critical groups $A_{w, J_{\Delta_1}, k_F}$ and $A_{w, J_{\Delta_2}, k_F}$, which tell us for which $s \in T(k_r)$ we have $\delta(s, w, J) = 1$. By definition, $A_{w,J,\resfield} = \langle \nu(k_F) \mid \nu \in L_{w,J}\rangle$ and $L_{w,J} = \langle w(\nu) - \nu, \alpha^\vee \mid \nu \in \chars{\widehat{T}}, \alpha \in J^+\rangle$. Because $J_{\Delta_1} \subset J_{\Delta_2}$, we have the containment $A_{w, J_{\Delta_1}, k_F} \subset A_{w, J_{\Delta_2}, k_F}$. In conclusion, if ${\rm{char}}(k_F) \neq 2$ and $w = t_\mu s_{2312}$, $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_{\Delta_2}, k_F} \\ 2, & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_2}, k_F}\setminus A_{w, J_{\Delta_1}, k_F} \\ (q-1) q^r (1-q^r)^{-2} + 2, & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ \noindent\textbf{Length 1:} There are four $\mu$-admissible elements with $\ell(w) = 1$: \begin{center} \begin{tabular}{cccc} $t_{(1,1,0,0)} s_{312}$, & $t_{(1,1,0,0)} s_{231}$, & $t_{(1,1,0,0)} s_{12312}$, & $t_{(1,0,1,0)} s_{23121}$. \end{tabular} \end{center} Let us explain the calculation when $w = t_\mu s_{12312}$.
There is a unique $\prec$-increasing path $\Delta$ from $w$ to $t_\mu$; $\ell(\Delta) = 3$ and $E(\Delta) = \{s_{121}, s_2, s_{232}\}$. Thus $J_\Delta$ must contain the positive roots $\alpha_{13}$, $\alpha_{23}$, and $\alpha_{24}$, which forces $J_\Delta = \Phi$. Then $$ S_{w, J_\Delta} = \{{\rm{diag}}(a,a,a,a) \in \widehat{T}(\mathbb{C}) \mid a^2 = 1\}. $$ Since there is only a single path $\Delta \in B_{\Phi_{\rm{aff}}}^\prec (w, t_{\lambda(w)})$, there is only a single finite critical group $A_{w, J_\Delta, k_F}$. The calculations for the other three $\mu$-admissible elements of length 1 are completely analogous, in the sense that each $B_{\Phi_{\rm{aff}}}^\prec (w, t_{\lambda(w)})$ contains a single path $\Delta$ such that the numbers $A(\Delta)$, $B(w,\Delta)$ and $C(\Delta)$ are as before. Therefore, if $\ell(w) = 1$ and ${\rm{char}}(k_F) \neq 2$, $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_{\Delta}, k_F} \\ 2(1-q^r)^{-1}, & {\rm{if}}\ N_r(s) \in A_{w, J_\Delta, k_F}. \end{cases} $$ \noindent\textbf{Length 2:} There are ten $\mu$-admissible elements with $\ell(w) = 2$; however, as we shall see in a moment, it will help to group them into two subsets. \begin{center} \begin{tabular}{|l|l|} \hline $X_1$ & $X_2$ \\ \hline $t_{(0, 1, 1, 0)} s_{32}$ & $t_{(1, 0, 0, 1)} s_{12}$ \\ $t_{(1, 0, 1, 0)} s_{3121}$ & $t_{(1, 0, 1, 0)} s_{1232}$ \\ $t_{(1, 1, 0, 0)} s_{21}$ & $t_{(1, 0, 1, 0)} s_{31}$ \\ $t_{(1, 1, 0, 0)} s_{2321}$ & $t_{(1, 1, 0, 0)} s_{123121}$ \\ & $t_{(1, 1, 0, 0)} s_{1231}$ \\ & $t_{(1, 1, 0, 0)} s_{23}$ \\ \hline \end{tabular} \end{center} All elements of a given subset determine the same coefficients. We will discuss a representative from each subset. Suppose we choose $t_{(0, 1, 1, 0)} s_{32} \in X_1$. The set $B_{\Phi_{\rm{aff}}}^\prec(w, t_{\lambda(w)})$ contains a unique path $\Delta$ whose edges are $E(\Delta) = \{s_2, s_3\}$.
Then $J_\Delta$ is the rank 2 subsystem whose positive roots are $\alpha_{23}, \alpha_{34}, \alpha_{24}$. Next, we have that $$ S_{w,J_\Delta} = \{ {\rm{diag}}(\kappa_1, a, a, a) \in \widehat{T}(\mathbb{C}) \mid \kappa_1 \in \mathbb{C}^\times, a^2 = 1\}. $$ This is the direct product of a rank 1 torus and the torsion group $$ S_{w,J_\Delta}^{\rm{tors}} = \{{\rm{diag}}(1, a, a, a) \in \widehat{T}(\mathbb{C}) \mid a^2 = 1\}. $$ So we have an example of a diagonalizable group $S_{w,J_\Delta}$ that is neither a torus nor a torsion group. Then if $w \in X_1$ and ${\rm{char}}(k_F) \neq 2$, $$ \phi_{r,1}^{\rm{aug}}(I_r^+ s w I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_\Delta, k_F}, \\ 2(q-1)(1-q^r)^{-2}, & {\rm{if}}\ N_r(s) \in A_{w, J_\Delta, k_F}. \end{cases} $$ Now consider $t_{(1, 0, 0, 1)} s_{12} \in X_2$. There is again a unique $\prec$-increasing path from $w$ to $t_{(1,0,0,1)}$ whose edges are $E(\Delta) = \{s_1, s_{121}\}$. But now $$ S_{w, J_\Delta} = \{\kappa = {\rm{diag}}(\kappa_1, \kappa_1, \kappa_1, \kappa_1^{-1}) \mid \kappa_1 \in \mathbb{C}^\times\} $$ is a torus of rank 1, determined by the root system $J_\Delta$ whose positive roots are $J_\Delta^+ = \{\alpha_{12}, \alpha_{23}, \alpha_{13}\}$. We conclude that if $w \in X_2$, then $$ \phi_{r,1}^{\rm{aug}}(I_r^+ s w I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_\Delta, k_F}, \\ (q-1)(1-q^r)^{-2}, & {\rm{if}}\ N_r(s) \in A_{w, J_\Delta, k_F}. \end{cases} $$ \noindent\textbf{Length 3:} There are twelve $\mu$-admissible elements with $\ell(w) = 3$: \begin{center} \begin{tabular}{llll} $t_{(1, 1, 0, 0)} s_{12321}$, & $t_{(1, 1, 0, 0)} s_{121}$, & $t_{(1, 1, 0, 0)} s_{232}$, & $t_{(1, 1, 0, 0)} s_2$, \\ $t_{(1, 0, 1, 0)} s_{12321}$, & $t_{(1, 0, 1, 0)} s_3$, & $t_{(1, 0, 1, 0)} s_1$, & $t_{(0, 1, 0, 1)} s_2$, \\ $t_{(1, 0, 0, 1)} s_{121}$, & $t_{(1, 0, 0, 1)} s_1$, & $t_{(0, 1, 1, 0)} s_3$, & $t_{(0, 1, 1, 0)} s_{232}$.
\end{tabular} \end{center} Corollary~\ref{cor::main-theorem-codim-1} covers this case. Each $B_{\Phi_{\rm{aff}}}^\prec (w, t_{\lambda(w)})$ contains a unique path of length 1, whose sole edge can be read directly off of each element, e.g. the path $\Delta$ in the case $w = t_{(1, 1, 0, 0)} s_{12321}$ has edge $E(\Delta) = \{s_{12321}\}$. In order to finish the calculation after invoking Corollary~\ref{cor::main-theorem-codim-1}, we need to check that $S_{w, J_\Delta}$ is torsion-free. We will do so in the case $w = t_{(1, 1, 0, 0)} s_{12321}$, since again, all of the other cases work the same way. Here, $$ S_{w, J_\Delta} = \{ \kappa = {\rm{diag}}(\kappa_1, \kappa_1^{-1}, \kappa_3, \kappa_1) \mid \kappa_i \in \mathbb{C}^\times\}, $$ which is a rank-2 torus. Then $\vert S_{w, J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert = 1$, and by the Corollary $$ \phi_{r,1}^{\rm{aug}}(I_r^+ s w I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_\Delta, k_F}, \\ (q-1)^{2} (1-q^r)^{-3}, & {\rm{if}}\ N_r(s) \in A_{w, J_\Delta, k_F}. \end{cases} $$ \noindent\textbf{Length 4:} There are six $\mu$-admissible elements with $\ell(w) = 4$, namely the six elements $\lambda \in W\mu$. This case is settled by Corollary~\ref{cor::main-theorem-codim-0}: $$ \phi_{r,1}^{\rm{aug}}(I_r^+ s w I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, \emptyset, k_F}, \\ (q-1)^{3} (1-q^r)^{-4}, & {\rm{if}}\ N_r(s) \in A_{w, \emptyset, k_F}. \end{cases} $$ Let us summarize the nonzero values from the different cases. We use the notation $A_{w,J_\Delta,k_F}$ to refer to the finite critical group determined by the unique path in each case, except when $\ell(w) = 0$, where we need to consider two such groups.
$$ \begin{cases} 2, & \ell(w) = 0,\ N_r(s) \in A_{w, J_{\Delta_2}, k_F}\setminus A_{w, J_{\Delta_1}, k_F}, \\ (q-1) q^r (1-q^r)^{-2} + 2, & \ell(w)=0,\ N_r(s) \in A_{w, J_{\Delta_1}, k_F}, \\ 2(1-q^r)^{-1}, & \ell(w) = 1,\ N_r(s) \in A_{w, J_\Delta, k_F}, \\ 2(q-1)(1-q^r)^{-2}, & \ell(w)=2,\ w \in X_1,\ N_r(s) \in A_{w, J_\Delta, k_F}, \\ (q-1)(1-q^r)^{-2}, & \ell(w)=2,\ w \in X_2,\ N_r(s) \in A_{w, J_\Delta, k_F}, \\ (q-1)^{2} (1-q^r)^{-3}, & \ell(w) = 3,\ N_r(s) \in A_{w, J_\Delta, k_F},\\ (q-1)^{3} (1-q^r)^{-4}, & \ell(w)=4,\ N_r(s) \in A_{w, \emptyset, k_F}. \end{cases} $$ \noindent\textbf{Reading data from tables in the appendix:} The preceding example shows how the values $\phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+)$ all follow the same basic template. Table~\ref{table::gl4-1100-coeff} contains all of the data needed to compute a coefficient of $\phi_{r,1}^{\rm{aug}}$. Here is the row for the length-zero element: \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 0$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 4 & 3 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} \subseteq A_{w, \Delta_2, k_F}$} \\ \hline \end{tabular} \end{center} Of course, we already know the rank of $S_{w,J_\Delta}$ is $A(\Delta)$. The ``Isom. class of $S_{w, J_\Delta}$'' is important because it specifies the torsion subgroup $S_{w,J_\Delta}^{\rm{tors}}$, if present.
Thus we have the necessary path data to plug into $$ \sum_{\Delta \in B_{\Phi_{\rm{aff}}}^{\prec} (w, t_{\lambda(w)})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1}\vert (q-1)^{A(\Delta)} q^{r B(w, \Delta)} (1-q^r)^{C(\Delta)}, $$ the template formula for this case, while the containment data $A_{w, \Delta_1, k_F} \subseteq A_{w, \Delta_2, k_F}$ tells us how to arrange the path values into the expression $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_{\Delta_2}, k_F} \\ 2, & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_2}, k_F}\setminus A_{w, J_{\Delta_1}, k_F} \\ (q-1) q^r (1-q^r)^{-2} + 2, & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ If $B_{\Phi_{\rm{aff}}}^{\prec} (w, t_{\lambda(w)})$ contains a single path, the table will omit the information about $A_{w, J_\Delta, k_F}$. We saw that when $\ell(w) = 2$, there were two possibilities for $\phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+)$ depending on whether $w \in X_1$ or $w \in X_2$. Table~\ref{table::gl4-1100-adm} addresses this issue: Suppose we were given $w \in {\rm{Adm}}_{G_r}(\mu)$ with $\ell(w) = 2$; we would look up which subset $X_i$ contains $w$ in Table~\ref{table::gl4-1100-adm}, which would tell us which row in Table~\ref{table::gl4-1100-coeff} to use. \subsubsection{$GL_5(F)$, $\mu= (1, 1, 0, 0, 0)$} The group $GL_5$ is type $A_4$, hence its root system $\Phi$ has rank equal to 4. The maximal split torus $T \subset GL_5$ has rank $d = 5$. For all conjugates $\lambda \in W\cdot\mu$, $\ell(t_\lambda) = 6$.
Therefore, the formula becomes $$ (-1) \sum_{\Delta \in B_\Phi^{\prec} (w,t_{\lambda(w)})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert (q-1)^{A(\Delta)} q^{r B(w, \Delta)} (1-q^r)^{C(\Delta)}, $$ where $$ \begin{cases} A(\Delta) = 5 - {\rm{rank}}(J_\Delta) - 1, \\ B(w, \Delta) = [(6 - \ell(w)) - \ell(\Delta)]/2, \\ C(\Delta) = \ell(\Delta) - 5. \end{cases} $$ Fix the following reflection ordering on $W$: $$ \alpha_{12} \prec \alpha_{13} \prec \alpha_{14} \prec \alpha_{15} \prec \alpha_{23} \prec \alpha_{24} \prec \alpha_{25} \prec \alpha_{34} \prec \alpha_{35} \prec \alpha_{45}. $$ There are 131 elements in ${\rm{Adm}}_{G_r}(\mu)$. The raw data for this case can be found in the Appendix, Tables~\ref{table::gl5-11000-coeff} and \ref{table::gl5-11000-adm}. The story for this case is similar to what we saw for $(GL_4, (1,1,0,0))$; however, we will point out a few features before moving on to the $GL_6$ cases. First, let us discuss the length-zero alcove: $w = t_{(1,1,0,0,0)} s_{234123}$. There are three paths in $B_{\Phi_{\rm{aff}}}^\prec(w, t_{\lambda(w)})$ whose edge sets are: \begin{itemize} \item $E(\Delta_1) = \{ s_1, s_{12321}, s_2, s_{23432}\}$ \item $E(\Delta_2) = \{ s_{121}, s_{12321}, s_{232}, s_{23432}\}$ \item $E(\Delta_3) = \{ s_1, s_{121}, s_{12321}, s_2, s_{232}, s_{23432}\}$ \end{itemize} All three paths have $J_\Delta = \Phi$, hence they determine the same finite critical group $A_{w, J_\Delta, k_F}$. Compare this to the length-zero element in the case $(GL_4, \mu=(1,1,0,0))$, where there were two paths in $B_{\Phi_{\rm{aff}}}^\prec(w, t_{\lambda(w)})$ but one finite critical group was a subgroup of the other. The relevant subgroup is $S_{w,J_\Delta} = \{{\rm{diag}}(a, a, a, a, a) \in \widehat{T}(\mathbb{C}) \mid a^2 = 1\}$, i.e., a torsion group of order 2.
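To see the template arithmetic explicitly (a check of our own, using only the path data just listed): every path here has ${\rm{rank}}(J_{\Delta_i}) = 4$, so
$$ A(\Delta_i) = 5 - 4 - 1 = 0, \qquad B(w, \Delta_1) = B(w, \Delta_2) = [(6-0)-4]/2 = 1, \qquad B(w, \Delta_3) = [(6-0)-6]/2 = 0, $$
with $C(\Delta_1) = C(\Delta_2) = 4 - 5 = -1$ and $C(\Delta_3) = 6 - 5 = 1$. When ${\rm{char}}(k_F) \neq 2$, the order-2 torsion meets $K_{q-1}$, so $\vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert = 2$ for each path; $\Delta_1$ and $\Delta_2$ then each contribute $2q^r(1-q^r)^{-1}$ and $\Delta_3$ contributes $2(1-q^r)$, which (with the overall factor $(-1)$) assembles into the coefficient displayed next.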
Therefore, for the length-zero $\mu$-admissible element $w$, if ${\rm{char}}(k_F) \neq 2$, $$ \phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w,J_\Delta, k_F}, \\ (-1)\Big(4q^r(1-q^r)^{-1} + 2(1-q^r)\Big), & {\rm{if}}\ N_r(s) \in A_{w,J_\Delta, k_F}. \end{cases} $$ Second, notice that the coefficient for certain $\mu$-admissible length-two $w$ (specifically, those $w \in X_2$ listed in Table~\ref{table::gl5-11000-adm}) is very similar to that of the length-zero element in $(GL_4, \mu=(1,1,0,0))$. Here is a representative element: $$ w = t_{(1,1,0,0,0)} s_{2312}. $$ The set $B_{\Phi_{\rm{aff}}}^\prec(w, t_{\lambda(w)})$ for this element contains two paths, whose edge sets are: \begin{itemize} \item $E(\Delta_1) = \{s_{121}, s_{232}\}$ \item $E(\Delta_2) = \{s_1, s_{121}, s_2, s_{232}\}$. \end{itemize} These are identical to the paths given in Section~\ref{section::worked-example-gl4-1100} for the length-zero alcove. Of course, here we have a larger torus; for example, $$ S_{w, J_{\Delta_2}} = \{{\rm{diag}}(a, a, a, a, \kappa_5) \in \widehat{T}(\mathbb{C}) \mid a^2 = 1,\ \kappa_5 \in \mathbb{C}^\times\} $$ has a free variable $\kappa_5$, whereas the corresponding relevant group in the $GL_4$ case was only a torsion group of order 2. If ${\rm{char}}(k_F) \neq 2$, $$ \phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_{\Delta_2}, k_F}, \\ (-1)\Big(2(q-1)(1-q^r)^{-1}\Big), & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_2}, k_F}\setminus A_{w, J_{\Delta_1}, k_F}, \\ (-1)\Big((q-1)^2q^r(1-q^r)^{-3} + 2(q-1)(1-q^r)^{-1}\Big), & {\rm{else}}.
\end{cases} $$ On the other hand, if our $\mu$-admissible element $w$ has $\ell(w) = 1$, then the data in Table~\ref{table::gl5-11000-coeff} shows that the coefficient $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+)$ is essentially identical to that of the length-zero element for $GL_4$, even though the former's $\prec$-increasing paths are longer: $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_{\Delta_2}, k_F}, \\ -2, & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_2}, k_F}\setminus A_{w, J_{\Delta_1}, k_F}, \\ (-1)\Big((q-1) q^r (1-q^r)^{-2} + 2\Big), & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ We have called attention to these examples to make the point that patterns appear throughout the data. The polynomial given by the formula ultimately depends on the structure of Bruhat intervals in some finite Weyl group, and there are many isomorphic intervals between pairs of elements in different groups. The examples given above cover all cases in $(GL_5, \mu = (1,1,0,0,0))$ where $B_{\Phi_{\rm{aff}}}^\prec(w, t_{\lambda(w)})$ contains multiple paths. \subsubsection{$GL_6(F)$, $\mu= (1, 1, 0, 0, 0, 0)$} \label{section::gl6-mu-110000} The group $GL_6$ is type $A_5$, so $\Phi$ has rank 5. The split maximal torus $T$ has rank $d=6$. All $W$-conjugates $t_\lambda$ of $t_\mu$ have $\ell(t_\lambda) = 8$. The $\mu$-admissible set contains 473 elements.
Following the previous examples, this data determines our template $$ \sum_{\Delta \in B_\Phi^{\prec} (w,t_{\lambda(w)})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert (q-1)^{A(\Delta)} q^{r B(w, \Delta)} (1-q^r)^{C(\Delta)}, $$ where $$ \begin{cases} A(\Delta) = 6 - {\rm{rank}}(J_\Delta) - 1, \\ B(w, \Delta) = [(8 - \ell(w)) - \ell(\Delta)]/2, \\ C(\Delta) = \ell(\Delta) - 6. \end{cases} $$ Fix the following reflection ordering on $W$: \begin{multline*} \alpha_{12} \prec \alpha_{13} \prec \alpha_{14} \prec \alpha_{15} \prec \alpha_{16} \prec \alpha_{23} \prec \alpha_{24} \prec \alpha_{25} \prec \alpha_{26} \prec \\ \alpha_{34} \prec \alpha_{35} \prec \alpha_{36} \prec \alpha_{45} \prec \alpha_{46} \prec \alpha_{56}. \end{multline*} The raw data for calculating the coefficients of $\phi_{r,1}^{\rm{aug}}$ in this case are spread across Tables~\ref{table::gl6-110000-coeff} and \ref{table::gl6-110000-coeff-cont}. There are sub-cases in the calculations for $\mu$-admissible $w$ of lengths 2, 3, 4, 5 and 6; Tables~\ref{table::gl6-110000-adm} and \ref{table::gl6-110000-adm-cont} list which elements correspond to a given sub-case. The unique length-zero $\mu$-admissible element is $w = t_{(1, 1, 0, 0, 0, 0)} s_{23451234}$.
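As a quick check of these exponents (our own arithmetic, not additional data from the tables): for a path $\Delta$ with $\ell(\Delta) = 4$ whose root system $J_\Delta$ has rank 4, starting from the length-zero element, the template gives
$$ A(\Delta) = 6 - 4 - 1 = 1, \qquad B(w, \Delta) = [(8 - 0) - 4]/2 = 2, \qquad C(\Delta) = 4 - 6 = -2, $$
i.e., a contribution of the shape $(q-1)q^{2r}(1-q^r)^{-2}$, exactly the extra term appearing in the third case of the coefficient below.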
There are five $\prec$-increasing paths from $w$ to $t_{\lambda(w)}$; their edge sets are: \begin{itemize} \item $E(\Delta_1) = \{s_{\alpha_{13}}, s_{\alpha_{15}}, s_{\alpha_{24}}, s_{\alpha_{26}}\}$ \item $E(\Delta_2) = \{s_{\alpha_{12}}, s_{\alpha_{13}}, s_{\alpha_{15}}, s_{\alpha_{23}}, s_{\alpha_{24}}, s_{\alpha_{26}}\}$ \item $E(\Delta_3) = \{s_{\alpha_{12}}, s_{\alpha_{14}}, s_{\alpha_{15}}, s_{\alpha_{23}}, s_{\alpha_{25}}, s_{\alpha_{26}}\}$ \item $E(\Delta_4) = \{s_{\alpha_{13}}, s_{\alpha_{14}}, s_{\alpha_{15}}, s_{\alpha_{24}}, s_{\alpha_{25}}, s_{\alpha_{26}}\}$ \item $E(\Delta_5) = \{s_{\alpha_{12}}, s_{\alpha_{13}}, s_{\alpha_{14}}, s_{\alpha_{15}}, s_{\alpha_{23}}, s_{\alpha_{24}}, s_{\alpha_{25}}, s_{\alpha_{26}}\}$ \end{itemize} For paths $\Delta_2, \ldots, \Delta_5$, $J_{\Delta_i} = \Phi$; the root system $J_{\Delta_1}$ is a proper subsystem of rank 4. Using the data from Table~\ref{table::gl6-110000-coeff}, we have for ${\rm{char}}(k_F) \neq 2$, $$ \phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{w, J_{\Delta_2}, k_F}, \\ 6q^r + (1 - q^r)^{2}, & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_2}, k_F}\setminus A_{w, J_{\Delta_1}, k_F}, \\ (q-1)q^{2r}(1-q^r)^{-2} + 6q^r + (1 - q^r)^{2}, & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ The data shows that when $w \in {\rm{Adm}}_{GL_6}(\mu=(1,1,0,0,0,0))$ has $\ell(w) = 1$, the coefficient $\phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+)$ is the same as the coefficient for the unique length-zero $w \in {\rm{Adm}}_{GL_5}(\mu=(1,1,0,0,0))$, apart from a difference in sign. \subsubsection{$GL_6(F)$, $\mu= (1, 1, 1, 0, 0, 0)$} Most of the objects here are the same as in Section~\ref{section::gl6-mu-110000}, such as $\Phi$, the choice of maximal torus $T$, and the reflection ordering $\prec$ on $W$. In this case, the translation $t_\mu$ and its conjugates have length $\ell(t_\mu) = 9$.
The template is $$ \sum_{\Delta \in B_\Phi^{\prec} (w,t_{\lambda(w)})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert (q-1)^{A(\Delta)} q^{r B(w, \Delta)} (1-q^r)^{C(\Delta)}, $$ where $$ \begin{cases} A(\Delta) = 6 - {\rm{rank}}(J_\Delta) - 1, \\ B(w, \Delta) = [(9 - \ell(w)) - \ell(\Delta)]/2, \\ C(\Delta) = \ell(\Delta) - 6. \end{cases} $$ There are 883 $\mu$-admissible elements, and as one would expect, there are many different cases for the coefficients of $\phi_{r,1}^{\rm{aug}}$. Raw data for the coefficients is spread across Tables~\ref{table::gl6-111000-coeff}, \ref{table::gl6-111000-coeff-cont} and \ref{table::gl6-111000-coeff-cont2}. The subsets of $\mu$-admissible elements are further explained in Tables~\ref{table::gl6-111000-adm} and \ref{table::gl6-111000-adm-cont} as appropriate. Let us discuss the coefficient for $w = t_{(1, 1, 1, 0, 0, 0)} s_{345234123}$, the unique length-zero $\mu$-admissible element. This is the most complex example we shall consider in this chapter: $B_{\Phi_{\rm{aff}}}^\prec (w, t_{\lambda(w)})$ contains nine paths, which yield five distinct root systems $J_\Delta$.
The edge sets are: \begin{itemize} \item $E_{\Delta_1} = \{s_{\alpha_{14}}, s_{\alpha_{25}}, s_{\alpha_{36}} \}$ \item $E_{\Delta_2} = \{s_{\alpha_{13}}, s_{\alpha_{14}}, s_{\alpha_{25}}, s_{\alpha_{34}}, s_{\alpha_{36}} \}$ \item $E_{\Delta_3} = \{s_{\alpha_{14}}, s_{\alpha_{23}}, s_{\alpha_{25}}, s_{\alpha_{35}}, s_{\alpha_{36}} \}$ \item $E_{\Delta_4} = \{s_{\alpha_{12}}, s_{\alpha_{14}}, s_{\alpha_{24}}, s_{\alpha_{25}}, s_{\alpha_{36}} \}$ \item $E_{\Delta_5} = \{s_{\alpha_{12}}, s_{\alpha_{13}}, s_{\alpha_{14}}, s_{\alpha_{23}}, s_{\alpha_{25}}, s_{\alpha_{34}}, s_{\alpha_{36}} \}$ \item $E_{\Delta_6} = \{s_{\alpha_{12}}, s_{\alpha_{14}}, s_{\alpha_{23}}, s_{\alpha_{25}}, s_{\alpha_{34}}, s_{\alpha_{35}}, s_{\alpha_{36}} \}$ \item $E_{\Delta_7} = \{s_{\alpha_{13}}, s_{\alpha_{14}}, s_{\alpha_{24}}, s_{\alpha_{25}}, s_{\alpha_{34}}, s_{\alpha_{35}}, s_{\alpha_{36}} \}$ \item $E_{\Delta_8} = \{s_{\alpha_{12}}, s_{\alpha_{13}}, s_{\alpha_{14}}, s_{\alpha_{24}}, s_{\alpha_{25}}, s_{\alpha_{35}}, s_{\alpha_{36}} \}$ \item $E_{\Delta_9} = \{s_{\alpha_{12}}, s_{\alpha_{13}}, s_{\alpha_{14}}, s_{\alpha_{23}}, s_{\alpha_{24}}, s_{\alpha_{25}}, s_{\alpha_{34}}, s_{\alpha_{35}}, s_{\alpha_{36}} \}$ \end{itemize} The paths $\Delta_5, \ldots, \Delta_9$ all have $J_\Delta = \Phi$, while each of the four paths $\Delta_1, \ldots, \Delta_4$ gives a distinct proper subsystem $J_\Delta \subset \Phi$.
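A small consistency check (our own arithmetic): the reflections $s_{\alpha_{14}}, s_{\alpha_{25}}, s_{\alpha_{36}}$ in $E_{\Delta_1}$ involve pairwise disjoint index pairs, so $J_{\Delta_1}$ has type $A_1 \times A_1 \times A_1$ and rank 3, whence
$$ A(\Delta_1) = 6 - 3 - 1 = 2, \qquad B(w, \Delta_1) = [(9-0)-3]/2 = 3, \qquad C(\Delta_1) = 3 - 6 = -3, $$
so the path $\Delta_1$ contributes a term of the shape $(q-1)^2 q^{3r}(1-q^r)^{-3}$, matching the leading term of Case 3 below.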
\begin{itemize} \item $J_{\Delta_1} = \{\pm \alpha_{14}\} \times \{\pm \alpha_{25}\} \times \{ \pm \alpha_{36}\}$ \item $J_{\Delta_2} = \{\pm\alpha_{13}, \pm\alpha_{14}, \pm\alpha_{16}, \pm\alpha_{34}, \pm\alpha_{36}, \pm\alpha_{46} \} \times \{\pm \alpha_{25}\}$ \item $J_{\Delta_3} = \{\pm\alpha_{14}\} \times \{\pm\alpha_{23}, \pm\alpha_{25}, \pm\alpha_{26}, \pm\alpha_{35}, \pm\alpha_{36}, \pm\alpha_{56}\}$ \item $J_{\Delta_4} = \{\pm\alpha_{12}, \pm\alpha_{14}, \pm\alpha_{15}, \pm\alpha_{24}, \pm\alpha_{25}, \pm\alpha_{45}\} \times \{ \pm \alpha_{36}\}$ \end{itemize} Here is the graph of inclusions for the finite critical groups: \[ \xymatrix{ & A_{w,J_{\Delta_5}, k_F} = \cdots = A_{w, J_{\Delta_9}, k_F} & \\ A_{w, J_{\Delta_2}, k_F} \ar[ur] & A_{w, J_{\Delta_3}, k_F} \ar[u] & A_{w, J_{\Delta_4}, k_F} \ar[ul] \\ & A_{w, J_{\Delta_1}, k_F} \ar[ul] \ar[u] \ar[ur] } \] As usual, the relationships between finite critical groups characterize the different cases for the coefficient $\phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+)$ when $w = t_{(1, 1, 1, 0, 0, 0)} s_{345234123}$. If $N_r(s) \notin A_{w, J_{\Delta_5}, k_F}$, then $\phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = 0$. Otherwise, there are three nonzero possibilities: \noindent\textbf{Case 1:} If $N_r(s) \in A_{w, J_{\Delta_5}, k_F} \setminus \Big(\bigcup_{i=2}^4 A_{w, J_{\Delta_i}, k_F}\Big)$, the coefficient is $$ 12q^r(1-q^r) + 3(1-q^r)^3. $$ \textbf{Case 2:} If $N_r (s) \in A_{w, J_{\Delta_i},k_F} \setminus A_{w, J_{\Delta_1},k_F}$ for some $i \in \{2,3,4\}$, the coefficient is $$ (q-1)q^{2r}(1-q^r)^{-1} + 12q^r(1-q^r) + 3(1-q^r)^3. $$ \textbf{Case 3:} If $N_r(s) \in A_{w, J_{\Delta_1},k_F}$, the coefficient is $$ (q-1)^2q^{3r}(1-q^r)^{-3} + 3(q-1)q^{2r}(1-q^r)^{-1} + 12q^r(1-q^r) + 3(1-q^r)^3. $$ \subsection{Results for general symplectic groups} We begin by recalling the definition of $GSp_{2n}$ and the associated data required for the calculations of $\phi_{r,1}^{\rm{aug}}$.
Let $\tilde{I}_n$ be the $n\times n$ matrix with ones on the anti-diagonal and zeroes everywhere else. Let $$ J = \begin{pmatrix} 0 & \tilde{I}_n \\ -\tilde{I}_n & 0 \end{pmatrix}. $$ Then for a $p$-adic field $F$, the $F$-rational points of $GSp_{2n}$ are $$ GSp_{2n}(F) = \{ g \in GL_{2n}(F) \mid {^t}g J g = c(g) J,\ c(g) \in F^\times\}. $$ We choose a split maximal torus $T$ with $$ T(F) = \{{\rm{diag}}(t_1, \ldots, t_n, t_n^{-1}t_0, \ldots, t_1^{-1}t_0) \mid t_i \in F^\times\}. $$ For the $GL_n$ case, we showed how to compute $\vert S_{w, J}^{\rm{dz}}\vert$ from the definitions, looking at endoscopic elements in $\widehat{T}(\mathbb{C})$. For the examples with $G=GSp_{2n}$, we show how to use Lemma~\ref{lemma::rel-grp-gln-gsp2n} to get the structure of $\chars{S_{w,J}}$, and hence that of $S_{w,J}$. \subsubsection{Example: $GSp_4(F)$, $\mu = (1,1,0,0)$} As with the $GL_n$ examples, we use data from this case to first write down a template formula for the coefficients of $\phi_{r,1}^{\rm{aug}}$. The rank of $T$ is 3, while translation elements $t_\lambda$ for $\lambda \in W\mu$ have $\ell(t_\lambda) = 3$. Thus for $w \in \widetilde{W}$ and $s \in T(k_r)$, the formula for $\phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+)$ becomes $$ (-1) \sum_{\Delta \in B_\Phi^{\prec} (w,t_{\lambda(w)})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert (q-1)^{A(\Delta)} q^{r B(w, \Delta)} (1-q^r)^{C(\Delta)}, $$ where $$ \begin{cases} A(\Delta) = 3 - {\rm{rank}}(J_\Delta) - 1, \\ B(w, \Delta) = [(3 - \ell(w)) - \ell(\Delta)]/2, \\ C(\Delta) = \ell(\Delta) - 3. \end{cases} $$ Let us describe the characters and cocharacters of $T$. Let $\varepsilon_i$ be the $i$-th coordinate projection of $T$, and let $c$ be the similitude character. Then $$ \chars{T} = \langle c, \varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4\rangle / \langle \varepsilon_1 + \varepsilon_4 = \varepsilon_2 + \varepsilon_3 = c\rangle.
$$ Using coordinates in terms of the $\varepsilon_i$, the positive roots in $\Phi$, which is type $C_2$, are \begin{align*} \alpha_1 &= (1/2, -1/2, 1/2, -1/2) \\ \alpha_2 &= (0, 1, -1, 0) \\ \alpha_3 &= (1/2, 1/2, -1/2, -1/2) \\ \alpha_4 &= (1, 0, 0, -1). \end{align*} We can also describe the character lattice as $\chars{T} = \langle c_0, c_1, c_2 \rangle$, where the generators act on $t = {\rm{diag}}(t_1, t_2, t_2^{-1}t_0, t_1^{-1}t_0)$ by $c_i (t) = t_i$ for $i = 0,1,2$. In this coordinate system, the roots are written \begin{align*} \alpha_1 &= c_1 - c_2 \\ \alpha_2 &= 2 c_2 - c_0 \\ \alpha_3 &= c_1 + c_2 - c_0\\ \alpha_4 &= 2 c_1 - c_0. \end{align*} The cocharacter lattice $\cochars{T}$ is the free group on generators $e_0, e_1, e_2$ where, for $x \in F^\times$, \begin{align*} e_0 (x) &= {\rm{diag}}(1, 1, x, x) \\ e_1 (x) &= {\rm{diag}}(x, 1, 1, x^{-1}) \\ e_2 (x) &= {\rm{diag}}(1, x, x^{-1}, 1). \end{align*} We could again use coordinates in terms of maps $\breve{\varepsilon}_i$ sending $x \in F^\times$ to the $i$-th coordinate in $T$ with 1's elsewhere. Then an element $\nu \in \cochars{T}$ is a tuple $(a_1, a_2, a_3, a_4)$ such that $a_1 + a_4 = a_2 + a_3 = c$. In these coordinates, $\mu = (1,1,0,0)$ is the map $\mu(x) = {\rm{diag}}(x, x, 1, 1) \in T(F)$ for $x \in F^\times$. So $\mu = e_0 + e_1 + e_2$. The coroots are \begin{align*} \alpha_1^\vee &= (1, -1, 1, -1) = e_1 - e_2 \\ \alpha_2^\vee &= (0, 1, -1, 0) = e_2 \\ \alpha_3^\vee &= (1,1,-1,-1) = e_1 + e_2 \\ \alpha_4^\vee &= (1, 0, 0, -1) = e_1 \end{align*} Let $s_1$ and $s_2$ denote the simple reflections corresponding to the roots $\alpha_1$ and $\alpha_2$, respectively. The longest element of the finite Weyl group is $w_0 = s_{2121}$, where again the notation $s_{i_1 \cdots i_n}$ is shorthand for the product $s_{i_1} \cdots s_{i_n}$. By Proposition~\ref{prop::refl-order-from-high-root}, we have a reflection ordering: $$ s_2 \prec s_{212} \prec s_{121} \prec s_1.
$$ When $\mu = (1,1,0,0)$, Proposition 8.2 of~\cite{haines2000} shows that the set ${\rm{Adm}}_{G_r}(\mu)$ contains 13 elements. The $\mu$-admissible elements range in length from 0 to 3. See Table~\ref{table::gsp4-1100-coeff} for the raw coefficient data. \noindent\textbf{Length 0:} The unique length-zero $\mu$-admissible element is $w = t_{(1,1,0,0)} s_{212}$. We want to enumerate the $\prec$-increasing paths $$ B_{\Phi}^{\prec} (w_\lambda^{-1} \bar{w}, w_\lambda^{-1}) = B_{\Phi}^{\prec} (1, s_{212}). $$ It turns out there are only two paths, whose edge sets are \begin{align*} E(\Delta_1) &= \{s_{212}\}, \\ E(\Delta_2) &= \{s_2, s_{212}, s_{121}\}. \end{align*} It is clear that $J_{\Delta_1} = \{ \pm \alpha_3 \}$ and $J_{\Delta_2} = \Phi$. Plugging this data into the template gives us $$ (-1) \Big( \delta(s, w,J_{\Delta_1}) \vert S_{w,J_{\Delta_1}}^{\rm{tors}} \cap K_{q-1} \vert (q-1) q^{r} (1-q^r)^{-2} + \delta(s, w,J_{\Delta_2}) \vert S_{w,J_{\Delta_2}}^{\rm{tors}} \cap K_{q-1} \vert \Big). $$ It remains to describe $S_{w, J_{\Delta_i}}$ for $i = 1,2$. Consider the quotient $$ \cochars{T} / \mathbb{Z}J_{\Delta_1}^\vee = \langle e_0, e_1, e_2 \rangle / \langle e_1 + e_2 = 0\rangle \cong \langle \bar{e}_0, \bar{e}_1 \rangle. $$ Now use the method of Lemma~\ref{lemma::rel-grp-gln-gsp2n}, i.e., consider how $\lambda(w) = \mu = e_0 + e_1 + e_2$ appears in the above quotient. It is just $\bar{\mu} = \bar{e}_0 + \bar{e}_1 - \bar{e}_1 = \bar{e}_0$. But then $$ \chars{S_{w, J_{\Delta_1}}} \cong \langle \bar{e}_0, \bar{e}_1 \rangle / \langle \bar{e}_0 \rangle \cong \langle \bar{e}_1 \rangle \cong \mathbb{Z}. $$ Similarly, for $J_{\Delta_2} = \Phi$, $$ \cochars{T}/\mathbb{Z}\Phi^\vee = \langle \bar{e}_0 \rangle, $$ because the relations imposed by the coroots in $\Phi^\vee$ force $\bar{e}_1 = \bar{e}_2 = 0$. Then in the quotient, $$ \overline{\lambda(w)} = \bar{\mu} = \bar{e}_0. $$ So $\chars{S_{w, J_{\Delta_2}}} = \{1\}$.
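For the record, the exponents in the two terms above come straight from the template (our own arithmetic): $\Delta_1$ has $\ell(\Delta_1) = 1$ and ${\rm{rank}}(J_{\Delta_1}) = 1$, while $\Delta_2$ has $\ell(\Delta_2) = 3$ and ${\rm{rank}}(J_{\Delta_2}) = 2$, so
$$ A(\Delta_1) = 3-1-1 = 1, \quad B(w, \Delta_1) = [(3-0)-1]/2 = 1, \quad C(\Delta_1) = -2; \qquad A(\Delta_2) = 0, \quad B(w, \Delta_2) = 0, \quad C(\Delta_2) = 0. $$
Since $S_{w, J_{\Delta_1}}$ and $S_{w, J_{\Delta_2}}$ have no torsion, each $\vert S_{w,J_{\Delta_i}}^{\rm{tors}} \cap K_{q-1} \vert = 1$, and the two terms reduce to $(q-1)q^r(1-q^r)^{-2}$ and $1$.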
The finite critical groups satisfy $A_{w, J_{\Delta_1}, k_F} \subset A_{w, J_{\Delta_2}, k_F}$. This is enough information to give the coefficient data for this alcove: $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r (s) \notin A_{w, J_{\Delta_2}, k_F}, \\ -1, & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_2}, k_F} \setminus A_{w, J_{\Delta_1}, k_F}, \\ (-1) \Big( 1 + (q-1) q^{r} (1-q^r)^{-2} \Big), & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ \noindent\textbf{Length 1:} There are three $\mu$-admissible elements $w$ such that $\ell(w) = 1$: $$ \begin{tabular}{ccc} $t_{(1,1,0,0)} s_{2121}$ & $t_{(1,1,0,0)} s_{21}$ & $t_{(1,0,1,0)} s_{12}$. \end{tabular} $$ In each case, $B_{\Phi}^\prec (w_\lambda^{-1} \bar{w}, w_\lambda^{-1})$ contains a single path $\Delta_1$ with $\ell(\Delta_1) = 2$. When $w = t_{(1,1,0,0)} s_{2121}$, we have $E(\Delta_1) = \{s_{2}, s_{121}\}$ and $J_{\Delta_1} = \{\pm \alpha_2, \pm \alpha_4\}$. In this case, $$ \cochars{T}/\mathbb{Z}J_{\Delta_1}^\vee = \langle e_0, e_1, e_2 \rangle / \langle e_1, e_2 \rangle. $$ Then $\overline{\lambda(w)} = \bar{\mu} = \bar{e}_0$ means $$ \chars{S_{w,J_{\Delta_1}}} = \langle \bar{e}_0 \rangle / \langle \bar{e}_0 \rangle = \{1\}. $$ For $w = t_{(1,1,0,0)} s_{21}$, the edge set is $E(\Delta_1) = \{s_{212}, s_{121}\}$ and $J_{\Delta_1} = \Phi$. Finally, if $w = t_{(1,0,1,0)} s_{12}$, then $E(\Delta_1) = \{s_2, s_1\}$ and $J_{\Delta_1} = \Phi$. In both of these cases, $\chars{S_{w, J_{\Delta_1}}}$ is again trivial. We have shown that $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r (s) \notin A_{w, J_{\Delta_1}, k_F}, \\ (-1)(1-q^r)^{-1}, & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ \noindent\textbf{Length 2:} There are five $\mu$-admissible elements whose length is two. In each case, there is a single $\prec$-increasing path $w \stackrel{\Delta_1}{\longrightarrow} t_{\lambda(w)}$.
Here's the data: \begin{center} \begin{tabular}{|c|c|c|} \hline $w$ & $E(\Delta_1)$ & $J_{\Delta_1}$ \\ \hline $t_{(1,1,0,0)} s_2$ & $\{s_2\}$ & $\{\pm \alpha_2\}$ \\ $t_{(0,1,0,1)} s_2$ & $\{s_2\}$ & $\{\pm \alpha_2\}$ \\ $t_{(1,0,1,0)} s_1$ & $\{s_1\}$ & $\{\pm \alpha_1\}$ \\ $t_{(1,1,0,0)} s_{121}$ & $\{s_{121}\}$ & $\{\pm \alpha_4\}$ \\ $t_{(1,0,1,0)} s_{121}$ & $\{s_{121}\}$ & $\{\pm \alpha_4\}$ \\ \hline \end{tabular} \end{center} Let us work out $\chars{S_{w, J_{\Delta_1}}}$ for $w = t_{(1,1,0,0)} s_2$. Starting with $$ \cochars{T} / \mathbb{Z}J_{\Delta_1}^\vee = \langle e_0, e_1, e_2 \rangle / \langle e_2 \rangle = \langle \bar{e}_0, \bar{e}_1 \rangle, $$ we have that $\overline{\lambda(w)} = \bar{\mu} = \bar{e}_0 + \bar{e}_1$. Then in the notation of Lemma~\ref{lemma::rel-grp-gln-gsp2n}, the map $c: \mathbb{Z} \rightarrow \mathbb{Z}^2$ is given by the tuple $c = (1, 1, 0)$. Since $c_1$ is a unit, $\chars{S_{w, J_{\Delta_1}}} \cong \mathbb{Z}$. The calculation of $\chars{S_{w, J_{\Delta_1}}}$ is similar for the other length-two $\mu$-admissible elements. The coefficient for this case is $$ \phi_{r,1}^{\rm{aug}} (I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r (s) \notin A_{w, J_{\Delta_1}, k_F}, \\ (-1)(q-1)(1-q^r)^{-2}, & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ \noindent\textbf{Length 3:} The translation elements are $t_{(1,1,0,0)}$, $t_{(1,0,1,0)}$, $t_{(0,1,0,1)}$ and $t_{(0,0,1,1)}$. For any of these, Corollary~\ref{cor::main-theorem-codim-0} shows $$ \phi_{r,1}^{\rm{aug}} (I_r^+ s t_\lambda I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r(s) \notin A_{t_\lambda, \emptyset, k_F}, \\ (-1)(q-1)^{2}(1-q^r)^{-3}, & {\rm{if}}\ N_r(s) \in A_{t_\lambda, \emptyset, k_F}. \end{cases} $$ \subsubsection{$GSp_6(F)$, $\mu = (1,1,1,0,0,0)$} In this case, the root system has type $C_3$, so ${\rm{rank}}(T) = 4$. For $\lambda \in W\mu$, with $\mu = (1,1,1,0,0,0)$, translation elements have $\ell(t_\lambda) = 6$.
Use this data to fill in the template: $$ \sum_{\Delta \in B_\Phi^{\prec} (w,t_{\lambda(w)})} \delta(s, w,J_\Delta) \vert S_{w,J_\Delta}^{\rm{tors}} \cap K_{q-1} \vert (q-1)^{A(\Delta)} q^{r B(w, \Delta)} (1-q^r)^{C(\Delta)}, $$ where $$ \begin{cases} A(\Delta) = 4 - {\rm{rank}}(J_\Delta) - 1, \\ B(w, \Delta) = [(6 - \ell(w)) - \ell(\Delta)]/2, \\ C(\Delta) = \ell(\Delta) - 4. \end{cases} $$ Roots and coroots for $GSp_6$ can be described in the same coordinate systems used for $GSp_4$. In particular, the cocharacter lattice of $T$ is $\cochars{T} = \langle e_0, e_1, e_2, e_3 \rangle$, and we can express the coroots in terms of these generators: \begin{align*} \alpha_1^\vee &= (1, -1, 0, 0, 1, -1) = e_1 - e_2 & \alpha_6^\vee &= (1,1,0,0,-1,-1) = e_1 + e_2 \\ \alpha_2^\vee &= (0, 1, -1, 1, -1, 0) = e_2 - e_3 & \alpha_7^\vee &= (1, 0, 1, -1, 0, -1) = e_1 + e_3 \\ \alpha_3^\vee &= (0,0,1,-1,0,0) = e_3 & \alpha_8^\vee &= (0, 1, 0, 0, -1, 0) = e_2 \\ \alpha_4^\vee &= (1, 0,-1,1, 0, -1) = e_1 - e_3 & \alpha_9^\vee &= (0, 1,1, -1, -1, 0) = e_2 + e_3 \\ \alpha_5^\vee &= (1,0,0,0,0,-1) = e_1 \end{align*} The simple reflections in $W$ are $s_1$, $s_2$ and $s_3$, corresponding to the roots $\alpha_1$, $\alpha_2$ and $\alpha_3$, respectively. We sometimes specify a reflection in terms of its corresponding positive coroot, \begin{align*} s_{\alpha_1} &= s_1 & s_{\alpha_4} &= s_{121} & s_{\alpha_7} &= s_{31213} \\ s_{\alpha_2} &= s_2 & s_{\alpha_5} &= s_{12321} & s_{\alpha_8} &= s_{232} \\ s_{\alpha_3} &= s_3 & s_{\alpha_6} &= s_{2132312} & s_{\alpha_9} &= s_{323} \end{align*} The reflection ordering is: $$ s_3 \prec s_{323} \prec s_{232} \prec s_{31213} \prec s_{2132312} \prec s_{12321} \prec s_2 \prec s_{121} \prec s_1. $$ There are 79 $\mu$-admissible elements in this case, ranging in length from 0 to 6. The data for this case is in Tables~\ref{table::gsp6-1100-coeff} and~\ref{table::gsp6-1100-adm}.
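One can verify such identities directly from the generators (our own check, reading $e_1 = (1,0,0,0,0,-1)$ off $\alpha_5^\vee$ and $e_3 = (0,0,1,-1,0,0)$ off $\alpha_3^\vee$); for instance,
$$ (e_1 + e_3)(x) = {\rm{diag}}(x, 1, 1, 1, 1, x^{-1})\, {\rm{diag}}(1, 1, x, x^{-1}, 1, 1) = {\rm{diag}}(x, 1, x, x^{-1}, 1, x^{-1}), $$
which has coordinate tuple $(1, 0, 1, -1, 0, -1)$, as expected for $\alpha_7^\vee$.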
Consider $w=t_{(1,1,1,0,0,0)} s_{323123}$, the unique length-zero $\mu$-admissible element. The set $B_{\Phi}^{\prec} (w, t_{\lambda(w)})$ contains five paths, whose edge sets are \begin{align*} E(\Delta_1) &= \{s_{232}, s_{31213}\} \\ E(\Delta_2) &= \{s_3, s_{232}, s_{31213}, s_{12321}\} \\ E(\Delta_3) &= \{s_{323}, s_{31213}, s_{2132312}, s_{12321}\} \\ E(\Delta_4) &= \{s_3, s_{323}, s_{31213}, s_{2132312}\} \\ E(\Delta_5) &= \{s_3, s_{323}, s_{232}, s_{31213}, s_{2132312}, s_{12321}\}. \end{align*} The associated root systems for the first two paths are $J_{\Delta_1} = \{\pm \alpha_7\} \coprod \{\pm \alpha_8\}$, which is type $A_1 \times A_1$, and $J_{\Delta_2} = \{\pm \alpha_3, \pm \alpha_4, \pm \alpha_5, \pm \alpha_7\} \coprod \{ \pm \alpha_8 \}$, which has type $C_2 \times A_1$. The latter three associated root systems are $J_{\Delta_3} = J_{\Delta_4} = J_{\Delta_5} = \Phi$. Next, we want to find $\chars{S_{w, J_{\Delta_i}}}$ to see if any torsion elements exist. Let us do the calculation for $\Delta_1$. We have $$ \cochars{T}/\mathbb{Z}J_{\Delta_1}^\vee = \langle e_0, e_1, e_2, e_3 \rangle / \langle e_1 + e_3, e_2 \rangle = \langle \bar{e}_0, \bar{e}_1 \rangle. $$ Since $\lambda(w) = \mu = e_0 + e_1 + e_2 + e_3 \in \cochars{T}$, its image in the above quotient is $\bar{\mu} = \bar{e}_0$. This means $c_1 = 1$ in the notation of Lemma~\ref{lemma::rel-grp-gln-gsp2n}, so $\chars{S_{w, J_{\Delta_1}}} \cong \mathbb{Z}$ is torsion-free. A similar calculation for the other paths shows that $\chars{S_{w, J_{\Delta_i}}} = \{1\}$ for $i = 2,3,4,5$. Here are the inclusions between the finite critical groups: $$ A_{w, J_{\Delta_1}, k_F} \subset A_{w, J_{\Delta_2}, k_F} \subset A_{w, J_{\Delta_3}, k_F} = A_{w, J_{\Delta_4}, k_F} = A_{w, J_{\Delta_5}, k_F}.
$$ This is enough information to write down the coefficient $\phi_{r,1}^{\rm{aug}}(I_r^+ s t_{(1,1,1,0,0,0)} s_{323123} I_r^+)$: $$ \begin{cases} 0, & {\rm{if}}\ N_r (s) \notin A_{w,J_{\Delta_3}, k_F}, \\ 2q^r + (1-q^r)^2, & {\rm{if}}\ N_r(s) \in A_{w, J_{\Delta_3}, k_F} \setminus A_{w, J_{\Delta_2}, k_F}, \\ 3q^r + (1-q^r)^2, & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_2}, k_F} \setminus A_{w, J_{\Delta_1}, k_F}, \\ (q-1)q^{2r} (1-q^r)^{-2} + 3q^r + (1-q^r)^2, & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ The $\mu$-admissible elements such that $\ell(w) = 3$ split into two cases (the exact lists are in Table~\ref{table::gsp6-1100-adm}). Consider $w = t_\mu s_{323}$ as an exemplar for the first case. There are two $\prec$-increasing paths $w \stackrel{\Delta_i}{\longrightarrow} t_{\lambda(w)}$; their edge sets are: \begin{align*} E(\Delta_1) &= \{ s_{323} \} \\ E(\Delta_2) &= \{ s_3, s_{323}, s_{232} \}. \end{align*} The associated root systems are $J_{\Delta_1} = \{ \pm \alpha_9 \}$ and $J_{\Delta_2} = \{\pm \alpha_2, \pm \alpha_3, \pm \alpha_8, \pm \alpha_9\}$, which has type $C_2$. The usual method of calculation shows that $$ \chars{S_{w, J_{\Delta_1}}} \cong \mathbb{Z} \times \mathbb{Z} $$ and $$ \chars{S_{w, J_{\Delta_2}}} \cong \mathbb{Z}. $$ We also have $A_{w, J_{\Delta_1}, k_F} \subset A_{w, J_{\Delta_2}, k_F}$. So the coefficient in this case is $$ \begin{cases} 0, & {\rm{if}}\ N_r (s) \notin A_{w, J_{\Delta_2}, k_F}, \\ (q-1)(1-q^r)^{-1}, & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_2}, k_F} \setminus A_{w, J_{\Delta_1}, k_F}, \\ (q-1)^2 q^r (1-q^r)^{-3} + (q-1)(1-q^r)^{-1}, & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ Now choose $w = t_{(1,0,0,1,1,0)} s_{123}$ as the representative for the second class of length-three $\mu$-admissible elements. There is only one $\prec$-increasing path in this case, whose edge set is $E(\Delta_1) = \{s_3, s_2, s_1\}.$ These edges correspond to the three simple roots, so $J_{\Delta_1} = \Phi$.
It follows that $$ \phi_{r,1}^{\rm{aug}}(I_r^+ sw I_r^+) = \begin{cases} 0, & {\rm{if}}\ N_r (s) \notin A_{w, J_{\Delta_1}, k_F}, \\ (1-q^r)^{-1}, & {\rm{if}}\ N_r (s) \in A_{w, J_{\Delta_1}, k_F}. \end{cases} $$ \appendix \section{Tables of coefficient data} This appendix presents the data needed to fully explain several cases of $\phi_{r,1}^{\rm{aug}}$ for general linear groups and general symplectic groups: \begin{enumerate} \item $GL_4$, $\mu = (1, 1, 0, 0)$ \item $GL_5$, $\mu = (1, 1, 0, 0, 0)$ \item $GL_6$, $\mu = (1, 1, 0, 0, 0, 0)$ \item $GL_6$, $\mu = (1, 1, 1, 0, 0, 0)$ \item $GSp_4$, $\mu = (1, 1, 0, 0)$ \item $GSp_6$, $\mu = (1, 1, 1, 0, 0, 0)$ \end{enumerate} In some tables, we will describe $\mu$-admissible elements; these have the form $w = t_\lambda \bar{w}$, where $\lambda$ is a conjugate of $\mu$ and $\bar{w}$ is an element of the finite Weyl group. The coordinates of the coweight $\lambda$ in $\cochars{T}$ are used. To save space, we write $\bar{w}$ as $s_{i_1 i_2 \ldots i_r}$ rather than $s_{i_1} s_{i_2} \cdots s_{i_r}$, where the $s_{i_k}$ are simple reflections in $W$. See Section~\ref{section::worked-example-gl4-1100} for directions on using these tables. \begin{table}[hp] \caption{Coefficient data for $GL_4$, $\mu = (1,1,0,0)$} \label{table::gl4-1100-coeff} \renewcommand{\arraystretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 0$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 4 & 3 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} \subseteq A_{w, \Delta_2, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom.
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & -1 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2$, $w \in X_1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2$, $w \in X_2$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 1 & 1 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 0 & -3 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\emptyset$ & 0 & 0 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 3 & 0 & -4 \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Subsets of ${\rm{Adm}}(\mu)$ for $GL_4$, $\mu = (1,1,0,0)$} \label{table::gl4-1100-adm} \renewcommand{\arraystretch}{1} \small\normalsize \begin{center} \begin{tabular}{|p{5in}|} \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_1$} \\ \hline $t_{(0, 1, 1, 0)} s_{32}$, $t_{(1, 0, 1, 0)} s_{3121}$, $t_{(1, 1, 0, 0)} s_{21}$, $t_{(1, 1, 0, 0)} s_{2321}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_2$} \\ \hline $t_{(1, 0, 0, 1)} s_{12}$, $t_{(1, 0, 1, 0)} s_{1232}$, $t_{(1, 0, 1, 0)} s_{31}$, $t_{(1, 1, 0, 0)} s_{123121}$, $t_{(1, 1, 0, 0)} s_{1231}$, $t_{(1, 1, 0, 0)} s_{23}$ \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Coefficient data for $GL_5$, $\mu = (1,1,0,0,0)$} \label{table::gl5-11000-coeff} \renewcommand{\arraystretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 0$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 1 & -1 \\ $\Delta_2$ & 4 & 4 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 1 & -1 \\ $\Delta_3$ & 6 & 4 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & 1 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} = A_{w, \Delta_2, k_F} = A_{w, \Delta_3, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom.
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 5 & 4 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} \subseteq A_{w, \Delta_2, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2$, $w \in X_1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & -1 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2$, $w \in X_2$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 1 & -3 \\ $\Delta_2$ & 4 & 3 & $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 1 & 0 & -1 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} \subseteq A_{w, \Delta_2, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3$, $w \in X_3$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3$, $w \in X_4$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4$, $w \in X_5$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 0 & -3 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4$, $w \in X_6$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 2 & 0 & -3 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 5$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 1 & 1 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 3 & 0 & -4 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 6$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\emptyset$ & 0 & 0 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 4 & 0 & -5 \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Subsets of ${\rm{Adm}}(\mu)$ for $GL_5$, $\mu = (1,1,0,0,0)$} \label{table::gl5-11000-adm} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|p{5in}|} \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_1$} \\ \hline $t_{(1, 0, 0, 1, 0)} s_{4123}$, $t_{(1, 0, 1, 0, 0)} s_{123423}$, $t_{(1, 0, 1, 0, 0)} s_{341232}$, $t_{(1, 0, 1, 0, 0)} s_{3412}$, $t_{(1, 1, 0, 0, 0)} s_{12341231}$, $t_{(1, 1, 0, 0, 0)} s_{12341232}$, $t_{(1, 1, 0, 0, 0)} s_{123412}$, $t_{(1, 1, 0, 0, 0)} s_{234121}$, $t_{(1, 1, 0, 0, 0)} s_{23412321}$, $t_{(1, 1, 0, 0, 0)} s_{2341}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_2$} \\ \hline $t_{(0, 1, 1, 0, 0)} s_{3423}$, $t_{(1, 0, 1, 0, 0)} s_{341231}$, $t_{(1, 1, 0, 0, 0)} 
s_{2312}$, $t_{(1, 1, 0, 0, 0)} s_{23412312}$, $t_{(1, 1, 0, 0, 0)} s_{234312}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 3$, $w \in X_3$} \\ \hline $t_{(0, 1, 0, 1, 0)} s_{423}$, $t_{(0, 1, 1, 0, 0)} s_{23423}$, $t_{(0, 1, 1, 0, 0)} s_{34232}$, $t_{(0, 1, 1, 0, 0)} s_{342}$, $t_{(1, 0, 0, 1, 0)} s_{41231}$, $t_{(1, 0, 0, 1, 0)} s_{41232}$, $t_{(1, 0, 1, 0, 0)} s_{1234231}$, $t_{(1, 0, 1, 0, 0)} s_{312}$, $t_{(1, 0, 1, 0, 0)} s_{34121}$, $t_{(1, 0, 1, 0, 0)} s_{3412321}$, $t_{(1, 0, 1, 0, 0)} s_{34312}$, $t_{(1, 1, 0, 0, 0)} s_{12312}$, $t_{(1, 1, 0, 0, 0)} s_{123412312}$, $t_{(1, 1, 0, 0, 0)} s_{1234312}$, $t_{(1, 1, 0, 0, 0)} s_{23121}$, $t_{(1, 1, 0, 0, 0)} s_{231}$, $t_{(1, 1, 0, 0, 0)} s_{234123121}$, $t_{(1, 1, 0, 0, 0)} s_{23421}$, $t_{(1, 1, 0, 0, 0)} s_{2343121}$, $t_{(1, 1, 0, 0, 0)} s_{23431}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 3$, $w \in X_4$} \\ \hline $t_{(1, 0, 0, 0, 1)} s_{123}$, $t_{(1, 0, 0, 1, 0)} s_{12343}$, $t_{(1, 0, 0, 1, 0)} s_{412}$, $t_{(1, 0, 1, 0, 0)} s_{1234232}$, $t_{(1, 0, 1, 0, 0)} s_{12342}$, $t_{(1, 0, 1, 0, 0)} s_{341}$, $t_{(1, 1, 0, 0, 0)} s_{1234121}$, $t_{(1, 1, 0, 0, 0)} s_{123412321}$, $t_{(1, 1, 0, 0, 0)} s_{12341}$, $t_{(1, 1, 0, 0, 0)} s_{234}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 4$, $w \in X_5$} \\ \hline $t_{(0, 1, 0, 0, 1)} s_{23}$, $t_{(0, 1, 0, 1, 0)} s_{2343}$, $t_{(0, 1, 0, 1, 0)} s_{42}$, $t_{(0, 1, 1, 0, 0)} s_{234232}$, $t_{(0, 1, 1, 0, 0)} s_{2342}$, $t_{(0, 1, 1, 0, 0)} s_{34}$, $t_{(1, 0, 0, 0, 1)} s_{1231}$, $t_{(1, 0, 0, 0, 1)} s_{1232}$, $t_{(1, 0, 0, 0, 1)} s_{12}$, $t_{(1, 0, 0, 1, 0)} s_{123431}$, $t_{(1, 0, 0, 1, 0)} s_{123432}$, $t_{(1, 0, 0, 1, 0)} s_{12}$, $t_{(1, 0, 0, 1, 0)} s_{4121}$, $t_{(1, 0, 0, 1, 0)} s_{41}$, $t_{(1, 0, 1, 0, 0)} s_{1232}$, $t_{(1, 0, 1, 0, 0)} s_{123421}$, $t_{(1, 0, 1, 0, 0)} s_{12342321}$, $t_{(1, 0, 1, 0, 0)} s_{123432}$, $t_{(1, 0, 1, 0, 0)} s_{31}$, $t_{(1, 0, 1, 0, 0)} 
s_{3431}$, $t_{(1, 0, 1, 0, 0)} s_{34}$, $t_{(1, 1, 0, 0, 0)} s_{123121}$, $t_{(1, 1, 0, 0, 0)} s_{1231}$, $t_{(1, 1, 0, 0, 0)} s_{1234123121}$, $t_{(1, 1, 0, 0, 0)} s_{123421}$, $t_{(1, 1, 0, 0, 0)} s_{12343121}$, $t_{(1, 1, 0, 0, 0)} s_{123431}$, $t_{(1, 1, 0, 0, 0)} s_{2342}$, $t_{(1, 1, 0, 0, 0)} s_{2343}$, $t_{(1, 1, 0, 0, 0)} s_{23}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 4$, $w \in X_6$} \\ \hline $t_{(0, 0, 1, 1, 0)} s_{43}$, $t_{(0, 1, 0, 1, 0)} s_{4232}$, $t_{(0, 1, 1, 0, 0)} s_{32}$, $t_{(0, 1, 1, 0, 0)} s_{3432}$, $t_{(1, 0, 0, 1, 0)} s_{412321}$, $t_{(1, 0, 1, 0, 0)} s_{3121}$, $t_{(1, 0, 1, 0, 0)} s_{343121}$, $t_{(1, 1, 0, 0, 0)} s_{21}$, $t_{(1, 1, 0, 0, 0)} s_{2321}$, $t_{(1, 1, 0, 0, 0)} s_{234321}$ \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Coefficient data for $GL_6$, $\mu = (1,1,0,0,0,0)$} \label{table::gl6-110000-coeff} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 0$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}$ & 1 & 2 & -2 \\ $\Delta_2$ & 6 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 1 & 0 \\ $\Delta_3$ & 6 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 1 & 0 \\ $\Delta_4$ & 6 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 1 & 0 \\ $\Delta_5$ & 8 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & 2 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} \subset A_{w, \Delta_2, k_F} = A_{w, \Delta_3, k_F} = A_{w, \Delta_4, k_F} = A_{w, \Delta_5, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 5 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 1 & -1 \\ $\Delta_2$ & 5 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 1 & -1 \\ $\Delta_3$ & 7 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & 1 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} = A_{w, \Delta_2, k_F} = A_{w, \Delta_3, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2, w \in X_1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 4 & 4 & $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_3$ & 6 & 4 & $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 1 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} = A_{w, \Delta_2, k_F} = A_{w, \Delta_3, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2, w \in X_2$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 6 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} \subset A_{w, \Delta_2, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3, w \in X_3$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 1 & -3 \\ $\Delta_2$ & 5 & 4 & $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 1 & 0 & -1 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} \subset A_{w, \Delta_2, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3, w \in X_4$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 5 & 5 & $\mathbb{Z}/2\mathbb{Z}$ & 0 & 0 & -1 \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize Table continues on next page. \end{table} \begin{table}[hp] \caption{Coefficient data for $GL_6$, $\mu = (1,1,0,0,0,0)$ (continued)} \label{table::gl6-110000-coeff-cont} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4, w \in X_5$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4, w \in X_6$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 3 & 1 & -4 \\ $\Delta_2$ & 4 & 3 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 2 & 0 & -2 \\ \hline \multicolumn{7}{|c|}{$A_{w, \Delta_1, k_F} \subset A_{w, \Delta_2, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4, w \in X_7$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 5, w \in X_8$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 0 & -3 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 5, w \in X_9$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 2 & 0 & -3 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 6, w \in X_{10}$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 3 & 0 & -4 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 6, w \in X_{11}$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ & 3 & 0 & -4 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 7$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 1 & 1 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 4 & 0 & -5 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 8$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\emptyset$ & 0 & 0 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 5 & 0 & -6 \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Subsets of ${\rm{Adm}}(\mu)$ for $GL_6$, $\mu = (1,1,0,0,0,0)$} \label{table::gl6-110000-adm} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|p{5in}|} \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_1$} \\ \hline $t_{(0, 1, 1, 0, 0, 0)} s_{345234}$, $t_{(1, 0, 1, 0, 0, 0)} s_{34512341}$, $t_{(1, 1, 0, 0, 0, 0)} s_{234123}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2345123412}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2345123423}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23454123}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_2$} \\ \hline $t_{(1, 0, 0, 1, 0, 0)} s_{451234}$, $t_{(1, 0, 1, 0, 0, 0)} s_{12345234}$, $t_{(1, 0, 1, 0, 0, 0)} s_{34512342}$, $t_{(1, 0, 1, 0, 0, 0)} s_{34512343}$, $t_{(1, 0, 1, 0, 0, 0)} s_{345123}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234512341}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234512342}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234512343}$, $t_{(1, 1, 0, 0, 0, 0)} s_{12345123}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23451231}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23451232}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2345123421}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2345123431}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2345123432}$, $t_{(1, 1, 0, 0, 0, 0)} s_{234512}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 3$, $w \in X_3$} \\ \hline $X_3$ contains the 30 length-three elements not contained in $X_4$. 
\\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 3$, $w \in X_4$} \\ \hline $t_{(1, 0, 0, 0, 1, 0)} s_{51234}$, $t_{(1, 0, 0, 1, 0, 0)} s_{1234534}$, $t_{(1, 0, 0, 1, 0, 0)} s_{4512343}$, $t_{(1, 0, 0, 1, 0, 0)} s_{45123}$, $t_{(1, 0, 1, 0, 0, 0)} s_{123452342}$, $t_{(1, 0, 1, 0, 0, 0)} s_{123452343}$, $t_{(1, 0, 1, 0, 0, 0)} s_{1234523}$, $t_{(1, 0, 1, 0, 0, 0)} s_{3451232}$, $t_{(1, 0, 1, 0, 0, 0)} s_{345123432}$, $t_{(1, 0, 1, 0, 0, 0)} s_{34512}$, $t_{(1, 1, 0, 0, 0, 0)} s_{123451231}$, $t_{(1, 1, 0, 0, 0, 0)} s_{123451232}$, $t_{(1, 1, 0, 0, 0, 0)} s_{12345123421}$, $t_{(1, 1, 0, 0, 0, 0)} s_{12345123431}$, $t_{(1, 1, 0, 0, 0, 0)} s_{12345123432}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234512}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2345121}$, $t_{(1, 1, 0, 0, 0, 0)} s_{234512321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23451234321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23451}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 4$, $w \in X_5$} \\ \hline $t_{(1, 0, 0, 0, 0, 1)} s_{1234}$, $t_{(1, 0, 0, 0, 1, 0)} s_{123454}$, $t_{(1, 0, 0, 0, 1, 0)} s_{5123}$, $t_{(1, 0, 0, 1, 0, 0)} s_{12345343}$, $t_{(1, 0, 0, 1, 0, 0)} s_{123453}$, $t_{(1, 0, 0, 1, 0, 0)} s_{4512}$, $t_{(1, 0, 1, 0, 0, 0)} s_{12345232}$, $t_{(1, 0, 1, 0, 0, 0)} s_{1234523432}$, $t_{(1, 0, 1, 0, 0, 0)} s_{123452}$, $t_{(1, 0, 1, 0, 0, 0)} s_{3451}$, $t_{(1, 1, 0, 0, 0, 0)} s_{12345121}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234512321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{123451234321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{123451}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2345}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 4$, $w \in X_6$} \\ \hline $t_{(0, 0, 1, 1, 0, 0)} s_{4534}$, $t_{(0, 1, 0, 1, 0, 0)} s_{452342}$, $t_{(0, 1, 1, 0, 0, 0)} s_{3423}$, $t_{(0, 1, 1, 0, 0, 0)} s_{34523423}$, $t_{(0, 1, 1, 0, 0, 0)} s_{345423}$, $t_{(1, 0, 0, 1, 0, 0)} s_{45123421}$, $t_{(1, 0, 1, 0, 0, 0)} s_{341231}$, $t_{(1, 0, 1, 0, 0, 0)} s_{3451234231}$, $t_{(1, 0, 1, 0, 0, 0)} s_{34541231}$, $t_{(1, 1, 0, 0, 0, 0)} 
s_{2312}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23412312}$, $t_{(1, 1, 0, 0, 0, 0)} s_{234312}$, $t_{(1, 1, 0, 0, 0, 0)} s_{234512342312}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2345412312}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23454312}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 4$, $w \in X_7$} \\ \hline $X_7$ contains the 60 length-four elements not contained in $X_5 \cup X_6$. \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize Table continues on next page. \end{table} \begin{table}[hp] \caption{Subsets of ${\rm{Adm}}(\mu)$ for $GL_6$, $\mu = (1,1,0,0,0,0)$ (continued)} \label{table::gl6-110000-adm-cont} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|p{5in}|} \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 5$, $w \in X_8$} \\ \hline $t_{(0, 1, 0, 0, 0, 1)} s_{234}$, $t_{(0, 1, 0, 0, 1, 0)} s_{23454}$, $t_{(0, 1, 0, 0, 1, 0)} s_{523}$, $t_{(0, 1, 0, 1, 0, 0)} s_{2345343}$, $t_{(0, 1, 0, 1, 0, 0)} s_{23453}$, $t_{(0, 1, 0, 1, 0, 0)} s_{452}$, $t_{(0, 1, 1, 0, 0, 0)} s_{2345232}$, $t_{(0, 1, 1, 0, 0, 0)} s_{234523432}$, $t_{(0, 1, 1, 0, 0, 0)} s_{23452}$, $t_{(0, 1, 1, 0, 0, 0)} s_{345}$, $t_{(1, 0, 0, 0, 0, 1)} s_{12341}$, $t_{(1, 0, 0, 0, 0, 1)} s_{12342}$, $t_{(1, 0, 0, 0, 0, 1)} s_{12343}$, $t_{(1, 0, 0, 0, 0, 1)} s_{123}$, $t_{(1, 0, 0, 0, 1, 0)} s_{1234541}$, $t_{(1, 0, 0, 0, 1, 0)} s_{1234542}$, $t_{(1, 0, 0, 0, 1, 0)} s_{1234543}$, $t_{(1, 0, 0, 0, 1, 0)} s_{123}$, $t_{(1, 0, 0, 0, 1, 0)} s_{51231}$, $t_{(1, 0, 0, 0, 1, 0)} s_{51232}$, $t_{(1, 0, 0, 0, 1, 0)} s_{512}$, $t_{(1, 0, 0, 1, 0, 0)} s_{12343}$, $t_{(1, 0, 0, 1, 0, 0)} s_{1234531}$, $t_{(1, 0, 0, 1, 0, 0)} s_{1234532}$, $t_{(1, 0, 0, 1, 0, 0)} s_{123453431}$, $t_{(1, 0, 0, 1, 0, 0)} s_{123453432}$, $t_{(1, 0, 0, 1, 0, 0)} s_{1234543}$, $t_{(1, 0, 0, 1, 0, 0)} s_{412}$, $t_{(1, 0, 0, 1, 0, 0)} s_{45121}$, $t_{(1, 0, 0, 1, 0, 0)} s_{451}$, $t_{(1, 0, 0, 1, 0, 0)} s_{45412}$, $t_{(1, 0, 1, 0, 0, 0)} s_{1234232}$, $t_{(1, 0, 1, 0, 0, 0)} 
s_{12342}$, $t_{(1, 0, 1, 0, 0, 0)} s_{1234521}$, $t_{(1, 0, 1, 0, 0, 0)} s_{123452321}$, $t_{(1, 0, 1, 0, 0, 0)} s_{12345234232}$, $t_{(1, 0, 1, 0, 0, 0)} s_{12345234321}$, $t_{(1, 0, 1, 0, 0, 0)} s_{1234532}$, $t_{(1, 0, 1, 0, 0, 0)} s_{123454232}$, $t_{(1, 0, 1, 0, 0, 0)} s_{1234542}$, $t_{(1, 0, 1, 0, 0, 0)} s_{341}$, $t_{(1, 0, 1, 0, 0, 0)} s_{34531}$, $t_{(1, 0, 1, 0, 0, 0)} s_{34541}$, $t_{(1, 0, 1, 0, 0, 0)} s_{345}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234121}$, $t_{(1, 1, 0, 0, 0, 0)} s_{123412321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{12341}$, $t_{(1, 1, 0, 0, 0, 0)} s_{12345123121}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234512342321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234512343121}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234521}$, $t_{(1, 1, 0, 0, 0, 0)} s_{123453121}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234531}$, $t_{(1, 1, 0, 0, 0, 0)} s_{123454121}$, $t_{(1, 1, 0, 0, 0, 0)} s_{12345412321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{1234541}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23452}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23453}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23454}$, $t_{(1, 1, 0, 0, 0, 0)} s_{234}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 5$, $w \in X_9$} \\ \hline $X_9$ contains the 60 length-five elements not contained in $X_8$. \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 6$, $w \in X_{10}$} \\ \hline $X_{10}$ contains the 90 length-six elements not contained in $X_{11}$. 
\\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 6$, $w \in X_{11}$} \\ \hline $t_{(0, 0, 0, 1, 1, 0)} s_{54}$, $t_{(0, 0, 1, 0, 1, 0)} s_{5343}$, $t_{(0, 0, 1, 1, 0, 0)} s_{43}$, $t_{(0, 0, 1, 1, 0, 0)} s_{4543}$, $t_{(0, 1, 0, 0, 1, 0)} s_{523432}$, $t_{(0, 1, 0, 1, 0, 0)} s_{4232}$, $t_{(0, 1, 0, 1, 0, 0)} s_{454232}$, $t_{(0, 1, 1, 0, 0, 0)} s_{32}$, $t_{(0, 1, 1, 0, 0, 0)} s_{3432}$, $t_{(0, 1, 1, 0, 0, 0)} s_{345432}$, $t_{(1, 0, 0, 0, 1, 0)} s_{51234321}$, $t_{(1, 0, 0, 1, 0, 0)} s_{412321}$, $t_{(1, 0, 0, 1, 0, 0)} s_{45412321}$, $t_{(1, 0, 1, 0, 0, 0)} s_{3121}$, $t_{(1, 0, 1, 0, 0, 0)} s_{343121}$, $t_{(1, 0, 1, 0, 0, 0)} s_{34543121}$, $t_{(1, 1, 0, 0, 0, 0)} s_{21}$, $t_{(1, 1, 0, 0, 0, 0)} s_{2321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{234321}$, $t_{(1, 1, 0, 0, 0, 0)} s_{23454321}$ \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Coefficient data for $GL_6$, $\mu = (1,1,1,0,0,0)$} \label{table::gl6-111000-coeff} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 0$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 3 & -3 \\ $\Delta_2$ & 5 & 4 & $\mathbb{Z}$ & 1 & 2 & -1 \\ $\Delta_3$ & 5 & 4 & $\mathbb{Z}$ & 1 & 2 & -1 \\ $\Delta_4$ & 5 & 4 & $\mathbb{Z}$ & 1 & 2 & -1 \\ $\Delta_5$ & 7 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & 1 \\ $\Delta_6$ & 7 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & 1 \\ $\Delta_7$ & 7 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & 1 \\ $\Delta_8$ & 7 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & 1 \\ $\Delta_9$ & 9 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 0 & 3 \\ \hline \multicolumn{7}{|c|}{ \xymatrix{ & A_{w,J_{\Delta_5}, k_F} = \cdots = A_{w, J_{\Delta_9}, k_F} & \\ A_{w, J_{\Delta_2}, k_F} \ar[ur] & A_{w, J_{\Delta_3}, k_F} \ar[u] & A_{w, J_{\Delta_4}, k_F} \ar[ul] \\ & A_{w, J_{\Delta_1}, k_F} \ar[ul] \ar[u] \ar[ur] } } \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}$ & 1 & 2 & -2 \\ $\Delta_2$ & 6 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & 0 \\ $\Delta_3$ & 6 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & 0 \\ $\Delta_4$ & 6 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & 0 \\ $\Delta_5$ & 8 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 0 & 2 \\ \hline \multicolumn{7}{|c|}{$A_{w,J_{\Delta_1}, k_F} \subset A_{w,J_{\Delta_2}, k_F} = \cdots = A_{w,J_{\Delta_5}, k_F}$}\\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2, w \in X_1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 2 & -3 \\ $\Delta_2$ & 5 & 4 & $\mathbb{Z}$ & 1 & 1 & -1 \\ $\Delta_3$ & 5 & 4 & $\mathbb{Z}$ & 1 & 1 & -1 \\ $\Delta_4$ & 7 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 0 & 1 \\ \hline \multicolumn{7}{|c|}{ \xymatrix{ & A_{w,J_{\Delta_4}, k_F} & \\ A_{w, J_{\Delta_2}, k_F} \ar[ur] & & A_{w, J_{\Delta_3}, k_F} \ar[ul] \\ & A_{w, J_{\Delta_1}, k_F} \ar[ul] \ar[ur] } } \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize Table continues on next page. \end{table} \begin{table}[hp] \caption{Coefficient data for $GL_6$, $\mu = (1,1,1,0,0,0)$ (continued)} \label{table::gl6-111000-coeff-cont} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2, w \in X_2$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 5 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & -1 \\ $\Delta_2$ & 5 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 1 & -1 \\ $\Delta_3$ & 7 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 0 & 1 \\ \hline \multicolumn{7}{|c|}{$A_{w,J_{\Delta_1}, k_F} = A_{w,J_{\Delta_2},k_F} = A_{w,J_{\Delta_3},k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3, w \in X_3$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 6 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w,J_{\Delta_1}, k_F} \subset A_{w,J_{\Delta_2},k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3, w \in X_4$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 4 & 4 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_3$ & 6 & 4 & $\mathbb{Z}$ & 1 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w,J_{\Delta_1}, k_F} = A_{w,J_{\Delta_2},k_F} = A_{w,J_{\Delta_3},k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3, w \in X_5$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 4 & 4 & $\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_3$ & 6 & 4 & $\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ & 1 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w,J_{\Delta_1}, k_F} = A_{w,J_{\Delta_2},k_F} = A_{w,J_{\Delta_3},k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4, w \in X_6$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 5 & 5 & $\mathbb{Z}/3\mathbb{Z}$ & 0 & 0 & -1 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4, w \in X_7$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 1 & -3 \\ $\Delta_2$ & 5 & 4 & $\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ & 1 & 0 & -1 \\ \hline \multicolumn{7}{|c|}{$A_{w,J_{\Delta_1}, k_F} \subset A_{w,J_{\Delta_2},k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4, w \in X_8$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 1 & -3 \\ $\Delta_2$ & 5 & 4 & $\mathbb{Z}$ & 1 & 0 & -1 \\ \hline \multicolumn{7}{|c|}{$A_{w,J_{\Delta_1}, k_F} \subset A_{w,J_{\Delta_2},k_F}$} \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize Table continues on next page. \end{table} \begin{table}[hp] \caption{Coefficient data for $GL_6$, $\mu = (1,1,1,0,0,0)$ (continued)} \label{table::gl6-111000-coeff-cont2} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 5, w \in X_9$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 5, w \in X_{10}$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 4 & 4 & $\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 5, w \in X_{11}$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 3 & 1 & -4 \\ $\Delta_2$ & 4 & 3 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 0 & -2 \\ \hline \multicolumn{7}{|c|}{$A_{w,J_{\Delta_1}, k_F} \subset A_{w,J_{\Delta_2},k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 6, w \in X_{12}$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 0 & -3 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 6, w \in X_{13}$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ & 2 & 0 & -3 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 7$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 3 & 0 & -4 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 8$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 1 & 1 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 4 & 0 & -5 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 9$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\emptyset$ & 0 & 0 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 5 & 0 & -6 \\ \hline \end{tabular} \end{center} \renewcommand{\baselinestretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Subsets of ${\rm{Adm}}(\mu)$ for $GL_6$, $\mu = (1,1,1,0,0,0)$} \label{table::gl6-111000-adm} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{center} \begin{tabular}{|p{5in}|} \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_1$} \\ \hline $t_{(1, 1, 0, 1, 0, 0)} s_{4523412}$, $t_{(1, 1, 1, 0, 0, 0)} s_{23452341232}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34512341231}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_2$} \\ \hline $X_2$ contains the 18 length-two elements not contained in $X_1$. \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 3$, $w \in X_3$} \\ \hline $X_3$ contains the 44 length-three elements not contained in $X_4 \cup X_5$. \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 3$, $w \in X_4$} \\ \hline $t_{(1, 1, 0, 0, 0, 1)} s_{234123}$, $t_{(1, 1, 0, 0, 1, 0)} s_{23454123}$, $t_{(1, 1, 0, 1, 0, 0)} s_{2345123423}$, $t_{(1, 1, 1, 0, 0, 0)} s_{2345123412}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34512341}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345234}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 3$, $w \in X_5$} \\ \hline $t_{(0, 1, 1, 1, 0, 0)} s_{453423}$, $t_{(1, 0, 1, 1, 0, 0)} s_{45341231}$, $t_{(1, 1, 0, 1, 0, 0)} s_{4523412312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{342312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3452342312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34542312}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 4$, $w \in X_6$} \\ \hline $X_6$ contains the 48 length-four elements not contained in $X_7 \cup X_8$. 
\\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 4$, $w \in X_7$} \\ \hline $t_{(0, 1, 1, 0, 1, 0)} s_{53423}$, $t_{(0, 1, 1, 1, 0, 0)} s_{3453423}$, $t_{(0, 1, 1, 1, 0, 0)} s_{4523423}$, $t_{(0, 1, 1, 1, 0, 0)} s_{4534232}$, $t_{(0, 1, 1, 1, 0, 0)} s_{45342}$, $t_{(1, 0, 1, 0, 1, 0)} s_{5341231}$, $t_{(1, 0, 1, 1, 0, 0)} s_{345341231}$, $t_{(1, 0, 1, 1, 0, 0)} s_{451234231}$, $t_{(1, 0, 1, 1, 0, 0)} s_{4534121}$, $t_{(1, 0, 1, 1, 0, 0)} s_{453412321}$, $t_{(1, 1, 0, 0, 1, 0)} s_{523412312}$, $t_{(1, 1, 0, 0, 1, 0)} s_{5234312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{23453412312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{42312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{45123412312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{45234123121}$, $t_{(1, 1, 0, 1, 0, 0)} s_{4523421}$, $t_{(1, 1, 0, 1, 0, 0)} s_{4542312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{2342312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{23452342312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{234542312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3412312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3423121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34231}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34512342312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34523423121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345234231}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345412312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345423121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3454231}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 4$, $w \in X_8$} \\ \hline $t_{(1, 0, 1, 0, 0, 1)} s_{34123}$, $t_{(1, 0, 1, 0, 1, 0)} s_{3454123}$, $t_{(1, 0, 1, 1, 0, 0)} s_{345123423}$, $t_{(1, 0, 1, 1, 0, 0)} s_{4512342}$, $t_{(1, 0, 1, 1, 0, 0)} s_{45341}$, $t_{(1, 1, 0, 0, 0, 1)} s_{1234123}$, $t_{(1, 1, 0, 0, 0, 1)} s_{2341231}$, $t_{(1, 1, 0, 0, 0, 1)} s_{2341232}$, $t_{(1, 1, 0, 0, 0, 1)} s_{23412}$, $t_{(1, 1, 0, 0, 1, 0)} s_{123454123}$, $t_{(1, 1, 0, 0, 1, 0)} s_{234541231}$, $t_{(1, 1, 0, 0, 1, 0)} s_{234541232}$, $t_{(1, 1, 0, 0, 1, 0)} s_{2345412}$, $t_{(1, 1, 0, 0, 1, 0)} s_{52312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{12345123423}$, $t_{(1, 1, 0, 1, 0, 0)} s_{23451234231}$, $t_{(1, 1, 0, 1, 
0, 0)} s_{23451234232}$, $t_{(1, 1, 0, 1, 0, 0)} s_{2345312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{234534312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{451234121}$, $t_{(1, 1, 0, 1, 0, 0)} s_{4512341}$, $t_{(1, 1, 0, 1, 0, 0)} s_{45234}$, $t_{(1, 1, 1, 0, 0, 0)} s_{12345123412}$, $t_{(1, 1, 1, 0, 0, 0)} s_{1234523412321}$, $t_{(1, 1, 1, 0, 0, 0)} s_{123452341}$, $t_{(1, 1, 1, 0, 0, 0)} s_{234512312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{23451234121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{2345123412312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{23451234312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{2345234}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3451231}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345123421}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345123431}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3452342}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3452343}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34523}$ \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{2} \small\normalsize Table continues on next page. \end{table} \begin{table}[hp] \caption{Subsets of ${\rm{Adm}}(\mu)$ for $GL_6$, $\mu = (1,1,1,0,0,0)$ (continued)} \label{table::gl6-111000-adm-cont} \renewcommand{\arraystretch}{1} \small\normalsize \begin{center} \begin{tabular}{|p{5in}|} \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 5$, $w \in X_9$} \\ \hline $X_9$ contains the 90 length-five elements not contained in $X_{10} \cup X_{11}$.
\\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 5$, $w \in X_{10}$} \\ \hline $t_{(0, 1, 0, 1, 1, 0)} s_{5423}$, $t_{(0, 1, 1, 0, 1, 0)} s_{523423}$, $t_{(0, 1, 1, 0, 1, 0)} s_{534232}$, $t_{(0, 1, 1, 0, 1, 0)} s_{5342}$, $t_{(0, 1, 1, 1, 0, 0)} s_{23453423}$, $t_{(0, 1, 1, 1, 0, 0)} s_{34534232}$, $t_{(0, 1, 1, 1, 0, 0)} s_{345342}$, $t_{(0, 1, 1, 1, 0, 0)} s_{45234232}$, $t_{(0, 1, 1, 1, 0, 0)} s_{4532}$, $t_{(0, 1, 1, 1, 0, 0)} s_{453432}$, $t_{(1, 0, 0, 1, 1, 0)} s_{541231}$, $t_{(1, 0, 0, 1, 1, 0)} s_{541232}$, $t_{(1, 0, 1, 0, 1, 0)} s_{51234231}$, $t_{(1, 0, 1, 0, 1, 0)} s_{534121}$, $t_{(1, 0, 1, 0, 1, 0)} s_{53412321}$, $t_{(1, 0, 1, 0, 1, 0)} s_{534312}$, $t_{(1, 0, 1, 1, 0, 0)} s_{1234534231}$, $t_{(1, 0, 1, 1, 0, 0)} s_{34534121}$, $t_{(1, 0, 1, 1, 0, 0)} s_{3453412321}$, $t_{(1, 0, 1, 1, 0, 0)} s_{4312}$, $t_{(1, 0, 1, 1, 0, 0)} s_{4512342321}$, $t_{(1, 0, 1, 1, 0, 0)} s_{453121}$, $t_{(1, 0, 1, 1, 0, 0)} s_{45343121}$, $t_{(1, 0, 1, 1, 0, 0)} s_{454312}$, $t_{(1, 1, 0, 0, 1, 0)} s_{5123412312}$, $t_{(1, 1, 0, 0, 1, 0)} s_{51234312}$, $t_{(1, 1, 0, 0, 1, 0)} s_{5234123121}$, $t_{(1, 1, 0, 0, 1, 0)} s_{523421}$, $t_{(1, 1, 0, 0, 1, 0)} s_{52343121}$, $t_{(1, 1, 0, 0, 1, 0)} s_{523431}$, $t_{(1, 1, 0, 1, 0, 0)} s_{123453412312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{234534123121}$, $t_{(1, 1, 0, 1, 0, 0)} s_{23453421}$, $t_{(1, 1, 0, 1, 0, 0)} s_{412312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{423121}$, $t_{(1, 1, 0, 1, 0, 0)} s_{4231}$, $t_{(1, 1, 0, 1, 0, 0)} s_{451234123121}$, $t_{(1, 1, 0, 1, 0, 0)} s_{452321}$, $t_{(1, 1, 0, 1, 0, 0)} s_{45234321}$, $t_{(1, 1, 0, 1, 0, 0)} s_{45412312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{45423121}$, $t_{(1, 1, 0, 1, 0, 0)} s_{454231}$ $t_{(1, 1, 1, 0, 0, 0)} s_{12342312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{123452342312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{1234542312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{23423121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{234231}$, $t_{(1, 1, 1, 0, 0, 0)} s_{234523423121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{2345234231}$, $t_{(1, 
1, 1, 0, 0, 0)} s_{2345423121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{23454231}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34123121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3421}$, $t_{(1, 1, 1, 0, 0, 0)} s_{342321}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345123423121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3452342321}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345321}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3454123121}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345421}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34542321}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 5$, $w \in X_{11}$} \\ \hline $t_{(0, 1, 1, 0, 0, 1)} s_{3423}$, $t_{(0, 1, 1, 0, 1, 0)} s_{345423}$, $t_{(0, 1, 1, 1, 0, 0)} s_{34523423}$, $t_{(0, 1, 1, 1, 0, 0)} s_{452342}$, $t_{(0, 1, 1, 1, 0, 0)} s_{4534}$, $t_{(1, 0, 1, 0, 0, 1)} s_{341231}$, $t_{(1, 0, 1, 0, 1, 0)} s_{34541231}$, $t_{(1, 0, 1, 1, 0, 0)} s_{3451234231}$, $t_{(1, 0, 1, 1, 0, 0)} s_{45123421}$, $t_{(1, 0, 1, 1, 0, 0)} s_{4534}$, $t_{(1, 1, 0, 0, 0, 1)} s_{2312}$, $t_{(1, 1, 0, 0, 0, 1)} s_{23412312}$, $t_{(1, 1, 0, 0, 0, 1)} s_{234312}$, $t_{(1, 1, 0, 0, 1, 0)} s_{2312}$, $t_{(1, 1, 0, 0, 1, 0)} s_{2345412312}$, $t_{(1, 1, 0, 0, 1, 0)} s_{23454312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{234312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{234512342312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{23454312}$, $t_{(1, 1, 0, 1, 0, 0)} s_{45123421}$, $t_{(1, 1, 0, 1, 0, 0)} s_{452342}$, $t_{(1, 1, 1, 0, 0, 0)} s_{23412312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{234512342312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{2345412312}$, $t_{(1, 1, 1, 0, 0, 0)} s_{341231}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3423}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3451234231}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34523423}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34541231}$, $t_{(1, 1, 1, 0, 0, 0)} s_{345423}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 6$, $w \in X_{12}$} \\ \hline $X_{12}$ contains the 200 length-six elements not contained in $X_{13}$. 
\\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 6$, $w \in X_{13}$} \\ \hline $t_{(0, 0, 1, 1, 1, 0)} s_{543}$, $t_{(0, 1, 0, 1, 1, 0)} s_{54232}$, $t_{(0, 1, 1, 0, 1, 0)} s_{53432}$, $t_{(0, 1, 1, 1, 0, 0)} s_{432}$, $t_{(0, 1, 1, 1, 0, 0)} s_{45432}$, $t_{(1, 0, 0, 1, 1, 0)} s_{5412321}$, $t_{(1, 0, 1, 0, 1, 0)} s_{5343121}$, $t_{(1, 0, 1, 1, 0, 0)} s_{43121}$, $t_{(1, 0, 1, 1, 0, 0)} s_{4543121}$, $t_{(1, 1, 0, 0, 1, 0)} s_{5234321}$, $t_{(1, 1, 0, 1, 0, 0)} s_{42321}$, $t_{(1, 1, 0, 1, 0, 0)} s_{4542321}$, $t_{(1, 1, 1, 0, 0, 0)} s_{321}$, $t_{(1, 1, 1, 0, 0, 0)} s_{34321}$, $t_{(1, 1, 1, 0, 0, 0)} s_{3454321}$ \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Coefficient data for $GSp_4$, $\mu = (1,1,0,0)$} \label{table::gsp4-1100-coeff} \renewcommand{\arraystretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 0$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 1 & 1 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 3 & 2 & $\{1\}$ & 0 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w, J_{\Delta_1}, k_F} \subseteq A_{w, J_{\Delta_2}, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\{1\}$ & 0 & 0 & -1 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 1 & 1 & $\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom.
class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\emptyset$ & 0 & 0 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 0 & -3 \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Coefficient data for $GSp_6$, $\mu = (1,1,1,0,0,0)$} \label{table::gsp6-1100-coeff} \renewcommand{\arraystretch}{1} \small\normalsize \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 0$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z}$ & 1 & 2 & -2 \\ $\Delta_2$ & 4 & 3 & $\{1\}$ & 0 & 1 & 0 \\ $\Delta_3$ & 4 & 3 & $\{1\}$ & 0 & 1 & 0 \\ $\Delta_4$ & 4 & 3 & $\{1\}$ & 0 & 1 & 0 \\ $\Delta_5$ & 6 & 3 & $\{1\}$ & 0 & 0 & 2 \\ \hline \multicolumn{7}{|c|}{$A_{w, J_{\Delta_1}, k_F} \subset A_{w, J_{\Delta_2}, k_F} \subset A_{w, J_{\Delta_3}, k_F} = A_{w, J_{\Delta_4}, k_F} = A_{w, J_{\Delta_5}, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\{1\}$ & 0 & 1 & -1 \\ $\Delta_2$ & 3 & 3 & $\{1\}$ & 0 & 1 & -1 \\ $\Delta_3$ & 5 & 3 & $\{1\}$ & 0 & 0 & 1 \\ \hline \multicolumn{7}{|c|}{$A_{w, J_{\Delta_1}, k_F} = A_{w, J_{\Delta_2}, k_F} = A_{w, J_{\Delta_3}, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 2$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom.
class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z}$ & 1 & 1 & -2 \\ $\Delta_2$ & 4 & 3 & $\{1\}$ & 0 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{$A_{w, J_{\Delta_1}, k_F} \subset A_{w, J_{\Delta_2}, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3$, $w \in X_1$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 1 & 1 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 1 & -3 \\ $\Delta_2$ & 3 & 2 & $\mathbb{Z}$ & 1 & 0 & -1 \\ \hline \multicolumn{7}{|c|}{$A_{w, J_{\Delta_1}, k_F} \subset A_{w, J_{\Delta_2}, k_F}$} \\ \hline \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 3$, $w \in X_2$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 3 & 3 & $\{1\}$ & 0 & 0 & -1 \\ \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 4$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 2 & 2 & $\mathbb{Z}$ & 1 & 0 & -2 \\ \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 5$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. class of $S_{w,J_\Delta}$ & $A(\Delta)$ &$B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\Delta_1$ & 1 & 1 & $\mathbb{Z} \times \mathbb{Z}$ & 2 & 0 & -3 \\ \hline \multicolumn{7}{|c|}{\cellcolor[gray]{0.9} $\ell(w) = 6$} \\ \hline Path & $\ell(\Delta)$ & ${\rm{rank}}(J_\Delta)$ & Isom. 
class of $S_{w,J_\Delta}$ & $A(\Delta)$ & $B(w, \Delta)$ & $C(\Delta)$ \\ \hline $\emptyset$ & 0 & 0 & $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ & 3 & 0 & -4 \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{2} \small\normalsize \end{table} \begin{table}[hp] \caption{Subsets of ${\rm{Adm}}(\mu)$ for $GSp_6$, $\mu = (1,1,1,0,0,0)$} \label{table::gsp6-1100-adm} \renewcommand{\arraystretch}{1} \small\normalsize \begin{center} \begin{tabular}{|p{5in}|} \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 2$, $w \in X_1$} \\ \hline $t_{(0,1,1,0,0,1)} s_{323}$, $t_{(1,0,1,0,1,0)} s_{31231}$, $t_{(1,1,0,1,0,0)} s_{2312312}$, $t_{(1,1,1,0,0,0)} s_{2312312}$, $t_{(1,1,1,0,0,0)} s_{31231}$, $t_{(1,1,1,0,0,0)} s_{323}$ \\ \hline \hline \multicolumn{1}{|c|}{\cellcolor[gray]{0.9}$\ell(w) = 3$, $w \in X_2$} \\ \hline $t_{(1,0,0,1,1,0)} s_{123}$, $t_{(1,0,1,0,1,0)} s_{31232}$, $t_{(1,0,1,0,1,0)} s_{312}$, $t_{(1,1,0,1,0,0)} s_{12312}$, $t_{(1,1,0,1,0,0)} s_{23121}$, $t_{(1,1,0,1,0,0)} s_{2312321}$, $t_{(1,1,0,1,0,0)} s_{231}$, $t_{(1,1,1,0,0,0)} s_{3123121}$, $t_{(1,1,1,0,0,0)} s_{321}$, $t_{(1,1,1,0,0,0)} s_{323123121}$, $t_{(1,1,1,0,0,0)} s_{32321}$ \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{2} \small\normalsize \end{table} \end{document}
\begin{document} \begin{center} \Large \textbf{Exact dynamics of moments and correlation functions for fermionic Poisson-type GKSL equations} \large \textbf{Iu.A. Nosal}\footnote{Faculty of Physics, Lomonosov Moscow State University.}, \textbf{A.E. Teretenkov}\footnote{Department of Mathematical Methods for Quantum Technologies, Steklov Mathematical Institute of Russian Academy of Sciences, Moscow, Russia.\\ E-mail:[email protected]} \end{center} \footnotesize The Gorini-Kossakowski-Sudarshan-Lindblad equation of Poisson type for the density matrix is considered. The Poisson jumps are assumed to be unitary operators whose generators are quadratic in fermionic creation and annihilation operators. The explicit dynamics of the density matrix moments and of the Markovian multi-time ordered correlation functions is obtained.\\ \textit{AMS classification:} 81S22, 82C31, 81Q05, 81Q80 \textit{Keywords:} GKSL equation, irreversible quantum dynamics, Poisson stochastic process, exact solution, fermions \normalsize \section{Introduction} In this work we obtain both the fermionic analog of the results of \cite{Teretenkov20} and a new result on multi-time ordered Markovian correlation functions. The latter is also important due to the modern interest in non-Markovian effects, which manifest themselves only at the level of multi-time Markovian correlation functions rather than of master equations \cite{Gullo14}. As in \cite{Teretenkov20} we consider the equation for the density matrix \begin{equation}\label{eq:mainEq} \frac{d}{dt} \rho_t = \mathcal{L}(\rho_t), \qquad \mathcal{L}(\rho) = \sum_{k=1}^K \lambda_k (U_k \rho U_k^{\dagger} - \rho), \qquad \lambda_k >0, \end{equation} where $ U_k $ are unitary operators.
Let us also note that the generator $ \mathcal{L} $ has the \textit{Gorini-Kossakowski-Sudarshan-Lindblad} (GKSL) form \cite{gorini1976completely, lindblad1976generators} \begin{equation*} \mathcal{L}(\rho) = \sum_{k=1}^K \left(L_k \rho L_k^{\dagger} - \frac12 L_k^{\dagger} L_k\rho- \frac12 \rho L_k^{\dagger} L_k\right), \end{equation*} where $ L_k = \sqrt{\lambda_k} U_k $. In our case $ U_k $ are exponentials of quadratic forms in fermionic creation and annihilation operators in a finite-dimensional Hilbert space. Such generators naturally arise in the case of averaging with respect to classical Poisson processes with intensities $ \lambda_k $ and unitary jumps $ U_k $ \cite{Kummerer87}, so we call them Poisson-type generators. For the infinite-dimensional Hilbert space such generators were discussed in \cite{Holevo96, Holevo98}. Let us note that Poisson processes and the corresponding quantum Markov equations arise in physical applications \cite{accardi2002quantum, vacchini2009quantum, Basharov2014, TrubBash2018}. Unitary evolution with the fermionic quadratic generators mentioned above was discussed in \cite{Fried1953, Ber86, Dodonov83}. Its bosonic counterpart was discussed in \cite{Fried1953, Ber86, Manko79, Manko87, dodonov2002nonclassical, dodonov2003theory, Cheb11, Cheb12}. Now let us specify the exact mathematical formulation of our problem and main results. We use notation similar to \cite{Ter17, Ter19}. We consider the finite-dimensional Hilbert space $\mathbb{C}^{2^n}$. In such a space one could (see \cite[p. 407]{Takht11} for explicit formulae) define $ n $ pairs of fermionic creation and annihilation operators satisfying the \textit{canonical anticommutation relations} (CARs): $ \{\hat{c}_i^{\dagger}, \hat{c}_j \} = \delta_{ij}, \{\hat{c}_i, \hat{c}_j\} = 0 $. Let us define the $2n$-dimensional vector $\mathfrak{c} = (\hat{c}_1, \ldots, \hat{c}_n, \hat{c}_1^{\dagger}, \ldots, \hat{c}_n^{\dagger})^T$ of creation and annihilation operators.
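For concreteness, the CARs above can be realized numerically; the sketch below uses the Jordan-Wigner construction (one standard choice among several, given purely as an illustration — the paper itself only uses the abstract relations):

```python
import numpy as np

def jordan_wigner(n):
    """Annihilation operators c_1..c_n on C^(2^n) satisfying the CARs."""
    sm = np.array([[0, 1], [0, 0]], dtype=complex)  # single-mode annihilation
    Z = np.diag([1, -1]).astype(complex)            # parity ("string") factor
    I2 = np.eye(2, dtype=complex)
    ops = []
    for j in range(n):
        factors = [Z] * j + [sm] + [I2] * (n - j - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

def anticomm(a, b):
    return a @ b + b @ a

n = 3
c = jordan_wigner(n)
# check {c_i, c_j^†} = δ_ij I and {c_i, c_j} = 0
for i in range(n):
    for j in range(n):
        assert np.allclose(anticomm(c[i], c[j].conj().T), np.eye(2**n) * (i == j))
        assert np.allclose(anticomm(c[i], c[j]), 0)
```

The string of $Z$ factors is what turns the mode-wise $2\times 2$ operators, which would otherwise commute between different tensor slots, into genuinely anticommuting fermionic operators.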
We denote quadratic forms in such operators by $ \mathfrak{c}^T K \mathfrak{c} $, $ K \in \mathbb{C}^{2n \times 2n} $. Define the $2n \times 2n$ matrix \begin{equation*} E = \biggl( \begin{array}{cc} 0 & I_n \\ I_n & 0 \end{array} \biggr), \end{equation*} where $ I_n $ is the identity matrix from $ \mathbb{C}^{n \times n} $. Then the CARs take the form $ \{f^T\mathfrak{c}, \mathfrak{c}^Tg \} = f^TEg$, $ f, g \in \mathbb{C}^{2n}$. We also define the $\sim$-conju\-ga\-tion of matrices by the formula \begin{equation*} \tilde{K} = E \overline{K} E, \qquad K \in \mathbb{C}^{2n \times 2n}, \end{equation*} where the overline is an (elementwise) complex conjugation. \begin{theorem}\label{th:main} Let the density matrix $ \rho_t $ satisfy Eq.~\eqref{eq:mainEq}, where the unitary operators $ U_k$, $ k=1, \ldots, K $, are defined by the formulae $ U_k = e^{- \frac{i}{2} \mathfrak{c}^T H_k \mathfrak{c} } $, $ H_k = -H_k^T = -\tilde{H}_k \in \mathbb{C}^{2n \times 2 n} $. Then: 1) The dynamics of the moments has the form \begin{equation}\label{eq:momDynam} \langle \otimes_{m=1}^M \mathfrak{c} \rangle_t = e^{\sum_{k=1}^K\lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) t} \langle \otimes_{m=1}^M \mathfrak{c} \rangle_0, \qquad O_k = e^{-i E H_k}, \end{equation} where the average is defined by the formula $ \langle \otimes_{m=1}^M \mathfrak{c} \rangle_t \equiv \mathrm{tr} \; (\rho_t \otimes_{m=1}^M \mathfrak{c}) $ and $ I_{(2n)^M} $ is the identity matrix in $ \mathbb{C}^{2n} \otimes \cdots \otimes \mathbb{C}^{2n} = \mathbb{C}^{(2n)^M} $.
2) If we denote $ L_{M, m} = \sum_{k=1}^K\lambda_k (\otimes_{r=1}^m O_k - I_{(2n)^m}) \otimes I_{(2n)^{M-m}}$ for $ m =1, \ldots, M $, then the dynamics of the Markovian multi-time ordered correlation functions has the form \begin{equation}\label{eq:corrEvol} \langle \mathfrak{c}(t_M) \otimes \ldots \otimes \mathfrak{c}(t_1) \rangle = e^{L_{M,1} (t_M - t_{M-1})} \ldots e^{L_{M,M} t_1} \langle \otimes_{m=1}^M \mathfrak{c} \rangle_0, \end{equation} where $ t_M \geqslant \ldots \geqslant t_1 \geqslant 0 $ and the tensor $ \langle \mathfrak{c}(t_M) \otimes \ldots \otimes \mathfrak{c}(t_1) \rangle $ is defined by its elements \begin{equation}\label{eq:corrDef} \langle \mathfrak{c}_{j_M}(t_M) \ldots \mathfrak{c}_{j_1}(t_1) \rangle \equiv \tr( \mathfrak{c}_{j_M} e^{\mathcal{L}(t_M - t_{M-1})} \ldots \mathfrak{c}_{j_2} e^{\mathcal{L} (t_2 - t_1)}\mathfrak{c}_{j_1} e^{\mathcal{L} t_1}\rho_0), \end{equation} where $ j_m =1, \ldots, 2n, $ for all $ m =1, \ldots, M $. \end{theorem} In definition \eqref{eq:corrDef} of the Markovian multi-time ordered correlation functions we follow \cite{Gullo14}. In particular, for the first and second moments we have \begin{equation*} \langle\mathfrak{c}\rangle_t = e^{\sum_{k=1}^K \lambda_k(O_k - I_{2n}) t} \langle\mathfrak{c}\rangle_0, \qquad \langle\mathfrak{c} \otimes \mathfrak{c}\rangle_t = e^{\sum_{k=1}^K \lambda_k (O_k \otimes O_k - I_{4 n^2}) t} \langle \mathfrak{c} \otimes \mathfrak{c} \rangle_0 \end{equation*} and for two-time correlation functions we have \begin{equation*} \langle\mathfrak{c}(t_2) \mathfrak{c}(t_1) \rangle = e^{L_{2,1} (t_2 - t_1)} e^{L_{2,2} t_1} \langle \mathfrak{c} \otimes \mathfrak{c} \rangle_0. \end{equation*} Note that $ \langle\mathfrak{c}(t) \mathfrak{c}(t) \rangle = e^{L_{2,2} t} \langle \mathfrak{c} \otimes \mathfrak{c} \rangle_0 = e^{\sum_{k=1}^K \lambda_k (O_k \otimes O_k - I_{4 n^2}) t} \langle \mathfrak{c} \otimes \mathfrak{c} \rangle_0 = \langle\mathfrak{c} \otimes \mathfrak{c}\rangle_t $.
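The key mechanism behind part 1) can be checked numerically for a single mode ($n=1$). The sketch below (an illustration, not part of the proof) takes a real $h$, for which $H = -H^T = -\tilde H$ holds, builds $U = e^{-\frac{i}{2}\mathfrak{c}^T H \mathfrak{c}}$, and verifies the identity $U^{\dagger}\mathfrak{c}U = O\mathfrak{c}$ with $O = e^{-iEH}$ that drives the moment dynamics:

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (sufficient for the
    diagonalizable matrices used below)."""
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# single fermionic mode (n = 1) on C^2, basis (|0>, |1>)
c = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation operator
cvec = [c, c.conj().T]                          # the vector (c, c†)
E = np.array([[0, 1], [1, 0]], dtype=complex)

h = 0.7                                         # h real gives H = -H^T = -H~
H = np.array([[0, h], [-h, 0]], dtype=complex)

# quadratic form c^T H c = Σ_ij H_ij c_i c_j and the jump U = exp(-i/2 c^T H c)
Q = sum(H[i, j] * cvec[i] @ cvec[j] for i in range(2) for j in range(2))
U = expm(-0.5j * Q)
O = expm(-1j * E @ H)

# identity behind the moment dynamics: U† c_i U = Σ_j O_ij c_j
for i in range(2):
    lhs = U.conj().T @ cvec[i] @ U
    rhs = sum(O[i, j] * cvec[j] for j in range(2))
    assert np.allclose(lhs, rhs)
```

Iterating this identity under the trace reproduces \eqref{eq:momDynam}: each Poisson jump acts on $\langle\otimes_{m=1}^M\mathfrak{c}\rangle$ through the matrix $\otimes_{m=1}^M O_k$.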
\section{Dynamics in the Heisenberg picture} In this section we prove Th.~\ref{th:main}. The main idea is to pass to the Heisenberg picture. To do so, let us calculate the conjugate generator $ \mathcal{L}^* $ defined by the relation \begin{equation}\label{eq:defConjGen} \mathrm{tr} \, \hat{X} \mathcal{L}(\rho) = \mathrm{tr} \, \mathcal{L}^* (\hat{X})\rho, \end{equation} for arbitrary matrices $ \rho, \hat{X} \in \mathbb{C}^{2^n \times 2^n} $. We need Lemma 1 from \cite{Ter17} in the case when $ A = i H $, $ B =0 $, which takes the following form. \begin{lemma} \label{lem:orthTransform} Let $ H = -H^T \in \mathbb{C}^{2 n \times 2n} $, then $ e^{ \frac{i}{2} \mathfrak{c}^T H \mathfrak{c} } \mathfrak{c} e^{- \frac{i}{2} \mathfrak{c}^T H \mathfrak{c} } = O \mathfrak{c}$, where $O = e^{-i E H} $. \end{lemma} Let us note that in accordance with Lemma 4 from \cite{Ter17}, if $ \tilde{H} = - H$, then the operator $ \frac12 \mathfrak{c}^T H \mathfrak{c} $ is self-adjoint. Thus, the operators $ U_k $ defined in Th.~\ref{th:main} are indeed unitary. \begin{lemma} \label{lem:conjGen} Let $\mathcal{L} $ be defined by \eqref{eq:mainEq} with $ U_k = e^{- \frac{i}{2} \mathfrak{c}^T H_k \mathfrak{c} } $, $ H_k = -H_k^T = -\tilde{H}_k \in \mathbb{C}^{2n \times 2 n} $, and let $ \mathcal{L}^*$ be defined by formula \eqref{eq:defConjGen}, then \begin{equation*} \mathcal{L}^*(\otimes_{m=1}^M \mathfrak{c}) = \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) \otimes_{m=1}^M \mathfrak{c} , \qquad O_k = e^{-i E H_k}. \end{equation*} \end{lemma} \begin{demo} By the cyclic property of the trace we have $ \tr \hat{X} (U_k \rho U_k^{\dagger} - \rho) = \tr (U_k^{\dagger} \hat{X} U_k - \hat{X}) \rho $. Hence, by Eq.~\eqref{eq:defConjGen} we obtain \begin{equation*} \mathcal{L}^* (\hat{X}) = \sum_{k=1}^K \lambda_k (U_k^{\dagger} \hat{X} U_k - \hat{X}).
\end{equation*} Taking the elements of the tensor $ \otimes_{m=1}^M \mathfrak{c} $ as $ \hat{X} $ we obtain \begin{equation*} \mathcal{L}^*(\otimes_{m=1}^M \mathfrak{c} ) = \sum_{k=1}^K \lambda_k (U_k^{\dagger} (\otimes_{m=1}^M \mathfrak{c} ) U_k - \otimes_{m=1}^M \mathfrak{c} ) = \sum_{k=1}^K \lambda_k ( \otimes_{m=1}^M (U_k^{\dagger} \mathfrak{c} U_k) - \otimes_{m=1}^M \mathfrak{c}). \end{equation*} By lemma \ref{lem:orthTransform}, we have $ U_k^{\dagger} \mathfrak{c} U_k= e^{\frac{i}{2} \mathfrak{c}^T H_k \mathfrak{c} } \mathfrak{c} e^{- \frac{i}{2} \mathfrak{c}^T H_k \mathfrak{c} } = e^{-i E H_k} \mathfrak{c} = O_k \mathfrak{c} $. Thus, we obtain \begin{equation*} \mathcal{L}^*(\otimes_{m=1}^M \mathfrak{c} ) = \sum_{k=1}^K \lambda_k ( \otimes_{m=1}^M ( O_k \mathfrak{c} ) - \otimes_{m=1}^M \mathfrak{c}) = \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) \otimes_{m=1}^M \mathfrak{c}. \quad \quad \qed \end{equation*} \end{demo} \noindent\textbf{Proof of Th.~\ref{th:main}}. 1) Taking into account lemma \ref{lem:conjGen} we obtain the Heisenberg evolution of the operators $ \otimes_{m=1}^M \mathfrak{c} $ in the following explicit form. 
\begin{equation}\label{eq:HeisEvol} e^{\mathcal{L}^* t}(\otimes_{m=1}^M \mathfrak{c}) = e^{ \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) t} \otimes_{m=1}^M \mathfrak{c} \end{equation} Then taking into account the definition of the average from the statement of Th.~\ref{th:main} we have \begin{align*} \langle \otimes_{m=1}^M \mathfrak{c} \rangle_t &\equiv \mathrm{tr} \; (\otimes_{m=1}^M \mathfrak{c} \rho_t) = \mathrm{tr} \; ( \otimes_{m=1}^M \mathfrak{c} e^{\mathcal{L} t}(\rho_0)) = \mathrm{tr} \; (e^{\mathcal{L}^* t}(\otimes_{m=1}^M \mathfrak{c})\rho_0) = \\ &= e^{ \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) t} \mathrm{tr} \; (\otimes_{m=1}^M \mathfrak{c} \rho_0) = e^{ \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) t} \langle \otimes_{m=1}^M \mathfrak{c} \rangle_0 \end{align*} Thus, we obtain \eqref{eq:momDynam}. 2) As for the moments let us turn to Heisenberg evolution operators in definition \eqref{eq:corrDef} \begin{equation*} \tr( \mathfrak{c}_{j_M} e^{\mathcal{L}(t_M - t_{M-1})} \ldots \mathfrak{c}_{j_{2}} e^{\mathcal{L} (t_2 - t_1)}\mathfrak{c}_{j_1} e^{\mathcal{L} t_1}\rho_0) = \tr \rho_0 e^{\mathcal{L}^* t_1}((e^{\mathcal{L}^* (t_2 - t_1)} ((\ldots e^{\mathcal{L}^* (t_M - t_{M-1})} \mathfrak{c}_{j_M}\ldots)\mathfrak{c}_{j_2} ))\mathfrak{c}_{j_1}). 
\end{equation*} By formula \eqref{eq:HeisEvol} taking into account the definition of $ L_{1,1} $ we have \begin{equation*} e^{\mathcal{L}^* (t_M - t_{M-1})} \mathfrak{c}_{j_M} = (e^{L_{1,1} (t_M - t_{M-1})} \mathfrak{c})_{j_M}, \end{equation*} then \begin{align*} e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}((e^{\mathcal{L}^* (t_M - t_{M-1})} \mathfrak{c}_{j_M}) \mathfrak{c}_{j_{M-1}}) &= e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}( (e^{L_{1,1} (t_M - t_{M-1})} \mathfrak{c})_{j_M} \mathfrak{c}_{j_{M-1}}) =\\ = e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}( (e^{L_{1,1} (t_M - t_{M-1})} \mathfrak{c}) \otimes \mathfrak{c})_{j_M j_{M-1}} &= e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}( e^{L_{2,1} (t_M - t_{M-1})} \mathfrak{c} \otimes \mathfrak{c})_{j_M j_{M-1}} =\\ = ( e^{L_{2,1} (t_M - t_{M-1})} e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}(\mathfrak{c} \otimes \mathfrak{c}))_{j_M j_{M-1}} &= (e^{L_{2,1} (t_M - t_{M-1})} e^{L_{2,2} (t_{M-1} - t_{M-2})} \mathfrak{c} \otimes \mathfrak{c})_{j_M j_{M-1}} \end{align*} Analogously we have \begin{equation*} e^{\mathcal{L}^* t_1}((e^{\mathcal{L}^* (t_2 - t_1)} ((\ldots e^{\mathcal{L}^* (t_M - t_{M-1})} \mathfrak{c}_{j_M}\ldots)\mathfrak{c}_{j_2} ))\mathfrak{c}_{j_1}) = e^{L_{M,1} (t_M - t_{M-1})} \ldots e^{L_{M,M} t_1} \otimes_{m=1}^M \mathfrak{c} \end{equation*} By averaging with respect to the initial state $ \rho_0 $ we obtain \eqref{eq:corrEvol}. \qed \section{Conclusions} In this work we have considered evolution for the density matrix in accordance with GKSL equation \eqref{eq:mainEq}. In part 1) of Th.~\ref{th:main} we have obtained the fermionic analog to Th.~1 from \cite{Teretenkov20}. We have also obtained multi-time ordered Markovian correlation functions, which is a generalization of single-time formula \eqref{eq:momDynam} to the multi-time case. 
This is important due to the modern discussion of quantum Markovianity, which necessarily (according to \cite{Gullo14}) leads to the very special form \eqref{eq:corrDef} for multi-time ordered correlation functions in addition to the GKSL form of the master equations. The explicit expression for these correlation functions in our case is presented in part 2) of Th.~\ref{th:main}. The study of Markovian and non-Markovian effects is important now due to the rising interest in open quantum systems, for which the range of approaches is becoming wider and wider \cite{Trushechkin19, Trushechkin19a, Luchnikov19, Teretenkov19a, Teretenkov19b}. A possible direction of future development consists in the calculation of more general multi-time observables, e.g. unordered correlation functions in 2D echo-spectroscopy \cite{Plenio13}. \end{document}
\begin{document} \title{Infinities within Finitely Supported Structures} \begin{abstract} The theory of finitely supported algebraic structures is related to Pitts' theory of nominal sets (by equipping finitely supported sets with finitely supported internal algebraic laws). It represents a reformulation of Zermelo-Fraenkel set theory obtained by requiring every set-theoretical construction to be finitely supported according to a certain action of a group of permutations of some basic elements named atoms. Its main purpose is to allow us to characterize infinite algebraic structures, defined involving atoms, only by analyzing their finite supports. The first goal of this paper is to define and study different kinds of infinity and the notion of `cardinality' in the framework of finitely supported structures. We present several properties of infinite cardinalities. Some of these properties are extended from non-atomic Zermelo-Fraenkel set theory into the world of atomic objects with finite support, while other properties are specific to finitely supported structures. We also compare alternative definitions of `infinite finitely supported set', and we finally provide a characterization of finitely supported countable sets. \end{abstract} \section{Introduction} The theory of finitely supported algebraic structures, which is known under the name of `nominal sets' (when dealing with computer science applications) or `Finitely Supported Mathematics' (in some pure set-theoretical papers related to the foundations of mathematics), represents an alternative framework for working with infinite structures hierarchically constructed from some basic elements (called atoms) by dealing only with a finite number of entities that form their supports. The theory of nominal sets is presented in a categorical manner as a Zermelo-Fraenkel (ZF) alternative to the Fraenkel and Mostowski 1930s permutation models of set theory with atoms \cite{pitts-2}.
A nominal set is defined as a usual~ZF set endowed with a group action of the group of (finitary) permutations of a certain fixed countable ZF set~$A$ (also called the set of atoms by analogy with the Fraenkel and Mostowski framework) formed by elements whose internal structure is not taken into consideration (i.e. by elements that can be checked only for equality), satisfying a finite support requirement. This requirement states that for any element in a nominal set there should exist a finite set of atoms such that any permutation fixing this set of atoms pointwise also leaves the element invariant under the related group action. Nominal sets represent a categorical mathematical theory of names studying scope, binding, freshness and renaming in formal languages based upon symmetry. Inductively defined finitely supported sets (that are finitely supported elements in the powerset of a nominal set) involving name-abstraction together with Cartesian product and disjoint union can encode syntax modulo renaming of bound variables. In this way, the standard theory of algebraic data types can be extended to include signatures involving binding operators. In particular, there is an associated notion of structural recursion for defining syntax-manipulating functions and a notion of proof by structural induction. Various generalizations of nominal sets were used in order to study automata, languages or Turing machines that operate over infinite alphabets; for this purpose a relaxed notion of finiteness, called `orbit finiteness', was defined; it means `having a finite number of orbits under a certain group action' \cite{boj}. Finitely Supported Mathematics (FSM) is an alternative name for the theory of nominal algebraic structures, used in theoretical papers focused on the foundations of set theory (rather than on applications in computer science).
In order to describe FSM as a theory of finitely supported algebraic structures (that is, finitely supported sets \emph{together with finitely supported internal algebraic laws}), we use nominal sets (without the requirement that the set $A$ of atoms is countable), which from now on will be called invariant sets, motivated by Tarski's approach regarding logicality (i.e. a logical notion is defined by Tarski as one that is invariant under the permutations of the universe of discourse). The cardinality of the set of atoms \emph{cannot} be internally compared with any other ZF cardinality, and so we just say that atoms form an infinite set without any specification regarding its cardinality. In FSM we actually study the finitely supported subsets of invariant sets together with finitely supported relations (order relations, functions, algebraic laws, etc.), and so FSM becomes a theory of atomic algebraic structures constructed/defined according to the finite support requirement. The requirement of being finitely supported under a canonical action of the group of permutations of atoms (constructed under the rules in Proposition \ref{p1}) is actually an axiom adjoined to ZF, and so non-finitely supported structures are not allowed (they do not exist) in FSM. FSM contains the family of `non-atomic' (ordinary) ZF sets (which are proved to be trivial FSM sets) and the family of `atomic' sets with finite supports (hierarchically constructed from the empty set and the fixed ZF set $A$). The main question now is whether a classical ZF result (obtained in the ZF framework for non-atomic sets) can be adequately reformulated by replacing `non-atomic element/set' with `atomic finitely supported element/set' (according to the canonical actions of the group of one-to-one transformations of $A$ onto itself) in order to be valid also for atomic sets with finite supports.
The (non-atomic) ZF results cannot be directly translated into the framework of atomic finitely supported sets, unless we are able to reprove their new formulations internally in FSM, i.e. by involving only \emph{finitely supported structures} even in the intermediate steps of the proof. This is because the family of finitely supported sets is not closed under subset constructions, and we cannot use something outside FSM in order to prove something in FSM. The meta-theoretical techniques for the translation of a result from non-atomic structures to atomic structures are fully described in \cite{book} (or in \cite{pitts-2}, with the mention that, since we work on the foundations of mathematics, we use a slightly different terminology for the same concept). They are based on a refinement of the finite support principle from \cite{pitts-2} called the ``$S$-finite supports principle", claiming that for any finite set $S$ of atoms, anything that is definable in higher-order logic from $S$-supported structures using $S$-supported constructions is also $S$-supported. The formal involvement of the $S$-finite supports principle implies a constructive method for defining the support of a structure by employing the supports of the sub-structures of a related structure. In this paper we introduce the notion of `cardinality' of a finitely supported set, and we prove several properties of this concept. Some properties are naturally extended from the non-atomic ZF framework into the world of atomic structures. In this sense we prove that the Cantor-Schr{\"o}der-Bernstein theorem for cardinalities is still valid in FSM. Several other cardinality properties are preserved from ZF. However, although the Cantor-Schr{\"o}der-Bernstein theorem can be successfully translated into FSM, its ZF dual is no longer valid in FSM. Other specific FSM properties of cardinalities (that do not have related~ZF correspondents) are also emphasized.
We introduce various definitions of infinity and compare them, providing relevant examples of atomic sets verifying the conditions of each such definition. Finally, we introduce and study the concept of countability in FSM. \section{Finitely Supported Sets} \label{FMset} A ZF finite set is a set for which there is a bijection with a finite ordinal; a ZF infinite set is a set that is not finite. Adjoin to ZF a special infinite set $A$ (called `the set of atoms'; unlike classical set theory with atoms, we do not need to modify the axiom of extensionality). Atoms are entities whose internal structure is considered to be irrelevant and which are regarded as basic building blocks for higher-order constructions. A \emph{transposition} is a function $(a\, b):A\to A$ given by~$(a\, b)(a)=b$, $(a\, b)(b)=a$ and $(a\, b)(n)=n$ for $n\neq a,b$. A \emph{(finitary) permutation} of $A$ in FSM is a one-to-one transformation of $A$ onto itself (a bijection of $A$) generated by composing finitely many transpositions. We denote by $S_{A}$ the set of all finitary permutations of $A$. According to Proposition 2.6 from \cite{book}, a function $f:A \to A$ is a bijection on~$A$ in FSM if and only if it leaves unchanged all but finitely many elements of~$A$. Thus, in FSM a function is a one-to-one transformation of $A$ onto itself if and only if it is a (finitary) permutation of $A$; the notions `permutation (bijection) of $A$' and `finitary permutation of $A$' coincide in FSM. \begin{definition}\label{2.4} Let $X$ be a ZF set. \begin{enumerate} \item An \emph{$S_{A}$-action} on $X$ is a function $\cdot:S_{A}\times X\rightarrow X$ having the properties that $Id\cdot x=x$ and $\pi\cdot(\pi'\cdot x)=(\pi\circ\pi')\cdot x$ for all $\pi,\pi'\in S_{A}$ and $x\in X$, where $Id$ is the identity mapping on $A$. 
An \emph{$S_{A}$-set} is a pair $(X,\cdot)$ where $X$ is a ZF set, and $\cdot:S_{A}\times X\to X$ is an $S_{A}$-action on $X$. \item Let $(X,\cdot)$ be an $S_{A}$-set. We say that \emph{$S\subset A$ supports $x$} whenever for each $\pi\in Fix(S)$ we have $\pi\cdot x=x$, where $Fix(S)=\{\pi\,|\,\pi(a)=a,\forall a\in S\}$. The least finite set supporting $x$ (which exists according to Proposition \ref{p11}) is called \emph{the support of $x$} and is denoted by $supp(x)$. An element supported by the empty set is called \emph{equivariant}; this means that $x \in X$ is equivariant if and only if $\pi \cdot x=x$, $\forall \pi \in S_{A}$. \item Let $(X,\cdot)$ be an $S_{A}$-set. We say that $X$ is an \emph{invariant set} if for each $x\in X$ there exists a finite set \emph{$S_{x}\subset A$ }which supports $x$. \end{enumerate} \end{definition} \begin{proposition}\label{p11} \cite{book} Let $X$ be an $S_{A}$-set and let $x\in X$. If there exists a finite set supporting $x$ (particularly, if $X$ is an invariant set), then there exists a least finite set supporting $x$, which is constructed as the intersection of all finite sets supporting $x$. \end{proposition} \begin{proposition}\label{2.15} \cite{book} Let $(X,\cdot)$ be an $S_{A}$-set, and~$\pi\in S_{A}$. If $x\in X$ is finitely supported, then $\pi\cdot x$ is finitely supported and $supp(\pi\cdot x)=\pi(supp(x))$. \end{proposition} \begin{example}\label{2.7} \ \ \begin{enumerate} \item The set $A$ of atoms is an $S_{A}$-set with the $S_{A}$-action $\cdot:S_{A}\times A\rightarrow A$ defined by $\pi\cdot a:=\pi(a)$ for all $\pi\in S_{A}$ and $a\in A$. $(A,\cdot)$ is an invariant set because for each $a\in A$ we have that $\{a\}$ supports $a$. Furthermore, $supp(a)=\{a\}$ for each $a\in A$. \item The set $S_{A}$ is an $S_{A}$-set with the $S_{A}$-action $\cdot:S_{A}\times S_{A}\rightarrow S_{A}$ defined by $\pi\cdot\sigma:=\pi\circ\sigma\circ\pi^{-1}$ for all $\pi,\sigma\in S_{A}$. 
$(S_{A},\cdot)$ is an invariant set because for each $\sigma\in S_{A}$ we have that the finite set $\{a\in A\,|\,\sigma(a)\neq a\}$ supports~$\sigma$. Furthermore, $supp(\sigma)=\{a\in A\,|\,\sigma(a)\neq a\}$ for each $\sigma\in S_{A}$. \item Any ordinary (non-atomic) ZF-set $X$ (such as $\mathbb{N},\mathbb{Z},\mathbb{Q}$ or $\mathbb{R}$, for example) is an invariant set with the single possible $S_{A}$-action $\cdot:S_{A}\times X\rightarrow X$ defined by $\pi\cdot x:=x$ for all $\pi \in S_{A}$ and $x\in X$. \end{enumerate} \end{example} \begin{proposition} \label{p1} Let $(X,\cdot)$ and $(Y,\diamond)$ be $S_{A}$-sets. \begin{enumerate} \item The Cartesian product $X\times Y$ is also an $S_{A}$-set with the $S_{A}$-action $\otimes:S_{A}\times(X\times Y)\rightarrow(X\times Y)$ defined by $\pi\otimes(x,y)=(\pi\cdot x,\pi\diamond y)$ for all $\pi\in S_{A}$ and all $x\in X$, $y\in Y$. If $(X,\cdot)$ and $(Y,\diamond)$ are invariant sets, then $(X\times Y,\otimes)$ is also an invariant set. \item The powerset $\wp(X)=\{Z\,|\, Z\subseteq X\}$ is also an $S_{A}$-set with the $S_{A}$-action $\star: S_{A}\times\wp(X) \rightarrow \wp(X)$ defined by $\pi\star Z:=\{\pi\cdot z\,|\, z\in Z\}$ for all $\pi \in S_{A}$ and all $Z \subseteq X$. For each invariant set $(X,\cdot)$, we denote by $\wp_{fs}(X)$ the set formed from those subsets of~$X$ which are finitely supported according to the action $\star$. $(\wp_{fs}(X),\star|_{\wp_{fs}(X)})$ is an invariant set, where $\star|_{\wp_{fs}(X)}$ represents the action $\star$ restricted to $\wp_{fs}(X)$. \item The finite powerset $\wp_{fin}(X)=\{Y \subseteq X\,|\, Y \text{ finite}\}$ and the cofinite powerset $\wp_{cofin}(X)=\{Y \subseteq X\,|\, X\setminus Y \text{ finite}\}$ of $X$ are $S_{A}$-sets with the $S_{A}$-action $\star$ defined as in item 2. If $X$ is an invariant set, then both $\wp_{fin}(X)$ and $\wp_{cofin}(X)$ are invariant sets. \item Let $(X,\cdot)$ and $(Y,\diamond)$ be $S_{A}$-sets. 
We define the disjoint union of $X$ and $Y$ by $X+Y=\{(0,x)\,|\, x\in X\}\cup\{(1,y)\,|\, y\in Y\}$. $X+Y$ is an $S_{A}$-set with the $S_{A}$-action $\star:S_{A}\times(X+Y)\rightarrow(X+Y)$ defined by $\pi\star z=(0,\pi\cdot x)$ if $z=(0,x)$ and $\pi\star z=(1,\pi\diamond y)$ if $z=(1,y)$. If $(X,\cdot)$ and $(Y,\diamond)$ are invariant sets, then $(X+Y,\star)$ is also an invariant set: each $z\in X+Y$ is either of the form $(0,x)$ and supported by the finite set supporting $x$ in~$X$, or of the form $(1,y)$ and supported by the finite set supporting $y$ in~$Y$. \end{enumerate} \end{proposition} \begin{definition}\label{2.14} \begin{enumerate} \item Let $(X,\cdot)$ be an $S_{A}$-set. A subset~$Z$ of $X$ is called \emph{finitely supported} if and only if $Z\in\wp_{fs}(X)$ with the notations from Proposition \ref{p1}. A subset $Z$ of $X$ is \emph{uniformly supported} if all the elements of $Z$ are supported by the same set $S$ (and so $Z$ is itself supported by $S$ as an element of $\wp_{fs}(X)$). Generally, an FSM set is a finitely supported subset (possibly equivariant) of an invariant set. \item Let $(X,\cdot)$ be a finitely supported subset of an $S_{A}$-set $(Y, \cdot)$. A subset~$Z$ of $Y$ is called a \emph{finitely supported subset of $X$} (and we denote this by $Z \in \wp_{fs}(X)$) if and only if $Z\in\wp_{fs}(Y)$ and $Z \subseteq X$. Similarly, we say that a uniformly supported subset of $Y$ contained in $X$ is a \emph{uniformly supported subset of $X$}. \end{enumerate} \end{definition} From Definition \ref{2.4}, a subset~$Z$ of an invariant set $(X, \cdot)$ is finitely supported by a set $S \subseteq A$ if and only if $\pi \star Z \subseteq Z$ for all $\pi \in Fix(S)$. Indeed, since every (finitary) permutation of atoms has finite order, $\pi \star Z \subseteq Z$ already implies $\pi \star Z = Z$. \begin{proposition} \label{4.4-9} \begin{enumerate} \item Let $X$ be a finite subset of an invariant set $(U, \cdot)$. Then $X$ is finitely supported and $supp(X)=\cup\{supp(x)\,|\, x\in X\}$. 
\item Let $X$ be a uniformly supported subset of an invariant set $(U, \cdot)$. Then $X$ is finitely supported and $supp(X)=\cup\{supp(x)\,|\, x\in X\}$. \end{enumerate} \end{proposition} \begin{proof} 1. Let $X=\left\{ x_{1},\ldots, x_{k}\right\}$, and $S=supp(x_{1})\cup\ldots\cup supp(x_{k})$. First, $S$ supports $X$. Indeed, let us consider $\pi\in Fix(S)$. We have that $\pi\in Fix(supp(x_{i}))$ for each $i\in\{1,\ldots ,k\}$. Therefore, $\pi\cdot x_{i}=x_{i}$ for each $i\in\{1,\ldots ,k\}$ because $supp(x_{i})$ supports $x_{i}$ for each $i\in\{1,\ldots ,k\}$, and so $\pi \star X=X$. Hence $S$ supports $X$, which gives $supp(X) \subseteq S$. It remains to prove that $S \subseteq supp(X)$. Consider $a \in S$. This means there exists $j\in\{1,\ldots ,k\}$ such that $a \in supp(x_{j})$. Let $b$ be an atom such that $b \notin supp(X)$ and $b \notin supp(x_{i})$, $\forall i\in\{1,\ldots ,k\}$. Such an atom exists because $A$ is infinite, while $supp(X)$ and $supp(x_{i})$, $ i\in\{1,\ldots ,k\}$, are all finite. We prove by contradiction that $(b\; a) \cdot x_{j} \notin X$. Indeed, suppose that $(b\; a) \cdot x_{j} \in X$. In this case there is $y \in X$ with $(b\; a) \cdot x_{j}=y$. Since $a \in supp(x_{j})$, we have $b \in (b\; a)(supp(x_{j}))$. However, according to Proposition \ref{2.15}, we have $supp(y)=(b\; a)(supp(x_{j}))$. We obtain that $b \in supp(y)$ for some $y \in X$, which is a contradiction with the choice of $b$. Therefore, $(b\; a) \star X \neq X$, where~$\star$ is the standard $S_{A}$-action on $\wp(U)$ defined in Proposition \ref{p1}(2). Since $b \notin supp(X)$, we prove by contradiction that $a \in supp(X)$. Indeed, suppose that $a \notin supp(X)$. It follows that the transposition $(b\; a)$ fixes each element from $supp(X)$, i.e. $(b\; a) \in Fix(supp(X))$. Since $supp(X)$ supports $X$, by Definition \ref{2.4}, it follows that $(b\; a) \star X=X$, which is a contradiction. Thus, $a \in supp(X)$, and so $S \subseteq supp(X)$. 2. 
Since $X$ is uniformly supported, there exists a finite subset of atoms $T$ such that $T$ supports every $x \in X$, i.e. $supp(x) \subseteq T$ for all $x \in X$. Thus, $\cup\{supp(x)\,|\, x\in X\} \subseteq T$. Clearly, $supp(X) \subseteq \cup\{supp(x)\,|\, x\in X\}$. Conversely, let $a \in \cup\{supp(x)\,|\, x\in X\}$. Thus, there exists $x_{0} \in X$ such that $a \in supp(x_{0})$. Let $b$ be an atom such that $b \notin supp(X)$ and $b \notin T$. Such an atom exists because $A$ is infinite, while $supp(X)$ and $T$ are both finite. We prove by contradiction that $(b\; a) \cdot x_{0} \notin X$. Indeed, suppose that $(b\; a) \cdot x_{0}=y \in X$. Since $a \in supp(x_{0})$, we have $b =(b\;a)(a) \in (b\; a)(supp(x_{0}))=supp((b\; a) \cdot x_{0})=supp(y)$. Since $supp(y) \subseteq T$, we get $b \in T$, a contradiction. Therefore, $(b\; a) \star X \neq X$. Since $b \notin supp(X)$, we have that $a \in supp(X)$ as in the above item. \end{proof} \begin{corollary}Let $X$ be a uniformly supported subset of an invariant set. Then $X$ is uniformly supported by $supp(X)$. \end{corollary} \begin{proof}Since $supp(X)=\cup\{supp(x)\,|\, x\in X\}$, we have $supp(x) \subseteq supp(X)$ for all $x \in X$ which means $supp(X)$ supports every $x \in X$. \end{proof} \begin{proposition}\label{p111} We have $\wp_{fs}(A)=\wp_{fin}(A) \cup \wp_{cofin}(A)$. \end{proposition} \begin{proof} We know that $B$ is finitely supported with $supp(B)=B$ whenever $B \subset A$ and $B$ is finite. If $C \subseteq A$ and~$C$ is cofinite, then $C$ is finitely supported by $A \setminus C$ with $supp(C)=A \setminus C$. However, if $D \subsetneq A$ is neither finite nor cofinite, then $D$ is not finitely supported. Indeed, assume by contradiction that there exists a finite set of atoms~$S$ supporting $D$. Since $S$ is finite and both $D$ and its complementary $C_{D}$ are infinite, we can take $a \in D \setminus S$ and $b \in C_{D} \setminus S$. 
Then the transposition $(a\,b)$ fixes $S$ pointwise, but $(a\,b) \star D \neq D$ because $(a\,b)(a)=b \notin D$; this contradicts the assertion that $S$ supports $D$. Therefore, $\wp_{fs}(A)=\wp_{fin}(A) \cup \wp_{cofin}(A)$. \end{proof} \begin{definition}\label{2.10-1} Let $X$ and $Y$ be invariant sets. \begin{enumerate} \item A function $f:X\rightarrow Y$ is \emph{finitely supported} if $f\in\wp_{fs}(X\times Y)$. The set of all finitely supported functions from $X$ to $Y$ is denoted by $Y^{X}_{fs}$. \item Let $Z$ be a finitely supported subset of $X$ and $T$ a finitely supported subset of $Y$. A function $f:Z\rightarrow T$ is \emph{finitely supported} if $f\in\wp_{fs}(X\times Y)$. The set of all finitely supported functions from $Z$ to $T$ is denoted by~$T^{Z}_{fs}$. \end{enumerate} \end{definition} \begin{proposition}\label{2.18'} \cite{book} Let $(X,\cdot)$ and $(Y,\diamond)$ be two invariant sets. \begin{enumerate} \item $Y^{X}$ (i.e. the set of all functions from $X$ to $Y$) is an $S_{A}$-set with the $S_{A}$-action $\widetilde{\star}:S_{A}\times Y^{X}\rightarrow Y^{X}$ defined by $(\pi \widetilde{\star}f)(x) = \pi\diamond(f(\pi^{-1}\cdot x))$ for all $\pi\in S_{A}$, $f\in Y^{X}$ and $x\in X$. A function $f:X\rightarrow Y$ is finitely supported in the sense of Definition \ref{2.10-1} if and only if it is finitely supported with respect to the permutation action $\widetilde{\star}$. \item Let $Z$ be a finitely supported subset of $X$ and $T$ a finitely supported subset of $Y$. A function $f:Z\rightarrow T$ is supported by a finite set $S \subseteq A$ if and only if for all $x \in Z$ and all $\pi \in Fix(S)$ we have $\pi \cdot x \in Z$, $\pi \diamond f(x) \in T$ and $f(\pi\cdot x)=\pi\diamond f(x)$. Particularly, a function $f:X\rightarrow Y$ is supported by a finite set $S \subseteq A$ if and only if for all $x \in X$ and all $\pi \in Fix(S)$ we have $f(\pi\cdot x)=\pi\diamond f(x)$. 
\end{enumerate} \end{proposition} \section{Cardinalities and Order Properties} \begin{definition} \label{FM-event struc} \ \begin{itemize} \item An \emph{invariant partially ordered set (invariant poset)} is an invariant set $(P,\cdot)$ together with an equivariant partial order relation $\sqsubseteq$ on $P$. An invariant poset is denoted by $(P,\sqsubseteq,\cdot)$ or simply $P$. \item A \emph{finitely supported partially ordered set (finitely supported poset)} is a finitely supported subset $X$ of an invariant set $(P,\cdot)$ together with a partial order relation $\sqsubseteq$ on $X$ that is finitely supported as a subset of $P\times P$. \end{itemize} \end{definition} Two FSM sets $X$ and $Y$ are called equipollent if there exists a finitely supported bijection $f:X \to Y$. The FSM cardinality of $X$ is defined as the equivalence class of all FSM sets equipollent to $X$, and is denoted by $|X|$. This means that for two FSM sets $X$ and $Y$ we have $|X|=|Y|$ if and only if there exists a finitely supported bijection $f:X \to Y$. On the family of cardinalities we can define the relations: \begin{itemize} \item $\leq$ by: \ $|X| \leq |Y|$ if and only if there is a finitely supported injective mapping $f:X \to Y$; \item $\leq^{*}$ by: $|X| \leq^{*} |Y|$ if and only if there is a finitely supported surjective mapping $f:Y \to X$. \end{itemize} \begin{theorem} \label{cardord} \begin{enumerate} \item The relation $\leq$ is equivariant, reflexive, anti-symmetric and transitive, but it is not total. \item The relation $\leq^{*}$ is equivariant, reflexive and transitive, but it is not anti-symmetric, nor total. 
\end{enumerate} \end{theorem} \begin{proof} \begin{itemize} \item $\leq$ and $\leq^{*}$ are equivariant because for any FSM sets $X$ and $Y$, whenever there is a finitely supported injection/surjection $f:X \to Y$, according to Proposition \ref{2.15}, we have that $\pi \star f:\pi \star X \to \pi \star Y$, defined by $(\pi \star f)(\pi \cdot x)=\pi \cdot f(x)$ for all $x \in X$, is a finitely supported injective/surjective mapping, and so $\pi \star X$ is comparable with $\pi \star Y$ (under $\leq$ or $\leq^{*}$, as the case may be). \item $\leq$ and $\leq^{*}$ are obviously reflexive because for each FSM set $X$, the identity of $X$ is an equivariant bijection from $X$ to $X$. \item $\leq$ and $\leq^{*}$ are transitive because for any FSM sets $X$, $Y$ and $Z$, whenever there are two finitely supported injections/surjections $f:X \to Y$ and $g:Y \to Z$, there exists an injection/surjection $g \circ f:X \to Z$ which is finitely supported by $supp(f) \cup supp(g)$. \item The anti-symmetry of $\leq$. \begin{lemma} \label{lem1} Let $(B, \cdot)$ and $(C, \diamond)$ be two invariant sets. If there exist a finitely supported injective mapping $f: B \to C$ and a finitely supported injective mapping $g: C \to B$, then there exists a finitely supported bijective mapping $h:B \to C$. Furthermore, $supp(h) \subseteq supp(f) \cup supp(g)$. \end{lemma} \emph{Proof of Lemma \ref{lem1}.} Let us define $F:\wp_{fs}(B) \to \wp_{fs}(B)$ by $F(X)=B-g(C-f(X))$ for all finitely supported subsets $X$ of $B$. \textbf{Claim 1:} $F$ is correctly defined, i.e. $Im(F) \subseteq \wp_{fs}(B)$.\\ For every finitely supported subset $X$ of $B$, we have that $f(X)$ is supported by $supp(f) \cup supp(X)$. Indeed, let $\pi \in Fix(supp(f) \cup supp(X))$. Let~$y$ be an arbitrary element from $f(X)$; then $y=f(x)$ for some $x \in X$. 
However, because $\pi \in Fix (supp(X))$, it follows that $\pi \cdot x \in X$ and so, because $supp(f)$ supports $f$ and $\pi$ fixes $supp(f)$ pointwise, from Proposition~\ref{2.18'} we get $\pi \diamond y= \pi \diamond f(x)= f(\pi \cdot x) \in f(X)$. Thus $\pi \widetilde{\star} f(X)=f(X)$, where~$\widetilde{\star}$ is the $S_{A}$-action on~$\wp_{fs}(C)$ defined as in Proposition~\ref{p1}. Analogously, $g(Y)$ is finitely supported by $supp(g) \cup supp(Y)$ for all $Y \in \wp_{fs}(C)$. It is easy to remark that for every finitely supported subset~$X$ of $B$ we have that $C-f(X)$ is also supported by $supp(f) \cup supp(X)$, $g(C-f(X))$ is supported by $supp(g) \cup supp(f) \cup supp(X)$, and $B-g(C-f(X))$ is supported by $supp(g) \cup supp(f) \cup supp(X)$. Thus,~$F$ is well-defined. \textbf{Claim 2:} $F$ is a finitely supported function.\\ We prove that $F$ is finitely supported by $supp(f) \cup supp(g)$. Let us consider $\pi \in Fix(supp(f) \cup supp(g))$. Since $\pi \in Fix(supp(f))$ and $supp(f)$ supports~$f$, according to Proposition \ref{2.18'} we have that $f(\pi \cdot x)=\pi \diamond f(x)$ for all $x \in B$. Thus, for every finitely supported subset $X$ of $B$ we have $f(\pi \star X)=\{f(\pi \cdot x)\;|\;x \in X\}=\{\pi \diamond f(x)\;|\;x \in X\}=\pi \widetilde{\star} f(X)$, where~$\star$ is the $S_{A}$-action on $\wp_{fs}(B)$ and $\widetilde{\star}$ is the $S_{A}$-action on $\wp_{fs}(C)$. Similarly, $g(\pi \widetilde{\star} Y)=\pi \star g(Y)$ for any finitely supported subset $Y$ of $C$. Therefore, $F(\pi \star X)=B-g(C-f(\pi \star X))=B-g(C-\pi \widetilde{\star} f(X)) \overset{\pi \widetilde{\star}C=C}{=}B-g(\pi \widetilde{\star}(C-f(X)))=B-(\pi \star g(C-f(X))) \overset{\pi \star B=B}{=} \pi \star (B- g(C-f(X)))=\pi \star F(X)$. From Proposition \ref{2.18'} it follows that $F$ is finitely supported. Moreover, because $supp(F)$ is the least set of atoms supporting $F$, we have $supp(F) \subseteq supp(f) \cup supp(g)$. 
\textbf{Claim 3:} For any $X,Y \in \wp_{fs}(B)$ with $X \subseteq Y$, we have $F(X) \subseteq F(Y)$. This remark follows by direct calculation. \textbf{Claim 4:} The set $S:=\{X\;|\; X \in \wp_{fs}(B), X \subseteq F(X)\}$ is a non-empty finitely supported subset of $\wp_{fs}(B)$.\\ Obviously, $\emptyset \in S$. We claim that $S$ is supported by $supp(F)$. Let $\pi \in Fix(supp(F))$, and $X \in S$. Then $X \subseteq F(X)$. From the definition of $\star$ (see Proposition \ref{p1}) we have $\pi \star X \subseteq \pi \star F(X)$. According to Proposition \ref{2.18'}, because $supp(F)$ supports $F$, we have $\pi \star X \subseteq \pi \star F(X)=F(\pi \star X)$, and so $\pi \star X \in S$. It follows that $S$ is finitely supported, and $supp(S) \subseteq supp(F)$. \textbf{Claim 5:} $T:=\underset{X \in S}{\cup}X$ is finitely supported by $supp(S)$.\\ Let $\pi \in Fix(supp(S))$, and $t \in T$. Since $T=\underset{X \in S}{\cup}X$, we have that there exists $Z \in S$ such that $t \in Z$. Therefore, $\pi \cdot t \in \pi \star Z$. However, since~$\pi$ fixes $supp(S)$ pointwise and $supp(S)$ supports $S$, we have that $\pi \star Z \in S$. Thus, there exists $Y \in S$ such that $\pi \star Z=Y$. Therefore $\pi \cdot t \in Y$, and so $\pi \cdot t \in \underset{X \in S}{\cup}X$. It follows that $\underset{X \in S}{\cup}X$ is finitely supported, and so $T=\underset{X \in S}{\cup}X \in \wp_{fs}(B)$. Furthermore, $supp(T) \subseteq supp(S)$. \textbf{Claim 6:} We prove that $F(T)=T$.\\ Let $X \in S$ be arbitrary. We have $X \subseteq F(X)$ (since $X \in S$) and, because $X \subseteq T$, Claim 3 gives $F(X) \subseteq F(T)$. Taking the union over all $X \in S$, this leads to $T \subseteq F(T)$. Since $T \subseteq F(T)$, from Claim 3 we also have $F(T) \subseteq F(F(T))$. Furthermore, $F(T)$ is supported by $supp(F) \cup supp(T)$ (i.e. by $supp(f) \cup supp(g)$), and so $F(T) \in S$. According to the definition of $T$, we get $F(T) \subseteq T$. We get $T=B-g(C-f(T))$, or equivalently, $B-T=g(C-f(T))$. 
Since~$g$ is injective, we obtain that for each $x \in B-T$, $g^{-1}(x)$ is a set containing exactly one element. Let us define $h:B \to C$ by \[ h(x)=\left\{ \begin{array}{ll} f(x), & \text{for}\: x \in T;\\ g^{-1}(x), & \text{for}\: x \in B-T.\end{array}\right. \] \textbf{Claim 7:} We claim that $h$ is supported by the set $supp(f) \cup supp(g) \cup supp(T)$ (more exactly, by $supp(f) \cup supp(g)$, according to the previous~claims). Let $\pi \in Fix(supp(f) \cup supp(g) \cup supp(T))$, and $x$ an arbitrary element of~$B$. If $x \in T$, because $\pi \in Fix(supp(T))$ and $supp(T)$ supports $T$, we have $\pi \cdot x \in T$. Thus, from Proposition~\ref{2.18'} we get $h(\pi \cdot x)=f(\pi \cdot x)=\pi \diamond f(x)=\pi \diamond h(x)$. If $x \in B-T$, we have $\pi \cdot x \in B-T$. Otherwise, we would obtain the contradiction $x=\pi^{-1} \cdot (\pi \cdot x) \in T$ because $\pi^{-1}$ also fixes $supp(T)$ pointwise. Thus, because $g$ is finitely supported, according to Proposition~\ref{2.18'} we have $h(\pi \cdot x)=g^{-1}(\pi \cdot x)=\{y \in C\;|\; g(y)$=$\pi \cdot x\}=\{y \in C\;|\; \pi^{-1} \cdot g(y)$=$x\}=\{y\in C\;|\; g(\pi^{-1} \diamond y)$= $x\}\overset{\pi^{-1} \diamond y:= z}{=}\{\pi \diamond z \in C\;|\; g(z)$=$x\}=\pi \diamond \{ z \in C\;|\; g(z)$=$x\}=\pi \diamond g^{-1}(x)=\pi \diamond h(x)$. We obtained $h(\pi \cdot x)=\pi \diamond h(x)$ for all $\pi \in Fix(supp(f) \cup supp(g) \cup supp(T))$ and all $x \in B$. According to Proposition~\ref{2.18'}, we get that $h$ is finitely supported. Furthermore, we also have that $supp(h) \subseteq supp(f) \cup supp(g) \cup supp(T) \overset{Claim\; 5}{\subseteq} supp(f) \cup supp(g) \cup supp(S)$$ \overset{Claim \; 4}{\subseteq} supp(f) \cup supp(g) \cup supp(F) \overset{Claim \; 2}{\subseteq} supp(f)$ $ \cup supp(g)$. \textbf{Claim 8:} $h$ is a bijective function. \\ First we prove that $h$ is injective. Let us suppose that $h(x)=h(y)$. We claim that either $x,y \in T$ or $x,y \in B-T$. 
Indeed, let us suppose that $x\in T$ and $y \notin T$ (the case $x \notin T$, $y \in T$ is similar). We have $h(x)=f(x)$ and $h(y)=g^{-1}(y)$. If we denote $g^{-1}(y)=z$, we have $g(z)=y$. However, we supposed that $y \in B-T$, and so there exists $u \in C-f(T)$ such that $y=g(u)$. Since $y=g(z)$, from the injectivity of $g$ we get $u=z$. This is a contradiction because $u \notin f(T)$, while $z=f(x) \in f(T)$. Since we proved that both $x,y$ are contained either in $T$ or in $B-T$, the injectivity of $h$ follows from the injectivity of $f$ or $g$, respectively. Now we prove that $h$ is surjective. Let $y \in C$ be arbitrarily chosen. If $y \in f(T)$, then there exists $z \in T$ such that $y=f(z)$, and so $y=h(z)$. If $y \in C-f(T)$, and because $g(C-f(T))=B-T$, there exists $x \in B-T$ such that $g(y)=x$. Thus, $y \in g^{-1}(x)$. Since $g$ is injective, and so $g^{-1}(x)$ is a one-element set, we can say that $g^{-1}(x)=y$ with $x \in B-T$. Thus we have $y=h(x)$. \begin{lemma} \label{lem2} Let $(B, \cdot)$ and \; $(C, \diamond)$ be two invariant sets (in particular, $B$ and $C$ could coincide), $B_{1}$ a finitely supported subset of $B$ and $C_{1}$ a finitely supported subset of $C$. If there exist a finitely supported injective mapping $f: B_{1} \to C_{1}$ and a finitely supported injective mapping $g: C_{1} \to B_{1}$, then there exists a finitely supported bijective mapping $h:B_{1} \to C_{1}$. Furthermore, $supp(h) \subseteq supp(f) \cup supp(g) \cup supp(B_{1}) \cup supp(C_{1})$. \end{lemma} \emph{Proof of Lemma \ref{lem2}.} We follow the proof of Lemma \ref{lem1}. We define $F:\wp_{fs'}(B_{1}) \to \wp_{fs'}(B_{1})$ by $F(X)=B_{1}-g(C_{1}-f(X))$ for all $X \in \wp_{fs'}(B_{1})$, where $\wp_{fs'}(B_{1})$ is a finitely supported subset of the invariant set $\wp_{fs}(B)$ (supported by $supp(B_{1})$) defined by $\wp_{fs'}(B_{1})=\{X \in \wp_{fs}(B)\;|\;X \subseteq B_{1}\}$. 
As in the previous lemma, but using Proposition \ref{2.18'}, we get that $F$ is well-defined, i.e. for every $X \in \wp_{fs'}(B_{1})$ we have that $F(X)$ is supported by $supp(f) \cup supp(g) \cup supp(B_{1}) \cup supp(C_{1}) \cup supp(X)$, which means $F(X) \in \wp_{fs'}(B_{1})$. Moreover, $F$ is itself finitely supported (in the sense of Definition~\ref{2.10-1}) by $supp(f) \cup supp(g) \cup supp(B_{1}) \cup supp(C_{1})$. The set $S:=\{X\;|\; X \in \wp_{fs'}(B_{1}), \ X \subseteq F(X)\}$ is contained in $\wp_{fs'}(B_{1})$ and it is supported by $supp(F)$ as a subset of $\wp_{fs}(B)$. The set $T:=\underset{X \in S}{\cup}X \in \wp_{fs'}(B_{1})$ is finitely supported by $supp(S)$, and it is a fixed point of $F$. As in the proof of Lemma \ref{lem1}, we define the bijection $h:B_{1} \to C_{1}$ by \[ h(x)=\left\{ \begin{array}{ll} f(x), & \text{for}\: x \in T;\\ g^{-1}(x), & \text{for}\: x \in B_{1}-T.\end{array}\right. \] According to Proposition \ref{2.18'}, we obtain that $h$ is finitely supported by $ supp(f) \cup supp(g) \cup supp(B_{1}) \cup supp(C_{1}) \cup supp(T)$, and $supp(h) \subseteq supp(f) \cup supp(g) \cup supp(B_{1}) \cup supp(C_{1})$. Thus, $h$ is the required finitely supported bijection between $B_{1}$ and $C_{1}$. The anti-symmetry of $\leq$ follows from Lemma \ref{lem1} and Lemma \ref{lem2} because FSM sets are actually finitely supported subsets of invariant sets. It is worth noting that $\leq^{*}$ is not anti-symmetric. \begin{lemma} \label{lem3} There are two invariant sets $B$ and $C$ such that there exist both a finitely supported surjective mapping $f: C \to B$ and a finitely supported surjective mapping $g: B \to C$, but there does not exist a finitely supported bijective mapping $h:B \to C$. \end{lemma} \emph{Proof of Lemma \ref{lem3}.} Let us consider the invariant set $(A, \cdot)$ of atoms. 
The family $T_{fin}(A)=\{(x_{1}, \ldots, x_{m}) \in A \times \ldots \times A\,|\,m \geq 0\}$ of all finite injective tuples from $A$ (including the empty tuple, denoted by~$\bar{\emptyset}$) is an $S_{A}$-set with the $S_{A}$-action $\star:S_{A}\times T_{fin}(A) \rightarrow T_{fin}(A)$ defined by $\pi \star (x_{1}, \ldots, x_{m})=(\pi \cdot x_{1}, \ldots, \pi \cdot x_{m})$ for all $ (x_{1}, \ldots, x_{m}) \in T_{fin}(A)$ and all $\pi \in S_{A}$. Since $A$ is an invariant set, we have that $T_{fin}(A)$ is an invariant set. Whenever $X$ is an invariant set, we have that each injective tuple $(x_{1}, \ldots, x_{m})$ of elements belonging to $X$ is finitely supported, and, furthermore, $supp(x_{1}, \ldots, x_{m})=supp(x_{1}) \cup \ldots \cup supp(x_{m})$. Particularly, we obtain that $supp(a_{1}, \ldots, a_{m})=\{a_{1}, \ldots, a_{m}\}$, for any injective tuple of atoms $(a_{1}, \ldots, a_{m})$ (similarly as in Proposition 2.2 from \cite{book}). Since $supp(\bar{\emptyset})=\emptyset$, it follows that $T^{*}_{fin}(A)=T_{fin}(A)\setminus \bar{\emptyset}$ is an equivariant subset of $T_{fin}(A)$, and is itself an invariant set. Let us fix an atom $a \in A$. We define $f:T_{fin}(A) \to T_{fin}(A)\setminus \bar{\emptyset}$ by \[ f(y)=\left\{ \begin{array}{ll} y, & \text{if}\: \text{$y$ is an injective non-empty tuple};\\ (a), & \text{if}\: \text{$y=\bar{\emptyset}$}\: .\end{array}\right. \] Clearly, $f$ is surjective. We claim that $f$ is supported by $supp(a)$. Let $\pi \in Fix(supp(a))$, i.e. $a=\pi(a)$, and so $\pi \star(a)=(a)$. If $y$ is a non-empty tuple of atoms, we obviously have $f(\pi \star y)=\pi \star y=\pi \star f(y)$. If $y=\bar{\emptyset}$, we have $\pi \star y=\bar{\emptyset}$, and so $f(\pi \star y)=(a)=(\pi(a))=\pi \star f(y)$. Thus, $f(\pi \star y)=\pi \star f(y)$ for all $y \in T_{fin}(A)$. According to Proposition \ref{2.18'}, we have that $f$ is finitely supported. 
We define an equivariant surjective function $g:T_{fin}(A)\setminus \bar{\emptyset} \to T_{fin}(A)$ by \[ g(y)=\left\{ \begin{array}{ll} \bar{\emptyset}, & \text{if}\: \text{$y$ is a tuple with exactly one element};\\ y', & \text{otherwise}\: ; \end{array}\right. \] where $y'$ is the new tuple formed by deleting the first element of the tuple~$y$ (the first position in a finite injective tuple exists without requiring any form of choice). Clearly, $g$ is surjective. Indeed, $\bar{\emptyset}=g((a))$ for any one-element tuple $(a)$ ($A$ is non-empty, and so it has at least one atom). For a fixed finite injective non-empty $m$-tuple $y$, we have that $y$ can be seen as being ``contained" in an injective $(m+1)$-tuple $z$ of the form $(b,y)$ (whose first element is a certain atom~$b$, and whose following elements are precisely the elements of~$y$). The related atom~$b$ exists because $y$ is finite, while $A$ is infinite (generally, we can always find an atom $b \notin supp(y)=\{y\}$ according to the finite support requirement in FSM - more details in Section 2.9 of \cite{book}). We get $y=g (z)$. For proving the surjectivity of $g$ we do not need to `choose' a precise such element $b$ (we do not need to define an inverse function for $g$); it is sufficient to ascertain that $g(b,y)=y$ for every $b \in A\setminus\{y\}$ and that $A\setminus\{y\}$ is non-empty (the axiom of choice is not required because for proving only the surjectivity of $g$ we do not involve the construction of a system of representatives for the family $(g^{-1}(y))_{y \in T_{fin}(A)}$). We claim now that $g$ is equivariant. Let $(x)$ be a one-element tuple from~$A$ and $\pi$ an arbitrary permutation from~$S_{A}$. We have that $\pi \star (x)=(\pi(x))$ is a one-element tuple from $A$, and so $g(\pi \star (x))=\bar{\emptyset} = \pi \star \bar{\emptyset} = \pi \star g((x))$. Now, let us consider $(x_{1}, \ldots, x_{m}) \in T_{fin}(A), m \geq 2$ and $\pi \in S_{A}$. 
We have $ g(\pi \star (x_{1}, \ldots, x_{m}))=g((\pi \cdot x_{1}, \ldots, \pi \cdot x_{m}))=g((\pi (x_{1}), \ldots, \pi (x_{m})))=(\pi (x_{2}), \ldots, \pi (x_{m}))=\pi \star (x_{2}, \ldots, x_{m})=\pi \star g(x_{1}, \ldots, x_{m})$. According to Proposition \ref{2.18'}, we have that $g$ is empty-supported (equivariant). We prove by contradiction that there cannot exist a finitely supported injection $h: T_{fin}(A) \to T_{fin}(A)\setminus \bar{\emptyset}$. Let us suppose there is a finitely supported injection $h:T_{fin}(A) \rightarrow T_{fin}(A)\setminus \bar{\emptyset}$. We have $\bar{\emptyset} \notin Im(h)$ because $Im(h) \subseteq T_{fin}(A) \setminus \bar{\emptyset}$. We can form an infinite sequence $(y_{n})_{n}$ with first term $y_{0}=\bar{\emptyset}$ and general term $y_{n+1}=h(y_{n})$ for all $n \in \mathbb{N}$. Since $\bar{\emptyset}\notin Im(h)$, it follows that $\bar{\emptyset} \neq h(\bar{\emptyset})$. Since $h$ is injective and $\bar{\emptyset} \notin Im(h)$, we obtain by induction that $h^{n}(\bar{\emptyset}) \neq h^{m}(\bar{\emptyset})$ for all $n,m \in \mathbb{N}$ with $n \neq m$; that is, the terms of the sequence $(y_{n})_{n}$ are pairwise distinct. We prove now that for each $n \in \mathbb{N}$ we have that $y_{n+1}$ is supported by $supp(h)\cup supp(y_{n})$. Let $\pi \in Fix(supp(h)\cup supp(y_{n}))$. According to Proposition~\ref{2.18'}, because $\pi \in Fix(supp(h))$ we have $h(\pi \star y_{n})=\pi \star h(y_{n})$. Since $\pi \in Fix(supp(y_{n}))$ we have $\pi \star y_{n}=y_{n}$, and so $h(y_{n})=\pi \star h(y_{n})$. Thus, $\pi \star y_{n+1}= \pi \star h(y_{n}) = h(y_{n}) = y_{n+1}$. Furthermore, because $supp(y_{n+1})$ is the least set supporting $y_{n+1}$, we have $supp(y_{n+1}) \subseteq supp(h)\cup supp(y_{n})$ for all $n \in \mathbb{N}$. Since each $y_{n}$ is a finite injective tuple of atoms, it follows that $supp(y_{n})=\{y_{n}\}$ for all $n \in \mathbb{N}$ (where by $\{y_{n}\}$ we denote the set of atoms forming $y_{n}$). 
We get $\{y_{n+1}\} = supp(y_{n+1}) \subseteq supp(h)\cup supp(y_{n}) = supp(h) \cup \{y_{n}\}$. By repeatedly applying this result, we get $\{y_{n}\} \subseteq supp(h) \cup \{y_{0}\}= supp(h) \cup \emptyset =supp(h)$ for all $n\in\mathbb{N}$. Since $supp(h)$ is finite, there are only finitely many injective tuples formed of atoms from $supp(h)$, which contradicts the fact that the terms of the infinite sequence $(y_{n})_{n}$ are pairwise distinct. Thus, there does not exist a finitely supported bijection between $T_{fin}(A)\setminus \bar{\emptyset}$ and $T_{fin}(A)$. \item $\leq$ and $\leq^{\star}$ are not total. We prove that whenever $X$ is an infinite ordinary (non-atomic) ZF-set, for any finitely supported function $f : A \to X$ and any finitely supported function $g : X \to A$, the images $Im(f)$ and $Im(g)$ are finite. As a direct consequence, there are no finitely supported injective mappings and no finitely supported surjective mappings between $A$ and $X$. Let us consider a finitely supported mapping $f:A \to X$, and let us fix an element $b\in A$ with $b\notin supp(f)$. Let $c$ be an arbitrary element from $A\setminus supp(f)$. Since $b\notin supp(f)$, we have that $(b\, c)$ fixes every element from $supp(f)$, i.e. $(b\, c)\in Fix(supp(f))$. However, $supp(f)$ supports~$f$, and so, by Proposition \ref{2.18'}, we have $f((b\, c)(a))=(b\,c) \diamond f(a)=f(a)$ for all $a\in A$. In particular, $f(c)=f((b\,c)(b))=f(b)$. Since~$c$ has been chosen arbitrarily from $A \setminus supp(f)$, it follows that $f(c)=f(b)$ for all $c \in A \setminus supp(f)$. If $supp(f)=\{a_{1}, \ldots, a_{n}\}$, then $Im(f) = \{f(a_{1})\}\cup \ldots \cup \{f(a_{n})\} \cup \{f(b)\}$. Thus, $Im(f)$ is finite (because it is a finite union of singletons). Let $g : X \to A$ be a finitely supported function. Assume by contradiction that $Im(g)$ is infinite. Pick any atom $a \in Im(g) \setminus supp(g)$ (such an atom exists because $supp(g)$ is finite). There exists an $x \in X$ such that $g(x) = a$.
Now pick any atom $b \in Im(g) \setminus (supp(g) \cup \{a\})$. The transposition $(a\,b)$ fixes $supp(g)$ pointwise, and so $g(x)=g((a\,b) \diamond x)=(a\,b) \cdot g(x) =(a\, b)(a)= b$, and hence $a=g(x)=b$, contradicting $a \neq b$. Thus, $Im(g)$ is finite. \end{itemize} \end{proof} \begin{corollary}\label{corcor} There exist two invariant sets $B$ and $C$ such that there is a finitely supported bijection between $\wp_{fs}(B)$ and $\wp_{fs}(C)$, but there is no finitely supported bijection between $B$ and $C$. \end{corollary} \begin{proof} Firstly we prove the following lemma. \begin{lemma} \label{lemlem} Let $X$ and $Y$ be two FSM sets and $f:X \to Y$ a finitely supported surjective function. Then the mapping $g:\wp_{fs}(Y) \to \wp_{fs}(X)$ defined by $g(V)=f^{-1}(V)$ for all $V \in \wp_{fs}(Y)$ is well defined, injective and finitely supported by $supp(f) \cup supp(X) \cup supp(Y)$. \end{lemma} \emph{Proof of Lemma \ref{lemlem}.} Let $V$ be an arbitrary element from $\wp_{fs}(Y)$. We claim that $f^{-1}(V) \in \wp_{fs}(X)$. Indeed, we prove that the set $f^{-1}(V)$ is supported by $supp(f) \cup supp(V) \cup supp(X) \cup supp(Y)$. Let $\pi \in Fix(supp(f) \cup supp(V) \cup supp(X) \cup supp(Y))$, and $x \in f^{-1}(V) $. This means $f(x) \in V$. According to Proposition \ref{2.18'}, and because $\pi$ fixes $supp(f)$ pointwise and $supp(f)$ supports $f$, we have $f(\pi \cdot x)= \pi \cdot f(x) \in \pi \star V = V$, and so $\pi \cdot x \in f^{-1}(V)$ (we denoted the actions on $X$ and $Y$ generically by $\cdot$, and the actions on their powersets by $\star$). Therefore, $f^{-1}(V)$ is finitely supported, and so the function $g$ is well defined. We claim that $g$ is supported by $supp(f) \cup supp(X) \cup supp(Y)$. Let $\pi \in Fix(supp(f) \cup supp(X) \cup supp(Y))$.
For any arbitrary $V \in \wp_{fs}(Y)$ we get $\pi \star V \in \wp_{fs}(Y)$ and $\pi \star g(V) \in \wp_{fs}(X)$, and by Proposition \ref{2.18'} we have that $\pi^{-1} \in Fix(supp(f))$, and so $f(\pi^{-1} \cdot x)=\pi^{-1} \cdot f(x)$ for all $x\in X$. For any arbitrary $V \in \wp_{fs}(Y)$, we have that $z \in g(\pi \star V) = f^{-1}(\pi \star V) \Leftrightarrow f(z) \in \pi \star V \Leftrightarrow \pi^{-1} \cdot f(z) \in V \Leftrightarrow f(\pi^{-1} \cdot z) \in V \Leftrightarrow \pi^{-1} \cdot z \in f^{-1}(V) \Leftrightarrow z \in \pi \star f^{-1}(V)=\pi \star g(V)$. It follows that $g(\pi \star V)=\pi \star g(V)$ for all $V \in \wp_{fs}(Y)$, and so $g$ is finitely supported. Moreover, because $f$ is surjective, a simple calculation shows us that $g$ is injective. Indeed, let us suppose that $g(U)=g(V)$ for some $U,V \in \wp_{fs}(Y)$. We have $f^{-1}(U) = f^{-1}(V)$, and so $f(f^{-1}(U)) = f(f^{-1}(V))$. Since $f$ is surjective, we get $U = f(f^{-1}(U)) = f(f^{-1}(V)) = V$. We now return to the proof of Corollary \ref{corcor}. As in Lemma \ref{lem3}, we consider the sets $B=T_{fin}(A)\setminus \bar{\emptyset}$ and $C=T_{fin}(A)$. According to Lemma \ref{lem3}, there exist a finitely supported surjective function $f:C \to B$ and a finitely supported (equivariant) surjection $g:B \to C$. Thus, according to Lemma \ref{lemlem}, there exist a finitely supported injective function $f':\wp_{fs}(B) \to \wp_{fs}(C)$ and a finitely supported injective function $g':\wp_{fs}(C) \to \wp_{fs}(B)$. According to Lemma \ref{lem1}, there is a finitely supported bijection between $\wp_{fs}(B)$ and $\wp_{fs}(C)$. However, we proved in Lemma \ref{lem3} that there is no finitely supported bijection between $B=T_{fin}(A)\setminus \bar{\emptyset}$ and $C=T_{fin}(A)$. \end{proof} The following result, communicated by Levy in 1965 for non-atomic ZF sets, can be reformulated in the world of finitely supported atomic structures.
\begin{corollary}Let $X$ and $Y$ be two invariant sets with the property that whenever $|2^{X}_{fs}|=|2^{Y}_{fs}|$ we have $|X|=|Y|$. If $|X|\leq^{\star}|Y|$ and $|Y|\leq^{\star}|X|$, then $|X|=|Y|$. \end{corollary} \begin{proof}According to the hypothesis and to Lemma \ref{lemlem}, there exist two finitely supported injective functions $f:\wp_{fs}(Y) \to \wp_{fs}(X)$ and $g:\wp_{fs}(X) \to \wp_{fs}(Y)$. According to Lemma \ref{lem1}, there is a bijective mapping $h:\wp_{fs}(X) \to \wp_{fs}(Y)$. According to Theorem \ref{comp}, we get $|2^{X}_{fs}|=|2^{Y}_{fs}|$, and so we get $|X|=|Y|$. \end{proof} \begin{proposition}[Cantor] \label{Cantor} Let $X$ be a finitely supported subset of an invariant set $(Y, \cdot)$. Then $|X| \lneq |\wp_{fs}(X)|$ and $|X| \lneq^{*} |\wp_{fs}(X)|$. \end{proposition} \begin{proof}First we prove that there is no finitely supported bijection between $X$ and $\wp_{fs}(X)$, and so their cardinalities cannot be equal. Assume, by contradiction, that there is a finitely supported surjective mapping $f: X \to \wp_{fs}(X)$. Let us consider $Z=\{x \in X\,|\,x \notin f(x)\}$. We claim that $supp(X) \cup supp(f)$ supports $Z$. Let $\pi \in Fix(supp(X) \cup supp(f))$. Let $x \in Z$. Then $\pi \cdot x \in X$ and $\pi \cdot x \notin \pi \star f(x)=f(\pi \cdot x)$. Thus, $\pi \cdot x \in Z$, and so $Z \in \wp_{fs}(X)$. Therefore, since $f$ is surjective, there is $x_{0} \in X$ such that $f(x_{0})=Z$. However, from the definition of $Z$ we have $x_{0} \in Z$ if and only if $x_{0}\notin f(x_{0})=Z$, which is a contradiction. Now, it is clear that the mapping $i: X \to \wp_{fs}(X)$ defined by $i(x)=\{x\}$ is injective and supported by $supp(X)$. Thus, $|X| \lneq |\wp_{fs}(X)|$. Let us fix an element $y \in X$. We define $s:\wp_{fs}(X) \to X$ by \[ s(U)=\left\{ \begin{array}{ll} u, & \text{if}\: \text{$U$ is a one-element set $\{u\}$};\\ y, & \text{if}\: \text{$U$ is not a one-element set}\: .\end{array}\right. \] Clearly, $s$ is surjective.
We claim that $s$ is supported by $supp(y) \cup supp(X)$. Let $\pi \in Fix(supp(y) \cup supp(X))$. Thus, $y=\pi \cdot y$. If $U$ is of the form $U=\{u\}$, we obviously have $s(\pi \star U)=s(\{\pi \cdot u\})=\pi \cdot u=\pi \cdot s(U)$. If $U$ is not a one-element set, then neither is $\pi \star U$, and we have $s(\pi \star U)=y=\pi \cdot y=\pi \cdot s(U)$. Thus, $\pi \star U \in \wp_{fs}(X)$, $\pi \cdot s(U) \in X$, and $s(\pi \star U)=\pi \cdot s(U)$ for all $U \in \wp_{fs}(X)$. According to Proposition \ref{2.18'}, we have that $s$ is finitely supported. Therefore, $|X| \lneq^{*} |\wp_{fs}(X)|$. \end{proof} In Proposition \ref{Cantor} we used a technique for constructing a surjection from an injection going in the opposite direction; this technique can be generalized as follows. \begin{proposition} \label{pco2}Let $X$ and $Y$ be finitely supported subsets of an invariant set $U$. If $|X| \leq |Y|$, then $|X| \leq ^{\star} |Y|$. The converse is not valid. However, if $|X| \leq ^{\star} |Y|$, then $|X| \leq |\wp_{fs}(Y)|$. \end{proposition} \begin{proof} Suppose there exists a finitely supported injective mapping $f: X \to Y$. We consider the case $Y \neq \emptyset$ (otherwise, the result follows trivially). Fix $x_{0} \in X$. Define the mapping $f':Y \to X$ by \[ f'(y)=\left\{ \begin{array}{ll} f^{-1}(y), & \text{if}\: \text{$y \in Im(f)$ };\\ x_{0}, & \text{if}\: \text{$y \notin Im(f)$}\: .\end{array}\right. \] Since $f$ is injective, it follows that $f^{-1}(y)$ is a one-element set for each $y \in Im(f)$, and so $f'$ is a function. Clearly,~$f'$ is surjective. We claim that $f'$ is supported by the set $supp(f) \cup supp(x_{0}) \cup supp(X) \cup supp(Y)$. Indeed, let us consider $\pi \in Fix(supp(f) \cup supp(x_{0}) \cup supp(X) \cup supp(Y))$. Whenever $y \in Im(f)$ we have $y=f(z)$ for some $z \in X$ and $\pi \cdot y=\pi \cdot f(z)=f(\pi \cdot z) \in Im(f)$, which means $Im(f)$ is finitely supported by $supp(f)$.
Consider an arbitrary $y_{0} \in Im(f)$, and thus $\pi \cdot y_{0} \in Im(f)$. Then $f'(y_{0})= f^{-1}(y_{0})=z_{0}$ with $f(z_{0})=y_{0}$, and so $f(\pi \cdot z_{0})= \pi \cdot f(z_{0})=\pi \cdot y_{0}$, which means $f'(\pi \cdot y_{0})=f^{-1}(\pi \cdot y_{0})=\pi \cdot z_{0}=\pi \cdot f^{-1}(y_{0})=\pi \cdot f'(y_{0})$. Now, for $y \notin Im(f)$ we have $\pi \cdot y \notin Im(f)$, which means $f'(\pi \cdot y)=x_{0}=\pi \cdot x_{0}=\pi \cdot f'(y)$ since $\pi$ fixes $supp(x_{0})$ pointwise. Thus, $|X| \leq ^{\star} |Y|$. Conversely, from the proof of Lemma \ref{lem3}, we know that there is a finitely supported surjection $g:T_{fin}(A)\setminus \bar{\emptyset} \to T_{fin}(A)$, but there does not exist a finitely supported injection $h: T_{fin}(A) \to T_{fin}(A)\setminus \bar{\emptyset}$. Assume now there is a finitely supported surjective mapping $f:Y \to X$. We proceed similarly as in the proof of Lemma \ref{lemlem}. Fix $x \in X$. Then $f^{-1}(\{x\})$ is supported by $supp(f) \cup supp(x) \cup supp(X)$. Indeed, let $\pi \in Fix(supp(f) \cup supp(x) \cup supp(X))$, and $y \in f^{-1}(\{x\})$. This means $f(y)=x$. According to Proposition \ref{2.18'}, we have $f(\pi \cdot y)= \pi \cdot f(y)= \pi \cdot x=x$, and so $\pi \cdot y \in f^{-1}(\{x\})$. Define $g:X \to \wp_{fs}(Y)$ by $g(x)=f^{-1}(\{x\})$. We claim that $g$ is supported by $supp(f) \cup supp(X)$. Let $\pi \in Fix(supp(f) \cup supp(X))$. For any arbitrary $x \in X$, we have that $z \in g(\pi \cdot x) = f^{-1}(\{\pi \cdot x\}) \Leftrightarrow f(z)=\pi \cdot x \Leftrightarrow \pi^{-1} \cdot f(z)=x \Leftrightarrow f(\pi^{-1} \cdot z) =x \Leftrightarrow \pi^{-1} \cdot z \in f^{-1}(\{x\}) \Leftrightarrow z \in \pi \star f^{-1}(\{x\})=\pi \star g(x)$. From Proposition \ref{2.18'} it follows that $g$ is finitely supported. Since $g$ is also injective, we get $|X| \leq |\wp_{fs}(Y)|$. \end{proof} \begin{proposition} \label{pco1} Let $X,Y,Z$ be finitely supported subsets of an invariant set $U$.
The following properties hold. \begin{enumerate} \item If $|X| \leq |Y|$, then $|X|+|Z| \leq |Y|+|Z|$; \item If $|X| \leq |Y|$, then $|X| \cdot |Z| \leq |Y| \cdot |Z|$; \item If $|X| \leq |Y|$, then $|X^{Z}_{fs}| \leq |Y^{Z}_{fs}|$; \item If $|X| \leq |Y|$ and $Z\neq \emptyset$, then $|Z^{X}_{fs}| \leq |Z^{Y}_{fs}|$; \item $|X|+|Y| \leq |X|\cdot|Y|$ whenever both $X$ and $Y$ have at least two elements. \end{enumerate} \end{proposition} \begin{proof} 1. Suppose there is a finitely supported injective mapping $f: X \to Y$, and define the injection $g: X+Z\to Y+Z$ by \[ g(u)=\left\{ \begin{array}{ll} (0,f(x)), & \text{if}\: u=(0,x)\: \text{with}\: x \in X;\\ (1,z), & \text{if}\: u=(1,z)\: \text{with}\: z \in Z .\end{array}\right. \] Since $f$ is finitely supported we have that $f(\pi \cdot x)=\pi \cdot f(x)$ for all $x \in X$ and $\pi \in Fix(supp(f))$. By using Proposition~\ref{2.18'}, i.e.\ verifying that $g(\pi \star u)=\pi \star g(u)$ for all $u \in X+Z$ and all $\pi \in Fix(supp(f) \cup supp(X) \cup supp(Y) \cup supp(Z))$, we have that $g$ is also finitely supported. 2. Suppose there exists a finitely supported injective mapping $f: X \to Y$. Define the injection $g: X\times Z\to Y \times Z$ by $g((x,z))=(f(x),z)$ for all $(x,z) \in X \times Z$. Clearly $g$ is injective. Since $f$ is finitely supported we have that $f(\pi \cdot x)=\pi \cdot f(x)$ for all $x \in X$ and $\pi \in Fix(supp(f))$, and so $g(\pi \otimes(x,z))=g((\pi \cdot x, \pi \cdot z))=(f(\pi \cdot x),\pi \cdot z)=(\pi \cdot f(x),\pi \cdot z)=\pi \otimes g((x,z))$ for all $(x,z) \in X \times Z$ and $\pi \in Fix(supp(f) \cup supp(X) \cup supp(Y) \cup supp(Z))$, which means $g$ is supported by $supp(f) \cup supp(X) \cup supp(Y) \cup supp(Z)$. 3. Suppose there exists a finitely supported injective mapping $f: X \to Y$. Define $g:X^{Z}_{fs} \to Y^{Z}_{fs}$ by $g(h)=f \circ h$.
We have that $g$ is injective and for any $\pi \in Fix(supp(f))$ we have $\pi \widetilde{\star} f=f$, and so $g(\pi \widetilde{\star} h)=f \circ (\pi \widetilde{\star} h)= (\pi \widetilde{\star} f) \circ (\pi \widetilde{\star} h)=\pi \widetilde{\star} (f \circ h)=\pi \widetilde{\star} g(h)$ for all $h \in X^{Z}_{fs}$. We used the relation $(\pi \widetilde{\star} f) \circ (\pi \widetilde{\star} h)=\pi \widetilde{\star} (f \circ h)$ for all $\pi \in S_{A}$. This can be proved as follows. Fix $x\in Z$; we have $(\pi\widetilde{\star}(f\circ h))(x)=\pi\cdot(f(h(\pi^{-1}\cdot x)))$. Also, if we denote $(\pi\widetilde{\star} h)(x)=y$ we have $y=\pi\cdot(h(\pi^{-1}\cdot x))$ and $((\pi\widetilde{\star} f)\circ(\pi\widetilde{\star} h))(x)=(\pi\widetilde{\star} f)(y)=\pi\cdot(f(\pi^{-1}\cdot y))=\pi\cdot(f((\pi^{-1}\circ\pi)\cdot h(\pi^{-1}\cdot x)))=\pi\cdot(f(h(\pi^{-1}\cdot x)))$. We finally obtain that $g$ is supported by $supp(f) \cup supp(X) \cup supp(Y) \cup supp(Z)$. 4. Suppose there exists a finitely supported injective mapping $f: X \to Y$. According to Proposition \ref{pco2}, there is a finitely supported surjective mapping $f':Y \to X$. Define the injective mapping $g:Z^{X}_{fs} \to Z^{Y}_{fs}$ by $g(h)=h \circ f'$. As in item 3 one can prove that $g$ is finitely supported by $supp(f') \cup supp(X) \cup supp(Y) \cup supp(Z)$. 5. Fix $x_{0}, x_{1} \in X$ with $x_{0} \neq x_{1}$ and $y_{0}, y_{1} \in Y$ with $y_{0}\neq y_{1}$. Define the injection $g: X+Y\to X \times Y$ by \[ g(u)=\left\{ \begin{array}{ll} (x, y_{0}), & \text{if}\: u=(0,x)\: \text{with}\: x \in X, x \neq x_{0};\\ (x_{0},y), & \text{if}\: u=(1,y)\: \text{with}\: y \in Y;\\ (x_{1}, y_{1}), & \text{if}\: u=(0,x_{0}).\end{array}\right. \] It follows that $g$ is supported by $supp(x_{0}) \cup supp(y_{0}) \cup supp(x_{1}) \cup supp(y_{1}) \cup supp(X) \cup supp(Y)$, and $g$ is injective.
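As a sanity check of the injection in item 5, one can run the same case analysis over small ordinary finite sets (a plain Python model that ignores the permutation actions; the sets $X$, $Y$ and the chosen elements are made up for the illustration):

```python
# Finite-model sketch of the injection g: X+Y -> X x Y from item 5.
# X, Y, x0, x1, y0, y1 are illustrative choices, not part of the FSM proof.
from itertools import product

X = {"x0", "x1", "x2"}
Y = {"y0", "y1", "y2"}
x0, x1 = "x0", "x1"
y0, y1 = "y0", "y1"

def g(u):
    tag, v = u
    if tag == 0 and v != x0:      # case u = (0, x) with x != x0
        return (v, y0)
    if tag == 1:                  # case u = (1, y)
        return (x0, v)
    return (x1, y1)               # remaining case u = (0, x0)

# The disjoint union X+Y is modeled by tagged pairs.
disjoint_union = [(0, x) for x in X] + [(1, y) for y in Y]
images = [g(u) for u in disjoint_union]

# g is injective: no two elements of X+Y share an image in X x Y.
assert len(set(images)) == len(disjoint_union)
assert set(images) <= set(product(X, Y))
```

The three cases are mutually exclusive, and their images are pairwise disjoint because $x_{0} \neq x_{1}$ and $y_{0} \neq y_{1}$, which is exactly why both sets need at least two elements.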
\end{proof} \begin{theorem} \label{comp} Let $(X, \cdot)$ be a finitely supported subset of an invariant set $(Z, \cdot)$. There exists a one-to-one mapping from $\wp_{fs}(X)$ onto $\{0,1\}^{X}_{fs}$ which is finitely supported by $supp(X)$, where $\wp_{fs}(X)$ is considered the family of those finitely supported subsets of $Z$ contained in $X$. \end{theorem} \begin{proof} Let $Y$ be a finitely supported subset of $Z$ contained in $X$, and $\varphi_{Y}$ be the characteristic function on $Y$, i.e. $\varphi_{Y}:X \to \{0,1\}$ is defined by \begin{center} $\varphi_{Y}(x)\overset{def}{=}\left\{ \begin{array}{ll} 1 & \text{for}\: x \in Y\\ 0 & \text{for}\: x\in X \setminus Y \end{array}\right.$.\\ \end{center} We prove that $\varphi_{Y}$ is a finitely supported function from $X$ to $\{0,1\}$ (according to Proposition \ref{p1}, $\{0,1\}$ is a trivial invariant set), and the mapping $Y \mapsto \varphi_{Y}$ defined on $\wp_{fs}(X)$ is also finitely supported in the sense of Definition~\ref{2.10-1}. First we prove that $\varphi_{Y}$ is supported by $supp(Y) \cup supp(X)$. Let $\pi \in Fix(supp(Y) \cup supp(X))$. Thus $\pi \star Y=Y$ (where $\star$ represents the canonical permutation action on $\wp(Z)$), and so $\pi \cdot x \in Y$ if and only if $x \in Y$. Since we additionally have $\pi \star X=X$, we obtain $\pi \cdot x \in X \setminus Y$ if and only if $x \in X \setminus Y$. Thus, $\varphi_{Y}(\pi\cdot x)=\varphi_{Y}(x)$ for all $x \in X$. Furthermore, because $\pi$ fixes $supp(X)$ pointwise we have $\pi \cdot x \in X$ for all $x \in X$, and from Proposition \ref{2.18'} we get that $\varphi_{Y}$ is supported by $supp(Y) \cup supp(X)$. We remark that $\{0,1\}^{X}_{fs}$ is a finitely supported subset of the set $(\wp_{fs}(Z \times \{0,1\}), \widetilde{\star})$. Let $\pi \in Fix(supp(X))$ and $f:X \to \{0,1\}$ finitely supported. 
We have $\pi\widetilde{\star} f=\{(\pi\cdot x,$ $\pi\diamond y)\,|\,(x,y)\in f\}=\{(\pi\cdot x,$ $y)\,|\,(x,y)\in f\}$ because $\diamond$ is the trivial action on $\{0,1\}$. Thus, $\pi\widetilde{\star}f $ is a function with the domain $\pi\star X=X$ which is finitely supported as an element of $(\wp (Z \times \{0,1\}), \widetilde{\star})$ according to Proposition~\ref{2.15}. Moreover, $(\pi \widetilde{\star} f)(\pi\cdot x)= f(x)$ for all $x \in X$ (1). According to Proposition \ref{2.18'}, to prove that the function $g:=Y \mapsto \varphi_{Y}$ defined on $\wp_{fs}(X)$ (with the codomain contained in $\{0,1\}^{X}_{fs}$) is supported by $supp(X)$, we have to prove that $\pi \widetilde{\star}g(Y)=g(\pi \star Y)$ for all $\pi \in Fix(supp(X))$ and all $Y \in \wp_{fs}(X)$ (where $\widetilde{\star}$ symbolizes the induced $S_{A}$-action on $\{0,1\}^{X}_{fs}$). This means that we need to verify the relation $\pi \widetilde{\star} \varphi_{Y} = \varphi_{\pi \star Y}$ for all $\pi \in Fix(supp(X))$ and all $Y \in \wp_{fs}(X)$. Let us consider $\pi \in Fix(supp(X))$ (which means $\pi \cdot x \in X$ for all $x \in X$) and $Y \in \wp_{fs}(X)$. For any $x \in X$, we know that $x \in \pi \star Y$ if and only if $\pi^{-1} \cdot x \in Y$. Thus, $\varphi_{Y}(\pi^{-1} \cdot x )=\varphi_{\pi \star Y}(x)$ for all $x \in X$, and so $(\pi \widetilde{\star} \varphi_{Y})(x) \overset{(1)}{=}\varphi_{Y}(\pi^{-1} \cdot x )=\varphi_{\pi \star Y}(x)$ for all $x\in X$. Moreover, from Proposition \ref{2.15}, $\pi \star Y$ is a finitely supported subset of $Z$ contained in $\pi \star X=X$, and $\{0,1\}^{X}_{fs}$ can be represented as a finitely supported subset of $\wp_{fs}(Z \times \{0,1\})$ (supported by $supp(X)$). According to Proposition \ref{2.18'} we have that~$g$ is a finitely supported function from $\wp_{fs}(X)$ to $\{0,1\}^{X}_{fs}$. Obviously, $g$ is one-to-one. Now we prove that $g$ is onto. Let us consider an arbitrary finitely supported function $f: X \to \{0,1\}$. 
Let $Y_{f}\overset{def}{=}\{x \in X\;|\; f(x)=1\}$. We claim that $Y_{f} \in \wp_{fs}(X)$. Let $\pi \in Fix(supp(f))$. According to Proposition \ref{2.18'} we have $\pi \cdot x \in X$ and $f(\pi \cdot x)=f(x)$ for all $x \in X$. Thus, for each $x \in Y_{f}$, we have $\pi \cdot x \in Y_{f}$. Therefore $\pi \star Y_{f}=Y_{f}$, and so $Y_{f}$ is finitely supported by $supp(f)$ as a subset of $Z$, and it is contained in $X$. A simple calculation shows us that $g(Y_{f})=f$, and so $g$ is onto. \end{proof} One can easily verify that the properties of $\leq$ presented in Proposition \ref{pco1} (1), (2) and (4) also hold for $\leq^{\star}$. We leave the details to the reader. \begin{theorem} \label{cardord1} There exists an invariant set $X$ (particularly the set $A$ of atoms) having the following properties. \begin{enumerate} \item $|X \times X| \nleq^{*} |\wp_{fs}(X)|$; \item $|X \times X| \nleq |\wp_{fs}(X)|$; \item $|X \times X| \nleq^{*} |X|$; \item $|X \times X| \nleq |X|$; \item For each $n \in \mathbb{N}, n \geq 2$ we have $|X| \lneq |\wp_{n}(X)| \lneq |\wp_{fs}(X)|$, where $\wp_{n}(X)$ is the family of all $n$-sized subsets of $X$; \item For each $n \in \mathbb{N}$ we have $|X| \lneq^{*} |\wp_{n}(X)| \lneq^{*} |\wp_{fs}(X)|$; \item $|X| \lneq |\wp_{fin}(X)| \lneq |\wp_{fs}(X)|$; \item $|X| \lneq^{*} |\wp_{fin}(X)| \lneq^{*} |\wp_{fs}(X)|$; \item $|\wp_{fs}(X) \times \wp_{fs}(X)| \nleq^{*} |\wp_{fs}(X)|$; \item $|\wp_{fs}(X) \times \wp_{fs}(X)| \nleq |\wp_{fs}(X)|$; \item $|X+X| \lneq^{*} |X\times X|$; \item $|X+X| \lneq |X\times X|$. \end{enumerate} \end{theorem} \begin{proof} 1. We prove that there does not exist a finitely supported surjective mapping $f: \wp_{fs}(A) \to A \times A$. Suppose, by contradiction, that there is a finitely supported surjective mapping $f: \wp_{fs}(A) \to A \times A$. Let us consider two atoms $a,b\notin supp(f)$ with $a \neq b$. These atoms exist because $A$ is infinite, while $supp(f) \subseteq A$ is finite.
It follows that the transposition $(a\, b)$ fixes each element from $supp(f)$, i.e. $(a\, b) \in Fix(supp(f))$. Since $f$ is surjective, it follows that there exists an element $X \in \wp_{fs}(A)$ such that $f(X)=(a,b)$. Since $supp(f)$ supports $f$ and $(a\, b) \in Fix(supp(f))$, from Proposition \ref{2.18'} we have $f((a\,b) \star X)=(a\,b) \otimes f(X)=(a\,b) \otimes (a,b)=((a\,b)(a), (a\,b)(b))=(b,a)$. Since $f$ is a function, we must have $(a\,b) \star X \neq X$. Otherwise, we would obtain $(a,b)=(b,a)$. We claim that if both $a,b \in supp(X)$, then $(a\,b)\star X=X$. Indeed, suppose $a,b \in supp(X)$. Since $X$ is a finitely supported subset of $A$, then $X$ is either finite or cofinite. If $X$ is finite, then $supp(X)=X$, and so $a,b \in X$. Moreover, $(a\, b)(a)=b$, $(a\, b)(b)=a$, and $(a\, b)(c)=c$ for all $c \in X$ with $c\neq a,b$. Therefore, $(a\,b) \star X=\{(a\,b)(x)\,|\,x \in X\}=\{(a\,b)(a)\} \cup \{(a\,b)(b)\} \cup \{(a\,b)(c)\,|\,c \in X \setminus\{a,b\}\}=\{b\} \cup \{a\} \cup (X \setminus \{a,b\})=X$. Now, if $X$ is cofinite, then $supp(X)=A \setminus X$, and so $a,b \in A \setminus X$. Since $a,b \notin X$, we have $a,b \neq x$ for all $x \in X$, and so $(a\,b)(x)=x$ for all $x \in X$. Thus, in this case we also have $(a\,b) \star X=X$. Since when both $a,b \in supp(X)$ we have $(a\,b)\star X=X$, it follows that one of $a$ or $b$ does not belong to $supp(X)$. Suppose $b \notin supp(X)$ (the other case is analogous). Let us consider $c\neq a,b$, $c \notin supp(f)$, $c \notin supp(X)$. Then $(b\, c) \in Fix(supp(X))$, and, because $supp(X)$ supports $X$, we have $(b\,c)\star X=X$. Furthermore, $(b\, c) \in Fix(supp(f))$, and by Proposition \ref{2.18'} we have $(a,b)=f(X)=f((b\,c) \star X)=(b\,c) \otimes f(X)=(b\,c) \otimes (a,b)=((b\,c)(a), (b\,c)(b))=(a,c)$, which is a contradiction because $b\neq c$. Thus, $|A \times A| \nleq^{*} |\wp_{fs}(A)|$. 2.
We prove that there does not exist a finitely supported injective mapping $f: A \times A \to \wp_{fs}(A)$. Suppose, by contradiction, that there is a finitely supported injective mapping $f: A \times A \to \wp_{fs}(A)$. According to Proposition~\ref{pco2}, one can define a finitely supported surjection $g: \wp_{fs}(A) \to A \times A$. This contradicts the above item. Thus, $|A \times A| \nleq |\wp_{fs}(A)|$. 3. We prove that there does not exist a finitely supported surjection $f: A \to A \times A$. Since there exists a surjection $s$ from $\wp_{fs}(A)$ onto $A$ defined by \[ s(X)=\left\{ \begin{array}{ll} x, & \text{if}\: \text{$X$ is a one-element set $\{x\}$ };\\ a, & \text{if}\: \text{$X$ is not a one-element set,}\: \end{array}\right. \] where $a$ is a fixed atom, and $s$ is finitely supported (by $\{a\}$), the result follows from item 1. Thus, $|A \times A| \nleq^{*} |A|$. 4. We prove that there does not exist a finitely supported injection $f: A \times A \to A$. Since there exists an equivariant injection from $A$ into $\wp_{fs}(A)$ defined as $x \mapsto \{x\}$, the result follows from item 2. Thus, $|A \times A| \nleq |A|$. Alternatively, one can prove that there does not exist a one-to-one mapping from $A \times A$ to $A$ (and so neither a finitely supported one). Suppose, by contradiction, that there is an injective mapping $i: A \times A \to A$. Let us fix two atoms $x$ and $y$ with $x \neq y$. The sets $\{i(a,x)\,|\,a \in A\}$ and $\{i(a,y)\,|\,a \in A\}$ are disjoint and infinite. Thus, $\{i(a,x)\,|\,a \in A\}$ is an infinite and coinfinite subset of $A$, which contradicts the fact that any subset of $A$ is either finite or cofinite. 5. We prove that $|A| \lneq |\wp_{n}(A)| \lneq |\wp_{fs}(A)|$ for all $n \in \mathbb{N}, n \geq 2$. Consider a family of pairwise different elements $a_{1},a_{2}, \ldots, a_{n-1}, a^{1}_{1}, \ldots, a^{n}_{1}, \ldots, a^{1}_{n-1}, \ldots, a^{n}_{n-1} \in A$.
Then $i: A \to \wp_{n}(A)$ defined by \[i(x)=\left\{ \begin{array}{ll} \{x,a_{1},a_{2}, \ldots, a_{n-1}\}, & \text{if}\: \text{$x \neq a_{1}, \ldots, a_{n-1}$ };\\ \{ a^{1}_{1}, \ldots, a^{n}_{1}\}, & \text{if}\: \text{$x=a_{1}$ }\\ \vdots \\ \{ a^{1}_{n-1}, \ldots, a^{n}_{n-1}\}, & \text{if}\: \text{$x=a_{n-1}$ } \: \end{array}\right.\] is obviously an injective mapping from $(A, \cdot)$ to $(\wp_{n}(A), \star)$. Furthermore, we can easily check that $i$ is supported by the finite set $\{a_{1},a_{2}, \ldots, a_{n-1}, a^{1}_{1}, \ldots, a^{n}_{1}, \ldots, a^{1}_{n-1}, \ldots, a^{n}_{n-1}\}$, and so $|A| \leq |\wp_{n}(A)|$ in FSM. We claim that there does not exist a finitely supported injection from $\wp_{n}(A)$ into $A$. Assume on the contrary that there exists a finitely supported injection $f:\wp_{n}(A) \to A$. First, we claim that, for any $Y \in \wp_{n}(A)$ which is disjoint from $supp(f)$, we have $f(Y) \notin Y$. Assume by contradiction that $f(Y) \in Y$ for a fixed $Y$ with $Y \cap supp(f)=\emptyset$. Let $\pi$ be a permutation of atoms which fixes $supp(f)$ pointwise, and interchanges all the elements of $Y$ (e.g. $\pi$ is a cyclic permutation of $Y$). Since $\pi$ permutes all the elements of $Y$, we have $\pi \cdot f(Y)=\pi(f(Y)) \neq f(Y)$. However, $\pi \star Y =\{\pi(a_{1}), \ldots, \pi(a_{n})\}=\{a_{1}, \ldots, a_{n}\}=Y$. Since $\pi$ fixes $supp(f)$ pointwise and~$supp(f)$ supports $f$, we have $\pi (f(Y))=\pi \cdot f(Y)= f(\pi \star Y)=f(Y)$, a contradiction. Since $supp(f)$ is finite, there are infinitely many such $Y$ with the property that $Y \cap supp(f)=\emptyset$. Thus, because it is injective, $f$ takes infinitely many values on those $Y$. Since $supp(f)$ is finite, there exists at least one element $Z \in \wp_{n}(A)$ such that $Z \cap supp(f)=\emptyset$ and $f(Z) \notin supp(f)$. Thus, $f(Z) = a$ for some $a \in A \setminus (Z \cup supp(f))$.
Let $b \in A \setminus (supp(f) \cup Z \cup \{a\})$ and let $\pi = (a\, b)$. Then $\pi \in Fix(supp(f) \cup Z)$, and hence $f(Z) =f((a\,b) \star Z)=(a\,b)(f(Z))= b$, a contradiction. We obtained that $|A| \neq |\wp_{n}(A)|$ in FSM, and so $|A|<|\wp_{n}(A)|$. We obviously have $|\wp_{n}(A)| \leq |\wp_{fs}(A)|$. We prove below that there does not exist a finitely supported injective mapping from $\wp_{fs}(A)$ onto one of its finitely supported proper subsets, i.e. any finitely supported injection $f:\wp_{fs}(A) \rightarrow \wp_{fs}(A)$ is also surjective. Let us consider a finitely supported injection $f:\wp_{fs}(A) \rightarrow \wp_{fs}(A)$. Suppose, by contradiction, $Im(f) \subsetneq \wp_{fs}(A)$. This means that there exists $X_{0}\in\wp_{fs}(A)$ such that $X_{0}\notin Im(f)$. Since $f$ is injective, we can define an infinite sequence $\mathcal{F}=(X_{n})_{n}$ starting from $X_{0}$, with distinct terms of the form $X_{n+1}=f(X_{n})$ for all $n \in \mathbb{N}$. Furthermore, according to Proposition \ref{2.18'}, for a fixed $k \in \mathbb{N}$ and $\pi \in Fix(supp(f) \cup supp(X_{k}))$, we have $\pi \star X_{k+1}=\pi \star f(X_{k})=f(\pi \star X_{k})=f(X_{k})=X_{k+1}$. Then, $supp(X_{n+1}) \subseteq supp(f)\cup supp(X_{n})$ for all $n \in \mathbb{N}$, and by induction on $n$ we have that $supp(X_{n}) \subseteq supp(f)\cup supp(X_{0})$ for all $n \in \mathbb{N}$. We obtained that each element $X_{n}\in \mathcal{F}$ is supported by the same finite set $S:=supp(f)\cup supp(X_{0})$. However, there exist only finitely many subsets of $A$ (i.e. only finitely many elements in $\wp_{fs}(A)$) supported by $S$, namely the subsets of $S$ and the supersets of $A\setminus S$ (where a superset of $A \setminus S$ is of the form $A\setminus X$ with $X \subseteq S$). This contradicts the fact that the terms of the infinite sequence $(X_{n})_{n}$ are pairwise distinct.
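The counting fact just used, that a fixed finite set $S$ supports only finitely many subsets, can be illustrated in a small finite model (plain Python, with the atoms replaced by a finite set of integers; this is only an analogy, since a genuine set of atoms is infinite, so the "cofinite" half of the dichotomy degenerates here):

```python
# Finite-model sketch: subsets of A = {0,...,4} invariant under every
# permutation fixing S = {0, 1} pointwise. In this model these are exactly
# the sets X with X intersect (A \ S) equal to the empty set or to all of
# A \ S, mirroring "subsets of S and supersets of A \ S".
from itertools import combinations, permutations

A = set(range(5))
S = {0, 1}
rest = sorted(A - S)

def invariant_under_fix_S(X):
    # invariance under all permutations of A that fix S pointwise
    for perm in permutations(rest):
        mapping = {a: a for a in S}
        mapping.update(dict(zip(rest, perm)))
        if {mapping[x] for x in X} != X:
            return False
    return True

all_subsets = [set(c) for r in range(len(A) + 1)
               for c in combinations(sorted(A), r)]
supported_by_S = [X for X in all_subsets if invariant_under_fix_S(X)]

# 2^|S| subsets of S, plus their unions with A \ S: 2 * 2^|S| = 8 in total.
assert len(supported_by_S) == 8
assert all(X & set(rest) in (set(), set(rest)) for X in supported_by_S)
```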
Thus, $f$ is surjective, and so there cannot exist a finitely supported bijection between $\wp_{n}(A)$ and $\wp_{fs}(A)$, which means $|\wp_{n}(A)| \neq |\wp_{fs}(A)|$. 6. Fix $n \in \mathbb{N}$. As in the above item, there exists neither a finitely supported bijection between $\wp_{n}(A)$ and $\wp_{fs}(A)$, nor a finitely supported bijection between $A$ and $\wp_{n}(A)$. However, there exists a finitely supported injection $i: A \to \wp_{n}(A)$. Fix an atom $a \in A$. The mapping $s: \wp_{n}(A) \to A $ defined by \[ s(X)=\left\{ \begin{array}{ll} i^{-1}(X), & \text{if}\: \text{$X\in Im(i)$ };\\ a, & \text{if}\: \text{$X \notin Im(i)$}\: \end{array}\right. \] is supported by $supp(i) \cup \{a\}$ and is surjective. Now, fix $n$ atoms $x_{1}, \ldots, x_{n}$. The mapping $g:\wp_{fs}(A) \to \wp_{n}(A)$ defined by \[g(X)=\left\{ \begin{array}{ll} X, & \text{if}\: \text{$X\in \wp_{n}(A)$ };\\ \{x_{1}, \ldots, x_{n}\}, & \text{if}\: \text{$X \notin \wp_{n}(A)$}\: \end{array}\right. \] is supported by $\{x_{1}, \ldots, x_{n}\}$ and is surjective. 7. We prove that $|A| \lneq |\wp_{fin}(A)| \lneq |\wp_{fs}(A)|$. We obviously have that $|A| \leq |\wp_{fin}(A)|$ by taking the equivariant injective mapping $f:A \to \wp_{fin}(A)$ defined by $f(a)=\{a\}$ for all $a \in A$. We prove, by contradiction, that there is no finitely supported surjection from $A$ onto $\wp_{fin}(A)$. Assume that $g:A \to \wp_{fin}(A)$ is a finitely supported surjection. Let us fix two atoms $x$ and $y$ with $x \neq y$. We define the function $h: \wp_{fin}(A) \to \wp_{2}(A)$ by $ h(X)=\left\{ \begin{array}{ll} X, & \text{if}\: \text{$|X|=2$ };\\ \{x,y\}, & \text{if}\: \text{$|X| \neq 2$}\: .\end{array}\right. $ Since for every $\pi \in S_{A}$ and $X \in \wp_{fin}(A)$ we have $|\pi \star X|=|X|$, we conclude that $h$ is finitely supported by $\{x,y\}$. Thus, $h \circ g$ is a surjection from $A$ onto $\wp_{2}(A)$ supported by $supp(g)\cup\{x,y\}$, which contradicts the previous item. Therefore, $|A|<|\wp_{fin}(A)|$.
Since every element in $\wp_{fin}(A)$ belongs to $\wp_{fs}(A)$, but there does not exist a finitely supported injective mapping from $\wp_{fs}(A)$ onto one of its finitely supported proper subsets, we also have $|\wp_{fin}(A)|<|\wp_{fs}(A)|$. 8. As in the above item, there exists neither a finitely supported bijection between $\wp_{fin}(A)$ and $\wp_{fs}(A)$, nor a finitely supported bijection between $A$ and $\wp_{fin}(A)$. Fix an atom $a \in A$. The mapping $s: \wp_{fin}(A) \to A$ defined by \[ s(X)=\left\{ \begin{array}{ll} x, & \text{if}\: \text{$X$ is a one-element set $\{x\}$ };\\ a, & \text{if}\: \text{$X$ is not a one-element set}\: \end{array}\right. \] is supported by $\{a\}$ and is surjective. Now, fix an atom $b$. The mapping $g:\wp_{fs}(A) \to \wp_{fin}(A)$ defined by \[g(X)=\left\{ \begin{array}{ll} X, & \text{if}\: \text{$X\in \wp_{fin}(A)$ };\\ \{b\}, & \text{if}\: \text{$X \notin \wp_{fin}(A)$}\: \end{array}\right. \] is supported by $\{b\}$ and is surjective. 9. According to Theorem \ref{cardord1}(1), there is no finitely supported surjection from $\wp_{fs}(A)$ onto $A \times A$. Suppose there is a finitely supported surjective mapping $f: \wp_{fs}(A) \to \wp_{fs}(A) \times \wp_{fs}(A)$. Obviously, there exists a finitely supported surjection $s:\wp_{fs}(A) \to A$ defined by \[ s(X)=\left\{ \begin{array}{ll} a, & \text{if}\: \text{$X$ is a one-element set $\{a\}$ };\\ x, & \text{if}\: \text{$X$ is not a one-element set}\: ,\end{array}\right. \] where $x$ is a fixed atom of $A$. The surjection $s$ is supported by $supp(x)=\{x\}$. Thus, we can define a surjection $g:\wp_{fs}(A) \times \wp_{fs}(A) \to A \times A$ by $g(X,Y)=(s(X),s(Y))$ for all $X,Y \in \wp_{fs}(A)$. Let $\pi \in Fix(supp(s))$.
Since $supp(s)$ supports $s$, by Proposition \ref{2.18'} we have $g(\pi \otimes_{\star} (X,Y))=g(\pi \star X,\pi \star Y)=(s(\pi \star X),s(\pi \star Y))=(\pi \cdot s(X),\pi \cdot s(Y))=\pi \otimes (s(X),s(Y))$ for all $X,Y \in \wp_{fs}(A)$, where $\otimes_{\star}$ and $\otimes$ represent the $S_{A}$-actions on $\wp_{fs}(A) \times \wp_{fs}(A)$ and $A \times A$, respectively. Thus, $supp(s)$ supports $g$, and so $supp(g) \subseteq supp(s)$. Furthermore, the function $h=g \circ f: \wp_{fs}(A) \to A \times A$ is surjective and finitely supported by $supp(s) \cup supp(f)$. This is a contradiction, and so $|\wp_{fs}(A) \times \wp_{fs}(A)| \nleq^{*} |\wp_{fs}(A)|$. 10. Suppose, by contradiction, that there is a finitely supported injective mapping $f: \wp_{fs}(A) \times \wp_{fs}(A) \to \wp_{fs}(A)$. In view of Proposition \ref{pco2}, let us fix two finitely supported subsets of $A$, namely $U$ and $V$. We define the function $g: \wp_{fs}(A) \to \wp_{fs}(A) \times \wp_{fs}(A)$ by \[ g(X)=\left\{ \begin{array}{ll} f^{-1}(X), & \text{if}\: \text{$X\in Im(f)$ };\\ (U,V), & \text{if}\: \text{$X \notin Im(f)$}\: .\end{array}\right. \] Clearly, $g$ is surjective. Furthermore, $g$ is supported by $supp(f) \cup supp(U) \cup supp(V)$ (the proof uses the fact that $Im(f)$ is a subset of $\wp_{fs}(A)$ supported by $supp(f)$). This contradicts the above item, and so $|\wp_{fs}(A) \times \wp_{fs}(A)| \nleq |\wp_{fs}(A)|$. 11. In view of Proposition \ref{pco1}(5), there is a finitely supported injection from $A+A$ into $A \times A$, and a finitely supported surjection from $A \times A$ onto $A+A$ according to Proposition \ref{pco2}. Thus, $|A+A|\leq |A \times A|$ and $|A+A|\leq^{*}|A \times A|$. Fix three different atoms $a,b,c \in A$.
Define the mapping $f:A+A \to \wp_{fs}(A)$ by \[ f(u)=\left\{ \begin{array}{ll} \{x\}, & \text{if}\: u=(0,x)\: \text{with}\: x \in A;\\ \{a,y\}, & \text{if}\: u=(1,y)\: \text{with}\: y \in A, y \neq a;\\ \{b,c\}, & \text{if}\: u=(1,a)\end{array}\right. \] One can directly prove that $f$ is injective and supported by $\{a,b,c\}$. According to Proposition \ref{pco2}, we have $|A+A|\leq^{*}|\wp_{fs}(A)|$. If we had $|A \times A|=|A+A|$, we would obtain $|A\times A| \leq^{*} |\wp_{fs}(A)|$, which contradicts item 1. 12. According to the above item, $|A+A|\leq |\wp_{fs}(A)|$. If we had $|A \times A|=|A+A|$, we would obtain $|A\times A| \leq |\wp_{fs}(A)|$, which contradicts item 2. \end{proof} \begin{proposition} \label{cardord2'} There exists an invariant set $X$ having the following properties: \begin{enumerate} \item $|X| \lneq |X|+|X|$; \item $|X| \lneq^{*} |X|+|X|$. \end{enumerate} \end{proposition} \begin{proof} 1. First we prove that in FSM we have $|\wp_{fs}(A)|=2|\wp_{fin}(A)|$. Let us consider the function $f:\wp_{fin}(A) \to \wp_{cofin}(A)$ defined by $f(U)=A \setminus U$ for all $U \in \wp_{fin}(A)$. Clearly, $f$ is bijective. We claim that $f$ is equivariant. Indeed, let $\pi \in S_{A}$. To prove that $f(\pi \star U)=\pi \star f(U)$ for all $U \in \wp_{fin}(A)$, we have to prove that $A \setminus (\pi \star U)=\pi \star (A \setminus U)$ for all $U \in \wp_{fin}(A)$. Let $y \in A \setminus (\pi \star U)$. We can express $y$ as $y=\pi \cdot (\pi^{-1} \cdot y)$. If $\pi^{-1} \cdot y \in U$, then $y \in \pi \star U$, which is a contradiction. Thus, $\pi^{-1} \cdot y \in (A \setminus U)$, and so $y \in \pi \star (A \setminus U)$. Conversely, if $y \in \pi \star (A \setminus U)$, then $y=\pi\cdot x$ with $x \in A \setminus U$. Suppose $y \in \pi \star U$. Then $y=\pi\cdot z$ with $z \in U$. Thus, $x=z$, which is a contradiction, and so $y \in A \setminus (\pi \star U)$. Since $f$ is equivariant and bijective, it follows that $|\wp_{fin}(A)|=|\wp_{cofin}(A)|$.
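The equivariance computation above, $f(\pi \star U)=\pi \star f(U)$ for $f(U)=A \setminus U$, amounts to the fact that complementation commutes with bijections. The sketch below checks this exhaustively in a small finite model of the atoms; it is illustrative only (a finite model cannot exhibit the finite/cofinite dichotomy of $\wp_{fs}(A)$), and the names `ATOMS` and `act` are ad hoc.

```python
from itertools import permutations, combinations

# A small finite model of the atoms (illustrative only; A is infinite in FSM).
ATOMS = frozenset({0, 1, 2, 3, 4})

def act(pi, U):
    """The S_A-action on subsets: pi * U = {pi(a) | a in U}."""
    return frozenset(pi[a] for a in U)

def f(U):
    """The complement map from the proof: f(U) = A \\ U."""
    return ATOMS - U

subsets = [frozenset(c) for r in range(len(ATOMS) + 1)
           for c in combinations(sorted(ATOMS), r)]

# f is equivariant: f(pi * U) = pi * f(U) for every permutation pi.
for perm in permutations(sorted(ATOMS)):
    pi = dict(zip(sorted(ATOMS), perm))
    assert all(f(act(pi, U)) == act(pi, f(U)) for U in subsets)
print("the complement map is equivariant in this finite model")
```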
However, every finitely supported subset of $A$ is either finite or cofinite, and so $\wp_{fs}(A)$ is the union of the disjoint subsets $\wp_{fin}(A)$ and $\wp_{cofin}(A)$. Thus, $|\wp_{fs}(A)|=2|\wp_{fin}(A)|$. Moreover, there exists an equivariant injection $i: \wp_{fin}(A) \to \wp_{fs}(A)$ defined by $i(U)=U$ for all $U \in \wp_{fin}(A)$. However, there does not exist a finitely supported one-to-one mapping from $\wp_{fs}(A)$ onto one of its finitely supported proper subsets. Thus, there could not exist a bijection $f: \wp_{fs}(A) \to \wp_{fin}(A)$. Therefore, $|\wp_{fin}(A)| \neq |\wp_{fs}(A)|=2|\wp_{fin}(A)|$. We can consider $X=\wp_{fin}(A)$ or $X=\wp_{cofin}(A)$. 2. It remains to prove that there is a finitely supported surjection from $\wp_{fs}(A)$ onto $\wp_{fin}(A)$. We can either invoke Proposition~\ref{pco2} or construct the surjection explicitly, as below. Fix $a \in A$. We define $g:\wp_{fs}(A) \to \wp_{fin}(A)$ by \[ g(U)=\left\{ \begin{array}{ll} U, & \text{if}\: \text{$U\in \wp_{fin}(A)$ };\\ \{a\}, & \text{if}\: \text{$U \notin \wp_{fin}(A)$}\: .\end{array}\right. \] Clearly, $g$ is supported by $\{a\}$ and surjective. We can consider $X=\wp_{fin}(A)$ or $X=\wp_{cofin}(A)$. \end{proof} \section{Forms of Infinity in Finitely Supported Structures} \label{chap9} The equivalence of the various definitions of infinity is provable in ZF in the presence of the axiom of choice. Since in FSM the axiom of choice fails, our goal is to study various FSM forms of infinity and to provide several relations between them. \begin{definition} Let $X$ be a finitely supported subset of an invariant set. \begin{enumerate} \item $X$ is called \emph{FSM usual infinite} if there is no one-to-one correspondence between $X$ and a finite ordinal. We simply call \emph{infinite} an FSM usual infinite set.
\item $X$ is \emph{FSM covering infinite} if there is a finitely supported directed family $\mathcal{F}$ of finitely supported sets with the property that $X$ is contained in the union of the members of $\mathcal{F}$, but there does not exist $Z\in \mathcal{F}$ such that $X\subseteq Z$. \item $X$ is called \emph{FSM Tarski I infinite} if there exists a finitely supported one-to-one mapping of $X$ onto $X \times X$. \item $X$ is called \emph{FSM Tarski II infinite} if there exists a finitely supported family of finitely supported subsets of $X$, totally ordered by inclusion, having no maximal element. \item $X$ is called \emph{FSM Tarski III infinite} if $|X|=2|X|$. \item $X$ is called \emph{FSM Mostowski infinite} if there exists an infinite finitely supported totally ordered subset of $X$. \item $X$ is called \emph{FSM Dedekind infinite} if there exists a finitely supported one-to-one mapping of $X$ onto a finitely supported proper subset of $X$. \item $X$ is \emph{FSM ascending infinite} if there is a finitely supported increasing countable chain of finitely supported sets $X_{0}\subseteq X_{1}\subseteq\ldots\subseteq X_{n}\subseteq\ldots$ with $X\subseteq\cup X_{n}$, but there does not exist $n\in\mathbb{N}$ such that $X\subseteq X_{n}$. \end{enumerate} \end{definition} Note that in the definition of FSM Tarski II infinity for a certain $X$, the existence of a finitely supported family of finitely supported subsets of $X$ is required, while in the definition of FSM ascending infinity for $X$, the related family of finitely supported subsets of $X$ has to be FSM countable (i.e. the mapping $n \mapsto X_{n}$ should be finitely supported). It is immediate that if $X$ is FSM ascending infinite, then it is also FSM Tarski II infinite. \begin{theorem} Let $X$ be a finitely supported subset of an invariant set. Then $X$ is FSM usual infinite if and only if $X$ is FSM covering infinite. \end{theorem} \begin{proof} Let us suppose that $X$ is FSM usual infinite.
Let $\mathcal{F}$ be the family of all FSM usual non-infinite (FSM usual finite) subsets of $X$ ordered by inclusion. Since $X$ is finitely supported, it follows that $\mathcal{F}$ is supported by $supp(X)$. Moreover, since all the elements of $\mathcal{F}$ are finite sets, it follows that all the elements of $\mathcal{F}$ are finitely supported. Clearly,~$\mathcal{F}$ is directed and $X$ is the union of the members of $\mathcal{F}$. Suppose, by contradiction, that $X$ is not FSM covering infinite. Then there exists $Z\in\mathcal{F}$ such that $X\subseteq Z$. Therefore, $X$ would be FSM usual finite, which contradicts our original assumption. Conversely, assume that $X$ is FSM covering infinite. Suppose, by contradiction, that $X$ is FSM usual finite, i.e. $X=\{x_{1}, \ldots, x_{n}\}$. Let $\mathcal{F}$ be a directed family such that $X$ is contained in the union of the members of $\mathcal{F}$ (at least one such family exists, for example $\wp_{fs}(X)$). Then for each $i \in \{1, \ldots, n\}$ there exists $F_{i} \in \mathcal{F}$ such that $x_{i} \in F_{i}$. Since $\mathcal{F}$ is directed, there is $Z \in \mathcal{F}$ such that $F_{i} \subseteq Z$ for all $i \in \{1, \ldots, n\}$, and so $X \subseteq Z$ with $Z\in \mathcal{F}$, which is a contradiction. \end{proof} \begin{theorem} \label{ti1} The following properties of FSM Dedekind infinite sets hold. \begin{enumerate} \item Let $X$ be a finitely supported subset of an invariant set $Y$. Then $X$ is FSM Dedekind infinite if and only if there exists a finitely supported one-to-one mapping $f: \mathbb{N} \to X$. As a consequence, an FSM superset of an FSM Dedekind infinite set is FSM Dedekind infinite, and an FSM subset of an FSM set that is not Dedekind infinite is also not FSM Dedekind infinite. \item Let $X$ be an infinite finitely supported subset of an invariant set $Y$. Then the sets $\wp_{fs}(\wp_{fin}(X))$ and $\wp_{fs}(T_{fin}(X))$ are FSM Dedekind infinite.
\item Let $X$ be an infinite finitely supported subset of an invariant set $Y$. Then the set $\wp_{fs}(\wp_{fs}(X))$ is FSM Dedekind infinite. \item Let $X$ be a finitely supported subset of an invariant set $Y$ such that $X$ does not contain an infinite subset $Z$ with the property that all the elements of $Z$ are supported by the same set of atoms. Then $X$ is not FSM Dedekind infinite. \item Let $X$ be a finitely supported subset of an invariant set $Y$ such that $X$ does not contain an infinite subset $Z$ with the property that all the elements of $Z$ are supported by the same set of atoms. Then $\wp_{fin}(X)$ is not FSM Dedekind infinite. \item Let $X$ and $Y$ be two finitely supported subsets of an invariant set $Z$. If neither $X$ nor $Y$ is FSM Dedekind infinite, then $X \times Y$ is not FSM Dedekind infinite. \item Let $X$ and $Y$ be two finitely supported subsets of an invariant set $Z$. If neither $X$ nor $Y$ is FSM Dedekind infinite, then $X + Y$ is not FSM Dedekind infinite. \item Let $X$ be a finitely supported subset of an invariant set $Y$. Then $\wp_{fs}(X)$ is FSM Dedekind infinite if and only if $X$ is FSM ascending infinite. \item Let $X$ be a finitely supported subset of an invariant set $Y$. If $X$ is FSM Dedekind infinite, then $X$ is FSM ascending infinite. The reverse implication is not valid. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item Let us suppose that $(X, \cdot)$ is FSM Dedekind infinite, and $g: X \rightarrow X$ is an injection supported by the finite set $S \subsetneq A$ with the property that $Im(g) \subsetneq X$. This means that $supp(g) \subseteq S$ and there exists $x_{0}\in X$ such that $x_{0}\notin Im(g)$. We can form a sequence of elements from~$X$ which has the first term $x_{0}$ and the general term $x_{n+1}=g(x_{n})$ for all $n \in \mathbb{N}$. Since $x_{0}\notin Im(g)$, it follows that $x_{0} \neq g(x_{0})$.
Since $g$ is injective and $x_{0} \notin Im(g)$, by induction we obtain that $g^{n}(x_{0}) \neq g^{m}(x_{0})$ for all $n,m \in \mathbb{N}$ with $n \neq m$. Furthermore,~$x_{n+1}$ is supported by $supp(g)\cup supp(x_{n})$ for all $n \in \mathbb{N}$. Indeed, let $\pi \in Fix(supp(g)\cup supp(x_{n}))$. According to Proposition~\ref{2.18'}, $\pi \cdot x_{n+1}= \pi \cdot g(x_{n})=g(\pi \cdot x_{n})=g(x_{n})=x_{n+1}$. Since $supp(x_{n+1})$ is the least set supporting $x_{n+1}$, we obtain $supp(x_{n+1}) \subseteq supp(g)\cup supp(x_{n})$ for all $n \in \mathbb{N}$. By finite recursion, we have $supp(x_{n}) \subseteq supp(g)\cup supp(x_{0})$ for all $n\in\mathbb{N}$. Since all $x_{n}$ are supported by the same set of atoms $supp(g)\cup supp(x_{0})$, we have that the function $f:\mathbb{N} \to X$, defined by $f(n)=x_{n}$, is also finitely supported (by the set $supp(g)\cup supp(x_{0}) \cup supp(X)$ not depending on $n$). Indeed, for any $\pi \in Fix(supp(g)\cup supp(x_{0}) \cup supp(X))$ we have $f(\pi \diamond n)=f(n)=x_{n}=\pi \cdot x_{n}=\pi\cdot f(n)$, $\forall n \in \mathbb{N}$, where by $\diamond$ we denoted the trivial $S_{A}$-action on $\mathbb{N}$. Furthermore, because $\pi$ fixes $supp(X)$ pointwise we have $\pi \cdot f(n) \in X$ for all $n \in \mathbb{N}$. From Proposition \ref{2.18'} we have that $f$ is finitely supported. Obviously, $f$ is also injective. Conversely, suppose there exists a finitely supported injective mapping $f: \mathbb{N} \to X$. According to Proposition~\ref{2.18'}, it follows that for any $\pi \in Fix(supp(f))$ we have $\pi \cdot f(n)=f(\pi \diamond n)=f(n)$ and $\pi \cdot f(n) \in X$ for all $n \in \mathbb{N}$. Let us define $g:X \to X$ by \[ g(x)=\left\{ \begin{array}{ll} f(n+1), & \text{if}\: \text{there exists $n \in \mathbb{N}$ with $x=f(n)$};\\ x, & \text{if}\: \text{$x \notin Im(f)$}\: .\end{array}\right. \] We claim that $g$ is supported by $supp(f) \cup supp(X)$. 
Indeed, let us consider $\pi \in Fix(supp(f) \cup supp(X))$ and $x \in X$. If there is some $n$ such that $x=f(n)$, we have that $\pi \cdot x=\pi \cdot f(n)=f(n)$, and so $g(\pi \cdot x)=g(f(n))=f(n+1)=\pi \cdot f(n+1)=\pi \cdot g(x)$. If $x \notin Im(f)$, we prove by contradiction that $\pi \cdot x \notin Im(f)$. Indeed, suppose that $\pi \cdot x \in Im(f)$. Then there is $y \in \mathbb{N}$ such that $\pi \cdot x=f(y)$ or, equivalently, $x = \pi^{-1} \cdot f(y)$. However, since $\pi \in Fix(supp(f))$, from Proposition \ref{2.18'} we have $\pi^{-1} \cdot f(y)=f(\pi^{-1} \diamond y)$, and so we get $x=f(\pi^{-1} \diamond y)=f(y) \in Im(f)$ which contradicts the assumption that $x \notin Im(f)$. Thus, $\pi \cdot x \notin Im(f)$, and so $g(\pi \cdot x)=\pi \cdot x=\pi \cdot g(x)$. We obtained that $g(\pi \cdot x)=\pi \cdot g(x)$ for all $x \in X$ and all $\pi \in Fix(supp(f) \cup supp(X))$. Furthermore, $\pi \cdot g(x) \in \pi \star X=X$ (where by $\star$ we denoted the $S_{A}$-action on $\wp_{fs}(Y)$), and so $g$ is finitely supported. Since $f$ is injective, it follows immediately that $g$ is injective. Furthermore, $Im(g)=X \setminus \{f(0)\}$, which is a proper subset of $X$, finitely supported by $supp(f) \cup supp(X)$. \item The family $\wp_{fin}(X)$ represents the family of the finite subsets of $X$ (these subsets of $X$ are finitely supported as subsets of the invariant set $Y$ in the sense of Definition \ref{2.14}). Obviously, $\wp_{fin}(X)$ is a finitely supported subset of the invariant set $\wp_{fs}(Y)$, supported by $supp(X)$. This is because whenever $Z$ is an element of $\wp_{fin}(X)$ (i.e. whenever $Z$ is a finite subset of $X$) and $\pi$ fixes $supp(X)$ pointwise, we have that $\pi \star Z$ is also a finite subset of $X$.
The family $\wp_{fs}(\wp_{fin}(X))$ represents the family of those subsets of $\wp_{fin}(X)$ which are finitely supported as subsets of the invariant set $\wp_{fs}(Y)$ in the sense of Definition \ref{2.14}. As above, according to Proposition \ref{2.15}, we have that $\wp_{fs}(\wp_{fin}(X))$ is a finitely supported subset of the invariant set $\wp_{fs}(\wp_{fs}(Y))$, supported by $supp(\wp_{fin}(X)) \subseteq supp(X)$. Let $X_{i}$ be the set of all $i$-sized subsets from $X$, i.e. $X_{i}=\{Z \subseteq X\,|\,|Z|=i\}$. Since $X$ is infinite, it follows that each $X_{i}, i \geq 1$ is non-empty. Obviously, we have that any $i$-sized subset $\{x_{1}, \ldots, x_{i}\}$ of $X$ is finitely supported (as a subset of $Y$) by $supp(x_{1}) \cup \ldots \cup supp(x_{i})$. Therefore, $X_{i} \subseteq \wp_{fin}(X)$ and $X_{i} \subseteq \wp_{fs}(Y)$ for all $i \in \mathbb{N}$. Since $\cdot$ is a group action, the image of an $i$-sized subset of $X$ under an arbitrary permutation is an $i$-sized subset of $Y$. However, any permutation of atoms that fixes $supp(X)$ pointwise also leaves $X$ invariant, and so for any permutation $\pi \in Fix(supp(X))$ we have that $\pi \star Z$ is an $i$-sized subset of $X$ whenever $Z$ is an $i$-sized subset of $X$. Thus, each $X_{i}$ is a subset of $\wp_{fin}(X)$ finitely supported by $supp(X)$, and so~$X_{i} \in \wp_{fs}(\wp_{fin}(X))$. We define $f: \mathbb{N} \to \wp_{fs}(\wp_{fin}(X))$ by $f(n)=X_{n}$. We claim that $supp(X)$ supports $f$. Indeed, let $\pi \in Fix(supp(X))$. Since $supp(X)$ supports $X_{n}$ for all $n \in \mathbb{N}$, we have $\pi \star f(n)=\pi \star X_{n}=X_{n}=f(n)= f(\pi \diamond n)$ (where $\diamond$ is the trivial $S_{A}$-action on $\mathbb{N}$) and $\pi \star f(n)=\pi \star X_{n}=X_{n} \in \wp_{fs}(\wp_{fin}(X))$ for all $n \in \mathbb{N}$. According to Proposition \ref{2.18'}, we have that $f$ is finitely supported. 
Furthermore, $f$ is injective and, by item 1, we have that $\wp_{fs}(\wp_{fin}(X))$ is FSM Dedekind infinite. If we consider the set $Y_{i}$ of all $i$-sized injective tuples formed by elements of $X$, we have that each $Y_{i}$ is a subset of $T_{fin}(X)$ supported by $supp(X)$, and the family $(Y_{i})_{i \in \mathbb{N}}$ is a countably infinite, uniformly supported subset of $\wp_{fs}(T_{fin}(X))$. From item 1 we get that $\wp_{fs}(T_{fin}(X))$ is FSM Dedekind infinite. \item The proof is actually the same as in the above item because every $X_{i} \in \wp_{fs}(\wp_{fs}(X))$. \item If $X$ does not contain an infinite uniformly supported subset, then there does not exist a finitely supported injective mapping $f:\mathbb{N} \to X$ (the image of such a mapping would be an infinite subset of $X$, all of whose elements are supported by $supp(f)$), and so, by item 1, $X$ cannot be FSM Dedekind infinite. \item We prove the following lemma: \begin{lemma} \label{lem4} Let $X$ be a finitely supported subset of an invariant set $Y$ such that $X$ does not contain an infinite uniformly supported subset. Then the set $\wp_{fin}(X)=\{Z\!\subseteq\! X\,|\, Z\, \text{finite}\}$ does not contain an infinite uniformly supported subset. \end{lemma} \emph{Proof of Lemma \ref{lem4}.} Suppose, by contradiction, that the set $\wp_{fin}(X)$ contains an infinite subset $\mathcal{F}$ such that all the elements of $\mathcal{F}$ are different and supported by the same finite set $S$. Therefore, we can express $\mathcal{F}$ as $\mathcal{F}=(X_{i})_{i \in I} \subseteq \wp_{fin}(X)$ with the properties that $X_{i} \neq X_{j}$ whenever $i \neq j$ and $supp(X_{i}) \subseteq S$ for all $i \in I$. Fix an arbitrary $j \in I$. From Proposition \ref{4.4-9}, because $supp(X_{j})=\underset{x \in X_{j}}{\cup}supp(x)$, we have that $supp(x) \subseteq S$ for all $x \in X_{j}$.
Since $j$ has been arbitrarily chosen from $I$, it follows that every element from every set of the form $X_{i}$ is supported by $S$, and so $\underset{i}{\cup}X_{i}$ is a uniformly supported subset of $X$ (all its elements being supported by $S$). Furthermore, $\underset{i \in I}{\cup}X_{i}$ is infinite because the family $(X_{i})_{i \in I}$ is infinite and $X_{i} \neq X_{j}$ whenever $i \neq j$. Otherwise, if $\underset{i}{\cup}X_{i}$ were finite, the family $(X_{i})_{i \in I}$ would be contained in the finite set $\wp (\underset{i}{\cup}X_{i})$, and so it could not be infinite with the property that $X_{i} \neq X_{j}$ whenever $i \neq j$. We were able to construct an infinite uniformly supported subset of $X$, namely $\underset{i}{\cup}X_{i}$, and this contradicts the hypothesis that~$X$ does not contain an infinite uniformly supported subset. \emph{Proof of this item.} According to the above lemma, if $X$ does not contain an infinite uniformly supported subset, then $\wp_{fin}(X)$ does not contain an infinite uniformly supported subset. Suppose, by contradiction, that $\wp_{fin}(X)$ is FSM Dedekind infinite. According to item 1, there exists a finitely supported injective mapping $f: \mathbb{N} \to \wp_{fin}(X)$. Thus, because $\mathbb{N}$ is a trivial invariant set, according to Proposition \ref{2.18'}, there exists an infinite injective (countable) sequence $f(\mathbb{N})=(X_{i})_{i \in \mathbb{N}} \subseteq \wp_{fin}(X)$ having the property $supp(X_{i}) \subseteq supp(f)$ for all $i \in \mathbb{N}$. We obtained that $\wp_{fin}(X)$ contains an infinite uniformly supported subset $(X_{i})_{i \in \mathbb{N}}$, which is a contradiction. \item Suppose, by contradiction, that $X \times Y$ is FSM Dedekind infinite.
According to item 1, there exists a finitely supported injective mapping $f: \mathbb{N} \to X \times Y$. Thus, according to Proposition \ref{2.18'}, there exists an infinite injective sequence $f(\mathbb{N})=((x_{i},y_{i}))_{i \in \mathbb{N}} \subseteq X \times Y$ with the property that $supp((x_{i}, y_{i})) \subseteq supp(f)$ for all $i \in \mathbb{N}$ (1). Fix some $j \in \mathbb{N}$. We claim that $supp((x_{j}, y_{j})) =supp(x_{j}) \cup supp(y_{j})$. Let $U=(x_{j}, y_{j})$, and $S=supp(x_{j})\cup supp(y_{j})$. Obviously, $S$ supports $U$. Indeed, let us consider $\pi\in Fix(S)$. We have that $\pi\in Fix(supp(x_{j}))$ and also $\pi\in Fix(supp(y_{j}))$. Therefore, $\pi\cdot x_{j}=x_{j}$ and $\pi\cdot y_{j}=y_{j}$, and so $\pi \otimes (x_{j}, y_{j})=(\pi \cdot x_{j}, \pi \cdot y_{j})=(x_{j}, y_{j})$, where $\otimes$ represents the $S_{A}$-action on $X \times Y$ described in Proposition \ref{p1}. Thus, $supp(U) \subseteq S$. It remains to prove that $S \subseteq supp(U)$. Fix $\pi \in Fix(supp(U))$. Since $supp(U)$ supports $U$, we have $\pi \otimes (x_{j}, y_{j})=(x_{j}, y_{j})$, and so $(\pi \cdot x_{j}, \pi \cdot y_{j})=(x_{j}, y_{j})$, from which we get $\pi \cdot x_{j}=x_{j}$ and $\pi \cdot y_{j}= y_{j}$. Thus, $supp(x_{j}) \subseteq supp(U)$ and $supp(y_{j}) \subseteq supp(U)$. Hence $S=supp(x_{j})\cup supp(y_{j}) \subseteq supp(U)$. According to relation (1), we obtain $supp(x_{i})\cup supp(y_{i}) \subseteq supp(f)$ for all $i \in \mathbb{N}$. Thus, $supp(x_{i}) \subseteq supp(f)$ for all $i \in \mathbb{N}$ and $supp(y_{i}) \subseteq supp(f)$ for all $i \in \mathbb{N}$ (2). Since the sequence $((x_{i},y_{i}))_{i \in \mathbb{N}}$ is infinite and injective, at least one of the sequences $(x_{i})_{i \in \mathbb{N}}$ and $(y_{i})_{i \in \mathbb{N}}$ is infinite. Assume that $(x_{i})_{i \in \mathbb{N}}$ is infinite.
Then there exists an infinite subset $B$ of $\mathbb{N}$ such that $(x_{i})_{i \in B}$ is injective, and so there exists an injection $u: B \to X$ defined by $u(i)=x_{i}$ for all $i \in B$ which is supported by $supp(f)$ (according to relation (2) and Proposition \ref{2.18'}). However, since $B$ is an infinite subset of $\mathbb{N}$, there exists a ZF bijection $h: \mathbb{N} \to B$. The construction of $h$ requires only the fact that $\mathbb{N}$ is well-ordered which is obtained from the Peano construction of~$\mathbb{N}$ and does not involve a form of the axiom of choice. Since both $B$ and $\mathbb{N}$ are trivial invariant sets, it follows that~$h$ is equivariant. Thus, $u \circ h$ is an injection from $\mathbb{N}$ to $X$ which is finitely supported by $supp(u) \subseteq supp(f)$. This contradicts the assumption that $X$ is not FSM Dedekind infinite. \begin{remark} \label{rrr} Analogously, using the relation $supp(x) \cup supp(y)=supp((x,y))$ for all $x \in X$ and $y \in Y$ derived from Proposition \ref{4.4-9}, it can be proved that $X \times Y$ does not contain an infinite uniformly supported subset if neither $X$ nor $Y$ contain an infinite uniformly supported subset. \end{remark} \item Suppose, by contradiction, that $X + Y$ is FSM Dedekind infinite. According to item 1, there exists a finitely supported injective mapping $f: \mathbb{N} \to X + Y$. Thus, there exists an infinite injective sequence $(z_{i})_{i \in \mathbb{N}} \subseteq X+ Y$ such that $supp(z_{i}) \subseteq supp(f)$ for all $i \in \mathbb{N}$. According to the construction of the disjoint union of two $S_{A}$-sets (see Proposition \ref{p1}), as in the proof of item 6, there should exist an infinite subsequence of $(z_{i})_{i}$ of form $((0, x_{j}))_{x_{j} \in X}$ which is uniformly supported by $supp(f)$, or an infinite sequence of form $((1, y_{k}))_{y_{k} \in Y}$ which is uniformly supported by $supp(f)$. 
Since $0$ and $1$ are constants, this means there should exist an infinite uniformly supported sequence of elements from $X$, or an infinite uniformly supported sequence of elements from $Y$. This contradicts the hypothesis that neither $X$ nor $Y$ is FSM Dedekind infinite. \begin{remark} Analogously, it can be proved that $X + Y$ does not contain an infinite uniformly supported subset if neither $X$ nor $Y$ contains an infinite uniformly supported subset. \end{remark} \item Suppose that $\wp_{fs}(X)$ is FSM Dedekind infinite. By item 1, there exists an infinite countable family $(X_{n})_{n \in \mathbb{N}}$ of pairwise different subsets of $X$ such that the mapping $n\mapsto X_{n}$ is finitely supported. Thus, each $X_{n}$ is supported by the same set $S=supp(n \mapsto X_{n})$. We define a countable family $(Y_{n})_{n \in \mathbb{N}}$ of subsets of $X$ that are non-empty and pairwise disjoint. A ZF construction of such a family belongs to Kuratowski and can also be found in Lemma 4.11 from \cite{herrlich}. This approach works also in FSM in view of the $S$-finite support principle because every $Y_{k}$ is defined only involving elements in the family $(X_{n})_{n \in \mathbb{N}}$, and so whenever $(X_{n})_{n \in \mathbb{N}}$ is uniformly supported (meaning that all $X_{n}$ are supported by the same set of atoms), we get that $(Y_{n})_{n \in \mathbb{N}}$ is uniformly supported. Formally, the sequence $(Y_{n})_{n \in \mathbb{N}}$ is recursively constructed as below. For $n \in \mathbb{N}$, assume that $Y_{m}$ is defined for any $m<n$ such that the set $\{ X_{k} \setminus \underset{m<n}\cup Y_{m}\,|\,k \geq n\}$ is infinite. Define $n'=\min\{k\,|\,k \geq n \:\text{and}\: X_{k} \setminus \underset{m<n}\cup Y_{m} \neq \emptyset \:\text{and}\: (X \setminus X_{k}) \setminus \underset{m<n}\cup Y_{m} \neq \emptyset\}$.
We define \[Y_{n}=\left\{ \begin{array}{ll} X_{n'} \setminus \underset{m<n}\cup Y_{m}, & \text{if}\: \{X_{k}\setminus (X_{n'} \cup \underset{m<n}\cup Y_{m}) \,|\, k>n'\}\:\text{is infinite};\\ (X \setminus X_{n'})\setminus \underset{m<n}\cup Y_{m}, & \text{otherwise} .\end{array}\right. \] Obviously, $Y_{0}$ is supported by $S \cup supp(X)$. By induction, assume that $Y_{m}$ is supported by $S \cup supp(X)$ for each $m<n$. Since $Y_{n}$ is defined as a set combination of $X_{i}$'s (which are all $S$-supported) and $Y_{m}$'s with $m<n$, we get that $Y_{n}$ is supported by $S \cup supp(X)$ according to the $S$-finite support principle. Therefore, the family $(Y_{i})_{i \in \mathbb{N}}$ is uniformly supported by $S \cup supp(X)$. Let $U_{i}=Y_{0} \cup \ldots \cup Y_{i}$ for all $i \in \mathbb{N}$. Clearly, all $U_{i}$ are supported by $S \cup supp(X)$, and $U_{0} \subsetneq U_{1} \subsetneq U_{2} \subsetneq \ldots \subsetneq X$. Let $V_{n}=(X\setminus\underset{i\in\mathbb{N}}{\cup}U_{i})\cup U_{n}$. Clearly, $X=\underset{n\in \mathbb{N}}{\cup} V_{n}$. Moreover, $V_{n}$ is supported by $S \cup supp(X)$ for all $n \in \mathbb{N}$. Therefore, the mapping $n\mapsto V_{n}$ is finitely supported. Obviously, $V_{0} \subsetneq V_{1} \subsetneq V_{2} \subsetneq \ldots \subsetneq X$. However, there does not exist $n\in\mathbb{N}$ such that $X=V_{n}$, and so $X$ is FSM ascending infinite. The converse holds since if $X$ is FSM ascending infinite, there is a finitely supported increasing countable chain of finitely supported sets $X_{0}\subseteq X_{1}\subseteq\ldots\subseteq X_{n}\subseteq\ldots$ with $X\subseteq\cup X_{n}$, but there does not exist $n\in\mathbb{N}$ such that $X\subseteq X_{n}$. In this sequence there should exist infinitely many different elements of the form $X_{i}$ (otherwise their union would be a term of the sequence), and the result follows from Proposition \ref{cou}. \item Suppose $X$ is FSM Dedekind infinite. Therefore, $\wp_{fs}(X)$ is FSM Dedekind infinite.
According to item 8, we have that $X$ is FSM ascending infinite. The reverse implication is not valid because, as is proved in Proposition~\ref{tari}, $\wp_{fin}(A)$ is FSM ascending infinite, but not FSM Dedekind infinite. \end{enumerate} \end{proof} \begin{corollary} \label{ti2} The following sets, as well as all of their FSM usual infinite subsets, are FSM usual infinite, but not FSM Dedekind infinite. \begin{enumerate} \item The invariant set $A$ of atoms. \item The powerset $\wp_{fs}(A)$ of the set of atoms. \item The set $T_{fin}(A)$ of all finite injective tuples of atoms. \item The invariant set $A^{A}_{fs}$ of all finitely supported functions from $A$ to $A$. \item The invariant set of all finitely supported functions $f:A \to A^{n}$, where $n \in \mathbb{N}$. \item The invariant set of all finitely supported functions $f:A \to T_{fin}(A)$. \item The invariant set of all finitely supported functions $f:A \to \wp_{fs}(A)$. \item The sets $\wp_{fin}(A)$, $\wp_{cofin}(A)$, $\wp_{fin}(\wp_{fs}(A))$, $\wp_{fin}(\wp_{cofin}(A))$, $\wp_{fin}(\wp_{fin}(A))$, $\wp_{fin}(A^{A}_{fs})$. \item Any construction of finite powersets of the form $\wp_{fin}(\ldots \wp_{fin}(A))$, $\wp_{fin}(\ldots \wp_{fin}(P(A)))$, or $\wp_{fin}(\ldots \wp_{fin}(\wp_{fs}(A)))$. \item Every finite Cartesian combination of the sets $A$, $\wp_{fin}(A)$, $\wp_{cofin}(A)$, $\wp_{fs}(A)$ and $A^{A}_{fs}$. \item The disjoint unions $A+A^{A}_{fs}$, $A+\wp_{fs}(A)$, $\wp_{fs}(A)+A^{A}_{fs}$ and $A+\wp_{fs}(A)+A^{A}_{fs}$, and all finite disjoint unions of $A$, $A^{A}_{fs}$ and $\wp_{fs}(A)$. \end{enumerate} \end{corollary} \begin{proof} \begin{enumerate} \item $A$ does not contain an infinite uniformly supported subset, and so it is not FSM Dedekind infinite (according to Theorem \ref{ti1}(4)).
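The claim of item 1 rests on the fact that an atom $a$ is supported by a finite set $S$ exactly when $a \in S$, so only finitely many atoms are supported by any fixed $S$. The sketch below checks this by brute force in a small finite model of the atoms; it is illustrative only, and the names `ATOMS` and `fixes_pointwise` are ad hoc.

```python
from itertools import permutations

# A small finite model of the atoms (illustrative only; A is infinite in FSM).
ATOMS = (0, 1, 2, 3, 4, 5)
S = {0, 1}

def fixes_pointwise(pi, S):
    """True when the permutation pi fixes every element of S."""
    return all(pi[s] == s for s in S)

# An atom a is supported by S iff pi(a) = a for every permutation pi
# that fixes S pointwise; we compute the set of all such atoms.
supported_atoms = set()
for a in ATOMS:
    if all(pi[a] == a
           for perm in permutations(ATOMS)
           for pi in [dict(zip(ATOMS, perm))]
           if fixes_pointwise(pi, S)):
        supported_atoms.add(a)

assert supported_atoms == S
print("atoms supported by S:", sorted(supported_atoms))
```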
\item $\wp_{fs}(A)$ does not contain an infinite uniformly supported subset because for any finite set $S$ of atoms there exist only finitely many elements of $\wp_{fs}(A)$ supported by $S$, namely the subsets of $S$ and the supersets of $A\setminus S$. Thus, $\wp_{fs}(A)$ is not FSM Dedekind infinite (Theorem \ref{ti1}(4)). \item $T_{fin}(A)$ does not contain an infinite uniformly supported subset because the finite injective tuples of atoms supported by a finite set $S$ are only those injective tuples formed by elements of $S$, being at most $1+A_{|S|}^{1}+A_{|S|}^{2}+\ldots+A_{|S|}^{|S|}$ such tuples, where $A_{n}^{k}=n(n-1)\ldots (n-k+1)$. \item We prove the following lemmas. \begin{lemma} \label{lem''} Let $S=\{s_{1},\ldots,s_{n}\}$ be a finite subset of an invariant set $(U, \cdot)$ and $X$ a finitely supported subset of an invariant set $(V, \diamond)$. If $X$ does not contain an infinite uniformly supported subset, then $X^{S}_{fs}$ does not contain an infinite uniformly supported subset. \end{lemma} \emph{Proof of Lemma \ref{lem''}.} First we prove that there is an FSM injection $g$ from $X^{S}_{fs}$ into $X^{|S|}$. For $f \in X^{S}_{fs}$ define $g(f)=(f(s_{1}),\ldots, f(s_{n}))$. Clearly, $g$ is injective (and it is also surjective). Let $\pi \in Fix(supp(s_{1}) \cup \ldots \cup supp(s_{n}) \cup supp(X))$. Thus, $g(\pi \widetilde{\star} f)=(\pi \diamond f(\pi^{-1} \cdot s_{1}),\ldots, \pi \diamond f(\pi^{-1} \cdot s_{n}))=(\pi \diamond f(s_{1}),\ldots, \pi \diamond f(s_{n}))=\pi \otimes g(f)$ for all $f \in X^{S}_{fs}$, where $\otimes$ is the $S_{A}$-action on $X^{|S|}$ defined as in Proposition \ref{p1}.
Hence $g$ is finitely supported, and the conclusion follows from Theorem \ref{ti1}(1) by repeatedly applying arguments similar to those in Theorem \ref{ti1}(6) (if we slightly modify the proof of the theorem, using the fact that $supp(x) \cup supp(y)=supp((x,y))$ for all $x,y \in X$, we show that the $|S|$-fold Cartesian product of $X$, i.e. $X^{|S|}$, does not contain an infinite uniformly supported subset; otherwise $X$ itself would contain an infinite uniformly supported subset, which contradicts the hypothesis). \begin{lemma} \label{lemyy} Let $S=\{s_{1},\ldots,s_{n}\}$ be a finite subset of an invariant set $(U, \cdot)$ and $X$ a finitely supported subset of an invariant set $(V, \diamond)$. If $X$ is not FSM Dedekind infinite, then $X^{S}_{fs}$ is not FSM Dedekind infinite. \end{lemma} \emph{Proof of Lemma \ref{lemyy}.} As in the proof of Lemma \ref{lem''}, there is an FSM injection $g$ from $X^{S}_{fs}$ into $X^{|S|}$. The conclusion follows from Theorem \ref{ti1}(1) and by repeatedly applying Theorem \ref{ti1}(6) (from which we know that the $|S|$-fold Cartesian product of $X$, i.e. $X^{|S|}$, is not FSM Dedekind infinite). \begin{lemma} \label{lem'''} Let $f:A \to A$ be a function that is finitely supported by a certain finite set of atoms $S$. Then either $f|_{A \setminus S}=Id$ or $f(A \setminus S)$ is a one-element subset of $S$ (i.e. $f$ is constant on $A \setminus S$, with its value in $S$). \end{lemma} \emph{Proof of Lemma \ref{lem'''}.} Let $f:A \to A$ be a function that is finitely supported by the finite set of atoms $S$. We distinguish two cases: I. There is $a \notin S$ with $f(a)=a$. Then for each $b\notin S$ we have that $(a\, b) \in Fix(S)$, and so $f(b)=f((a\, b)(a))=(a\, b)(f(a))=(a\, b)(a)=b$. Thus, $f|_{A \setminus S}=Id$. II. For all $a \notin S$ we have $f(a) \neq a$. We claim that $f(a) \in S$ for all $a \notin S$. Suppose, by contradiction, that $f(a)=b \in A \setminus S$ for a certain $a \notin S$. Thus, $(a\, b) \in Fix(S)$, and so $f(b)=f((a\, b)(a))=(a\, b)(f(a))=(a\, b)(b)=a$.
Let us consider $c \in A \setminus S$, $c \neq a,b$. Thus, $(a\, c) \in Fix(S)$, and so $f(c)=f((a\, c)(a))=(a\, c)(f(a))=(a\, c)(b)=b$. Furthermore, $(b\, c) \in Fix(S)$, and so $f(b)=f((b\, c)(c))=(b\, c)(f(c))=(b\, c)(b)=c$. However, $f(b)=a$, which contradicts the functionality of $f$. Thus $f(a) \in S$ for any $a \notin S$. If $x,y \notin S$, then we should have $f(x),f(y) \in S$, and so, because $(x\,y) \in Fix(S)$, we get $f(x)=f((x\, y)(y))=(x\, y)(f(y))=f(y)$, since both $x$ and $y$ belong to $A\setminus S$, which means they are different from $f(y)$ belonging to $S$. Therefore there is $x_{0} \in S$ such that $f(A \setminus S)=\{x_{0}\}$. \emph{Proof of this item}. Assume, by contradiction, that $A^{A}_{fs}$ contains an infinite, uniformly supported subset, meaning that there are infinitely many functions from $A$ to $A$ supported by the same finite set $S$. According to Lemma \ref{lem'''}, any $S$-supported function $f:A \to A$ has the property that either $f|_{A \setminus S}=Id$ or the image of $f|_{A \setminus S}$ is a one-element subset of $S$. A function from $A$ to $A$ is precisely characterized by the set of values it takes on the elements of $S$ and on the elements of $A\setminus S$, respectively. For each possible definition of such an $f$ on $S$ we have at most $|S|+1$ possible ways to define $f$ on $A \setminus S$. Since we assumed that there exist infinitely many finitely supported functions from $A$ to $A$ supported by the same set $S$, there should exist infinitely many finitely supported functions from $S$ to $A$ supported by the set $S$. But this is a contradiction according to Lemma \ref{lem''}, which states that $A^{S}_{fs}$ does not contain an infinite uniformly supported subset (because $A$ does not contain an infinite uniformly supported subset). \item There is an equivariant bijective mapping between $(A^{n})^{A}_{fs}$ and $(A^{A}_{fs})^{n}$ defined as follows.
If $f:A \to A^{n}$ is a finitely supported function with $f(a)=(a_{1},\ldots, a_{n})$, we associate to $f$ the tuple $(f_{1},\ldots, f_{n})$ where for each $i \in \{1,\ldots,n\}$, $f_{i}:A \to A$ is defined by $f_{i}(a)=a_{i}$ for all $a \in A$. We omit the technical details since they are based only on the application of Proposition \ref{2.18'}. We proved above that $A^{A}_{fs}$ does not contain an infinite uniformly supported subset, and so $(A^{A}_{fs})^{n}$ does not contain an infinite uniformly supported subset either, by an argument similar to the proof of Theorem \ref{ti1}(6) (see the proof of Lemma \ref{lem''}). \item Assume by contradiction that $T_{fin}(A)^{A}$ contains an infinite $S$-uniformly supported subset. If $f:A \to T_{fin}(A)$ is a function supported by $S$, then consider $f(a)=x$ for some $a \notin S$. For $b \notin S$ we have $(a\,b) \in Fix(S)$, and so $f(b)=f((a\,b)(a))=(a\,b)\otimes f(a)=(a\,b)\otimes x$, which means $|f(a)|=|f(b)|$ for all $a,b \notin S$. Each $S$-supported function $f:A \to T_{fin}(A)$ is fully described by the values it takes on the elements of $S$ and on the elements of $A\setminus S$, respectively, i.e., by the elements of $f(S)$ and of $f(A\setminus S)$. More precisely, each $S$-supported function $f:A \to T_{fin}(A)$ can be uniquely decomposed into two $S$-supported functions $f|_{S}$ and $f|_{A \setminus S}$ (this follows from Proposition \ref{2.18'} and because both $S$ and $A \setminus S$ are supported by $S$). However, $f(A\setminus S) \subseteq A'^{n}$ for some $n \in \mathbb{N}$, where $A'^{n}$ is the set of all injective $n$-tuples of $A$. According to Lemma~\ref{lem''} we have at most finitely many $S$-supported functions from $S$ to $T_{fin}(A)$. According to item 5, we have at most finitely many $S$-supported functions from $A\setminus S$ to $A'^{n}$ for each fixed $n \in \mathbb{N}$.
This is because $A'^{n}$ is a subset of $A^{n}$ and $A \setminus S$ is a subset of $A$, and so by involving Proposition \ref{pco1}(3) and (4) we find a finitely supported injection $\varphi$ from $(A'^{n})^{A \setminus S}$ into $(A^{n})^{A}$; if $\mathcal{K}$ were an infinite subset of $(A'^{n})^{A \setminus S}$ uniformly supported by $T$, then $\varphi(\mathcal{K})$ would be an infinite subset of $(A^{n})^{A}$ uniformly supported by $T \cup supp(\varphi)$. Therefore, there should exist an infinite subset $M \subseteq \mathbb{N}$ such that we have at least one $S$-supported function $g:A\setminus S \to A'^{k}$ for each $k \in M$. We do not need to find a set of representatives for such $g$'s; we consider all of them. Fix $a \in A \setminus S$. For each of the above $g$'s (which form an $S$-supported family $\mathcal{F}$), the values $g(a)$ form a uniformly supported (by $S \cup \{a\}$) family of elements of $T_{fin}(A)$, which is also infinite because tuples having different cardinalities are different and $M$ is infinite. However, this contradicts item 3, which states that $T_{fin}(A)$ does not contain an infinite uniformly supported subset. Alternatively, one can remark that if $|S \cup \{a\}|=l$ with $l$ fixed, then there is a fixed $m\in M$ with $m>l$. Moreover, $g(a)$ for some $g:A\setminus S \to A'^{m}$ in $\mathcal{F}$ (we need to select only one function from those functions $g:A\setminus S \to A'^{m}$ with $m$ fixed depending only on the fixed $l$, \emph{and not} a set of representatives for the entire family $(\{g:A\setminus S \to A'^{k}\})_{k \in M}$), which is an injective $m$-tuple of atoms, cannot be supported by $S \cup \{a\}$; thus, the set of all $g(a)$'s cannot be infinite and uniformly supported. \item We can use an approach similar to that of item 6 to prove that there exist at most finitely many $S$-supported functions from $A$ to $\wp_{fin}(A)$. For this we just replace $A'^{n}$ with the set of all $n$-sized subsets of $A$, $\wp_{n}(A)$.
All that remains is to prove that, for each $n \in \mathbb{N}$, there cannot exist infinitely many functions $g:A \to \wp_{n}(A)$ supported by the same set $S'$. Fix $n \in \mathbb{N}$. Assume, by contradiction, that there exist infinitely many functions $g:A \to \wp_{n}(A)$ supported by the same set $S'$. According to Lemma \ref{lem''} there are only finitely many functions from $S'$ to $\wp_{n}(A)$ supported by the same set of atoms, and so there should exist infinitely many functions $g:(A\setminus S') \to \wp_{n}(A)$ supported by $S'$. For such a $g$, let us fix an element $a\in A$ with $a\notin S'$. There exist fixed, pairwise different elements $x_{1}, \ldots, x_{n} \in A$ (depending only on the fixed $a$) such that $g(a)=\{x_{1}, \ldots, x_{n}\}$. Let $b$ be an arbitrary element from $A\setminus S'$, and so $(a\, b)\in Fix(S')$, which means $g(b)=g((a\,b)(a))=(a\,b) \star g(a)=(a\,b) \star \{x_{1}, \ldots, x_{n}\}=\{(a\,b)(x_{1}),\ldots, (a\,b)(x_{n})\}$. We analyze the two possibilities: Case 1: One of $x_{1}, \ldots, x_{n}$ coincides with $a$. Suppose $x_{1}=a$. We claim that $x_{2}, \ldots, x_{n} \in S'$. Assume the contrary, that is, there exists $i \in \{2,\ldots,n\}$ such that $x_{i} \notin S'$. Without loss of generality, suppose $x_{2} \notin S'$, which means $(a\,x_{2}) \in Fix(S')$, and so $g(x_{2})=g((a\,x_{2})(a))=(a\,x_{2}) \star g(a)=(a\,x_{2}) \star \{a,x_{2}, \ldots, x_{n}\}=\{a,x_{2}, \ldots, x_{n}\}$. Let $c \in A\setminus S'$ with $c$ different from $a,x_{2}, \ldots, x_{n}$. We have $g(c)=g((a\,c)(a))=(a\,c) \star g(a)=(a\,c) \star \{a,x_{2}, \ldots, x_{n}\}=\{c,x_{2}, \ldots, x_{n}\}$, and hence $g(x_{2})=g((c\,x_{2})(c))=(c\,x_{2}) \star g(c)=(c\,x_{2}) \star \{c,x_{2}, \ldots, x_{n}\}=\{c,x_{2}, \ldots, x_{n}\}$, which contradicts the functionality of $g$. Therefore, $g(b)=\{b,x_{2},\ldots, x_{n}\}$ for all $b \in A \setminus S'$, and so only the selection of $x_{2}, \ldots, x_{n}$ provides the distinction between $g$'s.
Since $S'$ is finite, $\{x_{2}, \ldots, x_{n}\}$ can be selected in $C_{|S'|}^{n-1}$ ways if $|S'|\geq n-1$, or in~$0$ ways otherwise. Case 2: Consider now that all $x_{1}, \ldots, x_{n}$ are different from $a$. Then $g(b)= \left\{ \begin{array}{ll} \{x_{1}, \ldots, x_{n}\}, & \text{if}\: \text{$b \neq x_{1}, \ldots, x_{n}$ };\\ \{a,x_{2}, \ldots, x_{n}\}, & \text{if}\: \text{$x_{1} \notin S'$ and $b=x_{1}$ };\\ \ldots\\ \{x_{1},\ldots, x_{n-1}, a\}, & \text{if}\: \text{$x_{n} \notin S'$ and $b=x_{n}$} \: .\end{array}\right.$ Since $x_{1}, \ldots, x_{n},a$ are fixed atoms, $g(A \setminus S')$ is finite. However, $Im(g)$ should be supported by~$S'$. According to Proposition \ref{4.4-9}, since $Im(g)$ is finite, it should be uniformly supported by $S'$. We obtain that $x_{1}, \ldots, x_{n} \in S'$, and so $g(b)=\{x_{1}, \ldots, x_{n}\}$ for all $b \in A \setminus S'$. Otherwise, if some $x_{i} \notin S'$, we would get $\{x_{1}, \ldots, a, \ldots, x_{n}\} \in Im(g)$ (where $a$ replaces $x_{i}$), and so $\{x_{1}, \ldots, a, \ldots, x_{n}\}$ would be supported by $S'$. Again by Proposition \ref{4.4-9} we would have that $a$ is supported by $S'$, which means $\{a\}=supp(a)\subseteq S'$, contradicting the choice of $a$. Alternatively, to prove that all $x_{1}, \ldots, x_{n} \in S'$, assume by contradiction that one of them (say $x_{1}$) does not belong to $S'$. Let $c$ be an atom from $A \setminus S'$ with $c$ different from $a, x_{1},x_{2}, \ldots, x_{n}$. We have $g(c)=g((a\,c)(a))=(a\,c) \star g(a)=(a\,c) \star \{x_{1}, \ldots, x_{n}\}=\{x_{1}, \ldots, x_{n}\}$, and hence $g(x_{1})=g((c\,x_{1})(c))=(c\,x_{1}) \star g(c)=(c\,x_{1}) \star \{x_{1}, x_{2}, \ldots, x_{n}\}=\{c,x_{2}, \ldots, x_{n}\}$. However, $g(x_{1})=\{a,x_{2}, \ldots, x_{n}\}$, which contradicts the functionality of $g$. Since $S'$ is finite, $\{x_{1}, \ldots, x_{n}\}$ can be selected in $C_{|S'|}^{n}$ ways if $|S'|\geq n$, or in $0$ ways otherwise.
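For concreteness, the two cases can be combined into a single explicit bound: an $S'$-supported function $g:A\setminus S' \to \wp_{n}(A)$ is completely determined by a choice of at most $n$ atoms from $S'$, so the number of such functions is at most
\[
C_{|S'|}^{n-1}+C_{|S'|}^{n}=C_{|S'|+1}^{n}
\]
by Pascal's rule, a finite bound depending only on $|S'|$ and $n$.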
In either case, there cannot exist infinitely many $g$'s supported by $S'$, and so for each $n \in \mathbb{N}$ there exist at most finitely many functions from $A$ to $\wp_{n}(A)$ supported by the same set of atoms. Assume by contradiction that $\wp_{fin}(A)^{A}$ contains an infinite $S$-uniformly supported subset. If $f:A \to \wp_{fin}(A)$ is a function supported by $S$, then we have $|f(a)|=|(a\,b)\star f(a)|=|f((a\,b)(a))|=|f(b)|$ for all $a,b \notin S$. According to Proposition \ref{2.18'} (since both $S$ and $A \setminus S$ are supported by $S$), each $S$-supported function $f:A \to \wp_{fin}(A)$ is uniquely decomposed into two $S$-supported functions $f|_{S}$ and $f|_{A \setminus S}$. However, $f(A\setminus S) \subseteq \wp_{n}(A)$ for some $n \in \mathbb{N}$. According to Lemma \ref{lem''} there are at most finitely many $S$-supported functions from $S$ to $\wp_{fin}(A)$. Furthermore, there exist at most finitely many $S$-supported functions from $A\setminus S$ to $\wp_{n}(A)$ for each fixed $n \in \mathbb{N}$. Therefore, there should exist an infinite subset $M \subseteq \mathbb{N}$ such that we have at least one $S$-supported function $g:A\setminus S \to \wp_{k}(A)$ for each $k \in M$. Fix $a \in A \setminus S$. For each of the above $g$'s (which form an $S$-supported family $\mathcal{F}$), the values $g(a)$ form a uniformly supported (by $S \cup \{a\}$) family of elements of $\wp_{fin}(A)$. If $|S \cup \{a\}|=l$ with $l$ fixed, then there is a fixed $m\in M$ with $m>l$. Moreover, $g(a)$ for some $g:A\setminus S \to \wp_{m}(A)$ in $\mathcal{F}$, which is an $m$-sized subset of atoms, cannot be supported by $S \cup \{a\}$ (according to Proposition \ref{4.4-9}); thus, the set of all $g(a)$'s cannot be infinite and uniformly supported. Analogously, there exist at most finitely many $S$-supported functions from $A$ to $\wp_{cofin}(A)$ (using the fact that there is an equivariant bijection $X \mapsto A\setminus X$ between $\wp_{fin}(A)$ and $\wp_{cofin}(A)$).
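The equivariance of the complement mapping invoked above can be checked directly: for every $\pi \in S_{A}$ and every $X \in \wp_{fin}(A)$,
\[
\pi \star (A \setminus X)=\pi(A)\setminus (\pi \star X)=A\setminus (\pi \star X),
\]
since every permutation of atoms is a bijection of $A$; hence $X \mapsto A \setminus X$ commutes with the $S_{A}$-action and maps $\wp_{fin}(A)$ onto $\wp_{cofin}(A)$.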
Assume by contradiction that $\wp_{fs}(A)^{A}$ contains an infinite $S$-uniformly supported subset. If $f:A \to \wp_{fs}(A)$ is a function supported by $S$, then consider $f(a)=X$ for some $a \notin S$. For $b \notin S$ we have $f(b)=(a\,b)\star X$, which means $f(A \setminus S)$ is formed only by finite subsets of atoms if $X$ is finite, and $f(A \setminus S)$ is formed only by cofinite subsets of atoms if $X$ is cofinite. Thus, whenever $f:A \to \wp_{fs}(A)$ is a function supported by $S$, we have either $f(A \setminus S) \subseteq \wp_{fin}(A)$ or $f(A \setminus S) \subseteq \wp_{cofin}(A)$. Each $S$-supported function $f:A \to \wp_{fs}(A)$ is fully described by $f(S)$ and $f(A\setminus S)$. According to Lemma \ref{lem''} we have at most finitely many $S$-supported functions from $S$ to $\wp_{fs}(A)$. Furthermore, we have at most finitely many $S$-supported functions from $A\setminus S$ to $\wp_{fin}(A)$, and at most finitely many $S$-supported functions from $A\setminus S$ to $\wp_{cofin}(A)$. Thus, $\wp_{fs}(A)^{A}$ does not contain an infinite uniformly supported subset. \item The sets $\wp_{fin}(A)$, $\wp_{cofin}(A)$, $\wp_{fin}(\wp_{fs}(A))$, $\wp_{fin}(\wp_{cofin}(A))$, $\wp_{fin}(\wp_{fin}(A))$, $\wp_{fin}(A^{A}_{fs})$ do not contain infinite uniformly supported subsets, and so they are not FSM Dedekind infinite (Theorem \ref{ti1}(5)). \item Directly from Theorem \ref{ti1}(5). \item According to Theorem \ref{ti1}(6). \item According to Theorem \ref{ti1}(7). \end{enumerate} \end{proof} \begin{corollary} There exist two FSM sets whose cardinalities are incomparable via the relation $\leq$ on cardinalities, and neither of them is FSM Dedekind infinite. \end{corollary} \begin{proof} According to Corollary \ref{ti2}, neither of the sets $A \times A$ and $\wp_{fs}(A)$ is FSM Dedekind infinite. According to Theorem \ref{cardord1}, there does not exist a finitely supported injective mapping $f: A \times A \to \wp_{fs}(A)$.
According to Lemma~11.10 from \cite{jech}, which remains valid in FSM (proof omitted), there does not exist a finitely supported injective mapping $f: \wp_{fs}(A) \to A \times A$. \end{proof} \begin{corollary} \label{ti3} The following sets, together with all of their supersets, their powersets and the families of their finite subsets, are both FSM usual infinite and FSM Dedekind infinite. \begin{enumerate} \item The invariant sets $\wp_{fs}(\wp_{fs}(A))$, $\wp_{fs}(\wp_{fin}(A))$ and $\mathbb{N}$. \item The set of all finitely supported mappings from $X$ to $Y$, and the set of all finitely supported mappings from $Y$ to $X$, where $X$ is a finitely supported subset of an invariant set with at least two elements, and $Y$ is an FSM Dedekind infinite set. \item The set of all finitely supported functions $f:\wp_{fin}(Y) \to X$ and the set of all finitely supported functions $f:\wp_{fs}(Y) \to X$, where $Y$ is an infinite finitely supported subset of an invariant set, and $X$ is a finitely supported subset of an invariant set with at least two elements. \item The set $T^{\delta}_{fin}(A)=\underset{n\in \mathbb{N}}{\cup}A^{n}$ of all finite tuples of atoms (not necessarily injective). \end{enumerate} \end{corollary} \begin{proof} \begin{enumerate} \item This follows from Theorem \ref{ti1}(3) and Theorem \ref{ti1}(2). \item Let $(y_{n})_{n \in \mathbb{N}}$ be an injective, uniformly supported, countable sequence in $Y$ (which exists by Theorem~\ref{ti1}(1)). Thus, each $y_{n}$ is supported by the same set $S$ of atoms. In $Y^{X}$ we consider the injective family $(f_{n})_{n \in \mathbb{N}}$ of functions from $X$ to $Y$ where for each $i \in \mathbb{N}$ we define $f_{i}(x)=y_{i}$ for all $x \in X$. According to Proposition~\ref{2.18'}, each $f_{i}$ is supported by $S$, and so is the infinite family $(f_{n})_{n \in \mathbb{N}}$, meaning that there is an $S$-supported injective mapping from $\mathbb{N}$ to $Y^{X}$. For this part it is enough to require only that $X$ is non-empty.
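The support claim for each $f_{i}$ can also be verified directly (a sketch, using the function action $\widetilde{\star}$ of Proposition \ref{2.18'} and enlarging the support to $S \cup supp(X)$ for safety): for $\pi \in Fix(S \cup supp(X))$ and $x \in X$,
\[
(\pi\, \widetilde{\star}\, f_{i})(x)=\pi \diamond f_{i}(\pi^{-1} \cdot x)=\pi \diamond y_{i}=y_{i}=f_{i}(x),
\]
because each $y_{i}$ is $S$-supported; hence $\pi\, \widetilde{\star}\, f_{i}=f_{i}$, and the family $(f_{n})_{n \in \mathbb{N}}$ is uniformly supported.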
Fix two different elements $x_{1}, x_{2} \in X$. Take $\mathcal{F}=(y_{n})_{n \in \mathbb{N}}$ an injective, uniformly supported, countable sequence in $Y$. In $X^{Y}$ we consider the injective family $(g_{n})_{n \in \mathbb{N}}$ of functions from $Y$ to $X$ where for each $i \in \mathbb{N}$ we define $g_{i}(y)=\left\{ \begin{array}{ll} x_{1} & \text{if}\: y=y_{i}\\ x_{2} & \text{if}\: y=y_{j} \; \text{with}\; j \neq i, \; \text{or}\; y\notin \mathcal{F}\end{array} \right.$. According to Proposition \ref{2.18'}, each $g_{i}$ is supported by the finite set $supp(x_{1}) \cup supp(x_{2}) \cup supp(\mathcal{F})$, and so the infinite family $(g_{n})_{n \in \mathbb{N}}$ is uniformly supported, meaning that there is an injective mapping from $\mathbb{N}$ to $X^{Y}$ supported by $supp(x_{1}) \cup supp(x_{2}) \cup supp(\mathcal{F})$. \item From Theorem \ref{comp}, there exists a one-to-one mapping from $\wp_{fs}(U)$ onto $\{0,1\}^{U}_{fs}$ for an arbitrary finitely supported subset $U$ of an invariant set. Fix two distinct elements $x_{1},x_{2} \in X$. There exists a finitely supported (by $supp(x_{1}) \cup supp(x_{2})$) bijective mapping from $\{0,1\}^{U}_{fs}$ to $\{x_{1},x_{2}\}^{U}_{fs}$ which associates to each $f \in \{0,1\}^{U}_{fs}$ the element $g \in \{x_{1},x_{2}\}^{U}_{fs}$ defined by $g(x)=\left\{ \begin{array}{ll} x_{1} & \text{for}\: f(x)=0\\ x_{2} & \text{for}\: f(x)=1 \end{array}\right.$ for all $x \in U$ and supported by $supp(x_{1}) \cup supp(x_{2}) \cup supp(f)$. Obviously, there is a finitely supported injection between $\{x_{1},x_{2}\}^{U}_{fs}$ and $X^{U}_{fs}$. Thus, there is a finitely supported injection from $\wp_{fs}(U)$ into $X^{U}_{fs}$. If we take $U=\wp_{fin}(Y)$ or $U=\wp_{fs}(Y)$, the result follows from Theorem \ref{ti1}(1), Theorem \ref{ti1}(2) and Theorem \ref{ti1}(3). \item Fix $a \in A$. For each $i \in \mathbb{N}$, consider the tuple $x_{i}=(a,\ldots,a) \in A^{i}$.
Clearly $x_{i}$ is supported by $\{a\}$ for each $i \in \mathbb{N}$, and so $(x_{n})_{n \in \mathbb{N}}$ is an infinite uniformly supported subset of $T^{\delta}_{fin}(A)$. \end{enumerate} \end{proof} \begin{proposition} \label{propro} Let $X$ be a finitely supported subset of an invariant set such that $\wp_{fs}(X)$ is not FSM Dedekind infinite. Then each finitely supported surjective mapping $f:X \to X$ is injective. \end{proposition} \begin{proof}Let $f: X \to X$ be a finitely supported surjection. Since $f$ is surjective, we can define the function $g:\wp_{fs}(X) \rightarrow \wp_{fs}(X)$ by $g(Y) = f^{-1}(Y)$ for all $Y\in \wp_{fs}(X)$, which is finitely supported and injective according to Lemma \ref{lemlem}. Since $\wp_{fs}(X)$ is not FSM Dedekind infinite, it follows that $g$ is surjective. Now let us consider two elements $a,b \in X$ such that $f(a)=f(b)$. We prove by contradiction that $a=b$. Suppose that $a \neq b$. Let us consider $Y=\{a\}$ and $Z=\{b\}$. Obviously, $Y,Z \in \wp_{fs}(X)$. Since $g$ is surjective, for $Y$ and $Z$ there exist $Y_{1}, Z_{1} \in \wp_{fs}(X)$ such that $f^{-1}(Y_{1})=g(Y_{1})=Y$ and $f^{-1}(Z_{1})=g(Z_{1})=Z$. We know that $f(Y) \cap f(Z)= \{f(a)\}$. Thus, $f(a) \in f(Y)=f(f^{-1}(Y_{1})) \subseteq Y_{1}$. Similarly, $f(a) =f(b) \in f(Z)=f(f^{-1}(Z_{1})) \subseteq Z_{1}$, and so $f(a) \in Y_{1} \cap Z_{1}$. Thus, $a \in f^{-1}(Y_{1} \cap Z_{1})=f^{-1}(Y_{1}) \cap f^{-1}(Z_{1})=Y \cap Z$. However, since we assumed that $a \neq b$, we have that $Y \cap Z = \emptyset$, which is a contradiction. It follows that $a=b$, and so $f$ is injective. \end{proof} \begin{proposition} \label{propro''} \begin{enumerate} \item Let $X$ be a finitely supported subset of an invariant set. If $\wp_{fin}(X)$ is FSM Dedekind infinite, then $X$ is FSM non-uniformly amorphous, meaning that $X$ contains two disjoint, infinite, uniformly supported subsets. \item Let $X$ be a finitely supported subset of an invariant set.
If $\wp_{fs}(X)$ is FSM Dedekind infinite, then $X$ is FSM non-amorphous, meaning that $X$ contains two disjoint, infinite, finitely supported subsets. The reverse implication is not valid. \end{enumerate} \end{proposition} \begin{proof} 1. Assume that $(X_{n})_{n \in \mathbb{N}}$ is a countable family of different finite subsets of $X$ such that the mapping $n\mapsto X_{n}$ is finitely supported. Thus, each $X_{n}$ is supported by the same set $S=supp(n \mapsto X_{n})$. Since each $X_{n}$ is finite (and the support of a finite set coincides with the union of the supports of its elements), as in the proof of Lemma \ref{lem4}, we have that $\underset{n \in \mathbb{N}}\cup X_{n}$ is uniformly supported by $S$. Furthermore, $\underset{n \in \mathbb{N}}\cup X_{n}$ is infinite since all $X_{i}$ are pairwise different. Moreover, the countable sequence $(Y_{n})_{n \in \mathbb{N}}$ defined by $Y_{n}=X_{n} \setminus \underset{m<n}\cup X_{m}$ is a uniformly supported (by $S$) sequence of pairwise disjoint uniformly supported sets with $\underset{n \in \mathbb{N}}\cup X_{n}=\underset{n \in \mathbb{N}}\cup Y_{n}$. Again since each $Y_{n}$ is finite (and the support of a finite set coincides with the union of the supports of its elements), any element belonging to a set from the sequence $(Y_{n})_{n \in \mathbb{N}}$ is $S$-supported. Since the union of all $Y_{n}$ is infinite, and each $Y_{n}$ is finite, there should exist infinitely many terms of the sequence $(Y_{n})_{n \in \mathbb{N}}$ that are non-empty. Let $(Y_{n})_{n \in M}$, with $M \subseteq \mathbb{N}$ infinite, be the subfamily of $(Y_{n})_{n \in \mathbb{N}}$ formed by the non-empty terms. Let $U_{1}=\cup\{Y_{k}\,|\, k \in M, k \;\text{odd}\}$ and $U_{2}=\cup\{Y_{k}\,|\, k \in M, k\; \text{even}\}$. Then $U_{1}$ and $U_{2}$ are disjoint, uniformly $S$-supported, infinite subsets of $X$. 2. Assume that $\wp_{fs}(X)$ is FSM Dedekind infinite.
As in the proof of Theorem \ref{ti1}(8), we can define a uniformly supported, countable family $(Y_{n})_{n \in \mathbb{N}}$ of subsets of $X$ that are non-empty and pairwise disjoint. Let $V_{1}=\cup\{Y_{k}\,|\, k \;\text{odd}\}$ and $V_{2}=\cup\{Y_{k}\,|\, k\; \text{even}\}$. Then $V_{1}$ and $V_{2}$ are disjoint, infinite subsets of $X$. Since each $Y_{i}$ is supported by $S'=supp(n \mapsto Y_{n})$, we have $\pi \star Y_{i}=Y_{i}$ for all $i \in \mathbb{N}$ and $\pi \in Fix(S')$. Fix $\pi \in Fix(S')$ and $x \in V_{1}$. Thus, there is $l \in \mathbb{N}$ such that $x \in Y_{2l+1}$. We obtain $\pi \cdot x \in \pi \star Y_{2l+1}=Y_{2l+1}$, and so $\pi \cdot x \in V_{1}$. Thus, $V_{1}$ is supported by $S'$. Analogously, $V_{2}$ is supported by $S'$, and so $X$ is FSM non-amorphous. Conversely, the set $A+A=\{0,1\}\times A$ (the disjoint union of $A$ and $A$) is obviously non-amorphous because $\{(0,a)\,|\,a \in A\}$ is equivariant, infinite and coinfinite. One can define the equivariant bijection $f: \wp_{fs}(A) \times \wp_{fs}(A) \to \wp_{fs}(\{0,1\}\times A)$ by $f(U,V)=\{(0,x)\,|\,x \in U\} \cup \{(1,y)\,|\,y \in V\}$ for all $U,V \in \wp_{fs}(A)$. Clearly $f$ is equivariant because for each $\pi \in S_{A}$ we have $f(\pi \star U,\pi \star V)=\pi \star f(U,V)$. However, $\wp_{fs}(A) \times \wp_{fs}(A)$ is not FSM Dedekind infinite according to Corollary \ref{ti2}(2) and Theorem \ref{ti1}(6). \end{proof} It is also worth noting that non-uniformly amorphous FSM sets are non-amorphous FSM sets, since uniformly supported sets are obviously finitely supported. The converse, however, is not valid, since $\wp_{fin}(A)$ is non-amorphous but has no infinite uniformly supported subset (the only finite subsets of atoms supported by a finite set $S$ of atoms being the subsets of $S$), and so it cannot be non-uniformly amorphous. \begin{corollary} \label{propro'} Let $X$ be a finitely supported amorphous subset of an invariant set (i.e.
any finitely supported subset of~$X$ is either finite or cofinite). Then each finitely supported surjective mapping $f:X \to X$ is injective. \end{corollary} \begin{proof} Since any finitely supported subset of $X$ is either finite or cofinite, any uniformly supported subset of $X$ is either finite or cofinite. From Proposition \ref{propro''}, $\wp_{fin}(X)$ is not FSM Dedekind infinite. For the rest of the proof we follow step-by-step the proof of Proposition \ref{propro} (and of Lemma \ref{lemlem}). If $X$ is finite, we are done, so assume $X$ is infinite. If $Y \in \wp_{fin}(X)$, then $f^{-1}(Y) \in \wp_{fs}(X)$ (supported by $supp(f) \cup supp(X) \cup supp(Y)$). Since $X$ is amorphous, it follows that $f^{-1}(Y)$ is either finite or cofinite. If $f^{-1}(Y)$ were cofinite, then its complement $\{x \in X \;|\; f(x) \notin Y\}$ would be finite. This means that all but finitely many elements of $X$ would have their image under $f$ belonging to the finite set~$Y$. Therefore, $Im(f)$ would be a finite subset of $X$, which contradicts the surjectivity of $f$. Thus, $f^{-1}(Y)$ is a finite subset of $X$. In this sense we can well-define the function $g:\wp_{fin}(X) \rightarrow \wp_{fin}(X)$ by $g(Y) = f^{-1}(Y)$, which is supported by $supp(f) \cup supp(X)$ and injective. Since $\wp_{fin}(X)$ is not FSM Dedekind infinite, it follows that $g$ is surjective, and so $f$ is injective exactly as in the last paragraph of the proof of Proposition \ref{propro}. \end{proof} \begin{proposition} \begin{enumerate} \item Let $X$ be an FSM Dedekind infinite set. Then there exists a finitely supported surjection $j:X \to \mathbb{N}$. The reverse implication is not valid. \item If $X$ is a finitely supported subset of an invariant set such that there exists a finitely supported surjection $j:X \to \mathbb{N}$, then $\wp_{fs}(X)$ is FSM Dedekind infinite. The reverse implication is also valid. \end{enumerate} \end{proposition} \begin{proof} 1.
Let $X$ be an FSM Dedekind infinite set. According to Theorem \ref{ti1}(1), there is a finitely supported injection $i:\mathbb{N} \to X$. Let us fix $n_{0} \in \mathbb{N}$. We define the function $j: X \to \mathbb{N}$ by \[ j(x)=\left\{ \begin{array}{ll} i^{-1}(x), & \text{if}\: \text{$x\in Im(i)$ };\\ n_{0}, & \text{if}\: \text{$x\notin Im(i)$}\: .\end{array}\right. \] Since $Im(i)$ is supported by $supp(i)$ and $n_{0}$ is supported by the empty set, by verifying the condition in Proposition \ref{2.18'} we have that $j$ is supported by $supp(i) \cup supp(X)$. Indeed, when $\pi \in Fix(supp(i) \cup supp(X))$, then $x \in Im(i) \Leftrightarrow \pi \cdot x \in Im(i)$, and $n=i^{-1}(\pi \cdot x) \Leftrightarrow i(n)=\pi \cdot x \Leftrightarrow \pi^{-1} \cdot i(n)=x \Leftrightarrow i(\pi^{-1} \diamond n)=x \Leftrightarrow i(n)=x \Leftrightarrow n=i^{-1}(x)$, where $\diamond$ is the trivial action on $\mathbb{N}$; similarly, $y \notin Im(i) \Leftrightarrow \pi \cdot y \notin Im(i)$ and $j(\pi \cdot y)=n_{0}=\pi \diamond n_{0}=\pi \diamond j(y)$. Clearly, $j$ is surjective. However, the reverse implication is not valid because the mapping $f:\wp_{fin}(A) \to \mathbb{N}$ defined by $f(X)=|X|$ for all $X \in \wp_{fin}(A)$ is equivariant and surjective, but $\wp_{fin}(A)$ is not FSM Dedekind infinite. 2. Suppose now that there exists a finitely supported surjection $j:X \to \mathbb{N}$. Clearly, for any $n \in \mathbb{N}$, the set $j^{-1}(\{n\})$ is non-empty and supported by $supp(j)$. Define $f: \mathbb{N} \to \wp_{fs}(X)$ by $f(n)=j^{-1}(\{n\})$. For $\pi \in Fix(supp(j))$ and an arbitrary $n \in \mathbb{N}$ we have $j(x)=n \Leftrightarrow j(\pi^{-1} \cdot x)=n$, and so $x \in j^{-1}(\{n\}) \Leftrightarrow \pi^{-1} \cdot x \in j^{-1}(\{n\})$, which means $f(n) = \pi \star f(n)$ for all $n \in \mathbb{N}$, and so $f$ is supported by $supp(j)$. Since $f$ is also injective, by Theorem \ref{ti1}(1) we have that $\wp_{fs}(X)$ is FSM Dedekind infinite.
Conversely, assume that $\wp_{fs}(X)$ is FSM Dedekind infinite. As in the proof of Theorem \ref{ti1}(8), we can define a uniformly supported, countable family $(Y_{n})_{n \in \mathbb{N}}$ of subsets of $X$ that are non-empty and pairwise disjoint. The mapping $f$ can be defined by $ f(x)=\left\{ \begin{array}{ll} n, & \text{if}\; x\in Y_{n}\;\text{for some}\;n ;\\ 0, & \text{otherwise} \end{array}\right.$, and, obviously, $f$ is supported by $supp(n \mapsto Y_{n})$. \end{proof} \begin{proposition}Let $X$ be an infinite finitely supported subset of an invariant set. Then there exists a finitely supported surjection $f:\wp_{fs}(X) \to \mathbb{N}$. \end{proposition} \begin{proof}Let $X_{i}$ be the set of all $i$-sized subsets of $X$, i.e. $X_{i}=\{Z \subseteq X\,|\,|Z|=i\}$. The family $(X_{i})_{i \in \mathbb{N}}$ is uniformly supported by $supp(X)$, and all $X_{i}$ are non-empty and pairwise disjoint. Define the mapping $f$ by $f(Y)=\left\{ \begin{array}{ll} n, & \text{if}\: Y\in X_{n};\\ 0, & \text{if}\; Y\; \text{is infinite} .\end{array}\right.$ According to Proposition \ref{2.18'}, $f$ is supported by $supp(X)$ (since any $X_{n}$ is supported by $supp(X)$), and it is surjective. We actually proved the existence of a finitely supported surjection from $\wp_{fin}(X)$ onto~$\mathbb{N}$. \end{proof} The sets $A$ and $\wp_{fin}(A)$ are both FSM usual infinite, and neither of them is FSM Dedekind infinite. We prove below that~$A$ is not FSM ascending infinite, while $\wp_{fin}(A)$ is FSM ascending infinite. \begin{proposition} \label{tari} \begin{itemize} \item The set $A$ is not FSM ascending infinite. \item Let $X$ be a finitely supported subset of an invariant set $U$. If $X$ is FSM usual infinite, then the set $\wp_{fin}(X)$ is FSM ascending infinite.
\end{itemize} \end{proposition} \begin{proof} In order to prove that $A$ is not FSM ascending infinite, we prove firstly that each finitely supported increasing countable chain of finitely supported subsets of $A$ must be stationary. Indeed, if there exists an increasing countable chain $X_{0}\subseteq X_{1}\subseteq\ldots\subseteq A$ such that $n \mapsto X_{n}$ is finitely supported, then, according to Proposition \ref{2.18'} and because $\mathbb{N}$ is a trivial invariant set, each element $X_{i}$ of the chain must be supported by the same $S=supp(n \mapsto X_{n})$. However, there are only finitely many such subsets of $A$ namely the subsets of $S$ and the supersets of $A\setminus S$. Therefore the chain is finite, and, because it is ascending, there exists $n_{0}\in\mathbb{N}$ such that $X_{n}=X_{n_{0}},\forall n\geq n_{0}$. Now, let $Y_{0}\subseteq Y_{1}\subseteq\ldots\subseteq Y_{n}\subseteq\ldots$ be a finitely supported countable chain with $A\subseteq \underset{n \in \mathbb{N}}{\cup} Y_{n}$. Then $A \cap Y_{0}\subseteq A \cap Y_{1}\subseteq\ldots\subseteq A \cap Y_{n}\subseteq\ldots \subseteq A$ is a finitely supported countable chain of subsets of $A$ (supported by $supp(n \mapsto Y_{n})$) which should be stationary (finite). Furthermore, since $\underset{i \in \mathbb{N}}{\cup}(A\cap Y_{i})=A \cap(\underset{i \in \mathbb{N}}{\cup} Y_{i})=A$, there is some $k_{0}$ such that $A \cap Y_{k_{0}}=A$, and so $A \subseteq Y_{k_{0}}$. Thus,~$A$ is not FSM ascending infinite. We know that $\wp_{fin}(X)$ is a subset of the invariant set $\wp_{fin}(U)$ supported by $supp(X)$. Let us consider $X_{n}\!=\!\{Z\!\in\!\wp_{fin}(X)\,|\,|Z|\!\leq\! n\}$. Clearly, $X_{0}\subseteq X_{1}\subseteq\ldots\subseteq X_{n}\subseteq\ldots$. 
Furthermore, because permutations of atoms are bijective, we have that for an arbitrary $k \in \mathbb{N}$, $|\pi \star Y| =|Y|$ for all $\pi \in S_{A}$ and all $Y \in X_{k}$, and so $\pi \star Y \in X_{k}$ for all $\pi \in Fix(supp(X))$ and all $Y \in X_{k}$. Thus, each $X_{k}$ is a subset of $\wp_{fin}(X)$ finitely supported by $supp(X)$, and so $(X_{n})_{n \in \mathbb{N}}$ is (uniformly) supported by $supp(X)$. Obviously, $\wp_{fin}(X)=\underset{n \in \mathbb{N}}{\cup} X_{n}$. However, there exists no $n\in\mathbb{N}$ such that $\wp_{fin}(X)=X_{n}$. Thus, $(\wp_{fin}(X), \star)$ is FSM ascending infinite. \end{proof} \begin{theorem} \label{Tt} Let $X$ be a finitely supported subset of an invariant set $(Z, \cdot)$. \begin{enumerate}\item If $X$ is FSM Dedekind infinite, then $X$ is FSM Mostowski infinite. \item If $X$ is FSM Mostowski infinite, then $X$ is FSM Tarski II infinite. The reverse implication is not valid. \end{enumerate} \end{theorem} \begin{proof} 1. Suppose $X$ is FSM Dedekind infinite. According to Theorem \ref{ti1}(1) there exists a uniformly supported infinite injective sequence $T=(x_{n})_{n \in \mathbb{N}}$ of elements from $X$. Thus, each element of $T$ is supported by $supp(T)$, and there is a bijective correspondence between $\mathbb{N}$ and $T$ defined as $n \mapsto x_{n}$ which is supported by $supp(T)$. If we define the relation $\sqsubset$ on $T$ by: $x_{i} \sqsubset x_{j}$ if and only if $i<j$, we have that $\sqsubset$ is a (strict) total order relation supported by $supp(T)$. Thus $T$ is an infinite finitely supported (strictly) totally ordered subset of $X$, and so $X$ is FSM Mostowski infinite, since any strict total order can be extended to a total order. 2. Suppose that $X$ is not FSM Tarski II infinite. Then every non-empty finitely supported family of finitely supported subsets of $X$ which is totally ordered by inclusion has a maximal element under inclusion.
Let $(U, <)$ be a finitely supported strictly totally ordered subset of $X$ (any total order relation induces a strict total order relation). We prove that~$U$ is finite, and so $X$ is not FSM Mostowski infinite. In this sense, it is sufficient to prove that $<$ and $>$ are well-orderings. Since both of them are (strict) total orderings, we need to prove that any finitely supported subset of $U$ has a least and a greatest element with respect to $<$, i.e.\ a minimal and a maximal element (because $<$ is total). Let $Y$ be a finitely supported subset of $U$. The set $\downarrow z=\{y \in Y\,|\,y<z\}$ is supported by $supp(z) \cup supp(Y) \cup supp(<)$ for all $z \in Y$. The family $T=\{\downarrow z\,|\,z \in Y\}$ is itself finitely supported by $supp(Y) \cup supp(<)$ because for all $\pi \in Fix(supp(Y) \cup supp(<))$ we have $\pi \cdot \downarrow z=\downarrow \pi \cdot z$. Since $<$ is transitive, we have that $T$ is (strictly) totally ordered by inclusion, and so it has a maximal element, which means $Y$ has a maximal element. Analogously, the set $\uparrow z=\{y \in Y\,|\,z<y\}$ is supported by $supp(z) \cup supp(Y) \cup supp(<)$ for all $z \in Y$ and the family $T'=\{\uparrow z\,|\,z \in Y\}$ is itself finitely supported by $supp(Y) \cup supp(<)$ because for all $\pi \in Fix(supp(Y) \cup supp(<))$ we have $\pi \cdot \uparrow z=\uparrow \pi \cdot z$. The family $T'$ is (strictly) totally ordered by inclusion, and so it has a maximal element, and hence $Y$ has a minimal element. We used the obvious properties: $z<t$ if and only if $\downarrow z \subset \downarrow t$, and $z<t$ if and only if $\uparrow t \subset \uparrow z$. Regarding the failure of the reverse implication: according to Proposition \ref{tari}, $\wp_{fin}(A)$ is FSM ascending infinite, and so it is FSM Tarski II infinite. However, $\wp_{fin}(A)$ is not FSM Mostowski infinite, according to Corollary~\ref{pTII}. \end{proof} \begin{proposition} \label{propamo} Let $X$ be a finitely supported subset of an invariant set $(Z, \cdot)$.
If $X$ is FSM Mostowski infinite, then~$X$ is non-amorphous, meaning that $X$ can be expressed as a disjoint union of two infinite finitely supported subsets. The reverse implication is not valid. \end{proposition} \begin{proof}Suppose that there is an infinite finitely supported totally ordered subset $(Y, \leq)$ of $X$. Assume, by contradiction, that $Y$ is amorphous, meaning that any finitely supported subset of $Y$ is either finite or cofinite. As in the proof of Theorem \ref{Tt} (without requiring $\leq$ to be strict, which does not essentially change the proof), we define the finitely supported subsets $\downarrow z=\{y \in Y\,|\,y \leq z\}$ and $\uparrow z=\{y \in Y\,|\,z \leq y\}$ for all $z \in Y$. We have that the mapping $z \mapsto \downarrow z$ from $Y$ to $T=\{\downarrow z\,|\,z \in Y\}$ is itself finitely supported by $supp(Y) \cup supp(\leq)$. Furthermore, it is bijective, and so $T$ is amorphous. Thus, any subset $Z$ of $T$ is either finite or cofinite, and obviously any subset $Z$ of $T$ is finitely supported. Analogously, the mapping $z \mapsto \uparrow z$ from $Y$ to $T'=\{\uparrow z\,|\,z \in Y\}$ is finitely supported and bijective, which means that any subset of $T'$ is either finite or cofinite, and clearly any subset of $T'$ is finitely supported. We distinguish the following two cases: 1. There are only finitely many elements $x_{1}, \ldots, x_{n} \in Y$ such that $\downarrow x_{1}, \ldots, \downarrow x_{n}$ are finite. Thus, for $y \in U= Y \setminus \{x_{1}, \ldots, x_{n}\}$ we have $\downarrow y$ infinite. Since $\downarrow y$ is a subset of $Y$, it should be cofinite, and so $\uparrow y$ is finite (because $\leq$ is a total order relation). Let $M=\{\uparrow y\,|\,y \in U\}$. As in Theorem \ref{Tt}, we have that $M$ is totally ordered with respect to set inclusion.
Furthermore, for an arbitrary $y \in U$ we cannot have $y \leq x_{k}$ for some $k \in \{1,\ldots,n\}$ because $\downarrow y$ is infinite, while $\downarrow x_{k}$ is finite, and so $\uparrow y$ is a subset of $U$. Thus, $M$ is an infinite, finitely supported (by $supp(U) \cup supp(\leq)$), totally ordered family formed by finite subsets of $U$. Since $M$ is finitely supported, for each $y \in U$ and each $\pi \in Fix(supp(M))$ we have $\pi \cdot \uparrow y \in M$. Since $\uparrow y$ is finite, we have that $\pi \cdot \uparrow y$ is finite, having the same number of elements as $\uparrow y$. Since $\pi \cdot \uparrow y$ and $\uparrow y$ are comparable via inclusion, they should be equal. Thus,~$M$ is uniformly supported. Since $\leq$ is a total order, for $\pi \in Fix(supp(\uparrow y))$ we have $\uparrow \pi \cdot y=\pi \cdot \uparrow y=\uparrow y$, and so $\pi \cdot y=y$, from which $supp(y) \subseteq supp(\uparrow y)$. Thus, $U$ is uniformly supported. Since any element of $U$ has only a finite number of successors (leading to the conclusion that $\geq$ is a well-ordering on $U$ uniformly supported by $supp(U)$) and~$U$ is \emph{uniformly supported}, we can define an order monomorphism between $\mathbb{N}$ and $U$ which is supported by $supp(U)$. For example, choose $u_{0}\neq u_{1} \in U$, then let $u_{2}$ be \emph{the greatest element} (w.r.t. $\leq$) in $U\setminus \{u_{0}, u_{1}\}$, $u_{3}$ be \emph{the greatest element} in $U\setminus \{u_{0}, u_{1}, u_{2}\}$ (no choice principle is used since $\geq$ is a well-ordering, and so such a \emph{greatest} element is precisely defined), and so on, and find an infinite, uniformly supported countable sequence $u_{0}, u_{1}, u_{2},\ldots$. Since $\mathbb{N}$ is non-amorphous (being expressed as the union between the even elements and the odd elements), we conclude that $U$ is non-(uniformly) amorphous, containing two infinite uniformly supported disjoint subsets. 2.
We have cofinitely many elements $z$ such that $\downarrow z$ is finite. Thus, there are only finitely many elements $y_{1}, \ldots, y_{m} \in Y$ such that $\downarrow y_{1}, \ldots, \downarrow y_{m}$ are infinite. Since every infinite subset of $Y$ is cofinite, only $\uparrow y_{1}, \ldots, \uparrow y_{m}$ are finite. Let $z \in Y \setminus \{y_{1}, \ldots, y_{m}\}$, which means $\uparrow z$ is infinite. Since $\uparrow z$ is a subset of $Y$, it should be cofinite, and so $\downarrow z$ is finite. As in the above item, the set $M'=\{\downarrow z\,|\,z \in Y \setminus \{y_{1}, \ldots, y_{m}\} \}$ is an infinite, finitely supported, totally ordered (by inclusion) family of finite sets, and so it has to be uniformly supported, from which $Y \setminus \{y_{1}, \ldots, y_{m}\}$ is uniformly supported, and so $\leq$ is an FSM well-ordering on $Y \setminus \{y_{1}, \ldots, y_{m}\}$. Therefore, $Y \setminus \{y_{1}, \ldots, y_{m}\}$ has an infinite, uniformly supported, countable subset, and so $Y \setminus \{y_{1}, \ldots, y_{m}\}$ is non-(uniformly) amorphous, containing two infinite uniformly supported disjoint subsets. Thus, $Y$ is non-amorphous, and so $X$ is non-amorphous. Conversely, the set $A+A$ (the disjoint union of $A$ and $A$) is obviously non-amorphous because $\{(0,a)\,|\,a \in A\}$ is equivariant, infinite and coinfinite. However, if we assume there exists a finitely supported total order relation on an infinite subset of $A+A$, then there should exist an infinite, finitely supported, total order on at least one of the sets $\{(0,a)\,|\,a \in A\}$ or $\{(1,a)\,|\,a \in A\}$, which leads to an infinite finitely supported total order relation on $A$. However, $A$ is not FSM Mostowski infinite by Corollary \ref{pTII}. \end{proof} \begin{theorem} \label{Tti} Let $X$ be a finitely supported subset of an invariant set $(Z, \cdot)$. If $X$ contains no infinite uniformly supported subset, then $X$ is not FSM Mostowski infinite.
\end{theorem} \begin{proof}Assume, by contradiction, that $X$ is FSM Mostowski infinite, meaning that $X$ contains an infinite, finitely supported, totally ordered subset $(Y, \leq)$. We claim that $Y$ is uniformly supported by $supp(\leq) \cup supp(Y)$. Let $\pi \in Fix (supp(\leq) \cup supp(Y))$ and let $y \in Y$ be an arbitrary element. Since $\pi$ fixes $supp(Y)$ pointwise and $supp(Y)$ supports $Y$, we obtain that $\pi \cdot y \in Y$, and so we should have either $y<\pi \cdot y$, or $y=\pi \cdot y$, or $\pi \cdot y<y$. If $y<\pi \cdot y$, then, because $\pi$ fixes $supp(\leq)$ pointwise and because the mapping $z\mapsto \pi \cdot z$ is bijective from $Y$ to $\pi \star Y$, we get $y<\pi \cdot y<\pi^{2} \cdot y<\ldots < \pi^{n} \cdot y$ for all $n \in \mathbb{N}$. However, since any permutation of atoms interchanges only finitely many atoms, it has finite order in the group $S_{A}$, and so there is $m \in \mathbb{N}$ such that $\pi^{m}=Id$. This means $\pi^{m} \cdot y=y$, and so we get $y<y$, which is a contradiction. Analogously, the assumption $\pi \cdot y<y$ leads to the relation $\pi^{n} \cdot y<\ldots<\pi \cdot y<y$ for all $n \in \mathbb{N}$, which is also a contradiction since $\pi$ has finite order. Therefore, $\pi \cdot y=y$, and because $y$ was arbitrarily chosen from $Y$, $Y$ should be a uniformly supported infinite subset of $X$. \end{proof} Looking at the proof of Proposition \ref{propamo}, the following result follows directly. \begin{corollary} Let $X$ be a finitely supported subset of an invariant set $(Z, \cdot)$. If $X$ is FSM Mostowski infinite, then~$X$ is non-uniformly amorphous, meaning that $X$ has two disjoint, infinite, uniformly supported subsets. \end{corollary} \begin{remark} \label{remarema} In a permutation model of set theory with atoms, a set can be well-ordered if and only if there is a one-to-one mapping of the related set into the kernel of the model.
It is also noted that the axiom of choice is valid in the kernel of the model \cite{jech}. Although FSM/nominal is related to (has connections with) permutation models of set theory with atoms, it is independently developed over ZF, without the need to relax the axioms of extensionality or foundation. FSM sets are ZF sets together with group actions, and such a theory makes sense over ZF without requiring the validity of the axiom of choice on ZF sets. Thus, FSM is the entire ZF together with atomic sets with finite support (where the set of atoms is a fixed ZF set formed by elements whose internal structure is ignored and which are basic in the higher-order construction). There may exist infinite ZF sets that do not contain infinite countable subsets, and likewise there may exist infinite uniformly supported FSM sets (particularly such ZF sets) that do not contain infinite countable, uniformly supported, subsets. \end{remark} \begin{corollary} \label{pTII} \begin{enumerate} \item The sets $A$, $A+A$ and $A \times A$ are FSM usually infinite, but they are neither FSM Mostowski infinite nor FSM Tarski II infinite. \item None of the sets $\wp_{fin}(A)$, $\wp_{cofin}(A)$, $\wp_{fs}(A)$ and $\wp_{fin}(\wp_{fs}(A))$ is FSM Mostowski infinite. \item None of the sets $A^{A}_{fs}$, $T_{fin}(A)^{A}_{fs}$ and $\wp_{fs}(A)^{A}_{fs}$ is FSM Mostowski infinite. \end{enumerate} \end{corollary} \begin{proof} In view of Theorem \ref{Tti}, it is sufficient to prove that none of the sets $A$, $\wp_{fin}(A)$, $\wp_{cofin}(A)$, $\wp_{fs}(A)$, $A+A$, $A \times A$, $A^{A}_{fs}$, $T_{fin}(A)^{A}_{fs}$ and $\wp_{fs}(A)^{A}_{fs}$ contains an infinite uniformly supported subset. For $A$, $\wp_{fin}(A)$, $\wp_{cofin}(A)$ and $\wp_{fs}(A)$ this is obvious, since for any finite set $S$ of atoms there are at most finitely many subsets of $A$ supported by~$S$, namely the subsets of $S$ and the supersets of $A \setminus S$.
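Concretely, for a finite set $S$ of atoms, the subsets of $A$ supported by $S$ are exactly the sets of the form $T$ or $A \setminus T$ with $T \subseteq S$, and so their number is at most
\[
2^{|S|}+2^{|S|}=2^{|S|+1},
\]
which is finite.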
Moreover, $\wp_{fin}(\wp_{fs}(A))$ does not contain an infinite uniformly supported subset, according to Lemma \ref{lem4}, since $\wp_{fs}(A)$ does not contain an infinite uniformly supported subset. Regarding $A^{A}_{fs}$, the situation is similar to Corollary \ref{ti2}(4). According to Lemma \ref{lem'''}, any $S$-supported function $f:A \to A$ should have the property that either $f|_{A \setminus S}=Id$ or $f|_{A \setminus S}$ is constant, taking a value in $S$. For each possible definition of such an $f$ on $S$ we have at most $|S|+1$ possible ways to define $f$ on $A \setminus S$, and so at most $|S|+1$ possible ways to completely define $f$ on $A$. If there were an infinite uniformly $S$-supported sequence of finitely supported functions from $A$ to $A$, there should exist infinitely many finitely supported functions from $S$ to $A$ supported by the same finite set $S$. But this contradicts the fact that $A^{|S|}$ does not contain an infinite uniformly supported subset (this follows by applying finitely many times the result that $X \times X$ does not contain an infinite uniformly supported subset whenever $X$ does not contain an infinite uniformly supported subset). Analyzing the proofs of Corollary \ref{ti2}(6) and (7), we also conclude that $T_{fin}(A)^{A}_{fs}$ and $\wp_{fs}(A)^{A}_{fs}$ do not contain infinite uniformly supported subsets. We also have that $A$ is not FSM Tarski II infinite because $\wp_{fs}(A)$ contains no infinite uniformly supported subsets, and so every totally ordered subset (particularly via inclusion) of $\wp_{fs}(A)$ should be finite, meaning that it has a maximal element. Furthermore, we have that there is an equivariant bijection between $\wp_{fs}(A+A)$ and $\wp_{fs}(A)\times \wp_{fs}(A)$.
Since $\wp_{fs}(A)$ does not contain an infinite uniformly supported subset, we have that $\wp_{fs}(A) \times \wp_{fs}(A)$ does not contain an infinite uniformly supported subset (the proof is essentially identical to that of Theorem \ref{ti1}(6), without relying on the countability of the related infinite uniformly supported family). Therefore, any totally ordered (via inclusion) uniformly supported family of $\wp_{fs}(A+A)$ must be finite, and so it contains a maximal element. There is an equivariant bijection between $\wp_{fs}(A)^{A}_{fs}$ and $\wp_{fs}(A \times A)$. Therefore, any uniformly supported totally ordered subset of $\wp_{fs}(A \times A)$ must be finite, and so it contains a maximal element. \end{proof} \begin{corollary} Let $X$ be a finitely supported subset of an invariant set $Y$ such that $X$ does not contain an infinite uniformly supported subset. Then the set $\wp_{fin}(X)$ is not FSM Mostowski infinite. \end{corollary} \begin{proof} According to Lemma \ref{lem4}, $\wp_{fin}(X)$ does not contain an infinite uniformly supported subset. Thus, by Theorem~\ref{Tti}, $\wp_{fin}(X)$ is not FSM Mostowski infinite. \end{proof} \begin{theorem} \label{TTRR} Let $X$ be a finitely supported subset of an invariant set $(Y, \cdot)$. \begin{enumerate} \item If $X$ is FSM Tarski I infinite, then $X$ is FSM Tarski III infinite. The converse does not hold. However, if $X$ is FSM Tarski III infinite, then $\wp_{fs}(X)$ is FSM Tarski I infinite. \item If $X$ is FSM Tarski III infinite, then $X$ is FSM Dedekind infinite. The converse does not hold. However, if $X$ is FSM Dedekind infinite, then $\wp_{fs}(X)$ is FSM Tarski III infinite. \end{enumerate} \end{theorem} \begin{proof} 1. We consider the case when $X$ has at least two elements (otherwise the theorem is trivial). Let $X$ be FSM Tarski I infinite. Then $|X \times X|= |X|$. Fix two elements $x_{1}, x_{2} \in X$ with $x_{1}\neq x_{2}$.
We can define an injection $f:X \times \{0,1\} \to X \times X$ by $f(u)=\left\{ \begin{array}{ll} (x,x_{1}) & \text{for}\: u=(x,0)\\ (x,x_{2}) & \text{for}\: u=(x,1) \end{array}\right.$. Clearly, by checking the condition in Proposition~\ref{2.18'} and using Proposition~\ref{p1}, we have that $f$ is supported by $supp(X) \cup supp(x_{1}) \cup supp(x_{2})$ (since $\{0,1\}$ is necessarily a trivial invariant set), and so $|X\times\{0,1\}|\leq |X \times X|$. Thus, $|X\times\{0,1\}| \leq |X|$. Obviously, there is an injection $i: X \to X\times\{0,1\}$ defined by $i(x)=(x,0)$ for all $x \in X$ which is supported by $supp(X)$. According to Lemma~\ref{lem2}, we get $2|X|=|X \times \{0,1\}|=|X|$. Let us consider $X=\mathbb{N} \times A$. We make the remark that $|\mathbb{N}\times \mathbb{N}|=|\mathbb{N}|$ by considering the equivariant injection $h:\mathbb{N} \times \mathbb{N} \to \mathbb{N}$ defined by $h(m,n)=2^{m}3^{n}$ and using Lemma \ref{lem2}. Similarly, $|\{0,1\}\times \mathbb{N}|=|\mathbb{N}|$ by considering the equivariant injection $h':\mathbb{N} \times \{0,1\}\to \mathbb{N}$ defined by $h'(n,0)=2^{n}$ and $h'(n,1)=3^{n}$ and using Lemma \ref{lem2}. We have $2|X|=2|\mathbb{N}||A|=|\mathbb{N}||A|=|X|$. However, we prove that $|X \times X|\neq |X|$. Assume the contrary; then $|\mathbb{N} \times (A \times A)|=|\mathbb{N} \times A \times \mathbb{N} \times A|=|\mathbb{N} \times A|$. Thus, there is a finitely supported injection $g: A \times A \to \mathbb{N} \times A$, and by Proposition \ref{pco2} there is a finitely supported surjection $f:\mathbb{N} \times A \to A \times A$. Let us consider three distinct atoms $a,b,c \notin supp(f)$. There exists $(i,x) \in \mathbb{N} \times A$ such that $f(i,x)=(a,b)$. Since $(a\,b) \in Fix(supp(f))$ and~$\mathbb{N}$ is a trivial invariant set, we have $f(i,(a\,b)(x))=(a\,b)f(i,x)=(a\,b)(a,b)=((a\,b)(a),(a\,b)(b))=(b,a)$. We must have $x=a$ or $x=b$: otherwise $(a\,b)(x)=x$, and so $f(i,x)$ would be equal to both $(a,b)$ and $(b,a)$, contradicting the fact that $f$ is a function.
Assume, without loss of generality, that $x=a$, which means $f(i,a)=(a,b)$. Therefore $f(i,b)=f(i,(a\,b)(a))=(a\,b)f(i,a)=(a\,b)(a,b)=(b,a)$. Similarly, since $(a\,c),(b\,c) \in Fix(supp(f))$, we have $f(i,c)=f(i,(a\,c)(a))=(a\,c)f(i,a)=(a\,c)(a,b)=(c,b)$ and $f(i,b)=f(i,(b\,c)(c))=(b\,c)f(i,c)=(b\,c)(c,b)=(b,c)$. But $f(i,b)=(b,a)$, contradicting the fact that $f$ is a function. Therefore, $X$ is FSM Tarski III infinite, but it is not FSM Tarski I infinite. Now, suppose that $X$ is FSM Tarski III infinite, which means $|\{0,1\}\times X|=|X|$. We define the mapping $\psi:\wp_{fs}(X) \times \wp_{fs}(X) \to \wp_{fs}(\{0,1\}\times X)$ by $\psi(U,V)=\{(0,x)\,|\,x \in U\} \cup \{(1,y)\,|\,y \in V\}$ for all $U,V \in \wp_{fs}(X)$. Clearly, $\psi$ is well-defined and bijective, and for each $\pi \in Fix(supp(X))$ we have $\psi(\pi \star U,\pi \star V)=\pi \star \psi(U,V)$, which means $\psi$ is finitely supported. Therefore, $|\wp_{fs}(X) \times \wp_{fs}(X)|=|\wp_{fs}(\{0,1\}\times X)|=|\wp_{fs}(X)|$. The last equality follows by applying Lemma \ref{lemlem} twice (using the fact that there is a finitely supported surjection from $X$ onto $X\times\{0,1\}$ and a finitely supported surjection from $X\times\{0,1\}$ onto $X$, we obtain that there is a finitely supported injection from $\wp_{fs}(X\times\{0,1\})$ into $\wp_{fs}(X)$, and a finitely supported injection from $\wp_{fs}(X)$ into $\wp_{fs}(X\times\{0,1\})$) and Lemma~\ref{lem3}. 2. Let us assume that $X$ is FSM Tarski III infinite. Let us consider an element $y_{1}$ belonging to an invariant set (whose action is also denoted by $\cdot$) with $y_{1}\notin X$ (such an element can be, for example, a non-empty element in $\wp_{fs}(X) \setminus X$). Fix $y_{2} \in X$. One can define a mapping $f:X \cup \{y_{1}\} \to X \times \{0,1\}$ by $f(x)=\left\{ \begin{array}{ll} (x,0) & \text{for}\: x \in X\\ (y_{2}, 1) & \text{for}\: x=y_{1} \end{array}\right.$.
Clearly, $f$ is injective and it is supported by $S=supp(X) \cup supp(y_{1}) \cup supp(y_{2})$ because for all $\pi$ fixing $S$ pointwise we have $f(\pi \cdot x)=\pi \cdot f(x)$ for all $x \in X \cup \{y_{1}\}$. Therefore, $|X \cup \{y_{1}\}| \leq |X \times \{0,1\}|=|X|$, and so there is a finitely supported injection $g:X \cup \{y_{1}\} \to X$. The mapping $h:X \to X$ defined by $h(x)=g(x)$ is injective, supported by $supp(g) \cup supp(X)$, and $g(y_{1}) \in X \setminus h(X)$, which means $h$ is not surjective. It follows that $X$ is FSM Dedekind infinite. Let us consider $X=A \cup \mathbb{N}$. Since $A$ and $\mathbb{N}$ are disjoint, we have that $X$ is an invariant set (similarly as in Proposition~\ref{p1}). Clearly, $X$ is FSM Dedekind infinite. Assume, by contradiction, that $|X|=2|X|$, that is, $|A \cup \mathbb{N}|=|A+A+\mathbb{N}|=|(\{0,1\}\times A) \cup \mathbb{N}|$. Thus, there is a finitely supported injection $f:(\{0,1\}\times A) \cup \mathbb{N}\to A \cup \mathbb{N}$, and so, by restriction, there exists a finitely supported injection $f:(\{0,1\}\times A) \to A \cup \mathbb{N}$. We prove that whenever $\varphi:A \to A \cup \mathbb{N}$ is finitely supported and injective, for $a \notin supp(\varphi)$ we have $\varphi(a) \in A$. Assume, by contradiction, that there is $a \notin supp(\varphi)$ such that $\varphi(a)\in \mathbb{N}$. Since $supp(\varphi)$ is finite, there exists $b \notin supp(\varphi)$, $b \neq a$. Thus, $(a\,b) \in Fix(supp(\varphi))$, and so $\varphi(b)=\varphi((a\,b)(a))=(a\,b)\diamond \varphi(a)=\varphi(a)$ since $(\mathbb{N}, \diamond)$ is a trivial invariant set. This contradicts the injectivity of $\varphi$. We can consider the mappings $\varphi_{1},\varphi_{2}: A \to A \cup \mathbb{N}$ defined by $\varphi_{1}(a)=f(0,a)$ for all $a \in A$ and $\varphi_{2}(a)=f(1,a)$ for all $a \in A$, that are injective and supported by $supp(f)$.
Therefore, $f(\{0\} \times A)=\varphi_{1}(A)$ contains at most finitely many elements from $\mathbb{N}$, and $f(\{1\} \times A)=\varphi_{2}(A)$ also contains at most finitely many elements from $\mathbb{N}$. Thus, $f$ is an injection from $(\{0,1\}\times A)$ to $A \cup F$, where $F$ is a finite subset of $\mathbb{N}$. It follows that $f(\{0\} \times A)$ contains an infinite subset of atoms $U$, and $f(\{1\} \times A)$ contains an infinite subset of atoms $V$. Since $f$ is injective, it follows that $U$ and $V$ are infinite disjoint subsets of $A$, which contradicts Proposition \ref{p111} stating that $A$ is amorphous. Now, if $X$ is FSM Dedekind infinite, we have that there is a finitely supported bijection $h$ from $X$ onto a finitely supported proper subset $Z$ of $X$. Consider an element $y_{1}$ belonging to an invariant set with $y_{1}\notin X$. We can define an injection $h': X \cup \{y_{1}\} \to X$ by taking $h'(x)=h(x)$ for all $x \in X$ and $h'(y_{1})=b$ with $b \in X \setminus Z$. Clearly, $h'$ is supported by $supp(h) \cup supp(y_{1}) \cup supp(b)$. Since there also exists a $supp(X)$-supported injection from $X$ to $X \cup \{y_{1}\}$, according to Lemma \ref{lem3}, one can define a finitely supported bijection $\psi$ from $X$ to $X \cup \{y_{1}\}$. According to Lemma \ref{lemlem}, the mapping $g:\wp_{fs}(X \cup \{y_{1}\}) \to \wp_{fs}(X)$ defined by $g(V)=\psi^{-1}(V)$ for all $V \in \wp_{fs}(X \cup \{y_{1}\})$ is finitely supported and injective. Therefore, $2^{|X|}\geq 2^{|X|+1} =2\cdot 2^{|X|}$, which in view of Lemma~\ref{lem3} leads to the conclusion that $\wp_{fs}(X)$ is FSM Tarski III infinite. \end{proof} \begin{corollary} \label{ti4} The following sets are FSM usually infinite, but they are not FSM Tarski I infinite, nor FSM Tarski III infinite. \begin{enumerate} \item The invariant set $A$. \item The invariant set $\wp_{fs}(A)$. \item The invariant sets $\wp_{fin}(A)$ and $\wp_{cofin}(A)$.
\item The set $\wp_{fin}(X)$ where $X$ is a finitely supported subset of an invariant set containing no infinite uniformly supported subset. \end{enumerate} \end{corollary} \begin{proof} The result follows directly because the related sets are not FSM Dedekind infinite, according to Theorem~\ref{ti1} and Corollary~\ref{ti2}. \end{proof} \begin{corollary}Let $X$ be an infinite finitely supported subset of an invariant set. Then $\wp_{fs}(\wp_{fs}(\wp_{fs}(X)))$ is FSM Tarski III infinite and, consequently, $\wp_{fs}(\wp_{fs}(\wp_{fs}(\wp_{fs}(X))))$ is FSM Tarski I infinite. \end{corollary} \begin{proof}Since $\wp_{fs}(\wp_{fs}(X))$ is FSM Dedekind infinite, as in the proof of Theorem \ref{TTRR}(2) one can prove $|\wp_{fs}(\wp_{fs}(X))|+1=|\wp_{fs}(\wp_{fs}(X))|$. The result now follows directly using the arithmetic properties of FSM cardinalities proved above. \end{proof} In a future work we intend to prove an even stronger result claiming that $\wp_{fs}(\wp_{fs}(X))$ is FSM Tarski III infinite and, consequently, $\wp_{fs}(\wp_{fs}(\wp_{fs}(X)))$ is FSM Tarski I infinite, whenever $X$ is an infinite finitely supported subset of an invariant set. \begin{corollary} \label{CAN1} The sets $A^{\mathbb{N}}_{fs}$ and $\mathbb{N}^{A}_{fs}$ are FSM Tarski I infinite, and so they are also FSM Tarski III infinite. \end{corollary} \begin{proof}There is an equivariant bijection $\psi$ between $(A^{\mathbb{N}})^{2}_{fs}$ and $A^{\mathbb{N} \times \{0,1\}}_{fs}$ that associates to each Cartesian pair $(f,g)$ of mappings from $\mathbb{N}$ to $A$ a mapping $h:\mathbb{N} \times\{0,1\} \to A$ defined as follows.
\[ h(u)=\left\{ \begin{array}{ll} f(n) & \text{if}\: u=(n,0) \\ \\ g(n) & \text{if}\: u=(n,1) \end{array}\right. \] The equivariance of $\psi$ follows from Proposition \ref{2.18'} because if $\pi \in S_{A}$ we have $\psi(\pi \widetilde {\star} f, \pi \widetilde {\star} g)=h'$ where $h'(n,0)=(\pi \widetilde {\star} f)(n)=\pi(f(n))$ and $h'(n,1)=(\pi \widetilde {\star} g)(n)=\pi(g(n))$. Thus, $h'(u)=\pi(h(u))$ for all $u \in \mathbb{N} \times \{0,1\}$, which means $h'=\pi \widetilde {\star} h=\pi \widetilde {\star} \psi (f,g)$. There also exists an equivariant bijection $\varphi$ between $(\mathbb{N}^{A})^{2}_{fs}$ and $(\mathbb{N} \times \mathbb{N})^{A}_{fs}$ that associates to each Cartesian pair $(f,g)$ of mappings from $A$ to $\mathbb{N}$ a mapping $h: A \to \mathbb{N} \times \mathbb{N}$ defined by $h(a)=(f(a),g(a))$ for all $a \in A$. The equivariance of $\varphi$ follows from Proposition \ref{2.18'} because if $\pi \in S_{A}$ we have $\varphi(\pi \widetilde {\star} f, \pi \widetilde {\star} g)=h'$ where $h'(a)=((\pi \widetilde {\star} f)(a), (\pi \widetilde {\star} g)(a))=(f(\pi^{-1}(a)), g(\pi^{-1}(a)))=h(\pi^{-1}(a))=(\pi \widetilde {\star}h)(a)$ for all $a \in A$, and so $h'=\pi \widetilde {\star} h=\pi \widetilde {\star} \varphi(f,g)$. Therefore, $|(A^{\mathbb{N}})^{2}_{fs}|=|A^{\mathbb{N} \times \{0,1\}}_{fs}|= |A^{\mathbb{N}}_{fs}|$. Similarly, $|(\mathbb{N}^{A})^{2}_{fs}|=|(\mathbb{N} \times \mathbb{N})^{A}_{fs}|=|\mathbb{N}^{A}_{fs}|$ according to Proposition \ref{pco1}(3) and Lemma \ref{lem2} (we used $|\mathbb{N} \times \mathbb{N}|=|\mathbb{N}|$). \end{proof} \begin{theorem} Let $X$ be a finitely supported subset of an invariant set $(Y, \cdot)$. If $\wp_{fs}(X)$ is FSM Tarski I infinite, then $\wp_{fs}(X)$ is FSM Tarski III infinite. The converse does not hold. \end{theorem} \begin{proof} The direct implication is a consequence of Theorem \ref{TTRR}(1). Thus, we focus on the proof of the invalidity of the reverse implication.
Firstly, we make the remark that whenever $U,V$ are finitely supported subsets of an invariant set with $U \cap V=\emptyset$, there is a finitely supported (by $supp(U) \cup supp(V)$) bijection from $\wp_{fs}(U\cup V)$ onto $\wp_{fs}(U) \times \wp_{fs}(V)$ that maps each $X \in \wp_{fs}(U\cup V)$ into the pair $(X \cap U, X \cap V)$. Analogously, whenever $B,C$ are invariant sets there is an equivariant bijection from $\wp_{fs}(B) \times \wp_{fs}(C)$ onto $\wp_{fs}(B+C)$ that maps each pair $(B_{1},C_{1}) \in \wp_{fs}(B) \times \wp_{fs}(C)$ into the set $\{(0,b)\,|\,b \in B_{1}\} \cup \{(1,c)\,|\,c \in C_{1}\}$. This follows directly by verifying the conditions in Proposition \ref{2.18'}. Let us consider the set $A \cup \mathbb{N}$, which is FSM Dedekind infinite. According to Theorem \ref{TTRR}(2), we have that $\wp_{fs}(A \cup \mathbb{N})$ is FSM Tarski III infinite. We prove that it is not FSM Tarski I infinite. Assume, by contradiction, that $|\wp_{fs}(A \cup \mathbb{N}) \times \wp_{fs}(A \cup \mathbb{N})|=|\wp_{fs}(A \cup \mathbb{N})|$, which means $|\wp_{fs}(A + \mathbb{N}+A + \mathbb{N})|=|\wp_{fs}(A \cup \mathbb{N})|$, and so $|\wp_{fs}(A +A + \mathbb{N})|=|\wp_{fs}(A \cup \mathbb{N})|$. Thus, according to Proposition \ref{pco1}(4), there is a finitely supported injection from $\wp_{fs}(A + A)$ to $\wp_{fs}(A \cup \mathbb{N})$, which means there is a finitely supported injection from $\wp_{fs}(A) \times \wp_{fs}(A)$ to $ \wp_{fs}(A) \times \wp_{fs}(\mathbb{N})$, and so there is a finitely supported injection from $A \times A$ to $ \wp_{fs}(A) \times \wp_{fs}(\mathbb{N})$. According to Proposition \ref{pco2}, there should exist a finitely supported surjection $f: \wp_{fs}(A) \times \wp_{fs}(\mathbb{N}) \to A \times A$. Let us consider two atoms $a,b\notin supp(f)$ with $a \neq b$. It follows that $(a\, b) \in Fix(supp(f))$. Since $f$ is surjective, there exists $(X,M) \in \wp_{fs}(A) \times \wp_{fs}(\mathbb{N})$ such that $f(X,M)=(a,b)$.
According to Proposition \ref{2.18'} and because $\mathbb{N}$ is a trivial invariant set, meaning that $(a\,b) \star M = M$, we have $f((a\,b) \star X,M)=f((a\,b) \otimes (X,M))=(a\,b) \otimes f(X,M)=(a\,b) \otimes (a,b)=((a\,b)(a), (a\,b)(b))=(b,a)$. Since $f$ is a function, we should have $((a\,b) \star X,M) \neq (X,M)$, which means $(a\,b) \star X \neq X$. We prove that if both $a,b \in supp(X)$, then $(a\,b)\star X=X$. Indeed, suppose $a,b \in supp(X)$. Since $X \in \wp_{fs}(A)$, from Proposition \ref{p111} we have that $X$ is either finite or cofinite. If $X$ is finite, then $supp(X)=X$, and so $a,b \in X$. Therefore, $(a\,b) \star X=\{(a\,b)(x)\,|\,x \in X\}=\{(a\,b)(a)\} \cup \{(a\,b)(b)\} \cup \{(a\,b)(c)\,|\,c \in X \setminus\{a,b\}\}=\{b\} \cup \{a\} \cup (X \setminus \{a,b\})=X$. Now, if $X$ is cofinite, then $supp(X)=A \setminus X$, and so $a,b \in A \setminus X$. Since $a,b \notin X$, we have $a,b \neq x$ for all $x \in X$, which means $(a\,b)(x)=x$ for all $x \in X$, and again $(a\,b)\star X=X$. Thus, one of $a$ or $b$ does not belong to $supp(X)$. Assume $b \notin supp(X)$. Let us consider an atom $c$ with $c\neq a,b$, $c \notin supp(f)$ and $c \notin supp(X)$. Then $(b\, c) \in Fix(supp(X))$, and so $(b\,c)\star X=X$. Moreover, $(b\, c) \in Fix(supp(f))$, and by Proposition~\ref{2.18'} we have $(a,b)=f(X,M)=f((b\,c) \star X,M)=f((b\,c)\otimes (X,M))=(b\,c) \otimes f(X,M)=(b\,c) \otimes (a,b)=(a,c)$, which is a contradiction because $b\neq c$. \end{proof} \begin{proposition} Let $X$ be a finitely supported subset of an invariant set $(Y, \cdot)$. If $X$ is FSM Tarski III infinite, then there exists a finitely supported bijection $g:\mathbb{N} \times X \to X$. The reverse implication is also valid. \end{proposition} \begin{proof}By hypothesis, there is a finitely supported bijection $\varphi:\{0,1\} \times X \to X$.
Let us consider the mappings $f_{1},f_{2}: X \to X$ defined by $f_{1}(x)=\varphi(0,x)$ for all $x \in X$ and $f_{2}(x)=\varphi(1,x)$ for all $x \in X$, which are injective and supported by $supp(\varphi)$ according to Proposition \ref{2.18'}. Since $\varphi$ is injective we also have $Im(f_{1}) \cap Im(f_{2})=\emptyset$, and because $\varphi$ is surjective we get $Im(f_{1}) \cup Im(f_{2})=X$. We prove by induction that the $n$-fold composition of $f_{2}$ with itself, denoted by $f_{2}^{n}$, is supported by $supp(f_{2})$ for all $n\in \mathbb{N}$. For $n=1$ this is obvious. So assume that $f_{2}^{n-1}$ is supported by $supp(f_{2})$. By Proposition \ref{2.18'} we must have $f_{2}^{n-1}(\sigma \cdot x)=\sigma \cdot f_{2}^{n-1}(x)$ for all $\sigma \in Fix(supp(f_{2}))$ and $x \in X$. Let us fix $\pi \in Fix(supp(f_{2}))$. According to Proposition \ref{2.18'}, we have $f_{2}^{n}(\pi \cdot x)=f_{2}(f_{2}^{n-1}(\pi \cdot x))=f_{2}(\pi\cdot f_{2}^{n-1}(x))=\pi \cdot f_{2}(f_{2}^{n-1}(x))=\pi \cdot f_{2}^{n}(x)$ for all $x \in X$, and so $f_{2}^{n}$ is finitely supported by Proposition \ref{2.18'}. Define $f:\mathbb{N} \times X \to X$ by $f((n,x))=f_{2}^{n}(f_{1}(x))$. Let $\pi \in Fix(supp(f_{1}) \cup supp(f_{2}))$. According to Proposition \ref{2.18'} and because $(\mathbb{N}, \diamond)$ is a trivial invariant set, we get $f(\pi \otimes (n,x))=f((n, \pi \cdot x))=f_{2}^{n}(f_{1}(\pi \cdot x))=f_{2}^{n}(\pi \cdot f_{1}(x))=\pi \cdot f_{2}^{n}(f_{1}(x))=\pi \cdot f((n,x))$ for all $(n,x) \in \mathbb{N} \times X$, which means $f$ is supported by $supp(f_{1}) \cup supp(f_{2})$. We prove the injectivity of $f$. Assume $f((n,x))=f((m,y))$, which means $f_{2}^{n}(f_{1}(x))=f_{2}^{m}(f_{1}(y))$. If $n>m$, this leads to $f_{2}^{n-m}(f_{1}(x))=f_{1}(y)$ (since $f_{2}$ is injective), which is in contradiction with the relation $Im(f_{1}) \cap Im(f_{2})=\emptyset$. Analogously, we cannot have $n<m$. Thus, $n=m$, which leads to $f_{1}(x)=f_{1}(y)$, and so $x=y$ due to the injectivity of $f_{1}$.
Therefore, $f$ is injective. Since we obviously have a finitely supported injection from $X$ into $\mathbb{N} \times X$ (e.g.\ $x \mapsto (0,x)$, which is supported by $supp(X)$), in view of Lemma \ref{lem2} we can find a finitely supported bijection between $X$ and $\mathbb{N} \times X$. The reverse implication is almost trivial. There is a finitely supported injection from $\{0,1\} \times X$ into $\mathbb{N} \times X$. If there is a finitely supported injection from $\mathbb{N} \times X$ into $X$, then there is a finitely supported injection from $\{0,1\} \times X$ into $X$. The desired result follows from Lemma \ref{lem2}. \end{proof} \section{Countability} \label{countable} \begin{definition} Let $Y$ be a finitely supported subset of an invariant set $X$. Then $Y$ is \emph{countable in FSM (or FSM countable)} if there exists a finitely supported onto mapping $f: \mathbb{N} \to Y$. \end{definition} \begin{proposition} Let $Y$ be a finitely supported countable subset of an invariant set $(X, \cdot)$. Then $Y$ is uniformly supported. \end{proposition} \begin{proof} There exists a finitely supported onto mapping $f: \mathbb{N} \to Y$. Thus, for each $y\in Y$, there exists $n \in \mathbb{N}$ such that $f(n)=y$. According to Proposition \ref{2.18'}, for each $\pi \in Fix(supp(f))$ we have $\pi \cdot y=\pi \cdot f(n)=f(\pi \diamond n)=f(n)=y$, where $\diamond$ is the necessarily trivial action on $\mathbb{N}$. Thus, $Y$ is uniformly supported by $supp(f)$. \end{proof} \begin{proposition} Let $Y$ be a finitely supported subset of an invariant set $X$. Then $Y$ is countable in FSM if and only if there exists a finitely supported one-to-one mapping $g: Y \to \mathbb{N}$. \end{proposition} \begin{proof} Suppose that $Y$ is countable in FSM. Then there exists a finitely supported onto mapping $f: \mathbb{N} \to Y$. We define $g: Y \to \mathbb{N}$ by $g(y)=min[f^{-1}(\{y\})]$ for all $y \in Y$.
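The passage from an onto map $f$ to the one-to-one map $g(y)=min[f^{-1}(\{y\})]$ involves no choice and is directly computable. A minimal Python sketch, with supports ignored and the hypothetical two-to-one surjection $f(n)=\lfloor n/2\rfloor$ on $Y=\mathbb{N}$:

```python
# From an onto map f : N -> Y, the map g(y) = min f^{-1}({y}) is one-to-one.
# Hypothetical example: Y = N and the two-to-one surjection f(n) = n // 2.

def f(n):
    return n // 2

def g(y, search_bound=1000):
    """Smallest n with f(n) = y, i.e. the minimum of the preimage of {y}."""
    return min(n for n in range(search_bound) if f(n) == y)

ys = range(50)
gs = [g(y) for y in ys]
assert len(set(gs)) == len(gs)          # g is injective
assert all(f(g(y)) == y for y in ys)    # g is a right inverse of f
```

Here $g(y)=2y$: picking the minimum of each preimage is what makes $g$ canonical, and hence supported by the same atoms as $f$ and $Y$.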
According to Proposition \ref{2.18'}, $g$ is supported by $supp(f) \cup supp(Y)$. Obviously, $g$ is one-to-one. Conversely, if there exists a finitely supported one-to-one mapping $g: Y \to \mathbb{N}$, then $g(Y)$ is equivariant as a subset of the trivial invariant set $\mathbb{N}$. Thus, there exists a finitely supported bijection $g: Y \to g(Y)$, where $g(Y) \subseteq \mathbb{N}$. We define $f: \mathbb{N} \to Y$ by \[ f(n)=\left\{ \begin{array}{ll} g^{-1}(n) & \text{if}\: n \in g(Y) \\ \\ t & \text{if}\: n \in \mathbb{N} \setminus g(Y) \end{array}\right., \] where $t$ is a fixed element of $Y$. According to Proposition \ref{2.18'}, we have that $f$ is supported by $supp(g) \cup supp(Y) \cup supp(t)$. Moreover, $f$ is onto. \end{proof} \begin{proposition} \label{cou} Let $Y$ be an infinite, finitely supported, countable subset of an invariant set $X$. Then there exists a finitely supported bijective mapping $g: Y \to \mathbb{N}$. \end{proposition} \begin{proof} First we prove that for any infinite subset $B$ of $\mathbb{N}$, there is an injection from $\mathbb{N}$ into $B$. Fix such a $B$. Being a subset of $\mathbb{N}$, $B$ is well ordered. Define $f:\mathbb{N} \to B$ by $f(1)=min(B)$, $f(2)=min(B \setminus \{f(1)\})$, and recursively $f(m)= min(B \setminus \{f(1),f(2),\ldots,f(m-1)\})$ for all $m \in \mathbb{N}$ (this is well defined since $B$ is infinite). Since $\mathbb{N}$ is well ordered, choice is not involved. Obviously, since both $B$ and $\mathbb{N}$ are trivial invariant sets, we have that $f$ is equivariant. Since $B$ is a subset of $\mathbb{N}$ we also have an equivariant injective mapping $h:B \to \mathbb{N}$. According to Lemma \ref{lem1}, there is an equivariant bijection between $B$ and $\mathbb{N}$ (one can even prove that $f$ itself is bijective). Since $Y$ is countable, there exists a finitely supported one-to-one mapping $u: Y \to \mathbb{N}$. Thus, the mapping $u: Y \to u(Y)$ is finitely supported and bijective.
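The recursion $f(m)=min(B \setminus \{f(1),\ldots,f(m-1)\})$ simply lists $B$ in increasing order, so it can be realized by a scan of $\mathbb{N}$ against a membership test. A small Python sketch, taking for $B$ the hypothetical choice of the set of perfect squares:

```python
# Choice-free enumeration of an infinite subset B of N by successive minima:
# f(m) = min(B \ {f(1), ..., f(m-1)}).  B is given only by a membership test.

def in_B(n):
    return int(n ** 0.5) ** 2 == n       # hypothetical B = {0, 1, 4, 9, 16, ...}

def enumerate_B(count):
    out, n = [], 0
    while len(out) < count:
        if in_B(n):                      # n = min(B \ out) when it is reached
            out.append(n)
        n += 1
    return out

first = enumerate_B(6)
assert first == [0, 1, 4, 9, 16, 25]     # strictly increasing, so f is injective
```

The output is strictly increasing by construction, which is exactly why $f$ is injective (and in fact a bijection onto $B$) without any appeal to choice.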
Since $u(Y) \subseteq \mathbb{N}$, we have that there is an equivariant bijection $v$ between $u(Y)$ and~$\mathbb{N}$, and so there exists a finitely supported bijective mapping $g: Y \to \mathbb{N}$ defined by $g=v \circ u$. \end{proof} From \cite{book} we know that the (in)consistency of the choice principle $\textbf{CC(fin)}$ in FSM is an open problem, meaning that it is not known whether this principle is consistent with the FSM axioms. A relationship between countable union principles and countable choice principles is presented in ZF in \cite{herrlich}. Below we prove that such a relationship is preserved in FSM. \begin{definition} \begin{enumerate} \item The Countable Choice Principle for finite sets in FSM, \textbf{CC(fin)}, has the form ``Given any invariant set $X$ and any countable family $\mathcal{F}=(X_{n})_{n}$ of finite subsets of $X$ such that the mapping $n\mapsto X_{n}$ is finitely supported, there exists a finitely supported choice function on $\mathcal{F}$." \item The Countable Union Theorem for finite sets in FSM, \textbf{CUT(fin)}, has the form ``Given any invariant set $X$ and any countable family $\mathcal{F}=(X_{n})_{n}$ of finite subsets of $X$ such that the mapping $n\mapsto X_{n}$ is finitely supported, there exists a finitely supported onto mapping $f: \mathbb{N} \to \underset{n}{\cup}X_{n}$." \item The Countable Union Theorem for $k$-element sets in FSM, \textbf{CUT(k)}, has the form ``Given any invariant set $X$ and any countable family $\mathcal{F}=(X_{n})_{n}$ of $k$-element subsets of $X$ such that the mapping $n\mapsto X_{n}$ is finitely supported, there exists a finitely supported onto mapping $f: \mathbb{N} \to \underset{n}{\cup}X_{n}$." \item The Countable Choice Principle for sets of $k$-element sets in FSM, \textbf{CC(k)}, has the form ``Given any invariant set~$X$ and any countable family $\mathcal{F}=(X_{n})_{n}$ of $k$-element subsets of $X$ in FSM such that the mapping $n\mapsto X_{n}$ is finitely supported,
there exists a finitely supported choice function on $\mathcal{F}$." \end{enumerate} \end{definition} \begin{proposition} In FSM, the following equivalences hold. \begin{enumerate} \item \textbf{CUT(fin)} $\Leftrightarrow$ \textbf{CC(fin)}; \item \textbf{CUT(2)} $\Leftrightarrow$ \textbf{CC(2)}; \item \textbf{CUT(n)} $\Leftrightarrow$ \textbf{CC(i)} for all $i \leq n$. \end{enumerate} \end{proposition} \begin{proof} 1. Let us assume that \textbf{CUT(fin)} is valid in FSM. Consider a finitely supported countable family $\mathcal{F}=(X_{n})_{n}$ in FSM, where each $X_{n}$ is a non-empty finite subset of an invariant set $X$. From \textbf{CUT(fin)}, there exists a finitely supported onto mapping $f: \mathbb{N} \to \underset{n}{\cup}X_{n}$. Since $f$ is onto and each $X_{n}$ is non-empty, the set $f^{-1}(X_{n})$ is a non-empty subset of $\mathbb{N}$ for each $n \in \mathbb{N}$. Consider the function $g: \mathcal{F} \to \cup \mathcal{F}$ defined by $g(X_{n})=f(min[f^{-1}(X_{n})])$. We claim that $supp(f) \cup supp(n \mapsto X_{n})$ supports $g$. Let $\pi \in Fix(supp(f) \cup supp(n \mapsto X_{n}))$. According to Proposition \ref{2.18'}, and because $\mathbb{N}$ is a trivial invariant set and each element $X_{n}$ is supported by $supp(n \mapsto X_{n})$, we have $\pi \cdot g(X_{n})=\pi \cdot f(min[f^{-1}(X_{n})])= f(\pi \diamond min[f^{-1}(X_{n})])=f(min[f^{-1}(X_{n})])=g(X_{n})=g(\pi \star X_{n})$, where $\star$ denotes the $S_{A}$-action on $\mathcal{F}$, $\cdot$ denotes the $S_{A}$-action on $\cup \mathcal{F}$, and $\diamond$ denotes the trivial action on $\mathbb{N}$. Therefore, $g$ is finitely supported. Moreover, $g(X_{n}) \in X_{n}$, and so $g$ is a choice function on $\mathcal{F}$. Conversely, let $\mathcal{F}=(X_{n})_{n}$ be a countable family of finite subsets of $X$ such that the mapping $n\mapsto X_{n}$ is finitely supported. Thus, each $X_{n}$ is supported by the same set $S=supp(n \mapsto X_{n})$.
Since each $X_{n}$ is finite (and the support of a finite set coincides with the union of the supports of its elements), as in the proof of Lemma \ref{lem4}, we have that $Y=\underset{n \in \mathbb{N}}\cup X_{n}$ is uniformly supported by $S$. Moreover, the countable sequence $(Y_{n})_{n \in \mathbb{N}}$ defined by $Y_{n}=X_{n} \setminus \underset{m<n}\cup X_{m}$ is a uniformly supported (by $S$) sequence of pairwise disjoint uniformly supported sets with $Y=\underset{n \in \mathbb{N}}\cup Y_{n}$. Let $M \subseteq \mathbb{N}$ be the set of those indices $n$ for which $Y_{n}$ is non-empty. For each $n \in M$, the set $T_{n}$ of total orders on $Y_{n}$ is finite, non-empty, and uniformly supported by $S$. Thus, by applying \textbf{CC(fin)} to $(T_{n})_{n \in M}$, there is a choice function $f$ on $(T_{n})_{n \in M}$ which is also supported by $S$. Furthermore, $f(T_{n})$ is supported by $supp(f) \cup supp(T_{n})=S$ for all $n \in M$. One can define a uniformly supported (by $S$) total order relation on $Y$ (which is also a well order relation on $Y$) as follows: $x \leq y$ if and only if $\left\{ \begin{array}{ll} x \in Y_{n} \; \text{and}\; y \in Y_{m} \;\text{with}\; n<m \\ \text{or}\\ x,y \in Y_{n}\; \text{and}\; x\,f(T_{n})\,y \end{array}\right.$. Clearly, if $Y$ is infinite, then $M$ is infinite and there is an $S$-supported order isomorphism between $(Y, \leq)$ and $M$ with the natural order, which means, in view of Proposition \ref{cou}, that $Y$ is countable. 2. As in the above item, $\textbf{CUT(2)} \Rightarrow \textbf{CC(2)}$. To prove $\textbf{CC(2)} \Rightarrow \textbf{CUT(2)}$, let $\mathcal{F}=(X_{n})_{n}$ be a countable family of $2$-element subsets of $X$ such that the mapping $n\mapsto X_{n}$ is finitely supported. According to \textbf{CC(2)}, there exists a finitely supported choice function $g$ on $(X_{n})_{n}$. Let $x_{n}=g(X_{n}) \in X_{n}$.
As in the above item, we have that $supp(n \mapsto X_{n})$ supports $x_{n}$ for all $n \in \mathbb{N}$. For each $n$, let $y_{n}$ be the unique element of $X_{n}\setminus \{x_{n}\}$. Since for any $n$ both $x_{n}$ and $X_{n}$ are supported by the same set $supp(n \mapsto X_{n})$, it follows that $y_{n}$ is also supported by $supp(n \mapsto X_{n})$ for all $n \in \mathbb{N}$. Define $f: \mathbb{N} \to \underset{n}{\cup}X_{n}$ by $ f(n)=\left\{ \begin{array}{ll} x_{\frac{n}{2}} & \text{if}\: n \;\text{is even}\\ \\ y_{\frac{n-1}{2}} & \text{if}\: n \;\text{is odd} \end{array}\right.$. We can equivalently describe $f$ as being defined by $f(2k)=x_{k}$ and $f(2k+1)=y_{k}$. Clearly, $f$ is onto. Furthermore, because all $x_{n}$ and all $y_{n}$ are uniformly supported by $supp(n \mapsto X_{n})$, we have $f(n)=\pi \cdot f(n)$ for all $\pi \in Fix(supp(n \mapsto X_{n}))$ and all $n \in \mathbb{N}$. Thus, according to Proposition \ref{2.18'}, we obtain that $f$ is also supported by $supp(n \mapsto X_{n})$, and so $\underset{n}{\cup}X_{n}$ is FSM countable. 3. As in the proof of the first item. \end{proof} We can easily remark that under $\textbf{CC(fin)}$ a finitely supported subset $X$ of an invariant set is FSM Dedekind infinite if and only if $\wp_{fin}(X)$ is FSM Dedekind infinite. \begin{proposition} Let $Y$ be a finitely supported countable subset of an invariant set $X$. Then the set $\underset{n \in \mathbb{N}}{\cup}Y^{n}$ is countable, where $Y^{n}$ denotes the $n$-fold Cartesian product of $Y$. \end{proposition} \begin{proof} Since $Y$ is countable, we can order it as a sequence $Y=\{x_{1}, \ldots, x_{n}, \ldots\}$. Each set $Y^{k}$ is then represented \emph{uniquely} with respect to this enumeration of the elements of $Y$. Since $Y$ is finitely supported and countable, all the elements of $Y$ are supported by the same set $S$ of atoms.
Thus, in view of Proposition \ref{p1}, for each $k \in \mathbb{N}$, all the elements of $Y^{k}$ are supported by $S$. Fix $n \in \mathbb{N}$. On $Y^{n}$ define the $S$-supported strict well order relation $\sqsubset$ by: $(x_{i_{1}}, x_{i_{2}}, \ldots, x_{i_{n}}) \sqsubset (x_{j_{1}}, x_{j_{2}}, \ldots, x_{j_{n}})$ if and only if $\left\{ \begin{array}{ll} i_{1} < j_{1} \\ \text{or}\\ i_{1}=j_{1}\; \text{and}\; i_{2}<j_{2}\\ \text{or}\\ \ldots \\ \text{or}\\ i_{1}=j_{1}, \ldots, i_{n-1}=j_{n-1}\; \text{and}\; i_{n}<j_{n} \end{array}\right.$. Now, define an $S$-supported strict well order relation $\prec$ on $\underset{n \in \mathbb{N}}{\cup}Y^{n}$ by $u \prec v$ if and only if $\left\{ \begin{array}{ll} u \in Y^{n} \; \text{and}\; v \in Y^{m} \;\text{with}\; n<m \\ \text{or}\\ u,v \in Y^{n}\; \text{and}\; u \sqsubset v \end{array}\right.$. Therefore, there exists an $S$-supported order isomorphism between $(\underset{n \in \mathbb{N}}{\cup}Y^{n}, \prec)$ and $(\mathbb{N}, <)$. \end{proof} \section{Conclusion} It is known that, when an infinite family of basic elements having no internal structure is allowed by weakening some axioms of ZF set theory, results valid in ZF may lose their validity. According to Theorem 5.4 in \cite{hal}, the multiple choice principle and Kurepa's antichain principle are both equivalent to the axiom of choice in ZF. However, in Theorem~9.2 of~\cite{jech} it is proved that the multiple choice principle is valid in the Second Fraenkel Model, while the axiom of choice fails in this model. Furthermore, Kurepa's maximal antichain principle is valid in the Basic Fraenkel Model, while the multiple choice principle fails in this model. This means that the following two statements (both valid in ZF), \emph{`Kurepa's principle implies the axiom of choice'} and \emph{`The multiple choice principle implies the axiom of choice'}, fail in Zermelo-Fraenkel set theory with atoms.
FSM is related to set theory with atoms; however, in our approach $A$ is considered as a ZF set (without it being necessary to modify the axioms of foundation or extensionality), and invariant sets are defined as sets with group actions. Additionally, FSM involves an axiom of finite support which states that only atomic finitely supported structures (under a canonical hierarchical set-theoretical construction) are allowed in the theory. Therefore, there is indeed a similarity between the development of permutation models of set theory with atoms and FSM, but this framework is developed over standard ZF in the form `usual sets together with actions of permutation groups', without it being necessary to consider an alternative set theory. The goal of this paper is to answer the natural question of whether the theorems involving the usual, non-atomic ZF sets remain valid in the framework of atomic sets with finite supports modulo canonical permutation actions. It is already known that there exist results that are consistent with ZF but are invalid when replacing `non-atomic structure' with `atomic finitely supported structure'. The ZF results are not valid in FSM unless we are able to reformulate them with respect to the finite support requirement. The proofs of the~FSM results should not break the principle that any structure has to be finitely supported, which means that the related proofs should be \emph{internally consistent in FSM} and not retrieved from ZF. The methodology for moving from ZF into~FSM is based on the formalization of FSM into higher-order logic (not a simple task, due to some important limitations) or on the hierarchical construction of supports using $S$-finite support reasoning, which is a hierarchical method for defining the support of a structure in terms of the supports of its substructures.
Since any structure has to be finitely supported in FSM, specific results (that are not derived from ZF) can also be obtained. In this paper we study infinite cardinalities of finitely supported structures. The preorder relation~$\leq$ on FSM cardinalities, defined via finitely supported injective mappings, is antisymmetric but not total. The preorder relation~$\leq^{*}$ on FSM cardinalities, defined via finitely supported surjective mappings, is neither antisymmetric nor total. Thus, the Cantor-Schr{\"o}der-Bernstein theorem (in which cardinalities are ordered via finitely supported injective mappings) is consistent with the finite support requirement of FSM. However, the dual of the Cantor-Schr{\"o}der-Bernstein theorem (in which cardinalities are ordered via finitely supported surjective mappings) is not valid for finitely supported structures. Several other specific properties of cardinalities are presented in Theorem \ref{cardord1}. The idea of presenting various approaches regarding `infinite' belongs to Tarski, who formulated several definitions of infinite in \cite{tarski24}. The independence of these definitions was later proved in set theory with atoms in \cite{levy1}. Such independence results can be transferred into classical ZF set theory by employing the Jech-Sochor embedding theorem, which states that permutation models of set theory with atoms can be embedded into symmetric models of ZF, and so a statement which holds in a given permutation model of set theory with atoms, and whose validity depends only on a certain fragment of that model, also holds in some well-founded model of ZF. In this paper we reformulate the definitions of (in)finiteness from \cite{tarski24} internally in FSM, in terms of finitely supported structures. The related definitions of `FSM infinite' are introduced in Section \ref{chap9}.
We particularly mention FSM usual infinite, FSM Tarski infinite (of three types), FSM Dedekind infinite, FSM Mostowski infinite, FSM Kuratowski infinite, and FSM ascending infinite. We were able to establish comparison results between them and to present relevant examples of FSM sets that satisfy certain specific infinity properties. These comparison results are proved internally in FSM, by employing only finitely supported constructions. Some of the results are obtained by using the classical translation technique from ZF into FSM involving the $S$-finite support principle, while many other properties (especially those revealing uniform supports) are specific to~FSM. We also provide connections with FSM (uniformly) amorphous sets. We have particularly focused on the notion of FSM Dedekind infinity, and we proved a full characterization of FSM Dedekind infinite sets. For example, we were able to prove that $T_{fin}(A)$, $\wp_{fin}(\wp_{fs}(A))$, $A^{A}_{fs}$, $\wp_{fin}(A^{A}_{fs})$, $(A^{n})^{A}_{fs}$ (for a fixed $n \in \mathbb{N}$), $T_{fin}(A)^{A}_{fs}$ and $\wp_{fs}(A)^{A}_{fs}$ are not FSM Dedekind infinite (nor FSM Mostowski infinite), while $\wp_{fs}(\wp_{fin}(A))$ and $T^{\delta}_{fin}(A)$ are FSM Dedekind infinite. The notion of `countability' is described in FSM in Section \ref{countable}, where we present connections between countable choice principles and countable union theorems for finitely supported sets. In Figure \ref{fig:1} we point out some of the relationships between the FSM definitions of infinite. The `red arrows' symbolize \emph{strict} implications (of the form $p$ implies $q$, but $q$ does not imply $p$), while the `black arrows' symbolize implications which we have not yet proved to be strict or not (see Remark \ref{remarema}). Blue arrows represent equivalences.
\begin{figure} \caption{FSM relationship between various forms of infinity} \label{fig:1} \end{figure} In the final table below we present the forms of infinity satisfied by the classical FSM sets. \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline Set & Tarski I inf & Tarski III inf & Ded. inf & Most. inf & Asc. inf & Tarski II inf & Non-amorph. \\ \hline $A$ & No & No & No & No & No & No & No \\ \hline $A+A$ & No & No & No & No & No & No & Yes \\ \hline $A \times A$ & No & No & No & No & No & No & Yes \\ \hline $\wp_{fin}(A)$ & No & No & No & No & Yes & Yes & Yes \\ \hline $T_{fin}(A)$ & No & No & No & No & Yes & Yes & Yes \\ \hline $\wp_{fs}(A)$ & No & No & No & No & Yes & Yes & Yes \\ \hline $\wp_{fin}(\wp_{fs}(A))$ & No & No & No & No & Yes & Yes & Yes \\ \hline $A^{A}_{fs}$ & No & No & No & No & Yes & Yes & Yes \\ \hline $T_{fin}(A)^{A}_{fs}$ & No & No & No & No & Yes & Yes & Yes \\ \hline $\wp_{fs}(A)^{A}_{fs}$ & No & No & No & No & Yes & Yes & Yes \\ \hline $A \cup \mathbb{N}$ & No & No & Yes & Yes & Yes & Yes & Yes \\ \hline $A \times \mathbb{N}$ & No & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline $\wp_{fs}(A \cup \mathbb{N})$ & No & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline $\wp_{fs}(\wp_{fs}(A))$ & ? & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline $A^{\mathbb{N}}_{fs}$ & Yes & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline $\mathbb{N}^{A}_{fs}$ & Yes & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline \end{tabular} \end{center} \end{document}
\begin{document} \title{Moyal products -- a new perspective on quasi-hermitian quantum mechanics} \author{F G Scholtz} \altaffiliation{[email protected]} \author{H B Geyer} \altaffiliation{[email protected]} \affiliation{Institute of Theoretical Physics, University of Stellenbosch,\\ Stellenbosch 7600, South Africa} \date{\today} \begin{abstract} The rationale for introducing non-hermitian Hamiltonians and other observables is reviewed and open issues identified. We present a new approach based on Moyal products to compute the metric for quasi-hermitian systems. This approach is not only an efficient method of computation, but also suggests a new perspective on quasi-hermitian quantum mechanics which invites further exploration. In particular, we present some first results which link the Berry connection and curvature to non-perturbative properties and the metric. \pacs{03.65-w, 03.65-Ca, 03.65-Ta} \end{abstract} \maketitle \section{Introduction} \label{intro} The history of quantum mechanics now spans more than a century. The mathematical underpinning, although more recent, is also well established, particularly due to the work of von Neumann and Weyl, amongst others (see \cite{bratteli} for a comprehensive treatment). A central result of this analysis, known as the Stone-von Neumann theorem (\cite{bratteli}, vol.\ 2, pp.\ 34-37), states that for systems with a finite number of degrees of freedom, all unitary irreducible representations of the Weyl algebra (determined by the canonical commutation relations) are equivalent. In essence this implies that the quantization of systems with a finite number of degrees of freedom is unique, up to unitary transformations. For systems with an infinite number of degrees of freedom the situation is more complex. In this case there are inequivalent representations of the Weyl algebra and, correspondingly, such systems can be quantized in inequivalent ways.
Central to the analysis mentioned above, which forms the basis for our current formulation and interpretation of quantum mechanics, is the idea that classical observables should be represented as hermitian operators on the quantum level. In particular, position and momentum are viewed as observables that have to be quantized as hermitian operators. In the past three to four decades it has, however, transpired that this may be an unnecessarily restrictive point of view, for a variety of reasons. In particular, it turns out to be convenient to resort to non-hermitian observables, mostly the Hamiltonian, when considering effective interactions \cite{schucan,barrett} or when bosonizing a fermionic system, as the one- plus two-body character of the Hamiltonian can only be maintained in this way \cite{geyer,Doba}, while also maintaining a lowest order association between bosons and fermion pairs. More recently, there has been considerable interest in so-called PT-symmetric quantum mechanics, in which the Hamiltonian is taken to be non-hermitian but PT-invariant. Several examples of such Hamiltonians have been found which exhibit a PT-unbroken phase in which the eigenvalues are real and a normal quantum mechanical interpretation is made possible by the introduction of a new inner product, based on the so-called ${\mathcal C}$-operator having properties very similar to the charge conjugation operator \cite{bbj2,bender}. The possibility of quantum computing has also required the study of open quantum systems, in which the Hamiltonian necessarily becomes non-hermitian in order to capture the dissipative nature of the system; see e.g.\ \cite{qcomp1,qcomp2}. Therefore, the use, and even necessity, of non-hermitian Hamiltonians is nowadays generally accepted, be it for mathematical simplicity or for the description of physical reality. Above we alluded to two quite distinct situations in which non-hermiticity is introduced.
In the case of dissipative systems the Hamiltonian will generally have complex eigenvalues, and a normal quantum mechanical interpretation is simply not possible, and indeed inappropriate. In the cases of bosonization, effective interactions and PT-symmetric quantum mechanics, non-hermiticity is introduced with other considerations in mind, such as a simplified mathematical description. A normal quantum mechanical interpretation should, however, still be possible. In this article we focus on the latter systems and ask when and how a system of non-hermitian observables constitutes a consistent quantum system. This question is not new and has, in fact, been studied in considerable detail some time ago by the present authors \cite{scholtz}. The conclusion of that analysis was that a system of non-hermitian observables forms a consistent quantum system if a new inner product exists with respect to which the observables are all hermitian. The existence of this new inner product was tied to the existence of a positive definite, bounded hermitian metric operator $\Theta$, defined on the whole Hilbert space, which obeys the following exchange rule with the observables $A_i$ \cite{scholtz}: \begin{equation} \label{metric} A_i\Theta=\Theta A_i^\dagger\,\,\forall i\,. \end{equation} Furthermore, it was shown that this metric operator is unique (up to a global normalization) if and only if the set of observables under consideration forms an irreducible set \cite{scholtz}. All these results apply to the case of a finite number of degrees of freedom. The considerations above open up an alternative way of thinking about quantum mechanics. Conventionally one chooses a Hilbert space (and by implication an inner product), and the construction of observables is dictated by the requirement of hermiticity with respect to this inner product.
With the results above in place, one can, however, take the point of view that the observables are the primary objects, which uniquely fix the Hilbert space and inner product. The latter, of course, only exists if the observables form a consistent set. To implement this approach in practice, as demonstrated in section \ref{examples}, one starts with a Hilbert space which carries an irreducible unitary representation of the Weyl algebra, i.e., hermitian position and momentum operators and thus a unitary representation of the Heisenberg algebra. One then proceeds to write down the observables of the theory, e.g., energy, angular momentum, etc., as functions of the hermitian position and momentum operators, but, in contrast to the conventional approach, the condition that these observables are hermitian w.r.t.\ the inner product on the Hilbert space is relaxed. To check that this set of observables constitutes a consistent quantum system, one verifies the existence of a metric operator with the properties stated above. One can also take a slightly different point of view on this construction. If a metric operator exists, one easily verifies from (\ref{metric}) that under the similarity transformation $\Theta^{-1/2}A_i(\hat x,\hat p)\Theta^{1/2}=A_i(\Theta^{-1/2}\hat x\Theta^{1/2},\Theta^{-1/2}\hat p\Theta^{1/2})$ the observables become hermitian on the Hilbert space under consideration (see also \cite{kretsch1,kretsch2}). Thus, one could take the point of view that the observables are hermitian functions of the non-hermitian position and momentum operators $\Theta^{-1/2}\hat x\Theta^{1/2}$ and $\Theta^{-1/2}\hat p\Theta^{1/2}$. From this point of view one thus considers a non-unitary representation of the Weyl algebra, i.e., non-hermitian position and momentum, and constructs the observables as hermitian functions of these non-hermitian position and momentum operators.
If the metric exists, this non-unitary representation of the Weyl algebra is equivalent to a unitary representation. This also implies that if the non-unitary representation under consideration is not equivalent to a unitary one, the metric cannot exist. Note that since all irreducible unitary representations of the Weyl algebra are equivalent, all non-unitary representations that are equivalent to a unitary representation are also equivalent to one another. Whether the Weyl algebra admits inequivalent non-unitary irreducible representations is, to our knowledge, an open question. However, if such representations were to be encountered in the construction procedure above, the metric cannot exist or the construction procedure has to be revised. Thus, at least within the construction procedure outlined above, no inequivalent quantizations of a given system (with a finite number of degrees of freedom) can occur if it is to constitute a consistent quantum system. Given this situation, one may wonder what is to be gained in quantum mechanics from taking the point of view advocated above. One advantage has already been mentioned, namely mathematical simplification. As already pointed out, when bosonizing fermionic systems the one- plus two-body character of the Hamiltonian can only be maintained in a non-hermitian description, while the hermitian counterpart contains all possible higher order interaction terms \cite{geyer,Doba}. Related to this observation, it was recently also noted in the context of PT-symmetric quantum mechanics that the non-hermitian Hamiltonian often lends itself more readily to a perturbative treatment than its hermitian counterpart \cite{bender2}. This reflects the fact that the implied similarity transformation relating the hermitian and non-hermitian Hamiltonians already accounts for some non-perturbative effects.
These examples illustrate that a judicious choice of observables (which fixes the inner product) may lead to considerable simplification and even the possibility of non-perturbative solutions. Central to the successful practical implementation of the approach outlined above is the verification of the existence of the metric operator and its subsequent construction. In this regard it is important to note that if one is only interested in solving the eigenvalue equation for the Hamiltonian, it is sufficient to verify the existence of a metric operator obeying the exchange rule (\ref{metric}) with the Hamiltonian, as one is then assured of the reality of the eigenvalues. For this purpose it is not necessary to construct the metric explicitly, as the solution of the eigenvalue equation requires no explicit knowledge of the metric. However, if one intends to compute other physical quantities such as expectation values, transition probabilities and cross sections, detailed knowledge of the metric is required and the metric must be computed explicitly. It is also at this point that the uniqueness (up to a global normalization) of the metric becomes important, as physical quantities will depend on the choice of metric if it is not unique. Solving operator equations of the type (\ref{metric}) is notoriously difficult. The successful implementation of the approach outlined above therefore hinges very much on a reliable way of solving these types of equations, or at the very least of verifying the existence of a solution with the properties required of a metric operator. This issue was partially investigated in \cite{scholtz} and more recently, from a variety of different viewpoints, in \cite{most1,mosta,bbj,bbj1,jones,swanson}. Here we propose a new approach, based on the Moyal product, which maps the operator equation (\ref{metric}) exactly onto an equivalent differential equation.
Generically this equation may be of infinite order, but in many cases of physical interest it turns out to be finite. This equation contains all the information required to construct the metric operator exactly. In addition, criteria can be formulated to test the hermiticity and positive definiteness of the metric directly on the level of this equation, leading to considerable simplification. When one considers eq.\ (\ref{metric}) for a reducible set of observables, e.g. the Hamiltonian alone, the metric is not uniquely determined. However, on the level of the Moyal product formulation the solution of the corresponding differential equation is uniquely determined once an appropriate set of boundary conditions has been specified. On the other hand, as pointed out above, the metric is uniquely determined (up to an irrelevant normalization factor) if eq.\ (\ref{metric}) is considered for an irreducible set of observables. This suggests an interplay between boundary conditions in phase space and the choice of an irreducible set of physical observables. This provides us with yet another perspective on the construction of a quantum system, and it is this perspective that we explore further here. The paper is organised in the following way. In section \ref{Moyal} we briefly review the Moyal product with emphasis on the properties we use in later sections. In section \ref{examples} we construct the metric of three non-hermitian PT-symmetric Hamiltonians, two of which are exactly solvable. The issues of uniqueness of the metric, choice of observables and the relation to boundary conditions in phase space are addressed within these models. In section \ref{conclusions} we conclude with a brief summary of our findings. The present paper extends the results recently published in \cite{schgey}. \section{The Moyal product} \label{Moyal} The Moyal product was introduced by Moyal \cite{moyal} to facilitate Wigner's phase space formulation of quantum mechanics. 
More recently it was revived in the context of non-commutative systems (see e.g. \cite{fairlie}). In this section we briefly list the essential features of this construction. A more detailed exposition can be found in \cite{schgey}. In a finite dimensional Hilbert space with dimension $N$ one can construct a unitary irreducible representation of the Heisenberg-Weyl algebra \begin{equation} \label{algebra} gh=e^{i\phi}hg;\quad g^\dagger=g^{-1},\, h^\dagger=h^{-1}\,, \end{equation} where $\phi=\frac{2\pi}{N}$. The operators $U(n,m)=g^nh^m$, with $n=0,1,\ldots,N-1$ and $m=0,1,\ldots,N-1$, form a basis in the space of operators (matrices) on the Hilbert space. Any operator $A$ can therefore be expanded in the form \begin{equation} \label{expandf} A=\sum_{n,m=0}^{N-1} a_{n,m}g^nh^m\,,\quad a_{n,m}=(U(n,m),A)/N\,. \end{equation} Making the substitutions $g\rightarrow e^{i\alpha}$, $h\rightarrow e^{i\beta}$ with $\alpha,\beta\in [0,2\pi)$ in the expansion (\ref{expandf}) turns $A$ into a function $A(\alpha,\beta)$, uniquely determined by the operator $A$: \begin{equation} \label{functionf} A(\alpha,\beta)=\sum_{n,m=0}^{N-1} a_{n,m}e^{in\alpha}e^{im\beta}\,. \end{equation} An isomorphism with the operator product can be established by defining the Moyal product of functions $A(\alpha,\beta)$ and $B(\alpha,\beta)$ \cite{moyal,fairlie} \begin{equation} \label{moyal} A(\alpha,\beta)\ast B(\alpha,\beta)=A(\alpha,\beta)e^{i\phi\stackrel{\leftarrow}{\partial_\beta} \stackrel{\rightarrow}{\partial_\alpha}}B(\alpha,\beta)\,, \end{equation} where the notation $\stackrel{\leftarrow}{\partial}$ and $\stackrel{\rightarrow}{\partial}$ indicates that the derivatives act to the left and right, respectively. On this level operators are replaced by functions, as described by (\ref{functionf}), while the non-commutative nature of the operators is captured by the Moyal product. 
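The finite Weyl algebra (\ref{algebra}) has a concrete realization in terms of the standard clock-and-shift matrices. The following NumPy sketch (our illustration; the text does not single out a particular matrix realization) verifies the defining relation, the unitarity of $g$ and $h$, and the completeness of the $N^2$ operators $U(n,m)$ for $N=5$.

```python
import numpy as np

# Clock-and-shift realization of the finite Heisenberg-Weyl algebra;
# this particular matrix choice is ours, for illustration only.
N = 5
phi = 2 * np.pi / N
w = np.exp(1j * phi)

g = np.diag(w ** np.arange(N))        # clock: g|n> = w^n |n>
h = np.roll(np.eye(N), 1, axis=0)     # shift: h|n> = |n+1 mod N>

# Defining relation g h = e^{i phi} h g, and unitarity g^† = g^{-1}, h^† = h^{-1}
assert np.allclose(g @ h, w * h @ g)
assert np.allclose(g.conj().T @ g, np.eye(N))
assert np.allclose(h.conj().T @ h, np.eye(N))

# The N^2 products U(n,m) = g^n h^m span the space of N x N matrices:
# flattening them into rows gives a matrix of full rank N^2.
U = np.array([(np.linalg.matrix_power(g, n) @ np.linalg.matrix_power(h, m)).ravel()
              for n in range(N) for m in range(N)])
assert np.linalg.matrix_rank(U) == N * N
```

The rank check confirms that the $U(n,m)$ are linearly independent and hence form a basis, so the expansion (\ref{expandf}) exists for any operator.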
It is easily checked that the Moyal product is associative, as one would expect from the associativity of the corresponding operator product. Once the function $A(\alpha,\beta)$ is given, the coefficients $a_{n,m}$ are computed from a simple Fourier transform. Insertion of these coefficients in (\ref{expandf}) enables the reconstruction of the operator. Let $A^\dagger(\alpha,\beta)$ and $A(\alpha,\beta)$ denote the functions corresponding to the hermitian conjugate operator $A^\dagger$ and the operator $A$, respectively. One easily establishes the following relation between these functions \cite{schgey} \begin{equation} \label{hc} A^\dagger(\alpha,\beta)=e^{i\phi\partial_\alpha \partial_\beta}A^\ast(\alpha,\beta)\,. \end{equation} From this it follows that an operator is hermitian if and only if \begin{equation} \label{ccf} A^\ast(\alpha,\beta)=e^{-i\phi \partial_\alpha\partial_\beta}A(\alpha,\beta)\,. \end{equation} These results are easily generalized to the case of an infinite dimensional quantum system. In this case a well known irreducible unitary representation of the Heisenberg-Weyl algebra exists \cite{bratteli} \begin{equation} \label{weyl} e^{it\hat p}e^{is\hat x}=e^{i\hbar ts}e^{is\hat x}e^{it\hat p}\,, \end{equation} where $\hat x$ and $\hat p$ are the hermitian position and momentum operators satisfying canonical commutation relations. The operators $U(t,s)\equiv e^{it\hat p}e^{is\hat x}$ constitute a complete set \cite{bratteli} and any operator can be expanded as \begin{equation} \label{expandq} A(\hat x,\hat p)=\int_{-\infty}^{\infty} dsdt\; a(t,s) e^{it\hat p}e^{is\hat x}\,,\quad a(t,s)=\frac{\hbar}{2\pi} (U(t,s),A)\,. \end{equation} Replacing $\hat x$ and $\hat p$ by real numbers turns $A(\hat x,\hat p)$ into a function $A(x,p)$, uniquely determined by $A$ \begin{equation} \label{functionq} A(x,p)=\int_{-\infty}^{\infty} dsdt\; a(t,s) e^{itp}e^{isx}\,. 
\end{equation} An isomorphism with the operator product can be established by introducing the Moyal product of functions \begin{equation} \label{moyalq} A(x,p)\ast B(x,p)=A(x,p)e^{i\hbar \stackrel{\leftarrow}{\partial_x}\stackrel{\rightarrow}{\partial_p}}B(x,p)\,. \end{equation} On this level we again work with functions, rather than operators, while the non-commutativity of the operators is captured by the Moyal product. As before, associativity is easily verified. Once the function $A(x,p)$ has been determined, the function $a(t,s)$ is determined from a Fourier transform. Insertion into the expansion (\ref{expandq}) recovers the operator $A(\hat x,\hat p)$. As before, let the functions $A^\dagger(x,p)$ and $A(x,p)$ denote the functions corresponding to the hermitian conjugate operator $A^\dagger(\hat x,\hat p)$ and the operator $A(\hat x,\hat p)$, respectively. One easily establishes the relation \cite{schgey} \begin{equation} \label{hcq} A^\dagger(x,p)=e^{i\hbar \partial_x\partial_p}A^\ast(x,p)\,. \end{equation} This implies that an operator is hermitian if and only if \begin{equation} \label{ccq} A^\ast(x,p)=e^{-i\hbar \partial_x\partial_p}A(x,p)\,. \end{equation} In what follows we shall often encounter situations where the operator $A$ is a function of $\hat x$, or $\hat p$, only. It is therefore worthwhile to consider this situation briefly. Consider the case of an operator $A(\hat p)$ depending on $\hat p$ only. From (\ref{expandq}) we have \begin{equation} \label{ponly} a(t,s)=\frac{\hbar}{2\pi} (U(t,s),A(\hat p))=\frac{\hbar}{2\pi}tr(e^{-is\hat x}e^{-it\hat p}A(\hat p)) =\delta(s)\int \frac{dp}{2\pi} A(p)e^{-itp}\,. \end{equation} Substituting this result in (\ref{functionq}) we note that the function $A(x,p)$ corresponding to the operator $A(\hat p)$ is just $A(p)$, i.e., we just replace the momentum operator by a real number. Clearly, the same argument applies to $A(\hat x)$. 
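Because the exponential in (\ref{moyalq}) truncates when acting on polynomials, the Moyal product can be implemented verbatim in a computer algebra system. The following SymPy sketch (the helper name `star` is our own) checks the star commutator $x\ast p-p\ast x=i\hbar$, the reduction to the ordinary product for functions of $p$ alone, and associativity on a sample of polynomials.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def star(A, B, order=10):
    """Moyal product A * B = A exp(i hbar d_x^<- d_p^->) B, truncated at
    `order`; exact for polynomials of sufficiently low degree."""
    return sp.expand(sum((sp.I * hbar) ** n / sp.factorial(n)
                         * sp.diff(A, x, n) * sp.diff(B, p, n)
                         for n in range(order)))

# Star commutator of the phase-space coordinates gives the canonical i*hbar
comm = star(x, p) - star(p, x)

# For functions of p only, the star product reduces to the ordinary product
only_p = sp.simplify(star(p**2, p**3) - p**5)

# Spot check of associativity on polynomials
assoc = sp.simplify(star(star(x**2, p), p**2) - star(x**2, star(p, p**2)))
```

Here `comm` evaluates to $i\hbar$ while `only_p` and `assoc` vanish, in line with the properties quoted in the text.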
An approach related to the one we discuss here was developed in \cite{bender1}, although in that case the position and momentum operators are used as a basis to expand the operators. Compared to the present approach, the unboundedness of the position and momentum operators complicates the proof of completeness. Secondly, the product rule of these operators is not as simple as that of the Weyl algebra. This complicates the implementation on the level of classical variables. The current approach therefore seems to be more generic. \section{Metrics from Moyal products} \label{examples} \subsection{Metric equation in Moyal form} As was already pointed out in the introduction, there are a variety of reasons why the Hamiltonian, and other observables, of a system may be non-hermitian with respect to the inner product on the Hilbert space on which the system is quantized. The central question is then whether a consistent quantum mechanical interpretation remains possible. This was answered in \cite{scholtz} where it was pointed out that a normal quantum mechanical interpretation is possible if a metric operator $\Theta$ exists which has as domain the whole Hilbert space, is hermitian, positive definite and bounded, and satisfies the equation \begin{equation} \label{metricdef} H\Theta=\Theta H^\dagger\,, \end{equation} where $H$ denotes the Hamiltonian of the system. Once the existence of such an operator has been established, a new inner product can be defined with respect to which the Hamiltonian is hermitian and a standard quantum mechanical interpretation is possible. However, as was pointed out in \cite{scholtz}, and also explored in \cite{most1,most2}, the condition (\ref{metricdef}) is not sufficient to fix the metric uniquely, which implies that the quantum mechanical interpretation based on this metric, and the associated inner product, is ambiguous. 
The metric is uniquely determined (up to an irrelevant global normalization) if one requires hermiticity of a complete irreducible set of observables, $A_i$ (of which the Hamiltonian may be a member), with respect to the inner product associated with $\Theta$, i.e., it is required that (\ref{metricdef}) holds for all observables \cite{scholtz}: \begin{equation} \label{metricdefg} A_i\Theta=\Theta A_i^\dagger\,\,\forall i\,. \end{equation} From this point of view the choice of observables determines the metric and Hilbert space of the quantum system uniquely. Alternatively, one may argue that if a metric, satisfying (\ref{metricdef}), can be found, a particular choice of metric determines the allowed set of measurable observables. This is the spirit of PT-symmetric quantum mechanics. Let us consider the defining equations (\ref{metricdef}) and (\ref{metricdefg}) on the level of the Moyal product formulation. On this level the observables $A_i$, their hermitian conjugates $A_i^\dagger$ and the metric operator $\Theta$ get replaced by functions $A_i(x,p)$, $A_i^\dagger (x,p)$ and $\Theta(x,p)$ as prescribed in (\ref{functionq}). Note that $A_i(x,p)$ and $A_i^\dagger (x,p)$ are related as in (\ref{hcq}). In terms of these functions the defining relation (\ref{metricdef}) then reads \begin{equation} \label{metricdefm} H(x,p)\ast \Theta(x,p)=\Theta(x,p)\ast H^\dagger(x,p)\,, \end{equation} while (\ref{metricdefg}) reads \begin{equation} \label{metricdefgm} A_i(x,p)\ast \Theta(x,p)=\Theta(x,p)\ast A_i^\dagger(x,p)\,. \end{equation} If the set of observables $A_i$ is irreducible, equation (\ref{metricdefgm}) determines the metric uniquely up to a global normalization factor, without the necessity of additional boundary conditions. On the other hand, equation (\ref{metricdefm}) does not determine the metric uniquely and it is necessary to impose further boundary conditions to fix the metric up to a possible normalization factor. 
This demonstrates the already mentioned interplay between the choice of observables and boundary conditions on the metric function $\Theta(x,p)$. To demonstrate this point more clearly we consider the case where we specify $\hat x$ and $\hat p$ as observables. In this case equation (\ref{metricdefgm}) simply becomes $\Theta^{(1,0)}(x,p)=\Theta^{(0,1)}(x,p)=0$, i.e., the metric is just a constant. This simply reflects the following facts: (1) that $\hat x$ and $\hat p$ form an irreducible set, which implies that the metric is uniquely determined up to a global normalization factor and (2) that we have chosen $\hat x$ and $\hat p$ hermitian from the outset (insisting on unitary representations of the Heisenberg-Weyl algebra), so that the metric must be proportional to the identity (the original inner product corresponds to $\Theta=1$). \subsection{A solvable model} \label{soluble} To demonstrate the technique described above, and to illustrate the features discussed thus far, we study in this section a model for which the metric can be computed exactly and analytically. We proceed by first choosing the Hamiltonian as the only observable, leaving all other possible observables unspecified. It was already pointed out that this does not fix the metric uniquely and this is demonstrated explicitly below. Once this has been established, we proceed to augment the Hamiltonian with other choices of observables which, together with the Hamiltonian, form an irreducible set and show that this indeed eliminates the non-uniqueness of the metric. The model we study is defined by the following Hamiltonian, written in second quantized form: \begin{equation} \label{ham1} H=\hbar\omega a^\dagger a+\hbar\alpha aa+\hbar\beta a^\dagger a^\dagger\,, \end{equation} where $a$ and $a^\dagger$ are bosonic annihilation and creation operators, respectively. 
A finite dimensional version of this model was studied in \cite{scholtz}; more recently the model itself was studied in \cite{swanson,geyer1,jones}. This can be rewritten in terms of $\hat x$ and $\hat p$ in the usual way by setting $a^\dagger=(\hat x-i\hat p)/\sqrt{2\hbar}$ and $a=(\hat x+i\hat p)/\sqrt{2\hbar}$. Suppressing an irrelevant real constant term, which plays no role in the metric equation, this yields \begin{equation} \label{ham2} H=a\hat p^2+b\hat x^2+ic\hat p\hat x\,,\;a=(\omega-\alpha-\beta)/2,\,b=(\omega+\alpha+\beta)/2,\,c =(\alpha-\beta)\,. \end{equation} On the level of the Moyal product formulation this becomes \begin{equation} \label{clasham1} H(x,p)=a p^2+b x^2+icpx;\quad H^\dagger(x,p)=a p^2+b x^2-icpx+c\hbar\,. \end{equation} Substituting this in (\ref{metricdefm}) and noting that, since $H(x,p)$ is polynomial, the Moyal product truncates, one obtains the following partial differential equation for $\Theta(x,p)$: \begin{eqnarray} \label{diffeq1} &&c\left(\hbar - 2\,i\,p\,x \right) \,\Theta(x,p) \nonumber\\ &&+\hbar\,\left( \left( c\,p - 2\,i\,b\,x \right) \,\Theta^{(0,1)}(x,p)+ \left( c\,x+2\,i \,a\,p\,\right)\Theta^{(1,0)}(x,p) + b\,\hbar\,\Theta^{(0,2)}(x,p) - a\,\hbar\,\Theta^{(2,0)}(x,p) \right)=0\, , \end{eqnarray} where the notation $\Theta^{(n,m)}=\partial^{n+m}\Theta /\partial x^n\partial p^m$ has been introduced. Note that since no boundary conditions have been specified, the solution is not unique. On the level of the Moyal product the non-uniqueness of the metric therefore resides in the freedom to specify the boundary conditions in (\ref{diffeq1}). It should, however, be noted that the boundary conditions imposed on (\ref{diffeq1}) cannot be arbitrary, as the solution has to conform with the conditions of hermiticity and positive definiteness of the metric. 
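As an independent check of (\ref{diffeq1}), the truncating star products can be evaluated symbolically for a generic $\Theta(x,p)$. The following SymPy sketch (our verification, not part of the original derivation) confirms that $H\ast\Theta-\Theta\ast H^\dagger$ reproduces the left-hand side of (\ref{diffeq1}), up to an overall sign.

```python
import sympy as sp

x, p, hbar, a, b, c = sp.symbols('x p hbar a b c')
Theta = sp.Function('Theta')(x, p)

def star(A, B, order=6):
    # Moyal product; the series truncates here because H is quadratic
    return sum((sp.I * hbar) ** n / sp.factorial(n)
               * sp.diff(A, x, n) * sp.diff(B, p, n) for n in range(order))

H = a * p**2 + b * x**2 + sp.I * c * p * x
Hdag = a * p**2 + b * x**2 - sp.I * c * p * x + c * hbar   # as in (clasham1)

# H * Theta - Theta * H^dagger, compared with the quoted left side of (diffeq1)
lhs = sp.expand(star(H, Theta) - star(Theta, Hdag))
quoted = sp.expand(c * (hbar - 2*sp.I*p*x) * Theta
                   + hbar * ((c*p - 2*sp.I*b*x) * sp.diff(Theta, p)
                             + (c*x + 2*sp.I*a*p) * sp.diff(Theta, x)
                             + b * hbar * sp.diff(Theta, p, 2)
                             - a * hbar * sp.diff(Theta, x, 2)))

# The two agree up to an overall sign, so their sum cancels term by term
residual = sp.expand(lhs + quoted)
```

The residual vanishes identically, so (\ref{diffeq1}) is exactly the metric equation (\ref{metricdefm}) for this Hamiltonian.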
Keeping in mind that the metric is uniquely specified once a complete set of observables, hermitian with respect to the inner product associated with $\Theta$, is identified, an interplay between the boundary conditions imposed on (\ref{diffeq1}) and the choice of physical observables on the operator level is suggested. In this regard, note that a choice of boundary conditions that does not admit a solution conforming to hermiticity and positive definiteness constitutes an inconsistent choice of observables, and subsequently an inconsistent quantum system, as was pointed out in \cite{scholtz}. On the other hand, if an appropriate choice of boundary conditions, which fixes the metric uniquely, is made, both the Hilbert space and allowed observables in the quantum theory are uniquely determined. The allowed observables can indeed be computed by solving (\ref{metricdefgm}) for the observables once the metric has been determined. In this case each choice of boundary condition for solving (\ref{metricdefgm}) corresponds to an admissible observable. Of course, it remains to be determined, typically {\it a posteriori}, when a set of such observables, now by construction quasi-hermitian with respect to the same metric which has been uniquely fixed (or chosen) with respect to the Hamiltonian, constitutes an irreducible set together with the Hamiltonian. Before proceeding to the detailed solutions of (\ref{diffeq1}), it is useful to comment on some general features of the equation and its solutions. The first issue that we address is the existence of an hermitian metric operator, $\Theta$, as required by the definition of the metric operator. Equation (\ref{diffeq1}) is clearly linear and of the form $L\Theta(x,p)=0$, with $L$ a differential operator. 
Using $e^{-i\hbar \partial_x\partial_p}x e^{i\hbar \partial_x\partial_p}=x-i\hbar\partial_p$ and $e^{-i\hbar \partial_x\partial_p}p e^{i\hbar \partial_x\partial_p}=p-i\hbar\partial_x$, one easily verifies $e^{-i\hbar \partial_x\partial_p}L e^{i\hbar \partial_x\partial_p}= -L^*$, so that $L^*e^{-i\hbar \partial_x\partial_p}\Theta(x,p)=0$. On the other hand the complex conjugate of (\ref{diffeq1}) reads $L^*\Theta^*(x,p)=0$. Provided that the boundary conditions imposed on (\ref{diffeq1}) also satisfy (\ref{ccq}), uniqueness of the solution ((\ref{diffeq1}) is linear) implies $\Theta^*(x,p)=e^{-i\hbar \partial_x\partial_p}\Theta(x,p)$. Thus, provided that the boundary conditions imposed are consistent with (\ref{ccq}), the solution of (\ref{diffeq1}), when employed to construct the metric operator as described in the previous section, will yield an hermitian metric operator. A further property to note is that if $\Theta(x,p)$ is a solution (not necessarily corresponding to an hermitian and positive definite operator) of (\ref{diffeq1}), or equivalently (\ref{metricdefm}), then for arbitrary functions $f(H(x,p))$ and $g(H^\dagger(x,p))$ the following is also a solution: $f(H(x,p))\ast \Theta(x,p)\ast g(H^\dagger(x,p))$, where the functions $f$ and $g$ are defined through a Taylor expansion involving the Moyal product. This is most easily checked directly on the level of equation (\ref{metricdefm}) using the associativity of the Moyal product and the fact that $f(H)\ast H=H\ast f(H)$ and $g(H^\dagger)\ast H^\dagger=H^\dagger\ast g(H^\dagger)$. This is, once again, merely a reflection of the non-uniqueness of the solution of (\ref{diffeq1}), which has to be eliminated through some appropriate choice of boundary conditions. As was pointed out earlier the boundary conditions cannot be completely arbitrary, but must conform to hermiticity and positive definiteness. 
This does not, however, eliminate the freedom in (\ref{diffeq1}) completely and more input is required in the form of boundary conditions. Indeed, one can easily verify from (\ref{metricdefm}), associativity, and the relation (\ref{hcq}), that if $\Theta(x,p)$ is a solution corresponding to an hermitian and positive definite operator, so will be $g(H(x,p))\ast \Theta(x,p)\ast e^{i\hbar\partial_x\partial_p}g(H(x,p))^\ast$. Let us now proceed to the detailed solution of (\ref{diffeq1}). It is not difficult to find an exact solution in the form of the following one parameter family of metrics: \begin{equation} \label{onepar} \Theta(x,p)=e^{r\,p^2 + s\,p\,x + t\,x^2},\, \end{equation} where \begin{equation} \label{onepar1} r=\frac{-c \pm {\sqrt{c^2 - 4\,a\,b\,\hbar\,s\left(2i-\hbar s\right)}}}{4\,b\,\hbar}\,,t =\frac{c \pm {\sqrt{c^2 - 4\,a\,b\,\hbar\,s\left(2i-\hbar s\right)}}}{4\,a\,\hbar}\,, \end{equation} $s$ being a free parameter. Note that the solution has an essential singularity at $\hbar=0$, so that the metric does not exist as a classical object. Once this solution has been found, a large class of solutions can be constructed as was pointed out earlier. The non-uniqueness of the solution is, however, already explicit in (\ref{onepar}) and we concentrate on that for the moment. To see how this freedom can be eliminated through the specification of other observables, consider the situation where we specify the momentum as a further observable. Then, from (\ref{metricdefgm}), $\Theta(x,p)$ also has to satisfy the equation $\Theta^{(1,0)}(x,p)=0$, i.e., it can not depend on $x$. To satisfy this condition we must have $s=0$, $t=0$ and $r=-\frac{c}{2b\hbar}=\frac{\beta-\alpha}{(\omega+\alpha+\beta)\hbar}$, which removes the freedom in (\ref{onepar}). 
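The claim that $s=0$, $t=0$ and $r=-c/2b\hbar$ yields a solution can be verified by direct substitution into (\ref{diffeq1}). The following SymPy sketch (our check) works with the exponent of $\Theta$, so that the cancellation is exact: writing $\Theta=e^{u}$ with $u=-cp^2/2b\hbar$, one has $\Theta_p/\Theta=u_p$, $\Theta_{pp}/\Theta=u_{pp}+u_p^2$, and $\Theta_x=\Theta_{xx}=0$.

```python
import sympy as sp

x, p, hbar, b, c = sp.symbols('x p hbar b c')

# s = 0, t = 0 branch of (onepar1): Theta = exp(u) with u = -c p^2 / (2 b hbar)
u = -c * p**2 / (2 * b * hbar)

# Logarithmic derivatives of Theta = exp(u):
up = sp.diff(u, p)                   # Theta_p / Theta
upp = sp.diff(u, p, 2) + up**2       # Theta_pp / Theta
# Theta_x = Theta_xx = 0 since u is x-independent, so the a-dependent
# terms of (diffeq1) drop out.

# Left-hand side of (diffeq1) divided by Theta (which never vanishes)
residual = sp.expand(c * (hbar - 2*sp.I*p*x)
                     + hbar * ((c*p - 2*sp.I*b*x) * up + b * hbar * upp))
```

The residual vanishes identically, confirming that this purely momentum-dependent exponential solves the metric equation.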
Furthermore it is clear that the arbitrariness in the solution $g(H(x,p))\ast \Theta(p)\ast e^{i\hbar\partial_x\partial_p}g(H(x,p))^\ast$ is also removed, in that the introduction of the function $g$ will lead to an unwanted position dependency of the metric through the position dependency of the Hamiltonian. Similarly, if one specifies the position as another observable, the metric must be independent of momentum, which is the case when $s=0$, $r=0$ and $t=\frac{c}{2a\hbar}=\frac{\alpha-\beta}{(\omega-\alpha-\beta)\hbar}$. As before the introduction of the function $g$ introduces an unwanted momentum dependency in the metric, so that all freedom, apart from a global normalization factor, has been eliminated. Note that both these solutions also have the correct asymptotic behaviour in the limit $c\rightarrow 0$. As a final remark we point out that the first of these solutions would have resulted by imposing the boundary condition that the metric is constant on a line $p=p_0$, while the second results from imposing the same condition on a line $x=x_0$. Let us now consider the hermiticity and positive definiteness of these solutions. Since $a$, $b$ and $c$ are real, the condition (\ref{ccq}) is trivially satisfied in both these cases and the metric is hermitian. To show positive definiteness one has to verify that the logarithm of the metric is hermitian. In the Moyal product formulation this implies that one has to find the function corresponding to the logarithm of the metric operator and verify that it satisfies (\ref{ccq}), i.e., one has to find the function $\eta(x,p)$ such that $1+\eta+\frac{1}{2!}\eta\ast\eta+\frac{1}{3!}\eta\ast\eta\ast\eta+\ldots=\Theta$. 
In this case it is, however, obvious that the Moyal product reduces to an ordinary product so that the function corresponding to the logarithm of the metric operator is simply the logarithm of (\ref{onepar}), which is $-\,c\,p^2/2b\hbar$ or $\,c\,x^2/2a\hbar$, depending on the choice of additional observables $p$ or $x$. This trivially satisfies (\ref{ccq}) so that the metric is positive definite, although not necessarily bounded. As a final example of how the non-uniqueness is removed through the specification of other observables, consider the case where the number operator $\hat N=a^\dagger a=\frac{1}{2\hbar}(\hat p^2+\hat x^2)$ is specified as the other observable. On the level of the Moyal product $\hat N$ gets replaced by $N=\frac{1}{2\hbar}(p^2+x^2)$. From (\ref{metricdefgm}) and (\ref{clasham1}) we then note that $\Theta(x,p)$ has to satisfy an additional equation, which is identical to (\ref{diffeq1}) with the specific choice of parameters $a=b=\frac{1}{2\hbar}$ and $c=0$. The solution of this equation is clearly also of the form (\ref{onepar}) where $r=t=\pm i\sqrt{\hbar s(2i-\hbar s)}/2\hbar$. Since the metric has to satisfy both (\ref{diffeq1}) and this equation, the value of the free parameter $s$ is fixed to be $s_{\pm}=\frac{1}{\hbar}(i\pm\frac{\sqrt{c^2-(a-b)^2}}{|a-b|})=\frac{i}{\hbar}(1\pm\frac{2\sqrt{\alpha\beta}}{|\alpha+\beta|})$ and $r=t=\frac{c}{2\hbar(a-b)}=-\frac{(\alpha-\beta)}{2\hbar(\alpha+\beta)}$. Only the $s_-$ solution yields the correct asymptotic behaviour for $\Theta(x,p)$ in the hermitian limit $c\rightarrow 0$ or $\alpha\rightarrow\beta$. One can also easily verify that no further freedom exists by introducing the function $g$. The reason is simply that these functions can only depend on either $H$ or $N$ if one wants a solution of either of these two equations. Since both equations have to be satisfied this freedom is thus eliminated. 
Finally we check the hermiticity and positive definiteness of this metric, i.e., we check whether the metric and its logarithm satisfy (\ref{ccq}). We first check the metric. Since it is an exponential it is very difficult to verify the hermiticity directly. Instead, we check the hermiticity order by order in a series expansion in $c$. As an example we show the calculation to third order in $c$. To this order we have for $\Theta(x,p)$ \begin{eqnarray} \label{3rd} \Theta(x,p)&=&1 + \frac{c\,\left( p^2 + x^2 \right) }{2\hbar(a - b)} + \frac{c^2\,\left( p^4 + 4\,i \,\hbar\,p\,x + 2\,p^2\,x^2 + x^4 \right) }{8\,{\left( a - b \right) }^2\,\hbar^2} + \frac{c^3\,\left( p^2 + x^2 \right) \,\left( p^4 + 12\,i \,\hbar\,p\,x + 2\,p^2\,x^2 + x^4 \right) } {48\,{\left( a - b \right) }^3\,\hbar^3}\nonumber\\ &\equiv& 1+ca_1+c^2a_2+c^3a_3\,. \end{eqnarray} It is now simple to verify that (\ref{ccq}) is indeed satisfied to this order in $c$, verifying the hermiticity of the metric. Next we turn to the logarithm of the metric. To compute the logarithm to third order we simply expand the logarithm of (\ref{3rd}), keeping in mind that the expansion must be done through the Moyal product in order to reflect the operator nature correctly. One easily finds \begin{equation} \label{logT1} (\log \Theta)(x,p)=\log(1+ca_1+c^2a_2+c^3a_3)=a_1c+c^2\left(a_2-\frac{a_1\ast a_1}{2}\right)+c^3\left(a_3+\frac{a_1\ast a_1\ast a_1}{3}- \frac{a_1\ast a_2+a_2\ast a_1}{2}\right)\,. \end{equation} Substituting from (\ref{3rd}) the values for $a_1,a_2$ and $a_3$, the logarithm of the metric can easily be evaluated to this order to be \begin{equation} \label{logTsol} (\log \Theta)(x,p)=\frac{c^2}{4(a-b)^2}+\left(\frac{c}{2\hbar(a-b)}\left(1+\frac{c^2}{3(a-b)^2}\right)\right) \left(p^2+x^2\right)\,. \end{equation} It is now simple to verify that (\ref{ccq}) is satisfied by (\ref{logTsol}), verifying the positive definiteness of the metric. 
We note that (\ref{logTsol}) has a simple linear form in $p^2+x^2$, i.e., in the number operator $N$. It is not too difficult to see that this is indeed a feature that persists to all orders, so that in this model the logarithm of the metric has the simple form $(\log \Theta)(x,p)=a(c)+b(c)\left(p^2+x^2\right)$, where $a(c)$ and $b(c)$ are real and $a(0)=b(0)=0$. The condition (\ref{ccq}) can then in fact be verified to all orders in $c$. This confirms the positive definiteness and hermiticity to all orders. \subsection{A shifted oscillator: the $ix$ potential} Not unexpectedly an exact solution can be obtained for the metric $\Theta$ (and equivalently for the $\mathcal C$-operator) when the Hamiltonian is the PT-symmetric complex shifted harmonic oscillator with $H= \tfrac{1}{2}p^2 + \tfrac{1}{2}x^2 + ix$. In this case it has been shown \cite{bbj} that the general form of the $\mathcal C$-operator, ${\mathcal C} = e^Q{\mathcal P}$, simplifies to ${\mathcal C} = e^{-2p}{\mathcal P}$. It is well known that the metric can be related to a similarity transformation that transforms a given non-hermitian Hamiltonian into an equivalent hermitian form (see also section \ref{intro}), and that this similarity transformation is in turn related to the $\mathcal C$-operator \cite{most1,mosta,scholtz}, which allows the identification $\Theta = e^{-2p}$ in the present example. This result also immediately follows from the general analysis in section \ref{Moyal}. In particular, eq.\ (\ref{metricdefm}) reduces to the partial differential equation \begin{equation} \label{linpot} 2ix\Theta (x,p) + (ix-1)\Theta^{(0,1)} - \tfrac{1}{2}\Theta^{(0,2)} - i\Theta^{(1,0)}p + \tfrac{1}{2} \Theta^{(2,0)} = 0, \end{equation} where we have set $\hbar = 1$. 
Assuming $\Theta (x,p) = \Theta (p)$ only, eq.\ (\ref{linpot}) becomes the ordinary differential equation (primes indicating differentiation with respect to $p$) \begin{equation} 2ix\Theta + (ix-1)\Theta^{\prime} - \tfrac{1}{2}\Theta^{\prime\prime} = 0, \end{equation} with the solution $\Theta (p) = e^{-2p}$, as before. As already pointed out, the non-uniqueness of the metric determined from the star product (\ref{metricdefm}) is associated with different boundary conditions. For the PDE (\ref{linpot}) above, another class of solutions may accordingly be investigated by assuming $\Theta (x,p) = \Theta (x)$. The differential equation then reduces to a Schr\"odinger equation with a linear potential, whose solutions are the standard Airy functions, here with complex argument. These solutions, the extent to which they conform to the requirements of hermiticity and positive definiteness, and their physical implications will be discussed elsewhere \cite{geysch}. \subsection{The $ix^3$ potential} \label{PT} Next we consider the PT-symmetric model with Hamiltonian \cite{bender} \begin{equation} \label{ham} H(\hat x,\hat p)=\hat p^2 +ig\hat x^3\,. \end{equation} Here we consider the Hamiltonian as the only observable and leave the other possible observables unspecified. We then proceed to investigate the existence of a (non-unique) hermitian and positive definite metric. Since the PT-symmetry is unbroken for this Hamiltonian \cite{bender} we expect such a metric to exist. On the level of the Moyal product this Hamiltonian and its hermitian conjugate get replaced by the functions (it is a sum of functions depending on $\hat p$ and $\hat x$ only) \begin{equation} \label{hamf} H(x,p)=p^2 +igx^3\,,\quad H^\dagger(x,p)=H^\ast(x,p)\,, \end{equation} while the metric becomes a function $\Theta(x,p)$ as defined in (\ref{functionq}). Note that in this case the hermitian conjugate of the operator gets replaced by the complex conjugate of the function $H(x,p)$. 
It is simple to see that this is a generic feature of functions (or the sum of functions) that depend on $\hat x$ or $\hat p$ only, as there is no phase due to the exchange of the operators $e^{it\hat p}$ and $e^{is\hat x}$ (see (\ref{ponly})). As before the Moyal product in equation (\ref{metricdefm}) truncates due to the polynomial nature of $H(x,p)$ to yield the following differential equation for $\Theta(x,p)$: \begin{equation} \label{diffeq} 2\,i \,g\,x^3\,\Theta(x,p) - 3\,g\,\hbar\,x^2\,\Theta^{(0,1)}(x,p) - 3\,i \,g\,\hbar^2\,x\,\Theta^{(0,2)}(x,p) + g\,\hbar^3\,\Theta^{(0,3)}(x,p) - 2\,i \,\hbar\,p\,\Theta^{(1,0)}(x,p) + \hbar^2\,\Theta^{(2,0)}(x,p)=0\,. \end{equation} Solutions with the correct asymptotic behaviour can be constructed by resorting to a series representation in $g$ for the solutions of (\ref{diffeq}). This also allows us to make contact with existing literature in which a series expansion was used \cite{most2}. In the same spirit as the solvable model this series expansion can be tested for hermiticity and positive definiteness by using the criterion (\ref{ccq}). 
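Equation (\ref{diffeq}) can likewise be recovered symbolically from the truncating star products. The following SymPy sketch (our verification, with $\Theta$ a generic function) confirms that $H\ast\Theta-\Theta\ast H^\dagger$ equals the quoted left-hand side of (\ref{diffeq}) term by term.

```python
import sympy as sp

x, p, hbar, g = sp.symbols('x p hbar g')
Theta = sp.Function('Theta')(x, p)

def star(A, B, order=6):
    # Moyal product; the series truncates because H is a cubic polynomial
    return sum((sp.I * hbar) ** n / sp.factorial(n)
               * sp.diff(A, x, n) * sp.diff(B, p, n) for n in range(order))

H = p**2 + sp.I * g * x**3
Hdag = p**2 - sp.I * g * x**3      # complex conjugation suffices, as in (hamf)

lhs = sp.expand(star(H, Theta) - star(Theta, Hdag))
quoted = sp.expand(2*sp.I*g*x**3*Theta
                   - 3*g*hbar*x**2*sp.diff(Theta, p)
                   - 3*sp.I*g*hbar**2*x*sp.diff(Theta, p, 2)
                   + g*hbar**3*sp.diff(Theta, p, 3)
                   - 2*sp.I*hbar*p*sp.diff(Theta, x)
                   + hbar**2*sp.diff(Theta, x, 2))
residual = sp.expand(lhs - quoted)
```

The residual vanishes identically, confirming (\ref{diffeq}) as the metric equation for this Hamiltonian.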
We list the result to $O(g)$ below, as the expression becomes rather elaborate at higher order, but in principle it is quite simple to compute the result to any order desired: \begin{eqnarray} \label{solog} &&\Theta(x,p)=\frac{-21\,i \,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar^4\,c1(p)}{16p^6} - \frac{i \,e^{\frac{2\,i \,p\,x}{\hbar}}\,\hbar\,c1(p)}{2p} - \frac{21\,e^{\frac{ 2\,i \,p\,x}{\hbar}}\,g\,\hbar^3\,x\,c1(p)}{8\,p^5} + \frac{i\,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar^2\,x^2\,c1(p)}{8p^4} + \nonumber\\ && \frac{e^{\frac{ 2\,i \,p\,x}{\hbar}}\,g\,\hbar\,x^3\,c1(p)}{4\,p^3} + c2(p) + \frac{3\,i \,g\,\hbar^2\,x\,c2(p)}{4p^4} - \frac{3\,g\,\hbar\,x^2\,c2(p)}{4\,p^3} - \frac{i\,g\,x^3\,c2(p)}{2p^2} + \frac{g\,x^4\,c2(p)}{4\,\hbar\,p} - \frac{i\,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar\,c3(p)}{2p} + \nonumber\\ && g\,c4(p) + \frac{21\, i\,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar^4\, c1'(p)}{16p^5} + \frac{21\,e^{\frac{ 2\,i \,p\,x}{\hbar}}\,g\,\hbar^3\,x\,c1'(p)} {8\,p^4} - \frac{9\,i\,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar^2\,x^2\,c1'(p)} {8p^3} - \frac{e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar\,x^3\,c1'(p)}{4\,p^2} - \nonumber\\ &&\frac{3\,i\,g\,\hbar^2\,x\,c2'(p)}{4p^3} + \frac{3\,g\,\hbar\,x^2\,c2'(p)}{4\,p^2} + \frac{i\,g\,x^3\,c2'(p)}{2p} - \frac{9\,i \,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar^4\,c1''(p)}{16p^4} - \frac{9\,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar^3\,x\,c1''(p)}{8\,p^3} + \nonumber\\ &&\frac{\,i\,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar^2\,x^2\,c1''(p)}{8p^2} + \frac{3\,i \,g\,\hbar^2\,x\,c2''(p)}{4p^2} - \frac{3\,g\,\hbar\,x^2\,c2''(p)}{4\,p} + \frac{i \,e^{\frac{2\,i \,p\,x}{\hbar}}\,g\,\hbar^4\,c1^{(3)}(p)}{8p^3} + \nonumber\\ &&\frac{e^{\frac{ 2\,i \,p\,x}{\hbar}}\,g\,\hbar^3\,x\,c1^{(3)}(p)}{4\,p^2} - \frac{i\,g\,\hbar^2\,x\,c2^{(3)}(p)}{2p}\,+O(g^2)\,. \end{eqnarray} This is the most general form of the solution where $c1(p)$, $c2(p)$, $c3(p)$ and $c4(p)$ are completely arbitrary functions of $p$. 
These functions are, however, restricted if one requires that the expansion (\ref{solog}) satisfies the hermiticity condition (\ref{ccq}). Indeed, one can quite easily see that $c2(p)$ and $c4(p)$ must at least be real. Here we do not pursue the most general solution, but rather focus on a particular choice of integration constants for which this expression simplifies considerably. Setting $c1(p)=0$, $c2(p)=1$ (this brings about only a global normalization), $c3(p)=0$ and $c4(p)=0$ one finds to $O(g)$ \begin{equation} \label{solbcg} \Theta(x,p)=1 + g\,\left( \frac{3\,i\,\hbar^2\,x}{4p^4} - \frac{3\,\hbar\,x^2}{4\,p^3} - \frac{i \,x^3}{2p^2} + \frac{x^4}{4\,\hbar\,p} \right)\,. \end{equation} This result agrees with the result in \cite{most2} when the different normalization of the kinetic energy term, accounting for the factor of $\frac{1}{2}$, and the different convention for the definitions of the metric (see (\ref{metricdef})), which brings about a complex conjugation, are taken into account. 
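It can also be checked directly that the truncated solution (\ref{solbcg}) satisfies (\ref{diffeq}) through first order in $g$, while an uncancelled $O(g^2)$ remainder survives, as expected. A small sympy illustration:

```python
# Verify (solbcg) solves (diffeq) at orders g^0 and g^1.
import sympy as sp

x, p, g, hbar = sp.symbols('x p g hbar')
Theta = 1 + g*(3*sp.I*hbar**2*x/(4*p**4) - 3*hbar*x**2/(4*p**3)
               - sp.I*x**3/(2*p**2) + x**4/(4*hbar*p))

eq = (2*sp.I*g*x**3*Theta - 3*g*hbar*x**2*sp.diff(Theta, p)
      - 3*sp.I*g*hbar**2*x*sp.diff(Theta, p, 2)
      + g*hbar**3*sp.diff(Theta, p, 3)
      - 2*sp.I*hbar*p*sp.diff(Theta, x) + hbar**2*sp.diff(Theta, x, 2))

expr = sp.expand(eq)
assert sp.simplify(expr.coeff(g, 0)) == 0   # O(g^0) cancels
assert sp.simplify(expr.coeff(g, 1)) == 0   # O(g^1) cancels
assert sp.simplify(expr.coeff(g, 2)) != 0   # remainder absorbed at next order
```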
With this choice of boundary conditions, even the higher order term simplifies considerably and one easily finds to $O(g^3)$ (setting any further integration constants zero) \begin{eqnarray} \label{solbcg2} \Theta(x,p)&=&1 + g\,\left( \frac{3\,i \,\hbar^2\,x}{4p^4} - \frac{3\,\hbar\,x^2}{4\,p^3} - \frac{i\,x^3}{2p^2} + \frac{x^4}{4\,\hbar\,p} \right) \nonumber\\ & +& g^2\,\left( \frac{108\,i \,\hbar^5\,x}{p^9} - \frac{108\,\hbar^4\,x^2}{p^8} - \frac{ 57\,i \,\hbar^3\,x^3}{p^7} + \frac{21\,\hbar^2\,x^4}{p^6} + \frac{6\,i \,\hbar\,x^5}{p^5} - \frac{11\,x^6}{8\,p^4} - \frac{i \,x^7}{4\hbar\,p^3} + \frac{x^8}{32\,\hbar^2\,p^2} \right) \nonumber\\ &+& g^3\,\left( \frac{29872557\,i \,\hbar^8\,x}{256p^{14}} - \frac{29872557\,\hbar^7\,x^2}{256\,p^{13}} - \frac{7676559\,i\,\hbar^6\,x^3}{128p^{12}} + \frac{5395599\,\hbar^5\,x^4}{256\,p^{11}} + \frac{727299\,i\,\hbar^4\,x^5}{128p^{10}}\right.\nonumber\\ & -& \left.\frac{159489\,\hbar^3\,x^6}{128\,p^9} -\frac{14679\,i \,\hbar^2\,x^7}{64p^8} + \frac{9207\,\hbar\,x^8}{256\,p^7} + \frac{615\,i\,x^9}{128p^6} - \frac{343\,x^{10}}{640\,\hbar\,p^5} - \frac{3\,i \,x^{11}}{64\hbar^2\,p^4} + \frac{x^{12}}{384\,\hbar^3\,p^3} \right)\,+O(g^4)\nonumber\\ &\equiv& 1+ga+g^2b+g^3c+O(g^4)\,. \end{eqnarray} Note, as in the solvable model, that $\Theta(x,p)$ is singular in the limit $\hbar\rightarrow 0$. We can now proceed to check the hermiticity of $\Theta$ by verifying that (\ref{ccq}) holds for (\ref{solbcg2}). It is easily verified that this is indeed the case up to $O(g^3)$. Finally, we consider the positive definiteness of $\Theta$. For this we have to show that the logarithm of $\Theta$ is also hermitian. As in the solvable model the logarithm of $\Theta$ can be computed from (\ref{solbcg2}) by using (\ref{logT1}). 
This yields \begin{eqnarray} \label{logT2} (\log \Theta)(x,p)&=&g\,\left( \frac{3\,i\,\hbar^2\,x}{4p^4} - \frac{3\,\hbar\,x^2}{4\,p^3} - \frac{i \,x^3}{2p^2} + \frac{x^4}{4\,\hbar\,p} \right)\nonumber\\ &+&g^3\left(\frac{-2745171\,i\,\hbar^8\,x}{256p^{14}} + \frac{2745171\,\hbar^7\,x^2}{256\,p^{13}} + \frac{677457\,i\,\hbar^6\,x^3}{128p^{12}} - \frac{439857\,\hbar^5\,x^4}{256\,p^{11}} - \frac{52029\,i\,\hbar^4\,x^5}{128p^{10}} \right.\nonumber\\ &+&\left. \frac{9375\,\hbar^3\,x^6}{128\,p^9} + \frac{651\,i\,\hbar^2\,x^7}{64p^8} - \frac{273\,\hbar\,x^8}{256\,p^7} - \frac{5\,i \,x^9}{64p^6} + \frac{x^{10}}{320\,\hbar\,p^5}\right)\,, \end{eqnarray} which can easily be checked to satisfy (\ref{ccq}). Note that the second order term vanishes with this choice of boundary conditions \cite{most2}. We have now verified the hermiticity and positive definiteness of the solution, at least to third order in the coupling $g$. \section{The Berry connection and curvature} In this section we show how the formalism developed in section \ref{Moyal} can be used to construct the Berry connection and curvature \cite{berry,simon,shapwilcz}. Although we cannot verify the existence of a metric operator for a given Hamiltonian directly in this way, we can identify points or lines in parameter space where singularities will occur in the metric, ruling out the existence of a metric in these cases. This enables us to make statements about the analytic properties of the eigenvalues and eigenstates, which is non-perturbative information that constrains the radius of convergence of a perturbative treatment. Let us start by setting up the equation for the Berry connection. Consider a Hamiltonian depending on a set of real parameters $q_1, q_2, \ldots, q_n$, denoted $H(q)$. Here we are interested in non-hermitian Hamiltonians, and therefore the Hamiltonian need not be hermitian as we range over the parameter space.
Furthermore, we do not even insist on real eigenvalues of the Hamiltonian as we range over parameter space, in which case a metric can of course not exist. We now write the Hamiltonian as \begin{equation} \label{diag} H(q)=S(q)D(q)S^{-1}(q)\,, \end{equation} where $D(q)$ is a diagonal operator. It is important to note here that the operator $S(q)$ may be singular at certain points in parameter space. These are the points where the Hamiltonian is not diagonalizable (it can at best be brought into Jordan form) and the eigenstates do not span the whole space, i.e., they are linearly dependent. At these points the metric can also not exist (if it did, one could construct an operator that diagonalizes the Hamiltonian). To proceed, we differentiate (\ref{diag}) with respect to $q_i$ \begin{equation} \label{diff} \frac{\partial H(q)}{\partial q_i}-S(q)\frac{\partial D(q)}{\partial q_i}S^{-1}(q) =[A_i(q),H(q)]\,. \end{equation} Here we have introduced the Berry connection defined by \begin{equation} \label{berryc1} A_i(q)=\frac{\partial S(q)}{\partial q_i}S^{-1}(q)\,. \end{equation} The Berry connection simply generates the change in the eigenstates as the parameters on which the Hamiltonian depends are changed. Furthermore, one expects singularities in $S(q)$ to show up in the Berry connection. Thus, solving for the Berry connection from (\ref{berryc1}), one may be able to identify these singular points. Note that as the components of the Berry connection are operators they do not necessarily commute. Equation (\ref{diff}) is not very useful in its given form, as we need to know the eigenvalues in order to compute the Berry connection from it. To avoid this, we take the commutator of (\ref{diff}) with the Hamiltonian.
It is simple to see that $[S(q)\frac{\partial D(q)}{\partial q_i}S^{-1}(q),H(q)]=S(q)( D(q)\frac{\partial D(q)}{\partial q_i}-\frac{\partial D(q)}{\partial q_i}D(q))S^{-1}(q)=0$ and we have \begin{equation} \label{diff1} [\frac{\partial H(q)}{\partial q_i},H(q)] =[[A_i(q),H(q)],H(q)]\,. \end{equation} We can now use this equation to solve for the Berry connection. Once again this is an operator equation which is generally difficult to solve. On the level of the Moyal product, however, this again becomes a differential equation of finite order if the Hamiltonian is polynomial. Before attempting to solve (\ref{diff1}) in a particular model, we first have to consider some of its general features. It is clear that (\ref{diff1}) does not have a unique solution, as we can always add a solution of the homogeneous equation to find another solution. This freedom is indeed already present in equation (\ref{diag}) as a 'gauge freedom'. To see this we note that if $S(q)$ diagonalizes $H(q)$, so will $T(q)=S(q)\Lambda(q)$ where $\Lambda(q)$ is an arbitrary diagonal operator. Under the transformation $S(q)\rightarrow T(q)$, the Berry connection transforms to \begin{equation} \label{trans} A_i(q)\rightarrow A_i^\prime (q)=A_i(q)+S(q)\frac{\partial\Lambda(q)}{\partial q_i}\Lambda^{-1}(q)S^{-1}(q)\,, \end{equation} illustrating the non-uniqueness of the Berry connection. Quantities like the Berry phase should, however, be unique and thus invariant under the transformation (\ref{trans}). Let us therefore consider this issue in some more detail. To do this we consider the change in $S(q)$ as we translate around a small plaquette in the $q_i$ and $q_j$ directions as shown in figure \ref{plaq}.
\setlength{\unitlength}{1mm} \begin{figure}\label{plaq} \end{figure} Expanding to second order in $dq_i$ and $dq_j$, one easily finds the following relations between the intermediate values (see figure \ref{plaq}) of $S(q)$ \begin{eqnarray} \label{transl} S_1&=&\left(1+A_jdq_j+\frac{1}{2}\left(\frac{\partial A_j}{\partial q_j}+A_j^2\right)dq_j^2\right)S_0\,,\nonumber\\ S_2&=&\left(1+A_idq_i+\frac{\partial A_i}{\partial q_j}dq_i\,dq_j+\frac{1}{2}\left(\frac{\partial A_i}{\partial q_i}+A_i^2\right)dq_i^2\right)S_1\,,\nonumber\\ S_3&=&\left(1-A_jdq_j-\frac{\partial A_j}{\partial q_i}dq_i\,dq_j+\frac{1}{2}\left(A_j^2-\frac{\partial A_j}{\partial q_j}\right)dq_j^2\right)S_2\,,\nonumber\\ S_4&=&\left(1-A_idq_i+\frac{1}{2}\left(A_i^2-\frac{\partial A_i}{\partial q_i}\right)dq_i^2\right)S_3\,.\nonumber\\ \end{eqnarray} Successive application now yields the relation between $S_4$ and $S_0$ \begin{equation} \label{berrycur1} S_4=\left(1+F_{ij}dq_idq_j\right)S_0\,, \end{equation} where we have introduced the Berry curvature \begin{equation} F_{ij}=\frac{\partial A_i}{\partial q_j}-\frac{\partial A_j}{\partial q_i}+[A_i,A_j]\,. \end{equation} This is the infinitesimal change in the eigenstates. For finite closed loops the change can be obtained from a path ordered exponential of the Berry connection or a careful exponentiation of (\ref{berrycur1}), keeping in mind that the $F_{ij}$ at different points need not commute. It is now a simple matter to verify that under the transformation (\ref{trans}) the Berry curvature is invariant, i.e., $F_{ij}^\prime=F_{ij}$. Before applying (\ref{diff1}) to infinite dimensional quantum systems, we consider a simple 2-dimensional matrix problem. 
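The algebra leading to (\ref{berrycur1}) can be checked symbolically. In the sketch below (our illustration) the connection components and their partial derivatives are treated as independent noncommuting symbols, and $dq_i=dq_j=t$ serves as a bookkeeping parameter; the first-order terms cancel and the second-order term is exactly $F_{ij}$:

```python
# Multiply out the four plaquette factors of eq. (transl) with noncommuting
# symbols standing in for A_i, A_j and their partial derivatives.
import sympy as sp

t = sp.symbols('t')  # dq_i = dq_j = t (order-counting parameter)
Ai, Aj, dAii, dAij, dAji, dAjj = sp.symbols(
    'A_i A_j dAi_i dAi_j dAj_i dAj_j', commutative=False)

half = sp.Rational(1, 2)
L1 = 1 + Aj*t + half*(dAjj + Aj**2)*t**2
L2 = 1 + Ai*t + dAij*t**2 + half*(dAii + Ai**2)*t**2
L3 = 1 - Aj*t - dAji*t**2 + half*(Aj**2 - dAjj)*t**2
L4 = 1 - Ai*t + half*(Ai**2 - dAii)*t**2

prod = sp.expand(L4*L3*L2*L1)        # S4 = L4 L3 L2 L1 S0
F = dAij - dAji + Ai*Aj - Aj*Ai      # Berry curvature F_ij

assert sp.expand(prod.coeff(t, 1)) == 0      # first-order terms cancel
assert sp.expand(prod.coeff(t, 2) - F) == 0  # second order gives F_ij
```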
Consider the following Hamiltonian \begin{equation} H(q_1,q_2)=\left( {\begin{array}{*{20}c} 1 & {q_1 + iq_2 } \\ {q_1 + iq_2 } & { - 1} \\ \end{array} } \right)\,. \end{equation} One can easily solve (\ref{diff1}) to find the most general solution \begin{eqnarray} A_1(q_1,q_2)&=&\left( {\begin{array}{*{20}c} \frac{2\,{w_1}}{{q_1} + i \,{q_2}} - \frac{1}{\left( {q_1} + i \,{q_2} \right) \, \left( 1 + \left(q_1 + i q_2\right)^2 \right)} + {y_1} & {w_1} - \frac{1}{ 1 + \left(q_1 + i q_2\right)^2} \\ w_1 & y_1 \\ \end{array} } \right)\nonumber\\ A_2(q_1,q_2)&=&\left( {\begin{array}{*{20}c} \frac{ 2\,i \,{w_1}}{{q_1} + i \,{q_2}} - \frac{i }{\left( {q_1} + i \,{q_2} \right) \, \left( 1 + \left(q_1 + i q_2\right)^2 \right) } + {y_2} & i \,{w_1} - \frac{i }{ 1 + \left(q_1 + i q_2\right)^2 } \\ iw_1& y_2 \\ \end{array} } \right) \end{eqnarray} The homogeneous part has been constructed to yield a zero curvature. We note that this expression has singularities at the points $\{q_1=0,\,q_2=0\}$ and $\{q_1=0,\,q_2=\pm 1\}$. The singularity at the origin is spurious (the transformation is $S(q)=1$ at this point) and can be removed by an appropriate choice of the homogeneous part. Indeed, choosing $w_1=1/2$ and $y_1=y_2=0$ we have \begin{eqnarray} \label{2dim} A_1(q_1,q_2)=-iA_2(q_1,q_2)=\left( {\begin{array}{*{20}c} \frac{{q_1} + i \,{q_2}}{ 1 + \left(q_1 + i q_2\right)^2 } & \frac{1}{2}-\frac{1}{1 + \left(q_1 + iq_2\right)^2} \\ \frac{1}{2} & 0 \\ \end{array} } \right) \end{eqnarray} The singularities at the points $\{q_1=0,\,q_2=\pm 1\}$ are, however, not removable and, not surprisingly, are the two exceptional points where the matrix is not diagonalizable \cite{kato} and the transformation $S(q)$ becomes singular. When one computes the curvature, it vanishes everywhere, except at these points, where the curvature has to be computed with great care due to the singularity.
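As a quick numerical sanity check (our illustration, at one generic parameter point), the connection (\ref{2dim}) indeed satisfies (\ref{diff1}):

```python
# Check [dH/dq_1, H] = [[A_1, H], H] for the 2x2 example at a generic point.
import numpy as np

q1, q2 = 0.3, -0.4            # any non-singular parameter point
w = q1 + 1j*q2
H = np.array([[1, w], [w, -1]])
dH1 = np.array([[0, 1], [1, 0]])   # dH/dq_1
A1 = np.array([[w/(1 + w**2), 0.5 - 1/(1 + w**2)],
               [0.5, 0]])

def comm(X, Y):
    return X @ Y - Y @ X

assert np.allclose(comm(dH1, H), comm(comm(A1, H), H))
# Since A_2 = i A_1 and dH/dq_2 = i dH/dq_1, the q_2 equation follows as well.
```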
This implies that moving in an infinitesimal loop around any point brings the eigenfunctions back to themselves; the only exception being the exceptional points where the eigenfunctions transform into each other \cite{kato}. Note that the Berry connection at different points does not commute. Due to this fact one has to compute a path ordered exponential in order to obtain the change of the eigenvectors as one moves around in a closed loop. The result (\ref{2dim}) can also be obtained from (\ref{berryc1}) by direct diagonalization and an appropriate choice of 'gauge'. To demonstrate the aforementioned points, we compute the curvature at the exceptional point $\{q_1=0,q_2=1\}$. We do this by computing the change of the eigenstate when we move in an infinitesimal circle around the exceptional point. Setting $q_1=r\cos\phi$ and $q_2=1+r\sin\phi$, we easily compute the Berry connection in the azimuthal direction from (\ref{berryc1}) and (\ref{2dim}) to be $A_\phi=ire^{i\phi} A_1$. This result is now expanded to lowest order in $r$ to obtain \begin{equation} \label{berryex} \frac{\partial S(\phi)}{\partial \phi}S^{-1}(\phi)|_{(0,1)}=A_\phi=\left( {\begin{array}{*{20}c} \frac{i}{2} & -\frac{1}{2} \\ 0& 0 \\ \end{array} } \right) \, . \end{equation} The change in the eigenstate can now easily be computed from (\ref{berryex}) by noting that $S(2\pi)=(1+\frac{2\pi}{N}A_\phi)^NS(0)$ in the limit $N\rightarrow\infty$. The result is $S(2\pi)=FS(0)$ with \begin{equation} F=\left( {\begin{array}{*{20}c} -1 & -2i \\ 0& 1 \\ \end{array} } \right)\,. \end{equation} We now proceed to compute the action of $F$ on the eigenstates infinitesimally close to the exceptional point. These eigenstates, expanded to leading order in $r$, read \begin{equation} u_\pm=\left( {\begin{array}{*{20}c} { - i \pm i\sqrt {2w} } \\ 0 \\ \end{array} } \right)\,, \end{equation} where we have set $w=re^{i\phi}$. It is now a simple matter to see that $Fu_\pm=u_\mp$. 
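One can confirm the monodromy numerically with the same limit formula $S(2\pi)=\lim_{N\rightarrow\infty}(1+\frac{2\pi}{N}A_\phi)^N S(0)$; this is our illustration, and since $A_\phi$ is constant at the exceptional point the path ordering is trivial:

```python
# Reproduce F = [[-1, -2i], [0, 1]] from the constant connection (berryex).
import numpy as np

A_phi = np.array([[0.5j, -0.5], [0.0, 0.0]])
N = 1 << 22                               # large N approximates the limit
S2pi = np.linalg.matrix_power(np.eye(2) + (2*np.pi/N)*A_phi, N)
F = np.array([[-1, -2j], [0, 1]])
assert np.allclose(S2pi, F, atol=1e-3)    # error is O(1/N)
```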
Let us now consider the quantum system discussed in section \ref{soluble}. All the considerations of section \ref{Moyal} apply, and on the level of the Moyal formulation we can replace equation (\ref{diff1}) by \begin{equation} \label{diff1m} \left[\frac{\partial H(x,p,q)}{\partial q_i},H(x,p,q)\right]_\ast =[[A_i(x,p,q),H(x,p,q)],H(x,p,q)]_\ast\,. \end{equation} Here $x$ and $p$ are the phase space variables, $q$ denotes the parameters in the Hamiltonian and we have introduced the Moyal bracket \begin{equation} [A(x,p),B(x,p)]_\ast=A(x,p)\ast B(x,p)-B(x,p)\ast A(x,p)\,. \end{equation} For the system of section \ref{soluble} the Hamiltonian was given in (\ref{clasham1}) and the two parameters entering the Hamiltonian are $\alpha$ and $\beta$ (see (\ref{ham1}) and (\ref{ham2})). As we are not interested in a global rescaling of the Hamiltonian, which only affects the homogeneous part of the Berry connection, it is more convenient to change to two new variables, $q_1$ and $q_2$, by dividing the Hamiltonian (\ref{clasham1}) by $a$: \begin{equation} \label{clasham2} H(x,p)=p^2+q_1 x^2+iq_2 px;\quad q_1=\frac{b}{a}=\frac{\omega+\alpha+\beta}{\omega-\alpha-\beta}\,,\quad q_2=\frac{c}{a}=\frac{2(\alpha-\beta)}{\omega-\alpha-\beta}\,. \end{equation} It is now easy to set up the explicit equations for the connection from (\ref{diff1m}), but we do not list them here due to their length. We rather make an ansatz for the connection of the form \begin{equation} A_i(x,p)=r_ip^2+s_ixp+t_ix^2\,,\quad i=1,2\,, \end{equation} and solve for the parameters $r_i$, $s_i$ and $t_i$ as functions of $q_1$ and $q_2$. The result is \begin{eqnarray} A_1(x,p)&=&\frac{i \,p\,x} {\hbar\,\left( 4\,{q_1} + {{q_2}}^2 \right) } - \frac{x^2\,{q_2}} {2\,\hbar\,\left( 4\,{q_1} + {{q_2}}^2 \right) }\,,\nonumber\\ A_2(x,p)&=&\frac{x^2\,{q_1}} {\hbar\,\left( 4\,{q_1} + {{q_2}}^2 \right) } + \frac{i\,p\,x\,{q_2}} {2\hbar\,\left( 4\,{q_1} + {{q_2}}^2 \right) }\,.
\end{eqnarray} The curvature can also be easily computed from $F_{1,2}=\frac{\partial A_1}{\partial q_2}-\frac{\partial A_2}{\partial q_1}+[A_1,A_2]_\ast$. It vanishes everywhere, except at possible singularities. Let us therefore investigate the singularities and interpret them. There are singularities on the curves $4q_1+q_2^2=0$, i.e., where $\omega^2-4\alpha\beta=0$. It is well known \cite{geyer1} that on these curves the Bogoliubov transformation that diagonalizes the Hamiltonian becomes singular, which is now simply reflected on the level of the Berry connection. These continuous curves divide the $\alpha$-$\beta$ parameter space into disjoint regions and it is not possible to move from one to the other without crossing a singularity of $S(q)$. This signals that perturbation theory is limited to within these regions and that the radius of convergence will be set by the distance to the closest point on these curves. This is illustrated in figure \ref{fig2}. One may also evaluate these results from the perspective of the existence of the metric $\Theta$. While $S(q)$ is singularity free except on the border between shaded and unshaded areas, $\Theta$ does not necessarily exist elsewhere, since positive definiteness and hermiticity of $\Theta$ cannot be immediately inferred. At the same time, the singularity of $S(q)$ on the borders indicates the existence of zero norm states, implying that on these borders the metric definitely does not exist, confirming in this particular case what was anticipated in general at the beginning of this section. \setlength{\unitlength}{1mm} \begin{figure}\label{fig2} \end{figure} \section{Conclusions} \label{conclusions} We have presented a brief overview of considerations pertinent to non- and quasi-hermitian quantum mechanics in which the role of the metric is stressed. Subsequently we have shown how the Moyal product can be used to compute the metric for a given non-hermitian Hamiltonian from a general partial differential equation.
The verification that the metric is hermitian and positive definite can be carried out directly on the level of the Moyal product formulation, without referring to the operator level. We have carried through this program for three Hamiltonians, all of which possess PT-symmetry. These considerations can also be applied to finite dimensional models such as those studied in \cite{scholtz}, by using the finite dimensional formulation of the Moyal product where, essentially, $\hbar\rightarrow\frac{1}{N}$. An interesting new perspective that arises from the present formulation is the relation between the choice of observables and the boundary conditions imposed on the metric equation, as formulated in terms of the Moyal product, both of which give rise to a unique metric. We have also discussed the Berry connection in the context of the Moyal product and demonstrated that non-perturbative information can be extracted from the resulting equation. \end{document}
\begin{document} \title{On the Optimal Fixed-Price Mechanism in Bilateral Trade} \begin{abstract} We study the problem of social welfare maximization in bilateral trade, where two agents, a buyer and a seller, trade an indivisible item. The seminal result of Myerson and Satterthwaite~\cite{myerson_efficient_1983} shows that no incentive compatible and \emph{budget balanced} (i.e., the mechanism does not run a deficit) mechanism can achieve the optimal social welfare in bilateral trade. Motivated by this impossibility result, we focus on approximating the optimal social welfare. We consider arguably the simplest form of mechanisms -- the fixed-price mechanisms, where the designer offers trade at a fixed price to the seller and buyer. Besides their simple form, fixed-price mechanisms are also the only \emph{dominant strategy incentive compatible} and \emph{budget balanced} mechanisms in bilateral trade~\cite{HAGERTY198794}. We obtain improved approximation ratios of fixed-price mechanisms in both (i) the setting where the designer has the \emph{full prior information}, that is, the value distributions of both the seller and buyer; and (ii) the setting where the designer only has access to limited information about the prior. In the full prior information setting, we show that the optimal fixed-price mechanism can achieve at least $0.72$ of the optimal welfare, and no fixed-price mechanism can achieve more than $0.7381$ of the optimal welfare. Prior to our result, the state-of-the-art approximation ratio was $1 - {1\over e} + 0.0001\approx 0.632$~\cite{kang2022fixed}. Interestingly, we further show that the optimal approximation ratio achievable with full prior information is \emph{identical} to the optimal approximation ratio obtainable with only \emph{one-sided prior information}, i.e., the buyer's or the seller's value distribution.
As a simple corollary, our upper and lower bounds in the full prior information setting also apply to the one-sided prior information setting. We further consider two limited information settings. In the first one, the designer is only given the mean of the buyer's value (or the mean of the seller's value). We show that with such minimal information, one can already design a fixed-price mechanism that achieves $2/3$ of the optimal social welfare, which surpasses the previous state of the art ratio even when the designer has access to the full prior information. Furthermore, $2/3$ is the optimal attainable ratio in this setting. In the second limited information setting, we assume that the designer has access to finitely many samples from the value distributions. Recent results show that one can already obtain a constant factor approximation to the optimal welfare using a single sample from the seller's distribution~\cite{babaioff_bulow-klemperer-style_2020,dutting_efficient_2021, kang2022fixed}. Our goal is to understand what approximation ratios are possible if the designer has more than one but still finitely many samples. This is usually a technically more challenging regime and requires tools different from the single-sample analysis. We propose a new family of sample-based fixed-price mechanisms that we refer to as the \emph{order statistic mechanisms} and provide a complete characterization of their approximation ratios for any fixed number of samples. Using the characterization, we provide the optimal approximation ratios obtainable by order statistic mechanism for small sample sizes (no more than $10$ samples) and observe that they significantly outperform the single sample mechanism. \end{abstract} \thispagestyle{empty} \addtocounter{page}{-1} \section{Introduction}\label{sec:Intro} We study a fundamental problem in mechanism design -- maximizing social welfare in bilateral trade, in which two agents, a seller and a buyer, trade an indivisible item. 
More specifically, we consider the Bayesian setting where the seller's private value $S$ for the item is drawn from distribution $F_S$, and the buyer's private value $B$ for the item is drawn from distribution $F_B$. The social welfare is therefore defined as ${\mathop{{}\mathbb{E}}_{B,S}\left[S+(B-S)\cdot x(B,S)\right]}$, where $x(B,S)$ denotes the probability that the trade happens when the seller's value is $S$ and the buyer's value is $B$. Surprisingly, exactly maximizing the social welfare in bilateral trade is impossible. The seminal result by Myerson and Satterthwaite~\cite{myerson_efficient_1983} shows that no mechanism can \emph{simultaneously} (i) be incentive compatible (for both the buyer and the seller), (ii) be \emph{budget balanced}, i.e., never run a deficit, and (iii) maximize the social welfare. For example, the VCG mechanism is incentive compatible and maximizes the social welfare but is not budget balanced in general. Motivated by this impossibility result, our goal is to design incentive compatible and budget balanced mechanisms that approximate the optimal welfare. We focus on fixed-price mechanisms, in which the designer offers trade at a fixed price to the seller and buyer. It is also known that fixed-price mechanisms are the only \emph{dominant strategy} incentive compatible and budget balanced mechanisms in bilateral trade~\cite{HAGERTY198794}. \subsection{Our Contributions}\label{sec:contribution} We make progress on this problem on multiple fronts. We first consider the \emph{full prior information} setting, where the designer knows both $F_S$ and $F_B$. We show how to use a \emph{factor revealing min-max program} to improve the approximation ratio achievable by a fixed-price mechanism.
\noindent \hspace{.2in}\begin{minipage}{0.93\textwidth}\textbf{Contribution 1:} For any $F_B,F_S$, there exists a fixed-price mechanism whose welfare is at least $0.72\cdot \textsf{OPT}$, where $\textsf{OPT}=\mathop{{}\mathbb{E}}_{B,S} [\max\{B,S\}]$ is the optimal welfare. Moreover, there exist $F_B$ and $F_S$ such that no fixed-price mechanism can attain welfare more than $0.7381\cdot \textsf{OPT}$. The formal statement of our result can be found in \Cref{thm:fullinfo}. \end{minipage} We also have a ``constant time'' algorithm for computing the fixed-price mechanism that achieves the welfare guarantee above. More specifically, we construct a collection of numbers $p_1,\cdots, p_n$, so that for any $F_S$, $F_B$, our algorithm chooses the best price in the set ${\left \{p_1\cdot \textsf{OPT},\cdots,p_n\cdot \textsf{OPT}\right \}}$. Clearly, the approximation ratio improves as we increase $n$. We show that for $n=16$, our algorithm already computes a fixed-price mechanism whose welfare is at least $0.72\cdot \textsf{OPT}$. Our result significantly improves on the state-of-the-art approximation ratio of $1-{1\over e} + 0.0001\approx 0.6322$~\cite{kang2022fixed}. Our new hardness result also strengthens the previous best bound of $0.7385$~\cite{kang2018strategy}. Our upper and lower bounds are obtained by considering two discretized variants of an infinite-dimensional min-max optimization problem defined in \Cref{sec:characterization fullinfo}. We show in \Cref{lem:optconvergence} that, in the limit when the discretization accuracy approaches $0$, the upper and lower bounds obtainable by our method converge to the optimal approximation ratio. Of course, the factor-revealing program becomes more expensive to solve with finer discretization. Our upper and lower bounds are derived using the finest discretization that we can computationally solve, but one could further close the gap with more computational resources.
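As an illustration of the kind of computation involved (not the paper's factor-revealing program), the following Monte Carlo sketch evaluates fixed-price welfare against $\textsf{OPT}$ on one example instance, picking the best price from a grid of multiples of $\textsf{OPT}$; the distributions and the uniform grid are our own choices, not the constructed $p_1,\ldots,p_n$:

```python
# Hypothetical example: U[0,1] seller vs. U[0,2] buyer, 16-point price grid.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
S = rng.uniform(0, 1, n)         # seller values, F_S = U[0,1] (assumed example)
B = rng.uniform(0, 2, n)         # buyer values,  F_B = U[0,2] (assumed example)
OPT = np.maximum(B, S).mean()    # optimal welfare E[max(B, S)]

def welfare(price):
    trade = (B >= price) & (S <= price)       # both sides accept the price
    return (S + (B - S) * trade).mean()

grid = [c * OPT for c in np.linspace(0.1, 1.0, 16)]
ratio = max(welfare(p) for p in grid) / OPT
assert 0.72 <= ratio <= 1.0
```

On this easy instance the best grid price already attains over $0.9$ of $\textsf{OPT}$; the $0.72$ guarantee is a worst-case bound.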
We next consider the case where the designer only has access to either the buyer's or the seller's value distribution. Clearly, the performance of the fixed-price mechanism cannot be better than in the case when the designer knows the full prior. Surprisingly, we show that the optimal fixed-price mechanism's performance in terms of the worst-case approximation ratio remains unaffected despite the absence of information from one side of the market. \noindent \hspace{.2in}\begin{minipage}{0.93\textwidth}\textbf{Contribution 2:} Given only one-sided prior information, i.e., $F_B$ or $F_S$, the optimal approximation ratio obtainable by a fixed-price mechanism is identical to the optimal approximation ratio obtainable given access to both $F_B$ and $F_S$. As a simple corollary, our upper and lower bounds in \Cref{thm:fullinfo} also apply to the case when the designer only has access to one-sided prior information. The formal statement is in \Cref{thm:one-sided}. \end{minipage} \paragraph{Fixed-price mechanism based on only $\mathop{{}\mathbb{E}}[B]$ or $\mathop{{}\mathbb{E}}[S]$.} Our first two results require the designer to know either both $F_B$ and $F_S$\footnote{Our first result uses $F_B$ and $F_S$ in two places: (1) to compute $\textsf{OPT}$ and (2) to identify the best price in the set.} or at least one of the two distributions. However, information about the underlying distributions of the agents' values is often scarce in practice, thus it is more desirable to design approximately optimal mechanisms using only limited prior information. Our third contribution concerns the case where the designer does not have full information about the underlying distributions but only knows the mean of $F_S$ or $F_B$.
\noindent \hspace{.2in}\begin{minipage}{0.93\textwidth}\textbf{Contribution 3:} We provide the \emph{max-min optimal} fixed-price mechanism when the designer only has access to $\mathop{{}\mathbb{E}}[B]$ or $\mathop{{}\mathbb{E}}[S]$. More specifically, given only $\mathop{{}\mathbb{E}}[B]$ (or $\mathop{{}\mathbb{E}}[S]$), we provide a \emph{closed-form} randomized fixed-price mechanism $\mathcal{M}_B$ (or $\mathcal{M}_S$) whose welfare is at least $\frac23\cdot \textsf{OPT}$ for any buyer value distribution with mean $\mathop{{}\mathbb{E}}[B]$ and any seller value distribution $F_S$ (or any seller value distribution with mean $\mathop{{}\mathbb{E}}[S]$ and any buyer value distribution $F_B$). Additionally, $\frac 23$ is the optimal approximation ratio obtainable by any fixed-price mechanism that only uses $\mathop{{}\mathbb{E}}[B]$ (or $\mathop{{}\mathbb{E}}[S]$). See \Cref{def:optimal mechanism using only mean} for details of $\mathcal{M}_B$ and $\mathcal{M}_S$, and \Cref{thm:PartialMecha} for the formal statement of our result. \end{minipage} We would like to highlight that the ratio of $\frac23$ exceeds the previous state-of-the-art approximation ratio achievable in the full prior information setting. \cite{blumrosen2021almost,kang2022fixed} consider the setting where only $F_S$ is known to the designer and show that a \emph{quantile mechanism} (Mechanism~\ref{alg:quantile mech}), i.e., a fixed-price mechanism that chooses the trading price according to a distribution over quantiles of the seller's distribution, can obtain at least a $1-{1\over e}\approx 0.6321$ fraction of the optimal welfare. \cite{kang2022fixed} further shows that no quantile mechanism can obtain more than a $1-{1\over e}$ fraction of the optimal welfare in the worst case. This result is sometimes interpreted as saying that no mechanism can obtain an approximation ratio better than $1-{1\over e}$ with only information about the seller's value distribution.
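A minimal sketch of a quantile mechanism in code, evaluated by Monte Carlo on one example instance; taking $Q$ uniform on $[0,1]$ here is an arbitrary illustrative choice, not the optimal quantile distribution from~\cite{blumrosen2021almost,kang2022fixed}:

```python
# Quantile mechanism sketch: draw x ~ Q, post the price F_S^{-1}(x).
# Example instance: F_S = F_B = U[0,1], so F_S^{-1} is the identity.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
S = rng.uniform(0, 1, n)
B = rng.uniform(0, 1, n)
x = rng.uniform(0, 1, n)         # quantiles drawn from Q = U[0,1] (assumed)
price = x                        # F_S^{-1}(x) for the uniform seller

trade = (B >= price) & (S <= price)
welfare = (S + (B - S) * trade).mean()
OPT = np.maximum(B, S).mean()    # = 2/3 for two independent U[0,1] values
assert welfare / OPT > 0.8       # on this instance; the worst case is 1 - 1/e
```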
Our second contribution (\Cref{thm:one-sided}) shows that there is a strictly better way to use the information about the seller's value distribution, as the ratio should be at least $0.72$. Furthermore, our third contribution (\Cref{thm:PartialMecha}) shows that, with minimal information about $F_S$, i.e., its mean $\mathop{{}\mathbb{E}}[S]$, one can design a fixed-price mechanism that strictly outperforms the optimal quantile mechanism, which requires full knowledge of $F_S$. Moreover, the quantile mechanism is asymmetric and only defined when we know the seller's value distribution. We show in Theorem~\ref{thm:NonBuyerInfo} that this is unavoidable, as no quantile mechanism over the buyer's value distribution can guarantee a constant fraction of the optimal welfare.\footnote{This asymmetry is due to the asymmetry of the initial allocation -- the item is owned by the seller.} In contrast, our third result holds when the designer only knows the mean of the buyer's value distribution $F_B$. \begin{algorithm}[H] \SetAlgorithmName{Mechanism} ~~Input: A distribution $Q$ over the interval $[0,1]$\; Randomly choose a quantile $x\in [0,1]$ according to $Q$\; Output the $x$-quantile of the seller's distribution as the price: letting $F_S$ be the seller's distribution and $F_S^{-1}(\cdot)$ the seller's quantile function mapping any quantile to its corresponding value, the mechanism outputs $F_S^{-1}(x)$ as the price. \caption{{\sf \quad Quantile mechanism.}}\label{alg:quantile mech} \end{algorithm} \paragraph{Fixed-price mechanism using finitely many samples.} Finally, we consider a different limited information model and initiate the study of approximating the optimal social welfare using \emph{finitely many samples}. Namely, we are given a finite and limited number of samples, e.g., $3$ or $5$ samples, and the goal is to design the best mechanism possible using these samples.
It is important to distinguish this setting from the more standard \emph{large sample} setting, where the goal is to determine how the number of samples needed to design a $(1-\varepsilon)$-optimal mechanism (or one optimal within a certain mechanism class) grows as a function of $\frac{1}{\varepsilon}$ and other parameters of the mechanism design environment. The sample complexity in large sample settings is usually stated using the big-O notation and ignores the accompanying constant. As a result, these bounds are often vacuous when applied to the \emph{small sample} regime, where only a small finite number of samples is available. \noindent \hspace{.2in}\begin{minipage}{0.93\textwidth}\textbf{Contribution 4:} We introduce a new family of mechanisms -- \emph{order statistic mechanisms} (Mechanism~\ref{alg:order statistic mech}) and provide an exact characterization of the optimal order statistic mechanisms for any fixed number of samples (\Cref{thm:symoptorderstatistic} and \Cref{thm:asymoptorderstatistic}). Using our characterization, we can compute the optimal approximation ratio obtainable for any sample size. \end{minipage} Recent results show that one can already obtain a constant factor approximation to the optimal welfare using a single sample from the seller's distribution~\cite{babaioff_bulow-klemperer-style_2020,dutting_efficient_2021, kang2022fixed}. However, techniques from these papers are tailored to the single sample setting and are difficult to generalize even to the case when two samples are available. We provide a rich family of mechanisms that is well-defined for any number of samples and characterize their performance. Using the characterization, we manage to optimize within this family of mechanisms for any fixed number of samples.
\begin{algorithm}[H] \SetAlgorithmName{Mechanism} ~~Input: A distribution $P$ over $[N]$\; Randomly choose a number $i \in [N]$ according to the distribution $P$\; Given $N$ samples from the seller, select the $i$-th smallest sample as the price, which is the $i$-th order statistic of these samples. \caption{{\sf \quad Order statistic mechanism with $N$ samples.}}\label{alg:order statistic mech} \end{algorithm} By numerically computing the optimal approximation ratios of order statistic mechanisms, we observe that the optimal order statistic mechanism with a small number of samples is usually sufficient to significantly boost the approximation ratio. For example, in the symmetric setting, i.e., $F_S=F_B$, \emph{five samples} are sufficient to obtain an approximation ratio that is within $1\%$ of the optimal ratio achievable by any fixed-price mechanism; in the asymmetric setting, i.e., $F_S\neq F_B$, the approximation ratio improves from $1/2$ to $0.578$ when the sample size increases from one to three. Another natural mechanism is the empirical risk minimization (ERM) mechanism, where one selects a price to maximize the social welfare w.r.t. the empirical distribution. We compare the performance of the optimal order statistic mechanism with ERM for sample sizes $N=2,3,5,10$ in the symmetric setting. In all cases, the order statistic mechanism substantially outperforms the ERM. See Tables~\ref{table:symmetric} and~\ref{table:asymmetric} for our computed ratios in the symmetric and asymmetric cases respectively. Our analysis of the order statistic mechanisms builds on an interesting connection between the order statistic mechanisms and the quantile mechanisms, that is, any order statistic mechanism is also a quantile mechanism. Note that the $i$-th order statistic over $N$ samples drawn uniformly and independently from $[0, 1]$ has density $f_N^i(x) = N \binom{N-1}{i-1} \cdot x ^{i-1}\cdot(1-x)^{N-i}$.
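The density $f_N^i$ above is easy to sanity-check numerically; in the sketch below (our own helper names; $N=5$, $i=2$, $t=0.3$ are arbitrary), a midpoint-rule integral of the formula is compared against a Monte Carlo estimate of the same probability.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

def order_stat_pdf(x, N, i):
    # f_N^i(x) = N * C(N-1, i-1) * x^(i-1) * (1-x)^(N-i):
    # density of the i-th smallest of N i.i.d. Uniform[0,1] draws.
    return N * comb(N - 1, i - 1) * x ** (i - 1) * (1 - x) ** (N - i)

N, i, t = 5, 2, 0.3  # arbitrary sample size, order, and threshold

# Monte Carlo: frequency of the i-th smallest of N uniforms falling below t.
samples = np.sort(rng.uniform(size=(200_000, N)), axis=1)[:, i - 1]
empirical = (samples <= t).mean()

# Formula: midpoint-rule integral of f_N^i over [0, t].
m = 4000
mids = (np.arange(m) + 0.5) * (t / m)
from_formula = order_stat_pdf(mids, N, i).mean() * t
```

Both estimates agree with the exact value $\Pr[X_{(2)} \le 0.3] = 1 - 0.7^5 - 5\cdot 0.3\cdot 0.7^4 \approx 0.4718$.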
Suppose we use the $i$-th order statistic as the price; then this is equivalent to the quantile mechanism that selects a quantile according to the density function $f_N^i(\cdot)$. More generally, if we choose the $i$-th order statistic with probability $P_i$, then the order statistic mechanism is equivalent to the quantile mechanism that chooses the quantile according to the density function $q(\cdot) = \sum_{i=1}^N P_i f_N^i(\cdot)$. With this connection, we can focus on quantile mechanisms, and we characterize the approximation ratio of any quantile mechanism as the solution of a minimization problem (Lemma~\ref{lem:asymratio}). By applying this characterization for quantile mechanisms to order statistic mechanisms, we show that for any fixed sample size $N$, the ratio of the optimal order statistic mechanism is exactly the solution of a max-min optimization problem. Although the optimization problem seems intractable in general, we manage to solve it with sufficient numerical accuracy for $N\leq 10$. We only study approximating social welfare in bilateral trade in this paper, but we believe this perspective of viewing sample-based mechanisms through the lens of quantile mechanisms is novel and has broader applications, especially in the small sample regime where the designer only has access to finitely many samples. \subsection{Related Work} \paragraph{Gains from Trade Maximization in Two-Sided Markets.} Another important objective in two-sided markets is the gains from trade (GFT), which measures the increase in welfare resulting from the trade. Note that~\cite{myerson_efficient_1983} also implies that the optimal GFT is not achievable in bilateral trade.
There has been increasing interest from the algorithmic mechanism design community in studying the approximability of the optimal GFT~\citep{blumrosen_approximating_2016,brustle_approximating_2017,colini-baldeschi_fixed_2017,babaioff_best_2018,babaioff_bulow-klemperer-style_2020,cai_multi-dimensional_2021,deng2021approximately}. It would be interesting to study the optimal approximation ratio obtainable for GFT maximization in both the full information and the limited information settings. \paragraph{Sample-Based Mechanism Design.} Sample-based mechanism design has become a central topic in algorithmic mechanism design as it provides an alternative model that weakens the classical but sometimes unrealistic Bayesian assumption. The results in this direction can be roughly partitioned into two groups: (1) large sample results, where the goal is to determine the number of samples needed to design a $(1-\varepsilon)$-optimal mechanism (or one optimal in a certain mechanism class) as a function of $\frac{1}{\varepsilon}$ and other parameters of the mechanism design environment, e.g.,~\citep{elkind_designing_2007,cole_sample_2014,mohri_learning_2014, guo_settling_2019, syrgkanis_sample_2017,morgenstern_pseudo-dimension_2015,morgenstern_learning_2016,cai_learning_2017,brustle_multi-item_2020}; or (2) single sample results, where the goal is to determine the optimal approximation ratio obtainable using a single sample, e.g.,~\citep{fu_randomization_2015,dhangwatnotai_revenue_2010,goldner_prior-independent_2016,kang2022fixed,gonczarowski_sample_2018,dutting_efficient_2021}. Our result does not fit into either group. In particular, we study the regime where the designer has a small fixed number of samples; as a result, the machinery developed for a large number of samples or a single sample does not apply to our setting. A recent line of work focuses on the same regime as ours but for the monopolist pricing problem~\citep{babaioff_are_2018,daskalakis2020more,allouah_revenue_2021}.
Due to the different nature of the studied problems, their techniques also do not apply here. \section{Preliminaries} \label{sec:Prelim} \paragraph{Bilateral Trade.} We study the bilateral trade problem. In this setting, two agents, a buyer and a seller, trade a single indivisible item. The seller owns the item and values it at $S$, while the buyer values the item at $B$. Both $S$ and $B$ are non-negative and unknown to us, but they are drawn independently from distributions $F_S$ and $F_B$ respectively. We assume that $F_S$ and $F_B$ are continuous distributions. This assumption is in fact without loss of generality, and we discuss the reduction from distributions with point masses to continuous ones in Appendix~\ref{appendix:tiebreaking}. \paragraph{Fixed-price Mechanism.} We consider fixed-price mechanisms, which offer a price $p$ to trade the item. The trade happens if and only if both the seller and the buyer accept the price, i.e., $B \ge p \ge S$. As shown by \cite{HAGERTY198794}, fixed-price mechanisms are the only dominant-strategy incentive-compatible mechanisms. In this paper, we consider (possibly randomized) fixed-price mechanisms. We abuse notation and use $\mathcal{M}(F_S, F_B)$ or $\mathcal{M}(\mathcal{I})$ where $\mathcal{I} = (F_S, F_B)$ to denote the distribution of prices $p$ selected by mechanism $\mathcal{M}$ on instance $\mathcal{I} = (F_S, F_B)$. \paragraph{Welfare and Approximation Ratio.} We consider the objective of social welfare in this paper.
For an instance $\mathcal{I} = (F_S, F_B)$, the optimal welfare is defined as: \begin{align*} \text{OPT-}\mathcal{W}(\mathcal{I}) = \mathop{{}\mathbb{E}}_{S\sim F_S, B\sim F_B}\left[\max(S, B)\right] \end{align*} Similarly, for a fixed-price mechanism $\mathcal{M}$, the expected welfare on instance $\mathcal{I}$ can be written as: \begin{align*} \mathcal{W}(\mathcal{M}, \mathcal{I}) = \mathop{{}\mathbb{E}}_{S\sim F_{S}, B\sim F_{B} \atop p\sim \mathcal{M}(F_S, F_B)} \left[S + \mathbbm{1}[S \le p \le B]\cdot (B - S) \right] \end{align*} Specifically, we use $\mathrm{ALG}\left(q, \mathcal{I}\right)$ to denote the expected welfare when using a fixed price $q$. Our goal is to maximize the approximation ratio. That is, we seek a mechanism $\mathcal{M}$ maximizing the following ratio: \begin{align*} \min_{\mathcal{I} = (F_S, F_B)} \frac{\mathcal{W}(\mathcal{M}, \mathcal{I}) }{\text{OPT-}\mathcal{W}(\mathcal{I})} \end{align*} \paragraph{Quantile Function.} Suppose $F(\cdot)$ is the c.d.f. of a distribution; we define $F^{-1}(\cdot)$ as the quantile function mapping each quantile to its corresponding value in this distribution. That is, ${F^{-1}(x) = \inf\{y\;\vert\; F(y) = x\}}$. \section{A Near-Optimal Mechanism in the Full Prior Information Setting} \label{sec:fullinfo} In this section, we present a near-optimal fixed-price mechanism when given the full prior information of the buyer and the seller. \begin{theorem} \label{thm:fullinfo} There exists a DSIC, individually rational, budget-balanced mechanism that achieves at least a 0.72~fraction of the optimal welfare for any instance $\mathcal{I} = (F_S, F_B)$. Moreover, no such mechanism has an approximation ratio better than 0.7381. \end{theorem} To prove this, we first identify the best fixed-price mechanism when given the instance $\mathcal{I} = (F_S, F_B)$. Then, the approximation ratio is determined by the mechanism's performance on the worst-case instance.
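On discrete instances, both $\text{OPT-}\mathcal{W}$ and $\mathrm{ALG}$ can be evaluated by brute force over the product distribution; a short sketch (helper names are ours, not the paper's):

```python
import numpy as np

def opt_welfare(s_vals, s_prob, b_vals, b_prob):
    # OPT-W(I) = E[max(S, B)] over independent S ~ F_S, B ~ F_B.
    S, B = np.meshgrid(s_vals, b_vals, indexing="ij")
    joint = np.outer(s_prob, b_prob)
    return float((np.maximum(S, B) * joint).sum())

def fixed_price_welfare(p, s_vals, s_prob, b_vals, b_prob):
    # ALG(p, I) = E[S + 1{S <= p <= B} * (B - S)].
    S, B = np.meshgrid(s_vals, b_vals, indexing="ij")
    joint = np.outer(s_prob, b_prob)
    trade = (S <= p) & (p <= B)
    return float(((S + trade * (B - S)) * joint).sum())

# Toy instance: S = 0 surely, B = 1 surely.
opt = opt_welfare([0.0], [1.0], [1.0], [1.0])
alg = fixed_price_welfare(0.5, [0.0], [1.0], [1.0], [1.0])
```

On this toy instance any price in $[0,1]$ attains the full welfare of $1$, while a price above $1$ forfeits the trade.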
Such a worst-case instance can be characterized by an infinite-dimensional quadratically constrained quadratic program (QCQP). However, the infinite-dimensional program is hard to solve directly. Instead, we use two finite programs that can be solved numerically to upper bound and lower bound the infinite-dimensional program. Additionally, we show that the optimal solutions of these two programs converge to the optimal solution of the infinite-dimensional program as the number of variables tends to infinity. \subsection{Characterizing the Optimal Mechanism}\label{sec:characterization fullinfo} We first characterize the optimal fixed-price mechanism via an infinite-dimensional QCQP. Given any instance $\mathcal{I} = (F_S, F_B)$, we can assume that $\text{OPT-}\mathcal{W}(\mathcal{I}) = 1$ without loss of generality, since we can always scale the instance so that this is true. The optimal fixed-price mechanism corresponds to choosing a price $p \in \arg\max_p \mathcal{W}(\mathcal{I}, p)$. The following program captures the worst-case instance for fixed-price mechanisms. \begin{center} \begin{tabular}{|c|} \hline The Optimization Problem $\mathsf{FullOp}$ \\ \hline \parbox{15cm}{ \begin{align} \min_{\mu,\nu, r}\quad r \nonumber \\ \textsf{s.t.} \quad & \mu,\nu \text{ are probability measures defined on $\mathbb{R}_{\geq 0}$} \nonumber \\ & \text{OPT-}\mathcal{W}(\mathcal{I}) \defeq \int_{\mathbb{R}_{\geq 0}} \int_{\mathbb{R}_{\geq 0}} \max(x,y)\, \nu(d y) \, \mu (d x) \geq 1& \nonumber \\ & \mathcal{W}(\mathcal{I}, t) \defeq \int_{\mathbb{R}_{\geq 0}} x\, \mu(dx)+ \int_{\mathbb{R}_{\geq 0}} \int_{\mathbb{R}_{\geq 0}} (y-x)\cdot \mathbbm{1}[x\leq t\leq y]\, \nu(dy)\, \mu(dx) \leq r & \forall t \geq 0 \nonumber \end{align} }\\ \hline \end{tabular} \end{center} \begin{lemma} \label{lem:fullinfo} The value of the optimal solution of $\mathsf{FullOp}$ is the \emph{tight} worst-case approximation ratio achievable by a fixed-price mechanism.
\end{lemma} The proof of Lemma~\ref{lem:fullinfo} is postponed to Appendix~\ref{appendix:prooffullinfo}. Since it is difficult to directly solve an infinite-dimensional program like $\mathsf{FullOp}$, we approximate $\mathsf{FullOp}$ from both above and below by constructing two families of finite programs that provide an upper bound and a lower bound, respectively. \subsection{Factor Revealing Program for the Approximation Ratio under Full Prior Information} We show that the approximation ratio of the optimal fixed-price mechanism is at least $0.72$, which significantly improves the previous state-of-the-art bound of $1-1/e+\varepsilon$ with $\varepsilon\approx 10^{-4}$. Our approach is to find a fixed-price mechanism whose performance under the worst distribution is maximized. This is exactly captured by the optimization problem $\mathsf{FullOp}$. However, it is an infinite-dimensional program. In this section, we consider a discretized version of $\mathsf{FullOp}$. {More specifically, we assume that $\text{OPT-}\mathcal{W}(\mathcal{I}) = 1$, and we restrict the mechanism to choose prices only from a finite set $P=\{p_1,p_2,\ldots, p_n\}$. What we manage to show is that the optimal value of the optimization problem $\mathsf{LowerOp}$ is indeed a lower bound on the maximum approximation ratio one can obtain using prices from $P$ for instance $\mathcal{I}$.
We establish the following two crucial properties: (i) For any $\mathcal{I}=(F_S,F_B)$ satisfying $\text{OPT-}\mathcal{W}(\mathcal{I}) = 1$, we can carefully round $F_S$ and $F_B$ to two discrete distributions supported on $P$, where $\{s_1,\ldots, s_n\}$ and $\{b_1,\ldots, b_n\}$ can be viewed as the corresponding ``probability mass functions'' for the discretized distributions of the seller and the buyer.\footnote{For technical reasons, $\{s_1,\ldots, s_n\}$ and $\{b_1,\ldots, b_n\}$ do not exactly correspond to probability mass functions, but viewing them as probability mass functions gives the right intuition.} Importantly, $\{s_1,\ldots, s_n\}$ and $\{b_1,\ldots, b_n\}$ satisfy inequalities~\eqref{eq:LowerOp1}--\eqref{eq:LowerOp4}. (ii) For any price $p_t$, the welfare of the corresponding fixed-price mechanism under $\mathcal{I}$ is at least the welfare under the rounded distributions, $\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t - 1} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i)$. Therefore, if we choose $r$ to be $\max_{t\in [n]} \mathcal{W}(\mathcal{I}, p_t)$, then $\{s_1,\ldots, s_n\}$, $\{b_1,\ldots, b_n\}$, and $r$ form a feasible solution of $\mathsf{LowerOp}$, which implies that the optimal value of $\mathsf{LowerOp}$ is no greater than the constructed $r$. As the rounded distributions need to satisfy a sequence of constraints (especially constraint~\eqref{eq:LowerOp4}), the procedure we use to round $F_S$ and $F_B$ is subtle and does not simply round things up or down.
See Appendix~\ref{appendix:prooffullinfolower} for details.} \begin{center} \begin{tabular}{|c|} \hline The Optimization Problem $\mathsf{LowerOp}$ \\ \hline \parbox{15cm}{ \begin{align} \min_{s_1,s_2\cdots, s_n\atop b_1,b_2,\cdots,b_n, r} \quad r \nonumber \\ \textsf{s.t.} \quad & s_i, b_i \geq 0 & \forall i \in [n]\label{eq:LowerOp1}\\ & \sum_{i=1}^n s_i \geq 1 \quad \text{ {and} } \quad \sum_{i=1}^n b_i \geq 1 \label{eq:LowerOp2}\\ & \sum_{i=1}^n s_i \leq 1 + \frac{1}{p_n} \quad \text{ {and} } \quad \sum_{i=1}^n b_i \leq 1 + \frac{1}{p_n} & \label{eq:LowerOp3} \\ & \sum_{i=1}^n \sum_{j=1}^n s_i b_j \max(p_i,p_j) \geq 1 & \label{eq:LowerOp4} \\ & \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t - 1} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i) \leq r & \forall t \in [n] \label{eq:LowerOp5} \end{align} }\\ \hline \end{tabular} \end{center} \begin{lemma} \label{lem:fullinfolower} For any $0 = p_1 < p_2 < \cdots < p_n$, let $r^{*}$ be the optimal value of~$\mathsf{LowerOp}$. Suppose $M$ is the mechanism that chooses the best price from the set $\big\{p_1\cdot \mathop{{}\mathbb{E}}[\max(S, B)],p_2\cdot \mathop{{}\mathbb{E}}[\max(S,B)],\cdots, p_n\cdot \mathop{{}\mathbb{E}}[\max(S, B)]\big\}$ to maximize the welfare. The welfare obtained by $M$ is at least $r^{*}\cdot \textsf{OPT}$. \end{lemma} We defer the proof of the lemma to Appendix~\ref{appendix:prooffullinfolower}. \subsection{Hardness Result under Full Prior Information} In this section, our goal is to find a threshold and an instance such that no fixed-price mechanism has an approximation ratio better than the threshold on this instance. We focus on discrete distributions and consider an instance $\mathcal{I} = (F_S, F_B)$ where $F_S$ is a discrete distribution supported on $\{p_1+\varepsilon,p_2+\varepsilon,\cdots,p_n+\varepsilon\}$, and $F_B$ is a discrete distribution supported on $P=\{p_1,p_2,\cdots, p_n\}$, where $\varepsilon > 0$ is a small enough constant.
For such an instance, the optimal price must also lie in the set $\{p_i + \varepsilon\}_{i\in [n]}$, as choosing a price $x$ where $p_{i} + \varepsilon \le x < p_{i+1} + \varepsilon$ is equivalent to choosing a price of $p_{i} + \varepsilon$. Therefore, any valid solution for the optimization problem below corresponds to a hard instance. \begin{lemma}\label{lem:fullinfoupper} For any valid solution $(s_1,s_2,\cdots, s_n, b_1,b_2,\cdots, b_n, r)$ of~$\mathsf{UpperOp}$ (defined in \Cref{appendix:prooffullinfoupper}) satisfying $r = \max_{t\in [n]} \sum_{i=1}^n s_i p_i + \sum_{i=1}^t\sum_{j=t+1}^n s_ib_j(p_j - p_i)$ and $\varepsilon > 0$, there exists an instance $\mathcal{I} = (F_S, F_B)$ such that no fixed-price mechanism can achieve more than a $\left(r+\varepsilon\right)$-fraction of the optimal welfare on this instance. \end{lemma} The proof of Lemma~\ref{lem:fullinfoupper} is deferred to Appendix~\ref{appendix:prooffullinfoupper}. \begin{proof}[Proof of Theorem~\ref{thm:fullinfo}] With Lemma~\ref{lem:fullinfolower} and Lemma~\ref{lem:fullinfoupper}, we are now ready to prove Theorem~\ref{thm:fullinfo}. For the numerical results, our anonymous \href{https://github.com/BilateralTradeAnonymous/On-the-Optimal-Fixed-Price-Mechanism-in-Bilateral-Trade}{GitHub repository} (\texttt{https://github.com/BilateralTradeAnonymous/On-the-Optimal-Fixed-Price-Mechanism-in-Bilateral-Trade}) provides all the certificates and code and also carefully explains all the details. For the lower bound, we choose $n = 16$. Using Gurobi \cite{gurobi}, we obtain a lower bound of 0.72~for the optimization problem $\mathsf{LowerOp}$ for a carefully chosen set of prices $\{p_1,p_2,\cdots, p_n\}$.\footnote{We choose $\{p_1,p_2,\cdots, p_{16}\}$ to be $\{0.0, 0.1, 0.19, 0.27, 0.315, 0.355, 0.395, 0.44, 0.485, 0.535, 0.595, 0.665, 0.74, 0.875, 1.195, 1000.0\}$ to derive the $0.72$.
These numbers are chosen heuristically to provide good coverage between $0.3$ and $0.5$, which is the region where probability mass concentrates in some bad instances we encountered.} Therefore, by Lemma~\ref{lem:fullinfolower}, there exists a 0.72-approximate fixed-price mechanism. Things become much easier for the upper bound, since we only need to find a feasible solution instead of proving a lower bound on the optimal value. We choose $n = 100$, numerically solve $\mathsf{UpperOp}$ with a specific support $\{p_1,p_2,\cdots, p_n\}$, and find a feasible solution that satisfies the constraints in Lemma~\ref{lem:fullinfoupper} with $r \leq 0.7381$. Together with Lemma~\ref{lem:fullinfoupper}, we then obtain a hard instance such that no fixed-price mechanism attains a 0.7381-approximation of the optimal welfare. Please check our \href{https://github.com/BilateralTradeAnonymous/On-the-Optimal-Fixed-Price-Mechanism-in-Bilateral-Trade}{GitHub repository} for the detailed specification of the distributions. \end{proof} Finally, we would like to point out that the optimal values of $\mathsf{LowerOp}$ and $\mathsf{UpperOp}$ converge to the optimal value of $\mathsf{FullOp}$ as the discretization accuracy tends to $0$. \begin{lemma} \label{lem:optconvergence} Let $r^*$ be the optimal value of $\mathsf{FullOp}$, i.e., the optimal approximation ratio. For any $\varepsilon>0$, there exist two sets of numbers $0=p_1 < p_2 < \cdots < p_n$ and $\{p_1',p_2',\cdots, p_{n'}'\}$ such that the optimal value of $\mathsf{LowerOp}$ with respect to $\{p_1,p_2,\cdots, p_n\}$ is at least $r^{*} - \varepsilon$ and the optimal value of $\mathsf{UpperOp}$ w.r.t. $\{p_1',p_2',\cdots,p_{n'}'\}$ is at most $r^{*} + \varepsilon$.
\end{lemma} The proof of Lemma~\ref{lem:optconvergence} is deferred to Appendix~\ref{appendix:proofconvergence}. \section{One-Sided Prior Information} \label{subsec:one-sided} In this section, we discuss the setting where we only have access to either the seller's or the buyer's distribution. We show that the optimal approximation ratio achievable with full prior information is identical to the optimal approximation ratio obtainable when only one-sided prior information is known. \begin{theorem}\label{thm:one-sided} Let $r^{*}$ be the approximation ratio of the optimal fixed-price mechanism in the full prior information setting. The optimal mechanism with only access to the value distribution of the buyer (or the seller) has exactly the same approximation ratio $r^{*}$. \end{theorem} \begin{remark} Note that \Cref{thm:one-sided} does not imply that the optimal mechanism in the full prior information setting is also an optimal mechanism in the setting where only one-sided prior information is known. Indeed, the mechanism proposed in \Cref{thm:fullinfo} that achieves the ratio $0.72$ relies on $\mathrm{OPT}$, which can only be computed with full prior information. \Cref{thm:one-sided} simply states that the optimal approximation ratio is identical in the two settings, but we do not provide explicit approximately-optimal mechanisms that use only one-sided prior information. \end{remark} To provide a high-level sketch of the proof, we consider as an example the case in which we only have information about the buyer's distribution. Suppose our goal is only to achieve an approximation ratio of at least $r$. This requires us to find a distribution of prices $\omega$ that depends only on $F_B$, such that for any seller's distribution $F_S$, the term $\mathop{{}\mathbb{E}}_{q\sim \omega}[\mathrm{ALG}(q, \mathcal{I}) - r\cdot \mathrm{OPT}(\mathcal{I})]$ is always non-negative for $\mathcal{I} = (F_S, F_B)$.
Therefore, the approximation ratio is at least $r$ if and only if the optimum of the following optimization problem is non-negative: \[\min_{F_B}\ \max_{\omega}\ \min_{F_S}\mathop{{}\mathbb{E}}_{q\sim \omega}\left[\mathrm{ALG}(q, \mathcal{I}) - r\cdot \mathrm{OPT}(\mathcal{I})\right].\] Observe that, for any fixed $F_B$, $\mathop{{}\mathbb{E}}_{q\sim \omega}\left[\mathrm{ALG}(q, \mathcal{I}) - r\cdot \mathrm{OPT}(\mathcal{I})\right]$ is bilinear with respect to the probability density function of $\omega$ and that of $F_S$. This allows us to apply the minimax theorem to swap the order of $\max_\omega$ and $\min_{F_S}$: \begin{align*} \min_{F_B}\ \max_{\omega}\ \min_{F_S}\mathop{{}\mathbb{E}}_{q\sim \omega}\left[\mathrm{ALG}(q, \mathcal{I}) - r\cdot \mathrm{OPT}(\mathcal{I})\right] &=\min_{F_B}\min_{F_S} \max_{\omega} \mathop{{}\mathbb{E}}_{q\sim \omega}\left[\mathrm{ALG}(q, \mathcal{I}) - r\cdot \mathrm{OPT}(\mathcal{I})\right]\\ & = \min_{\mathcal{I} = (F_S,F_B)} \max_{\omega} \mathop{{}\mathbb{E}}_{q\sim \omega}\left[\mathrm{ALG}(q, \mathcal{I}) - r\cdot \mathrm{OPT}(\mathcal{I})\right]. \end{align*} Note that the term $\min_{\mathcal{I} = (F_S,F_B)} \max_{\omega} \mathop{{}\mathbb{E}}_{q\sim \omega}\left[\mathrm{ALG}(q, \mathcal{I}) - r\cdot \mathrm{OPT}(\mathcal{I})\right]$ is non-negative if and only if the optimal mechanism given the full prior information has an approximation ratio of at least $r$. This equality indicates that for any $r > 0$, there exists a fixed-price mechanism using only the buyer's distribution that can attain an $r$ fraction of the optimal welfare if and only if the approximation ratio of the optimal fixed-price mechanism with full prior information is at least $r$. In other words, the full prior information setting and the one-sided prior information setting have the same approximation ratio. The formal proof is more complicated, as the ``variables'' of our optimization problem are infinite-dimensional.
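The swap of $\max_\omega$ and $\min_{F_S}$ is the standard minimax theorem for bilinear objectives. As a finite sanity check (with a made-up $2\times 2$ payoff matrix standing in for $\mathrm{ALG} - r\cdot\mathrm{OPT}$ over two prices and two seller types), the sketch below solves both sides as linear programs with scipy and confirms they agree:

```python
import numpy as np
from scipy.optimize import linprog

# A[i, j]: payoff when the row player (price distribution) plays price i
# and the column player (seller distribution) plays type j. Made-up numbers.
A = np.array([[0.5, 0.9],
              [0.8, 0.4]])

def game_value_row(A):
    # max over distributions w of min_j (w^T A)_j, as an LP:
    # maximize v subject to (w^T A)_j >= v for all j, w a distribution.
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0          # minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - (w^T A)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun

def game_value_col(A):
    # min over distributions y of max_i (A y)_i, via the same LP on -A^T.
    return -game_value_row(-A.T)
```

For this matrix both LPs return the game value $0.65$, matching the minimax theorem's guarantee that the two orders of optimization coincide.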
The full proof of Theorem~\ref{thm:one-sided} is postponed to Appendix~\ref{subsec:one-sided-appendix}. \section{Breaking $1 - 1/e$ with Only $\mathop{{}\mathbb{E}}[B]$ or $\mathop{{}\mathbb{E}}[S]$} \label{sec:partial} We consider a limited information setting in which only the mean of either the seller's value or the buyer's value is known. \cite{kang2022fixed} shows that any mechanism that only uses quantile information from the seller cannot achieve a ratio better than $1 - 1/e$. However, we observe that with minimal information about $F_S$ (or $F_B$), i.e., $\mathop{{}\mathbb{E}}[S]$ (or similarly $\mathop{{}\mathbb{E}}[B]$), we can break the $1 - 1/e$ barrier. \begin{definition}\label{def:optimal mechanism using only mean} We define the following two fixed-price mechanisms using only $\mathop{{}\mathbb{E}}[S]$ or $\mathop{{}\mathbb{E}}[B]$. \begin{itemize} \item[] $\mathcal{M}_S:$ Given $\mathop{{}\mathbb{E}}[S]$, the mechanism randomly picks a number $x\sim U[0,3]$, and sets the price as $x\cdot \mathop{{}\mathbb{E}}[S]$. \item[] $\mathcal{M}_B:$ Given $\mathop{{}\mathbb{E}}[B]$, the mechanism randomly picks a number $x\sim P_B$, and sets the price as $x\cdot \mathop{{}\mathbb{E}}[B]$, where $P_B$ is a distribution over the interval $[0, 2]$ with the following cdf $F_B(x)$: \begin{align*} \begin{split} F_B(x)=\left\{ \begin{aligned} &\frac{x}{3-3x} \quad & 0 \leq x\leq \frac12&\\ &(4x - 1) / 3 \quad & \frac12 < x\leq \frac23\\ &(x + 1) / 3 \quad & \frac23 < x \leq 2\\ \end{aligned} \right. \end{split} \end{align*} \end{itemize} \end{definition} Both mechanisms attain at least $2/3$ of the optimal social welfare for any possible distribution of the other side. Additionally, $2/3$ is the best possible approximation ratio when only $\mathop{{}\mathbb{E}}[B]$ (or $\mathop{{}\mathbb{E}}[S]$) is known, indicating the optimality of $\mathcal{M}_S$ and $\mathcal{M}_B$.
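Sampling from $\mathcal{M}_S$ and $\mathcal{M}_B$ is immediate; for $\mathcal{M}_B$ we invert the piecewise cdf branch by branch (the inversion below is our own derivation, using the breakpoint values $F(1/2)=1/3$ and $F(2/3)=5/9$):

```python
import numpy as np

rng = np.random.default_rng(2)

def price_M_S(mean_S):
    # M_S: pick x ~ Uniform[0, 3] and post x * E[S].
    return rng.uniform(0.0, 3.0) * mean_S

def inverse_cdf_P_B(u):
    # Branch-by-branch inverse of the piecewise cdf of P_B above.
    if u <= 1/3:
        return 3*u / (1 + 3*u)   # inverts x/(3-3x) on [0, 1/2]
    if u <= 5/9:
        return (3*u + 1) / 4     # inverts (4x-1)/3 on (1/2, 2/3]
    return 3*u - 1               # inverts (x+1)/3 on (2/3, 2]

def price_M_B(mean_B):
    # M_B: pick x ~ P_B by inverse-transform sampling and post x * E[B].
    return inverse_cdf_P_B(rng.uniform()) * mean_B
```

The inverse maps the breakpoint quantiles back to $x = 1/2$, $x = 2/3$, and the top quantile $u = 1$ to $x = 2$, matching the support $[0, 2]$.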
\begin{theorem} \label{thm:PartialMecha} Given only $\mathop{{}\mathbb{E}}[S]$ (or $\mathop{{}\mathbb{E}}[B]$), $\mathcal{M}_S$ (or $\mathcal{M}_B$) obtains $\frac23$ of the optimal welfare. Furthermore, $\frac23$ is the optimal approximation ratio achievable using only $\mathop{{}\mathbb{E}}[S]$ (or $\mathop{{}\mathbb{E}}[B]$). \end{theorem} Let us examine the setting where only $\mathop{{}\mathbb{E}}[S]$ is known to offer some insights. Since the mean of the seller is given, we can assume, without loss of generality, that $\mathop{{}\mathbb{E}}[S] = 1$ by applying appropriate scaling. To check whether the approximation ratio of $\mathcal{M}_S$ is $2/3$, it suffices to verify that \begin{equation}\label{eq:UB using seller mean} \min_{\mathcal{I} = (F_S,F_B)\atop \mathop{{}\mathbb{E}}[S] = 1}\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - {2\over 3}\cdot \mathrm{OPT}(\mathcal{I})\right] \end{equation} is non-negative. Similar to the one-sided prior information setting, this term is bilinear w.r.t. the probability density functions of $F_S$ and $F_B$. As a result, we can argue that \emph{one of} the ``worst-case'' instances must have the following simple form: (i) the buyer's distribution is concentrated at a single point; (ii) the seller's distribution is supported on only two distinct points. Equipped with this observation, we can simplify the minimization problem in \Cref{eq:UB using seller mean} and certify the non-negativity of its minimum. To demonstrate that $2/3$ is indeed the optimal ratio, we construct two instances with the same $\mathop{{}\mathbb{E}}[S]$ (or $\mathop{{}\mathbb{E}}[B]$). As only $\mathop{{}\mathbb{E}}[S]$ (or $\mathop{{}\mathbb{E}}[B]$) is known, the price for these two instances must be chosen from an identical distribution. We then show that, for any distribution of prices, the worse of the two instances must have an approximation ratio no greater than $2/3$.
The complete proof of Theorem~\ref{thm:PartialMecha} is in Appendix~\ref{appendix:partial}. \section{Fixed-Price Mechanism with Different Numbers of Samples} \label{sec:Sample} In this section, we consider the limited information setting where we only have sample access to the distributions. We focus on order statistic mechanisms, which are defined in \Cref{sec:contribution}, and our results cover different numbers of samples for both symmetric and general instances. In the small sample regime, we are able to characterize the optimal order statistic mechanism with any fixed number of samples. When the number of samples goes to infinity, we show that the optimal quantile mechanism can be approximated by order statistic mechanisms as closely as desired, and we also obtain an upper bound on the sample complexity. Finally, recall that we assume the distributions of the seller and the buyer are continuous. See~\Cref{appendix:tiebreaking} for details. \subsection{Order Statistic Mechanisms} \label{subsec:Mecha} To start with, we briefly discuss the two families of mechanisms that are used in the sample setting and give high-level ideas on how to design the order statistic mechanisms. Order statistic mechanisms will be used when we only have samples from the distribution, and quantile mechanisms will help us analyze the performance of order statistic mechanisms. In fact, we will point out that quantile mechanisms and order statistic mechanisms are equivalent in some sense. \subsubsection{Connection Between Two Mechanisms} \label{subsec:Connection} Next we aim to show the connection between these two mechanisms. These observations give us insights into designing mechanisms with a small or large number of samples. \paragraph{The order statistic mechanism is a special kind of quantile mechanism} First, we can see that the following two operations are equivalent: \begin{itemize} \item Draw a sample from distribution $F$.
\item Uniformly sample a quantile $x$ from $[0, 1]$, and use $F^{-1}(x)$ as the sample. \end{itemize} Now let $f_N^i(x) = N \binom{N-1}{i-1} \cdot x ^{i-1}\cdot(1-x)^{N-i}$ be the p.d.f. of the $i$-th order statistic over $N$ samples drawn uniformly and independently from $[0, 1]$, and let $P_i$ be $\Pr_{x\sim P}[x = i]$ for any distribution $P$ over $[N]$. Using similar ideas as above, it can be shown that any order statistic mechanism $P$ is equivalent to a quantile mechanism $Q$ with probability density function \[q(x) = \sum_{i=1}^N P_i f_N^i(x).\] Therefore, we can analyze the approximation ratio of the quantile mechanism $Q$ instead of the order statistic mechanism $P$. If we are able to compute the approximation ratio of any quantile mechanism $Q$, it follows that we can also characterize the optimal order statistic mechanism exactly. When the number of samples is small, we can give a fine-grained analysis of the order statistic mechanisms and use these limited samples carefully. Section~\ref{subsec:smallsample} follows exactly this intuition to characterize the best possible order statistic mechanism. \paragraph{Quantile mechanisms can be approximated by order statistic mechanisms within any small error} Our goal is, for any quantile mechanism $Q$ with p.d.f. $q(x)$, to find some integer $N$ and a distribution $P$ over $[N]$ such that \[q(x) \approx \sum_{i=1}^N P_i f_N^i(x).\] Since $\sum_{i=1}^N P_i f_N^i(x)$ is a polynomial of degree $N-1$, this can be done for any continuous $q(x)$ on $[0, 1]$: the Weierstrass approximation theorem states that every continuous function defined on a closed interval can be uniformly approximated as closely as desired by a polynomial function. More interestingly, $\{f_N^i(x)\}_{i = 1}^N$ are Bernstein basis polynomials, and there is a series of works showing that (stochastic) Bernstein polynomials can efficiently and uniformly approximate any continuous function.
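This Bernstein-basis fact can be checked numerically: taking $P_i \propto q\big((i-1)/(N-1)\big)$ makes the mixture $\sum_i P_i f_N^i$ track the Bernstein polynomial approximation of $q$. The target density and the parameters below are arbitrary choices of ours, not the paper's construction.

```python
import numpy as np
from math import comb

def f(N, i, x):
    # p.d.f. of the i-th order statistic of N Uniform[0,1] samples.
    return N * comb(N - 1, i - 1) * x ** (i - 1) * (1 - x) ** (N - i)

def order_stat_mixture_weights(q, N):
    # P_i proportional to q((i-1)/(N-1)); the mixture sum_i P_i f_N^i then
    # matches the Bernstein polynomial of q up to normalization.
    w = np.array([q((i - 1) / (N - 1)) for i in range(1, N + 1)])
    return w / w.sum()

q = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t)  # arbitrary smooth density
N = 200
P = order_stat_mixture_weights(q, N)
xs = np.linspace(0.0, 1.0, 201)
mix = np.array([sum(P[i - 1] * f(N, i, x) for i in range(1, N + 1)) for x in xs])
err = np.max(np.abs(mix - q(xs)))
```

For this smooth target, $N = 200$ already brings the uniform error down to roughly $10^{-2}$, consistent with the classical $O(1/N)$ Bernstein rate for twice-differentiable functions.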
Therefore, we can give an asymptotic analysis of the order statistic mechanism. Moreover, this observation yields a black-box transformation from any quantile mechanism to a mechanism using only samples. Section~\ref{subsec:largesample} uses these techniques and ideas. \subsection{Small Sample Regime} \label{subsec:smallsample} In this section, we characterize the optimal order statistic mechanisms with any fixed number of samples for both symmetric and general instances. We first show that, in any setting, if we are able to give a tight analysis of the quantile mechanism, we can directly characterize the optimal order statistic mechanism with any fixed number of samples via an optimization problem. Next, we give a tight analysis of the quantile mechanism on both symmetric and general instances, and thus obtain the characterization of the optimal order statistic mechanism. Recall that an order statistic mechanism with $N$ samples randomly chooses a number $i\in [N]$ according to a previously defined distribution $P$ and selects the $i$-th smallest sample as the price, and a quantile mechanism randomly chooses a quantile $x\in [0,1]$ from a predetermined distribution $Q$ and chooses the $x$-quantile, i.e., $F^{-1}_S(x)$, as the price. Since every quantile mechanism and order statistic mechanism is determined by its previously defined distribution, we abuse notation and use a distribution $P$ over $[N]$ to denote its corresponding order statistic mechanism and a distribution $Q$ over $[0, 1]$ to denote its corresponding quantile mechanism. \begin{lemma}\label{lem:smallsamplekey} Suppose $\mathcal{C}:\Delta([0,1])\to \mathbb{R}$ maps every quantile mechanism to its exact approximation ratio.
Let $\mathcal{P}(Q)$ be the corresponding quantile mechanism of the order statistic mechanism $Q$. Fixing the number of samples $N$, the optimal order statistic mechanism with $N$ samples, $Q_N^*$, is characterized by the following optimization problem: \[Q^*_N = \arg\max_{Q\in \Delta_N} \mathcal{C}(\mathcal{P}(Q)),\] where $\Delta([0, 1])$ is the set of all distributions over $[0, 1]$, i.e., the set of all quantile mechanisms, and $\Delta_N$ is the set of all distributions over $[N]$, i.e., the set of all order statistic mechanisms with $N$ samples. \end{lemma} The proof of Lemma~\ref{lem:smallsamplekey} is quite straightforward and is thus postponed to Appendix~\ref{appendix:proofofsmallsamplekey}. \subsubsection{Symmetric Instances} \label{subsec:symsmallsample} We now study the case when the distributions are symmetric, i.e., $F_S = F_B$, meaning that the seller's value $S$ and the buyer's value $B$ are drawn from the same distribution. For simplicity, we use $F$ to refer to their common distribution in this setting. In order to find the optimal order statistic mechanism, we first need a tight analysis of quantile mechanisms. \begin{lemma} \label{lem:symratio} For any quantile mechanism for symmetric instances given by a distribution $Q$ over $[0,1]$, the approximation ratio is exactly \[ \inf_{x\in [0,1)} \frac{\int_{[0,x]} t(1-x) \, d Q(t) + \int_{(x, 1]} (1-t)x\, dQ(t) + (1-x)}{1-x^2},\] where $Q(t)$ is the cumulative distribution function of the distribution $Q$. \end{lemma} Therefore, combining Lemma~\ref{lem:smallsamplekey} and Lemma~\ref{lem:symratio}, we can characterize the optimal order statistic mechanism via an optimization problem.
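To illustrate the formula in Lemma~\ref{lem:symratio}, the following short sketch (ours; a grid search, not a proof) evaluates the ratio of the deterministic quantile mechanism that places all probability mass on a single quantile $m$; at $m = \frac{\sqrt{2}}{2}$ the infimum is approximately $\frac{2+\sqrt{2}}{4}\approx 0.8536$:

```python
import math

def sym_ratio_point_mass(m, grid=20000):
    """Lemma lem:symratio specialized to the quantile mechanism that puts all
    probability mass on the single quantile m (grid search over x in [0, 1))."""
    best = float("inf")
    for j in range(grid):
        x = 0.9999 * j / grid  # stay strictly below 1
        if m <= x:
            num = m * (1 - x) + (1 - x)   # the mass lies in [0, x]
        else:
            num = (1 - m) * x + (1 - x)   # the mass lies in (x, 1]
        best = min(best, num / (1 - x * x))
    return best

r = sym_ratio_point_mass(math.sqrt(2) / 2)
assert abs(r - (2 + math.sqrt(2)) / 4) < 1e-3
```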
\begin{theorem} \label{thm:symoptorderstatistic} The optimal order statistic mechanism with $N$ samples for symmetric instances is the solution to the following optimization problem: \[P_N^* = \arg\max_{P_1,P_2,\cdots, P_N \geq 0\atop \sum_{i=1}^N P_i = 1} \inf_{x\in [0,1)} \frac{\int_0^x p(t) t(1-x) \, d t + \int_x^1 p(t)(1-t)x\, dt + (1-x)}{1-x^2},\] where $p(t) = \sum_{i=1}^N P_if_N^i(t)$ and $f_N^i(x) = N \binom{N-1}{i-1} \cdot x^{i-1}\cdot(1-x)^{N-i}$ is the p.d.f. of the $i$-th order statistic over $N$ samples drawn uniformly and independently from $[0, 1]$. \end{theorem} The proofs of Lemma~\ref{lem:symratio} and Theorem~\ref{thm:symoptorderstatistic} are in Appendix~\ref{appendix:proofofsymratio} and \ref{appendix:proofsymoptorderstatistic}, respectively. It turns out that the optimization above is computationally tractable when $N$ is not too large. We solve the optimization problem and find the optimal order statistic mechanism numerically for different numbers of samples $N$. To compare with the order statistic mechanisms, we also consider the most natural sample-based mechanism -- the \emph{Empirical Risk Minimization mechanism (ERM)}. We first provide the formal definition below. \begin{definition}[Empirical Risk Minimization Mechanism] Given $N$ samples $X_1, X_2,\dots, X_N$ drawn from $F$, let $\tilde{F}$ be the empirical distribution of these $N$ samples. That is to say, $\tilde{F}$ is the distribution with c.d.f. $\tilde{F}(x)$ satisfying \[\tilde{F}(x) = \frac{1}{N}\sum_{i=1}^N \mathbbm{1}[x\ge X_i].\] The Empirical Risk Minimization mechanism (ERM) is the mechanism that computes the optimal price according to the empirical distribution $\tilde{F}$.
In particular, for $N$ samples $X_1,X_2,\dots, X_N$, \[ \mathsf{ERM}(X_1,X_2,\dots, X_N) = \arg\max_{p} \mathop{{}\mathbb{E}}_{S\sim \tilde{F}, B\sim \tilde{F}}[S + (B - S)\cdot\mathbbm{1}[B\ge p\ge S]]. \] If there are multiple prices $p$ that maximize the expected welfare, the ERM mechanism may select any of them. \end{definition} For $N = 1,2,3,5, 10$, we compute the approximation ratios of the optimal order statistic mechanisms and also show upper bounds for ERM. The results are listed below. To prove the upper bounds, we use a counterexample from \cite{kang2022fixed} and show that ERM performs badly on this instance. We defer the complete proof of the upper bound for ERM to Appendix~\ref{appendix:ERM} and the details of the numerical results to Appendix~\ref{appendix:symmetricexperiment}. \begin{table}[H] \centering \begin{tabular}{ c | c | c }\hline \#Samples & Order Statistic Mechanism & ERM \\ \hline $1$ & $0.75$ & $0.5$\\ $2$ & $0.821$ & $\le 0.67$\\ $3$ & $0.822$ &$\le 0.75$\\ $5$ & $0.847$ &$\le 0.76$\\ $10$ & $0.852$ &$\le 0.80$\\ $ \infty$ & $\frac{2 + \sqrt{2}}{4}\approx 0.8536$ & / \end{tabular} \caption{Approximation Ratios with Different Numbers of Samples\\ in the Symmetric Setting.}\label{table:symmetric} \end{table} \subsubsection{General Instances} \label{subsec:generalsmallsample} We now consider the general setting, where the buyer's distribution may differ from the seller's. Recall that we only consider mechanisms over the seller's information, since there is no constant-approximation quantile or order statistic mechanism over the buyer's information. Using similar ideas, we first give a tight analysis of quantile mechanisms, which will guide us to the optimal order statistic mechanism.
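As an aside, the ERM benchmark from the symmetric experiments above admits a very short sketch (an illustration of ours, not the implementation behind the bounds); since the set of trading pairs only changes at sample values, some maximizer of the empirical welfare always lies at a sample point:

```python
def empirical_welfare(samples, p):
    """Welfare at price p when both S and B are drawn from the empirical
    distribution: E[S] plus the expected gain from trade when S <= p <= B."""
    n = len(samples)
    base = sum(samples) / n
    gft = sum(b - s for s in samples for b in samples if s <= p <= b) / (n * n)
    return base + gft

def erm_price(samples):
    """ERM sketch: some maximizer always lies at a sample point, so it
    suffices to evaluate the empirical welfare at each sample value."""
    return max(samples, key=lambda p: empirical_welfare(samples, p))
```

For instance, on samples $\{0.2, 0.5, 0.9\}$ this sketch selects the price $0.5$.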
\begin{lemma}[Theorem~4.1 of \cite{blumrosen2021almost}] \label{lem:asymratio} For any quantile mechanism $Q$ (over the seller's distribution) with cumulative distribution function $Q(t)$, its approximation ratio is exactly \[\min_{x\in [0,1]}{\int_{[0,x]} t\, dQ(t) + 1-x}.\] \end{lemma} Similarly, combining Lemma~\ref{lem:smallsamplekey} and Lemma~\ref{lem:asymratio}, we are able to characterize the optimal order statistic mechanism with $N$ samples from the seller's distribution by an optimization problem: \begin{theorem} \label{thm:asymoptorderstatistic} The optimal order statistic mechanism with $N$ samples for general instances is the solution to the following optimization problem: \[P_N^* = \arg\max_{P_1,P_2,\cdots, P_N \geq 0\atop \sum_{i=1}^N P_i = 1} \min_{x\in [0,1]}{\int_{[0,x]} t\cdot p(t)\, d t + 1-x}, \] where $p(x) = \sum_{i=1}^N P_i \cdot f_N^i(x)$ and $f_N^i(x) = N \binom{N-1}{i-1} \cdot x^{i-1}\cdot(1-x)^{N-i}$ is the p.d.f. of the $i$-th order statistic over $N$ samples drawn uniformly and independently from $[0, 1]$. \end{theorem} The proofs of Lemma~\ref{lem:asymratio} and Theorem~\ref{thm:asymoptorderstatistic} are deferred to Appendix~\ref{appendix:proofasymratio} and \ref{appendix:proofasymoptorderstatistic}, respectively. Again, this optimization problem is easy to solve when the number of samples $N$ is not too large. We solve the optimization problem numerically for $N = 1,2,3,5, 10$. Note that we do not compare our mechanisms to the Empirical Risk Minimization mechanism in the general setting: we only have sample access to the seller's distribution, and ERM cannot be implemented without the buyer's samples. The details of the numerical results are deferred to Appendix~\ref{appendix:generalexperiment}.
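As a numerical sketch of Lemma~\ref{lem:asymratio} (ours, with the integral accumulated on a grid): the single-sample mechanism corresponds to the uniform density $q\equiv 1$ and yields ratio $\frac{1}{2}$, consistent with the $N=1$ row of Table~\ref{table:asymmetric}, while the density $q(t)=\frac{1}{t}$ on $[1/e,1]$ (the optimal quantile mechanism of \cite{blumrosen2021almost}) yields $1-\frac{1}{e}$:

```python
import math

def asym_ratio(q, grid=200000):
    """min over x in [0,1] of  int_0^x t*q(t) dt + 1 - x,  with the running
    integral accumulated by the midpoint rule on a uniform grid."""
    h = 1.0 / grid
    cum, best = 0.0, float("inf")
    for j in range(grid + 1):
        x = j * h
        best = min(best, cum + 1 - x)
        if j < grid:
            t = x + h / 2
            cum += t * q(t) * h
    return best

# Single uniform sample (N = 1): density q = 1 on [0, 1]  ->  ratio 1/2.
r1 = asym_ratio(lambda t: 1.0)
# Optimal quantile mechanism: q(t) = 1/t on [1/e, 1]  ->  ratio 1 - 1/e.
r2 = asym_ratio(lambda t: 1.0 / t if t >= 1.0 / math.e else 0.0)
assert abs(r1 - 0.5) < 1e-3
assert abs(r2 - (1.0 - 1.0 / math.e)) < 1e-3
```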
\begin{table}[H] \centering \begin{tabular}{ c | c }\hline \#Samples & Order Statistic Mechanism \\ \hline $1$ & $0.5$ \\ $2$ & $0.531$ \\ $3$ & $0.578$ \\ $5$ & $0.601$ \\ $10$ & $0.615$ \\ $ \infty$ & $1 - {1\over e} \approx 0.632$ \end{tabular} \caption{Approximation Ratios with Different Numbers of Samples\\ in the General Setting.}\label{table:asymmetric} \end{table} \subsection{Asymptotic Analysis: From Quantile to Order Statistics} \label{subsec:largesample} In this section, we turn to the case when the number of samples tends to infinity. As shown in Section~\ref{subsec:Connection}, any quantile mechanism can be approximated by order statistic mechanisms within any small error. Using this idea, we provide a ``black-box'' reduction that converts any quantile mechanism with a continuous probability density function $q(x)$ into an order statistic mechanism with $N$ samples, where $N$ is typically polynomial in $\frac{1}{\varepsilon}$ as long as the probability density function is well behaved. We now state this formally. \begin{lemma}\label{lem:convert} Let $\mathcal{C}:\Delta([0,1])\to \mathbb{R}$ be a function that maps every quantile mechanism $Q$ with a continuous probability density function to its approximation ratio, such that for any quantile mechanism $Q_1$ with p.d.f. $q_1(x)$ and quantile mechanism $Q_2$ with p.d.f. $q_2(x)$, it holds that \[\mathcal{C}(Q_1) - \mathcal{C}(Q_2) \geq -c\cdot \left|q_1 - q_2\right|_{\infty},\] where $c$ is a constant. Now let $Q$ be any quantile mechanism with continuous probability density function $q(x)$. Define $M$ as $\max_{x\in [0,1]} q(x)$.
For any $\varepsilon > 0$, suppose $n$ is a positive integer satisfying \begin{align} \omega\left(\frac{1}{\sqrt{n - 1}}\right) &\leq \varepsilon / 100\label{eq:con1}\\ 2n\exp\left(-\frac{\varepsilon^2}{8\omega^2\left(\frac{1}{\sqrt{n - 1}}\right)}\right) &\leq \varepsilon\label{eq:con2}\\ \exp\left(-\frac{\varepsilon^2 n}{48M^2}\right) &\leq \varepsilon/2 \label{eq:con3} \end{align} where $\omega(h) = \sup_{x,y\in [0,1]\atop |x - y| \leq h} \left|q(x) - q(y)\right|$ is the modulus of continuity of $q$. Then, there exists an order statistic mechanism with $n$ samples that achieves an approximation ratio of $\mathcal{C}(Q) - c\cdot \varepsilon$. \end{lemma} The high-level idea of the proof is as follows. Since the probability density functions of order statistics form a Bernstein polynomial basis, we can approximate the p.d.f. $q(x)$ of the quantile mechanism within any small error. Inequalities~\eqref{eq:con1}, \eqref{eq:con2} and \eqref{eq:con3} ensure that we obtain an order statistic mechanism whose corresponding distribution over quantiles is close to the desired quantile mechanism $Q$. Finally, by the property of $\mathcal{C}$, their approximation ratios are also close. The proof is postponed to Appendix~\ref{appendix:proofconvert}. In the following, we show that we can apply Lemma~\ref{lem:convert} to both the symmetric and general settings and convert the optimal quantile mechanism into an order statistic mechanism with a loss of at most $\varepsilon$ using $\mathrm{poly}\left(\frac{1}{\varepsilon}\right)$ samples. \subsubsection{Symmetric Instance} We first study the case when the distributions are symmetric. \cite{kang2022fixed} provide a mechanism that chooses the mean of the distribution $F$ as the price. They show that in the symmetric setting, this is the optimal fixed-price mechanism and achieves an approximation ratio of $\frac{2+\sqrt{2}}{4}$.
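To make the approximation step behind Lemma~\ref{lem:convert} concrete, here is a small sketch (ours, with the ad-hoc weight choice $P_i \propto q\left(\frac{i-1}{N-1}\right)$ rather than the exact construction of the proof): the mixture density $\sum_i P_i f_N^i$ is a Bernstein-type approximation of $q$, and for a linear density it is exact:

```python
from math import comb

def f(N, i, x):
    """p.d.f. of the i-th order statistic of N i.i.d. uniform samples."""
    return N * comb(N - 1, i - 1) * x ** (i - 1) * (1 - x) ** (N - i)

def mixture_weights(q, N):
    """Ad-hoc weights P_i proportional to q((i-1)/(N-1)), normalized to sum 1."""
    w = [q((i - 1) / (N - 1)) for i in range(1, N + 1)]
    s = sum(w)
    return [wi / s for wi in w]

N = 50
q = lambda x: 2 * x                       # target density on [0, 1]
P = mixture_weights(q, N)
xs = [j / 100 for j in range(101)]
err = max(abs(sum(P[i - 1] * f(N, i, x) for i in range(1, N + 1)) - q(x)) for x in xs)
# A Bernstein-type mixture reproduces a linear density exactly (up to rounding).
assert err < 1e-6
```

For a general continuous $q$ the same recipe has uniform error governed by the modulus of continuity $\omega$, which is exactly the quantity controlled in conditions~\eqref{eq:con1} and \eqref{eq:con2}.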
However, Lemma~\ref{lem:convert} requires a quantile mechanism, and such a mean-based mechanism cannot be converted directly into an order statistic mechanism. We show that quantile mechanisms can also reach the optimal $\frac{2 + \sqrt{2}}{4}$-approximation ratio. After that, we use the technique of Lemma~\ref{lem:convert} to produce an order statistic mechanism that achieves an approximation ratio of $\frac{2 + \sqrt{2}}{4} - \varepsilon$ with $\mathrm{poly}\left(\frac{1}{\varepsilon}\right)$ samples. To start with, we first present our optimal quantile mechanism. \begin{theorem} \label{thm:symquantile} There is a $\frac{2 + \sqrt{2}}{4}$-approximation quantile mechanism in the symmetric setting. \end{theorem} \begin{proof} Our quantile mechanism $Q$ runs as follows: \begin{itemize} \item Let $F$ be the c.d.f. of the distribution. \item Output $F^{-1}\left(\frac{\sqrt{2}}{2}\right)$, i.e., the $\frac{\sqrt{2}}{2}$-quantile of the distribution, as the price. \end{itemize} The approximation ratio can be calculated directly using Lemma~\ref{lem:symratio}. One can see that \begin{align*} \begin{split} &\inf_{x\in [0,1)} \frac{\int_{[0,x]} t(1-x) \, d Q(t) + \int_{(x, 1]} (1-t)x\, dQ(t) + (1-x)}{1-x^2} \\ &= \min\left(\min_{x\in[0,\frac{\sqrt{2}}{2}]} \frac{x\cdot(1-\frac{\sqrt{2}}{2}) + 1-x}{1-x^2}, \inf_{x\in[\frac{\sqrt2}2,1)}\frac{\frac{\sqrt{2}}{2}\cdot(1 -x) + (1-x)}{1-x^2}\right)\\ & = \frac{2+\sqrt{2}}{4}, \end{split} \end{align*} which completes the proof. \end{proof} Now we aim to convert it into an order statistic mechanism. \begin{theorem} \label{thm:symapprox} There exists an order statistic mechanism $P$ with $N = O\left(\frac{1}{\varepsilon^7}\right)$ samples that achieves a $\left(\frac{2 + \sqrt{2}}{4} - \varepsilon\right)$-approximation.
\end{theorem} To start with, notice that it is impossible to apply Lemma~\ref{lem:convert} directly to the optimal quantile mechanism, since it does not have a continuous probability density function. So our first step is to provide a quantile mechanism with a continuous density. \begin{lemma} \label{lem:symturning} For any $\varepsilon > 0$, there exists a quantile mechanism $Q'$ with a probability density function $\tilde{q}(x)$ such that $\omega\left(c \cdot \varepsilon^{3.5}\right) \leq 32^2 c\cdot \varepsilon^{1.5}$ and $\max_{x\in[0,1]} \tilde{q}(x) \leq 32/\varepsilon$. Furthermore, the mechanism $Q'$ achieves an approximation ratio of $\frac{\sqrt2 + 2}{4} - \varepsilon/2$. \end{lemma} Our last step is to make sure that the approximation ratio does not differ too much between two mechanisms whose probability density functions are close to each other. \begin{lemma} \label{lem:symclose} Suppose $\mathcal{C}$ is the function that maps every quantile mechanism for symmetric instances with a continuous probability density function $q(x)$ to its approximation ratio, i.e., \[ \mathcal{C}(q)=\inf_{x\in [0,1)} \frac{\int_{[0,x]} q(t)t(1-x) \, dt + \int_{(x, 1]} q(t)(1-t)x\, dt + (1-x)}{1-x^2}.\] For any quantile mechanism $Q_1$ with continuous p.d.f. $q_1(x)$ and $Q_2$ with continuous p.d.f. $q_2(x)$, it holds that \[\mathcal{C}(q_1) - \mathcal{C}(q_2) \geq - \left|q_1 - q_2\right|_{\infty}.\] \end{lemma} We first use these lemmas to give a proof of Theorem~\ref{thm:symapprox}, and leave the proofs of Lemma~\ref{lem:symturning} and Lemma~\ref{lem:symclose} to Appendix~\ref{appendix:proofofsymturning} and \ref{appendix:proofsymclose}, respectively. \begin{proof}[Proof Of Theorem~\ref{thm:symapprox}] Let $n = c\cdot \frac{1}{\varepsilon^7} + 1$ where $c$ is a large enough constant. Notice that $\omega\left(\frac{1}{\sqrt{n - 1}}\right) = \omega\left(c^{-0.5}\cdot \varepsilon^{3.5}\right)\leq 32^2 c^{-0.5}\cdot \varepsilon^{1.5}$.
This implies that $2n\exp\left(-\frac{\varepsilon^2}{8\omega^2\left(\frac{1}{\sqrt{n - 1}}\right)}\right) \leq \varepsilon / 2$. Besides, define $M = \max_{x\in [0,1]} \tilde{q}(x)$, which is at most $32/\varepsilon$. It is also easy to verify that $\exp\left(-\frac{\varepsilon^2 n}{48M^2}\right) \leq \varepsilon / 4$. Combining the properties above with Lemma~\ref{lem:symclose}, we can apply Lemma~\ref{lem:convert} and see that there exists an order statistic mechanism with $n$ samples with an approximation ratio of at least $\mathcal{C}(Q') - \varepsilon / 2$. Together with Lemma~\ref{lem:symturning}, it follows that this order statistic mechanism is $\left(\frac{2+\sqrt2}{4} - \varepsilon\right)$-approximate. This concludes our proof. We would like to comment that if we always choose the $\left\lfloor \frac{\sqrt2}{2}\cdot n\right\rfloor$-th order statistic as the price, there is a direct argument showing that we can achieve an approximation ratio of $\frac{2+\sqrt2}{4} - \varepsilon$ with $\tilde{O}\left(\frac{1}{\varepsilon^2}\right)$ samples. \end{proof} \subsubsection{General Instance} We now consider the general instance. \cite{blumrosen2021almost} provides a $\left(1 - 1/e\right)$-approximation quantile mechanism, which is also shown to be optimal by \cite{kang2022fixed}. Using the black-box reduction of Lemma~\ref{lem:convert}, we show that the optimal quantile mechanism can be approximated by order statistic mechanisms as closely as desired, and we also obtain an upper bound on the sample complexity. \begin{theorem} \label{thm:asymapprox} There exists an order statistic mechanism $P$ with $N = O\left(\frac{1}{\varepsilon^5}\right)$ samples that achieves a $\left(1 - \frac{1}{e} - \varepsilon\right)$-approximation. \end{theorem} In the following proof, we use an order statistic mechanism to approximate the optimal quantile mechanism $Q$ with p.d.f. $q(x) = \frac{1}{x}$ on $[1/e, 1]$. As before, $q(x)$ is not a continuous function on $[0, 1]$.
Thus, we need to first convert it into a continuous function $\tilde{q}(x)$ on $[0, 1]$ and then apply Lemma~\ref{lem:convert}. As before, we first introduce the following two lemmas. \begin{lemma} \label{lem:asymturning} For any $\varepsilon > 0$, there exists a quantile mechanism $Q'$ with a probability density function $\tilde{q}(x)$ such that $\omega\left(c\cdot \varepsilon^{2.5}\right) \leq 100c \cdot \varepsilon^{1.5}$ and $\max_{x\in[0,1]} \tilde{q}(x)\leq 2e$. Besides, the quantile mechanism $Q'$ has an approximation ratio of $1 - \frac{1}{e} - \varepsilon/2$. \end{lemma} \begin{lemma} \label{lem:asymclose} Suppose $\mathcal{C}$ is the function that maps every quantile mechanism for general instances with a continuous probability density function $q(x)$ to its approximation ratio, i.e., \[ \mathcal{C}(q)=\min_{x\in [0,1]} {\int_0^x q(t) t\, dt + 1-x}.\] For any quantile mechanism $Q_1$ with continuous p.d.f. $q_1(x)$ and $Q_2$ with continuous p.d.f. $q_2(x)$, it holds that \[\mathcal{C}(q_1) - \mathcal{C}(q_2) \geq - \left|q_1 - q_2\right|_{\infty}.\] \end{lemma} The proofs of Lemma~\ref{lem:asymturning} and Lemma~\ref{lem:asymclose} are in Appendix~\ref{appendix:proofasymturning} and \ref{appendix:proofasymclose}, respectively. \begin{proof}[Proof Of Theorem~\ref{thm:asymapprox}] We follow the same argument as in the proof of Theorem~\ref{thm:symapprox}. Let $n = c\cdot \frac{1}{\varepsilon^5} + 1$ where $c$ is a large enough constant. Again it is easy to see that $\omega\left(\frac{1}{\sqrt{n - 1}}\right) \leq 100c^{-0.5} \cdot \varepsilon^{1.5}$. Thus it holds that $2n\exp\left(-\frac{\varepsilon^2}{8\omega^2\left(\frac{1}{\sqrt{n - 1}}\right)}\right) \leq \varepsilon / 2$. Besides, defining $M = \max_{x\in [0,1]} \tilde{q}(x) \leq 2e$, we can also verify that $\exp\left(-\frac{\varepsilon^2 n}{48M^2}\right) \leq \varepsilon / 4$.
Combining the properties above with Lemma~\ref{lem:asymclose} and applying Lemma~\ref{lem:convert}, we obtain an order statistic mechanism with $n$ samples with an approximation ratio of at least $\mathcal{C}(Q') - \varepsilon / 2$. Together with Lemma~\ref{lem:asymturning}, the approximation ratio of this order statistic mechanism is at least $1 - 1/e - \varepsilon$. This finishes our proof. \end{proof} \appendix \section{Tie Breaking}\label{appendix:tiebreaking} For a distribution $D$ with point masses, the following reduction converts it into a continuous one. We overload the notation $D$ and think of it as a bivariate distribution whose first coordinate is drawn from the original univariate distribution $D$ and whose second, tie-breaker coordinate is drawn independently and uniformly from $[0,1]$. We say $(X_1, t_1) > (X_2, t_2)$ if and only if either $X_1 > X_2$, or $X_1 = X_2$ and $t_1 > t_2$. Since the tie-breaker coordinate is continuous, the probability of having $(X_1, t_1) = (X_2, t_2)$ for any two values during a run of any mechanism is zero. Therefore we can define the c.d.f. of $D$ as \[F_{D}(X,t) = \Pr_{(Y,u) \sim (D,U[0,1])}[(Y, u) < (X, t)].\] Note that the second coordinate is only used to break ties and does not affect the calculation of welfare. After including the additional random variable, $D$ has been converted into a continuous distribution, since its second coordinate is continuous. \section{Missing Proofs in Section~\ref{sec:fullinfo}} \label{appendix:fullinfo} \subsection{Proof of Lemma~\ref{lem:fullinfo}} \label{appendix:prooffullinfo} We first show the proof of Lemma~\ref{lem:fullinfo}. The approximation ratio of the optimal fixed-price mechanism can be written as \begin{align*} \begin{split} \min_{\mathcal{I} = (F_S, F_B)} \max_{p\in \mathbb{R}} \frac{\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})}.
\end{split} \end{align*} We first show that for any instance $\mathcal{I} = (F_S, F_B)$, there is a valid solution $(u, v, r)$ such that $r = \max_{p\in \mathbb{R}} \frac{\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})}$. We first scale the instance by $\frac{1}{\text{OPT-}\mathcal{W}(\mathcal{I})}$ to obtain $\mathcal{I}' = (F_S', F_B')$ with $\text{OPT-}\mathcal{W}(\mathcal{I}') = 1$. This scaling means that $\mathcal{W}(\mathcal{I}, \text{OPT-}\mathcal{W}(\mathcal{I})\cdot p) = \text{OPT-}\mathcal{W}(\mathcal{I})\cdot \mathcal{W}(\mathcal{I}', p)$ for all $p\in \mathbb{R}_{\ge 0}$. This implies that \begin{align*} \begin{split} \max_{p\in \mathbb{R}} \frac{\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})} = \max_{p\in \mathbb{R}} {\mathcal{W}(\mathcal{I}', p)}. \end{split} \end{align*} Therefore, let $u$ and $v$ be the probability measures of $F_S'$ and $F_B'$, and let $r = \max_{p\in \mathbb{R}} {\mathcal{W}(\mathcal{I}', p)}$. It is easy to verify that $(u, v, r)$ is a valid solution. Letting $r^*$ be the optimal value of $\mathsf{FullOp}$, this implies that $r^*\leq \max_{p\in \mathbb{R}} \frac{\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})}$ for any instance $\mathcal{I} = (F_S, F_B)$. Taking the minimum over all possible $\mathcal{I}$, we then get that \begin{equation} \label{eq:fullinfopr1} r^* \leq \min_{\mathcal{I} = (F_S, F_B)} \max_{p\in \mathbb{R}} \frac{\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})}. \end{equation} Next, let $(u^*, v^*, r^*)$ be the optimal solution of $\mathsf{FullOp}$. Since $u^*, v^*$ are both probability measures, let $F_S^*, F_B^*$ be the corresponding distributions of $u^*$ and $v^*$, and let $\mathcal{I}^* = (F_S^*, F_B^*)$ be the corresponding instance. Now by the constraint of $\mathsf{FullOp}$, we know that $\text{OPT-}\mathcal{W}(\mathcal{I}^*)\geq 1$.
Besides, since $(u^{*}, v^*, r^*)$ is an optimal solution, we have $r^{*} = \max_{p\in \mathbb{R}} \mathcal{W}(\mathcal{I}^*, p)$. Therefore, \begin{equation} \label{eq:fullinfopr2} r^* = \max_{p\in \mathbb{R}} \mathcal{W}(\mathcal{I}^*, p) \geq \max_{p\in \mathbb{R}} \frac{\mathcal{W}(\mathcal{I}^*, p)}{\text{OPT-}\mathcal{W}(\mathcal{I}^*)} \geq \min_{\mathcal{I} = (F_S, F_B)} \max_{p\in \mathbb{R}} \frac{\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})}. \end{equation} Now combining inequalities~\eqref{eq:fullinfopr1} and \eqref{eq:fullinfopr2}, it follows that \[r^* = \min_{\mathcal{I} = (F_S, F_B)} \max_{p\in \mathbb{R}} \frac{\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})},\] which completes the proof. \subsection{Proof of Lemma~\ref{lem:fullinfolower}} \label{appendix:prooffullinfolower} Before giving the proof of Lemma~\ref{lem:fullinfolower}, we first prove a lemma that helps us discretize a continuous distribution. \begin{lemma} \label{lem:discretization} For any instance $\mathcal{I} = (F_S, F_B)$ and any $0 = p_1 < p_2 < \cdots < p_n$, there exist numbers $\{s_i\}_{i\in [n]}, \{b_i\}_{i\in [n]}$ satisfying the following properties.
\begin{align} s_i , b_i & \geq 0\quad \forall i\in [n]\label{eq:discretization1}\\ 1\le\sum_{i=1}^n s_i &\le 1 + \frac{\mathop{{}\mathbb{E}}[S]}{p_n} \quad 1 \le \sum_{i=1}^n b_i \le 1+ \frac{\mathop{{}\mathbb{E}}[B]}{p_n}\label{eq:discretization2}\\ \sum_{i=1}^t s_i &\geq \Pr[S < p_t]\quad \sum_{i=1}^t b_i \geq \Pr[B<p_t]\quad \forall t\in [n]\label{eq:discretizationNew}\\ \sum_{i=1}^{n-1} s_i &\leq 1 \quad \sum_{i=1}^{n-1} b_i \leq 1\label{eq:discretization6}\\ \mathop{{}\mathbb{E}}_{S\sim F_S}\left[S\right] &= \sum_{i=1}^n s_i p_i\label{eq:discretization3}\\ \mathop{{}\mathbb{E}}_{B\sim F_B}\left[B\right] &= \sum_{j=1}^n b_j p_j\label{eq:discretization4}\\ \text{OPT-}\mathcal{W}(\mathcal{I}) &\defeq \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}[\max(S, B)]\leq \sum_{i=1}^n \sum_{j=1}^n s_i b_j \cdot \max(p_i, p_j)\label{eq:discretization5}\\ \mathcal{W}(\mathcal{I}, p_t) &\defeq \mathop{{}\mathbb{E}}_{S\sim F_S}[S] + \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\Big[(B-S)\cdot \mathbbm{1}[S\leq p_t\leq B]\Big]\nonumber\\ & \quad \geq \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t - 1} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i) \quad \forall t\in [n] \label{eq:discretization7} \end{align} Additionally, $\{b_i\}_{i\in [n]}$ and $\{s_i\}_{i\in [n]}$ only depend on $F_B$ and $F_S$, respectively. \end{lemma} \begin{proof} We construct $(s_1,\cdots,s_n,b_1,\cdots,b_n)$ as follows. For the seller, define \[q_{s,i} = \Pr_{S\sim F_S}[p_i\le S < p_{i+1}] \text{ and } E_{s,i} = \mathop{{}\mathbb{E}}_{S\sim F_S}[S\cdot\mathbbm{1}[p_i\le S < p_{i+1}]], ~ \forall i\in [n],\] where we assume that $p_{n+1} = +\infty$. It is clear from the definition that $q_{s,i} \cdot p_i \le E_{s,i} \le q_{s,i} \cdot p_{i+1}$.
Therefore, for any $i\in [n - 1]$, there exist non-negative numbers $s_{i,\textsc{Left}}$ and $s_{i + 1, \textsc{Right}}$ such that \begin{equation}\label{eq:discredef} s_{i,\textsc{Left}} + s_{i + 1, \textsc{Right}} = q_{s,i}~~ \text{and}~~s_{i,\textsc{Left}} \cdot p_i + s_{i + 1, \textsc{Right}}\cdot p_{i+1} = E_{s,i}. \end{equation} We further define $s_{n,\textsc{Left}} = E_{s,n} / p_n$ and $s_{1,\textsc{Right}} = 0$. Now set $s_i = s_{i,\textsc{Left}} + s_{i,\textsc{Right}}$ for all $i\in [n]$. For the buyer, we define $\{b_i\}_{i\in [n]}$ similarly. Therefore, from the construction, it is clear that $\{b_i\}_{i\in [n]}$ and $\{s_i\}_{i\in [n]}$ only depend on $F_B$ and $F_S$, respectively. We now verify that $\left(\{s_i\}_{i\in [n]}, \{b_i\}_{i\in [n]}\right)$ satisfies the properties above. The non-negativity of $s_i$ and $b_i$ is immediately derived from $s_{i,\textsc{Left}},s_{i,\textsc{Right}} \geq 0$. From our definition, it is clear that $\sum_{i=1}^n q_{s,i} = 1$; therefore \[\sum_{i=1}^n s_i = \sum_{i=1}^n \left(s_{i,\textsc{Left}} + s_{i,\textsc{Right}}\right) \geq \sum_{i=1}^n q_{s,i} = 1,\] and \[\sum_{i=1}^{n - 1} s_i = \sum_{i=1}^{n-1} \left(s_{i,\textsc{Left}} + s_{i,\textsc{Right}}\right) \leq \sum_{i=1}^n q_{s,i} = 1.\] We can also see that \[\sum_{i=1}^n s_i = \sum_{i=1}^n \left(s_{i,\textsc{Left}} + s_{i,\textsc{Right}}\right) \leq \sum_{i=1}^n q_{s,i} + s_{n,\textsc{Left}} = 1 + \frac{E_{s,n}}{p_n} \leq 1 + \frac{\mathop{{}\mathbb{E}}[S]}{p_n}.\] For any $t\in [n]$, it is clear that \[\sum_{i=1}^t s_i = \sum_{i=1}^t \left(s_{i,\textsc{Left}} + s_{i,\textsc{Right}}\right) \geq \sum_{i=1}^{t-1} q_{s,i} = \Pr[S < p_t].\] For the expectations, it holds that \[\sum_{i=1}^n s_i p_i = \sum_{i=1}^n \left(s_{i,\textsc{Left}} + s_{i,\textsc{Right}}\right)\cdot p_i = \sum_{i=1}^n E_{s,i} = \mathop{{}\mathbb{E}}[S].\] By symmetry, the analogous identities and inequalities also hold for $\{b_i\}_{i\in [n]}$.
So far, we have verified that properties \eqref{eq:discretization1}, \eqref{eq:discretization2}, \eqref{eq:discretizationNew}, \eqref{eq:discretization6}, \eqref{eq:discretization3} and \eqref{eq:discretization4} are satisfied. It only remains to show that \eqref{eq:discretization5} and \eqref{eq:discretization7} hold. For any $i\neq j\in [n]$, we can assume w.l.o.g. that $i < j$. We can see that \begin{align}\label{eq:FullVeri1} \begin{split} &\mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\Big[\max(S,B)\cdot \mathbbm{1}[S\in [p_i,p_{i+1})] \mathbbm{1}[B \in [p_j,p_{j+1})]\Big] \\ & = \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\Big[B\cdot \mathbbm{1}[S\in [p_i,p_{i+1})] \mathbbm{1}[B \in [ p_j,p_{j+1})]\Big]\\ & = \mathop{{}\mathbb{E}}_{B\sim F_B}\Big[B\cdot \mathbbm{1}[B \in [p_j,p_{j+1})]\Big] \cdot \Pr_{S\sim F_S}\Big[ S\in [p_i, p_{i+1})\Big]\\ & = E_{b, j} \cdot q_{s, i}\\ & = (b_{j,\textsc{Left}} \cdot p_j + b_{j + 1, \textsc{Right}} \cdot p_{j+1}) \cdot (s_{i,\textsc{Left}} + s_{i+1, \textsc{Right}})\\ & = s_{i,\textsc{Left}} \cdot b_{j,\textsc{Left}} \max(p_i, p_j) + s_{i+1,\textsc{Right}} \cdot b_{j,\textsc{Left}} \max(p_{i+1}, p_j) \\ &\quad + s_{i,\textsc{Left}} \cdot b_{j+1,\textsc{Right}} \max(p_i, p_{j+1}) + s_{i+1,\textsc{Right}} \cdot b_{j+1,\textsc{Right}} \max(p_{i+1}, p_{j+1}) \end{split} \end{align} The second equality is due to the independence between $S$ and $B$. Now consider the case when $i = j \leq n - 1$. For any $x, y\in [p_i, p_{i+1}]$, we have \begin{align*} \begin{split} \max(x,y) \le \max(p_i, y) \cdot \frac{p_{i+1} - x}{p_{i + 1} - p_i}+ \max(p_{i+1}, y) \cdot\frac{x - p_i}{p_{i+ 1} - p_{i}}, \end{split} \end{align*} as $\frac{p_{i+1} - x}{p_{i + 1} - p_i} + \frac{x-p_{i}}{p_{i+ 1} - p_{i}} = 1$ and $\frac{p_{i+1} - x}{p_{i + 1} - p_i} \cdot p_i + \frac{x - p_{i}}{p_{i+ 1} - p_{i}} \cdot p_{i+1} = x$.
Based on the inequality above, for any fixed $y\in [p_i, p_{i+1})$, we have \begin{align*} \begin{split} &\mathop{{}\mathbb{E}}_{S\sim F_S}\Big[\max(S, y)\cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\Big] \\ \leq & \mathop{{}\mathbb{E}}_{S\sim F_S}\left[\left( \max(p_i, y) \cdot \frac{p_{i+1} - S}{p_{i + 1} - p_i}+ \max(p_{i+1}, y) \cdot\frac{S - p_i}{p_{i+ 1} - p_{i}} \right) \mathbbm{1}[S\in [p_i,p_{i+1})]\right]\\ = &\max(p_i, y) \mathop{{}\mathbb{E}}_{S\sim F_S}\left[\frac{p_{i+1} - S}{p_{i+1} - p_i} \cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\right] + \max(p_{i+1}, y)\mathop{{}\mathbb{E}}_{S\sim F_S}\left[\frac{S - p_i}{p_{i+1} - p_i}\cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\right]\\ = &y\cdot s_{i,\textsc{Left}} + p_{i+1}\cdot s_{i+1,\textsc{Right}} \end{split} \end{align*} The last equality is because of the following identities: \begin{align*} \begin{split} \mathop{{}\mathbb{E}}_{S\sim F_S}\left[\frac{p_{i+1} - S}{p_{i+1} - p_i} \cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\right] + \mathop{{}\mathbb{E}}_{S\sim F_S}\left[\frac{S - p_i}{p_{i+1} - p_i}\cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\right] &= \Pr[S\in [p_i, p_{i+1})] \\ p_i\cdot \mathop{{}\mathbb{E}}_{S\sim F_S}\left[\frac{p_{i+1} - S}{p_{i+1} - p_i} \cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\right] + p_{i+1} \cdot \mathop{{}\mathbb{E}}_{S\sim F_S}\left[\frac{S - p_i}{p_{i+1} - p_i}\cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\right] &= \mathop{{}\mathbb{E}}[S\cdot \mathbbm{1}[S\in[p_i, p_{i+1})]]. \end{split} \end{align*} Hence, we can conclude that $\left(\mathop{{}\mathbb{E}}_{S\sim F_S}\left[\frac{p_{i+1} - S}{p_{i+1} - p_i} \cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\right], \mathop{{}\mathbb{E}}_{S\sim F_S}\left[\frac{S - p_i}{p_{i+1} - p_i}\cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\right]\right)$ is the unique solution to \eqref{eq:discredef}. Thus, these two numbers equal $s_{i,\textsc{Left}}$ and $s_{i+1,\textsc{Right}}$, respectively.
Due to the inequality above, we have \begin{align}\label{eq:FullVeri2} \begin{split} &\mathop{{}\mathbb{E}}_{B\sim F_B}\left[ \mathop{{}\mathbb{E}}_{S\sim F_S}\Big[\max(S, B)\cdot \mathbbm{1}[S\in [p_i, p_{i+1})]\Big]\cdot \mathbbm{1}[B\in [p_i, p_{i+1})]\right]\\ \leq &\mathop{{}\mathbb{E}}_{B\sim F_B}\left[\left(B\cdot s_{i,\textsc{Left}} + p_{i+1}\cdot s_{i+1,\textsc{Right}}\right)\cdot \mathbbm{1}[B\in [p_i, p_{i+1})]\right]\\ = &b_{i,\textsc{Left}} s_{i,\textsc{Left}}\cdot p_i + b_{i + 1, \textsc{Right}} s_{i,\textsc{Left}}\cdot p_{i+1} + b_{i,\textsc{Left}} s_{i+1,\textsc{Right}} \cdot p_{i+1} + b_{i + 1,\textsc{Right}} s_{i+1,\textsc{Right}}\cdot p_{i+1} \end{split} \end{align} The last special case is when $i = j = n$: \begin{align}\label{eq:FullVeri3} \begin{split} \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\left[\max(S,B)\cdot \mathbbm{1}[S\geq p_n] \mathbbm{1}[B\geq p_n]\right] &\leq \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\left[B S/ p_n\cdot \mathbbm{1}[S\geq p_n] \mathbbm{1}[B\geq p_n]\right]\\ &= p_n \cdot \left(\mathop{{}\mathbb{E}}_{S\sim F_S}\left[S \cdot \mathbbm{1}[S\geq p_n]\right]/p_n\right) \cdot \left(\mathop{{}\mathbb{E}}_{B\sim F_B}\left[{B}\cdot \mathbbm{1}[B\geq p_n]\right]/p_n\right) \\ &= s_{n,\textsc{Left}} \cdot b_{n,\textsc{Left}} \cdot p_n. \end{split} \end{align} Combining inequalities~\eqref{eq:FullVeri1}, \eqref{eq:FullVeri2} and \eqref{eq:FullVeri3}, we have \begin{align*} \begin{split} \sum_{i=1}^n \sum_{j=1}^n s_i b_j \cdot \max(p_i,p_j) &\geq \sum_{i=1}^n \sum_{j=1}^n \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\Big[\max(S, B)\mathbbm{1}[S\in [p_{i}, p_{i+1})]\mathbbm{1}[B\in [p_j, p_{j+1})]\Big] \\ &= \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\left[\max(S, B)\right], \end{split} \end{align*} so inequality~\eqref{eq:discretization5} is satisfied. Finally, it only remains to show that property~\eqref{eq:discretization7} holds.
For any $t\in [n]$, it follows that \begin{small} \begin{align} \begin{split} \mathcal{W}(\mathcal{I}, p_t) & = \mathop{{}\mathbb{E}}_{S\sim F_S}[S] + \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\left[(B - S)\cdot \mathbbm{1}[S\leq p_t\leq B]\right]\\ & \geq \mathop{{}\mathbb{E}}_{S\sim F_S}[S] + \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\left[(B - S)\cdot \mathbbm{1}[S< p_t\leq B]\right]\\ &= \sum_{i=1}^n E_{s,i} + \mathop{{}\mathbb{E}}_{B\sim F_B}[B\cdot \mathbbm{1}[B \geq p_t]]\cdot \Pr_{S\sim F_S}[S< p_t] - \mathop{{}\mathbb{E}}_{S\sim F_S}[S\cdot \mathbbm{1}[S < p_t]]\cdot \Pr_{B\sim F_B}[B\geq p_t]\\ & \geq \sum_{i=1}^n s_i \cdot p_i + \left(\sum_{j=t+1}^n b_j\cdot p_j + b_{t,\textsc{Left}}\cdot p_t\right)\cdot \left(\sum_{i=1}^{t-1} s_i + s_{t,\textsc{Right}}\right) \\ & \quad\quad\quad\quad\quad~ - \left(\sum_{i=1}^{t-1} s_i\cdot p_i + s_{t,\textsc{Right}}\cdot p_t\right)\cdot \left(\sum_{j=t + 1}^n b_j + b_{t,\textsc{Left}}\right)\\ & = \sum_{i=1}^n s_i \cdot p_i + \sum_{i=1}^{t-1} \sum_{j=t + 1}^n s_i b_j (p_j - p_i) + \sum_{j=t+1}^n b_j\cdot s_{t,\textsc{Right}}\cdot (p_j - p_t) + \sum_{i=1}^{t-1} s_i \cdot b_{t,\textsc{Left}}\cdot (p_t - p_i)\\ & \geq \sum_{i=1}^n s_i \cdot p_i + \sum_{i=1}^{t-1} \sum_{j=t+1}^n s_i b_j (p_j - p_i), \end{split} \end{align} \end{small}where the second inequality follows from the fact that \[\Pr[B\geq p_t] = \sum_{j=t}^{n-1} (b_{j,\textsc{Left}} + b_{j+1,\textsc{Right}}) + \Pr[B\geq p_n]{\leq \sum_{j=t}^{n-1} (b_{j,\textsc{Left}} + b_{j+1,\textsc{Right}}) +b_{n,\textsc{Left}}}\leq \sum_{j=t+1}^n b_j + b_{t,\textsc{Left}}. \] Therefore, inequality~\eqref{eq:discretization7} holds. This finishes our proof. \end{proof} With the lemma above, we are ready to give the proof of Lemma~\ref{lem:fullinfolower}.
Consider the following fixed-price mechanism: given any instance $\mathcal{I} = (F_S, F_B)$, we first compute the optimal welfare of the instance. Suppose $\text{OPT-}\mathcal{W}(\mathcal{I}) = c$; we then choose the fixed price $p^*$ from $\{c p_1,\cdots, c p_n\}$ that maximizes the welfare, i.e., $p^* \in \arg\max_{p\in \{c p_1,\cdots,cp_n\}} \mathcal{W}(\mathcal{I}, p)$. In the following, we show that this mechanism is an $r^{*}$-approximation to the optimal welfare. Note that the approximation ratio of our mechanism is independent of $c$.\footnote{The price $p^*$ depends on $c$, but the approximation ratio to the optimal welfare does not.} To keep our analysis clean, we first assume that the instance $\mathcal{I} = (F_S,F_B)$ has optimal welfare $1$. The approximation ratio of our mechanism can then be written as \begin{align*} \begin{split} \min_{\mathcal{I} = (F_S,F_B)\atop \text{OPT-}\mathcal{W}(\mathcal{I}) = 1} \max_{p\in \{p_1,p_2,\cdots, p_n\}} \mathcal{W}(\mathcal{I}, p). \end{split} \end{align*} Next, we argue that for any instance $\mathcal{I} = (F_S, F_B)$ satisfying $\text{OPT-}\mathcal{W}(\mathcal{I}) = 1$, there exists a valid solution $(s_1,\cdots,s_n,b_1,\cdots,b_n,r)$ of $\mathsf{LowerOp}$ such that $r \le \max_{p\in \{p_1,p_2,\cdots,p_n\}} \mathcal{W}(\mathcal{I}, p)$. This immediately implies that $r^{*}$ is a lower bound of the approximation ratio. Given an instance $\mathcal{I} = (F_S, F_B)$ with $\text{OPT-}\mathcal{W}(\mathcal{I}) = 1$, the solution $(s_1,\cdots,s_n,b_1,\cdots,b_n,r)$ is constructed as follows. Let $(s_1,s_2,\cdots, s_n,b_1,b_2,\cdots, b_n)$ be the set of numbers that satisfies all the properties stated in Lemma~\ref{lem:discretization}, and let $r$ be $$\max_{t\in [n]} \left(\sum_{i=1}^n s_i p_i + \sum_{i=1}^{t-1} \sum_{j=t+1}^{n} s_i b_j (p_j-p_i)\right).$$ We first verify that $(\{s_i\}_{i\in [n]}, \{b_i\}_{i\in [n]}, r)$ is a valid solution of $\mathsf{LowerOp}$.
Notice that $\mathop{{}\mathbb{E}}[S]\leq \mathop{{}\mathbb{E}}[\max(S,B)] = 1$ and $\mathop{{}\mathbb{E}}[B] \leq \mathop{{}\mathbb{E}}[\max(B,S)] = 1$. Therefore, constraints~\eqref{eq:LowerOp1}, \eqref{eq:LowerOp2} and \eqref{eq:LowerOp3} directly follow from inequalities~\eqref{eq:discretization1} and \eqref{eq:discretization2}. Moreover, \eqref{eq:LowerOp5} holds by the definition of $r$. Now by property~\eqref{eq:discretization5}, we have \begin{align*} \begin{split} \sum_{i=1}^n \sum_{j=1}^n s_i b_j \cdot \max(p_i,p_j) \geq \sum_{i=1}^n \sum_{j=1}^n \mathop{{}\mathbb{E}}_{S\sim F_S\atop B\sim F_B}\Big[\max(S, B)\mathbbm{1}[S\in [p_{i}, p_{i+1})]\mathbbm{1}[B\in [p_j, p_{j+1})]\Big] = 1, \end{split} \end{align*} so constraint~\eqref{eq:LowerOp4} is satisfied. Finally, we are only left to show that the best price in $\{p_1,p_2,\cdots, p_n\}$ obtains an approximation ratio of at least $r$ on instance $\mathcal{I}$, i.e., $r \leq \max_{p\in \{p_1,p_2,\cdots, p_n\}} \mathcal{W}(\mathcal{I}, p)$. Inequality~\eqref{eq:discretization7} states that \[\mathcal{W}(\mathcal{I}, p_t)\geq \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t - 1} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i).\] Taking the maximum over $t\in [n]$, we then get that \[r = \max_{t\in [n]}\left(\sum_{i=1}^n s_i \cdot p_i + \sum_{i=1}^{t-1} \sum_{j=t+1}^n s_i b_j (p_j - p_i)\right) \leq \max_{t\in [n]} \mathcal{W}(\mathcal{I}, p_t),\] which finishes our proof. \subsection{Proof of Lemma~\ref{lem:fullinfoupper}} \label{appendix:prooffullinfoupper} In the following, we complete the proof of Lemma~\ref{lem:fullinfoupper}.
\begin{center} \begin{tabular}{|c|} \hline The Optimization Problem $\mathsf{UpperOp}$ \\ \hline \parbox{15cm}{ \begin{align} \min_{s_1,s_2\cdots, s_n\atop b_1,b_2,\cdots,b_n,r} \quad r \nonumber \\ \textsf{s.t.} \quad & s_i, b_i \geq 0 & \forall i \in [n]\nonumber\\ & \sum_{i=1}^n s_i = 1 \quad \text{ and } \quad \sum_{i=1}^n b_i = 1 \nonumber\\ & \sum_{i=1}^n \sum_{j=1}^n s_i b_j \max(p_i,p_j) \geq 1 & \nonumber \\ & \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i) \leq r & \forall t \in [n] \nonumber \end{align} }\\ \hline \end{tabular} \end{center} For any fixed support $0 = p_1 < p_2 < \cdots < p_n$ and a valid solution $(s_1, s_2, \cdots, s_n, b_1, b_2,\cdots, b_n, r)$, define an instance $\mathcal{I} = (F_S, F_B)$ satisfying \begin{align*} \begin{split} S \sim F_S, S=\left\{ \begin{aligned} &p_1 + \varepsilon \quad & w.p. \quad s_1\\ &p_2 + \varepsilon \quad & w.p. \quad s_2\\ & \cdots & \\ &p_n + \varepsilon \quad & w.p. \quad s_n \end{aligned} \right. \quad\quad\quad B \sim F_B, B=\left\{ \begin{aligned} &p_1 \quad & w.p. \quad b_1\\ &p_2 \quad & w.p. \quad b_2\\ & \cdots & \\ &p_n \quad & w.p. \quad b_n \end{aligned} \right. \end{split} \end{align*} where $\varepsilon > 0$ is a sufficiently small constant. It is easy to see that both $F_S$ and $F_B$ are valid distributions, since $\mathsf{UpperOp}$ requires the non-negativity of $s_i, b_i$ and $\sum_{i=1}^n s_i = \sum_{i=1}^n b_i = 1$. Next, we aim to show that every fixed-price mechanism has an approximation ratio of at most $r + \varepsilon$ on this instance $\mathcal{I} = (F_S, F_B)$. For any price $x\in \mathbb{R}_{\ge 0}$, first observe that any $x < {\varepsilon}$ is never an optimal price. For $x \geq \varepsilon$, let $p_i$ be the largest $p\in \{p_1,p_2,\cdots, p_n\}$ that is not greater than $x - \varepsilon $.
Notice that $F_S$ is a discrete distribution on support $\{p_i + \varepsilon\}_{i\in [n]}$ and $F_B$ is a discrete distribution on support $\{p_i\}_{i\in [n]}$. This means that choosing the price $p_i + \varepsilon$ instead of $x$ would never be worse. Therefore, the optimal fixed-price mechanism on this instance simply chooses one $p_t\in \{p_1,p_2,\cdots, p_n\}$ that maximizes $\mathcal{W}(\mathcal{I}, p_t + \varepsilon )$. Again, by the fact that $F_S$ and $F_B$ are discrete distributions, $\mathcal{W}(\mathcal{I}, p_t + \varepsilon )$ can be written as: \begin{align*} \begin{split} \mathcal{W}(\mathcal{I}, p_t + \varepsilon ) &= \sum_{i=1}^n (p_i + \varepsilon) s_i + \sum_{i=1}^t \sum_{j=t+1}^n s_i b_j (p_j - p_i - \varepsilon) \\& \leq \sum_{i=1}^n p_i s_i + \sum_{i=1}^t \sum_{j=t+1}^n s_i b_j (p_j - p_i) + \varepsilon. \end{split} \end{align*} Also notice that the constraints of $\mathsf{UpperOp}$ guarantee that \begin{align*} \begin{split} \text{OPT-}\mathcal{W}(\mathcal{I}) = \sum_{i=1}^n\sum_{j=1}^n s_i b_j \max(p_i + \varepsilon,p_j) \geq \sum_{i=1}^n\sum_{j=1}^n s_i b_j \max(p_i,p_j) \geq 1. \end{split} \end{align*} Therefore, the approximation ratio of the optimal fixed-price mechanism on instance $\mathcal{I} = (F_S, F_B)$ is upper bounded by \begin{align*} \begin{split} \frac{\max_{t\in [n]} \mathcal{W}(\mathcal{I}, p_t + \varepsilon)}{\text{OPT-}\mathcal{W}(\mathcal{I})} \le \max_{t\in [n]} \left(\sum_{i=1}^n p_i s_i + \sum_{i=1}^t \sum_{j=t+1}^n s_i b_j (p_j - p_i)\right) + \varepsilon \leq r + \varepsilon. \end{split} \end{align*} This finishes our proof. \subsection{Proof of Lemma~\ref{lem:optconvergence}} \label{appendix:proofconvergence} In this section, we assume that $\varepsilon > 0$ is a small enough constant such that $\varepsilon^2 \ll \varepsilon$.
We first show that, for any $\varepsilon > 0$, there exists a support set $\{p_1,p_2,\cdots, p_n\}$ such that $\mathsf{UpperOp}$ has an optimal value of at most $r^{*} + \varepsilon$. As shown before, we may assume that the instance $\mathcal{I} = (F_S, F_B)$ has optimal welfare $1$. Thus, the approximation ratio of the optimal fixed-price mechanism is \begin{align*} \begin{split} r^{*} = \min_{\mathcal{I} = (F_S, F_B)\atop \text{OPT-}\mathcal{W}(\mathcal{I}) = 1} \max_{p\in \mathbb{R}} {\mathcal{W}(\mathcal{I}, p)}. \end{split} \end{align*} Suppose $r^{*}$ is attained at $\mathcal{I}^{*} = (F_S^*, F_B^*)$. Now define $n = 1/\varepsilon^4$, and $p_i = i\cdot \varepsilon^2 + \varepsilon / 4$ for $i\in [n+1]$. Our idea is to construct a valid solution $\{s_i, b_i\}_{i\in [n+1]}$ by rounding $\mathcal{I}^{*}$ up to the grid $\{p_i\}_{i\in[n+1]}$ and to show that this solution has an objective value close to $r^*$. Set $p_0 = 0$. Now we define \[s_i = \Pr_{S\sim F_S^*}\Big[S\in [(i-1) \varepsilon^2, i \varepsilon^2)\Big] \text{ and } b_i = \Pr_{B\sim F_B^*}\Big[B\in [(i-1) \varepsilon^2, i \varepsilon^2)\Big]\] for $i\in [n]$. In particular, let \[s_{n+1} = \mathop{{}\mathbb{E}}_{S\sim F_S^*}\Big[S\cdot \mathbbm{1}[S \geq n\varepsilon^2]\Big]/\left(n\varepsilon^2\right) \text{ and } b_{n+1} = \mathop{{}\mathbb{E}}_{B\sim F_B^*}\Big[B\cdot \mathbbm{1}[B \geq n\varepsilon^2]\Big]/\left(n\varepsilon^2\right).\] Since $\mathop{{}\mathbb{E}}[S]$ and $\mathop{{}\mathbb{E}}[B]$ are upper bounded by $1$, we can see that $s_{n+1},b_{n+1}\leq \varepsilon^2$. Finally, let $s = \sum_{i=1}^{n+1} s_i$ and $b = \sum_{i=1}^{n+1} b_i$ be the normalization factors. It is straightforward to see that $s = \sum_{i=1}^n s_i + s_{n+1} \leq 1 + \varepsilon^2$. By the same argument, it also holds that $b\leq 1 + \varepsilon^2$.
Now define \[r = \max_{t\in [n+1]} \left(\sum_{i=1}^{n+1} (s_i/s)p_i + \sum_{i=1}^t\sum_{j=t+1}^{n+1} (s_i/s) (b_j/b)(p_j-p_i)\right).\] We aim to verify that $(s_1/s,s_2/s,\cdots,s_{n+1}/s,b_1/b,b_2/b,\cdots,b_{n+1}/b,r)$ is a valid solution of $\mathsf{UpperOp}$. It is easy to see the non-negativity of $s_i,b_i$ and that $\sum_{i=1}^{n+1} s_i/s = \sum_{i=1}^{n+1} b_i/b = 1$. Moreover, from the definition of $r$, the last constraint holds. Now we only need to check the third constraint. For any $i,j\in [n]$, it holds that \begin{align*} &\mathop{{}\mathbb{E}}_{S\sim F_S^*\atop B\sim F_B^*}\Big[\max(S, B)\mathbbm{1}[S\in [(i-1) \varepsilon^2, i \varepsilon^2)]\mathbbm{1}[B\in [(j-1) \varepsilon^2, j \varepsilon^2)]\Big] \\ & \leq (\max(p_i, p_j) - \varepsilon / 4)\, s_i b_j. \end{align*} When one of $i,j$ equals $n+1$ (we can assume $i = n+1$ w.l.o.g.), it is true that \begin{align*} &\mathop{{}\mathbb{E}}_{S\sim F_S^*\atop B\sim F_B^*}\Big[\max(S, B)\mathbbm{1}[S\geq n\varepsilon^2]\mathbbm{1}[B\in [(j-1) \varepsilon^2, j \varepsilon^2)]\Big] \\ & = \mathop{{}\mathbb{E}}_{S\sim F_S^*}[S\cdot \mathbbm{1}[S\geq n\varepsilon^2]]\Pr_{B\sim F_B^*}[B\in [(j-1) \varepsilon^2, j \varepsilon^2)]\\ & = \left(n\varepsilon^2\right) s_{n+1} b_j\\ & \leq (p_{n+1} - \varepsilon / 4)\, s_{n+1} b_j. \end{align*} Finally, for the special case $i = j= n+1$, we can see that \begin{align*} &\mathop{{}\mathbb{E}}_{S\sim F_S^*\atop B\sim F_B^*}\Big[\max(S, B)\mathbbm{1}[S\geq n\varepsilon^2]\mathbbm{1}[B\geq n\varepsilon^2]\Big] \\ & \leq \mathop{{}\mathbb{E}}_{S\sim F_S^*\atop B\sim F_B^*}\Big[B S/\left(n\varepsilon^2\right)\cdot \mathbbm{1}[S\geq n\varepsilon^2]\mathbbm{1}[B\geq n\varepsilon^2]\Big]\\ & = \left(n\varepsilon^2\right) s_{n+1} b_{n+1}\\ & \leq (p_{n+1} - \varepsilon / 4)\, s_{n+1} b_{n+1}. \end{align*} Summing up all the inequalities above, we then get that \begin{align*} &\mathop{{}\mathbb{E}}_{S\sim F_S^*\atop B\sim
F_B^*}[\max(S, B)] \leq \sum_{i=1}^{n+1} \sum_{j=1}^{n+1} \left(\max(p_i,p_j) - \varepsilon / 4\right) s_i b_j. \end{align*} This implies that \begin{align*} &\sum_{i=1}^{n+1}\sum_{j=1}^{n+1} (s_i/s) (b_j/b) \max(p_i,p_j)\\ &\geq \sum_{i=1}^{n+1}\sum_{j=1}^{n+1} s_i b_j \max(p_i, p_j) \cdot \left(1 + \varepsilon^2\right)^{-2}\\ & \geq \left(\mathop{{}\mathbb{E}}_{S\sim F_S^*\atop B\sim F_B^*}[\max(S, B)] +\sum_{i=1}^{n+1}\sum_{j=1}^{n+1} (\varepsilon/4)\, s_ib_j\right)\cdot (1 - \varepsilon^2)^2\\ & \geq 1 + \varepsilon/8, \end{align*} which means that $(s_1/s,s_2/s,\cdots,s_{n+1}/s,b_1/b,b_2/b,\cdots,b_{n+1}/b,r)$ is indeed a valid solution. Next, we give an upper bound of $r$. To start with, notice that \begin{align} \label{eq:upperboundES} \begin{split} \sum_{i=1}^{n+1} s_i p_i &= \sum_{i=1}^n \Pr\Big[S\in [(i-1) \varepsilon^2, i \varepsilon^2)\Big]\cdot \left((i-1)\varepsilon^2 + \varepsilon^2 + \varepsilon /4 \right) + \mathop{{}\mathbb{E}}[S\cdot \mathbbm{1}[S\geq n\varepsilon^2]] + s_{n+1}\cdot(\varepsilon^2 + \varepsilon/4)\\ & \leq \sum_{i=1}^n \mathop{{}\mathbb{E}}\Big[S\cdot\mathbbm{1}\left[S\in [(i-1) \varepsilon^2, i \varepsilon^2)\right] \Big] + \mathop{{}\mathbb{E}}[S\cdot \mathbbm{1}[S\geq n\varepsilon^2]] + \sum_{i=1}^{n+1} s_i (\varepsilon^2 + \varepsilon/4)\\ & \leq \mathop{{}\mathbb{E}}[S] + \varepsilon/2.
\end{split} \end{align} For the term of gain from trade, it holds that \begin{align} \label{eq:upperboundGFT} \begin{split} \sum_{i=1}^t \sum_{j=t+1 }^{n+1} s_i b_j (p_j - p_i) &= \sum_{i=1}^t \sum_{j=t+1}^n s_i b_j(j\varepsilon^2-i\varepsilon^2) + \sum_{i=1}^t s_i b_{n+1} \left(n\varepsilon^2 - (i-1)\varepsilon^2\right)\\ & \leq \sum_{i=1}^t \sum_{j=t+1}^n s_i b_j((j-1)\varepsilon^2-i\varepsilon^2) + \sum_{i=1}^t s_i b_{n+1} \left(n\varepsilon^2 - i\varepsilon^2\right) + \varepsilon^2 (1 + \varepsilon^2)^2\\ & \leq \sum_{i=1}^t \sum_{j=t+1}^{n} \mathop{{}\mathbb{E}}\left[(B-S)\cdot\mathbbm{1}[S\in[(i-1)\varepsilon^2,i\varepsilon^2)]\cdot \mathbbm{1}[B\in[(j-1)\varepsilon^2,j\varepsilon^2)]\right]\\ & \quad + \mathop{{}\mathbb{E}}[(B-S)\cdot \mathbbm{1}[S\in[0,t\cdot\varepsilon^2)]\mathbbm{1}[B\geq t\cdot \varepsilon^2]] + \varepsilon / 2\\ & \leq \mathop{{}\mathbb{E}}[(B-S)\cdot \mathbbm{1}[S\in[0,t\cdot\varepsilon^2)]\mathbbm{1}[B\geq t\cdot \varepsilon^2]] + \varepsilon / 2. \end{split} \end{align} Combining~\eqref{eq:upperboundES} and \eqref{eq:upperboundGFT}, we know that for any $t\in [n + 1]$, \begin{align*} \begin{split} &\sum_{i=1}^{n+1} (s_i/s)p_i + \sum_{i=1}^t\sum_{j=t+1}^{n+1} (s_i/s) (b_j/b)(p_j-p_i)\\ & \leq \sum_{i=1}^{n+1} s_i p_i+ \sum_{i=1}^t \sum_{j=t+1}^{n+1} s_i b_j (p_j - p_i)\\ & \leq \mathop{{}\mathbb{E}}[S] + \mathop{{}\mathbb{E}}[(B-S)\cdot \mathbbm{1}[S\in[0,t\cdot\varepsilon^2)]\mathbbm{1}[B\geq t\cdot \varepsilon^2]] + \varepsilon\\ & \leq \mathcal{W}(\mathcal{I}, t\cdot \varepsilon^2) + \varepsilon. \end{split} \end{align*} Taking the maximum over $t\in [n + 1]$, we then get that \begin{align*} r = \max_{t\in [n+1]}\left(\sum_{i=1}^{n+1} (s_i/s)p_i + \sum_{i=1}^t\sum_{j=t+1}^{n+1} (s_i/s) (b_j/b)(p_j-p_i)\right) \leq \max_{t\in [n+1]}\mathcal{W}(\mathcal{I}, t\cdot \varepsilon^2) + \varepsilon \leq r^* +\varepsilon.
\end{align*} This means that the optimal value of $\mathsf{UpperOp}$ with respect to $\{p_i\}_{i\in[n+1]}$ is at most $r^*+\varepsilon$, and this finishes our proof. Next, we aim to show that for any $\varepsilon > 0$, there exists $0 = p_1 < p_2 <\cdots < p_n$ such that $\mathsf{LowerOp}$ has an optimal value of at least $r^{*} - \varepsilon$. We first present the following lemma that supports our proof. \begin{lemma}\label{lem:generateinstance} For any small enough constant $\varepsilon > 0$, suppose there exists a support set $\{p_i\}_{i\in [n]}$ together with $\{s_i\}_{i\in [n]}$ and $\{b_{i}\}_{i\in [n]}$ that satisfies the following conditions: \begin{itemize} \item For all $i\in [n - 1]$, $p_{i+1}-p_i \leq \varepsilon^3$. \item $1\leq \sum_{i=1}^n s_i, \sum_{i=1}^n b_i \leq 1 + \varepsilon^3$. \item $\sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s_i b_j \geq 1$. \end{itemize} Let $r$ be defined as $\frac{\max_{t\in [n]} \left({\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t - 1} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i)}\right)}{\sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s_i b_j }$. Then, an instance $\mathcal{I}$ exists for which no fixed-price mechanism attains a welfare exceeding $r + \varepsilon$ times the optimal welfare; that is, \[ \frac{\max_{p\in \mathbb{R}}\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})} \leq r+\varepsilon. \] \end{lemma} Before proving Lemma~\ref{lem:generateinstance}, we first illustrate how it completes our proof. Define $n = \left\lceil\varepsilon^{-6}\right\rceil + 1$, and $p_i = (i - 1)\cdot \varepsilon^3$ for $i\in [n]$. Let $(s_1,\cdots,s_n,b_1,\cdots,b_n,r')$ be the optimal solution of the optimization problem $\mathsf{LowerOp}$ with respect to $\{p_i\}_{i\in [n]}$. It suffices to show that there exists an instance $\mathcal{I}=(F_S, F_B)$ such that the optimal approximation ratio of $\mathcal{I}$, i.e.
$\max_{x\in \mathbb{R}}\frac{\mathcal{W}(\mathcal{I}, x)}{\text{OPT-}\mathcal{W}(\mathcal{I})}$, is at most $r' + \varepsilon$. Notice that the definition of $\{p_i\}_{i\in [n]}$ directly implies fulfillment of the first condition in Lemma~\ref{lem:generateinstance}. Furthermore, since it is a valid solution of $\mathsf{LowerOp}$, we can deduce that the second and the third conditions are satisfied. The optimality of $(s_1,\cdots,s_n,b_1,\cdots,b_n,r')$ implies that $r = \frac{\max_{t\in [n]} \left({\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t - 1} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i)}\right)}{\sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s_i b_j }$ is at most $r'$. By applying Lemma~\ref{lem:generateinstance}, we confirm the existence of an instance exhibiting an approximation ratio no greater than $r' + \varepsilon$, thereby completing our proof of Lemma~\ref{lem:optconvergence}. \begin{proof}[Proof of Lemma~\ref{lem:generateinstance}] In this proof, we define $v_1 = \max_{t\in [n]} \left({\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t - 1} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i)}\right)$ and $v_2 = \sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s_i b_j$, and thus $r = \frac{v_1}{v_2}$. We construct the instance as follows. Let $n' = n + \left\lceil\frac{4}{\varepsilon}\right\rceil$, and $s_i = b_i = 0$ for $n< i\le n'$. Now define $\{s'_i\}_{i\in[n']}$ where \[s_{j}' = \sum_{i=\max\left(1, j - \left\lceil\frac{4}{\varepsilon}\right\rceil+1\right)}^{j} s_i / \left\lceil\frac{4}{\varepsilon}\right\rceil.\] It follows that \[\sum_{j=1}^{n'} s'_{j} = \sum_{j = 1}^{n'}\sum_{i=\max\left(1, j - \left\lceil\frac{4}{\varepsilon}\right\rceil+1\right)}^{j} s_i / \left\lceil\frac{4}{\varepsilon}\right\rceil = \sum_{i=1}^{n'} s_i.\] Let $s = \sum_{i=1}^{n'} s'_i$ and $b = \sum_{i=1}^{n'} b_i$ be the normalization factors. The second condition guarantees that $1\leq s,b\leq 1 + \varepsilon^3$.
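The window average $s'_j$ is just a moving average of width $\lceil 4/\varepsilon\rceil$ that spreads each atom of $\{s_i\}$ over many consecutive grid points. A quick 0-indexed sketch (the masses in `s` and the choice $\varepsilon = 0.2$ are made up for illustration) checks the two facts used next: the total mass is preserved, and every smoothed entry is at most the total mass divided by the window length.

```python
import math

eps = 0.2
w = math.ceil(4 / eps)                     # window length, here 20

s = [0.5, 0.0, 0.3, 0.2]                   # hypothetical atoms s_1..s_n
n_prime = len(s) + w                       # n' = n + ceil(4/eps)
s_pad = s + [0.0] * w                      # s_i = 0 for n < i <= n'

# s'_j averages the window s_{j-w+1}, ..., s_j (clipped at the left end)
s_smooth = [sum(s_pad[max(0, j - w + 1): j + 1]) / w for j in range(n_prime)]

mass_in, mass_out = sum(s), sum(s_smooth)
peak = max(s_smooth)
```

Each original atom appears in exactly $w$ windows, so `mass_out == mass_in`; and every window sum is at most the total mass, so `peak <= mass_in / w`, the analogue of the $\varepsilon/3$-flattening bound derived below.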
Moreover, we can also see that \begin{align*} s_j' = \sum_{i=\max\left(1, j - \left\lceil\frac{4}{\varepsilon}\right\rceil+1\right)}^{j} s_i / \left\lceil\frac{4}{\varepsilon}\right\rceil \leq (1 + \varepsilon^3) \Big/ \left\lceil\frac{4}{\varepsilon}\right\rceil \leq \varepsilon /3 \end{align*} holds for all $j\in [n']$. Consider the following instance $\mathcal{I} = (F_S, F_B)$: \begin{align*} \begin{split} S \sim F_{S}, S=\left\{ \begin{aligned} &p_1 + \varepsilon^4 \quad & w.p. \quad s'_1 / s\\ & \cdots & \\ &p_{n'} + \varepsilon^4 & w.p.\quad s'_{n'} / s \end{aligned} \right. \quad\quad\quad B \sim F_{B}, B=\left\{ \begin{aligned} &p_1 \quad & w.p. \quad b_1 / b\\ & \cdots & \\ &p_{n'} & w.p.\quad b_{n'} / b \end{aligned} \right. \end{split} \end{align*} First, it is straightforward to verify that these are valid distributions. We first calculate $\text{OPT-}\mathcal{W}(\mathcal{I})$: \begin{align*} \begin{split} \text{OPT-}\mathcal{W}(\mathcal{I}) &= \sum_{i=1}^{n'} \sum_{j=1}^{n'} \max(p_i + \varepsilon^4, p_j)\cdot (s_i'/s) (b_j/b)\\ & \geq \sum_{i=1}^{n'} \sum_{j=1}^{n'} \max(p_i, p_j)\cdot (s_i'/s) (b_j/b) - \varepsilon^4\\ & \geq \sum_{i=1}^{n'} \sum_{j=1}^{n'} \max(p_i, p_j)\left(\sum_{k=\max\left(1, i - \left\lceil\frac{4}{\varepsilon}\right\rceil+1\right)}^{i} \left(s_k / \left\lceil\frac{4}{\varepsilon}\right\rceil\right)/s\right) \cdot (b_j / b)- \varepsilon^4\\ & \geq \sum_{i=1}^{n'} \sum_{j=1}^{n'} \max(p_i, p_j)\cdot s_i b_j / (bs) - \varepsilon^4\\ & \geq \left(1 - \varepsilon^2\right)\cdot v_2, \end{split} \end{align*}where the last inequality follows from $v_2 \geq 1$. Now consider the optimal fixed-price mechanism for this instance. As we have shown in the proof of Lemma~\ref{lem:fullinfoupper}, the optimal mechanism only needs to choose a price from the support of the discrete distribution.
This implies that \begin{align*} \begin{split} \max_{p\in \mathbb{R}} \mathcal{W}(\mathcal{I}, p) = \max_{t\in [n']}\sum_{i=1}^{n'} (p_i + \varepsilon^4) (s_i' / s) + \sum_{i=1}^t \sum_{j=t+1}^{n'} (s_i'/s) (b_j/b) (p_j - p_i - \varepsilon^4). \end{split} \end{align*} For any $t\in [n']$, one can see that \begin{align} \label{eq:LowerBoundES} \begin{split} \sum_{i=1}^{n'} p_i s'_i &= \sum_{i=1}^{n'}\left(\sum_{k=\max\left(1, i - \left\lceil\frac{4}{\varepsilon}\right\rceil+1\right)}^{i} \left(s_k / \left\lceil\frac{4}{\varepsilon}\right\rceil\right)\right) p_i\\ &\leq \sum_{i=1}^{n'}\left(\sum_{k=\max\left(1, i - \left\lceil\frac{4}{\varepsilon}\right\rceil+1\right)}^{i} \left(s_k / \left\lceil\frac{4}{\varepsilon}\right\rceil\right)\cdot \left(p_k + \left\lceil\frac{4}{\varepsilon}\right\rceil \cdot \varepsilon^3 \right)\right)\\ & \leq \sum_{i=1}^{n'} (p_i + 5\varepsilon^2) s_i, \end{split} \end{align}where the first inequality is because the gap between any two consecutive support points is at most $\varepsilon^3$. For the term of gain from trade, it follows that \begin{align} \label{eq:LowerBoundGFT} \begin{split} \sum_{i=1}^t \sum_{j=t+1}^{n'} s_i' b_j (p_j - p_i) &= \sum_{i=1}^{t-1} \sum_{j=t+1}^{n'} s_i' b_j (p_j - p_i) + \sum_{j=t+1}^{n'} s_{t}' b_j (p_j - p_t)\\ & \leq \sum_{i=1}^{t-1} \sum_{j=t+1}^{n'} \left(\sum_{k=\max\left(1, i - \left\lceil\frac{4}{\varepsilon}\right\rceil+1\right)}^{i} \left(s_k / \left\lceil\frac{4}{\varepsilon}\right\rceil\right)\right) b_j (p_j - p_i) + s_t' \sum_{j=1}^{n'} b_j p_j\\ & \leq \sum_{i=1}^{t-1}\sum_{j=t+1}^{n'} s_i b_j (p_j - p_i) + \varepsilon / 3 \cdot v_2, \end{split} \end{align}where we use the fact that $s_k b_j (p_j - p_i) \leq s_k b_j (p_j - p_k)$ for $j > i \geq k$, $s_t' \leq \varepsilon /3$, and $\sum_{j=1}^{n'} b_j p_j \leq \sum_{i=1}^{n'} \sum_{j=1}^{n'} \max(p_i,p_j) s_i b_j = v_2$.
Again by combining the two inequalities above, we know that \begin{align*} \begin{split} &\sum_{i=1}^{n'} (p_i + \varepsilon^4) (s_i' / s) + \sum_{i=1}^t \sum_{j=t+1}^{n'} (s_i'/s) (b_j/b) (p_j - p_i - \varepsilon^4)\\ &\leq \sum_{i=1}^{n'} p_i s_i' + \sum_{i=1}^t \sum_{j=t+1}^{n'} s_i' b_j (p_j - p_i) + \varepsilon^4\\ & \leq \sum_{i=1}^{n'} (p_i + 5\varepsilon^2) s_i + \sum_{i=1}^{t-1}\sum_{j=t+1}^{n'} s_i b_j (p_j - p_i) + \varepsilon / 3\cdot v_2 + \varepsilon^4\\ & \leq \sum_{i=1}^{n'} p_i s_i + \sum_{i=1}^{t-1} \sum_{j=t+1}^{n'} s_i b_j (p_j - p_i) + \varepsilon / 2 \cdot v_2, \end{split} \end{align*}where we apply~\eqref{eq:LowerBoundES} and \eqref{eq:LowerBoundGFT} in the second inequality. The last inequality holds since $v_2 \geq 1$ and $\sum_{i=1}^{n'} s_i \leq 1 + \varepsilon^3$. Now taking the maximum over $t\in [n']$, we then get that \begin{align*} \max_{p\in \mathbb{R}} \mathcal{W}(\mathcal{I}, p) &= \max_{t\in [n']}\sum_{i=1}^{n'} (p_i + \varepsilon^4) (s_i' / s) + \sum_{i=1}^t \sum_{j=t+1}^{n'} (s_i'/s) (b_j/b) (p_j - p_i - \varepsilon^4)\\ & \leq \max_{t\in [n']}\sum_{i=1}^{n'} p_i s_i + \sum_{i=1}^{t-1} \sum_{j=t+1}^{n'} s_i b_j (p_j - p_i) + \varepsilon / 2\cdot v_2\\ & = \max_{t\in [n]}\sum_{i=1}^{n} p_i s_i + \sum_{i=1}^{t-1} \sum_{j=t+1}^{n} s_i b_j (p_j - p_i) + \varepsilon / 2\cdot v_2\\ & = v_1 + \frac{\varepsilon}{2}\cdot v_2, \end{align*}where the second equality follows from $s_i = b_i = 0$ when $i > n$. Therefore, on instance $\mathcal{I}$, it holds that \begin{align*} \frac{\max_{p\in \mathbb{R}}\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})} \leq\frac{v_1 + \frac{\varepsilon}{2}\cdot v_2}{(1-\varepsilon^2)\cdot v_2} \leq r+\varepsilon. \end{align*} \end{proof} \section{Proof of Theorem~\ref{thm:one-sided}} \label{subsec:one-sided-appendix} As discussed in Section~\ref{subsec:one-sided}, our intention is to prove Theorem \ref{thm:one-sided} utilizing the minimax theorem.
However, we are unable to directly apply the theorem due to the infinite-dimensional nature of the problem. Fortunately, with the assistance of the discretization lemma (Lemma \ref{lem:discretization}), we are able to transform the problem into a finite-dimensional one, and subsequently prove Theorem \ref{thm:one-sided} by employing the minimax theorem. The detailed proof is presented below. Let $r^{*}$ be the optimal approximation ratio in the full prior information setting. Our goal is to demonstrate that, given solely some one-sided information, i.e., only the distribution of the seller or the buyer, there still exists some fixed-price mechanism that obtains an $r^{*}$ fraction of the optimal welfare. We initiate the proof by considering the scenario in which only the buyer's distribution $F_B$ is known. Given the distribution $F_B$, we can once again assume that $\mathop{{}\mathbb{E}}_{B\sim F_B}[B] = 1$. For any sufficiently small constant $\varepsilon > 0$, let $n$ be defined as $\left\lceil \frac{1}{\varepsilon^8}\right\rceil + 1$, and consider the support set $\{p_i\}_{i\in [n]}$, where $p_i = (i - 1) \cdot \varepsilon^4$. As stated in Lemma~\ref{lem:discretization}, the construction of $\{b_i\}_{i\in [n]}$ (or $\{s_i\}_{i\in [n]}$) relies exclusively on $F_B$ (or $F_S$), so we can first construct $\{b_i\}_{i\in [n]}$ according to \Cref{lem:discretization} with the $\{p_i\}_{i\in [n]}$ defined above.
Let us now consider the following min-max optimization problem $\mathsf{BuyerFull}$: \begin{align} \min_{s_1,s_2\cdots, s_n} \max_{\omega_1,\omega_2,\cdots,\omega_n} \quad & \sum_{t=1}^n \omega_t \left(\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i)\right) - (r^* - 2\varepsilon) \cdot \sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j)s_i b_j\nonumber \\ \textsf{s.t.} \quad & s_i \geq 0 & \forall i \in [n]\nonumber\\ & \sum_{i=1}^{n - 1} s_i \leq 1 \quad \text{ and } \quad \sum_{i=1}^n s_i \geq 1 \quad \text{ and }\quad s_n \leq c\nonumber\\ & \omega_i \geq 0 & \forall i \in [n]\nonumber \nonumber \\ & \sum_{i=1}^n \omega_i = 1 \nonumber \end{align} Here we allow $c$ to be an arbitrary non-negative number. We enforce an upper bound on $s_n$ to make sure that $\{s_i\}_{i\in [n]}$ lies in a compact set. We demonstrate that this min-max optimization problem $\mathsf{BuyerFull}$ has a non-negative optimal value for any constant $c \geq 0$. \begin{lemma}\label{lem:buyerfull} The optimal objective value of $\mathsf{BuyerFull}$ is non-negative for any constant $c \geq 0$. \end{lemma} \begin{proof} To prove the non-negativity for any constant $c \geq 0$, we simply eliminate the constraint $s_n \leq c$ and show that the program has a non-negative objective value even after dropping this upper bound constraint, as removing a constraint on $s_n$ can only decrease the objective value. We prove the lemma by way of contradiction. Given $\{s_i\}_{i\in [n]}$, it is clear that \[\max_{\omega \in \Delta(n)} \sum_{t=1}^n \omega_t \left(\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i)\right) = \max_{t\in [n]} \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i),\]where $\Delta(n)$ is the probability simplex over $[n]$.
Suppose $\mathsf{BuyerFull}$ has a negative optimal objective value; then there exists $\{s^*_i\}_{i\in [n]}$ such that \begin{equation}\label{eq:buyerfullop1} \max_{t\in [n]} \sum_{i=1}^n s^*_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s^*_ib_j(p_j-p_i) < (r^* - 2\varepsilon) \cdot \sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s^*_i b_j. \end{equation} Notice that $\{b_i\}_{i\in [n]}$ is defined according to Lemma~\ref{lem:discretization}, which ensures that \[1 \leq \sum_{i = 1}^n b_i \leq 1 + \frac{\mathop{{}\mathbb{E}}[B]}{p_n} \leq 1 + \varepsilon^3. \] Let us examine $s^*_n$ first. Note that \[\max_{t\in [n]} \sum_{i=1}^n s^*_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s^*_ib_j(p_j-p_i) \geq \sum_{i=1}^{n}s^*_i p_i,\] and \[\sum_{i=1}^n \sum_{j=1}^n \max(p_i, p_j) s^*_i b_j \leq \sum_{i=1}^n s^*_ip_i + \sum_{i=1}^n b_ip_i = \sum_{i=1}^n s^*_ip_i + 1.\] If {$s^*_n \geq {\varepsilon^3}$}, then $\sum_{i=1}^n s^*_i p_i \geq s^*_n p_n \geq \frac{1}{\varepsilon}$ is sufficiently large, while $r^{*}$ does not exceed $0.75$. This means that if {$s^*_n \geq \varepsilon^3$}, \eqref{eq:buyerfullop1} can never hold. Thus, we get that $\sum_{i=1}^n s^*_i \leq \sum_{i=1}^{n-1}s^*_i + s^*_n \leq 1 + {\varepsilon^3}$ and {$\sum_{i=1}^n s^*_i \geq 1$}. Finally, it is easy to see that $\sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s^*_i b_j \geq \sum_{j=1}^n p_j b_j = 1$, and the gap between $p_i$ and $p_{i+1}$ is at most $\varepsilon^4$. Hence, we may invoke Lemma~\ref{lem:generateinstance}, which provides the existence of an instance $\mathcal{I}$ such that \[ \frac{\max_{p\in \mathbb{R}}\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})} \leq \frac{\max_{t\in [n]} \sum_{i=1}^n s^*_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s^*_ib_j(p_j-p_i)}{\sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s^*_i b_j} + \varepsilon < r^*- \varepsilon.\] However, this contradicts the fact that $r^{*}$ is the optimal approximation ratio in the full prior information setting.
Thus, we prove that the optimal value of $\mathsf{BuyerFull}$ is non-negative. \end{proof} Notice that the min-max optimization problem $\mathsf{BuyerFull}$ is bilinear with respect to $\{s_i\}_{i\in [n]}$ and $\{\omega_i\}_{i\in [n]}$. Furthermore, it is clear that the feasible solution spaces for both ${\omega}$ and ${s}$ are convex and compact. Consequently, by employing the minimax theorem, the following max-min optimization problem $\mathsf{BuyerOnly}$ possesses an identical optimal objective value to that of $\mathsf{BuyerFull}$, which is non-negative as we proved above. \begin{align} \max_{\omega_1,\omega_2,\cdots,\omega_n} \min_{s_1,s_2\cdots, s_n} \quad & \sum_{t=1}^n \omega_t \left(\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i)\right) - (r^* - 2\varepsilon) \cdot \sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j)s_i b_j\nonumber \\ \textsf{s.t.} \quad & s_i \geq 0 & \forall i \in [n]\nonumber\\ & \sum_{i=1}^{n - 1} s_i \leq 1 \quad \text{ and } \quad \sum_{i=1}^n s_i \geq 1 \quad \text{ and } \quad s_n \leq c\nonumber\\ & \omega_i \geq 0 & \forall i \in [n]\nonumber \nonumber \\ & \sum_{i=1}^n \omega_i = 1 \nonumber \end{align} Consider the following mechanism, given solely the buyer's distribution $F_B$. Without loss of generality we can assume that $\mathop{{}\mathbb{E}}_{B\sim F_B}[B] = 1$. We generate the discretized support sets $\{p_i\}_{i\in [n]}$ and $\{b_i\}_{i\in [n]}$ in the same manner as previously described. Subsequently, we solve the max-min optimization problem $\mathsf{BuyerOnly}$ with respect to $\{p_i\}_{i\in [n]}$ and $\{b_i\}_{i\in [n]}$ while setting $c$ to be $10$. Denote the optimal solution of {the max player of} this max-min optimization problem as $\left\{\omega_i^{*}\right\}_{i\in [n]}$. We then select $p_i$ as the price with probability $\omega_i^*$. We now demonstrate that this mechanism attains an approximation ratio of no less than $r^* - 2\varepsilon$.
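Once discretized, the exchange of $\min$ and $\max$ above is the standard minimax step for a bilinear game, and the resulting value is computable by linear programming. As a toy illustration (both players on probability simplices with a made-up payoff matrix, rather than the paper's polytope with the $\sum_{i=1}^{n-1} s_i \le 1$, $\sum_{i=1}^n s_i \ge 1$, $s_n \le c$ constraints), the max-min value of $\omega^{\top} M s$ can be read off from the usual zero-sum LP:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(M):
    """Max-min value of omega^T M s with omega, s on probability simplices."""
    T, N = M.shape
    # variables: omega_1..omega_T, v ; maximize v  <=>  minimize -v
    c = np.zeros(T + 1)
    c[-1] = -1.0
    # for every pure column strategy i:  v - sum_t omega_t M[t, i] <= 0
    A_ub = np.hstack([-M.T, np.ones((N, 1))])
    b_ub = np.zeros(N)
    # omega must be a probability vector
    A_eq = np.hstack([np.ones((1, T)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * T + [(None, None)]   # v is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:T], res.x[-1]

# Matching pennies as a sanity check: value 0, uniform optimal strategy.
M = np.array([[1.0, -1.0], [-1.0, 1.0]])
omega, v = solve_zero_sum(M)
```

Handling the paper's $s$-polytope instead of the simplex amounts to dualizing the inner minimization in the same way, which only changes the LP constraints, not the approach.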
We establish this by a proof by contradiction. Suppose there exists a seller's distribution $F_S$ such that our mechanism has an approximation ratio lower than $r^* - 2\varepsilon$ on the instance $\mathcal{I} = (F_S, F_B)$. We generate the set $\{s_i\}_{i\in [n]}$ as described in Lemma~\ref{lem:discretization} with respect to $\{p_i\}_{i\in [n]}$ and the distribution $F_S$, and we also choose $\{b_i\}_{i\in [n]}$ corresponding to the sets of numbers described in Lemma~\ref{lem:discretization}. Therefore, the following hold: \begin{align} \sum_{i=1}^n s_i \geq 1 \text{ and } \sum_{i=1}^{n - 1} s_i &\leq 1.\label{eq:one-sidedfaq} \\ \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i) &\leq \mathcal{W}(\mathcal{I}, p_t)\label{eq:one-sided1} \\ \sum_{i = 1}^n \sum_{j=1}^n \max(p_i, p_j) s_i b_j &\geq \text{OPT-}\mathcal{W}(\mathcal{I}).\label{eq:one-sided2} \end{align} Now let us consider $s_n$. Suppose $s_n \geq 10$. Notice that Lemma~\ref{lem:discretization} demonstrates that \[\sum_{i = 1}^n s_i \leq 1 + \frac{\mathop{{}\mathbb{E}}[S]}{p_n}.\] Combining this with the fact that $s_n \geq 10$, we obtain that \[\mathop{{}\mathbb{E}}[S] \geq (s_n - 1)\cdot p_n \geq 9\varepsilon^{-4}.\] Therefore, on this instance, any fixed-price mechanism has an approximation ratio of at least \[\frac{\mathop{{}\mathbb{E}}[S]}{\mathop{{}\mathbb{E}}[\max(S, B)]} \geq \frac{\mathop{{}\mathbb{E}}[S]}{\mathop{{}\mathbb{E}}[S] + \mathop{{}\mathbb{E}}[B]} \geq \frac{9\varepsilon^{-4}}{9\varepsilon^{-4} + 1} \geq r^{*},\] where the last inequality holds since $r^{*}$ is at most $0.75$, while $\varepsilon^{-4}$ is large enough. However, this contradicts the assumption that the mechanism has an approximation ratio lower than $r^{*} - 2\varepsilon$ on this instance, and thus we prove that $s_n \leq 10$.
Combining this inequality with~\eqref{eq:one-sidedfaq}, we see that $\{s_i\}_{i\in [n]}$ is a feasible solution to $\mathsf{BuyerOnly}$ when $c = 10$. Given that our mechanism achieves a fraction less than $r^* - 2\varepsilon$ of the optimal welfare, it follows that \begin{equation} \sum_{t=1}^n \omega_t^* \mathcal{W}(\mathcal{I}, p_t) < (r^* - 2\varepsilon) \text{OPT-}\mathcal{W}(\mathcal{I}). \label{eq:one-sided3} \end{equation} Combining~\eqref{eq:one-sided1}, \eqref{eq:one-sided2} and \eqref{eq:one-sided3}, we have that \[\sum_{t=1}^n \omega_t^* \left(\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i)\right) -(r^* - 2\varepsilon) \sum_{i = 1}^n \sum_{j=1}^n \max(p_i, p_j) s_i b_j < 0.\] Observe that $\{\omega_i^*\}$ is the optimal solution of the max-min optimization problem $\mathsf{BuyerOnly}$, and $\{s_i\}_{i\in [n]}$ is a feasible solution for the inner minimization problem with $c = 10$. This contradicts the fact that $\mathsf{BuyerOnly}$ has a non-negative optimal objective value. Consequently, we deduce that the mechanism utilizing solely the buyer's distribution $F_B$ achieves an approximation ratio of at least $r^* - 2\varepsilon$. Letting $\varepsilon \rightarrow 0$, we conclude that when only the buyer's distribution is known, the optimal approximation ratio is identical to the optimal ratio obtainable with full prior information. Next, we establish the case where only the seller's distribution $F_S$ is known, by essentially the same argument. Without loss of generality, we assume that the seller's distribution $F_S$ satisfies $\mathop{{}\mathbb{E}}_{S\sim F_S} [S] = 1$. We define $n$ as $\left\lceil \frac{1}{\varepsilon^{20}} \right\rceil+ 1$ and $p_i = (i - 1)\cdot\varepsilon^{10}$ for some small enough constant $\varepsilon > 0$. We construct a set of numbers $\{s_i\}_{i\in [n]}$ as described in Lemma~\ref{lem:discretization}.
Given $\{p_i\}_{i\in [n]}$ and $\{s_i\}_{i\in [n]}$, we define the following min-max optimization problem $\mathsf{SellerFull}$: \begin{align} \min_{b_1,b_2\cdots, b_n} \max_{\omega_1,\omega_2,\cdots,\omega_n} \quad & \sum_{t=1}^n \omega_t \left(\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i)\right) - (r^* - 2\varepsilon) \cdot \sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j)s_i b_j\nonumber \\ \textsf{s.t.} \quad & b_i \geq 0 & \forall i \in [n]\nonumber\\ & \sum_{i=1}^{n - 1} b_i \leq 1 \quad \text{ and } \quad \sum_{i=1}^n b_i \geq 1 \quad \text{ and } \quad b_n \leq c\nonumber\\ & \omega_i \geq 0 & \forall i \in [n]\nonumber \\ & \sum_{i=1}^n \omega_i = 1 \nonumber \end{align} Here $c > 0$ is an arbitrary positive constant. We first argue that $\mathsf{SellerFull}$ has a non-negative optimal objective value. \begin{lemma}\label{lem:SellerFull} The optimal value of $\mathsf{SellerFull}$ is non-negative for any constant $c > 0$. \end{lemma} \begin{proof} The proof is nearly identical to the proof of Lemma~\ref{lem:buyerfull}, with the only distinction being the argument used to upper bound the value of $b_n$. Again, we first drop the upper bound constraint on $b_n$ and prove the claim by way of contradiction. Suppose that $\mathsf{SellerFull}$ has a negative optimal objective value. Then there exists a set $\{b^*_i\}_{i\in [n]}$ such that \begin{equation}\label{eq:sellerfullop1} \max_{t\in [n]} \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib^*_j(p_j-p_i) < (r^* - 2\varepsilon) \cdot \sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s_i b^*_j. \end{equation} Observe that $\{s_i\}_{i\in [n]}$ are generated according to Lemma~\ref{lem:discretization}, so we get that \[1\leq \sum_{i = 1}^n s_i \leq 1 + \frac{\mathop{{}\mathbb{E}}[S]}{p_n} \leq 1 + \varepsilon^3.\] Now let us consider $b^*_n$. Suppose $b^*_n \geq \varepsilon^3$. Let $t'$ be $\left\lceil\frac{1}{\varepsilon^{11}}\right\rceil + 2$.
From property~\eqref{eq:discretizationNew} in Lemma~\ref{lem:discretization}, it is clear that \[\sum_{i = 1}^{t' - 1} s_i \geq \Pr_{S\sim F_S}[S< p_{t' - 1}] = 1 - \Pr_{S\sim F_S}[S \geq p_{t' - 1}]\geq 1 - \varepsilon,\] where the last inequality follows from Markov's inequality as $p_{t'-1}\geq \frac{1}{\varepsilon}$. Therefore, it holds that \begin{align*} \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t'-1}\sum_{j=t'+1}^n s_ib^*_j(p_j-p_i) & =1 + \sum_{j = t' + 1}^n b_j^*p_j\cdot \left(\sum_{i=1}^{t' - 1} s_i\right) - \sum_{i=1}^{t' - 1} s_i p_i\left(\sum_{j=t'+1}^n b^*_j\right)\\ &\geq 1 + \sum_{j = t' + 1}^n b^*_jp_j\cdot \left(1 - \varepsilon\right) - \left(1 + b_n^*\right)\\ & \geq \left(1 - \varepsilon\right)\cdot \sum_{j = t' + 1}^n b^*_jp_j - b_n^*\\ & \geq \left(1 - 2\varepsilon\right) \cdot \sum_{j=t'+1}^n b_j^* p_j. \end{align*} The first inequality follows from the fact that $\sum_{j=t'+1}^n b_j^* = \sum_{j=t'+1}^{n - 1} b_j^* + b_n^* \leq 1 + b_n^* $, and the last inequality holds because $p_n$ is at least $\varepsilon^{-10}$, and thus is significantly larger than $1$. For the optimal welfare, we can upper bound it as follows. \begin{align*} \begin{split} \sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s_ib_j^* & \leq \sum_{i=1}^n p_i s_i + \sum_{j=1}^n p_j b_j^*\\ & \leq 1 + \sum_{j=1}^{t'}p_{t'}b_j^* + \sum_{j=t'+1}^{n} p_j b_j^*\\ & \leq 1 + 2\varepsilon^{-1} + \sum_{j = t' + 1}^n b_j^*p_j. \end{split} \end{align*} However, notice that $\sum_{j=t'+1}^n b^*_j p_j \geq b^*_n p_n \geq \varepsilon^{-7}$, which is significantly larger than $1 + 2\varepsilon^{-1}$, while $r^{*}$ is at most $0.75$. Thus, \eqref{eq:sellerfullop1} could never be true when $b^*_n \geq \varepsilon^3$; hence $b^*_n \leq \varepsilon^3$ and $1\leq \sum_{i = 1}^n b^*_i \leq 1 + \varepsilon^3$.
Furthermore, it is also easy to see that $\sum_{i=1}^n \sum_{j = 1}^n \max(p_i,p_j) s_i b^*_j \geq \sum_{i=1}^n p_i s_i = 1$, and that the gap $p_{i + 1} - p_{i}$ is at most $\varepsilon^3$ for any $i\in [n - 1]$. Consequently, we can apply Lemma~\ref{lem:generateinstance}, which establishes the existence of an instance $\mathcal{I}$ such that: \[ \frac{\max_{p\in \mathbb{R}}\mathcal{W}(\mathcal{I}, p)}{\text{OPT-}\mathcal{W}(\mathcal{I})} \leq \frac{\max_{t\in [n]} \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib^*_j(p_j-p_i)}{\sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j) s_i b^*_j} + \varepsilon < r^*- \varepsilon.\] However, this result contradicts the fact that $r^{*}$ represents the optimal approximation ratio within the full prior information setting, and thus demonstrates that the optimal value of $\mathsf{SellerFull}$ must be non-negative. \end{proof} Given that the optimal value of $\mathsf{SellerFull}$ is non-negative, we are almost done with our proof. Similarly, by applying the minimax theorem, we can deduce that the following max-min optimization problem $\mathsf{SellerOnly}$ has the same \emph{non-negative} optimal objective value as $\mathsf{SellerFull}$ for any constant $c > 0$. \begin{align} \max_{\omega_1,\omega_2,\cdots,\omega_n} \min_{b_1,b_2\cdots, b_n} \quad & \sum_{t=1}^n \omega_t \left(\sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i)\right) - (r^* - 2\varepsilon) \cdot \sum_{i=1}^n \sum_{j=1}^n \max(p_i,p_j)s_i b_j\nonumber \\ \textsf{s.t.} \quad & b_i \geq 0 & \forall i \in [n]\nonumber\\ & \sum_{i=1}^{n - 1} b_i \leq 1 \quad \text{ and } \quad \sum_{i=1}^n b_i \geq 1 \quad \text{ and }\quad b_n \leq c \nonumber\\ & \omega_i \geq 0 & \forall i \in [n]\nonumber \\ & \sum_{i=1}^n \omega_i = 1 \nonumber \end{align} We next apply almost the same argument as in the case where the buyer's distribution is known. The only difference is how we deal with the case where $b_n > c$.
Similarly, let us consider the following mechanism where only the seller's distribution $F_S$ is known. Without loss of generality we can assume that $\mathop{{}\mathbb{E}}_{S\sim F_S}[S] = 1$. We use $F_S$ to generate the set $\{s_i\}_{i\in [n]}$ according to Lemma~\ref{lem:discretization}, and solve $\mathsf{SellerOnly}$ with respect to $\{p_i\}_{i\in [n]}$ and $\{s_i\}_{i\in [n]}$ with $c$ set to $\varepsilon^{-1}$ to get the optimal solution of the max player, denoted by $\left\{\omega_i^*\right\}_{i\in [n]}$. The mechanism simply chooses the price $p_i$ with probability $\omega_i^*$. We now aim to show that this mechanism achieves an approximation ratio of at least $r^* - 5\varepsilon$. Let us again prove this by contradiction. Suppose there exists a buyer's distribution $F_B$ such that this mechanism has an approximation ratio lower than $r^* - 5\varepsilon$ on the instance $\mathcal{I} = (F_B, F_S)$. According to Lemma~\ref{lem:discretization}, we can correspondingly obtain a set $\{b_i\}_{i\in [n]}$ such that the following properties hold: \begin{align} \sum_{i = 1}^n b_i \geq 1 \text{ and } \sum_{i=1}^{n-1} b_i &\leq 1. \label{eq:seleronesided-1}\\ \sum_{i=1}^n s_i p_i+\sum_{i=1}^{t-1}\sum_{j=t+1}^n s_ib_j(p_j-p_i) &\leq \mathcal{W}(\mathcal{I}, p_t)\label{eq:seleronesided-2} \\ \sum_{i = 1}^n \sum_{j=1}^n \max(p_i, p_j) s_i b_j &\geq \text{OPT-}\mathcal{W}(\mathcal{I}).\label{eq:seleronesided-3} \end{align} It is clear that if $b_n \leq \varepsilon^{-1}$ holds, we can straightforwardly use an argument similar to the buyer's case to show that $\{b_i\}_{i\in [n]}$ is indeed a feasible solution for the inner minimization problem with $c = \varepsilon^{-1}$, and thus reach a contradiction to the fact that the max-min optimization problem has a non-negative objective value. Therefore, it remains to show that $b_n \leq \varepsilon^{-1}$.
To prove this, consider the following specific set $\left\{b_i'\right\}_{i\in [n]}$, where $b_{i}' = 0$ for $i< n$ and $b_n' = 1$. It is clear that $\left\{b_i'\right\}_{i\in [n]}$ is a feasible solution for the inner minimization problem. As $\{\omega_i^*\}_{i\in [n]}$ is the optimal solution for the max player and attains a non-negative objective value, we obtain that \begin{align}\label{eq:sellerside-4} 1 + \sum_{t=1}^{n-1}\omega_t^* \sum_{i=1}^{t-1}s_i (p_n - p_i) - (r^* - 2\varepsilon)\sum_{i=1}^n s_i \cdot p_n \geq 0. \end{align} Now suppose $b_n > \varepsilon^{-1}$. Then we can see that \begin{align}\begin{split} \label{eq:sellerside-5} \sum_{i=1}^n s_i p_i + \sum_{t=1}^n \omega_t^* \sum_{i=1}^{t-1} \sum_{j=t+1}^n (p_j - p_i) s_i b_j &\geq b_n \sum_{t=1}^{n-1} \omega_t^* \sum_{i=1}^{t-1} (p_n - p_i) s_i\\ & \geq (1 - \varepsilon) b_n \left(1 + \sum_{t=1}^{n - 1} \omega_t^* \sum_{i=1}^{t-1}(p_n - p_i) s_i\right)\\ & \geq (1 - \varepsilon) (r^* - 2\varepsilon)\cdot b_n \sum_{i=1}^n s_i p_n \end{split} \end{align} The second inequality holds because $\sum_{t=1}^{n-1}\omega_t^* \sum_{i=1}^{t-1}s_i (p_n - p_i) \geq (r^*-2\varepsilon) \sum_{i=1}^n s_i p_n -1 \geq 0.5 p_n \geq 0.5 \varepsilon^{-10} \gg \varepsilon^{-1}$, and the last inequality follows from~\eqref{eq:sellerside-4}. Furthermore, notice that $\sum_{i=1}^{n-1} b_i \leq 1$ and $\sum_{i=1}^{n} s_i \leq 1 + \frac{\mathop{{}\mathbb{E}}[S]}{p_n} \leq 1 + \varepsilon^{10}$. It follows that \begin{align}\label{eq:sellerside-6} \sum_{i=1}^n \sum_{j=1}^{n - 1} s_i b_j \max(p_i, p_j) \leq \left(\sum_{i=1}^n s_i\right) \cdot \left(\sum_{j=1}^{n-1} b_j\right) \cdot p_n \leq 2 \varepsilon^{-10}. \end{align} However, $b_n > \varepsilon^{-1}$ and $\sum_{i=1}^n s_i \geq 1$ imply that \begin{align}\label{eq:sellerside-7} b_n \sum_{i=1}^n s_i p_n \geq \varepsilon^{-11}.
\end{align} Combining~\eqref{eq:sellerside-6} and \eqref{eq:sellerside-7}, we get that \begin{align}\label{eq:sellerside-8} b_n \sum_{i=1}^n s_i p_n \geq (1 - 2\varepsilon) \sum_{i=1}^n \sum_{j=1}^n s_i b_j \max(p_i, p_j). \end{align} We further relax the right-hand side of~\eqref{eq:sellerside-5} using~\eqref{eq:sellerside-8} and the fact that $r^{*} \leq 1$: \begin{align}\label{eq:sellerside-9} \sum_{i=1}^n s_i p_i + \sum_{t=1}^n \omega_t^* \sum_{i=1}^{t-1} \sum_{j=t+1}^n (p_j - p_i) s_i b_j &\geq (r^{*} - 5\varepsilon) \sum_{i=1}^n \sum_{j=1}^n s_i b_j \max(p_i, p_j). \end{align} Putting~\eqref{eq:seleronesided-2}, \eqref{eq:seleronesided-3} and \eqref{eq:sellerside-9} together, we obtain that \[\sum_{t=1}^n \omega_t^* \mathcal{W}(\mathcal{I}, p_t) \geq (r^* - 5\varepsilon) \text{OPT-}\mathcal{W}(\mathcal{I}).\] However, this contradicts the assumption that this mechanism achieves an approximation ratio lower than $r^{*} - 5\varepsilon$ on this instance. Thus, we show that $b_n \leq \varepsilon^{-1}$ always holds. Finally, letting $\varepsilon \rightarrow 0$, we complete the proof of Theorem~\ref{thm:one-sided}. \section{Proof of Theorem~\ref{thm:PartialMecha}} \label{appendix:partial} \paragraph{Seller's distribution mean $\mathop{{}\mathbb{E}}[S]$ is known.} We start by addressing the case in which only the mean of the seller's value, $\mathop{{}\mathbb{E}}[S]$, is known. The mechanism $\mathcal{M}_S$ takes $\mathop{{}\mathbb{E}}[S]$ as input and randomly picks a number $x\sim U[0,3]$. Then $\mathcal{M}_S$ sets the price to $x\cdot \mathop{{}\mathbb{E}}[S]$. As discussed in Section~\ref{sec:partial}, in order to demonstrate that $\mathcal{M}_S$ achieves an approximation ratio of $\frac23$, it suffices to verify that $\inf_{\mathcal{I} = (F_S,F_B)\atop \mathop{{}\mathbb{E}}[S] = 1}\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right]$ is non-negative.
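The mechanism $\mathcal{M}_S$ is simple enough to simulate directly. The sketch below (helper names and the test instance are our own) estimates the welfare ratio $\mathop{{}\mathbb{E}}[\mathrm{ALG}]/\mathrm{OPT}$ by Monte Carlo for discrete value distributions given as lists of (value, probability) pairs:

```python
import random

def mechanism_ms_price(mean_s, rng):
    """M_S: post the price x * E[S] with x drawn uniformly from [0, 3]."""
    return rng.uniform(0.0, 3.0) * mean_s

def estimate_ratio(seller, buyer, trials=200_000, seed=0):
    """Monte-Carlo estimate of E[ALG]/OPT, where ALG is the realized
    welfare of the fixed-price outcome and OPT = E[max(S, B)]."""
    rng = random.Random(seed)
    mean_s = sum(v * q for v, q in seller)

    def draw(dist):
        r, acc = rng.random(), 0.0
        for v, q in dist:
            acc += q
            if r <= acc:
                return v
        return dist[-1][0]

    alg = 0.0
    for _ in range(trials):
        s, b0 = draw(seller), draw(buyer)
        price = mechanism_ms_price(mean_s, rng)
        # Trade happens iff seller value <= price <= buyer value.
        alg += b0 if s <= price <= b0 else s
    opt = sum(qs * qb * max(vs, vb) for vs, qs in seller for vb, qb in buyer)
    return (alg / trials) / opt
```

For instance, for the two-point seller $\{0.5, 1.5\}$ (each with probability $\tfrac12$, mean $1$) against a point buyer at $2$, the exact ratio is $\frac{17/12}{2} \approx 0.708 > \frac23$, and the estimate concentrates around that value.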
We first aim to demonstrate that, in order to verify non-negativity, it is sufficient to consider two-point distributions for the seller and single-point distributions for the buyer. The intuition for focusing solely on such instances is rather straightforward. Consider the simple case in which the support is discrete. In this case, fixing the buyer's (or the seller's) distribution, the linear program to find the worst seller's (or buyer's) distribution only has $2$ (or $1$) non-trivial constraints. This implies that there is an optimal solution, i.e., a worst-case distribution, that is supported on $2$ (or $1$) points. However, given that the support is, in fact, continuous, a more rigorous argument is necessary to validate this assertion. We now present it below. Fix any distribution $F_S$ of the seller, and define $g(x)$ as the contribution to the objective when the buyer's value is $x$, i.e., \[g(x) = \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_S\atop S\sim F_S}\left[S + \mathbbm{1}[S\leq q \leq x]\cdot (x - S) - \frac23 \max(S, x)\right].\] This means that for any distribution $F_B$, it holds that \[\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] = \int g(x) \,{d} F_B(x),\] where $F_B(x)$ is the c.d.f. of the distribution $F_B$ and $\mathcal{I} = (F_S, F_B)$ represents the instance. Therefore, if there exists some distribution $F_B$ such that $\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right]$ is negative, then there exists some $x_0\in \mathbb{R}_{\geq 0}$ such that $g(x_0) < 0$. Let $\mathcal{I}'$ represent the instance in which the seller's distribution remains $F_S$ and the buyer's distribution is a single-point distribution at $x_0$. It is clear that $\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\Big[\mathrm{ALG}(q, \mathcal{I}') - \frac23\cdot \mathrm{OPT}(\mathcal{I}')\Big] = g(x_0) < 0$.
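As an aside, $g$ admits a closed form for a discrete $F_S$: with $\mathop{{}\mathbb{E}}[S]=1$ the price is uniform on $[0,3]$, so $\Pr[S \leq q \leq x] = \max(0, \min(x,3) - S)/3$. A small sketch (the sample distribution and the grid are our own choices) evaluates $g$ on a grid; the worst single-point buyer sits at the grid minimum, which stays non-negative here, consistent with the theorem:

```python
def g(x, seller, price_cap=3.0, target=2 / 3):
    """g(x): expected ALG minus (2/3) OPT when the buyer's value is x and
    the seller's value follows `seller`, a list of (value, prob) pairs
    with mean 1; the price is uniform on [0, price_cap]."""
    total = 0.0
    for s, q in seller:
        trade_prob = max(0.0, min(x, price_cap) - s) / price_cap
        total += q * (s + trade_prob * (x - s) - target * max(s, x))
    return total

# Worst-case single-point buyer = the minimizer of g over a grid on [0, 6].
seller = [(0.5, 0.5), (1.5, 0.5)]   # a sample two-point F_S with mean 1
worst = min(g(i * 0.01, seller) for i in range(601))
```

For this $F_S$ the minimum is attained near $x = 2$ and is strictly positive.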
This means that for any distribution $F_S$, \[\inf_{F_B}\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\big[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\big]\] is non-negative if and only if for any single-point distribution $F_B$, $\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right]$ is non-negative. This suggests that it suffices to examine all single-point distributions for the buyer. Now, fix any single-point distribution $F_B$ of the buyer, and define $h(x)$ as the contribution when the seller's value is $x$, i.e., \[h(x) = \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_S\atop B\sim F_B}\left[x + \mathbbm{1}[x\leq q\leq B]\cdot\left(B - x\right) - \frac23\max(x, B)\right].\] As the price $q$ is sampled from a continuous distribution $\mathcal{M}_S$, and $F_B$ is a single-point distribution, we know that $h(x)$ is continuous. What's more, for any distribution $F_S$, it holds that \[\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] = \int h(x) \,{d} F_S(x) = \mathop{{}\mathbb{E}}_{S\sim F_S}\big[h(S)\big].\] To show that two-point distributions can always achieve the minimum, define $r^{*}$ to be the infimum of the following optimization problem: \begin{align}\label{eq:rstaropt} \underset{y,z,p\in \mathbb{R}_{\geq 0}}{\inf} \quad &p\cdot h(y) + (1 - p)\cdot h(z) \nonumber\\ \text{subject to:} \quad & py + (1 - p) z = 1 \nonumber \\ & 0\leq p\leq 1. \end{align} Now define $k$ as $\sup_{y\in [0, 1)} \frac{r^{*} - h(y)}{1 - y}$. We first confirm that this supremum exists, i.e., that $k$ is a finite number. Indeed, if we define $p_y$ as $\frac{1}{2 - y}$, it holds that $p_y \cdot y + (1 - p_y)\cdot 2 = 1$. Note that $r^*$ is the infimum of the optimization problem~\eqref{eq:rstaropt}, and this directly implies that $p_y\cdot h(y) + (1-p_y)\cdot h(2) \geq r^*$.
After multiplying both sides of the inequality by $\frac{1}{1-y}$ and reformulating it, we obtain that \[\frac{r^* - h(y)}{1 - y} \leq \frac{h(2) - h(y)}{2 - y}.\] This shows that $\frac{r^* - h(y)}{1 - y}$ is at most $\frac{h(2) - h(y) }{2-y} \leq |h(2)| + |h(y)|$. Furthermore, given that $h(y)$ is a continuous function on $[0,1]$, it follows that $|h(y)|$ is bounded within the same interval. Thus, $\frac{r^* - h(y)}{1 - y}$ is upper bounded by a constant for any $y\in [0, 1)$, thereby confirming the existence of the supremum. For any small enough constant $\varepsilon > 0$, let $L(x)$ be the line that goes through the point $(1, r^{*} - \varepsilon)$ with slope $k - \varepsilon$, i.e., $L(x) = (k - \varepsilon)(x - 1) + r^{*} - \varepsilon$. We first argue that $L(x)$ is a lower bound of $h(x)$. \begin{lemma} For any $x \in \mathbb{R}_{\geq 0}$, $L(x) \leq h(x)$. \end{lemma} \begin{proof} First, it is clear that \[L(1) = r^{*} - \varepsilon < r^{*} \leq h(1),\] and this implies that the inequality holds when $x = 1$. As $k$ is the supremum of $\frac{r^{*} - h(y)}{1 - y}$ over the interval $[0, 1)$, we know that there exists a $y'\in [0, 1)$ such that $\frac{r^{*} - h(y')}{1 - y'}$ is at least $k - \varepsilon$. Suppose there exists some $x > 1$ such that $h(x) < \frac{r^{*} - h(y')}{1 - y'} (x - 1) + r^{*}$. Notice that $\left(y', x, \frac{x - 1}{x - y'}\right)$ is a valid solution for the optimization problem~\eqref{eq:rstaropt}, and it holds that \[h\left(y'\right)\frac{x - 1}{x - y'} + h\left(x\right) \frac{1 - y'}{x - y'} < h\left(y'\right)\frac{x - 1}{x - y'} + \left(\frac{r^{*} - h(y')}{1 - y'} (x - 1) + r^{*}\right)\frac{1 - y'}{x - y'} = r^{*},\] which contradicts the fact that $r^{*}$ is the infimum of~\eqref{eq:rstaropt}.
This implies that for any $x > 1$, \[h(x) \geq \frac{r^{*} - h(y')}{1 - y'} (x - 1) + r^{*} \geq (k - \varepsilon) (x - 1) + r^{*} - \varepsilon = L(x).\] Finally, we consider the case where $x \in [0, 1)$. According to the definition of $k$, it follows that $\frac{r^{*} - h(x)}{1 - x} \leq k$ for all $x\in [0, 1)$. This implies that $h(x) \geq k(x - 1) + r^{*}$ for all $x\in [0, 1)$. As $x\in [0, 1)$, $x - 1$ lies within the interval $[-1, 0)$. Thus \[h(x)\geq k(x - 1) + r^{*} \geq (k - \varepsilon)(x - 1) + r^{*} - \varepsilon = L(x).\] \end{proof} Therefore, for any distribution $F_S$ satisfying $\mathop{{}\mathbb{E}}_{S\sim F_S}[S] = 1$, we can see that \[\mathop{{}\mathbb{E}}_{S\sim F_S}[h(S)]\geq \mathop{{}\mathbb{E}}_{S\sim F_S}[L(S)] = (k - \varepsilon)\cdot \mathop{{}\mathbb{E}}_{S\sim F_S}[(S - 1)] + r^{*} - \varepsilon = L(1) = r^{*} - \varepsilon.\] This means that $\mathop{{}\mathbb{E}}_{S\sim F_S}\left[h(S)\right]$ is at least $r^{*} - \varepsilon$ for any distribution $F_S$ such that $\mathop{{}\mathbb{E}}_{S\sim F_S}[S] = 1$. What's more, by the definition of $r^{*}$, we can also see that there always exists a two-point distribution so that $\mathop{{}\mathbb{E}}_{S} [h(S)]$ is at most $r^{*} + \varepsilon$. Taking $\varepsilon \rightarrow 0$, we conclude that it suffices to prove that $\mathop{{}\mathbb{E}}_{S} [h(S)]$ is non-negative for any two-point distribution $F_S$. To sum up, we have argued that it suffices to show that \[\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right]\geq 0 \] for all instances $\mathcal{I}$ that have a single-point buyer distribution and a two-point seller distribution. In the proof below, we assume that the value of the buyer is always $y$, and the seller's value is $x$ with probability $p$, and $\frac{1 - xp}{1 - p}$ with probability $1 - p$, where $x \in [0, 1]$ and $p\in [0, 1)$.
Notice here the mean of the seller is scaled to $1$, and $\frac{1 - xp}{1 - p}$ is at least $1$. Recall that the price $q$ is chosen uniformly from the interval $[0, 3]$. We now break the problem into the following cases, and argue that $\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right]$ is non-negative in each of them: \begin{itemize} \item $y\leq x$. Notice that when $y\leq x$, both $\mathrm{ALG}(q, \mathcal{I})$ and $\mathrm{OPT}(\mathcal{I})$ are $1$ since the trade never happens. \item $x < y \leq \frac{1 - xp}{1 - p}$. In this case, we can see that $\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_S}\left[\mathrm{ALG}(q, \mathcal{I})\right] = 1 + (y-x)p\cdot \frac{\min(y, 3) - x}{3}$ and $\mathrm{OPT}(\mathcal{I}) = 1 + yp - xp$. Thus, we need to show that \[\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] = \frac13\left(1 + (y - x)\cdot p \cdot \left(\min(y,3) - x - 2\right)\right)\] is non-negative. When $y\geq 3$, by some simple calculations, it holds that \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] = \frac13\left(1 + (y - x) p (1 - x)\right) \geq \frac13, \end{align*} where the last inequality follows from the non-negativity of $1-x$, $y - x$ and $p$. When $y < 3$, we can see that \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] = \frac13\left(1 + (y - x) p (y - x - 2)\right) \geq 0, \end{align*} where the last inequality holds because $(y - x)(y - x - 2) \geq -1$ and $p \in [0, 1]$. \item $x \leq \frac{1 - xp}{1 - p} \leq y$.
In this case, {\small $\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_S}\left[\mathrm{ALG}(q, \mathcal{I})\right] = 1 + (y-x)p \frac{\min(y, 3) - x}{3} + \left(y - \frac{1 - xp}{1 - p}\right)\cdot (1 - p)\cdot \frac{\min(y, 3) - \min\left(\frac{1 - xp}{1 - p}, 3\right)}{3}$} and $\mathrm{OPT}(\mathcal{I}) = y$. We first assume $y \geq \frac{1-xp}{1-p}\geq 3$. Notice that $\frac{1-xp}{1-p}\geq 3$ implies that $(3 - x) p \geq 2$. Thus, \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] &= 1 + \frac13(y - x) p (3 - x) - \frac23 y\\ & \geq \frac23 (y - x) + 1 - \frac23 y = 1 - \frac23 x \geq \frac13. \end{align*} We now consider the case where $y \geq 3 \geq \frac{1-xp}{1-p}$. We know that \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] &=1 + \frac13(y - x) p (3 - x) + \frac13 \left(y - \frac{1 - xp}{1 - p}\right)(1 - p)\left(3 - \frac{1 - xp}{1 - p}\right)- \frac23 y\\ & = \frac{p(x^2-2x+3)-2}{3(1-p)} + 1\geq \frac13, \end{align*} where the second equality follows from the identity $\frac13p (3 - x) + \frac13(1 - p)\left(3 - \frac{1 - xp}{1 - p}\right) = \frac23$ and the last inequality holds because $x^2 - 2x + 3\geq 2$ and $p\in [0, 1]$. Finally, when $y\leq 3$, it holds that \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{S}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] &= 1 + \frac13 p (y-x)^2 + \frac13 (1 - p)\left(y-\frac{1 - xp}{1 - p}\right)^2 - \frac23 y\\ & \geq \frac{p(x - 1)^2}{3(1-p)} \geq 0, \end{align*} where the first inequality follows from the fact that $1 + \frac13 p (y-x)^2 + \frac13 (1 - p)\left(y-\frac{1 - xp}{1 - p}\right)^2 - \frac23 y$ is minimized at $y = 2$. \end{itemize} Consequently, we have shown that $\mathcal{M}_S$ achieves an approximation ratio of $\frac23$.
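The case analysis above can be cross-checked numerically. The sketch below (grid ranges and step sizes are our own choices) computes $\mathop{{}\mathbb{E}}[\mathrm{ALG}] - \frac23\,\mathrm{OPT}$ in closed form for a two-point seller $\{x, \frac{1-xp}{1-p}\}$ with mean $1$ against a point buyer at $y$, and confirms non-negativity over a grid:

```python
def surplus(x, p, y, cap=3.0, target=2 / 3):
    """E[ALG - (2/3) OPT] for a seller valued x w.p. p and (1-xp)/(1-p)
    w.p. 1-p (mean 1), a point-mass buyer at y, and a price drawn
    uniformly from [0, cap]."""
    hi = (1 - x * p) / (1 - p)
    alg = opt = 0.0
    for s, q in ((x, p), (hi, 1 - p)):
        trade_prob = max(0.0, min(y, cap) - s) / cap
        alg += q * (s + trade_prob * (y - s))
        opt += q * max(s, y)
    return alg - target * opt

# Sweep x in [0, 1], p in [0, 0.95], y in [0, 8].
worst = min(
    surplus(i / 20, j / 100, k / 10)
    for i in range(21)
    for j in range(96)
    for k in range(81)
)
```

The minimum over this grid is $0$ up to floating-point rounding, attained at the degenerate point $x = 1$, $y = 2$, matching the tight case $\frac{p(x-1)^2}{3(1-p)} = 0$ in the analysis.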
\paragraph{Buyer's distribution mean $\mathop{{}\mathbb{E}}[B]$ is known.} We now prove that $\mathcal{M}_B$ also has an approximation ratio of $\frac23$. Following the same argument, we can see that it suffices to prove that \[\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{B}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] \] is non-negative for all instances that have a single-point distribution for the seller and a two-point distribution for the buyer. Similarly, we assume that the seller's value is always $y$, and the buyer's value is $x$ with probability $p$ and $\frac{1 - xp}{1 - p}$ with probability $1 - p$, where $x\in [0,1]$ and $p\in [0, 1)$. Recall that the price $q$ is chosen from $[0, 2]$ according to the following cumulative distribution function $F(x)$: \begin{align*} \begin{split} F(x)=\left\{ \begin{aligned} &\frac{x}{3-3x} \quad & 0 \leq x\leq \frac12&\\ &(4x - 1) / 3 \quad & \frac12 < x\leq \frac23\\ &(x + 1) / 3 \quad & \frac23 < x \leq 2\\ & 1 \quad & x\geq 2 \end{aligned} \right. \end{split} \end{align*} Now let us consider the following cases. \begin{itemize} \item $x \leq \frac{1 - xp}{1 - p} \leq y$. This case is trivial, as both $\mathrm{ALG}(q, \mathcal{I})$ and $\mathrm{OPT}(\mathcal{I})$ are $y$. \item $x \leq y \leq \frac{1 - xp}{1 - p}$. In this case, we know that $\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_B} \left[\mathrm{ALG}(q, \mathcal{I})\right] = y + (1 - p)\left(\frac{1 - xp}{1 - p} - y\right)\cdot\left(F\left(\frac{1 - xp}{1 - p}\right) - F(y)\right)$ and $\mathrm{OPT}(\mathcal{I})$ is $1 - xp + yp$.
Thus \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{B}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] & = y \left(1 + F(y) - F\left(\frac{1 - xp}{1 - p}\right)\right) + (1 - xp + yp) \left(F\left(\frac{1 - xp}{1 - p}\right) - F(y) - \frac23\right) \end{align*} \begin{enumerate} \item We first examine the scenario where $\frac{1 - xp}{1 - p} \geq 2$. Under these conditions, $F\left(\frac{1 - xp}{1 - p}\right)$ equals $1$. Thus we get that in this scenario, \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{B}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] & = y F(y) + (1 - xp + yp) \left(\frac13- F(y)\right) \end{align*} When $y \leq \frac{1}{2}$, the function $F(y)$ is upper bounded by $\frac{1}{3}$, which means that $\frac13 - F(y)$ is non-negative. Thus, we get that \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{B}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] & \geq yF(y) + y\left(\frac13 - F(y)\right) = \frac13 y \geq 0, \end{align*} where the first inequality holds because $\frac{1 - xp}{1 - p} \geq y$ implies $1 - xp + yp \geq y$. When $y \geq \frac{1}{2}$, $\frac{1}{3} - F(y)$ is always non-positive. Since $y - x \geq 0$, $(1 - xp + yp) \left(\frac{1}{3}- F(y)\right)$ is minimized when $p = 1$ for any fixed $x$ and $y$.\footnote{{Although in our definition we do not allow $p$ to be $1$, we can nevertheless lower bound $(1 - xp + yp) \left(\frac{1}{3}- F(y)\right)$ by setting $p$ to $1$.}} As a result, it can be concluded that \begin{align*} \mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{B}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] & \geq y F(y) + (1 - x + y) \left(\frac13- F(y)\right)\\ & = \left(F(y) - \frac13\right)\cdot x + \frac13 (y + 1) - F(y)\\ & \geq \frac13 (y + 1) - F(y) \geq 0.
\end{align*} The second inequality arises from $F(y) \geq \frac13$ when $y\geq \frac12$, and the last inequality follows from the observation that $\frac13(y + 1)\geq F(y)$ for all $y\in \mathbb{R}_{\geq 0}$. \item We now proceed to the case where $\frac{1 - xp}{1 - p} < 2$. As $x \leq 1$, we know that $\frac{1 - xp}{1 - p}$ is never less than $1$, and consequently, $F\left(\frac{1 - xp}{1 - p}\right) = \left(\frac{1 - xp}{1 - p} + 1\right)/3$. Depending on the value of $y$, there are three distinct cases, which we present below. It suffices to demonstrate that all these terms have non-negative values. We postpone the proof of their non-negativity to Appendix~\ref{subsec:nonnega}. \begin{table}[H] \begin{center} \begin{tabularx}{0.77\textwidth}{@{}lML@{}} \toprule {Value of $y$} & \multicolumn{1}{c}{$\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_{B}}\left[\mathrm{ALG}(q, \mathcal{I}) - \frac23\cdot \mathrm{OPT}(\mathcal{I})\right] = $} & \multicolumn{1}{l}{}\\ \midrule $y\in \left[0, \frac12\right)$ & \frac13\left(pxy + \frac{p(1-x)(2-p-px)}{1 - p}- \frac{p(1-x)}{1 - y}\right) & eq:Case11 \\ $y\in \left[\frac12, \frac23\right)$ & \frac13\left(\frac{(1-px)^2}{1-p} + (5px-4)y+4(1-p)y^2\right) & eq:Case12 \\ $y\in \left[\frac23, 2\right)$ & \frac{y (1 + y) + p^2 (y - x + 2) (y - x) - p \left( y \left(3 + 2(y - x)\right)-2\right)-1}{3 (1 - p)} & eq:Case13 \\ \bottomrule \end{tabularx} \end{center} \end{table} \end{enumerate} \item $y\leq x \leq \frac{1 - xp}{1 - p}$. We get that $\mathop{{}\mathbb{E}}_{q\sim \mathcal{M}_B}\left[\mathrm{ALG}(q, \mathcal{I})\right] = y + (1 - p)\left(\frac{1 - xp}{1 - p} - y\right)\cdot\left(F\left(\frac{1 - xp}{1 - p}\right) - F(y)\right) + p(x-y)\left(F(x)-F(y)\right)$ and $\mathrm{OPT}(\mathcal{I})$ is exactly $1$.
Consequently, we must demonstrate that for any $y \leq x \leq \frac{1-xp}{1-p}$, the following term is always non-negative: \begin{align*} y + (1 - p)\left(\frac{1 - xp}{1 - p} - y\right)\cdot\left(F\left(\frac{1 - xp}{1 - p}\right) - F(y)\right) + p(x-y)\left(F(x)-F(y)\right) - \frac23. \end{align*} To establish this, we consider the intervals in which the values of $x$, $y$ and $\frac{1-xp}{1-p}$ fall. Depending on these values, $F(x)$, $F(y)$ and $F\left(\frac{1-xp}{1-p}\right)$ have different expressions, and we prove the non-negativity of each resulting term. We present the expressions in Table~\ref{table:Discussion} and provide the proofs in \Cref{sec:nng for discussion table}. \begin{table}[h] \centering \caption{Different Cases with Different Values} \label{table:Discussion} \begin{tabular}{|c|c|cc|} \hline & & \multicolumn{2}{c|}{$y\in \left[0,\frac12\right)$} \\ \hline \multirow{4}{*}{$\frac{1-xp}{1-p}\geq 2$} & $x\in \left[0,\frac12\right)$ & \multicolumn{2}{c|}{\inlineeq{\frac{1}{3} - p(x - y)\left(1 - \frac{x}{3(1 - x)}\right) - \frac{y}{3}}\label{eq:Case21}} \\ \cline{2-4} & $x\in \left[\frac12,\frac23\right)$ & \multicolumn{2}{c|}{\inlineeq{\frac{1}{3}\left(1- 4p\left(1 - x\right)\left(x - y\right) - y\right)}\label{eq:Case22}} \\ \cline{2-4} & $x\in \left[\frac23, 1\right]$ & \multicolumn{2}{c|}{\inlineeq{\frac13 \left(1 - p (2 - x) (x - y) - y\right)}\label{eq:Case24}} \\ \hline \multirow{4}{*}{$\frac{1-xp}{1-p}< 2$} & $x\in \left[0,\frac12\right)$ & \multicolumn{2}{c|}{\inlineeq{\frac{p\left(1 + p(x - y)(1 - x - x^2) + y - x(1 + x)y - 4(1 - x)x\right)}{3(1 - p)(1 - x)}}\label{eq:Case31}} \\ \cline{2-4} & $x\in \left[\frac12,\frac23\right)$ & \multicolumn{2}{c|}{\inlineeq{\frac{p\left(1 + (4 - 3p)x^2 + 2(1 - p)y - x(4 + 3y - p(2 + 3y))\right)}{3(1 - p)}}\label{eq:Case32}} \\ \cline{2-4} &$x\in \left[\frac23, 1\right]$ & \multicolumn{2}{c|}{\inlineeq{\frac{p (1 - x)^2}{3 (1 - p)}}\label{eq:Case34}} \\ \hline & & \multicolumn{1}{c|}{$y\in \left[\frac12,\frac23\right)$} & $y\in \left[\frac23, 1\right]$\\ \hline \multirow{4}{*}{$\frac{1-xp}{1-p}\geq 2$} & $x\in \left[0,\frac12\right)$ & \multicolumn{1}{c|}{\xmark} & \xmark \\ \cline{2-4} & $x\in \left[\frac12,\frac23\right)$ & \multicolumn{1}{c|}{\inlineeq{\frac13 \left(2 - 4 p (1 - x) (x - y) + y (4y - 5)\right)}\label{eq:Case23}} & \xmark \\ \cline{2-4} & $x\in \left[\frac23, 1\right]$ & \multicolumn{1}{c|}{\inlineeq{\frac13 \left(2 - p (2 - x) (x - y) + y (4y - 5)\right)}\label{eq:Case25}} & \inlineeq{\frac13 \big(y^2 - p(2-x)(x-y)\big) }\label{eq:Case26} \\ \hline \multirow{4}{*}{$\frac{1-xp}{1-p}< 2$} & $x\in \left[0,\frac12\right)$ & \multicolumn{1}{c|}{\xmark} & \xmark \\ \cline{2-4} & $x\in \left[\frac12,\frac23\right)$ & \multicolumn{1}{c|}{\begin{footnotesize}\inlineeq{\frac{2p^2x\left(1 - \frac{3}{2}x\right) + (1-2y)^2 - 4px\left(1 - x\right) + (6-3x-4y)py - 2y\left(1 - \frac{3}{2}x\right)p^2}{3(1 - p)}}\label{eq:Case33}\end{footnotesize}} & \xmark \\ \cline{2-4} & $x\in \left[\frac23, 1\right]$ & \multicolumn{1}{c|}{\inlineeq{\frac{1 + p (x - 2) x}{3(1 - p)} - \frac{1}{3}\left(4 y - 4 y^2\right)}\label{eq:Case35}} & \inlineeq{\frac{y(1 + y) -1 + p(2 + (x - 2)x - y - y^2)}{3(1 - p)}}\label{eq:Case36} \\ \hline \end{tabular} \end{table} \end{itemize} \paragraph{$\frac{2}{3}$ upper bounds the approximation ratio.} We now prove that $\frac23$ is an upper bound when we only know the mean of the buyer's value or of the seller's value. We first consider the case when the mean of the seller's value is known. We introduce two instances with $\mathop{{}\mathbb{E}}[S]=1$. As only $\mathop{{}\mathbb{E}}[S]$ is known, any mechanism must choose the price from the same distribution for the two different instances.
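As a sanity check (not part of the proof), the non-negativity of every entry of Table~\ref{table:Discussion} on its stated domain can be spot-checked numerically. A minimal Python sketch, where the grid resolution, the tolerance, and the helper name \texttt{ratio} are our choices, and $p$ ranges over the interior of $[0,1)$:

```python
# Numerical spot-check: each entry of the case table should be non-negative
# on its stated domain, under the standing constraints y <= x and p in [0, 1).
import itertools

def ratio(p, x):
    # the threshold quantity (1 - xp)/(1 - p) that selects the table row
    return (1 - x * p) / (1 - p)

# (name, expression, x-interval, y-interval, requires ratio >= 2?)
cases = [
    ("21", lambda p, x, y: 1/3 - p*(x - y)*(1 - x/(3*(1 - x))) - y/3, (0, 0.5), (0, 0.5), True),
    ("22", lambda p, x, y: (1 - 4*p*(1 - x)*(x - y) - y)/3, (0.5, 2/3), (0, 0.5), True),
    ("24", lambda p, x, y: (1 - p*(2 - x)*(x - y) - y)/3, (2/3, 1), (0, 0.5), True),
    ("31", lambda p, x, y: p*(1 + p*(x - y)*(1 - x - x*x) + y - x*(1 + x)*y
                              - 4*(1 - x)*x) / (3*(1 - p)*(1 - x)), (0, 0.5), (0, 0.5), False),
    ("32", lambda p, x, y: p*(1 + (4 - 3*p)*x*x + 2*(1 - p)*y
                              - x*(4 + 3*y - p*(2 + 3*y))) / (3*(1 - p)), (0.5, 2/3), (0, 0.5), False),
    ("34", lambda p, x, y: p*(1 - x)**2 / (3*(1 - p)), (2/3, 1), (0, 0.5), False),
    ("23", lambda p, x, y: (2 - 4*p*(1 - x)*(x - y) + y*(4*y - 5))/3, (0.5, 2/3), (0.5, 2/3), True),
    ("25", lambda p, x, y: (2 - p*(2 - x)*(x - y) + y*(4*y - 5))/3, (2/3, 1), (0.5, 2/3), True),
    ("26", lambda p, x, y: (y*y - p*(2 - x)*(x - y))/3, (2/3, 1), (2/3, 1), True),
    ("33", lambda p, x, y: (2*p*p*x*(1 - 1.5*x) + (1 - 2*y)**2 - 4*p*x*(1 - x)
                            + (6 - 3*x - 4*y)*p*y - 2*y*(1 - 1.5*x)*p*p) / (3*(1 - p)),
     (0.5, 2/3), (0.5, 2/3), False),
    ("35", lambda p, x, y: (1 + p*(x - 2)*x)/(3*(1 - p)) - (4*y - 4*y*y)/3, (2/3, 1), (0.5, 2/3), False),
    ("36", lambda p, x, y: (y*(1 + y) - 1 + p*(2 + (x - 2)*x - y - y*y))/(3*(1 - p)),
     (2/3, 1), (2/3, 1), False),
]

grid = [i / 25 for i in range(1, 25)]            # p, x, y in (0, 1), step 0.04
for name, expr, (xl, xr), (yl, yr), big in cases:
    for p, x, y in itertools.product(grid, grid, grid):
        if not (xl <= x <= xr and yl <= y <= yr and y <= x):
            continue
        if big != (ratio(p, x) >= 2):            # match the row's regime
            continue
        assert expr(p, x, y) >= -1e-9, (name, p, x, y)
print("all table entries non-negative on the grid")
```

Agreement on the grid is of course only evidence, not a proof; the proofs are given in the appendix below.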
Given any mechanism, let $P_S$ be the distribution of prices when $\mathop{{}\mathbb{E}}[S] = 1$, and let $r_S$ be the approximation ratio of this mechanism. The two instances are as follows: \begin{itemize} \item The values for both agents are deterministic. The seller's value is always $1$, and the buyer's value is always $H$, which should be thought of as a large number that approaches infinity. In order to obtain an $r_S$ fraction of the optimal welfare in this instance when $H\rightarrow \infty$, the condition $\Pr_{p\sim P_S}[p \geq 1] \geq r_S$ must be satisfied. \item The seller's value is $0$ with probability $1-\varepsilon$ and $1/\varepsilon$ with probability $\varepsilon$, and the buyer has a deterministic value of $1 - \varepsilon$. As $\varepsilon\rightarrow 0$, it is clear that $\mathrm{OPT}(\mathcal{I})$ has a limit of $2$, while $\mathop{{}\mathbb{E}}\left[\mathrm{ALG}(q,\mathcal{I})\right]$ has a limit of $1 + \Pr_{p\sim P_S}[0\le p < 1]$. Hence, $\Pr_{p\sim P_S}[0\leq p < 1] \geq 2r_S - 1$ must hold to achieve an $r_S$-approximation in this instance. \end{itemize} Notice that $\Pr_{p\sim P_S}[p \geq 1] + \Pr_{p\sim P_S}[0\leq p < 1] = 1$, which implies that $3r_S - 1\leq 1$. Since this inequality holds for all mechanisms, the approximation ratio of any mechanism given only $\mathop{{}\mathbb{E}}[S]$ is at most $\frac23$. Finally, we prove the upper bound when only the mean of the buyer's value is known. Similarly, we again present two instances with identical $\mathop{{}\mathbb{E}}[B] = 1$ and demonstrate that no mechanism can achieve an approximation ratio exceeding $\frac23$ in both instances. For any mechanism that only uses the information of $\mathop{{}\mathbb{E}}[B]$, we let $P_B$ be the distribution of prices when $\mathop{{}\mathbb{E}}[B] = 1$, and $r_B$ be the approximation ratio of this mechanism.
The two instances can be described as follows: \begin{itemize} \item The values for both agents are deterministic. The seller's value is always $0$, and the buyer's value is always $1$. In this instance, the trade must happen with probability at least $r_B$, which means that $\Pr_{p\sim P_B}[0\leq p\leq 1] \geq r_B$. \item The buyer's value is $0$ with probability $1-\varepsilon$ and $1/\varepsilon$ with probability $\varepsilon$, and the seller has a deterministic value of $1 + \varepsilon$. If we let $\varepsilon$ approach $0$, it is clear that $\mathrm{OPT}(\mathcal{I})$ has a limit of $2$, while $\mathop{{}\mathbb{E}}\left[\mathrm{ALG}(q,\mathcal{I})\right]$ has a limit of $1 + \Pr_{p\sim P_B}[p > 1]$. Thus, $\Pr_{p\sim P_B}[p > 1] \geq 2r_B - 1$ must hold to achieve an $r_B$-approximation in this instance. \end{itemize} Again, $\Pr_{p\sim P_B}[p > 1] + \Pr_{p\sim P_B}[0\leq p \leq 1] = 1$ implies that $r_B$ is at most $\frac23$. This concludes our proof. \subsection{Proof of Non-negativity} \label{subsec:nonnega} We demonstrate the non-negativity of each term individually. \begin{itemize} \item[\ref{eq:Case11}:] It suffices to prove that $pxy + \frac{p(1-x)(2-p-px)}{1 - p}- \frac{p(1-x)}{1 - y}$ is always non-negative. We can see that \begin{align*} \begin{split} pxy + \frac{p(1-x)(2-p-px)}{1 - p}- \frac{p(1-x)}{1 - y} &\geq \frac{p(1-x)(2-p-px)}{1 - p}- \frac{p(1-x)}{1 - y}\\ & \geq \frac{p(1-x)(2-p-px)}{1 - p}- 2p(1-x)\\ & = {\frac{p(1-x)(p-px)}{1-p}}\\ &{={\frac{p^2(1-x)^2}{1-p}}}\\ & \geq 0, \end{split} \end{align*} where the first inequality follows from the non-negativity of $pxy$, and the second inequality holds because the term $\frac{p(1-x)}{1 - y}$ is maximized at $y = \frac12$, since the value of $y$ is at most $\frac12$.
The last inequality holds as $p<1$. \item[\ref{eq:Case12}:] Our goal is to demonstrate the non-negativity of $4(1-p)y^2 + (5px-4)y + \frac{(1-px)^2}{1-p}$. {Note that \begin{align*} \begin{split} 4(1-p)y^2 + (5px-4)y + \frac{(1-px)^2}{1-p} &= \frac{\left(2(1-p)y-(1-px)\right)^2}{(1-p)}+pxy\\ & \geq 0. \end{split} \end{align*} The inequality follows from the non-negativity of $pxy$ and the fact that $p<1$. } \item[\ref{eq:Case13}:] We need to show that $y (1 + y) + p^2 (y - x + 2) (y - x) - p \left( y \left(3 + 2(y - x)\right)-2\right)-1$ is non-negative. We first rearrange the terms to obtain a quadratic in $y$, i.e., \begin{align*} &y (1 + y) + p^2 (y - x + 2) (y - x) - p \left( y \left(3 + 2(y - x)\right)-2\right)-1 \\&= (1-p)^2 y^2 + (1-p)\left(1 - 2 p (1 - x)\right)y+ p \left(2 - p (2 - x) x\right) - 1. \end{align*} Without any restriction on $y$, the minimum is attained at $y = \frac{2p(1 - x) - 1}{2(1-p)}$. We first consider the case when $\frac{2p(1 - x) - 1}{2(1-p)} \geq \frac23$. From this, we get that $-6px \geq 7 - 10p$. As $-6px$ is always non-positive, $7 - 10p$ must also be at most $0$, meaning that $p\geq \frac{7}{10}$. Taking $y$ to be the minimum point, we get that \begin{align*} (1-p)^2 y^2 + (1-p)\left(1 - 2 p (1 - x)\right)y+ p \left(2 - p (2 - x) x\right) - 1 &\geq -p^2 + 3p-px- \frac54\\ & \geq -p^2 + 3p + \frac76 - \frac{10}{6}p - \frac54\\ & \geq 0, \end{align*} where the last inequality follows from the fact that the quadratic function $-p^2 + 3p + \frac76 - \frac{10}{6}p - \frac54$ remains non-negative over the interval $\left[\frac{7}{10},1\right]$. When $\frac{2p(1 - x) - 1}{2(1-p)} < \frac23$, as $y$ is at least $\frac23$, the minimum is attained at $y = \frac23$.
In this case, it holds that \begin{align*} (1-p)^2 y^2 + (1-p)\left(1 - 2 p (1 - x)\right)y+ p \left(2 - p (2 - x) x\right) - 1 &\geq \frac{1}{9} \left( (16 - 30x + 9x^2) p^2 + (-2 + 3x)4p +1 \right)\\ &{= \frac{1}{9}\left((3x-2)(3x-8)p^2 + 4(3x-2)p+1\right)}\\ & {= \frac{1}{9}\left((3x-2)(3x-8)\left(p+\frac{2}{3x-8}\right)^2 + 1-\frac{4(3x-2)}{3x-8}\right)}\\ & {= \frac{1}{9}\left((3x-2)(3x-8)\left(p+\frac{2}{3x-8}\right)^2 +\frac{9x}{8-3x}\right)} \geq 0. \end{align*} {The final inequality holds because $x\in\left[0,\frac{2}{3}\right]$.} \subsection{Non-negativity of the Terms in~\Cref{table:Discussion}}\label{sec:nng for discussion table} \item[\ref{eq:Case21}:] Notice that $\left(1 - \frac{x}{3(1 - x)}\right)$ remains non-negative for $x\in [0, \frac12]$. Consequently, the term is minimized at $p = 1$: \begin{align*} \frac{1}{3} - p(x - y)\left(1 - \frac{x}{3(1 - x)}\right) - \frac{y}{3} &\geq \frac{1}{3} - (x - y)\left(1 - \frac{x}{3(1 - x)}\right) - \frac{y}{3}\\ & = \frac{1}{3(1-x)} \left(1 + 2y - x\left(4-4x + 3y\right)\right){=\frac{1}{3(1-x)} \left((1-2x)^2+2y-3xy\right)}. \end{align*}{The final term above is non-negative because $x \leq \frac12$ and $y\geq 0$ imply that $2y - 3xy \geq 0$.} \item[\ref{eq:Case22}:] We need to verify the non-negativity of $1- 4p\left(1 - x\right)\left(x - y\right) - y$. Notice that \begin{align*} 1- 4p\left(1 - x\right)\left(x - y\right) - y&\geq 1- 4\left(1 - x\right)\left(x - y\right) - y\\ & \geq 1- 4\left(1 - \left(1 + y\right) / 2\right)\left( \left(1 + y\right)/ 2{-y}\right) - y\\ & = y (1 - y) \geq 0, \end{align*} where the first inequality holds because the term is minimized at $p = 1$, and the second because the term is minimized at $x = (1 + y)/2$. \item[\ref{eq:Case23}:] We aim to show that $2 - 4 p (1 - x) (x - y) + y (4y - 5)$ is non-negative.
\begin{align*} 2 - 4 p (1 - x) (x - y) + y (4y - 5) &\geq 2 - 4(1 - x) (x - y) + y (4y - 5)\\ & \geq 2 - 4\left(1 - \left(1 + y\right) / 2\right) \left(\left(y+1\right)/2 - y\right) + y (4y - 5)\\ & = {3\left(y-\frac{1}{2}\right)^2+\frac{1}{4}}\geq 0, \end{align*} where the first and second inequalities hold for the same reasons as in \eqref{eq:Case22}: the term is minimized at $p = 1$ and $x = (1 + y) / 2$. \item[\ref{eq:Case24}:] We aim to show that $1 - p (2 - x) (x - y) - y$ is at least $0$. It is clear that \begin{align*} \begin{split} 1 - p (2 - x) (x - y) - y &\geq 1 - (2 - x)(x - y) - y \\ & \geq 1 - (2 - 1)\cdot (1 - y) - y\\ & = 0, \end{split} \end{align*} where the first inequality once again follows from the fact that this term is minimized at $p = 1$. For the second inequality, note that without any constraint on $x$, the term would be minimized at $x = (2 + y) / 2 \geq 1$. However, since the value of $x$ is at most $1$, this term actually attains its minimum at $x = 1$. \item[\ref{eq:Case25}:] It is sufficient to prove that $2 - p (2 - x) (x - y) + y (4y - 5)$ is always non-negative. Observe that if we consider $y$ as a constant, this term is nearly identical to \eqref{eq:Case24}, up to constants. Consequently, it also reaches its minimum at $p = 1$ and $x = 1$: \begin{align*} 2 - p (2 - x) (x - y) + y (4y - 5) & \geq 2 - (2 - 1) (1 - y) + y (4y - 5)\\ & = \left(1-2y\right)^2 \geq 0. \end{align*} \item[\ref{eq:Case26}:] The term for which we aim to ensure non-negativity is $y^2 - p(2 - x) (x - y)$. Similar to \eqref{eq:Case24}, the term attains its minimum at $p = 1$ and $x = 1$: \begin{align*} y^2 - p(2 - x) (x - y) &\geq y^2 - (1 - y)\\ & = y^2 + y - 1 \geq 0, \end{align*} where the last inequality follows from the non-negativity of $y^2 + y - 1$ on $\left[\frac23, 1\right]$.
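The case proofs above lean on a few exact polynomial identities: the completed square in \eqref{eq:Case12}, the quadratic-in-$y$ rearrangement in \eqref{eq:Case13}, and the evaluations at $p = 1$ with $x = (1+y)/2$ or $x = 1$ in \eqref{eq:Case23} and \eqref{eq:Case25}. These can be checked mechanically; a minimal Python sketch, with the helper `close` and the grid being our own choices:

```python
# Pointwise numeric verification of the algebraic identities used above.
# Agreement to 1e-9 on a generic grid confirms the exact polynomial identities.
def close(a, b, tol=1e-9):
    return abs(a - b) <= tol

pts = [i / 17 for i in range(1, 17)]          # generic interior points of (0, 1)
for p in pts:
    for x in pts:
        for y in pts:
            # eq:Case12 -- completing the square:
            lhs = 4*(1 - p)*y**2 + (5*p*x - 4)*y + (1 - p*x)**2 / (1 - p)
            rhs = (2*(1 - p)*y - (1 - p*x))**2 / (1 - p) + p*x*y
            assert close(lhs, rhs)
            # eq:Case13 -- rearrangement into a quadratic in y:
            lhs = y*(1 + y) + p*p*(y - x + 2)*(y - x) - p*(y*(3 + 2*(y - x)) - 2) - 1
            rhs = (1 - p)**2 * y**2 + (1 - p)*(1 - 2*p*(1 - x))*y + p*(2 - p*(2 - x)*x) - 1
            assert close(lhs, rhs)
for y in pts:
    # eq:Case23 -- value at p = 1, x = (1 + y)/2 equals 3(y - 1/2)^2 + 1/4:
    assert close(2 - 4*(1 - (1 + y)/2)*((y + 1)/2 - y) + y*(4*y - 5),
                 3*(y - 0.5)**2 + 0.25)
    # eq:Case25 -- value at p = 1, x = 1 equals (1 - 2y)^2:
    assert close(2 - (2 - 1)*(1 - y) + y*(4*y - 5), (1 - 2*y)**2)
print("identities verified")
```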
\item[\ref{eq:Case31}:] As $1 - x$, $1 - p$ and $p$ are always non-negative, we only need to prove that the value of $1 + p(x - y)(1 - x - x^2) + y - x(1 + x)y - 4(1 - x)x$ is at least $0$. Since $1 - x - x^2$ is non-negative for $x\in \left[0, \frac12\right)$, both $p(x-y)(1-x-x^2)$ and $(1 - x - x^2)y$ are always non-negative, and thus the following holds: \begin{align*} 1 + p(x - y)(1 - x - x^2) + y - x(1 + x)y - 4(1 - x)x & \geq 1 + \left(1 - x - x^2\right)y - 4(1 - x)x\\ & \geq 1 - 4(1 - x)x\\ & \geq 0, \end{align*} where the last inequality is due to the non-negativity of $4x^2-4x+1 = (2x-1)^2$. \item[\ref{eq:Case32}:] We need to show that $1 + (4 - 3p)x^2 + 2(1 - p)y - x\left(4 + 3y - p(2 + 3y)\right)$ is always non-negative. We first reorganize the term and get $1 + (4 - 3 p) x^2 + (2 (1 - p) - 3 x (1 - p)) y - x (4 - 2 p)$. As $x\leq \frac23$, we can see that $2 (1 - p) - 3 x (1 - p)$ is always non-negative. Thus, \begin{align*} 1 + (4 - 3 p) x^2 + (2 (1 - p) - 3 x (1 - p)) y - x (4 - 2 p) &\geq (4 - 3 p) x^2 - (4-2p) x + 1\\ & \geq 1 -\frac{(4-2p)^2}{4(4-3p)}\\ & {=\frac{p-p^2}{(4-3p)} }\geq 0, \end{align*} where the second inequality is derived from the fact that the quadratic attains its minimum at $x = \frac{4-2p}{2(4-3p)}$. The final inequality holds because $p-p^2\geq 0$ for $p\in [0, 1]$. \item[\ref{eq:Case33}:] We aim to demonstrate that $2p^2x\left(1 - \frac{3}{2}x\right) + 1 - 4y + 4y^2 - 4px\left(1 - x\right) + 6py - 2y\left(1 - \frac{3}{2}x\right)p^2 - 3pxy - 4py^2$ is always non-negative when $x,y\in \left[\frac12,\frac23\right)$ and $p\in [0, 1)$. Notice that $x < \frac23$ implies that $1 - \frac{3}{2}x$ is positive in this interval, so the term $2p^2x\left(1 - \frac{3}{2}x\right)$ is non-negative and we only need to establish that $1 - 4y + 4y^2 - 4px\left(1 - x\right) + 6py - 2y\left(1 - \frac{3}{2}x\right)p^2 - 3pxy - 4py^2$ is non-negative. Fixing $x$ and $y$, this is a quadratic in $p$, and the coefficient of $p^2$ is $-2y\left(1-\frac{3}{2}x\right)$, which is non-positive.
Therefore, the minimum of this quadratic is achieved at the boundary, i.e., $p = 0$ or $p = 1$. When $p = 0$, the expression is equivalent to \[4y^2 - 4y + 1 = (2y - 1)^2 \geq 0.\] When $p = 1$, the $y$-, $y^2$- and $xy$-terms cancel and the expression becomes \[1 - 4x + 4x^2 = (2x - 1)^2 \geq 0.\] \item[\ref{eq:Case34}:] This case is trivial, as $p\in [0,1)$. \item[\ref{eq:Case35}:] This case is also straightforward. First, notice that $(-2 + x) x \geq -1$. Therefore, it holds that \begin{align*} \frac{1+p(x-2)x}{1-p} + 4y^2 - 4y& \geq \frac{1-p}{1-p} + 4y^2 - 4y\\ & = (2y - 1)^2 \geq 0. \end{align*} \item[\ref{eq:Case36}:] It suffices to show that $-1 + y(1 + y) + p(2 + (-2 + x)x - y - y^2)$ is always non-negative. Once again, using the fact that $(-2 + x) x \geq -1$, we observe that \begin{align*} -1 + y(1 + y) + p(2 + (-2 + x)x - y - y^2) &\geq y(1 + y) - 1+(1-y-y^2)p\\ & \geq y(1 + y) - 1+ (1-y-y^2) = 0, \end{align*} where the second inequality arises from the observation that $1-y-y^2$ is negative when $y\in \left[\frac23, 1\right]$, and thus the expression attains its minimum at $p = 1$. \end{itemize} \section{Missing Proofs in Section~\ref{sec:Sample}} \label{appendix:SampleAppendix} \subsection{Mechanisms over Buyer's information} \label{appendix:NonbuyerInfo} In the sample setting, we only consider mechanisms over the seller's information. We do not consider quantile or order statistic mechanisms over the buyer's information, since it is impossible to obtain any constant approximation with these families of mechanisms. \begin{theorem} \label{thm:NonBuyerInfo} No quantile mechanism over the buyer's distribution or order statistic mechanism over only the buyer's samples can achieve a constant fraction of the optimal welfare.
\end{theorem} \begin{proof} We first show that there is no constant-approximation quantile mechanism $Q$ over the buyer's distribution. Recall that $F_B$ and $F_S$ respectively stand for the distributions of the buyer and the seller, and we will also use $Q$ to denote the corresponding distribution over the buyer's quantile. To start with, we can assume that the distribution $Q$ does not have a point mass at $1$. This is because if we set the $1$-quantile of the buyer's distribution, i.e. $F_B^{-1}(1)$, as the price, we have $\Pr_{B\sim F_B}[B \geq F_B^{-1}(1)] = 0$. This means that the trade will never happen under such a price, and thus this price will not increase the welfare. Therefore, if we move this probability mass to other values, the welfare and hence the approximation ratio will not decrease, which proves that this assumption is without loss of generality. Now, for an arbitrarily small $\varepsilon > 0$, we will show that there is no $\varepsilon$-approximation quantile mechanism over the buyer's distribution. For any quantile mechanism $Q$ over the buyer's distribution, we construct the following set: \[ \mathcal{X} = \{{t\in [0,1]}\;\vert\; \Pr_{x\sim Q}[x\ge t] \le \varepsilon / 2\}. \] Since there is no point mass at $1$, this set contains some $t\in \mathcal{X}$ with $t \neq 1$. Consider the following instance $\mathcal{I} = (F_S, F_B)$: \begin{align*} \begin{split} B \sim F_{B}, B=\left\{ \begin{aligned} &0 \quad & w.p. \quad t\\ &H & w.p.\quad 1 - t \end{aligned} \right. \quad\quad\quad S \sim F_{S}, S= \begin{aligned} &\varepsilon / 2 \quad & w.p. \quad 1\\ \end{aligned} \end{split} \end{align*} where $H = \frac{1}{1 - t}$ is a large enough number. In this instance, the intuition is that all the welfare is hidden in a low-probability, very high value of the buyer, and we must make sure that the trade is very likely to happen when the buyer has a very high value.
However, since the mechanism knows nothing about the seller's value, it is hard to guarantee that $p\ge S$, which means that this trade is unlikely to happen. Recall that we define $\text{OPT-}\mathcal{W}(\mathcal{I})$ as the optimal welfare, i.e., $\mathop{{}\mathbb{E}}_{S\sim F_S, B\sim F_B}[\max(S, B)]$, against instance $\mathcal{I} = (F_S, F_B)$, and $\mathcal{W}(Q, \mathcal{I})$ as the welfare of mechanism $Q$ against instance $\mathcal{I}$. Formally speaking, we have that \[\text{OPT-}\mathcal{W}(\mathcal{I}) \ge H \cdot (1 - t) = 1\] and also \begin{align*} \begin{split} \mathcal{W}(Q, \mathcal{I}) &= \mathop{{}\mathbb{E}}[S] + \mathop{{}\mathbb{E}}_{p\sim F_B^{-1}(Q)}[(B - S)\cdot \mathbbm{1}[B\ge p\ge S]]\\ &\le \varepsilon/2 + \mathop{{}\mathbb{E}}_{p\sim F_B^{-1}(Q)}[B \cdot \mathbbm{1}[p\ge S]]\\ &\le \varepsilon/2 + H \cdot (1 - t)\cdot \varepsilon / 2 = \varepsilon, \end{split} \end{align*} where the last inequality holds since $p = F_B^{-1}(x)\ge S$ is equivalent to $x \ge t$, where $x$ is drawn from $Q$, and this happens w.p. at most $\varepsilon/2$ by the definition of $t$. So for every distribution $Q$ over the buyer's quantile, we find an instance $\mathcal{I}$ such that $\mathcal{W}(Q, \mathcal{I}) \le \varepsilon \cdot \text{OPT-}\mathcal{W}(\mathcal{I})$, which completes the first part of our proof. Next we aim to show that for any $\varepsilon > 0$ and $N > 0$, there is no $\varepsilon$-approximation mechanism using only $N$ samples from the buyer. First, any mechanism $\mathcal{M}$ using $N$ samples from the buyer can be formalized as a mapping \[ f: \mathbb{R}_{\ge 0}^N \mapsto \Delta\left( \mathbb{R}_{\ge 0} \right), \] where $f(x_1, x_2, \cdots, x_N)$ stands for the distribution of the price selected by this mechanism after receiving $N$ samples $(x_1, x_2, \cdots, x_N)$. Let $D$ be $f(0, 0, \dots, 0)$, which is the distribution of the price if this mechanism sees $N$ samples all with value $0$.
Similarly, we consider the following set: \[ \mathcal{H}' = \{{t\in \mathbb{R}_{\ge 0}}\;\vert\; \Pr_{x\sim D}[x\ge t] \le \varepsilon / 2\}. \] Again, this set is non-empty, so let $t$ be any positive real number in the set $\mathcal{H}'$. We then construct an instance $\mathcal{I} = (F_S, F_B)$ satisfying \begin{align*} \begin{split} B \sim F_{B}, B=\left\{ \begin{aligned} &0 \quad & w.p. \quad (1-\varepsilon/4)^{1/N}\\ &H & w.p.\quad 1 - (1-\varepsilon/4)^{1/N } \end{aligned} \right. \quad\quad\quad S \sim F_{S}, S= \begin{aligned} &t + 1 \quad & w.p. \quad 1\\ \end{aligned} \end{split} \end{align*} where $H > \frac{t + 1}{(\varepsilon / 4)\cdot \left(1 - (1 - \varepsilon / 4)^{1/N}\right)}$ is a large enough number. In this instance, with just $N$ samples, no mechanism can distinguish this instance from another instance whose buyer always has a value of $0$. Therefore, it cannot capture the welfare hidden in the buyer's high value. Formally speaking: \[ \text{OPT-}\mathcal{W}(\mathcal{I}) \ge H \cdot \left(1 - (1-\varepsilon/4)^{1/N}\right) > \frac{t+1}{\varepsilon / 4}. \] To calculate $\mathcal{W}(\mathcal{M}, \mathcal{I})$, we consider the case when all the samples are zero and the case when at least one sample is non-zero. In the latter case, the probability that at least one sample is non-zero is exactly $1 -\left( (1 - \varepsilon / 4)^{1/N}\right)^N = \varepsilon / 4$, which is negligible. In the former case, since $\Pr_{p\sim f(0, 0, \dots, 0)}[p \ge t]\le \varepsilon / 2$, the trade happens w.p. at most $\varepsilon / 2$.
Therefore, we can expand $\mathcal{W}(\mathcal{M}, \mathcal{I})$ into: \begin{align*} \begin{split} \mathcal{W}(\mathcal{M}, \mathcal{I}) &\le \mathop{{}\mathbb{E}}_{p\sim f(0, 0, \dots, 0)}[S + (B - S)\cdot \mathbbm{1}[B \ge p \ge S]] \cdot \Pr[\text{All } N \text{ samples are } 0]\\ & \quad + \text{OPT-}\mathcal{W}(\mathcal{I}) \cdot \Pr[\text{at least } 1 \text{ sample is not } 0]\\ & \le (t + 1+ \mathop{{}\mathbb{E}}[B \cdot\mathbbm{1}[p\ge S]]) \cdot 1 + \text{OPT-}\mathcal{W}(\mathcal{I}) \cdot(\varepsilon / 4)\\ & \le \text{OPT-}\mathcal{W}(\mathcal{I}) \cdot (\varepsilon / 4 + \varepsilon / 2 + \varepsilon / 4)\\ & = \varepsilon \cdot \text{OPT-}\mathcal{W}(\mathcal{I}), \end{split} \end{align*} where the last inequality holds since $t + 1\le (\varepsilon / 4) \cdot \text{OPT-}\mathcal{W}(\mathcal{I})$, $\Pr_{p\sim f(0, 0,\dots,0)}[p\ge S] \le \varepsilon / 2$ and $\mathop{{}\mathbb{E}}_{B\sim F_B}[B]\le \mathop{{}\mathbb{E}}_{S\sim F_S, B\sim F_B}[\max(S, B)] = \text{OPT-}\mathcal{W}(\mathcal{I})$. This finishes our proof. \end{proof} \subsection{Proof of Lemma~\ref{lem:smallsamplekey}} \label{appendix:proofofsmallsamplekey} The proof here is quite straightforward. As we show in Section~\ref{subsec:Connection}, each order statistic mechanism corresponds to a quantile mechanism. Thus $\mathcal{C}(\mathcal{P}(Q))$ is exactly the approximation ratio of the order statistic mechanism $Q$. Moreover, $\Delta_N$ enumerates all possible order statistic mechanisms with $N$ samples. Therefore, $\argmax_{Q\in \Delta_N} \mathcal{C}(\mathcal{P}(Q))$ is the optimal order statistic mechanism with $N$ samples. \subsection{Proof of Lemma~\ref{lem:symratio}} \label{appendix:proofofsymratio} Fix an instance $\mathcal{I} = (F, F)$ and recall that $S$ and $B$ are the random variables respectively indicating the values of the seller and the buyer.
Define $\mathrm{ALG}$ to be the random variable indicating the welfare of our mechanism in the realization, which is $S + (B - S) \cdot \mathbbm{1}[B\ge p \ge S]$, where $p$ is the price chosen by our quantile mechanism $Q$. Similarly, let $\mathrm{OPT}$ be the random variable indicating the optimal welfare in the realization, which is $\max(B, S)$. To prove Lemma~\ref{lem:symratio}, we introduce the following lemma. \begin{lemma}\label{lem:symcalc} For any quantile mechanism $Q$, let $\mathrm{ALG}$ and $\mathrm{OPT}$ respectively be the random variables indicating the welfare of the mechanism $Q$ and the optimal welfare in the realization. Let $r$ be \[\min_{\mathcal{I} = (F, F)}\inf_{x\in [0,1)} \frac{\Pr[\mathrm{ALG} \ge F^{-1}(x)]}{\Pr[\mathrm{OPT}\ge F^{-1}(x)]},\] where $F(x)$ is the cumulative distribution function of distribution $F$, and $F^{-1}(x)$ is the quantile function. The quantile mechanism $Q$ is at least $r$-approximate. \end{lemma} \begin{proof} We have \[\Pr[\mathrm{ALG}\ge F^{-1}(x)] \ge r\cdot \Pr[\mathrm{OPT}\ge F^{-1}(x)]\] for all $x\in [0, 1)$ and all quantile functions $F^{-1}(x)$. Without loss of generality, we may assume the distribution is supported on $[0, a]$. Notice that since we assume the distribution is continuous w.l.o.g.
in the sample setting, $F^{-1}(x)$ is a continuous and increasing function over $[0, 1]$ with $F^{-1}(0) = 0$ and $F^{-1}(1) = a$, so we have that \begin{align*} \mathcal{W}(\mathcal{I}, Q) &= \mathop{{}\mathbb{E}}[\mathrm{ALG}]\\ & = \int_{0}^a \Pr[\mathrm{ALG} \ge x] \mathop{dx}\\ & = \int_{0}^1 \Pr[\mathrm{ALG} \ge F^{-1}(z)] \mathop{d F^{-1}(z)}\\ & \ge \int_{0}^1 r \cdot \Pr[\mathrm{OPT} \ge F^{-1}(z)] \mathop{d F^{-1}(z)}\\ & = r\cdot \int_{0}^a \Pr[\mathrm{OPT} \ge x] \mathop{dx} = r\cdot \mathop{{}\mathbb{E}}[\mathrm{OPT}]\\ & = r\cdot \text{OPT-}\mathcal{W}(\mathcal{I}) \end{align*} holds for any instance $\mathcal{I} = (F, F)$, which implies that the quantile mechanism $Q$ is at least $r$-approximate. \end{proof} With Lemma~\ref{lem:symcalc}, we are able to give a lower bound on the approximation ratio of any quantile mechanism $Q$. Fixing the buyer's and seller's distribution $F$, we only need to calculate the terms $\Pr[\mathrm{ALG} \ge F^{-1}(x)]$ and $\Pr[\mathrm{OPT} \ge F^{-1}(x)]$. The event $\mathrm{OPT}\ge F^{-1}(x)$ happens if and only if either $B$ or $S$ is at least $F^{-1}(x)$. Thus, \begin{equation} \label{eq:termOPT} \Pr[\mathrm{OPT}\ge F^{-1}(x)] = 1-x^2. \end{equation} The event $\mathrm{ALG}\ge F^{-1}(x)$ happens if and only if one of the following conditions is satisfied: \begin{itemize} \item $S\ge F^{-1}(x)$. \item $p\le F^{-1}(x)$, $S\le p$ and $B\ge F^{-1}(x)$. Here $S\le p\le B$, thus the trade takes place, and $B\ge F^{-1}(x)$. \item $p> F^{-1}(x)$, $S\le F^{-1}(x)$ and $B\ge p$. Since $S\le p\le B$, the seller trades the item to the buyer, and we have $B\ge p\ge F^{-1}(x)$. \end{itemize} Note that these three events are disjoint, so we can calculate the probability of each event and add them up. For the first event, $\Pr[S\ge F^{-1}(x)] = 1 - x$. For the second event, we enumerate the quantile of $p$.
Suppose the quantile of $p$ is $t$, which means that $F^{-1}(t) = p$. Then, we have $\Pr[S\le p] = t$ and $\Pr[B\ge F^{-1}(x)] = 1-x$. Thus, this event takes place w.p. $\int_{[0,x]} t(1-x)\, dQ(t)$, where $Q(t)$ is the c.d.f. of distribution $Q$. For the third event, we use the same idea. Suppose the quantile of $p$ is $t$; we have $\Pr[S\le F^{-1}(x)] = x$ and $\Pr[B\ge p] = 1 - t$. Therefore, this event happens w.p. $\int_{(x,1]}(1-t)x\, dQ(t)$. Adding the terms above up, we have: \begin{equation} \label{eq:termALG} \Pr[\mathrm{ALG}\ge F^{-1}(x)] = {\int_{[0,x]} t(1-x) \, d Q(t) + \int_{(x, 1]} (1-t)x\, dQ(t) + (1-x)}. \end{equation} Therefore, combining Lemma~\ref{lem:symcalc} with Equations~\eqref{eq:termOPT} and \eqref{eq:termALG}, we have that for any quantile mechanism $Q$ with c.d.f. $Q(x)$, the value of the following optimization problem lower bounds the approximation ratio of the quantile mechanism $Q$: \begin{align*} \begin{split} & \min_{\mathcal{I} = (F, F)} \inf_{x\in [0,1)} \frac{\int_0^x q(t)\cdot t(1-x)\mathop{dt} + \int_x^1 q(t)\cdot (1-t)x\mathop{dt} + (1-x)}{1-x^2} \\ & = \inf_{x\in [0,1)} \frac{\int_0^x q(t)\cdot t(1-x)\mathop{dt} + \int_x^1 q(t)\cdot (1-t)x\mathop{dt} + (1-x)}{1-x^2}, \end{split} \end{align*} where the equality holds since the term is independent of $F$. It remains to show that the approximation ratio of $Q$ is also upper bounded by $r$. It suffices to show that for any $\varepsilon > 0$ there exists some instance $\mathcal{I} = (F, F)$ such that \[\mathcal{W}(Q, \mathcal{I}) \le (r + \varepsilon) \cdot \text{OPT-}\mathcal{W}(\mathcal{I}). \] First, recall Equations~\eqref{eq:termOPT} and~\eqref{eq:termALG}. Both the term $\Pr[\mathrm{ALG}\ge F^{-1}(x)]$ and the term $\Pr[\mathrm{OPT}\ge F^{-1}(x)]$ are independent of the distribution $F$.
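As an aside, Equations~\eqref{eq:termOPT} and \eqref{eq:termALG} are easy to sanity-check by simulation. A minimal Python sketch, assuming $F = \mathrm{Uniform}[0,1]$ (so $F^{-1}(x) = x$) and taking $Q$ to be a point mass at a quantile $t_0$; both assumptions are ours, chosen so the integrals collapse to closed forms:

```python
# Monte Carlo sanity check of Pr[OPT >= F^{-1}(x)] = 1 - x^2 and of the
# ALG formula, for F = Uniform[0,1] and Q a point mass at quantile t0.
import random

random.seed(0)
t0, x0, trials = 0.3, 0.55, 200_000
hit_alg = hit_opt = 0
for _ in range(trials):
    S, B = random.random(), random.random()
    p = t0                                   # F^{-1}(t0) = t0 for the uniform
    alg = S + (B - S) * (B >= p >= S)        # realized welfare of the mechanism
    opt = max(S, B)
    hit_alg += alg >= x0
    hit_opt += opt >= x0

# Closed forms: Pr[OPT >= x] = 1 - x^2, and for a point mass at t0 <= x the
# integrals in the ALG formula reduce to t0*(1 - x) + (1 - x).
pred_opt = 1 - x0**2
pred_alg = t0 * (1 - x0) + (1 - x0)
assert abs(hit_opt / trials - pred_opt) < 0.01
assert abs(hit_alg / trials - pred_alg) < 0.01
print(hit_alg / trials, pred_alg)
```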
Thus, \[r = \min_F\inf_{x\in [0,1)} \frac{\Pr[\mathrm{ALG} \ge F^{-1}(x)]}{\Pr[\mathrm{OPT}\ge F^{-1}(x)]} = \inf_{x\in [0,1)} \frac{\int_0^x q(t)\cdot t(1-x)\mathop{dt} + \int_x^1 q(t)\cdot (1-t)x\mathop{dt} + (1-x)}{1-x^2}.\] Suppose the infimum of $ \inf_{x\in [0,1)} \frac{\int_0^x q(t)\cdot t(1-x)\mathop{dt} + \int_x^1 q(t)\cdot (1-t)x\mathop{dt} + (1-x)}{1-x^2}$ is attained at $x^*$. Consider the following instance $\mathcal{I} = (F, F)$ satisfying \begin{align*} \begin{split} v \sim F, v=\left\{ \begin{aligned} &U[0, r \cdot (1 - x^*) \cdot \varepsilon/2] \quad & w.p. \quad x^*\\ &U[1, 1 + \varepsilon/2] & w.p.\quad 1 - x^* \end{aligned} \right. \end{split} \end{align*} Notice that in this instance, the event $\mathbbm{1}[\mathrm{ALG} \ge F^{-1}(x^*)]$ is equivalent to $\mathbbm{1}\Big[\mathrm{ALG} \in [1, 1 + \varepsilon/2]\Big]$. The same argument also holds for $\mathrm{OPT}$. Thus, we can see that \begin{align*} \begin{split} \mathcal{W}(Q, \mathcal{I}) &= \mathop{{}\mathbb{E}}[\mathrm{ALG}] \\ & \le \Pr[\mathrm{ALG} \in [1, 1 + \varepsilon/2 ]]\cdot (1 + \varepsilon/2) + \Pr[\mathrm{ALG} \in [0, r \cdot (1 - x^*) \cdot \varepsilon/2]] \cdot (r \cdot (1 - x^*) \cdot \varepsilon/2)\\ & \leq \Pr[\mathrm{ALG} \ge F^{-1}(x^*)]\cdot ( 1 + \varepsilon/2) + r \cdot (1 - x^*) \cdot \varepsilon/2 \\ &= r\cdot \Pr[\mathrm{OPT}\ge F^{-1}(x^*)]\cdot (1 + \varepsilon/2) + r \cdot (1 - x^*) \cdot \varepsilon/2\\ & \leq r\cdot \mathop{{}\mathbb{E}}[\mathrm{OPT}]\cdot (1 + \varepsilon / 2) + r\cdot (1 - x^*) \cdot \varepsilon/2\\ & \leq (r + \varepsilon) \cdot \mathop{{}\mathbb{E}}[\mathrm{OPT}] = (r + \varepsilon) \cdot \text{OPT-}\mathcal{W}(\mathcal{I}), \end{split} \end{align*} where the second equality uses the fact that $r\cdot \Pr[\mathrm{OPT}\ge F^{-1}(x^*)] = \Pr[\mathrm{ALG} \ge F^{-1}(x^*)]$ since $r$ is attained at $x^*$.
Since the above holds for every $\varepsilon > 0$, this completes our proof of Lemma~\ref{lem:symratio}. \subsection{Proof of Theorem~\ref{thm:symoptorderstatistic}} \label{appendix:proofsymoptorderstatistic} Theorem~\ref{thm:symoptorderstatistic} follows directly by combining Lemma~\ref{lem:smallsamplekey} and Lemma~\ref{lem:symratio}. From Lemma~\ref{lem:symratio}, we know that the approximation ratio of a quantile mechanism $Q$ with c.d.f. $Q(x)$ is exactly \[ \inf_{x\in [0,1)} \frac{\int_{[0,x]} q(t)t(1-x) \, d t + \int_{(x, 1]} q(t)(1-t)x\, dt + (1-x)}{1-x^2}, \] where we use $q(x)\, dx$ instead of $dQ(x)$ since we have a continuous probability density function. The set of all distributions over $[N]$ is exactly $\Big\{\{P_i\}_{i\in [N]}~|~P_i\geq 0\text{ and } \sum_{i=1}^N P_i = 1\Big\}$, where distribution $P$ chooses $i$ w.p. $P_i$. Moreover, the probability density function of the quantile of such an order statistic mechanism is exactly $p(x) = \sum_{i=1}^N P_i f_N^i(x)$, where $f_N^i(x) = N \binom{N-1}{i-1} \cdot x ^{i-1}\cdot(1-x)^{N-i}$. Therefore, combining this with Lemma~\ref{lem:smallsamplekey}, it follows that Theorem~\ref{thm:symoptorderstatistic} holds. \subsection{Analysis of Empirical Risk Minimization Mechanism} \label{appendix:ERM} In this section, we give the upper bounds for the Empirical Risk Minimization (ERM) mechanism. When $N = 1$, the empirical distribution is a one-point distribution, so any price $x\in \mathbb{R}_{\ge 0}$ is optimal for it. Therefore, we consider the following instance $\mathcal{I} = (F, F)$ s.t. \begin{align*} \begin{split} x \sim F, x=\left\{ \begin{aligned} &0 \quad & w.p. \quad 1 - 1/H\\ &H \quad & w.p. \quad 1/H \end{aligned} \right. \end{split} \end{align*} Since any price is optimal for ERM, we can assume that it will always select $H + 1$, so that the trade will never take place.
Therefore, taking $H\rightarrow \infty$ in this instance, the approximation ratio tends to $0.5$. When $N = 2$, the empirical distribution is a two-point distribution. Suppose the two samples are $X_1 \le X_2$; any price in $[X_1, X_2)$ is optimal for this empirical distribution, so we may assume that ERM always selects $X_1$. Hence ERM is equivalent to the order statistic mechanism that always selects the smallest sample, and we can solve the following optimization problem: \[\inf_{x\in [0, 1)}\frac{\frac13x^3-x^2-\frac13x+1}{1-x^2} = \frac{2}{3}.\] Then, by Lemma~\ref{lem:symratio}, there exists an instance on which ERM achieves exactly a $\frac{2}{3}$ approximation. Now consider the case $N = 3$. By some calculations, the second smallest sample is always an optimal choice, so ERM is equivalent to the order statistic mechanism that always selects the second smallest sample when $N = 3$. Similarly, we can calculate that \[\inf_{x\in [0, 1)}\frac{\frac12x^4-x^3-\frac12x+1}{1-x^2} = \frac{3}{4}.\] Applying Lemma~\ref{lem:symratio} again, there is an instance on which ERM has an approximation ratio of exactly $\frac{3}{4}$. Our proof strategy changes when the number of samples is at least $5$: we consider a particular instance $\mathcal{I} = (F, F)$ and calculate the performance of ERM on it. \begin{align*} \begin{split} x \sim F, \quad x=\left\{ \begin{aligned} &0 \quad & w.p. \quad (\sqrt2-1)\cdot(1 - \frac{1}{n})\\ &\frac1n \quad & w.p. \quad 2 - \sqrt2\\ &1 \quad & w.p. \quad \frac{1}{n}(\sqrt2-1) \end{aligned} \right. \end{split} \end{align*} This is in fact a counterexample that appeared in \cite{kang2022fixed}, where it is shown that \[\text{OPT-}\mathcal{W}(\mathcal{I}) = \frac{4(\sqrt2-1)}{n} - \frac{4\sqrt2 -1 }{n^2}.\] Since we will let $n \rightarrow \infty$, we will ignore the $O(\frac1{n^2})$ terms in the following calculation.
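As a quick numerical sanity check (our own illustration, not part of the proof), the two infima above can be recovered directly from the quantile densities of the corresponding order statistic mechanisms: the smallest of two samples has density $q(t) = 2(1-t)$ and the second smallest of three has $q(t) = 6t(1-t)$.

```python
def sym_ratio(q, grid=5000):
    """Grid evaluation of the objective from Lemma lem:symratio:
    inf_x (int_0^x q(t)t(1-x)dt + int_x^1 q(t)(1-t)x dt + (1-x)) / (1-x^2)."""
    ts = [(i + 0.5) / grid for i in range(grid)]   # midpoint-rule nodes
    w = 1.0 / grid
    A = [0.0]   # A[j] ~ int_0^{j/grid} q(t) t dt
    B = [0.0]   # B[j] ~ int_0^{j/grid} q(t) (1-t) dt
    for t in ts:
        A.append(A[-1] + q(t) * t * w)
        B.append(B[-1] + q(t) * (1 - t) * w)
    best = float("inf")
    for j in range(grid):                          # x = j/grid ranges over [0, 1)
        x = j / grid
        num = (1 - x) * A[j] + x * (B[-1] - B[j]) + (1 - x)
        best = min(best, num / (1 - x * x))
    return best

# smallest of N=2 samples -> ratio ~ 2/3; second smallest of N=3 -> ratio ~ 3/4
```

Both infima are attained only in the limit $x \to 1$, which is why the grid stops just short of $1$.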
Since $\Pr[x = 1] = O(\frac{1}{n})$, the probability that there is at least one sample with value $1$ is negligible, so we will also assume that all samples are $0$ or $\frac{1}{n}$. Recall that there is a tie-breaking coordinate drawn uniformly from $[0, 1]$ for each variable, and we compare the tie-breaking coordinates of variables with the same value. Now suppose there are $k_1$ samples with value $0$ and $k_2$ samples with value $\frac{1}{n}$. The largest $0$ or the smallest $\frac{1}{n}$ is an optimal price for the empirical distribution when $k_1, k_2\neq 0$. If we choose the largest $0$ as the price $p$, the expected welfare is: \begin{align} \begin{split} W_1(k_1) = &\Pr[S = 0] \cdot \Pr[S < p] \cdot \mathop{{}\mathbb{E}}[B] + \Pr\left[S = \frac{1}{n}\right] \cdot \frac{1}{n} + \Pr[S = 1] \cdot 1\\ = &(\sqrt2-1)\cdot \frac{k_1}{k_1+1} \left((2-\sqrt2+\sqrt2-1)\cdot \frac1n\right) + (2-\sqrt2)\cdot \frac{1}{n} + \frac{1}{n}\cdot (\sqrt2-1) \end{split} \end{align} If we choose the smallest $\frac1n$ as the price $p$, the expected welfare is: \begin{small} \begin{align} \begin{split} W_2(k_2) = &\Pr[S = 0] \cdot \mathop{{}\mathbb{E}}[B \cdot \mathbbm{1}[B > p]] + \Pr\left[S = \frac{1}{n}\right] \cdot \left(\frac{1}{n} +\Pr[S < p]\cdot \mathop{{}\mathbb{E}}[B \cdot \mathbbm{1}[B = 1]]\right) + \Pr[S = 1] \cdot 1\\ = &(\sqrt2-1)\cdot \left((2-\sqrt2)\cdot \frac{1}{n}\cdot \frac{k_2}{k_2+1} + \frac{1}{n}(\sqrt2-1)\right) + (2-\sqrt2)\cdot \left(\frac{1}{n} + \frac{1}{k_2+1} \cdot(\sqrt2-1)\frac1n\right) + \frac{1}{n}\cdot (\sqrt2-1) \end{split} \end{align} \end{small} When all the samples have the same value, any price is optimal. As in the case $N = 1$, the trade may never happen, so the expected welfare is $\frac{1}{n}$ in this case.
Now, when there are $5$ samples, suppose ERM chooses the largest $0$ when there are $1 \sim 3$ samples with value $0$, and chooses the smallest $\frac1n$ when there is $1$ sample with value $\frac1n$. Writing $p = \sqrt2 - 1$ for the limiting probability of value $0$, the expected welfare of ERM when $N = 5$ is: \begin{small} \begin{align*} \begin{split} \sum_{i=1}^3 (\sqrt2-1)^i (2-\sqrt2)^{5-i} \binom{5}{i} \cdot W_1(i) + \sum_{i=1}^1 (\sqrt2-1)^{5-i} (2-\sqrt2)^{i} \binom{5}{i} \cdot W_2(i) + (p^5 + (1-p)^5)\cdot \frac1n \end{split} \end{align*} \end{small} Comparing it to the optimum \[\text{OPT-}\mathcal{W}(\mathcal{I}) = \frac{4(\sqrt2-1)}{n} - \frac{4\sqrt2 -1 }{n^2},\] a numerical calculation shows that the ratio is $\approx 0.76$ as $n\rightarrow \infty$. Similarly, when there are $10$ samples, suppose ERM chooses the largest $0$ when there are $1 \sim 6$ samples with value $0$, and chooses the smallest $\frac1n$ when there are $1\sim 3$ samples with value $\frac1n$. The expected welfare of ERM when $N = 10$ is: \begin{small} \begin{align*} \begin{split} \sum_{i=1}^6 (\sqrt2-1)^i (2-\sqrt2)^{10-i} \binom{10}{i} \cdot W_1(i) + \sum_{i=1}^3 (\sqrt2-1)^{10-i} (2-\sqrt2)^{i} \binom{10}{i} \cdot W_2(i) + (p^{10} + (1-p)^{10})\cdot \frac1n \end{split} \end{align*} \end{small} By calculation, the approximation ratio is $\approx 0.80$ as $n\rightarrow \infty$. \subsection{Proof of Lemma~\ref{lem:asymratio}} \label{appendix:proofasymratio} Now we fix the quantile mechanism $Q$ and suppose its c.d.f. is $Q(x)$. Let $r$ be \[\min_{x\in [0,1]}\left\{\int_{[0,x]} t\, dQ(t) + 1-x\right\}.\] \cite{blumrosen2021almost} already prove that $r$ is a lower bound on the approximation ratio, so we are only left to show that it is also an upper bound.
We aim to show that for any $\varepsilon > 0$, there exists an instance $\mathcal{I} = (F_S, F_B)$ such that \[\mathcal{W}(\mathcal{I}, Q)\le (r + \varepsilon) \cdot \text{OPT-}\mathcal{W}(\mathcal{I}).\] Suppose the optimization problem above achieves its minimum at $x^{*}$. Consider the following instance $\mathcal{I} = (F_S, F_B)$: \begin{align*} \begin{split} S \sim F_{S}, \quad S=\left\{ \begin{aligned} &U[0,\varepsilon] \quad & w.p. \quad x^{*} \\ &H + \varepsilon & w.p.\quad 1 - x^{*} \end{aligned} \right. \quad\quad\quad B \sim F_{B}, \quad B= \begin{aligned} &H\quad & w.p. \quad 1\\ \end{aligned} \end{split} \end{align*} where $H > 1$ is a sufficiently large number. In this instance, \[\text{OPT-}\mathcal{W}(\mathcal{I}) \geq H.\] Now we compute the expected welfare of our mechanism. When its price has a quantile $t$ smaller than or equal to $x^*$, the trade happens with probability exactly $t$; when the quantile of its price is greater than $x^*$, the trade never happens. \begin{align*} \mathcal{W}(\mathcal{I}, Q) &= \mathop{{}\mathbb{E}}[S] + \mathop{{}\mathbb{E}}[(B - S)\mathbbm{1}[S\le p\le B]]\\ & \le x^* \varepsilon + (H + \varepsilon)(1-x^*) + \int_{[0, x^*]} tH \,dQ(t)\\ & \le H\cdot\left( 1-x^{*} +\int_{[0, x^*]} t\,dQ(t)\right) + \varepsilon \\ & \le H\cdot r + \varepsilon \le H\cdot(r + \varepsilon)\\ & \le (r + \varepsilon)\cdot \text{OPT-}\mathcal{W}(\mathcal{I}) \end{align*} where the third inequality follows from $r = \int_{[0, x^*]} t\, dQ(t) + (1 - x^*)$. Therefore, for any small enough $\varepsilon > 0$ we can find an instance $\mathcal{I} = (F_S, F_B)$ such that $\mathcal{W}(Q, \mathcal{I}) \le (r + \varepsilon) \cdot \text{OPT-}\mathcal{W}(\mathcal{I})$, and this concludes our proof.
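The quantity $r = \min_x\{\int_{[0,x]} t\, dQ(t) + 1 - x\}$ is straightforward to evaluate numerically for a given quantile density. The sketch below (our own illustration, not from the paper's code) checks it on the density $q(t) = 1/t$ supported on $[1/e, 1]$, for which the objective equals $1 - 1/e$ for every $x \ge 1/e$.

```python
import math

def asym_ratio(q, grid=10_000):
    """Evaluate  min_x  int_0^x q(t) t dt + 1 - x  on a uniform grid over [0, 1]."""
    best, integral = 1.0, 0.0          # x = 0 gives objective value 1
    for i in range(1, grid + 1):
        a, b = (i - 1) / grid, i / grid
        integral += 0.5 * (q(a) * a + q(b) * b) * (b - a)   # trapezoid rule
        best = min(best, integral + 1 - b)
    return best

# log-density mechanism: q(t) = 1/t on [1/e, 1], zero below 1/e
log_density = lambda t: 1.0 / t if t >= 1.0 / math.e else 0.0
# asym_ratio(log_density) is approximately 1 - 1/e
```

The integrand $q(t)\,t$ equals $1$ on $[1/e,1]$, so the running objective is flat there, matching the calculation in the proof of Lemma~\ref{lem:asymturning}.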
\subsection{Proof of Theorem~\ref{thm:asymoptorderstatistic}} \label{appendix:proofasymoptorderstatistic} The proof of Theorem~\ref{thm:asymoptorderstatistic} is nearly identical to that of Theorem~\ref{thm:symoptorderstatistic}. From Lemma~\ref{lem:asymratio}, we know that the approximation ratio of a quantile mechanism $Q$ with a continuous p.d.f. $q(x)$ is exactly \[ \mathcal{C}(Q)=\min_{x\in [0,1]} \left\{\int_0^x q(t) t\, dt + 1-x\right\}, \] where we write $q(x)\, dx$ instead of $dQ(x)$ since we have a continuous probability density function. Again, the set of all distributions over $[N]$ is $\Big\{\{P_i\}_{i\in [N]}~\big|~P_i\geq 0\text{ and } \sum_{i=1}^N P_i = 1\Big\}$, where the distribution $P$ chooses $i$ w.p. $P_i$. Moreover, the probability density function of the quantile of such an order statistic mechanism is exactly $p(x) = \sum_{i=1}^N P_i f_N^i(x)$, where $f_N^i(x) = N \binom{N-1}{i-1} \cdot x ^{i-1}\cdot(1-x)^{N-i}$. Therefore, combining this with Lemma~\ref{lem:smallsamplekey}, Theorem~\ref{thm:asymoptorderstatistic} follows. \subsection{Proof of Lemma~\ref{lem:convert}} \label{appendix:proofconvert} Before giving the proof, we first introduce some notation and lemmas about stochastic Bernstein polynomials that will be useful.
\begin{definition}[Stochastic Bernstein Polynomials] The stochastic Bernstein polynomial of degree $n$ for a continuous function $f$ on $[0, 1]$ is defined as \[\left(B_n^X f\right)(t) = \sum_{k=0}^n f(X_k) p_{n,k}(t),\] in which $X_0, X_1, \cdots, X_n$ are the order statistics of $(n + 1)$ independent copies of the random variable uniformly distributed in $[0, 1]$, and \[p_{n,k}(t) = \binom{n}{k} t^{k} (1-t)^{n-k}, \quad 0\le k\le n,\ 0\le t\le 1.\] \end{definition} Now fix the continuous function $q(x)$ we aim to approximate, and define its modulus of continuity $\omega(h)$ as \[\omega(h) = \sup_{0\le x, y\le 1\atop|x-y|\le h} |{q}(x) - {q}(y)|.\] We can now introduce the lemma from \cite{DBLP:journals/moc/SunWZ21} that helps us approximate the function ${q}(x)$ by order statistics. \begin{lemma}[Theorem~2.11 in \cite{DBLP:journals/moc/SunWZ21}] \label{lem:Bernstein} Let $\varepsilon > 0$ and $f\in C[0, 1]$ be given. Suppose that $\omega\left(\frac{1}{\sqrt{n}}\right) < \varepsilon / 6.2$. Then the following inequality holds: \begin{align*} \Pr\left[\left\| B_n^X f - f \right\|_{\infty} > \varepsilon \right] \le 2(n+1)\exp\left(-\frac{2\varepsilon^2}{\omega^2\left(\frac{1}{\sqrt{n}}\right)}\right). \end{align*} \end{lemma} We are now ready to prove Lemma~\ref{lem:convert}. We first present our mechanism $P$. For some instance $\mathcal{I} = (F_S, F_B)$, suppose there are $n$ samples $X_1\le X_2\le \cdots \le X_n$ drawn from the distribution. We draw another $n$ samples $Y_1 \le Y_2\le \cdots \le Y_n$ uniformly and independently from $[0, 1]$, and let $s = \sum_{i=1}^n q(Y_i)$. Then our mechanism chooses $X_i$ with probability ${q}(Y_i) / s$. Now, let \[g(x) = \sum_{i=1}^n q(Y_i)f_n^i(x)/s, \quad \text{where}~ f_n^i(x) = n\binom{n - 1}{i-1}\cdot x^{i-1}(1-x)^{n-i},\] be the corresponding probability density function of the order statistic mechanism $P$.
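The definition of the stochastic Bernstein polynomial can be transcribed directly into code (an illustrative sketch; the helper name is ours):

```python
import math
import random

def stochastic_bernstein(f, n, t, rng=random.Random(0)):
    """Evaluate (B_n^X f)(t) using the order statistics X_0 <= ... <= X_n
    of (n+1) i.i.d. Uniform[0,1] draws, as in the definition above."""
    xs = sorted(rng.random() for _ in range(n + 1))
    return sum(f(xs[k]) * math.comb(n, k) * t**k * (1 - t)**(n - k)
               for k in range(n + 1))
```

Since $\sum_k p_{n,k}(t) = 1$, constants are reproduced exactly, while for a general continuous $f$ the value concentrates around $f(t)$ as $n$ grows, which is the content of Lemma~\ref{lem:Bernstein}.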
It suffices to prove that, with high probability, \[|g(x) - q(x)| \leq \varepsilon \quad \forall x\in [0, 1].\] To prove this, we introduce an intermediate function $h(x)$: \[h(x) = \sum_{i=1}^n q(Y_i) \cdot \binom{n-1}{i-1}x^{i-1}(1-x)^{n-i} = \sum_{i=1}^n {q}(Y_i) p_{n-1,i-1}(x).\] As we can see, $h$ is the stochastic Bernstein polynomial of ${q}$ with degree $n - 1$, and $\omega\left(\frac{1}{\sqrt{n - 1}}\right) \leq \varepsilon /100$. Applying Lemma~\ref{lem:Bernstein}, we know that \begin{equation}\label{eq:hqdis} \Pr\left[\left\|h - q\right\|_{\infty} > \varepsilon / 4\right] \le 2n \exp\left(-\frac{\varepsilon^2}{8 \cdot \omega^2\left(\frac{1}{\sqrt{n - 1}}\right)} \right)\le \varepsilon, \end{equation} where the last inequality comes from the assumption in the statement of the lemma. Thus, we only need to show that the difference between $h$ and $g$ is small. First we have \begin{align*} \begin{split} g(x) &= \sum_{i=1}^n \frac{{q}(Y_i)}{s} f_n^i(x) \\ & = \sum_{i=1}^n {q}(Y_i) \binom{n-1}{i-1}x^{i-1}(1-x)^{n-i} \cdot \frac{n}{s}\\ & = \frac{n}{s} \cdot h(x) \end{split} \end{align*} So it is equivalent to prove that $n$ and $s$ are close with high probability. We have the following lemma. \begin{lemma}\label{lem:Bernsteinhelp} \begin{align*} \begin{split} \Pr\left[\left(1 - \frac{\varepsilon}{4M}\right)\cdot n\le s \le \left(1 + \frac{\varepsilon}{4M}\right) \cdot n\right] \ge 1 - \varepsilon \end{split} \end{align*} \end{lemma} We first use the lemma to continue our proof, before proving the lemma itself. We know that $|g(x) - h(x)|\leq \varepsilon/2 ~ \forall x\in [0, 1]$ if $n \left(1-\frac{\varepsilon}{4M}\right)\leq s \le n \left(1 + \frac{\varepsilon} {4M}\right)$ and $h(x) < 2M ~ \forall x\in [0, 1]$.
Therefore, \begin{small} \begin{align}\label{eq:ghdis} \begin{split} \Pr[\exists x\in [0, 1]: |g(x) - h(x)| \geq \varepsilon/2] &\le \Pr\left[s > n \left(1 + \frac{\varepsilon}{4M}\right)\right] + \Pr\left[s < n \left(1 - \frac{\varepsilon}{4M}\right)\right] +\Pr[\exists x\in [0, 1]: h(x) > 2M]\\ & \le 2\varepsilon \end{split} \end{align} \end{small} where the second inequality follows from Lemma~\ref{lem:Bernsteinhelp} and the fact that, with probability at least $1 - \varepsilon$, $\|h - {q}\|_\infty \le \varepsilon / 4$ while ${q}$ has a maximum of $M$ on $[0, 1]$. Combining inequalities~\eqref{eq:hqdis} and \eqref{eq:ghdis}, we know that with probability at least $1 - 3\varepsilon$ we have \[|g(x) - {q}(x)|\le |g(x) - h(x)| + |h(x) - {q}(x)| \leq \varepsilon.\] Since this probability is strictly greater than $0$, there exists some order statistic mechanism $P$ over $N$ samples such that $|g(x) - q(x) | \leq \varepsilon ~ \forall x\in [0, 1]$; moreover, our construction finds such an order statistic mechanism with high probability. Finally, as assumed in the statement, it holds that \[\mathcal{C}(P) \geq \mathcal{C}(Q) - c\cdot \|g - q\|_{\infty} \geq \mathcal{C}(Q) - c\varepsilon.\] This completes the proof of Lemma~\ref{lem:convert}. \begin{proof}[Proof of Lemma~\ref{lem:Bernsteinhelp}] We know that $\mathop{{}\mathbb{E}}[s] = n$. Notice that $s$ is the sum of $n$ i.i.d. random variables taking values in $[0, M]$. Therefore, by the Chernoff bound, it holds that \begin{align*} \begin{split} \Pr\left[\left(1 - \frac{\varepsilon}{4M}\right)\cdot n\le s \le \left(1 + \frac{\varepsilon}{4M}\right) \cdot n\right] &\geq 1- \exp\left(-\frac{\varepsilon^2 n}{48M^2}\right) - \exp\left(-\frac{\varepsilon^2 n}{32M^2}\right)\\ & \geq 1 - \varepsilon \end{split} \end{align*} where the last inequality follows from the assumption in the statement.
\end{proof} \subsection{Proof of Lemma~\ref{lem:symturning}} \label{appendix:proofofsymturning} Let $\tilde{Q}$ be the quantile mechanism with p.d.f. $\tilde{q}(x)$ satisfying \begin{align*} \begin{split} \tilde{q}(x) = \left\{ \begin{aligned} &0 \quad & x\in \left[0, \frac{\sqrt{2}}{2} - \frac{1}{32}\varepsilon\right]\bigcup \left[\frac{\sqrt{2}}{2}+\frac{1}{32}\varepsilon, 1\right]\\ &(32/\varepsilon)^2 \cdot \left(x - \frac{\sqrt2}{2} + \frac{1}{32}\varepsilon\right) & x\in \left[\frac{\sqrt2}{2} - \frac{1}{32}\varepsilon, \frac{\sqrt2}{2}\right]\\ &-(32/\varepsilon)^2\cdot \left(x - \frac{\sqrt2}{2} - \frac{1}{32}\varepsilon\right) & x\in \left[\frac{\sqrt2}{2}, \frac{\sqrt2}{2} + \frac{1}{32}\varepsilon\right]\\ \end{aligned} \right. \end{split} \end{align*} One can see that the quantile mechanism $\tilde{Q}$ always chooses a price whose quantile lies in $\left[\frac{\sqrt2}{2} - \frac{1}{32}\varepsilon, \frac{\sqrt2}{2} + \frac{1}{32}\varepsilon\right]$. Fix a price $p$ and an instance $\mathcal{I} = (F, F)$, and let $\textsf{ALG}$ be the random variable indicating the welfare for the fixed price $p$ in the realization, i.e., $S + (B - S)\mathbbm{1}[S\le p\le B]$, where $B,S$ are drawn independently from $F$. Thus, it suffices to show that for any instance $\mathcal{I} = (F, F)$, the following holds when $\frac{\sqrt2}2 - \frac{1}{32}\varepsilon \le t\leq \frac{\sqrt2}2 + \frac{1}{32}\varepsilon$: \[ \mathop{{}\mathbb{E}}\left[\textsf{ALG}\;\vert\; p = F^{-1}(t)\right] \ge \left(\frac{2+\sqrt2}{4} - \varepsilon / 2\right) \cdot \text{OPT-}\mathcal{W}(\mathcal{I}).\] To prove the approximation ratio, we need Lemma~\ref{lem:symratio}.
Notice that the term $\mathop{{}\mathbb{E}}[\mathrm{ALG}\;\vert\; p = F^{-1}(t)]$ can be understood as the expected welfare of a quantile mechanism that always selects the $t$-quantile as the price, so Lemma~\ref{lem:symratio} can also be applied to analyze the ratio between ${\mathop{{}\mathbb{E}}\left[\textsf{ALG}\;\vert\; p = F^{-1}(t)\right]}$ and ${\text{OPT-}\mathcal{W}(\mathcal{I})}$. For $x\ge t$, we know that \begin{align*} \begin{split} \inf_{x\in [t, 1)}\frac{(1-x)+t(1-x)}{1-x^2} = \frac{1 + t}{2}. \end{split} \end{align*} When $x<t$, it holds that \begin{align*} \begin{split} \min_{x\in [0, t]}\frac{(1-x)+x(1-t)}{1-x^2} = \frac{1 + \sqrt{1 -t^2}}{2}. \end{split} \end{align*} As we can see, $\frac{1 + t}{2}\ge \frac{1 + \sqrt{1 -t^2}}{2}$ when $1 \ge t\ge \frac{\sqrt2}2$, and $\frac{1 + t}{2}\le \frac{1 + \sqrt{1 -t^2}}{2}$ when $\frac{\sqrt{2}}{2}\ge t\ge 0$. By Lemma~\ref{lem:symratio}, we know that for any instance $\mathcal{I} = (F, F)$, \begin{align*} \begin{split} \mathop{{}\mathbb{E}}\left[\textsf{ALG}\;\vert\; p = F^{-1}(t)\right] \ge \left\{ \begin{aligned} &\frac{1 + \sqrt{1 -t^2}}{2} \cdot \text{OPT-}\mathcal{W}(\mathcal{I}) \quad & t\in \left[\frac{\sqrt2}2, 1\right] \\ &\frac{1 + t}{2} \cdot \text{OPT-}\mathcal{W}(\mathcal{I}) & t\in \left[0, \frac{\sqrt2}2\right] \end{aligned} \right. \end{split} \end{align*} Now if $\frac{\sqrt2}2 - \frac{1}{32}\varepsilon \le t \le \frac{\sqrt2}2$, it holds that \begin{align*} \begin{split} \mathop{{}\mathbb{E}}\left[\textsf{ALG}\;\vert\; p = F^{-1}(t)\right] &\ge\frac{1 + t}{2} \cdot \text{OPT-}\mathcal{W}(\mathcal{I}) \\ & \ge \left(\frac{2 + \sqrt2}{4} - \varepsilon / 2\right) \cdot \text{OPT-}\mathcal{W}(\mathcal{I}). \end{split} \end{align*}
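Both branch minima can be double-checked numerically. The sketch below (our own illustration) evaluates the objective of Lemma~\ref{lem:symratio} for the point-mass mechanism that always posts the $t$-quantile price, and recovers $\frac{2+\sqrt{2}}{4} \approx 0.8536$ at $t = \frac{\sqrt{2}}{2}$.

```python
import math

def fixed_quantile_ratio(t, grid=100_000):
    """Worst-case ratio of the mechanism that always posts the t-quantile price.
    The density is a point mass at t, so the integral terms in Lemma lem:symratio
    collapse to t(1-x) when t <= x and to (1-t)x when t > x."""
    best = float("inf")
    for i in range(grid):
        x = i / grid                   # x ranges over [0, 1)
        num = (t * (1 - x) if t <= x else (1 - t) * x) + (1 - x)
        best = min(best, num / (1 - x * x))
    return best

t_star = math.sqrt(2) / 2              # the turning point, ratio (2 + sqrt(2))/4
```

For $t$ below $t^*$ the binding branch is $\frac{1+t}{2}$, and for $t$ above it the binding branch is $\frac{1+\sqrt{1-t^2}}{2}$; the two curves cross exactly at $t = \frac{\sqrt2}{2}$, as stated above.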
For $\frac{\sqrt2}2 \le t\le \frac{\sqrt2}2 + \frac{1}{32}\varepsilon$, we have \begin{align*} \begin{split} \mathop{{}\mathbb{E}}\left[\textsf{ALG}\;\vert\; p = F^{-1}(t)\right] &\ge\frac{1 + \sqrt{1 -t^2}}{2} \cdot \text{OPT-}\mathcal{W}(\mathcal{I}) \\ & \ge \left(\frac{2 + \sqrt2}{4} - \varepsilon / 2\right) \cdot \text{OPT-}\mathcal{W}(\mathcal{I}). \end{split} \end{align*} Finally, a simple calculation shows that $\omega\left(c \cdot \varepsilon^{3.5}\right) \leq 32^2 c\cdot \varepsilon^{1.5}$ and $\max_{x\in[0,1]} \tilde{q}(x) \leq 32/\varepsilon$. This concludes the proof. \subsection{Proof of Lemma~\ref{lem:symclose}} \label{appendix:proofsymclose} Suppose $|q_1(x) - q_2(x)| \leq c$ for all $x\in [0, 1]$. We can see that \begin{align*} &\frac{\int_0^x (q_1(t) - q_2(t))t(1-x) \, d t + \int_x^1 (q_1(t) - q_2(t)) (1-t)x\, dt}{1-x^2}\\ & \geq - \frac{\int_0^x c t(1-x) \, d t + \int_x^1 c (1-t)x\, dt }{1-x^2}\\ & \geq - \frac{\int_0^x c t(1-x) \, d t + \int_x^1 c (1-x)x\, dt }{1-x^2}\\ & = - \frac{\int_0^x c t \, d t + \int_x^1 c x\, dt }{1+x} \geq -c \end{align*} holds for all $x\in [0, 1]$. Taking the infimum over $[0, 1)$, this directly implies that $\mathcal{C}(Q_1) - \mathcal{C}(Q_2) \geq - c$. \subsection{Proof of Lemma~\ref{lem:asymturning}} \label{appendix:proofasymturning} Let $\tilde{Q}$ be the quantile mechanism with p.d.f.
$\tilde{q}(x)$ satisfying \begin{align*} \begin{split} \tilde{q}(x) = \left\{ \begin{aligned} &0 \quad & x\in \left[0, \frac{1}{e} - \frac{\left(1 - e^{-1}\right)}{\left(e-\frac12\varepsilon\right)}\varepsilon\right]\\ &\frac{(e-\frac12\varepsilon)^2}{\varepsilon (1-e^{-1})} \left(x - \left(\frac{1}{e} - \frac{\left(1 - e^{-1}\right)}{\left(e-\frac12\varepsilon\right)}\varepsilon\right)\right) & x\in \left[\frac{1}{e} - \frac{\left(1 - e^{-1}\right)}{\left(e-\frac12\varepsilon\right)}\varepsilon, \frac{1}{e}\right]\\ &\frac{1}{x} - \frac{1}{2}\varepsilon & x\in \left[\frac{1}{e}, 1\right]\\ \end{aligned} \right. \end{split} \end{align*} By Lemma~\ref{lem:asymratio}, the approximation ratio of $\tilde{Q}$ can be bounded as \begin{align*} \begin{split} \min_{x\in [0,1]}\int_0^x \tilde{q}(t)\cdot t\, dt + 1-x &\ge \min_{x\in [0,1]}\int_{\frac{1}{e}}^x \left(\frac{1}{t} - \frac{1}{2}\varepsilon\right)\cdot t\, dt + 1-x\\ &\geq \min_{x\in [0,1]}\int_{\frac{1}{e}}^x \frac{1}{t} \cdot t\, dt + 1-x - \frac{1}{2}\varepsilon\\ & = 1 - \frac{1}{e} - \frac{1}{2} \varepsilon. \end{split} \end{align*} Finally, it is straightforward to check that $\omega\left(c\cdot \varepsilon^{2.5}\right) \leq e^2(1-e^{-1})^{-1}c^{0.5}\cdot\varepsilon^{1.5} \leq 100c^{0.5} \cdot \varepsilon^{1.5}$ and that $\max_{x\in[0,1]} \tilde{q}(x)$ is at most $2e$. \subsection{Proof of Lemma~\ref{lem:asymclose}} \label{appendix:proofasymclose} Suppose $|q_1(x) - q_2(x)| \leq c$ for all $x\in [0, 1]$. We can see that \begin{align*} &\left(\int_0^x q_1(t)t\,dt + (1 - x) \right)- \left(\int_{0}^x q_2(t)t\, dt + (1 - x)\right) \\ & \geq - \int_{0}^x ct\, dt \geq -c \end{align*} holds for all $x\in [0, 1]$. Taking the minimum over $[0, 1]$, this directly implies that $\mathcal{C}(Q_1) - \mathcal{C}(Q_2) \geq - c$.
\section{Details of Numerical Experiments} \label{appendix:experiments} \subsection{Symmetric Instance} \label{appendix:symmetricexperiment} We now present the details of the numerical experiments for the symmetric instances $\mathcal{I} = (F, F)$. We first formally write down the optimization problem that characterizes the optimal order statistic mechanisms. As we proved in Section~\ref{subsec:symsmallsample}, suppose the following optimization problem $\mathsf{PO}$ achieves its maximum $\textsf{OPT}_{\mathsf{PO}}$ at $(P_1^*, P_2^*, \dots, P_N^*)$, and let $P^*$ be the distribution over $[N]$ such that $\Pr_{x\sim P^*}[x = i] = P_i^*$. Then $P^*$ is the optimal order statistic mechanism in the symmetric setting with $N$ samples, and its approximation ratio is $\textsf{OPT}_{\mathsf{PO}}$. Notice that since the c.d.f. of the order statistic mechanism $Q(x)$ is differentiable, we use $q(t)\, dt$ instead of $dQ(t)$ for ease of computation, where $q(x)$ is the probability density function of the mechanism $Q$. \begin{center} \begin{tabular}{|c|} \hline The Optimization Problem $\mathsf{PO}$ \\ \hline \parbox{13cm}{ \begin{align} \max_{P_1,\dots, P_N} \quad \min_{x\in [0,1]} \quad &\frac{\int_0^x q(t)\cdot t(1-x)\, dt + \int_x^1 q(t)\cdot (1-t)x\, dt + (1-x)}{1-x^2}\nonumber \\ \textsf{s.t.} \quad & q(x) = \sum_{i=1}^N P_i \cdot f_N^i(x)\nonumber\\ & f_N^i(x) = N\cdot \binom{N-1}{i-1}\cdot x^{i-1} (1-x)^{N-i} \quad \forall i\in [N] \nonumber\\ &\sum_{i=1}^N P_i = 1 \label{eq:PiSum1}\\ &P_i \geq 0 \quad \forall i\in [N] \label{eq:PiNonZero} \end{align} } \\ \hline \end{tabular} \end{center} We now solve the optimization problem $\mathsf{PO}$ numerically for different numbers of samples $N$. The inner minimization problem must be solved accurately, so that it precisely reflects the approximation ratio of the order statistic mechanism; we use binary search to find its optimum.
To check whether \[\inf_{x\in [0,1)} \frac{\int_0^x q(t)\cdot t(1-x)\, dt + \int_x^1 q(t)\cdot (1-t)x\, dt + (1-x)}{1-x^2} \ge r,\] it is equivalent to check whether \[{\int_0^x q(t)\cdot t(1-x)\, dt + \int_x^1 q(t)\cdot (1-t)x\, dt + (1-x)} - r(1-x^2) \ge 0 \quad \forall x \in [0, 1].\] Notice that $q(t)$ is a polynomial of degree at most $N - 1$, so we only need to find the minimum of a single-variable polynomial over $[0, 1]$, which can be done efficiently by finding the roots of its derivative. We run the binary search for $100$ iterations, so the error caused by the binary search is at most $2^{-100}$, which is much smaller than the floating-point error and can be ignored. We then use empirical search heuristics for the parameters of the outer maximization problem. Unlike the inner minimization problem, the outer maximization need not be solved exactly, since the value we obtain is simply the exact ratio of the order statistic mechanism we have found. The code can be found at our \href{https://github.com/BilateralTradeAnonymous/On-the-Optimal-Fixed-Price-Mechanism-in-Bilateral-Trade}{GitHub repository}\begin{footnotesize}(\texttt{https://github.com/BilateralTradeAnonymous/On-the-Optimal-Fixed-Price-Mechanism-in-Bilateral-Trade})\end{footnotesize}. \subsection{General Instance} \label{appendix:generalexperiment} We again formally write down the optimization problem that characterizes the optimal order statistic mechanism with $N$ samples. Similarly, as we proved in Section~\ref{subsec:generalsmallsample}, suppose the following optimization problem $\mathsf{QO}$ achieves its maximum $\textsf{OPT}_{\mathsf{QO}}$ at $(P_1^{*}, P_2^{*}, \dots, P_N^{*})$, and let $P^*$ be the distribution over $[N]$ such that $\Pr_{x\sim P^*}[x = i] = P_i^*$. Then $P^*$ is the optimal order statistic mechanism in the general setting with $N$ samples, and its approximation ratio is exactly $\textsf{OPT}_{\mathsf{QO}}$.
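The inner feasibility check for $\mathsf{PO}$ can be sketched as follows (a simplified version of the procedure described above; function and variable names are ours, not the repository's):

```python
from numpy.polynomial import Polynomial as P

def sym_ratio_exact(q, iters=60):
    """Binary search for the symmetric approximation ratio of a polynomial
    quantile density q. Feasibility of a candidate ratio r is decided by
    minimizing a single-variable polynomial over [0, 1] via the roots of
    its derivative, as described in the text."""
    x = P([0.0, 1.0])
    A = (q * x).integ()                  # A(x) = int_0^x q(t) t dt
    B = (q * (1 - x)).integ()            # B(x) = int_0^x q(t)(1-t) dt
    num = (1 - x) * A + x * (B(1.0) - B) + (1 - x)

    def feasible(r):                     # is num(x) - r(1 - x^2) >= 0 on [0, 1]?
        h = num - r * (1 - x**2)
        pts = [0.0, 1.0] + [z.real for z in h.deriv().roots()
                            if abs(z.imag) < 1e-9 and 0.0 <= z.real <= 1.0]
        return min(h(p) for p in pts) >= -1e-12

    lo, hi = 0.0, 1.0
    for _ in range(iters):               # binary search on the ratio r
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo
```

For example, for the density $q(t) = 2(1-t)$ of the smaller of two samples this returns $\approx 2/3$, matching the ERM analysis above.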
Again, since the c.d.f. of the order statistic mechanism $Q(x)$ is differentiable, we use $q(t)\, dt$ instead of $dQ(t)$, where $q(x)$ is the p.d.f. of the order statistic mechanism. \begin{center} \begin{tabular}{|c|} \hline The Optimization Problem $\mathsf{QO}$ \\ \hline \parbox{13cm}{ \begin{align} \max_{P_1,\dots, P_N} \quad &{\min_{x\in [0,1]}{\int_0^x q(t)\cdot t\, dt + 1-x}}\nonumber \\ \textsf{s.t.} \quad & q(x) = \sum_{i=1}^N P_i \cdot f_N^i(x)\nonumber\\ & f_N^i(x) = N\cdot \binom{N-1}{i-1}\cdot x^{i-1} (1-x)^{N-i} \quad \forall i\in [N] \nonumber\\ &\sum_{i=1}^N P_i = 1 \label{eq:APiSum1}\\ &P_i \geq 0 \quad \forall i\in [N] \label{eq:APiNonZero} \end{align} } \\ \hline \end{tabular} \end{center} We solve this optimization problem numerically for different numbers of samples $N$. Again, the inner minimization problem can be solved efficiently by computing the zeros of its derivative, and we search for the parameters of the outer maximization problem to obtain a good enough solution. The code can be found at our \href{https://github.com/BilateralTradeAnonymous/On-the-Optimal-Fixed-Price-Mechanism-in-Bilateral-Trade}{GitHub repository}. \end{document}
\begin{document} \title{Federated Learning with Classifier Shift for Class Imbalance} \begin{abstract} Federated learning aims to learn a global model collaboratively while the training data belongs to different clients and is not allowed to be exchanged. However, the statistical heterogeneity challenge on non-IID data, such as class imbalance in classification, causes client drift and significantly reduces the performance of the global model. This paper proposes a simple and effective approach named FedShift, which adds a shift to the classifier output during the local training phase to alleviate the negative impact of class imbalance. We theoretically prove that the classifier shift in FedShift makes the local optimum consistent with the global optimum and ensures the convergence of the algorithm. Moreover, our experiments indicate that FedShift significantly outperforms other state-of-the-art federated learning approaches on various datasets in terms of accuracy and communication efficiency. \end{abstract} \section{Introduction} There are already numerous edge devices, such as smartphones and IoT devices, that collect valuable raw data, and one expects to use these data to complete intelligent tasks such as image recognition or text generation. However, deep learning, the most effective algorithm for accomplishing these tasks, requires a huge amount of data to train the model, making it challenging to learn a good enough model from the data owned by a single edge device. Besides, due to data privacy, data protection regulations \citep{voigt2017eu}, and the massive overhead of data transmission, it is unrealistic to aggregate data from different clients (edge devices) on a server for training. Therefore, federated learning (FL) \citep{kairouz2019advances} has emerged to solve the problem of jointly learning a global model without sharing private data.
Although federated learning has shown good performance in many applications \citep{kaissis2020secure,liu2020fedvision}, there are still several important challenges that require attention, namely privacy, communication cost, and statistical heterogeneity \citep{ji2021emerging}. Statistical heterogeneity means that client data is non-IID (not independent and identically distributed). \citet{zhao2018federated} show that the accuracy of federated learning algorithms decreases significantly in the case of non-IID data. Many methods have been proposed to address the challenge of statistical heterogeneity. FedProx \citep{MLSYS2020_38af8613} introduces a proximal term to constrain the update of the local model, and SCAFFOLD \citep{karimireddy2020scaffold} corrects the gradient of each local update to reduce the variance. However, these methods do not bring significant improvement because they only implicitly deal with the fundamental dilemma caused by statistical heterogeneity, namely that the optimum of the local objective is inconsistent with the optimum of the global objective. In this work, we propose FedShift to explicitly resolve this fundamental dilemma. FedShift is a simple and effective approach which adds a shift to the classifier output, computed from the client's category distribution, so that the local optimal models satisfy the global optimum. We also prove convergence results for FedShift in the strongly convex and non-convex cases and compare it with FedAvg, which does not have the classifier shift. Numerous experiments are conducted to evaluate the effectiveness of FedShift, demonstrating that FedShift outperforms other state-of-the-art federated learning algorithms in test accuracy and communication efficiency on various datasets, including Cifar10, Cinic10 and Tiny-Imagenet.
\section{Related Works} FedAvg \citep{mcmahan2017communication} is the benchmark method in federated learning, which has demonstrated reliability in image classification and language modeling tasks. Each round of FedAvg mainly contains two phases: client update and server aggregation. First, each client selected to participate in training downloads the latest global model from the server and updates the model locally using stochastic gradient descent. Then, the server collects the updated models from each client and aggregates them into a new global model by averaging the model weights. Unlike the privacy and communication challenges, the statistical heterogeneity challenge is a unique and popular issue in the federated learning paradigm \citep{kairouz2019advances}. According to the two phases of federated learning mentioned above, the contributions of these studies can be roughly divided into local update improvements and aggregation improvements. Our work is an improvement in the local update phase, so it can be combined with existing aggregation improvements without any conflict. As for aggregation improvements on non-IID data, there is a series of related studies. PFNM \citep{yurochkin2019bayesian} and FedMA \citep{wang2020federated} apply the Bayesian non-parametric mechanism to study the permutation invariance of the neural network, and match the neurons of client neural networks to the global neurons. Moreover, methods such as adaptive weights \citep{yeganeh2020inverse}, attention mechanisms \citep{ji2019learning}, and normalization \citep{wang2020tackling} are also used to improve the aggregation effect under statistical heterogeneity. There are also many studies that alleviate the negative effects of statistical heterogeneity in the local update phase. \citet{MLSYS2020_38af8613} propose the FedProx algorithm, which adds a regularization term to the loss function of the client.
This proximal term uses the $\ell_2$ norm to explicitly constrain the local model to stay close to the latest global model, preventing the local updates of non-IID clients from drifting too far apart. However, this explicit constraint could prevent FedProx from quickly finding a better model in the early stage. Similarly, based on multi-task learning, FedCurv \citep{shoham2019overcoming} adds penalty terms for changes in important parameters related to other clients during local training. In addition, SCAFFOLD \citep{karimireddy2020scaffold} introduces control variates to correct the gradient of each local update and make it match the global update direction. In each round, the control variates are updated as estimates of the difference between the update of the server model and that of the local model. However, these methods only implicitly reduce the impact of the inconsistency of objectives, which is the fundamental dilemma of statistical heterogeneity, rather than eliminating it. This is exactly our motivation for proposing FedShift. In detail, we formulate the problem and propose our method FedShift in Section \ref{method}, which also includes the convergence analysis of FedShift under different assumptions and its superiority compared with FedAvg. In Section \ref{experiments}, we report our experimental results, comparing the accuracy and communication efficiency of our algorithm with other algorithms and studying the influence of different settings, such as the degree of heterogeneity, the number of local epochs, and the number of clients. Finally, Section \ref{conclusion} concludes the paper.
\section{FedShift: Federated Learning with Classifier Shift} \label{method} \subsection{Problem Formulation} \label{formulation} In federated learning, the global objective is to solve the following optimization problem: \begin{equation}\label{obj} \min _{\boldsymbol{w}}\left[L(\boldsymbol{w}) \triangleq \sum_{i =1 }^{N} \frac{|\mathcal{D}_i|}{|\mathcal{D}|} L_{i}(\boldsymbol{w})\right], \end{equation} where $L_{i}(\boldsymbol{w})=\mathbb{E}_{(\boldsymbol{x}, y) \sim \mathcal{D}_{i}}\left[\ell_{i}(f(\boldsymbol{w} ;\boldsymbol{x}), y)\right]$ is the empirical loss of the $i$-th client that owns the local dataset $\mathcal{D}_{i}$, and $\mathcal{D} \triangleq \bigcup_{i=1}^N \mathcal{D}_{i}$ is a virtual entire dataset that includes all clients' local data. $f(\boldsymbol{w} ;\boldsymbol{x})$ is the output of the model $\boldsymbol{w}$ for a given input $\boldsymbol{x}$, and $\ell_{i}$ denotes the loss function of the $i$-th client. FL thus aims to learn a global model $\boldsymbol{w}$ that performs well on the entire dataset $\mathcal{D}$. However, since local data cannot be communicated, each client learns a local model $\boldsymbol{w}_i$ on its local dataset by minimizing the empirical loss $L_{i}(\boldsymbol{w}_i)$, and the server then aggregates the local models to obtain a global model $\bar{\boldsymbol{w}}$. In the FedAvg algorithm \citep{mcmahan2017communication}, each client uses stochastic gradient descent (SGD) to update the local model $\boldsymbol{w}_i^{(t,\tau)}$ starting from $\boldsymbol{w}_{i}^{(t,0)} \triangleq \bar{\boldsymbol{w}}^{t-1}$, the latest global model.
The local update process can be formulated as follows: \begin{equation} \boldsymbol{w}_{i}^{(t,\tau)}=\boldsymbol{w}_{i}^{(t,\tau-1)}-\eta \nabla_{\boldsymbol{w}} \ell_i(\boldsymbol{w}_{i}^{(t,\tau-1)},\mathcal{B}_{i}^{(t,\tau)}) \end{equation} where $\eta$ is the client learning rate and $\boldsymbol{w}_{i}^{(t,\tau)}$ denotes the local model of client $i$ after the $\tau$-th local update in the $t$-th communication round. Also, $\ell_i(\boldsymbol{w}_{i}^{(t,\tau-1)},\mathcal{B}_{i}^{(t,\tau)}) \triangleq \sum_{(\boldsymbol{x}, y) \sim \mathcal{B}_{i}^{(t,\tau)}} \frac{1}{|\mathcal{B}_{i}^{(t,\tau)}|} \left[\ell_{i}(f( \boldsymbol{w}_{i}^{(t,\tau-1)} ;\boldsymbol{x}), y)\right]$, where $\mathcal{B}_{i}^{(t,\tau)}$ represents the $\tau$-th mini-batch sampled from the local dataset $\mathcal{D}_{i}$ in the $t$-th communication round. The server then updates the global model at the end of each communication round by averaging the local model updates of all clients: \begin{equation} \bar{\boldsymbol{w}}^{t} = \bar{\boldsymbol{w}}^{t-1} + \sum_{i =1 }^{N} \frac{|\mathcal{D}_i|}{|\mathcal{D}|} (\boldsymbol{w}_{i}^{(t,\tau_i)}-\boldsymbol{w}_{i}^{(t,0)}) \end{equation} where $\tau_i$ is the number of local iterations completed by client $i$ under SGD with a fixed batch size. \paragraph{Client drift} As mentioned in \citep{karimireddy2020scaffold,zhao2018federated}, client drift occurs during federated learning due to statistical heterogeneity ($P_i(\boldsymbol{x},y) \neq P(\boldsymbol{x},y)$), where $P_i$ denotes the probability distribution of $(\boldsymbol{x},y)$ on the local client and $P$ denotes the probability distribution of the global data. Let $\boldsymbol{w}^{*}$ be the global optimum of $L(\boldsymbol{w})$ and $\boldsymbol{w}_{i}^{*}$ be the optimum of each client's empirical loss $L_i(\boldsymbol{w})$.
In general, we have $\boldsymbol{w}_{i}^{*} \neq \boldsymbol{w}^{*}$ and $\sum_{i =1 }^{N} \frac{|\mathcal{D}_i|}{|\mathcal{D}|} \boldsymbol{w}_{i}^{*} \neq \boldsymbol{w}^{*}$ due to the heterogeneous data distributions and Eq.~\ref{obj}. Therefore, the direction of each client's local update deviates from the global update. This deviation accumulates over multiple SGD iterations and eventually leads to a drift between $\boldsymbol{w}^{t}$ (the true global update) and $\bar{\boldsymbol{w}}^t$ (the average of the client updates aggregated by the server). \paragraph{Class Imbalance} For classification tasks, the statistical heterogeneity in federated learning is usually caused by class imbalance. Suppose the label space is $Y = \{1,2,\dots,K\}$ and $P(y)$ is the probability distribution over classes. The distribution of training data can be factored as $P(\boldsymbol{x},y) = P(\boldsymbol{x}|y)P(y)$ and $P_i(\boldsymbol{x},y) = P_i(\boldsymbol{x}|y)P_i(y)$, where $P(\boldsymbol{x}|y)$ is the conditional probability distribution of class $y$ and the subscript $i$ indicates the distribution of client $i$. In many real-world federated learning scenarios, the data collected by different clients (such as IoT cameras) usually have approximately the same conditional distribution for each class, i.e., $P_i(\boldsymbol{x}|y) \approx P(\boldsymbol{x}|y)$. Therefore, the statistical heterogeneity of federated learning often appears as class imbalance, that is, $P_i(y) \neq P(y)$. \subsection{Method} \label{shift} To alleviate the degradation of model performance due to class imbalance, we propose FedShift, shown in Algorithm \ref{fedshift}. We first present the intuition behind the proposed method.
Note that $\boldsymbol{w}_i^{*}\triangleq \arg\min_{\boldsymbol{w}} \mathbb{E}_{(\boldsymbol{x},y)\sim P_i} \left[ \ell_i ( f(\boldsymbol{w};\boldsymbol{x}),y) \right]$ is not the optimum of the global optimization problem ($\min_{\boldsymbol{w}} \mathbb{E}_{(\boldsymbol{x},y)\sim P} \left[ \ell ( f(\boldsymbol{w};\boldsymbol{x}),y) \right]$) because of the statistical heterogeneity ($P\neq P_i$), even though the global evaluation function is consistent with the local one ($\ell_i=\ell$). An intuitive remedy is to resample or reweight the client data so that $P = P_i$. However, since some classes have few or no samples, this approach does not achieve good results in practice. Empirical results in Section \ref{experiments} show that reweighting is ineffective and can even bring a severe drop in accuracy compared to FedAvg. \begin{algorithm}[htbp] \label{fedshift} \caption{FedShift} \LinesNumbered \KwIn{number of communication rounds T, number of clients N, the fraction of clients C, number of local epochs E, batch size B, learning rate $\eta$, the global label distribution $P(y)$ } \KwOut{the global model $\boldsymbol{w}^{T}$} initialize $\boldsymbol{w}^{0}$ \\ $m \gets \max( \lfloor C*N \rfloor, 1)$ \\ \For{communication round $t = 0,1,2,\dots,T-1$}{ $M_t \gets $ randomly select a subset containing $m$ clients \\ \ForEach{client $i \in M_t$}{ $\boldsymbol{w}_{i}^{t} = \boldsymbol{w}^{t}$ \\ $\boldsymbol{w}_i^{t+1} \gets $ \bf{LocalUpdate}($\boldsymbol{w}_{i}^{t})$\\ } $\boldsymbol{w}^{t+1} =\boldsymbol{w}^{t} + \sum_{i \in M_t } \frac{|\mathcal{D}_i|}{|\mathcal{D}|} (\boldsymbol{w}_{i}^{t+1}-\boldsymbol{w}_{i}^{t}) $\\ } \bf{LocalUpdate} ($\boldsymbol{w}_{i}^{t}$):\\ \For{epoch $e = 1,2,\dots,E$}{ \ForEach{batch $\mathcal{B}_{i}^{t} = (\boldsymbol{x},y) \in \mathcal{D}_{i}$}{ $ \tilde{\ell_i}(\boldsymbol{w}_i^{t},\mathcal{B}_{i}^{t}) = \sum_{(\boldsymbol{x}, y)} \frac{1}{|\mathcal{B}_{i}^{t}|} \left[ \ell_i\left(
f(\boldsymbol{w}_i^{t};\boldsymbol{x}) + \boldsymbol{s}_i,y \right)\right]$ \tcp*{$\boldsymbol{s}_i$ follows Eq.\ref{si}} $ \boldsymbol{w}_{i}^{t}=\boldsymbol{w}_{i}^{t}-\eta \nabla \tilde{\ell_i}(\boldsymbol{w}_i^{t},\mathcal{B}_{i}^{t}) $\\ } } return $\boldsymbol{w}_i^{t}$\\ \end{algorithm} Different from reweighting, FedShift modifies the local optimization objective of each client so that $ \tilde{\boldsymbol{w}_i^*} = \arg\min_{\boldsymbol{w}} \mathbb{E}_{(\boldsymbol{x},y)\sim P_i} \left[ \tilde{\ell_i} ( \boldsymbol{f}(\boldsymbol{w};\boldsymbol{x}),y) \right]$ is also the optimum of the global optimization problem $\min_{\boldsymbol{w}} \mathbb{E}_{(\boldsymbol{x},y)\sim P} \left[ \ell ( \boldsymbol{f}(\boldsymbol{w};\boldsymbol{x}),y) \right]$. Let $\tilde{\ell_i}$ denote the modified local optimization objective of client $i$. In FedShift, we add the shift $\boldsymbol{s}_i$ to the classifier output of the model to modify the local objective of client $i$: \begin{equation} \label{newf} \tilde{\ell_i} = \ell_i( \tilde{\boldsymbol{f}}(\boldsymbol{w}_i^{t};\boldsymbol{x}),y) = \ell_i\left( \boldsymbol{f}(\boldsymbol{w}_i^{t};\boldsymbol{x}) + \boldsymbol{s}_i,y \right) \end{equation} The shift $\boldsymbol{s}_i = [s_{i,1}, s_{i,2}, \dots, s_{i,K}]$ is computed from the local class distribution and added to the classifier at the end of the network, as follows: \begin{equation} \label{si} s_{i,k} = \ln\left(\frac{P_i(y=k)}{P(y=k)}\right) \qquad k=1,2,\dots,K \end{equation} where $P(y=k)=\sum_{i=1}^{N}\frac{|\mathcal{D}_i|}{|\mathcal{D}|}P_i(y=k)$. \footnote{The probabilities can be computed using the secure aggregation algorithm \citep{bonawitz2016practical} at the beginning of the learning process without leaking any client information.
More specifically, we use Laplace smoothing on each client to approximate the probabilities by frequencies and to guarantee secure computation of the class probabilities.} We then state Theorem \ref{thm1}, which shows the advantage of FedShift theoretically. \begin{thm} \label{thm1} \textit{For FedShift, by adding the shift $\boldsymbol{s}_i$ to the output of the model, the local optimum $\boldsymbol{w}_i^*$ is also the global optimum of $\min_{\boldsymbol{w}} \mathbb{E}_{(\boldsymbol{x},y)\sim P} \left[ \ell ( f(\boldsymbol{w};\boldsymbol{x}),y) \right]$.} \end{thm} \textbf{Proof:} For classification, cross-entropy is the most commonly used loss function, and the output of the neural network usually passes through a softmax function to produce the predicted class probabilities. Therefore, by definition, we have \begin{equation} \boldsymbol{w}_{i}^{*} = \arg\min_{\boldsymbol{w}} \mathbb{E}_{(\boldsymbol{x},y)\sim P_i} \left[ \ell_i ( \tilde{f}(\boldsymbol{w};\boldsymbol{x}),y) \right] = \arg\min _{\boldsymbol{w}} \mathbb{E}_{(\boldsymbol{x},y) \sim P_i}\left[-\sum_{k=1}^{K} \mathbbm{1}_{y=k} \log \frac{e^{\tilde{f}_k(\boldsymbol{w};\boldsymbol{x})}} {\sum_{j=1}^{K}e^{\tilde{f}_j(\boldsymbol{w};\boldsymbol{x})}} \right] \end{equation} Let $q_i(y=k|\boldsymbol{x};\boldsymbol{w}) \triangleq \frac{e^{\tilde{f}_k(\boldsymbol{w};\boldsymbol{x})}} {\sum_{j=1}^{K}e^{\tilde{f}_j(\boldsymbol{w};\boldsymbol{x})}}$; then the optimal model $\boldsymbol{w}_i^{*}$ must satisfy $q_i(y=k|\boldsymbol{x};\boldsymbol{w}_{i}^{*}) = P_i(y=k|\boldsymbol{x})$.
And according to Bayes' theorem, we have $ p_i(y=k|\boldsymbol{x}) = \frac{p_i(\boldsymbol{x}|y=k)p_i(y=k)}{\sum_{j=1}^{K}p_i(\boldsymbol{x}|y=j)p_i(y=j)} $, so \begin{equation} \tilde{f}_k(\boldsymbol{w}_{i}^{*};\boldsymbol{x}) = \ln(p_i(\boldsymbol{x}|y=k)p_i(y=k))+const , \quad k=1,2,\dots,K \end{equation} Next, consider the original output, obtained by removing the classifier shift $\boldsymbol{s}_i$ of client $i$ according to Eq.~\ref{newf}: \begin{equation} \begin{aligned} f_k(\boldsymbol{w}_i^{*};\boldsymbol{x}) & = \tilde{f}_k(\boldsymbol{w}_i^{*};\boldsymbol{x}) - s_{i,k} \\ {} & =\ln(p_i(\boldsymbol{x}|y=k)p_i(y=k))-\ln\left(\frac{p_i(y=k)}{p(y=k)}\right)+const \\ {} & =\ln(p(\boldsymbol{x}|y=k)p(y=k))+const \\ \end{aligned} \end{equation} where the last equality uses the class-imbalance assumption $p_i(\boldsymbol{x}|y=k) = p(\boldsymbol{x}|y=k)$. Hence \begin{equation} q(y=k|\boldsymbol{x};\boldsymbol{w}_i^{*}) = \frac{e^{f_k(\boldsymbol{w}_i^{*};\boldsymbol{x})}} {\sum_{j=1}^{K}e^{f_j(\boldsymbol{w}_i^{*};\boldsymbol{x})}} = p(y=k|\boldsymbol{x}), \quad k=1,2,\dots,K \end{equation} which means that $\boldsymbol{w}_{i}^{*}$ attains the optimum of $\min_{\boldsymbol{w}} \mathbb{E}_{(\boldsymbol{x},y)\sim P} \left[ \ell ( f(\boldsymbol{w};\boldsymbol{x}),y) \right]$, i.e., the global optimum. $ \qedsymbol$ Note that FedProx can also be viewed as making such a modification, namely $\tilde{\ell_i}(\tilde{\boldsymbol{f}}(\boldsymbol{w};\boldsymbol{x}),y) = \ell_i(\boldsymbol{f}(\boldsymbol{w};\boldsymbol{x}),y)+\lambda\|\boldsymbol{w}-\bar{\boldsymbol{w}}\|_2^2$, but it does not guarantee the property shown in Theorem \ref{thm1}. FedShift is designed as a simple and effective approach based on FedAvg, introducing only lightweight but novel modifications in the local training phase. Benefiting from these lightweight modifications, FedShift does not compromise data privacy or add any communication cost, and it can potentially be combined with other aggregation optimization approaches.
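The proof above can be checked numerically: if the shifted logits fit the local posterior exactly, subtracting the shift of Eq.~\ref{si} recovers the global posterior. A minimal NumPy sketch, with all probability values hypothetical:

```python
import numpy as np

# Numerical check of Theorem 1's mechanism. All numbers are hypothetical:
# shared likelihoods p(x|y) at a fixed input x, a skewed local prior
# P_i(y), and the global prior P(y).
p_x_given_y = np.array([0.2, 0.5, 0.3])   # p(x|y=k)
p_global = np.array([1/3, 1/3, 1/3])      # P(y=k)
p_local = np.array([0.7, 0.2, 0.1])       # P_i(y=k)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A model optimal for the local data has shifted logits
# f~_k = ln(p(x|y=k) P_i(y=k)), up to an additive constant.
f_tilde = np.log(p_x_given_y * p_local)

# Classifier shift of Eq. (si): s_{i,k} = ln(P_i(y=k) / P(y=k)).
s_i = np.log(p_local / p_global)

# Removing the shift yields logits whose softmax is the *global* posterior.
q_global = softmax(f_tilde - s_i)

# Ground-truth global posterior p(y|x) proportional to p(x|y) P(y).
p_posterior = p_x_given_y * p_global
p_posterior /= p_posterior.sum()
print(np.allclose(q_global, p_posterior))  # True
```

The additive constant in the logits cancels inside the softmax, which is why the check is exact here.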
\subsection{Convergence Analysis} \label{proof} The properties outlined in Theorem \ref{thm1} motivate our convergence analysis of FedShift. We present theoretical results for strongly convex and for non-convex functions. We first give some common assumptions on the function ${L}_i$ and on $\nabla \ell_i(\boldsymbol{w},\mathcal{B}_{i})$, the unbiased stochastic gradient of ${L}_i$. \begin{assumption} \label{assumption1} \textit{For all $i$, ${L}_i$ is $\mu$-strongly convex and $\beta$-smooth:} \begin{equation*} \mu\text{-strongly convex: } {L}_i(\boldsymbol{v}) \geq {L}_i(\boldsymbol{w})+\langle(\boldsymbol{v}-\boldsymbol{w}), \nabla {L}_i(\boldsymbol{w})\rangle + \frac{\mu}{2}||\boldsymbol{v}-\boldsymbol{w}||_2^2 \end{equation*} \begin{equation*} \beta\text{-smooth: } {L}_i(\boldsymbol{v}) \leq {L}_i(\boldsymbol{w})+\langle(\boldsymbol{v}-\boldsymbol{w}), \nabla {L}_i(\boldsymbol{w})\rangle + \frac{\beta}{2}||\boldsymbol{v}-\boldsymbol{w}||_2^2 \end{equation*} \end{assumption} \begin{assumption} \label{assumption2} \textit{Bounded variances and second moments: There exist constants $\sigma > 0$ and $G > 0$ such that} \begin{equation*} \mathbb{E}_{\mathcal{B}_{i} \sim \mathcal{D}_{i}}\left\|\nabla \ell_{i}\left(\boldsymbol{w} ; \mathcal{B}_{i}\right)-\nabla {L}_{i}(\boldsymbol{w})\right\|_2^{2} \leq \sigma^{2}, \forall \boldsymbol{w}, \forall i \end{equation*} \begin{equation*} \mathbb{E}_{\mathcal{B}_{i} \sim \mathcal{D}_i} \left[ \| \nabla \ell_i(\boldsymbol{w},\mathcal{B}_{i}) \|_2^2\right] \leq G^2, \forall \boldsymbol{w}, \forall i \end{equation*} \end{assumption} Then, we give a lemma bounding the gap between the local model and the local optimum; the detailed proof is in Appendix A.
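The $(1-\eta\mu)$ contraction factor that appears in the bounds below can be sanity-checked on a toy $\mu$-strongly convex quadratic; the sketch uses noise-free full-gradient steps, so only the deterministic part of the bound shows up (the constants are hypothetical):

```python
import numpy as np

# Gradient descent on the mu-strongly convex quadratic L(w) = (mu/2)||w||^2,
# whose optimum is w* = 0; each step contracts ||w - w*|| by (1 - eta*mu).
# Noise-free steps, so the sigma/G terms of the stochastic bound vanish.
mu, eta = 2.0, 0.1
w = np.array([5.0, -3.0])
for _ in range(5):
    dist_before = np.linalg.norm(w)   # ||w_t - w*||
    w = w - eta * mu * w              # gradient step: grad L(w) = mu * w
    assert np.isclose(np.linalg.norm(w), (1 - eta * mu) * dist_before)
```

With stochastic gradients, the per-step contraction is perturbed by the gradient noise, which is exactly what the additive $\frac{\eta}{\mu}G^2$ term in the lemma accounts for.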
\begin{lem} \label{lem1} \textit{Under Assumptions \ref{assumption1} and \ref{assumption2}, we have} $ \mathbb{E}(||\boldsymbol{w}_{i}^{t+1}-\boldsymbol{w}_{i}^{*}||_2^2) \leq (1-\eta\mu)^{I+1} ||\bar{\boldsymbol{w}}^{t}-\boldsymbol{w}_{i}^{*}||_2^2 + \frac{\eta}{\mu}G^2$, \textit{where $I$ denotes the number of SGD iterations performed by each client in each round.} \end{lem} \begin{thm} \label{FedShiftthm} \textit{Under Assumptions \ref{assumption1} and \ref{assumption2}, in FedShift, we have $ \mathbb{E}(||\bar{\boldsymbol{w}}^{t+1}-\boldsymbol{w}^{*}||_2^2) \leq (1-\eta\mu)^{(I+1)t} ||\bar{\boldsymbol{w}}^{0}-\boldsymbol{w}^{*}||_2^2 + \frac{\eta\left[1-(1-\eta\mu)^{(I+1)(t+1)}\right]}{\mu\left[1-(1-\eta\mu)^{I+1}\right]}G^2 $, where $\bar{\boldsymbol{w}}^{t+1}\triangleq \sum_{i =1}^{N}\frac{\boldsymbol{w}_{i}^{t+1}}{N}$ and $\bar{\boldsymbol{w}}^{0}$ is the initial global model.} \end{thm} \textbf{Proof:} By Theorem \ref{thm1} and the strong convexity of ${L}_i$, we can derive that $\boldsymbol{w}_i^*=\boldsymbol{w}^*$.
Then, we have \begin{equation} \begin{aligned} \mathbb{E}(\|\bar{\boldsymbol{w}}^{t+1}-\boldsymbol{w}^{*}\|_2^2) & = \mathbb{E}(\|\sum_{i =1}^{N}\frac{\boldsymbol{w}_{i}^{t+1}}{N}-\boldsymbol{w}^{*}\|_2^2) = \mathbb{E}( \| \frac{1}{N}\sum_{i=1}^{N}(\boldsymbol{w}_{i}^{t+1}-\boldsymbol{w}^{*}) \|_2^2) &&\\ & \stackrel{(a)}{\leq} \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}( \| (\boldsymbol{w}_{i}^{t+1}-\boldsymbol{w}_{i}^{*}) \|_2^2) \stackrel{(b)}{\leq} (1-\eta\mu)^{I+1} \|\bar{\boldsymbol{w}}^{t}-\boldsymbol{w}_{i}^{*}\|_2^2 + \frac{\eta}{\mu}G^2 &&\\ & \stackrel{\text{(recurrence)}}{\leq} (1-\eta\mu)^{(I+1)t} ||\bar{\boldsymbol{w}}^{0}-\boldsymbol{w}^{*}||_2^2 + \frac{\eta\left[1-(1-\eta\mu)^{(I+1)(t+1)}\right]}{\mu\left[1-(1-\eta\mu)^{I+1}\right]}G^2 &&\\ \end{aligned} \end{equation} where (a) follows from Jensen's inequality and $\boldsymbol{w}_i^*=\boldsymbol{w}^*$, and (b) follows from Lemma \ref{lem1}.$ \qedsymbol$ Theorem \ref{FedShiftthm} shows that, under the strong convexity assumption and benefiting from the classifier shift, the global model of FedShift converges to the global optimum given enough iterations and communication rounds and a decaying learning rate. However, since Theorem \ref{thm1} does not hold for FedAvg, FedAvg does not enjoy such a property; instead, we can derive a lower bound on the gap between the global model and the global optimum in FedAvg, stated as Theorem \ref{avgthm}. \begin{thm} \label{avgthm} \textit{For FedAvg, in the case of non-IID client data, there is a gap between the local optima and the global optimum. Define $\bar{\boldsymbol{w}}^{*}\triangleq \sum_{i=1}^{N}\frac{\boldsymbol{w}_i^{*}}{N}$.
If we assume that $||\bar{\boldsymbol{w}}^{*}-\boldsymbol{w}^{*}||_2=\delta>0$, $||\boldsymbol{w}_i^{*}-\boldsymbol{w}^{*}||_2=\zeta>0$, and $||\bar{\boldsymbol{w}}^{0}-\boldsymbol{w}^{*}||_2=\gamma >0$, then under Assumptions \ref{assumption1} and \ref{assumption2}, we have $ \mathbb{E}(||\bar{\boldsymbol{w}}^{t+1}-\boldsymbol{w}^{*}||_2^2) \geq \frac{\delta^2}{2}$ whenever $I$ satisfies $I \geq \max\left\{ \frac{\ln(\frac{\frac{\delta^2}{16}-\frac{\eta}{\mu}G^2}{(\frac{5\delta}{4} +\zeta)^2})}{\ln(1-\eta\mu)} -1, \frac{\ln(\frac{\frac{\delta^2}{16}-\frac{\eta}{\mu}G^2}{(\zeta+\gamma)^2})}{\ln(1-\eta\mu)} -1 \right\}$.} \end{thm} \textbf{Proof:} Considering the first local update, from Lemma \ref{lem1} we get $\mathbb{E}(\| \boldsymbol{w}_{i}^{1} - \boldsymbol{w}_{i}^{*}\|_2^2) \leq (1-\eta\mu)^{I+1} \|\bar{\boldsymbol{w}}^{0}-\boldsymbol{w}_{i}^{*}\|_2^2 + \frac{\eta}{\mu}G^2 \leq (1-\eta\mu)^{I+1} \left( \|\bar{\boldsymbol{w}}^{0}-\boldsymbol{w}^{*}\|_2 + \|\boldsymbol{w}^{*}-\boldsymbol{w}_{i}^{*}\|_2 \right)^2+ \frac{\eta}{\mu}G^2 \leq (1-\eta\mu)^{I+1} \left( \gamma + \zeta \right)^2+ \frac{\eta}{\mu}G^2 \leq \frac{\delta^2}{16}$, where the last step uses the lower bound on $I$. Then, we prove by induction that $\mathbb{E}(\| \boldsymbol{w}_{i}^{t} - \boldsymbol{w}_{i}^{*}\|_2^2)\leq \frac{\delta^2}{16}$ holds for all rounds; suppose it holds at round $t$.
Then, we can derive \begin{equation} \label{lemme2} \begin{aligned} \mathbb{E}(\| \boldsymbol{w}_{i}^{t+1} - \boldsymbol{w}_{i}^{*}\|_2^2) & \leq (1-\eta\mu)^{I+1} \|\bar{\boldsymbol{w}}^{t}-\boldsymbol{w}_{i}^{*}\|_2^2 + \frac{\eta}{\mu}G^2 \\ & \leq (1-\eta\mu)^{I+1} \left( \|\bar{\boldsymbol{w}}^{t}-\boldsymbol{w}^{*}\|_2 + \|\boldsymbol{w}^{*}-\boldsymbol{w}_{i}^{*}\|_2 \right)^2+ \frac{\eta}{\mu}G^2 \\ & \leq (1-\eta\mu)^{I+1} \left( \|\bar{\boldsymbol{w}}^{t}-\bar{\boldsymbol{w}}^{*}\|_2 + \|\bar{\boldsymbol{w}}^{*}-\boldsymbol{w}^{*}\|_2 +\zeta \right)^2+ \frac{\eta}{\mu}G^2 \\ & \leq (1-\eta\mu)^{I+1} \left( \frac{1}{N}\sum_{i =1}^N\|\boldsymbol{w}_{i}^{t}-\boldsymbol{w}_{i}^{*}\|_2 + \delta +\zeta \right)^2+ \frac{\eta}{\mu}G^2 \\ & \leq (1-\eta\mu)^{I+1} \left( \frac{\delta}{4} + \delta +\zeta \right)^2+ \frac{\eta}{\mu}G^2 \leq \frac{\delta^2}{16} \\ \end{aligned} \end{equation} Then, we have \begin{equation} \begin{aligned} \mathbb{E}(\|\bar{\boldsymbol{w}}^{t+1}-\boldsymbol{w}^{*}\|_2^2) & = \mathbb{E}(\|\bar{\boldsymbol{w}}^{t+1}-\bar{\boldsymbol{w}}^{*} + \bar{\boldsymbol{w}}^{*} - \boldsymbol{w}^{*} \|_2^2) \\ & \stackrel{(c)}{\geq} \mathbb{E}( \| \bar{\boldsymbol{w}}^{t+1}-\bar{\boldsymbol{w}}^{*}\|_2 - \| \bar{\boldsymbol{w}}^{*}-\boldsymbol{w}^{*}\|_2 )^2 \\ & = \mathbb{E}( \| \bar{\boldsymbol{w}}^{t+1}-\bar{\boldsymbol{w}}^{*}\|_2^2) + \mathbb{E}( \| \bar{\boldsymbol{w}}^{*}-\boldsymbol{w}^{*}\|_2^2) - 2\delta \mathbb{E}(\| \bar{\boldsymbol{w}}^{t+1}-\bar{\boldsymbol{w}}^{*} \|_2) \\ & \geq 0 + \delta^2-2\delta {\mathbb{E}(\| \bar{\boldsymbol{w}}^{t+1}-\bar{\boldsymbol{w}}^{*} \|_2)} \\ & \stackrel{(d)}{\geq} \delta^2-\frac{2\delta}{N}\sum_{i =1}^{N} \mathbb{E}(\| \boldsymbol{w}_{i}^{t+1}-\boldsymbol{w}_{i}^{*} \|_2) \\ & \stackrel{(e)}{\geq} \delta^2-2\delta\sqrt{\frac{\delta^2}{16}} = \frac{\delta^2}{2} >0 \\ \end{aligned} \end{equation} where (c,d,e) follow the Triangle Inequality, the Jensen's Inequality and equation \ref{lemme2} 
respectively.$ \qedsymbol$ Furthermore, we consider the convergence of FedShift in the non-convex case, stated as Theorem \ref{nonvexthm}; the detailed proof is in Appendix A. \begin{thm} \label{nonvexthm} \textit{Under Assumptions \ref{assumption1} and \ref{assumption2}, but without the $\mu$-strong convexity part of Assumption \ref{assumption1}, we have $\frac{1}{T} \sum_{t=1}^{T}\mathbb{E}(|| \nabla_{\boldsymbol{w}} L (\bar{\boldsymbol{w}}^{t-1}) ||_2^2) \leq \frac{2}{\eta T}({L} (\bar{\boldsymbol{w}}^{0})-{L} (\bar{\boldsymbol{w}}^{*}) ) + 4\eta^2I^2G^2\beta^2+\frac{\beta}{N}\eta \sigma^2$.} \end{thm} \section{Experiments} \label{experiments} \subsection{Experimental Setup} \label{experimental setup} \paragraph{Datasets and Models} We conduct experiments on three public datasets: Cifar10 (60,000 images with 10 classes) \citep*{krizhevsky2009learning}, Cinic10 (270,000 images with 10 classes) \citep{darlow2018cinic}, and Tiny-Imagenet (100,000 images with 200 classes) \citep{Le2015TinyIV}. We follow the setting in \citep{wang2019federated,yurochkin2019bayesian} and generate the non-IID data partition using the Dirichlet distribution. Specifically, for each class $c$, we sample $\mathit{p}_c \sim Dir_{N}(\alpha)$, where $\mathit{p}_{c,i}$ represents the proportion of the data of class $c$ allocated to client $i$. A smaller $\alpha$ means heavier statistical heterogeneity. Unless otherwise specified, we set $\alpha = 0.1$ and the number of clients $N = 10$ by default. To show that the algorithm is feasible on actual deep learning models, we use ResNet18 \citep{he2016deep} as the network architecture for Cifar10 and Cinic10; for Tiny-Imagenet, we use ResNet50 \citep{he2016deep} to deal with the more complex data.
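The Dirichlet partition described above can be sketched in a few lines; the sketch below partitions label indices only, and the dataset size and seed are illustrative rather than the paper's actual setup:

```python
import numpy as np

# Sketch of the Dirichlet non-IID partition: for each class c, draw
# p_c ~ Dir_N(alpha) and give client i a fraction p_{c,i} of that class.
# Sizes and seed here are illustrative, not the paper's actual setup.
rng = np.random.default_rng(0)
num_classes, num_clients, alpha = 10, 10, 0.1
labels = rng.integers(0, num_classes, size=5000)  # stand-in for dataset labels

client_indices = [[] for _ in range(num_clients)]
for c in range(num_classes):
    idx_c = np.where(labels == c)[0]
    rng.shuffle(idx_c)
    p_c = rng.dirichlet(alpha * np.ones(num_clients))  # p_c ~ Dir_N(alpha)
    # Split the class-c indices according to the sampled proportions.
    cuts = (np.cumsum(p_c)[:-1] * len(idx_c)).astype(int)
    for i, part in enumerate(np.split(idx_c, cuts)):
        client_indices[i].extend(part.tolist())

# Every sample lands on exactly one client; a small alpha gives very
# skewed per-client class distributions.
assert sum(len(ci) for ci in client_indices) == len(labels)
```

Each client then trains only on its own index list, which reproduces the class imbalance $P_i(y) \neq P(y)$ studied above.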
\paragraph{Baselines} We compare FedShift with three state-of-the-art approaches most relevant to ours, namely FedAvg \citep{mcmahan2017communication}, FedProx \citep{MLSYS2020_38af8613}, and SCAFFOLD \citep{karimireddy2020scaffold}, on all three datasets. \paragraph{Implementation} We use PyTorch \citep{paszke2019pytorch} to implement FedShift and the other baselines. We use SGD with momentum as the optimizer for all experiments, with weight decay $0.0001$ and momentum $0.9$. We set the batch size $B = 40$ and the learning rate $\eta = 0.01$, decayed by a factor of $0.95$ every $10$ communication rounds. We take the best performance of each method for comparison. \subsection{Accuracy Comparison} For each dataset, we tune the number of local epochs $E$ over $\{1, 5, 10, 20\}$ based on FedAvg and choose the best $E$ as the hyperparameter for the other algorithms. The best $E$ for Cifar10, Cinic10, and Tiny-Imagenet is $5$, $1$, and $1$, respectively. For FedProx, we tune $\lambda$, the hyperparameter controlling the weight of its proximal term, over $\{0.001, 0.01, 0.1\}$; the best $\lambda$ for Cifar10, Cinic10, and Tiny-Imagenet is $0.01$, $0.001$, and $0.01$, respectively. Unless explicitly specified otherwise, we use these values of $E$ and $\lambda$ for all remaining experiments. The number of communication rounds is set to $100$ for Cifar10, $150$ for Cinic10, and $50$ for Tiny-Imagenet, beyond which all federated learning approaches gain little or no accuracy.
\begin{table}[htbp] \caption{The accuracy of Reweighting, FedShift, and three baselines (FedAvg, FedProx and SCAFFOLD) on three test datasets (Cifar10, Cinic10 and Tiny-Imagenet).} \label{acc-comparison} \centering \begin{tabular}{c|c|c|c} \toprule Methods & Cifar10 & Cinic10 & Tiny-Imagenet \\ \midrule FedAvg & 78.94\% & 72.26\% & 35.14\% \\ FedProx & 79.33\% & 71.57\% & 36.16\% \\ SCAFFOLD & 77.75\% & 73.22\% & 35.18\% \\ FedShift(ours) & \bf{83.52\%} & \bf{74.86\%} & \bf{36.61\%} \\ Reweighting & 63.15\% & 30.06\% & 13.25\% \\ \bottomrule \end{tabular} \end{table} Table \ref{acc-comparison} shows the test accuracy of all approaches under the above settings. Comparing the different approaches, we observe that FedShift is the best on all tasks and outperforms FedAvg by $4.58$\% accuracy on Cifar10. FedProx and SCAFFOLD are superior to FedAvg only on specific datasets and show no significant improvement. Reweighting has much worse accuracy than the other methods, as mentioned in Section \ref{method}. \subsection{Discussion of Communication Efficiency} Figure \ref{accfig} shows the accuracy in each communication round during training. As we can see, FedShift clearly converges faster and reaches higher accuracy than the other three methods. Moreover, unlike the other three methods, FedShift has a more stable upward curve thanks to the consistent optimization objective across all clients. \begin{figure} \caption{The test accuracy in each communication round during training.} \label{accfig} \end{figure} To verify the communication efficiency of the different approaches, Table \ref{communication-rounds} shows the number of communication rounds needed to achieve the accuracy of FedAvg. We observe that the number of communication rounds is significantly reduced by FedShift.
FedShift needs less than a quarter of the rounds to reach the accuracy of FedAvg on Cifar10, and its speedup is also significant on Cinic10 and Tiny-Imagenet. Therefore, FedShift is considerably more communication-efficient than the other approaches. \begin{table}[htbp] \caption{The number of rounds needed by FedShift and three baselines (FedAvg, FedProx and SCAFFOLD) to achieve the accuracy of FedAvg on the three test datasets (Cifar10, Cinic10 and Tiny-Imagenet), respectively.} \label{communication-rounds} \centering \begin{tabular}{c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{|c|} { Cifar10 } & \multicolumn{2}{c|} { Cinic10 } & \multicolumn{2}{|c} { Tiny-Imagenet } \\ {} & rounds & speedup & rounds & speedup & rounds & speedup \\ \midrule FedAvg & 100 & $1 \times$ & 150 & $1 \times$ & 50 & $1 \times$ \\ FedProx & $85$ & $1.18 \times$ & $\backslash$ & $<1 \times$ & 46 & $1.09 \times$ \\ SCAFFOLD & $\backslash$ & $<1 \times$ & 133 & $1.13 \times$ & 49 & $1.02 \times$ \\ FedShift (ours) & \bf{23} & $\mathbf{4.34} \times$ & \bf{81} & $\mathbf{1.85} \times$ & \bf{37} & $\mathbf{1.35} \times$ \\ \bottomrule \end{tabular} \end{table} \subsection{Impact of Local Epochs} \begin{table}[htbp] \caption{The top-1 accuracy of FedShift and three baselines (FedAvg, FedProx and SCAFFOLD) on the Cifar10 dataset with different numbers of local epochs.} \label{local-epochs} \centering \begin{tabular}{c|c|c|c|c} \toprule Methods & E=1 & E=5 & E=10 & E=20 \\ \midrule FedAvg & 75.25\% & 78.94\% & 76.47\% & 72.28\% \\ FedProx & 75.31\% & 79.33\% & 77.27\% & 76.19\% \\ SCAFFOLD & 70.60\% & 77.75\% & 77.95\% & 78.04\% \\ FedShift (ours) & \bf{80.61\%} & \bf{83.52\%} & \bf{82.67\%} & \bf{81.63\%} \\ \bottomrule \end{tabular} \end{table} We next focus on the effect of the number of local epochs on Cifar10. The results are shown in Table \ref{local-epochs}.
When the number of local epochs is $1$, each local update is small, which leads to lower accuracy than with more local epochs; FedShift still achieves the best accuracy. When the number of local epochs becomes too large, the accuracy of all approaches except SCAFFOLD drops due to overfitting of the local updates. Moreover, benefiting from the proximal term, FedProx performs better than FedAvg in all settings. Note that SCAFFOLD is far inferior to the other algorithms when $E=1$ and achieves higher accuracy as the number of local epochs increases: since its control variates can be estimated more accurately with more local updates, SCAFFOLD has a higher tolerance for the number of local epochs. Nevertheless, FedShift clearly outperforms the other approaches, which further verifies that FedShift effectively mitigates the negative effects of accumulated client drift. \subsection{Impact of Data Heterogeneity} \begin{table}[htbp] \caption{The top-1 accuracy of FedShift and three baselines (FedAvg, FedProx and SCAFFOLD) on the Cifar10 dataset with different concentration parameters $\alpha$ of the Dirichlet distribution.} \label{alpha} \centering \begin{tabular}{c|c|c|c|c} \toprule Methods & $\alpha$=0.1 & $\alpha$=0.15 &$\alpha$=0.2 & $\alpha$=0.5 \\ \midrule FedAvg & 78.94\% & 80.11\% & 89.18\% & {91.27\%} \\ FedProx & 75.95\% & 82.10\% & 89.43\% & 91.21\% \\ SCAFFOLD & 77.75\% & 83.11\% & \bf{90.99}\% & \bf{92.22\%} \\ FedShift (ours) & \bf{83.52\%} & \bf{86.21\%} & {90.76\%} & {91.26\%} \\ \bottomrule \end{tabular} \end{table} In this numerical study, data heterogeneity is varied through the concentration parameter $\alpha$ of the Dirichlet distribution on Cifar10. The results are shown in Table \ref{alpha}. For a smaller $\alpha$, the partition is more unbalanced and the effectiveness of FedShift is more pronounced.
When the heterogeneity decreases (i.e., $\alpha = 0.5$), all approaches reach similar accuracy, and the control variates in SCAFFOLD effectively degenerate into additional momentum, yielding slightly higher accuracy. \subsection{Impact of the Number of Clients} \begin{table}[htbp] \caption{The top-1 accuracy of FedShift and three baselines (FedAvg, FedProx and SCAFFOLD) on the Cifar10 dataset with different numbers of clients.} \label{clients} \centering \begin{tabular}{c|c|c|c} \toprule {Method} & { N=10,C=1.0 } & { N=20,C=0.5 } & { N=50,C=0.2 } \\ \midrule FedAvg & 78.94\% & 79.49\% & 55.22\% \\ FedProx & 79.33\% & 79.57\% & 56.24\% \\ SCAFFOLD & 77.75\% & 78.79\% & \bf{64.92\%} \\ FedShift (ours) & \bf{83.52\%} & \bf{83.13\%} & 64.69\% \\ \bottomrule \end{tabular} \end{table} To show the scalability of FedShift, we evaluate larger numbers of clients on Cifar10 in two settings: $20$ clients and $50$ clients. For a fair comparison, we adjust the fraction of clients participating in each round so that exactly $10$ clients train each time, i.e., $C=0.5/0.2$ for $N=20/50$. The number of communication rounds remains $100$, as in the previous experiments. The results are shown in Table \ref{clients}. FedShift achieves higher accuracy than FedAvg and FedProx for all numbers of clients. SCAFFOLD slightly outperforms FedShift with $50$ clients and fraction $C=0.2$, presumably because its control variates maintain an estimate of the global gradient even when each client participates in training only infrequently. \section{Conclusion} \label{conclusion} Focusing on class imbalance as the source of statistical heterogeneity in federated learning, we propose FedShift, a simple and effective method that adds a shift to the classifier output based on the client class distribution during local training. We theoretically prove that the classifier shift in FedShift makes the local optimum coincide with the global optimum.
Additionally, we prove the convergence of the FedShift algorithm and compare it with FedAvg. We also conduct numerical studies, and the experimental results show that FedShift significantly outperforms popular state-of-the-art algorithms on various datasets. Finally, as a future direction, FedShift has the potential to be combined with research on feature representation to handle inconsistent class-conditional probabilities across clients, which would relax the assumptions made in this work. \end{document}
\begin{document} \title{How a~nonconvergent recovered Hessian works in~mesh adaptation \thanks{ Supported in part by the DFG under Grant KA\,3215/1-2 and the NSF under Grant DMS-1115118. } } \author{Lennard Kamenski \thanks{ Weierstrass Institute, Berlin, Germany (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}). } \and Weizhang Huang \thanks{ University of Kansas, Department of Mathematics, Lawrence, KS 66045, USA (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}). } } \begin{titlepage} \maketitle \begin{abstract} Hessian recovery has been commonly used in mesh adaptation for obtaining the required magnitude and direction information of the solution error. Unfortunately, a recovered Hessian from a linear finite element approximation is nonconvergent in general as the mesh is refined. It has been observed numerically that adaptive meshes based on such a nonconvergent recovered Hessian can nevertheless lead to an optimal error in the finite element approximation. This also explains why Hessian recovery is still widely used despite its nonconvergence. In this paper we develop an error bound for the linear finite element solution of a general boundary value problem under a mild assumption on the closeness of the recovered Hessian to the exact one. Numerical results show that this closeness assumption is satisfied by the recovered Hessian obtained with commonly used Hessian recovery methods. Moreover, it is shown that the finite element error changes gradually with the closeness of the recovered Hessian. This provides an explanation on how a nonconvergent recovered Hessian works in mesh adaptation. 
\begin{keywords} Hessian recovery, mesh adaptation, anisotropic mesh, finite element, convergence analysis, error estimate \end{keywords} \begin{AMS} 65N50, 65N30 \end{AMS} \end{abstract} \end{titlepage} \section{Introduction} \label{sect:introduction} Gradient and Hessian recovery has been commonly used in mesh adaptation for the numerical solution of partial differential equations (PDEs); e.g.,\ see~\cite{AinOde00,BabStr01,HuaRus11,Tang07,ZhaNag05,ZieZhu92,ZieZhu92a}. The use typically involves the approximation of solution derivatives based on a computed solution defined on the current mesh (recovery), the generation of a new mesh using the recovered derivatives, and the solution of the physical PDE on the new mesh. These steps are often repeated several times until a suitable mesh and a numerical solution defined thereon are obtained. As the mesh is refined, a sequence of adaptive meshes, derivative approximations, and numerical solutions results. A theoretical and also practical question is whether this sequence of numerical solutions converges to the exact solution. Naturally, this question is linked to the convergence of the recovered derivatives used to generate the meshes. It is known that the gradient recovered through least squares fitting~\cite{ZieZhu92,ZieZhu92a} or polynomial preserving techniques~\cite{ZhaNag05} is convergent for uniform or quasi-uniform meshes~\cite{ZhaNag05,ZhaZhu95} and superconvergent for mildly structured meshes~\cite{XuZha04} as well as for a type of adaptive mesh~\cite{WuZha07}. For the Hessian, it has been observed that, unfortunately, a convergent recovery cannot be obtained from linear finite element approximations for general nonuniform meshes~\cite{AgoLipVas10,Kam09,PicAlaBorGeo11}, although Hessian recovery is known to converge when the numerical solution exhibits superconvergence or supercloseness for some special meshes~\cite{BanXu03,BanXu03a,Ova07}. 
On the other hand, numerical experiments also show that the numerical solution obtained with an adaptive mesh generated using a nonconvergent recovered Hessian is often not only convergent but also has an error comparable to that obtained with the exact analytical Hessian. To demonstrate this, we consider a Dirichlet boundary value problem (BVP) for the Poisson equation \begin{equation} \begin{cases} -\Delta u = f, &\text{in $\Omega= (0,1)\times(0,1)$}, \\ u = g, &\text{on $\partial\Omega$}, \end{cases} \label{eq:bvp} \end{equation} where $f$ and $g$ are chosen such that the exact solution of the BVP is given by \begin{equation} u(x,y) = x^2 + 25 y^2. \end{equation} Two Hessian recovery methods, QLS (quadratic least squares fitting) and WF (weak formulation), are used (see~\cref{sect:recovery:methods} for the description of these and other Hessian recovery techniques). \Cref{fig:x2y2:25:intro} shows the error in the recovered Hessian and the linear finite element solution with exact and recovered Hessian. One can see that the finite element error is convergent and almost indistinguishable for the exact and approximate Hessian (\cref{fig:x2y2:25:solution}) whereas the error of the Hessian recovery remains $\mathcal{O}(1)$ (\cref{fig:x2y2:25:recovery}). Obviously, this indicates that a convergent recovered Hessian is not necessary for the purpose of mesh adaptation. Of course, a badly recovered Hessian does not serve the purpose either. \begin{figure} \centering (a)~finite element error $\Abs{u-u_h}_{H^1(\Omega)}$\label{fig:x2y2:25:solution} \quad (b)~recovery error $\max_K \max_{\boldsymbol{x} \in K} \Norm{R_K - H(\boldsymbol{x})}_2$\label{fig:x2y2:25:recovery} \caption{Finite element and Hessian recovery errors as a function of $N$} \label{fig:x2y2:25:intro} \end{figure} How accurate should a recovered Hessian be for the purpose of mesh adaptation? This issue has been studied by Agouzal et al.~\cite{AgoLipVas99} and Vassilevski and Lipnikov~\cite{VasLip99}. 
In particular~\cite[Theorem~3.2]{AgoLipVas99}, they show that a mesh based on an approximation $R$ of the Hessian $H$ is quasi-optimal if there exist small (with respect to one) positive numbers $\varepsilon$ and $\delta$ such that \begin{align} \max_{\boldsymbol{x} \in \omega_i} \Norm{ H(\boldsymbol{x}) - H_{\omega_i} }_\infty & \leq \delta \lambda_{\min} \bigl(R(\boldsymbol{x}_i)\bigr), \label{eq:AgLiVa99:1} \\ \Norm{ R(\boldsymbol{x}_i) - H_{\omega_i} }_\infty &\leq \varepsilon \lambda_{\min} \bigl(R(\boldsymbol{x}_i)\bigr) \label{eq:AgLiVa99:2} \end{align} hold for any mesh vertex $\boldsymbol{x}_i$ and its patch $\omega_i$, where $H_{\omega_i}$ is the Hessian at a point in $\omega_i$ where $\Abs{\det H(\boldsymbol{x}) }$ attains its maximum and $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a matrix. Notice that \cref{eq:AgLiVa99:2} does not require $R$ to converge to $H$ as the mesh is refined. Instead, it requires the eigenvalues of $R^{-1} H$ to be around one (cf.~\cref{sect:analysis:1}). Unfortunately, it is still too restrictive to be satisfied by most examples we tested; see~\cref{sect:examples}. Thus, the work~\cite{AgoLipVas99,VasLip99} does not give a full explanation why a nonconvergent recovered Hessian works in mesh adaptation. The objective of the paper is to present a study on this issue. To be specific, we consider a BVP and its linear finite element solution with adaptive anisotropic meshes generated from a recovered Hessian. We adopt the $M$-uniform mesh approach~\cite{Hua07,HuaRus11} to view any adaptive mesh as a uniform one in some metric depending on the computed solution. An advantage of the approach is that the relation between the recovered Hessian and an adaptive anisotropic mesh generated using it can be fully characterized through the so-called alignment and equidistribution conditions (see \cref{eq:equi,eq:ali} in~\cref{sect:analysis:1}). 
This characterization plays a crucial role in the development of a bound for the $H^1$ semi-norm of the finite element error. The bound converges at a first-order rate in terms of the average element diameter, $N^{-\frac{1}{d}}$, where $N$ is the number of elements and $d$ is the dimension of the physical domain. Moreover, the bound is valid under a condition on the closeness of the recovered Hessian to the exact one; see \cref{eq:CRs} or \cref{eq:CRplus:general,eq:CRminus:general}. This closeness condition is much weaker than \cref{eq:AgLiVa99:2}. Roughly speaking, \cref{eq:AgLiVa99:2} requires the eigenvalues of $R^{-1} H$ to be around one whereas the new condition only requires them to be bounded away from zero and from above. Numerical results in~\cref{sect:examples} show that the new closeness condition is satisfied in all examples for the four commonly used Hessian recovery techniques considered in this paper whereas \cref{eq:AgLiVa99:2} is satisfied only in some examples. Furthermore, the error bound is linearly proportional to the ratio of the maximum (over the physical domain) of the largest eigenvalues of $R^{-1} H$ to the minimum of the smallest eigenvalues. Since the ratio is a measure of the closeness of the recovered Hessian to the exact one, the dependence indicates that the finite element error changes gradually with the closeness of the recovered Hessian. Hence, the error for the linear finite element approximation of the BVP is convergent for the considered Hessian recovery techniques and relatively insensitive to the closeness of the recovered Hessian to the exact one. This provides an explanation of how a nonconvergent recovered Hessian works in mesh adaptation. An outline of the paper is as follows. Convergence analysis of the linear finite element approximation is given in~\cref{sect:analysis:1,sect:general} for the cases with positive definite and general Hessian, respectively. 
A brief description of four common Hessian recovery techniques is given in~\cref{sect:recovery:methods} followed by numerical examples in~\cref{sect:examples}. Finally, \cref{sect:conclusion} contains conclusions and further comments. \section{Convergence of~linear finite element approximation for~positive definite Hessian} \label{sect:analysis:1} We consider the BVP \begin{equation} \begin{cases} \mathcal{L} u = f, & \text{ in $\Omega$}, \\ u = g, & \text{ on $\partial\Omega$}, \end{cases} \label{eq:bvp-2} \end{equation} where $\Omega$ is a polygonal or polyhedral domain of $\mathbb{R}^d$ ($d \ge 1$), $\mathcal{L}$ is an elliptic second-order differential operator, and $f$ and $g$ are given functions. We are concerned with the adaptive mesh solution of this BVP using the conventional linear finite element method. Denote a family of simplicial meshes for $\Omega$ by $\{ \mathcal{T}_h\}$ and the corresponding reference element by $\hat{K}$, which is chosen to have unit volume. For each mesh $\mathcal{T}_h$, we denote the corresponding finite element solution by $u_h$. C\'{e}a's lemma implies that the finite element error is bounded by the interpolation error, i.e., \begin{equation} \Abs{u-u_h}_{H^1(\Omega)} \le C \Abs{u - \Pi_h u}_{H^1(\Omega)} , \label{eq:cea-1} \end{equation} where $C$ is a constant independent of $u$ and $\mathcal{T}_h$, and $\Pi_h$ is the nodal interpolation operator associated with the linear finite element space defined on $\mathcal{T}_h$. Note that \cref{eq:cea-1} is valid for any mesh. \subsection{\texorpdfstring{Quasi-$M$-uniform meshes}{Quasi-M-uniform meshes}} In this paper we consider adaptive meshes generated based on a recovered Hessian $R$ and use the $M$-uniform mesh approach with which any adaptive mesh is viewed as a uniform one in some metric $M$ (defined in terms of $R$ in our current situation). 
It is known~\cite{Hua07,HuaRus11} that such an $M$-uniform mesh satisfies the equidistribution and alignment conditions, \begin{align} \Abs{K} {\det(M_K)}^{\frac{1}{2}} &= \frac{1}{N} \sum_{{\tilde{K}}\in\mathcal{T}_h} \Abs{{\tilde{K}}} {\det(M_{{\tilde{K}}})}^{\frac{1}{2}}, \quad \forall K \in \mathcal{T}_h, \label{eq:equi} \\ \frac{1}{d} \tr\left( {(F_K')}^T M_K F_K' \right) &= {\det\left( {(F_K')}^T M_K F_K' \right)}^{\frac{1}{d}}, \quad \forall K \in \mathcal{T}_h, \label{eq:ali} \end{align} where $N$ is the number of mesh elements, $M_K$ is an average of $M$ over $K$, $F_K \colon \hat{K} \to K$ is the affine mapping from the reference element $\hat{K}$ to a mesh element $K$, $F_K'$ is the Jacobian matrix of $F_K$ (which is constant on $K$), and $\det(\cdot)$ and $\tr(\cdot)$ denote the determinant and trace of a matrix, respectively. In practice, it is more realistic to generate less restrictive quasi-$M$-uniform meshes which satisfy \begin{align} \Abs{K} {\det(M_K)}^{\frac{1}{2}} & \leq C_{eq} \frac{1}{N} \sum_{{\tilde{K}}\in\mathcal{T}_h} \Abs{{\tilde{K}}} {\det(M_{{\tilde{K}}})}^{\frac{1}{2}}, \quad \forall K \in \mathcal{T}_h, \label{eq:equi:approx} \\ \frac{1}{d} \tr\left( {(F_K')}^T M_K F_K' \right) & \leq C_{ali} \Abs{K}^{\frac{2}{d}} {\det(M_K)}^{\frac{1}{d}}, \quad \forall K \in \mathcal{T}_h, \label{eq:ali:approx} \end{align} where $C_{eq}, C_{ali} \geq 1$ are some constants independent of $K$, $N$, and $\mathcal{T}_h$. Numerical experiments in~\cite{Hua05a} and~\cref{sect:examples} (\cref{fig:flower:ceq,fig:flower:cali,fig:tanh:ceq,fig:tanh:cali}) show that quasi-$M$-uniform meshes with relatively small $C_{eq}$ and $C_{ali}$ can be generated in practice. For this reason, we use quasi-$M$-uniform meshes in our analysis and numerical experiments. We would like to point out that conditions \cref{eq:equi:approx,eq:ali:approx} with $C_{eq} = C_{ali} = 1$ imply \cref{eq:equi,eq:ali}. 
Indeed, the inequality \cref{eq:ali:approx} with $C_{ali} = 1$ becomes the equality \cref{eq:ali} because the left-hand side of it (the arithmetic mean of the eigenvalues of ${(F_K')}^T M_K F_K'$) cannot be smaller than the right-hand side (the geometric mean of the eigenvalues). Further, if $C_{eq} = 1$ then \cref{eq:equi:approx} becomes \[ \Abs{K} {\det(M_K)}^{\frac{1}{2}} \le \frac{1}{N} \sum_{{\tilde{K}}\in\mathcal{T}_h} \Abs{{\tilde{K}}} {\det(M_{{\tilde{K}}})}^{\frac{1}{2}}, \quad \forall K \in \mathcal{T}_h. \] This implies \begin{align*} \max_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}} &\leq \frac{1}{N} \sum_{K\in\mathcal{T}_h} \Abs{K} {\det(M_{K})}^{\frac{1}{2}}\\ &\leq \frac{1}{N} \left( (N-1)\max_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}} + \min_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}} \right) \end{align*} and therefore \[ \max_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}} \leq \min_{K\in\mathcal{T}_h} \Abs{K} {\det(M_K)}^{\frac{1}{2}}, \] which can only be valid if all values of $\Abs{K} {\det(M_K)}^{\frac{1}{2}}$ are the same for all $K$. \subsection{Main result} In this section we consider a special case where the Hessian of the solution is uniformly positive definite in $\Omega$; i.e., \begin{equation} \exists \gamma > 0 \colon H(\boldsymbol{x}) \geq \gamma I, \quad \forall \boldsymbol{x} \in \Omega, \label{eq:Hpd} \end{equation} where the greater-than-or-equal sign means that the difference between the left-hand side and right-hand side terms is positive semidefinite. We also assume that the recovered Hessian $R$ is uniformly positive definite in $\Omega$. This assumption is not essential and will be dropped for the general situation discussed in~\cref{sect:general}. Recall from \cref{eq:cea-1} that the finite element error is bounded by the $H^1$ semi-norm of the interpolation error of the exact solution. 
A metric tensor corresponding to the $H^1$ semi-norm can be defined as \begin{equation} M_K = {\det(R_K)}^{- \frac{1}{d+2}} \Norm{R_K}_2^{\frac{2}{d+2}} R_K, \quad \forall K \in \mathcal{T}_h, \label{eq:M:H1} \end{equation} where $R_K$ is an average of $R$ over $K$~\cite{Hua05a}. For this metric tensor, mesh conditions \cref{eq:equi:approx,eq:ali:approx} become \begin{align} \Abs{K} {\det(R_K)}^{\frac{1}{d+2}} \Norm{R_K}_2^{\frac{d}{d+2}} &\leq C_{eq} \frac{1}{N} \sum_{\tilde{K}} \abs{{\tilde{K}}} {\det(R_{\tilde{K}})}^{\frac{1}{d+2}} \Norm{R_{\tilde{K}}}_2^{\frac{d}{d+2}}, \quad \forall K \in \mathcal{T}_h, \label{eq:equi:H1} \\ \frac{1}{d} \tr\left( {(F_K')}^T R_K F_K' \right) &\leq C_{ali} \Abs{K}^{\frac{2}{d}} {\det(R_K)}^{\frac{1}{d}}, \quad \forall K \in \mathcal{T}_h. \label{eq:ali:H1} \end{align} Note that the alignment condition \cref{eq:ali:H1} implies the inverse alignment condition \begin{equation} \frac{1}{d} \tr\left( {\left( {(F_K')}^T R_K F_K' \right)}^{-1} \right) < {\left( \frac{d}{d-1} C_{ali}\right)}^{d-1} \Abs{K}^{-\frac{2}{d}} {\det(R_K)}^{-\frac{1}{d}}, \quad \forall K \in \mathcal{T}_h . \label{eq:ali:inverse} \end{equation} To show this, we denote the eigenvalues of ${(F_K')}^T R_K F_K'$ by $0 < \lambda_1 \le \cdots \le \lambda_d$ and rewrite \cref{eq:ali:H1} as \[ \sum_i \lambda_i \le d C_{ali} {\left(\prod_i \lambda_i\right)}^{\frac{1}{d}}. \] Then \cref{eq:ali:inverse} follows from \begin{align*} \frac{1}{d} \sum_i \lambda_i^{-1} &= \prod_i \lambda_i^{-1} \cdot \frac{1}{d} \sum_i \prod_{j\neq i} \lambda_j \\ &\leq \prod_i \lambda_i^{-1} \cdot \frac{1}{d} \sum_i {\left(\frac{\sum_{j\neq i} \lambda_j}{d-1} \right)}^{d-1}\\ &< \prod_i \lambda_i^{-1} \cdot \frac{1}{d} \sum_i {\left( \frac{ \sum_{j} \lambda_j }{d-1}\right)}^{d-1} = \prod_i \lambda_i^{-1} {\left( \frac{\sum_{j} \lambda_j }{d-1} \right)}^{d-1}\\ &\leq {\left( \frac{d}{d-1} C_{ali} \right)}^{d-1} {\left(\prod_i \lambda_i\right)}^{-\frac{1}{d}} . 
\end{align*} \begin{theorem}[Positive definite Hessian] \label{thm:H1} Assume that $H(\boldsymbol{x})$ and the recovered Hessian $R$ are uniformly positive definite in $\Omega$ and that $R$ satisfies \begin{equation} C_{R-,K} I \leq R_K^{-1} H (\bx) \leq C_{R+, K} I, \quad \forall \boldsymbol{x} \in K, \quad \forall K \in \mathcal{T}_h \label{eq:CRs} \end{equation} where $C_{R-,K}$ and $C_{R+,K}$ are element-wise constants satisfying \begin{equation} C_{R-} \le \min_{K \in \mathcal{T}_h} C_{R-,K} \qquad \text{and} \qquad \sqrt{\frac{1}{N} \sum_{K\in \mathcal{T}_h} C_{R+,K}^2} \le C_{R+} \label{CR+} \end{equation} with some mesh-independent positive constants $C_{R-}$ and $C_{R+}$. If the solution of the BVP \cref{eq:bvp-2} is in $H^2(\Omega)$, then for any quasi-$M$-uniform mesh associated with the metric tensor \cref{eq:M:H1} and satisfying \cref{eq:equi:approx,eq:ali:approx} the linear finite element error for the BVP is bounded by \begin{equation} \Abs{u-u_h}_{H^1(\Omega)} \leq C \cdot C_{ali}^{\frac{d+1}{2}} C_{eq}^{\frac{d+2}{2d}} \cdot \frac{C_{R+}}{C_{R-}} \cdot N^{-\frac{1}{d}} \Norm{ {\det(H)}^{\frac{1}{d}} H }_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}. \label{eq:thm:H1} \end{equation} \end{theorem} \begin{proof} The nodal interpolation error of a function $u \in H^2(\Omega)$ on $K$ is bounded by \begin{equation} \Abs{u - \Pi_h u}_{H^1(K)} \leq C \Norm{ {(F_K')}^{-1}}_2 {\left( \int_K \Norm{{(F_K')}^T \Abs{H (\bx)} F_K'}_2^2 \,d\bx \right)}^{\frac{1}{2}} , \label{eq:HR11:1} \end{equation} where $\Abs{H(\boldsymbol{x})} = \sqrt{ {H(\boldsymbol{x})}^2 }$~\cite[Theorem~5.1.5]{HuaRus11} (the interested reader is referred to, for example,~\cite{CheSunXu07,ForPer01,Hua05a,HuaSun03,Mir12} for anisotropic error estimates for interpolation with linear and higher order finite elements). Notice that $\Abs{H(\boldsymbol{x})} = H(\boldsymbol{x})$ in the current situation (symmetric and positive definite $H(\boldsymbol{x})$). 
Further, \begin{align*} \Norm{ {(F_K')}^T H(\boldsymbol{x}) F_K' }_2 &= \Norm{ {H(\boldsymbol{x})}^{\frac{1}{2}} F_K' }_2^2 = \Norm{ {H(\boldsymbol{x})}^{\frac{1}{2}} R_K^{-\frac{1}{2}} R_K^{\frac{1}{2}} F_K' }_2^2 \\ &\le \Norm{ {H(\boldsymbol{x})}^{\frac{1}{2}} R_K^{-\frac{1}{2}} }_2^2 \Norm{ R_K^{\frac{1}{2}} F_K' }_2^2 \\ &= \Norm{ R_K^{-\frac{1}{2}} H(\boldsymbol{x}) R_K^{-\frac{1}{2}} }_2 \Norm{ {(F_K')}^T R_K F_K' }_2 \\ &= \lambda_{\max}\bigl( R_K^{-\frac{1}{2}} H(\boldsymbol{x}) R_K^{-\frac{1}{2}} \bigr) \Norm{ {(F_K')}^T R_K F_K' }_2 \\ &= \lambda_{\max} \bigl( R_K^{-1} H(\boldsymbol{x}) \bigr) \Norm{ {(F_K')}^T R_K F_K' }_2 \\ &\le \Norm{ R_K^{-1} H(\boldsymbol{x}) }_2 \Norm{ {(F_K')}^T R_K F_K' }_2. \end{align*} Similarly, \[ \Norm{ {(F_K')}^{-1}}_2^2 = \Norm{ {(F_K')}^{-1} {(F_K')}^{-T} }_2 \le \Norm{ {\left( {(F_K')}^T R_K F_K' \right)}^{-1} }_2 \Norm{R_K}_2. \] Thus, \cref{eq:HR11:1} yields \[ \Abs{u - \Pi_h u}_{H^1(K)}^2 \leq C \Norm{ {\left( {(F_K')}^T R_K F_K' \right)}^{-1} }_2 \Norm{R_K}_2 \int_K \Norm{ {(F_K')}^T R_K F_K' }_2^2 \Norm{ R_K^{-1} H(\boldsymbol{x})}_2^2 \,d\bx . \] Using this, \cref{eq:ali:approx}, \cref{eq:ali:inverse}, \cref{eq:CRs}, the fact that the trace of any $d\times d$ symmetric and positive definite matrix $A$ is equivalent to its $l^2$ norm, viz., $\Norm{A}_2 \le \tr(A) \le d \Norm{A}_2$, and absorbing powers of $d$ into the generic constant $C$, we get \begin{align*} \Abs{u - \Pi_h u}_{H^1(\Omega)}^2 &= \sum_K \Abs{u - \Pi_h u}_{H^1(K)}^2 \\ &\leq C \sum_K C_{ali}^{d-1} \Abs{K}^{-\frac{2}{d}} {\det(R_K)}^{-\frac{1}{d}}\Norm{R_K}_2 \times \Abs{K} C_{ali}^2 \Abs{K}^{\frac{4}{d}} {\det(R_K)}^{\frac{2}{d}} C_{R+,K}^2 \\ & = C C_{ali}^{d+1} \sum_K \Abs{K}^{\frac{d+2}{d}} {\det(R_K)}^{\frac{1}{d}} \Norm{R_K}_2 C_{R+,K}^2\\ &= C C_{ali}^{d+1} \sum_K {\left( \Abs{K} {\det(R_K)}^{\frac{1}{d+2}} \Norm{R_K}_2^{\frac{d}{d+2}} \right)}^\frac{d+2}{d} C_{R+,K}^2. 
\end{align*} Applying \cref{eq:equi:approx} to the above result and using \cref{CR+} gives \begin{align*} \Abs{u - \Pi_h u}_{H^1(\Omega)}^2 &\leq C C_{ali}^{d+1} \sum_K {\left( \frac{C_{eq}}{N} \sum_{\tilde{K}} \abs{{\tilde{K}}} {\det(R_{\tilde{K}})}^{\frac{1}{d+2}} \Norm{R_{\tilde{K}}}_2^{\frac{d}{d+2}} \right)}^{\frac{d+2}{d}} C_{R+,K}^2 \\ &= C C_{ali}^{d+1} C_{eq}^{\frac{d+2}{d}} N^{-\frac{2}{d}} \left (\frac{1}{N} \sum_{K\in \mathcal{T}_h} C_{R+,K}^2\right ) {\left(\sum_{\tilde{K}} \abs{{\tilde{K}}} {\det(R_{\tilde{K}})}^{\frac{1}{d+2}} \Norm{R_{\tilde{K}}}_2^{\frac{d}{d+2}} \right)}^{\frac{d+2}{d}}\\ &\le C C_{ali}^{d+1} C_{eq}^{\frac{d+2}{d}} N^{-\frac{2}{d}} C_{R+}^2 {\left(\sum_K \abs{K} {\det(R_K)}^{\frac{1}{d+2}} \Norm{R_K}_2^{\frac{d}{d+2}} \right)}^{\frac{d+2}{d}}\\ &= C C_{ali}^{d+1} C_{eq}^{\frac{d+2}{d}} N^{-\frac{2}{d}} C_{R+}^2 {\left( \sum_{K} \int_K {\det(R_K)}^{\frac{1}{d+2}} \Norm{R_K}_2^{\frac{d}{d+2}} \,d\bx \right)}^{\frac{d+2}{d}} . \end{align*} Further, assumption~\cref{eq:CRs} implies \begin{equation} \det(R_K) \le \det\bigl(H(\boldsymbol{x})\bigr) \Norm{H^{-1}(\boldsymbol{x}) R_K}_2^d \leq C_{R-}^{-d} \det\bigl(H(\boldsymbol{x})\bigr) \label{eq:detR:detH} \end{equation} and \begin{equation} \Norm{R_K}_2 = \Norm{H(\boldsymbol{x}) H^{-1}(\boldsymbol{x}) R_K }_2 \le \Norm{H(\boldsymbol{x})}_2 \Norm{H^{-1}(\boldsymbol{x}) R_K}_2 \le C_{R-}^{-1} \Norm{H(\boldsymbol{x})}_2 . 
\label{eq:detR:detH-1} \end{equation} Thus, \begin{align*} \Abs{u - \Pi_h u}_{H^1(\Omega)}^2 &\leq C C_{ali}^{d+1} C_{R+}^2 C_{eq}^{\frac{d+2}{d}} N^{-\frac{2}{d}} {\left( C_{R-}^{\frac{-2d}{d+2}} \int_\Omega {\left({\det(H(\boldsymbol{x}))}^{\frac{1}{d}} \Norm{H(\boldsymbol{x})}_2 \right)}^{\frac{d}{d+2}} \,d\bx \right)}^{\frac{d+2}{d}} \\ &= C C_{ali}^{d+1} C_{eq}^{\frac{d+2}{d}} {\left(\frac{C_{R+}}{C_{R-}}\right)}^2 N^{-\frac{2}{d}} {\left( \int_\Omega \Norm{ {\det(H(\boldsymbol{x}))}^{\frac{1}{d}} H(\boldsymbol{x})}_2^{\frac{d}{d+2}} \,d\bx \right)}^{\frac{d+2}{d}}, \end{align*} which, together with \cref{eq:cea-1}, gives \cref{eq:thm:H1}. \qquad \end{proof} \subsection{Remarks} \Cref{thm:H1} shows how a nonconvergent recovered Hessian works in mesh adaptation. The error bound \cref{eq:thm:H1} is linearly proportional to the ratio $C_{R+}/C_{R-}$, which is a measure for the closeness of $R$ to $H$. Thus, the finite element error changes gradually with the closeness of the recovered Hessian. If $R$ is a good approximation to $H$ (but not necessarily convergent), then $C_{R+}/C_{R-} = \mathcal{O}(1)$ and the solution-dependent factor in the error bound is \begin{equation} \Norm{{\det(H)}^{\frac{1}{d}} H}_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}. \label{eq:factor:1} \end{equation} On the other hand, if $R$ is not a good approximation to $H$, the solution-dependent factor in the error bound will be larger. For example, consider $R = I$ (the identity matrix), which corresponds to uniform mesh refinement. In this case the condition \cref{eq:CRs} is satisfied with \[ C_{R+} = C_{R+,K} = \max_{\boldsymbol{x}\in \Omega} \lambda_{\max} \bigl(H(\boldsymbol{x})\bigr) \quad \text{and} \quad C_{R-} = \min_{\boldsymbol{x}\in \Omega} \lambda_{\min} \bigl(H(\boldsymbol{x})\bigr), \] where $\lambda_{\max} \bigl(H(\boldsymbol{x})\bigr)$ and $\lambda_{\min} \bigl(H(\boldsymbol{x})\bigr)$ denote the maximum and minimum eigenvalues of $H(\boldsymbol{x})$, respectively. 
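As an elementary illustration, the elementwise constants in \cref{eq:CRs} can be read off from the extreme eigenvalues of $R_K^{-1} H$. A minimal numerical sketch (assuming \texttt{numpy}; the matrices are illustrative, with $H$ taken as the constant Hessian of the introductory example $u = x^2 + 25 y^2$):

```python
import numpy as np

def closeness_constants(R_K, H):
    """Extreme eigenvalues of R_K^{-1} H, i.e. the tightest C_{R-,K}
    and C_{R+,K} in (eq:CRs). The eigenvalues are real for symmetric
    positive definite R_K and symmetric H."""
    lam = np.sort(np.linalg.eigvals(np.linalg.solve(R_K, H)).real)
    return lam[0], lam[-1]

# constant Hessian of u = x^2 + 25 y^2:
H = np.diag([2.0, 50.0])

# R = I corresponds to uniform mesh refinement:
c_minus, c_plus = closeness_constants(np.eye(2), H)
print(c_minus, c_plus, c_plus / c_minus)  # ratio C_{R+}/C_{R-} = 25
```

For $R = I$ this reproduces the ratio $C_{R+}/C_{R-} = 25$ entering the solution-dependent factor for the model problem.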
Thus, for $R=I$ the solution-dependent factor in the bound \cref{eq:thm:H1} becomes \[ \frac{\max_{\boldsymbol{x}\in \Omega} \lambda_{\max} \bigl(H(\boldsymbol{x})\bigr)} { \min_{\boldsymbol{x}\in \Omega} \lambda_{\min} \bigl(H(\boldsymbol{x})\bigr)} \Norm{{\det(H)}^{\frac{1}{d}} H}_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}, \] which is obviously larger than \cref{eq:factor:1}. Next, we study the relation between \cref{eq:CRs} and \cref{eq:AgLiVa99:1}--\cref{eq:AgLiVa99:2}. In practical computation, the Hessian is typically recovered at mesh nodes (see~\cref{sect:recovery:methods}) and a recovered Hessian can be considered on the whole domain as a piecewise linear matrix-valued function. In this case, the average $R_K$ of $R$ over any given element $K$ can be expressed as a linear combination of the nodal values of $R$. Applying the triangle inequality to \cref{eq:AgLiVa99:1,eq:AgLiVa99:2} we get \[ \Norm{R(\boldsymbol{x}_i)-H(\boldsymbol{x})}_\infty \le \left( \delta + \varepsilon \right) \lambda_{\min}\bigl(R(\boldsymbol{x}_i)\bigr), \quad \forall \boldsymbol{x} \in \omega_i \] and, since $R_K$ is a linear combination of $R(\boldsymbol{x}_i)$, \[ \Norm{R_K - H(\boldsymbol{x})}_\infty \leq \left(\delta + \varepsilon \right) \lambda_{\min} ( R_K ) . \] Since $R_K - H(\boldsymbol{x})$ is symmetric, $\Norm{R_K - H(\boldsymbol{x})}_2 \le \Norm{R_K - H(\boldsymbol{x})}_\infty$. 
Thus, conditions \cref{eq:AgLiVa99:1,eq:AgLiVa99:2} imply \begin{equation} \Norm{ R_K - H(\boldsymbol{x}) }_2 \leq \left( \delta + \varepsilon \right) \lambda_{\min} (R_K), \quad \forall \boldsymbol{x} \in K, \quad \forall K \in \mathcal{T}_h \label{eq:R:H:eps} \end{equation} and \begin{equation} \Norm{ R_K^{-1} H(\boldsymbol{x}) - I }_2 \leq \delta + \varepsilon, \quad \forall \boldsymbol{x} \in K, \quad \forall K \in \mathcal{T}_h, \label{eq:R:H:I} \end{equation} which in turn implies \cref{eq:CRs} with $C_{R+,K} = 1+ \left( \delta + \varepsilon \right)$ and $C_{R-} = 1 - \left( \delta + \varepsilon \right)$, provided that $\delta + \varepsilon < 1$. Condition \cref{eq:R:H:I} and therefore \cref{eq:AgLiVa99:2} require the eigenvalues of $R_K^{-1} H (\bx)$ to stay closely around one. On the other hand, condition \cref{eq:CRs} only requires the eigenvalues of $R_K^{-1} H (\bx)$ to be bounded above and bounded away from zero, which is weaker than \cref{eq:AgLiVa99:2}. If $R$ converges to $H(\boldsymbol{x})$, both \cref{eq:AgLiVa99:2,eq:CRs} can be satisfied. However, if $R$ does not converge to $H(\boldsymbol{x})$, as is the case for most adaptive computations, the situation is different. As we shall see in~\cref{sect:examples}, condition \cref{eq:CRs} is satisfied for all of the examples tested whereas condition \cref{eq:AgLiVa99:2} is satisfied only in some of them. We would like to point out that it is unclear if the considered monitor function \cref{eq:M:H1} (and the corresponding bound \cref{eq:thm:H1}) is optimal, although it seems to be the best we can get. 
For example, if we choose the monitor function to be \begin{equation} M_K = {\det(R_K)}^{- \frac{1}{d+4}} R_K, \quad \forall K \in \mathcal{T}_h \label{eq:M:L} \end{equation} which is optimal for the $L^2$ norm~\cite{Hua05a}, the error bound becomes \begin{equation} \Abs{u - u_h}_{H^1(\Omega)} \leq C \cdot C_{ali}^{\frac{d+1}{2}} C_{eq}^{\frac{d+4}{4d}} \cdot \frac{C_{R+}}{C_{R-}} \cdot N^{-\frac{1}{d}} \Norm{{\det(H)}^{\frac{1}{d}}}_{L^{\frac{2d}{d+4}}(\Omega)}^{\frac{2}{d+4}} \Norm{ {\det(H)}^{\frac{1}{d+4}} H }_{L^1(\Omega)}^{\frac{1}{2}}. \label{thm:error:H1:2} \end{equation} This bound has a larger solution-dependent factor than \cref{eq:thm:H1} since Hölder's inequality yields \[ \Norm{ {\det(H)}^{\frac{1}{d}} H}_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}} \le \Norm{{\det(H)}^{\frac{1}{d}}}_{L^{\frac{2d}{d+4}}(\Omega)}^{\frac{2}{d+4}} \Norm{ {\det(H)}^{\frac{1}{d+4}} H}_{L^1(\Omega)}^{\frac{1}{2}} . \] It is worth mentioning that when the metric tensor \cref{eq:M:L} is used, the $L^2$ norm of the piecewise linear interpolation error is bounded by \begin{equation} \Norm{u - \Pi_h u}_{L^2(\Omega)} \leq C \cdot C_{ali} C_{eq}^{\frac{d+4}{2 d}} \cdot \frac{C_{R+}}{C_{R-}} \cdot N^{-\frac{2}{d}} \Norm{ {\det(H)}^{\frac{1}{d}}}_{L^{\frac{2d}{d+4}}(\Omega)} , \label{eq:error:L2} \end{equation} which is optimal in terms of convergence order and solution-dependent factor, e.g.,\ see~\cite{CheSunXu07,HuaSun03}. Note that \cref{thm:H1} holds for $u \in H^2(\Omega)$ although the estimate \cref{eq:thm:H1} only requires \begin{equation} \Norm{ {\det(H)}^{\frac{1}{d}} H }_{L^{\frac{d}{d+2}}(\Omega)} < \infty. \label{reg-1} \end{equation} Since \[ \Norm{ {\det(H)}^{\frac{1}{d}} H }_{L^{\frac{d}{d+2}}(\Omega)} \le \Norm{ \frac{1}{d} \tr(H) \cdot H }_{L^{\frac{d}{d+2}}(\Omega)} , \] \cref{reg-1} can be satisfied when $u \in W^{2, \frac{2d}{d+2}}(\Omega)$. 
Thus, there is a gap between the sufficient requirement $u\in H^2(\Omega)$ and the necessary requirement $u \in W^{2, \frac{2d}{d+2}}(\Omega)$. The stronger requirement $u \in H^2(\Omega)$ comes from the estimation of the interpolation error in~\cite[Theorem~5.1.5]{HuaRus11}. It is unclear to the authors whether or not this requirement can be weakened. We point out that $u \in H^2(\Omega)$ may not hold when $\partial \Omega$ is not smooth. For example, in 2D, if $\partial \Omega$ has a corner with an angle $\omega \in (0,2\pi)$, the solution of the BVP \cref{eq:bvp-2} with smooth $f$ and $g$ basically has the following form near the corner, \[ u(r, \theta) = r^{\frac{\pi}{\omega}} u_0(\theta) + u_1(r, \theta), \] where $(r,\theta)$ denote the polar coordinates and $u_0(\theta)$ and $u_1(r, \theta)$ are some smooth functions. Then, \[ \Abs{u}_{H^2(\Omega)}^2 \sim \int_0^{b} {\left( r^{\frac{\pi}{\omega}-2} \right)}^2 r \,dr \sim \left. r^{\frac{2 \pi}{\omega}-2} \right\vert_{0}^{b} \] for some constant $b>0$. This implies that $u \notin H^2(\Omega)$ if $\omega > \pi$. On the other hand, $W^{2, \frac{2d}{d+2}}(\Omega) = W^{2, 1}(\Omega)$ for $d = 2$ and \[ \Abs{u}_{W^{2,1}(\Omega)} \sim \int_0^{b} \left( r^{\frac{\pi}{\omega}-2} \right) r \,dr \sim \left. r^{\frac{\pi}{\omega}} \right\vert_{0}^{b}, \] which indicates that $u \in W^{2,1}(\Omega)$ for all $\omega \in (0,2\pi)$. \section{Convergence of the linear finite element approximation for a general Hessian} \label{sect:general} In this section we consider the general situation where $H(\boldsymbol{x})$ is symmetric but not necessarily positive definite. In this case, it is unrealistic to require the recovered Hessian $R$ to be positive definite. Thus, we cannot use $R$ directly to define the metric tensor which is required to be positive definite. A commonly used strategy is to replace $R$ by $\Abs{R} = \sqrt{R^2}$ since $\Abs{R}$ retains the eigensystem of $R$. 
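The modification $R \mapsto \Abs{R} = \sqrt{R^2}$ keeps the eigenvectors of $R$ and replaces its eigenvalues by their absolute values; a minimal sketch (assuming \texttt{numpy}; the matrix is an arbitrary indefinite example):

```python
import numpy as np

def matrix_abs(R):
    """|R| = sqrt(R^2) for a symmetric matrix R: same eigenvectors,
    eigenvalues replaced by their absolute values."""
    lam, V = np.linalg.eigh(R)          # R = V diag(lam) V^T
    return V @ np.diag(np.abs(lam)) @ V.T

# indefinite example with eigenvalues -1 and 3:
R = np.array([[1.0, 2.0],
              [2.0, 1.0]])
A = matrix_abs(R)
print(np.linalg.eigvalsh(A))            # eigenvalues of |R| are 1 and 3
```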
However, $\Abs{R}$ can become singular locally. To avoid this difficulty, we regularize $\Abs{R}$ with a regularization parameter $\alpha_h > 0$ (to be determined). From \cref{eq:M:H1}, we define the regularized metric tensor as \begin{equation} M_K = {\det(\alpha_h I + \Abs{R_K})}^{- \frac{1}{d+2}} \Norm{\alpha_h I + \Abs{R_K} }_2^{\frac{2}{d+2}} \left(\alpha_h I + \Abs{R_K}\right), \quad \forall K \in \mathcal{T}_h, \label{eq:M:H1:2} \end{equation} and obtain the following theorem with a proof similar to that of \cref{thm:H1}. \begin{theorem}[General Hessian] \label{thm:H1:general} For a given positive parameter $\alpha_h > 0$, we assume that the recovered Hessian $R$ satisfies \begin{align} & C_{R-,K} I \le {\left( \alpha_h I + \Abs{R_K} \right)}^{-1} \left(\alpha_h I + \Abs{H(\boldsymbol{x})} \right), \quad \forall \boldsymbol{x} \in K, \quad \forall K \in \mathcal{T}_h, \label{eq:CRminus:general} \\ & {\left( \alpha_h I + \Abs{R_K} \right)}^{-1} \Abs{H(\boldsymbol{x})} \leq C_{R+,K} I, \quad \forall \boldsymbol{x} \in K, \quad \forall K \in \mathcal{T}_h, \label{eq:CRplus:general} \end{align} where $C_{R-,K}$ and $C_{R+,K}$ are element-wise constants satisfying \cref{CR+}. If the solution of the BVP \cref{eq:bvp-2} is in $H^2(\Omega)$, then for any quasi-$M$-uniform mesh associated with metric tensor \cref{eq:M:H1:2} and satisfying \cref{eq:equi:approx,eq:ali:approx} the linear finite element error for the BVP is bounded by \begin{equation} \Abs{u - u_h}_{H^1(\Omega)} \leq C \cdot C_{ali}^{\frac{d+1}{2}} C_{eq}^{\frac{d+2}{2d}} \cdot \frac{C_{R+}}{C_{R-}} \cdot N^{-\frac{1}{d}} \Norm{ {\det(\alpha_h I + \Abs{H})}^{\frac{1}{d}} \left(\alpha_h I + \Abs{H}\right) }_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}. 
\label{eq:thm:H1:general} \end{equation} \end{theorem} From \cref{eq:CRplus:general,eq:CRminus:general,eq:thm:H1:general} we see that the greater $\alpha_h$ is, the easier the recovered Hessian satisfies \cref{eq:CRplus:general,eq:CRminus:general}; however, the error bound increases as well. For example, consider the extreme case of $\alpha_h \to \infty$. In this case, \cref{eq:CRplus:general,eq:CRminus:general} can be satisfied with $C_{R+} = C_{R-} = 1$ for any $R$. At the same time, the metric tensor defined in \cref{eq:M:H1:2} has the asymptotic behavior $M_K \to \alpha_h^{\frac{4}{d+2}} I$ and the corresponding $M$-uniform mesh is a uniform mesh. Obviously, the right-hand side of \cref{eq:thm:H1:general} is large for this case. Another extreme case is $\alpha_h \to 0$ where \cref{eq:thm:H1:general} reduces to \cref{eq:thm:H1} if both $R$ and $H(\boldsymbol{x})$ are positive definite. We now consider the choice of $\alpha_h$. We define a parameter $\alpha$ through the implicit equation \begin{equation} \Norm{ \sqrt[d]{\det(\alpha I + \Abs{H})} \cdot (\alpha I + \Abs{H}) }_{L^{\frac{d}{d+2}}(\Omega)} = 2 \Norm{ \sqrt[d]{\det(\Abs{H})} \cdot H}_{L^{\frac{d}{d+2}}(\Omega)} . \label{eq:alpha-3} \end{equation} The left-hand-side term is an increasing function of $\alpha$. Moreover, the term is equal to half of the right-hand-side term at $\alpha = 0$ and tends to infinity as $\alpha \to \infty$. Thus, from the intermediate value theorem we know that \cref{eq:alpha-3} has a unique solution $\alpha > 0$ if $\Norm{ \sqrt[d]{\det(\Abs{H})} \cdot H}_{L^{\frac{d}{d+2}}(\Omega)} > 0$. 
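Because the left-hand side is strictly increasing in $\alpha$, the root can be bracketed and located by a standard bisection; a minimal sketch (the function \texttt{lhs} below is a hypothetical stand-in for the left-hand side of \cref{eq:alpha-3} or its discrete analogue):

```python
def bisect(f, target, a, b, tol=1e-12, max_iter=200):
    """Solve f(alpha) = target for an increasing function f on [a, b]."""
    assert f(a) <= target <= f(b), "root must be bracketed"
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        if f(m) <= target:
            a = m
        else:
            b = m
        if b - a < tol:
            break
    return 0.5 * (a + b)

# hypothetical stand-in: any increasing function of alpha works the same way
lhs = lambda alpha: alpha + alpha**3
alpha_h = bisect(lhs, target=10.0, a=0.0, b=10.0)
print(alpha_h)   # lhs(alpha_h) is approximately 10 (here alpha_h is close to 2)
```

Each iteration halves the bracket, so the cost is logarithmic in the required tolerance.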
If we choose $\alpha_h = \alpha$, then the finite element error is bounded by \begin{equation} \Abs{u - u_h}_{H^1(\Omega)} \leq C \cdot C_{ali}^{\frac{d+1}{2}} C_{eq}^{\frac{d+2}{2d}} \cdot \frac{C_{R+}}{C_{R-}} \cdot 2N^{-\frac{1}{d}} \Norm{ \sqrt[d]{\det(\Abs{H})} \cdot H }_{L^{\frac{d}{d+2}}(\Omega)}^{\frac{1}{2}}, \label{eq:thm:H1:general-2} \end{equation} which is essentially the same as \cref{eq:thm:H1}. Note that \cref{eq:alpha-3} is impractical since it requires prior knowledge of $H(\boldsymbol{x})$. In practice it can be replaced by \begin{equation} \sum_K \Abs{K} {\det\left( \alpha_h I + \Abs{R_K}\right)}^{\frac{1}{d+2}} \Norm{\alpha_h I + \Abs{R_K}}_2^{\frac{d}{d+2}} = 2^{\frac{d}{d+2}} \sum_K \Abs{K} {\det(\Abs{R_K})}^{\frac{1}{d+2}}\Norm{R_K}_2^{\frac{d}{d+2}} . \label{alpha-2} \end{equation} This equation can be solved efficiently using the bisection method. Numerical results show that $\alpha_h$ is close to $\alpha$ (\cref{fig:tanh:alpha}). \section{A selection of commonly used Hessian recovery methods} \label{sect:recovery:methods} In this section we give a brief description of four commonly used Hessian recovery algorithms for two-dimensional mesh adaptation. The interested reader is referred to~\cite{Kam09,ValManDomDufGui07} for a more detailed description of these Hessian recovery techniques. Recall that the goal of the Hessian recovery in the current context is to find an approximation of the Hessian at the mesh nodes using the linear finite element solution $u_h$. The approximation of the Hessian on an element is calculated as the average of the nodal approximations of the Hessian at the vertices of the element. \subsection*{QLS:\ quadratic least squares fitting to nodal values} This method fits a quadratic polynomial to the nodal values of $u_h$ at a selection of neighboring nodes in the least squares sense and then differentiates the fitted polynomial.
The original purpose of QLS was gradient recovery (see, e.g., Zhang and Naga~\cite{ZhaNag05}). However, it is easily adapted to Hessian recovery by simply differentiating the fitting polynomial twice. More specifically, for a given node (say $\boldsymbol{x}_0$) at least five neighboring nodes are selected. A quadratic polynomial (denoted by $p$) is found by least squares fitting to the values of $u_h$ at the selected nodes. The linear system associated with the least squares problem usually has full rank and a unique solution. If it does not, additional nodes from the neighborhood of $\boldsymbol{x}_0$ are added to the selection until the system has full rank. An approximation to the Hessian of the solution $u$ at $\boldsymbol{x}_{0}$ is defined as the Hessian of $p$, viz., \[ R^{QLS}(\boldsymbol{x}_{0}) = H(p)(\boldsymbol{x}_{0}) . \] \subsection*{DLF:\ double linear least squares fitting} \label{ssect:DLF} The DLF method computes the Hessian by using linear least squares fitting twice. First, least squares fitting of the nodal values of $u_h$ in a neighbourhood of $\boldsymbol{x}_0$ is employed to find a linear fitting polynomial $p$. The recovered gradient of the function $u$ at $\boldsymbol{x}_0$ is defined as the gradient of $p$ at $\boldsymbol{x}_{0}$, i.e., \[ \nabla_h^{DLF} u(\boldsymbol{x}_{0}) = \nabla p(\boldsymbol{x}_{0}). \] Second-order derivatives are then obtained by applying the same linear fitting to the calculated first-order derivatives. Mixed derivatives are averaged in order to obtain a symmetric recovered Hessian. \subsection*{LLS:\ linear least squares fitting to first-order derivatives} This method is similar to DLF except that the first-order derivatives at the nodes are calculated in a different way: they are first calculated at element centers and then recovered at the nodes by linear least squares fitting to these values.
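All three least squares variants above reduce to small dense linear systems. A minimal two-dimensional sketch of the QLS step (Python/NumPy; the node selection and full-rank checks described above are omitted):

```python
import numpy as np

def qls_hessian(nodes: np.ndarray, values: np.ndarray, x0: np.ndarray) -> np.ndarray:
    """QLS-style recovery: fit p(x,y) = c0 + c1 x + c2 y + c3 x^2 + c4 xy + c5 y^2
    to (nodes, values) in the least squares sense and return the Hessian of p."""
    d = nodes - x0                      # center at x0 for better conditioning
    X, Y = d[:, 0], d[:, 1]
    V = np.column_stack([np.ones_like(X), X, Y, X**2, X * Y, Y**2])
    c, *_ = np.linalg.lstsq(V, values, rcond=None)
    # R^{QLS}(x0) = H(p)(x0); the Hessian of a quadratic is constant.
    return np.array([[2.0 * c[3], c[4]], [c[4], 2.0 * c[5]]])
```

For a quadratic $u$ the recovery is exact; for a general $u_h$ the fit is performed over the nodes neighboring $\boldsymbol{x}_0$.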
\subsection*{WF:\ weak formulation} This approach recovers the Hessian by means of a variational formulation~\cite{Dol98}. More specifically, let $\phi_0$ be the canonical piecewise linear basis function at node $\boldsymbol{x}_0$. Then the nodal approximation $u_{xx,h}$ to the second-order derivative $u_{xx}$ at $\boldsymbol{x}_0$ is defined through \[ u_{xx,h}(\boldsymbol{x}_0) \int_\Omega \phi_0(\boldsymbol{x}) \,d\bx = -\int_\Omega \frac{\partial u_h}{\partial x} \frac{\partial \phi_0}{\partial x} \,d\bx . \] The same approach is used to compute $u_{xy,h}$ and $u_{yy,h}$. Since $\phi_0$ is piecewise linear and vanishes outside the patch associated with $\boldsymbol{x}_0$, the involved integrals can be computed efficiently with appropriate quadrature formulas over a single patch. \section{Numerical examples} \label{sect:examples} In this section we present two numerical examples to verify the analysis given in the previous sections. We use \bamg{}~\cite{bamg} to generate adaptive meshes as quasi-$M$-uniform meshes for the regularized metric tensor \cref{eq:M:H1:2}. Special attention will be paid to the mesh conditions \cref{eq:equi:approx,eq:ali:approx} and the closeness conditions \cref{eq:AgLiVa99:2,eq:CRplus:general,eq:CRminus:general}. For the recovery closeness condition \cref{eq:AgLiVa99:2} we compare the regularized recovered and exact Hessians, i.e.,\ we compute $\varepsilon$ for \begin{equation} \Norm{(\alpha_h I + \Abs{R_K}) - (\alpha I + \Abs{H_K}) }_\infty \leq \varepsilon \lambda_{\min} (\alpha_h I + \Abs{R_K}), \qquad \forall K \in \mathcal{T}_h, \label{eq:eps:regularized} \end{equation} where $H_K$ is an average of the exact Hessian on the element $K$, and $\alpha_h$ and $\alpha$ are the regularization parameters for the recovered and the exact Hessians, respectively.
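The element-wise evaluation of the regularized metric tensor \cref{eq:M:H1:2} used in the computations is a few lines of linear algebra; a minimal sketch (Python/NumPy, assuming a symmetric recovered Hessian $R_K$):

```python
import numpy as np

def regularized_metric(R_K: np.ndarray, alpha_h: float, d: int) -> np.ndarray:
    """Element-wise regularized metric tensor (cf. eq:M:H1:2):
    M_K = det(aI+|R_K|)^{-1/(d+2)} * ||aI+|R_K|||_2^{2/(d+2)} * (aI+|R_K|)."""
    w, Q = np.linalg.eigh(R_K)                     # |R_K| via eigendecomposition
    A = alpha_h * np.eye(d) + (Q * np.abs(w)) @ Q.T
    det_fac = np.linalg.det(A) ** (-1.0 / (d + 2))
    nrm_fac = np.linalg.norm(A, 2) ** (2.0 / (d + 2))
    return det_fac * nrm_fac * A
```

The matrix absolute value is taken through the symmetric eigendecomposition, so $M_K$ stays symmetric positive definite for any $\alpha_h>0$, even when $R_K$ is indefinite.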
\begin{example}[{\cite[Example~4.3]{Hua05a}}] \label{ex:flower} \normalfont{} The first example is in the form of BVP~\cref{eq:bvp} with $f$ and $g$ chosen such that the exact solution is given by \begin{align*} u(x,y) =& \tanh \left[ 30 \left( x^2 + y^2 - 0.125 \right) \right] \\ &+ \tanh \left[ 30 \left( {(x-0.5)}^2 + {(y-0.5)}^2 - 0.125 \right) \right] \\ &+ \tanh \left[ 30 \left( {(x-0.5)}^2 + {(y+0.5)}^2 - 0.125 \right) \right] \\ &+ \tanh \left[ 30 \left( {(x+0.5)}^2 + {(y-0.5)}^2 - 0.125 \right) \right] \\ &+ \tanh \left[ 30 \left( {(x+0.5)}^2 + {(y+0.5)}^2 - 0.125 \right) \right] . \end{align*} A typical plot of element-wise constants $C_{eq,K}$ and $C_{ali,K}$ in the mesh quasi-$M$-uniformity conditions \cref{eq:equi:approx,eq:ali:approx} is shown in \cref{fig:flower:ceq,fig:flower:cali}, demonstrating that these conditions hold with relatively small $C_{eq}$ and $C_{ali}$. For the given mesh example we have $0.5 \le C_{eq,K} \le 1.5$ and $1 \le C_{ali,K} \le 1.3$, which gives $C_{eq} = 1.5$ and $C_{ali} = 1.3$. In fact, we found that $C_{eq} \le 2.0$ and $C_{ali} \le 2.1$ for all computations in this paper, indicating that \bamg{} does a good job in generating quasi-$M$-uniform meshes for a given metric tensor. \Cref{fig:flower:epsk,fig:flower:eps} show a typical distribution of element-wise values of $\varepsilon$ in \cref{eq:eps:regularized} and its values for a sequence of adaptive grids. We observe that for all methods $\varepsilon$ is not small with respect to one, which violates the condition \cref{eq:AgLiVa99:2}. Typical element-wise values $C_{R+,K}/C_{R-}$ and values of $C_{R+}/C_{R-}$ for a sequence of adaptive grids are shown in \cref{fig:flower:crk,fig:flower:CR}. Notice that $C_{R+} / C_{R-}$ stays relatively small and bounded, thus satisfying the closeness conditions \cref{eq:CRplus:general,eq:CRminus:general}.
For this example, the finite element error $\Abs{u-u_h}_{H^1(\Omega)}$ is almost indistinguishable for meshes obtained by means of the exact and recovered Hessians (\cref{fig:flower:error}), and the approximate $\alpha_h$, computed through \cref{alpha-2}, is very close to the value for the exact Hessian (\cref{fig:flower:alpha}). \begin{figure} \caption{element-wise $C_{eq,K}$\label{fig:flower:ceq}} \caption{element-wise $C_{ali,K}$\label{fig:flower:cali}} \caption{element-wise $\varepsilon$ for \cref{eq:eps:regularized}\label{fig:flower:epsk}} \caption{element-wise $C_{R+,K}/C_{R-}$\label{fig:flower:crk}} \caption{$\varepsilon$ for \cref{eq:eps:regularized}\label{fig:flower:eps}} \caption{$C_{R+}/C_{R-}$\label{fig:flower:CR}} \caption{finite element error $\Abs{u-u_h}_{H^1(\Omega)}$\label{fig:flower:error}} \caption{comparison of $\alpha_h$ and $\alpha$\label{fig:flower:alpha}} \caption{Numerical results for \cref{ex:flower}\label{fig:flower}} \end{figure} \end{example} \begin{example}[Strong anisotropy] \label{ex:tanh} \normalfont{} The second example is in the form of BVP~\cref{eq:bvp} with $f$ and $g$ chosen such that the exact solution is given by \[ u(x,y) = \tanh ( 60y ) - \tanh \bigl( 60 (x - y) - 30 \bigr). \] This solution exhibits very strong anisotropic behavior and describes the interaction between a boundary layer along the $x$-axis and a steep shock wave along the line $y = x - 1/2$. \Cref{fig:tanh:epsk,fig:tanh:epsilon} show that $\varepsilon \approx 60$ and is therefore not small with respect to one, violating the condition \cref{eq:AgLiVa99:2} for all meshes in the considered range of $N$ and for all four recovery techniques. On the other hand, \cref{fig:tanh:CR} shows that the ratio $C_{R+} / C_{R-}$ is large ($\approx 10^2$) but, nevertheless, it seems to stay bounded with increasing $N$, confirming that \cref{eq:CRplus:general,eq:CRminus:general} are satisfied by the recovered Hessian.
The fact that the ratio $C_{R+} / C_{R-}$ has different values in this and the previous example indicates that the accuracy or closeness of the four Hessian recovery techniques depends on the behavior and especially the anisotropy of the solution. Fortunately, as shown by \cref{thm:H1,thm:H1:general}, the finite element error is insensitive to the closeness of the recovered Hessian. The finite element solution error is shown in \cref{fig:tanh:error} as a function of $N$. Finally, \cref{fig:tanh:alpha} shows that $\alpha_h$, computed through \cref{alpha-2}, is close to the exact value $\alpha$ defined in \cref{eq:alpha-3}. \begin{figure} \caption{element-wise $C_{eq,K}$\label{fig:tanh:ceq}} \caption{element-wise $C_{ali,K}$\label{fig:tanh:cali}} \caption{element-wise $\varepsilon$ for \cref{eq:eps:regularized}\label{fig:tanh:epsk}} \caption{element-wise $C_{R+,K}/C_{R-}$} \caption{$\varepsilon$ for \cref{eq:eps:regularized}\label{fig:tanh:epsilon}} \caption{$C_{R+}/C_{R-}$\label{fig:tanh:CR}} \caption{finite element error $\Abs{u-u_h}_{H^1(\Omega)}$\label{fig:tanh:error}} \caption{comparison of $\alpha_h$ and $\alpha$\label{fig:tanh:alpha}} \caption{Numerical results for \cref{ex:tanh}\label{fig:tanh}} \end{figure} \end{example} \section{Conclusion and further comments} \label{sect:conclusion} In the previous sections we have investigated how a nonconvergent recovered Hessian works in mesh adaptation. Our main results are \cref{thm:H1,thm:H1:general}, where an error bound for the linear finite element solution of BVP \cref{eq:bvp-2} is given for quasi-$M$-uniform meshes corresponding to a metric tensor depending on a recovered Hessian. As with conventional error estimates for the $H^1$ semi-norm of the error in linear finite element approximations, our error bound is of first order in terms of the average element diameter, $N^{-\frac{1}{d}}$, where $N$ is the number of elements and $d$ is the dimension of the physical domain.
This error bound is valid under the closeness condition \cref{eq:CRs} (or \cref{eq:CRplus:general,eq:CRminus:general}), which is weaker than \cref{eq:AgLiVa99:2} used by Agouzal et al.~\cite{AgoLipVas99} and Vassilevski and Lipnikov~\cite{VasLip99}. Numerical results in~\cref{sect:examples} show that the new closeness condition is satisfied by the recovered Hessians obtained with commonly used Hessian recovery algorithms. The error bound also shows that the finite element error changes gradually with the closeness of the recovered Hessian to the exact one. These results provide an explanation of how a nonconvergent recovered Hessian works in mesh adaptation. In this work the closeness conditions \cref{eq:CRplus:general,eq:CRminus:general} have been verified only numerically. Developing a theoretical proof of these conditions for some Hessian recovery techniques is an interesting topic for further investigation. \section*{Acknowledgment} The authors are grateful to the anonymous referees for their comments and suggestions for improving the quality of this paper, particularly for the helpful comments on improving the proof of \cref{thm:H1}. \end{document}
\begin{document} \maketitle \begin{abstract} In this paper we study the blow-ups of the singular points in the boundary of a minimizing cluster lying in the interface of more than two chambers. We establish a sharp lower bound for the perimeter density at those points and we prove that this bound is rigid, namely having the lowest possible density completely characterizes the blow-up. \end{abstract} \section{Introduction} This paper deals with the study of the singularities of an {\it isoperimetric cluster}, that is, a finite disjoint family of sets of finite perimeter, called chambers, which minimizes the sum of the perimeters of all the chambers (Section \ref{sec.prel} contains all the precise definitions).\\ If the cluster has only one chamber, it is well known that the ball is the isoperimetric set (with volume constraint). As the number of chambers increases, the problem becomes slightly more difficult: while existence and regularity of isoperimetric clusters are nowadays classical--see \cite{Al}, we also refer to \cite{Ma} and the references therein for a complete overview of the subject--very little is known about the shape of those minimizing clusters. A complete characterization of them is obtained only in the case when there are two chambers (see \cite{FABHZ,HMRR,HLR, Re}), or, in the plane, when there are three chambers (see \cite{Wi}). \par Since it seems to be very difficult to obtain such a ``global'' description of the isoperimetric clusters with an arbitrary number of chambers, a crucial role is played by a ``local'' analysis of their boundary.
\\ In \cite{Al} it is proved, in analogy with the case when there is only one chamber, that the reduced boundary of the cluster (we refer again to Section \ref{sec.prel} for the definition) is a finite union of analytic hypersurfaces with constant mean curvature, while, as we shall explain, the problem of the description of the singular part of the boundary of the cluster is very interesting and almost completely open. \par Just to give an example of the peculiar behavior of the singular points when dealing with clusters, let us notice that, while minimal surfaces can develop singularities only in dimension $N\ge 8$, in the case of minimizing clusters any point in the interface of three or more different chambers is a singular point in every dimension.\\ As we shall see in the following sections, these kinds of singularities produced by the junction of three or more chambers will play a crucial role in our analysis. We define \[ c\mathrm{-sing}(\mathcal E)=\left\{\text{points lying in the boundary of at least three chambers of }\mathcal E\right\}. \] We will concentrate our analysis on these points.\\ Given a minimizing cluster $\mathcal E$ and a singular point $p\in\partial\mathcal E$, the monotonicity formula \eqref{monoton} introduced in Section \ref{sec.prel} ensures that the blow-up $\lim_{r\to 0}r^{-1}\left(\mathcal E-p\right)$ is still minimizing and that it is a cone. The study of those blow-ups, and thence of cone-like minimizing clusters, is then a natural tool in order to get a local description of the boundary of the cluster around a singular point. In dimension $N=2$, there is only one (up to rotations) possible blow-up of the boundary of the cluster around a singular point. This blow-up consists of three half-lines meeting at the origin and forming three 120-degree angles. Throughout this paper we will denote this set simply by $Y$.
In $\mathbb R^3$, a remarkable result in \cite{Ta} provides us with a complete characterization of all the possible blow-ups around singular points, asserting that there are only two possible cases: the set $Y\times\mathbb R$ and the cone generated by the centre of a regular tetrahedron and its edges. \par The singular set of minimal surfaces $L^\infty$-close to $Y\times \mathbb R^{N-2}$, or to a tetrahedral cone with an $(N-3)$-dimensional spine, has been investigated in the seminal work \cite{SimonCone} and by \cite{ColSpol}. \par To our knowledge, no characterization of cones is available in dimension greater than three.\\ The description of the possible blow-ups deeply relies on the ambient dimension $N$. Nevertheless, we shall prove a dimension-free lower bound for the perimeter density $\Theta_p(\mathcal E)$, defined as the limit as $r\to 0$ of the perimeter of $\mathcal E$ inside the ball of radius $r$ divided by the volume of the $(N-1)$-dimensional ball of the same radius. More precisely, in Section \ref{sec.ldb} we prove the following theorem: \begin{thm}\label{thm.lowdens} Let $N\ge 2$, and let $\mathcal E$ be a cone-like minimizing cluster with at least three chambers. Then \[ \Theta_0(\mathcal E)\ge \frac 3 2. \] \end{thm} Theorem \ref{thm.lowdens} can be rephrased as follows: $Y\times \mathbb R^{N-2}$ has the lowest possible perimeter density at the origin among all minimizing cone-like clusters of $\mathbb R^N$ with at least three chambers. As we shall see in Section \ref{sec.consequences}, the above statement is rigid; in particular, in Corollary \ref{cor.characterizationbydensity} we show that any other cone-like minimizing cluster has density strictly greater than $3/2$.\\ To prove Theorem \ref{thm.lowdens} we use an induction argument on the dimension in order to reduce to the case $N=2$, where the conclusion easily follows by the above mentioned characterization of the minimizing cone-like clusters.
To prove the inductive step we show the following: \begin{itemize} \item[i)] there always exist singular points in the unit sphere where more than two chambers meet; \item[ii)] the blow-up at those points is a cone of the form $\mathcal E'\times \mathbb R$, with $\mathcal E'\subset\mathbb R^{N-1}$ of density not exceeding $\Theta_0(\mathcal E)$. \end{itemize} The proof of item i) requires some preliminary results introduced in Section \ref{sec.connected}.\\ In Section \ref{sec.consequences}, we consider a converging sequence of minimizing clusters $\mathcal E_k\rightarrow \mathcal E$ and we show, as an application of Theorem \ref{thm.lowdens}, that $c\mathrm{-sing}(\mathcal E_k)$ converges in the Hausdorff distance to $c\mathrm{-sing}(\mathcal E)$ (see Proposition \ref{prop.sequential}). It is relatively easy to show that a sequence of $c$-singular points must converge to a $c$-singular point of the limit cluster, but it is more difficult to show that a $c$-singularity cannot appear only in the limit. To show this, we reduce to the case $N=2$ and we prove, in Lemma \ref{lem.Ynonremovable}, that every cluster $\mathcal E\subset \mathbb R^2$ sufficiently close to $Y$ must have a $c$-singular point in a neighborhood of the origin. \section{Preliminaries and basic definitions}\label{sec.prel} Let $m\in\mathbb N$ and let $\mathcal E=\left\{\mathcal E(i)\right\}_{i=1}^m$ be a family of subsets of $\mathbb R^N$. We say that $\mathcal E$ is an $m$-{\it cluster} (or simply a {\it cluster}) provided each $\mathcal E(i)$ is a set of locally finite perimeter and $|\mathcal E(i)\cap\mathcal E(j)|=0$ for every $i\neq j$, where $|\cdot|$ denotes the $N$-dimensional Lebesgue measure.
We call the sets $\mathcal E(i)$ the {\it chambers} of the cluster $\mathcal E$, and we denote by $\mathcal E(0)$ the {\it exterior chamber} $\mathbb R^N\setminus \overset{m}{\underset{i=1}{\bigcup}}\mathcal E(i)$.\\ Given an $m$-cluster $\mathcal E$ and $i\neq j\in\{0,\ldots,m\}$, we denote by $\mathcal E(i,j)$ the {\it interface} of the chambers $\mathcal E(i)$ and $\mathcal E(j)$, defined as the intersection of their reduced boundaries, namely $\mathcal E(i,j)=\partial^*\mathcal E(i)\cap\partial^*\mathcal E(j)$. Given an open set $A$, we finally define the {\it relative perimeter} of $\mathcal E$ in $A$ as \[ P(\mathcal E;A)={\underset{0\le i<j\le m}{\sum}}\mathcal H^{N-1}\left(\mathcal E(i,j)\cap A\right), \] where $\mathcal H^{N-1}$ denotes the $(N-1)$-dimensional Hausdorff measure. By virtue of \cite[Proposition 29.4]{Ma} the above definition is equivalent to \begin{equation}\label{eq:1/2perimeter} P(\mathcal E;A)=\frac 1 2 \sum_{i=0}^m P(\mathcal E(i);A). \end{equation} Here we denote by $P(E;A)$ the relative perimeter in $A$ of a set $E$ of locally finite perimeter.\\ Let $\mathcal E$ and $\mathcal F$ be two $m$-clusters and let $\mathrm{d}(\mathcal E,\mathcal F)=\sum_{i=1}^m|\mathcal E(i)\Delta\mathcal F(i)|$ be the sum of the volumes of the symmetric differences between the chambers of $\mathcal E$ and the chambers of $\mathcal F$.
Given $\Lambda,\,\rho>0$, we say that $\mathcal E$ is a $(\Lambda, \rho)$-{\it minimizing cluster} in $A$ if \[ P(\mathcal E;B_r(x))\le P(\mathcal F; B_r(x))+\Lambda \mathrm{d}(\mathcal E,\mathcal F) \] holds true for every $r<\rho$, every $x\in\mathbb R^N$, and every cluster $\mathcal F$ such that $\mathcal E(i)\Delta \mathcal F(i)\subset\subset B_r(x)$.\\ We stress that, if a cluster $\mathcal E$ is a minimizer of the perimeter functional among all clusters of given volume, namely \[ P(\mathcal E)=\inf \left\{P(\mathcal F)\,:\,|\mathcal F(i)|=|\mathcal E(i)|\right\}, \] then $\mathcal E$ is a $(\Lambda,\rho)$-minimizing cluster for some positive numbers $\Lambda$ and $\rho$ (see for instance \cite[Chapter 29]{Ma}).\\ If $\mathcal E$ is a $(\Lambda,\rho)$-minimizing cluster, then its reduced boundary is a $C^{1,\alpha}$-hypersurface, for every $\alpha\in(0,1)$ (see \cite[Corollary 4.6]{CLM}); however, the set \[ \partial \mathcal E:=\mathrm{cl}\left(\overset{m}{\underset{i=1}{\bigcup}}\partial^*\mathcal E(i)\right)=\overset{m}{\underset{i=1}{\bigcup}}\left\{x\,:\,0<|B_r(x)\cap\mathcal E(i)|<r^N\omega_N \ \text{ for every } r>0\right\} \] can develop singularities.\\ We define the $c$-{\it singular set} as the subset of $\partial \mathcal E\setminus \overset{m}{\underset{i=1}{\bigcup}}\partial^*\mathcal E(i)$ consisting of those singularities arising at the junction of three or more chambers; more precisely, we set \[ c\mathrm{-sing}(\mathcal E)=\{x\in\partial\mathcal E\,:\, \# I(x)\ge 3\}, \] where \begin{equation}\label{closechambers} I(x)=\left\{i\in\{0,\ldots,m\}\,:\, 0<\liminf_{r\to 0}|B_r(x)\cap\mathcal E(i)|r^{-N}\right\}.
\end{equation} For the sake of completeness we state here below a result that allows us to reformulate the definition of the $c$-singular set; we refer to \cite{CLM} and \cite[Chapter 30]{Ma} for a proof.\\ \begin{lem}[Infiltration Lemma]\label{infiltrationlem} Let $\mathcal E$ be a $(\Lambda,\rho)$-minimizing cluster; then there exist constants $\varepsilon$ and $r_0$, depending only on $\Lambda$, $\rho$ and $N$, such that if \[ |\mathcal E(i)\cap B_r(x)|<\varepsilon r^N\omega_N, \] for some $x\in\mathbb R^N$, $i=0,\ldots,m$ and $r<r_0$, then \[ |\mathcal E(i)\cap B_{r/2}(x)|=0. \] \end{lem} \begin{pro}\label{csingpro} Let $\mathcal E$ be a $(\Lambda,\rho)$-minimizing cluster, and let $\varepsilon$ and $r_0$ be the constants given by Lemma \ref{infiltrationlem}. Then $x\in c\mathrm{-sing}(\mathcal E)$ if and only if $x\in\partial\mathcal E$ and \begin{equation}\label{csing} |\mathcal E(i)\cap B_r(x)|+|\mathcal E(j)\cap B_r(x)|\le(1-\varepsilon)r^N\omega_N, \end{equation} for every $i\neq j$ and every $r\le r_0$. \end{pro} \begin{proof} Suppose that $x$ is such that \[ |\mathcal E(i)\cap B_r(x)|+|\mathcal E(j)\cap B_r(x)|>(1-\varepsilon)r^N\omega_N, \] for some $r<r_0$ and $i\neq j$; then, for every $h\notin\{i,j\}$, since the chambers are disjoint, one has $|\mathcal E(h)\cap B_r(x)|<\varepsilon r^N\omega_N$. By Lemma \ref{infiltrationlem}, $|\mathcal E(h)\cap B_{r/2}(x)|=0$, and then $h\notin I(x)$. In other words $I(x)\subset\{i,j\}$, thus $x\notin c\mathrm{-sing}(\mathcal E)$.\\ We are left to show that if $x$ satisfies \eqref{csing}, then $x\in c\mathrm{-sing}(\mathcal E)$. Suppose by contradiction that $I(x)=\{i,j\}$ and let $r_n$ be a sequence such that $r_n\to 0$ and $\lim_{n\to\infty}|B_{r_n}(x)\cap\mathcal E(h)|r_n^{-N}=0$, for every $h\notin\{i,j\}$.
Then \[ \omega_N=\lim_{n\to\infty}\left|B_{r_n}(x)\cap \left(\bigcup_{l=0}^m\mathcal E(l)\right) \right|r_n^{-N}=\lim_{n\to\infty}\sum_{l=0}^m |B_{r_n}(x)\cap\mathcal E(l)|r_n^{-N} \] \[ =\lim_{n\to\infty}\left[|B_{r_n}(x)\cap\mathcal E(i)|r_n^{-N}+|B_{r_n}(x)\cap\mathcal E(j)|r_n^{-N}\right]. \] Thus \eqref{csing} cannot be satisfied by the chambers $\mathcal E(i)$ and $\mathcal E(j)$, provided $r$ is sufficiently small. \end{proof} We now introduce the {\it perimeter density} $\Theta_x(\mathcal E)$ as follows: \[ \Theta_x(\mathcal E)=\lim_{r\to 0}\frac{P(\mathcal E;B_r(x))}{\omega_{N-1} r^{N-1}}. \] Existence and finiteness of $\Theta_x(\mathcal E)$ for $(\Lambda, \rho)$-minimizing clusters is given by the so-called {\it monotonicity formula}, stating that the quantity \begin{equation}\label{monoton} e^{\Lambda r}\frac{P(\mathcal E;B_r(x))}{\omega_{N-1} r^{N-1}} \end{equation} is increasing for every $r<\rho$ and $x\in\partial \mathcal E$; see for instance \cite[Theorem 17.6]{SimonBook}.\\ In particular, we can infer from the monotonicity formula \eqref{monoton} the existence of blow-ups at every point $x\in\partial\mathcal E$: namely, the sets $\mathcal E_{x,r}=(\mathcal E-x)/r$ converge, as $r\to 0$, in $L^1$ to a cluster $\mathcal E_{x,0}$, and $\partial\mathcal E_{x,r}$ converges in the Hausdorff distance to the boundary of $\mathcal E_{x,0}$. By investigating the precise error term in the monotonicity formula, one obtains that the limit set is a conical cluster which minimizes the perimeter in a sense that will be made rigorous by the following definition. \par We say that $\mathcal E$ is a {\it cone-like} $m$-cluster if, for $i=0,\ldots,m$, $\mathcal E(i)$ is an open cone with vertex at the origin of $\mathbb R^N$.
A cone-like $m$-cluster $\mathcal E$ is a {\it minimizing cone-like cluster} provided \[ P(\mathcal E; B_r(x))\le P(\mathcal F; B_r(x)), \] for every $r>0$, every $x\in\mathbb R^N$ and every $m$-cluster $\mathcal F$ such that $\mathcal E(i)\Delta\mathcal F(i)\subset\subset B_r(x)$. \section{Connectedness of the interfaces}\label{sec.connected} The aim of this section is to prove the following lemma: \begin{lem}\label{lem.connected} Let $N>2$ and let $\mathcal E$ be a minimizing cone-like cluster; then the set $\partial\mathcal E\cap\mathbb S^{N-1}$ has the following property: if $A$ and $B$ are non-empty sets such that $\partial\mathcal E\cap \mathbb S^{N-1}=A\cup B$, then $\mathrm{dist}(A,B)=0$. \end{lem} We will derive it as a consequence of the following two statements. The first one is a local version of a generalization of Frankel's theorem \cite{FR}. The second one is a regularity result that enables us to apply the first one. \begin{lem}[local version of {\cite[Theorem 3]{PW}}]\label{lem.local version} Let $M^n$ be a complete, connected Riemannian manifold of positive Ricci curvature. If $\Sigma_1, \Sigma_2$ are two minimal hypersurfaces inside $M^n$, possibly with boundary, then the map \[ (p,q) \in \Sigma_1 \times \Sigma_2 \mapsto \dist_{M^n}(p,q) \] cannot have a local interior minimum. \end{lem} \begin{proof} The lemma is proven in precisely the same way as the ``global'' version presented in \cite[Proof of Theorem 3]{PW}, because the argument is local. It is a direct consequence of Synge's second variation formula for geodesics. \end{proof} \begin{lem}\label{lem.regular touching points} Let $\mathcal{E}$ be a $(\Lambda, \rho)$-minimizing cluster and let $x \in \partial \mathcal{E}$ be such that there exists a ball $B_r(p) \subset \mathbb R^N$ with $x \in \partial B_r(p)$ and $\partial \mathcal{E} \cap B_r(p) = \emptyset$. Then $x$ is a regular point.
\end{lem} Before giving their proofs, let us conclude the proof of Lemma \ref{lem.connected}. \begin{proof}[Proof of Lemma \ref{lem.connected}] Assume by contradiction that $\dist(A,B)>0$. Since $\partial \mathcal{E}$ is closed, $A$ and $B$ are closed, hence there exist $a\in A$, $b \in B$ such that $\dist_{\mathbb{S}^{N-1}}(a,b)=\dist_{\mathbb{S}^{N-1}}(A,B)=d$. Let $\gamma: [0,d] \to \mathbb{S}^{N-1}$ be the length minimizing geodesic\footnote{The geodesic is unique since, for $N>2$, $\partial \mathcal{E}\cap\mathbb{S}^{N-1}$ cannot consist of only two antipodal points, as $\mathcal{H}^{N-2}(\partial\mathcal{E}\cap \mathbb{S}^{N-1})>0$.} between $a$ and $b$. Let $m:=\gamma(\frac{d}{2})$ be the midpoint of $\gamma$. The geodesic ball $\mathcal{B}:=\mathcal{B}_{\frac{d}{2}}(m)$ satisfies $\mathcal{B} \cap A = \emptyset = \mathcal{B}\cap B$, since otherwise we would contradict $\dist_{\mathbb{S}^{N-1}}(A,B)=d$. Since $\mathcal{E}$ is cone-like, we conclude that $\partial\mathcal{E} \cap C_\mathcal{B} = \emptyset$, where $C_\mathcal{B}$ is the cone over $\mathcal{B}$, i.e.\ $C_\mathcal{B}:=\{ \lambda y \colon y \in \mathcal{B}, \lambda \in \mathbb R^+ \}$. By construction $a, b \in \partial C_\mathcal{B}$ and $\partial C_{\mathcal{B}}$ is a smooth hypersurface outside of $0$. We conclude that there are Euclidean balls $B_1, B_2 \subset C_{\mathcal{B}}$ with $a \in \partial B_1$, $b \in \partial B_2$. Hence we are in the situation of Lemma \ref{lem.regular touching points} at $a$ and $b$. This implies that there exists $\varepsilon>0$ such that $\Sigma_1:=A \cap B_{\varepsilon}(a)$ and $\Sigma_2:=B\cap B_{\varepsilon}(b)$ are $C^{1,\gamma}$-regular. Furthermore, since $\mathcal{E}$ is a minimizing cone-like cluster, we deduce that $\Sigma_i$ are minimal hypersurfaces with boundary in $\mathbb{S}^{N-1}$ for $i=1,2$. By construction the map \[ (p,q) \in \Sigma_1 \times \Sigma_2 \mapsto \dist_{\mathbb{S}^{N-1}}(p,q) \] attains a local minimum at $(a,b)$.
This contradicts Lemma \ref{lem.local version}, since $\operatorname{Ric}_{\mathbb{S}^{N-1}} > 0 $ for $N>2$, and proves the lemma. \end{proof} \begin{proof}[Proof of Lemma \ref{lem.regular touching points}] As in the discussion above about the existence of tangent cone-like minimizing clusters, one has the following equivalence: \begin{itemize} \item[(i)] $x \in \partial \mathcal{E}$ is a regular point, i.e.\ there exists a neighborhood of $x$ where $\partial \mathcal{E}$ is an embedded $C^{1,\gamma}$-hypersurface; \item[(ii)] $x$ has a ``halfspace'' as a tangent cone, i.e.\ there exist a sequence $r_k \to 0$ and $i\neq j$ such that, up to rotation, $\frac{\mathcal{E}-x}{r_k} \to \mathcal{K}$ with $\mathcal{K}(i)=\mathbb R^N_+$, $\mathcal{K}(j)=\mathbb R^N_-$ and $\mathcal{K}(h)=\emptyset$ for all $ h \neq i,j$. \end{itemize} Once again by \cite[Theorem 4.13]{CLM}, for every sequence $r_k \to 0$ there are a subsequence, still denoted by $r_k$, and a cone-like minimizing cluster $\mathcal{K}$ such that \[ \frac{\mathcal{E}-x}{r_k} \to \mathcal{K}. \] Up to a rotation we have $\frac{B_r(p)-x}{r_k} \to \mathbb R^N_-$ as $k \to \infty$. Hence $\partial\mathcal{K} \subset \overline{\mathbb R^N_+}$. The lemma follows from the following claim:\\ \emph{Claim: }every cone-like minimizing cluster $\mathcal{K}$ with $\partial \mathcal{K}$ contained in a halfspace is a ``halfspace''.\\ \emph{Proof of the claim:} After reordering the components of $\mathcal{K}$ we may assume that $\mathbb R^N_- \subset \mathcal{K}(1) $ and $\mathcal{K}(i) \subset \overline{\mathbb R^N_+}$ for all $i>1$. Write $x=(y,t)\in\mathbb R^{N-1}\times\mathbb R$. Fix $\varphi, f \in C^1(\mathbb R, \mathbb R)$ non-negative and non-increasing, with $\varphi(s)= f(s) = 1$ for $s\le 0$ and $\varphi(s)=f(s)=0$ for $s\ge 1$, and define the vector field \[ X(x):= \varphi(\abs{y}) f(t) e_{N}. \] Observe that $\spt(X) \cap \overline{\mathbb R^N_+} \subset B_R $ for some $R>0$.
Hence, if $\Phi_t$ denotes the flow generated by $X$, we have that \[ \mathcal{K}(j) \Delta \Phi_t(\mathcal{K}(j)) \subset B_R \text{ for all } j = 1, \dotsc, m, \ \abs{t}<\delta. \] Since $\mathcal{K}$ is minimizing, by the first variation of perimeter \cite[Theorem 17.5]{Ma} and \eqref{eq:1/2perimeter} we must have \[ 0 = \frac{d}{dt}\Big|_{t=0} P(\Phi_t(\mathcal{K})) = \frac{1}{2} \sum_{i=1}^m \int_{\partial \mathcal{K}(i)} \Div_{\mathcal{K}(i)}(X) \, d\mathcal{H}^{N-1}. \] Since $DX(x) = \varphi(\abs{y}) f'(t) e_N \otimes e_N + \varphi'(\abs{y}) \frac{f(t)}{\abs{y}} e_N \otimes y$, the tangential divergence of $X$ on $\partial\mathcal{K}(i)$ is given by \begin{align*} \Div_{\mathcal{K}(i)}(X) &= \varphi(\abs{y}) f'(t) \left( 1 - \langle \nu_i, e_N \rangle^2 \right) + \varphi'(\abs{y}) \frac{f(t)}{\abs{y}} \left( \langle e_N, y\rangle - \langle \nu_i, e_N \rangle \langle y, \nu_i \rangle \right)\\ &= \varphi(\abs{y}) f'(t) \left( 1 - \langle \nu_i, e_N \rangle^2 \right) + \varphi'(\abs{y}) \frac{f(t)}{\abs{y}}\, t \langle \nu_i, e_N\rangle^2, \end{align*} where, besides $\langle e_N, y \rangle =0$, we used that, since $\mathcal{K}(i)$ is an open cone, \[0=\langle x, \nu_i \rangle = \langle y , \nu_i\rangle + t \langle \nu_i, e_N \rangle .\] This implies that $\Div_{\mathcal{K}(i)}(X) \le 0$ for each $i$. Choosing $\varphi$ and $f$ appropriately, we deduce that $\partial \mathcal{K}(i) \subset \{ t= 0\} $ for all $i$. This concludes the proof of the claim. \end{proof} \section{Proof of the lower density bound}\label{sec.ldb} In this section we prove Theorem \ref{thm.lowdens} for minimizing cone-like $m$-clusters. Before proving our main result, we show in the following lemma that, if $m\ge 2$, then the origin cannot be the only $c$-singular point. \begin{lem}\label{lem.csingsphere} Let $N>2$, $m\ge 2$ and let $\mathcal E$ be a minimizing cone-like cluster; then $c\mathrm{-sing}(\mathcal E)\cap \mathbb S^{N-1}\neq\emptyset$.
\end{lem} \begin{proof} Fix a chamber $\mathcal E(i)$; for $x\in\partial\mathcal E(i)$ let $I(x)$ be the set defined in \eqref{closechambers}. Suppose that $c-\mathrm{sing}(\mathcal E)\cap \mathbb S^{N-1}\cap\partial \mathcal E(i)=\emptyset$; then $I(x)=\{i,j(x)\}$, for some $j(x)\in\{0,\ldots,m\}$ with $j(x)\neq i$. Let \[ r(x)=\sup\left\{r\ge0\,:\,|B_r(x)\cap\mathcal E(l)|=0, \ \ \text{for every } l\notin I(x)\right\}. \] Thanks to Lemma \ref{infiltrationlem} we have that $r(x)>0$, for every $x\in\partial\mathcal E(i)$. Moreover $r$ is a Lipschitz continuous function: indeed, $B_{r-\varepsilon}(y)\subset B_r(x)\subset B_{r+\varepsilon}(y)$ whenever $|x-y|<\varepsilon$, thus $r(x)-\varepsilon\le r(y)\le r(x)+\varepsilon$. With the same argument it is easy to check that $j(x)$ is locally constant. \par Let $K$ be a connected component of $\partial\mathcal E(i)\cap\mathbb S^{N-1}$ and let $j=j(x)$, for every $x\in K$; since $K$ is a compact set, the function $r(x)$ achieves a positive minimum $\overline r$. We reach a contradiction by applying Lemma \ref{lem.connected} with $A=K$ and $B=\partial\mathcal E\cap\mathbb S^{N-1}\setminus K$. Indeed $\mathrm{dist}(A,B)>\overline r$ and $B\neq \emptyset$, since $\partial \mathcal E(l)\subset B$, for every $l\notin\{i,j\}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm.lowdens}] We are going to prove the statement by induction on the ambient dimension $N$.\\ If $N=2$ then the estimate follows from the fact that the boundary of any minimizing cone-like cluster consists of three half-lines meeting at the origin.\\ Let $N>2$, let $\mathcal E$ be a cone-like minimizing cluster of $\mathbb R^N$ and suppose that our statement is valid for every cone-like minimizing cluster $\mathcal E'\subset \mathbb R^{N-1}$.\\ Thanks to Lemma \ref{lem.csingsphere} there exists a point $x\in c-\mathrm{sing}(\mathcal E)\cap\mathbb S^{N-1}$ which, up to rotating $\mathcal E$, we may assume coincides with $e_N$.
Since $\mathcal E$ is conical, for every $\varepsilon>0$ the point $\varepsilon e_N$ belongs to $c-\mathrm{sing}(\mathcal E)$; thus the monotonicity formula \eqref{monoton} and the fact that $\mathcal E$ is invariant under blow-ups at the origin imply that \[ \Theta_0(\mathcal E)\ge \Theta_{e_N}(\mathcal E). \] Indeed \begin{align*} \Theta_0(\mathcal E) &=\frac{P(\mathcal E;B_1(0))}{\omega_{N-1}}=\frac{P(\mathcal E;B_1(0))}{\omega_{N-1}(1-\varepsilon)^{N-1}}+O(\varepsilon)\\ &\ge \frac{P(\mathcal E; B_{1-\varepsilon}(\varepsilon e_N))}{\omega_{N-1}(1-\varepsilon)^{N-1}}+O(\varepsilon)\ge \Theta_{e_N}(\mathcal E)+O(\varepsilon). \end{align*} We set $\tilde{\mathcal E}=\lim_{r\to 0}\mathcal E_{e_N,r}$; as already mentioned, $\tilde{\mathcal E}$ is a cone-like minimizing cluster with $\Theta_0(\tilde{\mathcal E})=\Theta_{e_N}(\mathcal E)$; moreover it is easy to check that $0\in c-\mathrm{sing}(\tilde{\mathcal E})$.\\ Since $\mathcal E$ is conical, $\tilde{\mathcal E}$ is invariant under translation in the $e_N$-direction, namely $\tilde{\mathcal E}=\mathcal E'\times\mathbb R$, for some (still minimizing) $m'$-cluster $\mathcal E'\subset \mathbb R^{N-1}$. Again $0\in c-\mathrm{sing}(\mathcal E')$, and hence $m'\ge 2$.\\ The inductive assumption gives that $\Theta_0(\mathcal E')\ge 3/2$. We can now conclude by computing the density $\Theta_0(\tilde{\mathcal E})$ in terms of $\Theta_0(\mathcal E')$.
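The slicing computation that follows rests on the elementary identity $\omega_{N-1}=\omega_{N-2}\int_{-1}^{1}(1-t^2)^{\frac{N-2}{2}}\,dt$, where $\omega_k$ denotes the volume of the unit ball in $\mathbb R^k$. As a quick numerical sanity check (an illustration only, not part of the proof; it uses the standard formula $\omega_k=\pi^{k/2}/\Gamma(k/2+1)$ and a midpoint quadrature of our own):

```python
from math import gamma, pi

def omega(k):
    # volume of the unit ball in R^k: pi^(k/2) / Gamma(k/2 + 1)
    return pi ** (k / 2) / gamma(k / 2 + 1)

def integral(N, steps=100000):
    # midpoint rule for \int_{-1}^{1} (1 - t^2)^{(N-2)/2} dt
    h = 2.0 / steps
    return h * sum((1 - (-1 + (i + 0.5) * h) ** 2) ** ((N - 2) / 2)
                   for i in range(steps))

# omega_{N-1} = omega_{N-2} * \int_{-1}^{1} (1 - t^2)^{(N-2)/2} dt for N = 3,...,7
for N in range(3, 8):
    assert abs(omega(N - 2) * integral(N) - omega(N - 1)) < 1e-6
```

For $N=3$ this reduces to the familiar $\omega_2=\pi=\omega_1\cdot\frac{\pi}{2}=2\cdot\frac{\pi}{2}$.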
Indeed the {\it co-area} formula for $\mathcal H^{N-1}$-rectifiable sets yields \begin{equation}\label{eq:fubini} \begin{split} \Theta_0(\tilde{\mathcal E}) &=\frac{P(\tilde{\mathcal E};B_1)}{\omega_{N-1}} \ge\frac{1}{\omega_{N-1}}\int_{-1}^1 \mathcal H^{N-2} \left(\partial^*\tilde{\mathcal E}\cap B_1\cap\{x_N=t\}\right)dt\\ &=\frac{1}{\omega_{N-1}}\int_{-1}^1 \mathcal H^{N-2}\left(\partial^*\mathcal E'\cap B_{\sqrt{1-t^2}}\right)dt\\ &=\frac{1}{\omega_{N-1}}\int_{-1}^1\mathcal H^{N-2}\left(\partial^*\mathcal E'\cap B_1\right)(1-t^2)^{\frac{N-2}{2}}dt\\ &=\frac{P(\mathcal E';B_1)}{\omega_{N-1}}\int_{-1}^{1}(1-t^2)^{\frac{N-2}{2}}dt=\Theta_0(\mathcal E')\frac{\omega_{N-2}}{\omega_{N-1}}\int_{-1}^{1}(1-t^2)^{\frac{N-2}{2}}dt\\ &\ge\frac 3 2 \frac{\omega_{N-2}}{\omega_{N-1}}\int_{-1}^{1}(1-t^2)^{\frac{N-2}{2}}dt=\frac 3 2, \end{split} \end{equation} where the last equality follows from the identity $\omega_{N-1}=\omega_{N-2}\int_{-1}^{1}(1-t^2)^{\frac{N-2}{2}}dt$, itself a consequence of Fubini's Theorem. \end{proof} \begin{remark}\label{rmk.subsequences} {\rm In the proof above, we showed that there exist $x_k\in c\mathrm{-sing}(\mathcal E)$ and $r_k>0$ such that $\mathcal E_k=\mathcal E_{x_k,r_k}\rightarrow \mathcal E'\times \mathbb R$, where $\mathcal E'$ is an $(N-1)$-dimensional minimizing cone-like cluster, singular at the origin. In particular, arguing by induction on the dimension $N$, and repeating the construction of the proof of Theorem \ref{thm.lowdens}, it is possible to show that there exist a sequence of points $x_k\rightarrow 0$, $x_k\in c\mathrm{-sing}(\mathcal E)$, and a sequence of positive numbers $r_k\rightarrow 0$, such that, up to rotations, \[ \frac{\mathcal E-x_k}{r_k}\rightarrow Y\times \mathbb R^{N-2}.
\] } \end{remark} \section{Consequences}\label{sec.consequences} \subsection{Characterization of $Y \times \mathbb R^{N-2}$ by its density } \begin{cor}\label{cor.characterizationbydensity} Suppose $\mathcal{K}$ is a minimizing cone-like cluster with $\Theta_0(\mathcal{K})=\frac32$; then up to rotation we have \[ \mathcal{K}=Y \times \mathbb R^{N-2}.\] \end{cor} \begin{proof} To show the claim we combine Proposition \ref{csingpro} with some classical consequences of the monotonicity of the density:\\ Let $\mathcal{E}$ be any minimizing cluster; then \begin{enumerate} \item if $\frac{ P(\mathcal{E};B_r(y))}{\omega_{N-1}r^{N-1}}= \frac{ P(\mathcal{E};B_s(y))}{\omega_{N-1}s^{N-1}}$ for some $0\le r < s$, then $\mathcal{E}$ coincides with a cone in the annulus $B_{s}(y)\setminus B_r(y)$; \item if $\mathcal{E}$ is a cone with vertex $0$, then $\Theta_y(\mathcal{E}) \le \Theta_0(\mathcal{E})$ for all $y$; \item additionally one has that $L_{\mathcal{E}}:=\{ y \colon \Theta_y(\mathcal{E}) = \Theta_0(\mathcal{E})\}$ is a linear subspace of $\mathbb R^N$. Suppose $k= \operatorname{dim}(L_{\mathcal{E}})$; then there is a cone-like minimizing cluster $\mathcal{E}'$ in $\mathbb R^{N-k}$ such that $ \mathcal{E} = \mathcal{E}' \times L_{\mathcal{E}}$. \end{enumerate} (1) follows from the monotonicity formula \cite[Theorem 28.9]{Ma}; (2) and (3) can be found, in a more general setting, for instance in \cite{Wh}.\\ We will show the corollary by induction on the dimension $N$. For $N=2$ the statement follows by the classification of cone-like clusters in $\mathbb R^2$, \cite[Proposition 30.9]{Ma}. Suppose the corollary is proven for dimension $N'<N$. Let $\mathcal{K}$ be a minimizing cone-like cluster in $\mathbb R^N$ with $\Theta_0(\mathcal{K})=\frac32$. By Lemma \ref{lem.csingsphere} there is $y \in c-\sing(\mathcal{K}) \cap \mathbb{S}^{N-1}$. Applying Theorem \ref{thm.lowdens} we have $\Theta_y(\mathcal{K}) \ge \frac{3}{2}$.
Combining (2) and (3) above we deduce that $k:=\operatorname{dim}(L_{\mathcal{K}})\ge 1$ and the existence of a cone-like minimizing cluster $\mathcal{K}'$ in $\mathbb R^{N-k}$ with $ \mathcal{K} = \mathcal{K}' \times L_{\mathcal{K}}$. By Fubini's Theorem, compare \eqref{eq:fubini}, we have \[ \frac3 2\le\Theta_0(\mathcal{K}')\le\Theta_0(\mathcal{K})\le \frac3 2. \] Hence $\mathcal{K}'$ satisfies the conditions of the corollary, and by the induction hypothesis we have $\mathcal{K}' = Y \times \mathbb R^{N-k-2}$, so that $\mathcal{K}= Y \times \mathbb R^{N-2}$. \end{proof} \subsection{Sequential convergence of clusters} The aim of this subsection is to prove the following: \begin{pro}\label{prop.sequential} Let $\mathcal{E}_k$ be a sequence of $(\Lambda_k, \rho_k)$-minimizing clusters with $\Lambda_k \to \Lambda, \rho_k \to \rho_0>0$ and $\mathcal{E}_k \to \mathcal{E}$ in some open set $U \subset \mathbb R^{N}$ as $k \to \infty$; then the following holds: \[ c-\sing(\mathcal{E}_k ) \to c-\sing(\mathcal{E}) \text{ in the Hausdorff distance on every } V \Subset U. \] \end{pro} \begin{proof} We will first show the easier part: for each $\varepsilon >0$ and $V \Subset U$ there exists $k_0>0$ such that \begin{equation*} c-\sing(\mathcal{E}_k) \cap V \subset (c-\sing(\mathcal{E}))_\varepsilon \quad \text{for all } k \ge k_0. \end{equation*} Suppose the inclusion fails; then there exists a sequence $x_k \in c-\sing(\mathcal{E}_k)\cap V$ with $\dist(x_k, c-\sing(\mathcal{E}))> \varepsilon$. Passing to an appropriate subsequence we have $x_k \to x \in \overline{V\setminus (c-\sing(\mathcal{E}))_\varepsilon} $, i.e. $x \notin c-\sing(\mathcal{E})$.
Hence by Proposition \ref{csingpro} there exist a tuple $i\neq j$, a radius $r>0$ and a positive number $\varepsilon$ such that \[ \abs{\mathcal{E}(i) \cap B_r(x)} + \abs{\mathcal{E}(j) \cap B_r(x)} > (1-\varepsilon) r^N \omega_N .\] But since $\mathcal{E}_k \to \mathcal{E}$ and $B_r(x_k) \to B_r(x)$, for $k$ sufficiently large we have \[ \abs{\mathcal{E}_k(i) \cap B_r(x_k)} + \abs{\mathcal{E}_k(j) \cap B_r(x_k)} > (1-2\varepsilon) r^N \omega_N. \] Again by Proposition \ref{csingpro} this implies $x_k \notin c-\sing(\mathcal{E}_k)$, which is the desired contradiction.\\ We are left to show the harder part of the proposition, namely that if $x \in c-\sing(\mathcal{E})\cap U$ then there exists a sequence $x_k \in c-\sing(\mathcal{E}_k)$ with $x_k \to x$. This will be achieved using the fact that $Y\subset \mathbb R^2$ is essentially non-removable: \begin{lem}\label{lem.Ynonremovable} Given a partition of $B_1\subset \mathbb R^2$ into three sets of finite perimeter $E_1,E_2,E_3$ such that $E_i \cap \partial B_1$ is a single interval for each $i=1,2,3$, then \[ \partial E_1 \cap \partial E_2 \cap \partial E_3 \neq \emptyset. \] \end{lem} Before we give the proof of this statement let us show how to conclude. \\ We will use the following notation for cylindrical sets: \[ C_r:=B_r\times ]-r,r[^{N-2} \subset \mathbb R^2 \times \mathbb R^{N-2} \ \ \text{and} \ \ S_\varepsilon:= B_\varepsilon \times \mathbb R^{N-2}. \] By Remark \ref{rmk.subsequences}, there exist sequences $x_l \to x$ and $r_l \to 0$ such that up to rotation we have \[ \frac{\mathcal{E}- x_l}{r_l} \to Y \times \mathbb R^{N-2} \text{ as } l \to \infty. \] Furthermore, up to relabeling the chambers and an application of the infiltration Lemma \ref{infiltrationlem}, we may assume that \[ \frac{\mathcal{E}(i)-x_l}{r_l}\cap C_5 = \emptyset \text{ for all } i > 3, l >l_0.
\] Since $\mathcal{E}_k \to \mathcal{E}$ as $k \to \infty$ we can therefore find a sequence $\{k(l)\}_{l \in \mathbb{N}}$ such that \[ \mathcal{E}_l':= \frac{\mathcal{E}_{k(l)} - x_l}{r_l} \to Y\times \mathbb R^{N-2} \text{ as } l \to \infty. \] Hence we will have that $\mathcal{E}'_l(i) \cap C_5 = \emptyset $ for all $i>3$ and $l>l_1$. So we will forget about the chambers with $i>3$ in the sequel and consider $\mathcal{E}'_l$ as a cluster with $3$ chambers.\\ Observe that $\sing(Y\times \mathbb R^{N-2}) = \{0\}\times \mathbb R^{N-2}$. The small excess regularity criterion, e.g. \cite[Theorem 4.1]{CLM}, implies that for $l>l_2$ we have \begin{align}\label{eq:closetoY} \sing(\mathcal{E}'_l)\cap C_4 & \subset S_\varepsilon \nonumber \\ \partial \mathcal{E}'_l\cap C_3 \setminus S_\varepsilon & \text{ is a $C^{1,\alpha}$- graph over parts of $Y\times \mathbb R^{N-2}$ }. \end{align} To conclude the claim it is sufficient to show that for each $l>l_2$ one has $c-\sing(\mathcal{E}_l')\cap C_2 \neq \emptyset$. To do so fix some $l>l_2$. Since the argument is independent of $l$ we drop the index and write only $\mathcal{E}'$.\\ Assume by contradiction that $c-\sing(\mathcal{E}')\cap C_2 = \emptyset$. We consider the sliced cluster for $z \in ]-2,2[^{N-2}$ \[ \mathcal{E}'_z:= \mathcal{E}'\cap C_2 \cap \mathbb R^2 \times \{z\} \subset B_2 \subset \mathbb R^2. \] Observe that for almost every $z$ we have \begin{enumerate} \item $ \mathcal{E}'_z= \left\{ \mathcal{E}_z'(1), \mathcal{E}_z'(2), \mathcal{E}_z'(3) \right\} $ is a partition of $B_2\subset \mathbb R^2$ by sets of finite perimeter; \item as a consequence of \eqref{eq:closetoY}, $\partial \mathcal{E}'_z \setminus B_{2\varepsilon}$ consists of three $C^{1,\alpha}$-curves and $\mathcal{E}'_z(i)\cap \partial B_1$ is a single interval for $i=1,2,3$. \end{enumerate} Fix any such $z\in ]-2,2[^{N-2}$. We assumed that $c-\sing(\mathcal{E}')\cap C_2 = \emptyset$.
But then Proposition \ref{csingpro} implies that for each $(y,z)\in C_2$ there exist a radius $r = r(y,z) >0$ and an index $i = i(y,z) \in \{ 1, 2,3 \}$ such that \[ \mathcal{E}'(i) \cap B_{r}((y,z)) = \emptyset,\] which implies \[ \mathcal{E}'_z(i) \cap B_r(y) = \emptyset. \] Observe that this is equivalent to $\partial \mathcal{E}_z'(i) \cap B_r(y) = \emptyset$. As a consequence we have \[ \partial \mathcal{E}'_z(1) \cap \partial \mathcal{E}'_z(2) \cap \partial \mathcal{E}'_z(3) = \emptyset. \] But this contradicts Lemma \ref{lem.Ynonremovable}, because the assumptions are satisfied by (2) above. \end{proof} To prove Lemma \ref{lem.Ynonremovable} we will use the following ``path-connectedness'' property of sets of finite perimeter in the plane, see Figure \ref{fig.fig1}. \begin{figure}\label{fig.fig1} \end{figure} \begin{lem}\label{lem.pathconnected} Let $E\subset \mathbb R^2$ be a set of finite perimeter s.t. $E \cap \partial B_1$ is a single interval with $\partial E \cap \partial B_1 =\{a, b\}$; then there exists a rectifiable curve $\gamma \subset B_1$ s.t. $\partial \gamma = \{a, b\}$ and $\gamma \subset \partial E$, i.e. $\{a,b\}$ are path-connected inside $\partial E$. \end{lem} \begin{proof} The proof is based on an approximation by smooth sets and the classification of compact 1-manifolds. \\ Passing to \[ \mathbf{1}_{\hat{E}}(x) = \begin{cases} \mathbf{1}_E(x) & \text{ for } \abs{x} \le 1 \\ \mathbf{1}_E(\frac{x}{\abs{x}}) & \text{ for } \abs{x} > 1 \end{cases} \] we may assume that $E$ is a cone outside of $B_1$, i.e. $\partial E \cap \partial B_R = \{ Ra , Rb \}$ for all $R\ge 1$. \\ Given $\varepsilon>0$, $\varepsilon_n \to 0$ and $t \in ]0,1[$, for a symmetric mollifier $\rho_\varepsilon$ we set \[ u_\varepsilon= \rho_\varepsilon \star \mathbf{1}_E \quad u_n= u_{\varepsilon_n} \quad E_n^t=\{ u_n > t \}. \] As shown in \cite[Theorem 13.8]{Ma} we have that for a.e.
$t \in ]0,1[$ we have \begin{enumerate} \item $t$ is a regular value for $u_n$; \item $E_n^t \xrightarrow{loc} E$, $P(E_n^t, B_2) \to P(E, B_2)$; \item $\partial E_n^t \subset (\partial E)_{\varepsilon_n}$. \end{enumerate} Appealing once more to the Morse-Sard theorem we may assume that \begin{enumerate}\setcounter{enumi}{3} \item $t$ is a regular value for $u_n|_{\partial B_2}: \partial B_2 \to [0,1]$. \end{enumerate} We claim that for $n> n_1$ we have \[\partial E_n^t \cap \partial B_2 =\{ 2a_n, 2b_n\} \quad a_n \to a, b_n \to b.\] This can be seen as follows. Let $S$ be the set $\{ \lambda x \colon x \in E\cap \partial B_1, \lambda >0 \}$; then for $n$ large we have \[ u_n(x) = \rho_{\varepsilon_n} \star \mathbf{1}_S(x), \quad x \in \partial B_2. \] The claim easily holds true for $\rho_{\varepsilon_n} \star \mathbf{1}_S$. \\ Fix $0< t <1$ such that (1) to (4) above are satisfied for all $n \in \mathbb{N}$. By the classification of compact 1-dimensional manifolds, \cite[Appendix]{Mi}, $\partial E_n^t \cap B_2 = \{u_n =t\}\cap B_2$ is a finite disjoint union of circles and segments. Since $\partial E_n^t \cap \partial B_2 = \{2a_n, 2b_n\}$ there is precisely one segment $\gamma_n$. This segment satisfies \[ \partial \gamma_n =\{ 2a_n, 2b_n \} \quad \operatorname{Length}(\gamma_n) \le P( E_n^t, B_2). \] Since $P( E_n^t, B_2) \le 2 P( E, B_2)$ by (2) for sufficiently large $n$, by compactness of rectifiable curves there is a subsequence satisfying \[ \gamma_n \to \gamma \quad \partial \gamma_n \to \{ 2a, 2b\}.\] It remains to be proven that $\gamma \subset \partial E$. Assume by contradiction that there is $x \in \gamma \setminus \partial E$; then there exists $r>0$ s.t.
for $n$ large with $\varepsilon_n< \frac{r}{2}$ \begin{align*} \text{either } \abs{ B_r(x)\cap E} &= 0 \quad u_n(y) = 0 \quad \forall y \in B_{\frac{r}{2}}(x) \\ \text{or } \abs{ B_r(x)\setminus E} &= 0 \quad u_n(y) = 1 \quad \forall y \in B_{\frac{r}{2}}(x). \end{align*} In both cases we have $\gamma_n \cap B_{\frac{r}{2}}(x) \subset \{ u_n =t \} \cap B_{\frac{r}{2}}(x)= \emptyset $, since $0<t<1$. But this contradicts $x \in \gamma$. The lemma now follows since $\partial E\setminus \overline{B_1 } =\{ \lambda a \colon \lambda > 1 \} \cup \{ \lambda b \colon \lambda > 1 \}$. \end{proof} Now we are able to obtain Lemma \ref{lem.Ynonremovable} as a corollary. \begin{proof}[Proof of Lemma \ref{lem.Ynonremovable}] Given the sets $E_1,E_2,E_3$, as in Figure \ref{fig.fig2} we set \begin{figure}\label{fig.fig2} \end{figure} \[ a= \partial E_1 \cap \partial E_2 \cap \partial B_1\quad b= \partial E_2 \cap \partial E_3 \cap \partial B_1 \quad c= \partial E_1 \cap \partial E_3 \cap \partial B_1. \] We apply Lemma \ref{lem.pathconnected} to the set $E_1$ and obtain the existence of a rectifiable curve \[ \gamma: [0,1] \to B_1 \text{ with } \gamma \subset \partial E_1, \gamma(0)=a, \gamma(1)=c. \] By contradiction assume that \[ \partial E_1 \cap \partial E_2 \cap \partial E_3 = \emptyset. \] This implies that \[ \# I(x) \le 2 \text{ for all } x \in B_1 \] where \[ I(x) = \left\{ i \in \{ 1, 2, 3 \} \colon 0 < \abs{ E_i \cap B_r(x)} < \pi r^2 \; \forall r>0 \right\}. \] It is straightforward to check that $I(x)$ is then locally constant, so the map $t \mapsto I(\gamma(t))$ must be constant. But this is a contradiction since $I(\gamma(0))=\{1,2\}$ and $I(\gamma(1))=\{1,3\}$. Hence the lemma is proven.
\end{proof} By applying Proposition \ref{prop.sequential} with $\mathcal E_k=\mathcal E$ (and thus $\Lambda_k=\Lambda$ and $\rho_k=\rho$), we have the following \begin{cor} Let $\mathcal E$ be a $(\Lambda,\rho)$-minimizing cluster, then $c\mathrm{-sing}(\mathcal E)$ is a closed subset of $\partial\mathcal E$. \end{cor} \ack The work of the authors is supported by the MIUR SIR-grant {\it``Geometric Variational Problems''} (RBSI14RVEZ). \end{document}
\begin{document} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem {Congruence}{Congruence} \begin{center} \vskip 1cm{\LARGE\bf M\'enage Numbers and M\'enage Permutations } \vskip 1cm \large Yiting Li\\ Department of Mathematics \\ Brandeis University \\ {\tt [email protected]} \end{center} \newcommand{\Addresses}{{ \footnotesize Yiting Li, \textsc{Department of Mathematics, Brandeis University, 415 South Street, Waltham, MA 02453, USA}\par\nopagebreak \textit{E-mail address}, Yiting Li: \texttt{[email protected]} }} \vskip .2 in \newcommand {\stirlingf}[2]{\genfrac[]{0pt}{}{#1}{#2}} \newcommand {\stirlings}[2]{\genfrac\{\}{0pt}{}{#1}{#2}} \begin{abstract} In this paper, we study the combinatorial structures of straight and ordinary m\'enage permutations. Based on these structures, we prove four formulas. The first two formulas define a relationship between the m\'enage numbers and the Catalan numbers. The other two formulas count the m\'enage permutations by number of cycles. \end{abstract} \section{Introduction} \subsection{Straight m\'enage permutations and straight m\'enage numbers} The straight m\'enage problem asks for the number of ways one can arrange $n$ male-female pairs along a linearly arranged table in such a way that men and women alternate but no woman sits next to her partner. We call a permutation $\pi\in S_n$ a $straight$ {\it m\'enage} $permutation$ if $\pi(i)\ne i$ and $\pi(i)\ne i+1$ for $1\le i\le n$. Use $V_n$ to denote the number of straight m\'enage permutations in $S_n$. We call $V_n$ the $n$th $straight$ {\it m\'enage} $number$.
The straight m\'enage problem is equivalent to finding $V_n$. Label the seats along the table as $1,2,\ldots,2n$. Seat the men at positions with even numbers and the women at positions with odd numbers. Let $\pi$ be the permutation such that the man at position $2i$ is the partner of the woman at position $2\pi(i)-1$ for $1\le i\le n$. Then, the requirement of the straight m\'enage problem is equivalent to the condition that $\pi(i)$ is neither $i$ nor $i+1$ for $1\le i\le n$. \subsection{Ordinary m\'enage permutations and ordinary m\'enage numbers} The ordinary m\'enage problem asks for the number of ways one can arrange $n$ male-female pairs around a circular table in such a way that men and women alternate, but no woman sits next to her partner. We call a permutation $\pi\in S_n$ an $ordinary$ {\it m\'enage} $permutation$ if $\pi(i)\ne i$ and $\pi(i)\not\equiv i+1$ (mod $n$) for $1\le i\le n$. Use $U_n$ to denote the number of ordinary m\'enage permutations in $S_n$. We call $U_n$ the $n$th $ordinary$ {\it m\'enage} $number$. The ordinary m\'enage problem is equivalent to finding $U_n$. Label the seats around the table as $1,2,\ldots,2n$. Seat the men at positions with even numbers and the women at positions with odd numbers. Let $\pi$ be the permutation such that the man at position $2i$ is the partner of the woman at position $2\pi(i)-1$ for all $1\le i\le n$. Then, the requirement of the ordinary m\'enage problem is equivalent to the condition that $\pi(i)$ is neither $i$ nor $i+1$ (mod $n$) for $1\le i\le n$. We adopt the convention that the empty permutation $\pi_\emptyset\in S_0$ is both a straight m\'enage permutation and an ordinary m\'enage permutation, so $U_0=V_0=1$. \subsection{Background} Lucas \cite{Lucas} first posed the problem of finding ordinary m\'enage numbers. Touchard \cite{Touchard} first found the explicit formula \eqref{eq:known_formulas_for_U_n} below.
Kaplansky and Riordan \cite{Kaplansky} also proved an explicit formula for ordinary m\'enage numbers. For other early work on m\'enage numbers, see \cite{Kaplansky1943,MW} and references therein. Among more recent papers, there are some using bijective methods to study m\'enage numbers. For example, Canfield and Wormald \cite{CW} used graphs to address the question. One can find the following formulas for the m\'enage numbers in \cite{Bogart,Touchard} and in Chapter 8 of \cite{Riordan}: \begin{align}\label{eq:known_formulas_for_U_n} U_m=\sum\limits_{k=0}^m(-1)^k\dfrac{2m}{2m-k}{2m-k\choose k}(m-k)!\quad\quad(m\ge2);\\ V_n=\sum\limits_{k=0}^n(-1)^k{2n-k\choose k}(n-k)!\quad\quad(n\ge0).\label{eq:known_formulas_for_V_n} \end{align} The purpose of the current paper is to study the combinatorial structures of straight and ordinary m\'enage permutations and to use these structures to prove some formulas for straight and ordinary m\'enage numbers. We also give an analytical proof of Theorem \ref{thm:main_theorem_1} in Section \ref{appendix}. \subsection{Main results} Let $C_k$ be the $k$th $Catalan$ $number$: \[ C_k=\dfrac{(2k)!}{k!\,(k+1)!} \] and let $c(x)=\sum\limits_{k=0}^\infty C_kx^k$ be the generating function of the Catalan numbers. Our first main result is the following theorem. \begin{theorem}\label{thm:main_theorem_1} \begin{align}\label{eq:result_of_straight_menage_numbers} \sum\limits_{n=0}^{\infty}n!\,x^n=\sum\limits_{n=0}^\infty V_nx^nc(x)^{2n+1}, \\ \sum\limits_{n=0}^{\infty}n!\,x^n=c(x)+c'(x)\sum\limits_{n=1}^\infty U_nx^nc(x)^{2n-2}.\label{eq:result_of_ordinary_menage_numbers} \end{align} \end{theorem} Our second main result counts the straight and ordinary m\'enage permutations by the number of cycles. For $k\in\mathbb{N}$, use $(\alpha)_k$ to denote $\alpha(\alpha+1)\cdots(\alpha+k-1)$. Define $(\alpha)_0=1$. For $k\le n$, use $C_n^k$ ($D_n^k$) to denote the number of straight (ordinary) m\'enage permutations in $S_n$ with $k$ cycles.
\begin{theorem} \begin{align}\label{eq:straight_by_cycles} 1+\sum_{n=1}^\infty\sum_{j=1}^n C_n^j\alpha^jx^n=\sum\limits_{n=0}^\infty(\alpha)_n\frac{x^n}{(1+x)^n(1+\alpha x)^{n+1}}, \end{align} \begin{align}\label{eq:ordinary_by_cycles} 1+\sum_{n=1}^\infty\sum_{j=1}^n D_n^j\alpha^jx^n=\frac{x+\alpha x^2}{1+x}+(1-\alpha x^2)\sum\limits_{n=0}^\infty(\alpha)_n\frac{x^n}{(1+x)^{n+1}(1+\alpha x)^{n+1}}. \end{align} \end{theorem} \subsection{Outline} We give some preliminary concepts and facts in Section \ref{sec:preliminaries}. In Section \ref{sec:reductions_and_nice_bijections}, we define three types of reductions and the nice bijection. Then, we study the structure of straight m\'enage permutations and prove \eqref{eq:result_of_straight_menage_numbers} in Section \ref{sec:straight_menage_permutation}. In Section \ref{sec:ordinary_menage_permutation}, we study the structure of ordinary m\'enage permutations and prove \eqref{eq:result_of_ordinary_menage_numbers}. Finally, we count the straight and ordinary m\'enage permutations by number of cycles and prove \eqref{eq:straight_by_cycles} and \eqref{eq:ordinary_by_cycles} in Section \ref{count_permutations_by_cycles}. \section{Preliminaries}\label{sec:preliminaries} For $n\in\mathbb{N}$, we use $[n]$ to denote $\{1,\ldots,n\}$. Define $[0]$ to be $\emptyset$. \begin{definition} Suppose $n>0$ and $\pi\in S_n$. If $\pi(i)=i+1$, then we call $\{i,i+1\}$ a $succession$ of $\pi$. If $\pi(i)\equiv i+1$ (mod $n$), then we call $\{i,\pi(i)\}$ a $generalized$ $succession$ of $\pi$. \end{definition} \subsection{Partitions and Catalan numbers}\label{sec:partitions_and_Catalan_numbers} Suppose $n>0$. A $partition$ $\epsilon$ of $[n]$ is a collection of disjoint nonempty subsets of $[n]$ whose union is $[n]$. We call each subset a $block$ of $\epsilon$. We also describe a partition as an equivalence relation: $p\sim_\epsilon q$ if and only if $p$ and $q$ belong to the same block of $\epsilon$.
If a partition $\epsilon$ satisfies the following condition: for any $p\sim_\epsilon p'$ and $q\sim_\epsilon q'$ with $p<q<p'<q'$ one has $p\sim_\epsilon q$, then we call $\epsilon$ a $noncrossing$ $partition$. For $n\in\mathbb{N}$, suppose $\epsilon=\{V_1,\ldots,V_k\}$ is a noncrossing partition of $[n]$ and $V_i=\{a_1^i,\ldots, a_{j_i}^i\}$, where $a_1^i<\cdots<a_{j_i}^i$. Then, $\epsilon$ induces a permutation $\pi\in S_n$: $\pi(a_{r(i)}^i)=a_{r(i)+1}^i$ for $1\le r(i)\le j_i-1$ and $\pi(a_{j_i}^i)=a_1^i$. It is not difficult to see that different noncrossing partitions induce different permutations. The following lemma is well known. See, for example, \cite{Stanley}. \begin{lemma} For $n\in\mathbb{N}$, there are $C_n$ noncrossing partitions of $[n]$. \end{lemma} It is well known that the generating function of the Catalan numbers is $$c(x)=\sum\limits_{n=0}^\infty C_nx^n=\dfrac{1-\sqrt{1-4x}}{2x}.$$ It is also well known that one can define the Catalan numbers by the recurrence relation \begin{align}\label{recurrence_of_Catalan_number} C_{n+1}=\sum\limits_{k=0}^nC_kC_{n-k} \end{align} with initial condition $C_0=1$. \begin{lemma}\label{thm:property_of_Catalan_numbers} The generating function of the Catalan numbers $c(x)$ satisfies $$c(x)=\frac{1}{1-xc(x)}=1+xc^2(x)\quad\text{and}\quad\dfrac{c^3(x)}{1-xc^2(x)}=c'(x).$$ \end{lemma} \begin{proof}[Proof of Lemma \ref{thm:property_of_Catalan_numbers}] The first formula is well known. By the first formula, \begin{align}\label{www} c'(x)=c^2(x)+2xc(x)c'(x),\quad\text{hence}\quad c'(x)=\dfrac{c^2(x)}{1-2xc(x)}. \end{align} Thus, to prove the second formula, we only have to show that $\dfrac{c(x)}{1-xc^2(x)}=\dfrac{1}{1-2xc(x)}$ which is equivalent to $c(x)-2xc^2(x)=1-xc^2(x)$. This follows from the first formula. \end{proof} \subsection{Diagram representation of permutations}\label{sec:diagrams} \subsubsection{Diagram of horizontal type} For $n>0$ and $\pi\in S_n$, we use a diagram of horizontal type to represent $\pi$. To do this, draw $n$ points on a horizontal line.
The points represent the numbers $1,\ldots,n$ from left to right. For each $i\in[n]$, we draw a directed arc from $i$ to $\pi(i)$. The permutation uniquely determines the diagram. For example, if $\pi=(1,5,4)(2)(3)(6)$, then its diagram is \centerline{\includegraphics[width=3.5in]{1.eps}} \subsubsection{Diagram of circular type} For $n>0$ and $\pi\in S_n$, we also use a diagram of circular type to represent $\pi$. To do this, draw $n$ points uniformly distributed on a circle. Specify a point that represents the number 1. The other points represent $2,\ldots,n$ in counter-clockwise order. For each $i$, draw a directed arc from $i$ to $\pi(i)$. The permutation uniquely determines the diagram (up to rotation). For example, if $\pi=(1,5,4)(2)(3)(6)$, then its diagram is \centerline{\includegraphics[width=1.5in]{2.eps}} \subsection{The empty permutation} The empty permutation $\pi_\emptyset\in S_0$ is a permutation with no fixed points, no (generalized) successions and no cycles. \section{Reductions and nice bijections}\label{sec:reductions_and_nice_bijections} In this section, we introduce reductions and nice bijections which serve as our main tools to study m\'enage permutations. \subsection{Reduction of type 1} Intuitively speaking, to perform a reduction of type 1 is to remove a fixed point from a permutation. Suppose $n\ge1$, $\pi\in S_n$ and $\pi(i)=i$. Define $\pi'\in S_{n-1}$ such that: \begin{align}\label{eq:expression_of_reduction_of_type_1} \pi'(j)=\begin{cases}\pi(j)&\text{if}\quad j<i\quad\text{and}\quad\pi(j)<i;\\ \pi(j)-1&\text{if}\quad j<i\quad\text{and}\quad\pi(j)>i;\\\pi(j+1)&\text{if}\quad j\ge i\quad\text{and}\quad\pi(j+1)<i;\\\pi(j+1)-1&\text{if}\quad j\ge i\quad\text{and}\quad\pi(j+1)>i\end{cases} \end{align} when $n>1$. When $n=1$, define $\pi'$ to be $\pi_\emptyset$. 
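In one-line notation, \eqref{eq:expression_of_reduction_of_type_1} can be implemented directly; a short sketch (the helper name is ours, not from the paper), checked against the worked example of this reduction given below:

```python
def reduce_type1(pi, i):
    # remove the fixed point i (1-indexed) from pi, given in one-line
    # notation pi[j-1] = pi(j); this follows the four-case formula for
    # the reduction of type 1
    assert pi[i - 1] == i
    out = []
    for j in range(1, len(pi)):
        v = pi[j - 1] if j < i else pi[j]  # pi(j) for j < i, pi(j+1) for j >= i
        out.append(v if v < i else v - 1)  # shift values above i down by one
    return out

# pi = (1,5,6,4)(2)(3)(7), i.e. [5, 2, 3, 1, 6, 4, 7] in one-line notation;
# removing the fixed point 3 gives (1,4,5,3)(2)(6), i.e. [4, 2, 1, 5, 3, 6]
assert reduce_type1([5, 2, 3, 1, 6, 4, 7], 3) == [4, 2, 1, 5, 3, 6]
```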
If we represent $\pi$ by a diagram (of either type), erase the point corresponding to $i$ and the arc connected to the point (and number other points appropriately for the circular case); then we obtain the diagram of $\pi'$. We call this procedure of obtaining a new permutation by removing a fixed point a $reduction$ $of$ $type$ $1$. For example, if $$\pi=(1,5,6,4)(2)(3)(7)\in S_7,$$ then by removing the fixed point 3 we obtain $\pi'=(1,4,5,3)(2)(6)\in S_6$. \subsection{Reduction of type 2} Intuitively speaking, to do a reduction of type 2 is to glue a succession $\{k,k+1\}$ together. Suppose $n\ge2$, $\pi\in S_n$ and $\pi(i)=i+1$. Define $\pi'\in S_{n-1}$ such that: \begin{align}\label{eq:expression_of_reduction_of_type_2} \pi'(j)=\begin{cases}\pi(j)&\text{if}\quad j<i\quad\text{and}\quad\pi(j)\le i;\\ \pi(j)-1&\text{if}\quad j<i\quad\text{and}\quad\pi(j)>i+1;\\\pi(j+1)&\text{if}\quad j\ge i\quad\text{and}\quad\pi(j+1)\le i;\\\pi(j+1)-1&\text{if}\quad j\ge i\quad\text{and}\quad\pi(j+1)>i+1.\end{cases} \end{align} If we represent $\pi$ by the diagram of the $horizontal$ type, erase the arc from $i$ to $i+1$, and glue the points corresponding to $i$ and $i+1$ together; then, we obtain the diagram of $\pi'$. We call this procedure of obtaining a new permutation by gluing a succession together a $reduction$ $of$ $type$ $2$. For example, if $$\pi=(1,5,6,4)(2)(3)(7)\in S_7,$$ then by gluing 5 and 6 together, we obtain $\pi'=(1,5,4)(2)(3)(6)\in S_6$. \subsection{Reduction of type 3} Intuitively speaking, to perform a reduction of type 3 is to glue a generalized succession $\{k,k+1\pmod{n}\}$ together. Suppose $n\ge1$, $\pi\in S_n$ and $\pi(i)\equiv i+1\pmod{n}$. Define $\pi'$ to be the same as in \eqref{eq:expression_of_reduction_of_type_2} when $i\ne n$. When $i=n>1$, define $\pi'$ to be \begin{align*} \pi'(j)=\begin{cases}\pi(j)&\text{ if }j\ne\pi^{-1}(n);\\1&\text{ if }j=\pi^{-1}(n).\end{cases} \end{align*} When $i=n=1$, define $\pi'$ to be $\pi_\emptyset$. 
If we represent $\pi$ by a diagram of the $circular$ type, erase the arc from $i$ to $i+1$ (mod $n$), glue the points corresponding to $i$ and $i+1$ (mod $n$) together and number the points appropriately; then, we obtain the diagram of $\pi'$. We call this procedure of obtaining a new permutation by gluing a generalized succession together a $reduction$ $of$ $type$ $3$. For example, if $$\pi=(1,5,6,7)(2)(3)(4)\in S_7,$$ then by gluing 1 and 7 together, we obtain $\pi'=(1,5,6)(2)(3)(4)\in S_6$. \subsection{Nice bijections}\label{sec:nice_bijection} Suppose $n\ge1$ and $f$ is a bijection from $[n]$ to $\{2,\ldots, n+1\}$. We can also represent $f$ by a diagram of horizontal type as for permutations. The bijection uniquely determines the diagram. If $f$ has a fixed point or there exists $i$ such that $f(i)=i+1$, then we can also perform reductions of type 1 or type 2 on $f$ as above. In the latter case, we also call $\{i,i+1\}$ a $succession$ of $f$. We can reduce $f$ to a bijection with no fixed points and no successions by a series of reductions. It is easy to see that the resulting bijection does not depend on the order of the reductions. The following diagram shows an example of a reduction of type 2 on the bijection, gluing the succession $\{2,3\}$ together. \centerline{\includegraphics[width=4in]{3.eps}} \begin{definition} Suppose $f$ is a bijection from $[n]$ to $\{2,\ldots, n+1\}$. If there exists a series of reductions of type 1 or type 2 by which one can reduce $f$ to the simplest bijection $1\mapsto2$, then we call $f$ a $nice$ $bijection$. \end{definition} Suppose $\pi\in S_n$ and $p$ is a point of $\pi$.
We can replace $p$ by a bijection $f$ from $[k]$ to $\{2,\ldots, k+1\}$ and obtain a new permutation $\pi'\in S_{n+k}$ by the following steps: (1) represent $\pi$ by the horizontal diagram; (2) add a point $q$ right before $p$ and add an arc from $q$ to $p$; (3) replace the arc from $\pi^{-1}(p)$ to $p$ by an arc from $\pi^{-1}(p)$ to $q$; (4) replace the arc from $q$ to $p$ by the diagram of $f$. For example, if $\pi$, $p$ and $f$ are as below, \centerline{\includegraphics[width=4in]{4.eps}} then, we can replace $p$ by $f$ and obtain the following permutation: \centerline{\includegraphics[width=3.5in]{5.eps}} It is easy to see, by the definition of the nice bijection, that if $f$ is a nice bijection, then we can reduce $\pi'$ back to $\pi$: first, we can reduce $\pi'$ to the permutation obtained in Step (3) because $f$ is nice; then, by gluing the succession $p$ and $q$ together, we obtain $\pi$. Notice that the bijection $f$ in the above example is not nice. For the circular diagram of a permutation $\pi$, we can also use Steps (2)--(4) shown above to obtain a new diagram. However, to obtain a permutation $\pi'$, we need to specify the point representing number 1 in the new diagram. See the example in the proof of Theorem \ref{gf_of_r}. \begin{lemma}\label{lemma:count_nice_bijection} For $n\ge2$, let $a_n$ be the number of nice bijections from $[n-1]$ to $\{2,\ldots, n\}$. Define $a_1=1$. The generating function of $a_n$ satisfies the following equation: \[ g(x):=a_1x+a_2x^2+a_3x^3+\cdots=xc(x) \] where $c(x)=\sum\limits_{k=0}^\infty C_kx^k$ is the generating function of the Catalan numbers. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:count_nice_bijection}] Suppose $n\ge2$ and $f$ is a nice bijection from $[n-1]$ to $\{2,\ldots, n\}$ such that $f(1)=k$. Suppose $p<k$. We claim that $f(p)<k$. If not, suppose $q=f(p)>k$. Then, neither $\{1,k\}$ nor $\{p,q\}$ is a succession. Consider the horizontal diagram of $f$. 
If we perform a reduction of type 1 or type 2 on $f$, then the arc we remove is neither the arc from $1$ to $k$ nor the arc from $p$ to $q$. By induction, no matter how many reductions we perform, there always exists an arc from $1$ to $k'$ and an arc from $p'$ to $q'$ such that $p'<k'<q'$. In other words, we can never reduce the bijection to $1\mapsto2$. This proves the claim. Thus, the image of $\{2,\ldots, k-1\}$ under $f$ must be $\{2,\ldots, k-1\}$, and therefore, the image of $\{k,\ldots,n-1\}$ under $f$ must be $\{k+1,\ldots,n\}$. This implies that $f|_{\{2,\ldots, k-1\}}$ has the same diagram as a permutation $\tau\in S_{k-2}$ and we can reduce $\tau$ to $\pi_\emptyset$. This also implies that $f|_{\{k,\ldots, n-1\}}$ has the same diagram as a nice bijection from $[n-k]$ to $\{2,\ldots, n-k+1\}$. By Lemma \ref{lemma:a_permutation_can_be_reduced_to_null_iff...}, the number of nice bijections from $[n-1]$ to $\{2,\ldots, n\}$ such that $f(1)=k$ is $C_{k-2}a_{n-k+1}$. Letting $k$ vary, we obtain $a_n=\sum\limits_{k=2}^nC_{k-2}a_{n-k+1}$. By \eqref{recurrence_of_Catalan_number}, $(C_n)_{n\ge0}$ and $(a_{n+1})_{n\ge0}$ have the same recurrence relation and the same initial condition $C_0=a_1=1$, so $C_n=a_{n+1}$ for $n\ge0$. Therefore $g(x)=xc(x)$. \end{proof} \section{Structure of straight m\'enage permutations}\label{sec:straight_menage_permutation} In Section \ref{sec:straight_menage_permutation}, when we mention a reduction, we mean a reduction of \textbf{type 1 or type 2}. If a permutation $\pi$ is not a straight m\'enage permutation, then $\pi$ has at least one fixed point or succession. Thus, we can apply a reduction to $\pi$. By induction, we can reduce $\pi$ to a straight m\'enage permutation $\pi'$ by a series of reductions. For example, we can reduce $\pi_1=(1,3)(2)(4,5,6)$ to $\pi_\emptyset$: $(1,3)(2)(4,5,6)\to(1,2)(3,4,5)\to(1)(2,3,4)\to(1,2,3)\to(1,2)\to(1)\to\pi_\emptyset$. 
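Such reduction chains can be computed mechanically; a minimal sketch (one-line-list encoding and helper names are ours), checked on $(1,3)(2)(4,5,6)$ and on $(1,5,4)(2)(3)(6)$:

```python
# Iterate reductions of type 1 / type 2 until no fixed point or succession
# remains; the result is the straight menage permutation the text describes.

def step(p):
    """Apply one reduction of type 1 or type 2, or return None if impossible."""
    for i in range(1, len(p) + 1):
        t = p[i - 1] - i              # 0 for a fixed point, 1 for a succession
        if t in (0, 1):
            vals = (p[j - 1] if j < i else p[j] for j in range(1, len(p)))
            return [v - (v > i + t) for v in vals]
    return None

def core(p):
    """Reduce p until no reduction of type 1 or type 2 applies."""
    while (q := step(p)) is not None:
        p = q
    return p

# pi_1 = (1,3)(2)(4,5,6) encoded as [3,2,1,5,6,4] reduces to the empty permutation:
assert core([3, 2, 1, 5, 6, 4]) == []
# pi_2 = (1,5,4)(2)(3)(6) encoded as [5,2,3,1,4,6] reduces to (1,3,2):
assert core([5, 2, 3, 1, 4, 6]) == [3, 1, 2]
```

The order in which `step` picks reductions does not affect the final result, matching the remark in the text.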
We can reduce $\pi_2=(1,5,4)(2)(3)(6)\in S_6$ to $(1,3,2)$: $(1,5,4)(2)(3)(6)\to(1,5,4)(2)(3)\to(1,4,3)(2)\to(1,3,2)$. It is easy to see that the resulting straight m\'enage permutation does not depend on the order of the reductions. Recall that we defined the permutation induced from a noncrossing partition in Section \ref{sec:partitions_and_Catalan_numbers}. \begin{lemma}\label{lemma:a_permutation_can_be_reduced_to_null_iff...} Suppose $\pi\in S_n$. We can reduce $\pi$ to $\pi_\emptyset$ by reductions of type 1 and type 2 if and only if there is a noncrossing partition inducing $\pi$. In particular, there are $C_n$ such permutations in $S_n$. \end{lemma} \begin{proof} $\Rightarrow$: Suppose we can reduce $\pi\in S_n$ to $\pi_\emptyset$. We use induction on $n$. If $n=1$, the conclusion is trivial. Suppose $n>1$. Then, $\pi$ has at least one fixed point or succession. If $\pi$ has a fixed point $i$, then by reduction of type 1 on $i$ we obtain $\pi'$ satisfying \eqref{eq:expression_of_reduction_of_type_1}. By induction assumption, there is a noncrossing partition $\Phi=\{V_1,\ldots,V_k\}$ inducing $\pi'$. Now, we define a new noncrossing partition $\Pi_1(\Phi,i)$ as follows. Set $$\tilde V_r=\{x+1|x\in V_r\text{ and }x\ge i\}\cup\{x|x\in V_r\text{ and }x<i\}$$ for $1\le r\le k$ and $\tilde V_{k+1}=\{i\}$. Define $\Pi_1(\Phi,i)=\{\tilde V_1,\ldots,\tilde V_{k+1}\}$. It is not difficult to check that $\Pi_1(\Phi,i)$ is a noncrossing partition inducing $\pi$. If $\pi$ has a succession $\{i,i+1\}$, then by reduction of type 2 on $\{i,i+1\}$, we obtain $\pi''$ satisfying \eqref{eq:expression_of_reduction_of_type_2}. By induction assumption, there is a noncrossing partition $\Phi=\{U_1,\ldots,U_s\}$ inducing $\pi''$. Now, we define a new noncrossing partition $\Pi_2(\Phi,i)$ as follows. 
Let $t_0$ be the index of the block of $\Phi$ that contains $i$, and set \begin{align*} \tilde U_t=\begin{cases}\{x+1|x\in U_t\text{ and }x> i\}\cup\{x|x\in U_t\text{ and }x<i\}&\text{ if }t\ne t_0;\\\{x+1|x\in U_t\text{ and }x> i\}\cup\{x|x\in U_t\text{ and }x<i\}\cup\{i,i+1\}&\text{ if }t=t_0.\end{cases} \end{align*} Define $\Pi_2(\Phi,i)=\{\tilde U_1,\ldots,\tilde U_s\}$. It is not difficult to check that $\Pi_2(\Phi,i)$ is a noncrossing partition inducing $\pi$. $\Leftarrow$: Suppose there is a noncrossing partition $\{V_1,\ldots,V_k\}$ inducing $\pi$ where $V_r=\{a_1^r,\ldots, a_{j_r}^r\}$ and $a_1^r<\cdots<a_{j_r}^r$. We proceed by induction on $n$. The case $n=1$ is trivial. Suppose $n>1$. For $r_1\ne r_2$, if $[a_1^{r_1},a_{j_{r_1}}^{r_1}]\cap[a_1^{r_2},a_{j_{r_2}}^{r_2}]\ne\emptyset$, then either $[a_1^{r_1},a_{j_{r_1}}^{r_1}]\subset[a_1^{r_2},a_{j_{r_2}}^{r_2}]$ or $[a_1^{r_2},a_{j_{r_2}}^{r_2}]\subset[a_1^{r_1},a_{j_{r_1}}^{r_1}]$; otherwise, the partition cannot be noncrossing. Thus, there exists $p$ such that $[a_1^p,a_{j_p}^p]\cap[a_1^q,a_{j_q}^q]=\emptyset$ for all $q\ne p$. If $j_p=1$, then $a_1^p$ is a fixed point of $\pi$. If $j_p>1$, then $\{a_1^p,a_2^p\}$ is a succession of $\pi$. For the case $j_p=1$, perform a reduction of type 1 on $a_1^p$ and obtain $\pi'\in S_{n-1}$. Then, $\{\tilde V_r|r\ne p\}$ is a noncrossing partition inducing $\pi'$, where \begin{align}\label{eq:tilde_V_r} \tilde V_r=\{x-1|x\in V_r\text{ and }x> a_{1}^p\}\cup\{x|x\in V_r\text{ and }x<a_{1}^p\}. \end{align} By induction assumption, we can reduce $\pi'$ to $\pi_\emptyset$; then, we can also reduce $\pi$ to $\pi_\emptyset$. For the case $j_p>1$, perform a reduction of type 2 on $\{a_1^p,a_2^p\}$ and get $\pi'\in S_{n-1}$. Then, $\{\tilde V_r|1\le r\le k\}$ is a noncrossing partition inducing $\pi'$, where $\tilde V_r$ is the same as in \eqref{eq:tilde_V_r}. By induction assumption, we can reduce $\pi'$ to $\pi_\emptyset$; then, we can also reduce $\pi$ to $\pi_\emptyset$. 
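For small $n$, the count $C_n$ can also be confirmed by exhaustive search; a brute-force sketch in Python (encoding and helper names are ours):

```python
# Count the permutations of S_n reducible to the empty permutation by
# reductions of type 1 / type 2, and compare with the Catalan number C_n.
from itertools import permutations
from math import comb

def step(p):
    """One reduction of type 1 (t = 0) or type 2 (t = 1), or None."""
    for i in range(1, len(p) + 1):
        t = p[i - 1] - i
        if t in (0, 1):
            vals = (p[j - 1] if j < i else p[j] for j in range(1, len(p)))
            return [v - (v > i + t) for v in vals]
    return None

def reducible_to_empty(p):
    while (q := step(p)) is not None:
        p = q
    return p == []

for n in range(1, 7):
    count = sum(reducible_to_empty(list(p))
                for p in permutations(range(1, n + 1)))
    assert count == comb(2 * n, n) // (n + 1)   # the Catalan number C_n
```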
\end{proof} Conversely, for a given straight m\'enage permutation $\pi\in S_m$, what is the cardinality of the set \begin{align}\label{set:permutations_which_can_be_reduced_to_a_given_permutation} \{\tau\in S_{m+n}\big|\text{we can reduce }\tau\text{ to }\pi\text{ by reductions of type 1 and type 2}\}? \end{align} Interestingly, the answer depends only on $m$ and $n$; it does not depend on the choice of $\pi$. In fact, we have \begin{theorem}\label{gf_of_omega} Suppose $m\ge0$ and $\pi\in S_m$ is a straight m\'enage permutation. Suppose $n\ge0$ and $w_m^n$ is the cardinality of the set in \eqref{set:permutations_which_can_be_reduced_to_a_given_permutation}. Set $W_m(x)=w_m^0+w_m^1x+w_m^2x^2+w_m^3x^3+\cdots$; then, \[ W_m(x)=c(x)^{2m+1}. \] \end{theorem} \begin{proof}[Proof of Theorem \ref{gf_of_omega}] If $m=0$, then $\pi=\pi_\emptyset$, and the conclusion follows from Lemma \ref{lemma:a_permutation_can_be_reduced_to_null_iff...}. Thus, we only consider the case that $m>0$. Obviously, $w_m^0=1$. Now, suppose $n\ge1$. Represent $\pi$ by a horizontal diagram. The diagram has $m+1$ gaps: one gap before the first point, one gap after the last point and one gap between each pair of adjacent points. Let $A$ be the set in \eqref{set:permutations_which_can_be_reduced_to_a_given_permutation}. To obtain a permutation in $A$, we add points to $\pi$ in the following two ways: \begin{enumerate} \item[(a)] add a permutation induced by a noncrossing partition $\Phi_p$ of $[d_p]$ into the $p$th gap of $\pi$, where $1\le p\le m+1$ and $d_p\ge0$ ($d_p=0$ means that we add nothing into the $p$th gap); \item[(b)] replace the $q$th point of $\pi$ by a nice bijection $f_q$ from $[r_q]$ to $\{2,\ldots,r_q+1\}$, where $1\le q\le m$ and $r_q\ge0$ ($r_q=0$ means that we do not change the $q$th point). 
\end{enumerate} For example, if $\pi$ and $f$ are as below, \centerline{\includegraphics[width=4in]{6.eps}} then we can add the permutation $(1,2)$ between $p$ and $q$, add the permutation $(1)(2,3)$ after the last point and replace $p$ by $f$. Then, we obtain the following permutation in $A$: \centerline{\includegraphics[width=4in]{7.eps}} \textbf{Statement: The set of permutations constructed from (a) and (b) equals $A$.} It is easy to see that we can reduce a permutation constructed through (a) and (b) to $\pi$. Conversely, suppose we can reduce $\pi'\in S_{m+n}$ to $\pi$. Now, we show that one can construct $\pi'$ through (a) and (b). Use induction on $n$. The case that $n=0$ is trivial. Suppose $n>0$. Then, $\pi'$ has at least one fixed point or succession. For a nice bijection $f$ from $[s]$ to $\{2,\ldots,s+1\}$ and $1<w_1\le s+1$, $1\le w_2\le s+1$, define nice bijections $B_1(f,w_1)$ and $B_2(f,w_2)$ as \begin{align}\label{eq:B_1(f,w_1)} B_1(f,w_1)=\begin{cases}f(x)&\text{ if }x<w_1\text{ and } f(x)<w_1;\\f(x)+1&\text{ if }x<w_1\text{ and } f(x)\ge w_1;\\f(x-1)&\text{ if }x>w_1\text{ and } f(x)<w_1;\\f(x-1)+1&\text{ if }x>w_1\text{ and } f(x)\ge w_1;\\w_1&\text{ if }x=w_1\end{cases} \end{align} and \begin{align} B_2(f,w_2)=\begin{cases}f(x)&\text{ if }x<w_2\text{ and } f(x)\le w_2;\\f(x)+1&\text{ if }x<w_2\text{ and } f(x)>w_2;\\f(x-1)&\text{ if }x>w_2\text{ and } f(x)\le w_2;\\f(x-1)+1&\text{ if }x>w_2\text{ and } f(x)>w_2;\\w_2+1&\text{ if }x=w_2.\end{cases}\label{eq:B_2(f,w_2)} \end{align} We can reduce $B_1(f,w_1)$ to $f$ by a reduction of type 1 on the fixed point $w_1$. We can reduce $B_2(f,w_2)$ to $f$ by a reduction of type 2 on the succession $\{w_2,w_2+1\}$. \textbf{Case 1: $\pi'$ has a fixed point $i$.} Using a reduction of type 1 on $i$, we obtain $\pi''\in S_{m+n-1}$. By induction assumption, we can construct $\pi''$ from $\pi$ by (a) and (b). 
According to the value of $i$, there is either a $k$ such that \begin{align}\label{eq:condition_of_k} 1\le i-\sum_{j<k}(d_j+r_j+1)\le d_k+1 \end{align} or a $k'$ such that \begin{align}\label{eq:condition_of_k'} 1<i-(\sum_{j<k'}(d_j+r_j+1)+d_{k'})\le r_{k'}+1. \end{align} If there is a $k$ such that \eqref{eq:condition_of_k} holds, then we can construct $\pi'$ from $\pi$ by (a) and (b), except that we add the permutation induced by $\Pi_1(\Phi_k,i-\sum_{j<k}(d_j+r_j+1))$ instead of $\Phi_k$ into the $k$th gap, where $\Pi_1$ is the same as in the proof of Lemma \ref{lemma:a_permutation_can_be_reduced_to_null_iff...}. If instead there exists a $k'$ such that \eqref{eq:condition_of_k'} holds, then we can construct $\pi'$ from $\pi$ by (a) and (b), except that we replace the $k'$th point of $\pi$ by $B_1(f_{k'},i-(\sum_{j<k'}(d_j+r_j+1)+d_{k'}))$ instead of $f_{k'}$, where $B_1$ is the same as in \eqref{eq:B_1(f,w_1)}. \textbf{Case 2: $\pi'$ has a succession $\{i,i+1\}$.} By reduction of type 2 on $\{i,i+1\}$ we obtain $\pi''\in S_{m+n-1}$. By induction assumption, we can construct $\pi''$ from $\pi$ by (a) and (b). According to the value of $i$, there is either a $k$ such that \begin{align}\label{eq:condition_of_k_for_succession} 0<i-\sum_{j<k}(d_j+r_j+1)\le d_k \end{align} or a $k'$ such that \begin{align}\label{eq:condition_of_k'_for_succession} 0<i-(\sum_{j<k'}(d_j+r_j+1)+d_{k'})\le r_{k'}+1. \end{align} If there is a $k$ such that \eqref{eq:condition_of_k_for_succession} holds, then we can construct $\pi'$ from $\pi$ by (a) and (b), except that we add the permutation induced by $\Pi_2(\Phi_k,i-\sum_{j<k}(d_j+r_j+1))$ instead of $\Phi_k$ into the $k$th gap, where $\Pi_2$ is the same as in the proof of Lemma \ref{lemma:a_permutation_can_be_reduced_to_null_iff...}. 
If instead there exists a $k'$ such that \eqref{eq:condition_of_k'_for_succession} holds, then we can construct $\pi'$ from $\pi$ by (a) and (b), except that we replace the $k'$th point of $\pi$ by $B_2(f_{k'},i-(\sum_{j<k'}(d_j+r_j+1)+d_{k'}))$ instead of $f_{k'}$, where $B_2$ is the same as in \eqref{eq:B_2(f,w_2)}. Thus, we have proved the statement. Now, add points to $\pi$ by (a) and (b). The total number of points added to $\pi$ is $d_1+\cdots+d_{m+1}+r_1+\cdots+r_m$. To obtain a permutation in $S_{m+n}\cap A$, $d_1+\cdots+d_{m+1}+r_1+\cdots+r_m$ must equal $n$. Therefore, the number of permutations in $S_{m+n}\cap A$ is \[ w_m^n=\sum\limits_{d_1,\ldots,d_{m+1}\atop r_1,\ldots, r_m}C_{d_1}\cdots C_{d_{m+1}}a_{1+r_1}\cdots a_{1+r_m} \] where $C_k$ is the $k$th Catalan number, $a_k$ is the same as in Lemma \ref{lemma:count_nice_bijection}, and the sum runs over all $(2m+1)$-tuples $(d_1,\ldots,d_{m+1},r_1,\ldots,r_m)$ of nonnegative integers with sum $n$. By Lemma \ref{lemma:count_nice_bijection}, the generating function of $w_m^n$ is $$c(x)^{m+1}\big(\frac{g(x)}{x}\big)^m=c(x)^{2m+1}.$$ \end{proof} \begin{proof}[Proof of \eqref{eq:result_of_straight_menage_numbers}] We can reduce each permutation in $S_n$ to a straight m\'enage permutation in $S_i$ ($0\le i\le n$). Thus, we have $$n!\,=\sum\limits_{i=0}^nw_i^{n-i}V_i$$ where $V_i$ is the $i$th straight m\'enage number and $w_i^{n-i}$ is the same as in Theorem \ref{gf_of_omega}. Thus, \begin{align*} \sum\limits_{n=0}^{\infty}n!\,x^n=\sum\limits_{n=0}^\infty \sum\limits_{i=0}^nw_i^{n-i}V_ix^n=\sum\limits_{i=0}^\infty \sum\limits_{n=i}^\infty w_i^{n-i}V_ix^n=\sum\limits_{i=0}^\infty \sum\limits_{n=0}^\infty w_i^nV_ix^nx^i=\sum\limits_{i=0}^\infty c(x)^{2i+1}V_ix^i \end{align*} where the last equality is from Theorem \ref{gf_of_omega}. 
\end{proof} \section{Structure of ordinary m\'enage permutations}\label{sec:ordinary_menage_permutation} By definition, a permutation $\tau$ is an ordinary m\'enage permutation if and only if no reduction of type 1 or type 3 can be applied to $\tau$. As in Section \ref{sec:straight_menage_permutation}, we can reduce each permutation $\pi$ to an ordinary m\'enage permutation by reductions of type 1 and type 3. The resulting permutation does not depend on the order of the reductions. By the circular diagram representation of permutations, it is not difficult to see that we can reduce a permutation $\pi$ to $\pi_\emptyset$ by reductions of type 1 and type 2 if and only if we can reduce $\pi$ to $\pi_\emptyset$ by reductions of type 1 and type 3. Thus, we have the following lemma: \begin{lemma}\label{lemma:equivalence} Suppose $\pi\in S_n$. We can reduce $\pi$ to $\pi_\emptyset$ by reductions of type 1 and type 3 if and only if there is a noncrossing partition inducing $\pi$. In particular, there are $C_n$ permutations of this type in $S_n$. \end{lemma} In the remainder of Section \ref{sec:ordinary_menage_permutation}, when we mention reductions, we mean reductions of \textbf{type 1 or type 3} unless otherwise specified. \begin{theorem}\label{gf_of_r} Suppose $m\ge0$ and $\pi\in S_m$ is an ordinary m\'enage permutation. Let $r_m^n$ denote the cardinality of the set \begin{align}\label{set:permutations_which_can_be_reduced_to_a_given_permutation_by_type_1_and_3} \{\tau\in S_{m+n}|\text{we can reduce }\tau\text{ to }\pi\text{ by reductions of type 1 and type 3}\}. \end{align} Then, the generating function of $r_m^n$ satisfies \begin{align*} R_m(x):=r_m^0+r_m^1x+r_m^2x^2+r_m^3x^3+\cdots=\begin{cases}c'(x)c(x)^{2m-2}&\text{if}\quad m>0;\\ c(x)&\text{if}\quad m=0.\end{cases} \end{align*} \end{theorem} \begin{proof} When $m=0$, $r_m^n=C_n$ by Lemma \ref{lemma:equivalence}. So $R_0(x)=c(x)$. Now, suppose $m>0$. Obviously $r_m^0=1$. Suppose $n>0$. 
Represent $\pi$ by a circular diagram. The diagram has $m$ gaps: one gap between each pair of adjacent points. Call the point corresponding to number $i$ $point$ $i$. Let $A$ denote the set in \eqref{set:permutations_which_can_be_reduced_to_a_given_permutation_by_type_1_and_3}. To obtain a permutation in $A$, we can add points into $\pi$ by the following steps: \begin{enumerate} \item[(a)] Add a permutation induced by a noncrossing partition $\Phi_i$ of $[d_i]$ into the gap between point $i$ and point $i+1$ (mod $m$), where $1\le i\le m$ and $d_i\ge0$ ($d_i=0$ means we add nothing into the gap). Use $Q_i$ to denote the set of points added into the gap between point $i$ and point $i+1$ (mod $m$). \item[(b)] Replace point $i$ by a nice bijection $f_i$ from $[t_i]$ to $\{2,\ldots,t_i+1\}$, where $1\le i\le m$ and $t_i\ge0$ ($t_i=0$ means that we do not change the $i$th point). Use $P_i$ to denote the set of points obtained from this replacement. Thus, $P_i$ contains $t_i+1$ points. \item[(c)] Specify a point in $P_1\bigcup Q_m$ to correspond to the number 1 of the new permutation $\pi'$. \end{enumerate} Steps (a) and (b) are the same as in the proof of Theorem \ref{gf_of_omega}, but Step (c) needs some explanation. In the proof of Theorem \ref{gf_of_omega}, after adding points into the permutation by (a) and (b), we defined $\pi'$ from the resulting $horizontal$ diagram in a natural way, that is, the leftmost point corresponds to 1, and the following points correspond to 2, 3, 4, \ldots \,respectively. However, now there is no natural way to define $\pi'$ from the resulting $circular$ diagram because we have more than one choice of the point corresponding to number 1 of $\pi'$. Note that $\pi'$ can be reduced to $\pi$ by a series of reductions of type 1 or type 3. Then, for each $1\le u\le m$, the points in $P_u$ will become point $u$ of $\pi$ after the reductions. Thus, we can choose any point of $P_1$ to be the one corresponding to the number 1 of $\pi'$. 
Moreover, we can also choose any point of $Q_m$ to be the one corresponding to the number 1 of $\pi'$. The reason is that if a permutation in $S_k$ has a cycle $(1,k)$, then a reduction of type 3 will reduce $(1,k)$ to the cycle $(1)$. Thus, we can choose any point in $P_1\bigcup Q_m$ to be the point corresponding to number 1 of $\pi'$. To see this more clearly, let us look at an example. Suppose $\pi$ and $f$ are as below: \centerline{\includegraphics[width=3.2in]{8.eps}} Then, we can add the permutation $(1,2)$ between point 2 and point 3, add the permutation $(1)(2,3)$ between point 6 and point 1 and replace point 2 by $f$. Then, we obtain a new diagram: \centerline{\includegraphics[width=2in]{9.eps}} Now, we can choose point 1 or any of the three points between point 1 and point 6 to be the point corresponding to number 1 of the new permutation $\pi'$. For instance, if we set point 1 to be the point corresponding to number 1 of $\pi'$, then $$\pi'=(1,8,2,5)(3,4)(6,7)(9,11,10)(12)(13,14).$$ If we set point $w$ to be the point corresponding to number 1 of $\pi'$, then $$\pi'=(1,14)(2,9,3,6)(4,5)(7,8)(10,12,11)(13).$$ We now continue the proof. By a similar argument to the one we used to prove the statement in the proof of Theorem \ref{gf_of_omega}, we have that the set of permutations constructed by (a)--(c) equals $A$. Now, add points to $\pi$ by (a)--(c). Then, $P_1\bigcup Q_m$ contains, in total, $d_m+(t_1+1)$ points. Thus, we have $d_m+(t_1+1)$ ways to specify the point corresponding to number 1 in $\pi'$. The total number of points added to $\pi$ is $d_1+\cdots+d_m+t_1+\cdots+t_m$. 
Therefore, the number of permutations in $S_{m+n}\cap A$ is \[ r_m^n=\sum\limits_{d_1,\ldots,d_m\atop t_1,\ldots, t_m}C_{d_1}\cdots C_{d_m}a_{1+t_1}\cdots a_{1+t_m}(d_m+t_1+1) \] where $C_k$ is the $k$th Catalan number, $a_k$ is the same as in Lemma \ref{lemma:count_nice_bijection}, and the sum runs over all $2m$-tuples $(d_1,\ldots,d_m,t_1,\ldots, t_m)$ of nonnegative integers such that $\sum_{u=1}^m(t_u+d_u)=n$. Set $\eta_k=(k+1)\sum\limits_{r=0}^ka_{r+1}C_{k-r}$; from Lemma \ref{lemma:count_nice_bijection}, we have $$1+\frac{\eta_1}{2}x+\frac{\eta_2}{3}x^2+\frac{\eta_3}{4}x^3+\cdots=c^2(x).$$ By Lemma \ref{thm:property_of_Catalan_numbers}, the generating function of $\eta_k$ is $1+\eta_1x+\eta_2x^2+\eta_3x^3+\cdots=(xc^2(x))'=c'(x)$. Thus, the generating function of $r_m^n$ is $c(x)^{2m-2}c'(x)$. \end{proof} \begin{proof}[Proof of \eqref{eq:result_of_ordinary_menage_numbers}] We can reduce each permutation in $S_n$ to an ordinary m\'enage permutation. Thus, we have $$n!\,=\sum\limits_{i=0}^nr_i^{n-i}U_i$$ where $U_i$ is the $i$th ordinary m\'enage number and $r_i^{n-i}$ is the same as in Theorem \ref{gf_of_r}. Thus, \begin{multline*} \sum\limits_{n=0}^{\infty}n!\,x^n=\sum\limits_{n=0}^\infty \sum\limits_{i=0}^nr_i^{n-i}U_ix^n=\sum\limits_{i=0}^\infty \sum\limits_{n=i}^\infty r_i^{n-i}U_ix^n\\=\sum\limits_{i=0}^\infty \sum\limits_{n=0}^\infty r_i^nU_ix^nx^i =c(x)+\sum\limits_{i=1}^\infty c'(x)c(x)^{2i-2}U_ix^i \end{multline*} where the last equality follows from Theorem \ref{gf_of_r}. \end{proof} \section{Counting m\'enage permutations by number of cycles}\label{count_permutations_by_cycles} We prove \eqref{eq:straight_by_cycles} in Section \ref{sec:count_straight_by_cycles} and prove \eqref{eq:ordinary_by_cycles} in Section \ref{sec:count_ordinary_by_cycles}. Our main method is coloring. We remark that one can also prove \eqref{eq:straight_by_cycles} and \eqref{eq:ordinary_by_cycles} by using the inclusion-exclusion principle in a similar way to \cite{Gessel}. 
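The convolution identities $n!=\sum_{i=0}^n w_i^{n-i}V_i$ and $n!=\sum_{i=0}^n r_i^{n-i}U_i$ established above can be checked numerically for small $n$; a brute-force sketch in Python (truncated power-series arithmetic; the helper names are ours):

```python
# Verify n! = sum_i w_i^{n-i} V_i and n! = sum_i r_i^{n-i} U_i for n <= N,
# with w_m^n = [x^n] c(x)^{2m+1}, r_0^n = C_n, r_m^n = [x^n] c'(x) c(x)^{2m-2},
# and V_i, U_i counted by exhaustive search over S_i.
from itertools import permutations
from math import comb, factorial

N = 7
cat = [comb(2 * k, k) // (k + 1) for k in range(N + 2)]   # Catalan numbers

def mul(a, b):                      # product of power series truncated at x^N
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(N + 1)]

def power(a, e):
    r = [1] + [0] * N
    for _ in range(e):
        r = mul(r, a)
    return r

c = cat[: N + 1]                                    # c(x)
cp = [(k + 1) * cat[k + 1] for k in range(N + 1)]   # c'(x)

def is_straight(p):     # no fixed points, no successions
    return all(v != i and v != i + 1 for i, v in enumerate(p, 1))

def is_ordinary(p):     # no fixed points, no generalized successions
    n = len(p)
    return all(v != i and v != i % n + 1 for i, v in enumerate(p, 1))

V = [1] + [sum(map(is_straight, permutations(range(1, n + 1))))
           for n in range(1, N + 1)]
U = [1] + [sum(map(is_ordinary, permutations(range(1, n + 1))))
           for n in range(1, N + 1)]

for n in range(N + 1):
    w = [power(c, 2 * i + 1)[n - i] for i in range(n + 1)]
    assert factorial(n) == sum(w[i] * V[i] for i in range(n + 1))
    r = [cat[n]] + [mul(cp, power(c, 2 * i - 2))[n - i]
                    for i in range(1, n + 1)]
    assert factorial(n) == sum(r[i] * U[i] for i in range(n + 1))
```

The brute-force counts reproduce the classical ménage numbers $U_0,\ldots = 1,0,0,1,2,13,\ldots$, and both identities hold for all $n\le N$.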
\subsection{Coloring and weights} For a permutation $\pi$, we use $f(\pi)$, $g(\pi)$, $h(\pi)$ and $r(\pi)$ to denote the number of its cycles, fixed points, successions and generalized successions, respectively. The following lemma is well known. \begin{lemma}\label{lemma:well_known_lemma_for_cycles} For $n\ge0$, \begin{align*} \sum\limits_{\pi\in S_n}\alpha^{f(\pi)}=(\alpha)_n. \end{align*} \end{lemma} For a permutation, color some of its fixed points red and color some of its $generalized$ successions yellow. Then, we obtain a colored permutation. Here and in the following sections, we use the phrase $colored$ $permutation$ as follows: if two permutations are the same as maps but have different colors, then they are different colored permutations. Define $\mathbb{S}_n$ to be the set of colored permutations on $n$ objects. Set $A_n$ to be the subset of $\mathbb{S}_n$ consisting of colored permutations with a colored generalized succession $\{n,1\}$. Set $B_n=\mathbb{S}_n\backslash A_n$. So we can consider $S_n$ as a subset of $\mathbb{S}_n$ consisting of permutations with no color. In particular, $\mathbb{S}_0=S_0$. Set $$M_n^\alpha(t,u)=\sum\limits_{\pi\in S_n}\alpha^{f(\pi)}t^{g(\pi)}u^{h(\pi)}\quad\quad\text{and}\quad\quad L_n^\alpha(t,u)=\sum\limits_{\pi\in S_n}\alpha^{f(\pi)}t^{g(\pi)}u^{r(\pi)}.$$ For a colored permutation $\epsilon\in\mathbb{S}_n$, define two weights $W_1$ and $W_2$ of $\epsilon$ as: \begin{align*} W_1(\epsilon)&=x^n\cdot\alpha^{f(\epsilon)}\cdot t^{\text{number of colored fixed points}}\cdot u^{\text{number of colored successions}},\\ W_2(\epsilon)&=x^n\cdot\alpha^{f(\epsilon)}\cdot t^{\text{number of colored fixed points}}\cdot u^{\text{number of colored generalized successions}}. \end{align*} \begin{lemma}\label{lemma:sum_of_the_W1_and_W2_weights} $\sum\limits_{\epsilon\in B_n}W_1(\epsilon)=\sum\limits_{\epsilon\in B_n}W_2(\epsilon)=M_n^\alpha(1+t,1+u)x^n$, $\sum\limits_{\epsilon\in\mathbb{S}_n}W_2(\epsilon)=L_n^\alpha(1+t,1+u)x^n$. 
\end{lemma} \begin{proof} The lemma follows directly from the definitions of $M_n^\alpha$, $L_n^\alpha$, $B_n$, $W_1$ and $W_2$. \end{proof} \subsection{Counting straight m\'enage permutations by number of cycles}\label{sec:count_straight_by_cycles} In this subsection, we represent permutations by diagrams of \textbf{horizontal type}. Suppose $n\ge0$ and $\pi\in S_n$. Then, $\pi$ has $n+1$ gaps: one gap before the first point, one gap after the last point and one gap between each pair of adjacent points. We can add points to $\pi$ by the following steps. (a) Add the identity permutation of $S_{d_p}$ into the $p$th gap of $\pi$, where $1\le p\le n+1$ and $d_p\ge0$ ($d_p=0$ means that we add nothing into the $p$th gap). (b) Replace the $q$th point of $\pi$ by the nice bijection from $[r_q]$ to $\{2,\ldots,r_q+1\}$ that sends each $i$ to $i+1$. Here, $1\le q\le n$ and $r_q\ge0$ ($r_q=0$ means that we do not change the $q$th point). (c) Color the fixed points added by (a) red, and color the successions added by (b) yellow. Then, Steps (a), (b) and (c) give a colored permutation in $\bigcup_{n=0}^\infty B_n$. \begin{lemma}\label{lemma:sum_of_W1_weight_of_things_constructed_from_pi} Suppose $\pi\in S_n$. The sum of the $W_1$-weights of all colored permutations constructed from $\pi$ by (a)--(c) is \[ x^n\cdot\alpha^{f(\pi)}\cdot\frac{1}{(1-\alpha tx)^{n+1}}\cdot\frac{1}{(1-ux)^{n}}. \] \end{lemma} \begin{proof} In Step (a), we added $d_p$ fixed points into the $p$th gap. They contribute $(xt\alpha)^{d_p}$ to the weight because each of them is a single cycle and a colored fixed point. Because $d_p$ can be any nonnegative integer, the total contribution of the fixed points added into a gap is $\frac{1}{(1-\alpha tx)}$. Thus, the total contribution of the fixed points added into all the gaps is $\frac{1}{(1-\alpha tx)^{n+1}}$. 
In Step (b), through the replacement on the $q$th point, we added $r_q$ points and $r_q$ successions to $\pi$ (each of these successions receives a color in Step (c)). Thus, the contribution of this replacement to the weight is $(ux)^{r_q}$. Because $r_q$ can be any nonnegative integer, the total contribution of the nice bijections replacing the $q$th point is $\frac{1}{(1-ux)}$. Thus, the total contribution of the nice bijections corresponding to all points is $\frac{1}{(1-ux)^{n}}$. Observing that the $W_1$-weight of $\pi$ is $x^n\cdot\alpha^{f(\pi)}$, we complete the proof. \end{proof} Suppose $\epsilon$ is a colored permutation in $\bigcup_{n=0}^\infty B_n$. If we perform reductions of type 1 on the colored fixed points and perform reductions of type 2 on the colored successions, then we obtain a new permutation $\epsilon'$ with no color. This $\epsilon'$ is the only permutation in $\bigcup_{n=0}^\infty S_n$ from which we can obtain $\epsilon$ by Steps (a)--(c). Therefore, there is a bijection between $\bigcup_{n=0}^\infty B_n$ and $$\bigcup_{n=0}^\infty\bigcup_{\pi\in S_n}\{\text{colored permutation constructed from $\pi$ through Steps (a)--(c)}\}.$$ Because of the bijection, Lemmas \ref{lemma:sum_of_the_W1_and_W2_weights} and \ref{lemma:sum_of_W1_weight_of_things_constructed_from_pi} imply that \begin{align}\label{kkk} \sum\limits_{n=0}^\infty M_n^\alpha(1+t,1+u)x^n=\sum\limits_{n=0}^\infty\sum\limits_{\pi\in S_n}\dfrac{x^n\alpha^{f(\pi)}}{(1-\alpha tx)^{n+1}(1-ux)^{n}}. \end{align} \begin{proof}[Proof of \eqref{eq:straight_by_cycles}] By Lemma \ref{lemma:well_known_lemma_for_cycles} and \eqref{kkk}, the sum of the $W_1$-weights of colored permutations in $\bigcup\limits_{n=0}^\infty B_n$ is \begin{align}\label{lll} \sum\limits_{n=0}^\infty M_n^\alpha(1+t,1+u)x^n=\sum\limits_{n=0}^\infty \frac{x^n(\alpha)_n}{(1-\alpha tx)^{n+1}(1-ux)^{n}}. 
\end{align} Setting $t=u=-1$, we have \[ \sum\limits_{n=0}^\infty M_n^\alpha(0,0)x^n=\sum\limits_{n=0}^\infty \frac{x^n(\alpha)_n}{(1+\alpha x)^{n+1}(1+x)^{n}}. \] Recall that straight m\'enage permutations are permutations with no fixed points or successions. Thus, for $n>0$, $M_n^\alpha(0,0)x^n$ is the sum of the $W_1$-weights of straight m\'enage permutations in $S_n$, which is $\sum\limits_{j=1}^nC_n^j\alpha^jx^n$. Furthermore, $M_0^\alpha(0,0)=1$. Thus, we have proved \eqref{eq:straight_by_cycles}. \end{proof} \subsection{Counting ordinary m\'enage permutations by number of cycles}\label{sec:count_ordinary_by_cycles} In this subsection, we represent permutations by diagrams of the \textbf{horizontal type}. Suppose $n\ge0$ and $\pi\in S_n$. Define $\mathbb{S}_m(\pi)$ to be the subset of $\mathbb{S}_m$ defined as follows: $\tau\in\mathbb{S}_m$ is in $\mathbb{S}_m(\pi)$ if and only if when we apply reductions of type 1 on the colored fixed points of $\tau$ and apply reductions of type 3 on the colored generalized successions of $\tau$, we obtain $\pi$. Define $A_m(\pi)=\mathbb{S}_m(\pi)\cap A_m$ and $B_m(\pi)=\mathbb{S}_m(\pi)\backslash A_m(\pi)$. \begin{lemma}\label{lemma:sum_of_W2_weights_in_all_An} The sum of the $W_2$-weights of the colored permutations in $\bigcup_{n=0}^\infty A_n$ is \begin{align*} \sum\limits_{n=1}^\infty x^n(\alpha)_n\cdot\frac{1}{(1-\alpha tx)^n}\cdot\frac{1}{(1-ux)^{n+1}}\cdot ux+\alpha utx+\frac{\alpha ux}{1-ux}. \end{align*} \end{lemma} \begin{proof} We first evaluate the sum of the $W_2$-weights of the colored permutations in $\bigcup_{m=n}^\infty A_m(\pi)$ and then add them up with respect to $\pi\in S_n$ and $n\ge0$. \textbf{Case 1: $n>0$.} In this case, for $\pi\in S_n$, we can construct a colored permutation in $\bigcup_{m=n}^\infty A_m(\pi)$ by the following steps. ($a^\prime$) Define $\tilde\pi$ to be a permutation in $S_{n+1}$ that sends $\pi^{-1}(1)$ to $n+1$, sends $n+1$ to 1 and sends all other $j$ to $\pi(j)$. Represent $\tilde\pi$ by a horizontal diagram. 
($b^\prime$) For $1\le p\le n$, add the identity permutation of $S_{d_p}$ into the gap of $\tilde\pi$ between number $p$ and $p+1$, where $d_p\ge0$ ($d_p=0$ means we add nothing into the gap). ($c^\prime$) Replace the $q$th point of $\tilde\pi$ by the nice bijection from $[r_q]$ to $\{2,\ldots,r_q+1\}$ that maps each $i$ to $i+1$. Here, $1\le q\le n+1$ and $r_q\ge0$ ($r_q=0$ means that we do not change the $q$th point). ($d^\prime$) Color the generalized succession consisting of 1 and the largest number yellow. Color the fixed points and generalized successions added by ($b^\prime$)--($c^\prime$) red and yellow, respectively. Suppose $\epsilon\in\bigcup_{m=n}^\infty A_m(\pi)$. If we perform reductions of type 1 on its colored fixed points and perform reductions of type 3 on its colored generalized successions, we obtain $\pi$. Furthermore, $\pi$ is the only permutation in $\bigcup_{k=0}^\infty S_k$ from which we can obtain $\epsilon$ through ($a^\prime$)--($d^\prime$). Therefore, there is a bijection between $\bigcup_{m=n}^\infty A_m(\pi)$ and $$\{\text{colored permutations constructed from $\pi$ through ($a^\prime$)--($d^\prime$)}\}.$$ We claim that the sum of the $W_2$-weights of the colored permutations in $\bigcup_{m=n}^\infty A_m(\pi)$ is \begin{align}\label{W2_weight_of...} x^n\alpha^{f(\pi)}\cdot\frac{1}{(1-\alpha tx)^n}\cdot\frac{1}{(1-ux)^{n+1}}\cdot ux. \end{align} In \eqref{W2_weight_of...}, $x^n\alpha^{f(\pi)}$ is the $W_2$-weight of $\pi$. The term $\frac{1}{(1-\alpha tx)^n}$ corresponds to the fixed points added to the permutation in ($b^\prime$). The term $\frac{1}{(1-ux)^{n+1}}$ corresponds to the successions added to the permutation in ($c^\prime$). The term $ux$ corresponds to the generalized succession $\{n+1,1\}$ added to the permutation in ($a^\prime$). \textbf{Case 2: $n=0$ and $m=0$.} In this case, $A_0(\pi_\emptyset)$ is empty. 
\textbf{Case 3: $n=0$ and $m\ge2$.} In this case, $A_m(\pi_\emptyset)$ contains one element: the cyclic permutation $\pi_C$, which maps each $i$ to $i+1$ (mod $m$). Each generalized succession of $\pi_C$ is colored, so the sum of the $W_2$-weights of the colored permutations in $A_m(\pi_\emptyset)$ is $\alpha(ux)^m$. \textbf{Case 4: $n=0$ and $m=1$.} In this case, $A_1(\pi_\emptyset)$ contains one map: $id_1\in S_1$. However, $id_1$ can have two types of color, namely, yellow and red+yellow, because $id_1$ has one fixed point and one generalized succession. Thus, $A_1(\pi_\emptyset)$ contains two colored permutations, and the sum of their $W_2$-weights is $\alpha ux+\alpha tux$. We remark that the identity permutation $id_1$ actually corresponds to four colored permutations. In addition to the two in $A_1(\pi_\emptyset)$, the other two are $id_1$, with a red color for its fixed point, and $id_1$, with no color. We have considered these colored permutations in $B_1(\pi_\emptyset)$ and $B_1(id_1)$, respectively. Because $\bigcup_{n=0}^\infty A_n=\bigcup_{n=0}^\infty\bigcup_{\pi\in S_n}\bigcup_{m=n}^\infty A_m(\pi)$, the sum of the $W_2$-weights of the colored permutations in $\bigcup_{n=0}^\infty A_n$ equals the sum of the weights found in Cases 1--4. Thus, the sum is \begin{multline*} \Bigg(\sum\limits_{n=1}^\infty\sum\limits_{\pi\in S_n}x^n\alpha^{f(\pi)}\cdot\frac{1}{(1-\alpha tx)^n}\cdot\frac{1}{(1-ux)^{n+1}}\cdot ux\Bigg)+\big(\sum\limits_{m=2}^\infty\alpha(ux)^m\big)+\big(\alpha ux+\alpha tux\big)\\ =\sum\limits_{n=1}^\infty x^n(\alpha)_n\cdot\frac{1}{(1-\alpha tx)^n}\cdot\frac{1}{(1-ux)^{n+1}}\cdot ux+\alpha utx+\frac{\alpha ux}{1-ux} \end{multline*} where we used Lemma \ref{lemma:well_known_lemma_for_cycles}.
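The consolidation of Cases 2--4 into the last two closed-form terms is a geometric-series identity, $\alpha\sum_{m\ge2}(ux)^m+\alpha ux+\alpha tux=\alpha utx+\frac{\alpha ux}{1-ux}$ for $|ux|<1$. A quick numerical sanity check of this bookkeeping (plain Python, not part of the paper's argument):

```python
# Check that the Case 2-4 contributions sum to the closed form
# alpha*u*t*x + alpha*u*x/(1 - u*x), truncating the geometric series.
def lhs(alpha, t, u, x, terms=200):
    s = sum(alpha * (u * x) ** m for m in range(2, terms))  # Case 3: m >= 2
    return s + alpha * u * x + alpha * t * u * x            # Case 4: m = 1

def rhs(alpha, t, u, x):
    return alpha * u * t * x + alpha * u * x / (1 - u * x)

for vals in [(1.0, -1.0, -1.0, 0.3), (2.0, 0.5, 0.25, 0.1)]:
    assert abs(lhs(*vals) - rhs(*vals)) < 1e-12
```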
\end{proof} \begin{proof}[Proof of \eqref{eq:ordinary_by_cycles}] By Lemma \ref{lemma:sum_of_the_W1_and_W2_weights}, $\sum\limits_{n=0}^\infty L_n^\alpha(1+t,1+u)x^n$ is the sum of the $W_2$-weights of the colored permutations in $\bigcup_{n=0}^\infty\mathbb{S}_n$. By Lemma \ref{lemma:sum_of_the_W1_and_W2_weights} and \eqref{lll}, the sum of the $W_2$-weights of all colored permutations in $\bigcup_{n=0}^\infty B_n$ is $$\sum\limits_{n=0}^\infty \frac{x^n(\alpha)_n}{(1-\alpha tx)^{n+1}(1-ux)^{n}}.$$ Because $\bigcup_{n=0}^\infty\mathbb{S}_n=(\bigcup_{n=0}^\infty B_n)\bigcup(\bigcup_{n=0}^\infty A_n)$, Lemma \ref{lemma:sum_of_W2_weights_in_all_An} implies \begin{align}\label{eq:equation} &\sum\limits_{n=0}^\infty L_n^\alpha(1+t,1+u)x^n\nonumber\\ =&\Bigg(\sum\limits_{n=0}^\infty \frac{x^n(\alpha)_n}{(1-\alpha tx)^{n+1}(1-ux)^{n}}\Bigg)+\Bigg(\sum\limits_{n=1}^\infty x^n(\alpha)_n\cdot\frac{1}{(1-\alpha tx)^n}\cdot\frac{1}{(1-ux)^{n+1}}\cdot ux+\alpha utx+\frac{\alpha ux}{1-ux}\Bigg)\nonumber\\ =&\sum\limits_{n=0}^\infty \bigg[\frac{x^n(\alpha)_n}{(1-\alpha tx)^{n+1}(1-ux)^{n+1}}(1-\alpha tux^2)\bigg]+\alpha utx+\frac{(\alpha-1) ux}{1-ux}. \end{align} Recall that ordinary m\'enage permutations are permutations with no fixed points and no generalized successions. By definition, $L_0^\alpha(0,0)x^0=1$. When $n\ge1$, $L_n^\alpha(0,0)x^n$ is the sum of the $W_2$-weights of all ordinary m\'enage permutations in $S_n$, which equals $\sum\limits_{j=1}^nD_n^j\alpha^jx^n$. Thus, $\sum\limits_{n=0}^\infty L_n^\alpha(0,0)x^n$ equals the left side of \eqref{eq:ordinary_by_cycles}. When we set $t=u=-1$, the left side of \eqref{eq:equation} equals the left side of \eqref{eq:ordinary_by_cycles}, and the right side of \eqref{eq:equation} equals the right side of \eqref{eq:ordinary_by_cycles}. We have proved \eqref{eq:ordinary_by_cycles}. 
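As an independent sanity check on the counts underlying \eqref{eq:straight_by_cycles} and \eqref{eq:ordinary_by_cycles}, the classical inclusion--exclusion formulas \eqref{eq:known_formulas_for_V_n} and \eqref{eq:known_formulas_for_U_n} for the straight and ordinary m\'enage numbers can be compared with brute-force enumeration for small $n$. The sketch below (Python, not part of the original argument) uses that straight m\'enage permutations avoid $\pi(i)=i$ and $\pi(i)=i+1$, while ordinary ones additionally avoid the generalized succession $\pi(n)=1$:

```python
from itertools import permutations
from math import comb, factorial

def V(n):  # straight menage numbers: no fixed points, no successions
    return sum((-1)**k * comb(2*n - k, k) * factorial(n - k) for k in range(n + 1))

def U(n):  # ordinary menage numbers: no fixed points, no generalized successions
    if n == 0:
        return 1
    if n == 1:
        return 0
    return V(n) + sum((-1)**k * comb(2*n - k - 1, k - 1) * factorial(n - k)
                      for k in range(1, n + 1))

def brute_V(n):  # 0-indexed: succession means p[i] == i + 1 (no wraparound)
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i and p[i] != i + 1 for i in range(n)))

def brute_U(n):  # generalized succession wraps around: p[i] == (i + 1) mod n
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i and p[i] != (i + 1) % n for i in range(n)))

for n in range(2, 8):
    assert V(n) == brute_V(n)
    assert U(n) == brute_U(n)
```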
\end{proof} \section{An analytical proof of \eqref{eq:result_of_straight_menage_numbers} and \eqref{eq:result_of_ordinary_menage_numbers}}\label{appendix} Now, we derive \eqref{eq:result_of_straight_menage_numbers} and \eqref{eq:result_of_ordinary_menage_numbers} from \eqref{eq:known_formulas_for_U_n} and \eqref{eq:known_formulas_for_V_n}. By \eqref{eq:known_formulas_for_V_n}, \begin{align}\label{qqq} \sum\limits_{n=0}^\infty V_nx^n&=\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n(-1)^k{2n-k\choose k}(n-k)!\,x^n=\sum\limits_{k=0}^\infty\sum\limits_{n=k}^\infty(-1)^k{2n-k\choose k}(n-k)!\,x^n\nonumber\\&=\sum\limits_{k=0}^\infty\sum\limits_{n=0}^\infty(-1)^k{2n+k\choose k}n!\,x^{n+k}=\sum\limits_{n=0}^\infty n!\,x^n\bigg[\sum\limits_{k=0}^\infty(-1)^k{2n+k\choose k}x^k\bigg]\nonumber\\&=\sum\limits_{n=0}^\infty n!\,\dfrac{x^n}{(1+x)^{2n+1}}. \end{align} Letting $x=zc^2(z)$, from Lemma \ref{thm:property_of_Catalan_numbers} we have $1+x=c(z)$, $\dfrac{x}{(1+x)^2}=z$ and \begin{align*} \sum\limits_{n=0}^\infty V_nz^nc^{2n}(z)=\sum\limits_{n=0}^\infty V_nx^n=\sum\limits_{n=0}^\infty n!\,\dfrac{x^n}{(1+x)^{2n+1}}=\sum\limits_{n=0}^\infty n!\,\dfrac{z^n}{c(z)}, \end{align*} which implies \eqref{eq:result_of_straight_menage_numbers}. From \eqref{eq:known_formulas_for_U_n}, \begin{align*} U_n=\begin{cases}1&\text{if }n=0;\\0&\text{if }n=1;\\\sum\limits_{k=0}^n(-1)^k{2n-k\choose k}(n-k)!+\sum\limits_{k=1}^n(-1)^k{2n-k-1\choose k-1}(n-k)!&\text{if }n>1.\end{cases} \end{align*} Thus, \begin{align*} \sum\limits_{n=0}^\infty U_nx^n &=x+\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n(-1)^k{2n-k\choose k}(n-k)!\,x^n+\sum\limits_{n=1}^\infty\sum\limits_{k=1}^n(-1)^k{2n-k-1\choose k-1}(n-k)!\,x^n\\ &=x+\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n(-1)^k{2n-k\choose k}(n-k)!\,x^n+\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n(-1)^{k+1}{2n-k\choose k}(n-k)!\,x^{n+1}\\ &=x+(1-x)\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n(-1)^k{2n-k\choose k}(n-k)!\,x^n.
\end{align*} Then, by \eqref{qqq}, \begin{align*} \sum\limits_{n=0}^\infty U_nx^n=x+(1-x)\sum\limits_{n=0}^\infty n!\,\dfrac{x^n}{(1+x)^{2n+1}}. \end{align*} Noticing that $U_0=1$, we have $$1-x+\sum\limits_{n=1}^\infty U_nx^n=(1-x)\sum\limits_{n=0}^\infty n!\,\dfrac{x^n}{(1+x)^{2n+1}}$$ and \begin{align}\label{eq:for_analytical_proof_of_ordinary} 1+x+\dfrac{1+x}{1-x}\sum\limits_{n=1}^\infty U_nx^n=\sum\limits_{n=0}^\infty n!\,\dfrac{x^n}{(1+x)^{2n}}. \end{align} Letting $x=zc^2(z)$, from Lemma \ref{thm:property_of_Catalan_numbers} and \eqref{eq:for_analytical_proof_of_ordinary}, we have $1+x=c(z)$, $\dfrac{x}{(1+x)^2}=z$ and \begin{align*} \sum\limits_{n=0}^\infty n!\,z^n=c(z)+\dfrac{c(z)}{1-zc^2(z)}\sum\limits_{n=1}^\infty U_nz^n(c(z))^{2n}=c(z)+\dfrac{(c(z))^3}{1-zc^2(z)}\sum\limits_{n=1}^\infty U_nz^n(c(z))^{2n-2}. \end{align*} Then, \eqref{eq:result_of_ordinary_menage_numbers} follows from the above equation and Lemma \ref{thm:property_of_Catalan_numbers}. \Addresses \end{document}
\begin{document} \begin{frontmatter} \title{Second-order accurate genuine BGK schemes for the ultra-relativistic flow simulations} \author{Yaping Chen} \ead{[email protected]} \author{Yangyu Kuang} \ead{[email protected]} \address{HEDPS, CAPT \& LMAM, School of Mathematical Sciences, Peking University, Beijing 100871, P.R. China} \author[label2]{Huazhong Tang} \thanks[label2]{Corresponding author. Tel:~+86-10-62757018; Fax:~+86-10-62751801.} \ead{[email protected]} \address{HEDPS, CAPT \& LMAM, School of Mathematical Sciences, Peking University, Beijing 100871, P.R. China; School of Mathematics and Computational Science, Xiangtan University, Hunan Province, Xiangtan 411105, P.R. China} \date{\today{}} \maketitle \begin{abstract} This paper presents second-order accurate genuine BGK (Bhatnagar-Gross-Krook) schemes in the framework of the finite volume method for the ultra-relativistic flows. Different from the existing kinetic flux-vector splitting (KFVS) or BGK-type schemes for the ultra-relativistic Euler equations, the present genuine BGK schemes are derived from the analytical solution of the Anderson-Witting model, which is given for the first time and includes the ``genuine'' particle collisions in the gas transport process. The BGK schemes for the ultra-relativistic viscous flows are also developed, and two examples of ultra-relativistic viscous flow are designed. Several 1D and 2D numerical experiments are conducted to demonstrate that the proposed BGK schemes are not only accurate and stable in simulating ultra-relativistic inviscid and viscous flows, but also have higher resolution at the contact discontinuity than the KFVS or BGK-type schemes.
\end{abstract} \begin{keyword} BGK scheme, Anderson-Witting model, ultra-relativistic Euler equations, ultra-relativistic Navier-Stokes equations \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:intro} Relativistic hydrodynamics (RHD) arises in astrophysics, nuclear physics, plasma physics and other fields. In many radiation hydrodynamics problems of astrophysical interest, the fluid moves at extremely high velocities near the speed of light, and relativistic effects become important. Examples of such flows are supernova explosions, the cosmic expansion, and solar flares. The relativistic hydrodynamical equations are highly nonlinear, making the analytic treatment of practical problems extremely difficult. Numerical simulation is the primary and most powerful way to study and understand relativistic hydrodynamics. This work will mainly focus on the numerical methods for the special RHD, where no strong gravitational field is involved. The pioneering numerical work may date back to the finite difference code via artificial viscosity for the spherically symmetric general RHD equations in the Lagrangian coordinate \cite{May1966,May1967} and the finite difference method with the artificial viscosity technique for the multi-dimensional RHD equations in the Eulerian coordinate \cite{R.Wilson1972}. Since the 1990s, the numerical study of RHD began to attract considerable attention, and various modern shock-capturing methods with an exact or approximate Riemann solver have been developed for the RHD equations.
Some examples are the local characteristic approach \cite{Mart1991Numerical}, the two-shock approximation solvers \cite{balsara1994riemann,Dai1997}, the Roe solver \cite{F.Eulderink1995}, the flux corrected transport method \cite{duncan1994}, the flux-splitting method based on the spectral decomposition \cite{Donat1998}, the piecewise parabolic method \cite{Marti1996,Mignone2005}, the HLL (Harten-Lax-van Leer) method \cite{schneider1993new}, the HLLC (Harten-Lax-van Leer-Contact) method \cite{mignone2005hllc} and the Steger-Warming flux vector splitting method \cite{Zhao2014Steger}. The analytical solution of the Riemann problem in relativistic hydrodynamics was studied in \cite{mart1994}. Some other higher-order accurate methods have also been well studied in the literature, e.g. the essentially non-oscillatory (ENO) and weighted ENO (WENO) methods \cite{dolezal1995relativistic,del2002efficient,Tchekhovskoy2007}, the discontinuous Galerkin (DG) method \cite{RezzollaDG2011}, the adaptive moving mesh methods \cite{he2012adaptive1,he2012adaptive2}, the Runge-Kutta DG methods with WENO limiter \cite{zhao2013runge,ZhaoTang-CiCP2017,ZhaoTang-JCP2017}, the direct Eulerian GRP schemes \cite{yang2011direct,yang2012direct,wu2014third}, and the local evolution Galerkin method \cite{wu2014finite}. Recently, some physical-constraints-preserving (PCP) schemes were developed for the special RHD equations. They are the high-order accurate PCP finite difference WENO schemes and DG methods proposed in \cite{wu2015high,wu2016physical,qin2016bound}. The readers are also referred to the early review articles \cite{marti2003review,Font2008} as well as references therein. The gas-kinetic schemes present a gas evolution process from a kinetic scale to a hydrodynamic scale, where both inviscid and viscous fluxes are recovered from moments of a single time-dependent gas distribution function \cite{PanXu2015}.
The development of gas-kinetic schemes, such as the kinetic flux vector splitting (KFVS) and Bhatnagar-Gross-Krook (BGK) schemes, has attracted much attention, and significant progress has been made in the non-relativistic hydrodynamics. They utilize the well-known connection that the macroscopic governing equations are the moments of the Boltzmann equation whenever the distribution function is at equilibrium. The KFVS schemes are constructed by applying the upwind technique directly to the collisionless Boltzmann equation, see e.g. \cite{Pullin1980,Mandal1994,Chou1997,HongLui2001,Reitz1981,Perthame1992,TangMagnet2000,TangRadia2000,Tang1999High}. Due to the lack of collisions in the numerical flux calculations, the KFVS schemes smear the solutions, especially the contact discontinuity. To overcome this problem, the BGK schemes are constructed by taking into account the particle collisions in the whole gas evolution process within a time step, see e.g. \cite{LiXu2010,Xu2001,LiuTang2014}. Moreover, due to their specific derivation, they are also able to present the accurate Navier-Stokes solution in the smooth flow regime and have favorable shock capturing capability in the shock region. The kinetic beam scheme was first proposed for the relativistic gas dynamics in \cite{yang1997kinetic}. After that, the kinetic schemes for the ultra-relativistic Euler equations were developed in \cite{Kunik2003ultra,kunik2003second,Kunik2004}. The BGK-type schemes \cite{Xu1999,TangMagnet2000} were extended to the ultra-relativistic Euler equations in \cite{kunik2004bgktype,QamarRMHD2005} in order to reduce the numerical dissipation. Those kinetic schemes resulted directly from the moments of the relativistic J\"{u}ttner equilibrium distribution without including the ``genuine'' particle collisions in the gas transport process. This paper will develop second-order genuine BGK schemes for the ultra-relativistic inviscid and viscous flow simulations. It is organized as follows.
Section \ref{sec:basic} introduces the special relativistic Boltzmann equation and discusses how to recover some macroscopic quantities from the kinetic theory. Section \ref{sec:GovernEqns} presents the ultra-relativistic hydrodynamical equations through the Chapman-Enskog expansion. Section \ref{sec:scheme} develops second-order accurate genuine BGK schemes for the 1D and 2D ultra-relativistic Euler equations and 2D ultra-relativistic Navier-Stokes equations. Section \ref{sec:test} gives several numerical experiments to demonstrate the accuracy, robustness and effectiveness of the proposed schemes in simulating inviscid and viscous ultra-relativistic fluid flows. Section \ref{sec:conclusion} concludes the paper. \section{Preliminaries and notations} \label{sec:basic} In the special relativistic kinetic theory of gases \cite{rbebook}, a microscopic gas particle is characterized by the four-dimensional space-time coordinates $(x^{\alpha}) = (x^0,\vec{x})$ and the four-momentum vector $(p^{\alpha}) = (p^0,\vec{p})$, where $x^0 = ct$, $c$ denotes the speed of light in vacuum, $t$ and $\vec{x}$ are the time and 3D spatial coordinates, respectively, and the Greek index $\alpha$ runs over $0, 1, 2, 3$. Besides the contravariant notation (e.g. $p^{\alpha}$), the covariant notation such as $p_{\alpha}$ will also be used in the following; the two are related by \[p_\alpha = g_{\alpha\beta}p^{\beta},\quad p^{\alpha} = g^{\alpha\beta}p_{\beta},\] where the Einstein summation convention over repeated indices has been used, $(g^{\alpha\beta})$ is the Minkowski space-time metric tensor, chosen as $(g^{\alpha\beta}) = \text{diag}\{1, -1, -1, -1\}$, and $(g_{\alpha\beta})$ denotes the inverse of $(g^{\alpha\beta})$.
For a free relativistic particle, the relativistic energy-momentum relation (aka ``on-shell'' or ``mass-shell'' condition) $E^2-|\vec p|^2 c^2=m^2 c^4$ holds, where $m$ denotes the mass of each structure-less particle, which is assumed to be the same for all particles. The ``mass-shell'' condition can be rewritten as $p^{\alpha}p_{\alpha}=m^2c^2$ by setting $p^0= c^{-1}E=\sqrt{|\vec{p}|^2+m^2c^2}$, which becomes $p^0=|\vec{p}|$ in the ultra-relativistic limit, i.e. $m\to0$. Similar to the non-relativistic case, the relativistic Boltzmann equation describes the evolution of the one-particle distribution function $f(\vec{x},t,\vec{p})$ in the phase space spanned by the space-time coordinates $x^\alpha$ and momentum $p^\alpha$ of particles. It reads \begin{equation} p^{\alpha}\frac{\partial f}{\partial x^{\alpha}} = Q(f,f), \end{equation} where $Q(f,f)$ denotes the collision term and depends on the product of distribution functions of two particles at collision. In the literature, there exist several simple collision models. The Anderson-Witting model \cite{AW} \begin{equation}\label{AWM} p^{\alpha}\frac{\partial f}{\partial x^{\alpha}} = -\frac{U_\alpha p^{\alpha}}{\tau c^2}(f-g), \end{equation} is similar to the BGK model in the non-relativistic kinetic theory and will be considered in this paper, where $\tau$ is the relaxation time. In the Landau-Lifshitz frame, the hydrodynamic four-velocity $U_{\alpha}$ is defined by \begin{equation}\label{EQ:Landau-Lifshitz frame} U_{\beta}T^{\alpha\beta} = \varepsilon g^{\alpha\beta}U_{\alpha}, \end{equation} which implies that $(\varepsilon, U_{\alpha})$ is a generalized characteristic pair of $(T^{\alpha\beta},g^{\alpha\beta})$. Here $\varepsilon$ and $T^{\alpha\beta}$ are the energy density and the energy-momentum tensor, respectively, and $g=g(\vec{x},t,\vec{p})$ denotes the distribution function at the local thermodynamic equilibrium, the so-called J\"{u}ttner equilibrium (or relativistic Maxwellian) distribution.
In the ultra-relativistic case, it becomes \cite{Kunik2003ultra} \begin{equation}\label{juttner} g={\frac{ n c^3}{8\pi k^3T^3}}\exp\left(-\frac{U_{\alpha}p^{\alpha}}{{kT}}\right)={\frac{ n c^3}{8\pi k^3T^3}}\exp\left(-\frac{|\vec{p}|}{{kT}}\left(U_0-\sum_{i=1}^3U_i\frac{p^i}{|\vec{p}|}\right)\right), \end{equation} where $n$ and $T$ denote the number density and thermodynamic temperature, respectively, and $k$ is the Boltzmann constant. The Anderson-Witting model \eqref{AWM} tends to the BGK model in the non-relativistic limit, and the collision term $-\frac{U_\alpha p^{\alpha}}{\tau c^2}(f-g)$ satisfies the following identities \begin{equation}\label{convc} \int_{\mathbb{R}^3}\frac{U_\alpha p^{\alpha}}{\tau c^2}(f-g)\vec{\Psi} \frac{d^3\vec{p}}{p^0} = 0, \ \ \vec{\Psi} = (1, p^{i}, p^0)^T, \end{equation} which imply the conservation of particle number, momentum and energy \begin{equation}\label{conv} \partial_{\alpha}N^{\alpha} = 0,\quad \partial_{\beta}T^{\alpha\beta} = 0, \end{equation} where the particle four-flow $N^{\alpha}$ and the energy-momentum tensor $ T^{\alpha\beta}$ are related to the distribution $f$ by \begin{align} N^{\alpha} = c\int_{\mathbb{R}^3}p^{\alpha}f\frac{d^3\vec{p}}{p^0}, \ \ \label{intTT} T^{\alpha\beta}=c\int_{\mathbb{R}^3}p^{\alpha}p^{\beta}f\frac{d^3\vec{p}}{p^0}.
\end{align} In the Landau-Lifshitz decomposition, both $N^{\alpha}$ and $T^{\alpha\beta}$ are rewritten as follows \begin{align} \label{NN} N^{\alpha} &= n U^{\alpha} + n^{\alpha},\\ \label{TT} T^{\alpha\beta} &= c^{-2}\varepsilon U^{\alpha}U^{\beta} - \Delta^{\alpha\beta}(p+\varPi) + \pi^{\alpha\beta}, \end{align} where $\Delta^{\alpha\beta}$ is defined by \begin{equation} \Delta^{\alpha\beta} = g^{\alpha\beta} - \frac{1}{c^2}U^{\alpha}U^{\beta}, \end{equation} satisfying $\Delta^{\alpha\beta} U_{\beta} = 0$, the number density $n$, particle-diffusion current $n^\alpha$, energy density $\varepsilon$, and shear-stress tensor {$\pi^{\alpha\beta}$} can be calculated by \begin{align} n=& \frac{1}{c^2} U_{\alpha}N^{\alpha} = \frac{1}{c} \int_{\mathbb{R}^3}Ef\frac{d^3\vec{p}}{p^0}, \\ \label{n} n^{\alpha} =& \Delta^{\alpha}_{\beta}N^{\beta} = c\int_{\mathbb{R}^3}p^{<\alpha>}f\frac{d^3\vec{p}}{p^0}, \\ \varepsilon =& \frac{1}{c^2}U_{\alpha}U_{\beta}T^{\alpha\beta} = \frac{1}{c}\int_{\mathbb{R}^3}E^2f\frac{d^3\vec{p}}{p^0}, \\ \label{pi} \pi^{\alpha\beta} =& \Delta^{\alpha\beta}_{\mu\nu}T^{\mu\nu} = c\int_{\mathbb{R}^3}p^{<\alpha\beta>}f\frac{d^3\vec{p}}{p^0}, \end{align} and the sum of thermodynamic pressure $p$ and bulk viscous pressure $\varPi$ is \begin{equation}\label{varPi} p + \varPi = -\frac{1}{3}\Delta_{\alpha\beta}T^{\alpha\beta} = \frac{1}{3c}\int_{\mathbb{R}^3}(E^2-m^2c^4)f\frac{d^3\vec{p}}{p^0}. \end{equation} Here $E=U_{\alpha}p^{\alpha}$, $p^{<\alpha>}=\Delta^{{\alpha}}_{\gamma}p^{\gamma}$, $p^{<\alpha\beta>}=\Delta^{\alpha\beta}_{\gamma\delta}p^{\gamma}p^{{\delta}}$, and \begin{equation} \Delta^{\alpha\beta}_{\mu\nu}=\frac{1}{2}(\Delta^{\alpha}_{\mu}\Delta^{\beta}_{\nu} + \Delta^{\beta}_{\mu}\Delta^{\alpha}_{\nu}-\frac{2}{3}\Delta_{\mu\nu}\Delta^{\alpha\beta}). \end{equation} \begin{remark}\label{remark2.1} The quantities $n^{\alpha}, \varPi$, and $\pi^{\alpha\beta}$ become zero at the local thermodynamic equilibrium $f=g$. 
\end{remark} The following gives a general recovery procedure of the admissible primitive variables $n$, $\vec u$, and $T$ from the nonnegative distribution $f(\vec x, t, \vec p)$, where $\vec u$ is the macroscopic velocity in the $(x^i)$ space. Such a recovery procedure will be useful in our BGK scheme. \begin{theorem}\label{thm:NT} For any nonnegative distribution $f(\vec{x},t,\vec{p})$ which is not identically zero, the number density $n$, velocity $\vec{u}$ and temperature $T$ can be uniquely obtained as follows: \begin{enumerate} \item $T^{\alpha\beta}$ is positive definite and $(T^{\alpha\beta},g^{\alpha\beta})$ has only one positive generalized eigenvalue, i.e. the energy density $\varepsilon$, and $U_{\alpha}$ is the corresponding generalized eigenvector satisfying $U_0=\sqrt{U^2_1+U^2_2+U^2_3+c^2}$. Thus, the macroscopic velocity $\vec{u}$ can be calculated by $\vec{u}=-c(U^{-1}_0U_1,U^{-1}_0U_2,U^{-1}_0U_3)^{T}$, satisfying $|\vec{u}|<c$ and \begin{equation} (U_{\alpha}) = (\gamma c,-\gamma\vec{u}), \ \ (U^{\alpha}) = (\gamma c,\gamma\vec{u}), \end{equation} where $\gamma=(1-c^{-2}|\vec{u}|^2)^{-\frac{1}{2}}$ denotes the Lorentz factor. \item The number density $ n $ is calculated by \begin{equation} n = c^{-2} U_{\alpha}N^{\alpha}>0. \end{equation} \item The temperature $T$ solves the nonlinear algebraic equation \begin{equation} \varepsilon = n m c^2(G(\zeta)-\zeta^{-1}), \end{equation} where $\zeta = \frac{ m c^2}{kT}$, $G(\zeta)=\frac{K_3(\zeta)}{K_2(\zeta)}$, and $K_{\nu}(\zeta)$ is the modified Bessel function of the second kind, defined by \[ K_{\nu}(\zeta):=\int_{0}^{\infty}\cosh(\nu\vartheta)\exp(-\zeta\cosh\vartheta)d\vartheta, \ \ \nu\geq 0. \] In the ultra-relativistic case, $K_2(\zeta)$ and $K_3(\zeta)$ reduce to $\frac{2}{\zeta^2}$ and $\frac{8}{\zeta^3}$, respectively, so that one has $G(\zeta)=\frac{4}{\zeta}$, and then \begin{equation}\label{eT} \varepsilon = 3k n T.
\end{equation} \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item Since the nonnegative distribution $f(\vec{x},t,\vec{p})$ is not identically zero, using the relation \eqref{intTT} gives \begin{align}\nonumber \vec{X}^{T}T^{\alpha\beta}\vec{X} &= c\vec{X}^{T}\int_{\mathbb{R}^3}p^{\alpha}p^{\beta}f\frac{d^3\vec{p}}{p^0}\vec{X} =c\int_{\mathbb{R}^3}x_{\alpha}p^{\alpha}p^{\beta}x_{\beta}f\frac{d^3\vec{p}}{p^0} \\ &= c\int_{\mathbb{R}^3}(x_{\alpha}p^{\alpha})^2f\frac{d^3\vec{p}}{p^0}>0, \end{align} for any nonzero vector $\vec{X}=(x_0,x_1,x_2,x_3)^T\in\mathbb{R}^4$. Thus, the matrix $T^{\alpha\beta}$ is positive definite. Thanks to $g^{\alpha\beta} = \text{diag}\{1,-1,-1,-1\}$ and \eqref{EQ:Landau-Lifshitz frame}, the matrix-pair $(T^{\alpha\beta},g^{\alpha\beta})$ has a unique positive generalized eigenvalue $\varepsilon$, satisfying \begin{equation} 0<U_{\alpha}T^{\alpha\beta}U_{\beta} = \varepsilon U_{\alpha}g^{\alpha\beta}U_{\beta}, \end{equation} which implies $U^2_0>U^2_1+U^2_2+U^2_3$. Thus, one can obtain $U_0=\sqrt{U^2_1+U^2_2+U^2_3+c^2}$ by multiplying $(U_{\alpha})$ by the scaling constant $c(U^2_0-U^2_1-U^2_2-U^2_3)^{-1/2}$. As a result, the macroscopic velocity $\vec{u}$ can be calculated by $\vec{u}=-c(U^{-1}_0U_1,U^{-1}_0U_2,U^{-1}_0U_3)^{T}$, satisfying \begin{equation} |\vec{u}| = cU^{-1}_0\sqrt{U^2_1+U^2_2+U^2_3}<c. \end{equation} \item For ${U}_i\in\mathbb{R}$ and ${p}^i\in\mathbb{R}$, $i=1,2,3$, using the Cauchy-Schwarz inequality gives \begin{align} \vec{U}\cdot\vec{p} &\leqslant |\vec{U}||\vec{p}| < \sqrt{U^2_1+U^2_2+U^2_3+c^2}\cdot p^0 =U_0p^0, \end{align} which implies $E=U_{\alpha}p^{\alpha}>0$. Thus one has \begin{equation} n = \frac{1}{c^2} U_{\alpha}N^{\alpha} = \frac1c \int_{\mathbb{R}^3}Ef\frac{d^3\vec{p}}{p^0}>0. \end{equation} \item It is obvious that the positive temperature $T$ can be obtained from \eqref{eT}.
\end{enumerate}\qed \end{proof} \section{Ultra-relativistic hydrodynamic equations} \label{sec:GovernEqns} This section gives the ultra-relativistic hydrodynamic equations, which can be derived from the Anderson-Witting model by using the Chapman-Enskog expansion. For the sake of convenience, units in which the speed of light and the Boltzmann constant are equal to one will be used here and hereafter. \subsection{Euler equations} In the ultra-relativistic limit, the macroscopic variables $ n, \varepsilon, p$ are related to $g$ by \begin{align} n &= \int_{\mathbb{R}^3}Eg\frac{d^3\vec{p}}{|\vec{p}|},\\ \varepsilon &= \int_{\mathbb{R}^3}E^2g\frac{d^3\vec{p}}{|\vec{p}|}=3 n T,\\ p &=\frac13\int_{\mathbb{R}^3}E^2g\frac{d^3\vec{p}}{|\vec{p}|}= \frac{1}{3}\varepsilon. \end{align} Taking the zeroth-order Chapman-Enskog expansion $f=g$ and using the conclusion in Remark \ref{remark2.1}, one derives the ultra-relativistic Euler equations as follows \begin{equation} \frac{\partial \vec{W}}{\partial t} + \sum^{3}_{k=1}\frac{\partial\vec F^{k}(\vec{W})}{\partial x^k} = 0, \end{equation} where \begin{equation} \vec{W} = \left(N^0, T^{0i}, T^{00} \right)^T = \left( n U^0, n h U^0U^i, n h U^0U^0 - p \right)^T, \end{equation} and \begin{equation} \vec{F}^k(\vec{W}) = \left( N^k, T^{ki}, T^{k0} \right)^T = \left( n U^k, n h U^kU^i + p\delta^{ik}, n h U^kU^0 \right)^T. \end{equation} Here $i=1,2,3$ and $h=4T$ denotes the specific enthalpy. For the given conservative vector $\vec{W}$, one can get the primitive variables $ n , U^k$ and $p$ by \cite{kunik2004bgktype} \begin{align}\label{eq:con2pri} \begin{aligned} p &= \frac{1}{3}\left(-T^{00}+\sqrt{4(T^{00})^2-3\sum^3_{i=1}(T^{0i})^2}\right),\\ U^i &= \frac{T^{0i}}{\sqrt{4p(p+T^{00})}},\ \ n = \frac{N^0}{\sqrt{1+\sum_{i=1}^3(U^i)^2}}, \quad i=1,2,3.
\end{aligned}\end{align} \subsection{Navier-Stokes equations} Taking the first-order Chapman-Enskog expansion \begin{equation}\label{ce} f=g(1-\frac{\tau}{U_{\alpha}p^{\alpha}}\varphi), \end{equation} with \begin{equation}\label{dev} \varphi = -\frac{p_{\alpha}p_{\beta}}{T}\nabla^{<\alpha}U^{\beta>} + \frac{p_{\alpha}}{T^2}(U_{\beta}p^{\beta}-h)(\nabla^{\alpha}T-\frac{T}{ n h}\nabla^{\alpha}p), \end{equation} where $\nabla^{\alpha} = \Delta^{\alpha\beta}\partial_{\beta}$ and $\nabla^{<\alpha}U^{\beta>} = \Delta^{\alpha\beta}_{\gamma\delta}\nabla^{\gamma}U^{\delta}$, one finds that \eqref{n}, \eqref{pi} and \eqref{varPi} give \begin{align} n^{\alpha} = -\frac{\lambda}{h}(\nabla^{\alpha}T-\frac{T}{ n h}\nabla^{\alpha}p), \ \ \pi^{\alpha\beta} = 2\mu\nabla^{<\alpha}U^{\beta>},\ \ \varPi = 0 , \end{align} where $\lambda=\frac{4}{3T}p\tau$ and $\mu=\frac{4}{5}p\tau$. Based on these, the ultra-relativistic Navier-Stokes equations (see \cite{rbebook}) can be obtained as follows \begin{equation}\label{eq-NS01} \frac{\partial \vec{W}}{\partial t} + \sum^{3}_{k=1}\frac{\partial\vec F^{k}(\vec{W})}{\partial x^k} = 0, \end{equation} where \begin{equation} \vec{W} = \begin{pmatrix} N^0\\ T^{0i}\\ T^{00} \end{pmatrix} = \begin{pmatrix} n U^0 - \frac{\lambda}{h}(\nabla^0 T-\frac{T}{ n h}\nabla^0p)\\ n h U^0U^i + 2\mu\nabla^{<0}U^{i>}\\ n h U^0U^0 - p + 2\mu\nabla^{<0}U^{0>} \end{pmatrix}, \end{equation} and \begin{equation}\label{eq-NS03} \vec{F}^k(\vec{W}) = \begin{pmatrix} N^k\\ T^{ki}\\ T^{k0} \end{pmatrix} = \begin{pmatrix} n U^k - \frac{\lambda}{h}(\nabla^k T-\frac{T}{ n h}\nabla^kp)\\ n h U^kU^i + p\delta^{ik} + 2\mu\nabla^{<k}U^{i>}\\ n h U^kU^0 + 2\mu\nabla^{<k}U^{0>} \end{pmatrix}. \end{equation} This shows that one cannot recover the primitive variables $n$, $\vec{u}$ and $T$ from the given conservative vector $\vec{W}$ alone.
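For the inviscid system, by contrast, the inversion \eqref{eq:con2pri} recovers the primitive variables in closed form. The following roundtrip sketch (illustrative Python with $c=k=1$ and a single nonzero spatial velocity component; the state values are arbitrary) builds $\vec W$ from chosen primitives using $p=nT$ and $h=4T$ and recovers them:

```python
import math

def prim_to_cons(n, T, Ux):
    # 1D ultra-relativistic Euler: h = 4T, p = nT, U^0 = sqrt(1 + (U^1)^2)
    p = n * T
    h = 4.0 * T
    U0 = math.sqrt(1.0 + Ux * Ux)
    return n * U0, n * h * U0 * Ux, n * h * U0 * U0 - p  # (N^0, T^{01}, T^{00})

def cons_to_prim(N0, T01, T00):
    # the explicit inversion (one spatial momentum component)
    p = (-T00 + math.sqrt(4.0 * T00 * T00 - 3.0 * T01 * T01)) / 3.0
    Ux = T01 / math.sqrt(4.0 * p * (p + T00))
    n = N0 / math.sqrt(1.0 + Ux * Ux)
    return n, p / n, Ux  # (n, T, U^1), using T = p/n

n, T, Ux = 2.0, 0.5, 1.5  # sample admissible state (hypothetical values)
rec = cons_to_prim(*prim_to_cons(n, T, Ux))
assert all(abs(a - b) < 1e-12 for a, b in zip((n, T, Ux), rec))
```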
In practice, the values of $n$, $\vec{u}$ and $T$ have to be recovered from the given $\vec{W}$ and $\vec{F}^k(\vec{W})$ or $N^\alpha$ and $T^{\alpha\beta}$ by using Theorem \ref{thm:NT}. \section{Numerical schemes}\label{sec:scheme} This section develops second-order accurate genuine BGK schemes for the 1D and 2D ultra-relativistic Euler and Navier-Stokes equations. The BGK schemes are derived from the analytical solution of the Anderson-Witting model \eqref{AWM}, which is given for the first time and includes the ``genuine'' particle collisions in the gas transport process. \subsection{1D Euler equations}\label{sec:scheme-euler1d} Consider the 1D ultra-relativistic Euler equations with $\vec{u}=(u,0,0)^{T}$ as \begin{equation}\label{1DEuler} \frac{\partial \vec{W}}{\partial t} + \frac{\partial \vec{F}(\vec{W})}{\partial x} = 0, \end{equation} where \begin{equation} \vec{W} = \left( n U^0, n h U^0U^1, n h U^0U^0 - p \right)^T,\ \vec{F}(\vec{W}) = \left( n U^1, n h U^1U^1 + p, n h U^0U^1\right)^T. \end{equation} It is strictly hyperbolic because the Jacobian matrix $A(\vec{W})=\partial\vec{F}/\partial \vec{W}$ has three real and distinct eigenvalues \cite{wu2015high} \begin{equation} \lambda_1=\frac{u(1-c_s^2)-c_s(1-u^2)}{1-u^2c_s^2},\quad \lambda_2=u,\quad\lambda_3=\frac{u(1-c_s^2)+c_s(1-u^2)}{1-u^2c_s^2}, \end{equation} where $c_s=1/\sqrt{3}$ is the speed of sound. Divide the spatial domain into a uniform mesh with the step size $\Delta x$ and the $j$th cell $I_j = (x_{j-\frac{1}{2}}, x_{j+\frac{1}{2}})$, where $x_{j+\frac{1}{2}} = \frac{1}{2}(x_{j}+x_{j+1})$ and $x_j = j\Delta x$, $j\in\mathbb{Z}$.
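One can check that the two acoustic eigenvalues factor as relativistic velocity additions, $\lambda_{1,3}=(u\mp c_s)/(1\mp uc_s)$, so all three eigenvalues are subluminal for $|u|<1$; this bounds the spectral radius of $A(\vec{W})$ by one. A numerical sketch of this observation (not from the paper):

```python
import math

cs = 1.0 / math.sqrt(3.0)  # ultra-relativistic sound speed

def lam(u):
    # eigenvalues of the 1D ultra-relativistic Euler Jacobian (c = 1)
    l1 = (u * (1 - cs**2) - cs * (1 - u**2)) / (1 - u**2 * cs**2)
    l3 = (u * (1 - cs**2) + cs * (1 - u**2)) / (1 - u**2 * cs**2)
    return l1, u, l3

for u in (-0.99, -0.5, 0.0, 0.3, 0.99):
    l1, l2, l3 = lam(u)
    # factorized form: relativistic velocity addition of u with -/+ cs
    assert abs(l1 - (u - cs) / (1 - u * cs)) < 1e-12
    assert abs(l3 - (u + cs) / (1 + u * cs)) < 1e-12
    assert max(abs(l1), abs(l2), abs(l3)) < 1.0  # subluminal
```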
The time interval $[0,T]$ is also divided into a (non-uniform) mesh $\{t_{n+1}=t_n+\Delta t_n, t_0=0, n\geqslant0\}$, where the step size $\Delta t_n$ is determined by \begin{equation}\label{TimeStep1D} \Delta t_n = \frac{C\Delta x}{\max\limits_j \bar{\varrho}_j}, \end{equation} the constant $C$ denotes the CFL number, and $\bar{\varrho}_j$ denotes a suitable approximation of the spectral radius of $A(\vec{W})$ within the cell $I_j$. For the given approximate cell-average values $\{\vec{\bar{W}}^n_j\}$, i.e. \begin{equation*} \vec{\bar{W}}^{n}_j \approx \frac{1}{\Delta x}\int_{I_j}\vec{W}(x,t_n)dx, \end{equation*} reconstruct a piecewise linear function as follows \begin{equation}\label{EQ:initial-reconstruction} \vec{W}_h(x,t_n)=\sum_j\vec{W}^n_j(x)\chi_j(x), \ \ \vec{W}^n_j(x):= {\vec{\bar{W}}}^n_j + \vec{W}^{n,x}_j(x-x_j), \end{equation} where $\vec{W}^{n,x}_j$ is the approximate slope in the cell $I_j$ obtained by using some slope limiter and $\chi_j(x)$ denotes the characteristic function of $I_j$. In the 1D case, the Anderson-Witting model \eqref{AWM} reduces to \begin{equation}\label{1DAW} p^0\frac{\partial f}{\partial t} + p^1\frac{\partial f}{\partial x} = \frac{U_{\alpha}p^{\alpha}}{\tau}(g-f), \end{equation} whose analytical solution is given by \begin{align} \nonumber f(x,t,\vec{p})&=\int_{0}^{t}g(x',t',\vec{p})\exp\left(-\int_{t'}^{t} \frac{U_{\alpha}(x'',t'')p^{\alpha}}{p^{0}\tau}dt''\right) \frac{U_{\alpha}(x',t')p^{\alpha}}{p^{0}\tau}dt'\\ \label{eq:1DAWsolu}&+\exp\left(-\int_{0}^{t} \frac{U_{\alpha}(x',t')p^{\alpha}}{\tau p^{0}}dt'\right)f_{0}(x-v_1t,\vec{p}), \end{align} where $v_1=p^1/p^0$ is the velocity of the particle in the $x$ direction, $x'=x-v_1(t-t')$ and $x''=x-v_1(t-t'')$ are the particle trajectories, and $f_{0}$ is the initial particle velocity distribution function, i.e. $f(x,0,\vec{p})=f_{0}(x,\vec{p})$.
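The structure of \eqref{eq:1DAWsolu} is a Duhamel formula: with frozen coefficients (constant $\nu=U_{\alpha}p^{\alpha}/(p^0\tau)$ and constant $g$ along a characteristic), it collapses to $f(t)=g+(f_0-g)e^{-\nu t}$, i.e. exponential relaxation toward equilibrium. A sketch checking this reduction against direct integration of $df/dt=\nu(g-f)$ (illustrative values only, not the paper's code):

```python
import math

# Along a characteristic with frozen nu = U_alpha p^alpha / (p^0 tau) and
# frozen equilibrium g, the Duhamel formula reduces to
#   f(t) = g + (f0 - g) * exp(-nu * t).
nu, g, f0, t_end = 2.0, 1.5, 0.2, 1.0

exact = g + (f0 - g) * math.exp(-nu * t_end)

# forward-Euler integration of df/dt = nu * (g - f) with a small step
f, dt = f0, 1.0e-5
for _ in range(int(t_end / dt)):
    f += dt * nu * (g - f)

assert abs(f - exact) < 1e-4  # agrees up to the integrator's error
```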
Taking the moments of \eqref{1DAW} and integrating them over the space-time cell $I_j\times[t_n,t_{n+1})$ yields \begin{equation} \int_{t_n}^{t_{n+1}}\int_{I_j}\int_{\mathbb{R}^3}\vec{\Psi}(p^0\frac{\partial f}{\partial t} + p^1\frac{\partial f}{\partial x} - \frac{U_{\alpha}p^{\alpha}}{\tau}(g-f))d\varXi dxdt=0, \end{equation} where $d\varXi = \frac{d^3\vec{p}}{|\vec{p}|}$. Using the conservation constraints \eqref{convc} gives \begin{align}\nonumber \int_{I_j}&\int_{\mathbb{R}^3}\vec{\Psi} p^0f(x,t_{n+1},\vec{p})d\varXi dx =\int_{I_j}\int_{\mathbb{R}^3}\vec{\Psi} p^0f(x,t_{n},\vec{p})d\varXi dx \\ & - \int_{t_n}^{t_{n+1}}\int_{\mathbb{R}^3}\vec{\Psi} p^1\left(f(x_{j+\frac{1}{2}},t,\vec{p}) - f(x_{j-\frac{1}{2}},t,\vec{p})\right)~d\varXi dt,\label{BGK1D-0001} \end{align} which is the starting point of our 1D second-order accurate BGK scheme. If replacing the distribution $f(x_{j\pm \frac{1}{2}},t,\vec{p})$ in \eqref{BGK1D-0001} with an approximate distribution $\hat{f}(x_{j\pm \frac{1}{2}},t,\vec{p})$, then one gets the following finite volume scheme \begin{equation} \vec{\bar{W}}^{n+1}_j = \vec{\bar{W}}^{n}_j - \frac{\Delta t_n}{\Delta x}(\hat{\vec{F}}^n_{j+\frac{1}{2}} - \hat{\vec{F}}^n_{j-\frac{1}{2}}), \end{equation} where the numerical flux $\hat{\vec{F}}^n_{j+\frac{1}{2}}$ is given by \begin{equation}\label{eq:1DF} \hat{\vec{F}}^n_{j+\frac{1}{2}}=\frac{1}{\Delta t_n}\int_{t_n}^{t_{n+1}}\int_{\mathbb{R}^3}\vec{\Psi} p^1\hat{f}(x_{j+\frac{1}{2}},t,\vec{p})d\varXi dt, \end{equation} with \begin{align}\label{eq:faprox} \nonumber \hat{f}(x_{j+\frac{1}{2}},t,\vec{p})&=\int_{t_n}^{t}g_h(x',t',\vec{p})\exp\left(-\int_{t'}^{t} \frac{U_{\alpha}(x'',t'')p^{\alpha}}{p^{0}\tau}dt''\right) \frac{U_{\alpha}(x',t')p^{\alpha}}{p^{0}\tau}dt'\\ &+\exp\left(-\int_{t_n}^{t} \frac{U_{\alpha}(x',t')p^{\alpha}}{p^{0}\tau}dt'\right)f_{h,0}(x_{j+\frac{1}{2}}-v_1(t-t_n),\vec{p}), \end{align} where $v_1=p^1/p^0$, $x'=x_{j+\frac{1}{2}}-v_1(t-t')$,
$x''=x_{j+\frac{1}{2}}-v_1(t-t'')$, $f_{h,0}(x_{j+\frac{1}{2}}-v_1(t-t_n),\vec{p})\approx f_{0}(x_{j+\frac{1}{2}}-v_1(t-t_n),\vec{p})$ and $g_h(x',t',\vec{p})\approx g(x',t',\vec{p})$. It is worth noting that it is very expensive to evaluate $U_{\alpha}(x'',t'')$ and $U_{\alpha}(x',t')$ on the right hand side of \eqref{eq:faprox} exactly. In practice, $U_{\alpha}(x'',t'')$ and $U_{\alpha}(x',t')$ in the first term may be approximated as $U_{\alpha,j+\frac{1}{2}}^n$, while $U_{\alpha}(x',t')$ in the second term may be simplified as $U_{\alpha,j+\frac{1}{2},L}^n$ or $U_{\alpha,j+\frac{1}{2},R}^n$ depending on the sign of $v_1$, as will be given in Section \ref{sec:fluxevolution}. The remaining tasks are to derive the approximate initial velocity distribution function $f_{h,0}(x_{j+\frac{1}{2}}-v_1(t-t_n),\vec{p})$ and the equilibrium velocity distribution function $g_h(x',t',\vec{p})$. \subsubsection{Equilibrium distribution $g_0$ at the point $(x_{j+\frac{1}{2}},t_n)$} \label{sec:fluxevolution} At the cell interface $x=x_{j+\frac{1}{2}}$, \eqref{EQ:initial-reconstruction} gives the following left and right limiting values \begin{equation} \begin{aligned} \vec{W}^n_{j+\frac{1}{2},L} &:= \vec{W}_h(x_{j+\frac{1}{2}}-0,t_n)=\vec{W}^n_j(x_{j+\frac{1}{2}}),\\ \vec{W}^n_{j+\frac{1}{2},R} &:= \vec{W}_h(x_{j+\frac{1}{2}}+0,t_n)=\vec{W}^n_{j+1}(x_{j+\frac{1}{2}}),\\ \vec{W}^{n,x}_{j+\frac{1}{2},L} &:= \frac{d\vec{W}_h}{dx}(x_{j+\frac{1}{2}}-0,t_n)=\frac{d\vec{W}^n_j}{dx}(x_{j+\frac{1}{2}}),\\ \vec{W}^{n,x}_{j+\frac{1}{2},R} &:= \frac{d\vec{W}_h}{dx}(x_{j+\frac{1}{2}}+0,t_n)=\frac{d\vec{W}^n_{j+1}}{dx}(x_{j+\frac{1}{2}}).
\end{aligned} \label{eq:RL} \end{equation} Using \eqref{juttner} with $\vec{W}^n_{j+\frac{1}{2},L}$ and $\vec{W}^n_{j+\frac{1}{2},R}$ gives the J\"{u}ttner distributions at the left and right of the cell interface $x=x_{j+\frac{1}{2}}$ as follows \begin{equation} \begin{aligned} g_L&=\frac{ n _{j+1/2,L}}{8\pi T_{j+1/2,L}^3}e^{-\frac{U_{\alpha,j+1/2,L}p^{\alpha}}{T_{j+1/2,L}}},\ \ g_R =\frac{ n _{j+1/2,R}}{8\pi T_{j+1/2,R}^3}e^{-\frac{U_{\alpha,j+1/2,R}p^{\alpha}}{T_{j+1/2,R}}}, \end{aligned} \label{eq:gLR} \end{equation} and the particle four-flow $N^{\alpha}$ and the energy-momentum tensor $T^{\alpha\beta}$ at the point $(x_{j+\frac{1}{2}},t_n)$ \begin{align*} (N^{0},T^{01},T^{00})_{j+\frac{1}{2}}^{n,T}&:=\int_{\mathbb{R}^{3}\cap\{p^{1} >0\}}\vec{\Psi}p^{0}g_{L}d\Xi+\int_{\mathbb{R}^{3}\cap\{p^{1}<0\}}\vec{\Psi}p^{0}g_{R}d\Xi, \\ (N^{1},T^{11},T^{01})_{j+\frac{1}{2}}^{n,T}&:=\int_{\mathbb{R}^{3}\cap\{p^{1} >0\}}\vec{\Psi}p^{1}g_{L}d\Xi+\int_{\mathbb{R}^{3}\cap\{p^{1}<0\}}\vec{\Psi}p^{1}g_{R}d\Xi. \end{align*} Using those and Theorem \ref{thm:NT} gives the macroscopic quantities $ n ^n_{j+\frac{1}{2}}, T^n_{j+\frac{1}{2}}$, and $U_{{\alpha,j+\frac{1}{2}}}^n$, and then the J\"{u}ttner distribution function at the point $(x_{j+\frac{1}{2}},t_n)$ as follows \begin{align} g_0&=\frac{ n ^n_{j+1/2}}{8\pi (T^n_{j+1/2})^3}\exp(-\frac{U_{{\alpha,j+\frac{1}{2}}}^np^{\alpha}}{T^n_{j+1/2}}), \end{align} which will be used to derive the equilibrium velocity distribution $g_h(x,t,\vec{p})$, see Section \ref{subsection4.1.3}.
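As an illustration, the ultra-relativistic J\"{u}ttner distribution in \eqref{eq:gLR} can be evaluated pointwise; the sketch below assumes $p^0=|\vec p|$ and the metric signature $(+,-,-,-)$ for the contraction $U_{\alpha}p^{\alpha}$, which are conventions not spelled out in this section.

```python
import numpy as np

def juttner(n, T, U, p):
    # Ultra-relativistic Juttner distribution g = n/(8*pi*T^3) * exp(-U_a p^a / T),
    # with p^0 = |p| and U_a p^a = U^0 p^0 - U . p under the assumed (+,-,-,-) signature.
    p = np.asarray(p, dtype=float)
    p0 = np.linalg.norm(p)
    Up = U[0] * p0 - (U[1] * p[0] + U[2] * p[1] + U[3] * p[2])
    return n / (8.0 * np.pi * T**3) * np.exp(-Up / T)
```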
\subsubsection{Initial distribution function $f_{h,0}(x,\vec{p})$}\label{subsection1d-fh0} Assuming that $f(x,t,\vec{p})$ and $g(x,t,\vec{p})$ are sufficiently smooth and borrowing the idea of the Chapman-Enskog expansion, $f(x,t,\vec{p})$ is supposed to be expanded as follows \begin{equation}\label{eq:ceEuler} f(x,t,\vec{p})=g-\frac{\tau}{U_{\alpha} p^{\alpha}}(p^0 g_t+p^1g_x)+O(\tau^2)=:g\left(1-\frac{\tau}{U_{\alpha} p^{\alpha}}(p^0 A+p^1 a)\right)+O(\tau^2), \end{equation} with \begin{equation}\label{eq:aAform} A=A_{1}+A_{2}p^{1}+A_{3}p^{0}, \quad a=a_{1}+a_{2}p^{1}+a_{3}p^{0}. \end{equation} The conservation constraints \eqref{convc} give the constraints on $A$ and $a$ \begin{equation} \label{eq:aAcons} \int_{\mathbb{R}^{3}}\vec{\Psi}(p^{0}A+p^{1}a)gd\Xi=\int_{\mathbb{R}^{3}}\vec{\Psi}(p^{0}g_{t}+p^{1}g_{x})d\Xi =\frac{1}{\tau}\int_{\mathbb{R}^{3}}\vec{\Psi}U_{\alpha}p^{\alpha}(g-f)d\Xi=0. \end{equation} Setting $t=t_n$ and using \eqref{eq:ceEuler} and the Taylor series expansion of $f(x,t,\vec{p})$ with respect to $x$ from both sides of the cell interface $x=x_{j+\frac{1}{2}}$ give the following approximate initial non-equilibrium distribution function \begin{equation} \label{eq:fh0} f_{h,0}(x,t_n,\vec{p}):=\left\{\begin{aligned} &g_{L}\left(1-\frac{\tau}{U_{\alpha,L}p^{\alpha}}(p^{0}A_{L}+p^{1}a_{L})+a_{L}\tilde{x}\right),&\tilde{x}<0,\\ &g_{R}\left(1-\frac{\tau}{U_{\alpha,R}p^{\alpha}}(p^{0}A_{R}+p^{1}a_{R})+a_{R}\tilde{x}\right),&\tilde{x}>0, \end{aligned} \right. \end{equation} where $\tilde{x}=x-x_{j+\frac{1}{2}}$, $g_{L}$ and $g_{R}$ are given in \eqref{eq:gLR}, and $(a_{L},A_{L})$ and $(a_{R},A_{R})$ are the left and right limits of $(a,A)$ at the cell interface $x=x_{j+\frac{1}{2}}$, respectively.
The slopes $a_L$ and $a_R$ come from the spatial derivative of the J\"{u}ttner distribution and have unique correspondences with the slopes of the conservative variables $\vec W$ by \[ <a_{\omega}{p^{0}}>=\vec{W}_{j+\frac{1}{2},\omega}^{n,x},\ \] where \[ <a_{\omega}>:=\int_{\mathbb{R}^{3}}a_{\omega}g_{\omega} \vec{\Psi}d\Xi,\ \ \omega=L,R. \] Those correspondences form the linear system for the unknowns $\vec{a}_\omega:=({a_{\omega,1}, a_{\omega,2}, a_{\omega,3}})^T$ \begin{equation}\label{eq:la} M_{0}^{\omega}\vec{a}_\omega=\vec{W}_{j+\frac{1}{2},\omega}^{n,x}, \end{equation} where the coefficient matrix $M_{0}^{\omega}$ is given by \[ M_{0}^{\omega}=\int_{\mathbb{R}^{3}}p^{0}g_{\omega}\vec{\Psi}\vec{\Psi}^Td\Xi, \ \omega=L, R. \] Using the conservation constraints \eqref{eq:aAcons} and $a_{\omega}$ gives the linear system for $A_\omega$ as follows \[ <a_{\omega}p^{1}+A_{\omega}p^{0}>=0, \] which can be cast into the following form \begin{equation}\label{eq:lA} M_{0}^{\omega}\vec{A}_\omega =-M_{1}^{\omega}\vec{a}_\omega, \end{equation} with \begin{equation*} M_{1}^{\omega}=\int_{\mathbb{R}^{3}}g_{\omega}p^{1}\vec{\Psi}\vec{\Psi}^Td\Xi, \ \omega={L},{R}. \end{equation*} It remains to calculate all elements of $M_0$ and $M_1$, whose superscript $L$ or $R$ has been omitted for convenience. In the ultra-relativistic limit, these integrals can be calculated exactly. Because $p^0=|\vec{p}|$, the triple integrals in $M_0$ and $M_1$ can be simplified by using the polar coordinate transformation \begin{equation}\label{eq-polar-tran} p^1 = |\vec{p}|\xi, \ p^2=|\vec{p}|\sqrt{1-\xi^2}\sin\varphi,\ p^3=|\vec{p}|\sqrt{1-\xi^2}\cos\varphi, \ \xi\in[-1,1], \varphi\in[-\pi,\pi], \end{equation} which implies $d\Xi=|\vec{p}|d|\vec{p}|d\xi d\varphi$. In fact, this transformation converts the triple integrals in the matrices $M_0$ and $M_1$ into a single integral with respect to $|\vec{p}|$ and a double integral with respect to $\xi$ and $\varphi$.
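The linear systems \eqref{eq:la} and \eqref{eq:lA} are small ($3\times 3$) and share the coefficient matrix $M_0^{\omega}$, so they can be solved directly; a minimal sketch:

```python
import numpy as np

def micro_coefficients(M0, M1, Wx):
    # Solve eq. (eq:la): M0 a = W^{n,x} for the spatial coefficients a,
    # then eq. (eq:lA): M0 A = -M1 a for the temporal coefficients A.
    a = np.linalg.solve(M0, Wx)
    A = np.linalg.solve(M0, -(M1 @ a))
    return a, A
```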
On the other hand, in the 1D case, the integrands do not depend on the variable $\varphi$, so the double integral can further be reduced to a single integral with respect to $\xi$ which can be exactly calculated. Those lead to \begin{align}\nonumber M_0&=\int_{\mathbb{R}^{3}}p^{0}g\vec{\Psi}\vec{\Psi}^Td\Xi =\begin{pmatrix} \int^1_{-1}\Phi(x,\xi)d\xi & \int^1_{-1}\xi\Psi(x,\xi)d\xi & \int^1_{-1}\Psi(x,\xi)d\xi\\ \int^1_{-1}\xi\Psi(x,\xi)d\xi & \int^1_{-1}\xi^2\Upsilon(x,\xi)d\xi & \int^1_{-1}\xi\Upsilon(x,\xi)d\xi\\ \int^1_{-1}\Psi(x,\xi)d\xi & \int^1_{-1}\xi\Upsilon(x,\xi)d\xi & \int^1_{-1}\Upsilon(x,\xi)d\xi\\ \end{pmatrix}\\ &=\begin{pmatrix} n U^0 & 4 n T U^1U^0 & n T(4U^1U^1+3)\\ 4 n T U^1U^0 & 4 n T^2 U^0(6U^1U^1+1) & 4 n T^2 U^1(6U^1U^1+5)\\ n T(4U^1U^1+3) & 4 n T^2 U^1(6U^1U^1+5) & 12 n T^2 U^0(2U^1U^1 + 1) \end{pmatrix},\label{M0} \end{align} and \begin{align}\notag M_1&=\int_{\mathbb{R}^{3}}p^{1}g\vec{\Psi}\vec{\Psi}^Td\Xi =\begin{pmatrix} \int^1_{-1}\xi\Phi(x,\xi)d\xi & \int^1_{-1}\xi^2\Psi(x,\xi)d\xi & \int^1_{-1}\xi\Psi(x,\xi)d\xi\\ \int^1_{-1}\xi^2\Psi(x,\xi)d\xi & \int^1_{-1}\xi^3\Upsilon(x,\xi)d\xi & \int^1_{-1}\xi^2\Upsilon(x,\xi)d\xi\\ \int^1_{-1}\xi\Psi(x,\xi)d\xi & \int^1_{-1}\xi^2\Upsilon(x,\xi)d\xi & \int^1_{-1}\xi\Upsilon(x,\xi)d\xi\\ \end{pmatrix}\\ &=\begin{pmatrix} n U^1 & n T (4U^1U^1+1) & 4 n T U^1U^0\\ n T (4U^1U^1+1) & 12 n T^2 U^1(2U^1U^1+1) & 4 n T^2 U^0(6U^1U^1+1)\\ 4 n T U^1U^0 & 4 n T^2 U^0(6U^1U^1+1) & 4 n T^2 U^1(6U^1U^1 + 5) \end{pmatrix},\label{M1} \end{align} where \begin{align} \Phi(x,\xi) &=\frac{1}{2}\frac{ n (x)}{(U^0(x)-\xi U^1(x))^3},\notag\\ \Psi(x,\xi) &=\frac{3}{2}\frac{( n T)(x)}{(U^0(x)-\xi U^1(x))^4},\\ \Upsilon(x,\xi) &= \frac{6( n T^2)(x)}{(U^0(x)-\xi U^1(x))^5}.\notag \end{align} \subsubsection{Equilibrium velocity distribution $g_h(x,t,\vec{p})$}\label{subsection4.1.3} Using $\vec{W}_{0}:=\vec{W}_{j+\frac{1}{2}}^{n}$ derived in Section \ref{sec:fluxevolution} and the approximate cell average values
$\vec{\bar{W}}_{j+1}$ and $\vec{\bar{W}}_{j}$ reconstructs a cell-vertex based linear polynomial around the cell interface $x=x_{j+\frac{1}{2}}$ as follows \[ \vec{W}_{0}(x)=\vec{W}_{0}+\vec{W}_{0}^{x}(x-x_{j+\frac{1}{2}}), \] where $\vec{W}_{0}^{x}=\frac{1}{\Delta x}(\vec{\bar{W}}_{j+1}-\vec{\bar{W}}_{j})$. Again the Taylor series expansion of $g$ at the cell interface $x=x_{j+\frac{1}{2}}$ gives \begin{equation} \label{eq:gh} g_{h}(x,t,\vec{p})=g_{0}(1+a_{0}(x-x_{j+\frac{1}{2}})+A_{0}(t-t_n)), \end{equation} where $(a_0,A_0)$ are the values of $(a,A)$ at the point $(x_{j+\frac{1}{2}},t_n)$. Similarly, the slope $a_0$ comes from the spatial derivative of the J\"{u}ttner distribution and has a unique correspondence with the slope of the conservative variables $\vec W$ by \[ <a_{0} p^0>=\vec{W}_{0}^{x}, \] and then the conservation constraints and $a_0$ give the following linear system $$ \quad <A_{0}p^{0}+a_{0}p^{1}>=0. $$ Those can be rewritten as \[ M_{0}^{0}\vec{a}_{0}=\vec{W}_{0}^{x},\quad M_{0}^{0}\vec{A}_{0}=- M_{1}^{0}\vec{a}_{0}, \] where $\vec{a}_{0}=(a_{0,1},a_{0,2}, a_{0,3})^T$, $\vec{A}_{0}=(A_{0,1},A_{0,2}, A_{0,3})^T$, and $M_0^0$ and $M_1^0$ can be calculated by \eqref{M0} and \eqref{M1} with $ n ^n_{j+\frac{1}{2}}, T^n_{j+\frac{1}{2}}$ and $U^{n,\alpha}_{j+\frac{1}{2}}$ in place of $ n, T$ and $U^{\alpha}$. Those systems can be solved by using the subroutine for \eqref{eq:la} and \eqref{eq:lA}. Up to now, all parameters in the initial gas distribution function $f_{h,0}$ and the equilibrium state $g_h$ have been determined.
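For completeness, the closed-form matrices \eqref{M0} and \eqref{M1} can be assembled directly from $n$, $T$ and $U^1$; the normalization $U^0=\sqrt{1+(U^1)^2}$ used below is an assumption consistent with a time-like unit four-velocity.

```python
import numpy as np

def M0_matrix(n, T, U1):
    # Closed-form M0 of eq. (M0); U^0 from the assumed normalization
    # (U^0)^2 - (U^1)^2 = 1.
    U0 = np.sqrt(1.0 + U1 * U1)
    return np.array([
        [n*U0,              4*n*T*U1*U0,              n*T*(4*U1*U1+3)],
        [4*n*T*U1*U0,       4*n*T**2*U0*(6*U1*U1+1),  4*n*T**2*U1*(6*U1*U1+5)],
        [n*T*(4*U1*U1+3),   4*n*T**2*U1*(6*U1*U1+5),  12*n*T**2*U0*(2*U1*U1+1)],
    ])

def M1_matrix(n, T, U1):
    # Closed-form M1 of eq. (M1).
    U0 = np.sqrt(1.0 + U1 * U1)
    return np.array([
        [n*U1,              n*T*(4*U1*U1+1),           4*n*T*U1*U0],
        [n*T*(4*U1*U1+1),   12*n*T**2*U1*(2*U1*U1+1),  4*n*T**2*U0*(6*U1*U1+1)],
        [4*n*T*U1*U0,       4*n*T**2*U0*(6*U1*U1+1),   4*n*T**2*U1*(6*U1*U1+5)],
    ])
```

Both matrices are symmetric by construction, which provides a quick consistency check on the entries.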
Substituting \eqref{eq:fh0} and \eqref{eq:gh} into \eqref{eq:faprox} gives our distribution function $\hat{f}$ at a cell interface $x=x_{j+\frac{1}{2}}$ as follows \begin{small} \begin{align}\notag &\hat{f}(x_{j+\frac{1}{2}},t,\vec{p}) =g_0\Big(1-\exp\big(-\frac{U_{\alpha,j+\frac{1}{2}}^np^{\alpha}}{p^0\tau}(t-t_n)\big)\Big) \notag\\ &+g_0a_0v_1\Big(\big(t-t_n+\frac{p^0\tau}{U_{\alpha,j+\frac{1}{2}}^n p^{\alpha}}\big)\exp\big(-\frac{U_{\alpha,j+\frac{1}{2}}^np^{\alpha}} {p^0\tau}(t-t_n)\big)-\frac{p^0\tau}{U_{\alpha,j+\frac{1}{2}}^np^{\alpha}}\Big)\notag\\ &+g_0A_0\Big((t-t_n)-\frac{p^0\tau}{U_{\alpha,j+\frac{1}{2}}^np^{\alpha}} \Big(1-\exp\big(-\frac{U_{\alpha,j+\frac{1}{2}}^np^{\alpha}}{p^0\tau}( t-t_n)\big)\Big)\Big)\notag\\ &+H[v_1]g_L\big(1-\frac{\tau}{U_{\alpha,j+\frac{1}{2},L}^np^{\alpha}}(p^0A_L+p^1a_L)-a_Lv_1(t-t_n)\big) \exp\big(-\frac{U_{\alpha,j+\frac{1}{2},L}^np^{\alpha}}{p^0\tau}(t-t_n)\big) \notag\\ &+(1-H[v_1])g_R\big(1-\frac{\tau}{U_{\alpha,j+\frac{1}{2},R}^np^{\alpha}} (p^0A_R+p^1a_R)-a_Rv_1(t-t_n)\big)\exp\big(-\frac{U_{\alpha,j+\frac{1}{2},R}^np^{\alpha}}{p^0\tau}(t-t_n)\big),\label{EQ0000000000} \end{align} \end{small} where $H[x]$ is the Heaviside function defined by \begin{equation*} H[x]=\begin{cases} 0,& x<0,\\ 1,& x\geqslant0. \end{cases} \end{equation*} Finally, substituting \eqref{EQ0000000000} into the integral \eqref{eq:1DF} yields the numerical flux $\hat{\vec{F}}^n_{j+\frac{1}{2}}$. 
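In \eqref{EQ0000000000}, the equilibrium and transported initial parts are blended by the complementary exponential weights $1-e^{-\nu\tilde{t}}$ and $e^{-\nu\tilde{t}}$ with $\nu=U_{\alpha,j+\frac{1}{2}}^np^{\alpha}/(p^0\tau)$, and the upwind side is selected by the Heaviside function; a minimal sketch of these two ingredients:

```python
import math

def heaviside(x):
    # H[x] as defined above: 0 for x < 0 and 1 for x >= 0.
    return 0.0 if x < 0.0 else 1.0

def relaxation_weights(nu, t_tilde):
    # Weights multiplying the equilibrium part (first) and the transported
    # initial part (second) of the interface distribution; they sum to one.
    w_init = math.exp(-nu * t_tilde)
    return 1.0 - w_init, w_init
```

For $\tilde t \gg \tau$ the equilibrium weight tends to one, recovering the hydrodynamic limit of the flux.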
\subsection{2D Euler equations}\label{sec:2DGKSEuler} This section extends the above BGK scheme to the 2D ultra-relativistic Euler equations \begin{equation}\label{2DEuler} \frac{\partial \vec{W}}{\partial t} + \frac{\partial \vec{F}(\vec{W})}{\partial x} + \frac{\partial \vec{G}(\vec{W})}{\partial y}= 0, \end{equation} where \begin{align} \vec{W} = \begin{pmatrix} n U^0\\ n h U^0U^1\\ n h U^0U^2\\ n h U^0U^0 - p \end{pmatrix}, \vec{F}(\vec{W}) = \begin{pmatrix} n U^1\\ n h U^1U^1 + p\\ n h U^2U^1\\ n h U^0U^1 \end{pmatrix}, \vec{G}(\vec{W}) = \begin{pmatrix} n U^2\\ n h U^1U^2\\ n h U^2U^2 + p\\ n h U^0U^2 \end{pmatrix}, \end{align} with $h=4T$, $p= n T$, and $\vec u = (u_1, u_2, 0)^T$. The four real eigenvalues of the Jacobian matrices $A_1(\vec{W})=\partial\vec{F}/\partial \vec{W}$ and $A_2(\vec{W})=\partial\vec{G}/\partial \vec{W}$ are given as follows \begin{align} \lambda_k^{(1)}&=\frac{u_k(1-c_s^2)-c_s\gamma(\vec{u})\sqrt{1-u_k^2-(|\vec{u}|^2-u_k^2)c_s^2}} {1-|\vec{u}|^2c_s^2},\nonumber\\ \lambda_k^{(2)}&=\lambda_k^{(3)}=u_k,\nonumber\\ \lambda_k^{(4)}&=\frac{u_k(1-c_s^2)+c_s\gamma(\vec{u})\sqrt{1-u_k^2 -(|\vec{u}|^2-u_k^2)c_s^2}}{1-|\vec{u}|^2c_s^2},\nonumber \end{align} where $k=1,2$, and $c_s = \frac{1}{\sqrt{3}}$ is the speed of sound. Divide the spatial domain $\Omega$ into a rectangular mesh with the cell $I_{i,j}=\{(x,y)|x_{i-\frac{1}{2}}<x<x_{i+\frac{1}{2}}, y_{j-\frac{1}{2}}<y<y_{j+\frac{1}{2}}\}$, where $x_{i+\frac{1}{2}} = \frac{1}{2}(x_{i}+x_{i+1}), y_{j+\frac{1}{2}}=\frac{1}{2}(y_{j}+y_{j+1})$, $x_i=i\Delta x, y_j = j\Delta y$, and $i,j\in\mathbb{Z}$.
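These eigenvalues can be evaluated directly; for $\vec u=\vec 0$ they reduce to $\mp c_s$ and $0$, which is a useful sanity check. A sketch (the function name is ours):

```python
import numpy as np

def euler_eigenvalues(u, k, cs=1.0 / np.sqrt(3.0)):
    # Eigenvalues of A_k for the 2D ultra-relativistic Euler equations,
    # with u = (u1, u2) and gamma(u) = 1/sqrt(1 - |u|^2).
    u = np.asarray(u, dtype=float)
    uk = u[k - 1]
    u2 = float(np.dot(u, u))
    gamma = 1.0 / np.sqrt(1.0 - u2)
    root = cs * gamma * np.sqrt(1.0 - uk**2 - (u2 - uk**2) * cs**2)
    denom = 1.0 - u2 * cs**2
    lam1 = (uk * (1.0 - cs**2) - root) / denom
    lam4 = (uk * (1.0 - cs**2) + root) / denom
    return lam1, uk, uk, lam4
```

The maximum of $|\lambda_k^{(1)}|$ and $|\lambda_k^{(4)}|$ over a cell gives the spectral-radius estimate $\bar\varrho^k_{i,j}$ entering the CFL condition.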
The time interval $[0,T]$ is also partitioned into a (non-uniform) mesh $\{t_{n+1}=t_n+\Delta t_n, t_0=0, n\geqslant0\}$, where the time step size $\Delta t_n$ is determined by \begin{equation}\label{TimeStep2D} \Delta t_n = \frac{C \min\{\Delta x, \Delta y\}}{\max\limits_{ij}\{\bar{\varrho}^1_{i,j},\bar{\varrho}^2_{i,j}\}}, \end{equation} the constant $C$ denotes the CFL number, and $\bar{\varrho}^k_{i,j}$ denotes the approximation of the spectral radius of $A_k(\vec{W})$ over the cell $I_{i,j}$, $k=1,2$. The 2D Anderson-Witting model becomes \begin{equation}\label{2DAW} p^0\frac{\partial f}{\partial t} + p^1\frac{\partial f}{\partial x} + p^2\frac{\partial f}{\partial y}= \frac{U_{\alpha}p^{\alpha}}{\tau}(g-f), \end{equation} whose analytical solution can be given by \begin{align} \nonumber f(x,y,t,\vec{p})=&\int_{0}^{t}g(x',y', t',\vec{p})\exp\left(-\int_{t'}^{t} \frac{U_{\alpha}(x'',y'',t'')p^{\alpha}}{p^{0}\tau}dt''\right) \frac{U_{\alpha}(x',y',t')p^{\alpha}}{p^{0}\tau}dt'\\ &+\exp\left(-\int_{0}^{t} \frac{U_{\alpha}(x',y',t')p^{\alpha}}{\tau p^{0}}dt'\right)f_{0}(x-v_1t,y-v_2t,\vec{p}), \label{eq:2DAWsolu} \end{align} where $v_1=p^1/p^0$ and $v_2=p^2/p^0$ are the particle velocities in the $x$ and $y$ directions respectively, $\{x'=x-v_1(t-t'), y'=y-v_2(t-t')\}$ and $\{x''=x-v_1(t-t''), y''=y-v_2(t-t'')\}$ are the particle trajectories, and $f_{0}(x,y,\vec{p})$ is the initial particle velocity distribution function, i.e. $f(x,y,0,\vec{p})=f_{0}(x,y,\vec{p})$.
Taking the moments of \eqref{2DAW} and integrating them over $I_{i,j}\times[t_n,t_{n+1})$ yield the 2D finite volume scheme \begin{equation} \label{eq:2DEulerDisc} \vec{\bar{W}}^{n+1}_{i,j} = \vec{\bar{W}}^{n}_{i,j} - \frac{\Delta t_n}{\Delta x}(\hat{\vec{F}}^{n}_{i+\frac{1}{2},j} - \hat{\vec{F}}^n_{i-\frac{1}{2},j}) - \frac{\Delta t_n}{\Delta y}(\hat{\vec{G}}^n_{i,j+\frac{1}{2}} - \hat{\vec{G}}^n_{i,j-\frac{1}{2}}), \end{equation} where $\vec{\bar{W}}^{n}_{i,j}$ is the cell average approximation of the conservative vector $\vec{W}(x,y,t)$ over the cell $I_{i,j}$ at time $t_n$, i.e. \begin{equation*} \vec{\bar{W}}^{n}_{i,j} \approx \frac{1}{\Delta x\Delta y}\int_{I_{i,j}}\vec{W}(x,y,t_n)dxdy, \end{equation*} and \begin{align}\label{eq:2DF} \hat{\vec{F}}^n_{i+\frac{1}{2},j}&=\frac{1}{\Delta t_n}\int_{t_n}^{t_{n+1}}\int_{\mathbb{R}^3}\vec{\Psi} p^1\hat{f}(x_{i+\frac{1}{2}},y_j,t,\vec{p})d\varXi dt,\\ \hat{\vec{G}}^n_{i,j+\frac{1}{2}}&=\frac{1}{\Delta t_n}\int_{t_n}^{t_{n+1}}\int_{\mathbb{R}^3}\vec{\Psi} p^2\hat{f}(x_i,y_{j+\frac{1}{2}},t,\vec{p})d\varXi dt, \end{align} where $\hat{f}(x_{i+\frac{1}{2}},y_j,t,\vec{p})\approx f(x_{i+\frac{1}{2}},y_j,t,\vec{p})$ and $\hat{f}(x_i,y_{j+\frac{1}{2}},t,\vec{p})\approx f(x_i,y_{j+\frac{1}{2}},t,\vec{p})$.
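Once the interface fluxes are available, the update \eqref{eq:2DEulerDisc} is a standard flux-difference step; a sketch with assumed array shapes (x-interface fluxes of shape $(n_x+1, n_y, m)$, y-interface fluxes of shape $(n_x, n_y+1, m)$):

```python
import numpy as np

def fv_update_2d(Wbar, F_hat, G_hat, dt, dx, dy):
    # One step of eq. (eq:2DEulerDisc):
    # W^{n+1} = W^n - dt/dx (F_{i+1/2,j} - F_{i-1/2,j})
    #               - dt/dy (G_{i,j+1/2} - G_{i,j-1/2}).
    return (Wbar
            - dt / dx * (F_hat[1:, :, :] - F_hat[:-1, :, :])
            - dt / dy * (G_hat[:, 1:, :] - G_hat[:, :-1, :]))
```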
Because the derivation of $\hat{f}(x_i,y_{j+\frac{1}{2}},t,\vec{p})$ is very similar to that of $\hat{f}(x_{i+\frac{1}{2}},y_j, t,\vec{p})$, we will mainly derive $\hat{f}(x_{i+\frac{1}{2}},y_j, t,\vec{p})$ with the help of \eqref{eq:2DAWsolu} as follows \begin{align} \nonumber \hat{f}(x_{i+\frac{1}{2}},y_j,t,\vec{p})=&\int_{t_n}^{t}g_h(x',y', t',\vec{p})\exp\left(-\int_{t'}^{t} \frac{U_{\alpha}(x'',y'',t'')p^{\alpha}}{p^{0}\tau}dt''\right) \frac{U_{\alpha}(x',y',t')p^{\alpha}}{p^{0}\tau}dt'\\ \label{eq:2DAWsoluapprox}&+\exp\left(-\int_{t_n}^{t} \frac{U_{\alpha}(x',y',t')p^{\alpha}}{\tau p^{0}}dt'\right)f_{h,0}(x_{i+\frac{1}{2}}-v_1\tilde{t},y_j-v_2\tilde{t},\vec{p}), \end{align} where $\tilde{t}=t-t_n$, $x'=x_{i+\frac{1}{2}}-v_1(t-t'), y'=y_j-v_2(t-t')$ and $x''=x_{i+\frac{1}{2}}-v_1(t-t''), y''=y_j-v_2(t-t'')$, and $f_{h,0}(x_{i+\frac{1}{2}}-v_1\tilde{t},y_j-v_2\tilde{t},\vec{p})$ and $g_h(x',y',t',\vec{p})$ are the approximate initial distribution function and equilibrium velocity distribution function, respectively, which will be presented in the following. Similarly, to avoid the expensive cost of evaluating $U_{\alpha}(x'',y'',t'')$ or $U_{\alpha}(x',y',t')$ along the particle trajectory, $U_{\alpha}(x'',y'',t'')$ and $U_{\alpha}(x',y',t')$ in the first term of \eqref{eq:2DAWsoluapprox} may be taken as a constant $U_{\alpha,{i+\frac{1}{2},j}}^n$, and $U_{\alpha}(x',y',t')$ in the second term may be replaced with $U_{\alpha,{i+\frac{1}{2},j},L}^n$ or $U_{\alpha,{i+\frac{1}{2},j},R}^n$, as given in Section \ref{sec:fluxevolution2D}.
\subsubsection{Equilibrium distribution $g_0$ at the point $(x_{i+\frac{1}{2}},y_j, t_n)$} \label{sec:fluxevolution2D} Using the cell average values $\{\vec{\bar{W}}^n_{i,j}\}$ reconstructs a piecewise linear function \begin{equation}\label{eq-2dreconstruction} \vec{W}_h(x,y,t_n)=\sum_{i,j} \vec{W}^n_{i,j}(x,y)\chi_{i,j}(x,y), \end{equation} where $\vec{W}^n_{i,j}(x,y):={\vec{\bar{W}}}^n_{i,j} + \vec{W}^{n,x}_{i,j}(x-x_i) + \vec{W}^{n,y}_{i,j}(y-y_j)$, $\vec{W}^{n,x}_{i,j}$ and $\vec{W}^{n,y}_{i,j}$ are the $x$- and $y$-slopes in the cell $I_{i,j}$, respectively, and $\chi_{i,j}(x,y)$ is the characteristic function of the cell $I_{i,j}$. At the point $(x_{i+\frac{1}{2}},y_j)$, the left and right limiting values of $\vec{W}_h(x,y,t_n)$ are given by \begin{equation} \begin{aligned} \vec{W}^n_{i+\frac{1}{2},j,L} &:= \vec{W}_h(x_{i+\frac{1}{2}}-0,y_j,t_n)=\vec{W}^n_{i,j}(x_{i+\frac{1}{2}},y_j),\\ \vec{W}^n_{i+\frac{1}{2},j,R} &:= \vec{W}_h(x_{i+\frac{1}{2}}+0,y_j,t_n)=\vec{W}^n_{{i+1,j}}(x_{i+\frac{1}{2}},y_j),\\ \vec{W}^{n,x}_{i+\frac{1}{2},j,L} &:= \frac{d\vec{W}_h}{dx}(x_{i+\frac{1}{2}}-0,y_j,t_n)=\frac{d\vec{W}^n_{i,j}}{dx}(x_{i+\frac{1}{2}},y_j),\\ \vec{W}^{n,x}_{i+\frac{1}{2},j,R} &:= \frac{d\vec{W}_h}{dx}(x_{i+\frac{1}{2}}+0,y_j,t_n)=\frac{d\vec{W}^n_{{i+1,j}}}{dx}(x_{i+\frac{1}{2}},y_j),\\ \vec{W}^{n,y}_{i+\frac{1}{2},j,L} &:= \frac{d\vec{W}_h}{dy}(x_{i+\frac{1}{2}}-0,y_j,t_n)=\frac{d\vec{W}^n_{i,j}}{dy}(x_{i+\frac{1}{2}},y_j),\\ \vec{W}^{n,y}_{i+\frac{1}{2},j,R} &:= \frac{d\vec{W}_h}{dy}(x_{i+\frac{1}{2}}+0,y_j,t_n)=\frac{d\vec{W}^n_{{i+1,j}}}{dy}(x_{i+\frac{1}{2}},y_j). \end{aligned} \label{eq:RL2D} \end{equation} Similar to the 1D case, with the help of $\vec{W}^n_{i+\frac{1}{2},j,L}$, $\vec{W}^n_{i+\frac{1}{2},j,R}$ and the J\"{u}ttner distribution \eqref{juttner}, one can get $g_L$ and $g_R$ at $(x_{i+\frac{1}{2}},y_j,t_n)$.
Then the particle four-flow $N^{\alpha}$ and the energy-momentum tensor $T^{\alpha\beta}$ at $(x_{i+\frac{1}{2}},y_j,t_n)$ can be defined by \begin{align*} &(N^{0},T^{01},T^{02},T^{00})_{i+\frac{1}{2},j}^{n,T}:=\int_{\mathbb{R}^{3}\cap\{p^{1} >0\}}\vec{\Psi}p^{0}g_{L}d\Xi+\int_{\mathbb{R}^{3}\cap\{p^{1}<0\}}\vec{\Psi}p^{0}g_{R}d\Xi,\\ &(N^{1},T^{11},T^{21},T^{01})_{i+\frac{1}{2},j}^{n,T}:=\int_{\mathbb{R}^{3}\cap\{p^{1} >0\}}\vec{\Psi}p^{1}g_{L}d\Xi+\int_{\mathbb{R}^{3}\cap\{p^{1}<0\}}\vec{\Psi}p^{1}g_{R}d\Xi,\\ &(N^{2},T^{12},T^{22},T^{02})_{i+\frac{1}{2},j}^{n,T}:=\int_{\mathbb{R}^{3}\cap\{p^{1} >0\}}\vec{\Psi}p^{2}g_{L}d\Xi+\int_{\mathbb{R}^{3}\cap\{p^{1}<0\}}\vec{\Psi}p^{2}g_{R}d\Xi. \end{align*} Using those and Theorem \ref{thm:NT}, the macroscopic quantities $ n ^n_{i+\frac{1}{2},j}, T^n_{i+\frac{1}{2},j}$ and $U_{{\alpha,i+\frac{1}{2},j}}^n$ can be calculated and then the J\"{u}ttner distribution function $g_0$ at $(x_{i+\frac{1}{2}},y_j, t_n)$ is obtained. Similarly, in the $y$-direction, $\vec{W}^n_{i,j+\frac{1}{2},L}$ and $\vec{W}^n_{i,j+\frac{1}{2},R}$ can also be given by \eqref{eq-2dreconstruction} so that one has corresponding left and right equilibrium distributions $\tilde{g}_L$ and $\tilde{g}_R$.
The particle four-flow $N^{\alpha}$ and the energy-momentum tensor $T^{\alpha\beta}$ at $(x_i,y_{j+\frac{1}{2}}, t_n)$ are defined by \begin{align*} &(N^{0},T^{01},T^{02},T^{00})_{i,j+\frac{1}{2}}^{n,T}:=\int_{\mathbb{R}^{3}\cap\{p^{2} >0\}}\vec{\Psi}p^{0}\tilde{g}_{L}d\Xi+\int_{\mathbb{R}^{3}\cap\{p^{2}<0\}}\vec{\Psi}p^{0}\tilde{g}_{R}d\Xi,\\ &(N^{1},T^{11},T^{21},T^{01})_{i,j+\frac{1}{2}}^{n,T}:=\int_{\mathbb{R}^{3}\cap\{p^{2} >0\}}\vec{\Psi}p^{1}\tilde{g}_{L}d\Xi+\int_{\mathbb{R}^{3}\cap\{p^{2}<0\}}\vec{\Psi}p^{1}\tilde{g}_{R}d\Xi,\\ &(N^{2},T^{12},T^{22},T^{02})_{i,j+\frac{1}{2}}^{n,T}:=\int_{\mathbb{R}^{3}\cap\{p^{2} >0\}}\vec{\Psi}p^{2}\tilde{g}_{L}d\Xi+\int_{\mathbb{R}^{3}\cap\{p^{2}<0\}}\vec{\Psi}p^{2}\tilde{g}_{R}d\Xi, \end{align*} which give $ n ^n_{i,j+\frac{1}{2}}, T^n_{i,j+\frac{1}{2}}$, $U_{{\alpha,i,j+\frac{1}{2}}}^n$ and $g_0$ at $(x_{i},y_{j+\frac{1}{2}}, t_n)$. In the following, the initial distribution function $f_{h,0}(x,y,\vec{p})$ and the equilibrium distribution $g_h(x,y,t,\vec{p})$ are derived separately. \subsubsection{Initial distribution function $f_{h,0}(x,y,\vec{p})$} \label{Section-Initial-velocity-distribution} Borrowing the idea of the Chapman-Enskog expansion, $f(x,y,t,\vec{p})$ is supposed to be of the form \begin{small} \begin{equation}\label{eq:ceEuler2D000} f(x,y,t,\vec{p})=g-\frac{\tau}{U_{\alpha} p^{\alpha}}\left(p^0 g_t+p^1g_x+p^2g_y\right)+O(\tau^2)=:g\left(1-\frac{\tau}{U_{\alpha} p^{\alpha}}\left(p^0A+p^1a+p^2b\right)\right)+O(\tau^2). \end{equation} \end{small} The conservation constraints \eqref{convc} imply the constraints on $A,a$ and $b$ \begin{equation} \label{eq:aAcons2D} \int_{\mathbb{R}^{3}}\vec{\Psi}(p^{0}A+p^{1}a+p^2b)gd\Xi=\int_{\mathbb{R}^{3}}\vec{\Psi}(p^{0}g_{t}+p^{1}g_{x}+p^2g_y)d\Xi =\frac{1}{\tau}\int_{\mathbb{R}^{3}}\vec{\Psi}U_{\alpha}p^{\alpha}(g-f)d\Xi=0.
\end{equation} Using the Taylor series expansion of $f$ at the cell interface $(x_{i+\frac{1}{2}},y_j)$ gives \begin{equation} \label{eq:fh02D} f_{h,0}=\left\{\begin{aligned} &g_{L}\left(1-\frac{\tau}{U_{\alpha,L}p^{\alpha}}(p^{0}A_{L}+p^{1}a_{L}+p^{2}b_{L})+a_{L}\tilde{x} + b_{L}\tilde{y}\right),&\tilde{x}<0,\\ &g_{R}\left(1-\frac{\tau}{U_{\alpha,R}p^{\alpha}}(p^{0}A_{R}+p^{1}a_{R}+p^{2}b_{R})+a_{R}\tilde{x} + b_{R}\tilde{y}\right),&\tilde{x}>0, \end{aligned} \right. \end{equation} where $\tilde{x}=x-x_{i+\frac{1}{2}}, \tilde{y}=y-y_j$, and $(a_{\omega}, b_{\omega}, A_{\omega})$, $\omega=L,R$, are of the form \begin{align}\label{eq:aAform2D}\begin{aligned} a_{\omega}=&a_{\omega,1}+a_{\omega,2}p^{1}+a_{\omega,3}p^{2}+a_{\omega,4}p^{0}, \\ b_{\omega}=&b_{\omega,1}+b_{\omega,2}p^{1}+b_{\omega,3}p^{2}+b_{\omega,4}p^{0}, \\ A_{\omega}=&A_{\omega,1}+A_{\omega,2}p^{1}+A_{\omega,3}p^{2}+A_{\omega,4}p^{0}. \end{aligned}\end{align} The slopes $a_\omega$ and $b_\omega$ come from the spatial derivative of the J\"{u}ttner distribution and have unique correspondences with the slopes of the conservative variables $\vec W$ by the following linear systems for $a_\omega$ and $b_\omega$ \begin{align*} <a_{\omega}{p^0}>=\vec{W}_{i+\frac{1}{2},j,\omega}^{n,x},\ \ <b_{\omega}{p^0}>=\vec{W}_{i+\frac{1}{2},j,\omega}^{n,y}, \ \omega=L,R. \end{align*} Those linear systems can also be expressed as follows \begin{align*} M_{0}^{\omega}{\vec{a}}_\omega=\vec{W}_{i+\frac{1}{2},j,\omega}^{n,x},\quad M_{0}^{\omega}{\vec{b}}_\omega=\vec{W}_{i+\frac{1}{2},j,\omega}^{n,y}, \end{align*} where the coefficient matrix is defined by \[ M_{0}^{\omega}=\int_{\mathbb{R}^{3}}p^{0}g_{\omega}\vec{\Psi}\vec{\Psi}^Td\Xi.
\] Substituting $a_\omega$ and $b_\omega$ into the conservation constraints \eqref{eq:aAcons2D} gives the linear systems for $A_\omega$ as follows \[ <a_{\omega}p^{1}+b_{\omega}p^{2}+A_{\omega}p^{0}>=0,\ \omega=L,R, \] which can be rewritten as \begin{equation}\label{eq:lA2D} M_{0}^{\omega}\vec{A}_{\omega}=-M_{1}^{\omega}\vec{a}_{\omega}-M_{2}^{\omega}\vec{b}_{\omega}, \end{equation} where \begin{align*} M_{1}^{\omega}=\int_{\mathbb{R}^{3}}g_{\omega}p^{1}\vec{\Psi}\vec{\Psi}^Td\Xi,\ \ M_{2}^{\omega}=\int_{\mathbb{R}^{3}}g_{\omega}p^{2}\vec{\Psi}\vec{\Psi}^Td\Xi, \ \omega=L,R. \end{align*} All elements of the matrices $M_0^{\omega}$, $M_1^{\omega}$ and $M_2^{\omega}$ can also be explicitly presented by using the coordinate transformation \eqref{eq-polar-tran}. Omitting the superscripts $L$ and $R$, the matrices $M_0$, $M_1$, and $M_2$ are \begin{small} \begin{align}\notag M_0&=\int_{\mathbb{R}^{3}}p^{0}g\vec{\Psi}\vec{\Psi}^Td\Xi:=\begin{pmatrix} M^0_{00} &M^0_{01} &M^0_{02} &M^0_{03}\\ M^0_{10} &M^0_{11} &M^0_{12} &M^0_{13}\\ M^0_{20} &M^0_{21} &M^0_{22} &M^0_{23}\\ M^0_{30} &M^0_{31} &M^0_{32} &M^0_{33} \end{pmatrix}\\ &=\begin{pmatrix} \int^{\pi}_{-\pi}\int^1_{-1}\Phi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}\Psi d\xi d\varphi \notag \\ \int^{\pi}_{-\pi}\int^1_{-1}w^1\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^1)^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1w^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1\Upsilon d\xi d\varphi \notag \\ \int^{\pi}_{-\pi}\int^1_{-1}w^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^2w^1\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^2)^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^2\Upsilon d\xi d\varphi \\ \int^{\pi}_{-\pi}\int^1_{-1}\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^2\Upsilon d\xi d\varphi
&\int^{\pi}_{-\pi}\int^1_{-1}\Upsilon d\xi d\varphi \notag \end{pmatrix}\\ &= \begin{pmatrix} n U^0 & 4 n TU^1U^0 & 4 n TU^2U^0 & n T (4 U^1U^1 + 4 U^2U^2 + 3)\\ 4 n TU^1 U^0 & 4 n T^2 (6U^1U^1 + 1) U^0 & 24 n T^2U^1 U^2 U^0 & 4 n T^2 U^1 (6 U^1U^1 + 6 U^2U^2 + 5)\\ 4 n TU^2 U^0 & 24 n T^2 U^1 U^2 U^0 & 4 n T^2(6 U^2U^2 + 1) U^0 & 4 n T^2 U^2 (6 U^1U^1 + 6 U^2U^2+ 5)\\ M^0_{03} & M^0_{13} & M^0_{23} & 12 n T^2 U^0 (2 U^1U^1 + 2U^2U^2 + 1) \end{pmatrix}, \label{M02D} \end{align} \end{small} \begin{small} \begin{align}\notag M_1&=\int_{\mathbb{R}^{3}}p^{1}g\vec{\Psi}\vec{\Psi}^Td\Xi:=\begin{pmatrix} M^1_{00} &M^1_{01} &M^1_{02} &M^1_{03}\\ M^1_{10} &M^1_{11} &M^1_{12} &M^1_{13}\\ M^1_{20} &M^1_{21} &M^1_{22} &M^1_{23}\\ M^1_{30} &M^1_{31} &M^1_{32} &M^1_{33} \end{pmatrix}\\ &= \begin{pmatrix} \int^{\pi}_{-\pi}\int^1_{-1}w^1\Phi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^1)^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1w^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1\Psi d\xi d\varphi \notag \\ \int^{\pi}_{-\pi}\int^1_{-1}(w^1)^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^1)^3\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^1)^2w^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^1)^2\Upsilon d\xi d\varphi \notag \\ \int^{\pi}_{-\pi}\int^1_{-1}w^1w^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^1)^2w^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1(w^2)^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1w^2\Upsilon d\xi d\varphi \\ \int^{\pi}_{-\pi}\int^1_{-1}w^1\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^1)^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1w^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1\Upsilon d\xi d\varphi \notag \end{pmatrix}\\ &= \begin{pmatrix} n U^1 & n T(4U^1U^1 + 1) & 4 n TU^1U^2 & 4 n TU^1U^0\\ n T(4U^1U^1 + 1) & 12 n T^2 U^1(2U^1U^1 + 1) & 4 n T^2 U^2(6U^1U^1 + 1) & 4 n T^2 U^0(6U^1U^1 + 1)\\ 4 n TU^1U^2 & 4 n T^2 U^2(6U^1U^1 + 1) & 4 n T^2 U^1(6U^2U^2 + 1) & 24 n 
T^2 U^1U^2U^0\\ 4 n TU^1U^0 & 4 n T^2 U^0(6U^1U^1 + 1) & 24 n T^2 U^1U^2U^0 & 4 n T^2 U^1(6U^1U^1 + 6U^2U^2 + 5) \end{pmatrix},\label{M12D} \end{align} \end{small} and \begin{small} \begin{align}\notag M_2&=\int_{\mathbb{R}^{3}}p^{2}g\vec{\Psi}\vec{\Psi}^Td\Xi:=\begin{pmatrix} M^2_{00} &M^2_{01} &M^2_{02} &M^2_{03}\\ M^2_{10} &M^2_{11} &M^2_{12} &M^2_{13}\\ M^2_{20} &M^2_{21} &M^2_{22} &M^2_{23}\\ M^2_{30} &M^2_{31} &M^2_{32} &M^2_{33} \end{pmatrix}\\ &= \begin{pmatrix} \int^{\pi}_{-\pi}\int^1_{-1}w^2\Phi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^2w^1\Psi d\xi d\varphi & \int^{\pi}_{-\pi}\int^1_{-1}(w^2)^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^2\Psi d\xi d\varphi \notag \\ \int^{\pi}_{-\pi}\int^1_{-1}w^2w^1\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^2(w^1)^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1(w^2)^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1w^2\Upsilon d\xi d\varphi \notag \\ \int^{\pi}_{-\pi}\int^1_{-1}(w^2)^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^2)^2w^1\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^2)^3\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^2)^2\Upsilon d\xi d\varphi \notag\\ \int^{\pi}_{-\pi}\int^1_{-1}w^2\Psi d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^1w^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}(w^2)^2\Upsilon d\xi d\varphi &\int^{\pi}_{-\pi}\int^1_{-1}w^2\Upsilon d\xi d\varphi \notag \end{pmatrix}\\ &= \begin{pmatrix} n U^2 & 4 n TU^1U^2 & n T(4U^2U^2 + 1) & 4 n TU^2U^0\\ 4 n TU^1U^2 & 4 n T^2U^2(6U^1U^1 + 1) & 4 n T^2U^1(6U^2U^2 + 1) & 24 n T^2U^1U^2U^0\\ n T(4U^2U^2 + 1) & 4 n T^2U^1(6U^2U^2 + 1) & 12 n T^2U^2(2U^2U^2 + 1) & 4 n T^2U^0(6U^2U^2 + 1)\\ 4 n TU^2U^0 & 24 n T^2U^1U^2U^0 & 4 n T^2U^0(6U^2U^2 + 1) & 4 n T^2U^2(6U^1U^1 + 6U^2U^2 + 5) \end{pmatrix},\label{M22D} \end{align} \end{small} where $w^1=\xi, w^2=\sqrt{1-\xi^2}\sin\varphi, w^3=\sqrt{1-\xi^2}\cos\varphi$, and \begin{align} \Phi(x,y,\xi,\varphi) &=\frac{1}{4\pi}\frac{ n 
(x,y)}{(U^0(x,y)-w^1U^1(x,y)-w^2U^2(x,y))^3},\notag\\ \Psi(x,y,\xi,\varphi) &=\frac{3}{4\pi}\frac{( n T)(x,y)}{(U^0(x,y)-w^1U^1(x,y)-w^2U^2(x,y))^4},\\ \Upsilon(x,y,\xi,\varphi) &= \frac{3}{\pi}\frac{( n T^2)(x,y)}{(U^0(x,y)-w^1U^1(x,y)-w^2U^2(x,y))^5}\notag. \end{align} \subsubsection{Equilibrium velocity distribution $g_h(x,y,t,\vec p)$} \label{sec:equi2DEuler} Using $\vec{W}_{0}:=\vec{W}_{i+\frac{1}{2},j}^{n}$ derived in Section \ref{sec:fluxevolution2D} and the cell averages $\vec{\bar{W}}_{i+1,j}$ and $\vec{\bar{W}}_{i,j}$ reconstructs a linear polynomial \[ \vec{W}_{0}(x,y)=\vec{W}_{0}+\vec{W}_{0}^{x}(x-x_{i+\frac{1}{2}}) + \vec{W}_{0}^{y}(y-y_j), \] where $\vec{W}_{0}^{x}=\frac{1}{\Delta x}(\vec{\bar{W}}_{i+1,j}-\vec{\bar{W}}_{i,j})$ and $\vec{W}_{0}^{y}=\frac{1}{2\Delta y}(\vec{W}^{n}_{i+\frac12,j+1}-\vec{W}^{n}_{i+\frac12,j-1})$. Again using the Taylor series expansion of $g$ at the cell interface $(x_{i+\frac{1}{2}},y_j)$ gives \begin{equation} \label{eq:gh2D} g_{h}(x,y,t,\vec{p})=g_{0}(1+a_{0}(x-x_{i+\frac{1}{2}})+b_0(y-y_j)+A_{0}(t-t_n)), \end{equation} where $(a_0,b_0,A_0)$ are the values of $(a,b,A)$ at the point $(x_{i+\frac{1}{2}},y_j, t_n)$. Similarly, the linear systems for $a_0, b_0$ and $A_0$ can be derived as follows \[ <a_{0}{p^0}>=\vec{W}_{0}^{x},\quad <b_{0}{p^0}>=\vec{W}_{0}^{y},\quad <A_{0}p^{0}+a_{0}p^{1}+b_{0}p^{2}>=0, \] or \begin{equation}\label{eq:abAforg} M_{0}^{0}\vec{a}_{0}=\vec{W}_{0}^{x},\quad M_{0}^{0}\vec{b}_{0}=\vec{W}_{0}^{y}, \quad M_{0}^{0}\vec{A}_{0}= -M_{1}^{0}\vec{a}_{0} -M_{2}^{0}\vec{b}_{0}, \end{equation} where the elements of $M_0^0, M_1^0$ and $M_2^0$ are given by \eqref{M02D}, \eqref{M12D}, and \eqref{M22D} with $ n ^n_{i+\frac{1}{2},j}, T^n_{i+\frac{1}{2},j}$ and $U^{n,\alpha}_{i+\frac{1}{2},j}$ in place of $ n ,T$ and $U^{\alpha}$. Up to now, the initial gas distribution function $f_{h,0}$ and the equilibrium state $g_h$ have been given.
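The three systems in \eqref{eq:abAforg} share the coefficient matrix $M_0^0$, so a single factorization serves all of them; a minimal sketch:

```python
import numpy as np

def micro_coefficients_2d(M0, M1, M2, Wx, Wy):
    # Solve eq. (eq:abAforg): M0 a = W0^x, M0 b = W0^y,
    # and M0 A = -(M1 a + M2 b) for the temporal coefficients.
    a = np.linalg.solve(M0, Wx)
    b = np.linalg.solve(M0, Wy)
    A = np.linalg.solve(M0, -(M1 @ a + M2 @ b))
    return a, b, A
```

The same subroutine applies to the interface systems for $(\vec a_\omega,\vec b_\omega,\vec A_\omega)$, $\omega=L,R$.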
Substituting \eqref{eq:fh02D} and \eqref{eq:gh2D} into \eqref{eq:2DAWsoluapprox} gives \begin{small} \begin{align} &\hat{f}(x_{i+\frac{1}{2}},y_j,t,\vec{p}) =g_0\left(1-\exp\left(-\frac{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}{p^0\tau}\tilde{t}\right)\right)\notag\\ &+g_0a_0v_1\left(\left(\tilde{t}+\frac{p^0\tau}{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}\right)\exp\left(-\frac{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}{p^0\tau}\tilde{t}\right)-\frac{p^0\tau}{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}\right)\notag\\ &+g_0b_0v_2\left(\left(\tilde{t}+\frac{p^0\tau}{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}\right)\exp\left(-\frac{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}{p^0\tau}\tilde{t}\right)-\frac{p^0\tau}{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}\right)\notag\\ &+g_0A_0\left(\tilde{t}-\frac{p^0\tau}{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}\left(1-\exp\left(-\frac{U_{\alpha,i+\frac{1}{2},j}^np^{\alpha}}{p^0\tau}\tilde{t}\right)\right)\right)\notag\\ &+H[v_1]g_L\left(1-\frac{\tau}{U_{\alpha,i+\frac{1}{2},j,L}^np^{\alpha}}(p^0A_L+p^1a_L+p^2b_L)-a_Lv_1\tilde{t}-b_Lv_2\tilde{t}\right)\exp\left(-\frac{U_{\alpha,i+\frac{1}{2},j,L}^np^{\alpha}}{p^0\tau}\tilde{t}\right)\notag\\ &+(1-H[v_1])g_R\left(1-\frac{\tau}{U_{\alpha,i+\frac{1}{2},j,R}^np^{\alpha}}(p^0A_R+p^1a_R+p^2b_R)-a_Rv_1\tilde{t}-b_Rv_2\tilde{t}\right)\exp\left(-\frac{U_{\alpha,i+\frac{1}{2},j,R}^np^{\alpha}}{p^0\tau}\tilde{t}\right),\notag \end{align} \end{small} where $\tilde{t}=t-t_n$. Combining this $\hat{f}(x_{i+\frac{1}{2}},y_j,t,\vec{p})$ with \eqref{eq:2DF} gives the numerical flux $\hat{\vec{F}}^n_{i+\frac{1}{2},j}$. The numerical flux $\hat{\vec{G}}^n_{i,j+\frac{1}{2}}$ can be obtained by the same procedure.
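For orientation, the four time-dependent factors above (multiplying the equilibrium value, its spatial slopes, its time derivative, and the transported initial data) depend only on $\tilde{t}$ and the effective relaxation time $\tilde{\tau}=p^0\tau/(U_{\alpha}p^{\alpha})$. A minimal sketch of these weights (the function name is ours, not part of the scheme):

```python
import math

def bgk_time_weights(t, tau_tilde):
    """Time-dependent weights in the interface distribution f-hat.

    t         : time elapsed since t_n (tilde-t in the text)
    tau_tilde : effective relaxation time p^0*tau/(U_alpha p^alpha)
    Returns (w_eq, w_slope, w_time, w_init) multiplying the equilibrium
    value, its spatial slopes, its time derivative, and the initial state.
    """
    decay = math.exp(-t / tau_tilde)
    w_eq = 1.0 - decay                             # equilibrium contribution
    w_slope = (t + tau_tilde) * decay - tau_tilde  # a_0 v_1 and b_0 v_2 terms
    w_time = t - tau_tilde * (1.0 - decay)         # A_0 term
    w_init = decay                                 # transported initial data
    return w_eq, w_slope, w_time, w_init
```

At $\tilde{t}=0$ the flux is purely upwind ($w_{\mathrm{init}}=1$, the others vanish), while for $\tilde{t}/\tilde{\tau}\to\infty$ the equilibrium weight tends to one, reflecting the hydrodynamic limit built into the scheme.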
\subsection{2D Navier-Stokes equations} \label{sec:GKSNS} Because the previous simple expansion \eqref{eq:ceEuler} or \eqref{eq:ceEuler2D000} cannot give the Navier-Stokes equations \eqref{eq-NS01}-\eqref{eq-NS03}, one has to use the complicated Chapman-Enskog expansion \eqref{ce}-\eqref{dev} to design the genuine BGK schemes for the Navier-Stokes equations. On the other hand, for the Navier-Stokes equations, calculating the macroscopic quantities $n, U^{\alpha}$, and $p$ requires the fluxes $\vec F^k$ in addition to $\vec W$. More specifically, one has to first calculate the energy-momentum tensor $T^{\alpha\beta}$ and the particle four-flow $N^{\alpha}$ at the kinetic level and then use Theorem \ref{thm:NT} to calculate $n, U^{\alpha}$, and $p$. This reveals an essential difference between the genuine BGK schemes for the Euler and Navier-Stokes equations. In order to obtain $T^{\alpha\beta}$ and $N^{\alpha}$ at $t=t_{n+1}$ from the kinetic level, multiplying \eqref{2DAW} by $p^k/p^0$ gives \begin{align} \label{eq:boltzmannNS2} p^k\frac{\partial f}{\partial t} + \frac{p^kp^1}{p^0}\frac{\partial f}{\partial x} + \frac{p^kp^2}{p^0}\frac{\partial f}{\partial y} = \frac{p^kU_{\alpha}p^{\alpha}(g-f)}{p^0\tau}, \ k=1,2.
\end{align} Taking the moments of \eqref{2DAW} and \eqref{eq:boltzmannNS2} and integrating them over the space-time domain $I_{i,j}\times[t_n,t_{n+1})$, respectively, yields \begin{equation} \label{eq:moment1} \vec{\bar{W}}^{n+1}_{\alpha,i,j} = \vec{\bar{W}}^{n}_{\alpha,i,j} - \frac{\Delta t_n}{\Delta x}(\hat{\vec{F}}^{n}_{\alpha,i+\frac{1}{2},j} - \hat{\vec{F}}^n_{\alpha,i-\frac{1}{2},j}) - \frac{\Delta t_n}{\Delta y}(\hat{\vec{G}}^n_{\alpha,i,j+\frac{1}{2}} - \hat{\vec{G}}^n_{\alpha,i,j-\frac{1}{2}}) +\vec{S}^n_{\alpha,i,j},\ \alpha=0,1,2, \end{equation} where \begin{equation} \begin{aligned} \label{eq:relationNS} &\vec{\bar{W}}^{n}_{\alpha,i,j} = (N^\alpha,T^{1\alpha},T^{2\alpha},T^{0\alpha})^{n,T}_{i,j},\\ &\hat{\vec{F}}^{n}_{\alpha,i+\frac{1}{2},j} = \frac{1}{\Delta t_n}\int_{\mathbb{R}^3}\int_{t_n}^{t_{n+1}}\vec{\Psi}\frac{p^1p^\alpha}{p^0}\hat{f}(x_{i+\frac{1}{2}},y_j,t)dtd\varXi,\\ &\hat{\vec{G}}^n_{\alpha,i,j+\frac{1}{2}} = \frac{1}{\Delta t_n}\int_{\mathbb{R}^3}\int_{t_n}^{t_{n+1}}\vec{\Psi}\frac{p^2p^\alpha}{p^0}\hat{f}(x_i,y_{j+\frac{1}{2}},t)dtd\varXi,\\ &\vec{S}^n_{0,i,j} =0,\ \vec{S}^n_{k,i,j} = \int_{\mathbb{R}^3}\int_{t_n}^{t_{n+1}}\vec{\Psi}\frac{p^kU_{\alpha}p^{\alpha}}{p^0\tau}(g(x_i,y_j,t)-\hat{f}(x_i,y_j,t))dtd\varXi, \ k=1,2. \end{aligned} \end{equation} Our task is to obtain the approximate distributions $\hat{f}(x_{i+\frac{1}{2}},y_j,t)$ and $\hat{f}(x_{i},y_{j+\frac{1}{2}},t)$ for the numerical fluxes and $\hat{f}(x_{i},y_j,t)$ and $g(x_i,y_j,t)$ for the source terms. The following focuses on the derivation of $\hat{f}(x_{i+\frac{1}{2}},y_j,t)$ with the help of the analytical solution \eqref{eq:2DAWsolu} of the 2D Anderson-Witting model. \subsubsection{Initial distribution function $f_{h,0}(x,y,t,\vec p)$} This section derives the initial distribution function $f_{h,0}$ for $\hat{f}(x_{i+\frac{1}{2}},y_j,t)$.
The Chapman-Enskog expansion \eqref{ce}-\eqref{dev} is rewritten as follows \begin{equation}\label{eq:CENS} f(x,y,t,\vec{p})=g\left(1-\frac{\tau}{U_{\alpha}p^{\alpha}}\left(A^{ce}p^0 +a^{ce}p^1+b^{ce}p^2+c^{ce}p^3\right)\right)+O(\tau^2), \end{equation} where $A^{ce}=A_{\beta}^{ce}p^{\beta}+A^{ce}_4$, $a^{ce} =a^{ce}_{\beta}p^{\beta}+a^{ce}_4$, $b^{ce} =b^{ce}_{\beta}p^{\beta}+b^{ce}_4$, $c^{ce} =c^{ce}_{\beta}p^{\beta}+c^{ce}_4$, and \begin{align}\label{cecoef}\begin{aligned} A^{ce}_{\beta} &= -\frac{1}{T}\nabla^{<0}U^{\beta>} + \frac{U_{\beta}}{T^2}(\nabla^{0}T-\frac{T}{ n h}\nabla^{0}p),\quad A^{ce}_4 = -\frac{h}{T^2}(\nabla^{0}T-\frac{T}{ n h}\nabla^{0}p), \\ a^{ce}_{\beta} &= \frac{1}{T}\nabla^{<1}U^{\beta>} - \frac{U_{\beta}}{T^2}(\nabla^{1}T-\frac{T}{ n h}\nabla^{1}p),\quad a^{ce}_4 = \frac{h}{T^2}(\nabla^{1}T-\frac{T}{ n h}\nabla^{1}p), \\ b^{ce}_{\beta} &= \frac{1}{T}\nabla^{<2}U^{\beta>} - \frac{U_{\beta}}{T^2}(\nabla^{2}T-\frac{T}{ n h}\nabla^{2}p),\quad b^{ce}_4 = \frac{h}{T^2}(\nabla^{2}T-\frac{T}{ n h}\nabla^{2}p),\\ c^{ce}_{\beta} &= \frac{1}{T}\nabla^{<3}U^{\beta>} - \frac{U_{\beta}}{T^2}(\nabla^{3}T-\frac{T}{ n h}\nabla^{3}p),\quad c^{ce}_4 = \frac{h}{T^2}(\nabla^{3}T-\frac{T}{ n h}\nabla^{3}p). \end{aligned}\end{align} It is observed from these expressions of $A^{ce}, a^{ce}, b^{ce}$, and $c^{ce}$ that one has to compute time derivatives, which are not required in the Euler case. Those time derivatives are approximately computed by using the following second-order extrapolation method: for any smooth function $h(t)$, the first-order derivative at $t=t_n$ is numerically obtained by \begin{small} \begin{equation} \label{eq:extra} h_t(t_{n}) = \frac{h(t_{n-2})(t_{n-1}-t_n)^2-h(t_{n-1})(t_{n-2}-t_n)^2-h(t_{n})((t_{n-1}-t_n)^2-(t_{n-2}-t_n)^2)} {(t_{n-2}-t_{n})(t_{n-1}-t_n)^2-(t_{n-1}-t_{n})(t_{n-2}-t_n)^2}.
\end{equation} \end{small} Using the Chapman-Enskog expansion \eqref{eq:CENS} and the Taylor series expansion in terms of $x$ gives the initial velocity distribution \begin{small} \begin{equation} \label{eq:fh0NS} f_{h,0}(x,y,t^n,\vec{p})=\left\{\begin{aligned} &g_{L}\left(1-\frac{\tau}{U_{\alpha,L}p^{\alpha}}(p^{0}A_{L}^{ce}+p^{1}a_{L}^{ce}+p^{2}b_{L}^{ce}+p^{3}c_{L}^{ce})+a_{L}\tilde{x}+b_{L}\tilde{y}\right),\tilde{x}<0,\\ &g_{R}\left(1-\frac{\tau}{U_{\alpha,R}p^{\alpha}}(p^{0}A_{R}^{ce}+p^{1}a_{R}^{ce}+p^{2}b_{R}^{ce}+p^{3}c_{R}^{ce})+a_{R}\tilde{x}+b_{R}\tilde{y}\right),\tilde{x}>0, \end{aligned} \right. \end{equation} \end{small} where $\tilde{x}=x-x_{i+\frac{1}{2}}, \tilde{y}=y-y_{j}$, $g_{L}$ and $g_{R}$ denote the left and right J$\ddot{\text{u}}$ttner distributions at $x_{i+\frac{1}{2}}$ with $y=y_j,t=t_n$, the Taylor expansion coefficients $(a_{L},b_{L})$ and $(a_{R},b_{R})$ are calculated by using the same procedure as in the Euler case, while the Chapman-Enskog expansion coefficients $a_{L}^{ce}, a_{R}^{ce}, b_{L}^{ce}, b_{R}^{ce}, c_{L}^{ce}, c_{R}^{ce}$ and $A_{L}^{ce},A_{R}^{ce}$ are calculated by \eqref{cecoef}. \subsubsection{Equilibrium distribution functions $g_h(x,y,t,\vec p)$} In order to obtain the equilibrium distribution functions $g_h(x,y,t,\vec p)$ for $\hat{f}(x_{i+\frac{1}{2}},y_j,t)$, the particle four-flow $N^{\alpha}$ and the energy-momentum tensor $T^{\alpha\beta}$ at $(x_{i+\frac{1}{2}},y_j)$ and $t=t^n$ are defined by \begin{align*} (N^{\alpha},T^{\alpha1},T^{\alpha2},T^{\alpha0})_{i+\frac{1}{2},j}^{n,T}:=\int_{\mathbb{R}^{3}\cap{p^{1}>0}}\vec{\Psi}p^{\alpha}f_{L}d\Xi+\int_{\mathbb{R}^{3}\cap{p^{1}<0}}\vec{\Psi}p^{\alpha}f_{R}d\Xi,\ \alpha=0,1,2, \end{align*} where $f_L$ and $f_R$ are the left and right limits of $f_{h,0}$ with $y=y_j$ at $x=x_{i+\frac{1}{2}}$. 
Using those definitions and Theorem \ref{thm:NT}, the macroscopic quantities $ n ^n_{i+\frac{1}{2},j}, T^n_{i+\frac{1}{2},j}$ and $U_{\alpha,i+\frac{1}{2},j}^n$ can be obtained, and then one gets the J$\ddot{\text{u}}$ttner distribution function $g_0$ at $(x_{i+\frac{1}{2}},y_j,t_n)$. Similar to Section \ref{sec:equi2DEuler}, we reconstruct a cell-vertex based linear polynomial and perform the first-order Taylor series expansion of $g$ at the cell interface $(x_{i+\frac{1}{2}},y_j)$, see \eqref{eq:gh2D}. However, unlike the Euler case, $A_0$ is obtained by \[M^0_0\vec{A}_0=\vec{W}^t_0,\] where $\vec{W}^t_0$ is calculated by using the second-order extrapolation \eqref{eq:extra}. After that, substituting $f_{h,0}$ and $g_h$ into \eqref{eq:2DAWsoluapprox} yields $\hat{f}(x_{i+\frac{1}{2}},y_j,t)$. The distribution $\hat{f}(x_{i},y_{j+\frac{1}{2}},t)$ can be obtained similarly. \subsubsection{Derivation of source terms $\vec{S}_{{1},i,j}$ and $\vec{S}_{{2},i,j}$} It remains to calculate $\hat{f}(x_i,y_j,t)$ and $g(x_i,y_j,t)$ for the source terms $\vec{S}_{{1},i,j}$ and $\vec{S}_{{2},i,j}$. The procedure is the same as the above except for taking the first-order Taylor series expansion at the cell-center $(x_i,y_j)$.
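The second-order extrapolation \eqref{eq:extra}, used above for $\vec{W}^t_0$, reproduces first derivatives exactly for quadratic data on nonuniform time levels. A minimal sketch (function and argument names are ours):

```python
def time_derivative(h2, h1, h0, t2, t1, t0):
    """Second-order backward extrapolation of h'(t0) from the three
    time levels (t2, t1, t0) = (t_{n-2}, t_{n-1}, t_n); cf. eq. (extra)."""
    d1 = t1 - t0  # t_{n-1} - t_n
    d2 = t2 - t0  # t_{n-2} - t_n
    num = h2 * d1**2 - h1 * d2**2 - h0 * (d1**2 - d2**2)
    den = d2 * d1**2 - d1 * d2**2
    return num / den
```

For example, with $h(t)=3t^2+2t+1$ and the nonuniform levels $t_{n-2}=0.75$, $t_{n-1}=0.9$, $t_n=1$, the formula recovers $h'(1)=8$ up to rounding.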
To be more specific, $g$ and $f_0$ in the analytical solution \eqref{eq:2DAWsoluapprox} of the 2D Anderson-Witting model are replaced with \begin{equation} \label{eq:ghsource} g_{h}(x,y,t,\vec{p})=g_{0}(1+a_{0}(x-x_{i})+b_0(y-y_j)+A_{0}(t-t_n)), \end{equation} and \begin{equation} f_{h,0}({x,y},\vec{p})= g_{0}\left(1-\frac{\tau}{U_{\alpha,0}p^{\alpha}}(A_0^{ce}p^0+a_0^{ce}p^1+b_0^{ce}p^2+c_0^{ce}p^3)+a_{0}\tilde{x}+b_{0}\tilde{y}\right), \end{equation} where $(a_0,b_0,A_0)$ are the Taylor expansion coefficients at $(x_{i},y_j,t_n)$ calculated by the same procedure as that for $\hat{f}(x_{i+\frac{1}{2}},y_j,t)$, $\tilde{x}=x-x_{i}$, $\tilde{y}=y-y_{j}$, $g_{0}$ denotes the J$\ddot{\text{u}}$ttner distribution at $(x_i,y_j, t_n)$, and $a_{0}^{ce}, b_{0}^{ce}, c_{0}^{ce}$ and $A_{0}^{ce}$ are the Chapman-Enskog expansion coefficients at $(x_i,y_j,t_n)$. It is worth noting that since $f_{h,0}$ is continuous at $(x_i, y_j)$, there is no need to distinguish between left and right states here. The subroutine for the coefficients in \eqref{eq:fh0NS} can be used to get those in $f_{h,0}({x,y},\vec{p})$. In order to define the equilibrium state $g(x_i,y_j,t)$ in the source term, we first need to determine the corresponding macroscopic quantities $N^{\alpha}$ and $T^{\alpha\beta}$, which can be obtained by taking the moments of $\hat{f}(x_i,y_j,t)$. Using Theorem \ref{thm:NT}, the macroscopic quantities $ n , T$ and $\vec{u}$ can then be obtained. Thus the J$\ddot{\text{u}}$ttner distribution function at the cell center $(x_{i},y_j)$ is derived according to the definition. At this point, all distributions have been derived and the second-order accurate genuine BGK scheme \eqref{eq:moment1} has been developed for the 2D ultra-relativistic Navier-Stokes equations.
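Once the fluxes and source terms are assembled, the update \eqref{eq:moment1} is an ordinary conservative finite volume step. A schematic sketch (the array shapes are our assumptions, not prescribed by the scheme):

```python
import numpy as np

def fv_update(W, F, G, S, dt, dx, dy):
    """One step of W^{n+1} = W^n - dt/dx (F_{i+1/2}-F_{i-1/2})
                            - dt/dy (G_{j+1/2}-G_{j-1/2}) + S; cf. eq. (moment1).

    W : (nx, ny, m)   cell averages of the conserved moments
    F : (nx+1, ny, m) x-interface fluxes
    G : (nx, ny+1, m) y-interface fluxes
    S : (nx, ny, m)   time-integrated source term
    """
    return (W
            - dt / dx * (F[1:, :, :] - F[:-1, :, :])
            - dt / dy * (G[:, 1:, :] - G[:, :-1, :])
            + S)
```

With vanishing flux differences and zero sources the cell averages are left unchanged, as conservation requires.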
\section{Numerical experiments} \label{sec:test} This section solves several 1D and 2D ultra-relativistic flow problems to demonstrate the accuracy and effectiveness of the present genuine BGK schemes, which are compared with the second-order accurate BGK-type and KFVS schemes \cite{abdel2013,qamar2003}. The collision time $\tau$ is taken as \[ \tau=\tau_m+C_{2}\Delta t^{\alpha}_{n}\frac{|P_{L}-P_{R}|}{P_{L}+P_{R}}, \] with $\tau_m = \frac{5\mu}{4p}$ for the viscous flow and $\tau_m=C_{1}\Delta t^{\alpha}_n$ for the inviscid flow, where $C_1$, $C_2$ and $\alpha$ are three constants, and $P_L, P_R$ are the left and right limits of the pressure at the cell interface, respectively. Unless otherwise stated, this section takes $C_1=0.001, C_2=1.5$ and $\alpha=1$, the time step-size $\Delta t_n$ is determined by the CFL condition \eqref{TimeStep1D} or \eqref{TimeStep2D} with a CFL number of 0.4, and the characteristic variables are reconstructed with the van Leer limiter. \subsection{1D Euler case} \begin{example}[Accuracy test]\label{ex:accurary1D}\rm To check the accuracy of our BGK method, we first solve a smooth problem which describes a sine wave propagating periodically in the domain $\Omega=[0,1]$. The initial conditions are taken as \[ n (x,0)= 1+0.5\sin(2\pi x), \quad u_1(x,0)=0.2, \quad p(x,0)=1,\] and the corresponding exact solutions are \[ n (x,t)= 1+0.5\sin(2\pi (x-0.2t)), \quad u_1(x,t)=0.2, \quad p(x,t)=1.\] The computational domain $\Omega$ is divided into $N$ uniform cells and the periodic boundary conditions are specified at $x=0,1$.
\scriptsize \begin{table}[H] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \caption{Example \ref{ex:accurary1D}: Numerical errors of $n$ in $l^1, l^2$-norms and convergence rates at $t = 0.2$ with or without limiter.}\label{accuracy} \begin{center} \begin{tabular}{*{9}{c}} \toprule \multirow{2}*{$N$} &\multicolumn{4}{c}{With limiter} &\multicolumn{4}{c}{Without limiter}\\ \cmidrule(lr){2-5}\cmidrule(lr){6-9} & $l^1$ error & $l^1$ order & $l^2$ error & $l^2$ order & $l^1$ error & $l^1$ order & $l^2$ error & $l^2$ order\\ \midrule 25 & 1.6793e-03 & -- &2.5667e-03 & -- &6.0337e-04 &-- &6.7007e-04 &-- \\ 50 & 4.9516e-04 & 1.7619 &8.2151e-04 &1.6436 &1.5275e-04 &1.9819 &1.6965e-04 &1.9818\\ 100 & 1.3012e-04 & 1.9281 &2.6823e-04 &1.6148 &3.8305e-05 &1.9956 &4.2559e-05 &1.9950\\ 200 & 3.4917e-05 & 1.8978 &8.5622e-05 &1.6474 &9.5628e-06 &2.0020 &1.0621e-05 &2.0025\\ 400 & 8.2820e-06 & 2.0759 &2.6141e-05 &1.7117 &2.3904e-06 &2.0002 &2.6550e-06 &2.0001\\ \bottomrule \end{tabular} \end{center} \end{table} \end{example} Table \ref{accuracy} gives the $l^1$- and $l^2$-errors at $t=0.2$ and the corresponding convergence rates for the BGK scheme with $\alpha = 2$ and $C_1=C_2=1$. The results show that a second-order rate of convergence is obtained by our BGK scheme, although the van Leer limiter causes a slight loss of accuracy. \begin{example}[Riemann problem I]\rm \label{ex:1DRP1} This is a Riemann problem with the following initial data \begin{equation} \label{exeq:1DRP1} (n,u_1,p)(x,0)= \begin{cases} (1.0,1.0,3.0),& x<0.5,\\ (1.0,-0.5,2.0),& x>0.5. \end{cases} \end{equation} \end{example} The initial discontinuity will evolve as a left-moving shock wave, a right-moving contact discontinuity, and a right-moving shock wave. Fig.
\ref{fig:1DRP1} displays the numerical results at $t=0.5$ and their close-ups obtained by using our BGK scheme (``{$\circ$}"), the BGK-type scheme (``{$\times$}"), and the KFVS scheme (``{+}") with 400 uniform cells in the domain $[0,1]$, where the solid lines denote the exact solutions. It can be seen that our BGK scheme resolves the contact discontinuity better than the second-order accurate BGK-type and KFVS schemes, while all three schemes capture this wave configuration well. \begin{figure} \caption{Example \ref{ex:1DRP1}.} \label{fig:1DRP1} \end{figure} \begin{example}[Riemann problem II]\label{ex:1DRP2}\rm The initial conditions of the second Riemann problem are \begin{equation} \label{exeq:1DRP2} (n,u_1,p)(x,0)= \begin{cases} (5.0,0.0,10.0),& x<0.5,\\ (1.0,0.0,0.5),& x>0.5. \end{cases} \end{equation} \end{example} \begin{figure} \caption{Example \ref{ex:1DRP2}.} \label{fig:1DRP2} \end{figure} Fig. \ref{fig:1DRP2} shows the numerical solutions at $t=0.5$ obtained by using our BGK scheme (``{$\circ$}"), the BGK-type scheme (``{$\times$}"), and the KFVS scheme (``{+}") with 400 uniform cells within the domain $[0,1]$, where the solid line denotes the exact solution. It is seen that the solution consists of a left-moving rarefaction wave, a contact discontinuity, and a right-moving shock wave; the computed solutions agree well with the exact solutions, and the rarefaction and shock waves are well resolved. Moreover, our BGK scheme exhibits better resolution of the contact discontinuity than the BGK-type and KFVS schemes. \begin{example}[Riemann problem III]\label{ex:1DRP3}\rm The initial data are \begin{equation} \label{eqex:1DRP3} (n,u_1,p)(x,0)= \begin{cases} (1.0,-0.5,2.0),& x<0.5,\\ (1.0,0.5,2.0),& x>0.5. \end{cases} \end{equation} The initial discontinuity will evolve as a left-moving rarefaction wave, a stationary contact discontinuity, and a right-moving rarefaction wave. Fig.
\ref{fig:1DRP3} plots the numerical results at $t=0.5$ obtained by using our BGK scheme (``{$\circ$}"), the BGK-type scheme (``{$\times$}"), and the KFVS scheme (``{+}") with 400 uniform cells in the domain $[0,1]$, where the solid line denotes the exact solution. It is seen that there is an undershoot in the number density near the contact discontinuity, which also happens in the non-relativistic case. \end{example} \begin{figure} \caption{Example \ref{ex:1DRP3}.} \label{fig:1DRP3} \end{figure} \begin{example}[Perturbed shock tube problem] \label{ex:sinewave}\rm The initial data are \begin{equation} \label{exeq:sinewave} (n,u_1,p)(x,0)= \begin{cases} (1.0,0.0,1.0),& x<0.5,\\ ( n _r,0.0,0.1),& x>0.5, \end{cases} \end{equation} where $ n _r=0.125 - 0.0875\sin(50(x-0.5))$. It is a perturbed shock tube problem, which has been widely used to test the ability of shock-capturing schemes to resolve small-scale flow features in the non-relativistic flow. \end{example} \begin{figure} \caption{Example \ref{ex:sinewave}.} \label{fig:sinewave} \end{figure} Fig. \ref{fig:sinewave} plots the numerical results at $t=0.5$ in the computational domain $\Omega=[0,1]$ obtained by using our BGK scheme (``{$\circ$}"), the BGK-type scheme (``{$\times$}"), and the KFVS scheme (``{+}") with 400 uniform cells. Those are compared with the reference solution (the solid line) obtained by using the KFVS scheme with a finer mesh of 10000 uniform cells. It is seen that the shock wave is moving into a sinusoidal density field, some complex but smooth structures are generated on the left-hand side of the shock wave when the shock wave interacts with the sine wave, and our BGK scheme is obviously better than the BGK-type and KFVS schemes in resolving those complex structures. Since the continuity equation in the Euler equations decouples from the other equations for the pressure and velocity, one does not see the effect of the perturbation in the pressure \cite{kunik2004bgktype}.
\begin{example}[Collision of blast waves]\label{ex:blast}\rm This test simulates the collision of blast waves to evaluate the performance of the genuine BGK scheme and the BGK-type and KFVS schemes for flows with strong discontinuities. The initial data are taken as follows \begin{equation} \label{eqex:blast} (n,u_1,p)(x,0)= \begin{cases} (1.0,0.0,100.0),& 0<x<0.1,\\ (1.0,0.0,0.06),& 0.1<x<0.9,\\ (1.0,0.0,10.0),& 0.9<x<1.0. \end{cases} \end{equation} \end{example} Reflecting boundary conditions are specified at the two ends of the unit interval $[0,1]$. Fig. \ref{fig:blast} plots the numerical results at $t=0.75$ obtained by using our BGK scheme (``{$\circ$}"), the BGK-type scheme (``{$\times$}"), and the KFVS scheme (``{+}") with 700 uniform cells within the domain $[0,1]$. It is found that the solutions at $t=0.75$ are bounded by two shock waves and those schemes can resolve those shock waves well. However, the genuine BGK scheme exhibits better resolution of the contact discontinuity than the BGK-type and KFVS schemes. \begin{figure} \caption{Example \ref{ex:blast}.} \label{fig:blast} \end{figure} \subsection{2D Euler case} \begin{example}[Accuracy test]\label{ex:accurary2D}\rm To check the accuracy of our BGK scheme, we solve a smooth problem which describes a sine wave propagating periodically in the domain $\Omega=[0,1]\times[0,1]$ at an angle $\alpha={45}^{\circ}$ with the $x$-axis. The initial conditions are taken as follows \[ n (x,y,0)= 1+0.5\sin(2\pi (x+y)), u_1(x,y,0)=u_2(x,y,0)=0.2, p(x,y,0)=1,\] so that the exact solutions are \[ n (x,y,t)= 1+0.5\sin(2\pi (x-0.2t + y-0.2t)), u_1(x,y,t)=u_2(x,y,t)=0.2, p(x,y,t)=1.\] The computational domain $\Omega$ is divided into $N\times N$ uniform cells and the periodic boundary conditions are specified.
\end{example} \begin{table}[H] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \caption{Example \ref{ex:accurary2D}: Numerical errors at $t = 0.1$ in $l^1, l^2$-norms and convergence rates with or without limiter.}\label{tab:accuracy2D} \begin{center} \begin{tabular}{*{9}{c}} \toprule \multirow{2}*{N} &\multicolumn{4}{c}{With limiter} &\multicolumn{4}{c}{Without limiter}\\ \cmidrule(lr){2-5}\cmidrule(lr){6-9} & $l^1$ error & $l^1$ order & $l^2$ error & $l^2$ order & $l^1$ error & $l^1$ order & $l^2$ error & $l^2$ order\\ \midrule 25 & 1.7316e-03 & -- &2.5820e-03 & -- &6.1369e-04 &-- &6.8214e-04 &-- \\ 50 & 5.3784e-04 & 1.6869 &9.1457e-04 &1.4974 &1.5610e-04 &1.9751 &1.7340e-04 &1.9759\\ 100 & 1.4248e-04 & 1.9164 &2.8992e-04 &1.6574 &3.9584e-05 &1.9795 &4.3962e-05 &1.9798\\ 200 & 3.8119e-05 & 1.9022 &9.5759e-05 &1.5982 &9.8942e-06 &2.0003 &1.0989e-05 &2.0002\\ 400 & 1.0923e-05 & 1.8031 &3.1779e-05 &1.5914 &2.4837e-06 &1.9941 &2.7586e-06 &1.9941\\ \bottomrule \end{tabular} \end{center} \end{table} Table \ref{tab:accuracy2D} gives the $l^1$- and $l^2$-errors at $t=0.1$ and the corresponding convergence rates for the BGK scheme with $\alpha = 2$ and $C_1=C_2=1$. The results show that the 2D BGK scheme is second-order accurate and the van Leer limiter slightly affects the accuracy. To verify the capability of our genuine BGK scheme in capturing complex 2D relativistic wave configurations, we will solve three inviscid problems: the implosion in a box, the cylindrical explosion, and the ultra-relativistic jet. \begin{example}[Implosion in a box]\rm\label{ex:implosion} This example considers a 2D Riemann problem inside the square domain $[0,2]\times[0,2]$ with reflecting walls. A square with side length 0.5 is embedded in the center of the outer box of side length 2. The number density is 4 and the pressure is 10 inside the small box, while both the density and the pressure are 1 outside the small box. The fluid velocities are zero everywhere.
\end{example} Figs.~\ref{fig:implosion3} and \ref{fig:implosion12} give the contours of the density, pressure and velocities at times $t=3$ and $12$ obtained by our BGK scheme on the uniform mesh of $400\times400$ cells, respectively. The results show that the genuine BGK scheme captures the complex wave interaction well. Fig. \ref{fig:implosioncompare} gives a comparison of the numerical densities along the line $y=1$ calculated by using the genuine BGK scheme (``{$\circ$}"), the BGK-type scheme (``{$\times$}"), and the KFVS scheme (``{+}"), respectively. Obviously, the genuine BGK scheme resolves the complex wave structure better than the BGK-type and KFVS schemes. \begin{figure} \caption{Example \ref{ex:implosion}.} \label{fig:implosion3} \end{figure} \begin{figure} \caption{Example \ref{ex:implosion}.} \label{fig:implosion12} \end{figure} \begin{figure} \caption{Example \ref{ex:implosion}.} \label{fig:implosioncompare} \end{figure} \begin{example}[Cylindrical explosion problem]\rm\label{ex:cylindrical} Initially, there is a high-density, high-pressure circle with a radius of 0.2 embedded in a low-density, low-pressure medium within the square domain $[0,1]\times[0,1]$. Inside the circle, the number density is 2 and the pressure is 10, while outside the circle the number density and pressure are 1 and 0.3, respectively. The velocities are zero everywhere. \end{example} Fig. \ref{fig:ccontour} displays the contour plots at $t = 0.2$ obtained by using the BGK scheme on the mesh of $200\times200$ uniform cells. The results show that a circular shock wave and a circular contact discontinuity travel away from the center, and a circular rarefaction wave propagates toward the center of the circle. Fig. \ref{fig:ccompare} gives a comparison of the number density and pressure along the line $y=0.5$ obtained by the BGK, BGK-type, and KFVS schemes, respectively. The symbols ``{$\circ$}", ``$\times$" and ``{+}" denote the solutions obtained by using the BGK, BGK-type and KFVS schemes.
It can be observed that all of them give similar results. However, the BGK scheme resolves the discontinuities better than the KFVS scheme. \begin{figure} \caption{Example \ref{ex:cylindrical}.} \label{fig:ccontour} \end{figure} \begin{figure} \caption{Example \ref{ex:cylindrical}.} \label{fig:ccompare} \end{figure} \begin{example}[Ultra-relativistic jet]\rm\label{ex:Jet} The dynamics of relativistic jets, which is relevant in astrophysics, has been widely studied by numerical methods in the literature \cite{3DJET1999,RAM2006,Marti1997}. This test simulates a relativistic jet with the computational region $[0,12]\times[-3.5,3.5]$ and $\alpha = C_1=C_2=1$. The initial states for the relativistic jet beam are \begin{align*} ( n _b,u_{1,b},u_{2,b},p_b)=(0.01,0.99,0.0,10.0),\ \ ( n _m,u_{1,m},u_{2,m},p_m)=(1.0,0.0,0.0,10.0), \end{align*} where the subscripts $b$ and $m$ correspond to the beam and medium, respectively. \end{example} The initial relativistic jet is injected through a unit-wide nozzle located at the middle of the left boundary, while a reflecting boundary is used outside of the nozzle. Outflow boundary conditions with zero gradients of the variables are imposed at the other parts of the domain boundary. Fig. \ref{fig:Jet} shows the numerical results at $t=5,6,7,8$ obtained by our BGK scheme on the mesh of $600\times350$ uniform cells. The average speed of the jet head is 0.91, which is close to the theoretical estimate of 0.87 in \cite{Marti1997}. \begin{figure} \caption{Example \ref{ex:Jet}.} \label{fig:Jet} \end{figure} \subsection{Navier-Stokes case} This section presents two viscous flow examples to test the genuine BGK scheme \eqref{eq:moment1} for the ultra-relativistic Navier-Stokes equations. Because the extrapolation \eqref{eq:extra} requires the numerical solutions at $t=t_{n-1}$ and $t_{n-2}$, the ``initial'' data at the first several time levels have to be specified for the BGK scheme in advance.
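The collision time rule used throughout these tests combines a physical part $\tau_m$ with a pressure-jump part that adds dissipation near discontinuities. A small sketch (the function name and the averaging used for the interface pressure in the viscous branch are our assumptions):

```python
def collision_time(dt, p_left, p_right, mu=None, C1=1e-3, C2=1.5, alpha=1.0):
    """tau = tau_m + C2 * dt^alpha * |P_L - P_R| / (P_L + P_R), with
    tau_m = 5*mu/(4*p) for viscous flow and tau_m = C1 * dt^alpha otherwise."""
    if mu is not None:                       # viscous branch
        p = 0.5 * (p_left + p_right)         # assumption: mean as interface pressure
        tau_m = 5.0 * mu / (4.0 * p)
    else:                                    # inviscid branch
        tau_m = C1 * dt**alpha
    return tau_m + C2 * dt**alpha * abs(p_left - p_right) / (p_left + p_right)
```

In smooth regions ($P_L=P_R$) the jump term vanishes and $\tau$ reduces to $\tau_m$, so the scheme stays close to the hydrodynamic limit there.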
In the following examples, the macroscopic variables at $t=t_0+0.5\Delta t_0$ and $t_0+\Delta t_0$ are first obtained by using the initial data, the time partial derivatives at $t=t_0$, and the BGK scheme proposed in Section \ref{sec:GKSNS}, where the first-order partial derivatives in time are derived by using the exact solutions. Then, the time partial derivatives at $t=t_0+\Delta t_0$ for the macroscopic variables are calculated by using the extrapolation \eqref{eq:extra}, and the solutions are further evolved in time by the BGK scheme with the extrapolation \eqref{eq:extra}. \begin{example}[Longitudinally boost-invariant system]\rm \label{ex:boost} For ease of numerical implementation, this test focuses on longitudinally boost-invariant systems. They are conveniently described in curvilinear coordinates $x_m=(\tilde{t}, y, z, \eta)$, where $\tilde{t} = \sqrt{t^2-x^2}$ is the longitudinal proper time, $\eta=\frac{1}{2}\ln\left(\frac{t+x}{t-x}\right)$ is the space-time rapidity, and $(y, z)$ are the usual Cartesian coordinates in the plane transverse to the beam direction $x$. The systems are realized by assuming a specific ``scaling'' velocity profile $u_1 = x/t$ along the beam direction, and the initial conditions are independent of the longitudinal reference frame (boost invariance), that is to say, they do not depend on $\eta$. The readers are referred to \cite{Song2009} for more details. Our computations consider the boost-invariant longitudinal expansion without transverse flow, so that the relativistic Navier-Stokes equations read \begin{equation*} \frac{\partial p}{\partial \tilde{t}} + \frac{4}{3\tilde{t}}\left(p-\frac{\mu}{3\tilde{t}}\right)=0, \ \ \frac{\partial n }{\partial \tilde{t}} = - n \partial_{\alpha} U^{\alpha}. \end{equation*} Since $u_{1}=\frac{x}{t}$, $U^0=t/\tilde{t}$ and $U^1=x/\tilde{t}$, it holds that $\partial_{\alpha} U^{\alpha} = 1/\tilde{t}$.
Thus the equation for $ n $ becomes \begin{equation*} \frac{\partial n }{\partial \tilde{t}} = -\frac{ n }{\tilde{t}}. \end{equation*} The analytical solutions are \begin{equation*} p = C_1 \tilde{t}^{-\frac{4}{3}} + \frac{4}{3}\mu\tilde{t}^{-1},\ \ n = C_2\tilde{t}^{-1}, \end{equation*} where $C_1=p_0(t_0^2-x_0^2)^{{\frac{2}{3}}}-\frac{4\mu}{3}(t_0^2-x_0^2)^{{\frac{1}{6}}}$ and $C_2= n _0\sqrt{t_0^2-x_0^2}$. We take $x_0=0, t_0=1, p_0=1, n _0=1$, $\mu=0.0005$, and $\Omega=[-\frac{t_0}{2},\frac{t_0}{2}]$. Moreover, the time partial derivatives of $ n , u_{1}, p$ at $t=t_0$ are given by the exact solution. Fig. \ref{fig:boost} shows the number density, velocity and pressure at $t=1.2$ obtained by our 1D BGK scheme with 20 cells (``${\triangle}$") and 40 cells (``{$\circ$}"), respectively. The numerical solutions predicted by our BGK scheme fit the exact solutions very well. Table \ref{tab:boost} lists the $l^1$- and $l^2$-errors at $t=1.2$ and the corresponding convergence rates for our BGK scheme. Those data show that a second-order rate of convergence can be obtained by our BGK scheme.
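The closed-form pressure above can be checked directly against the reduced pressure equation; a short numerical verification with a central difference (helper names are ours):

```python
def p_exact(t_tilde, C1, mu):
    # p = C1 * t^(-4/3) + (4/3) * mu / t, the analytical pressure above
    return C1 * t_tilde**(-4.0 / 3.0) + 4.0 * mu / (3.0 * t_tilde)

def residual(t_tilde, C1, mu, h=1e-6):
    # residual of dp/dt + (4/(3t)) * (p - mu/(3t)) = 0 via central differences
    dpdt = (p_exact(t_tilde + h, C1, mu) - p_exact(t_tilde - h, C1, mu)) / (2 * h)
    p = p_exact(t_tilde, C1, mu)
    return dpdt + 4.0 / (3.0 * t_tilde) * (p - mu / (3.0 * t_tilde))
```

With the parameters of this example ($t_0=1$, $x_0=0$, $p_0=1$, $\mu=5\times10^{-4}$, hence $C_1=p_0-\frac{4\mu}{3}$ and $\tilde{t}_0=1$), the residual vanishes up to the truncation error of the difference quotient.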
\end{example} \begin{figure} \caption{Example \ref{ex:boost}.} \label{fig:boost} \end{figure} \begin{table}[H] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \caption{Example \ref{ex:boost}: Numerical errors of $ n $ in $l^1$ and $l^2$-norms and convergence rates at $t = 1.2$.}\label{tab:boost} \begin{center} \begin{tabular}{c|cc|cc} \hline $N$ & $l^1$ error & $l^1$ order & $l^2$ error & $l^2$ order \\ \hline 10 & 8.9214e-03 & -- & 1.2028e-02 & -- \\ 20 & 1.9291e-03 & 2.2094 & 2.4837e-03 & 2.2759 \\ 40 & 4.9766e-04 & 1.9546 & 6.2325e-04 & 1.9946 \\ 80 & 1.3682e-04 & 1.8629 & 1.6760e-04 & 1.8948 \\ \hline \end{tabular} \end{center} \end{table} \begin{example}[Heat conduction]\rm\label{ex:heat} This test considers the problem of heat conduction between two parallel plates, which are assumed to be infinite and separated by a distance $H$. Moreover, both plates are always stationary. The temperatures of the lower and upper plates are given by $T_0$ and $T_1$, respectively. The viscosity $\mu$ is a constant. Based on the above assumptions, the Navier-Stokes equations can be simplified as \begin{equation*} \frac{\partial }{\partial y}\left(\frac{1}{T^2}\frac{\partial T}{\partial y}\right) = 0,\quad T(0)=T_0,\quad T(H)=T_1, \end{equation*} whose analytic solution is \begin{equation}\label{eq:heatexact} T(y)=\frac{HT_0T_1}{HT_1-(T_1-T_0)y}. \end{equation} Our computation takes $H=1, p=0.8, u_1=0.2, u_2=0, \mu=5\times10^{-3}, T_0=0.1, T_1=1.0002T_0$, and $0.5(T_0+T_1)$ as the initial value for the temperature $T$ in the entire domain. Moreover, the initial time partial derivatives are given by $ n _t(x,0)=0, v_{1t}(x,0)=0, v_{2t}(x,0)=0$ and $p_t(x,0)=0$. Because $u_1\neq 0$, the 2D BGK scheme should be used for the numerical simulation. The left figure in Fig.
\ref{fig:heat} plots the numerical temperature (``{$\circ$}") obtained by the 2D BGK scheme in comparison with the steady-state analytic solution (solid line) given by \eqref{eq:heatexact}. It is seen that the numerical solution agrees well with the analytic solution. The right figure in Fig. \ref{fig:heat} shows the convergence of the temperature to the steady state, measured in the $l^1$-error between the numerical and analytic solutions. \end{example} \begin{figure} \caption{Example \ref{ex:heat}.} \label{fig:heat} \end{figure} \section{Conclusions} \label{sec:conclusion} This paper developed second-order accurate genuine BGK schemes in the framework of the finite volume method for 1D and 2D ultra-relativistic flows. Different from the existing KFVS or BGK-type schemes for the ultra-relativistic Euler equations, the present genuine BGK schemes were derived from the analytical solution of the Anderson-Witting model, which was given for the first time and included the ``genuine'' particle collisions in the gas transport process. The genuine BGK schemes were also developed for ultra-relativistic viscous flows, and two ultra-relativistic viscous examples were designed. Several 1D and 2D numerical experiments were conducted to demonstrate that the proposed BGK schemes are accurate and stable in simulating ultra-relativistic inviscid and viscous flows, and have higher resolution at contact discontinuities than the KFVS or BGK-type schemes. The present BGK schemes can be easily extended to 3D Cartesian grids for ultra-relativistic flows, and it would be interesting to develop genuine BGK schemes for special and general relativistic flows. \end{document}
\begin{document} \title{The number of occurrences of a fixed spread among $n$ directions in vector spaces over finite fields}\author{Le Anh Vinh\\ Mathematics Department\\ Harvard University\\ Cambridge, MA 02138, US\\ [email protected]}\maketitle \begin{abstract} We study a finite analog of a problem of Erd\"os, Hickerson and Pach on the maximum number of occurrences of a fixed angle among $n$ directions in three-dimensional spaces. \end{abstract} \section{Introduction} Let $\mathbbm{F}_q$ denote the finite field with $q$ elements, where $q \gg 1$ is an odd prime power. For any $x, y \in \mathbbm{F}_q^d$, the distance between $x, y$ is defined as $\|x - y\|= (x_1 - y_1)^2 + \ldots + (x_d - y_d)^2$. Let $E \subset \mathbbm{F}_q^d$, $d \geqslant 2$. Then the finite analog of the classical Erd\"os distance problem is to determine the smallest possible cardinality of the set \begin{equation} \Delta (E) =\{\|x - y\|: x, y \in E\}, \end{equation} viewed as a subset of $\mathbbm{F}_q$. Bourgain, Katz and Tao \cite{bourgain-katz-tao} showed, using intricate incidence geometry, that for every $\varepsilon > 0$, there exists $\delta > 0$ such that if $E \subset \mathbbm{F}_q^2$ and $C_{\varepsilon}^1 q^{\varepsilon} \leqslant |E| \leqslant C_{\varepsilon}^2 q^{2 - \varepsilon}$, then $| \Delta (E) | \geqslant C_{\delta} |E|^{\frac{1}{2} + \delta}$ for some constants $C_{\varepsilon}^1, C_{\varepsilon}^2$ and $C_{\delta}$. The relationship between $\varepsilon$ and $\delta$ in their argument is difficult to determine. Going up to higher dimensions using the arguments of Bourgain, Katz and Tao is quite subtle. Iosevich and Rudnev \cite{iosevich-rudnev} established the following result using a Fourier analytic method. \begin{theorem}(\cite{iosevich-rudnev}) Let $E \subset \mathbbm{F}_q^d$ such that $|E| \gtrsim C q^{d / 2}$ for $C$ sufficiently large. Then \begin{equation} | \Delta (E) | \gtrsim \min \left\{ q, \frac{|E|}{q^{(d - 1) / 2}} \right\} .
\end{equation} \end{theorem} Iosevich and his collaborators investigated several related results using this method in a series of papers \cite{covert,hart-iosevich-solymosi,hart-iosevich,hart-iosevich-koh-rudnev,iosevich-koh,iosevich-rudnev,iosevich-senger}. Using a graph theoretic method, the author reproved some of these results in \cite{vinh-ejc1,vinh-ejc2,vinh-ejc,vinh-fkw,vinh-dg}. The advantages of the graph theoretic method are twofold. First, we can reprove and sometimes improve several known results in vector spaces over finite fields. Second, our approach works transparently in the non-Euclidean setting. In this note, we use the graph theoretic method to study a finite analog of a related problem of Erd\"os, Hickerson and Pach \cite{ehp}. \begin{problem} (\cite{ehp}) Give good asymptotic bounds for the maximum number of occurrences of a fixed angle $\gamma$ among $n$ unit vectors in three-dimensional spaces. \end{problem} If $\gamma = \pi / 2$, the maximum number of orthogonal pairs is known to be $\Theta (n^{4 / 3})$, as this problem is equivalent to bounding the number of point-line incidences in the plane (see \cite{brass} for a detailed discussion). For any other angle $\gamma \neq \pi / 2$, we are far from having good estimates for the maximum number of occurrences of $\gamma$. The only known upper bound is still $O (n^{4 / 3})$, the same as for orthogonal pairs. For the lower bound, Swanepoel and Valtr \cite{sv} established the bound $\Omega (n \log n)$, improving an earlier result of Erd\"os, Hickerson and Pach \cite{ehp}. It is, however, widely believed that the $\Omega (n \log n)$ lower bound can be much improved. The purpose of this note is to study an analog of this problem in three-dimensional spaces over finite fields. In vector spaces over finite fields, however, the separation of lines is not measured by the transcendental notion of angle.
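As a quick numerical illustration of the Iosevich-Rudnev bound quoted above (an illustration we add here, not part of the original argument), the following Python sketch samples a set $E \subset \mathbbm{F}_q^2$ with $|E|$ well above $q^{d/2}$ and computes the distance set $\Delta(E)$; the bound then predicts that $\Delta(E)$ covers a constant proportion of $\mathbbm{F}_q$.

```python
import itertools
import random

q, d = 13, 2  # a small odd prime; parameters chosen only for illustration

def distance(x, y):
    """Finite-field distance ||x - y|| = sum_i (x_i - y_i)^2 in F_q."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) % q

random.seed(0)
# A random E in F_q^2 with |E| = 60, well above q^{d/2} = q.
E = random.sample(list(itertools.product(range(q), repeat=d)), 60)
delta = {distance(x, y) for x in E for y in E}
# The Iosevich-Rudnev bound gives |Delta(E)| >~ q in this regime.
print(len(delta), "distances out of", q)
```

In practice such a random set realizes essentially every element of $\mathbbm{F}_q$ as a distance, consistent with the theorem.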
A remarkable approach of Wildberger \cite{norman1,norman2}, recasting metrical geometry in a purely algebraic setting, eliminates the difficulties in defining an angle by using instead the notion of spread - in Euclidean geometry the square of the sine of the angle between two rays lying on those lines (the notion of spread will be defined precisely in Section 2). Using this notion, we can now state the main result of this note. \begin{theorem}\label{mt1-man} Let $E$ be a set of unit vectors in $\mathbbm{F}_q^3$ with $q^{3/2} \ll |E| \ll q^2$. For any $\gamma \in \mathbbm{F}_q$, let $f_{\gamma}(E)$ denote the number of occurrences of a fixed spread $\gamma$ among $E$. Then $f_{\gamma} (E) = \Theta (|E|^2 / q)$ if $1-\gamma$ is a square in $\mathbbm{F}_q$ and $f_{\gamma}(E)=0$ otherwise. \end{theorem} The rest of this note is organized as follows. In Section 2, we follow Wildberger's construction of affine and projective rational trigonometry to define the notions of quadrance and spread. We then define the main tool of our proof, the finite Poincar\'e graphs. Using these graphs, we give a proof of Theorem \ref{mt1-man} in Section 3. \section{Quadrance, Spread and finite Poincar\'e graphs} In this section, we follow Wildberger's construction of affine and projective rational trigonometry over finite fields. Interested readers can see \cite{norman1,norman2} for a detailed discussion. \subsection{Quadrance and Spread: affine rational geometry} We work in a three-dimensional vector space over a field $F$, not of characteristic two. Elements of the vector space are called points or vectors (these two terms are equivalent and will be used interchangeably) and are denoted by $U, V, W$ and so on. The zero vector or point is denoted $O$. The unique line $l$ through distinct points $U$ and $V$ is denoted $UV$. For a non-zero point $U$ the line $OU$ is denoted $[U]$. Fix a symmetric bilinear form and represent it by $U \cdot V$.
In terms of this form, the line $UV$ is perpendicular to the line $WZ$ precisely when $(V-U) \cdot (Z-W) = 0$. A point $U$ is a null point or null vector when $U \cdot U = 0$. The origin $O$ is always a null point, and there are others as well. The distance (or so-called \textit{quadrance} in Wildberger's construction) between the points $U$ and $V$ is the number \begin{equation}\label{quadrance} Q(U,V) = (V-U) \cdot (V-U). \end{equation} The line $UV$ is a null line precisely when $Q(U,V) = 0$, or equivalently when it is perpendicular to itself. In Euclidean geometry, the separation of lines is traditionally measured by the transcendental notion of \textit{angle}. The difficulties in defining an angle precisely, and in extending the concept over an arbitrary field, are eliminated in rational trigonometry by using instead the notion of \textit{spread} - in Euclidean geometry the square of the sine of the angle between two rays lying on those lines. Precisely, the \textit{spread} between the non-null lines $UW$ and $VZ$ is the number \begin{equation}\label{spread} s(UW,VZ) = 1 - \frac{((W-U)\cdot (Z-V))^2}{Q(U,W)Q(V,Z)}. \end{equation} This depends only on the two lines, not on the choice of points lying on them. The spread between two non-null lines is $1$ precisely when they are perpendicular. Given a large set $E$ of unit vectors in $\mathbbm{F}_q^3$, our aim is to study the number of occurrences of a fixed spread $\gamma \in \mathbbm{F}_q$ among $E$. \subsection{Finite Poincar\'e graphs: projective rational geometry} Fix a three-dimensional vector space over a field with a symmetric bilinear form $U \cdot V$ as in the previous subsection. A line through the origin $O$ will now be called a projective point and denoted by a small letter such as $u$. The space of such projective points is called the projective space. If $V$ is a non-zero vector in the vector space, then $v = [V]$ denotes the projective point $OV$.
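Since the quadrance and the spread are rational expressions, they can be computed directly in $\mathbbm{F}_q$, with division realized as multiplication by a modular inverse. The following Python sketch (our illustration, using the standard dot product as the bilinear form) evaluates the spread \eqref{spread} for lines through $O$ and checks that it is independent of the chosen direction vectors and equals $1$ exactly for perpendicular lines.

```python
q = 11  # an odd prime; all arithmetic below is in F_q

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % q

def spread(u, v):
    """Spread s(OU, OV) between the non-null lines OU and OV."""
    qu, qv = dot(u, u), dot(v, v)
    assert qu != 0 and qv != 0, "null line"
    inv = pow(qu * qv, q - 2, q)      # inverse by Fermat's little theorem
    return (1 - dot(u, v) ** 2 * inv) % q

U, V = (1, 2, 3), (2, 0, 5)
s = spread(U, V)
# The spread depends only on the lines [U] and [V]: rescaling a
# direction vector by any nonzero scalar c leaves it unchanged.
for c in range(1, q):
    assert spread(tuple(c * x % q for x in U), V) == s
# Perpendicular lines have spread 1.
print(s, spread((1, 0, 0), (0, 1, 0)))
```

The invariance under rescaling is exactly the statement that the spread is a function of the two lines rather than of the chosen points.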
A projective point is a null projective point when some non-zero null point lies on it. Two projective points $u = [U]$ and $v = [V]$ are perpendicular when they are perpendicular as lines. The \textit{projective quadrance} between the non-null projective points $u = [U]$ and $v = [V]$ is the number \begin{equation}\label{p-quadrance} q(u,v) = 1 - \frac{(U\cdot V)^2}{(U \cdot U)(V \cdot V)}. \end{equation} This is the same as the spread $s(OU,OV)$, and has the value $1$ precisely when the projective points are perpendicular. The \textit{projective spread} between the intersecting projective lines $wu = [W,U]$ and $wv = [W,V]$ is defined to be the spread between these intersecting planes: \begin{equation}\label{p-spread} S(wu,wv) = 1 - \frac{\left(\left(U-\frac{U\cdot W}{W \cdot W}W\right)\cdot \left(V - \frac{V\cdot W}{W \cdot W}W\right)\right)^2}{\left(\left(U-\frac{U\cdot W}{W \cdot W}W\right)\cdot \left(U-\frac{U\cdot W}{W \cdot W}W\right)\right)\left(\left(V - \frac{V\cdot W}{W \cdot W}W\right) \cdot \left(V - \frac{V\cdot W}{W \cdot W}W\right)\right)}. \end{equation} This approach is entirely algebraic and elementary, which allows one to formulate two-dimensional hyperbolic geometry as a projective theory over a general field. Precisely, over the real numbers, the projective quadrance in the projective rational model is the negative of the square of the hyperbolic sine of the hyperbolic distance between the corresponding points in the Poincar\'e model, and the projective spread is the square of the sine of the angle between corresponding geodesics in the Poincar\'e model (see \cite{norman2}). Let $\Omega$ be the set of square-type non-isotropic $1$-dimensional subspaces of $\mathbbm{F}_q^3$; then $|\Omega| = q(q+1)/2$. For a fixed $\gamma \in \mathbbm{F}_q$, the \textit{finite Poincar\'e graph} $P_q(\gamma)$ has as vertices the points of $\Omega$ and an edge between vertices $[Z],[W]$ if and only if $s(OZ,OW) =\gamma$.
These graphs can be viewed as companions of the well-known (and well-studied) finite upper half plane graphs (see \cite{terras} for a survey on the finite upper half plane graphs). From the definition of the spread, the finite Poincar\'e graph $P_q(\gamma)$ is nonempty if and only if $1-\gamma$ is a square in $\mathbbm{F}_q$. The orthogonal group $O_3(\mathbbm{F}_q)$ acts transitively on $\Omega$, and this action yields a symmetric association scheme $\Psi(O_3(\mathbbm{F}_q),\Omega)$ of class $(q+1)/2$. The relations of $\Psi(O_3(\mathbbm{F}_q),\Omega)$ are given by \begin{eqnarray*} R_1 & = & \{([U],[V]) \in \Omega \times \Omega \mid (U+V) \cdot (U+V) = 0\},\\ R_i & = & \{([U],[V]) \in \Omega \times \Omega \mid (U+V) \cdot (U+V) = 2 + 2 \nu^{- (i - 1)} \} \, (2 \leqslant i \leqslant (q - 1) / 2),\\ R_{(q+1)/2} & = & \{([U], [V]) \in \Omega \times \Omega \mid (U+V) \cdot (U+V) = 2\}, \end{eqnarray*} where $\nu$ is a generator of the multiplicative group of $\mathbbm{F}_q$ and we assume $U\cdot U = 1$ for all $[U] \in \Omega$ (see \cite{bannai-hao-song}, Section 6). Note that $\Psi(O_3(\mathbbm{F}_q),\Omega)$ is isomorphic to the association scheme $PGL(2,q)/D_{2(q-1)}$, where $D_{2(q-1)}$ is a dihedral subgroup of order $2(q-1)$. The graphs $(\Omega,R_i)$ are not Ramanujan in general, but fortunately, they are asymptotically Ramanujan for large $q$. The following theorem summarizes the results from \cite{bannai-shimabukuro-tanaka}, Section 2, in a rough form. \begin{theorem} (\cite{bannai-shimabukuro-tanaka})\label{bhs} The graphs $(\Omega,R_i)$ ($1\leq i \leq (q+1)/2$) are regular of valency $Cq(1+o(1))$. Let $\lambda$ be any eigenvalue of the graph $(\Omega,R_i)$ with $\lambda \neq$ the valency of the graph; then \[|\lambda| \leq c(1+o(1))\sqrt{q},\] for some $C, c > 0$ (in fact, we can show that $c = 1/2$). \end{theorem} Theorem \ref{bhs} implies that the finite Poincar\'e graphs $P_q(\gamma)$ are asymptotically Ramanujan whenever $1 - \gamma$ is a square in $\mathbbm{F}_q$.
Precisely, we have the following theorem. \begin{theorem} \label{spectral-man} a) If $1 - \gamma$ is not a square in $\mathbbm{F}_q$ then the finite Poincar\'e graph $P_q(\gamma)$ is empty. b) If $1 - \gamma$ is a square in $\mathbbm{F}_q$ then the finite Poincar\'e graph $P_q(\gamma)$ is regular of valency $Cq(1+o(1))$. Let $\lambda$ be any eigenvalue of the graph $P_q(\gamma)$ with $\lambda \neq$ the valency of the graph; then \[|\lambda| \leq c(1+o(1))\sqrt{q},\] for some $C, c > 0$. \end{theorem} \begin{proof} a) Suppose that $[U], [V] \in \Omega$ and $s(OU,OV) = \gamma$; then \[1-\gamma = \frac{(U\cdot V)^2}{(U \cdot U)(V \cdot V)}.\] But $U, V$ are square-type, so $1-\gamma$ is a square in $\mathbbm{F}_q$. b) It is easy to see that the finite Poincar\'e graphs satisfy $P_q(1-\nu^{2-2i}) = (\Omega,R_i)$ for $1\leq i \leq (q-1)/2$ and $P_q(1) = (\Omega,R_{(q+1)/2})$. The theorem follows immediately from Theorem \ref{bhs}. \end{proof} \section{Proof of Theorem \ref{mt1-man}} We call a graph $G = (V, E)$ $(n, d, \lambda)$-regular if $G$ is a $d$-regular graph on $n$ vertices such that the absolute value of each of its eigenvalues but the largest one is at most $\lambda$. It is well known that if $\lambda \ll d$ then an $(n,d,\lambda)$-regular graph behaves similarly to a random graph $G_{n,d/n}$. Precisely, we have the following result (see Corollary 9.2.5 and Corollary 9.2.6 in \cite{alon-spencer}). \begin{theorem} \label{expander} (\cite{alon-spencer}) Let $G$ be an $(n, d, \lambda)$-regular graph. For every set of vertices $B$ of $G$, we have \begin{equation}\label{f2} \left|e (B) - \frac{d}{2 n} |B|^2 \right| \leqslant \frac{1}{2} \lambda |B|, \end{equation} where $e (B)$ is the number of edges in the induced subgraph of $G$ on $B$. \end{theorem} Let $E$ be a set of $m$ unit vectors in $\mathbbm{F}_q^3$; then $E$ can be viewed as a subset of $\Omega$.
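The mixing inequality \eqref{f2} can be sanity-checked exhaustively on a small graph. The sketch below (our illustration, not part of the proof) uses the Petersen graph, which is $(10, 3, 2)$-regular since its adjacency eigenvalues are $3$, $1$ and $-2$, and verifies the inequality over every one of the $2^{10}$ vertex subsets.

```python
import itertools

# Petersen graph: 3-regular on 10 vertices; eigenvalues 3, 1, -2,
# so it is (n, d, lambda)-regular with (10, 3, 2).
outer = [(i, (i + 1) % 5) for i in range(5)]          # outer 5-cycle
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
spokes = [(i, i + 5) for i in range(5)]
edges = [frozenset(e) for e in outer + inner + spokes]
n, d, lam = 10, 3, 2

def e(B):
    """Number of edges of the induced subgraph on the vertex set B."""
    B = set(B)
    return sum(1 for ed in edges if ed <= B)

# Exhaustive check of |e(B) - (d/2n)|B|^2| <= (1/2) lam |B|.
for r in range(n + 1):
    for B in itertools.combinations(range(n), r):
        assert abs(e(B) - d * r * r / (2 * n)) <= lam * r / 2
print("mixing bound holds on all", 2 ** n, "subsets")
```

The same inequality, applied with $n = q(q+1)/2$ and $d = Cq(1+o(1))$, is what drives the edge count in the argument that follows.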
The number of occurrences of a fixed spread $\gamma$ among $E$ can be realized as the number of edges in the induced subgraph of the finite Poincar\'e graph $P_q(\gamma)$ on the vertex set $E$. Thus, from Theorem \ref{spectral-man}, $f_{\gamma}(E) = 0$ if $1 - \gamma$ is not a square in $\mathbbm{F}_q$. Suppose that $1 - \gamma$ is a square in $\mathbbm{F}_q$. From Theorem \ref{spectral-man} and Theorem \ref{expander}, we have \begin{equation}\label{bound-man} \left|f_{\gamma}(E) - \frac{Cq(1+o(1))}{q(q+1)/2}|E|^2\right| \leq \frac{1}{2} c(1+o(1))\sqrt{q}|E|. \end{equation} Since $|E| \gg q^{3/2}$, we have $\frac{1}{2} c(1+o(1))\sqrt{q}|E| \ll \frac{Cq(1+o(1))}{q(q+1)/2}|E|^2$ and the theorem follows. \end{document}
\begin{document} \begin{abstract} We consider certain point patterns of a Euclidean space and calculate the Ellis enveloping semigroup of their associated dynamical systems. The algebraic structure and the topology of the Ellis semigroup, as well as its action on the underlying space, are explicitly described. As an example, we treat the vertex pattern of the Ammann-Beenker tiling of the plane. \end{abstract} \maketitle \section*{\large{\textbf{Introduction}}} This article proposes to study certain aspects of dynamical systems associated with \textit{point patterns} of Euclidean space. The topic of point patterns arose in symbolic dynamics, and also concerns aperiodic tilings. Point patterns have been studied by numerous authors for the last thirty years, after the discovery by Shechtman et al.\ of the physical materials now commonly called \textit{quasicrystals}. In this context, a point pattern of a Euclidean space $\mathbb{R}^d$ is thought of as an alloy, where points are understood as positions of atoms (or molecules or electrons) and the quasicrystalline structure then arises when a certain \textit{long range order} is observed on the disposition of points within the pattern. A great success in the topic of point patterns is the possibility to handle a pattern $\Lambda _0$ of $\mathbb{R}^d$ by considering the \textit{dynamical system} associated to it. The system consists of a space $\mathbb{X}_{\Lambda_0}$, called the \textit{hull} of $\Lambda_0$, which is formed of all other point patterns that locally look like $\Lambda_0$ and is endowed with a suitable compact topology, together with an action of the space $\mathbb{R}^d$ by homeomorphisms. Natural properties of a pattern of geometric, combinatorial or statistical nature are then displayed by topological, dynamical or ergodic features in this dynamical system.
This is particularly true for long range order on point patterns, where the counterpart seems to rely on the existence of \textit{eigenfunctions} for the associated dynamical system. For instance, within the class of substitutive point patterns, the \textit{Meyer property}, which is a strong form of internal order (Moody \cite{Mo}), is equivalent to the existence of a nontrivial eigenfunction for the associated dynamical system (Lee and Solomyak \cite{LeeSo}). This type of statement also exists outside the realm of substitution patterns (Kellendonk and Sadun \cite{KeSa}). Another example concerns the subclass of \textit{model sets}, which can be viewed as the most ordered aperiodic patterns. The property of being a model set is equivalent to being a Meyer set such that continuous eigenfunctions separate a residual subset of elements in its associated hull (see \cite{Auj}, Baake, Lenz and Moody \cite{BaaLenMo}, Lee and Moody \cite{LeeMo}). In other words, model sets are exactly the point patterns with the Meyer property and \textit{almost automorphic} associated dynamical system. A third striking result is that \textit{pure point diffractivity} of a pattern (Hof \cite{Hof}), which is truly of statistical nature (Moody \cite{Mo3}), is known to be equivalent to the existence of a basis of eigenfunctions for the Hilbert space provided by the hull together with a certain ergodic measure (there is a widely developed literature about this aspect of patterns; see for instance Lee, Moody and Solomyak \cite{LeeMoSo} and Baake and Lenz \cite{BaaLe} and references therein). These statements have been proven under various mild assumptions on the pattern considered. A certain form of this eigenvalue problem for a point pattern can be addressed, from a topological point of view, through the knowledge of the \textit{Ellis enveloping semigroup} of its dynamical system $(\mathbb{X}, \mathbb{R}^d)$.
This semigroup was introduced for dynamical systems by Ellis and Gottschalk \cite{EllGo} as a way to study actions of a group on a compact space from an algebraic point of view. In a series of papers, Glasner investigated this semigroup for fairly general dynamical systems (see the review Glasner \cite{Gl} and references therein), and he and Megrelishvili showed in \cite{GlMe} that the Ellis semigroup $E(\mathbb{X}, \mathbb{R}^d)$ obeys a dichotomy: It is either the sequential closure of the acting group $\mathbb{R}^d$ or contains a topological copy of the Stone-\v{C}ech compactification $\beta \mathbb{N}$ of the natural numbers. The former situation admits several equivalent formulations, and when it occurs the underlying dynamical system is called \textit{tame}; see Glasner \cite{Gl0}. Tame systems are dynamically simple: Indeed it is proved in Glasner \cite{Gl2} that they are uniquely ergodic, almost automorphic and measurably conjugate to a Kronecker system. In particular, a point pattern with the Meyer property and a tame dynamical system must be a model set. In this work we propose a qualitative description of the Ellis semigroup of dynamical systems associated with particular point patterns, the \textit{almost canonical model sets}. These particular patterns are relevant in the crystallographic sense, as well as very accessible mathematically: One can get a complete picture of the hull $\mathbb{X}_{\Lambda_0}$ of such patterns (Le \cite{Le}), as well as their associated $C^*$-algebras (a recent source is Putnam \cite{Put}, see references therein), and also compute their cohomology and K-theory groups (Forrest, Hunton and Kellendonk \cite{FoHuKe}, G\"{a}hler, Hunton and Kellendonk \cite{GaHuKe} and Putnam \cite{Put}) as well as the asymptotic exponent of their complexity function (Julien \cite{Ju}).
We show that in our situation it is possible to completely describe the elements of the Ellis semigroup, their action on the underlying space, as well as the algebraic and topological structure of this semigroup. The type of calculations made here can be compared with the calculations performed in Pikula \cite{Pik} about Sturmian and Sturmian-like systems (see also Glasner \cite[Example 4.5]{Gl}). We also show that for those dynamical systems the Ellis semigroup is of first class in the sense of the dichotomy of Glasner and Megrelishvili \cite{GlMe}, that is, almost canonical model sets have tame systems. \section*{\large{\textbf{The contents of the paper}}} To construct a model set of $\mathbb{R}^d$, one begins by considering a higher dimensional Euclidean space $\mathbb{R}^{n+d}$, together with a lattice $\Sigma$ in it, as well as an embedded $d$-dimensional slope, usually placed in an "irrational" manner, which is thought of as the space $\mathbb{R}^d$ itself. Such an environment used to construct a model set is called a \textit{cut and project scheme}. The second step is to consider a suitable region $W$ of the Euclidean subspace $\mathbb{R}^n$ orthogonal to $\mathbb{R}^d$. The model set in question thereby emerges as the orthogonal projection to $\mathbb{R}^d$ of certain points in $\Sigma$, namely those that fall into the region $W$ when projected orthogonally onto $\mathbb{R}^n$. A model set is thus written as \begin{align*} \Lambda _0:= \left\lbrace \gamma ^{\Vert } \; \vert \; \gamma \in \Sigma \; \text{ and } \; \gamma ^{\perp} \in W \right\rbrace , \end{align*} where $^{\Vert }$ and $^{\perp}$ denote the orthogonal projections onto $\mathbb{R}^d$ and $\mathbb{R}^n$ respectively. In the above context we will speak about a \textit{real model set} (see the discussion in Section $1$), the word "real" coming from the fact that the summand $\mathbb{R}^n$ used here to form the cut and project scheme is a Euclidean space.
The dynamical system $(\mathbb{X},\mathbb{R}^d)$ associated with a real model set is of a very particular form: It is an \textit{almost automorphic} extension over a torus $$\mathbb{T}^{n+d}:=\mathbb{R}^{n+d}\diagup _{\Sigma }$$ (see the material of Section $2$). This property will be central in our task, and shows up in the consideration of a certain factor map, known as the \textit{parametrization map} (Baake, Hermisson and Pleasant \cite{BaaHePl} and Schlottmann \cite{Sch}), \begin{align*} \pi : \mathbb{X} \longrightarrow \mathbb{T}^{n+d}. \end{align*} This mapping also demonstrates that any pattern $\Lambda$ in the hull $\mathbb{X}$ of a model set $\Lambda _0$ is itself a model set, such that if $[w,t]_\Sigma \in \mathbb{R}^{n+d}\diagup _{\Sigma }=\mathbb{T}^{n+d}$ is its image, then $\Lambda$ is determined, as a model set, by the region $W+w$ in $\mathbb{R}^n$, next translated by the vector $t$ in $\mathbb{R}^d$. This is described in more detail in the first section of the main text. The first step in determining the Ellis semigroup $E(\mathbb{X},\mathbb{R}^d)$ is to describe it as a \textit{suspension} of another (simpler) semigroup (see Section $3$). To that end we let $\Gamma$ be the subgroup of $\mathbb{R}^d$ obtained as the orthogonal projection of the lattice $\Sigma$ used to construct $\Lambda_0$ as a model set. $\Gamma$ is generally not a lattice in $\mathbb{R}^d$, and will even often be dense in $\mathbb{R}^d$, although it is always finitely generated. We now consider the collection $\Xi ^\Gamma$ of point patterns in $\mathbb{X}$ that are contained, as subsets of $\mathbb{R}^d$, in $\Gamma$. This subset of $\mathbb{X}$ remains stable under the action of any vector of $\mathbb{R}^d$ which lies in $\Gamma$, and when endowed with a suitable topology it gives rise to a new dynamical system $(\Xi ^\Gamma , \Gamma)$. We call this latter system the \textit{subsystem} associated with $\Lambda _0$.
The space $\Xi ^\Gamma $ will have a locally compact totally disconnected topology, and as a result its Ellis enveloping semigroup $E(\Xi ^\Gamma , \Gamma)$ will be a locally compact totally disconnected topological space (for the Ellis semigroup of dynamical systems over locally compact spaces, see Section $2$). The importance of this semigroup in our setting is highlighted by Theorem \ref{theo:suspension.ellis}, which yields an algebraic isomorphism and homeomorphism \begin{align*} E(\mathbb{X},\mathbb{R}^d) \simeq E(\Xi ^\Gamma , \Gamma)\times _{\Gamma}\mathbb{R}^d , \end{align*} where the right-hand term is understood as a quotient of $E(\Xi ^\Gamma , \Gamma)\times \mathbb{R}^d$ under a natural diagonal action of $\Gamma$. This theorem shows in particular that the Ellis semigroup $E(\mathbb{X},\mathbb{R}^d)$ is in our context a \textit{matchbox manifold}: It is locally the product of a Euclidean open subset with a totally disconnected space. It also asserts that the non-trivial (and in particular the noncommutative) part of $E(\mathbb{X},\mathbb{R}^d)$ is displayed by the semigroup $E(\Xi ^\Gamma , \Gamma)$. We will thus from now on focus on the calculation of $E(\Xi ^\Gamma , \Gamma)$. First, we show the existence of an onto continuous semigroup morphism \begin{align*} \Pi ^* : E(\Xi ^\Gamma , \Gamma) \longrightarrow \mathbb{R}^n . \end{align*} This morphism is closely related to the parametrization map presented above, and will allow us to understand the convergence of a net in $E(\Xi ^\Gamma , \Gamma)$ by studying how the corresponding net, via this morphism, converges in $\mathbb{R}^n$. Our wish is to find a certain semigroup $S$, together with a certain semigroup morphism from $E(\Xi ^\Gamma , \Gamma)$ into $S$, such that the Ellis semigroup $E(\Xi ^\Gamma , \Gamma)$ embeds in the direct product $S\times \mathbb{R}^n$. To simplify the problem we let the \textit{almost canonical property} enter the game.
This property consists of a condition on the region $W$ used to obtain $\Lambda _0$ as a model set, namely, $W$ must be a polytope of $\mathbb{R}^n$ satisfying a particular condition (see Section $4$). Under the assumption that the region $W$ is almost canonical, together with the almost automorphic property observed on the dynamical system $(\mathbb{X}_{\Lambda _0}, \mathbb{R}^d)$, we are able to identify the correct semigroup $S$ as the \textit{face semigroup} associated with the polytope $W$ in $\mathbb{R}^n$ (see Sections $5$ and $6$ for our presentation and results). We may briefly present the face semigroup $\mathfrak{T}_{W}$ associated with the polytope $W$ in $\mathbb{R}^n$ as follows: The polytope $W$ determines a finite collection of linear hyperplanes $\mathfrak{H}_{W}$ in $\mathbb{R}^n$, namely the ones that are parallel to at least one face of $W$. This collection in turn determines a stratification of $\mathbb{R}^n$ by cones, each cone being, for each hyperplane $H\in \mathfrak{H}_W$, either included in $H$ or entirely contained in one of the two complementary half-spaces. An illustration of this construction is provided in Section $7$, where $W$ is a regular octagon in $\mathbb{R}^2$. The face semigroup $\mathfrak{T}_{W}$ is, set-theoretically, the finite collection of cones resulting from this stratification process, together with a (noncommutative) semigroup product stating that the product $C.C'$ of two cones is the cone which the head of $C$ enters after being translated by small vectors of $C'$. The elements of $\mathfrak{T}_W$ are more conveniently described as "side maps", which consist of mappings from $\mathfrak{H}_W$ to the three symbol set $\lbrace -, 0,+\rbrace$, giving the relative position of any cone with respect to each hyperplane. This formalism has the advantage of allowing a concise and handy formulation of the product law on this semigroup (see Section $6$).
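In terms of side maps, the product law admits a one-line formulation: the sign of $C.C'$ at a hyperplane $H$ is the sign of $C$ there if it is nonzero, and otherwise the sign of $C'$. The following Python sketch (our illustration; the tuple encoding is an assumption, not the paper's notation) checks that this product is associative and idempotent for an arrangement of two distinct lines through the origin in $\mathbb{R}^2$, where all nine side maps are realized by cones (4 chambers, 4 rays and the origin).

```python
from itertools import product

# Faces encoded as "side maps": tuples over {-1, 0, +1}, one sign per
# hyperplane of the arrangement.
faces = list(product((-1, 0, 1), repeat=2))

def mult(c, cp):
    """Face product: keep the sign of c where nonzero, else take cp's.

    This matches "the cone which the head of C enters after being
    translated by small vectors of C'"."""
    return tuple(a if a != 0 else b for a, b in zip(c, cp))

for a in faces:
    assert mult(a, a) == a                                  # idempotent
    for b, c in product(faces, repeat=2):
        assert mult(mult(a, b), c) == mult(a, mult(b, c))   # associative
# The product is noncommutative:
print(mult((0, 1), (-1, -1)), mult((-1, -1), (0, 1)))
```

The two printed products differ, exhibiting the noncommutativity of the face semigroup.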
The embedding morphism \begin{align*} E(\Xi ^\Gamma , \Gamma) \longrightarrow \mathfrak{T}_W\times \mathbb{R}^n \end{align*} comes from the observation that a neighborhood basis of any transformation $g\in E(\Xi ^\Gamma , \Gamma)$ is provided by the vector $w_g:= \Pi ^*(g)$ of $\mathbb{R}^n$, together with a certain cone $$C_g\in \mathfrak{T}_W ,$$ in the sense that a net in $\Gamma$ converges to $g$ in the Ellis semigroup $E(\Xi ^\Gamma , \Gamma)$ (such a net exists by construction) if and only if the corresponding net in $\mathbb{R}^n$ converges to $w_g$ and eventually lies in $C_g+w_g$. In this sense, the cone $C_g$ provides the direction a net must follow in order to converge to the transformation $g$. This allows us to calculate the corresponding image subsemigroup in $\mathfrak{T}_W\times \mathbb{R}^n$, which is the aim of Theorem \ref{theo:principal.interne}, and it proves to be a finite disjoint union of \textit{subgroups} of $\mathbb{R}^n$. Moreover the topology of $E(\Xi ^\Gamma , \Gamma)$ is completely described by a geometric criterion of convergence for nets. Finally, we combine Theorems \ref{theo:suspension.ellis} and \ref{theo:principal.interne} to formulate our main Theorem \ref{theo:principal}, which establishes the existence of an embedding semigroup morphism \begin{align*} E(\mathbb{X} , \mathbb{R}^d) \longrightarrow \mathfrak{T}_W\times \mathbb{T}^{n+d} \end{align*} for which the image subsemigroup together with its topology are identified. Interestingly, this semigroup remains exactly the same for model sets obtained by translating, dilating, or deforming the region $W$ as long as the hyperplanes determined by the faces are unchanged. As a byproduct of the previous analysis we show that the Ellis semigroup $E(\mathbb{X},\mathbb{R}^d)$ is first countable, and thus is the sequential closure of the acting group $\mathbb{R}^d$.
We conclude this work by determining some algebraic features of this Ellis semigroup, as well as a picture of its underlying action on the space $\mathbb{X}$. \section{\large{\textbf{Model sets and associated dynamical systems}}}\label{model sets} \subsection{General definition of inter-model set} To define what an (almost canonical) model set in $\mathbb{R}^d$ is (see \cite{Sch}, as well as \cite{Mo2} for a more detailed exposition), we consider first the environment used to construct it, namely a \textit{cut and project scheme}. This consists of a triple $(\mathsf{H}, \Sigma , \mathbb{R}^d)$ and a diagram \includegraphics[trim = 4cm 22.8cm 5.5cm 5.2cm]{CPS} where $\mathsf{H}$ is a locally compact Abelian group and \begin{itemize} \item[•] $\Sigma$ is a countable lattice in $\mathsf{H}\small{\times}\mathbb{R}^{d}$, i.e.\ a countable discrete and cocompact subgroup, \item[•] the canonical projection $\pi _{\mathbb{R}^d}$ onto $\mathbb{R}^d$ is bijective from $\Sigma$ onto its image $\Gamma $, \item[•] the image $\Gamma ^*$ of $\Sigma$ under the canonical projection $\pi _{\mathsf{H}}$ is a dense subgroup of $\mathsf{H}$. \end{itemize} Hence such an environment consists of a Euclidean space $\mathbb{R}^d$ embedded into $\mathsf{H}\times\mathbb{R}^{d}$ in an "irrational position" with respect to the lattice $\Sigma$. There is a well-established formalism for these different ingredients: the space $\mathbb{R}^d$ is often called the \textit{physical space}, whereas the space $\mathsf{H}$ is called the \textit{internal space}. Moreover the morphism $\Gamma \longrightarrow \mathsf{H}$ which maps any $\gamma $ onto $\gamma ^*:= \pi _{\mathsf{H}}(\pi _{\mathbb{R}^d}^{-1}(\gamma ))\in \Gamma ^*$ is the \textit{*-map} of the cut and project scheme, whose graph is the lattice $\Sigma$. We will say that a cut and project scheme is \textit{real} whenever the internal space $\mathsf{H}$ is a finite dimensional real vector space $\mathbb{R}^n$.
We shall also consider a certain type of subset in the internal space $\mathsf{H}$, usually called a \textit{window}, which consists of a compact and topologically regular subset $W$, supposed \textit{irredundant} in the sense that the compact subgroup of elements $w\in \mathsf{H}$ that satisfy $W+w = W$ is trivial. When $\mathsf{H}= \mathbb{R}^n$, this condition is immediately satisfied. If we are given a cut and project scheme together with a window $W$ in its internal space, we may form a certain point pattern $\mathfrak{P}(W)$ in $\mathbb{R}^d$ by projecting onto the physical space the subset of points of the lattice $\Sigma$ lying within the \textit{strip} $W\times \mathbb{R}^d$. This is illustrated in Figure $1$ (see also \cite{BaaHePl}). Figure $1$ presents the simplest real cut and project scheme one may consider, that is, with physical and internal spaces being $1$-dimensional, and with a lattice $\Sigma = \mathbb{Z}^2$ meeting these spaces only at the origin. As window we consider the projection onto the internal space of the unit square in $\mathbb{R}^2$. \includegraphics[trim = 4cm 18cm 5cm 4cm]{sturmien} The point pattern $\mathfrak{P}(W)$ may be written using the $*$-map as \begin{align*} \mathfrak{P}(W) := \left\lbrace \gamma \in \Gamma \; \vert \; \gamma ^*\in W\right\rbrace . \end{align*} We may allow ourselves to translate the resulting point pattern by any vector $t$ in the physical space $\mathbb{R}^d$, which we call here a \textit{physical translation}, or to translate the window $W$ by an element $w\in \mathsf{H}$, which we call an \textit{internal translation}. In both cases this leads to a new point pattern in $\mathbb{R}^d$.
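A $1$-dimensional scheme of this kind can be simulated directly. The Python sketch below (our illustration, with normalizations we choose rather than the paper's conventions) uses the lattice $\mathbb{Z}^2$ with physical map $(m,n) \mapsto m + n\tau$ and $*$-map $(m,n) \mapsto m - n/\tau$, $\tau$ the golden ratio, and the interval window $W = [0,1)$; the resulting point pattern is the Fibonacci chain, and the fact that only the two gap lengths $\tau$ and $\tau^2$ occur witnesses the Delone property.

```python
import math

tau = (1 + math.sqrt(5)) / 2  # golden ratio; note 1/tau = tau - 1

def model_set(n_max, window=(0.0, 1.0)):
    """Points gamma^|| = m + n*tau whose star image m - n/tau lies in W."""
    pts = []
    for n in range(-n_max, n_max + 1):
        for m in range(-2 * n_max, 2 * n_max + 1):
            if window[0] <= m - n / tau < window[1]:
                pts.append(m + n * tau)
    return sorted(pts)

pts = model_set(50)
gaps = {round(b - a, 6) for a, b in zip(pts, pts[1:])}
# Exactly two gap lengths occur: tau and tau^2 = tau + 1.
print(sorted(gaps))
```

Here the window has length $1$, so for each $n$ there is exactly one admissible $m$, namely $m = \lceil n/\tau \rceil$, which makes the two-gap structure easy to see by hand as well.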
We now introduce the class of \textit{model sets} of $\mathbb{R}^d$ as follows: \begin{de}\label{def:inter.model.set} An inter-model set $\Lambda$ associated to a cut and project scheme $(\mathsf{H}, \Sigma , \mathbb{R}^d)$ together with a window $W$ is a subset of $\mathbb{R}^d$ that satisfies, for some $w\in \mathsf{H}$ and some $t\in \mathbb{R}^d$, \begin{align*} \mathfrak{P}(w+ \stackrel{\circ }{W}) -t \subseteq \Lambda \subseteq \mathfrak{P}(w+ W) -t. \end{align*} \end{de} An inter-model set is called \textit{regular} whenever the window $W$ used to construct it has boundary of Haar measure zero in $\mathsf{H}$. Due to the assumptions on the underlying cut and project scheme and on the window $W$, any inter-model set is a \textit{Delone set}, that is, a uniformly discrete and relatively dense subset of $\mathbb{R}^d$. In fact, it also satisfies the stronger property of being a \textit{Meyer set}, meaning that any inter-model set $\Lambda$ has a uniformly discrete difference set $\Lambda - \Lambda$ in $\mathbb{R}^d$. Most of the content of this article is about real cut and project schemes together with polytopic windows in their internal spaces, which hence provide regular inter-model sets. \subsection{Nonsingular model sets} An important notion affiliated with a point pattern $\Lambda$ is its \textit{language}, namely the collection of all ``circular-shaped'' patterns appearing at sites of the point pattern: \begin{align*} \mathcal{L}_\Lambda := \left\lbrace (\Lambda -\gamma )\cap B(0,R) \, \vert \, \gamma \in \Lambda , \; R> 0\right\rbrace . \end{align*} Not all inter-model sets coming from a common cut and project scheme and window have the same language. However, the class of \textit{nonsingular model sets}, also often called \textit{generic model sets}, does share a common language: \begin{de}\label{def:non.singular} A nonsingular model set is an inter-model set $\Lambda$ for which we have equalities \begin{align*} \mathfrak{P}(w+ \stackrel{\circ }{W}) -t = \Lambda =\mathfrak{P}(w+ W) -t. 
\end{align*} \end{de} The situation where such an equality occurs for a given couple $(w,t)$ clearly only depends on the choice of $w\in \mathsf{H}$. We will then call an element $w\in \mathsf{H}$ nonsingular when the equality $\mathfrak{P}(w+ \stackrel{\circ }{W}) -t =\mathfrak{P}(w+ W) -t$ holds (for some, and hence any, $t$). The subset of nonsingular elements may easily be described: it consists of all $w\in \mathsf{H}$ where no point of the subgroup $\Gamma ^*$ of $\mathsf{H}$ enters the boundary $w+\partial W$ of the translated window $w+W$. It is thus the complementary subset \begin{align*} NS:= \left[ \Gamma ^*-\partial W\right] ^c . \end{align*} This set is always nonempty by the Baire category theorem, as $W$ was assumed topologically regular, hence has boundary with empty interior in $\mathsf{H}$, and $\Sigma$ (hence $\Gamma ^*$) was supposed to be countable. As already pointed out, the nonsingular model sets arising from a common cut and project scheme and window have a common language, which means that any pattern of some nonsingular model set appears elsewhere in all other nonsingular model sets. Denoting this language by $\mathcal{L}_{NS}$, we are led to consider its associated \textit{hull}. \begin{de}\label{def:hull} Given a cut and project scheme and a window, and the language $\mathcal{L}_{NS}$ of any nonsingular model set arising from this data, the hull of this data is the collection \begin{align*} \mathbb{X}:= \left\lbrace \Lambda \text{ point pattern of } \mathbb{R}^d \; \vert \; \mathcal{L}_\Lambda = \mathcal{L}_{NS} \right\rbrace . \end{align*} We call a model set any point pattern within the hull $\mathbb{X}$ associated with some cut and project scheme and window. \end{de} The hull $\mathbb{X}$ associated with some cut and project scheme and window is also called the \textit{local isomorphism class} (or simply LI-class) of any model set within this hull. 
\subsection{The hull as dynamical system} There is a natural topology on the hull $\mathbb{X}$, which is metrizable and may be described by setting a basis of open neighborhoods of any point pattern $\Lambda \in \mathbb{X}$ to be (see \cite{Mo2}) \begin{align}\label{topology} \mathcal{U}_{K, \varepsilon }(\Lambda ):= \left\lbrace \Lambda '\in \mathbb{X} \; \vert \; \exists \, t, t'\in \mathbb{R}^d, \; \vert t \vert , \vert t'\vert < \varepsilon , \; (\Lambda -t)\cap K = (\Lambda '-t')\cap K \right\rbrace , \end{align} where $K$ is any compact set in $\mathbb{R}^d$ and $\varepsilon >0$. This topology roughly means that two point patterns are close if they agree on a large domain about the origin up to small shifts. The hull $\mathbb{X}$, endowed with this topology, is a compact metrizable space, and is equipped with a natural action of $\mathbb{R}^d$ given by $\Lambda .t := \Lambda -t$, that is, by translating any point pattern. This provides a dynamical system $(\mathbb{X}, \mathbb{R}^d)$. To figure out what exactly this space consists of, we invoke the following beautiful result: \begin{theo}\label{theo:parametrization.map} \cite{Sch} Let $\mathbb{X}$ be the hull associated with a cut and project scheme $(\mathsf{H},\Sigma,\mathbb{R}^d)$ and a window $W$. Then $\mathbb{X}$ is compact and the dynamical system $(\mathbb{X}, \mathbb{R}^d)$ is minimal. 
Each $\Lambda \in \mathbb{X}$ satisfies inclusions of the form \begin{align*} \mathfrak{P} (w_\Lambda +\mathring{W})-t_\Lambda \subseteq \Lambda \subseteq \mathfrak{P} (w_\Lambda +W)-t_\Lambda , \end{align*} where $(w_\Lambda , t_\Lambda)\in \mathsf{H}\times \mathbb{R}^d$ is unique up to an element of $\Sigma$, thus defining a factor map \begin{align*} \pi : \mathbb{X}\longrightarrow \mathsf{H}\times _{\Sigma} \mathbb{R}^d \end{align*} where $\mathsf{H}\times _{\Sigma} \mathbb{R}^d$ is the compact Abelian group quotient of $\mathsf{H}\times \mathbb{R}^d$ by the lattice $\Sigma$.\\ The map $\pi$ is injective precisely on the collection of nonsingular model sets of $\mathbb{X}$. \end{theo} By a factor map we mean a continuous, onto and $\mathbb{R}^d$-equivariant map, where on the compact Abelian group $\mathsf{H}\times _{\Sigma} \mathbb{R}^d$ the space $\mathbb{R}^d$ acts through $[w,t]_{\Sigma}.s:= [w,t+s]_{\Sigma}$. In the context of real cut and project schemes the compact Abelian group is given by $\left[ \mathbb{R}^{n+d}\right] _{\Sigma} $, that is, it is an $(n+d)$-torus. In the theory of point patterns the above factor map is called the \textit{parametrization map}, and it shows in particular that any model set of $\mathbb{X}$ is an inter-model set in the sense of Definition \ref{def:inter.model.set}. In fact, the collection $\mathbb{X}$ of model sets of a given cut and project scheme and window is precisely the collection of \textit{repetitive} inter-model sets arising from this data (see for instance \cite{Sch}). \subsection{An explicit example}\label{subsection:explicit.example} A well-known example of a model set is the vertex point pattern of the famous \textit{Ammann-Beenker tiling}, of which an uncolored local pattern about the origin is shown in Figure 2. 
\includegraphics[ trim = 4cm 17cm 5cm 4.5cm]{ammannbeenker} We can set up a cut $\&$ project scheme and a window giving rise to the desired point pattern as follows: In a physical space $\mathbb{R}^2$ we embed the group $\Gamma $ generated by four vectors whose coordinates in an orthonormal basis are \begin{align*} \begin{tabular}{c c c c} $e_1= (1,0),$ & $e_2 = (\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}),$ & $e_3= (0,1),$ & $e_4 = (-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}).$ \end{tabular} \end{align*} These four vectors are rationally independent, and thus $\Gamma $ is isomorphic to $\mathbb{Z}^4$. Next we set the internal space $\mathbb{R}^2_{int}$ to be a $2$-dimensional real vector space, into which we define a $*$-map through the images of the four above vectors, reading in some orthonormal basis of $\mathbb{R}^2_{int}$ as \begin{align*} \begin{tabular}{c c c c} $e_1^*= (1,0),$ & $e_2^* = (-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}),$ & $e_3^*= (0,-1),$ & $e_4^* = (\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}).$ \end{tabular} \end{align*} The four vectors $\widetilde{e_i}:= (e_i,e_i^*)$, $i=1,2,3,4$, are linearly independent in $\mathbb{R}^2_{int}\times \mathbb{R}^2$ and thus span a lattice $\Sigma$, which projects onto a dense subgroup $\Gamma ^*$ in $\mathbb{R}^2_{int}$. This defines a real cut and project scheme. We choose the window to be canonical, that is, to be the projection to the internal space of the unit cube in $\mathbb{R}^2_{int}\times \mathbb{R}^2$ with respect to the basis $(\widetilde{e_1}, \widetilde{e_2}, \widetilde{e_3}, \widetilde{e_4})$. Hence we get a regular octagonal window $W_{oct}$ of the form shown in Figure 3. Then the vertex point pattern appearing in the Ammann-Beenker tiling is given by the nonsingular model set $\mathfrak{P}\left( W_{oct} -\dfrac{e_1^*+e_2^*+e_3^*+e_4^*}{2}\right)$. 
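One may check (this is our own remark, easily verified numerically, not spelled out in the text) that, writing $\zeta = e^{i\pi/4}$ for a primitive $8$th root of unity, the vectors above satisfy $e_k = \zeta^{k-1}$ and $e_k^* = \zeta^{3(k-1)}$, so that the $*$-map of this scheme is induced by the conjugation $\zeta \mapsto \zeta^3$. A short Python verification:

```python
import cmath

# A quick numerical check of an observation not spelled out in the text:
# writing zeta = exp(i*pi/4) for a primitive 8th root of unity, the physical
# vectors read e_k = zeta**(k-1) and the internal ones e_k* = zeta**(3(k-1)),
# so the *-map of the Ammann-Beenker scheme is induced by zeta -> zeta**3.

zeta = cmath.exp(1j * cmath.pi / 4)

s = 1 / 2 ** 0.5
e      = [1, complex(s, s), 1j, complex(-s, s)]    # e_1, ..., e_4
e_star = [1, complex(-s, s), -1j, complex(s, s)]   # e_1*, ..., e_4*

for k in range(4):
    assert abs(e[k] - zeta ** k) < 1e-12
    assert abs(e_star[k] - zeta ** (3 * k)) < 1e-12

def star(coeffs):
    """Image under the *-map of gamma = sum_k coeffs[k] * e_{k+1}."""
    return sum(n * zeta ** (3 * k) for k, n in enumerate(coeffs))
```

Since $\zeta \mapsto \zeta^3$ is an algebraic conjugation of $\mathbb{Z}[\zeta]$, the $*$-map is indeed a group morphism on $\Gamma$ with dense image.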
\includegraphics[ trim = 4cm 19cm 5cm 4.2cm]{octogon} \section{\large{\textbf{Ellis semigroups of dynamical systems}}}\label{section:ellis.semigroup} \subsection{Ellis semigroup and equicontinuity} Let us consider a compact dynamical system, that is, a compact (Hausdorff) space $\mathbb{X}$ together with an action of a group $T$ by homeomorphisms. \begin{de}\label{def:ellis.semigroup} The Ellis semigroup $E(\mathbb{X},T)$ is the pointwise closure of the group of homeomorphisms given by the $T$-action in the space $\mathbb{X}^\mathbb{X}$ of self-mappings on $\mathbb{X}$. \end{de} The Ellis semigroup $E(\mathbb{X},T)$ is a family of transformations on the space $\mathbb{X}$ that are pointwise limits of homeomorphisms coming from the $T$-action, and it is stable under composition. Moreover it is a compact (Hausdorff) space when endowed with the pointwise convergence topology coming from $\mathbb{X}^\mathbb{X}$. If the acting group is Abelian, then although the Ellis semigroup may not itself be Abelian, all of its transformations commute with any homeomorphism coming from the action. The Ellis semigroup construction is functorial (covariant) in the sense that any onto continuous and $T$-equivariant mapping $\pi : \mathbb{X} \rightarrow \mathbb{Y}$ gives rise to an onto continuous semigroup morphism $\pi ^*: E(\mathbb{X},T) \rightarrow E(\mathbb{Y},T)$, satisfying $\pi (x.g)=\pi (x).\pi ^*(g)$ for any $x\in \mathbb{X}$ and any transformation $g\in E(\mathbb{X},T)$. Here we have written $x.g$ for the evaluation of a mapping $g$ at a point $x$. With this convention the Ellis semigroup is always a compact right-topological semigroup, that is, if some net $( h_\lambda )$ converges pointwise to $h$ then the net $( g.h_\lambda ) $ converges pointwise to $g.h$ for any $g$, where $g.h$ stands for the composition map which at each point $x$ reads $(x.g).h$. 
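When the space is finite, the product topology on $\mathbb{X}^{\mathbb{X}}$ is discrete, so the pointwise closure is simply the set of transformations already present, and the Ellis semigroup can be computed by brute force. A toy Python sketch (our own illustration, with the assumed data $\mathbb{X}=\mathbb{Z}/8\mathbb{Z}$ and the $\mathbb{Z}$-action generated by $x\mapsto x+3$):

```python
# A toy illustration of the definition (our own, with assumed data): on a
# finite space the product topology on X^X is discrete, so the pointwise
# closure of the T-action is just the set of transformations it already
# contains.  Here X = Z/8Z with the Z-action generated by x -> x + 3.

N = 8
identity = tuple(range(N))
rotation = tuple((x + 3) % N for x in range(N))   # generator of the action

def compose(f, g):
    """Right-action composition: x.(f.g) = (x.f).g."""
    return tuple(g[f[x]] for x in range(N))

# Saturate {T^n} under composition with the generator (no limits are
# needed here, since X^X is finite and discrete).
ellis = {identity}
frontier = [identity]
while frontier:
    f = frontier.pop()
    h = compose(f, rotation)
    if h not in ellis:
        ellis.add(h)
        frontier.append(h)
```

Since $\gcd(3,8)=1$ the action is minimal and equicontinuous, and the computed closure is the full group of the $8$ rotations: an Abelian compact group acting by homeomorphisms, in line with Theorem \ref{theo3} below.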
Among the whole category of dynamical systems, certainly the simplest objects are the \textit{equicontinuous dynamical systems}. These are dynamical systems such that the family of homeomorphisms coming from the group action is equicontinuous, and within the more specific class of compact minimal dynamical systems they are exactly the well-known class of \textit{Kronecker systems}. About these particular dynamical systems one has the following: \begin{theo}\label{theo3} \cite{Aus} \cite{GlMeUs} Let $(\mathbb{X},T)$ be a minimal dynamical system over a compact metric space, with Abelian acting group. Then the following assertions are equivalent: \begin{itemize} \item[(1)] The dynamical system $(\mathbb{X},T)$ is equicontinuous. \item[(2)] $E(\mathbb{X},T)$ is a compact group acting by homeomorphisms on $\mathbb{X}$. \item[(3)] $E(\mathbb{X},T)$ is metrizable. \item[(4)] $E(\mathbb{X},T)$ has left-continuous product. \item[(5)] $E(\mathbb{X},T)$ is Abelian. \item[(6)] $E(\mathbb{X},T)$ consists of continuous transformations. \end{itemize} In this case, $E(\mathbb{X},T)=\mathbb{X}$ as compact Abelian groups. \end{theo} Here the compact Abelian group structure of a compact minimal equicontinuous system $(\mathbb{X},T)$ with Abelian acting group is determined only by the (arbitrary) choice of one element $\mathfrak{e}\in \mathbb{X}$ which plays the role of a unit, from which the group structure extends that of $T$ mapped on the dense orbit $\mathfrak{e}.T$. In this case the equality $E(\mathbb{X},T)=\mathbb{X}$ is achieved by identifying a transformation $g\in E(\mathbb{X},T)$ with $\mathfrak{e}.g$ in $\mathbb{X}$. Outside the scope of equicontinuous systems, the Ellis semigroup is a quite complicated object, as it is formed of mappings that are neither necessarily continuous nor invertible, and it need not be commutative. 
However a general construction allows us to attach to any compact dynamical system a particular factor: \begin{theo}\label{equ} Let $(\mathbb{X},T)$ be a compact dynamical system. There exists an equicontinuous dynamical system $(\mathbb{X}_{eq}, T)$ together with a factor map $\pi : \mathbb{X} \rightarrow \mathbb{X}_{eq}$ such that any equicontinuous factor of $(\mathbb{X},T)$ factors through $\pi$. \end{theo} The space $\mathbb{X}_{eq}$ with its $T$-action is called the \textit{maximal equicontinuous factor} of $(\mathbb{X},T)$, and it is a Kronecker system whenever $(\mathbb{X},T)$ is topologically transitive. From Theorem \ref{theo3} one has $E(\mathbb{X}_{eq},T) = \mathbb{X}_{eq}$ as compact groups, and from the functorial property of the Ellis semigroup the quotient factor map $\pi$ from $\mathbb{X}$ onto its maximal equicontinuous factor gives rise to an onto and continuous semigroup morphism \begin{align*} \pi ^*:E(\mathbb{X},T)\longrightarrow \mathbb{X}_{eq}. \end{align*} \subsection{The tame property} The following statement is obtained from \cite[Theorem 6.1, proof of Theorem 6.3]{GlMeUs}. \begin{theo}\label{theo:dichotomy} \cite{GlMeUs} The Ellis semigroup $E(\mathbb{X},T)$ of a dynamical system over a compact metric space is either the sequential closure of the acting group $T$ in $\mathbb{X}^{\mathbb{X}}$ or contains a topological copy of the Stone-\v{C}ech compactification $\beta \mathbb{N}$ of the natural numbers. \end{theo} The first case of this dichotomy admits several different formulations (see Glasner \cite{Gl} and Glasner, Megrelishvili and Uspenskij \cite{GlMeUs} and references therein) and whenever it occurs the underlying dynamical system is called \textit{tame}. If a compact metric dynamical system admits an Ellis semigroup with first countable topology then it is automatically a tame system. 
The tame property is related to the following notion: \begin{de} A compact dynamical system $(\mathbb{X},T)$ is almost automorphic if the factor map $\pi : \mathbb{X} \rightarrow \mathbb{X}_{eq}$ possesses a one-point fiber. \end{de} \begin{theo}\label{theo:glasner} \cite{Gl2} If a compact metric minimal dynamical system with Abelian acting group is tame, then it is almost automorphic. \end{theo} When the space $\mathbb{X}$ is metrizable, Veech showed in \cite[Lemma 4.1]{Vee} that any almost automorphic system in fact admits a residual subset of points of $\mathbb{X}_{eq}$ with one-point fibers under the mapping $\pi$. In the situation of a hull $\mathbb{X}$ of model sets, the factor map $\pi$ onto the maximal equicontinuous factor is precisely the parametrization map of Theorem \ref{theo:parametrization.map}, and thus $(\mathbb{X}, \mathbb{R}^d)$ is almost automorphic since $\pi$ is one-to-one on a nonempty subclass of $\mathbb{X}$ (the nonsingular model sets). Even more is true: The hull $\mathbb{X}$ consists of regular model sets (meaning that the window $W$ has boundary of null Haar measure in $\mathsf{H}$) if and only if the map $\pi$ is one-to-one above a full Haar measure subset of $\mathbb{X}_{eq}$ \cite[Theorem 5]{BaaLenMo}. \subsection{Ellis semigroup for locally compact dynamical systems} We wish to include here two elementary results about the Ellis semigroup one may define for dynamical systems over locally compact spaces. Let $\mathbb{X}$ be a locally compact space together with an action of a group $T$ by homeomorphisms, and as in the compact case, set the Ellis semigroup $E(\mathbb{X},T)$ to be the pointwise closure in the product space $\mathbb{X}^\mathbb{X}$ of the group of homeomorphisms coming from the $T$--action. 
In order to extend some results available in the compact case to this setting we consider the one-point compactification $\hat{\mathbb{X}}$ of $\mathbb{X}$, endowed with a $T$--action by homeomorphisms so that the point at infinity remains fixed through any such homeomorphism. Let us denote by $\mathcal{F}_{\mathbb{X}}$ the subset of $\hat{\mathbb{X}}^{\hat{\mathbb{X}}}$ of transformations mapping $\mathbb{X}$ into itself and keeping the point at infinity fixed, endowed with the relative topology. Then $\mathcal{F}_{\mathbb{X}}$ is a semigroup which is isomorphic and homeomorphic with the product space $\mathbb{X}^\mathbb{X}$, and under this identification \begin{align*} E(\mathbb{X},T)= E(\hat{\mathbb{X}},T)\cap \mathcal{F}_{\mathbb{X}}. \end{align*} Observe that $E(\mathbb{X},T)$ is, as in the compact case, a right-topological semigroup containing $T$ as a dense subgroup (or rather the corresponding group of homeomorphisms). The following is a general fact, whose proof for compact dynamical systems can be found in \cite{Aus}: \begin{prop}\label{prop:map.ellis.loc.compact} Let $\pi : \mathbb{X}\longrightarrow \mathbb{Y}$ be a continuous, proper, onto, and $T$--equivariant map between locally compact spaces. Then there exists a continuous, proper, and onto morphism $\pi ^*: E(\mathbb{X},T) \longrightarrow E(\mathbb{Y},T)$ that satisfies the equivariance condition $\pi (x.g)= \pi (x).\pi ^*(g)$ for any $x\in \mathbb{X}$ and $g\in E(\mathbb{X},T)$. \end{prop} \begin{proof} Denote by $\star _\mathbb{X} $ and $\star _\mathbb{Y}$ the respective points at infinity in the compactified spaces. Since $\pi $ is continuous and proper, it extends to a continuous and onto map $\widehat{\pi }: \widehat{\mathbb{X}}\rightarrow \widehat{\mathbb{Y}}$, such that $\widehat{\pi }^{-1}(\star _\mathbb{Y})= \lbrace \star _\mathbb{X} \rbrace$. Obviously $\widehat{\pi }$ is $T$--equivariant with respect to the extended $T$--actions. 
Then there exists a continuous and onto morphism $\widehat{\pi }^*: E(\widehat{\mathbb{X}},T) \rightarrow E(\widehat{\mathbb{Y}},T)$, satisfying the equivariance equality $\widehat{\pi }(x.g)= \widehat{\pi }(x).\widehat{\pi }^*(g)$ for any $x\in \widehat{\mathbb{X}}$ and $g\in E(\widehat{\mathbb{X}},T)$. The latter equivariance condition implies that a transformation $g$ of $E(\widehat{\mathbb{X}},T)$ lies in $\mathcal{F}_{\mathbb{X}}$ if and only if $\widehat{\pi }^*(g)$ lies in $\mathcal{F}_{\mathbb{Y}}$: it follows that $E(\mathbb{X},T)= (\widehat{\pi }^*)^{-1}(E(\mathbb{Y},T))$. Restricting this morphism to $E(\mathbb{X},T)$ gives the desired map $\pi ^*$, together with the onto property. Finally a compact set of $E(\mathbb{Y},T)$ must be compact in $E(\widehat{\mathbb{Y}},T)$, as is easy to check, and so has a compact inverse image in $E(\widehat{\mathbb{X}},T)$ under $\widehat{\pi }^*$. The latter is entirely included in $E(\mathbb{X},T)$, and so is compact for the relative topology on $E(\mathbb{X},T)$. This gives the properness. \end{proof} Observe that $\pi ^*(t)=t$ holds for any $t\in T$. As in the compact setting, if the acting group $T$ is Abelian then any induced homeomorphism commutes with any mapping in $E(\mathbb{X},T)$. To end this subsection we state without proof an easy property of locally compact Kronecker systems: \begin{prop}\label{prop:equi.loc.compact} If $T$ is a dense subgroup of a locally compact Abelian group $\mathbb{G}$, $T$ acting by translation, then $E(\mathbb{\mathbb{G}},T)$ is topologically isomorphic with $\mathbb{G}$, where any $g\in \mathbb{G}$ is identified with its translation map in $E(\mathbb{\mathbb{G}},T)$. 
\end{prop} \section{\large{\textbf{The internal system of a hull of model sets}}}\label{internal system} \subsection{Internal system} What we introduce here is an analogue in the hull $\mathbb{X}$ of the internal space $\mathsf{H}$ we may find in the compact Abelian group $\mathsf{H}\times _{\Sigma} \mathbb{R}^d$ (the torus $\left[ \mathbb{R}^{n+d}\right] _{\Sigma}$ in case of a real cut $\&$ project scheme). We call this analogue the \textit{internal system} of a hull of model sets. The consideration of this particular space is not new (it appeared in \cite{FoHuKe} as well as in the formalism of $C^*$--algebras in \cite{Put}), although it is often not explicitly mentioned, and we record here the main aspects of this space. \begin{de}\label{def:internal.system} Let $\mathbb{X}$ be the hull of model sets associated with a cut $\&$ project scheme $(\mathsf{H},\Sigma,\mathbb{R}^d)$ and a window $W$. Then its internal system is the subclass $\Xi ^\Gamma$ of point patterns that are supported on the structure group $\Gamma$ in $\mathbb{R}^d$, that is, \begin{align*} \Xi ^\Gamma := \left\lbrace \Lambda \in \mathbb{X} \, \vert \; \Lambda \subset \Gamma \right\rbrace . \end{align*} \end{de} According to Theorem \ref{theo:parametrization.map}, any model set admits inclusions of the form stated in Definition \ref{def:inter.model.set}, and we can see that the subclass $\Xi ^\Gamma$ consists exactly of the model sets for which these inclusions read \begin{align*} \mathfrak{P}(w+ \stackrel{\circ }{W}) \subseteq \Lambda \subseteq \mathfrak{P}(w+ W). \end{align*} Equivalently, $\Xi ^\Gamma$ is the subclass of model sets in $\mathbb{X}$ whose image under the parametrization map $\pi$ of Theorem \ref{theo:parametrization.map} is of the form $[w,0]_{\Sigma}$ in the compact Abelian group $\mathsf{H}\times _{\Sigma} \mathbb{R}^d$. 
On the other hand, there exists a natural morphism mapping any element $w$ of the internal space $\mathsf{H}$ to $[w,0]_{\Sigma}$ in $\mathsf{H}\times _{\Sigma} \mathbb{R}^d$, which is one-to-one and continuous. This suggests the existence of a mapping from the internal system $\Xi ^\Gamma$ onto the internal space $\mathsf{H}$ of the cut $\&$ project scheme. However, just as the natural injection of $\mathsf{H}$ into $\mathsf{H}\times _{\Sigma}\mathbb{R}^d$ is in general not a homeomorphism onto its image, one cannot simply consider the topology of $\mathbb{X}$ restricted to $\Xi ^\Gamma$. Rather, we consider on the internal system the topology whose basis of open neighborhoods around any $\Lambda \in \Xi ^\Gamma$ is \begin{align}\label{topology.internal}\mathcal{U}_K(\Lambda):= \left\lbrace \Lambda '\in \Xi ^\Gamma \; \vert \; \Lambda \cap K = \Lambda '\cap K \right\rbrace , \end{align} where $K$ is any compact set in $\mathbb{R}^d$. This means that two point patterns are close in $\Xi ^\Gamma$ if they exactly match on a large domain about the origin. On the internal system equipped with the above topology, we consider the action of the group $\Gamma$ by homeomorphisms given by translation on each model set, so that one obtains a dynamical system $(\Xi ^\Gamma , \Gamma)$. Since the dynamical system $(\mathbb{X},\mathbb{R}^d)$ is minimal by Theorem \ref{theo:parametrization.map}, this dynamical system is minimal as well. Of particular importance is the sub-collection, often called the \textit{transversal}, of point patterns containing the origin \begin{align*} \Xi := \left\lbrace \Lambda \in \mathbb{X} \, \vert \; 0\in \Lambda \subset \Gamma \right\rbrace . \end{align*} Any model set containing the origin must be entirely included in the structure group $\Gamma$, that is, $\Xi $ is a subset of the internal system $\Xi ^\Gamma$. 
A fact of fundamental importance is that $\Xi$ is a \textit{clopen set}, that is, a subset which is both open and closed in the internal system $\Xi ^\Gamma$: Indeed any accumulation point pattern of $\Xi$ must possess the origin in its support, and thus is actually an element of $\Xi$, and on the other hand for each $\Lambda \in \Xi$ and any radius $R>0$ the collection $ \mathcal{U}_{B(0,R)}(\Lambda )$ is an open neighborhood of $\Lambda$ in $\Xi ^\Gamma$ that is clearly contained in the transversal $\Xi$. About the topology of the transversal one may observe that there is a one-to-one correspondence between circular-shaped local configurations of radius $R$ in the language $\mathcal{L}_{NS}$ and subsets in $\Xi$ of the form $ \mathcal{U}_{B(0,R)}(\Lambda )$, for the same radius $R$ and $\Lambda$ chosen in $\Xi$. Thus compactness of $\Xi$ is equivalent to the existence of only a finite number of such circular-shaped patterns for any radius $R$, a property called \textit{finite local complexity} for the underlying point patterns in $\mathbb{X}$. This property holds in our context \cite{Mo2, Sch}, so that $\Xi$ is a compact open subset of the internal system $\Xi ^\Gamma$. \begin{prop}\label{prop:topology.internal.system.tot.disconnected} The internal system $\Xi ^\Gamma$ of a hull of model sets is a totally disconnected locally compact topological space, and a subbasis for its topology is formed by all $\Gamma$--translates of $\Xi$ and its complementary set $\Xi ^c$. \end{prop} \begin{proof} Any point pattern $\Lambda \in \Xi ^\Gamma$ is uniquely determined by the knowledge of whether a point $\gamma \in \Gamma$ lies in $\Lambda $ or not, for each $\gamma \in \Gamma$. Thus a sub-basis for the topology of the internal system is given by the subsets \begin{align*} \Xi _\gamma:= \left\lbrace \Lambda \in \Xi ^\Gamma \, \vert \; \gamma\in \Lambda \right\rbrace \end{align*} or their complementary sets $\Xi _\gamma ^c$ for $\gamma \in \Gamma$. 
Since these subsets are open and complementary to one another, they are both closed as well, giving that the internal system is totally disconnected. Now any $\Xi _\gamma$ or $\Xi _\gamma ^c$ is nothing but the $-\gamma$ translate of $\Xi$ or of the complementary set $\Xi ^c$, as $$\Lambda \in \Xi .(-\gamma ) \Longleftrightarrow \Lambda .\gamma \in \Xi \Longleftrightarrow 0\in (\Lambda -\gamma) \Longleftrightarrow \gamma \in \Lambda \Longleftrightarrow \Lambda \in \Xi _\gamma .$$ As any point pattern $\Lambda \in \Xi ^\Gamma$ must contain at least one element $\gamma \in \Gamma$, one gets that the compact open subsets $\Xi _\gamma$ form a covering of $\Xi^\Gamma$, giving the local compactness. \end{proof} \begin{prop}\label{prop:parametrization.map.interne}\cite{Sch} Let $\Xi ^\Gamma$ be the internal system associated with a cut $\&$ project scheme $(\mathsf{H},\Sigma,\mathbb{R}^d)$ and some window. Then there exists a factor map \begin{align*} \Pi : \Xi ^\Gamma \longrightarrow \mathsf{H} \end{align*} mapping a point pattern $\Lambda$ onto the unique element $\Pi (\Lambda )= w_\Lambda $ of $\mathsf{H}$ satisfying \begin{align*} \mathfrak{P} (w_\Lambda +\mathring{W}) \subseteq \Lambda \subseteq \mathfrak{P} (w_\Lambda +W). \end{align*} Moreover, the map $\Pi$ satisfies $\Pi(\Xi)= -W$, and is injective precisely on the dense family of nonsingular model sets of $\Xi ^\Gamma$, whose image is the dense subset $NS$ of $\mathsf{H}$. \end{prop} From the above proposition we thus have a correspondence between any $w\in NS$ and a unique nonsingular model set $\mathfrak{P}(w+ W)\in \Xi ^\Gamma$, and we may also write $NS$ for the dense subclass of nonsingular model sets of $\Xi ^\Gamma$. Thus the internal system $\Xi ^\Gamma$ and the internal space $\mathsf{H}$ appear as two different completions of a single set $NS$. 
This observation allows us to associate, with any subset $A$ of $\mathsf{H}$, a corresponding subset of $\Xi ^\Gamma $ of the form \begin{align}\label{coupe} [A]_{\Xi}:= \overline{A\cap NS}^{\Xi ^\Gamma} . \end{align} Such a set $[A]_{\Xi}$ will be non-empty if and only if $A$ intersects $NS$. In particular $[A]_{\Xi}$ will have non-empty interior (and hence will be non-empty) whenever $A$ has non-empty interior. We will make use of the following lemma, which we state without proof: \begin{lem}\label{lem:clopen} Let $X$ be a topological space, and $Y$ a dense subset. Then each clopen subset $V$ of $X$ is equal to the closure of $V\cap Y$. Moreover, if two clopen subsets coincide on $Y$, then they are equal. \end{lem} For instance, one is able to show $\Xi = [-W]_\Xi = [-\mathring{W}]_\Xi$, underlining a link between the topology of the internal system and the geometry of $W$ in the internal space. \begin{prop}\label{prop:internal.morphism.ellis} There exists an onto and proper continuous morphism \begin{align*} \Pi ^* : E(\Xi ^\Gamma,\Gamma)\longrightarrow \mathsf{H} \end{align*} that satisfies the equivariance relation $\Pi(\Lambda .g)= \Pi(\Lambda)+\Pi^*(g)$ for any model set $\Lambda \in \Xi ^\Gamma$ and any mapping $g\in E(\Xi ^\Gamma,\Gamma)$. \end{prop} \begin{proof} Let us show that the map $\Pi $ of Proposition \ref{prop:parametrization.map.interne} is proper, that is, that the inverse image of any compact set of $\mathsf{H}$ is compact in $\Xi ^\Gamma$: Let $K$ be a compact subset of $\mathsf{H}$ and pick a model set $\Lambda$ in $\Pi ^{-1}(K)$. Since $\Gamma ^*$ is dense in $\mathsf{H}$ there exist $\gamma _1, ..., \gamma _l$ in $\Gamma$ such that $$K\subset \bigcup _{k=1 }^l \gamma ^* _k-\mathring{W}.$$ Thus $w_\Lambda \in K$ falls into some $\gamma ^* _k-\mathring{W}$, which implies that $\gamma _k$ lies in $ \mathfrak{P}(w_\Lambda +\mathring{W}) \subset \Lambda$. 
This in turn means that $0\in \Lambda -\gamma _k$, so $\Lambda -\gamma _k \in \Xi$ and thus $\Lambda \in \Xi .(-\gamma _k)$. Hence the closed set $\Pi ^{-1}(K)$ is entirely included in a finite union of translates of $\Xi$, each being compact, and so is a compact set of $\Xi ^\Gamma $. Now Proposition \ref{prop:map.ellis.loc.compact} applies, which, after invoking Proposition \ref{prop:equi.loc.compact}, gives the desired morphism $\Pi ^*$. \end{proof} \subsection{Hull and internal system Ellis semigroups} We want to relate the Ellis semigroup of the dynamical system $(\mathbb{X},\mathbb{R}^d)$ with that of $(\Xi ^\Gamma, \Gamma )$. To this end, let $g$ be any mapping in the Ellis semigroup $E(\Xi ^\Gamma , \Gamma )$. Using Theorem \ref{theo:parametrization.map} together with the definition of the internal system, one sees that any point pattern $\Lambda _0$ in $\mathbb{X}$ can be written as $\Lambda _0= \Lambda -t$ for some model set $\Lambda \in \Xi ^\Gamma $ and some vector $t\in \mathbb{R}^d$. The mapping $g$ is well defined on each $\Lambda \in \Xi ^\Gamma$, and we may thus extend it into a self-map $\widetilde{g}$ of $\mathbb{X}$ by setting \begin{align}\label{extension}\Lambda _0 .\widetilde{g} =(\Lambda -t).\widetilde{g} := \Lambda .g -t. \end{align} This is well defined since if one has $\Lambda -t=\Lambda '-t'$ with $\Lambda$ and $\Lambda '\in \Xi ^\Gamma$ then necessarily $\Gamma -t= \Gamma -t'$, which means that $t-t'\in \Gamma$, and since $g$ commutes with the $\Gamma$--action on $\Xi ^\Gamma$ then applying (\ref{extension}) gives the same result. Let us now consider the semigroup $E(\Xi ^\Gamma , \Gamma )\times _\Gamma \mathbb{R}^d $ to be the (topological) quotient of the direct product semigroup $E(\Xi ^\Gamma , \Gamma )\times \mathbb{R}^d$ by the normal subgroup formed of the elements $(\gamma , -\gamma)$ with $\gamma \in \Gamma$. 
\begin{theo}\label{theo:suspension.ellis} Let $\mathbb{X}$ and $\Xi ^\Gamma$ be the hull and internal system generated by a cut $\&$ project scheme $(\mathsf{H} ,\Sigma,\mathbb{R}^d)$ and some window. Then there is a homeomorphism and semigroup isomorphism \begin{align*} E(\Xi ^\Gamma , \Gamma )\times _\Gamma \mathbb{R}^d \simeq E(\mathbb{X}, \mathbb{R}^d) \end{align*} mapping each element $[g, t]_\Gamma $ of $E(\Xi ^\Gamma , \Gamma )\times _\Gamma \mathbb{R}^d $ to $\widetilde{g}-t$. \end{theo} \begin{proof} First we show that the quotient semigroup $E(\Xi ^\Gamma , \Gamma )\times _\Gamma \mathbb{R}^d $ is compact (Hausdorff): From the existence of the morphism $\Pi ^*$ one gets a natural onto semigroup morphism $\Pi ^*\times id: E(\Xi ^\Gamma, \Gamma)\times \mathbb{R}^d \rightarrow \mathsf{H} \times \mathbb{R}^d$, which maps the normal subgroup formed by the elements $(\gamma , -\gamma)$ with $\gamma \in \Gamma$ onto the lattice $\Sigma$. Since $\mathsf{H} \times _{\Sigma}\mathbb{R}^d$ is compact (Hausdorff), and since $\Pi ^*\times id$ is continuous and proper, we deduce that $E(\Xi ^\Gamma , \Gamma )\times _{\Gamma}\mathbb{R}^d$ must itself be compact (Hausdorff). Now it is clear that the mapping associating $g\in E(\Xi ^\Gamma , \Gamma )$ with $\tilde{g}\in \mathbb{X}^\mathbb{X}$ is a semigroup morphism for the composition laws of mappings. This association is moreover continuous: For this it suffices to check the continuity of each evaluation map $g\mapsto \Lambda _0. \tilde{g}$, with $\Lambda _0\in \mathbb{X}$. Write $\Lambda _0$ as $\Lambda - t$ for some $\Lambda \in \Xi ^\Gamma$. If $g_\lambda$ is a net in $E(\Xi ^\Gamma , \Gamma )$ pointwise converging in $\Xi ^\Gamma$ to a mapping $g$, then one has convergence of the net $\Lambda .g_\lambda$ to $\Lambda .g$ in the internal system $\Xi ^\Gamma$. 
Comparing the topology (\ref{topology}) on $\mathbb{X}$ with the topology (\ref{topology.internal}) on $\Xi ^\Gamma$, one sees that the embedding of $\Xi ^\Gamma$ into the hull $\mathbb{X}$ is continuous, so that the net $\Lambda .g_\lambda$ also converges to $\Lambda .g$ in the hull $\mathbb{X}$. Hence the net $\Lambda _0. \tilde{g}_\lambda = \Lambda . g_\lambda -t$ converges to $\Lambda .g -t= \Lambda _0. \tilde{g}$, as desired. From this we can define a continuous semigroup morphism from $E(\Xi ^\Gamma , \Gamma )\times \mathbb{R}^d $ into $\mathbb{X}^\mathbb{X}$ that associates a pair $(g, t)$ with $\tilde{g}-t$. Clearly any pair of the form $(\gamma , \gamma)$ with $\gamma \in \Gamma$ is mapped to the identity map, thus giving a continuous semigroup morphism \begin{align*} E(\Xi ^\Gamma , \Gamma )\times _\Gamma \mathbb{R}^d \longrightarrow \mathbb{X}^\mathbb{X}, \quad \left[ g, t\right] _\Gamma \longmapsto \tilde{g}-t . \end{align*} This map is one-to-one: If $\left[ g, t\right] _\Gamma$ and $\left[ g', t'\right] _\Gamma$ are such that $\tilde{g}-t \equiv \tilde{g}'-t' $, then they must in particular coincide at any model set $\Lambda \in \Xi ^\Gamma$, thus giving for each such point pattern the equality $\Lambda .g -t = \Lambda .g' -t'$. As $\Lambda .g$ and $\Lambda .g'$ are supported on $\Gamma $ we deduce that $t'-t=: \gamma\in \Gamma$, and that $g'$ coincides with $g+\gamma$ everywhere on $\Xi ^\Gamma$. It follows that $\left[ g', t'\right] _\Gamma = \left[ g + \gamma, t + \gamma\right] _\Gamma = \left[ g, t\right] _\Gamma$, giving injectivity. Now the stated morphism conjugates, both topologically and algebraically, the semigroup $E(\Xi ^\Gamma , \Gamma )\times _\Gamma \mathbb{R}^d $ with its image in $\mathbb{X}^\mathbb{X}$. To conclude it then suffices to show that this image contains, as a dense subset, the group of homeomorphisms coming from the $\mathbb{R}^d$--action on $\mathbb{X}$.
Obviously this group is contained in the image in question, its elements appearing as $[0 ,t]_\Gamma $, where $0$ stands for the identity mapping on $\Xi ^\Gamma$, which lies in $\Gamma$ and thus in $E(\Xi ^\Gamma , \Gamma )$. Let $\tilde{g}-t$ be some mapping in this image. A neighborhood basis for the latter in $\mathbb{X}^\mathbb{X}$ may be given as finite intersections of sets \begin{align*} V_{\mathbb{X}}(\Lambda , U):= \lbrace f\in \mathbb{X}^\mathbb{X} \, \vert \, \Lambda .f \in U\rbrace \end{align*} containing $\tilde{g}-t$. Let $\Lambda _1 , \ldots , \Lambda _k$ be model sets and $U_1,\ldots ,U_k$ be open subsets of $\mathbb{X}$ such that $\tilde{g}-t$ lies in $V(\Lambda _j,U_j)$ for each $j$. To get density it then suffices to show the existence of some element of $\mathbb{R}^d$ also contained in $V(\Lambda _j,U_j)$ for each $j$. Let us write $\Lambda _j$ in the form $\Lambda _j' - t_j$ with $\Lambda _j '\in \Xi ^\Gamma$. Hence the mapping $g$, being the restriction of $\tilde{g}$ to $\Xi ^\Gamma$, lies in each subset \begin{align}\label{open.sets} V_{\Xi ^\Gamma}(\Lambda _j', \Xi ^\Gamma \cap (U_j+t +t_j)):= \left\lbrace f\in (\Xi ^\Gamma )^{\Xi ^\Gamma} \, \vert \, \Lambda _j'.f \in U_j+t +t_j\right\rbrace . \end{align} The embedding of $\Xi ^\Gamma$ in the hull $\mathbb{X}$ is clearly continuous, so that the sets $\Xi ^\Gamma \cap (U_j+t +t_j)$ are open sets of the internal system, and hence the sets (\ref{open.sets}) are open in $(\Xi ^\Gamma ) ^{\Xi ^\Gamma}$. As $E(\Xi ^\Gamma , \Gamma )$ is the closure of $\Gamma $ in $(\Xi ^\Gamma ) ^{\Xi ^\Gamma}$ one may thus find some $\gamma \in \Gamma$ within each set (\ref{open.sets}), giving that $\gamma -t\in \mathbb{R}^d$ lies in each $V(\Lambda _j,U_j)$, as desired.
\end{proof} Apart from this, the parametrization map $\pi$ of Theorem \ref{theo:parametrization.map} also implies the existence of an onto continuous semigroup morphism \begin{align*} \pi ^* : E(\mathbb{X},\mathbb{R}^d)\longrightarrow \mathsf{H}\times _{\Sigma}\mathbb{R}^d \end{align*} that satisfies the equivariance relation $\pi(\Lambda .\mathsf{g})= \pi(\Lambda)+\pi^*(\mathsf{g})$ for any model set $\Lambda \in \mathbb{X}$ and any Ellis transformation $\mathsf{g}\in E(\mathbb{X},\mathbb{R}^d)$. The morphism $\pi ^*$ then extends the morphism $\Pi ^*$ in the sense that for any transformation $g$ in $E(\Xi ^\Gamma , \Gamma )$ and $t\in \mathbb{R}^d$, one has the equality \begin{align*}\pi ^*(\tilde{g}-t)=[\Pi ^*(g),t]_{\Sigma}. \end{align*} \section{\large{\textbf{The almost canonical property for model sets}}}\label{almost canonical property} We wish to define here the almost canonical property on a model set. To this end we restrict ourselves to real cut and project schemes $(\mathbb{R}^n, \Sigma , \mathbb{R}^d)$, and we ask the window $W$ to be an $n$--dimensional \textit{compact convex polytope} of the internal space $\mathbb{R}^n$. The definition of almost canonical model sets will be derived from a corresponding notion on $W$, which consists of a pair of assumptions we will now present. In fact, it will be much more convenient to consider the \textit{reversed window} $M:= -W$ in the internal space. It is likewise an $n$--dimensional compact convex polytope in $\mathbb{R}^n$, whose boundary is given by $\partial M = -\partial W$. If we now let $f$ be any $(n-1)$--dimensional face of $M$, we define: \begin{itemize} \item $A_f$ or $A_f^0$ to be the affine hyperplane generated by $f$. \item $H_f$ or $H_f^0$ to be the corresponding linear hyperplane in $\mathbb{R}^n$. \item $Stab_{\Gamma}(A_f)$ to be the subgroup of $\gamma \in \Gamma$ with $\gamma ^*\in H_f$.
\end{itemize} We remark that $Stab_{\Gamma}(A_f)$ is precisely the subgroup of elements $\gamma \in \Gamma$ such that $A_f+\gamma ^*=A_f$, whence the notation. We also write $Stab_{\Gamma}(A_f)^*$ for its image in the internal space under the $*$--map. \begin{as}\label{as1} For each face $f$ of $M$, the sum $Stab_{\Gamma}(A_f)^*+f$ covers $A_f$ in $\mathbb{R}^n$. \end{as} The above assumption implies in particular that $Stab_{\Gamma}(A_f)$ has a relatively dense image in $H_f$ under the $*$--map, and thus must be of rank at least $n-1$. Under the above assumption we get a nice description of the subset of nonsingular vectors: \begin{align*} NS:= \left[ \Gamma ^*-\partial W\right] ^c = \left[ \Gamma ^*+\partial M\right] ^c = \left[ \bigcup _{ f \text{ face of } M} \Gamma^* +A_f\right] ^c . \end{align*} As we see above, the subset of nonsingular vectors arises as the complement of the union of all the $\Gamma$--translates of \textit{singular hyperplanes}, namely the affine hyperplanes $A_f$ with $f$ a face of the reversed window $M$. Let us in addition define for each $(n-1)$--dimensional face $f$ of $M$: \begin{itemize} \item $H_f ^-$ and $H_f^+$ to be the open half-spaces with boundary $H_f$. \item $H_f ^{-0}$ and $H_f^{+0}$ to be the closed half-spaces with boundary $H_f$. \item $A_f ^-$, $A_f^+$, $A_f ^{-0}$ and $A_f^{+0}$ to be the corresponding objects with respect to $A_f$. \end{itemize} The choice of orientation on each linear hyperplane provided by the above notation is not relevant, but will remain fixed from now on. Observe that a linear hyperplane $H$ may be associated with two different faces, in which case the chosen orientation induces a common orientation on the corresponding affine hyperplanes. Recall that to any Euclidean subset $A$ may be associated a corresponding subset $[A]_\Xi$ of the internal system according to (\ref{coupe}).
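Let us briefly indicate why the description of $NS$ displayed above holds. Since $M=-W$ one has $\partial M=-\partial W$, giving the first equality. Next, the boundary $\partial M$ is the union of the $(n-1)$--dimensional faces $f$ of $M$, each satisfying $f\subset A_f$, so that
\begin{align*}
\Gamma ^*+\partial M \, \subseteq \, \bigcup _{ f \text{ face of } M} \Gamma^* +A_f .
\end{align*}
Conversely, Assumption \ref{as1} provides for each face $f$ the inclusions $A_f \subseteq Stab_{\Gamma}(A_f)^*+f \subseteq \Gamma ^*+\partial M$, whence the reverse inclusion, and the last equality follows by taking complements.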
We will be especially interested here in a certain collection of Euclidean subsets which we call the family of \textit{admissible half-spaces}, \begin{align*} \mathfrak{A}= \left\lbrace \gamma ^*+ A_f^\pm \, \vert \, \gamma \in \Gamma , \, f \text{ face of } M \right\rbrace . \end{align*} \begin{as}\label{as2} Any set $[A]_{\Xi}$, where $ A\in \mathfrak{A}$, is a clopen set. \end{as} It can be shown that Assumption $2$ implies Assumption $1$, but as we don't really need this fact here we assume both independently. We wish to illustrate what type of polytope could satisfy Assumptions $1$ and $2$ by showing situations where this holds, but first let us define what an \textit{almost canonical model set} is: \begin{de}\label{def:almost.canonical.model.set} A model set is almost canonical when it may be constructed with a real cut and project scheme and a compact convex polytopic window in its internal space that satisfy Assumptions $1$ and $2$. \end{de} The term \textit{almost canonical} makes reference to the first point patterns defined as model sets, the \textit{canonical} model sets, constructed via a real cut and project scheme $(\mathbb{R}^n, \Sigma , \mathbb{R}^d)$ together with a window given by the orthogonal projection onto the internal space of a unit cube of the lattice $\Sigma$. Our example in Section $1$ is of this form. The terminology \textit{almost canonical} was introduced by Julien in \cite{Ju} to define slight generalizations of these model sets. However, our definition doesn't fit exactly with the one given in \cite{Ju} (it can be shown that ours implies the one of Julien), but as we don't want to introduce another definition for something so close to the situation of \cite{Ju}, we allow ourselves to abuse terminology and call ours almost canonical. As shown in \cite{FoHuKe}, a canonical window always satisfies Assumptions $1$ and $2$ and is thus almost canonical in our sense.
A condition ensuring that Assumptions $1$ and $2$ hold is the requirement that the image $Stab_\Gamma (A_f)^*$ of any stabilizer is dense in the corresponding linear hyperplane $H_f$, for any face $f$ of the window $W$ (or of its reversed window $M$, which amounts to the same). A lighter condition that also ensures Assumptions $1$ and $2$ is a slight strengthening of Assumption $1$: \begin{flushleft} \textbf{Assumption 1'.} \textit{For each face $f$ of $M$, the sum $Stab_{\Gamma}(A_f)^*+\mathring{f}$ covers $A_f$, where $\mathring{f}$ denotes the relative interior of $f$.} \end{flushleft} \section{\large{\textbf{Preparatory results on the Ellis semigroup of the internal system}}}\label{preliminary} \subsection{Internal system topology} \begin{prop}\label{prop:topology.internal.system} The collection of clopen sets $[A]_{\Xi}$, where $ A\in \mathfrak{A}$, forms a subbasis for the topology of the internal system. Moreover, for any pair $A,A'$ in $\mathfrak{A}$ the following Boolean rules are true: $$[A\cup A']_{\Xi}=[A]_{\Xi}\cup [A']_{\Xi}, \quad [A]_{\Xi}^c=[A^c]_{\Xi}, \quad [A\cap A']_{\Xi}=[A]_{\Xi}\cap [A']_{\Xi}.$$ \end{prop} \begin{proof} Whenever $w$ is a nonsingular element of $NS\subset \mathbb{R}^n$, one has for any $\gamma \in \Gamma$ that \begin{align*}\mathfrak{P}(W+w).\gamma := \mathfrak{P}(W+w) -\gamma = \mathfrak{P}(W+w+\gamma ^*) . \end{align*} This is the key observation that lets us write, for any $\gamma \in \Gamma$, the equality \begin{align}\label{cut.up.half.space.translated} [A _f^+]_{\Xi }.\gamma = [A _f^+ +\gamma ^*]_\Xi . \end{align} This observation being made, let us start the proof by showing the Boolean equalities. The equality on the left is a simple consequence of the closure operation.
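In detail, if $w\in A_f^+\cap NS$ then the displayed formula identifies $\mathfrak{P}(W+w).\gamma$ with the nonsingular model set $\mathfrak{P}(W+w+\gamma ^*)$, whose associated vector $w+\gamma ^*$ lies in $(A_f^+ +\gamma ^*)\cap NS$ (the subset $NS$ being invariant under translation by $\Gamma ^*$), and conversely every nonsingular vector of $A_f^+ +\gamma ^*$ arises this way. Recalling that $[A_f^+]_\Xi$ is obtained from the family of nonsingular model sets with associated vectors in $A_f^+$ by taking closure, and that the translation $.\gamma$ is a homeomorphism of the internal system, thus commuting with closures, the equality (\ref{cut.up.half.space.translated}) follows:
\begin{align*}
[A_f^+]_{\Xi }.\gamma = \overline{\left\lbrace \mathfrak{P}(W+w) \, \vert \, w\in A_f^+\cap NS\right\rbrace }.\gamma = \overline{\left\lbrace \mathfrak{P}(W+w') \, \vert \, w'\in (A_f^+ +\gamma ^*)\cap NS\right\rbrace } = [A_f^+ +\gamma ^*]_\Xi .
\end{align*}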
The equality in the middle is equivalent to having disjoint decompositions \begin{align}\label{complémentaire} [A _f^+ +\gamma ^*]_\Xi \sqcup [A _f^- +\gamma ^*]_\Xi = \Xi ^\Gamma , \end{align} which reduces, due to the equalities provided in (\ref{cut.up.half.space.translated}), to showing that $[A _f^+]_\Xi \sqcup [A _f^-]_\Xi = \Xi ^\Gamma $. To that end, note that any element of the internal system $\Xi ^\Gamma$ is the limit of a sequence of nonsingular elements, a sequence which, after passing to a subsequence, can be taken inside one of the two open half-spaces $A_f^+$ and $A_f ^-$. Therefore such an element lies in either $[A _f^+]_\Xi $ or $ [A _f^-]_\Xi$, showing that their union covers the internal system. On the other hand, these subsets are by assumption clopen, so they must have a clopen intersection. Assume for a contradiction that this is not the empty set: It must contain a nonsingular model set $\Lambda $ (nonsingular model sets being dense in the internal system), whose image under $\Pi$ is a nonsingular element $w_\Lambda \in NS\subset \mathbb{R}^n$. However $\Lambda$ is the limit of two sequences of nonsingular model sets, with associated sequences of nonsingular elements in $\mathbb{R}^n$ taken in $A _f^+ $ for the first sequence and in $A _f^-$ for the second one. Taking limits one must have $w_\Lambda \in A_f$, and since $w_\Lambda$ has been taken nonsingular one has the desired contradiction. Having proven the left and middle Boolean equalities, one can deduce the third one by general Boolean algebra. To show that the sets $[A]_{\Xi}$ where $ A\in \mathfrak{A}$ form a subbasis for the topology, observe that the set $\mathring{M}=-\mathring{W}$ is precisely the intersection of admissible half-spaces $A_f^{s_f}$, where $s_f$ is the sign $-$ or $+$ such that $A_f^{s_f}$ contains the interior of $M$. From this we deduce \begin{align*} \Xi = [-\mathring{W}]_\Xi = [\mathring{M}]_\Xi = \left[ \bigcap _{f \text{ face of } M} A_f^{s_f}\right] _\Xi = \bigcap _{f \text{ face of } M}[A_f^{s_f}]_\Xi .
\end{align*} Thus the set $\Xi$ and its complementary set can be obtained as finite intersections of sets of the statement. Since $\Xi ^\Gamma $ admits a subbasis formed by the $\Gamma$--translates of $\Xi$ and its complementary set by Proposition \ref{prop:topology.internal.system.tot.disconnected}, the proof is complete. \end{proof} \subsection{Cones associated with model sets} We define the \textit{cut type} of a vector $w\in \mathbb{R}^n$ to be the family of linear hyperplanes for which some parallel singular hyperplane passes through $w$, \begin{align}\label{type.coupe} \mathfrak{H}_w:= \left\lbrace H_f\in \mathfrak{H}_W \; \vert \; w\in \Gamma ^*+A_f\right\rbrace . \end{align} To each $w\in \mathbb{R}^n$ is associated a family of \textit{cones} (also called \textit{corners} in \cite{Le}), which are open cones with vertex $0$ and boundaries formed by hyperplanes in $\mathfrak{H}_w$. We may label each of these cones by a \textit{cone type} $\mathfrak{c} : \mathfrak{H}_W\longrightarrow \left\lbrace -,+,\infty \right\rbrace $, so that the labeled cone is obtained, according to the notations of Section $4$, as \begin{align*} C:= \bigcap _{H\in \mathfrak{H}_W} H^{\mathfrak{c}(H)}. \end{align*} In the above intersection only the hyperplanes on which $\mathfrak{c}$ takes a value different from $\infty$ are taken into account, and we may set the \textit{domain} of a cone type $\mathfrak{c}$ to be the subset $dom(\mathfrak{c})$ of $\mathfrak{H}_W$ where it has value different from $\infty$. Moreover, each cone determined by the cut type $\mathfrak{H}_w$ admits exactly one cone type whose domain is precisely $\mathfrak{H}_w$. Now given a cone $C$ in $\mathbb{R}^n$ and some vector $w$ of $\mathbb{R}^n$, we define \begin{align}\label{cone.borné} C(w,\varepsilon ):= (C\cap B(0,\varepsilon ))+w \end{align} to be the head of the cone $C$, translated at $w$ and of length $\varepsilon $.
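To illustrate these notions, suppose for instance that $n=2$ and that the cut type of $w$ consists of two distinct lines $H_1, H_2\in \mathfrak{H}_W$. The cones associated with $w$ are then the four open quadrants
\begin{align*}
H_1^{\pm}\cap H_2^{\pm},
\end{align*}
each labeled by the cone type with domain $\lbrace H_1,H_2\rbrace$ recording the corresponding pair of signs and taking the value $\infty$ on any other hyperplane of $\mathfrak{H}_W$. For such a cone $C$, the head $C(w,\varepsilon )$ is an open circular sector of radius $\varepsilon $ with apex at $w$.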
One can easily verify that a vector $w\in \mathbb{R}^n$ belongs to the nonsingular vectors $NS$ if and only if its cut type $\mathfrak{H}_w$ is empty. In that case the unique resulting cone $C$ is the full Euclidean space $\mathbb{R}^n$, for which any set of the form (\ref{cone.borné}) is a Euclidean ball. \begin{prop}\label{prop:local.topology.internal.system} Given a model set $\Lambda \in \Xi ^\Gamma$, there exists a cone $C_\Lambda$, admitting a cone type $\mathfrak{c}_\Lambda$ with domain $\mathfrak{H}_{w_\Lambda}$, such that the following equivalence holds for each admissible half-space $A\in \mathfrak{A}$: \begin{align*} \Lambda \in [A]_\Xi \quad \Longleftrightarrow \quad C_\Lambda (w_\Lambda ,\varepsilon )\subset A \text{ for some } \varepsilon >0. \end{align*} \end{prop} \begin{proof} Let $\Lambda \in \Xi ^\Gamma$. If $H$ is a hyperplane of the cut type $\mathfrak{H}_{w_\Lambda}$, that is, if one has some $\gamma \in \Gamma $ and some face $f$ with $w_\Lambda \in \gamma ^*+A_f$, $A_f$ parallel to $H$, then the hyperplane $H + w_\Lambda$ is equal to $ \gamma ^*+A_f$ and the half-spaces $H^{\pm}+w_\Lambda$ are admissible. Therefore $[H^{+}+w_\Lambda]_\Xi$ and $[H^{-}+w_\Lambda]_\Xi$ are clopen complementary sets, and the one containing $\Lambda$ defines the sign $\mathfrak{c}_\Lambda(H)$. This determines $\mathfrak{c}_\Lambda$ uniquely. From the Boolean rules stated in Proposition \ref{prop:topology.internal.system}, the model set $\Lambda$ is such that \begin{align}\label{cone}\Lambda \in \bigcap _{H\in \mathfrak{H}_{w_\Lambda}} \left[ H^{\mathfrak{c}_\Lambda(H)}+w_\Lambda\right] _\Xi= \left[ \bigcap _{H\in \mathfrak{H}_{w_\Lambda}} H^{\mathfrak{c}_\Lambda(H)} + w_\Lambda\right] _\Xi= \left[ C_\Lambda+w_\Lambda\right] _\Xi , \end{align} with $C_\Lambda$ the unique cone with cone type $\mathfrak{c}_\Lambda$, in particular non-empty.
We now show that a model set $\Lambda$ has a neighborhood basis in the internal system obtained as \begin{align}\label{nei} [C_\Lambda + w_\Lambda]_\Xi \cap \Pi ^{-1}(B(w_\Lambda, \varepsilon)) . \end{align} From the inclusion of $\Lambda$ stated in (\ref{cone}) it is clear that (\ref{nei}) is a family of open neighborhoods of $\Lambda$ in the internal system. We will use the following lemma: \begin{lem}\label{lem:base.voisinages.ouverts} Let $\pi : X \rightarrow Y$ be a continuous and proper map between locally compact spaces. For each $x\in X$, let $X_x:= \pi ^{-1}(\pi (x))$ be the fiber of $\pi$ through $x$. If there is a clopen neighborhood $V_x$ of $x$ satisfying $V_x\cap X_x= \lbrace x\rbrace $, then a neighborhood basis of $x$ is provided by $V_x\cap \pi ^{-1}(U)$ with $U$ running among the neighborhoods of $\pi (x)$. \end{lem} \begin{proof} Suppose for a contradiction that the stated family is not a neighborhood basis of $x$. One may then select an open neighborhood $V$ of $x$ such that $V_x\cap \pi ^{-1}(U)$ meets $V^c$ for each neighborhood $U$ of $\pi (x)$. Let $\Delta $ be the directed family of open neighborhoods of $\pi (x)$ falling in some compact neighborhood $U_0$ of $\pi (x)$. One may select a net $\lbrace x_U \rbrace_{ U\in \Delta }$ in $V^c$ with each $x_U $ belonging to $ V_x\cap \pi ^{-1}(U)$. This net falls in the compact set $V_x\cap \pi ^{-1}(U_0)$ and in $V^c$ as well. Taking some accumulation point $x'$, necessarily lying in both $V_x$ and $ X_x$, and in the closed set $V^c$ as well, gives the contradiction since we supposed $V_x\cap X_x= \lbrace x\rbrace $, with $x$ contained in $V$. \end{proof} We then show that a clopen neighborhood of $\Lambda$ which fits the condition of the above lemma is provided by $[C_\Lambda + w_\Lambda]_\Xi$: For this, suppose that $\Lambda$ and $\Lambda '$ are distinct and such that $w_\Lambda = w_{\Lambda '}=:w$ in $\mathbb{R}^n$.
From Proposition \ref{prop:topology.internal.system} there is a face $f$ of $W$ as well as an element $\gamma \in \Gamma$ such that (up to a permutation of signs $+$ and $-$) $\Lambda\in [A_f^++\gamma ^*]_\Xi$ and $\Lambda '\in [A_f^-+\gamma ^*]_\Xi$. Then the vector $w$ falls into both closed half-spaces $A_f^{+0}+\gamma ^*$ and $A_f^{-0}+\gamma ^*$, and thus into $A_f + \gamma ^*$. The latter hyperplane can consequently also be written $H_f + w$, and it follows that $\Lambda \in [H_f^+ + w]_\Xi$ whereas $\Lambda '\in [H_f^- + w ]_\Xi$. This shows that $\Lambda '$ is outside $[C_\Lambda + w_\Lambda]_\Xi$, as desired.\\ Now, it is clear that $\Lambda \in [A]_\Xi $ if and only if one has a subset of the form $[C_\Lambda + w_\Lambda]_\Xi \cap \Pi ^{-1}(B(w_\Lambda, \varepsilon))$ included in $[A]_\Xi $ for some $\varepsilon >0$. Then intersecting with $NS$ gives that $C_\Lambda (w_\Lambda ,\varepsilon )\cap NS$ falls into $ A\cap NS$, and by taking the closure and then the interior in $\mathbb{R}^n$ one obtains the right-hand inclusion of the statement. Conversely if the right-hand inclusion of the statement occurs for some $\Lambda \in \Xi ^\Gamma$ then we may choose a sequence of nonsingular model sets converging to it, in such a way that the associated sequence of nonsingular vectors falls into (\ref{nei}), and thus into $C_\Lambda (w_\Lambda ,\varepsilon )$. The sequence of nonsingular model sets then lies in $[A]_\Xi $, and since the latter is closed we obtain the result. \end{proof} \subsection{Topology of the internal system Ellis semigroup} Recall that by construction, the Ellis semigroup for the internal system is the closure of the group $\Gamma$, or rather of the resulting group of homeomorphisms on the internal system. Thus for any Euclidean subset $A$ one may set a corresponding subset $[A]_E$ to be the closure of $\left\lbrace \gamma \in \Gamma \, \vert \, \gamma ^*\in A\right\rbrace $ in the Ellis semigroup $E(\Xi ^\Gamma,\Gamma )$ of the internal system.
We will in fact consider a specific family of Euclidean subsets, namely \begin{align*}\mathfrak{A}_{\mathrm{Ellis}}:=\lbrace H^\mathfrak{t} +w\, \vert \, H\in \mathfrak{H}_W, \, \mathfrak{t}\in \lbrace -,0,+\rbrace, \, w\in \mathbb{R}^n\rbrace . \end{align*} Observe that the above family strictly contains the family $\mathfrak{A}$ of admissible half-spaces. \begin{prop}\label{prop:topology.ellis.internal.system} Any set $[A]_E$, where $ A\in \mathfrak{A}_{\mathrm{Ellis}}$, is clopen, and the collection of these sets forms a subbasis for the topology of $E(\Xi ^\Gamma,\Gamma )$. Moreover, for any pair $A,A'$ in $\mathfrak{A}_{\mathrm{Ellis}}$ the following Boolean rules are true: $$[A\cup A']_E=[A]_E\cup [A']_E, \quad [A]_E^c=[A^c]_E, \quad [A\cap A']_E=[A]_E\cap [A']_E .$$ \end{prop} \begin{proof} From Proposition \ref{prop:topology.internal.system}, the sets $[A]_\Xi$, where $A$ is an admissible half-space, are clopen subsets of the internal system $\Xi ^\Gamma$, and form a subbasis for its topology. It thus follows that the sets \begin{align*} V\left( \Lambda , [A]_\Xi\right) := \left\lbrace g \in E(\Xi ^\Gamma,\Gamma ) \, \vert \; \Lambda .g\in [A]_\Xi \right\rbrace , \end{align*} where $\Lambda $ is any model set in the internal system and $A$ is any admissible half-space, are clopen subsets of the Ellis semigroup $E(\Xi ^\Gamma,\Gamma )$, and that they form a subbasis for its topology. Moreover, using the fact that $[\gamma ^*+A_f^{\pm}]_\Xi$ is equal to $[A_f^{\pm}]_\Xi .\gamma$ for any element $\gamma \in \Gamma$, one can directly check that $V(\Lambda , [\gamma ^*+A_f^{\pm}]_\Xi)$ is equal to $V(\Lambda .(-\gamma ), [A_f^{\pm}]_\Xi)$. This shows that a subbasis for the Ellis semigroup topology is obtained as the collection \begin{align}\label{sub-basis} \left\lbrace V\left( \Lambda , [A_f^{\pm}]_\Xi\right) \, \vert \; \Lambda \in \Xi ^\Gamma , \, f \text{ face of } M\right\rbrace .
\end{align} In order to relate these sets with the ones given in the statement we prove a cornerstone lemma for this proposition: \begin{lem}\label{lem:cornerstone} Let $\Lambda$ be in the internal system $\Xi ^\Gamma$. Then \begin{align*} V\left( \Lambda , [A_f^+]_\Xi\right)= \begin{cases} [A^{+0}_f-w_\Lambda]_E & \text{ if } \mathfrak{c}_\Lambda(H_f)=+, \\ [A^{+}_f-w_\Lambda]_E & \text{ if } \mathfrak{c}_\Lambda(H_f)=-, \\ [A^{+0}_f-w_\Lambda]_E= [A^{+}_f-w_\Lambda]_E & \text{ if } \mathfrak{c}_\Lambda(H_f)=\infty . \end{cases} \end{align*} The same statement holds with the $+$ and $-$ signs switched everywhere. \end{lem} \begin{proof} Recall from Lemma \ref{lem:clopen} that a clopen set of $E(\Xi ^\Gamma,\Gamma )$ is the closure of its subset of $\Gamma$--elements. Now given $V( \Lambda , [A_f^+]_\Xi )$, an element $\gamma \in \Gamma$ lies inside if and only if $\Lambda .\gamma\in [A_f^+]_\Xi$, which by Proposition \ref{prop:local.topology.internal.system} happens if and only if $C_{\Lambda .\gamma}(w_{\Lambda .\gamma}, \varepsilon ) $ is included in $A^+_f$ for some $\varepsilon >0$. As the cones of $\Lambda$ and its $\gamma$--translate are the same, and because the factor map $\Pi$ is $\Gamma$--equivariant, the previous condition is equivalent to \begin{align}\label{C3} C_\Lambda(\gamma ^*,\varepsilon ) \subset A^+_f -w_\Lambda \end{align} for some $\varepsilon >0$. It is then obvious that: \begin{itemize} \item[•] Whenever $\gamma ^*\in A^+_f -w_\Lambda$ this condition is satisfied. \item[•] Whenever $\gamma ^*\in A^-_f -w_\Lambda$ this condition is not satisfied. \end{itemize} Now suppose that $\mathfrak{c}_\Lambda(H_f)=\infty$, so that $H_f$ doesn't belong to the cut type of $w_\Lambda$: Then no element of $\Gamma$ maps into $A_f -w_\Lambda$ under the $*$--map, and thus by taking closures in the Ellis semigroup one has the desired equality in the case $\mathfrak{c}_\Lambda(H_f)=\infty$.
Suppose by contrast that $\mathfrak{c}_\Lambda(H_f)\neq \infty$, so that there exist elements of $\Gamma$ whose image under the $*$--map falls into $A_f -w_\Lambda$. Then for each such $\gamma \in \Gamma$ the hyperplane $A_f-w_\Lambda$ may also be written $H_f+\gamma ^*$, giving $A^+_f-w_\Lambda=H^+_f+\gamma ^*$. Hence such a $\gamma$ satisfies (\ref{C3}) if and only if the cone $C_\Lambda$ lies in $H_f^+$, which can be rewritten as $\mathfrak{c}_\Lambda(H_f)=+$. Again by taking closures in the Ellis semigroup, one has the desired equalities in the case $\mathfrak{c}_\Lambda(H_f)\neq \infty$. The argument remains valid when interchanging the $+$ and $-$ signs everywhere, completing the proof of the lemma. \end{proof} \begin{lem}\label{lem:ellis.clopen} For each hyperplane $H$ and vector $w\in \mathbb{R}^n$, one has a partition of the Ellis semigroup by clopen sets \begin{align}\label{partition} E(\Xi ^\Gamma ,\Gamma )=\left[ H^- +w\right] _E\sqcup \left[ H +w\right] _E\sqcup \left[ H^+ +w\right] _E . \end{align} \end{lem} \begin{proof} First observe that by construction the group $\Gamma$ is dense in the Ellis semigroup, and consequently the union of the three right-hand sets stated in the equality must cover the Ellis semigroup. Now select a face $f$ with $H=H_f$ and let $w'\in \mathbb{R}^n$ be such that $H^\mathfrak{t}+w$ can be rewritten as $A_f^\mathfrak{t}-w'$ for each sign $\mathfrak{t}\in \lbrace -,0,+\rbrace$ (this can always be achieved as $H$ and $A_f$ are parallel). This choice of vector $w'$ will be kept throughout this proof. It is quite clear that the middle term $[H_f +w]_E$ is nonempty if and only if one has an element $\gamma \in \Gamma$ such that $\gamma ^*\in H_f +w$, or equivalently in $A_f-w'$, which in turn means exactly that $H$ is a hyperplane of the cut type $\mathfrak{H}_{w'}$.
Thus we will consider two cases. Suppose that $H\in \mathfrak{H}_{w'}$: we may select two cones, both determined by the cut type $\mathfrak{H}_{w'}$, lying on opposite sides of $H$. Let us pick two model sets $\Lambda$ and $\Lambda '$ with common associated vector $w'$ in $\mathbb{R}^n$ and associated with these cones, so that $\mathfrak{c}_\Lambda(H)=+$ and $\mathfrak{c}_{\Lambda '}(H)=-$ up to a switch of signs (the existence of such model sets is shown in Theorem \ref{theo:topologie.par.cones} appearing later, whose proof is independent of the present statement). Then by the previous lemma, the set $[H_f^- +w]_E$ is the clopen subset $V( \Lambda , [A_f^-]_\Xi )$, and is disjoint from the other two sets since they are both included in $V( \Lambda , [A_f^+]_\Xi )$. In the same way the set $[H_f^+ +w]_E$ is the clopen subset $V( \Lambda ', [A_f^+]_\Xi )$, and is disjoint from the other two sets since they are both included in $V( \Lambda ', [A_f^-]_\Xi )$. Since the left-hand and right-hand terms are clopen and each disjoint from the two other sets, the stated union must be disjoint, and the middle term is clopen as well. If $H\notin \mathfrak{H}_{w'}$ then things are even easier: the middle term becomes empty, and in much the same way as before, by picking only one model set with associated vector $w'$ one can show that the two sets of the union are clopen and disjoint. \end{proof} Now the proof of the proposition follows almost immediately: By Lemma \ref{lem:ellis.clopen} the sets of the statement are clopen sets, and they form a subbasis since any member of the family (\ref{sub-basis}) can be written as one of them by Lemma \ref{lem:cornerstone}. It remains to show the Boolean rules: The left-hand rule is a direct consequence of the closure operation, whereas the middle rule follows from the family of partitions given by Lemma \ref{lem:ellis.clopen}. The third rule naturally follows from the two others.
\end{proof} \section{\large{\textbf{Main result on the internal system Ellis semigroup}}}\label{main internal} \subsection{The face semigroup of a convex polytope} Given a real cut and project scheme $(\mathbb{R}^n, \Sigma, \mathbb{R}^d)$ with an almost canonical window $W$ in the internal space, we shall define the \textit{face semigroup} of $W$ \cite{Bro, Sa}. Let $\mathfrak{H}_W$ be the family of linear hyperplanes parallel to the faces of $W$. It defines a stratification of $\mathbb{R}^n$ by cones of dimension between $0$ and $n$ (those cones are called \textit{faces} in \cite{Sa}), that is, by nonempty sets of the form \begin{align}\label{face} \bigcap _{H\in \mathfrak{H}_W} H^{\mathfrak{t}(H)} , \end{align} where $\mathfrak{t}(H)$ is a symbol among $ \left\lbrace -,0,+\right\rbrace $ for each $ H\in \mathfrak{H}_W$. Then each such cone $C$ is determined through a unique map $\mathfrak{t}_C: \mathfrak{H}_W\longrightarrow \left\lbrace -,0,+\right\rbrace $, which we call here its \textit{cone type}. A special class of cones is that of \textit{chambers}, that is, the cones of maximal dimension $n$, which are open in $\mathbb{R}^n$ and are precisely those with a nowhere-vanishing cone type. At the other extreme is the unique cone of dimension $0$, namely the singleton $\left\lbrace 0\right\rbrace $, whose cone type vanishes identically and which we denote by $\mathfrak{o}$.
Let us denote by $\mathfrak{T}_W$ the above set of cones, and define on this set a semigroup law: if $C,C'\in \mathfrak{T}_W$ are given, then the product $C.C'$ is the cone whose type is given by \begin{align*} \mathfrak{t}_{C.C'}(H)=\mathfrak{t}_C.\mathfrak{t}_{C'}(H):= \begin{cases} \mathfrak{t}_{C'}(H) \text{ if }\mathfrak{t}_C(H)= 0, \\ \mathfrak{t}_C(H)\, \text{ else.} \\ \end{cases} \end{align*} The reading direction is from right to left, as for actions: first we look at the value of $\mathfrak{t}_{C'}(H)$, keep it when $\mathfrak{t}_C(H)= 0$, and otherwise replace it by $\mathfrak{t}_C(H)$, in which case the value of $\mathfrak{t}_{C'}(H)$ is forgotten. It may easily be checked that this product law is well defined on $\mathfrak{T}_W$, that is, the product of two (nonempty) cones is again a (nonempty) cone, and is associative. \begin{de}\label{def:face.semigroup} The face semigroup associated with the polytope $W$ in $\mathbb{R}^n$ is the set $\mathfrak{T}_W$ equipped with the above product law. \end{de} It is clear from the formula that $\mathfrak{o}$ is an identity for $\mathfrak{T}_W$. Moreover, any cone $C$ satisfies the equality $C.C=C$, that is, is \textit{idempotent} in $\mathfrak{T}_W$. There moreover exists a natural partial order on the face semigroup under which $C\leqslant C'$ if and only if $C'$ is a lower-dimensional facet of $C$, or equivalently when the inclusion $C'\subseteq \overline{C}$ occurs. This may be rephrased by means of the semigroup law on $\mathfrak{T}_W$, as we have \begin{align*} C\leqslant C' \quad \Longleftrightarrow \quad \mathfrak{t}_C = \mathfrak{t}_{C'}.\mathfrak{t}_C . \end{align*} With respect to this order, the chambers are the minimal cones whereas $\mathfrak{o}$ is the (unique) maximal cone in the face semigroup. Some authors use the reverse order instead, but it appears more convenient for later needs to define the order as above.
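To illustrate the product law, take $n=2$ with $\mathfrak{H}_W=\lbrace H_1,H_2\rbrace$ consisting of two distinct lines. If $C$ is the relatively open ray with cone type $\mathfrak{t}_C=(+,0)$, that is, $C=H_1^+\cap H_2$, and $C'$ is the chamber with cone type $\mathfrak{t}_{C'}=(-,-)$, then
\begin{align*}
\mathfrak{t}_{C.C'}=(+,-), \qquad \mathfrak{t}_{C'.C}=(-,-)=\mathfrak{t}_{C'} .
\end{align*}
Indeed $\mathfrak{t}_{C.C'}(H_1)=\mathfrak{t}_C(H_1)=+$ since $\mathfrak{t}_C(H_1)\neq 0$, whereas $\mathfrak{t}_{C.C'}(H_2)=\mathfrak{t}_{C'}(H_2)=-$ since $\mathfrak{t}_C(H_2)=0$; the product $C.C'$ is thus the chamber $H_1^+\cap H_2^-$, and one checks in the same way that $C'.C=C'$, the cone type $\mathfrak{t}_{C'}$ vanishing nowhere.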
\subsection{Taking $\Gamma$ into account} Here we introduce a modified version of the face semigroup obtained from an almost canonical window $W$ of the internal space $\mathbb{R}^n$ of some real cut and project scheme. Let us call a cone $C$ of the face semigroup \textit{nontrivial} whenever the origin in $\mathbb{R}^n$ is an accumulation point of elements of $C\cap \Gamma ^*$. We moreover denote the family of nontrivial cones of the face semigroup by $\mathfrak{T}_{W,\Gamma }$, and refer to it as the \textit{nontrivial face semigroup}. It is at this point not clear whether $\mathfrak{T}_{W,\Gamma }$ is a subsemigroup of $\mathfrak{T}_{W}$. However, to convince ourselves that this is the case, we may observe that the product $C.C'$ of two cones of the face semigroup is the only cone containing a small head of the cone $C'$ when the latter is shifted by a small vector of $C$, and that this characterization preserves the subset $\mathfrak{T}_{W,\Gamma }$ in the face semigroup. Now given a nontrivial cone $C$, as $C\cap \Gamma ^*$ accumulates at $0$, the vector space $\langle C\rangle $ spanned by $C$ admits a subgroup $\langle C\rangle \cap \Gamma ^*$ which cannot be uniformly discrete, and thus is ``dense along some subspace''. More precisely, we state in our setting a theorem of \cite{Se}: \begin{theo}\label{theo:Senechal} The vector space $\langle C\rangle $ uniquely decomposes as a direct sum $ V\oplus D$, where $V\cap \Gamma ^*$ is dense in $ V$, $D\cap \Gamma ^*$ is uniformly discrete in $D$, and $\langle C\rangle \cap \Gamma ^* = \left( V \cap \Gamma ^*\right) \oplus \left( D \cap \Gamma ^*\right) $. \end{theo} Now given a nontrivial cone $C$ of the face semigroup with decomposition $\langle C\rangle = V\oplus D$ provided by the previous theorem, the summand $V$ is nontrivial and thus one may attach to $C$ another smaller cone, \begin{align*} \mathsf{C}:= C\cap V. \end{align*} We call $\mathsf{C}$ the \textit{plain cone} associated to $C$.
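To illustrate this construction, suppose for instance that $n=2$, that $C$ is the open half-plane $\lbrace (x,y) \,\vert \, x>0\rbrace$, and that $\Gamma ^*=(\mathbb{Z}+\mathbb{Z}\sqrt{2})\times \mathbb{Z}$, a configuration we only use here to illustrate the decomposition, without asking it to come from an almost canonical window. Then $C\cap \Gamma ^*$ accumulates at $0$ along the horizontal axis (via the positive powers $(\sqrt{2}-1)^k$), so $C$ is nontrivial, and $\langle C\rangle =\mathbb{R}^2$ decomposes as $V\oplus D$ with $V=\mathbb{R}\times \lbrace 0\rbrace$, where $\Gamma ^*\cap V=\mathbb{Z}+\mathbb{Z}\sqrt{2}$ is dense, and $D=\lbrace 0\rbrace \times \mathbb{R}$, where $\Gamma ^*\cap D=\mathbb{Z}$ is uniformly discrete. The plain cone is then the open ray
\begin{align*}
\mathsf{C}=C\cap V=\lbrace (x,0)\,\vert \, x>0\rbrace ,
\end{align*}
on which $\mathsf{C}\cap \Gamma ^*$ is indeed dense.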
By construction, the cone $\mathsf{C}$ is open in the space $V$ and spans this latter space, and $\mathsf{C}\cap \Gamma ^*$ is a dense subset of the plain cone $\mathsf{C}$. It is easy to observe that $\mathsf{C}=C$ if and only if the set $C\cap \Gamma ^*$ is dense in the cone $C$. For any nontrivial cone type $\mathfrak{t}\in \mathfrak{T}_{W, \Gamma }$ we may define $\mathsf{C}_\mathfrak{t}$ to be the plain cone associated with $C_\mathfrak{t}$. \subsection{The main theorem for the internal system Ellis semigroup} Let us consider an Ellis transformation $g\in E(\Xi ^\Gamma,\Gamma)$ with associated translation vector $w_g$ in $\mathbb{R}^n$. It has been shown in Lemma \ref{lem:ellis.clopen} that, for each hyperplane $H\in \mathfrak{H}_W$, the mapping $g$ falls into one and only one clopen subset of the form $[H^{\mathfrak{t}} +w_g]_E$; the signs so obtained, one for each hyperplane $H\in \mathfrak{H}_W$, determine a face type $\mathfrak{t}_g$ uniquely. To see that $\mathfrak{t}_g$ is a face type in the above sense, that is, is associated with a nonempty cone $C_g$ of the stratification obtained from $\mathfrak{H}_W$, observe that from the Boolean rules of Proposition \ref{prop:topology.ellis.internal.system} one has \begin{align}\label{intersection.ellis} g\in \bigcap _{H \in \mathfrak{H}_W} \left[ H^{\mathfrak{t}_g(H)} +w_g\right] _E = \left[ \bigcap _{H \in \mathfrak{H}_W} H^{\mathfrak{t}_g(H)} +w_g\right] _E = \left[ C_g +w_g\right] _E, \end{align} which ensures that $C_g$ must be nonempty.
Having related the internal system Ellis semigroup with the face semigroup just defined, we are now able to state our main theorem concerning the internal system Ellis semigroup: \begin{theo}\label{theo:principal.interne} The mapping associating to any transformation $g$ the couple $(w_g,\mathfrak{t}_g)$ establishes an isomorphism between the Ellis semigroup $E(\Xi ^\Gamma,\Gamma)$ and the subsemigroup of the direct product $\mathbb{R}^n \times \mathfrak{T}_{W,\Gamma }$ given by \begin{align*}\bigsqcup _{\mathfrak{t}\in \mathfrak{T}_{W,\Gamma }} \left[ \langle \mathsf{C}_\mathfrak{t}\rangle +\Gamma ^*\right] \times \left\lbrace \mathfrak{t}\right\rbrace . \end{align*} This isomorphism becomes a homeomorphism when the above union is equipped with the following convergence class: $(w_\lambda,\mathfrak{t}_\lambda)\longrightarrow (w,\mathfrak{t})$ if and only if \begin{align*} \forall \varepsilon >0, \exists \, \delta _\lambda >0 \text{ such that } \; \mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda,\delta _\lambda)\subset \mathsf{C}_{\mathfrak{t}}(w,\varepsilon ) \; \text{ for large enough }\lambda . \end{align*} The Ellis semigroup $E(\Xi ^\Gamma,\Gamma)$ has a first countable topology. \end{theo} The convergence class of the statement is there to specify the full family of nets and limit points which obey the above condition. This family completely characterizes the Ellis semigroup topology since, being derived from the topology of the internal system Ellis semigroup, it satisfies a correct set of axioms which permit one to recover the closure operator on the Ellis semigroup, and thus its topology (see \cite{Kelley}). The remaining part of this section is devoted to the proof of the above theorem. To this end we decompose the proof into three parts: the first one states the existence of a semigroup isomorphism between the internal system Ellis semigroup and a subsemigroup of the direct product $\mathbb{R}^n \times \mathfrak{T}_{W}$.
The second step shows that the isomorphic image lies in $\mathbb{R}^n \times \mathfrak{T}_{W,\Gamma }$ and is of the form stated above. In a third part we then show the topological part of the statement. \paragraph{6.3.1 $\; $ Step 1: Existence of the semigroup isomorphism} \begin{prop}\label{prop:morphism.type} The mapping $E(\Xi ^\Gamma,\Gamma) \longrightarrow \mathfrak{T}_W$ that associates to each transformation $g$ its face type $\mathfrak{t}_g$ is a semigroup morphism. \end{prop} \begin{proof} We have to show that, given two transformations $g$ and $h$, the face types $\mathfrak{t}_{g.h}$ and $\mathfrak{t}_g.\mathfrak{t}_h$ are equal. By (\ref{intersection.ellis}) the transformation $g.h$ lies in the clopen subset $[C_{g.h}+w_{g.h}]_E$. Since, by construction, $\Gamma$ is dense in the Ellis semigroup, and since the composition law on this latter is right-continuous, one can find a $\gamma \in \Gamma$ sufficiently close to $h$ in the sense that $$ \text{(i)}\; \; \gamma \in [C_h + w_h ]_E, \; \; \; \; \; \; \; \; \text{(ii)}\; \; g.\gamma \in [C_{g.h}+w_{g.h}]_E . $$ From Lemma \ref{lem:ellis.clopen} together with the Boolean rules of Proposition \ref{prop:topology.ellis.internal.system}, one can deduce from (i) that $\gamma ^*\in C_h + w_h$, or equivalently $(\gamma ^*-w_h)\in C_h$ in the internal space. Moreover, given $\varepsilon >0$, as the transformation $g.\gamma$ lies both in the clopen subset $[C_{g.\gamma}+w_{g.\gamma}]_E$ and in the open subset $(\Pi ^*)^{-1}(B(w_{g.\gamma },\varepsilon))$, again from the density of $\Gamma $ in the Ellis semigroup together with point (ii), one can find an element $\gamma _\varepsilon \in \Gamma$ sufficiently close to $g.\gamma$ so that \begin{align*} \gamma ^*_\varepsilon \in \left( C_{g.h}+w_{g.h}\right) \cap \left( C_{g.\gamma}+w_{g.\gamma} \right) \cap B(w_{g.\gamma },\varepsilon).
\end{align*} Since the cone associated with $g.\gamma$ is equal to the one associated with $g$, the previous fact implies that \begin{align*} C_g(\gamma ^*-w_h,\varepsilon ) \cap C_{g.h}\neq \emptyset \; \forall \varepsilon >0 \; \; \text{with} \; \; \gamma ^*-w_h\in C_h . \end{align*} Let us now distinguish three cases for a hyperplane $H\in \mathfrak{H}_W$: \begin{itemize} \item[$\mathfrak{t}_h(H) =+$:] In this case the vector $\gamma ^*-w_h\in C_h $ falls into the open half-space $ H^+$, and thus one may find an $\varepsilon _0$ with $C_g(\gamma ^*-w_h,\varepsilon _0)$ included in $ H^+$, so that $H^+$ must intersect the cone $C_{g.h}$. This forces $C_{g.h}\subset H^+$, or equivalently $\mathfrak{t}_{g.h}(H)=+$. \item[$\mathfrak{t}_h(H)=-$:] By the same type of argument one can show that $\mathfrak{t}_{g.h}(H)=-$. \item[$\mathfrak{t}_h(H)=0$:] In this last case one has $\gamma ^*-w_h\in H$ and thus $C_g(\gamma ^*-w_h,\varepsilon )\subset H^{\mathfrak{t}_g(H)}$ whatever the symbol $\mathfrak{t}_g(H)$. It thus follows that $H^{\mathfrak{t}_g(H)}\cap C_{g.h}$ is nonempty, which necessarily gives $C_{g.h}\subset H^{\mathfrak{t}_g(H)}$, or equivalently $\mathfrak{t}_{g.h}(H)=\mathfrak{t}_g(H)$. \end{itemize} The above three cases show that the cone type $\mathfrak{t}_{g.h}$ is equal to the composition $\mathfrak{t}_g.\mathfrak{t}_h$, as desired. \end{proof} Combining the previous proposition with the existence of the onto morphism of Proposition \ref{prop:internal.morphism.ellis}, we see that the mapping that associates to each transformation $g$ in $E(\Xi ^\Gamma,\Gamma)$ the couple $(w_g, \mathfrak{t}_g)$ in the product semigroup $\mathbb{R}^n\times \mathfrak{T}_W$ is a semigroup morphism. Thus to settle Step $1$, we only need to show injectivity: Suppose for that purpose that two distinct transformations $g$ and $h$ satisfy $w_g=w_h=:w$ in the internal space.
Then by using the subbasis of Proposition \ref{prop:topology.ellis.internal.system} one can find a vector $w_0$ as well as a hyperplane $H\in \mathfrak{H}_W$ such that $g$ and $h$ fall into different clopen subsets among the partition \begin{align*} E(\Xi ^\Gamma ,\Gamma )=\left[ H^- +w_0\right] _E\sqcup \left[ H^{} +w_0\right] _E\sqcup \left[ H^+ +w_0\right] _E . \end{align*} Thus one must have that $w$ and $w_0$ are equal up to a vector of the hyperplane $H$, and this implies that the signs $\mathfrak{t}_g(H)$ and $\mathfrak{t}_h(H)$ must be different. This exactly means that the associated cone types $\mathfrak{t}_g$ and $\mathfrak{t}_h$ are different, and the proof of Step $1$ is complete. \paragraph{6.3.2 $\; $ Step 2: Determination of the isomorphic image} The problem now is to identify the subsemigroup of $\mathbb{R}^n \times \mathfrak{T}_W$ isomorphic to the internal system Ellis semigroup via the previous mapping. To that end, one may describe this subsemigroup as a disjoint union \begin{align*} \bigsqcup _{\mathfrak{t}\in \mathfrak{T}_W} \mathbb{R}^n_{\mathfrak{t}} \times \left\lbrace \mathfrak{t}\right\rbrace \end{align*} for some Euclidean subsets $\mathbb{R}^n_\mathfrak{t}$, the \textit{allowed translations} of a cone type $\mathfrak{t}$, which we need to identify. A first point about this is the following lemma. \begin{lem}\label{lem:translations.permises} For any cone type $\mathfrak{t}\in \mathfrak{T}_W$ with associated cone $C_\mathfrak{t}$ one has \begin{align*} \mathbb{R}^n_\mathfrak{t} = \left\lbrace w\in \mathbb{R}^n \, \vert \, (C_{\mathfrak{t}}+w)\cap \Gamma ^* \text{ accumulates at } w\right\rbrace . \end{align*} \end{lem} \begin{proof} Given some $\mathfrak{t}\in \mathfrak{T}_W$ with associated cone $C_\mathfrak{t}$, its set of allowed translations $\mathbb{R}^n_\mathfrak{t}$ is by construction $ \mathbb{R}^n_\mathfrak{t} = \left\lbrace w_g \, \vert \, g\in E(\Xi ^\Gamma ,\Gamma) \text{ and } \mathfrak{t}_g=\mathfrak{t} \right\rbrace $.
Let us show $"\supseteq "$: If $w$ is such that $(C_{\mathfrak{t}}+w)\cap \Gamma ^*$ accumulates at $ w$ then the intersection $(C_{\mathfrak{t}}+w)\cap B(w, \varepsilon )\cap \Gamma ^* $ is non-empty for any $\varepsilon > 0$, and thus the family $\left\lbrace [C_{\mathfrak{t}}+w]_E\cap (\Pi ^*)^{-1}(B(w,\varepsilon ))\right\rbrace _{\varepsilon > 0}$ forms a filter base in the space $E(\Xi ^\Gamma,\Gamma)$. In turn, the morphism $\Pi ^*$ is, by Proposition \ref{prop:map.ellis.loc.compact}, a proper map, so this filter base, for $0< \varepsilon < \varepsilon _0$, lies in the fixed compact subset $(\Pi ^*)^{-1}\left( \overline{B}(w,\varepsilon _0)\right) $ and thus possesses an accumulation point $g$. This Ellis transformation necessarily satisfies $w_g=w$, and because the set $[C_{\mathfrak{t}}+w]_E= [C_{\mathfrak{t}}+w_g]_E$ is closed and contains the above filter base, it thus contains $g$. We deduce that $C_g= C_{\mathfrak{t}}$, or equivalently $\mathfrak{t}_g = \mathfrak{t}$, giving that $w = w_g \in \mathbb{R}^n_\mathfrak{t}$. Conversely we show $"\subseteq "$: Given some cone type $\mathfrak{t}$ and some Ellis transformation $g$ with $\mathfrak{t}= \mathfrak{t}_g$, then as $g$ lies in $[C_g +w_g]_E$ one can select a net of elements of $(C_g +w_g)\cap \Gamma ^*= (C_\mathfrak{t} +w_g)\cap \Gamma ^*$ converging to $g$ in the internal system Ellis semigroup. Applying $\Pi ^*$ we obtain a net of $(C_\mathfrak{t} +w_g)\cap \Gamma ^* $ converging to $w_g$ in the Euclidean space $\mathbb{R}^n$, so that $ (C_{\mathfrak{t}}+w_g)\cap \Gamma ^*$ accumulates at $w_g$. \end{proof} Let now $\mathfrak{T}_{W,0}$ be the homomorphic image of the internal system Ellis semigroup in the face semigroup $\mathfrak{T}_{W}$ via the morphism of Proposition \ref{prop:morphism.type}. Then it precisely consists of those cone types that have a nonempty associated subset $\mathbb{R}^n_\mathfrak{t}$ of allowed translations.
From the definition of the nontrivial face semigroup $\mathfrak{T}_{W,\Gamma }$, a face type $\mathfrak{t}$ is nontrivial if and only if $0$ lies in $\mathbb{R}^n_\mathfrak{t}$, which shows in particular that $\mathfrak{T}_{W,0}$ contains the nontrivial face semigroup $\mathfrak{T}_{W,\Gamma }$. We will now write any Euclidean subset $\mathbb{R}^n_\mathfrak{t}$ in a more suitable form. Obviously it is sufficient to consider cone types of the homomorphic image $\mathfrak{T}_{W,0}$. Observe that for any such cone type, the associated Euclidean subset of allowed translations is stable under $\Gamma ^*$--translation. \begin{prop}\label{prop:décomp.translations.permises} Let $\mathfrak{t}\in \mathfrak{T}_{W,0}$, with $\langle C_\mathfrak{t} \rangle = V_\mathfrak{t} \oplus D_\mathfrak{t}$ being its direct sum decomposition from Theorem \ref{theo:Senechal}. Then one has \begin{align*} \mathbb{R}^n_{\mathfrak{t}}= V_{\mathfrak{t}}+ \Gamma ^*. \end{align*} \end{prop} \begin{proof} For $\mathfrak{t}\in \mathfrak{T}_{W,0}$ and $\langle C_\mathfrak{t} \rangle = V_\mathfrak{t} \oplus D_\mathfrak{t}$, denote by $P^V$ (resp. $P^D$) the skew projection of $\langle C_\mathfrak{t} \rangle$ with range $V_{\mathfrak{t}}$ and kernel $D_{\mathfrak{t}}$ (resp. the skew projection with range $ D_{\mathfrak{t}}$ and kernel $V_{\mathfrak{t}}$). Then, from the particular form of the decomposition, one has $P^V(\langle C_\mathfrak{t} \rangle\cap \Gamma ^*)= V_{\mathfrak{t}}\cap \Gamma ^*$ and $P^D( \langle C_\mathfrak{t} \rangle\cap \Gamma ^*)= D_{\mathfrak{t}}\cap \Gamma ^*$. Let us show first that $\mathbb{R}^n_{\mathfrak{t}}$ lies in $ \langle C_\mathfrak{t} \rangle + \Gamma ^*$: Any vector $w\in \mathbb{R}^n_{\mathfrak{t}}$ admits some $\gamma ^*$ in $(C_{\mathfrak{t}}+w)\cap \Gamma ^*$, so that $\gamma ^*-w$ lies in $C_{\mathfrak{t}}$ and thus in $\langle C_\mathfrak{t} \rangle$. So does the vector $w-\gamma ^*$, giving that $w$ lies in $ \langle C_\mathfrak{t} \rangle + \Gamma ^*$.
Now we more precisely show that $\mathbb{R}^n_{\mathfrak{t}}$ lies in $ V_{\mathfrak{t}}+ \Gamma ^*$: Given $w\in \mathbb{R}^n_{\mathfrak{t}}$, one may write $w= w' + \gamma ^*$ with $w'\in \langle C_\mathfrak{t} \rangle$ and $\gamma ^*\in \Gamma ^*$, $w'$ itself being in $\mathbb{R}^n_{\mathfrak{t}}$ as this latter is stable under $\Gamma ^*$--translation. It thus suffices to prove that $w'$ lies in $V_{\mathfrak{t}}+ \Gamma ^*$ to conclude. From the previous lemma, $w'$ is the limit of a sequence $(\gamma ^*_k)$ of elements in $(C_{\mathfrak{t}}+w')\cap \Gamma ^*$, in turn included in $\langle C_\mathfrak{t} \rangle \cap \Gamma ^*$. Thus $P^D(\gamma ^*_k)$ converges to $P^D(w')$ and $P^V(\gamma ^*_k)$ converges to $P^V(w')$. But as the sequence $(P^D(\gamma ^*_k))$ lies in the uniformly discrete subset $ D_{\mathfrak{t}}\cap \Gamma ^*$ of $ D_{\mathfrak{t}}$, it must be eventually constant, equal to $P^D(w')$ for great enough $k$. Hence $P^D(w')$ lies in $\Gamma ^*$, which gives $w'= P^V(w')+ P^D(w')\in V_{\mathfrak{t}}+ \Gamma ^*$, as desired. Observe moreover that the sequence eventually satisfies $P^V(\gamma ^*_k)= \gamma ^*_k- P^D(w')\in (C_{\mathfrak{t}}+w')\cap \Gamma ^*- P^D(w')$, with $P^D(w')\in \Gamma ^*$, and thus $P^V(\gamma ^*_k)\in (C_{\mathfrak{t}}+P^V(w'))\cap \Gamma ^*$. Hence $P^V(\gamma ^*_k)-P^V(w')= P^V(\gamma ^*_k-w')$ lies in both $ V_{\mathfrak{t}}$ and $C_{\mathfrak{t}}$ eventually, which ensures that the intersection $\mathsf{C}_\mathfrak{t}:=C_{\mathfrak{t}}\cap V_{\mathfrak{t}}$ is nonempty. Now we show that $\mathbb{R}^n_{\mathfrak{t}}$ contains $ V_{\mathfrak{t}}+ \Gamma ^*$: To that end it suffices from $\Gamma ^*$--invariance to show that it contains $ V_{\mathfrak{t}}$.
First it is clear that the subset $\mathsf{C}_\mathfrak{t}$ is a (nonempty) open cone of the space $V_{\mathfrak{t}}$, since it is the intersection of $C_{\mathfrak{t}}$, which is open in its own spanned space $\langle C_\mathfrak{t} \rangle $, with the subspace $V_{\mathfrak{t}}$. Let now $ w\in V_{\mathfrak{t}}$ be given. Then $\mathsf{C}_\mathfrak{t}$ is open in $V_{\mathfrak{t}}$ and is a cone pointed at $0$, so that $\mathsf{C}_\mathfrak{t}+w$ is an open cone of $V_{\mathfrak{t}}$ pointed at $w$. But from the density of $V_\mathfrak{t}\cap \Gamma ^*$ in $V_{\mathfrak{t}}$ one can obtain $w$ as an accumulation point of $(\mathsf{C}_\mathfrak{t}+w) \cap \Gamma ^*$ and thus of $(C_{\mathfrak{t}}+w)\cap \Gamma ^*$, showing that $w\in \mathbb{R}^n_{\mathfrak{t}}$, as desired. \end{proof} From the previous proposition one gets that any cone type $\mathfrak{t}$ of $\mathfrak{T}_{W,0}$ has the origin $0$ as an allowed translation, and thus is an element of $\mathfrak{T}_{W,\Gamma}$. This shows that the internal system Ellis semigroup is isomorphic with a subsemigroup of the direct product $\mathbb{R}^n\times \mathfrak{T}_{W,\Gamma}$, and that its isomorphic image is of the form stated in Theorem \ref{theo:principal.interne}, once we recall that $V_\mathfrak{t}$ is spanned by the plain cone $\mathsf{C}_\mathfrak{t}$ for any $\mathfrak{t}\in \mathfrak{T}_{W,\Gamma}$. This completes Step $2$. \paragraph{6.3.3 $\; $ Step 3: The topology of convergence} Let us first show the first countability property of the internal system Ellis semigroup: From the injectivity of the mapping associating to any transformation $g$ the couple $(w_g, \mathfrak{t}_g)$, one can deduce that $g$ is the only transformation in its fiber with respect to $\Pi ^*$ falling into the clopen subset $\left[ C_g +w_g\right] _E$ of the Ellis semigroup.
It follows by Lemma \ref{lem:base.voisinages.ouverts} that a neighborhood basis of $g$ is provided by the intersections \begin{align}\label{ellis.local.basis}\left[ C_g +w_g\right] _E\cap (\Pi ^*)^{-1}(B(w_g,\varepsilon )). \end{align} It is then clear that one can extract from this family a countable neighborhood basis, completing the argument. Now we wish to show the bicontinuity of the stated isomorphism, and to that end we let $(g_\lambda )$ be a net of the Ellis semigroup with associated net $(w_\lambda,\mathfrak{t}_\lambda)$ in the direct product $\mathbb{R}^n\times\mathfrak{T}_{W,\Gamma}$, and $g$ be some Ellis transformation with associated couple $(w,\mathfrak{t})$. Let us first state a useful lemma: \begin{lem}\label{lem:plain.cone} There exists an $\varepsilon _0>0$ such that, for any $\mathfrak{t}\in \mathfrak{T}_{W,\Gamma }$ and $w\in \mathbb{R}^n_{\mathfrak{t}}= V_\mathfrak{t}+ \Gamma ^*$, we have $$C_{\mathfrak{t}}(w,\varepsilon )\cap \Gamma ^* = \mathsf{C}_{\mathfrak{t}}(w,\varepsilon )\cap \Gamma ^* \; \; \; \forall \; 0<\varepsilon \leqslant\varepsilon _0.$$ \end{lem} \begin{proof} Clearly the cone $C_{\mathfrak{t}}(w,\varepsilon )$ contains $ \mathsf{C}_{\mathfrak{t}}(w,\varepsilon )$ for all $\varepsilon >0$. Conversely, let $\mathfrak{t}\in \mathfrak{T}_{W,\Gamma }$ be chosen, with associated cone $C_{\mathfrak{t}}$ in $\mathbb{R}^n$ and the direct sum decomposition $\langle C_{\mathfrak{t}}\rangle= V_{\mathfrak{t}}\oplus D_{\mathfrak{t}}$ provided by Theorem \ref{theo:Senechal}. As $ D_{\mathfrak{t}}\cap \Gamma ^*$ is uniformly discrete in $D_{\mathfrak{t}}$, with $\varepsilon _{\mathfrak{t}}>0$ being some radius of discreteness, we must have $$ \langle C_{\mathfrak{t}}\rangle \cap B(w, \varepsilon _{\mathfrak{t}}) \cap \Gamma ^*= (V_{\mathfrak{t}}+w) \cap B(w, \varepsilon _{\mathfrak{t}})\cap \Gamma ^*$$ for any $w\in V_\mathfrak{t}+ \Gamma ^*$.
Hence by intersecting with $C_{\mathfrak{t}}+w$ we obtain \begin{align*} C_{\mathfrak{t}}(w,\varepsilon _{\mathfrak{t}})\cap \Gamma ^*= (C_{\mathfrak{t}}+w)\cap (V_{\mathfrak{t}}+w)\cap B(w, \varepsilon _{\mathfrak{t}})\cap \Gamma ^*= \mathsf{C}_{\mathfrak{t}}(w,\varepsilon _{\mathfrak{t}})\cap \Gamma ^* . \end{align*} Finally, taking $\varepsilon _0$ to be the minimum over $\varepsilon _{\mathfrak{t}}$, $\mathfrak{t}\in \mathfrak{T}_{W,\Gamma }$, gives the statement. \end{proof} Then $ g_\lambda $ converges to $g$ if and only if for any $\varepsilon >0$, which can be chosen less than the constant $\varepsilon _0$ of Lemma \ref{lem:plain.cone}, there is some net of positive real numbers $(\delta _\lambda )$, which can be chosen less than the constant $\varepsilon _0$ as well, such that one has for great enough $\lambda$: \begin{align*} \left[ C_{\mathfrak{t}_\lambda} +w_\lambda\right] _E\cap (\Pi ^*)^{-1}(B(w_\lambda ,\delta _\lambda ))\; \subset \; \left[ C_\mathfrak{t} +w\right] _E\cap (\Pi ^*)^{-1}(B(w ,\varepsilon )) . \end{align*} By Lemma \ref{lem:plain.cone}, intersecting with $\Gamma ^*$ leads for great enough $\lambda$ to \begin{align*} \mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda ,\delta _\lambda )\cap \Gamma ^* \; \subset \; \mathsf{C}_{\mathfrak{t}}(w,\varepsilon )\cap \Gamma ^* . \end{align*} Now the affine space generated by $\mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda ,\delta _\lambda )$ is precisely $V_{\mathfrak{t}_\lambda}+w_\lambda $ which contains, since $w_\lambda$ is an allowed translation for $\mathfrak{t}_\lambda$, a dense subset of elements of $\Gamma ^*$. The same occurs for $w$ with respect to $\mathfrak{t}$, and thus we get for great enough $\lambda$ the inclusions \begin{align*}V_{\mathfrak{t}_\lambda}+w_\lambda \; \subset \; V_{\mathfrak{t}}+w .
\end{align*} As $\mathsf{C}_{\mathfrak{t}}(w,\varepsilon )$ is a topologically regular open subset of $V_{\mathfrak{t}}+w$, its intersection with $V_{\mathfrak{t}_\lambda}+w_\lambda$ forms an open topologically regular subset of this latter affine space, containing $\mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda ,\delta _\lambda )\cap \Gamma ^*$. As $\mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda ,\delta _\lambda )$ is a topologically regular open subset of $V_{\mathfrak{t}_\lambda}+w_\lambda$ as well, taking the closure and next the interior in $V_{\mathfrak{t}_\lambda}+w_\lambda$ provides for great enough $\lambda$ \begin{align*} \mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda ,\delta _\lambda )\; \subset \; \mathsf{C}_{\mathfrak{t}}(w,\varepsilon ) , \end{align*} thus giving the $\Rightarrow $ part of the statement. Conversely, let us suppose that for any $\varepsilon >0$, which can be chosen less than the constant $\varepsilon _0$ of Lemma \ref{lem:plain.cone}, there is some net of positive real numbers $(\delta _\lambda )$, which can be chosen less than the constant $\varepsilon _0$ as well, such that one has $\mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda ,\delta _\lambda ) \subset \mathsf{C}_{\mathfrak{t}}(w,\varepsilon ) \subset C_\mathfrak{t}+w$ for great enough $\lambda$. Now the first point is that the net $(w_\lambda)$ converges to $w$ in $\mathbb{R}^n$, and so $g_\lambda$ falls into the inverse image of any ball $B(w, \varepsilon )$ for great enough $\lambda$. Secondly, any $g_\lambda $ has a neighborhood of the form $\left[ C_{\mathfrak{t}_\lambda} +w_\lambda \right] _E\cap (\Pi ^*)^{-1}(B(w_\lambda ,\delta _\lambda ))$, which is contained in the subset $\left[ C_{\mathfrak{t}_\lambda}(w_\lambda ,\delta _\lambda ) \right] _E =\left[ \mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda ,\delta _\lambda ) \right] _E$ and thus in $\left[ C_\mathfrak{t} +w \right] _E$ for great enough $\lambda$.
Combining these two arguments we deduce from the neighborhood basis formula (\ref{ellis.local.basis}) that $g_\lambda$ converges to $g$ in the internal system Ellis semigroup. This completes the proof of Theorem \ref{theo:principal.interne}.\begin{flushright} $\square $ \end{flushright} \section{\large{\textbf{Results on the hull Ellis semigroup and additional algebraic features}}}\label{main} We arrive at our main result, namely, the algebraic and topological description of the Ellis semigroup for a hull $\mathbb{X}$ of almost canonical model sets together with its $\mathbb{R}^d$--action. \subsection{The main result} From Theorem \ref{theo:suspension.ellis}, any transformation $\mathsf{g}$ in the semigroup $E(\mathbb{X},\mathbb{R}^d)$ may be written as $\tilde{g}-s$ where $g$ is a transformation in $E(\Xi ^\Gamma , \Gamma )$ and $s$ a vector of $\mathbb{R}^d$, and with $g$ uniquely defined up to an element of $\Gamma$. Thus we may associate to any transformation $\mathsf{g}=\tilde{g}-s$ the cone type of any underlying transformation $g\in E(\Xi ^\Gamma , \Gamma )$, which we write $\mathfrak{t}_\mathsf{g}$, thus providing a semigroup morphism from the hull Ellis semigroup $E(\mathbb{X},\mathbb{R}^d)$ to the nontrivial face semigroup $\mathfrak{T}_{W,\Gamma }$. 
We are now able to formulate the main result of this work, which is completely deduced from Theorems \ref{theo:suspension.ellis} and \ref{theo:principal.interne}: \begin{theo}\label{theo:principal} The mapping that associates to any transformation $\mathsf{g}$ the couple $(z_\mathsf{g},\mathfrak{t}_\mathsf{g})$ establishes an isomorphism between the Ellis semigroup $E(\mathbb{X},\mathbb{R}^d)$ and the subsemigroup of the direct product $\left[ \mathbb{R}^{n+d}\right] _{\Sigma} \times \mathfrak{T}_{W,\Gamma }$ given by \begin{align*}\bigsqcup _{\mathfrak{t}\in \mathfrak{T}_{W,\Gamma }} \left[ \langle \mathsf{C}_\mathfrak{t}\rangle \times\mathbb{R}^d\right]_{\Sigma} \times \left\lbrace \mathfrak{t}\right\rbrace . \end{align*} Moreover, this isomorphism becomes a homeomorphism when the above union is equipped with the following convergence class: $(z_\lambda,\mathfrak{t}_\lambda)\longrightarrow (z,\mathfrak{t})$ if and only if one can write $z_\lambda =[w_\lambda,s_\lambda]_{\Sigma}$ and $z =[w,s]_{\Sigma}$ such that \begin{itemize} \item[$\mathrm{(1)}$] $s_\lambda \longrightarrow s$ in $\mathbb{R}^d$, \item[$\mathrm{(2)}$] $\forall \varepsilon >0, \exists \, \delta _\lambda >0 \text{ such that } \; \mathsf{C}_{\mathfrak{t}_\lambda}(w_\lambda,\delta _\lambda)\subset \mathsf{C}_{\mathfrak{t}}(w,\varepsilon ) \; \text{ in } \mathbb{R}^n \text{ for large enough }\lambda $. \end{itemize} Finally, the Ellis semigroup $E(\mathbb{X},\mathbb{R}^d)$ has a first countable topology, and the dynamical system $(\mathbb{X},\mathbb{R}^d)$ is tame. \end{theo} \subsection{Additional algebraic features} \paragraph{7.2.1 $\;$ Invertible Ellis transformations} One can naturally ask whether there are transformations in the hull Ellis semigroup which are invertible but not homeomorphisms given by the $\mathbb{R}^d$--action.
It turns out that the answer is no: We have seen that any cone type $\mathfrak{t}\in \mathfrak{T}_{W,\Gamma}$ is idempotent, and thus an invertible transformation must correspond to a couple of the form $(z,\mathfrak{o})$ where $\mathfrak{o}$ is the identity cone type in $\mathfrak{T}_{W,\Gamma}$. Since the cone with cone type $\mathfrak{o}$ is precisely the trivial cone $\left\lbrace 0\right\rbrace $, its associated plain cone $\mathsf{C}_\mathfrak{o}$ is nothing but $\left\lbrace 0\right\rbrace $ and Theorem \ref{theo:principal} ensures that $z$ must be an element of the form $[0,s]_{\Sigma}$ in $\left[ \left\lbrace 0\right\rbrace \times \mathbb{R}^d\right] _{\Sigma}$. It follows that the underlying transformation is the homeomorphism arising from translation by the vector $s\in \mathbb{R}^d$. \paragraph{7.2.2 $\;$ Range of Ellis transformations} It is natural to define on the Ellis semigroup $E(\mathbb{X},\mathbb{R}^d)$ a preorder by letting $\mathsf{g}\leqslant \mathsf{g}'$ whenever the \textit{range} of the mapping $\mathsf{g}$ is contained in that of $\mathsf{g}'$. By range we mean here the subset $r(\mathsf{g}):= \mathbb{X}.\mathsf{g}$ of the hull $\mathbb{X}$. When one considers idempotent transformations $\mathsf{q}$ and $\mathsf{q}'$ then it is easy to show that $\mathsf{q}\leqslant \mathsf{q}'$ if and only if one has $\mathsf{q} = \mathsf{q}.\mathsf{q}'$, thus turning this preorder into algebraic terms in this particular setting. In the case of an almost canonical hull Ellis semigroup we are able to describe this preorder in a quite elegant manner: \begin{prop}\label{prop:range} For any transformations of $E(\mathbb{X},\mathbb{R}^d)$ we have the equivalence \begin{align*} \mathsf{g}\leqslant \mathsf{g}' \quad \Longleftrightarrow \quad C_\mathsf{g} \leqslant C_{\mathsf{g}'}.
\end{align*} \end{prop} The proposition above asserts that the range of $\mathsf{g}$ is contained in the range of $\mathsf{g}'$ if and only if the cone $C_{\mathsf{g}'}$ is equal to, or is a lower-dimensional facet of, the cone $C_\mathsf{g}$.\\ \begin{proof} Let $\mathsf{g}$ and $ \mathsf{g}'$ be chosen. Each is an element of a subgroup, respectively given by $\left[ \langle \mathsf{C}_{\mathfrak{t}_\mathsf{g}}\rangle \times\mathbb{R}^d\right]_{\Sigma} \times \left\lbrace \mathfrak{t}_\mathsf{g}\right\rbrace $ and $\left[ \langle \mathsf{C}_{\mathfrak{t}_{\mathsf{g}'}}\rangle \times\mathbb{R}^d\right]_{\Sigma} \times \left\lbrace \mathfrak{t}_{\mathsf{g}'}\right\rbrace $, and thus one can see that $r(\mathsf{g})= r([0]_{\Sigma}\times\left\lbrace \mathfrak{t}_\mathsf{g}\right\rbrace )$ and that $r(\mathsf{g}')= r([0]_{\Sigma}\times\left\lbrace \mathfrak{t}_{\mathsf{g}'}\right\rbrace )$. From what has just been said it becomes clear that $\mathsf{g}\leqslant \mathsf{g}'$ if and only if $\mathfrak{t}_{\mathsf{g}}=\mathfrak{t}_{\mathsf{g}}.\mathfrak{t}_{\mathsf{g}'}$, which exactly means that the cone $C_{\mathsf{g}'}$ is equal to, or is a lower-dimensional facet of, the cone $C_\mathsf{g}$, or equivalently $C_\mathsf{g} \leqslant C_{\mathsf{g}'}$. \end{proof} \paragraph{7.2.3 $\;$ Ideals} The general theory of Ellis semigroups gives great importance to the ideal theory of an Ellis semigroup.
In the case of an almost canonical hull Ellis semigroup it is easy to prove the proposition stated below, showing that the ideal theory of the hull Ellis semigroup reduces to the ideal theory of the semigroup $\mathfrak{T}_{W,\Gamma}$: \begin{prop}\label{prop:min} Each right ideal $\mathfrak{M}$ of the nontrivial face semigroup $\mathfrak{T}_{W,\Gamma}$ defines a right ideal of the Ellis semigroup $E(\mathbb{X},\mathbb{R}^d)$ by \begin{align*}\bigsqcup _{\mathfrak{t}\in \mathfrak{M}} \left[ \langle \mathsf{C}_\mathfrak{t}\rangle \times\mathbb{R}^d\right]_{\Sigma} \times \left\lbrace \mathfrak{t}\right\rbrace \end{align*} and conversely each right ideal of $E(\mathbb{X},\mathbb{R}^d)$ arises in this manner. \end{prop} We can in particular easily identify the unique minimal ideal of $E(\mathbb{X},\mathbb{R}^d)$: This latter is isomorphic with the direct product $\left[ \mathbb{R}^{n+d}\right] _{\Sigma} \times \mathfrak{M}^{\, ch}$ where $\mathfrak{M}^{\, ch}$ is the family of cone types associated with the \textit{chambers} of the stratification defined by the collection of hyperplanes used to construct the face semigroup $\mathfrak{T}_W$. \subsection{An explicit computation} We consider the hull $\mathbb{X}_{oct}$ associated to the real cut and project scheme and octagonal window presented in Subsection \ref{subsection:explicit.example}. The associated family of linear hyperplanes parallel to faces of the window (or its reversed set) is described, in the orthonormal basis $(e_1^*, e_2^*)$ of the internal space $\mathbb{R}^2_{int}$, as \begin{align*} H_1:= \langle v_1\rangle = \langle v_2-v_4\rangle , \quad H_2:= \langle v_2\rangle= \langle v_1+v_3\rangle \\ H_3:= \langle v_3\rangle =\langle v_2+v_4\rangle , \quad H_4:= \langle v_4\rangle =\langle v_1-v_3\rangle , \end{align*} where \begin{align*} v_1:=e_1^*, \quad v_2:=(e_1^*+e_2^*)\diagup \sqrt{2}, \quad v_3:=e_2^*, \quad v_4:=(e_2^*-e_1^*)\diagup\sqrt{2}; \end{align*} see Figure 4.
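Assuming the four lines sit at angles $0$, $45$, $90$ and $135$ degrees through the origin (as the formulas for $v_1,\dots ,v_4$ suggest), the face types of the induced stratification of $\mathbb{R}^2_{int}$ can be enumerated numerically; the following sketch is our own illustration, not part of the text:

```python
import math

# Enumerate the face types of the stratification of R^2 by the four lines
# H_1..H_4 through the origin at angles 0, 45, 90, 135 degrees: record, for
# well-chosen sample points, the sign vector (t(H_1),...,t(H_4)).

# unit normals of the four lines, the k-th line at angle k*pi/4
normals = [(math.sin(k * math.pi / 4), -math.cos(k * math.pi / 4)) for k in range(4)]

def face_type(p, tol=1e-9):
    """Sign vector of the point p with respect to the four hyperplanes."""
    signs = []
    for (a, b) in normals:
        d = a * p[0] + b * p[1]
        signs.append(0 if abs(d) < tol else (1 if d > 0 else -1))
    return tuple(signs)

# the origin, the 8 half-lines and the 8 chambers are all hit by sampling
# the origin together with the 16 directions at multiples of 22.5 degrees
samples = [(0.0, 0.0)] + [(math.cos(k * math.pi / 8), math.sin(k * math.pi / 8))
                          for k in range(16)]
types = {face_type(p) for p in samples}
print(len(types))  # 17 cones: 1 origin + 8 half-lines + 8 chambers
```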
\includegraphics[trim = 7cm 18.5cm 8.5cm 4.8cm]{hyperplans} \includegraphics[trim = 6cm 18.6cm 8.5cm 6cm]{conesellis} The stratification obtained from these hyperplanes is of the form in Figure 5. The internal space $\mathbb{R}^2_{int}$ is partitioned into $17$ different cones: the singleton $\lbrace 0\rbrace $, eight half-lines $\lbrace L_1,...,L_8\rbrace $ pointed at $0$ though not containing it, labelled so that $L_i,\, L_{i+4} \subset H_i$ for $1\leqslant i\leqslant 4$, and eight chambers $\lbrace C_1,...,C_8\rbrace $, each an open cone pointed at $0$ consisting of one eighth of the space. Now the stabilizers $Stab_{\Gamma}(H_i)$ are dense in $H_i$ for each index $1\leqslant i\leqslant 4$, and we deduce that each cone of this stratification is nontrivial, and moreover equal to its associated plain cone. Thus $\langle \mathsf{C}_\mathfrak{o}\rangle =\lbrace 0\rbrace$ as usual, whereas $\langle \mathsf{C}_{\mathfrak{t}_{L_i}}\rangle = \langle \mathsf{C}_{\mathfrak{t}_{L_{i+4}}}\rangle = H_i$ for each value $1\leqslant i\leqslant 4$, and $\langle \mathsf{C}_{\mathfrak{t}_{C_i}}\rangle = \mathbb{R}^2$ for each index $1\leqslant i \leqslant 8$.\\ Consequently, the hull Ellis semigroup $E(\mathbb{X}_{oct}, \mathbb{R}^2) $ is in this case obtained as \begin{align*}\left( \bigsqcup _{i=1}^8\left[ \mathbb{R}^4\right] _{\mathbb{Z}^4}\times \lbrace \mathfrak{t}_{C_i}\rbrace \right) \bigsqcup \left( \bigsqcup _{i=1}^4 \left[ H_i \times \mathbb{R}^2\right] _{\mathbb{Z}^4}\times \lbrace \mathfrak{t}_{L_i}, \mathfrak{t}_{L_{i+4}}\rbrace \right) \bigsqcup \mathbb{R}^2 . \end{align*} \section{\large{\textbf{The Ellis action on the hull}}}\label{action} \subsection{A further look on cones} We saw in Section $5$ that to any model set $\Lambda$ of the internal system can be associated a cone $C_\Lambda$, that is, an open connected cone pointed at $0$ with boundary delimited by hyperplanes of a subfamily $\mathfrak{H}_{w_\Lambda}$ of $\mathfrak{H}_W$.
Moreover each such cone admits a unique cone type $\mathfrak{c}_\Lambda$ with domain $\mathfrak{H}_{w_\Lambda}$, and there can be only finitely many such cone types, whose family is denoted $\mathfrak{C}$. Now if one looks at a model set $\Lambda _0$ in the hull $\mathbb{X}$ then it can always be written as $\Lambda _0=\Lambda -t$, where $\Lambda$ lies in the internal system and $t$ is a vector of $\mathbb{R}^d$. This presentation is unique up to a translation of both the model set $\Lambda $ and the vector $t$ by some $\gamma \in \Gamma$. Thus one may unambiguously define the cut type $\mathfrak{H}_{z_{\Lambda_0}}$ and the cone type $\mathfrak{c}_{\Lambda _0}$ with domain $\mathfrak{H}_{z_{\Lambda_0}}$ to be the ones associated with $\Lambda \in \Xi ^\Gamma$ in the decomposition $\Lambda _0=\Lambda -t$. We may then describe the hull, as was already done by Le \cite{Le}, as follows: \begin{theo}\label{theo:topologie.par.cones} The mapping associating to any model set $\Lambda $ the couple $(z_\Lambda,\mathfrak{c}_\Lambda)$ establishes a bijective correspondence between the hull $\mathbb{X}$ and \begin{align*}\left\lbrace (z,\mathfrak{c})\in \left[ \mathbb{R}^{n+d}\right] _{\Sigma}\times\mathfrak{C} \; \vert \; dom(\mathfrak{c})= \mathfrak{H}_{z} \right\rbrace . \end{align*} \end{theo} \begin{proof} From what has just been said it is sufficient to prove that the mapping associating to any model set $\Lambda $ in $\Xi ^\Gamma$ the couple $(w_\Lambda,\mathfrak{c}_\Lambda)$ establishes a bijective correspondence between the internal system $\Xi ^\Gamma$ and \begin{align*}\left\lbrace (w,\mathfrak{c})\in \mathbb{R}^n \times\mathfrak{C} \; \vert \; dom(\mathfrak{c})= \mathfrak{H}_{w} \right\rbrace . \end{align*} First, from the very construction of the cone type $\mathfrak{c}_\Lambda$ associated with any $\Lambda \in \Xi ^\Gamma$, this association is well defined.
By the arguments used in the proof of Proposition \ref{prop:topology.internal.system}, each model set $\Lambda\in \Xi ^\Gamma$ is the limit of a filter base (\ref{nei}) which only depends on the couple $(w_\Lambda, \mathfrak{c}_\Lambda)$, and thus the association is one-to-one. Moreover this association is onto: If $(w, \mathfrak{c})$ is a couple with $dom(\mathfrak{c})$ equal to $\mathfrak{H}_w$, then consider the family of subsets $ [C_{\mathfrak{c}}+w]_\Xi \cap \Pi ^{-1}(B(w,\varepsilon))$ of the internal system. Each such set contains some nonsingular model sets, and thus forms a filter base in $\Xi ^\Gamma$. As $\Pi$ is a proper map this filter base is eventually contained in a compact subset of the form $\Pi ^{-1}(\overline{B}(w,\varepsilon))$ and thus admits an accumulation element $\Lambda$. The latter must satisfy $\Pi (\Lambda)=w_\Lambda =w$ and, on the other hand, $C_\Lambda= C_{\mathfrak{c}}$. But as the domains of $\mathfrak{c}$ and $\mathfrak{c}_\Lambda$ are both equal to the cut type of $w_\Lambda =w$, the couple $(w_\Lambda,\mathfrak{c}_\Lambda)$ is nothing but $(w,\mathfrak{c})$, showing that the association is onto. \end{proof} \subsection{The Ellis action} We wish to use here the description of the hull obtained in the above paragraph and the description of its Ellis semigroup obtained in Theorem \ref{theo:principal}. To this end we define an action of the nontrivial face semigroup $\mathfrak{T}_{W,\Gamma}$ on the family $\mathfrak{C}$ of cone types introduced above: For $\mathfrak{c}\in \mathfrak{C}$ and $\mathfrak{t}\in \mathfrak{T}_{W,\Gamma}$ let us define a map $\mathfrak{H}_W \longrightarrow \left\lbrace -,+,\infty \right\rbrace $ as \begin{align*} \mathfrak{c}.\mathfrak{t}(H):= \begin{cases} \mathfrak{c}(H) & \text{if } \mathfrak{t}(H)= 0 ,\\ \mathfrak{t}(H) & \text{else.} \end{cases} \end{align*} This definition is not properly an action of $\mathfrak{T}_{W,\Gamma}$ on $\mathfrak{C}$, as the resulting map need not be the cone type of any model set of the hull $\mathbb{X}$. However it allows us to recover the Ellis action as follows: \begin{prop}\label{prop:ellis.action} The Ellis action $\mathbb{X}\times E(\mathbb{X}, \mathbb{R}^d )\longrightarrow \mathbb{X}$ is given by \begin{align*}(z,\mathfrak{c}).(z',\mathfrak{t})= (z+z',\mathfrak{c}') \quad \text{where} \quad \mathfrak{c}'(H):= \begin{cases} \mathfrak{c}.\mathfrak{t}(H) \quad \text{ if }H\in \mathfrak{H}_{z+z'}, \\ \infty \qquad \; \; \text{ else. } \end{cases} \end{align*} \end{prop} \subsection{An illustration of the Ellis action} In order to illustrate the Ellis action described above, we focus here on the example of the hull $\mathbb{X}_{oct}$ associated with the data given in Section \ref{subsection:explicit.example}. More precisely we will not describe the action of an arbitrary transformation, but only that of the idempotent transformations (as the other part is only a shift in the parametrization torus $\left[ \mathbb{R}^4\right] _{\mathbb{Z}^4}$). Moreover it can be checked that the idempotent Ellis transformations are precisely those Ellis transformations mapped onto $0\in \left[ \mathbb{R}^4\right] _{\mathbb{Z}^4}$ under $\pi ^*$, or equivalently, those which preserve fibers in $\mathbb{X}_{oct}$ with respect to the parametrization map $\pi$. Here we will not describe the Ellis action of these idempotents at an arbitrary model set, but only on the single fiber above $0\in \left[ \mathbb{R}^4\right] _{\mathbb{Z}^4}$; any other fiber can be treated in the same manner.
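The composition $\mathfrak{c}.\mathfrak{t}$ defined above is straightforward to model; a minimal sketch with hypothetical hyperplane labels, writing $-1,+1$ for the signs, $0$ for hyperplanes on which $\mathfrak{t}$ vanishes, and `math.inf` for $\infty$:

```python
import math

# Cone types and face types modeled as dicts over hyperplane labels
# (the labels 'H1'..'H4' are hypothetical). Face types t take values in
# {-1, 0, +1}; cone types c take values in {-1, +1, math.inf}.
def act(c, t):
    """The composition c.t of the text:
    c.t(H) = c(H) where t(H) = 0, and t(H) elsewhere."""
    return {H: c[H] if t[H] == 0 else t[H] for H in c}

c = {'H1': +1, 'H2': -1, 'H3': math.inf, 'H4': +1}  # hypothetical cone type
t = {'H1': 0, 'H2': +1, 'H3': 0, 'H4': -1}          # hypothetical face type
print(act(c, t))  # {'H1': 1, 'H2': 1, 'H3': inf, 'H4': -1}
```

Note that acting twice by the same face type changes nothing, mirroring the role of idempotents discussed below.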
First we need to know the cut type of $0$: it is easily checked that $\mathfrak{H}_0= \mathfrak{H}_{W_{oct}}= \left\lbrace H_1, H_2, H_3, H_4\right\rbrace $, so that the fiber above $0$ in the hull consists of eight model sets $\left\lbrace \Lambda _{C_1},..,\Lambda _{C_8}\right\rbrace $, each associated with some cone which is in this particular case a \textit{chamber} among $\left\lbrace C_1,..,C_8\right\rbrace $. Then we can compute the action of any of the $17$ idempotent transformations $[0]_{\mathbb{Z}^4}\times\left\lbrace \mathfrak{t} \right\rbrace $, $\mathfrak{t} \in \mathfrak{T}_{W_{oct}}$:\\ The identity map, given by $[0]_{\mathbb{Z}^4}\times\left\lbrace \mathfrak{o} \right\rbrace $, preserves any of the eight model sets, whereas any idempotent map $[0]_{\mathbb{Z}^4}\times\left\lbrace \mathfrak{t}_{C_i} \right\rbrace $ associated with the chamber $C_i$ maps all of these model sets onto a single one, namely $\Lambda _{C_i}$. For an idempotent map of the form $[0]_{\mathbb{Z}^4}\times\left\lbrace \mathfrak{t}_{L_i} \right\rbrace $ with $L_i$ some half-line contained in the hyperplane $H_i$, each model set with associated cone belonging to the side $\pm$ of $H_i$ is mapped onto the unique model set whose cone belongs to the same side $\pm$ of $H_i$ and has $L_i$ in its boundary. Therefore these transformations have two distinct model sets of this fiber in their range, namely those which have $L_i$ in the boundary of their associated cone. \textbf{Acknowledgement} I wish to express my deep gratitude to my advisor Johannes Kellendonk for all his valuable suggestions, discussions and comments about the present article. \end{document}
\begin{document} \title{Quantum secret sharing based on Smolin states alone} \author{Guang Ping He} \email{[email protected]} \affiliation{School of Physics \& Engineering and Advanced Research Center, Sun Yat-sen University, Guangzhou 510275, China\\ and Center of Theoretical and Computational Physics, The University of Hong Kong, Pokfulam Road, Hong Kong, China} \author{Z. D. Wang} \email{[email protected]} \author{Yan-Kui Bai} \email{[email protected]} \affiliation{Department of Physics and Center of Theoretical and Computational Physics, The University of Hong Kong, Pokfulam Road, Hong Kong, China} \begin{abstract} It was indicated [Yu 2007 Phys. Rev. A 75 066301] that a previously proposed quantum secret sharing (QSS) protocol based on Smolin states [Augusiak 2006 Phys. Rev. A 73 012318] is insecure against an internal cheater. Here we build a different QSS protocol with Smolin states alone, and prove it to be secure against known cheating strategies. Thus we open a promising avenue for building secure QSS using merely Smolin states, which are a typical kind of bound entangled states. We also propose a feasible scheme to implement the protocol experimentally. \end{abstract} \pacs{03.67.Hk,03.67.Dd} \maketitle \section{Introduction} The properties of Smolin states \cite{qi474-23} have attracted great interest recently. It was shown \cite{qi474,qi356} that they can maximally violate simple correlation Bell inequalities, and thus reduce communication complexity. On the other hand, as a typical kind of bound entangled states (i.e., states that cannot be distilled to pure entangled form with local operations and classical communication (LOCC)), Smolin states do not allow for secure key distillation. This indicates that neither entanglement nor maximal violation of Bell inequalities directly implies the presence of a quantum secure key. Thus it becomes an intriguing question how useful Smolin states can be for quantum cryptography. In particular, it was left as an open question in Refs.
\cite{qi474,qi356} whether Smolin states can lead to secure quantum secret sharing (QSS) \cite{qi103,qi46}. The non-triviality of this question was further demonstrated by Ref. \cite{qi483}, in which an explicit cheating strategy was proposed, showing that a class of QSS protocols using Smolin states can be broken if one of the participants is dishonest. In this paper, a four-party QSS protocol based on Smolin states is proposed, and proven to be secure against the cheating strategy proposed in Ref. \cite{qi483} as well as other known attacks. A feasible scheme for implementing our protocol experimentally is proposed. Building multi-party secure QSS protocols on generalized Smolin states \cite{qi474} is also addressed. These findings may help to answer the question whether Smolin states and other bound entangled states can lead to secure QSS. \section{The original protocol and the cheating strategy} The original Smolin state is a mixed state of four qubits $A$, $B$, $C$ and $D$ described by the density matrix \begin{eqnarray} \rho _{ABCD}^{S} &=&\frac{1}{4}(\left| \Phi ^{+}\right\rangle _{AB}\left\langle \Phi ^{+}\right| \otimes \left| \Phi ^{+}\right\rangle _{CD}\left\langle \Phi ^{+}\right| \nonumber \\ &&+\left| \Phi ^{-}\right\rangle _{AB}\left\langle \Phi ^{-}\right| \otimes \left| \Phi ^{-}\right\rangle _{CD}\left\langle \Phi ^{-}\right| \nonumber \\ &&+\left| \Psi ^{+}\right\rangle _{AB}\left\langle \Psi ^{+}\right| \otimes \left| \Psi ^{+}\right\rangle _{CD}\left\langle \Psi ^{+}\right| \nonumber \\ &&+\left| \Psi ^{-}\right\rangle _{AB}\left\langle \Psi ^{-}\right| \otimes \left| \Psi ^{-}\right\rangle _{CD}\left\langle \Psi ^{-}\right| ). \label{Smolin} \end{eqnarray} Here $\left| \Phi ^{\pm }\right\rangle =(\left| 00\right\rangle \pm \left| 11\right\rangle )/\sqrt{2}$\ and $\left| \Psi ^{\pm }\right\rangle =(\left| 01\right\rangle \pm \left| 10\right\rangle )/\sqrt{2}$\ denote the four Bell states.
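As a concrete sanity check of Eq. (\ref{Smolin}), the Smolin density matrix can be built directly from the four Bell vectors; a minimal numpy sketch (variable names are our own):

```python
import numpy as np

# Bell states as vectors in the computational basis.
phip = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+>
phim = np.array([1, 0, 0, -1]) / np.sqrt(2)  # |Phi->
psip = np.array([0, 1, 1, 0]) / np.sqrt(2)   # |Psi+>
psim = np.array([0, 1, -1, 0]) / np.sqrt(2)  # |Psi->

def proj(v):
    return np.outer(v, v)

# Equal mixture of the same Bell state on the AB and CD pairs, Eq. (1).
rho_smolin = sum(np.kron(proj(b), proj(b))
                 for b in (phip, phim, psip, psim)) / 4

print(round(np.trace(rho_smolin), 6))               # 1.0  (unit trace)
print(round(np.trace(rho_smolin @ rho_smolin), 6))  # 0.25 (four eigenvalues 1/4)
```

The purity $1/4$ reflects that the four product Bell states in the mixture are mutually orthogonal.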
Now consider the task of QSS among four parties Alice, Bob, Charlie and Diana. The model of QSS studied in this paper includes the following essential features. (I) The goal of the process is that Alice, who has a classical secret bit to be shared, encodes the bit with certain quantum states and sends them to the other three parties, so that they can retrieve the secret bit \textit{if and only if} all three of them collaborate. (II) In QSS, it is generally assumed that Alice always acts honestly. That is, we do not consider the case where Alice wants to cause the participants to accept inconsistent versions of her secret bit. (III) A QSS protocol is called secure if it can withstand the following two types of attacks: (1) ``passive'' attacks, i.e., eavesdropping by external attackers, and (2) ``active'' attacks, i.e., one or some of the legal participants trying to gain a non-trivial amount of information on the secret bit without the collaboration of all participants (except Alice). Using other types of quantum states to accomplish QSS has already been well studied in the literature \cite{qi103,qi46}. What we focus on in this paper is the interesting question raised by Refs. \cite{qi474,qi356} whether QSS can be accomplished using quantum states having the form of Eq. (\ref{Smolin}), which is a typical example of bound entangled states. In Ref. \cite{qi483}, the following QSS protocol was studied. \textit{The original protocol:} Alice prepares a 4-qubit Smolin state in the form of Eq. (\ref{Smolin}), and she keeps qubit $A$ to herself, while sending qubit $B$ to Bob, qubit $C$ to Charlie and qubit $D$ to Diana respectively. Each party then measures an arbitrary Pauli matrix $\sigma _{i}$ of his/her respective qubit, and obtains a result $r_{j}\in \{0,1\}$ ($j=A,B,C,D$). Then all the parties announce publicly which observable they measured. If all of them measured the same observable, from Eq.
(\ref{Smolin}) it can be seen that their results always satisfy $r_{A}\oplus r_{B}\oplus r_{C}\oplus r_{D}=0$ (where $\oplus$ denotes addition modulo $2$). Therefore, the three parties Bob, Charlie, and Diana together can reconstruct Alice's secret bit $r_{A}$. It was proven in Ref. \cite{qi474} that such a protocol would be secure against the ``passive'' attacks of external eavesdroppers. However, it was pointed out in Ref. \cite{qi483} that the protocol would be insecure if the internal participant Bob cheats with the following intercept-resend strategy. \textit{The cheating strategy:} Bob intercepts qubits $C$ and $D$ sent to Charlie and Diana respectively by Alice, and measures them in the Bell basis. This makes the Smolin state Eq. (\ref{Smolin}) collapse into a tensor product of two Bell states of the same form \begin{equation} \left| \psi \right\rangle _{ABCD}=\left| \varphi \right\rangle _{AB}\otimes \left| \varphi \right\rangle _{CD}. \label{product} \end{equation} Here $\left| \varphi \right\rangle $ is one of the four Bell states $\left| \Phi ^{\pm }\right\rangle $\ and $\left| \Psi ^{\pm }\right\rangle $, and from the result of his measurement, Bob knows which Bell state $\left| \varphi \right\rangle $ is. He then resends the two qubits of this Bell state to Charlie and Diana respectively. Since the Smolin state is merely a mixture of the product states in the form of Eq. (\ref{product}), where $\left| \varphi \right\rangle $ covers all four possible choices $\left| \Phi ^{\pm }\right\rangle $\ and $\left| \Psi ^{\pm }\right\rangle $, the states owned by Alice, Charlie and Diana in this case show no difference from those in the honest protocol. But since qubit $B$ owned by Bob is now directly correlated with Alice's qubit $A$, Bob alone can learn Alice's secret bit $r_{A}$ when they measure the same observable, without the help of Charlie and Diana. Bob is not the only one who can adopt this strategy.
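The even-parity property just stated can be verified numerically: for the Smolin state, $\langle\sigma_i^{\otimes 4}\rangle = +1$ for each Pauli matrix $\sigma_i$, which is equivalent to $r_{A}\oplus r_{B}\oplus r_{C}\oplus r_{D}=0$ whenever all four parties measure the same observable. A self-contained sketch (our own construction):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# Smolin state: equal mixture of identical Bell pairs on AB and CD.
bells = [np.array(b) / np.sqrt(2) for b in
         ([1, 0, 0, 1], [1, 0, 0, -1], [0, 1, 1, 0], [0, 1, -1, 0])]
rho = sum(np.kron(np.outer(b, b), np.outer(b, b)) for b in bells) / 4

def kron4(m):
    out = m
    for _ in range(3):
        out = np.kron(out, m)
    return out

# <sigma_i (x) sigma_i (x) sigma_i (x) sigma_i> = +1 for each Pauli:
# the four measurement outcomes always have even parity.
for sig in (sx, sy, sz):
    print(round(np.trace(rho @ kron4(sig)).real, 6))  # 1.0 for each Pauli
```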
For example, consider that Charlie intercepts qubits $B$ and $D$ and measures them in the Bell basis. Note that when swapping the positions of two of the qubits (e.g., $B$ and $C$), $\left| \Phi ^{\pm }\right\rangle _{AB}\otimes \left| \Phi ^{\pm }\right\rangle _{CD}$ can be rewritten as \begin{eqnarray} \left| \Phi ^{\pm }\right\rangle _{AB}\otimes \left| \Phi ^{\pm }\right\rangle _{CD} &\longrightarrow &\frac{1}{2}(\left| \Phi ^{+}\right\rangle _{AC}\otimes \left| \Phi ^{+}\right\rangle _{BD} \nonumber \\ &&+\left| \Phi ^{-}\right\rangle _{AC}\otimes \left| \Phi ^{-}\right\rangle _{BD} \nonumber \\ &&\pm \left| \Psi ^{+}\right\rangle _{AC}\otimes \left| \Psi ^{+}\right\rangle _{BD} \nonumber \\ &&\pm \left| \Psi ^{-}\right\rangle _{AC}\otimes \left| \Psi ^{-}\right\rangle _{BD}). \end{eqnarray} A similar expression can also be found for $\left| \Psi ^{\pm }\right\rangle _{AB}\otimes \left| \Psi ^{\pm }\right\rangle _{CD}$. Therefore, after Charlie's measurement, the Smolin state Eq. (\ref{Smolin}) will collapse into \begin{equation} \left| \psi \right\rangle _{ACBD}=\left| \varphi \right\rangle _{AC}\otimes \left| \varphi \right\rangle _{BD}, \end{equation} where $\left| \varphi \right\rangle $ is one of the four Bell states $\left| \Phi ^{\pm }\right\rangle $\ and $\left| \Psi ^{\pm }\right\rangle $. Comparing with Eq. (\ref{product}), we can see that Charlie can cheat with the same strategy. So can Diana. \section{Simplified cheating strategy} Defeating this cheating alone is easy. Since it requires the cheater to perform a joint measurement on the qubits of the other two parties, we can require Alice to send the qubits one at a time. That is, she does not send a qubit to the next party until the receipt of the qubit sent to the previous party is confirmed. With this method, the cheater can never have the qubits of the other two parties simultaneously. Thus he cannot perform a joint measurement on them and the strategy is defeated.
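The qubit-reshuffling identity above can likewise be checked numerically by reinterpreting the 4-qubit vector from the $(A,B,C,D)$ ordering to $(A,C,B,D)$; a sketch for the $\left|\Phi^{+}\right\rangle\otimes\left|\Phi^{+}\right\rangle$ case (the labels `P`/`S` for $\Phi$/$\Psi$ are our own shorthand):

```python
import numpy as np

s2 = np.sqrt(2)
bell = {'P+': np.array([1, 0, 0, 1]) / s2, 'P-': np.array([1, 0, 0, -1]) / s2,
        'S+': np.array([0, 1, 1, 0]) / s2, 'S-': np.array([0, 1, -1, 0]) / s2}

def to_ac_bd(v):
    """Reinterpret a 4-qubit vector from ordering (A,B,C,D) to (A,C,B,D)."""
    return v.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(16)

# |Phi+>_AB |Phi+>_CD, reread in the (A,C)(B,D) pairing ...
lhs = to_ac_bd(np.kron(bell['P+'], bell['P+']))
# ... equals (1/2)(Phi+Phi+ + Phi-Phi- + Psi+Psi+ + Psi-Psi-) on AC/BD,
# i.e. the displayed expansion with both signs +.
rhs = 0.5 * sum(np.kron(bell[k], bell[k]) for k in ('P+', 'P-', 'S+', 'S-'))
print(np.allclose(lhs, rhs))  # True
```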
Nevertheless, we would like to point out that there is an even simpler cheating strategy which does not require any joint measurement. The cheater can simply intercept every qubit and measure the same observable on all of them (including his own). Then he resends the measured qubits to the corresponding parties. As a result, if Alice also measured the same observable on her qubit, the cheater can infer her result since he has measured all the other three qubits. If instead Alice measured a different observable, the results of the four qubits will not have any correlation, so the cheating will not be detected. Since this strategy involves individual measurements only, it can succeed even if Alice sends the qubits one at a time. \section{Our protocol} If our purpose is merely to achieve secure QSS, it is not difficult to defeat all the above cheating strategies. For example, Alice can also prepare some qubits in pure states. She mixes some of these qubits with qubit $B$ ($C$ or $D$) and sends them to Bob (Charlie or Diana). By requiring the other parties to announce their measurement results on some of these pure states, she can easily check whether there are intercept-resend attacks on the quantum communication channel between her and each of the other three parties. After all three quantum channels are verified secure, she tells the other three parties which qubits are $B$, $C$ and $D$; then they can accomplish the task of secret sharing with these qubits as described in the original protocol. Alternatively, Alice can prepare many copies of Smolin states. She keeps qubits $A$, $B$ and $C$ of each copy to herself, and sends qubit $D$ to one of the other parties. By measuring $A$ and $B$ in the Bell basis, she can collapse $C$ and $D$ into a Bell state. With the Bell state, she can set up a secret key with each of the other parties with the well-known quantum key distribution protocol \cite{qi10}.
Then the sharing of her secret data can easily be achieved with these secret keys. However, these methods cannot help to answer the question whether Smolin states and bound entanglement can lead to secure QSS. This is because when pure states are involved, or one party owns more than one qubit of a Smolin state, the correlation shared between the parties is no longer pure bound entanglement. Therefore, it is important to study whether a secure QSS protocol can be built in the framework where only Smolin states are used, and each party can have only one qubit of each copy of the Smolin state, i.e., the honest operations on Smolin states must be local operations on single qubits rather than joint ones on many qubits. Here we propose such an exotic protocol. \textit{Our secure protocol:} (1) Alice prepares $n$ copies of the 4-qubit Smolin state in the form of Eq. (\ref{Smolin}). She keeps qubit $A_{j}$ of the $j$th copy ($j=\{1,...,n\}$) to herself, while sending qubits $B_{j}$ to Bob, qubits $C_{j}$ to Charlie and qubits $D_{j}$ to Diana ($j=\{1,...,n\}$) respectively. But different from the original protocol, the order of the qubits sent to each party should be random. That is, the qubit sequence received by Bob, for example, can be $B_{3}B_{6}B_{5}B_{11}B_{4}...$, while those of Charlie and Diana can be $C_{4}C_{2}C_{9}C_{7}C_{5}...$\ and $D_{4}D_{20}D_{7}D_{3}D_{1}...$\ respectively. The order should be kept secret by Alice herself. Also, each qubit should be sent only after the receipt of the previous one is confirmed by the corresponding party. (2) Alice tells the other three parties which observable to measure for each of their qubits. She should guarantee that the same observable is measured on the four qubits of the same copy. But which qubits belong to the same copy should still be kept secret. (3) Alice randomly chooses some qubits for the security check.
For these qubits, she asks the other three parties to announce the results of their measurements, and checks whether $r_{A_{j}}\oplus r_{B_{j}}\oplus r_{C_{j}}\oplus r_{D_{j}}=0$ is satisfied whenever $A_{j}$, $B_{j}$, $C_{j}$ and $D_{j}$ belonging to the same copy are chosen for the check. (4) If no inconsistent result is found, Alice randomly picks one of the remaining unchecked copies (suppose that it is the $k$th copy) for secret sharing. She tells the other three parties the positions of qubits $B_{k}$, $C_{k}$ and $D_{k}$, so that the other three parties together can reconstruct Alice's secret bit $r_{A_{k}}$ for this copy from the equation $r_{A_{k}}\oplus r_{B_{k}}\oplus r_{C_{k}}\oplus r_{D_{k}}=0$. Now we show that the following three important features together make our protocol secure against known cheating strategies: (i) the randomness in the secret order of the qubits being sent; (ii) each qubit is sent only after the receipt of the previous one is confirmed; (iii) it is decided by Alice which observable the other parties should measure, and it is not announced until the receipt of all qubits is confirmed. Let us consider the most severe case, where the number of cheaters is as large as possible. As stated above, Alice is always assumed to be honest in QSS. Now if all the other three parties are cheaters, then they can surely obtain the secret data, because any secret sharing protocol allows the secret to be retrievable when all three parties collaborate, even without cheating. Therefore it is natural to assume that there are at most two cheaters. For concreteness and without loss of generality, here we study the case where Diana is honest while both Bob and Charlie are cheaters, and they can perform any kind of communication (either classical or quantum) with each other.
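Steps (1)-(4) can be sketched as toy bookkeeping code; here honest outcomes are sampled as uniform bits with even parity, which matches the correlation of Eq. (\ref{Smolin}) when the four parties measure a common observable (all names and parameter values are our own):

```python
import random

def honest_run(n=200, n_check=100, seed=7):
    """Toy bookkeeping for steps (1)-(4) with honest parties: outcomes are
    uniform bits whose parity vanishes on every copy, as for a common observable."""
    rng = random.Random(seed)
    # Step (1): Alice's secret, per-party random sending orders.
    orders = {p: rng.sample(range(n), n) for p in 'BCD'}
    # Outcomes per copy j: r_A ^ r_B ^ r_C ^ r_D = 0 always holds.
    r = {}
    for j in range(n):
        a, b, c = (rng.randint(0, 1) for _ in range(3))
        r[j] = (a, b, c, a ^ b ^ c)
    # Step (3): parity check on randomly chosen copies.
    checked = set(rng.sample(range(n), n_check))
    ok = all(r[j][0] ^ r[j][1] ^ r[j][2] ^ r[j][3] == 0 for j in checked)
    # Step (4): pick an unchecked copy k; Bob, Charlie and Diana recover r_A.
    k = rng.choice([j for j in range(n) if j not in checked])
    recovered = r[k][1] ^ r[k][2] ^ r[k][3]
    return ok, recovered == r[k][0]

print(honest_run())  # (True, True)
```

In the honest case the check always passes and the secret bit is always recovered; the security argument below concerns what happens when a cheater disturbs these correlations.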
In fact, due to the symmetric form of Smolin states, the same security analysis of this case also applies to the cases where the two cheaters are Bob and Diana, or Charlie and Diana. Also, note that external eavesdroppers have fewer advantages than internal cheaters, since they can only attack the quantum communication channel but cannot alter the announcements sent via the classical channel to cover their attacks. Thus if a QSS protocol is proven secure against internal cheaters, it is also secure against external eavesdroppers. Therefore, the case studied here is sufficient for the security proof. Let us formulate the model of the cheating strategy of the cheaters Bob and Charlie. Suppose that they intercepted a qubit being sent to Diana. Due to feature (ii) of our protocol, they must decide immediately what kind of qubit should be resent to Diana. There are four choices: (a) resend the intercepted qubit intact to Diana; (b) perform an operation (including projecting the state onto a certain basis, performing a unitary transformation, or making it entangled with other systems, etc.) on the intercepted qubit, and then send it to Diana; (c) prepare another qubit, which may even be entangled with other systems kept by Bob and Charlie, and send it to Diana; and (d) send Diana another qubit which was sent to Bob or Charlie, or was previously sent to Diana but intercepted by Bob and Charlie. Choice (a) is obviously not cheating anymore. Meanwhile, all the other choices can be summarized as: Bob and Charlie prepare the following system \begin{equation} \left| bc\otimes d\right\rangle =\sum_{i}\left| \beta _{i}\right\rangle _{bc}\otimes \left| \gamma _{i}\right\rangle _{d}.
\label{system} \end{equation} Here system $d$ is the qubit they will resend to Diana, while system $bc$ can be the system kept at their side and the environment, and may even include the systems of Alice and Diana in choices (b) and (d), and $i$ covers all possible states of these systems. Note that if they measure the original qubit and then send Diana the resultant state in choice (b), then system $d$ is in a pure state that is not entangled with system $bc$, which is simply a special case of Eq. (\ref{system}). After Diana receives the qubit, due to feature (iii) of our protocol, the cheaters cannot control the result of Diana's measurement. Since the qubit is not the original one, Diana's result does not always show the correlations of Smolin states. To avoid the uncorrelated result being detected by Alice, the only method left for the cheaters is to adjust their own announcements in step (3) of the protocol so that their results appear correlated with Diana's. Indeed, after step (2) they can know what result should have been found by Diana by measuring the original qubit they intercepted, and it is also possible for them to know the actual result of Diana's measurement by properly measuring the system $bc$ (if it is completely kept at their side) after they monitor Alice's announcement to Diana in step (2). However, when they need to determine their announcements in step (3), Alice has not yet announced the ordering of the qubits (i.e., which qubits belong to the same copy of the Smolin state). Note that Bob and Charlie cannot obtain sufficient information on this ordering by comparing the measurement directions Alice announced in step (2) either. This is because there are only three measurement directions in total (corresponding to the three Pauli matrices), while the number of copies of the Smolin state is large.
Consequently, there will be a large number of qubits which do not belong to the same copy of the Smolin state while the measurement directions listed by Alice are the same. As the number of copies of the Smolin state used in the protocol increases, the amount of mutual information on the ordering that Bob and Charlie gain by comparing the measurement directions will drop exponentially to zero. Therefore, for the qubits chosen for the security check, feature (i) ensures that Bob and Charlie do not know which of their own announcements should be adjusted. Then for any single copy of the Smolin state chosen for the security check in step (3), the cheaters stand a non-trivial probability (denoted as $\varepsilon $) of making an inconsistent announcement. The total probability for the cheaters to escape detection will be on the order of $(1-\varepsilon )^{m}$, where $m$ is the number of copies of the Smolin state to which Bob and Charlie apply the attack. This probability drops exponentially to zero as $m$ increases, so the cheating will inevitably be detected if $m$ is large. On the other hand, if $m$ is small, the probability for these $m$ copies of the Smolin state to be chosen as the $k$th copy for the final secret sharing in step (4) will drop to zero as the total number of copies of the Smolin state used in the protocol increases, so that the cheating is fruitless. Thus it is proven that our protocol is secure against the cheating strategy above. It is important to note that in our protocol, after Alice performs the permutation on all Smolin states, the resultant states are still bound entangled. The reason is that the permutation operation can in fact be viewed as a local operation, because in step (1) of the protocol, the qubit sequence received by Bob is merely a permutation of all the $B_{j}$'s (e.g., $B_{3}B_{6}B_{5}B_{11}B_{4}...$), while those of Charlie and Diana are of all the $C_{j}$'s and $D_{j}$'s respectively. There is no joint operation between the qubits $A$, $B$, $C$ and $D$.
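For illustration, the survival probability $(1-\varepsilon)^m$ can be tabulated for a hypothetical value of $\varepsilon$:

```python
# Hypothetical per-copy probability eps that the cheaters make an
# inconsistent announcement; they escape all checks on the m attacked
# copies with probability (1 - eps)^m, which decays exponentially in m.
eps = 0.25
for m in (10, 50, 200):
    print(m, (1 - eps) ** m)
```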
That is, suppose that the four participants share many copies of the Smolin state, each participant having one qubit from each copy. Then the above permutation can be accomplished by each participant locally. It is a known fact that Smolin states cannot be distilled to pure entangled form with LOCC. Therefore the resultant states still cannot be distilled with LOCC either, and thus they still satisfy the definition of bound entangled states. For this reason, what we achieved here is not merely another QSS protocol. The significance of our result is that the QSS protocol proposed here is based on bound entangled states alone. We have to point out that we currently cannot prove the generality of the model of the cheating strategy we studied above, because there may potentially exist strategies which are beyond our current imagination. Therefore, whether our specific protocol is unconditionally secure against any cheating strategy or not is still an open question. Nevertheless, the above model seems to cover all currently known attacks. Therefore, until a different cheating strategy falling outside the above model is found in the future, our result seems to give a positive answer to the question whether Smolin states alone can lead to secure QSS. This is in contrast to the conclusion of Ref. \cite{qi483}. It also seems that generalized Smolin states \cite{qi474} can lead to secure quantum secret sharing among more participants too. Here the generalized Smolin states mean the $2n$-qubit ($n>2$) bound entangled states defined as follows. Let $U_{n}^{(m)}=I^{\otimes n-1}\otimes \sigma _{m}$ ($m=0,1,2,3$, $n=1,2,3,...$) be a class of unitary operations, where $\sigma _{0}=I$ is the identity acting on the $2$-dimensional Hilbert space $C^{2}$ and $\sigma _{i}$ ($i=1,2,3$) are the standard Pauli matrices. Let $\rho _{2}=\left| \Psi ^{-}\right\rangle \left\langle \Psi ^{-}\right| $, and denote the density matrix of the original 4-qubit Smolin state (Eq.
(\ref{Smolin})) as $\rho _{4}$. Then the density matrix of the $2n$-qubit ($n>2$) generalized Smolin state is \begin{equation} \rho _{2n}=\frac{1}{4}\sum\limits_{m=0}^{3}U_{2(n-1)}^{(m)}\rho _{2(n-1)}U_{2(n-1)}^{(m)}\otimes U_{2}^{(m)}\rho _{2}U_{2}^{(m)}. \end{equation} (Please see Ref. \cite{qi474} for details.) When the state is shared by $2n$ parties (each party having one qubit) and they measure the same observable, their results will always satisfy $r_{1}\oplus r_{2}\oplus \cdots \oplus r_{2n}=0$. Therefore a secure quantum secret sharing between Alice and the other $(2n-1)$ parties can be accomplished with a protocol similar to our secure protocol above with original Smolin states, by including the following main features. (i) Alice prepares many copies of the $2n$-qubit generalized Smolin state, and sends them to the other parties in random order. It is not announced which qubits belong to the same copy until all security checks are successfully finished. (ii) Each qubit is sent only after the receipt of the previous one is confirmed. (iii) It is decided by Alice which observable the other parties should measure, and it is not announced until the receipt of all qubits is confirmed. \section{Summary and discussions} Thus we have proposed a QSS protocol secure against known cheating strategies. We would like to emphasize that the present protocol is, to the best of our knowledge, the first example of secure QSS in terms of bound entangled states alone. This result suggests a positive answer to the question in Refs. \cite{qi474,qi356} whether Smolin states can lead to secure QSS. As to the more general question in Ref. \cite{qi356} whether there are cases where violation of local realism is a necessary but not sufficient condition for QSS, our result seems to suggest that we need not search for such cases in the framework of the original and generalized Smolin states. Our protocol is also feasible for practical implementation.
At first glance, there seems to be a difficulty, since the qubits received by Bob, Charlie and Diana in step (1) need to be kept unmeasured until Alice announces which observable to measure in step (2). To date, keeping a quantum state unmeasured for a long period of time is still a technical challenge. Nevertheless, in practice Alice can use a well-known quantum key distribution protocol (e.g., Ref. \cite{qi10}) to set up a secret string with each of the other parties beforehand, so that she can tell each of them secretly which observable to measure. Then delaying the measurement is no longer necessary. Therefore our protocol can be implemented as long as a source of Smolin states is available. Though in this case the protocol no longer relies on bound entangled states alone, it becomes simple to realize secure QSS with state-of-the-art technology. Finally, we would like to propose a feasible scheme to prepare Smolin states experimentally \cite{qi474}. The quantum circuit for this scheme is shown in Fig.~1. The input part contains six qubits, in which the ancillary qubits $\alpha$ and $\beta$ are initialized in the state $\left| ++\right\rangle _{\alpha\beta}$ (here, $\left| +\right\rangle =(\left| 0\right\rangle +\left| 1\right\rangle )/\sqrt{2}$) and the target qubits in the tensor product of two Bell states $\left| \Phi ^{+}\right\rangle _{AB}\otimes \left| \Phi ^{+}\right\rangle _{CD}$. First, let $\beta$ be the control qubit and perform the controlled-$\sigma _{z}$ operations on qubits $\beta A$ and $\beta C$, respectively. Then, let $\alpha$ be the control qubit and perform the controlled-$\sigma _{x}$ operations on qubits $\alpha B$ and $ \alpha D$, respectively. 
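The gate sequence just described can be checked numerically. In the following NumPy sketch (variable names are ours) the six qubits are ordered $\alpha,\beta,A,B,C,D$; we apply the controlled operations, trace out the two ancillas, and obtain the state left on the target qubits.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product of a list of matrices."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# |psi_in> = |++>_{alpha beta} (x) |Phi+>_{AB} (x) |Phi+>_{CD}
psi_in = np.kron(np.kron(np.kron(plus, plus), phi_plus), phi_plus)
rho_in = np.outer(psi_in, psi_in.conj())

def proj(a, b):
    """Projector |ab><ab| on the two ancilla qubits."""
    v = np.zeros(4, dtype=complex)
    v[2 * a + b] = 1
    return np.outer(v, v)

# operations applied to A, B, C, D for each ancilla basis state |alpha beta>
V = {(0, 0): kron(I2, I2, I2, I2),
     (0, 1): kron(sz, I2, sz, I2),   # beta = 1: sigma_z on A and C
     (1, 0): kron(I2, sx, I2, sx),   # alpha = 1: sigma_x on B and D
     (1, 1): kron(sz, sx, sz, sx)}
U = sum(np.kron(proj(a, b), V[(a, b)]) for (a, b) in V)

# output on the targets: partial trace over the ancillas alpha, beta
rho_full = U @ rho_in @ U.conj().T
rho_smolin = np.trace(rho_full.reshape(4, 16, 4, 16), axis1=0, axis2=2)
```

Numerically, `rho_smolin` coincides with the equal mixture $\frac{1}{4}\sum_{m}\left|B_{m}\right\rangle\left\langle B_{m}\right|_{AB}\otimes\left|B_{m}\right\rangle\left\langle B_{m}\right|_{CD}$, where $B_{m}$ runs over the four Bell states, i.e., with the Smolin state.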
The effect of this procedure on the target qubits can be formulated as \begin{equation} \rho _{ABCD}^{S}=\mathtt{Tr}_{\alpha\beta}[U\rho _{in}U^{\dagger }], \label{1} \end{equation} where the input state is $\rho _{in}=\left| \psi \right\rangle _{in}\left\langle \psi \right| $ with $\left| \psi \right\rangle _{in}=\left| ++\right\rangle _{\alpha\beta}\otimes \left| \Phi ^{+}\right\rangle _{AB}\otimes \left| \Phi ^{+}\right\rangle _{CD}$, and the unitary transformation takes the form \begin{eqnarray} U &=&\left| 00\right\rangle _{\alpha\beta}\left\langle 00\right| \otimes I_{ABCD} \nonumber \\ &&+\left| 01\right\rangle _{\alpha\beta}\left\langle 01\right| \otimes \sigma _{z}^{A}\sigma _{z}^{C}I_{BD} \nonumber \\ &&+\left| 10\right\rangle _{\alpha\beta}\left\langle 10\right| \otimes I_{AC}\sigma _{x}^{B}\sigma _{x}^{D} \nonumber \\ &&+\left| 11\right\rangle _{\alpha\beta}\left\langle 11\right| \otimes \sigma _{z}^{A}\sigma _{x}^{B}\sigma _{z}^{C}\sigma _{x}^{D}. \end{eqnarray} After this procedure, the output state is the desired quantum state, \emph{i.e.}, the Smolin state $\rho _{ABCD}^{S}$. Therefore, with any source of Bell states currently available, Smolin states may be generated in this way, and thus our protocol can in principle be implemented in the near future. \begin{figure} \caption{\label{fig:epsart} Quantum circuit for the preparation of Smolin states.} \end{figure} This work was supported by the NSFC under grant Nos. 10605041 and 10429401, the RGC grant of Hong Kong, the NSF of Guangdong province under Grant No. 06023145, and the Foundation of Zhongshan University Advanced Research Center. \end{document}
\begin{document} \title[Problematic subgroups] {Classification of problematic subgroups of $\Uof{n}$} \author{Julia E.\ Bergner} \address{Department of Mathematics, University of Virginia, Charlottesville, VA} \email{[email protected]} \author{Ruth Joachimi} \address{Department of Mathematics and Informatics, University of Wuppertal, Germany} \email{[email protected]} \author{Kathryn Lesh} \address{Department of Mathematics, Union College, Schenectady NY} \email{[email protected]} \author{Vesna Stojanoska} \address{Department of Mathematics, University of Illinois at Urbana-Champaign, Urbana IL} \email{[email protected]} \author{Kirsten Wickelgren} \address{School of Mathematics, Georgia Institute of Technology, Atlanta GA} \email{[email protected]} \date{\today} \maketitle \markboth{\sc{Bergner, Joachimi, Lesh, Stojanoska, and Wickelgren}} {\sc{Problematic subgroups of $\Uof{n}$}} \begin{abstract} Let $\Lcal_{n}$ denote the topological poset of decompositions of $\complexesn$ into mutually orthogonal subspaces. We classify $p$\kern1.3pt- toral subgroups of $\Uof{n}$ that can have noncontractible fixed points under the action of $\Uof{n}$ on~$\Lcal_{n}$. \end{abstract} \section{Introduction} Throughout the paper, let $p$ denote a fixed prime. Let $\Pcal_{n}$ be the $\Sigma_{n}$-space given by the nerve of the poset category of proper nontrivial partitions of the set $\{1,\dots,n\}$. In~\cite{ADL2}, the authors compute the Bredon homology and cohomology groups of $\Pcal_{n}$ with certain $p$\kern1.3pt- local coefficients. The computation is part of a program to give a new proof of the Whitehead Conjecture and the collapse of the homotopy spectral sequence for the Goodwillie tower of the identity, one that does not rely on the detailed knowledge of homology used in \cite{Kuhn-Whitehead}, \cite{Kuhn-Priddy}, and~\cite{Behrens-Goodwillie-Tower}. 
For appropriate $p$\kern1.3pt- local coefficients, the Bredon homology and cohomology of~$\Pcal_{n}$ turn out to be trivial when $n$ is not a power of the prime~$p$, and nontrivial in only one dimension when $n=p^{k}$ (\cite{ADL2} Theorem~1.1 and Corollary~1.2). A key ingredient of the proof is the identification of the fixed point spaces of $p$\kern1.3pt- subgroups of $\Sigma_{n}$ acting on $\Pcal_{n}$. If a $p$\kern1.3pt- subgroup $H\subseteq\Sigma_{n}$ has noncontractible fixed points, then $H$ gives an obstruction to triviality of Bredon homology, so it is ``problematic'' (see Definition~\ref{defn: problematic} below). It turns out that only elementary abelian $p$\kern1.3pt- subgroups of $\Sigma_{n}$ with free action on $\{1,\dots,n\}$ can have noncontractible fixed points on~$\Pcal_{n}$ (\cite{ADL2} Proposition~6.2). The proof of the main result of \cite{ADL2} then proceeds by showing that, given appropriate conditions on the coefficients, these problematic subgroups can be ``pruned'' or ``discarded'' in most cases, resulting in sparse Bredon homology and cohomology. In this paper, we carry out the fixed point calculation analogous to that of \cite{ADL2} in the context of unitary groups. The calculation is part of a program to establish the conjectured $bu$-analogue of the Whitehead Conjecture and the conjectured collapse of the homotopy spectral sequence of the Weiss tower for the functor $V\mapsto B\Uof{V}$ (see~\cite{Arone-Lesh-Crelle} Section~12). Let $\Lcal_{n}$ denote the (topological) poset category of decompositions of $\complexesn$ into nonzero proper orthogonal subspaces. The category $\Lcal_{n}$ was first introduced in~\cite{Arone-Topology}. It is internal to the category of topological spaces: both its object set and its morphism set have a topology. (See~\cite{Banff1} for a detailed discussion and examples.) 
The action of the unitary group $\Uof{n}$ on $\complexesn$ induces a natural action of $\Uof{n}$ on $\Lcal_{n}$, and the Bredon homology of the $\Uof{n}$-space $\Lcal_{n}$ plays an analogous role in the unitary context to the part played by the Bredon homology of the $\Sigma_{n}$-space~$\Pcal_{n}$ in the classical context. The complex $\Lcal_{n}$ has a similar flavor to the ``stable building'' or ``common basis complex'' studied in~\cite{Rognes-Topology, RognesPreprint}. In passing from finite groups to compact Lie groups, one usually replaces the notion of finite $p$\kern1.3pt- groups with that of $p$\kern1.3pt- toral groups (i.e., extensions of finite $p$\kern1.3pt- groups by tori). Our goal in this paper is to identify the $p$\kern1.3pt- toral subgroups of $\Uof{n}$ that have noncontractible fixed point spaces on $\Lcal_{n}$. These groups will be the obstructions to triviality of the Bredon homology and cohomology of $\Lcal_{n}$ with suitable $p$\kern1.3pt- local coefficients. Hence we make the following definition. \begin{definition} \label{defn: problematic} A closed subgroup $H\subseteq\Uof{n}$ is called \definedas{problematic} if the fixed point space of $H$ acting on the nerve of $\Lcal_{n}$ is not contractible. \end{definition} In this paper, we give a complete classification of problematic $p$\kern1.3pt- toral subgroups of $\Uof{n}$. To state the result, we recall that there is a family of $p$\kern1.3pt- toral subgroups $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ with the following key properties: (i)~$\Gamma_{k}$ acts irreducibly on $\complexespk$, and (ii)~$\Gamma_{k}$ is an extension of the central $S^{1}\subseteq U\kern-2.5pt\left(p^{k}\right)$ by an elementary abelian $p$\kern1.3pt- group. The subgroups $\Gamma_{k}$ have appeared in numerous works, such as \cite{Griess}, \cite{JMO}, \cite{Oliver-p-stubborn}, \cite{Arone-Topology},~\cite{Arone-Lesh-Crelle}, and~\cite{AGMV}. 
The structure of $\Gamma_{k}\subset U\kern-2.5pt\left(p^{k}\right)$ is described explicitly in Section~\ref{section: Gamma_k}, and we call the corresponding action of $\Gamma_{k}$ on $\complexespk$ the \defining{standard action} of~$\Gamma_{k}$. For any $n=mp^k$, we can consider $\Gamma_{k}$ acting on ${\mathbb{C}}^{mp^{k}}$ by the $m$-fold multiple of the standard action. That is, we choose a fixed isomorphism $\complexesn\cong{\mathbb{C}}^{m}\otimes\complexespk$, and we let $\Gamma_{k}$ act trivially on ${\mathbb{C}}^{m}$, and by the standard action of $\Gamma_{k}$ on~$\complexespk$. A subgroup $H\subseteq\Gamma_{k}$ is coisotropic if it contains the subgroup of $\Gamma_{k}$ that centralizes it, $C_{\Gamma_{k}}(H)\subseteq H$. (These are, in fact, the $p$-centric subgroups of~$\Gamma_{k}$.) When we write~$S^{1}$, we always mean the center of~$\Uof{n}$. \begin{theorem}\label{theorem: new classification theorem} \classificationtheoremtext \end{theorem} Note that there is no loss of generality in assuming that $S^{1}\subseteq H$ in Theorem~\ref{theorem: new classification theorem}. This is because $S^{1}$ acts on $\complexesn$ via multiplication by scalars, and fixes any object of~$\Lcal_{n}$. Further, $H$ is a $p$\kern1.3pt- toral subgroup of~$\Uof{n}$ if and only if $HS^{1}$ is $p$\kern1.3pt- toral, and $\weakfixed{HS^{1}}=\weakfixed{H}$. Hence $H$ is problematic if and only if $HS^{1}$ is problematic. The remainder of the introduction is devoted to outlining the proof of Theorem~\ref{theorem: new classification theorem}. We begin with the special case $H=S^{1}$. Since $\weakfixed{S^1}=\Lcal_{n}$, the question of whether $S^{1}$ is problematic is asking if $\Lcal_{n}$ is contractible. We only need one new ingredient to answer this question. \begin{simplyconnectedthm} \simplyconnectedtext \end{simplyconnectedthm} Homology considerations then give us the following corollary, which says that $S^{1}$ is problematic if and only if $n$ is a power of a prime. 
\begin{corollarycontractible} \corollarycontractibletext \end{corollarycontractible} For $p$\kern1.3pt- toral subgroups $H\subseteq\Uof{n}$ that strictly contain~$S^{1}$, the proof of Theorem~\ref{theorem: new classification theorem} has two major components. The first is a general argument to establish that if $n=mp^{k}$ (with $m$ and $p$ coprime) and $H$ is a problematic $p$\kern1.3pt- toral subgroup of~$\Uof{n}$, then $H$ is conjugate to a subgroup of $\Gamma_{k}\subset\Uof{n}$. The second consists of checking which of these remaining possibilities are actually problematic, by analyzing the resulting fixed point spaces. To outline the reduction to subgroups of~$\Gamma_{k}\subset\Uof{n}$, we need a little more terminology. The center of $\Uof{n}$ is $S^{1}$, acting on $\complexesn$ via scalar multiples of the identity matrix, and the \definedas{projective unitary group} $P\Uof{n}$ is the quotient~$\Uof{n}/S^{1}$. We say that a closed subgroup $H\subseteq \Uof{n}$ is a \definedas{projective elementary abelian $p$\kern1.3pt- group} if the image of $H$ in $P\Uof{n}$ is an elementary abelian $p$\kern1.3pt- group. Lastly, if $H$ is a closed subgroup of~$\Uof{n}$, we write $\character{H}$ for the character of $H$ acting on $\complexesn$ through the standard action of $\Uof{n}$ on $\complexesn$. \begin{elementaryabeliantheorem} \elementaryabeliantheoremtext \end{elementaryabeliantheorem} The first part of Theorem~\ref{thm: H elementary abelian} was the principal result of~\cite{Banff1}. For completeness, we give a streamlined proof in the current work, in order to obtain the second part of the theorem, the subgroup's character. The character data of Theorem~\ref{thm: H elementary abelian} allows us to narrow down the problematic $p$\kern1.3pt- toral subgroups of $\Uof{n}$ to a very small, explicitly described collection. 
The subgroups $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ (see Section~\ref{section: Gamma_k}) satisfy the conclusions of Theorem~\ref{thm: H elementary abelian}. The next step is to show that they generate all the possibilities. \begin{subgroupGammadiag} \subgroupGammadiagtext \end{subgroupGammadiag} The strategy of the proof of Theorem~\ref{theorem: Non_contractible_implies_subgroup_Gamma_diag} is to use the first part of Theorem~\ref{thm: H elementary abelian} and bilinear forms to classify $H$ up to abstract group isomorphism. Then character theory, together with the second part of Theorem~\ref{thm: H elementary abelian}, allows us to pinpoint the actual conjugacy class of~$H$ in~$\Uof{n}$. After this, the remaining question for the classification theorem is whether all of the groups named in Theorem~\ref{theorem: Non_contractible_implies_subgroup_Gamma_diag} are, in fact, problematic. Our strategy is to compute the fixed point spaces fairly explicitly. Let $X\ast Y$ denote the join of the spaces $X$ and~$Y$. The following formula was suggested to us by Gregory Arone. \begin{jointheorem} \asttheoremtext \end{jointheorem} The remainder of the proof of Theorem~\ref{theorem: new classification theorem} is then a homology calculation, based on the formula of Theorem~\ref{theorem: join theorem}. \mbox{} \\ The organization of the paper is as follows. In the first part of the paper, Section~\ref{section: exploration} and Section~\ref{section: simple connectivity}, we describe some elementary properties of $\Lcal_{n}$ and prove the simple connectivity result for $n>2$. The second part of the paper is devoted to establishing the list of candidates for problematic subgroups. In Section~\ref{section: normal subgroup condition} we give a normal subgroup condition from which one can deduce contractibility of a fixed point set~$\weakfixed{H}$. 
We follow up in Section~\ref{section: find normal subgroup} with the proof of Theorem~\ref{thm: H elementary abelian}, by finding an appropriate normal subgroup unless $H$ is a projective elementary abelian $p$\kern1.3pt- subgroup of~$\Uof{n}$. Section~\ref{section: Gamma_k} is expository and discusses the salient properties of the subgroup $\Gamma_k\subseteq U\kern-2.5pt\left(p^{k}\right)$. The projective elementary abelian $p$\kern1.3pt- subgroups of $\Uof{n}$ are classified up to abstract group isomorphism using bilinear forms in Section~\ref{section: alternating forms}. They are shown to be isomorphic to $\Gamma_{s}\times\Delta_{t}$ where $\Delta_{t}$ denotes $({\mathbb{Z}}/p)^{t}$. In Section~\ref{section: problematic subgroups}, we prove Theorem~\ref{theorem: Non_contractible_implies_subgroup_Gamma_diag}, allowing us to view this abstract group isomorphism as an isomorphism of representations. The third part of the paper is devoted to checking the candidates identified in Theorem~\ref{theorem: Non_contractible_implies_subgroup_Gamma_diag}. Section~\ref{section: joins} calculates the $U(m)$-equivariant homotopy type of the fixed points of the action of $\Gamma_{s}$ on $\Lcal_{mp^s}$ (Theorem~\ref{theorem: join theorem}). This allows us to compute the fixed points of the action of $\Gamma_s \times \Delta_t$ on $\Lcal_{mp^s}$ by first computing the fixed points under $\Gamma_s$, reducing the problem to the computation of $\Delta_t$-fixed points. The bulk of Section~\ref{section: proof of classification theorem} consists of analyzing the problem of $\Delta_{t}$-fixed points, after which we assemble the pieces to prove Theorem~\ref{theorem: new classification theorem}. \noindent{\bf{Definitions, Notation, and Terminology}}\\ \indent When we speak of a subgroup of a Lie group, we always mean a closed subgroup. If we speak of $S^{1}\subseteq\Uof{n}$ without any further description, we mean the center of~$\Uof{n}$. 
We generally do not distinguish in notation between a category and its nerve, since the context will make clear which we mean. We are concerned with actions of subgroups $H\subseteq\Uof{n}$ on~$\complexesn$; we write $\repn{H}$ for the restriction of the standard representation of $\Uof{n}$ to $H$, and $\character{H}$ for the corresponding character. We apply the standard terms from representation theory to $H$ if they apply to~$\repn{H}$. For example, we say that $H$ is \definedas{irreducible} if $\repn{H}$ is irreducible, and we say that $H$ is \definedas{isotypic} if $\repn{H}$ is a direct sum of mutually isomorphic irreducible representations of $H$. We also introduce a new term: if $H$ is not isotypic, we say that $H$ is \definedas{polytypic}, as a succinct way to say that ``the action of $H$ on $\complexesn$ is not isotypic.'' A \definedas{decomposition} $\lambda$ of $\complexesn$ is an (unordered) decomposition of $\complexesn$ into mutually orthogonal, nonzero subspaces. We say that $\lambda$ is \definedas{proper} if it consists of subspaces properly contained in~$\complexesn$. If $v_{1}$, \dots, $v_{m}$ are the subspaces in~$\lambda$, then we call $v_{1}$, \dots, $v_{m}$ the \definedas{components} or \definedas{classes} of~$\lambda$; we write $\class(\lambda)$ for $\{v_{1},\dots,v_{m}\}$ when we want to emphasize the set of subspaces in the decomposition~$\lambda$. The action of $\Uof{n}$ on $\complexesn$ induces a natural action of $\Uof{n}$ on $\Lcal_{n}$, and we are interested in fixed points of this action. If $\lambda$ consists of the subspaces $v_{1}$,\dots, $v_{m}$, we say that $\lambda$ is \definedas{weakly fixed} by a subgroup $H\subseteq\Uof{n}$ if for every $h\in H$ and every $v_{i}$, there exists a $j$ such that $hv_{i}=v_{j}$. (We use the word ``weakly fixed'' here to contrast with the notion of ``strongly fixed,'' below.) We write $\weakfixed{H}$ for the full subcategory of weakly $H$-fixed objects of~$\Lcal_{n}$. 
Note that $\Nerve\left(\left(\Lcal_{n}\right)^{H}\right) \cong\left(\strut\Nerve\left(\Lcal_{n}\right)\right)^{H}$, and we abuse notation by writing $\weakfixed{H}$ for either the subcategory or its nerve, depending on context. By contrast, we say that $\lambda$ is \definedas{strongly fixed} by $H\subseteq\Uof{n}$ if for all $i$, we have $Hv_{i}=v_{i}$, that is, each $v_{i}$ is a representation of~$H$. We write $\strongfixed{H}$ for the full subcategory of strongly $H$-fixed objects of $\Lcal_{n}$ (and for its nerve). \noindent{\bf{Acknowledgements}}\\ \indent The authors are grateful to Bill Dwyer, Jesper Grodal, and Gregory Arone for many helpful discussions about this project. We also thank Dave Benson, Jeremiah Heller, John Rognes, and David Vogan for helpful comments on earlier drafts of this document. The authors thank the Banff International Research Station and the Clay Mathematics Institute for financial support. The first, third, and fourth authors received partial support from NSF grants DMS-1105766 and DMS-1352298, DMS-0968251, DMS-1307390 and DMS-1606479, respectively. The second author was partially supported by DFG grant HO 4729/1-1, and the fifth author was partially supported by an AIM 5-year fellowship and NSF grants DMS-1406380 and DMS-1552730. Some of this work was done while the first, fourth, and fifth authors were in residence at MSRI during the Spring 2014 semester, supported by NSF grant 0932078 000. \section{The topological category $\Lcal_{n}$} \label{section: exploration} The goal of this section is to offer descriptions and examples of the object and morphism spaces of $\Lcal_{n}$, to build intuition for the category. We devote some attention to the fundamental groups of the connected components of the object and morphism spaces, since the goal of Section~\ref{section: simple connectivity} is to establish simple connectivity of the nerve of~$\Lcal_{n}$. 
We also refer the reader to~\cite{Banff1}, where some low-dimensional cases were studied explicitly. For example, it was established that $\Lcal_{2}$ is homeomorphic to~${\mathbb{R}} P^{2}$. From time to time, we refer to the category of unordered partitions of an integer~$n$, by which we mean the quotient of the set of partitions of the set $\mathbf{n}=\{1,\ldots,n\}$ by the action of $\Sigma_{n}$. We sometimes use a notation for unordered partitions where sizes of components in the partition are underlined, and the multiplicity of components of a certain size is indicated as scalar multiplication. For example, the unordered partition $7=1+3+3$ is denoted ${\underline{1}}+2\cdot{\underline{3}}$, because the partition has two components of size $3$ and one component of size~$1$. We begin with the object space of $\Lcal_{n}$, denoted by $\Obj\left(\Lcal_{n}\right)$. The elements are (unordered) decompositions of $\complexesn$ into proper, mutually orthogonal subspaces. There is a natural action of $\Uof{n}$ on~$\Obj\left(\Lcal_{n}\right)$, and we topologize $\Obj\left(\Lcal_{n}\right)$ as the disjoint union of orbits of the $\Uof{n}$ action. (This description and another equivalent one are given in~\cite{Banff1}.) \begin{definition}\mbox{} \label{defn: type objects} \begin{enumerate} \item If $\lambda\in\Obj\left(\Lcal_{n}\right)$, then the \defining{type} of~$\lambda$, denoted by $tof{\lambda}$, is the unordered partition of the integer $n$ given by the dimensions of the components of $\lambda$. \item For $\lambda\in\Obj\left(\Lcal_{n}\right)$, we write $\Uof{n}Isotropyof{\lambda}\subseteq\Uof{n}$ for the isotropy subgroup of $\lambda$ under the action of~$\Uof{n}$, i.e., $\Uof{n}Isotropyof{\lambda}$ consists of those elements of $\Uof{n}$ that weakly fix $\lambda$. 
\item \label{item: set with same type} If $\tau$ is an unordered partition of the integer~$n$, we gather the decompositions with this type by defining \begin{align*} \objcomponent{\tau}&= \left\{\lambda\mid tof{\lambda}=\tau\right\} \subseteq\Obj\left(\Lcal_{n}\right). \end{align*} \end{enumerate} \end{definition} Since $\Obj\left(\Lcal_{n}\right)$ is topologized as the disjoint union of $\Uof{n}$-orbits, the transitive action of $\Uof{n}$ on $n$-frames for $\complexesn$ gives us the following result. \begin{lemma} \label{lemma: object component homogeneous} The subspace $\objcomponent{tof{\lambda}}$ is a homogeneous space; specifically, \[\objcomponent{tof{\lambda}}\cong\Uof{n}/\Uof{n}Isotropyof{\lambda}. \] The connected components of~$\Obj\left(\Lcal_{n}\right)$ are given by the spaces $\objcomponent{\tau}$ as $\tau$ ranges over unordered partitions of the integer~$n$. \end{lemma} \begin{example} \label{example: component of lines} Consider decompositions of $\complexesn$ into $n$ mutually orthogonal lines. An element of the isotropy group of such a decomposition can act by $U(1)$ on each of the lines, and can also permute the lines, so the isotropy group is~$\Uof{1}\wr\Sigma_{n}$. The connected component of $\Obj\left(\Lcal_{n}\right)$ consisting of all decompositions into lines is \[ \objcomponent{n\cdot\underline{1}}\cong\Uof{n}/\left(\Uof{1}\wr\Sigma_{n}\right). \] \end{example} Before moving on, we record a generalization of the computation of Example~\ref{example: component of lines}. \begin{lemma} \label{lemma: object isotropy} If $\lambda\in\Obj\left(\Lcal_{n}\right)$ has type $tof{\lambda}=k_{1}\cdot{\underline{n}}_{1}+\dots+k_{j}\cdot{\underline{n}}_{j}$, then \[ \Uof{n}Isotropyof{\lambda}\cong \left(\Uof{n_{1}}\wr\Sigma_{k_{1}}\right)\times\dots\times \left(\Uof{n_{j}}\wr\Sigma_{k_{j}}\right). \] \end{lemma} Next we consider $\Morph\left(\Lcal_{n}\right)$, the morphism space of~$\Lcal_{n}$. 
Between any two objects of $\Obj\left(\Lcal_{n}\right)$, there is at most one morphism. Hence the source and target maps give a $\Uof{n}$-equivariant monomorphism \[ \Morph\left(\Lcal_{n}\right)\longrightarrow \Obj\left(\Lcal_{n}\right)\times \Obj\left(\Lcal_{n}\right), \] and we give $\Morph\left(\Lcal_{n}\right)$ the subspace topology. \begin{definition}\mbox{} \label{defn: type morphisms} \begin{enumerate} \item If $m \colon \lambda\rightarrow\mu$ is a morphism in~$\Lcal_{n}$, the \defining{type} of $m$, denoted by $tof{m}$ or $tof{\lambda\rightarrow\mu}$, is the morphism that $m$ induces in the category of unordered partitions of the integer~$n$. \item We define $\morphcomponent{\lambda}{\mu} \subseteq \Morph\left(\Lcal_{n}\right)$ by \begin{align*} \morphcomponent{\lambda}{\mu}&= \left\{\lambda'\rightarrow\mu'\mid tof{\lambda'\rightarrow\mu'} =tof{\lambda\rightarrow\mu}\right\}. \end{align*} \end{enumerate} \end{definition} \begin{example} Let $\lambda=\{a, b, c, v\}$ be an orthogonal decomposition of ${\mathbb{C}}^{5}$ into lines $a$, $b$, and $c$ and a two-dimensional subspace~$v$. There are two different types of morphisms from $\lambda$ to objects in $\objcomponent{{\underline{2}}+{\underline{3}}}$. The first is: \begin{align*} \{a, b, c, v\}&\longrightarrow\{a+ b+ c ,v\}\\ \left({\underline{1}}+{\underline{1}}+{\underline{1}}\right)+{\underline{2}} &\longrightarrow {\underline{3}}+{\underline{2}}. \end{align*} The second type takes the sum of two of the lines, and adds the third line to the two-dimensional subspace. There are three morphisms of this type that have $\lambda$ as their source: \begin{align*} \{a, b, c, v\}&\longrightarrow\{a+ b, c+ v\}\\ \{a, b, c, v\}&\longrightarrow\{a+ c, b+ v\}\\ \{a, b, c, v\}&\longrightarrow\{b+ c, a+ v\}\\ \left({\underline{1}}+{\underline{1}}\right) +\left({\underline{1}}+{\underline{2}}\right) &\longrightarrow {\underline{2}}+{\underline{3}}. 
\end{align*} However, the three morphisms of the second type are in the same orbit of the action of $\Uof{n}$ on $\Morph\left(\Lcal_{n}\right)$, whereas the first type of morphism is in a different $\Uof{n}$-orbit. \end{example} We have a parallel result to that of Lemma~\ref{lemma: object component homogeneous}. \begin{lemma} \label{lemma: type=component} The subspace $\morphcomponent{\lambda}{\mu}$ is a homogeneous space; specifically, \[ \morphcomponent{\lambda}{\mu} \cong \Uof{n}/\left(\Uof{n}Isotropyof{\lambda}\cap\Uof{n}Isotropyof{\mu}\right). \] The connected components of~$\Morph\left(\Lcal_{n}\right)$ are given by the spaces $\morphcomponent{\lambda}{\mu}$ as the type of $\lambda\rightarrow\mu$ ranges over the morphisms of unordered partitions of the integer~$n$. \end{lemma} \begin{proof} We must show that $\Uof{n}$ acts transitively on $\morphcomponent{\lambda}{\mu}$. Suppose that $tof{\lambda'\rightarrow\mu'}=tof{\lambda\rightarrow\mu}$. Then $\mu$ and $\mu'$ have the same type, so there exists $u\in\Uof{n}$ such that $u\,\mu=\mu'$. It is not necessarily the case that $u \lambda = \lambda'$. However, $u\lambda$ does have the same type as $\lambda'$, and both are refinements of~$\mu'$. Hence there exists $u'\in\Uof{n}Isotropyof{\mu'}$ such that $u' u\lambda=\lambda'$. To compute the isotropy group of $\lambda\rightarrow\mu$, we recall that there is at most one morphism between any two objects of~$\Lcal_{n}$. Hence $\lambda\rightarrow\mu$ is fixed by $u\in\Uof{n}$ if and only if $u$ fixes both $\lambda$ and $\mu$. \end{proof} \section{$\Lcal_{n}$ is simply connected} \label{section: simple connectivity} In~\cite{Banff1}, we proved that the nerve of $\Lcal_{3}$ is simply connected by exhibiting the object space (which has two connected components) and the morphism space (which has one connected component), and using the Van Kampen Theorem. In this section, we use similar methods to show more generally that $\Lcal_{n}$ is simply connected when $n\geq 3$. 
Since $\Lcal_{1}$ is empty, and we know $\Lcal_{2}\cong{\mathbb{R}} P^{2}$ \cite[Prop. 2.1]{Banff1}, the following theorem completes the understanding of the fundamental group of $\Lcal_{n}$ for all~$n$. \begin{theorem} \label{theorem: simply connected} \simplyconnectedtext \end{theorem} Given Theorem~\ref{theorem: simply connected}, the contractibility of $\Lcal_{n}$ is determined by homology. \begin{corollary} \label{corollary: contractible} \corollarycontractibletext \end{corollary} \begin{proof} The mod~$p$ homology of $\Lcal_{p^{k}}$ is nonzero \cite[Theorem~4(b)]{Arone-Topology}, so certainly $\Lcal_{p^{k}}$ is not contractible. If $n$ is not a prime power, then $n>2$, so $\Lcal_{n}$ is simply connected by Theorem~\ref{theorem: simply connected}. But if $n$ is not a power of a prime, then $\Lcal_{n}$ is acyclic both rationally and at all primes by another result of Arone (see, for example, \cite[Prop.~9.6]{Arone-Lesh-Crelle}). The result follows. \end{proof} We build on the results of Section~\ref{section: exploration} to give an elementary approach to proving Theorem~\ref{theorem: simply connected}. We begin by considering the fundamental group of each connected component of~$\Obj\left(\Lcal_{n}\right)$. \begin{lemma} \label{lemma: fund group obj component} Suppose that $\lambda\in\Obj\left(\Lcal_{n}\right)$ has type $k_{1}{\underline{n}}_{1}+\dots+k_{j}{\underline{n}}_{j}$. Then $\pi_{1}\objcomponent{\lambda}\cong \prod_{i=1}^{j}\Sigma_{k_{i}}$. \end{lemma} \begin{proof} Recall from Lemmas~\ref{lemma: object component homogeneous} and~\ref{lemma: object isotropy} that $\objcomponent{\lambda}\cong \Uof{n}/\Uof{n}Isotropyof{\lambda}$ and $\Uof{n}Isotropyof{\lambda}\cong\prod_{i=1}^{j}\left(U(n_i)\wr\Sigma_{k_i}\right)$. Recall also that if $G$ is a compact Lie group with maximal torus~$T$, then $\pi_{1}T\rightarrow\pi_{1}G$ is surjective. (See, for example, Corollary~5.17 in \cite{MimuraToda}.) 
The subgroup $\Uof{n}Isotropyof{\lambda}$ contains a maximal torus of~$\Uof{n}$, so $\pi_{1}\Uof{n}Isotropyof{\lambda}\rightarrow\pi_{1}\Uof{n}$ is an epimorphism. The lemma then follows from the long exact sequence of homotopy groups associated to $\Uof{n}Isotropyof{\lambda}\rightarrow\Uof{n}\rightarrow\Uof{n}/\Uof{n}Isotropyof{\lambda}$, because $\pi_{0}\Uof{n}Isotropyof{\lambda}\cong \prod_{i=1}^{j}\Sigma_{k_{i}}$ and $\Uof{n}$ is connected. \end{proof} We need an easily applied criterion for the Van Kampen Theorem to yield simple connectivity, which is the purpose of the following algebraic lemma. \begin{lemma} \label{lemma: free product zero} Consider a diagram of groups \[ A \xleftarrow{\ f\ } C \xrightarrow{\ g\ } B. \] Let $K=\ker(C \xrightarrow{\ g\ } B)$ and assume the following conditions. \begin{enumerate} \item The normal closure of $\im(K\xrightarrow{\ f\ } A)\subseteq A$ is $A$. \item The normal closure of $\im(C\xrightarrow{\ g\ } B)\subseteq B$ is $B$. \end{enumerate} Then $A \ast_C B$ is the trivial group. \end{lemma} \begin{proof} We first show that any element of $A$, regarded as an element of $A \ast_C B$, is actually trivial. By definition, elements of the free product of $A$ and $B$ that have the form $f(c)g(c)^{-1}$ are null in the amalgamated product $A \ast_C B$. Since the normal closure of $f(K)$ in $A$ is $A$ itself, any element of $A$ can be written as a product of elements of the form $y=af(k)a^{-1}$ where $a\in A$ and $k\in K$. However, $g(k)=e$, so $y=af(k)a^{-1}=af(k)g(k)^{-1}a^{-1}$, and $y$ becomes null in $A \ast_C B$. Likewise, any element of $B$ can be written as a product of elements of the form $z=bg(c)b^{-1}$ where $b\in B$ and $c\in C$. We can write $z= bg(c)b^{-1}= bf(c)\left[f(c)^{-1}g(c)\right]b^{-1}$, that is, $z= bf(c)b^{-1}$ in the amalgamated product. But $f(c)$ is an element of $A$, and the first paragraph shows that every element of $A$ becomes trivial in $A \ast_C B$. It follows that $z$ also becomes trivial in $A \ast_C B$. Since $A \ast_C B$ is generated by the images of $A$ and $B$, it is the trivial group. \end{proof} Let $\skel{i}{\Lcal_{n}}$ denote the $i$-skeleton of the nerve of~$\Lcal_{n}$. Since the inclusion of the $2$-skeleton into the nerve induces an isomorphism on fundamental groups, it is sufficient to show that the map \[ \pi_{1}\skel{1}{\Lcal_{n}}\rightarrow\pi_{1}\skel{2}{\Lcal_{n}} \] is trivial, so we begin by analyzing paths in $\skel{1}{\Lcal_{n}}$ in detail. Given a decomposition $\lambda\in\Obj\left(\Lcal_{n}\right)$, recall that $\objcomponent{\tof{\lambda}}$ is the connected component of $\Obj\left(\Lcal_{n}\right)$ consisting of all objects with the same type as~$\lambda$ (Definition~\ref{defn: type objects}~\eqref{item: set with same type} and Lemma~\ref{lemma: object component homogeneous}). Likewise, $\morphcomponent{\lambda}{\mu}$ is the connected component of $\Morph\left(\Lcal_{n}\right)$ containing morphisms of the same type as~$\lambda\rightarrow\mu$ (Definition~\ref{defn: type morphisms}~\eqref{item: set with same type} and Lemma~\ref{lemma: type=component}). Finally, we write $\complexesyl{\lambda}{\mu}$ for the homotopy pushout of the diagram \begin{equation} \label{eq: cylinder} \xymatrix{ \morphcomponent{\lambda}{\mu}\ar^{\rm{\hspace{1.5em}target\hspace{.5em}}}[r] \ar_{\rm{source}}[d] & \objcomponent{\mu}\\ \objcomponent{\lambda} & . } \end{equation} The space $\skel{1}{\Lcal_{n}}$ can be written as the (finite) union of the spaces $\complexesyl{\lambda}{\mu}$, where $\lambda\rightarrow\mu$ ranges over a set of representatives of the connected components of $\Morph\left(\Lcal_{n}\right)$. Note that these cylinders are not disjoint.
For example, the following diagram shows connected components of $\Obj\left(\Lcal_{4}\right)$, with arrows between those that are connected by morphisms: \begin{equation} \label{eq: L4 diagram} \xymatrix{ \objcomponent{1\cdot\underline{1}+1\cdot\underline{3}} & & \objcomponent{2\cdot\underline{2}}\\ & \objcomponent{2\cdot\underline{1}+1\cdot\underline{2}}\ar[ul] \ar[ur]\\ . &\objcomponent{4\cdot{\underline{1}}} \ar@<1ex>[uul] \ar[u] \ar@<-1ex>[uur] } \end{equation} Thus $\skel{1}{\Lcal_{4}}$ is the union of five non-degenerate cylinders, three of which contain the object component~$\objcomponent{4\cdot\underline{1}}$. Using the Van Kampen Theorem, the fundamental group of any double mapping cylinder $\complexesyl{\lambda}{\mu}$ can be expressed as a quotient of the free product of $\pi_{1}\objcomponent{\lambda}$ and $\pi_{1}\objcomponent{\mu}$. We illustrate with a special case that turns out to do much of the work to prove Theorem~\ref{theorem: simply connected}. \begin{proposition} \label{prop: tentacle} Let $\mu$ be any decomposition containing at least one subspace of dimension greater than one, and let $\epsilon$ be a refinement of $\mu$ into lines. Then $\complexesyl{\epsilon}{\mu}$ is simply connected. \end{proposition} \begin{proof} Suppose that $\mu$ has type $k_{1}{\underline{n}}_{1}+\dots+k_{j}{\underline{n}}_{j}$. Then \[ \Uof{n}Isotropyof{\mu}\cong \left(\Uof{n_1} \wr \Sigma_{k_1}\right) \times\dots\times \left(\Uof{n_j} \wr \Sigma_{k_j}\right), \] and by Lemma~\ref{lemma: fund group obj component} we have $\pi_1\objcomponent{\mu}\cong\prod_{i=1}^{j}\Sigma_{k_{i}}$. Likewise \[ \Uof{n}Isotropyof{\epsilon}\cong \Uof{1} \wr \Sigma_{n} \] and by Lemma~\ref{lemma: fund group obj component} we have $\pi_1\objcomponent{\epsilon}\cong\Sigma_n$. Finally, $\Uof{n}$ acts transitively on $\morphcomponent{\epsilon}{\mu}$, with isotropy group $\Uof{n}Isotropyof{\epsilon}\cap\Uof{n}Isotropyof{\mu}$ (Lemma~\ref{lemma: type=component}).
We can compute \[ \Uof{n}Isotropyof{\epsilon}\cap\Uof{n}Isotropyof{\mu}\cong \left(\left(\Uof{1} \wr \Sigma_{n_1}\right) \wr \Sigma_{k_1}\right) \times \cdots \times \left(\left(\Uof{1} \wr \Sigma_{n_j}\right) \wr \Sigma_{k_j}\right), \] and as a result, by a similar argument to Lemma~\ref{lemma: fund group obj component}, we have \[ \pi_1\morphcomponent{\epsilon}{\mu} \cong \left(\Sigma_{n_1} \wr \Sigma_{k_1}\right) \times \cdots \times \left( \Sigma_{n_j} \wr \Sigma_{k_j}\right). \] Hence applying $\pi_{1}$ to diagram~\eqref{eq: cylinder} gives us \[\xymatrix{ \left(\Sigma_{n_1} \wr \Sigma_{k_1}\right) \times \cdots \times \left( \Sigma_{n_j} \wr \Sigma_{k_j}\right) \ar[r] \ar[d] & \Sigma_{k_1}\times \cdots \times \Sigma_{k_j} \\ \Sigma_n & , } \] where the top map is the projection for each of the wreath products, and the vertical map is the inclusion (recall that $n_{1}k_{1}+\dots+n_{j}k_{j}=n$). Both conditions of Lemma~\ref{lemma: free product zero} are satisfied here: the top map is surjective, and because some $n_{i}>1$, the image in $\Sigma_{n}$ of its kernel contains a transposition, whose normal closure is all of~$\Sigma_{n}$. It now follows from Lemma~\ref{lemma: free product zero} that the fundamental group of $\complexesyl{\epsilon}{\mu}$ is trivial. \end{proof} If we apply Proposition~\ref{prop: tentacle} to diagram~\eqref{eq: L4 diagram}, we see that we have simple connectivity of each of the three cylinders containing~$\objcomponent{4\cdot\underline{1}}$. However, it is possible to make homotopically essential loops in the $1$-skeleton by making circuits of the triangles in the diagram, so we focus attention on such triangles. Given a morphism $\lambda\rightarrow\mu$ in~$\Lcal_{n}$, choose a refinement $\epsilon$ of $\lambda$ into one-dimensional components, and consider the ``triangle'' \begin{equation} \label{diagram: triangle} \xymatrix{ &\objcomponent{\tof{\mu}}\\ \objcomponent{\tof{\lambda}} \ar[ur]^{\morphcomponent{\lambda}{\mu}}\\ & \objcomponent{n\cdot{\underline{1}}} \ar[ul]^{\morphcomponent{\epsilon}{\lambda}} \ar[uu]_{\morphcomponent{\epsilon}{\mu}}.
} \end{equation} We define \[ \morphtriangle{\lambda}{\mu}= \complexesyl{\epsilon}{\mu}\cup\complexesyl{\epsilon}{\lambda}\cup\complexesyl{\lambda}{\mu}, \] a path-connected space that depends only on the morphism type of $\lambda\rightarrow\mu$. Consider the following paths: \begin{equation} \label{eq: defn alpha} \begin{cases} \alpha_{1}(t)=\left(\epsilon\rightarrow\lambda,t\right)& \mbox{in }\complexesyl{\epsilon}{\lambda}\\ \alpha_{2}(t)=\left(\lambda\rightarrow\mu,t\right)& \mbox{in }\complexesyl{\lambda}{\mu}\\ \alpha_{3}(t)=\left(\epsilon\rightarrow\mu,t\right)& \mbox{in }\complexesyl{\epsilon}{\mu} \end{cases} \end{equation} Let $\alpha=\alpha_{1}\ast\alpha_{2}\ast\alpha_{3}^{-1}$ be the concatenation of the three paths to form a loop in~$\morphtriangle{\lambda}{\mu}$ based at $\epsilon$. \begin{lemma} \label{lemma: alpha generates fundamental group} If $\lambda$ contains at least one subspace of dimension greater than one, then $\pi_{1}\left(\morphtriangle{\lambda}{\mu},\epsilon\right)$ is generated by the loop $\alpha$ given above. \end{lemma} \begin{proof} Any loop based at $\epsilon$ can be written as a finite concatenation of paths in $\complexesyl{\epsilon}{\lambda}$, $\complexesyl{\lambda}{\mu}$, and $\complexesyl{\epsilon}{\mu}$. Furthermore, path-connectedness of $\objcomponent{\tof{\lambda}}$, $\objcomponent{\tof{\mu}}$, and $\objcomponent{n\cdot{\underline{1}}}$ means that we can assume that the paths in the concatenation have endpoints at $\epsilon$, $\lambda$, or $\mu$. Hence it is sufficient to analyze paths in the cylinders with those endpoints. By Proposition~\ref{prop: tentacle}, $\complexesyl{\epsilon}{\lambda}$ is simply connected. Hence any path in $\complexesyl{\epsilon}{\lambda}$ beginning at $\epsilon$ and ending at $\lambda$ is path homotopic to~$\alpha_{1}$. Similarly, any path in $\complexesyl{\epsilon}{\mu}$ beginning at $\epsilon$ and ending at $\mu$ is path homotopic to $\alpha_{3}$.
To finish the proof, we need to show that any path in $\complexesyl{\lambda}{\mu}$ from $\lambda$ to $\mu$ is path homotopic in $\morphtriangle{\lambda}{\mu}$ to~$\alpha_{2}$. Note that we need the entire triangle. Unlike the previous cases, it is not generally true that a path from $\lambda$ to $\mu$ in $\complexesyl{\lambda}{\mu}$ is path homotopic to $\alpha_{2}$ by a path homotopy that stays inside $\complexesyl{\lambda}{\mu}$. It is sufficient to prove that the inclusion $\complexesyl{\lambda}{\mu}\subseteq\morphtriangle{\lambda}{\mu}$ induces the trivial map on fundamental groups. Observe that by the Van Kampen Theorem, $\pi_{1}\complexesyl{\lambda}{\mu}$ is a quotient of the free product $\pi_{1}\objcomponent{\tof{\lambda}} \ast\pi_{1}\objcomponent{\tof{\mu}}$. However, simple connectivity of $\complexesyl{\epsilon}{\lambda}$ means that $\pi_{1}\objcomponent{\tof{\lambda}}$ maps trivially to $\pi_{1}\morphtriangle{\lambda}{\mu}$. Likewise, simple connectivity of $\complexesyl{\epsilon}{\mu}$ means that $\pi_{1}\objcomponent{\tof{\mu}}$ maps trivially to $\pi_{1}\morphtriangle{\lambda}{\mu}$. The lemma follows, because any loop based at $\epsilon$ is path homotopic to a concatenation of the paths $\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$, and their inverses. \end{proof} Having identified a generator for the potentially nontrivial fundamental group of a triangle, we establish that the generator becomes null in the two-skeleton. (In fact, the fundamental group of $\morphtriangle{\lambda}{\mu}$ is nontrivial for $n>2$ if and only if $\epsilon$, $\lambda$, and $\mu$ all have different types, but we will not need this fact.) \begin{proposition} \label{prop: zero map} Suppose that $\epsilon\rightarrow\lambda\rightarrow\mu$ are composable morphisms in~$\Lcal_{n}$, where $\epsilon$ has type~$n\cdot\underline{1}$ and $\lambda$ has at least one subspace of dimension greater than one.
Then $\morphtriangle{\lambda}{\mu}\hookrightarrow\skel{2}{\Lcal_{n}}$ induces the zero map on fundamental groups. \end{proposition} \begin{proof} By Lemma~\ref{lemma: alpha generates fundamental group}, the fundamental group $\pi_1\morphtriangle{\lambda}{\mu}$ is generated by the loop $\alpha$ in~\eqref{eq: defn alpha}. But in $\skel{2}{\Lcal_{n}}$, a null-homotopy for $\alpha$ is provided by the $2$-simplex corresponding to the composable morphisms $\epsilon\rightarrow\lambda\rightarrow\mu$, thus proving the proposition. \end{proof} By breaking a loop up as a concatenation of loops in triangles, we obtain the desired simple connectivity result. \begin{proof}[Proof of Theorem~\ref{theorem: simply connected}] We can write $\skel{1}{\Lcal_{n}}$ as the finite union of the path-connected spaces $\morphtriangle{\lambda}{\mu}$. Therefore any loop $\alpha$ based at $\epsilon\in\objcomponent{n\cdot{\underline{1}}}$ can be written as a finite concatenation $\alpha_{1}\ast\dots\ast\alpha_{k}$, where each $\alpha_{i}$ is a path in a triangle $\morphtriangle{\lambda_{i}}{\mu_{i}}$. However, the intersection of neighboring triangles $\morphtriangle{\lambda_{i}}{\mu_{i}}\cap\morphtriangle{\lambda_{i+1}}{\mu_{i+1}}$ is path-connected and contains~$\objcomponent{n\cdot{\underline{1}}}$, so $\alpha_{i}(1)=\alpha_{i+1}(0)$ can be connected to the basepoint $\epsilon$ by a path $\beta_{i}$ within $\morphtriangle{\lambda_{i}}{\mu_{i}}\cap\morphtriangle{\lambda_{i+1}}{\mu_{i+1}}$. Then $\alpha$ is path homotopic to \[ \left(\alpha_{1}\ast\beta_{1}\right) \ast\left(\beta_{1}^{-1}\ast\alpha_{2}\ast\beta_{2}\right) \ast\dots \ast\left(\beta_{k-1}^{-1}\ast\alpha_{k}\right), \] which is a concatenation of loops at $\epsilon$, each completely contained within a triangle. But by Proposition~\ref{prop: zero map}, any loop in $\morphtriangle{\lambda}{\mu}$ maps to zero in $\pi_1 \skel{2}{\Lcal_{n}}$, which proves the theorem.
\end{proof} \section{A contractibility criterion for fixed points} \label{section: normal subgroup condition} Recall that we call a subgroup of $\Uof{n}$ \defining{polytypic} if its action on $\complexesn$ is not isotypic. This section establishes that subgroups $H\subseteq \Uof{n}$ with certain normal, polytypic subgroups have contractible fixed point sets on $\Lcal_{n}$. The main result is Proposition~\ref{proposition: polytypic gives contractible} below, which first appeared as Theorem~4.5 in \cite{Banff1}; its proof is given at the end of the section. Our goal in Section~\ref{section: find normal subgroup} will be to find normal subgroups $J$ satisfying the hypotheses of Proposition~\ref{proposition: polytypic gives contractible}. \begin{proposition} \label{proposition: polytypic gives contractible} Let $H$ be a subgroup of $\Uof{n}$, and suppose $H$ has a normal subgroup $J$ with the following properties: \begin{enumerate} \item $J$ is polytypic. \item For every $\lambda\in \weakfixed{H}$, the action of $J$ on $\class(\lambda)$ is not transitive. \end{enumerate} Then $\weakfixed{H}$ is contractible. \end{proposition} To prove Proposition~\ref{proposition: polytypic gives contractible}, we need two constructions that will give natural retractions between various subcategories of $\Lcal_{n}$. \begin{definition}\label{orbit_partition_def} Let $J \subseteq \Uof{n}$ be a subgroup, and let $\lambda$ be a weakly $J$-fixed decomposition of $\complexesn$. \begin{enumerate} \item The decomposition $\glom{\lambda}{J}$ is defined as follows: $w\subseteq\complexesn$ is a component of $\glom{\lambda}{J}$ if and only if $w=\Sigma_{j\in J} \,jv$ for some $v\in\class(\lambda)$. That is, a component of $\glom{\lambda}{J}$ is formed by taking the sum of all components that are in the same orbit of the action of $J$ on the set of components of~$\lambda$. 
\item If $\lambda$ is strongly $J$-fixed, let $\isorefine{\lambda}{J}$ be the refinement of $\lambda$ obtained by breaking each component of $\lambda$ into its canonical $J$-isotypical summands. \end{enumerate} \end{definition} A routine check establishes the following properties of the operations above. \begin{lemma} \begin{enumerate} \item The decomposition $\glom{\lambda}{J}$ is strongly $J$-fixed, natural in $\lambda$, and minimal among coarsenings of $\lambda$ that are strongly $J$-fixed. \item The decomposition $\isorefine{\lambda}{J}$ is natural in strongly $J$-fixed decompositions $\lambda$ and is maximal among refinements of $\lambda$ whose classes are isotypical representations of $J$. \end{enumerate} \end{lemma} The following two lemmas give criteria for $\glom{\lambda}{J}$ and $\isorefine{\lambda}{J}$ to be weakly fixed by a subgroup $H\subseteq\Uof{n}$ containing~$J$. Lemma~\ref{lemma: glom} is a straightforward check, and Lemma~\ref{lemma: isorefine} involves some basic representation theory. \begin{lemma} \label{lemma: glom} Let $J\triangleleft H$, and suppose that $\lambda$ is weakly fixed by $J$. If $h\in H$, then $\glom{(h\lambda)}{J}=h\left(\glom{\lambda}{J}\right)$. In particular, if $\lambda$ is weakly fixed by~$H$, then $\glom{\lambda}{J}$ is also weakly fixed by~$H$. \end{lemma} \begin{proof} We show that a component of $h\left(\glom{\lambda}{J}\right)$ is in fact a component of $\glom{(h\lambda)}{J}$. Let $v\in\class(\lambda)$, and consider the component $w\in\class\left(\glom{\lambda}{J}\right)$ given by $w=\Sigma_{j\in J}\,jv$. We can compute $hw$ as \begin{align*} hw &=h\left(\Sigma_{j\in J}\,jv\right) \\ &= \Sigma_{j\in J}\,\left(hjh^{-1}\right)(hv)\\ &= \Sigma_{j'\in J}\ j'(hv). \end{align*} Here $j'=hjh^{-1}$ runs over all elements of $J$ because $J\triangleleft H$, so conjugation by $h$ is an automorphism of~$J$. Therefore $hw$ is a component in $\glom{(h\lambda)}{J}$.
\end{proof} \begin{lemma} \label{lemma: isorefine} Let $J\triangleleft H$, and suppose that $\lambda$ is strongly fixed by $J$. If $h\in H$, then $\isorefine{(h\lambda)}{J}=h\left(\isorefine{\lambda}{J}\right)$. In particular, if $\lambda$ is weakly fixed by~$H$, then $\isorefine{\lambda}{J}$ is also weakly fixed by~$H$. \end{lemma} \begin{proof} Let $v\in\class(\lambda)$, and let $h\in H$. Because $\lambda$ is strongly $J$-fixed, we know that $v$ is stabilized by~$J$. The subspace $hv$ is a component of~$h\lambda$, and because $J\triangleleft H$, we can check that $hv$ is also stabilized by~$J$. Hence $v\in\class(\lambda)$ and $hv\in\class(h\lambda)$ are both $J$-representations. But as a representation of $J$, the subspace $hv$ is conjugate to the representation $v$ of $J$. Thus if $w\subseteq v$ is the isotypical summand of $v$ for an irreducible representation $\rho$ of $J$, then $hw$ is the isotypical summand of $hv$ for the conjugate of $\rho$ by $h$. We conclude that $h$ maps the canonical $J$-isotypical summands of $v$ to the canonical $J$-isotypical summands of $hv$, which are components of~$\isorefine{(h\lambda)}{J}$. Since this result is true for every $v\in\class(\lambda)$, we find that $\isorefine{(h\lambda)}{J}=h\left(\isorefine{\lambda}{J}\right)$. \end{proof} For $J\triangleleft H$, the constructions $\lambda\mapsto\glom{\lambda}{J}$ and $\lambda\mapsto\isorefine{\lambda}{J}$ will allow us to retract $\weakfixed{H}$ onto subcategories that in many cases have a terminal object and hence contractible nerve. As a first step, we need to verify continuity of the constructions. The proofs use an explicit identification of the path components of~$\weakfixed{J}$. We need the following lemma and its corollary, which we learned from S. Costenoble. \begin{lemma}{\cite[Lemma~1.1]{May-Tel-Aviv}} \label{lemma: May} Let $G$ be a compact Lie group with closed subgroups $J$ and $K$. 
Let $p:\alpha\rightarrow\beta$ be a $G$-homotopy between $G$-maps $G/J\longrightarrow G/K$. Then $p$ factors as the composite of $\alpha$ and a homotopy $c \colon G/J\times I\rightarrow G/J$ such that $c\left(eJ,t\right)=c_{t}J$, where $c_{0}=e$ and the values $c_{t}$ specify a path in the identity component of the centralizer $\complexesen_{G}(J)$ of $J$ in~$G$. \end{lemma} \begin{corollary} \label{cor: J fixed path components} Let $J\subseteq\Uof{n}$ be a closed subgroup, and let $\complexesentIdent{J}$ denote the identity component of the centralizer of $J$ in~$\Uof{n}$. \begin{enumerate} \item \label{item: path components} The path components of $\Obj\weakfixed{J}$ are $\complexesentIdent{J}$-orbits. \item \label{item: equivariant sufficient} Any $\complexesentIdent{J}$-equivariant map from $\Obj\weakfixed{J}$ to itself is continuous. \end{enumerate} \end{corollary} \begin{proof} The space $\Obj\left(\Lcal_{n}\right)$ is topologized as the disjoint union of $\Uof{n}$-orbits $\Uof{n}/\Uof{n}Isotropyof{\lambda}$, so we can apply Lemma~\ref{lemma: May} with $G=\Uof{n}$ to each path component of $\Obj\left(\Lcal_{n}\right)$ to obtain the first result. The second result then follows from the fact that $\Obj\weakfixed{J}$ is topologized as the disjoint union of $\complexesentIdent{J}$-orbits. \end{proof} As a consequence, we obtain continuity of $\lambda\mapsto\glom{\lambda}{J}$. \begin{lemma} \label{lemma: continuity of glom} If $J\triangleleft H$, then the function $\lambda\mapsto\glom{\lambda}{J}$ is continuous on~$\Obj\weakfixed{H}$. \end{lemma} \begin{proof} Since $\Obj\weakfixed{H}$ is a subspace of $\Obj\weakfixed{J}$, it suffices to show the lemma for $H=J$. We need only verify that the hypothesis of Corollary~\ref{cor: J fixed path components}~\eqref{item: equivariant sufficient} holds, that is, that the assignment $\lambda\mapsto\glom{\lambda}{J}$ is $\complexesentIdent{J}$-equivariant. 
However, Lemma~\ref{lemma: glom} tells us the stronger condition that $\lambda\mapsto\glom{\lambda}{J}$ is actually equivariant with respect to the normalizer of~$J$, hence necessarily with respect to $\complexesentIdent{J}$ as well. \end{proof} We handle isotypical refinement in a similar way. \begin{lemma} \label{lemma: continuity of isorefine} If $J\triangleleft H$, then the function $\lambda\mapsto\isorefine{\lambda}{J}$ is continuous on~$\Obj\strongfixed{H}$. \end{lemma} \begin{proof} Since $\Obj\strongfixed{H}$ is a subspace of $\Obj\strongfixed{J}$, it suffices to show the lemma for $H=J$. First we observe that $\Obj\strongfixed{J}$ and $\Obj\isofixed{J}$ are both stabilized by the action of~$\complexesentIdent{J}$, and hence both are unions of path components of~$\Obj\weakfixed{J}$. As in the previous lemma, the result follows from Corollary~\ref{cor: J fixed path components}~\eqref{item: equivariant sufficient}. \end{proof} With continuity established, we can present the key retraction result. \begin{proposition} \label{proposition: ContractJ} Let $H$ be a subgroup of $\Uof{n}$, and suppose $J\triangleleft H$. Then the inclusion functor \[ \iota_{1}\colon \isofixed{J}\ \cap\ \weakfixed{H} \longrightarrow \strongfixed{J}\ \cap\ \weakfixed{H} \] induces a homotopy equivalence on nerves. If $J$ further has the property that $\glom{\lambda}{J}$ is proper for every $\lambda\in \weakfixed{H}$, then the inclusion functor \[ \iota_{2}\colon\strongfixed{J}\ \cap\ \weakfixed{H}\longrightarrow \weakfixed{H} \] induces a homotopy equivalence on nerves. \end{proposition} \begin{proof} By Lemmas~\ref{lemma: isorefine} and~\ref{lemma: continuity of isorefine}, a continuous, functorial retraction of $\iota_{1}$ is given by $r_{1}\colon \lambda\mapsto\isorefine{\lambda}{J}$. The coarsening morphism $\isorefine{\lambda}{J}\rightarrow\lambda$ provides a natural transformation from $\iota_{1}r_{1}$ to the identity, establishing the first desired equivalence.
Similarly, by Lemmas~\ref{lemma: glom} and~\ref{lemma: continuity of glom}, a continuous, functorial retraction of $\iota_{2}$ is given by $r_{2}\colon\lambda\mapsto\glom{\lambda}{J}$, because by assumption $\glom{\lambda}{J}$ is always a proper decomposition of $\complexesn$. The coarsening morphism $\lambda\rightarrow\glom{\lambda}{J}$ provides a natural transformation from the identity to $\iota_{2}r_{2}$, which establishes the second desired equivalence. \end{proof} \begin{proof} [Proof of Proposition~\ref{proposition: polytypic gives contractible}] Let $\mu$ denote the decomposition of $\complexesn$ into the canonical isotypical components of $J$. If $J$ is polytypic, then $\mu$ has more than one component and thus is proper, and $\mu$ is terminal in~$\isofixed{J}$. We assert that $\mu$ is a terminal object in $\isofixed{J}\ \cap\ \weakfixed{H}$, and we need only establish that $\mu$ is weakly $H$-fixed. But $\mu$ is the $J$-isotypical refinement of the indiscrete decomposition of $\complexesn$, i.e., the decomposition consisting of just $\complexesn$ itself. Since the indiscrete decomposition is certainly $H$-fixed, Lemma~\ref{lemma: isorefine} tells us that $\mu$ is weakly $H$-fixed. The result now follows from Proposition~\ref{proposition: ContractJ}, because the assumption that the action of $J$ on $\class(\lambda)$ is always intransitive means that $\glom{\lambda}{J}$ is always proper. \end{proof} \section{Finding a normal subgroup} \label{section: find normal subgroup} Throughout this section, suppose that $H\subseteq \Uof{n}$ is a $p$\kern1.3pt- toral subgroup of~$\Uof{n}$, and let $\Hbar$ denote the image of $H$ in~$P\Uof{n}$. Note that although $P\Uof{n}$ does not act on $\complexesn$, it does act on $\Lcal_{n}$, because the central $S^{1}\subseteq\Uof{n}$ stabilizes any subspace of $\complexesn$. Hence, for example, if a decomposition $\lambda$ is weakly $H$-fixed, we can speak of the action of $\Hbar$ on~$\class(\lambda)$. 
Our goal is to prove Theorem~\ref{thm: H elementary abelian}, which says that if $H$ is problematic, then $\Hbar$ is elementary abelian and $\character{H}$ is zero except on elements that are in the central $S^{1}$ of~$\Uof{n}$. The plan for the proof is to apply Proposition~\ref{proposition: polytypic gives contractible} after locating a relevant normal polytypic subgroup of~$H$. The following lemma gives us a starting point, and the proof of Theorem~\ref{thm: H elementary abelian} appears at the end of the section. \begin{lemma} \label{lemma: abelian means polytypic} If $J\subseteq \Uof{n}$ is abelian, then either $J$ is polytypic or $J\subseteq S^{1}$. \end{lemma} \begin{proof} Decompose $\complexesn$ into a sum of $J$-irreducible representations, all of which are necessarily one-dimensional because $J$ is abelian. If $J$ is isotypic, then an element $j\in J$ acts on every one-dimensional summand by multiplication by the same scalar, so $j\in S^{1}$. \end{proof} Lemma~\ref{lemma: abelian means polytypic} places an immediate restriction on problematic subgroups. \begin{lemma} \label{lemma: only rank one torus} If $H$ is a problematic $p$\kern1.3pt- toral subgroup of $\Uof{n}$, then $\Hbar$ is discrete. \end{lemma} \begin{proof} We assert that, without loss of generality, we can assume that $H$ actually contains $S^{1}$. Because $\weakfixed{H}=\weakfixed{HS^{1}}$, if $H$ is problematic, then so is~$HS^{1}$. Likewise, $H$ is $p$-toral if and only if $HS^{1}$ is $p$\kern1.3pt- toral. Further, $H$ and $HS^{1}$ have the same image in~$P\Uof{n}$. Hence we can assume that $S^{1}\subseteq H$, by replacing $H$ by $HS^{1}$ if necessary. The group $\Hbar$ is $p$\kern1.3pt- toral (e.g., \cite{Banff1}, Lemma~3.3), so its identity component, denoted by $\Hbar_{0}$, is a torus. Let $J$ denote the inverse image of $\Hbar_{0}$ in $H$; thus $J\triangleleft H$. Since we have a fibration $S^{1}\rightarrow J\rightarrow\Hbar_{0}$, we know that $J$ is connected. 
Further, $J$ is a torus, because it is a connected closed subgroup of the identity component of $H$, which is a torus. If $\Hbar$ is not discrete, then $J$ is not contained in $S^{1}$, and thus $J$ is polytypic by Lemma~\ref{lemma: abelian means polytypic}. Since $J$ is connected, its action on the set of equivalence classes of any proper decomposition is trivial. The lemma follows from Proposition~\ref{proposition: polytypic gives contractible}. \end{proof} In terms of progress towards Theorem~\ref{thm: H elementary abelian}, we now know that if $H$ is a problematic $p$\kern1.3pt- toral subgroup of~$\Uof{n}$, then $\Hbar$ must be a finite $p$\kern1.3pt- group. The next part of our strategy is to show that if $\Hbar$ is not an elementary abelian $p$\kern1.3pt- group, then $\Hbar$ has a normal subgroup $V$ satisfying the conditions of the following lemma. \begin{lemma} \label{lemma: detection mechanism} Let $H\subseteq \Uof{n}$, and assume there exists $V\triangleleft\Hbar$ such that $V\cong{\mathbb{Z}}/p$ and $V$ does not act transitively on $\class(\lambda)$ for any $\lambda\in\weakfixed{H}$. Then $\weakfixed{H}$ is contractible. \end{lemma} \begin{proof} Let $J$ be the inverse image of $V$ in $H$. Then $J\triangleleft H$, and because the action of $J$ on $\Lcal_{n}$ factors through $V$, the action of $J$ on $\class(\lambda)$ is not transitive for any $\lambda\in\weakfixed{H}$. Further, $J$ is abelian, because a routine splitting argument shows $J\cong V\times S^{1}$ (see~\cite{Banff1}, Lemma~3.1). Therefore $J$ is polytypic by Lemma~\ref{lemma: abelian means polytypic}, and the lemma follows from Proposition~\ref{proposition: polytypic gives contractible}. \end{proof} Before we prove our first theorem, we recall that if $G$ is a finite $p$\kern1.3pt- group, then its Frattini subgroup, $\Phi(G)$, is generated by the commutators $[G,G]$ and the $p$\kern1.3pt- fold powers $G^p$. 
It has the property that $G\rightarrow G/\Phi(G)$ is initial among maps from $G$ to elementary abelian $p$\kern1.3pt- groups. We now have all the ingredients we require to prove Theorem~\ref{thm: H elementary abelian}. We note that both the statement and the proof are closely related to those of Proposition~6.1 of~\cite{ADL2}. \begin{theorem} \label{thm: H elementary abelian} \elementaryabeliantheoremtext \end{theorem} \begin{proof} For~\eqref{item: elem abelian}, we must show that $\Hbar$ is an elementary abelian $p$\kern1.3pt- group. By Lemma~\ref{lemma: only rank one torus}, we know that $\Hbar$ is a finite $p$\kern1.3pt- group. If $\Hbar$ is not elementary abelian, then the kernel of $\Hbar\rightarrow\Hbar/\Phi\!\left(\Hbar\right)$ is a nontrivial normal $p$\kern1.3pt- subgroup of $\Hbar$, and thus has nontrivial intersection with the center of $\Hbar$. Choose $V\cong{\mathbb{Z}}/p$ with \[ V\subseteq \ker\left[\,\Hbar\rightarrow\,\Hbar/\Phi\!\left(\Hbar\right)\right] \cap Z\left(\Hbar\right). \] We assert that $V$ satisfies the conditions of Lemma~\ref{lemma: detection mechanism}. Certainly $V\triangleleft\Hbar$, because $V$ is contained in the center of~$\Hbar$. Suppose that there exists $\lambda\in\weakfixed{H}$ such that $V$ acts transitively on $\class(\lambda)$. The set $\class(\lambda)$ must then have exactly $p$ elements. The action of $\Hbar$ on $\class(\lambda)$ induces a map $\Hbar\rightarrow\Sigma_{p}$, and since $\Hbar$ is a $p$\kern1.3pt- group, the image of this map must lie in a Sylow $p$\kern1.3pt- subgroup of~$\Sigma_{p}$. But then the map $\Hbar\rightarrow\Sigma_{p}$ factors as \[ \Hbar\rightarrow\Hbar/\Phi\!\left(\Hbar\right)\rightarrow{\mathbb{Z}}/p\hookrightarrow\Sigma_{p}, \] with $V\subseteq\Hbar$ mapping nontrivially to ${\mathbb{Z}}/p\subseteq\Sigma_{p}$. We have thus contradicted the assumption that $V\subseteq\ker\left[\Hbar\rightarrow\Hbar/\Phi\!\left(\Hbar\right)\right]$. 
Thus $V$ satisfies the conditions of Lemma~\ref{lemma: detection mechanism}, so $\weakfixed{H}$ is contractible, contrary to the assumption that $H$ is problematic. We conclude that $\Hbar$ is, in fact, an elementary abelian $p$\kern1.3pt- group, and so \eqref{item: elem abelian} is proved. For~\eqref{item: character}, first note that if $h\in S^{1}$, then its matrix representation in $\Uof{n}$ is $hI$, hence $\character{H}(h)=\tr(hI)=nh$. Suppose that $\Hbar$ is an elementary abelian $p$\kern1.3pt- group, and consider an arbitrary element $h\in H$ such that $h\notin S^{1}$. The image of $h$ in $\Hbar$ generates a subgroup $V\cong{\mathbb{Z}}/p\subseteq\Hbar$, and $V$ is a candidate for applying Lemma~\ref{lemma: detection mechanism}. Since $\weakfixed{H}$ is not contractible, there must be a decomposition $\lambda\in\weakfixed{H}$ such that $V$ acts transitively on~$\class(\lambda)$. Because $V\cong{\mathbb{Z}}/p$ acts transitively, the set $\class(\lambda)$ has exactly $p$ elements and $V$ acts freely on them. Hence $h$ permutes the subspaces in $\class(\lambda)$ without fixing any of them, so in a basis of $\complexesn$ assembled from bases for the subspaces in $\class(\lambda)$, the matrix of $h$ has only zero entries on the diagonal. Hence $\character{H}(h)=0$. \end{proof} \section{The subgroups $\Gamma_{k}$ of $U\kern-2.5pt\left(p^{k}\right)$} \label{section: Gamma_k} Theorem~\ref{thm: H elementary abelian} tells us that if $H$ is a problematic $p$\kern1.3pt- toral subgroup of $\Uof{n}$, then $H$ is a projective elementary abelian $p$\kern1.3pt- group and the character of $H$ is zero away from the center of $\Uof{n}$. In fact, there are well-known subgroups of $\Uof{n}$ that satisfy these conditions, namely the subgroups $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ that arise in, for example, \cite{Griess}, \cite{JMO}, \cite{Oliver-p-stubborn}, \cite{Arone-Topology}, \cite{Arone-Lesh-Crelle}, \cite{AGMV}, and others. In this section, we review background on the groups~$\Gamma_{k}$. We begin with the discrete analogue of~$\Gamma_{k}$. Let $n=p^{k}$ and choose an identification of the elements of $({\mathbb{Z}}/p)^{k}$ with the set $\{1,...,n\}$.
The action of $({\mathbb{Z}}/p)^{k}$ on itself by translation identifies $({\mathbb{Z}}/p)^{k}$ as a transitive elementary abelian $p$\kern1.3pt- subgroup of $\Sigma_{p^{k}}$, denoted by~$\Delta_{k}$. Up to conjugacy, $\Delta_{k}$ is the unique transitive elementary abelian $p$\kern1.3pt- subgroup of $\Sigma_{p^{k}}$. Note that every nonidentity element of $\Delta_{k}$ acts without fixed points. The embedding \[ \Delta_{k}\hookrightarrow\Sigma_{p^{k}}\hookrightarrow U\kern-2.5pt\left(p^{k}\right) \] given by permuting the standard basis elements is the regular representation of $\Delta_{k}$, and has character $\character{\Delta_{k}}=0$ except at the identity. In the unitary context, the projective elementary abelian $p$\kern1.3pt- subgroup $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ is generated by the central $S^{1}\subseteq U\kern-2.5pt\left(p^{k}\right)$ and two different embeddings of $\Delta_{k}$ in $U\kern-2.5pt\left(p^{k}\right)$, which we denote by $\Acal_{k}$ and $\Bcal_{k}$ and describe momentarily. Just as $\Delta_{k}$ is the unique (up to conjugacy) elementary abelian $p$\kern1.3pt- subgroup of $\Sigma_{p^k}$ with transitive action, it turns out that $\Gamma_{k}$ is the unique (up to conjugacy) projective elementary abelian $p$\kern1.3pt- subgroup of $U\kern-2.5pt\left(p^{k}\right)$ containing the central $S^{1}$ and having irreducible action (see, for example, \cite{Zolotykh}). For the explicit description of $\Gamma_{k}$, we follow~\cite{Oliver-p-stubborn}. The subgroup $\Bcal_{k}\cong\Delta_{k}$ of $U\kern-2.5pt\left(p^{k}\right)$ is given as follows. Consider ${\mathbb{Z}}/p\subseteq U(p)$ acting by the regular representation, and let $\Bcal_{k}$ be the group $({\mathbb{Z}}/p)^{k}$ acting on the $k$-fold tensor power $\left({\mathbb{C}}^{p}\right)^{\otimes k}$. (This action is, in fact, the regular representation of $\Delta_{k}\cong ({\mathbb{Z}}/p)^{k}$.)
Explicitly, for any $r = 0,1, \ldots, k-1$, let $\sigma_r \in \Sigma_{p^k}$ denote the permutation defined by \[ \sigma_r (i) = \begin{cases} i + p^r &\mbox{if } i \equiv 1, \ldots, (p-1)p^r \pmod{p^{r+1}}, \\ i - (p-1)p^r & \mbox{if } i \equiv (p-1)p^r+1, \ldots, p^{r+1} \pmod{p^{r+1}}. \end{cases} \] For each $r$, let $B_r \in U\kern-2.5pt\left(p^{k}\right)$ be the corresponding permutation matrix, \[ \left(B_r\right)_{ij} = \begin{cases} 1 &\mbox{if } \sigma_r(i) = j\\ 0 &\mbox{if } \sigma_r(i) \neq j. \end{cases} \] For later purposes, we record the following lemma. \begin{lemma} \label{lemma: character B_k zero} The character $\character{\Bcal_{k}}$ is zero except at the identity. \end{lemma} \begin{proof} Every nonidentity element of $\Bcal_{k}$ acts by a fixed-point free permutation of the standard basis of $\complexespk$, and so has only zeroes on the diagonal. \end{proof} Our goal is to define $\Gamma_{k}$ with irreducible action on $\complexespk$, but $\Bcal_{k}$ alone does not act irreducibly: being abelian, the subgroup $\Bcal_{k}$ has only one-dimensional irreducibles. In fact, since $\Bcal_{k}$ is acting on $\complexespk$ by the regular representation, each of its $p^{k}$ irreducible representations is present exactly once. The role of the other subgroup $\Acal_{k}\cong\Delta_{k}\subseteq\Gamma_{k}$ is to permute the irreducible representations of $\Bcal_{k}$. To be specific, let $\zeta = e^{2 \pi i/p}$, and consider ${\mathbb{Z}}/p\subseteq U\kern-.2pt(p)$ generated by the diagonal matrix with entries $1,\zeta,\zeta^2,...,\zeta^{p-1}$. Then $\Acal_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ is the group $\left({\mathbb{Z}}/p\right)^{k}$ acting on the $k$-fold tensor power $\left({\mathbb{C}}^{p}\right)^{\otimes k}$.
Explicitly, for $r = 0, \ldots, k-1$ define $A_{r}\in U\kern-2.5pt\left(p^{k}\right)$ by \[ (A_r)_{ij} = \begin{cases} \zeta^{\left[ (i-1)/p^r\right]} & \mbox{if } i=j \\ 0 &\mbox{if } i\neq j \end{cases} \] where $\left[\,\text{--}\,\right]$ denotes the greatest integer function. The matrices $A_{0},\dots,A_{k-1}$ commute, are of order~$p$, and generate a rank~$k$ elementary abelian $p$\kern1.3pt- group~$\Acal_{k}$. \begin{lemma} \label{lemma: character A_k zero} The character $\character{\Acal_{k}}$ is zero except at the identity. \end{lemma} \begin{proof} The character of ${\mathbb{Z}}/p\subseteq U\kern-.2pt(p)$ generated by the diagonal matrix with entries $1,\zeta,\zeta^2,...,\zeta^{p-1}$ is zero away from the identity by direct computation, because \[ 1+\zeta+\zeta^2+...+\zeta^{p-1}=\frac{\zeta^{p}-1}{\zeta-1}=0. \] The same is true for $\Acal_{k}$, because the character is obtained by multiplying together the characters of the individual factors. \end{proof} Since the characters $\character{\Acal_{k}}$ and $\character{\Bcal_{k}}$ are the same, the corresponding representations of $\Delta_{k}$ are equivalent, and we obtain the following corollary. \begin{corollary} The subgroups $\Acal_{k}$ and $\Bcal_{k}$ are conjugate in $U\kern-2.5pt\left(p^{k}\right)$. \end{corollary} Although $\Acal_{k}$ and $\Bcal_{k}$ do not quite commute with each other in~$U\kern-2.5pt\left(p^{k}\right)$, the commutator relations are simple and follow from examining the actions of $\Bcal_{k}\cong({\mathbb{Z}}/p)^{k}$ and $\Acal_{k}\cong({\mathbb{Z}}/p)^{k}$ on $\left({\mathbb{C}}^{p}\right)^{\otimes k}$. If $r\neq s$, then $A_r$ and $B_s$ are acting on different tensor factors in $\left({\mathbb{C}}^{p}\right)^{\otimes k}$, and hence they commute. The commutator of $A_{r}$ and $B_{r}$, which both act on the $r$th tensor factor ${\mathbb{C}}^{p}$, can be computed by an explicit computation in $U(p)$.
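That explicit computation, together with the character computations above, can also be checked numerically. The following sketch (our own illustration, not part of the argument; helper names such as \texttt{on\_factor} are ours) builds the matrices $A_r$ and $B_r$ for $p=3$, $k=2$ as Kronecker products and verifies the commutator and trace identities.

```python
import itertools
import numpy as np

# Sanity check for p = 3, k = 2 (our own sketch, not from the paper).
p, k = 3, 2
zeta = np.exp(2j * np.pi / p)

# Single-factor generators in U(p): the diagonal matrix
# diag(1, zeta, ..., zeta^{p-1}) and the cyclic-shift permutation matrix.
D = np.diag(zeta ** np.arange(p))
P = np.zeros((p, p), dtype=complex)
P[np.arange(p), (np.arange(p) + 1) % p] = 1

def on_factor(M, r):
    """M acting on the r-th tensor factor of (C^p)^{tensor k}."""
    out = np.eye(1, dtype=complex)
    for s in range(k):
        out = np.kron(M if s == r else np.eye(p), out)
    return out

A = [on_factor(D, r) for r in range(k)]  # diagonal generators A_r
B = [on_factor(P, r) for r in range(k)]  # permutation generators B_r

def comm(X, Y):
    return X @ Y @ np.linalg.inv(X) @ np.linalg.inv(Y)

I = np.eye(p ** k)
for r in range(k):
    for s in range(k):
        assert np.allclose(comm(A[r], A[s]), I)
        assert np.allclose(comm(B[r], B[s]), I)
        if r != s:
            assert np.allclose(comm(A[r], B[s]), I)
    # The only nontrivial commutator: [B_r, A_r] = zeta * (identity).
    assert np.allclose(comm(B[r], A[r]), zeta * I)

# Characters: every product A^x B^y with (x, y) != (0, 0) has zero trace.
for x in itertools.product(range(p), repeat=k):
    for y in itertools.product(range(p), repeat=k):
        if any(x) or any(y):
            M = I.astype(complex)
            for r in range(k):
                M = M @ np.linalg.matrix_power(A[r], x[r])
                M = M @ np.linalg.matrix_power(B[r], y[r])
            assert abs(np.trace(M)) < 1e-9
```

The trace loop pre-verifies, in this small case, that the character of the group generated by $\Acal_{k}$ and $\Bcal_{k}$ vanishes away from the scalars.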
As a result, we obtain the following relations: \begin{align} \label{eq: Gamma_k_commutator_equations} [A_r,A_s] &= I = [B_r,B_s], & \text{for all } r,s, \nonumber\\ [A_r,B_s] &= I, & \text{for all } r\neq s,\\ [B_r,A_r] &= \zeta I, & \text{for all } r. \nonumber \end{align} \begin{definition} The subgroup $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ is generated by the subgroups $\Acal_{k}$, $\Bcal_{k}$, and the central $S^{1}\subseteq U\kern-2.5pt\left(p^{k}\right)$. \end{definition} \begin{lemma} \label{lemma: Gamma_k extension} There is a short exact sequence \begin{equation} 1\rightarrow S^{1}\rightarrow\Gamma_{k} \rightarrow \left(\Delta_{k}\times\Delta_{k}\right) \rightarrow 1, \end{equation} where $S^{1}$ is the center of $U\kern-2.5pt\left(p^{k}\right)$. \end{lemma} \begin{proof} The commutator relations \eqref{eq: Gamma_k_commutator_equations} show that every commutator of the generators of $\Gamma_{k}$ lies in the central $S^{1}$, so the quotient $\Gamma_{k}/S^{1}$ is abelian. The subgroups $\Acal_{k}$ and $\Bcal_{k}$ map isomorphically onto two complementary copies of $\Delta_{k}$ in the quotient, which is therefore $\Delta_{k}\times\Delta_{k}$. \end{proof} \begin{remark*} When $k=0$ we have $\Gamma_{0}=S^{1}\subseteq\Uof{1}$, and $\Delta_{0}$ is trivial, so Lemma~\ref{lemma: Gamma_k extension} is true even for $k=0$. \end{remark*} For later purposes, we record the following lemma. \begin{lemma} \label{lemma: Gamma_k subgroups} The subgroup $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ contains subgroups isomorphic to $\Gamma_{s}\times\Delta_{t}$ for all nonnegative integers $s$ and $t$ such that $s+t\leq k$. \end{lemma} \begin{proof} The required subgroup is generated by $S^{1}$, the matrices $A_{0}$, \dots, $A_{s+t-1}$, and the matrices $B_{0}$, \dots,~$B_{s-1}$. \end{proof} A consequence of Lemma~\ref{lemma: Gamma_k extension} is that $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ is an example of a $p$\kern1.3pt- toral subgroup that satisfies the first conclusion of Theorem~\ref{thm: H elementary abelian}.
The next lemma says that $\Gamma_{k}$ satisfies the second conclusion as well. \begin{lemma} \label{lemma: Gamma zero character} The character of $\Gamma_{k}$ is nonzero only on the elements of~$S^{1}$. \end{lemma} \begin{proof} The character of $\Gamma_{k}$ on nonidentity elements of $\Acal_{k}$ and $\Bcal_{k}$ is zero by Lemmas~\ref{lemma: character B_k zero} and~\ref{lemma: character A_k zero}. Multiplying any of these matrices by an element of $S^{1}$, i.e., a scalar, gives a matrix that also has zero trace. Finally, products of nonidentity elements of $\Bcal_{k}$ with elements of $\Acal_{k}S^{1}$ are obtained by multiplying nonidentity matrices in $\Bcal_{k}$, which have only zero entries on the diagonal, by diagonal matrices. The resulting products likewise have no nonzero diagonal entries and thus have zero trace. \end{proof} \begin{remark*} We note that, by inspection of the commutator relations, $S^{1}\times\Acal_{k}$ and $S^{1}\times\Bcal_{k}$ normalize each other in $\Gamma_{k}$. Suppose we decompose $\complexespk$ by the $p^{k}$ one-dimensional irreducible representations of $\Acal_{k}$. That decomposition is weakly fixed by $\Bcal_{k}$, and further, $\Bcal_{k}$ acts transitively on the classes in the decomposition because $\Gamma_{k}$ acts irreducibly. Likewise, the decomposition of $\complexespk$ by the $p^{k}$ one-dimensional irreducible representations of $\Bcal_{k}$ is weakly fixed by $\Acal_{k}$, which has transitive action on the classes. \end{remark*} \section{Alternating forms} \label{section: alternating forms} From Theorem~\ref{thm: H elementary abelian}, we know that if $H$ is a problematic $p$\kern1.3pt- toral subgroup of $\Uof{n}$, then its image $\Hbar$ in $P\Uof{n}$ is an elementary abelian $p$\kern1.3pt- group. We would like to know the possible group isomorphism types of such subgroups of~$\Uof{n}$. For simplicity, we restrict ourselves to subgroups $H$ that contain the central~$S^{1}$ of~$\Uof{n}$. (See the proof of Lemma~\ref{lemma: only rank one torus}.)
Our main results are Propositions~\ref{proposition: forms=groups} and~\ref{proposition: group classification}, below. Once the group-theoretic classification is complete, we use representation theory in Section~\ref{section: problematic subgroups} to pin down the conjugacy classes of elementary abelian $p$\kern1.3pt- subgroups of~$\Uof{n}$ that can be problematic. Before proceeding, we note that the remarkable paper \cite{AGMV} of Andersen-Grodal-M{\o}ller-Viruel classifies non-toral elementary abelian $p$\kern1.3pt- subgroups (for odd primes $p$) of simple, center-free Lie groups. In particular, Theorem~8.5 of that work contains a classification of all the elementary abelian $p$\kern1.3pt- subgroups of $P\Uof{n}$, building on earlier work of Griess~\cite{Griess}. Our approach is independent of this classification, works for all primes, and uses only elementary methods. We make the following definition. \begin{definition} \label{defn: abstract proj elem} A $p$\kern1.3pt- toral group $H$ is an \definedas{abstract projective elementary abelian $p$\kern1.3pt- group} if $H$ can be written as a central extension \[ 1\rightarrow S^{1}\rightarrow H\rightarrow V\rightarrow 1, \] where $V$ is an elementary abelian $p$\kern1.3pt- group. \end{definition} We begin by recalling some background on forms. Let $A$ be a finite-dimensional ${\mathbb{F}}_p$-vector space, and let $\alpha\colon A\times A \to {\mathbb{F}}_p$ be a bilinear form. We say that $\alpha$ is \definedas{totally isotropic} if $\alpha(a,a)=0$ for all $a\in A$. (A totally isotropic form is necessarily skew-symmetric, as seen by expanding $\alpha(a+b, a+b)=0$, but the converse fails for $p=2$.) If $\alpha$ is not only totally isotropic, but also non-degenerate, then it is called a \definedas{symplectic form}.
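To make the parenthetical concrete, here is the standard minimal example (supplied by us for illustration) showing that skew-symmetry does not imply total isotropy when $p=2$:

```latex
% Over F_2 the one-dimensional bilinear form
\[
  \alpha\colon {\mathbb{F}}_{2}\times{\mathbb{F}}_{2}\to{\mathbb{F}}_{2},
  \qquad \alpha(a,b)=ab
\]
% is symmetric, hence skew-symmetric (since -1 = 1 in F_2),
% yet alpha(1,1) = 1, so alpha is not totally isotropic.
```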
Any vector space with a symplectic form is even-dimensional and has a (nonunique) basis $e_1,\dots,e_s,f_1,\dots, f_s$, called a \definedas{symplectic basis}, with the property that $\alpha(e_i,e_j)=0=\alpha(f_i,f_j)$ for all $i,j$, and $\alpha(e_i,f_j)$ is $1$ if $i=j$ and zero otherwise. All symplectic vector spaces of the same dimension are isomorphic (i.e., there exists a linear isomorphism that preserves the form), and if the vector space has dimension $2s$ we use $\mathbb{H}_s$ to denote the associated isomorphism class of symplectic vector spaces. Let $\mathbb{T}_t$ denote the vector space of dimension $t$ over ${\mathbb{F}}_{p}$ with trivial form. We have the following standard classification result. \begin{lemma} \label{lemma: classification of forms} Let $A$ be a vector space over ${\mathbb{F}}_{p}$ with a totally isotropic bilinear form $\alpha$. Then there exist $s$ and $t$ such that $A\cong\mathbb{H}_{s}\oplus\mathbb{T}_{t}$ by a form-preserving isomorphism. \end{lemma} Our next task is to relate the preceding discussion to abstract projective elementary abelian $p$\kern1.3pt- groups. For the remainder of the section, assume that $H$ is an abstract projective elementary abelian $p$\kern1.3pt- group \begin{equation*} 1\rightarrow S^{1}\rightarrow H\rightarrow V\rightarrow 1. \end{equation*} Choose an identification of ${\mathbb{Z}}/p$ with the elements of order $p$ in $S^{1}$. Given $x,y\in V$, let $\xwiggle, \ywiggle$ be lifts of $x,y$ to $H$. Define the \definedas{commutator form associated to $H$} as the form on $V$ defined by \begin{equation} \label{eq: commutator form} \alpha(x,y)= \left[\xwiggle,\ywiggle\right]=\xwiggle\ywiggle\xwiggle^{-1}\ywiggle^{-1}. \end{equation} \begin{lemma} \label{lemma: BilinearForm1} Let $H$ and $\alpha$ be as above, and suppose that $x,y\in V$. 
Then \begin{enumerate} \item $\alpha(x,y)$ is a well-defined element of ${\mathbb{Z}}/p\subseteq S^{1}$, \item $\alpha$ is a totally isotropic bilinear form on $V$, and \item isomorphic groups $H$ and $H'$ give isomorphic forms $\alpha$ and $\alpha'$. \end{enumerate} \end{lemma} \begin{proof} Certainly $\left[\xwiggle,\ywiggle\right]\in S^{1}$, since $x$ and $y$ commute in $V$. If $\zeta\in S^{1}$ then $[\zeta\xwiggle,\ywiggle]=[\xwiggle,\ywiggle]=[\xwiggle,\zeta\ywiggle]$ because $S^{1}$ is central in $H$, which shows that $\alpha$ is independent of the choice of lifts $\xwiggle$ and $\ywiggle$, and that $\alpha$ is linear with respect to scalar multiplication. To show that $\left[\xwiggle,\ywiggle\right]$ has order dividing~$p$, we note that commutators in a group satisfy the following versions of the Hall-Witt identities, as can be verified by expanding and simplifying: \begin{align*} [a,bc] &=[a,b]\cdot \big[b,[a,c]\big]\cdot [a,c]\\ [ab,c] &= \big[a,[b,c] \big]\cdot [b,c]\cdot [a,c]. \end{align*} We know that $[H,H]$ is contained in the center of $H$, so $[H,[H,H]]$ is the trivial group. Hence for~$H$, the identities reduce to \begin{equation} \label{eq: Witt Hall} \begin{aligned}{} [a,bc] &= [a,b][a,c] \\ [ab,c] &= [a,c][b,c]. \end{aligned} \end{equation} In particular, $\left[\xwiggle,\ywiggle\right]^{p} =\left[\xwiggle,\ywiggle^{p}\right]= e$, since $\ywiggle^{p}\in S^{1}$ commutes with $\xwiggle$. Bilinearity of $\alpha$ with respect to addition follows directly from~\eqref{eq: Witt Hall}. The form is totally isotropic because an element commutes with itself. Finally, an isomorphism $H\rightarrow H'$ necessarily restricts to an isomorphism on the identity component and induces a diagram \[ \begin{CD} 1@>>> S^{1}@>>> H@>>> V@>>> 1\\ @. @V{\cong}VV @V{\cong}VV @VVV\\ 1@>>> S^{1}@>>> H'@>>> V'@>>> 1, \end{CD} \] which, in turn, induces an isomorphism of the associated forms $\alpha$ and~$\alpha'$ on $V$ and~$V'$, respectively.
\end{proof} Lemma~\ref{lemma: BilinearForm1} shows that~\eqref{eq: commutator form} gives a function from isomorphism classes of projective elementary abelian $p$\kern1.3pt- groups to isomorphism classes of totally isotropic forms over~${\mathbb{Z}}/p$. Conversely, we can start with a form $\alpha$ and directly construct an abstract projective elementary abelian $p$\kern1.3pt- group~$H_{\alpha}$. Let $\alpha\colon V\times V\rightarrow{\mathbb{Z}}/{p}$ be a totally isotropic bilinear form, which can be regarded as a function $\alpha\colon V\times V\rightarrow S^{1}$. Let \begin{align*} V_{K} & =\{v\in V \mid \alpha(v,x)=0 \ \text{ for all } x\in V\}\\ & = \ker\left( V\rightarrow V^{\ast} \right), \end{align*} where $V^\ast$ denotes the dual of $V$. Then $\alpha$ restricted to $V_{K}$ is trivial. Now we make some choices, and we address the issue of the choices a little later in the section. Choose a complement $V_{c}$ to $V_{K}$ in $V$; note $V_{c}$ is necessarily orthogonal to~$V_{K}$. Since $\alpha$ must be symplectic on $V_{c}$, we can choose a symplectic basis $e_{1},\dots,e_{r},f_{1},\dots,f_{r}$ for $V_{c}$; let $V_{E}$ and $V_{F}$ denote the spans of $E=\{e_{1},\dots,e_{r}\}$ and $F=\{f_{1},\dots,f_{r}\}$, respectively. By construction, we can write any $v\in V$ uniquely as a sum $v=\vvec{K}+\vvec{E}+\vvec{F}$ where $\vvec{K}\in V_{K}$, $\vvec{E}\in V_{E}$, and $\vvec{F}\in V_{F}$. Let $H_{\alpha}$ be the set $S^{1}\times V$. Given the previous choices, we can endow $H_{\alpha}$ with the following operation~$*_{\alpha(E,F)}$: \begin{equation} \label{eq: group operation} (z,v)*_{\alpha(E,F)}(z',v') =\left(\strut zz'\,\alpha(\vvec{F},\vprimevec{E}), \ v+v'\right). \end{equation} \begin{proposition}\label{prop: form gives extension} For an elementary abelian $p$\kern1.3pt- group $V$ and a totally isotropic bilinear form $\alpha \colon V\times V\to {\mathbb{Z}}/p$ as above, we have the following.
\begin{enumerate} \item The operation~\eqref{eq: group operation} gives $H_{\alpha}$ the structure of an abstract projective elementary abelian $p$\kern1.3pt- group with associated commutator pairing~$\alpha$. \item The group isomorphism class of $H_{\alpha}$ depends only on the isomorphism class of $\alpha$ as a bilinear form. \end{enumerate} Further, non-isomorphic forms $\alpha$ and $\alpha'$ on $V$ give nonisomorphic groups $H_{\alpha}$ and $H_{\alpha'}$. \end{proposition} \begin{proof} The element $(1,0)$ serves as the identity in $H_\alpha$. By bilinearity of~$\alpha$, we know $\alpha\left(\vvec{F},-\vvec{E}\right)+\alpha\left(\vvec{F},\vvec{E}\right)=0$, which allows us to check that the inverse of $(z,v)$ is $\left(z^{-1}\alpha\left(\vvec{F},\vvec{E}\right),-v\right)$. A straightforward computation verifies associativity, showing that $*_{\alpha(E,F)}$ defines a group law, and another shows that $H_{\alpha}$ has $\alpha$ for its commutator pairing. We need to check the effect of the choices we made when we defined the operation~$*_{\alpha(E,F)}$ on~$H_{\alpha}$. The subspace $V_{K}\subseteq V$ is well-defined, but $V_{E}$ and $V_{F}$ are not, and they are used in the definition of~$*_{\alpha(E,F)}$. Suppose that $E,F$ and $E',F'$ are two choices for a symplectic basis spanning (not necessarily identical) complements of $V_{K}$ in~$V$. There is an automorphism of $\alpha$ that takes $E$ to $E'$ and $F$ to $F'$, and then this automorphism defines an isomorphism $\left(H_{\alpha},*_{\alpha(E,F)}\right) \cong \left(H_{\alpha},*_{\alpha(E',F')}\right)$. Hence the group isomorphism class of $H_{\alpha}$ is well-defined, independent of the choices made to define the group operation.
Similarly, if $V\xrightarrow{\cong}V'$ induces an isomorphism of forms $\alpha, \alpha'$, then compatible choices can be made for the symplectic bases $E,F\subseteq V$ and $E', F'\subseteq V'$, and these choices will induce an isomorphism of $\left(H_{\alpha}, *_{\alpha(E,F)}\right)$ with $\left(H_{\alpha'}, *_{\alpha'(E',F')}\right)$. Lastly, if $\alpha$ and $\alpha'$ are not isomorphic, then by Lemma~\ref{lemma: classification of forms} their trivial components must be of different dimensions. It follows that the centers of $H_{\alpha}$ and $H_{\alpha'}$ are not isomorphic (for example, they have a different number of connected components), and $H_{\alpha}$ and $H_{\alpha'}$ are therefore not isomorphic as groups. \end{proof} Proposition~\ref{prop: form gives extension} tells us that the construction $\alpha\mapsto H_{\alpha}$ defines an injection from the isomorphism classes of totally isotropic bilinear forms over ${\mathbb{Z}}/p$ to the isomorphism classes of abstract projective elementary abelian $p$\kern1.3pt- groups. It remains to show that this function is a surjection, which we do by showing that every abstract projective elementary abelian $p$\kern1.3pt- group is isomorphic to $H_{\alpha}$ for some form~$\alpha$. For later purposes, we pay special attention to the identity component. \begin{proposition} \label{proposition: one-to-one} Let $H$ be an abstract projective elementary abelian $p$\kern1.3pt- group with associated commutator form $\alpha\colon V\times V\rightarrow{\mathbb{Z}}/p$. Let $\phi:S^{1}\rightarrow H_{0}$ be an isomorphism of $S^{1}$ with the identity component of~$H$. Then $H_{\alpha}$ is isomorphic to $H$ via an isomorphism that restricts to $\phi$ on the identity component of~$H_{\alpha}$. \end{proposition} \begin{proof} Let $V_{K}$, $V_{E}$, $V_{F}$ be the subspaces of $V$ defined just prior to Proposition~\ref{prop: form gives extension}. The basis elements of $V_{E}$ can be lifted to elements of $H$, which can be chosen to be of order~$p$ because $S^{1}$ is a divisible group. The lifts commute since the form is trivial on $V_{E}$.
Mapping basis elements of $V_{E}$ to their lifts in $H$ gives a monomorphism of groups $V_{E}\hookrightarrow H$ whose image we call $W_{E}$. Likewise, we can choose lifts $V_{K}\hookrightarrow H$ and $V_{F}\hookrightarrow H$, whose images are subgroups $W_{K}$ and $W_{F}$ of $H$, respectively. Recall that as a set, $H_{\alpha}=S^{1}\times V$, and for $v\in V$, we have $v=\vvec{K}+\vvec{E}+\vvec{F}$ as before. Let $\wvec{K}$, $\wvec{E}$, and $\wvec{F}$ be the images of $\vvec{K}$, $\vvec{E}$, and $\vvec{F}$ under the lifting homomorphisms of the previous paragraph. We extend the given isomorphism $\phi:S^{1}\rightarrow H_{0}$ to a function $\Phi\colon H_{\alpha}\rightarrow H$ by \begin{equation} \label{eq: define phi} \Phi\left(z, \vvec{K}+\vvec{E}+\vvec{F} \right)= \phi(z)\,\wvec{K}\wvec{E}\wvec{F}. \end{equation} (Note that we write the group operation additively in $V$, which is abelian, but multiplicatively in $H$, which may not be. In the products below we also allow ourselves the abuse of writing $\vvec{K}$, $\vvec{E}$, and $\vvec{F}$, and likewise the primed vectors, for their lifts $\wvec{K}$, $\wvec{E}$, and $\wvec{F}$ whenever the product is taken in~$H$.) We assert that $\Phi$ is a group homomorphism. To see that, suppose we have two elements $(z,v)$ and $(z',v')$ of $H_{\alpha}$. If we multiply first in $H_{\alpha}$ we get $\left(\strut zz'\,\alpha(\vvec{F},\vprimevec{E}), \ v+v'\right)$, and then application of $\Phi$ gives us \begin{equation} \label{eq: messy product} \phi\left(zz'\right)\,\alpha(\vvec{F},\vprimevec{E}) (\vvec{K}\vprimevec{K})(\vvec{E}\vprimevec{E})(\vvec{F}\vprimevec{F}). \end{equation} On the other hand, if we apply $\Phi$ first and then multiply, we get $\left(\phi(z)\,\vvec{K}\vvec{E}\vvec{F}\right) \left(\phi(z')\,\vprimevec{K}\vprimevec{E}\vprimevec{F}\right)$, which can be rewritten as \begin{align} \label{eq: another messy product} \phi\left(zz'\right)\,\left(\vvec{K}\vprimevec{K}\right) \left(\vvec{E}\vvec{F}\vprimevec{E}\vprimevec{F}\right). \end{align} To compare \eqref{eq: messy product} to~\eqref{eq: another messy product}, we need to relate $\vprimevec{E}\vvec{F}$ and $\vvec{F}\vprimevec{E}$.
However, the commutators in $H$ are given exactly by $\alpha$, so $\vvec{F}\vprimevec{E}=\alpha(\vvec{F},\vprimevec{E})\vprimevec{E}\vvec{F}$, which allows us to see that \eqref{eq: messy product} and \eqref{eq: another messy product} are equal. We conclude that $\Phi$ is a group homomorphism. Finally, we need to know that $\Phi$ is a bijection. To see that $\Phi$ is surjective, observe that if $h\in H$ maps to $v\in V$ where $v=\vvec{K}+\vvec{E}+\vvec{F}$, then $h$ and $\wvec{K}\wvec{E}\wvec{F}$ differ only by some element $z$ of the central $S^{1}$. Hence every element of $H$ can be written as $z \wvec{K} \wvec{E} \wvec{F}$ for some $z$, $\wvec{K}$, $\wvec{E}$, $\wvec{F}$, and $\Phi$ is surjective. For injectivity, \eqref{eq: define phi}~tells us that $\Phi$ restricts to the isomorphism $\phi$ on the identity component,~$S^{1}$. Further, $\Phi$ is a surjection of the finite set of components, hence a bijection of components, so the kernel of $\Phi$ lies in the identity component and is therefore trivial. We conclude that $\Phi$ is an isomorphism. \end{proof} \begin{proposition} \label{proposition: forms=groups} The commutator form gives a one-to-one correspondence between isomorphism classes of abstract projective elementary abelian $p$\kern1.3pt- groups and isomorphism classes of totally isotropic bilinear forms over~${\mathbb{Z}}/p$. \end{proposition} \begin{proof} Every totally isotropic form $\alpha$ is realized as the commutator form of the group~$H_{\alpha}$. By Propositions~\ref{prop: form gives extension} and~\ref{proposition: one-to-one}, if $H$ and $H'$ have isomorphic commutator forms $\alpha$ and $\alpha'$, then \[ H\cong H_{\alpha}\cong H_{\alpha'}\cong H'. \] \end{proof} Proposition~\ref{proposition: forms=groups} allows us to give the following explicit classification of abstract projective elementary abelian $p$\kern1.3pt- groups. This classification result can also be found in \cite{Griess}, Theorem~3.1, though in this section we have given an elementary and self-contained discussion and proof.
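The correspondence can be made concrete in small cases. The following sketch (our own illustration, not from the paper) implements the operation~\eqref{eq: group operation} for $V=({\mathbb{Z}}/p)^{2}$ with the standard symplectic form, modeling only the finite subgroup of $p$-th roots of unity inside $S^{1}$ by their exponents, and checks the group axioms and the commutator pairing.

```python
import itertools

# Toy model (a sketch under our own conventions) of H_alpha for
# V = (Z/p)^2 with the standard symplectic form alpha((a,b),(c,d)) = ad - bc,
# so V_K = 0, E = {(1,0)}, F = {(0,1)}.  We track only the p-th roots of
# unity in S^1, recording zeta^z by its exponent z mod p, which keeps the
# model finite.
p = 3

def mul(g, h):
    """(z, v) * (z', v') = (z z' zeta^{alpha(v_F, v'_E)}, v + v')."""
    (z, (a, b)), (zp, (c, d)) = g, h
    # alpha(v_F, v'_E) = alpha(b*f, c*e) = -b*c, as an exponent of zeta
    return ((z + zp - b * c) % p, ((a + c) % p, (b + d) % p))

def inv(g):
    """The inverse (z^{-1} zeta^{alpha(v_F, v_E)}, -v) from the text."""
    z, (a, b) = g
    return ((-z - a * b) % p, ((-a) % p, (-b) % p))

e = (0, (0, 0))
elements = [(z, (a, b)) for z in range(p)
            for a in range(p) for b in range(p)]

# Group axioms: associativity, identity, and two-sided inverses.
for g, h, k in itertools.product(elements, repeat=3):
    assert mul(mul(g, h), k) == mul(g, mul(h, k))
for g in elements:
    assert mul(g, e) == g == mul(e, g)
    assert mul(g, inv(g)) == e == mul(inv(g), g)

# The commutator of lifts recovers alpha: [x~, y~] = zeta^{ad - bc}.
for a, b, c, d in itertools.product(range(p), repeat=4):
    x, y = (0, (a, b)), (0, (c, d))
    commutator = mul(mul(x, y), mul(inv(x), inv(y)))
    assert commutator == ((a * d - b * c) % p, (0, 0))
```

In particular, the last loop confirms, for this $p=3$ example, that the commutator pairing of $H_{\alpha}$ is the symplectic form we started with.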
\begin{proposition} \label{proposition: group classification} Suppose that $H$ is an abstract projective elementary abelian $p$\kern1.3pt- group \[ 1\rightarrow S^{1}\rightarrow H\rightarrow V\rightarrow 1. \] Let $2s$ be the maximal rank of a symplectic subspace of $V$ under the commutator form of $H$, and let $t=\rank(V)-2s$. Then $H$ is isomorphic to $\Gamma_{s}\times\Delta_{t}$. \end{proposition} \begin{proof} The commutator form of $\Gamma_{s}\times\Delta_{t}$ is isomorphic to that of $H$, so the result follows from Proposition~\ref{proposition: forms=groups}. To interpret the proposition when $s=0$, note that $\Gamma_{0}=S^{1}$, so the proposition says that if $s=0$, then $H\cong S^{1}\times\Delta_{t}\cong S^{1}\times V$. \end{proof} \begin{remark} As pointed out by D.~Benson, one could also approach this classification result via group cohomology, using the fact that equivalence classes of extensions as in Definition~\ref{defn: abstract proj elem} correspond to elements of $H^2(V; S^1)$. An argument using the Bockstein homomorphism shows that the group $H^2(V; S^1)\cong H^3(V,\integers)$ can be identified with the exterior square of $H^1(V,\integers/p)$, which is in turn isomorphic to the space of alternating forms $V\times V \to \integers/p$. The standard factor set approach to $H^2$ can be used to identify such a form with the commutator pairing of the extension (as in \cite{Brown} or \cite{Weibel}). The factor set associated to an extension is similarly defined and often identically denoted as the commutator pairing, but the two pairings are \emph{not} the same. In particular, a factor set need not be bilinear or totally isotropic. \end{remark} \section{Initial list of problematic subgroups} \label{section: problematic subgroups} Throughout this section, assume that $n=mp^k$ where $m$ and $p$ are coprime. Suppose that $H$ is a problematic $p$\kern1.3pt- toral subgroup of $\Uof{n}$. 
We know from the first part of Theorem~\ref{thm: H elementary abelian} that $H$ must be a projective elementary abelian $p$\kern1.3pt- group. If $H$ contains $S^{1}\subseteq \Uof{n}$, we know the possible group isomorphism classes of $H$ from Proposition~\ref{proposition: group classification}. The purpose of this section is to use the character criterion of Theorem~\ref{thm: H elementary abelian} to narrow down the possible conjugacy classes of $H$ in $\Uof{n}$. In Section~\ref{section: Gamma_k}, we described the projective elementary abelian $p$\kern1.3pt- subgroup $\Gamma_{k}\subset U\kern-2.5pt\left(p^{k}\right)$. In this and subsequent sections, we consider the action of $\Gamma_{k}$ on $\complexesn$ by a multiple of its ``standard'' action on $\complexespk$: the group $\Gamma_{k}$ acts on $\complexesn\cong{\mathbb{C}}^{m}\otimes\complexespk$ by acting trivially on ${\mathbb{C}}^{m}$ and by the standard action on~$\complexespk$. In order to streamline notation, we denote this subgroup of $\Uof{n}$ by $\Gamma_{k}$ also, since context will indicate the dimension of the ambient space. Since $\Gamma_{k}\subseteq\Uof{n}$ is represented by block diagonal matrices whose $m$ blocks are each the corresponding matrix of $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$, we immediately obtain the following from Lemma~\ref{lemma: Gamma zero character}. \begin{lemma} \label{lemma: Diag character} The character of $\Gamma_{k}\subseteq\Uof{n}$ is nonzero only on the elements of $S^{1}$, where the character is $\chi(s)=ns\in{\mathbb{C}}$. \end{lemma} Our goal is to show that if $H$ is a problematic subgroup of $\Uof{n}$ where $n=mp^{k}$ and $m$ and $p$ are coprime, then $H$ is conjugate to a subgroup of $\Gamma_{k}\subset\Uof{n}$. Although $H$ itself may not be finite, we can use its finite subgroups to get information about $n$ using the following result from basic representation theory. \begin{lemma}\label{lemma: n=mp^k} Suppose that $G$ is a finite subgroup of $\Uof{n}$ and that $\character{G}(y)=0$ unless $y=e$.
Then $|G|$ divides $n$, and the action of $G$ on $\complexesn$ is by $n/|G|$ copies of the regular representation. \end{lemma} \begin{proof} The number of copies of an irreducible character $\character{}$ in $\character{G}$ is given by the inner product \[ \langle \character{G},\character{}\rangle =\frac{1}{|G|} \sum_{y \in G} \character{G}(y) \overline{\character{}(y)}. \] Take $\character{}$ to be the character of the one-dimensional trivial representation of $G$. The only nonzero term in the summation occurs when $y=e$, and since $\character{G}(e)=n$, we find $\langle \character{G},\character{}\rangle=n/|G|$. Since $\langle \character{G},\character{}\rangle$ must be an integer, it follows that $|G|$ divides $n$. To finish, we observe that the character of $n/|G|$ copies of the regular representation agrees with $\character{G}$: both take the value $n$ at the identity and vanish elsewhere. \end{proof} We now have all the ingredients we require to prove the main result of this section. \begin{theorem} \label{theorem: Non_contractible_implies_subgroup_Gamma_diag} \subgroupGammadiagtext \end{theorem} \begin{proof} We know from Theorem~\ref{thm: H elementary abelian}\eqref{item: elem abelian} that $H$ is an abstract projective elementary abelian $p$\kern1.3pt- group. By Proposition~\ref{proposition: group classification}, $H$ is abstractly isomorphic to $\Gamma_{s}\times\Delta_{t}$ for some $s$ and $t$, so $H$ contains a subgroup $({\mathbb{Z}}/p)^{s+t}$ (say, $\Acal_{s}\times\Delta_{t}$). By Theorem~\ref{thm: H elementary abelian}\eqref{item: character}, we have the character criterion on~$H$ necessary to apply Lemma~\ref{lemma: n=mp^k}, and we conclude that $p^{s+t}$ divides~$n$. Since $n=mp^{k}$ with $m$ coprime to~$p$, we necessarily have $s+t\leq k$. Hence by Lemma~\ref{lemma: Gamma_k subgroups}, we know $\Gamma_{s}\times\Delta_{t}\subseteq\Gamma_{k}$. To finish the proof, we compare two representations of~$\Gamma_{s}\times\Delta_{t}$.
The first is the composite \[ \Gamma_{s}\times\Delta_{t}\hookrightarrow\Gamma_{k}\subseteq \Uof{n}. \] This map gives an identification of the identity component of the abstract group $\Gamma_{s}\times\Delta_{t}$ with the center $S^{1}\subseteq\Uof{n}$, and in terms of this identification, the character of the representation is $x\mapsto nx$ on the identity component of $\Gamma_{s}\times\Delta_{t}$ and zero elsewhere (Lemma~\ref{lemma: Diag character}). To construct the second representation of~$\Gamma_{s}\times\Delta_{t}$, we produce a map $\Gamma_{s}\times\Delta_{t}\rightarrow H$. Since $H$ has the same commutator form as $\Gamma_{s}\times\Delta_{t}$, by Proposition~\ref{proposition: one-to-one} there is an isomorphism $\Gamma_{s}\times\Delta_{t}\rightarrow H\subseteq\Uof{n}$ that gives the same map on identity components as $\Gamma_{s}\times\Delta_{t}\hookrightarrow\Gamma_{k}\subseteq \Uof{n}$. Hence the character for this representation is also zero off the identity component (by Theorem~\ref{thm: H elementary abelian}), and $x\mapsto nx$ on the central~$S^{1}$. Thus the two representations of $\Gamma_{s}\times\Delta_{t}$ have the same character, and we conclude that they are conjugate. Since the image of one is the subgroup~$H$, and the image of the other is $\Gamma_{s}\times\Delta_{t}\subseteq\Gamma_{k}\subseteq\Uof{n}$, the theorem follows. \end{proof} \begin{example} Suppose that $p$ is an odd prime, and let $n=2p$. Let $H$ be a problematic subgroup of $\Uof{2p}$ acting on~$\Lcal_{2p}$. According to Theorem~\ref{theorem: Non_contractible_implies_subgroup_Gamma_diag}, the subgroup $H$ is conjugate in $\Uof{2p}$ to a subgroup of~$\Gamma_{1}$. Since in addition we assume that $H$ contains the central $S^{1}$, there are only three possibilities for $H$: $S^{1}$ itself, $\Gamma_{1}$ acting by two copies of its standard representation, or $S^{1}\times\Delta_{1}\subset\Gamma_{1}$.
\end{example} \section{Fixed points and joins} \label{section: joins} In this section, we begin the work of computing the fixed points of the $p$\kern1.3pt- toral subgroups of $\Uof{n}$ that are identified in Theorem~\ref{theorem: Non_contractible_implies_subgroup_Gamma_diag} as potentially problematic. Throughout this section, let $n=mp^{k}$, and fix an isomorphism $\complexesn\cong{\mathbb{C}}^{m}\otimes\complexespk$. Let $\Gamma_{k}$ act on $\complexesn$ by acting trivially on ${\mathbb{C}}^{m}$ and by its standard representation (described in Section~\ref{section: Gamma_k}) on~$\complexespk$. There is also an action of $\Uof{m}$ on $\complexesn$ that commutes with the action of~$\Gamma_{k}$, by letting $\Uof{m}$ act by the standard action on ${\mathbb{C}}^{m}$ and trivially on~$\complexespk$. This action passes to an action of $\Uof{m}$ on the fixed point space~$\weakfixed{\Gamma_{k}}$. Our goal in this section and the next is to establish which subgroups of $\Gamma_{k}\subseteq\Uof{n}$ actually have noncontractible fixed points on~$\Lcal_{n}$. A~starting point is provided by the following result of~\cite{Arone-Lesh-Tits} for $\Gamma_{k}\subset U\kern-2.5pt\left(p^{k}\right)$ acting on~$\Lcal_{p^{k}}$. We will bootstrap this result to fixed points of $\Gamma_{k}\subset\Uof{n}$ acting on~$\Lcal_{n}$. \begin{proposition}[\cite{Arone-Lesh-Tits}] \label{proposition: fixed points of Gamma_k} For $k\geq 1$, the fixed point space of $\Gamma_k$ acting on $\Lcal_{p^k}$ is homotopy equivalent to a wedge of spheres of dimension~$k-1$. \end{proposition} Let $X\ast Y$ denote the join of the two spaces $X$ and~$Y$. The theorem below establishes a formula for the fixed point space $\weakfixedspecific{\Gamma_{k}}{mp^k}$ that was suggested to us by G.~Arone. Notice that there is no assumption that $m$ and $p$ should be coprime. \begin{theorem} \label{theorem: join theorem} \asttheoremtext \end{theorem} We begin by outlining the proof of Theorem~\ref{theorem: join theorem}.
The strategy is to identify a $\Uof{m}$-subcomplex $Z$ of $\weakfixedspecific{\Gamma_{k}}{mp^k}$ such that the nerve of $Z$ has the $\Uof{m}$-equivariant homotopy type of the join in Theorem~\ref{theorem: join theorem}. Then we establish that $Z$ is a $\Uof{m}$-equivariant deformation retraction of~$\weakfixedspecific{\Gamma_{k}}{mp^k}$. To construct the subcomplex~$Z$, suppose that $\mu$ and $\nu$ are orthogonal decompositions of ${\mathbb{C}}^{m}$ and~$\complexespk$, respectively, with $\class(\mu)=\{v_{1},\dots,v_{s}\}$ and $\class(\nu)=\{w_{1},\dots,w_{t}\}$. We can tensor the components of $\mu$ and $\nu$ to obtain a decomposition of ${\mathbb{C}}^{m}\otimes\complexespk$ that we denote $\mu\otimes\nu$: \[ \class(\mu\otimes\nu)=\{v_{i}\otimes w_{j}: 1\leq i\leq s \mbox{ and } 1\leq j\leq t\}. \] If $\nu$ is weakly fixed by~$\Gamma_{k}$, then so is $\mu\otimes\nu$, and if at least one of $\mu$ and $\nu$ is proper, then $\mu\otimes\nu$ is proper as well. \begin{definition} The subposet $Z\subseteq\weakfixedspecific{\Gamma_{k}}{mp^{k}}$ is the set of objects of the form $\mu\otimes\nu$ where $\mu$ is a decomposition of ${\mathbb{C}}^{m}$ and $\nu$ is a weakly $\Gamma_{k}$-fixed decomposition of $\complexespk$, and at least one of $\mu$ and $\nu$ is proper. \end{definition} \begin{remark} \label{remark: path components Z} It follows from the definition that $Z$ is stabilized by the action of $\Uof{m}$ on $\complexesn\cong{\mathbb{C}}^{m}\otimes\complexespk$. In fact, by \cite{Oliver-p-stubborn} the centralizer of $\Gamma_{k}$ is actually~$\Uof{m}$, and thus by Corollary~\ref{cor: J fixed path components} the path components of the object space of $Z$ are actually $\Uof{m}$-orbits. The same is true of the morphism space of~$Z$, and indeed, for the space of $d$-simplices of $Z$ for every~$d$.
\end{remark} To analyze~$Z$, we write it as a union of two subposets, each of which is closed under the action of~$\Uof{m}$. Let $X$ (resp.~$Y$) denote the subposet of~$Z$ consisting of the decompositions $\lambda=\mu\otimes\nu$ where $\mu$ (resp.~$\nu$) is a proper decomposition of~${\mathbb{C}}^{m}$ (resp.~$\complexespk$). The object space of $X$ is stabilized by~$\Uof{m}$, and hence (Remark~\ref{remark: path components Z}) is a union of path components of the object space of~$Z$. The same is true of~$Y$, and likewise the morphism spaces of $X$ and $Y$ are unions of path components of the morphism space of~$Z$. Any refinement in $Z$ of an object in $X$ is also in~$X$, and any refinement in $Z$ of an object in $Y$ is likewise in~$Y$. Hence the nerve of $Z$ is the union of the nerve of $X$ and the nerve of~$Y$. We construct a $\Uof{m}$-equivariant map of diagrams \begin{equation} \label{eq: map arrays} \left( \begin{array}{ccc} \Lcal_{m}\times\weakfixedspecific{\Gamma_{k}}{p^{k}} & \longrightarrow &\Lcal_{m}\\ \downarrow\\ \weakfixedspecific{\Gamma_{k}}{p^{k}} \end{array} \right)\ \xrightarrow{\ \ \ \ \ \ \ } \ \left( \begin{array}{ccc} X\cap Y & \longrightarrow X\\ \downarrow\\ Y \end{array} \right) \end{equation} by doing the following. \begin{itemize} \item In the upper right corner, we map a proper decomposition $\mu$ of~${\mathbb{C}}^{m}$ to the decomposition $\mu\otimes\complexespk$, which is in~$X$. \item In the lower left corner, we map a decomposition $\nu$ in $\weakfixedspecific{\Gamma_{k}}{p^{k}}$ to ${\mathbb{C}}^{m}\otimes\nu$, which is in~$Y$. \item In the upper left corner, we map the pair $(\mu,\nu)$ to $\mu\otimes\nu$, which is in $X\cap Y$. \end{itemize} In~\eqref{eq: map arrays}, we would like to relate the homotopy pushout of the left diagram to the strict pushout of the right diagram (which is $X\cup Y$). First we establish commutativity of the map of diagrams in~\eqref{eq: map arrays}.
We show that at each corner, \eqref{eq: map arrays}~is a homotopy equivalence, and then we show that this statement remains true when we take fixed points under the action of a subgroup $H\subseteq\Uof{m}$. Finally, we show that in the right-hand diagram of~\eqref{eq: map arrays}, the map from the homotopy pushout to the strict pushout is a homotopy equivalence and remains so after taking fixed points under $H\subseteq\Uof{m}$. \begin{lemma} \label{lemma: commuting ladder} The following ladder is $\Uof{m}$-equivariantly homotopy commutative, and the vertical arrows are equivalences of $\Uof{m}$-spaces: \begin{equation} \label{diag: lemma ladder} \begin{CD} \weakfixedspecific{\Gamma_{k}}{p^{k}} @<<<\Lcal_{m}\times\weakfixedspecific{\Gamma_{k}}{p^{k}} @>>> \Lcal_{m}\\ @V{\simeq}VV @V{\cong}VV @V{\simeq}VV\\ Y@<<< X\cap Y @>>> X . \end{CD} \end{equation} \end{lemma} \begin{proof} To establish homotopy commutativity, consider a pair $(\mu,\nu)$ in $\Lcal_{m}\times\weakfixedspecific{\Gamma_{k}}{p^{k}}$. Going clockwise around the right-hand square of \eqref{diag: lemma ladder} yields the decomposition $\mu\otimes\complexespk$ in~$X$. Going around that square counterclockwise yields the decomposition $\mu\otimes\nu$ in~$X$. There is a natural coarsening $\mu\otimes\nu\rightarrow\mu\otimes\complexespk$, and the coarsening morphism is stabilized by~$\Uof{m}$. Hence the right-hand square of diagram~\eqref{diag: lemma ladder} induces a $\Uof{m}$-homotopy commutative diagram of nerves. Similarly, following $(\mu,\nu)$ clockwise around the left-hand square gives $\mu\otimes\nu$ in~$Y$, and going counterclockwise gives ${\mathbb{C}}^{m}\otimes\nu$ in~$Y$, and there is a natural $\Uof{m}$-equivariant homotopy given by the coarsening $\mu\otimes\nu\rightarrow{\mathbb{C}}^{m}\otimes\nu$. The vertical equivalences result from similar arguments, as follows.
\begin{enumerate} \item To see that the right-hand vertical map, $\Lcal_{m}\rightarrow X$, is a homotopy equivalence of $\Uof{m}$-spaces, consider that $\mu\otimes\nu\mapsto\mu\otimes\complexespk$ gives a natural retraction. The coarsening map $\mu\otimes\nu\rightarrow\mu\otimes\complexespk$ gives a $\Uof{m}$-equivariant homotopy between the retraction and the identity. \item Likewise, we consider the left-hand map, $\weakfixedspecific{\Gamma_{k}}{p^{k}}\rightarrow Y$. The map $\mu\otimes\nu\mapsto{\mathbb{C}}^{m}\otimes\nu$ gives a natural retraction, and the coarsening map $\mu\otimes\nu\rightarrow{\mathbb{C}}^{m}\otimes\nu$ is a $\Uof{m}$-equivariant homotopy making it a deformation retraction. \item The map $\Lcal_{m}\times\weakfixedspecific{\Gamma_{k}}{p^{k}} \longrightarrow X\cap Y$ is a $\Uof{m}$-equivariant isomorphism of posets. \end{enumerate} \end{proof} As a consequence of Lemma~\ref{lemma: commuting ladder}, we know that the map of diagrams in \eqref{eq: map arrays} induces a map between the homotopy pushouts that is $\Uof{m}$-equivariant and a homotopy equivalence. However, to get equivalences of $\Uof{m}$-spaces, we need to know what happens after taking fixed points of a subgroup $J\subseteq\Uof{m}$. Hence we need to know the relationship between taking fixed points and taking homotopy pushouts. We give an argument that we learned from C. Malkiewich~\cite{CM-notes}. \begin{proposition} \label{proposition: fixed points of mapping cylinder} Let $f:A\longrightarrow B$ be an equivariant map of spaces with an action of a group~$J$, and let $\mathrm{Cyl}(f)$ denote the mapping cylinder of~$f$. Let $C=\mathrm{Cyl}\left(f^{J}\right)$ be the mapping cylinder of the function $A^{J}\longrightarrow B^{J}$ given by restricting $f$ to $J$-fixed points, and let $D=\left(\mathrm{Cyl}(f)\right)^{J}$ be the fixed point space of the action of $J$ on $\mathrm{Cyl}(f)$.
Then the natural map $C\longrightarrow D$ is a homeomorphism. \end{proposition} \begin{proof} The space $C$ is the quotient space of $\left(A^{J}\times [0,1]\right)\coprod B^{J}$ by the relation $(a,1)\sim f(a)$. The inclusions $A^{J}\times [0,1]\hookrightarrow A\times [0,1]$ and $B^{J}\hookrightarrow B$ induce a natural map $C\rightarrow \mathrm{Cyl}(f)$, whose image is contained in~$D$, and which is continuous by definition of the quotient topology on~$C$. Further, using the fact that $A^{J}$ and $B^{J}$ are closed in $A$ and $B$, respectively, we can check that $C\rightarrow D$ is a closed map. A routine check verifies that $C\rightarrow D$ is a bijection. We conclude that $C\rightarrow D$ is a homeomorphism. \end{proof} \begin{corollary} \label{corollary: fixed points and hocolim} Suppose that \[ \begin{CD} A@>>> B\\ @VVV @VVV\\ C @>>> D \end{CD} \] is a homotopy pushout diagram of spaces with an action of a group~$J$, and $J$-equivariant maps. Then \begin{equation} \begin{CD} A^J@>>> B^J\\ @VVV @VVV\\ C^J@>>>D^{J} \end{CD} \end{equation} is also a homotopy pushout diagram. \end{corollary} \begin{proof} The corollary is a consequence of Proposition~\ref{proposition: fixed points of mapping cylinder} once we have replaced $B$ and $C$ with mapping cylinders and $D$ with the double mapping cylinder. \end{proof} \begin{proposition} \label{proposition: subcomplex is a join} The nerve of $Z$ is $\Uof{m}$-equivariantly homotopy equivalent to $\Lcal_{m}\ast\weakfixedspecific{\Gamma_{k}}{p^{k}}$. \end{proposition} \begin{proof} Lemma~\ref{lemma: commuting ladder} and Corollary~\ref{corollary: fixed points and hocolim} tell us that the map of diagrams in \eqref{eq: map arrays} induces an equivalence of $\Uof{m}$-spaces on homotopy pushouts, i.e., a $\Uof{m}$-equivariant map that is a homotopy equivalence on the fixed point space of any subgroup $J\subseteq\Uof{m}$.
The homotopy pushout of the left diagram in \eqref{eq: map arrays} is $\Lcal_{m}\ast\weakfixedspecific{\Gamma_{k}}{p^{k}}$, and we need to relate this space to the strict pushout (not the homotopy pushout) of the right diagram in \eqref{eq: map arrays}, the strict pushout being $X\cup Y=Z$. Hence the proposition follows once we show that the natural map from the homotopy pushout to the strict pushout for the diagram \[ \begin{CD} X\cap Y @>>> X\\ @VVV\\ Y \end{CD} \] is a homotopy equivalence, and remains so after taking fixed points for any subgroup $J\subseteq\Uof{m}$. This statement follows if we can prove that for all subgroups $J$ of~$\Uof{m}$, the space $X^{J}$ is Reedy cofibrant and $(X\cap Y)^{J}\rightarrow Y^{J}$ is a Reedy cofibration. To establish that $X$ itself is Reedy cofibrant, consider the space $X_{d}$ of $d$-simplices of~$X$. Let $L_{d}\left(X\right)$ denote the space of degenerate $d$-simplices of~$X$ (i.e., the $d$th latching object for~$X$). We must show that \begin{equation} \label{eq: inclusion of latching} L_{d}X\longrightarrow X_{d} \end{equation} is a cofibration of topological spaces. We assert that $L_{d}X$ is, in fact, a union of path components of~$X_{d}$. The complex $X\subseteq Z$ is stabilized by~$\Uof{m}$, and by Remark~\ref{remark: path components Z} the path components of the $d$-simplices of $Z$ are $\Uof{m}$-orbits. Hence $X_{d}$~is a disjoint union of $\Uof{m}$-orbits, and the same is true of~$X_{d-1}$. The degeneracy maps of $X$ are $\Uof{m}$-equivariant, and it follows that their images are unions of path components. The space $L_{d}X$ is the union of such images, and is therefore also a union of path components of~$X_d$, establishing that $X$ is Reedy cofibrant. We need to know that the statements of the previous paragraph remain true when we replace $X$ by $X^{J}$, where $J$ is any subgroup of~$\Uof{m}$.
By Corollary~\ref{cor: J fixed path components}, the fixed point space of the action of $J$ on a $\Uof{m}$-orbit has path components that are orbits under the action of $C_{\Uof{m}}(J)_{0}$, the identity component of the centralizer of $J$ in~$\Uof{m}$. We can now follow exactly the same reasoning as in the previous paragraph to conclude that $X^{J}$ is Reedy cofibrant. We claim that the map $X\cap Y\rightarrow Y$ is a Reedy cofibration for essentially the same reasons. We must show that for each~$d$, the map from the pushout of the diagram \begin{equation} \label{diagram: latching objects} \begin{CD} L_d (X\cap Y) @>>> (X\cap Y)_d\\ @VVV\\ L_d (Y) \end{CD} \end{equation} to $Y_{d}$ is a cofibration in topological spaces. But $X_{d}$, $Y_{d}$, and $(X\cap Y)_{d}$ are disjoint unions of $\Uof{m}$-orbits, as are their subspaces of degenerate simplices. As a consequence, each of the spaces in \eqref{diagram: latching objects} is a union of path components of~$Y_d$, and the maps are inclusions. So the pushout is likewise a union of path components of~$Y_d$, and its inclusion into $Y_d$ is a cofibration. We also need to know that the map remains a Reedy cofibration after taking $J$-fixed points for any subgroup $J$ of~$\Uof{m}$, and the argument is obtained by combining the previous two paragraphs, since taking $J$-fixed point spaces results in spaces of $d$-simplices that are orbits of $C_{\Uof{m}}(J)_{0}$. \end{proof} Now that we have constructed the $\Uof{m}$-equivariant subcomplex $Z$ of~$\weakfixedspecific{\Gamma_{k}}{mp^k}$ with the desired homotopy type, we need to prove that there is a deformation retraction of $\Uof{m}$-spaces from $\weakfixedspecific{\Gamma_{k}}{mp^k}$ to~$Z$. We obtain it in steps, by constructing an interpolating subcategory between $Z$ and~$\weakfixedspecific{\Gamma_{k}}{mp^{k}}$ using the following definition.
\begin{definition} \label{definition: uniform isotropy} Suppose that $H\subseteq\Uof{n}$ and $\lambda$ is an object of $\weakfixed{H}$. \begin{enumerate} \item For $v\in\class(\lambda)$, define the \defining{$H$-isotropy group} of~$v$ as $\{g\in H: gv=v\}$. \item We say that $\lambda$ has \defining{uniform $H$-isotropy} if every $v\in\class(\lambda)$ has the same $H$-isotropy group. In this case, we write $\isogroupof{\lambda}$ for the $H$-isotropy group of each element of~$\class(\lambda)$. \item We say that $\lambda$ is \defining{$H$-isotypical} if $\lambda$ has uniform $H$-isotropy and $\isogroupof{\lambda}$ acts isotypically on each component $v\in\class(\lambda)$. \end{enumerate} \end{definition} There is an easy criterion guaranteeing uniform isotropy. \begin{lemma} \label{lemma: transitive means uniform} Suppose $\lambda\in\Obj\weakfixed{H}$ has the property that for some $v\in\class(\lambda)$, the $H$-isotropy subgroup of $v$ is normal in~$H$. If $H$ acts transitively on $\class(\lambda)$, then $\lambda$ has uniform $H$-isotropy. \end{lemma} \begin{proof} Since the action of $H$ on $\class(\lambda)$ is transitive, the $H$-isotropy subgroups of elements of $\class(\lambda)$ are all conjugate in $H$ to the isotropy group of~$v$, which is assumed to be normal. We conclude that they are all actually the same. \end{proof} \begin{corollary} All objects in $\weakfixedspecific{\Gamma_{k}}{p^k}$ have uniform isotropy, as do objects in~$Z$. \end{corollary} \begin{proof} Since $\Gamma_{k}$ acts irreducibly on $\complexespk$, it necessarily acts transitively on the components of any weakly $\Gamma_{k}$-fixed decomposition of~$\complexespk$. An object $\mu\otimes\nu$ in $Z$ has the same $\Gamma_{k}$-isotropy group as~$\nu$. \end{proof} Since objects in $Z$ have uniform $\Gamma_{k}$-isotropy, whereas objects in $\weakfixedspecific{\Gamma_{k}}{mp^{k}}$ may not, we consider an interpolating subcomplex that focuses on uniform isotropy.
\begin{definition} Let $\Uniform{\Gamma_{k}}{mp^k}$ be the subposet of $\weakfixedspecific{\Gamma_{k}}{mp^k}$ given by objects with uniform $\Gamma_{k}$-isotropy. \end{definition} \begin{proposition} \label{proposition: inclusion of uniform subposet} The inclusion $\Uniform{\Gamma_{k}}{mp^k}\subseteq\weakfixedspecific{\Gamma_{k}}{mp^k}$ induces a homotopy equivalence of $\Uof{m}$-spaces on nerves. \end{proposition} \begin{proof} As usual, we define a $\Uof{m}$-equivariant deformation retraction. Let $\lambda$ be a decomposition that is weakly fixed by $\Gamma_{k}$, say $\class(\lambda)=\{v_{1},\dots,v_{j}\}$. Let $\isogroupof{v_1},\dots,\isogroupof{v_j}$ be the $\Gamma_{k}$-isotropy subgroups of $v_{1},\dots,v_{j}$, respectively. Each of them contains $S^{1}$ and is therefore normal in~$\Gamma_{k}$, because $\Gamma_{k}/S^{1}$ is abelian. Let $\isogroupof{\lambda}\subseteq\Gamma_{k}$ be the product, $\isogroupof{\lambda}=\isogroupof{v_1}\cdots\isogroupof{v_j}$. The construction of $\isogroupof{\lambda}$ is $\Uof{m}$-invariant, since $\Uof{m}$ centralizes~$\Gamma_{k}$. Recall that $\glom{\lambda}{\isogroupof{\lambda}}$ denotes the strongly $\isogroupof{\lambda}$-fixed coarsening of $\lambda$ created by summing components in the same orbit of the action of $\isogroupof{\lambda}$ on~$\class(\lambda)$ (see Definition~\ref{orbit_partition_def}). Consider the assignment \[ \lambda\mapsto\glom{\lambda}{\isogroupof{\lambda}}, \] and observe that it is~$\Uof{m}$-equivariant. First we check that this assignment actually lands in~$\weakfixedspecific{\Gamma_{k}}{mp^k}$. Because $S^{1}\subseteq \isogroupof{\lambda}$, we know $\isogroupof{\lambda}\triangleleft\Gamma_{k}$. Lemma~\ref{lemma: glom} then tells us that $\glom{\lambda}{\isogroupof{\lambda}}$ is weakly fixed by~$\Gamma_{k}$. We also need to check that $\glom{\lambda}{\isogroupof{\lambda}}$ is in fact proper. If not, then $\isogroupof{\lambda}$ acts transitively on $\class(\lambda)$, and therefore $\Gamma_{k}$ does also.
By Lemma~\ref{lemma: transitive means uniform}, we have $\isogroupof{v_1}=\dots=\isogroupof{v_j}$, so $\isogroupof{\lambda}=\isogroupof{v_1}$. But then $\isogroupof{v_1}$ acts transitively on $\class(\lambda)$, which is a contradiction since $\isogroupof{v_1}$ fixes $v_{1}$ and $\lambda$ is proper. We now check that the assignment $\lambda\mapsto\glom{\lambda}{\isogroupof{\lambda}}$ is continuous, by considering the $\Gamma_{k}$-isotropy of decompositions in the same path component of $\Obj(\Lcal_{mp^{k}})^{\Gamma_{k}}$ as~$\lambda$. By Corollary~\ref{cor: J fixed path components}, the path component of $\lambda$ consists of elements $c\lambda$ where $c$ is in the identity component of $C_{\Uof{mp^k}}\left(\Gamma_{k}\right)$. However, if $v\in\class(\lambda)$ then $\isogroupof{c\,v}=c\left(\isogroupof{v}\right)c^{-1}=\isogroupof{v}$. As a result, $\isogroupof{\lambda}=\isogroupof{c\lambda}$. Therefore the same subgroup is being used to coarsen the entire path component of~$\lambda$, and we already know that this operation is continuous from Lemma~\ref{lemma: continuity of glom}. Next we verify that the assignment $\lambda\mapsto\glom{\lambda}{\isogroupof{\lambda}}$ respects coarsenings: we must check that if $\mu\leq\lambda$, then $\glom{\mu}{\isogroupof{\mu}}\leq\glom{\lambda}{\isogroupof{\lambda}}$. Suppose that $w\in\class(\mu)$ and that $w\subseteq v\in\class(\lambda)$, and further suppose that $\gamma\in\isogroupof{w}$. Since $\isogroupof{w} \subset \Gamma_k$ and $\Gamma_k$ weakly fixes $\lambda$, we have that $\gamma v$ is either equal to $v$ or orthogonal to $v$; since $\gamma v$ contains $\gamma w=w\subseteq v$, it cannot be orthogonal to $v$, so $\gamma v=v$. Hence $\gamma\in\isogroupof{v}$, establishing that $w\subseteq v$ implies $\isogroupof{w}\subseteq\isogroupof{v}$. As a result, $\isogroupof{\mu}\subseteq\isogroupof{\lambda}$. We conclude that $\glom{\mu}{\isogroupof{\mu}}\leq\glom{\lambda}{\isogroupof{\mu}} \leq\glom{\lambda}{\isogroupof{\lambda}}$.
The coarsening $\lambda\rightarrow\glom{\lambda}{\isogroupof{\lambda}}$ gives a $\Uof{m}$-equivariant homotopy from the identity functor on $\weakfixedspecific{\Gamma_{k}}{mp^k}$ to the composition of retraction and inclusion, and the proposition follows. \end{proof} At this point, we have \[ Z\subseteq\Uniform{\Gamma_{k}}{mp^k}\subseteq\weakfixedspecific{\Gamma_{k}}{mp^k} \] and the second inclusion is an equivalence of $\Uof{m}$-spaces. Next, we interpolate again by defining $\isotypicsubcomplex$ as the subcomplex of $\Uniform{\Gamma_{k}}{mp^k}$ of decompositions $\lambda$ that are $\isogroupof{\lambda}$-isotypical, where $\isogroupof{\lambda}$ is the (uniform) $\Gamma_k$-isotropy of $\lambda$. \begin{proposition} \label{proposition: isotypic to unif} The inclusion $\isotypicsubcomplex\hookrightarrow\Uniform{\Gamma_{k}}{mp^k}$ induces a homotopy equivalence of $\Uof{m}$-spaces on nerves. \end{proposition} \begin{proof} Again we define a $\Uof{m}$-equivariant deformation retraction. Let $\lambda$ be a decomposition in~$\Uniform{\Gamma_{k}}{mp^k}$ with $\Gamma_{k}$-isotropy $\isogroupof{\lambda}\subseteq\Gamma_{k}$. Now consider the assignment $\lambda\mapsto\isorefine{\lambda}{\isogroupof{\lambda}}$, which we assert is continuous. As in the proof of Proposition~\ref{proposition: inclusion of uniform subposet}, continuity follows from the fact that $\isogroupof{\lambda}$ is constant on each path component, and isotypical refinement with respect to a subgroup is continuous on each path component (Lemma~\ref{lemma: continuity of isorefine}). Further, the value of $\isogroupof{\lambda}$ does not change when we act on $\lambda$ by an element of~$\Uof{m}$, because $\Uof{m}$ centralizes~$\Gamma_{k}$. And because $\Uof{m}$ also necessarily centralizes $\isogroupof{\lambda}$, the assignment $\lambda\mapsto\isorefine{\lambda}{\isogroupof{\lambda}}$ is also $\Uof{m}$-equivariant.
To check that $\lambda\mapsto\isorefine{\lambda}{\isogroupof{\lambda}}$ is natural in $\lambda$, suppose given $\mu\leq\lambda$ with uniform $\Gamma_{k}$-isotropy $\isogroupof{\mu}$ and $\isogroupof{\lambda}$, respectively. As in the proof of Proposition~\ref{proposition: inclusion of uniform subposet}, $\isogroupof{\mu}\subseteq\isogroupof{\lambda}$. We need to check that $\isorefine{\mu}{\isogroupof{\mu}}$ is a refinement of~$\isorefine{\lambda}{\isogroupof{\lambda}}$. Suppose we have $w\in\class(\mu)$ with $w\subseteq v\in\class(\lambda)$. We need to prove that for every $\isogroupof{\mu}$-isotypical summand of $w$, there exists a $\isogroupof{\lambda}$-isotypical summand of~$v$ that contains it. It is sufficient to show that non-isomorphic irreducible representations of $\isogroupof{\lambda}$ contained in $v$ cannot contain isomorphic irreducible representations of~$\isogroupof{\mu}$. However, any $\isogroupof{\lambda}$-irreducible subspace of $v$ is contained in the restriction of the standard representation of $\Gamma_{k}$ to~$\isogroupof{\lambda}$. By Frobenius reciprocity (\cite[Theorem 9.9]{Knapp}), the restriction of the standard representation of $\Gamma_{k}$ to $\isogroupof{\mu}$ splits as the sum of pairwise nonisomorphic $\isogroupof{\mu}$-irreducibles. Since $\isogroupof{\mu}\subseteq\isogroupof{\lambda}$, we conclude that nonisomorphic representations of $\isogroupof{\lambda}$ contained in~$v$ cannot contain isomorphic representations of~$\isogroupof{\mu}$. Finally, the coarsening morphism $\isorefine{\lambda}{\isogroupof{\lambda}}\rightarrow\lambda$ provides the necessary $\Uof{m}$-equivariant natural transformation from the composition of retraction and inclusion to the identity functor. \end{proof} We continue with a result on decompositions of tensor products. Since the particular properties of $\Gamma_{k}$ are not needed, we use a more general statement. 
\begin{lemma} \label{lemma: well-defined tensor} Suppose that $H\subseteq\Uof{i}$ acts irreducibly on~${\mathbb{C}}^{i}$, and let~${\mathbb{C}}^{m}$ have trivial $H$-action. If $v$ is an $H$-invariant subspace of ${\mathbb{C}}^{m}\otimes{\mathbb{C}}^{i}$, then there exists a well-defined subspace $w_{v}\subseteq{\mathbb{C}}^{m}$ such that $v=w_{v}\otimes{\mathbb{C}}^{i}$. The assignment $v\mapsto w_{v}$ is natural in~$v$, is continuous, preserves orthogonality, and is $\Uof{m}$-equivariant. \end{lemma} \begin{corollary} If $n=mp^{k}$ then there is a $\Uof{m}$-equivariant isomorphism $\strongfixedspecific{\Gamma_{k}}{mp^{k}}\cong\Lcal_{m} $. \end{corollary} \begin{proof}[Proof of Lemma~\ref{lemma: well-defined tensor}] By Schur's Lemma, there is an isomorphism \[ \End_{{\mathbb{C}}}\left({\mathbb{C}}^{m}\right) \xrightarrow{\ \cong\ } \End_{H}\left({\mathbb{C}}^{m}\otimes{\mathbb{C}}^{i}\right) \] given by the function $f\mapsto f\otimes{\mathbb{C}}^{i}$, and the function is $\Uof{m}$-equivariant by inspection. The idempotent $e_{v}\in\End\left({\mathbb{C}}^{m}\otimes{\mathbb{C}}^{i}\right)$ corresponding to orthogonal projection to~$v$ is $H$-equivariant. Hence $e_{v}$ has the form $f_{v}\otimes{\mathbb{C}}^{i}$ for some $f_{v}\in\End_{{\mathbb{C}}}\left({\mathbb{C}}^{m}\right)$. Since $f_{v}$ is necessarily an idempotent, it defines a subspace $w_{v}=\im\left(f_{v}\right)$, and $v=w_{v}\otimes{\mathbb{C}}^{i}$. The three assignments $v\mapsto e_{v}\mapsto f_{v}\mapsto w_{v}$ are each continuous. Finally, $v_{1}\perp v_2$ implies that $w_{v_{1}}\perp w_{v_{2}}$ (for example, because the corresponding idempotents compose to zero). \end{proof} We now have everything we need to prove Theorem~\ref{theorem: join theorem}. 
\begin{proof}[Proof of Theorem~\ref{theorem: join theorem}] We have a sequence of poset inclusions, \[ Z\subseteq\isotypicsubcomplex\subseteq \Uniform{\Gamma_{k}}{mp^k}\subseteq\weakfixedspecific{\Gamma_{k}}{mp^k} \] and we have already established that the last two inclusions induce homotopy equivalences of $\Uof{m}$-spaces on nerves (Proposition~\ref{proposition: isotypic to unif} and Proposition~\ref{proposition: inclusion of uniform subposet}, respectively). To finish the proof, we show that $Z\subseteq\isotypicsubcomplex$ is a $\Uof{m}$-equivariant isomorphism. Suppose that $\lambda$ is a decomposition of $\complexesn\cong{\mathbb{C}}^{m}\otimes\complexespk$ that lies in~$\isotypicsubcomplex$, that is, $\lambda$ is weakly fixed by~$\Gamma_{k}$, has uniform $\Gamma_{k}$-isotropy group~$\isogroupof{\lambda}$, and is $\isogroupof{\lambda}$-isotypical. We will prove that there exists a decomposition $\mu$ of ${\mathbb{C}}^{m}$ and a weakly $\Gamma_{k}$-fixed decomposition $\nu$ of $\complexespk$ such that $\lambda=\mu\otimes\nu$. Note that if $\lambda$ is proper, then one of $\mu$ or $\nu$ (but not both) could have a single component. As in the proof of Proposition~\ref{proposition: isotypic to unif}, Frobenius reciprocity tells us that the restriction of the standard representation of $\Gamma_{k}$ on $\complexespk$ to the subgroup $\isogroupof{\lambda}$ splits as a sum of pairwise nonisomorphic $\isogroupof{\lambda}$-irreducible subspaces, say $v_1, v_2,\ldots, v_r$. These subspaces are orthogonal, and we use them to define the decomposition $\nu$ of $\complexespk$ with $\class(\nu)=\{v_1, v_2,\ldots, v_r\}$. Note that $\nu$ is weakly fixed by $\Gamma_k$ since $\isogroupof{\lambda}\triangleleft\Gamma_{k}$. The elements of $\class(\nu)$ are permuted transitively by the action of~$\Gamma_{k}$ because $\Gamma_{k}$ acts irreducibly on~$\complexespk$.
Recall that by assumption, the components of $\lambda$ are isotypical representations of $\isogroupof{\lambda}$. Fix $v\in\class(\nu)$, and consider the components of $\lambda$ that are $\isogroupof{\lambda}$-isotypical for the irreducible representation~$v$, say $c_{1},\dots,c_{s}\in\class(\lambda)$. Each one is an $\isogroupof{\lambda}$-subrepresentation of~${\mathbb{C}}^m \otimes v$, which is the canonical $v$-isotypical summand of ${\mathbb{C}}^{m}\otimes\complexespk$. Thus by Lemma~\ref{lemma: well-defined tensor}, there exist subspaces $w_{1},\dots,w_{s}$ of ${\mathbb{C}}^m$ such that $c_{1} = w_{1} \otimes v$,\dots,$c_{s} = w_{s} \otimes v$. We take $\mu$ to be the decomposition defined by $\class(\mu)=\{w_{1},\dots,w_{s}\}$. Because every component of $\lambda$ is isotypical for a unique element of~$\class(\nu)$, and $\Gamma_{k}$ acts transitively on~$\class(\nu)$, we know that for every $c\in\class(\lambda)$ there exists $\gamma\in\Gamma_{k}$ such that $\gamma c\subseteq {\mathbb{C}}^{m}\otimes v$. Since $\gamma c\in\class(\lambda)$, we must have that $\gamma c$ is equal to some $w_{i}\otimes v$. Hence the set $\class(\lambda)$ must be the union of the orbits of $w_{1} \otimes v$,\dots,$w_{s} \otimes v$ under the action of $\Gamma_{k}$. These orbits are, respectively, $w_{1}\otimes\nu$,\dots,$w_{s}\otimes\nu$. We conclude that $\lambda=\mu\otimes\nu$. \end{proof} \section{Proof of the classification theorem} \label{section: proof of classification theorem} In this section, we prove the classification theorem for problematic $p$-toral subgroups of~$\Uof{n}$, Theorem~\ref{theorem: new classification theorem}.
Recall that if $m$ and $p$ are coprime, then any problematic $p$-toral subgroup of $\Uof{mp^k}$ is subconjugate to $\Gamma_{k}$ acting on ${\mathbb{C}}^{mp^k}$ by $m$ copies of the standard representation of $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$ (Theorem~\ref{theorem: Non_contractible_implies_subgroup_Gamma_diag}). Furthermore, all subgroups of $\Gamma_{k}$ that contain $S^{1}$ have the form $\Gamma_{s}\times\Delta_{t}$, where $s+t\leq k$ (Proposition~\ref{proposition: group classification}). Hence, we can use Theorem~\ref{theorem: join theorem} to obtain a reduction of the classification theorem as follows. \begin{proposition} \label{proposition: reduction to Delta_t} Let $H$ be a subgroup of $\Gamma_{k}\subseteq\Uof{n}$ acting by a multiple of the standard representation of $\Gamma_{k}\subseteq U\kern-2.5pt\left(p^{k}\right)$. Suppose that $H\cong\Gamma_{s}\times\Delta_{t}$, and let $r=n/p^{s+t}$. Then $\weakfixed{H}$ is contractible (respectively, mod $p$ acyclic) if and only if the $s$-fold suspension of $\weakfixedspecific{\Delta_{t}}{rp^t}$ is contractible (respectively, mod $p$ acyclic). \end{proposition} \begin{proof} If $s=0$, then we are considering $H\cong S^{1}\times\Delta_{t}$ and $n=rp^{t}$. Hence $\weakfixed{H} =\weakfixedspecific{\Delta_{t}}{rp^t}$, so the proposition is tautologically true. For $s>0$, Theorem~\ref{theorem: join theorem} gives an equivalence of $\Uof{rp^t}$-spaces \[ \weakfixed{\Gamma_s}\simeq\Lcal_{rp^t}\ast\weakfixedspecific{\Gamma_s}{p^s}. \] Taking fixed points under the action of $\Delta_{t}\subseteq\Uof{rp^t}$ gives \[ \weakfixed{H} \simeq\weakfixedspecific{\Delta_{t}}{rp^t}\ast\weakfixedspecific{\Gamma_s}{p^s}. \] Recall that $\weakfixedspecific{\Gamma_s}{p^s}$ is a wedge of spheres of dimension $s-1$ (Proposition~\ref{proposition: fixed points of Gamma_k}).
Choosing basepoints gives $X\ast Y\simeq\Sigma(X\wedge Y)$ for any $X$ and~$Y$, so $\weakfixed{H}$ has the homotopy type of a wedge of $s$-fold suspensions of~$\weakfixedspecific{\Delta_{t}}{rp^t}$. \end{proof} Most of the remainder of this section is devoted to studying fixed points of $\Delta_{t}$ acting on~$\Lcal_{rp^t}$, in preparation for assembling the classification theorem at the end of the section. When $r=1$, we can quote the following result. \begin{proposition}[\cite{Arone-Lesh-Tits}] \label{proposition: fixed points of Delta_t} For $t\geq 1$, the fixed point space of $\Delta_t$ acting on $\Lcal_{p^t}$ contains, as a retract, a wedge of spheres of dimension~$(t-1)$. \end{proposition} Next we look at the special case of $\weakfixedspecific{\Delta_{t}}{mp^t}$ where $m$ is a positive power of a prime $q$ different from~$p$. In this case, we do not get contractibility, but we do have mod~$p$ acyclicity. Because we have two primes in play, we specify them explicitly in the notation for the following proposition. We write $\Delta_{t}(p)\cong({\mathbb{Z}}/p)^t$, and we write $\Gamma_{r}(q)$ for the mod $q$ irreducible projective elementary abelian $q$-group of $\Uof{q^r}$, \[ 1\rightarrow S^{1}\rightarrow\Gamma_{r}(q)\rightarrow({\mathbb{Z}}/q)^{2r}\rightarrow 1. \] \begin{proposition} \label{prop: full Delta is problematic} If $m=q^r$ where $q$ is a prime different from $p$ and $r>0$, then $\weakfixedspecific{\Delta_t(p)}{mp^t}$ is mod $p$ acyclic, but has nontrivial mod $q$ homology, and in particular is not contractible. \end{proposition} \begin{proof} Mod $p$ acyclicity of $\weakfixedspecific{\Delta_t(p)}{mp^t}$ follows from Smith theory, since $\Lcal_{mp^t}$ is a finite complex and is mod $p$ acyclic by Corollary~\ref{corollary: contractible}. To prove the statement about mod $q$ homology of~$\weakfixedspecific{\Delta_t(p)}{mp^t}$, we use Smith theory again, this time in reverse and for mod~$q$ homology, as follows. 
We reverse the roles of $p$ and~$q$, and we consider the action of $\Delta_{t}(p)\times\Gamma_{r}(q)$ on ${\mathbb{C}}^{p^t}\otimes{\mathbb{C}}^{q^r}$. By Theorem~\ref{theorem: join theorem} applied to $\Gamma_{r}(q)$, with $\Delta_{t}(p)\subseteq \Uof{p^t}$, we find that \[ \weakfixedspecific{\Delta_{t}(p)\times\Gamma_{r}(q)}{p^{t}q^{r}} \simeq \weakfixedspecific{\Delta_{t}(p)}{p^t}\ast\weakfixedspecific{\Gamma_{r}(q)}{q^r}. \] By Proposition~\ref{proposition: fixed points of Gamma_k}, $\weakfixedspecific{\Gamma_{r}(q)}{q^r}$ is a wedge of spheres. On the other hand, by Proposition~\ref{proposition: fixed points of Delta_t}, $\weakfixedspecific{\Delta_{t}(p)}{p^t}$ has a wedge of spheres as a retract. Hence $\weakfixedspecific{\Delta_{t}(p)\times\Gamma_{r}(q)}{p^{t}q^{r}}$ also has a wedge of spheres as a retract, and therefore has nonzero mod~$q$ homology. But in fact, \[ \weakfixedspecific{\Gamma_{r}(q)\times\Delta_{t}(p)}{q^{r}p^{t}} = \left( \weakfixedspecific{\Delta_{t}(p)}{q^{r}p^{t}} \right)^{\Gamma_{r}(q)/S^1}. \] If $\weakfixedspecific{\Delta_{t}(p)}{q^{r}p^{t}}$ were mod~$q$ acyclic, then we could apply Smith theory to the finite complex $\weakfixedspecific{\Delta_{t}(p)}{q^{r}p^{t}}$ to conclude that $\weakfixedspecific{\Gamma_{r}(q)\times\Delta_{t}(p)}{q^{r}p^{t}}$ would be mod $q$ acyclic also, which we know is not the case. \end{proof} The bulk of the rest of this section is devoted to the case of $\weakfixedspecific{\Delta_{t}}{mp^t}$ where $m$ is not a power of a prime, with the aim of showing it is contractible. The strategy resembles that of Section~\ref{section: joins}, in that we replace the category $\weakfixedspecific{\Delta_{t}}{mp^t}$ with the category $\Uniform{\Delta_{t}}{mp^t}$ of $\Delta_{t}$-fixed decompositions with uniform $\Delta_{t}$-isotropy, and then we decompose $\Uniform{\Delta_{t}}{mp^t}$ into a union of two categories, the union of whose nerves gives the nerve of $\Uniform{\Delta_{t}}{mp^t}$. 
\begin{definition} \mbox{} \begin{enumerate} \item Let $\nontransitivespecific{\Delta_{t}}{mp^t}$ denote the subposet of $\Uniform{\Delta_{t}}{mp^t}$ consisting of decompositions $\lambda$ that have a nontransitive action of $\Delta_{t}$ on~$\class(\lambda)$. \item Let $\properspecific{\Delta_{t}}{mp^t}$ denote the subposet of $\Uniform{\Delta_{t}}{mp^t}$ consisting of decompositions $\lambda$ such that the action of $\Delta_{t}$ on $\class(\lambda)$ is nontrivial; that is, $\lambda$ is not strongly fixed by~$\Delta_{t}$. \end{enumerate} \end{definition} As in Section~\ref{section: joins}, we observe that if $\lambda$ is an object of $\nontransitivespecific{\Delta_{t}}{mp^t}$, then all refinements of $\lambda$ are objects in $\nontransitivespecific{\Delta_{t}}{mp^t}$ as well, and likewise $\properspecific{\Delta_{t}}{mp^t}$ contains all refinements of its objects. In addition, any $\lambda$ with uniform $\Delta_t$-isotropy is in one of these subposets; hence the nerve of $\Uniform{\Delta_{t}}{mp^t}$ is the union of the nerves of the two subcategories, and we have a pushout diagram that is also a homotopy pushout: \begin{equation} \label{diagram: Delta_t pushout} \begin{CD} \nontransitivespecific{\Delta_{t}}{mp^t} \cap \properspecific{\Delta_{t}}{mp^t} @>>> \nontransitivespecific{\Delta_{t}}{mp^t}\\ @VVV @VVV\\ \properspecific{\Delta_{t}}{mp^t}@>>> \Uniform{\Delta_{t}}{mp^t} \end{CD} \end{equation} \begin{lemma} The object spaces of the subcategories $\nontransitivespecific{\Delta_{t}}{mp^t}$ and $\properspecific{\Delta_{t}}{mp^t}$ are each unions of path components of the object space of $\Uniform{\Delta_{t}}{mp^t}$. \end{lemma} \begin{proof} We apply Corollary~\ref{cor: J fixed path components}. The action of the centralizer of $\Delta_{t}$ on the object space preserves the defining characteristics of $\nontransitivespecific{\Delta_{t}}{mp^t}$ and $\properspecific{\Delta_{t}}{mp^t}$. 
\end{proof} The initial step to prove that the nerve of $\Uniform{\Delta_{t}}{mp^t}$ is contractible is to establish that the upper right-hand corner of \eqref{diagram: Delta_t pushout} is contractible. \begin{lemma} \label{lemma: nontransitive} Suppose that $\Delta_{t}$ is nontrivial and acts on ${\mathbb{C}}^{m}\otimes{\mathbb{C}}^{p^t}$ as the tensor product of the trivial representation and the regular representation. Then $\nontransitivespecific{\Delta_{t}}{mp^t}$ has contractible nerve. \end{lemma} \begin{proof} As usual, consider the inclusions \begin{equation} \label{eq: nontransitive subcat inclusions} \isofixedspecific{\Delta_{t}}{mp^t} \hookrightarrow \strongfixedspecific{\Delta_{t}}{mp^t} \hookrightarrow \nontransitivespecific{\Delta_{t}}{mp^t}. \end{equation} The first map has the retraction functor $\lambda\mapsto\isorefine{\lambda}{\Delta_{t}}$. The second map has the retraction functor $\lambda\mapsto\glom{\lambda}{\Delta_{t}}$. (Note that $\glom{\lambda}{\Delta_{t}}$ is necessarily proper by the definition of~$\nontransitivespecific{\Delta_{t}}{mp^t}$.) Hence both inclusions of \eqref{eq: nontransitive subcat inclusions} induce equivalences of nerves. Finally, the action of $\Delta_{t}$ on ${\mathbb{C}}^{mp^t}$ is a multiple of the regular representation, and is therefore not isotypic. Hence $\isofixedspecific{\Delta_{t}}{mp^t}$ has a terminal object, namely the canonical $\Delta_{t}$-isotypic decomposition of~${\mathbb{C}}^{mp^t}$, and has contractible nerve. \end{proof} Since we have shown that the upper right corner of diagram \eqref{diagram: Delta_t pushout} has contractible nerve, to establish that $\Uniform{\Delta_{t}}{mp^t}$ has contractible nerve it is sufficient to show that the left-hand vertical map of~\eqref{diagram: Delta_t pushout} induces a homotopy equivalence on nerves. 
We will apply a topological version of Quillen's Theorem~A for categories internal to spaces, as stated and proved in \cite[Theorem 5.8]{Libman}. To check that the conditions of the cited theorem are satisfied, the first step is to look at the overcategories for objects in the lower left-hand corner of~\eqref{diagram: Delta_t pushout}. \begin{proposition} \label{prop: overcategory is contractible} Let $\lambda$ be an object of $\properspecific{\Delta_{t}}{mp^t}$, and assume that $m>1$ is not a power of a prime. Let $\Ical$ denote the intersection \[ \Ical =\nontransitivespecific{\Delta_{t}}{mp^t} \cap \properspecific{\Delta_{t}}{mp^t}. \] Then the overcategory $\Ical\downarrow\lambda$ has contractible nerve. \end{proposition} \begin{proof} The category $\Ical\downarrow\lambda$ is the poset of refinements (not necessarily strict) of $\lambda$ that happen to be in $\nontransitivespecific{\Delta_{t}}{mp^t}$. In other words, $\Ical\downarrow\lambda$ contains objects $\mu\rightarrow\lambda$ such that the action of $\Delta_{t}$ on $\class(\mu)$ is not transitive. If $\lambda$ is in $\nontransitivespecific{\Delta_{t}}{mp^t}$, then the identity morphism of $\lambda$ is a terminal object of $\Ical\downarrow\lambda$; therefore the category $\Ical\downarrow\lambda$ has contractible nerve. So suppose that $\lambda$ is in $\properspecific{\Delta_{t}}{mp^t}$, but not in $\nontransitivespecific{\Delta_{t}}{mp^t}$. In particular, $\lambda$ has uniform $\Delta_{t}$-isotropy~$\isogroupof{\lambda}$ properly contained in~$\Delta_{t}$, and $\class(\lambda)$ has a transitive action of~$\Delta_{t}$. 
To specify a refinement $\mu$ of $\lambda$ that lies in $\nontransitivespecific{\Delta_{t}}{mp^t}$, it is sufficient to choose one component $v\in\class(\lambda)$, and to specify an orthogonal decomposition of~$v$, call it $\mu_{v}$, such that \begin{itemize} \item $\mu_{v}$ is (weakly) stabilized by the action of $\isogroupof{\lambda}$ on~$v$, \item $\isogroupof{\lambda}$ acts non-transitively on $\class(\mu_{v})$ (hence $\mu_{v}$ is proper), and \item components of $\mu_{v}$ have uniform $\isogroupof{\lambda}$-isotropy. \end{itemize} The rest of $\mu$ is determined by transitivity of the $\Delta_{t}$-action on~$\class(\lambda)$. If we denote the dimension of $v$ by~$r$, then the above shows that \begin{equation} \Ical\downarrow\lambda \cong \nontransitivespecific{\isogroupof{\lambda}}{r}. \end{equation} There are two cases: $\isogroupof{\lambda}$ is trivial, or $\isogroupof{\lambda}$ is nontrivial. If $\isogroupof{\lambda}$ is trivial, then $\nontransitivespecific{\isogroupof{\lambda}}{r}\cong\Lcal_{r}$. Because $r$ is a multiple of~$m>1$, which is not a power of a prime, we know that $\Lcal_{r}$ is nonempty and has contractible nerve by Corollary~\ref{corollary: contractible}. On the other hand, if $\isogroupof{\lambda}$ is not trivial, then $\nontransitivespecific{\isogroupof{\lambda}}{r}$ has contractible nerve by Lemma~\ref{lemma: nontransitive}. \end{proof} The next condition to check in order to apply \cite[Theorem 5.8]{Libman} is the Reedy cofibrancy of an associated simplicial space. Here, we need to consider an intermediate object between a topological category and its nerve, namely the \emph{simplicial nerve}, which is a simplicial diagram of spaces obtained by taking the levelwise nerve of the original topological category. The nerve is then obtained by applying a realization functor, which gives a space. 
The category of simplicial spaces can be given a Reedy model structure, which we do not need here, but the properties of cofibrant objects in that model structure are sufficiently nice that they will be helpful in the results that follow. \begin{lemma} \label{lemma: Reedy cofibrant nerves} The categories $\Ical = \nontransitivespecific{\Delta_{t}}{mp^t}\cap\properspecific{\Delta_{t}}{mp^t}$ and $\properspecific{\Delta_{t}}{mp^t}$ have simplicial nerves that are Reedy cofibrant simplicial spaces. \end{lemma} \begin{proof} We apply the criterion for Reedy cofibrancy in \cite[Proposition A.2.2]{Libman}. This criterion says that for a simplicial space $Y$ to be Reedy cofibrant, it is sufficient that $Y$ have an action of a compact Lie group $G$ such that in each simplicial dimension, $Y$ is a disjoint union of $G$-orbits, so that $Y/G$ is discrete. Here, we take $G=\CentIdent{\Delta_{t}}$, the identity component of the centralizer of $\Delta_{t}$ in $\Uof{mp^t}$. We observe the following. \begin{itemize} \item In order for a decomposition $\lambda$ to be in $\nontransitivespecific{\Delta_{t}}{mp^t}$, the action of $\Delta_{t}$ on $\class(\lambda)$ must be nontransitive, and $\lambda$ must have uniform $\Delta_{t}$-isotropy. \item In order for a decomposition $\lambda$ to be in $\properspecific{\Delta_{t}}{mp^t}$, the action of $\Delta_{t}$ on $\class(\lambda)$ must be nontrivial, and $\lambda$ must have uniform $\Delta_{t}$-isotropy. \end{itemize} If $\lambda$ satisfies either (or both) of these conditions and $c\in\CentIdent{\Delta_{t}}$, then $c\lambda$ satisfies the same condition(s) as~$\lambda$. But Corollary~\ref{cor: J fixed path components} tells us that the path components of both the object and morphism spaces of $\weakfixedspecific{\Delta_{t}}{mp^t}$ are orbits of~$\CentIdent{\Delta_{t}}$. 
Applying the observations above tells us that both $\nontransitivespecific{\Delta_{t}}{mp^t}\cap\properspecific{\Delta_{t}}{mp^t}$ and $\properspecific{\Delta_{t}}{mp^t}$ are unions of path components of $\weakfixedspecific{\Delta_{t}}{mp^t}$, each of which is an orbit of~$\CentIdent{\Delta_{t}}$. We conclude that after taking the quotient by the action of~$\CentIdent{\Delta_{t}}$, we have a discrete space in each simplicial dimension of the simplicial nerves, and \cite[Proposition A.2.2]{Libman} now gives the desired result. \end{proof} \begin{proposition} \label{proposition: apply Quillen A} Suppose $m>1$ is not a power of a prime. The inclusion of topological categories \[ \Ical = \nontransitivespecific{\Delta_{t}}{mp^t} \cap \properspecific{\Delta_{t}}{mp^t} \longrightarrow \properspecific{\Delta_{t}}{mp^t} \] induces a homotopy equivalence of nerves. \end{proposition} \begin{proof} We prove this result by applying Libman's version of Quillen's Theorem~A \cite[Theorem 5.8]{Libman} to the inclusion of opposite categories \[ j \colon \Ical^{op} \longrightarrow \left[\properspecific{\Delta_{t}}{mp^t}\right]^{op}. \] Thus maps are now refinements of decompositions, i.e., if $\mu$ is a refinement of $\lambda$, we have a map $\lambda \to \mu$ in the opposite category. To apply the cited theorem, we need to know that the nerves of $\Ical^{op}$ and $\left[\properspecific{\Delta_{t}}{mp^t}\right]^{op}$ are Reedy cofibrant; since Reedy cofibrancy is preserved by taking opposites, we have this condition by Lemma \ref{lemma: Reedy cofibrant nerves}. Next, we need to know that for any $\lambda \in\left[\properspecific{\Delta_{t}}{mp^t}\right]^{op}$, the nerve of the undercategory $\lambda \downarrow j$ is Reedy cofibrant; the proof can be handled just as that of Lemma~\ref{lemma: Reedy cofibrant nerves}. Specifically, the object space of $\lambda \downarrow j$ consists of $\mu \subseteq \lambda$, where $\mu$ is in~$\Ical$. 
Let $\UnIsotropyof{\lambda}$ denote the $\Uof{n}$-isotropy group of $\lambda$; then the group $\CentIdent{\Delta_{t}} \cap \UnIsotropyof{\lambda}$ acts transitively on each path component of $\lambda \downarrow j$, and taking the quotient by this action gives a discrete space. Further, we need to know that for any $\lambda \in\left[\properspecific{\Delta_{t}}{mp^t}\right]^{op}$, the nerve of the undercategory $\lambda \downarrow j$ is contractible. But this undercategory is the same as the overcategory $\Ical\downarrow\lambda$, which was proved to be contractible in Proposition~\ref{prop: overcategory is contractible}. Finally, we need to check that the inclusion $j$ is absolutely tame, a technical condition which we now recall from \cite[Definition 5.5]{Libman}. Associated to the inclusion~$j$ is a bisimplicial space $\mathcal{X}(j)$ whose space $\mathcal{X}_{s,r}(j)$ of $(s,r)$-bisimplices consists of chains of refinements and coarsenings \[ \{ \lambda_s \rightarrow \dots \rightarrow \lambda_0 \leftarrow \mu_0 \leftarrow \dots \leftarrow \mu_r \}, \] where $\lambda_i \in \properspecific{\Delta_{t}}{mp^t}$ and $\mu_j \in \Ical$. There is a projection map \[ \pi_{s,r} \colon \mathcal{X}_{s,r}(j) \to \Nerve_s(\properspecific{\Delta_{t}}{mp^t}) \] which forgets the chain of $\mu$'s. We say that $j$ is \emph{absolutely tame} if $\pi_{s,r}$ is a Serre fibration for all $s,r\geq 0$ \cite[Definition 5.7]{Libman}. We will verify this condition. 
A connected component of the object space of $\properspecific{\Delta_{t}}{mp^t}$, which is also the space of zero simplices $\Nerve_0(\properspecific{\Delta_{t}}{mp^t})$, is a $\CentIdent{\Delta_{t}}$-orbit; more precisely, the connected component of a decomposition $\lambda_0$ in $\Nerve_0(\properspecific{\Delta_{t}}{mp^t})$ is $\CentIdent{\Delta_{t}}/ (\CentIdent{\Delta_{t}} \cap \UnIsotropyof{\lambda_0})$, where $\UnIsotropyof{\lambda_0}$ is the $\Uof{n}$-isotropy group of $\lambda_0$. Consequently, we can determine that the connected component of $\lambda_\bullet = \{ \lambda_s \rightarrow \dots \rightarrow \lambda_0 \} \in \Nerve_s(\properspecific{\Delta_{t}}{mp^t})$ is $\CentIdent{\Delta_{t}}/ (\CentIdent{\Delta_{t}} \cap \UnIsotropyof{\lambda_\bullet})$, where \[ \UnIsotropyof{\lambda_\bullet} = \UnIsotropyof{\lambda_0} \cap \dots \cap \UnIsotropyof{\lambda_s}. \] Similarly, the connected component of \[ (\lambda_\bullet,\mu_\bullet) = \{ \lambda_s \rightarrow \dots \rightarrow \lambda_0 \leftarrow \mu_0 \leftarrow \dots \leftarrow \mu_r \} \in \mathcal{X}_{s,r}(j) \] is \[ \CentIdent{\Delta_{t}}/ \left(\CentIdent{\Delta_{t}} \cap \UnIsotropyof{\lambda_\bullet} \cap \UnIsotropyof{\mu_\bullet}\right). \] Restricted to a connected component of $\mathcal{X}_{s,r}(j)$, the map $\pi_{s,r}$ is the quotient induced by the inclusion of subgroups \[ \CentIdent{\Delta_{t}} \cap \UnIsotropyof{\lambda_\bullet} \cap \UnIsotropyof{\mu_\bullet} \subseteq \CentIdent{\Delta_{t}} \cap \UnIsotropyof{\lambda_\bullet}, \] and that quotient is a Serre fibration. We have established that all conditions of \cite[Theorem 5.8]{Libman} are satisfied for the map $j$, so we conclude that it induces a homotopy equivalence on nerves. 
Thus the inclusion (before taking opposites of the categories) \[ \Ical \to \properspecific{\Delta_{t}}{mp^t}\] also induces a homotopy equivalence on nerves, as claimed. \end{proof} The work above showing that we can use Quillen's Theorem~A allows us to establish the following key result. \begin{theorem}\label{theorem: fixed points Delta_t} Let $m>1$, and let $\Delta_{t}$ act on ${\mathbb{C}}^{mp^{t}}$ by $m$ copies of the regular representation. If $m$ is not a power of a prime, then $\weakfixedspecific{\Delta_{t}}{mp^t}$ is contractible. \end{theorem} \begin{proof} If $t=0$ then the result follows from Corollary~\ref{corollary: contractible}, so we can assume that $t>0$ and $\Delta_{t}$ is nontrivial. As in Section~\ref{section: joins}, we proceed by decomposing the category of interest. Let $\Uniform{\Delta_{t}}{mp^t}$ denote the subposet of $\weakfixedspecific{\Delta_{t}}{mp^t}$ consisting of objects with uniform $\Delta_{t}$-isotropy. Exactly as in Proposition~\ref{proposition: inclusion of uniform subposet}, the inclusion $\Uniform{\Delta_{t}}{mp^t}\hookrightarrow\weakfixedspecific{\Delta_{t}}{mp^t}$ induces a homotopy equivalence on nerves. We will show that $\Uniform{\Delta_{t}}{mp^t}$ has a contractible nerve. The object space of $\Uniform{\Delta_{t}}{mp^t}$ is a union of path components of $\Obj\weakfixedspecific{\Delta_{t}}{mp^t}$, a fact which follows from Corollary~\ref{cor: J fixed path components}, since the action of the centralizer of $\Delta_{t}$ preserves the property of having uniform $\Delta_{t}$-isotropy. Therefore, the pushout diagram \begin{equation} \label{eq: diagram for contractibility} \xymatrix{ \nontransitivespecific{\Delta_{t}}{mp^t}\cap\properspecific{\Delta_{t}}{mp^t} \ar[r]\ar[d] & \nontransitivespecific{\Delta_{t}}{mp^t} \ar[d]\\ \properspecific{\Delta_{t}}{mp^t} \ar[r] & \Uniform{\Delta_{t}}{mp^t} } \end{equation} gives rise to a homotopy pushout diagram after applying the nerve functor. 
The upper right-hand corner has contractible nerve, by Lemma~\ref{lemma: nontransitive}. The left-hand vertical map induces a homotopy equivalence on nerves, by Proposition~\ref{proposition: apply Quillen A}. Thus $\Uniform{\Delta_{t}}{mp^t}$ must also be contractible, as needed. \end{proof} This result brings us at last to the assembly of the proof of the classification theorem. Recall that a coisotropic subgroup of $\Gamma_{k}$ is one that has the form $\Gamma_{s}\times\Delta_{t}$ where $s+t=k$. \begin{classificationtheorem} \classificationtheoremtext \end{classificationtheorem} \begin{proof} Suppose that $H$ is problematic. By Theorem~\ref{theorem: Non_contractible_implies_subgroup_Gamma_diag}, we may assume that $H$ is subconjugate to a subgroup of $\Gamma_k$ acting on ${\mathbb{C}}^{m}\otimes\complexespk$ by the standard action on $\complexespk$ and the trivial action on~${\mathbb{C}}^{m}$. Hence $H\cong\Gamma_{s}\times\Delta_{t}$ for $s+t\leq k$. To prove the converse for $m=1$, we must show that all subgroups of $\Gamma_{k}$ are in fact problematic. If $H=\Gamma_{k}$, then $\weakfixedspecific{\Gamma_{k}}{p^{k}}$ is a wedge of spheres (Proposition~\ref{proposition: fixed points of Gamma_k}), and hence has nontrivial mod~$p$ homology. Since we have assumed that $S^{1}\subseteq H\subseteq\Gamma_{k}$, the quotient $\Gamma_{k}/H$ is a finite $p$-group, so we can apply Smith theory to \[ \left(\weakfixedspecific{H}{p^k}\right)^{\Gamma_{k}/H} =\weakfixedspecific{\Gamma_{k}}{p^k}. \] We conclude that $\weakfixedspecific{H}{p^{k}}$ likewise has nontrivial mod~$p$ homology, and therefore is not contractible. Therefore $H$ is problematic. Now suppose $m>1$ and $H\cong\Gamma_{s}\times\Delta_{t}\subseteq\Gamma_{k}$ is problematic. Let $r=n/p^{s+t}$. 
We apply Proposition~\ref{proposition: reduction to Delta_t} to conclude that because $\weakfixed{H}$ is not contractible, the $s$-fold suspension of $\weakfixedspecific{\Delta_{t}}{rp^t}$ is not contractible. Hence $\weakfixedspecific{\Delta_{t}}{rp^t}$ is likewise not contractible. The contrapositive of Theorem~\ref{theorem: fixed points Delta_t} implies that $r$ is a power of a prime, and since $m\mid r$ and $m$ is coprime to~$p$, this means that $m=r=q^i$ for $i>0$ and $q$ a prime different from $p$. In particular, since $n=rp^{s+t}=mp^{k}$, we have $s+t=k$, so $H$ is coisotropic. In summary, if $\weakfixed{H}$ is noncontractible, then \begin{itemize} \item $\weakfixedspecific{\Delta_{t}}{rp^t}$ is not contractible, and \item $r=q^{i}$ for $i>0$ and $q$ a prime different from~$p$. \end{itemize} To finish, we need to know that $\weakfixedspecific{\Delta_{t}}{q^{i}p^{t}}$ is not contractible, a result provided by Proposition~\ref{prop: full Delta is problematic}. \end{proof} \end{document}
\begin{document} \title{An extension of the class of regularly varying functions} \begin{abstract} We define a new class of positive and Lebesgue measurable functions in terms of their asymptotic behavior, which includes the class of regularly varying functions. We also characterize it by transformations, corresponding to generalized moments when these functions are random variables. We study the properties of this new class and discuss their applications to Extreme Value Theory. {\it Keywords: asymptotic behavior; domains of attraction; extreme value theory; Karamata's representation theorem; Karamata's theorem; Karamata's tauberian theorem; measurable functions; von Mises' conditions; Peter and Paul distribution; regularly varying function} {\it AMS classification}: 26A42; 60F99; 60G70 \end{abstract} \section*{Introduction} The field of Extreme Value Theory (EVT) started to develop in the 1920s, concurrently with the development of modern probability theory by Kolmogorov, with the pioneers Fisher and Tippett (1928), who introduced the fundamental theorem of EVT, the Fisher-Tippett Theorem, giving the three possible types of limit distribution for the extremes (minimum or maximum). A few years later, in the 1930s, Karamata defined the notions of slowly varying and regularly varying (RV) functions, describing a specific asymptotic behavior of these functions, namely:\\ {\it Definition.} A Lebesgue-measurable function $U:\Rset^+\rightarrow\Rset^+$ is RV at infinity if, for all $t>0$, \begin{equation}\label{eq:000rv} \lim_{x\rightarrow\infty}\frac{U(xt)}{U(x)}=t^{\rho}\quad\textrm{for some $\rho\in\Rset$}, \end{equation} $\rho$ being called the tail index of $U$, and the case $\rho=0$ corresponding to the notion of slowly varying function. $U$ is RV at $0^+$ if \eqref{eq:000rv} holds when taking the limit $x\to0^+$. \\ More than a decade passed before links between EVT and RV functions appeared. 
Following the earlier works by Gnedenko (see \cite{Gnedenko1943}) and Feller (see \cite{feller21966}), who characterized the domains of attraction of the Fr\'echet and Weibull distributions using RV functions at infinity (Gnedenko doing so without Karamata theory), de Haan (1970) generalized these results using Karamata theory and completed them, providing a full solution for the case of Gumbel limits. Since then, much work has been done on EVT and RV functions, in particular in the multivariate case with the notion of multivariate regular variation (see e.g. \cite{deHaan}, \cite{deHaanFerreira}, \cite{resnick3}, \cite{Resnick2004}, and references therein). Nevertheless, the RV class may still be restrictive, particularly in practice. If the limit in \eqref{eq:000rv} does not exist, the standard results given for RV functions and used in EVT, e.g. the Karamata theorems, the von Mises conditions, etc., cannot be applied. Hence the natural question of extending this class and the EVT characterizations, for broader applications in view of (tail) modelling. We answer this concern in real analysis and EVT by constructing a (strictly) larger class of functions than the RV class, on which we generalize EVT results and provide conditions that are easy to check in practice. The paper is organized in two main parts. The first section defines our new large class of functions, described in terms of asymptotic behaviors that may violate \eqref{eq:000rv}. It provides its algebraic properties, as well as characteristic representation theorems, one being of Karamata type. In the second section, we discuss extensions to this class of other important Karamata theorems, and end with results on domains of attraction. Proofs of the results are given in the appendix. This study is the first of a series of two papers extending the class of regularly varying functions. It addresses the probabilistic analysis of our new class; the second paper will treat its statistical aspects. 
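To make the limitation concrete, consider the Peter and Paul distribution listed among the keywords: its tail admits a well-defined polynomial order of decay even though the limit \eqref{eq:000rv} fails. The short computation below is a standard illustration supplied for the reader; it is not one of the paper's numbered examples.

```latex
% Peter and Paul distribution: P(X = 2^k) = 2^{-k} for k = 1, 2, ...
% Its tail, for x >= 1, is
\[
\overline F(x) \;=\; \mathbb{P}(X>x) \;=\; 2^{-\lfloor \log_2 x\rfloor}.
\]
% Taking t = 3 in \eqref{eq:000rv} and writing u = \log_2 x, we get
\[
\frac{\overline F(3x)}{\overline F(x)}
  \;=\; 2^{-(\lfloor u+\log_2 3\rfloor-\lfloor u\rfloor)}
  \;\in\;\{2^{-1},\,2^{-2}\},
\]
% with both values attained for arbitrarily large x, since the exponent
% equals 2 when the fractional part of u is at least 2-\log_2 3, and
% equals 1 otherwise. Hence the limit in \eqref{eq:000rv} does not exist
% for t = 3, and \overline F is not RV. Nevertheless,
\[
\frac{\log \overline F(x)}{\log x}
  \;=\; -\,\frac{\lfloor \log_2 x\rfloor\,\log 2}{\log x}
  \;\xrightarrow[x\to\infty]{}\; -1,
\]
% so \overline F still decays at a well-defined polynomial rate.
```

Functions of this kind, excluded from RV yet possessing a well-defined polynomial order of decay, are precisely the examples motivating the larger class studied in the next section.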
\section{Study of a new class of functions} We focus on the new class $\M$ of positive and measurable functions with support $\Rset^+$, characterizing their behavior at $\infty$ with respect to polynomial functions. A number of properties of this class are studied and characterizations are provided. Further, variants of this class, considering asymptotic behaviors of exponential type instead of polynomial type, provide other classes, denoted by $\M_\infty$ and $\M_{-\infty}$, having properties and characterizations similar to those of $\M$. Let us introduce some notation. When using limits, we will distinguish between existing limits, namely finite or infinite ($\infty$, $-\infty$) ones, and non-existing ones. The notation a.s. (almost surely) in (in)equalities concerning measurable functions is omitted. Moreover, for any random variable (rv) $X$, we denote its distribution by $F_X(x)=P(X\leq x)$ and its tail of distribution by $\overline{F}_X=1-F_X$; the subscript $X$ is omitted when no confusion is possible. RV (respectively RV$_\rho$) denotes indifferently the class of regularly varying functions (respectively, with tail index $\rho$) or the property of being regularly varying (with tail index $\rho$). Finally, recall the notations $\min(a,b)=a\wedge b$ and $\max(a,b)=a\vee b$ that will be used, $\left\lfloor x\right\rfloor$ for the largest integer not greater than $x$ and $\left\lceil x\right\rceil$ for the smallest integer greater than or equal to $x$; $\log(x)$ denotes the natural logarithm of $x$. \subsection{The class $\mathcal{M}$} We introduce a new class $\mathcal{M}$ that we define as follows. 
\begin{defi}\label{eq:main:defi:classM} $\mathcal{M}$ is the class of positive and measurable functions $U$ with support $\Rset^+$, bounded on finite intervals, such that \begin{equation}\label{Mkappa} \exists ~\rho\in\Rset, \, \forall \epsilon > 0,\; \lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho+\epsilon}}=0\text{\quad and\quad}\lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho-\epsilon}}=\infty\,. \end{equation} \end{defi} On $\M$, we can define specific properties. \begin{propt}\label{propt:main:001}~ \begin{itemize} \item[(i)] For any $U\in\M$, $\rho$ defined in (\ref{Mkappa}) is unique, and denoted by $\rho_U$. \item[(ii)] Let $U,V\in\mathcal{M}$ such that $\rho_U>\rho_V$. Then $\displaystyle \lim_{x\rightarrow\infty}\frac{V(x)}{U(x)}=0$. \item[(iii)] For any $U,V\in\M$ and any $a> 0$, $aU+V\in\M$ with $\rho_{aU+V}=\rho_U\vee\rho_V$. \item[(iv)] If $U\in\M$ with $\rho_U$ defined in (\ref{Mkappa}), then $1/U\in\M$ with $\rho_{1/U}=-\rho_U$. \item[(v)] Let $U\in\M$ with $\rho_U$ defined in \eqref{Mkappa}. If $\rho_U<-1$, then $U$ is integrable on $\Rset^+$, whereas, if $\rho_U>-1$, $U$ is not integrable on $\Rset^+$.\\ Note that in the case $\rho_U=-1$, we can find examples of functions $U$ which are integrable and examples which are not. \item[(vi)] \emph{Sufficient condition for $U$ to belong to $\M$:} Let $U$ be a positive and measurable function with support $\Rset^+$, bounded on finite intervals. Then $$ -\infty < \lim_{x\rightarrow\infty} \frac{\log\left(U(x)\right)}{\log(x)} <\infty \quad \Longrightarrow \quad U\in \mathcal{M}\,. $$ \end{itemize} \end{propt} To simplify the notation, when no confusion is possible, we will denote $\rho_U$ by $\rho$. \begin{rmq} {\it Link to the notion of stochastic dominance.} Let $X$ and $Y$ be rv's with distributions $F_X$ and $F_Y$, respectively, with support $\Rset^+$. We say that $X$ is smaller than $Y$ in the usual stochastic order (see e.g. 
\cite{shaked}) if \begin{equation}\label{eq:20140926:001} \overline{F}_X(x)\leq\overline{F}_Y(x)\quad\textrm{for all $x\in\Rset^+$.} \end{equation} This relation is also interpreted as the first-order stochastic dominance of $Y$ over $X$, since $F_X\ge F_Y$ (see e.g. \cite{HadarRussell1971}). Let $X$, $Y$ be rv's such that $\overline{F}_X=U$ and $\overline{F}_Y=V$, where $U, V\in \M$ and $\rho_U>\rho_V$. Then Properties~\ref{propt:main:001}, (ii), implies that there exists $x_0>0$ such that, for any $x\geq x_0$, $V(x)< U(x)$, hence that \eqref{eq:20140926:001} holds at infinity with the roles of $X$ and $Y$ exchanged, i.e. that $X$ strictly dominates $Y$ at infinity. Furthermore, the same argument shows that a relation like \eqref{eq:20140926:001} is satisfied at infinity for any functions $U$ and $V$ in $\M$ satisfying $\rho_U>\rho_V$. It means that the notion of first-order stochastic dominance or stochastic order, defined for rv's, can be extended to functions in $\M$. In this way, we can say that if $\rho_U>\rho_V$, then $U$ strictly dominates $V$ at infinity. \end{rmq} Now let us define, for any positive and measurable function $U$ with support $\Rset^+$, \begin{equation}\label{eq:main:000c} \kappa_U:=\sup\left\{r\in\Rset : \int_1^{\infty}x^{r-1}U(x)dx<\infty\right\}\textrm{.} \end{equation} Note that $\kappa_U$ may take the values $\pm \infty$. \begin{defi}\label{eq:main:defi:kappa} For $U\in\mathcal{M}$, $\kappa_U$ defined in \eqref{eq:main:000c} is called the $\mathcal{M}$-index of $U$. \end{defi} \begin{rmq}\label{rmq:20140725:001}~ \begin{enumerate} \item If the function $U$ considered in \eqref{eq:main:000c} is bounded on finite intervals, then the integral involved can be computed on any interval $[a,\infty)$ with $a>1$. 
\item When assuming $U=\overline{F}$, $F$ being a continuous distribution, the integral in (\ref{eq:main:000c}) reduces (by changing the order of integration), for $r>0$, to an expression involving a moment of a rv: $$ \int_1^{\infty}x^{r-1}\overline{F}(x)dx=\frac{1}{r}\int_1^{\infty}\left(x^r-1\right)dF(x)=\frac{1}{r}\int_1^{\infty}x^rdF(x)-\frac{\overline{F}(1)}{r}\textrm{.} $$ \item We have $\kappa_U \geq 0$ for any tail $U=\overline{F}$ of a distribution $F$.\\ Indeed, suppose there exists $\overline{F}$ such that $\kappa_{\overline F}<0$, and denote $\kappa_{\overline F}$ by $\kappa$. Since $\kappa<\kappa/2<0$, we have by definition of $\kappa$ that $\displaystyle \int_1^{\infty}x^{\kappa/2-1}\overline{F}(x)dx=\infty$. But, since $\overline F\le 1$ and $\kappa/2-1<-1$, we can also write $\displaystyle \int_1^{\infty}x^{\kappa/2-1}\overline{F}(x)dx\leq\int_1^{\infty}x^{\kappa/2-1}dx<\infty$. Hence the contradiction. \item A similar statement to Properties \ref{propt:main:001}, (iii), has been proved for RV functions (see \cite{BinghamGoldieTeugels}). \end{enumerate} \end{rmq} Let us develop a simple example, also useful for the proofs. \begin{exm}\label{lem:main:003} Let $\alpha\in\Rset$ and let $U_\alpha$ be the function defined on $(0,\infty)$ by $$ U_\alpha(x):=\left\{ \begin{array}{lcl} 1\text{,} & & 0< x<1 \\ x^\alpha\text{,} & & x\geq1\,. \end{array} \right. $$ Then $U_\alpha\in\mathcal{M}$ with $\rho_{U_\alpha}=\alpha$ defined in (\ref{Mkappa}), and its $\mathcal{M}$-index satisfies $\kappa_{U_\alpha}=-\alpha$. \end{exm} To check that $U_\alpha\in\mathcal{M}$, it is enough to find a $\rho_{U_\alpha}$, since its uniqueness follows by Properties \ref{propt:main:001}, (i). 
Choosing $\rho_{U_\alpha}=\alpha$, we obtain, for any $\epsilon>0$, that $$ \lim_{x\rightarrow\infty}\frac{U_\alpha(x)}{x^{\rho_{U_\alpha}+\epsilon}}= \lim_{x\rightarrow\infty}\frac{1}{x^{\epsilon}}=0 \quad\text{and} \quad \lim_{x\rightarrow\infty}\frac{U_\alpha(x)}{x^{\rho_{U_\alpha}-\epsilon}}= \lim_{x\rightarrow\infty}x^{\epsilon}=\infty\textrm{.} $$ Hence $U_\alpha$ satisfies (\ref{Mkappa}) with $\rho_{U_\alpha}=\alpha$.\\ Now, noticing that $$ \int_1^{\infty}x^{s-1}U_\alpha(x)dx=\int_1^{\infty}x^{s+\alpha-1}dx <\infty \quad \Longleftrightarrow \quad s+\alpha<0 $$ it follows that $\kappa_{U_\alpha}$ defined in \eqref{eq:main:000c} satisfies $\kappa_{U_\alpha}=-\alpha$. $\Box$ As a consequence of the definition of the $\M$-index $\kappa$ on $\mathcal{M}$, we can prove that Properties \ref{propt:main:001}, (vi), is not only a sufficient but also a necessary condition, thus obtaining a first characterization of $\mathcal{M}$. \begin{teo}\label{teo:main:001} \textbf{First characterization of $\mathcal{M}$} \\ Let $U$ be a positive measurable function with support $\Rset^+$ and bounded on finite intervals. Then \begin{equation}\label{eq:main:000b} U\in\mathcal{M} \,\text{with} \, \rho_U= - \tau \quad\Longleftrightarrow \quad\lim_{x\rightarrow\infty} \frac{\log\left(U(x)\right)}{\log(x)}= - \tau \end{equation} where $\rho_U$ is defined in \eqref{Mkappa}. \end{teo} \begin{exm} The function $U$ defined by $U(x)=x^{\sin(x)}$ does not belong to $\M$ since the limit expressed in (\ref{eq:main:000b}) does not exist. \end{exm} Other properties on $\M$ can be deduced from Theorem \ref{teo:main:001}, namely: \begin{propt}\label{propt:20140630} Let $U$, $V$ $\in$ $\M$ with $\rho_U$ and $\rho_V$ defined in \eqref{Mkappa}, respectively. Then: \begin{itemize} \item[(i)] The product \ $U\,V\in\M$ with $\rho_{U\,V}=\rho_U+\rho_V$. \item[(ii)] If $\rho_U\leq\rho_V<-1$ or $\rho_U<-1<0\leq\rho_V$, then the convolution $U\ast V\in\M$ with $\rho_{U\ast V}=\rho_V$.
If $-1<\rho_U\leq\rho_V$, then $U\ast V\in\M$ with $\rho_{U\ast V}=\rho_U+\rho_V+1$. \item[(iii)] If $\displaystyle \lim_{x\rightarrow\infty} V(x)=\infty$, then $U\circ V\in\M$ with $\rho_{U\circ V}=\rho_U\,\rho_V$. \end{itemize} \end{propt} \begin{rmq}\label{rmq:20140724:001}~ A similar statement to Properties \ref{propt:20140630}, (ii), has been proved when restricting the functions $U$ and $V$ to RV probability density functions, showing first $\displaystyle \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{U(x)+V(x)}=1$ (see \cite{BinghamGoldieOmey2006}). In contrast, we propose a direct proof, under an integrability condition on the function in $\M$ having the lowest $\rho$. When $U$ and $V$ are tails of distributions belonging to RV, with the same tail index, Feller (\cite{feller21966}) proved that the convolution of $U$ and $V$ also belongs to this class and has the same tail index as $U$ and $V$. \end{rmq} We can give a second way to characterize $\mathcal{M}$ using $\kappa_U$ defined in \eqref{eq:main:000c}. \begin{teo}\label{teo:main:002} \textbf{Second characterization of $\mathcal{M}$} \\ Let $U$ be a positive measurable function with support $\Rset^+$, bounded on finite intervals. Then \begin{eqnarray} U\in\mathcal{M_\fty} \, \text{with associated\ } \, \rho_U &\Longleftrightarrow & \kappa_U=-\rho_U \quad\label{teo2a} \end{eqnarray} where $ \rho_U$ satisfies \eqref{Mkappa} and $\kappa_U$ satisfies \eqref{eq:main:000c}. \end{teo} Here is another characterization of $\mathcal{M}_\fty$, of Karamata type. \begin{teo}\label{teo:main:003} \textbf{Representation Theorem of Karamata type for $\mathcal{M}_\fty$}~ \begin{itemize} \item[(i)] Let $U\in\mathcal{M}_\fty$ with finite $\rho_U$ defined in (\ref{Mkappa}).
There exist $b>1$ and functions $\alpha$, $\beta$ and $\epsilon$ satisfying, as $x\rightarrow\infty$, \begin{equation}\label{alpha-eps-beta} \alpha(x)/\log(x) \,\, \rightarrow \,\, 0 \, , \qquad \epsilon(x) \,\, \rightarrow \,\, 1 \, , \qquad \beta(x) \,\, \rightarrow \,\, \rho_U, \end{equation} such that, for $ x\geq b$, \begin{equation}\label{eq:main:000d} U(x)=\exp\left\{\alpha(x)+\epsilon(x)\,\int_b^x\frac{\beta(t)}{t}dt\right\}\textrm{.} \end{equation} \item[(ii)] Conversely, if there exists a positive measurable function $U$ with support $\Rset^+$, bounded on finite intervals, satisfying (\ref{eq:main:000d}) for some $b>1$ and functions $\alpha$, $\beta$, and $\epsilon$ satisfying \eqref{alpha-eps-beta}, then $U\in\mathcal{M}_\fty$ with finite $\rho_U$ defined in (\ref{Mkappa}). \end{itemize} \end{teo} \begin{rmq}~ \begin{enumerate} \item Another way to express \eqref{eq:main:000d} is the following: \begin{equation}\label{eq:teo:main:003:004} U(x)=\exp\left\{\alpha(x)+\frac{\epsilon(x)\,\log(x)}{x}\,\int_b^x\beta(t)dt\right\}\textrm{.} \end{equation} \item The function $\alpha$ defined in Theorem \ref{teo:main:003} is not necessarily bounded, contrary to the case of the Karamata representation for RV functions. \end{enumerate} \end{rmq} \begin{exm}\label{exm:main:intro:002} Let $U\in\M$ with $\M$-index $\kappa_U$. If there exists $c>0$ such that $U<c$, then $\kappa_U\geq0$. \end{exm} Indeed, since we have $\displaystyle \lim_{x\rightarrow\infty}\frac{\log\left(1/U(x)\right)}{\log(x)}\geq \lim_{x\rightarrow\infty}\frac{\log\left(1/c\right)}{\log(x)}=0\textrm{,} $ applying Theorem \ref{teo:main:001} allows us to conclude. $\Box$ \subsection{Extension of the class $\mathcal{M}$} We extend the class $\mathcal{M}$ introducing two other classes of functions.
\begin{defi}\label{eq:main:defi:classMextension} $\mathcal{M}_\infty$ and $\mathcal{M}_{-\infty}$ are the classes of positive measurable functions $U$ with support $\Rset^+$, bounded on finite intervals, defined as \begin{equation}\label{M+} \M_\infty :=\left\{ U ~ : ~ \forall \rho \in\Rset, \, \lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho}}=0\right\} \end{equation} and \begin{equation}\label{M-} \M_{-\infty} :=\left\{ U ~ : ~ \forall\rho\in\Rset, \, \lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho}}=\infty\right\} \end{equation} \end{defi} Notice that it would be enough to consider $\rho<0$ ($\rho>0$, respectively) in (\ref{M+}) ((\ref{M-}), respectively), and that $\M_\infty$, $\M_{-\infty}$ and $\M_\fty$ are disjoint. We denote by $\M_{\pm\infty}$ the union $\M_\infty\cup\M_{-\infty}$. We obtain similar properties for $\M_\infty$ and $\M_{-\infty}$, as the ones given for $\M$, namely: \begin{propt}\label{propt:main:001extension}~ \begin{itemize} \item[(i)] $U\in\M_\infty \quad \Longleftrightarrow \quad 1/U\in\M_{-\infty}$. \item[(ii)] If $ \quad(U,V)\textrm{\ $\in$\ }\M_{-\infty}\times\M\textrm{\ or\ }\M_{-\infty}\times\M_\infty\textrm{\ or\ }\M\times\M_\infty\textrm{,} $ then $\displaystyle \, \lim_{x\rightarrow\infty}\frac{V(x)}{U(x)}=0$. \item[(iii)] If $U,V\in\M_{\infty}$ ($\M_{-\infty}$ respectively), then $U+V\in\M_{\infty}$ ($\M_{-\infty}$ respectively). \end{itemize} \end{propt} The index $\kappa_U$ defined in (\ref{eq:main:000c}) may also be used to analyze $\M_\infty$ and $\M_{-\infty}$. It can take infinite values, as can be seen in the following example. \begin{exm} Consider $U$ defined on $\Rset^+$ by \ $U(x):=e^{-x}$. Then $U\in\M_\infty$ with $\kappa_U=\infty$. Choosing $U(x)=e^{x}$ leads to $U\in\M_{-\infty}$ with $\kappa_U=-\infty$. \end{exm} A first characterization of $\M_\infty$ and $\M_{-\infty}$ can be provided, as done for $\M$ in Theorem~\ref{teo:main:001}. 
\begin{teo}\label{teo:main:001extension} \textbf{First characterization of $\M_\infty$ and $\M_{-\infty}$} \\ Let $U$ be a positive measurable function with support $\Rset^+$, bounded on finite intervals. Then we have \begin{equation}\label{eq:main:000bextension01} U\in\M_\infty \quad \Longleftrightarrow \quad\lim_{x\rightarrow\infty} \frac{\log\left(U(x)\right)}{\log(x)}= - \infty \end{equation} and \begin{equation}\label{eq:main:000bextension02} U\in\M_{-\infty} \quad \Longleftrightarrow \quad\lim_{x\rightarrow\infty} \frac{\log\left(U(x)\right)}{\log(x)}= \infty . \end{equation} \end{teo} \begin{rmq}\label{rmq:20140920:001} {\it Link to a result from Daley and Goldie}. If we restrict $\M\cup\M_{\pm\infty}$ to tails of distributions, then combining Theorems \ref{teo:main:001} and \ref{teo:main:001extension} and Theorem 2 in \cite{DaleyGoldie2006} provides another characterization, namely $$ U\in\M\cup\M_{\pm\infty}\quad\Longleftrightarrow\quad X_U\in\M^{DG} $$ where $X_U$ is a rv with tail $U$ and $\M^{DG}$ is the set of non-negative rv's $X$ having the property introduced by Daley and Goldie (see \cite{DaleyGoldie2006}) that $$ \kappa(X\wedge Y)=\kappa(X)+\kappa(Y) $$ for independent rv's $X$ and $Y$. We notice that $\kappa(X)$, defined in \cite{DaleyGoldie2006} (called there the moment index) for rv's, coincides with the $\M$-index of $U$ when $U$ is the tail of the distribution of $X$. \end{rmq} An application of Theorem \ref{teo:main:001extension} provides properties similar to Properties \ref{propt:20140630}, namely: \begin{propt}\label{propt:20140630ext}~ \begin{itemize} \item[(i)] Let $(U,V)$ $\in$ $\M_{\infty}\times\M_\infty$ or $\M_{\pm\infty}\times\M$ or $\M_{-\infty}\times\M_{-\infty}$. Then $U\cdot V\in\M_\infty$ or $\M_{\pm\infty}$ or $\M_{-\infty}$, respectively. \item[(ii)] Let $(U,V)$ $\in$ $\M_{\infty}\times\M$ with $\rho_V\geq0$ or $\rho_V<-1$, then $U\ast V\in\M$ with $\rho_{U\ast V}=\rho_V$.
Let $(U,V)\in\M_\infty\times\M_\infty$, then $U\ast V\in\M_\infty$. Let $(U,V)$ $\in$ $\M_{-\infty}\times\M$ or $\M_{-\infty}\times\M_{\pm\infty}$, then $U\ast V\in\M_{-\infty}$. \item[(iii)] Let $U$ $\in$ $\M_{\pm\infty}$ and $V$ $\in$ $\M$ such that $\displaystyle \lim_{x\rightarrow\infty} V(x)=\infty$ or $V$ $\in$ $\M_{-\infty}$, then $U\circ V\in\M_{\pm\infty}$. \end{itemize} \end{propt} Extending Theorems~\ref{teo:main:002}-\ref{teo:main:003} to $\M_\infty$ and $\M_{-\infty}$ provides the next results. For the converses of the result corresponding to Theorem~\ref{teo:main:002}, extra conditions are required. \begin{teo}\label{teo:main:002extension}~ Let $U$ be a positive measurable function with support $\Rset^+$, bounded on finite intervals, with $\kappa_U$ defined in (\ref{eq:main:000c}). Then \begin{itemize} \item[(i)] \begin{itemize} \item[(a)] $ U\in\M_\infty\quad \Longrightarrow \quad \kappa_U=\infty$. \item[(b)] $U$ continuous, $\displaystyle \lim_{x\rightarrow\infty} U(x)=0$, and $\kappa_U=\infty\quad \Longrightarrow \quad U\in\M_\infty$. \end{itemize} \item[(ii)] \begin{itemize} \item[(a)] $U\in\M_{-\infty} \quad \Longrightarrow \quad \kappa_U=-\infty$. \item[(b)] $U$ continuous and non-decreasing, and $\kappa_U=-\infty\quad \Longrightarrow \quad U\in\M_{-\infty}$. \end{itemize} \end{itemize} \end{teo} \begin{rmq}\label{rmq:20141107:001}~ \begin{enumerate} \item In (i)-(b), the condition $\kappa_U=\infty$ might intuitively appear sufficient to prove that $U\in\M_{\infty}$. This is not true, as the following example shows; it illustrates, for instance, that the continuity assumption is needed. Indeed, we can check that the function $U$ defined on $\Rset^+$ by $$ U(x):=\left\{ \begin{array}{ll} 1/x & \mbox{if} \quad x\in \underset{n\in \Nset\backslash \{0\}}{\bigcup} (n; n+1/n^n) \\ e^{-x} & \mbox{otherwise}, \end{array} \right.
$$ satisfies $\kappa_U=\infty$ and $\displaystyle \lim_{x\rightarrow\infty} U(x)=0$, but is not continuous and does not belong to $\M_{\infty}$. \item The proof of (i)-(b) is based on an integration by parts, isolating the term $t^rU(t)$. The continuity of $U$ is needed, otherwise we would end up with an infinite number of jumps of the type $U(t^+)-U(t^-)(\neq 0)$ on $\Rset^+$. \end{enumerate} \end{rmq} \begin{teo}\label{teo:main:003extension} \textbf{Representation Theorem of Karamata Type for $\M_\infty$ and $\M_{-\infty}$} \begin{itemize} \item[(i)] If $U\in\M_{\infty}$, then there exist $b>1$ and a positive measurable function $\alpha$ satisfying \begin{equation}\label{eq:main:alpha} \textrm{$\displaystyle \alpha(x)/\log(x) \, \underset{x\to\infty}{\rightarrow} \, \infty$,} \end{equation} such that, $\forall x\ge b$, \begin{equation}\label{eq:main:000dextension01} U(x)=\exp\left\{-\alpha(x)\right\}. \end{equation} \item[(ii)] If $U\in\M_{-\infty}$, then there exist $b>1$ and a positive measurable function $\alpha$ satisfying (\ref{eq:main:alpha}) such that, $\forall x\ge b$, \begin{equation}\label{eq:main:000dextension02} U(x)=\exp\left\{\alpha(x)\right\}. \end{equation} \item[(iii)] Conversely, if there exists a positive function $U$ with support $\Rset^+$, bound\-ed on finite intervals, satisfying (\ref{eq:main:000dextension01}) or (\ref{eq:main:000dextension02}), respectively, for some positive function $\alpha$ satisfying (\ref{eq:main:alpha}), then $U\in\mathcal{M}_\infty$ or $U\in\mathcal{M}_{-\infty}$, respectively. \end{itemize} \end{teo} \subsection{On the complement set of $\M\cup\M_{\pm\infty}$} Considering measurable functions $U:\Rset^+\to\Rset^+$, we have, applying Theorems \ref{teo:main:001} and \ref{teo:main:001extension}, that $U$ belongs to $\M$, $\M_\infty$ or $\M_{-\infty}$ if and only if $\displaystyle \lim_{x\rightarrow\infty} \frac{\log\left(U(x)\right)}{\log(x)}$ exists, finite or infinite. 
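This trichotomy can be checked numerically. The following sketch is our own illustration (the sample functions are chosen for the purpose, not taken from the text): it evaluates the classifying ratio $\log(U(x))/\log(x)$ at a large $x$ for a representative of $\M$, of $\M_\infty$, of $\M_{-\infty}$, and of the complement set studied next.

```python
import math

def log_ratio(U, x):
    # The classifying quantity: U belongs to M, M_infinity or M_{-infinity}
    # exactly when log(U(x))/log(x) has a limit (finite or infinite).
    return math.log(U(x)) / math.log(x)

x = 1e6
r_M     = log_ratio(lambda t: t ** -2.0, x)         # tends to -2: in M, M-index 2
r_Minf  = log_ratio(lambda t: math.exp(-t), 50.0)   # tends to -infinity: in M_infinity
r_Mminf = log_ratio(lambda t: math.exp(t), 50.0)    # tends to +infinity: in M_{-infinity}
r_osc   = log_ratio(lambda t: t ** math.sin(t), x)  # equals sin(x): no limit
```

For the exponential examples the evaluation point is kept moderate ($x=50$) only to avoid floating-point under- and overflow; the ratio still visibly diverges.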
Using the notions (see for instance \cite{BinghamGoldieTeugels}) of \emph{lower order} of $U$, defined by \begin{equation}\label{eq:20140609:001} \mu(U):=\mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}\textrm{,} \end{equation} and \emph{upper order} of $U$, defined by \begin{equation}\label{eq:20140609:001bis} \nu(U):=\mathop{\overline{\mathrm{lim}}}_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}\textrm{,} \end{equation} we can rewrite this characterization simply by $\mu(U)=\nu(U)$. Hence, the complement set of $\M\cup\M_{\pm\infty}$ in the set of functions $U:\Rset^+\to\Rset^+$, denoted by $\Oset$, can be written as $$ \Oset:= \{U:\Rset^+\rightarrow\Rset^+~: \mu(U)<\nu(U)\}. $$ This set is nonempty: $\Oset\neq\emptyset$, as we are going to see through examples. A natural question is whether the \PBdH\ (see Theorem \ref{tpbdh} in Appendix \ref{ProofsofresultsconcerningOset}) applies when restricting $\Oset$ to tails of distributions. The answer follows. \begin{teo}\label{teo:20140801:001}~\\ Any distribution of a rv having a tail in $\Oset$ does not satisfy \PBdH. \end{teo} Examples of distributions $F$ satisfying $\mu(\overline{F})<\nu(\overline{F})$ are not well known. A non-explicit one was given by Daley (see \cite{Daley2001}) when considering rv's with discrete support (see \cite{DaleyGoldie2006}). We will provide a couple of explicit parametrized examples of functions in $\Oset$ which include tails of distributions with discrete support. These functions can be extended easily to continuous positive functions not necessarily monotone, for instance by adapting polynomials given by Karamata (see \cite{Karamata1931a001}). These examples are detailed further in Appendix \ref{ProofsofresultsconcerningOset}. \begin{exm}\label{exm:20140802:001}~ Let $\alpha>0$, $\beta\in\Rset$ such that $\beta\neq-1$, and $x_a>1$. Let us consider the increasing sequence defined by $x_n=x_a^{(1+\alpha)^n}$, $n\geq1$, well-defined because $x_a>1$.
Note that $x_n\rightarrow\infty$ as $n\rightarrow\infty$. The function $U$ defined by \begin{equation}\label{eq:20140610:003} U(x):=\left\{ \begin{array}{ll} 1\textrm{,} & 0\leq x<x_1 \\ x_n^{\alpha(1+\beta)}\textrm{,} & x\in[x_n,x_{n+1})\textrm{,} \quad \forall n\geq1 \end{array} \right. \end{equation} belongs to $\Oset$, with $$ \left\{ \begin{array}{cc} \displaystyle\mu(U)=\frac{\alpha(1+\beta)}{1+\alpha}\quad\textrm{and}\quad\nu(U)=\alpha(1+\beta)\textrm{,} & \textrm{if $1+\beta>0$} \\ & \\ \displaystyle\mu(U)=\alpha(1+\beta)\quad\textrm{and}\quad\nu(U)=\frac{\alpha(1+\beta)}{1+\alpha}\textrm{,} & \textrm{if $1+\beta<0$.} \end{array} \right. $$ Moreover, if $1+\beta<0$, then $U$ is a tail of distribution whose associated rv has finite moments of order lower than $-\alpha(1+\beta)\big/(1+\alpha)$. \end{exm} \begin{exm}\label{exm:20140802:002}~ Let $c>0$ and $\alpha\in\Rset$ such that $\alpha\neq0$. Let $(x_n)_{n\in\Nset}$ be defined by $x_1=1$ and $x_{n+1}=2^{x_n/c}$, $n\geq1$, well-defined for $c>0$. Note that $x_n\to\infty$ as $n\to\infty$. The function $U$ defined by $$ U(x):=\left\{ \begin{array}{ll} 1 & 0\leq x<x_1 \\ 2^{\alpha x_n} & x_n\leq x<x_{n+1}\textrm{,} \quad \forall n\geq1 \end{array} \right. $$ belongs to $\Oset$, with $$ \left\{ \begin{array}{cc} \displaystyle\mu(U)=\alpha c\quad\textrm{and}\quad\nu(U)=\infty\textrm{,} & \textrm{if $\alpha>0$} \\ & \\ \displaystyle\mu(U)=-\infty\quad\textrm{and}\quad\nu(U)=\alpha c\textrm{,} & \textrm{if $\alpha<0$.} \end{array} \right. $$ Moreover, if $\alpha<0$, then $U$ is a tail of distribution whose associated rv has finite moments of order lower than $-\alpha c$. \end{exm} \section{Extension of RV results} In this section, results that are well known and fundamental in Extreme Value Theory, such as Karamata's relations and Karamata's Tauberian Theorem, are discussed on $\mathcal{M}$. A key tool for the extension of these standard results to $\mathcal{M}$ is given by the characterizations of $\M$ in Theorems \ref{teo:main:001} and \ref{teo:main:002}.
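Before turning to these extensions, a small numerical aside of ours (not part of the original text): even for a genuinely RV function, the characterizing ratio $\log(U(x))/\log(x)$ of Theorem \ref{teo:main:001} converges only logarithmically fast when a nontrivial slowly varying factor is present, as sketched below for $U(x)=x^2\log(x)$, which is RV of index $2$.

```python
import math

# U(x) = x^2 * log(x) is regularly varying with index 2; its classifying
# ratio equals 2 + log(log(x))/log(x), which tends to 2 only logarithmically.
def U(x):
    return x * x * math.log(x)

ratios = [math.log(U(10.0 ** k)) / math.log(10.0 ** k) for k in (2, 4, 8, 16)]
# roughly [2.33, 2.24, 2.16, 2.10]: still about 0.1 above the index at x = 1e16
```

This slow convergence is worth keeping in mind whenever the log-ratio is used to read off $\rho_U$ from numerical values.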
First notice the relation between the class $\M$ introduced in the previous section and the class RV defined in (\ref{eq:000rv}). \begin{prop}\label{prop:20140329:strictsubset}~ RV$_\rho$ ($\rho\in\Rset$) is a strict subset of $\M$. \end{prop} The proof of this claim comes from the Karamata relation (see \cite{Karamata1933}) given, for all RV function $U$ with index $\rho\in\Rset$, by \begin{equation}\label{eq:kar:001:karamata} \lim_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}=\rho\textrm{,} \end{equation} which implies, using Properties \ref{propt:main:001}, (vi), that $U\in\M$ with $\M$-index $\kappa_U=-\rho$. Moreover, RV $\neq \M$: indeed, for $t>0$, $ \displaystyle \lim_{x\rightarrow\infty}\frac{U(tx)}{U(x)} $ does not necessarily exist for $U\in\M$, whereas it does for an RV function $U$. For instance the function defined on $\Rset^+$ by $U(x)=2+\sin(x)$ is not RV, but $ \displaystyle \lim_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}=0\textrm{,} $ hence $U\in\M$. \subsection{Karamata's Theorem} We will focus on the well-known Karamata Theorem developed for RV (see \cite{Karamata1930} and e.g. \cite{feller21966,BinghamGoldieTeugels}) to analyze its extension to $\M$. Let us recall it, borrowing the version given in \cite{deHaan}. \begin{teo}\textbf{Karamata's Theorem (\cite{Karamata1930}; e.g. \cite{deHaan})}\label{teo:KaramatasTheorem}~ Suppose $U:\Rset^+\rightarrow\Rset^+$ is Lebesgue-summable on finite intervals. Then \begin{itemize} \item[(K1)] $$ U\in\textrm{RV}_\rho\textrm{, $\rho>-1$} \quad\Longleftrightarrow\quad \lim_{x\rightarrow\infty}\frac{xU(x)}{\int_0^xU(t)dt}=\rho+1>0\textrm{.} $$ \item[(K2)] $$ U\in\textrm{RV}_\rho\textrm{, $\rho<-1$} \quad \Longleftrightarrow \quad \lim_{x\rightarrow\infty}\frac{xU(x)}{\int_x^{\infty}U(t)dt}=-\rho-1>0\textrm{.} $$ \item[(K3)] \begin{itemize} \item[(i)] $\displaystyle U\in\textrm{RV}_{-1} \quad \Longrightarrow \quad \lim_{x\rightarrow\infty}\frac{xU(x)}{\int_0^xU(t)dt}=0$.
\item[(ii)] $\displaystyle U\in\textrm{RV}_{-1} \mbox{ and } \int_0^{\infty}U(t)dt<\infty \quad\Longrightarrow\quad \lim_{x\rightarrow\infty}\frac{xU(x)}{\int_x^{\infty}U(t)dt}=0$. \end{itemize} \end{itemize} \end{teo} \begin{rmq} The converse of (K3), (i), is wrong in general. A counterexample can be given by the Peter and Paul distribution which satisfies $\displaystyle \lim_{x\rightarrow\infty}\frac{xU(x)}{\int_0^xU(t)dt}=0$ but is not $\textrm{RV}_{-1}$. We will come back to this, in more detail, in $\S$~\ref{subsection2.1.2}. \end{rmq} Theorem~\ref{teo:KaramatasTheorem} is based on the existence of certain limits. We can extend some of the results to $\M$, even when these limits do not exist, by replacing them with more general expressions. \subsubsection{Karamata's Theorem on $\M$} Let us introduce the following conditions, in order to state the generalization of the Karamata Theorem to $\mathcal{M}$: \begin{eqnarray*} &(C1r)& \frac{x^rU(x)}{\int_b^xt^{r-1}U(t)dt}\in\M\textrm{ with $\M$-index 0, }\, i.e.\quad \lim_{x\rightarrow\infty}\left(\frac{\log\left(\int_b^xt^{r-1}U(t)dt\right)}{\log(x)}-\frac{\log\left(U(x)\right)}{\log(x)}\right)=r \\ &(C2r)& \frac{x^rU(x)}{\int_x^\infty t^{r-1}U(t)dt}\in\M\textrm{ with $\M$-index 0, }\, i.e. \quad \lim_{x\rightarrow\infty}\left(\frac{\log\left(\int_x^{\infty}t^{r-1}U(t)dt\right)}{\log(x)}-\frac{\log\left(U(x)\right)}{\log(x)}\right)=r\\ && \end{eqnarray*} \begin{teo}\label{prop:kar:001}~\textbf{Generalization of the Karamata Theorem to $\mathcal{M}$} Let $U:\Rset^+\rightarrow\Rset^+$ be Lebesgue-summable on finite intervals, and let $b>0$. We have, for $r\in\Rset$, \begin{itemize} \item[(K1$^*$)] $$ U\in\M\textrm{ with $\M$-index $(-\rho)$ such that $\rho+r>0$} \quad\Longleftrightarrow\quad \left\{ \begin{array}{l} \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^xt^{r-1}U(t)dt\right)}{\log(x)}=\rho+r>0\\ ~\\ U \mbox{ satisfies } (C1r) \end{array} \right.
$$ \item[(K2$^*$)] $$ U\in\M\textrm{ with $\M$-index $(-\rho)$ such that $\rho+r < 0$} \quad\Longleftrightarrow\quad \left\{ \begin{array}{l} \lim_{x\rightarrow\infty}\frac{\log\left(\int_x^{\infty}t^{r-1}U(t)dt\right)}{\log(x)}=\rho+r<0\textrm{}\\ ~\\ U \mbox{ satisfies } (C2r) \end{array} \right. $$ \item[(K3$^*$)] $$ U\in\M\textrm{ with $\M$-index $(-\rho)$ such that $\rho+r = 0$} \quad\Longleftrightarrow\quad \left\{ \begin{array}{l} \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^xt^{r-1}U(t)dt\right)}{\log(x)}=\rho+r=0\\ ~\\ U \mbox{ satisfies } (C1r) \end{array} \right. $$ \end{itemize} \end{teo} This theorem then provides a fourth characterization of $\M$. Note that if $r=1$, we can assume $b\ge 0$, as in the original Karamata's Theorem. \begin{rmq}~ \begin{enumerate} \item Note that (K3$^*$) provides an equivalence, contrary to (K3). \item Assuming that $U$ satisfies the conditions $(C2r)$ and \begin{equation}\label{cdtionIntFini} \int_1^{\infty}t^{r} U(t)dt \, <\, \infty \end{equation} we can propose a characterization of $U\in\M$ with $\M$-index $(r+1)$, namely $$ U\in\M\textrm{ with $\M$-index $(r+1)$} \quad\Longleftrightarrow\quad \lim_{x\rightarrow\infty} \frac{\log\left(\int_x^{\infty}t^{r} U(t)dt\right)}{\log(x)}=0\textrm{.} $$ This is the generalization of (K3) in Theorem~\ref{teo:KaramatasTheorem}, providing not only a necessary condition but also a sufficient one for $U$ to belong to $\M$, under the conditions $(C2r)$ and \eqref{cdtionIntFini}. \end{enumerate} \end{rmq} \subsubsection{Illustration using Peter and Paul distribution} \label{subsection2.1.2} The Peter and Paul distribution is a typical example of a function which is not RV. It is defined by (see e.g.
\cite{Goldie1978}, \cite{EmbrechtsOmey1984}, \cite{embrechts1997} or \cite{Mikosch}) \begin{equation}\label{def-PP} F(x):=1-\sum_{k\geq1\textrm{: }2^k>x}2^{-k}\textrm{,\quad $x>0$.} \end{equation} Let us illustrate the characterization theorems when applied to the \pyp; we do it for instance for Theorems \ref{teo:main:001} and \,\ref{prop:kar:001}, proving that this distribution belongs to $\M$. \begin{prop}~ The \pyp\ does not belong to RV, but to $\M$ with $\M$-index $1$. \end{prop} This proposition can be proved using Theorem \ref{teo:main:001} or Theorem \ref{prop:kar:001}. To illustrate the application of these two theorems, we develop the proof here and not in the appendix. \begin{itemize} \item[(i)] \emph{Application of Theorem \ref{teo:main:001}} For $x\in[2^n;2^{n+1})$ ($n\geq0$), we have, using \eqref{def-PP}, $ \displaystyle \overline{F}(x)= \sum_{k\geq n+1}2^{-k}= 2^{-n}\textrm{,} $ from which we deduce that $\displaystyle \frac{n}{n+1}<-\frac{\log\left(\overline{F}(x)\right)}{\log(x)}\leq 1$, hence $\displaystyle \lim_{x\rightarrow\infty}\frac{\log\left(\overline{F}(x)\right)}{\log(x)}=-1$, which by Theorem~\ref{teo:main:001} is equivalent to $$ \overline{F}\in\mathcal{M} \quad\text{with} \quad \M-\text{index}\;1. $$ \item[(ii)] \emph{Application of Theorem \ref{prop:kar:001}} Let us prove that $$ \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^x\overline{F}(t)dt\right)}{\log(x)}=0. $$ Suppose $2^n\leq x<2^{n+1}$ and consider $a\in\Nset$ such that $a<n$. Choose w.l.o.g. $b=2^a$. Then the \pyp \, \eqref{def-PP} satisfies $$ \int_b^x\overline{F}(t)dt=\sum_{k=a}^{n-1}\int_{2^k}^{2^{k+1}}\!\!\! \!\! \overline{F}(t)dt+\int_{2^n}^{x} \! \overline{F}(t)dt =\sum_{k=a}^{n-1}2^{-k}(2^{k+1}-2^k)+(x-2^n)2^{-n}=n-a+x2^{-n}-1.
$$ Hence $$ \frac{\log(n-a+x2^{-n}-1)}{(n+1)\,\log(2)}\leq\frac{\log\left(\int_b^x\overline{F}(t)dt\right)}{\log(x)}\leq\frac{\log(n-a+x2^{-n}-1)}{n\,\log(2)} $$ and, since $1\leq 2^{-n} x<2$, we obtain $ \displaystyle \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^x\overline{F}(t)dt\right)}{\log(x)}=0 $.\\ Moreover, we have $$ \lim_{x\rightarrow\infty}\frac{\log\left(\frac{x\overline{F}(x)}{\int_b^x\overline{F}(t)dt}\right)}{\log(x)}= 1+\lim_{x\rightarrow\infty}\frac{\log\left(\overline{F}(x)\right)}{\log(x)}-\lim_{x\rightarrow\infty}\frac{\log\left(\int_b^x\overline{F}(t)dt\right)}{\log(x)}=1. $$ Theorem \ref{prop:kar:001} then allows us to conclude that \; $\displaystyle \textrm{$\overline{F}\in\mathcal{M}$ with $\mathcal{M}$-index $1$} $. $\Box$ \end{itemize} Note that the original Karamata Theorem (Theorem \ref{teo:KaramatasTheorem}) does not allow one to decide whether the \pyp \ is RV or not, since the converse of (i) in (K3) does not hold, contrary to Theorem \ref{prop:kar:001}. Indeed, although we can prove that $$ \lim_{x\rightarrow\infty}\frac{x\,\overline{F}(x)}{\int_b^x\overline{F}(t)dt}=\lim_{x,n\rightarrow\infty}\frac{x\,2^{-n}}{n-a+x2^{-n}-1}=0, $$ Theorem \ref{teo:KaramatasTheorem} does not imply that $\overline{F}$ is $\textrm{RV}_{-1}$. \subsection{Karamata's Tauberian Theorem} Let us recall the well-known Karamata Tauberian Theorem which deals with Laplace-Stieltjes (L-S) transforms and RV functions.
The L-S transform of a positive, right continuous function $U$ with support $\Rset^+$ and with locally bounded variation, is defined by \begin{equation}\label{eq:20140328:001} \widehat{U}(s) :=\int_{(0; \infty)}e^{-xs}dU(x) \textrm{,\quad $s>0$.} \end{equation} \begin{teo}\label{teo:KaramatasTauberianTheorem}\textbf{Karamata's Tauberian Theorem (see \cite{Karamata1931})} If $U$ is a non-decreasing right continuous function with support $\Rset^+$ and satisfying $U(0^+)=0$, with finite L-S transform $\widehat{U}$, then, for $\alpha>0$, $$ U\in\textrm{RV}_{\alpha}\textrm{\ \ at infinity}\quad\Longleftrightarrow\quad\widehat{U}\in\textrm{RV}_{\alpha}\textrm{\ \ at $0^+$.} $$ \end{teo} Now we present the main result of this subsection, which only partly extends the Karamata Tauberian Theorem to $\M$. \begin{teo}\label{teo:KaramatasTauberianTheoremExtension}~ Let $U$ be a continuous function with support $\Rset^+$ and locally bounded variation, satisfying $U(0^+)=0$. Let $g$ be defined on $\Rset^+$ by $g(x)=1/x$. Then, for any $\alpha>0$, \begin{itemize} \item[(i)] $\displaystyle U\in\M\textrm{\ \ with $\M$-index $(-\alpha)$} \quad \Longrightarrow \quad \widehat{U}\circ g \in\M\textrm{\ \ with $\M$-index $(-\alpha)$} $. \item[(ii)] $ \left\{ \begin{array}{l} \widehat{U}\circ g \in\M\textrm{\ \ with $\M$-index $(-\alpha)$} \\ \textrm{and\, $\exists\, \eta\in [0;\alpha)$ \, : \,$x^{-\eta}U(x)$ concave} \end{array} \right. \quad \Longrightarrow \quad U\in\M\textrm{\ \ with $\M$-index $(-\alpha)$} $. \end{itemize} \end{teo} \subsection{Results concerning domains of attraction} \label{Section2.3} Von Mises (see \cite{vonMises1936}) formulated some sufficient conditions to guarantee that the maximum of a sample of independent and identically distributed (iid) rv's, when normalized, converges to a non-degenerate limit distribution belonging to the class of extreme value distributions. In this subsection we analyze these conditions on $\mathcal{M}$.
Before presenting the well-known von Mises' conditions, let us recall the theorem of the three limit types. \begin{teo} (see for instance \cite{FisherTippett1928}, \cite{Gnedenko1943}) \label{teo:FT-G}~ Let $(X_n,n\in\Nset)$ be a sequence of iid rv's and $\displaystyle M_n:=\max_{1\le i\le n} X_i$. If there exist constants $(a_n,n\in\Nset)$ and $(b_n,n\in\Nset)$ with $a_n>0$ and $b_n\in\Rset$ such that \begin{equation}\label{eq:20140727:010} P\left(\frac{M_n-b_n}{a_n}\leq x\right)=F^n(a_n x+b_n)\underset{n\rightarrow\infty}{\rightarrow}G(x) \end{equation} with $G$ a non degenerate distribution function, then $G$ is one of the three following types: \begin{eqnarray*} \textrm{Gumbel} & : & \Lambda(x):=\exp\left\{-e^{-x}\right\}\textrm{,\quad $x\in\Rset$} \\ \textrm{Fr\'{e}chet} & : & \Phi_{\alpha}(x):=\exp\left\{-x^{-\alpha}\right\}\textrm{,\quad $x\geq0$,\, for some $\alpha>0$} \\ \textrm{Weibull} & : & \Psi_{\alpha}(x):=\exp\left\{-(-x)^{-\alpha}\right\}\textrm{,\quad $x<0$, \, for some $\alpha<0$} \end{eqnarray*} \end{teo} The set of distributions $F$ satisfying \eqref{eq:20140727:010} is called the domain of attraction of $G$ and denoted by $DA(G)$. In what follows, we refer to the domains of attraction related to distributions with support $\Rset^+$ only, namely the Fr\'{e}chet class and the subclass of the Gumbel class, denoted by $DA(\Lambda_\infty)$, consisting of distributions $F\in DA(\Lambda)$ with endpoint $x^*:=\sup\{x:F(x)<1\}=\infty$. Now, let us recall the von Mises' conditions. \begin{enumerate} \item[(vM1)] Suppose that $F$, continuous and differentiable, satisfies $F'>0$ for all $x\geq x_0$, for some $x_0>0$. If there exists $\alpha>0$, such that \begin{equation*} \lim_{x\rightarrow\infty}\frac{x\,F'(x)}{\overline{F}(x)}=\alpha\textrm{,} \end{equation*} then $F\in DA(\Phi_{\alpha})$. \item[(vM2)] Suppose that $F$, with infinite endpoint, is continuous and twice differentiable for all $x\geq x_0$, with $x_0>0$.
If \begin{equation*} \lim_{x\rightarrow\infty}\left(\frac{\overline{F}(x)}{F'(x)}\right)'=0\textrm{,} \end{equation*} then $F\in DA(\Lambda_{\infty})$. \item[(vM2bis)] Suppose that $F$, with finite endpoint $x^*$, is continuous and twice differentiable for all $x\geq x_0$, with $x_0>0$. If $$ \lim_{x\rightarrow x^*}\left(\frac{\overline{F}(x)}{F'(x)}\right)'=0\textrm{,} $$ then $F\in DA(\Lambda)\setminus DA(\Lambda_{\infty})$. \end{enumerate} It is then straightforward to deduce from the conditions (vM1) and (vM2) the next results. \begin{prop}\label{prop:main:vonmises:001}~ Let $F$ be a distribution. \begin{itemize} \item[(i)] If $F$ satisfies $\displaystyle \lim_{x\rightarrow\infty}\frac{x\,F'(x)}{\overline{F}(x)}=\alpha>0$, then $\overline{F}\in\mathcal{M}$ with $\mathcal{M}$-index $\alpha$. \item[(ii)] If $F$ satisfies $\displaystyle \lim_{x\rightarrow\infty}\left(\frac{\overline{F}(x)}{F'(x)}\right)'=0$, then $\overline{F}\in\M_\infty$. \end{itemize} \end{prop} So the natural question is how to relate $\mathcal{M}$ or $\M_\infty$ to the domains of attraction $DA(\Phi_{\alpha})$ and $DA(\Lambda_{\infty})$. To answer it, let us recall three results on those domains of attraction that will be needed. \begin{teo}(see e.g. \cite{deHaanFerreira}, Theorem 1.2.1) \label{teo:mises:dehaanferreira0}~ Let $\alpha>0$. The distribution function $F\in DA(\Phi_{\alpha})$ if and only if $x^*=\sup\{x:F(x)<1\}=\infty$ and $\overline{F}\in\textrm{RV}_{-\alpha}$. \end{teo} \begin{cor} De Haan (1970) (see \cite{deHaan}, Corollary 2.5.3) \label{cor:mises:dehaan0}~ If $F\in DA(\Lambda_{\infty})$, then $\displaystyle \lim_{x\rightarrow\infty} \frac{\log\left(\overline{F}(x)\right)}{\log(x)} = -\infty$.
\end{cor} \begin{teo} Gnedenko (see \cite{Gnedenko1943}, Theorem 7) \label{teo:mises:gnedenko:20140412:001}~ The distribution function $F\in DA(\Lambda_{\infty})$ if and only if there exists a continuous function $A$ such that $A(x)\rightarrow0$ as $x\rightarrow\infty$ and, for all $x\in\Rset$, \begin{equation}\label{eq:mises:20140412:010bis} \lim_{z\rightarrow\infty}\frac{1-F(z\,(1+A(z)\,x))}{1-F(z)}=e^{-x}\textrm{.} \end{equation} \end{teo} De Haan (\cite{deHaan1971}) noticed that Gnedenko did not use the continuity of $A$ to prove this theorem. These results allow us to formulate the next statement. \begin{teo}\label{teo:20140411:001}~ \begin{itemize} \item[(i)] $\forall\alpha>0$, $F\in DA(\Phi_{\alpha})\ \Longrightarrow\ \overline{F}\in\M$ with $\M$-index $\alpha$, and the converse does not hold: $$ \displaystyle \{F\in DA(\Phi_{\alpha})\textrm{, }\alpha>0\}\, \subsetneq \; \{F : ~\overline{F}\in\mathcal{M}\}\textrm{.} $$ \item[(ii)] $\displaystyle DA(\Lambda_{\infty})\; \subsetneq \; \{F : ~\overline{F}\in\mathcal{M}_\infty\}$. \end{itemize} \end{teo} Let us give some examples illustrating the strict subset inclusions. \begin{exm} {\it The Peter and Paul distribution.} To show that $DA(\Phi_{\alpha}) \neq \{F : ~\overline{F}\in\mathcal{M}\textrm{ with $\M$-index $\alpha$}\}$, $\alpha>0$, in (i), it is enough to notice that the Peter and Paul distribution does not belong to $DA(\Phi_1)$, but its associated tail of distribution belongs to $\mathcal{M}$ with $\M$-index $1$. \end{exm} \begin{exm}\label{exm:20140925:001} To illustrate (ii), we consider the distribution $F$ defined in a left neighborhood of $\infty$ by \begin{equation}\label{eq:20140526:002} F(x):=1-\exp\left(-\lfloor x\rfloor\,\log(x)\right)\textrm{.} \end{equation} Then it is straightforward to see that $F \in \{F : ~\overline{F}\in\mathcal{M}_\infty\}$, by Theorem \ref{teo:main:003extension} and the fact that $\displaystyle \lim_{x\rightarrow\infty} \frac{\lfloor x\rfloor\,\log(x)}{\log(x)} = \infty\, .
$ We can check that $F\not \in DA(\Lambda_{\infty})$. The proof, by contradiction, is given in Appendix \ref{ProofsofSection2.3}. \end{exm} \begin{rmq}~ Lemma 2.4.3 in \cite{deHaan} says that if $F\in DA(\Lambda_{\infty})$, then a continuous and increasing distribution function $G$ satisfying \begin{equation}\label{eq:20140526:001} \lim_{x\rightarrow\infty}\frac{\overline{F}(x)}{\overline{G}(x)}=1\textrm{,} \end{equation} exists. Is it possible to extend this result to $\M$? The answer is no. To see this, it is enough to consider the distribution $F$ of Example~\ref{exm:20140925:001}, defined in \eqref{eq:20140526:002}, which satisfies $\overline{F}\in\M_\infty$ and $F\not\in DA(\Lambda_{\infty})$, and to check that De Haan's result does not hold for it. Indeed, suppose that for $F$ defined in \eqref{eq:20140526:002}, there exists a continuous and increasing distribution function $G$ satisfying \eqref{eq:20140526:001}, which amounts to supposing that there exists a positive and continuous function $h$ such that $G(x)=1-\exp\left(-h(x)\,\log(x)\right)$ ($x>0$), in particular in a neighborhood of $\infty$. So \eqref{eq:20140526:001} may be rewritten as $$ \lim_{x\rightarrow\infty}\frac{\overline{F}(x)}{\overline{G}(x)}= \lim_{x\rightarrow\infty}\exp\left(-\left(\lfloor x\rfloor-h(x)\right)\,\log(x)\right)= \lim_{x\rightarrow\infty} x^{h(x)-\lfloor x\rfloor}= 1\textrm{.} $$ However, since $\lfloor x\rfloor$ cannot be approximated by any continuous function, the previous limit cannot hold. \end{rmq} \section{Conclusion} We introduced a new class of positive functions with support $\Rset^+$, denoted by $\M$, strictly larger than the class of RV functions at infinity. We extended to $\M$ some well-known results given on the RV class, which are crucial to study extreme events. These new tools allow us to expand EVT beyond RV. This class satisfies a number of algebraic properties and its members $U$ can be characterized by a unique real number, called the $\M$-index $\kappa_U$.
Four characterizations of $\M$ were provided, one of them being the extension to $\M$ of the well-known Karamata's Theorem restricted to the RV class. Furthermore, the cases $\kappa_U=\infty$ and $\kappa_U=-\infty$ were analyzed and their corresponding classes, denoted by $\M_\infty$ and $\M_{-\infty}$ respectively, were identified and studied, as done for $\M$. The three sets $\M_{\infty}$, $\M_{-\infty}$ and $\M$ are disjoint. Tails of distributions not belonging to $\M\cup\M_{\pm\infty}$ were proved not to satisfy \PBdH. Explicit examples of such functions and their generalization were given. Extensions to $\M$ of the Karamata Theorems were discussed in the second part of the paper. Moreover, we proved that the sets of tails of distributions belonging to the domains of attraction of Fr\'echet and Gumbel (with distribution support $\Rset^+$) are strictly included in $\M$ and $\M_\infty$, respectively. Note that any result obtained here can be applied to functions with finite support, i.e. finite endpoint $x^*$, by using the change of variable $y=1/(x^*-x)$ for $x<x^*$. After having addressed the probabilistic analysis of $\M$, we will look at its statistical counterpart. An interesting question is how to build estimators of the $\M$-index, which could also be used on RV functions since $\textrm{RV}\subseteq\M$. A companion paper addressing this question is in progress. Finally, we will develop a multivariate version of $\M$, to represent and describe relations among random variables: dependence structure, tail dependence, conditional independence, and asymptotic independence. \section*{Acknowledgments} Meitner Cadena acknowledges the support of SWISS LIFE through its ESSEC research program on 'Consequences of the population ageing on the insurances loss'. Partial support from RARE-318984 (an FP7 Marie Curie IRSES Fellowship) is also kindly acknowledged. \begin{thebibliography}{99} \bibitem{BalkemadeHaan1974} \textsc{A. Balkema, L.
de~Haan}, Residual Life Time at Great Age. \emph{Ann. Probab.} {\bf 2}, (1974) 792-804. \bibitem{BasrakDavisMikosch2002} \textsc{B. Basrak, R. Davis, T. Mikosch}, A Characterization of Multivariate Regular Variation. \emph{Ann. Appl. Probab.} {\bf 12}, (2002) 908-920. \bibitem{BinghamGoldieOmey2006} \textsc{N. Bingham, C. Goldie, E. Omey}, Regularly varying probability densities. \emph{Publications de l'Institut Math\'{e}matique} {\bf 80}, (2006) 47-57. \bibitem{BinghamGoldieTeugels} \textsc{N. Bingham, C. Goldie, J. Teugels}, Regular Variation. \emph{Cambridge University Press} (1989). \bibitem{Daley2001} \textsc{D. Daley}, The Moment Index of Minima. \emph{J. Appl. Probab.} {\bf 38}, (2001) 33-36. \bibitem{DaleyGoldie2006} \textsc{D. Daley, C. Goldie}, The moment index of minima (II). \emph{Stat. \& Probab. Letters} {\bf 76}, (2006) 831-837. \bibitem{deHaan} \textsc{L. de~Haan}, On regular variation and its applications to the weak convergence of sample extremes. \emph{Mathematical Centre Tracts, {\bf 32}} (1970). \bibitem{deHaan1971} \textsc{L. de~Haan}, A Form of Regular Variation and Its Application to the Domain of Attraction of the Double Exponential Distribution. \emph{Z. Wahrsch. V. Geb.} {\bf 17}, (1971) 241-258. \bibitem{deHaanFerreira} \textsc{L. de~Haan, A. Ferreira}, Extreme Value Theory. An Introduction. \emph{Springer}, (2006). \bibitem{deHaanResnick1981} \textsc{L. de~Haan, S. Resnick}, On the observation closest to the origin. \emph{Stoch. Proc. Applic.} {\bf 11}, (1981) 301-308. \bibitem{embrechts1997} \textsc{P. Embrechts, C. Kl\"{u}ppelberg, T. Mikosch}, Modelling Extremal Events for Insurance and Finance. \emph{Springer Verlag} (1997). \bibitem{EmbrechtsOmey1984} \textsc{P. Embrechts, E. Omey}, A property of longtailed distributions. \emph{J. Appl. Probab.} {\bf 21}, (1984) 80-87. \bibitem{feller21966} \textsc{W. Feller}, An introduction to probability theory and its applications. Vol II. \emph{J. Wiley \& Sons} (1966). 
\bibitem{FisherTippett1928} \textsc{R. Fisher, L. Tippett}, Limiting forms of the frequency distribution of the largest or smallest number of a sample. \emph{Proc. Cambridge Phil. Soc.} {\bf 24}, (1928) 180-190. \bibitem{Gnedenko1943} \textsc{B. Gnedenko}, Sur La Distribution Limite Du Terme Maximum D'Une S\'{e}rie Al\'{e}atoire. \emph{Ann. Math.} {\bf 44}, (1943) 423--453. \bibitem{Goldie1978} \textsc{C. Goldie}, Subexponential distributions and dominated-variation tails. \emph{J. Appl. Probab.} {\bf 15}, (1978) 440-442. \bibitem{HadarRussell1971} \textsc{J. Hadar, W. Russell}, Stochastic Dominance and Diversification. \emph{J. Econ. Theory} {\bf 3}, (1971) 288-305. \bibitem{Hill1975} \textsc{B. Hill}, A Simple General Approach to Inference About the Tail of a Distribution. \emph{Ann. Stat.} {\bf 3}, (1975) 1163-1174. \bibitem{Karamata1930} \textsc{J. Karamata}, Sur un mode de croissance r\'{e}guli\`{e}re des fonctions. \emph{Mathematica (Cluj)} {\bf 4}, (1930) 38-53. \bibitem{Karamata1931} \textsc{J. Karamata}, Neuer Beweis und Verallgemeinerung der Tauberschen S\"{a}tze, welche die Laplacesche und Stieltjessche Transformation betreffen. \emph{J. R. A. Math.} {\bf 1931}, (1931) 27-39. \bibitem{Karamata1931a001} \textsc{J. Karamata}, Sur le rapport entre les convergences d'une suite de fonctions et de leurs moments avec application \`{a} l'inversion des proc\'{e}d\'{e}s de sommabilit\'{e}. \emph{Studia Math.} {\bf 3}, (1931) 68-76. \bibitem{Karamata1933} \textsc{J. Karamata}, Sur un mode de croissance r\'{e}guli\`{e}re. Th\'{e}or\`{e}mes fondamentaux. \emph{Bulletin SMF} {\bf 61}, (1933) 55-62. \bibitem{Lindskog} \textsc{F. Lindskog}, Multivariate extremes and regular variation for stochastic processes. \emph{Ph.D. thesis, ETH Z\"urich, available online} (2004). \bibitem{Mikosch} \textsc{T. Mikosch}, Non-Life Insurance Mathematics. An Introduction with Stochastic Processes. \emph{Springer} (2006). \bibitem{Pickands1975} \textsc{J. 
Pickands}, Statistical Inference Using Extreme Order Statistics. \emph{Ann. Stat.} {\bf 3}, (1975) 119-131. \bibitem{resnick3} \textsc{S. Resnick}, Extreme Values, Regular Variation, and Point Processes. \emph{Springer-Verlag} (1987). \bibitem{Resnick2004} \textsc{S. Resnick}, On the Foundations of Multivariate Heavy-Tail Analysis. \emph{J. Appl. Probab.} {\bf 41}, (2004) 191-212. \bibitem{shaked} \textsc{M. Shaked, G. Shanthikumar}, Stochastic Orders. \emph{Springer} (2007). \bibitem{Stankovic1990} \textsc{B. Stankovi\'{c}}, Regularly Varying Distributions. \emph{Public. Inst. Math.} {\bf 48}, (1990) 119-128. \bibitem{vonMises1936} \textsc{R. von~Mises}, La distribution de la plus grande de $n$ valeurs. \emph{Revue Math. Union Interbalkanique} {\bf 1}, (1936) 141-160. \end{thebibliography} \appendix \section{Proofs of results given in Section 1} \subsection{Proofs of results concerning $\M$} \begin{proof}[Proof of Theorem\,\ref{teo:main:001}] The sufficient condition given in Theorem\,\ref{teo:main:001} comes from Properties\,\ref{propt:main:001}, (vi). So it remains to prove its necessary condition, namely that \begin{equation} \label{teo:main:001:lemma} \lim_{x\rightarrow\infty}-\frac{\log\left(U(x)\right)}{\log(x)}=-\rho_U \end{equation} for $U\in\mathcal{M}$ with finite $\rho_U$ defined in (\ref{Mkappa}). Let $\epsilon>0$ and define $V$ by $$ V(x)=\left\{ \begin{array}{ll} 1\text{,} & 0< x<1 \\ x^{\rho_U+\epsilon}\text{,} & x\geq1 \end{array} \right. $$ Applying Example \ref{lem:main:003} with $\alpha=\rho_U+\epsilon$ with $\epsilon>0$ implies that $\rho_V=\rho_U+\epsilon$, hence $\rho_V>\rho_U$. 
Using Properties \ref{propt:main:001}, (ii), then provides that $$ \lim_{x\rightarrow\infty}\frac{U(x)}{V(x)}=\lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho_U+\epsilon}}=0\textrm{,} $$ so, for $n\in\Nset^*$, there exists $x_0>1$ such that, for all $x\geq x_0$, $$ \frac{U(x)}{x^{\rho_U+\epsilon}}\leq\frac{1}{n}\, , \quad \textit{i.e.}\quad n\,U(x)\leq x^{\rho_U+\epsilon}\textrm{.} $$ Applying the logarithm function to this last inequality and dividing it by $-\log(x)$, $x \geq x_0$, gives \begin{eqnarray*} -\frac{\log(n)}{\log(x)}-\frac{\log(U(x))}{\log(x)} & \geq & -\rho_U-\epsilon\textrm{,} \end{eqnarray*} hence $$ -\frac{\log(U(x))}{\log(x)}\geq-\rho_U-\epsilon $$ and then $$ \mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}-\frac{\log(U(x))}{\log(x)}\geq-\rho_U-\epsilon\textrm{.} $$ We consider now the function $$ W(x)=\left\{ \begin{array}{ll} 1\text{,} & 0< x<1 \\ x^{\rho_U-\epsilon}\text{,} & x\geq1\textrm{} \end{array} \right. $$ with $\epsilon>0$ and proceed in the same way to obtain that, for any $\epsilon>0$, $\displaystyle \mathop{\overline{\mathrm{lim}}}_{x\rightarrow\infty}-\frac{\log(U(x))}{\log(x)}\leq-\rho_U+\epsilon$. Hence, $\forall \epsilon>0$, we have $$ -\rho_U-\epsilon\leq\mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}-\frac{\log(U(x))}{\log(x)}\leq\mathop{\overline{\mathrm{lim}}}_{x\rightarrow\infty}-\frac{\log(U(x))}{\log(x)}\leq-\rho_U+\epsilon $$ from which the result follows, $\epsilon$ being arbitrary. \end{proof} Now we introduce a lemma, on which the proof of Theorem \ref{teo:main:002} will be based. \begin{lem}\label{lem:main:004} Let $U\in\mathcal{M}$ with associated $\mathcal{M}$-index $\kappa_U$ defined in (\ref{eq:main:000c}). Then necessarily $\kappa_U=-\rho_U$, where $\rho_U$ is defined in (\ref{Mkappa}). \end{lem} \begin{proof}[Proof of Lemma \ref{lem:main:004}] Let $U\in\mathcal{M}$ with $\mathcal{M}$-index $\kappa_U$ given in \eqref{eq:main:000c} and $\rho_U$ defined in \eqref{Mkappa}.
By Theorem \ref{teo:main:001}, we have $ \displaystyle \lim_{x\rightarrow\infty}\frac{\log(U(x))}{\log(x)}=\rho_U $. Hence, for all $\epsilon>0$ there exists $x_0>1$ such that, for $x\geq x_0$, $ U(x)\leq x^{\rho_U+\epsilon} $. Multiplying this last inequality by $x^{r-1}$, $r\in\Rset$, and integrating it on $[x_0;\infty)$, we obtain $$ \int_{x_0}^\infty x^{r-1}U(x)dx\leq\int_{x_0}^\infty x^{\rho_U+\epsilon+r-1}dx $$ which is finite if $r<-\rho_U-\epsilon$, hence $\kappa_U\geq-\rho_U-\epsilon$. Symmetrically, the lower bound $U(x)\geq x^{\rho_U-\epsilon}$, valid for $x$ large enough, makes the integral infinite whenever $r>-\rho_U+\epsilon$, hence $\kappa_U\leq-\rho_U+\epsilon$. Taking $\epsilon\downarrow0$ leads to $\kappa_U=-\rho_U$. \end{proof} \begin{proof}[Proof of Theorem \ref{teo:main:002}] ~ The necessary condition is proved by Lemma\,\ref{lem:main:004}. The sufficient condition follows from the assumption that $\rho_U$ satisfies \eqref{Mkappa}. \end{proof} \begin{proof}[Proof of Theorem \ref{teo:main:003}]~ \begin{itemize} \item \emph{Proof of (i)} For $U\in\mathcal{M}$, Theorems \ref{teo:main:001} and \ref{teo:main:002} give that \begin{equation}\label{eq:main:teoRepresentation:000} \lim_{x\rightarrow\infty}-\frac{\log(U(x))}{\log(x)}=-\rho_U=\kappa_U\quad\textrm{with $\rho_U$ defined in (\ref{Mkappa}) and $\kappa_U$ in (\ref{eq:main:000c}).} \end{equation} Introducing a function $\gamma$ such that \begin{equation}\label{eq:main:teoRepresentation:000bis} \lim_{x\rightarrow\infty} \gamma(x) = 0 \end{equation} we can write, for some $b>1$, applying L'H\^{o}pital's rule to the ratio, \begin{equation}\label{eq:main:teoRepresentation:001} \lim_{x\rightarrow\infty}\left(\gamma(x)+\frac{\int_b^x\frac{\log(U(t))}{\log(t)}\,\frac{dt}{t}}{\log(x)}\right)= \lim_{x\rightarrow\infty}\frac{\log(U(x))}{\log(x)}=-\kappa_U\textrm{.} \end{equation} \begin{itemize} \item[$\triangleright$] Suppose $\kappa_U\neq0$.
Then we deduce from (\ref{eq:main:teoRepresentation:000}) and (\ref{eq:main:teoRepresentation:001}) that \begin{equation}\label{eq:teo:main:003:002} \lim_{x\rightarrow\infty}\frac{\log(U(x))}{\gamma(x)\,\log(x)+\int_b^x\frac{\log(U(t))}{t\log(t)}\,dt}=1\textrm{.} \end{equation} Hence, defining the function $\displaystyle \epsilon_U(x):=\frac{\log(U(x))}{\gamma(x)\,\log(x)+\int_b^x\frac{\log(U(t))}{t\log(t)}\,dt}$, for $x\ge b$, we can express $U$, for $x\ge b$, as $$ U(x) = \exp\left\{\alpha_U(x) + \epsilon_U(x)\,\int_b^x\frac{\beta_U(t)}{t}\,dt \right\} $$ \begin{equation}\label{betaU-const} \text{where} \quad \alpha_U(x):=\epsilon_U(x)\,\gamma(x)\,\log(x) \quad \text{and} \quad \beta_U(x):=\frac{\log(U(x))}{\log(x)}.\quad \end{equation} It is then straightforward to check that the functions $\alpha_U$, $\beta_U$ and $\epsilon_U$ satisfy the conditions given in Theorem\,\ref{teo:main:003}. Indeed, by \eqref{eq:main:teoRepresentation:000bis} and \eqref{eq:teo:main:003:002}, $ \displaystyle \lim_{x\rightarrow\infty}\frac{\alpha_U(x)}{\log(x)}=\lim_{x\rightarrow\infty}\epsilon_U(x)\,\gamma(x)=0 $. Using \eqref{eq:main:teoRepresentation:000}, we obtain $ \displaystyle \lim_{x\rightarrow\infty}\beta_U(x)=\lim_{x\rightarrow\infty}\frac{\log(U(x))}{\log(x)}=-\kappa_U=\rho_U\textrm{.} $ Finally, by \eqref{eq:teo:main:003:002}, we have $ \displaystyle \lim_{x\rightarrow\infty}\epsilon_U(x)=1\,. $ \item[$\triangleright$] Now suppose $\kappa_U=0$. We want to prove \eqref{eq:main:000d} for some functions $\alpha$, $\beta$, and $\epsilon$ satisfying \eqref{alpha-eps-beta}. Notice that \eqref{eq:main:teoRepresentation:000} with $\kappa_U=0$ allows us to write that $\displaystyle \lim_{x\rightarrow\infty}\frac{\log(x\,U(x))}{\log(x)}=1$. So applying Theorem\,\ref{teo:main:001} to the function $V$ defined by $V(x)=xU(x)$ gives that $\displaystyle V \in\mathcal{M}$ with $\rho_V=-\kappa_V=1$.
Since $\kappa_V \neq 0$, we can proceed in the same way as previously and obtain a representation for $V$ of the form \eqref{eq:main:000d}, namely, for $d>1$, $\forall x\ge d$, $$ V(x)=\exp\left\{\alpha_V(x)+\epsilon_V(x)\,\int_{d}^x\frac{\beta_V(t)}{t}\, dt \right\} $$ where $\alpha_V$, $\beta_V$, $\epsilon_V$ satisfy the conditions of Theorem\,\ref{teo:main:003} and $\displaystyle \beta_V(x)=\frac{\log(V(x))}{\log(x)}$ (see \eqref{betaU-const}). Hence we have, for $x\ge d$, \begin{eqnarray*} U(x)&=& \frac{V(x)}{x} =\exp\left\{-\log(x)+\alpha_V(x)+\epsilon_V(x)\,\int_{d}^x\frac{\log(t\;U(t))}{t\;\log(t)}\, dt \right\} \\ & = & \exp\left\{\alpha_V(x)+(\epsilon_V(x)-1)\,\log(x)-\epsilon_V(x)\,\log(d)+\epsilon_V(x)\,\int_{d}^x\frac{\log(U(t))}{t\;\log(t)}dt\right\}\textrm{.} \end{eqnarray*} Noticing that $\displaystyle \lim_{x\rightarrow\infty}\frac{\alpha_V(x)+(\epsilon_V(x)-1)\,\log(x)-\epsilon_V(x)\,\log(d)}{\log(x)}=0 $, we obtain that $U$ satisfies \eqref{eq:main:000d} when setting, for $x\ge d$, $\displaystyle \alpha_U(x):=\alpha_V(x)+(\epsilon_V(x)-1)\,\log(x)-\epsilon_V(x)\,\log(d)$, $\displaystyle \beta_U(x):=\frac{\log(U(x))}{\log(x)}$ and $\displaystyle \epsilon_U:=\epsilon_V$. \end{itemize} \item \emph{Proof of (ii)} Let $U$ be a positive function with support $\Rset^+$, bounded on finite intervals. Assume that $U$ can be expressed as \eqref{eq:main:000d} for some functions $\alpha$, $\beta$, and $\epsilon$ satisfying (\ref{alpha-eps-beta}). We are going to check the sufficient condition given in Properties \ref{propt:main:001}, (vi), to prove that $U\in\mathcal{M}$.
Since $ \displaystyle \frac{\log(U(x))}{\log(x)}=\frac{\alpha(x)}{\log(x)}+\epsilon(x)\frac{\int_b^x\frac{\beta(t)}{t}dt}{\log(x)} $ and since, via L'H\^{o}pital's rule, $$ \lim_{x\rightarrow\infty}\frac{\int_b^x\frac{\beta(t)}{t}dt}{\log(x)}=\lim_{x\rightarrow\infty}\frac{\beta(x)/x}{1/x}=\lim_{x\rightarrow\infty}\beta(x)\textrm{,} $$ the limits of $\alpha$, $\beta$, and $\epsilon$ allow us to conclude. \end{itemize} \end{proof} \begin{proof}[Proof of Properties \ref{propt:main:001}] ~ \begin{itemize} \item {\it Proof of (i)} Let us prove this property by contradiction. Suppose there exist $\rho$ and $\rho'$, with $\rho'<\rho$, both satisfying \eqref{Mkappa}, for $U\in\mathcal{M}$. Choosing $\epsilon=(\rho-\rho')/2$ in \eqref{Mkappa} gives $$ \lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho'+\epsilon}}=0\textrm{\ \ \ and \ \ }\lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho-\epsilon}}=\lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho'+\epsilon}}=\infty\textrm{,} $$ hence the contradiction. \item {\it Proof of (ii)} Choosing $\epsilon=(\rho_U-\rho_V)/2$, we can write $$ \frac{V(x)}{U(x)}=\frac{V(x)}{x^{\rho_V+\epsilon}}\,\frac{x^{\rho_V+\epsilon}}{U(x)} =\frac{V(x)}{x^{\rho_V+\epsilon}}\,\left(\frac{U(x)}{x^{\rho_U-\epsilon}}\right)^{-1} $$ from which we deduce (ii). \item {\it Proof of (iii)} Let $U,V\in\M$, $a>0$, $\epsilon>0$ and suppose w.l.o.g. that $\rho_U\leq\rho_V$. Since $\rho_V-\rho_U\geq0$, writing $ \displaystyle \frac{aU(x)}{x^{\rho_V\pm\epsilon}} $ $ \displaystyle =\frac{a}{x^{\rho_V-\rho_U}}\frac{U(x)}{x^{\rho_U\pm\epsilon}} $ gives $ \displaystyle \lim_{x\rightarrow\infty}\frac{aU(x)+V(x)}{x^{\rho_V+\epsilon}}=0 $ and \newline $ \displaystyle \lim_{x\rightarrow\infty}\frac{aU(x)+V(x)}{x^{\rho_V-\epsilon}}=\infty $; we thus conclude that $\rho_{aU+V}=\rho_U\vee\rho_V$.
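As an aside, the identity $\rho_{aU+V}=\rho_U\vee\rho_V$ just proved can be checked numerically on pure powers. The following sketch is an illustration added for this note (the sample values $\rho_U=-2$, $\rho_V=1.5$, $a=3$ are arbitrary, not taken from the paper); it estimates the index through the ratio $\log(aU(x)+V(x))/\log(x)$ evaluated at a large point.

```python
import math

def M_index_estimate(f, x):
    """Estimate rho_f as log(f(x)) / log(x) at a large point x."""
    return math.log(f(x)) / math.log(x)

# Illustrative sample values: U(x) = x^{rho_U}, V(x) = x^{rho_V}, a > 0.
rho_U, rho_V, a = -2.0, 1.5, 3.0
U = lambda x: x ** rho_U
V = lambda x: x ** rho_V
W = lambda x: a * U(x) + V(x)   # W = aU + V

# The dominant power wins: the estimated index of W is max(rho_U, rho_V).
print(M_index_estimate(W, 1e8))  # close to 1.5
print(M_index_estimate(U, 1e8))  # close to -2.0
```

For pure powers the estimate is essentially exact, since the lower-order term is negligible at large $x$; for general members of $\M$ the convergence in $x$ is only logarithmic.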
\item {\it Proof of (iv)} It is straightforward since \eqref{Mkappa} can be rewritten as $$ \lim_{x\rightarrow\infty}\frac{1/U(x)}{x^{-\rho_U-\epsilon}} =\infty\textrm{\ \ \ and \ \ }\lim_{x\rightarrow\infty}\frac{1/U(x)}{x^{-\rho_U+\epsilon}} =0 \textrm{.} $$ \item {\it Proof of (v)} First, let us consider $U\in\M$ with $\rho_U<-1$. Choosing $\epsilon_0=-(\rho_U+1)/2$ ($>0$) in \eqref{Mkappa} implies that there exist $C>0$ and $x_0>1$ such that, for $x\geq x_0$, $ \displaystyle U(x)\leq C\,x^{\rho_U+\epsilon_0}=C\,x^{(\rho_U-1)/2} $, from which we deduce that $$ \int_{x_0}^\infty U(x)\,dx<\infty\textrm{.} $$ We conclude that $\displaystyle \int_0^\infty U(x)\,dx<\infty$ because $U$ is bounded on finite intervals. Now suppose that $\rho_U>-1$. Choosing $\epsilon_0=(\rho_U+1)/2$ ($>0$) in \eqref{Mkappa} gives that, for any $C>0$, there exists $x_0>1$ such that, for $x\geq x_0$, $ U(x)\geq C\,x^{(\rho_U-1)/2} $, hence, $(\rho_U-1)/2$ being larger than $-1$, $\displaystyle \int_0^\infty U(x)\,dx\geq\int_{x_0}^\infty U(x)\,dx=\infty$. \item {\it Proof of (vi)} Assuming $\displaystyle -\infty<\lim_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}<\infty $, we want to prove that $U$ satisfies \eqref{Mkappa}, which implies that $U\in\M$. So let us prove \eqref{Mkappa}.
Consider $ \displaystyle \rho=\lim_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)} $, well defined under our assumption, from which we can deduce that \begin{equation*} \forall \epsilon>0, \exists x_0>1 \, \text{such that}, \,\forall x\geq x_0,\quad -\frac{\epsilon}{2}\leq\frac{\log\left(U(x)\right)}{\log(x)}-\rho\leq\frac{\epsilon}{2}\textrm{.} \end{equation*} Therefore we can write that, for $x\geq x_0$, on the one hand, $$ 0\leq \frac{U(x)}{x^{\rho+\epsilon}} =\exp\left\{\left(\frac{\log\left(U(x)\right)}{\log(x)}-\rho-\epsilon\right)\log(x)\right\} \le \exp\left\{-\frac{\epsilon}{2}\log(x)\right\} \underset{x\rightarrow\infty} {\longrightarrow} 0 $$ and on the other hand, $$ \frac{U(x)}{x^{\rho-\epsilon}} =\exp\left\{\left(\frac{\log\left(U(x)\right)}{\log(x)}-\rho+\epsilon\right)\log(x)\right\} \ge \exp\left\{\frac{\epsilon}{2}\log(x)\right\}\underset{x\rightarrow\infty} {\longrightarrow} \infty $$ hence the result. \end{itemize} \end{proof} \begin{proof}[Proof of Properties \ref{propt:20140630}] ~ Let $U$, $V$ $\in$ $\M$ with respective indices $\rho_U$ and $\rho_V$ defined in \eqref{Mkappa}. \begin{itemize} \item {\it Proof of (i)}~ It is immediate since $$ \lim_{x\rightarrow\infty}\frac{\log\left(U(x)\,V(x)\right)}{\log(x)}= \lim_{x\rightarrow\infty}\left(\frac{\log\left(U(x)\right)}{\log(x)}+\frac{\log\left(V(x)\right)}{\log(x)}\right)=\rho_U+\rho_V\textrm{.} $$ \item {\it Proof of (ii)}~ First notice that, since $U, V\in\M$, via Theorems \ref{teo:main:001} and \ref{teo:main:002}, for $\epsilon>0$, there exist $x_U>0$, $x_V>0$, such that, for $x\geq x_0=x_U\vee x_V$, $$ x^{\rho_U-\epsilon/2}\leq U(x)\leq x^{\rho_U+\epsilon/2} \quad\textrm{and}\quad x^{\rho_V-\epsilon/2}\leq V(x)\leq x^{\rho_V+\epsilon/2}\textrm{.} $$ \begin{itemize} \item[$\triangleright$] \emph{Assume $\rho_U \leq \rho_V < -1$}. Hence, via Properties \ref{propt:main:001}, (v), both $U$ and $V$ are integrable on $\Rset^+$. Choose $\rho=\rho_V$.
Via the change of variable $s=x-t$, we have, $\forall$ $x\geq 2x_0>0$, \begin{eqnarray*} \lefteqn{\frac{U\ast V(x)}{x^{\rho+\epsilon}} = \int_0^{x/2} U(t)\frac{V(x-t)}{x^{\rho+\epsilon}}dt+\int_{x/2}^x U(t)\frac{V(x-t)}{x^{\rho+\epsilon}}dt} \\ & & \le \frac{1}{x^{\epsilon/2}}\int_0^{x/2} U(t)\left(1-\frac{t}{x}\right)^{\rho_V+\epsilon/2}dt +\frac{1}{x^{\rho_V-\rho_U+\epsilon/2}}\int_0^{x/2} V(s)\left(1-\frac{s}{x}\right)^{\rho_U+\epsilon/2}ds \\ & & \le \frac{\max\left(1,c^{\rho_V+\epsilon/2}\right)}{x^{\epsilon/2}}\int_0^{x/2} U(t)dt+\frac{\max\left(1,c^{\rho_U+\epsilon/2}\right)}{x^{\rho_V-\rho_U+\epsilon/2}}\int_0^{x/2}V(s)ds \end{eqnarray*} since, for $0\leq t \leq x/2$ and any fixed $0<c<\frac{1}{2}$, we have $\displaystyle c<\frac{1}{2}\leq1-\frac{t}{x}\leq1$, hence $$ \left(1-\frac{t}{x}\right)^{\rho_V+\epsilon/2}\!\!\! \leq \max\left(1,c^{\rho_V+\epsilon/2}\right) \quad\text{and}\quad \left(1-\frac{t}{x}\right)^{\rho_U+\epsilon/2} \!\!\! \leq \max\left(1,c^{\rho_U+\epsilon/2}\right). $$ Hence we obtain, $U$ and $V$ being integrable, and since $\rho_V-\rho_U+\epsilon/2>0$, $$ \lim_{x\rightarrow\infty}\frac{\max\left(1,c^{\rho_V+\epsilon/2}\right)}{x^{\epsilon/2}}\int_0^{x/2} U(t)dt=0 \quad \text{and} \quad \lim_{x\rightarrow\infty}\frac{\max\left(1,c^{\rho_U+\epsilon/2}\right)}{x^{\rho_V-\rho_U+\epsilon/2}}\int_0^{x/2}V(s)ds=0, $$ from which we deduce that, for any $\epsilon>0$, $ \displaystyle \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho+\epsilon}} =0$. Applying Fatou's Lemma, then using that $V\in \M$ with $\rho_V=\rho$, gives, for any $\epsilon>0$, $$ \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho-\epsilon}} \ge \mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}\int_0^1\!\!\!\! U(t)\frac{V(x-t)}{x^{\rho-\epsilon}}dt \geq \int_0^1 \!\!\!\! U(t)\lim_{x\rightarrow\infty}\left(\frac{V(x-t)}{x^{\rho-\epsilon}}\right)dt =\infty. $$ We can conclude that $U\ast V\in\M$ with $\rho_{U\ast V}=\rho_V$.
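The case just treated can be made concrete with a rough numerical illustration, added for this note (the exponents $\rho_U=-3$, $\rho_V=-2$ and the evaluation point are arbitrarily chosen; this is a sanity check of the statement, not part of the proof): the convolution of two integrable members of $\M$ should pick up the larger index $\rho_V$.

```python
import math

def conv(U, V, x, n=200_000):
    """Midpoint Riemann sum for the convolution (U*V)(x) = int_0^x U(t) V(x-t) dt."""
    dt = x / n
    return sum(U((k + 0.5) * dt) * V(x - (k + 0.5) * dt) for k in range(n)) * dt

# Sample integrable functions: rho_U = -3 <= rho_V = -2 < -1.
U = lambda t: min(1.0, t ** -3) if t > 0 else 1.0
V = lambda t: min(1.0, t ** -2) if t > 0 else 1.0

x = 1e3
slope = math.log(conv(U, V, x)) / math.log(x)
print(slope)  # roughly rho_V = -2 (convergence is slow: constants bias the estimate)
```

The estimate sits a little above $-2$ at moderate $x$ because $U\ast V(x)\approx \|U\|_1\,V(x)$ carries the multiplicative constant $\|U\|_1$, which the $\log/\log$ ratio only washes out as $x\to\infty$.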
\item[$\triangleright$] \emph{Assume $\rho_U<-1<0\leq\rho_V$}. Therefore $U$ is integrable on $\Rset^+$, but not $V$ (Properties \ref{propt:main:001}, (v)). Choose $\rho=\rho_V$. Using the change of variable $s=x-t$, we have, $\forall$ $x\geq 2x_0>0$, \begin{eqnarray*} \lefteqn{\frac{U\ast V(x)}{x^{\rho+\epsilon}} = \int_0^{x-x_0} \!\!\!\! U(t)\frac{V(x-t)}{x^{\rho+\epsilon}}dt+\int_{x-x_0}^x \!\!\!\!\!\!\! U(t)\frac{V(x-t)}{x^{\rho+\epsilon}}dt} \\ & & = \int_0^{x-x_0} \!\!\!\! U(t)\frac{V(x-t)}{x^{\rho+\epsilon}}dt+\int_0^{x_0}\!\!\!\! V(s)\frac{U(x-s)}{x^{\rho+\epsilon}}ds \\ & & \leq \int_0^{x-x_0} U(t)\frac{(x-t)^{\rho_V+\epsilon/2}}{x^{\rho+\epsilon}}dt+\int_0^{x_0} V(s)\frac{(x-s)^{\rho_U+\epsilon/2}}{x^{\rho+\epsilon}}ds \\ & & = \frac{1}{x^{\epsilon/2}}\int_0^{x-x_0} U(t)\left(1-\frac{t}{x}\right)^{\rho_V+\epsilon/2} \!\!\!\! dt +\frac{1}{x^{\rho_V-\rho_U+\epsilon/2}}\int_0^{x_0}V(s)\left(1-\frac{s}{x}\right)^{\rho_U+\epsilon/2}\!\!\!\! ds\textrm{.} \end{eqnarray*} Noticing that, for $0\leq t\leq x-x_0$, $\displaystyle \left(1-\frac{t}{x}\right)^{\rho_V+\epsilon/2}\leq1$ (the exponent being non-negative), and that, for $0\leq s\leq x_0<2x_0\leq x$ and any fixed $0<c<\frac{1}{2}$, $\displaystyle c< \frac{1}{2}\le 1-\frac{x_0}{x}\leq1-\frac{s}{x}\leq1$, so $\displaystyle \left(1-\frac{s}{x}\right)^{\rho_U+\epsilon/2}\leq \max\left(1,c^{\,\rho_U+\epsilon/2}\right)$, we obtain $$ \frac{U\ast V(x)}{x^{\rho+\epsilon}} \leq\frac{1}{x^{\epsilon/2}}\int_0^{x-x_0} U(t)dt+\frac{\max\left(1,c^{\,\rho_U+\epsilon/2}\right)}{x^{\rho_V-\rho_U+\epsilon/2}}\int_0^{x_0}V(s)ds\, . $$ Since $U$ is integrable, $V$ bounded on finite intervals, and $\rho_V-\rho_U+\epsilon/2>0$, we have $$ \lim_{x\rightarrow\infty}\frac{1}{x^{\epsilon/2}}\int_0^{x-x_0} U(t)dt=0 \quad \text{and}\quad \lim_{x\rightarrow\infty}\frac{\max\left(1,c^{\,\rho_U+\epsilon/2}\right)}{x^{\rho_V-\rho_U+\epsilon/2}}\int_0^{x_0}V(t)dt=0. $$ Therefore, for any $\epsilon>0$, we have $ \displaystyle \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho+\epsilon}} =0 $.
Applying Fatou's Lemma, then using that $V\in \M$ with $\rho_V=\rho$, gives, for any $\epsilon>0$, $$ \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho-\epsilon}} \ge \mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}\int_0^1\!\!\!\! U(t)\frac{V(x-t)}{x^{\rho-\epsilon}}dt \geq \int_0^1 \!\!\!\! U(t)\lim_{x\rightarrow\infty}\left(\frac{V(x-t)}{x^{\rho-\epsilon}}\right)dt =\infty. $$ We can conclude that $U\ast V\in\M$ with $\rho_{U\ast V}=\rho_V$. \item[$\triangleright$] \emph{Assume $-1<\rho_U\leq\rho_V$}. Then both $U$ and $V$ are not integrable on $\Rset^+$ (Properties \ref{propt:main:001}, (v)). Choose $\rho=\rho_U+\rho_V+1$. Let $0<\epsilon<\rho_U+1$. Since $V$ is not integrable on $\Rset^+$, we have $\displaystyle \int_0^xV(t)dt \underset{x\to\infty}{\rightarrow} \infty$. So we can apply \thelr\ and obtain $$ \lim_{x\rightarrow\infty}\frac{\int_0^xV(t)dt}{x^{\rho_V+1+\epsilon}}= \lim_{x\rightarrow\infty}\frac{\left(\int_0^xV(t)dt\right)'}{\left(x^{\rho_V+1+\epsilon}\right)'}= \lim_{x\rightarrow\infty}\frac{V(x)}{(\rho_V+1+\epsilon)x^{\rho_V+\epsilon}}=0 $$ and $$ \lim_{x\rightarrow\infty}\frac{\int_0^xV(t)dt}{x^{\rho_V+1-\epsilon}}= \lim_{x\rightarrow\infty}\frac{\left(\int_0^xV(t)dt\right)'}{\left(x^{\rho_V+1-\epsilon}\right)'}= \lim_{x\rightarrow\infty}\frac{V(x)}{(\rho_V+1-\epsilon)x^{\rho_V-\epsilon}}=\infty\textrm{,} $$ from which we deduce that $\displaystyle W_V(x):=\int_0^xV(t)dt\in\M$ with $\M$-index $\rho_V+1$. We obtain in the same way that $\displaystyle W_U(x):=\int_0^xU(t)dt\in\M$ with $\M$-index $\rho_U+1$.
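The intermediate step above, namely that taking a primitive raises the index by one, is transparent on pure powers. The following small check is an illustration added for this note (the exponents are arbitrary), using the closed-form primitive of $t^{\rho}$ for $\rho>-1$:

```python
import math

def primitive_index(rho, x):
    """Index estimate log(W(x))/log(x) for W(x) = int_0^x t^rho dt = x^{rho+1}/(rho+1)."""
    W = x ** (rho + 1) / (rho + 1)   # closed-form primitive, valid for rho > -1
    return math.log(W) / math.log(x)

print(primitive_index(-0.5, 1e8))  # close to -0.5 + 1 = 0.5
print(primitive_index(1.0, 1e8))   # close to 1.0 + 1 = 2.0
```

The residual error of order $\log(1/(\rho+1))/\log(x)$ comes from the multiplicative constant and vanishes as $x\to\infty$, mirroring the L'H\^{o}pital argument in the text.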
We have, via the change of variable $s=x-t$, $\forall\ x\geq2 x_0>0$, \begin{eqnarray*} \lefteqn{\frac{U\ast V(x)}{x^{\rho+\epsilon}} = \int_0^{x/2} U(t)\frac{V(x-t)}{x^{\rho+\epsilon}}dt+\int_{x/2}^x U(t)\frac{V(x-t)}{x^{\rho+\epsilon}}dt} \nonumber \\ & & \le \frac{1}{x^{\rho_U+1+\epsilon/2}}\int_0^{x/2} U(t)\left(1-\frac{t}{x}\right)^{\rho_V+\epsilon/2}dt +\frac{1}{x^{\rho_V+1+\epsilon/2}}\int_0^{x/2} V(s)\left(1-\frac{s}{x}\right)^{\rho_U+\epsilon/2}ds \nonumber \\ & & \le \max\left(1,c^{\rho_V+\epsilon/2}\right)\frac{W_U(x/2)}{x^{\rho_U+1+\epsilon/2}}+\max\left(1,c^{\rho_U+\epsilon/2}\right)\frac{W_V(x/2)}{x^{\rho_V+1+\epsilon/2}} \end{eqnarray*} and \begin{eqnarray*} \lefteqn{\frac{U\ast V(x)}{x^{\rho-\epsilon}} = \int_0^{x/2} U(t)\frac{V(x-t)}{x^{\rho-\epsilon}}dt+\int_{x/2}^x U(t)\frac{V(x-t)}{x^{\rho-\epsilon}}dt} \nonumber \\ & & \ge \frac{1}{x^{\rho_U+1-\epsilon/2}}\int_0^{x/2} U(t)\left(1-\frac{t}{x}\right)^{\rho_V-\epsilon/2}dt +\frac{1}{x^{\rho_V+1-\epsilon/2}}\int_0^{x/2} V(s)\left(1-\frac{s}{x}\right)^{\rho_U-\epsilon/2}ds \nonumber \\ & & \ge \min\left(1,c^{\rho_V-\epsilon/2}\right)\frac{W_U(x/2)}{x^{\rho_U+1-\epsilon/2}}+\min\left(1,c^{\rho_U-\epsilon/2}\right)\frac{W_V(x/2)}{x^{\rho_V+1-\epsilon/2}} \end{eqnarray*} since, for $0\leq t \leq x/2$ and any fixed $0<c<\frac{1}{2}$, we have $\displaystyle c<\frac{1}{2}\leq1-\frac{t}{x}\leq1$, hence $$ \min\left(1,c^{\rho_V-\epsilon/2}\right)\leq\left(1-\frac{t}{x}\right)^{\rho_V-\epsilon/2}\!\!\! \leq\left(1-\frac{t}{x}\right)^{\rho_V+\epsilon/2}\!\!\! \leq \max\left(1,c^{\rho_V+\epsilon/2}\right) $$ and $$ \min\left(1,c^{\rho_U-\epsilon/2}\right)\leq\left(1-\frac{t}{x}\right)^{\rho_U-\epsilon/2} \!\!\!\leq\left(1-\frac{t}{x}\right)^{\rho_U+\epsilon/2} \!\!\! \leq \max\left(1,c^{\rho_U+\epsilon/2}\right). $$ Hence, for any $0<\epsilon<\rho_U+1$, we have $ \displaystyle \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho+\epsilon}}=0$ and $ \displaystyle \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho-\epsilon}}=\infty$.
We can conclude that $U\ast V\in\M$ with $\rho_{U\ast V}=\rho_U+\rho_V+1$. \end{itemize} \item {\it Proof of (iii)}~ It is straightforward, since we can write, with $y=V(x)\to\infty$ as $x\to\infty$, $$ \lim_{x\rightarrow\infty}\frac{\log(U(V(x)))}{\log(x)}= \lim_{y\rightarrow\infty}\frac{\log(U(y))}{\log(y)}\times \lim_{x\rightarrow\infty}\frac{\log(V(x))}{\log(x)}=\rho_U\,\rho_V\textrm{.} $$ Hence we obtain $\rho_{U\circ V}=\rho_U\,\rho_V$. \end{itemize} \end{proof} \subsection{Proofs of results concerning $\mathcal{M}_\infty$ and $\mathcal{M}_{-\infty}$} \begin{proof}[Proof of Theorem\,\ref{teo:main:001extension}]~ It is enough to prove \eqref{eq:main:000bextension01} because by this equivalence and Properties \ref{propt:main:001extension}, (i), one has $$ U\in\M_{-\infty}\quad\Longleftrightarrow\quad 1/U\in\M_{\infty}\quad\Longleftrightarrow\quad\lim_{x\rightarrow\infty}-\frac{\log\left(1/U(x)\right)}{\log(x)}=\infty \quad\Longleftrightarrow\quad\lim_{x\rightarrow\infty}-\frac{\log\left(U(x)\right)}{\log(x)}=-\infty\textrm{,} $$ i.e. \eqref{eq:main:000bextension02}. \begin{itemize} \item Let us prove that $ \displaystyle U\in\M_\infty\quad\Longrightarrow\quad\lim_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}=-\infty $. Suppose $U\in\M_\infty$. This implies that for all $\rho\in\Rset$, one has $\displaystyle \lim_{x\rightarrow\infty}\frac{U(x)}{x^\rho}=0$, i.e. for all $\epsilon>0$ there exists $x_0>1$ such that, for $x\geq x_0$, $ U(x)\leq\epsilon x^\rho $ which implies $ \displaystyle \frac{\log\left(U(x)\right)}{\log(x)}\leq\frac{\log(\epsilon)}{\log(x)}+\rho $, hence $ \displaystyle \mathop{\overline{\mathrm{lim}}}_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}\leq\rho $ and the statement follows since the argument applies for all $\rho\in\Rset$. \item Now let us prove that $ \displaystyle \lim_{x\rightarrow\infty}-\frac{\log\left(U(x)\right)}{\log(x)}=\infty\quad\Longrightarrow\quad U\in\M_\infty $.
For any $\rho\in \Rset$, we can write $$ \lim_{x\rightarrow\infty}-\frac{\log\left(\frac{U(x)}{x^{\rho}}\right)}{\log(x)}= \lim_{x\rightarrow\infty}\left(-\frac{\log\left(U(x)\right)}{\log(x)}+\rho\right)=\infty\quad\textrm{under the hypothesis,} $$ which implies that, for $x$ large enough, $-\log\left(U(x)\big/x^{\rho}\right)\geq\log(x)$, i.e. $U(x)\big/x^{\rho}\leq1/x$, and hence $\displaystyle \lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho}}=0$. \end{itemize} \end{proof} \begin{proof}[Proof of Theorem \ref{teo:main:002extension}]~ \begin{itemize} \item\emph{Proof of (i)-(a)} Suppose $U\in\M_\infty$. Then, by definition (\ref{M+}), for any $\rho\in\Rset$, $ \displaystyle \lim_{x\rightarrow\infty}x^\rho U(x)=0 $, which implies that, for any $c>0$, there exists $x_0>1$ such that, for all $x\geq x_0$, $ \displaystyle U(x)\leq cx^{-\rho}\textrm{,} $ from which we deduce that $$ \int_{x_0}^{\infty}x^{r-1}U(x)dx\leq c\int_{x_0}^{\infty}x^{r-1-\rho}dx $$ which is finite whenever $r<\rho$. The integral on $(1;\infty)$ is then also finite, since $U$ is bounded on finite intervals. Thus we conclude that $\kappa_U=\infty$, $\rho$ being any real number. \item \emph{Proof of (i)-(b)} Note that $U$ is integrable on $\Rset^+$ since $\displaystyle \int_1^\infty x^{r-1}U(x)dx<\infty$, for any $r\in\Rset$, in particular for $r=1$. Moreover $U$ is bounded on finite intervals. For $r>0$, we have, via the continuity of $U$, $$ \int_0^\infty x^{r+1}dU(x) = (r+1)\int_0^\infty\int_0^xy^{r}dy\, dU(x)=(r+1)\int_0^\infty y^r\left(\int_y^{\infty} dU(x)\right)\, dy $$ which implies, since $\displaystyle \lim_{x\rightarrow\infty} U(x)=0$, that \begin{equation}\label{eq:20141106:001} -\int_0^\infty x^{r+1}dU(x) = (r+1)\int_0^\infty y^{r}U(y)dy, \end{equation} which is positive and finite. Now, for $t>0$, we have, integrating by parts and using again the continuity of $U$, $$ t^{r+1}U(t)=(r+1)\int_0^tx^{r}U(x)dx+\int_0^tx^{r+1}dU(x) $$ where the integrals on the right hand side of the equality are finite as $t\to\infty$ and their sum tends to 0 via \eqref{eq:20141106:001}.
This implies that, $\forall r>0$, $t^{r+1}U(t)\to0$ as $t\to\infty$. For $r\leq0$, we have, for $t\geq1$, using the previous result, $t^{r+1}U(t)\leq t^2U(t)\to0$ as $t\to\infty$. This completes the proof that $U\in\M_\infty$. \item \emph{Proof of (ii)-(a)} Suppose $U\in\M_{-\infty}$. Then, by definition (\ref{M-}), for any $\rho\in\Rset$, we have $ \displaystyle \lim_{x\rightarrow\infty}\frac{U(x)}{x^\rho}=\infty\textrm{} $, which implies that for $c>0$, there exists $x_0>1$ such that, for all $x\geq x_0$, $ \displaystyle U(x)\geq cx^{\rho}\textrm{,} $ from which we deduce that, $U$ being bounded on finite intervals, $$ \int_{1}^{\infty}x^{r-1}U(x)dx\geq c\int_{x_0}^{\infty}x^{r-1+\rho}dx $$ which is infinite whenever $r\geq-\rho$. Since the argument applies for any $\rho$, we conclude that $\kappa_U=-\infty$. \item\emph{Proof of (ii)-(b)} Let $r\geq0$. We can write, for $s+2<0$ and $t>1$, \begin{eqnarray*} 0 & \geq & -\int_{1}^t x^{s+1}d\left(x^rU(x)\right)\qquad (x^rU(x) \, \textrm{being non-decreasing}) \\ & = & \int_{1}^t\left(\int_x^t d\left(y^{s+1}\right)\, - t^{s+1}\right)d\left(x^rU(x)\right) \\ & = & \int_1^t y^{s+1}\left( \int_{1}^y d\left(x^rU(x)\right)\right) dy \,-t^{s+1} \int_{1}^t d\left(x^rU(x)\right) \\ & = & \int_1^t y^{s+r+1}U(y)dy\, - \frac{t^{s+2}-1}{s+2} \,U(1)\, - t^{s+1}\left(t^rU(t)-U(1)\right)\quad \textrm{($U$ being continuous)}. \end{eqnarray*} Hence we obtain, as $t\to\infty$, $\displaystyle t^{s+r+1}U(t)\to\infty$ since $\displaystyle \int_1^t y^{s+r+1}U(y)dy\to\infty$ and $\displaystyle \frac{t^{s+2}}{s+2}+t^{s+1}\to0$ (under the assumption $s<-2$).\\ This implies that $U\in\M_{-\infty}$ since $s+r+1\in\Rset$.
\end{itemize} \end{proof} \begin{proof}[Proof of Remark \ref{rmq:20141107:001} -1]~ Set, for $r>0$, $\displaystyle A_r=\int_1^{\infty}x^re^{-x}dx<\infty$ and let us prove that $\kappa_U=\infty$.\\ If $r>0$, then \begin{eqnarray*} \lefteqn{\int_1^\infty x^{r}U(x)dx\leq A_r+\sum_{n=1}^\infty\int_{n}^{n+1/n^n} x^{r}U(x)dx=A_r+\sum_{n=1}^\infty\int_{n}^{n+1/n^n} x^{r-1}dx} \\ & & \leq A_r+\sum_{n=1}^\infty\int_{n}^{n+1/n^n} x^{\left\lceil r\right\rceil-1}dx =A_r+\frac{1}{\left\lceil r\right\rceil}\sum_{n=1}^\infty\left((n+1/n^n)^{\left\lceil r\right\rceil}-n^{\left\lceil r\right\rceil}\right) \\ & & =A_r+\frac{1}{\left\lceil r\right\rceil}\sum_{n=1}^\infty n^{-(n-1)\left\lceil r\right\rceil-1} \sum_{k=0}^{\left\lceil r\right\rceil-1}(1+1/n^{n-1})^{k} <\infty \, . \end{eqnarray*} If $r\leq0$, then we can write $\displaystyle \int_1^\infty x^{r}U(x)dx \, \leq \, \int_1^\infty xU(x)dx$, which is finite using the previous result with $r=1$. Now, let us prove $U\not\in\M_{\infty}$ by contradiction.\\ Suppose $U\in\M_{\infty}$. Then Theorem \ref{teo:main:001extension} implies that $\displaystyle \lim_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}=-\infty$, which contradicts $$ \lim_{n\rightarrow\infty}\frac{\log\left(U(n)\right)}{\log(n)}= \lim_{n\rightarrow\infty}\frac{\log\left(1/n\right)}{\log(n)}=-1>-\infty\textrm{.} $$ \end{proof} \begin{proof}[Proof of Theorem \ref{teo:main:003extension}]~ \begin{itemize} \item \emph{Proof of (i)} Suppose $U\in\mathcal{M}_\infty$. By Theorem \ref{teo:main:001extension}, we have $ \displaystyle \lim_{x\rightarrow\infty}-\frac{\log(U(x))}{\log(x)}=\infty\textrm{.} $ It implies that there exists $b>1$ such that, for $x\geq b$, $ \displaystyle \beta(x):=-\frac{\log(U(x))}{\log(x)}>0\textrm{.} $ Defining, for $x\geq b$, $ \alpha(x):=\beta(x)\,\log(x)\textrm{,} $ gives (i). \item \emph{Proof of (ii)} Suppose $U\in\mathcal{M}_{-\infty}$. By Properties \ref{propt:main:001extension}, (i), $1/U\in\mathcal{M}_{\infty}$.
Applying the previous result to $1/U$ implies that there exists a positive function $\alpha$ satisfying $\alpha(x)/\log(x)\underset{x\rightarrow\infty}{\rightarrow}\infty$ such that $1/U(x)=\exp(-\alpha(x))$, $x\geq b$, for some $b>1$. Hence we get $U(x)=\exp(\alpha(x))$, $x\geq b$, as required. \item \emph{Proof of (iii)} Assume that $U$ satisfies, for $x\geq b$, $ U(x)=\exp(-\alpha(x)) $, for some $b>1$ and $\alpha$ satisfying $\alpha(x)/\log(x)\underset{x\rightarrow\infty}{\rightarrow}\infty$. A straightforward computation gives $ \displaystyle \lim_{x\rightarrow\infty}-\frac{\log(U(x))}{\log(x)}= \lim_{x\rightarrow\infty}\frac{\alpha(x)}{\log(x)}=\infty\textrm{.} $ Hence $U\in\M_\infty$. We can proceed exactly in the same way when supposing that $U$ satisfies, for $x\geq b$, $ U(x)=\exp(\alpha(x)) $ for some $b>1$ and $\alpha$ satisfying $\alpha(x)/\log(x)\underset{x\rightarrow\infty}{\rightarrow}\infty$, to conclude that $U\in\M_{-\infty}$. \end{itemize} \end{proof} \begin{proof}[Proof of Properties \ref{propt:main:001extension}]~ \begin{itemize} \item \emph{Proof of (i)} It is straightforward since, for $\rho\in\Rset$, $ \displaystyle \lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho}}=0\textrm{} \quad\Longleftrightarrow\quad \lim_{x\rightarrow\infty}\frac{1/U(x)}{x^{-\rho}}=\infty\textrm{.} $ \item \emph{Proof of (ii)} \begin{itemize} \item[$\triangleright$] Suppose $(U,V)\in\M_{-\infty}\times\M$ with $\rho_V$ defined in (\ref{Mkappa}). Let $\epsilon>0$. Writing $ \displaystyle \frac{V(x)}{U(x)} =\frac{V(x)}{x^{\rho_V+\epsilon}}\,\left(\frac{U(x)}{x^{\rho_V+\epsilon}}\right)^{-1}\textrm{,} $ we obtain $ \displaystyle \lim_{x\rightarrow\infty}\frac{V(x)}{U(x)} =0\textrm{} $ since $V\in\M$ with $\rho_V$ satisfying (\ref{Mkappa}) and $U$ satisfies (\ref{M-}) with $\rho=\rho_V+\epsilon\in\Rset$. \item[$\triangleright$] Suppose $(U,V)\in\M_{-\infty}\times\M_\infty$. Let $\rho>0$.
We have $ \displaystyle \lim_{x\rightarrow\infty}\frac{V(x)}{U(x)} = \lim_{x\rightarrow\infty}\frac{V(x)}{x^{\rho}}\,\left(\frac{U(x)}{x^{\rho}}\right)^{-1} =0\textrm{} $ since $V$ satisfies (\ref{M+}) and $U$ satisfies (\ref{M-}). \item[$\triangleright$] Suppose $(U,V)\in\M\times\M_\infty$ with $\rho_U$ defined in (\ref{Mkappa}). By Properties \ref{propt:main:001}, (iv), and Properties \ref{propt:main:001extension}, (i), we have $(1/U,1/V)\in\M\times\M_{-\infty}$. The result follows because $ \displaystyle \lim_{x\rightarrow\infty}\frac{V(x)}{U(x)}= \lim_{x\rightarrow\infty}\frac{1/U(x)}{1/V(x)}=0 $. \end{itemize} \item The proof of (iii) is immediate. \end{itemize} \end{proof} \begin{proof}[Proof of Properties \ref{propt:20140630ext}] ~ Let $U$, $V$ $\in$ $\M$ with $\M$-index $\kappa_U$ and $\kappa_V$ respectively. \begin{itemize} \item {\it Proof of (i)}~ It is straightforward as $ \displaystyle \lim_{x\rightarrow\infty}\frac{\log\left(U(x)\,V(x)\right)}{\log(x)}= \lim_{x\rightarrow\infty}\left(\frac{\log\left(U(x)\right)}{\log(x)}+\frac{\log\left(V(x)\right)}{\log(x)}\right) $. \item {\it Proof of (ii)}~ We distinguish the following three cases.\\ (a) {\it Let $U\in\M_{\infty}$ and $V\in$ $\M$ with $\rho_V\notin [-1, 0)$.}~ Let $W(x)=x^\eta 1_{(x\ge 1)}+ 1_{(0<x<1)}$, with $\eta=-2$ if $\rho_V\geq0$, or $\eta=\rho_V-1$ if $\rho_V<-1$. Note that $W\in\M$ with $\rho_W=\eta < \rho_V$. By Properties \ref{propt:main:001extension}, (ii), $\displaystyle \lim_{x\rightarrow\infty}\frac{U(x)}{W(x)}=0$, so for $0<\delta<1$, there exists $x_0\geq1$ such that, for all $x\geq x_0$, $U(x)\leq \delta W(x)$. Consider $Z$ defined by $Z(x)=U(x)1_{(0<x<x_0)}+ W(x)1_{(x\ge x_0)}$, which satisfies $Z\geq U$ and $Z\in\M$ with $\rho_Z=\rho_W=\eta<\rho_V$. Applying Properties \ref{propt:20140630}, (ii), gives $Z\ast V\in\M$ with $\rho_{Z\ast V}=\rho_Z \vee \rho_V=\rho_V$ (note that the restriction on $\rho_V$ corresponds to the condition given in Properties \ref{propt:20140630}, (ii)).
We deduce that, for any $x>0$, $U\ast V(x)\le Z\ast V(x)$, and, for $\epsilon>0$, $$ \frac{U\ast V(x)}{x^{\rho_V+\epsilon}}\leq \frac{Z\ast V(x)}{x^{\rho_V+\epsilon}}\ \underset{x\to\infty}{\rightarrow}\ \ 0\,. $$ Moreover, applying Fatou's Lemma gives $$ \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho_V-\epsilon}} \ge \mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}\int_0^1 \!\!\!\! U(t)\frac{V(x-t)}{x^{\rho_V-\epsilon}}dt \geq\int_0^1 \!\!\!\! U(t)\lim_{x\rightarrow\infty}\left(\frac{V(x-t)}{x^{\rho_V-\epsilon}}\right)dt=\infty\textrm{.} $$ Therefore, $U\ast V\in \M$ with $\M$-index $\rho_{U\ast V}=\rho_V$.\\ (b) {\it If $(U,V)\in\M_\infty\times\M_\infty$, then $U\ast V\in\M_\infty$.}~ Let $\rho\in\Rset$. Consider $U\in\M_\infty$. We have, applying Theorem \ref{teo:main:001extension}, $ \displaystyle \lim_{x\rightarrow\infty}\frac{\log(U(x))}{\log(x)}=-\infty $. Rewriting this limit as $$ \displaystyle \lim_{x\rightarrow\infty}\frac{\log(U(x))}{\log(1/x)}=\infty $$ we deduce that, for $c\geq|\rho|+1>0$, there exists $x_U>1$ such that, for $x\geq x_U$, $\log(U(x))\leq c\log(1/x)$, i.e. $U(x)\leq x^{-c}$. For $V\in\M_\infty$, a similar reasoning shows that there exists $x_V>1$ such that, for $x\geq x_V$, $V(x)\leq x^{-c}$. Using the change of variable $s=x-t$, we have, $\forall\ x\geq2\max(x_U,x_V)>0$, \begin{eqnarray*} \lefteqn{\frac{U\ast V(x)}{x^{\rho}} = \int_0^{x/2} U(t)\frac{V(x-t)}{x^{\rho}}dt+\int_{x/2}^x U(t)\frac{V(x-t)}{x^{\rho}}dt} \\ & & \le \frac{1}{x^{\rho+c}}\int_0^{x/2} U(t)\Big(1-\frac{t}{x}\Big)^{-c}dt +\frac{1}{x^{\rho+c}}\int_0^{x/2} V(s)\Big(1-\frac{s}{x}\Big)^{-c}ds \\ & & \le \frac{2^c}{x^{\rho+c}}\int_0^{x/2} U(t)dt+\frac{2^c}{x^{\rho+c}}\int_0^{x/2} V(s)ds \end{eqnarray*} since, for $0\leq t \leq x/2$, {\it i.e.} $\displaystyle 0<\frac{1}{2}\leq1-\frac{t}{x}\leq1$, $ \displaystyle \Big(1-\frac{t}{x}\Big)^{-c}\!\!\! \leq 2^c $.
This implies, via the integrability of $U$ and $V$, for $\rho\in\Rset$, $\displaystyle \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho}}=0$. Hence $U\ast V\in\M_\infty$. (c) {\it Let $U\in\M_{-\infty}$ and $V\in$ $\M$ or $\M_{\pm\infty}$.}~ We apply Fatou's Lemma, as in (a), to obtain, for any $\rho\in\Rset$, $$ \lim_{x\rightarrow\infty}\frac{U\ast V(x)}{x^{\rho}} \ge \mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}\int_0^1 V(t)\frac{U(x-t)}{x^{\rho}}dt \geq \int_0^1 V(t)\lim_{x\rightarrow\infty}\left(\frac{U(x-t)}{x^{\rho}}\right)dt=\infty\textrm{.} $$ We conclude that $U\ast V\in\M_{-\infty}$. \item {\it Proof of (iii)}~ First, note that if $V$ $\in$ $\M_{-\infty}$, then $\displaystyle \lim_{x\rightarrow\infty} V(x)=\infty$. Hence writing $$ \frac{\log\left(U(V(x))\right)}{\log(x)}= \frac{\log\left(U(y)\right)}{\log(y)}\, \times \frac{\log\left(V(x)\right)}{\log(x)}\textrm{,\quad with $y=V(x)$} $$ allows us to conclude. \end{itemize} \end{proof} \subsection{Proofs of results concerning $\Oset$} \label{ProofsofresultsconcerningOset} Let us recall the \PBdH, which we will need to prove Theorem \ref{teo:20140801:001}. Let us define the Generalized Pareto Distribution (GPD) $$ G_\xi(x)=\left\{ \begin{array}{ll} (1+\xi x)^{-1/\xi} & \xi\in\Rset\textrm{, }\xi\neq0\textrm{, }1+\xi x>0 \\ & \\ e^{-x} & \xi=0\textrm{, }x\in\Rset\textrm{.} \end{array} \right. $$ \begin{teo}\label{tpbdh}\PBdH\ (see e.g. Theorem 3.4.5 in \cite{embrechts1997})~ For $\xi\in\Rset$, the following assertions are equivalent: \begin{itemize} \item[(i)] $F\in DA(\exp(-G_\xi))$ \item[(ii)] There exists a positive function $a>0$ such that for $1+\xi x>0$, $$ \lim_{u\rightarrow\infty}\frac{\overline{F}(u+x\,a(u))}{\overline{F}(u)}=G_\xi(x)\textrm{.} $$ \end{itemize} \end{teo} Note that Theorem \ref{teo:20140801:001} refers to distributions $F$ with endpoint $x^* =\sup\{x:F(x)<1\}=\infty$.
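As a purely numerical illustration of the tail-ratio limit in the \PBdH\ (a sketch only, not part of the proofs): for the standard exponential distribution one may take $\xi=0$ and the auxiliary function $a(u)\equiv1$, in which case the ratio $\overline{F}(u+x\,a(u))/\overline{F}(u)$ equals $G_0(x)=e^{-x}$ for every threshold $u$.

```python
import math

def sf_exp(x):
    """Survival function of the standard exponential distribution, Exp(1)."""
    return math.exp(-x)

def tail_ratio(sf, u, x, a):
    """Ratio appearing in the Pickands-Balkema-de Haan characterization."""
    return sf(u + x * a(u)) / sf(u)

# For Exp(1) the auxiliary function can be taken to be a(u) = 1, and the
# tail ratio equals G_0(x) = exp(-x) for every threshold u (the xi = 0 case).
for u in (1.0, 10.0, 50.0):
    for x in (0.5, 1.0, 3.0):
        r = tail_ratio(sf_exp, u, x, a=lambda _: 1.0)
        assert abs(r - math.exp(-x)) < 1e-12
```

Here the limit in (ii) is attained exactly for every $u$; for other distributions in $DA(\exp(-G_\xi))$ the ratio only converges as $u\to\infty$.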
\begin{proof}[Proof of Theorem \ref{teo:20140801:001}:]~ Let us prove this theorem by contradiction, assuming that $F$ satisfies the \PBdH\ and that $\overline{F}$ satisfies $\mu(\overline{F})<\nu(\overline{F})$. Note that $x^* =\infty$. We consider the two possibilities given in (i) and (ii) in Theorem \ref{tpbdh}. \begin{itemize} \item \emph{Assume that $F$ satisfies Theorem \ref{tpbdh}, (i), with $\xi\geq0$ (because $x^* =\infty$).} Let $\epsilon>0$. By Theorem \ref{tpbdh}, (ii), there exists $u_0>0$ such that, for $u\geq u_0$ and $x\geq0$, \begin{equation}\label{eq:20140803:001} \frac{\overline{F}(u+x)}{\overline{F}(u)\,G_\xi(x/a(u))}\leq1+\epsilon\textrm{.} \end{equation} By the definition of upper order, we have that there exists a sequence $(x_n)_{n\in\Nset}$ satisfying $x_n\rightarrow\infty$ as $n\to\infty$ such that \begin{eqnarray*} \lefteqn{\nu(\overline{F}) =\lim_{x_n\rightarrow\infty}\frac{\log\left(\overline{F}(u+x_n)\right)}{\log(u+x_n)} =\lim_{x_n\rightarrow\infty}\frac{\log\left(\overline{F}(u+x_n)\right)}{\log(x_n)}} \\ & & \leq\lim_{x_n\rightarrow\infty}\frac{\log\left((1+\epsilon)\,\overline{F}(u)\,G_\xi(x_n/a(u))\right)}{\log(x_n)}\quad\textrm{by \eqref{eq:20140803:001}} \\ & & =\lim_{x_n\rightarrow\infty}\frac{\log\left(\overline{F}(u)\right)}{\log(x_n)}+\lim_{x_n\rightarrow\infty}\frac{\log\left(G_\xi(x_n/a(u))\right)}{\log(x_n)} \\ & & =\left\{ \begin{array}{ll} -\frac{1}{\xi}\lim_{x_n\rightarrow\infty}\frac{\log\left(1+\xi\,x_n/a(u)\right)}{\log(x_n)} & \textrm{if $\xi>0$} \\ -\lim_{x_n\rightarrow\infty}\frac{x_n/a(u)}{\log(x_n)} & \textrm{if $\xi=0$} \end{array} \right. \\ & & =\left\{ \begin{array}{ll} -\frac{1}{\xi} & \textrm{if $\xi>0$} \\ -\infty & \textrm{if $\xi=0$.} \end{array} \right.\textrm{} \end{eqnarray*} If $\xi>0$, we conclude that $\nu(\overline{F})\leq-1/\xi$. A similar procedure provides $\mu(\overline{F})\geq-1/\xi$. 
Hence we conclude $\mu(\overline{F})=\nu(\overline{F})$ which contradicts $\mu(\overline{F})<\nu(\overline{F})$. If $\xi=0$, we conclude that $-\infty\leq\mu(\overline{F})\leq\nu(\overline{F})\leq-\infty$. Hence we conclude $\mu(\overline{F})=\nu(\overline{F})=-\infty$ which contradicts $\mu(\overline{F})<\nu(\overline{F})$. \item \emph{Assuming that $F$ satisfies Theorem \ref{tpbdh}, (ii),} and following the previous proof (done when assuming (i)), we deduce that $\mu(\overline{F})=\nu(\overline{F})$ which contradicts $\mu(\overline{F})<\nu(\overline{F})$. \end{itemize} \end{proof} \begin{proof}[Proof of Example \ref{exm:20140802:001}]~ Let $x\in[x_n,x_{n+1})$, $n\geq1$. We can write \begin{equation}\label{eq:20140610:002} \frac{\log\left(U(x)\right)}{\log(x)}=\frac{\log\left(x_n^{\alpha(1+\beta)}\right)}{\log(x)}=\alpha(1+\beta)\frac{\log\left(x_n\right)}{\log(x)}\textrm{.} \end{equation} Since $ \log(x_n)\leq\log(x)<\log(x_{n+1})=(1+\alpha)\log(x_n) $, we obtain $$ \frac{\alpha(1+\beta)}{1+\alpha} < \frac{\log\left(U(x)\right)}{\log(x)} \leq \alpha(1+\beta)\textrm{,} \quad\textrm{if\quad $1+\beta>0$} $$ and $$ \alpha(1+\beta) \leq \frac{\log\left(U(x)\right)}{\log(x)} < \frac{\alpha(1+\beta)}{1+\alpha}\textrm{,} \quad\textrm{if\quad $1+\beta<0$} $$ from which we deduce $$ \mu(U)\geq\frac{\alpha(1+\beta)}{1+\alpha}\quad\textrm{and}\quad\nu(U)\leq\alpha(1+\beta)\textrm{,} \quad\textrm{if\quad $1+\beta>0$} $$ and $$ \mu(U)\geq\alpha(1+\beta)\quad\textrm{and}\quad\nu(U)\leq\frac{\alpha(1+\beta)}{1+\alpha}\textrm{,} \quad\textrm{if\quad $1+\beta<0$.} $$ Moreover, taking $x=x_n$ in \eqref{eq:20140610:002} leads to $$ \lim_{n\rightarrow\infty}\frac{\log\left(U(x_n)\right)}{\log(x_n)}=\alpha(1+\beta) $$ which implies $$ \nu(U)\geq\alpha(1+\beta)\textrm{,} \quad\textrm{if\quad $1+\beta>0$} $$ and $$ \mu(U)\leq\alpha(1+\beta)\textrm{,} \quad\textrm{if\quad $1+\beta<0$.} $$ Hence, to conclude, it remains to prove that $$ \mu(U)\leq\frac{\alpha(1+\beta)}{1+\alpha}\textrm{,} 
\quad\textrm{if\quad $1+\beta>0$,} \quad\textrm{and}\quad \nu(U)\geq\frac{\alpha(1+\beta)}{1+\alpha}\textrm{,} \quad\textrm{if\quad $1+\beta<0$.} $$ If $1+\beta>0$, the function $\log\left(U(x)\right)/\log(x)$ is continuous and strictly decreasing on $(x_n;x_{n+1})$, with supremum value $\alpha(1+\beta)$ and infimum value $\alpha(1+\beta)/(1+\alpha)$. Hence, for $\delta>0$ such that $$ \frac{\alpha(1+\beta)}{1+\alpha}<\frac{\alpha(1+\beta)}{1+\alpha}+\delta<\alpha(1+\beta)\textrm{,} $$ there exists $x_n<y_n<x_{n+1}$ satisfying $$ \frac{\log\left(U(y_n)\right)}{\log(y_n)}=\frac{\alpha(1+\beta)}{1+\alpha}+\delta\textrm{.} $$ Since $y_n\rightarrow\infty$ as $n\rightarrow\infty$ because $x_n\rightarrow\infty$ as $n\rightarrow\infty$, $ \displaystyle \mu(U)\leq\lim_{n\rightarrow\infty}\frac{\log\left(U(y_n)\right)}{\log(y_n)}=\frac{\alpha(1+\beta)}{1+\alpha}+\delta $ follows. Hence we conclude $\displaystyle \mu(U)\leq\frac{\alpha(1+\beta)}{1+\alpha}$ since $\delta$ is arbitrary. If $1+\beta<0$, a similar development to the case $1+\beta>0$ allows us to prove $\displaystyle \nu(U)\geq\frac{\alpha(1+\beta)}{1+\alpha}$. Moreover, if $1+\beta<0$, $U$ is a tail of distribution. Let us check that the rv having a tail of distribution $\overline{F}=U$ has a finite $s$th moment whenever $0\leq s<-\alpha(1+\beta)/(1+\alpha)$. Let $s\geq 0$. We have \begin{eqnarray*} \lefteqn{\int_0^\infty x^sdF(x) = \sum_{n=1}^\infty x_n^s\left(U(x_{n}^-)-U(x_n^+)\right)} \\ & & = \sum_{n=2}^\infty x_n^s\left(x_{n-1}^{\alpha(1+\beta)}-x_{n}^{\alpha(1+\beta)}\right) =\sum_{n=2}^\infty x_n^s\left(x_{n}^{\frac{\alpha(1+\beta)}{1+\alpha}}-x_{n}^{\alpha(1+\beta)}\right) \leq \sum_{n=2}^\infty x_{n}^{s+\frac{\alpha(1+\beta)}{1+\alpha}}<\infty \end{eqnarray*} because $s<-\alpha(1+\beta)/(1+\alpha)$. Note that if $s\geq-\alpha(1+\beta)/(1+\alpha)$, $\displaystyle \int_0^\infty x^sdF(x)=\infty$.
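The oscillation of $\log\left(U(x)\right)/\log(x)$ between $\alpha(1+\beta)/(1+\alpha)$ and $\alpha(1+\beta)$ can also be observed numerically. The following sketch uses the hypothetical parameter choice $x_1=2$, $\alpha=1$, $\beta=0$, for which the two bounds are $1/2$ and $1$; it merely illustrates the computation above.

```python
import math

# Hypothetical parameters: x_{n+1} = x_n^(1+alpha),
# U(x) = x_n^(alpha*(1+beta)) on [x_n, x_{n+1}).
alpha, beta, x1 = 1.0, 0.0, 2.0

xs = [x1]
for _ in range(5):                       # grid x_1, ..., x_6
    xs.append(xs[-1] ** (1.0 + alpha))

def U(x):
    xn = max(t for t in xs if t <= x)    # left endpoint of the interval containing x
    return xn ** (alpha * (1.0 + beta))

def ratio(x):
    return math.log(U(x)) / math.log(x)

# At x = x_n the ratio equals alpha*(1+beta) = 1; just below x_{n+1} it is
# close to alpha*(1+beta)/(1+alpha) = 1/2, so mu(U) < nu(U).
assert abs(ratio(xs[4]) - 1.0) < 1e-12
assert abs(ratio(xs[5] - 1.0) - 0.5) < 1e-6
```

So $U$ belongs to $\M$ while its upper and lower orders differ, as the proof shows.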
\end{proof} \begin{proof}[Proof of Example \ref{exm:20140802:002}]~ If $\alpha>0$, $\nu(U)=\infty$ comes from $$ \nu(U)=\mathop{\overline{\mathrm{lim}}}_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}\geq\lim_{x_n\rightarrow\infty}\frac{\log\left(U(x_n)\right)}{\log(x_n)} =\lim_{x_n\rightarrow\infty}\frac{\alpha x_n\,\log(2)}{\log(x_n)}=\infty\textrm{,} $$ and, if $\alpha<0$, $\mu(U)=-\infty$ comes from $$ \mu(U)=\mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}\leq\lim_{x_n\rightarrow\infty}\frac{\log\left(U(x_n)\right)}{\log(x_n)} =\lim_{x_n\rightarrow\infty}\frac{\alpha x_n\,\log(2)}{\log(x_n)}=-\infty\textrm{.} $$ Next, let $\epsilon>0$ be small enough. Then, we have, if $\alpha>0$, \begin{eqnarray*} \lefteqn{\mu(U)=\mathop{\underline{\mathrm{lim}}}_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}\leq\lim_{x_n\rightarrow\infty}\frac{\log\left(U(x_n-\epsilon)\right)}{\log(x_n-\epsilon)} } \\ & & =\lim_{x_n\rightarrow\infty}\frac{\log\left(2^{\alpha x_{n-1}}\right)}{\log(2^{x_{n-1}/c})}\,\frac{\log(2^{x_{n-1}/c})}{\log(2^{x_{n-1}/c}-\epsilon)} =\lim_{x_n\rightarrow\infty}\frac{\log\left(2^{\alpha x_{n-1}}\right)}{\log(2^{x_{n-1}/c})}=\alpha c\textrm{,} \end{eqnarray*} and, if $\alpha<0$, \begin{eqnarray*} \lefteqn{\nu(U)=\mathop{\overline{\mathrm{lim}}}_{x\rightarrow\infty}\frac{\log\left(U(x)\right)}{\log(x)}\geq\lim_{x_n\rightarrow\infty}\frac{\log\left(U(x_n-\epsilon)\right)}{\log(x_n-\epsilon)} } \\ & & =\lim_{x_n\rightarrow\infty}\frac{\log\left(2^{\alpha x_{n-1}}\right)}{\log(2^{x_{n-1}/c})}\,\frac{\log(2^{x_{n-1}/c})}{\log(2^{x_{n-1}/c}-\epsilon)} =\lim_{x_n\rightarrow\infty}\frac{\log\left(2^{\alpha x_{n-1}}\right)}{\log(2^{x_{n-1}/c})}=\alpha c\textrm{.} \end{eqnarray*} It remains to prove that, if $\alpha>0$, $\mu(U)\geq\alpha c$, and, if $\alpha<0$, $\nu(U)\leq\alpha c$.
It follows from the fact that, for $x_n\leq x<x_{n+1}$, $$ \frac{\log\left(U(x)\right)}{\log(x)} =\alpha\frac{x_n\,\log\left(2\right)}{\log(x)} =\alpha c\frac{\log\left(x_{n+1}\right)}{\log(x)} \left\{ \begin{array}{cll} > & \alpha c\textrm{,} & \textrm{if $\alpha>0$} \\ < & \alpha c\textrm{,} & \textrm{if $\alpha<0$.} \end{array} \right. $$ Next, if $\alpha<0$, we have that $U$ is a tail of distribution. Let us check that the rv having a tail of distribution $\overline{F}=U$ has a finite $s$th moment whenever $0\leq s<-\alpha c$. Let $s>0$ and denote $x_0=0$. We have $$ \int_0^\infty x^sdF(x) = \sum_{n=1}^\infty x_n^s\left(U(x_{n}^-)-U(x_n^+)\right) =\sum_{n=1}^\infty x_n^s\left(2^{\alpha x_{n-1}}-2^{\alpha x_n}\right) \leq \sum_{n=1}^\infty 2^{(s/c+\alpha) x_{n-1}}<\infty $$ because $s<-\alpha c$. If $s=0$, let $\epsilon=-\alpha c/2$ ($>0$) and the statement follows from $\displaystyle \int_0^\infty dF(x)=\int_0^1 dF(x)+\int_1^\infty dF(x)\leq\int_0^1 dF(x)+\int_1^\infty x^\epsilon dF(x)<\infty$. Note that if $s\geq-\alpha c$, $\displaystyle \int_0^\infty x^sdF(x)=\infty$. \end{proof} \section{Proofs of results given in Section 2} \subsection{Section 2.1} \label{ProofsofSection2.1} Let us introduce the following functions that will be used in the proofs. We define, for some $b>0$ and $r\in\Rset$, \begin{equation}\label{eq:20140405:002} V_r(x)=\left\{ \begin{array}{ll} \int_b^xy^rU(y)dy\text{,} & x\geq b \\ 1\text{,} & 0< x<b\textrm{} \end{array} \right. \quad\textrm{;}\quad W_r(x)=\left\{ \begin{array}{ll} \int_x^{\infty}y^rU(y)dy\text{,} & x\geq b \\ 1\text{,} & 0< x<b\textrm{} \end{array} \right. \end{equation} For the main result, we will need the following lemma, which is of interest in its own right. \begin{lem}\label{lem:kar:001} Let $U\in\mathcal{M}$ with finite $\mathcal{M}$-index $\kappa_U$ and let $b>0$. \begin{enumerate} \item[(i)] Consider $V_r$ defined in (\ref{eq:20140405:002}) with $r+1>\kappa_U$.
Then $V_r\in\mathcal{M}$ and its $\mathcal{M}$-index $\kappa_{V_r}$ satisfies $\kappa_{V_r}=\kappa_U-(r+1)$. \item[(ii)] Consider $W_r$ defined in (\ref{eq:20140405:002}) with $r+1<\kappa_U$. Then $W_r\in\mathcal{M}$ and its $\mathcal{M}$-index $\kappa_{W_r}$ satisfies $\kappa_{W_r}=\kappa_U-(r+1)$. \end{enumerate} \end{lem} \begin{proof}[Proof of Theorem \ref{prop:kar:001}] \begin{itemize} \item \emph{Proof of the necessary condition of (K1$^*$)} As an immediate consequence of Lemma \ref{lem:kar:001}, (i), we have, assuming that $\rho+r>0$: \begin{eqnarray*} \lefteqn{U\in\M\textrm{ with $\M$-index $\kappa_U=-\rho$ such that $(r-1)+1=r>-\rho=\kappa_U$}} \\ & & \Longrightarrow\quad V_{r-1}(x)=\int_b^xt^{r-1}U(t)dt\in\M\textrm{ with $\M$-index $\kappa_{V_{r-1}}=\kappa_U-r=-\rho-r$} \end{eqnarray*} Hence, by applying Theorems \ref{teo:main:001} and \ref{teo:main:002} to $V_{r-1}$, the result follows: $$ \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^xt^{r-1}U(t)dt\right)}{\log(x)}=\lim_{x\rightarrow\infty}\frac{\log\left(V_{r-1}(x)\right)}{\log(x)}=-\kappa_{V_{r-1}}=\rho+r>0\textrm{.} $$ \item \emph{Proof of the sufficient condition of (K1$^*$)} Using ($C1r$) and $ \displaystyle \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^xt^{r-1}U(t)dt\right)}{\log(x)}=\rho+r $ gives \begin{eqnarray*} \lefteqn{\lim_{x\rightarrow\infty}-\frac{\log\left(U(x)\right)}{\log(x)} = \lim_{x\rightarrow\infty}-\frac{\log\left(\frac{x^rU(x)}{\int_b^xt^{r-1}U(t)dt}\right)+\log\left(x^{-r}\int_b^xt^{r-1}U(t)dt\right)}{\log(x)}} \\ & & \qquad\qquad=r+\lim_{x\rightarrow\infty}-\frac{\log\left(\int_b^xt^{r-1}U(t)dt\right)}{\log(x)} = r-(\rho+r)=-\rho \end{eqnarray*} and the statement follows.
\item \emph{Proof of the necessary condition of (K2$^*$)} As an immediate consequence of Lemma \ref{lem:kar:001}, (ii), we have, assuming that $\rho+r<0$: \begin{eqnarray*} \lefteqn{U\in\M\textrm{ with $\M$-index $\kappa_U=-\rho$ such that $(r-1)+1=r<-\rho=\kappa_U$}} \\ & & \Longrightarrow\quad W_{r-1}(x)=\int_x^\infty t^{r-1}U(t)dt\in\M\textrm{ with $\M$-index $\kappa_{W_{r-1}}=\kappa_U-r=-\rho-r$} \end{eqnarray*} Hence, by applying Theorems \ref{teo:main:001} and \ref{teo:main:002} to $W_{r-1}$, the result follows: $$ \lim_{x\rightarrow\infty}\frac{\log\left(\int_x^\infty t^{r-1}U(t)dt\right)}{\log(x)}=\lim_{x\rightarrow\infty}\frac{\log\left(W_{r-1}(x)\right)}{\log(x)}=-\kappa_{W_{r-1}}=\rho+r<0\textrm{.} $$ \item \emph{Proof of the sufficient condition of (K2$^*$)} Using ($C2r$) and $ \displaystyle \lim_{x\rightarrow\infty}\frac{\log\left(\int_x^{\infty}t^{r-1}U(t)dt\right)}{\log(x)}=\rho+r $ gives \begin{eqnarray*} \lefteqn{\lim_{x\rightarrow\infty}-\frac{\log\left(U(x)\right)}{\log(x)} = \lim_{x\rightarrow\infty}-\frac{\log\left(\frac{x^rU(x)}{\int_x^{\infty}t^{r-1}U(t)dt}\right)+\log\left(x^{-r}\int_x^{\infty}t^{r-1}U(t)dt\right)}{\log(x)}} \\ & & \qquad\qquad=r+\lim_{x\rightarrow\infty}-\frac{\log\left(\int_x^{\infty}t^{r-1}U(t)dt\right)}{\log(x)} = r-(\rho+r)=-\rho \end{eqnarray*} and the statement follows.
\item \emph{Proof of the necessary condition of (K3$^* $); case $\displaystyle \int_b^{\infty}t^{r-1}U(t)dt=\infty$ with $b>1$.} On the one hand, assuming $\rho+r=0$, $U\in\M$ with $\M$-index $\kappa_U=-\rho$ implies, for any $\epsilon>0$, \begin{equation}\label{eq:20140402:101} \lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho+\epsilon}}=0\textrm{\ \ \ and \ \ }\lim_{x\rightarrow\infty}\frac{U(x)}{x^{\rho-\epsilon}}=\infty\textrm{} \end{equation} On the other hand, $\displaystyle \int_b^{\infty}t^{r-1}U(t)dt=\infty$ implies $ \displaystyle \lim_{x\rightarrow\infty}\int_b^xt^{r-1}U(t)dt=\infty\textrm{.} $ Hence we can apply \thelr\, to the first limit of (\ref{eq:20140402:101}) to get, for any $\epsilon>0$, \begin{equation}\label{eq:20140402:002} \lim_{x\rightarrow\infty}\frac{\int_b^xt^{r-1}U(t)dt}{x^{\epsilon}}=\lim_{x\rightarrow\infty}\frac{x^{r-1}U(x)}{\epsilon x^{-1+\epsilon}}= \lim_{x\rightarrow\infty}\frac{U(x)}{\epsilon x^{-r+\epsilon}}=\lim_{x\rightarrow\infty}\frac{U(x)}{\epsilon x^{\rho+\epsilon}}=0\textrm{} \end{equation} Moreover, we have, for any $\epsilon>0$, \begin{equation}\label{eq:20140402:003} \lim_{x\rightarrow\infty}\frac{\int_b^xt^{r-1}U(t)dt}{x^{-\epsilon}}= \left(\lim_{x\rightarrow\infty}\int_b^xt^{r-1}U(t)dt\right)\, \left(\lim_{x\rightarrow\infty}x^{\epsilon}\right) =\infty\times\infty =\infty\textrm{} \end{equation} Defining $V_{r-1}$ as in (\ref{eq:20140405:002}) we deduce from (\ref{eq:20140402:002}) and (\ref{eq:20140402:003}) that $V_{r-1}\in\M$ with $\M$-index $0=\rho+r$. So, taking $x\geq b$, the required result follows: $$ \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^{x}t^{r-1}U(t)dt\right)}{\log(x)}=\lim_{x\rightarrow\infty}\frac{\log\left(V_{r-1}(x)\right)}{\log(x)}=\rho+r=0 $$ \item \emph{Proof of the necessary condition of (K3$^* $); case $\displaystyle \int_b^{\infty}t^{r-1}U(t)dt<\infty$ with $b>1$.} Suppose $U\in\M$ with $\M$-index $\kappa_U=-\rho$.
By a straightforward computation we have $$ \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^xt^{r-1}U(t)dt\right)}{\log(x)}= \lim_{x\rightarrow\infty}\frac{\log\left(\int_b^{\infty}t^{r-1}U(t)dt\right)}{\log(x)}=0=\rho+r\textrm{.} $$ \item \emph{Proof of the sufficient condition of (K3$^* $)} The proof is similar to that of the sufficient condition of (K1$^{* }$). \end{itemize} \end{proof} \begin{proof}[Proof of Lemma \ref{lem:kar:001}]~ \begin{itemize} \item \emph{Proof of (i)} Let us prove that $V_r$ defined in (\ref{eq:20140405:002}) belongs to $\M$ with $\M$-index $\kappa_{V_r}=\kappa_U-(r+1)$. Choose $\rho=-\kappa_U+r+1>0$ and $0<\epsilon<\rho$. Note that $x^{\rho\pm\epsilon}\to\infty$ since $\rho\pm\epsilon>0$. Combining, for $x>1$, under the assumption $r+1>\kappa_U$, and for $U\in\M$, $$ \lim_{x\rightarrow\infty}V_r(x)=\int_b^1y^rU(y)dy+\int_1^{\infty}y^rU(y)dy=\infty\textrm{,} $$ and, $$ \displaystyle \lim_{x\rightarrow\infty}\frac{\left(V_r(x)\right)'}{\left(x^{\rho+\delta}\right)'} =\lim_{x\rightarrow\infty}\frac{U(x)}{(\rho+\delta)x^{-\kappa_U+\delta}} =\left\{ \begin{array}{ll} 0 & \textrm{if $\delta=\epsilon$} \\ \infty & \textrm{if $\delta=-\epsilon$} \end{array} \right. $$ provides, applying \thelr, $$ \displaystyle \lim_{x\rightarrow\infty}\frac{V_r(x)}{x^{\rho+\delta}} =\lim_{x\rightarrow\infty}\frac{\left(V_r(x)\right)'}{\left(x^{\rho+\delta}\right)'} =\left\{ \begin{array}{ll} 0 & \textrm{if $\delta=\epsilon$} \\ \infty & \textrm{if $\delta=-\epsilon$,} \end{array} \right. $$ which implies that $V_r\in\mathcal{M}$ with $\mathcal{M}$-index $\kappa_{V_r}=-\rho=\kappa_U-(r+1)$, as required. \item \emph{Proof of (ii)} First let us check that $W_r$ is well-defined. Let $\delta=(\kappa_U-r-1)/2$ ($>0$ by assumption).
We have, for $U\in\mathcal{M}$, $ \displaystyle \lim_{x\rightarrow\infty}\frac{U(x)}{x^{-\kappa_U+\delta}}=0\textrm{,} $ which implies that for $c>0$ there exists $x_0\geq1$ such that for all $x\geq x_0$, $ \displaystyle \frac{U(x)}{x^{-\kappa_U+\delta}}\leq c\textrm{} $. Hence, one has, $\forall$ $x\geq x_0$, $$ \int_x^{\infty}y^rU(y)dy \leq c\int_x^{\infty}y^{-\kappa_U+\delta+r}dy =c\int_x^{\infty}y^{\frac{-\kappa_U+r+1}{2}-1}dy <\infty $$ because $-\kappa_U+r+1<0$. Then, we can conclude, $U$ being bounded on finite intervals, that $W_r$ is well-defined. Now choose $\rho=-\kappa_U+r+1<0$ and $0<\epsilon<-\rho$. We have $x^{\rho\pm\epsilon}\to0$ since $\rho\pm\epsilon<0$. We will proceed as in (i). For $x>1$, under the assumption $r+1<\kappa_U$, for $U\in\M$, we have $ \displaystyle \lim_{x\rightarrow\infty}W_r(x)=\lim_{x\rightarrow\infty}\int_x^\infty y^rU(y)dy=0\textrm{,} $ and $ \displaystyle \lim_{x\rightarrow\infty}\frac{\left(W_r(x)\right)'}{\left(x^{\rho+\delta}\right)'} =\lim_{x\rightarrow\infty}-\frac{U(x)}{(\rho+\delta)x^{-\kappa_U+\delta}} =\left\{ \begin{array}{ll} 0 & \textrm{if $\delta=\epsilon$} \\ \infty & \textrm{if $\delta=-\epsilon$} \end{array} \right. $. Hence applying \thelr\ gives $$ \displaystyle \lim_{x\rightarrow\infty}\frac{W_r(x)}{x^{\rho+\delta}} =\lim_{x\rightarrow\infty}\frac{\left(W_r(x)\right)'}{\left(x^{\rho+\delta}\right)'} =\left\{ \begin{array}{ll} 0 & \textrm{if $\delta=\epsilon$} \\ \infty & \textrm{if $\delta=-\epsilon$,} \end{array} \right. $$ which implies that $W_r\in\mathcal{M}$ with $\mathcal{M}$-index $\kappa_{W_r}=-\rho=\kappa_U-(r+1)$.
\end{itemize} \end{proof} \subsection{Section 2.2} \label{ProofsofSection2.2} \begin{proof}[Proof of Theorem \ref{teo:KaramatasTauberianTheoremExtension}]~ \begin{itemize} \item\emph{Proof of (i)} Changing the order of integration in \eqref{eq:20140328:001}, using the continuity of $U$ and the assumption $U(0^+)=0$, gives, for $s>0$, $$ \widehat{U}(s)=s\int_{(0;\infty)}e^{-xs}U(x)dx \, , $$ or, with the change of variable $y=x/s$, $$ \widehat{U}\left(\frac{1}{s}\right)=\int_{(0;\infty)}e^{-y}U(sy)dy\textrm{.} $$ Let $U\in\M$ with $\M$-index $(-\alpha)<0$. Let $0<\epsilon< \alpha$. We have, via Theorems \ref{teo:main:001} and \ref{teo:main:002}, that there exists $x_0>1$ such that, for $x\geq x_0$, $$ x^{\alpha-\epsilon}\leq U(x)\leq x^{\alpha+\epsilon}\textrm{.} $$ Hence, for $s>1$, we can write $$ \int_{x_0/s}^{\infty}e^{-x}(xs)^{\alpha-\epsilon}dx\leq\int_{x_0/s}^{\infty}e^{-x}U(xs)dx\leq\int_{x_0/s}^{\infty}e^{-x}(xs)^{\alpha+\epsilon}dx $$ so $ \displaystyle \frac{\int_0^{x_0/s}e^{-x}U(xs)dx+\int_{x_0/s}^{\infty}e^{-x}x^{\alpha-\epsilon}dx}{s^{-\alpha+\epsilon}}\leq\widehat{U}\left(\frac{1}{s}\right) \leq \frac{\int_0^{x_0/s}e^{-x}U(xs)dx+\int_{x_0/s}^{\infty}e^{-x}x^{\alpha+\epsilon}dx}{s^{-\alpha-\epsilon}}\textrm{} $, from which we deduce that $$ -\alpha-\epsilon\leq\lim_{s\rightarrow\infty}-\frac{\log\left(\widehat{U}(1/s)\right)}{\log(s)}\leq-\alpha+\epsilon\textrm{.} $$ Then we obtain, $\epsilon$ being arbitrary, $ \displaystyle \lim_{s\rightarrow\infty}-\frac{\log\left(\widehat{U}(1/s)\right)}{\log(s)}=-\alpha\textrm{.} $ The conclusion follows by applying Theorem \ref{teo:main:001} to get $\widehat{U}\circ g\in\M$ with $g(s)=1/s$ ($s>0$), and Theorem \ref{teo:main:002} for the $\M$-index. \item\emph{Proof of (ii)} Let $0<\epsilon< \alpha$.
Since we assumed $U(0^+)=0$, we have, for $s>1$, \begin{equation}\label{eq:20141107:010} e^{-1}U(s) \leq \int_{(0;s)}e^{-\frac{x}{s}}dU(x) \leq \int_{(0;\infty)}e^{-\frac{x}{s}}dU(x) = \widehat{U}\left(\frac{1}{s}\right). \end{equation} Changing the order of integration in the last integral (on the right hand side of the previous equation), and using the continuity of $U$ and the fact that $U(0^+)=0$, gives, for $s>0$, \begin{equation}\label{eq:20141107:011} \widehat{U}\left(\frac{1}{s}\right)=\int_{(0;\infty)}e^{-x}U(sx)dx\textrm{.} \end{equation} Set $\displaystyle I_{\eta}=\int_{(0;\infty)}e^{-x}x^\eta dx$, for $\eta\in [0,\alpha)$ (such that $x^{-\eta}U(x)$ concave, by assumption). Introducing the function $\displaystyle V(x):=I_{\eta}\,(sx)^{-\eta}\,U(sx)$, which is concave, and the rv $Z$ having the probability density function defined on $\Rset^+$ by $\displaystyle e^{-x}x^\eta\big/I_\eta$, we can write $$ \int_{(0;\infty)}e^{-x}U(sx)dx = s^\eta \int_{(0;\infty)} V(x)\, \frac{e^{-x}x^\eta}{I_\eta} dx = s^\eta \,E[V(Z)] \le s^\eta \,V\left(E[Z]\right) $$ applying Jensen's inequality. Hence we obtain, using that $\displaystyle E[Z]=I_{\eta+1}\big/I_{\eta}$ and the definition of $V$, $$ \int_{(0;\infty)}e^{-x}U(sx)dx \le \frac{I_{\eta}^{\,\eta+1}}{I_{\eta +1}^{\,\eta}}\,U\left(s\,I_{\eta+1}\big/I_{\eta}\right), $$ from which we deduce, using \eqref{eq:20141107:011}, that \begin{equation*} \frac1{s^{\alpha-\epsilon}}\widehat{U}\left(\frac{1}{s}\right) \le \frac{I_{\eta}^{\,\eta+1-\alpha+\epsilon}}{I_{\eta +1}^{\,\eta-\alpha+\epsilon}}\times \frac{U\left(s\,I_{\eta+1}\big/I_{\eta}\right)}{\left(s\,I_{\eta+1}\big/I_{\eta} \right)^{\alpha-\epsilon}} \,. 
\end{equation*} Therefore, since $\widehat{U}\circ g\in\M$ with $g(s)=1/s$ and $\M$-index ($-\alpha$), the left-hand side of the previous inequality tends to infinity with $s$, so that $$ \frac{I_{\eta}^{\,\eta+1-\alpha+\epsilon}}{I_{\eta +1}^{\,\eta-\alpha+\epsilon}}\times \frac{U\left(s\,I_{\eta+1}\big/I_{\eta}\right)}{\left(s\,I_{\eta+1}\big/I_{\eta} \right)^{\alpha-\epsilon}} \, \underset{s\to\infty}{\longrightarrow}\, \infty \, . $$ But $\widehat{U}\circ g\in\M$ with $\M$-index ($-\alpha$) also implies in \eqref{eq:20141107:010} that $\displaystyle \frac{e^{-1}U(s)}{s^{\alpha + \epsilon}}\, \underset{s\to\infty}{\longrightarrow} \, 0$.\\ From these last two limits, we obtain that $U\in\M$ with $\M$-index $(-\alpha)$. \end{itemize} \end{proof} \subsection{Section \ref{Section2.3}} \label{ProofsofSection2.3} \begin{proof}[Proof of Proposition \ref{prop:main:vonmises:001}]~ \begin{itemize} \item \emph{Proof of (i)} Suppose that $F$ satisfies $ \displaystyle \lim_{x\rightarrow\infty}\frac{x\,F'(x)}{\overline{F}(x)}=\alpha $. Applying L'H\^{o}pital's rule gives $$ \lim_{x\rightarrow\infty}\frac{x\,F'(x)}{\overline{F}(x)}= \lim_{x\rightarrow\infty}-\frac{\left(\log\left(\overline{F}(x)\right)\right)'}{\left(\log(x)\right)'}= \lim_{x\rightarrow\infty}-\frac{\log\left(\overline{F}(x)\right)}{\log(x)}= \alpha\textrm{,} $$ hence $\overline{F}\in\M$, via Theorem \ref{teo:main:001}, with $\M$-index $\kappa_{\overline{F}}=\alpha$, via Theorem \ref{teo:main:002}. \item \emph{Proof of (ii)} Suppose that $F$ satisfies $\displaystyle \lim_{x\rightarrow\infty}\left(\frac{\overline{F}(x)}{F'(x)}\right)'=0$.
This implies that, for all $\epsilon>0$, there exists $x_0>0$ such that, for $x\geq x_0$, $ \displaystyle -\epsilon\leq\left(\frac{\overline{F}(x)}{F'(x)}\right)'\leq\epsilon\textrm{.} $ Integrating this inequality on $[x_0,x]$ gives $$ -\epsilon(x-x_0)\leq\left(\frac{\overline{F}(x)}{F'(x)}\right)-\left(\frac{\overline{F}(x_0)}{F'(x_0)}\right)\leq\epsilon(x-x_0)\textrm{,} $$ from which we deduce $ \displaystyle -\epsilon\leq\lim_{x\rightarrow\infty}\frac{\overline{F}(x)}{xF'(x)}\leq\epsilon\textrm{,} $ hence $ \displaystyle \lim_{x\rightarrow\infty}\frac{\overline{F}(x)}{xF'(x)}=0\textrm{.} $ Then, since $F'(x)>0$ for $x$ large, arguing as in the proof of (i) gives $$ \lim_{x\rightarrow\infty}\frac{x\,F'(x)}{\overline{F}(x)}= \lim_{x\rightarrow\infty}-\frac{\left(\log\left(\overline{F}(x)\right)\right)'}{\left(\log(x)\right)'}= \lim_{x\rightarrow\infty}-\frac{\log\left(\overline{F}(x)\right)}{\log(x)}=\infty\textrm{.} $$ We conclude that $\overline{F}\in\M_\infty$, via Theorem \ref{teo:main:001extension}. \end{itemize} \end{proof} \begin{proof}[Proof of Theorem \ref{teo:20140411:001}]~ \begin{itemize} \item Let $F\in DA(\Phi_\alpha)$, $\alpha>0$. Then Theorem \ref{teo:mises:dehaanferreira0} and Proposition \ref{prop:20140329:strictsubset} imply that $\overline{F}\in RV_{-\alpha}\subseteq\M$ with $\M$-index $\kappa_{\overline{F}}=-\alpha$. \item Assume $F\in DA(\Lambda_{\infty})$. Applying Corollary \ref{cor:mises:dehaan0} gives $ \displaystyle \lim_{x\rightarrow\infty}-\frac{\log\left(\overline{F}(x)\right)}{\log(x)}=\infty $. Theorem \ref{teo:main:001extension} allows us to conclude. \end{itemize} \end{proof} \begin{proof}[Proof of Example \ref{exm:20140925:001}]~ Let us check that $F\not \in DA(\Lambda_{\infty})$. We argue by contradiction. Suppose that $F$ defined in \eqref{eq:20140526:002} belongs to $DA(\Lambda_{\infty})$.
By applying Theorem \ref{teo:mises:gnedenko:20140412:001}, we conclude that there exists a function $A$ such that $A(x)\rightarrow0$ as $x\rightarrow\infty$ and \eqref{eq:mises:20140412:010bis} holds. Introducing the definition \eqref{eq:20140526:002} into \eqref{eq:mises:20140412:010bis}, we can write, for all $x\in\Rset$, \begin{eqnarray} \lefteqn{\lim_{z\rightarrow\infty}\Big(\lfloor z\,(1+A(z)\,x)\rfloor\,\log\big(z\,(1+A(z)\,x)\big)-\lfloor z\rfloor\,\log(z)\Big)} \nonumber \\ & & =\lim_{z\rightarrow\infty}\Big(\big(\lfloor z\,(1+A(z)\,x)\rfloor-\lfloor z\rfloor\big)\log\left(z\right)+\lfloor z\,(1+A(z)\,x)\rfloor\,\log\big(1+A(z)\,x\big)\Big)=x\textrm{.} \label{eq:20140413:001} \end{eqnarray} Let us show that the existence of such a function $A$ leads to a contradiction for suitably chosen values of $x$. \begin{itemize} \item Suppose $\displaystyle \lim_{z\to\infty} z\,A(z)=c>0$. Take $x>0$ such that $cx/2\geq1$ and $z$ large enough such that $z\,A(z)\geq c/2$. On the one hand, we have $ \lfloor z\,(1+A(z)\,x)\rfloor-\lfloor z\rfloor>0 $ since $ z\,(1+A(z)\,x) \geq z+cx/2 \geq z+1 $. This implies that $$ \lim_{z\rightarrow\infty}\big(\lfloor z\,(1+A(z)\,x)\rfloor-\lfloor z\rfloor\big)\log\left(z\right)=\infty\textrm{.} $$ On the other hand, we have, taking $z$ large enough to have $\log\big(1+A(z)\,x\big)\approx A(z)\,x$ and $z\,A(z)\leq 2c$, \begin{eqnarray*} \lefteqn{\lfloor z\,(1+A(z)\,x)\rfloor\,\log\big(1+A(z)\,x\big)} \\ & & \leq z\,(1+A(z)\,x)\,\log\big(1+A(z)\,x\big) \approx z\,(1+A(z)\,x)\,A(z)\,x \leq 2\,c\,(1+A(z)\,x)\,x<\infty\textrm{.} \end{eqnarray*} Combining these results and taking $z\to\infty$ contradicts \eqref{eq:20140413:001}. \item Suppose $\displaystyle \lim_{z\to\infty} z\,A(z)=0$. Let $x>0$. On the one hand, the limit $$ \lim_{z\rightarrow\infty}\big(\lfloor z\,(1+A(z)\,x)\rfloor-\lfloor z\rfloor\big)\log\left(z\right) $$ may be $0$ or $\infty$, depending on the behavior of $z\,A(z)$ as $z\to\infty$.
On the other hand, we have, taking $z$ large enough such that $\log\big(1+A(z)\,x\big)\approx A(z)\,x$, \begin{eqnarray*} \lefteqn{\lfloor z\,(1+A(z)\,x)\rfloor\,\log\big(1+A(z)\,x\big)} \\ & & \leq z\,(1+A(z)\,x)\,\log\big(1+A(z)\,x\big) \approx z\,(1+A(z)\,x)\,A(z)\,x \to0 \quad\textrm{as} \quad z\to\infty\textrm{.} \end{eqnarray*} Combining these results contradicts \eqref{eq:20140413:001}. \end{itemize} \end{proof} \end{document}
\begin{document} \title[On new inequalities]{New generalization fractional inequalities of Ostrowski-Gr\"{u}ss type } \author{Mehmet Zeki Sarikaya$^{\star }$} \address{Department of Mathematics, Faculty of Science and Arts, D\"{u}zce University, D\"{u}zce, Turkey} \email{[email protected]} \thanks{$^{\star }$corresponding author} \author{Hatice YALDIZ} \email{[email protected]} \subjclass[2000]{ 26D15, 41A55, 26D10, 26A33 } \keywords{Montgomery identity, fractional integral, Ostrowski inequality, Gr\"{u}ss inequality.} \begin{abstract} In this paper, we use the Riemann-Liouville fractional integrals to establish some new integral inequalities of Ostrowski-Gr\"{u}ss type. From our results, the classical Ostrowski-Gr\"{u}ss type inequalities can be deduced as some special cases. \end{abstract} \maketitle \section{Introduction} Let $f:[a,b]\rightarrow \mathbb{R}$ be continuous on $\left[ a,b\right] $ and differentiable on $\left( a,b\right) $ and assume $\left\vert f^{\prime }(x)\right\vert \leq M$ for all $x\in (a,b).$ Then the following inequality holds: \begin{equation} \left\vert S(f;a,b)\right\vert \leq \frac{M}{b-a}\left[ \left( \frac{b-a}{2} \right) ^{2}+\left( x-\frac{a+b}{2}\right) ^{2}\right] \label{Z} \end{equation} for all $x\in \left[ a,b\right] $ where \begin{equation*} S(f;a,b)=f(x)-\mathcal{M}\left( f;a,b\right) \end{equation*} and \begin{equation} \mathcal{M}\left( f;a,b\right) =\frac{1}{b-a}\int_{a}^{b}f(x)dx. \label{Z1} \end{equation} This inequality is well known in the literature as the Ostrowski inequality. In 1882, P. L.
\v{C}eby\v{s}ev \cite{Cebysev} gave the following inequality: \begin{equation} \left\vert T(f,g)\right\vert \leq \frac{1}{12}(b-a)^{2}\left\Vert f^{\prime }\right\Vert _{\infty }\left\Vert g^{\prime }\right\Vert _{\infty }, \label{Z2} \end{equation} where $f,g:[a,b]\rightarrow \mathbb{R}$ are absolutely continuous functions whose first derivatives $f^{\prime }$ and $g^{\prime }$ are bounded, \begin{eqnarray} T(f,g) &=&\frac{1}{b-a}\int\limits_{a}^{b}f(x)g(x)dx-\left( \frac{1}{b-a} \int\limits_{a}^{b}f(x)dx\right) \left( \frac{1}{b-a}\int \limits_{a}^{b}g(x)dx\right) \notag \\ && \label{Z3} \\ &=&\mathcal{M}\left( fg;a,b\right) -\mathcal{M}\left( f;a,b\right) \mathcal{M}\left( g;a,b\right) \notag \end{eqnarray} and $\left\Vert .\right\Vert _{\infty }$ denotes the norm in $L_{\infty }[a,b]$ defined as $\left\Vert p\right\Vert _{\infty }=\underset{t\in \lbrack a,b]}{ess\sup }\left\vert p(t)\right\vert .$ In 1935, G. Gr\"{u}ss \cite{Gruss} proved the following inequality: \begin{equation} \left\vert \frac{1}{b-a}\int\limits_{a}^{b}f(x)g(x)dx-\frac{1}{b-a} \int\limits_{a}^{b}f(x)dx\frac{1}{b-a}\int\limits_{a}^{b}g(x)dx\right\vert \leq \frac{1}{4}(\Phi -\varphi )(\Gamma -\gamma ), \label{Z4} \end{equation} provided that $f$ and $g$ are two integrable functions on $[a,b]$ satisfying the condition \begin{equation} \varphi \leq f(x)\leq \Phi \text{ \ and \ }\gamma \leq g(x)\leq \Gamma \text{ for all }x\in \lbrack a,b]. \label{Z5} \end{equation} The constant $\frac{1}{4}$ is best possible. From \cite{[13]}, if $f:[a,b]\rightarrow \mathbb{R}$ is differentiable on $ [a,b]$ with the first derivative $f^{\prime }$ integrable on $[a,b],$ then the Montgomery identity holds: \begin{equation} f(x)=\frac{1}{b-a}\int\limits_{a}^{b}f(t)dt+\int\limits_{a}^{b}P_{1}(x,t)f^{ \prime }(t)dt, \label{7} \end{equation} where $P_{1}(x,t)$ is the Peano kernel defined by \begin{equation*} P_{1}(x,t)=\left\{ \begin{array}{ll} \dfrac{t-a}{b-a}, & a\leq t<x \\ \dfrac{t-b}{b-a}, & x\leq t\leq b.
\end{array} \right. \end{equation*} This inequality provides an upper bound for the approximation of the integral mean of a function $f$ by the functional value $f(x)$ at $x\in \lbrack a,b].$ In 2001, Cheng \cite{[1]} proved the following Ostrowski-Gr\"{u}ss type integral inequality. \begin{theorem} \label{t1} Let $I\subset \mathbb{R} $ be an open interval, $a,b\in I,a<b$. Suppose that $f:I\rightarrow \mathbb{R} $ is a differentiable function such that there exist constants $\varphi ,\Phi \in \mathbb{R} $ with $\varphi \leq f^{^{\prime }}\left( x\right) \leq \Phi $, $x\in \left[ a,b\right] $. Then we have \begin{eqnarray} &&\left\vert f\left( x\right) -\frac{f\left( b\right) -f\left( a\right) }{b-a }\left( x-\frac{a+b}{2}\right) -\frac{1}{b-a}\int\limits_{a}^{b}f\left( t\right) dt\right\vert \label{hh} \\ && \notag \\ &\leq &\frac{1}{4}\left( b-a\right) \left( \Phi -\varphi \right) \text{, for all }x\in \left[ a,b\right] . \notag \end{eqnarray} \end{theorem} In \cite{[5]}, Matic et al. gave the following theorem by use of the Gr\"{u}ss inequality: \begin{theorem} \label{t2} Let the assumptions of Theorem \ref{t1} hold. Then for all $x\in \left[ a,b\right] $, we have \begin{eqnarray} &&\left\vert f\left( x\right) -\frac{f\left( b\right) -f\left( a\right) }{b-a }\left( x-\frac{a+b}{2}\right) -\frac{1}{b-a}\int\limits_{a}^{b}f\left( t\right) dt\right\vert \label{hhh} \\ && \notag \\ &\leq &\frac{1}{4\sqrt{3}}\left( b-a\right) \left( \Phi -\varphi \right) . \notag \end{eqnarray} \end{theorem} In \cite{bernant}, Barnett et al., by the use of Chebyshev's functional, improved the Matic et al.
result by inserting, on the right-hand side of (\ref{hhh}), an intermediate bound in terms of the Euclidean norm of $f^{\prime }$, as follows: \begin{theorem} Let $f:[a,b]\rightarrow \mathbb{R} $ be an absolutely continuous mapping whose derivative $f^{\prime }\in L_{2}[a,b]$. Then we have \begin{eqnarray*} &&\left\vert f\left( x\right) -\frac{f\left( b\right) -f\left( a\right) }{b-a }\left( x-\frac{a+b}{2}\right) -\frac{1}{b-a}\int\limits_{a}^{b}f\left( t\right) dt\right\vert \\ && \\ &\leq &\frac{\left( b-a\right) }{2\sqrt{3}}\left( \frac{1}{(b-a)}\left\Vert f^{^{\prime }}\right\Vert _{2}^{2}-\left( \frac{f\left( b\right) -f\left( a\right) }{(b-a)}\right) ^{2}\right) ^{\frac{1}{2}} \\ && \\ &\leq &\frac{\left( b-a\right) \left( \Phi -\varphi \right) }{4\sqrt{3}}, \text{ if }\varphi \leq f^{\prime }(t)\leq \Phi \text{ for a.e. }t\text{ on }[a,b], \end{eqnarray*} for all $x\in \left[ a,b\right] .$ \end{theorem} During the past few years many researchers have given considerable attention to the above inequalities, and various generalizations, extensions and variants of these inequalities have appeared in the literature; see \cite{[1]}-\cite{bernant}, \cite{[5]}, \cite{[24]} and the references cited therein. For recent results and generalizations concerning Ostrowski and Gr\"{u}ss inequalities, we refer the reader to the recent papers \cite{[1]}- \cite{bernant}, \cite{[5]}, \cite{zafar}-\cite{[24]}. The theory of fractional calculus has seen an intensive development over the last few decades. It has been shown that derivatives and integrals of fractional type provide an adequate mathematical modelling of real objects and processes; see \cite{[6]}-\cite{sarikaya}. Therefore, the study of fractional differential equations requires further development of inequalities of fractional type. The main aim of this work is to develop new integral inequalities of Ostrowski-Gr\"{u}ss type for Riemann-Liouville fractional integrals.
From our results, the classical Ostrowski-Gr\"{u}ss type inequalities can be deduced as some special cases. Let us begin by recalling the known results of this type. In \cite{[6]} and \cite{[21]}, the authors established some inequalities for differentiable mappings which are connected with the Ostrowski type inequality by using the Riemann-Liouville fractional integrals, and they used the following lemma to prove their results: \begin{lemma} \label{l} Let $f:I\subset \mathbb{R}\rightarrow \mathbb{R}$ be a differentiable function on $I^{\circ }$ with $a,b\in I$ ($a<b$) and $ f^{\prime }\in L_{1}[a,b]$, then \begin{equation} f(x)=\frac{\Gamma (\alpha )}{b-a}(b-x)^{1-\alpha }{\Large J}_{a}^{\alpha }f(b)-{\Large J}_{a}^{\alpha -1}(P_{2}(x,b)f(b))+{\Large J}_{a}^{\alpha }(P_{2}(x,b)f^{^{\prime }}(b)),\ \ \ \alpha \geq 1, \label{z} \end{equation} where $P_{2}(x,t)$ is the fractional Peano kernel defined by \begin{equation} P_{2}(x,t)=\left\{ \begin{array}{ll} \left( \dfrac{t-a}{b-a}\right) (b-x)^{1-\alpha }\Gamma (\alpha ), & a\leq t<x \\ & \\ \left( \dfrac{t-b}{b-a}\right) (b-x)^{1-\alpha }\Gamma (\alpha ), & x\leq t\leq b. \end{array} \right. \label{z1} \end{equation} \end{lemma} In \cite{[6]} and \cite{[21]}, the authors derived the following interesting fractional integral inequality: \begin{eqnarray*} &&\left\vert f\left( x\right) -\frac{1}{b-a}\left( b-x\right) ^{1-\alpha }\Gamma \left( \alpha \right) J_{a}^{\alpha }\left( f\left( b\right) \right) +J_{a}^{\alpha -1}\left( P_{2}\left( x,b\right) f\left( b\right) \right) \right\vert \\ && \\ &\leq &\frac{M}{\alpha \left( \alpha +1\right) }\left[ \left( b-x\right) \left( 2\alpha \left( \frac{b-x}{b-a}\right) -\alpha -1\right) +\left( b-a\right) ^{\alpha }\left( b-x\right) ^{1-\alpha }\right] \end{eqnarray*} under the assumption that $\left\vert f^{\prime }\left( x\right) \right\vert \leq M$, for any $x\in \left[ a,b\right] $.
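As a simple consistency check, note that for $\alpha =1$ the fractional Peano kernel (\ref{z1}) coincides with the classical kernel $P_{1}(x,t)$ from (\ref{7}), since $(b-x)^{1-\alpha }\Gamma (\alpha )=1$. Moreover, in this case $J_{a}^{1}f(b)=\int_{a}^{b}f(t)dt$, $J_{a}^{0}\left( P_{2}(x,b)f(b)\right) =P_{2}(x,b)f(b)=0$ because $P_{2}(x,b)=0$, and $J_{a}^{1}\left( P_{2}(x,b)f^{\prime }(b)\right) =\int_{a}^{b}P_{1}(x,t)f^{\prime }(t)dt$, so that (\ref{z}) reduces to
\begin{equation*}
f(x)=\frac{1}{b-a}\int_{a}^{b}f(t)dt+\int_{a}^{b}P_{1}(x,t)f^{\prime }(t)dt,
\end{equation*}
which is precisely the Montgomery identity (\ref{7}).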
Firstly, we give some necessary definitions and mathematical preliminaries of fractional calculus theory which are used further in this paper. For more details, one can consult \cite{[17]} and \cite{[12]}. \begin{definition} The Riemann-Liouville fractional integral operator of order $\alpha \geq 0$ with $a\geq 0$ is defined as \begin{eqnarray*} J_{a}^{\alpha }f(x) &=&\frac{1}{\Gamma (\alpha )}\dint\limits_{a}^{x}(x-t)^{ \alpha -1}f(t)dt, \\ J_{a}^{0}f(x) &=&f(x). \end{eqnarray*} \end{definition} Recently, many authors have studied a number of inequalities by using the Riemann-Liouville fractional integrals; see (\cite{[6]}-\cite{sarikaya}) and the references cited therein. \section{Main Results} \begin{theorem} \label{t} Let $f:I\subseteq \mathbb{R} \rightarrow \mathbb{R} $ be a differentiable mapping in $I^{0}$ (interior of $I$), and $a,b\in I^{0} $ with $a<b$ and $f^{\prime }\in L_{2}[a,b]$. If $f^{^{\prime }}:\left( a,b\right) \rightarrow \mathbb{R} $ is bounded on $\left( a,b\right) $ with $\varphi \leq f^{\prime }(x)\leq \Phi $, then we have \begin{eqnarray} &&\left\vert \frac{f\left( x\right) }{\Gamma \left( \alpha \right) }-\frac{ \left( b-x\right) ^{1-\alpha }}{\left( b-a\right) }J_{a}^{\alpha }f\left( b\right) +\frac{1}{\Gamma \left( \alpha \right) }J_{a}^{\alpha -1}\left( P\left( x,b\right) f\left( b\right) \right) \right. \notag \\ && \notag \\ &&\left.
-\left( \frac{f\left( b\right) -f\left( a\right) }{b-a}\right) \times \left( \frac{\left( b-x\right) ^{1-\alpha }\left( b-a\right) ^{\alpha }}{\Gamma \left( \alpha +2\right) }-\frac{\left( b-x\right) }{\Gamma \left( \alpha +1\right) }\right) \right\vert \notag \\ && \label{h} \\ &\leq &(b-a)\left( K(x)\right) ^{\frac{1}{2}}\left( \frac{1}{(b-a)\Gamma ^{2}\left( \alpha \right) }\left\Vert f^{^{\prime }}\right\Vert _{2}^{2}-\left( \frac{f\left( b\right) -f\left( a\right) }{(b-a)\Gamma \left( \alpha \right) }\right) ^{2}\right) ^{\frac{1}{2}} \notag \\ && \notag \\ &\leq &\frac{\left( K(x)\right) ^{\frac{1}{2}}}{2\Gamma \left( \alpha \right) }(b-a)(\Phi -\varphi ) \notag \end{eqnarray} for all $x\in \lbrack a,b]$ and $\alpha \geq 1$ where \begin{eqnarray*} K(x) &=&\left( b-x\right) ^{1-\alpha }\left( b-a\right) ^{2\alpha -2}\left( \frac{1}{2\alpha +1}+\frac{1}{2\alpha -1}-\frac{1}{\alpha }\right) \\ && \\ &&+\frac{(b-x)^{\alpha }}{\left( b-a\right) ^{2}}\left( \frac{b-x}{\alpha }- \frac{b-a}{2\alpha -1}\right) -\left( \frac{\left( b-x\right) ^{1-\alpha }\left( b-a\right) ^{\alpha -1}}{\alpha \left( \alpha +1\right) }-\frac{ \left( b-x\right) }{\alpha \left( b-a\right) }\right) ^{2}. \end{eqnarray*} \end{theorem} \begin{proof} We consider the fractional Peano kernel $P_{2}:\left[ a,b\right] ^{2}\rightarrow \mathbb{R} $ as defined in (\ref{z1}). 
Using Korkine's identity \begin{equation*} T\left( f,g\right) :=\frac{1}{2\left( b-a\right) ^{2}}\dint\limits_{a}^{b} \dint\limits_{a}^{b}\left( f\left( t\right) -f\left( s\right) \right) \left( g\left( t\right) -g\left( s\right) \right) dsdt \end{equation*} we obtain \begin{eqnarray*} &&\frac{1}{\left( b-a\right) \Gamma ^{2}(\alpha )}\dint\limits_{a}^{b}\left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) f^{^{\prime }}\left( t\right) dt \\ &&-\left( \frac{1}{\left( b-a\right) \Gamma (\alpha )}\dint\limits_{a}^{b} \left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) dt\right) \left( \frac{1 }{\left( b-a\right) \Gamma (\alpha )}\dint\limits_{a}^{b}f^{^{\prime }}\left( t\right) dt\right) \\ && \\ &=&\frac{1}{2\left( b-a\right) ^{2}\Gamma ^{2}(\alpha )}\dint\limits_{a}^{b} \dint\limits_{a}^{b}\left[ \left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) -\left( b-s\right) ^{\alpha -1}P_{2}\left( x,s\right) \right] \left[ f^{^{\prime }}\left( t\right) -f^{^{\prime }}\left( s\right) \right] dsdt \end{eqnarray*} i.e. \begin{eqnarray} &&\frac{1}{\left( b-a\right) \Gamma \left( \alpha \right) }J_{a}^{\alpha }\left( P_{2}\left( x,b\right) f^{^{\prime }}\left( b\right) \right) -\frac{1 }{\left( b-a\right) \Gamma \left( \alpha \right) }J_{a}^{\alpha }\left( P_{2}\left( x,b\right) \right) \left( \frac{f\left( b\right) -f\left( a\right) }{b-a}\right) \notag \\ && \label{h1} \\ &=&\frac{1}{2\left( b-a\right) ^{2}\Gamma ^{2}(\alpha )}\dint\limits_{a}^{b} \dint\limits_{a}^{b}\left[ \left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) -\left( b-s\right) ^{\alpha -1}P_{2}\left( x,s\right) \right] \left[ f^{^{\prime }}\left( t\right) -f^{^{\prime }}\left( s\right) \right] dsdt. 
\notag \end{eqnarray} Since \begin{equation} J_{a}^{\alpha }\left( P_{2}\left( x,b\right) f^{^{\prime }}\left( b\right) \right) =f\left( x\right) -\frac{\Gamma \left( \alpha \right) }{b-a}\left( b-x\right) ^{1-\alpha }J_{a}^{\alpha }f\left( b\right) +J_{a}^{\alpha -1}\left( P_{2}\left( x,b\right) f\left( b\right) \right) \label{h2} \end{equation} and \begin{equation} J_{a}^{\alpha }\left( P_{2}\left( x,b\right) \right) =\frac{\left( b-x\right) ^{1-\alpha }\left( b-a\right) ^{\alpha }}{\alpha \left( \alpha +1\right) }-\frac{\left( b-x\right) }{\alpha }, \label{h3} \end{equation} then by (\ref{h1}) we get the following identity, \begin{eqnarray} &&\frac{f\left( x\right) }{\left( b-a\right) \Gamma \left( \alpha \right) }- \frac{\left( b-x\right) ^{1-\alpha }}{\left( b-a\right) ^{2}}J_{a}^{\alpha }f\left( b\right) +\frac{1}{\left( b-a\right) \Gamma \left( \alpha \right) } J_{a}^{\alpha -1}\left( P\left( x,b\right) f\left( b\right) \right) \notag \\ && \notag \\ &&-\frac{1}{(b-a)}\left( \frac{f\left( b\right) -f\left( a\right) }{b-a} \right) \times \left( \frac{\left( b-x\right) ^{1-\alpha }\left( b-a\right) ^{\alpha }}{\Gamma \left( \alpha +2\right) }-\frac{\left( b-x\right) }{ \Gamma \left( \alpha +1\right) }\right) \notag \\ && \label{h4} \\ &=&\frac{1}{2\left( b-a\right) ^{2}\Gamma ^{2}(\alpha )}\dint\limits_{a}^{b} \dint\limits_{a}^{b}\left[ \left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) -\left( b-s\right) ^{\alpha -1}P_{2}\left( x,s\right) \right] \left[ f^{^{\prime }}\left( t\right) -f^{^{\prime }}\left( s\right) \right] dsdt. 
\notag \end{eqnarray} Using the Cauchy-Schwarz inequality for double integrals, we write \begin{eqnarray} &&\frac{1}{2\left( b-a\right) ^{2}\Gamma ^{2}(\alpha )}\left\vert \dint\limits_{a}^{b}\dint\limits_{a}^{b}\left[ \left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) -\left( b-s\right) ^{\alpha -1}P_{2}\left( x,s\right) \right] \left[ f^{^{\prime }}\left( t\right) -f^{^{\prime }}\left( s\right) \right] dsdt\right\vert \notag \\ && \notag \\ &\leq &\left( \frac{1}{2\left( b-a\right) ^{2}\Gamma ^{2}(\alpha )} \dint\limits_{a}^{b}\dint\limits_{a}^{b}\left[ \left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) -\left( b-s\right) ^{\alpha -1}P_{2}\left( x,s\right) \right] ^{2}dsdt\right) ^{\frac{1}{2}} \notag \\ && \label{h5} \\ &&\times \left( \frac{1}{2\left( b-a\right) ^{2}\Gamma ^{2}(\alpha )} \dint\limits_{a}^{b}\dint\limits_{a}^{b}\left[ f^{^{\prime }}\left( t\right) -f^{^{\prime }}\left( s\right) \right] ^{2}dsdt\right) ^{\frac{1}{2}}. \notag \end{eqnarray} However, \begin{eqnarray} &&\frac{1}{2\left( b-a\right) ^{2}\Gamma ^{2}(\alpha )}\dint\limits_{a}^{b} \dint\limits_{a}^{b}\left[ \left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) -\left( b-s\right) ^{\alpha -1}P_{2}\left( x,s\right) \right] ^{2}dsdt \notag \\ && \notag \\ &=&\frac{1}{\left( b-a\right) \Gamma ^{2}(\alpha )}\dint\limits_{a}^{b} \left( b-t\right) ^{2\alpha -2}P_{2}^{2}\left( x,t\right) dt-\left( \frac{1}{ \left( b-a\right) \Gamma (\alpha )}\dint\limits_{a}^{b}\left( b-t\right) ^{\alpha -1}P_{2}\left( x,t\right) dt\right) ^{2} \notag \\ && \label{h6} \\ &=&\left( b-x\right) ^{1-\alpha }\left( b-a\right) ^{2\alpha -2}\left( \frac{ 1}{2\alpha +1}+\frac{1}{2\alpha -1}-\frac{1}{\alpha }\right) +\frac{\left( b-x\right) ^{\alpha }}{\left( b-a\right) ^{2}}\left( \frac{b-x}{\alpha }- \frac{b-a}{2\alpha -1}\right) \notag \\ && \notag \\ &&-\left( \frac{\left( b-x\right) ^{1-\alpha }\left( b-a\right) ^{\alpha -1} }{\alpha \left( \alpha +1\right) }-\frac{\left( b-x\right) }{\alpha \left( b-a\right)
}\right) ^{2}, \notag \end{eqnarray} and \begin{equation} \frac{1}{2\left( b-a\right) ^{2}\Gamma ^{2}\left( \alpha \right) } \dint\limits_{a}^{b}\dint\limits_{a}^{b}\left( f^{^{\prime }}\left( t\right) -f^{^{\prime }}\left( s\right) \right) ^{2}dsdt=\frac{1}{(b-a)\Gamma ^{2}\left( \alpha \right) }\left\Vert f^{^{\prime }}\right\Vert _{2}^{2}-\left( \frac{f\left( b\right) -f\left( a\right) }{(b-a)\Gamma \left( \alpha \right) }\right) ^{2}. \label{h7} \end{equation} Using (\ref{h5})-(\ref{h7}) together with the identity (\ref{h4}), we deduce the first inequality in (\ref{h}). Moreover, if $\varphi \leq f^{\prime }(t)\leq \Phi $ for almost every $t$ on $(a,b),$ then by using the Gr\"{u}ss inequality (\ref{Z4}), we get \begin{equation*} 0\leq \frac{1}{b-a}\dint\limits_{a}^{b}\left( f^{\prime }\left( t\right) \right) ^{2}dt-\left( \frac{1}{b-a}\dint\limits_{a}^{b}f^{\prime }\left( t\right) dt\right) ^{2}\leq \frac{1}{4}(\Phi -\varphi )^{2}, \end{equation*} which proves the last inequality of (\ref{h}). \end{proof} \begin{corollary} \label{c} Under the assumptions of Theorem \ref{t} with $\alpha =1$, the following inequality holds: \begin{eqnarray} &&\left\vert f\left( x\right) -\frac{1}{b-a}\dint\limits_{a}^{b}f\left( t\right) dt-\frac{f\left( b\right) -f\left( a\right) }{b-a}\left( x-\frac{a+b }{2}\right) \right\vert \label{c1} \\ && \notag \\ &\leq &\frac{(b-a)}{2\sqrt{3}}\left( \frac{1}{(b-a)}\left\Vert f^{^{\prime }}\right\Vert _{2}^{2}-\left( \frac{f\left( b\right) -f\left( a\right) }{ (b-a)}\right) ^{2}\right) ^{\frac{1}{2}} \notag \\ && \notag \\ &\leq &\frac{(b-a)(\Phi -\varphi )}{4\sqrt{3}}. \notag \end{eqnarray} \end{corollary} \begin{proof} The proof of Corollary \ref{c} is similar to that of Theorem \ref{t}; it suffices to take $\alpha =1$ in (\ref{h}) and note that $K(x)=\frac{1}{12}$ in this case.
\end{proof} \begin{remark} If we take $x=\frac{a+b}{2}$ in (\ref{c1}), it follows that \begin{eqnarray*} &&\left\vert f\left( \frac{a+b}{2}\right) -\frac{1}{b-a}\dint \limits_{a}^{b}f\left( t\right) dt\right\vert \leq \frac{(b-a)}{2\sqrt{3}} \left( \frac{1}{(b-a)}\left\Vert f^{^{\prime }}\right\Vert _{2}^{2}-\left( \frac{f\left( b\right) -f\left( a\right) }{(b-a)}\right) ^{2}\right) ^{\frac{ 1}{2}} \\ && \\ &\leq &\frac{(b-a)(\Phi -\varphi )}{4\sqrt{3}}. \end{eqnarray*} \end{remark} \end{document}
\begin{document} \title{{The Dirichlet problem in Lipschitz domains with boundary data in Besov spaces for higher order elliptic systems with rough coefficients} \thanks{2000 {\it Math Subject Classification.} Primary: 35G15, 35J55, 35J40; Secondary 35J67, 35E05, 46E39. \newline {\it Key words}: higher order elliptic systems, Besov spaces, weighted Sobolev spaces, mean oscillations, BMO, VMO, Lipschitz domains, Dirichlet problem \newline The work of the authors was supported in part by NSF DMS and FRG grants, as well as by the Swedish National Science Research Council}} \author{V.\, Maz'ya, M.\, Mitrea and T.\, Shaposhnikova} \date{~} \maketitle \begin{abstract} We settle the issue of well-posedness for the Dirichlet problem for a higher order elliptic system ${\mathcal L}(x,D_x)$ with complex-valued, bounded, measurable coefficients in a Lipschitz domain $\Omega$, with boundary data in Besov spaces. The main hypothesis under which our principal result is established is in the nature of best possible and requires that, at small scales, the mean oscillations of the unit normal to $\partial\Omega$ and of the coefficients of the differential operator ${\mathcal L}(x,D_x)$ are not too large. \end{abstract} \section{Introduction} \setcounter{equation}{0} A fundamental theme in the theory of partial differential equations, which has profound and intriguing connections with many other subareas of analysis, is the well-posedness of various classes of boundary value problems under sharp smoothness assumptions on the boundary of the domain and on the coefficients of the corresponding differential operator. In this paper we initiate a program broadly aimed at extending the scope of the agenda set forth by Agmon, Douglis, Nirenberg and Solonnikov (cf.
{\bf\cite{ADN}}, {\bf\cite{Sol1}}, {\bf\cite{Sol2}}) in connection with general elliptic boundary value problems on Sobolev-Besov scales, as to allow minimal smoothness assumptions (on the underlying domain and on the coefficients of the differential operator). Our main result is the solvability of the Dirichlet problem for general higher order elliptic systems in divergence form, with complex-valued, bounded, measurable coefficients in Lipschitz domains, and for boundary data in Besov spaces. In order to be more specific we need to introduce some notation. Let $m,l\in{\mathbb{N}}$ be two fixed integers and, for a bounded Lipschitz domain $\Omega$ in $\mathbb{R}^n$ (a formal definition is given in \S{6.1}) with outward unit normal $\nu=(\nu_1,...,\nu_n)$ consider the Dirichlet problem for the operator \begin{equation}\label{LOL} {\mathcal L}(X,D_X)\,{\mathcal U} :=\sum_{|\alpha|=|\beta|=m}D^\alpha(A_{\alpha\beta}(X)D^\beta{\mathcal U}) \end{equation} \noindent i.e., \begin{equation}\label{e0} \left\{ \begin{array}{l} \displaystyle{\sum_{|\alpha|=|\beta|=m} D^\alpha(A_{\alpha\beta}(X)\,D^\beta\,{\mathcal U})}=0 \qquad\mbox{for}\,\,X\in\Omega, \\[28pt] {\displaystyle\frac{\partial^k{\mathcal U}}{\partial\nu^k}}=g_k \,\,\quad\mbox{on}\,\,\partial\Omega,\qquad 0\leq k\leq m-1. \end{array} \right. \end{equation} \noindent Here and elsewhere, $D^\alpha=(-i\partial/\partial x_1)^{\alpha_1}\cdots (-i\partial/\partial x_n)^{\alpha_n}$ if $\alpha=(\alpha_1,...,\alpha_n)$. 
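To fix ideas, consider the simplest case $m=l=1$ with $A_{\alpha\beta}(X)=\delta_{\alpha\beta}$. Since $D_j=-i\partial/\partial x_j$, the operator (\ref{LOL}) becomes
\begin{equation*}
{\mathcal L}(X,D_X)\,{\mathcal U}=\sum_{j=1}^n D_j\bigl(D_j\,{\mathcal U}\bigr)=-\Delta\,{\mathcal U},
\end{equation*}
\noindent and (\ref{e0}) reduces to the classical Dirichlet problem for the Laplacian, with a single boundary datum $g_0$. For $m=2$, a model case (with suitable constant coefficients) is the biharmonic operator $\Delta^2$, for which both ${\mathcal U}$ and its normal derivative are prescribed on $\partial\Omega$.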
The coefficients $A_{\alpha\beta}$ are $l\times l$ matrix-valued functions with complex entries satisfying \begin{equation}\label{A-bdd} \sum_{|\alpha|=|\beta|=m}\|A_{\alpha\beta}\|_{L_\infty(\Omega)}\leq\kappa_1 \end{equation} \noindent for some finite constant $\kappa_1$, and such that the coercivity condition \begin{equation}\label{coercive} \Re\,\int_\Omega\sum_{|\alpha|=|\beta|=m}\langle A_{\alpha\beta}(X) D^\beta\,{\mathcal U}(X),\,D^\alpha\,{\mathcal U}(X)\rangle\,dX \geq\kappa_0\sum_{|\alpha|=m}\|D^\alpha\,{\mathcal U}\|^2_{L_2(\Omega)} \end{equation} \noindent with $\kappa_0=const>0$ holds for all ${\mathbb{C}}^l$-valued functions ${\mathcal U}\in C^\infty_0(\Omega)$. Throughout the paper, $\Re\,z$ denotes the real part of $z\in{\mathbb{C}}$ and $\langle\cdot,\cdot\rangle$ stands for the canonical inner product in $\mathbb{C}^l$. Since, generally speaking, $\nu$ is merely bounded and measurable, care should be exercised when defining iterated normal derivatives. For the setting we have in mind it is natural to take $\partial^k/\partial\nu^k:=(\sum_{j=1}^n\xi_j\partial/\partial x_j)^k \mid_{\xi=\nu}$ or, more precisely, \begin{equation}\label{nuk} \frac{\partial^k{\mathcal U}}{\partial\nu^k} :=i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\, \nu^\alpha\,{\rm Tr}\,[D^\alpha{\mathcal U}],\qquad 0\leq k\leq m-1, \end{equation} \noindent where ${\rm Tr}$ is the boundary trace operator and $\nu^\alpha:=\nu_1^{\alpha_1}\cdots\nu_n^{\alpha_n}$ if $\alpha=(\alpha_1,...,\alpha_n)$. With $\rho(X):={\rm dist}\,(X,\partial\Omega)$ and $p\in(1,\infty)$, $a\in(-1/p,1-1/p)$ fixed, a solution for (\ref{e0}) is sought in $W^{m,a}_p(\Omega)$, defined as the space of vector-valued functions for which \begin{equation}\label{W-Nr} \Bigl(\sum_{0\leq|\alpha|\leq m}\int_\Omega|D^\alpha{\mathcal U}(X)|^p \rho(X)^{pa}\,dX\Bigr)^{1/p}<\infty. 
\end{equation} \noindent In particular, as explained later on, the traces in (\ref{nuk}) exist in the Besov space $B_p^{s}(\partial\Omega)$, where $s:=1-a-1/p\in(0,1)$, for any ${\mathcal U}\in W^{m,a}_p(\Omega)$. Recall that, with $d\sigma$ denoting the area element on $\partial\Omega$, \begin{equation}\label{Bes-xxx} g\in B_p^s(\partial\Omega)\Leftrightarrow \|g\|_{B_p^s(\partial\Omega)}:=\|g\|_{L_p(\partial\Omega)} +\Bigl(\int_{\partial\Omega}\int_{\partial\Omega} \frac{|g(X)-g(Y)|^p}{|X-Y|^{n-1+sp}}\,d\sigma_Xd\sigma_Y\Bigr)^{1/p}<\infty. \end{equation} \noindent The above definition takes advantage of the Lipschitz manifold structure of $\partial\Omega$. On such manifolds, smoothness spaces of index $s\in(0,1)$ can be defined in an intrinsic, invariant fashion by lifting their Euclidean counterparts onto the manifold itself via local charts. We shall, nonetheless, find it useful to consider higher order smoothness spaces on $\partial\Omega$ in which case the above approach is no longer effective. An alternative point of view has been developed by H.\,Whitney in {\bf\cite{Wh}} where he considered what amounts to higher order Lipschitz spaces on arbitrary closed sets. A far-reaching extension of this circle of ideas pertaining to the full scale of Besov and Sobolev spaces on irregular subsets of ${\mathbb{R}}^n$ can be found in the book {\bf\cite{JW}} by A.\,Jonsson and H.\,Wallin.
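Returning to (\ref{Bes-xxx}), let us record a simple example: any Lipschitz function $g$ on $\partial\Omega$ belongs to $B_p^s(\partial\Omega)$ for every $p\in(1,\infty)$ and $s\in(0,1)$. Indeed, if $|g(X)-g(Y)|\leq L\,|X-Y|$, then the integrand in (\ref{Bes-xxx}) is dominated by $L^p\,|X-Y|^{p(1-s)-(n-1)}$, which is integrable over $\partial\Omega\times\partial\Omega$ since $p(1-s)>0$ and $\partial\Omega$ is a bounded set of dimension $n-1$.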
For the purpose of this introduction we note that one possible description of these higher order Besov spaces on the boundary of a Lipschitz domain $\Omega\subset{\mathbb{R}}^n$ and for $m\in{\mathbb{N}}$, $p\in(1,\infty)$, $s\in(0,1)$, reads \begin{equation}\label{Bes-X} \dot{B}^{m-1+s}_p(\partial\Omega)=\,\mbox{the closure of}\,\, \Bigl\{(D^\alpha\,{\mathcal V}|_{\partial\Omega})_{|\alpha|\leq m-1}:\, {\mathcal V}\in C^\infty_0({\mathbb{R}}^n)\Bigr\}\mbox{ in } B_p^s(\partial\Omega) \end{equation} \noindent (we shall often make no notational distinction between a Banach space ${\mathfrak X}$ and ${\mathfrak X}^N={\mathfrak X}\oplus\cdots\oplus{\mathfrak X}$ for a finite, positive integer $N$). A formal definition along with other equivalent characterizations of $\dot{B}^{m-1+s}_p(\partial\Omega)$ can be found in \S{6.4}. Given (\ref{nuk})-(\ref{W-Nr}), a necessary condition for the boundary data $\{g_k\}_{0\leq k\leq m-1}$ in (\ref{e0}) is that \begin{equation}\label{data-B} \begin{array}{l} \displaystyle{ \mbox{there exists $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$ such that}} \\[10pt] \displaystyle{ g_k=i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\nu^\alpha\,f_\alpha, \qquad\mbox{for each}\,\,\,\,0\leq k\leq m-1.} \end{array} \end{equation} To state the (analytical and geometrical) conditions under which the problem (\ref{e0}), formulated as above, is well-posed, we need one final piece of terminology. 
By the {\it infinitesimal mean oscillation} of a function $F\in L_1(\Omega)$ we shall understand the quantity \begin{equation}\label{e60} \{F\}_{{\rm Osc}(\Omega)}:=\mathop{\hbox{lim\,sup}}_{\varepsilon\to 0} \left(\mathop{\hbox{sup}}_{{\{B_\varepsilon\}}_\Omega} {\int{\mkern-19mu}-}_{\!\!\!B_\varepsilon\cap\Omega}\,\, {\int{\mkern-19mu}-}_{\!\!\!B_\varepsilon\cap\Omega}\, \Bigl|\,F(X)-F(Y)\,\Bigr|\,dXdY\right), \end{equation} \noindent where $\{B_\varepsilon\}_\Omega$ stands for the set of arbitrary balls centered at points of $\Omega$ and of radius $\varepsilon$, and the barred integral is the mean value. In a similar fashion, the infinitesimal mean oscillation of a function $f\in L_1(\partial\Omega)$ is defined by \begin{equation}\label{e61} \{f\}_{{\rm Osc}(\partial\Omega)} :=\mathop{\hbox{lim\,sup}}_{\varepsilon\to 0} \left(\mathop{\hbox{sup}}_{\{B_\varepsilon\}_{\partial\Omega}} {\int{\mkern-19mu}-}_{\!\!\!B_\varepsilon\cap\partial\Omega}\,\, {\int{\mkern-19mu}-}_{\!\!\!B_\varepsilon\cap\partial\Omega}\, \Bigl|\,f(X)-f(Y)\,\Bigr|\,d\sigma_Xd\sigma_Y\right), \end{equation} \noindent where $\{B_\varepsilon\}_{\partial\Omega}$ is the collection of $n$-dimensional balls with centers on $\partial\Omega$ and of radius $\varepsilon$. Our main result reads as follows; see also Theorem~\ref{Theorem1} for a more general version. \vskip 0.08in \begin{theorem}\label{Theorem} In the above setting, for each $p\in (1,\infty)$ and $s\in (0,1)$, the problem {\rm (\ref{e0})} with boundary data as in (\ref{data-B}) has a unique solution ${\mathcal U}$ for which (\ref{W-Nr}) holds with $a=1-s-1/p$ provided the coefficient matrices $A_{\alpha\beta}$ and the exterior normal vector $\nu$ to $\partial\Omega$ satisfy \begin{equation}\label{a0} \{\nu\}_{{\rm Osc}(\partial\Omega)} +\sum_{|\alpha|=|\beta|=m}\{ A_{\alpha\beta}\}_{{\rm Osc}(\Omega)} \leq\,C\,s(1-s)\Bigl(pp'+s^{-1}(1-s)^{-1}\Bigr)^{-1} \end{equation} \noindent where $p'=p/(p-1)$ is the conjugate exponent of $p$. 
Above, $C$ is a sufficiently small constant which depends on $\kappa_0$, $\kappa_1$ and the Lipschitz constant of $\Omega$, and is independent of $p$ and $s$. Furthermore, the bound {\rm (\ref{a0})} can be improved for second order operators, i.e., when $m=1$, in which case the factor $s(1-s)$ in {\rm (\ref{a0})} can be removed. \end{theorem} \vskip 0.08in Let ${\rm BMO}$ and ${\rm VMO}$ stand, respectively, for the John-Nirenberg space of functions of bounded mean oscillations and the Sarason space of functions of vanishing mean oscillations (considered either on $\Omega$ or on $\partial\Omega$). Since for an arbitrary function $F$ we have (with the dependence on the domain dropped) $\{F\}_{{\rm Osc}}\leq 2\,{\rm dist}\,(F,{\rm VMO})$ where the distance is taken in ${\rm BMO}$, the smallness condition (\ref{a0}) in Theorem~\ref{Theorem} is satisfied if \begin{equation}\label{axxx} {\rm dist}\,(\nu,{\rm VMO}\,(\partial\Omega)) +\sum_{|\alpha|=|\beta|=m}{\rm dist}\,(A_{\alpha\beta},{\rm VMO}\,(\Omega)) \leq\,C\,s(1-s)\Bigl(pp'+s^{-1}(1-s)^{-1}\Bigr)^{-1}. \end{equation} \noindent In particular, this is trivially the case when $\nu\in {\rm VMO}(\partial\Omega)$ and the $A_{\alpha\beta}$'s belong to ${\rm VMO}(\Omega)$, irrespective of $p$, $s$, $\kappa_0$, $\kappa_1$ and the Lipschitz constant of $\Omega$. While the Lipschitz character of a domain $\Omega$ controls the infinitesimal mean oscillation of its unit normal, the inequality in the opposite direction is false in general, as seen by considering $\Omega:=\{(x,y)\in{\mathbb{R}}^2:\,y>\varphi_\varepsilon(x)\}$ with $\varphi_\varepsilon(x):=x\,\sin\,(\varepsilon\log |x|^{-1})$. Indeed, a simple calculation gives $\|\varphi'_\varepsilon\|_{{\rm BMO}({\mathbb{R}})}\leq C\varepsilon$, yet $\|\varphi'_\varepsilon\|_{L_\infty({\mathbb{R}})}\sim 1$ uniformly for $\varepsilon\in(0,1/2)$. 
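For the reader's convenience, here is the simple calculation alluded to above (with $c_0:=\|\log|x|\,\|_{{\rm BMO}({\mathbb{R}})}$, a universal constant):

```latex
% Differentiating \varphi_\varepsilon(x)=x\,\sin\,(\varepsilon\log|x|^{-1})
% for x\neq 0 yields
\varphi'_\varepsilon(x)=\sin\,(\varepsilon\log|x|^{-1})
-\varepsilon\,\cos\,(\varepsilon\log|x|^{-1}).
% The second term is O(\varepsilon) in L_\infty, hence also in BMO. As for
% the first term, |\sin a-\sin b|\leq|a-b| entails
% \|\sin(h)\|_{\rm BMO}\leq 2\,\|h\|_{\rm BMO} for any real-valued h, so
% its BMO norm is at most 2c_0\varepsilon. Altogether,
% \|\varphi'_\varepsilon\|_{{\rm BMO}({\mathbb{R}})}\leq C\varepsilon. On
% the other hand, \sin\,(\varepsilon\log|x|^{-1}) repeatedly takes the
% value 1 as x\to 0, whence \|\varphi'_\varepsilon\|_{L_\infty}\sim 1.
```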
An essentially equivalent reformulation of (\ref{e0}) is \begin{equation}\label{e0-bis} \left\{ \begin{array}{l} \displaystyle{\sum_{|\alpha|=|\beta|=m} D^\alpha(A_{\alpha\beta}(X)\,D^\beta\,{\mathcal U})}=0 \qquad\mbox{in}\,\,\,\Omega, \\[24pt] {\rm Tr}\,[D^\gamma\,{\mathcal U}]=g_\gamma \,\,\quad\mbox{on}\,\,\partial\Omega,\qquad |\gamma|\leq m-1, \end{array} \right. \end{equation} \noindent where ${\mathcal U}$ satisfies (\ref{W-Nr}) and \begin{equation}\label{data-G} \dot{g}:=\{g_\gamma\}_{|\gamma|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega), \end{equation} \noindent though an advantage of the classical formulation (\ref{e0}) is that the number of data is minimal. For a domain $\Omega\subset{\mathbb{R}}^2$ of class $C^r$, $r>{\textstyle\frac 1{2}}$, and for constant coefficient operators, the Dirichlet problem (\ref{e0-bis}) has been considered by S.\,Agmon in {\bf\cite{Ag1}} where he proved that there exists a unique solution ${\mathcal U}\in C^{m-1+s}(\bar{\Omega})$, $0<s<r$, whenever $g_\gamma=D^\gamma{\mathcal V}|_{\partial\Omega}$, $|\gamma|\leq m-1$, for some function ${\mathcal V}\in C^{m-1+s}(\bar{\Omega})$. See also {\bf\cite{Ag2}} for a related version. The innovation that allows us to consider, for the first time, boundary data in Besov spaces as in (\ref{data-B}) and (\ref{data-G}), is the systematic use of {\it weighted Sobolev spaces} such as those associated with the norm in (\ref{W-Nr}). In relation to the standard Besov scale in ${\mathbb{R}}^n$, we would like to point out that, thanks to Theorem~4.1 in {\bf\cite{JK}} on the one hand, and Theorem~1.4.2.4 and Theorem~1.4.4.4 in {\bf\cite{Gr}} on the other, we have \begin{equation}\label{incls} \begin{array}{l} a=1-s-\frac{1}{p}\in (0,1-1/p)\Longrightarrow W^{m,a}_p(\Omega)\hookrightarrow B^{m-1+s+1/p}_p(\Omega), \\[10pt] a=1-s-\frac{1}{p}\in (-1/p,0)\Longrightarrow B^{m-1+s+1/p}_p(\Omega)\hookrightarrow W^{m,a}_p(\Omega). 
\end{array} \end{equation} \noindent Of course, $W^{m,a}_p(\Omega)$ is just a classical Sobolev space when $a=0$. Remarkably, the classical trace theory for unweighted Sobolev spaces turns out to have a most satisfactory analogue in this weighted context; for the upper half-space this has been worked out by S.V.\,Uspenski\u{\i} in {\bf\cite{Usp}}, a paper preceded by the significant work of E.\,Gagliardo in {\bf\cite{Ga}} in the unweighted case. As a consequence, we note that under the assumptions made in Theorem~\ref{Theorem}, \begin{equation}\label{Trace} \sum_{|\alpha|\leq m-1} \|{\rm Tr}\,[D^\alpha\,{\mathcal U}]\|_{B_p^{s}(\partial\Omega)} \sim \left(\sum_{0\leq|\alpha|\leq m}\int_{\Omega}\rho(X)^{p(1-s)-1}\, |D^\alpha{\mathcal U}(X)|^p\,dX\right)^{1/p}, \end{equation} \noindent uniformly in ${\mathcal U}$ satisfying ${\mathcal L}(X,D_X)\,{\mathcal U}=0$ in $\Omega$. The estimate (\ref{Trace}) can be viewed as a far-reaching generalization of a well-known characterization of the membership of a function in a Besov space in ${\mathbb{R}}^{n-1}$ in terms of weighted Sobolev norm estimates for its harmonic extension to ${\mathbb{R}}^n_+$ (see, e.g., Proposition~$7'$ on p.\,151 of {\bf\cite{St}}). Theorem~\ref{Theorem} is new even in the case when $m=1$ and $A_{\alpha\beta}\in{\mathbb{C}}^{l\times l}$ (i.e., for second order, constant coefficient systems) and provides a complete answer to the issue of well-posedness of the problem (\ref{e0}), (\ref{data-B}), (\ref{W-Nr}) in the sense that the small mean oscillation condition, depending on $p$ and $s$, is in the nature of best possible if one insists on allowing arbitrary indices $p$ and $s$ in (\ref{data-B}). 
This can be seen by considering the following Dirichlet problem for the Laplacian in a domain $\Omega\subset{\mathbb{R}}^n$: \begin{equation}\label{LapJK} \left\{ \begin{array}{l} \Delta\,{\mathcal U}=0\mbox{ in }\Omega, \\[6pt] {\rm Tr}\,{\mathcal U}=g\in B^s_p(\partial\Omega), \\[6pt] D^\alpha{\mathcal U}\in L_p(\Omega,\,\rho(X)^{p(1-s)-1}\,dX),\qquad \forall\,\alpha\,:\,|\alpha|\leq 1. \end{array} \right. \end{equation} \noindent It has long been known that, already in the case when $\partial\Omega$ exhibits one cone-like singularity, the well-posedness of (\ref{LapJK}) prevents the indices $(s,1/p)$ from taking arbitrary values in $(0,1)\times(0,1)$. At a more sophisticated level, the work of D.\,Jerison and C.\,Kenig in {\bf\cite{JK}} shows that (\ref{LapJK}) is well-posed in an arbitrary, given Lipschitz domain $\Omega$ if and only if the point $(s,1/p)$ belongs to a certain open region ${\mathcal R}_{\Omega}\subseteq(0,1)\times(0,1)$, determined exclusively by the geometry of the domain $\Omega$ (cf. {\bf\cite{JK}} for more details). Let us also mention here that, even when $\partial\Omega$ is smooth and $m=l=1$, a well-known example due to N.\,Meyers (cf. {\bf\cite{Mey}}) shows that the well-posedness of (\ref{e0}) in the class of operators with bounded, measurable coefficients confines $p$ to a small neighborhood of $2$. Broadly speaking, there are two types of questions pertaining to the well-posedness of the Dirichlet problem in a Lipschitz domain $\Omega$ for a divergence form, elliptic system (\ref{LOL}) of order $2m$ with boundary data in Besov spaces. \vskip 0.08in \noindent {\it Question I.} Granted that the coefficients of ${\mathcal L}$ exhibit a certain amount of smoothness, identify the Besov spaces for which this boundary value problem is well-posed. 
\vskip 0.08in \noindent {\it Question II.} Alternatively, for a given Besov space characterize the class of Lipschitz domains $\Omega$ and elliptic operators ${\mathcal L}$ for which the aforementioned boundary value problem is well-posed. \vskip 0.08in \noindent These, as well as other related issues, have been a driving force behind many exciting, recent developments in partial differential equations and allied fields. Ample evidence of their impact can be found in C.\,Kenig's excellent account {\bf\cite{Ke}} which describes the state of the art in this field of research up to the mid-1990's, with a particular emphasis on the role played by harmonic analysis techniques. One generic problem which falls under the scope of {\it Question I} is to determine the optimal scale of spaces on which the Dirichlet problem for an elliptic system of order $2m$ is solvable in an {\it arbitrary Lipschitz domain} $\Omega$ in ${\mathbb{R}}^n$. The most basic case, that of the constant coefficient Laplacian in arbitrary Lipschitz domains in ${\mathbb{R}}^n$, is now well-understood thanks to the work of B.\,Dahlberg and C.\,Kenig {\bf\cite{DK}}, in the case of $L_p$-data, and D.\,Jerison and C.\,Kenig {\bf\cite{JK}}, in the case of Besov data. The case of (\ref{LapJK}) for boundary data exhibiting higher regularity (i.e., $s>1$) has been recently dealt with by V.\,Maz'ya and T.\,Shaposhnikova in {\bf\cite{MS2}} where nearly optimal smoothness conditions for $\partial\Omega$ are found in terms of the properties of $\nu$ as a Sobolev space multiplier. Generalizations of (\ref{LapJK}) to the case of variable-coefficient, single, second order elliptic equations have been obtained by M.\,Mitrea and M.\,Taylor in {\bf\cite{MT1}}, {\bf\cite{MT2}}, {\bf\cite{MT3}}. 
In spite of substantial progress in recent years, there remain many basic open questions, particularly for $l>1$ and/or $m>1$ (corresponding to genuine systems and/or higher order equations), even in the case of {\it constant coefficient} operators in Lipschitz domains. In this context, one significant problem (as mentioned in, e.g., {\bf\cite{Fa}}) is to determine the sharp range of $p$'s for which the Dirichlet problem for elliptic systems with $L_p$-boundary data is well-posed. In {\bf\cite{PV}}, J.\,Pipher and G.\,Verchota have developed an $L_p$-theory for real, constant coefficient, higher order systems $L=\sum_{|\alpha|=2m}A_\alpha D^\alpha$ when $p$ is near $2$, i.e. $2-\varepsilon<p<2+\varepsilon$ with $\varepsilon>0$ depending on the Lipschitz character of $\Omega$, but this range is not optimal. Recently, further progress on the biharmonic equation and on general constant coefficient, second order systems with real coefficients which are elliptic in the sense of Legendre-Hadamard was made by Z.\,Shen in {\bf\cite{Sh}}, who extended the range of $p$'s from $(2-\varepsilon,2+\varepsilon)$ to $(2-\varepsilon,\frac{2(n-1)}{n-3}+\varepsilon)$ for a general Lipschitz domain $\Omega$ in ${\mathbb{R}}^n$, $n\geq 4$, with, as before, $\varepsilon=\varepsilon(\partial\Omega)>0$. 
Let us also mention here the work {\bf\cite{AP}} of V.\,Adolfsson and J.\,Pipher who have dealt with the Dirichlet problem for the biharmonic operator in arbitrary Lipschitz domains and with data in Besov spaces, {\bf\cite{Ve}} where G.\,Verchota formulates and solves a Neumann-type problem for the bi-Laplacian in Lipschitz domains and with boundary data in $L_2$, {\bf\cite{MMT}} where the authors treat the Dirichlet problem for variable coefficient symmetric, real, elliptic systems of second order in an arbitrary Lipschitz domain $\Omega$ and with boundary data in $B^s_p(\partial\Omega)$, when $2-\varepsilon<p<2+\varepsilon$ and $0<s<1$, as well as the paper {\bf\cite{KM}} by V.\,Kozlov and V.\,Maz'ya, which contains an explicit description of the asymptotic behavior of null-solutions of constant coefficient, higher order, elliptic operators near points on the boundary of a domain with a sufficiently small Lipschitz constant. A successful strategy for dealing with {\it Question II} consists of formulating and solving the analogue of the original problem in a standard case, typically when $\Omega={\mathbb{R}}^n_+$ and ${\mathcal L}$ has constant coefficients, and then deviating from this most standard setting by allowing perturbations of a certain magnitude. A paradigm result in this regard, going back to the work of Agmon, Douglis, Nirenberg and Solonnikov in the 50's and 60's, is that the Dirichlet problem is solvable in the context of Sobolev-Besov spaces if $\partial\Omega$ is sufficiently smooth and if ${\mathcal L}$ has continuous coefficients. The latter requirement is an artifact of the method of proof (based on Korn's trick of freezing the coefficients) which requires measuring the size of the oscillations of the coefficients in a {\it pointwise sense} (as opposed to integral sense, as in (\ref{e60})). 
For a version of {\it Question II}, corresponding to boundary data of higher regularity, optimal results have been obtained by V.\,Maz'ya and T.\,Shaposhnikova in {\bf\cite{MS}}. In this context, the natural language for describing the smoothness of the domain $\Omega$ is that of Sobolev space multipliers. While the study of boundary value problems in a domain $\Omega\subset{\mathbb{R}}^n$ for elliptic differential operators with discontinuous coefficients goes a long way back (for instance, C.\,Miranda has considered in {\bf\cite{Mir}} operators with coefficients in the Sobolev space $W^1_n$), a lot of attention has been devoted lately to the class of operators with coefficients in ${\rm VMO}$ (it is worth pointing out here that $W^1_n\hookrightarrow{\rm VMO}$ on Lipschitz subdomains of ${\mathbb{R}}^n$). Much of the impetus for the recent surge of interest in this particular line of work stems from an observation made by F.\,Chiarenza, M.\,Frasca and P.\,Longo in the early 1990's. More specifically, while investigating interior estimates for the solution of a scalar, second-order elliptic differential equation of the form ${\mathcal L}\,{\mathcal U}=F$, these authors have noticed in {\bf\cite{CFL1}} that ${\mathcal U}$ can be related to $F$ via a potential theoretic representation formula in which the residual terms are commutators between operators of Calder\'on-Zygmund type, on the one hand, and operators of multiplication by the coefficients of ${\mathcal L}$, on the other hand. This made it possible to control these terms by invoking the commutator estimate of Coifman-Rochberg-Weiss ({\bf\cite{CRW}}). Various partial extensions of this result can be found in {\bf\cite{AQ}}, {\bf\cite{By1}}, {\bf\cite{CaPe}}, {\bf\cite{CFL2}}, {\bf\cite{Faz}}, {\bf\cite{Gu}}, {\bf\cite{Ra}}, and the references therein. 
Here we would just like to mention that, in the whole Euclidean space, a different approach (based on estimates for the Riesz transforms) has been devised by T.\,Iwaniec and C.\,Sbordone in {\bf\cite{IS}}. Compared to the aforementioned works, our approach is more akin to that of F.\,Chiarenza and collaborators ({\bf\cite{CFL1}}, {\bf\cite{CFL2}}), though there are fundamental differences between solving boundary problems for higher order and for second order operators. One difficulty inherently linked with the case $m>1$ arises from the way the norm in (\ref{W-Nr}) behaves under a change of variables $\varkappa:\Omega=\{(X',X_n):\,X_n>\varphi(X')\}\to{\mathbb{R}}^n_+$ designed to flatten the Lipschitz surface $\partial\Omega$. When $m=1$, a simple bi-Lipschitz change of variables, such as the inverse of the map ${\mathbb{R}}^n_+\ni (X',X_n)\mapsto(X',\varphi(X')+X_n)\in\Omega$, will do, but matters are considerably more subtle in the case $m>1$. In this latter situation, we employ a special global flattening map first introduced by J.\,Ne\v{c}as (in a different context; cf. p.\,188 in {\bf\cite{Nec}}) and then independently rediscovered and/or further adapted to new settings by several authors, including V.\,Maz'ya and T.\,Shaposhnikova in {\bf\cite{MS}}, B.\,Dahlberg, C.\,Kenig, J.\,Pipher, E.\,Stein and G.\,Verchota (cf. {\bf\cite{Dah}} and the discussion in {\bf\cite{DKPV}}), and S.\,Hofmann and J.\,Lewis in {\bf\cite{HL}}. Our main novel contribution in this regard is adapting this circle of ideas to the context when one seeks pointwise estimates for higher order derivatives of $\varkappa$ and $\lambda:=\varkappa^{-1}$ in terms of $[\nabla\varphi]_{{\rm BMO}({\mathbb{R}}^{n-1})}$. 
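For orientation, one common realization of such a flattening map (a sketch only, in the spirit of {\bf\cite{Dah}}, {\bf\cite{DKPV}}; the precise normalizations used in the body of the paper may differ) is the mollified graph parametrization:

```latex
% Fix \theta\in C_0^\infty({\mathbb{R}}^{n-1}) with \int\theta=1, set
% \theta_t(x'):=t^{1-n}\theta(x'/t), and pick a constant c>0 large
% relative to \|\nabla\varphi\|_{L_\infty}. One then takes
\lambda(x',t):=\bigl(x',\,c\,t+(\theta_t\ast\varphi)(x')\bigr),
\qquad (x',t)\in{\mathbb{R}}^n_+,
% and \varkappa:=\lambda^{-1}. Unlike the naive shift
% (x',t)\mapsto(x',\varphi(x')+t), the map \lambda is smooth in the open
% half-space, and the higher order derivatives of \theta_t\ast\varphi are
% expected to obey bounds of the type
% |D^\gamma(\theta_t\ast\varphi)|\lesssim
% t^{1-|\gamma|}\,[\nabla\varphi]_{{\rm BMO}({\mathbb{R}}^{n-1})},
% for |\gamma|\geq 2, which is precisely the kind of pointwise control
% alluded to above.
```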
Another ingredient of independent interest is deriving estimates for $D_x^\alpha D_y^\beta G(x,y)$ where $G$ is the Green function associated with a constant (complex) coefficient system $L(D)$ of order $2m$ in the upper half-space, which are sufficiently well-suited for deriving commutator estimates in the spirit of {\bf\cite{CRW}}. The methods employed in earlier work are largely based on explicit representation formulas for $G(x,y)$ and, hence, cannot be adapted easily to the case of non-symmetric, complex coefficient, higher order systems. By way of contrast, our approach consists of proving directly that the residual part $R(x,y):=G(x,y)-\Phi(x-y)$, where $\Phi$ is a fundamental solution for $L(D)$, has the property that $D_x^\alpha D_y^\beta R(x,y)$ is a Hardy-type kernel whenever $|\alpha|=|\beta|=m$. The layout of the paper is as follows. Section~2 contains estimates for the Green function in the upper half-space. Section~3 deals with integral operators (of Calder\'on-Zygmund and Hardy type) as well as commutator estimates on weighted Lebesgue spaces. In the last part of this section we also revisit Gagliardo's extension operator and establish estimates in the context of ${\rm BMO}$. Section~4 contains a discussion of the Dirichlet problem for higher order, variable coefficient elliptic systems in the upper half-space. Then the adjustments necessary to treat the case of an unbounded domain lying above the graph of a Lipschitz function are presented in Section~5. Finally, in Section~6, we explain how to handle the case of a bounded Lipschitz domain, and state and prove Theorem~\ref{Theorem1} (from which Theorem~\ref{Theorem} follows). This section also contains further complements and extensions of our main result. 
\section{Green's matrix estimates in the half-space} \setcounter{equation}{0} In this section we prove a key estimate for derivatives of Green's matrix associated with the Dirichlet problem for homogeneous, higher-order constant coefficient elliptic systems in the half-space $\mathbb{R}^n_+$. \subsection{Statement of the main result} Let ${L}(D_x)$ be a matrix-valued differential operator \begin{equation}\label{eq1.1} {L}(D_x)=\sum_{|\alpha|=2m}A_{\alpha} D^{\alpha}_x, \end{equation} \noindent where the $A_\alpha$'s are constant $l\times l$ matrices with complex entries. Throughout the paper, $D^\alpha_x:= i^{-|\alpha|} \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}\cdots \partial_{x_n}^{\alpha_n}$ if $\alpha=(\alpha_1,\alpha_2,...,\alpha_n)\in{\mathbb{N}}_0^n$. Here and elsewhere, ${\mathbb{N}}$ stands for the collection of all positive integers and ${\mathbb{N}}_0:={\mathbb{N}}\cup\{0\}$. We assume that ${L}$ is strongly elliptic, i.e. there exists $\kappa>0$ such that $\sum_{|\alpha|=2m}\|A_\alpha\|_{{\mathbb{C}}^{l\times l}}\leq\kappa^{-1}$ and \begin{equation}\label{eq1.2} \Re\,\langle{L}(\xi)\eta,\eta\rangle_{\mathbb{C}^l}\geq\kappa\, |\xi|^{2m}\,\|\eta\|^2_{\mathbb{C}^l},\qquad \forall\,\xi\in{\mathbb{R}}^n,\,\,\,\forall\,\eta\in\mathbb{C}^l. \end{equation} \noindent In what follows, in order to simplify notations, we shall denote the norms in different finite-dimensional real Euclidean spaces by $|\cdot|$ irrespective of their dimensions. Also, quite frequently, we shall make no notational distinction between a space of scalar functions, call it ${\mathfrak X}$, and the space of vector-valued functions (of a fixed, finite dimension) whose components are in ${\mathfrak X}$. We denote by $F(x)$ a fundamental matrix of the operator ${L}(D_x)$, i.e. 
an $l\times l$ matrix solution of the system \begin{equation}\label{eq1.3} {L}(D_x)F(x)=\delta(x)I_l\quad\mbox{in}\,\,\mathbb{R}^n, \end{equation} \noindent where $I_l$ is the $l\times l$ identity matrix and $\delta$ is the Dirac function. Consider the Dirichlet problem \begin{equation}\label{eq1.5} \left\{ \begin{array}{l} {L}(D_x)u=f\qquad\qquad\qquad \mbox{in}\,\,\mathbb{R}^n_+, \\[6pt] {\rm Tr}\,[\partial^j u/\partial x_n^j]=\varphi_j,\quad j=0,1,\ldots,m-1, \qquad\quad\,\,\mbox{on}\,\,\mathbb{R}^{n-1}, \end{array} \right. \end{equation} \noindent where $\mathbb{R}^n_+:=\{x=(x',x_n):\,x'\in\mathbb{R}^{n-1},\,x_n>0\}$ and ${\rm Tr}$ is the boundary trace operator. Hereafter, we shall identify $\partial{\mathbb{R}}^n_+$ with ${\mathbb{R}}^{n-1}$ in a canonical fashion. For each $y'\in \mathbb{R}^{n-1}$ we introduce the Poisson matrices $P_0,\ldots, P_{m-1}$ for problem (\ref{eq1.5}), i.e. the solutions of the boundary-value problems \begin{equation}\label{eq1.6} \left\{ \begin{array}{l} { L}(D_x)P_j(x,y')= 0\,I_l \qquad\qquad\qquad\mbox{in}\,\,\mathbb{R}^n_+, \\[10pt] \displaystyle{\left(\frac{\partial^k}{\partial x_n^k}P_j\right) (\,(x',0),y'\,)} =\delta_{jk}\,\delta(x'-y')I_l\,\,\,{\rm for}\,\,\,x'\in\mathbb{R}^{n-1},\,\, 0\leq k\leq m-1, \end{array} \right. \end{equation} \noindent where $\delta_{jk}$ is the usual Kronecker symbol and $0\leq j\leq m-1$. The matrix-valued function $P_j(x,0')$ is positive homogeneous of degree $j+1-n$, i.e. \begin{equation}\label{eq1.7} P_j(x,0')=|x|^{j+1-n}\,P_j(x/|x|,0'),\qquad x\in\mathbb{R}^n_+, \end{equation} \noindent where $0'$ denotes the origin of $\mathbb{R}^{n-1}$. The restriction of $P_j(\cdot,0')$ to the upper half-sphere $S^{n-1}_+$ is smooth and vanishes on the equator along with all of its derivatives up to order $m-1$ (see for example, \S{10.3} in {\bf\cite{KMR2}}). 
Hence, \begin{equation}\label{eq1.8} \|P_j(x,0')\|_{\mathbb{C}^{l\times l}} \leq C\,\frac{x_n^m}{|x|^{n+m-1-j}},\qquad x\in{\mathbb{R}}^n_+, \end{equation} \noindent and, consequently, \begin{equation}\label{eq1.9} \|P_j(x,y')\|_{\mathbb{C}^{l\times l}} \leq C\,\frac{x_n^m}{|x-(y',0)|^{n+m-1-j}},\qquad x\in{\mathbb{R}}^n_+,\,\,\,\,y'\in{\mathbb{R}}^{n-1}. \end{equation} By $G(x,y)$ we shall denote the Green's matrix of the problem (\ref{eq1.5}), i.e. the unique solution of the boundary-value problem \begin{equation}\label{eq1.10} \left\{ \begin{array}{l} {L}(D_x)G(x,y)=\delta(x-y)I_l\quad\mbox{for}\,\,x\in\mathbb{R}^n_+, \\[6pt] \displaystyle{\left(\frac{\partial ^j}{\partial x_n^j}G\right)((x',0),y)} =0\,I_l\qquad \mbox{for}\,\,x'\in\mathbb{R}^{n-1}, \,\,\, 0\leq j\leq m-1, \end{array} \right. \end{equation} \noindent where $y\in\mathbb{R}^n_+$ is regarded as a parameter. We now introduce the matrix \begin{equation}\label{defRRR} R(x,y):=F(x-y)-G(x,y),\qquad x,y\in{\mathbb{R}}^n_+, \end{equation} \noindent so that, for each fixed $y\in\mathbb{R}^n_+$, \begin{equation}\label{eq1.12} \left\{ \begin{array}{l} {L}(D_x)\,R(x,y)=0\qquad\qquad\qquad\qquad\qquad\quad\,\,\,\, \mbox{for}\,\,x\in\mathbb{R}^n_+, \\[6pt] \displaystyle{\left(\frac{\partial^j}{\partial x_n^j}R\right)((x',0),y)} =\left(\frac{\partial^j}{\partial x_n^j}F\right)((x',0)-y) \quad\mbox{for}\,\,x'\in\mathbb{R}^{n-1},\,\, 0\leq j\leq m-1. \end{array} \right. \end{equation} \noindent Our goal is to prove the following result. \begin{theorem}\label{th1} For all multi-indices $\alpha,\beta$ of length $m$ \begin{equation}\label{mainest} \|D^\alpha_xD^\beta_y R(x,y)\|_{\mathbb{C}^{l\times l}} \leq C\,|x-\bar{y}|^{-n}, \end{equation} \noindent for $x,y\in{\mathbb{R}}^n_+$, where $\bar{y}:=(y',-y_n)$ is the reflection of the point $y\in{\mathbb{R}}^n_+$ with respect to $\partial{\mathbb{R}}^n_+$. 
\end{theorem} In the proof of Theorem~\ref{th1} we distinguish two cases, $n>2m$ and $n\leq 2m$, which we shall treat separately. Our argument pertaining to the situation when $n>2m$ is based on a lemma to be proved in the subsection below. \subsection{Estimate for a parameter dependent integral} As a preamble to the proof of Theorem~\ref{th1}, here we dispense with the following technical result. \begin{lemma}\label{lem1} Let $a$ and $b$ be two non-negative numbers and assume that $\zeta\in \mathbb{R}^N$. Then for every $\varepsilon>0$ and $0<\delta<N$ there exists a constant $c(N,\varepsilon,\delta)>0$ such that \begin{equation}\label{E1} \int_{\mathbb{R}^N} \frac{d\eta}{(|\eta|+a)^{N+\varepsilon}(|\eta-\zeta|+ b)^{N-\delta}} \leq\frac{c(N,\varepsilon,\delta)}{a^\varepsilon(|\zeta|+a+b)^{N-\delta}}. \end{equation} \end{lemma} \noindent{\bf Proof.} Write ${\mathcal J}={\mathcal J}_1+{\mathcal J}_2$ where ${\mathcal J}$ stands for the integral in the left side of (\ref{E1}), whereas ${\mathcal J}_1$ and ${\mathcal J}_2$ denote the integrals obtained by splitting the domain of integration in ${\mathcal J}$ into $B_a=\{\eta\in \mathbb{R}^N:\,|\eta|<a\}$ and ${\mathbb{R}}^N\setminus B_a$, respectively. If $|\zeta|<2a$, then \begin{equation}\label{I1} {\mathcal J}_1\leq a^{-N-\varepsilon} \int_{B_a}\frac{d\eta}{(|\eta-\zeta|+b)^{N-\delta}} \leq c\,a^{-N-\varepsilon}\int_{B_{4a}}\frac{d\xi}{(|\xi|+b)^{N-\delta}}. \end{equation} \noindent Hence \begin{equation}\label{II1} {\mathcal J}_1\leq \left\{ \begin{array}{l} c\,a^{-N-\varepsilon}\,a^{N}/b^{N-\delta}\qquad{\rm if}\,\,a<b, \\[6pt] c\,a^{-N-\varepsilon+\delta}\qquad\quad{\rm if}\,\,a>b, \end{array} \right. \end{equation} \noindent so that, in particular, \begin{equation}\label{I1est} |\zeta|<2a\Longrightarrow {\mathcal J}_1\leq c\,a^{-\varepsilon}(|\zeta|+a+b)^{\delta-N}. \end{equation} Let us now assume that $|\zeta|>2a$. 
Then \begin{equation}\label{I1est2} {\mathcal J}_1\leq\int_{B_a}\frac{d\eta}{(|\eta|+a)^{N+\varepsilon}}\, \frac{c}{(|\zeta|+b)^{N-\delta}} \leq c\,a^{-\varepsilon}(|\zeta|+a+b)^{\delta-N}, \end{equation} \noindent which is of the right order. As for ${\mathcal J}_2$, we write \begin{equation}\label{I2} {\mathcal J}_2\leq\int_{\mathbb{R}^N\backslash B_a} \frac{d\eta}{|\eta|^{N+\varepsilon}(|\eta-\zeta|+b)^{N-\delta}} ={\mathcal J}_{2,1}+{\mathcal J}_{2,2}. \end{equation} \noindent where ${\mathcal J}_{2,1}$, ${\mathcal J}_{2,2}$ are obtained by splitting the domain of integration in the above integral into the set $\{\eta:|\eta|>\max\{a,2|\zeta|\}\}$ and its complement in $\mathbb{R}^N\backslash B_a$. We have \begin{eqnarray}\label{Eq4} {\mathcal J}_{2,1} & \leq & \int_{|\eta|>\max\{a,b,2|\zeta|\}} \frac{d\eta}{|\eta|^{N+\varepsilon}(|\eta|+b)^{N-\delta}} +\int_{b>|\eta|>\max\{a,2|\zeta|\}} \frac{d\eta}{|\eta|^{N+\varepsilon}(|\eta|+b)^{N-\delta}} \nonumber\\[6pt] & \leq & c\Biggl(\int_{|\eta|>\max\{a,b,2|\zeta|\}} \frac{d\eta}{|\eta|^{2N+\varepsilon-\delta}} +\frac{1}{b^{N-\delta}}\int_{b>|\eta|>\max\{a,2|\zeta|\}} \frac{d\eta}{|\eta|^{N+\varepsilon}}\Biggr) \nonumber\\[6pt] & \leq & \frac{c}{(a+b+|\zeta|)^{N+\varepsilon-\delta}} +\frac{c}{a^\varepsilon(a+b+|\zeta|)^{N-\delta}} \nonumber\\[6pt] & \leq & \frac{c}{a^\varepsilon(a+b+|\zeta|)^{N-\delta}}. \end{eqnarray} There remains to estimate the integral \begin{equation}\label{Eq5} {\mathcal J}_{2,2}=\int_{B_{2|\zeta|}\backslash B_a} \frac{d\eta}{|\eta|^{N+\varepsilon}(|\eta-\zeta|+b)^{N-\delta}} ={\mathcal J}_{2,2}^{(1)}+{\mathcal J}_{2,2}^{(2)}, \end{equation} \noindent where ${\mathcal J}_{2,2}^{(1)}$ and ${\mathcal J}_{2,2}^{(2)}$ are obtained by splitting the domain of integration in ${\mathcal J}_{2,2}$ into $B_{|\zeta|/2}\backslash B_a$ and its complement (relative to $B_{2|\zeta|}\backslash B_a$). 
On the one hand, \begin{equation}\label{Eq6} {\mathcal J}_{2,2}^{(1)}\leq\frac{c}{(|\zeta|+b)^{N-\delta}} \int_{B_{|\zeta|/2}\backslash B_a}\frac{d\eta}{|\eta|^{N+\varepsilon}} \leq\frac{c}{a^\varepsilon(|\zeta|+a+b)^{N-\delta}}. \end{equation} \noindent On the other hand, whenever $|\zeta|>a/2$, the integral ${\mathcal J}_{2,2}^{(2)}$, which extends over all $\eta$'s such that $|\eta|>a$, $2|\zeta|>|\eta|>|\zeta|/2$, can be estimated as \begin{eqnarray*} {\mathcal J}_{2,2}^{(2)} & \leq & \frac{c}{|\zeta|^{N+\varepsilon}} \int_{B_{2|\zeta|}\backslash B_a}\frac{d\eta}{(|\eta-\zeta|+b)^{N-\delta}} \leq\frac{c}{|\zeta|^{N+\varepsilon}} \int_{B_{4|\zeta|}}\frac{d\xi}{(|\xi|+b)^{N-\delta}} \\[6pt] & \leq & \frac{c}{|\zeta|^{N+\varepsilon}} \Biggl(\int_{{|\xi|<4|\zeta|}\atop{|\xi|<b}} \frac{d\xi}{(|\xi|+b)^{N-\delta}} +\int_{{|\xi|<4|\zeta|}\atop{|\xi|>b}}\frac{d\xi}{(|\xi|+b)^{N-\delta}}\Biggr). \end{eqnarray*} \noindent Consequently, \begin{equation}\label{J22} {\mathcal J}_{2,2}^{(2)}\leq \frac{c\,\min\{|\zeta|,b\}^N}{|\zeta|^{N+\varepsilon} b^{N-\delta}}. \end{equation} \noindent Using $|\zeta|>a/2$ and the obvious inequality \begin{equation}\label{trivial} \min\{|\zeta|,b\}^N\,\max\{|\zeta|,b\}^{N-\delta}\leq|\zeta|^N\,b^{N-\delta} \end{equation} \noindent we arrive at \begin{equation}\label{Eq7} {\mathcal J}_{2,2}^{(2)}\leq c\,a^{-\varepsilon}(|\zeta|+a+b)^{\delta-N}. \end{equation} \noindent The estimate (\ref{Eq7}), along with (\ref{Eq6}) and (\ref{Eq5}), gives the upper bound $c\,a^{-\varepsilon}(|\zeta|+ a+b)^{\delta-N}$ for ${\mathcal J}_{2,2}$. Combining this with (\ref{Eq4}) we obtain the same majorant for ${\mathcal J}_{2}$ which, together with a similar result for ${\mathcal J}_{1}$ already obtained leads to (\ref{E1}). The proof of the lemma is therefore complete. 
$\Box$ \vskip 0.08in \subsection{Proof of Theorem~\ref{th1} for $n>2m$} In the case when $n>2m$ there exists a unique fundamental matrix $F(x)$ for the operator (\ref{eq1.1}), which is positive homogeneous of degree $2m-n$. We shall use the integral representation formula \begin{equation}\label{IntRRR} R(x,y)=R_0(x,y)+\ldots+ R_{m-1}(x,y),\qquad x,y\in{\mathbb{R}}^n_+, \end{equation} \noindent where $R(x,y)$ has been introduced in (\ref{defRRR}) and, with $P_j$ as in (\ref{eq1.6}), we set \begin{equation}\label{eq1.13} R_j(x,y):=\int_{\mathbb{R}^{n-1}}P_j(x,\xi')\, \left(\frac{\partial^j}{\partial x_n^j}F\right)((\xi',0)-y)\,d\xi', \qquad 0\leq j\leq m-1. \end{equation} \noindent Then, thanks to (\ref{eq1.8}) we have \begin{equation}\label{eq1.14} \|R_j(x,y)\|_{\mathbb{C}^{l\times l}}\leq C\,\int_{\mathbb{R}^{n-1}} \frac{x_n^m}{|x-(\xi',0)|^{n+m-1-j}}\cdot\frac{d\xi'}{|(\xi',0)-y|^{n-2m+j}}. \end{equation} Next, putting \begin{eqnarray*} N=n-1 &, & \quad a=x_n,\\ \varepsilon=m-j &, & \quad b=y_n,\\ \delta=2m-j-1 &, & \quad \zeta=y'-x', \end{eqnarray*} \noindent in the formulation of Lemma~\ref{lem1}, we obtain from (\ref{eq1.14}) \begin{equation}\label{Rj} \|R_j(x,y)\|_{\mathbb{C}^{l\times l}} \leq\frac{C\,x_n^j}{(|y'-x'|+x_n+y_n)^{n-2m+j}},\qquad 0\leq j\leq m-1. \end{equation} \noindent Summing up over $j=0,\ldots,m-1$ gives, by virtue of (\ref{IntRRR}), the estimate \begin{equation}\label{eq1.16} \|R(x,y)\|_{\mathbb{C}^{l\times l}}\leq C\,|x-{\bar y}|^{2m-n}, \qquad x,y\in{\mathbb{R}}^n_+. \end{equation} In order to obtain pointwise estimates for derivatives of $R(x,y)$, we make use of the following local estimate for a solution of problem (\ref{eq1.5}) with $f=0$. Recall that $W^s_p$ stands for the classical $L_p$-based Sobolev space of order $s$. The particle {\it loc} is used to brand the local versions of these (and other similar) spaces. 
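For the record, let us verify that the above application of Lemma~\ref{lem1} is legitimate (a routine check, using only $n>2m$ and $0\leq j\leq m-1$):

```latex
% Hypotheses of Lemma 2.2: \varepsilon=m-j\geq 1>0 and
% 0<m\leq\delta=2m-j-1\leq 2m-1\leq n-2<N=n-1.
% Exponent matching in (\ref{eq1.14}), with \eta=\xi'-x', \zeta=y'-x':
N+\varepsilon=(n-1)+(m-j)=n+m-1-j,
\qquad
N-\delta=(n-1)-(2m-j-1)=n-2m+j,
% while the right-hand side of (\ref{E1}) becomes
% a^{-\varepsilon}(|\zeta|+a+b)^{\delta-N}
% =x_n^{\,j-m}\,(|y'-x'|+x_n+y_n)^{\,2m-j-n};
% multiplying by the factor x_n^m from (\ref{eq1.14}) yields precisely
% the estimate (\ref{Rj}).
```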
\begin{lemma}\label{l1.1}{\rm [see {\bf\cite{ADN}}]} Let $\zeta$ and $\zeta_0$ be functions in $C_0^\infty(\mathbb{R}^n)$ such that $\zeta_0 =1$ in a neighborhood of $\mbox{supp}\,\zeta$. Then the solution $u\in W_2^m(\mathbb{R}_+^n,loc)$ of problem (\ref{eq1.5}) with $f=0$ and $\varphi_j\in W_p^{k+1-j-1/p}(\mathbb{R}^{n-1},loc)$, where $k\geq m$ and $p\in(1,\infty)$, belongs to $W_p^{k+1}(\mathbb{R}^n_+,loc)$ and satisfies the estimate \begin{equation}\label{eq1.17} \|\zeta u\|_{W_p^{k+1}(\mathbb{R}^n_+)}\leq C\, \Bigl(\sum_{j=0}^{m-1}\|\zeta_0 \varphi_j\|_{W_p^{k+1-j-1/p}(\mathbb{R}^{n-1})} +\|\zeta_0 u\|_{L_p(\mathbb{R}^n_+)}\Bigr), \end{equation} \noindent where $C$ is independent of $u$ and $\varphi_j$. \end{lemma} \noindent Let $B(x,r)$ denote the ball of radius $r>0$ centered at $x$. \begin{corollary}\label{c1.2} Assume that $u\in W_2^m(\mathbb{R}_+^n,loc)$ is a solution of problem (\ref{eq1.5}) with $f=0$ and $\varphi_j\in C^{k+1-j}(\mathbb{R}^{n-1},loc)$. Then for any $z\in\overline{\mathbb{R}^n_+}$ and $\rho>0$ \begin{equation}\label{eq1.18} \sup_{\mathbb{R}^n_+\cap B(z,\rho)}|\nabla _k u|\leq C\, \Bigl(\,\rho^{-k}\sup_{\mathbb{R}^n_+\cap B(z,2\rho)}|u| +\sum_{j=0}^{m-1}\sum_{s=0}^{k+1-j} \rho^{s+j-k}\sup_{\mathbb{R}^{n-1}\cap B(z,2\rho)}|\nabla'_{s}\varphi_j|\Bigr), \end{equation} \noindent where $\nabla'_s$ is the gradient of order $s$ in $\mathbb{R}^{n-1}$. Here $C$ is a constant independent of $\rho$, $z$, $u$ and $\varphi_j$. \end{corollary} \noindent{\bf Proof.} Given the dilation-invariant nature of the estimate we seek, it suffices to assume that $\rho=1$.
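To spell out this routine rescaling (under the standing assumption that ${\mathcal L}(D_x)$ is homogeneous of order $2m$ with constant coefficients): suppose (\ref{eq1.18}) has been established for $\rho=1$. Given $\rho>0$, the function $v(x):=u(\rho x)$ again solves a problem of the form (\ref{eq1.5}) with $f=0$ and boundary data $\psi_j(x'):=\rho^j\varphi_j(\rho x')$, and applying the case $\rho=1$ to $v$ at the center $z/\rho$ yields \begin{equation*} \rho^{k}\sup_{\mathbb{R}^n_+\cap B(z,\rho)}|\nabla_k u|\leq C\, \Bigl(\sup_{\mathbb{R}^n_+\cap B(z,2\rho)}|u| +\sum_{j=0}^{m-1}\sum_{s=0}^{k+1-j}\rho^{s+j} \sup_{\mathbb{R}^{n-1}\cap B(z,2\rho)}|\nabla'_{s}\varphi_j|\Bigr), \end{equation*} \noindent which, after division by $\rho^k$, is precisely (\ref{eq1.18}).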
Given $\phi\in C^{k+1-j}({\mathbb{R}}^{n-1})$ supported in ${\mathbb{R}}^{n-1}\cap B(z,2)$, we observe that, for a suitable $\theta\in (0,1)$, \begin{eqnarray}\label{simple} \|\phi\|_{W_p^{k+1-j-1/p}(\mathbb{R}^{n-1})} & \leq & C\, \|\phi\|_{L_p(\mathbb{R}^{n-1})}^\theta \|\phi\|_{W_p^{k+1-j}(\mathbb{R}^{n-1})}^{1-\theta} \nonumber\\[6pt] & \leq & C\, \sum_{s=0}^{k+1-j}\sup_{\mathbb{R}^{n-1}\cap B(z,2)}|\nabla'_{s}\phi|. \end{eqnarray} \noindent Also, if $p>n$, \begin{equation}\label{eq1.19} \sup\limits_{\mathbb{R}^n_+}|\nabla_k v|\leq C\, \|v\|_{W_p^{k+1}(\mathbb{R}^n_+)}, \end{equation} \noindent by virtue of the classical Sobolev inequality. Combining (\ref{simple}) and (\ref{eq1.19}) with Lemma~\ref{l1.1} now readily gives (\ref{eq1.18}). $\Box$ \vskip 0.08in Given $x,y\in{\mathbb{R}}^n_+$, set $\rho:=|x-\bar{y}|/5$ and pick $z\in\partial{\mathbb{R}}^n_+$ such that $|x-z|=\rho/2$. It follows that for any $w\in{\mathbb{R}}^n_+\cap B(z,2\rho)$ we have $|x-\bar{y}|\leq |x-z|+|z-w|+|w-\bar{y}|\leq \rho/2+2\rho+|w-\bar{y}| \leq |x-\bar{y}|/2+|w-\bar{y}|$. Consequently, $|x-\bar{y}|/2\leq |w-\bar{y}|$ for every $w\in{\mathbb{R}}^n_+\cap B(z,2\rho)$, so that, ultimately, \begin{equation}\label{nablaPhi} \rho^{\nu-k}\sup_{w\in\mathbb{R}^{n-1}\cap B(z,2\rho)} \|\nabla'_{\nu}F(w-y)\|_{{\mathbb{C}}^{l\times l}} \leq \frac{C}{|x-\bar{y}|^{n-2m+k}}, \end{equation} \noindent for each $\nu\in{\mathbb{N}}_0$. Applying Corollary~\ref{c1.2} to the function $R(\cdot,y)$, whose boundary data are the functions $\bigl(\partial^j F/\partial x_n^j\bigr)(\cdot-y)$, and granted (\ref{nablaPhi}), (\ref{eq1.16}) and our choice of $\rho$, we altogether obtain that \begin{equation}\label{eq1.20} \|D^\alpha_x R(x,y)\|_{\mathbb{C}^{l\times l}} \leq C_{k}\,|x-\bar{y}|^{2m-n-k},\qquad x,y\in{\mathbb{R}}^n_+, \end{equation} \noindent for each multi-index $\alpha\in{\mathbb{N}}_0^n$ of length $k$. In the following two formulas, it will be convenient to use the notation $R_{\mathcal L}$ for the matrix $R$ associated with the operator ${\mathcal L}(D_x)$ as in (\ref{defRRR}).
By Green's formula \begin{equation}\label{eq1.11} R_{\mathcal L}(y,x)=\Bigl[R_{{\mathcal L}^*}(x,y)\Bigr]^*,\qquad x,y\in{\mathbb{R}}^n_+, \end{equation} \noindent where the superscript star indicates adjunction. In order to estimate {\it mixed} partial derivatives, we observe that (\ref{eq1.11}) entails \begin{equation}\label{eq1.21} (D^\beta_y R_{\mathcal L})(x,y) =\Bigl[(D^\beta_x R_{{\mathcal L}^*})(y,x)\Bigr]^* \end{equation} \noindent and remark that ${\mathcal L}^*$ has properties similar to ${\mathcal L}$. This, in concert with (\ref{eq1.20}) and \begin{equation}\label{reflect} |x-\bar{y}|=|\bar{x}-y|,\qquad x, y\in\mathbb{R}^n_+, \end{equation} \noindent yields \begin{equation}\label{eq1.22} \|D^\beta_y R(x,y)\|_{\mathbb{C}^{l\times l}} \leq C_{\beta}\,|x-\bar{y}|^{2m-n-|\beta|}. \end{equation} \noindent Let us also point out that by formally differentiating (\ref{eq1.12}) with respect to $y$ we obtain \begin{equation}\label{eq1.23} \left\{ \begin{array}{l} {\mathcal L}(D_x)\,[D^\beta_yR_{\mathcal L}(x,y)]=0 \qquad\qquad\qquad\qquad\qquad \mbox{for}\,\,x\in\mathbb{R}^n_+, \\[10pt] \displaystyle{\left(\frac{\partial^j}{\partial x_n^j} D^\beta_y R\right)((x',0),y) =\left(\frac{\partial ^j}{\partial x_n^j}(-D)^\beta F\right)((x',0)-y)}, \,\,x'\in\mathbb{R}^{n-1},\,\,0\leq j\leq m-1. \end{array} \right. \end{equation} \noindent With (\ref{eq1.22}) and (\ref{eq1.23}) in place of (\ref{eq1.16}) and (\ref{eq1.12}), respectively, we can now run the same program as above and obtain the estimate \begin{equation}\label{Eq3} \|D^\alpha_xD^\beta_y R(x,y)\|_{\mathbb{C}^{l\times l}} \leq C_{\alpha\beta}\,|x-\bar{y}|^{2m-n-|\alpha|-|\beta|}, \end{equation} \noindent for all multi-indices $\alpha$ and $\beta$. \subsection{Proof of Theorem~\ref{th1} for $n\leq 2m$} When $n\leq 2m$ we shall use the method of descent.
To get started, fix an integer $N$ such that $N>2m$ and let $(x,z)\mapsto {\mathcal G}(x,y,z-\zeta)$ denote the Green matrix with singularity at $(y,\zeta)\in{\mathbb{R}}^n\times{\mathbb{R}}^{N-n}$ of the Dirichlet problem for the operator ${\mathcal L}(D_x)+(-\Delta_z)^m$ in the $N$-dimensional half-space \begin{equation}\label{RN} \mathbb{R}^N_+:= \{(x,z):\,z\in\mathbb{R}^{N-n},\,x=(x',x_n),\,x'\in\mathbb{R}^{n-1},\,x_n>0\}. \end{equation} \noindent Also, recall that $G(x,y)$ stands for the Green matrix of the problem (\ref{eq1.5}). Our immediate goal is to establish the following. \begin{lemma}\label{lem3} For all multi-indices $\alpha$ and $\beta$ of order $m$ and for all $x$ and $y$ in $\mathbb{R}^n_+$ \begin{equation}\label{Eq12} D^\alpha_x D^\beta_y G(x,y) =\int_{\mathbb{R}^{N-n}}D^\alpha_x D^\beta_y{\mathcal G}(x,y,-\zeta)\,d\zeta. \end{equation} \end{lemma} \noindent{\bf Proof.} The strategy is to show that \begin{equation}\label{pairGf} \int_{\mathbb{R}^n_+}D^\alpha_xD_y^\beta G(x,y)\,f_\beta(y)\,dy =\int_{\mathbb{R}^n_+}\int_{\mathbb{R}^{N-n}}D^\alpha_xD_y^\beta {\mathcal G}(x, y,-\zeta)\,d\zeta\,f_\beta(y)\,dy \end{equation} \noindent for each $f_\beta\in C^\infty_0(\mathbb{R}^n_+)$, from which (\ref{Eq12}) clearly follows. To justify (\ref{pairGf}) for a fixed, arbitrary $f_\beta\in C^\infty_0(\mathbb{R}^n_+)$, we let $u$ be the unique vector-valued function satisfying $D^\alpha u\in L^2(\mathbb{R}^n_+)$ for all $\alpha$ with $|\alpha|=m$, and such that \begin{equation}\label{Eq9} \left\{ \begin{array}{l} {\mathcal L}(D_x)u=D^\beta_x f_\beta \qquad{\rm in}\,\,\mathbb{R}^n_+, \\[6pt] \displaystyle{\left(\frac{\partial^j u}{\partial x_n^j}\right)(x',0)=0 \qquad {\rm on}\,\,\mathbb{R}^{n-1}},\,\,0\leq j\leq m-1. \end{array} \right. \end{equation} \noindent It is well-known that for each $\gamma\in{\mathbb{N}}_0^n$ \begin{equation}\label{Eq10} |D^\gamma u(x)|\leq C_\gamma\,|x|^{m-n-|\gamma|}\qquad{\rm for}\,\,|x|>1.
\end{equation} \noindent This follows, for instance, from Theorem~6.1.4 {\bf\cite{KMR1}} combined with Theorem~10.3.2 {\bf\cite{KMR2}}. Also, as a consequence of Green's formula, the solution of the problem (\ref{Eq9}) satisfies \begin{equation}\label{Eq14} D^\alpha_x u(x) =\int_{\mathbb{R}^n_+}D^\alpha_x(-D_y)^\beta G(x,y)\,f_\beta(y)\,dy. \end{equation} We shall now derive yet another integral representation formula for $D^\alpha_x u$ in terms of (derivatives of) ${\mathcal G}$ which is similar in spirit to (\ref{Eq14}). To get started, we note that since $N>2m$ the estimate (\ref{Eq3}) implies \begin{equation}\label{Eq13} \|D^\alpha_x D^\beta_y {\mathcal G}(x,y,-\zeta)\|_{\mathbb{C}^{l\times l}} \leq c\,(|x-y|+|\zeta|)^{-N}. \end{equation} \noindent Let us now fix $x\in\mathbb{R}^n_+$, $\rho>0$ and introduce a cut-off function $H\in C^\infty(\mathbb{R}^{N-n})$ which satisfies $H(z)=1$ for $|z|\leq 1$ and $H(z)=0$ for $|z|\geq 2$. We may then write \begin{equation}\label{u=G} u(x)=\int_{\mathbb{R}^N_+} {\mathcal G}(x,y,-\zeta) \Bigl[H\bigl(\zeta/\rho\bigr)D^\beta f_\beta(y)+(-\Delta_\zeta)^m \bigl(H\bigl(\zeta/\rho\bigr)\,u(y)\bigr)\Bigr]\,dy\,d\zeta, \end{equation} \noindent which further implies \begin{eqnarray}\label{est1} && \Bigl|D^\alpha_x u(x)-\int_{\mathbb{R}^N_+} D^\alpha_x(-D_y)^\beta\, {\mathcal G}(x,y,-\zeta)\,H\bigl(\zeta/\rho\bigr)\,f_\beta(y)\,dy\,d\zeta\Bigr| \nonumber\\[6pt] && \qquad\leq c\,\sum_{|\gamma|=m}\int_{\mathbb{R}^N_+} \|D^\alpha_x D_\zeta^\gamma\,{\mathcal G}(x,y,-\zeta)\| _{\mathbb{C}^{l\times l}}\, \bigl|u(y)\,D^\gamma_\zeta\bigl(H\bigl(\zeta/\rho\bigr)\bigr)\bigr|\,dy\,d\zeta. \end{eqnarray} \noindent By (\ref{Eq10}) and (\ref{Eq13}), the expression in the right-hand side of (\ref{est1}) does not exceed \begin{eqnarray*} && c\,\rho^{-m}\int_{\rho<|\zeta|<2\rho}d\zeta \int_{\mathbb{R}^n_+}(|x-y|+|\zeta|)^{-N}\,|y|^{m-n}\,dy \\[6pt] && \qquad\qquad \leq c\,\rho^{N-n-m}\int_{\mathbb{R}^n_+}(|y|+\rho)^{-N}|y|^{m-n}\,dy =c\,\rho^{-n}.
\end{eqnarray*} \noindent This estimate, in concert with (\ref{Eq13}), allows us to obtain, after letting $\rho\to\infty$, that \begin{equation}\label{DalphaU} D^\alpha_x u(x) =\int_{\mathbb{R}^n_+}\int_{\mathbb{R}^{N-n}}D^\alpha_x(-D_y)^\beta {\mathcal G}(x, y,-\zeta)\,d\zeta\,f_\beta(y)\,dy. \end{equation} \noindent Now (\ref{pairGf}) follows readily from this and (\ref{Eq14}). $\Box$ \vskip 0.08in Having disposed of Lemma~\ref{lem3}, we are ready to present the \vskip 0.08in \noindent{\bf End of Proof of Theorem~\ref{th1}.} Assume that $2m\geq n$ and let $N$ be again an integer such that $N>2m$. Denote by ${\mathcal F}(x,z)$ the fundamental solution of the operator ${\mathcal L}(D_x)+(-\Delta_z)^m$, which is positive homogeneous of degree $2m-N$ and is singular at $(0,0)\in{\mathbb{R}}^n\times{\mathbb{R}}^{N-n}$. Then the identity \begin{equation}\label{Eq17} D^{\alpha+\beta}_x F(x)=\int_{\mathbb{R}^{N-n}}D^{\alpha+\beta}_x {\mathcal F}(x,-\zeta)\,d\zeta \end{equation} \noindent can be established by proceeding as in the proof of Lemma~\ref{lem3}. Combining (\ref{Eq17}) with Lemma~\ref{lem3}, we arrive at \begin{equation}\label{Eq18} D^\alpha_x D^\beta_y R(x,y)=\int_{\mathbb{R}^{N-n}}D^\alpha_x D^\beta_y {\mathcal R}(x,y,-\zeta)\,d\zeta, \end{equation} \noindent where ${\mathcal R}(x,y,z):={\mathcal G}(x,y,z)-{\mathcal F}(x-y,z)$. Moreover, \begin{equation}\label{DalDbetU} \|D^\alpha_x D^\beta_y {\mathcal R}(x,y,-\zeta)\|_{\mathbb{C}^{l\times l}} \leq C(|x-\bar{y}|+|\zeta|)^{-N}, \end{equation} \noindent by (\ref{Eq3}) applied to the operator ${\mathcal L}(D_x)+(-\Delta_z)^m$, i.e., with $N$ in place of $n$ and $|\alpha|=|\beta|=m$. This estimate, together with (\ref{Eq18}), yields (\ref{mainest}) and the proof of Theorem~\ref{th1} is therefore complete.
$\Box$ \vskip 0.08in \section{Properties of integral operators in a half-space} \setcounter{equation}{0} \noindent In \S{3.1} and \S{3.2} we prove estimates for commutators (and certain commutator-like operators) between integral operators in $\mathbb{R}^n_+$ and operators of multiplication by functions of bounded mean oscillation, in weighted Lebesgue spaces on $\mathbb{R}^n_+$. Subsection~3.3 contains ${\rm BMO}$ and pointwise estimates for extension operators from $\mathbb{R}^{n-1}$ onto $\mathbb{R}^n_+$. Throughout this section, given two Banach spaces $E,F$, we let ${\mathfrak L}(E,F)$ stand for the space of bounded linear operators from $E$ into $F$, and abbreviate ${\mathfrak L}(E):={\mathfrak L}(E,E)$. Also, given $p\in[1,\infty]$, an open set ${\mathcal O}\subset{\mathbb{R}}^n$ and a measurable nonnegative function $w$ on ${\mathcal O}$, we let $L_p({\mathcal O},w(x)\,dx)$ denote the usual Lebesgue space of (classes of) functions which are $p$-th power integrable with respect to the weighted measure $w(x)\,dx$ on ${\mathcal O}$. Finally, following a well-established tradition, $A(r)\sim B(r)$ will mean that each quantity is dominated by a fixed multiple of the other, uniformly in the parameter $r$. \subsection{Kernels with singularity at $\partial\mathbb{R}^{n}_+$} Recall that $L_p({\mathbb{R}}^n_+,\,x_n^{ap}\,dx)$ stands for the weighted Lebesgue space of $p$-th power integrable functions in ${\mathbb{R}}^n_+$ corresponding to the weight $w(x):=x_n^{ap}$, $x=(x',x_n)\in{\mathbb{R}}^n_+$. \begin{proposition}\label{tp3} Let $a\in{\mathbb{R}}$, $1<p<\infty$, and assume that ${\mathcal Q}$ is a non-negative measurable function on $\{\zeta=(\zeta',\zeta_n)\in{\mathbb{R}}^{n-1}\times{\mathbb{R}}:\,\zeta_n>-1\}$, which also satisfies \begin{equation}\label{CC-1} \int_{\mathbb{R}^n_+} {\mathcal Q}(\zeta',\zeta_n-1)\,\zeta_n^{-a-1/p}\,d\zeta<\infty.
\end{equation} \noindent Then the operator \begin{equation}\label{CC-2} Qf(x):=x_n^{-n}\int_{\mathbb{R}^n_+}{\mathcal Q} \Bigl(\frac{y-x}{x_n}\Bigr)f(y)\,dy,\qquad x=(x',x_n)\in{\mathbb{R}}^n_+, \end{equation} \noindent initially defined on functions $f\in L_p(\mathbb{R}^n_+)$ with compact support in $\mathbb{R}^n_+$, can be extended by continuity to an operator acting from $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ into itself, with the norm satisfying \begin{equation}\label{CC-3} \|Q\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{ap}dx))}\leq\int_{{\mathbb{R}}^n_+} {\mathcal Q}(\zeta',\zeta_n-1)\,\zeta_n^{-a-1/p}\,d\zeta. \end{equation} \end{proposition} \noindent{\bf Proof.} Introducing the new variable $\zeta:=(x_n^{-1}(y'-x'), x_n^{-1}y_n)\in{\mathbb{R}}^n_+$, we may write \begin{equation}\label{CC-4} |Qf(x)|\leq\int_{{\mathbb{R}}^n_+}{\mathcal Q}(\zeta',\zeta_n-1) |f(x' +x_n\zeta', x_n\zeta_n)|d\zeta,\qquad\forall\,x\in{\mathbb{R}}^n_+. \end{equation} \noindent Then, by Minkowski's inequality, \begin{eqnarray}\label{CC-5} \|Qf\|_{L_p({\mathbb{R}}^n_+,x_n^{a p}\,dx)} & \leq & \int_{{\mathbb{R}}^n_+} {\mathcal Q}(\zeta',\zeta_n-1)\Bigl(\int_{{\mathbb{R}}^n_+} x_n^{a p}|f(x'+x_n\zeta',x_n\zeta_n)|^p\,dx\Bigr)^{1/p}d\zeta \nonumber\\[6pt] & = & \Bigl(\int_{{\mathbb{R}}^n_+}{\mathcal Q}(\zeta',\zeta_n-1)\, \zeta_n^{-a-1/p}\,d\zeta\Bigr)\|f\|_{L_p({\mathbb{R}}^n_+,x_n^{a p}\,dx)}, \end{eqnarray} \noindent as desired. $\Box$ \vskip 0.08in Recall that $\bar{y}:=(y',-y_n)$ if $y=(y',y_n)\in{\mathbb{R}}^{n-1}\times{\mathbb{R}}$. \begin{corollary}\label{Cor1} Consider \begin{equation}\label{1a} Rf(x):=\int_{\mathbb{R}^n_+}\frac{\log\,\bigl(\frac{|x-y|}{x_n}+2\bigr)} {|x-\bar{y}|^n} f(y)\,dy,\qquad x=(x',x_n)\in{\mathbb{R}}^n_+. \end{equation} \noindent Then for each $1<p<\infty$ and each $a\in (-1/p,1-1/p)$ the operator $R$ is bounded from $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ into itself. 
Moreover, \begin{equation}\label{70a} \|R\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx))} \leq\frac{c(n)\,p^2}{(pa+1)(p(1-a)-1)}=\frac{c(n)}{s(1-s)}, \end{equation} \noindent where $s=1-a-1/p$ and $c(n)$ is independent of $p$ and $a$. \end{corollary} \noindent{\bf Proof.} The result follows from Proposition~\ref{tp3} with \begin{equation}\label{CC-6} {\mathcal Q}(\zeta):=\frac{\log\,(|\zeta|+2)}{(|\zeta|^2+1)^{n/2}}, \end{equation} \noindent and from the obvious inequality $2|x-\bar{y}|^2 \geq |x-y|^2 +x_n^2$. $\Box$ \vskip 0.08in Let us note here that Corollary~\ref{Cor1} immediately yields the following. \begin{corollary}\label{Cor2} Consider \begin{equation}\label{2a} Kf(x):=\int_{{\mathbb{R}}^n_+}\frac{f(y)}{|x-\bar{y}|^n}\,dy,\qquad x\in{\mathbb{R}}^n_+. \end{equation} \noindent Then for each $1<p<\infty$ and $a\in (-1/p,1-1/p)$ the operator $K$ is bounded from $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ into itself. Moreover, \begin{equation}\label{71a} \|K\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx))} \leq \frac{c(n)\,p^2}{(pa+1)(p(1-a)-1)}=\frac{c(n)}{s(1-s)}, \end{equation} \noindent where $s=1-a-1/p$ and $c(n)$ is independent of $p$ and $a$. \end{corollary} Recall that the barred integral stands for the mean-value (taken in the integral sense). \begin{lemma}\label{Lem1} Assume that $1<p<\infty$, $a\in (-1/p,1-1/p)$, and recall the operator $K$ introduced in {\rm (\ref{2a})}. Further, consider a non-negative, measurable function $w$ defined on ${\mathbb{R}}^n_+$ and fix a family of balls ${\mathcal F}$ which forms a Whitney covering of ${\mathbb{R}}^n_+$. Then the norm of $wK$ as an operator from $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ into itself is equivalent to the $p$-th root of \begin{equation}\label{CC-7} \sup\limits_{B\in{\mathcal F}}{\int{\mkern-19mu}-}_{\!\!\!\!B}w(y)^p\,dy.
\end{equation} \noindent Furthermore, \begin{equation}\label{CC-70} \|w\,K\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx))}\leq\frac{c(n)}{s(1-s)} \sup\limits_{B\in{\mathcal F}}\Bigl({\int{\mkern-19mu}-}_{\!\!\!\!B}w(y)^p\,dy\Bigr)^{1/p}, \end{equation} \noindent where $c(n)$ is independent of $w$, $p$, and $a$. \end{lemma} \noindent{\bf Proof.} Fix $f\geq 0$ and denote by $|B|$ the Euclidean volume of $B$. Sobolev's embedding theorem allows us to write \begin{equation}\label{CC-8} \|Kf\|^p_{L_\infty(B)}\leq c(n)\,|B|^{-1}\sum_{j=0}^n |B|^{jp/n} \|\nabla_j Kf\|^p_{L_p(B)},\qquad\forall\,B\in{\mathcal F}. \end{equation} \noindent Hence, \begin{equation}\label{N1-bis} \int_{{\mathbb{R}}^n_+}|x_n^{a}w(x)(Kf)(x)|^p\,dx\leq c(n)\, \sup\limits_{B\in{\mathcal F}}{\int{\mkern-19mu}-}_{\!\!\!\!B}w(y)^p\,dy \int_{{\mathbb{R}}^n_+}x_n^{pa}\sum_{0\leq j\leq n}x_n^{jp}|\nabla_j Kf|^p\,dx. \end{equation} \noindent Observing that $x_n^j|\nabla_j\, Kf|\leq c(n)\,Kf$ and referring to Corollary~\ref{Cor2}, we arrive at the required upper estimate for the norm of $wK$. The lower estimate is obvious. $\Box$ \vskip 0.08in We momentarily pause in order to collect some definitions and set up basic notation pertaining to functions of bounded mean oscillation. Let $f$ be a locally integrable function defined on $\mathbb{R}^n$ and define the seminorm \begin{equation}\label{semi1} [f]_{{\rm BMO}(\mathbb{R}^n)}:=\sup_{B} {\int{\mkern-19mu}-}_{\!\!\!B}\,\Bigl|f(x)-{\int{\mkern-19mu}-}_{\!\!\!B}f(y)\,dy\Bigr|\,dx, \end{equation} \noindent where the supremum is taken over all balls $B$ in ${\mathbb{R}^n}$.
Next, if $f$ is a locally integrable function defined on $\mathbb{R}^n_+$, we set \begin{equation}\label{semi2} [f]_{{\rm BMO}(\mathbb{R}^n_+)}:=\mathop{\hbox{sup}}_{\{B\}} {\int{\mkern-19mu}-}_{\!\!\!B\cap\mathbb{R}^n_+}\, \Bigl|f(x)-{\int{\mkern-19mu}-}_{\!\!\!B\cap\mathbb{R}^n_+}f(y)\,dy\Bigr|\,dx, \end{equation} \noindent where, this time, the supremum is taken over the collection $\{B\}$ of all balls $B$ with centers in $\overline{\mathbb{R}^n_+}$. Then the following inequalities are straightforward: \begin{equation}\label{N4-bis} [f]_{{\rm BMO}(\mathbb{R}^n_+)}\leq\mathop{\hbox{sup}}_{\{B\}} {\int{\mkern-19mu}-}_{\!\!\!B\cap\mathbb{R}^n_+}\,{\int{\mkern-19mu}-}_{\!\!\!B\cap\mathbb{R}^n_+} \Bigl|f(x)-f(y)\Bigr|\,dxdy\leq 2\,[f]_{{\rm BMO}(\mathbb{R}^n_+)}. \end{equation} \noindent We also record here the equivalence relation \begin{equation}\label{semi} [f]_{{\rm BMO}(\mathbb{R}^n_+)}\sim [{\rm Ext}\,f]_{{\rm BMO}(\mathbb{R}^n)}, \end{equation} \noindent where ${\rm Ext}\,f$ is the extension of $f$ onto $\mathbb{R}^n$ as an even function in $x_n$. Finally, by ${\rm BMO}({\mathbb{R}^n_+})$ we denote the collection of equivalence classes, modulo constants, of functions $f$ on $\mathbb{R}^n_+$ for which $[f]_{{\rm BMO}({\mathbb{R}^n_+})}<\infty$. \begin{proposition}\label{tp3-bis} Let $b\in{\rm BMO}({\mathbb{R}}^n_+)$ and consider the operator \begin{equation}\label{eqp8-bis} Tf(x):=\int_{{\mathbb{R}}^n_+}\frac{|b(x)-b(y)|}{|x-\bar{y}|^n}f(y)\,dy, \qquad x\in{\mathbb{R}}^n_+.
\end{equation} \noindent Then for each $p\in(1,\infty)$ and $a\in (-1/p,1-1/p)$ \begin{equation}\label{eqp9bis} T:L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)\longrightarrow L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx) \end{equation} \noindent is a well-defined, bounded operator with \begin{equation}\label{CC-71} \|T\|_{{\mathfrak L}(L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx))} \leq\frac{c(n)}{s(1-s)}\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}, \end{equation} \noindent where $c(n)$ is a constant which depends only on $n$. \end{proposition} \noindent{\bf Proof.} Given $x\in{\mathbb{R}}^n_+$ and $r>0$, we shall use the abbreviations \begin{equation}\label{CC-9} \bar{b}_r(x):={\int{\mkern-19mu}-}_{\!\!\!B(x,r)\cap{\mathbb{R}}^n_+}b(y)\,dy, \qquad\quad D_r(x):=|b(x)-\bar{b}_{r}(x)|, \end{equation} \noindent and make use of the integral operator \begin{equation}\label{CC-10} Sf(x):=\int_{\mathbb{R}^n_+}\frac{D_{|x-\bar{y}|}(x)}{|x-\bar{y}|^n} \,f(y)\,dy,\qquad x\in{\mathbb{R}}^n_+, \end{equation} \noindent as well as its adjoint $S^*$. Clearly, for each nonnegative, measurable function $f$ on ${\mathbb{R}}^n_+$ and each $x\in{\mathbb{R}}^n_+$, \begin{eqnarray}\label{CC-11} Tf(x) & \leq & Sf(x)+S^*f(x)+\int_{{\mathbb{R}}^n_+} \frac{|\bar{b}_{|x-\bar{y}|}(x)-\bar{b}_{|x-\bar{y}|}(y)|}{|x-\bar{y}|^n}\, f(y)dy \nonumber\\[6pt] & \leq & Sf(x)+S^*f(x)+c(n)\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}Kf(x), \end{eqnarray} \noindent where $K$ has been introduced in (\ref{2a}). Making use of Corollary~\ref{Cor2}, we need to estimate only the norm of $S$. Obviously, \begin{equation}\label{CC-12} Sf(x)\leq D_{x_n}(x)Kf(x)+\int_{{\mathbb{R}}^n_+} \frac{|\bar{b}_{x_n}(x)-\bar{b}_{|x-\bar{y}|}(x)|}{|x-\bar{y}|^n}\,f(y)\,dy. 
\end{equation} \noindent Setting $r=|x-\bar{y}|$ and $\rho=x_n$ in the standard inequality \begin{equation}\label{CC-13} |\bar{b}_\rho(x)-\bar{b}_r(x)|\leq c(n)\,\log\,\Bigl(\frac{r}{\rho}+1\Bigr) [b]_{{\rm BMO}({\mathbb{R}}^n_+)}, \end{equation} \noindent where $r>\rho$ (cf., e.g., p.\,176 in {\bf\cite{MS}}, or p.\,206 in {\bf\cite{Tor}}), we arrive at \begin{equation}\label{CC-14} Sf(x)\leq D_{x_n}(x)Kf(x)+ c(n)\,[b]_{{\rm BMO}(\mathbb{R}^n_+)}\, Rf(x), \end{equation} \noindent where $R$ is defined in (\ref{1a}). Let ${\mathcal F}$ be a Whitney covering of ${\mathbb{R}}^n_+$ with open balls. For an arbitrary $B\in{\mathcal F}$, denote by $\delta$ the radius of $B$. By Lemma~\ref{Lem1} with $w(x):=D_{x_n}(x)$, the norm of the operator $D_{x_n}(x)K$ does not exceed \begin{eqnarray}\label{CC-15} \sup\limits_{B\in{\mathcal F}}\Bigl({\int{\mkern-19mu}-}_{\!\!\!B}|D_{x_n}(x)|^p\,dx \Bigr)^{1/p} & \leq & c(n)\,\sup\limits_{B\in{\mathcal F}}\Bigl({\int{\mkern-19mu}-}_{\!\!\!B} |b(x)-\bar{b}_\delta(x)|^p\,dx\Bigr)^{1/p} +c(n)\,[b]_{{\rm BMO}(\mathbb{R}^n_+)} \nonumber\\[6pt] &\leq & c(n)\,[b]_{{\rm BMO}(\mathbb{R}^n_+)}, \end{eqnarray} \noindent by the John-Nirenberg inequality. Here we have also used the triangle inequality and the estimate (\ref{CC-13}) in order to replace $\bar{b}_{x_n}(x)$ in the definition of $D_{x_n}(x)$ by $\bar{b}_{\delta}(x)$. The intervening logarithmic factor is bounded independently of $x$ since $x_n$ is comparable with $\delta$, uniformly for $x\in B$. With this estimate in hand, a reference to Corollary~\ref{Cor1} gives that \begin{eqnarray}\label{CC-16} && S:L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)\to L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx) \mbox{ boundedly} \\[6pt] &&\mbox{for each }p\in(1,\infty)\mbox{ and each }a\in (-1/p,1-1/p). \nonumber \end{eqnarray} \noindent The corresponding estimate for the norm of $S$ follows as well.
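The duality invoked next can be made explicit (a standard computation, recorded here for convenience): with respect to the unweighted pairing $\langle f,g\rangle=\int_{\mathbb{R}^n_+}f\,g\,dx$, the dual of $L_p({\mathbb{R}}^n_+,\,x_n^{ap}\,dx)$ is $L_{p'}({\mathbb{R}}^n_+,\,x_n^{a'p'}\,dx)$ with $a':=-a$, and \begin{equation*} a\in(-1/p,\,1-1/p)\quad\Longleftrightarrow\quad a'=-a\in(-1/p',\,1-1/p'), \end{equation*} \noindent since $1-1/p=1/p'$. Hence the admissible range of parameters in (\ref{CC-16}) is stable under passing to adjoints.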
By duality, it follows that $S^*$ enjoys the same property and, hence, the operator $T$ is bounded on $L_p({\mathbb{R}}^n_+,\,x_n^{a p}\,dx)$ for each $p\in(1,\infty)$ and $a\in(-1/p,1-1/p)$, thanks to (\ref{CC-11}) and Corollary~\ref{Cor2}. The fact that the operator norm of $T$ admits the desired estimate is implicit in the above reasoning and this finishes the proof of the proposition. $\Box$ \vskip 0.08in \subsection{Singular integral operators} We need the analogue of Proposition~\ref{tp3-bis} for the class of Mikhlin-Calder\'on-Zygmund singular integral operators. Recall that \begin{equation}\label{CZ-op} {\mathcal S}f(x)=p.v.\int_{{\mathbb{R}}^n}k(x,x-y)f(y)\,dy,\qquad x\in{\mathbb{R}}^n, \end{equation} \noindent (where $p.v.$ indicates that the integral is taken in the principal value sense, which means excluding balls centered at the singularity and then passing to the limit as the radii shrink to zero), is called a Mikhlin-Calder\'on-Zygmund operator (with a variable coefficient kernel) provided the function $k:{\mathbb{R}}^n\times({\mathbb{R}}^n\setminus\{0\})\to{\mathbb{R}}$ satisfies: \begin{itemize} \item[(i)] $k(x,\cdot)\in C^\infty({\mathbb{R}}^n\setminus\{0\})$ and, for almost every $x\in{\mathbb{R}}^n$, \begin{equation}\label{kk-est} \max_{|\alpha|\leq 2n}\|D_z^\alpha k(x,z)\|_{L_\infty({\mathbb{R}}^n\times S^{n-1})} <\infty, \end{equation} \noindent where $S^{n-1}$ is the unit sphere in ${\mathbb{R}}^n$; \item[(ii)] $k(x,\lambda z)=\lambda ^{-n}k(x,z)$ for each $z\in{\mathbb{R}}^n$ and each $\lambda\in{\mathbb{R}}$, $\lambda>0$; \item[(iii)] $\int_{S^{n-1}} k(x,\omega)\,d\omega=0$, where $d\omega$ indicates integration with respect to $\omega\in S^{n-1}$. \end{itemize} \vskip 0.08in It is well-known that the Mikhlin-Calder\'on-Zygmund operator ${\mathcal S}$ and its commutator $[{\mathcal S},b]$ with the operator of multiplication by a function $b\in{\rm BMO}(\mathbb{R}^n_+)$ are bounded operators in $L_p(\mathbb{R}^n_+)$ for each $1<p<\infty$.
The norms of these operators admit the estimates \begin{equation}\label{b1} \|{\mathcal S}\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+))}\leq c(n)\,p\,p',\qquad \|\,[{\mathcal S},b]\,\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+))} \leq c(n)\,p\,p'\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}, \end{equation} \noindent where $c(n)$ depends only on $n$ and the quantity in (\ref{kk-est}). The first estimate in (\ref{b1}) goes back to the work of A.\,Calder\'on and A.\,Zygmund (cf., e.g., {\bf\cite{CaZy}}, {\bf\cite{CaZy2}}; see also the comment on p.\,22 of {\bf\cite{St}} regarding the dependence on the parameter $p$ of the constants involved). The second estimate in (\ref{b1}) was originally proved for convolution type operators by R.\,Coifman, R.\,Rochberg and G.\,Weiss in {\bf\cite{CRW}} and a standard expansion in spherical harmonics allows one to extend this result to the case of operators with variable kernels of the type considered above. We are interested in extending (\ref{b1}) to the weighted case, i.e., when the measure $dx$ on ${\mathbb{R}}^n_+$ is replaced by $x_n^{ap}\,dx$, where $1<p<\infty$ and $a\in(-1/p,1-1/p)$. Parenthetically, we wish to point out that $a\in(-1/p,1-1/p)$ corresponds precisely to the range of $a$'s for which $w(x):=x_n^{ap}$ is a weight in Muckenhoupt's $A_p$ class, and that while in principle this observation can help with the goal just stated, we prefer to give a direct, self-contained proof. \begin{proposition}\label{Prop4} Retain the above conventions and hypotheses. Then the operator ${\mathcal S}$ and its commutator $[{\mathcal S},b]$ with a function $b\in{\rm BMO}(\mathbb{R}^n_+)$ are bounded when acting from $L_p({\mathbb{R}}^n_+,\,x_n^{ap}\,dx)$ into itself for each $p\in(1,\infty)$ and $a\in (-1/p, 1-1/p)$.
The norms of these operators satisfy \begin{eqnarray}\label{b2} &&\|{\mathcal S}\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))} \leq c(n)\Bigl(p\,p'+\frac{1}{s(1-s)}\Bigr), \\[6pt] &&\|\,[{\mathcal S},b]\,\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))} \leq c(n)\Bigl(p\,p'+\frac{1}{s(1-s)}\Bigr)\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}. \label{b2'} \end{eqnarray} \end{proposition} \noindent{\bf Proof.} Let $\chi_j$ be the characteristic function of the layer $2^{j/2}<x_n\leq 2^{1+j/2}$, $j=0,\pm 1,\ldots$, so that $\sum_{j\in{\mathbb{Z}}}\chi_j=2$. We then write ${\mathcal S}$ as the sum ${\mathcal S}_1+{\mathcal S}_2$, where \begin{equation}\label{CC-17} {\mathcal S}_1:=\frac{1}{4}\sum_{|j-k|\leq 3}\chi _j{\mathcal S}\chi_k. \end{equation} \noindent The following chain of inequalities is evident \begin{eqnarray}\label{CC-18} \|{\mathcal S}_1\, f\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)} & \leq & \Bigl(\sum_j\int_{{\mathbb{R}}^n_+}\chi_j(x)\, \Bigl|{\mathcal S}\Bigl(\sum_{|k-j|\leq 3}\chi_k f\Bigr)(x)\Bigr|^p\, x_n^{ap}\,dx\Bigr)^{1/p} \nonumber\\[6pt] & \leq & c(n)\Bigl(\sum_j\int_{\mathbb{R}^n_+} \Bigl|{\mathcal S}\Bigl(\sum_{|k-j|\leq 3} \chi_k 2^{ja/2} f\Bigr)(x)\Bigr|^p\,dx\Bigr)^{1/p}. \end{eqnarray} \noindent In concert with the first estimate in (\ref{b1}), this entails \begin{eqnarray}\label{CC-19} \|{\mathcal S}_1\,f\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)} & \leq & c(n)\,p\, p'\Bigl(\sum_j \int_{{\mathbb{R}}^n_+}\Bigl(\sum_{|k-j|\leq 3} \chi_k 2^{ja/2}|f|\Bigr)^p\,dx\Bigr)^{1/p} \nonumber\\[6pt] & \leq & c(n)\,p\,p'\Bigl(\int_{{\mathbb{R}}^n_+}|f(x)|^p\,x_n^{ap}\,dx\Bigr)^{1/p}, \end{eqnarray} \noindent which is further equivalent to \begin{equation}\label{b3} \|{\mathcal S}_1\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))} \leq c(n)\, p\, p'. 
\end{equation} \noindent Applying the same argument to $[{\mathcal S},b]$ and referring to (\ref{b1}), we arrive at \begin{equation}\label{b4} \|\,[{\mathcal S}_1,b]\,\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))} \leq c(n)\,p\,p'\,[b]_{{\rm BMO}({\mathbb{R}}^n_+)}. \end{equation} It remains to obtain the analogues of (\ref{b3}) and (\ref{b4}) with ${\mathcal S}_2$ in place of ${\mathcal S}_1$. One can check directly that the modulus of the kernel of ${\mathcal S}_2$ does not exceed $c(n)\,|x-\bar{y}|^{-n}$ and that the modulus of the kernel of $[{\mathcal S}_2,b]$ is majorized by $c(n)\,|b(x)-b(y)|\,|x-\bar{y}|^{-n}$. Then the desired conclusions follow from Corollary~\ref{Cor2} and Proposition~\ref{tp3-bis}. $\Box$ \vskip 0.08in \subsection{The Gagliardo extension operator} Here we shall revisit a certain operator $T$, extending functions defined on $\mathbb{R}^{n-1}$ into functions defined on $\mathbb{R}^n_+$, first introduced by Gagliardo in {\bf\cite{Ga}}. Fix a smooth, radial, decreasing, even, non-negative function $\zeta$ in $\mathbb{R}^{n-1}$ such that $\zeta(t)=0$ for $|t|\geq 1$ and \begin{equation}\label{zeta-int} \int\limits_{\mathbb{R}^{n-1}}\zeta(t)\,dt=1. \end{equation} \noindent (A standard choice is $\zeta(t):=c\,{\rm exp}\,(-1/(1-|t|^2)_+)$ for a suitable $c$.) Following {\bf\cite{Ga}} we then define \begin{equation}\label{10.1.20} (T\varphi)(x',x_n)=\int\limits_{\mathbb{R}^{n-1}}\zeta(t)\varphi(x'+x_nt)\,dt, \qquad(x',x_n)\in{\mathbb{R}}^n_+, \end{equation} \noindent acting on functions $\varphi$ from $L_1(\mathbb{R}^{n-1},loc)$.
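As a simple illustration of (\ref{10.1.20}), let us note that $T$ reproduces affine functions: since $\zeta$ is even with $\int_{\mathbb{R}^{n-1}}\zeta(t)\,dt=1$, we have $\int_{\mathbb{R}^{n-1}}\zeta(t)\,t\,dt=0$, so that for $\varphi(x')=c+\langle v,x'\rangle$ \begin{equation*} (T\varphi)(x',x_n)=\int\limits_{\mathbb{R}^{n-1}}\zeta(t) \bigl(c+\langle v,x'\rangle+x_n\langle v,t\rangle\bigr)\,dt =\varphi(x'),\qquad (x',x_n)\in{\mathbb{R}}^n_+. \end{equation*} \noindent The identity $\int_{\mathbb{R}^{n-1}}\zeta(t)\,t\,dt=0$ also justifies the subtraction of the mean value in (\ref{1.3}) below.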
To get started, we note that \begin{eqnarray}\label{10.1.29} \nabla_{x'}(T\varphi)(x',x_n) & = & \int\limits_{\mathbb{R}^{n-1}}\zeta(t)\nabla\varphi(x'+tx_n)\,dt, \\[6pt] {\partial\over\partial x_n}(T\varphi)(x',x_n) & = & \int\limits_{\mathbb{R}^{n-1}}\zeta(t)\,t\,\nabla\varphi(x'+t x_n)\,dt, \label{10.1.30} \end{eqnarray} \noindent and, hence, we have the estimate \begin{equation}\label{10.1.21} \|\nabla_x\,(T\varphi)\|_{L_\infty(\mathbb{R}^n_+)} \leq c\,\|\nabla_{x'}\,\varphi\|_{L_\infty(\mathbb{R}^{n-1})}. \end{equation} \noindent Refinements of (\ref{10.1.21}) are contained in Lemmas~\ref{lem7} and~\ref{lem8} below. \begin{lemma}\label{lem7} (i) For all $x\in\mathbb{R}^n_+$ and for all multi-indices $\alpha$ with $|\alpha|>1$, \begin{equation}\label{1.2} \Bigl|D^\alpha_{x}(T\varphi)(x)\Bigr| \leq c\,x_n ^{1-|\alpha|}[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}. \end{equation} (ii) For all $x=(x',x_n)\in\mathbb{R}^n_+$, \begin{equation}\label{Tfi} \Bigl|(T\varphi)(x)-\varphi(x')\Bigr| \leq c\,x_n[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}. \end{equation} \end{lemma} \noindent{\bf Proof.} Rewriting (\ref{10.1.30}) as \begin{equation}\label{1.3} {\partial\over\partial x_n}(T\varphi)(x',x_n) =x_n^{1-n}\int\limits_{\mathbb{R}^{n-1}}\zeta\Bigl(\frac{\xi-x'}{x_n}\Bigr) \frac{\xi-x'}{x_n}\Bigl(\nabla\varphi(\xi)-{\int{\mkern-19mu}-}_{\!\!\!|z-x'|<x_n} \nabla\varphi(z)dz\Bigr)d\xi \end{equation} \noindent (the mean value could be subtracted here since $\zeta$ is even, whence $\int_{\mathbb{R}^{n-1}}\zeta(t)\,t\,dt=0$) we obtain \begin{equation}\label{1.12} \Bigl|D^\gamma_{x}{\partial\over\partial x_n}(T\varphi)(x)\Bigr| \leq c\,x_n^{-|\gamma|}[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})} \end{equation} \noindent for every non-zero multi-index $\gamma$.
Furthermore, for $i=1,\ldots,n-1$, by (\ref{10.1.29}) \begin{equation}\label{TT1} \frac{\partial}{\partial x_i}\nabla_{x'}(T\varphi)(x) =x_n^{1-n}\int\limits_{\mathbb{R}^{n-1}}\partial_i\zeta \Bigl(\frac{\xi-x'}{x_n}\Bigr)\Bigl(\nabla\varphi(\xi) -{\int{\mkern-19mu}-}_{\!\!\!|z-x'|<x_n}\nabla\varphi(z)dz\Bigr)d\xi, \end{equation} \noindent where $\partial_i$ denotes differentiation with respect to the $i$-th component of the argument. Hence, once again, \begin{equation}\label{TT2} \Bigl|D^\gamma_x\frac{\partial}{\partial x_i}\nabla_{x'}(T\varphi)(x)\Bigr| \leq c\,x_n^{-|\gamma|-1}[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}, \end{equation} \noindent and the estimate claimed in (i) follows. Finally, (ii) is a simple consequence of the bound $|\partial(T\varphi)/\partial x_n|\leq c\,[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}$, which follows directly from (\ref{1.3}), and the fact that $(T\varphi)|_{{\mathbb{R}}^{n-1}}=\varphi$. $\Box$ \vskip 0.08in \noindent{\bf Remark.} In concert with Theorem~2 on pp.\,62-63 in {\bf\cite{St}}, formula (\ref{10.1.29}) yields the pointwise estimate \begin{equation}\label{HL-Max} |\nabla\,(T\varphi)(x)| \leq c\,{\mathcal M}(\nabla\varphi)(x'),\qquad x=(x',x_n)\in{\mathbb{R}}^{n}_+, \end{equation} \noindent where ${\mathcal M}$ is the classical Hardy-Littlewood maximal function (cf., e.g., Chapter~I in {\bf\cite{St}}). As for higher order derivatives, an inspection of the above proof reveals that \begin{equation}\label{sharp} \Bigl|D_x^\alpha(T\varphi)(x)\Bigr| \leq c\,x_n^{1-|\alpha|}(\nabla\varphi)^\#(x'),\qquad (x',x_n)\in{\mathbb{R}}^{n}_+, \end{equation} \noindent holds for each multi-index $\alpha$ with $|\alpha|>1$, where $(\cdot)^\#$ is the Fefferman-Stein sharp maximal function (cf. {\bf\cite{FS}}). \begin{lemma}\label{lem8} If $\nabla_{x'}\varphi\in{\rm BMO}(\mathbb{R}^{n-1})$ then $\nabla(T\varphi)\in{\rm BMO}(\mathbb{R}^n_+)$ and \begin{equation}\label{1.8} [\nabla(T\varphi)]_{{\rm BMO}(\mathbb{R}^n_+)} \leq c\,[\nabla_{x'}\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. 
\end{equation} \end{lemma} \noindent{\bf Proof.} Since $(T\varphi)(x',x_n)$ is even with respect to $x_n$, it suffices to estimate $[\nabla_x(T\varphi)]_{{\rm BMO}(\mathbb{R}^n)}$. Let $Q_r$ denote a cube with side-length $r$ centered at the point $\eta =(\eta',\eta_n)\in{\mathbb{R}}^{n-1}\times{\mathbb{R}}$. Also let $Q'_r$ be the projection of $Q_r$ onto $\mathbb{R}^{n-1}$. Clearly, \begin{equation}\label{xxx} \nabla_{x'}(T\varphi)(x',x_n)-\nabla _{x'}\varphi(x') =x_n^{1-n}\int\limits_{\mathbb{R}^{n-1}}\zeta\Bigl(\frac{\xi -x'}{x_n}\Bigr) (\nabla\varphi(\xi)-\nabla\varphi(x'))\,d\xi. \end{equation} Suppose that $|\eta_n|<2r$ and write \begin{eqnarray}\label{1.10} \int_{Q_r}\Bigl|\nabla_{x'}(T\varphi)(x',x_n)-\nabla_{x'}\varphi(x')\Bigr|\,dx & \leq & c\,r^{2-n}\int_{Q'_{4r}} \int_{Q'_{4r}}|\nabla\varphi(\xi)-\nabla\varphi(z)|\,dz\,d\xi \nonumber\\[6pt] & \leq & c\,r^n[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{eqnarray} Therefore, for $|\eta_n|<2r$, \begin{eqnarray}\label{1.18} {\int{\mkern-19mu}-}_{\!\!\!Q_r}{\int{\mkern-19mu}-}_{\!\!\!Q_r}|\nabla_{x'}T\varphi(x)-\nabla_{y'} T\varphi(y)|\,dxdy & \leq & 2{\int{\mkern-19mu}-}_{\!\!\!Q_r}|\nabla_{x'}T\varphi(x)-\nabla\varphi(x')|\,dx \nonumber\\[6pt] &&+{\int{\mkern-19mu}-}_{\!\!\!Q'_r}{\int{\mkern-19mu}-}_{\!\!\!Q'_r} |\nabla\varphi(x')-\nabla\varphi(y')|\,dx'dy' \nonumber\\[6pt] & \leq & c\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{eqnarray} Next, consider the case when $|\eta_n|\geq 2r$ and let $x$ and $y$ be arbitrary points in $Q_r(\eta)$. 
Then, using the generic abbreviation $\bar{f}_E:={\displaystyle{{\int{\mkern-19mu}-}_{\!\!\!E}f}}$, we may write \begin{eqnarray}\label{arTT} |\nabla_{x'}T\varphi(x)-\nabla_{y'}T\varphi(y)| & \leq & \int\limits_{\mathbb{R}^{n-1}}\Bigl|x_n^{1-n}\zeta\Bigl(\frac{\xi-x'}{x_n} \Bigr)-y_n^{1-n}\zeta\Bigl(\frac{\xi-y'}{y_n}\Bigr) \Bigr|\Bigl|\nabla\varphi(\xi)- {\overline{\nabla\varphi}}_{Q'_{2|\eta_n|}} \Bigr|\,d\xi \nonumber\\[6pt] & \leq & \frac{c\,r}{|\eta_n|^n}\int_{Q'_{2|\eta_n|}} \Bigl|\nabla\varphi(\xi) -{\overline{\nabla\varphi}}_{Q'_{2|\eta_n|}}\Bigr|\,d\xi \nonumber\\[6pt] & \leq & c\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{eqnarray} \noindent Consequently, for $|\eta_n|\geq 2r$, \begin{equation}\label{QrT3} {\int{\mkern-19mu}-}_{\!\!\!Q_r}{\int{\mkern-19mu}-}_{\!\!\!Q_r} |\nabla_{x'}T\varphi(x)-\nabla_{y'}T\varphi(y)|\,dxdy \leq c\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} \end{equation} \noindent which, together with (\ref{1.18}), gives \begin{equation}\label{QrT4} [\nabla_{x'}T\varphi]_{{\rm BMO}(\mathbb{R}^{n})} \leq c[\nabla\varphi]_{\rm BMO(\mathbb{R}^{n-1})}. \end{equation} \noindent This inequality and (\ref{1.12}), where $|\gamma|=0$, imply (\ref{1.8}). $\Box$ \vskip 0.08in \section{The Dirichlet problem in $\mathbb{R}^n_+$ for variable coefficient systems} \setcounter{equation}{0} \subsection{Preliminaries} For \begin{equation}\label{indices} 1<p<\infty,\quad -\frac{1}{p}<a<1-\frac{1}{p}\quad \mbox{and}\quad m\in{\mathbb{N}}, \end{equation} \noindent we let $V^{m,a}_p(\mathbb{R}^n_+)$ denote the weighted Sobolev space associated with the norm \begin{equation}\label{defVVV} \|u\|_{V_p^{m,a}(\mathbb{R}^n_+)}:=\Bigl(\sum_{0\leq|\beta|\leq m} \int_{\mathbb{R}^n_+}|x_n^{|\beta|-m} D^\beta u(x)|^p\,x_n^{pa}\,dx \Bigr)^{1/p}. \end{equation} \noindent It is easily proved that $C_0^\infty(\mathbb{R}^n_+)$ is dense in $V^{m,a}_p(\mathbb{R}^n_+)$. 
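To illustrate the role of the restrictions (\ref{indices}), consider the model case $m=1$, $n=1$ of the norm (\ref{defVVV}): for $u\in C_0^\infty(0,\infty)$, writing $u(t)=\int_0^t u'(\tau)\,d\tau$ and applying the classical one-dimensional Hardy inequality with the weight $t^{pa}$ gives \[ \int_0^\infty |u(t)|^p\,t^{p(a-1)}\,dt \leq\Bigl(\frac{1}{1-a-1/p}\Bigr)^{p} \int_0^\infty |u'(t)|^p\,t^{pa}\,dt, \] which requires $a<1-1/p$ and whose constant blows up as $a\uparrow 1-1/p$; this is the source of the factor $s^{-1}$, $s:=1-a-1/p$, appearing below.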
Moreover, by the one-dimensional Hardy inequality (see, for instance, {\bf\cite{Maz1}}, formula (1.3/1)), we have \begin{equation}\label{4.60} \|u\|_{V_p^{m,a}(\mathbb{R}^n_+)}\leq c\,s^{-1} \Bigl(\sum_{|\beta|=m}\int_{\mathbb{R}^n_+} |D^\beta u(x)|^p\,x_n^{pa}\,dx\Bigr)^{1/p} \,\,\,\,\mbox{ for }\,\,u\in C_0^\infty(\mathbb{R}^n_+), \end{equation} \noindent where here and elsewhere $s:=1-a-1/p\in(0,1)$. The dual of $V^{m,-a}_{p'}(\mathbb{R}^n_+)$ will be denoted by $V^{-m,a}_p(\mathbb{R}^n_+)$, where $1/p+1/p'=1$. Consider now the operator \begin{equation}\label{LxDu} {L}(x,D_x)u:=\sum_{0\leq|\alpha|,|\beta|\leq m} D^\alpha_x({A}_{\alpha\beta}(x)\,x_n^{|\alpha|+|\beta|-2m} D_x^\beta\,u) \end{equation} \noindent where ${A}_{\alpha\beta}$ are ${\mathbb{C}}^{l\times l}$-valued functions in $L_\infty(\mathbb{R}^n_+)$. We shall use the notation $\mathaccent"0017 {L}(x,D_x)$ for the principal part of ${L}(x,D_x)$, i.e. \begin{equation}\label{Lcirc} \mathaccent"0017 {L}(x,D_x)u:=\sum_{|\alpha|=|\beta|=m} D^\alpha_x({A}_{\alpha\beta}(x)\,D_x^\beta\,u). \end{equation} \subsection{Solvability and regularity result} \begin{lemma}\label{lem5} Assume that there exists $\kappa={\rm const}>0$ such that the coercivity condition \begin{equation}\label{B5} \Re\int_{\mathbb{R}^n_+}\sum_{|\alpha|=|\beta|=m} \langle A_{\alpha\beta}(x)\,D^\beta u(x),\,D^\alpha u(x) \rangle_{\mathbb{C}^l}\,dx \geq \kappa\sum_{|\gamma|=m}\|D^\gamma\,u\|^2_{L_2(\mathbb{R}^n_+)}, \end{equation} \noindent holds for all $u\in C^\infty_0(\mathbb{R}^n_+)$, and that \begin{equation}\label{B6} \sum_{|\alpha|=|\beta|= m}\|A_{\alpha\beta}\|_{L_\infty(\mathbb{R}^n_+)} \leq \kappa^{-1}. 
\end{equation} {\rm (i)} Let $p\in (1,\infty)$ and $-1/p<a< 1-1/p$ and suppose that \begin{equation}\label{E7} \frac{1}{s(1-s)}\sum_{{|\alpha|+|\beta|<2m}\atop{0\leq |\alpha|,|\beta|\leq m}} \|A_{\alpha\beta}\|_{{L_\infty}(\mathbb{R}^n_+)} +\sum_{|\alpha|=|\beta|=m}[A_{\alpha\beta}]_{{\rm BMO}(\mathbb{R}^n_+)} \leq \delta, \end{equation} \noindent where $\delta$ satisfies \begin{equation}\label{E8} \Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\,{\delta}<c(n,m,\kappa) \end{equation} \noindent with a sufficiently small constant $c(n,m,\kappa)$ and $s=1-a-1/p$. Then the operator \begin{equation}\label{Liso} L=L(x,D_x):V_p^{m,a}(\mathbb{R}^n_+)\longrightarrow V_p^{-m,a}(\mathbb{R}^n_+) \end{equation} \noindent is an isomorphism. {\rm (ii)} Let $p_i\in (1,\infty)$ and $-1/p_i<a_i<1-1/p_i$, where $i=1,2$. Suppose that (\ref{E8}) holds with $p_i$ and $s_i=1-a_i-1/p_i$ in place of $p$ and $s$. If $u\in V_{p_1}^{m,a_1}(\mathbb{R}^n_+)$ is such that ${L}u\in V_{p_1}^{-m,a_1}(\mathbb{R}^n_+) \cap V_{p_2}^{-m,a_2}(\mathbb{R}^n_+)$, then $u\in V_{p_2}^{m,a_2}(\mathbb{R}^n_+)$. \end{lemma} \noindent{\bf Proof.} The fact that the operator (\ref{Liso}) is continuous is obvious. Also, the existence of a bounded inverse ${L}^{-1}$ for $p=2$ and $a=0$ follows from (\ref{B5}) and (\ref{E7})-(\ref{E8}) with $p=2$, $a=0$, which allow us to implement the Lax-Milgram lemma. We shall use the notation $\mathaccent"0017 {L}_y$ for the operator $\mathaccent"0017 {L}(y,D_x)$, corresponding to (\ref{Lcirc}) in which the coefficients have been frozen at $y\in\mathbb{R}^n_+$, and the notation $G_y$ for the solution operator for the Dirichlet problem for $\mathaccent"0017 {L}_y$ in ${\mathbb{R}}^n_+$ with homogeneous boundary conditions. 
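To spell out the coercivity behind the Lax-Milgram step just mentioned: for $u\in C_0^\infty(\mathbb{R}^n_+)$, the lower-order terms of $L$ are controlled, via the Cauchy-Schwarz inequality and (\ref{4.60}), by $\|u\|^2_{V_2^{m,0}(\mathbb{R}^n_+)}\sum\|A_{\alpha\beta}\|_{L_\infty(\mathbb{R}^n_+)}$, whence (\ref{B5}) and (\ref{E7}) with $p=2$, $a=0$ (so $s=1/2$) yield \[ \Re\int_{\mathbb{R}^n_+}\sum_{0\leq|\alpha|,|\beta|\leq m} \langle A_{\alpha\beta}(x)\,x_n^{|\alpha|+|\beta|-2m}D^\beta u(x),\, D^\alpha u(x)\rangle_{\mathbb{C}^l}\,dx \geq\bigl(\kappa-c(n,m)\,\delta\bigr) \sum_{|\gamma|=m}\|D^\gamma u\|^2_{L_2(\mathbb{R}^n_+)}. \] Together with the boundedness of the form, guaranteed by (\ref{B6}) and (\ref{E7}), this makes the Lax-Milgram lemma applicable once $\delta<\kappa/c(n,m)$.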
Next, given $u\in V_p^{m,a}(\mathbb{R}^n_+)$, set $f:=Lu\in V_p^{-m,a}(\mathbb{R}^n_+)$ so that \begin{equation}\label{E10} \left\{ \begin{array}{l} {L}(x,D)u=f\qquad\qquad\mbox{in}\,\,\,\mathbb{R}^n_+,\\[6pt] \displaystyle{\frac{\partial^j\,u}{\partial x_n^j}(x',0)=0} \qquad\qquad {\rm on}\,\,\mathbb{R}^{n-1},\,\,0\leq j\leq m-1. \end{array} \right. \end{equation} \noindent Applying the trick used for the first time in {\bf\cite{CFL1}}, we may write \begin{equation}\label{E12} u(x)=(G_y f)(x)-(G_{y}(\mathaccent"0017 {L}-\mathaccent"0017 {L}_y)u)(x)-(G_{y}({L}-\mathaccent"0017 {L})u)(x), \qquad x\in{\mathbb{R}}^n_+. \end{equation} \noindent We desire to use (\ref{E12}) in order to express $u$ in terms of $f$ (cf. (\ref{IntRepFor})-(\ref{SSS}) below) via integral operators whose norms we can control. First, we claim that whenever $|\gamma|=m$, the norm of the operator \begin{equation}\label{ItalianTrick} V_p^{m,a}(\mathbb{R}^n_+)\ni u \mapsto D_x^\gamma(G_y(\mathaccent"0017 {L}-\mathaccent"0017 {L}_y)u)(x)\Bigl|_{x=y}\, \in L_p(\mathbb{R}^n_+,y_n^{ap}\,dy) \end{equation} \noindent does not exceed \begin{equation}\label{smallCT} C\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\sum_{|\alpha|=|\beta|=m} [A_{\alpha\beta}]_{{\rm BMO}(\mathbb{R}^n_+)}. \end{equation} \noindent Given the hypotheses under which we operate, the expression (\ref{smallCT}) is therefore small if $\delta$ is small. In what follows, we denote by $G_y(x,z)$ the integral kernel of $G_y$ and integrate by parts in order to move derivatives of the form $D_z^\alpha$ with $|\alpha|=m$ from $(\mathaccent"0017 {L}-\mathaccent"0017 {L}_y)u$ onto $G_y(x,z)$ (the absence of boundary terms is due to the fact that $G_y(x,\cdot)$ satisfies homogeneous Dirichlet boundary conditions). That (\ref{smallCT}) bounds the norm of (\ref{ItalianTrick}) can now be seen by combining Theorem~\ref{th1} with (\ref{CC-71}) and Proposition~\ref{Prop4}. 
Let $\gamma$ and $\alpha$ be multi-indices with $|\gamma|=m$, $|\alpha|\leq m$ and consider the assignment \begin{equation}\label{oppsi} \begin{array}{l} \displaystyle C_0^\infty({\mathbb{R}}^n_+)\ni\Psi\mapsto\Bigl(D^\gamma_x\int_{{\mathbb{R}}^n_+} G_y(x,z) D^\alpha_z\frac{\Psi(z)}{z_n^{m-|\alpha|}}\,dz\Bigr)\Bigl|_{x=y}. \end{array} \end{equation} \noindent After integrating by parts, the action of this operator can be rewritten in the form \begin{equation}\label{DgammaInt} \Bigl(D^\gamma_x\int_{{\mathbb{R}}^n_+}\Bigl[ \Bigl(\frac{-1}{i}\frac{\partial}{\partial z_n}\Bigr)^{m-|\alpha|} (-D_z)^\alpha G_y(x,z)\Bigr]\Gamma_\alpha(z)\,dz\Bigr)\Bigl|_{x=y}, \end{equation} \noindent where \begin{equation}\label{Gamma-def} \Gamma_\alpha(z):= \left\{ \begin{array}{l} \Psi(z),\qquad\mbox{if }\,\,\,|\alpha|=m, \\[10pt] {\displaystyle{\frac{(-1)^{m-|\alpha|}}{(m-|\alpha|-1)!} \int_{z_n}^\infty (t-z_n)^{m-|\alpha|-1}\frac{\Psi(z',t)}{t^{m-|\alpha|}}\,dt, \qquad\mbox{if }\,\,\,|\alpha|<m}}. \end{array} \right. \end{equation} \noindent Using Theorem~\ref{th1} along with (\ref{CC-71}) and Proposition~\ref{Prop4}, we may therefore conclude that \begin{eqnarray}\label{DZGamma} && \Bigl\|\Bigl(D^\gamma_x\int_{{\mathbb{R}}^n_+} \Bigl[\Bigl(\frac{-1}{i}\frac{\partial}{\partial z_n}\Bigr)^{m-|\alpha|} (-D_z)^\alpha G_y(x,z)\Bigr]\Gamma_\alpha(z)\,dz\Bigr)\Bigl|_{x=y} \Bigr\|_{L_p(\mathbb{R}^n_+,\,y_n^{ap}\,dy)} \nonumber\\[6pt] &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad \leq C\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr) \|\Gamma_\alpha\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)}. 
\end{eqnarray} \noindent On the other hand, Hardy's inequality gives \begin{equation}\label{Cs-1} \|\Gamma_\alpha\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)} \leq\frac{C}{1-s}\,\|\Psi\|_{L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)} \end{equation} \noindent and, hence, the operator (\ref{oppsi}) can be extended from $C_0^\infty(\mathbb{R}^n_+)$ as a bounded operator in $L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)$ and its norm is majorized by \begin{equation}\label{Bigpp} \frac{C}{1-s}\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr). \end{equation} Next, given an arbitrary $u\in V_p^{m,a}(\mathbb{R}^n_+)$, we let $\Psi=\Psi_{\alpha\beta}$ in (\ref{oppsi}) with \begin{equation}\label{Pizz} \Psi_{\alpha\beta}(z):=z_n^{|\beta|-m}\,A_{\alpha\beta}\,D^\beta u(z), \qquad |\alpha|+|\beta|<2m, \end{equation} \noindent and conclude that the norm of the operator \begin{equation}\label{altOp} V_p^{m,a}(\mathbb{R}^n_+)\ni u\mapsto D_x^\gamma\,(G_y\,({L}-\mathaccent"0017 {L})u)(x)\Bigl|_{x=y} \in L_p(\mathbb{R}^n_+,y_n^{ap}\,dy) \end{equation} \noindent does not exceed \begin{equation}\label{notExceed} \frac{C}{1-s}\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\sum_{{|\alpha|+|\beta|<2m} \atop{0\leq|\alpha|,|\beta|\leq m}} \|A_{\alpha\beta}\|_{{L_\infty}(\mathbb{R}^n_+)}. \end{equation} It is well-known (cf. (1.1.10/6) on p.\,22 of {\bf\cite{Maz1}}) that any $u\in V^{m,a}_p({\mathbb{R}}^n_+)$ can be represented in the form \begin{equation}\label{IntUU} u=K\{D^\sigma u\}_{|\sigma|=m} \end{equation} \noindent where $K$ is a linear operator with the property that \begin{equation}\label{K-map} D^\alpha K:L_p(\mathbb{R}^n_+,x_n^{ap}\,dx)\longrightarrow L_p(\mathbb{R}^n_+,x_n^{ap}\,dx) \end{equation} \noindent is bounded for every multi-index $\alpha$ with $|\alpha|=m$. In particular, by (\ref{4.60}), \begin{equation}\label{KDu} \|K\{D^\sigma u\}_{|\sigma|=m}\|_{V_p^{m,a}(\mathbb{R}^n_+)}\leq C\,s^{-1} \|\{D^\sigma u\}_{|\sigma|=m}\|_{L_p(\mathbb{R}^n_+,x_n^{ap}\,dx)}. 
\end{equation} At this stage, we transform the identity (\ref{E12}) in the following fashion. First, we express the two $u$'s occurring inside the Green operator $G_y$ in the right-hand side of (\ref{E12}) as in (\ref{IntUU}). Second, for each multi-index $\gamma$ with $|\gamma|=m$, we apply $D^\gamma$ to both sides of (\ref{E12}) and, finally, set $x=y$. The resulting identity reads \begin{equation}\label{IntRepFor} \{D^\gamma u\}_{|\gamma|=m}+S\{D^\sigma u\}_{|\sigma|=m}=Q\,f \end{equation} \noindent where $Q$ is a bounded operator from $V_p^{-m,a}(\mathbb{R}^n_+)$ into $L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)$ and $S$ is a linear operator mapping $L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx)$ into itself. Furthermore, on account of (\ref{ItalianTrick})-(\ref{smallCT}), (\ref{altOp})-(\ref{notExceed}) and (\ref{KDu}), we can bound $\|S\|_{{\mathfrak L}(L_p(\mathbb{R}^n_+,\,x_n^{ap}\,dx))}$ by \begin{equation}\label{SSS} C\,\Bigl(pp'+\frac{1}{s(1-s)}\Bigr) \Bigl(\sum_{|\alpha|=|\beta|=m} [A_{\alpha\beta}]_{{\rm BMO}(\mathbb{R}^n_+)} +\frac{1}{s(1-s)} \sum_{{|\alpha|+|\beta|<2m}\atop{0\leq |\alpha|,|\beta|\leq m}} \|A_{\alpha\beta}\|_{{L_\infty}(\mathbb{R}^n_+)}\Bigr). \end{equation} Owing to (\ref{E7})-(\ref{E8}) and with the integral representation formula (\ref{IntRepFor}) and the bound (\ref{SSS}) in hand, a Neumann series argument and standard functional analysis allow us to simultaneously settle the claims (i) and (ii) in the statement of the lemma. $\Box$ \vskip 0.08in \section{The Dirichlet problem in a special Lipschitz domain} \setcounter{equation}{0} In this section as well as in subsequent ones, we shall work with an unbounded domain of the form \begin{equation}\label{10.1.26} G=\{X=(X',X_n)\in{\mathbb{R}}^n:\,X'\in\mathbb{R}^{n-1},\,\,X_n>\varphi(X')\}, \end{equation} \noindent where $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ is a Lipschitz function. 
\subsection{The space ${\rm BMO}(G)$} The space of functions of bounded mean oscillations in $G$ can be introduced in a similar fashion to the case $G={\mathbb{R}}^n_+$. Specifically, a locally integrable function on $G$ belongs to the space ${\rm BMO}(G)$ if \begin{equation}\label{f-BMO} [f]_{{\rm BMO}(G)}:=\sup\limits_{\{B\}}{\int{\mkern-19mu}-}_{\!\!\!B\cap G} \Bigl|f(X)-{\int{\mkern-19mu}-}_{\!\!\!B\cap G}f(Y)\,dY\Bigr|\,dX<\infty, \end{equation} \noindent where the supremum is taken over all balls $B$ centered at points in ${\bar G}$. Much as before, \begin{equation}\label{10.26} [f]_{{\rm BMO}(G)}\sim\sup\limits_{\{B\}}{\int{\mkern-19mu}-}_{\!\!\!B\cap G} {\int{\mkern-19mu}-}_{\!\!\!B\cap G}\Bigl|f(X)-f(Y)\Bigr|\,dXdY. \end{equation} \noindent This implies the equivalence relation \begin{equation}\label{1.32} [f]_{{\rm BMO}(G)}\sim [f\circ\lambda]_{{\rm BMO}(\mathbb{R}^n_+)} \end{equation} \noindent for each bi-Lipschitz diffeomorphism $\lambda$ of $\mathbb{R}^n_+$ onto $G$. As direct consequences of definitions, we also have \begin{eqnarray}\label{1.33} [\prod_{1\leq j\leq N} f_j]_{{\rm BMO}(G)} & \leq & c\,\|f\|^{N-1}_{L_\infty(G)} [f]_{{\rm BMO}(G)},\quad\mbox{where}\,\,f=(f_1,\ldots, f_N), \\[6pt] [f^{-1}]_{{\rm BMO}(G)} & \leq & c\,\|f^{-1}\|^2_{L_\infty(G)}[f]_{{\rm BMO}(G)}. \label{1.34} \end{eqnarray} \subsection{A bi-Lipschitz map $\lambda:\mathbb{R}^n_+\to G$ and its inverse} Let $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ be the Lipschitz function whose graph is $\partial G$ and set $M:=\|\nabla\varphi\|_{L_\infty({\mathbb{R}}^{n-1})}$. 
Next, let $T$ be the extension operator defined as in (\ref{10.1.20}) and, for a fixed, sufficiently large constant $C>0$, consider the Lipschitz mapping \begin{equation}\label{lambda} \lambda:\,\mathbb{R}^n_+\ni(x',x_n)\mapsto (X',X_n)\in\,G \end{equation} \noindent defined by the equalities \begin{equation}\label{10.1.27} X':=x',\qquad X_n:=C\,M\,x_n+(T\varphi)(x',x_n) \end{equation} \noindent (see {\bf\cite{MS}}, \S{6.5.1} and an earlier, less accessible, reference {\bf\cite{MS1}}). The Jacobi matrix of $\lambda$ is given by \begin{equation}\label{matrix} \lambda'= \left( \begin{array}{cc} I & 0 \\ \nabla_{x'}(T\varphi) & CM+\partial (T\varphi)/\partial x_n \end{array} \right) \end{equation} \noindent where $I$ is the identity $(n-1)\times(n-1)$-matrix. Since $\vert\partial(T\varphi)/\partial x_n\vert\leq cM$ by (\ref{10.1.30}), it follows that ${\rm det}\,\lambda'>(C-c)\,M>0$. Next, thanks to (\ref{Tfi}) and (\ref{lambda})-(\ref{10.1.27}) we have \begin{equation}\label{eq-phi} X_n-\varphi(X')\sim x_n. \end{equation} \noindent Also, based on (\ref{1.8}) we may write \begin{equation}\label{1.35} [\lambda']_{{\rm BMO}(\mathbb{R}^n_+)} \leq c[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} \end{equation} \noindent and further, by (\ref{10.1.21}) and (\ref{1.2}), \begin{equation}\label{1.4} \|D^\alpha \lambda'(x)\|_{\mathbb{R}^{n\times n}} \leq c(M)\,x_n^{-|\alpha|}[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}, \qquad\forall\,\alpha\,:\,|\alpha|\geq 1. \end{equation} Next, by closely mimicking the proof of Proposition~2.6 from {\bf\cite{MS2}} it is possible to show the existence of the inverse Lipschitz mapping $\varkappa:=\lambda^{-1}:G\to{\mathbb{R}}^n_+$. Owing to (\ref{1.32}), the inequality (\ref{1.35}) implies \begin{equation}\label{1.40} [\lambda'\circ\varkappa]_{{\rm BMO}(G)} \leq c[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. 
\end{equation} \noindent Furthermore, (\ref{1.4}) is equivalent to \begin{equation}\label{1.41} \|(D^\alpha\lambda')(\varkappa(X))\|_{\mathbb{R}^{n\times n}} \leq c(M,\alpha)(X_n-\varphi(X'))^{-|\alpha|} [\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}, \end{equation} \noindent whenever $|\alpha|>0$. Since $\varkappa'=(\lambda'\circ\varkappa)^{-1}$, we obtain from (\ref{1.34}) and (\ref{1.40}) \begin{equation}\label{1.36} [\varkappa']_{{\rm BMO}(G)}\leq c[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{equation} \noindent On the other hand, using $\varkappa'=(\lambda'\circ\varkappa)^{-1}$ and (\ref{1.41}) one can prove by induction on the order of differentiation that \begin{equation}\label{1.5} \|D^\alpha\varkappa'(X)\|_{\mathbb{R}^{n\times n}} \leq c(M,\alpha)\,(X_n-\varphi(X'))^{-|\alpha|} [\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} \end{equation} \noindent for all $X\in G$ if $|\alpha|>0$. \subsection{The space $V_p^{m,a}(G)$} Analogously to $V_p^{m,a}(\mathbb{R}^n_+)$, we define the weighted Sobolev space $V_p^{m,a}(G)$ naturally associated with the norm \begin{equation}\label{VVV-space} \|{\mathcal U}\|_{V_p^{m,a}(G)}:=\Bigl(\sum_{0\leq|\gamma|\leq m}\int_{G} |(X_n-\varphi(X'))^{|\gamma|-m}D^\gamma{\mathcal U}(X)|^p \,(X_n-\varphi(X'))^{pa}\,dX\Bigr)^{1/p}. \end{equation} \noindent Replacing the function $X_n-\varphi(X')$ by either $\rho(X):={\rm dist}\,(X,\partial G)$, or by the so-called regularized distance function $\rho_{\rm reg}(X)$ (defined as on pp.\,170-171 of {\bf\cite{St}}), yields equivalent norms on $V_p^{m,a}(G)$. Based on a standard localization argument involving a cut-off function vanishing near $\partial G$ (for example, take $\eta(\rho_{\rm reg}/\varepsilon)$ where $\eta\in C^\infty({\mathbb{R}})$ satisfies $\eta(t)=0$ for $|t|<1$ and $\eta(t)=1$ for $|t|>2$), one can show that $C^\infty_0(G)$ is dense in $V_p^{m,a}(G)$. 
Next, we observe that for each ${\mathcal U}\in C^\infty_0(G)$, \begin{equation}\label{equivNr} C\,s\,\|{\mathcal U}\|_{V_p^{m,a}(G)}\leq \Bigl(\sum_{|\gamma|=m}\int_G |D^\gamma {\mathcal U}(X)|^p\,(X_n-\varphi(X'))^{pa}\,dX\Bigr)^{1/p} \leq\,\|{\mathcal U}\|_{V_p^{m,a}(G)} \end{equation} \noindent where, as before, $s=1-a-1/p$. Indeed, for each multi-index $\gamma$ with $0\leq|\gamma|\leq m$, the one-dimensional Hardy's inequality gives \begin{eqnarray}\label{Hardy} && \int_{G}|(X_n-\varphi(X'))^{|\gamma|-m}D^\gamma{\mathcal U}(X)|^p\, (X_n-\varphi(X'))^{pa}\,dX \nonumber\\[6pt] &&\qquad\qquad\qquad\qquad \leq\bigl({C}/{s}\bigr)^p \sum_{|\alpha|=m}\int_G|D^\alpha{\mathcal U}(X)|^p\,(X_n-\varphi(X'))^{pa}\,dX, \end{eqnarray} \noindent and the first inequality in (\ref{equivNr}) follows readily from it. Also, the second inequality in (\ref{equivNr}) is a trivial consequence of (\ref{VVV-space}). Going further, we aim to establish that \begin{equation}\label{equivNr-bis} c_1\,\|{u}\|_{V_p^{m,a}(\mathbb{R}^n_+)} \leq\|{u}\circ\varkappa\|_{V_p^{m,a}(G)} \leq c_2\,\|{u}\|_{V_p^{m,a}(\mathbb{R}^n_+)}, \end{equation} \noindent where $c_1$ and $c_2$ do not depend on $p$ and $s$, whereas $\varkappa:G\longrightarrow\mathbb{R}^n_+$ is the map introduced in \S{5.2}. Clearly, it suffices to prove the upper estimate for $\|{u}\circ\varkappa\|_{V_p^{m,a}(G)}$ in (\ref{equivNr-bis}). As a preliminary matter, we remark that \begin{eqnarray}\label{1.49} D^\gamma\bigl({u}(\varkappa(X))\bigr) & = & \bigl((\varkappa'^*(X)\xi)_{\xi=D}^\gamma\,{u}\bigr)(\varkappa(X)) \nonumber\\[6pt] && +\sum_{1\leq|\tau|<|\gamma|} (D^\tau{u})(\varkappa(X))\sum_\sigma\,c_\sigma\prod_{i=1}^n\prod_j D^{\sigma_{ij}}\varkappa_i(X), \end{eqnarray} \noindent where \begin{equation}\label{1.50} \sum_{i,j}\sigma_{ij}=\gamma,\quad |\sigma_{ij}|\geq 1,\quad \sum_{i,j}(|\sigma_{ij}|-1)=|\gamma|-|\tau|. 
\end{equation} \noindent In turn, (\ref{1.49})-(\ref{1.50}) and (\ref{1.5}) allow us to conclude that \begin{equation}\label{DUU} |D^\gamma\bigl({u}(\varkappa(X))\bigr)|\leq c\sum_{1\leq |\tau|\leq |\gamma|} x_n^{|\tau|-|\gamma|}\,|D^\tau{u}(x)|, \end{equation} \noindent which, in view of (\ref{eq-phi}), yields the desired conclusion. Finally, we set \begin{equation}\label{dual-VG} V^{-m,a}_p(G):=\Bigl(V^{m,-a}_{p'}(G)\Bigr)^*. \end{equation} \noindent where, as usual, $p'=p/(p-1)$. \subsection{Solvability and regularity result for the Dirichlet problem in the domain $G$} Let us consider the differential operator \begin{equation}\label{E4} {\mathcal L}\,{\mathcal U} ={\mathcal L}(X,D_X)\,{\mathcal U}=\sum_{|\alpha|=|\beta|=m} D^\alpha(\mathfrak{A}_{\alpha\beta}(X)\,D^\beta{\mathcal U}),\qquad X\in G, \end{equation} \noindent whose matrix-valued coefficients satisfy \begin{equation}\label{E4a} \sum_{|\alpha|=|\beta|=m}\|\mathfrak{A}_{\alpha\beta}\|_{L_\infty(G)} \leq \kappa^{-1}. \end{equation} \noindent This operator generates the sesquilinear form ${\mathcal L}(\cdot,\cdot):V_p^{m,a}(G)\times V_{p'}^{m,-a}(G)\to{\mathbb{C}}$ where $p'$ is the conjugate exponent of $p$, defined by \begin{equation}\label{LUV} {\mathcal L}({\mathcal U},{\mathcal V}):=\sum_{|\alpha|=|\beta|=m} \int_G\langle\mathfrak{A}_{\alpha\beta}(X)\, D^\beta{\mathcal U}(X),\,D^\alpha{\mathcal V}(X)\rangle\,dX. \end{equation} \noindent We assume that the inequality \begin{equation}\label{B25} \Re\,{\mathcal L}({\mathcal U},{\mathcal U}) \geq \kappa\sum_{|\gamma|=m}\|D^\gamma\,{\mathcal U}\|^2_{L_2(G)} \end{equation} \noindent holds for all ${\mathcal U}\in V_2^{m,0}(G)$. \begin{lemma}\label{lem5a} {\rm (i)} Let $p\in (1,\infty)$, $-1/p<a<1-1/p$ and $s:=1-a-1/p$. 
Suppose that \begin{equation}\label{E7a} [\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} +\sum_{|\alpha|=|\beta|=m}[{\mathfrak A}_{\alpha\beta}]_{{\rm BMO}(G)} \leq \delta, \end{equation} \noindent where $\delta$ satisfies \begin{equation}\label{E8b} \Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\,\frac{\delta}{s(1-s)} <C(n,m,\kappa,\|\nabla\varphi\|_{L_\infty(\mathbb{R}^{n-1})}) \end{equation} \noindent with a sufficiently small constant $C$, independent of $p$ and $s$. In the case $m=1$ the factor $\delta/(s(1-s))$ in {\rm (\ref{E8b})} can be replaced by $\delta$. Then the operator \begin{equation}\label{Liso-2} {\mathcal L}(X,D_X):V_p^{m,a}(G)\longrightarrow V_p^{-m,a}(G) \end{equation} \noindent is an isomorphism. {\rm (ii)} Let $p_i\in (1,\infty)$ and $-1/p_i<a_i<1-1/p_i$, where $i=1,2$. Suppose that {\rm (\ref{E8b})} holds with $p_i$ and $s_i=1-a_i-1/p_i$ in place of $p$ and $s$. If ${\mathcal U}\in V_{p_1}^{m,a_1}(G)$ and ${\mathcal L}\,{\mathcal U}\in V_{p_1}^{-m,a_1}(G)\cap V_{p_2}^{-m,a_2}(G)$, then ${\mathcal U}\in V_{p_2}^{m,a_2}(G)$. \end{lemma} \noindent{\bf Proof.} We shall extensively use the flattening mapping $\lambda$ and its inverse studied in \S{5.2}. The assertions (i) and (ii) will follow directly from Lemma~\ref{lem5} as soon as we show that the operator $L$ defined in $\mathbb{R}^n_+$ by \begin{equation}\label{E20} L({\mathcal U}\circ\lambda):=({\mathcal L}\,{\mathcal U})\circ\lambda \end{equation} \noindent satisfies all the hypotheses in that lemma. The sesquilinear form corresponding to the operator $L$ will be denoted by $L(u,v)$. 
Set $u(x):={\mathcal U}(\lambda(x))$, $v(x):={\mathcal V}(\lambda(x))$ and note that the identity (\ref{1.49}) implies \begin{equation}\label{E40} D^\beta {\mathcal U}(X)=\bigl((\varkappa'^*(\lambda(x))\xi)_{\xi=D}^\beta\,{u} \bigr)(x)+\sum_{1\leq|\tau|<|\beta|}K_{\beta\tau}(x)\,x_n^{|\tau|-|\beta|} D^\tau u(x), \end{equation} \begin{equation}\label{E41} D^\alpha {\mathcal V}(X) =\bigl((\varkappa'^*(\lambda(x))\xi)_{\xi=D}^\alpha\,{v} \bigr)(x)+\sum_{1\leq|\tau|<|\alpha|}K_{\alpha\tau}(x)\,x_n^{|\tau|-|\alpha|} D^\tau v(x), \end{equation} \noindent where, thanks to (\ref{1.5}), the coefficients $K_{\gamma\tau}$ satisfy \begin{equation}\label{E42} \|K_{\gamma\tau}\|_{L_\infty(\mathbb{R}^n_+)} \leq c[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}. \end{equation} \noindent Plugging (\ref{E40}) and (\ref{E41}) into the definition of ${\mathcal L}({\mathcal U},{\mathcal V})$, we arrive at \begin{equation}\label{LUV2} {\mathcal L}({\mathcal U},{\mathcal V})=L_0(u,v) +\sum_{{1\leq|\alpha|,|\beta|\leq m}\atop{|\alpha|+|\beta|<2m}} \int_{\mathbb{R}^n_+}\langle {A}_{\alpha\beta}(x)\,x_n^{|\alpha|+|\beta|-2m} D^\beta\,u(x),\,D^\alpha v(x)\rangle\,dx, \end{equation} \noindent where \begin{equation}\label{Lzero} L_0(u,v)=\sum_{|\alpha|=|\beta|=m}\int_{\mathbb{R}^n_+} \langle (\mathfrak{A}_{\alpha\beta}\circ\lambda) ((\varkappa'^*\circ\lambda)\xi)_{\xi=D}^\beta\,{u},\, ((\varkappa'^*\circ\lambda)\xi)_{\xi=D}^\alpha\,{v}\rangle \,{\rm det}\,\lambda'\,dx. \end{equation} \noindent It follows from (\ref{E40})-(\ref{E42}) that the coefficient matrices $A_{\alpha\beta}$ obey \begin{equation}\label{E43} \sum_{{1\leq|\alpha|,|\beta|\leq m}\atop{|\alpha|+|\beta|<2m}} \|A_{\alpha\beta}\|_{L_\infty(\mathbb{R}^n_+)} \leq {c}{\kappa}^{-1}\,[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}, \end{equation} \noindent where $c$ depends on $m$, $n$, and $\|\nabla\varphi\|_{L_\infty(\mathbb{R}^{n-1})}$. 
We can write the form $L_0(u,v)$ as \begin{equation}\label{Lzero-uv} \sum_{|\alpha| =|\beta| =m}\int_{\mathbb{R}^n_+} \langle {A}_{\alpha\beta}(x)\,D^\beta u(x),\,D^\alpha v(x)\rangle\,dx \end{equation} \noindent where the coefficient matrices $A_{\alpha\beta}$ are given by \begin{equation}\label{Aalbet} A_{\alpha\beta}={\rm det}\,\lambda'\,\sum_{|\gamma|=|\tau|=m} P_{\alpha\beta}^{\gamma\tau}(\varkappa'\circ\lambda) ({\mathfrak A}_{\gamma\tau}\circ\lambda), \end{equation} \noindent for some scalar homogeneous polynomials $P_{\alpha\beta}^{\gamma\tau}$ of the elements of the matrix $\varkappa'(\lambda(x))$ with ${\rm deg}\,P_{\alpha\beta}^{\gamma\tau}=2m$. In view of (\ref{1.33})-(\ref{1.36}), \begin{equation}\label{E44} \sum_{|\alpha|=|\beta|=m}[A_{\alpha\beta}]_{{\rm BMO}(\mathbb{R}^n_+)} \leq c\Bigl(\kappa^{-1}[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})} +\sum_{|\alpha|=|\beta|=m}[{\mathfrak A}_{\alpha\beta}]_{{\rm BMO}(G)}\Bigr), \end{equation} \noindent where $c$ depends on $n$, $m$, and $\|\nabla\varphi\|_{L_\infty(\mathbb{R}^{n-1})}$. By (\ref{E43}) \begin{equation}\label{L-L} |L(u,u)-L_0(u,u)|\leq c\,\delta\|u\|_{V_2^{m,0}(\mathbb{R}^n_+)}^2 \end{equation} \noindent and, therefore, \begin{equation}\label{ReLzero} \Re\,L_0(u,u)\geq\Re\,{\mathcal L}({\mathcal U},{\mathcal U}) -c\,\delta\|u\|^2_{V_2^{m,0}(\mathbb{R}^n_+)}. \end{equation} \noindent Using (\ref{B25}) and the equivalence \begin{equation}\label{norm-U} \|{\mathcal U}\|_{V_2^{m,0}(G)}\sim\|u\|_{V_2^{m,0}(\mathbb{R}^n_+)} \end{equation} \noindent (cf. the discussion in \S{5.3}), we arrive at (\ref{B5}). Thus, all conditions of Lemma~\ref{lem5} hold and the result follows. The improvement of (\ref{E8b}) for $m=1$ mentioned in the statement (i) holds because in this case $L=L_0$. $\Box$ \vskip 0.08in \section{Dirichlet problem in a bounded Lipschitz domain} \setcounter{equation}{0} \subsection{Preliminaries} Let $\Omega$ be a {\it bounded Lipschitz domain} in ${\mathbb{R}}^n$ which means (cf. 
{\bf\cite{St}}, p.\,189) that there exists a finite open covering $\{{\mathcal O}_j\}_{1\leq j\leq N}$ of $\partial\Omega$ with the property that, for every $j\in\{1,...,N\}$, ${\mathcal O}_j\cap\Omega$ coincides with the portion of ${\mathcal O}_j$ lying in the over-graph of a Lipschitz function $\varphi_j:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ (where ${\mathbb{R}}^{n-1}\times{\mathbb{R}}$ is a new system of coordinates obtained from the original one via a rigid motion). We then define the {\it Lipschitz constant} of a bounded Lipschitz domain $\Omega\subset{\mathbb{R}}^n$ as \begin{equation}\label{Lip-ct} \inf\,\Bigl(\max\{\|\nabla\varphi_j\|_{L_\infty({\mathbb{R}}^{n-1})}:\,1\leq j\leq N\} \Bigr) \end{equation} \noindent where the infimum is taken over all possible families $\{\varphi_j\}_{1\leq j\leq N}$ as above. It is a classical result that the surface measure $d\sigma$ is well-defined and that there exists an outward pointing normal vector $\nu$ at almost every point on $\partial\Omega$. We denote by $\rho(X)$ the distance from $X\in{\mathbb{R}}^n$ to $\partial\Omega$ and, for $p$, $a$ and $m$ as in (\ref{indices}), introduce the weighted Sobolev space $V^{m,a}_p(\Omega)$ naturally associated with the norm \begin{equation}\label{normU2} \|{\mathcal U}\|_{V_p^{m,a}(\Omega)} :=\Bigl(\sum_{0\leq |\beta|\leq m}\int_{\Omega} |\rho(X)^{|\beta|-m} D^\beta{\mathcal U}(X)|^p\,\rho(X)^{pa}\,dX\Bigr)^{1/p}. \end{equation} \noindent One can check the equivalence of the norms \begin{equation}\label{equiv-Nr2} \|{\mathcal U}\|_{V_p^{m,a}(\Omega)}\sim \|\rho_{\rm reg}^a\,{\mathcal U}\|_{V_p^{m,0}(\Omega)}, \end{equation} \noindent where $\rho_{\rm reg}(X)$ stands for the regularized distance from $X$ to $\partial\Omega$ (in the sense of Theorem~2, p.\,171 in {\bf\cite{St}}). 
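The equivalence (\ref{equiv-Nr2}) can be verified via the Leibniz rule: since $\rho_{\rm reg}\sim\rho$ and $|D^\sigma\rho_{\rm reg}^{\,a}|\leq c(\sigma)\,\rho^{a-|\sigma|}$ for every multi-index $\sigma$, we have, for $0\leq|\beta|\leq m$, \[ \rho^{|\beta|-m}\,\bigl|D^\beta\bigl(\rho_{\rm reg}^{\,a}\,{\mathcal U}\bigr)\bigr| \leq c\sum_{\sigma\leq\beta}\rho^{|\beta|-m}\,\rho^{a-|\sigma|}\, \bigl|D^{\beta-\sigma}{\mathcal U}\bigr| =c\sum_{\sigma\leq\beta}\rho^{a}\,\rho^{|\beta-\sigma|-m}\, \bigl|D^{\beta-\sigma}{\mathcal U}\bigr|, \] so that each term on the right is of the type entering (\ref{normU2}); the reverse estimate follows by writing ${\mathcal U}=\rho_{\rm reg}^{-a}\,(\rho_{\rm reg}^{\,a}\,{\mathcal U})$ and repeating the argument with $-a$ in place of $a$.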
It is also easily proved that $C_0^\infty(\Omega)$ is dense in $V^{m,a}_p(\Omega)$ and that \begin{equation}\label{sumDU} \|{\mathcal U}\|_{V_p^{m,a}(\Omega)}\sim \Bigl(\sum_{|\beta|=m}\int_\Omega|D^\beta{\mathcal U}(X)|^p\,\rho(X)^{pa}\,dX \Bigr)^{1/p} \end{equation} \noindent uniformly for ${\mathcal U}\in C_0^\infty(\Omega)$. As in (\ref{dual-VG}), we set \begin{equation}\label{dual-V} V^{-m,a}_p(\Omega):=\Bigl(V^{m,-a}_{p'}(\Omega)\Bigr)^*. \end{equation} Let us fix a Cartesian coordinate system and consider the differential operator \begin{equation}\label{E444} {\mathcal A}\,{\mathcal U}={\mathcal A}(X,D_X)\,{\mathcal U} :=\sum_{|\alpha|=|\beta|=m}D^\alpha({\mathcal A}_{\alpha\beta}(X) \,D^\beta{\mathcal U}),\qquad X\in\Omega, \end{equation} \noindent with measurable $l\times l$ matrix-valued coefficients. The corresponding sesquilinear form will be denoted by ${\mathcal A}({\mathcal U},{\mathcal V})$. Similarly to (\ref{E4a}) and (\ref{B25}) we impose the conditions \begin{equation}\label{E4b} \sum_{|\alpha|=|\beta|=m}\|{\mathcal A}_{\alpha\beta}\|_{L_\infty(\Omega)} \leq \kappa^{-1} \end{equation} \noindent and \begin{equation}\label{B25b} \Re\,{\mathcal A}({\mathcal U},{\mathcal U})\geq\kappa\sum_{|\gamma|=m} \|D^\gamma\,{\mathcal U}\|^2_{L_2(\Omega)}\quad\mbox{for all}\,\,\, {\mathcal U}\in V_2^{m,0}(\Omega). \end{equation} \subsection{Interior regularity of solutions} \begin{lemma}\label{lem2} Let $\Omega\subset{\mathbb{R}}^n$ be a bounded Lipschitz domain. Pick two functions ${\mathcal H},{\mathcal Z}\in C^\infty_0(\Omega)$ such that ${\mathcal H}\,{\mathcal Z}={\mathcal H}$, and assume that \begin{equation}\label{E5} \sum_{|\alpha|=|\beta|=m}[{\mathcal A}_{\alpha\beta}]_{{\rm BMO}(\Omega)} \leq\delta \end{equation} \noindent where \begin{equation}\label{E6} \delta\leq \frac{c(m,n,\kappa)}{p\,p'} \end{equation} \noindent with a sufficiently small constant $c(m,n,\kappa)>0$.
If ${\mathcal U}\in W_q^m(\Omega,loc)$ for a certain $q<p$ and ${\mathcal A}\,{\mathcal U}\in W_p^{-m}(\Omega,loc)$, then ${\mathcal U}\in W_p^m(\Omega,loc)$ and \begin{equation}\label{E3} \|{\mathcal H}\,{\mathcal U}\|_{W_p^m(\Omega)} \leq C\,(\|{\mathcal H}\,{\mathcal A}(\cdot,D)\, {\mathcal U}\|_{W_p^{-m}(\Omega)} +\|{\mathcal Z}\,{\mathcal U}\|_{W_q^m(\Omega)}). \end{equation} \end{lemma} \noindent{\bf Proof.} We start with a trick applied in {\bf\cite{CFL1}} under slightly different circumstances. We shall use the notation ${\mathcal A}_Y$ for the operator ${\mathcal A}(Y, D_X)$, where $Y\in\Omega$ and the notation $\Phi_Y$ for a fundamental solution of ${\mathcal A}_Y$ in $\mathbb{R}^n$. Then, with star denoting the convolution product, \begin{equation}\label{HUPhi} {\mathcal H}\,{\mathcal U} +\Phi_Y\ast({\mathcal A}-{\mathcal A}_Y)({\mathcal H}\,{\mathcal U}) =\Phi_Y\ast({\mathcal H}\,{\mathcal A}{\mathcal U}) +\Phi_Y\ast ([{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U})) \end{equation} \noindent and, consequently, for each multi-index $\gamma$, $|\gamma|=m$, \begin{eqnarray}\label{DHU} && D^\gamma({\mathcal H}\,{\mathcal U})+\sum_{|\alpha|=|\beta|=m} D^{\alpha+\gamma}\Phi_Y\ast\bigl(({\mathcal A}_{\alpha\beta} -{\mathcal A}_{\alpha\beta}(Y)) D^\beta({\mathcal H}\,{\mathcal U})\bigr) \nonumber\\[6pt] && \qquad\quad =D^\gamma\Phi_Y\ast({\mathcal H}\,{\mathcal A}{\mathcal U}) +D^\gamma\Phi_Y\ast([{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U})). \end{eqnarray} \noindent Writing this equation at the point $Y$ and using (\ref{b1}), we obtain \begin{eqnarray}\label{E3a} && (1-C\,pp'\delta) \sum_{|\gamma|=m}\|D^\gamma({\mathcal H}\,{\mathcal U})\|_{L_p(\Omega)} \nonumber\\[6pt] && \qquad\quad \leq C(p,\kappa) (\|{\mathcal H}\,{\mathcal A}{\mathcal U}\|_{W_p^{-m}(\Omega)} +\|[{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U})\|_{W_p^{-m}(\Omega)}). \end{eqnarray} \noindent Let $p'<n$. 
We have for every ${\mathcal V}\in \mathaccent"0017 W^m_{p'}(\Omega)$ \begin{eqnarray}\label{AHZUV} && \Bigl|\int_\Omega\langle[{\mathcal A},{\mathcal H}] ({\mathcal Z}{\mathcal U}),{\mathcal V}\rangle\,dX\Bigr| =|{\mathcal A}({\mathcal H}{\mathcal Z}{\mathcal U},{\mathcal V}) -{\mathcal A}({\mathcal Z}{\mathcal U},{\mathcal H}{\mathcal V})| \nonumber\\[6pt] && \qquad\qquad\quad \leq c(\|{\mathcal Z}{\mathcal U}\|_{W_p^{m-1}(\Omega)} \|{\mathcal V}\|_{W_{p'}^m(\Omega)} +\|{\mathcal Z}{\mathcal U}\|_{W_{\frac{pn}{n+p}}^{m} (\Omega)}\|{\mathcal V}\|_{W_{\frac{p'n}{n-p'}}^{m-1}(\Omega)}). \end{eqnarray} \noindent By Sobolev's theorem \begin{equation}\label{ZZU} \|{\mathcal Z}{\mathcal U}\|_{W_p^{m-1}(\Omega)} \leq c\,\|{\mathcal Z}{\mathcal U}\|_{W_{\frac{pn}{n+p}}^{m}(\Omega)} \end{equation} \noindent and \begin{equation}\label{VWpm} \|{\mathcal V}\|_{W_{\frac{p'n}{n-p'}}^{m-1}(\Omega)} \leq c\,\|{\mathcal V}\|_{W_{p'}^m(\Omega)}. \end{equation} \noindent Therefore, \begin{equation}\label{AHZ} \Bigl|\int_\Omega\langle [{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U}),{\mathcal V}\rangle \,dX\Bigr| \leq c\,\|{\mathcal Z}{\mathcal U}\|_{W_{\frac{pn}{n+p}}^{m}(\Omega)} \|{\mathcal V}\|_{W_{p'}^m(\Omega)} \end{equation} \noindent which is equivalent to the inequality \begin{equation}\label{AHZ3} \|[{\mathcal A},{\mathcal H}]({\mathcal Z}{\mathcal U})\|_{W_p^{-m}(\Omega)} \leq c\,\|{\mathcal Z}{\mathcal U}\|_{W_{\frac{pn}{n+p}}^{m} (\Omega)}. \end{equation} \noindent In the case $p'\geq n$, the same argument leads to a similar inequality, where $pn/(n+p)$ is replaced by $1+\varepsilon$ with an arbitrary $\varepsilon>0$ for $p'>n$ and $\varepsilon=0$ for $p'=n$. Now, (\ref{E3}) follows from (\ref{E3a}) either when $p'\geq n$, or when $p'<n$ and $q\geq pn/(n+p)$. In the remaining case the goal is achieved by iterating this argument finitely many times. $\Box$ \vskip 0.08in \begin{corollary}\label{cor3} Let $p\geq 2$ and suppose that {\rm (\ref{E5})} and {\rm (\ref{E6})} hold.
If ${\mathcal U}\in W_2^m(\Omega,loc)$ and ${\mathcal A}\,{\mathcal U}\in W_p^{-m}(\Omega,loc)$, then ${\mathcal U}\in W_p^m(\Omega,loc)$ and \begin{equation}\label{E3-aaa} \|{\mathcal H}\,{\mathcal U}\|_{W_p^m(\Omega)} \leq C\,(\|{\mathcal Z}\,{\mathcal A}(\cdot,D)\, {\mathcal U}\|_{W_p^{-m}(\Omega)} +\|{\mathcal Z}\,{\mathcal U}\|_{W_2^{m-1}(\Omega)}). \end{equation} \end{corollary} \noindent{\bf Proof.} Let ${\mathcal Z}_0$ denote a real-valued function in $C^\infty_0(\Omega)$ such that ${\mathcal H}{\mathcal Z}_0={\mathcal H}$ and ${\mathcal Z}_0{\mathcal Z}={\mathcal Z}_0$. By (\ref{E3}) \begin{equation}\label{E3b} \|{\mathcal H}\,{\mathcal U}\|_{W_p^m(\Omega)} \leq C\,(\|{\mathcal H}\,{\mathcal A}(\cdot,D)\, {\mathcal U}\|_{W_p^{-m}(\Omega)} +\|{\mathcal Z}_0\,{\mathcal U}\|_{W_2^{m}(\Omega)}) \end{equation} \noindent and it follows from (\ref{B25b}) that \begin{equation}\label{ZUW} \|{\mathcal Z}_0\,{\mathcal U}\|^2_{W_2^m(\Omega)}\leq c\kappa^{-1} \Re\,{\mathcal A}({\mathcal Z}_0{\mathcal U},{\mathcal Z}_0{\mathcal U}). \end{equation} \noindent Furthermore, \begin{equation}\label{AZ5} |{\mathcal A}({\mathcal Z}_0{\mathcal U},{\mathcal Z}_0{\mathcal U}) -{\mathcal A}({\mathcal U},{\mathcal Z}_0^2{\mathcal U})|\leq c\kappa^{-1} \|{\mathcal Z}{\mathcal U}\|_{W_2^{m-1}(\Omega)}\, \|{\mathcal Z}_0{\mathcal U}\|_{W_2^{m}(\Omega)}. \end{equation} \noindent Hence \begin{equation}\label{ZU6} \|{\mathcal Z}_0\,{\mathcal U}\|^2_{W_2^m(\Omega)}\leq c\kappa^{-1} (\|{\mathcal Z}\,{\mathcal A}{\mathcal U}\|_{W_2^{-m}(\Omega)}\, \|{\mathcal Z}_0^2\,{\mathcal U}\|_{W_2^m(\Omega)} +\kappa^{-1}\|{\mathcal Z}{\mathcal U}\|_{W_2^{m-1}(\Omega)}\, \|{\mathcal Z}_0{\mathcal U}\|_{W_2^{m}(\Omega)}) \end{equation} \noindent and, therefore, \begin{equation}\label{Zu7} \|{\mathcal Z}_0\,{\mathcal U}\|_{W_2^m(\Omega)} \leq c\kappa^{-1}(\|{\mathcal Z}\,{\mathcal A} {\mathcal U}\|_{W_2^{-m}(\Omega)}\, +\kappa^{-1}\|{\mathcal Z}{\mathcal U}\|_{W_2^{m-1}(\Omega)}). 
\end{equation} \noindent Combining this inequality with (\ref{E3b}) we arrive at (\ref{E3-aaa}). $\Box$ \vskip 0.08in \subsection{Invertibility of ${\mathcal A}:V_p^{m,a}(\Omega)\longrightarrow V_p^{-m,a}(\Omega)$} Recall the infinitesimal mean oscillations as defined in (\ref{e60}). \begin{theorem}\label{th1a} Let $1<p<\infty$, $0<s<1$, and $a=1-s-1/p$. Furthermore, let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^n$. Suppose that the differential operator ${\mathcal A}$ is as in {\rm \S{6.1}} and that, in addition, \begin{equation}\label{E16} \sum_{|\alpha|=|\beta|=m}\{{\mathcal A}_{\alpha\beta}\}_{{\rm Osc}(\Omega)} +\{\nu\}_{{\rm Osc}(\partial\Omega)}\leq\delta, \end{equation} \noindent where \begin{equation}\label{E17} \Bigl(pp'+\frac{1}{s(1-s)}\Bigr)\frac{\delta}{s(1-s)}\leq c \end{equation} \noindent for a sufficiently small constant $c>0$ independent of $p$ and $s$. In the case $m=1$ the factor $\delta/s(1-s)$ in {\rm (\ref{E17})} can be replaced by $\delta$. Then the operator \begin{equation}\label{cal-L} {\mathcal A}:V_p^{m,a}(\Omega)\longrightarrow V_p^{-m,a}(\Omega) \end{equation} \noindent is an isomorphism. \end{theorem} \noindent{\bf Proof.} We shall proceed in a series of steps, starting with \vskip 0.08in (i) {\it The construction of the auxiliary domain $G$ and operator ${\mathcal L}$}. \noindent Let $\varepsilon$ be small enough so that \begin{equation}\label{m1} \sum_{|\alpha|=|\beta|=m}{\int{\mkern-19mu}-}_{\!\!\!B_r\cap \Omega} {\int{\mkern-19mu}-}_{\!\!\!B_r\cap\Omega} |{\mathcal A}_{\alpha\beta}(X)-{\mathcal A}_{\alpha\beta}(Y)|\,dXdY \leq 2\delta \end{equation} \noindent for all balls in $\{B_r\}_\Omega$ with radii $r<\varepsilon$ and \begin{equation}\label{m2} {\int{\mkern-19mu}-}_{\!\!\!B_r\cap\partial\Omega}{\int{\mkern-19mu}-}_{\!\!\!B_r\cap\partial\Omega}\, \Bigl|\nu(X)-\nu(Y)\,\Bigr|\,d\sigma_Xd\sigma_Y\leq 2\delta \end{equation} \noindent for all balls in $\{B_r\}_{\partial\Omega}$ with radii $r<\varepsilon$.
We fix a ball $B_\varepsilon$ in $\{B_\varepsilon\}_{\partial\Omega}$ and assume without loss of generality that, in a suitable system of Cartesian coordinates, \begin{equation}\label{newGGG} \Omega\cap B_\varepsilon=\{X=(X',X_n)\in B_\varepsilon:\,X_n>\varphi(X')\} \end{equation} \noindent for some Lipschitz function $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$. Consider now the unique cube $Q(\varepsilon)$ (relative to this system of coordinates) which is inscribed in $B_\varepsilon$ and denote its projection onto $\mathbb{R}^{n-1}$ by $Q'(\varepsilon)$. Since $\nabla\varphi=-\nu'/\nu_n$, it follows from (\ref{m2}) that \begin{equation}\label{m3} {\int{\mkern-19mu}-}_{\!\!\!B'_r}{\int{\mkern-19mu}-}_{\!\!\!B'_r}\, \Bigl|\nabla\varphi(X')-\nabla\varphi(Y')\,\Bigr|\,dX'dY'\leq c(n)\,\delta, \end{equation} \noindent where $B'_r=B_r\cap \mathbb{R}^{n-1}$, $r<\varepsilon$. Let us retain the notation $\varphi$ for the mirror extension of the function $\varphi$ from $Q'(\varepsilon)$ onto $\mathbb{R}^{n-1}$. We extend ${\mathcal A}_{\alpha\beta}$ from $Q(\varepsilon)\cap\Omega$ onto $Q(\varepsilon)\backslash\Omega$ by setting \begin{equation}\label{A=A} {\mathcal A}_{\alpha\beta}(X) :={\mathcal A}_{\alpha\beta}(X',-X_n+2\varphi(X')), \qquad X\in Q(\varepsilon)\backslash\Omega, \end{equation} \noindent and we shall use the notation ${\mathfrak A}_{\alpha\beta}$ for the periodic extension of ${\mathcal A}_{\alpha\beta}$ from $Q(\varepsilon)$ onto $\mathbb{R}^n$. Consistent with the earlier discussion in \S{5}, we shall denote the special Lipschitz domain $\{X=(X',X_n):\,X'\in\mathbb{R}^{n-1}, \,X_n>\varphi(X')\}$ by $G$. One can easily see that, owing to $2\varepsilon n^{-1/2}$-periodicity of $\varphi$ and ${\mathcal A}_{\alpha\beta}$, \begin{equation}\label{SumA} \sum_{|\alpha|=|\beta|=m}[{\mathcal A}_{\alpha\beta}]_{{\rm BMO}(G)} +[\nabla\varphi]_{{\rm BMO}(\mathbb{R}^{n-1})}\leq c(n)\,\delta. 
\end{equation} \noindent Now, with the operator ${\mathcal A}(X,D_X)$ in $\Omega$, we associate an auxiliary operator ${\mathcal L}(X,D_X)$ in $G$ given by (\ref{E4}). \vskip 0.08in (ii) {\it Uniqueness.} \noindent Assuming that ${\mathcal U}\in V_p^{m,a}(\Omega)$ satisfies ${\mathcal L}\,{\mathcal U}=0$ in $\Omega$, we shall show that ${\mathcal U}\in V_2^{m,0}(\Omega)$. This will imply that ${\mathcal U}=0$ which proves the injectivity of the operator (\ref{cal-L}). To this end, pick a function ${\mathcal H}\in C_0^\infty(Q(\varepsilon))$ and write ${\mathcal L}({\mathcal H}\,{\mathcal U}) =[{\mathcal L},\,{\mathcal H}]\,{\mathcal U}$. Also, fix a small $\theta>0$ and select a smooth function $\Lambda$ on $\mathbb{R}^1_+$, which is identically $1$ on $[0,1]$ and which vanishes identically on $(2,\infty)$. Then by (ii) in Lemma~\ref{lem5a}, \begin{equation}\label{E19} {\mathcal L}({\mathcal H}\,{\mathcal U})-[{\mathcal L},\,{\mathcal H}]\, (\Lambda(\rho_{\rm reg}/\theta)\,{\mathcal U}) \in V_2^{-m,0}(G)\cap V_p^{-m,a}(G). \end{equation} \noindent Note that the operator \begin{equation}\label{LH-1} [{\mathcal L},\,{\mathcal H}]\rho_{\rm reg}^{-1}: V_p^{m,a}(G) \longrightarrow V_p^{-m,a}(G) \end{equation} \noindent is bounded and that the norm of the multiplier $\rho_{\rm reg}\,\Lambda(\rho_{\rm reg}/\theta)$ in $V_p^{m,a}(G)$ is $O(\theta)$. Moreover, the same is true for $p=2$ and $a=0$. The inclusion (\ref{E19}) can be written in the form \begin{equation}\label{E21} {\mathcal L}({\mathcal H}\,{\mathcal U}) +{\mathcal M}({\mathcal Z}\,{\mathcal U})\in V_p^{-m,a}(G)\cap V_2^{-m,0}(G), \end{equation} \noindent where ${\mathcal Z}\in C^\infty_0(\mathbb{R}^n)$, ${\mathcal Z}\,{\mathcal H}={\mathcal H}$ and ${\mathcal M}$ is a linear operator mapping \begin{equation}\label{V2Vp} V_p^{m,a}(G)\to V_p^{-m,a}(G)\quad {\rm and}\quad V_2^{m,0}(G)\to V_2^{-m,0}(G) \end{equation} \noindent with both norms of order $O(\theta)$. 
Select a finite covering of $\overline{\Omega}$ by cubes $Q_j(\varepsilon)$ and let $\{{\mathcal H}_j\}$ be a smooth partition of unity subordinate to $\{Q_j(\varepsilon)\}$. Also, let ${\mathcal Z}_j\in C_0^\infty(Q_j(\varepsilon))$ be such that ${\mathcal H}_j{\mathcal Z}_j={\mathcal H}_j$. By $G_j$ we denote the special Lipschitz domain generated by the cube $Q_j(\varepsilon)$ as in part (i) of the present proof. The corresponding operators ${\mathcal L}$ and ${\mathcal M}$ will be denoted by ${\mathcal L}_j$ and ${\mathcal M}_j$, respectively. It follows from (\ref{E21}) that \begin{equation}\label{Hu8} {\mathcal H}_j\,{\mathcal U} +\sum_k({\mathcal L}_j^{-1}\,{\mathcal M}_j\,{\mathcal Z}_j \,{\mathcal Z}_k)({\mathcal H}_k\,{\mathcal U}) \in V_p^{m,a}(\Omega)\cap V_2^{m,0}(\Omega). \end{equation} \noindent Taking into account that the norms of the matrix operator ${\mathcal L}_j^{-1}\,{\mathcal M}_j\,{\mathcal Z}_j\,{\mathcal Z}_k$ in the spaces $V_p^{m,a}(\Omega)$ and $V_2^{m,0}(\Omega)$ are $O(\theta)$, we may take $\theta>0$ small enough and obtain ${\mathcal H}_j\,{\mathcal U}\in V_2^{m,0}(\Omega)$, i.e. ${\mathcal U}\in V_2^{m,0}(\Omega)$. Therefore, ${\mathcal L}:V_p^{m,a}(\Omega)\to V_p^{-m,a}(\Omega)$ is injective. \vskip 0.08in (iii) {\it A priori estimate}. \noindent Let $p\geq 2$ and assume that ${\mathcal U}\in V_p^{m,a}(\Omega)$. Referring to Corollary~\ref{cor3} and arguing as in part (ii) of the present proof, we arrive at the equation \begin{equation}\label{m32} {\mathcal H}_j\,{\mathcal U} +\sum_k({\mathcal L}_j^{-1}\,{\mathcal M}_j\,{\mathcal Z}_j\, {\mathcal Z}_k)({\mathcal H}_k\,{\mathcal U})={\mathcal F}, \end{equation} \noindent whose right-hand side satisfies \begin{equation}\label{FVO} \|{\mathcal F}\|_{V_p^{m,a}(\Omega)}\leq c (\|{\mathcal A}\,{\mathcal U}\|_{V_p^{-m,a}(\Omega)} +\|{\mathcal U}\|_{W_2^{m-1}(\omega)}), \end{equation} \noindent for some domain $\omega$ with ${\overline\omega}\subset\Omega$.
Since the $V_p^{m,a}(\Omega)$-norm of the sum in (\ref{m32}) does not exceed $ C\theta\|{\mathcal U}\|_{V_p^{m,a}(\Omega)}$, we obtain the estimate \begin{equation}\label{m32-bis} \|{\mathcal U}\|_{V_p^{m,a}(\Omega)} \leq c\,(\|{\mathcal A}\,{\mathcal U}\|_{V_p^{-m,a}(\Omega)} +\|{\mathcal U} \|_{W_2^{m-1}(\omega)}). \end{equation} \vskip 0.08in (iv) {\it End of proof.} \noindent Let $p\geq 2$. The range of the operator ${\mathcal A}:V_p^{m,a}(\Omega)\to V_p^{-m,a}(\Omega)$ is closed by (\ref{m32-bis}) and the compactness of the restriction operator $V_p^{m,a}(\Omega) \to W_2^{m-1}(\omega)$. Since the coefficients of the adjoint operator ${\mathcal L}^*$ satisfy the same conditions as those of ${\mathcal L}$, the operator ${\mathcal L}^*: V_{p'}^{m,-a}(\Omega)\to V_{p'}^{-m,-a}(\Omega)$ is injective. Therefore, we conclude that ${\mathcal L}:V_{p}^{m,a}(\Omega)\to V_{p}^{-m,a}(\Omega)$ is surjective. Being also injective, ${\mathcal L}$ is an isomorphism if $p\geq 2$. Hence ${\mathcal L}^*$ is an isomorphism for $p'\leq 2$. This means that ${\mathcal L}$ is an isomorphism for $p\leq 2$. The result follows. $\Box$ \vskip 0.08in \subsection{Traces and extensions} Let $\Omega\subset{\mathbb{R}}^n$ be a bounded Lipschitz domain and, for $m\in{\mathbb{N}}$ as well as $1<p<\infty$ and $-1/p<a<1-1/p$, consider a new space, $W_p^{m,a}(\Omega)$, consisting of functions ${\mathcal U}\in L_p(\Omega,loc)$ with the property that $\rho^{a}D^\alpha{\mathcal U}\in L_p(\Omega)$ for all multi-indices $\alpha$ with $|\alpha|=m$. We equip $W_p^{m,a}(\Omega)$ with the norm \begin{equation}\label{newW} \|{\mathcal U}\|_{W_p^{m,a}(\Omega)} :=\sum_{|\alpha|=m}\|D^\alpha{\mathcal U}\|_{L_p(\Omega,\,\rho(X)^{ap}\,dX)} +\|{\mathcal U}\|_{L_p(\omega)}, \end{equation} \noindent where $\omega$ is an open non-empty domain, $\overline{\omega}\subset\Omega$. An equivalent norm is given by the expression in (\ref{W-Nr}).
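Let us note in passing a direct consequence of (\ref{normU2}) and (\ref{newW}): since $\rho$ is bounded from above on the bounded domain $\Omega$ and bounded away from zero on $\overline{\omega}$, one has \begin{equation*} \|{\mathcal U}\|_{W_p^{m,a}(\Omega)}\leq c\,\|{\mathcal U}\|_{V_p^{m,a}(\Omega)} \qquad\mbox{for all }\,{\mathcal U}\in V_p^{m,a}(\Omega), \end{equation*} \noindent i.e., $V_p^{m,a}(\Omega)$ embeds continuously into $W_p^{m,a}(\Omega)$. The precise relationship between the two scales will become apparent in Proposition~\ref{trace-2} below, where $V_p^{m,a}(\Omega)$ is identified with the null-space of the higher order trace operator.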
We omit the standard proof of the fact that \begin{equation}\label{dense} C^\infty({\overline\Omega})\hookrightarrow W_p^{m,a}(\Omega) \quad\mbox{densely}. \end{equation} Recall that for $p\in(1,\infty)$ and $s\in(0,1)$ the Besov space $B_p^s(\partial\Omega)$ is then defined via the requirement (\ref{Bes-xxx}). If we introduce the $L_p$-modulus of continuity \begin{equation}\label{omega-p} \omega_p(f,t):=\Bigl(\int\!\!\!\!\!\!\int \limits_{{|X-Y|<t}\atop{X,Y\in\partial\Omega}} |f(X)-f(Y)|^p\,d\sigma_Xd\sigma_Y\Bigr)^{1/p}, \end{equation} \noindent then \begin{equation}\label{Eqv} \|f\|_{B_p^s(\partial\Omega)}\sim\|f\|_{L_p(\partial\Omega)} +\left(\int_0^\infty\frac{\omega_p(f,t)^p}{t^{n-1+ps}}dt\right)^{1/p}, \end{equation} \noindent uniformly for $f\in B^s_p(\partial\Omega)$. The nature of our problem requires that we work with Besov spaces (defined on Lipschitz boundaries) which exhibit a higher order of smoothness. In accordance with {\bf\cite{JW}}, we now make the following definition. \begin{definition}\label{def1} For $p\in(1,\infty)$, $m\in{\mathbb{N}}$ and $s\in(0,1)$, define the (higher order) Besov space $\dot{B}^{m-1+s}_p(\partial\Omega)$ as the collection of all finite families $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}$ of functions defined on $\partial\Omega$ with the following property. For each multi-index $\alpha$ of length $\leq m-1$ let \begin{equation}\label{reminder} R_\alpha(X,Y):=f_\alpha(X)-\sum_{|\beta|\leq m-1-|\alpha|}\frac{1}{\beta!}\, f_{\alpha+\beta}(Y)\,(X-Y)^\beta,\qquad X,Y\in\partial\Omega, \end{equation} \noindent and consider the $L_p$-modulus of continuity \begin{equation}\label{rem-Rr} r_\alpha(t):=\Bigl(\int\!\!\!\!\!\!\int \limits_{{|X-Y|<t}\atop{X,Y\in\partial\Omega}} |R_\alpha(X,Y)|^p\,d\sigma_Xd\sigma_Y\Bigr)^{1/p}. 
\end{equation} \noindent Then \begin{equation}\label{Bes-Nr} \|\dot{f}\|_{\dot{B}^{m-1+s}_p(\partial\Omega)} :=\sum_{|\alpha|\leq m-1}\|f_\alpha\|_{L_p(\partial\Omega)} +\sum_{|\alpha|\leq m-1}\Bigl(\int_0^\infty \frac{r_\alpha(t)^p}{t^{p(m-1+s-|\alpha|)+n-1}}\,dt\Bigr)^{1/p}<\infty. \end{equation} \end{definition} For further reference we note here that for each fixed $\kappa>0$, an equivalent norm is obtained by replacing $r_\alpha(t)$ by $r_\alpha(\kappa\,t)$ in (\ref{Bes-Nr}). Also, when $m=1$, the above definition agrees with (\ref{Bes-xxx}), thanks to (\ref{Eqv}). A few notational conventions which make the exposition more transparent are as follows. Given a family of functions $\{f_\alpha\}_{|\alpha|\leq m-1}$ on $\partial\Omega$ and $X\in\Omega$, $Y,Z\in\partial\Omega$, set \begin{equation}\label{PPP} \begin{array}{l} {\displaystyle{ P_\alpha(X,Y):=\sum_{|\beta|\leq m-1-|\alpha|}\frac{1}{\beta!}\, f_{\alpha+\beta}(Y)\,(X-Y)^\beta,\qquad \forall\,\alpha\,:\,|\alpha|\leq m-1,}} \\[25pt] P(X,Y):=P_{(0,...,0)}(X,Y), \end{array} \end{equation} \noindent so that \begin{equation}\label{PR-0} R_\alpha(Y,Z)=f_\alpha(Y)-P_\alpha(Y,Z), \qquad\forall\,\alpha\,:\,|\alpha|\leq m-1, \end{equation} \noindent and the following elementary identities hold for each multi-index $\alpha$ of length $\leq m-1$: \begin{eqnarray}\label{PR} D^{\beta}_XP_{\alpha}(X,Y) & = & P_{\alpha+\beta}(X,Y), \qquad|\beta|\leq m-1-|\alpha|, \nonumber\\[6pt] P_{\alpha}(X,Y) -P_{\alpha}(X,Z) & =& \sum_{|\beta|\leq m-1-|\alpha|}R_{\alpha+\beta}(Y,Z)\frac{(X-Y)^\beta}{\beta!}. \end{eqnarray} \noindent See, e.g., p.\,177 in {\bf\cite{St}} for the last formula. \begin{lemma}\label{trace-1} For each $1<p<\infty$, $-1/p<a<1-1/p$ and $s=1-a-1/p$, the trace operator \begin{equation}\label{TR-1} {\rm Tr}:W^{1,a}_p(\Omega)\longrightarrow B^s_p(\partial\Omega) \end{equation} \noindent is well-defined, linear, bounded, onto and has $V^{1,a}_p(\Omega)$ as its null-space. 
Furthermore, there exists a linear, continuous mapping \begin{equation}\label{Extension} {\mathcal E}:B^s_p(\partial\Omega)\longrightarrow W^{1,a}_p(\Omega), \end{equation} \noindent called an extension operator, such that ${\rm Tr}\circ{\mathcal E}=I$ (i.e., the operator (\ref{TR-1}) has a bounded, linear right-inverse). \end{lemma} \noindent{\bf Proof.} By a standard argument involving a smooth partition of unity it suffices to deal with the case when $\Omega$ is the domain lying above the graph of a Lipschitz function $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$. Composing with the bi-Lipschitz homeomorphism ${\mathbb{R}}^n_+\ni(X',X_n)\mapsto(X',\varphi(X')+X_n)\in\Omega$ further reduces matters to the case when $\Omega={\mathbb{R}}^n_+$, in which situation the claims in the lemma have been proved in {\bf\cite{Usp}}. $\Box$ \vskip 0.08in We need to establish an analogue of Lemma~\ref{trace-1} for higher smoothness spaces. While for $\Omega={\mathbb{R}}^n_+$ this has been done by Uspenski\u{\i} in {\bf\cite{Usp}}, the flattening argument used in Lemma~\ref{trace-1} is no longer effective in this context. Let us also mention here that a result similar in spirit, valid for any Lipschitz domain $\Omega$ but with $B_p^{m-1+s+1/p}(\Omega)$ in place of $W^{m,a}_p(\Omega)$ (cf. (\ref{incls}) for the relationship between these spaces), has been proved by A.\,Jonsson and H.\,Wallin in {\bf\cite{JW}} (in fact, in this latter context, these authors have dealt with much more general sets than Lipschitz domains). The result which serves our purposes is as follows. \begin{proposition}\label{trace-2} Let $1<p<\infty$, $-1/p<a<1-1/p$, $s=1-a-1/p\in(0,1)$ and $m\in{\mathbb{N}}$.
Define the {\rm higher} {\rm order} trace operator \begin{equation}\label{TR-11} {\rm tr}_{m-1}:W^{m,a}_p(\Omega)\longrightarrow \dot{B}^{m-1+s}_p(\partial\Omega) \end{equation} \noindent by setting \begin{equation}\label{Tr-DDD} {\rm tr}_{m-1}\,\,{\mathcal U} :=\Bigl\{i^{|\alpha|}\,{\rm Tr}\,[D^\alpha\,{\mathcal U}]\Bigr\} _{|\alpha|\leq m-1}, \end{equation} \noindent where the traces in the right-hand side are taken in the sense of Lemma~\ref{trace-1}. Then (\ref{TR-11})-(\ref{Tr-DDD}) is a well-defined, linear, bounded operator, which is onto and has $V^{m,a}_p(\Omega)$ as its null-space. Moreover, it has a bounded, linear right-inverse, i.e. there exists a linear, continuous operator \begin{equation}\label{Ext-222} {\mathcal E}:\dot{B}^{m-1+s}_p(\partial\Omega) \longrightarrow W^{m,a}_p(\Omega) \end{equation} \noindent such that \begin{equation}\label{Ext-333} \dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega) \Rightarrow i^{|\alpha|}\,{\rm Tr}\,[D^\alpha({\mathcal E}\,\dot{f})]=f_\alpha, \quad\forall\,\alpha\,:\,|\alpha|\leq m-1. \end{equation} \end{proposition} In order to facilitate the exposition, we isolate a couple of preliminary results prior to the proof of Proposition~\ref{trace-2}. \begin{lemma}\label{Lemma-R} Assume that $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ is a Lipschitz function and define $\Phi:{\mathbb{R}}^{n-1}\to\partial\Omega\hookrightarrow{\mathbb{R}}^n$ by setting $\Phi(X'):=(X',\varphi(X'))$ at each $X'\in{\mathbb{R}}^{n-1}$.
Define the Lipschitz domain $\Omega$ as $\{X=(X',X_n)\in{\mathbb{R}}^n:\,X_n>\varphi(X')\}$ and, for some fixed $m\in{\mathbb{N}}$, $p\in(1,\infty)$ and $s\in(0,1)$ consider a system of functions $f_\alpha\in B^s_p(\partial\Omega)$, $\alpha\in{\mathbb{N}}_0^n$, $|\alpha|\leq m-1$, with the property that \begin{equation}\label{B-CC} \frac{\partial}{\partial X_k}[f_\alpha(\Phi(X'))] =\sum_{j=1}^{n}f_{\alpha+e_j}(\Phi(X'))\partial_k\Phi_j(X'), \qquad 1\leq k\leq n-1, \end{equation} \noindent for each multi-index $\alpha$ of length $\leq m-2$, where $\{e_j\}_j$ is the canonical orthonormal basis in ${\mathbb{R}}^n$. Finally, for each $l\in\{1,...,m-1\}$ introduce $\Delta_l:=\{(t_1,...,t_{l}):\,0\leq t_{l}\leq\cdots\leq t_1\leq 1\}$, and define $R_\alpha(X,Y)$ as in (\ref{reminder}). Then if $\alpha$ is an arbitrary multi-index of length $\leq m-2$ and $r:=m-1-|\alpha|$, the following identity holds: \begin{eqnarray}\label{RRR=id} && R_\alpha(\Phi(X'),\Phi(Y')) \nonumber\\[6pt] &&\quad =\sum_{(j_1,...,j_{r})\in\{1,...,n\}^{r}}\Bigl\{\int_{\Delta_{r}}\Bigl[ f_{\alpha+e_{j_1}+\cdots+e_{j_r}}(\Phi(Y'+t_{r}(X'-Y'))) -f_{\alpha+e_{j_1}+\cdots+e_{j_r}}(\Phi(Y'))\Bigr] \nonumber\\[6pt] && \qquad\times\prod_{k=1}^{r}\,\nabla\Phi_{j_k}(Y'+t_{k}(X'-Y'))\cdot(X'-Y') \,dt_{r}\cdots dt_1\Bigr\},\qquad X',\,Y'\in{\mathbb{R}}^{n-1}. 
\end{eqnarray} \end{lemma} \noindent{\bf Proof.} We shall show that for any system of functions $\{f_\alpha\}_{|\alpha|\leq m-1}$ which satisfies (\ref{B-CC}), any multi-index $\alpha\in{\mathbb{N}}_0^n$ with $|\alpha|\leq m-2$ and any $l\in{\mathbb{N}}$ with $l\leq r:=m-1-|\alpha|$, there holds \begin{eqnarray}\label{FF=id} && f_\alpha(\Phi(X'))-\sum_{|\beta|\leq l}\frac{1}{\beta!} f_{\alpha+\beta}(\Phi(Y'))(\Phi(X')-\Phi(Y'))^\beta \nonumber\\[6pt] &&\quad=\sum_{(j_1,...,j_{l})\in\{1,...,n\}^{l}}\Bigl\{\int_{\Delta_l} \Bigl[f_{\alpha+e_{j_1}+\cdots+e_{j_l}}(\Phi(Y'+t_{l}(X'-Y'))) -f_{\alpha+e_{j_1}+\cdots+e_{j_l}}(\Phi(Y'))\Bigr] \nonumber\\[6pt] &&\qquad\qquad\qquad\qquad\quad \times\prod_{k=1}^{l}\,\nabla\Phi_{j_k}(Y'+t_{k}(X'-Y'))\cdot(X'-Y') \,dt_{l}\cdots dt_1\Bigr\}. \end{eqnarray} \noindent Clearly, (\ref{RRR=id}) follows from (\ref{reminder}) and (\ref{FF=id}) by taking $l:=r$. In order to justify (\ref{FF=id}) we proceed by induction on $l$. Concretely, when $l=1$ we may write, based on (\ref{B-CC}) and the Fundamental Theorem of Calculus, \begin{eqnarray}\label{RRR=0} && f_{\alpha}(\Phi(X'))-f_{\alpha}(\Phi(Y')) -\sum_{j=1}^n f_{\alpha+e_j}(\Phi(Y'))(\Phi_j(X')-\Phi_j(Y')) \nonumber\\[6pt] &&\quad =\int_0^1\frac{d}{dt}\Bigl[f_\alpha(\Phi(Y'+t(X'-Y')))\Bigr]\,dt \nonumber\\[6pt] && \qquad\qquad\quad-\sum_{j=1}^n f_{\alpha+e_j}(\Phi(Y')) \int_0^1\frac{d}{dt}\Bigl[\Phi_j(Y'+t(X'-Y'))\Bigr]\,dt \nonumber\\[6pt] &&\quad = \sum_{j=1}^n\Bigl\{\int_0^1\Bigl[f_{\alpha+e_j}(\Phi(Y'+t(X'-Y'))) -f_{\alpha+e_j}(\Phi(Y'))\Bigr] \nonumber\\[6pt] &&\qquad\qquad\times\nabla\Phi_j(Y'+t(X'-Y'))\cdot(X'-Y')\,dt\Bigr\}, \end{eqnarray} \noindent as wanted. To prove the version of (\ref{FF=id}) when $l$ is replaced by $l+1$ we split the sum in the left-hand side of (\ref{FF=id}), written for $l+1$ in place of $l$, according to whether $|\beta|\leq l$ or $|\beta|=l+1$ and denote the expressions created in this fashion by $S_1$ and $S_2$, respectively.
Next, based on (\ref{B-CC}) and the Fundamental Theorem of Calculus, we write \begin{eqnarray}\label{FTC} && f_{\alpha+e_{j_1}+\cdots+e_{j_l}}(\Phi(Y'+t_{l}(X'-Y'))) -f_{\alpha+e_{j_1}+\cdots+e_{j_l}}(\Phi(Y')) \\[6pt] && =\sum_{i=1}^n\int_0^{t_{l}}f_{\alpha+e_{j_1}+\cdots+e_{j_l}+e_i} (\Phi(Y'+t_{l+1}(X'-Y'))) \nabla\Phi_{i}(Y'+t_{l+1}(X'-Y'))\cdot(X'-Y')\,dt_{l+1} \nonumber \end{eqnarray} \noindent and use the induction hypothesis to conclude that \begin{eqnarray}\label{FF=id-2} && S_1=\sum_{(j_1,...,j_{l+1})\in\{1,...,n\}^{l+1}} \Bigl\{\int_{\Delta_{l+1}} f_{\alpha+e_{j_1}+\cdots+e_{j_{l+1}}}(\Phi(Y'+t_{l+1}(X'-Y'))) \nonumber\\[6pt] &&\qquad\qquad \times\prod_{k=1}^{l+1}\,\nabla\Phi_{j_k}(Y'+t_{k}(X'-Y'))\cdot(X'-Y') \,dt_{l+1}\cdots dt_1\Bigr\}. \end{eqnarray} \noindent Thus, if \begin{equation}\label{F=Psi} F_j(t):=\Phi_j(Y'+t(X'-Y'))-\Phi_j(Y'),\qquad 1\leq j\leq n, \end{equation} \noindent we may express $S_1$ in the form \begin{eqnarray}\label{another} && S_1=\sum_{(j_1,...,j_{l+1})\in\{1,...,n\}^{l+1}}\Bigl\{\int_{\Delta_{l+1}} \Bigl[f_{\alpha+e_{j_1}+\cdots+e_{j_{l+1}}}(\Phi(Y'+t_{l+1}(X'-Y'))) -f_{\alpha+e_{j_1}+\cdots+e_{j_{l+1}}}(\Phi(Y'))\Bigr] \nonumber\\[6pt] &&\qquad\qquad\qquad\qquad \times\prod_{k=1}^{l+1}\,\nabla\Phi_{j_k}(Y'+t_{k}(X'-Y'))\cdot(X'-Y') \,dt_{l+1}\cdots dt_1\Bigr\} \\[6pt] &&\qquad\quad +\sum_{(j_1,...,j_{l+1})\in\{1,...,n\}^{l+1}} f_{\alpha+e_{j_1}+\cdots+e_{j_{l+1}}}(\Phi(Y'))\int_{\Delta_{l+1}}\, \prod_{k=1}^{l+1}\,F'_{j_k}(t_{k})\,dt_{l+1}\cdots dt_1. \nonumber \end{eqnarray} Note that the first double sum above corresponds precisely to the expression in the right-hand side of (\ref{FF=id}) written with $l$ replaced by $l+1$. 
Our proof of (\ref{FF=id}) by induction is therefore complete as soon as we show that for each multi-index $\beta$ of length $l+1$, \begin{equation}\label{S2} \sum_{{(j_1,...,j_{l+1})\in\{1,...,n\}^{l+1}}\atop {e_{j_1}+\cdots+e_{j_{l+1}}=\beta}} \int_{\Delta_{l+1}}\, \prod_{k=1}^{l+1}\,F'_{j_k}(t_{k})\,dt_{l+1}\cdots dt_1 =\frac{1}{\beta!}(\Phi(X')-\Phi(Y'))^\beta. \end{equation} \noindent In turn, this is going to be a consequence of a general identity, to the effect that \begin{equation}\label{Fprime} \sum_{{(j_1,...,j_{l})\in\{1,...,n\}^{l}} \atop{e_{j_1}+\cdots+e_{j_{l}}=\beta}} \int_0^{t_0}\int_0^{t_1}\cdots\int_0^{t_{l-1}} \prod_{k=1}^{l}\,F'_{j_k}(t_{k})\,dt_{l}\cdots dt_1 =\frac{1}{\beta!}F(t_0)^\beta, \end{equation} \noindent for any Lipschitz function $F=(F_1,...,F_n):[0,1]\to{\mathbb{C}}^n$ with $F(0)=0$, any point $t_0\in[0,1]$, any $l\in{\mathbb{N}}$, and any multi-index $\beta$ of length $l$. Of course, the case most relevant for our purposes is when the $F_j$'s are as in (\ref{F=Psi}), $t_0=1$ and when $l$ is replaced by $l+1$, but the above formulation is best suited for proving (\ref{Fprime}) via induction on $l$. Indeed, the case $l=1$ is immediate from the Fundamental Theorem of Calculus and to pass from $l$ to $l+1$ it suffices to show that the two sides of (\ref{Fprime}) have the same derivative with respect to $t_0$. The important observation in carrying out the latter step is that the derivative of the left-hand side of (\ref{Fprime}) with respect to $t_0$ is an expression to which the current induction hypothesis is readily applicable. This justifies (\ref{Fprime}) and completes the proof of (\ref{RRR=id}).
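As a consistency check (not needed for the argument), consider (\ref{Fprime}) with $l=2$ and $\beta=2e_j$: the constraint $e_{j_1}+e_{j_2}=\beta$ leaves the single pair $j_1=j_2=j$ and, since $F_j(0)=0$, \begin{equation*} \int_0^{t_0}\!\!\int_0^{t_1}F'_{j}(t_1)\,F'_{j}(t_2)\,dt_2\,dt_1 =\int_0^{t_0}F'_{j}(t_1)\,F_{j}(t_1)\,dt_1 =\tfrac{1}{2}\,F_{j}(t_0)^2 =\frac{1}{\beta!}\,F(t_0)^\beta, \end{equation*} \noindent in agreement with the right-hand side of (\ref{Fprime}).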
$\Box$ \vskip 0.08in \begin{corollary}\label{Cor-R} Under the assumptions of Lemma~\ref{Lemma-R}, for each multi-index $\alpha$ of length $\leq m-2$ the following estimate holds \begin{equation}\label{RRR=est} \Bigl(\int_0^\infty \frac{r_\alpha(t)^p}{t^{p(m-1+s-|\alpha|)+n-1}}\,dt\Bigr)^{1/p} \leq C\sum_{|\gamma|=m-1}\|f_\gamma\|_{B^s_p(\partial\Omega)}, \end{equation} \noindent where the constant $C$ depends only on $n$, $p$, $s$ and $\|\nabla\varphi\|_{L_\infty({\mathbb{R}}^{n-1})}$. \end{corollary} \noindent{\bf Proof.} The identity (\ref{RRR=id}) gives \begin{eqnarray}\label{pw-est-R} && |R_\alpha(\Phi(X'),\Phi(Y'))| \\[6pt] &&\qquad\qquad \leq C|X'-Y'|^{m-1-|\alpha|} \sum_{|\gamma|=m-1}\int_0^1 |f_{\gamma}(\Phi(Y'+\tau(X'-Y')))-f_{\gamma}(\Phi(Y'))|\,d\tau \nonumber \end{eqnarray} \noindent for each $X',Y'\in{\mathbb{R}}^{n-1}$, where the constant $C$ depends only on $n$ and $\|\nabla\Phi\|_{L_\infty}$ which, in turn, is controlled in terms of $\|\nabla \varphi\|_{L_\infty}$. Given an arbitrary $t>0$ we now integrate the $p$-th power of both sides in (\ref{pw-est-R}) for $X',Y'\in{\mathbb{R}}^{n-1}$ subject to $|\Phi(X')-\Phi(Y')|<t$. Using Fubini's Theorem and making the change of variables $Z':=Y'+\tau(X'-Y')$ we obtain, after noticing that $|Z'-Y'|\leq \tau t$, \begin{eqnarray}\label{r-vs-om} r_\alpha(t)^p & \leq & C\,t^{p(m-1-|\alpha|)}\sum_{|\gamma|=m-1} \int\!\!\!\!\!\!\int\limits_{{X',Y'\in{\mathbb{R}}^{n-1}}\atop{|X'-Y'|< c\,t}} \int_0^1|f_{\gamma}(\Phi(Y'+\tau(X'-Y')))-f_{\gamma}(\Phi(Y'))|^p \,d\tau\,dX'dY' \nonumber\\[6pt] &\leq & C\,t^{p(m-1-|\alpha|)}\sum_{|\gamma|=m-1}\int_0^1 \int\!\!\!\!\!\!\int\limits_{{Z',Y'\in{\mathbb{R}}^{n-1}}\atop{|Z'-Y'|<c\,\tau t}} |f_{\gamma}(\Phi(Z'))-f_{\gamma}(\Phi(Y'))|^p\,dZ'dY'd\tau \nonumber\\[6pt] &\leq & C\,t^{p(m-1+s-|\alpha|)+n-1} \sum_{|\gamma|=m-1}\int_0^1 \frac{\omega_p(f_{\gamma},\,c\,\tau t)^p}{\tau^{n-1}t^{ps+n-1}}\,d\tau.
\end{eqnarray} \noindent Consequently, \begin{eqnarray}\label{r-om-bis} \int_0^\infty \frac{r_\alpha(t)^p}{t^{p(m-1+s-|\alpha|)+n-1}}\,dt & \leq & C\sum_{|\gamma|=m-1}\int_0^\infty \int_0^1 \frac{\omega_p(f_{\gamma},\,c\,\tau t)^p}{\tau^{n-1}t^{ps+n-1}}\,d\tau dt \nonumber\\[6pt] & \leq & C\sum_{|\gamma|=m-1}\Bigl(\int_0^\infty \frac{\omega_p(f_{\gamma},r)^p}{r^{ps+n-1}}\,dr\Bigr) \Bigl(\int_0^1\frac{1}{\tau^{1-sp}}\,d\tau\Bigr) \nonumber\\[6pt] & \leq & C\sum_{|\gamma|=m-1}\int_0^\infty \frac{\omega_p(f_{\gamma},t)^p}{t^{ps+n-1}}\,dt, \end{eqnarray} \noindent after making the change of variables $r:=c\,\tau t$ in the second step. With this in hand, the estimate (\ref{RRR=est}) follows by virtue of (\ref{Eqv}). $\Box$ \vskip 0.08in After this preamble, we are in a position to present the \vskip 0.08in \noindent{\bf Proof of Proposition~\ref{trace-2}.} We divide the proof into a series of steps, starting with \vskip 0.08in \noindent{\it Step I: The well-definedness of the trace.} Let ${\mathcal U}$ be an arbitrary function in $W^{m,a}_p(\Omega)$ and set \begin{equation}\label{ff-aa} f_\alpha:=i^{|\alpha|}\,{\rm Tr}\,[D^\alpha\,{\mathcal U}],\qquad \forall\,\alpha\,:\,|\alpha|\leq m-1. \end{equation} \noindent It follows from Lemma~\ref{trace-1} that these trace functions are well-defined and, in fact, \begin{equation}\label{falpha} \sum_{|\alpha|\leq m-1}\|f_\alpha\|_{B^s_p(\partial\Omega)} \leq C\|\,{\mathcal U}\|_{W^{m,a}_p(\Omega)}. \end{equation} In order to prove that $\dot{f}:=\{f_\alpha\}_{|\alpha|\leq m-1}$ belongs to $\dot{B}^{m-1+s}_p(\partial\Omega)$, let $R_\alpha(X,Y)$ and $r_\alpha(t)$ be as in (\ref{reminder})-(\ref{rem-Rr}). Our goal is to show that for every multi-index $\alpha$ with $|\alpha|\leq m-1$, \begin{equation}\label{B-alpha} \Bigl(\int_0^\infty \frac{r_\alpha(t)^p}{t^{p(m-1+s-|\alpha|)+n-1}}\,dt\Bigr)^{1/p} \leq C\|\,{\mathcal U}\|_{W^{m,a}_p(\Omega)}.
\end{equation} \noindent To this end, we first observe that if $|\alpha|=m-1$ then the expression in the left-hand side of (\ref{B-alpha}) is majorized by $C\Bigl(\int_0^\infty\omega_p(f_\alpha,t)^p/t^{ps+n-1}\,dt\Bigr)^{1/p}$ which, by (\ref{Eqv}) and (\ref{falpha}), is indeed $\leq C\|\,{\mathcal U}\|_{W^{m,a}_p(\Omega)}$. To treat the case when $|\alpha|<m-1$ we assume that $\Omega$ is locally represented as $\{X:\,X_n>\varphi(X')\}$ for some Lipschitz function $\varphi:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$ and, as before, set $\Phi(X'):=(X',\varphi(X'))$, $X'\in{\mathbb{R}}^{n-1}$. Then (\ref{B-CC}) holds, thanks to (\ref{ff-aa}), for every multi-index $\alpha$ of length $\leq m-2$. Consequently, Corollary~\ref{Cor-R} applies and, in concert with (\ref{falpha}), yields (\ref{B-alpha}). This proves that the operator (\ref{TR-11})-(\ref{Tr-DDD}) is well-defined and bounded. \vskip 0.08in \noindent{\it Step II: The extension operator.} We introduce a co-boundary operator ${\mathcal E}$ which acts on $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in \dot{B}^{m-1+s}_p(\partial\Omega)$ according to \begin{equation}\label{def-Ee} ({\mathcal E}\dot{f})(X) =\int_{\partial\Omega}{\mathcal K}(X,Y)\,P(X,Y)\,d\sigma_Y, \qquad X\in\Omega, \end{equation} \noindent where $P(X,Y)$ is the polynomial associated with $\dot{f}$ as in (\ref{PPP}). The integral kernel ${\mathcal K}$ is assumed to satisfy \begin{eqnarray}\label{ker-prp} && \int_{\partial\Omega}{\mathcal K}(X,Y)\,d\sigma_Y=1 \qquad\mbox{for all }\,X\in\Omega, \\[6pt] && |D^\alpha_X{\mathcal K}(X,Y)|\leq c_\alpha\,\rho(X)^{1-n-|\alpha|}, \quad\forall\,X\in\Omega,\,\,\forall\,Y\in\partial\Omega, \label{more-Kp} \end{eqnarray} \noindent where $\alpha$ is an arbitrary multi-index, and \begin{equation}\label{last-Kp} {\mathcal K}(X,Y)=0\quad\mbox{if }\,\,|X-Y|\geq 2\rho(X). 
\end{equation} \noindent One can take, for instance, the kernel \begin{equation}\label{K-def} {\mathcal K}(X,Y):=\eta\left(\frac{X-Y}{\varkappa\rho_{\rm reg}(X)}\right) \left(\int_{\partial\Omega}\eta\left(\frac{X-Z}{\varkappa\rho_{\rm reg}(X)} \right)d\sigma_Z\right)^{-1}, \end{equation} \noindent where $\eta\in C^\infty_0(B_2)$, $\eta=1$ on $B_1$, $\eta\geq 0$ and $\varkappa$ is a positive constant depending on the Lipschitz constant of $\partial\Omega$. Here, as before, $\rho_{\rm reg}(X)$ stands for the regularized distance from $X$ to $\partial\Omega$. For each $X\in\Omega$ and $Z\in\partial\Omega$ and for every multi-index $\gamma$ with $|\gamma|=m$ we then obtain \begin{equation}\label{N0} D^\gamma{\mathcal E}\dot{f}(X) =\sum_{{\alpha+\beta=\gamma}\atop{|\alpha|\geq 1}} \frac{\gamma!}{\alpha!\beta!}\int_{\partial\Omega} D^\alpha_X{\mathcal K}(X,Y)\,(P_\beta(X,Y)-P_\beta (X,Z))\,d\sigma_Y. \end{equation} \noindent If for a fixed $\mu>1$ and for each $X\in\Omega$ and $t>0$ we set \begin{equation}\label{def-Gamma} \Gamma_t:=\{Y\in\partial\Omega:\,|X-Y|<\mu t\} \end{equation} \noindent we may then estimate \begin{eqnarray}\label{N1} |D^\gamma{\mathcal E}\dot{f}(X)|^p &\leq & C \sum_{{\alpha+\beta=\gamma}\atop{|\alpha|\geq 1}} \rho(X)^{-p|\alpha|}{\int{\mkern-19mu}-}_{\Gamma_{\rho(X)}} |P_\beta(X,Y)-P_\beta(X,Z)|^p\,d\sigma_Y \nonumber\\[6pt] & \leq & C\sum_{{\alpha+\beta=\gamma}\atop{|\alpha|\geq 1}} \sum_{|\beta|+|\delta|\leq m-1}\rho(X)^{-p|\alpha|} {\int{\mkern-19mu}-}_{\Gamma_{\rho(X)}} |R_{\delta+\beta}(Y,Z)|^p\,|X-Y|^{p|\delta|}\,d\sigma_Y, \nonumber\\[6pt] & \leq & C\sum_{|\tau|\leq m-1}\rho(X)^{p(|\tau|-m)} {\int{\mkern-19mu}-}_{\Gamma_{\rho(X)}}|R_{\tau}(Y,Z)|^p\,d\sigma_Y, \end{eqnarray} \noindent where we have used H\"older's inequality and (\ref{PR}). 
Averaging the extreme terms in (\ref{N1}) for $Z$ in $\Gamma_{\rho(X)}$, we arrive at \begin{equation}\label{N2} |D^\gamma{\mathcal E}\dot{f}(X)|^p \leq C \sum_{|\tau|\leq m-1}\rho(X)^{p(|\tau|-m)-2(n-1)} \int_{\Gamma_{\rho(X)}}\!\int_{\Gamma_{\rho(X)}} |R_\tau(Y,Z)|^p\,d\sigma_Yd\sigma_Z. \end{equation} Consider now a Whitney decomposition of $\Omega$ into a family of dyadic cubes, $\{Q_i\}_{i\in {\mathcal I}}$. In particular, $l_i:={\rm diam}\,Q_i\sim{\rm dist}\,(Q_i,\partial\Omega)$ uniformly for $i\in{\mathcal I}$. Thus, if $X\in Q_{i}$ for some ${i}\in I_{j}:=\{i\in{\mathcal I}:\,l_i=2^{-j}\}$, $j\in{\mathbb{Z}}$, the estimate (\ref{N2}) yields \begin{equation}\label{EST-E} |D^\gamma{\mathcal E}\dot{f}(X)| \leq C\sum_{|\tau|\leq m-1}2^{-j(|\tau|-m)} \Bigl(2^{2j(n-1)}\int\!\!\!\!\!\!\!\!\!\!\!\!\int \limits_{{Y,Z\in\partial\Omega\cap\,\varkappa\,Q_{i}}\atop {|Y-Z|<\varkappa\,2^{-j}}}|R_\tau(Y,Z)|^p\,d\sigma_Yd\sigma_Z\Bigr)^{1/p} \end{equation} \noindent for some $\varkappa=\varkappa(\partial\Omega)>1$. In fact, by choosing the constant $\mu$ in (\ref{def-Gamma}) sufficiently close to $1$, matters can be arranged so that the family $\{\varkappa Q_i\}_{i\in{\mathcal I}}$ has finite overlap. 
Keeping this in mind and availing ourselves of the fact that $\rho(X)\sim l_i$ uniformly for $X\in Q_i$, $i\in{\mathcal I}$, for each multi-index $\gamma$ of length $m$ we may then estimate: \begin{eqnarray}\label{bigstep} && \int_{\Omega}|D^\gamma{\mathcal E}\dot{f}(X)|^p\rho(X)^{p(1-s)-1}\,dX \nonumber\\[6pt] && \quad \leq C\sum_{j\in{\mathbb{Z}}}\sum_{i\in I_j} 2^{-jp(1-s)-j} \int_{Q_i}|D^\gamma{\mathcal E}\dot{f}(X)|^p\,dX \nonumber\\[6pt] && \quad \leq C\sum_{j\in{\mathbb{Z}}}\sum_{i\in I_j}\sum_{|\tau|\leq m-1} 2^{jp(m-1+s-|\tau|)+j(n-1)} \int\!\!\!\!\!\!\!\!\!\!\!\!\int \limits_{{Y,Z\in\partial\Omega\cap\,\varkappa\,Q_i}\atop {|Y-Z|<\varkappa\,2^{-j}}}|R_\tau(Y,Z)|^p\,d\sigma_Yd\sigma_Z \nonumber\\[6pt] && \quad \leq C\sum_{j\in{\mathbb{Z}}}\sum_{|\tau|\leq m-1} 2^{jp(m-1+s-|\tau|)+j(n-1)} \int\!\!\!\!\!\!\!\!\!\!\!\!\int \limits_{{Y,Z\in\partial\Omega}\atop {|Y-Z|<\varkappa\,2^{-j}}}|R_\tau(Y,Z)|^p\,d\sigma_Yd\sigma_Z \nonumber\\[6pt] && \quad \leq C\sum_{|\tau|\leq m-1} \int_0^\infty\frac{r_\tau(t)^p}{t^{p(m-1+s-|\tau|)+n-1}}\,dt \nonumber\\[6pt] && \quad \leq C\|\dot{f}\|^p_{\dot{B}^{m-1+s}_p(\partial\Omega)}, \end{eqnarray} \noindent where in the last step we have used (\ref{Bes-Nr}). This proves that the operator (\ref{Extension}) is well-defined and bounded. \vskip 0.08in \noindent{\it Step III: The right-invertibility property.} We shall now show that the operator (\ref{def-Ee}) is a right-inverse for the trace operator (\ref{TR-11}), i.e., whenever $\dot{f}=\{f_\gamma\}_{|\gamma|\leq m-1}\in \dot{B}^{m-1+s}_p(\partial\Omega)$, there holds \begin{equation}\label{N3a} f_\gamma=i^{|\gamma|}\,{\rm Tr}[D^\gamma{\mathcal E}\dot{f}] \end{equation} \noindent for every multi-index $\gamma$ of length $\leq m-1$.
To this end, for $|\gamma|\leq m-1$ we write \begin{equation}\label{N4} D^\gamma{\mathcal E}\dot{f}(X)-{\mathcal E}_\gamma\dot{f}(X) =\sum_{{\alpha+\beta=\gamma}\atop{|\alpha|\geq 1}} \frac{\gamma!}{\alpha!\beta!}\int_{\partial\Omega} D^\alpha_X {\mathcal K}(X,Y)(P_\beta(X,Y)-P_\beta(X,Z))\,d\sigma_Y, \end{equation} \noindent where \begin{equation}\label{N3b} {\mathcal E}_\gamma\dot{f}(X) :=\int_{\partial\Omega}{\mathcal K}(X,Y)\, P_\gamma(X,Y)\,d\sigma_Y, \qquad X\in\Omega. \end{equation} \noindent Estimating the right-hand side in (\ref{N4}) in the same way as we did with the right-hand side of (\ref{N0}), we obtain \begin{eqnarray}\label{Dgamma-E} \int_{\Omega} |D^\gamma{\mathcal E}\dot{f}(X)-{\mathcal E}_\gamma\dot{f}(X)|^p \rho(X)^{-ps-1}\,dX &\leq & C\sum_{|\tau|\leq m-1}\int_0^\infty\frac{r_\tau(t)^p} {t^{p(|\gamma|+s-|\tau|)+n-1}}\,dt \nonumber\\[6pt] & \leq & C\,\|\dot{f}\|^p_{\dot{B}^{m-1+s}_p(\partial\Omega)}. \end{eqnarray} \noindent In a similar fashion, we check that \begin{eqnarray}\label{sim-fash} && \int_{\Omega}|\nabla (D^\gamma{\mathcal E}\dot{f}(X) -{\mathcal E}_\gamma\dot{f}(X)) |^p \rho(X)^{p-ps-1}\,dX \nonumber\\[6pt] &&\qquad\qquad \leq C\sum_{|\tau|\leq m-1}\int_0^\infty\frac{r_\tau(t)^p} {t^{p(|\gamma|+s-|\tau|)+n-1}}\,dt \leq C\,\|\dot{f}\|^p_{\dot{B}^{m-1+s}_p(\partial\Omega)}. \end{eqnarray} \noindent The two last inequalities imply $D^\gamma{\mathcal E}\dot{f}-{\mathcal E}_\gamma\dot{f}\in V^{1,a}_p(\Omega)$ and, therefore, \begin{equation}\label{N4a} {\rm Tr}\,(D^\gamma{\mathcal E}\dot{f}-{\mathcal E}_\gamma\dot{f})=0. \end{equation} Going further, let us set \begin{equation}\label{EgX} Eg(X):=\int_{\partial\Omega}{\mathcal K}(X,Y)\,g(Y)\,d\sigma_Y, \qquad X\in\Omega. \end{equation} \noindent A simpler version of the reasoning in Step~II yields that $E$ maps $B^s_p(\partial\Omega)$ boundedly into $W^{1,a}_p(\Omega)$.
Also, a standard argument based on the Poisson kernel-like behavior of ${\mathcal K}(X,Y)$ shows that ${\rm Tr}\,Eg=g$ for each $g\in B^s_p(\partial\Omega)$. Based on the definition (\ref{PPP}) and (\ref{N3b}) we have \begin{eqnarray}\label{E-E} && |{\mathcal E}_\gamma\dot{f}(X)-Ef_\gamma(X)|^p +\rho(X)^p|\nabla({\mathcal E}_\gamma\dot{f}(X)-Ef_\gamma(X))|^p \nonumber\\[6pt] && \qquad\qquad \leq C\sum_{{|\beta|\leq m-1-|\gamma|}\atop{|\beta|\geq 1}} \rho(X)^{p|\beta|}{\int{\mkern-19mu}-}_{\Gamma_{\rho(X)}} |f_{\gamma+\beta}(Y)|^p\,d\sigma_Y. \end{eqnarray} \noindent Consequently, for an arbitrary Whitney cube $Q_i$ we have \begin{eqnarray}\label{n1} && \int_{Q_i}|{\mathcal E}_\gamma\dot{f}(X)-Ef_\gamma(X)|^p\rho(X)^{-ps-1}\,dX +\int_{Q_i} |\nabla({\mathcal E}_\gamma\dot{f}(X)-Ef_\gamma(X))|^p\rho(X)^{p-ps-1}\,dX \nonumber\\[6pt] && \qquad\qquad\qquad\qquad\qquad\qquad \leq C\sum_{{|\beta|\leq m-1-|\gamma|}\atop{|\beta|\geq 1}} l_i^{p(|\beta|-s)}\int_{\partial\Omega\cap \varkappa Q_i} |f_{\gamma+\beta}(Y)|^p\,d\sigma_Y. \end{eqnarray} \noindent Summing over all Whitney cubes we find \begin{equation}\label{Sm-wb} \|{\mathcal E}_\gamma\dot{f}-Ef_\gamma\|_{V_p^{1,a}(\Omega)} \leq C\sum_{|\alpha|\leq m-1}\|f_\alpha\|_{L_p(\partial\Omega)} \end{equation} \noindent which implies \begin{equation}\label{N5} {\rm Tr}\,({\mathcal E}_\gamma\dot{f}-Ef_\gamma) =0. \end{equation} \noindent Finally, combining (\ref{N5}), (\ref{N4a}), and ${\rm Tr}\,Ef_\gamma=f_\gamma$, we arrive at (\ref{N3a}). \vskip 0.08in \noindent{\it Step IV: The kernel of the trace.} We now turn to the task of identifying the null-space of the trace operator (\ref{TR-11})-(\ref{Tr-DDD}). For each $k\in{\mathbb{N}}_0$ we denote by ${\mathcal P}_k$ the collection of all vector-valued, complex coefficient polynomials of degree $\leq k$ (and agree that ${\mathcal P}_k= 0$ whenever $k$ is a negative integer).
The claim we make at this stage is that the null-space of the operator \begin{equation}\label{TRW} W^{m,a}_p(\Omega)\ni {\mathcal W}\mapsto \Bigl\{{\rm Tr}\,[D^\gamma{\mathcal W}]\Bigr\}_{|\gamma|=m-1} \in B^s_p(\partial\Omega) \end{equation} \noindent is given by \begin{equation}\label{Null-tr} {\mathcal P}_{m-2}+V^{m,a}_p(\Omega). \end{equation} \noindent The fact that the null-space of the trace operator (\ref{TR-11})-(\ref{Tr-DDD}) is $V^{m,a}_p(\Omega)$ follows readily from this. That (\ref{Null-tr}) is included in the null-space of the operator (\ref{TRW}) is obvious. The opposite inclusion amounts to showing that if ${\mathcal W}\in W_p^{m,a}(\Omega)$ is such that ${\rm Tr}\,[D^\gamma{\mathcal W}]=0$ for all multi-indices $\gamma$ with $|\gamma|=m-1$, then there exists $P_{m-2}\in {\mathcal P}_{m-2}$ with the property that ${\mathcal W}-P_{m-2}\in V_p^{m,a}(\Omega)$. To this end, we note that the case $m=1$ is a consequence of (\ref{Hardy}) and consider next the case $m=2$, i.e. when \begin{equation}\label{WWT} {\mathcal W}\in W_p^{2,a}(\Omega),\qquad{\rm Tr}\,[\nabla{\mathcal W}]=0 \quad\mbox{on}\,\,\partial\Omega. \end{equation} \noindent Assume that $\{{\mathcal W}_j\}_{j\geq 1}$ is a sequence of vector-valued functions, smooth in $\overline{\Omega}$ (even polynomials), approximating ${\mathcal W}$ in $W_p^{2,a}(\Omega)$. In particular, \begin{equation}\label{w1} {\rm Tr}\,[\nabla{\mathcal W}_j]\to 0\,\,\,\mbox{ in }\,\,\,L_p(\partial\Omega) \,\,\,\mbox{ as }\,\,j\to\infty.
\end{equation} \noindent If in a neighborhood of a point on $\partial\Omega$ the domain $\Omega$ is given by $\{X:\,X_n>\varphi(X')\}$ for some Lipschitz function $\varphi$, the following chain rule holds for the gradient of the function $w_j:B'\ni X'\mapsto{\mathcal W}_j(X',\varphi(X'))$, where $B'$ is an $(n-1)$-dimensional ball: \begin{equation}\label{w2} \nabla w_j(X')= \Bigl(\nabla_{Y'}{\mathcal W}_j(Y',\varphi(X'))\Bigr)\Bigl|_{Y'= X'} +\Bigl(\frac{\partial}{\partial Y_n}{\mathcal W}_j(X',Y_n)\Bigr) \Bigl|_{Y_n=\varphi(X')}\,\nabla\varphi(X'). \end{equation} \noindent Since the sequence $\{w_j\}_{j\geq 1}$ is bounded in $L_p(B')$ and $\nabla w_j\to 0$ in $L_p(B')$, it follows that there exists a subsequence $\{j_i\}_i$ such that $w_{j_i}\to const$ in $L_p(B')$ (see Theorem~1.1.12/2 in {\bf\cite{Maz1}}). Hence, ${\rm Tr}\,{\mathcal W}=P_0=const$ on $\partial\Omega$. In view of ${\rm Tr}\,[{\mathcal W}-P_0]=0$ and ${\rm Tr}\,[\nabla{\mathcal W}]=0$, we may conclude that ${\mathcal W}-P_0\in V_p^{2,a}(\Omega)$ by Hardy's inequality. The general case follows in an inductive fashion, by reasoning as before with $D^\alpha{\mathcal W}$, $|\alpha|=m-2$, in place of ${\mathcal W}$. $\Box$ \vskip 0.08in We now present a short proof of (\ref{Bes-X}), based on Proposition~\ref{trace-2}. \begin{proposition}\label{B-EQ} Assume that $1<p<\infty$, $s\in(0,1)$ and $m\in{\mathbb{N}}$. Then \begin{equation}\label{eQ-11} \|\dot{f}\|_{\dot{B}^{m-1+s}_p(\partial\Omega)}\sim \sum_{|\alpha|\leq m-1} \|f_\alpha\|_{B^{s}_p(\partial\Omega)}, \end{equation} \noindent uniformly for $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$. As a consequence, (\ref{Bes-X}) holds. \end{proposition} \noindent{\bf Proof.} The left-pointing inequality in (\ref{eQ-11}) is implicit in (\ref{RRR=est}).
As for the opposite one, let $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$ and, with $a:=1-s-1/p$, consider ${\mathcal U}:={\mathcal E}(\dot{f})\in W^{m,a}_p(\Omega)$. Then Lemma~\ref{trace-1} implies that, for each multi-index $\alpha$ of length $\leq m-1$, the function $f_\alpha=i^{|\alpha|}\,{\rm Tr}\,[D^\alpha{\mathcal U}]$ belongs to $B^s_p(\partial\Omega)$, plus a naturally accompanying norm estimate. This concludes the proof of (\ref{eQ-11}). Finally, the last claim in the proposition is a consequence of (\ref{eQ-11}), (\ref{dense}) and the fact that the operator (\ref{TR-11})-(\ref{Tr-DDD}) is onto. $\Box$ \vskip 0.08in We include one more equivalent characterization of the space $\dot{B}^{m-1+s}_p(\partial\Omega)$, in the spirit of work in {\bf\cite{AP}}, {\bf\cite{PV}}, {\bf\cite{Ve2}}. To state it, recall that $\{e_j\}_j$ is the canonical orthonormal basis in ${\mathbb{R}}^n$. \begin{proposition}\label{CC-Aray} Assume that $1<p<\infty$, $s\in(0,1)$ and $m\in{\mathbb{N}}$. Then \begin{equation}\label{eQ0} \{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega) \Longleftrightarrow \left\{ \begin{array}{l} f_\alpha\in B^{s}_p(\partial\Omega),\quad \forall\,\alpha\,:\,|\alpha|\leq m-1 \\[10pt] \qquad\qquad\qquad \mbox{ and } \\[10pt] (\nu_j\partial_k-\nu_k\partial_j)f_{\alpha}= \nu_jf_{\alpha+e_k}-\nu_kf_{\alpha+e_j} \\[6pt] \forall\,\alpha\,:\,|\alpha|\leq m-2,\quad\forall\,j,k\in\{1,...,n\}. \end{array} \right. \end{equation} \end{proposition} \noindent{\bf Proof.} The left-to-right implication is a consequence of (\ref{eQ-11}) and of the fact that (\ref{ff-aa}) holds for some ${\mathcal U}\in W^{m,a}_p(\Omega)$ (cf. Proposition~\ref{trace-2}). As for the opposite implication, we proceed as in the proof of Proposition~\ref{trace-2} and estimate (\ref{rem-Rr}) based on the identities (\ref{B-CC}) and knowledge that $f_\alpha$ belongs to $B^{s}_p(\partial\Omega)$ for each $\alpha$ of length $\leq m-1$. 
$\Box$ \vskip 0.08in We close this section with two remarks on the nature of the space $\dot{B}^{m-1+s}_p(\partial\Omega)$. First, we claim that the assignment \begin{equation}\label{ass} \dot{B}^{m-1+s}_p(\partial\Omega)\ni\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1} \mapsto \Bigl\{i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\nu^\alpha\,f_\alpha\Bigr\} _{0\leq k\leq m-1}\in L_p(\partial\Omega) \end{equation} \noindent is one-to-one. This is readily justified with the help of the identity \begin{equation}\label{m-xxx} D^\alpha=i^{-|\alpha|}\,\nu^\alpha\, \frac{\partial^{|\alpha|}}{\partial\nu^{|\alpha|}} +\sum_{|\beta|=|\alpha|-1}\sum_{j,k=1}^n p_{\alpha,\beta,j,k}(\nu) \frac{\partial}{\partial\tau_{jk}}D^\beta \end{equation} \noindent where $\partial/\partial\tau_{jk}:=\nu_j\partial/\partial x_k -\nu_k\partial/\partial x_j$ and the $p_{\alpha,\beta,j,k}$'s are polynomial functions. Indeed, let $\dot{f}\in \dot{B}^{m-1+s}_p(\partial\Omega)$ be mapped to zero by the assignment (\ref{ass}) and consider ${\mathcal U}:={\mathcal E}(\dot{f})\in W^{m,a}_p(\Omega)$. Then $f_\alpha=i^{|\alpha|}\,{\rm Tr}\,\,[D^\alpha\,{\mathcal U}]$ on $\partial\Omega$ for each $\alpha$ with $|\alpha|\leq m-1$ and, granted the current hypotheses, $\partial^k{\mathcal U}/\partial\nu^k=0$ for $k=0,1,...,m-1$. Consequently, (\ref{m-xxx}) and induction on $|\alpha|$ yield that ${\rm Tr}\,\,[D^\alpha\,{\mathcal U}]=0$ on $\partial\Omega$ for each $\alpha$ with $|\alpha|\leq m-1$. Thus, $f_\alpha=0$ for each $\alpha$ with $|\alpha|\leq m-1$, as desired. 
The elementary identity (\ref{m-xxx}) can be proved by writing \begin{eqnarray}\label{m-xxx2} i^{|\alpha|}\,D^\alpha & = & \prod_{j=1}^n\Bigl(\frac{\partial}{\partial x_j} \Bigr)^{\alpha_j} \\[6pt] & = & \prod_{j=1}^n\Bigl[\sum_{k=1}^n\xi_k\Bigl(\xi_k \frac{\partial}{\partial x_j}-\xi_j\frac{\partial}{\partial x_k}\Bigr) +\sum_{k=1}^n\xi_j\xi_k\frac{\partial}{\partial x_k}\Bigr]^{\alpha_j} \Bigl|_{\xi=\nu} \nonumber\\[6pt] & = & \prod_{j=1}^n\Bigl[\sum_{l=0}^{\alpha_j}\frac{\alpha_j!}{l!(\alpha_j-l)!} \Bigl(\sum_{k=1}^n\xi_k\Bigl(\xi_k \frac{\partial}{\partial x_j}-\xi_j\frac{\partial}{\partial x_k}\Bigr)\Bigr) ^{\alpha_j-l}\nu_j^{l}\frac{\partial^l}{\partial\nu^l}\Bigr]\Bigl|_{\xi=\nu} \nonumber\\[6pt] & = & \prod_{j=1}^n\Bigl[ \nu_j^{\alpha_j}\frac{\partial^{\alpha_j}}{\partial\nu^{\alpha_j}}+ \sum_{l=0}^{\alpha_j-1}\frac{\alpha_j!}{l!(\alpha_j-l)!} \Bigl(\sum_{k=1}^n\xi_k\Bigl(\xi_k \frac{\partial}{\partial x_j}-\xi_j\frac{\partial}{\partial x_k}\Bigr)\Bigr) ^{\alpha_j-l}\nu_j^{l}\frac{\partial^l}{\partial\nu^l}\Bigr]\Bigl|_{\xi=\nu} \nonumber \end{eqnarray} \noindent and noticing that $\prod_{j=1}^n\nu_j^{\alpha_j}\partial^{\alpha_j}/\partial\nu^{\alpha_j} =\nu^\alpha\partial^{|\alpha|}/\partial\nu^{|\alpha|}$, whereas $(\xi_k\partial/\partial x_j-\xi_j\partial/\partial x_k)|_{\xi=\nu} =-\partial/\partial\tau_{jk}$. Our second remark concerns the image of the mapping (\ref{ass}) in the case when $\partial\Omega$ is sufficiently smooth. More precisely, assume that $\partial\Omega\in C^{m-1,1}$ and, for $0\leq k\leq m-1$, the space $B^{m-1-k+s}_p(\partial\Omega)$ is defined starting from $B^{m-1-k+s}_p({\mathbb{R}}^{n-1})$ and then transporting this space to $\partial\Omega$ via a smooth partition of unity argument and locally flattening the boundary (alternatively, $B^{m-1-k+s}_p(\partial\Omega)$ is the image of the trace operator acting from $B^{m-1-k+s+1/p}_p({\mathbb{R}}^{n})$). 
We claim that \begin{equation}\label{image} \partial\Omega\in C^{m-1,1}\Longrightarrow \mbox{the image of the mapping (\ref{ass}) is $\oplus_{k=0}^{m-1}B^{m-1-k+s}_p(\partial\Omega)$}. \end{equation} \noindent Indeed, granted that $\partial\Omega\in C^{m-1,1}$, it follows from (\ref{eQ0}) that $f_\alpha\in B^{m-1-|\alpha|+s}_p(\partial\Omega)$ for each $\alpha$ with $|\alpha|\leq m-1$ and, hence, $g_k:=i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\nu^\alpha\,f_\alpha \in B^{m-1-k+s}_p(\partial\Omega)$ for each $k\in\{0,...,m-1\}$. Conversely, given a family $\{g_k\}_{0\leq k\leq m-1}\in\oplus_{k=0}^{m-1}B^{m-1-k+s}_p(\partial\Omega)$, we claim that there exists $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$ such that $g_k=i^k\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\nu^\alpha\,f_\alpha$ for each $k\in\{0,...,m-1\}$. One way to see this is to start with ${\mathcal U}\in B^{m-1+s+1/p}_p(\Omega)$, a solution of $\Delta^m{\mathcal U}=0$ in $\Omega$, $\partial^k{\mathcal U}/{\partial\nu^k}=g_k \in B^{m-1-k+s}_p(\partial\Omega)$, $0\leq k\leq m-1$ (a system which satisfies the Shapiro-Lopatinskij condition) and then define the $f_\alpha$'s as in (\ref{ff-aa}). \subsection{Proof of the main result and further comments} Theorem~\ref{Theorem} is a particular case of the next theorem concerning the unique solvability of the Dirichlet problem in $W_p^{m,a}(\Omega)$. \begin{theorem}\label{Theorem1} Let all assumptions of Theorem~\ref{th1a} be satisfied. Also let ${\mathcal F}\in V_p^{-m,a}(\Omega)$. Then the Dirichlet problem \begin{equation}\label{m9} \left\{ \begin{array}{l} {\mathcal A}(X,D_X)\,{\mathcal U}={\mathcal F} \qquad\mbox{in}\,\,\Omega, \\[15pt] {\displaystyle{\frac{\partial^k{\mathcal U}}{\partial\nu^k}}} =g_k\quad\,\,\mbox{on}\,\,\partial\Omega,\,\,\,\,\,0\leq k\leq m-1, \end{array} \right. \end{equation} \noindent has a solution ${\mathcal U}\in W_p^{m,a}(\Omega)$ if and only if (\ref{data-B}) is satisfied.
In this latter case, the solution is unique and satisfies \begin{equation}\label{estUU} \|{\mathcal U}\|_{W_p^{m,a}(\Omega)} \leq C\sum_{|\alpha|\leq m-1}\|f_\alpha\|_{B^{s}_p(\partial\Omega)} +C\|{\mathcal F}\|_{V_p^{-m,a}(\Omega)}. \end{equation} \end{theorem} \noindent{\bf Proof.} It is clear from definitions that the operator \begin{equation}\label{ADXW} {\mathcal A}(X,D_X):W_p^{m,a}(\Omega)\longrightarrow V_p^{-m,a}(\Omega) \end{equation} \noindent is well-defined and bounded. Thus, granted that we seek solutions for (\ref{m9}) in the space $W_p^{m,a}(\Omega)$, the membership of ${\mathcal F}$ in $V_p^{-m,a}(\Omega)$, as well as the fact that the $g_k$'s satisfy (\ref{data-B}), are necessary conditions for the solvability of (\ref{m9}). Conversely, let $\dot{f}=\{f_\alpha\}_{|\alpha|\leq m-1}\in\dot{B}^{m-1+s}_p(\partial\Omega)$ be such that (\ref{data-B}) holds and, with ${\mathcal E}$ denoting the extension operator from Proposition~\ref{trace-2}, seek a solution for (\ref{m9}) in the form ${\mathcal U}={\mathcal E}(\dot{f})+{\mathcal W}$, where ${\mathcal W}\in V^{m,a}_p(\Omega)$ solves \begin{equation}\label{m10} \left\{ \begin{array}{l} {\mathcal A}(X,D_X){\mathcal W} ={\mathcal F}-{\mathcal A}(X,D_X)({\mathcal E}(\dot{f})) \qquad\mbox{in}\,\,\Omega, \\[15pt] {\rm Tr}\,\,[D^\gamma\,{\mathcal W}]=0\quad\mbox{on}\,\,\partial\Omega, \,\,\,\forall\,\gamma\,:\,|\gamma|\leq m-1. \end{array} \right. \end{equation} \noindent Since the boundary conditions in (\ref{m10}) are automatically satisfied if ${\mathcal W}\in V_p^{m,a}(\Omega)$, the solvability of (\ref{m10}) is a direct consequence of Theorem~\ref{th1a}. As for uniqueness, assume that ${\mathcal U}\in W_p^{m,a}(\Omega)$ solves (\ref{m9}) with ${\mathcal F}=0$ and $g_k=0$, $0\leq k\leq m-1$. From the fact that (\ref{ass}) is one-to-one, we infer that ${\rm Tr}\,\,[D^\gamma\,{\mathcal U}]=0$ on $\partial\Omega$ for all $\gamma$ with $|\gamma|\leq m-1$.
Then, by Proposition~\ref{trace-2}, ${\mathcal U}\in V_p^{m,a}(\Omega)$ is a null-solution of ${\mathcal A}(X,D_X)$. In turn, Theorem~\ref{th1a} gives that ${\mathcal U}=0$, proving uniqueness for (\ref{m9}). Finally, (\ref{estUU}) is a consequence of the results in \S{6.4}. $\Box$ \vskip 0.08in We conclude this section with a couple of comments, the first of which regards the effect of the presence of lower order terms. More specifically, assume that \begin{equation}\label{E444-bis} {\mathcal A}(X,D_X)\,{\mathcal U} :=\sum_{0\leq |\alpha|,|\beta|\leq m}D^\alpha({\mathcal A}_{\alpha\beta}(X) \,D^\beta{\mathcal U}),\qquad X\in\Omega, \end{equation} \noindent where the top part of ${\mathcal A}(X,D_X)$ satisfies the hypotheses made in Theorem~\ref{Theorem} and the lower order terms are bounded. Then the Dirichlet problem (\ref{m9}) is Fredholm solvable, of index zero, in the sense that a solution ${\mathcal U}\in W_p^{m,a}(\Omega)$ exists if and only if the data ${\mathcal F}$, $\{g_k\}_{0\leq k\leq m-1}$ satisfy finitely many linear conditions, whose number matches the dimension of the space of null-solutions for (\ref{m9}). Furthermore, the estimate \begin{equation}\label{estUU-bis} \|{\mathcal U}\|_{W_p^{m,a}(\Omega)} \leq C\, \Bigl(\, \|{\mathcal F}\|_{V_p^{-m,a}(\Omega)} +\sum_{|\alpha|\leq m-1}\|f_\alpha\|_{B^{s}_p(\partial\Omega)}+\|{\mathcal U}\|_{L_p(\Omega)}\Bigr) \end{equation} \noindent holds for any solution ${\mathcal U}\in W_p^{m,a}(\Omega)$ of (\ref{m9}). Indeed, the operator \begin{equation}\label{cal-AAA} {\mathcal A}:V_p^{m,a}(\Omega)\longrightarrow V_p^{-m,a}(\Omega) \end{equation} \noindent is Fredholm with index zero, as can be seen by decomposing ${\mathcal A}=\mathaccent"0017 {\mathcal A}+({\mathcal A}-\mathaccent"0017 {\mathcal A})$ where $\mathaccent"0017 {\mathcal A}:=\sum_{|\alpha|=|\beta|=m} D^\alpha {\mathcal A}_{\alpha\beta}\,D^\beta$, and then invoking Theorem~\ref{th1a}.
Now, it can be shown that the problem (\ref{m9}) is solvable if and only if ${\mathcal F}-{\mathcal A}(X,D_X){\mathcal E}\dot{f} \in {\rm Im}\,{\mathcal A}$, the image of the operator (\ref{cal-AAA}). Thus, if $T({\mathcal F},\{g_k\}_{0\leq k\leq m-1}):= {\mathcal F}-{\mathcal A}(X,D_X){\mathcal E}\dot{f}$, this membership entails $({\mathcal F},\{g_k\}_{0\leq k\leq m-1})\in T^{-1} \Bigl({\rm Im}\,{\mathcal A}\Bigr)$. Note that $T$ maps the space of data onto $V_p^{-m,a}(\Omega)$, hence the number of linearly independent compatibility conditions the data should satisfy is \begin{equation}\label{comp-cond-X} {\rm codim}\,T^{-1}\Bigl({\rm Im}\,{\mathcal A}\Bigr) ={\rm codim}\,({\rm Im}\,{\mathcal A}). \end{equation} \noindent On the other hand, from Proposition~\ref{trace-2} and the fact that (\ref{ass}) is one-to-one we infer that the space of null-solutions for (\ref{m9}) is precisely ${\rm ker}\,{\mathcal A}$, the kernel of the operator (\ref{cal-AAA}). Since, as already pointed out, this operator has index zero, it follows that the problem (\ref{m9}) has index zero. Finally, (\ref{estUU-bis}) follows from what we have proved so far via a standard reasoning as in {\bf\cite{Ho}}. Our last comment regards the statement of the Dirichlet problem (\ref{e0}) with data \begin{equation}\label{newdata} \partial^k{\cal U}/\partial\nu^k=g_k\in B_p^{m-1-k+s}(\partial\Omega), \qquad k=0,1,\ldots,m-1, \end{equation} \noindent where $B_p^{m-1-k+s}(\partial\Omega)$ is defined here as the {\it range of ${\rm Tr}$ acting from $B_p^{m-1-k+s+1/p}({\mathbb{R}}^n)$}. If $\partial\Omega$ is smooth ($C^{1,1}$ will do) this problem is, certainly, well-posed. Let us illustrate some features of this particular formulation as the smoothness of $\partial\Omega$ deteriorates.
Suppose we are looking for the solution ${\cal U}\in W_2^2(\Omega)$ of the Dirichlet problem for the biharmonic operator \begin{equation}\label{m9b} \left\{ \begin{array}{l} \Delta ^2\,{\cal U}=0\qquad\mbox{in}\,\,\Omega, \\[10pt] {\rm Tr}\,{\cal U}=g_1\qquad\mbox{on}\,\,\partial\Omega, \\[10pt] \langle\nu,{\rm Tr}\,[\nabla{\cal U}]\rangle =g_2\quad\mbox{on}\,\,\partial\Omega. \end{array} \right. \end{equation} \noindent The simplest class of data $(g_1, g_2)$ would be, of course, $B_2^{3/2}(\partial\Omega)\times B_2^{1/2}(\partial\Omega)$, where $B_2^{3/2}(\partial\Omega)$ and $B_2^{1/2}(\partial\Omega)$ are the spaces of traces on $\partial\Omega$ for functions in $ W_2^2(\Omega)$ and $ W_2^1(\Omega)$, respectively. However, this formulation has several serious drawbacks. The first one is that the mapping \begin{equation}\label{w22} W_2^2(\Omega)\ni {\cal U}\to\langle\nu,{\rm Tr}\,[\nabla{\mathcal U}]\rangle \in B_2^{1/2}(\partial\Omega) \end{equation} \noindent is generally unbounded. In fact, by choosing ${\mathcal U}$ to be a linear function we see that the continuity of (\ref{w22}) implies $\nu\in B_2^{1/2}(\partial\Omega)$ which is not necessarily the case for a Lipschitz domain, even for such a simple one as the square $S=[0,1]^2$. The same problem fails to have a solution in the class $W_2^2(\Omega)$ when $(g_1, g_2)$ is an arbitrary pair in $B_2^{3/2}(\partial\Omega)\times B_2^{1/2}(\partial\Omega)$. Indeed, consider the problem (\ref{m9b}) for $\Omega=S$ and the data $g_1 =0$ and $g_2 =1$. It is standard (see Theorem 7.2.4 in {\bf\cite{KMR1}} and Sect.\,7.1 in {\bf\cite{KMR2}}) that the main term of the asymptotics near the origin of any solution ${\mathcal U}$ in $W_2^1(S)$ is given in polar coordinates $(r,\omega)$ by \begin{equation}\label{asymp} \frac{2r}{\pi+2}\left( (\omega -\frac{\pi}{2}) \sin\omega - \omega\cos\omega\right).
\end{equation} \noindent Since this function does not belong to $W_2^2(S)$, there is no solution of problem (\ref{m9b}) in this space. \vskip 0.10in \noindent -------------------------------------- \vskip 0.20in \noindent {\tt Vladimir Maz'ya} \noindent Department of Mathematics \noindent Ohio State University \noindent Columbus, OH 43210, USA \noindent {\tt e-mail}: {\it vlmaz\@@math.ohio-state.edu} and \noindent Department of Mathematical Sciences \noindent University of Liverpool \noindent Liverpool L69 3BX, UK \vskip 0.15in \noindent {\tt Marius Mitrea} \noindent Department of Mathematics \noindent University of Missouri at Columbia \noindent Columbia, MO 65211, USA \noindent {\tt e-mail}: {\it marius\@@math.missouri.edu} \vskip 0.15in \noindent {\tt Tatyana Shaposhnikova} \noindent Department of Mathematics \noindent Ohio State University \noindent Columbus, OH 43210, USA \noindent {\tt e-mail}: {\it [email protected]} and \noindent Department of Mathematics \noindent Link\"oping University \noindent Link\"oping SE-581 83, Sweden \noindent {\tt e-mail}: {\it [email protected]} \end{document}
\begin{document} \maketitle\footnotetext{National Research University Higher School of Economics, Laboratory of Algorithms and Technologies for Network Analysis, 136 Rodionova St, Nizhny Novgorod 603093, Russia} \abstract{Graphical models are used in a variety of problems to uncover hidden structures. There is a huge number of different identification procedures, constructed for different purposes. It is therefore important to study the properties of such procedures and to compare them, in order to find the best procedure, or the best use case for a specific procedure. In this paper, several statistical identification procedures are compared using different measures, such as Type I and Type II errors and ROC AUC. \\} \keywords{identification procedure, statistical inference, risk function, ROC AUC} \section{Introduction} Graphical models are used in various areas of science, such as bioinformatics, economics, cryptography and many more \cite{drton, jordan}. The main reason for the extensive use of graphical models is their level of visualization, which allows a scientist to uncover hidden patterns and connections between entities. However, the reliability of the obtained network is under discussion. In this paper, Gaussian graphical models are discussed. A Gaussian graphical model is a graph in which every vertex is a random variable and the random vector is distributed according to a multivariate normal distribution \cite{drton}. Edges in the graph encode some kind of dependency between two variables. Three types of graphs are widely used. The first is the bi-directed graph; in this model, two random variables are marginally independent if there is no edge between the corresponding vertices. Such graphs are usually constructed from correlation matrices, where zero correlation means marginal independence, i.e., the absence of an edge in the graph.
Another case of independence is conditional independence given all other variables: undirected graphs are used for these patterns. Undirected graphs are constructed from concentration or partial correlation matrices, which are obtained from the inverse of the covariance matrix; a zero element of the concentration matrix means the absence of an edge between the two corresponding vertices. Finally, directed acyclic graphs display conditional independence on a subset of random variables. In the literature, there are several methods and approaches for statistical inference. Drton \& Perlman \cite{drton} suggested methods of family-wise error rate control, using different p-value adjustments. Those methods control the family-wise Type I error (the probability of at least one Type I error in the model) at a pre-determined level. Drton \& Perlman \cite{sinful} also suggested the SINful procedure as a way of obtaining more conservative and less conservative networks. Later, this procedure was made more robust by introducing the Minimal Covariance Determinant estimator \cite{gottard}. However, the authors do not suggest any algorithmic procedure to find the boundaries of significant, indeterminate and non-significant p-values. Another approach tries to obtain the best final network or correlation matrix corresponding to the graphical model by optimising some function which estimates the score of a model \cite{schafer, bayescovlasso, lasso1}. Basically, this approach comes in part from the field of optimisation. Recently, L1-regularization of the optimisation function has gained popularity in the probabilistic inference field \cite{bayescovlasso, lasso1}. The reason is that lasso regularization drives a large part of the parameters of the optimisation function to zero. This helps to distinguish zero correlation coefficients from non-zero values and to obtain a network where edges connect correlated variables.
This regularization is particularly helpful for sparse correlation matrices, where many elements are assumed to be zero. In this article, undirected Gaussian graphical models are discussed, and statistical methods of probabilistic inference are analysed. These methods are described by Drton \& Perlman \cite{drton}. The authors conducted experiments on procedures with different p-value adjustments. They showed that, in practice, step-down adjustments asymptotically achieve values of the Family-Wise Error Rate close to the theoretical control level as the number of observations increases. On the contrary, the other procedures show, in practice, a decreasing level of FWER with an increasing number of observations; their theoretically controlled FWER level is far from the practically achieved one. At the same time, Drton and Perlman do not analyze Type II errors. The goal of this article is to analyze the properties of the statistical procedures described in Drton \& Perlman's article. In addition to FWER, Type II errors, ROC AUC and risk functions will be compared. The Type II error is useful for estimating the total number of errors in a model or, in other words, the difference between the true model and the obtained model. ROC AUC allows one to compare different models and analyze them independently of the significance level. One more procedure will be added for comparison, called the simultaneous multiple testing procedure, which is optimal for the risk function in the class of all unbiased procedures \cite{risk}. \section{Undirected Gaussian graphical models} In this article, we concentrate on undirected Gaussian graphical models. We have a random vector $Y = (Y_1,..., Y_p)$, which is distributed according to the multivariate Gaussian distribution $N_p(\mu, \Sigma)$; hence the word ``Gaussian'' before ``graphical models''. The graph $G = (V,E)$ represents the network that is constructed using the information from the random variables.
The set $E$ of edges represents conditional independencies through Markov properties. It means that if the edge $(i,j)$ is absent from the set $E$, then the two corresponding random variables from the random vector are conditionally independent, where the condition is induced on all other variables: \begin{equation} Y_i \parallel Y_j \hspace{4pt} | \hspace{6pt} Y_{V\setminus\{i,j\}} \end{equation} This pairwise Markov property, or the conditional independence of two random variables, also corresponds to a zero element of the concentration matrix, which is obtained from the covariance matrix $\Sigma$ by inversion. This matrix can be coerced to the partial correlation matrix as well: if the elements of the concentration matrix are $\{\sigma^{ij}\}$, then: \begin{equation} \rho^{ij} = \frac{\sigma^{ij}}{\sqrt{\sigma^{ii} \sigma^{jj}}} \end{equation} Here is an example of a graph constructed from a concentration matrix. \begin{center} \begin{figure} \caption{Example of a graph, constructed with concentration matrix} \end{figure} \end{center} \begin{table}[h] \centering \begin{tabular}{| p{0.5cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | } \hline $\Sigma$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline 1 & 1 & 0.465 & 0 & 0 & 0.511 & 0.392 & 0 \\ \hline 2 & 0.465 & 1 & 0 & 0 & 0.448 & 0 & 0 \\ \hline 3 & 0 & 0 & 1 & 0 & 0 & 0.32 & 0 \\ \hline 4 & 0 & 0 & 0 & 1 & 0.262 & 0 & 0.314 \\ \hline 5 & 0.511 & 0.448 & 0 & 0.262 & 1 & 0.459 & 0.42 \\ \hline 6 & 0.392 & 0 & 0.32 & 0 & 0.459 & 1 & 0 \\ \hline 7 & 0 & 0 & 0 & 0.314 & 0.42 & 0 & 1 \\ \hline \end{tabular} \caption{Example of concentration matrix} \end{table} If the dimensionality of the random vector is $p$, then the number of possible edges is $P = \frac{p(p-1)}{2}$.
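The passage above describes how the concentration and partial correlation matrices are derived from the covariance matrix. This can be sketched in a few lines of NumPy (an illustration of ours, not code from the article; the function name is hypothetical):

```python
import numpy as np

def partial_correlations(cov):
    """Partial correlation matrix, following the normalisation formula above.

    The concentration matrix {sigma^{ij}} is the inverse of the covariance
    matrix; dividing by the square roots of its diagonal gives
    rho^{ij} = sigma^{ij} / sqrt(sigma^{ii} * sigma^{jj}).
    (The common statistical convention flips the sign off the diagonal;
    the zero pattern, which determines the graph, is the same either way.)
    """
    K = np.linalg.inv(np.asarray(cov, dtype=float))  # concentration matrix
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)
```

A zero entry of the returned matrix then corresponds to a missing edge in the undirected graph.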
For every edge, there is a hypothesis: \begin{equation} h_{ij}: Y_i \parallel Y_j \hspace{4pt} | \hspace{6pt} Y_{V\setminus\{i,j\}} \hspace{8pt} \mbox{or} \hspace{8pt} \rho^{ij} = 0 \end{equation} against the alternative \begin{equation} k_{ij}: Y_i \not\parallel Y_j \hspace{4pt} | \hspace{6pt} Y_{V\setminus\{i,j\}} \hspace{8pt} \mbox{or} \hspace{8pt} \rho^{ij} \ne 0 \end{equation} As a result, there are $P$ different hypotheses to test in order to determine the structure of the graph. A hypothesis is rejected if its p-value is smaller than some significance level $\alpha$ chosen beforehand. There are many procedures described in the literature for constructing networks from data. Usually, the exact steps of a procedure depend on the end goal; however, we can distinguish three common types of procedures. The first type might be called statistical, because these procedures rely on statistical properties of the source data in order to find the network. In particular, different identification procedures of this type are suggested by Drton \& Perlman \cite{drton}. Some properties of these statistical identification procedures will be examined in this paper. Other types include procedures that optimise some goodness-of-fit function based on the graph structure. Some procedures use the Bayesian approach, which is considered more demanding, because it requires prior information about the distribution and computation of the posterior distribution. However, in this paper only the properties of statistical procedures will be examined. The goal of any procedure is to uncover the underlying patterns and connections of random variables. This means that there are basically two kinds of networks. The true network, or true correlation matrix, defines how the random variables are connected in reality; however, we usually do not know the true network structure.
Therefore we use data from observations of the random variables, and from that data we construct a sample correlation matrix, or sample network. This network represents a graph which is considered to be close to the true network; however, the data may be misleading. As a result, there are errors that we can consider as a measure of uncertainty in the network. There are two types of errors: when we add an edge that does not exist in the true network, and when we miss an edge that exists in the true graph. A Type I error occurs when a hypothesis is rejected although it is true in reality. In our case, a Type I error means establishing an edge between two vertices whereas, in reality, the edge does not exist. Conversely, a Type II error occurs when we accept a hypothesis which is not true in reality; that is, we fail to establish an edge that exists in the true network. Based on these two types of errors, there are many different measures of uncertainty. The simplest measure is the number of Type I or Type II errors, or in other words, the total number of wrongly added or wrongly missed edges. Type I errors are also called False Positives (FP) and Type II errors are called False Negatives (FN); in that case, the number of correctly allocated edges is called True Positives (TP), and the number of correctly absent edges is True Negatives (TN). One of the most popular measures is the Family-Wise Error Rate (FWER), which is the probability of making at least one Type I error in the whole network, $FWER = P(FP>0)$. The False Discovery Rate (FDR) is the ratio between the number of Type I errors and the total number of edges in a sample network, $FDR = \frac{FP}{FP + TP}$.
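The counts and the derived measures described above can be sketched in plain Python (an illustrative implementation of ours, not from the article; edge sets are represented as flat boolean sequences over the $P$ possible edges):

```python
def error_counts(true_edges, est_edges):
    """TP, FP, TN, FN over the edge set.

    `true_edges` and `est_edges` are same-length sequences of booleans,
    one entry per possible edge (the upper triangle of the adjacency
    matrix, flattened).
    """
    tp = fp = tn = fn = 0
    for t, e in zip(true_edges, est_edges):
        if t and e:
            tp += 1          # correctly recovered edge
        elif not t and e:
            fp += 1          # Type I error: spurious edge
        elif not t and not e:
            tn += 1          # correctly absent edge
        else:
            fn += 1          # Type II error: missed edge
    return tp, fp, tn, fn

def fdr(tp, fp):
    """False Discovery Rate: FP / (FP + TP), zero when nothing is found."""
    return fp / (fp + tp) if fp + tp else 0.0

def fwer_estimate(fp_counts_per_trial):
    """Monte-Carlo FWER estimate: share of trials with at least one FP."""
    trials = list(fp_counts_per_trial)
    return sum(1 for c in trials if c > 0) / len(trials)
```

FWER itself is a probability, so in simulations it is estimated as the fraction of trials in which at least one false edge appeared, as in `fwer_estimate`.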
Other known measures of errors include the so-called risk function, which is a linear combination of the expected numbers of Type I and Type II errors: \begin{equation} R(Y,\alpha) = E(FP)(1-\alpha) + E(FN)\alpha \end{equation} Additionally, in this paper, we consider the area under the receiver operating characteristic (ROC) curve (ROC AUC). This curve is constructed for a procedure as follows: for every significance level, we calculate two characteristics: specificity, $Spe = \frac{TN}{FP+TN}$, and sensitivity, $Sen = \frac{TP}{TP+FN}$. We draw all points in a two-dimensional space, where $X = 1-Spe$ and $Y = Sen$. As a result, when the significance level is zero, we do not reject any hypothesis, because no p-value is less than zero; then $TP = 0$, $FP = 0$ and the resulting point on the plot is $(0,0)$. On the contrary, if the significance level is 1, by the same reasoning all hypotheses are rejected, $TN = 0$, $FN = 0$, and the resulting point is $(1,1)$. The resulting curve is thus drawn from $(0,0)$ to $(1,1)$. If on some interval the curve is situated under the line $y = x$, the output of the procedure might be inverted on that interval, which would only improve the procedure; optimal procedures are expected to lie above the line $y = x$. Naturally, one can estimate the area under the whole curve: if it is equal to one, the procedure is optimal and ideal for any significance level; if the area under the curve is close to 0.5, the procedure practically does not differ from a random decision. To sum up, ROC AUC allows one to estimate the efficiency of a procedure as a whole, across different significance levels. \section{Identification procedures} All the described measures allow us to compare different procedures. The procedures are described below. Suppose we have observations $Y^{(1)},...,Y^{(n)}$ from a given multivariate normal distribution, where each observation is a vector of length $p$.
The sample correlation matrix can be derived from the observations using the sample covariance matrix and the sample mean: \begin{equation} S = \frac{1}{n-1}\sum_{m=1}^{n}(Y^{(m)}-\overline{Y})(Y^{(m)}-\overline{Y})^T \end{equation} \begin{equation} \overline{Y}=\frac{1}{n}\sum_{m=1}^{n}Y^{(m)} \end{equation} Usually, statistical procedures require p-values. A p-value is the probability, under the null hypothesis, of observing a test statistic at least as extreme as the one computed from the data. The distribution of the sample correlation coefficient of two components of a normal random vector is known when the true correlation coefficient is zero: if $r_{ij}$ is such a sample correlation coefficient, then $\sqrt{n-2}\, r_{ij}/\sqrt{1-r_{ij}^2}$ has a t-distribution with $n-2$ degrees of freedom. For sample partial correlation coefficients, conditioning on the remaining $p-2$ variables reduces the number of degrees of freedom to $n-p$. Knowing this, we can obtain p-values for every sample correlation coefficient. The simultaneous multiple testing procedure is the simplest way to obtain the network. In this procedure, every hypothesis is tested independently at the same time with some chosen significance level. This means that we compare the significance level with the p-value of the hypothesis: if the p-value is lower, the hypothesis is rejected, and vice versa. It is worth noting that this procedure only controls the level of error in every individual hypothesis, not in the whole network, which may nevertheless be useful in some cases. Drton \& Perlman \cite{drton} described procedures that control FWER in a network by adjusting the p-value of every hypothesis.
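A p-value computation of this kind can be sketched without any heavy dependencies. Note the hedges: the article tests partial correlations via the t-distribution with reduced degrees of freedom; to stay within the standard library, this sketch uses the asymptotically equivalent Fisher z-transform instead (for a partial correlation conditioned on $p-2$ variables, $\operatorname{atanh}(r)\sqrt{n-p-1}$ is approximately standard normal), and the function name is ours:

```python
import math

def partial_corr_pvalue(r, n, p):
    """Approximate two-sided p-value for a sample partial correlation r.

    Assumes |r| < 1, n observations, p variables (conditioning on the
    remaining p - 2 variables).  Uses the Fisher z-transform normal
    approximation in place of the exact t-distribution used in the text.
    """
    z = math.atanh(r) * math.sqrt(n - p - 1)
    # Two-sided tail of the standard normal via the complementary
    # error function: P(|Z| >= |z|) = erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2.0))
```

Larger sample correlations and larger samples both drive the p-value down, as expected; for $r = 0$ the p-value is exactly 1.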
In this paper, four different adjustments are considered. Bonferroni adjustment: \begin{equation}\pi_{ij}^{Bonf}=\min{\{C_p^2 \pi_{ij}, 1\}}\end{equation} Sidak adjustment: \begin{equation}\pi_{ij}^{Sidak}=1 - (1-\pi_{ij})^{C_p^2}\end{equation} If we reorder the p-values so that $\pi_{(1)}\leq\pi_{(2)}\leq...\leq\pi_{(C_p^2)}$, then the Bonferroni adjustment with the Holm step-down procedure is: \\ \begin{equation}\pi_{(a)}^{Bonf.Step} = \max_{b = 1,...,a}\min\{(C_p^2-b+1) \pi_{(b)},1\}\end{equation} and the Sidak adjustment with the Holm step-down procedure is: \\ \begin{equation}\pi_{(a)}^{Sidak.Step} = \max_{b = 1,...,a} 1 - (1-\pi_{(b)})^{C_p^2-b+1}\end{equation} \section{Experiments and Results} In their article, Drton \& Perlman \cite{drton} conducted experiments to show that the described procedures control FWER at a pre-determined level. In the experiments they used a generated concentration matrix with $p=7$, which has 9 non-zero elements from the interval $[0.2, 0.55]$. The number of observations for one trial varied from 25 to 500. Their experiments showed that the step procedures with Bonferroni and Sidak adjustments approach the pre-determined significance level $\alpha$. However, the non-step procedures with the same adjustments move further away from the pre-determined level, and the real FWER control level for those procedures is much lower than the chosen $\alpha$. The first goal of our experiments was to repeat the described experiments for matrices of higher dimensionality ($p = 25$) and different concentration matrix densities ($q=0.2,0.4,0.6,0.8,0.95$). The number of observations was chosen from 100 to 500 with step 100.
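For concreteness, the four p-value adjustments defined above admit a short Python sketch (our illustration, not code from the article; `pvals` is a flat list of raw p-values, one per tested edge, so its length plays the role of $C_p^2$):

```python
def bonferroni(pvals):
    """pi^Bonf = min(m * pi, 1), with m the number of tests."""
    m = len(pvals)
    return [min(m * p, 1.0) for p in pvals]

def sidak(pvals):
    """pi^Sidak = 1 - (1 - pi)^m."""
    m = len(pvals)
    return [1.0 - (1.0 - p) ** m for p in pvals]

def holm_step_down(pvals, adjust="bonferroni"):
    """Holm step-down version of either adjustment.

    P-values are sorted increasingly; the a-th smallest (1-based) gets the
    factor m - a + 1, and a running maximum enforces the max over b <= a.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        k = m - rank                      # m - a + 1 remaining tests
        if adjust == "bonferroni":
            val = min(k * pvals[i], 1.0)
        else:                             # "sidak"
            val = 1.0 - (1.0 - pvals[i]) ** k
        running = max(running, val)       # monotone step-down sequence
        adj[i] = running
    return adj
```

Each adjusted p-value is then compared with $\alpha$ exactly as in the simultaneous procedure.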
\begin{table}[h] \centering \begin{tabular}{| p{2cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | } \hline Bonferroni & 100 & 200 & 300 & 400 & 500 \\ \hline $\hat{P}(FP>0)$ & 0.069 & 0.057 & 0.063 & 0.058 & 0.06 \\ \hline $\hat{P}(FN>0)$ & 1 & 1 & 1 & 1 & 1 \\ \hline $E(FP)$ & 0.096 & 0.075 & 0.079 & 0.068 & 0.065 \\ \hline $E(FN)$ & 42.124 & 33.732 & 29.145 & 26.204 & 23.865 \\ \hline \end{tabular} \caption{Type I and Type II errors with $p=25$, $q=0.2$} \end{table} \begin{table}[h] \centering \begin{tabular}{| p{2cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | } \hline Bonferroni Step & 100 & 200 & 300 & 400 & 500 \\ \hline $\hat{P}(FP>0)$ & 0.07 & 0.062 & 0.069 & 0.062 & 0.062 \\ \hline $\hat{P}(FN>0)$ & 1 & 1 & 1 & 1 & 1 \\ \hline $E(FP)$ & 0.098 & 0.08 & 0.085 & 0.073 & 0.069 \\ \hline $E(FN)$ & 42.033 & 33.658 & 29.046 & 26.123 & 23.803 \\ \hline \end{tabular} \caption{Type I and Type II errors with $p=25$, $q=0.2$} \end{table} \begin{table}[h] \centering \begin{tabular}{| p{2cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | } \hline Sidak & 100 & 200 & 300 & 400 & 500 \\ \hline $\hat{P}(FP>0)$ & 0.071 & 0.065 & 0.071 & 0.063 & 0.071 \\ \hline $\hat{P}(FN>0)$ & 1 & 1 & 1 & 1 & 1 \\ \hline $E(FP)$ & 0.099 & 0.084 & 0.089 & 0.075 & 0.078 \\ \hline $E(FN)$ & 42.016 & 33.59 & 29.963 & 26.045 & 23.734 \\ \hline \end{tabular} \caption{Type I and Type II errors with $p=25$, $q=0.2$} \end{table} \begin{table}[h] \centering \begin{tabular}{| p{2cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | p{0.8cm} | } \hline Sidak Step & 100 & 200 & 300 & 400 & 500 \\ \hline $\hat{P}(FP>0)$ & 0.077 & 0.07 & 0.074 & 0.065 & 0.075 \\ \hline $\hat{P}(FN>0)$ & 1 & 1 & 1 & 1 & 1 \\ \hline $E(FP)$ & 0.105 & 0.089 & 0.093 & 0.08 & 0.083 \\ \hline $E(FN)$ & 41.921 & 33.507 & 28.866 & 25.96 & 23.666 \\ \hline \end{tabular} \caption{Type I and Type II errors with $p=25$, $q=0.2$} \end{table} \begin{table}[h] \centering \begin{tabular}{| p{2cm} | p{1cm} | p{1cm} |
p{1cm} | p{1cm} | p{1cm} | } \hline Sidak Step & 100 & 200 & 300 & 400 & 500 \\ \hline $\hat{P}(FP>0)$ & 0.038 & 0.048 & 0.052 & 0.041 & 0.067 \\ \hline $\hat{P}(FN>0)$ & 1 & 1 & 1 & 1 & 1 \\ \hline $E(FP)$ & 0.105 & 0.089 & 0.093 & 0.08 & 0.083 \\ \hline $E(FN)$ & 153.506 & 117.174 & 95.569 & 82.109 & 73.635 \\ \hline \end{tabular} \caption{Type I and Type II errors with $p=25$, $q=0.6$} \end{table} \begin{table}[h] \centering \begin{tabular}{| p{2cm} | p{1cm} | p{1cm} | p{1cm} | p{1cm} | p{1cm} | } \hline Sidak Step & 100 & 200 & 300 & 400 & 500 \\ \hline $\hat{P}(FP>0)$ & 0.003 & 0.008 & 0.004 & 0.005 & 0.012 \\ \hline $\hat{P}(FN>0)$ & 1 & 1 & 1 & 1 & 1 \\ \hline $E(FP)$ & 0.003 & 0.008 & 0.004 & 0.005 & 0.012 \\ \hline $E(FN)$ & 263.477 & 219.876 & 191.627 & 171.954 & 157.58 \\ \hline \end{tabular} \caption{Type I and Type II errors with $p=25$, $q=0.95$} \end{table} Tables 2, 3, 4 and 5 show the 4 different adjustment procedures for a matrix with $p=25$ and $q=0.2$. This means that there are 300 possible connections and about 60 true edges; as a result, there can be no more than 60 Type II errors in total. Tables 6 and 7 show the Sidak step-down adjustment procedure for different matrix densities. The experiments showed that: \begin{itemize}[noitemsep] \item The achieved practical level of FWER depends on the density of the matrix for all of the procedures \item The number of Type II errors is particularly high for greater dimensionality, even for a significant number of observations (in the $7\times 7$ case this number is usually close to 0 for 500 observations) \item The number of Type II errors is slightly smaller for the step procedures, while the number of Type I errors is slightly bigger for them; however, the difference is small in comparison to the total number of errors, i.e. wrongly defined connections \end{itemize} Other experiments compared ROC AUC for the four procedures with adjustments and the simultaneous multiple testing procedure.
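The ROC construction used in these comparisons follows the recipe from the previous section and can be sketched as follows (a simplified illustration of ours; `pvals` and `true_edges` are flattened over the edge set, and the function names are hypothetical):

```python
def roc_points(pvals, true_edges, alphas):
    """(1 - specificity, sensitivity) points for a grid of significance levels."""
    pts = []
    for a in alphas:
        tp = fp = tn = fn = 0
        for p, t in zip(pvals, true_edges):
            rejected = p < a            # edge is added when the p-value < alpha
            if rejected and t:
                tp += 1
            elif rejected and not t:
                fp += 1
            elif not rejected and not t:
                tn += 1
            else:
                fn += 1
        sen = tp / (tp + fn) if tp + fn else 0.0
        spe = tn / (fp + tn) if fp + tn else 0.0
        pts.append((1.0 - spe, sen))
    return sorted(pts)

def auc(points):
    """Trapezoidal area under the ROC curve, anchored at (0,0) and (1,1)."""
    pts = sorted([(0.0, 0.0)] + list(points) + [(1.0, 1.0)])
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

A procedure that perfectly separates true and absent edges passes through $(0,1)$ and attains an AUC of 1; a procedure indistinguishable from random guessing stays near the line $y = x$ with an AUC near 0.5.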
The ROC AUC measure allows us to compare these procedures without fixing a significance level. The experiments were conducted for the matrix with $p=7$, which has 9 non-zero elements from the interval $[0.2, 0.55]$. \begin{figure} \caption{ROC for different procedures, $n = 10, 20$} \end{figure} \begin{figure} \caption{ROC for different procedures, $n = 50, 150$} \end{figure} Increasing the number of observations led to increasing ROC AUC, which can be clearly seen in figures 2 and 3. The best ROC AUC is achieved by the Sidak adjustment, the Sidak step-down adjustment, and the simultaneous multiple testing procedure without adjustments; the differences between these three procedures are insignificant. Finally, we compare the risk functions of the different procedures. In the figures, the horizontal axis shows the values of $\alpha$ and the vertical axis the values of the risk function. \begin{figure} \caption{Risk function for different procedures, $n = 10, 20$} \end{figure} \begin{figure} \caption{Risk function for different procedures, $n = 50, 140$} \end{figure} According to figures 4 and 5, the risk function of the simultaneous multiple testing procedure coincides with that of the Sidak step-down adjustment procedure. Additionally, the value of the risk function for these two procedures increases with the number of observations, whereas the Bonferroni, Bonferroni step-down and Sidak adjustments lower the value of the risk function as the number of observations rises. \section{Conclusion} In this article we analyzed the procedures described by Drton \& Perlman from some new points of view. Despite their control of FWER, these procedures perform poorly in terms of Type II errors. The Sidak and Sidak step-down adjustment procedures and the simultaneous multiple testing procedure show similar ROC curves with almost equal AUC scores, which improve with a growing number of observations.
As a result, the Sidak and Sidak step-down adjustment procedures and the simultaneous multiple testing procedure may be considered the best among those analyzed; however, some of their properties are still not satisfactory. Directions for future work include the analysis of goodness-of-fit procedures and research into the properties of the considered procedures for other elliptical distributions. \section{Acknowledgments} This work was conducted at the Laboratory of Algorithms and Technologies for Network Analysis. \pagebreak \end{document}
\begin{document} \title{Upper bounds on maximum admissible noise in zeroth-order optimisation\thanks{The research is supported by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye) 075-00337-20-03, project No. 0714-2020-0005.}} \author{Dmitry A. Pasechnyuk\inst{1,2,3,4}\orcidID{0000-0002-1208-1659} \and Aleksandr Lobanov\inst{2,3}\orcidID{0000-0003-1620-9581} \and Alexander Gasnikov\inst{2,3,4}\orcidID{0000-0002-7386-039X}} \authorrunning{D.A. Pasechnyuk et al.} \institute{Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE \email{[email protected]} \and Moscow Institute of Physics and Technology, Dolgoprudny, Russia \email{\{lobanov.av,gasnikov.av\}@mipt.ru} \and ISP RAS Research Center for Trusted Artificial Intelligence, Moscow, Russia \and Institute for Information Transmission Problems RAS, Moscow, Russia} \maketitle \begin{abstract} In this paper, based on the information-theoretic upper bound on noise in convex Lipschitz continuous zeroth-order optimisation, we provide corresponding upper bounds for the strongly-convex and smooth classes of problems using non-constructive proofs through optimal reductions. Also, we show that, based on a one-dimensional grid-search optimisation algorithm, one can construct an algorithm for simplex-constrained optimisation with an upper bound on noise better than that for the ball-constrained and dimension-asymptotic case. \keywords{Zeroth-order optimization \and Simplex constraints \and Information-theoretic lower bound \and Maximum admissible noise.} \end{abstract} \section{Introduction} The problem of interest in this paper is $\min_{x \in S} f(x)$, where $f: \mathbb{R}^n \to \mathbb{R}$ is convex and $S \subset \mathbb{R}^n$ is compact and convex. We assume that the only oracle provided for $f$ is a zeroth-order one with additive noise, i.e. an oracle providing $f(x) + \xi$ for given $x \in S$, such that $|\xi| \leqslant \delta$ \cite{gasnikov2022randomized}.
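This oracle model is easy to illustrate. The sketch below (our hypothetical illustration, not from the paper) wraps a function in bounded noise and runs a simple comparison-based one-dimensional minimiser over a segment; once the interval shrinks to where function values differ by less than $2\delta$, the comparisons become unreliable, which is exactly why the admissible noise caps the achievable accuracy:

```python
import random

def noisy_oracle(f, delta, seed=0):
    """Zeroth-order oracle: returns f(x) + xi with |xi| <= delta."""
    rng = random.Random(seed)
    return lambda x: f(x) + rng.uniform(-delta, delta)

def ternary_search(oracle, lo, hi, iters=200):
    """Minimise a one-dimensional convex function using only (noisy) values.

    With delta = 0 this converges geometrically; with delta > 0 the
    accuracy floor is set by the noise level, not by the iteration count.
    """
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if oracle(m1) <= oracle(m2):
            hi = m2          # the minimiser lies in [lo, m2]
        else:
            lo = m1          # the minimiser lies in [m1, hi]
    return (lo + hi) / 2.0
```

In the noiseless case the returned point is accurate to machine-level tolerance; with noise it is accurate only up to a term governed by $\delta$.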
One can consider $f$ to be approximately convex \cite{risteski2016algorithms}, i.e. such that there exists some convex $g$ which satisfies $|f(x) - g(x)| \leqslant \delta$ for all $x \in S$. Problems of the described form arise when the gradient cannot be computed efficiently, either explicitly or automatically (for example, if $x$ is a hyper-parameter of some model, or if $f(x)$ is the outcome of a real-life experiment with control $x$) \cite{conn2009introduction,larson2019derivative,spall2005introduction}. Assessment of the complexity of these problems should include upper bounds on the noise $\delta$ \cite{risteski2016algorithms,singer2015information} besides lower bounds on the number of oracle calls/iterations. While the latter tell us how much time the best algorithm will spend solving the worst-case problem, the former tell us whether we can achieve the desired accuracy at all. As an intuitive example, one cannot expect to solve a problem with arbitrary accuracy if the computation of $f$ suffers from machine precision or other instrument error. This leads to a new trade-off, absent in first-order optimisation: between the time resources allocated to the algorithm and to the computation of $f$ (the reader will see it in the section devoted to multi-level problems), or between overall time and memory resources (if the main part of the noise comes from rounding errors in floating-point numbers). An information-theoretic upper bound on noise was obtained for $f$ satisfying the Lipschitz continuity condition \cite{risteski2016algorithms}. As with many information-theoretic proofs, there is no straightforward way to obtain upper bounds for other classes of optimisation problems, such as strongly-convex problems or those having Lipschitz-continuous gradient, from that one. On the other hand, it is known that the optimal iteration complexities of different classes are ``reducible'' to each other by optimal reductions, i.e.
constructive algorithmic extensions which transform an optimal algorithm for one class into an optimal algorithm for another \cite{allen2016optimal}. We use optimal reductions to obtain bounds on noise and prove that our bounds are indeed upper bounds by a non-constructive argument. Upper bounds on noise are independent of lower bounds on the number of iterations, i.e. an algorithm with the optimal convergence rate may, generally speaking, require smaller noise than is acceptable for a less efficient algorithm. This is especially evident for the grid-search algorithm, which chooses the best of tightly placed probes as the estimate of the minimum: this algorithm has an upper bound on noise proportional to the desired accuracy. But, since the size of the grid grows exponentially, this algorithm ceases to be polynomial in $n$ and $\varepsilon^{-1}$. There is a lower bound on noise proportional to the desired accuracy with a coefficient polynomial in $n$ \cite{nemirovskij1983problem}. We make an attempt to improve this lower bound for the narrowed class of simplex-constrained problems by reducing them to multi-level one-dimensional problems and using edge effects to reduce the size of the grid, which gives a lower bound on noise proportional to the desired accuracy divided by the dimensionality. \section{Methods} In the course of our work we raise the following research questions and answer them by purely theoretical (and, in a sense, non-constructive) means. Each of them contributes to the understanding of upper bounds on noise in theory and of stopping conditions guaranteeing the acceptability of the resulting noise in practice. The latter is considered in detail for saddle-point problems, hyper-parameter optimisation, and composite function optimisation, which may be of most interest to a practitioner. \begin{itemize} \item [\textbf{RQ$_1$}] Can upper bounds on noise in the function's value for a class of problems $A$ be obtained from those of a class of problems $B$ using an optimal reduction?
\item [\textbf{RQ$_2$}] What is the upper bound on noise for the class of problems a) with a small dimension of the variables space; b) with an aspheric feasible set? \end{itemize} Optimal reductions of optimisation algorithms were introduced in \cite{allen2016optimal}. In Section~\ref{sec:reduction} we completely solve \textbf{RQ$_1$} for the classes of convex ($A$) and strongly-convex ($B$) problems, and to some extent for the classes of non-smooth ($A$) and smooth ($B$) problems. This provides upper bounds on noise for, correspondingly, strongly-convex and smooth problems. The difference between the theoretical asymptotic upper bound on noise and the actual noise that optimisation algorithms can deal with in low-dimensional problems is described in Section~\ref{sec:simplex}. We provide an example of a non-optimal algorithm for optimisation over a segment which is less noise-sensitive than expected from the theory. This does not solve \textbf{RQ$_{2a}$}, but it states that the actual upper bound for small dimensions is higher than was considered before. There we also discuss how this is connected with optimisation over feasible sets which are not of the form of a ball, in particular the $n$-dimensional simplex. This discussion solves \textbf{RQ$_{2b}$} for the simplex feasible set and presents the noise bounds on which a practitioner should rely for problems with an aspheric feasible set, even though a general theory has not been developed for this case yet. \begin{assumption} The assumptions which define the corresponding classes of optimisation problems under consideration are provided below. \begin{itemize} \item Convexity. For a given convex set $S \subset \mathbb{R}^n$ it holds that \begin{equation} \label{eq:convex} \langle \nabla f(x), y - x \rangle \leqslant f(y) - f(x), \quad \forall x, y \in S. \tag{C} \end{equation} \item Lipschitz continuity. For a given set $S \subset \mathbb{R}^n$ there exists $M > 0$, such that \begin{equation} \label{eq:lipschitz} f(y) - f(x) \leqslant M \|y - x\|_2, \quad \forall x, y \in S.
\tag{LC} \end{equation} \item Smoothness. For a given set $S \subset \mathbb{R}^n$ there exists $L > 0$, such that \begin{equation} \label{eq:smooth} \langle \nabla f(y) - \nabla f(x), y - x \rangle \leqslant L \|y - x\|_2^2, \quad \forall x, y \in S. \tag{LS} \end{equation} \item Strong growth. For a given set $S \subset \mathbb{R}^n$ and extremum $x^* \in S$ there exist $\nu \in [1, +\infty]$, $\mu > 0$ and a convex function $g: S \to \mathbb{R}$, such that \begin{equation} \label{eq:growth} g(x) = f(x) - \frac{\mu}{2} \|x - x^*\|_2^\nu, \quad \forall x \in S. \tag{SG} \end{equation} \end{itemize} \end{assumption} Note that the strong-growth condition used in this paper is stronger than the standard strong-growth condition that $\frac{\mu}{2} \|x - x^*\|_2^\nu \leqslant f(x) - f(x^*)$ for all $x \in S$. Similarly, in Subsection~\ref{sec:smooth} we consider a stronger condition than smoothness, which requires the function to ``remain'' Lipschitz continuous if smoothness ``is subtracted''. This means that the upper bounds obtained in the corresponding theorems relate to these narrowed classes, but remain upper bounds for the wider classes (as the class widens, the maximal admissible noise can only decrease). However, in Section~\ref{sec:matching} we show that the obtained upper bounds are tight (there exist algorithms which match them), and the maximal admissible noise guarantees for these algorithms are given for the wider, original classes. Thus, the upper bounds obtained for the narrowed classes are tight upper bounds for the original wide classes of strongly-growing and smooth functions as well.
\section{Upper bounds on oracle's noise across classes} \label{sec:reduction} \begin{definition}[Convex optimisation classes of problems] For a convex set $S \subset \mathbb{R}^n$, $M > 0$, $L > 0$, $\mu \geqslant 0$ and $\nu \in [1, +\infty]$, the following classes are defined: \begin{itemize} \item the class $\mathcal{F}_{Lip}^M(S) \subset C^1(S) \cap Cvx(S)$ of $M$-Lipschitz continuous functions, which satisfy Assumptions~\ref{eq:convex} and \ref{eq:lipschitz}; \item the class $\mathcal{F}_{Lip}^{M, \mu, \nu}(S) \subset \mathcal{F}_{Lip}^M(S)$ of $M$-Lipschitz continuous $(\mu,\nu)$-strongly-growing functions, which satisfy Assumptions~\ref{eq:convex}, \ref{eq:lipschitz} and \ref{eq:growth}; \item the class $\mathcal{F}_{Smooth}^L(S) \subset C^2(S) \cap Cvx(S)$ of functions with $L$-Lipschitz continuous gradient, which satisfy Assumptions~\ref{eq:convex} and \ref{eq:smooth}; \item the class $\mathcal{F}_{Smooth}^{L, \mu, \nu}(S) \subset \mathcal{F}_{Smooth}^L(S)$ of $(\mu,\nu)$-strongly-growing functions with $L$-Lipschitz continuous gradient, which satisfy Assumptions~\ref{eq:convex}, \ref{eq:smooth} and \ref{eq:growth}, \end{itemize} where $Cvx(S)$ is the set of convex functions, which satisfy Assumption~\ref{eq:convex}. \end{definition} In \cite{risteski2016algorithms}, the authors obtained the upper bound on noise only for the first class, of Lipschitz continuous convex problems, by constructing the worst-case function explicitly. There is no obvious way to construct worst cases for the other classes by transforming it, so upper bounds on noise for them had not been obtained before.
The result of this paper closes this question by deriving all the upper bounds from that one in a non-constructive way, and can be summarised by the following table. \begin{table}[H] \caption{Known and obtained in this paper upper bounds on noise across standard convex optimisation classes of functions.} \label{table:upper-bounds} \centering {\renewcommand{\arraystretch}{2} \begin{tabular}{ l|c|c| } $\delta = \mathcal{O}(\dots)$ & convex & $(\mu,\nu)$-strongly-growing \\ \hline $M$-Lipschitz & $\displaystyle\min\left\{\frac{\varepsilon^2}{\sqrt{n} M R}, \frac{\varepsilon}{n}\right\}$, \cite{risteski2016algorithms} & $\displaystyle \min\left\{\frac{\mu^{1/\nu} \varepsilon^{2 - 1/\nu}}{\sqrt{n} M}, \frac{\varepsilon}{n}\right\}$, Thm~\ref{th:strongly} \\ Lipschitz and $L$-smooth & $\displaystyle \min\left\{\frac{\varepsilon^{3/2}}{\sqrt{n L} R}, \frac{\varepsilon}{n}\right\}$, Thm~\ref{th:smooth} & $\displaystyle \min\left\{\frac{\mu^{1/\nu} \varepsilon^{3/2 - 1/\nu}}{\sqrt{n L}}, \frac{\varepsilon}{n}\right\}$, Thm~\ref{th:smooth-strongly} \\ \hline \end{tabular}} \end{table} \subsection{From convex to strongly-convex case: noise contraction} Following the reasoning standard for applying the optimal reduction by the restarts technique \cite{allen2016optimal}, we need to ensure \begin{equation} \label{eq:reg} \left\|x^{N_k} - x^*\right\|_2 \leqslant \frac{1}{2^{1/\nu}} \left\|x^{N_{k-1}} - x^*\right\|_2, \end{equation} where $N_k$ is the number of iterations made by the algorithm being restarted during the $k$-th restart.
As long as $\varepsilon_k \geqslant M \left\|x^{N_{k-1}} - x^*\right\|_2 / \sqrt{n}$, it holds that \begin{equation*} \frac{\mu}{2} \left\|x^{N_k} - x^*\right\|_2^\nu \stackrel{\eqref{eq:growth}}{\leqslant} f(x^{N_k}) - f(x^*) \leqslant \frac{M \left\|x^{N_{k-1}} - x^*\right\|_2}{\sqrt{N_k}} + n^{1/4} \sqrt{M \delta_k \left\|x^{N_{k-1}} - x^*\right\|_2}, \end{equation*} where the second inequality follows from the optimal convergence rate in the Lipschitz continuous case and the known upper bound on noise for it. This implies the following choice of parameters (for some $\alpha \in (0, 1)$) \begin{equation*} \delta_k = \frac{\alpha^2\varepsilon_k^2}{\sqrt{n} M \left\|x^{N_{k-1}} - x^*\right\|_2}, \quad N_k = \frac{M^2 \left\|x^{N_{k-1}} - x^*\right\|_2^2}{(1 - \alpha)^2\varepsilon_k^2},\quad \varepsilon_k = \frac{\mu}{4}\left\|x^{N_{k-1}} - x^*\right\|^\nu_2, \end{equation*} which results in the following dependence of the noise on the distance from the optimum \begin{equation} \label{eq:strongly_1} \delta_k = \frac{\alpha^2 \mu^2 \left\|x^{N_{k-1}} - x^*\right\|_2^{2\nu - 1}}{16 \sqrt{n} M} = 2^{-k (2 - 1/\nu)} \cdot \frac{\alpha^2 \mu^2 R^{2\nu - 1}}{16 \sqrt{n} M} \end{equation} for $k = 1, ..., K$, until $\left\|x^{N_K} - x^*\right\|^{\nu - 1}_2 < 4 M / (\mu \sqrt{n})$, or for all $k = 1, 2, ...$ if $n \geqslant 4 M^2 / \mu^2$ and $\nu = 1$.
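The first equality in \eqref{eq:strongly_1} can be verified numerically from the parameter choices above; the following Python sketch (with hypothetical parameter values) checks it:

```python
import math

# delta_k computed from the schedule eps_k = (mu/4) r^nu,
# delta_k = alpha^2 eps_k^2 / (sqrt(n) M r), where r = ||x^{N_{k-1}} - x*||.
def delta_from_schedule(r, alpha, mu, nu, n, M):
    eps_k = (mu / 4) * r ** nu
    return alpha ** 2 * eps_k ** 2 / (math.sqrt(n) * M * r)

# Closed form from (eq:strongly_1): alpha^2 mu^2 r^(2 nu - 1) / (16 sqrt(n) M).
def delta_closed_form(r, alpha, mu, nu, n, M):
    return alpha ** 2 * mu ** 2 * r ** (2 * nu - 1) / (16 * math.sqrt(n) * M)

args = dict(r=0.7, alpha=0.5, mu=0.3, nu=1.5, n=10, M=2.0)
assert math.isclose(delta_from_schedule(**args), delta_closed_form(**args), rel_tol=1e-9)
```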
After that, it holds that \begin{equation*} \frac{\mu}{2} \left\|x^{N_k} - x^*\right\|_2^\nu \stackrel{\eqref{eq:growth}}{\leqslant} f(x^{N_k}) - f(x^*) \leqslant \frac{M \left\|x^{N_{k-1}} - x^*\right\|_2}{\sqrt{N_k}} + n \delta_k, \end{equation*} which implies the following choice of parameters \begin{equation*} \delta_k = \frac{\alpha\varepsilon_k}{n}, \quad N_k = \frac{M^2 \left\|x^{N_{k-1}} - x^*\right\|_2^2}{(1 - \alpha)^2\varepsilon_k^2},\quad \varepsilon_k = \frac{\mu}{4}\left\|x^{N_{k-1}} - x^*\right\|^\nu_2, \end{equation*} which results in the following dependence of the noise on the distance from the optimum \begin{align} \label{eq:strongly_2} \nonumber\delta_k &= \frac{\alpha \mu \left\|x^{N_{k-1}} - x^*\right\|^\nu_2}{4 n} = 2^{-k+K} \cdot \frac{\alpha \mu \left\|x^{N_K} - x^*\right\|_2^\nu}{4 n}\\ &= 2^{-k} \cdot \frac{\alpha \mu^{2 + \frac{1}{\nu - 1}} R^{\nu}}{2^{3+\frac{1}{\nu - 1}} \sqrt{n^{1 - \frac{1}{\nu - 1}}} M^{1 + \frac{1}{\nu - 1}}} \left\|x^{N_K} - x^*\right\|_2^\nu \leqslant 2^{-k} \cdot \frac{\alpha \mu R^{\nu}}{4 n}, \end{align} due to $2^{K} = \frac{n^{\frac{\nu}{2(\nu - 1)}} \mu^{\frac{\nu}{\nu - 1}} R^{\nu}}{2^{\frac{\nu}{\nu - 1}} M^{\frac{\nu}{\nu - 1}}}$, for all $k = K, K+1, ...$, or \begin{equation} \label{eq:strongly_3} \delta_k = 2^{-k} \cdot \frac{\alpha \mu R^\nu}{4 n} \end{equation} for all $k = 1, 2, ...$ if $n < 4 M^2 / \mu^2$ and $\nu = 1$. Since the required accuracy at each restart is ensured by the choice of parameters above, we have \begin{equation*} f(x^{N_k}) - f(x^*) \leqslant\frac{\mu}{4}\left\|x^{N_{k-1}} - x^*\right\|^\nu_2 \leqslant 2^{-k-1} \mu R^\nu \leqslant \varepsilon, \end{equation*} which implies \begin{equation} \label{eq:restarts} k = \log_2 {\frac{\mu R^\nu}{\varepsilon}} - 1.
\end{equation} By plugging this estimate of the number of restarts into \eqref{eq:strongly_1}, \eqref{eq:strongly_2} and \eqref{eq:strongly_3}, for each particular $n \in \mathbb{N}$, $M > 0$, $\mu > 0$, $\nu \in [1, +\infty]$, and ball $S \subset \mathbb{R}^n$ containing the extremum $x^*$, we obtain \begin{theorem}[Upper bound on maximum admissible noise for $M$-Lipschitz $(\mu, \nu)$-strongly-growing problems] \label{th:strongly} For any algorithm, given $\varepsilon > 0$ and \begin{equation*} \delta > \min\left\{\frac{\mu^{1/\nu} \varepsilon^{2 - 1/\nu}}{\sqrt{n} M}, \frac{\varepsilon}{n}\right\}, \end{equation*} there exists $f \in \mathcal{F}_{Lip}^{M, \mu, \nu}(S)$ such that the algorithm will not obtain $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations. \end{theorem} \begin{proof} All the bounds obtained via reductions between optimisation algorithms are proven by contradiction. Assume that there exist an algorithm, $\varepsilon > 0$, and a corresponding $\delta$, such that for every $f \in \mathcal{F}_{Lip}^{M, \mu, \nu}(S)$ the algorithm obtains $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations. Let us introduce the function $g$ defined by $g(x) = f(x) - \frac{\mu}{2} \|x - x^*\|_2^\nu$. It holds that $g \in \mathcal{F}^M_{Lip}(S)$ and $x^*$ is an extremum of $g$. One can apply the regularisation reduction from the $M$-Lipschitz convex class to the $M$-Lipschitz $(\frac{\varepsilon}{2 R^\nu}, \nu)$-strongly-growing class of problems to the algorithm for the function $f$, which gives an algorithm for $g$. Since the algorithm obtains $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations for every function, the same holds for the algorithm applied to $h$ defined by $h(x) = g(x) + \frac{\varepsilon}{2 R^\nu} \cdot \|x - x^*\|_2^\nu$.
Therefore, there is an algorithm which obtains $x$ such that $g(x) - g(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations \cite{allen2016optimal}. On the other hand, by substituting $\mu = \frac{\varepsilon}{2 R^\nu}$ into the formula for $\delta$, we obtain \begin{equation*} \textstyle \delta > \min\left\{\frac{\varepsilon^2}{\sqrt{n} M R}, \frac{\varepsilon}{n}\right\}, \end{equation*} which contradicts the theoretical upper bound from \cite{risteski2016algorithms}. \end{proof} \subsection{From non-smooth to smooth case: through duality}\label{sec:smooth} \begin{definition}[Narrowed classes of smooth problems] For a convex set $S \subset \mathbb{R}^n$, $M > 0$, and $L > 0$, the following classes are defined: \begin{itemize} \item the class $\mathcal{F}_{Smooth-Lip}^{L,M}(S) \subset \mathcal{F}_{Smooth}^L(S)$ of $M$-Lipschitz continuous functions with $L$-Lipschitz continuous gradient, which satisfy Assumptions~\ref{eq:convex}, \ref{eq:lipschitz} and \ref{eq:smooth}; \item the class $\mathcal{F}_{Smooth-Lip}^{L,M,\mu, \nu}(S) \subset \mathcal{F}_{Smooth}^L(S)$ of $M$-Lipschitz continuous $(\mu, \nu)$-strongly-growing functions with $L$-Lipschitz continuous gradient, which satisfy Assumptions~\ref{eq:convex}, \ref{eq:lipschitz}, \ref{eq:smooth} and \ref{eq:growth}. \end{itemize} \end{definition} \begin{remark}[Alternative definition for the narrowed class] The class $\mathcal{F}_{Smooth-Lip}^{L,M}(S)$ could be defined more widely by adding to the definition of the class $\mathcal{F}_{Smooth}^L(S)$ the requirement that the optimum $y^*$ of the Legendre--Fenchel conjugate $f^*$ lies in $B(0, M)$. However, this requirement is more obscure than restrictive in comparison to Lipschitz continuity, which is often required together with Lipschitz continuity of the gradient.
\end{remark} \begin{theorem}[Upper bound on maximum admissible noise for $L$-smooth convex problems] \label{th:smooth} For any algorithm, given $\varepsilon > 0$ and \begin{equation*} \delta > \min\left\{\frac{\varepsilon^{3/2}}{\sqrt{n L} R}, \frac{\varepsilon}{n}\right\}, \end{equation*} there exists $f \in \mathcal{F}_{Smooth-Lip}^{L,M}(S)$ such that the algorithm will not obtain $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations. \end{theorem} \begin{proof} Assume that there exist an algorithm, $\varepsilon > 0$, and a corresponding $\delta$, such that for every $f \in \mathcal{F}_{Smooth-Lip}^{L,M}(S)$ the algorithm obtains $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations. Due to the convexity of $f$, its Legendre--Fenchel conjugate $f^*$, defined by $f^*(y) = \max_{x \in S} \{\langle y, x\rangle - f(x)\}$, exists. $f^*$ is $1/L$-strongly convex \cite{zalinescu2002convex}, $R$-Lipschitz continuous, and has an $M$-bounded domain by the properties of conjugation. Thus, $f^* \in \mathcal{F}_{Lip}^{R,1/L,2}(S^*)$, where $S^* \subset B(0, M)$. By the Fenchel--Moreau theorem, there exists $y^*$ such that $f^*(y^*) = f(x^*)$. Let us introduce the function $g^*$ defined by $g^*(y) = f^*(y) - \frac{1}{2 L} \|y - y^*\|_2^2$. By the properties of conjugation, $g^* \in \mathcal{F}^R_{Lip}(S^*)$, $y^*$ is an extremum of $g^*$, and $g^*(y^*) = f(x^*)$. By the Fenchel--Moreau theorem, it, in turn, has a Legendre--Fenchel conjugate $g \in \mathcal{F}^M_{Lip}(S^{**})$ for some $S^{**} \subset B(0, R)$, defined by $g(x) = \max_{y \in S^*} \{\langle y, x \rangle - g^*(y)\}$, and there exists $x^{**} \in S^{**}$ such that $g(x^{**}) = f(x^*)$. One can apply the smoothing reduction from the convex class to the class of functions with $L$-Lipschitz continuous gradient to the algorithm for the function $f$, which gives an algorithm for $g$.
Since the algorithm obtains $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations for every function, the same holds for the algorithm applied to the dual smoothed function $h$ \cite{nesterov2005smooth} defined by $h(x) = \sup_{y \in \mathbb{R}^n} \{\langle y, x \rangle - g^*(y) - \frac{\varepsilon}{2M^2} \|y\|_2^2\}$. Therefore, there is an algorithm which obtains $x$ such that $g(x) - g(x^{**}) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations \cite{allen2016optimal}. On the other hand, by substituting $L = \frac{M^2}{\varepsilon}$ into the formula for $\delta$, we obtain \begin{equation*} \textstyle \delta > \min\left\{\frac{\varepsilon^2}{\sqrt{n} M R}, \frac{\varepsilon}{n}\right\}, \end{equation*} which contradicts the theoretical upper bound from \cite{risteski2016algorithms}. \end{proof} \begin{remark}[Alternative proof by reduction to a strongly-convex problem] A similar result could be proven by applying the regularisation reduction from the $M$-Lipschitz convex class to the $M$-Lipschitz $\frac{\varepsilon}{2R^2}$-strongly-convex class of problems, which would give an algorithm for $g$. However, this would only work if the class of optimisation algorithms were narrowed to primal-dual algorithms, so that an algorithm for the function $f$ is at the same time an algorithm for $f^*$.
\end{remark} \subsection{Smooth strongly-growing case} As long as $\varepsilon_k \geqslant \sqrt[4]{L} \|x^{N_{k-1}} - x^*\|_2^{1/2} / \sqrt[4]{n}$, it holds that \begin{equation*} \frac{\mu}{2} \|x^{N_k} - x^*\|_2^\nu \stackrel{\eqref{eq:growth}}{\leqslant} f(x^{N_k}) - f(x^*) \leqslant \frac{L \|x^{N_{k-1}} - x^*\|_2^2}{N_k^2} + \sqrt[3]{n L \delta_k^2 \|x^{N_{k-1}} - x^*\|_2^2}, \end{equation*} which implies the following choice of parameters (for some $\alpha \in (0, 1)$) \begin{equation*} \delta_k = \sqrt{\frac{\alpha^3 \varepsilon_k^3}{n L \|x^{N_{k-1}} - x^*\|_2^2}},\quad N_k = \sqrt{\frac{L \|x^{N_{k-1}} - x^*\|^2}{(1 - \alpha) \varepsilon_k}}, \quad \varepsilon_k = \frac{\mu}{4} \|x^{N_{k-1}} - x^*\|_2^\nu, \end{equation*} which results in the following dependence of the noise on the distance from the optimum \begin{equation} \label{eq:ss-2} \delta_k = \sqrt{\frac{\alpha^3 \mu^3}{n L}} \frac{\|x^{N_{k-1}} - x^*\|_2^{3\nu/2 - 1}}{8} = 2^{-k (3/2 - 1/\nu)} \cdot \sqrt{\frac{\alpha^3 \mu^3}{n L}} \frac{R^{3\nu/2 - 1}}{8} \end{equation} for $k = 1, ..., K$, until $\|x^{N_K} - x^*\|_2^{\nu - 1/2} < 4 \sqrt[4]{L} / (\mu \sqrt[4]{n})$.
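As in the Lipschitz case, the first equality in \eqref{eq:ss-2} can be checked numerically against the parameter choices above; a short Python sketch (with hypothetical parameter values):

```python
import math

# delta_k from the schedule: eps_k = (mu/4) r^nu and
# delta_k = sqrt(alpha^3 eps_k^3 / (n L r^2)), with r = ||x^{N_{k-1}} - x*||.
def delta_smooth_schedule(r, alpha, mu, nu, n, L):
    eps_k = (mu / 4) * r ** nu
    return math.sqrt(alpha ** 3 * eps_k ** 3 / (n * L * r ** 2))

# Closed form from (eq:ss-2): sqrt(alpha^3 mu^3 / (n L)) * r^(3 nu/2 - 1) / 8.
def delta_smooth_closed(r, alpha, mu, nu, n, L):
    return math.sqrt(alpha ** 3 * mu ** 3 / (n * L)) * r ** (1.5 * nu - 1) / 8

args = dict(r=0.6, alpha=0.4, mu=0.3, nu=2.0, n=8, L=5.0)
assert math.isclose(delta_smooth_schedule(**args), delta_smooth_closed(**args), rel_tol=1e-9)
```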
After that, it holds that \begin{equation*} \frac{\mu}{2} \|x^{N_k} - x^*\|_2^\nu \leqslant f(x^{N_k}) - f(x^*) \leqslant \frac{L \|x^{N_{k-1}} - x^*\|_2^2}{N_k^2} + n \delta_k, \end{equation*} which implies the following choice of parameters \begin{equation*} \delta_k = \frac{\alpha \varepsilon_k}{n},\quad N_k = \sqrt{\frac{L \|x^{N_{k-1}} - x^*\|_2^2}{(1 - \alpha) \varepsilon_k}}, \quad \varepsilon_k = \frac{\mu}{4} \|x^{N_{k-1}} - x^*\|_2^\nu, \end{equation*} which results in the following dependence of the noise on the distance from the optimum \begin{align} \nonumber\delta_k &= \frac{\alpha \mu \|x^{N_{k-1}} - x^*\|_2^\nu}{4 n} = 2^{-k+K} \cdot \frac{\alpha \mu \|x^{N_K} - x^*\|_2^\nu}{4 n}\\ &\label{eq:ss-3}= 2^{-k} \cdot \frac{\alpha \mu^{2 + \frac{1}{2\nu - 1}} \sqrt[4]{n^{1 + \frac{1}{2\nu - 1}}} R^\nu}{4^{1 + \frac{1}{2\nu - 1}} \sqrt[4]{L^{1 + \frac{1}{2\nu - 1}}}} \|x^{N_K} - x^*\|_2^\nu \leqslant 2^{-k} \cdot \frac{\alpha \mu R^\nu}{4 n}, \end{align} due to $2^K = \frac{\mu^{\frac{\nu}{\nu - 1/2}} n^{\frac{\nu}{4(\nu - 1/2)}} R^\nu}{4^{\frac{\nu}{\nu - 1/2}} L^{\frac{\nu}{4(\nu - 1/2)}}}$, for all $k = K, K+1, ...$ By plugging \eqref{eq:restarts} into \eqref{eq:ss-2} and \eqref{eq:ss-3}, for each particular $n \in \mathbb{N}$, $L > 0$, $\mu > 0$, $\nu \in [1, +\infty]$, and ball $S \subset \mathbb{R}^n$ containing the extremum $x^*$, we obtain \begin{theorem}[Upper bound on maximum admissible noise for $L$-smooth $(\mu, \nu)$-strongly-growing problems] \label{th:smooth-strongly} For any algorithm, given $\varepsilon > 0$ and \begin{equation*} \delta > \min\left\{\frac{\mu^{1/\nu} \varepsilon^{3/2 - 1/\nu}}{\sqrt{n L}}, \frac{\varepsilon}{n}\right\}, \end{equation*} there exists $f \in \mathcal{F}_{Smooth-Lip}^{L, M, \mu, \nu}(S)$ such that the algorithm will not obtain $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations.
\end{theorem} \begin{proof} Assume that there exist an algorithm, $\varepsilon > 0$, and a corresponding $\delta$, such that for every $f \in \mathcal{F}_{Smooth-Lip}^{L, M, \mu, \nu}(S)$ the algorithm obtains $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations. Let us introduce the function $g$ defined by $g(x) = f(x) - \frac{\mu}{2} \|x - x^*\|_2^\nu$. It holds that $g \in \mathcal{F}^{L, M}_{Smooth-Lip}(S)$ and $x^*$ is an extremum of $g$. One can apply the regularisation reduction from the problem for an $M$-Lipschitz convex function with $L$-Lipschitz gradient to the problem for an $M$-Lipschitz $(\frac{\varepsilon}{2 R^\nu}, \nu)$-strongly-growing function with $L$-Lipschitz gradient to the algorithm for the function $f$, which gives an algorithm for $g$. Since the algorithm obtains $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations for every function, the same holds for the algorithm applied to $h$ defined by $h(x) = g(x) + \frac{\varepsilon}{2 R^\nu} \cdot \|x - x^*\|_2^\nu$. Therefore, there is an algorithm which obtains $x$ such that $g(x) - g(x^*) \leqslant \varepsilon$ in $poly\left(n, \varepsilon^{-1}\right)$ iterations \cite{allen2016optimal}. On the other hand, by substituting $\mu = \frac{\varepsilon}{2 R^\nu}$ into the formula for $\delta$, we obtain \begin{equation*} \textstyle \delta > \min\left\{\frac{\varepsilon^{3/2}}{\sqrt{n L} R}, \frac{\varepsilon}{n}\right\}, \end{equation*} which contradicts the upper bound from Theorem~\ref{th:smooth}. \end{proof} \section{Matching bounds} \label{sec:matching} In this section, we compare the upper bounds on oracle's noise obtained in Section~\ref{sec:reduction} for the different classes of optimisation problems (the first arguments of the minima in Table~\ref{table:upper-bounds}) with the estimates, for gradient-free algorithms, of the maximum allowable noise level that still allows a given accuracy to be guaranteed.
\\ \textbf{Convex setting.} For example, the work \cite{Lobanov_2022} proposed an algorithm for solving the class $\mathcal{F}_{Lip}^M(S)$ of optimisation problems with an estimate of the maximum allowable noise level of $\frac{\varepsilon^2}{\sqrt{n} M R}$, matching the upper bound of \cite{risteski2016algorithms}. Moreover, considering the same class of problems with the additional assumption of $L$-smoothness (the class $\mathcal{F}_{Smooth-Lip}^{L,M}(S)$), the authors of \cite{Kornilov_NIPS} were able to improve the estimate of the maximum noise level for a gradient-free algorithm built via a smoothing scheme: $\frac{\varepsilon^{3/2}}{\sqrt{n L} R}$ (see Subsection 3.2 of~\cite{Kornilov_NIPS} for more details on the technique for improving the estimate). This estimate matches the upper bound obtained in Theorem \ref{th:smooth}.\\ \textbf{$(\mu,\nu)$-strongly-growing setting.} The upper bound on oracle's noise $\frac{\mu^{1/\nu} \varepsilon^{2 - 1/\nu}}{\sqrt{n} M}$ for the class $\mathcal{F}_{Lip}^{M, \mu, \nu}(S)$ of optimisation problems (see Theorem \ref{th:strongly}) is also attained by the estimate of the maximum noise level for the algorithm provided in \cite{kornilov2023gradient}. In addition, it is not difficult to show, using the same technique as described in Subsection 3.2 of \cite{Kornilov_NIPS}, that one can obtain an improved estimate for the algorithm from \cite{kornilov2023gradient} by additionally assuming $L$-smoothness (the class $\mathcal{F}_{Smooth-Lip}^{L, M, \mu, \nu}(S)$), thereby attaining the upper bound from Theorem \ref{th:smooth-strongly}: $\frac{\mu^{1/\nu} \varepsilon^{3/2 - 1/\nu}}{\sqrt{n L}}$. As we can see, all the upper bounds are attained by algorithms; moreover, these algorithms are optimal not only in terms of the maximum allowable noise level, but also in terms of the number of successive iterations and the total number of oracle calls.
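The reductions behind Theorems \ref{th:strongly} and \ref{th:smooth} amount to parameter substitutions in the first arguments of the minima in Table~\ref{table:upper-bounds}; the following Python sketch (with hypothetical parameter values) checks both substitutions numerically:

```python
import math

# First arguments of the minima in Table 1.
def strongly_bound(eps, mu, nu, n, M):
    return mu ** (1 / nu) * eps ** (2 - 1 / nu) / (math.sqrt(n) * M)

def convex_bound(eps, n, M, R):
    return eps ** 2 / (math.sqrt(n) * M * R)

def smooth_bound(eps, n, L, R):
    return eps ** 1.5 / (math.sqrt(n * L) * R)

eps, n, M, R, nu = 0.01, 16, 2.0, 1.5, 2.0
# mu = eps / (2 R^nu) recovers the convex bound up to the factor 2^(1/nu):
lhs = strongly_bound(eps, eps / (2 * R ** nu), nu, n, M)
assert math.isclose(lhs, convex_bound(eps, n, M, R) / 2 ** (1 / nu), rel_tol=1e-9)
# L = M^2 / eps turns the smooth bound into the convex one exactly:
assert math.isclose(smooth_bound(eps, n, M ** 2 / eps, R), convex_bound(eps, n, M, R), rel_tol=1e-9)
```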
In \cite{kornilov2023gradient,Kornilov_NIPS,Lobanov_2022}, the authors solve the stochastic problem formulation, which indicates that the upper bounds are robust not only in the deterministic setting, but also in the stochastic one. The presented upper bounds are also robust for stochastic problems with heavy tails (see \cite{Kornilov_NIPS}). \section{Noise on non-asymptotic scales and for non-optimal algorithms} \label{sec:simplex} \subsection{One-dimensional exception} \begin{definition}[Classes of non-convex 1-dimensional and separable Lipschitz problems] For a compact interval $I \subsetneq \mathbb{R}$, a convex set $S \subset \mathbb{R}^n$, and $M > 0$, the following classes are defined: \begin{itemize} \item the class $\mathcal{F}_{NC-Lip}^{M}(S) \subset C^1(S)$ of $M$-Lipschitz continuous functions, which satisfy Assumption~\ref{eq:lipschitz}; \item the class $\mathcal{F}_{1-Lip}^{M}(I) \subset \mathcal{F}_{NC-Lip}^{M}(I)$ of 1-dimensional $M$-Lipschitz continuous functions, which satisfy Assumption~\ref{eq:lipschitz}; \item the class $\mathcal{F}_{Sep-Lip}^{M}(S) \subset \mathcal{F}_{NC-Lip}^{M}(S)$ of functions $f$ defined by $f(x) = \sum_{i=1}^n f_i(x_i)$, where the $f_i \in \mathcal{F}_{1-Lip}^{M}(I_i)$ are $M$-Lipschitz continuous functions satisfying Assumption~\ref{eq:lipschitz}. \end{itemize} \end{definition} Note that $\mathcal{F}_{1-Lip}^{M}(I) \subset \mathcal{F}_{Sep-Lip}^{M}(S)$ if there exist $x_1, ..., x_{i-1}, x_{i+1}, ..., x_{n}$ such that $\{x_1\} \times \dotsi \times \{x_{i-1}\} \times I \times \{x_{i+1}\} \times \dotsi \times \{x_n\} \subset S$.
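The grid-search procedure used in the proof below admits a short sketch in Python; the objective, noise model and parameter values here are our own illustrative choices, not the paper's:

```python
import math

# Grid search for a 1-dimensional M-Lipschitz function on [a, b], queried
# through an oracle with adversarial noise bounded by delta in absolute value.
def grid_search(noisy_f, a, b, M, eps):
    step = eps / (2 * M)                          # Delta = eps / (2 M)
    k = math.floor((b - a) / step)
    grid = [a + i * step for i in range(k + 1)] + [b]
    return min(grid, key=noisy_f)                 # <= k + 2 oracle calls

# Example: f(x) = |x - 0.3| (M = 1, minimiser 0.3), noise level delta = eps/8.
eps, delta = 0.1, 0.0125
noisy = lambda x: abs(x - 0.3) + delta * math.sin(37 * x)  # |noise| <= delta
x_hat = grid_search(noisy, 0.0, 1.0, 1.0, eps)
# True suboptimality is at most M*Delta + 2*delta = 0.075 <= eps here,
# and for this f the gap equals the distance to the minimiser.
assert abs(x_hat - 0.3) <= eps
```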
\begin{theorem}[Lower bound on maximum admissible noise for separable $M$-Lipschitz problems] There exists an algorithm such that for every $\varepsilon > 0$, given \begin{equation*} \delta \leqslant \frac{\varepsilon}{2 n}, \end{equation*} and a function $f \in \mathcal{F}_{Sep-Lip}^{M}(S)$, the algorithm will obtain $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $\frac{n^2 R M}{\varepsilon} \in poly(n, \varepsilon^{-1})$ iterations. \end{theorem} \begin{proof} Note that it is enough to prove the statement in the case $n = 1$. Indeed, the statement of the theorem for $n > 1$ follows if we solve the 1-dimensional problem w.r.t. $x_1$ with accuracy $\varepsilon/(2n)$ on $\text{proj}_1 S$, then w.r.t. $x_2$ on $\text{proj}_2 \{x \in S: x_1 = x_1^*\}$, etc. \noindent \begin{minipage}{0.4\textwidth} \begin{figure} \caption{Example of a one-dimensional $M$-Lipschitz continuous function with an overlaid grid} \label{fig:example} \end{figure} \end{minipage} \hspace{0.05\textwidth} \begin{minipage}{0.5\textwidth} \begin{figure} \caption{Bound for the feasible set of consecutive 1-dimensional problems} \label{fig:bound} \end{figure} \end{minipage} So let us prove the result for $f \in \mathcal{F}_{1-Lip}^{M}(I)$. Such an algorithm exists: take $\Delta = \varepsilon/(2M)$ and overlay a $\Delta$-grid on $I = [a, b]$, i.e. $I_\Delta = \{a, ..., a+i\cdot\Delta, ..., a + \lfloor \frac{b - a}{\Delta} \rfloor \cdot \Delta, b\}$. An illustration is provided in Figure~\ref{fig:example}. Then choose $x = \arg \min_{x \in I_\Delta} f(x)$ (this takes $\leqslant \lceil \frac{b - a}{\Delta} \rceil + 1$ function evaluations). Due to Assumption~\ref{eq:lipschitz}, it holds that $f(x) - f(x^*) \leqslant M \cdot \Delta + \delta \leqslant \varepsilon$.
\end{proof} \subsection{Connection to optimisation on the Euclidean simplex} \begin{definition}[Euclidean simplex] \label{def:simplex} The Euclidean $n$-simplex $\sigma[p_1, ..., p_{n+1}] \subset \mathbb{R}^{n+1}$ spanned by points $p_1, ..., p_{n+1} \in \mathbb{R}^{n+1}$ is $\{x \in \mathbb{R}^{n+1}: \exists \alpha_1, ..., \alpha_{n+1} \in \mathbb{R}_+, \sum_{i=1}^{n+1} \alpha_i = 1, x = \sum_{i=1}^{n+1} \alpha_i p_i\}$. \end{definition} Note that $\sigma[p_1, ..., p_{n+1}]$ is contained in some hyperplane, so to specify a point on a Euclidean $n$-simplex it suffices to specify $n$ coordinates. The set of coefficients $\alpha_1, ..., \alpha_n$ defines a point $x$ in $\sigma[p_1, ..., p_{n+1}]$ and is called the barycentric coordinates of $x$. \begin{theorem}[Lower bound on maximum admissible noise for $M$-Lipschitz $(M,1)$-strongly-growing problems on the Euclidean simplex] \label{th:oned} There exists an algorithm such that for every $\varepsilon > 0$, given \begin{equation*} \delta \leqslant \frac{\varepsilon}{n+1}, \end{equation*} points $p_1, ..., p_n \in \mathbb{R}^{n+1}$ such that $\|p_i\| = 1, i = 1, ..., n$, and a function\footnote{It may be confusing that $M < \mu$ defines a class of non-constant problems, since the condition number is usually $\geqslant 1$; however, $M/\mu$ is not exactly a condition number here: by our definition, we have $\frac{\mu}{2} \|x - x^*\|_2 \leqslant |f(x) - f(x^*)| \leqslant M \|x - x^*\|_2$, so necessarily $M > \mu / 2$.} $f \in \mathcal{F}_{Lip}^{M,M,1}(\sigma[p_1, ..., p_n, 0])$, the algorithm will obtain $x$ such that $f(x) - f(x^*) \leqslant \varepsilon$ in $poly(n, \varepsilon^{-1})$ iterations.
\end{theorem} \begin{proof} The original optimisation problem for the function $f$ can be equivalently rewritten as an optimisation problem for a function $g$ in barycentric coordinates: \begin{gather} \label{eq:barycentric} \nonumber x(\alpha_1, ..., \alpha_n) = \sum_{i=1}^n \alpha_i p_i + (1 - \sum_{i=1}^n \alpha_i) p_{n+1},\\ \min_{\alpha_n \in [0, 1]} \dotsi \min_{\alpha_2 \in [0, 1 - \alpha_n]} \min_{\alpha_1 \in \left[0, 1 - \sum_{i=2}^{n} \alpha_i\right]} \{g(\alpha_1, ..., \alpha_n) = f(x(\alpha_1, ..., \alpha_n))\}. \end{gather} If $f$ is $M$-Lipschitz continuous, then $g$ is coordinate-wise $(M \|p_i - p_{n+1}\|)$-Lipschitz continuous: indeed, \begin{align*} &|g(\alpha_1, ..., \alpha_i + \alpha, ..., \alpha_n) - g(\alpha_1, ..., \alpha_i, ..., \alpha_n)|\\ &=|f(x(\alpha_1, ..., \alpha_n) + \alpha (p_i - p_{n+1})) - f(x(\alpha_1, ..., \alpha_n))| \leqslant M \|p_i - p_{n+1}\| |\alpha|. \end{align*} The statement of the theorem follows if we solve the 1-dimensional problem w.r.t. $\alpha_1$ with accuracy $\varepsilon/(n+1)$ and the given $\delta$, use this solution in the oracle to solve the 1-dimensional problem for the function $\min_{\alpha_1} g(\alpha_1, \alpha_2, ..., \alpha_n)$ w.r.t. $\alpha_2$ with accuracy $\varepsilon/(n+1)$ and noise $2\varepsilon/(n+1)$, ..., and finally use the solution of the $(n-1)$-th problem in the oracle to solve the 1-dimensional problem for the function $\min_{\alpha_{n-1}} \dotsi \min_{\alpha_1} g(\alpha_1, ..., \alpha_n)$ w.r.t. $\alpha_n$ with accuracy $\varepsilon/(n+1)$ and noise $n\varepsilon/(n+1)$. As in the previous theorem, the Lipschitz constants do not affect the bound on the noise or the resulting accuracy, but only the density of the grid. If the size of the feasible set for each 1-dimensional problem is upper bounded by 1, the number of function evaluations will in the worst case be $\leqslant \left(\frac{(n+1) M}{\varepsilon}\right)^n \cdot \prod_{i=1}^n \|p_i - p_{n+1}\|$, which is not $poly(n, \varepsilon^{-1})$.
If we allow such an asymptotic for the number of iterations, the provided bound for $\delta$ is a lower bound for all bounded sets and all $f \in \mathcal{F}_{NC-Lip}^{M}(S)$. W.l.o.g., we assume $\|p_i\| = 1, i=1,...,n$, and $p_{n+1} = 0$ hereinafter to simplify the proofs. Note that the algorithm makes $\frac{(n+1) M}{\varepsilon}$ probes to solve the 1-level 1-dimensional problem. Each probe increases $\alpha_n$ by $\frac{\varepsilon}{(n+1) M}$ and, correspondingly, shrinks the feasible set for the 2-level problem by $\frac{\varepsilon}{(n+1) M}$. If the 2-level problem makes $T_2(R)$ function evaluations on a feasible set of size $R$, the 1-level problem will require in total $\sum_{i=0}^{\lceil \frac{(n+1) M}{\varepsilon} \rceil} T_2(1 - \frac{i \varepsilon}{(n+1) M})$. This can be iterated to result in \begin{align*} &\textstyle \sum_{i_1=0}^{\lceil N \rceil} \sum_{i_2=0}^{\lceil N (1 - \frac{i_1}{N}) \rceil} \dotsi \sum_{i_{n-1}=0}^{\lceil N (1 - \frac{i_1 + \dotsi + i_{n-2}}{N}) \rceil} T_n(1 - \frac{i_1 + \dotsi + i_{n-1}}{N}) \leqslant\\ &\textstyle \qquad \leqslant \sum_{i_1=0}^{N+1} \sum_{i_2=0}^{N - i_1 + 1} \dotsi \sum_{i_{n-1}=0}^{N - (i_1 + \dotsi + i_{n-2}) + 1} \sum_{i_n=0}^{N - (i_1 + \dotsi + i_{n-1}) + 1} 1 =\\ &\textstyle \qquad = \sum_{i_1=0}^{N+1} \sum_{i_2=i_1}^{N + 1} \dotsi \sum_{i_n=i_{n-1}}^{N + 1} 1 = \binom{N+n+1}{n} = \mathcal{O}\left(\frac{1}{\sqrt{n}} \left(\frac{\mathrm{e} M}{\varepsilon}\right)^n\right) \end{align*} function evaluations, where $N = \frac{(n+1) M}{\varepsilon}$. (Here $\mathcal{O}$ has the non-asymptotic meaning typical for optimisation, i.e. there exists a constant, independent of $n$, such that the inequality $\leqslant$ holds for all $n \in \mathbb{N}$.) If we allow such an asymptotic for the number of iterations, the provided bound for $\delta$ is a lower bound for all bounded sets triangulable into $poly(n, \varepsilon^{-1})$ simplices and all $f \in \mathcal{F}_{NC-Lip}^{M}(S)$.
If we now exploit Assumption~\ref{eq:growth}, we can avoid most of the probes accounted for in the last formula. We can use the proximity of the 1-dimensional subproblems solved on the same level to localise the next subproblem given the solution of the previous one. In the case $n = 2$, we need to solve one 1-dimensional subproblem, corresponding to $\alpha + \frac{i \varepsilon}{(n+1)M}$, with accuracy $\varepsilon/(n+1)$ by overlaying a full $\frac{\varepsilon}{(n+1) M}$-grid, to get $x_i = x(\alpha_1, ..., \alpha_n)$ such that $f(x_i) - f(x_i^*) \leqslant \varepsilon/(n+1)$, where $x_i^*$ is the minimum point on the segment $I_{i} = \{x(\alpha_1, \alpha_2) \in \sigma[p_1, p_2, p_3]: \alpha_1 = \alpha + \frac{i\varepsilon}{(n+1)M}\}$ and $R$ is the length of the feasible interval. The point $x'_{i+1}$ on the segment $I_{i+1} = \{x(\alpha_1, \alpha_2) \in \sigma[p_1, p_2, p_3]: \alpha_1 = \alpha + \frac{(i+1)\varepsilon}{(n+1) M}\}$, obtained by translating $x_i$ parallel to the line $\alpha_2 = 0$, will be the centre of the localised feasible set. Moreover, for every pair of points $x_i$ and $x'_{i+1}$ obtained by the procedure above, it holds that $\|x_i - x'_{i+1}\|_2 \leqslant \varepsilon/((n+1)M)$. By Assumption~\ref{eq:lipschitz}, $f(x'_{i+1}) \leqslant f(x_i) + \varepsilon/(n+1)$. Assumption~\ref{eq:lipschitz} also implies that $f(x^*_{i+1}) > f(x^*_i) - \varepsilon/(n+1)$ (otherwise, one could translate $x^*_{i+1}$ in parallel into a point $x$ on $I_i$ such that $f(x) < f(x^*_i)$, which is not possible). Thus, $f(x'_{i+1}) - f(x^*_{i+1}) \leqslant 2 \varepsilon / (n+1)$. By Assumption~\ref{eq:growth}, we get that $\|x'_{i+1} - x^*_{i+1}\|_2 \leqslant 2^{1 + 1/\nu} \cdot \left(\frac{\varepsilon}{\mu (n+1)}\right)^{1/\nu}$, which means that one should take a feasible interval of length $2^{2 + 1/\nu} \cdot \left(\frac{\varepsilon}{\mu (n+1)}\right)^{1/\nu}$ centred at $x'_{i+1}$.
Since the minimum $x_{i+1}$ found on such a feasible interval coincides with the one that would be found on the full interval, the problem after the next one can be provided with a feasible interval of the same length but centred at $x'_{i+2}$, etc. In total, this procedure requires at most \begin{align*} &\textstyle \frac{(n+1) M}{\varepsilon} + \left(\frac{(n+1) M}{\varepsilon} - 1\right) \cdot 2^{2 + 1/\nu} \cdot \left(\frac{\varepsilon}{\mu (n+1)}\right)^{1/\nu} \frac{(n+1) M}{\varepsilon}\\ &\textstyle \qquad= \mathcal{O}\left(\frac{(n+1) M}{\varepsilon} + \frac{(n+1)^{2-1/\nu} M^2}{\mu^{1/\nu} \varepsilon^{2-1/\nu}}\right) \end{align*} probes, which only for $\nu = 1$ takes the form $\mathcal{O}\left(\frac{M (n+1)}{\varepsilon} \left(1 + \frac{M}{\mu}\right)\right)$, with the degrees of $\varepsilon$ and $n$ unchanged. For $n > 2$, such an algorithm will need \begin{equation*} \textstyle \mathcal{O}\left(\frac{(n+1) M}{\varepsilon} \left(1 + \frac{3M}{\mu} \left(1 + \frac{3M}{\mu}\left( 1 + \dotsi \right)\right)\right)\right) = \mathcal{O}\left(\frac{(n+1) M}{\varepsilon} \left(\frac{3M}{\mu}\right)^n\right) \end{equation*} probes. If we allow such an asymptotic for the number of iterations, the provided bound for $\delta$ is a lower bound for all bounded sets and all $f \in \mathcal{F}_{Lip}^{M,\mu,1}(S)$. Finally, we can exploit some edge effects inherent to the simplex. Firstly, we can always choose the interval for the first subproblem, which requires overlaying a full grid, to be small enough that $1 + \frac{3 M}{\mu} + \dotsi + \left(\frac{3 M}{\mu}\right)^{n-1}$ turns into just $\left(\frac{3 M}{\mu}\right)^{n-1}$. Secondly, some choices of $\alpha_i$ lead to subproblems that need not be solved. Indeed, if the current feasible interval length is $0 < R \leqslant 1$ and $R - \alpha_i \leqslant \frac{2 \varepsilon}{\mu (n+1)}$, any feasible point will be an $\varepsilon/(n+1)$-solution of all subproblems.
Since the probing step equals $\frac{\varepsilon}{M (n+1)}$, we can exclude $\frac{2 M}{\mu}$ probes from each subproblem to obtain $\max\left\{1, \frac{M}{\mu}\right\}^{n-1}$ instead of $\left(\frac{3 M}{\mu}\right)^{n-1}$ (the $\max\{1, \dots\}$ reserves the one necessary probe). It remains to note that $\frac{M}{\mu} \geqslant \frac{1}{2}$ by Assumptions~\ref{eq:growth} and \ref{eq:lipschitz}, and if the problem has $\frac{M}{\mu} < 1$, the convergence rate becomes $\mathcal{O}\left(\frac{(n+1) M}{\varepsilon}\right)$. Accordingly, the provided bound for $\delta$ is a lower bound for all bounded sets triangulable into $poly(n, \varepsilon^{-1})$ simplices and all $f \in \mathcal{F}_{Lip}^{M,M,1}(S)$. \end{proof} \section{Discussion} This paper shows how the optimal algorithmic reductions introduced in \cite{allen2016optimal} can be applied to generalise an information-theoretic upper bound on the admissible noise level from one class of zeroth-order optimisation problems to others. Using the well-known reductions (regularisation, restarts and dual smoothing), Table~\ref{table:upper-bounds} with upper bounds for the four most common classes of optimisation problems was obtained. The other side of the coin is that algorithms optimal with respect to the number of iterations or oracle calls are not optimal with respect to the admissible noise, and vice versa. A curious example is the grid-search algorithm, which has maximal admissible noise $\delta \propto \varepsilon$ for any $\varepsilon$ and $n = 1$, better than the upper bound from \cite{risteski2016algorithms} (there is nothing unexpected here, because that bound is asymptotic in $n$), but is far from optimal in convergence rate.
Developing the idea of the noise-tolerance of the grid-search algorithm, this paper demonstrates the differences in upper bounds on noise brought about by geometrical properties of the problem, by analysing simplex-constrained problems instead of the ball-constrained ones considered in \cite{risteski2016algorithms}, and shows that the trivial grid-search algorithm can solve such problems in the regime $\delta \propto \varepsilon / n$ for any $\varepsilon$ and $n$ (but, unfortunately, not for any $\mu$, $\nu$ and $M$). Despite the practical inefficiency of the trivial algorithm, it helps us understand that optimality with respect to the admissible noise in the case of aspherical constraint sets lies beyond the framework of \cite{risteski2016algorithms}, and should probably be achieved by different practical means. There remain many open questions on upper bounds on noise: What are the bounds for arbitrary asphericity of the constraint set? How can the bounds be generalised to the highly-smooth case? What bound is tight for non-asymptotic dimensionality of the problem? The authors are confident that these questions can be addressed using the optimal reductions technique, and will continue the development of this approach. \end{document}
\begin{document} \title{Combinatorics of the Interrupted Period.} \begin{abstract} This article is about discrete periodicities and their combinatorial structures. It presents and describes the unique structure caused by the alteration of a pattern in a repetition. Those alterations of a pattern arise in the context of double squares and were discovered while working on bounding the number of distinct squares in a string. Nevertheless, they can arise in other phenomena and are worth being presented on their own. \end{abstract} \noindent {\small \textbf{Keywords:} \textit{string, period, primitive string, factorization}} If $\generateur{x}$ is a primitive word, and $\prefix{x}$ a prefix of $\generateur{x}$, the sequence $\generateur{x}^n\prefix{x}\generateur{x}^m$ has a singularity: it has a periodic part of period $\generateur{x}$, an interruption, and a resumption of the pattern $\generateur{x}$. That interruption creates a different pattern, one that does not appear in $\generateur{x}^n$. The goal of this article is to unveil that pattern. \section{Preliminaries} In this section, we introduce the notations and present a simple property and two of its corollaries. These observations are straightforward, but their proofs introduce the technique used to prove Theorem \ref{ath} and provide insights. We first fix some notations. An \emph{alphabet} $A$ is a finite set. We call \emph{letters} the elements of $A$. If $\lvert A \rvert = 2$, words are referred to as binary words, such as those used in computers. Another well-known example, with $\lvert A \rvert = 4$, is DNA. \\ A vector of $A^n$ is a \emph{word} $w$ of length $\lvert w \rvert = n$, which can also be presented in the form of an array $w[1,...,n]$. Two words are \emph{homographic} if they are equal.
If $x = \prefix{x}\suffix{x}x_3$ for non-empty words $\prefix{x}, \suffix{x}$ and $x_3$, then $\prefix{x}$ is a \emph{prefix} of $x$, $\suffix{x}$ is a \emph{factor} of $x$, and $x_3$ is a \emph{suffix} of $x$ (a prefix or a suffix distinct from $x$ itself is said to be proper). We define \emph{multiplication} as concatenation. In English, $breakfast = break \cdot fast$. In a traditional fashion, we define the \emph{$n^{th}$ power} of a word $w$ as the product of $n$ copies of $w$. A word $x$ is \emph{primitive} if $x$ cannot be expressed as a non-trivial power of another word $x'$.\\ A word $\tilde{x}$ is a \emph{conjugate} of $x$ if $x=\prefix{x}\suffix{x}$ and $\tilde{x}=\suffix{x}\prefix{x}$ for non-empty words $\prefix{x}$ and $\suffix{x}$. The set of conjugates of $x$ together with $x$ forms the conjugacy class of $x$, which is denoted $Cl(x)$. \\ A factor $x, \lvert x \rvert =n$ of $w$ has \emph{period} $p$ if $x[i]=x[i+\lvert p \rvert], \forall i \in [1,...,n-\lvert p\rvert]$.\\ The \emph{number of occurrences} of a letter $c$ in a word $w$ is denoted $n_c(w)$, the \emph{longest common prefix} of $x$ and $y$ is denoted $\lcp(x,y)$, and $\lcs(x,y)$ denotes the \emph{longest common suffix} of $x$ and $y$ (note that $\lcs (x,y)$ and $\lcp (x,y)$ are words).\\\\ The properties presented next rely on a simple counting argument. While the proofs are not interesting in themselves, they still lead to meaningful results. \begin{property}\label{fr} A word $w$ and all of its conjugates have the same number of occurrences for all of their letters, i.e. $\forall\tilde{w} \in Cl(w), \forall a \in A,\ n_a(w) = n_a(\tilde{w})$. \end{property} \begin{proof} Note that $\forall \tilde{w} \in Cl(w), \exists w_1, w_2$, such that $w = w_1w_2, \tilde{w}=w_2w_1$. Then, $\forall a \in A, n_a(w) = n_a(w_1) + n_a(w_2) = n_a(\tilde{w})$.
\qed \end{proof} The contrapositive of Property \ref{fr} gives the following corollary: \begin{corollary}\label{number} If two words do not have the same number of occurrences of some letter, they are not conjugates. \end{corollary} Another important corollary of Property \ref{fr} is the following: \begin{corollary} Let $x$ be a word, $\lvert x \rvert \geq n+1$. If $u=x[1...n]$ and $v=x[2...n+1]$ are conjugates of each other, then $x[1] = x[n+1]$, i.e. $v$ is a cyclic shift of $u$. \end{corollary} \begin{proof} Note that $u$ and $v$ have the factor $x[2...n]$ in common. Since $u$ and $v$ are conjugates, they have the same number of occurrences for all of their letters (Property \ref{fr}). Now, $n_{x[1]}(u) = n_{x[1]}(x[2...n]) + 1$, while $n_{x[1]}(v) = n_{x[1]}(x[2...n]) + n_{x[1]}(x[n+1])$; since $n_{x[1]}(u) = n_{x[1]}(v)$, we get $n_{x[1]}(x[n+1]) = 1$, i.e. $x[1] = x[n+1]$. \qed \end{proof} \section{Theorem} Discrete periods were described by N.J. Fine and H.S. Wilf in 1965 in the article ``Uniqueness theorems for periodic functions'' \cite{FW65}. A corollary of that theorem, the synchronization principle, was proved by W. Smyth in \cite{S05} and L. Ilie in \cite{I05}: \begin{theorem}\label{fw} If $w$ is primitive, then, for all conjugates $\tilde{w}$ of $w$, $w \neq \tilde{w}$. \end{theorem} This theorem is about the synchronization of patterns. The next theorem is about the impossible synchronization when a pattern is interrupted. First, we need to formalize what we call an interruption of the pattern. Let $\generateur{x}$ be a primitive word and $\prefix{x}$ be a proper prefix of $\generateur{x}$, i.e. $\prefix{x}\ne \generateur{x}$. Write $\generateur{x}=\prefix{x}\suffix{x}$ for some suffix $\suffix{x}$ of $\generateur{x}$.
\\\\ Let $W=\generateur{x}^{e_1}\prefix{x}\generateur{x}^{e_2}$ with $e_1\geq1, e_2 \geq 1, e_1+e_2\geq 3$.\\\\ We see that $W$ has a repetition of a pattern $\generateur{x}$ as a prefix: $\generateur{x}^{e_1}\prefix{x}$, and then the repetition is interrupted at position $\lvert \generateur{x}^{e_1}\prefix{x} \rvert$, before starting again in the suffix $\generateur{x}^{e_2}$. We need one more definition (although that definition is not strictly necessary, it is presented here for better understanding) before introducing the two factors that we claim have restricted occurrences in $W$. \begin{definition} Let $W=\generateur{x}^{e_1}\prefix{x}\generateur{x}^{e_2}$ with $e_1\geq1, e_2 \geq 1, e_1+e_2\geq 3$ for a primitive word $\generateur{x}=\prefix{x}\suffix{x}$. Let $\tilde{p}$ be the prefix of length $\lvert \lcp (\prefix{x}\suffix{x}, \suffix{x}\prefix{x})\rvert+1$ of $\prefix{x}\suffix{x}$ and $\tilde{s}$ the suffix of length $\lvert \lcs (\prefix{x}\suffix{x}, \suffix{x}\prefix{x})\rvert+1$ of $\suffix{x}\prefix{x}$. The factor $\tilde{s}\tilde{p}$ starting at position $\lvert \generateur{x}^{e_1} \rvert + \lvert \prefix{x} \rvert - \lvert \lcs (\prefix{x}\suffix{x}, \suffix{x}\prefix{x})\rvert -1$ is the \emph{core of the interrupt} of $W$.\\ \end{definition} If $W$ and its interrupt are clear from the context, we will just speak of the core (of the interrupt). \begin{example} Consider $\generateur{x} = aaabaaaaaabaaaa$ and $\prefix{x} = aaabaaaaaabaaa$; then $\suffix{x} = a$, and $\generateur{x}\prefix{x}\generateur{x}^2$ has $\generateur{x}\prefix{x}\generateur{x} = aaabaaaaaabaaaa\mathbf{aaabaaaaaabaaa}aaabaaaaaabaaaa$ as a prefix. It follows that $\lcp (\prefix{x}\suffix{x},\suffix{x}\prefix{x}) = aaa$, so $\tilde{p} = aaab$; similarly, $\lcs (\prefix{x}\suffix{x},\suffix{x}\prefix{x}) = aaa$, so $\tilde{s}=baaa$. The core of the interrupt, $\tilde{s}\tilde{p}$, is marked by the underbrace in: \[ x\prefix{x}x = aaabaaaaaabaaaaaaabaaaaaa\underbrace{baaaaaab}_{\tilde{s}\tilde{p}}aaaaaabaaaa.
\] \end{example} The factors that were previously known to have restricted occurrences in $W$, to the best of the author's knowledge, were the inversion factors defined by A. Deza, F. Franek and A. Thierry in \cite{DFT15}: \begin{definition} Let $W=\generateur{x}^{e_1}\prefix{x}\generateur{x}^{e_2}$ with $\generateur{x}=\prefix{x}\suffix{x}$ a primitive word and $e_1\geq1, e_2 \geq 1, e_1+e_2\geq 3$. An \emph{inversion factor} of $W$ is a factor that starts at position $i$ and for which: \begin{itemize} \item $W[i+j]=W[i+j+\lvert \generateur{x} \rvert + \lvert \prefix{x}\rvert]$ for $0\leq j < \lvert \prefix{x}\rvert$, and \item $W[i+j]=W[i+j+\lvert \prefix{x}\rvert ]$ for $ \lvert \prefix{x}\rvert \leq j \leq \lvert \generateur{x}\rvert + \lvert \prefix{x}\rvert$. \end{itemize} \end{definition} Those inversion factors, which have the structure of $\suffix{x}\prefix{x}\prefix{x}\suffix{x} = \tilde{\generateur{x}}\generateur{x}$, and whose length is twice the length of $\generateur{x}$, were used as two notches that force a certain synchronization of certain squares in the problem of the maximal number of squares in a word, and made it possible to offer a new bound for that problem. The main anticipated application of the next result is an improvement of that bound, though the technique has already proved useful in the improvement of M. Crochemore and W. Rytter's three squares lemma, \cite{CR95}, by H. Bai, A. Deza and F. Franek, \cite{BDF15}, and in the proof of the New Periodicity Lemma by H. Bai, F. Franek and W. Smyth \cite{BFS15}. Now, let $w_1$ be the factor of length $\lvert \generateur{x} \rvert$ of $W$ that has the core of the interrupt of $W$ as a suffix, and let $w_2$ be the factor of length $\lvert \generateur{x} \rvert$ that has the core of the interrupt of $W$ as a prefix.
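This construction can be checked mechanically. The following sketch (our own illustration, with hypothetical function names; positions are $0$-indexed, unlike the $1$-indexed arrays of the text) computes the core of the interrupt and the factors $w_1$ and $w_2$ just defined, and tests the letter-count criterion of Corollary \ref{number}:

```python
from collections import Counter

def lcp(u, v):
    """Longest common prefix of u and v."""
    i = 0
    while i < min(len(u), len(v)) and u[i] == v[i]:
        i += 1
    return u[:i]

def lcs(u, v):
    """Longest common suffix of u and v."""
    i = 0
    while i < min(len(u), len(v)) and u[-1 - i] == v[-1 - i]:
        i += 1
    return u[len(u) - i:] if i else ""

def core_of_interrupt(p, s, e1, e2):
    """For W = x^e1 p x^e2 with x = p s primitive, return (W, core, w1, w2)."""
    x = p + s
    W = x * e1 + p + x * e2
    cp, cs = lcp(x, s + p), lcs(x, s + p)
    start = len(x) * e1 + len(p) - len(cs) - 1       # 0-indexed start of the core
    core = W[start:start + len(cs) + len(cp) + 2]    # core = s~ p~
    w1 = W[start + len(core) - len(x):start + len(core)]  # length |x|, ends with core
    w2 = W[start:start + len(x)]                          # length |x|, starts with core
    return W, core, w1, w2

def same_letter_counts(u, v):
    """Necessary condition for conjugacy: equal numbers of occurrences of each letter."""
    return Counter(u) == Counter(v)
```

On the running example, `core_of_interrupt("aaabaaaaaabaaa", "a", 1, 2)` yields the core $baaaaaab$ and $w_1 = w_2 = baaaaaabaaaaaab$, and `same_letter_counts` reports that $w_1$ and $\generateur{x}$ differ in letter counts, so $w_1 \notin Cl(\generateur{x})$.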
We will show that both $w_1$ and $w_2$ have restricted occurrences in $W$.\\ \begin{theorem}\label{ath} Let $\generateur{x}$ be a primitive word, $\prefix{x}$ a proper prefix of $\generateur{x}$ and $W=\generateur{x}^{e_1}\prefix{x}\generateur{x}^{e_2}$ with $e_1\geq1, e_2 \geq 1, e_1+e_2\geq 3$. Let $w_1$ be the factor of length $\lvert \generateur{x} \rvert$ of $W$ ending with the core of the interrupt of $W$, and let $w_2$ be the factor of length $\lvert \generateur{x} \rvert$ starting with the core of the interrupt of $W$. The words $w_1$ and $w_2$ are not in the conjugacy class of $\generateur{x}$.\\ \end{theorem} \begin{proof} Define $p=\lcp(\prefix{x}\suffix{x}, \suffix{x}\prefix{x})$ and $s=\lcs(\prefix{x}\suffix{x}, \suffix{x}\prefix{x})$ (note that $p$ and $s$ can be empty). \\ Deza, Franek, and Thierry showed that $\lvert \lcs(\prefix{x}\suffix{x},\suffix{x}\prefix{x})\rvert+\lvert \lcp(\prefix{x}\suffix{x},\suffix{x}\prefix{x})\rvert\leq\lvert \prefix{x}\suffix{x}\rvert-2$ when $\prefix{x}\suffix{x}$ is primitive (see \cite{DFT15}). Note that in the case $\lvert \lcs(\prefix{x}\suffix{x},\suffix{x}\prefix{x})\rvert+\lvert \lcp(\prefix{x}\suffix{x},\suffix{x}\prefix{x})\rvert = \lvert \generateur{x}\rvert-2$, $w_1$ and $w_2$ are the same factor. \\ Write $\generateur{x}=pr_prr_ss$ and $\tilde{\generateur{x}}=\suffix{x}\prefix{x}=pr'_pr'r'_ss$ for letters $r_p, r'_p, r_s, r'_s$ with $r_p \neq r'_p$ and $r_s \neq r'_s$ (by maximality of the longest common prefix and suffix) and possibly empty, possibly homographic words $r$ and $r'$.\\ We have, by construction, $w_1=r'r'_sspr_p$ and $w_2=r'_sspr_pr$.\\ Note that $n_{r_p}(w_1) = n_{r_p}(\tilde{\generateur{x}}) + 1$ and that $n_{r'_p}(\tilde{\generateur{x}}) = n_{r'_p}(w_1) + 1$ and, by Corollary \ref{number}, $w_1$ is not a conjugate of $\tilde{\generateur{x}}$, nor of $\generateur{x}$.
And because $\lvert w_1\rvert = \lvert \generateur{x}\rvert$, and every factor of length $\lvert \generateur{x}\rvert$ of a power of $\generateur{x}$ is a conjugate of $\generateur{x}$, $w_1$ is neither a factor of $\generateur{x}^{e_1}\prefix{x}$ nor of $\generateur{x}^{e_2}$.\\ Similarly for $w_2$, $n_{r'_s}(w_2) = n_{r'_s}(\generateur{x}) + 1$ and $n_{r_s}(\generateur{x}) = n_{r_s}(w_2) + 1$ and, by Corollary \ref{number}, $w_2$ is not a conjugate of $\generateur{x}$, and because $\lvert w_2\rvert = \lvert \generateur{x} \rvert$, $w_2$ is neither a factor of $\generateur{x}^{e_1}\prefix{x}$ nor of $\generateur{x}^{e_2}$. \qed \end{proof} \begin{example} Consider again $\generateur{x} = aaabaaaaaabaaaa$, $\prefix{x} = aaabaaaaaabaaa$ and $\suffix{x}=a$. We have $\lvert \generateur{x} \rvert = 15$, and: \[ \generateur{x}\prefix{x}\generateur{x} = aaabaaaaaabaaaaaaa\rlap{$\overbrace{\phantom{baaaaaa\mathbf{baaaaaab}}}^{w_1}$}baaaaaa\underbrace{\mathbf{baaaaaab}aaaaaab}_{w_2}aaaa \] The core of the interrupt is presented in bold.\\ The two factors $w_1$ and $w_2 = w_1 = baaaaaabaaaaaab$ (note that $w_2$ need not be equal to $w_1$ in general), starting at different positions, are not factors of $\generateur{x}^2$. Yet, the factor $aaaaaabaaaaaabaaaaaa$, of length $\lvert \generateur{x} \rvert + \lvert \lcs (\generateur{x}, \tilde{\generateur{x}}) \rvert + \lvert \lcp (\generateur{x}, \tilde{\generateur{x}}) \rvert - 1$, which contains the core of the interrupt, is a factor of $\generateur{x}^3$. The same goes for the factors of length $\lvert \generateur{x} \rvert -1$ that start or end with the core of the interrupt, $aaaaaabaaaaaab$ and $baaaaaabaaaaaa$: they both are factors of $\generateur{x}^2$. For those reasons, the theorem can be regarded as tight. \end{example} \section{Conclusion} The key features of the core of the interrupt were understood while studying double squares. Ilie \cite{I05} provided an alternate and shorter proof of Crochemore and Rytter's three squares lemma \cite{CR95}. We offer another concise proof within the framework of the core of the interrupt.
\begin{lemma} In a word, no more than two squares can have their last occurrence starting at the same position. \end{lemma} \begin{proof} Suppose that three squares $u_1^2, u_2^2, u_3^2$, with $\lvert u_1 \rvert < \lvert u_2 \rvert < \lvert u_3 \rvert$, start at the same position. Because $u_2^2$ and $u_3^2$ start at the same position, we can write $u_2=\generateur{x}^{e_1}\prefix{x}$ and $u_3=\generateur{x}^{e_1}\prefix{x}\generateur{x}^{e_2}$ for $\generateur{x} = \prefix{x}\suffix{x}$ a primitive word, $\prefix{x}$ a proper prefix of $\generateur{x}$ and $e_1 \geq e_2 \geq 1$; hence $u_3$ contains a core of the interrupt. Now, by the synchronization principle (Theorem \ref{fw}), $u_1$, with $\lvert u_1 \rvert < \lvert u_2 \rvert$, cannot end in the suffix $\lcs (\prefix{x}\suffix{x}, \suffix{x}\prefix{x})$ of $u_2$ (since $u_1$ has $\generateur{x}$ as a prefix), and therefore ends before the core of the interrupt of $u_3$. But if $\lvert u_1^2 \rvert \geq \lvert u_3 \rvert$, the second occurrence of $u_1$ contains the core of the interrupt and a word of length $\lvert \generateur{x} \rvert$ that starts with it, while the first occurrence does not, which, by Theorem \ref{ath}, is a contradiction. \qed \end{proof} \subsubsection{Thanks} to my supervisors Antoine Deza and Franya Franek for helpful discussions and advice, and to Alice Heliou for proofreading a preliminary version of this article. \end{document}
\begin{document} \title{Complete Quasi-Metrics for Hyperspaces, Continuous Valuations, and Previsions} \author{Jean Goubault-Larrecq} \affil{Universit\'e Paris-Saclay, CNRS, ENS Paris-Saclay, Laboratoire M\'ethodes Formelles, 91190, Gif-sur-Yvette, France} \maketitle \begin{abstract} \noindent The Kantorovich-Rubinshte\u\i n metric is an $L^1$-like metric on spaces of probability distributions that enjoys several serendipitous properties. It is complete and separable if the underlying metric space of points is complete and separable, and in that case it metrizes the weak topology. We introduce a variant of that construction in the realm of quasi-metric spaces, and prove that it is algebraic Yoneda-complete as soon as the underlying quasi-metric space of points is algebraic Yoneda-complete, and that the associated topology is the weak topology. We do this not only for probability distributions, represented as normalized continuous valuations, but also for subprobability distributions, for various hyperspaces, and in general for different brands of functionals. Those functionals model probabilistic choice, angelic and demonic non-deterministic choice, and their combinations. The mathematics needed for those results is more demanding than in the simpler case of metric spaces.
To obtain our results, we prove a few other results that have independent interest, notably: continuous Yoneda-complete spaces are consonant; on a continuous Yoneda-complete space, the Scott topology on the space of $\overline{\mathbb{R}}_+$-valued lower semicontinuous maps coincides with the compact-open and Isbell topologies, and the subspace topology on spaces of $\alpha$-Lipschitz continuous maps also coincides with the topology of pointwise convergence, and is stably compact; we introduce and study the so-called Lipschitz-regular quasi-metric spaces, and we show that the formal ball functor induces a Kock-Z\"oberlein monad, of which all algebras are Lipschitz-regular; and we prove a minimax theorem where one of the spaces is not compact Hausdorff, but merely compact. Keywords: quasi-metric, formal balls. Subject classification: mathematics of computing - continuous mathematics - topology - point-set topology. \end{abstract} \tableofcontents \section{Introduction} \label{sec:intro} A landmark result in topological measure theory states that the space of probability distributions $\mathcal M_1 (X)$ on a Polish space $X$, with the weak topology, is again Polish. A Polish space is a separable complete metric space $X, d$, and to prove that theorem, one must build a suitable metric on $\mathcal M_1 (X)$. There are several possibilities here. One may take the L\'evy-Prohorov metric for example, but we shall concentrate on the $L^1$-like metric defined by: \[ \dKRH (\mu, \nu) = \sup_h \left|\int_{x \in X} h (x) d\mu - \int_{x \in X} h (x) d\nu\right|, \] where $h$ ranges over the $1$-Lipschitz maps from $X$ to $\mathbb{R}$. This is sometimes called the \emph{Hutchinson metric}, after J. E. Hutchinson's work on the theory of fractals \cite{Hutchinson:fractals}. L. V.
Kantorovich had already introduced the same notion in 1942 in a famous paper \cite{Kantorovich:1942}, where he showed that, on a compact metric space, $\dKRH (\mu, \nu)$ is also equal to the minimum of $\int_{(x, y) \in X^2} d (x, y) d\tau$, where $\tau$ ranges over the probability distributions whose first and second marginals coincide with $\mu$ and $\nu$, respectively. (That result also holds on general Polish spaces, assuming that the double integrals of $d (x, y)$ with respect to $\mu \otimes \mu$ and $\nu \otimes \nu$ are finite \cite{Fernique:KR}.) G. P. Rubinshte\u\i n made important contributions to the theory, and $\dKRH$ is variously called the Hutchinson metric, the Kantorovich metric, the Kantorovich-Rubinshte\u\i n metric, and in the latter cases the name of Leonid Vaser\v ste\u\i n (who considered $L^p$-like variants on the second form) is sometimes added. However, the $\dKRH$ metric does not metrize the weak topology unless the metric $d$ is bounded (see \cite[Lemma~3.7]{Kravchenko:complete:K}). This is probably the reason why some papers consider a variant of $\dKRH$ where $h$ ranges over the $1$-Lipschitz maps from $X$ to $[-1, 1]$ instead of $\mathbb{R}$, or over the maps with Lipschitz norm at most $1$, where the Lipschitz norm of $h$ is the sum of the Lipschitz constant of $h$ and of $\sup_{x \in X} |h (x)|$. In any case, $\dKRH$ or a variant produces a complete metric on a space of probability distributions, and its open ball topology coincides with the weak topology. The aim of this paper is to generalize those results, in the following directions: \begin{enumerate} \item We consider non-Hausdorff topological spaces, and \emph{quasi-metrics} instead of metrics. A quasi-metric is axiomatized just like a metric, except that the axiom of symmetry $d (x, y) = d (y, x)$ is dropped. 
\item We consider not only measures, but also non-linear variants of measures, defined as specific (continuous, positively homogeneous) functionals that we have called previsions \cite{Gou-csl07}. \end{enumerate} Both generalizations stem from questions in computer science, and from the semantics of probabilistic and non-deterministic languages, as well as from the study of transition systems with probabilistic and non-deterministic transition relations. In the realm of semantics, one often considers \emph{domains} \cite{GHKLMS:contlatt}, which are certain directed complete partial orders. Seen as topological spaces, with their Scott topology, those spaces are never Hausdorff (unless the order is equality). The question of similarity (as opposed to bisimilarity) of transition systems also requires quasi-metrics \cite{Gou-fossacs08b}, leading to non-Hausdorff spaces. The mixture of probabilistic choice (where one chooses the next action according to some known probability distribution) and of non-deterministic choice (where the next action is chosen by a scheduler by some unknown but deterministic strategy) calls for previsions, or for other models such as the powercone model \cite{TKP:nondet:prob}, which are isomorphic under weak assumptions \cite{JGL-mscs16,KP:mixed}. Each of these generalizations is non-trivial in itself. Already as far as quasi-metrics are concerned, there are at least two distinct notions of completeness, and the one we shall consider here, Yoneda-completeness, admits several refinements (algebraicity, continuity). By comparison, all those notions of completeness coincide on metric spaces. The open ball topology is probably not the right topology on a quasi-metric space either. The paper \cite{Gou-fossacs08b} shows that one can do a lot with open ball topologies, but sometimes under strange assumptions, such as total boundedness, or symcompactness.
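As an aside, Kantorovich's second form of $\dKRH$ recalled above is easy to experiment with on a finite metric space, where the minimisation over couplings $\tau$ is a small linear program. The following sketch (a hypothetical illustration assuming SciPy is available; it plays no role in the quasi-metric development of this paper) computes it:

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(d, mu, nu):
    """Minimum of the integral of d(x, y) dtau over couplings tau whose first
    and second marginals are mu and nu, on an n-point space with cost matrix d."""
    n = len(mu)
    A_eq = np.zeros((2 * n, n * n))          # tau flattened row-major
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0     # row sums:    sum_j tau[i, j] = mu[i]
        A_eq[n + i, i::n] = 1.0              # column sums: sum_i tau[i, j] = nu[j]
    res = linprog(np.asarray(d, float).ravel(), A_eq=A_eq,
                  b_eq=np.concatenate([mu, nu]), bounds=(0, None),
                  method="highs")
    return res.fun
```

On a two-point space at distance $1$, moving mass $0.4$ from one point to the other costs exactly $0.4$. Note that nothing in this program requires $d$ to be symmetric, which is precisely the quasi-metric situation studied in this paper.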
The right topology, or so we claim, is the so-called $d$-Scott topology, which we shall (re)define below. Again, in the metric case the $d$-Scott topology and the open ball topology are the same. Finally, let us mention that we shall retrieve the classical case of metrics for probability distributions by restricting to metrics, and to linear previsions. This will be up to a small detail: in general, linear previsions are integration functionals not of measures, but of continuous valuations \cite{Jones:proba,JP:proba}, a closely related concept that behaves better with topological notions. Measures and continuous valuations are isomorphic concepts in several important settings that go well beyond Hausdorff spaces \cite{KL:measureext}, and we shall see that this is also true in our case, under natural assumptions (Proposition~\ref{prop:mes=val}). This paper is long, to a point that it may discourage the bravest reader. I plan to split it into more manageable subunits that I will attempt to publish separately. In that context, the role of the present document will be to describe the big picture, at least, which those subunits won't convey. \subsection{Outline} We recapitulate some of the basic notions we need in Section~\ref{sec:basics-quasi-metric}, in particular on sober spaces, quasi-metric spaces, standard quasi-metric spaces, and Lipschitz continuous functions. The space of formal balls $\mathbf B (X, d)$ of a quasi-metric space $X, d$ is probably the single most important artifact that has to be considered in the study of quasi-metric spaces. In Section~\ref{sec:formal-ball-monad} we show that the formal ball functor $\mathbf B$ is part of a monad, even a Kock-Z\"oberlein monad, on the category of standard quasi-metric spaces and $1$-Lipschitz continuous maps. Although this has independent interest, one may say that this should be outside the scope of the present paper. 
However, the spaces that are algebras of that monad, in particular the free algebras $\mathbf B (X, d)$, will be important in the sequel, since, as we shall show in Section~\ref{sec:lipsch-regul-spac}, they are all Lipschitz regular. Lipschitz regularity is a new notion that arises naturally from the consideration of formal balls, and which will be instrumental in later sections. In Section~\ref{sec:comp-subs-quasi}, we characterize the compact saturated subsets of a continuous Yoneda-complete quasi-metric space. The classical proof that $\dKRH$ is complete, in the Polish case, goes through the study of tight families of measures, and for that one usually appeals to a result by Hausdorff stating that, in a complete metric space, the compact subsets are the closed precompact sets. I had initially meant the results of Section~\ref{sec:comp-subs-quasi} to be a preparation for a similar argument, but tightness does not seem to be a fruitful concept in the quasi-metric setting. Instead, what worked is a long-winded route that explores the properties of the various spaces of lower semicontinuous, resp., Lipschitz continuous maps from a quasi-metric space $X, d$ to $\overline{\mathbb{R}}_+$. We introduce the various spaces of functions that we need in Section~\ref{sec:cont-lipsch-maps}, and examine several ways of approximating functions by $\alpha$-Lipschitz continuous maps there. In Section~\ref{sec:topol-spac-maps}, we show that every continuous Yoneda-complete quasi-metric space is consonant, and in fact a bit more. This allows us to show that the Scott topology coincides with the compact-open topology on the space of lower semicontinuous maps of such a space. We refine this in Section~\ref{sec:topol-lform_-x}, where we show that the induced topologies on the subspaces $\Lform_\alpha (X, d)$ of $\alpha$-Lipschitz continuous maps coincide with the topology of pointwise convergence.
This is a key step: it follows that those spaces $\Lform_\alpha (X, d)$ are stably compact. That will be crucial in establishing all of our algebraicity results in the sequel, on spaces of previsions. In Section~\ref{sec:topol-lform_-x-1}, we examine the question whether the topology on the space $\Lform_\infty (X, d)$ of all Lipschitz continuous maps is determined by the subspace topologies on $\Lform_\alpha (X, d)$, $\alpha > 0$. This is a nasty subtle issue, which we explain there. We show that the topology on $\Lform_\infty (X, d)$ is indeed determined if $X, d$ is Lipschitz regular, vindicating our earlier introduction of the notion. If the topology of $\Lform_\infty (X, d)$ is indeed determined, then we shall see that our quasi-metric analogue of $\dKRH$, which we again write as $\dKRH$, is Yoneda-complete on spaces of previsions in Section~\ref{sec:quasi-metr-prev}. Since we only know that the topology of $\Lform_\infty (X, d)$ is determined when $X, d$ is Lipschitz regular, and that seems too restrictive in practice, we show that we can obtain Yoneda-completeness under less demanding conditions, provided that our spaces of previsions satisfy a certain property on their supports. That condition again relies crucially on the space $\mathbf B (X, d)$ of formal balls, and the fact that $\mathbf B (X, d)$ embeds the original space $X$ and is always Lipschitz regular. In the remaining section, we explore various kinds of previsions. I have said that previsions were introduced as models of probabilistic choice, non-deterministic choice, or both. In Section~\ref{sec:case-cont-valu}, we examine the case of linear previsions. Those are isomorphic to the so-called continuous valuations, and provide models for pure probabilistic (or subprobabilistic) choice. 
Since measures are a better-known notion than continuous valuations, we start by showing that, in $\omega$-continuous Yoneda-complete quasi-metric spaces (in particular, on all Polish spaces), the (sub)normalized continuous valuations are the same thing as the (sub)probability measures. There is another reason for that: this correspondence between continuous valuations and measures is precisely the reason why the property on supports mentioned above is satisfied on linear previsions. The rest of the section establishes that the spaces of (sub)normalized continuous valuations are algebraic Yoneda-complete (resp., continuous Yoneda-complete) under a bounded variant $\dKRH^a$ of the $\dKRH$ quasi-metric, provided we start from an algebraic Yoneda-complete (resp., continuous Yoneda-complete) space of points, and that the $\dKRH^a$-Scott topology coincides with the weak topology. We shall also establish a splitting lemma and prove an analogue of the Kantorovich-Rubinshte\u\i n theorem $\dKRH (\mu, \nu) = \min_\tau \int_{(x, y) \in X^2} d (x, y) d\tau$ for continuous quasi-metric spaces there. We follow a similar plan with the so-called \emph{Hoare powerdomain} of $X$, namely the hyperspace of closed subsets of $X, d$ in Section~\ref{sec:hoare-powerdomain}, with a quasi-metric $\dH$ that is close to one half of the definition of the Hausdorff metric. The Hoare powerdomain models non-deterministic choice in computer science, and more precisely \emph{angelic} non-determinism, where one might think of the scheduler trying to pick the next action that is most favorable to you. This choice of a Hausdorff-like metric $\dH$ works because that hyperspace is isomorphic to the space of so-called sublinear, discrete previsions, and the isomorphism transports $\dKRH$ over to become $\dH$ on closed sets.
Under the assumption that $X, d$ is algebraic Yoneda-complete (resp., continuous Yoneda-complete), we show that this hyperspace is algebraic Yoneda-complete (resp., continuous Yoneda-complete), and that the $\dH$-Scott topology coincides with the lower Vietoris topology, a standard topology that is one half of the well-known Vietoris (a.k.a.\ hit-and-miss) topology. We again follow a similar plan with the \emph{Smyth powerdomain} of $X$, that is, the hyperspace of compact saturated subsets of $X, d$, with a quasi-metric $\dQ$ that is exactly the other half of the definition of the Hausdorff metric, in Section~\ref{sec:smyth-powerdomain}. The Smyth powerdomain models \emph{demonic} non-determinism in computer science, where the scheduler tries to pick the worst possible next action. The Smyth powerdomain is isomorphic to the space of superlinear, discrete previsions, and transports $\dKRH$ over to $\dQ$. Again we show that the Smyth powerdomain of an algebraic Yoneda-complete (resp., continuous Yoneda-complete) space $X, d$ is algebraic Yoneda-complete (resp., continuous Yoneda-complete), and that its $\dQ$-Scott topology coincides with the upper Vietoris topology, the other half of the Vietoris topology. We turn to the spaces of sublinear and superlinear previsions in Section~\ref{sec:other-previsions}. The sublinear, not necessarily discrete, previsions model all possible mixtures of (sub)probabilistic and angelic non-deterministic choice. The superlinear previsions model all possible mixtures of (sub)probabilistic and demonic non-deterministic choice. We again obtain similar results: on an algebraic (resp., continuous) Yoneda-complete space $X, d$, they form algebraic (resp., continuous) Yoneda-complete spaces under the $\dKRH^a$ quasi-metrics. We show that they are retracts of the Hoare, resp.\ the Smyth powerdomain on spaces of continuous valuations through $1$-Lipschitz continuous maps. 
It follows that they are isomorphic to the upper and lower powercones of Tix, Keimel, and Plotkin \cite{TKP:nondet:prob}, which are convex variants of the above Hoare and Smyth powerdomains of valuations. Those isomorphisms were known when all the spaces considered are dcpos \cite{KP:mixed}, or topological spaces \cite{JGL-mscs16}, under some assumptions. We show that they are also isomorphisms of quasi-metric spaces, i.e., (continuous) isometries. Finally, we show that the spaces of sublinear and superlinear previsions on a continuous Yoneda-complete space have identical $\dKRH^a$-Scott and weak topologies. Section~\ref{sec:other-previsions} relies on essentially everything we have done until then, and also depends on a minimax theorem in the style of Ky Fan \cite{KyFan:minimax} or Frenk and Kassay \cite{FK:minimax}, but which only requires one space to be compact, \emph{without} the Hausdorff separation axiom. We had announced that generalized minimax theorem as Theorem~6 in \cite{Gou-fossacs08b}, without a proof. We prove it here, in Section~\ref{sec:minimax-theorem-1}. Section~\ref{sec:forks-quasi-lenses} is devoted to a similar study on the Plotkin powerdomain first, on spaces of forks second. The Plotkin powerdomain traditionally models erratic non-determinism, namely the mixture of angelic and demonic non-determinism. Forks model mixed probabilistic and erratic non-deterministic choice. The Plotkin powerdomain, a.k.a.\ the convex powerdomain, denotes several slightly different constructions in the literature. The most natural one in our case is Heckmann's $\mathbf A$-valuations \cite{Heckmann:absval}, which is directly related to what we call discrete forks. If the underlying space is sober, we can interpret elements of the Plotkin powerdomain as certain compact subsets called \emph{quasi-lenses}, and the $\dKRH$ metric translates to a slight variant $\dP$ of a two-sided Hausdorff quasi-metric. 
On metric spaces, we retrieve the usual Hausdorff metric on non-empty compact subsets. We show that the Plotkin powerdomain of an algebraic (resp., continuous) Yoneda-complete quasi-metric space is algebraic (resp., continuous) Yoneda-complete, and that the $\dP$-Scott topology coincides with a variant of the usual two-sided Vietoris topology. We again obtain similar results for spaces of forks: the space of forks on an algebraic (resp., continuous) Yoneda-complete quasi-metric space is algebraic (resp., continuous) Yoneda-complete with the $\dKRH^a$ quasi-metric, and the $\dKRH^a$-Scott topology is the weak topology. We conclude with some open questions in Section~\ref{sec:open-questions}. \section{Basics} \label{sec:basics-quasi-metric} \subsection{General Topology} \label{sec:general-topology} We refer the reader to \cite{JGL-topology} for basic notions and theorems of topology, domain theory, and the theory of quasi-metric spaces. The book \cite{GHKLMS:contlatt} is the standard reference on domain theory, and we will assume known the notions of directed complete posets (dcpo), Scott-continuous functions, the way-below relation $\ll$, and so on. We write $\uuarrow x$ for the set of points $y$ such that $x \ll y$. The Scott topology on a poset consists of the Scott-open subsets, the upwards-closed subsets $U$ such that every directed family that has a supremum in $U$ must intersect $U$. A Scott-continuous map between posets is one that is monotonic and preserves existing directed suprema, and this is equivalent to requiring that it is continuous for the underlying Scott topologies. The topic of the present paper is quasi-metric spaces of previsions. Chapters~6 and~7 of \cite{JGL-topology} are a recommended read on that subject. The paper \cite{JGL:formalballs} gives additional information on quasi-metric spaces, which we shall also rely on. We shall introduce some of the concepts in Section~\ref{sec:quasi-metric-spaces}, and others as we require them.
As far as topology is concerned, compactness does not imply separation. In other words, we call a subset $K$ of a topological space compact if and only if every open cover of $K$ contains a finite subcover. This property is sometimes called quasicompactness. We shall always write $\leq$ for the specialization preordering of a topological space: $x \leq y$ if and only if every open neighborhood of $x$ is also an open neighborhood of $y$, if and only if $x$ is in the closure of $y$. As a result, the closure of a single point $y$ is also its downward closure $\dc y$. In general, we write $\dc A$ for the downward closure of any set $A$, $\upc A$ for its upward closure, and $\upc x = \upc \{x\}$. A subset $A$ of a topological space is saturated if and only if it is equal to the intersection of its open neighborhoods. Equivalently, $A$ is saturated if and only if it is upwards-closed, i.e., $x \in A$ and $x \leq y$ imply $y \in A$. We write $\upc A = \{y \mid \exists x \in A. x \leq y\}$ for the upward closure of $A$. This is also the saturation of $A$, defined as the intersection of the open neighborhoods of $A$. If $K$ is compact, then $\upc K$ is compact, too, and saturated. We shall regularly rely on sobriety. The sober spaces are exactly the Stone duals of locales, but here is a more elementary description. A closed subset $C$ of a topological space $X$ is \emph{irreducible} if and only if it is non-empty, and for any two closed subsets $C_1$, $C_2$ such that $C \subseteq C_1 \cup C_2$, $C$ is included in $C_1$ or in $C_2$. Equivalently, $C$ is irreducible if and only if for any two open subsets $U_1$ and $U_2$ that both intersect $C$, $U_1 \cap U_2$ also intersects $C$. The closures of single points are always irreducible. A $T_0$ space is \emph{sober} if and only if those are its only irreducible closed subsets. Every Hausdorff space is sober, and every continuous dcpo is sober in its Scott topology \cite[Proposition~8.2.12]{JGL-topology}. 
Every sober space is \emph{well-filtered} (see Proposition~8.3.5, loc.cit.). A space is well-filtered if and only if, for every filtered intersection $\bigcap_{i \in I} Q_i$ of compact saturated subsets that is included in some open set $U$, it is already the case that $Q_i \subseteq U$ for some $i \in I$. In a well-filtered space, filtered intersections of compact saturated sets are compact saturated (Proposition~8.3.6, loc.cit.). \subsection{Quasi-Metric Spaces} \label{sec:quasi-metric-spaces} Let $\overline{\mathbb{R}}_+$ be the set of extended non-negative reals. A \emph{quasi-metric} on a set $X$ is a map $d \colon X \times X \to \overline{\mathbb{R}}_+$ satisfying: $d (x, x)=0$; $d (x, z) \leq d (x, y) + d (y, z)$ (triangular inequality); $d(x,y)=d(y,x)=0$ implies $x=y$. Given a quasi-metric space $X, d$, the \emph{open ball} $B^d_{x, <r}$ with center $x \in X$ and radius $r \in \mathbb{R}_+$ is $\{y \in X \mid d (x, y) < r\}$. The open ball topology is the coarsest topology containing all open balls, and is the standard topology on metric spaces. In the realm of quasi-metric spaces, we prefer to use the \emph{$d$-Scott topology}. This is defined as follows. A \emph{formal ball} is a pair $(x, r)$ of a point $x \in X$ (the center) and a number $r \in \mathbb{R}_+$ (the radius). Formal balls are ordered by $(x, r) \leq^{d^+} (y, s)$ iff $d (x, y) \leq r-s$, and form a poset $\mathbf B (X, d)$. We equip the latter with its Scott topology. There is an injective map $x \mapsto (x, 0)$ from $X$ to $\mathbf B (X, d)$, and the $d$-Scott topology is the coarsest topology on $X$ that makes it continuous. This allows us to see $X$ as a topological subspace of $\mathbf B (X, d)$. The notation $\leq^{d^+}$ comes from the fact that it is the specialization ordering of $\mathbf B (X, d), d^+$, where the quasi-metric $d^+$ is defined by $d^+ ((x, r), (y, s)) = \max (d (x, y) -r +s, 0)$.
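To illustrate the ordering $\leq^{d^+}$, here is a small worked computation (an illustration of ours, not needed for the development), carried out on $\mathbb{R}$ with its usual metric $d (x, y) = |x - y|$:

```latex
% Formal balls over (R, |x-y|): (x, r) <=^{d+} (y, s) iff |x-y| <= r-s.
(0, 3) \leq^{d^+} (1, 1), \quad \text{since } d (0, 1) = 1 \leq 3 - 1 = 2.
```

Accordingly, the actual closed ball of center $1$ and radius $1$, namely $[0, 2]$, is included in the actual closed ball of center $0$ and radius $3$, namely $[-3, 3]$. On any metric space, $(x, r) \leq^{d^+} (y, s)$ implies that the closed ball of center $y$ and radius $s$ is included in the one of center $x$ and radius $r$, by the triangular inequality; this is what justifies the name ``formal ball''.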
The $d$-Scott topology coincides with the open ball topology when $d$ is a metric \cite[Proposition~7.4.46]{JGL-topology}, or when $X, d$ is Smyth-complete \cite[Proposition~7.4.47]{JGL-topology}. It coincides with the generalized Scott topology of \cite{BvBR:gms} when $X, d$ is an algebraic Yoneda-complete quasi-metric space \cite[Exercise~7.4.69]{JGL-topology}. We shall define all notions when they are required. Algebraicity will be defined later. For now, let us make clear what we understand by a Yoneda-complete quasi-metric space. Recall from Section~7.2.1 of \cite{JGL-topology} that a \emph{Cauchy-weighted net} ${(x_i, r_i)}_{i \in I, \sqsubseteq}$ is a monotone net of formal balls on $X, d$ (i.e., $i \sqsubseteq j$ implies $(x_i, r_i) \leq^{d^+} (x_j, r_j)$) such that $\inf_{i \in I} r_i = 0$. The underlying net ${(x_i)}_{i \in I, \sqsubseteq}$ is then called \emph{Cauchy-weightable}. A point $x \in X$ is a \emph{$d$-limit} of the latter net if and only if, for every $y \in X$, $d (x, y) = \limsup_{i \in I, \sqsubseteq} d (x_i, y)$. This is equivalent to: for every $y \in X$, $d (x, y)$ is the supremum of the monotone net ${(d (x_i, y) - r_i)}_{i \in I, \sqsubseteq}$ \cite[Lemma~7.4.9]{JGL-topology}, a formula which we shall prefer for its simplicity. The $d$-limit is unique if it exists. Then $X, d$ is \emph{Yoneda-complete} if and only if every Cauchy-weightable net has a $d$-limit. (Or: if and only if every Cauchy net has a $d$-limit; but Cauchy-weighted nets will be easier to work with.) This is also equivalent to requiring that $\mathbf B (X, d)$ is a dcpo, and in that case, the least upper bound $(x, r)$ of ${(x_i, r_i)}_{i \in I, \sqsubseteq}$ is given by $r = \inf_{i \in I} r_i$ and $x$ is the $d$-limit of ${(x_i)}_{i \in I, \sqsubseteq}$. This is the Kostanek-Waszkiewicz Theorem \cite[Theorem~7.4.27]{JGL-topology}. 
\begin{exa} \label{exa:creal} $\overline{\mathbb{R}}_+$ comes with a natural quasi-metric $\dreal$, defined by $\dreal (x, y) = 0$ if $x \leq y$, $\dreal (+\infty, y) = +\infty$ if $y \neq +\infty$, $\dreal (x, y) = x-y$ if $x > y$ and $x \neq +\infty$. Then $\leq^{\dreal^+}$ is the usual ordering $\leq$. We check that the Scott topology on $\overline{\mathbb{R}}_+$ coincides with the $\dreal$-Scott topology. To this end, observe that $\mathbf B (\overline{\mathbb{R}}_+, \dreal)$ is order-isomorphic to $C = \{(a, b) \in (\mathbb{R} \cup \{+\infty\}) \times ]-\infty, 0] \mid a-b \geq 0\}$ through the map $(x, r) \mapsto (x-r, -r)$. Since $C$ is a Scott-closed subset of a continuous dcpo, it is itself a continuous dcpo. A base of the Scott topology on $C$ is given by open subsets of the form $\uuarrow (a, b) = \{(c, d) \mid a<c, b<d\}$, hence a base of the Scott topology on $\mathbf B (\overline{\mathbb{R}}_+, \dreal)$ is given by sets of the form $\{(x, r) \in \mathbf B (\overline{\mathbb{R}}_+, \dreal) \mid a < x-r, b <-r\}$, $(a, b) \in C$. The intersections of the latter with $\overline{\mathbb{R}}_+$ are intervals of the form $\overline{\mathbb{R}}_+ \cap ]a, +\infty]$, $a \in \mathbb{R} \cup \{+\infty\}$. Those are exactly the Scott open subsets of $\overline{\mathbb{R}}_+$. \end{exa} \begin{exa} \label{exa:poset} Any poset $X, \leq$ gives rise to a quasi-metric space in a canonical way, by defining $d_\leq (x, y)$ as $0$ if $x \leq y$, $+\infty$ otherwise. The $d_\leq$-Scott topology is exactly the Scott topology on $X$ \cite[Example~1.8]{JGL:formalballs}. \end{exa} To avoid certain pathologies, we shall concentrate on \emph{standard} quasi-metric spaces \cite[Section~2]{JGL:formalballs}. $X, d$ is standard if and only if, for every directed family of formal balls ${(x_i, r_i)}_{i \in I}$, for every $s \in \mathbb{R}_+$, ${(x_i, r_i)}_{i \in I}$ has a supremum in $\mathbf B (X, d)$ if and only if ${(x_i, r_i+s)}_{i \in I}$ has a supremum in $\mathbf B (X, d)$. 
Writing the supremum of the former as $(x, r)$, we then have that $r = \inf_{i \in I} r_i$, and that the supremum of the latter is $(x, r+s)$---this holds not only for $s \in \mathbb{R}_+$, but for every $s \geq -r$. In particular, the radius map $(x, r) \mapsto r$ is Scott-continuous from $\mathbf B (X, d)$ to $\overline{\mathbb{R}}_+^{op}$ ($\overline{\mathbb{R}}_+$ with the opposite ordering $\geq$), and for every $s \in \mathbb{R}_+$, the map $\_ + s : (x, r) \mapsto (x, r+s)$ is Scott-continuous from $\mathbf B (X, d)$ to itself \cite[Proposition~2.4]{JGL:formalballs}. Most quasi-metric spaces---not all---are standard: all metric spaces, all Yoneda-complete quasi-metric spaces, all posets are standard \cite[Proposition~2.2]{JGL:formalballs}. $\overline{\mathbb{R}}_+, \dreal$ is standard, being Yoneda-complete. Given a map $f$ from a quasi-metric space $X, d$ to a quasi-metric space $Y, \partial$, $f$ is \emph{$\alpha$-Lipschitz} if and only if $\partial (f (x), f (y)) \leq \alpha d (x, y)$ for all $x, y \in X$. (When $\alpha=0$ and $d (x, y) = +\infty$, we take the convention that $0 \cdot (+\infty) = 0$.) For every $\alpha \in \mathbb{R}_+$, and every map $f \colon X, d \to Y, \partial$, let $\mathbf B^\alpha (f)$ map $(x, r) \in \mathbf B (X, d)$ to $(f (x), \alpha r) \in \mathbf B (Y, \partial)$. Then $f$ is $\alpha$-Lipschitz if and only if $\mathbf B^\alpha (f)$ is monotonic. Contrary to the case of spaces with the open ball topology, a Lipschitz map need not be continuous. There is a notion of Lipschitz \emph{Yoneda-continuous} map, characterized as preserving so-called $d$-limits. When both $X, d$ and $Y, \partial$ are standard, $f$ is $\alpha$-Lipschitz Yoneda-continuous if and only if $\mathbf B^\alpha (f)$ is Scott-continuous \cite[Lemma~6.3]{JGL:formalballs}.
We take the latter as our definition: \begin{defi}[$\alpha$-Lipschitz continuous] \label{defn:Lipcont} A map $f \colon X, d \to Y, \partial$ between quasi-metric spaces is \emph{$\alpha$-Lipschitz continuous} if and only if $\mathbf B^\alpha (f)$ is Scott-continuous. \end{defi} The phrase ``$\alpha$-Lipschitz continuous'' should not be read as ``$\alpha$-Lipschitz \emph{and} continuous'', but rather as another notion of continuity. The two notions are actually equivalent in the case of standard quasi-metric spaces, as we show in Proposition~\ref{prop:cont} below. The proof is similar to that of Proposition~7.4.52 of \cite{JGL-topology}, which states an analogous result for Yoneda-complete quasi-metric spaces, and relies on the following lemma, similar to Lemma~7.4.48 of loc.cit. \begin{lem} \label{lemma:hole} Let $X, d$ be a standard quasi-metric space. Every \emph{open hole} $T^d_{x, > \epsilon}$, defined as $\{y \in X \mid d (y, x) > \epsilon\}$, where $\epsilon \in \mathbb{R}_+$, is open in the $d$-Scott topology: it is the intersection of the Scott-open set $T^{d^+}_{(x, 0), > \epsilon}$ with $X$. \end{lem} \proof Let $V$ be the open hole $T^{d^+}_{(x, 0), > \epsilon}$. This is the set of formal balls $(y, s)$ such that $d (y, x) > s+\epsilon$. We claim that $V$ is upwards-closed: for every $(y, s) \in V$ and every $(z, t)$ such that $(y, s) \leq^{d^+} (z, t)$, we have $d (y, x) > s+\epsilon$ and $d (y, z) \leq s-t$; by the triangular inequality $d (y, x) \leq d (y, z) + d (z, x) \leq s-t+d(z,x)$, so $d (z, x) > t+\epsilon$, showing that $(z, t)$ is in $V$. Next we claim that $V$ is Scott-open. Let ${(y_i, s_i)}_{i \in I}$ be a directed family of formal balls that has a supremum $(y, s)$ in $V$. Since $X, d$ is standard, $(y, s+2\epsilon)$ is the supremum of the directed family ${(y_i, s_i+2\epsilon)}_{i \in I}$. If no $(y_i, s_i)$ were in $V$, then we would have $d (y_i, x) \leq s_i + \epsilon$, i.e., $(y_i, s_i+2\epsilon) \leq^{d^+} (x, \epsilon)$ for every $i \in I$.
Since $(y, s+2\epsilon)$ is the least upper bound of the family, $(y, s+2\epsilon) \leq^{d^+} (x, \epsilon)$, so $d (y, x) \leq s+\epsilon$, contradicting $(y, s) \in V$. Therefore $(y_i, s_i)$ is in $V$ for some $i \in I$, showing that $V$ is Scott-open. Finally, $V \cap X$ consists of those points $y$ such that $d (y, x) > 0 + \epsilon$, hence is equal to $T^d_{x, > \epsilon}$, whence the claim. \qed \begin{prop} \label{prop:cont} Let $X, d$ and $Y, \partial$ be two quasi-metric spaces, $\alpha > 0$, and $f$ be a map from $X$ to $Y$. Consider the following claims: \begin{enumerate} \item $f$ is $\alpha$-Lipschitz continuous in the sense of Definition~\ref{defn:Lipcont}; \item $f$ is $\alpha$-Lipschitz and continuous, from $X$ with its $d$-Scott topology to $Y$ with its $\partial$-Scott topology. \end{enumerate} Then (1) implies (2), and (2) implies (1) provided that $X, d$ and $Y, \partial$ are standard. \end{prop} \proof (1) $\limp$ (2). Assume $f$ is $\alpha$-Lipschitz continuous. Let $V$ be a $\partial$-Scott open subset of $Y$. By definition, and equating $Y$ with a subspace of $\mathbf B (Y, \partial)$, $V$ is the intersection of some Scott-open subset $\mathcal V$ of $\mathbf B (Y, \partial)$ with $Y$. Since $\mathbf B^\alpha (f)$ is Scott-continuous, $\mathcal U = \mathbf B^\alpha (f)^{-1} (\mathcal V)$ is Scott-open in $\mathbf B (X, d)$. Look at $U = \mathcal U \cap X$, a $d$-Scott open subset of $X$. We note that $x \in U$ if and only if $(x, 0) \in \mathcal U$ if and only if $\mathbf B^\alpha (f) (x, 0) = (f (x), 0)$ is in $\mathcal V$, if and only if $f (x) \in V$, so that $U = f^{-1} (V)$. Hence $f$ is continuous. (2) $\limp$ (1), assuming $X, d$ and $Y, \partial$ standard. Assume $f$ is $\alpha$-Lipschitz and continuous. Since $f$ is $\alpha$-Lipschitz, $\mathbf B^\alpha (f)$ is monotonic. In order to show that it is Scott-continuous, consider an arbitrary directed family ${(x_i, r_i)}_{i \in I}$ in $\mathbf B (X, d)$, with a supremum $(x, r)$.
We see that family as a monotone net, and let $i \sqsubseteq j$ if and only if $(x_i, r_i) \leq^{d^+} (x_j, r_j)$. Since $X, d$ is standard, $r = \inf_{i \in I} r_i$ and $(x, 0)$ is the supremum of the directed family ${(x_i, r_i-r)}_{i \in I}$. $\mathbf B^\alpha (f) (x, r) = (f (x), \alpha r)$ is an upper bound of ${(f (x_i), \alpha r_i)}_{i \in I}$ by monotonicity. Assume that it is not least. Then there is a formal ball $(y, s)$ such that $(f (x_i), \alpha r_i) \leq^{\partial^+} (y, s)$ for every $i \in I$, i.e., $\partial (f (x_i), y) \leq \alpha r_i - s$ for every $i \in I$, and such that $(f (x), \alpha r)$ is not below $(y, s)$, i.e., $\partial (f (x), y) > \alpha r - s$. Pick a real number $\eta$ such that $\partial (f (x), y) > \eta > \alpha r - s$. In particular, $f (x)$ is in the open hole $T^\partial_{y, > \eta}$, which is $\partial$-Scott open by Lemma~\ref{lemma:hole}. Since $f$ is continuous, $U = f^{-1} (T^\partial_{y, > \eta})$ is $d$-Scott open, and contains $x$ by definition. Let $\mathcal U$ be a Scott-open subset of $\mathbf B (X, d)$ whose intersection with $X$ is equal to $U$. Since $(x, 0) \in \mathcal U$, $(x_i, r_i-r)$ is in $\mathcal U$ for all $i$ large enough; in other words, there is an $i_0 \in I$ such that $(x_i, r_i-r) \in \mathcal U$ for all $i \in I$ such that $i_0 \sqsubseteq i$. Since $\mathcal U$ is upwards-closed, $(x_i, 0)$ is in $\mathcal U$, so $x_i$ is in $U$, which implies that $f (x_i)$ is in $T^\partial_{y, > \eta}$, for every $i \sqsupseteq i_0$. The latter expands to $\partial (f (x_i), y) > \eta$ for every $i \sqsupseteq i_0$. However, $\partial (f (x_i), y) \leq \alpha r_i - s$ for every $i \in I$, and since $r = \inf_{i \in I} r_i$ is also equal to $\inf_{i \in I, i_0 \sqsubseteq i} r_i$ (by directedness of $I$ and the fact that $i \sqsubseteq j$ implies $r_i \geq r_j$), we obtain that $\alpha r - s \geq \eta$. This is impossible since $\eta > \alpha r - s$. \qed The latter has the following nice consequence.
\begin{lem} \label{lemma:d(_,x)} Let $X, d$ be a standard quasi-metric space. For every point $x' \in X$, the function $d (\_, x') \colon x \mapsto d (x, x')$ is $1$-Lipschitz continuous from $X, d$ to $\overline{\mathbb{R}}_+, \dreal$. \end{lem} \proof It is $1$-Lipschitz because of the triangular inequality. Relying on Proposition~\ref{prop:cont}, and since $\overline{\mathbb{R}}_+, \dreal$ is standard, we only need to check that $d (\_, x')$ is continuous. By Example~\ref{exa:creal}, the $\dreal$-Scott topology is the Scott topology on $\overline{\mathbb{R}}_+$, hence it suffices to show that the inverse image of the Scott open $]\epsilon, +\infty]$ by $d (\_, x')$ is $d$-Scott open. That inverse image is the open hole $T^d_{x', > \epsilon}$, and we conclude by Lemma~\ref{lemma:hole}. \qed Of particular interest are the Lipschitz continuous functions from $X, d$ to $\overline{\mathbb{R}}_+, \dreal$. Recall that $f \colon X, d \to \overline{\mathbb{R}}_+, \dreal$ is $\alpha$-Lipschitz continuous if and only if $\mathbf B^\alpha (f)$ is Scott-continuous. $\mathbf B (\overline{\mathbb{R}}_+, \dreal)$ is order-isomorphic to the Scott-closed set $C = \{(a, b) \in (\mathbb{R} \cup \{+\infty\}) \times ]-\infty, 0] \mid a-b \geq 0\}$, through the map $(x, r) \mapsto (x-r, -r)$: see Example~\ref{exa:creal}. Every order isomorphism is Scott-continuous. Therefore $f$ is $\alpha$-Lipschitz continuous if and only if the composition $\xymatrix{\mathbf B (X, d) \ar[r]^(0.4){\mathbf B^\alpha (f)} & \mathbf B (\overline{\mathbb{R}}_+, \dreal) \ar[r]^(0.6){\cong} & C}$ is Scott-continuous. That composition is $(x, r) \mapsto (f' (x, r), -\alpha r)$, where $f'$ is defined by $f' (x, r) = f (x) - \alpha r$. The map $(x, r) \mapsto -\alpha r$ is Scott-continuous when $X, d$ is standard. Hence we obtain the second part of the following result. The first part is obvious. \begin{lem} \label{lemma:f'} Let $X, d$ be a standard quasi-metric space, $\alpha > 0$, and let $f$ be a map from $X$ to $\overline{\mathbb{R}}_+$.
Let $f' \colon \mathbf B (X, d) \to \mathbb{R} \cup \{+\infty\}$ be defined by $f' (x, r) = f (x) - \alpha r$. Then: \begin{enumerate} \item $f$ is $\alpha$-Lipschitz if and only if $f'$ is monotonic; \item $f$ is $\alpha$-Lipschitz continuous if and only if $f'$ is Scott-continuous. \qed \end{enumerate} \end{lem} Lemma~\ref{lemma:f'} is Lemma~6.4 of \cite{JGL:formalballs}, where Lipschitz Yoneda-continuous maps are used instead of Lipschitz continuous maps. The two notions are equivalent on standard quasi-metric spaces, as we have noticed before Definition~\ref{defn:Lipcont}. The Lipschitz continuous functions to $\overline{\mathbb{R}}_+, \dreal$ are closed under several constructions, which we recapitulate here. \begin{prop} \label{prop:alphaLip:props} Let $X, d$ be a standard quasi-metric space, $\alpha, \beta \in \mathbb{R}_+$, and $f$, $g$ be maps from $X, d$ to $\overline{\mathbb{R}}_+, \dreal$. \begin{enumerate} \item If $f$ is $\beta$-Lipschitz continuous, then $\alpha f$ is $\alpha\beta$-Lipschitz continuous; \item If $f$ is $\alpha$-Lipschitz continuous and $g$ is $\beta$-Lipschitz continuous then $f+g$ is $(\alpha+\beta)$-Lipschitz continuous; \item If $f$, $g$ are $\alpha$-Lipschitz continuous, then so are $\min (f, g)$ and $\max (f, g)$; \item If ${(f_i)}_{i \in I}$ is any family of $\alpha$-Lipschitz continuous maps, then the pointwise supremum $\sup_{i \in I} f_i$ is also $\alpha$-Lipschitz continuous. \item If $\alpha \leq \beta$ and $f$ is $\alpha$-Lipschitz continuous then $f$ is $\beta$-Lipschitz continuous. \item Every constant map is $\alpha$-Lipschitz continuous. \end{enumerate} \end{prop} \proof (1--5) were proved in \cite[Proposition~6.7~(2)]{JGL:formalballs}, and are easy consequences of Lemma~\ref{lemma:f'}~(2). 
For (6), using the same lemma, we observe that for each constant $a$, the map $(x, r) \mapsto a - \alpha r$ is Scott-continuous, because in a standard space, the radius map is Scott-continuous from $\mathbf B (X, d)$ to $\mathbb{R}_+^{op}$. \qed We shall also need the following result, which is obvious considering our definition of Lipschitz continuity. \begin{lem} \label{lemma:comp:Lip} Let $X, d$ and $Y, \partial$ and $Z, \mathfrak d$ be three quasi-metric spaces. For every $\alpha$-Lipschitz continuous map $f \colon X, d \to Y, \partial$ and every $\beta$-Lipschitz continuous map $g \colon Y, \partial \to Z, \mathfrak d$, $g \circ f$ is $\alpha\beta$-Lipschitz continuous. \end{lem} \proof $\mathbf B^{\alpha\beta} (g \circ f)$ maps $(x, r)$ to $(g (f (x)), \alpha\beta r) = \mathbf B^\beta (g) (f (x), \alpha r) = \mathbf B^\beta (g) (\mathbf B^\alpha (f) (x, r))$. Since $\mathbf B^\alpha (f)$ and $\mathbf B^\beta (g)$ are Scott-continuous by assumption, so is their composition $\mathbf B^{\alpha\beta} (g \circ f)$. \qed \section{The Formal Ball Monad} \label{sec:formal-ball-monad} We now examine the space $\mathbf B (X, d)$, with its quasi-metric $d^+ ((x, r), (y, s)) = \max (d (x, y) -r +s, 0)$. The following is the first part of Exercise~7.4.54 of \cite{JGL-topology}. It might seem a mistake that this does not require $X, d$ to be standard: to dispel any doubt, we give a complete proof. \begin{lem} \label{lemma:mu:cont} Let $X, d$ be a quasi-metric space. The map $\mu_X \colon ((x, r), s) \in \mathbf B (\mathbf B (X, d), d^+) \mapsto (x, r+s) \in \mathbf B (X, d)$ is Scott-continuous. \end{lem} \proof The map $\mu_X$ is monotonic: if $((x, r), s) \leq^{d^{++}} ((x', r'), s')$, then $d^+ ((x, r), (x', r')) \leq s-s'$, meaning that $\max (d (x, x') - r + r', 0) \leq s-s'$, and this implies $d (x, x') \leq r-r'+s-s'$, hence $(x, r+s) \leq^{d^+} (x', r'+s')$. We claim that $\mu_X$ is Scott-continuous. 
Consider a directed family of formal balls ${((x_i, r_i), \allowbreak s_i)}_{i \in I}$ in $\mathbf B (\mathbf B (X, d), d^+)$ with a supremum $((x, r), s)$. We must show that $(x, r+s)$ is the supremum of the directed family ${(x_i, r_i+s_i)}_{i \in I}$. It is certainly an upper bound, since $\mu_X$ is monotonic. Let $(y, t)$ be another upper bound of ${(x_i, r_i+s_i)}_{i \in I}$. Let $a = \max (t-s, 0)$, $b = t-a = \min (s, t)$. We claim that $((y, a), b)$ is an upper bound of ${((x_i, r_i), s_i)}_{i \in I}$. For every $i \in I$, by assumption $(x_i, r_i+s_i) \leq^{d^+} (y, t)$, so $d (x_i, y) \leq r_i+s_i - t$. We must check that $d^+ ((x_i, r_i), (y, a)) \leq s_i-b$, namely that $\max (d (x_i, y) - r_i + a, 0) \leq s_i-b$, and this decomposes into $d (x_i, y) \leq r_i - a + s_i - b$ and $s_i \geq b$. The latter is proved as follows: since $b = \min (s, t)$, $b \leq s$, and since $((x, r), s)$ is an upper bound of ${((x_i, r_i), s_i)}_{i \in I}$, $s \leq s_i$ for every $i \in I$. The former is equivalent to $d (x_i, y) \leq r_i + s_i - t$, since $a+b=t$, and this is our assumption. Since $((x, r), s)$ is the least upper bound of ${((x_i, r_i), s_i)}_{i \in I}$, $((x, r), s) \leq^{d^{++}} ((y, a), b)$, so $\max (d (x, y) - r + a, 0) \leq s-b$. In particular, $d (x, y) \leq r+s-a-b = r+s-t$, so $(x, r+s) \leq^{d^+} (y, t)$. This shows that $(x, r+s)$ is the least upper bound of ${(x_i, r_i+ s_i)}_{i \in I}$, hence that $\mu_X$ is Scott-continuous. \qed \begin{lem} \label{lemma:dScott=Scott} For every quasi-metric space $X, d$: \begin{enumerate} \item the map $\eta_{\mathbf B (X, d)} \colon (x, r) \mapsto ((x, r), 0)$ is Scott-continuous; \item the $d^+$-Scott topology on $\mathbf B (X, d)$ coincides with the Scott topology. \end{enumerate} \end{lem} \proof (1) This is \cite[Exercise~7.4.53]{JGL-topology}. Monotonicity is clear: if $(x, r) \leq^{d^+} (y, s)$, then $d (x, y) \leq r-s$, so $d^+ ((x, r), (y, s)) = \max (d (x, y) - r + s, 0) = 0$. 
For every directed family ${(x_i, r_i)}_{i \in I}$ in $\mathbf B (X, d)$, with supremum $(x, r)$, by monotonicity $((x, r), 0)$ is an upper bound of ${((x_i,r_i), 0)}_{i \in I}$. Consider another upper bound $((y, s), t)$. For every $i \in I$, $((x_i, r_i), 0) \leq^{d^{++}} ((y, s), t)$, namely $d^+ ((x_i, r_i), (y, s)) \leq 0 - t$. That implies $t=0$, and $d (x_i, y) - r_i + s \leq 0$. The latter means that $(x_i, r_i) \leq^{d^+} (y, s)$, and as this holds for every $i \in I$, $(x, r) \leq^{d^+} (y, s)$. Therefore $d^+ ((x, r), (y, s)) = \max (d (x, y) - r + s, 0) = 0$, showing that $((x, r), 0) \leq^{d^{++}} ((y, s), 0) = ((y, s), t)$. (2) This is Exercise~7.4.54 of \cite{JGL-topology}. Using (1), every $d^+$-Scott open subset $V$ of $\mathbf B (X, d)$ is Scott-open: by definition, $V = \eta_{\mathbf B (X, d)}^{-1} (\mathcal V)$ for some Scott-open subset $\mathcal V$ of $\mathbf B (\mathbf B (X, d), d^+)$, and since $\eta_{\mathbf B (X, d)}$ is Scott-continuous, $V$ is Scott-open. To show the converse implication, we observe that $\mu_X \circ \eta_{\mathbf B (X, d)}$ is the identity map, by Lemma~\ref{lemma:eta:mu}~$(i)$. Then for every Scott-open subset $V$ of $\mathbf B (X, d)$, $V$ is equal to $(\mu_X \circ \eta_{\mathbf B (X, d)})^{-1} (V)$, hence to $\eta_{\mathbf B (X, d)}^{-1} (\mathcal V)$ where $\mathcal V$ is the Scott-open subset $\mu_X^{-1} (V)$. This exhibits $V$ as a $d^+$-Scott open subset of $\mathbf B (X, d)$. \qed We write $\eta_X \colon X \to \mathbf B (X, d)$ for the embedding $x \mapsto (x, 0)$. \begin{lem} \label{lemma:eta:lipcont} Let $X, d$ be a standard quasi-metric space. The map $\eta_X \colon x \mapsto (x, 0)$ is $1$-Lipschitz continuous from $X, d$ to $\mathbf B (X, d), d^+$. \end{lem} \proof It is $1$-Lipschitz, because $d^+ ((x, 0), (y, 0)) = d (x, y)$. It is continuous by definition. Now apply Proposition~\ref{prop:cont}. \qed \begin{lem} \label{lemma:eta:mu} Let $X, d$ be a quasi-metric space.
The following relations hold: $(i)$ $\mu_X \circ \eta_{\mathbf B (X, d)} = \identity {\mathbf B (X, d)}$; $(ii)$ $\mu_X \circ \mathbf B^1 (\eta_X) = \identity {\mathbf B (X, d)}$; $(iii)$ $\mu_X \circ \mu_{\mathbf B (X, d)} = \mu_X \circ \mathbf B^1 (\mu_X)$; $(iv)$ $\eta_{\mathbf B (X, d)} \circ \mu_X \geq \identity {\mathbf B (\mathbf B (X, d), d^+)}$. \end{lem} \proof $(i)$ $\mu_X (\eta_{\mathbf B (X, d)} (x, r)) = \mu_X ((x, r), 0) = (x, r+0) = (x, r)$. $(ii)$ $\mu_X (\mathbf B^1 (\eta_X) (x, r)) = \mu_X (\eta_X (x), r) = \mu_X ((x, 0), r) = (x, 0+r) = (x, r)$. $(iii)$ $\mu_X (\mu_{\mathbf B (X, d)} (((x, r), s), t)) = \mu_X ((x, r), s+t) = (x, r+s+t)$, while $\mu_X (\mathbf B^1 (\mu_X) (((x, r), \allowbreak s), t)) = \mu_X (\mu_X ((x, r), s), t) = \mu_X ((x, r+s), t) = (x, r+s+t)$. $(iv)$ $\eta_{\mathbf B (X, d)} (\mu_X ((x, r), s)) = \eta_{\mathbf B (X, d)} (x, r+s) = ((x, r+s), 0)$. We must check that this is larger than or equal to $((x, r), s)$, namely that $d^+ ((x, r), (x, r+s)) \leq s - 0$. Since $d^+ ((x, r), (x, r+s)) = \max (d (x, x) -r +r+s, 0) = s$, this is clear. \qed A \emph{monad} on a category is the data of an endofunctor $T$, two natural transformations $\eta \colon \identity\relax \to T$ and $\mu \colon T^2 \to T$ satisfying: $\mu_X \circ \eta_{TX} = \identity {TX}$, $\mu_X \circ T \eta_X = \identity {TX}$, and $\mu_X \circ \mu_{TX}= \mu_X \circ T (\mu_X)$. The first three statements of Lemma~\ref{lemma:eta:mu} seem to indicate that $T = \mathbf B$ gives rise to a monad, where the functor $\mathbf B$ maps every quasi-metric space $X, d$ to the quasi-metric space $\mathbf B (X, d), d^+$, and every $1$-Lipschitz continuous map $f \colon X, d \to Y, \partial$ to $\mathbf B^1 (f) \colon (x, r) \mapsto (f (x), r)$. The devil hides in the details, as one says. We must work in a category of standard, not arbitrary quasi-metric spaces, for $\eta_X$ to be a morphism (see Lemma~\ref{lemma:eta:lipcont}).
Then we must show that $\mathbf B$ maps standard spaces to standard spaces. This is done in several steps. \begin{lem} \label{lemma:sup=>dlim} Let $X, d$ be a standard quasi-metric space, and let ${(x_i, r_i)}_{i \in I, \sqsubseteq}$ be a monotone net of formal balls on $X, d$ with supremum $(x, r)$. Then $r = \inf_{i \in I} r_i$ and $x$ is the $d$-limit of ${(x_i)}_{i \in I, \sqsubseteq}$. \end{lem} \proof This is similar to the proof of \cite[Lemma~7.4.26]{JGL-topology}, which assumes that $\mathbf B (X, d)$ is a dcpo, whereas we only assume that $X, d$ is standard. Since $X, d$ is standard, $r = \inf_{i \in I} r_i$. Since $(x_i, r_i) \leq^{d^+} (x, r)$ for each $i \in I$, $d (x_i, x) \leq r_i - r$. For every $y \in X$, $d (x_i, y) \leq d (x_i, x) + d (x, y) \leq r_i - r + d (x, y)$. Taking suprema over $i \in I$, $\sup_{i \in I} (d (x_i, y) - r_i + r) \leq d (x, y)$. Assume, for the sake of contradiction, that the inequality is strict. Let $s = \sup_{i \in I} (d (x_i, y) - r_i + r) < d (x, y)$. In particular, $s < +\infty$. For every $i \in I$, $d (x_i, y) - r_i + r \leq s$, so $d (x_i, y) \leq r_i-r+s$, i.e., $(x_i, r_i-r+s) \leq^{d^+} (y, 0)$. Since $X, d$ is standard, and $(x, r)$ is the supremum of ${(x_i, r_i)}_{i \in I}$, $(x, s)$ is the supremum of ${(x_i, r_i-r+s)}_{i \in I}$. It follows that $(x, s) \leq^{d^+} (y, 0)$, that is, $d (x, y) \leq s$, which is impossible. \qed This has the following interesting consequence (which we shall not use, however). Standardness says that if ${(x_i, r_i)}_{i \in I, \sqsubseteq}$ and ${(x_i, s_i)}_{i \in I, \sqsubseteq}$ are two monotone nets of formal balls with the same underlying net ${(x_i)}_{i \in I, \sqsubseteq}$, then one of them has a supremum if and only if the other one has, \emph{provided} that $r_i$ and $s_i$ differ by a constant. In that case, those suprema are of the form $(x, r)$ and $(x, s)$ for the same point $x$ (and where $r$ and $s$ differ by the same constant).
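Here is a quick illustration of that phenomenon (ours), on the standard quasi-metric space $\overline{\mathbb{R}}_+, \dreal$: take $x_i = 1 - 2^{-i}$, $r_i = 2^{-i}$ and $s_i = 1 + 2^{-i}$, $i \in \mathbb{N}$, so that $r_i$ and $s_i$ differ by the constant $1$. Both ${(x_i, r_i)}_{i \in \mathbb{N}}$ and ${(x_i, s_i)}_{i \in \mathbb{N}}$ are monotone nets of formal balls, since $\dreal (x_i, x_j) = 0 \leq r_i - r_j$ and $0 \leq s_i - s_j$ for all $i \leq j$, and one checks that:

```latex
% Suprema of the two nets in B(R+bar, dreal): same center, radii inf r_i and inf s_i.
\sup_{i \in \mathbb{N}} \, (x_i, r_i) = (1, 0),
\qquad
\sup_{i \in \mathbb{N}} \, (x_i, s_i) = (1, 1).
```

Both suprema have the same center $1$, the $\dreal$-limit of ${(x_i)}_{i \in \mathbb{N}}$, while the radii are $\inf_{i \in \mathbb{N}} r_i = 0$ and $\inf_{i \in \mathbb{N}} s_i = 1$, as predicted.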
The following proposition shows that this holds without any condition on $r_i$ and $s_i$. That might be used to (re)define the notion of $d$-limit $x$ of a net ${(x_i)}_{i \in I, \sqsubseteq}$, as the center of the supremum of ${(x_i, r_i)}_{i \in I, \sqsubseteq}$, for some family of radii $r_i$ that make ${(x_i, r_i)}_{i \in I, \sqsubseteq}$ a monotone net of formal balls: the proposition shows that that definition is independent of the chosen radii $r_i$, assuming just standardness. \begin{prop} \label{prop:std:dlim} Let $X, d$ be a standard quasi-metric space. Let ${(x_i, r_i)}_{i \in I, \sqsubseteq}$ and ${(x_i, s_i)}_{i \in I, \sqsubseteq}$ be two monotone nets of formal balls with the same underlying net ${(x_i)}_{i \in I, \sqsubseteq}$. If ${(x_i, r_i)}_{i \in I, \sqsubseteq}$ has a supremum $(x, r)$, then $r = \inf_{i \in I} r_i$ and ${(x_i, s_i)}_{i \in I, \sqsubseteq}$ also has a supremum, which is equal to $(x, s)$, where $s = \inf_{i \in I} s_i$. \end{prop} \proof If ${(x_i, r_i)}_{i \in I, \sqsubseteq}$ has a supremum $(x, r)$, then $r = \inf_{i \in I} r_i$ because $X, d$ is standard. By Lemma~\ref{lemma:sup=>dlim}, $x$ is the $d$-limit of ${(x_i)}_{i \in I, \sqsubseteq}$. Lemma~7.4.25 of \cite{JGL-topology} states that if ${(x_i, s_i)}_{i \in I, \sqsubseteq}$ is a monotone net of formal balls and if ${(x_i)}_{i \in I, \sqsubseteq}$ has a $d$-limit $x$, then $(x, s)$ is the supremum of ${(x_i, s_i)}_{i \in I, \sqsubseteq}$, where $s = \inf_{i \in I} s_i$. \qed \begin{prop} \label{prop:B:std} For every standard quasi-metric space $X, d$, $\mathbf B (X, d), d^+$ is standard. \end{prop} \proof Let ${((x_i, r_i), s_i)}_{i \in I}$ be a directed family of formal balls on $\mathbf B (X, d), d^+$. This is a monotone net, provided we define $\sqsubseteq$ by $i \sqsubseteq j$ if and only if $((x_i, r_i), s_i) \leq^{d^{++}} ((x_j, r_j), s_j)$. Assume that ${((x_i, r_i), s_i)}_{i \in I}$ has a supremum $((x, r), s)$.
Since $\mu_X$ is Scott-continuous (Lemma~\ref{lemma:mu:cont}), ${(x_i, r_i+s_i)}_{i \in I}$ is a directed family with supremum $(x, r+s)$. We use the fact that $X, d$ is standard and apply Lemma~\ref{lemma:sup=>dlim} to obtain that $r+s = \inf_{i \in I} (r_i + s_i)$ and that $x$ is the $d$-limit of ${(x_i)}_{i \in I, \sqsubseteq}$. Since ${((x_i, r_i), s_i)}_{i \in I}$ is directed, ${(s_i)}_{i \in I}$ is filtered. Let $s_\infty = \inf_{i \in I} s_i$. Since $\mu_X$ is Scott-continuous hence monotonic, ${(x_i, r_i+s_i)}_{i \in I}$ is also directed, so ${(r_i+s_i)}_{i \in I}$ is filtered. Its infimum is $r+s$. Let $r_\infty = \inf_{i \in I} (r_i + s_i) - s_\infty = r+s-s_\infty$. Consider $((x, r_\infty), s_\infty)$. For every $i \in I$, we claim that $((x_i, r_i), s_i) \leq^{d^{++}} ((x, r_\infty), s_\infty)$. For that, we compute $d^+ ((x_i, r_i), (x, r_\infty)) = \max (d (x_i, x) - r_i + r_\infty, 0)$, and we check that this is less than or equal to $s_i - s_\infty$. Since $s_i - s_\infty \geq 0$ by definition of $s_\infty$, it remains to verify that $d (x_i, x) - r_i + r_\infty \leq s_i - s_\infty$. Using the equality $r_\infty + s_\infty = r+s$, obtained as a consequence of the definition of $r_\infty$, we have to verify the equivalent inequality $d (x_i, x) \leq r_i + s_i - r - s$. That one is obvious, since $(x_i, r_i+s_i)$ is below the supremum $(x, r+s)$. Since $((x_i, r_i), s_i) \leq^{d^{++}} ((x, r_\infty), s_\infty)$ for every $i \in I$, $((x, r), s) \leq^{d^{++}} ((x, r_\infty), s_\infty)$. However, we claim that the reverse inequality also holds. Indeed, we start by observing that $s \leq s_i$ for every $i \in I$, since $((x_i, r_i), s_i) \leq^{d^{++}} ((x, r), s)$. Hence $s \leq s_\infty$. Since $r_\infty = r+s-s_\infty$, $r_\infty \leq r$. Therefore $d^+ ((x, r_\infty), (x, r)) = \max (d (x, x) - r_\infty + r, 0) = r - r_\infty$, and the latter is equal to, hence less than or equal to $s_\infty-s$. 
This means that $((x, r_\infty), s_\infty) \leq^{d^{++}} ((x, r), s)$. Having inequalities in both directions, we conclude that $((x, r), s) = ((x, r_\infty), s_\infty)$. This entails the important fact that $s = s_\infty = \inf_{i \in I} s_i$. We use that to show that for any $a \geq -s$, $((x, r), s+a)$ is the supremum of ${((x_i, r_i), s_i+a)}_{i \in I}$. Since $((x_i, r_i), s_i) \leq^{d^{++}} ((x, r), s)$, we have $((x_i, r_i), s_i+a) \leq^{d^{++}} ((x, r), s+a)$. Now consider any other upper bound $((x', r'), s')$ of ${((x_i, r_i), s_i+a)}_{i \in I}$. We have $s' \leq s_i + a$ for every $i \in I$, whence using the equality $s = \inf_{i \in I} s_i$, $s' \leq s+a$. We wish to check that $((x, r), s+a) \leq^{d^{++}} ((x', r'), s')$, equivalently $d^+ ((x, r), (x', r')) \leq s+a-s'$, and that reduces to $s' \leq s+a$ (which we have just shown) and $d (x, x') \leq r+s+a-r'-s'$. In order to establish the latter, recall that $(x, r+s)$ is the supremum of ${(x_i, r_i+s_i)}_{i \in I}$. Since $X, d$ is standard (and since $a \geq -s \geq -r-s$), $(x, r+s+a)$ is also the supremum of ${(x_i, r_i+s_i+a)}_{i \in I}$. Since $\mu_X$ is monotonic, $(x', r'+s')$ is an upper bound of ${(x_i, r_i+s_i+a)}_{i \in I}$, so $(x, r+s+a) \leq^{d^+} (x', r'+s')$, or equivalently $d (x, x') \leq r+s+a-r'-s'$: that is exactly what we wanted to prove. Let us recap: for every directed family ${((x_i, r_i), s_i)}_{i \in I}$ with supremum $((x, r), s)$, we have $s = \inf_{i \in I} s_i$, and for every $a \geq -s$, $((x, r), s+a)$ is the supremum of ${((x_i, r_i), s_i+a)}_{i \in I}$. This certainly implies that if ${((x_i, r_i), s_i)}_{i \in I}$ has a supremum, then ${((x_i, r_i), s_i+a)}_{i \in I}$ also has one for every $a \in \mathbb{R}_+$.
Conversely, if ${((x_i, r_i), s_i+a)}_{i \in I}$ has a supremum for some $a \in \mathbb{R}_+$, then it is of the form $((x, r), s+a)$ where $s = \inf_{i \in I} s_i$, and for every $a' \geq -s-a$, $((x, r), s+a+a')$ is the supremum of ${((x_i, r_i), s_i+a+a')}_{i \in I}$. In particular, for $a' = -a$, ${((x_i, r_i), s_i)}_{i \in I}$ has a supremum. \qed \begin{lem} \label{lemma:mu:lipcont} Let $X, d$ be a standard quasi-metric space. The map $\mu_X \colon ((x, r), s) \mapsto (x, r+s)$ is $1$-Lipschitz continuous from $\mathbf B (\mathbf B (X, d), d^+), d^{++}$ to $\mathbf B (X, d), d^+$. \end{lem} \proof We first check that $\mu_X$ is $1$-Lipschitz: \begin{eqnarray*} d^+ (\mu_X ((x, r), s), \mu_X ((x', r'), s')) & = & d^+ ((x, r+s), (x', r'+s')) \\ & = & \max (d (x, x') - r - s + r' + s', 0), \end{eqnarray*} while \begin{eqnarray*} d^{++} (((x, r), s), ((x', r'), s')) & = & \max (d^+ ((x, r), (x', r')) - s + s', 0) \\ & = & \max (\max (d (x, x') - r + r', 0) - s + s', 0) \\ & = & \max (d (x, x') - r + r' - s + s', -s+s', 0) \\ & = & \max (d^+ (\mu_X ((x, r), s), \mu_X ((x', r'), s')), -s+s'), \end{eqnarray*} which implies $d^{++} (((x, r), s), ((x', r'), s')) \geq d^+ (\mu_X ((x, r), s), \mu_X ((x', r'), s'))$. Next, $\mu_X$ is Scott-continuous from $\mathbf B (\mathbf B (X, d), d^+)$ to $\mathbf B (X, d)$ by Lemma~\ref{lemma:mu:cont}. The Scott topology on the former coincides with its $d^{++}$-Scott topology and the Scott topology on the latter coincides with its $d^+$-Scott topology, by Lemma~\ref{lemma:dScott=Scott}~(2). Hence $\mu_X$ is continuous from $\mathbf B (\mathbf B (X, d), d^+)$ to $\mathbf B (X, d)$, with their $d^{++}$-Scott, resp.\ $d^+$-Scott topologies. Since $\mathbf B (X, d), d^+$ and $\mathbf B (\mathbf B (X, d), d^+), d^{++}$ are standard, by Proposition~\ref{prop:B:std}, we can apply the (2) $\limp$ (1) direction of Proposition~\ref{prop:cont}, and we obtain that $\mu_X$ is $1$-Lipschitz continuous.
\qed \begin{lem} \label{lemma:Balpha:f} Let $X, d$ and $Y, \partial$ be two standard quasi-metric spaces, and $f$ be an $\alpha$-Lipschitz continuous map from $X, d$ to $Y, \partial$, with $\alpha > 0$. Then $\mathbf B^\alpha (f)$ is $\alpha$-Lipschitz continuous from $\mathbf B (X, d), d^+$ to $\mathbf B (Y, \partial), \partial^+$. \end{lem} \proof We verify that $\mathbf B^\alpha (f)$ is $\alpha$-Lipschitz: \begin{eqnarray*} \partial^+ (\mathbf B^\alpha (f) (x, r), \mathbf B^\alpha (f) (y, s)) & = & \partial^+ ((f (x), \alpha r), (f (y), \alpha s)) \\ & = & \max (\partial (f (x), f (y)) - \alpha r + \alpha s, 0) \\ & \leq & \max (\alpha d (x, y) - \alpha r + \alpha s, 0) \\ & = & \alpha \max (d (x, y) - r + s, 0) = \alpha d^+ ((x, r), (y, s)). \end{eqnarray*} By definition of $\alpha$-Lipschitz continuity, $\mathbf B^\alpha (f)$ is Scott-continuous. Since the Scott topology on $\mathbf B (X, d)$ coincides with the $d^+$-Scott topology, and similarly for $Y$, thanks to Lemma~\ref{lemma:dScott=Scott}~(2), $\mathbf B^\alpha (f)$ is continuous with respect to the $d^+$-Scott and $\partial^+$-Scott topologies. Now use that $\mathbf B (X, d), d^+$ and $\mathbf B (Y, \partial), \partial^+$ are standard, owing to Proposition~\ref{prop:B:std}, and apply Proposition~\ref{prop:cont} to conclude that $\mathbf B^\alpha (f)$ is $\alpha$-Lipschitz continuous. \qed \begin{prop} \label{prop:B:monad} The triple $(\mathbf B, \eta, \mu)$ is a monad on the category of standard quasi-metric spaces and $1$-Lipschitz continuous maps.
\end{prop} \proof We shall show the equivalent claim that $(\mathbf B, \eta, \_^\dagger)$ is a Kleisli triple, that is: $(i)$ $\mathbf B$ maps objects of the category (standard quasi-metric spaces) to objects of the category; $(ii)$ $\eta_X$ is a morphism from $X, d$ to $\mathbf B (X, d), d^+$ (a $1$-Lipschitz continuous map); $(iii)$ for every morphism $f \colon X, d \to \mathbf B (Y, \partial), \partial^+$, $f^\dagger$ is a morphism from $\mathbf B (X, d), d^+$ to $\mathbf B (Y, \partial), \partial^+$ such that: $(a)$ $\eta_X^\dagger = \identity {\mathbf B (X, d)}$; $(b)$ $f^\dagger \circ \eta_X = f$; $(c)$ $f^\dagger \circ g^\dagger = (f^\dagger \circ g)^\dagger$. For that, we define $f^\dagger$ as mapping $(x, r)$ to $(y, r+s)$, where $(y, s) = f (x)$. Proposition~\ref{prop:B:std} gives us $(i)$, and Lemma~\ref{lemma:eta:lipcont} gives us $(ii)$. We devote the rest of this proof to $(iii)$. We must start by checking that $f^\dagger$ is a morphism for every morphism $f \colon X, d \to \mathbf B (Y, \partial), \partial^+$. We have defined $f^\dagger (x, r)$ as $(y, r+s)$ where $(y, s) = f (x)$, and we notice that $f^\dagger$ is equal to $\mu_Y \circ \mathbf B^1 (f)$. This is $1$-Lipschitz continuous because $\mu_Y$ and $\mathbf B^1 (f)$ both are, by Lemma~\ref{lemma:mu:lipcont} and Lemma~\ref{lemma:Balpha:f} respectively. The equalities $(a)$, $(b)$, $(c)$ are easily checked. Any Kleisli triple $(T, \eta, \_^\dagger)$ gives rise to a monad $(T, \eta, m)$ by letting $m_X = \identity {TX}^\dagger$. Here $m_X$ maps $((x, r), s)$ to $(x, r+s)$, hence coincides with $\mu_X$, finishing the proof.
\qed A \emph{left KZ-monad} \cite[Definition~4.1.2, Lemma~4.1.1]{Escardo:properly:inj} (short for \emph{Kock-Z\"oberlein monad}) is a monad $(T, \eta, \mu)$ on a poset-enriched category such that $T$ is monotonic on homsets, and either one of the following equivalent conditions holds: \begin{enumerate} \item $T\eta_X \leq \eta_{TX}$ for every object $X$; \item a morphism $\alpha \colon TX \to X$ is the structure map of a $T$-algebra if and only if $\alpha \circ \eta_X = \identity X$ and $\identity {TX} \leq \eta_X \circ \alpha$; \item $\mu_X \dashv \eta_{TX}$ for every object $X$; \item $T\eta_X \dashv \mu_X$ for every object $X$. \end{enumerate} The notion stems from work by A. Kock on doctrines in 2-categories \cite{Kock:KZmonad}, and the above equivalence is due to Kock, in the more general case of 2-categories. The notation $f \dashv g$ means that the two morphisms $f$ and $g$ are \emph{adjoint}, namely, $f \circ g \leq \identity \relax$ and $\identity \relax \leq g \circ f$. A \emph{$T$-algebra} is an object $X$ together with a morphism $\alpha \colon TX \to X$ called its \emph{structure map}, such that $\alpha \circ \eta_X = \identity X$ and $\alpha \circ \mu_X = \alpha \circ T\alpha$. $TX$ is always a $T$-algebra with structure map $\mu_X$, called the \emph{free $T$-algebra} on $X$. The category of standard quasi-metric spaces and $1$-Lipschitz continuous maps is poset-enriched. Each homset is ordered by: for $f, g \colon X, d \to Y, \partial$, $f \leq g$ if and only if for every $x \in X$, $f (x) \leq^\partial g (x)$. If $f \leq g$, then $\mathbf B^1 (f) \leq \mathbf B^1 (g)$, since for every $(x, r) \in \mathbf B (X, d)$, $\mathbf B^1 (f) (x, r) = (f (x), r) \leq^{\partial^+} (g (x), r) = \mathbf B^1 (g) (x, r)$. Condition (3) of a left KZ-monad reads: $\mu_X \circ \eta_{TX} \leq \identity {TX}$ and $\identity {T^2X} \leq \eta_{TX} \circ \mu_X$. For $T = \mathbf B$, those follow from Lemma~\ref{lemma:eta:mu}~(1) and (4).
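Both inequalities in condition (3) can also be sanity-checked pointwise on a concrete space. The following Python sketch is our own illustration and plays no role in the proof: it instantiates $X, d$ as $\mathbb{R}$ with the usual metric and tests $\mu_X \circ \eta_{TX} \leq \mathrm{id}$ and $\mathrm{id} \leq \eta_{TX} \circ \mu_X$ on randomly generated formal balls of formal balls.

```python
# Illustrative sketch (our own, with X = (R, |.|)): the two pointwise
# inequalities mu_X . eta_TX <= id and id <= eta_TX . mu_X witnessing
# that mu_X is left adjoint to eta_TX.

import random

def dp(b1, b2):                      # d+ on formal balls
    (x, r), (y, s) = b1, b2
    return max(abs(x - y) - r + s, 0.0)

def dpp(c1, c2):                     # d++ on formal balls of formal balls
    (b1, s1), (b2, s2) = c1, c2
    return max(dp(b1, b2) - s1 + s2, 0.0)

def eta(b):                          # eta_TX : (x, r) |-> ((x, r), 0)
    return (b, 0.0)

def mu(c):                           # mu_X : ((x, r), s) |-> (x, r + s)
    ((x, r), s) = c
    return (x, r + s)

random.seed(0)
for _ in range(1000):
    b = (random.uniform(-5, 5), random.uniform(0, 5))     # a formal ball
    c = (b, random.uniform(0, 5))                         # a double ball
    # mu_X . eta_TX = id on the nose, hence <= id in the homset order:
    assert mu(eta(b)) == b
    # id <= eta_TX . mu_X, i.e. c <=d++ eta(mu(c)), i.e. dpp(c, .) = 0:
    assert dpp(c, eta(mu(c))) <= 1e-9
```

Note that the first inequality actually holds as an equality, while the second one amounts to $d^+((x, r), (x, r+s)) = s \leq s - 0$, which holds with equality as well.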
Hence: \begin{prop} \label{prop:B:KZ} The triple $(\mathbf B, \eta, \mu)$ is a left KZ-monad on the category of standard quasi-metric spaces and $1$-Lipschitz continuous maps. \qed \end{prop} Kock's theorem on the equivalence of the four conditions defining KZ-monads immediately yields the following. \begin{prop} \label{prop:B:alg} Let $X, d$ be a standard quasi-metric space, and $\alpha \colon \mathbf B (X, d), d^+ \to X, d$ be a $1$-Lipschitz continuous map. The following are equivalent: \begin{enumerate} \item $\alpha$ is the structure map of a $\mathbf B$-algebra; \item for every $x \in X$, $\alpha (x, 0) = x$ and for all $r, s \in \mathbb{R}_+$, $\alpha (x, r+s) = \alpha (\alpha (x, r), s)$; \item for every $x \in X$, and every $r \in \mathbb{R}_+$, $\alpha (x, r)$ is a point in the closed ball $B^d_{x, \leq r}$, which is equal to $x$ if $r=0$; \item for every $x \in X$, $\alpha (x, 0) = x$. \end{enumerate} \end{prop} \proof The equivalence between (1) and (2) is the definition of an algebra of a monad. Look at the second equivalent condition defining left KZ-monads, applied to the left KZ-monad $\mathbf B$ (Proposition~\ref{prop:B:KZ}). This implies that (1) is equivalent to $\alpha \circ \eta_X = \identity X$ (i.e., $\alpha (x, 0)=x$ for every $x \in X$), and to $\identity {TX} \leq \eta_X \circ \alpha$; the latter means that for every $x \in X$ and every $r \in \mathbb{R}_+$, $(x, r) \leq^{d^+} \eta_X (\alpha (x, r))$, equivalently, $d (x, \alpha (x, r)) \leq r$, i.e., $\alpha (x, r) \in B^d_{x, \leq r}$. Finally, clearly (3) implies (4). In the reverse direction, note that since $\alpha$ is $1$-Lipschitz, $d (\alpha (x, 0), \alpha (x, r)) \leq d^+ ((x, 0), (x, r)) = r$. Since $\alpha (x, 0)=x$, this implies that $\alpha (x, r)$ is in $B^d_{x, \leq r}$.
\qed \section{Lipschitz Regular Spaces} \label{sec:lipsch-regul-spac} For every open subset $U$ of $X$ in its $d$-Scott topology, there is a largest open subset $\widehat U$ of $\mathbf B (X, d)$ such that $\widehat U \cap X \subseteq U$. Then $\widehat U \cap X = U$. This was used in \cite[Definition~6.10]{JGL:formalballs} in order to define the distance $d (x, \overline U)$ of any point $x$ to the complement $\overline U$ of $U$ as $\sup \{r \in \mathbb{R}_+ \mid (x, r) \in \widehat U\}$. We shall write $\Open Y$ for the lattice of open subsets of a topological space $Y$. The assignment $U \mapsto \widehat U$ is monotonic. Being a right adjoint to the frame homomorphism that maps every open subset $V$ of $\mathbf B (X, d)$ to $V \cap X$, it also preserves arbitrary meets, namely interiors of arbitrary intersections; but it satisfies no other remarkable property in general. One property that we will need in a number of places is the following. \begin{defi}[Lipschitz regular] \label{defn:lipreg} A quasi-metric space $X, d$ is \emph{Lipschitz regular} if and only if the map $U \in \Open X \mapsto \widehat U \in \Open \mathbf B (X, d)$ is Scott-continuous. \end{defi} The name stems from a result that we shall see later, Proposition~\ref{prop:lipreg:Lip}. In general, a subspace $X$ of a topological space $Y$ is \emph{finitarily embedded} in $Y$ if and only if the map $V \in \Open Y \mapsto V \cap X$ is Scott-continuous, see \cite{Escardo:properly:inj}. \begin{lem} \label{lemma:lipreg:dist} The following are equivalent for a standard quasi-metric space $X, d$: \begin{enumerate} \item $X, d$ is Lipschitz regular; \item for every point $x \in X$, the map $U \in \Open X \mapsto d (x, \overline U)$ is Scott-continuous. \end{enumerate} \end{lem} \proof (1) $\limp$ (2). The map $U \mapsto d (x, \overline U)$ is the composition of $U \mapsto \widehat U$ and of the map $\mathcal U \in \Open \mathbf B (X, d) \mapsto \sup \{r \in \mathbb{R}_+ \mid (x, r) \in \mathcal U\}$.
The latter is easily seen to be Scott-continuous, and the former is Scott-continuous by (1). (2) $\limp$ (1). Let $U$ be the union of a directed family of open subsets ${(U_i)}_{i \in I}$. We only have to show that every $(x, r) \in \widehat U$ is in some $\widehat U_i$. By \cite[Lemma~3.4]{JGL:formalballs}, $(x, r)$ is the supremum of the chain of formal balls $(x, r+1/2^n)$, $n \in \nat$, so one of them is in $\widehat U$. This implies that $d (x, \overline U) \geq r + 1/2^n > r$. Using (2), $d (x, \overline U_i) > r$ for some $i \in I$, and that implies the existence of a real number $s > r$ such that $(x, s) \in \widehat U_i$. Since $(x, s) \leq^{d^+} (x, r)$, $(x, r)$ is also in $\widehat U_i$. \qed The following Proposition~\ref{prop:compactballs} gives a further explanation of Lipschitz regularity, in the special case of algebraic quasi-metric spaces. A point $x$ in a standard quasi-metric space $X, d$ is a center point if and only if, for every $\epsilon > 0$, the open ball $B^{d^+}_{(x, 0), <\epsilon} = \{(y, s) \in \mathbf B (X, d) \mid d (x, y) < \epsilon-s\}$ is Scott-open in $\mathbf B (X, d)$. This is equivalent to requiring that $x$ be a finite point in $X, d$, a notion that has a more complicated definition \cite[Lemma~5.7]{JGL:formalballs}. $X, d$ itself is called \emph{algebraic} if and only if every point $x$ is the $d$-limit of a Cauchy (or even Cauchy-weightable, see loc.cit.) net of center points, or equivalently, for every $x \in X$, there is a directed family of formal balls $(x_i, r_i)$, $i \in I$, where every $x_i$ is a center point, such that $\sup_{i \in I} (x_i, r_i) = (x, 0)$ (Lemma~5.15, loc.cit.). Every metric space is (standard and) algebraic, since in a metric space every point is a center point, as a consequence of results by Edalat and Heckmann \cite{EH:comp:metric}. 
Indeed, the poset of formal balls of a metric space $X, d$ is continuous, and $(x, r) \ll (y, s)$ if and only if $d (x, y) < r-s$ (Proposition~7 and Corollary~10, loc.\ cit.): then $B^{d^+}_{(x, 0), <\epsilon}$ is equal to $\uuarrow (x, \epsilon)$, hence is Scott-open. Every standard algebraic quasi-metric space $X, d$ is \emph{continuous} \cite[Proposition~5.18]{JGL:formalballs}, where a continuous quasi-metric space is a standard quasi-metric space $X, d$ whose space of formal balls $\mathbf B (X, d)$ is a continuous poset (Definition~3.10, loc.\ cit.)\@ Moreover, when $X, d$ is standard algebraic, $\mathbf B (X, d)$ has a basis of formal balls whose centers are center points, and for a center point $x$, $(x, r) \ll (y, s)$ if and only if $d (x, y) < r-s$. This is the same relation as in metric spaces, but beware that we only require it when $x$ is a center point. In general, we shall call a \emph{strong basis} of a standard quasi-metric space $X, d$ any set $\mathcal B$ of center points of $X$ such that, for every $x \in X$, $(x, 0)$ is the supremum of a directed family of formal balls with center points in $\mathcal B$. (Given that $X, d$ is standard, this is equivalent to Definition~7.4.66 of loc.\ cit.) Hence $X, d$ is algebraic if and only if it has a strong basis. \begin{rem} \label{rem:strong:basis} In metric spaces, a strong basis is nothing else than the familiar concept of a dense subset (Exercise~7.4.67, loc.\ cit.) Strong bases are the correct generalization of dense subsets in the realm of quasi-metric spaces. \end{rem} \begin{prop} \label{prop:compactballs} Let $X, d$ be a standard algebraic quasi-metric space. 
The following are equivalent: \begin{enumerate} \item $X, d$ is Lipschitz regular; \item $X, d$ \emph{has relatively compact balls}, namely: for every center point $x$ of $X$, for all $r, s \in \mathbb{R}_+$ with $s < r$, every open cover of the open ball $B^d_{x, < r}$ contains a finite subcover of the closed ball $B^d_{x, \leq s}$; \item for every center point $x$ of $X$, for all $r, s \in \mathbb{R}_+$ with $s < r$, for every directed family ${(U_i)}_{i \in I}$ of open subsets of $X$ such that $B^d_{x, < r} \subseteq \bigcup_{i \in I} U_i$, there is an $i \in I$ such that $B^d_{x, \leq s} \subseteq U_i$. \end{enumerate} \end{prop} \proof The equivalence of (2) and (3) is a standard exercise. In the difficult direction, notice that any union of open sets can be written as a directed union of finite unions. (3) $\limp$ (1). It is easy to see that $U \mapsto \widehat U$ is monotonic. Let ${(U_i)}_{i \in I}$ be a directed family of $d$-Scott open subsets of $X$, and $U = \bigcup_{i \in I} U_i$. Pick an arbitrary element $(y, s)$ in $\widehat U$. Our task is to show that $(y, s)$ lies in some $\widehat U_i$. Since $X, d$ is algebraic, $(y, s)$ is the supremum of a directed family of formal balls $(x, r)$ way-below $(y, s)$, where each $x$ is a center point. Since $\widehat U$ is Scott-open, one of them is in $\widehat U$. From $(x, r) \ll (y, s)$ we obtain $d (x, y) < r-s$. Find a real number $\epsilon > 0$ so that $d (x, y) < r-s-\epsilon$. The open ball $B^d_{x, < r}$ is the intersection of $\uuarrow (x, r)$ with $X$, and $\uuarrow (x, r)$ is included in $\widehat U$ because $(x, r)$ is in $\widehat U$ and $\widehat U$ is upwards-closed. Hence $B^d_{x, < r}$ is included in $\widehat U \cap X = U = \bigcup_{i \in I} U_i$. By (3), $B^d_{x, \leq r-\epsilon}$ is included in some $U_i$, $i \in I$. Consider $\widehat U_i \cup \uuarrow (x, r-\epsilon)$.
This is an open subset of $\mathbf B (X, d)$, and its intersection with $X$ is $U_i \cup B^d_{x, < r-\epsilon} = U_i$. By the maximality of $\widehat U_i$, $\widehat U_i = \widehat U_i \cup \uuarrow (x, r-\epsilon)$, meaning that $\uuarrow (x, r-\epsilon)$ is included in $\widehat U_i$. Since $d (x, y) < r-s-\epsilon$, $(x, r-\epsilon) \ll (y, s)$. It follows that $(y, s)$ is in $\widehat U_i$. (1) $\limp$ (3). Fix a center point $x$, two real numbers $r$ and $s$ such that $0 < s < r$, and assume that $B^d_{x, < r}$ is included in the union $U$ of some directed family of open subsets $U_i$ of $X$. We claim that $(x, s)$ must be in $\widehat U$. The argument is one we have just seen. Indeed, $\widehat U \cup \uuarrow (x, r)$ is an open subset of $\mathbf B (X, d)$ whose intersection with $X$ equals $U \cup B^d_{x, <r} = U$. By maximality $\widehat U \cup \uuarrow (x, r) = \widehat U$. However, since $x$ is a center point and $d (x, x) < r - s$, we have $(x, r) \ll (x, s)$, so $(x, s)$ is in $\widehat U$. By (1), $(x, s)$ is in some $\widehat U_i$, so $\upc (x, s) \subseteq \widehat U_i$, hence, taking intersections with $X$, $B^d_{x, \leq s}$ is included in $U_i$. \qed \begin{rem} \label{rem:compactballs:metric} As a special case, every metric space in which closed balls are compact is Lipschitz regular. Indeed, recall that every metric space is standard and algebraic, and compactness of closed balls immediately implies the relatively compact ball property. \end{rem} Having relatively compact balls is a pretty strong requirement. Any standard algebraic quasi-metric space with that property must be core-compact, for example: for every point $y$ and every open neighborhood $U$ of $y$, $y$ is in some open ball $B^d_{x, <r}$ included in $U$, where $x$ is a center point and $r > 0$. Hence $d (x, y) < r$, so that $d (x, y) < r-\epsilon$ for some $\epsilon > 0$.
Then $y$ also lies in the open ball $B^d_{x, < r-\epsilon}$, and $B^d_{x, < r-\epsilon} \subseteq B^d_{x, \leq r-\epsilon}$ is way-below $B^d_{x, <r}$, using property (3). Another argument consists in using the definition of Lipschitz regularity directly: then $\Open X$ is a retract of $\Open \mathbf B (X, d)$, and when $\mathbf B (X, d)$ is a continuous poset, $\Open \mathbf B (X, d)$ is a completely distributive lattice, in particular a continuous lattice; any retract of a continuous lattice is again continuous, so $\Open X$ is continuous, meaning that $X$ is core-compact. Not all standard algebraic quasi-metric spaces have relatively compact balls. For example, $\mathbb{Q}$ with its usual metric is not core-compact, hence does not have relatively compact balls. \begin{rem} \label{rem:lipreg} Lipschitz regularity is therefore a pretty strong requirement---in the case of standard algebraic quasi-metric spaces. On the contrary, we shall see below that spaces of formal balls are always Lipschitz regular (Theorem~\ref{thm:B:lipreg}), even when not core-compact (Remark~\ref{rem:N2}). \end{rem} The following lemma shows that the construction $U \mapsto \widehat U$ admits a particularly simple form on $\mathbf B$-algebras. \begin{lem} \label{lemma:Uhat:alpha} Let $X, d$ be a quasi-metric space, and assume that there is a continuous map $\alpha \colon \mathbf B (X, d) \to X$ (with respect to the $d^+$-Scott and $d$-Scott topologies) such that $\alpha (x, r) \in B^d_{x, \leq r}$ and $\alpha (x, 0) = x$ for all $x \in X$ and $r \in \mathbb{R}_+$. Then: \begin{enumerate} \item For every $d$-Scott open subset $U$ of $X$, $\widehat U$ is equal to $\alpha^{-1} (U)$; \item $X, d$ is Lipschitz regular. \end{enumerate} \end{lem} \proof Since $\alpha$ is continuous, $\alpha^{-1} (U)$ is $d^+$-Scott open in $\mathbf B (X, d)$.
Its intersection with $X$ is equal to $U$, since $(x, 0) \in \alpha^{-1} (U)$ is equivalent to $\alpha (x, 0) \in U$, and $\alpha (x, 0) = x$. By the definition of $\widehat U$ as largest, $\alpha^{-1} (U)$ is included in $\widehat U$. To show the converse inclusion, let $(x, r)$ be an arbitrary element of $\widehat U$. Since $\alpha (x, r)$ is an element of $B^d_{x, \leq r}$, $d (x, \alpha (x, r)) \leq r$, so $(x, r) \leq^{d^+} (\alpha (x, r), 0)$. Since $\widehat U$ is upwards-closed, $(\alpha (x, r), 0)$ is in $\widehat U$. It follows that $\alpha (x, r)$ is in $U$, so that $(x, r)$ is in $\alpha^{-1} (U)$. (2) follows from (1), since $\alpha^{-1}$ commutes with unions. \qed \begin{rem} \label{rem:Uhat:alpha} Lemma~\ref{lemma:Uhat:alpha} applies in particular when $X, d$ is a (standard) $\mathbf B$-algebra, with structure map $\alpha$. Indeed, by the (1) $\limp$ (2) direction of Proposition~\ref{prop:cont}, $\alpha$ is continuous, and the remaining assumptions are item~(3) of Proposition~\ref{prop:B:alg}. \end{rem} \begin{rem} \label{rem:Balg} By Lemma~\ref{lemma:Uhat:alpha}~(1), the standard quasi-metric spaces that are $\mathbf B$-algebras are much more than Lipschitz regular: the map $U \mapsto \widehat U$ must preserve \emph{all} unions, not just the directed unions, and all intersections. \end{rem} However rare $\mathbf B$-algebras may appear to be, recall that (when $X, d$ is standard) $\mathbf B (X, d), d^+$ is itself a $\mathbf B$-algebra, with structure map $\mu_X$. Hence the following is clear under a standardness assumption. However, this even holds without standardness. \begin{thm} \label{thm:B:lipreg} For every quasi-metric space $X, d$, the quasi-metric space $\mathbf B (X, d), d^+$ is Lipschitz regular. For every $d^+$-Scott open subset $U$ of $\mathbf B (X, d)$, $\widehat U = \mu_X^{-1} (U)$. \end{thm} \proof This is Lemma~\ref{lemma:Uhat:alpha} with $\alpha = \mu_X$.
This is a continuous map because it is Scott-continuous by Lemma~\ref{lemma:mu:cont} and because the Scott topologies on $\mathbf B (X, d)$ and on $\mathbf B (\mathbf B (X, d), d^+)$ coincide with the $d^+$-Scott topology and with the $d^{++}$-Scott topology respectively, by Lemma~\ref{lemma:dScott=Scott}~(2). The other two assumptions are Lemma~\ref{lemma:eta:mu}, items~$(i)$ and~$(iv)$. \qed \begin{rem} \label{rem:N2} We exhibit a Lipschitz regular, standard quasi-metric space that is not core-compact. Necessarily, that quasi-metric space cannot be algebraic, by Proposition~\ref{prop:compactballs}. We build that quasi-metric space as $\mathbf B (X, d)$ for some quasi-metric space $X, d$, so that Theorem~\ref{thm:B:lipreg} will give us Lipschitz regularity for free. Every poset $X$ can be turned into a quasi-metric space by letting $d (x, y) = 0$ if $x \leq y$, $+\infty$ otherwise. Then $\mathbf B (X, d)$ is order-isomorphic with the poset $X \times ]-\infty, 0]$ \cite[Example~1.6]{JGL:formalballs}. Consider the dcpo $X = (\nat \times \nat) \cup \{\omega\}$, with the ordering defined by $(i, n) \leq (i', n')$ iff $i=i'$ and $n \leq n'$, and where $\omega$ is larger than any other element. The non-empty upwards-closed subsets of $X$ are the subsets of the form $\{\omega\} \cup \bigcup_{i \in S} \upc (i, n_i)$, where $S \subseteq \nat$ and for each $i$, $n_i \in \nat$. Those that are compact are exactly those such that $S$ is finite, and those that are Scott-open are exactly those such that $S = \nat$. In particular, note that all compact saturated subsets have empty interior. The same happens in $X \times ]-\infty, 0]$. Indeed, assume a compact saturated subset $Q$ of $X \times ]-\infty, 0]$ with non-empty interior $U$. Since $Q$ is compact, $\pi_1 [Q]$ is compact, too, and we see that $\pi_1 [Q]$ is also upwards-closed, hence of the form $\{\omega\} \cup \bigcup_{i \in S} \upc (i, n_i)$, with $S$ finite. Pick some $j \in \nat$ outside of $S$. 
Since $U$ is non-empty, it must contain $(\omega, 0)$. However, $(\omega, 0)$ is the supremum of the chain of points $((j, n), 0)$, $n \in \nat$, so one of them is in $U$, hence in $Q$. This is impossible since $j \not \in S$. Since all compact saturated subsets of $X \times ]-\infty, 0]$ have empty interior, it follows that $X \times ]-\infty, 0]$ is not locally compact. Note that $X$ is sober. Indeed, consider a non-empty closed subset $C$. If its complement is empty, then $C = \dc \omega$. Otherwise, $C$ is the complement of an open set $\{\omega\} \cup \bigcup_{i \in \nat} \upc (i, n_i)$, hence is equal to $\bigcup_{i \in S} \dc (i, n_i-1)$, where $S$ is the set of indices $i$ such that $n_i \geq 1$. $S$ is non-empty since we have assumed $C$ non-empty. Pick $i_0$ from $S$. Then $C$ is included in the union of $\dc (i_0, n_{i_0}-1)$ and $C' = \bigcup_{i \in S \smallsetminus \{i_0\}} \dc (i, n_i-1)$. Note that $C$ is not included in $C'$, so if $C$ is irreducible, then $C \subseteq \dc (i_0, n_{i_0}-1)$, from which we obtain $C = \dc (i_0, n_{i_0}-1)$. In any case, we have shown that every irreducible closed subset of $X$ is the downward closure of a unique point, hence $X$ is sober. Since $]-\infty, 0]$ is a continuous dcpo, it is sober in its Scott topology \cite[Proposition~8.2.12~$(b)$]{JGL-topology}. Products of sober spaces are sober (Theorem~8.4.8, loc.\ cit.), so $X \times ]-\infty, 0]$ is sober. Since every sober core-compact space is locally compact (Theorem~8.3.10, loc.\ cit.), we conclude that $X \times ]-\infty, 0]$ is not core-compact. We conclude that $\mathbf B (X, d) \cong X \times ]-\infty, 0]$ is Lipschitz regular but not core-compact. \qed \end{rem} \section{Compact Subsets of Quasi-Metric Spaces} \label{sec:comp-subs-quasi} Let us characterize the compact saturated subsets of quasi-metric spaces. 
For that, we concentrate on the case where $X, d$ is a continuous Yoneda-complete space, i.e., where $\mathbf B (X, d)$ is a continuous dcpo \cite[Definition~7.4.72]{JGL-topology}; the notion was initially introduced by Kostanek and Waszkiewicz \cite{KW:formal:ball}. In general, a \emph{continuous} quasi-metric space is a standard quasi-metric space whose space of formal balls is a continuous poset, see \cite[Section~3]{JGL:formalballs}. (Standardness is automatically implied by Yoneda-completeness, but we have to require it explicitly if we do not assume Yoneda-completeness.) We shall need the following useful (and certainly well-known) lemma. \begin{lem} \label{lemma:compact:subspace} Let $Y$ be a subspace of a topological space $X$. The compact subsets of $Y$ are exactly the compact subsets of $X$ that are included in $Y$. If $Y$ is upwards-closed in $X$, then the compact saturated subsets of $Y$ are exactly the compact saturated subsets of $X$ that are included in $Y$. \end{lem} \proof Take a compact subset $K$ of $Y$. $K$ is also compact in $X$, as the image of $K$ by the inclusion map, which is continuous by definition of subspaces. Conversely, let $K$ be a compact subset of $X$ that is included in $Y$. Consider an open cover ${(U_i)}_{i \in I}$ of $K$ in $Y$. For each $i \in I$, there is an open subset $V_i$ of $X$ such that $U_i = V_i \cap Y$. Then ${(V_i)}_{i \in I}$ is an open cover of $K$ in $X$. Extract a finite subcover ${(V_i)}_{i \in J}$ of $K$ in $X$. Then ${(U_i)}_{i \in J}$ is a finite subcover of $K$ in $Y$. For the second part, we first note that the specialization preordering of $Y$ is the restriction of that of $X$ to $Y$ \cite[Proposition~4.9.5]{JGL-topology}. If $Y$ is upwards-closed in $X$, it follows that the saturated subsets of $Y$ are exactly the saturated subsets of $X$ that are included in $Y$. \qed Let $X, d$ be a quasi-metric space. Up to the embedding $x \mapsto (x, 0)$, $X$ is a subspace of $\mathbf B (X, d)$. 
$X$ also happens to be upwards-closed in $\mathbf B (X, d)$. Lemma~\ref{lemma:compact:subspace} will help us characterize the compact saturated subsets of $X$ through this embedding. For every $\epsilon > 0$, let $V_\epsilon$ be the set of formal balls $(x, r)$ with $r < \epsilon$. \begin{lem} \label{lemma:Veps} If $X, d$ is standard, then $V_\epsilon$ is Scott-open in $\mathbf B (X, d)$, and $X = \bigcap_{\epsilon > 0} V_\epsilon = \bigcap_{n \in \nat} V_{1/2^n}$; in particular, $X$ is a $G_\delta$-subset of $\mathbf B (X, d)$. \end{lem} \proof This is a more explicit version of Proposition~2.6 of \cite{JGL:formalballs}. The key is that $V_\epsilon$ is the inverse image of $[0, \epsilon[$ by the radius map $(x, r) \mapsto r$, which is Scott-continuous from $\mathbf B (X, d)$ to $\mathbb{R}_+^{op}$ by Proposition~2.4~(3) of loc.cit., owing to the fact that $X, d$ is standard. \qed \begin{lem} \label{lemma:Q:radius} Let $X, d$ be a standard quasi-metric space. For every non-empty compact saturated subset $Q$ of $\mathbf B (X, d)$, there is a largest $r \in \mathbb{R}_+$ such that $Q$ contains a point of the form $(x, r)$. \end{lem} We shall call the largest such $r \in \mathbb{R}_+$ the \emph{radius} $r (Q)$ of $Q$. \proof The radius map $(x, r) \mapsto r$ is Scott-continuous from $\mathbf B (X, d)$ to $\overline{\mathbb{R}}_+^{op}$, since $X, d$ is standard. Since continuous images of compact subsets are compact, this map reaches a minimum $r$ in $\overline{\mathbb{R}}_+^{op}$, hence a maximum in $\overline{\mathbb{R}}_+$. That maximum cannot be equal to $+\infty$ since radii of formal balls are all different from $+\infty$, so $r$ is in $\mathbb{R}_+$. (Alternatively, $Q$ is contained in $\mathbf B (X, d) = \bigcup_{\epsilon > 0} V_\epsilon$, so $Q \subseteq V_\epsilon$ for some $\epsilon > 0$.) \qed The \emph{closed ball} $B^d_{x, \leq r}$ of center $x$ and radius $r$ is the set of points $y \in X$, such that $d (x, y) \leq r$. 
Despite the name, this is not a closed set in general, except when $X, d$ is metric: closed balls are upwards-closed, whereas closed sets must be downwards-closed. Closed balls need not be compact either. However, $B^d_{x, \leq r}$ is the intersection of $\upc (x, r)$ with $X$, considering the latter as a subspace of $\mathbf B (X, d)$, and $\upc (x, r)$ is compact saturated in $\mathbf B (X, d)$. (The upward closure of any point with respect to the specialization preordering of a topological space is always compact saturated.) Note that the radius of $\upc (x, r)$, as introduced in Lemma~\ref{lemma:Q:radius}, is $r$. In general, for every finite set of formal balls $(x_1, r_1)$, \ldots, $(x_n, r_n)$, the upward closure $Q = \upc \{(x_1, r_1), \cdots, (x_n, r_n)\}$ is compact saturated in $\mathbf B (X, d)$. The upward closures of finite sets, which are compact saturated in any topological space, are called \emph{finitary compact} in \cite[Proposition~4.4.21]{JGL-topology}. When $n \neq 0$, the radius of $Q$ is $r (Q) = \max_{j=1}^n r_j$. The intersection of $\upc \{(x_1, r_1), \cdots, (x_n, r_n)\}$ with $X$ is the union $\bigcup_{j=1}^n B^d_{x_j, \leq r_j}$ of finitely many closed balls. This may fail to be compact in $X$, but if we take the intersection of a filtered family of such finitary compacts, in such a way that the radii go to $0$, we will obtain a compact subset of $\mathbf B (X, d)$ that is included in $X$, hence a compact subset of $X$. \begin{lem} \label{lemma:Xd:compact:1} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. For every filtered family ${(Q_i)}_{i \in I}$ of non-empty compact saturated subsets of $\mathbf B (X, d)$ such that $\inf_{i \in I} r (Q_i) = 0$, $\bigcap_{i \in I} Q_i$ is a non-empty compact saturated subset of $X$ in its $d$-Scott topology. \end{lem} This is in particular the case when the $Q_i$ are taken to be non-empty finitary compacts.
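The following small numerical sketch (our own illustration, not part of the text) instantiates this on the non-negative reals, assuming the quasi-metric $\dreal (x, y) = \max (x - y, 0)$: there, the closed ball $B^{\dreal}_{x, \leq r}$ is the upwards-closed interval $[x - r, +\infty[$, and the balls $B^{\dreal}_{1, \leq 1/2^n}$ form a filtered family with radii tending to $0$ whose intersection is the compact saturated set $\upc 1 = [1, +\infty[$; all function names are made up for the example.

```python
# Illustration (ours, not from the text): on R+ with d(x, y) = max(x - y, 0),
# the closed ball B_{x, <= r} = {y | x - y <= r} = [x - r, oo) is
# upwards-closed.  The balls B_{1, <= 1/2^n} form a filtered family whose
# radii tend to 0, and their intersection is [1, oo), the upward closure of 1.

def d(x, y):
    return max(x - y, 0.0)

def in_closed_ball(center, radius, y):
    return d(center, y) <= radius

# Nesting: d(x_n, x_{n'}) <= r_n - r_{n'} for n <= n' (all centers equal 1),
# so each ball contains the next one.
for n in range(6):
    for n2 in range(n, 6):
        assert d(1.0, 1.0) <= 1.0 / 2 ** n - 1.0 / 2 ** n2

def in_intersection(y, N):
    """Membership in the intersection of the first N balls B_{1, <= 1/2^n}."""
    return all(in_closed_ball(1.0, 1.0 / 2 ** n, y) for n in range(N))

assert in_intersection(1.0, 50)      # 1 belongs to every ball
assert in_intersection(3.0, 50)      # the intersection is upwards-closed
assert not in_intersection(0.9, 50)  # points below 1 drop out eventually
```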
\proof Since $X, d$ is continuous, $\mathbf B (X, d)$ is a continuous dcpo. Recall that every continuous dcpo is sober in its Scott topology, that every sober space is well-filtered, and that in a well-filtered space every filtered intersection of compact saturated subsets is compact saturated. Hence $\bigcap_{i \in I} Q_i$ is compact saturated in $\mathbf B (X, d)$. In a well-filtered space, filtered intersections of non-empty compact saturated subsets are non-empty as well. Indeed, if $\bigcap_{i \in I} Q_i$ is empty, then it is included in the open set $\emptyset$, and that implies $Q_i \subseteq \emptyset$ for some $i \in I$, contradicting the assumption that every $Q_i$ is non-empty. For every $\epsilon > 0$, there is an $i \in I$ such that $r (Q_i) < \epsilon$. That implies $Q_i \subseteq V_\epsilon$. Hence $\bigcap_{i \in I} Q_i$ is included in $\bigcap_{\epsilon > 0} V_\epsilon = X$, using Lemma~\ref{lemma:Veps}. We now conclude that $\bigcap_{i \in I} Q_i$ is a compact saturated subset of $X$ by Lemma~\ref{lemma:compact:subspace}. \qed Since $\bigcap_{i \in I} Q_i$ is included in $X$, it is also equal to $\bigcap_{i \in I} (Q_i \cap X)$. For a finitary compact $Q = \upc \{(x_1, r_1), \cdots, (x_n, r_n)\}$, $Q \cap X$ is the finite union of closed balls $\bigcup_{j=1}^n B^d_{x_j, \leq r_j}$. Hence Lemma~\ref{lemma:Xd:compact:1} implies the following (for non-empty finitary compacts; if one of them is empty, the claim is obvious). Condition~(1) means that $Q_i = \upc \{(x_{ij}, r_{ij}) \mid 1\leq j\leq n_i\}$ contains $Q_{i'}$ for all $i \sqsubseteq i'$ in $I$, so that ${(Q_i)}_{i \in I}$ is filtered. Condition~(2) states that $\inf_{i \in I} r (Q_i)=0$. \begin{cor} \label{corl:Xd:compact:1} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let $I, \sqsubseteq$ be a directed preordered set.
Assume given formal balls $(x_{ij}, r_{ij})$, $i \in I$, $1\leq j\leq n_i$, such that: \begin{enumerate} \item for all $i \sqsubseteq i'$ in $I$, for every $j'$, $1\leq j' \leq n_{i'}$, there is a $j$, $1\leq j\leq n_i$, such that $d (x_{ij}, x_{i'j'}) \leq r_{ij} - r_{i'j'}$; \item for every $\epsilon > 0$, there is an $i \in I$ such that for every $j$, $1\leq j\leq n_i$, $r_{ij} \leq \epsilon$. \end{enumerate} Then $\bigcap_{i \in I} \bigcup_{j=1}^{n_i} B^d_{x_{ij}, \leq r_{ij}}$ is compact saturated in $X$. \qed \end{cor} \begin{lem} \label{lemma:cont:Q} Let $X, d$ be a continuous quasi-metric space. For every compact saturated subset $Q$ of $X$, for every open neighborhood $U$ of $Q$, for every $\epsilon > 0$, one can find a finite union $A$ of closed balls $B^d_{x_i, \leq r_i}$ with $r_i < \epsilon$, and such that $Q \subseteq int (A) \subseteq A \subseteq U$. \end{lem} \proof The image of $Q$ by the embedding $x \mapsto (x, 0)$ of $X$ into $\mathbf B (X, d)$ is compact. If we agree to equate $x$ with $(x, 0)$, then $Q$ is compact not only in $X$, but also in $\mathbf B (X, d)$. Recall that $\widehat U$ is a Scott-open subset of $\mathbf B (X, d)$ and that $\widehat U \cap X = U$. By intersecting it with $V_\epsilon = \{(x, r) \mid x \in X, r < \epsilon\}$ (a Scott-open subset, owing to the fact that $X, d$ is standard, see Lemma~\ref{lemma:Veps}), we obtain an open neighborhood $\widehat U \cap V_\epsilon$ of $Q$ in $\mathbf B (X, d)$. Since $\mathbf B (X, d)$ is a continuous poset, $\widehat U \cap V_\epsilon$ is the union of all open subsets $\uuarrow (x, r)$, $(x, r) \in \widehat U \cap V_\epsilon$. By compactness, $Q$ is therefore included in some finite union $\bigcup_{i=1}^n \uuarrow (x_i, r_i)$, where every $(x_i, r_i)$ is in $\widehat U \cap V_\epsilon$. In particular, $r_i < \epsilon$ for each $i$. Note also that $\upc (x_i, r_i)$ is included in $\widehat U$, since $\widehat U$ is upwards-closed, so $\upc (x_i, r_i) \cap X = B^d_{x_i, \leq r_i}$ is included in $U$.
Therefore $A = \bigcup_{i=1}^n B^d_{x_i, \leq r_i}$, whose interior contains the open neighborhood $\bigcup_{i=1}^n (\uuarrow (x_i, r_i) \cap X)$ of $Q$, fits. \qed \begin{prop} \label{prop:Xd:compact} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The compact saturated subsets of $X$ with its $d$-Scott topology are exactly the sets $\bigcap_{i \in I} \bigcup_{j=1}^{n_i} B^d_{x_{ij}, \leq r_{ij}}$ that satisfy the conditions of Corollary~\ref{corl:Xd:compact:1}, or equivalently, the filtered intersections $\bigcap_{i \in I} Q_i$ of finitary compacts $Q_i$ of $\mathbf B (X, d)$ with $\inf_{i \in I} r (Q_i)=0$. \end{prop} \proof Let $Q$ be compact saturated in $X$, and $U$ be an open neighborhood of $Q$. By Lemma~\ref{lemma:cont:Q}, we can build a subset $A_{U,0}$ of $U$, which is a finite union of closed balls of radius $< 1$ and whose interior $V_{U,0}$ contains $Q$. Reusing that lemma, we can find a subset $A_{U,1}$ of $V_{U,0}$, which is a finite union of closed balls of radius $<1/2$ and whose interior $V_{U,1}$ contains $Q$. We continue in this way to build $A_{U,n}$ for every $n \in \nat$, together with its interior $V_{U,n}$ containing $Q$, in such a way that $A_{U,n}$ is a finite union of closed balls of radius $<1/2^n$. The family of all sets $A_{U, n}$, where $U$ ranges over the open neighborhoods of $Q$ and $n \in \nat$ is filtered, because $A_{U, m}$ and $A_{U', n}$ both contain $A_{V_{U,m} \cap V_{U',n}, 0}$. Their intersection contains $Q$, and is included in any open neighborhood $U$ of $Q$, hence equals $Q$, since $Q$ is saturated. In the reverse direction, we use Lemma~\ref{lemma:Xd:compact:1}. \qed \begin{rem} \label{rem:Hausdorff} Proposition~\ref{prop:Xd:compact} is a quasi-metric analogue of a result by Hausdorff stating that, in a complete metric space, the compact subsets are exactly the closed, precompact sets. 
A set is precompact if and only if, for every $\epsilon > 0$, it can be covered by a finite union of open balls of radius $\epsilon$. In particular, any set of the form $\bigcap_{i \in I} \bigcup_{j=1}^{n_i} B^d_{x_{ij}, \leq r_{ij}}$ as above, in a metric space, is both closed and precompact, hence compact. In a quasi-metric space, beware that the closed balls $B^d_{x_{ij}, \leq r_{ij}}$ may fail to be closed. Another analogue of Hausdorff's result can be found in \cite[Proposition~7.2.22]{JGL-topology}: the symcompact quasi-metric spaces, namely the quasi-metric spaces $X, d$ that are compact in the open ball topology of $d^{sym}$ (where $d^{sym} (x, y) = \max (d (x, y), d (y, x))$) are exactly the Smyth-complete totally bounded quasi-metric spaces ($X, d$ is totally bounded if and only if $X, d^{sym}$ is precompact). That is less relevant to our setting. \end{rem} It turns out that we shall not need Proposition~\ref{prop:Xd:compact} in what follows. It has independent interest, however, and we shall definitely require some of the auxiliary lemmas that led us to it. \section{Spaces of Continuous and Lipschitz Maps} \label{sec:cont-lipsch-maps} We equip $\overline{\mathbb{R}}_+$ with the Scott topology of its ordering $\leq$, or equivalently, with the $\dreal$-Scott topology. For a topological space $X$, the continuous maps from $X$ to $\overline{\mathbb{R}}_+$ are usually called \emph{lower semicontinuous}. \begin{defi}[$\Lform X$] \label{defn:LX} Let $\Lform X$ denote the set of lower semicontinuous maps from $X$ to $\overline{\mathbb{R}}_+$. We give it the Scott topology of the pointwise ordering. \end{defi} \begin{lem} \label{lemma:Lalpha:LX} Let $X, d$ be a standard quasi-metric space, and $\alpha > 0$. Every $\alpha$-Lipschitz continuous map from $X, d$ to $\overline{\mathbb{R}}_+, \dreal$ is lower semicontinuous.
\end{lem} \proof Recall that $f$ is $\alpha$-Lipschitz continuous if and only if $f' \colon (x, r) \mapsto f (x) - \alpha r$ is Scott-continuous, by Lemma~\ref{lemma:f'}. Then $f^{-1} (]t, +\infty]) = X \cap {f'}^{-1} (]t, +\infty])$ for every $t \in \mathbb{R}$, showing that $f$ itself is lower semicontinuous. \qed \begin{defi}[$\Lform_\alpha (X, d)$, $\Lform_\infty (X, d)$] \label{defn:Lalpha} Let $\Lform_\alpha (X, d)$ be the set of $\alpha$-Lipschitz continuous maps from $X, d$ to $\overline{\mathbb{R}}_+, \dreal$, and let $\Lform_\infty (X, d) = \bigcup_{\alpha \in \mathbb{R}_+} \Lform_\alpha (X, d)$ be the set of all Lipschitz continuous maps from $X, d$ to $\overline{\mathbb{R}}_+, \dreal$. We give those spaces the subspace topology from $\Lform X$ (which makes sense, by Lemma~\ref{lemma:Lalpha:LX}). \end{defi} We also write $\Lform_\infty X$ for $\Lform_\infty (X, d)$, and $\Lform_\alpha X$ for $\Lform_\alpha (X, d)$. Beware that there is no reason why the topologies on $\Lform_\alpha (X, d)$ and $\Lform_\infty (X, d)$ would be the Scott topology of the pointwise ordering. We shall see a case where those topologies coincide in Proposition~\ref{prop:Lalpha:retract}. \begin{defi}[$\Lform_\alpha^a (X, d)$, $\Lform_\infty^a (X, d)$] \label{defn:Lalpha:bnd} Let $\Lform_\alpha^a X$ or $\Lform_\alpha^a (X, d)$ be the space of all $\alpha$-Lipschitz continuous maps from $X, d$ to $[0, \alpha a], \dreal$, for $\alpha \in \mathbb{R}_+$, where $a \in \mathbb{R}_+$, $a > 0$. Give it the subspace topology from $\Lform_\alpha X$, or equivalently, from $\Lform X$. \end{defi} \begin{defi}[$\Lform_\infty^\bnd (X, d)$] \label{defn:Lbnd} Let $\Lform_\infty^\bnd (X, d)$ be the subspace of all \emph{bounded} maps in $\Lform_\infty (X, d)$, and $\Lform_\alpha^\bnd (X, d)$ be the corresponding subspace of all bounded maps in $\Lform_\alpha (X, d)$, with the subspace topologies. 
\end{defi} Since the specialization ordering on $\Lform X$ is the pointwise ordering ($f \leq g$ if and only if for every $x \in X$, $f (x) \leq g (x)$), the same holds for all the spaces defined above as well. We shall always write $\leq$ for that ordering. We note the following, although we shall only use it much later. \begin{lem} \label{lemma:Linfa} Let $X, d$ be a quasi-metric space. For every $a > 0$, $\Lform_\infty^\bnd (X, d) = \bigcup_{\alpha > 0} \Lform_\alpha^a (X, d)$. \end{lem} \proof Consider any bounded map $f$ from $\Lform_\infty (X, d)$. By definition, $f \leq b.\mathbf 1$ for some $b \in \mathbb{R}_+$, where $\mathbf 1$ is the constant map equal to $1$, and $f \in \Lform_\alpha (X, d)$ for some $\alpha > 0$. Since $\Lform_\alpha (X, d)$ grows as $\alpha$ increases, by Proposition~\ref{prop:alphaLip:props}~(5), we may assume that $\alpha \geq b/a$. Then $f$ is in $\Lform_\alpha^a (X, d)$. The reverse inclusion is obvious. \qed Assuming $X, d$ standard, for each $\alpha \in \mathbb{R}_+$, there is a largest $\alpha$-Lipschitz continuous map $f^{(\alpha)}$ below any lower semicontinuous map $f \in \Lform X$. Moreover, the family ${(f^{(\alpha)})}_{\alpha \in \mathbb{R}_+}$ is a chain, and $\sup_{\alpha \in \mathbb{R}_+} f^{(\alpha)} = f$, where suprema are taken pointwise \cite[Theorem~6.17]{JGL:formalballs}. We shall examine what $f^{(\alpha)}$ may be, and what properties it may have, in two important cases: when $f$ is lower semicontinuous and $X, d$ is Lipschitz regular; and when $f$ is already $\alpha$-Lipschitz but not necessarily continuous, and $X, d$ is algebraic. Next, while $f^{(\alpha)}$ is largest, we shall introduce functions that are smallest among the $1$-Lipschitz continuous maps mapping some given center points to values above some specified numbers. \subsection{$f^{(\alpha)}$ for $f$ Lower Semicontinuous} \label{sec:falpha-f-lower} Let $X, d$ be a standard quasi-metric space. 
We know that, for every $d$-Scott open subset $U$ of $X$, for all $\alpha, r \in \mathbb{R}_+$, $(r\chi_U)^{(\alpha)}$ is the map $x \mapsto \min (r, \alpha d (x, \overline U))$ (Proposition~6.14, loc.cit.). We shall extend that below. Also, for every $\alpha$-Lipschitz continuous map $g \colon X \to \overline{\mathbb{R}}_+$, for every $t \in \mathbb{R}_+$, $t g$ is $t \alpha$-Lipschitz continuous (Proposition~\ref{prop:alphaLip:props}~(1)). Let us call \emph{step function} any function from $X$ to $\overline{\mathbb{R}}_+$ of the form $\sup_{i=1}^m a_i \chi_{U_i}$, where $0 < a_1 < \cdots < a_m < +\infty$ and $U_1 \supseteq U_2 \supseteq \cdots \supseteq U_m$ form a finite antitone family of open subsets of $X$. \begin{lem} \label{lemma:f(alpha):step} Let $X, d$ be a standard quasi-metric space. For a step function $f = \sup_{i=1}^m a_i \chi_{U_i}$, and $\alpha > 0$, $f^{(\alpha)}$ is the function that maps every $x \in X$ to $\min (\alpha d (x, \overline U_1), a_1 + \alpha d (x, \overline U_2), \cdots, \allowbreak a_{i-1} + \alpha d (x, \overline U_i), \cdots, a_{m-1} + \alpha d (x, \overline U_m), a_m)$. \end{lem} \proof Let $g (x) = \min (\alpha d (x, \overline U_1), a_1 + \alpha d (x, \overline U_2), \cdots, a_{i-1} + \alpha d (x, \overline U_i), \cdots, \allowbreak a_{m-1} + \alpha d (x, \overline U_m), \allowbreak a_m)$. Each of the maps $x \mapsto a_{i-1} + \alpha d (x, \overline U_i)$ (where, for convenience, we shall assume $a_0 = 0$, so as not to make a special case for $i=1$) is $\alpha$-Lipschitz continuous, and therefore $g$ is $\alpha$-Lipschitz continuous. Indeed, the map $d (\_, \overline U)$ is $1$-Lipschitz (Yoneda-)continuous, as shown in Lemma~6.11~(3) of \cite{JGL:formalballs}; the rest of the argument relies on Proposition~\ref{prop:alphaLip:props}. We claim that $g (x) \leq f (x)$ for every $x \in X$. Let $U_0=X$, so that $U_i$ makes sense also when $i=0$, and let $U_{m+1}=\emptyset$. 
The latter allows us to write $g (x)$ as $\min_{i=0}^{m} (a_i + \alpha d (x, \overline U_{i+1}))$, noticing that $d (x, \overline U_{m+1}) = 0$. Indeed, by \cite[Lemma~6.11~(1)]{JGL:formalballs}, for every open subset $U$, $d (x, \overline U)=0$ if and only if $x \not\in U$. There is a unique index $j$, $0\leq j \leq m$, such that $x \in U_j$ and $x \not\in U_{j+1}$. Then $g (x) \leq a_j + \alpha d (x, \overline U_{j+1}) = a_j$. Noticing that $f (x) = a_j$, it follows that $g (x) \leq f (x)$. Now consider any $\alpha$-Lipschitz continuous map $h \leq f$, and let us show that $h \leq g$. We fix $x \in X$ and $i$ with $0 \leq i \leq m$, and we claim that $h (x) \leq a_i + \alpha d (x, \overline U_{i+1})$. Since $h$ is $\alpha$-Lipschitz continuous, $h' \colon (x, r) \mapsto h (x) - \alpha r$ is Scott-continuous, so $V = {h'}^{-1} (]a_i, +\infty])$ is open in $\mathbf B (X, d)$. For every element of the form $(y, 0)$ in $V \cap X$, $h' (y, 0) = h (y) > a_i$, hence $f (y) \geq h (y) > a_i$, which implies that $y$ is in $U_{i+1}$. We have just shown that $V \cap X \subseteq U_{i+1}$, and that implies $V \subseteq \widehat U_{i+1}$, by maximality of $\widehat U_{i+1}$. Now, for every $s \in \mathbb{R}_+$ such that $s < (h (x) - a_i)/\alpha$, i.e., such that $h' (x, s) = h (x) - \alpha s$ is strictly larger than $a_i$, by definition $(x, s)$ is in $V$, hence in $\widehat U_{i+1}$. By definition, this means that $s \leq d (x, \overline U_{i+1})$. Taking suprema over $s$, we obtain $(h (x) - a_i) / \alpha \leq d (x, \overline U_{i+1})$, equivalently $h (x) \leq a_i + \alpha d (x, \overline U_{i+1})$. Since that holds for every $i$, $0\leq i\leq m$, $h (x) \leq g (x)$. Hence $g$ is the largest $\alpha$-Lipschitz continuous map below $f$, in other words, $g = f^{(\alpha)}$.
\qed Given any topological space $X$, every lower semicontinuous function $f \colon X \to \overline{\mathbb{R}}_+$ is the pointwise supremum of a chain of step functions: \begin{equation} \label{eq:fK} f_K (x) = \frac 1 {2^K} \sup\nolimits_{k=1}^{K2^K} k \chi_{f^{-1} (]k/2^K, +\infty])} (x) \end{equation} where $K \in \nat$. If $X, d$ is a standard quasi-metric space, $f_K^{(\alpha)}$ is given by Lemma~\ref{lemma:f(alpha):step}, namely: \begin{equation} \label{eq:fK(alpha)} f_K^{(\alpha)} (x) = \min (\min\nolimits_{k=1}^{K2^K} (\frac {k-1} {2^K} + \alpha d (x, \overline {f^{-1} (]k/2^K, +\infty])})), K). \end{equation} \begin{prop} \label{prop:f(alpha)} Let $X, d$ be a standard quasi-metric space. For every lower semicontinuous map $f \colon X \to \overline{\mathbb{R}}_+$, for every $\alpha \in \mathbb{R}_+$, for every $x \in X$, \[ f^{(\alpha)} (x) = \sup_{K \in \nat} f_K^{(\alpha)} (x). \] \end{prop} \proof We first deal with the case where $f$ is already $\alpha$-Lipschitz continuous. In that case, we claim the equivalent statement: $(*)$ if $f$ is $\alpha$-Lipschitz continuous, then for every $x \in X$, $f (x) = \sup_{K \in \nat} f_K^{(\alpha)} (x)$. Fix $j \in \nat$ and $k$ such that $1 \leq k \leq K2^K$, and note that if $(x, j/(\alpha 2^K)) \in \widehat U_k$, where $U_k = f^{-1} (]k/2^K, +\infty])$, then $\alpha d (x, \overline U_k) \geq j/2^K$. This is by definition of $d (x, \overline U_k)$. Recall that $f' (x, r) = f (x) - \alpha r$ defines a Scott-continuous map. For every $(y, 0)$ in $X \cap {f'}^{-1} (]k/2^K, +\infty])$, $f' (y, 0) = f (y) > k/2^K$, so $X \cap {f'}^{-1} (]k/2^K, +\infty])$ is included in $f^{-1} (]k/2^K, +\infty]) = U_k$. By maximality, ${f'}^{-1} (]k/2^K, +\infty])$ is included in $\widehat U_k$. Hence if $(x, j/(\alpha 2^K))$ is in ${f'}^{-1} (]k/2^K, +\infty])$, then $\alpha d (x, \overline U_k) \geq j/2^K$. That happens when $f (x) - j/2^K > k / 2^K$, i.e., when $x$ is in $f^{-1} (](k+j)/2^K, +\infty])$.
Therefore $\alpha d (x, \overline U_k) \geq j/2^K \chi_{f^{-1} (](k+j)/2^K, +\infty])} (x)$ for all $j \in \nat$ and $k$ such that $1\leq k\leq K2^K$. Now fix $k_0$ with $1\leq k_0\leq K2^K$. For every $k$ with $1\leq k \leq k_0$, letting $j = k_0-k$, we obtain that $(k-1)/2^K + \alpha d (x, \overline U_k) \geq (k-1)/2^K + (k_0-k)/2^K \chi_{f^{-1} (]k_0/2^K, +\infty])} (x) \geq (k_0-1)/2^K \chi_{f^{-1} (]k_0/2^K, +\infty])} (x)$. For every $k$ such that $k_0 < k \leq K2^K$, $(k-1)/2^K + \alpha d (x, \overline {U_k}) \geq k_0/2^K$, which is again larger than or equal to $(k_0-1)/2^K \chi_{f^{-1} (]k_0/2^K, +\infty])} (x)$; similarly, $K \geq k_0/2^K$, since $k_0 \leq K2^K$. Using (\ref{eq:fK(alpha)}), we obtain $f_K^{(\alpha)} (x) \geq (k_0-1)/2^K \chi_{f^{-1} (]k_0/2^K, +\infty])} (x)$, and therefore $f_K^{(\alpha)} (x) \geq k_0/2^K \chi_{f^{-1} (]k_0/2^K, +\infty])} (x) -1/2^K$. Since that holds for every $k_0$ between $1$ and $K2^K$, it follows that $f_K^{(\alpha)} (x) \geq f_K (x) - 1/2^K$. Taking suprema over $K \in \nat$, we obtain $\sup_{K \in \nat} f_K^{(\alpha)} (x) \geq \sup_{K \in \nat} (f_K (x) - 1/2^K) = f (x)$, proving $(*)$. In the general case, where $f$ is only assumed to be lower semicontinuous, we note that $f \geq f^{(\alpha)}$ implies that $f_K \geq (f^{(\alpha)})_K$. Indeed, that follows from formula (\ref{eq:fK}) and the fact that $(f^{(\alpha)})^{-1} (]k/2^K, +\infty])$ is included in $f^{-1} (]k/2^K, +\infty])$ for every $k$. The mapping $g \mapsto g^{(\alpha)}$ is also monotonic, since $g^{(\alpha)}$ is defined as the largest $\alpha$-Lipschitz continuous map below $g$. Therefore $f_K^{(\alpha)} \geq (f^{(\alpha)})_K^{(\alpha)}$. Taking suprema, we obtain that $\sup_{K \in \nat} f_K^{(\alpha)} (x) \geq \sup_{K \in \nat} (f^{(\alpha)})_K^{(\alpha)} (x) = f^{(\alpha)} (x)$, where the last equality follows from statement $(*)$ (first part of the proof), applied to the $\alpha$-Lipschitz continuous function $f^{(\alpha)}$.
The reverse inequality $\sup_{K \in \nat} f_K^{(\alpha)} \leq f^{(\alpha)}$ is easy: for every $K \in \nat$, $f_K \leq f$, so $f_K^{(\alpha)} \leq f^{(\alpha)}$. \qed \begin{prop} \label{prop:lipreg:Lip} Let $X, d$ be a standard quasi-metric space. The following are equivalent: \begin{enumerate} \item $X, d$ is Lipschitz regular; \item for every $\alpha \in \mathbb{R}_+$, the map $f \in \Lform X \mapsto f^{(\alpha)} \in \Lform_\alpha (X, d)$ is Scott-continuous; \item for some $\alpha > 0$, the map $f \in \Lform X \mapsto f^{(\alpha)} \in \Lform_\alpha (X, d)$ is Scott-continuous. \end{enumerate} \end{prop} \proof (1) $\limp$ (2). Clearly $f \mapsto f^{(\alpha)}$ is monotonic. Let ${(f_i)}_{i \in I}$ be a directed family of lower semicontinuous maps from $X$ to $\overline{\mathbb{R}}_+$, and $f$ be their (pointwise) supremum. Note that, for every $t \in \mathbb{R}_+$, $f^{-1} (]t, +\infty])$ is the union of the directed family of open sets $f_i^{-1} (]t, +\infty])$, $i \in I$. Then, for every $x \in X$, and every $K \in \nat$: \begin{eqnarray*} f_K^{(\alpha)} (x) & = & \min (\min\nolimits_{k=1}^{K2^K} (\frac {k-1} {2^K} + \alpha d (x, \overline {f^{-1} (]k/2^K, +\infty])})), K) \\ && \qquad \text{by Formula~(\ref{eq:fK(alpha)})} \\ & = & \min (\min\nolimits_{k=1}^{K2^K} (\frac {k-1} {2^K} + \alpha d (x, \overline {\bigcup_{i \in I} f_i^{-1} (]k/2^K, +\infty])})), K) \\ & = & \min (\min\nolimits_{k=1}^{K2^K} (\frac {k-1} {2^K} + \alpha \sup_{i \in I} d (x, \overline {f_i^{-1} (]k/2^K, +\infty])})), K) \\ && \qquad\text{by Lipschitz-regularity (Lemma~\ref{lemma:lipreg:dist}~(2))} \\ & = & \sup_{i \in I} \min (\min\nolimits_{k=1}^{K2^K} (\frac {k-1} {2^K} + \alpha d (x, \overline {f_i^{-1} (]k/2^K, +\infty])})), K) \\ & = & \sup_{i \in I} {(f_i)}_K^{(\alpha)} (x) \end{eqnarray*} since multiplication by $\alpha$, addition, and $\min$ are Scott-continuous.
Using Proposition~\ref{prop:f(alpha)}, it follows that $f^{(\alpha)} (x) = \sup_{K \in \nat} \sup_{i \in I} {(f_i)}_K^{(\alpha)} (x) = \sup_{i \in I} \sup_{K \in \nat} {(f_i)}_K^{(\alpha)} (x) = \sup_{i \in I} f_i^{(\alpha)} (x)$. (2) $\limp$ (3): obvious. (3) $\limp$ (1). (3) applies notably to the family of maps $r \chi_{U_i}$, where ${(U_i)}_{i \in I}$ is an arbitrary directed family of open subsets of $X$, and $r \in \mathbb{R}_+$. Let $U = \bigcup_{i \in I} U_i$, so that $\sup_{i \in I} r \chi_{U_i} = r \chi_U$. Then (3) entails that $(r\chi_U)^{(\alpha)} = \sup_{i \in I} (r\chi_{U_i})^{(\alpha)}$. This means that for every $x \in X$, $\min (r, \alpha d (x, \overline U)) = \sup_{i \in I} \min (r, \alpha d (x, \overline U_i)) = \min (r, \alpha \sup_{i \in I} d (x, \overline U_i))$. Since $r$ is arbitrary, we make it tend to $+\infty$, leaving $x$ fixed. We obtain that $\alpha d (x, \overline U) = \alpha \sup_{i \in I} d (x, \overline U_i)$, and since $\alpha > 0$, that $d (x, \overline U) = \sup_{i \in I} d (x, \overline U_i)$. Hence $X, d$ is Lipschitz regular by Lemma~\ref{lemma:lipreg:dist}~(2). \qed \begin{cor} \label{corl:Lalpha:retract} Let $\alpha > 0$, and $X, d$ be a Lipschitz regular standard quasi-metric space. Then the canonical injection $i_\alpha \colon \Lform_\alpha (X, d) \to \Lform X$ and the map $r_\alpha \colon f \in \Lform X \mapsto f^{(\alpha)} \in \Lform_\alpha (X, d)$ form an embedding-projection pair, viz., $r_\alpha$ and $i_\alpha$ are continuous, $r_\alpha \circ i_\alpha = \identity {\Lform_\alpha X}$ and $i_\alpha \circ r_\alpha \leq \identity {\Lform X}$. \end{cor} \proof We know that $i_\alpha$ is continuous (by definition of the subspace topology), the equality $r_\alpha \circ i_\alpha = \identity {\Lform_\alpha X}$ and the inequality $i_\alpha \circ r_\alpha \leq \identity {\Lform X}$ are clear, and $r_\alpha$ is Scott-continuous by Proposition~\ref{prop:lipreg:Lip}.
Recall however that the topology we have taken on $\Lform_\alpha (X, d)$ is not the Scott topology. In order to show that $r_\alpha$ is continuous, we therefore proceed as follows. Given any open subset $V$ of $\Lform_\alpha (X, d)$, by definition of the subspace topology there is a Scott-open subset $W$ of $\Lform X$ such that $V = W \cap \Lform_\alpha (X, d)$. Then $r_\alpha^{-1} (V) = r_\alpha^{-1} (W)$ is Scott-open in $\Lform X$, showing that $r_\alpha$ is continuous from $\Lform X$ to $\Lform_\alpha (X, d)$. \qed In the proof of Corollary~\ref{corl:Lalpha:retract}, we have paid attention to the fact that the subspace topology on $\Lform_\alpha (X, d)$ might fail to coincide with the Scott topology. However, when $X, d$ is Lipschitz regular and standard, this is unnecessary: \begin{prop} \label{prop:Lalpha:retract} Let $X, d$ be a Lipschitz regular standard quasi-metric space. Then the subspace topology on $\Lform_\alpha (X, d)$ induced by the Scott topology on $\Lform X$ coincides with the Scott topology. \end{prop} \proof $r_\alpha$ is Scott-continuous by Proposition~\ref{prop:lipreg:Lip}~(2), and $i_\alpha$ is also Scott-continuous, since suprema are computed in the same way in $\Lform_\alpha (X, d)$ and in $\Lform X$. In a section-retraction pair, the section is a topological embedding, so $i_\alpha$ is an embedding of $\Lform_\alpha (X, d)$, with its Scott topology, into $\Lform X$. That implies that the Scott topology on $\Lform_\alpha (X, d)$ coincides with the subspace topology. \qed A similar argument allows us to establish the following. To show that $\min (\alpha a . \mathbf 1, f^{(\alpha)}) \in \Lform_\alpha^a (X, d)$, we use Proposition~\ref{prop:alphaLip:props}~(3) and (6), which state that the pointwise min of two $\alpha$-Lipschitz continuous maps is $\alpha$-Lipschitz continuous and that constant maps are $\alpha$-Lipschitz continuous. We write $\mathbf 1$ for the constant map equal to $1$. 
\begin{cor} \label{corl:Lalpha:retract:bnd} Let $\alpha > 0$, $a > 0$, and $X, d$ be a Lipschitz regular standard quasi-metric space. Then the canonical injection $i_\alpha^a \colon \Lform_\alpha^a (X, d) \to \Lform X$ and the map $r_\alpha^a \colon f \in \Lform X \mapsto \min (a\alpha.\mathbf 1, f^{(\alpha)}) \in \Lform_\alpha^a (X, d)$ form an embedding-projection pair, viz., $r_\alpha^a$ and $i_\alpha^a$ are continuous, $r_\alpha^a \circ i_\alpha^a = \identity {\Lform_\alpha^a X}$ and $i_\alpha^a \circ r_\alpha^a \leq \identity {\Lform X}$. \qed \end{cor} \begin{rem} \label{rem:Lalpha:retract:bnd} As in Proposition~\ref{prop:Lalpha:retract}, this also shows that, when $X, d$ is Lipschitz regular and standard, the subspace topology (induced by the inclusion into $\Lform X$) coincides with the Scott topology on $\Lform_\alpha^a (X, d)$. \end{rem} \subsection{$f^{(\alpha)}$ for $f$ $\alpha$-Lipschitz} \label{sec:falpha-f-alpha} We no longer assume $f$ lower semicontinuous. Finding the largest $\alpha$-Lipschitz (not necessarily continuous) map below $f$ is easy: \begin{lem} \label{lemma:largestLip} Let $X, d$ be a quasi-metric space, and $\alpha \in \mathbb{R}_+$. The largest $\alpha$-Lipschitz map below an arbitrary function $f \colon X \to \overline{\mathbb{R}}_+$ is given by: \begin{equation} \label{eq:falpha} f^\alpha (x) = \inf_{z \in X} (f (z) + \alpha d (x, z)). \end{equation} \end{lem} \proof For all $x, y \in X$, $f^\alpha (y) + \alpha d (x, y) = \inf_{z \in X} (f (z) + \alpha d (x, y) + \alpha d (y, z)) \geq \inf_{z \in X} (f (z) + \alpha d (x, z)) = f^\alpha (x)$, so $f^\alpha$ is $\alpha$-Lipschitz. It is clear that $f^\alpha$ is below $f$: take $z=x$ in the infimum defining $f^\alpha$. Now let $g$ be any other $\alpha$-Lipschitz map below $f$. For all $x, z \in X$, $g (x) \leq g (z) + \alpha d (x, z) \leq f (z) + \alpha d (x, z)$, hence by taking infima over all $z \in X$, $g (x) \leq f^\alpha (x)$.
\qed Every monotonic map $g$ from a space $Y$ to $\overline{\mathbb{R}}_+$ has a lower semicontinuous envelope $\overline g$, defined as the (pointwise) largest map below $g$ that is lower semicontinuous. When $Y$ is a continuous poset, one can define $\overline g (y)$ as the directed supremum $\sup_{y' \ll y} g (y')$ (see for example \cite[Corollary~5.1.61]{JGL-topology}). This is sometimes called Scott's formula. Now assume $f \colon X \to \overline{\mathbb{R}}_+$ is already $\alpha$-Lipschitz, but not necessarily continuous. There are two ways one can find an $\alpha$-Lipschitz continuous map below $f$: either consider $f^{(\alpha)}$, the largest possible such map, or, if $X, d$ is continuous, extend $f$ to $f' \colon (x, r) \mapsto f (x) - \alpha r$, apply Scott's formula to obtain $\overline {f'}$, then restrict the latter to the subspace $X$ of $\mathbf B (X, d)$. We show that the two routes lead to the same function. \begin{lem} \label{lemma:largestLipcont} Let $X, d$ be a continuous quasi-metric space, and $\alpha \in \mathbb{R}_+$. For any $\alpha$-Lipschitz map $f \colon X \to \overline{\mathbb{R}}_+$, the largest $\alpha$-Lipschitz continuous map below $f$, $f^{(\alpha)}$, is given by: \begin{equation} \label{eq:f(alpha)} f^{(\alpha)} (x) = \sup_{(y, s) \ll (x, 0)} (f (y) - \alpha s). \end{equation} Moreover, $(f^{(\alpha)})'$, defined as mapping $(x, r)$ to $f^{(\alpha)} (x) - \alpha r$, is the largest lower semicontinuous map $\overline {f'}$ from $\mathbf B (X, d)$ to $\overline{\mathbb{R}}_+$ below $f' \colon (x, r) \mapsto f (x) - \alpha r$. \end{lem} \proof Take $Y=\mathbf B (X, d)$. Since $f$ is $\alpha$-Lipschitz, the map $f' \colon \mathbf B (X, d) \to \mathbb{R} \cup \{+\infty\}$ defined by $f' (x, r) = f (x) - \alpha r$ is monotonic, by Lemma~\ref{lemma:f'}. Then $\overline {f'} (x, r) = \sup_{(y, s) \ll (x, r)} (f (y) - \alpha s)$. By definition, $\overline {f'}$ is Scott-continuous. 
Note that $\overline {f'} (x, 0)$ is exactly the right-hand side of (\ref{eq:f(alpha)}). For clarity, let $g (x) = \overline {f'} (x, 0) = \sup_{(y, s) \ll (x, 0)} (f (y) - \alpha s)$. We check that for every $r \in \mathbb{R}_+$, $\overline {f'} (x, r) = g (x) - \alpha r$. For that, we use the fact that, when $X, d$ is a continuous quasi-metric space, the way-below relation $\ll$ on $\mathbf B (X, d)$ is \emph{standard} \cite[Proposition~3.6]{JGL:formalballs}, meaning that, for every $a \in \mathbb{R}_+$, for all formal balls $(x, r)$ and $(y, s)$, $(y, s) \ll (x, r)$ if and only if $(y, s+a) \ll (x, r+a)$. It follows that $\overline{f'} (x, r) = \sup_{s\geq r, (y, s-r) \ll (x, 0)} (f (y) - \alpha s) = \sup_{(y, s') \ll (x, 0)} (f (y) - \alpha (s'+r)) = \sup_{(y, s') \ll (x, 0)} (f (y) - \alpha s')- \alpha r = g (x) - \alpha r$. In other words, $\overline {f'} = g'$. Since $\overline {f'}$ is Scott-continuous, $g$ is $\alpha$-Lipschitz continuous. We check that $g$ takes its values in $\overline{\mathbb{R}}_+$, not just $\mathbb{R} \cup \{+\infty\}$. For every $x \in X$, for every $\epsilon > 0$, $(x, 0)$ is in the open set $V_\epsilon$ (Lemma~\ref{lemma:Veps}), and since $(x, 0)$ is the supremum of the directed family of all formal balls $(y, s) \ll (x, 0)$, one of them is in $V_\epsilon$; this implies that $g (x) \geq -\alpha\epsilon$, and as $\epsilon$ is arbitrary, that $g (x) \geq 0$. Also, $g \leq f$, since for every $x \in X$, $g (x) = \overline {f'} (x, 0) \leq f' (x, 0) = f (x)$. Hence $g$ is an $\alpha$-Lipschitz continuous map from $X$ to $\overline{\mathbb{R}}_+$ below $f$, from which we deduce that it must also be below the largest such map, $f^{(\alpha)}$. We show that $g$ is equal to $f^{(\alpha)}$. To that end, we take any $\alpha$-Lipschitz continuous map $h \colon X \to \overline{\mathbb{R}}_+$ below $f$, and we show that $h \leq g$. Since $h \leq f$, $h' \leq f'$. Since $h$ is $\alpha$-Lipschitz continuous, $h'$ is Scott-continuous. 
We use the fact that $\overline {f'}$ is the largest Scott-continuous map below $f'$ to obtain $h' \leq \overline {f'}$, and apply both sides of the inequality to $(x, 0)$ to obtain $h (x) = h' (x) \leq \overline {f'} (x, 0) = g (x)$. Since $g=f^{(\alpha)}$ and $g (x) = \sup_{(y, s) \ll (x, 0)} (f (y) - \alpha s)$ by definition, (\ref{eq:f(alpha)}) follows. Finally, we have seen that $\overline {f'} = g'$, namely that $\overline {f'} = (f^{(\alpha)})'$, and that is the final part of the lemma. \qed \begin{cor} \label{corl:ha=hb} Let $X, d$ be a continuous quasi-metric space, and $\alpha, \beta \in \mathbb{R}_+$. For every $f \in L_\alpha (X, d) \cap L_\beta (X, d)$, $f^{(\alpha)} = f^{(\beta)}$. \end{cor} \proof Since $\mathbf B (X, d)$ is a continuous poset, for every $x \in X$, $(x, 0)$ is the supremum of the directed family of formal balls $(y, s) \ll (x, 0)$, and since $X, d$ is standard, such formal balls have arbitrarily small radii $s$. By (\ref{eq:f(alpha)}), $f^{(\alpha)} (x) = \sup_{(y, s) \ll (x, 0)} (f (y) - \alpha s)$. For every $(y, s) \ll (x, 0)$, for every $\epsilon > 0$, we can find another formal ball $(z, t) \ll (x, 0)$ such that $t < \epsilon$, by the remark we have just made. Using directedness, we can require $(z, t)$ to be above $(y, s)$. Then $f (y) - \alpha s \leq f (z) - \alpha t$ since $f$ is $\alpha$-Lipschitz, and $f (z) - \alpha t = f (z) - \beta t + (\beta-\alpha) t$ is less than or equal to $f^{(\beta)} (x) + |\beta-\alpha| \epsilon$. Taking suprema over $(y, s) \ll (x, 0)$, we obtain $f^{(\alpha)} (x) \leq f^{(\beta)} (x) + |\beta-\alpha| \epsilon$. Since $\epsilon$ can be made arbitrarily small, $f^{(\alpha)} (x) \leq f^{(\beta)} (x)$. We show $f^{(\beta)} (x) \leq f^{(\alpha)} (x)$ symmetrically, using the fact that $f$ is $\beta$-Lipschitz. The equality follows. \qed \begin{lem} \label{lemma:largestLipcont:alg} Let $X, d$ be a standard algebraic quasi-metric space, and $\alpha \in \mathbb{R}_+$. 
For any $\alpha$-Lipschitz map $f \colon X \to \overline{\mathbb{R}}_+$, the largest $\alpha$-Lipschitz continuous map below $f$, namely $f^{(\alpha)}$, is given by: \begin{equation} \label{eq:f(alpha):alg} f^{(\alpha)} (x) = \sup_{\substack{z \text{ center point}\\ t > d (z, x)}} (f (z) - \alpha t). \end{equation} Moreover, $(f^{(\alpha)})'$, defined as mapping $(x, r)$ to $f^{(\alpha)} (x) - \alpha r$, is the largest lower semicontinuous map $\overline {f'}$ from $\mathbf B (X, d)$ to $\overline{\mathbb{R}}_+$ below $f' \colon (x, r) \mapsto f (x) - \alpha r$. \end{lem} (\ref{eq:f(alpha):alg}) simplifies to $f^{(\alpha)} (x) = \sup_{z \text{ center point}} (f (z) - \alpha d (z, x))$ when $d (z, x) \neq +\infty$ for all center points $z$, or when $f (z) \neq +\infty$. \proof Easy consequence of Lemma~\ref{lemma:largestLipcont}, using the fact that, in a standard algebraic quasi-metric space, $(y, s) \ll (x, r)$ if and only if there is a center point $z$ and some $t \in \mathbb{R}_+$ such that $(y, s) \leq^{d^+} (z, t)$ and $d (z, x) < t-r$ \cite[Proposition~5.18]{JGL:formalballs}. \qed \subsection{The Functions $\sea x b$} \label{sec:functions-sea-x} \begin{defi}[$\sea x b$] \label{defn:sea} Let $X, d$ be a quasi-metric space. For each center point $x \in X$ and each $b \in \overline{\mathbb{R}}_+$, let $\sea x b \colon X \to \overline{\mathbb{R}}_+$ map every $y \in X$ to the smallest element $t \in \overline{\mathbb{R}}_+$ such that $b \leq t + d (x, y)$. \end{defi} When $b \neq +\infty$, we might have said, more simply: $(\sea x b) (y) = \max (b - d (x, y), 0)$. The definition caters for the general situation. When $b=+\infty$, $(\sea x {+\infty}) (y)$ is equal to $0$ if $d (x, y) = +\infty$, and to $+\infty$ if $d (x, y) < +\infty$. \begin{lem} \label{lemma:sea:cont} Let $X, d$ be a standard quasi-metric space, $x$ be a center point of $X, d$ and $b \in \overline{\mathbb{R}}_+$. The function $\sea x b$ is $1$-Lipschitz continuous. 
\end{lem} \proof It is enough to show that it is continuous, by Proposition~\ref{prop:cont}. Let $f = \sea x b$, and $f' (y, r) = f (y) - r$. For every $s \in \mathbb{R}$, we wish to show that ${f'}^{-1} (]s, +\infty])$ is Scott-open. If $b \neq +\infty$, a formal ball $(y, r)$ is in ${f'}^{-1} (]s, +\infty])$ if and only if $\max (b - d (x, y), 0) - r > s$. This is equivalent to $d (x, y) + r < b - s$ or $r < -s$. The set $V_{-s} = \{(y, r) \mid r < -s\}$ is Scott-open since $X, d$ is standard, by Lemma~\ref{lemma:Veps}. The condition $d (x, y) + r < b - s$ is vacuously false if $b \leq s$, and is equivalent to $d^+ ((x, 0), (y, r)) < b-s$ otherwise. It follows that ${f'}^{-1} (]s, +\infty])$ is equal to $V_{-s}$ if $b \leq s$, or to $V_{-s} \cup B^{d^+}_{(x, 0), < b-s}$ otherwise. This is Scott-open in any case, because $x$ is a center point. If $b = +\infty$, then $(y, r) \in {f'}^{-1} (]s, +\infty])$ entails that either $d (x, y) = +\infty$ and $-r > s$, or $d (x, y) < +\infty$. Conversely, if $-r > s$, then $f' (y, r) > s$, whether $d (x, y) = +\infty$ or $d (x, y) < +\infty$; and if $d (x, y) < +\infty$, then $f' (y, r) = +\infty > s$. Also, $d (x, y) < +\infty$ if and only if $d^+ ((x, 0), (y, r)) < +\infty$, if and only if $d^+ ((x, 0), (y, r)) < N$ for some natural number $N$. Therefore ${f'}^{-1} (]s, +\infty])$ is equal to $V_{-s} \cup \bigcup_N B^{d^+}_{(x, 0), < N}$, which is Scott-open. \qed \begin{lem} \label{lemma:sea:min} Let $X, d$ be a standard quasi-metric space, $x_i$ be center points of $X, d$ and $b_i \in \overline{\mathbb{R}}_+$, $1\leq i \leq n$. The function $\bigvee_{i=1}^n \sea {x_i} {b_i}$, which maps every $y \in X$ to $\max \{(\sea {x_i} {b_i}) (y) \mid 1\leq i\leq n\}$, is the smallest $1$-Lipschitz map $f$ (hence also the smallest function in $\Lform_1 (X, d)$) such that $f (x_i) \geq b_i$ for every $i$, $1\leq i\leq n$.
\end{lem} \proof First, $f = \bigvee_{i=1}^n \sea {x_i} {b_i}$ is in $\Lform_1 (X, d)$, by Lemma~\ref{lemma:sea:cont} and Proposition~\ref{prop:alphaLip:props}~(3), which states that the pointwise max of two $\alpha$-Lipschitz continuous maps is $\alpha$-Lipschitz continuous. For any other $1$-Lipschitz map $h$ such that $h (x_i) \geq b_i$, $1\leq i\leq n$, for every $y \in X$, $h (x_i) \leq h (y) + d (x_i, y)$. In other words, $h (y)$ is a number $t \in \overline{\mathbb{R}}_+$ such that $b_i \leq t + d (x_i, y)$. The smallest such $t$ is $\sea {x_i} {b_i} (y)$ by definition, so $h (y) \geq (\sea {x_i} {b_i}) (y)$. Since that holds for every $i$ and every $y$, $h \geq f$. \qed \section{Topologies on $\Lform X$} \label{sec:topol-spac-maps} Several results in the sequel will rely on a fine analysis of the (Scott) topology of $\Lform X$ and of the (subspace) topology of $\Lform_\infty X$, $\Lform_\alpha (X, d)$, and a few other subspaces. We shall notably see that, in case $X, d$ is continuous Yoneda-complete, the Scott topology on $\Lform X$, the compact-open topology, and the Isbell topology all coincide. We already know the Scott topology, and recall that we order $\Lform X$ pointwise. The compact-open topology has a subbase of sets of the form $[Q \in V] = \{f \mid f \text{ maps } Q \text{ to }V\}$, for $Q$ a compact subset of the domain and $V$ an open subset of the codomain. In the case of $\Lform X$, the codomain is $\overline{\mathbb{R}}_+$ with its Scott topology, and we shall write $[Q > a]$ for the subbasic open set $\{f \in \Lform X \mid \forall x \in Q . f (x) > a\}$. The Isbell topology has a subbase of sets of the form $N (\mathcal U, V) = \{f \mid f^{-1} (V) \in \mathcal U\}$, where $V$ is open in the codomain and $\mathcal U$ is Scott-open in the lattice of open subsets of the domain. We will prove this result of coincidence of topologies in several steps, concentrating first on the coincidence of the compact-open and Isbell topologies. 
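As a quick concrete check before proceeding, the functions $\sea x b$ of Section~\ref{sec:functions-sea-x} can be computed on a small quasi-metric space. The Python sketch below verifies, for finite $b$, that $(\sea x b) (y) = \max (b - d (x, y), 0)$ is indeed the least $t$ with $b \leq t + d (x, y)$, and that it is $1$-Lipschitz. The point set and the asymmetric quasi-metric are illustrative choices, not taken from the text.

```python
# Sanity check for Definition "sea" on a finite quasi-metric space:
# (sea x b)(y) is the least t with b <= t + d(x, y), i.e. max(b - d(x, y), 0)
# for finite b, and it is 1-Lipschitz: f(y1) <= f(y2) + d(y1, y2).
# The space below is an arbitrary illustration, not taken from the text.

points = [0.0, 0.7, 1.5, 2.2]

def d(x, y):
    # an asymmetric quasi-metric on the line: going up costs 1 per unit,
    # going down costs 2 per unit (the triangle inequality holds)
    return (y - x) if x <= y else 2.0 * (x - y)

def sea(x, b):
    return lambda y: max(b - d(x, y), 0.0)

f = sea(0.7, 1.0)
for y in points:
    t = f(y)
    assert 1.0 <= t + d(0.7, y) + 1e-12           # b <= t + d(x, y)
    if t > 0:
        assert 1.0 > (t - 1e-6) + d(0.7, y)       # no smaller t works
for y1 in points:
    for y2 in points:
        assert f(y1) <= f(y2) + d(y1, y2) + 1e-12  # 1-Lipschitz
print("sea checks passed")
```

The $1$-Lipschitz check is exactly the inequality $f (y_1) \leq f (y_2) + d (y_1, y_2)$ that drives the minimality argument of Lemma~\ref{lemma:sea:min}.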
The compact-open topology is always coarser than the Isbell topology (which is coarser than the Scott topology), and the converse is known to hold when $X$ is consonant \cite{DGL:consonant}. For a subset $Q$ of $X$, let $\blacksquare Q$ be the family of open neighborhoods of $Q$. A space is \emph{consonant} if and only if, given any Scott-open family $\mathcal U$ of open sets, and given any $U \in \mathcal U$, there is a compact saturated set $Q$ such that $U \in \blacksquare Q \subseteq \mathcal U$. Equivalently, if and only if every Scott-open family of opens is a union of sets of the form $\blacksquare Q$, $Q$ compact saturated. The latter generate the compact-open topology on the space $\Open X$ of open subsets of $X$, if we equate $\Open X$ with the family of continuous maps from $X$ to Sierpi\'nski space. Using the same identification, the Isbell topology on $\Open X$ coincides with the Scott topology. The latter Scott topology is always finer than the compact-open topology. A space is consonant if and only if the two topologies coincide. In a locally compact space, every open subset $U$ is the union of the interiors $int (Q)$ of compact saturated subsets of $U$, and that family is directed. It follows immediately that every locally compact space is consonant, as we have already claimed. In the following, a \emph{well-filtered} space is a topological space in which the following property holds (see \cite[Proposition~8.3.5]{JGL-topology}): if a filtered intersection of compact saturated sets $Q_i$ is included in some open set $U$, then some $Q_i$ is already included in $U$. For a $T_0$ locally compact space, well-filteredness is equivalent to the more well-known property of \emph{sobriety} (Propositions~8.3.5 and 8.3.8, loc.cit.). Hence, under the additional assumption of being $T_0$, the following theorem could also be stated thus: every $G_\delta$ subspace of a sober locally compact space is consonant. 
\begin{thm} \label{thm:Gdelta=>cons} Every $G_\delta$ subspace of a well-filtered locally compact space is consonant. \end{thm} Before we start the proof, let us recall that every locally compact space is consonant. But consonance is not preserved under the formation of $G_\delta$ subsets \cite[Proposition~7.3]{DGL:consonant}. We shall also use the following easy fact: in a well-filtered space, the intersection of a filtered family of compact saturated subsets is compact saturated (Proposition~8.3.6, loc.cit.). \proof Let $Y$ be a $G_\delta$ subset of $X$, where $X$ is well-filtered and locally compact. We may write $Y$ as $\bigcap_{n \in \nat} V_n$, where each $V_n$ is open in $X$ and $V_0 \supseteq V_1 \supseteq \cdots \supseteq V_n \supseteq \cdots$. Let $\mathcal U$ be a Scott-open family of open subsets of $Y$, and $U \in \mathcal U$. By the definition of the subspace topology, there is an open subset $\widehat U$ of $X$ such that $\widehat U \cap Y = U$. By local compactness, $\widehat U \cap V_0$ is the union of the directed family of the sets $int (Q)$, where $Q$ ranges over the family $\mathcal Q_0$ of compact saturated subsets of $\widehat U \cap V_0$. We have $\bigcup_{Q \in \mathcal Q_0} int (Q) \cap Y = \widehat U \cap V_0 \cap Y = \widehat U \cap Y = U$. Since $U$ is in $\mathcal U$ and $\mathcal U$ is Scott-open, $int (Q) \cap Y$ is in $\mathcal U$ for some $Q \in \mathcal Q_0$. Let $Q_0$ be this compact saturated set $Q$, $\widehat U_0 = int (Q_0)$, and $U_0 = \widehat U_0 \cap Y$. Note that $U_0 \in \mathcal U$, $\widehat U_0 \subseteq Q_0 \subseteq \widehat U \cap V_0$. Do the same thing with $\widehat U_0 \cap V_1$ instead of $\widehat U \cap V_0$. There is a compact saturated subset $Q_1$ of $\widehat U_0 \cap V_1$ such that $int (Q_1) \cap Y$ is in $\mathcal U$. Then, letting $\widehat U_1 = int (Q_1)$, $U_1 = \widehat U_1 \cap Y$, we obtain that $U_1 \in \mathcal U$, $\widehat U_1 \subseteq Q_1 \subseteq \widehat U_0 \cap V_1$. 
Iterating this construction, we obtain for each $n \in \nat$ a compact saturated subset $Q_n$ and an open subset $\widehat U_n$ of $X$, and an open subset $U_n$ of $Y$, such that $U_n \in \mathcal U$ and $\widehat U_{n+1} \subseteq Q_{n+1} \subseteq \widehat U_n \cap V_{n+1}$. Let $Q = \bigcap_{n \in \nat} Q_n$. Since $X$ is well-filtered, $Q$ is compact saturated. Since $Q \subseteq \bigcap_{n \in \nat} V_n = Y$, $Q$ is a compact saturated subset of $X$ that is included in $Y$, hence a compact saturated subset of $Y$ by Lemma~\ref{lemma:compact:subspace}. We have $Q \subseteq Q_0 \subseteq \widehat U \cap V_0 \subseteq \widehat U$, and $Q \subseteq Y$, so $Q \subseteq \widehat U \cap Y = U$. Therefore $U \in \blacksquare Q$. For every $W \in \blacksquare Q$, write $W$ as the intersection of some open subset $\widehat W$ of $X$ with $Y$. Since $Q = \bigcap_{n \in \nat} Q_n \subseteq \widehat W$, by well-filteredness some $Q_n$ is included in $\widehat W$. Hence $\widehat U_n \subseteq Q_n \subseteq \widehat W$. Taking intersections with $Y$, $U_n \subseteq W$. Since $U_n$ is in $\mathcal U$, so is $W$. \qed Every continuous dcpo is sober in its Scott topology \cite[Proposition~8.2.12~$(b)$]{JGL-topology} ---hence well-filtered---, and also locally compact (Corollary~5.1.36, loc.cit.). Hence when $X, d$ is a continuous Yoneda-complete quasi-metric space, i.e., when $X, d$ is standard and $\mathbf B (X, d)$ is a continuous dcpo, then $\mathbf B (X, d)$ is well-filtered and locally compact. Recall that every Yoneda-complete space is standard, so that Lemma~\ref{lemma:Veps} applies, and we obtain that $X$ is a $G_\delta$-subset of $\mathbf B (X, d)$. Hence Theorem~\ref{thm:Gdelta=>cons} immediately yields: \begin{prop} \label{prop:cont=>cons} Every continuous Yoneda-complete quasi-metric space $X, d$ is consonant in its $d$-Scott topology.
\qed \end{prop} \begin{rem} The \emph{Sorgenfrey line} $\mathbb{R}_\ell$ is $\mathbb{R}, \dRl$, where $\dRl (x, y)$ is equal to $+\infty$ if $x > y$, and to $y-x$ if $x \leq y$ \cite{KW:formal:ball}. Its specialization ordering $\leq^{\dRl}$ is equality. Its open ball topology is the topology generated by the half-open intervals $[x, x+r)$, and is a well-known counterexample in topology. In particular, $\mathbb{R}_\ell$ is not consonant in its open ball topology \cite{AC:consonant}. However, $\mathbb{R}_\ell$ is a continuous Yoneda-complete quasi-metric space, since the map $(x, r) \mapsto (-x-r, x)$ is an order isomorphism from $\mathbf B (\mathbb{R}, \dRl)$ to the continuous dcpo $C_\ell = \{(a, b) \in \mathbb{R}^2 \mid a+b \leq 0\}$. Proposition~\ref{prop:cont=>cons} implies that $\mathbb{R}_\ell$ \emph{is} consonant in the $\dRl$-Scott topology. You may confirm this by checking that the latter topology is just the usual metric topology on $\mathbb{R}$, which is locally compact hence consonant. \end{rem} The topological coproduct of two consonant spaces is not in general consonant \cite[Example~6.12]{NS:consonant}. I do not know whether, given a consonant space $X$, its $n$th \emph{copower} $n \odot X$ (the coproduct of $n$ copies of $X$) is consonant, but that seems unlikely. \begin{defi}[$\odot$-consonant] \label{defn:realcons} A topological space $X$ is called \emph{$\odot$-consonant} if and only if, for every $n \in \nat$, $n \odot X$ is consonant. \end{defi} In particular, every $\odot$-consonant space is consonant. Since coproducts of locally compact spaces are locally compact, every locally compact space is $\odot$-consonant. 
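Coming back to the Sorgenfrey remark above, the order isomorphism $(x, r) \mapsto (-x-r, x)$ from $\mathbf B (\mathbb{R}, \dRl)$ to $C_\ell$ can be tested numerically. The following Python sketch samples random formal balls; it is a brute-force illustration, of course, not a proof:

```python
# Check that (x, r) |-> (-x - r, x) turns the formal-ball order on
# B(R, dRl) (Sorgenfrey quasi-metric) into the componentwise order on
# C_l = {(a, b) | a + b <= 0}.  Random sampling only, for illustration.
import random

INF = float("inf")

def dRl(x, y):
    return (y - x) if x <= y else INF

def ball_le(x, r, y, s):
    # (x, r) <= (y, s) in B(R, dRl)  iff  dRl(x, y) <= r - s
    return dRl(x, y) <= r - s

def phi(x, r):
    return (-x - r, x)

random.seed(0)
for _ in range(10000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    r, s = random.uniform(0, 5), random.uniform(0, 5)
    a, b = phi(x, r)
    assert a + b <= 0                          # phi lands in C_l
    c, e = phi(y, s)
    assert ball_le(x, r, y, s) == (a <= c and b <= e)
print("isomorphism check passed")
```

Unwinding the definitions explains the check: $(x, r) \leq^{\dRl^+} (y, s)$ means $x \leq y$ and $y - x \leq r - s$, which is exactly $-x-r \leq -y-s$ and $x \leq y$.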
\begin{lem} \label{lemma:coprod:cont} For every two continuous Yoneda-complete quasi-metric spaces $X, d$ and $Y, \partial$, the quasi-metric space $X+Y$, with quasi-metric $d+\partial$ defined by $(d+\partial) (x, x') = d (x, x')$ if $x, x' \in X$, $(d+\partial) (y, y') = \partial (y, y')$ if $y, y' \in Y$, $(d + \partial) (z, z') = +\infty$ in all other cases, is continuous Yoneda-complete. With the $(d+\partial)$-Scott topology, $X+Y$ is the topological coproduct of $X$ (with the $d$-Scott topology) and $Y$ (with the $\partial$-Scott topology). \end{lem} \proof It is easy to see that $\mathbf B (X+Y, d+\partial)$ is the poset coproduct of $\mathbf B (X, d)$ and of $\mathbf B (Y, \partial)$: $(x, r) \leq^{(d+\partial)^+} (x', r')$ iff $(x, r) \leq^{d^+} (x', r')$ if $x, x' \in X$, $(y, s) \leq^{(d+\partial)^+} (y', s')$ iff $(y, s) \leq^{\partial^+} (y', s')$ if $y, y' \in Y$, and for $x \in X$ and $y \in Y$, $(x, r)$ and $(y, s)$ are incomparable. The poset coproduct of two continuous dcpos is a continuous dcpo, so $\mathbf B (X+Y, d+\partial)$ is a continuous dcpo, which shows the first part of the lemma. Also, the Scott topology on a poset coproduct is the coproduct topology, which shows the second part of the lemma. \qed Together with Proposition~\ref{prop:cont=>cons}, this yields the following corollary. \begin{prop} \label{prop:cont=>realcons} Every continuous Yoneda-complete quasi-metric space $X, d$ is $\odot$-consonant in its $d$-Scott topology. \end{prop} \proof For every $n \in \nat$, $n \odot X$ is continuous Yoneda-complete, hence consonant. \qed \begin{prop} \label{prop:co=Scott} Let $X$ be a $\odot$-consonant space, for example, a continuous Yoneda-complete quasi-metric space. The compact-open topology on $\Lform X$ is equal to the Scott topology on $\Lform X$. \end{prop} \proof Let $f [Q]$ denote the image of $Q$ by $f$. 
Write $[Q > a] = \{f \in \mathcal L X \mid f [Q] \subseteq ]a, +\infty]\} = \{f \in \mathcal L X \mid Q \subseteq f^{-1} (]a, +\infty])\}$, where $Q$ is compact saturated in $X$ and $a \in \mathbb{R}$, for the subbasic open subsets of the compact-open topology. It is easy to see that $[Q > a]$ is Scott-open, so that the Scott topology is finer than the compact-open topology. In the reverse direction, let $\mathcal W$ be a Scott-open subset of $\Lform X$, $f \in \mathcal W$. Our task is to find an open neighborhood of $f$ in the compact-open topology that would be included in $\mathcal W$. The function $f$ is the pointwise supremum of a chain of so-called \emph{step functions}, of the form $\frac 1 {2^N} \sum_{i=1}^{N2^N} \chi_{U_i}$, where $U_1$, $U_2$, \ldots, $U_{N2^N}$ are open subsets, and $N \in \nat$. Concretely, $U_i = f^{-1} (]i/2^N, +\infty])$. Hence some $\frac 1 {2^N} \sum_{i=1}^{N2^N} \chi_{U_i}$ is in $\mathcal W$, where we can even require $U_1 \supseteq U_2 \supseteq \cdots \supseteq U_{N2^N}$. Let $\mathcal V$ be the set of $N2^N$-tuples $(V_1, V_2, \ldots, V_{N2^N})$ of open subsets of $X$ such that $\frac 1 {2^N} \sum_{i=1}^{N2^N} \chi_{V_i}$ is in $\mathcal W$. Ordering those tuples componentwise, where each open is compared through inclusion, $\mathcal V$ is Scott-open. Equating those tuples with open subsets of $N2^N \odot X$ allows us to apply $\odot$-consonance: there is a compact saturated subset of $N2^N \odot X$, or equivalently, there are $N2^N$ compact saturated subsets $Q_1$, $Q_2$, \ldots, $Q_{N2^N}$ of $X$ such that $Q_i \subseteq U_i$ for each $i$, and such that for all open neighborhoods $V_i$ of $Q_i$, $1\leq i\leq N2^N$, $\frac 1 {2^N} \sum_{i=1}^{N2^N} \chi_{V_i}$ is in $\mathcal W$. Clearly, $f$ is in $\bigcap_{i=1}^{N2^N} [Q_i > i/2^N]$: for each $i$, every $x \in Q_i$ is in $U_i = f^{-1} (]i/2^N, +\infty])$, so $f (x) > i/2^N$. Consider any $g \in \bigcap_{i=1}^{N2^N} [Q_i > i/2^N]$. Let $V_i = g^{-1} (]i/2^N, +\infty])$.
By definition, $Q_i$ is included in $V_i$ for every $i$, so $\frac 1 {2^N} \sum_{i=1}^{N2^N} \chi_{V_i}$ is in $\mathcal W$. For every $x \in X$, let $i$ be the smallest number between $0$ and $N2^N-1$ such that $g (x) \leq (i+1)/2^N$, and let $i = N2^N$ if there is none; if $i \geq 1$, then $g (x) > i/2^N$ by minimality of $i$. Then $x$ is in $V_1$, $V_2$, \ldots, $V_i$, and not in $V_{i+1}$, \ldots, $V_{N2^N}$, hence is mapped to $i/2^N$ by $\frac 1 {2^N} \sum_{j=1}^{N2^N} \chi_{V_j}$. That is less than or equal to $g (x)$. It follows that $g$ is pointwise above $\frac 1 {2^N} \sum_{i=1}^{N2^N} \chi_{V_i}$, and is therefore also in $\mathcal W$, finishing the proof that $\bigcap_{i=1}^{N2^N} [Q_i > i/2^N]$ is included in $\mathcal W$. \qed \section{Topologies on $\Lform_\alpha (X, d)$ and $\Lform_\alpha^a (X, d)$} \label{sec:topol-lform_-x} The topology we have taken on $\Lform_\alpha (X, d)$ and on $\Lform_\alpha^a (X, d)$ is the subspace topology from $\Lform X$ (with its Scott topology). Proposition~\ref{prop:co=Scott} immediately implies that, when $X, d$ is continuous Yoneda-complete, the topologies of $\Lform_\alpha (X, d)$ and $\Lform_\alpha^a (X, d)$ are the compact-open topology, i.e., the topology generated by the subsets of the form $\{f \in \Lform_\alpha X \mid f [Q] \subseteq ]a, +\infty]\}$ (resp., $\{f \in \Lform_\alpha^a X \mid f [Q] \subseteq ]a, +\infty]\}$), $Q$ compact saturated in $X$, and which we again write as $[Q > a]$. Let us refine our understanding of the shape of compact saturated subsets $Q$ of $X$. Recall that we write $B^d_{x, \leq r}$ for the closed ball $\{y \in X \mid d (x, y) \leq r\}$, and beware that, despite their names, closed balls are not closed. The topology of pointwise convergence, on any set of functions from $X$ to $Y$, is the subspace topology from the ordinary product topology on $Y^X$. It is always coarser than the compact-open topology.
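The warning above that closed balls need not be closed is easy to see on a toy example: for the quasi-metric associated with a poset ($d_\leq (x, y) = 0$ if $x \leq y$, $+\infty$ otherwise), closed balls are upward closures, while the closed sets of the Scott topology of a finite poset are the downward-closed sets. A minimal Python check, on an illustrative three-point chain:

```python
# "Closed balls are not closed": take the chain 0 < 1 < 2 with the
# quasi-metric d_<=(x, y) = 0 if x <= y, +inf otherwise.  Closed balls
# are upward closures, while the closed sets of the Scott topology of
# a finite chain are exactly the downward-closed sets.
INF = float("inf")
chain = [0, 1, 2]

def d(x, y):
    return 0.0 if x <= y else INF

def closed_ball(x, r):
    return {y for y in chain if d(x, y) <= r}

def is_down_set(S):
    # closed in the Scott topology of a finite poset = downward-closed
    return all(y in S for y in chain for z in S if y <= z)

B = closed_ball(1, 0.5)
assert B == {1, 2}           # the upward closure of 1
assert not is_down_set(B)    # hence not closed in the Scott topology
print("closed ball", B, "is not topologically closed")
```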
When $Y = \overline{\mathbb{R}}_+$ or $Y = [0, a]$, with its Scott topology, a subbase for the pointwise topology is given by the subsets $[x > r] = \{f \mid f (x) > r\}$, where $x \in X$ and $r \in \mathbb{R}_+$. \begin{prop} \label{prop:cont:Lalpha:pw} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. For every $\alpha \in \mathbb{R}_+$, the topology of $\Lform_\alpha (X, d)$ (resp., $\Lform_\alpha^a (X, d)$, for any $a > 0$) coincides with the topology of pointwise convergence. \end{prop} \proof We already know that the topology of $\Lform_\alpha (X, d)$, and of $\Lform_\alpha^a (X, d)$, is the compact-open topology, as a consequence of Proposition~\ref{prop:co=Scott}. It therefore suffices to show that every subset $[Q > r]$ of $\Lform_\alpha (X, d)$ or $\Lform_\alpha^a (X, d)$, $Q$ compact saturated in $X$ and $r \in \mathbb{R}_+$, is open in the topology of pointwise convergence. We assume without loss of generality that $Q$ is non-empty. Let $f \in \Lform_\alpha (X, d)$, resp.\ $f \in \Lform_\alpha^a (X, d)$, be an arbitrary element of $[Q > r]$. Since $f$ is lower semicontinuous, i.e., continuous from $X$ to $\overline{\mathbb{R}}_+$ (resp., to $[0, \alpha a]$), the image of $Q$ by $f$ is compact, and its upward closure is compact saturated. The non-empty compact saturated subsets of $\overline{\mathbb{R}}_+$ (resp., $[0, \alpha a]$), in its Scott topology, are the intervals $[b, +\infty]$, $b \in \overline{\mathbb{R}}_+$ (resp., $[b, \alpha a]$, $b \in [0, \alpha a]$); hence the upward closure of the image of $Q$ by $f$ is of this form, and $b$ is necessarily the minimum value attained by $f$ on $Q$, $\min_{x \in Q} f (x)$. Since $f \in [Q > r]$, we must have $b > r$, and therefore there is an $\eta > 0$ such that $b > r+\eta$. Let $\epsilon > 0$ be such that $\alpha \epsilon \leq \eta$. Put in a simpler form, let $\epsilon = \eta/\alpha$ if $\alpha\neq 0$, otherwise let $\epsilon > 0$ be arbitrary.
Apply Lemma~\ref{lemma:cont:Q}, and find finitely many closed balls $B_{x_i, \leq r_i}$, $1\leq i\leq n$, included in $U = f^{-1} (]r+\eta, +\infty])$, with $r_i < \epsilon$, and whose union contains $Q$. Now consider $W = \bigcap_{i=1}^n [x_i > r + \eta]$. Since each $x_i$ is in $U$, $f$ is in $W$. For every $g \in \Lform_\alpha (X, d)$ (resp., $g \in \Lform_\alpha^a (X, d)$) that is in $W$, we claim that $g$ is in $[Q > r]$. For every $x \in Q$, there is an index $i$ such that $x \in B_{x_i, \leq r_i}$, and since $g$ is $\alpha$-Lipschitz, $g (x_i) \leq g (x) + \alpha r_i$. By assumption $g (x_i) > r+\eta$, and $r_i < \epsilon$, so $g (x) > r$, proving the claim. Since $W$ is open in the topology of pointwise convergence, we have proved that $[Q > r]$ is a neighborhood of each of its elements in the topology of pointwise convergence, hence is open in that topology, which proves the result. \qed We shall now use some results in the theory of stably compact spaces \cite[Chapter~9]{JGL-topology}. A \emph{stably compact} space is a sober, locally compact, compact and coherent space, where coherence means that the intersection of any two compact saturated subsets is compact. For a compact Hausdorff space $Z$ equipped with an ordering $\preceq$ whose graph is closed in $Z^2$ (a so-called \emph{compact pospace}), the space $Z$ with the upward topology, whose opens are by definition the open subsets of $Z$ that are upwards-closed with respect to $\preceq$, is stably compact. More: all stably compact spaces can be obtained this way. If $X$ is stably compact, then one can form a second topology, the \emph{cocompact} topology, whose closed subsets are the compact saturated subsets of the original space. With the cocompact topology, we obtain a space called the \emph{de Groot dual} of $X$, $X^\dG$, which is also stably compact. We have $X^{\dG\dG} = X$.
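On a finite example, all of this can be computed by brute force. The Python sketch below takes the four-point chain with its upper topology (an illustrative choice): every subset of a finite space is compact, the saturated sets are the upward-closed ones, and one checks that dualizing twice returns the original topology, and that the join of the two topologies is discrete, as expected of the patch topology of a finite pospace.

```python
# De Groot duality on a toy example: the chain 0 < 1 < 2 < 3 with the
# upper topology (opens = upward-closed sets).  In a finite space every
# subset is compact, so the closed sets of the dual are the saturated
# sets; dualizing twice returns the original topology, and the join of
# both topologies (the patch topology) is discrete.
from itertools import combinations

X = [0, 1, 2, 3]
subsets = [frozenset(c) for n in range(len(X) + 1) for c in combinations(X, n)]

def dual(opens):
    # specialization preorder: x <= y iff every open containing x contains y
    def le(x, y):
        return all(y in U for U in opens if x in U)
    # saturated = upward-closed for <=; in a finite space these are exactly
    # the compact saturated sets, i.e. the closed sets of the de Groot dual
    sat = {S for S in subsets if all(y in S for x in S for y in X if le(x, y))}
    return {frozenset(set(X) - S) for S in sat}

opens = {S for S in subsets if all(y in S for x in S for y in X if y >= x)}
cocompact = dual(opens)                  # opens of X^dG: the down-sets
assert dual(cocompact) == opens          # X^{dG dG} = X
patch_base = {U & V for U in opens for V in cocompact}
assert all(frozenset({x}) in patch_base for x in X)   # patch is discrete
print("de Groot duality checks passed")
```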
Also, with the join of the original topology on $X$ and of the cocompact topology, one obtains a compact Hausdorff space $X^\patch$, the \emph{patch space} of $X$. Together with the specialization ordering $\leq$ of $X$, $X^\patch$ is then a compact pospace. Moreover, passing from $X$ to its compact pospace, and conversely, are mutually inverse operations. We shall need to go through the following auxiliary notion. \begin{defi}[$L_\alpha (X, d)$, $L_\alpha^a (X, d)$] \label{defn:Lalpha:notcont} Let $L_\alpha (X, d)$ denote the space of all $\alpha$-Lipschitz (not necessarily $\alpha$-Lipschitz continuous) maps from $X, d$ to $\overline{\mathbb{R}}_+$. Let also $L_\alpha^a (X, d)$ be the subspace of all $h \in L_\alpha (X, d)$ such that $h \leq \alpha a$. \end{defi} Please pay attention to the change of font compared with $\Lform_\alpha (X, d)$. $\overline{\mathbb{R}}_+$ (resp., $[0, \alpha a]$), with its Scott topology, is stably compact, and its patch space $\overline{\mathbb{R}}_+^\patch$ (resp., $[0, \alpha a]^\patch$) has the usual Hausdorff topology. Since the product of stably compact spaces is stably compact (see \cite[Proposition~9.3.1]{JGL-topology}), $\overline{\mathbb{R}}_+^X$ (resp., $[0, \alpha a]^X$) is stably compact. Moreover, the patch operation commutes with products, so $(\overline{\mathbb{R}}_+^X)^\patch = (\overline{\mathbb{R}}_+^\patch)^X$ (resp., $([0, \alpha a]^X)^\patch = ([0, \alpha a]^\patch)^X$), and the specialization ordering is pointwise. The subset $L_\alpha (X, d)$ (resp., $L_\alpha^a (X, d)$) is then patch-closed in $\overline{\mathbb{R}}_+^X$ (resp., $[0, \alpha a]^X$), meaning that it is closed in $(\overline{\mathbb{R}}_+^X)^\patch = (\overline{\mathbb{R}}_+^\patch)^X$ (resp., in $([0, \alpha a]^X)^\patch = ([0, \alpha a]^\patch)^X$). Indeed, $L_\alpha (X, d) = \{f \in \overline{\mathbb{R}}_+^X \mid \forall x, y \in X . 
f (x) \leq f (y) + \alpha d (x, y)\}$ is the intersection of the sets $\{f \in \overline{\mathbb{R}}_+^X \mid f (x) \leq f (y) + \alpha d (x, y)\}$, $x, y \in X$, and each is patch-closed, because the graph of $\leq$ is closed in $(\overline{\mathbb{R}}_+^\patch)^2$ and the maps $f \mapsto f (x)$ are continuous from $(\overline{\mathbb{R}}_+^\patch)^X$ to $\overline{\mathbb{R}}_+^\patch$---similarly with $[0, \alpha a]$ in lieu of $\overline{\mathbb{R}}_+$ and $L_\alpha^a (X, d)$ instead of $L_\alpha (X, d)$. It is well-known (Proposition~9.3.4, loc.cit.) that the patch-closed subsets $C$ of stably compact spaces $Y$ are themselves stably compact, and that the topology of $C^\patch$ is the subspace topology of $Y^\patch$. It follows that: \begin{lem} \label{lemma:cont:Lalpha:patchclos} Let $X, d$ be a continuous quasi-metric space. $L_\alpha (X, d)$ and $L_\alpha^a (X, d)$, for every $a \in \mathbb{R}_+$, $a > 0$, are stably compact in the topology of pointwise convergence. \qed \end{lem} \begin{lem} \label{lemma:cont:Lalpha:retr} Let $X, d$ be a continuous quasi-metric space, $\alpha \in \mathbb{R}_+$, and $a \in \mathbb{R}_+$, $a > 0$. Let $\rho_\alpha$ be the map $f \in L_\alpha (X, d) \mapsto f^{(\alpha)} \in \Lform_\alpha (X, d)$. 
Then: \begin{enumerate} \item $\rho_\alpha$ is continuous from $L_\alpha (X, d)$ to $\Lform_\alpha (X, d)$ (resp., from $L_\alpha^a (X, d)$ to $\Lform_\alpha^a (X, d)$), both spaces being equipped with the topology of pointwise convergence; \item $\Lform_\alpha (X, d)$ is a retract of $L_\alpha (X, d)$ (resp., $\Lform_\alpha^a (X, d)$ is a retract of $L_\alpha^a (X, d)$), with $\rho_\alpha$ the retraction and where inclusion serves as section; again both spaces are assumed to have the topology of pointwise convergence here; \item $\Lform_\alpha (X, d)$ and $\Lform_\alpha^a (X, d)$, with the topology of pointwise convergence, are stably compact; \item If $X, d$ is also Yoneda-complete, then $\Lform_\alpha (X, d)$ and $\Lform_\alpha^a (X, d)$ are stably compact (in the subspace topology of the Scott topology on $\Lform X$). \end{enumerate} \end{lem} \proof Recall from Lemma~\ref{lemma:largestLipcont} that $f^{(\alpha)} (x) = \sup_{(y, s) \ll (x, 0)} (f (y) - \alpha s)$. (1) The inverse image of $[x > r]$ by $\rho_\alpha$ is the set of $\alpha$-Lipschitz maps $f$ such that, for some $(y, s) \ll (x, 0)$, $f (y) - \alpha s > r$, hence is the union of the open sets $[y > r+\alpha s]$ over the elements $(y, s)$ way-below $(x, 0)$. (2) If $\iota$ denotes the inclusion of $\Lform_\alpha (X, d)$ into $L_\alpha (X, d)$ (resp., of $\Lform_\alpha^a (X, d)$ into $L_\alpha^a (X, d)$), we must show that $\rho_\alpha \circ \iota$ is the identity map. For every $f \in \Lform_\alpha (X, d)$, $\rho_\alpha (\iota (f)) = f^{(\alpha)}$. Since $f$ is already $\alpha$-Lipschitz continuous, $f^{(\alpha)} = f$, and we conclude. (3) follows from Lemma~\ref{lemma:cont:Lalpha:patchclos}, from (2), and the fact that retracts of stably compact spaces are stably compact (see Proposition~9.2.3 in loc.cit.; the result is due to Jimmie Lawson \cite[Proposition, bottom of p.153, and subsequent discussion]{Lawson:versatile}). (4) follows from (3) and Proposition~\ref{prop:cont:Lalpha:pw}. 
\qed The proof of (3) above is inspired by a similar argument due to Achim Jung \cite{Jung:scs:prob}. \begin{lem} \label{lemma:cont:Lalpha:compact} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, and $\alpha \in \mathbb{R}_+$. For every center point $x$ in $X$ and every $b \in \mathbb{R}_+$, the set $[x \geq b]$ of all $f \in \Lform_\alpha (X, d)$ (resp., $f \in \Lform_\alpha^a (X, d)$) such that $f (x) \geq b$ is compact saturated in $\Lform_\alpha (X, d)$ (resp., in $\Lform_\alpha^a (X, d)$, for every $a \in \mathbb{R}_+$, $a > 0$). \end{lem} \proof We deal with the case of $\Lform_\alpha (X, d)$. The case of $\Lform_\alpha^a (X, d)$ is entirely similar. Let us start with the following observation. The projection map $\pi_x \colon L_\alpha (X, d) \to \overline{\mathbb{R}}_+$ that maps $f$ to $f (x)$ is patch-continuous, that is, continuous from $L_\alpha^\patch (X, d)$ to $\overline{\mathbb{R}}_+^\patch$, where $L_\alpha (X, d)$ has the topology of pointwise convergence. (Beware: $L_\alpha$, not $\Lform_\alpha$.) The reason was given before Lemma~\ref{lemma:cont:Lalpha:patchclos}: the map $f \mapsto f (x)$ is continuous from $(\overline{\mathbb{R}}_+^\patch)^X$ to $\overline{\mathbb{R}}_+^\patch$, and therefore restricts to a continuous map on $L_\alpha^\patch (X, d)$, which is a subspace of $(\overline{\mathbb{R}}_+^\patch)^X$, since $L_\alpha (X, d)$ is patch-closed in the product space $\overline{\mathbb{R}}_+^X$. Since $[b, +\infty]$ is closed in $\overline{\mathbb{R}}_+^\patch$, $\pi_x^{-1} ([b, +\infty])$ is also closed in $L_\alpha (X, d)^\patch$. It is clearly upwards-closed, and in any stably compact space $Y$, the closed upwards-closed subsets of $Y^\patch$ are the compact saturated subsets of $Y$ (see \cite[Proposition~9.1.20]{JGL-topology}): so $\pi_x^{-1} ([b, +\infty])$ is compact saturated in $L_\alpha (X, d)$. 
Note that $\pi_x^{-1} ([b, +\infty])$ is the set of all $\alpha$-Lipschitz (not necessarily $\alpha$-Lipschitz continuous) maps $f$ such that $f (x) \geq b$. We claim that its image by $\rho_\alpha$ is exactly $[x \geq b]$. This will prove that the latter is compact as well. The fact that it is upwards-closed (saturated) is obvious. For every $f \in \pi_x^{-1} ([b, +\infty])$, $\rho_\alpha (f) (x)$ is clearly less than or equal to $f (x)$. Because $x$ is a center point, for all $y$, $r$, and $s$, $(x, r) \ll (y, s)$ is equivalent to $d (x, y) < r-s$; in particular, $(x, \epsilon) \ll (x, 0)$ for every $\epsilon > 0$. Therefore $\rho_\alpha (f) (x) \geq f (x) - \alpha \epsilon$. Since $\epsilon$ is arbitrary, $\rho_\alpha (f) (x) = f (x)$, and since $f (x) \geq b$, $\rho_\alpha (f) (x) \geq b$. We have proved that $\rho_\alpha (f)$ is in $[x \geq b]$. Conversely, every $f \in [x \geq b]$ is of the form $\rho_\alpha (g)$ for some $g \in \pi_x^{-1} ([b, +\infty])$, namely $g=f$, because $\rho_\alpha$ is a retraction. Hence $[x \geq b]$ is the image of the compact set $\pi_x^{-1} ([b, +\infty])$, as claimed. Our whole argument works provided all considered spaces of functions have the topology of pointwise convergence. We use Proposition~\ref{prop:cont:Lalpha:pw} to conclude. \qed \begin{rem} \label{rem:cont:Lalpha:compact} Lemma~\ref{lemma:cont:Lalpha:compact} does not hold for non-center points $x$, as we now illustrate. Let $X = \nat_\omega$ be the dcpo obtained by adding a top element $\omega$ to $\nat$, with its usual ordering $\leq$. We consider it as a quasi-metric space with the quasi-metric $d_\leq$. On formal balls, $(x, r) \leq^{d_\leq} (y, s)$ if and only if $x \leq y$ and $r \geq s$, hence its space of formal balls is isomorphic to the continuous dcpo $\nat_\omega \times ]-\infty, 0]$, through the map $(x, r) \mapsto (x, -r)$. We consider the point $x = \omega$, and $b=1$, say.
The elements of $\Lform_\alpha (X, d)$ are exactly the Scott-continuous maps, so $[x \geq b]$ is the set of Scott-continuous maps $f \colon \nat_\omega \to \overline{\mathbb{R}}_+$ such that $f (\omega) \geq 1$. Let us pick a fixed element $f$ of $[x \geq b]$, for example, the constant function with value $+\infty$. For every $n \in \nat$, let $f_n \colon X \to \overline{\mathbb{R}}_+$ be defined by $f_n (m) = 0$ if $m \leq n$, $f_n (m) = f (m)$ otherwise. The family ${(f_n)}_{n \in \nat}$ is a decreasing sequence of elements of $[x \geq b]$. If $[x \geq b]$ were compact, then the intersection of the decreasing family of closed sets $\dc f_n$, each of which meets $[x \geq b]$ at $f_n$, would intersect $[x \geq b]$; but the only element in that intersection is the constant zero map, which does not belong to $[x \geq b]$. \end{rem} \begin{cor} \label{corl:cont:Lalpha:open} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, $\alpha \in \mathbb{R}_+$, and $a \in \mathbb{R}_+$, $a > 0$. \begin{enumerate} \item For every center point $x$ in $X$ and every $b \in \mathbb{R}_+$, the set $[x < b]$ of all $f \in \Lform_\alpha (X, d)$ (resp., $f \in \Lform_\alpha^a (X, d)$) such that $f (x) < b$ is open and downwards-closed in $\Lform_\alpha (X, d)^\patch$ (resp., in $\Lform_\alpha^a (X, d)^\patch$). \item For every center point $x \in X$, the map $f \mapsto f (x)$ is continuous from $\Lform_\alpha (X, d)^\dG$ to $(\overline{\mathbb{R}}_+)^\dG$, and from $\Lform_\alpha^a (X, d)^\dG$ to $[0, \alpha a]^\dG$. \end{enumerate} \end{cor} \proof (1) $[x < b]$ is the complement of $[x \geq b]$. Since $[x \geq b]$ is compact saturated (Lemma~\ref{lemma:cont:Lalpha:compact}), it is closed and upwards-closed in $\Lform_\alpha (X, d)^\patch$ (resp., in $\Lform_\alpha^a (X, d)^\patch$). (2) is just a reformulation of (1). \qed \begin{cor} \label{corl:cont:Lalpha:simple} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, $\alpha \in \mathbb{R}_+$, and $a \in \mathbb{R}_+$, $a > 0$.
For every $n \in \nat$, for all $a_1, \ldots, a_n \in \mathbb{R}_+$ and every $n$-tuple of center points $x_1$, \ldots, $x_n$ in $X, d$, the maps: \begin{enumerate} \item $f \mapsto \sum_{i=1}^n a_i f (x_i)$, \item $f \mapsto \max_{i=1}^n f (x_i)$, \item $f \mapsto \min_{i=1}^n f (x_i)$ \end{enumerate} are continuous from $\Lform_\alpha (X, d)^\dG$ (resp., $\Lform_\alpha^a (X, d)^\dG$) to $(\overline{\mathbb{R}}_+)^\dG$. \end{cor} \proof By Corollary~\ref{corl:cont:Lalpha:open}, using the fact that scalar multiplication, addition, and the binary operations $\min$ and $\max$ are continuous on $(\overline{\mathbb{R}}_+)^\dG$. The latter means that those operations are monotonic and preserve filtered infima. \qed The same argument also shows the following, and an endless variety of similar results. \begin{cor} \label{corl:cont:Lalpha:simple:2} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, $\alpha \in \mathbb{R}_+$, and $a \in \mathbb{R}_+$, $a > 0$. For every family of non-negative reals $a_{ij}$ and of center points $x_{ij}$, $1\leq i \leq m$, $1\leq j\leq n_i$, the maps: \begin{enumerate} \item $f \mapsto \min_{i=1}^m \sum_{j=1}^{n_i} a_{ij} f (x_{ij})$, \item $f \mapsto \max_{i=1}^m \sum_{j=1}^{n_i} a_{ij} f (x_{ij})$, \end{enumerate} are continuous from $\Lform_\alpha (X, d)^\dG$ (resp., $\Lform_\alpha^a (X, d)^\dG$) to $(\overline{\mathbb{R}}_+)^\dG$. \end{cor} \section{Topologies on $\Lform_\infty (X, d)$ Determined by $\Lform_\alpha (X, d)$, $\alpha > 0$} \label{sec:topol-lform_-x-1} Recall that $\Lform_\infty (X, d) = \bigcup_{\alpha \in \mathbb{R}_+} \Lform_\alpha (X, d)$. As usual, we equip $\Lform_\infty (X, d)$ with the subspace topology from $\Lform X$, the latter having the Scott topology. A nasty, subtle issue that we will have to face is the following. Imagine we have a function $F \colon \Lform_\infty (X, d) \to \overline{\mathbb{R}}_+$, and we wish to show that $F$ is continuous.
One might think of showing that the restriction of $F$ to $\Lform_\alpha (X, d)$ is continuous to that end. This is certainly necessary, but by no means sufficient. For sufficiency, we would need the topology of $\Lform_\infty (X, d)$ to be \emph{coherent with}, or \emph{determined by}, the topologies of its subspaces $\Lform_\alpha (X, d)$, $\alpha \in \mathbb{R}_+$. The name ``coherent with'' is unfortunate, for a coherent space also means a space in which the intersection of any two compact saturated subsets is compact, as we have already mentioned. We shall therefore prefer ``determined by''. Let us recall some basic facts about topologies determined by families of subspaces. Fix a topological space $Y$, and a chain of subspaces $Y_i$, $i \in I$, where $I$ is equipped with some ordering $\leq$, in such a way that $i \leq j$ implies $Y_i \subseteq Y_j$. Let also $e_{ij} \colon Y_i \to Y_j$ and $e_i \colon Y_i \to Y$ be the canonical embeddings, and assume that $Y = \bigcup_{i \in I} Y_i$. Then there is a unique topology $\Open_c$ on $Y$ that is determined by the topologies of the subspaces $Y_i$, $i \in I$. We call it the \emph{determined topology} on $Y$. Standard names for such a topology include the \emph{weak topology} and the \emph{inductive topology}. Its open subsets are the subsets $A$ of $Y$ such that $A \cap Y_i$ is open in $Y_i$ for every $i \in I$. In particular, every open subset of $Y$, with its original topology, is open in $\Open_c$. Categorically, $Y$ with the $\Open_c$ topology, together with the maps $e_i \colon Y_i \to Y$, $i \in I$, is the inductive limit, a.k.a., the colimit, of the diagram given by the arrows $\xymatrix{Y_i \ar[r]^{e_{ij}} & Y_j}$, $i \leq j$. It is known that the topology of $Y$ is automatically determined by those of the subspaces $Y_i$, $i \in I$, in the following cases: when every $Y_i$ is open; when every $Y_i$ is closed and the cover ${(Y_i)}_{i \in I}$ is locally finite; when $Y$ has the discrete topology.
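The defining condition of $\Open_c$ (a set $A$ is open if and only if $A \cap Y_i$ is open in $Y_i$ for every $i$) can be checked by brute force on a finite example. The following sketch, in which the space, the poset, and the cover are all hypothetical choices of ours, illustrates the first automatic case above: when every member of the cover is open, the determined topology agrees with the original one.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Alexandrov (up-set) topology of the hypothetical poset 0 < 2, 1 < 3
# on Y = {0, 1, 2, 3}: the opens are exactly the upwards-closed sets.
Y = frozenset({0, 1, 2, 3})
up = {0: {0, 2}, 1: {1, 3}, 2: {2}, 3: {3}}
opens = {A for A in powerset(Y) if all(up[x] <= A for x in A)}

def subspace_topology(topology, S):
    # Opens of the subspace S are the traces on S of the opens of Y.
    return {A & S for A in topology}

# A cover of Y by two open subspaces.
cover = [frozenset({0, 2}), frozenset({1, 3})]
sub_tops = [subspace_topology(opens, S) for S in cover]

# Determined topology: A is open iff A ∩ Y_i is open in Y_i for every i.
determined = {A for A in powerset(Y)
              if all((A & S) in T for S, T in zip(cover, sub_tops))}

print(determined == opens)  # True: the cover is open, both topologies agree
```

This also makes the inclusion mentioned in the text visible: every original open trivially passes the trace test, so the determined topology is always at least as fine as the original one.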
The case of $\Lform_\infty (X, d)$ and its subspaces $\Lform_\alpha (X, d)$ falls into none of those subcases. Here is another case of a determined topology. \begin{prop} \label{prop:Y:det:ep} Let ${(Y_i)}_{i \in I, \sqsubseteq}$ be a monotone net of subspaces of a topological space $Y$, forming a cover of $Y$. Assume there are continuous projections of $Y$ onto $Y_i$, namely continuous maps $p_i \colon Y \to Y_i$ such that $p_i (x) \leq x$ for every $x \in Y$ and $p_i (x)=x$ for every $x \in Y_i$, for every $i \in I$. Then the topology of $Y$ is determined by the topologies of $Y_i$, $i \in I$. \end{prop} What the assumption means is that we do not just have arrows $\xymatrix{Y_i \ar[r]^{e_i} & Y}$, $i \in I$, but embedding-projection pairs $\xymatrix{Y_i \ar@<1ex>[r]^{e_i} & Y \ar@<1ex>[l]^{p_i}}$, $i \in I$. This is a standard situation in domain theory. The inequality $e_i \circ p_i \leq \identity Y$ should be read by keeping in mind the specialization preorder of $Y$. \proof Let $A$ be an open subset in the topology on $Y$ determined by the subspaces $Y_i$. For every $i \in I$, $A \cap Y_i$ is open in $Y_i$, and since $p_i$ is continuous, $U_i = p_i^{-1} (A \cap Y_i)$ is open in $Y$. We claim that $A = \bigcup_{i \in I} U_i$, which will show that $A$ is open, allowing us to conclude. For every $x \in A$, we find some $i \in I$ such that $x$ is in $Y_i$, using the fact that ${(Y_j)}_{j \in I, \sqsubseteq}$ is a cover. Then $x$ is in $A \cap Y_i$. Since $p_i$ is a projection, $p_i (x) = x$, so $x$ is in $p_i^{-1} (A \cap Y_i) = U_i$. Conversely, for every element $x$ of any $U_i$, $i \in I$, $p_i (x)$ is in the open subset $A \cap Y_i$ of $Y_i$. Since ${(Y_j)}_{j \in I, \sqsubseteq}$ is a cover, there is a $j \in I$ such that $x \in Y_j$, and since that is also a monotone net, we may assume that $i \sqsubseteq j$. Hence $Y_i \subseteq Y_j$, so that $p_i (x)$ is also in $A \cap Y_j$. Moreover, $p_i (x) \leq x$. 
Since the specialization preordering on a subspace (here, $Y_j$) coincides with the restriction of the specialization preordering on the superspace (here, $Y$), and $A \cap Y_j$ is open hence upwards-closed in $Y_j$, $x$ is in $A \cap Y_j$. In particular, $x$ is in $A$. \qed Using the fact that $\Lform_\infty (X, d)$ has the subspace topology from $\Lform X$, Corollary~\ref{corl:Lalpha:retract} implies that there is an embedding-projection pair $i_\alpha, r_\alpha$ between $\Lform_\alpha (X, d)$ and $\Lform_\infty (X, d)$. When $\alpha > 0$ varies, the spaces $\Lform_\alpha (X, d)$ form a cover of $\Lform_\infty (X, d)$. Hence we can apply Proposition~\ref{prop:Y:det:ep}, and we obtain: \begin{prop} \label{prop:Linfty:determined} Let $X, d$ be a Lipschitz regular standard quasi-metric space. The topology of $\Lform_\infty (X, d)$ is determined by those of the subspaces $\Lform_\alpha (X, d)$, $\alpha > 0$. \qed \end{prop} We have a similar result for bounded maps, using Corollary~\ref{corl:Lalpha:retract:bnd} instead of Corollary~\ref{corl:Lalpha:retract}, and relying on Lemma~\ref{lemma:Linfa} to ensure that the spaces $\Lform_\alpha^a (X, d)$, $\alpha > 0$, form a cover of $\Lform_\infty^\bnd (X, d)$. \begin{prop} \label{prop:Linfty:determined:bnd} Let $X, d$ be a Lipschitz regular standard quasi-metric space. Fix $a > 0$. Then the topology of $\Lform_\infty^\bnd (X, d)$ is determined by those of the subspaces $\Lform_\alpha^a (X, d)$, $\alpha > 0$. \qed \end{prop} \section{The Kantorovich-Rubinshte\u\i n-Hutchinson Quasi-Metric on Previsions} \label{sec:quasi-metr-prev} At last we come to our main object of study. This was introduced in \cite{Gou-csl07}, and is a common generalization of continuous valuations (a concept close to that of measure), of spaces of closed sets, resp.\ of compact saturated sets, resp.\ of lenses, and more.
\begin{defi}[Prevision] \label{defn:prev} A \emph{prevision} on a topological space $X$ is a Scott-continuous map $F$ from $\Lform X$ to $\overline{\mathbb{R}}_+$ that is \emph{positively homogeneous}, namely: $F (\alpha h) = \alpha F (h)$ for all $\alpha \in \mathbb{R}_+$, $h \in \Lform X$. It is: \begin{itemize} \item \emph{sublinear} if $F (h+h') \leq F (h) + F (h')$ holds for all $h, h' \in \Lform X$, \item \emph{superlinear} if $F (h+h') \geq F (h) + F (h')$ holds, \item \emph{linear} if it is both sublinear and superlinear, \item \emph{subnormalized} if $F (\alpha.\mathbf 1 + h) \leq \alpha + F (h)$ holds, where $\mathbf 1$ is the constant $1$ map, \item \emph{normalized} if $F (\alpha.\mathbf 1 + h) = \alpha + F (h)$ holds, \item \emph{discrete} if $F (f \circ h) = f (F (h))$ for every $h \in \Lform X$ and every strict $f \in \Lform \overline{\mathbb{R}}_+$---a map $f$ is strict if and only if $f (0) = 0$. \end{itemize} \end{defi} We write $\Prev X$ for the set of previsions on $X$. Let now $X, d$ be a quasi-metric space, with its $d$-Scott topology. \subsection{The Unbounded Kantorovich-Rubinshte\u\i n-Hutchinson Quasi-Metric} \label{sec:unbo-kant-rubinsht} \begin{defi}[$\dKRH$] \label{defn:KRH} Let $X, d$ be a quasi-metric space. The \emph{Kantorovich-Rubinshte\u\i n-Hutchinson quasi-metric} on the space of previsions on $X$ is defined by: \begin{equation} \label{eq:dKRH} \dKRH (F, F') = \sup_{h \in \Lform_1 X} \dreal (F (h), F' (h)). \end{equation} \end{defi} This is a quasi-metric on any standard quasi-metric space, as remarked at the end of Section~6 of \cite{JGL:formalballs}. We give a slightly expanded argument in Lemma~\ref{lemma:KRH:qmet} below. \begin{rem} \label{rem:dKRH:classical} The name of that quasi-metric stems from analogous definitions of metrics on spaces of measures. 
The \emph{classical} Kantorovich-Rubinshte\u\i n-Hutchinson metric between two bounded measures $\mu$ and $\mu'$ is given as $\sup_{h \in L_1 X, h \leq \mathbf 1} |\int_{x \in X} h (x) d\mu - \int_{x \in X} h (x) d\mu'|$. Writing $F$ for the functional $h \mapsto \int_{x \in X} h (x) d\mu$ and $F'$ for $h \mapsto \int_{x \in X} h (x) d\mu'$, this can be rewritten as $\sup_{h \in L_1 X, h \leq \mathbf 1} \dreal^{sym} (F (h), F' (h))$. In the case of metric (as opposed to quasi-metric) spaces, we shall see in Lemma~\ref{lemma:dKRH:metric} that the use of the symmetrized metric $\dreal^{sym}$ is irrelevant, and the distance between $\mu$ and $\mu'$ is equal to $\sup_{h \in L_1 X, h \leq \mathbf 1} \dreal (F (h), F' (h))$. Additionally, on a metric space, all $1$-Lipschitz maps $h$ are automatically continuous. In the end, on metric spaces, the only difference between our definition and the latter is that $h$ ranges over functions bounded above by $1$ in the latter. We shall also explore this variant in Section~\ref{sec:bound-kant-rubinsht}. \end{rem} The $\dKRH$ quasi-metric is interesting mostly on subspaces of subnormalized, resp.\ normalized previsions, as we shall illustrate in Remark~\ref{rem:KRH:useless} in the case of linear previsions. \begin{lem} \label{lemma:KRH:qmet} Let $X, d$ be a standard quasi-metric space. For all previsions $F$, $F'$ on $X$, the following are equivalent: $(a)$ $F \leq F'$; $(b)$ $\dKRH (F, F') = 0$. In particular, $\dKRH$ is a quasi-metric. \end{lem} \proof $(a) \limp (b)$. For every $h \in \Lform_1 (X, d)$, $F (h) \leq F' (h)$, so $\dreal (F (h), F' (h)) = 0$. $(b) \limp (a)$. Let $f$ be an arbitrary element of $\Lform X$. For every $\alpha > 0$, $1/\alpha f^{(\alpha)}$ is in $\Lform_1 (X, d)$, so $F (1/\alpha f^{(\alpha)}) \leq F' (1/\alpha f^{(\alpha)})$. Multiply by $\alpha$: $F (f^{(\alpha)}) \leq F' (f^{(\alpha)})$. The family ${(f^{(\alpha)})}_{\alpha > 0}$ is a chain whose supremum is $f$. 
Using the fact that $F$ and $F'$ are Scott-continuous, $F (f) \leq F' (f)$, and as $f$ is arbitrary, $(a)$ follows. \qed The following shows that we can restrict to bounded maps $h \in \Lform_1 (X, d)$. A map $h$ is \emph{bounded} if and only if there is a constant $a \in \mathbb{R}_+$ such that for every $x \in X$, $h (x) \leq a$. \begin{lem} \label{lemma:KRH:bounded} Let $X, d$ be a quasi-metric space. For all previsions $F$, $F'$ on $X$, \[ \dKRH (F, F') = \sup_{h \text{ bounded }\in \Lform_1 X} \dreal (F (h), F' (h)). \] \end{lem} \proof For every $h \in \Lform_1 (X, d)$, $h$ is the pointwise supremum of the chain ${(\min (h, a))}_{a \in \mathbb{R}_+}$, where $\min (h, a) \colon x \mapsto \min (h (x), a)$. The supremum on the right-hand side of the claimed equality is clearly less than or equal to $\dKRH (F, F')$, so it suffices to show that for every $r \in \mathbb{R}_+$ such that $r < \dKRH (F, F')$, there is a bounded map $h' \in \Lform_1 (X, d)$ such that $r < \dreal (F (h'), F' (h'))$. Since $r < \dKRH (F, F')$, there is an $h \in \Lform_1 (X, d)$ such that $r < \dreal (F (h), F' (h))$. If $F (h) < +\infty$, this implies that $F (h) > F' (h) + r$. Since $F$ is Scott-continuous, there is an $a \in \mathbb{R}_+$ such that $F (\min (h, a)) > F' (h) + r \geq F' (\min (h, a)) + r$, so we can take $h' = \min (h, a)$. If $F (h) = +\infty$, then note that since $0 \leq r < \dreal (F (h), F' (h))$, we must have $F' (h) < +\infty$. By Scott-continuity again, there is an $a \in \mathbb{R}_+$ such that $F (\min (h, a)) > r + F' (h)$, and then we can again take $h' = \min (h, a)$. \qed \begin{lem} \label{lemma:LPrev:ord} Let $X, d$ be a standard quasi-metric space, let $F$, $F'$ be two previsions on $X$, and $r$, $r'$ be two elements of $\mathbb{R}_+$. The following are equivalent: \begin{enumerate} \item $(F, r) \leq^{\dKRH^+} (F', r')$; \item $r \geq r'$ and, for every $h \in \Lform_1 (X, d)$, $F (h) - r \leq F' (h) - r'$; \item $r \geq r'$ and, for every $h \in \Lform_\alpha (X, d)$, $\alpha > 0$, $F (h) - \alpha r \leq F' (h) - \alpha r'$.
\end{enumerate} \end{lem} \proof (1) $\limp$ (2). If $(F, r) \leq^{\dKRH^+} (F', r')$, then for every $h \in \Lform_1 (X, d)$, $\dreal (F (h), F' (h)) \leq r-r'$. This implies that $r \geq r'$, on the one hand, and on the other hand that either $F (h) = F' (h) = +\infty$ or $F (h) \neq +\infty$ and $F (h) - F' (h) \leq r-r'$; in both cases, $F (h) - r \leq F' (h) - r'$. (2) $\limp$ (3). For every $h \in \Lform_\alpha (X, d)$, $\alpha > 0$, $1/\alpha h$ is in $\Lform_1 (X, d)$, so (2) implies $F (1/\alpha h) - r \leq F' (1/\alpha h) - r'$. Multiplying by $\alpha$, and using positive homogeneity, we obtain (3). (3) $\limp$ (1). For $\alpha=1$, we obtain that for every $h \in \Lform_1 (X, d)$, $F (h) - r \leq F' (h) - r'$. If $F' (h) = +\infty$, then $\dreal (F (h), F' (h)) = 0 \leq r-r'$. If $F' (h) \neq +\infty$, then $F (h)$ cannot be equal to $+\infty$, so $\dreal (F (h), F' (h)) = \max (F (h) - F' (h), 0) \leq \max (r-r', 0) = r-r'$. \qed Recall that $\Lform_\infty X$ has the subspace topology from $\Lform X$. Let us introduce the following variant on the notion of prevision. We do this, because our completeness theorem will naturally produce $\Lform$-previsions, not previsions, as $\dKRH$-limits. Showing that every $\Lform$-prevision defines a unique prevision will be the subject of Proposition~\ref{prop:Gbar}. \begin{defi}[$\Lform$-prevision] \label{defn:Lprev} Let $X, d$ be a quasi-metric space. An \emph{$\Lform$-prevision} on $X$ is any continuous map $G$ from $\Lform_\infty (X, d)$ to $\overline{\mathbb{R}}_+$ such that $G (\alpha h) = \alpha G (h)$ for all $\alpha \in \mathbb{R}_+$ and $h \in \Lform_\infty (X, d)$. \end{defi} The notions of sublinearity, superlinearity, linearity, normalization, subnormalization, discreteness, carry over to $\Lform$-previsions, taking care to quantify over $h, h' \in \Lform_\infty X$ and over $f$ strict in $\Lform_\infty \overline{\mathbb{R}}_+$. 
We write $\Lform\Prev X$ for the set of all $\Lform$-previsions on $X$, and equip it with a quasi-metric defined by the same formula as Definition~\ref{defn:KRH}, and which we denote by $\dKRH$ again. Every $F \in \Prev X$ defines an element $F_{|\Lform_\infty X}$ of $\Lform\Prev X$ by restriction. Conversely, for every $G \in \Lform\Prev X$, let $\overline G (h) = \sup_{\alpha \in \mathbb{R}_+} G (h^{(\alpha)})$. \begin{prop} \label{prop:Gbar} Let $X, d$ be a standard quasi-metric space. \begin{enumerate} \item For every $G \in \Lform\Prev X$, $\overline G$ is a prevision. \item The maps $G \in \Lform\Prev X \mapsto \overline G \in \Prev X$ and $F \in \Prev X \mapsto F_{|\Lform_\infty X} \in \Lform\Prev X$ are inverse of each other. \item If $G$ is sublinear, resp.\ superlinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized, resp.\ discrete, then so is $\overline G$. \item Conversely, if $\overline G$ is sublinear, resp.\ superlinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized, resp.\ discrete, then so is $G$. \end{enumerate} \end{prop} \proof (1) We must first show that $\overline G$ is Scott-continuous, i.e., continuous from $\Lform X$ to $\overline{\mathbb{R}}_+$, both with their Scott topologies. Let $h \in \Lform X$ such that $\overline G (h) > a$. For some $\alpha \in \mathbb{R}_+$, $G (h^{(\alpha)}) > a$, so $h^{(\alpha)}$ is in the open subset $G^{-1} (]a, +\infty])$ of $\Lform_\infty X$. By definition of a subspace topology, there is a (Scott-)open subset $W$ of $\Lform X$ such that $G^{-1} (]a, +\infty]) = W \cap \Lform_\infty X$. Then $h^{(\alpha)}$ is in $W$, and since $h \geq h^{(\alpha)}$, $h$ is also in $W$. Moreover, for every $g \in W$, $g^{(\beta)}$ is in $W$ for some $\beta \in \mathbb{R}_+$, hence in $G^{-1} (]a, +\infty])$. It follows that $G (g^{(\beta)}) > a$, and therefore $\overline G (g) > a$: hence $W$ is an open neighborhood of $h$ contained in ${\overline G}^{-1} (]a, +\infty])$. 
We conclude that the latter is open in $\Lform X$, which implies that $\overline G$ is continuous. Before we proceed, we note the following. \begin{lem} \label{lemma:ah:(alpha)} Let $X, d$ be a standard quasi-metric space. For every map $h \colon X \to \overline{\mathbb{R}}_+$, for all $a, \alpha \in \mathbb{R}_+$, $a h^{(\alpha)} = {(ah)}^{(a\alpha)}$. \end{lem} \proof We have $a h^{(\alpha)} \leq {(ah)}^{(a\alpha)}$, because $a h^{(\alpha)}$ is $a\alpha$-Lipschitz continuous by Proposition~\ref{prop:alphaLip:props}~(1), below $ah$, and because ${(ah)}^{(a\alpha)}$ is the largest $a\alpha$-Lipschitz continuous function below $ah$. When $a > 0$, by the same argument, $1/a {(ah)}^{(a\alpha)} \leq h^{(\alpha)}$, so $a h^{(\alpha)} = {(ah)}^{(a\alpha)}$. The same equality holds, trivially, when $a=0$. \qed We return to the proof of Proposition~\ref{prop:Gbar}. When $a > 0$, $\overline G (ah) = \sup_{\beta \in \mathbb{R}_+} G ({(ah)}^{(\beta)}) = \sup_{\alpha \in \mathbb{R}_+} G ({(ah)}^{(a\alpha)}) = \sup_{\alpha \in \mathbb{R}_+} G (a h^{(\alpha)})$ (by Lemma~\ref{lemma:ah:(alpha)}) $= a \overline G (h)$. When $a=0$, $\overline G (0) = 0$. Therefore $\overline G$ is a prevision. (2) For every $G \in \Lform\Prev X$, the restriction of $\overline G$ to $\Lform_\infty X$ maps every Lipschitz (say, $\alpha$-Lipschitz) continuous map $g \colon X \to \overline{\mathbb{R}}_+$ to $\sup_{\beta \in \mathbb{R}_+} G (g^{(\beta)})$. For every $\beta \geq \alpha$, $g$ is $\beta$-Lipschitz continuous, and $g^{(\beta)}$ is the largest $\beta$-Lipschitz continuous map below $g$, so $g^{(\beta)} = g$. It follows that $\sup_{\beta \in \mathbb{R}_+} G (g^{(\beta)}) = G (g)$, showing that $\overline G_{|\Lform_\infty X} = G$. Note that this says that $G$ and $\overline G$ coincide on Lipschitz continuous maps, a fact that we will use several times below.
In the reverse direction, for every $F \in \Prev X$, $\overline {(F_{|\Lform_\infty X})}$ maps every function $h \in \Lform X$ to $\sup_{\alpha \in \mathbb{R}_+} F (h^{(\alpha)}) = F (h)$, because $F$ is Scott-continuous and $\sup_{\alpha \in \mathbb{R}_+} h^{(\alpha)} = h$ (see \cite[Theorem~6.17]{JGL:formalballs}); therefore $\overline {(F_{|\Lform_\infty X})} = F$. (3) For all $g, h \in \Lform X$, for all $\alpha, \beta \in \mathbb{R}_+$, $g^{(\alpha)} + h^{(\beta)} \leq (g+h)^{(\alpha+\beta)}$, because the left-hand side is an $(\alpha+\beta)$-Lipschitz continuous map below $g+h$, using Proposition~\ref{prop:alphaLip:props}~(2), and the right-hand side is the largest. If $G$ is superlinear, it follows that $\overline G (g) + \overline G (h) = \sup_{\alpha, \beta \in \mathbb{R}_+} G (g^{(\alpha)}) + G (h^{(\beta)}) \leq \sup_{\alpha, \beta \in \mathbb{R}_+} G (g^{(\alpha)} + h^{(\beta)}) \leq \sup_{\alpha, \beta \in \mathbb{R}_+} G ((g+h)^{(\alpha+\beta)}) = \overline G (g+h)$, hence that $\overline G$ is superlinear, too. If $G$ is sublinear, then we need another argument. We wish to show that $\overline G (g+h) \leq \overline G (g) + \overline G (h)$. To this end, let $a$ be an arbitrary element of $\mathbb{R}_+$ such that $a < \overline G (g+h)$. Since $\overline G$ is Scott-continuous (item~(1) above), $g+h$ is in the Scott-open set ${\overline G}^{-1} (]a, +\infty])$. Now $g+h = \sup_{\alpha \in \mathbb{R}_+} g^{(\alpha)} + \sup_{\alpha \in \mathbb{R}_+} h^{(\alpha)} = \sup_{\alpha \in \mathbb{R}_+} (g^{(\alpha)} + h^{(\alpha)})$, since addition is Scott-continuous on $\overline{\mathbb{R}}_+$, so $g^{(\alpha)} + h^{(\alpha)}$ is in ${\overline G}^{-1} (]a, +\infty])$ for some $\alpha \in \mathbb{R}_+$. The function $g^{(\alpha)} + h^{(\alpha)}$ is $2\alpha$-Lipschitz continuous (this is again Proposition~\ref{prop:alphaLip:props}~(2)). Item~(2) above shows that $\overline G$ coincides with $G$ on Lipschitz continuous maps, so $\overline G (g^{(\alpha)} + h^{(\alpha)}) = G (g^{(\alpha)} + h^{(\alpha)})$.
Since $G$ is sublinear, the latter is less than or equal to $G (g^{(\alpha)}) + G (h^{(\alpha)})$, so $G (g^{(\alpha)}) + G (h^{(\alpha)}) > a$. Since $\overline G (g) \geq G (g^{(\alpha)})$, and similarly with $h$, we obtain that $\overline G (g) + \overline G (h) > a$. Since $a$ is arbitrary, $\overline G (g) + \overline G (h) \geq \overline G (g+h)$. If $G$ is subnormalized, then we show that $\overline G$ is subnormalized, too, by a similar argument. Let $\alpha \in \mathbb{R}_+$, $h \in \Lform X$. Fix $a \in \mathbb{R}_+$ such that $a < \overline G (\alpha.\mathbf 1 + h)$. Then $\alpha . \mathbf 1 + h$ is in the Scott-open set $\overline G^{-1} (]a, +\infty])$. We observe that $\alpha . \mathbf 1 + h$ is the pointwise supremum of the chain of maps $\alpha . \mathbf 1 + h^{(\beta)}$, $\beta \in \mathbb{R}_+$, and that those maps are $\beta$-Lipschitz continuous, by Proposition~\ref{prop:alphaLip:props}~(2) and~(6). Therefore $\alpha . \mathbf 1 + h^{(\beta)}$ is also in $\overline G^{-1} (]a, +\infty])$, for some $\beta \in \mathbb{R}_+$. Since that function is $\beta$-Lipschitz continuous, $\overline G$ maps it to $G (\alpha . \mathbf 1 + h^{(\beta)})$, which is less than or equal to $\alpha + G (h^{(\beta)})$ since $G$ is subnormalized. In particular, $a < \overline G (\alpha . \mathbf 1 + h^{(\beta)}) \leq \alpha + G (h^{(\beta)}) \leq \alpha + \overline G (h)$. Taking suprema over $a$ proves the claim. If $G$ is normalized, it remains to show that $\overline G (\alpha . \mathbf 1 + h) \geq \alpha + \overline G (h)$. This is similar to the argument for the preservation of superlinearity. We use the fact that $\alpha . \mathbf 1 + h^{(\beta)} \leq {(\alpha . \mathbf 1 + h)}^{(\beta)}$, which follows from the fact that the left-hand side is $\beta$-Lipschitz continuous below $\alpha . \mathbf 1 + h$. Then $\overline G (\alpha . \mathbf 1 + h) = \sup_{\beta \in \mathbb{R}_+} G ({(\alpha . \mathbf 1 + h)}^{(\beta)}) \geq \sup_{\beta \in \mathbb{R}_+} G (\alpha .
\mathbf 1 + h^{(\beta)}) = \sup_{\beta \in \mathbb{R}_+} (\alpha + G (h^{(\beta)})) = \alpha + \overline G (h)$. If $G$ is discrete, then we need the following auxiliary lemma. \begin{lem} \label{lemma:comp:alphaLip} Let $X, d$ be a standard quasi-metric space, $\alpha, \beta \in \mathbb{R}_+$. \begin{enumerate} \item for all $h \in \Lform_\alpha (X, d)$, $f \in \Lform_\beta (\overline{\mathbb{R}}_+, \dreal)$, $f \circ h$ is in $\Lform_{\alpha\beta} (X, d)$; \item for all $h \in \Lform X$, $f \in \Lform \overline{\mathbb{R}}_+$, $f^{(\alpha)} \circ h^{(\beta)} \leq (f \circ h)^{(\alpha\beta)}$. \end{enumerate} \end{lem} \proof (1) Direct consequence of Lemma~\ref{lemma:comp:Lip}. (2) By (1), $f^{(\alpha)} \circ h^{(\beta)}$ is $\alpha\beta$-Lipschitz continuous. For every $x \in X$, $f^{(\alpha)} (h^{(\beta)} (x)) \leq f (h^{(\beta)} (x)) \leq f (h (x))$, so $f^{(\alpha)} \circ h^{(\beta)}$ is less than or equal to $f \circ h$, hence also to the largest $\alpha\beta$-Lipschitz continuous map below $f \circ h$, $(f \circ h)^{(\alpha\beta)}$. \qed Let us resume our proof of Proposition~\ref{prop:Gbar}~(3), where $G$ is discrete. Fix $h \in \Lform X$, and let $f \in \Lform \overline{\mathbb{R}}_+$ be strict. Note that for every $\beta \in \mathbb{R}_+$, $f^{(\beta)}$ is strict, too, since $0 \leq f^{(\beta)} \leq f$. We have $\overline G (f \circ h) = \sup_{\alpha \in \mathbb{R}_+} G ((f \circ h)^{(\alpha)})$. The map $f^{(\sqrt\alpha)} \circ h^{(\sqrt \alpha)}$ is $\alpha$-Lipschitz continuous and below $(f \circ h)^{(\alpha)}$ by Lemma~\ref{lemma:comp:alphaLip}~(1, 2). Therefore $\overline G (f \circ h) \geq \sup_{\alpha \in \mathbb{R}_+} G (f^{(\sqrt\alpha)} \circ h^{(\sqrt\alpha)})$. 
For all $\beta$, $\gamma$ in $\mathbb{R}_+$, $\beta$ and $\gamma$ are less than or equal to $\sqrt \alpha$ for $\alpha$ sufficiently large, so $\overline G (f \circ h) \geq \sup_{\beta, \gamma \in \mathbb{R}_+} G (f^{(\beta)} \circ h^{(\gamma)}) = \sup_{\gamma\in \mathbb{R}_+} \sup_{\beta \in \mathbb{R}_+} f^{(\beta)} ( G (h^{(\gamma)}))$ (using the fact that $G$ is discrete) $= \sup_{\gamma\in \mathbb{R}_+} f (G (h^{(\gamma)})) = f (\sup_{\gamma\in \mathbb{R}_+} G (h^{(\gamma)}))$ (since $f$ is lower semicontinuous from $\overline{\mathbb{R}}_+$ to itself, hence Scott-continuous) $= f (\overline G (h))$. Conversely, $f \circ h$ is the supremum of the directed family ${(f^{(\beta)} \circ h^{(\gamma)})}_{\beta, \gamma \in \mathbb{R}_+}$. This is directed because both ${(f^{(\beta)})}_{\beta \in \mathbb{R}_+}$ and ${(h^{(\gamma)})}_{\gamma \in \mathbb{R}_+}$ are chains, and every $f^{(\beta)}$ is monotonic (remember that $\Lform_\beta (X, d) \subseteq \Lform X$ by Lemma~\ref{lemma:Lalpha:LX}, and that every lower semicontinuous map is monotonic). Moreover, for every $x \in X$, $\sup_{\beta, \gamma \in \mathbb{R}_+} f^{(\beta)} (h^{(\gamma)} (x)) = \sup_{\gamma \in \mathbb{R}_+}\sup_{\beta \in \mathbb{R}_+} f^{(\beta)} (h^{(\gamma)} (x)) = \sup_{\gamma \in \mathbb{R}_+} f (h^{(\gamma)} (x)) = f (\sup_{\gamma \in \mathbb{R}_+} h^{(\gamma)} (x)) = f (h (x))$. Using that, we show that $\overline G (f \circ h) \leq f (\overline G (h))$, from which the equality will follow. For every $a < \overline G (f \circ h)$, since $\overline G$ is Scott-continuous by (1), and using the previous observation, there are $\beta, \gamma \in \mathbb{R}_+$ such that $a < \overline G (f^{(\beta)} \circ h^{(\gamma)})$. Since $f^{(\beta)} \circ h^{(\gamma)}$ is in $\Lform_{\beta\gamma} (X, d)$ (Lemma~\ref{lemma:comp:alphaLip}~(1)), $\overline G (f^{(\beta)} \circ h^{(\gamma)}) = G (f^{(\beta)} \circ h^{(\gamma)})$. 
Since $G$ is discrete, the latter is equal to $f^{(\beta)} (G (h^{(\gamma)}))$, which is less than or equal to $f (G (h^{(\gamma)}))$, hence to $f (\overline G (h))$ (recall that $f$ is lower semicontinuous hence monotonic). Since $a$ is arbitrary, $\overline G (f \circ h) \leq f (\overline G (h))$. (4) is obvious, since by (2) $G$ is the restriction of $\overline G$ to $\Lform_\infty (X, d)$. \qed \begin{cor} \label{corl:=:L1} Let $X, d$ be a standard quasi-metric space, and $F$, $F'$ be two previsions on $X$. Then $F=F'$ if and only if $F (h) = F' (h)$ for every $h \in \Lform_1 (X, d)$. \end{cor} \proof The only if direction is trivial. In the if direction, for every $\alpha > 0$, for every $h \in \Lform_\alpha (X, d)$, $1/\alpha h$ is in $\Lform_1 (X, d)$ by Proposition~\ref{prop:alphaLip:props}~(1). We have $F (1/\alpha h) = F' (1/\alpha h)$ by assumption. Multiplying by $\alpha$ and relying on positive homogeneity, $F (h) = F' (h)$. This shows that $F_{|\Lform_\infty X} = F'_{|\Lform_\infty X}$, whence $F=F'$ by Proposition~\ref{prop:Gbar}~(2). \qed Recall that $\Lform_\infty^\bnd (X, d)$ is the space of all bounded Lipschitz continuous maps from $X$ to $\overline{\mathbb{R}}_+$. \begin{defi}[$\Lform^\bnd$-prevision] \label{defn:Lbnd:prev} Let $X, d$ be a quasi-metric space. An \emph{$\Lform^\bnd$-prevision} on $X$ is any continuous map $H$ from $\Lform_\infty^\bnd X$ to $\overline{\mathbb{R}}_+$ such that $H (\beta h) = \beta H (h)$ for all $\beta \in \mathbb{R}_+$ and $h \in \Lform_\infty^\bnd X$. \end{defi} Let $\Lform^\bnd\Prev X$ be the set of all $\Lform^\bnd$-previsions on $X$. We define linear, superlinear, sublinear, subnormalized, and normalized $\Lform^\bnd$-previsions in the usual way. Finally, we say that $H$ is \emph{discrete} iff it satisfies $H (f \circ h) = f (H (h))$ for every strict map $f \in \Lform_\infty^\bnd \overline{\mathbb{R}}_+$ (not $\Lform_\infty \overline{\mathbb{R}}_+$) and every $h \in \Lform_\infty^\bnd (X, d)$.
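The passage between a prevision and its restriction to bounded maps can be illustrated numerically. The sketch below, on a hypothetical three-point space with made-up weights, takes a linear prevision $F (h) = \sum_i r_i h (x_i)$, restricts it to bounded maps, and recovers the original values through suprema of truncations $\min (h, \beta)$, in the spirit of the map $\overline{\overline H}$ defined next.

```python
import math

# A linear prevision on a hypothetical three-point space, with made-up
# weights r_i, reconstructed from its restriction to bounded maps via
# sup_beta F(min(h, beta)).
pts = ["a", "b", "c"]
weights = {"a": 0.5, "b": 0.25, "c": 0.25}

def F(h):
    # F(h) = sum_i r_i * h(x_i): a linear, normalized prevision.
    return sum(weights[x] * h[x] for x in pts)

def truncate(h, beta):
    return {x: min(h[x], beta) for x in pts}

def reconstruct(h, betas):
    # Supremum over a finite grid of truncation levels; exact as soon
    # as some beta dominates every finite value of h.
    return max(F(truncate(h, beta)) for beta in betas)

h_bounded = {"a": 2.0, "b": 7.0, "c": 1.0}
print(reconstruct(h_bounded, [1, 2, 4, 8]) == F(h_bounded))  # True

# On a map taking the value +infinity, F(h) = +infinity, and the
# truncations tend to it from below.
h_unbounded = {"a": math.inf, "b": 1.0, "c": 0.0}
print(F(truncate(h_unbounded, 100.0)))  # 50.25
```

The grid of truncation levels plays the role of the directed family ${(\min (h, \beta))}_{\beta > 0}$ of the text.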
Every $G \in \Lform\Prev X$ defines an element $G_{|\Lform_\infty^\bnd X}$ of $\Lform^\bnd\Prev X$ by restriction. Conversely, for every $H \in \Lform^\bnd\Prev X$, let $\overline{\overline H} (h) = \sup_{\beta > 0} H (\min (h, \beta))$ for every $h \in \Lform_\infty (X, d)$. \begin{lem} \label{lemma:Hbar} Let $X, d$ be a standard quasi-metric space, and $a > 0$. \begin{enumerate} \item For every $H \in \Lform^\bnd\Prev X$, $\overline{\overline H}$ is an $\Lform$-prevision. \item The maps $G \in \Lform\Prev X \mapsto G_{|\Lform_\infty^\bnd X} \in \Lform^\bnd\Prev X$ and $H \in \Lform^\bnd\Prev X \mapsto \overline{\overline H} \in \Lform\Prev X$ are inverse of each other. \item For every $G \in \Lform\Prev X$, $G$ is linear, resp.\ superlinear, resp.\ sublinear, resp.\ subnormalized, resp.\ normalized, resp.\ discrete, if and only if $G_{|\Lform_\infty^\bnd X}$ is. \end{enumerate} \end{lem} \proof We first show that the map $t_{\beta} \colon \Lform_\infty (X, d) \to \Lform_\infty^\bnd (X, d)$ defined by $t_{\beta} (h) = \allowbreak \min (h, \beta)$ is continuous for every $\beta > 0$. This follows from the definition of subspace topologies, and the fact that $t_{\beta}$ is the restriction of the Scott-continuous map $h \in \Lform X \mapsto \min (h, \beta) \in \Lform X$. (1) $\overline{\overline H}$ is continuous, since $(\overline{\overline H})^{-1} (]r, +\infty]) = \{h \in \Lform_\infty (X, d) \mid \exists \beta > 0 , H (\min (h, \beta)) > r\} = \bigcup_{\beta > 0} \{h \in \Lform_\infty (X, d) \mid \min (h, \beta) \in H^{-1} (]r, +\infty])\} = \bigcup_{\beta > 0} t_{\beta}^{-1} (H^{-1} (]r, +\infty]))$, which is open since $H$ and $t_{\beta}$ are continuous. We must show that $\overline{\overline H} (\alpha h) = \alpha \overline{\overline H} (h)$ for every $\alpha \in \mathbb{R}_+$. 
When $\alpha > 0$, $\overline{\overline H} (\alpha h) = \sup_{\beta > 0} H (\min (\alpha h, \beta)) = \sup_{\beta' > 0} H (\min (\alpha h, \alpha \beta')) = \sup_{\beta' > 0} \alpha H (\min (h, \beta')) = \alpha \overline{\overline H} (h)$. When $\alpha = 0$, $\overline{\overline H} (0) = \sup_{\beta > 0} H (0) = 0$. (2) For every $h \in \Lform_\infty^\bnd (X, d)$, $(\overline{\overline H})_{|\Lform_\infty^\bnd X} (h) = \overline{\overline H} (h) = \sup_{\beta > 0} H (\min (h, \beta)) = H (h)$, since $\min (h, \beta) = h$ for $\beta$ large enough. In the other direction, for every $h \in \Lform_\infty (X, d)$, $\overline {\overline {G_{|\Lform_\infty^\bnd X}}} (h) = \sup_{\beta > 0} G_{|\Lform_\infty^\bnd X} (\min (h, \beta)) = \sup_{\beta > 0} G (\min (h, \beta))$. Recall now from Proposition~\ref{prop:Gbar} that $G$ is the restriction to $\Lform_\infty X$ of a prevision $F$ on $X$, which is Scott-continuous. So $\overline {\overline {G_{|\Lform_\infty^\bnd X}}} (h) = \sup_{\beta > 0} F (\min (h, \beta)) = F (h) = G (h)$. (3) All the claims except the one on discreteness follow from the fact that $\min (g, \beta) + \min (h, \gamma) \leq \min (g+h, \beta + \gamma)$ and $\min (g+h, \beta) \leq \min (g, \beta) + \min (h, \beta)$ for all maps $g$, $h$ and all $\beta, \gamma > 0$. We turn to discreteness. If $G \in \Lform\Prev X$ is discrete, then clearly $G_{|\Lform_\infty^\bnd X}$ is discrete, too. Conversely, let $H \in \Lform^\bnd\Prev X$ be discrete. To show that $\overline{\overline H}$ is discrete, let $f \in \Lform_\infty \overline{\mathbb{R}}_+$ be strict, $h \in \Lform_\infty (X, d)$, and let us show that $\overline{\overline H} (f \circ h) = f (\overline {\overline H} (h))$. The left-hand side, $\overline{\overline H} (f \circ h)$, is equal to $\sup_{\beta > 0} H (\min (f \circ h, \beta))$, hence to $\sup_{\beta > 0} H (f_\beta \circ h)$, where $f_\beta = \min (f, \beta)$. 
Since $f_\beta$ is Scott-continuous and $H$ is the restriction of a Scott-continuous map $F \colon \Lform (X, d) \to \overline{\mathbb{R}}_+$, $\overline{\overline H} (f \circ h)$ is also equal to $\sup_{\beta, \gamma > 0} H (f_\beta \circ \min (h, \gamma))$. We know that $\min (h, \gamma)$ is in $\Lform_\infty^\bnd (X, d)$, and that $f_\beta$ is in $\Lform_\infty^\bnd \overline{\mathbb{R}}_+$. Since $H$ is discrete, $\overline{\overline H} (f \circ h)$ is therefore equal to $\sup_{\beta, \gamma> 0} f_\beta (H (\min (h, \gamma))) = \sup_{\gamma > 0} f (H (\min (h, \gamma)))$. Now $f$ is Scott-continuous, so this is equal to $f (\sup_{\gamma > 0} H (\min (h, \gamma))) = f (\overline {\overline H} (h))$. \qed \begin{cor} \label{corl:=:Lbnd1} Let $X, d$ be a standard quasi-metric space, let $F$, $F'$ be two previsions on $X$, and let $a > 0$. Then $F=F'$ if and only if $F (h) = F' (h)$ for every $h \in \Lform_1^a (X, d)$. \end{cor} \proof The only if direction is trivial. In the if direction, for every $\alpha > 0$, for every $h \in \Lform_\alpha^a (X, d)$, $1/\alpha h$ is in $\Lform_1^a (X, d)$ by Proposition~\ref{prop:alphaLip:props}~(1). We have $F (1/\alpha h) = F' (1/\alpha h)$ by assumption. Multiplying by $\alpha$ and relying on positive homogeneity, $F (h) = F' (h)$. Using Lemma~\ref{lemma:Linfa}, this shows that $F_{|\Lform_\infty^\bnd X} = F'_{|\Lform_\infty^\bnd X}$, whence $F=F'$ by Lemma~\ref{lemma:Hbar}~(2) and Proposition~\ref{prop:Gbar}~(2). \qed We can equip $\Lform\Prev X$ and $\Lform^\bnd\Prev X$ with a quasi-metric defined by formula (\ref{eq:dKRH}), which we shall again denote by $\dKRH$. Recall that an \emph{isometry} is a map that preserves distances on the nose. That the maps $F \in \Prev X \mapsto F_{|\Lform_\infty X} \in \Lform\Prev X$ and $G \in \Lform\Prev X \mapsto G_{|\Lform_\infty^\bnd X} \in \Lform^\bnd\Prev X$ are isometries follows by definition. \begin{lem} \label{lemma:dKRH:iso} Let $X, d$ be a quasi-metric space.
The maps: \[ \xymatrix@R=0pt{ F \ar@{|->}[r] & F_{|\Lform_\infty X}&& G \ar@{|->}[r] & G_{|\Lform_\infty^\bnd X} \\ \Prev X & \Lform\Prev X & & \Lform\Prev X & \Lform^\bnd\Prev X \\ \overline G & G \ar@{|->}[l] && \overline{\overline H} & H \ar@{|->}[l] } \] are isometries between $\Prev X, \dKRH$ and $\Lform\Prev X, \dKRH$, and between $\Lform\Prev X, \dKRH$ and $\Lform^\bnd\Prev X, \dKRH$. \qed \end{lem} If two maps $f \colon X, d \to Y, \partial$ and $g \colon Y, \partial \to X, d$ are mutually inverse isometries, then they are not only $1$-Lipschitz but also $1$-Lipschitz continuous. Indeed, $\mathbf B^1 (f)$ and $\mathbf B^1 (g)$ are order-isomorphisms, and therefore are both Scott-continuous. (This is a standard exercise.) It follows that, for the purpose of quasi-metrics, of the underlying $d$-Scott topologies, and for the underlying specialization orderings, $\Lform\Prev X$, $\Lform^\bnd\Prev X$, and $\Prev X$ can be regarded as the same space under $\dKRH$. The following proposition \emph{almost} shows that our spaces of previsions are Yoneda-complete, and gives a simple formula for the supremum $(G, r)$ of a directed family of formal balls ${(G_i, r_i)}_{i \in I}$. We say ``almost'' because $G$ is not guaranteed to be continuous. We address that problem in the subsequent Theorem~\ref{thm:LPrev:sup}. \begin{prop} \label{prop:LPrev:simplesup} Let $X, d$ be a standard quasi-metric space, and ${(G_i, r_i)}_{i \in I}$ be a directed family in $\mathbf B (\Lform\Prev X, \dKRH)$. Let $r = \inf_{i \in I} r_i$ and, for each $h \in \Lform_\alpha (X, d)$ with $\alpha > 0$, define $G (h)$ as the directed supremum $\sup_{i \in I} (G_i (h) + \alpha r - \alpha r_i)$.
Then: \begin{enumerate} \item $G$ is a well-defined, positively homogeneous function from $\Lform_\infty (X, d)$ to $\overline{\mathbb{R}}_+$; \item for any upper bound $(G', r')$ of ${(G_i, r_i)}_{i \in I}$, $r' \leq r$ and, for every $h \in \Lform_\alpha (X, d)$, $\alpha > 0$, $G (h) \leq G' (h) + \alpha r - \alpha r'$; \item if $G$ is continuous, then $(G, r)$ is the supremum of ${(G_i, r_i)}_{i \in I}$; \item if every $G_i$ is sublinear, resp.\ superlinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized, resp.\ discrete, then so is $G$. \end{enumerate} \end{prop} We call $(G, r)$ the \emph{naive supremum} of ${(G_i, r_i)}_{i \in I}$. \proof We check that $\sup_{i \in I} (G_i (h) + \alpha r - \alpha r_i)$ is a directed supremum, for every $h \in \Lform_\alpha (X, d)$. Define $\sqsubseteq$ by $i \sqsubseteq j$ if and only if $(G_i, r_i) \leq^{\dKRH^+} (G_j, r_j)$. Then $\sqsubseteq$ turns $I$ into a directed preordered set, and the family ${(G_i, r_i)}_{i \in I}$ into a monotone net ${(G_i, r_i)}_{i \in I, \sqsubseteq}$. It remains to show that $i \sqsubseteq j$ implies $G_i (h) + \alpha r - \alpha r_i \leq G_j (h) + \alpha r - \alpha r_j$. Since $(G_i, r_i) \leq^{\dKRH^+} (G_j, r_j)$, and $1/\alpha h$ is in $\Lform_1 (X, d)$, $G_i (1/\alpha h) \leq G_j (1/\alpha h) + r_i - r_j$. We obtain the result by multiplying both sides by $\alpha$ and adding $\alpha r$. (1) We must first show that $G$ is well-defined, in the following sense. When $h \in \Lform_\alpha X$, $h$ is also in $\Lform_\beta X$ for every $\beta \geq \alpha$ by Proposition~\ref{prop:alphaLip:props}~(5), and our tentative definition is not unique, apparently: we have defined $G (h)$ both as $\sup_{i \in I} (G_i (h) + \alpha r - \alpha r_i)$ and as $\sup_{i \in I} (G_i (h) + \beta r - \beta r_i)$.
This is not a problem: the two suprema coincide, since $G_i (h) + \alpha r - \alpha r_i$ and $G_i (h) + \beta r - \beta r_i$ only differ by $(\beta- \alpha) (r_i - r)$, which can be made arbitrarily small as $i$ varies in $I$. In fact, both ``definitions'' are equal to $\lim_{i \in I, \sqsubseteq} G_i (h)$, where the limit is taken in $\overline{\mathbb{R}}_+$ with its usual Hausdorff topology, a base of which is given by the intervals $[0, b[$, $]a, b[$ and $]a, +\infty]$, $0 < a < b < +\infty$. This remark will be helpful in the sequel. In particular, taking the definition $G (h) = \lim_{i \in I, \sqsubseteq} G_i (h)$, it is easy to show that $G$ commutes with products with non-negative constants, which completes the proof of (1). We proceed with (4), and we will return to (2) and (3) later. (4) Using again the formula $G (h) = \lim_{i \in I, \sqsubseteq} G_i (h)$, if every $G_i$ is sublinear, resp.\ superlinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized, then so is $G$, because $+$ and products by scalars are continuous on $\overline{\mathbb{R}}_+$ with its usual Hausdorff topology. The same argument works to show that $G$ is discrete when every $G_i$ is, because of the following lemma. \begin{lem} \label{lemma:Lbeta:creal} Every function $f \in \Lform_\infty \overline{\mathbb{R}}_+$ is continuous from $\overline{\mathbb{R}}_+$ to $\overline{\mathbb{R}}_+$, with its usual Hausdorff topology. \end{lem} \proof Let $f \in \Lform_\beta \overline{\mathbb{R}}_+$, $\beta > 0$. The Hausdorff topology on $\overline{\mathbb{R}}_+$ is generated by the Scott-open subsets $]a, +\infty]$ and the subsets of the form $[0, b[$. Since $\Lform_\beta \overline{\mathbb{R}}_+ \subseteq \Lform \overline{\mathbb{R}}_+$ (owing to the fact that $\overline{\mathbb{R}}_+, \dreal$ is standard), $f^{-1} (]a, +\infty])$ is Scott-open hence open in $\overline{\mathbb{R}}_+$.
And $f^{-1} ([0, b[)$ is open because $f$ is $\beta$-Lipschitz: for every $x \in f^{-1} ([0, b[)$, let $\epsilon > 0$ be such that $f (x) + \epsilon < b$; for every $x' \in [0, x + \epsilon/\beta[$, $f (x') \leq f (x) + \beta \dreal (x', x) < f (x) + \epsilon < b$, so $[0, x+\epsilon/\beta[$ is an open neighborhood of $x$ contained in $f^{-1} ([0, b[)$. \qed We return to the proof of (2) and of (3). (2) Let $(G', r')$ be any upper bound of ${(G_i, r_i)}_{i \in I}$. For every $i \in I$, for every $h \in \Lform_1 (X, d)$ the inequality $(G_i, r_i) \leq^{\dKRH^+} (G', r')$, equivalently $\dKRH (G_i, G') \leq r_i-r'$, implies that $G_i (h) \leq G' (h) + r_i - r'$ (and $r_i \geq r'$). Then $G_i (h) - r_i \leq G' (h) - r'$. Taking suprema over $i \in I$, $G (h) - r \leq G' (h) - r'$, hence $G (h) \leq G' (h) + r - r'$ (and, by taking infima, $r = \inf_{i \in I} r_i \geq r'$). Now take any $h \in \Lform_\alpha (X, d)$, $\alpha > 0$. Then $1/\alpha h$ is in $\Lform_1 (X, d)$, and by positive homogeneity $G (h) \leq G' (h) + \alpha r - \alpha r'$. (3) Assume $G$ continuous. The definition of $G$ ensures that $(G, r)$ is an element of $\mathbf B (\Lform\Prev X, \dKRH)$, and is an upper bound of ${(G_i, r_i)}_{i \in I}$. For every upper bound $(G', r')$ of ${(G_i, r_i)}_{i \in I}$, (2) entails that $r \geq r'$ and, for every $h \in \Lform_1 (X, d)$, $G (h) \leq G' (h) + r - r'$, whence $\dKRH (G, G') \leq r-r'$. It follows that $(G, r) \leq^{\dKRH^+} (G', r')$, showing that $(G, r)$ is the least upper bound. \qed \begin{thm}[Yoneda-completeness] \label{thm:LPrev:sup} Let $X, d$ be a standard quasi-metric space, and assume that the topology of $\Lform_\infty (X, d)$ is determined by those of $\Lform_\alpha (X, d)$, $\alpha > 0$ (e.g., if $X, d$ is standard and Lipschitz regular, see Proposition~\ref{prop:Linfty:determined}). 
Then $\Prev X, \dKRH$ is Yoneda-complete, and all suprema of directed families of formal balls of previsions are naive suprema, as described in Proposition~\ref{prop:LPrev:simplesup}. The same result holds for the subspace of previsions satisfying any given set of properties among: sublinearity, superlinearity, linearity, subnormalization, normalization, and discreteness. \end{thm} \proof Use the assumptions, notations and results of Proposition~\ref{prop:LPrev:simplesup}. It remains to show that the naive supremum $G$, as defined there, \emph{is} continuous from $\Lform_\infty (X, d)$ to $\overline{\mathbb{R}}_+$. For that, by the assumption that the topology of $\Lform_\infty (X, d)$ is determined, it suffices to show that the restriction $G_{|\Lform_\alpha X}$ of $G$ is continuous from $\Lform_\alpha (X, d)$ to $\overline{\mathbb{R}}_+$, for any $\alpha > 0$. Fix $a \in \mathbb{R}$. We must show that $\mathcal U = \{h \in \Lform_\alpha (X, d) \mid G (h) > a\}$ is open in $\Lform_\alpha (X, d)$. Using the definition of $G$, we write $\mathcal U$ as the set of maps $h \in \Lform_\alpha (X, d)$ such that $G_i (h) + \alpha r - \alpha r_i > a$ for some $i \in I$. Therefore $\mathcal U = \bigcup_{i \in I} G_{i|\Lform_\alpha X}^{-1} (]a + \alpha r_i - \alpha r, +\infty])$, which is open. \qed \subsection{The Bounded Kantorovich-Rubinshte\u\i n-Hutchinson Quasi-Metrics} \label{sec:bound-kant-rubinsht} We shall sometimes consider the following bounded variant $\dKRH^a$ of the $\dKRH$ quasi-metric. Typically, we shall be able to show that, in certain contexts, the weak topology coincides with the $\dKRH^a$-Scott topology, but not with the $\dKRH$-Scott topology. \begin{defi}[$\dKRH^a$] \label{defn:KRH:bounded} Let $X, d$ be a quasi-metric space, and $a \in \mathbb{R}_+$, $a > 0$. 
The \emph{$a$-bounded Kantorovich-Rubinshte\u\i n-Hutchinson quasi-metric} on the space of previsions on $X$ is defined by: \begin{equation} \label{eq:dKRHa} \dKRH^a (F, F') = \sup_{h \in \Lform_1^a X} \dreal (F (h), F' (h)). \end{equation} \end{defi} The single point to pay attention to is that $h$ ranges over $\Lform_1^a (X, d)$, not $\Lform_1 (X, d)$, in the definition of $\dKRH^a$. \begin{lem} \label{lemma:KRHa:qmet} Let $X, d$ be a standard quasi-metric space, and $a \in \mathbb{R}_+$, $a > 0$. For all previsions $F$, $F'$ on $X$, the following are equivalent: $(a)$ $F \leq F'$; $(b)$ $\dKRH^a (F, F') = 0$. In particular, $\dKRH^a$ is a quasi-metric. \end{lem} \proof $(a) \limp (b)$. For every $h \in \Lform_1^a (X, d)$, $F (h) \leq F' (h)$, so $\dreal (F (h), F' (h)) = 0$. $(b) \limp (a)$. Let $f$ be an arbitrary element of $\Lform X$. For every $\alpha > 0$, $1/\alpha f^{(\alpha)}$ is in $\Lform_1 (X, d)$, so $\min (1/\alpha f^{(\alpha)}, a)$ is also in $\Lform_1 (X, d)$, and is below $a.\mathbf 1$. By $(b)$, $F (\min (1/\alpha f^{(\alpha)}, a)) \leq F' (\min (1/\alpha f^{(\alpha)}, a))$. Multiply by $\alpha$: $F (\min (f^{(\alpha)}, \alpha a)) \leq F' (\min (f^{(\alpha)}, \alpha a))$. The family ${(f^{(\alpha)})}_{\alpha > 0}$ is a chain whose supremum is $f$, so ${(\min (f^{(\alpha)}, \alpha a))}_{\alpha > 0}$ is also a chain whose supremum is $f$. Using the fact that $F$ and $F'$ are Scott-continuous, $F (f) \leq F' (f)$, and as $f$ is arbitrary, $(a)$ follows. \qed The relation with the $\dKRH$ quasi-metric is as follows. \begin{lem} \label{lemma:KRH:KRHa} Let $X, d$ be a quasi-metric space. Order quasi-metrics on any space of previsions pointwise. Then ${(\dKRH^a)}_{a \in \mathbb{R}_+, a > 0}$ is a chain, and for all previsions $F$, $F'$, $\dKRH (F, F') = \sup_{a \in \mathbb{R}_+, a > 0} \dKRH^a (F, F')$. \end{lem} \proof Clearly, for $0 < a \leq a'$, $\Lform_1^a (X, d)$ is included in $\Lform_1^{a'} (X, d)$, so $\dKRH^a (F, F') \leq \dKRH^{a'} (F, F')$.
Therefore ${(\dKRH^a)}_{a \in \mathbb{R}_+, a > 0}$ is a chain. Similarly, $\dKRH^a (F, F') \leq \dKRH (F, F')$. Finally, $\sup_{a \in \mathbb{R}_+, a > 0} \dKRH^a (F, F')$ is equal to $\sup_{a \in \mathbb{R}_+, a > 0, h \in \Lform_1^a (X, d)} \dreal (F (h),F' (h))$, in other words to $\sup_{h \in \Lform_1 (X, d), h \text{ bounded}} \dreal (F (h),F' (h))$, and that is equal to $\dKRH (F, F')$ by Lemma~\ref{lemma:KRH:bounded}. \qed \begin{lem} \label{lemma:LaPrev:ord} Let $X, d$ be a standard quasi-metric space, $a > 0$, let $F$, $F'$ be two previsions on $X$, and $r$, $r'$ be two elements of $\mathbb{R}_+$. The following are equivalent: \begin{enumerate} \item $(F, r) \leq^{\dKRH^{a+}} (F', r')$; \item $r \geq r'$ and, for every $h \in \Lform_1^a (X, d)$, $F (h) - r \leq F' (h) - r'$; \item $r \geq r'$ and, for every $h \in \Lform_\alpha^a (X, d)$, $\alpha > 0$, $F (h) - \alpha r \leq F' (h) - \alpha r'$. \end{enumerate} \end{lem} \proof The proof is exactly as for Lemma~\ref{lemma:LPrev:ord}. \qed Equip $\Lform\Prev X$ and $\Lform^\bnd\Prev X$ with the quasi-metric defined by formula (\ref{eq:dKRHa}), which we shall again denote by $\dKRH^a$. The maps $F \in \Prev X \mapsto F_{|\Lform_\infty X} \in \Lform\Prev X$ and $G \in \Lform\Prev X \mapsto G_{|\Lform_\infty^\bnd X} \in \Lform^\bnd\Prev X$ are isometries, by definition. \begin{lem} \label{lemma:dKRHa:iso} Let $X, d$ be a quasi-metric space. The maps: \[ \xymatrix@R=0pt{ F \ar@{|->}[r] & F_{|\Lform_\infty X}&& G \ar@{|->}[r] & G_{|\Lform_\infty^\bnd X} \\ \Prev X & \Lform\Prev X & & \Lform\Prev X & \Lform^\bnd\Prev X \\ \overline G & G \ar@{|->}[l] && \overline{\overline H} & H \ar@{|->}[l] } \] are isometries between $\Prev X, \dKRH^a$ and $\Lform\Prev X, \dKRH^a$, and between $\Lform\Prev X, \dKRH^a$ and $\Lform^\bnd\Prev X, \dKRH^a$ for every $a > 0$. \qed \end{lem} Hence we can regard $\Prev X$, $\Lform\Prev X$, and $\Lform^\bnd\Prev X$ as the same quasi-metric spaces, when equipped with $\dKRH^a$.
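To see how $\dKRH^a$ truncates distances, here is a sketch of a computation on evaluation previsions $F_x (h) = h (x)$ and $F_y (h) = h (y)$ (our notation, not the text's); we assume $X, d$ standard, so that $d (\_, y)$ is $1$-Lipschitz continuous (Lemma~\ref{lemma:d(_,x)}).

```latex
\[
  \dKRH^a (F_x, F_y)
  = \sup_{h \in \Lform_1^a X} \dreal (h (x), h (y))
  = \min (d (x, y), a).
\]
The inequality $\leq$ holds because every $h \in \Lform_1^a (X, d)$
satisfies $h (x) \leq h (y) + d (x, y)$ and $h (x) \leq a$; the
inequality $\geq$ is witnessed by $h = \min (d (\_, y), a)$, which
lies in $\Lform_1^a (X, d)$, and satisfies $h (y) = 0$ and
$h (x) = \min (d (x, y), a)$.
```

Letting $a$ grow recovers $\dKRH (F_x, F_y) = d (x, y)$, in accordance with Lemma~\ref{lemma:KRH:KRHa}, since $\sup_{a > 0} \min (d (x, y), a) = d (x, y)$.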
\begin{prop} \label{prop:LaPrev:simplesup} Fix $a > 0$. Let $X, d$ be a standard quasi-metric space, and ${(H_i, r_i)}_{i \in I}$ be a directed family in $\mathbf B (\Lform^\bnd\Prev X, \dKRH^a)$. Let $r = \inf_{i \in I} r_i$ and, for each $h \in \Lform_\alpha^a (X, d)$, $\alpha > 0$, define $H (h)$ as $\sup_{i \in I} (H_i (h) + \alpha r - \alpha r_i)$. Then: \begin{enumerate} \item $H$ is a well-defined, positively homogeneous functional from $\Lform_\infty^\bnd (X, d)$ to $\overline{\mathbb{R}}_+$; \item for any upper bound $(H', r')$ of ${(H_i, r_i)}_{i \in I}$, $r' \leq r$ and, for every $h \in \Lform_\alpha^a (X, d)$, $\alpha > 0$, $H (h) \leq H' (h) + \alpha r - \alpha r'$; \item if $H$ is continuous, then $(H, r)$ is the supremum of ${(H_i, r_i)}_{i \in I}$; \item if every $H_i$ is sublinear, resp.\ superlinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized, resp.\ discrete, then so is $H$. \end{enumerate} \end{prop} Note that $H$ is indeed defined on the whole of $\Lform_\infty^\bnd (X, d)$, by Lemma~\ref{lemma:Linfa}. We again call $(H, r)$ the \emph{naive supremum} of ${(H_i, r_i)}_{i \in I}$. \proof We mimic Proposition~\ref{prop:LPrev:simplesup}. Again, this definition does not depend on $\alpha$, because $H (h)$ is the limit of ${(H_i (h))}_{i \in I, \sqsubseteq}$ taken in $\overline{\mathbb{R}}_+$ with its usual, Hausdorff topology (not the Scott topology). Using the formula $H (h) = \lim_{i \in I, \sqsubseteq} H_i (h)$, it is easy to show that $H$ is sublinear, resp.\ superlinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized, when every $H_i$ is, because $+$ and products by scalars are continuous on $\overline{\mathbb{R}}_+$ with its usual topology. Similarly, $H$ is discrete whenever every $H_i$ is, using Lemma~\ref{lemma:Lbeta:creal}. This shows (1) and (4).
(2) and (3) are proved as in Proposition~\ref{prop:LPrev:simplesup}, taking care to quantify over $h$ in $\Lform_1^a (X, d)$, or in $\Lform_\alpha^a (X, d)$. \qed We deduce the following theorem with exactly the same proof as for Theorem~\ref{thm:LPrev:sup}, using Proposition~\ref{prop:LaPrev:simplesup} instead of Proposition~\ref{prop:LPrev:simplesup}. \begin{thm}[Yoneda-completeness] \label{thm:LaPrev:sup} Let $X, d$ be a standard quasi-metric space, and fix $a > 0$. Assume that the topology of $\Lform_\infty^\bnd (X, d)$ is determined by those of $\Lform_\alpha^a (X, d)$, $\alpha > 0$ (e.g., if $X, d$ is standard and Lipschitz regular, see Proposition~\ref{prop:Linfty:determined:bnd}). Then $\Prev X, \dKRH^a$ is Yoneda-complete, and all suprema of directed families of formal balls of previsions are naive suprema, as described in Proposition~\ref{prop:LaPrev:simplesup}. The same result holds for the subspace of previsions satisfying any given set of properties among: sublinearity, superlinearity, linearity, subnormalization, normalization, and discreteness. \qed \end{thm} \subsection{Supports} \label{sec:supports} Our previous Yoneda-completeness theorems for spaces of previsions make a strong assumption, Lipschitz regularity. We show that we also obtain Yoneda-completeness under different assumptions, which have to do with supports. \begin{defi}[Support] \label{defn:support} Let $X$ be a topological space. A subset $A$ of $X$ is called a \emph{support} of a prevision $F$ if and only if $F (g) = F (h)$ for all $g, h \in \Lform X$ with the same restriction to $A$ (i.e., such that $g_{|A} = h_{|A}$). \end{defi} We also say that $F$ is \emph{supported on $A$} in that case. We do not know whether $\Prev$ defines a functor on the category of standard quasi-metric spaces and $1$-Lipschitz continuous maps; however, we have the following partial result.
\begin{lem} \label{lemma:Pf:lip} Let $X, d$ and $Y, \partial$ be two quasi-metric spaces, and $f \colon X, d \to Y, \partial$ be an $\alpha$-Lipschitz continuous map, where $\alpha > 0$. Then $\Prev f \colon \Prev X, \dKRH \to \Prev Y, \KRH\partial$, defined by: \begin{equation} \label{eq:Prevf} \Prev f (F) (k) = F (k \circ f) \end{equation} for every $k \in \Lform Y$, is a well-defined, $\alpha$-Lipschitz map, which maps previsions to previsions, preserving sublinearity, superlinearity, linearity, subnormalization, normalization, and discreteness. A similar result holds for $\Prev f$, seen as a map from $\Prev X, \dKRH^a$ to $\Prev Y, \KRH\partial^a$, for every $a \in \mathbb{R}_+$, $a > 0$. \end{lem} \proof It is easy to check that $\Prev f (F)$ is a prevision, which is sublinear, resp.\ superlinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized, resp.\ discrete, whenever $F$ is. To show that $\Prev f$ is $\alpha$-Lipschitz from $\Prev X, \dKRH$ to $\Prev Y, \KRH\partial$, we assume that $\dKRH (F, F') \leq r$, and we wish to show that $\KRH{\partial} (\Prev f (F), \Prev f (F')) \leq \alpha r$. To this end, we let $k \in \Lform_1 (Y, \partial)$ be arbitrary, and we show that $\Prev f (F) (k) \leq \Prev f (F') (k) + \alpha r$. By Lemma~\ref{lemma:comp:Lip}, $k \circ f$ is in $\Lform_\alpha (X, d)$, so $1/\alpha (k \circ f)$ is in $\Lform_1 (X, d)$. We expand the definition of $\dKRH$ in $\dKRH (F, F') \leq r$, and we obtain that $F (1/\alpha (k \circ f)) \leq F' (1/\alpha (k \circ f)) + r$. Using positive homogeneity, we deduce $F (k \circ f) \leq F' (k \circ f) + \alpha r$, which is what we wanted to prove. The proof that $\Prev f$ is $\alpha$-Lipschitz from $\Prev X, \dKRH^a$ to $\Prev Y, \KRH\partial^a$ is the same, noticing that if $k$ is bounded by $a$, then $k \circ f$ is also bounded by $a$.
\qed The one thing that is missing in order to show that $\Prev$ is a functor, in a category whose morphisms are $1$-Lipschitz continuous maps, is continuity. We have the following approximation, which shows that naive suprema are preserved. \begin{lem} \label{lemma:Pf:lipcont} Let $X, d$ and $Y, \partial$ be two quasi-metric spaces, and $f \colon X, d \to Y, \partial$ be a $1$-Lipschitz continuous map. For every directed family of formal balls $(G_i, r_i)$, $i \in I$, in $\mathbf B (\Prev X, \dKRH)$ (resp., $\dKRH^a$), with naive supremum $(G, r)$, $(\Prev f (G), r)$ is the naive supremum of ${(\Prev f (G_i), r_i)}_{i \in I}$ in $\mathbf B (\Prev Y, \KRH{\partial})$ (resp., $\KRH{\partial}^a$). \end{lem} \proof By definition, $r = \inf_{i \in I} r_i$, and $G (h) = \sup_{i \in I} (G_i (h) + \alpha r - \alpha r_i)$ for every $h \in \Lform_\alpha (X, d)$ (resp., in $\Lform_\alpha^a (X, d)$), for every $\alpha \in \mathbb{R}_+$, $\alpha > 0$. It follows that, for every $k \in \Lform_\alpha (Y, \partial)$ (resp., in $\Lform_\alpha^a (Y, \partial)$), and using the fact that $k \circ f$ is in $\Lform_\alpha (X, d)$ by Lemma~\ref{lemma:comp:Lip} (resp., in $\Lform_\alpha^a (X, d)$, since $k \circ f$ is bounded by $\alpha a$, just like $k$), $\Prev f (G) (k) = G (k \circ f) = \sup_{i \in I} (G_i (k \circ f) + \alpha r - \alpha r_i) = \sup_{i \in I} (\Prev f (G_i) (k) + \alpha r - \alpha r_i)$. \qed If $X, d$ is standard, then Lemma~\ref{lemma:eta:lipcont} states that $\eta_X$ is $1$-Lipschitz continuous. Hence Lemma~\ref{lemma:Pf:lip} implies that $\Prev \eta_X (F)$ is a prevision on $\mathbf B (X, d), d^+$, which one can see as an extension of $F$ from $X$ to the superspace $\mathbf B (X, d)$. Moreover, $\Prev \eta_X$ is $1$-Lipschitz, so $\mathbf B^1 (\Prev \eta_X)$ is monotonic.
This implies that for every directed family ${(F_i, r_i)}_{i \in I}$ in $\mathbf B (\Prev X, \dKRH)$, ${(\Prev \eta_X (F_i), r_i)}_{i \in I}$ is also a directed family, this time in $\mathbf B (\Prev (\mathbf B (X, d)), \KRH{d^+})$. Because $\mathbf B (X, d), d^+$ is Lipschitz regular (Theorem~\ref{thm:B:lipreg}) and standard (Proposition~\ref{prop:B:std}), ${(\Prev \eta_X (F_i), r_i)}_{i \in I}$ has a supremum $(G, r)$, as an application of Theorem~\ref{thm:LPrev:sup}. That supremum $(G, r)$ is the naive supremum, which is to say that $r = \inf_{i \in I} r_i$, and: \begin{equation} \label{eq:G} G (k) = \sup_{i \in I} (F_i (k \circ \eta_X) + \alpha r - \alpha r_i) \end{equation} for every $k \in \Lform_\alpha (\mathbf B (X, d), d^+)$, $\alpha > 0$. Recall that $V_\epsilon$ is the set of formal balls of radius strictly less than $\epsilon$. \begin{lem} \label{lemma:G:supp:Veps} Let $X, d$ be a standard quasi-metric space, ${(F_i, r_i)}_{i \in I}$ be a directed family in $\mathbf B (\Prev X, \dKRH)$, let $r = \inf_{i \in I} r_i$ and $G$ be the prevision on $\mathbf B (X, d)$ defined in (\ref{eq:G}). For every $\epsilon > 0$, $G$ is supported on $V_\epsilon$. \end{lem} \proof Formally, $G$, as defined in (\ref{eq:G}), is merely an $\Lform$-prevision, and the prevision we are considering is $\overline G$, not $G$. We wish to show that $\overline G$ is supported on $V_\epsilon$. In order to do so, we will prove that $\overline G (f) = \overline G (f \chi_{V_\epsilon})$ for every $f \in \Lform (\mathbf B (X, d))$. Using (\ref{eq:G}) and the definition of $\overline G$, then Proposition~\ref{prop:f(alpha)}, we have: \begin{align*} \overline G (f) & = \sup_{\alpha \in \mathbb{R}_+, i \in I} \left( F_i (f^{(\alpha)} \circ \eta_X) + \alpha r - \alpha r_i \right) \\ & = \sup_{\alpha \in \mathbb{R}_+, i \in I, K \in \nat} \left( F_i (f_K^{(\alpha)} \circ \eta_X) + \alpha r - \alpha r_i \right) \end{align*} which is a directed supremum.
We claim that, for $\alpha$ large enough, $f_K^{(\alpha)} \circ \eta_X = {(f \chi_{V_\epsilon})}_K^{(\alpha)} \circ \eta_X$, equivalently that $f_K^{(\alpha)} (x, 0) = {(f \chi_{V_\epsilon})}_K^{(\alpha)} (x, 0)$ for every $x \in X$. This will prove the desired result. Let us fix $K$, and let us write $f_K$ as $\frac 1 {2^K} \sup_{k=1}^{K2^K} k \chi_{U_k}$, where $U_k = f^{-1} (]k/2^K, +\infty])$. Since ${(f \chi_{V_\epsilon})}^{-1} (]k/2^K, +\infty]) = U_k \cap V_\epsilon$, we have ${(f \chi_{V_\epsilon})}_K = \frac 1 {2^K} \sup_{k=1}^{K2^K} k \chi_{U_k \cap V_\epsilon}$. Using Lemma~\ref{lemma:f(alpha):step}, we have: \begin{equation} \label{eq:G:supp:Veps} f_K^{(\alpha)} (x, 0) = \min \left(\min\nolimits_{k=1}^{K2^K} \left(\frac {k-1} {2^K} + \alpha d^+ ((x, 0), \overline {U_k})\right), K \right), \end{equation} and similarly for ${(f \chi_{V_\epsilon})}_K^{(\alpha)}$, replacing each $U_k$ by $U_k \cap V_\epsilon$. We pick $\alpha > K/\epsilon$. For every $x \in X$, for every $k$ ($1\leq k \leq K2^K$), $d^+ ((x, 0), \overline {U_k}) = \sup \{s \in \mathbb{R}_+ \mid ((x, 0), s) \in \widehat {U_k}\}$, where, for each $d^+$-Scott open subset $V$ of $\mathbf B (X, d)$, $\widehat V$ denotes the largest Scott-open subset of $\mathbf B (\mathbf B (X, d), d^+)$ whose intersection with $\mathbf B (X, d)$ is equal to $V$. We apply Lemma~\ref{lemma:Uhat:alpha} (see also Remark~\ref{rem:Uhat:alpha}) to the case of the free $\mathbf B$-algebra on $X$, whose structure map is $\mu_X$, and we obtain that $\widehat V = \mu_X^{-1} (V)$. Hence $d^+ ((x, 0), \overline {U_k}) = \sup \{s \in \mathbb{R}_+ \mid \mu_X ((x, 0), s) \in U_k\} = \sup \{s \in \mathbb{R}_+ \mid (x, s) \in U_k\}$.
Hence we can rewrite equation~(\ref{eq:G:supp:Veps}) as: \[ f_K^{(\alpha)} (x, 0) = \min \left(\min\nolimits_{k=1}^{K2^K} \left(\sup \left\{\frac {k-1} {2^K} + \alpha s \mid s \in \mathbb{R}_+, (x, s) \in U_k\right\}\right), K \right). \] In the supremum over $s$, we can ignore all the values $s \geq \epsilon$, since they would contribute $\frac {k-1} {2^K} + \alpha s > K$ to the min, and the latter formula defines $f_K^{(\alpha)} (x, 0)$ as a min with $K$ anyway. Hence: \begin{align*} f_K^{(\alpha)} (x, 0) & = \min \left(\min\nolimits_{k=1}^{K2^K} \left(\sup \left\{\frac {k-1} {2^K} + \alpha s \mid s \in [0, \epsilon[, (x, s) \in U_k\right\}\right), K \right) \\ & = \min \left(\min\nolimits_{k=1}^{K2^K} \left(\sup \left\{\frac {k-1} {2^K} + \alpha s \mid s \in \mathbb{R}_+, (x, s) \in U_k \cap V_\epsilon\right\}\right), K \right) \\ & = {(f \chi_{V_\epsilon})}_K^{(\alpha)} (x, 0), \end{align*} which proves the claim. \qed So $G$ is supported on every $V_\epsilon$, $\epsilon > 0$. If we could prove the slightly stronger statement that $G$ is supported on $X$, seen as a subset of $\mathbf B (X, d)$ (formally, on the set of formal balls of radius $0$), then $G$ would be a prevision on $X$, in the sense that $G = \Prev \eta_X (F)$ for some prevision $F$ on $X$. Moreover, $(F, r)$ would be the supremum of ${(F_i, r_i)}_{i \in I}$ in $\mathbf B (\Prev X, \dKRH)$. \begin{prop} \label{prop:supp:X} Let $X, d$ be a standard quasi-metric space, ${(F_i, r_i)}_{i \in I}$ be a directed family in $\mathbf B (\Prev X, \dKRH)$, let $r = \inf_{i \in I} r_i$ and $G$ be the prevision on $\mathbf B (X, d)$ defined in (\ref{eq:G}).
If $G$ is supported on $X$, then: \begin{enumerate} \item $G = \Prev \eta_X (F)$ for some unique prevision $F$ on $X$; \item if $G$ is superlinear, resp.\ sublinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized, resp.\ discrete, then so is $F$; \item $(F, r)$ is the naive supremum of ${(F_i, r_i)}_{i \in I}$ in $\mathbf B (\Prev X, \dKRH)$; \item $(F, r)$ is the supremum of ${(F_i, r_i)}_{i \in I}$ in $\mathbf B (\Prev X, \dKRH)$. \end{enumerate} \end{prop} \proof (1) For every $h \in \Lform X$, let $\widehat h \colon \mathbf B (X, d) \to \overline{\mathbb{R}}_+$ be defined by $\widehat h (x, r) = \sup \{a \in \mathbb{R}_+ \mid (x, r) \in \widehat {h^{-1} (]a, +\infty])}\}$. For every $b \in \mathbb{R}$, ${\widehat h}^{-1} (]b, +\infty]) = \bigcup_{a > b} \widehat {h^{-1} (]a, +\infty])}$ is open, so $\widehat h$ is (Scott-)continuous. For $r=0$, $\widehat h (x, 0) = \sup \{a \in \mathbb{R}_+ \mid (x, 0) \in \widehat {h^{-1} (]a, +\infty])}\} = \sup \{a \in \mathbb{R}_+ \mid x \in h^{-1} (]a, +\infty])\} = \sup \{a \in \mathbb{R}_+ \mid h (x) > a\} = h (x)$. Therefore $\widehat h$ is a Scott-continuous extension of $h$ to $\mathbf B (X, d)$. If $G = \Prev \eta_X (F)$ for some $F$, then for every $h \in \Lform X$, $G (\widehat h) = F (\widehat h \circ \eta_X) = F (h)$, and this shows that $F$ is unique if it exists. Now define $F (h)$ as $G (\widehat h)$. One could show that $\widehat {ah} = a \widehat h$ for every $h \in \Lform X$ and $a \in \mathbb{R}_+$, and this would imply that $F$ is positively homogeneous. But $h \mapsto \widehat h$ does not commute with directed suprema (unless $X, d$ is Lipschitz regular, an assumption we do not make), so that a similar argument cannot show that $F$ is Scott-continuous. Instead, we observe that $\widehat {ah}$ and $a \widehat h$ have the same restriction to $X$: $\widehat {ah} (x, 0) = (ah) (x) = a h (x) = a \widehat h (x, 0)$.
Since $G$ is supported on $X$, $G (\widehat {ah}) = G (a \widehat h)$, and since the latter is equal to $a G (\widehat h)$, we obtain that $F (ah) = a F (h)$. We can show that $F$ is Scott-continuous by a similar argument. First, $h \mapsto \widehat h$ is monotonic, owing to the fact that $U \mapsto \widehat U$ is itself monotonic. It follows that $F$ is monotonic. Let now ${(h_i)}_{i \in I}$ be a directed family in $\Lform X$, and $h$ be its supremum. Then $\widehat h (x, 0) = h (x) = \sup_{i \in I} h_i (x) = \sup_{i \in I} \widehat {h_i} (x, 0)$, so $\widehat h$ and $\sup_{i \in I} \widehat {h_i}$ have the same restriction to $X$. Hence $F (h) = G (\widehat h) = G (\sup_{i \in I} \widehat {h_i}) = \sup_{i \in I} G (\widehat {h_i}) = \sup_{i \in I} F (h_i)$. (2) One may observe that (2) is a consequence of (3), using Proposition~\ref{prop:LPrev:simplesup}~(4) to show that naive suprema of directed families of formal balls of previsions on $\mathbf B (X, d)$ preserve superlinearity, sublinearity, etc. However, a direct proof is equally simple. Using the fact that $\widehat g + \widehat h$ and $\widehat {g+h}$ have the same restriction $g+h$ to $X$, we obtain that $F$ is superlinear, resp.\ sublinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized if $G$ is. It remains to show that if $G$ is discrete, then $F$ is discrete as well. Let $f$ be any strict map in $\Lform \overline{\mathbb{R}}_+$. Then $F (f \circ h) = G (\widehat {f \circ h})$. We have $\widehat {f \circ h} (x, 0) = f (h (x)) = f (\widehat h (x, 0))$, so $\widehat {f \circ h}$ and $f \circ \widehat h$ have the same restriction to $X$. It follows that $F (f \circ h) = G (f \circ \widehat h) = f (G (\widehat h)) = f (F (h))$, showing that $F$ is discrete. (3) For every $h \in \Lform_\alpha (X, d)$, $\alpha > 0$, $F (h) = G (\widehat h)$ by definition. However, $\widehat h$ is merely Scott-continuous, so we build another map $h'$, Lipschitz continuous this time, which coincides with $h$ on $X$, as follows.
Since $\overline{\mathbb{R}}_+, \dreal$ is Yoneda-complete hence standard, $\mathbf B (\overline{\mathbb{R}}_+, \dreal), \dreal^+$ is also standard by Proposition~\ref{prop:B:std}. We can therefore apply Lemma~\ref{lemma:d(_,x)}: $\dreal^+ (\_, (y, s))$ is $1$-Lipschitz continuous for every $(y, s) \in \mathbf B (\overline{\mathbb{R}}_+, \dreal)$. Taking $y=s=0$, the map $m = \dreal^+ (\_, (0, 0)) \colon (x, r) \mapsto \max (x-r, 0)$ is $1$-Lipschitz continuous. We define $h'$ as $m \circ \mathbf B^\alpha (h)$. Note that $\mathbf B^\alpha (h)$ is $\alpha$-Lipschitz continuous by Lemma~\ref{lemma:Balpha:f}, hence $h'$ is also $\alpha$-Lipschitz continuous, by Lemma~\ref{lemma:comp:Lip}. Explicitly, $h' (x, r) = \max (h (x) - \alpha r, 0)$. Taking $r=0$, we see that $h'$ coincides with $h$ on $X$, or more precisely, $h' \circ \eta_X = h$. Since $h'$ is $\alpha$-Lipschitz continuous, $G (h') = \sup_{i \in I} (F_i (h' \circ \eta_X) + \alpha r - \alpha r_i)$. We rewrite $h' \circ \eta_X$ as $h$, and note that $G (h') = G (\widehat h) = F (h)$, since $G$ is supported on $X$ and $h'$ and $\widehat h$ have the same restriction $h$ to $X$; whence $F (h) = \sup_{i \in I} (F_i (h) + \alpha r - \alpha r_i)$. That shows that $(F, r)$ is the naive supremum of ${(F_i, r_i)}_{i \in I}$. (4) By Proposition~\ref{prop:LPrev:simplesup}~(3) (and the isomorphism between $\Lform\Prev X$ and $\Prev X$ that ensues from Proposition~\ref{prop:Gbar}), any naive supremum of a directed family that happens to be continuous, such as $F$, is the supremum. \qed To recap, we would be able to show that $\Prev X$ (or any of its subspaces $Y$ of interest, defined by any combination of superlinearity, sublinearity, linearity, subnormalization, normalization, discreteness) is Yoneda-complete, assuming $X, d$ standard, as follows. Consider a directed family ${(F_i, r_i)}_{i \in I}$ in $\mathbf B (Y, \dKRH)$. We make a detour through $\mathbf B (X, d)$, and realize that ${(\Prev \eta_X (F_i), r_i)}_{i \in I}$ is directed, and has a (naive) supremum $(G, r)$ given in (\ref{eq:G}).
$G$ is supported on $V_\epsilon$ for $\epsilon > 0$ as small as we wish (Lemma~\ref{lemma:G:supp:Veps}). If we could show that $G$ is supported on $X$, which happens to be the intersection of countably many such sets $V_\epsilon$ (for $\epsilon=1/2^n$, $n \in \nat$; recall Lemma~\ref{lemma:Veps}), then Proposition~\ref{prop:supp:X} would imply that ${(F_i, r_i)}_{i \in I}$ would itself have a supremum in $\mathbf B (Y, \dKRH)$, and that it would be computed as the naive supremum. Then $Y$ would be Yoneda-complete. While it might seem intuitive that any prevision that is supported on $V_\epsilon$ for arbitrarily small values of $\epsilon > 0$ should in fact be supported on $X$, this is exactly what is missing in our argument. We shall fix that in several special cases. In the meantime, we state what we have as the following proposition. \begin{prop}[Conditional Yoneda-completeness] \label{prop:supp:complete} Let $X, d$ be a standard quasi-metric space. Let $S$ be any subset of the following properties on previsions: sublinearity, superlinearity, linearity, subnormalization, normalization, discreteness; let $Y$ denote the set of previsions on $X$ satisfying $S$, and $Z$ be the set of previsions on $\mathbf B (X, d)$ that satisfy $S$. If every element of $Z$ that is supported on $V_{1/2^n}$ for every $n \in \nat$ is in fact supported on $X$, then $Y, \dKRH$ is Yoneda-complete, and directed suprema in $\mathbf B (Y, \dKRH)$ are computed as naive suprema. The same result holds for $\dKRH^a$ in lieu of $\dKRH$, for any $a > 0$. \qed \end{prop} \subsection{Preparing for Algebraicity} \label{sec:continuity-results} Having dealt with Yoneda-completeness, under some provisos, we now prepare the grounds for later results, which will state that certain spaces of previsions are algebraic. The following will be instrumental in characterizing the center points of those spaces. 
\begin{prop} \label{prop:dKRH:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, $\alpha \in \mathbb{R}_+$, and $a \in \mathbb{R}_+$, $a > 0$. For every $n \in \nat$, for all $a_1, \ldots, a_n \in \mathbb{R}_+$ and every $n$-tuple of center points $x_1$, \ldots, $x_n$ in $X, d$, for every prevision $F$ on $X$, the following maps are continuous from $\Lform_\alpha (X, d)^\patch$ (resp., $\Lform_\alpha^a (X, d)^\patch$) to $(\overline{\mathbb{R}}_+)^\dG$: \begin{enumerate} \item $f \mapsto \dreal (\sum_{i=1}^n a_i f (x_i), F (f))$, \item $f \mapsto \dreal (\max_{i=1}^n f (x_i), F (f))$, \item $f \mapsto \dreal (\min_{i=1}^n f (x_i), F (f))$. \end{enumerate} \end{prop} \proof We only deal with $\Lform_\alpha (X, d)$, as the case of $\Lform_\alpha^a (X, d)$ is similar. As a consequence of Corollary~\ref{corl:cont:Lalpha:simple} and the fact that the patch topology is finer than the cocompact topology, the map $f \mapsto \sum_{i=1}^n a_i f (x_i)$ is continuous from $\Lform_\alpha (X, d)^\patch$ to $(\overline{\mathbb{R}}_+)^\dG$. Similarly for $f \mapsto \max_{i=1}^n f (x_i)$ and for $f \mapsto \min_{i=1}^n f (x_i)$. $F$ is continuous from $\Lform X$ to $\overline{\mathbb{R}}_+$, hence also from $\Lform_\alpha (X, d)$ to $\overline{\mathbb{R}}_+$ (because $\Lform_\alpha (X, d)$ has the subspace topology from $\Lform X$), hence also from $\Lform_\alpha (X, d)^\patch$ to $\overline{\mathbb{R}}_+$. The claim will then follow from the fact that $(s, t) \mapsto \dreal (s, t)$ is continuous from $(\overline{\mathbb{R}}_+)^\dG \times \overline{\mathbb{R}}_+$ to $(\overline{\mathbb{R}}_+)^\dG$, which we now prove. The non-trivial open subsets of $(\overline{\mathbb{R}}_+)^\dG$ are the open intervals $[0, a[$. The inverse image of the latter by $\dreal$ is $\{(s, t) \in \overline{\mathbb{R}}_+ \times \overline{\mathbb{R}}_+ \mid s < t + a\}$. If $s < a$, then $(s, t)$ lies in $[0, a[ \times \overline{\mathbb{R}}_+$; otherwise $t > s - a \geq 0$, and $(s, t)$ lies in $[0, b+a[ \times ]b, +\infty]$ for any $b \in \mathbb{R}_+$ with $s-a < b < t$. Hence the inverse image is the open set $([0, a[ \times \overline{\mathbb{R}}_+) \cup \bigcup_{b \in \mathbb{R}_+} [0, b+a[ \times ]b, +\infty]$. \qed The sum $\sum_{i=1}^n a_i f (x_i)$ is just the integral of $f$ with respect to the so-called simple valuation $\sum_{i=1}^n a_i \delta_{x_i}$, all notions we shall introduce later. Since all continuous maps from a compact space to $(\overline{\mathbb{R}}_+)^\dG$ reach their maximum, we obtain immediately: \begin{lem} \label{lemma:dKRH:max} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, let $\nu = \sum_{i=1}^n a_i \delta_{x_i}$ be a simple valuation on $X$ such that every $x_i$ is a center point, and let $F$ be a prevision on $X$. There is an $h \in \Lform_1 (X, d)$ such that $\dKRH (\nu, F) = \dreal (\sum_{i=1}^n a_i h (x_i), F (h))$; in other words, the supremum in (\ref{eq:dKRH}) is reached. Similarly, for every $a \in \mathbb{R}_+$, $a > 0$, the supremum in (\ref{eq:dKRHa}) is reached: there is an $h \in \Lform_1^a (X, d)$ such that $\dKRH^a (\nu, F) = \dreal (\sum_{i=1}^n a_i h (x_i), F (h))$. \qed \end{lem} One can exhibit an explicit function $h$ at which the supremum is reached. Recall the functions $\sea x b$ from Section~\ref{sec:functions-sea-x}. \begin{prop} \label{prop:dKRH:max:d} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, let $\nu = \sum_{i=1}^n a_i \delta_{x_i}$ be a simple valuation on $X$ such that every $x_i$ is a center point, and let $F$ be a prevision on $X$. There are numbers $b_i \in \overline{\mathbb{R}}_+$, $1\leq i\leq n$, such that $\dKRH (\nu, F) = \dreal (\sum_{i=1}^n a_i h (x_i), F (h))$, where $h = \bigvee_{i=1}^n \sea {x_i} {b_i}$. Similarly, for every $a \in \mathbb{R}_+$, $a > 0$, there are numbers $b_i \in [0, \alpha a]$, $1\leq i \leq n$, such that $\dKRH^a (\nu, F) = \dreal (\sum_{i=1}^n a_i h (x_i), F (h))$, where $h = \bigvee_{i=1}^n \sea {x_i} {b_i}$. \end{prop} \proof We deal with the $\dKRH$ case, the $\dKRH^a$ case is similar.
Using Lemma~\ref{lemma:dKRH:max}, there is an $h_0 \in \Lform_1 (X, d)$ such that $\dKRH (\nu, F) = \dreal (\sum_{i=1}^n a_i h_0 (x_i), \allowbreak F (h_0))$. Let $b_i = h_0 (x_i)$, $1\leq i\leq n$, and define $h$ as $\bigvee_{i=1}^n \sea {x_i} {b_i}$. By Lemma~\ref{lemma:sea:min}, $h$ is in $\Lform_1 (X, d)$ and $h \leq h_0$, so $F (h) \leq F (h_0)$. Since $\dreal$ is antitone in its second argument, $\dKRH (\nu, F) \leq \dreal (\sum_{i=1}^n a_i h_0 (x_i), F (h))$, and since $h_0 (x_i) = h (x_i)$ for each $i$, $\dKRH (\nu, F) \leq \dreal (\sum_{i=1}^n a_i h (x_i), F (h))$. The reverse inequality is by definition of $\dKRH$. \qed \subsection{Topologies on Spaces of Previsions} \label{sec:topol-spac-prev} There is a natural topology on $\Prev X$, called the \emph{weak} topology, or the \emph{Scott weak$^*$} topology: \begin{defi}[Weak topology] \label{defn:weak} For every topological space $X$, the \emph{weak topology} on $\Prev X$ has subbasic open sets $[h > b] = \{F \in \Prev X \mid F (h) > b\}$, $h \in \Lform X$, $b \in \mathbb{R}_+$. \end{defi} In other words, the weak topology is the coarsest topology such that $F \mapsto F (h)$ is continuous from $\Prev X$ to $\overline{\mathbb{R}}_+$ (with its Scott topology), for every $h \in \Lform X$. When $X, d$ is a standard quasi-metric space, another subbase for the weak topology is given by the open sets $[h > b]$, where now $h$ is restricted to be 1-Lipschitz continuous. This is because, for $h \in \Lform X$, $h = \sup_{\alpha > 0} h^{(\alpha)}$, so $[h > b] = \bigcup_{\alpha > 0} [1/\alpha h^{(\alpha)} > b/\alpha]$. By the same argument, this time using the fact that $h = \sup_{\beta > 0} \min (h, \beta)$, hence that $[h > b] = \bigcup_{\beta > 0} [\min (h, \beta) > b]$, the open sets $[h > b]$ also form a subbase of the weak topology, where $h$ is now \emph{bounded} and in $\Lform X$. Now fix $a \in \mathbb{R}_+$, $a > 0$.
For $h$ bounded in $\Lform X$, $h$ is the directed supremum of bounded Lipschitz continuous maps $h^{(\alpha)}$, $\alpha > 0$, so the same argument shows that an even smaller subbase is given by the sets $[h > b]$ where $h \in \Lform_\infty^\bnd (X, d)$. By Lemma~\ref{lemma:Linfa}, $h$ is in $\Lform_\alpha^a (X, d)$ for some $\alpha > 0$. Then $[h > b] = [1/\alpha h > b/\alpha]$, so that we can take the sets $[h > b]$ as a subbase of the weak topology, where $h$ is now in $\Lform_1^a (X, d)$. This will be used in the next proposition. \begin{prop} \label{prop:weak:dScott:a} Let $X, d$ be a standard quasi-metric space, $a, a' > 0$ with $a \leq a'$. Let $S$ be a subset of the following properties on previsions: sublinearity, superlinearity, linearity, subnormalization, normalization, discreteness; and let $Y$ denote the set of previsions on $X$ satisfying $S$. Assume finally that the spaces $Y, \dKRH^a$, $Y, \dKRH^{a'}$ and $Y, \dKRH$ are Yoneda-complete and that directed suprema in their spaces of formal balls are computed as naive suprema. Then we have the following inclusions of topologies on $Y$: \begin{quote} weak $\subseteq$ $\dKRH^a$-Scott $\subseteq$ $\dKRH^{a'}$-Scott $\subseteq$ $\dKRH$-Scott. \end{quote} \end{prop} \proof \emph{First inclusion.} We use yet another topology, this time on $\mathbf B (Y, \dKRH^a)$. Let $[h > b]^+$ be defined as the set of those formal balls $(F, r)$ with $F \in Y$ such that $F (h) > r+b$. The \emph{weak$^+$ topology} on $\mathbf B (Y, \dKRH^a)$ has a subbase of open sets given by the sets $[h > b]^+$, for every $h \in \Lform_1^a (X, d)$ and $b \in \mathbb{R}_+$. $[h > b]^+$ is upwards-closed in $\mathbf B (Y, \dKRH^a)$. Indeed, assume that $(G, r) \in [h > b]^+$, where $h \in \Lform_1^a (X, d)$, and that $(G, r) \leq^{\dKRH^{a+}} (G', r')$. By Lemma~\ref{lemma:LaPrev:ord}~(2), $G (h) - r \leq G' (h) - r'$. Since $G (h) > r+b$, $G' (h) > r'+b$, namely $(G', r') \in [h > b]^+$.
To show that $[h > b]^+$ is Scott-open in $\mathbf B (Y, \dKRH^a)$, let ${(G_i, r_i)}_{i \in I}$ be a directed family with (naive) supremum $(G, r)$ in $\mathbf B (Y, \dKRH^a)$ and assume that $(G, r) \in [h > b]^+$. Since $G (h) = \sup_{i \in I} (G_i (h) + r -r_i) > r+b$, $G_i (h) > r_i + b$ for some $i \in I$, i.e., $(G_i, r_i)$ is in $[h > b]^+$. We can now proceed to show that $[h > b]$ is $\dKRH^a$-Scott open, for every $h \in \Lform_1^a (X, d)$. Equating $Y$ with the subset of all formal balls $(G, 0)$, $G \in Y$, $[h > b]$ is equal to $Y \cap [h > b]^+$. Since $[h > b]^+$ is Scott-open, $[h > b]$ is $\dKRH^a$-Scott open. \emph{Second and third inclusions.} The proofs of the second and third inclusions are similar. We rely on the easily checked inequalities $\dKRH^a \leq \dKRH^{a'} \leq \dKRH$, for $a \leq a'$. This implies that: $(*)$ $(F, r) \leq^{\dKRH^+} (F', s)$ implies $(F, r) \leq^{\dKRH^{a'+}} (F', s)$, and that $(F, r) \leq^{\dKRH^{a'+}} (F', s)$ implies $(F, r) \leq^{\dKRH^{a+}} (F', s)$. Let $\mathcal U$ be a Scott-open subset of $\mathbf B (Y, \dKRH^a)$. Since $\mathcal U$ is upwards-closed in $\leq^{\dKRH^{a+}}$, it is also upwards-closed in $\leq^{\dKRH^{a'+}}$ by $(*)$. Assume now that $(F, r) \in \mathcal U$ is the supremum in $\mathbf B (Y, \dKRH^{a'})$ of a family ${(F_i, r_i)}_{i \in I}$ that is directed with respect to $\leq^{\dKRH^{a'+}}$. By $(*)$ again, ${(F_i, r_i)}_{i \in I}$ is directed with respect to $\leq^{\dKRH^{a+}}$. Therefore ${(F_i, r_i)}_{i \in I}$ has a supremum $(F', r')$ in $\mathbf B (Y, \dKRH^a)$ that is uniquely characterized by $r'=\inf_{i \in I} r_i$, $F' (h) = \sup_{i \in I} (F_i (h) + r' - r_i)$ for every $h \in \Lform_1^a (X, d)$, since suprema are assumed to be naive. However, by naivety again, $r = \inf_{i \in I} r_i = r'$ and $F (h) = \sup_{i \in I} (F_i (h) + r - r_i)$ for every $h \in \Lform_1^{a'} (X, d)$. The latter formula holds in particular for every $h \in \Lform_1^a (X, d)$, so $F=F'$.
We have therefore obtained that $(F, r)$ is also the supremum of the directed family ${(F_i, r_i)}_{i \in I}$ in $\mathbf B (Y, \dKRH^a)$. Since $\mathcal U$ is Scott-open in $\mathbf B (Y, \dKRH^a)$ and $(F, r) \in \mathcal U$, some $(F_i, r_i)$ is also in $\mathcal U$. This shows that $\mathcal U$ is Scott-open in $\mathbf B (Y, \dKRH^{a'})$. Similarly, we show that every Scott-open subset of $\mathbf B (Y, \dKRH^{a'})$ is Scott-open in $\mathbf B (Y, \dKRH)$. By taking intersections $\mathcal U \cap Y$, it follows that every $\dKRH^a$-Scott open is $\dKRH^{a'}$-Scott open, and that every $\dKRH^{a'}$-Scott open is $\dKRH$-Scott open. \qed We shall see that all the mentioned topologies are in fact equal in a number of interesting cases. Proposition~\ref{prop:weak:dScott:a} is the common core of those results. \subsection{Extending Previsions to Lipschitz Maps} \label{sec:extend-prev-lipsch} A prevision $F$ can be thought of as a generalized form of integral: for every $h \in \Lform X$, $F (h)$ is the integral of $h$. This will be an actual integral in case $F$ is a linear prevision, and we shall study that case in Section~\ref{sec:case-cont-valu}. In particular, $F (h)$ makes sense when $h$ is $\alpha$-Lipschitz continuous. As we now observe, the results of Section~\ref{sec:falpha-f-alpha} imply that $F (h)$ also makes sense when $h$ is $\alpha$-Lipschitz, not necessarily continuous. This will be an important gadget in Section~\ref{sec:minimal-transport}. Recall that $L_\alpha (X, d)$ is the space of $\alpha$-Lipschitz maps from $X, d$ to $\overline{\mathbb{R}}_+$, whereas $\Lform_\alpha (X, d)$ is the space of $\alpha$-Lipschitz continuous maps. We write $L_\infty (X, d)$ for $\bigcup_{\alpha > 0} L_\alpha (X, d)$. \begin{defi}[$\extF F$] \label{defn:extF} Let $X, d$ be a continuous quasi-metric space.
For every prevision $F$ on $X$, define $\extF F \colon L_\infty (X, d) \to \overline{\mathbb{R}}_+$ by $\extF F (h) = F (h^{(\alpha)})$, for every $h \in L_\alpha (X, d)$, $\alpha > 0$. \end{defi} This is well-defined since $F (h^{(\alpha)})$ does not depend on $\alpha$, as long as $h$ is $\alpha$-Lipschitz. In other words, if $h$ is also $\beta$-Lipschitz, then computing $\extF F (h)$ as $F (h^{(\beta)})$ necessarily yields the same value, by Corollary~\ref{corl:ha=hb}. \begin{lem} \label{lemma:extF:prev} Let $X, d$ be a continuous quasi-metric space. For every prevision $F$ on $X$, \begin{enumerate} \item $\extF F$ is a monotonic, positively homogeneous map from $L_\infty (X, d)$ to $\overline{\mathbb{R}}_+$; \item $\extF F$ is sublinear, resp.\ superlinear, resp.\ linear, resp.\ subnormalized, resp.\ normalized if $F$ is; \item for every $h \in \Lform_\infty (X, d)$, $\extF F (h) = F (h)$; \item the restriction of $\extF F$ to $\Lform_\infty (X, d)$ is lower semicontinuous. \end{enumerate} \end{lem} \proof (1) If $f \leq g$ in $L_\infty (X, d)$, then let $\alpha > 0$ be such that $f$ and $g$ are $\alpha$-Lipschitz. Since $f^{(\alpha)}$ is $\alpha$-Lipschitz continuous and is below $f$, it is below $g$, hence below the largest $\alpha$-Lipschitz continuous map below $g$, that is $g^{(\alpha)}$. Therefore $\extF F (f) = F (f^{(\alpha)}) \leq F (g^{(\alpha)}) = \extF F (g)$. For every $h \in L_\alpha (X, d)$, and for every $a \in \mathbb{R}_+$, $ah$ is in $L_{a\alpha} (X, d)$, so $\extF F (ah) = F ((ah)^{(a\alpha)})$. That is equal to $F (a h^{(\alpha)})$ by Lemma~\ref{lemma:ah:(alpha)}, hence to $a F (h^{(\alpha)}) = a \extF F (h)$. (2) Let $f$, $g$ be in $L_\infty (X, d)$, say in $L_\alpha (X, d)$. Then $f+g$ is in $L_{2\alpha} (X, d)$, and by Lemma~\ref{lemma:largestLipcont}, $(f+g)^{(2\alpha)} (x) = \sup_{(y, s) \ll (x, 0)} (f (y) + g (y) -2\alpha s)$. 
The maps $(y, s) \mapsto f (y)-\alpha s$ and $(y, s) \mapsto g (y)-\alpha s$ are monotonic since $f$ and $g$ are $\alpha$-Lipschitz, so the families ${(f (y) - \alpha s)}_{(y, s) \ll (x, 0)}$ and ${(g (y) - \alpha s)}_{(y, s) \ll (x, 0)}$ are directed. Using the Scott-continuity of $+$ on $\overline{\mathbb{R}}_+$, $(f+g)^{(2\alpha)} (x) = \sup_{(y, s) \ll (x, 0)} (f (y) - \alpha s) + \sup_{(y, s) \ll (x, 0)} (g (y) - \alpha s) = f^{(\alpha)} (x) + g^{(\alpha)} (x)$. It follows immediately that $\extF F$ is sublinear, resp.\ superlinear, resp.\ linear when $F$ is. Similarly for subnormalized and normalized previsions, since $\mathbf 1^{(\alpha)} = \mathbf 1$. (3) For every $h \in \Lform_\infty (X, d)$, say $h \in \Lform_\alpha (X, d)$, $h^{(\alpha)} = h$, so $\extF F (h) = F (h^{(\alpha)}) = F (h)$. (4) By (3), the restriction of $\extF F$ to $\Lform_\infty (X, d)$ coincides with $F$, and we recall that the topology on $\Lform_\infty (X, d)$ is the subspace topology, induced from the Scott topology on $\Lform X$. \qed \section{The Case of Continuous Valuations} \label{sec:case-cont-valu} A \emph{valuation} $\nu$ on a topological space $X$ is a map $\nu \colon \Open X \to \overline{\mathbb{R}}_+$, where $\Open X$ is the complete lattice of open subsets of $X$, such that $\nu (\emptyset) = 0$ (strictness); $\nu (U) \leq \nu (V)$ for all $U, V$ such that $U \subseteq V$ (monotonicity); and $\nu (U \cup V) + \nu (U \cap V) = \nu (U) + \nu (V)$ for all $U$, $V$ (modularity). It is \emph{continuous} if and only if it is Scott-continuous, \emph{subnormalized} if and only if $\nu (X) \leq 1$, and \emph{normalized} if and only if $\nu (X)=1$. We shall see that continuous valuations are the same thing as linear previsions. Before that, we relate the notion to the better-known notion of measure.
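As a quick sanity check on these axioms, here is a small self-contained sketch (a hypothetical finite example, not taken from the text; the names \texttt{points}, \texttt{weight} and \texttt{nu} are ours): on the Alexandrov topology of a finite poset, a weighted point-counting map is a valuation, and strictness, monotonicity and modularity can be verified exhaustively.

```python
from itertools import chain, combinations

# Hypothetical example: the poset 0 < 1, 0 < 2, whose (Alexandrov/Scott)
# opens are its up-closed subsets, and nu = sum_i a_i delta_{x_i}.
points = [0, 1, 2]
below = {0: {0}, 1: {0, 1}, 2: {0, 2}}   # x in below[y]  iff  x <= y

def is_open(u):
    # open = up-closed: x in u and x <= y imply y in u
    return all(y in u for x in u for y in points if x in below[y])

subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(points, k) for k in range(len(points) + 1))]
opens = [u for u in subsets if is_open(u)]

weight = {0: 0.5, 1: 0.3, 2: 0.2}        # masses a_i of nu

def nu(u):
    # weighted point count: nu(U) = sum of a_i over x_i in U
    return sum(weight[x] for x in u)

assert nu(frozenset()) == 0                                    # strictness
for u in opens:
    for v in opens:
        if u <= v:
            assert nu(u) <= nu(v)                              # monotonicity
        assert abs(nu(u | v) + nu(u & v) - (nu(u) + nu(v))) < 1e-9  # modularity
```

Modularity here reduces to inclusion-exclusion for finite sums, and continuity is automatic on a finite space.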
\subsection{Continuous Valuations and Measures} \label{sec:cont-valu-meas} A measure $\mu$ on the Borel subsets of a topological space is called \emph{$\tau$-smooth} if and only if, for every directed family of open sets ${(U_i)}_{i \in I}$, $\mu (\bigcup_{i \in I} U_i) = \sup_{i \in I} \mu (U_i)$ \cite{Topsoe:topmes}. A $\tau$-smooth measure $\mu$ gives rise to a unique continuous valuation by restriction to the open sets. In general, the restriction of a measure to the open sets is only \emph{countably continuous}, that is, for every monotone \emph{sequence} $U_0 \subseteq U_1 \subseteq \cdots \subseteq U_n \subseteq \cdots$ of open sets, $\mu (\bigcup_{n \in \nat} U_n) = \sup_{n \in \nat} \mu (U_n)$. Following Alvarez-Manilla \emph{et al.\@} \cite{AMESD:ext:val}, we call \emph{countably continuous} any valuation that satisfies this property. A poset is \emph{$\omega$-continuous} if and only if it is continuous and has a countable basis $B$: that is, every element $x$ is the supremum of a directed family of elements $y$ way-below $x$ and in the countable set $B$. A continuous poset is $\omega$-continuous if and only if its Scott topology is countably based \cite[Proposition~3.1]{Norberg:randomsets}. Let us call a quasi-metric space $X, d$ \emph{$\omega$-continuous} if and only if $\mathbf B (X, d)$ is an $\omega$-continuous poset. As shown by Edalat and Heckmann, when $X, d$ is a metric space, $\mathbf B (X, d)$ is a continuous poset, and is $\omega$-continuous if and only if $X, d$ is separable \cite[Corollary~10]{EH:comp:metric}. Hence, for metric spaces, $\omega$-continuous is synonymous with separable. \begin{lem} \label{lemma:mes=>val} Let $X, d$ be an $\omega$-continuous quasi-metric space. Every measure on $X$, with the Borel $\sigma$-algebra of its $d$-Scott topology, is $\tau$-smooth, i.e., restricts to a continuous valuation on the open sets of $X$. \end{lem} \proof Let $\mu$ be a measure on $X$. 
Consider the image measure $\eta_X [\mu]$ of $\mu$, namely $\eta_X [\mu] (E) = \mu (\eta_X^{-1} (E))$ for every Borel subset $E$ of $\mathbf B (X, d)$. Its restriction to the Scott-open subsets of $\mathbf B (X, d)$ yields a countably continuous valuation. Lemma~2.5 of \cite{AMESD:ext:val} states that, on an $\omega$-continuous dcpo, a valuation is continuous if and only if it is countably continuous. (R. Heckmann observed that this in fact holds on any second countable topological space.) Hence $\eta_X [\mu]$ restricts to a continuous valuation on $\mathbf B (X, d)$. Let ${(U_i)}_{i \in I}$ be a directed family of $d$-Scott open subsets of $X$, and $U$ be their union. We have: \begin{eqnarray*} \mu (U) & = & \eta_X [\mu] (\bigcup_{i \in I} \widehat U_i) \\ & = & \sup_{i \in I} \eta_X [\mu] (\widehat U_i) = \sup_{i \in I} \mu (U_i). \end{eqnarray*} \qed In the reverse direction, we use the following theorem, due to Keimel and Lawson \cite[Theorem~5.3]{KL:measureext}: every locally finite continuous valuation on a locally compact sober space extends to a Borel measure on the Borel $\sigma$-algebra. A continuous valuation $\nu$ is \emph{locally finite} if and only if every point has an open neighborhood $U$ such that $\nu (U) < +\infty$. We say that it is \emph{finite} if and only if the measure of every open subset (equivalently, of the whole space), is finite, and that it is \emph{$\sigma$-finite} if and only if one can write the whole space as a countable union of open subsets of finite measure. \begin{lem} \label{lemma:val=>mes} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Every locally finite continuous valuation on $X$ extends to a measure on $X$ with the Borel $\sigma$-algebra of its $d$-Scott topology. \end{lem} \proof We start with a finite continuous valuation $\nu$ on $X$. 
Consider its image $\eta_X [\nu]$, defined by $\eta_X [\nu] (\mathcal U) = \nu (\eta_X^{-1} (\mathcal U))$ for every Scott-open subset $\mathcal U$ of $\mathbf B (X, d)$. This is a finite continuous valuation on a continuous dcpo. Every continuous dcpo is locally compact and sober, so we can apply Keimel and Lawson's theorem. Let $\mu$ be an extension of $\eta_X [\nu]$ to the Borel subsets of $\mathbf B (X, d)$. Since $X, d$ is Yoneda-complete, it is standard, so $X$ is a $G_\delta$ subset of $\mathbf B (X, d)$ (Lemma~\ref{lemma:Veps}). Every Borel subset of $X$ is then also Borel in $\mathbf B (X, d)$, hence $\mu$ restricts to a measure $\mu_{|X}$ on $X$. For every open subset $U$ of $X$, $\mu_{|X} (U) = \eta_X [\nu] (U) = \nu (U)$, so $\mu_{|X}$ is a measure extending $\nu$. We now invoke Corollary~3.7 of \cite{KL:measureext}: for every locally finite continuous valuation $\nu$ on a topological space, if the finite continuous valuations $\nu_V$ defined by $\nu_V (U) = \nu (U \cap V)$, where $V$ ranges over the open subsets such that $\nu (V) < +\infty$, all extend to measures on the Borel $\sigma$-algebra, then $\nu$ also extends to a measure on the Borel $\sigma$-algebra. That is enough to conclude. \qed Together with the fact that, by the $\lambda\pi$-theorem, any two finite measures that agree on the $\pi$-system of open sets agree on the whole Borel $\sigma$-algebra, we obtain: \begin{prop}[Finite measures=finite continuous valuations] \label{prop:mes=val} On an $\omega$-continuous Yoneda-complete quasi-metric space, there is a bijective correspondence between finite continuous valuations and finite measures, defined by restriction to open sets in one direction and by extension to the Borel $\sigma$-algebra in the other direction. \end{prop} This holds in particular for (sub)normalized continuous valuations and (sub)probability measures. 
Note that Proposition~\ref{prop:mes=val} includes the case of $\omega$-continuous complete metric spaces, that is, of Polish spaces, since the open ball topology on metric spaces coincides with the $d$-Scott topology. \subsection{The Kantorovich-Rubinshte\u\i n-Hutchinson Quasi-Metric on Continuous Valuations} \label{sec:kant-rubinsht-n} Given a valuation $\nu$ on a topological space $X$, there is an integral with respect to $\nu$, defined by the Choquet formula $\int_{x \in X} h (x) d\nu = \int_0^{+\infty} \nu (h^{-1} (]t, +\infty])) dt$, for every $h \in \Lform X$, where the right-hand side is an ordinary indefinite Riemann integral. When $\nu$ is a continuous valuation, the map $h \mapsto \int_{x \in X} h (x) d\nu$ is a linear prevision, which is subnormalized, resp.\ normalized, as soon as $\nu$ is. This is stated in \cite{Gou-csl07}, and also follows from \cite{Jones:proba,JP:proba}. The use of the Choquet formula is due to Tix \cite{Tix:bewertung}. The mapping that sends $\nu$ to its associated linear prevision is one-to-one, and the inverse map sends every linear prevision $F$ to the continuous valuation $\nu$ defined by $\nu (U) = F (\chi_U)$. This allows us to treat continuous valuations, interchangeably, as linear previsions, and conversely. This also holds in the subnormalized and normalized cases. This bijection is easily turned into an isometry, by defining a quasi-metric on spaces of continuous valuations by: \begin{equation} \label{eq:V:dKRH} \dKRH (\nu, \nu') = \sup_{h \in \Lform_1 (X, d)} \dreal \left(\int_{x \in X} h (x) d\nu, \int_{x \in X} h (x) d\nu'\right), \end{equation} and a bounded version by: \begin{equation} \label{eq:V:dKRHa} \dKRH^a (\nu, \nu') = \sup_{h \in \Lform_1^a (X, d)} \dreal \left(\int_{x \in X} h (x) d\nu, \int_{x \in X} h (x) d\nu'\right). \end{equation} We write $\Val X$ for the space of continuous valuations on $X$, and $\Val_{\leq 1} X$ and $\Val_1 X$ for the subspaces of subnormalized, resp.\ normalized, continuous valuations.
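To see the Choquet formula at work on a toy case (hypothetical data, not from the text; \texttt{weight}, \texttt{h} and \texttt{choquet} are our own names), one can check numerically that for a simple valuation $\sum_{i} a_i \delta_{x_i}$ the formula reduces to the weighted sum $\sum_i a_i h (x_i)$:

```python
# Hypothetical finite example: nu = 0.5*delta_p + 0.3*delta_q + 0.2*delta_r
# on X = {"p","q","r"} (all subsets open), and an arbitrary h : X -> R_+.
weight = {"p": 0.5, "q": 0.3, "r": 0.2}
h = {"p": 2.0, "q": 5.0, "r": 0.0}

def nu(u):
    return sum(weight[x] for x in u)

def choquet(h, steps=100000, top=10.0):
    # Midpoint Riemann sum of t |-> nu({x | h(x) > t}) over [0, top];
    # h is bounded by top, so the tail of the integral contributes nothing.
    dt = top / steps
    return sum(nu({x for x in weight if h[x] > (k + 0.5) * dt}) * dt
               for k in range(steps))

direct = sum(weight[x] * h[x] for x in weight)  # = 0.5*2 + 0.3*5 + 0.2*0 = 2.5
assert abs(choquet(h) - direct) < 1e-3
```

Here $t \mapsto \nu (h^{-1} (]t, +\infty]))$ is a step function, equal to $0.8$ on $[0, 2[$, to $0.3$ on $[2, 5[$, and to $0$ beyond, so the Riemann integral is $2 \times 0.8 + 3 \times 0.3 = 2.5$, matching the weighted sum.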
\begin{rem} \label{rem:KRH:useless} For any two continuous valuations $\nu$ and $\nu'$ such that $\nu (X) > \nu' (X)$, $\dKRH (\nu, \nu') = +\infty$. Indeed, let $h = a.\mathbf 1$, $a \in \mathbb{R}_+$: the right-hand side of (\ref{eq:V:dKRH}) is larger than or equal to $a (\nu (X) - \nu' (X))$, hence equal to $+\infty$, by taking suprema over $a$. This incongruity disappears on the subspace $\Val_1 X$ of normalized continuous valuations (a.k.a., \emph{probability} valuations). \end{rem} The specialization preordering of $\dKRH$, resp.\ $\dKRH^a$, on the space of linear previsions is the pointwise ordering, as we have already seen: $F \leq G$ if and only if $F (h) \leq G (h)$ for every $h \in \Lform X$ (Lemma~\ref{lemma:KRH:qmet}, Lemma~\ref{lemma:KRHa:qmet}). On $\Val X$, $\Val_{\leq 1} X$ and $\Val_1 X$, the specialization preordering is then given by $\nu \leq \nu'$ if and only if $\nu (U) \leq \nu' (U)$ for every open subset $U$ of $X$. Note that the map $h \mapsto \int_{x \in X} h (x) d\nu$ is also Scott-continuous. This follows from the Choquet formula, and the fact that Riemann integrals of non-increasing functions commute with arbitrary pointwise suprema, as noticed by Tix \cite{Tix:bewertung}. \subsection{Yoneda-Completeness} \label{sec:yoneda-completeness} Theorem~\ref{thm:LPrev:sup} and Theorem~\ref{thm:LaPrev:sup} imply: \begin{fact} \label{fact:V:complete} $\Val X$, $\Val_{\leq 1} X$ and $\Val_1 X$ are Yoneda-complete as soon as $X, d$ is standard and Lipschitz regular. \end{fact} We shall see that they are also Yoneda-complete if $X, d$ is continuous Yoneda-complete, and not necessarily Lipschitz regular. For that, considering the results of Section~\ref{sec:supports}, we shall naturally look at supports. \begin{lem} \label{lemma:support} For a continuous valuation $\nu$ on $Y$, $A$ is a support of $\nu$ if and only if, for all open subsets $U$, $V$ of $Y$ such that $U \cap A = V \cap A$, $\nu (U)=\nu (V)$. 
\end{lem} \proof Write $F (h)$ for $\int_{y \in Y} h (y) d\nu$. If $A$ is a support of $\nu$, and $U$ and $V$ are two open subsets such that $U \cap A = V \cap A$, then $\chi_U$ and $\chi_V$ have the same restriction to $A$, so $\nu (U) = F (\chi_U) = F (\chi_V) = \nu (V)$. Conversely, assume that for all open subsets $U$, $V$ of $Y$ such that $U \cap A = V \cap A$, $\nu (U)=\nu (V)$. Consider two maps $g, h \in \Lform Y$ with the same restriction to $A$. For every $t \in \mathbb{R}_+$, $g^{-1} (]t, +\infty]) \cap A = g_{|A}^{-1} (]t, +\infty]) = h_{|A}^{-1} (]t, +\infty]) = h^{-1} (]t, +\infty]) \cap A$, so $F (g) = \int_0^{+\infty} \nu (g^{-1} (]t, +\infty])) dt = \int_0^{+\infty} \nu (h^{-1} (]t, +\infty])) dt = F (h)$. \qed We recall that every locally finite continuous valuation on a locally compact sober space extends to a Borel measure on the Borel $\sigma$-algebra \cite[Theorem~5.3]{KL:measureext}. \begin{lem} \label{lemma:V:supp} Let $Y$ be a locally compact sober space, and $\nu$ be a finite continuous valuation on $Y$. Let $W_0 \supseteq W_1 \supseteq \cdots \supseteq W_n \supseteq \cdots$ be a non-increasing sequence of open subsets of $Y$. If $\nu$ is supported on each $W_n$, $n \in \nat$, then $\nu$ is supported on $\bigcap_{n \in \nat} W_n$. \end{lem} \proof Extend $\nu$ to a measure on the Borel $\sigma$-algebra of $Y$, and continue to write $\nu$ for the extension. Clearly, that extension is a finite measure, namely $\nu (Y) < +\infty$. Let $A = \bigcap_{n \in \nat} W_n$. $A$ is a $G_\delta$ set, hence is Borel. For every open subset $U$ of $Y$, $U$ and $U \cap W_n$ have the same intersection with $W_n$, so, using the fact that $W_n$ is a support of $\nu$, $\nu (U) = \nu (U \cap W_n)$. It follows that $\nu (U) = \inf_{n \in \nat} \nu (U \cap W_n)$. For a finite measure $\nu$, the latter is equal to $\nu (\bigcap_{n \in \nat} (U \cap W_n)) = \nu (U \cap A)$.
Now, if $U$ and $V$ are any two open subsets such that $U \cap A = V \cap A$, then $\nu (U) = \nu (U \cap A) = \nu (V \cap A) = \nu (V)$. \qed \begin{thm}[Yoneda-completeness, valuations] \label{thm:V:complete} Let $X, d$ be a standard quasi-metric space, and assume that it is either Lipschitz regular, or continuous Yoneda-complete. Then the spaces $\Val_{\leq 1} X$ and $\Val_1 X$, equipped with the $\dKRH$, resp.\ the $\dKRH^a$ quasi-metric ($a > 0$), are Yoneda-complete. Moreover, directed suprema $(\nu, r)$ of formal balls ${(\nu_i, r_i)}_{i \in I}$ are computed as naive suprema: $r = \inf_{i \in I} r_i$ and for every $h \in \Lform_\alpha (X, d)$ (resp., in $\Lform_\alpha^a (X, d)$), \begin{equation} \label{eq:V:sup} \int_{x \in X} h (x) d\nu = \sup_{i \in I} \left(\int_{x \in X} h (x) d\nu_i + \alpha r - \alpha r_i\right). \end{equation} \end{thm} \proof When $X, d$ is standard and Lipschitz regular, this follows from Theorem~\ref{thm:LPrev:sup} and Theorem~\ref{thm:LaPrev:sup}. When $X, d$ is continuous Yoneda-complete, $Y = \mathbf B (X, d)$ is a continuous dcpo. Every continuous dcpo is locally compact and sober in its Scott topology: in fact, the continuous dcpos are exactly the sober c-spaces \cite[Proposition~8.3.36]{JGL-topology}, and the property of being a c-space is a strong form of local compactness. Moreover, all subnormalized and all normalized continuous valuations are finite. We can therefore apply Lemma~\ref{lemma:V:supp}: every element of $\Val_{\leq 1} (\mathbf B (X, d))$, resp.\ $\Val_1 (\mathbf B (X, d))$, that is supported on $V_{1/2^n}$ for every $n \in \nat$ is in fact supported on $\bigcap_{n \in \nat} V_{1/2^n}$, which happens to be $X$ (Lemma~\ref{lemma:Veps}). The conclusion then follows from Proposition~\ref{prop:supp:complete}. 
\qed \subsection{Algebraicity of $\Val_{\leq 1} X$} \label{sec:algebr-cont-val_l} A \emph{simple valuation} is a finite linear combination $\sum_{i=1}^n a_i \delta_{x_i}$, where $a_1, \ldots, a_n \in \mathbb{R}_+$. It is subnormalized if $\sum_{i=1}^n a_i \leq 1$, normalized if $\sum_{i=1}^n a_i = 1$. \begin{lem} \label{lemma:V:simple:center} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. For all center points $x_1$, \ldots, $x_n$ and all non-negative reals $a_1$, \ldots, $a_n$ with $\sum_{i=1}^n a_i \leq 1$, the simple valuation $\sum_{i=1}^n a_i \delta_{x_i}$ is a center point of $\Val_{\leq 1} X, \dKRH$ (resp., of $\Val_1 X, \dKRH$ if additionally $\sum_{i=1}^n a_i = 1$). The same result holds with $\dKRH^a$ instead of $\dKRH$, for any $a \in \mathbb{R}_+$, $a > 0$. \end{lem} \proof We only deal with the case of $\Val_{\leq 1} X$ and of $\dKRH$; the other cases are similar. Let $\nu_0 = \sum_{i=1}^n a_i \delta_{x_i}$, and let $U = B^{\dKRH^+}_{(\nu_0, 0), <\epsilon}$. $U$ is upwards-closed: if $(\nu, r) \leq^{\dKRH^+} (\nu', r')$ and $(\nu, r) \in U$, then $\dKRH (\nu, \nu') \leq r-r'$ and $\dKRH (\nu_0, \nu) < \epsilon - r$, so $\dKRH (\nu_0, \nu') < \epsilon-r'$ by the triangle inequality, and that means that $(\nu', r')$ is in $U$. Recall that $\Val_{\leq 1} X, \dKRH$ is Yoneda-complete, and that directed suprema of formal balls are computed as naive suprema, by Theorem~\ref{thm:V:complete}. To show that $U$ is Scott-open, consider a directed family ${(\nu_i, r_i)}_{i \in I}$ in $\mathbf B (\Val_{\leq 1} X, \dKRH)$, with supremum $(\nu, r)$, and assume that $(\nu, r)$ is in $U$. That supremum is given as in (\ref{eq:V:sup}). Hence $r = \inf_{i \in I} r_i$ and for every $h \in \Lform_1 (X, d)$, $\int_{x \in X} h (x) d\nu = \sup_{i \in I} (\int_{x \in X} h (x) d\nu_i + r - r_i)$. (In the case of $\dKRH^a$, the same formula holds, this time for every $h \in \Lform_1^a (X, d)$.)
Since $(\nu, r) \in U$, $\dKRH (\nu_0, \nu) < \epsilon - r$, that is, $\epsilon > r$ and, for every $h$, $\int_{x \in X} h (x) d\nu_0 - \epsilon + r < \int_{x \in X} h (x) d\nu$. Therefore, for every $h \in \Lform_1 (X, d)$ (resp., $\Lform_1^a (X, d)$), there is an index $i \in I$ such that $\int_{x \in X} h (x) d\nu_0 - \epsilon + r < \int_{x \in X} h (x) d\nu_i + r - r_i$, or equivalently, \begin{equation} \label{eq:V:A} \sum_{j=1}^n a_j h (x_j) < \int_{x \in X} h (x) d\nu_i + \epsilon - r_i. \end{equation} Moreover, since $\epsilon > r = \inf_{i \in I} r_i$, we may take $i$ so large that $\epsilon - r_i > 0$. Let $V_i$ be the set of all $h \in \Lform_1 (X, d)$ (resp., $\Lform_1^a (X, d)$) satisfying (\ref{eq:V:A}). We have just shown that $\Lform_1 (X, d)$ (resp., $\Lform_1^a (X, d)$) is included in $\bigcup_{i \in I} V_i$. $V_i$ is also the inverse image of $[0, \epsilon - r_i[$ by the map $h \mapsto \dreal (\sum_{j=1}^n a_j h (x_j), \int_{x \in X} h (x) d\nu_i)$, which is continuous from $\Lform_1 (X, d)^\patch$ (resp., $\Lform_1^a (X, d)^\patch$) to $(\overline{\mathbb{R}}_+)^\dG$ by Proposition~\ref{prop:dKRH:cont}~(1). Therefore $V_i$ is open in $\Lform_1 (X, d)^\patch$ (resp., $\Lform_1^a (X, d)^\patch$), itself a compact space by Lemma~\ref{lemma:cont:Lalpha:retr}~(4). Hence there is a finite subset $J$ of $I$ such that ${(V_i)}_{i \in J}$ is also an open cover of $\Lform_1 (X, d)^\patch$ (resp., $\Lform_1^a (X, d)^\patch$). That means that for every $h \in \Lform_1 (X, d)$ (resp., $\Lform_1^a (X, d)$), there is an index $i \in J$ (not just in $I$) such that (\ref{eq:V:A}) holds. By directedness, there is a single index $i \in I$ such that (\ref{eq:V:A}) holds for every $h \in \Lform_1 (X, d)$ (resp., $\Lform_1^a (X, d)$). That implies that $(\nu_i, r_i)$ is in $U$, proving the claim.
\qed \begin{rem} \label{rem:V:simple:center} Using the same proof, but relying on Fact~\ref{fact:V:complete} instead of Theorem~\ref{thm:V:complete}, we obtain that simple valuations supported on center points are center points of $\Val X, \dKRH$ (and similarly for subnormalized, resp.\ normalized valuations, and for $\dKRH^a$ instead of $\dKRH$), under the alternative assumption that $X, d$ is standard and Lipschitz regular. \end{rem} Our proof of algebraicity for spaces of valuations (Theorem~\ref{thm:V:alg}) will make use of the following lemma, which is folklore. \begin{lem} \label{lemma:val:cont} Let $Y$ be a continuous dcpo, with a basis $\mathcal B$. $\Val Y$ (resp., $\Val_{\leq 1} Y$) is a continuous dcpo, and a basis is given by simple valuations supported on $\mathcal B$, viz., of the form $\sum_{i=1}^n a_i \delta_{y_i}$ with $y_i \in \mathcal B$ (resp., and with $\sum_{i=1}^n a_i \leq 1$). \end{lem} \proof The first part of the Lemma, that $\Val Y$ is a continuous dcpo, holds for general continuous dcpos \cite[Theorem~IV-9.16]{GHKLMS:contlatt}, and is an extension of a theorem by C. Jones that shows a similar result for $\Val_{\leq 1} Y$ \cite[Corollary~5.4]{Jones:proba}\footnote{The fact that Jones only considers subnormalized continuous valuations is somewhat hidden. Jones states that she considers continuous valuations as maps from $\Open X$ to $\mathbb{R}_+$, not $\overline{\mathbb{R}}_+$ (Section~3.9, loc.cit.). It is fair to call those finite valuations. Finite valuations do not form a dcpo, as one can check easily. To repair this, Jones says ``We shall initially consider the set of continuous evaluations on any topological space with the additional property that $\nu (X)\leq 1$'' at the beginning of Section~4.1, loc.cit. The new condition $\nu (X) \leq 1$ is necessary to prove that her space of continuous valuations is a dcpo (Theorem~4.1, loc.cit.), and seems to have been assumed silently for the rest of the thesis.}. 
Every simple valuation $\sum_{i=1}^n b_i \delta_{z_i}$ on $Y$ (where, without loss of generality, $b_i > 0$ for every $i$; and $\sum_{i=1}^n b_i \leq 1$ in the case of $\Val_{\leq 1} Y$) is the supremum of the family $D$ of simple valuations of the form $\sum_{i=1}^n a_i \delta_{y_i}$, where $y_i$ ranges over the elements of $\mathcal B$ way-below $z_i$ for each $i$, and $a_i < b_i$. Indeed, for each open set $U$, for every $r < (\sum_{i=1}^n b_i \delta_{z_i}) (U)$, let $A$ be the finite set of indices $i$ such that $z_i \in U$. Note that $A$ is non-empty. Find $y_i \in \mathcal B$ way-below $z_i$ so that $y_i \in U$ for each $i \in A$, and let $a_i = b_i - \epsilon$ for each $i \in A$, where $0 < \epsilon < \frac{1}{|A|} \left(\left(\sum_{i=1}^n b_i \delta_{z_i}\right) (U) - r\right)$ and $\epsilon < b_i$ for every $i \in A$; for the remaining indices $i \notin A$, pick $y_i \in \mathcal B$ way-below $z_i$ and $a_i < b_i$ arbitrarily. Then $r < (\sum_{i=1}^n a_i \delta_{y_i}) (U)$. Every element of $D$ is way-below $\sum_{i=1}^n b_i \delta_{z_i}$: Theorem~IV-9.16 of \cite{GHKLMS:contlatt}, already cited, states that $\sum_{i=1}^n a_i \delta_{y_i} \ll \xi$, for any continuous valuation $\xi$, if and only if for every non-empty subset $A$ of $\{1, \cdots, n\}$, $\sum_{i \in A} a_i < \xi (\bigcup_{i \in A} \uuarrow y_i)$, and for $\xi = \sum_{i=1}^n b_i \delta_{z_i}$, this is obvious. Note that we only need the if part of Theorem~IV-9.16 of op.cit., which one can check by elementary means. A similar statement holds in the case of subnormalized valuations, and we conclude similarly that every element of $D$ is way-below $\sum_{i=1}^n b_i \delta_{z_i}$ in the case of subnormalized valuations. $D$ also forms a directed family: for two simple valuations $\sum_{i=1}^n a_i \delta_{y_i}$ and $\sum_{i=1}^n a'_i \delta_{y'_i}$ in $D$, pick $y''_i \in \mathcal B$ way-below $z_i$ and above both $y_i$ and $y'_i$ for each $i$, then $\sum_{i=1}^n \max (a_i, a'_i) \delta_{y''_i}$ is above the two given simple valuations, and in $D$.
In a poset, if $\xi$ is the supremum of a directed family ${(\xi_i)}_{i \in I}$, and each $\xi_i$ is the supremum of a directed family of elements $\xi_{ij}$, $j \in J_i$, way-below $\xi_i$, then the family ${(\xi_{ij})}_{i \in I, j \in J_i}$ is directed and admits $\xi$ as supremum, see e.g.\ \cite[Exercise~5.1.13]{JGL-topology}. Here we know that every continuous valuation $\xi$ is the supremum of some directed family of simple valuations $\xi_i$ (way-)below $\xi$, and we have just proved that each $\xi_i$ is the directed supremum of simple valuations $\xi_{ij} \ll \xi_i$ supported on $\mathcal B$: the result follows. \qed The proof of the following theorem will proceed through the study of $\Val (\mathbf B (X, d))$. We recall the canonical embedding $\eta_X \colon x \mapsto (x, 0)$ of $X$, with its $d$-Scott topology, into $\mathbf B (X, d)$, with its Scott topology. Every continuous valuation $\nu$ on $X$ gives rise to an image valuation $\eta_X [\nu]$, which maps every open subset $U$ to $\nu (\eta_X^{-1} (U))$. For every $h \in \Lform_1 (X, d)$, and assuming again that $X, d$ is standard, recall that $h' \colon (x, r) \mapsto h (x) - r$ is Scott-continuous from $\mathbf B (X, d)$ to $\mathbb{R} \cup \{+\infty\}$. It follows that $h'' \colon (x, r) \mapsto \max (h (x) - r, 0)$ is Scott-continuous from $\mathbf B (X, d)$ to $\overline{\mathbb{R}}_+$. Then we have: \begin{lem} \label{lemma:h'':<} Let $X, d$ be a standard quasi-metric space, $\nu$ be a continuous valuation on $X$, and $h \in \Lform_1 (X, d)$. Define $h'' (x, r) = \max (h (x) - r, 0)$. Then: \begin{eqnarray*} \int_{(x, r) \in \mathbf B (X, d)} h'' (x, r) d \eta_X[\nu] & = & \int_{x \in X} h (x) d\nu. \end{eqnarray*} \end{lem} \proof $\int_{(x, r) \in \mathbf B (X, d)} h'' (x, r) d \eta_X[\nu] = \int_{x \in X} h'' (\eta_X (x)) d\nu$, by a change of variables formula. 
Explicitly, $\int_{(x, r) \in \mathbf B (X, d)} h'' (x, r) d \eta_X [\nu] = \int_0^{+\infty} \eta_X [\nu] ({h''}^{-1} (]t, +\infty])) dt = \int_0^{+\infty} \nu (\eta_X^{-1} ({h''}^{-1} (]t, +\infty]))) dt = \int_0^{+\infty} \nu ((h'' \circ \eta_X)^{-1} (]t, +\infty])) dt = \int_{x \in X} h'' (\eta_X (x)) d\nu$. Since $h'' \circ \eta_X = h$, we conclude. \qed Recall that a strong basis of a standard quasi-metric space $X, d$ is any set $B$ of center points of $X$ such that, for every $x \in X$, $(x, 0)$ is the supremum of a directed family of formal balls with center points in $B$. $X, d$ is algebraic if and only if it has a strong basis. The largest strong basis is simply the set of all center points. \begin{lem} \label{lemma:B:basis} Let $X, d$ be a standard algebraic quasi-metric space, with a strong basis $\mathcal B$. Then $\mathbf B (X, d)$ is a continuous dcpo, with a basis consisting of the formal balls $(x, r)$ with $x \in \mathcal B$. For $x \in \mathcal B$, $(x, r) \ll (y, s)$ if and only if $d (x, y) < r-s$. \end{lem} \proof Since $X, d$ is standard algebraic, hence continuous, $\mathbf B (X, d)$ is a continuous poset \cite[Proposition~5.18]{JGL:formalballs}. The fact that $(x, r) \ll (y, s)$ if and only if $d (x, y) < r-s$ whenever $x$ is a center point is also mentioned in that proposition. Now let $(y, s) \in \mathbf B (X, d)$. Write $(y, s)$ as the supremum of a directed family ${(y_i, s_i)}_{i \in I}$ of formal balls way-below $(y, s)$. By assumption each $(y_i, s_i)$ is the supremum of a directed family ${(x_{ij}, r_{ij})}_{j \in J_i}$, where $x_{ij} \in \mathcal B$. The family ${(x_{ij}, r_{ij}+1/2^n)}_{j \in J_i, n \in \nat}$ is also directed, and its supremum is also equal to $(y_i, s_i)$, as one easily checks by looking at the upper bounds of the family. Additionally, since $d (x_{ij}, y_i) \leq r_{ij} - s_i$, $d (x_{ij}, y_i) < r_{ij}+1/2^n - s_i$, so each $(x_{ij}, r_{ij}+1/2^n)$ is way-below $(y_i, s_i)$.
This is exactly what we need to conclude that ${(x_{ij}, r_{ij}+1/2^n)}_{i \in I, j \in J_i, n \in \nat}$ is a directed family whose supremum is $(y, s)$: if, in a poset, $a$ is the supremum of a directed family $D$ and each element $b$ of $D$ is the supremum of a directed family $D_b$ of elements way-below $b$, then $\bigcup_{b \in D} D_b$ is a directed family whose supremum is $a$, see Exercise~5.1.13 of \cite{JGL-topology} for example. \qed \begin{thm}[Algebraicity for spaces of subprobabilities] \label{thm:V:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with a strong basis $\mathcal B$. The spaces $\Val_{\leq 1} X, \dKRH$ and $\Val_{\leq 1} X, \dKRH^a$ (for any $a \in \mathbb{R}_+$, $a > 0$) are algebraic Yoneda-complete, and a strong basis is given by the simple valuations $\sum_{i=1}^n a_i \delta_{x_i}$ with $x_i \in \mathcal B$, and $\sum_{i=1}^n a_i \leq 1$. \end{thm} \proof Let $\nu \in \Val_{\leq 1} X$. We wish to show that $(\nu, 0)$ is the supremum of a directed family of formal balls $(\mathring\nu_i, R (\nu_i))$ below $(\nu, 0)$, where each $\mathring\nu_i$ is a simple valuation of the form $\sum_{j=1}^n a_j \delta_{x_j}$, with $x_j \in \mathcal B$, and $\sum_{j=1}^n a_j \leq 1$. Finding $\mathring\nu_i$ and $R (\nu_i)$ is obvious if $\nu$ is the zero valuation, so we assume for the rest of the proof that $\nu (X) \neq 0$. Since $X, d$ is algebraic Yoneda-complete, $\mathbf B (X, d)$ is a continuous dcpo and the formal balls with centers in $\mathcal B$ are a basis, by Lemma~\ref{lemma:B:basis}. We profit from the fact that, since $\mathbf B (X, d)$ is a continuous poset, $\Val_{\leq 1} (\mathbf B (X, d))$ is a continuous dcpo. Lemma~\ref{lemma:val:cont} even allows us to say that a basis of the latter consists of the simple valuations $\mu = \sum_{j=1}^n a_j \delta_{(x_j, r_j)}$, where each $x_j$ is in $\mathcal B$, and $\sum_{j=1}^{n} a_j \leq 1$.
In particular, for every $\nu \in \Val_{\leq 1} X$, $\eta_X [\nu]$ is the supremum of a directed family of simple valuations $\nu_i = \sum_{j=1}^{n_i} a_{ij} \delta_{(x_{ij}, r_{ij})}$, $i \in I$, where each $x_{ij}$ is in $\mathcal B$, and $\sum_{j=1}^{n_i} a_{ij} \leq 1$. We can require that $r_{ij} < 1$ for all $i, j$, by the following argument. Define $\nu'_i (U) = \nu_i (U \cap V_1)$, where $V_1 = \{(x, r) \in \mathbf B (X, d) \mid r < 1\}$. $V_1$ is open, since $X, d$ is standard (Lemma~\ref{lemma:Veps}). $\nu'_i$ is the restriction of $\nu_i$ to $V_1$, and is again a subnormalized valuation. Explicitly, $\nu'_i = \sum_{\substack{1\leq j \leq n_i\\r_{ij} < 1}} a_{ij} \delta_{(x_{ij}, r_{ij})}$. Note that all the radii involved in the latter sum are strictly less than $1$. If $\nu_i \leq \nu_{i'}$ then $\nu'_i \leq \nu'_{i'}$, so the family ${(\nu'_i)}_{i \in I}$ is directed as well. Since $\nu'_i \leq \nu_i \ll \eta_X [\nu]$, this is a family of (subnormalized) simple valuations way-below $\eta_X [\nu]$. Moreover, for every open subset $U$ of $\mathbf B (X, d)$, $\eta_X [\nu] (U) = \nu (U \cap X) = \nu (U \cap V_1 \cap X) = \eta_X [\nu] (U \cap V_1)$ is equal to $\sup_{i \in I} \nu_i (U \cap V_1) = \sup_{i \in I} \nu'_i (U)$, so $\eta_X [\nu]$ is also the supremum of ${(\nu'_i)}_{i \in I}$. All this concurs to show that we may assume that $\nu_i$ satisfies $r_{ij} < 1$ for all $i, j$, replacing $\nu_i$ by $\nu'_i$ if needed. For every subnormalized simple valuation $\mu = \sum_{j=1}^n a_j \delta_{(x_j, r_j)}$ on $\mathbf B (X, d)$ such that $\mu \leq \eta_X [\nu]$ and with $r_j < 1$ for every $j$, let: \begin{eqnarray} \label{eq:ringmu} \mathring\mu & = & \sum_{j=1}^n a_j \delta_{x_j} \\ \label{eq:Rmu} R (\mu) & = & \sum_{j=1}^n a_j r_j + \nu (X) - \sum_{j=1}^n a_j. \end{eqnarray} $R (\mu)$ is a non-negative number, owing to the fact that $\mu \leq \eta_X [\nu]$: indeed $\sum_{j=1}^n a_j = \mu (\mathbf B (X, d)) \leq \eta_X [\nu] (\mathbf B (X, d)) = \nu (X)$.
Therefore $\beta (\mu) = (\mathring \mu, R (\mu))$ is a well-defined formal ball on $\Val_{\leq 1} X, \dKRH$ (resp., $\dKRH^a$). For every $h \in \Lform_1 (X, d)$ (resp., $\Lform_1^a (X, d)$), recall the construction $h''$ mentioned in Lemma~\ref{lemma:h'':<}. However, apply it to $h+\mathbf 1$, not to $h$. In other words, $(h + \mathbf 1)''$ maps $(x, r)$ to $\max (h (x) - r +1, 0)$. We have $\int_{(x, r) \in \mathbf B (X, d)} (h + \mathbf 1)'' (x, r) d\mu = \sum_{j=1}^n a_j (h (x_j) - r_j + 1)$, because $r_j < 1$ for every $j$. Applying this to $\mu=\nu_i$ and $\mu=\nu_{i'}$, we obtain that if $\nu_i \leq \nu_{i'}$, then $\sum_{j=1}^{n_i} a_{ij} h (x_{ij}) - \sum_{j=1}^{n_i} a_{ij} r_{ij} + \sum_{j=1}^{n_i} a_{ij} \leq \sum_{j=1}^{n_{i'}} a_{i'j} h (x_{i'j}) - \sum_{j=1}^{n_{i'}} a_{i'j} r_{i'j} + \sum_{j=1}^{n_{i'}} a_{i'j}$, namely $\sum_{j=1}^{n_i} a_{ij} h (x_{ij}) \leq \sum_{j=1}^{n_{i'}} a_{i'j} h (x_{i'j}) + R(\nu_i) - R(\nu_{i'})$. This can be rewritten as $\dreal (\int_{x \in X} h (x) d\mathring\nu_i, \int_{x \in X} h (x) d\mathring\nu_{i'}) \leq R (\nu_i) - R (\nu_{i'})$. Since $h$ is arbitrary, $\beta (\nu_i) \leq^{\dKRH^+} \beta (\nu_{i'})$ (resp., $\leq^{\dKRH^{a+}}$, by multiplying $h''$ by $1/a$ first). This shows that the family ${(\beta (\nu_i))}_{i \in I}$ is directed. We now claim that $\beta (\nu_i) \leq^{\dKRH^+} (\nu, 0)$ for every $i \in I$ (resp., $\leq^{\dKRH^{a+}}$). Fix $h \in \Lform_1 (X, d)$ (resp., in $\Lform_1^a (X, d)$). We wish to show that $\sum_{j=1}^{n_i} a_{ij} h (x_{ij}) \leq \int_{x \in X} h (x) d\nu + R (\nu_i)$. To this end, recall that $\nu_i \ll \eta_X [\nu]$, in particular $\nu_i \leq \eta_X [\nu]$.
Integrate $(h+\mathbf 1)''$ with respect to each side of the inequality: $\int_{(x, r) \in \mathbf B (X, d)} (h+\mathbf 1)'' (x, r) d\nu_i = \sum_{j=1}^{n_i} a_{ij} h (x_{ij}) - \sum_{j=1}^{n_i} a_{ij} r_{ij} + \sum_{j=1}^{n_i} a_{ij}$, and $\int_{(x, r) \in \mathbf B (X, d)} (h+\mathbf 1)'' (x, r) d\eta_X[\nu] = \int_{x \in X} h (x) d \nu + \nu (X)$, by Lemma~\ref{lemma:h'':<}. Therefore $\sum_{j=1}^{n_i} a_{ij} h (x_{ij}) \leq \int_{x \in X} h (x) d \nu + \nu (X) + \sum_{j=1}^{n_i} a_{ij} r_{ij} - \sum_{j=1}^{n_i} a_{ij} = \int_{x \in X} h (x) d \nu + R (\nu_i)$. We finally claim that $\sup_{i \in I} \beta (\nu_i) = (\nu, 0)$. Let $R = \inf_{i \in I} R (\nu_i)$. Let $h = \mathbf 1$, and compute $\int_{(x, r) \in \mathbf B (X, d)} h'' (x, r) d\nu_i = \sum_{j=1}^{n_i} a_{ij} (1 - r_{ij}) = \nu (X) - R (\nu_i)$. Since $\sup_{i \in I} \nu_i = \eta_X [\nu]$ and integration is Scott-continuous in the valuation, $\sup_{i \in I} (\nu (X) - R (\nu_i)) = \int_{(x, r) \in \mathbf B (X, d)} h'' (x, r) \allowbreak d\eta_X [\nu] = \int_{x \in X} h (x) d\nu = \nu (X)$. Hence $R=0$. By Theorem~\ref{thm:V:complete}, directed suprema are computed as naive suprema. That is to say, $\sup_{i \in I} \beta (\nu_i)$ is a formal ball $(G, R)$, with $R=0$ as we have just seen, and where (equating $G$ with a linear prevision) $G$ maps every $h \in \Lform_1 (X, d)$ (resp., $\Lform_1^a (X, d)$) to $\sup_{i \in I} (\sum_{j=1}^{n_i} a_{ij} h (x_{ij}) - R (\nu_i))$. We have already noticed that $\int_{(x, r) \in \mathbf B (X, d)} (h + \mathbf 1)'' (x, r) d\nu_i$ is equal to $\sum_{j=1}^{n_i} a_{ij} h (x_{ij}) - \sum_{j=1}^{n_i} a_{ij} r_{ij} + \sum_{j=1}^{n_i} a_{ij}$, that is, to $\sum_{j=1}^{n_i} a_{ij} h (x_{ij}) + \nu (X) - R (\nu_i)$, so $G (h) = \sup_{i \in I} \left(\int_{(x, r) \in \mathbf B (X, d)} (h + \mathbf 1)'' (x, r) d\nu_i\right) - \nu (X)$.
This is equal to $\int_{(x, r) \in \mathbf B (X, d)} (h + \mathbf 1)'' (x, r) d \eta_X [\nu] - \nu (X)$, since $\sup_{i \in I} \nu_i = \eta_X [\nu]$ and integration is Scott-continuous in the valuation. In turn, this is equal to $\int_{x \in X} (h + \mathbf 1) (x) d\nu - \nu (X)$ by Lemma~\ref{lemma:h'':<}, namely to $\int_{x \in X} h (x) d\nu$. It follows that $G (h) = \int_{x \in X} h (x) d\nu$ for every $h \in \Lform_1 (X, d)$ (resp., $\Lform_1^a (X, d)$), and this suffices to show that $G$ coincides with $h \mapsto \int_{x \in X} h (x) d\nu$, by Corollary~\ref{corl:=:L1} (resp., Corollary~\ref{corl:=:Lbnd1}). \qed \subsection{Continuity of $\Val_{\leq 1} X$} \label{sec:cont-val-leq-1} We deduce a similar theorem for the larger class of continuous Yoneda-complete quasi-metric spaces, by relying on the fact that the continuous Yoneda-complete quasi-metric spaces are exactly the $1$-Lipschitz continuous retracts of algebraic Yoneda-complete quasi-metric spaces \cite[Theorem~7.9]{JGL:formalballs}. Recall the map $\Prev f$ from Lemma~\ref{lemma:Pf:lip}. \begin{lem} \label{lemma:Vleq1:functor} Let $X, d$ and $Y, \partial$ be two continuous Yoneda-complete quasi-metric spaces, and $f \colon X, d \to Y, \partial$ be a $1$-Lipschitz continuous map. The restriction of $\Prev f$ to $\Val_{\leq 1} X$ is a $1$-Lipschitz continuous map from $\Val_{\leq 1} X, \dKRH$ to $\Val_{\leq 1} Y, \KRH\partial$, and also from $\Val_{\leq 1} X, \dKRH^a$ to $\Val_{\leq 1} Y, \KRH\partial^a$ for every $a \in \mathbb{R}_+$, $a > 0$. Similarly with $\Val_1$ instead of $\Val_{\leq 1}$. \end{lem} \proof Lemma~\ref{lemma:Pf:lip} says that $\Prev f$ is $1$-Lipschitz, so $\mathbf B^1 (\Prev f)$ is monotonic. The same Lemma shows that $\Prev f$ maps subnormalized linear previsions (i.e., elements of $\Val_{\leq 1} X$) to subnormalized linear previsions, and normalized linear previsions (i.e., elements of $\Val_1 X$) to normalized linear previsions.
Again we confuse linear previsions and continuous valuations. By Theorem~\ref{thm:V:complete}, $\Val_{\leq 1} X, \dKRH$ and $\Val_{\leq 1} Y, \KRH\partial$ are Yoneda-complete, and directed suprema in their spaces of formal balls are naive suprema. Similarly with $\dKRH^a$ and $\KRH\partial^a$ in lieu of $\dKRH$ and $\KRH\partial$, or with $\Val_1$ instead of $\Val_{\leq 1}$. By Lemma~\ref{lemma:Pf:lipcont}, $\mathbf B^1 (\Prev f)$ preserves naive suprema, hence all directed suprema. It must therefore be Scott-continuous, which shows the claim. \qed Let $X, d$ be a continuous Yoneda-complete quasi-metric space. There is an algebraic Yoneda-complete quasi-metric space $Y, \partial$ and there are two $1$-Lipschitz continuous maps $r \colon Y, \partial \to X, d$ and $s \colon X, d \to Y, \partial$ such that $r \circ s = \identity X$. By Lemma~\ref{lemma:Vleq1:functor}, $\Prev r$ and $\Prev s$ are also $1$-Lipschitz continuous, and clearly $\Prev r \circ \Prev s = \identity {\Val_{\leq 1} X}$, so $\Val_{\leq 1} X, \dKRH$ is a $1$-Lipschitz continuous retract of $\Val_{\leq 1} Y, \KRH\partial$. (Similarly with $\dKRH^a$ and $\KRH\partial^a$.) Theorem~\ref{thm:V:alg} states that $\Val_{\leq 1} Y, \KRH\partial$ (resp., $\KRH\partial^a$) is algebraic Yoneda-complete, whence: \begin{thm}[Continuity for spaces of subprobabilities] \label{thm:Vleq1:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The quasi-metric spaces $\Val_{\leq 1} X, \dKRH$ and $\Val_{\leq 1} X, \dKRH^a$ ($a \in \mathbb{R}_+$, $a > 0$) are continuous Yoneda-complete. \qed \end{thm} Together with Lemma~\ref{lemma:Vleq1:functor}, and Theorem~\ref{thm:V:alg} for the algebraic case, we obtain: \begin{cor} \label{cor:V:functor} $\Val_{\leq 1}, \dKRH$ defines an endofunctor on the category of continuous Yoneda-complete quasi-metric spaces and $1$-Lipschitz continuous maps. Similarly with $\dKRH^a$ instead of $\dKRH$ ($a > 0$), or with algebraic instead of continuous.
\qed \end{cor} \subsection{Probabilities on Metric Spaces} \label{sec:prob-metr-spac} We would like to embark on the study of \emph{normalized} continuous valuations, and the space $\Val_1 X$. The situation is more complicated than in the sub-normalized case, but the case of \emph{metric} spaces stands out. We deal with them here, before we examine the more general algebraic quasi-metric cases in subsection~\ref{sec:algebr-cont-prob}. \begin{lem} \label{lemma:dKRH:metric} Let $X, d$ be a metric space. Then $\dKRH$ is a metric, not just a quasi-metric, on $\Val_1 X$. The same holds for $\dKRH^a$, for every $a \in \mathbb{R}_+$, $a > 0$. \end{lem} \proof We consider, equivalently, the space of normalized linear previsions on $X$ instead of $\Val_1 X$. To simplify matters, recall that on a metric space $X, d$, the $d$-Scott topology coincides with the open ball topology. As a result, every $1$-Lipschitz map is automatically continuous, hence $1$-Lipschitz continuous since every metric space is standard. We start with the case of $\dKRH^a$. For every $h \in \Lform_1^a (X, d)$, $a-h$ is $1$-Lipschitz: for all $x, y \in X$, $(a-h (x)) - (a-h (y)) = h (y) - h (x) \leq d (y, x) = d (x,y)$, using the fact that $h$ is $1$-Lipschitz and that $d$ is a metric. Moreover, $a-h$ is bounded from above by $a$, so $a-h$ is in $\Lform_1^a (X, d)$. For every normalized linear prevision $F$ on $X$, $F (a - h) = a - F (h)$. Indeed, by linearity $F (a - h) + F (h) = F (a \cdot \mathbf 1)$, and this is equal to $a$ since $F$ is normalized. For all normalized linear previsions $F$, $F'$ on $X$, $\dreal (F' (a - h), F (a - h))$ is therefore equal to $\dreal (a - F' (h), a - F (h)) = \dreal (F (h), F' (h))$. It follows that for every $h \in \Lform_1^a (X, d)$, there is an $h' \in \Lform_1^a (X, d)$, namely $h'=a-h$, such that $\dreal (F' (h'), F (h')) = \dreal (F (h), F' (h))$. Therefore $\dKRH^a (F, F') \leq \dKRH^a (F', F)$.
By symmetry, we conclude that $\dKRH^a (F, F') = \dKRH^a (F', F)$: $\dKRH^a$ is a metric. To show that $\dKRH$ is a metric, it is enough to observe that $\dKRH (F, F')$ is equal to $\sup_{a \in \mathbb{R}_+, a > 0} \dKRH^a (F, F')$ (Lemma~\ref{lemma:KRH:KRHa}), and to use the fact that $\dKRH^a$ is a metric. \qed Since $\dKRH (F, F') = \dKRH (F', F)$ in that case, $\dKRH (F, F')$ is also equal to $\max (\dKRH (F, \allowbreak F'), \allowbreak \dKRH (F', F))$, which is easily seen to be equal to $\sup_{h \in \Lform_1 (X, d)} \max (\dreal (F (h), F' (h)), \allowbreak \dreal (F' (h), F (h)))$. The inner maximum is the symmetrized metric $\dreal^{sym}$, defined by $\dreal^{sym} (a, \allowbreak b) = |a-b|$ for all $a, b \in \mathbb{R}_+$. Therefore, we obtain the formula: \begin{eqnarray} \label{eq:dKRH:metric} \dKRH (\nu, \nu') & = & \sup_{h \text{ 1-Lipschitz}} \left|\int_{x \in X} h (x) d\nu - \int_{x \in X} h (x) d\nu'\right| \\ \nonumber & = & \sup_{h \text{ 1-Lipschitz bounded}} \left|\int_{x \in X} h (x) d\nu - \int_{x \in X} h (x) d\nu'\right|, \end{eqnarray} for $\nu, \nu' \in \Val_1 X$, in the case where $d$ is a metric on $X$. (The second equality is by Lemma~\ref{lemma:KRH:bounded}.) Similarly, \begin{eqnarray} \label{eq:dKRHa:metric} \dKRH^a (\nu, \nu') & = & \sup_{h \text{ 1-Lipschitz bounded by $a$}} \left|\int_{x \in X} h (x) d\nu - \int_{x \in X} h (x) d\nu'\right|, \end{eqnarray} for $\nu, \nu' \in \Val_1 X$, assuming again that $d$ is a metric. We recognize the usual formula for the Kantorovich-Rubinshte\u\i n metric when $a=1$. Theorem~\ref{thm:V:complete} then states: \begin{thm} \label{thm:V1:complete} For every complete metric space $X, d$, the space $\Val_1 X$ with the Kantorovich-Rubinshte\u\i n-Hutchinson metric (\ref{eq:dKRH:metric}), or with the $a$-bounded Kantorovich-Rubinshte\u\i n-Hutchinson metric (\ref{eq:dKRHa:metric}), $a \in \mathbb{R}_+$, $a > 0$, is a complete metric space. 
\qed \end{thm} We haven't cared to mention that the simple normalized valuations $\sum_{i=1}^n a_i \delta_{x_i}$ with $x_i$ center points in $X, d$, $\sum_{i=1}^n a_i = 1$, are center points. This is trivially true, because every point of a metric space is a center point. In fact, in the case of Theorem~\ref{thm:V1:complete}, \emph{every} normalized continuous valuation is a center point. Theorem~\ref{thm:V1:complete} resembles the classical result that, if $X, d$ is a complete separable metric space, then $\Val_1 X, \dKRH$ is a complete (separable) metric space. Note that we do not need $X$ to be separable for Theorem~\ref{thm:V1:complete} to hold. We will not show that the simple normalized valuations $\sum_{i=1}^n a_i \delta_{x_i}$ with $x_i$ taken from a given strong basis $\mathcal B$ also form a strong basis for the $\dKRH^a$ metric, that is, a dense subset (Remark~\ref{rem:strong:basis}): we shall prove that in the more general case of the $\dKRH^a$ \emph{quasi-}metric (Theorem~\ref{thm:V1:alg}). \begin{rem} \label{rem:not:dense} As a special case of the upcoming Theorem~\ref{thm:V:weak=dScott}, applied to complete metric spaces $X, d$, the $\dKRH^a$-Scott topology on $\Val_1 X$ will coincide with the weak topology. Since the former is the usual open ball topology of the metric $\dKRH^a$, $\dKRH^a$ metrizes the weak topology on spaces of normalized continuous valuations on complete metric spaces. This subsumes the well-known result that it metrizes the weak topology on spaces of probability measures on complete separable (Polish) spaces. However, the unbounded $\dKRH$ metric does \emph{not} metrize the weak topology. For a counterexample, we reuse one due to Kravchenko \cite[Lemma~3.7]{Kravchenko:complete:K}. Take any complete metric space $X, d$, with points ${(x_n)}_{n \in \nat \smallsetminus \{0\}}$ such that $n \leq d (x_0, x_n) < + \infty$. Let $\nu_n = \frac 1 n \delta_{x_n} + (1-\frac 1 n) \delta_{x_0}$. 
Then ${(\nu_n)}_{n \in \nat \smallsetminus \{0\}}$ converges to $\delta_{x_0}$ in the weak topology, since for every subbasic weak open set $[f > r]$ containing $\delta_{x_0}$ (i.e., $f (x_0) > r$), $\int_{x \in X} f (x) d\nu_n = \frac 1 n f (x_n) + (1-\frac 1 n) f (x_0)$ is strictly larger than $r$ for $n$ large enough (this is easy if $r \geq 0$ since then $f (x_0) > 0$, and is trivial if $r < 0$ since $f \in \Lform X$ takes its values in $\overline{\mathbb{R}}_+$). However, $\dKRH (\delta_{x_0}, \nu_n) = \sup_{h \in \Lform_1 (X, d)} \dreal (h (x_0), \frac 1 n h (x_n) + (1-\frac 1 n) h (x_0)) \geq \dreal (d (x_0, x_n), (1-\frac 1 n) d (x_0, x_n))$ (using $d (\_, x_n) \in \Lform_1 (X, d)$, see Lemma~\ref{lemma:d(_,x)}) $=\frac 1 n d (x_0, x_n) \geq 1$, which shows that ${(\nu_n)}_{n \in \nat \smallsetminus \{0\}}$ does not converge to $\delta_{x_0}$ in the open ball topology of $\dKRH$. We reassure ourselves by checking that ${(\nu_n)}_{n \in \nat \smallsetminus \{0\}}$ does converge to $\delta_{x_0}$ in the open ball topology of $\dKRH^a$, though (this must be, since we announced that it would coincide with the weak topology): $\dKRH^a (\delta_{x_0}, \nu_n) = \sup_{h \in \Lform_1^a (X, d)} \max (\frac 1 n (h (x_0) - h (x_n)), 0) \leq \frac a n$. \end{rem} \subsection{Algebraicity and Continuity of the Probabilistic Powerdomain $\Val_1 X$} \label{sec:algebr-cont-prob} We now turn to spaces of normalized continuous valuations in the general case of algebraic, then continuous, quasi-metric spaces. The situation is less neat than with sub-normalized continuous valuations (Theorem~\ref{thm:V:alg}) or with metric spaces, and we shall distinguish two settings where we can conclude: for the bounded $\dKRH^a$ quasi-metrics (Theorem~\ref{thm:V1:alg}), or for $\dKRH$ assuming the existence of a so-called root in $X$ (Theorem~\ref{thm:V1:alg:root}).
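The two estimates in Kravchenko's counterexample (Remark~\ref{rem:not:dense}) can be checked numerically. The following sketch is ours, not part of the text: it takes $X = \mathbb{R}$ with $d(x, y) = |x - y|$, $x_0 = 0$, $x_n = n$ (so that $d(x_0, x_n) = n$ exactly), represents simple valuations as weight/point lists, and samples bounded $1$-Lipschitz maps as capped infima of cone functions. All function names are ours.

```python
import random

def dreal(u, v):
    """The quasi-metric on R_+: dreal(u, v) = max(u - v, 0)."""
    return max(u - v, 0.0)

def integral(h, simple_valuation):
    """Integral of h against a simple valuation [(weight, point), ...]."""
    return sum(w * h(x) for w, x in simple_valuation)

def random_bounded_lipschitz(a, rng):
    """A random element of Lform_1^a(X, d) on X = R: an infimum of cone
    functions c_k + |z - p_k|, capped at a, is 1-Lipschitz, >= 0, and <= a."""
    anchors = [(rng.uniform(0.0, a), rng.uniform(-5.0, 15.0)) for _ in range(4)]
    return lambda z: min(a, min(c + abs(z - p) for c, p in anchors))

rng = random.Random(0)
a = 1.0
for n in (2, 5, 50):
    x0, xn = 0.0, float(n)
    # nu_n = (1/n) delta_{x_n} + (1 - 1/n) delta_{x_0}
    nu_n = [(1.0 / n, xn), (1.0 - 1.0 / n, x0)]
    # Unbounded dKRH: the 1-Lipschitz witness h = d(_, x_n) exhibits a gap of
    # (1/n) d(x_0, x_n) = 1, independently of n: no convergence to delta_{x_0}.
    h = lambda z: abs(z - xn)
    assert abs(dreal(h(x0), integral(h, nu_n)) - 1.0) < 1e-9
    # Bounded dKRH^a: for every h in Lform_1^a(X, d), the gap is
    # max((1/n)(h(x_0) - h(x_n)), 0) <= a/n, which vanishes as n grows.
    for _ in range(200):
        g = random_bounded_lipschitz(a, rng)
        assert dreal(g(x0), integral(g, nu_n)) <= a / n + 1e-9
print("dKRH gap stays 1 for every n; dKRH^a gap is at most a/n")
```

The random sampling only probes $\dKRH^a$ from below, of course; the exact bound $\frac{a}{n}$ is the one computed in the remark.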
\begin{thm}[Algebraicity for spaces of probabilities, $\dKRH^a$] \label{thm:V1:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with a strong basis $\mathcal B$. The space $\Val_1 X, \dKRH^a$ is algebraic Yoneda-complete, for every $a \in \mathbb{R}_+$, $a > 0$. All the simple normalized valuations $\sum_{i=1}^n a_i \delta_{x_i}$ with $x_i$ center points in $X, d$, $\sum_{i=1}^n a_i = 1$, are center points, and form a strong basis, even when each $x_i$ is taken from $\mathcal B$. \end{thm} \proof Let $\nu \in \Val_1 X$. We wish to exhibit a directed family of formal balls $(\nu'_i, r'_i)$, $i \in I$, below $(\nu, 0)$, whose supremum is $(\nu, 0)$, where each $\nu'_i$ is a simple valuation of the form $\sum_{j=1}^n a_j \delta_{x_j}$, with $x_j$ in $\mathcal B$, and $\sum_{j=1}^n a_j = 1$. As in the proof of Theorem~\ref{thm:V:alg}, $\eta_X [\nu]$ is the supremum of a directed family of simple valuations $\nu_i = \sum_{j=1}^{n_i} a_{ij} \delta_{(x_{ij}, r_{ij})}$, $i \in I$, where each $x_{ij}$ is in $\mathcal B$, and $\sum_{j=1}^{n_i} a_{ij} \leq 1$. We can require that $r_{ij} < 1$ for all $i, j$, as in the proof of Theorem~\ref{thm:V:alg}, and we similarly define $\mathring\nu_i$ as $\sum_{j=1}^{n_i} a_{ij} \delta_{x_{ij}}$, $R (\nu_i) = \sum_{j=1}^{n_i} a_{ij} r_{ij} + 1 - \sum_{j=1}^{n_i} a_{ij}$, $\beta (\nu_i) = (\mathring\nu_i, R (\nu_i))$. Define $i \preceq i'$ if and only if $\nu_i \leq \nu_{i'}$, for all $i, i' \in I$. As before, we show that: \begin{quote} $(*)$ for all $i \preceq i'$ in $I$, $\beta (\nu_i) \leq^{\dKRH^{a+}} \beta (\nu_{i'})$. \end{quote} This implies that the family ${(\beta (\nu_i))}_{i \in I}$ is directed. We also show that $\beta (\nu_i) \leq^{\dKRH^{a+}} (\nu, 0)$ for every $i \in I$, and that $\sup_{i \in I} \beta (\nu_i) = (\nu, 0)$. However $\mathring\nu_i$ is not normalized, only subnormalized. Fix a center point $x_0$.
(There must exist one: the only algebraic quasi-metric space without a center point is the empty space, and if $X$ is empty then $\Val_1 X$ is empty as well, so we would not have had a $\nu \in \Val_1 X$ to start with.) For each $i \in I$, let $\lambda_i = 1-\mathring\nu_i (X)$, $\nu'_i = \mathring\nu_i + \lambda_i \delta_{x_0}$, and $r'_i = R (\nu_i) + a \lambda_i$. Now $\nu'_i$ is normalized. We check that ${(\nu'_i, r'_i)}_{i \in I}$ is directed. To this end, it is enough to show that for all $i \preceq i'$ in $I$, $(\nu'_i, r'_i) \leq^{\dKRH^{a+}} (\nu'_{i'}, r'_{i'})$. Before we do so, we note that if $i \preceq i'$, then $\nu_i (\mathbf B (X, d)) \leq \nu_{i'} (\mathbf B (X, d))$, that is, $\sum_{j=1}^{n_i} a_{ij} \leq \sum_{j=1}^{n_{i'}} a_{i'j}$; equivalently, $\mathring\nu_i (X) \leq \mathring\nu_{i'} (X)$. Therefore: \begin{quote} $(**)$ for all $i \preceq i'$ in $I$, $\lambda_i \geq \lambda_{i'}$. \end{quote} We also note that for all $i \preceq i'$ in $I$, since by $(*)$ $\dKRH^a (\mathring\nu_i, \mathring\nu_{i'}) \leq R (\nu_i) - R (\nu_{i'})$, the inequality $\int_{x \in X} h (x) d\mathring\nu_i \leq \int_{x \in X} h (x) d\mathring\nu_{i'} + R (\nu_i) - R (\nu_{i'})$ holds for $h$ equal to the constant function equal to $a$; hence $a \mathring\nu_i (X) \leq a \mathring\nu_{i'} (X) + R (\nu_i) - R (\nu_{i'})$. This implies: \begin{quote} $(*{*}*)$ for all $i \preceq i'$ in $I$, $r'_i \geq r'_{i'}$. \end{quote} Let now $h$ be an arbitrary element from $\Lform_1^a (X, d)$, and assume $i \preceq i'$.
We have the following: \begin{eqnarray*} \dreal (\int_{x \in X} \mskip-20mu h (x) d\nu'_i, \int_{x \in X} \mskip-20mu h (x) d\nu'_{i'}) & = & \max (\int_{x \in X} \mskip-20mu h (x) d\mathring\nu_i + h (x_0) \lambda_i - \int_{x \in X} \mskip-20mu h (x) d\mathring\nu_{i'} - h (x_0) \lambda_{i'}, 0) \\ & \leq & \max (R (\nu_i) - R (\nu_{i'}) + h (x_0) (\lambda_i - \lambda_{i'}), 0) \\ && \quad\text{since }\dKRH^a (\mathring\nu_i, \mathring\nu_{i'}) \leq R (\nu_i) - R (\nu_{i'})\text{, using }(*) \\ & \leq & \max (R (\nu_i) - R (\nu_{i'}) + a (\lambda_i - \lambda_{i'}), 0) \\ && \quad\text{since }h (x_0) \leq a\text{, and }\lambda_i \geq \lambda_{i'}\text{ by }(**) \\ & = & \max (r'_i - r'_{i'}, 0) = r'_i - r'_{i'} \quad\text{ by }(*{*}*). \end{eqnarray*} This shows that for all $i \preceq i'$ in $I$, $\dKRH^a (\nu'_i, \nu'_{i'}) \leq r'_i - r'_{i'}$, hence that $(\nu'_i, r'_i) \leq^{\dKRH^{a+}} (\nu'_{i'}, r'_{i'})$. In particular, ${(\nu'_i, r'_i)}_{i \in I}$ is directed. We know that $\beta (\nu_i) \leq^{\dKRH^{a+}} (\nu, 0)$ for every $i \in I$, and we need to show that $(\nu'_i, r'_i) \leq^{\dKRH^{a+}} (\nu, 0)$ as well. For every $h \in \Lform_1^a (X, d)$, this means showing that $\int_{x \in X} h (x) d\mathring\nu_i + h (x_0) \lambda_i \leq \int_{x \in X} h (x) d \nu + R (\nu_i) + a \lambda_i$. We know that $\int_{x \in X} h (x) d\mathring\nu_i \leq \int_{x \in X} h (x) d \nu + R (\nu_i)$ since $\beta (\nu_i) \leq^{\dKRH^{a+}} (\nu, 0)$, and we conclude since $h (x_0) \leq a$. Finally, we know that $(\nu, 0)$ is the supremum of the directed family ${(\beta (\nu_i))}_{i \in I}$ in $\mathbf B (\Val_{\leq 1} X, \dKRH^a)$, and we must show that it is also the supremum of the directed family ${(\nu'_i, r'_i)}_{i \in I}$ in $\mathbf B (\Val_1 X, \dKRH^a)$. We claim that: \begin{quote} $(\dagger)$ $\inf_{i \in I} \lambda_i = 0$. \end{quote} Indeed, recall that $\sup_{i \in I} \beta (\nu_i) = (\nu, 0)$. 
Since directed suprema are computed as naive suprema (Theorem~\ref{thm:V:complete}), this means that $\inf_{i \in I} R (\nu_i) = 0$, and that $\int_{x \in X} h (x) d\nu$ is equal to $\sup_{i \in I} (\int_{x \in X} h (x) d\mathring\nu_i - R (\nu_i))$ for every $h \in \Lform_1^a (X, d)$. Taking $h = a.\mathbf 1$, the latter yields $a = \sup_{i \in I} (a \mathring\nu_i (X) - R (\nu_i))$, or equivalently $\sup_{i \in I} (-a \lambda_i - R (\nu_i)) = 0$. Since ${(-R (\nu_i))}_{i \in I}$ is a directed family (because $i \preceq i'$ implies $\beta (\nu_i) \leq \beta (\nu_{i'})$, hence $R (\nu_i) \geq R (\nu_{i'})$), and ${(-\lambda_i)}_{i \in I}$ is directed as well by $(**)$, we may use the Scott-continuity of addition and of multiplication by $a$, and rewrite this as $a \sup_{i \in I} (-\lambda_i) + \sup_{i \in I} (-R (\nu_i)) = 0$, hence $\inf_{i \in I} \lambda_i = 0$, considering that $\inf_{i \in I} R (\nu_i) = 0$. Knowing this, and using again that directed suprema are naive suprema, our task consists in showing that $\inf_{i \in I} r'_i = 0$, and that for every $h \in \Lform_1^a (X, d)$, $\int_{x \in X} h (x) d\nu = \sup_{i \in I} (\int_{x \in X} h (x) d\nu'_i - r'_i)$. The first equality is proved by rewriting $\inf_{i \in I} r'_i$ as $\inf_{i \in I} R (\nu_i) + a \inf_{i \in I} \lambda_i$, invoking Scott-continuity as above, and then using the equality $\inf_{i \in I} R (\nu_i) = 0$ and $(\dagger)$. For the second equality, $\sup_{i \in I} (\int_{x \in X} h (x) d\nu'_i - r'_i)$ is equal to $\sup_{i \in I} (\int_{x \in X} h (x) d\mathring\nu_i + h (x_0) \lambda_i - R (\nu_i) - a \lambda_i)$, which is equal to the sum of $\sup_{i \in I} (\int_{x \in X} h (x) d\mathring\nu_i - R (\nu_i))$ and of $(a - h (x_0)) \sup_{i \in I} (-\lambda_i)$, by Scott-continuity of addition and of multiplication by $a - h (x_0)$, again.
The first of those summands is equal to $\int_{x \in X} h (x) d\nu$ since $(\nu, 0)$ is the supremum of ${(\beta (\nu_i))}_{i \in I}$ in $\mathbf B (\Val_{\leq 1} X, \dKRH^a)$, and the second one is equal to $0$ by $(\dagger)$. \qed A quasi-metric $d$ on a space $X$ is \emph{$a$-bounded} if and only if $d (x, y) \leq a$ for all $x, y \in X$. It is \emph{bounded} if and only if it is $a$-bounded for some $a \in \mathbb{R}_+$. \begin{rem} \label{rem:dKRHa:bounded} The quasi-metric $\dKRH^a$ is $a$-bounded on any space of previsions. \end{rem} \begin{lem} \label{lemma:dKRH=dKRHa} If $d$ is an $a$-bounded quasi-metric on $X$, then $\dKRH$ and $\dKRH^a$ coincide on any space of previsions. \end{lem} \proof Clearly, $\dKRH^a \leq \dKRH$. In order to show that $\dKRH \leq \dKRH^a$, let $F, F'$ be any two previsions, and let us consider any $h \in \Lform_1 (X, d)$. We claim that there is an $h' \in \Lform_1^a (X, d)$ such that $\dreal (F (h), F' (h)) = \dreal (F (h'), F' (h'))$. This will imply that $\dreal (F (h), F' (h)) \leq \dKRH^a (F, F')$, hence, as $h$ is arbitrary, that $\dKRH (F, F') \leq \dKRH^a (F, F')$. For any two points $x, y \in X$, $\dreal (h (x), h (y)) \leq d (x, y) \leq a$. This implies that the range of $h$ is included in an interval of length at most $a$, namely an interval $[b, b+a]$ for some $b \in \mathbb{R}_+$, or $\{+\infty\}$. In the first case, $h' = h - b.\mathbf 1$ fits. In the second case, $\dreal (F (h), F' (h)) = 0$ and the constant $0$ map fits. \qed \begin{cor} \label{corl:V1:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space with strong basis $\mathcal B$, where $d$ is bounded. The space $\Val_1 X, \dKRH$ is algebraic Yoneda-complete. All the simple normalized valuations $\sum_{i=1}^n a_i \delta_{x_i}$ with $x_i \in \mathcal B$ and $\sum_{i=1}^n a_i = 1$, are center points, and form a strong basis. 
\end{cor} \proof By Lemma~\ref{lemma:dKRH=dKRHa}, $\dKRH=\dKRH^a$, where $a$ is some positive real such that $d$ is $a$-bounded. Now apply Theorem~\ref{thm:V1:alg}. \qed Corollary~\ref{corl:V1:alg} is rather restrictive, and we give a better result below. The proof reuses some of the ideas used above. In a quasi-metric space $X, d$, say that $x \in X$ is an \emph{$a$-root} if and only if $d (x, y) \leq a$ for every $y \in X$. As a special case, a $0$-root is an element below all others in the $\leq^{d^+}$ ordering. For example, $0$ is a $0$-root in $\overline{\mathbb{R}}_+$. We call \emph{root} any $a$-root, for some $a \in \mathbb{R}_+$, $a > 0$. $\mathbb{R}$ has no root. \begin{lem} \label{lemma:root:center} Let $X, d$ be a standard algebraic quasi-metric space with strong basis $\mathcal B$, and $a \in \mathbb{R}_+$, $a > 0$. If $X, d$ has an $a$-root, then, for every $\epsilon > 0$, it also has an $(a+\epsilon)$-root in $\mathcal B$. \end{lem} \proof Assume an $a$-root $x$. By Lemma~\ref{lemma:B:basis}, $(x, 0)$ is the supremum of a directed family ${(x_i, r_i)}_{i \in I}$ where each $x_i$ is in $\mathcal B$. Since $X, d$ is standard, $\inf_{i \in I} r_i = 0$, so there is an $i \in I$ such that $r_i < \epsilon$. We use the fact that $(x_i, r_i) \leq^{d^+} (x, 0)$ to infer that $d (x_i, x) \leq r_i < \epsilon$. It follows that, for every $y \in X$, $d (x_i, y) \leq d (x_i, x) + d (x, y) < a+\epsilon$. \qed \begin{thm}[Algebraicity for spaces of probabilities, $\dKRH$] \label{thm:V1:alg:root} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with strong basis $\mathcal B$, and with a root $x_0$. The space $\Val_1 X, \dKRH$ is algebraic Yoneda-complete. All the simple normalized valuations $\sum_{i=1}^n a_i \delta_{x_i}$ where $x_i$ are center points and $\sum_{i=1}^n a_i = 1$, are center points, and form a strong basis, even when each $x_i$ is taken from $\mathcal B$.
\end{thm} \proof By Lemma~\ref{lemma:root:center}, we may assume that $x_0$ is also in $\mathcal B$. Let now $a \in \mathbb{R}_+$, $a > 0$, be such that $d (x_0, x) \leq a$ for every $x \in X$. For all normalized previsions $F$ and $F'$ on $X$, for every $h \in \Lform_1 (X, d)$, we claim that there is a map $h' \in \Lform_1 (X, d)$ such that $h' (x_0) \leq a$, and such that $\dreal (F (h), F' (h)) = \dreal (F (h'), F' (h'))$. The argument is similar to Lemma~\ref{lemma:dKRH=dKRHa}. For every $x \in X$, $h (x_0) \leq h (x) + d (x_0, x)$ since $h$ is $1$-Lipschitz, so $h (x) \geq h (x_0) - a$ for every $x \in X$. If $h (x_0)=+\infty$, this implies that $h$ is the constant $+\infty$ map. Then $F (h) = F (\sup_{k \in \nat} k.\mathbf 1) = \sup_{k \in \nat} k = +\infty$, and similarly $F' (h) = +\infty$, so $\dreal (F (h), F' (h)) = 0$: we can take $h'=0$. If $h (x_0) < +\infty$, then let $h' (x) = h (x) - h (x_0) + a$ for every $x \in X$. Since $h (x) \geq h (x_0) - a$ for every $x \in X$, $h' (x)$ is in $\overline{\mathbb{R}}_+$. Clearly $h' \in \Lform_1 (X, d)$, and $h' (x_0) = a$. Therefore $\dKRH (F, F') = \sup_{h \in \Lform_1 (X, d), h (x_0) \leq a} \dreal (F (h), F' (h))$. The proof is now very similar to Theorem~\ref{thm:V1:alg}. Let $\nu \in \Val_1 X$. Then $\eta_X [\nu]$ is the supremum of a directed family of simple valuations $\nu_i = \sum_{j=1}^{n_i} a_{ij} \delta_{(x_{ij}, r_{ij})}$, $i \in I$, where each $x_{ij}$ is in $\mathcal B$, and $\sum_{j=1}^{n_i} a_{ij} \leq 1$. We require $r_{ij} < 1$, define $\mathring\nu_i$ as $\sum_{j=1}^{n_i} a_{ij} \delta_{x_{ij}}$, $R (\nu_i) = \sum_{j=1}^{n_i} a_{ij} r_{ij} + 1 - \sum_{j=1}^{n_i} a_{ij}$, and $\beta (\nu_i) = (\mathring\nu_i, R (\nu_i))$. Define $i \preceq i'$ if and only if $\nu_i \leq \nu_{i'}$, for all $i, i' \in I$. We have again: $(*)$ for all $i \preceq i'$ in $I$, $\beta (\nu_i) \leq^{\dKRH^+} \beta (\nu_{i'})$. 
Now define $\nu'_i$ as $\mathring\nu_i + \lambda_i \delta_{x_0}$, and $r'_i = R (\nu_i) + a \lambda_i$, where $\lambda_i = 1-\mathring\nu_i (X)$, exactly as in the proof of Theorem~\ref{thm:V1:alg}. We show, as before: $(**)$ for all $i \preceq i'$ in $I$, $\lambda_i \geq \lambda_{i'}$; $(*{*}*)$ for all $i \preceq i'$ in $I$, $r'_i \geq r'_{i'}$. Let now $h$ be an arbitrary element from $\Lform_1 (X, d)$ such that $h (x_0) \leq a$, and assume $i \preceq i'$. We have the following chain of inequalities---exactly the same as in Theorem~\ref{thm:V1:alg}: \begin{eqnarray*} \dreal (\int_{x \in X} \mskip-20mu h (x) d\nu'_i, \int_{x \in X} \mskip-20mu h (x) d\nu'_{i'}) & = & \max (\int_{x \in X} \mskip-20mu h (x) d\mathring\nu_i + h (x_0) \lambda_i - \int_{x \in X} \mskip-20mu h (x) d\mathring\nu_{i'} - h (x_0) \lambda_{i'}, 0) \\ & \leq & \max (R (\nu_i) - R (\nu_{i'}) + h (x_0) (\lambda_i - \lambda_{i'}), 0) \\ && \quad\text{since }\dKRH (\mathring\nu_i, \mathring\nu_{i'}) \leq R (\nu_i) - R (\nu_{i'})\text{, using }(*) \\ & \leq & \max (R (\nu_i) - R (\nu_{i'}) + a (\lambda_i - \lambda_{i'}), 0) \\ && \quad\text{since }h (x_0) \leq a\text{, and }\lambda_i \geq \lambda_{i'}\text{ by }(**) \\ & = & \max (r'_i - r'_{i'}, 0) = r'_i - r'_{i'} \quad\text{ by }(*{*}*). \end{eqnarray*} This shows that for all $i \preceq i'$ in $I$, $\dKRH (\nu'_i, \nu'_{i'}) \leq r'_i - r'_{i'}$, since we have taken the precaution to show that $\dKRH (\nu'_i, \nu'_{i'})$ is the supremum of $\dreal (\int_{x \in X} \mskip-20mu h (x) d\nu'_i, \int_{x \in X} \mskip-20mu h (x) d\nu'_{i'})$ over all $h \in \Lform_1 (X, d)$ \emph{such that $h (x_0) \leq a$}. Therefore $(\nu'_i, r'_i) \leq^{\dKRH^+} (\nu'_{i'}, r'_{i'})$ for all $i \preceq i'$ in $I$. In particular, ${(\nu'_i, r'_i)}_{i \in I}$ is directed. The proof that $(\nu'_i, r'_i) \leq^{\dKRH^+} (\nu, 0)$ is also as in the proof of Theorem~\ref{thm:V1:alg}. 
It suffices to show that for every $h \in \Lform_1 (X, d)$ with $h (x_0) \leq a$, $\int_{x \in X} h (x) d\mathring\nu_i + h (x_0) \lambda_i \leq \int_{x \in X} h (x) d \nu + R (\nu_i) + a \lambda_i$. We know that $\int_{x \in X} h (x) d\mathring\nu_i \leq \int_{x \in X} h (x) d \nu + R (\nu_i)$ since $\beta (\nu_i) \leq^{\dKRH^{+}} (\nu, 0)$, and we conclude since $h (x_0) \leq a$. Finally, we show that $(\nu, 0)$ is also the supremum of the directed family ${(\nu'_i, r'_i)}_{i \in I}$ in $\mathbf B (\Val_1 X, \dKRH)$ by first showing: $(\dagger)$ $\inf_{i \in I} \lambda_i = 0$; then by showing that it is the naive supremum of the family, that is, by showing that $\inf_{i \in I} r'_i = 0$ and that for every $h \in \Lform_1 (X, d)$, $\int_{x \in X} h (x) d\nu = \sup_{i \in I} (\int_{x \in X} h (x) d\nu'_i - r'_i)$. The first equality is easy, considering $(\dagger)$. For the second equality, $\sup_{i \in I} (\int_{x \in X} h (x) d\nu'_i - r'_i)$ is equal to $\sup_{i \in I} (\int_{x \in X} h (x) d\mathring\nu_i + h (x_0) \lambda_i - R (\nu_i) - a \lambda_i)$, which is equal to the sum of $\sup_{i \in I} (\int_{x \in X} h (x) d\mathring\nu_i - R (\nu_i))$ and of $(a - h (x_0)) \sup_{i \in I} (-\lambda_i)$, by Scott-continuity of addition and of multiplication by $a - h (x_0)$, again. (We use $h (x_0) \leq a$ again here.) The first of those summands is equal to $\int_{x \in X} h (x) d\nu$ since $(\nu, 0)$ is the (naive) supremum of ${(\beta (\nu_i))}_{i \in I}$ in $\mathbf B (\Val_{\leq 1} X, \dKRH)$, and the second one is equal to $0$ by $(\dagger)$. \qed The case of continuous Yoneda-complete quasi-metric spaces follows by the same pattern as in Section~\ref{sec:cont-val-leq-1}, but we need an additional argument in the case of spaces with a root. Let $X, d$ be a continuous Yoneda-complete quasi-metric space.
There is an algebraic Yoneda-complete quasi-metric space $Y, \partial$ and two $1$-Lipschitz continuous maps $r \colon Y, \partial \to X, d$ and $s \colon X, d \to Y, \partial$ such that $r \circ s = \identity X$. This is again by \cite[Theorem~7.9]{JGL:formalballs}. We need to know that $Y, \partial$ is built as the formal ball completion of $X, d$, defined as follows. Let $(x, r) \prec (y, s)$ if and only if $d (x, y) < r - s$. A \emph{rounded ideal} of formal balls on $X, d$ is a set $D \subseteq \mathbf B (X, d)$ that is $\prec$-directed in the sense that every finite subset of $D$ is $\prec$-below some element of $D$, and $\prec$-downwards-closed, in the sense that any element $\prec$-below some element of $D$ is in $D$. The \emph{aperture} of such a set $D$ is $\alpha (D) = \inf \{r \mid (x, r) \in D\}$. The elements of $Y$ are exactly the rounded ideals in this sense that have aperture $0$. The quasi-metric $\partial$ is defined by $\partial (D, D') = \sup_{b \in D} \inf_{b' \in D'} d^+ (b, b')$. The set $\Dc (x, 0) = \{(y, r) \in \mathbf B (X, d) \mid (y, r) \prec (x, 0)\}$ is an element of $Y$ for every $x \in X$. We need to know these details to show: \begin{lem} \label{lemma:root:completion} The formal ball completion of a quasi-metric space $X, d$ with an $a$-root $x$ ($a \in \mathbb{R}_+$, $a > 0$) has an $a$-root, namely $\Dc (x, 0)$. \end{lem} \proof Let $D$ be an element of the formal ball completion. We must show that $\partial (\Dc (x, 0), D) \leq a$, where $\partial$ is defined as above. For every $b = (z, t) \in \Dc (x, 0)$, by definition $d (z, x) < t$. For every $b' = (y, s) \in D$, $d^+ (b, b') = \max (d (z, y) -t+s, 0) \leq \max (d (z, x) + d (x, y) - t + s, 0) \leq \max (d (x, y) + s, 0)$. Since $x$ is an $a$-root, this is less than or equal to $a+s$. We use the fact that $\alpha (D)=0$ to make $s$ arbitrarily small. Therefore $\inf_{b' \in D} d^+ (b, b') \leq a$. 
Taking suprema over $b \in \Dc (x, 0)$, we obtain $\partial (\Dc (x, 0), D) \leq a$. \qed We return to our argument. By Lemma~\ref{lemma:Vleq1:functor}, $\Prev r$ and $\Prev s$ are also $1$-Lipschitz continuous, and clearly $\Prev r \circ \Prev s = \identity {\Val_1 X}$, so $\Val_1 X, \dKRH$ is a $1$-Lipschitz continuous retract of $\Val_1 Y, \KRH\partial$. (Similarly with $\dKRH^a$ and $\KRH\partial^a$.) By Theorem~\ref{thm:V1:alg}, $\Val_1 Y, \KRH\partial^a$ is algebraic Yoneda-complete. If $X, d$ has a root, then $Y, \partial$ has a root, too, by Lemma~\ref{lemma:root:completion}, and in that case Theorem~\ref{thm:V1:alg:root} tells us that $\Val_1 Y, \KRH\partial$ is algebraic Yoneda-complete. In any case, we obtain: \begin{thm}[Continuity for spaces of probabilities] \label{thm:V1:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The quasi-metric space $\Val_1 X, \dKRH^a$ is continuous Yoneda-complete for every $a \in \mathbb{R}_+$, $a >0$. If $X, d$ has a root, then $\Val_1 X, \dKRH$ is continuous Yoneda-complete as well. \qed \end{thm} Together with Lemma~\ref{lemma:Vleq1:functor}, and Theorem~\ref{thm:V1:alg} for the algebraic case, we obtain: \begin{cor} \label{cor:V1:functor:a} $\Val_1, \dKRH^a$ defines an endofunctor on the category of continuous (resp., algebraic) Yoneda-complete quasi-metric spaces and $1$-Lipschitz continuous maps. \qed \end{cor} The case of $\dKRH$ requires the following additional lemma. \begin{lem} \label{lemma:V1:root} For every quasi-metric space $X, d$ with an $a$-root $x$, ($a \in \mathbb{R}_+$, $a > 0$), $\Val_1 X, \dKRH$ has an $a$-root, namely $\delta_x$. \end{lem} \proof Let $\nu \in \Val_1 X$, and $h \in \Lform_1 X$.
Then: \begin{eqnarray*} h (x) & = & \int_{y \in X} h (x) d\nu \qquad \text{since $\nu (X)=1$} \\ & \leq & \int_{y \in X} (h (y) + d (x,y)) d\nu \qquad \text{since $h$ is $1$-Lipschitz} \\ & \leq & \int_{y \in X} h(y) d\nu + a, \end{eqnarray*} since $d (x, y) \leq a$ for every $y \in X$, and $\nu (X)=1$. It follows that $\dreal (\int_{y \in X} h (y) d\delta_x, \int_{y \in X} h (y) d\nu) \leq a$. Taking suprema over $h$, we obtain $\dKRH (\delta_x, \nu) \leq a$. \qed \begin{cor} \label{cor:V1:functor} $\Val_1, \dKRH$ defines an endofunctor on the category of continuous (resp., algebraic) Yoneda-complete quasi-metric spaces with a root and $1$-Lipschitz continuous maps. \qed \end{cor} \begin{rem} \label{lemma:Vleq1:root} For every quasi-metric space $X, d$, $\Val_{\leq 1} X, \dKRH$ and $\Val X, \dKRH$ have a root, namely the zero valuation. That holds even when $X, d$ does not have a root. Indeed, $0 \leq \nu$ for every continuous valuation $\nu$, that is, $\dKRH (0, \nu) = 0$. \end{rem} \subsection{The Weak Topology} \label{sec:weak-topology} The weak topology on $\Val_{\leq 1} X$ and on $\Val_1 X$ is defined as a special case of what it is on general spaces of previsions, through subbasic open sets $[h > a] = \{\nu \in \Val_{\leq 1} X \mid \int_{x \in X} h (x) d\nu > a\}$, $h \in \Lform X$, $a \in \mathbb{R}_+$. When $X, d$ is continuous Yoneda-complete, because of Theorem~\ref{thm:V:complete}, the assumptions of Proposition~\ref{prop:weak:dScott:a} are satisfied, so: \begin{fact} \label{fact:V:weak} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. We have the following inclusions of topologies on $\Val_{\leq 1} X$, resp.\ $\Val_1 X$, where $0 < a \leq a'$: \begin{quote} weak $\subseteq$ $\dKRH^a$-Scott $\subseteq$ $\dKRH^{a'}$-Scott $\subseteq$ $\dKRH$-Scott. \end{quote} \end{fact} We will see that the $\dKRH$-Scott topology is in general strictly finer than the other topologies.
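For simple valuations, the subbasic opens above are easy to compute with: integration of $h$ against $\sum_{i=1}^n a_i \delta_{x_i}$ is just the finite weighted sum $\sum_{i=1}^n a_i h (x_i)$, a fact used repeatedly in the proofs below. A minimal Python sketch of membership in $[h > a]$, on hypothetical concrete data:

```python
# Simple valuations as lists of (weight, point) pairs; integrating h against
# sum_i a_i delta_{x_i} reduces to the finite weighted sum sum_i a_i h(x_i).

def integrate(h, nu):
    return sum(a_i * h(x_i) for a_i, x_i in nu)

def in_subbasic_open(h, a, nu):
    # membership in the weak subbasic open [h > a]
    return integrate(h, nu) > a

nu = [(0.5, 0.0), (0.5, 2.0)]      # nu = 1/2 delta_0 + 1/2 delta_2, normalized
h = lambda z: min(z, 3.0)           # a bounded lower semicontinuous map (assumed)
print(integrate(h, nu))             # 1.0
print(in_subbasic_open(h, 0.5, nu)) # True: nu lies in [h > 0.5]
```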
We will also see that the other topologies are all equal, assuming $X, d$ algebraic. \begin{lem} \label{lemma:alg:balls} Let $Y, \partial$ be a standard algebraic quasi-metric space, and $\mathcal B$ be a strong basis of $Y, \partial$. Fix also $\epsilon > 0$. Every $\partial$-Scott open subset of $Y$ is a union of open balls $B^\partial_{b, < r}$, where $b \in \mathcal B$ and $0 < r <\epsilon$. \end{lem} \proof Let $V$ be a $\partial$-Scott open subset of $Y$, and $y \in V$. Our task is to show that $y \in B^\partial_{b, <r} \subseteq V$ for some $b \in \mathcal B$ and $r > 0$ such that $r < \epsilon$. By definition of the $\partial$-Scott topology, $V$ is the intersection of the Scott-open subset $\widehat V$ of $\mathbf B (Y, \partial)$ with $Y$. Since $Y, \partial$ is standard, $V_\epsilon$ is also Scott-open by Lemma~\ref{lemma:Veps}, and $V$ is also the intersection of $\widehat V \cap V_\epsilon$ with $Y$. We use Lemma~\ref{lemma:B:basis} and the fact that $(y, 0)$ is in $\widehat V \cap V_\epsilon$ to conclude that there is a $b \in \mathcal B$ and an $r \in \mathbb{R}_+$ such that $(b, r) \ll (y, 0)$ and $(b, r)$ is in $\widehat V \cap V_\epsilon$. Since $(b, r) \ll (y, 0)$, $\partial (b, y) < r$, which implies in particular that $r > 0$, and also that $y$ is in $B^\partial_{b, <r}$. Since $(b, r)$ is in $V_\epsilon$, $r < \epsilon$. Finally, $B^\partial_{b, <r}$ is included in $V$: for every $z \in B^\partial_{b, <r}$, $\partial (b, z) < r$, so $(b, r) \ll (z, 0)$, and that shows that $(z, 0)$ is in $\uuarrow (b, r) \subseteq \widehat V$. \qed \begin{prop} \label{prop:V:weak=dScott:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space. For every $a \in \mathbb{R}_+$, $a > 0$, the $\dKRH^a$-Scott topology coincides with the weak topology on $\Val_{\leq 1} X$, and on $\Val_1 X$. \end{prop} \proof The $\dKRH^a$-Scott topology is finer by Fact~\ref{fact:V:weak}.
Conversely, let $\nu \in \Val_{\leq 1} X$ (resp., $\Val_1 X$) and $\mathcal U$ be a $\dKRH^a$-Scott open neighborhood of $\nu$. We wish to show that $\nu$ is in some weak open subset of $\mathcal U$. By Theorem~\ref{thm:V:alg} (resp., Theorem~\ref{thm:V1:alg}), $\Val_{\leq 1} X$ (resp., $\Val_1 X$) is algebraic, and we can then apply Lemma~\ref{lemma:alg:balls}, taking $\mathcal B$ equal to the subset of simple (subnormalized, resp.\ normalized) valuations $\sum_{i=1}^n a_i \delta_{x_i}$ where each $x_i$ is a center point. Therefore we can assume that $\mathcal U = B^{\dKRH^a}_{\sum_{i=1}^n a_i \delta_{x_i}, < \epsilon}$ for some such simple valuation $\sum_{i=1}^n a_i \delta_{x_i}$ and some $\epsilon > 0$. By assumption, $\dKRH^a (\sum_{i=1}^n a_i \delta_{x_i}, \nu) < \epsilon$. Let $\eta > 0$ be such that $\dKRH^a (\sum_{i=1}^n a_i \delta_{x_i}, \nu) < \epsilon - \eta$. Let $N$ be a natural number such that $a/N < \eta$, and consider the collection $\mathcal H$ of maps of the form $\bigvee_{i=1}^n \sea {x_i} {b_i}$ (see Proposition~\ref{prop:dKRH:max:d}) where each $b_i$ is an integer multiple of $a/N$ in $[0, a]$. Note that $\mathcal H$ is a finite family, and that for each $h \in \mathcal H$, $\sum_{i=1}^n a_i h (x_i) < +\infty$. Let $\mathcal V$ be the weak open $\bigcap_{h \in \mathcal H} [h > \sum_{i=1}^n a_i h (x_i) - \epsilon + \eta]$. (We extend the notation $[h > b]$ to the case where $b < 0$ in the obvious way, as the set of continuous valuations $\nu'$ such that $\int_{x \in X} h (x) d\nu' > b$; when $b < 0$, this is the whole set, hence is again open.) For every $h \in \mathcal H$, $h$ is in $\Lform_1^a (X, d)$: it is in $\Lform_1 (X, d)$ by Lemma~\ref{lemma:sea:cont}, and clearly bounded from above by $a$. Since $\dKRH^a (\sum_{i=1}^n a_i \delta_{x_i}, \nu) < \epsilon - \eta$, $\sum_{i=1}^n a_i h (x_i) < \int_{x \in X} h (x) d \nu + \epsilon - \eta$. Hence $\nu$ is in $\mathcal V$. 
Next, we show that $\mathcal V$ is included in $\mathcal U = B^{\dKRH^a}_{\sum_{i=1}^n a_i \delta_{x_i}, < \epsilon}$. Let $\nu'$ be an arbitrary element of $\mathcal V$. By Proposition~\ref{prop:dKRH:max:d}, there are numbers $b'_i \in [0, a]$, $1\leq i \leq n$, such that $\dKRH^a (\sum_{i=1}^n a_i \delta_{x_i}, \nu') = \dreal (\sum_{i=1}^n a_i h' (x_i), \int_{x \in X} h' (x) d\nu')$, where $h' = \bigvee_{i=1}^n \sea {x_i} {b'_i}$. For each $i$, let $b_i$ be the largest integer multiple of $a/N$ below $b'_i$, and let $h = \bigvee_{i=1}^n \sea {x_i} {b_i}$. Since $h$ is in $\mathcal H$, and $\nu' \in \mathcal V$, $\sum_{i=1}^n a_i h (x_i) < \int_{x \in X} h (x) d\nu' + \epsilon - \eta$. For each $i$, $b'_i \leq b_i + a/N \leq b_i + \eta$. It follows that, for every $x \in X$, $(\sea {x_i} {b'_i}) (x) = \max (b'_i - d (x_i, x), 0) \leq \max (b_i + \eta - d (x_i, x), 0) \leq \max (b_i - d (x_i, x), 0) + \eta = (\sea {x_i} {b_i}) (x) + \eta$. In turn, we obtain that for every $x \in X$, $h' (x) \leq h (x) + \eta$, so $\sum_{i=1}^n a_i h' (x_i) \leq \sum_{i=1}^n a_i h (x_i) + \eta$, using the fact that $\sum_{i=1}^n a_i \leq 1$. Therefore $\sum_{i=1}^n a_i h' (x_i) < \int_{x \in X} h (x) d\nu' + \epsilon$. Moreover, since $b_i \leq b'_i$ for each $i$, $h \leq h'$ pointwise, so $\int_{x \in X} h (x) d\nu' \leq \int_{x \in X} h' (x) d\nu'$, whence $\sum_{i=1}^n a_i h' (x_i) < \int_{x \in X} h' (x) d\nu' + \epsilon$. This implies that $\dKRH^a (\sum_{i=1}^n a_i \delta_{x_i}, \nu') < \epsilon$. Therefore $\nu'$ is in $\mathcal U$. \qed \begin{rem}[$\dKRH$ does not quasi-metrize the weak topology] \label{rem:Kravchenko} The similar result where $\dKRH^a$ is replaced by $\dKRH$ is wrong. In other words, there are algebraic Yoneda-complete quasi-metric spaces $X, d$ such that the $\dKRH$-Scott topology is strictly finer than the weak topology, on $\Val_{\leq 1} X$, and on $\Val_1 X$. We can even choose $X, d$ metric: see Remark~\ref{rem:not:dense} for a counterexample, due to Kravchenko, remembering that since $\dKRH$ is a metric in that case, the open ball topology coincides with the $\dKRH$-Scott topology.
\end{rem} We now use the following fact: \begin{fact} \label{fact:retract:two} If $A$ is a space with two topologies $\mathcal O_1$ and $\mathcal O_2$, and both embed into a topological space $B$ by the same topological embedding, then $\mathcal O_1 = \mathcal O_2$. \qed \end{fact} We shall apply this when $A$ is actually a retract, which is a stronger condition than just a topological embedding. We shall also use the following. \begin{fact} \label{fact:Pf:weak} Let $f \colon X \to Y$ be a continuous map between topological spaces. Then $\Prev f$ is continuous from $\Prev X$ to $\Prev Y$, both spaces being equipped with the weak topology. \end{fact} Indeed, $\Prev f^{-1} ([k > b]) = [k \circ f > b]$. \begin{thm}[$\dKRH^a$ quasi-metrizes the weak topology] \label{thm:V:weak=dScott} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. For every $a \in \mathbb{R}_+$, $a > 0$, the $\dKRH^a$-Scott topology coincides with the weak topology on $\Val_{\leq 1} X$, and on $\Val_1 X$. \end{thm} \proof We reduce to the case of algebraic Yoneda-complete quasi-metric spaces. We invoke \cite[Theorem~7.9]{JGL:formalballs} again: $X, d$ is the $1$-Lipschitz continuous retract of an algebraic Yoneda-complete quasi-metric space $Y, \partial$. Call $s \colon X \to Y$ the section and $r \colon Y \to X$ the retraction. Then $\Prev s$ and $\Prev r$ form a $1$-Lipschitz continuous section-retraction pair by Lemma~\ref{lemma:Pf:lip}, and in particular $\Prev s$ is an embedding of $\Val_{\leq 1} X$ into $\Val_{\leq 1} Y$ with their $\dKRH^a$-Scott topologies (similarly with $\Val_1$). However, $s$ and $r$ are also just continuous, by Proposition~\ref{prop:cont}, so $\Prev s$ and $\Prev r$ also form a section-retraction pair between the same spaces, this time with their weak topologies, by Fact~\ref{fact:Pf:weak}. By Proposition~\ref{prop:V:weak=dScott:alg}, the two topologies on $\Val_{\leq 1} Y$ (resp., $\Val_1 Y$) are the same. 
Fact~\ref{fact:retract:two} then implies that the two topologies on $\Val_{\leq 1} X$ (resp., $\Val_1 X$) are the same as well. \qed \subsection{Splitting Lemmas} This section will not be used in the following, and can be skipped if you have no specific interest in it. Jones showed that, on a dcpo $X$, given a finite subset $A$, $\sum_{x \in A} a_x \delta_x \leq \sum_{y \in A} b_y \delta_y$ if and only if there is a matrix ${(t_{xy})}_{x, y \in A}$ of non-negative reals such that: $(i)$ if $x \not\leq y$ then $t_{xy}=0$, $(ii)$ $\sum_{y \in A} t_{xy} = a_x$, and $(iii)$ $\sum_{x \in A} t_{xy} \leq b_y$. This is Jones' \emph{splitting lemma} \cite[Theorem~4.10]{Jones:proba}. Additionally, if the two valuations are normalized, the inequality $(iii)$ can be taken to be an equality. The splitting lemma for simple normalized valuations can be cast in an equivalent way as: $\mu = \sum_{x \in A} a_x \delta_x$ is less than or equal to $\nu = \sum_{y \in A} b_y \delta_y$ if and only if there is a simple normalized valuation $\tau = \sum_{x, y \in A} t_{xy} \delta_{(x, y)}$ on $X^2$, supported on the graph of $\leq$ $(i)$, whose first marginal $\pi_1 [\tau] = \Val (\pi_1) (\tau) = \sum_{x \in A} (\sum_{y \in A} t_{xy}) \delta_x$ is equal to $\mu$ $(ii)$ and whose second marginal $\pi_2 [\tau] = \Val (\pi_2) (\tau) = \sum_{y \in A} (\sum_{x \in A} t_{xy}) \delta_y$ is equal to $\nu$ $(iii)$. This theorem can be seen as a special case of results of Edwards \cite{Edwards:marginals}, which say that, in certain classes of ordered topological spaces, $\mu \leq \nu$ for probability measures $\mu$, $\nu$ if and only if there is a probability measure $\tau$ on the product space, supported on the graph of $\leq$, with first marginal equal to $\mu$ and with second marginal equal to $\nu$. \begin{lem} \label{lemma:to:<=} Let $X, d$ be a quasi-metric space.
For two continuous valuations $\mu$ and $\nu$ on $X$, write $\xymatrix{\mu \ar[r]^w & \nu}$ if and only if there is a center point $x \in X$, a point $y \in X$, and a real number $t \in \mathbb{R}_+$ such that $\mu \geq t \delta_x$, $\nu = \mu + t \delta_y - t \delta_x$, and $w = t d (x, y)$. (We agree that $0 . (+\infty) = 0$ in case $t=0$ and $d (x, y)=+\infty$.) If $\xymatrix{\mu \ar[r]^w & \nu}$, then $\dKRH (\mu, \nu) = w$. \end{lem} In more elementary terms: going from $\mu$ to $\nu$ by moving an amount $t$ of mass from $x$ to $y$ makes you travel distance $t d (x, y)$. \proof For every bounded $h \in \Lform_1 (X, d)$, $\int_{z \in X} h (z) d(\nu + t \delta_x) = \int_{z \in X} h (z) d (\mu + t \delta_y)$, which implies that $\int_{z \in X} h (z) d\nu + t h (x) = \int_{z \in X} h (z) d\mu + t h (y)$, hence $\dreal (\int_{z \in X} h (z) d\mu, \int_{z \in X} h (z) d\nu) = \max (t (h (x) - h (y)), 0)$. (The latter makes sense since $h$ is bounded.) Since $h$ is $1$-Lipschitz, $\dreal (\int_{z \in X} h (z) d\mu, \int_{z \in X} h (z) d\nu) \leq t d (x, y)$. Taking suprema over all bounded maps $h \in \Lform_1 (X, d)$, and using Lemma~\ref{lemma:KRH:bounded}, $\dKRH (\mu, \nu) \leq t d (x, y) = w$. Conversely, if $d (x, y) < +\infty$, then let $b = d (x, y)$ and consider $h = \sea x b$. Then $h (x) = d (x, y)$ and $h (y) = 0$. The function $h$ is in $\Lform_1 (X, d)$, by Lemma~\ref{lemma:sea:cont}, and is bounded by $b$. The equality $\int_{z \in X} h (z) d\nu + t h (x) = \int_{z \in X} h (z) d\mu + t h (y)$ then rewrites as $\dreal (\int_{z \in X} h (z) d\mu, \int_{z \in X} h (z) d\nu) = \max (t (h (x) - h (y)), 0) = t d (x, y)$. So $\dKRH (\mu, \nu) \geq t d (x, y) = w$, whence $\dKRH (\mu, \nu) = w$. If $d (x, y) = +\infty$, then let $b \in \mathbb{R}_+$ be arbitrary and consider again $h = \sea x b$. Again, $h$ is in $\Lform_1^b (X, d)$, but now $h (x) = b$, $h (y) = 0$.
We obtain $\dreal (\int_{z \in X} h (z) d\mu, \int_{z \in X} h (z) d\nu) = \max (t (h (x) - h (y)), 0) = tb$, so $\dKRH (\mu, \nu) \geq tb$ for every $b \in \mathbb{R}_+$. If $t > 0$, it follows that $\dKRH (\mu, \nu) = +\infty = w$. Otherwise, we know already that $\dKRH (\mu, \nu) \leq w = 0$, so $\dKRH (\mu, \nu) = 0 = w$. \qed \begin{rem} \label{rem:to:<=} Let $a \in \mathbb{R}_+$, $a > 0$. A similar argument shows that under the same assumptions that $x$ is a center point, $\mu \geq t \delta_x$, and $\nu = \mu + t \delta_y - t \delta_x$, then $\dKRH^a (\mu, \nu) = t \min (a, d (x,y))$. Showing $\dKRH^a (\mu, \nu) \leq t \min (a, d (x, y))$ means showing that $\dKRH^a (\mu, \nu) \leq t d (x, y)$, which is done as above, and $\dKRH^a (\mu, \nu) \leq t a$, which follows from the fact that $h (x) - h (y) \leq a$ for every $h \in \Lform_1^a (X, d)$. To show the reverse inequality, we take $h = \sea x a \in \Lform_1^a (X, d)$. Then $h (x) = a$, $h (y) = \max (a - d (x, y), 0) = a - \min (a, d (x, y))$, so that $\dKRH^a (\mu, \nu) \geq \max (t (h (x) - h (y)), 0) = t \min (a, d (x, y))$. \end{rem} For two simple valuations $\mu = \sum_{x \in A} a_x \delta_x$, $\nu = \sum_{y \in B} b_y \delta_y$, say that $T = {(t_{xy})}_{x \in A, y \in B}$ is a \emph{transition matrix} from $\mu$ to $\nu$ if and only if: \begin{enumerate} \item for every $y \in B$, $\sum_{x \in A} t_{xy} = b_y$; \item for every $x \in A$, $\sum_{y \in B} t_{xy} = a_x$. \end{enumerate} The \emph{weight} of $T$ is $\sum_{x \in A, y \in B} t_{xy} d (x, y)$. \begin{lem} \label{lem:txy:<=} Let $X, d$ be a quasi-metric space, and $\mu = \sum_{x \in A} a_x \delta_x$, $\nu = \sum_{y \in B} b_y \delta_y$ be two simple valuations on $X$. The weight of any transition matrix from $\mu$ to $\nu$ is larger than or equal to $\dKRH (\mu, \nu)$. \end{lem} \proof Let $h \in \Lform_1 (X, d)$.
Then: \begin{eqnarray*} \int_{x \in X} h (x) d\mu & = & \sum_{x \in A} a_x h (x) \\ & = & \sum_{x \in A, y \in B} t_{xy} h (x) \\ & \leq & \sum_{x \in A, y \in B} t_{xy} (d (x, y) + h (y)) \quad\text{since $h$ is $1$-Lipschitz} \\ & = & \sum_{x \in A, y \in B} t_{xy} d (x, y) + \sum_{x \in A, y \in B} t_{xy} h (y) \\ & = & \sum_{x \in A, y \in B} t_{xy} d (x, y) + \sum_{y \in B} b_y h (y) \\ & = & \sum_{x \in A, y \in B} t_{xy} d (x, y) + \int_{y \in X} h (y) d\nu, \end{eqnarray*} from which the result follows. \qed Refine the notation $\xymatrix{\mu \ar[r]^w & \nu}$, and define $\xymatrix{\mu \ar[r]^w_{A, B} & \nu}$ if and only if we can pick $x$ from $A$ and $y$ from $B$; explicitly, if and only if there is a center point $x \in A$, a point $y \in B$, and a real number $t \in \mathbb{R}_+$ such that $\mu \geq t \delta_x$, $\nu = \mu + t \delta_y - t \delta_x$, and $w = t d (x, y)$. \begin{lem} \label{lemma:V:<=:simple} Let $X, d$ be a standard quasi-metric space, $A$ be a non-empty finite set of center points of $X, d$, and $B$ be a non-empty finite set of points of $X$. 
For all simple normalized valuations $\mu = \sum_{x \in A} a_x \delta_x$ and $\nu = \sum_{y \in B} b_y \delta_y$, and $r \in \mathbb{R}_+$, the following are equivalent: \begin{enumerate} \item $\dKRH (\mu, \nu) \leq r$; \item for every map $f \colon A \cup B \to \mathbb{R}_+$ such that $f (x) \leq f (y) + d (x, y)$ for all $x, y \in A \cup B$, $\sum_{x \in A} a_x f (x) \leq \sum_{y \in B} b_y f (y) + r$; \item for all maps $f_1 \colon A \to \mathbb{R}_+$ and $f_2 \colon B \to \mathbb{R}_+$ such that $f_1 (x) \leq f_2 (y) + d (x, y)$ for all $x \in A$ and $y \in B$, $\sum_{x \in A} a_x f_1 (x) \leq \sum_{y \in B} b_y f_2 (y) + r$; \item there is a transition matrix ${(t_{xy})}_{x \in A, y \in B}$ from $\mu$ to $\nu$ of weight $\sum_{x \in A, y \in B} t_{xy} d (x, y) \leq r$; \item there is a finite path $\mu = \xymatrix{\mu_0 \ar[r]^{w_1}_{A, B} & \mu_1 \ar[r]^{w_2}_{A, B} & \cdots \ar[r]^{w_n}_{A, B} & \mu_n} = \nu$, with $\sum_{i=1}^n w_i \leq r$. \end{enumerate} \end{lem} \proof $1 \limp 2$. Let $f \colon A \cup B \to \mathbb{R}_+$ be such that $f (x) \leq f (y) + d (x, y)$ for all $x, y \in A \cup B$ (i.e., $f$ is $1$-Lipschitz). Build $h = \bigvee_{x \in A} \sea x {f (x)} \colon X \to \overline{\mathbb{R}}_+$. By Lemma~\ref{lemma:sea:min}, $h$ is the smallest $1$-Lipschitz map such that $h (x) \geq f (x)$ for every $x \in A$, and is in $\Lform_1 (X, d)$. By $1$, $\int_{x \in X} h (x) d\mu \leq \int_{y \in X} h (y) d\nu + r$, namely $\sum_{x \in A} a_x h (x) \leq \sum_{y \in B} b_y h (y) + r$. We claim that $h (z) \leq f (z)$ for every $z \in A \cup B$. That reduces to showing that for every $x \in A$, $(\sea x {f (x)}) (z) \leq f (z)$, namely that $\max (f (x) - d (x, z), 0) \leq f (z)$. Since $f$ is $1$-Lipschitz, $f (x) - d (x, z) \leq f (z)$, which concludes the proof of the claim. For every $x \in A$, we have just seen that $h (x) \leq f (x)$. Recalling that we also have $h (x) \geq f (x)$, it follows that $h (x) = f (x)$.
Therefore $\sum_{x \in A} a_x f (x) \leq \sum_{y \in B} b_y h (y) + r$. Since $h (y) \leq f (y)$ for every $y \in B$, we conclude that $\sum_{x \in A} a_x f (x) \leq \sum_{y \in B} b_y f (y) + r$. $2 \limp 3$. Assume that $f_1 (x) \leq f_2 (y) + d (x, y)$ for all $x \in A$ and $y \in B$, and let $f (z) = \inf_{y \in B} (f_2 (y) + d (z, y))$ for every $z \in A \cup B$. This formula defines an element of $\mathbb{R}_+$ since $B$ is non-empty. By assumption, $f_1 (x) \leq f (x)$ for every $x \in A$. Since we can take $y=z$ in the formula defining $f$, $f (y) \leq f_2 (y)$ for every $y \in B$. Finally, since for all $z, z' \in A \cup B$, $d (z, y) \leq d (z, z') + d (z', y)$, $f (z) \leq d (z, z') + f (z')$, so $f$ is $1$-Lipschitz on $A \cup B$. By $2$, $\sum_{x \in A} a_x f (x) \leq \sum_{y \in B} b_y f (y) + r$. Since $f_1 (x) \leq f (x)$ for every $x \in A$ and $f (y) \leq f_2 (y)$ for every $y \in B$, $\sum_{x \in A} a_x f_1 (x) \leq \sum_{y \in B} b_y f_2 (y) + r$. $3 \limp 4$. We use linear programming duality. (Using the max-flow min-cut theorem would suffice, too.) Recall that a (so-called primal) linear programming problem consists in maximizing $Z = \sum_{j=1}^n c_j X_j$, where every $X_j$ varies, subject to constraints of the form $\sum_{j=1}^n \alpha_{ij} X_j \leq \beta_i$ ($1\leq i \leq m$) and $X_j \geq 0$ ($1\leq j\leq n$). The dual problem consists in minimizing $W = \sum_{i=1}^m \beta_i Y_i$ subject to $\sum_{i=1}^m \alpha_{ij} Y_i \geq c_j$ ($1\leq j\leq n$) and $Y_i \geq 0$ ($1\leq i\leq m$). The two problems are dual, meaning that $Z$ is always less than or equal to $W$, and that if the maximum of the $Z$ values is different from $+\infty$, then it is equal to the minimum of the $W$ values. In particular, if $Z \leq r$ for all values of the variables $X_j$ satisfying the constraints, then $W \leq r$ for some tuple of values of the variables $Y_i$ satisfying the constraints.
In our case, $j$ ranges over the disjoint sum $A+B$ (up to a bijection), and $i$ enumerates the pairs $(x, y) \in A \times B$ such that $d (x, y) < +\infty$. For $j=x\in A$, $X_j=f_1(x)$ and $c_j=a_x$; for $j=y\in B$, $X_j=f_2(y)$ and $c_j=-b_y$. For $i=(x,y) \in A \times B$ such that $d (x,y) < +\infty$, the associated constraint $\sum_{j=1}^n \alpha_{ij} X_j \leq \beta_i$ reads $f_1 (x) \leq f_2 (y) + d (x, y)$; that is, $\beta_i = d (x, y)$, and $\alpha_{ij}=1$ if $j=x\in A$, $\alpha_{ij} = -1$ if $j=y\in B$, and $\alpha_{ij}=0$ otherwise. The dual variables $Y_i$ can then be given values which we write $t_{xy}$, where $i = (x, y)$ with $d (x, y) < +\infty$, in such a way that $W$, that is, $\sum_{(x, y) \in A \times B, d (x, y) < +\infty} t_{xy} d (x, y)$, is less than or equal to $r$, and the dual constraints are satisfied. Those are: for every $x \in A$, $\sum_{y \in B, d (x, y) < +\infty} t_{xy} \geq a_x$; for every $y \in B$, $\sum_{x \in A, d (x, y) < +\infty} -t_{xy} \geq -b_y$; and $t_{xy} \geq 0$ for all $(x, y)$ such that $d (x, y) < +\infty$. To simplify notations, define $t_{xy}$ as $0$ when $d (x, y) = +\infty$. Then $\sum_{x \in A, y \in B} t_{xy} d (x, y) \leq r$, which establishes the weight bound required in $(4)$, and: $(i')$ $\sum_{y \in B} t_{xy} \geq a_x$ for every $x \in A$, $(ii')$ $\sum_{x \in A} t_{xy} \leq b_y$ for every $y \in B$. For every $x \in A$, let $\epsilon_x = \sum_{y \in B} t_{xy} - a_x \geq 0$. We have $\sum_{x \in A} \epsilon_x = \sum_{x \in A, y \in B} t_{xy} - \sum_{x \in A} a_x = \sum_{x \in A, y \in B} t_{xy} - 1$ (since $\mu$ is normalized) $= \sum_{y \in B} \sum_{x \in A} t_{xy} - 1 \leq \sum_{y \in B} b_y - 1 \leq 0$, using $(ii')$. Since the numbers $\epsilon_x$ are non-negative and sum to at most $0$, they are all equal to $0$, showing that $\sum_{y \in B} t_{xy} = a_x$ for every $x \in A$.
To show that $\sum_{x \in A} t_{xy} = b_y$ for every $y \in B$, we reason symmetrically: letting $\eta_y = b_y - \sum_{x \in A} t_{xy} \geq 0$, we realize that $\sum_{y \in B} \eta_y = 1 - \sum_{x \in A, y \in B} t_{xy} \leq 1 - \sum_{x \in A} a_x = 0$, using $(i')$; since the numbers $\eta_y$ are non-negative and sum to at most $0$, every $\eta_y$ is equal to $0$. $4 \limp 5$. Enumerate the pairs $(x, y)$ such that $x \in A$ and $y \in B$ in any order. It is easy to see that one can go from $\mu$ to $\nu$ by a series of transitions $\xymatrix{\ar[r]^w_{A, B} &}$, one for each such pair $(x, y)$, where we transfer some mass $t_{xy}$ from $x$ to $y$. In that case, $w = t_{xy} d (x, y)$. Each of these numbers $w$ is different from $+\infty$, since $\sum_{x \in A, y \in B} t_{xy} d (x, y) \leq r$. Moreover the sum of these numbers $w$ is less than or equal to $r$. $5 \limp 1$. This follows from Lemma~\ref{lemma:to:<=} and the triangular inequality. \qed \begin{cor} \label{corl:V:<=:simple} Let $X, d$ be a standard quasi-metric space, $A$ be a non-empty finite set of center points of $X, d$, and $B$ be a non-empty finite set of points of $X$. For all simple normalized valuations $\mu = \sum_{x \in A} a_x \delta_x$ and $\nu = \sum_{y \in B} b_y \delta_y$, if $\dKRH (\mu, \nu) < +\infty$, then: \begin{enumerate} \item there is a transition matrix ${(t_{xy})}_{x \in A, y \in B}$ from $\mu$ to $\nu$ of weight $\sum_{x \in A, y \in B} t_{xy} d (x, y)$ equal to $\dKRH (\mu, \nu)$; \item there is a finite path $\mu = \xymatrix{\mu_0 \ar[r]^{w_1}_{A, B} & \mu_1 \ar[r]^{w_2}_{A, B} & \cdots \ar[r]^{w_n}_{A, B} & \mu_n} = \nu$, with $\dKRH (\mu, \nu) = \sum_{i=1}^n w_i$. \end{enumerate} \end{cor} \proof Take $r = \dKRH (\mu, \nu)$ in Lemma~\ref{lemma:V:<=:simple}~(4): then $\sum_{x \in A, y \in B} t_{xy} d (x, y) \leq \dKRH (\mu, \nu)$. The reverse inequality is by Lemma~\ref{lem:txy:<=}.
Now use Lemma~\ref{lemma:V:<=:simple}~(5) instead: there is a finite path $\mu = \xymatrix{\mu_0 \ar[r]^{w_1}_{A, B} & \mu_1 \ar[r]^{w_2}_{A, B} & \cdots \ar[r]^{w_n}_{A, B} & \mu_n} = \nu$, with $\sum_{i=1}^n w_i \leq \dKRH (\mu, \nu)$. By Lemma~\ref{lemma:to:<=} and the triangular inequality, $\dKRH (\mu, \nu) \leq \sum_{i=1}^n w_i$, from which we obtain the equality. \qed We have a similar result for the bounded KRH metric $\dKRH^a$. The resulting formula is slightly less elegant. \begin{lem} \label{lemma:V:<=:simple:a} Let $X, d$ be a standard quasi-metric space, $a > 0$, $A$ be a non-empty finite set of center points of $X, d$, and $B$ be a non-empty finite set of points of $X$. For all simple normalized valuations $\mu = \sum_{x \in A} a_x \delta_x$ and $\nu = \sum_{y \in B} b_y \delta_y$, and $r \in \mathbb{R}_+$, the following are equivalent: \begin{enumerate} \item $\dKRH^a (\mu, \nu) \leq r$; \item for every map $f \colon A \cup B \to [0, a]$ such that $f (x) \leq f (y) + d (x, y)$ for all $x, y \in A \cup B$, $\sum_{x \in A} a_x f (x) \leq \sum_{y \in B} b_y f (y) + r$; \item for all maps $f_1 \colon A \to [0, a]$ and $f_2 \colon B \to [0, a]$ such that $f_1 (x) \leq f_2 (y) + d (x, y)$ for all $x \in A$ and $y \in B$, $\sum_{x \in A} a_x f_1 (x) \leq \sum_{y \in B} b_y f_2 (y) + r$; \item there is a matrix ${(t_{xy})}_{x \in A, y \in B}$, and two vectors ${(u_x)}_{x \in A}$ and ${(v_y)}_{y \in B}$ with non-negative entries such that: \begin{enumerate} \item $\sum_{x \in A, y \in B} t_{xy} d (x, y) + a \sum_{x \in A} u_x + a \sum_{y \in B} v_y \leq r$, \item for every $x \in A$, $\sum_{y \in B} t_{xy} + u_x \geq a_x$, \item for every $y \in B$, $\sum_{x \in A} t_{xy} - v_y \leq b_y$. \end{enumerate} \end{enumerate} \end{lem} \proof $1 \limp 2$. Given $f$ as in (2), we build $h = \bigvee_{x \in A} \sea x {f (x)} \colon X \to \overline{\mathbb{R}}_+$. We then reason as in Lemma~\ref{lemma:V:<=:simple}, additionally noting that $h$ takes its values in $[0, a]$.
Indeed, for every $x \in A$, for every $z$, $(\sea x {f (x)}) (z) = \max (f (x) - d (x, z), 0) \leq f (x) \leq a$. $2 \limp 3$. We reason again as in Lemma~\ref{lemma:V:<=:simple}, but we define $f (z)$ instead as $\min (a, \inf_{y \in B} (f_2 (y) + d (z, y)))$, so that $f$ takes its values in $[0, a]$. For every $x \in A$, we obtain that $f_1 (x) \leq \inf_{y \in B} (f_2 (y) + d (x, y))$ as before, and $f_1 (x) \leq a$ by assumption, so $f_1 (x) \leq f (x)$. We have $f (y) \leq f_2 (y)$ for every $y \in B$ by taking $z=y$ in the formula defining $f$. The rest is as before. $3 \limp 4$. We again use linear programming duality. We now have the extra constraints $f_1 (x) \leq a$ for each $x \in A$ and $f_2 (y) \leq a$ for each $y \in B$. For that, we must create new variables $u_x$, $x \in A$, and $v_y$, $y \in B$, and the dual problem now reads: minimize $W = \sum_{x \in A, y \in B} t_{xy} d (x, y) + \sum_{x \in A} a u_x + \sum_{y \in B} a v_y$ under the constraints $\sum_{y \in B} t_{xy} + u_x \geq a_x$, $x \in A$, and $\sum_{x \in A} -t_{xy} + v_y \geq -b_y$. $4 \limp 1$. Let $h \in \Lform^a_1 (X, d)$. Then: \begin{eqnarray*} \int_{x \in X} h (x) d\mu & = & \sum_{x \in A} a_x h (x) \\ & \leq & \sum_{x \in A} u_x h (x) + \sum_{x \in A, y \in B} t_{xy} h (x) \qquad \text{by (b)} \\ & \leq & \sum_{x \in A} u_x h (x) + \sum_{x \in A, y \in B} t_{xy} h (y) + \sum_{x \in A, y \in B} t_{xy} d (x, y) \qquad \text{since $h$ is $1$-Lipschitz} \\ & \leq & \sum_{x \in A} u_x h (x) + \sum_{y \in B} v_y h (y) + \sum_{y \in B} b_y h (y) + \sum_{x \in A, y \in B} t_{xy} d (x, y) \qquad \text{by (c)} \\ & \leq & a \sum_{x \in A} u_x + a \sum_{y \in B} v_y + \sum_{y \in B} b_y h (y) + \sum_{x \in A, y \in B} t_{xy} d (x, y) \\ && \qquad \text{since $h$ is bounded by $a$ from above} \\ & \leq & r + \sum_{y \in B} b_y h (y) \qquad \text{by (a)} \\ & = & r + \int_{x \in X} h (x) d\nu.
\end{eqnarray*} \qed Those splitting lemmas can be seen as variants of the Kantorovich-Rubinshte\u\i n Theorem \cite{Kantorovich:1942}. The latter says that, on a complete separable metric space $X, d$ (i.e., a Polish metric space), for any two probability measures $\mu$, $\nu$, $\dKRH (\mu, \nu) = \min_\tau \int_{(x, y) \in X^2} d (x, y) d\tau$, where $\tau$ ranges over the probability measures on $X^2$ whose first marginal is equal to $\mu$ and whose second marginal is equal to $\nu$. See \cite{Fernique:KR} for an elementary proof. This is what Corollary~\ref{corl:V:<=:simple} says, in the more general asymmetric setting where we have a quasi-metric, not a metric, but applied to the less general situation where $\mu$ and $\nu$ are \emph{simple} normalized valuations and $\mu$ is supported on a set of center points. Translating that corollary, we obtain that $\dKRH (\mu, \nu)$ is equal to the infimum of all the integrals $\int_{(x, y) \in X^2} d (x, y) d \tau$, where $\tau$ ranges over certain simple normalized valuations with first marginal $\mu$ and second marginal $\nu$. Moreover, the infimum is attained in case $\dKRH (\mu, \nu) < +\infty$. \subsection{Linear Duality} \label{sec:minimal-transport} We wish to show a general linear duality theorem, in the style of Kantorovich and Rubinshte\u\i n. That would say that, for any $\mu, \nu \in \Val_1 X$, $\dKRH (\mu, \nu) = \min_\tau \int_{(x, y) \in X^2} d (x, y) d\tau$, where $\tau \in \Val_1 (X^2)$ satisfies $\pi_1 [\tau] = \mu$ and $\pi_2 [\tau] = \nu$. That runs into a silly problem: the map $(x, y) \mapsto d (x, y)$ is not lower semicontinuous, unless $d$ is a metric. Indeed, the map $y \mapsto d (x, y)$ is even antitonic (with respect to $\leq^d$), whereas lower semicontinuous maps are monotonic.
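In the simple-valuation case, the minimal transport weight of Lemma~\ref{lemma:V:<=:simple}~(4) can be computed concretely. The following Python sketch (an illustration only; the function and variable names are ours, not part of the formal development) solves the $2 \times 2$ transportation problem exactly: a $2 \times 2$ transition matrix is determined by its entry $t_{11}$, the weight is affine in $t_{11}$, and the minimum is therefore attained at an endpoint of the feasible interval.

```python
def min_transport_2x2(a, b, d):
    """Minimal weight of a 2x2 transition matrix: rows sum to a = (a1, a2),
    columns sum to b = (b1, b2), all entries non-negative.  The matrix is
    determined by t11, the weight is affine in t11, so the minimum is
    attained at an endpoint of the feasible interval for t11."""
    a1, a2 = a
    b1, b2 = b
    lo = max(0.0, a1 - b2)   # t22 = b2 - (a1 - t11) must be >= 0
    hi = min(a1, b1)         # t12 = a1 - t11 and t21 = b1 - t11 must be >= 0
    def weight(t11):
        t12, t21, t22 = a1 - t11, b1 - t11, b2 - (a1 - t11)
        return (t11 * d[0][0] + t12 * d[0][1]
                + t21 * d[1][0] + t22 * d[1][1])
    return min(weight(lo), weight(hi))

# Two points at distance 1: the optimal plan moves the excess 0.3 of mass
# from point 0 to point 1, for a total cost of 0.3.
cost = min_transport_2x2((0.7, 0.3), (0.4, 0.6), [[0.0, 1.0], [1.0, 0.0]])
```

On this instance, the optimal plan keeps $0.4$ of mass at point $0$ and $0.3$ at point $1$, and moves the remaining $0.3$ across; by Corollary~\ref{corl:V:<=:simple}, this minimal weight is $\dKRH (\mu, \nu)$ for the corresponding simple normalized valuations (in a metric space, every point is a center point).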
One may think of taking $\tau$ to be a probability measure, and that would probably solve part of the issue, but one would need to show that $(x, y) \mapsto d (x, y)$ is a measurable map, where $X^2$ comes with the Borel $\sigma$-algebra over the $d^2$-Scott topology, where $d^2$ is introduced below. Instead, we observe that each probability distribution $\mu \in \Val_1 (X)$ gives rise to a linear prevision $F_\mu \colon h \mapsto \int_{x \in X} h (x) d\mu$, then to a linear, normalized monotonic map $\extF {F_\mu} \colon L_\infty (X, d) \to \overline{\mathbb{R}}_+$ whose restriction to $\Lform_\infty (X, d)$ is lower semicontinuous, by Lemma~\ref{lemma:extF:prev}, assuming that $X, d$ is continuous Yoneda-complete. Recall that $L_\infty (X, d)$ is the space of Lipschitz, not necessarily continuous, maps from $X, d$ to $\overline{\mathbb{R}}_+$. We write $L_\infty^\bnd (X, d)$ for the subspace of bounded Lipschitz maps. Since $F_\mu$ is normalized, it restricts to a map from $L_\infty^\bnd (X, d)$ to $\mathbb{R}_+$, that is, not taking the value $+\infty$. \begin{defi}[$L^\pm (X, d)$] \label{defn:L+-} Let $X, d$ be a quasi-metric space. The maps of the form $f-g$, where $f, g \in L_\infty^\bnd (X, d)$, form a real vector space $L^\pm (X, d)$. \end{defi} \begin{rem} \label{rem:L+-:op} $L^\pm (X, d)$ not only contains $L_\infty^\bnd (X, d)$, but also $L_\infty^\bnd (X, d^{op})$. Indeed, for every bounded $\alpha$-Lipschitz map $g \colon X, d^{op} \to \mathbb{R}_+$, say bounded from above by $c \in \mathbb{R}_+$, the map $c.\mathbf 1 - g$ is a bounded $\alpha$-Lipschitz map from $X, d$ to $\mathbb{R}_+$, and therefore $g = c.\mathbf 1 - (c.\mathbf 1 - g)$ is in $L^\pm (X, d)$. \end{rem} We can now extend $\extF {F_\mu}$ to integrate functions taking real values, not just non-negative values. This is ${\extF F}^\pm_\mu$, defined below. \begin{lem} \label{lemma:L+-:extF} Let $X, d$ be a continuous quasi-metric space.
For every monotonic linear map $F \colon L_\infty^\bnd (X, d) \to \mathbb{R}$, there is a unique monotonic linear map $F^\pm \colon L^\pm (X, d) \to \mathbb{R}$ such that $F^\pm (h) = F (h)$ for every $h \in L_\infty^\bnd (X, d)$. \end{lem} That $F^\pm$ is linear should be taken in the usual vector-space-theoretic sense: it preserves sums and products by arbitrary real numbers. That $F$ is linear should be taken in the usual sense of this paper: it preserves sums and products by non-negative real numbers. \proof Necessarily $F^\pm (f-g) = F (f) - F (g)$ for all $f, g \in L_\infty^\bnd (X, d)$, which shows uniqueness. To show existence, we first note that for all $f, g \in L_\infty^\bnd (X, d)$, $F (f)$ and $F (g)$ are non-negative real numbers, and are certainly different from $+\infty$. Therefore the difference $F (f) - F (g)$ makes sense, and is a real number. If $f-g \leq f'-g'$, where $f, f', g, g' \in L_\infty^\bnd (X, d)$, we claim that $F (f) - F (g) \leq F (f') - F (g')$. Indeed, this is equivalent to $F (f) + F (g') \leq F (g) + F (f')$, which follows from $f+g' \leq g+f'$ and the fact that $F$ is monotonic and linear. From that claim we obtain that $f-g=f'-g'$ implies $F (f) - F (g) = F (f') - F (g')$, whence defining $F^\pm (f-g)$ as $F (f) - F (g)$ for all $f, g \in L_\infty^\bnd (X, d)$ is non-ambiguous. It also follows that $F^\pm$ is monotonic. $F^\pm$ is then clearly linear. In particular, for $a \in \mathbb{R}_+$, $F^\pm (a (f-g)) = F (af) - F (ag) = a F (f) - a F (g) = a F^\pm (f-g)$, and for $a \in \mathbb{R}$ negative, $F^\pm (a (f-g)) = F^\pm ((-a) (g-f)) = F ((-a) g) - F ((-a) f) = (-a) F (g) - (-a) F (f) = a F (f) - a F (g) = a F^\pm (f-g)$. \qed Let $d^{op}$ be the opposite quasi-metric of $d$, defined by $d^{op} (x, y) = d (y, x)$.
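For a simple valuation $\mu = \sum_{x \in A} a_x \delta_x$, the prevision $F_\mu$ and the extension $F^\pm$ of Lemma~\ref{lemma:L+-:extF} can be realized concretely. The following Python sketch is an illustration only (the names are ours): $F^\pm (f - g) = F (f) - F (g)$, and the value does not depend on the chosen decomposition of $f - g$.

```python
def F(mu, h):
    """Linear prevision of a simple valuation mu = {x: a_x}:
    F_mu(h) = sum over x of a_x * h(x), for h >= 0."""
    return sum(a * h(x) for x, a in mu.items())

def F_pm(mu, f, g):
    """The unique monotonic linear extension of F_mu to differences
    f - g of bounded non-negative Lipschitz maps: F(f) - F(g)."""
    return F(mu, f) - F(mu, g)

mu = {'p': 0.25, 'q': 0.75}
f = lambda x: 1.0 if x == 'p' else 3.0
g = lambda x: 2.0
# f1 - g1 = f - g, so F_pm must agree on both decompositions:
# this is the non-ambiguity part of the lemma.
f1 = lambda x: f(x) + 5.0
g1 = lambda x: g(x) + 5.0
```

Here $F (\mu, f) = 0.25 \cdot 1 + 0.75 \cdot 3 = 2.5$, so $F^\pm (f - g) = 0.5$, and the shifted decomposition gives the same value.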
We equip $X \times X$ with the quasi-metric $d^2$ defined by $d^2 ((x, y), (x', y')) = d (x, x') + d (y', y) = d (x, x') + d^{op} (y, y')$---note the reversal of the $y$ components. \begin{lem} \label{lemma:d2:lip} Let $X, d$ be a quasi-metric space. The distance map $d \colon X \times X, d^2 \to \overline{\mathbb{R}}_+$ is $1$-Lipschitz. \end{lem} \proof It suffices to show that $d (x, y) \leq d (x', y') + d^2 ((x, y), (x', y'))$, which easily follows from the triangular inequality. \qed \begin{defi}[Coupling] \label{defn:coupling} Let $X, d$ be a continuous quasi-metric space, and $F$, $G$ be two linear, monotonic maps from $L_\infty^\bnd (X, d)$ to $\mathbb{R}$. A \emph{coupling} between $F$ and $G$ is a linear, monotonic map $\Gamma \colon L_\infty^\bnd (X \times X, d^2) \to \mathbb{R}$ such that $\Gamma (\mathbf 1) = 1$, $\Gamma (f \circ \pi_1) = F (f)$ for every $f \in L_\infty^\bnd (X, d)$, and $\Gamma (g \circ \pi_2) = G^\pm (g)$ for every $g \in L_\infty^\bnd (X, d^{op})$. By extension, we say that $\Gamma$ is a coupling between normalized continuous valuations $\mu$ and $\nu$ if and only if $\Gamma$ is a coupling between $\extF {F_\mu}$ and $\extF {F_\nu}$. \end{defi} Note that $G^\pm (g)$ makes sense by Remark~\ref{rem:L+-:op}. It is easy to see that $\pi_1 \colon X \times X, d^2 \to X, d$ and $\pi_2 \colon X \times X, d^2 \to X, d^{op}$ are $1$-Lipschitz, so that $f \circ \pi_1$ and $g \circ \pi_2$ are Lipschitz from $X \times X, d^2$ to $\mathbb{R}$, hence $\Gamma (f \circ \pi_1)$ and $\Gamma (g \circ \pi_2)$ make sense. Here is a trivial example of coupling, which is an analogue of the product measure between $F$ and $G$. \begin{lem} \label{lemma:coupling:x} Let $X, d$ be a continuous quasi-metric space, and $F$, $G$ be two linear, monotonic maps from $L_\infty^\bnd (X, d)$ to $\mathbb{R}$ that map $\mathbf 1$ to $1$.
The map $F \ltimes G$ defined by $(F \ltimes G) (h) = F (x \mapsto G^\pm (y \mapsto h (x, y)))$ for every $h \in L_\infty^\bnd (X \times X, d^2)$ is a coupling between $F$ and $G$. \end{lem} \proof The map $y \mapsto h (x, y)$ is Lipschitz from $X, d^{op}$ to $\mathbb{R}$, hence is in $L^\pm (X, d)$ by Remark~\ref{rem:L+-:op}. Therefore $G^\pm (y \mapsto h (x, y))$ makes sense. We claim that the map $x \mapsto G^\pm (y \mapsto h (x, y))$ is $\alpha$-Lipschitz, assuming that $h$ is $\alpha$-Lipschitz. Indeed, for every $y$, $h (x, y) \leq h (x', y) + \alpha d (x, x')$, so $G^\pm (y \mapsto h (x, y)) \leq G^\pm (y \mapsto h (x', y) + \alpha d (x, x')) = G^\pm (y \mapsto h (x', y)) + \alpha d (x, x')$, using the fact that $G^\pm$ maps $\mathbf 1$ to $1$. It follows that our definition of $F \ltimes G$ makes sense. $F \ltimes G$ is linear and monotonic because $F$ and $G^\pm$ are. $(F \ltimes G) (\mathbf 1) = F (x \mapsto G^\pm (y \mapsto 1)) = F (x \mapsto 1) = 1$. For every $f \in L_\infty^\bnd (X, d)$, $(F \ltimes G) (f \circ \pi_1) = F (x \mapsto G^\pm (y \mapsto f (x))) = F (x \mapsto f (x)) = F (f)$, and for every $g \in L_\infty^\bnd (X, d^{op})$, $(F \ltimes G) (g \circ \pi_2) = F (x \mapsto G^\pm (y \mapsto g (y))) = F (x \mapsto G^\pm (g)) = G^\pm (g)$. \qed \begin{rem} \label{rem:coupling:alt} The map $F \rtimes G \colon h \mapsto G^\pm (y \mapsto F (x \mapsto h (x, y)))$ is also a coupling between $F$ and $G$. \end{rem} Lemma~\ref{lemma:d2:lip} allows us to make sense of $\Gamma (d)$, for any coupling $\Gamma$, where $d$ is the underlying metric. \begin{lem} \label{lemma:coupling:<=} Let $X, d$ be a continuous quasi-metric space, and $\mu, \nu \in \Val_1 X$. For every coupling $\Gamma$ between $\mu$ and $\nu$, $\dKRH (\mu, \nu) \leq \Gamma (d)$. \end{lem} \proof Using Lemma~\ref{lemma:KRH:bounded}, it suffices to show that for every bounded map $h$ in $\Lform_1 (X, d)$, $F_\mu (h) \leq F_\nu (h) + \Gamma (d)$. 
Since $\pi_1$ is $1$-Lipschitz, $h \circ \pi_1$ is $1$-Lipschitz, and since $\pi_2$ is $1$-Lipschitz from $X \times X, d^2$ to $X, d^{op}$, $-h \circ \pi_2$ is $1$-Lipschitz from $X \times X, d^2$ to $\mathbb{R}$. By Lemma~\ref{lemma:d2:lip}, the map $(h \circ \pi_2) + d$ is then in $L^\pm (X \times X, d^2)$. We observe that $(h \circ \pi_1) \leq (h \circ \pi_2) + d$, in other words, for all $x, y \in X$, $h (x) \leq h (y) + d (x, y)$: this is because $h$ is $1$-Lipschitz. Since $\Gamma$ is monotonic and linear, $\Gamma^\pm$ is monotonic and linear (Lemma~\ref{lemma:L+-:extF}), so $\Gamma^\pm (h \circ \pi_1) \leq \Gamma^\pm (h \circ \pi_2) + \Gamma^\pm (d)$. Since $h \circ \pi_1$ is in $L_1 (X \times X, d^2)$, $\Gamma^\pm (h \circ \pi_1) = \Gamma (h \circ \pi_1)$. This is equal to $\extF {F_\mu} (h)$ by the definition of a coupling, and then to $F_\mu (h)$ by Lemma~\ref{lemma:extF:prev}~(3). Let $a$ be some upper bound on the values of $h$. Then $\Gamma^\pm (h \circ \pi_2) = a - \Gamma^\pm (a.\mathbf 1 - h \circ \pi_2)$. Since $-h \circ \pi_2$ is $1$-Lipschitz, $a.\mathbf 1 - h \circ \pi_2$ is also $1$-Lipschitz, so $\Gamma^\pm (a.\mathbf 1 - h \circ \pi_2) = \Gamma (a.\mathbf 1 - h \circ \pi_2) = \Gamma (g\circ \pi_2)$ where $g = a.\mathbf 1 - h$. By the definition of a coupling, and since $g$ is $1$-Lipschitz from $X, d^{op}$ to $\mathbb{R}_+$, $\Gamma (g \circ \pi_2) = \extF {F_\nu}^\pm (g)$. Therefore $\Gamma^\pm (h \circ \pi_2) = a - \extF {F_\nu}^\pm (g) = \extF {F_\nu}^\pm (a.\mathbf 1 - g) = \extF {F_\nu}^\pm (h)$. This is equal to $\extF {F_\nu} (h)$ since $h \in L_1 (X, d)$, hence to $F_\nu (h)$ by Lemma~\ref{lemma:extF:prev}~(3). \qed \begin{prop} \label{prop:coupling:=} Let $X, d$ be a continuous quasi-metric space, and $\mu, \nu \in \Val_1 X$. There is a coupling $\Gamma$ between $\mu$ and $\nu$ such that $\dKRH (\mu, \nu) = \Gamma (d)$. \end{prop} \proof Let $B$ be the vector space $L^\pm (X \times X, d^2)$.
This is a normed vector space with $||h|| = \sup_{(x, y) \in X \times X} |h (x, y)|$. For all $f, g \in L_\infty^\bnd (X, d)$, the map $f \ominus g$, defined as mapping $(x, y)$ to $f (x) - g (y)$, is Lipschitz from $X \times X, d^2$ to $\mathbb{R}$. Indeed, if $f$ and $g$ are $\alpha$-Lipschitz, then $f (x) \leq f (x') + \alpha d (x, x')$ and $g (y') \leq g (y) + \alpha d (y', y)$, so then $f (x) - g (y) \leq f (x') + \alpha d (x, x') - g (y') + \alpha d (y', y) = (f (x') - g (y')) + \alpha d^2 ((x, y), (x', y'))$. In particular, the maps $f \ominus g$, with $f, g \in L_\infty^\bnd (X, d)$, are all in $B$. It is easy to see that the subspace $A$ generated by those maps consists of the maps of the form $f \ominus g$, where $f$ and $g$ are now taken from $L^\pm (X, d)$. Define a map $\Lambda \colon A \to \mathbb{R}$ by $\Lambda (f \ominus g) = \extF {F_\mu}^\pm (f) - \extF {F_\nu}^\pm (g)$. If $f \ominus g = f' \ominus g'$, then for all $x, y \in X$, $f (x) - f' (x) = g (y) - g' (y)$; the left-hand side depends only on $x$ and the right-hand side only on $y$, so both are equal to a common constant $c$, that is, $f = f' + c.\mathbf 1$ and $g = g' + c.\mathbf 1$. Since $\extF {F_\mu}^\pm$ and $\extF {F_\nu}^\pm$ are linear and map $\mathbf 1$ to $1$, $\extF {F_\mu}^\pm (f) - \extF {F_\nu}^\pm (g) = \extF {F_\mu}^\pm (f') - \extF {F_\nu}^\pm (g')$. It follows that $\Lambda$ is defined unambiguously. It is easy to see that $\Lambda$ is linear. We claim that $|\Lambda (f \ominus g)| \leq ||f \ominus g||$. To show that, we consider a function similar to the coupling $\extF {F_\mu} \ltimes \extF {F_\nu}$ of Lemma~\ref{lemma:coupling:x}. Let $\Phi (h)$ be defined as $\extF {F_\mu}^\pm (x \mapsto \extF {F_\nu}^\pm (y \mapsto h (x, y)))$ for every $h \in B$. Then: \begin{eqnarray*} \Phi (f \ominus g) & = & \extF {F_\mu}^\pm (x \mapsto \extF {F_\nu}^\pm (y \mapsto f (x) - g (y))) \\ & = & \extF {F_\mu}^\pm (x \mapsto f (x) - \extF {F_\nu}^\pm (g)) \\ & = & \extF {F_\mu}^\pm (f) - \extF {F_\nu}^\pm (g) = \Lambda (f \ominus g). \end{eqnarray*} Also, $|\Phi (h)| \leq ||h||$ for every $h \in B$. Indeed, $\Phi (h) \leq \extF {F_\mu}^\pm (x \mapsto \extF {F_\nu}^\pm (y \mapsto ||h||)) = \extF {F_\mu}^\pm (x \mapsto ||h||) = ||h||$, and similarly for $-\Phi (h) = \Phi (-h)$.
As a consequence, $|\Lambda (f \ominus g)| = |\Phi (f \ominus g)| \leq ||f \ominus g||$. By the Hahn-Banach Theorem, $\Lambda$ has a linear extension $\Gamma$ to the whole of $B$, such that $\Gamma (h) \leq ||h||$ for every $h \in B$. We note that $\Gamma (\mathbf 1)=1$. Indeed, $\mathbf 1$ can be written as $f \ominus g$ with $f = \mathbf 1$ and $g = 0$, whence $\Gamma (\mathbf 1) = \Lambda (f \ominus g) = 1$. We claim that $\Gamma (h) \geq 0$ for every $h \in B$ that takes its values in $\mathbb{R}_+$. Otherwise, there is an $h \in B$ that takes its values in $\mathbb{R}_+$ such that $\Gamma (h) < 0$. By multiplying by an appropriate positive scalar, we may assume that $||h|| \leq 1$. Then $\Gamma (\mathbf 1-h) \leq ||\mathbf 1-h|| \leq 1$, although $\Gamma (\mathbf 1-h) = 1 - \Gamma (h) > 1$. It follows that $\Gamma$ is monotonic: if $h \leq h'$, then $h' - h$ takes its values in $\mathbb{R}_+$, so $\Gamma (h' - h) \geq 0$, and that is equivalent to $\Gamma (h) \leq \Gamma (h')$. Finally, for every $f \in L_\infty^\bnd (X, d)$, $\Gamma (f \circ \pi_1) = \Gamma (f \ominus 0) = \Lambda (f \ominus 0) = \extF {F_\mu}^\pm (f) = \extF {F_\mu} (f)$, and for every $g \in L_\infty^\bnd (X, d^{op})$, $\Gamma (g \circ \pi_2) = \Gamma (- (0 \ominus g)) = - \Lambda (0 \ominus g) = \extF {F_\nu}^\pm (g)$. \qed As a corollary of Lemma~\ref{lemma:coupling:<=} and Proposition~\ref{prop:coupling:=}, we obtain the following. \begin{thm}[Coupling] \label{thm:coupling} Let $X, d$ be a continuous quasi-metric space, and $\mu, \nu \in \Val_1 X$. Then: \begin{equation} \label{eq:coupling} \dKRH (\mu, \nu) = \min_\Gamma \Gamma (d), \end{equation} where $\Gamma$ ranges over the couplings between $\mu$ and $\nu$. \qed \end{thm} This holds, in particular, for every metric space, since every metric space is continuous.
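For simple valuations, the product coupling $F \ltimes G$ of Lemma~\ref{lemma:coupling:x} corresponds to the product transition matrix $t_{xy} = a_x b_y$, so that $\Gamma (d) = \sum_{x, y} a_x b_y d (x, y)$; by Theorem~\ref{thm:coupling} this is an upper bound on $\dKRH (\mu, \nu)$, and usually not the minimum. A Python sketch (illustrative only; the names are ours):

```python
def product_coupling_cost(mu, nu, d):
    """Gamma(d) for the product coupling of two simple valuations
    mu = {x: a_x} and nu = {y: b_y}: sum of a_x * b_y * d(x, y)."""
    return sum(a * b * d[x][y]
               for x, a in mu.items() for y, b in nu.items())

# Two points at distance 2, with mu = nu uniform.
mu = {0: 0.5, 1: 0.5}
d = {0: {0: 0.0, 1: 2.0}, 1: {0: 2.0, 1: 0.0}}
cost = product_coupling_cost(mu, mu, d)   # 1.0, although dKRH(mu, mu) = 0
```

Here the product coupling pays $1.0$, whereas the transition matrix that keeps all mass in place has weight $0$: the minimum in Theorem~\ref{thm:coupling} is attained by a better coupling than the product one.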
\section{The Hoare Powerdomain} \label{sec:hoare-powerdomain} \subsection{The $\dH$ Quasi-Metric} \label{sec:hoare-quasi-metric} For every open subset $U$ of $X$ (with its $d$-Scott topology), and every point $x \in X$, we can define the distance from $x$ to the complement $\overline U$ of $U$ by $d (x, \overline U) = \sup \{r \in \mathbb{R}_+ \mid (x, r) \in \widehat U\}$ \cite[Definition~6.10]{JGL:formalballs}, where $\widehat U$ is the largest open subset of $\mathbf B (X, d)$ such that $\widehat U \cap X \subseteq U$ (equivalently, $\widehat U \cap X = U$). Given that $X, d$ is standard, the following laws hold (Lemma~6.11, loc.cit.): $d (x, \overline U)=0$ if and only if $x \not\in U$; $d (x, \overline U) \leq d (x, y) + d (y, \overline U)$; the map $d (\_, \overline U)$ is $1$-Lipschitz continuous from $X, d$ to $\overline{\mathbb{R}}_+, \dreal$. Additionally, $d (x, \overline U) \leq \inf_{y \in \overline U} d (x, y)$, with equality if $\overline U = \dc E$ for some finite set $E$, or if $x$ is a center point (Proposition~6.12, loc.cit.). An equivalent definition, working directly with closed sets, is as follows. For a closed subset $C$ of $X$, let $\cl (C)$ be its closure in $\mathbf B (X, d)$---the smallest closed subset of $\mathbf B (X, d)$ that contains $C$. If $U$ is the complement of $C$, then $\cl (C)$ is the complement of $\widehat U$ in $\mathbf B (X, d)$. Then $d (x, C) = \sup \{r \in \mathbb{R}_+ \mid (x, r) \not\in \cl (C)\}$, and $d (x, C)=0$ if and only if $x \in C$. \begin{lem} \label{lemma:dxC} Let $X, d$ be a quasi-metric space. For every $x \in X$ and every closed subset $C$ of $X$, $d (x, C) = \inf \{r \in \mathbb{R}_+ \mid (x, r) \in \cl (C)\}$, where the infimum of an empty set of elements of $\mathbb{R}_+$ is taken to be $+\infty$. Moreover, \begin{enumerate} \item the infimum is attained if $d (x, C) < +\infty$; \item for every $r \in \mathbb{R}_+$, $d (x, C) \leq r$ if and only if $(x, r) \in \cl (C)$. 
\end{enumerate} \end{lem} \proof Let $A = \{r \in \mathbb{R}_+ \mid (x, r) \not\in \cl (C)\}$, $B = \{r \in \mathbb{R}_+ \mid (x, r) \in \cl (C)\}$. $A$ and $B$ partition $\mathbb{R}_+$, and since $\cl (C)$ is downwards-closed, $A$ is downwards-closed, $B$ is upwards-closed, and for all $a \in A$, $b \in B$, we must have $a < b$. It follows that $\sup A = \inf B$, proving the first claim. $B$ is also closed under (filtered) infima. Indeed, for any filtered family ${(r_i)}_{i \in I}$ of elements of $B$ with $\inf_{i \in I} r_i = r$, ${(x, r_i)}_{i \in I}$ is a directed family in $\mathbf B (X, d)$, whose supremum is $(x, r)$. (The upper bounds of ${(x, r_i)}_{i \in I}$ are those formal balls $(y, s)$ such that $d (x, y) \leq r_i - s$ for every $i \in I$, namely the elements above $(x, r)$.) That implies that $r$ is in $B$. It follows that, if $B$ is non-empty, then it is an interval of the form $[b, +\infty[$, where $b = \inf B = d (x, C)$. This shows (1). For (2), if $d (x, C) \leq r \in \mathbb{R}_+$, then $B$ is non-empty and $r \geq b$, so $r$ is in $B$, that is, $(x, r) \in \cl (C)$. Conversely, if $(x, r) \in \cl (C)$, then $r$ is in $B$, so $B$ is non-empty and $r \geq b$, meaning that $d (x, C) \leq r$. \qed In the following, we shall need to know a few elementary facts about the closure operation $\cl$. For a subset $\mathcal A$ of $\mathbf B (X, d)$, and every $a \in \mathbb{R}_+$, we agree that $\mathcal A+a$ denotes $\{(x, r+a) \mid (x, r) \in \mathcal A\}$. \begin{lem} \label{lemma:cl:1} Let $X, d$ be a standard quasi-metric space. \begin{enumerate} \item For every Scott-closed subset $\mathcal C$ of $\mathbf B (X, d)$, $\mathcal C+a$ is Scott-closed. \item For every subset $\mathcal A$ of $\mathbf B (X, d)$, $\cl (\mathcal A)+a = \cl (\mathcal A+a)$. 
\end{enumerate} \end{lem} \proof (1) For every formal ball $(x, r)$ below some element $(y, s+a)$ of $\mathcal C+a$ (viz., $(y, s)$ is in $\mathcal C$), $d (x, y) \leq r - s - a$, so $r \geq s+a \geq a$, and $(x, r-a) \leq^{d^+} (y, s)$. Since $\mathcal C$ is downwards-closed, $(x, r-a)$ is in $\mathcal C$, hence $(x, r)$ is in $\mathcal C+a$. This proves that $\mathcal C+a$ is downwards-closed. For every directed family ${(x_i, r_i+a)}_{i \in I}$ in $\mathcal C+a$, ${(x_i, r_i)}_{i \in I}$ is also directed, and in $\mathcal C$. If ${(x_i, r_i+a)}_{i \in I}$ has a supremum, then ${(x_i, r_i)}_{i \in I}$ also has a supremum $(x, r)$, by the definition of standardness, and the supremum of ${(x_i, r_i+a)}_{i \in I}$ is $(x, r+a)$. Since $\mathcal C$ is Scott-closed, $(x, r)$ is in $\mathcal C$, so $(x, r+a)$ is in $\mathcal C+a$. (2) Since $\cl (\mathcal A)+a$ is Scott-closed, and since it obviously contains $\mathcal A+a$, it contains $\cl (\mathcal A+a)$. For the reverse inclusion, we use the fact that the map $\_ + a$ is Scott-continuous, since $X, d$ is standard, and that for a continuous map, the closure of the image of a set contains the image of the closure. \qed In accordance with our convention that $X$ is equated with a subspace of $\mathbf B (X, d)$ through the embedding $x \mapsto (x, 0)$, we also agree that, for any subset $C$ of $X$, $C+a$ denotes $\{(x, a) \mid x \in C\}$. \begin{lem} \label{lemma:cl} Let $X, d$ be a standard quasi-metric space. For every closed subset $C$ of $X$, for every $a \in \mathbb{R}_+$: \begin{enumerate} \item $\cl (C) \cap X = C$; \item $\cl (C)+a = \cl (C+a)$. \end{enumerate} \end{lem} \proof (1) Let $U$ be the complement of $C$ in $X$, so that $\cl (C)$ is the complement of $\widehat U$ in $\mathbf B (X, d)$. Since $\widehat U \cap X = U$, by taking complements $\cl (C) \cap X = C$. (2) By Lemma~\ref{lemma:cl:1}~(2). 
\qed We now define a quasi-metric on $\Hoarez X$, the set of all closed subsets of $X$, in a manner that looks like one half of the Hausdorff metric. We reserve the notation $\Hoare X$ for the set of \emph{non-empty} closed subsets of $X$. \begin{defi}[$\dH$] \label{defn:dH} Let $X, d$ be a standard quasi-metric space. For any two closed subsets $C$, $C'$ of $X$, let $\dH (C, C') = \sup_{x \in C} d (x, C')$. \end{defi} \begin{lem} \label{lemma:dH} Let $X, d$ be a standard quasi-metric space. \begin{enumerate} \item For every $C \in \Hoarez X$, for every closed set $C'$, for every $a \in \mathbb{R}_+$, $\dH (C, C') \leq a$ if and only if $C+a \subseteq \cl (C')$; \item $\dH$ is a quasi-metric on $\Hoarez X$, and its specialization ordering is inclusion; \item the order $\leq^{\dH^+}$ on $\mathbf B (\Hoarez X, \dH)$ is given by $(C, r) \leq^{\dH^+} (C', r')$ if and only if $C+r \subseteq \cl (C' + r')$. \end{enumerate} \end{lem} \proof (1) By definition, $\dH (C, C') \leq a$ if and only if, for every $x \in C$, $d (x, C') \leq a$, if and only if, for every $x \in C$, $(x, a)$ is in $\cl (C')$, by Lemma~\ref{lemma:dxC}~(2). That is equivalent to $C+a \subseteq \cl (C')$. (2) By (1), $\dH (C, C')=0$ if and only if $C \subseteq \cl (C')$. The latter is equivalent to $C \subseteq \cl (C') \cap X$, hence to $C \subseteq C'$, using Lemma~\ref{lemma:cl}~(1). This shows immediately that $\dH (C, C')=\dH (C', C)=0$ implies $C=C'$, and that inclusion will be the specialization ordering of $\dH$. It still remains to show the triangular inequality, namely $\dH (C, C'') \leq \dH (C, C') + \dH (C', C'')$. If the right-hand side is equal to $+\infty$, then the inequality is obvious. Otherwise, let $a = \dH (C, C')$, $a' = \dH (C', C'')$, both elements of $\mathbb{R}_+$. By (1), $C+a \subseteq \cl (C')$ and $C'+a' \subseteq \cl (C'')$. Therefore $C+a+a' \subseteq \cl (C') + a' = \cl (C' + a')$ (by Lemma~\ref{lemma:cl}~(2)) $\subseteq \cl (C'')$. 
By (1) again, this implies that $\dH (C, C'') \leq a+a'$, which proves our claim. (3) $(C, r) \leq^{\dH^+} (C', r')$ if and only if $r \geq r'$ and $C+r-r' \subseteq \cl (C')$, by (1). Adding $r'$ to both sides, $C+r-r' \subseteq \cl (C')$ if and only if $C+r \subseteq \cl (C')+r'$, and $\cl (C')+r' = \cl (C'+r')$ by Lemma~\ref{lemma:cl}~(2). \qed \begin{rem} \label{rem:H:root} For every standard quasi-metric space $X, d$ with an $a$-root $x$ ($a \in \mathbb{R}_+$, $a > 0$), $\Hoare X, \dH$ has an $a$-root, namely $\dc x$. Indeed, fix $C \in \Hoare X$. For every $y \in X$, $d (x, y) \leq a$, and that certainly holds for any fixed $y \in C$. We recall that $d (x, C) \leq \inf_{y \in C} d (x,y)$, so $d (x, C) \leq a$. By Lemma~\ref{lemma:dxC}~(2), $(x, a)$ is then in $\cl (C)$, so $\dc x + a$, which is included in $\dc (x, a)$, is included in $\cl (C)$. By Lemma~\ref{lemma:dH}~(1), $\dH (\dc x, C) \leq a$. $\Hoarez X, \dH$ has a root even when $X, d$ has no root, namely the empty closed subset: since $\emptyset \subseteq C$ for every $C \in \Hoarez X$, $\dH (\emptyset, C) = 0$. \end{rem} We relate that construction with previsions now. This is similar to Lemma~4.7, item~1, of \cite{jgl-jlap14}. \begin{lem} \label{lemma:eH} Let $X$ be a topological space. For each subset $C$ of $X$, let $F^C (h) = \sup_{x \in C} h (x)$. (We agree that this is equal to $0$ if $C = \emptyset$.) \begin{enumerate} \item If $C$ is closed, then $F^C$ is a discrete sublinear prevision. \item Conversely, every discrete sublinear prevision is of the form $F^C$ for some unique closed set $C$. \end{enumerate} Moreover, $F^C$ is normalized if and only if $C$ is non-empty. \end{lem} \proof We start with the last claim, and show that $F^C$ is normalized if and only if $C$ is non-empty. If $C = \emptyset$, $F^C = 0$ is not normalized. Otherwise, $F^C (\alpha.\mathbf 1 + h) = \sup_{x \in C} (\alpha + h (x)) = \alpha + \sup_{x \in C} h (x) = \alpha + F^C (h)$. 1. $F^C$ is clearly a prevision.
We check that $F^C$ is discrete. If $C = \emptyset$, then $F^C = 0$ and for every strict $f \in \Lform \overline{\mathbb{R}}_+$, $F^C (f \circ h) = 0 = f (F^C (h))$---because $f$ is strict. Otherwise, $f (F^C (h)) = \sup_{x \in C} f (h (x))$ because every non-empty family in $\overline{\mathbb{R}}_+$ is directed, and because $f$ is Scott-continuous. Since $\sup_{x \in C} f (h (x)) = F^C (f \circ h)$, we conclude that $F^C$ is discrete. Clearly, $F^C (h+h') = \sup_{x \in C} (h (x) + h' (x)) \leq \sup_{x \in C} h (x) + \sup_{x \in C} h' (x)$, so $F^C$ is sublinear. 2. Let now $F$ be an arbitrary discrete sublinear prevision on $X$. Let $\mathcal F$ be the family of open subsets $U$ of $X$ such that $F (\chi_U)=0$. $\mathcal F$ contains the empty set, hence is non-empty. If $U \subseteq V$ and $V \in \mathcal F$, then $U$ is in $\mathcal F$ because $F$ is monotonic. For every directed family ${(U_i)}_{i \in I}$ of elements of $\mathcal F$, letting $U = \bigcup_{i \in I} U_i$, ${(\chi_{U_i})}_{i \in I}$ is also a directed family of lower semicontinuous maps whose supremum is $\chi_U$. Since $F$ is Scott-continuous, $F (\chi_U) = \sup_{i \in I} F (\chi_{U_i}) = 0$, so $U$ is in $\mathcal F$. We have shown that $\mathcal F$ is Scott-closed. We observe that $\mathcal F$ is directed. Take any two elements $U_1$, $U_2$ of $\mathcal F$. Since $F$ is sublinear, $F (\chi_{U_1}) + F (\chi_{U_2}) \geq F (\chi_{U_1} + \chi_{U_2}) = F (\chi_{U_1 \cup U_2} + \chi_{U_1 \cap U_2}) \geq F (\chi_{U_1 \cup U_2})$, so the latter is equal to $0$, meaning that $U_1 \cup U_2$ is in $\mathcal F$. $\mathcal F$ being Scott-closed and directed, it has a largest element $U_0$, and therefore $\mathcal F$ is the set of open subsets of $U_0$. We can now prove (2). If $F = F^C$, then $\mathcal F$ is the family of open subsets that do not intersect $C$, so $U_0$ is the complement of $C$. This shows the uniqueness of $C$. 
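Before turning to existence, it may help to record a simple worked instance of the correspondence $C \mapsto F^C$ (a side remark, not needed in the sequel). For the closed set $C = \dc x$, the downward closure of a point $x \in X$, we obtain
\begin{equation*}
  F^{\dc x} (h) = \sup_{y \leq x} h (y) = h (x),
\end{equation*}
since every $h \in \Lform X$ is monotonic with respect to the specialization preordering of $X$. Hence the discrete sublinear previsions associated with one-point closures are exactly the evaluation maps $h \mapsto h (x)$.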
As far as existence is concerned, we are led to define $C$ as the complement of $U_0$. We check that $F = F^C$. Let $h$ be a lower semicontinuous map from $X$ to $\overline{\mathbb{R}}_+$. For every $t \in \mathbb{R}_+$, let $f = \chi_{]t, +\infty]}$ and notice that this is a strict lower semicontinuous map. Since $F$ is discrete, $F (f \circ h) = f (F (h))$. Noting that $f \circ h = \chi_U$ where $U$ is the open set $h^{-1} (]t, +\infty])$, $U \in \mathcal F$ is equivalent to $f (F (h)) = 0$. In other words, $h^{-1} (]t, +\infty]) \subseteq U_0$ if and only if $F (h) \leq t$. We notice that $F^C (h) \leq t$ if and only if $h (x) \leq t$ for every $x \in C$, if and only if every $x \in X$ such that $h (x) > t$ is in $U_0$ (by contraposition), if and only if $h^{-1} (]t, +\infty]) \subseteq U_0$. Therefore $F^C (h) \leq t$ if and only if $F (h) \leq t$, for every $t \in \mathbb{R}_+$. It follows that $F^C (h) = F (h)$. \qed \begin{prop} \label{prop:H:prev} Let $X, d$ be a standard quasi-metric space. The bijection $C \mapsto F^C$ is an isometry between $\Hoarez X$ and the set of discrete sublinear previsions on $X$: for all closed subsets $C$, $C'$, $\dH (C, C') = \dKRH (F^C, F^{C'})$. \end{prop} The indicated bijection is, of course, the one exhibited in Lemma~\ref{lemma:eH}. \proof Take $h = d (\_, C')$ in the definition of $\dKRH$ (Equation~(\ref{eq:dKRH})), and recall that this is $1$-Lipschitz continuous. We obtain that $\dKRH (F^C, F^{C'}) \geq \dreal (\sup_{x \in C} d (x, C'), \sup_{y \in C'} d (y, C')) = \dreal (\dH (C, C'), \dH (C', C')) = \dreal (\dH (C, C'), 0)$ (using Lemma~\ref{lemma:dH}~(2)) $= \dH (C, C')$. For the reverse inequality, fix $h \in \Lform_1 (X, d)$, and let us check that $\sup_{x \in C} h (x) \leq \sup_{y \in C'} h (y) + \dH (C, C')$. If $\dH (C, C') = +\infty$, this is clear, so assume $a = \dH (C, C') < +\infty$. Fix $x \in C$: our aim is to show that $h (x) - a \leq \sup_{y \in C'} h (y)$.
Note that the left-hand side is $h' (x, a)$, where $h' \colon (z, t) \mapsto h (z) - t$ is Scott-continuous. For every $b < h' (x, a)$, the formal ball $(x, a)$ is in the Scott-open set ${h'}^{-1} (]b, +\infty])$. Using Lemma~\ref{lemma:dH}~(1) and the fact that $a = \dH (C, C')$, $C+a$ is included in $\cl (C')$, so $(x, a)$ is in $\cl (C')$. Therefore ${h'}^{-1} (]b, +\infty])$ intersects $\cl (C')$, hence also $C'$. Let $y$ be in the intersection. Then $h' (y, 0) = h (y) > b$, which shows that $\sup_{y \in C'} h (y) > b$. As $b < h' (x, a)$ is arbitrary, we conclude. \qed \begin{rem} \label{rem:HX:dKRHa} By using similar arguments, one can show that $\dKRH^a (F^C, F^{C'}) = \dH^a (C, C')$, where $\dH^a (C, C')$ is defined as $\min (a, \allowbreak \dH (C, C'))$, for every $a \in \mathbb{R}_+$, $a > 0$. In one direction, $\dKRH^a (F^C, F^{C'}) \leq \dKRH (F^C, F^{C'}) = \dH (C, C')$ by Proposition~\ref{prop:H:prev}, and $\dKRH^a (F^C, F^{C'}) \leq a$ by definition. In the other direction, we take $h = \min (a, d (\_, C'))$ in the definition of $\dKRH^a$ (\ref{eq:dKRHa}), check that $h \in \Lform_1^a (X, d)$, and therefore obtain that $\dKRH^a (F^C, F^{C'}) \geq \dreal (\sup_{x \in C} \min (a, d (x, C')), \allowbreak \sup_{y \in C'} \min (a, d (y, C'))) = \min (a, \dH (C, C'))$. \end{rem} \subsection{Completeness} \label{sec:H:completeness} We shall need the following easy, folklore lemma. \begin{lem} \label{lemma:H:cl:sup} For every topological space $Y$, for every subset $A$ of $Y$, for every lower semicontinuous map $h \colon Y \to \mathbb{R} \cup \{-\infty, +\infty\}$, $\sup_{y \in A} h (y) = \sup_{y \in cl (A)} h (y)$. \end{lem} \proof Since $A \subseteq cl (A)$, $\sup_{y \in A} h (y) \leq \sup_{y \in cl (A)} h (y)$. If that inequality were strict, then there would be a real number $a$ such that $\sup_{y \in A} h (y) \leq a < \sup_{y \in cl (A)} h (y)$. In particular, $h^{-1} (]a, +\infty])$ would intersect $cl (A)$, hence also $A$.
Therefore, there would be a $y \in A$ such that $h (y) > a$, which contradicts $\sup_{y \in A} h (y) \leq a$. \qed \begin{lem} \label{lemma:H:clC} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. For every closed subset $\mathfrak C$ of $\mathbf B (X, d)$ such that $F^{\mathfrak C}$ is supported on $V_{1/2^n}$ for every $n \in \nat$, \begin{enumerate} \item $\mathfrak C$ is equal to $\cl (C)$, where $C$ is the closed set $\mathfrak C \cap X$; \item For every $k \in \Lform (\mathbf B (X, d))$, $F^{\mathfrak C} (k) = F^C (k \circ \eta_X)$; \item $F^{\mathfrak C}$ is supported on $X$. \end{enumerate} \end{lem} \proof We first note that for every (Scott-)open subset $\mathcal U$ of $\mathbf B (X, d)$ that intersects $\mathfrak C$, for every $\eta > 0$, $\mathfrak C$ also intersects $\mathcal U \cap V_{\eta}$. Indeed, since $\mathcal U \cap \mathfrak C \neq \emptyset$, $F^{\mathfrak C} (\chi_{\mathcal U}) = 1$. We use our assumption that $F^{\mathfrak C}$ is supported on $V_{1/2^n}$, where we choose $n$ so large that $1/2^n < \eta$: since $\chi_{\mathcal U}$ and $\chi_{\mathcal U \cap V_{\eta}}$ coincide on $V_{1/2^n}$, $F^{\mathfrak C} (\chi_{\mathcal U \cap V_{\eta}}) = 1$. Therefore $\mathfrak C$ also intersects $\mathcal U \cap V_{\eta}$. (1) Clearly $\cl (C) \subseteq \mathfrak C$. In the reverse direction, assume for the sake of contradiction that $\mathfrak C$ is not included in $\cl (C)$. There is a formal ball $(x, r)$ in $\mathfrak C$ that is in some Scott-open subset $\mathcal U$ (namely, the complement of $\cl (C)$) such that $\mathcal U \cap X$ does not intersect $C$. By our preliminary remark, $\mathfrak C$ must also intersect $\mathcal U \cap V_1$, say at $(y_0, s_0)$. Since $\mathbf B (X, d)$ is a continuous dcpo, $(y_0, s_0)$ is the supremum of a directed family of formal balls way-below $(y_0, s_0)$, so one of them, call it $(x_0, r_0)$, is in $\mathcal U \cap V_1$.
It is also in $\mathfrak C$, since every Scott-closed set is downwards-closed. Since $(x_0, r_0) \ll (y_0, s_0)$, $\mathfrak C \cap \uuarrow (x_0, r_0)$ is non-empty. By our preliminary remark again, $\mathfrak C$ must intersect $\uuarrow (x_0, r_0) \cap V_{1/2}$, say at $(y_1, s_1)$. As above, there is a formal ball $(x_1, r_1) \ll (y_1, s_1)$ in $\mathfrak C \cap \uuarrow (x_0, r_0) \cap V_{1/2}$. We proceed in the same way: $\mathfrak C \cap \uuarrow (x_1, r_1)$ is non-empty, hence $\mathfrak C \cap \uuarrow (x_1, r_1) \cap V_{1/4}$ contains a formal ball $(y_2, s_2)$, and that allows us to find $(x_2, r_2) \ll (y_2, s_2)$ in $\mathfrak C \cap \uuarrow (x_1, r_1) \cap V_{1/4}$. Inductively, we obtain formal balls $(x_n, r_n)$ in $\mathfrak C \cap V_{1/2^n}$ with the property that $(x_0, r_0) \ll (x_1, r_1) \ll \cdots \ll (x_n, r_n) \ll \cdots$, and $(x_0, r_0) \in \mathcal U$. Let $(y, s) = \sup_{n \in \nat} (x_n, r_n)$. Since $s \leq r_n$ and $(x_n, r_n) \in V_{1/2^n}$, $s$ is less than or equal to $1/2^n$ for every $n \in \nat$, hence zero. Since $\mathfrak C$ is Scott-closed, $(y, s) = (y, 0)$ is in $\mathfrak C$, so $y$ is in $C$. Because $\mathcal U$ is upwards-closed and $(x_0, r_0)$ is in $\mathcal U$, $(y, 0)$ must also be in $\mathcal U$. But that contradicts the fact that $\mathcal U \cap X$ does not intersect $C$. (2) Let $k \in \Lform (\mathbf B (X, d))$. We have $F^{\mathfrak C} (k) = \sup_{(x, r) \in \mathfrak C} k (x, r) = \sup_{(x, r) \in \cl (C)} k (x, r)$ by (1), so $F^{\mathfrak C} (k) = \sup_{x \in C} k (x, 0)$ by Lemma~\ref{lemma:H:cl:sup}, and that is exactly $F^C (k \circ \eta_X)$. (3) Let $g, f \in \Lform (\mathbf B (X, d))$ be such that $g_{|X} = f_{|X}$, that is, $g \circ \eta_X = f \circ \eta_X$. By (2), $F^{\mathfrak C} (g) = F^C (g \circ \eta_X) = F^C (f \circ \eta_X) = F^{\mathfrak C} (f)$.
\qed \begin{prop}[Yoneda-completeness, Hoare powerdomain] \label{prop:HX:Ycomplete} Let $X, d$ be a standard Lipschitz regular quasi-metric space, or a continuous Yoneda-complete quasi-metric space. Then $\Hoarez X, \dH$ and $\Hoare X, \dH$ are Yoneda-complete. Suprema of directed families of formal balls of previsions are naive suprema. \end{prop} \proof Through the isometry of Proposition~\ref{prop:H:prev}, we only have to show that the spaces of discrete sublinear previsions (resp., the normalized discrete sublinear previsions) are Yoneda-complete under $\dKRH$. If $X, d$ is standard and Lipschitz regular, this follows from Theorem~\ref{thm:LPrev:sup}. Otherwise, this follows from Proposition~\ref{prop:supp:complete}, since the assumption on supports is verified by Lemma~\ref{lemma:H:clC}. \qed \begin{rem} \label{rem:HX:dKRHa:Ycomplete} Under the same assumptions, for every $a > 0$, $\Hoarez X, \dH^a$ and $\Hoare X, \dH^a$ are Yoneda-complete, and directed suprema of formal balls are naive suprema. The quasi-metric $\dH^a$ was introduced in Remark~\ref{rem:HX:dKRHa}. The argument is similar to Proposition~\ref{prop:HX:Ycomplete}, replacing Proposition~\ref{prop:H:prev} by Remark~\ref{rem:HX:dKRHa}. \end{rem} \begin{rem} \label{rem:Ha} For $a > 0$, it is clear that the specialization ordering $\leq^{\dH^a}$ is again inclusion: $\dH^a (C, C') = 0$ if and only if $\min (a, \dH (C, C'))=0$ if and only if $\dH (C, C')=0$, if and only if $C \subseteq C'$ by Lemma~\ref{lemma:dH}~(2). \end{rem} \subsection{$\dH$-Limits} \label{sec:dh-limits} Suprema of directed families of formal balls in $\Hoarez X$ and in $\Hoare X$ are naive suprema, but the detour through previsions makes that less than explicit. Let us give a more concrete description of those suprema, i.e., of $\dH$-limits. \begin{lem} \label{lemma:H:sup} Let $X, d$ be a continuous Yoneda-complete quasi-metric space.
For every directed family ${(C_i, r_i)}_{i \in I}$ in $\mathbf B (\Hoarez X, \dH)$, its supremum $(C, r)$ is given by $r = \inf_{i \in I} r_i$ and $C = \cl (\bigcup_{i \in I} C_i + r_i - r) \cap X$. \end{lem} \proof By Proposition~\ref{prop:HX:Ycomplete}, that supremum is the naive supremum, so $r = \inf_{i \in I} r_i$ in particular. One might think of writing down the naive supremum explicitly and simplifying the expression, but a direct verification seems easier. Let $\mathcal A = \bigcup_{i \in I} C_i + r_i - r$ and $\mathfrak C = \cl (\mathcal A)$. Let also $i \sqsubseteq j$ if and only if $(C_i, r_i) \leq^{\dH^+} (C_j, r_j)$, making ${(C_i, r_i)}_{i \in I, \sqsubseteq}$ a monotone net. The core of the proof consists in showing that $\mathfrak C = \cl (C)$ for $C = \mathfrak C \cap X$, and we shall obtain that by showing that $F^{\mathfrak C}$ is supported on every $V_\epsilon$, $\epsilon > 0$. For every $k \in \Lform (\mathbf B (X, d))$, $F^{\mathfrak C} (k) = \sup_{(x, s) \in \mathcal A} k (x, s)$ by Lemma~\ref{lemma:H:cl:sup}. By definition of $\mathcal A$, and another recourse to Lemma~\ref{lemma:H:cl:sup}, $F^{\mathfrak C} (k) = \sup_{i \in I} \sup_{x \in C_i} k (x, r_i-r) = \sup_{i \in I} \sup_{(x, s) \in \cl (C_i+r_i-r)} k (x, s)$. If $i \sqsubseteq j$ then $\dH (C_i, C_j) \leq r_i-r_j = (r_i-r)-(r_j-r)$, whence $(C_i, r_i-r) \leq^{\dH^+} (C_j, r_j-r)$ and therefore $\cl (C_i+r_i-r) \subseteq \cl (C_j+r_j-r)$ by Lemma~\ref{lemma:dH}~(3). In particular, $\sup_{x \in C_i} k (x, r_i-r) = \sup_{(x, s) \in \cl (C_i+r_i-r)} k (x, s)$ is smaller than or equal to $\sup_{x \in C_j} k (x, r_j-r) = \sup_{(x, s) \in \cl (C_j+r_j-r)} k (x, s)$. This shows that the outer supremum in $F^{\mathfrak C} (k) = \sup_{i \in I} \sup_{x \in C_i} k (x, r_i-r)$ is a directed supremum.
For any $i_0 \in I$, the subfamily of all $i \in I$ such that $i_0 \sqsubseteq i$ is cofinal in $I$, hence $F^{\mathfrak C} (k)$ is also equal to $\sup_{i \in I, i_0 \sqsubseteq i} \sup_{x \in C_i} k (x, r_i-r)$. Let $\epsilon > 0$, and let $g$ and $f$ be two elements of $\Lform (\mathbf B (X, d))$ that coincide on $V_\epsilon$. Since $r = \inf_{i \in I} r_i$, there is an $i_0 \in I$ such that $r_i - r < \epsilon$ for every $i \in I$, $i_0 \sqsubseteq i$. Then $g (x, r_i-r) = f (x, r_i-r)$, and the formula we have just obtained for $F^{\mathfrak C} (k)$ shows that $F^{\mathfrak C} (g) = F^{\mathfrak C} (f)$. In other words, $F^{\mathfrak C}$ is supported on $V_\epsilon$. By Lemma~\ref{lemma:H:clC}, $\mathfrak C$ is equal to $\cl (C)$, where $C = \mathfrak C \cap X$. For every $i \in I$, $C_i + r_i \subseteq \cl (C + r)$, since $\cl (C+r) = \cl (C) + r$ (by Lemma~\ref{lemma:cl}~(2)) $ = \mathfrak C + r = \cl (\bigcup_{i \in I} C_i + r_i)$ (by Lemma~\ref{lemma:cl:1}~(2)). Hence $(C, r)$ is an upper bound of ${(C_i, r_i)}_{i \in I}$ by Lemma~\ref{lemma:dH}, item~3. Assume $(C', r')$ is another upper bound of ${(C_i, r_i)}_{i \in I}$. Then $r' \leq \inf_{i \in I} r_i = r$, and $C_i+r_i \subseteq \cl (C'+r')$ for every $i \in I$, by Lemma~\ref{lemma:dH}~(3). Let $\mathfrak D$ be the set of formal balls $(y, s)$ such that $(y, s+r) \in \cl (C'+r')$. Since $X, d$ is standard, the map $(y, s) \mapsto (y, s+r)$ is Scott-continuous, so $\mathfrak D$ is closed in $\mathbf B (X, d)$. Since $r_i \geq r$, the inclusion $C_i+r_i \subseteq \cl (C'+r')$ rewrites as $C_i+r_i-r \subseteq \mathfrak D$. Taking unions over $i \in I$, then closures, we obtain $\mathfrak C \subseteq \mathfrak D$. Since $\mathfrak C = \cl (C)$, $C$ is included in $\mathfrak D$, too, and that means that $C+r \subseteq \cl (C'+r')$. By Lemma~\ref{lemma:dH}~(3), $(C, r) \leq^{\dH^+} (C', r')$, so $(C, r)$ is the least upper bound of ${(C_i, r_i)}_{i \in I}$.
\qed \subsection{Algebraicity} \label{sec:H:algebraicity} \begin{lem} \label{lemma:H:simple:center} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. For all center points $x_1$, \ldots, $x_n$, $\dc \{x_1, \cdots, x_n\}$ is a center point of $\Hoarez X, \dH$, and also of $\Hoare X, \dH$ if $n \geq 1$. \end{lem} \proof The proof is very similar to Lemma~\ref{lemma:V:simple:center}. Let $C_0 = \dc \{x_1, \cdots, x_n\}$. For every $h \in \Lform X$, $F^{C_0} (h) = \max_{j=1}^n h (x_j)$, where the maximum is taken to be $0$ if $n=0$. Let $U = B^{\dKRH^+}_{(F^{C_0}, 0), <\epsilon}$. $U$ is upwards-closed: if $(F^C, r) \leq^{\dKRH^+} (F^{C'}, r')$ and $(F^C, r) \in U$, then $\dKRH (F^C, F^{C'}) \leq r-r'$ and $\dKRH (F^{C_0}, F^C) < \epsilon - r$, so $\dKRH (F^{C_0}, F^{C'}) < \epsilon - r'$ by the triangular inequality, and that means that $(F^{C'}, r')$ is in $U$. To show that $U$ is Scott-open, consider a directed family ${(C_i, r_i)}_{i \in I}$ in $\mathbf B (\Hoarez X, \dH)$ (resp., $\mathbf B (\Hoare X, \dH)$) with supremum $(C, r)$, and assume that $(F^C, r)$ is in $U$. By Proposition~\ref{prop:HX:Ycomplete}, directed suprema of formal balls are naive suprema, so $r = \inf_{i \in I} r_i$ and $C$ is characterized by the fact that, for every $h \in \Lform_1 (X, d)$, $F^C (h) = \sup_{i \in I} (F^{C_i} (h) + r - r_i)$. Since $(F^C, r)$ is in $U$, $\dKRH (F^{C_0}, F^C) < \epsilon - r$, so $\epsilon > r$ and $F^{C_0} (h) - \epsilon + r < F^C (h)$. Therefore, for every $h \in \Lform_1 (X, d)$, there is an index $i \in I$ such that $F^{C_0} (h) - \epsilon + r < F^{C_i} (h) + r - r_i$, or equivalently: \begin{equation} \label{eq:H:A} \max_{j=1}^n h (x_j) < F^{C_i} (h) + \epsilon - r_i. \end{equation} Moreover, since $\epsilon > r = \inf_{i \in I} r_i$, we may assume that $i$ is so large that $\epsilon > r_i$. Let $V_i$ be the set of all $h \in \Lform_1 (X, d)$ satisfying (\ref{eq:H:A}). 
Each $V_i$ is the inverse image of $[0, \epsilon - r_i[$ by the map $h \mapsto \dreal (\max_{j=1}^n h (x_j), F^{C_i} (h))$, which is continuous from $\Lform_1 (X, d)^\patch$ to $(\overline{\mathbb{R}}_+)^\dG$ by Proposition~\ref{prop:dKRH:cont}~(2). Hence ${(V_i)}_{i \in I}$ is an open cover of $\Lform_1 (X, d)^\patch$. The latter is a compact space by Lemma~\ref{lemma:cont:Lalpha:retr}~(4). Hence we can extract a finite subcover ${(V_i)}_{i \in J}$: for every $h \in \Lform_1 (X, d)$, there is an index $i$ in the finite set $J$ such that (\ref{eq:H:A}) holds. By directedness, one can require that $i$ be the same for all $h$. This shows that $(F^{C_i}, r_i)$ is in $U$, proving the claim. \qed \begin{rem} \label{rem:Ha:simple:center} A similar result holds for $\dH^a$ instead of $\dH$, $a \in \mathbb{R}_+$, $a > 0$: on a continuous Yoneda-complete quasi-metric space $X, d$, and for all center points $x_1$, \ldots, $x_n$, $\dc \{x_1, \cdots, x_n\}$ is a center point of $\Hoarez X, \dH^a$, and also of $\Hoare X, \dH^a$ if $n \geq 1$. The proof is as for Lemma~\ref{lemma:H:simple:center}. \end{rem} \begin{thm}[Algebraicity of Hoare powerdomains] \label{thm:H:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with strong basis $\mathcal B$. The spaces $\Hoarez X, \dH$ and $\Hoare X, \dH$ are algebraic Yoneda-complete. Every closed set of the form $\dc \{x_1, \cdots, x_n\}$ where every $x_i$ is a center point (and with $n \geq 1$ in the case of $\Hoare X$) is a center point of $\Hoarez X, \dH$ (resp., $\Hoare X, \dH$), and those form a strong basis, even when each $x_i$ is taken from $\mathcal B$. \end{thm} \proof $\Hoarez X, \dH$ and $\Hoare X, \dH$ are Yoneda-complete by Proposition~\ref{prop:HX:Ycomplete}, and $\dc \{x_1, \cdots, x_n\}$ is a center point as soon as every $x_i$ is a center point, by Lemma~\ref{lemma:H:simple:center}. Let $C \in \Hoarez X$.
We wish to show that $(C, 0)$ is the supremum of a directed family of formal balls $(C_i, r_i)$ below $(C, 0)$, where every $C_i$ is the downward closure of finitely many center points. In case $C = \emptyset$, this is easy: take the family $\{(\emptyset, 0)\}$. Hence assume that $C$ is non-empty. Consider the family $D$ of all formal balls $(\dc \{x_1, x_2, \cdots, x_n\}, r)$ where $n \geq 1$, every $x_j$ is in $\mathcal B$, and $d (x_j, C) < r$. $D$ is a non-empty family. Indeed, since $C$ is non-empty, pick $x$ from $C$. By Lemma~\ref{lemma:B:basis}, there is a formal ball $(x_1, r) \ll (x, 0)$ (i.e., $d (x_1, x) < r$) where $x_1$ is in $\mathcal B$. Since $x_1$ is a center point, $d (x_1, C) = \inf_{z \in C} d (x_1, z)$ \cite[Proposition~6.12]{JGL:formalballs}, so $(\dc x_1, r)$ is in $D$. To show that $D$ is directed, let $(\dc \{x_1, \cdots, x_m\}, r)$ and $(\dc \{y_1, \cdots, y_n\}, s)$ be two elements of $D$. By definition, $d (x_j, C) < r$ and $d (y_k, C) < s$ for all indices $j$ and $k$. Since there are finitely many points $x_j$ and $y_k$, we can find $\epsilon > 0$ such that $d (x_j, C) < r-\epsilon$ and $d (y_k, C) < s-\epsilon$ for all indices $j$ and $k$. For each $j$, since $d (x_j, C) < r-\epsilon$, there is a point $z_j \in C$ such that $d (x_j, z_j) < r-\epsilon$. (We use Proposition~6.12 of loc.cit.\ once again.) Since $x_j$ is a center point, $B^{d^+}_{(x_j, 0), <r-\epsilon}$ is Scott-open. It also contains $(z_j, 0)$. By Lemma~\ref{lemma:B:basis}, there is a formal ball $(x'_j, r'_j) \ll (z_j, 0)$, where $x'_j$ is in $\mathcal B$, $r'_j < \epsilon$, and $(x'_j, r'_j)$ is in $B^{d^+}_{(x_j, 0), <r-\epsilon}$. Indeed, one of the formal balls way-below $(z_j, 0)$ must be in the Scott-open set $B^{d^+}_{(x_j, 0), <r-\epsilon} \cap V_\epsilon$ (Lemma~\ref{lemma:Veps}). By construction, $d (x'_j, C) \leq d (x'_j, z_j) < r'_j < \epsilon$: the first inequality comes from $d (x'_j, C) = \inf_{z \in C} d (x'_j, z)$, which holds since $x'_j$ is a center point, and the second one is justified by the fact that $(x'_j, r'_j) \ll (z_j, 0)$.
Also $d (x_j, x'_j) \leq r-\epsilon-r'_j \leq r-\epsilon$, because $(x'_j, r'_j) \in B^{d^+}_{(x_j, 0), <r-\epsilon}$. Similarly, we find center points $y'_k$ such that $d (y'_k, C) < \epsilon$ and $d (y_k, y'_k) \leq s-\epsilon$ for every $k$. Let $E = \{x'_1, \cdots, x'_m, y'_1, \cdots, y'_n\}$. Then $(\dc E, \epsilon)$ is in $D$, and we claim that $(\dc \{x_1, \cdots, x_m\}, r)$ and $(\dc \{y_1, \cdots, y_n\}, s)$ are both below $(\dc E, \epsilon)$. We check the first of those claims: $\dH (\dc \{x_1, \cdots, x_m\}, \dc E) = \sup_{j=1}^m d (x_j, \dc E) \leq \sup_{j=1}^m d (x_j, x'_j) \leq r-\epsilon$, and this proves the claim. We may use Lemma~\ref{lemma:H:sup} to compute the supremum of $D$. Instead we verify directly that this is $(C, 0)$. Every element $(\dc \{x_1, \cdots, x_m\}, r)$ of $D$ is below $(C, 0)$. To verify this, we compute: \begin{eqnarray*} \dH (\dc \{x_1, \cdots, x_m\}, C) & = & \sup_{x' \in \dc \{x_1, \cdots, x_m\}} d (x', C) \\ & = & \sup\nolimits_{j=1}^m \sup_{x' \leq x_j} d (x', C) \\ & \leq & \sup\nolimits_{j=1}^m \sup_{x' \leq x_j} \underbrace {d (x', x_j)}_0 + d (x_j, C) \\ & = & \sup\nolimits_{j=1}^m d (x_j, C), \end{eqnarray*} and we use the fact that $d (x_j, C) < r$ for every $j$. This means that $(C, 0)$ is an upper bound of $D$, and it remains to show that it is the least. We first note that there are elements of $D$ with arbitrarily small radius. For that, we return to the argument we have given that $D$ is non-empty: pick $x \in C$, and use Lemma~\ref{lemma:B:basis} to find a formal ball $(x_1, r) \ll (x, 0)$ (i.e., $d (x_1, x) < r$) where $x_1$ is in $\mathcal B$. That lemma also guarantees that $r$ can be chosen as small as we wish, since we can choose $(x_1, r)$ in the open set $V_\epsilon$ for $\epsilon > 0$ as we wish, and we have seen that $(\dc x_1, r)$ is in $D$. Let $(C', r')$ be another upper bound of $D$.
Since $D$ contains formal balls of arbitrarily small radius, and all those radii are larger than or equal to $r'$ (because all the elements of $D$ are below $(C', r')$), $r'=0$. It remains to show that $C$ is included in $C'$, and this will prove that $(C, 0) \leq^{\dH^+} (C', 0)$, using Lemma~\ref{lemma:dH}~(2). To this end, we consider an arbitrary open subset $U$ that intersects $C$, say at $z$, and we assume for the sake of contradiction that it does not intersect $C'$. Since $z$ is not in $C'$, $(z, 0)$ is not in $\cl (C')$, by Lemma~\ref{lemma:cl}~(1), and Lemma~\ref{lemma:B:basis} gives us a formal ball $(x, \epsilon) \ll (z, 0)$ (i.e., $d (x, z) < \epsilon$) where $x$ is in $\mathcal B$; moreover, since the complement of $\cl (C')$ is a Scott-open set containing $(z, 0)$, we can choose $(x, \epsilon)$ in that complement. Then $B^d_{x, <\epsilon} = \uuarrow (x, \epsilon) \cap X$ contains $z$ and does not intersect $C'$, since $\uuarrow (x, \epsilon)$ does not intersect $\cl (C')$: the complement of $\cl (C')$ is upwards-closed, being the complement of a downwards-closed set, and it contains $(x, \epsilon)$. Since $d (x, z) < \epsilon$, $d (x, C) < \epsilon$. Find $\epsilon' > 0$ such that $d (x, C) < \epsilon' < \epsilon$. The formal ball $(\dc x, \epsilon')$ is then in $D$, and since $(C', 0)$ is an upper bound of $D$, $(\dc x, \epsilon') \leq^{\dH^+} (C', 0)$. This rewrites as $d (x, C') \leq \epsilon' < \epsilon$. Using the fact that for a center point $x$, $d (x, C') = \inf_{z' \in C'} d (x, z')$, there is a point $z' \in C'$ such that $d (x, z') < \epsilon$. This contradicts the fact that $B^d_{x, <\epsilon}$ does not intersect $C'$. \qed \begin{rem} \label{rem:Ha:alg} The same result holds for $\dH^a$, for every $a \in \mathbb{R}_+$, $a > 0$: when $X, d$ is algebraic Yoneda-complete, $\Hoarez X, \dH^a$ and $\Hoare X, \dH^a$ are algebraic Yoneda-complete, with the same strong basis. The proof is the same, except in the final step, where we consider another upper bound $(C', r')$ of $D$. We need to make sure that we can take $\epsilon \leq a$. This is guaranteed by Lemma~\ref{lemma:B:basis}, which states that we can choose $\epsilon$ as small as we wish.
In the last lines of the proof, we obtain that $(\dc x, \epsilon') \leq^{\dH^{a+}} (C', 0)$, hence $\min (a, d (x, C')) \leq \epsilon' < \epsilon$. Since $\epsilon \leq a$, this implies that $d (x, C') < \epsilon$, leading to a contradiction as above. \end{rem} \subsection{Continuity} \label{sec:continuity-hoare} We proceed exactly as in Section~\ref{sec:cont-val-leq-1} for subnormalized continuous valuations. \begin{lem} \label{lemma:H:functor} Let $X, d$ and $Y, \partial$ be two continuous Yoneda-complete quasi-metric spaces, and $f \colon X, d \to Y, \partial$ be a $1$-Lipschitz continuous map. The map $\Hoare f \colon \Hoarez X, \dH \to \Hoarez Y, \dH$ defined by $\Hoare f (C) = cl (f [C])$ is $1$-Lipschitz continuous. Moreover, $F^{\Hoare f (C)} = \Prev f (F^C)$ for every $C \in \Hoarez X$. Similarly with $\Hoare$ instead of $\Hoarez$, with $\dH^a$ instead of $\dH$. \end{lem} Recall that $f [C]$ denotes the image of $C$ by $f$. \proof We first check that $F^{\Hoare f (C)} = \Prev f (F^C)$. For every $h \in \Lform X$, $F^{\Hoare f (C)} (h) = \sup_{y \in cl (f [C])} h (y) = \sup_{y \in f [C]} h (y)$ by Lemma~\ref{lemma:H:cl:sup}. That is equal to $\sup_{x \in C} h (f (x))$. We also have $\Prev f (F^C) (h) = F^C (h \circ f) = \sup_{x \in C} h (f (x))$. Therefore $F^{\Hoare f (C)} = \Prev f (F^C)$, which shows that the isometry of Proposition~\ref{prop:H:prev} is natural. By Lemma~\ref{lemma:Pf:lip}, $\Prev f$ is $1$-Lipschitz, so $\mathbf B^1 (\Prev f)$ is monotonic. Also, $\Prev f$ maps discrete sublinear previsions to discrete sublinear previsions, and normalized previsions to normalized previsions. By Proposition~\ref{prop:HX:Ycomplete}, $\Hoarez X, \dH$ and $\Hoare X, \dH$ are Yoneda-complete, hence through the isometry of Proposition~\ref{prop:H:prev}, the corresponding spaces of discrete sublinear previsions are Yoneda-complete as well. Moreover, directed suprema of formal balls are computed as naive suprema.
By Lemma~\ref{lemma:Pf:lipcont}, $\mathbf B^1 (\Prev f)$ preserves naive suprema, hence all directed suprema. It must therefore be Scott-continuous, and using the (natural) isometry of Proposition~\ref{prop:H:prev}, $\mathbf B^1 (\Hoare f)$ must also be Scott-continuous. Hence $\Hoare f$ is $1$-Lipschitz continuous. In the case of $\dH^a$, the argument is the same, except that we use Remark~\ref{rem:HX:dKRHa:Ycomplete} instead of Proposition~\ref{prop:HX:Ycomplete}. \qed Let $X, d$ be a continuous Yoneda-complete quasi-metric space. There is an algebraic Yoneda-complete quasi-metric space $Y, \partial$ and two $1$-Lipschitz continuous maps $r \colon Y, \partial \to X, d$ and $s \colon X, d \to Y, \partial$ such that $r \circ s = \identity X$. By Lemma~\ref{lemma:H:functor}, $\Hoarez r$ and $\Hoarez s$ are also $1$-Lipschitz continuous. Also, $\Hoarez r \circ \Hoarez s = \identity {\Hoarez X}$: through the isometry $C \mapsto F^C$, that boils down to $\Prev r \circ \Prev s = \identity \relax$, which is easily checked since $\Prev r (\Prev s (F^C)) (h) = \Prev s (F^C) (h \circ r) = F^C (h \circ r \circ s) = F^C (h)$, for all $C$ and $h$. Therefore $\Hoarez X, \dH$ is a $1$-Lipschitz continuous retract of $\Hoarez Y, \mH\partial$. (Similarly with $\dH^a$ and $\mH\partial^a$.) Theorem~\ref{thm:H:alg} states that $\Hoarez Y, \mH\partial$ and $\Hoare Y, \mH\partial$ (resp., $\mH\partial^a$, using Remark~\ref{rem:Ha:alg} instead) are algebraic Yoneda-complete, whence: \begin{thm}[Continuity for Hoare powerdomains] \label{thm:H:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The quasi-metric spaces $\Hoarez X, \dH$ and $\Hoarez X, \dH^a$ ($a \in \mathbb{R}_+$, $a > 0$) are continuous Yoneda-complete. Similarly with $\Hoare X$. \qed \end{thm} Together with Lemma~\ref{lemma:H:functor}, and Theorem~\ref{thm:H:alg} for the algebraic case, we obtain the following.
\begin{cor} \label{cor:H:functor} $\Hoarez, \dH$ defines an endofunctor on the category of continuous Yoneda-complete quasi-metric spaces and $1$-Lipschitz continuous maps. Similarly with $\Hoare$ instead of $\Hoarez$, with $\dH^a$ instead of $\dH$ ($a > 0$), or with algebraic instead of continuous. \qed \end{cor} \subsection{The Lower Vietoris Topology} \label{sec:lower-viet-topol} The lower Vietoris topology on $\Hoare X$, resp.\ $\Hoarez X$, is generated by subbasic open sets $\Diamond U = \{C \mid C \cap U \neq \emptyset\}$, where $U$ ranges over the open subsets of $X$. \begin{lem} \label{lemma:H:V=weak} Let $X, d$ be a standard quasi-metric space. The map $C \mapsto F^C$ is a homeomorphism of $\Hoarez X$ (resp., $\Hoare X$) with the lower Vietoris topology onto the space of discrete sublinear previsions on $X$ (resp., that are normalized) with the weak topology. \end{lem} \proof This is a bijection by Lemma~\ref{lemma:eH}. Fix $h \in \Lform X$, $a \in \mathbb{R}_+$. For every $C \in \Hoarez X$, $F^C$ is in $[h > a]$ if and only if $\sup_{x \in C} h (x) > a$, if and only if $h (x) > a$ for some $x \in C$, if and only if $C$ intersects $h^{-1} (]a, +\infty])$, namely, $C \in \Diamond h^{-1} (]a, +\infty])$. Therefore the bijection is continuous. In the other direction, for every open subset $U$, $\Diamond U$ is the set of all $C \in \Hoarez X$ such that $C$ intersects $\chi_U^{-1} (]1/2, +\infty])$, i.e., such that $F^C$ is in $[\chi_U > 1/2]$. The case of $\Hoare X$ is similar. \qed \begin{lem} \label{lemma:H:Vietoris} Let $X, d$ be a standard and Lipschitz regular quasi-metric space, or a continuous Yoneda-complete quasi-metric space, and let $a, a' > 0$ with $a \leq a'$. We have the following inclusions of topologies on $\Hoarez X$, resp.\ $\Hoare X$: \begin{quote} lower Vietoris $\subseteq$ $\dH^a$-Scott $\subseteq$ $\dH^{a'}$-Scott $\subseteq$ $\dH$-Scott.
\end{quote} \end{lem} \proof Considering Lemma~\ref{lemma:H:V=weak}, Remark~\ref{rem:HX:dKRHa}, and Proposition~\ref{prop:HX:Ycomplete}, this is a consequence of Proposition~\ref{prop:weak:dScott:a}. \qed \begin{prop} \label{prop:H:Vietoris} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space. The $\dH$-Scott topology, the $\dH^a$-Scott topology, for every $a \in \mathbb{R}_+$, $a > 0$, and the lower Vietoris topology all coincide on $\Hoarez X$, resp.\ $\Hoare X$. \end{prop} \proof Considering Lemma~\ref{lemma:H:Vietoris}, it remains to show that every $\dH$-Scott open subset is open in the lower Vietoris topology. Again we deal with $\Hoarez X$, and note that the case of $\Hoare X$ is completely analogous. The subsets $\mathcal V = \uuarrow (\dc \{x_1, \cdots, x_n\}, r)$, with $x_1$, \ldots, $x_n$ center points, form a base of the Scott topology on $\mathbf B (\Hoarez X, \dH)$, using Theorem~\ref{thm:H:alg}. It suffices to show that $\mathcal V \cap \Hoarez X$ is open in the lower Vietoris topology. For that, we use the implication (1)$\limp$(3) of Lemma~5.8 of \cite{JGL:formalballs}, which states that, in a continuous quasi-metric space $Y, \partial$, for every $\epsilon > 0$, for every center point $y \in Y$, $B^{\partial^+}_{(y, 0), <\epsilon} = \uuarrow (y, \epsilon)$. Here $Y = \Hoarez X$, $\partial = \dH$ is algebraic Yoneda-complete by Theorem~\ref{thm:H:alg}, and $y = \dc \{x_1, \cdots, x_n\}$ is a center point by Lemma~\ref{lemma:H:simple:center}, so $\uuarrow (\dc \{x_1, \cdots, x_n\}, r)$ is the open ball $B^{\dH^+}_{\dc \{x_1, \cdots, x_n\}, < r}$. Then, $C \in \Hoarez X$ is in $\mathcal V$ if and only if it is in $B^{\dH^+}_{\dc \{x_1, \cdots, x_n\}, < r}$, if and only if $\dH (\dc \{x_1, \cdots, x_n\}, C) < r$, if and only if $d (x_i, C) < r$ for every $i$, $1\leq i\leq n$, if and only if $C$ intersects all the open balls $B^d_{x_i, <r}$.
Since each $x_i$ is a center point, those open balls are open in the $d$-Scott topology, so $\mathcal V \cap \Hoarez X = \bigcap_{i=1}^n \Diamond B^d_{x_i, <r}$ is open in the lower Vietoris topology. \qed \begin{thm}[$\dH$ quasi-metrizes the lower Vietoris topology] \label{thm:H:Vietoris} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The $\dH$-Scott topology, the $\dH^a$-Scott topology, for every $a \in \mathbb{R}_+$, $a > 0$, and the lower Vietoris topology all coincide on $\Hoarez X$, resp.\ $\Hoare X$. \end{thm} \proof The proof is as for Theorem~\ref{thm:V:weak=dScott}. By \cite[Theorem~7.9]{JGL:formalballs}, $X, d$ is the $1$-Lipschitz continuous retract of an algebraic Yoneda-complete quasi-metric space $Y, \partial$. Call $s \colon X \to Y$ the section and $r \colon Y \to X$ the retraction. Using Proposition~\ref{prop:H:prev}, we confuse $\Hoarez X$ with the corresponding space of discrete sublinear previsions, and similarly for $\Hoare X$, $\Hoarez Y$, $\Hoare Y$. Then $\Prev s$ and $\Prev r$ form a $1$-Lipschitz continuous section-retraction pair by Lemma~\ref{lemma:Pf:lip}, and in particular $\Prev s$ is an embedding of $\Hoarez X$ into $\Hoarez Y$ with their $\dH$-Scott topologies (similarly with $\Hoare$, or with $\dH^a$ in place of $\dH$). However, $s$ and $r$ are also just continuous, by Proposition~\ref{prop:cont}, so $\Prev s$ and $\Prev r$ also form a section-retraction pair between the same spaces, this time with their weak topologies (as spaces of previsions), by Fact~\ref{fact:Pf:weak}, that is, with their lower Vietoris topologies, by Lemma~\ref{lemma:H:V=weak}. By Proposition~\ref{prop:H:Vietoris}, the two topologies on $\Hoarez Y$ (resp., $\Hoare Y$) are the same. Fact~\ref{fact:retract:two} then implies that the two topologies on $\Hoarez X$ (resp., $\Hoare X$) are the same as well. 
\qed \section{The Smyth Powerdomain} \label{sec:smyth-powerdomain} \subsection{The $\dQ$ Quasi-Metric} \label{sec:dq-quasi-metric} The Smyth powerdomain of $X$ is the space of all non-empty compact saturated subsets $Q$ of $X$. Instead of defining a specific quasi-metric on such subsets, as we did with $\dH$ on the Hoare powerdomain, we shall reuse $\dKRH$, on the isomorphic space of discrete superlinear previsions on $X$. The following is inspired by \cite[Lemma~4.7, item~2]{jgl-jlap14}, as was Lemma~\ref{lemma:eH}. \begin{lem} \label{lemma:uQ} Let $X$ be a topological space. For each non-empty subset $Q$ of $X$, let $F_Q (h) = \inf_{x \in Q} h (x)$ for every $h \in \Lform X$. \begin{enumerate} \item If $Q$ is compact saturated and non-empty, then $F_Q$ is a normalized discrete superlinear prevision; moreover, $F_Q (h) = \min_{x \in Q} h (x)$ for every $h \in \Lform X$. \item Conversely, if $X$ is sober, then every normalized discrete superlinear prevision is of the form $F_Q$ for some unique non-empty compact saturated set $Q$. \end{enumerate} \end{lem} \proof 1. For every $h \in \Lform X$, $\{h (x) \mid x \in Q\}$ is compact in $\overline{\mathbb{R}}_+$, and non-empty, hence has a least element. It follows that there is an $x \in Q$ such that $h (x) = F_Q (h)$. This justifies the second subclaim, that $F_Q (h) = \min_{x \in Q} h (x)$. $F_Q$ is clearly positively homogeneous and monotonic. To show Scott-continuity, let $h$ be written as the supremum of a directed family ${(h_i)}_{i \in I}$ in $\Lform X$. $F_Q (h) \geq \sup_{i \in I} F_Q (h_i)$ is a consequence of monotonicity. If the inequality were strict, then there would be a number $r \in \mathbb{R}_+$ such that $F_Q (h) > r \geq \sup_{i \in I} F_Q (h_i)$. The inequality $F_Q (h) > r$ means that the image of $Q$ by $h$ is contained in the Scott-open $]r, +\infty]$. For every $x \in Q$, $h (x) = \sup_{i \in I} h_i (x) > r$, so $h_i (x) > r$ for some $i \in I$.
It follows that $Q$ is included in the union of the open subsets $h_i^{-1} (]r, +\infty])$. Since $Q$ is compact and the family of the latter open subsets is directed, $Q$ is included in one of them, say $h_i^{-1} (]r, +\infty])$. That means that the image of $Q$ by $h_i$ is included in $]r, +\infty]$, and that implies that $F_Q (h_i) = \min_{x \in Q} h_i (x) > r$, contradicting $r \geq \sup_{i \in I} F_Q (h_i)$. We check that $F_Q$ is discrete. Let $x \in Q$ be such that $h (x) = F_Q (h)$. For every $f \in \Lform \overline{\mathbb{R}}_+$ (strict or not), $F_Q (f \circ h)$ is trivially less than or equal to $f (h (x))$. For every $x' \in Q$, $h (x') \geq h (x)$, so $(f \circ h) (x') \geq f (h (x))$, from which it follows that $F_Q (f \circ h) = \min_{x' \in Q} (f \circ h) (x') \geq f (h (x)) = f (F_Q (h))$. Since this holds even when $f$ is not strict, it holds for the map $f (x) = \alpha + x$, for any $\alpha \in \mathbb{R}_+$. In other words, $F_Q (\alpha.\mathbf 1 + h) = \alpha + F_Q (h)$, showing that $F_Q$ is normalized. Finally, we show that $F_Q$ is superlinear. Fix $h, h' \in \Lform X$. Then $F_Q (h+h') = \min_{x \in Q} (h (x) + h' (x)) \geq \min_{x \in Q} h (x) + \min_{x \in Q} h' (x) = F_Q (h) + F_Q (h')$. 2. Now assume $X$ sober, and let $F$ be a normalized discrete superlinear prevision on $X$. Let $\mathcal F$ be the family of open subsets $U$ of $X$ such that $F (\chi_U)=1$. $\mathcal F$ is upwards-closed. Since $F$ is normalized, $\mathcal F$ contains $X$ itself, hence is non-empty. Note that, for every open subset $U$ of $X$, $F (\chi_U)$ is either equal to $1$ or to $0$. This is because $F$ is discrete: for every $t \in ]0, 1[$, $F (\chi_U) = F (\chi_{]t, +\infty]} \circ \chi_U) = \chi_{]t, +\infty]} (F (\chi_U)) \in \{0, 1\}$. Using discreteness again and superlinearity, we show that, if $U \subseteq V$, then $F (\chi_U) + F (\chi_V) = F (\chi_U+\chi_V)$. The right-hand side is larger than or equal to the left-hand side by superlinearity.
Imagine it is strictly larger. Let $f \in \Lform \overline{\mathbb{R}}_+$ map any $t \leq 1/2$ to $0$, any $t \in ]1/2, 3/2]$ to $1$, and any $t > 3/2$ to $2$. By discreteness, $F (\chi_U + \chi_V) = F (f \circ (\chi_U + \chi_V)) = f (F (\chi_U + \chi_V))$, which shows that $F (\chi_U + \chi_V) \in \{0, 1, 2\}$. Since $F (\chi_U) + F (\chi_V) < F (\chi_U+\chi_V)$, $F (\chi_U)$ and $F (\chi_V)$ cannot both be equal to $1$, and since $F (\chi_U) \leq F (\chi_V)$, at least $F (\chi_U)$ is equal to $0$. Hence $F (\chi_V) < F (\chi_U+\chi_V)$. Since $F (\chi_U + \chi_V) \leq F (2 \chi_V)$, we obtain that $F (\chi_U + \chi_V)$ is a natural number between $F (\chi_V)$ (exclusive) and $2 F (\chi_V)$ (inclusive). This implies that $F (\chi_V)=1$ and $F (\chi_U + \chi_V) = 2$. Now let $f = \chi_{]3/2, +\infty]}$, and observe that $f \circ (\chi_U + \chi_V) = \chi_U$. Therefore $F (f \circ (\chi_U + \chi_V)) = F (\chi_U) = 0$. By discreteness, $F (f \circ (\chi_U + \chi_V)) = f (F (\chi_U + \chi_V)) = f (2) = 1$, contradiction. Using that, we show that $\mathcal F$ is a filter. It remains to show that for any two elements $U_1$, $U_2$ of $\mathcal F$, $U_1 \cap U_2$ is also in $\mathcal F$. Since $U_1 \cap U_2 \subseteq U_1 \cup U_2$, by the equality we have just shown, $F (\chi_{U_1 \cup U_2}) + F (\chi_{U_1 \cap U_2}) = F (\chi_{U_1 \cup U_2} + \chi_{U_1 \cap U_2}) = F (\chi_{U_1} + \chi_{U_2})$, which is larger than or equal to $F (\chi_{U_1}) + F (\chi_{U_2})$ by superlinearity, that is, to $2$. Hence $F (\chi_{U_1 \cap U_2}) \geq 2 - F (\chi_{U_1 \cup U_2}) \geq 1$, where the latter inequality comes from the fact that $F (\chi_{U_1 \cup U_2}) $ can only be equal to $0$ or to $1$. Since $F (\chi_{U_1 \cap U_2}) $ can only be equal to $0$ or to $1$, its value is $1$, so $U_1 \cap U_2$ is in $\mathcal F$. 
Finally, $\mathcal F$ is Scott-open, meaning that for every directed family ${(U_i)}_{i \in I}$ of open subsets whose union is in $\mathcal F$, some $U_i$ is in $\mathcal F$. This follows from the Scott-continuity of $F$. We now appeal to the Hofmann-Mislove Theorem: every Scott-open filter of open subsets of a sober space is the filter of open neighborhoods of a unique compact saturated subset $Q$. Then, for every open subset $U$, $Q \subseteq U$ if and only if $U \in \mathcal F$ if and only if $F (\chi_U) = 1$. Since $F (0)=0$, the empty set is not in $\mathcal F$, so $Q$ is non-empty. We check that $F = F_Q$. Let $h \in \Lform X$. For every $t \in \mathbb{R}_+$, let $f = \chi_{]t, +\infty]}$. Since $F$ is discrete, $F (f \circ h) = f (F (h))$. Let $U = h^{-1} (]t, +\infty])$, so that $f \circ h = \chi_U$. Then $F (h) > t$ if and only if $f (F (h)) = 1$, if and only if $F (\chi_U)=1$ if and only if $Q \subseteq U$. If $Q \subseteq U$, then $F_Q (h) = \min_{x \in Q} h (x) > t$, and conversely. Therefore $F (h) > t$ if and only if $F_Q (h) > t$, for all $h$ and $t$, whence $F = F_Q$. Finally, uniqueness is obvious: if $F = F_Q$ for some non-empty compact saturated subset $Q$, then $Q$ must be such that for every open subset $U$, $Q \subseteq U$ if and only if $F (\chi_U)=1$. \qed \begin{defi}[$\dQ$] \label{defn:dQ} Let $X, d$ be a quasi-metric space. For any two non-empty compact saturated subsets $Q$, $Q'$ of $X$, let $\dQ (Q, Q') = \dKRH (F_Q, F_{Q'})$. \end{defi} \begin{rem} \label{defn:dQa} It of course makes sense to define $\dQ^a (Q, Q')$ as $\dKRH^a (F_Q, F_{Q'})$ as well. \end{rem} We give a more concrete description of $\dQ$ in Lemma~\ref{lemma:dQ}. We start with an easy observation. \begin{lem} \label{lemma:d:lip} Let $X, d$ be a standard quasi-metric space. For every point $x' \in X$, the map $d (\_, x')$ is in $\Lform_1 (X, d)$. \end{lem} \proof Let $h = d (\_, x')$. 
This is a $1$-Lipschitz Yoneda-continuous map \cite[Exercise~7.4.36]{JGL-topology}, hence it is $1$-Lipschitz continuous, since $X, d$ is standard. Alternatively, $h$ is also equal to the map $d (\_, C)$ where $C$ is the closed set $\dc x'$, and that is $1$-Lipschitz continuous by \cite[Lemma~6.11]{JGL:formalballs}. \qed \begin{lem} \label{lemma:dQ} Let $X, d$ be a standard quasi-metric space. For all non-empty compact saturated subsets $Q$, $Q'$, $\dQ (Q, Q') = \sup_{x' \in Q'} d (Q, x')$, where $d (Q, x') = \inf_{x \in Q} d (x, x')$. \end{lem} \proof We first show that $\dQ (Q, Q') \leq \sup_{x' \in Q'} d (Q, x')$. It suffices to show that for every $r \in \mathbb{R}_+$ such that $r < \dQ (Q, Q')$, $r \leq d (Q, x')$ for some $x' \in Q'$. Since $r < \dQ (Q, Q') = \dKRH (F_Q, F_{Q'})$, there is an $h \in \Lform_1 (X, d)$ such that $F_Q (h) > F_{Q'} (h) + r$. Let $x' \in Q'$ be such that $F_{Q'} (h) = h (x')$, using Lemma~\ref{lemma:uQ}, item~1. For every $x \in Q$, $h (x) > h (x') + r $, so, since $h$ is $1$-Lipschitz, $d (x, x') > r$. It follows that $d (Q, x') \geq r$. In the reverse direction, we fix $x' \in Q'$, $r < d (Q, x')$, and we show that there is an $h \in \Lform_1 (X, d)$ such that $F_Q (h) \geq r + F_{Q'} (h)$. Since $r < d (Q, x')$, $d (x, x') > r$ for every $x \in Q$. Let $h = d (\_, x')$. This is in $\Lform_1 (X, d) $ by Lemma~\ref{lemma:d:lip}, and $h (x) > r$ for every $x \in Q$. It follows that $F_Q (h) = \min_{x \in Q} h (x) > r$, whereas $F_{Q'} (h) \leq h (x')=0$. \qed Lemma~\ref{lemma:dQ} means that $\dQ (Q, Q')$ is given by one half of the familiar Hausdorff formula $\dQ (Q, Q') = \sup_{x' \in Q'} \inf_{x \in Q} d (x, x')$. The quasi-metric $\dH$ is given by a formula that is almost the other half, $\sup_{x \in Q} \inf_{x' \in Q'} d (x, x')$. \begin{rem} \label{rem:dQa} We also have $\dQ^a (Q, Q') = \min (a, \dQ (Q, Q'))$, for every $a \in \mathbb{R}_+$, $a > 0$. 
In one direction, $\dQ^a (Q, Q') = \dKRH^a (F_Q, F_{Q'}) \leq a$ and $\dQ^a (Q, Q') = \dKRH^a (F_Q, F_{Q'}) \leq \dKRH (F_Q, F_{Q'}) = \dQ (Q, Q')$, so $\dQ^a (Q, Q') \leq \min (a, \dQ (Q, Q'))$. In the other direction, we fix $r < \min (a, \dQ (Q, Q'))$. In particular, $r < a$ and there is an $x' \in Q'$ such that $r < d (Q, x')$, whence $d (x, x') > r$ for every $x \in Q$. Let $h = \min (a, d (\_, x')) \in \Lform_1^a (X, d)$: $F_Q (h) = \min (a, \min_{x \in Q} d (x, x')) > r$, and $F_{Q'} (h) = 0$, so $\dQ^a (Q, Q') = \dKRH^a (F_Q, F_{Q'}) \geq r$. Since $r$ is arbitrary, $\min (a, \dQ (Q, Q')) \leq \dQ^a (Q, Q')$. \end{rem} \begin{rem} \label{rem:Q:root} For every standard quasi-metric space $X, d$ with an $a$-root $x$ ($a \in \mathbb{R}_+$, $a > 0$), $\Smyth X, \dQ$ has an $a$-root, namely $\upc x$. Indeed, fix $Q \in \Smyth X$. For every $y \in X$, $d (x, y) \leq a$. For every $x' \in \upc x$, $d (x, y) \leq d (x, x') + d (x', y) = d (x', y)$, so $d (\upc x, y) = \inf_{x' \in \upc x} d (x', y) = d (x, y)$. By Lemma~\ref{lemma:dQ}, $\dQ (\upc x, Q) = \sup_{y \in Q} d (\upc x, y) = \sup_{y \in Q} d (x, y) \leq a$. \end{rem} \begin{lem} \label{lemma:d(Q,x):props} Let $X, d$ be a standard quasi-metric space. For every non-empty compact saturated subset $Q$ of $X$, for every $x' \in X$, the following hold: \begin{enumerate} \item there is an $x \in Q$ such that $d (Q, x') = d (x, x')$; $d (Q, x') = \min_{x \in Q} d (x, x')$; \item $d (Q, x') = 0$ if and only if $x' \in Q$; \item $d (Q, x') \leq d (Q, y') + d (y', x')$ for every $y' \in X$. \end{enumerate} \end{lem} \proof 1. Because $d (\_, x')$ is in $\Lform_1 (X, d) \subseteq \Lform X$ (Lemma~\ref{lemma:d:lip}), it reaches its minimum on the compact set $Q$. 2. Let $x \in Q$ be such that $d (Q, x') = d (x, x')$, by (1). If $d (Q, x')=0$, then $d (x, x')=0$, so $x \leq x'$, and since $Q$ is saturated, $x'$ is in $Q$. Conversely, if $x' \in Q$, then $d (x', x') = 0$ implies that $d (Q, x')=0$. 3.
Let $y'$ be an arbitrary point of $X$. For every $x \in Q$, $d (x, x') \leq d (x, y') + d (y', x')$, and we obtain (3) by taking minima over $x \in Q$ on both sides. \qed \begin{lem} \label{lemma:dQ:spec} Let $X, d$ be a standard quasi-metric space. For all non-empty compact saturated subsets $Q$, $Q'$ of $X$, $\dQ (Q, Q')=0$ if and only if $Q \supseteq Q'$. \end{lem} \proof $\dQ (Q, Q') = 0$ if and only if $\sup_{x' \in Q'} d (Q, x')=0$ by Lemma~\ref{lemma:dQ}, if and only if $d (Q, x')=0$ for every $x' \in Q'$. By Lemma~\ref{lemma:d(Q,x):props}, this is equivalent to requiring that $x' \in Q$ for every $x' \in Q'$. \qed \begin{rem} \label{rem:dQa:spec} Because of Remark~\ref{rem:dQa}, it also follows that for every $a \in \mathbb{R}_+$, $a > 0$, $\dQ^a (Q, Q') = 0$ if and only if $Q \supseteq Q'$. \end{rem} \subsection{Completeness} \label{sec:Q:completeness} \begin{lem} \label{lemma:Q:supp} Let $X, d$ be a standard quasi-metric space. For every compact saturated subset $\mathcal Q$ of $\mathbf B (X, d)$ such that $F_{\mathcal Q}$ is supported on $V_{1/2^n}$ for every $n \in \nat$, \begin{enumerate} \item $\mathcal Q$ is included in $X$; \item $\mathcal Q$ is a compact saturated subset of $X$; \item $F_{\mathcal Q}$ is supported on $X$. \end{enumerate} \end{lem} \proof If $\mathcal Q$ is empty, this is obvious, so let us assume that $\mathcal Q$ is non-empty. For any two real numbers $r, s > 0$, $\chi_{V_r}$ and $\chi_{V_s}$ coincide on $V_{1/2^n}$, where $n$ is any natural number large enough that $1/2^n \leq r, s$. Therefore $F_{\mathcal Q} (\chi_{V_r}) = F_{\mathcal Q} (\chi_{V_s})$. It follows that $F_{\mathcal Q} (\chi_{V_r})$ is a value $a$ that does not depend on $r > 0$. The union $\bigcup_{r \in \mathbb{R}_+ \smallsetminus \{0\}} V_r$ is the whole space of formal balls, so $\sup_{r \in \mathbb{R}_+ \smallsetminus \{0\}} \chi_{V_r} = \mathbf 1$.
Since $F_{\mathcal Q}$ is Scott-continuous, $\sup_{r \in \mathbb{R}_+ \smallsetminus \{0\}} F_{\mathcal Q} (\chi_{V_r}) = F_{\mathcal Q} (\mathbf 1) = 1$ (since $\mathcal Q$ is non-empty), and that is also equal to $\sup_{r \in \mathbb{R}_+ \smallsetminus \{0\}} a = a$. We have obtained that $F_{\mathcal Q} (\chi_{V_r}) = 1$ for every $r > 0$, namely that $\mathcal Q \subseteq V_r$ for every $r > 0$. As a consequence $\mathcal Q$ is included in $\bigcap_{r > 0} V_r = X$, showing (1). (2) follows from Lemma~\ref{lemma:compact:subspace}. (3) is now obvious. \qed Let $\Smyth X$ be the set of all non-empty compact saturated subsets of $X$, with the $\dQ$ quasi-metric. $\Smyth X$ is the \emph{Smyth powerdomain} of $X$. By Lemma~\ref{lemma:uQ}, if $X$ is sober in its $d$-Scott topology, then the bijection $Q \mapsto F_Q$ is an isometry of $\Smyth X, \dQ$ onto the set of normalized discrete superlinear previsions on $X$, with the usual $\dKRH$ quasi-metric. \begin{prop}[Yoneda-completeness, Smyth powerdomain] \label{prop:QX:Ycomplete} Let $X, d$ be a sober standard quasi-metric space. Then $\Smyth X, \dQ$ is Yoneda-complete. Suprema of directed families of formal balls of previsions are naive suprema. \end{prop} \proof This follows from Proposition~\ref{prop:supp:complete}, since the assumption on supports is verified by Lemma~\ref{lemma:Q:supp}. \qed \begin{rem} \label{rem:QX:Ycomplete:a} Similarly, when $X, d$ is standard and sober, $\Smyth X, \dQ^a$ is Yoneda-complete for every $a \in \mathbb{R}_+$, $a > 0$. \end{rem} As a first example of sober standard quasi-metric spaces, one can find all metric spaces. They are all standard, and they are sober, being $T_2$. They are also continuous. A second family of sober standard quasi-metric spaces consists of the continuous Yoneda-complete quasi-metric spaces, in particular all algebraic Yoneda-complete quasi-metric spaces.
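As an aside, the half of the Hausdorff formula defining $\dQ$ (Lemma~\ref{lemma:dQ}) is easy to experiment with on finite sets of points. The following Python sketch is purely illustrative and not part of the development: the asymmetric quasi-metric $d(x, y) = \max(x - y, 0)$ on the reals and the two sample sets are our own choices.

```python
# Illustrative sketch: the two Hausdorff "halves" on finite point sets,
# for the asymmetric quasi-metric d(x, y) = max(x - y, 0) on the reals
# (so d(x, y) = 0 exactly when x <= y in the usual ordering).

def d(x, y):
    return max(x - y, 0.0)

def d_Q(Q, Qp):
    # dQ(Q, Q') = sup_{x' in Q'} inf_{x in Q} d(x, x')   (Lemma lemma:dQ)
    return max(min(d(x, xp) for x in Q) for xp in Qp)

def d_H(C, Cp):
    # the other half: sup_{x in C} inf_{x' in C'} d(x, x')
    return max(min(d(x, xp) for xp in Cp) for x in C)

Q, Qp = [0.0, 1.0, 2.0], [1.0, 3.0]
# d_Q(Q, Qp) = 0.0: every point of Qp lies above some point of Q,
# as Lemma lemma:dQ:spec predicts on (upward closures of) these sets.
print(d_Q(Q, Qp), d_Q(Qp, Q))   # 0.0 1.0
print(d_H(Q, Qp), d_H(Qp, Q))   # 0.0 1.0
```

With this $d$, the specialization ordering is the usual ordering of the reals, so $\dQ (Q, Q') = 0$ detects that $Q'$ is contained in the upward closure of $Q$, in accordance with Lemma~\ref{lemma:dQ:spec}, while the other half behaves like $\dH$ on downward closures.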
We have already mentioned that all Yoneda-complete quasi-metric spaces are standard \cite[Proposition~2.2]{JGL:formalballs}; that continuous Yoneda-complete quasi-metric spaces are sober is Proposition~4.1 of loc.\ cit. Sobriety could of course be dispensed with if we worked directly on normalized discrete superlinear previsions, instead of the more familiar elements of $\Smyth X$. \subsection{$\dQ$-Limits} \label{sec:dq-limits} As for Lemma~\ref{lemma:H:sup}, there is a more direct expression of directed suprema of formal balls over $\Smyth X, \dQ$, i.e., of $\dQ$-limits, than by relying on naive suprema. Recall that $Q+r$ is the set $\{(x, r) \mid x \in Q\}$. \begin{lem} \label{lemma:Q:sup} Let $X, d$ be a sober standard quasi-metric space. Then: \begin{enumerate} \item In $\mathbf B (\Smyth X, \dQ)$, $(Q, r) \leq^{\dQ^+} (Q', r')$ if and only if $Q'+r' \subseteq \upB (Q+r)$, where $\upB$ is upward closure in $\mathbf B (X, d)$. \item If $X, d$ is continuous Yoneda-complete, then for every directed family ${(Q_i, r_i)}_{i \in I}$, the supremum $(Q, r)$ is given by $r = \inf_{i \in I} r_i$ and $Q = \bigcap_{i \in I} \upB (Q_i + r_i - r)$. \end{enumerate} \end{lem} \proof (1) If $(Q, r) \leq^{\dQ^+} (Q', r')$, then $\dQ (Q, Q') \leq r-r'$, so $r \geq r'$ and for every $x' \in Q'$, $d (Q, x') \leq r-r'$, by Lemma~\ref{lemma:dQ}. Using Lemma~\ref{lemma:d(Q,x):props}~(1), there is an $x \in Q$ such that $d (x, x') \leq r-r'$. In particular, $(x, r) \leq^{d^+} (x', r')$. This shows that every element of $Q'+r'$ is in $\upB (Q+r)$. Conversely, if every element of $Q'+r'$ is in $\upB (Q+r)$, then in particular $r \geq r'$. Indeed, since $Q'$ is non-empty, we can find $x' \in Q'$, and $(x', r') \in \upB (Q+r)$. There is an $x \in Q$ such that $(x, r) \leq^{d^+} (x', r')$, and that implies $r \geq r'$. We rewrite the inclusion $Q'+r' \subseteq \upB (Q+r)$ as: for every $x' \in Q'$, there is an $x \in Q$ such that $(x, r) \leq^{d^+} (x', r')$, i.e., $d (x, x') \leq r-r'$.
It follows that $\dQ (Q, Q') \leq r-r'$ by Lemma~\ref{lemma:dQ}, whence $(Q, r) \leq^{\dQ^+} (Q', r')$. (2) Let $r = \inf_{i \in I} r_i$ and $Q = \bigcap_{i \in I} \upB (Q_i + r_i - r)$. Since $X, d$ is standard, the map $\_ + r_i - r$ is Scott-continuous, and since $\eta_X$ is also continuous, the image $Q_i + r_i - r$ of $Q_i$ by their composition is compact in $\mathbf B (X, d)$. Hence $\upB (Q_i + r_i - r)$ is compact saturated in $\mathbf B (X, d)$. Let $i \sqsubseteq j$ if and only if $(Q_i, r_i) \leq^{\dQ^+} (Q_j, r_j)$. If $i \sqsubseteq j$ then $(Q_i, r_i-r) \leq^{\dQ^+} (Q_j, r_j-r)$, so $\upB (Q_i + r_i - r) \supseteq \upB (Q_j + r_j - r)$ by (1). It follows that the family ${(\upB (Q_i + r_i - r))}_{i \in I}$ is filtered. The radius of $\upB (Q_i + r_i - r)$, as defined in Lemma~\ref{lemma:Q:radius}, is equal to $r_i - r$, and $\inf_{i \in I} (r_i-r)=0$. Therefore, by Lemma~\ref{lemma:Xd:compact:1}, $Q$ is non-empty, compact, and saturated in $\mathbf B (X, d)$, hence an element of $\Smyth X$. We claim that $\upB (Q+r) = \bigcap_{i \in I} \upB (Q_i + r_i)$. For every element $(x, r)$ of $Q+r$ (i.e., $x \in Q$), for every $i \in I$, $(x, 0)$ lies above some element $(x_i, r_i-r)$ where $x_i \in Q_i$; so $d (x_i, x) \leq r_i-r$, which implies $(x_i, r_i) \leq^{d^+} (x, r)$, hence $(x, r) \in \upB (Q_i + r_i)$. That shows $Q+r \subseteq \bigcap_{i \in I} \upB (Q_i + r_i)$. Since the right-hand side is upwards-closed, $\upB (Q+r)$ is also included in $\bigcap_{i \in I} \upB (Q_i + r_i)$. In the reverse direction, it is enough to show that every Scott-open neighborhood $\mathcal U$ of $\upB (Q+r)$ contains $\bigcap_{i \in I} \upB (Q_i + r_i)$. Since $X, d$ is standard, the map $\_ + r$ is Scott-continuous, so $(\_ + r)^{-1} (\mathcal U)$ is Scott-open. Since $Q+r$ is included in $\mathcal U$, $Q$ is included in $(\_ + r)^{-1} (\mathcal U)$.
Since $X, d$ is continuous Yoneda-complete, $\mathbf B (X, d)$ is a continuous dcpo, hence is sober, hence well-filtered, so, using the definition of $Q$, $Q_i+r_i-r$ is included in $(\_ + r)^{-1} (\mathcal U)$ for some $i \in I$. That means that $Q_i + r_i$ is included in $\mathcal U$, hence also $\upB (Q_i + r_i)$ since every Scott-open set is upwards-closed. Therefore, $\mathcal U$ indeed contains $\bigcap_{i \in I} \upB (Q_i + r_i)$. We can now finish the proof. Since $Q+r$ is included in $\upB (Q_i + r_i)$, $(Q_i, r_i) \leq^{\dQ^+} (Q, r)$ by (1). For every upper bound $(Q', r')$ of ${(Q_i, r_i)}_{i \in I}$, $r' \leq \inf_{i \in I} r_i = r$, and $Q'+r'$ is included in $\upB (Q_i + r_i)$ for every $i \in I$, by (1). Hence $Q'+r'$ is included in $\bigcap_{i \in I} \upB (Q_i + r_i) = \upB (Q+r)$, showing that $(Q, r) \leq^{\dQ^+} (Q', r')$, by (1) again. \qed \begin{rem} \label{rem:Q:sup} One can rephrase Lemma~\ref{lemma:Q:sup}~(2) as follows. Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let ${(Q_i)}_{i \in I, \sqsubseteq}$ be a Cauchy-weightable net, and ${(Q_i, r_i)}_{i \in I, \sqsubseteq}$ be some corresponding Cauchy-weighted net. Then the $\dQ$-limit of ${(Q_i)}_{i \in I, \sqsubseteq}$ is the set $Q$ given by the filtered intersection $\bigcap_{i \in I} \upB (Q_i+r_i)$. Since $Q$ is included in $X$, this is also equal to $\bigcap_{i \in I} (X \cap \upB (Q_i+r_i))$. Note that $X \cap \upB (Q_i+r_i)$ is the set $Q_i^{+r_i}$ of points $\{x \in X \mid \exists y \in Q_i. d (y, x) \leq r_i\}$ that are at distance at most $r_i$ from $Q_i$. (Beware that there is no reason to believe that $Q_i^{+r_i}$ would be compact.) Hence $Q$ is the filtered intersection $\bigcap_{i \in I} Q_i^{+r_i}$. \end{rem} \subsection{Algebraicity} \label{sec:Q:algebraicity} \begin{lem} \label{lemma:Q:simple:center} Let $X, d$ be a continuous Yoneda-complete quasi-metric space.
For all $n \geq 1$, and center points $x_1$, \ldots, $x_n$, $\upc \{x_1, \cdots, x_n\}$ is a center point of $\Smyth X, \dQ$. \end{lem} \proof Recall that every continuous Yoneda-complete quasi-metric space is sober. Let $Q_0 = \upc \{x_1, \cdots, x_n\}$. For every $h \in \Lform X$, $F_{Q_0} (h) = \min_{j=1}^n h (x_j)$. Let $U = B^{\dKRH^+}_{(F_{Q_0}, 0), <\epsilon}$, for an arbitrary $\epsilon > 0$. $U$ is upwards-closed: if $(F_Q, r) \leq^{\dKRH^+} (F_{Q'}, r')$ and $(F_Q, r)$ is in $U$, then $\dKRH (F_Q, F_{Q'}) \leq r-r'$, and $\dKRH (F_{Q_0}, \allowbreak F_Q) < \epsilon - r$, so $\dKRH (F_{Q_0}, F_{Q'}) < r-r'+\epsilon-r = \epsilon-r'$, showing that $(F_{Q'}, r')$ is in $U$. To show that $U$ is Scott-open, consider a directed family ${(Q_i, r_i)}_{i \in I}$ in $\mathbf B (\Smyth X, \dQ)$, with supremum $(Q, r)$, and assume that $(F_Q, r)$ is in $U$. By Proposition~\ref{prop:QX:Ycomplete}, this is a naive supremum, so $r = \inf_{i \in I} r_i$ and $Q$ is characterized by the fact that, for every $h \in \Lform_1 (X, d)$, $F_Q (h) = \sup_{i \in I} (F_{Q_i} (h) + r - r_i)$. Since $(F_Q, r)$ is in $U$, $\dKRH (F_{Q_0}, F_Q) < \epsilon - r$, so $\epsilon > r$ and $F_{Q_0} (h) - \epsilon + r < F_Q (h)$ for every $h \in \Lform_1 (X, d)$. Therefore, for every $h \in \Lform_1 (X, d)$, there is an index $i \in I$ such that $F_{Q_0} (h) - \epsilon + r < F_{Q_i} (h) + r - r_i$, or equivalently: \begin{equation} \label{eq:Q:A} \min_{j=1}^n h (x_j) < F_{Q_i} (h) + \epsilon - r_i. \end{equation} Moreover, since $\epsilon > r = \inf_{i \in I} r_i$, we may assume that $i$ is so large that $\epsilon > r_i$. Let $V_i$ be the set of all $h \in \Lform_1 (X, d)$ such that (\ref{eq:Q:A}) holds. Each $V_i$ is the inverse image of $[0, \epsilon - r_i[$ by the map $h \mapsto \dreal (\min_{j=1}^n h (x_j), F_{Q_i} (h))$, which is continuous from $\Lform_1 (X, d)^\patch$ to $(\overline{\mathbb{R}}_+)^\dG$ by Proposition~\ref{prop:dKRH:cont}~(3). Hence ${(V_i)}_{i \in I}$ is an open cover of $\Lform_1 (X, d)^\patch$.
The latter is a compact space by Lemma~\ref{lemma:cont:Lalpha:retr}~(4). Hence we can extract a finite subcover ${(V_i)}_{i \in J}$: for every $h \in \Lform_1 (X, d)$, there is an index $i$ in the finite set $J$ such that (\ref{eq:Q:A}) holds. By directedness, one can require that $i$ be the same for all $h$. This shows that $(F_{Q_i}, r_i)$ is in $U$, proving the claim. \qed \begin{rem} \label{rem:Qa:simple:center} A similar result holds for $\Smyth X, \dQ^a$ for any $a \in \mathbb{R}_+$, $a > 0$, and the argument is the same as for Lemma~\ref{lemma:Q:simple:center}. \end{rem} \begin{lem} \label{lemma:Q:approx} Let $X, d$ be a standard algebraic quasi-metric space, with strong basis $\mathcal B$. For every compact subset $Q$ of $X$, for every open neighborhood $U$ of $Q$, and for every $\epsilon > 0$, there are finitely many points $x_1$, \ldots, $x_n$ in $\mathcal B$ and radii $r_1, \ldots, r_n < \epsilon$ such that $Q \subseteq \bigcup_{j=1}^n B^d_{x_j, <r_j} \subseteq U$. \end{lem} \proof For every $y$ in $Q$, we can find a formal ball $(x, r) \ll (y, 0)$ in $\widehat U$ such that $x \in \mathcal B$ and such that $r < \epsilon$, by Lemma~\ref{lemma:B:basis}. The corresponding open balls $B^d_{x, <r}$ are open since $x$ is a center point, hence form an open cover. It remains to extract a finite subcover, thanks to the compactness of $Q$. \qed \begin{lem} \label{lemma:Q:B:eps} Let $X, d$ be a standard algebraic quasi-metric space. Let $Q$ be a compact saturated subset of $X$. For all center points $x_1$, \ldots, $x_m$ and all $r_1, \cdots, r_m > 0$ such that $Q \subseteq \bigcup_{j=1}^m B^d_{x_j, <r_j}$, there is an $\epsilon > 0$, $\epsilon < r_1, \cdots, r_m$, such that $Q \subseteq \bigcup_{j=1}^m B^d_{x_j, <r_j-\epsilon}$. \end{lem} \proof For each $\epsilon > 0$ such that $\epsilon < r_1, \cdots, r_m$, we consider the open subset $U_\epsilon = \bigcup_{j=1}^m B^d_{x_j, < r_j-\epsilon}$. 
For every $x \in Q$, $x$ is in some open ball $B^d_{x_j, <r_j}$ by assumption, so $d (x, x_j) < r_j$. This implies that there is an $\epsilon > 0$ such that $d (x, x_j) < r_j-\epsilon$ (in particular $\epsilon < r_j$). That inequality is preserved by replacing $\epsilon$ by a smaller positive number strictly less than $r_1$, \ldots, $r_m$. Then $x$ is in $U_\epsilon$. The family ${(U_\epsilon)}_{0 < \epsilon < r_1, \cdots, r_m}$ is then a chain that forms an open cover of $Q$. Since $Q$ is compact, $Q$ is included in $U_\epsilon$ for some $\epsilon > 0$ with $\epsilon < r_1, \cdots, r_m$. \qed \begin{thm}[Algebraicity of Smyth powerdomains] \label{thm:Q:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with a strong basis $\mathcal B$. The space $\Smyth X, \dQ$ is algebraic Yoneda-complete. Every non-empty compact saturated set of the form $\upc \{x_1, \cdots, x_n\}$ where every $x_i$ is a center point is a center point of $\Smyth X, \dQ$, and those sets form a strong basis, even when each $x_i$ is taken from $\mathcal B$. \end{thm} \proof First recall that every algebraic Yoneda-complete quasi-metric space is continuous Yoneda-complete, and that every continuous Yoneda-complete quasi-metric space is sober. Then $\Smyth X, \dQ$ is Yoneda-complete by Proposition~\ref{prop:QX:Ycomplete}, and $\upc \{x_1, \cdots, x_n\}$ is a center point as soon as every $x_i$ is a center point, by Lemma~\ref{lemma:Q:simple:center}. Let $Q \in \Smyth X$. We wish to show that $(Q, 0)$ is the supremum of a directed family $D$ of formal balls $(Q_i, r_i)$ below $(Q, 0)$, where every $Q_i$ is the upward closure of finitely many center points. Let $D$ be the family of all formal balls $(\upc \{x_1, \cdots, x_n\}, r)$, where $n \geq 1$, every $x_j$ is in $\mathcal B$, and $Q \subseteq \bigcup_{j=1}^n B^d_{x_j, <r}$. We start by showing that $D$ is non-empty, and that we can in fact find elements of $D$ with arbitrarily small radius.
By Lemma~\ref{lemma:Q:approx} with $U=X$, for every $\epsilon > 0$, there are finitely many points $x_1$, \ldots, $x_n$ in $\mathcal B$ and radii $r_1, \ldots, r_n < \epsilon$ such that $Q \subseteq \bigcup_{j=1}^n B^d_{x_j, <r_j}$, in particular $Q \subseteq \bigcup_{j=1}^n B^d_{x_j, <\epsilon}$. It follows that $(\upc \{x_1, \cdots, x_n\}, \epsilon)$ is in $D$. Before we proceed, we make the following observation: $(*)$ for every $(\upc \{x_1, \cdots, x_m\}, r)$ in $D$, there is an $\epsilon > 0$ such that $\epsilon < r$ and $(\upc \{x_1, \cdots, x_m\}, r - \epsilon)$ is in $D$. This is exactly Lemma~\ref{lemma:Q:B:eps} with $r_1, \cdots, r_m = r$. Note also that if $(\upc \{x_1, \cdots, x_m\}, r - \epsilon)$ is in $D$, then $(\upc \{x_1, \cdots, x_m\}, r - \epsilon')$ is also in $D$ for every $\epsilon' > 0$ with $\epsilon' \leq \epsilon$. We now claim that $D$ is directed. Let $(\upc \{x_1, \cdots, x_m\}, r)$ and $(\upc \{y_1, \cdots, y_n\}, s)$ be two elements of $D$. Using $(*)$, there is an $\epsilon > 0$ such that $\epsilon < r, s$ and such that $(\upc \{x_1, \cdots, x_m\}, r-\epsilon)$ and $(\upc \{y_1, \cdots, y_n\}, s-\epsilon)$ are again in $D$. The open subset $U = \bigcup_{j=1}^m B^d_{x_j, <r-\epsilon} \cap \bigcup_{k=1}^n B^d_{y_k, <s-\epsilon}$ contains $Q$. By Lemma~\ref{lemma:Q:approx}, there is a finite family of points $z_1$, \ldots, $z_p$ in $\mathcal B$ and associated radii $t_1, \cdots, t_p < \epsilon$ such that $Q \subseteq \bigcup_{\ell=1}^p B^d_{z_\ell, < t_\ell} \subseteq U$. Let $E = \{z_1, \cdots, z_p\}$. By construction, $Q \subseteq \bigcup_{\ell=1}^p B^d_{z_\ell, < \epsilon}$, so $(\upc E, \epsilon)$ is in $D$. We claim that $(\upc \{x_1, \cdots, x_m\}, r) \leq^{\dQ^+} (\upc E, \epsilon)$. To show that, we use Lemma~\ref{lemma:dQ} and reduce our problem to showing that $\sup_{\ell=1}^p \inf_{j=1}^m d (x_j, z_\ell) \leq r-\epsilon$.
For every $\ell$, $z_\ell$ is in the open ball $B^d_{z_\ell, <t_\ell} \subseteq U \subseteq \bigcup_{j=1}^m B^d_{x_j, <r-\epsilon}$, so $d (x_j, z_\ell) < r-\epsilon$ for some $j$, and this proves the claim. We show that $(\upc \{y_1, \cdots, y_n\}, s) \leq^{\dQ^+} (\upc E, \epsilon)$ similarly. By definition of $D$, $(Q, 0)$ is an upper bound of $D$: every element $(\upc \{x_1, \cdots, x_m\}, r)$ of $D$ is such that $Q \subseteq \bigcup_{j=1}^m B^d_{x_j, <r}$, so $\dQ (\upc \{x_1, \cdots, x_m\}, Q) = \sup_{x \in Q} \inf_{j=1}^m d (x_j, x) \leq r$. We claim that $(Q, 0)$ is the least upper bound of $D$. Let $(Q', r')$ be another upper bound. Since $D$ contains formal balls of arbitrarily small radius, $r'=0$. Let us assume that $(Q, 0)$ is not below $(Q', 0)$. By definition, this means that $\dQ (Q, Q') > 0$. Let $\epsilon > 0$ be such that $\epsilon < \dQ (Q, Q')$. By Lemma~\ref{lemma:dQ}, there is a point $x' \in Q'$ such that $d (Q, x') > \epsilon$, so $d (x, x') > \epsilon$ for every $x \in Q$. Let $U$ be the open hole $T^d_{x', >\epsilon}$, a $d$-Scott open set by Lemma~\ref{lemma:hole}. By construction, $U$ contains $Q$. By Lemma~\ref{lemma:Q:approx}, $U$ contains a finite union $\bigcup_{j=1}^n B^d_{x_j, <r_j}$ of open balls centered at points of $\mathcal B$, that union contains $Q$, and $r_j < \epsilon$ for every $j$. Then $(\upc \{x_1, \cdots, x_n\}, \epsilon)$ is in $D$, hence is below $(Q', 0)$. This means that $\dQ (\upc \{x_1, \cdots, x_n\}, Q') \leq \epsilon$. In particular, $d (\upc \{x_1, \cdots, x_n\}, x') \leq \epsilon$, so $d (x_j, x') \leq \epsilon$ for some $j$. (Indeed, $d (x, x') \leq \epsilon$ for some $x \in \upc \{x_1, \cdots, x_n\}$, say $x_j \leq x$. Then $d (x_j, x') \leq d (x_j, x) + d (x, x')$, and $d (x_j, x)=0$ since $x_j \leq x$.) Since $\bigcup_{j=1}^n B^d_{x_j, <r_j}$ is included in $U$, $x_j$ is in $U$, so $d (x_j, x') > \epsilon$, contradiction.
\qed \begin{rem} \label{rem:Qa:alg} The same result holds for $\dQ^a$, for every $a \in \mathbb{R}_+$, $a > 0$: when $X, d$ is standard algebraic, $\Smyth X, \dQ^a$ is algebraic Yoneda-complete, with the same strong basis. The proof is the same, except in the final step, where we consider another upper bound $(Q', r')$. We must additionally require that $\epsilon \leq a$, and this is possible because Lemma~\ref{lemma:Q:approx} allows us to take $\epsilon$ as small as we wish. The inequality $\dQ^a (\upc \{x_1, \cdots, x_n\}, Q') \leq \epsilon$ then implies $\dQ (\upc \{x_1, \cdots, x_n\}, Q') \leq \epsilon$, allowing us to conclude as above. \end{rem} \subsection{Continuity} \label{sec:continuity-smyth} We proceed exactly as in Section~\ref{sec:cont-val-leq-1} for subnormalized continuous valuations, or as in Section~\ref{sec:continuity-hoare} for the Hoare powerdomains. We start with an easy fact. \begin{fact} \label{fact:Q:upc:inf} For every topological space $Y$, for every subset $A$ of $Y$, for every monotonic map $h \colon Y \to \mathbb{R} \cup \{-\infty, +\infty\}$, $\inf_{y \in A} h (y) = \inf_{y \in \upc A} h (y)$. \qed \end{fact} \begin{lem} \label{lemma:Q:functor} Let $X, d$ and $Y, \partial$ be two continuous Yoneda-complete quasi-metric spaces, and $f \colon X, d \to Y, \partial$ be a $1$-Lipschitz continuous map. The map $\Smyth f \colon \Smyth X, \dQ \to \Smyth Y, \dQ$ defined by $\Smyth f (Q) = \upc f [Q]$ is $1$-Lipschitz continuous. Moreover, $F_{\Smyth f (Q)} = \Prev f (F_Q)$ for every $Q \in \Smyth X$. Similarly with $\dQ^a$ instead of $\dQ$. \end{lem} Recall that $f [Q]$ denotes the image of $Q$ by $f$. \proof We first check that $F_{\Smyth f (Q)} = \Prev f (F_Q)$. For every $h \in \Lform X$, $F_{\Smyth f (Q)} (h) = \inf_{y \in \upc (f [Q])} h (y) = \inf_{y \in f [Q]} h (y)$ by Fact~\ref{fact:Q:upc:inf}. That is equal to $\inf_{x \in Q} h (f (x))$. We also have $\Prev f (F_Q) (h) = F_Q (h \circ f) = \inf_{x \in Q} h (f (x))$.
Therefore $F_{\Smyth f (Q)} = \Prev f (F_Q)$. By Lemma~\ref{lemma:Pf:lip}, $\Prev f$ is $1$-Lipschitz, so $\mathbf B^1 (\Prev f)$ is monotonic. Also, $\Prev f$ maps normalized discrete superlinear previsions to normalized discrete superlinear previsions. By Proposition~\ref{prop:QX:Ycomplete}, $\Smyth X, \dQ$ is Yoneda-complete, hence through the isometry of Proposition~\ref{prop:H:prev}, the corresponding spaces of normalized discrete superlinear previsions are Yoneda-complete as well. Moreover, directed suprema of formal balls are computed as naive suprema. By Lemma~\ref{lemma:Pf:lipcont}, $\mathbf B^1 (\Prev f)$ preserves naive suprema, hence all directed suprema. It must therefore be Scott-continuous. Hence $\Smyth f$ is $1$-Lipschitz continuous. In the case of $\dQ^a$, the argument is the same, except that we use Remark~\ref{rem:QX:Ycomplete:a} instead of Proposition~\ref{prop:QX:Ycomplete}. \qed Let $X, d$ be a continuous Yoneda-complete quasi-metric space. There is an algebraic Yoneda-complete quasi-metric space $Y, \partial$ and two $1$-Lipschitz continuous maps $r \colon Y, \partial \to X, d$ and $s \colon X, d \to Y, \partial$ such that $r \circ s = \identity X$. By Lemma~\ref{lemma:Q:functor}, $\Smyth r$ and $\Smyth s$ are also $1$-Lipschitz continuous, and clearly $\Smyth r \circ \Smyth s = \identity {\Smyth X}$, so $\Smyth X, \dQ$ is a $1$-Lipschitz continuous retract of $\Smyth Y, \mQ\partial$. (Similarly with $\dQ^a$ and $\mQ\partial^a$.) Theorem~\ref{thm:Q:alg} states that $\Smyth Y, \mQ\partial$ (resp., $\mQ\partial^a$, using Remark~\ref{rem:Qa:alg} instead) is algebraic Yoneda-complete, whence: \begin{thm}[Continuity for the Smyth powerdomain] \label{thm:Q:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The quasi-metric spaces $\Smyth X, \dQ$ and $\Smyth X, \dQ^a$ ($a \in \mathbb{R}_+$, $a > 0$) are continuous Yoneda-complete. 
\qed \end{thm} Together with Lemma~\ref{lemma:Q:functor}, and Theorem~\ref{thm:Q:alg} for the algebraic case, we obtain the following. \begin{cor} \label{cor:Q:functor} $\Smyth, \dQ$ defines an endofunctor on the category of continuous Yoneda-complete quasi-metric spaces and $1$-Lipschitz continuous maps. Similarly with $\dQ^a$ instead of $\dQ$ ($a > 0$), or with algebraic instead of continuous. \qed \end{cor} \subsection{The Upper Vietoris Topology} \label{sec:upper-viet-topol} The upper Vietoris topology on $\Smyth X$ is generated by basic open subsets $\Box U = \{Q \in \Smyth X \mid Q \subseteq U\}$, where $U$ ranges over the open subsets of $X$. \begin{lem} \label{lemma:Q:V=weak} Let $X, d$ be a standard quasi-metric space that is sober in its $d$-Scott topology. The map $Q \mapsto F_Q$ is a homeomorphism of $\Smyth X$ with the upper Vietoris topology onto the space of normalized discrete superlinear previsions on $X$ with the weak topology. \end{lem} \proof This is a bijection by Lemma~\ref{lemma:uQ}. Fix $h \in \Lform X$, $a \in \mathbb{R}_+$. For every $Q \in \Smyth X$, $F_Q$ is in $[h > a]$ if and only if $h (x) > a$ for every $x \in Q$ (the infimum $F_Q (h) = \inf_{x \in Q} h (x)$ is attained, since $h$ is lower semicontinuous and $Q$ is compact), if and only if $Q \in \Box h^{-1} (]a, +\infty])$. Therefore the bijection is continuous. In the other direction, for every open subset $U$, $\Box U$ is the set of all $Q \in \Smyth X$ such that $F_Q \in [\chi_U > 1/2]$. \qed \begin{lem} \label{lemma:Q:Vietoris} Let $X, d$ be a standard quasi-metric space that is sober in its $d$-Scott topology, and $a, a' > 0$ with $a \leq a'$. We have the following inclusion of topologies: \begin{quote} upper Vietoris $\subseteq$ $\dQ^a$-Scott $\subseteq$ $\dQ^{a'}$-Scott $\subseteq$ $\dQ$-Scott. \end{quote} \end{lem} \proof Considering Lemma~\ref{lemma:Q:V=weak}, this is a consequence of Proposition~\ref{prop:weak:dScott:a}. Note that the latter applies because directed suprema of formal balls are indeed naive suprema, due to Proposition~\ref{prop:QX:Ycomplete}.
\qed \begin{prop} \label{prop:Q:Vietoris} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space. The $\dQ$-Scott and upper Vietoris topologies coincide on $\Smyth X$. \end{prop} \proof Recall that algebraic Yoneda-complete quasi-metric spaces are sober, as a consequence of \cite[Proposition~4.1]{JGL:formalballs}, which says that all continuous Yoneda-complete quasi-metric spaces are sober. Hence Lemma~\ref{lemma:Q:Vietoris} takes care of one direction. It remains to show that every $\dQ$-Scott open subset is open in the upper Vietoris topology. Theorem~\ref{thm:Q:alg} tells us that the subsets $\mathcal V = \uuarrow (\upc \{x_1, \cdots, x_n\}, r)$ with $n \geq 1$, and $x_1$, \ldots, $x_n$ center points, form a base of the Scott topology on $\mathbf B (\Smyth X, \dQ)$. We show that $\mathcal V \cap \Smyth X$ is open in the upper Vietoris topology. For that, we use the implication (1)$\limp$(3) of Lemma~5.8 of \cite{JGL:formalballs}, which states that, in a continuous quasi-metric space $Y, \partial$, for every $\epsilon > 0$, for every center point $y \in Y$, $B^{\partial^+}_{(y, 0), <\epsilon} = \uuarrow (y, \epsilon)$. We apply this to $Y = \Smyth X$ and $\partial = \dQ$: this space is algebraic Yoneda-complete, hence continuous, by Theorem~\ref{thm:Q:alg}, and $\upc \{x_1, \cdots, x_n\}$ is a center point by Lemma~\ref{lemma:Q:simple:center}, so $\mathcal V = \uuarrow (\upc \{x_1, \cdots, x_n\}, r)$ is the open ball $B^{\dQ^+}_{\upc \{x_1, \cdots, x_n\}, < r}$. For every $Q$ in $\mathcal V$, $\dQ (\upc \{x_1, \cdots, x_n\}, Q) < r$, so for every $x \in Q$, there is an index $j$ such that $d (x_j, x) < r$. This implies that $Q \subseteq \bigcup_{j=1}^n B^d_{x_j, <r}$. Conversely, if $Q \subseteq \bigcup_{j=1}^n B^d_{x_j, <r}$, then by Lemma~\ref{lemma:Q:B:eps}, there is an $\epsilon > 0$, $\epsilon < r$, such that $Q \subseteq \bigcup_{j=1}^n B^d_{x_j, <r-\epsilon}$. In other words, for every $x \in Q$, there is a $j$ such that $d (x_j, x) < r-\epsilon$.
Therefore $\dQ (\upc \{x_1, \cdots, x_n\}, Q) \leq r-\epsilon < r$, so $Q$ is in $\mathcal V$. Summing up, $\mathcal V$ is equal to $\Box \bigcup_{j=1}^n B^d_{x_j, <r}$, hence is upper Vietoris open. \qed \begin{thm}[$\dQ$ quasi-metrizes the upper Vietoris topology] \label{thm:Q:Vietoris} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The $\dQ$-Scott topology, the $\dQ^a$-Scott topology, for every $a \in \mathbb{R}_+$, $a > 0$, and the upper Vietoris topology all coincide on $\Smyth X$. \end{thm} \proof The proof is as for Theorem~\ref{thm:V:weak=dScott} or Theorem~\ref{thm:H:Vietoris}. By \cite[Theorem~7.9]{JGL:formalballs}, $X, d$ is the $1$-Lipschitz continuous retract of an algebraic Yoneda-complete quasi-metric space $Y, \partial$. Call $s \colon X \to Y$ the section and $r \colon Y \to X$ the retraction. We confuse $\Smyth X$ with the corresponding space of discrete superlinear previsions. Then $\Prev s$ and $\Prev r$ form a $1$-Lipschitz continuous section-retraction pair by Lemma~\ref{lemma:Pf:lip}, and in particular $\Prev s$ is an embedding of $\Smyth X$ into $\Smyth Y$ with their $\dQ$-Scott topologies (similarly with $\dQ^a$ in place of $\dQ$). However, $s$ and $r$ are also just continuous, by Proposition~\ref{prop:cont}, so $\Prev s$ and $\Prev r$ also form a section-retraction pair between the same spaces, this time with their weak topologies (as spaces of previsions), by Fact~\ref{fact:Pf:weak}, that is, with their upper Vietoris topologies, by Lemma~\ref{lemma:Q:V=weak}. (Recall that $X$ and $Y$ are sober since continuous Yoneda-complete.) By Proposition~\ref{prop:Q:Vietoris}, the two topologies on $\Smyth Y$ are the same. Fact~\ref{fact:retract:two} then implies that the two topologies on $\Smyth X$ are the same as well. \qed \section{Sublinear, Superlinear Previsions} \label{sec:other-previsions} \subsection{A Minimax Theorem} \label{sec:minimax-theorem-1} We establish a minimax theorem on non-Hausdorff spaces.
We are not aware of any preexisting such theorem. We shall need to apply it on non-Hausdorff spaces in the case of superlinear previsions (Section~\ref{sec:superl-prev}). For sublinear previsions, several existing minimax theorems, on Hausdorff spaces, could be used instead. Our proof is a simple modification of Frenk and Kassay's Theorem~1.1 \cite{FK:minimax}, a variant of Ky Fan's celebrated minimax theorem \cite{KyFan:minimax}. One might say that this is overkill here, since the point in those theorems is that no vector space structure has to be assumed to introduce the required convexity assumptions, and the functions we shall apply this theorem to will not only be convex, but in fact linear. Our notion of linearity will be the same as the one underlying linear previsions, and that is a notion that applies on cones that are not in general embeddable in vector spaces, so that one might say that we need the added generality, at least partly. Our reason for presenting the theorem in its full generality, however, is that it is no more complicated to do so. The proof we shall give is really Frenk and Kassay's, up to minor modifications. It is not completely obvious at first sight where they require the Hausdorff assumption, and this is hidden in a use of the finite intersection property for compact sets, which typically only holds in Hausdorff spaces. We show that it can be dispensed with entirely. Given two non-empty sets $A$, $B$, and a map $f \colon A \times B \to \mathbb{R}$, we clearly have: \[ \sup_{b \in B} \inf_{a \in A} f (a,b) \leq \inf_{a \in A} \sup_{b \in B} f (a,b). \] Our aim is to strengthen that to an equality, under some extra assumptions. We shall also consider the case of functions with values in $\mathbb{R} \cup \{+\infty\}$, not just $\mathbb{R}$, although we shall not need that. (Including the value $-\infty$ as well seems to present some difficulties, but $+\infty$ does not.)
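For a concrete feel for this inequality, here is a quick numerical sanity check on a finite example (the two-element sets and payoff values below are ours, in the style of matching pennies, and are not part of the development; they also show that without any convexity assumption the inequality can be strict):

```python
# Sanity check of  sup_b inf_a f(a,b) <= inf_a sup_b f(a,b)
# on a finite, made-up example (A = B = {0, 1}, "matching pennies").
f = [[1, -1],
     [-1, 1]]  # f[a][b]

sup_inf = max(min(f[a][b] for a in (0, 1)) for b in (0, 1))
inf_sup = min(max(f[a][b] for b in (0, 1)) for a in (0, 1))

assert sup_inf <= inf_sup  # the trivial inequality
print(sup_inf, inf_sup)    # -1 1: strict here, so equality genuinely needs extra assumptions
```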
The minimax theorem must rely on a separation theorem, in the style of Hahn and Banach. For our purposes, we shall use a result due to R. Tix, K. Keimel, and G. Plotkin, which we reproduce in the proposition below. We write ${\overline{\mathbb{R}}_+}_\sigma$ for $\overline{\mathbb{R}}_+$ with its Scott topology. Hence a lower semicontinuous map from $A$ to $\overline{\mathbb{R}}_+$ is a continuous map from $A$ to ${\overline{\mathbb{R}}_+}_\sigma$. A subset $A$ of $\overline{\mathbb{R}}_+^n$ is \emph{convex} if and only if for all $a_1, a_2 \in A$ and for every $\alpha \in ]0, 1[$, $\alpha a_1 + (1-\alpha) a_2$ is in $A$. A map is \emph{linear} if and only if it preserves sums and products by scalars from $\mathbb{R}_+$. Let $\vec 1$ be the all-ones vector in $\overline{\mathbb{R}}_+^n$. \begin{prop}[\cite{TKP:nondet:prob}, Lemma~3.7] \label{prop:strictsep} Let $Q$ be a convex compact subset of ${\overline{\mathbb{R}}_+}_\sigma^n$ disjoint from $\dc \vec 1 = [0, 1]^n$. Then there is a linear continuous function $\Lambda$ from ${\overline{\mathbb{R}}_+}_\sigma^n$ to ${\overline{\mathbb{R}}_+}_\sigma$ and a $b > 1$ such that $\Lambda (\vec 1) \leq 1$ and $\Lambda (x) > b$ for every $x \in Q$. \qed \end{prop} Writing $e_i$ for the tuple with a $1$ at position $i$ and zeros elsewhere, such a linear continuous map must map every $(x_1, x_2, \cdots, x_n)$ to $\sum_{i=1}^n c_i x_i$, where $c_i = \Lambda (e_i)$. This is certainly true when every $x_i$ is different from $+\infty$, by linearity, and (Scott-)continuity gives us the result for the remaining cases. \begin{cor} \label{corl:strictsep} Let $Q$ be a non-empty convex compact subset of ${\overline{\mathbb{R}}_+}_\sigma^n$ disjoint from $\dc \vec 1 = [0, 1]^n$. There are non-negative real numbers $c_1$, $c_2$, \ldots, $c_n$ such that $\sum_{i=1}^n c_i=1$ and $\sum_{i=1}^n c_i x_i > 1$ for every $(x_1, x_2, \cdots, x_n)$ in $Q$. \end{cor} \proof Find $\Lambda$ and $b > 1$ as above.
$\Lambda$ maps every $(x_1, x_2, \cdots, x_n)$ to $\sum_{i=1}^n c_i x_i$, where each $c_i$ is in $\overline{\mathbb{R}}_+$. Since $\Lambda (\vec 1) \leq 1$, $\sum_{i=1}^n c_i \leq 1$. In particular, no $c_i$ is equal to $+\infty$. Since $Q$ is non-empty, $\sum_{i=1}^n c_i x_i > b$ for some point $(x_1, x_2, \cdots, x_n)$ in $Q$. This implies that not every $c_i$ is equal to $0$, hence $\sum_{i=1}^n c_i \neq 0$. Let $c'_i = c_i / \sum_{i=1}^n c_i$. Then $\sum_{i=1}^n c'_i=1$, and for every $(x_1, x_2, \cdots, x_n)$ in $Q$, $\sum_{i=1}^n c'_i x_i > b / \sum_{i=1}^n c_i \geq b > 1$. \qed Following Frenk and Kassay, we say that a map $f \colon A \times B \to \mathbb{R} \cup \{+\infty\}$ is \emph{closely convex in its first argument} if and only if for all $a_1, a_2 \in A$, for every $\alpha \in ]0, 1[$, for every $\epsilon > 0$, there is an $a \in A$ such that, for every $b \in B$: \[ f (a, b) \leq \alpha f (a_1, b) + (1-\alpha) f (a_2, b) + \epsilon. \] We say that $f$ is \emph{closely concave in its second argument} if and only if for all $b_1, b_2 \in B$, for every $\alpha \in ]0, 1[$, for all $\epsilon, M > 0$, there is a $b \in B$ such that, for every $a \in A$: \[ f (a, b) \geq \min (M, \alpha f (a, b_1) + (1-\alpha) f (a, b_2) - \epsilon). \] The extra lower bound $M$ serves to handle the cases where $\alpha f (a, b_1) + (1-\alpha) f (a, b_2)$ is infinite. \begin{lem} \label{lemma:minimax:convex} Let $A$ be a non-empty compact topological space and $B$ be a non-empty set. Let $f \colon A \times B \to \mathbb{R} \cup \{+\infty\}$ be a map that is closely convex in its first argument, and such that $f (\_, b)$ is lower semicontinuous for every $b \in B$. Then, for every $t \in \mathbb{R}$, $\inf_{a \in A} \sup_{b \in B} f (a, b) \leq t$ if and only if, for every normalized simple valuation $\sum_{i=1}^n c_i \delta_{b_i}$ on $B$, $\inf_{a \in A} \sum_{i=1}^n c_i f (a, b_i) \leq t$.
\end{lem} \proof Let $L \colon A \to \mathbb{R} \cup \{+\infty\}$ be defined by $L (a) = \sup_{b \in B} f (a, b)$. Being a pointwise supremum of lower semicontinuous functions, $L$ is itself lower semicontinuous. For every normalized simple valuation $\sum_{i=1}^n c_i \delta_{b_i}$ on $B$, $\sum_{i=1}^n c_i f (a, b_i) \leq \sum_{i=1}^n c_i L (a) = L (a)$, so $\inf_{a \in A} \sum_{i=1}^n c_i f (a, b_i) \leq \inf_{a \in A} L (a)$. In particular, if $\inf_{a \in A} L (a) \leq t$, then $\inf_{a \in A} \sum_{i=1}^n c_i f (a, b_i) \leq t$. In the reverse direction, assume that $\inf_{a \in A} L (a) > t$. We wish to show that $\inf_{a \in A} \allowbreak \sum_{i=1}^n c_i f (a, b_i) > t$ for some normalized simple valuation $\sum_{i=1}^n c_i \delta_{b_i}$. To that end, we first pick a real number $t'$ such that $\inf_{a \in A} L (a) > t' > t$, and we shall find a normalized simple valuation $\sum_{i=1}^n c_i \delta_{b_i}$ such that $\sum_{i=1}^n c_i f (a, b_i) \geq t'$ for every $a \in A$. For every $b \in B$, $U_b = \{a \in A \mid f (a, b) > t'\}$ is open since $f (\_, b)$ is lower semicontinuous, and the sets $U_b$, $b \in B$, cover $A$: for every $a \in A$, $L (a) > t'$, so $f (a, b) > t'$ for some $b \in B$. Since $A$ is compact, there are finitely many points $b_1$, $b_2$, \ldots, $b_n$ such that $A = \bigcup_{i=1}^n U_{b_i}$. Note that, since $A$ is non-empty, $n \geq 1$. Since $f (\_, b_i)$ is lower semicontinuous on the compact space $A$, it reaches its minimum $r_i$ in $\mathbb{R} \cup \{+\infty\}$. Let $r = \min (r_1, r_2, \cdots, r_n)$, and find $c > 0$ such that $1 + c (r - t') \geq 0$. If $r \geq t'$, we can take any $c > 0$, otherwise any $c$ such that $0 < c < 1/(t'-r)$ will fit. The map $h_i \colon a \mapsto 1 + c (f (a, b_i) - t')$ is then lower semicontinuous from $A$ to $\overline{\mathbb{R}}_+$, and therefore $h (a) = (h_1 (a), h_2 (a), \cdots, h_n (a))$ defines a continuous map from $A$ to ${\overline{\mathbb{R}}_+}_\sigma^n$. 
Let $K$ be the image of $A$ by $h$. This is a compact subset of ${\overline{\mathbb{R}}_+}_\sigma^n$. It is also non-empty, since $A$ is non-empty. Hence $Q = \upc K$ is also non-empty, compact, and saturated. We observe the following fact: $(*)$ for every $z \in \overline{\mathbb{R}}_+^n$, if $z + \epsilon.\vec 1$ is in $Q$ for every $\epsilon > 0$, then $z$ is also in $Q$. Indeed, consider the Scott-closed set $C_\epsilon = \dc (z + \epsilon.\vec 1)$ in $\overline{\mathbb{R}}_+^n$, for every $\epsilon \geq 0$. Then $C_0 = \dc z = \bigcap_{\epsilon > 0} C_\epsilon$. If $z$ were not in $Q$, then $C_0$ would not intersect $Q$, since $Q$ is upwards-closed. Then there would be finitely many values $\epsilon_1, \epsilon_2, \ldots, \epsilon_m > 0$ such that $\bigcap_{k=1}^m C_{\epsilon_k}$ does not intersect $Q$ by compactness, and by letting $\epsilon$ be the smallest $\epsilon_k$, $C_\epsilon$ would not intersect $Q$, contradiction. We claim that $Q$ is convex. Otherwise, there are two points $z_1$ and $z_2$ of $Q$ and a real number $\alpha \in {]0, 1[}$ such that $\alpha z_1 + (1-\alpha) z_2$ is not in $Q$. By $(*)$, $\alpha z_1 + (1-\alpha) z_2 +\epsilon.\vec 1$ is not in $Q$ for some $\epsilon > 0$. Since $z_1$ and $z_2$ are in $Q$, by definition there are points $a_1$ and $a_2$ in $A$ such that $h (a_1) \leq z_1$ and $h (a_2) \leq z_2$. We finally use the fact that $f$ is closely convex in its first argument: there is a point $a \in A$ such that, for every $i$, $1\leq i\leq n$, $f (a, b_i) \leq \alpha f (a_1, b_i) + (1-\alpha) f (a_2, b_i) + \epsilon/c$. That implies $h (a) \leq \alpha h (a_1) + (1-\alpha) h (a_2) + \epsilon.\vec 1 \leq \alpha z_1 + (1-\alpha) z_2 + \epsilon.\vec 1$. Since $h (a)$ is in $Q$ and $Q$ is upwards-closed, this contradicts the fact that $\alpha z_1 + (1-\alpha) z_2 + \epsilon.\vec 1$ is not in $Q$. We claim that $Q$ does not intersect $[0, 1]^n$. Assume on the contrary that there is an $a \in A$, and a $z \in [0, 1]^n$ such that $h (a) \leq z$.
Since $A = \bigcup_{i=1}^n U_{b_i}$, there is an index $i$ such that $f (a, b_i) > t'$, so $h_i (a) > 1$. However $h (a) \leq z \in [0, 1]^n$ entails that $h_i (a) \leq 1$, contradiction. Now we use Corollary~\ref{corl:strictsep} and obtain non-negative real numbers $c_1$, $c_2$, \ldots, $c_n$ such that $\sum_{i=1}^n c_i=1$, and $\sum_{i=1}^n c_i z_i > 1$ for every $(z_1, z_2, \cdots, z_n) \in Q$. This holds for the elements $h (a)$ of $K \subseteq Q$, hence $\sum_{i=1}^n c_i (1 + c (f (a, b_i) - t')) \geq 1$ for every $a \in A$. It follows that $\sum_{i=1}^n c_i f (a, b_i) \geq t'$, as required. \qed \begin{lem} \label{lemma:minimax:concave} Let $A$ and $B$ be two non-empty sets, and let $f \colon A \times B \to \mathbb{R} \cup \{+\infty\}$ be closely concave in its second argument. For every normalized simple valuation $\sum_{i=1}^n c_i \delta_{b_i}$ on $B$, $\inf_{a \in A} \sum_{i=1}^n c_i f (a, b_i) \leq \sup_{b \in B} \inf_{a \in A} f (a, b)$. \end{lem} \proof We first show that, if $f$ is closely concave in its second argument, then for every normalized simple valuation $\sum_{i=1}^n c_i \delta_{b_i}$ on $B$, for all $\epsilon, M > 0$, there is a $b \in B$ such that, for every $a \in A$: \[ f (a, b) \geq \min (M, \sum_{i=1}^n c_i f (a, b_i) - \epsilon). \] The definition of closely concave is the special case $n=2$. The case $n=0$ is vacuous, and the case $n=1$ is trivial. We prove that claim by induction on $n$. Assume $n\geq 3$. If $c_n=0$, then the result is a direct appeal to the induction hypothesis. If $c_n=1$, then $c_1=c_2=\cdots=c_{n-1}=0$, and we can take $b=b_n$. Otherwise, let $c'_i = c_i / (1 - c_n)$ for every $i \leq n-1$, and $\alpha=1-c_n$. Note that $\alpha$ is in $]0, 1[$. Also, $\sum_{i=1}^n c_i f (a, b_i) = \alpha \sum_{i=1}^{n-1} c'_i f (a, b_i) + (1-\alpha) f (a, b_n)$.
By induction hypothesis, there is a point $y' \in B$ such that, for every $a \in A$: \[ f (a, y') \geq \min \left(M/\alpha, \sum_{i=1}^{n-1} c'_i f (a, b_i) - \epsilon/2\alpha\right). \] Since $f$ is closely concave in its second argument, there is a point $b \in B$ such that, for every $a \in A$: \[ f (a, b) \geq \min \left(M, \alpha f (a, y') + (1-\alpha) f (a, b_n) - \epsilon/2\right). \] Therefore, for every $a \in A$, \begin{eqnarray*} f (a, b) & \geq & \min \left(M, \alpha \min \left(M/\alpha, \sum_{i=1}^{n-1} c'_i f (a, b_i) - \epsilon/2\alpha\right) + (1-\alpha) f (a, b_n) - \epsilon/2\right) \\ & = & \min \left(M, \alpha \sum_{i=1}^{n-1} c'_i f (a, b_i) - \epsilon/2 + (1-\alpha) f (a, b_n) - \epsilon/2\right) \\ & = & \min (M, \sum_{i=1}^n c_i f (a, b_i) - \epsilon). \end{eqnarray*} We now prove the lemma by showing that for every real number $t < \inf_{a \in A} \sum_{i=1}^n c_i f (a, b_i)$, there is an element $b \in B$ such that, for every $a \in A$, $f (a, b) \geq t$. For that, we pick $\epsilon > 0$ such that $t + \epsilon \leq \inf_{a \in A} \sum_{i=1}^n c_i f (a, b_i)$, and we let $M$ be any positive real number larger than $t$. The above generalization of the notion of closely concave then yields the existence of a point $b$ such that for every $a \in A$, $f (a, b) \geq \min (M, \sum_{i=1}^n c_i f (a, b_i) - \epsilon) \geq t$. \qed \begin{thm}[Minimax] \label{thm:minimax} Let $A$ be a non-empty compact topological space and $B$ be a non-empty set. Let $f \colon A \times B \to \mathbb{R} \cup \{+\infty\}$ be a map that is closely convex in its first argument, closely concave in its second argument, and such that $f (\_, b)$ is lower semicontinuous for every $b \in B$. Then: \[ \sup_{b \in B} \inf_{a \in A} f (a, b) = \inf_{a \in A} \sup_{b \in B} f (a, b), \] and the infimum on the right-hand side is attained. \end{thm} \proof The $\leq$ direction is obvious. If $\sup_{b \in B} \inf_{a \in A} f (a, b) = +\infty$, then the equality is clear.
Otherwise, let $t = \sup_{b \in B} \inf_{a \in A} f (a, b)$. This is a real number that is larger than or equal to $\inf_{a \in A} \sum_{i=1}^n c_i f (a, b_i)$ for every normalized simple valuation $\sum_{i=1}^n c_i \delta_{b_i}$ on $B$, by Lemma~\ref{lemma:minimax:concave}. Lemma~\ref{lemma:minimax:convex} then tells us that $\inf_{a \in A} \sup_{b \in B} f (a, b) \leq t$, which gives the $\geq$ direction of the inequality. The fact that the infimum on the right-hand side is attained is due to the fact that $A$ is compact, and that $a \mapsto \sup_{b \in B} f (a, b)$ is lower semicontinuous, as a pointwise supremum of lower semicontinuous maps. \qed \begin{rem} \label{rem:compact} Note that we do not require $A$ to be Hausdorff. We have already said so, but one should stress it, as compactness without the Hausdorff separation axiom is a very weak property. For example, any topological space with a least element in its specialization preordering is compact. \end{rem} \begin{rem} \label{rem:closely} Among the closely convex functions (in the first argument), one finds the \emph{convex} functions in the sense of Ky Fan \cite{KyFan:minimax}, that is, the maps such that for all $a_1, a_2 \in A$, for every $\alpha \in {]0, 1[}$, there is a point $a \in A$ such that for every $b \in B$, $f (a, b) \leq \alpha f (a_1, b) + (1-\alpha) f (a_2, b)$. Similarly with closely concave functions and concave functions in the sense of Ky Fan, which satisfy that for all $b_1, b_2 \in B$, for every $\alpha \in {]0, 1[}$, there is a point $b \in B$ such that for every $a \in A$, $f (a, b) \geq \alpha f (a, b_1) + (1-\alpha) f (a, b_2)$. 
Among the convex functions (in the first argument) in the sense of Ky Fan, one simply finds the functions $f$ such that $f (\alpha a_1 + (1-\alpha) a_2, b) \leq \alpha f (a_1, b) + (1-\alpha) f (a_2, b)$, namely those that are convex in their first argument, in the ordinary sense, provided one can interpret scalar multiplication and addition on $A$, for example when $A$ is a convex subset of a cone. One also finds more unusual cases of Ky Fan convexity. Consider for example a family of functions $f_i \colon A \to \mathbb{R} \cup \{+\infty\}$, $i \in I$. We can see them as one function $f \colon A \times I \to \mathbb{R} \cup \{+\infty\}$ by letting $f (a, i) = f_i (a)$. If the family ${(f_i)}_{i \in I}$ is directed, then $f$ is Ky Fan concave: for all $i, j \in I$, find $k \in I$ such that $f_i, f_j \leq f_k$, then for every $\alpha \in {[0, 1]}$, for every $a \in A$, $f_k (a) \geq \alpha f_i (a) + (1-\alpha) f_j (a)$. \end{rem} \subsection{Sublinear Previsions} \label{sec:sublinear-previsions} To study sublinear previsions, we recall from \cite{JGL-mscs16} that sublinear previsions are in bijection with the non-empty closed convex subsets of linear previsions, under some slight assumptions. More precisely, let $r_\AN$ be the function that maps every set $E$ of linear previsions to the prevision $h \in \Lform X \mapsto \sup_{G \in E} G (h)$. The latter is always a sublinear prevision, which is subnormalized, resp.\ normalized, as soon as $E$ is a non-empty set of subnormalized, resp.\ normalized linear previsions. In the reverse direction, for every sublinear prevision $F$, let $s_\AN (F)$ be the set of all linear previsions $G$ such that $G \leq F$. If $F$ is subnormalized, we write $s^{\leq 1}_\AN (F)$ for the set of all subnormalized linear previsions $G \leq F$, and if $F$ is normalized, we write $s^1_\AN (F)$ for the set of all normalized linear previsions $G \leq F$. 
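As a toy finite-dimensional illustration of $r_\AN$ (the space, weights, and test functions below are ours, chosen purely for illustration): on a three-point space, a normalized linear prevision amounts to a weight vector summing to $1$, and the pointwise sup of finitely many of them is sublinear, i.e., positively homogeneous and subadditive.

```python
# Toy sketch with made-up data: linear previsions on a 3-point space
# as weight vectors, G(h) = sum_i G_i * h_i; r_AN sends a set E of
# them to F(h) = sup_{G in E} G(h), which is a sublinear prevision.

def linear_prevision(weights):
    return lambda h: sum(w * v for w, v in zip(weights, h))

E = [linear_prevision([0.5, 0.25, 0.25]),
     linear_prevision([0.25, 0.25, 0.5])]    # two normalized linear previsions

def F(h):                                    # F = r_AN(E)
    return max(G(h) for G in E)

h1, h2 = [1.0, 0.0, 2.0], [0.0, 3.0, 1.0]
hsum = [x + y for x, y in zip(h1, h2)]

assert F([2 * x for x in h1]) == 2 * F(h1)   # positive homogeneity
assert F(hsum) <= F(h1) + F(h2)              # subadditivity
```

(The weights are dyadic rationals, so the floating-point comparisons above are exact.)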
We shall write $s^\bullet_\AN$ instead of $s_\AN$, $s^{\leq 1}_\AN$, or $s^1_\AN$, when the superscript is meant to be implied. Again, we equate linear previsions with continuous valuations. Correspondingly, we shall write $\Val_\bullet X$ for the space of linear previsions (resp., subnormalized, normalized, depending on the subscript) on $X$. Corollary~3.12 of \cite{JGL-mscs16} states that, under the assumption that $\Lform X$ is locally convex, that is, if every $h \in \Lform X$ has a base of convex open neighborhoods, then: \begin{itemize} \item $r_\AN$ is continuous from the space of sublinear (resp., subnormalized sublinear, resp., normalized sublinear) previsions on $X$, with the weak topology, to $\Hoare (\Val_\bullet X)$ with its lower Vietoris topology, and where $\Val_\bullet X$ has the weak topology; \item $s^\bullet_\AN$ is continuous in the reverse direction; \item $r_\AN \circ s^\bullet_\AN = \identity \relax$; \item $\identity \relax \leq s^\bullet_\AN \circ r_\AN$. \end{itemize} Hence the space of (possibly subnormalized, or normalized) sublinear previsions, with the weak topology, occurs as a retract of $\Hoare (\Val_\bullet X)$. Letting $\Hoare^{cvx} (\Val_\bullet X)$ denote the subspace of those closed sets in $\Hoare (\Val_\bullet X)$ that are convex, Theorem~4.11 of loc.\ cit.\ additionally states that $r_\AN$ and $s^\bullet_\AN$ define a homeomorphism between $\Hoare^{cvx} (\Val_\bullet X)$ and the corresponding space of (possibly subnormalized, or normalized) sublinear previsions, under the same local convexity assumption. That assumption is satisfied on continuous Yoneda-complete quasi-metric spaces, because: \begin{prop} \label{prop:locconvex} Let $X$ be a $\odot$-consonant space, for example, a continuous Yoneda-complete quasi-metric space. $\Lform X$, with its Scott topology, is locally convex. \end{prop} \proof Proposition~\ref{prop:co=Scott} tells us that the Scott topology on $\Lform X$ coincides with the compact-open topology. 
One observes easily that the latter is locally convex. Explicitly, let $h \in \Lform X$, let $\mathcal U$ be an open neighborhood of $h$ in the compact-open topology. We must show that $\mathcal U$ contains a convex open neighborhood of $h$. By definition of the compact-open topology, $\mathcal U$ contains a finite intersection of open subsets of the form $[Q > a]$, all containing $h$. Since any intersection of convex sets is convex, it suffices to show that $[Q > a]$, for $Q$ compact saturated and $a \in \mathbb{R}$, is convex. For any two elements $h_1, h_2 \in [Q > a]$ and any $\alpha \in [0, 1]$, for every $x \in Q$, $\alpha h_1 (x) + (1-\alpha) h_2 (x) > a$ because $h_1 (x) > a$ and $h_2 (x) > a$; so $\alpha h_1 + (1-\alpha) h_2$ is in $[Q > a]$. \qed Hence simply assuming $X, d$ continuous Yoneda-complete will allow us to use the facts stated above on $r_\AN$ and $s_\AN^\bullet$, which we shall do without further mention. We silently equate every element $G$ of $\Val_\bullet X$ (a continuous valuation) with the corresponding linear prevision (as in $G (h)$). \begin{lem} \label{lemma:H} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, and $\alpha > 0$. Let $\bullet$ be either ``$\leq 1$'' or ``$1$''. For each $h \in \Lform_\alpha (X, d)$, the map $H \colon G \in \Val_\bullet X \mapsto G (h)$ is $\alpha$-Lipschitz continuous, when $\Val_\bullet X$ is equipped with the $\dKRH$ quasi-metric. If $h \in \Lform^a_\alpha (X, d)$, where $a > 0$, then it is also $\alpha$-Lipschitz continuous with respect to the $\dKRH^a$ quasi-metric. \end{lem} \proof We first check that it is $\alpha$-Lipschitz: for all $G, G' \in \Val_\bullet X$, $\dreal (H (G), H (G')) = \dreal (G (h), G' (h)) \leq \alpha \dKRH (G, G')$, by definition of $\dKRH$. If additionally $h$ is bounded from above by $a$, then $\dreal (G (h), G' (h)) \leq \alpha \dKRH^a (G, G')$, by definition of $\dKRH^a$. Now build $H' \colon (G, r) \mapsto H (G) - \alpha r = G (h) - \alpha r$.
To show $\alpha$-Lipschitz continuity, we first recall that $\Val_\bullet X$ is Yoneda-complete (both with $\dKRH$ and with $\dKRH^a$) by Theorem~\ref{thm:V:complete}. In particular, it is standard, so we only have to show that $H'$ is Scott-continuous, thanks to Lemma~\ref{lemma:f'}. Theorem~\ref{thm:V:complete} also tells us that directed suprema of formal balls are computed as naive suprema. For every directed family ${(G_i, r_i)}_{i \in I}$ of formal balls on $\Val_\bullet X$, with (naive) supremum $(G, r)$, we have $r = \inf_{i \in I} r_i$ and $G (h) = \sup_{i \in I} (G_i (h) -\alpha r_i + \alpha r)$. It follows that $H' (G, r) = G (h) - \alpha r = \sup_{i \in I} (G_i (h) - \alpha r_i) = \sup_{i \in I} H' (G_i, r_i)$. \qed The following is where we require a minimax theorem. We silently equate every element $G'$ of $\mathcal C'$ (a continuous valuation) with the corresponding linear prevision (as in $G' (h)$). \begin{lem} \label{lemma:AN:supinf} Let $X, d$ be a continuous quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. For every simple valuation $G$ supported on center points in $\Val_\bullet X$, for every $\alpha > 0$, for every convex subset $\mathcal C'$ of $\Val_\bullet X$, \[ \sup_{h \in \Lform^a_\alpha X} \inf_{G' \in \mathcal C'} (G (h) - G' (h)) = \inf_{G' \in \mathcal C'} \sup_{h \in \Lform^a_\alpha X} (G (h) - G' (h)). \] \end{lem} \proof First, we note that $G (h) - G' (h)$ makes sense: since $h \in \Lform^a_\alpha (X, d)$ and $G$, $G'$ are subnormalized, $G (h)$ and $G' (h)$ are both in $[0, a]$, and never infinite. This also shows that the function $f \colon (h, G') \mapsto -G (h) + G' (h)$ is real-valued. We use $A = \Lform_\alpha^a (X, d)^\patch$. That makes sense because $\Lform_\alpha^a (X, d)$ is stably compact (Lemma~\ref{lemma:cont:Lalpha:retr}~(4)). Then $A$ is compact (and Hausdorff). We use $B = \mathcal C'$. 
Then $f$ is a map from $A \times B$ to $\mathbb{R}$, which is linear in both arguments, in the sense that it preserves sums and scalar products by non-negative reals. In particular, $f$ is certainly convex in its first argument and concave in its second argument. We claim that $h \mapsto f (h, G')$ is lower semicontinuous. Recall that $G$ is a simple valuation $\sum_{i=1}^n a_i \delta_{x_i}$. The map $h \mapsto \sum_{i=1}^n a_i h (x_i) = G (h)$ is continuous from $\Lform_\alpha^a (X, d)^\dG$ to $(\overline{\mathbb{R}}_+)^\dG$ by Corollary~\ref{corl:cont:Lalpha:simple}, hence also from $A$, whose patch topology is finer. Because the non-trivial open subsets of $(\overline{\mathbb{R}}_+)^\dG$ are of the form $[0, t[$, $t \in \mathbb{R}_+$, $h \mapsto G (h)$ is also continuous from $A$ to $\mathbb{R}$ with the Scott topology of the reverse ordering. Hence $h \mapsto - G (h)$ is continuous from $A$ to $\mathbb{R}$ with its Scott topology, i.e., it is lower semicontinuous. The map $h \mapsto G' (h)$ is also lower semicontinuous, by definition of a prevision. So $h \mapsto -G (h) + G' (h)$ is also lower semicontinuous, and that is the map $h \mapsto f (h, G')$. Theorem~\ref{thm:minimax} then implies that $\sup_{G' \in \mathcal C'} \allowbreak \inf_{h \in \Lform_\alpha^a (X, d)} (- G (h) + G' (h)) = \inf_{h \in \Lform_\alpha^a (X, d)} \allowbreak \sup_{G' \in \mathcal C'} (- G (h) + G' (h))$. Taking opposites yields the desired result. \qed \begin{lem} \label{lemma:cl:conv} Let $Y$ be a topological space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$''. The closure of any convex subset of $\Val_\bullet Y$, in the weak topology, is convex. \end{lem} \proof This is a classical exercise, and follows from a similar result on topological vector spaces. For a fixed number $\alpha \in ]0, 1[$, consider the map $f \colon (H_1, H_2) \in \Val_\bullet Y \times \Val_\bullet Y \mapsto \alpha H_1 + (1-\alpha) H_2 \in \Val_\bullet Y$.
The inverse image of $[k > b]$, where $k \in \Lform Y$, is $\{(H_1, H_2) \mid \alpha H_1 (k) + (1-\alpha) H_2 (k) > b\} = \{(H_1, H_2) \mid \exists b_1, b_2 . \alpha b_1 + (1-\alpha) b_2 \geq b, H_1 (k) > b_1, H_2 (k) > b_2\} = \bigcup_{b_1, b_2 / \alpha b_1 + (1-\alpha) b_2 \geq b} [k > b_1] \times [k > b_2]$, so $f$ is (jointly) continuous. Let $\mathcal C$ be convex, and suppose that $cl (\mathcal C)$ is not. There are elements $H_1$, $H_2$ of $cl (\mathcal C)$ and a real number $\alpha \in [0, 1]$ such that $\alpha H_1 + (1-\alpha) H_2$ is not in $cl (\mathcal C)$. Clearly, $\alpha$ is in $]0, 1[$. Let $\mathcal V$ be the complement of $cl (\mathcal C)$. This is an open set, and $(H_1, H_2)$ is in the open set $f^{-1} (\mathcal V)$, where $f$ is as above. By the definition of the product topology, there are two open neighborhoods $\mathcal U_1$ and $\mathcal U_2$ of $H_1$ and $H_2$ respectively such that $\mathcal U_1 \times \mathcal U_2 \subseteq f^{-1} (\mathcal V)$. Since $H_1$ is in $\mathcal U_1$, $\mathcal U_1$ intersects $cl (\mathcal C)$, hence also $\mathcal C$, say at $G_1$. Similarly, there is an element $G_2$ in $\mathcal U_2 \cap \mathcal C$. Since $\mathcal C$ is convex, $f (G_1, G_2) = \alpha G_1 + (1-\alpha) G_2$ is in $\mathcal C$. However, $(G_1, G_2)$ is also in $\mathcal U_1 \times \mathcal U_2 \subseteq f^{-1} (\mathcal V)$, so $f (G_1, G_2)$ is in $\mathcal V$, contradicting the fact that $\mathcal V$ is the complement of $cl (\mathcal C)$. \qed \begin{prop} \label{prop:AN:dKRH} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. For all $\mathcal C, \mathcal C' \in \Hoare (\Val_\bullet X)$, \begin{equation} \label{eq:AN:dKRH} \mH {(\dKRH^a)} (\mathcal C, \mathcal C') \geq \dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C')), \end{equation} with equality if $\mathcal C'$ is convex. \end{prop} \proof \emph{The inequality.} We develop the left-hand side.
Since $X, d$ is continuous Yoneda-complete, we can use Theorem~\ref{thm:V:complete}, so that $\Val_\bullet X, \dKRH^a$ is Yoneda-complete. In particular, it is standard, so we can apply Proposition~\ref{prop:H:prev}. That justifies the first of the following equalities; the others are by definition: \begin{eqnarray*} \mH {(\dKRH^a)} (\mathcal C, \mathcal C') & = & \KRH {(\dKRH^a)} (F^{\mathcal C}, F^{\mathcal C'}) \\ & = & \sup_{H \in \Lform_1 (\Val_\bullet X, \dKRH^a)} \dreal (F^{\mathcal C} (H), F^{\mathcal C'} (H)) \\ & = & \sup_{H \in \Lform_1 (\Val_\bullet X, \dKRH^a)} \dreal (\sup_{G \in \mathcal C} H (G), \sup_{G' \in \mathcal C'} H (G')). \end{eqnarray*} The right-hand side of (\ref{eq:AN:dKRH}) is equal to: \begin{eqnarray*} \dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C')) & = & \sup_{h \in \Lform_1^a X} \dreal (r_\AN (\mathcal C) (h), r_\AN (\mathcal C') (h)) \\ & = & \sup_{h \in \Lform_1^a X} \dreal (\sup_{G \in \mathcal C} G (h), \sup_{G' \in \mathcal C'} G' (h)). \end{eqnarray*} Among the elements $H$ of $\Lform_1 (\Val_\bullet X, \dKRH^a)$, we find those obtained from $h \in \Lform^a_1 (X, d)$ by letting $H (G) = G (h)$, according to Lemma~\ref{lemma:H}. Therefore the left-hand side of (\ref{eq:AN:dKRH}) is larger than or equal to its right-hand side. \vskip0.5em \emph{The equality, assuming $X, d$ algebraic.} Now assume $\mathcal C'$ convex. We will show that $\mH {(\dKRH^a)} (\mathcal C, \allowbreak \mathcal C') = \dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C'))$, and we do that first in the special case where $X, d$ is algebraic Yoneda-complete. If the inequality were strict, then there would be a real number $t$ such that $\mH {(\dKRH^a)} (\mathcal C, \allowbreak \mathcal C') > t > \dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C'))$. Note that the second inequality implies that $t$ is (strictly) positive.
Using Definition~\ref{defn:dH}, $\mH {(\dKRH^a)} (\mathcal C, \mathcal C') = \sup_{G \in \mathcal C} \dKRH^a (G, \mathcal C')$, so $\dKRH^a (G, \mathcal C') > t$ for some $G \in \mathcal C$. Recall that on a standard quasi-metric space $X, d$, the map $d (\_, C)$ is $1$-Lipschitz continuous for every $d$-Scott closed set $C$ (see the beginning of Section~\ref{sec:hoare-quasi-metric}, or Lemma~6.11 of \cite{JGL:formalballs} directly). Moreover, $d (x, C) \leq \inf_{y \in C} d (x, y)$, with equality if $x$ is a center point (Proposition~6.12, loc.\ cit.). Since $\Val_\bullet X, \dKRH^a$ is Yoneda-complete hence standard, the map $\dKRH^a (\_, \mathcal C')$ is $1$-Lipschitz continuous. By Theorem~\ref{thm:V:alg} (when $\bullet$ is ``$\leq 1$'') or Theorem~\ref{thm:V1:alg} (when $\bullet$ is ``$1$''), $(G, 0)$ is the (naive) supremum of a directed family ${(G_i, r_i)}_{i \in I}$ where each $G_i$ is a simple (subnormalized, resp.\ normalized) valuation supported on center points. Since $\dKRH^a (\_, \mathcal C')$ is $1$-Lipschitz continuous, the map $(F, r) \mapsto \dKRH^a (F, \mathcal C') - r$ is Scott-continuous, so $\dKRH^a (G, \mathcal C') = \sup_{i \in I} (\dKRH^a (G_i, \mathcal C') - r_i)$. It follows that $\dKRH^a (G_i, \mathcal C') - r_i > t$ for some $i \in I$. We recall that $\dKRH^a (G_i, \mathcal C') \leq \inf_{G' \in \mathcal C'} \dKRH^a (G_i, G')$; in fact, equality holds since $G_i$ is a center point. In any case, $\inf_{G' \in \mathcal C'} \dKRH^a (G_i, G') > r_i + t$. Expanding the definition of $\dKRH^a$, it follows that $\inf_{G' \in \mathcal C'} \sup_{h \in \Lform^a_1 X} \dreal (G_i (h), G' (h)) > r_i + t$. Then there is a real number $t' > t+r_i$ such that, for every $G' \in \mathcal C'$, there is an $h \in \Lform^a_1 (X, d)$ such that $\dreal (G_i (h), G' (h)) > t'$; in particular, since $t' > t > 0$ and therefore $G' (h)$ cannot be equal to $+\infty$, $G_i (h) - G' (h)$ makes sense and is strictly larger than $t'$.
Working in reverse, we obtain that $\inf_{G' \in \mathcal C'} \sup_{h \in \Lform^a_1 X} (G_i (h) - G' (h)) - r_i > t$. We now use Lemma~\ref{lemma:AN:supinf} and obtain that $\sup_{h \in \Lform^a_1 X} \inf_{G' \in \mathcal C'} (G_i (h) - G' (h)) - r_i > t$. Hence there is an $h \in \Lform^a_1 (X, d)$ such that $\inf_{G' \in \mathcal C'} (G_i (h) - G' (h)) - r_i > t$. That can be written equivalently as $G_i (h) - \sup_{G' \in \mathcal C'} G' (h) - r_i > t$. Since $(G_i, r_i) \leq^{\dKRH^{a+}} (G, 0)$, in particular $G_i (h) \leq G (h) + r_i$, so $G (h) - \sup_{G' \in \mathcal C'} G' (h) > t$. This certainly implies that $\sup_{G \in \mathcal C} G (h) - \sup_{G' \in \mathcal C'} G' (h) > t$, and therefore that $\dreal (\sup_{G \in \mathcal C} G (h), \sup_{G' \in \mathcal C'} G' (h)) > t$. However, we had assumed that $t > \dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C')) = \sup_{h \in \Lform_1^a X} \dreal (\sup_{G \in \mathcal C} G (h), \sup_{G' \in \mathcal C'} G' (h))$, leading to a contradiction. This shows $\mH {(\dKRH^a)} (\mathcal C, \allowbreak \mathcal C') = \dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C'))$ when $\mathcal C'$ is convex, assuming that $X, d$ is algebraic Yoneda-complete. \vskip0.5em \emph{The equality, in the general case.} We now deal with the general case, where $X, d$ is continuous Yoneda-complete. We use \cite[Theorem~7.9]{JGL:formalballs}: $X, d$ is a retract of some algebraic Yoneda-complete space $Y, \partial$ through $1$-Lipschitz continuous maps $r \colon Y \to X$ and $s \colon X \to Y$. 
We check that $r_\AN \circ \Hoarez (\Prev s) = \Prev s \circ r_\AN$ (namely that $r_\AN$ is a natural transformation): for every $\mathcal C \in \Hoare (\Val_\bullet X)$, for every $k \in \Lform Y$, \begin{eqnarray*} r_\AN (\Hoarez (\Prev s) (\mathcal C)) (k) & = & \sup_{H \in \Hoarez (\Prev s) (\mathcal C)} H (k) \\ & = & \sup_{H \in cl (\Prev s [\mathcal C])} H (k) \qquad\text{(see Lemma~\ref{lemma:H:functor})} \\ & = & \sup_{H \in \Prev s [\mathcal C]} H (k) \qquad \text{by Lemma~\ref{lemma:H:cl:sup}} \\ & = & \sup_{G \in \mathcal C} \Prev s (G) (k) = \sup_{G \in \mathcal C} G (k \circ s). \end{eqnarray*} Note that Lemma~\ref{lemma:H:cl:sup} applies because $H \mapsto H (k)$ is lower semicontinuous: the inverse image of $]b, +\infty]$ is $[k > b]$, which is open in $\Val_\bullet Y$ with the weak topology, and we recall that this coincides with the $\KRH\partial^a$-Scott topology, since $Y, \partial$ is continuous Yoneda-complete. The other term we need to compute is $\Prev s (r_\AN (\mathcal C)) (k) = r_\AN (\mathcal C) (k \circ s) = \sup_{G \in \mathcal C} G (k \circ s)$. Therefore $r_\AN \circ \Hoarez (\Prev s) = \Prev s \circ r_\AN$. Another fact we need to observe is that, when $\mathcal C'$ is convex, $\Hoarez (\Prev s) (\mathcal C') = cl (\Prev s [\mathcal C'])$ is convex. It is easy to see that $\Prev s [\mathcal C']$ is convex: for any two elements $\Prev s (G_1)$, $\Prev s (G_2)$ where $G_1, G_2 \in \mathcal C'$, for every $\alpha \in [0, 1]$, $\alpha \Prev s (G_1) + (1-\alpha) \Prev s (G_2)$ maps every $k \in \Lform Y$ to $\alpha G_1 (k \circ s) + (1-\alpha) G_2 (k \circ s)$, and is therefore equal to $\Prev s (\alpha G_1 + (1-\alpha) G_2)$, which is in $\Prev s [\mathcal C']$. Then we apply Lemma~\ref{lemma:cl:conv}. We can now finish our proof. 
In the first line, we use that $r \circ s$ is the identity map, and that $\Hoarez$ and $\Val_\bullet$ are functorial (Corollary~\ref{cor:H:functor}, and Corollary~\ref{cor:V:functor} or Corollary~\ref{cor:V1:functor:a} depending on the value of $\bullet$). \begin{eqnarray*} \mH {(\dKRH^a)} (\mathcal C, \mathcal C') & = & \mH {(\dKRH^a)} ((\Hoarez \circ\Prev) (r) ((\Hoarez \circ \Prev) (s) (\mathcal C)), (\Hoarez \circ \Prev) (r) ((\Hoarez \circ \Prev) (s) (\mathcal C'))) \\ & \leq & \mH {(\dKRH^a)} (\Hoarez (\Prev s) (\mathcal C), \Hoarez (\Prev s) (\mathcal C')) \qquad\text{since $(\Hoarez \circ \Prev) (r)$ is $1$-Lipschitz} \\ & = & \dKRH^a (r_\AN (\Hoarez (\Prev s) (\mathcal C)), r_\AN (\Hoarez (\Prev s) (\mathcal C'))), \end{eqnarray*} since we know that the equality version of (\ref{eq:AN:dKRH}) holds on the algebraic Yoneda-complete space $Y, \partial$, and since $\Hoarez (\Prev s) (\mathcal C') = cl (\Prev s [\mathcal C'])$ is convex, as we have just seen. We now use our previous remark that $r_\AN \circ \Hoarez (\Prev s) = \Prev s \circ r_\AN$ to obtain $\mH {(\dKRH^a)} (\mathcal C, \mathcal C') \leq \dKRH^a (\Prev s (r_\AN (\mathcal C)), \allowbreak \Prev s (r_\AN (\mathcal C')))$, and the latter is less than or equal to $\dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C'))$ since $\Prev s$ is $1$-Lipschitz. Since by (\ref{eq:AN:dKRH}), the other inequality $\mH {(\dKRH^a)} (\mathcal C, \mathcal C') \geq \dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C'))$ holds, equality follows. \qed \begin{lem} \label{lemma:AN:s:lip} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. Then: \begin{enumerate} \item $r_\AN$ is $1$-Lipschitz from $\Hoare (\Val_\bullet X), \mH {(\dKRH^a)}$ to the space of (sub)normalized sublinear previsions on $X$ with the $\dKRH^a$ quasi-metric; \item $s^\bullet_\AN$ preserves distances on the nose; in particular, $s^\bullet_\AN$ is $1$-Lipschitz.
\end{enumerate} \end{lem} \proof (1) The first part of Proposition~\ref{prop:AN:dKRH} implies that $r_\AN$ is $1$-Lipschitz. (2) For any two (sub)normalized sublinear previsions $F$ and $F'$, $s^\bullet_\AN (F') = \{G \in \Val_\bullet X \mid G \leq F'\}$ is always convex, so the second part of Proposition~\ref{prop:AN:dKRH} implies that: \[ \dKRH^a (F, F') = \dKRH^a (r_\AN (s^\bullet_\AN (F)), r_\AN (s^\bullet_\AN (F'))) = \mH {(\dKRH^a)} (s^\bullet_\AN (F), s^\bullet_\AN (F')). \] We would like to note a possible, subtle issue. In writing ``$\Hoare (\Val_\bullet X), \mH {(\dKRH^a)}$'', we silently assume that $\Val_\bullet X$ has the $\dKRH^a$-Scott topology, and the elements of $\Hoare (\Val_\bullet X)$ are closed subsets for that topology. However Proposition~\ref{prop:AN:dKRH} considers closed subsets for the weak topology. Fortunately, the two topologies are the same, by Theorem~\ref{thm:V:weak=dScott}. \qed \begin{lem} \label{lemma:AN:r:lipcont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. Then $r_\AN$ is $1$-Lipschitz continuous from $\Hoare (\Val_\bullet X), \mH {(\dKRH^a)}$ to the space of (sub)normalized sublinear previsions on $X$ with the $\dKRH^a$ quasi-metric. \end{lem} \proof By Lemma~\ref{lemma:AN:s:lip}~(1), it is $1$-Lipschitz. In particular, $\mathbf B^1 (r_\AN)$ is monotonic. Similarly $\mathbf B^1 (s^\bullet_\AN)$ is monotonic. Since $r_\AN \circ s^\bullet_\AN = \identity\relax$, $\mathbf B^1 (r_\AN) \circ \mathbf B^1 (s^\bullet_\AN) = \identity\relax$. Since $\identity\relax \leq s^\bullet_\AN \circ r_\AN$, i.e., since $\mathcal C \subseteq s^\bullet_\AN (r_\AN (\mathcal C))$ for every $\mathcal C$, we also have that $(\mathcal C, r) \leq^{\mH {(\dKRH^a)}^+} \mathbf B^1 (s^\bullet_\AN) (\mathbf B^1 (r_\AN) (\mathcal C, r))$: indeed $\mH {(\dKRH^a)} (\mathcal C, s^\bullet_\AN (r_\AN (\mathcal C))) = 0 \leq r - r$. 
This shows that $\mathbf B^1 (r_\AN)$ is left adjoint to $\mathbf B^1 (s^\bullet_\AN)$. We conclude because left adjoints preserve all existing suprema, hence in particular directed suprema. \qed We require the following order-theoretic lemma. \begin{lem} \label{lemma:orderretract} Every order-retract of a dcpo is a dcpo. Precisely, for any two monotonic maps $p \colon B \to A$ and $e \colon A \to B$ such that $p \circ e = \identity A$, if $B$ is a dcpo, then $A$ is a dcpo. Moreover, the supremum of any directed family ${(a_i)}_{i \in I}$ in $A$ is $p (\sup_{i \in I} e (a_i))$. \end{lem} \proof This is an easy exercise, but since one may be surprised by the fact that we do not need $p$ or $e$ to be Scott-continuous for that, we give the complete proof. First $p (\sup_{i \in I} e (a_i))$ is larger than $p (e (a_i)) = a_i$ for every $i \in I$, since $p$ is monotonic. Then, if $b$ is any other upper bound of ${(a_i)}_{i \in I}$ in $A$, then $e (a_i) \leq e (b)$ for every $i \in I$, because $e$ is monotonic, so $\sup_{i \in I} e (a_i) \leq e (b)$. It follows that $p (\sup_{i \in I} e (a_i)) \leq p (e (b)) = b$. \qed \begin{prop}[Yoneda-completeness, sublinear previsions] \label{prop:AN:Ycomplete} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. The space of (sub)normalized sublinear previsions on $X$ with the $\dKRH^a$ quasi-metric is Yoneda-complete, and directed suprema of formal balls are computed as naive suprema. \end{prop} \proof By Theorem~\ref{thm:V:alg}, resp.\ Theorem~\ref{thm:V1:alg}, $\Val_\bullet X, \dKRH^a$ is algebraic Yoneda-complete, so we can use Proposition~\ref{prop:HX:Ycomplete} and conclude that $\Hoare (\Val_\bullet X), \mH {(\dKRH^a)}$ is Yoneda-complete, and that directed suprema of formal balls are computed as naive suprema. In particular, $\mathbf B (\Hoare (\Val_\bullet X), \mH {(\dKRH^a)})$ is a dcpo. 
Moreover, $p = \mathbf B^1 (r_\AN)$ and $e = \mathbf B^1 (s^\bullet_\AN)$ are monotonic by Lemma~\ref{lemma:AN:s:lip}, and form an order-retraction. By Lemma~\ref{lemma:orderretract}, the poset of formal balls of the space of (sub)normalized sublinear previsions is a dcpo, and that shows that the latter space is Yoneda-complete. We compute the supremum of a directed family ${(F_i, r_i)}_{i \in I}$ of formal balls, where each $F_i$ is a (sub)normalized sublinear prevision. This is $p (\sup_{i \in I} e (F_i, r_i))$, where $p$ and $e$ are as above. For each $i \in I$, $e (F_i, r_i) = (s^\bullet_\AN (F_i), r_i)$. The closed set $\mathcal C_i = s^\bullet_\AN (F_i)$ has an associated discrete sublinear prevision $F^{\mathcal C_i}$, defined as mapping $H \in \Lform (\Val_\bullet X)$ to $\sup_{G \in \mathcal C_i} H (G) = \sup_{G \in \Val_\bullet X, G\leq F_i} H (G)$. Since directed suprema are naive suprema in $\mathbf B (\Hoare (\Val_\bullet X), \mH {(\dKRH^a)})$, $\sup_{i \in I} e (F_i, r_i)$ is $(\mathcal C, r)$ where $r = \inf_{i \in I} r_i$ and $F^{\mathcal C} (H) = \sup_{i \in I} (F^{\mathcal C_i} (H) - \alpha r_i + \alpha r) = \sup_{i \in I} (\sup_{G \in \Val_\bullet X, G \leq F_i} H (G) - \alpha r_i + \alpha r)$, for every $H \in \Lform^a_\alpha (\Val_\bullet X, \dKRH^a)$. Then $p (\sup_{i \in I} e (F_i, r_i))$ is the formal ball $(F, r)$, where $F = r_\AN (\mathcal C)$. In particular, for every $h \in \Lform^a_\alpha (X, d)$, $F (h) = \sup_{G \in \mathcal C} G (h)$. The map $H \colon G \mapsto G (h)$ is in $\Lform^a_\alpha (\Val_\bullet X, \dKRH^a)$ by Lemma~\ref{lemma:H}, and: \begin{eqnarray*} F (h) & = & \sup_{G \in \mathcal C} H (G) \\ & = & F^{\mathcal C} (H) \\ & = & \sup_{i \in I} (\sup_{G \in \Val_\bullet X, G \leq F_i} H (G) - \alpha r_i + \alpha r) \\ & = & \sup_{i \in I} (\sup_{G \in \Val_\bullet X, G \leq F_i} G (h) - \alpha r_i + \alpha r) \\ & = & \sup_{i \in I} (F_i (h) - \alpha r_i + \alpha r). 
\end{eqnarray*} \qed \begin{lem} \label{lemma:AN:s:lipcont} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. Then $s^\bullet_\AN$ is $1$-Lipschitz continuous from the space of (sub)normalized sublinear previsions on $X$ with the $\dKRH^a$ quasi-metric to $\Hoare (\Val_\bullet X), \mH {(\dKRH^a)}$. \end{lem} \proof By Theorem~\ref{thm:V:weak=dScott}, the $\dKRH^a$-Scott topology coincides with the weak topology on $\Val_\bullet X$. By Theorem~\ref{thm:V:alg}, resp.\ Theorem~\ref{thm:V1:alg}, $\Val_\bullet X, \dKRH^a$ is algebraic Yoneda-complete, so we can use Proposition~\ref{prop:H:Vietoris} and conclude that the $\mH {(\dKRH^a)}$-Scott topology coincides with the lower Vietoris topology on $\Hoare (\Val_\bullet X)$. Proposition~\ref{prop:AN:Ycomplete} gives us the necessary properties that allow us to use Proposition~\ref{prop:weak:dScott:a}: the $\dKRH^a$-Scott topology on our space of sublinear previsions is finer than the weak topology. Since $s^\bullet_\AN$ is continuous from that space of sublinear previsions with the weak topology to $\Hoare (\Val_\bullet X)$ with its lower Vietoris topology, it is therefore also continuous with respect to the $\dKRH^a$-topology on the former, and the $\mH {(\dKRH^a)}$-Scott topology on the latter. We now know that $s^\bullet_\AN$ is both $1$-Lipschitz and continuous with respect to the $\dKRH^a$-Scott and $\mH {(\dKRH^a)}$-Scott topologies. By Proposition~\ref{prop:cont}, it must be $1$-Lipschitz continuous. That proposition applies because all the spaces involved are Yoneda-complete hence standard. In particular, our space of sublinear previsions is Yoneda-complete by Proposition~\ref{prop:AN:Ycomplete}, and $\Hoare (\Val_\bullet X)$ is Yoneda-complete by Proposition~\ref{prop:HX:Ycomplete}. 
\qed \begin{lem} \label{lemma:retract:alg} Let $X, d$ and $Y, \partial$ be two quasi-metric spaces, and two $1$-Lipschitz continuous maps: \[ \xymatrix{Y, \partial \ar@<1ex>[r]^{r} & X, d \ar@<1ex>[l]^{s}} \] with $r \circ s = \identity X$. (I.e., $X, d$ is a \emph{$1$-Lipschitz continuous retract} of $Y, \partial$ through $r$, $s$.) If $Y, \partial$ is algebraic Yoneda-complete, then $X, d$ is continuous Yoneda-complete. Every point of $X$ is a $d$-limit of a Cauchy-weightable net of points of the form $r (y)$ with $y$ taken from a strong basis of $Y, \partial$. If additionally $d (r (y), x) = \partial (y, s (x))$ for all $x \in X$ and $y \in Y$, then $X, d$ is algebraic Yoneda-complete, and a strong basis of $X, d$ is given by the image of a strong basis of $Y, \partial$ by $r$. \end{lem} \proof $\mathbf B^1 (r)$ and $\mathbf B^1 (s)$ form a retraction, in the sense that those are (Scott-)continuous maps and $\mathbf B^1 (r) \circ \mathbf B^1 (s) = \identity {\mathbf B (X, d)}$. The continuous retracts of continuous dcpos are continuous dcpos (see Corollary~8.3.37 of \cite{JGL-topology} for example), so $\mathbf B (X, d)$ is a continuous dcpo. Now assume that $B$ is a strong basis of $Y, \partial$. Let $A$ be the image of $B$ by $r$. By Lemma~\ref{lemma:B:basis}, the set $\mathring B$ of formal balls whose centers are in $B$ is a basis of the continuous dcpo $\mathbf B (Y, \partial)$. Then the image of $\mathring B$ by $\mathbf B^1 (r)$ is a basis of $\mathbf B (X, d)$. This can be found as Exercise~5.1.44 of loc.\ cit., for example: the image of a basis by a retraction is a basis. The image of $\mathring B$ is the set $\mathring A$ of formal balls whose centers are in $A$. That $\mathring A$ is a basis means exactly that every element of $X$ is the $d$-limit of a Cauchy-weightable net of points in $A$, considering that $X, d$ is Yoneda-complete. 
In order to show that $A$ is a strong basis, it remains to show that the elements $a = r (b)$ of $A$ are center points, under our additional assumption that $d (r (y), x) = \partial (y, s (x))$ for all $x \in X$ and $y \in Y$. Consider the open ball $B^{d^+}_{(a, 0), <\epsilon}$. This is the set of formal balls $(x, t)$ on $X$ such that $d (a, x) < \epsilon -t$, or equivalently $\partial (b, s (x)) < \epsilon - t$, using the additional assumption. Hence $B^{d^+}_{(a, 0), <\epsilon} = \mathbf B^1 (s)^{-1} (B^{\partial^+}_{(b, 0), <\epsilon})$. This is open since $b$ is a center point and $\mathbf B^1 (s)$ is continuous. \qed \begin{thm}[Continuity for sublinear previsions] \label{thm:AN:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, and $a > 0$. The space of all subnormalized (resp., normalized) sublinear previsions on $X$ with the $\dKRH^a$ quasi-metric is continuous Yoneda-complete. It occurs as a $1$-Lipschitz continuous retract of $\Hoare (\Val_{\leq 1} X), \mH {(\dKRH^a)}$ (resp., $\Hoare (\Val_1 X), \allowbreak \mH {(\dKRH^a)}$), and as an isometric copy of $\Hoare^{cvx} (\Val_{\leq 1} X), \mH {(\dKRH^a)}$ (resp., $\Hoare^{cvx} (\Val_1 X), \mH {(\dKRH^a)}$) through $r_\AN$ and $s^{\leq 1}_\AN$ (resp., $s^1_\AN$). The $\dKRH^a$-Scott topology on the space of subnormalized (resp., normalized) sublinear previsions coincides with the weak topology. \end{thm} \proof We have already seen that $r_\AN$ and $s^\bullet_\AN$ are continuous, with respect to the weak topologies. By Lemma~\ref{lemma:AN:r:lipcont} and Lemma~\ref{lemma:AN:s:lipcont}, they are also $1$-Lipschitz continuous with respect to $\mH {(\dKRH^a)}$ on $\Hoare (\Val_\bullet X)$ and $\dKRH^a$ on the space of (sub)normalized sublinear previsions. Recall also that $r_\AN \circ s^\bullet_\AN = \identity \relax$. 
This shows the $1$-Lipschitz continuous retract part, and also the isometric copy part, since $s^\bullet_\AN$ and $r_\AN$ restrict to bijections with the subspace $\Hoare^{cvx} (\Val_\bullet X)$. We turn to continuity. By Theorem~\ref{thm:Vleq1:cont} or Theorem~\ref{thm:V1:cont}, $\Val_\bullet X, \dKRH^a$ is continuous Yoneda-complete. We can then use Theorem~\ref{thm:H:cont} to conclude that $\Hoare (\Val_\bullet X), \mH{(\dKRH^a)}$ is continuous Yoneda-complete. Since every $1$-Lipschitz continuous retract of a continuous Yoneda-complete quasi-metric space is continuous Yoneda-complete (Corollary~8.3.37 of \cite{JGL-topology}, already used in the proof of Lemma~\ref{lemma:retract:alg}), the space of all subnormalized (resp., normalized) sublinear previsions on $X$ with the $\dKRH^a$ quasi-metric is continuous Yoneda-complete. We show that the $\dKRH^a$-Scott topology on the space of (sub)normalized sublinear previsions is the weak topology. We use Fact~\ref{fact:retract:two} with the space of (sub)normalized sublinear previsions, either with the weak or with the $\dKRH^a$-Scott topology, for $A$, and $\Hoare (\Val_\bullet X)$ with its $\mH {(\dKRH^a)}$-Scott topology, or equivalently with its weak topology, for $B$. The latter two topologies coincide because of Theorem~\ref{thm:V:weak=dScott} and Proposition~\ref{prop:H:Vietoris}. Since $s^\bullet_\AN$ is the section part of a $1$-Lipschitz continuous retraction, it is a topological embedding of $A$ with the $\dKRH^a$-Scott topology into $B$. We recall that $s^\bullet_\AN$ is also a section, hence a topological embedding of $A$ with the weak topology into $\Hoare (\Val_\bullet X)$ with its lower Vietoris topology, that is, into $B$, and this allows us to conclude. \qed We did not need to show that directed suprema of formal balls were computed as naive suprema to prove the latter. Still, that is true. \begin{lem} \label{lemma:AN:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, and $a > 0$. 
The suprema of directed families ${(F_i, r_i)}_{i \in I}$ of formal balls, where each $F_i$ is a (sub)normalized sublinear prevision on $X$, are naive suprema. \end{lem} \proof Let $\alpha > 0$, and $h$ be in $\Lform_\alpha^a (X, d)$. By Theorem~\ref{thm:AN:cont}, $\mathbf B^1 (r_\AN)$ is Scott-continuous and $\mathbf B^1 (r_\AN) \circ \mathbf B^1 (s_\AN^\bullet)$ is the identity. The supremum of ${(F_i, r_i)}_{i \in I}$ is then $\mathbf B^1 (r_\AN) (\mathcal C, r) = (r_\AN (\mathcal C), r)$, where $(\mathcal C, r) = \sup_{i \in I} (s_\AN^\bullet (F_i), r_i)$. By Proposition~\ref{prop:HX:Ycomplete}, this is a naive supremum, meaning that $r = \inf_{i \in I} r_i$ and $F^{\mathcal C} (H) = \sup_{i \in I} (F^{s_\AN^\bullet (F_i)} (H) + \alpha r - \alpha r_i)$ for every $H \in \Lform_\alpha (\Val_\bullet X, \dKRH^a)$. In other words, $F^{\mathcal C} (H) = \sup_{i \in I} (\sup_{G \in s_\AN^\bullet (F_i)} H (G) + \alpha r - \alpha r_i)$. Taking $H (G) = G (h)$ (see Lemma~\ref{lemma:H}), we obtain: \begin{eqnarray*} F^{\mathcal C} (H) & = & \sup_{i \in I} (\sup_{G \in s_\AN^\bullet (F_i)} G (h) + \alpha r - \alpha r_i) \\ & = & \sup_{i \in I} (r_\AN (s_\AN^\bullet (F_i)) (h) + \alpha r - \alpha r_i) \\ & = & \sup_{i \in I} (F_i (h) + \alpha r - \alpha r_i), \end{eqnarray*} where the last equalities are by definition of $r_\AN$, and because $r_\AN \circ s_\AN^\bullet = \identity\relax$. By using the definition of $F^{\mathcal C}$, then of $H$, then of $r_\AN$, we obtain $F^{\mathcal C} (H) = \sup_{G \in \mathcal C} H (G) = \sup_{G \in \mathcal C} G (h) = r_\AN (\mathcal C) (h)$, showing that $r_\AN (\mathcal C) (h) = \sup_{i \in I} (F_i (h) + \alpha r - \alpha r_i)$, as desired. \qed We turn to algebraic Yoneda-complete spaces. \begin{thm}[Algebraicity for sublinear previsions] \label{thm:AN:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with a strong basis $\mathcal B$. Let $a > 0$. 
The space of all subnormalized (resp., normalized) sublinear previsions on $X$ with the $\dKRH^a$ quasi-metric is algebraic Yoneda-complete. All the sublinear previsions of the form $h \mapsto \max_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$, with $m \geq 1$, $\sum_{j=1}^{n_i} a_{ij} \leq 1$ (resp., $=1$) for every $i$, and where each $x_{ij}$ is a center point, are center points, and they form a strong basis, even when each $x_{ij}$ is constrained to lie in $\mathcal B$. \end{thm} \proof For every $\mathcal C \in \Hoare (\Val_\bullet X)$ and every (sub)normalized sublinear prevision $F'$, we have: \begin{eqnarray*} \dKRH^a (r_\AN (\mathcal C), F') & = & \dKRH^a (r_\AN (\mathcal C), r_\AN (\mathcal C')) \qquad \text{where }\mathcal C' = s^\bullet_\AN (F') \\ & = & \mH {(\dKRH^a)} (\mathcal C, \mathcal C') \qquad \text{by Proposition~\ref{prop:AN:dKRH}, since $\mathcal C'$ is convex} \\ & = & \mH {(\dKRH^a)} (\mathcal C, s^\bullet_\AN (F')). \end{eqnarray*} This is exactly the additional assumption we require to apply the final part of Lemma~\ref{lemma:retract:alg}. It follows that the space of (sub)normalized sublinear previsions is algebraic Yoneda-complete, with a strong basis of elements of the form $r_\AN (\dc \{\sum_{j=1}^{n_i} a_{ij} \delta_{x_{ij}} \mid 1\leq i\leq m\})$, where $m \geq 1$, each $\sum_{j=1}^{n_i} a_{ij} \delta_{x_{ij}}$ is a (sub)normalized simple valuation and each $x_{ij}$ is in $\mathcal B$. An easy check shows that those elements are exactly the maps $h \mapsto \max_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h ({x_{ij}})$. \qed \begin{lem} \label{lemma:AN:functor} Let $X, d$ and $Y, \partial$ be two continuous Yoneda-complete quasi-metric spaces, and $f \colon X, d \mapsto Y, \partial$ be a $1$-Lipschitz continuous map. Let also $a > 0$. 
$\Prev f$ restricts to a $1$-Lipschitz continuous map from the space of normalized, resp.\ subnormalized sublinear previsions on $X$ to the space of normalized, resp.\ subnormalized sublinear previsions on $Y$, with the $\dKRH^a$ and $\KRH\partial^a$ quasi-metrics. \end{lem} \proof By Lemma~\ref{lemma:Pf:lip}, $\Prev f$ is $1$-Lipschitz and maps (sub)normalized sublinear previsions to (sub)normalized sublinear previsions. Also, by Lemma~\ref{lemma:Pf:lipcont}, $\mathbf B^1 (\Prev f)$ preserves naive suprema. By Lemma~\ref{lemma:AN:cont}, suprema of directed families of formal balls of (sub)normalized sublinear previsions are naive suprema, so $\mathbf B^1 (\Prev f)$ is Scott-continuous. It follows that $\Prev f$ is $1$-Lipschitz continuous. \qed With Theorem~\ref{thm:AN:alg} and Theorem~\ref{thm:AN:cont}, we obtain the following. \begin{cor} \label{cor:AN:functor} There is an endofunctor on the category of continuous (resp., algebraic) Yoneda-complete quasi-metric spaces and $1$-Lipschitz continuous maps, which sends every object $X, d$ to the space of (sub)normalized sublinear previsions on $X$ with the $\dKRH^a$-Scott topology ($a > 0$), and every $1$-Lipschitz continuous map $f$ to $\Prev f$. \qed \end{cor} \subsection{Superlinear Previsions} \label{sec:superl-prev} Superlinear previsions are dealt with in a very similar fashion, although, curiously, some of the steps will differ in essential ways. We recall from \cite{JGL-mscs16} that superlinear previsions are in bijection with the non-empty compact saturated convex subsets of linear previsions. Let $r_\DN$ be the function that maps every non-empty set $E$ of linear previsions to the prevision $h \in \Lform X \mapsto \inf_{G \in E} G (h)$. The latter is a superlinear prevision as soon as $E$ is compact, which is subnormalized, resp.\ normalized, as soon as $E$ is a non-empty set of subnormalized, resp.\ normalized linear previsions. 
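As a quick sanity check (ours, purely illustrative, not taken from loc.\ cit.), here is why $r_\DN (E)$ is superlinear in the simplest non-trivial case, $E = \{G_1, G_2\}$, where $r_\DN (E) (h) = \min (G_1 (h), G_2 (h))$:

```latex
\begin{eqnarray*}
  r_\DN (E) (h + h') & = & \min (G_1 (h) + G_1 (h'), G_2 (h) + G_2 (h')) \\
  & \geq & \min (G_1 (h), G_2 (h)) + \min (G_1 (h'), G_2 (h')) \\
  & = & r_\DN (E) (h) + r_\DN (E) (h'),
\end{eqnarray*}
```

where the first equality is by linearity of each $G_i$, and the inequality is the superadditivity of $\min$; positive homogeneity is clear, since $\min (G_1 (\alpha h), G_2 (\alpha h)) = \alpha \min (G_1 (h), G_2 (h))$ for every $\alpha \in \mathbb{R}_+$.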
In the reverse direction, for every superlinear prevision $F$, let $s_\DN (F)$ be the set of all linear previsions $G$ such that $G \geq F$. If $F$ is subnormalized, we write $s^{\leq 1}_\DN (F)$ for the set of all subnormalized linear previsions $G \geq F$, and if $F$ is normalized, we write $s^1_\DN (F)$ for the set of all normalized linear previsions $G \geq F$. As in Section~\ref{sec:sublinear-previsions}, we use the notation $s^\bullet_\DN$. Proposition~3.22 of loc.\ cit.\ states that: \begin{itemize} \item $r_\DN$ is continuous from $\Smyth (\Val_\bullet X)$ with its upper Vietoris topology, where $\Val_\bullet X$ has the weak topology, to the space of superlinear (resp., subnormalized superlinear, resp., normalized superlinear) previsions on $X$; \item $s^\bullet_\DN$ is continuous in the reverse direction; \item $r_\DN \circ s^\bullet_\DN = \identity \relax$; \item $\identity \relax \geq s^\bullet_\DN \circ r_\DN$. \end{itemize} Hence the space of (possibly subnormalized, or normalized) superlinear previsions occurs as a retract of $\Smyth (\Val_\bullet X)$. This holds for all topological spaces $X$, with no restriction. Let $\Smyth^{cvx} (\Val_\bullet X)$ denote the subspace of $\Smyth (\Val_\bullet X)$ consisting of convex sets. Theorem~4.15 of loc.\ cit.\ additionally states that $r_\DN$ and $s^\bullet_\DN$ define a homeomorphism between $\Smyth^{cvx} (\Val_\bullet X)$ and the corresponding space of (possibly subnormalized, or normalized) superlinear previsions. The following is where we require a minimax theorem on a non-Hausdorff space. Although this looks very similar to Lemma~\ref{lemma:AN:supinf}, the compact space we use in conjunction with our minimax theorem is not $\Lform^a_\alpha (X, d)^\patch$, but a space $\mathcal Q$ of linear previsions. The latter is almost never Hausdorff. \begin{lem} \label{lemma:DN:supinf} Let $X, d$ be a continuous quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$.
For every continuous valuation $G'$ in $\Val_\bullet X$, for every $\alpha > 0$, for every compact convex subset $\mathcal Q$ of $\Val_\bullet X$ (with the weak topology), \[ \sup_{h \in \Lform^a_\alpha X} \inf_{G \in \mathcal Q} (G (h) - G' (h)) = \inf_{G \in \mathcal Q} \sup_{h \in \Lform^a_\alpha X} (G (h) - G' (h)). \] \end{lem} \proof Let $A = \mathcal Q$. This is compact by assumption. Let $B = \Lform^a_\alpha (X, d)$, and let $f \colon (G, h) \in A \times B \mapsto (G (h) - G' (h))$. This is a function that takes its values in $[-a, a]$, hence in $\mathbb{R}$. It is also lower semicontinuous in $G$, since the set of elements $G \in \Val_\bullet X$ such that $f (G, h) > t$ is equal to $[h > G' (h) + t]$, an open set by the definition of the weak topology. The function $f$ is certainly convex in its first argument and concave in its second argument, being linear in both. Theorem~\ref{thm:minimax} then implies that $\sup_{h \in \Lform_\alpha^a X} \inf_{G \in \mathcal Q} (G (h) - G' (h)) = \inf_{G \in \mathcal Q} \sup_{h \in \Lform_\alpha^a X} (G (h) - G' (h))$. \qed \begin{prop} \label{prop:DN:dKRH} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. For all $\mathcal Q, \mathcal Q' \in \Smyth (\Val_\bullet X)$, where $\Val_\bullet X$ has the weak topology, \begin{equation} \label{eq:DN:dKRH} \mQ {(\dKRH^a)} (\mathcal Q, \mathcal Q') \geq \dKRH^a (r_\DN (\mathcal Q), r_\DN (\mathcal Q')), \end{equation} with equality if $\mathcal Q$ is convex. \end{prop} \proof The proof is extremely close to Proposition~\ref{prop:AN:dKRH}. 
We develop the left-hand side, using Definition~\ref{defn:dQ} in the first line: \begin{eqnarray*} \mQ {(\dKRH^a)} (\mathcal Q, \mathcal Q') & = & \KRH {(\dKRH^a)} (F_{\mathcal Q}, F_{\mathcal Q'}) \\ & = & \sup_{H \in \Lform_1 (\Val_\bullet X, \dKRH^a)} \dreal (F_{\mathcal Q} (H), F_{\mathcal Q'} (H)) \\ & = & \sup_{H \in \Lform_1 (\Val_\bullet X, \dKRH^a)} \dreal (\inf_{G \in \mathcal Q} H (G), \inf_{G' \in \mathcal Q'} H (G')). \end{eqnarray*} The right-hand side is equal to: \begin{eqnarray*} \dKRH^a (r_\DN (\mathcal Q), r_\DN (\mathcal Q')) & = & \sup_{h \in \Lform_1^a X} \dreal (r_\DN (\mathcal Q) (h), r_\DN (\mathcal Q') (h)) \\ & = & \sup_{h \in \Lform_1^a X} \dreal (\inf_{G \in \mathcal Q} G (h), \inf_{G' \in \mathcal Q'} G' (h)). \end{eqnarray*} Among the elements $H$ of $\Lform_1 (\Val_\bullet X, \dKRH^a)$, we find those obtained from $h \in \Lform^a_1 (X, d)$ by letting $H (G) = G (h)$, according to Lemma~\ref{lemma:H}. That shows inequality (\ref{eq:DN:dKRH}). We now assume $\mathcal Q$ convex. If the inequality were strict, then there would be a real number $t$ such that $\mQ {(\dKRH^a)} (\mathcal Q, \allowbreak \mathcal Q') > t > \dKRH^a (r_\DN (\mathcal Q), r_\DN (\mathcal Q'))$. Note that the second inequality implies that $t$ is (strictly) positive. By Lemma~\ref{lemma:dQ}, there is a linear prevision $G'$ in $\mathcal Q'$ such that $\inf_{G \in \mathcal Q} \sup_{h \in \Lform^a_1 X} \dreal (G (h), \allowbreak G' (h)) > t$. This means that there is a real number $t' > t$ such that for every $G \in \mathcal Q$, there is an $h \in \Lform^a_1 (X, d)$ such that $\dreal (G (h), G' (h)) > t'$, and since $t' > t > 0$, $G (h) - G' (h) > t'$. This implies that $\inf_{G \in \mathcal Q} \sup_{h \in \Lform^a_1 X} (G (h) - G' (h)) > t$. We now use Lemma~\ref{lemma:DN:supinf} and obtain $\sup_{h \in \Lform^a_1 X} \inf_{G \in \mathcal Q} (G (h) - G' (h)) > t$.
Therefore, there is a map $h$ in $\Lform^a_1 (X, d)$ such that $\inf_{G \in \mathcal Q} (G (h) - G' (h)) > t$, namely, $\inf_{G \in \mathcal Q} G (h) - G' (h) > t$. This implies $\inf_{G \in \mathcal Q} G (h) - \inf_{G' \in \mathcal Q'} G' (h) > t$, that is, $r_\DN (\mathcal Q) (h) - r_\DN (\mathcal Q') (h) > t$. Recalling that $t > 0$, this entails $\dreal (r_\DN (\mathcal Q) (h), r_\DN (\mathcal Q') (h)) > t$. However, this is impossible since $\dKRH^a (r_\DN (\mathcal Q), r_\DN (\mathcal Q')) = \sup_{h \in \Lform^a_1 X} \dreal (r_\DN (\mathcal Q) (h), r_\DN (\mathcal Q') (h))$ is less than $t$. \qed \begin{lem} \label{lemma:DN:s:lip} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. Then \begin{enumerate} \item $r_\DN$ is $1$-Lipschitz from $\Smyth (\Val_\bullet X), \mQ {(\dKRH^a)}$ to the space of (sub)normalized superlinear previsions on $X$ with the $\dKRH^a$ quasi-metric; \item $s^\bullet_\DN$ preserves distances on the nose; in particular, $s^\bullet_\DN$ is $1$-Lipschitz. \end{enumerate} \end{lem} \proof (1) is by the first part of Proposition~\ref{prop:DN:dKRH}. By the second part of Proposition~\ref{prop:DN:dKRH}, using the fact that $s^\bullet_\DN (F)$ is convex, \[ \mQ {(\dKRH^a)} (s^\bullet_\DN (F), s^\bullet_\DN (F')) = \dKRH^a (r_\DN (s^\bullet_\DN (F)), r_\DN (s^\bullet_\DN (F'))) = \dKRH^a (F, F'), \] and this shows (2). As in the proof of Lemma~\ref{lemma:AN:s:lip}, there is a subtle issue here. In writing ``$\Smyth (\Val_\bullet X), \mQ {(\dKRH^a)}$'', we silently assume that $\Val_\bullet X$ has the $\dKRH^a$-Scott topology, and the elements of $\Smyth (\Val_\bullet X)$ are compact subsets for that topology. However Proposition~\ref{prop:DN:dKRH} considers compact subsets for the weak topology. Fortunately, the two topologies are the same, by Theorem~\ref{thm:V:weak=dScott}.
\qed \begin{lem} \label{lemma:DN:s:lipcont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. Then $s^\bullet_\DN$ is $1$-Lipschitz continuous from the space of (sub)normalized superlinear previsions on $X$ with the $\dKRH^a$ quasi-metric to $\Smyth (\Val_\bullet X), \mQ {(\dKRH^a)}$. \end{lem} \proof By Lemma~\ref{lemma:DN:s:lip}, $\mathbf B^1 (r_\DN)$ and $\mathbf B^1 (s^\bullet_\DN)$ are monotonic. Since $r_\DN \circ s^\bullet_\DN = \identity\relax$, $\mathbf B^1 (r_\DN) \circ \mathbf B^1 (s^\bullet_\DN) = \identity\relax$. Since $\identity\relax \geq s^\bullet_\DN \circ r_\DN$, i.e., since $\mathcal Q \subseteq s^\bullet_\DN (r_\DN (\mathcal Q))$ for every $\mathcal Q$, we also have that $\mathbf B^1 (s^\bullet_\DN) (\mathbf B^1 (r_\DN) (\mathcal Q, r)) \leq^{\mQ {(\dKRH^a)}^+} (\mathcal Q, r)$: indeed $\mQ {(\dKRH^a)} (s^\bullet_\DN (r_\DN (\mathcal Q)), \mathcal Q) = 0$ since $s^\bullet_\DN (r_\DN (\mathcal Q)) \supseteq \mathcal Q$ (see Lemma~\ref{lemma:dQ:spec}), and this is less than or equal to $r - r$. This shows that $\mathbf B^1 (s^\bullet_\DN)$ is left adjoint to $\mathbf B^1 (r_\DN)$. We conclude because left adjoints preserve all existing suprema, hence in particular directed suprema. \qed \begin{lem} \label{lemma:DN:r:lipcont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. Let $\bullet$ be either ``$\leq 1$'' or ``$1$'', and $a > 0$. Then $r_\DN$ is $1$-Lipschitz continuous from $\Smyth (\Val_\bullet X), \mQ {(\dKRH^a)}$ to the space of (sub)normalized superlinear previsions on $X$ with the $\dKRH^a$ quasi-metric. \end{lem} \proof By Lemma~\ref{lemma:DN:s:lip}, $\mathbf B^1 (r_\DN)$ is monotonic. Consider a directed family of formal balls ${(\mathcal Q_i, r_i)}_{i \in I}$ on $\Smyth (\Val_\bullet X), \mQ {(\dKRH^a)}$, with supremum $(\mathcal Q, r)$.
We use Theorem~\ref{thm:Vleq1:cont}, resp., Theorem~\ref{thm:V1:cont} to claim that $\Val_\bullet X, \dKRH^a$ is continuous Yoneda-complete, and then Proposition~\ref{prop:QX:Ycomplete} to obtain that $\Smyth (\Val_\bullet X), \mQ {(\dKRH^a)}$ is Yoneda-complete, and that directed suprema of formal balls are naive suprema. This means that for every $H \in \Lform^a_\alpha (\Val_\bullet X, \dKRH^a)$ ($\alpha > 0$), $F_{\mathcal Q} (H) = \sup_{i \in I} (F_{\mathcal Q_i} (H) - \alpha r_i + \alpha r)$, and $r = \inf_{i \in I} r_i$. In particular, taking $H (G) = G (h)$ for any $h \in \Lform^a_\alpha (X, d)$ (Lemma~\ref{lemma:H}), so that $F_{\mathcal Q} (H) = \min_{G \in \mathcal Q} H (G) = \min_{G \in \mathcal Q} G (h)$, we obtain $\min_{G \in \mathcal Q} G (h) = \sup_{i \in I} (\min_{G \in \mathcal Q_i} G (h) - \alpha r_i + \alpha r)$, namely $r_\DN (\mathcal Q) (h) = \sup_{i \in I} (r_\DN (\mathcal Q_i) (h) - \alpha r_i + \alpha r)$. This characterizes $(r_\DN (\mathcal Q), r)$ as the naive supremum of ${(r_\DN (\mathcal Q_i), r_i)}_{i \in I}$. For every non-empty compact saturated set $\mathcal Q$, $r_\DN (\mathcal Q)$ is a superlinear prevision. It is in particular continuous, so $(r_\DN (\mathcal Q), r)$ is the supremum of ${(r_\DN (\mathcal Q_i), r_i)}_{i \in I}$ by Proposition~\ref{prop:LPrev:simplesup}~(3) (and (4)). Hence $\mathbf B^1 (r_\DN)$ is Scott-continuous, which concludes the argument. \qed \begin{thm}[Continuity for superlinear previsions] \label{thm:DN:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, and $a > 0$. The space of all subnormalized (resp., normalized) superlinear previsions on $X$ with the $\dKRH^a$ quasi-metric is continuous Yoneda-complete.
It occurs as a $1$-Lipschitz continuous retract of $\Smyth (\Val_{\leq 1} X), \mQ {(\dKRH^a)}$ (resp., $\Smyth (\Val_1 X), \allowbreak \mQ {(\dKRH^a)}$), and as an isometric copy of $\Smyth^{cvx} (\Val_{\leq 1} X), \mQ {(\dKRH^a)}$ (resp., $\Smyth^{cvx} (\Val_1 X), \mQ {(\dKRH^a)}$) through $r_\DN$ and $s^{\leq 1}_\DN$ (resp., $s^1_\DN$). The $\dKRH^a$-Scott topology on the space of subnormalized (resp., normalized) superlinear previsions coincides with the weak topology. Finally, directed suprema of formal balls are computed as naive suprema. \end{thm} \proof We have already seen that $r_\DN$ and $s^\bullet_\DN$ are continuous, with respect to the weak topologies. By Lemma~\ref{lemma:DN:r:lipcont} and Lemma~\ref{lemma:DN:s:lipcont}, they are also $1$-Lipschitz continuous with respect to $\mQ {(\dKRH^a)}$ on $\Smyth (\Val_\bullet X)$ and $\dKRH^a$ on the space of (sub)normalized superlinear previsions. Recall also that $r_\DN \circ s^\bullet_\DN = \identity \relax$. This shows the $1$-Lipschitz continuous retract part, and also the isometric copy part, since $s^\bullet_\DN$ and $r_\DN$ restrict to bijections with the subspace $\Smyth^{cvx} (\Val_\bullet X)$. Lemma~\ref{lemma:retract:alg} tells us that the space of (sub)normalized superlinear previsions on $X$ is continuous Yoneda-complete. Let us show that the $\dKRH^a$-Scott topology on the space of (sub)normalized superlinear previsions is the weak topology. For that, we recall that the $\mQ {(\dKRH^a)}$-Scott topology coincides with the upper Vietoris topology on $\Smyth (\Val_\bullet X)$, and we note that $s^\bullet_\DN$ is both a topological embedding for the weak and upper Vietoris topologies, and the section part of a $1$-Lipschitz continuous retraction, hence a topological embedding for the $\dKRH^a$-Scott and $\mQ {(\dKRH^a)}$-Scott topologies. The conclusion follows from Fact~\ref{fact:retract:two}. It remains to show that directed suprema of formal balls are computed as naive suprema.
The argument is exactly as for Proposition~\ref{prop:AN:Ycomplete}. We use the fact that $p = \mathbf B^1 (r_\DN)$ and $e = \mathbf B^1 (s^\bullet_\DN)$ form an order-retraction, and that the supremum $(F, r)$ of a directed family ${(F_k, r_k)}_{k \in K}$ is $p (\sup_{k \in K} e (F_k, r_k))$ by Lemma~\ref{lemma:orderretract}. We invoke Theorem~\ref{thm:Vleq1:cont}, resp.\ Theorem~\ref{thm:V1:cont}, together with Proposition~\ref{prop:QX:Ycomplete} to check that the supremum $(\mathcal Q, r) = \sup_{k \in K} e (F_k, r_k)$, computed in $\mathbf B (\Smyth (\Val_\bullet X), \mQ {(\dKRH^a)})$, is a naive supremum. In particular, $r = \inf_{k \in K} r_k$. Let $\mathcal Q_k = s^\bullet_\DN (F_k)$. Then $F_{\mathcal Q_k} (H) = \inf_{G \in \mathcal Q_k} H (G) = \inf_{G \in \Val_\bullet X, G \geq F_k} H (G)$ for every $H \in \Lform^a_\alpha (\Val_\bullet X, \dKRH^a)$. We have $(F, r) = p (\mathcal Q, r)$, so $F = r_\DN (\mathcal Q)$, and that implies that $F (h) = \inf_{G \in \mathcal Q} G (h)$ for every $h \in \Lform^a_\alpha (X, d)$. The map $H \colon G \mapsto G (h)$ is in $\Lform^a_\alpha (\Val_\bullet X, \dKRH^a)$ by Lemma~\ref{lemma:H}, and: \begin{eqnarray*} F (h) & = & \inf_{G \in \mathcal Q} H (G) \\ & = & F_{\mathcal Q} (H) \\ & = & \sup_{k \in K} (\inf_{G \in \Val_\bullet X, G \geq F_k} H (G) - \alpha r_k + \alpha r) \\ & = & \sup_{k \in K} (\inf_{G \in \Val_\bullet X, G \geq F_k} G (h) - \alpha r_k + \alpha r) \\ & = & \sup_{k \in K} (F_k (h) - \alpha r_k + \alpha r). \end{eqnarray*} \qed \begin{lem} \label{lemma:DN:cont:removed} (This result removed. It stated that suprema are naive suprema, repeating the end of the previous theorem. This display kept in order to maintain the numbering of theorems.) \end{lem} As usual, algebraicity is preserved, too. \begin{thm}[Algebraicity for superlinear previsions] \label{thm:DN:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with a strong basis $\mathcal B$. Let $a > 0$.
The space of all subnormalized (resp., normalized) superlinear previsions on $X$ with the $\dKRH^a$ quasi-metric is algebraic Yoneda-complete. All the superlinear previsions of the form $h \mapsto \min_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$, with $m \geq 1$, $\sum_{j=1}^{n_i} a_{ij} \leq 1$ (resp., $=1$) for every $i$, and where each $x_{ij}$ is a center point, are center points, and they form a strong basis, even when each $x_{ij}$ is constrained to lie in $\mathcal B$. \end{thm} \proof We cannot appeal to the final part of Lemma~\ref{lemma:retract:alg} in order to show algebraicity. Indeed, we would need to show $\dKRH^a (r_\DN (\mathcal Q), F') = \mQ {(\dKRH^a)} (\mathcal Q, s^\bullet_\DN (F'))$ to this end, but Proposition~\ref{prop:DN:dKRH} only allows us to conclude the similarly looking, but different equality $\dKRH^a (F, r_\DN (\mathcal Q')) = \mQ {(\dKRH^a)} (s^\bullet_\DN (F), \mathcal Q')$. We can still use the fact that every (sub)normalized superlinear prevision is a $\dKRH^a$-limit of a Cauchy-weightable net of points of the form $r_\DN (\mathcal Q)$, where each $\mathcal Q$ is taken from a strong basis of $\Smyth (\Val_\bullet X)$. Using Theorem~\ref{thm:V:alg} or Theorem~\ref{thm:V1:alg}, and Theorem~\ref{thm:Q:alg}, we know that we can take $\mathcal Q$ of the form $\upc \{\sum_{j=1}^{n_1} a_{1j} \delta_{x_{1j}}, \cdots, \sum_{j=1}^{n_m} a_{mj} \delta_{x_{mj}}\}$ where $m \geq 1$, each $\sum_{j=1}^{n_i} a_{ij} \delta_{x_{ij}}$ is a (sub)normalized simple valuation, and each $x_{ij}$ is in $\mathcal B$. Then $r_\DN (\mathcal Q)$ is exactly one of the superlinear previsions mentioned in the theorem, namely it is of the form $h \mapsto \min_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$. To show that our spaces of superlinear previsions are algebraic, it remains to show that $\mathring F \colon h \mapsto \min_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$, where $m \geq 1$ and each $x_{ij}$ is a center point, is itself a center point. We cannot use the final part of Lemma~\ref{lemma:retract:alg}.
Instead we shall show that directly, in the manner of Lemma~\ref{lemma:V:simple:center} and related lemmas. In other words, we show that $B^{\dKRH^a}_{(\mathring F, 0), \epsilon}$ is Scott-open for every $\epsilon > 0$. For that, we consider a directed family ${(F_k, r_k)}_{k \in K}$ where each $F_k$ is a (sub)normalized superlinear prevision, with supremum $(F, r)$ in $B^{\dKRH^a}_{(\mathring F, 0), \epsilon}$. For every $h \in \Lform^a_1 (X, d)$, $\dreal (\mathring F (h), \allowbreak F (h)) < \epsilon - r$, that is, $\epsilon > r$ and $\mathring F (h) - \epsilon + r < F (h)$. In particular, and using the fact that the supremum $(F, r)$ is a naive supremum (final part of Theorem~\ref{thm:DN:cont}), there is a $k \in K$ such that $\mathring F (h) - \epsilon + r_k < F_k (h)$, and we can take $k$ so large that $\epsilon - r_k > 0$. We note that $\mathring F$ is not just Scott-continuous, but also continuous from $\Lform_\alpha (X, d)^\dG$ (resp., $\Lform_\alpha^a (X, d)^\dG$) to $(\overline{\mathbb{R}}_+)^\dG$. This is Corollary~\ref{corl:cont:Lalpha:simple:2}. It follows that the map $h \in \Lform^a_1 (X, d)^\patch \mapsto F_k (h) - \mathring F (h)$ is lower semicontinuous. Therefore, the set $V_k$ of all $h \in \Lform^a_1 (X, d)$ such that $\mathring F (h) - \epsilon + r_k < F_k (h)$ is open in $\Lform^a_1 (X, d)^\patch$. The argument of the previous paragraph shows that ${(V_k)}_{k \in K, \epsilon - r_k > 0}$ forms an open cover of $\Lform^a_1 (X, d)^\patch$. The latter is compact by Lemma~\ref{lemma:cont:Lalpha:retr}~(4), so we can extract a finite subcover. This gives us a finite set $J$ of indices from $K$ such that for every $h \in \Lform^a_1 (X, d)$, there is a $j \in J$ such that $\mathring F (h) - \epsilon + r_j < F_j (h)$ and $\epsilon - r_j > 0$.
Using the fact that ${(F_k, r_k)}_{k \in K}$ is directed, there is even a single $k \in K$ such that $(F_j, r_j) \leq^{\dKRH^{a+}} (F_k, r_k)$ for every $j \in J$, and that implies that for every $h \in \Lform^a_1 (X, d)$, $\mathring F (h) - \epsilon + r_k < F_k (h)$. That shows that $(F_k, r_k)$ is in $B^{\dKRH^a}_{(\mathring F, 0), \epsilon}$, finishing our proof that $B^{\dKRH^a}_{(\mathring F, 0), \epsilon}$ is Scott-open. Therefore $\mathring F \colon h \mapsto \min_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$ is a center point. \qed \begin{lem} \label{lemma:DN:functor} Let $X, d$ and $Y, \partial$ be two continuous Yoneda-complete quasi-metric spaces, and $f \colon X, d \mapsto Y, \partial$ be a $1$-Lipschitz continuous map. Let also $a > 0$. $\Prev f$ restricts to a $1$-Lipschitz continuous map from the space of normalized, resp.\ subnormalized superlinear previsions on $X$ to the space of normalized, resp.\ subnormalized superlinear previsions on $Y$, with the $\dKRH^a$ and $\KRH\partial^a$ quasi-metrics. \end{lem} \proof By Lemma~\ref{lemma:Pf:lip}, $\Prev f$ is $1$-Lipschitz and maps (sub)normalized superlinear previsions to (sub)normalized superlinear previsions. Also, by Lemma~\ref{lemma:Pf:lipcont}, $\mathbf B^1 (\Prev f)$ preserves naive suprema. By Theorem~\ref{thm:DN:cont}, suprema of directed families of formal balls of (sub)normalized superlinear previsions are naive suprema, so $\mathbf B^1 (\Prev f)$ is Scott-continuous. It follows that $\Prev f$ is $1$-Lipschitz continuous. \qed With Theorem~\ref{thm:DN:alg} and Theorem~\ref{thm:DN:cont}, we obtain the following. \begin{cor} \label{cor:DN:functor} There is an endofunctor on the category of continuous (resp., algebraic) Yoneda-complete quasi-metric spaces and $1$-Lipschitz continuous maps, which sends every object $X, d$ to the space of (sub)normalized superlinear previsions on $X$ with the $\dKRH^a$-Scott topology ($a > 0$), and every $1$-Lipschitz continuous map $f$ to $\Prev f$.
\qed \end{cor} \section{Lenses, Quasi-Lenses, and Forks} \label{sec:forks-quasi-lenses} A natural model for mixed angelic and demonic non-determinism, without probabilistic choice, on a space $X$, is the Plotkin powerdomain. That consists in the space of \emph{lenses} on $X$. A lens $L$ is an intersection $Q \cap C$ of a compact saturated set $Q$ and a closed set $C$, where $Q$ intersects $C$. The space of lenses is equipped with the Vietoris topology, generated by subsets $\Box U = \{L \mid L \subseteq U\}$ and $\Diamond U = \{L \mid L \cap U \neq \emptyset\}$, $U$ open in $X$. For every lens $L$, one can write $L$ canonically as $Q \cap C$, by taking $Q = \upc L$ and $C = cl (L)$. There are several competing variants of the Plotkin powerdomain. An elegant one is given by Heckmann's \emph{continuous $\mathbf A$-valuations} \cite{Heckmann:absval}. Those are Scott-continuous maps $\alpha \colon \Open X \to \mathbf A$ such that $\alpha (\emptyset) = 0$, $\alpha (X) = 1$, and for all opens $U$, $V$, $\alpha (U)=0$ implies $\alpha (U \cup V) = \alpha (V)$, and $\alpha (U) = 1$ implies $\alpha (U \cap V) = \alpha (V)$. Here $\mathbf A$ is the poset $\{0, \mathsf M, 1\}$ ordered by $0 < \mathsf M < 1$. The \emph{Vietoris topology} on the space of continuous $\mathbf A$-valuations is generated by the subsets $\Box U$ of continuous $\mathbf A$-valuations such that $\alpha (U)=1$, and the subsets $\Diamond U$ of continuous $\mathbf A$-valuations such that $\alpha (U) \neq 0$, where $U$ ranges over the open subsets of $X$. It was shown in \cite[Fact~5.2]{JGL-mscs09} that, provided $X$ is sober, the space of continuous $\mathbf A$-valuations is homeomorphic to a space of so-called \emph{quasi-lenses}, namely, pairs $(Q, C)$ of a compact saturated set $Q$ and a closed set $C$ such that $Q$ intersects $C$, $Q \subseteq \upc (Q \cap C)$ (whence $Q = \upc (Q \cap C)$), and for every open neighborhood $U$ of $Q$, $C \subseteq cl (U \cap C)$. 
This is topologized by a topology that we shall again call the \emph{Vietoris topology}, generated by subsets $\Box U = \{(Q, C) \mid Q \subseteq U\}$ and $\Diamond U = \{(Q, C) \mid C \cap U \neq \emptyset\}$. The homeomorphism maps $(Q, C)$ to $\alpha$ where $\alpha (U)$ is equal to $1$ if $Q \subseteq U$, to $0$ if $C$ does not intersect $U$, and to $\mathsf M$ otherwise. The space of quasi-lenses is homeomorphic to the space of lenses when $X$ is stably compact, through the map $L \mapsto (\upc L, cl (L))$ (Proposition~5.3, loc.\ cit.), but is in general larger. Note that in case $X$ is not only stably compact but also Hausdorff, i.e., when $X$ is compact Hausdorff, the lenses are just the compact subsets of $X$. It should therefore come as no surprise that we shall equip our space of (quasi-)lenses with a quasi-metric resembling the Hausdorff metric. Quasi-lenses and continuous $\mathbf A$-valuations are a special case of the notion of a \emph{fork} \cite{Gou-csl07}. A fork on $X$ is by definition a pair $(F^-, F^+)$ of a superlinear prevision $F^-$ and a sublinear prevision $F^+$ satisfying \emph{Walley's condition}: \[ F^- (h+h') \leq F^- (h) + F^+ (h') \leq F^+ (h+h') \] for all $h, h' \in \Lform X$. This condition was independently discovered by Keimel and Plotkin \cite{KP:predtrans:pow}, who call such a pair \emph{medial} \cite{KP:mixed}. By taking $h'=0$, or $h=0$, Walley's condition in particular implies that $F^- \leq F^+$. A fork $(F^-, F^+)$ is normalized, subnormalized, or discrete if and only if both $F^-$ and $F^+$ are. We shall call \emph{weak topology} on the space of forks the topology induced by the inclusion in the product of the spaces of sublinear and superlinear previsions, each equipped with their weak topology. Its specialization quasi-ordering is given by $(F^-, F^+) \leq ({F'}^-, {F'}^+)$ if and only if $F^- \leq {F'}^-$ and $F^+ \leq {F'}^+$.
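To make Walley's condition concrete, here is a small worked example (ours, purely illustrative): fix two points $x, y$ of $X$ and let $F^- (h) = \min (h (x), h (y))$ and $F^+ (h) = \max (h (x), h (y))$, the fork associated with the quasi-lens $(\upc \{x, y\}, cl (\{x, y\}))$ (cf.\ Proposition~\ref{prop:fork=lens}). Then:

```latex
\begin{eqnarray*}
  F^- (h + h') & = & \min_{z \in \{x, y\}} (h (z) + h' (z))
    \; \leq \; h (z_0) + h' (z_0) \; \leq \; F^- (h) + F^+ (h'), \\
  F^- (h) + F^+ (h') & \leq & h (z_1) + h' (z_1)
    \; \leq \; \max_{z \in \{x, y\}} (h (z) + h' (z)) \; = \; F^+ (h + h'),
\end{eqnarray*}
```

where $z_0$ is a point of $\{x, y\}$ minimizing $h$, and $z_1$ is one maximizing $h'$.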
\begin{defi}[$\dKRH$, $\dKRH^a$ on forks] \label{defn:KRH:fork} Let $X, d$ be a quasi-metric space. The $\dKRH$ quasi-metric on the space of forks on $X$ is defined by: \[ \dKRH ((F^-, F^+), ({F'}^-, {F'}^+)) = \max (\dKRH (F^-, {F'}^-), \dKRH (F^+, {F'}^+)), \] and similarly for the $\dKRH^a$ quasi-metric, $a > 0$. \end{defi} Those are indeed quasi-metrics, as Lemma~\ref{lemma:KRH:qmet} and Lemma~\ref{lemma:KRHa:qmet} imply: \begin{lem} \label{lemma:KRH:fork:qmet} Let $X, d$ be a standard quasi-metric space. For all forks $(F^-, F^+)$ and $({F'}^-, {F'}^+)$ on $X$, the following are equivalent: $(a)$ $(F^-, F^+) \leq ({F'}^-, {F'}^+)$; $(b)$ $\dKRH ((F^-, \allowbreak F^+), ({F'}^-, {F'}^+))=0$; $(c)$ $\dKRH^a((F^-, F^+), ({F'}^-, {F'}^+))=0$, for any $a > 0$. \qed \end{lem} \subsection{The Plotkin Powerdomain} \label{sec:plotkin-powerdomain} To us, the Plotkin powerdomain will be the set of discrete normalized forks. Equivalently, this is the space of quasi-lenses, which justifies the name. \begin{prop} \label{prop:fork=lens} Let $X$ be a sober space. There is a bijection between quasi-lenses on $X$ and discrete normalized forks on $X$, which maps the quasi-lens $(Q, C)$ to $(F_Q, F^C)$. \end{prop} \proof Fix a quasi-lens $(Q, C)$. $F^C$ is a discrete normalized sublinear prevision by Lemma~\ref{lemma:eH}, and $F_Q$ is a discrete normalized superlinear prevision by Lemma~\ref{lemma:uQ}. Moreover, the maps $Q \mapsto F_Q$ and $C \mapsto F^C$ are bijections. We need to show that if $(Q, C)$ is a quasi-lens, then $(F_Q, F^C)$ is a fork, and conversely. We claim that $F_Q (h) = \inf_{x \in Q \cap C} h (x)$ for every $h \in \Lform X$. Indeed, $F_Q (h) = \inf_{x \in Q} h (x) \leq \inf_{x \in Q \cap C} h (x)$ follows from the fact that $Q$ contains $Q \cap C$, and in the reverse direction, we use the equality $Q = \upc (Q \cap C)$ to show that for every $x \in Q$, there is a $y \in Q \cap C$ such that $h (x) \geq h (y)$.
For all $h, h' \in \Lform X$, it follows that $F_Q (h+h') = \inf_{x \in Q \cap C} (h (x) + h' (x)) \leq \inf_{x \in Q \cap C} (h (x) + \sup_{x' \in Q \cap C} h' (x')) \leq \inf_{x \in Q \cap C} (h (x) + \sup_{x' \in C} h' (x')) = F_Q (h) + F^C (h')$. To show that $F_Q (h) + F^C (h') \leq F^C (h+h')$, we assume the contrary. There is a real number $t$ such that $F_Q (h) + F^C (h') > t \geq F^C (h+h')$. The second inequality implies that $F^C (h')$, which is less than or equal to $F^C (h+h')$, is a (non-negative) real number $a$. The first inequality then implies $F_Q (h) > t-a$. It follows that there must be a real number $b$ such that $F_Q (h) > b > t-a$. Since $F_Q (h) > b$, every element $x$ of $Q$ is such that $h (x) > b$. Therefore $Q$ is included in the open set $U = h^{-1} (]b, +\infty])$. We know that $C \subseteq cl (U \cap C)$, and this implies that $F^C (h') = \sup_{x \in C} h' (x) \leq \sup_{x \in cl (U \cap C)} h' (x) = \sup_{x \in U \cap C} h' (x)$ (using Lemma~\ref{lemma:H:cl:sup}). Since $b > t-a$ and $a = F^C (h')$, there is a point $x$ in $U \cap C$ such that $h' (x) > t-b$. Because $x$ is in $U$, $h (x) > b$, so $h (x)+h'(x) > t$, contradicting the inequality $t \geq F^C (h+h')$. Hence if $(Q, C)$ is a quasi-lens, then $(F_Q, F^C)$ is a fork, which is necessarily discrete and normalized. Conversely, let us assume that $(F_Q, F^C)$ is a fork. We have seen that, in particular, $F_Q \leq F^C$. Hence $F_Q (\chi_U) \leq F^C (\chi_U)$ where $U$ is the complement of $C$. That implies that $F_Q (\chi_U)=0$, i.e., $Q$ is not included in $U$, so that $Q$ intersects $C$. Let us use the inequality $F_Q (h+h') \leq F_Q (h) + F^C (h')$ on $h' = \chi_U$ where $U$ is the complement of $C$ again. Then $F^C (h')=0$, so $\inf_{y \in Q} (h (y) + \chi_U (y)) \leq \inf_{y \in Q} h (y)$. 
In other words, $\min (\inf_{y \in Q \cap U} (h (y) + 1), \inf_{y \in Q \cap C} h (y)) \leq \inf_{y \in Q} h (y)$, or equivalently, one of the two numbers $\inf_{y \in Q \cap U} (h (y) + 1)$, $\inf_{y \in Q \cap C} h (y)$ is less than or equal to $\inf_{y \in Q} h (y)$. That cannot be the first one, which is larger than or equal to $1+\inf_{y \in Q \cap U} h (y) \geq 1 + \inf_{y \in Q} h (y)$, so: $(*)$ $\inf_{y \in Q \cap C} h (y) \leq \inf_{y \in Q} h (y)$. Now, for every $x \in Q$, $\dc x$ is closed, hence its complement $V$ is open. Take $h = \chi_V$. Then $\inf_{y \in Q} h (y) \leq h (x) = 0$, therefore $\inf_{y \in Q \cap C} h (y)=0$ by $(*)$. Since $h$ only takes the values $0$ or $1$ (or since $Q \cap C$ is compact), there is a point $y$ in $Q \cap C$ such that $h (y)=0$, i.e., such that $y \leq x$. Therefore $Q$ is included in $\upc (Q \cap C)$. Finally, let $U$ be any open neighborhood of $Q$. We use the inequality $F_Q (h) + F^C (h') \leq F^C (h + h')$ on $h = \chi_U$, so that $1 + F^C (h') \leq F^C (h + h')$. For every open subset $V$ that intersects $C$, taking $h' = \chi_V$ we obtain that $2 \leq F^C (\chi_U + \chi_V) = \sup_{x \in C} (\chi_U (x) + \chi_V (x))$. Since $\chi_U (x) + \chi_V (x)$ can only take the values $0$, $1$, or $2$, there must be a point $x$ in $C$ such that $\chi_U (x)+\chi_V (x)=2$. Such a point must be both in $U$ and in $V$. We have shown that every open subset $V$ that intersects $C$ also intersects $U \cap C$. Letting $V$ be the complement of $cl (U \cap C)$, it follows that $C$ cannot intersect $V$, hence must be included in $cl (U \cap C)$. \qed \begin{defi}[$\dP$, $\dP^a$] \label{defn:dP} Let $X, d$ be a quasi-metric space. For any two quasi-lenses $(Q, C)$ and $(Q', C')$, let $\dP ((Q, C), (Q', C')) = \dKRH ((F_Q, F^C), (F_{Q'}, F^{C'}))$, and, for each $a > 0$, let $\dP^a ((Q, C), (Q', C')) = \dKRH^a ((F_Q, F^C), (F_{Q'}, F^{C'}))$. For every $a > 0$, $\dP^a ((Q, C), (Q', C')) = \max (\dQ^a (Q, Q'), \dH^a (C, C'))$. 
\end{defi} By definition of $\dQ$ (Definition~\ref{defn:dQ}) and of $\dQ^a$ (Remark~\ref{defn:dQa}), and by Proposition~\ref{prop:H:prev} and Remark~\ref{rem:HX:dKRHa}, we obtain the following characterization. \begin{lem} \label{lem:dP} Let $X, d$ be a standard quasi-metric space. Then $\dP ((Q, C), (Q', C')) = \max (\dQ (Q, Q'), \dH (C, C'))$. \end{lem} Together with Proposition~\ref{prop:fork=lens}, this implies the following. \begin{cor} \label{corl:P:prev} Let $X, d$ be a sober standard quasi-metric space. The bijection $(Q, C) \mapsto (F_Q, F^C)$ is an isometry between the space of quasi-lenses on $X$ with the $\dP$ quasi-metric, and the space of discrete normalized forks on $X$ with the $\dKRH$ quasi-metric. \qed \end{cor} \begin{rem}[The Hausdorff quasi-metric] \label{rem:dP:lens} Let $X, d$ be a standard quasi-metric space. If we equate a lens $L$ with the quasi-lens $(\upc L, cl (L))$, then we can reinterpret Lemma~\ref{lem:dP} by saying that $\dP (L, L') = \max (\dQ (\upc L, \upc L'), \dH (cl (L), cl (L')))$. We have $\dQ (\upc L, \upc L') = \sup_{x' \in \upc L'} \inf_{x \in \upc L} d (x, x')$ by Lemma~\ref{lemma:dQ}, and this is also equal to $\sup_{x' \in L'} \inf_{x \in L} d (x, x')$, since $d$ is monotonic in its first argument and antitonic in its second argument. We also have $\dH (cl (L), cl (L')) = \sup_{x \in cl (L)} d (x, cl (L'))$ by Definition~\ref{defn:dH}, and this is equal to $\sup_{x \in L} d (x, cl (L'))$ by Lemma~\ref{lemma:H:cl:sup}, since $d (\_, cl (L'))$ is $1$-Lipschitz continuous (by \cite[Lemma~6.11]{JGL:formalballs}, recalled at the beginning of Section~\ref{sec:hoare-quasi-metric}) hence continuous. If $x$ is a center point, we recall that $d (x, cl (L')) = \inf_{x' \in cl (L')} d (x, x')$.
Also the map $-d (x, \_)$ is Scott-continuous, because the inverse image of $]t, +\infty]$ is the open ball $B^d_{x, <-t}$ (meaning the empty set if $t\geq 0$), so $\inf_{x' \in cl (L')} d (x, x') = -\sup_{x' \in cl (L')} -d (x, x') = - \sup_{x' \in L'} \allowbreak -d (x, x') = \inf_{x' \in L'} d (x, x')$ by Lemma~\ref{lemma:H:cl:sup}. All that implies that, on a standard quasi-metric space $X, d$ where every point is a center point, the $\dP$ quasi-metric coincides with the familiar Hausdorff quasi-metric: \[ \dP (L, L') = \max (\sup_{x \in L} \inf_{x' \in L'} d (x, x'), \sup_{x' \in L'} \inf_{x \in L} d (x, x')). \] This is the case, notably, if $X, d$ is Smyth-complete. This is also the case if $X, d$ is a \emph{metric} space, in which case $\dP$ is exactly the Hausdorff metric on the space of non-empty compact subsets of $X$. \end{rem} \begin{lem} \label{lemma:PX:naivesup} Let $X, d$ be a sober, standard quasi-metric space. For every directed family of formal balls ${((F^-_i, F^+_i), r_i)}_{i \in I}$ on the space of forks, if ${(F^-_i, r_i)}_{i \in I}$ has a naive supremum $(F^-, r)$ where $F^-$ is continuous, and if ${(F^+_i, r_i)}_{i \in I}$ has a naive supremum $(F^+, r)$ where $F^+$ is continuous, then $(F^-, F^+)$ is a fork. \end{lem} \proof Note that, because the quasi-metric on forks is defined as a maximum, the two families ${(F^-_i, r_i)}_{i \in I}$ and ${(F^+_i, r_i)}_{i \in I}$ are directed, so taking their naive suprema makes sense. Explicitly, define $i \preceq j$ as $((F^-_i, F^+_i), r_i) \leq^{\dKRH^+} ((F^-_j, F^+_j), r_j)$. Then $i \preceq j$ implies that $\dKRH ((F^-_i, F^+_i), (F^-_j, F^+_j)) \leq r_i - r_j$, hence that $\dKRH (F^-_i, F^-_j) \leq r_i - r_j$ and $\dKRH (F^+_i, F^+_j) \leq r_i - r_j$, so that $(F^-_i, r_i) \leq^{\dKRH^+} (F^-_j, r_j)$ and $(F^+_i, r_i) \leq^{\dKRH^+} (F^+_j, r_j)$.
For all $i, j \in I$, there is a $k \in I$ such that $i, j \preceq k$, and that implies that $(F^-_i, r_i), (F^-_j, r_j) \leq^{\dKRH^+} (F^-_k, r_k)$, and similarly with $+$ instead of $-$. Note also that the naive suprema $(F^-, r)$ and $(F^+, r)$ must have the same radius $r = \inf_{i \in I} r_i$, by definition. By Proposition~\ref{prop:LPrev:simplesup}~(4), $F^-$ is a superlinear prevision and $F^+$ is a sublinear prevision. We must prove Walley's condition. For all $h \in \Lform_\alpha (X, d)$ (resp., $\Lform_\alpha^a (X, d)$) and $h' \in \Lform_\beta (X, d)$ (resp., $\Lform_\beta^a (X, d)$), for all $\alpha, \beta > 0$, we have: \[ F_i^- (h + h') \leq F_i^- (h) + F_i^+ (h') \leq F_i^+ (h+h') \] for every $i \in I$, since $(F_i^-, F_i^+)$ is a fork. Therefore: \begin{align*} F_i^- (h + h') + (\alpha+\beta) r - (\alpha+\beta) r_i & \leq (F_i^- (h) +\alpha r - \alpha r_i) + (F_i^+ (h')+\beta r-\beta r_i) \\ & \leq F_i^+ (h+h') + (\alpha+\beta) r - (\alpha+\beta) r_i \end{align*} We note that $h+h'$ is $(\alpha+\beta)$-Lipschitz continuous. By taking (directed) suprema over $i \in I$, and noting that addition is Scott-continuous on $\overline{\mathbb{R}}_+$, we obtain the desired inequalities: \[ F^- (h + h') \leq F^- (h) + F^+ (h') \leq F^+ (h+h') \] for all $h \in \Lform_\alpha (X, d)$ (resp., $\Lform_\alpha^a (X, d)$) and $h' \in \Lform_\beta (X, d)$ (resp., $\Lform_\beta^a (X, d)$). For all $h, h' \in \Lform X$, for every $\alpha > 0$, we then obtain that $F^- (h^{(\alpha)} + {h'}^{(\alpha)}) \leq F^- (h^{(\alpha)}) + F^+ ({h'}^{(\alpha)}) \leq F^- (h) + F^+ (h')$, hence, taking suprema over $\alpha$, $F^- (h+h') \leq F^- (h) + F^+ (h')$. Similarly, $F^- (h^{(\alpha)}) + F^+ ({h'}^{(\alpha)}) \leq F^+ (h^{(\alpha)} + {h'}^{(\alpha)}) \leq F^+ (h+h')$, hence, taking suprema over $\alpha$, $F^- (h) + F^+ (h') \leq F^+ (h+h')$.
\qed \begin{prop}[Yoneda-completeness, Plotkin powerdomain] \label{prop:PX:Ycomplete} Let $X, d$ be a sober standard Lipschitz regular quasi-metric space, or a continuous Yoneda-complete quasi-metric space. The space of quasi-lenses on $X$, with the $\dP$ quasi-metric, is Yoneda-complete. The supremum of a directed family of formal balls ${((Q_i, C_i), r_i)}_{i \in I}$ is $((Q, C), r)$ where $r = \inf_{i \in I} r_i$, $(F_Q, r)$ is the naive supremum of ${(F_{Q_i}, r_i)}_{i \in I}$, and $(F^C, r)$ is the naive supremum of ${(F^{C_i}, r_i)}_{i \in I}$. In particular, $(Q, r)$ is the supremum of ${(Q_i, r_i)}_{i \in I}$ and $(C, r)$ is the supremum of ${(C_i, r_i)}_{i \in I}$. \end{prop} \proof Note that every continuous Yoneda-complete quasi-metric space is sober \cite[Proposition~4.1]{JGL:formalballs}, so we do not have to assume sobriety explicitly in that case. Let ${((Q_i, C_i), r_i)}_{i \in I}$ be a directed family of formal balls on the space of quasi-lenses. The families ${(Q_i, r_i)}_{i \in I}$ and ${(C_i, r_i)}_{i \in I}$ are also directed, because $\dP$ is defined as a maximum. We also obtain a directed family of formal balls $((F_{Q_i}, F^{C_i}), r_i)$ on the space of discrete normalized forks, by Proposition~\ref{prop:fork=lens}. By Proposition~\ref{prop:HX:Ycomplete} and Proposition~\ref{prop:QX:Ycomplete}, the supremum $(C, r)$ of ${(C_i, r_i)}_{i \in I}$ exists and is a naive supremum, and the supremum $(Q, r)$ (with the same $r$) of ${(Q_i, r_i)}_{i \in I}$ exists and is a naive supremum. By Lemma~\ref{lemma:PX:naivesup}, $(F_Q, F^C)$ is a fork, and certainly it is discrete and normalized. To show that $((Q, C), r)$ is the desired supremum, we observe that the upper bounds $((Q', C'), r')$ of ${((Q_i, C_i), r_i)}_{i \in I}$ are exactly those formal balls such that $(Q', r')$ is an upper bound of ${(Q_i, r_i)}_{i \in I}$ and $(C', r')$ is an upper bound of ${(C_i, r_i)}_{i \in I}$.
This is again because our $\dKRH$ quasi-metric on forks (or the $\dP$ quasi-metric on quasi-lenses) is defined as a maximum. Hence $((Q, C), r)$ is such an upper bound, and if $((Q', C'), r')$ is any other upper bound, then $(Q, r) \leq^{\dQ^+} (Q', r')$ and $(C, r) \leq^{\dH^+} (C', r')$, so $((Q, C), r) \leq^{\dP^+} ((Q', C'), r')$. \qed \begin{rem} \label{rem:PX:Ycomplete} For complete metric spaces $X, d$, Proposition~\ref{prop:PX:Ycomplete} specializes to the well-known fact that the space of non-empty compact subsets of $X$, with the Hausdorff metric, is complete. \end{rem} In order to show that the Plotkin powerdomain of an algebraic Yoneda-complete quasi-metric space is algebraic, we require the following two lemmas. For disambiguation purposes, we write $\Box_{\mathcal P} U$ for the set of quasi-lenses $(Q, C)$ such that $Q \subseteq U$, reserving the notation $\Box U$ for the set of non-empty compact saturated subsets $Q$ of $X$ such that $Q \subseteq U$. We use a similar convention with $\Diamond_{\mathcal P} U$ and $\Diamond U$. \begin{customlemma}{\thesubsection.A} \label{lemma:QC:approx} Let $X, d$ be a standard algebraic quasi-metric space, with a strong basis $\mathcal B$. Given any quasi-lens $(Q, C)$ on $X$, any $\epsilon > 0$, and given open subsets $U$, $U_1$, \ldots, $U_m$ of $X$ such that $(Q, C) \in \Box_{\mathcal P} U \cap \bigcap_{i=1}^m \Diamond_{\mathcal P} U_i$, there is a finite non-empty subset $E$ of $\mathcal B$ and a number $r$ such that $0 < r \leq \epsilon$ and $(Q, C) \in \Box_{\mathcal P} (\bigcup_{x \in E} B^d_{x, <r}) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r} \subseteq \Box_{\mathcal P} U \cap \bigcap_{i=1}^m \Diamond_{\mathcal P} U_i$. \end{customlemma} \proof We use the following remark: (A) For every $\epsilon > 0$, the open balls $B^d_{x, <r}$ with $x \in \mathcal B$ and $0 < r < \epsilon$ form a base of the $d$-Scott topology on $X$. In order to prove this, let $U$ be any open subset of $X$, and $y \in U$.
Then $(y, 0)$ is in $\widehat U \cap V_\epsilon$, and is the supremum of a directed family of formal balls of the form $(x, r)$ with $x \in \mathcal B$ and $(x, r) \ll (y, 0)$ by Lemma~\ref{lemma:B:basis}. Hence one of them is in $\widehat U \cap V_\epsilon$. Since it is in $V_\epsilon$, $r < \epsilon$, and since $(x, r) \ll (y, 0)$ where $x$ is a center point in a standard algebraic space, $d (x, y) < r$, so $y$ is in $B^d_{x, <r}$. Moreover, $B^d_{x, <r}$ is open, still because $x$ is a center point, and because $B^d_{x, <r} = B^{d^+}_{(x, 0), <r} \cap X$. Finally, $B^d_{x, <r}$ is included in $\upc (x, r) \cap X \subseteq \widehat U \cap X = U$. Also, we will rely on the following: (B) for every center point $x$, for every closed subset $C'$ of $X$, for every $r > 0$, $d (x, C') < r$ if and only if $B^d_{x, <r}$ intersects $C'$. Indeed, by \cite[Proposition~6.12]{JGL:formalballs} and since $x$ is a center point, $d (x, C') = \inf_{y \in C'} d (x, y)$, so $d (x, C') < r$ if and only if $d (x, y) < r$ for some $y \in C'$. For every $i$, $1\leq i\leq m$, $C$ intersects $U_i$. Since $Q \subseteq U$, by definition of quasi-lenses $C$ is included in $cl (U \cap C)$, so $cl (U \cap C)$ intersects $U_i$. It follows that $U \cap C$ intersects $U_i$, so that $C$ is in $\Diamond (U \cap U_i)$. Hence $(Q, C)$ is in the open set $\Box_{\mathcal P} U \cap \bigcap_{i=1}^m \Diamond_{\mathcal P} (U \cap U_i)$. Replacing $U_i$ by $U \cap U_i$ if necessary, we will therefore assume that every $U_i$ is included in $U$. For every $i$, $1\leq i\leq m$, $U_i$ intersects $C$. Using (A), there is an open ball $B^d_{x'_i, <r'_i}$ included in $U_i$ that intersects $C$, with $x'_i \in \mathcal B$ and $0 < r'_i < \epsilon$. By (B), $d (x'_i, C) < r'_i$. Let $\epsilon_1$ be any positive number such that $d (x'_i, C) < r'_i - \epsilon_1$ for every $i$, $1\leq i\leq m$. Using (B) again, this means that $B^d_{x'_i, <r'_i - \epsilon_1}$ intersects $C$. 
Since $Q$ is compact and $C$ is closed, $Q \cap C$ is compact. Using Lemma~\ref{lemma:Q:approx}, there are finitely many points $y'_1, \cdots, y'_p \in \mathcal B$ and radii $s'_1, \cdots, s'_p < \epsilon$ such that $Q \cap C \subseteq \bigcup_{j=1}^p B^d_{y'_j, <s'_j} \subseteq U$. By Lemma~\ref{lemma:Q:B:eps}, there is a positive number $\epsilon_2$ such that $\epsilon_2 < s'_j$ for every $j$, $1\leq j\leq p$, and such that $Q \cap C \subseteq \bigcup_{j=1}^p B^d_{y'_j, <s'_j-\epsilon_2} \subseteq U$. Let $r = \min (\epsilon, \epsilon_1, \epsilon_2)$. We use (A) a second time. For every $i$, $1\leq i\leq m$, $B^d_{x'_i, <r'_i-\epsilon_1}$ intersects $C$, so there is an open ball $B^d_{x_i, <r_i}$ included in $B^d_{x'_i, <r'_i-\epsilon_1}$ that intersects $C$, with $x_i \in \mathcal B$ and $0 < r_i < r$. Now the larger open ball $B^d_{x_i, <r}$ also intersects $C$, and we claim that it is included in $U_i$. For every $z \in B^d_{x_i, <r}$, $d (x_i, z) < r$, so $d (x'_i, z) \leq d (x'_i, x_i) + d (x_i, z) < r'_i -\epsilon_1 + r \leq r'_i - \epsilon_1 + \epsilon_1 = r'_i$; this shows that $z$ is in $B^d_{x'_i, <r'_i}$, hence in $U_i$. We now use Lemma~\ref{lemma:Q:approx} a second time. Since $Q \cap C \subseteq \bigcup_{j=1}^p B^d_{y'_j, <s'_j-\epsilon_2}$, there are finitely many points $y_1, \cdots, y_q \in \mathcal B$ and radii $s_1, \cdots, s_q < r$ such that $Q \cap C \subseteq \bigcup_{k=1}^q B^d_{y_k, <s_k} \subseteq \bigcup_{j=1}^p B^d_{y'_j, <s'_j-\epsilon_2}$. We may assume that every term $B^d_{y_k, <s_k}$ in the union $\bigcup_{k=1}^q B^d_{y_k, <s_k}$ intersects $Q \cap C$: any term of that form that does not intersect $Q \cap C$ can safely be removed from the union. In particular, $B^d_{y_k, <s_k}$ intersects $C$ for every $k$, $1\leq k\leq q$. It follows that the larger open ball $B^d_{y_k, <r}$ also intersects $C$. Since $Q \cap C$ is included in $\bigcup_{k=1}^q B^d_{y_k, <s_k}$, it is included in the larger set $\bigcup_{k=1}^q B^d_{y_k, <r}$.
We claim that the latter is included in $U$. For every element $z$ of $\bigcup_{k=1}^q B^d_{y_k, <r}$, there is an index $k$, $1\leq k\leq q$, such that $d (y_k, z) < r$. Since $y_k$ is in $\bigcup_{j=1}^p B^d_{y'_j, <s'_j-\epsilon_2}$, there is an index $j$, $1\leq j\leq p$, such that $d (y'_j, y_k) < s'_j-\epsilon_2$. Then $d (y'_j, z) \leq d (y'_j, y_k) + d (y_k, z) < s'_j - \epsilon_2 + r \leq s'_j - \epsilon_2 + \epsilon_2 = s'_j$, so $z$ is in $B^d_{y'_j, <s'_j}$, hence in $U$. In summary, we have found finitely many points $x_1$, \ldots, $x_m$ and $y_1$, \ldots, $y_q$ in $\mathcal B$ and a number $r$ such that $0 < r \leq \epsilon$ and: $(a)$ $B^d_{x_i, <r}$ intersects $C$ for each $i$, $1\leq i\leq m$, $(b)$ $B^d_{x_i, <r}$ is included in $U_i$ for each $i$, $1\leq i\leq m$, $(c)$ $B^d_{y_k, <r}$ intersects $C$ for each $k$, $1\leq k\leq q$; $(d)$ $Q \cap C$ is included in $\bigcup_{k=1}^q B^d_{y_k, <r}$, and $(e)$ $\bigcup_{k=1}^q B^d_{y_k, <r}$ is included in $U$. Let $E = \{x_1, \cdots, x_m, y_1, \cdots, y_q\}$. This is a finite subset of $\mathcal B$. By $(d)$ and since $Q \cap C$ is non-empty, $q$ is different from $0$, so $E$ is non-empty. Using $(d)$ again, $Q \cap C$ is included in $\bigcup_{x \in E} B^d_{x, <r}$, and therefore $Q$ also is included in $\bigcup_{x \in E} B^d_{x, <r}$, since $Q \subseteq \upc (Q \cap C)$ by definition of quasi-lenses, and since open sets such as $\bigcup_{x \in E} B^d_{x, <r}$ are upwards-closed. For every $x \in E$, $B^d_{x, <r}$ intersects $C$, by $(a)$ and $(c)$. Therefore $(Q, C)$ is in $\Box_{\mathcal P} (\bigcup_{x \in E} B^d_{x, <r}) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r}$. Finally, let $(Q', C')$ be any element of $\Box_{\mathcal P} (\bigcup_{x \in E} B^d_{x, <r}) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r}$. From $(b)$, $(e)$, and the fact that each $U_i$ is included in $U$, we obtain that $\bigcup_{x \in E} B^d_{x, <r} \subseteq U$. 
Hence, and since $Q' \subseteq \bigcup_{x \in E} B^d_{x, <r}$, $Q'$ is included in $U$. Next, using (B), for every $x \in E$, $d (x, C') < r$. In particular, for every $i$, $1\leq i\leq m$, $d (x_i, C') < r$. By (B), $C'$ intersects $B^d_{x_i, <r}$. Since the latter is included in $U_i$ by $(b)$, $C'$ intersects $U_i$. It follows that $(Q', C')$ is in $\Box_{\mathcal P} U \cap \bigcap_{i=1}^m \Diamond_{\mathcal P} U_i$. \qed \begin{customlemma}{\thesubsection.B} \label{lemma:QC:B:eps} Let $X, d$ be a standard algebraic quasi-metric space. Given any quasi-lens $(Q, C)$ on $X$, for all center points $x_1$, \ldots, $x_m$ and all $r_1, \cdots, r_m > 0$ such that $(Q, C) \in \Box_{\mathcal P} (\bigcup_{j=1}^m B^d_{x_j, <r_j}) \cap \bigcap_{j=1}^m \Diamond_{\mathcal P} B^d_{x_j, <r_j}$, there is a number $\epsilon$ such that $0 < \epsilon < r_1, \cdots, r_m$ and $(Q, C) \in \Box_{\mathcal P} (\bigcup_{j=1}^m B^d_{x_j, <r_j-\epsilon}) \cap \bigcap_{j=1}^m \Diamond_{\mathcal P} B^d_{x_j, <r_j-\epsilon}$. \end{customlemma} \proof As in the proof of Lemma~\ref{lemma:QC:approx}, since $x_j$ is a center point for every $j$, $d (x_j, C) < r_j$ if and only if $C$ intersects $B^d_{x_j, <r_j}$, by \cite[Proposition~6.12]{JGL:formalballs}. Therefore $d (x_j, C) < r_j$ for every $j$, $1\leq j\leq m$. Let $\epsilon_1 > 0$ be such that $d (x_j, C) < r_j-\epsilon_1$ for every $j$, $1\leq j\leq m$. By Lemma~\ref{lemma:Q:B:eps}, $Q \subseteq \bigcup_{j=1}^m B^d_{x_j, <r_j-\epsilon_2}$ for some $\epsilon_2$ such that $0 < \epsilon_2 < r_1, \cdots, r_m$. It then suffices to let $\epsilon$ be $\min (\epsilon_1, \epsilon_2)$. \qed Let us write $\langle E \rangle$ for the \emph{order-convex hull} of the set $E$. This is defined as $\dc E \cap \upc E$. When $E$ is finite and non-empty, $\dc E$ is closed and intersects $\upc E$, so $\langle E \rangle$ is a lens. Therefore $(\upc \langle E \rangle, cl (\langle E \rangle))$ is a quasi-lens.
We observe that $\upc \langle E \rangle = \upc E$, and that $cl (\langle E \rangle) = \dc E$. (Only the latter needs an argument: $\dc E$ is the closure of $E$ since $E$ is finite, and as $E \subseteq \langle E \rangle$, $\dc E \subseteq cl (\langle E \rangle)$; in the reverse direction, $\langle E \rangle \subseteq \dc E$, and since $\dc E$ is closed, it must also contain the closure of $\langle E \rangle$.) It follows that $(\upc E, \dc E)$ is a quasi-lens. \begin{thm}[Algebraicity of Plotkin powerdomains] \label{thm:P:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with strong basis $\mathcal B$. The space of quasi-lenses on $X, d$, with the $\dP$ quasi-metric, is algebraic Yoneda-complete. Every quasi-lens of the form $(\upc E, \dc E)$ where $E$ is a finite non-empty set of center points is a center point in the space of quasi-lenses, and those such that $E \subseteq \mathcal B$ form a strong basis. \end{thm} \proof For every $\epsilon > 0$, $B^{\dP^+}_{((\upc E, \dc E), 0), < \epsilon}$ is the set of formal balls $((Q, C), r)$ such that $\dQ (\upc E, Q) < \epsilon-r$ and $\dH (\dc E, C) < \epsilon-r$. We show that it is Scott-open as follows. For every directed family ${((Q_i, C_i), r_i)}_{i \in I}$ with supremum $((Q, C), r)$, we have seen in Proposition~\ref{prop:PX:Ycomplete} that $(Q, r)$ is the supremum of ${(Q_i, r_i)}_{i \in I}$. Since $\dQ (\upc E, Q) < \epsilon-r$, $(Q, r)$ is in $B^{\dQ^+}_{(\upc E, 0), <\epsilon}$, and that is Scott-open because $\upc E$ is a center point, by Lemma~\ref{lemma:Q:simple:center}. Therefore $(Q_i, r_i)$ is in $B^{\dQ^+}_{(\upc E, 0), <\epsilon}$ for some $i \in I$, namely, $\dQ (\upc E, Q_i) < \epsilon - r_i$, or equivalently $(\upc E, \epsilon) \leq^{\dQ^+} (Q_i, r_i)$. Similarly, and using Lemma~\ref{lemma:H:simple:center}, $(\dc E, \epsilon) \leq^{\dH^+} (C_j, r_j)$ for some $j \in I$.
By directedness we can take the same value for $i$ and $j$, and it follows that $\dP ((\upc E, \dc E), (Q_i, C_i)) < \epsilon - r_i$, i.e., that $((Q_i, C_i), r_i)$ is in $B^{\dP^+}_{((\upc E, \dc E), 0), < \epsilon}$. This shows that $(\upc E, \dc E)$ is a center point. Let $(Q, C)$ be a quasi-lens. We wish to show that $((Q, C), 0)$ is the supremum of some family $D$ of formal balls of the form $((\upc E, \dc E), r)$, with $E$ non-empty, finite, and included in $\mathcal B$. We define $D$ as the set of those formal balls of that form such that $d (x, C) < r$ for every $x \in E$ and $Q \subseteq \bigcup_{x \in E} B^d_{x, <r}$. This is simply the conjunction of the conditions we took in the proofs of Theorem~\ref{thm:H:alg} and Theorem~\ref{thm:Q:alg}. As in the proof of Lemma~\ref{lemma:QC:approx}, since $x$ is a center point for every $x \in E$, $d (x, C) < r$ if and only if $C$ intersects $B^d_{x, <r}$, by \cite[Proposition~6.12]{JGL:formalballs}, so $D$ is the set of formal balls $((\upc E, \dc E), r)$ with $E$ non-empty, finite, and included in $\mathcal B$, such that $(Q, C) \in \Box_{\mathcal P} (\bigcup_{x \in E} B^d_{x, <r}) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r}$. By Lemma~\ref{lemma:QC:approx} with $U = X$, $m=0$, and $\epsilon > 0$ arbitrary, $D$ is non-empty. We embark on showing that $D$ is directed. Let $((\upc E, \dc E), r)$ and $((\upc F, \dc F), s)$ be two elements of $D$. Let $U = \bigcup_{x \in E} B^d_{x, <r}$ and $V = \bigcup_{y \in F} B^d_{y, <s}$. Using the fact that $\Box_{\mathcal P} U \cap \Box_{\mathcal P} V = \Box_{\mathcal P} (U \cap V)$, $(Q, C)$ is in $\Box_{\mathcal P} (U \cap V) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r} \cap \bigcap_{y \in F} \Diamond_{\mathcal P} B^d_{y, <s}$.
By Lemma~\ref{lemma:QC:B:eps}, there is a number $\epsilon$ such that $0 < \epsilon < \min (r, s)$ and such that $(Q, C)$ is in $\mathcal U = \Box_{\mathcal P} (\bigcup_{x \in E} B^d_{x, <r-\epsilon} \cap \bigcup_{y \in F} B^d_{y, <s-\epsilon}) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r-\epsilon} \cap \bigcap_{y \in F} \Diamond_{\mathcal P} B^d_{y, <s-\epsilon}$. By Lemma~\ref{lemma:QC:approx}, there is a finite non-empty subset $G$ of $\mathcal B$ and a number $t$ such that $0 < t \leq \epsilon$ and $(Q, C) \in \Box_{\mathcal P} (\bigcup_{z \in G} B^d_{z, <t}) \cap \bigcap_{z \in G} \Diamond_{\mathcal P} B^d_{z, <t} \subseteq \mathcal U$. In particular, $((\upc G, \dc G), t)$ is in $D$. Let us verify that $((\upc E, \dc E), r) \leq^{\dP^+} ((\upc G, \dc G), t)$. The quasi-lens $(\upc G, \dc G)$ is in $\Box_{\mathcal P} (\bigcup_{z \in G} B^d_{z, <t}) \cap \bigcap_{z \in G} \Diamond_{\mathcal P} B^d_{z, <t}$, hence in $\mathcal U$. By definition of $\mathcal U$, $\upc G \subseteq \bigcup_{x \in E} B^d_{x, <r-\epsilon}$, so for every $z \in \upc G$, there is an $x \in E \subseteq \upc E$ such that $d (x, z) < r-\epsilon$. It follows that $\dQ (\upc E, \upc G) < r-\epsilon \leq r-t$, by Lemma~\ref{lemma:dQ}, so $(\upc E, r) \leq^{\dQ^+} (\upc G, t)$. By definition of $\mathcal U$ and since $(\upc G, \dc G) \in \mathcal U$, $\dc G$ intersects $B^d_{x, <r-\epsilon}$ for every $x \in E$. In other words, for every $x \in E$, there is a $z \in \dc G$ such that $d (x, z) < r-\epsilon$. For every $x' \in \dc E$, there is a point $x \in E$ such that $x' \leq x$, and we have just seen that $\inf_{z \in \dc G} d (x, z) < r-\epsilon$. Using \cite[Proposition~6.12]{JGL:formalballs}, and since $x$ is a center point, $d (x, \dc G) < r-\epsilon$, and then $d (x', \dc G) \leq d (x', x) + d (x, \dc G)$ (using Lemma~6.11 of \cite{JGL:formalballs}) $< r-\epsilon$. Taking suprema over the elements $x'$ of $\dc E$, we obtain that $\dH (\dc E, \dc G) < r-\epsilon \leq r-t$.
Combining this with $\dQ (\upc E, \upc G) < r-t$ through Lemma~\ref{lem:dP}, we obtain that $\dP ((\upc E, \dc E), (\upc G, \dc G)) < r-t$, hence that $((\upc E, \dc E), r) \leq^{\dP^+} ((\upc G, \dc G), t)$, as promised. We verify that $((\upc F, \dc F), s) \leq^{\dP^+} ((\upc G, \dc G), t)$ in a similar fashion. It follows that $D$ is directed. We claim that $((Q, C), 0)$ is the supremum of $D$. It is an upper bound: for every element $((\upc E, \dc E), r)$ of $D$, $Q$ is included in $\bigcup_{x \in E} B^d_{x, <r}$, so $\dQ (\upc E, Q) = \sup_{z \in Q} \inf_{x \in \upc E} d (x, z) \leq \sup_{z \in Q} \inf_{x \in E} d (x, z) \leq r$; and for every $x \in E$, $B^d_{x, <r}$ intersects $C$, so $d (x, C) < r$, so $\dH (\dc E, C) = \sup_{x' \in \dc E} d (x', C) = \sup_{x \in E} \sup_{x' \leq x} d (x', C) \leq \sup_{x \in E} \sup_{x' \leq x} \underbrace {d (x', x)}_0 + d (x, C) \leq r$. Therefore $\dP ((\upc E, \dc E), (Q, C)) \leq r$. If $((Q', C'), r')$ is another upper bound of $D$, then $r'=0$ since $D$ contains elements with arbitrarily small radii, by Lemma~\ref{lemma:QC:approx}. We claim that $Q \supseteq Q'$ and $C \subseteq C'$. This will imply that $\dQ (Q, Q')=0$ and $\dH (C, C')=0$, so that $\dP ((Q, C), (Q', C'))=0$ by Lemma~\ref{lem:dP}, and therefore $((Q, C), 0) \leq^{\dP^+} ((Q', C'), 0)$. This will prove that $((Q, C), 0)$ is the least upper bound of $D$, which will terminate the proof. Let $U$ be any open neighborhood of $Q$, and let $U'$ be any open set that intersects $C$. Then $(Q, C)$ is in $\Box_{\mathcal P} U \cap \Diamond_{\mathcal P} U'$. There is a finite non-empty subset $E$ of $\mathcal B$ and a number $r > 0$ such that $(Q, C) \in \Box_{\mathcal P} (\bigcup_{x \in E} B^d_{x, <r}) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r} \subseteq \Box_{\mathcal P} U \cap \Diamond_{\mathcal P} U'$, by Lemma~\ref{lemma:QC:approx}.
By Lemma~\ref{lemma:QC:B:eps}, there is a number $\epsilon$ such that $0 < \epsilon < r$ and $(Q, C) \in \Box_{\mathcal P} (\bigcup_{x \in E} B^d_{x, <r-\epsilon}) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r-\epsilon}$. In particular, $((\upc E, \dc E), r-\epsilon)$ is in $D$. Since $((Q', C'), 0)$ is an upper bound of $D$, we have $((\upc E, \dc E), r-\epsilon) \leq^{\dP^+} ((Q', C'), 0)$. Hence $\dQ (\upc E, Q') \leq r-\epsilon$ and $\dH (\dc E, C') \leq r-\epsilon$, using Lemma~\ref{lem:dP}. Using Lemma~\ref{lemma:dQ}, the first of these inequalities implies that for every $z \in Q'$, there is a point $x' \in \upc E$ such that $d (x', z) \leq r-\epsilon$; in particular, there is a point $x \in E$ such that $x \leq x'$, so $d (x, z) \leq d (x', z) \leq r-\epsilon < r$. Therefore $Q' \subseteq \bigcup_{x \in E} B^d_{x, <r}$. From the second inequality, $\dH (\dc E, C') \leq r-\epsilon$, we obtain that for every $x \in E$, $d (x, C') \leq r-\epsilon < r$, so there is a point $y \in C'$ such that $d (x, y) < r$, since $x$ is a center point. In other words, for every $x \in E$, $C'$ intersects $B^d_{x, <r}$. Together with $Q' \subseteq \bigcup_{x \in E} B^d_{x, <r}$, this shows that $(Q', C')$ is in $\Box_{\mathcal P} (\bigcup_{x \in E} B^d_{x, <r}) \cap \bigcap_{x \in E} \Diamond_{\mathcal P} B^d_{x, <r}$, hence in $\Box_{\mathcal P} U \cap \Diamond_{\mathcal P} U'$. Hence $Q'$ is included in $U$ and $C'$ intersects $U'$. Since that holds for every open neighborhood $U$ of $Q$ and for every open set $U'$ that intersects $C$, we obtain that $Q \supseteq Q'$ and $C \subseteq C'$, as promised. \qed As for the Hoare and Smyth powerdomains, we reduce the study of continuity to algebraicity. \begin{lem} \label{lemma:P:functor} Let $X, d$ and $Y, \partial$ be two continuous Yoneda-complete quasi-metric spaces, and $f \colon X, d \to Y, \partial$ be a $1$-Lipschitz continuous map.
The map $\Plotkin f$ defined by $\Plotkin f (Q, C) = (\Smyth f (Q), \Hoare f (C)) = (\upc f [Q], cl (f [C]))$ is a $1$-Lipschitz continuous map from the space of quasi-lenses on $X$ with the $\dP$ quasi-metric to the space of quasi-lenses on $Y$ with the $\mP\partial$ quasi-metric. \end{lem} \proof We need to check that $\Plotkin f (Q, C)$ is a quasi-lens. Let $Q' = \upc f [Q]$, $C' = cl (f [C])$. We rely on Proposition~\ref{prop:fork=lens}: we know that $(F_Q, F^C)$ satisfies Walley's condition, and we show that $(Q', C')$ is a quasi-lens by showing that $(F_{Q'}, F^{C'})$ satisfies Walley's condition. By Lemma~\ref{lemma:H:functor} and Lemma~\ref{lemma:Q:functor}, $F^{C'} = \Prev f (F^C)$ and $F_{Q'} = \Prev f (F_Q)$. Now $F_{Q'} (k+k') = \Prev f (F_Q) (k+k') = F_Q ((k+k') \circ f) \leq F_Q (k \circ f) + F^C (k' \circ f)$ (by Walley's condition, left inequality, on $(F_Q, F^C)$) $= \Prev f (F_Q) (k) + \Prev f (F^C) (k') = F_{Q'} (k) + F^{C'} (k')$. The other part of Walley's condition is proved similarly. $\Plotkin f$ is $1$-Lipschitz, because of Lemma~\ref{lem:dP} and the fact that $\Smyth f$ and $\Hoare f$ are $1$-Lipschitz (special cases of Lemma~\ref{lemma:H:functor} and Lemma~\ref{lemma:Q:functor}). Given a directed family of formal balls ${((Q_i, C_i), r_i)}_{i \in I}$ on quasi-lenses, we have seen in Proposition~\ref{prop:PX:Ycomplete} that its supremum $((Q, C), r)$ is characterized by $r = \inf_{i \in I} r_i$, $(Q, r)$ is the supremum of ${(Q_i, r_i)}_{i \in I}$ in $\mathbf B (\Smyth X, \dQ)$ and $(C, r)$ is the supremum of ${(C_i, r_i)}_{i \in I}$ in $\mathbf B (\Hoare X, \dH)$. Since $\Smyth f$ and $\Hoare f$ are $1$-Lipschitz continuous (Lemma~\ref{lemma:H:functor} and Lemma~\ref{lemma:Q:functor}), $(\Smyth f (Q), r)$ is the supremum of ${(\Smyth f (Q_i), r_i)}_{i \in I}$ and $(\Hoare f (C), r)$ is the supremum of ${(\Hoare f (C_i), r_i)}_{i \in I}$.
By Proposition~\ref{prop:PX:Ycomplete}, the supremum of ${(\Plotkin f (Q_i, C_i), r_i)}_{i \in I}$ is $((\Smyth f (Q), \Hoare f (C)), r) = (\Plotkin f (Q, C), r)$. Therefore $\Plotkin f$ is $1$-Lipschitz continuous. \qed Let $X, d$ be a continuous Yoneda-complete quasi-metric space. There is an algebraic Yoneda-complete quasi-metric space $Y, \partial$ and two $1$-Lipschitz continuous maps $r \colon Y, \partial \to X, d$ and $s \colon X, d \to Y, \partial$ such that $r \circ s = \identity X$ \cite[Theorem~7.9]{JGL:formalballs}. By Lemma~\ref{lemma:P:functor}, $\Plotkin r$ and $\Plotkin s$ are also $1$-Lipschitz continuous, and clearly $\Plotkin r \circ \Plotkin s = \identity \relax$, so the space of quasi-lenses on $X$ with the $\dP$ quasi-metric is a $1$-Lipschitz continuous retract of that on $Y$, with the $\mP\partial$ quasi-metric. Theorem~\ref{thm:P:alg} states that the latter is algebraic Yoneda-complete, whence: \begin{thm}[Continuity for the Plotkin powerdomain] \label{thm:P:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The space of quasi-lenses on $X$ with the $\dP$ quasi-metric is continuous Yoneda-complete. \qed \end{thm} \subsection{The Vietoris Topology} \label{sec:vietoris-topology} Recall that the Vietoris topology on the space of quasi-lenses on $X$ is generated by the subbasic open sets $\Box_{\mathcal P} U = \{(Q, C) \mid Q \subseteq U\}$ and $\Diamond_{\mathcal P} U = \{(Q, C) \mid C \cap U \neq \emptyset\}$, where $U$ ranges over the open subsets of $X$. This generalizes the usual Vietoris topology on spaces of lenses. Recall also that the \emph{weak topology} on the space of forks is the topology induced by the inclusion in the product of the spaces of sublinear and superlinear previsions, each equipped with their weak topology. In other words, it has subbasic open sets $[h > a]_- = \{(F^-, F^+) \mid F^- (h) > a\}$ and $[h > a]_+ = \{(F^-, F^+) \mid F^+ (h) > a\}$, where $h$ ranges over $\Lform X$ and $a \in \mathbb{R}_+$.
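As a sanity check, the match between the box/diamond subbasic opens and the weak subbasic opens can be tested mechanically on a finite discrete space, where $F_Q$ is a pointwise minimum and $F^C$ a pointwise maximum. The following Python sketch is purely illustrative; the function names and the concrete sets below are our own assumptions, not notation from the text.

```python
# Minimal sketch (not from the paper): on a finite set, a quasi-lens (Q, C)
# induces the discrete previsions F_Q(h) = min over Q and F^C(h) = max over C.
# Membership in the box/diamond subbasic opens then matches membership in the
# weak subbasic opens, evaluated at characteristic functions.

def F_Q(Q, h):
    # superlinear prevision induced by the compact part Q
    return min(h(x) for x in Q)

def F_sup_C(C, h):
    # sublinear prevision induced by the closed part C
    return max(h(x) for x in C)

def chi(U):
    # characteristic map of U
    return lambda x: 1 if x in U else 0

Q, C = {1, 2}, {1, 2, 3}     # a quasi-lens in a finite (discrete) space
U, V = {1, 2, 3, 4}, {3, 4}  # illustrative open sets

# (Q, C) lies in the box open at U  iff  F_Q(chi_U) > a for any a < 1:
assert (Q <= U) == (F_Q(Q, chi(U)) == 1)
# (Q, C) lies in the diamond open at V  iff  F^C(chi_V) > 0:
assert bool(C & V) == (F_sup_C(C, chi(V)) == 1)
# A negative instance: Q is not contained in V, and F_Q(chi_V) = 0.
assert not (Q <= V) and F_Q(Q, chi(V)) == 0
```

Taking $h = \chi_U$ in the weak subbasic opens is exactly what makes the box and diamond sets weakly open; this is the elementary half of the comparison.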
\begin{lem} \label{lemma:P:V=weak} Let $X, d$ be a sober, standard quasi-metric space. The map $(Q, C) \mapsto (F_Q, F^C)$ is a homeomorphism between the space of quasi-lenses on $X$ with the Vietoris topology and the space of discrete normalized forks on $X$ with the weak topology. \end{lem} \proof This is a bijection by Proposition~\ref{prop:fork=lens}. The homeomorphism part is an easy consequence of Lemma~\ref{lemma:H:V=weak} and Lemma~\ref{lemma:Q:V=weak}. \qed \begin{prop} \label{prop:P:Vietoris} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space. The $\dP$-Scott topology and the Vietoris topology coincide on the space of quasi-lenses on $X$. \end{prop} \proof The Vietoris open set $\Box_{\mathcal P} U$, $U$ open in $X$, is the inverse image of $\Box U$ by the first projection $\pi_1 \colon (Q, C) \mapsto Q$. Let us check that $\pi_1$ is continuous with respect to the $\dP$-Scott and $\dQ$-Scott topologies. First, $\dQ (Q, Q') \leq \dP ((Q, C), (Q', C')) = \max (\dQ (Q, Q'), \dH (C, C'))$, so $\pi_1$ is $1$-Lipschitz. For every directed family ${((Q_i, C_i), r_i)}_{i \in I}$, its supremum $((Q, C), r)$ is given by Proposition~\ref{prop:PX:Ycomplete}, and we have seen that $(Q, r)$ is then the (naive) supremum of ${(Q_i, r_i)}_{i \in I}$. This shows our continuity claim on $\pi_1$. Since $\Box U$ is open not only in the upper Vietoris topology, but also in the $\dQ$-Scott topology by Theorem~\ref{thm:Q:Vietoris}, $\Box_{\mathcal P} U$ is open in the $\dP$-Scott topology. We reason similarly with the second projection, showing that it is continuous with respect to the $\dP$-Scott and $\dH$-Scott topologies. Using Theorem~\ref{thm:H:Vietoris}, $\Diamond_{\mathcal P} U$ is open not only in the Vietoris topology, but also in the $\dP$-Scott topology. This shows that the Vietoris topology is coarser than the $\dP$-Scott topology.
Conversely, a base of the $\dP$-Scott topology is given by the open balls $B^{\dP}_{((\upc E, \dc E), 0), <\epsilon}$, where $(\upc E, \dc E)$ ranges over the center points of the space of quasi-lenses, as given in Theorem~\ref{thm:P:alg}. That open ball is the set of quasi-lenses $(Q, C)$ such that $\dP ((\upc E, \dc E), (Q, C)) < \epsilon$, equivalently such that $Q \in B^{\dQ}_{\upc E, <\epsilon}$ and $C \in B^{\dH}_{\dc E, <\epsilon}$. Those are open balls centered at center points (by Theorem~\ref{thm:Q:alg} and Theorem~\ref{thm:H:alg}), hence are open in the $\dQ$-Scott and $\dH$-Scott topologies, respectively. By Proposition~\ref{prop:Q:Vietoris} and Proposition~\ref{prop:H:Vietoris}, they are open in the upper Vietoris topology on $\Smyth X$ and in the lower Vietoris topology on $\Hoare X$ respectively. Then $B^{\dP}_{((\upc E, \dc E), 0), <\epsilon}$ is the intersection of $\pi_1^{-1} (B^{\dQ}_{\upc E, <\epsilon})$ and of $\pi_2^{-1} (B^{\dH}_{\dc E, <\epsilon})$, which are both open in the Vietoris topology. Indeed, $\pi_1$ and $\pi_2$ are continuous with respect to the Vietoris topologies, since $\pi_1^{-1} (\Box U) = \Box_{\mathcal P} U$ and $\pi_2^{-1} (\Diamond U) = \Diamond_{\mathcal P} U$. It follows that $B^{\dP}_{((\upc E, \dc E), 0), <\epsilon}$ is open in the Vietoris topology, whence the Vietoris topology is also finer than the $\dP$-Scott topology. \qed \begin{thm}[$\dP$ quasi-metrizes the Vietoris topology] \label{thm:P:Vietoris} Let $X, d$ be a continuous Yoneda-complete quasi-metric space. The $\dP$-Scott topology coincides with the Vietoris topology on the space of quasi-lenses on $X$. \end{thm} \proof This is as for Theorem~\ref{thm:V:weak=dScott}, Theorem~\ref{thm:H:Vietoris}, or Theorem~\ref{thm:Q:Vietoris}. By \cite[Theorem~7.9]{JGL:formalballs}, $X, d$ is the $1$-Lipschitz continuous retract of an algebraic Yoneda-complete quasi-metric space $Y, \partial$. Call $s \colon X \to Y$ the section and $r \colon Y \to X$ the retraction.
Using Corollary~\ref{corl:P:prev}, we confuse quasi-lenses with discrete normalized forks. Then $\Prev s$ and $\Prev r$ form a $1$-Lipschitz continuous section-retraction pair by Lemma~\ref{lemma:P:functor}, and in particular $\Prev s$ is an embedding of the space of quasi-lenses on $X$ into the space of quasi-lenses on $Y$, with their $\dP$-Scott topologies. However, $s$ and $r$ are also just continuous, by Proposition~\ref{prop:cont}, so $\Prev s$ and $\Prev r$ also form a section-retraction pair between the same spaces, this time with their weak topologies (as spaces of previsions), by Fact~\ref{fact:Pf:weak}, that is, with their Vietoris topologies, by Lemma~\ref{lemma:P:V=weak}. By Proposition~\ref{prop:P:Vietoris}, the two topologies on the space of quasi-lenses on $Y$ are the same. Fact~\ref{fact:retract:two} then implies that the two topologies on the space of quasi-lenses on $X$ are the same as well. \qed \subsection{Forks} \label{sec:forks} Just as one would expect by analogy with spaces of sublinear and superlinear previsions, the space of (sub)normalized forks on a space $X$ arises as a retract of the Plotkin powerdomain on its space of (sub)normalized linear previsions. We will equate continuous valuations with linear previsions, as usual, and we will therefore write $\Val X$ for the space of linear previsions on $X$, $\Val_{\leq 1} X$ for the space of subnormalized linear previsions on $X$, and $\Val_1 X$ for the space of normalized linear previsions on $X$, all with the weak topology. We will in general use the notation $\Val_\bullet X$, where $\bullet$ can be nothing, ``$\leq 1$'', or ``$1$''. For every quasi-lens $(\mathcal Q, \mathcal C)$ on $\Val_\bullet X$, let $r_{\ADN} (\mathcal Q, \mathcal C)$ denote the pair $(r_{\DN} (\mathcal Q), r_{\AN} (\mathcal C))$. Conversely, for every (sub)normalized fork $(F^-, F^+)$ on $X$, let $s_{\ADN}^\bullet (F^-, F^+)$ be $(s_\DN^\bullet (F^-), s_\AN^\bullet (F^+))$.
Those are close cousins of the eponymous maps of \cite[Definition~3.23]{JGL-mscs16}. Following \cite[Definition~4.6]{Keimel:topcones2}, we say that addition is \emph{almost open} on $\Lform X$ if and only if, for any two open subsets $U$ and $V$ of $\Lform X$, the upward closure $\upc (U + V)$ is open. This happens notably when $X$ is core-compact and core-coherent \cite[Lemma~3.24]{JGL-mscs16}; for a definition of core-coherence, see Definition~5.2.18 of \cite{JGL-topology}; every locally compact, coherent space is core-compact and core-coherent, where coherence means that the intersection of two compact saturated sets is compact \cite[Lemma~5.2.24]{JGL-topology}. The following is a variant of Proposition~3.32 and Proposition~4.8 of \cite{JGL-mscs16}, using quasi-lenses instead of lenses. We say that a quasi-lens $(Q, C)$ is \emph{convex} if and only if both $Q$ and $C$ are. \begin{customprop}{\thesubsection.A} \label{prop:rsADN} Let $X$ be a space such that $\Lform X$ is locally convex and has an almost open addition map, and let $\bullet$ be nothing, ``$\leq 1$'', or ``$1$''. Additionally, if $\bullet$ is ``$1$'', we assume that $X$ is compact. The map $r_\ADN$ forms a retraction of the space of quasi-lenses on $\Val_\bullet X$ with the Vietoris topology onto the space of forks (resp., of subnormalized forks if $\bullet$ is ``$\leq 1$'', of normalized forks if $\bullet$ is ``$1$'') with the weak topology, with associated section $s_\ADN^\bullet$. This retraction restricts to a homeomorphism between the subspace of convex quasi-lenses on $\Val_\bullet X$ and the space of forks (resp., of subnormalized forks if $\bullet$ is ``$\leq 1$'', of normalized forks if $\bullet$ is ``$1$'') with the weak topology. 
\end{customprop} \proof We already know that $r_\DN$ forms a retraction of $\Smyth (\Val_\bullet X)$ onto the space of superlinear previsions on $X$ (with the appropriate (sub)normalization requirement, depending on $\bullet$) with the weak topology, and that $s_\DN^\bullet$ is the associated section. We also know that $r_\AN$ forms a retraction of $\Hoare (\Val_\bullet X)$ onto the space of sublinear previsions on $X$ (again with the appropriate (sub)normalization requirement) with the weak topology, and that $s_\AN^\bullet$ is the associated section. \emph{The map $r_\ADN$ takes its values in a space of forks.} For every quasi-lens $(\mathcal Q, \mathcal C)$ on $\Val_\bullet X$, we need to check that $r_\ADN (\mathcal Q, \mathcal C)$ is a fork, and the only thing that remains to be checked is Walley's condition. For every $h \in \Lform X$, let $h^\perp \colon \Val_\bullet X \to \overline{\mathbb{R}}_+$ map $G$ to $G (h)$. This is a lower semicontinuous map, since ${h^\perp}^{-1} (]a, +\infty]) = [h > a]$ for every $a \in \mathbb{R}_+$. Also, for all $h, h' \in \Lform X$, $(h+h')^\perp = h^\perp + {h'}^\perp$, because every $G$ in $\Val_\bullet X$ is linear. We note that, for every quasi-lens $(\mathcal Q, \mathcal C)$ on $\Val_\bullet X$, for every $h \in \Lform X$, $r_\DN (\mathcal Q) (h) = \inf_{G \in \mathcal Q} G (h) = \inf_{G \in \mathcal Q} h^\perp (G) = F_{\mathcal Q} (h^\perp)$, and similarly $r_\AN (\mathcal C) (h) = F^{\mathcal C} (h^\perp)$.
By Proposition~\ref{prop:fork=lens}, $(F_{\mathcal Q}, F^{\mathcal C})$ is a (discrete) normalized fork, so for all $h, h' \in \Lform X$, \begin{align*} r_\DN (\mathcal Q) (h+h') & = F_{\mathcal Q} ((h+h')^\perp) = F_{\mathcal Q} (h^\perp + {h'}^\perp) \\ & \leq F_{\mathcal Q} (h^\perp) + F^{\mathcal C} ({h'}^\perp) & \text{(Walley's condition)} \\ & = r_\DN (\mathcal Q) (h) + r_\AN (\mathcal C) (h'), \end{align*} and \begin{align*} r_\DN (\mathcal Q) (h) + r_\AN (\mathcal C) (h') & = F_{\mathcal Q} (h^\perp) + F^{\mathcal C} ({h'}^\perp) \\ & \leq F^{\mathcal C} (h^\perp + {h'}^\perp) & \text{(Walley's condition)} \\ & = F^{\mathcal C} ((h+h')^\perp) = r_\AN (\mathcal C) (h+h'). \end{align*} \emph{The map $r_\ADN$ is continuous.} We recall that $r_\DN$ and $r_\AN$ are both continuous, being part of a retraction-section pair. Since the weak topology on spaces of forks is induced by the product topology on the product of the spaces of superlinear and sublinear previsions, it suffices to show that the maps $(\mathcal Q, \mathcal C) \mapsto r_\DN (\mathcal Q)$ and $(\mathcal Q, \mathcal C) \mapsto r_\AN (\mathcal C)$ are continuous in order to establish that $r_\ADN$ is continuous. In turn, this follows from the fact that the projection maps $(\mathcal Q, \mathcal C) \mapsto \mathcal Q$ and $(\mathcal Q, \mathcal C) \mapsto \mathcal C$ are continuous, which is clear since the inverse image of any basic open set $\Box U$ by the first map is $\Box_{\mathcal P} U$ and the inverse image of any subbasic open set $\Diamond U$ by the second map is $\Diamond_{\mathcal P} U$. \emph{The map $s_\ADN^\bullet$ takes its values in the space of quasi-lenses.} Let $(F^-, F^+)$ be an arbitrary fork on $X$ (resp. subnormalized, or normalized, depending on $\bullet$), and let $(\mathcal Q, \mathcal C) = s_\ADN^\bullet (F^-, F^+)$. We already know that $\mathcal Q$ is a non-empty compact saturated subset of $\Val_\bullet X$, and that $\mathcal C$ is a non-empty closed subset of $\Val_\bullet X$.
We use Lemma~3.28 of \cite{JGL-mscs16}, which says that, for every $G' \in \Val_\bullet X$ such that $F^- \leq G'$, there is a $G \in \Val_\bullet X$ such that $F^- \leq G \leq F^+$ and $G \leq G'$. (This requires no condition on the space $X$.) By definition of $\mathcal Q$ and $\mathcal C$, this can be rephrased as: for every $G' \in \mathcal Q$, there is an element $G \in \mathcal Q \cap \mathcal C$ such that $G \leq G'$. In other words, $\mathcal Q \subseteq \upc (\mathcal Q \cap \mathcal C)$. Lemma~3.29 of \cite{JGL-mscs16} states that, given that $\Lform X$ is locally convex and has an almost open addition map, and that $X$ is compact if $\bullet$ is ``$1$'', then for every $G' \in \Val_\bullet X$ such that $G' \leq F^+$, there is a $G \in \Val_\bullet X$ such that $F^- \leq G \leq F^+$ and $G' \leq G$. This means that for every $G' \in \mathcal C$, $G'$ is in $\dc (\mathcal Q \cap \mathcal C)$. Therefore $\mathcal C \subseteq \dc (\mathcal Q \cap \mathcal C) \subseteq cl (\mathcal Q \cap \mathcal C)$. In particular, for every open neighborhood $\mathcal U$ of $\mathcal Q$, $\mathcal C$ is included in $cl (\mathcal Q \cap \mathcal U)$, so $(\mathcal Q, \mathcal C)$ is a quasi-lens. \emph{The map $s_\ADN^\bullet$ is continuous.} This follows from the fact that $s_\DN^\bullet$ and $s_\AN^\bullet$ are continuous. \emph{$r_\ADN \circ s_\ADN^\bullet$ is the identity map.} This follows since $r_\DN \circ s_\DN^\bullet$ and $r_\AN \circ s_\AN^\bullet$ are both identity maps. \emph{$s_\ADN^\bullet \circ r_\ADN$ is the identity map on the space of convex quasi-lenses.} It suffices to observe that $s_\DN^\bullet \circ r_\DN$ is the identity map on the space of convex non-empty compact saturated subsets of $\Val_\bullet X$, and that $s_\AN^\bullet \circ r_\AN$ is the identity map on the space of convex non-empty closed subsets of $\Val_\bullet X$.
Those are Propositions~4.5 and~4.3 of \cite{JGL-mscs16}, respectively; the second one requires that $\Lform X$ be locally convex, while the first one requires nothing from $X$. \qed \begin{customrem}{\thesubsection.B} \label{rem:lens:qlens} Under the assumption that $\Lform X$ is locally convex and has an almost open addition map (and that $X$ is compact in the case where $\bullet$ is ``$1$''), the composition of the homeomorphism of Proposition~\ref{prop:rsADN} with the homeomorphism of \cite[Proposition~4.8]{JGL-mscs16} yields a homeomorphism from the space of convex lenses to the space of convex quasi-lenses on $\Val_\bullet X$. \end{customrem} \begin{thm}[Continuity for forks] \label{thm:ADN:cont} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, $a > 0$, and $\bullet$ be ``$\leq 1$'' or ``$1$''. Let us also assume that $\Lform X$ has an almost open addition map, and that $X$ is compact in the case where $\bullet$ is ``$1$''. The space of all subnormalized (if $\bullet$ is ``$\leq 1$'', normalized if $\bullet$ is ``$1$'') forks on $X$ with the $\dKRH^a$ quasi-metric is continuous Yoneda-complete. It arises as a $1$-Lipschitz continuous retract of the Plotkin powerdomain over $\Val_\bullet X$ through $r_\ADN$ and $s_\ADN^\bullet$. That retraction cuts down to an isometry between the space of (sub)normalized forks on $X$ and the space of convex quasi-lenses over $\Val_\bullet X$, with the $\mP {(\dKRH^a)}$ quasi-metric. The supremum of a directed family of formal balls ${((F^-_i, F^+_i), r_i)}_{i \in I}$ is $((F^-, F^+), r)$ where $r = \inf_{i \in I} r_i$, $F^+$ is the naive supremum of ${(F^+_i, r_i)}_{i \in I}$, and $F^-$ is the naive supremum of ${(F^-_i, r_i)}_{i \in I}$. In particular, $(F^+, r)$ is the supremum of ${(F^+_i, r_i)}_{i \in I}$ in the space of (sub)normalized sublinear previsions and $(F^-, r)$ is the supremum of ${(F^-_i, r_i)}_{i \in I}$ in the space of (sub)normalized superlinear previsions.
\end{thm} \proof We recall that, since $X, d$ is continuous Yoneda-complete, $\Lform X$ is automatically locally convex (Proposition~\ref{prop:locconvex}). We use Theorem~\ref{thm:AN:cont} and Lemma~\ref{lemma:AN:cont}, resp.\ Theorem~\ref{thm:DN:cont}, which state analogous results for $\Hoare X, \dKRH^a$ and $\Smyth X, \dKRH^a$, without any further reference. Given a directed family of formal balls ${((F^-_i, F^+_i), r_i)}_{i \in I}$, ${(F^-_i, r_i)}_{i \in I}$ is a directed family of formal balls on the space of (sub)normalized superlinear previsions on $X$ with the $\dKRH^a$ quasi-metric, and ${(F^+_i, r_i)}_{i \in I}$ is a directed family of formal balls on the space of (sub)normalized sublinear previsions on $X$ with the $\dKRH^a$ quasi-metric. Let $(F^-, r)$ be the (naive) supremum of the former and $(F^+, r)$ be the (naive) supremum of the latter. Lemma~\ref{lemma:PX:naivesup} states that $(F^-, F^+)$ is a fork, provided we check that $X$ is sober. That is guaranteed by \cite[Proposition~4.1]{JGL:formalballs}, which states that every continuous Yoneda-complete quasi-metric space is sober. One easily checks that $((F^-, F^+), r)$ is the supremum of ${((F^-_i, F^+_i), r_i)}_{i \in I}$. That characterization of directed suprema, together with the analogous characterization of directed suprema of formal balls on the Plotkin powerdomain (Proposition~\ref{prop:PX:Ycomplete}) and the fact that $r_\AN$, $r_\DN$, $s_\AN$, $s_\DN$ are $1$-Lipschitz continuous, shows that $r_\ADN$ and $s_\ADN$ are $1$-Lipschitz continuous. Note that we apply Proposition~\ref{prop:PX:Ycomplete} to the Plotkin powerdomain over $\Val_\bullet X, \dKRH^a$, and that is legitimate since $\Val_\bullet X, \dKRH^a$ is continuous Yoneda-complete, by Theorem~\ref{thm:Vleq1:cont} or Theorem~\ref{thm:V1:cont}. We deal with the case of $r_\ADN$ to give the idea of the proof.
Let ${((Q_i, C_i), r_i)}_{i \in I}$ be a directed family of formal balls on the Plotkin powerdomain of $\Val_\bullet X$, with supremum $((Q, C), r)$. In particular, $r = \inf_{i \in I} r_i$, $(Q, r)$ is the supremum of the directed family ${(Q_i, r_i)}_{i \in I}$ and $(C, r)$ is the supremum of the directed family ${(C_i, r_i)}_{i \in I}$. Since $r_\DN$ and $r_\AN$ are $1$-Lipschitz continuous, $(r_\DN (Q), r)$ is the supremum of ${(r_\DN (Q_i), r_i)}_{i \in I}$ and $(r_\AN (C), r)$ is the supremum of ${(r_\AN (C_i), r_i)}_{i \in I}$. We have just seen that this implies that $(r_\ADN (Q, C), r) = ((r_\DN (Q), r_\AN (C)), r)$ is the supremum of ${((r_\DN (Q_i), r_\AN (C_i)), r_i)}_{i \in I}$, that is, of ${(r_\ADN (Q_i, C_i), r_i)}_{i \in I}$. We have $r_\ADN (s_\ADN (F^-, F^+)) = (r_\DN (s_\DN (F^-)), r_\AN (s_\AN (F^+))) = (F^-, F^+)$, so $r_\ADN$ and $s_\ADN$ form a $1$-Lipschitz continuous retraction. That cuts down to an isometry when we restrict ourselves to convex quasi-lenses, because $r_\DN$, $s_\DN$ and $r_\AN$, $s_\AN$ form isometries when restricted to convex non-empty compact saturated sets and convex non-empty closed sets respectively, by the final part of Proposition~\ref{prop:rsADN}. Since $\Val_\bullet X, \dKRH^a$ is continuous Yoneda-complete, Theorem~\ref{thm:P:cont} tells us that the Plotkin powerdomain over $\Val_\bullet X$, with the $\mP{(\dKRH^a)}$ quasi-metric, is continuous Yoneda-complete. Every $1$-Lipschitz continuous retract of a continuous Yoneda-complete quasi-metric space is continuous Yoneda-complete \cite[Theorem~7.1]{JGL:formalballs}, therefore the space of (sub)normalized forks over $X$ with the $\dKRH^a$ quasi-metric is continuous Yoneda-complete. \qed \begin{thm}[Algebraicity for forks] \label{thm:ADN:alg} Let $X, d$ be an algebraic Yoneda-complete quasi-metric space, with a strong basis $\mathcal B$. Let $a > 0$, and $\bullet$ be ``$\leq 1$'' or ``$1$''.
Let us also assume that $\Lform X$ has an almost open addition map, and that $X$ is compact in the case where $\bullet$ is ``$1$''. The space of all subnormalized (resp., normalized) forks on $X$ with the $\dKRH^a$ quasi-metric is algebraic Yoneda-complete. All the forks of the form $(F^-, F^+)$ where $F^- (h) = \min_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$ and $F^+ (h) = \max_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$, with $m \geq 1$, $\sum_{j=1}^{n_i} a_{ij} \leq 1$ (resp., $=1$) for every $i$, and where each $x_{ij}$ is a center point, are center points, and they form a strong basis, even when each $x_{ij}$ is constrained to lie in $\mathcal B$. \end{thm} \proof We start with the second statement. Let $(F_0^-, F_0^+)$ be a fork where $F_0^- (h) = \min_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$ and $F_0^+ (h) = \max_{i=1}^m \sum_{j=1}^{n_i} a_{ij} h (x_{ij})$, with $m \geq 1$, $\sum_{j=1}^{n_i} a_{ij} \leq 1$ (resp., $=1$) for every $i$, and where each $x_{ij}$ is a center point. In order to show that $(F_0^-, F_0^+)$ is a center point, we consider the open ball $B^{\dKRH^{a+}}_{((F_0^-, F_0^+), 0), <\epsilon}$, and we claim that it is Scott-open. For every directed family of formal balls ${((F^-_i, F^+_i), r_i)}_{i \in I}$, with supremum $((F^-, F^+), r)$ inside $B^{\dKRH^{a+}}_{((F_0^-, F_0^+), 0), <\epsilon}$, we have $r = \inf_{i \in I} r_i$, $\dKRH^a (F_0^-, F^-) < \epsilon-r$, and $\dKRH^a (F_0^+, F^+) < \epsilon-r$. Also, by Theorem~\ref{thm:ADN:cont}, $(F^-, r)$ is the supremum of the directed family ${(F^-_i, r_i)}_{i \in I}$. Since $F_0^-$ is a center point, $B^{\dKRH^{a+}}_{(F_0^-, 0), < \epsilon}$ is Scott-open, and contains $(F^-, r)$, hence contains some $(F^-_i, r_i)$; similarly, $B^{\dKRH^{a+}}_{(F_0^+, r), < \epsilon-r}$ contains some $(F^+_i, r_i)$, and we can take the same $i$, by directedness. It follows that $\dKRH^a (F_0^-, F^-_i) < \epsilon-r_i$ and $\dKRH^a (F_0^+, F_i^+) < \epsilon-r_i$.
Hence $((F_i^-, F_i^+), r_i)$ is in $B^{\dKRH^{a+}}_{((F_0^-, F_0^+), 0), <\epsilon}$. Finally, we show that the forks $(F_0^-, F_0^+)$ as above, where $x_{ij} \in \mathcal B$, form a strong basis. By Theorem~\ref{thm:V:alg} and Theorem~\ref{thm:V1:alg}, the (sub)normalized simple valuations $\sum_{i=1}^n a_i \delta_{x_i}$ where $x_i \in \mathcal B$ form a strong basis of $\Val_\bullet X$. Hence the quasi-lenses of the form $(\upc E, \dc E)$, where $E$ is a finite set of such simple valuations, form a strong basis of the Plotkin powerdomain over $\Val_\bullet X$, by Theorem~\ref{thm:P:alg}. By Lemma~\ref{lemma:retract:alg}, every (sub)normalized fork is a $\dKRH^a$-limit of a Cauchy-weightable net of points of the form $r_\ADN (\upc E, \dc E)$, and this is exactly what we need to conclude. \qed \begin{lem} \label{lemma:ADN:functor} Let $X, d$ and $Y, \partial$ be two continuous Yoneda-complete quasi-metric spaces, and $f \colon X, d \to Y, \partial$ be a $1$-Lipschitz continuous map. Let $a > 0$, and $\bullet$ be ``$\leq 1$'' or ``$1$''. Let us also assume that $\Lform X$ and $\Lform Y$ have almost open addition maps, and that $X$ and $Y$ are compact in the case where $\bullet$ is ``$1$''. The map $(F^-, F^+) \mapsto (\Prev f (F^-), \Prev f (F^+))$ is $1$-Lipschitz continuous from the space of normalized, resp.\ subnormalized forks on $X$ to the space of normalized, resp.\ subnormalized forks on $Y$, with the $\dKRH^a$ and $\KRH\partial^a$ quasi-metrics. \end{lem} \proof That $(\Prev f (F^-), \Prev f (F^+))$ satisfies Walley's condition is immediate. Therefore it is a fork, which is normalized or subnormalized, depending on $(F^-, F^+)$. By definition of $\dKRH^a$ as a maximum (Definition~\ref{defn:KRH:fork}), and since $\Prev f$ is $1$-Lipschitz by Lemma~\ref{lemma:Pf:lip}, the map $(F^-, F^+) \mapsto (\Prev f (F^-), \Prev f (F^+))$ is $1$-Lipschitz.
For every directed family ${((F_i^-, F_i^+), r_i)}_{i \in I}$ of formal balls on (sub)normalized forks, let $((F^-, F^+), r)$ be its supremum, as given in the last part of Theorem~\ref{thm:ADN:cont}: $(F^-, r)$ is the supremum of the directed family ${(F_i^-, r_i)}_{i \in I}$ in the space of formal balls over the space of (sub)normalized superlinear previsions, and $(F^+, r)$ is the supremum of ${(F_i^+, r_i)}_{i \in I}$. By Lemma~\ref{lemma:DN:functor}, $(\Prev f (F^-), r)$ is the supremum of ${(\Prev f (F_i^-), r_i)}_{i \in I}$, and by Lemma~\ref{lemma:AN:functor}, $(\Prev f (F^+), r)$ is the supremum of ${(\Prev f (F_i^+), r_i)}_{i \in I}$, so $((\Prev f (F^-), \Prev f (F^+)), r)$ is the supremum of ${((\Prev f (F_i^-), \Prev f (F_i^+)), r_i)}_{i \in I}$, by the last part of Theorem~\ref{thm:ADN:cont} again. \qed With Theorem~\ref{thm:ADN:alg} and Theorem~\ref{thm:ADN:cont}, we obtain the following. \begin{cor} \label{cor:ADN:functor} There is a functor from the category of continuous (resp., algebraic) Yoneda-complete quasi-metric spaces $X$ such that $\Lform X$ has an almost open addition map, and $1$-Lipschitz continuous maps, to the category of continuous (resp., algebraic) Yoneda-complete quasi-metric spaces, which sends every object $X, d$ to the space of subnormalized forks on $X$ with the $\dKRH^a$ quasi-metric ($a > 0$), and every $1$-Lipschitz continuous map $f$ to the map $(F^-, F^+) \mapsto (\Prev f (F^-), \Prev f (F^+))$. There is a functor from the category of compact continuous (resp., algebraic) Yoneda-complete quasi-metric spaces $X$ such that $\Lform X$ has an almost open addition map, and $1$-Lipschitz continuous maps, to the category of compact continuous (resp., algebraic) Yoneda-complete quasi-metric spaces, which sends every object $X, d$ to the space of normalized forks on $X$ with the $\dKRH^a$ quasi-metric ($a > 0$), and every $1$-Lipschitz continuous map $f$ to the map $(F^-, F^+) \mapsto (\Prev f (F^-), \Prev f (F^+))$.
\qed \end{cor} \begin{lem} \label{lemma:Ff:weak} Let $f \colon X \to Y$ be a continuous map between topological spaces. The map $(F^-, F^+) \mapsto (\Prev f (F^-), \Prev f (F^+))$ is continuous from the space of (sub)normalized forks on $X$ to the space of (sub)normalized forks on $Y$, both spaces being equipped with the weak topology. \end{lem} \proof The inverse image of $[k > a]_- = \{(H^-, H^+) \mid H^- (k) > a\}$ is $\{(F^-, F^+) \mid \Prev f (F^-) (k) > a\} = \{(F^-, F^+) \mid F^- (k \circ f) > a\} = [k \circ f > a]_-$, and similarly the inverse image of $[k > a]_+$ is $[k \circ f >a]_+$. \qed As in the proof of Proposition~\ref{prop:P:Vietoris}, we write $\Box_{\mathcal P} U$ for the set of quasi-lenses $(Q, C)$ such that $Q \subseteq U$, reserving the notation $\Box U$ for the set of non-empty compact saturated subsets $Q$ of $X$ such that $Q \subseteq U$. We use a similar convention with $\Diamond_{\mathcal P} U$ and $\Diamond U$. \begin{prop}[$\dKRH^a$ quasi-metrizes the weak topology, forks] \label{prop:ADN:weak} Let $X, d$ be a continuous Yoneda-complete quasi-metric space, $a > 0$, and $\bullet$ be ``$\leq 1$'' or ``$1$''. Let us also assume that $\Lform X$ has an almost open addition map, and that $X$ is compact in the case where $\bullet$ is ``$1$''. The $\dKRH^a$-Scott topology coincides with the weak topology on the space of (sub)normalized forks on $X$. \end{prop} \proof Let $Y$ denote the space of (sub)normalized forks on $X$, and $Z$ denote the space of quasi-lenses on $\Val_\bullet X$. By Theorem~\ref{thm:V:weak=dScott}, the $\dKRH^a$-Scott topology coincides with the weak topology on $\Val_\bullet X$, hence we may take either topology in order to define the quasi-lenses on $\Val_\bullet X$, hence the elements of $Z$. Also, $\Val_\bullet X, \dKRH^a$ is continuous Yoneda-complete by Theorem~\ref{thm:Vleq1:cont} (if $\bullet$ is ``$\leq 1$''), or by Theorem~\ref{thm:V1:cont} (if $\bullet$ is ``$1$'').
This allows us to apply Theorem~\ref{thm:P:Vietoris} and deduce that the $\mP {(\dKRH^a)}$-Scott and Vietoris topologies coincide on $Z$. The pair of maps $r_\ADN$, $s_\ADN^\bullet$ defines a retraction of $Z$ (with either topology) onto $Y$, with its weak topology, by Proposition~\ref{prop:rsADN}. By Theorem~\ref{thm:ADN:cont}, it also defines a $1$-Lipschitz continuous retraction of $Z, \mP {(\dKRH^a)}$ onto $Y, \dKRH^a$. Since the two spaces are continuous Yoneda-complete, hence standard, this defines a topological retraction of $Z$ onto $Y$, with its $\dKRH^a$-Scott topology. This implies that $s_\ADN^\bullet$ defines a topological embedding of $Y$, either with its weak topology or with its $\dKRH^a$-Scott topology, into $Z$, hence that the two topologies on $Y$ coincide, by Fact~\ref{fact:retract:two}. \qed \begin{rem} \label{rem:ADN} We recall that every locally compact, coherent space $X$ is such that addition is almost open on $\Lform X$. We may summarize some of our findings on forks in that useful situation as follows. Let $a > 0$, and $\bullet$ be ``$\leq 1$'' or ``$1$''. Let $X, d$ be a continuous Yoneda-complete quasi-metric space that is locally compact and coherent in its $d$-Scott topology, and also compact if $\bullet$ is ``$1$''. Let $Z$ be the space of subnormalized forks on $X$ if $\bullet$ is ``$\leq 1$'', of normalized forks if $\bullet$ is ``$1$''. Then: \begin{itemize} \item (Theorem~\ref{thm:ADN:cont}) $Z, \dKRH^a$ is continuous Yoneda-complete, and arises as a $1$-Lipschitz continuous retract of the Plotkin powerdomain over $\Val_\bullet X$ through $r_\ADN$ and $s_\ADN^\bullet$. That retraction cuts down to an isometry between $Z, \dKRH^a$ and the space of convex quasi-lenses over $\Val_\bullet X$, with the $\mP {(\dKRH^a)}$ quasi-metric. Directed suprema of formal balls are computed through naive suprema. 
\item (Theorem~\ref{thm:ADN:alg}) If $X, d$ is additionally algebraic, with a strong basis $\mathcal B$, then $Z, \dKRH^a$ is algebraic Yoneda-complete. \item (Proposition~\ref{prop:ADN:weak}) The $\dKRH^a$-Scott topology coincides with the weak topology on $Z$. \end{itemize} \end{rem} \section{Open Questions} \label{sec:open-questions} \begin{enumerate} \item Assume $X, d$ standard algebraic. Is $\Val_1 X, \dKRH$ algebraic? This is the case if $X$ has a root (Theorem~\ref{thm:V1:alg:root}), in particular if $d$ is bounded. Close results are that $\Val_1 X, \dKRH^a$ is algebraic for every $a > 0$ (Theorem~\ref{thm:V1:alg}), and that $\Val_{\leq 1} X, \dKRH$ is algebraic (Theorem~\ref{thm:V:alg}). Beware of Kravchenko's counterexample: the $\dKRH$-Scott topology on $\Val_1 X$ is in general different from the weak topology, and the coincidence with the weak topology in the rooted and $\dKRH^a$ cases followed more or less directly from algebraicity. \item The above theorems apply to spaces of normalized, or subnormalized valuations. Are there analogous results for the space $\Val X$ of \emph{all} continuous valuations? I doubt it, since, by analogy, the space of all measures on a Polish space is in general not metrizable. \item The nice properties we obtain on $\Val_1 X$ (algebraicity, continuity, retrieving the weak topology) were obtained either for $\dKRH$ under a rootedness assumption, or for $\dKRH^a$. This prompted us to study sublinear previsions, superlinear previsions, and forks only through the $\dKRH^a$ quasi-metric. I am pretty sure we would obtain analogous results with the $\dKRH$ quasi-metric in rooted cases. \item Conversely, I have not studied the $\dKRH^a$ quasi-metric on the Plotkin powerdomain of quasi-lenses. This should present no difficulty. \item In case $X, d$ is complete metric (not just quasi-metric), $\Val_1 X, \dKRH$ is a complete metric space (Theorem~\ref{thm:V1:complete}). 
We do not require any form of separability, and that is quite probably due to the fact that we consider valuations instead of measures. However, this does not say anything about a possible basis: is every probability valuation the ($\dKRH$-)limit of a Cauchy(-weightable) net of simple probability valuations in that case? Note that every probability valuation is the $\dKRH^a$-limit of a Cauchy-weightable net of simple probability valuations. \item The coupling Theorem~\ref{thm:coupling} is a form of linear duality theorem, in the style of Kantorovich and Rubinshte\u\i n, for quasi-metric spaces. One should pursue this further, since the coupling $\Gamma$ is not known to arise from a probability valuation on $X \times X$ yet. That extra effort was done in \cite{Gou-fossacs08b} for symcompact quasi-metric spaces, in their open ball topology. Can we relax the assumptions? \end{enumerate} \end{document}
\begin{document} \title{Capturing Polynomial Time on Interval Graphs} \author{Bastian Laubner\\ {\small Institut f\"ur Informatik}\\ {\small Humboldt-Universit\"at zu Berlin}\\ {\small [email protected]} } \date{ } \maketitle \begin{abstract} We prove a characterization of all polynomial-time computable queries on the class of interval graphs by sentences of fixed-point logic with counting. More precisely, it is shown that on the class of unordered interval graphs, any query is polynomial-time computable if and only if it is definable in fixed-point logic with counting. This result is one of the first establishing the capturing of polynomial time on a graph class which is defined by forbidden induced subgraphs. For this, we define a canonical form of interval graphs using a type of modular decomposition, which is different from the method of tree decomposition that is used in most known capturing results for other graph classes, specifically those defined by forbidden minors. The method might also be of independent interest for its conceptual simplicity. Furthermore, it is shown that fixed-point logic with counting is not expressive enough to capture polynomial time on the classes of chordal graphs or incomparability graphs. \end{abstract} \section{Introduction}\label{sec:intro} Capturing results in descriptive complexity match the expressive power of a logic with the computational power of a complexity class. The most important open question in this area is whether there exists a natural logic whose formulas precisely define those queries which are computable in polynomial time (\cclass{PTIME}). While Immerman and Vardi showed in 1982 that fixed-point logic captures \cclass{PTIME} under the assumption that a linear order is present in each structure (cf. Theorem \ref{thm:immermanVardi}), there is no logic which is currently believed to capture \cclass{PTIME} on arbitrary unordered structures. 
Despite that limitation, precise capturing results for \cclass{PTIME} in the unordered case can be obtained for restricted classes of structures. Since all relational structures of a fixed finite vocabulary can be encoded efficiently as simple graphs, capturing results on restricted graph classes are of particular interest in this context. This approach has been very fruitful in the realm of graph classes defined by lists of forbidden minors. Most of these results show that \cclass{PTIME} is captured by fixed-point logic with counting \logic{FP}C when restricting ourselves to one such class, such as planar graphs~\cite{gro98a}, graphs of bounded tree-width~\cite{gromar99}, or $K_5$-free graphs~\cite{grohe08definable}. In fact, Grohe has recently shown that \logic{FP}C captures \cclass{PTIME} on any graph class which is defined by a list of forbidden minors~\cite{grohe10fixed}. Given such deep results for classes of minor-free graphs, it is natural to ask if similar results can be obtained for graph classes which are defined by a (finite or infinite) list of forbidden induced subgraphs. Much less is known here. For starters, it is shown in \cite{grohe09fixed-point} and in Section \ref{sec:noCaptureCompGraphs} that a general capturing result analogous to Grohe's is not possible for \logic{FP}C on subgraph-free graph classes, such as chordal graphs or graphs whose complements are comparability graphs of partial orders. These two superclasses of interval graphs are shown to be a ceiling on the structural richness of graph classes on which capturing \cclass{PTIME} requires less effort than for general graphs. \begin{thm}\label{thm:notCaptureCompGraphs} \logic{FP}C fails to capture \cclass{PTIME} on the class of incomparability graphs and on the class of chordal graphs. \end{thm} The main result in this paper is a positive one affirming that \logic{FP}C captures \cclass{PTIME} on the class of interval graphs. 
This means that a subset $\mathcal K$ of the class of interval graphs is decidable in \cclass{PTIME} if and only if there is a sentence of \logic{FP}C defining $\mathcal K$. \begin{thm}\label{thm:captureIntGraphs} \logic{FP}C captures \cclass{PTIME} on the class of interval graphs. \end{thm} The result is shown by describing an \logic{FP}C-definable canonization procedure for interval graphs, which for any interval graph constructs an isomorphic copy on an ordered domain. The capturing result then follows from the Immerman-Vardi theorem. The proof of Theorem \ref{thm:captureIntGraphs} also has a useful corollary. \begin{cor}\label{cor:IntGraphsDefinable} The class of interval graphs is \logic{FP}C-definable. \end{cor} There has been persistent interest in the algorithmic aspects of interval graphs in the past decades, also spurred by their applicability to DNA sequencing (cf. \cite{zhang94algorithm}) and scheduling problems (cf. \cite{moehring84algorithmic}). In 1976, Booth and Lueker presented the first recognition algorithm for interval graphs~\cite{booth76testing} running in time linear in the number of vertices and edges, which they followed up by a linear-time interval graph isomorphism algorithm~\cite{lueker79linear}. These algorithms are based on a special data structure called \emph{PQ-trees}. Using so-called perfect elimination orderings, Hsu and Ma~\cite{hsu99fast} and Habib et~al.~\cite{habib00lex-bfs} later presented linear-time recognition algorithms based on simpler data structures. All these approaches have in common that they make inherent use of an underlying order of the graph, which is always available in \cclass{PTIME} computations as the order in which the vertices are encoded on the worktape. Particularly, the construction of a perfect elimination ordering by lexicographic breadth-first search needs to examine the children of a vertex in some fixed order.
However, such an ordering is not available when defining properties of the bare unordered graph structure by means of logic. Therefore, most of the ideas developed in these publications cannot be applied in the canonization of interval graphs in \logic{FP}C. We note that an algorithmic implementation of our method would be inferior to the existing linear-time algorithms for interval graphs. Given that our method must rely entirely on the inherent structure of interval graphs and not on an additional ordering of the vertices, we reckon that is the price to pay for the disorder of the graph structure. The main commonality of existing interval graph algorithms and the canonical form developed here is the construction of a \emph{modular decomposition} of the graph. Modules are subgraphs which interact with the rest of the graph in a uniform way, and they play an important algorithmic role in the construction of modular decomposition trees (cf. \cite{brandstaedt99graph}). As a by-product of the approach in this paper, we obtain a specific modular decomposition tree that is \logic{FP}C-definable. Such modular decompositions are fundamentally different from \emph{tree decompositions}, which are the ubiquitous tool of \logic{FP}C-canonization proofs for the aforementioned minor-free graph classes (cf. \cite{grohe08definable} for a survey of tree decompositions in this context). Since tree decompositions do not appear to be very useful for defining canonical forms on subgraph-free graph classes, showing the definability of modular decompositions is a contribution to the systematic study of capturing results on these graph classes. \section{Preliminaries and notation} We write $\ensuremath{\mathbb{N}}$ and $\ensuremath{\mathbb{N}}_0$ for the positive and non-negative integers, respectively. 
For $m,n \in \ensuremath{\mathbb{N}}_0$, let $[m,n] := \{\ell \in \ensuremath{\mathbb{N}}_0 \;\big|\; m \leq \ell \leq n \}$ be the \emph{closed interval of integers from $m$ to $n$}, and let $[n] := [1,n]$. Tuples of variables $(v_1, \ldots, v_k)$ are often denoted by $\vec v$ and their length by $|\vec v|$. A binary relation $<$ on a set $X$ is a \emph{strict partial order} if it is irreflexive and transitive. Two elements $x,y$ of a partially ordered set $X$ are called \emph{incomparable} if neither $x < y$ nor $y < x$. We call $<$ a \emph{strict weak order} if it is a strict partial order, and in addition, incomparability is an equivalence relation, i.e., whenever $x$ is incomparable to $y$ and $y$ is incomparable to $z$, then $x$ and $z$ are also incomparable. If $x,y$ are incomparable with respect to a strict weak order $<$, then $x < z$ implies $y < z$. Finally, a \emph{(strict) linear order} is a strict partial order in which no two elements are incomparable. If $<$ defines a strict weak order on $X$ and $\sim$ is the equivalence relation defined by incomparability, then $<$ induces a linear order on $X \!\diagup\!\! \sim$. \subsection{Graphs} All graphs in this paper are assumed to be finite, simple, and undirected unless explicitly stated otherwise. Let $G = (V,E)$ be a graph with vertex set $V$ and edge set $E$. Generally, $E$ is viewed as a binary relation. Sometimes, we also find it convenient to view edges $e$ as sets containing their two endpoints, as in $e = \{u,v\} \subseteq V$. For isomorphic graphs $G$ and $H$ we write $G \ensuremath{\cong} H$. For $W \subseteq V$ a set of vertices, $G[W]$ denotes the \emph{induced subgraph} of $G$ on $W$. The \emph{neighborhood} of a vertex $v \in V$, denoted $N(v)$, is the set of vertices adjacent to $v$ under $E$, \emph{including $v$ itself}. A subset $W\subseteq V$ is called a \emph{clique} of $G$ if $G[W]$ is a complete graph. 
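To make the order-theoretic definitions above concrete, the following brute-force check (a Python sketch of ours, not part of the formal development) verifies that a relation is a strict weak order, i.e., a strict partial order whose incomparability relation is transitive:

```python
from itertools import product

def is_strict_weak_order(elems, lt):
    """Check irreflexivity, transitivity, and transitivity of
    incomparability (so that incomparability is an equivalence)."""
    incomp = lambda x, y: not lt(x, y) and not lt(y, x)
    if any(lt(x, x) for x in elems):
        return False
    for x, y, z in product(elems, repeat=3):
        if lt(x, y) and lt(y, z) and not lt(x, z):
            return False
        if incomp(x, y) and incomp(y, z) and not incomp(x, z):
            return False
    return True

# Comparing pairs by their first coordinate only gives a strict weak
# order: the incomparability classes are the fibers of the first
# coordinate, and < induces a linear order on these classes.
pairs = [(1, 'a'), (1, 'b'), (2, 'a'), (3, 'c')]
lt = lambda p, q: p[0] < q[0]
classes = sorted({p[0] for p in pairs})  # induced linear order on classes
```

Note that ordinary comparison of tuples would be a linear order; projecting to one coordinate is what creates non-trivial incomparability classes.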
A clique $W$ of $G$ is \emph{maximal} if it is inclusion-maximal as a clique in $G$, i.e., if no vertex $v\in V \setminus W$ can be added to $W$ so that $W\cup \{v\}$ forms a clique. Since maximal cliques are central to the constructions in this paper, they will often just be called \emph{max cliques}. \emph{Cycles} in a graph are defined in the usual way. A graph $G$ is a \emph{split graph} if its vertex set can be partitioned into two sets $U$ and $V$ so that $G[U]$ is a clique and $V$ is an independent set. We write $G = (U\dot\cup V, E)$ to emphasize the partition. Similarly, if $G$ is a \emph{bipartite graph}, then we write $G = (U\dot\cup V, E)$ in order to emphasize that $U$ and $V$ are independent sets. The main result of this paper, Theorem \ref{thm:captureIntGraphs}, is about interval graphs, which we define and discuss now. The properties of interval graphs mentioned here are based on \cite{gilmore64characterization}. \begin{definition}[Interval graph] Let $\mathcal I$ be a finite collection of closed intervals $I_i = [a_i,b_i] \subset \ensuremath{\mathbb{N}}$. The graph $G_{\mathcal I} = (V,E)$ defined by $\mathcal I$ has vertex set $V = \mathcal I$ and edge relation $I_iI_j \in E :\Leftrightarrow I_i \cap I_j \neq \emptyset$. $\mathcal I$ is called an \emph{interval representation} of a graph $G$ if $G \ensuremath{\cong} G_{\mathcal I}$. A graph $G$ is an \emph{interval graph} if there is a collection of closed intervals $\mathcal I$ which is an interval representation of $G$. \end{definition} If $v \in V$, then $I_v$ denotes the interval corresponding to vertex $v$ in $\mathcal I$. An interval representation $\mathcal I$ for an interval graph $G$ is called \emph{minimal} if the set $\bigcup \mathcal I \subset \ensuremath{\mathbb{N}}$ is of minimum size over all interval representations of $G$.
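The definition of $G_{\mathcal I}$ can be sketched directly (a Python fragment with our own naming; vertices are indices into the interval list rather than the intervals themselves):

```python
from itertools import combinations

def interval_graph(intervals):
    """Build the intersection graph of a list of closed integer
    intervals given as (a, b) pairs; vertices are list indices,
    edges are frozensets of two indices."""
    n = len(intervals)
    edges = set()
    for i, j in combinations(range(n), 2):
        (a1, b1), (a2, b2) = intervals[i], intervals[j]
        if max(a1, a2) <= min(b1, b2):  # closed intervals intersect
            edges.add(frozenset((i, j)))
    return set(range(n)), edges

# Three intervals realizing a path: consecutive intervals share an
# endpoint, the outer two are disjoint.
V, E = interval_graph([(1, 2), (2, 3), (3, 4)])
```

Here vertices $0$ and $1$ and vertices $1$ and $2$ are adjacent, while $0$ and $2$ are not, so $G_{\mathcal I}$ is a path on three vertices.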
Any interval representation $\mathcal I$ can be converted into a minimal interval representation by removing a subset of $\ensuremath{\mathbb{N}}$ from all intervals in $\mathcal I$ (and then considering the remaining points in $\bigcup \mathcal I$ as an initial segment of $\ensuremath{\mathbb{N}}$). If $\mathcal I = \{I_i\}_{i \in [n]}$ is a minimal interval representation of $G$, then there is an intimate connection between the max cliques of $G$ and the sets $M(k) = \{ I_i \;\big|\; k \in I_i \}$ for $k\in \ensuremath{\mathbb{N}}$. In fact, if $M(k) \neq \emptyset$ for some $k$, then $M(k)$ forms a clique which is maximal by the minimality condition on $\mathcal I$. Conversely, if $M$ is a max clique of $G$, then $\bigcap_{v\in M} I_v = \{k\}$ for some $k\in\ensuremath{\mathbb{N}}$ by the minimality of $\mathcal I$, and $M(k) = M$. Thus, a connected graph $G$ is an interval graph if and only if its max cliques can be arranged as a path so that each vertex of $G$ is contained in consecutive max cliques. In this way, any minimal interval representation $\mathcal I$ of $G$ induces an ordering $\lhd_{\mathcal I}$ of $G$'s max cliques. We call a max clique $C$ a \emph{possible end} of $G$ if there is a minimal interval representation $\mathcal I$ of $G$ so that $C$ is $\lhd_{\mathcal I}$-minimal. Interval graphs are a classical example of an \emph{intersection graph class} of certain objects. Intersection graphs have as vertices a collection $\{o_1,\ldots, o_k\}$ of these objects with an edge between $o_i$ and $o_j$ if and only if $o_i \cap o_j \neq \emptyset$. Notice that any finite graph is the intersection graph of some collection of sets from $\ensuremath{\mathbb{N}}$, which is not the case when we restrict the allowed sets to intervals. If $\mathcal Y$ is an intersection graph class, $G =(V,E) \in \mathcal Y$, and $U$ is any subset of $V$, then $G[U]$ is also a member of $\mathcal Y$ since it is just the intersection graph of the objects in $U$. 
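The correspondence between the points of a minimal interval representation and the max cliques can be sketched as follows (a Python illustration of ours; minimality of the representation is assumed, since otherwise some sets $M(k)$ need not be maximal cliques):

```python
def max_cliques_of_representation(intervals):
    """For a minimal interval representation given as (a, b) pairs,
    the sets M(k) = {i : k in [a_i, b_i]} for the points k that are
    actually covered are exactly the max cliques, listed in the order
    induced along the line (consecutive duplicates are merged)."""
    points = sorted({k for (a, b) in intervals for k in range(a, b + 1)})
    cliques = []
    for k in points:
        M = frozenset(i for i, (a, b) in enumerate(intervals) if a <= k <= b)
        if M and (not cliques or M != cliques[-1]):
            cliques.append(M)
    return cliques

# A minimal representation on the points 1, 2, 3 with five intervals.
cliques = max_cliques_of_representation([(1, 1), (1, 2), (1, 3), (2, 3), (3, 3)])
```

On this representation the three max cliques appear in the order $\lhd_{\mathcal I}$, and each of the five vertices occupies a consecutive run of them, as the path characterization requires.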
Any graph class $\mathcal G$ that is closed under taking induced subgraphs can also be defined by a possibly infinite list of \emph{forbidden induced subgraphs}, by taking all those graphs not in $\mathcal G$ that are minimal with respect to the relation of being an induced subgraph. A complete infinite family of forbidden induced subgraphs defining the class of interval graphs is given by Lekkerkerker and Boland in \cite{lekkerkerker62representation}. Some further classes of graphs are important for this paper, and will be defined now. \begin{definition}[Chordal graph] A graph is called \emph{chordal} if all its induced cycles are of length 3. \end{definition} It is easy to show that every interval graph is chordal. Chordal graphs can alternatively be characterized by the property that the maximal cliques can be arranged in a forest $T$, so that for every vertex of the graph the set of max cliques containing it is connected in $T$ (cf. \cite{diestel06graphtheory}). \begin{definition}[Comparability graph] A graph $G = (V,E)$ is called a \emph{comparability graph} if there exists a strict partial ordering $<$ of its vertex set $V$ so that $uv \in E$ if and only if $u,v$ are comparable with respect to $<$. \end{definition} A graph is called an \emph{incomparability graph} if its complement is a comparability graph. It is a well-known fact that every interval graph is an incomparability graph. In fact, a graph is an interval graph if and only if it is a chordal incomparability graph \cite{gilmore64characterization, golumbic04algorithmic}. \subsection{Logics} We assume basic knowledge in logic, particularly of first-order logic \logic{FO}. All structures considered in this paper are graphs $G = (V,E)$, i.e., relational structures with universe $V$ and one binary relation $E$ which is assumed to be symmetric and irreflexive. This section will introduce the fixed-point logics \logic{FP} and \logic{FP}C. 
Detailed discussions of these logics can be found in \cite{ebbinghaus99finite, graedel07finite, immerman99descriptive}. If $\phi$ is a formula of some logic, we write $\phi(x_1, \ldots, x_k)$ to indicate that the free variables of $\phi$ are among $x_1, \ldots, x_k$. If $v_1, \ldots, v_k$ are vertices of a graph $G$, then $G \models \phi[v_1, \ldots, v_k]$ denotes that $G$ satisfies $\phi$ if $x_i$ is interpreted as $v_i$ for all $i \in [k]$. Furthermore, $\phi^G[v_1, \ldots, v_{k-1},\cdot]$ denotes the subset of vertices $v_k$ in $G$ for which $G \models \phi[v_1, \ldots, v_k]$, and similarly, $\phi^G[\cdot,\ldots,\cdot] = \{ \vec v \in V^k \;\big|\; G \models \phi[\vec v] \}$. \emph{Inflationary fixed-point logic} \logic{FP} is the extension of \logic{FO} by a fixed-point operator with inflationary semantics, which is defined as follows. Let $G=(V,E)$ be a graph, let $X$ be a \emph{relation variable} of arity $r$, and let $\vec x$ be a vector of $r$ variables. Let $\phi$ be a formula whose free variables may include $X$ as a free relation variable and $\vec x$ as free (vertex) variables. For any set $F \subseteq V^r$, let $\phi[F]$ denote the set of $r$-tuples $\vec v \in V^r$ for which $\phi$ holds when $X$ is interpreted as $F$ and $\vec v$ is assigned to $\vec x$. Let the sets $F_i$ be defined inductively by $F_0 = \phi[\emptyset]$ and $F_{i+1} = F_i \cup \phi[F_i]$. Since $F_i \subseteq F_{i+1}$ for all $i\in \mathbb N_0$, we have $F_k = F_{|V|^r}$ for all $k \geq |V|^r$. We call the $r$-ary relation $F_{|V|^r}$ the \emph{inflationary fixed-point} of $\phi$ and denote it by $\left( \operatorname{ifp}_{X \leftarrow \vec x} \phi \right)$. \logic{FP} denotes the extension of \logic{FO} with the $\operatorname{ifp}$-operator. 
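The inflationary iteration $F_0 = \phi[\emptyset]$, $F_{i+1} = F_i \cup \phi[F_i]$ can be sketched operationally (a Python fragment of ours, with the formula $\phi$ replaced by an arbitrary function from relations to relations):

```python
def inflationary_fixed_point(phi):
    """Iterate F_{i+1} = F_i ∪ phi(F_i) starting from F_0 = phi(∅).
    Since the stages only grow inside a finite universe, the
    iteration stabilizes after finitely many steps."""
    F = set()
    while True:
        F_next = F | phi(F)
        if F_next == F:
            return F
        F = F_next

# Transitive closure of an edge relation as an inflationary fixed
# point: phi(X) holds of (x, y) if xy is an edge, or (x, z) is in X
# for some z with zy an edge.
edges = {(1, 2), (2, 3), (3, 4)}
phi = lambda X: edges | {(x, y) for (x, z) in X for (w, y) in edges if z == w}
tc = inflationary_fixed_point(phi)
```

The transitive closure example is the standard illustration of why the $\operatorname{ifp}$-operator adds power over \logic{FO}, which cannot express it.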
In 1982, Immerman~\cite{immerman82bounds} and Vardi~\cite{vardi82complexity} showed that \logic{FP} characterizes \cclass{PTIME} on classes of ordered structures\footnote{In fact, Immerman and Vardi showed this capturing result using a different fixed-point operator for \emph{least fixed points}. Inflationary and least fixed points were shown to be equivalent by Gurevich and Shelah~\cite{gurevich85fixed-point} and Kreutzer~\cite{kreutzer04equivalence}. Also, Immerman and Vardi proved the result for general relational structures with an ordering, while we only state their theorem for graphs.}. \begin{thm}[Immerman-Vardi]\label{thm:immermanVardi} Let $\mathcal K$ be a class of ordered graphs, i.e., graphs with an additional binary relation $<$ which satisfies the axioms of a linear order. Then $\mathcal K$ is \cclass{PTIME}-decidable if and only if there is a sentence of \logic{FP} defining $\mathcal K$. \end{thm} When no ordering is present, then \logic{FP} is not expressive enough to capture \cclass{PTIME}; in fact, it cannot even decide the parity of the underlying vertex set's size. For the capturing result in this paper, we will also need a stronger logic which is capable of such basic counting operations. For this, let $G = (V,E)$ be a graph and let $N_V := [0,|V|] \subset \ensuremath{\mathbb{N}}_0$. Instead of $G$ alone, we consider the two-sorted structure $G^+ := (V, N_V, E, <)$ with universe $V \dot\cup N_V$, so that $E$ defines $G$ on $V$ and $<$ is the natural linear ordering of $N_V \subset \ensuremath{\mathbb{N}}_0$. Notice that $E$ is not defined for any numbers from $N_V$, and also, $<$ does not give any order on $V$. Now we define \logic{FP}-sentences on $G^+$ with the convention that all variables are implicitly typed. Thus, any variable $x$ is either a vertex variable which ranges over $V$ or a numeric variable which ranges over $N_V$. 
The connection between the vertex and the number sort is established by \emph{counting terms} of the form $\# x \, \phi$ where $x$ is a vertex variable and $\phi$ is a formula. $\# x \, \phi$ denotes the number from $N_V$ of vertices $v \in V$ so that $G \models \phi[v]$. \logic{FP}C is now obtained by extending \logic{FP} in the two-sorted framework with counting terms. We can encode numbers from $[0,|N_V|^k - 1]$ with $k$-tuples of number variables. With the help of the fixed-point operator, we can do some meaningful arithmetic on these tuples, such as addition, multiplication, and counting the number of tuples $\vec x$ satisfying a formula $\phi(\vec x)$ (cf. \cite{graedel07finite}). With its power to handle basic arithmetic, \logic{FP}C is already more powerful than \logic{FP} on unordered graphs. Still, it is not powerful enough to capture \cclass{PTIME} by a result of Cai, F\"urer and Immerman~\cite{cai92optimal}. This fact will be used in the next section to prove similar negative results for specific classes of graphs. For this, we still need the notion of a \emph{graph interpretation}, which is a restricted version of the more general concept of a syntactical interpretation. \begin{definition}\label{def:graphInterpretation} An \emph{$\ell$-ary graph interpretation} is a tuple $\Gamma = ( \gamma_V(\vec x), \gamma_\approx (\vec x, \vec y), \gamma_E (\vec x, \vec y) )$ of \logic{FO}-formulas so that $|\vec x| = |\vec y| = \ell$ and in any graph, $\gamma_\approx$ defines an equivalence relation on $\gamma^G_V[\cdot]$. If $G = (V,E)$ is a graph, then $\Gamma[G] = (V_\Gamma, E_\Gamma)$ denotes the graph with vertex set $V_\Gamma = \gamma^G_V[\cdot] \!\diagup\!\! \approx$ and edge set $E_\Gamma = \gamma^G_E [\cdot,\cdot] \!\diagup\!\! \approx^2$. \end{definition} \begin{lem}[Graph Interpretations Lemma]\label{lem:graphInterpretationsLemma} Let $\Gamma$ be an $\ell$-ary graph interpretation. 
Then for any \logic{FP}C-sentence $\phi$ there is a sentence $\phi^{-\Gamma}$ with the property that $G \models \phi^{-\Gamma} \Longleftrightarrow \Gamma[G] \models \phi$. \end{lem} \proof[Idea.] A proof of this fact for first-order logic can be found in \cite{ebbinghaus94mathematical}. It essentially consists in modifying occurrences of the edge relation symbol and quantification in $\phi$ with the right versions of $\gamma_V$, $\gamma_\approx$ and $\gamma_E$. Lemma \ref{lem:countEqClasses} below is needed in order to deal with counting quantifiers in a sensible manner. We omit the details. \qed \subsection{\logic{FP}C-definable canonization} Results that prove the capturing of \cclass{PTIME} on a certain graph class usually do so by showing that there is a logically definable canonization mapping from the graph structure to the number sort. Theorem \ref{thm:captureIntGraphs} will also be proved in this way, showing that there is an \logic{FP}C-formula $\varepsilon(x,y)$ with numeric variables $x$ and $y$ so that any interval graph $G = (V,E)$ is isomorphic to $\left( [|V|], \varepsilon^G [\cdot,\cdot] \right)$. Since the number sort $N_V$ is linearly ordered, the Immerman-Vardi Theorem \ref{thm:immermanVardi} then implies that any \cclass{PTIME}-computable property of interval graphs can be defined in \logic{FP}C. Cai, F\"urer and Immerman have observed that for graph classes which admit \logic{FP}C-definable canonization, a generic method known as the Weisfeiler-Lehman (WL) algorithm can be used to decide graph isomorphism in polynomial time (cf. \cite{cai92optimal}). Thus by Theorem \ref{thm:captureIntGraphs}, the WL algorithm also decides isomorphism of interval graphs. In the light of efficient linear-time isomorphism algorithms for interval graphs, the novelty here lies in the fact that a simple combinatorial algorithm decides interval graph isomorphism without specifically exploiting these graphs' inherent structure. 
The algorithm is generic in the sense that it also decides isomorphism of planar graphs, graphs of bounded treewidth, and many others. \subsection{Basic formulas} We finish this section by noting some basic constructions that can be expressed in \logic{FP}C. The existence of these formulas is essentially folklore, and variants of them can for example be found in \cite{graedel07finite}. These results lay the technical foundation for a higher-level description of the canonization procedure in Section \ref{sec:CapOnIntGraphs}. We omit their straight-forward proofs. \begin{lem}[Counting equivalence classes] \label{lem:countEqClasses} Suppose $\sim$ is an \logic{FP}C-definable equivalence relation on $k$-tuples of $V$, and let $\phi(\vec x)$ be an \logic{FP}C-formula with $|\vec x| = k$. Assume that $\sim$ has at most $|V|$ equivalence classes. Then there is an \logic{FP}C-counting term giving the number of equivalence classes $[\vec v]$ of $\sim$ such that $G\models \phi[\vec u]$ for some $\vec u \in [\vec v]$. \qed \end{lem} \proof The idea is to construct the sum slicewise for each cardinality of equivalence classes first, which gives us control over the number of classes rather than the number of elements in these classes. Let $\vec s, z, a,b$ be number variables. Define the relation $R(\vec s,\vec x)$ to hold if $\vec x$ is contained in a $\sim$-equivalence class of size $\vec s$ which contains some element making $\phi$ true. Using the fixed-point operator, it is then easy to define relation $S(\vec s, z) :\Leftrightarrow z \leq \sum_{\vec i \in [\vec s]} \# a \; \exists b \; \left( a < b \;\wedge\; b \cdot \vec i = \# \vec x\; R(\vec i,\vec x) \right)$ for $\vec s$ from $1$ to $|V|^k$. Then $\# z\; S(|V|^k, z)$ is the desired counting term. \qed Let $\vec y$ be a tuple of numeric variables and let $\phi(\vec x,\vec y)$ be some formula. 
Using the ordering on the number sort and fixing $\vec x$, $\phi[\vec x,\cdot]$ can be considered a 0-1-string of truth values of length $|N_V|^{|\vec y|}$. If $\vec x$ is a tuple of elements so that the string defined by $\phi[\vec x,\cdot]$ is the lexicographically least of all such strings, then $\phi[\vec x,\cdot]$ is called the \emph{lexicographic leader}. Observe that such $\vec x$ need not be unique. Lexicographic leaders are used to break ties during the inductive definition of a graph's ordered canonical form. \begin{lem}[Lexicographic leader] \label{lem:lexLeaderDefable} Let $\vec x$ be a tuple of variables taking values in $V^k\times N_V^\ell$ and let $\vec y$ be a tuple of number variables. Suppose $\phi(\vec x, \vec y)$ is an \logic{FP}C-formula and $\sim$ is an \logic{FP}C-definable equivalence relation on $V^k\times N_V^\ell$. Then there is an \logic{FP}C-formula $\lambda(\vec x, \vec y)$ so that for any $\vec v \in V^k\times N_V^\ell$, $\lambda^G[\vec v, \cdot]$ is the lexicographic leader among the relations $\left\{ \phi^G[\vec u, \cdot] \;\big|\; \vec u \sim \vec v \right\}$. \qed \end{lem} \proof To start, there is an \logic{FO}-formula $\psi(\vec x, \vec x\,')$ so that $\psi[\vec u, \vec v]$ holds if and only if $\phi[\vec u,\cdot]$ is lexicographically smaller than or equal to $\phi[\vec v, \cdot]$. Now $\lambda$ is given by \begin{equation*} \lambda(\vec x, \vec y) = \exists \vec z \left(\vec x \sim \vec z \,\wedge\, \phi(\vec z, \vec y) \,\wedge\, \forall \vec w (\vec w \sim \vec z \,\rightarrow\, \psi(\vec z, \vec w))\right) \end{equation*} \qed Finally, we will repeatedly encounter the situation where the disjoint union of given graphs has to be defined in a canonical way.
If $G_1 = ([v_1],E_1), \ldots, G_k = ([v_k],E_k)$ are (ordered) graphs in lexicographically ascending order, then we define their disjoint union $G = (V,E)$ on $V = \left[\sum_{i\in [k]} v_i \right]$ so that $G[\left[\sum_{j\in [i-1]} v_j + 1, \sum_{j\in [i]} v_j\right]]$ is order isomorphic to $G_i$ for all $i \in [k]$. It is easy to see that $G$ is uniquely well-defined, and we call it the \emph{lexicographic disjoint union} of $\{G_i\}_{i\in [k]}$. The following lemma says that lexicographic disjoint unions are \logic{FP}C-definable. \begin{lem}\label{lem:lexDisjUnionDefable} Suppose $\sim$ is an \logic{FP}C-definable equivalence relation on $V^k\times [|V|]^\ell$ and let $\upsilon(\vec x, y)$, $\varepsilon(\vec x, y,z)$ be \logic{FP}C-formulas with number variables $y,z$ defining graphs $\left( \upsilon^G[\vec v,\cdot], \varepsilon^G[\vec v, \cdot,\cdot] \right)$ on the numeric sort for each $\vec v \in V^k\times [|V|]^\ell$. Furthermore, assume that $\upsilon^G[\vec v,\cdot] = \upsilon^G[\vec v',\cdot]$ whenever $\vec v \sim \vec v'$, and that $\sum_{[\vec v] \in V \!\diagup\!\! \sim} |\upsilon^G[\vec v,\cdot]| \leq |V|$. Then there is an \logic{FP}C-formula $\omega(y,z)$ defining on $\left[ \sum_{[\vec v] \in V \!\diagup\!\! \sim} |\upsilon^G[\vec v,\cdot]|\right]$ the lexicographic disjoint union of the lexicographic leaders of $\sim$'s equivalence classes. \qed \end{lem} \proof Let $<$ be the strict weak order on $\sim$'s equivalence classes induced by the strict weak order on the classes' respective lexicographic $(\upsilon, \varepsilon)$-leader. Using Lemma \ref{lem:lexLeaderDefable}, it is easy to define $<$, using elements from $V^k\times [|V|]^\ell$ to identify equivalence classes. Using the fixed point-operator, define $\omega$ inductively starting with the $<$-least elements, saving those elements $\vec v$ from equivalence classes that have already been considered in a relation $R$. 
In each step, find the $<$-least elements $L$ in $V^k\times [|V|]^\ell$ which are not in $R$, calculate the number $n$ of equivalence classes contained in $L$, and then expand $\omega$ by $n$ copies of $\lambda[\vec v,\cdot]$ (which is the same for any $\vec v \in L$). \qed \section{Non-capturing results}\label{sec:noCaptureCompGraphs} This section contains some negative results showing that \logic{FP}C does not capture \cclass{PTIME} on a number of graph classes. In particular, this will be shown for bipartite graphs (Theorem \ref{thm:notCapOnBipGraphs}) using a simple construction and the machinery of graph interpretations (see Definition \ref{def:graphInterpretation}). Theorem \ref{thm:notCaptureCompGraphs} will then follow. We note that Theorem \ref{thm:notCapOnBipGraphs} has previously been obtained by Dawar and Richerby~\cite{dawar07power}. However, the method used here is more widely applicable and allows for stronger conclusions (see Remark \ref{rem:strongerConclusion}). The results in this section are all based on the following fact due to Cai, F\"urer, and Immerman~\cite{cai92optimal}. \begin{fact} \label{fact:CFI} There is a \cclass{PTIME}-decidable property $\mathcal P_{\mbox{\tiny CFI}}$ of graphs of degree $3$ which is not \logic{FP}C-definable. \end{fact} For any graph $G = (V,E)$, the \emph{incidence graph} $G^I = (V\dot\cup E,F)$ is defined by $ve \in F :\Leftrightarrow v \in V$ and $v \in e \in E$. $G^I$ is bipartite and it is straightforward to define a graph interpretation $\Gamma$ so that for any graph $G$ it holds that $\Gamma[G] \ensuremath{\cong} G^I$. Furthermore, given a graph $G^I$, it is a simple $\cclass{PTIME}$-computation to uniquely reconstruct $G$ from $G^I$. Also, since the two parts of a bipartite graph can be found in linear time, it is clear how to decide whether a given graph $H$ is isomorphic to $G^I$ for some graph $G$.
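The incidence-graph construction and the reconstruction claimed above can be sketched as follows (Python, with our own encoding: edges of $G$ are frozensets of their endpoints, and incidences are stored as ordered pairs whose second component is the edge side):

```python
def incidence_graph(V, E):
    """Build G^I = (V ∪ E, F): each former edge e becomes a new
    vertex, adjacent exactly to its two endpoints, so G^I is
    bipartite between the V-side and the E-side."""
    F = {(v, e) for e in E for v in e}
    return set(V) | set(E), F

def reconstruct_edges(F):
    """Recover G's edge set from the incidences: collect, for each
    edge-side vertex, its two incident endpoints."""
    endpoints = {}
    for v, e in F:
        endpoints.setdefault(e, set()).add(v)
    return {frozenset(ends) for ends in endpoints.values()}

V = {1, 2, 3}
E = {frozenset({1, 2}), frozenset({2, 3})}
nodes, F = incidence_graph(V, E)
```

In this encoding the bipartition is given away by the pair orientation; recovering it from the bare graph structure, as the text requires, takes the additional linear-time bipartiteness computation mentioned above.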
\begin{thm} \label{thm:notCapOnBipGraphs} \logic{FP}C does not capture \cclass{PTIME} on the class of bipartite graphs. \end{thm} \proof Recall the \cclass{PTIME}-decidable query $\mathcal P_{\mbox{\tiny CFI}}$ from Fact \ref{fact:CFI} and let $\mathcal P^I := \{ H \;\big|\; H \ensuremath{\cong} G^I$ for some $G\in \mathcal P_{\mbox{\tiny CFI}}\}$. By the remarks above, $\mathcal P^I$ is \cclass{PTIME}-decidable. So suppose that \logic{FP}C captures \cclass{PTIME} on the class of bipartite graphs. Then there is an \logic{FP}C-sentence $\phi$ such that for every bipartite graph $G$ it holds that $G\models \phi$ if and only if $G \in \mathcal P^I$. By an application of the Graph Interpretations Lemma \ref{lem:graphInterpretationsLemma} we then obtain a sentence $\phi^{-\Gamma}$ so that $G\models \phi^{-\Gamma}$ if and only if $G^I \ensuremath{\cong} \Gamma[G] \models \phi$. Thus, $\phi^{-\Gamma}$ defines $\mathcal P_{\mbox{\tiny CFI}}$, contradicting Fact \ref{fact:CFI}.\qed Theorem \ref{thm:notCaptureCompGraphs} is now a simple corollary of the following Lemma. \begin{lem}\label{lem:bipGraphIsCompGraph} Every bipartite graph $G= (U\dot\cup V,E)$ is a comparability graph. \end{lem} \proof A suitable partial order $<$ on $U\dot\cup V$ is defined by letting $u<v$ if and only if $u \in U$, $v\in V$, and $uv \in E$.\qed \begin{cor} \logic{FP}C does not capture \cclass{PTIME} on the class of incomparability graphs.\qed \end{cor} This tells us that being a comparability or incomparability graph alone is not sufficient for a graph $G$ to be uniformly \logic{FP}C-canonizable. Section \ref{sec:CapOnIntGraphs}, however, is going to show that this is possible if $G$ is both chordal \emph{and} an incomparability graph, i.e., an interval graph. In a way, this is not simply a corollary of a capturing result on a larger class of graphs, as it is shown now that \logic{FP}C does not capture \cclass{PTIME} on the class of chordal graphs, either. 
The construction is due to Grohe~\cite{grohe09fixed-point}. For any graph $G = (V,E)$, the \emph{split incidence graph} $G^S = (V\dot\cup E, \binom V2 \cup F)$ is given by $ve \in F :\Leftrightarrow v \in V$ and $v \in e \in E$. Notice that $G^S$ differs from $G^I$ only by the fact that all former vertices $v\in V$ form a clique in $G^S$. Given the similarity of $G^S$ and $G^I$, the analysis for split incidence graphs is completely analogous to the one for incidence graphs above. In particular, the class $\mathcal P^S := \{ H \;\big|\; H \ensuremath{\cong} G^S \text{ for some } G\in \mathcal P_{\mbox{\tiny CFI}}\}$ is \cclass{PTIME}-decidable and given a split graph $H$, the graph $G$ for which $G^S \ensuremath{\cong} H$ can be reconstructed in \cclass{PTIME} if such $G$ exists. Also, there is a graph interpretation $\Gamma'$ so that for any graph $G$: $\Gamma'[G] \ensuremath{\cong} G^S$. The proof of the following theorem is then clear, and the subsequent lemmas complete the analysis for chordal graphs. \begin{thm} \logic{FP}C does not capture \cclass{PTIME} on split graphs.\qed \end{thm} \begin{lem} Every split graph $G= (U\dot\cup V,E)$ is chordal.\qed \end{lem} \begin{cor}[Grohe~\cite{grohe09fixed-point}] \logic{FP}C does not capture \cclass{PTIME} on the class of chordal graphs.\qed \end{cor} \begin{remark} \label{rem:strongerConclusion} In fact, the proofs here admit even stronger conclusions: any \emph{regular logic} (cf. \cite{ebbinghaus94mathematical}) captures \cclass{PTIME} on the class of comparability graphs (respectively chordal graphs) if and only if it captures \cclass{PTIME} on the class of all graphs. \end{remark} Let us conclude this section by noting some non-capturing results for further intersection graph classes. A $t$-interval graph is the intersection graph of sets which are the union of $t$ intervals. 
By a result of Griggs and West \cite{griggs79extremal}, any graph of maximum degree 3 is a $2$-interval graph, so Fact \ref{fact:CFI} directly implies that \logic{FP}C does not capture \cclass{PTIME} on $t$-interval graphs for $t \geq 2$. In \cite{Uehara08simple}, Uehara gives a construction that implies such a non-capturing result for intersection graphs of axis-parallel line segments in the plane. It follows that \logic{FP}C does not capture \cclass{PTIME} on boxicity-$d$ graphs for $d \geq 2$, where a boxicity-$d$ graph is the intersection graph of axis-parallel boxes in $\mathbb R^d$ and the boxicity-$1$ graphs are just the interval graphs. \section{Capturing \cclass{PTIME} on interval graphs}\label{sec:CapOnIntGraphs} The goal of this section is to prove Theorem \ref{thm:captureIntGraphs} by canonization. We will exhibit a numeric \logic{FP}C-formula $\varepsilon(x,y)$ so that for any interval graph $G = (V,E)$, $([|V|], \varepsilon^G[\cdot,\cdot])$ defines a graph on the numeric sort of \logic{FP}C which is isomorphic to $G$. The canonization essentially consists of finding the lexicographic leader among all possible interval representations of $G$. For this, as discussed above, it is enough to bring the maximal cliques of $G$ into the right linear order. The first lemma shows that the maximal cliques of $G$ are \logic{FO}-definable. \begin{lem} \label{lem:maxCliquesDefable} Let $G = (V,E)$ be an interval graph and let $M$ be a maximal clique of $G$. Then there are vertices $u,v \in M$, not necessarily distinct, such that $M = N(u)\cap N(v)$. \end{lem} This is fairly intuitive. Consider the following minimal interval representation of a graph $G$. 
\begin{center} \begin{tikzpicture} \foreach \x in {1,...,3} { \fill[xshift=\x cm-1cm,rounded corners=1pt,fill=gray!20!white] (-3pt,-.3cm) rectangle (3pt,1cm); \draw[xshift=\x cm-1cm] (0,-.3cm) node[anchor=north] {\small $\x$}; } \filldraw (0,0) node[anchor=east] {\small $a$} circle (1.5pt) (2,.3) node[anchor=west] {\small $e$} circle (1.5pt); \draw[|-|] (0,.3) -- (1,.3) node[midway,above] {\small $b$}; \draw[|-|] (0,.7) -- (2,.7) node[near start,above] {\small $c$}; \draw[|-|] (1,0) -- (2,0) node[midway,above] {\small $d$}; \end{tikzpicture} \end{center} Max cliques $1$ and $3$ are precisely the neighborhoods of vertices $a$ and $e$, respectively. The vertex pairs $(a,a)$, $(a,b)$, and $(a,c)$ all define max clique $1$, and similarly three different vertex pairs define max clique $3$. Max clique $2$ is not the neighborhood of any single vertex, but it is uniquely defined by $N(b)\cap N(d)$. \proof[Proof of Lemma \ref{lem:maxCliquesDefable}] Let $\mathcal I$ be a minimal interval representation of $G$. First assume that $M$ is the $\lhd_{\mathcal I}$-least maximal clique. The lemma is trivial if $M$ is the only maximal clique of $G$, otherwise let $X$ be $M$'s $\lhd_{\mathcal I}$-successor. Since $M \neq X$ there is $v \in M\setminus X$, and $M$ is the only maximal clique of $G$ that $v$ is contained in (as $v$ is contained in $\lhd_{\mathcal I}$-consecutive max cliques). Hence, $M = N(v)$. A symmetric argument holds if $M$ is the $\lhd_{\mathcal I}$-greatest maximal clique. Now assume that $M$ is neither $\lhd_{\mathcal I}$-least nor $\lhd_{\mathcal I}$-greatest and let $X, Y$ be $M$'s immediate $\lhd_{\mathcal I}$-predecessor and successor, respectively. There exist $x \in M\setminus X$ and $y \in M\setminus Y$, and we claim that $N(x) \cap N(y) = M$. In fact, since any vertex in $M$ is contained both in $N(x)$ and $N(y)$, we have $M \subseteq N(x) \cap N(y)$. Now let $u \in N(x) \cap N(y)$ and write $I_u = [a,b]$. Let $k$ be the (unique) integer such that $M(k) = M$. 
Then $ux \in E$ implies $b \geq k$, and $uy \in E$ implies $a \leq k$. Thus, $k \in I_u$ and hence $u \in M$, which proves the claim. \qed Now, whether or not a vertex pair $(u,v) \in V^2$ defines a max clique is easily definable in \logic{FO}, as is the equivalence relation on $V^2$ of vertex pairs defining the same max clique. Lemma \ref{lem:maxCliquesDefable} tells us that \emph{all} max cliques can be defined by such vertex pairs. For any $v \in V$, let the \emph{span of $v$}, denoted $\operatorname{span}(v)$, be the number of max cliques of $G$ that $v$ is contained in. Since equivalence classes can be counted by Lemma \ref{lem:countEqClasses}, $\operatorname{span}(x)$ is \logic{FP}C-definable on the class of interval graphs by a counting term with $x$ as a free vertex variable. Representing max cliques by pairs of variables $(x,y) \in V^2$ in general allows us to treat max cliques as first-class objects that can be quantified over. For reasons of conceptual simplicity, the syntactic overhead which is necessary for working with this representation will not be made explicit in the remainder of this section. \subsection{Extracting information about the order of the maximal cliques} Now that we are able to handle maximal cliques, we would like to simply pick an end of the interval graph $G$ and work with the order which this choice induces on the rest of the maximal cliques. Of course, the choice of an end does not necessarily impose a linear order on the maximal cliques. However, the following recursive procedure turns out to recover all the information about the order of the max cliques induced by choosing an end of $G$. Let $\mathcal M$ be the set of maximal cliques of an interval graph $G = (V,E)$ and let $M \in \mathcal M$. 
The binary relation $\prec_M$ is defined recursively on the elements of $\mathcal M$ as follows: \begin{align*} \mbox{Initialization:}& \quad M \prec_M C \mbox{ for all } C \in \mathcal M \setminus \{M\}\\ C\prec_M D & \quad\mbox{if } \begin{cases} \exists E \in \mathcal M \mbox{ with } E \prec_M D \mbox{ and } (E\cap C) \setminus D \neq \emptyset \quad\mbox{or}\\ \exists E \in \mathcal M \mbox{ with } C \prec_M E \mbox{ and } (E\cap D) \setminus C \neq \emptyset. \tag{$\bigstar$} \end{cases}\label{eqn:indDef} \end{align*} The following interval representation of a graph $G$ illustrates this definition. \begin{center} \begin{tikzpicture} \foreach \x/\xtext in {1/M,2/C,3/D,4/X} { \fill[xshift=\x cm-1cm,rounded corners=1pt,fill=gray!20!white] (-3pt,-.3cm) rectangle (3pt,.9cm); \draw[xshift=\x cm-1cm] (0,-.3cm) node[anchor=north] {\small $\xtext$}; } \filldraw (0,0) circle (1.5pt) (1,0) circle (1.5pt) (3,0) circle (1.5pt); \draw[|-|] (0,.6) -- (2,.6) node[near start,above] {\small $\ell$}; \draw[|-|] (2,.3) -- (3,.3) node[midway,above] {\small $r$}; \end{tikzpicture} \end{center} Suppose we have picked max clique $M$; then $C \prec_M X$ and $D \prec_M X$ since $\ell \in C\cap D\cap M \setminus X$ and $M \prec_M X$ by the initialization step. In a second step, it is determined that $C \prec_M D$ since $r\in D\cap X \setminus C$ and $C \prec_M X$. So in this example, $\prec_M$ actually turns out to be a strict linear order on the max cliques of $G$. This is not the case in general, but $\prec_M$ will still be useful when $M$ is a possible end of $G$. The definition of $\prec_M$ seems natural to me for the task of ordering the max cliques of an interval graph. However, I am not aware of it appearing previously anywhere in the literature. It is readily seen how to define $\prec_M$ using the inflationary fixed-point operator, where maximal cliques are defined by pairs of vertices from $G$. 
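Both ingredients so far, the pair-definable max cliques of Lemma \ref{lem:maxCliquesDefable} and the inflationary computation of $\prec_M$, can be mimicked directly. The following Python sketch is mine, not from the text (with $N$ read as the closed neighborhood, which is what the lemma uses since $u,v\in M$); it is run on the four-clique example above, where $\ell$ spans $M,C,D$ and $r$ spans $D,X$:

```python
from itertools import combinations_with_replacement

def closed_nbhd(adj, v):
    return adj[v] | {v}

def definable_max_cliques(adj):
    """Maximal cliques of the form N[u] ∩ N[v]; by the lemma this finds
    every maximal clique of an interval graph."""
    cliques = set()
    for u, v in combinations_with_replacement(sorted(adj), 2):
        if u != v and v not in adj[u]:
            continue                       # u and v must share a clique
        C = closed_nbhd(adj, u) & closed_nbhd(adj, v)
        is_clique = all(y in adj[x] for x in C for y in C if x != y)
        # maximal: no vertex outside C is adjacent to all of C
        is_maximal = not any(C <= adj[w] for w in adj if w not in C)
        if is_clique and is_maximal:
            cliques.add(frozenset(C))
    return cliques

def prec(cliques, M):
    """Inflationary fixed point of the rules (★): pairs (C, D) with C ≺_M D."""
    rel = {(M, C) for C in cliques if C != M}      # initialization step
    changed = True
    while changed:
        changed = False
        for C in cliques:
            for D in cliques:
                if C == D or (C, D) in rel:
                    continue
                if any((E, D) in rel and (E & C) - D for E in cliques) or \
                   any((C, E) in rel and (E & D) - C for E in cliques):
                    rel.add((C, D))
                    changed = True
    return rel

# the example graph: l spans cliques M, C, D and r spans D, X
adj = {"l": {"p1", "p2", "r"}, "r": {"l", "p4"},
       "p1": {"l"}, "p2": {"l"}, "p4": {"r"}}
M, C, D, X = (frozenset(s) for s in ({"l", "p1"}, {"l", "p2"},
                                     {"l", "r"}, {"r", "p4"}))
cliques = sorted(definable_max_cliques(adj), key=sorted)
rel = prec(cliques, M)
```

The fixed point reproduces the two derivation steps described above ($C,D \prec_M X$ first, then $C \prec_M D$), and the result is the strict linear order $M \prec_M C \prec_M D \prec_M X$.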
\begin{remark} \label{rem:alreadySTC} In fact, $\prec_M$ can already be defined using a \emph{symmetric transitive closure} operator as follows: define an edge relation on $\mathcal M^2$ by connecting $(C,D)$ and $(E,F)$ if $E\prec_M F$ follows from $C \prec_M D$ by one application of (\ref{eqn:indDef}). Inspection of (\ref{eqn:indDef}) shows that this edge relation is symmetric, hence the graph is undirected. Now $C \prec_M D$ holds if and only if $(C,D)$ is reachable from $(M,X)$ for some max clique $X$. This observation is used in \cite{koebler10interval} to show that canonical forms of interval graphs can be computed using only logarithmic space. \end{remark} The following lemmas prove important properties of $\prec_M$. We say that a binary relation $R$ on a set $A$ is \emph{asymmetric}\index{asymmetric relation} if $ab \in R$ implies $ba \not\in R$ for all $a,b\in A$. In particular, asymmetric relations are irreflexive. \begin{lem} \label{lem:antisymImpTransitive} If $\prec_M$ is asymmetric, then it is transitive. Thus, if $\prec_M$ is asymmetric, then it is a strict partial order. \end{lem} \proof By a derivation chain of length $k$ we mean a finite sequence $X_0 \prec_M Y_0$, $X_1 \prec_M Y_1$, $\ldots$, $X_k \prec_M Y_k$ such that $X_0 = M$ and for each $i \in [k]$, the relation $X_i \prec_M Y_i$ follows from $X_{i-1} \prec_M Y_{i-1}$ by one application of (\ref{eqn:indDef}). Clearly, whenever it holds that $X \prec_M Y$ there is a derivation chain that has $X \prec_M Y$ as its last element. So assume that $\prec_M$ is asymmetric. Suppose $A \prec_M B \prec_M C$ and let a derivation chain $(L_0, \ldots, L_a)$ of length $a$ be given for $A \prec_M B$. The proof is by induction on $a$. If $a=0$, then $A = M$ and $A \prec_M C$ holds. For the inductive step, suppose $a = n$ and consider the second to last element $L_{n-1}$ in the derivation chain. 
There are two cases: \begin{itemize} \item $L_{n-1} = (X \prec_M B)$ and there is a vertex $v \in (X\cap A)\setminus B$: By induction it holds that $X \prec_M C$. Now if we had $v \in C$, the fact that $A \prec_M B$ would imply $C \prec_M B$, which contradicts asymmetry of $\prec_M$. Hence, $v\not\in C$ and one more application of (\ref{eqn:indDef}) yields $A \prec_M C$. \item $L_{n-1} = (A \prec_M X)$ and there is a vertex $v \in (X\cap B)\setminus A$: If $v \in C$, then we immediately get $A \prec_M C$. If $v\not\in C$, then $X \prec_M C$. Thus we can derive $A \prec_M X \prec_M C$ where the left derivation chain has length $n-1$. By induction, $A \prec_M C$ follows.\qed \end{itemize} \begin{lem}\label{lem:moduleImpIncomparable} Let $\mathcal C \subset \mathcal M$ be a set of max cliques with $M \not\in \mathcal C$. Suppose that for all $A\in \mathcal M\setminus\mathcal C$ and any $C,C'\in\mathcal C$ it holds that $A\cap C = A\cap C'$. Then the max cliques in $\mathcal C$ are mutually incomparable with respect to $\prec_M$. \end{lem} \proof Suppose for contradiction that there are $C,C'\in\mathcal C$ with $C\prec_M C'$. Let $M \prec_M Y_0$, $X_1 \prec_M Y_1$, $\ldots$, $X_k \prec_M Y_k$ be a derivation chain for $C\prec_M C'$ as in the proof of Lemma \ref{lem:antisymImpTransitive}. Since $X_k = C$, $Y_k = C'$, and $M\not\in \mathcal C$, there is a largest index $i$ so that either $X_i$ or $Y_i$ is not contained in $\mathcal C$. If $X_i \not\in \mathcal C$, then $X_{i+1}\in\mathcal C$ and $Y_i=Y_{i+1} \in\mathcal C$ and it holds that $X_i \cap X_{i+1}\setminus Y_{i+1} \neq \emptyset$. Consequently, $X_i \cap X_{i+1} \neq X_i \cap Y_{i+1}$, contradicting the assumption of the lemma. Similarly, if $Y_i \not\in \mathcal C$, then $Y_{i+1}\in\mathcal C$ and $X_i = X_{i+1}\in\mathcal C$ and it holds that $Y_i\cap Y_{i+1}\setminus X_{i+1} \neq \emptyset$. 
Thus, $Y_i \cap Y_{i+1} \neq Y_i \cap X_{i+1}$, again a contradiction.\qed In fact, there is a converse to Lemma \ref{lem:moduleImpIncomparable} when the set of $\prec_M$-incomparable max cliques is maximal. \begin{lem} \label{lem:sameIntersection} Suppose $M$ is a max clique of $G$ and $\mathcal C$ is a maximal set of $\prec_M$-incomparable max cliques. Let $D \in\mathcal M\setminus\mathcal C$. Then $D\cap C = D\cap C'$ for all $C,C' \in \mathcal C$. \end{lem} \proof We say that a max clique $A$ \emph{splits} a set of max cliques $\mathcal X$ if there are $X,Y \in \mathcal X$ so that $A\cap X \neq A\cap Y$. If in addition to splitting $\mathcal X$, $A$ is also $\prec_M$-comparable to all the elements in $\mathcal X$, then either $A\cap X\setminus Y\neq \emptyset$ or $A\cap Y\setminus X \neq \emptyset$ and one application of (\ref{eqn:indDef}) implies that $X$ and $Y$ are comparable. Suppose for contradiction that there is $X_1 \in\mathcal M\setminus\mathcal C$ splitting $\mathcal C$. We greedily grow a list of max cliques $X_i$ with the property that $X_i \in \mathcal M \setminus (\mathcal C \cup \{X_1,\ldots,X_{i-1}\})$ splits the set $\mathcal X_{i-1} := \mathcal C \cup \{X_1,\ldots,X_{i-1}\}$. The list $X_1, \ldots, X_k$ is complete when no further max clique splits the set $\mathcal X_k$. Suppose that $M \not\in \mathcal X_k$. For any $D\in\mathcal M\setminus\mathcal X_k$ we have $D\cap X = D\cap X'$ for all $X,X'\in\mathcal X_k$, so Lemma \ref{lem:moduleImpIncomparable} implies that the max cliques in $\mathcal X_k$ are $\prec_M$-incomparable. However, this is impossible since we assumed $\mathcal C \subsetneq \mathcal X_k$ to be maximal. Therefore, $M\in\mathcal X_k$. Now let $Y_{1},\ldots, Y_{\ell}$ be a shortest list of max cliques from $\mathcal X_k$ so that $Y_{\ell} = M$ and each $Y_{j}$ splits $\mathcal Y_{j-1} := \mathcal C \cup \{Y_{1},\ldots,Y_{{j-1}}\}$. 
\begin{claim}\label{claim:auxLemSameIntersection} For all $j\in [2,\ell]$, $Y_j\cap Y_{j-1} \neq Y_j \cap A$ for all $A\in \mathcal Y_{j-2}$. \end{claim} \proof[Proof of Claim \ref{claim:auxLemSameIntersection}] Consider $j = \ell$ and suppose that there is $A\in \mathcal Y_{\ell -2}$ with $Y_\ell \cap A = Y_\ell \cap Y_{\ell - 1}$. As $Y_\ell$ splits $\mathcal Y_{\ell-1}$, there must be some $B\in \mathcal Y_{\ell-2}$ such that $Y_\ell \cap B \neq Y_\ell \cap Y_{\ell - 1}$. But then $Y_\ell$ already splits $\mathcal Y_{\ell -2}$, so by eliminating $Y_{\ell - 1}$ we could make the list shorter. Inductively, suppose that the claim holds for all $i > j$, but not for $j$. Then there are $A,B \in \mathcal Y_{j-2}$ such that $Y_j \cap B \neq Y_j \cap Y_{j-1} = Y_j \cap A$, so $Y_j$ already splits $\mathcal Y_{j-2}$. Removing $Y_{j-1}$ from the list gives us a shorter list in which $Y_i$ still splits $\mathcal Y_{i-1}$ for all $i> j$ because of our inductive assumption. As we assumed our list to be shortest, this concludes the inductive step. \qedd{Claim \ref{claim:auxLemSameIntersection}} We now argue once again inductively backwards down the list $Y_{1}, \ldots, Y_{\ell}$ with the goal of showing that $Y_{j}$ is $\prec_M$-comparable to all max cliques in $\mathcal Y_{{j-1}}$. Certainly, this is true for $Y_{\ell} = M$ and $\mathcal Y_{\ell - 1}$. Assume that $Y_{j}$ is comparable to all max cliques in $\mathcal Y_{{j-1}}$ for $j\in [2,\ell]$. Since $Y_j \cap Y_{j-1} \neq Y_j \cap A$ for all $A\in \mathcal Y_{j-2}$ by Claim \ref{claim:auxLemSameIntersection}, it follows that $Y_{{j-1}}$ is comparable to all max cliques in $\mathcal Y_{j-2}$. Now $Y_{1}$ is comparable to all max cliques in $\mathcal C$. Since $Y_{1}$ splits $\mathcal C$, there are $C,C'\in \mathcal C$ so that $C\prec_M C'$, contradicting our assumption that the max cliques in $\mathcal C$ are $\prec_M$-incomparable. 
Therefore we conclude that there is no $D\in \mathcal M\setminus \mathcal C$ splitting $\mathcal C$.\qed Lemma \ref{lem:sameIntersection} says that incomparable max cliques interact with the rest of $\mathcal M$ in a uniform way. Let us make this notion more precise. A \emph{module} of $G$ is a set $S \subseteq V$ so that for any vertex $x \in V\setminus S$, $S$ is either completely connected to $x$ or completely disconnected from $x$. In other words, for all $u,v \in S$ and all $x \in V\setminus S$ it holds that $ux \in E \leftrightarrow vx \in E$. The next drawing illustrates the occurrence of a module in an interval graph. \begin{center} \begin{tikzpicture}[v/.style={circle,fill,inner sep=0pt,minimum size=4pt}] \foreach \x in {1,...,5} { \fill[xshift=\x cm-1cm,rounded corners=1pt,fill=gray!20!white] (-3pt,-.3cm) rectangle (3pt,1.3cm); \draw[xshift=\x cm-1cm] (0,-.3cm) node[anchor=north] {\small $\x$}; } \draw[thick,rounded corners=2pt] (.5,-.3cm) rectangle (3.7,.5) node[anchor=north east] {$S$}; \filldraw (0,0) circle (1.5pt) (1,0) circle (1.5pt) (3,.3) circle (1.5pt) (4,0) circle (1.5pt); \draw[|-|] (0,.7) -- (3,.7); \draw[|-|] (1,1) -- (4,1); \draw[|-|] (1,.3) -- (2,.3); \draw[|-|] (2,0) -- (3,0); \def\x{-1} \def\y{2}; \fill[gray!20!white,rounded corners=2pt] (\x+1.2,\y+.5) rectangle (\x+4.8,\y-.4) node[anchor=south east,black] {$S$}; \node[v] (1) at (\x+.7,\y+.2) {}; \node[v] (2) at (\x+1.9,\y+1) {}; \node[v] (3) at (\x+1.5,\y) {}; \node[v] (4) at (\x+2.5,\y+.2) {}; \node[v] (5) at (\x+3.5,\y) {}; \node[v] (6) at (\x+4.5,\y+.2) {}; \node[v] (7) at (\x+4.1,\y+1) {}; \node[v] (8) at (\x+5.3,\y+.2) {}; \draw (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (8); \draw (2) -- (4) -- (7) -- (5) -- (2) -- (6) -- (7) -- (3); \draw (2) -- (7); \end{tikzpicture} \end{center} \begin{cor}\label{cor:incompImpModule} Suppose $M$ is a max clique of $G$ so that $\prec_M$ is a strict partial order and $\mathcal C$ is a maximal set of incomparable max cliques. 
Then \begin{itemize} \item $S_{\mathcal C} := \bigcup_{C \in \mathcal C} C \setminus \bigcup_{D \in \mathcal M \setminus \mathcal C}D$ is a module of $G$, and \item $S_{\mathcal C} = \left\{ v \in \bigcup\mathcal C \;\big|\; \operatorname{span}(v) \leq |\mathcal C| \right\}$. \end{itemize} \end{cor} \proof Let $u,v \in S_{\mathcal C}$ and $x \in V\setminus S_{\mathcal C}$ and suppose that $ux\in E$, but $vx \not\in E$. There is a max clique $C \in \mathcal M$ with $u,x \in C$, but $v \not\in C$, and since $u \in S_{\mathcal C}$ we must have $C \in\mathcal C$. By the definition of $S_{\mathcal C}$, $x$ is also contained in some max clique $D \in \mathcal M\setminus \mathcal C$. Finally, let $C'$ be some max clique in $\mathcal C$ containing $v$, so $x \not\in C'$. Thus, $D \cap C \neq D \cap C'$, contradicting Lemma \ref{lem:sameIntersection}. For the second statement, let $v \in \bigcup\mathcal C$. If $v \in S_{\mathcal C}$, then clearly $\operatorname{span}(v) \leq |\mathcal C|$. But if $v \not\in S_{\mathcal C}$, then it is contained in some $D \in\mathcal M\setminus\mathcal C$, and by Lemma \ref{lem:sameIntersection} $v$ must also be contained in all max cliques in $\mathcal C$. Thus, $\operatorname{span}(v) > |\mathcal C|$, proving the statement. \qed This characterization of the modules occurring when defining the relations $\prec_M$ will be central in the canonization procedure of $G$. There is another corollary of Lemma \ref{lem:sameIntersection} which proves that $\prec_M$ has a particularly nice structure. \begin{cor}\label{cor:strictOrderImpWeakOrder} If $M$ is a max clique of $G$ so that $\prec_M$ is a strict partial order, then $\prec_M$ is a strict weak order. \end{cor} \proof We need to prove that $\prec_M$-incomparability is a transitive relation on $G$'s max cliques. So let $(A,B)$ and $(B,C)$ be incomparable pairs with respect to $\prec_M$. 
Let $\mathcal C_{AB}$ and $\mathcal C_{BC}$ be maximal sets of incomparables containing $\{A,B\}$ and $\{B,C\}$, respectively. By Lemma \ref{lem:sameIntersection}, we have $D\cap X = D\cap B = D\cap Y$ for every $X,Y\in \mathcal C_{AB} \cup \mathcal C_{BC}$ and $D\in \mathcal M \setminus (\mathcal C_{AB} \cup \mathcal C_{BC})$. As $M \not\in \mathcal C_{AB} \cup \mathcal C_{BC}$, Lemma \ref{lem:moduleImpIncomparable} implies that the max cliques in $\mathcal C_{AB} \cup \mathcal C_{BC}$ are $\prec_M$-incomparable, so in particular $A$ and $C$ are incomparable with respect to $\prec_M$.\qed At this point, let us put the pieces together and show that picking an arbitrary max clique $M$ as an end of $G$ and defining $\prec_M$ is a useful way to obtain information about the structure of $G$. \begin{lem} \label{lem:partialOrderEqEnd} Let $M$ be a max clique of an interval graph $G$. Then $\prec_M$ is a strict weak order if and only if $M$ is a possible end of $G$. \end{lem} \proof If $M$ is a possible end of $G$, then let $\mathcal I$ be a minimal interval representation of $G$ which has $M$ as its first clique. Let $\lhd_{\mathcal I}$ be the linear order $\mathcal I$ induces on the max cliques of $G$. In order to show asymmetry it is enough to observe that, as relations, we have $\prec_M \subseteq \lhd_{\mathcal I}$. It is readily verified that this holds true of the initialization step in the recursive definition of $\prec_M$, and that whenever max cliques $C,D$ satisfy (\ref{eqn:indDef}) with $\prec_M$ replaced by $\lhd_{\mathcal I}$, then it must hold that $C \lhd_{\mathcal I} D$. This shows asymmetry, and by Lemma \ref{lem:antisymImpTransitive} and Corollary \ref{cor:strictOrderImpWeakOrder} $\prec_M$ is a strict weak order. Conversely, suppose $\prec_M$ is a strict weak order. The first aim is to turn $\prec_M$ into a linear order. 
Let $\mathcal C$ be a maximal set of incomparable max cliques, and recall the set $S_{\mathcal C} = \bigcup_{C \in \mathcal C} C \setminus \bigcup_{D \in \mathcal M \setminus \mathcal C}D$. Since $G[S_{\mathcal C}]$ is an interval graph, we can pick an interval representation $\mathcal I_{S_{\mathcal C}}$ for $G[S_{\mathcal C}]$. The set of max cliques of $G[S_{\mathcal C}]$ is given by $\left\{ C \cap S_{\mathcal C} \;\big|\; C\in\mathcal C\right\}$, and since $S_{\mathcal C}$ is a module, $C\cap S_{\mathcal C} \neq C' \cap S_{\mathcal C}$ for any $C\neq C'$ from $\mathcal C$. Thus, $\mathcal I_{S_{\mathcal C}}$ induces a linear order $\lhd_{\mathcal C}$ on the elements of $\mathcal C$. Now let $C \lhd_M D$ if and only if $C \prec_M D$, or $C, D \in \mathcal C$ for some maximal set of incomparables $\mathcal C$ and $C \lhd_{\mathcal C} D$. This is a strict linear order since $\prec_M$ is a strict weak order. We claim that $\lhd_M$ is an ordering of the max cliques which is isomorphic to the linear order induced by some interval representation of $G$. This will imply that $M$ is a possible end of $G$. In order to prove the claim, it is enough to show that each vertex $v\in V$ is contained in consecutive max cliques. Suppose for contradiction that there are max cliques $A \lhd_M B \lhd_M C$ and $v$ is contained in $A$ and $C$, but not in $B$. Certainly, this cannot be the case if $A,B,C$ are incomparable with respect to $\prec_M$, so assume without loss of generality that $A \prec_M B$. Now, since $v \in (A\cap C)\setminus B$, (\ref{eqn:indDef}) implies that $C \prec_M B$, which contradicts the asymmetry of $\lhd_M$. \qed \begin{remark} \label{remark:doesntWorkGenerally} The recursive definition of $\prec_M$ and Lemma \ref{lem:antisymImpTransitive} through Corollary \ref{cor:strictOrderImpWeakOrder} do not depend on $G$ being an interval graph. 
However, the proof of Lemma \ref{lem:partialOrderEqEnd} shows that $\prec_M$ only turns out to be a partial order if the max cliques can be brought into a linear order, modulo the occurrence of modules. In particular, defining $\prec_M$ in a general chordal graph does not yield any useful information if the graph's tree decomposition into max cliques requires a tree vertex of degree 3 or more, which is the case for all chordal graphs which are not interval graphs. \end{remark} \subsection{Canonizing when $\prec_M$ is a linear order}\label{subsec:canLinOrder} Since $\prec_M$ is \logic{FP}-definable for any max clique $M$, and since asymmetry of $\prec_M$ is \logic{FO}-definable, Lemma \ref{lem:partialOrderEqEnd} gives us a way to define possible ends of interval graphs in \logic{FP}. Moreover, if $M$ is a possible end of $G = (V,E)$, then $\prec_M$ contains precisely the ordering imposed on the max cliques of $G$ by the choice of $M$ as the first clique. First, suppose that $G = (V,E)$ is an interval graph and $\prec$ is a linear order on the max cliques which is induced by an interval representation of $G$. Define the binary relation $<^G$ on the vertices of $G$ as follows. For $x \in V$, let $A_{x}$ be the $\prec$-least max clique of $G$ containing $x$. Then let \begin{equation*} x <^G y :\Leftrightarrow \begin{cases} A_{x} \prec A_{y}, \mbox{ or}\\ A_{x} = A_{y} \mbox{ and } \operatorname{span}(x) < \operatorname{span}(y). \end{cases} \end{equation*} It is readily verified that $<^G$ is a strict weak order on $V$, and if $x,y$ are incomparable, then $N(x) = N(y)$. Now it is easy to canonize $G$: if $[v]$ denotes the equivalence class of vertices incomparable to $v$, then $[v]$ is represented by the numbers from the interval $[a+1 ,a + |[v]|]$, where $a$ is the number of vertices which are strictly $<^G$-smaller than $v$. 
Since all vertices in $[v]$ have precisely the same neighbors in $G\setminus [v]$ and $[v]$ forms a clique, it is also clear how to define the edge relation on the number sort. Now if $G$ is any interval graph and $M$ is a possible end, we can still define an ordering for those vertices that are not contained in a module. Let $\sim^G_M$ be the equivalence relation on $V$ for which $x \sim^G_M y$ if and only if $x=y$ or there is a nonsingular maximal set of incomparables $\mathcal C$ with respect to $\prec_M$ so that $x,y \in S_{\mathcal C}$. Denote the equivalence class of $x \in V$ under $\sim^G_M$ by $[x]$, and define the edge relation $E_M$ of the graph $G_M = (V \!\diagup\!\! \sim^G_M, E_M)$ by $[u][v] \in E_M :\Leftrightarrow \exists x\in [u], y\in [v]$ s.t. $xy \in E$. It follows directly from the definition of $\sim^G_M$ that if $A$ is a max clique which is $\prec_M$-comparable to all other max cliques in $G$, then all $v \in A$ are in singleton equivalence classes $[v] = \{v\}$. If $\mathcal C$ is a nonsingular maximal set of incomparables, then there is precisely one max clique $C$ in $G_M$ which contains all the equivalence classes associated with $\mathcal C$, i.e., $C = \left\{ [v] \;\big|\; v \in \bigcup \mathcal C \right\}$. Thus $\prec_M$ induces a strict linear order on the max cliques of $G_M$. In fact, this shows that $G_M$ is an interval graph with a valid interval representation induced by $\prec_M$. \subsection{Canonizing general interval graphs} \label{subsec:generalIntGraphs} What is left is to deal with the sets $S_{\mathcal C}$ coming from maximal sets of incomparables. Let $P' = \left\{ (M,n) \;\big|\; M \in \mathcal M, n \in [|V|] \right\}$. For each $(M,n) \in P'$ define $V_{M,n}$ as the set of vertices of the connected component of $V \setminus \left\{ v \in V \;\big|\; \operatorname{span}(v) > n \right\}$ which intersects $M$ (if non-empty). Notice that $M_n := M \cap V_{M,n}$ is a max clique of $G[V_{M,n}]$. 
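Concretely, computing $V_{M,n}$ amounts to deleting the high-span vertices and taking the component that meets $M$ (a single component, since the surviving vertices of the clique $M$ are pairwise adjacent). A small Python sketch with my own, hypothetical helper names, using the path graph whose max cliques are $\{1,2\},\{2,3\},\{3,4\}$:

```python
def spans(cliques):
    """span(v) = number of maximal cliques containing v."""
    vs = set().union(*cliques)
    return {v: sum(1 for C in cliques if v in C) for v in vs}

def V_Mn(adj, cliques, M, n):
    """Vertices of the component of G minus {v : span(v) > n} meeting M."""
    sp = spans(cliques)
    allowed = {v for v in adj if sp[v] <= n}
    comp = set(M) & allowed
    frontier = list(comp)
    while frontier:                      # plain graph search inside `allowed`
        v = frontier.pop()
        for w in adj[v]:
            if w in allowed and w not in comp:
                comp.add(w)
                frontier.append(w)
    return comp

cliques = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

For the path, the middle vertices $2$ and $3$ have span $2$, so $V_{M,1}$ with $M = \{1,2\}$ keeps only vertex $1$, while $V_{M,2}$ is the whole graph.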
Finally, let $P$ be the set of those $(M,n) \in P'$ for which defining $\prec_{M_n}$ in $G[V_{M,n}]$ yields a strict partial order of $G[V_{M,n}]$'s max cliques. It is immediate from Corollary \ref{cor:incompImpModule} that for any maximal set of incomparable max cliques $\mathcal C$, $S_{\mathcal C} = \bigcup_{C\in\mathcal C} V_{C,|\mathcal C|}$. In this situation, for any $C\in \mathcal C$, the set $V_{C,|\mathcal C|}$ defines a component of $S_{\mathcal C}$, and $(C,|\mathcal C|) \in P$ if and only if $C \cap S_{\mathcal C}$ is a possible end of (one of the components of) $G[S_{\mathcal C}]$. This gives us enough structure to perform canonization. \proof[Proof of Theorem \ref{thm:captureIntGraphs}] We define the relation $\varepsilon(M,n,x,y)$ inductively, where $(M,n) \in P$ and $x,y$ are number variables. $([|V_{M,n}|], \varepsilon^G[M,n,\cdot,\cdot])$ will be an isomorphic copy of $G[V_{M,n}]$ on the numeric sort. To this end, start defining $\varepsilon$ for all $(M,1) \in P$, then for all $(M,2) \in P$, and so on, up to all $(M,|V|) \in P$. Suppose we want to define $\varepsilon$ for $(M,n) \in P$, then first compute the strict weak order $\prec_{M_n}$ on the interval graph $G[V_{M,n}]$. Consider any nonsingular maximal set of incomparables $\mathcal C$ and let $m := |\mathcal C|$. Let $H_1, \ldots, H_h$ be a list of the components of $G[S_{\mathcal C}]$ and let $H_i$ be such a component. By the above remarks, there exist at least two $C \in \mathcal C$ so that $V_{C,m} = H_i$ and $(C,m) \in P$. Notice that by the definitions of $P$ and $\prec_{M_n}$, we have $m < n$ and therefore all $\varepsilon^G[C,m,\cdot,\cdot]$ with $C \in \mathcal C$ have already been defined. Let $\sim$ be the equivalence relation on $P \cap (\mathcal C \times \{m\})$ defined by $(C,m) \sim (C',m) :\Leftrightarrow V_{C,m} = V_{C',m}$. 
Using Lemma \ref{lem:lexDisjUnionDefable}, we obtain the lexicographic disjoint union $\omega_{\mathcal C}(x,y)$ of the lexicographic leaders of $\sim$'s equivalence classes. Finally, let $<^{G_{M,n}}_M$ be the strict partial order on $V_{M,n} \!\diagup\!\! \sim^{G_{M,n}}_M$ defined above. Let $c_1, \ldots, c_k$ be the list of nonsingular equivalence classes of $\sim^{G_{M,n}}_M$. Each $c_i$ is associated with a unique maximal set of incomparables $\mathcal C_i$, and $c_i = S_{\mathcal C_i}$ as sets. We aim at canonizing $G_{M,n}$ using $<^{G_{M,n}}_M$, inserting the graph defined by $\omega_{\mathcal C_i}(x,y)$ in place of each $c_i$. Here is how: each $[v] \in V_{M,n} \!\diagup\!\! \sim^{G_{M,n}}_M$ is represented by the interval $[a+1, a + |[v]|]$, where $a$ is the number of vertices in equivalence classes strictly $<^{G_{M,n}}_M$-smaller than $[v]$. Since all vertices in $[v]$ have the same neighbors in all of $G\setminus [v]$, it is clear how to define the edge relation between $[v]$ and $G\setminus [v]$. If $[v]$ is not a singleton set, then $c_i = [v]$ for some $i$ and the edge relation on $c_i$ is given by $\omega_{\mathcal C_i}(x,y)$. It is clear from the construction that $\left( [|V_{M,n}|], \varepsilon^G[M,n,\cdot,\cdot] \right) \ensuremath{\cong} G[V_{M,n}]$. Also, $\varepsilon(M,n,x,y)$ can be defined in \logic{FP}C for all $(M,n) \in P$ using a fixed-point operator iterating $n$ from $1$ to $|V|$. Finally, let $\varepsilon(x,y)$ be the lexicographic disjoint union of the lexicographic leaders canonizing the components of $G$, each of which is defined by some $(M,|V|) \in P$. 
Then $\left( [|V|], \varepsilon^G[\cdot, \cdot] \right) \ensuremath{\cong} G$, which concludes the canonization of $G$.\qed \proof[Proof of Corollary \ref{cor:IntGraphsDefinable}] We claim that for the recognition of interval graphs, it is enough to check that (a) every edge of the graph $G$ is contained in some max clique which is defined by the joint neighborhood of some pair of vertices and (b) the canonization procedure as described in Section \ref{sec:CapOnIntGraphs} succeeds to produce a graph of the same size on the number sort. Certainly, any interval graph satisfies both conditions by the results in this paper. Conversely, assume that $G = (V,E)$ satisfies these conditions, and let $H = ([|V|],\varepsilon)$ be the ordered graph defined by the canonization procedure. We can choose a bijection $\phi: V \rightarrow [|V|]$ by breaking all ties during the definition of $H$ arbitrarily. We claim that $\phi$ is an isomorphism between $G$ and $H$. Let $u,v \in V$ and suppose $uv \not\in E$. Since the sets of max cliques respectively containing $u$ and $v$ are disjoint, at no point during the canonization procedure is an edge defined between numbers corresponding to $u$ and $v$, and hence $\phi(u)\phi(v) \not\in \varepsilon$. If $uv \in E$, however, then $u,v$ are both contained in some definable max clique $C$ which is forced to appear in the relations $\prec_M$ defined by (\ref{eqn:indDef}). It is easy to see then that also $\phi(u)\phi(v) \in \varepsilon$, and hence $\phi$ is an isomorphism. Finally, observe that any graph defined by the canonization procedure is an interval graph. \section{Conclusion} We have proved that the class of interval graphs admits \logic{FP}C-definable canonization. Thus, \logic{FP}C captures \cclass{PTIME} on the class of interval graphs, which was shown not to be the case for either of the two obvious superclasses of interval graphs: chordal graphs and incomparability graphs. 
The result also implies that the combinatorial Weisfeiler-Lehman algorithm solves the isomorphism problem for interval graphs. As noted in Remark \ref{rem:alreadySTC}, the methods in this paper can be used to define \cclass{LOGSPACE}-computable canonical forms for interval graphs (cf. \cite{koebler10interval}). The \logic{FP}C-canonization of the modular decomposition tree from Section \ref{subsec:generalIntGraphs} is then replaced by Lindell's \cclass{LOGSPACE}-tree canonization algorithm~\cite{lindell92logspace}. This implies that the set of logspace-computable intrinsic properties of interval graphs is recursively enumerable. However, first-order logic with the symmetric transitive closure operator does not capture \cclass{LOGSPACE} on interval graphs: any rooted tree is converted into an interval graph by connecting each vertex to all its descendants. Arguing as in Section \ref{sec:noCaptureCompGraphs}, such a capturing result would imply an analogous capturing result on trees, which is ruled out by the work of Etessami and Immerman~\cite{etessami95tree}. Among the graph classes considered in this paper, the only class whose status is not settled with respect to \logic{FP}C-canonization is the class of chordal comparability graphs. While it appears that the methods employed for chordal incomparability graphs here do not carry over (see Remark \ref{remark:doesntWorkGenerally}), I believe I have found a different solution, which will be contained in the journal version of this paper. So far, little is known about logics capturing complexity classes on classes of graphs which are defined by a (finite or infinite) list of forbidden induced subgraphs. This paper makes a contribution in this direction. It seems that chordal graphs, even though they do not admit \logic{FP}C-canonization themselves, can often be handled effectively in fixed-point logic as soon as additional properties are satisfied (being a line graph, incomparability or comparability graph). 
It would be instructive to unify these properties. In this context, I would also like to point to Grohe's conjecture~\cite{grohe09fixed-point} that \logic{FP}C captures \cclass{PTIME} on the class of claw-free chordal graphs. \end{document}
\begin{document} \title[On $(\sigma,\delta)$-skew McCoy modules]{On $(\sigma,\delta)$-skew McCoy modules} \date{\today} \subjclass[2010]{16S36, 16U80} \keywords{McCoy module, $(\sigma,\delta)$-skew McCoy module, semicommutative module, Armendariz module, $(\sigma,\delta)$-skew Armendariz module, reduced module} \author[M. Louzari]{Mohamed Louzari} \address{Department of Mathematics\\ Faculty of Sciences \\ Abdelmalek Essaadi University\\ BP. 2121 Tetouan, Morocco} \email{[email protected]} \author[L. Ben Yakoub]{L'moufadal Ben Yakoub} \address{Department of Mathematics\\ Faculty of Sciences \\ Abdelmalek Essaadi University\\ BP. 2121 Tetouan, Morocco} \email{[email protected]} \begin{abstract}Let $(\sigma,\delta)$ be a quasi-derivation of a ring $R$ and $M_R$ a right $R$-module. In this paper, we introduce the notion of $(\sigma,\delta)$-skew McCoy modules, which extends the notions of McCoy modules and $\sigma$-skew McCoy modules. This concept can also be regarded as a generalization of $(\sigma,\delta)$-skew Armendariz modules. Some properties of this concept are established, and some connections between $(\sigma,\delta)$-skew McCoyness and $(\sigma,\delta)$-compatible reduced modules are examined. Also, we study the $(\sigma,\delta)$-skew McCoy property of the skew triangular matrix extensions $V_n(M,\sigma)$, for any integer $n\geq 2$. As a consequence, we obtain: (1) $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if $M[x]/M[x](x^n)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy, and (2) $M_R$ is $\sigma$-skew McCoy if and only if $M[x;\sigma]/M[x;\sigma](x^n)$ is $\overline{\sigma}$-skew McCoy. \end{abstract} \maketitle \section{Introduction} Throughout this paper, $R$ denotes an associative ring with unity and $M_R$ a right $R$-module. For a subset $X$ of a module $M_R$, $r_R(X)=\{a\in R|Xa=0\}$ and $\ell_R(X)=\{a\in R|aX=0\}$ will stand for the right and the left annihilator of $X$ in $R$, respectively.
An Ore extension of a ring $R$ is denoted by $R[x;\sigma,\delta]$, where $\sigma$ is an endomorphism of $R$ and $\delta$ is a $\sigma$-derivation, i.e., $\delta\colon R\rightarrow R$ is an additive map such that $\delta(ab)=\sigma(a)\delta(b)+\delta(a)b$ for all $a,b\in R$ (the pair $(\sigma,\delta)$ is also called a quasi-derivation of $R$). Recall that elements of $R[x;\sigma,\delta]$ are polynomials in $x$ with coefficients written on the left. Multiplication in $R[x;\sigma,\delta]$ is given by the multiplication in $R$ and the condition $xa=\sigma(a)x+\delta(a)$, for all $a\in R$. In what follows, $S$ will stand for the Ore extension $R[x;\sigma,\delta]$. On the other hand, we have a natural functor $-\otimes_RS$ from the category of right $R$-modules into the category of right $S$-modules. For a right $R$-module $M$, the right $S$-module $M\otimes_R S$ is called {\it the induced module} \cite{matczuk/induced}. Since $R[x;\sigma,\delta]$ is a free left $R$-module, elements of $M\otimes_R S$ can be seen as polynomials in $x$ with coefficients in $M$, with natural addition and right $S$-module multiplication. \par For any $0\leq i\leq j\;(i,j\in \Bbb N)$, $f_i^j\in End(R,+)$ will denote the map which is the sum of all possible words in $\sigma,\delta$ built with $i$ factors of $\sigma$ and $j-i$ factors of $\delta$ (e.g., $f_n^n=\sigma^n$ and $f_0^n=\delta^n, n\in \Bbb N $). We have $x^ja=\sum_{i=0}^jf_i^j(a)x^i$ for all $a\in R$ and $j\in \Bbb N$ (see \cite[Lemma 4.1]{lam}). \par Following Lee and Zhou \cite{lee/zhou}, we introduce the notation $M[x;\sigma,\delta]$ to write the $S$-module $M\otimes_R S$.
Consider $$M[x;\sigma,\delta]:=\set{\sum_{i=0}^nm_ix^i\mid n\geq 0,m_i\in M};$$ which is an $S$-module under an obvious addition and the action of monomials of $R[x;\sigma,\delta]$ on monomials in $M[x;\sigma,\delta]_{R[x;\sigma,\delta]}$ via $(mx^j)(ax^{\ell})=m\sum_{i=0}^jf_i^j(a)x^{i+\ell}$ for all $a\in R$ and $j,\ell\in \Bbb N$. The $S$-module $M[x;\sigma,\delta]$ is called the {\it skew polynomial extension} related to the quasi-derivation $(\sigma,\delta)$. \par A module $M_R$ is semicommutative if, for any $m\in M$ and $a\in R$, $ma=0$ implies $mRa=0$ \cite{rege2002}. Let $\sigma$ be an endomorphism of $R$; $M_R$ is called a $\sigma$-semicommutative module \cite{zhang/chen} if, for any $m\in M$ and $a\in R$, $ma=0$ implies $mR\sigma(a)=0$. For a module $M_R$ and a quasi-derivation $(\sigma,\delta)$ of $R$, we say that $M_R$ is $\sigma$-compatible if, for each $m\in M$ and $a\in R$, we have $ma=0 \Leftrightarrow m\sigma(a)=0$. Moreover, we say that $M_R$ is $\delta$-compatible if, for each $m\in M$ and $a\in R$, we have $ma=0\Rightarrow m\delta(a)=0$. If $M_R$ is both $\sigma$-compatible and $\delta$-compatible, we say that $M_R$ is $(\sigma,\delta)$-compatible (see \cite{annin/2004}). In \cite{zhang/chen}, a module $M_R$ is called $\sigma$-{\it skew Armendariz} if $m(x)f(x)=0$, where $m(x)=\sum_{i=0}^nm_ix^i\in M[x;\sigma]$ and $f(x)=\sum_{j=0}^ma_jx^j\in R[x;\sigma]$, implies $m_i\sigma^i(a_j)=0$ for all $i,j$. According to Lee and Zhou \cite{lee/zhou}, $M_R$ is called $\sigma$-{\it Armendariz} if it is $\sigma$-compatible and $\sigma$-skew Armendariz. \par Following Alhevaz and Moussavi \cite{moussavi/2012}, a module $M_R$ is called $(\sigma,\delta)$-skew Armendariz if, whenever $m(x)g(x)=0$ where $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x;\sigma,\delta]$, we have $m_ix^ib_jx^j=0$ for all $i,j$.
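For instance, applying the rule $xa=\sigma(a)x+\delta(a)$ twice gives $$x^2a=x(\sigma(a)x+\delta(a))=\sigma^2(a)x^2+(\sigma\delta+\delta\sigma)(a)x+\delta^2(a),$$ so that $f_2^2=\sigma^2$, $f_1^2=\sigma\delta+\delta\sigma$ and $f_0^2=\delta^2$, in agreement with $x^ja=\sum_{i=0}^jf_i^j(a)x^i$. Accordingly, the module action above yields $(mx^2)(ax^{\ell})=m\sigma^2(a)x^{2+\ell}+m(\sigma\delta+\delta\sigma)(a)x^{1+\ell}+m\delta^2(a)x^{\ell}$ for any $m\in M$, $a\in R$ and $\ell\in\Bbb N$.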
In this paper, we introduce the concept of $(\sigma,\delta)$-skew McCoy modules, which is a generalization of McCoy modules and $\sigma$-skew McCoy modules. This concept can also be regarded as a generalization of $(\sigma,\delta)$-skew Armendariz modules and rings. We study connections between reduced modules, $(\sigma,\delta)$-compatible modules and $(\sigma,\delta)$-skew McCoy modules. Also, we show that $(\sigma,\delta)$-skew McCoyness passes from a module $M_R$ to its skew triangular matrix extension $V_n(M,\sigma)$. In this sense, we complete the definition of skew triangular matrix rings $V_n(R,\sigma)$ given by Isfahani \cite{isfahani/2011} by introducing the notion of skew triangular matrix modules. Moreover, we give some results on $(\sigma,\delta)$-skew McCoyness for skew triangular matrix modules. \section{$(\sigma,\delta)$-skew McCoy modules} \par Cui and Chen \cite{cui/2011,cui/2012} introduced the concepts of McCoy modules and $\sigma$-skew McCoy modules. A module $M_R$ is called {\it McCoy} if $m(x)g(x)=0$, where $m(x)=\sum_{i=0}^pm_ix^i\in M[x]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x]\setminus\{0\}$, implies that there exists $a\in R\setminus\{0\}$ such that $m(x)a=0$. A module $M_R$ is called {\it $\sigma$-skew McCoy} if $m(x)g(x)=0$, where $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x;\sigma]\setminus\{0\}$, implies that there exists $a\in R\setminus\{0\}$ such that $m(x)a=0$. In the same manner, we introduce the concept of {\it $(\sigma,\delta)$-skew McCoy} modules, which is a generalization of McCoy modules, $\sigma$-skew McCoy modules and $(\sigma,\delta)$-skew Armendariz modules. \begin{definition}Let $M_R$ be a module and $M[x;\sigma,\delta]$ the corresponding $(\sigma,\delta)$-skew polynomial module over $R[x;\sigma,\delta]$.
\par$\mathbf{(1)}$ The module $M_R$ is called {\it $(\sigma,\delta)$-skew McCoy} if $m(x)g(x)=0$, where $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$, implies that there exists $a\in R\setminus\{0\}$ such that $m(x)a=0$ $($i.e., $\sum_{i=\ell}^pm_if_{\ell}^i(a)=0$, for all $\ell=0,1,\cdots,p)$. \par$\mathbf{(2)}$ The ring $R$ is called {\it $(\sigma,\delta)$-skew McCoy} if $R$ is $(\sigma,\delta)$-skew McCoy as a right $R$-module. \end{definition} \begin{remark}\label{rem2}$\mathbf{(1)}$ If $M_R$ is a $(\sigma,\delta)$-skew Armendariz module, then it is $(\sigma,\delta)$-skew McCoy $($Proposition \ref{prop1}$)$. But the converse is not true $($Example \ref{exp mcnotarm}$)$. \par$\mathbf{(2)}$ If $\sigma=id_R$ and $\delta=0$, we get the concept of a McCoy module; if only $\delta=0$, we get the concept of a $\sigma$-skew McCoy module. \par$\mathbf{(3)}$ A module $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if for all $m(x)\in M[x;\sigma,\delta]$, $r_{R[x;\sigma,\delta]}(m(x))\neq 0\Rightarrow r_{R[x;\sigma,\delta]}(m(x))\cap R\neq 0.$ \end{remark} An ideal $I$ of a ring $R$ is called $(\sigma,\delta)$-stable if $\sigma(I)\subseteq I$ and $\delta(I)\subseteq I$. \begin{proposition}\label{prop2}$\mathbf{(1)}$ Let $I$ be a nonzero right ideal of $R$. If $I$ is $(\sigma,\delta)$-stable, then $R/I$ is a $(\sigma,\delta)$-skew McCoy $R$-module. \par$\mathbf{(2)}$ For any index set $I$, if $M_i$ is $(\sigma_i,\delta_i)$-skew McCoy as an $R_i$-module for each $i\in I$, then $\prod_{i\in I}M_i$ is $(\sigma,\delta)$-skew McCoy as a $\prod_{i\in I}R_i$-module, where $(\sigma,\delta)=(\sigma_i,\delta_i)_{i\in I}$. \par$\mathbf{(3)}$ Every submodule of a $(\sigma,\delta)$-skew McCoy module is $(\sigma,\delta)$-skew McCoy. In particular, if $I$ is a right ideal of a $(\sigma,\delta)$-skew McCoy ring, then $I_R$ is a $(\sigma,\delta)$-skew McCoy module.
\par$\mathbf{(4)}$ A module $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if every finitely generated submodule of $M_R$ is $(\sigma,\delta)$-skew McCoy. \end{proposition} \begin{proof}$\mathbf{(1)}$ Let $m(x)=\sum_{i=0}^p\overline{m}_ix^i\in (R/I)[x;\sigma,\delta]$, where $\overline{m}_i=r_i+I\in R/I$ for all $i=0,1,\cdots,p$, and let $r$ be an arbitrary nonzero element of $I$. We have $m(x)r=\sum_{i=0}^p(r_i+I)\sum_{\ell=0}^if_{\ell}^i(r)x^{\ell}\in I[x;\sigma,\delta]$, because $f_{\ell}^i(r)\in I$ for all $\ell=0,1,\cdots,i$. Hence $m(x)r=\bar {0}$. \par$\mathbf{(2)}$ Let $M=\prod_{i\in I}M_i$ and $R=\prod_{i\in I}R_i$, where each $M_i$ is $(\sigma_i,\delta_i)$-skew McCoy as an $R_i$-module for all $i\in I$. Take $m(x)=(m_i(x))_{i\in I}\in M[x;\sigma,\delta]$ and $f(x)=(f_i(x))_{i\in I}\in R[x;\sigma,\delta]\setminus\{0\}$, where $m_i(x)=\sum_{s=0}^pm_i(s)x^s\in M_i[x;\sigma_i,\delta_i]$ and $f_i(x)=\sum_{t=0}^qa_i(t)x^t\in R_i[x;\sigma_i,\delta_i]$ for each $i\in I$. Suppose that $m(x)f(x)=0$; then $m_i(x)f_i(x)=0$ for each $i\in I$. For each $i\in I$ with $f_i(x)\neq 0$, since $M_i$ is $(\sigma_i,\delta_i)$-skew McCoy, there exists $0\neq r_i\in R_i$ such that $m_i(x)r_i=0$; for the remaining $i$, set $r_i=0$. Since $f_i(x)\neq 0$ for at least one $i$, we get $m(x)r=0$ where $0\neq r=(r_i)_{i\in I}\in R$. \par$\mathbf{(3)}$ and $\mathbf{(4)}$ are obvious. \end{proof} \begin{proposition}\label{prop1}If $M_R$ is a $(\sigma,\delta)$-skew Armendariz module, then it is $(\sigma,\delta)$-skew McCoy. \end{proposition} \begin{proof} Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $g(x)=\sum_{j=0}^qb_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$. Suppose that $m(x)g(x)=0$; then $m_ix^ib_jx^j=0$ for all $i,j$. Since $g(x)\neq 0$, we have $b_{j_0}\neq 0$ for some $j_0\in\{0,1,\cdots,q\}$. Thus $m_ix^ib_{j_0}x^{j_0}=0$ for all $i$. On the other hand, $\sum_{i=0}^pm_ix^ib_{j_0}x^{j_0}=\sum_{\ell=0}^p(\sum_{i=\ell}^pm_if_{\ell}^i(b_{j_0}))x^{\ell+j_0}=0$, and so $\sum_{i=\ell}^pm_if_{\ell}^i(b_{j_0})=0$ for all $\ell=0,1,\cdots,p$.
Thus $m(x)b_{j_0}=0$; therefore $M_R$ is $(\sigma,\delta)$-skew McCoy. \end{proof} By the next example, we see that the converse of Proposition \ref{prop1} does not hold. \begin{example}\label{exp mcnotarm}Let $R$ be a reduced ring. Consider the ring $$R_4=\set{\left( \begin{array}{cccc} a & a_{12} & a_{13}& a_{14} \\ 0 & a & a_{23}& a_{24} \\ 0 & 0 & a & a_{34}\\ 0 & 0 &0& a \\ \end{array} \right)\mid a,a_{ij}\in R}.$$ Since $R$ is reduced, it is right McCoy, and so $R_4$ is right McCoy by \cite[Proposition 2.1]{zhao/liu}. But $R_4$ is not Armendariz by \cite[Example 3]{kim/lee}. \end{example} A $(\sigma,\delta)$-skew McCoy module need not be McCoy by \cite[Example 2.3(2)]{cui/2012}. Also, the following example shows that there exists a module which is McCoy but not $(\sigma,\delta)$-skew McCoy. \begin{example}\label{exp2}Let $\Bbb Z_2$ be the ring of integers modulo $2$, and consider the ring $R=\Bbb Z_2\oplus \Bbb Z_2$ with the usual addition and multiplication. Let $\sigma$ be the endomorphism of $R$ defined by $\sigma((a,b))=(b,a)$ and $\delta$ the $\sigma$-derivation of $R$ defined by $\delta((a,b))=(a,b)-\sigma((a,b))$. The ring $R$ is commutative and reduced, hence McCoy. However, for $p(x)=(1,0)x$ and $q(x)=(1,1)+(1,0)x\in R[x;\sigma,\delta]$, we have $p(x)q(x)=0$ (indeed, $x(1,1)=(1,1)x$ and $x(1,0)=(0,1)x+(1,1)$, so $p(x)q(x)=(1,0)x+(1,0)x=0$), but $p(x)(a,b)\neq 0$ for any $0\neq (a,b)\in R$. Therefore, $R$ is not $(\sigma,\delta)$-skew McCoy. Also, $R$ is not $(\sigma,\delta)$-compatible, because $(0,1)(1,0)=(0,0)$, but $(0,1)\sigma((1,0))=(0,1)^2\neq (0,0)$ and $(0,1)\delta((1,0))=(0,1)(1,1)=(0,1)\neq (0,0)$. \end{example} \begin{lemma}\label{rem3} Let $M_R$ be a $(\sigma,\delta)$-compatible module. For any $m\in M$, $a\in R$ and nonnegative integers $i,j$, we have the following: \par$\mathbf{(1)}$ $ma=0\Rightarrow m\sigma^i(a)=m\delta^j(a)=0$. \par$\mathbf{(2)}$ $ma=0\Rightarrow m\sigma^i(\delta^j(a))=m\delta^i(\sigma^j(a))=0$. \end{lemma} \begin{proof}The verification is straightforward.
\end{proof} If $M_R$ is a $(\sigma,\delta)$-compatible module, then $ma=0\Rightarrow mf_i^j(a)=0$ for any nonnegative integers $i,j$ such that $j\geq i$, where $m\in M_R$ and $a\in R$. For a subset $U$ of $M_R$ and a quasi-derivation $(\sigma,\delta)$ of $R$, the set of all skew polynomials with coefficients in $U$ is denoted by $U[x;\sigma,\delta]$. \begin{lemma}\label{prop3}Let $M_R$ be a module and $(\sigma,\delta)$ a quasi-derivation of $R$. The following are equivalent: \par$\mathbf{(1)}$ For any $U\subseteq M[x;\sigma,\delta]$, $(r_{R[x;\sigma,\delta]}(U)\cap R)[x;\sigma,\delta]=r_{R[x;\sigma,\delta]}(U)$. \par$\mathbf{(2)}$ For any $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]$, $m(x)f(x)=0$ implies $\sum_{\ell=i}^pm_{\ell}f_i^{\ell}(a_j)=0$ for all $i,j$. \end{lemma} \begin{proof}$(1)\Rightarrow (2)$. Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]$. If $m(x)f(x)=0$, we have $f(x)\in r_{R[x;\sigma,\delta]}(m(x))=(r_{R[x;\sigma,\delta]}(m(x))\cap R)[x;\sigma,\delta]$. Then $a_j\in r_{R[x;\sigma,\delta]}(m(x))$ for all $j$, so that $m(x)a_j=0$ for all $j$. But $m(x)a_j=0 \Leftrightarrow \sum_{\ell=i}^pm_{\ell}f_i^{\ell}(a_j)=0$ for all $0\leq i\leq p$. Thus $\sum_{\ell=i}^pm_{\ell}f_i^{\ell}(a_j)=0$ for all $i,j$. \par $(2)\Rightarrow (1)$. Let $U\subseteq M[x;\sigma,\delta]$; we always have $(r_{R[x;\sigma,\delta]}(U)\cap R)[x;\sigma,\delta]\subseteq r_{R[x;\sigma,\delta]}(U)$. Conversely, let $f(x)=\sum_{j=0}^qa_jx^j\in r_{R[x;\sigma,\delta]}(U)$; then by $(2)$, we have $Ua_j=0$ for all $j$, and so $a_j\in r_{R[x;\sigma,\delta]}(U)\cap R$. Therefore $f(x)\in (r_{R[x;\sigma,\delta]}(U)\cap R)[x;\sigma,\delta]$. \end{proof} \begin{theorem}[McCoy's Theorem for module extensions]\label{theo mccoy}Let $M_R$ be a module and $N$ a nonzero submodule of $M[x;\sigma,\delta]$. Assume that one of the equivalent conditions of Lemma \ref{prop3} is satisfied.
Then $r_{R[x;\sigma,\delta]}(N)\neq 0$ implies $r_{R}(N)\neq 0$. \end{theorem} \begin{proof}Suppose that $r_{R[x;\sigma,\delta]}(N)\neq 0$; then there exists $0\neq f(x)=\sum_{i=0}^pa_ix^i\in r_{R[x;\sigma,\delta]}(N)$. But $r_{R[x;\sigma,\delta]}(N)=(r_{R[x;\sigma,\delta]}(N)\cap R)[x;\sigma,\delta]$ by Lemma \ref{prop3}. Therefore all $a_i$ are in $r_{R[x;\sigma,\delta]}(N)$, so $a_i\in r_R(N)$ for all $i$. Since $f(x)\neq 0$, there exists $i_0\in \{0,1,\cdots,p\}$ such that $0\neq a_{i_0}\in r_R(N)$. Thus $r_{R}(N)\neq 0$. \end{proof} \begin{definition}\label{def}Let $M_R$ be a module and $\sigma$ an endomorphism of $R$. We say that $M_R$ satisfies the condition $(\mathcal{C_{\sigma}})$ if whenever $m\sigma(a)=0$ with $m\in M$ and $a\in R$, then $ma=0$. \end{definition} \begin{proposition}\label{prop/combine}Let $m(x)=\sum_{i=0}^{p}m_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j$ $\in R[x;\sigma,\delta]$ such that $m(x)f(x)=0$. If one of the following conditions holds: \par$\mathbf{(a)}$ $M_R$ is $(\sigma,\delta)$-skew Armendariz and satisfies the condition $(\mathcal{C_{\sigma}})$; \par$\mathbf{(b)}$ $M_R$ is reduced and $(\sigma,\delta)$-compatible; \par then $m_ia_j=0$ for all $i,j$. \end{proposition} \begin{proof}\par$\mathbf{(a)}$ Since $M_R$ is $(\sigma,\delta)$-skew Armendariz, from $m(x)f(x)=0$ we get $m_ix^ia_jx^j=0$ for all $i,j$. But $m_ix^ia_jx^j=m_i\sum_{\ell=0}^if_{\ell}^i(a_j)x^{j+\ell}=m_i\sigma^i(a_j)x^{i+j}+Q(x)=0$, where $Q(x)$ is a polynomial in $M[x;\sigma,\delta]$ of degree strictly less than $i+j$. Thus $m_i\sigma^i(a_j)=0$, and therefore $m_ia_j=0$ for all $i,j$, by the condition $(\mathcal{C_{\sigma}})$. \par$\mathbf{(b)}$ We will freely use the fact that if $ma=0$, then $m\sigma^i(a)=m\delta^{j}(a)=mf_i^{j}(a)=0$ for any nonnegative integers $i,j$ with $j\geq i$.
From $m(x)f(x)=0$, we have the following system of equations: $$\;\qquad\qquad\qquad\qquad\qquad m_p\sigma^p(a_q)=0,\leqno(0)$$ $$m_p\sigma^p(a_{q-1})+m_{p-1}\sigma^{p-1}(a_q)+m_pf_{p-1}^p(a_q)=0,\qquad\qquad\leqno(1)$$ $$m_p\sigma^p(a_{q-2})+m_{p-1}\sigma^{p-1}(a_{q-1})+m_pf_{p-1}^p(a_{q-1})+m_{p-2}\sigma^{p-2}(a_q)+m_{p-1}f_{p-2}^{p-1}(a_q)\leqno(2)$$ $$\quad\qquad\qquad\qquad\quad\;\; +m_pf_{p-2}^p(a_q)=0,$$ $$m_p\sigma^p(a_{q-3})+m_{p-1}\sigma^{p-1}(a_{q-2})+ m_pf_{p-1}^p(a_{q-2})+m_{p-2}\sigma^{p-2}(a_{q-1})\leqno(3)$$ $$+m_{p-1}f_{p-2}^{p-1}(a_{q-1})+m_pf_{p-2}^p(a_{q-1})+m_{p-3} \sigma^{p-3}(a_q)+m_{p-2}f_{p-3}^{p-2}(a_q)$$ $$\;\;\quad +m_{p-1}f_{p-3}^{p-1}(a_q)+m_pf_{p-3}^p(a_q)=0,$$ $$\qquad\qquad\qquad\qquad\vdots$$ $$\qquad\sum_{j+k=p+q-\ell}\;\Big(\sum_{i=j}^pm_if_j^i(a_k)\Big)=0,\leqno(\ell)$$ $$\qquad\qquad\qquad\qquad\vdots$$ $$\;\qquad\qquad\qquad\qquad\quad\sum_{i=0}^pm_i\delta^i(a_0)=0.\leqno(p+q)$$ Here, in equation $(\ell)$, the outer sum ranges over all $0\leq j\leq p$ and $0\leq k\leq q$ with $j+k=p+q-\ell$. From equation $(0)$, we have $m_pa_q=0$ by $\sigma$-compatibility. Multiplying equation $(1)$ on the right by $a_q$, we get $$m_p\sigma^p(a_{q-1})a_q+m_{p-1}\sigma^{p-1}(a_q)a_q+m_pf_{p-1}^p(a_q)a_q=0.\;\qquad\qquad\qquad\qquad\leqno(1')$$ Since $M_R$ is semicommutative, $$m_pa_q=0\Rightarrow m_p\sigma^p(a_{q-1})a_q=m_pf_{p-1}^p(a_q)a_q=0.$$ By Lemma \ref{lemma banal}, equation $(1')$ gives $m_{p-1}a_q=0$. Also, by $(\sigma,\delta)$-compatibility, equation $(1)$ implies $m_p\sigma^p(a_{q-1})=0$, because $m_pa_q=m_{p-1}a_q=0$. Thus $m_pa_{q-1}=0$. \par Summarizing at this point, we have $$m_pa_q=m_{p-1}a_q=m_pa_{q-1}=0.\leqno(\alpha)$$ Now, multiplying equation $(2)$ on the right by $a_q$, we get $$m_p\sigma^p(a_{q-2})a_q+m_{p-1}\sigma^{p-1}(a_{q-1})a_q+m_pf_{p-1}^p(a_{q-1})a_q+m_{p-2}\sigma^{p-2}(a_q)a_q\leqno(2')$$ $$+m_{p-1}f_{p-2}^{p-1}(a_q)a_q+m_pf_{p-2}^p(a_q)a_q=0.\;\;$$ In the same manner as above, equation $(2')$ gives $m_{p-2}\sigma^{p-2}(a_q)a_q=0$ and thus $m_{p-2}a_q=0\;(\beta)$.
Also, multiplying equation $(2)$ on the right by $a_{q-1}$, we get $$m_p\sigma^p(a_{q-2})a_{q-1}+m_{p-1}\sigma^{p-1}(a_{q-1})a_{q-1}+m_pf_{p-1}^p(a_{q-1})a_{q-1}\leqno(2'')$$ $$+m_{p-2}\sigma^{p-2}(a_q)a_{q-1}+m_{p-1}f_{p-2}^{p-1}(a_q)a_{q-1}+m_pf_{p-2}^p(a_q)a_{q-1}=0.$$ Equations $(\alpha)$ and $(\beta)$ imply $$\;0=m_p\sigma^p(a_{q-2})a_{q-1}=m_pf_{p-1}^p(a_{q-1})a_{q-1}=m_{p-2}\sigma^{p-2}(a_q)a_{q-1}$$ $$=m_{p-1}f_{p-2}^{p-1}(a_q)a_{q-1}=m_pf_{p-2}^p(a_q)a_{q-1}.\qquad\qquad\qquad\qquad\;$$ Hence, equation $(2'')$ gives $m_{p-1}\sigma^{p-1}(a_{q-1})a_{q-1}=0$, and by Lemma \ref{lemma banal}, we get $m_{p-1}a_{q-1}=0\;(\gamma)$. Now, by equations $(\alpha)$, $(\beta)$ and $(\gamma)$, we get $m_{p-1}\sigma^{p-1}(a_{q-1})=m_pf_{p-1}^p(a_{q-1})=m_{p-2}\sigma^{p-2}(a_q)=m_{p-1}f_{p-2}^{p-1}(a_q)=m_pf_{p-2}^p(a_q)=0$. Therefore equation $(2)$ implies $m_p\sigma^p(a_{q-2})=0$, so that $m_pa_{q-2}=0$. \par Summarizing at this point, we have $m_ia_j=0$ with $i+j\in \{p+q,p+q-1,p+q-2\}$. Continuing this procedure yields $m_ia_j=0$ for all $i,j$. \end{proof} \begin{lemma}\label{lemma banal}Let $M_R$ be a $(\sigma,\delta)$-compatible module such that $ma^2=0$ implies $ma=0$ for any $m\in M$ and $a\in R$. Then: \par$\mathbf{(1)}$ $m\sigma(a)a=0$ implies $ma=m\sigma(a)=0$. \par$\mathbf{(2)}$ $ma\sigma(a)=0$ implies $ma=m\sigma(a)=0$. \end{lemma} \begin{proof}The proof is straightforward. \end{proof} According to Lee and Zhou \cite{lee/zhou}, a module $M_R$ is called $\sigma$-{\it reduced} if, for any $m\in M$ and $a\in R$, we have \begin{enumerate} \item [$\mathbf{(1)}$] $ma=0$ implies $mR\cap Ma=0$. \item [$\mathbf{(2)}$] $ma=0$ if and only if $m\sigma(a)=0$. \end{enumerate} The module $M_R$ is called reduced if $M_R$ is $id_R$-reduced. \begin{lemma}[{\cite[Lemma 1.2]{lee/zhou}}]\label{lemma zhou}The following are equivalent for a module $M_R$: \begin{enumerate} \item [$\mathbf{(1)}$] $M_R$ is $\sigma$-reduced.
\item [$\mathbf{(2)}$] The following three conditions hold: For any $m\in M$ and $a\in R$, \begin{enumerate} \item [$\mathbf{(a)}$] $ma=0$ implies $mRa=mR\sigma(a)=0$. \item [$\mathbf{(b)}$] $ma\sigma(a)=0$ implies $ma=0$. \item [$\mathbf{(c)}$] $ma^2=0$ implies $ma=0$. \end{enumerate} \end{enumerate} \end{lemma} By Lemma \ref{lemma zhou}, a module $M_R$ is reduced if and only if it is semicommutative and, for any $m\in M$ and $a\in R$, $ma^2=0$ implies $ma=0$. \begin{corollary}[{\cite[Theorem 2.19]{moussavi/2012}}]Every $(\sigma,\delta)$-compatible and reduced module is $(\sigma,\delta)$-skew Armendariz. \end{corollary} \begin{proof}This is clear from Proposition \ref{prop/combine}(b). \end{proof} Let $M_R$ be a module and $(\sigma,\delta)$ a quasi-derivation of $R$. We say that $M_R$ satisfies the condition $(*)$ if, for any $m(x)\in M[x;\sigma,\delta]$ and $f(x)\in R[x;\sigma,\delta]$, $m(x)f(x)=0$ implies $m(x)Rf(x)=0$. A module $M_R$ which satisfies the condition $(*)$ is semicommutative. But the converse is not true, as the next example shows. \begin{example}\label{ex2}Take the ring $R=\Bbb Z_2\oplus \Bbb Z_2$ with $(\sigma,\delta)$ as considered in Example \ref{exp2}. Since $R$ is commutative, the module $R_R$ is semicommutative. However, it does not satisfy the condition $(*)$: for $p(x)=(1,0)x$ and $q(x)=(1,1)+(1,0)x\in R[x;\sigma,\delta]$, we have $p(x)q(x)=0$, but $p(x)(1,0)q(x)=(1,0)+(1,0)x\neq 0$. Thus $p(x)Rq(x)\neq 0$. \end{example} \begin{theorem}\label{th2}If a module $M_R$ is $(\sigma,\delta)$-compatible and reduced, then it satisfies the condition $(*)$. \end{theorem} \begin{proof}Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]$ such that $m(x)f(x)=0$. By Proposition \ref{prop/combine}(b) and semicommutativity of $M_R$, we have $m_iRa_j=0$ for all $i$ and $j$. Moreover, compatibility implies $m_if_k^{\ell}(Ra_j)=0$ for all $i,j,k,\ell$ with $\ell\geq k$. Therefore $m(x)Rf(x)=0$.
\end{proof} Since the ring $R=\Bbb Z_2\oplus \Bbb Z_2$ is reduced, Example \ref{ex2} shows that the condition ``$(\sigma,\delta)$-compatible" in Theorem \ref{th2} is not superfluous. \begin{proposition}\label{prop4}Let $M_R$ be a $(\sigma,\delta)$-compatible module which satisfies $(*)$. If $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$ satisfy $m(x)f(x)=0$, then $m_ia_q^{p+1}=0$ for all $i=0,1,\cdots, p$. \end{proposition} \begin{proof}Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$ such that $m(x)f(x)=0$. We can suppose that $a_q\neq 0$. From $m(x)f(x)=0$, we get $m_p\sigma^p(a_q)=0$. Since $M_R$ is $(\sigma,\delta)$-compatible, we have $m_pa_q=0$, which implies $m_px^pa_q=0$. Since $M_R$ satisfies the condition $(*)$, $m(x)f(x)=0$ implies $m(x)a_qf(x)=0$. Then $$0=(m_px^p+m_{p-1}x^{p-1}+\cdots+m_1x+m_0)(a_q^2x^q+a_qa_{q-1}x^{q-1}+\cdots+a_qa_1x+a_qa_0)$$ $$\;\;\;=(m_{p-1}x^{p-1}+\cdots+m_1x+m_0)(a_q^2x^q+a_qa_{q-1}x^{q-1}+\cdots+a_qa_1x+a_qa_0).\qquad\;$$ If we put $f'(x)=a_qf(x)$ and $m'(x)=\sum_{i=0}^{p-1}m_ix^i$, then we get $m_{p-1}a_q^2=0$. Continuing this procedure yields $m_ia_q^{p+1-i}=0$ for all $i=0,1,\cdots, p$. Consequently $m_ia_q^{p+1}=0$ for all $i=0,1,\cdots, p$. \end{proof} \begin{corollary}Let $M_R$ be a $(\sigma,\delta)$-compatible module over a reduced ring $R$. If $M_R$ satisfies $(*)$, then it is $(\sigma,\delta)$-skew McCoy. \end{corollary} \begin{proof}Let $m(x)=\sum_{i=0}^pm_ix^i\in M[x;\sigma,\delta]$ and $f(x)=\sum_{j=0}^qa_jx^j\in R[x;\sigma,\delta]\setminus\{0\}$ such that $m(x)f(x)=0$. We can suppose that $a_q\neq 0$. By Proposition \ref{prop4}, we have $m_ia_q^{p+1}=0$ for all $i=0,1,\cdots, p$. Since $M_R$ is $(\sigma,\delta)$-compatible, we get $m_ix^ia_q^{p+1}=m_i\sum_{\ell=0}^if_{\ell}^i(a_q^{p+1})x^{\ell}=0$ for all $i$. Hence $m(x)a_q^{p+1}=0$, where $a_q^{p+1}\neq 0$ because $R$ is reduced.
Consequently $M_R$ is $(\sigma,\delta)$-skew McCoy. \end{proof} \begin{example}\label{ex2.2}Consider the ring of polynomials $R=\Bbb Z_2[x]$. Let $\sigma\colon R\rightarrow R$ be the endomorphism defined by $\sigma(f(x))=f(0)$. Then: \par$\mathbf{(1)}$ $R$ is not $\sigma$-compatible. For $f=\overline{1}+x$ and $g=x\in R$, we have $fg=(\overline{1}+x)x\neq 0$; however, $f\sigma(g)=(\overline{1}+x)\sigma(x)=0$. \par$\mathbf{(2)}$ $R$ is $\sigma$-skew Armendariz \cite[Example~5]{hong/2003}. \end{example} From Example \ref{ex2.2}, we see that the ring $R=\Bbb Z_2[x]$ is $\sigma$-skew McCoy because it is $\sigma$-skew Armendariz, but it is not $\sigma$-compatible. Thus the $(\sigma,\delta)$-compatibility condition is not essential to obtain $(\sigma,\delta)$-skew McCoyness. \begin{example}[{\cite[Example 2.5]{louzari2}}]\label{ex5}Let $R$ be a ring, $\sigma$ an endomorphism of $R$ and $\delta$ a $\sigma$-derivation of $R$. Suppose that $R$ is $\sigma$-rigid. Consider the ring $$V_3(R)=\set{\left( \begin{array}{ccc} a & b&c\\ 0 & a& b \\ 0 & 0 & a\\ \end{array} \right)\mid a,b,c\in R}.$$ The ring $V_3(R)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy, reduced and $(\overline{\sigma},\overline{\delta})$-compatible, and by Theorem \ref{th2}, it satisfies the condition $(*)$. \end{example} \section{$(\sigma,\delta)$-skew McCoyness of some matrix extensions} For an integer $n\geq 2$, let $R$ be a ring and $M$ a right $R$-module.
Consider $$S_n(R):=\set{\left( \begin{array}{ccccc} a & a_{12} & a_{13} & \ldots & a_{1n} \\ 0 & a & a_{23} & \ldots & a_{2n} \\ 0 & 0 & a & \ldots & a_{3n}\\ \vdots & \vdots &\vdots&\ddots &\vdots \\ 0 & 0 & 0 & \ldots & a \\ \end{array} \right)\mid a,a_{ij}\in R}$$ and $$S_n(M):=\set{\left( \begin{array}{ccccc} m & m_{12} & m_{13} & \ldots & m_{1n} \\ 0 & m & m_{23} & \ldots & m_{2n} \\ 0 & 0 & m & \ldots & m_{3n}\\ \vdots & \vdots &\vdots&\ddots &\vdots \\ 0 & 0 & 0 & \ldots & m \\ \end{array} \right)\mid m,m_{ij}\in M}$$ Clearly, $S_n(M)$ is a right $S_n(R)$-module under the usual matrix addition operation and the following scalar product operation. For $U=(u_{ij})\in S_n(M)$ and $A=(a_{ij})\in S_n(R)$, $UA=(m_{ij})\in S_n(M)$ with $m_{ij}=\sum_{k=1}^nu_{ik}a_{kj}$ for all $i,j$. A quasi-derivation $(\sigma,\delta)$ of $R$ can be extended to a quasi-derivation $(\overline{\sigma},\overline{\delta})$ of $S_n(R)$ as follows: $\overline{\sigma}((a_{ij}))=(\sigma(a_{ij}))$ and $\overline{\delta}((a_{ij}))=(\delta(a_{ij}))$. We can easily verify that $\overline{\delta}$ is a $\overline{\sigma}$-derivation of $S_n(R)$. \begin{theorem} A module $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if $S_n(M)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy as an $S_n(R)$-module for any integer $n\geq 2$. \end{theorem} \begin{proof} The proof is similar to \cite[Theorem 14]{baser/2009}. \end{proof} Now let $n\geq 2$.
Consider $$V_n(R):=\set{\left( \begin{array}{cccccc} a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-1} \\ 0 & a_0 & a_1 & a_2 & \ldots & a_{n-2} \\ 0 & 0 & a_0 & a_1 & \ldots & a_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & a_1 \\ 0 & 0 & 0 & 0& \ldots & a_0 \\ \end{array} \right)\mid a_0,a_1,a_2,\cdots,a_{n-1}\in R}$$ and $$V_n(M):=\set{\left( \begin{array}{cccccc} m_0 & m_1 & m_2 & m_3 & \ldots & m_{n-1} \\ 0 & m_0 & m_1 & m_2 & \ldots & m_{n-2} \\ 0 & 0 & m_0 & m_1 & \ldots & m_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & m_1 \\ 0 & 0 & 0 & 0& \ldots & m_0 \\ \end{array} \right)\mid m_0,m_1,m_2,\cdots,m_{n-1}\in M}$$ In the same way as above, $V_n(M)$ is a right $V_n(R)$-module, and a quasi-derivation $(\sigma,\delta)$ of $R$ can be extended to a quasi-derivation $(\overline{\sigma},\overline{\delta})$ of $V_n(R)$. Note that $V_n(M)\cong M[x]/M[x](x^n)$, where $M[x](x^n)$ is the submodule of $M[x]$ generated by $x^n$, and $V_n(R)\cong R[x]/(x^n)$, where $(x^n)$ is the ideal of $R[x]$ generated by $x^n$. \begin{proposition} A module $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if $V_n(M)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy as a $V_n(R)$-module for any integer $n\geq 2$. \end{proposition} \begin{proof}The proof is similar to that of \cite[Theorem 14]{baser/2009} or \cite[Proposition 2.27]{cui/2012}. \end{proof} \begin{corollary} For an integer $n\geq 2$, we have: \par$\mathbf{(1)}$ $M_R$ is $(\sigma,\delta)$-skew McCoy if and only if $M[x]/M[x](x^n)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy. \par$\mathbf{(2)}$ $R$ is $(\sigma,\delta)$-skew McCoy if and only if $R[x]/(x^n)$ is $(\overline{\sigma},\overline{\delta})$-skew McCoy. \par$\mathbf{(3)}$ $R$ is McCoy if and only if $R[x]/(x^n)$ is McCoy.
\end{corollary} We now define {\it skew triangular matrix modules} $V_n(M,\sigma)$, based on the definition of skew triangular matrix rings $V_n(R,\sigma)$ given by Isfahani \cite{isfahani/2011}. Let $\sigma$ be an endomorphism of a ring $R$ and $M_R$ a right $R$-module. For $n\geq 2$, consider $$V_n(R,\sigma):=\set{\left( \begin{array}{cccccc} a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-1} \\ 0 & a_0 & a_1 & a_2 & \ldots & a_{n-2} \\ 0 & 0 & a_0 & a_1 & \ldots & a_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & a_1 \\ 0 & 0 & 0 & 0& \ldots & a_0 \\ \end{array} \right)\mid a_0,a_1,\cdots,a_{n-1}\in R}$$ and $$V_n(M,\sigma):=\set{\left( \begin{array}{cccccc} m_0 & m_1 & m_2 & m_3 & \ldots & m_{n-1} \\ 0 & m_0 & m_1 & m_2 & \ldots & m_{n-2} \\ 0 & 0 & m_0 & m_1 & \ldots & m_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & m_1 \\ 0 & 0 & 0 & 0& \ldots & m_0 \\ \end{array} \right)\mid m_0,m_1,\cdots,m_{n-1}\in M}$$ Clearly $V_n(M,\sigma)$ is a right $V_n(R,\sigma)$-module under the usual matrix addition operation and the following scalar product operation.
$$\left( \begin{array}{cccccc} m_0 & m_1 & m_2 & m_3 & \ldots & m_{n-1} \\ 0 & m_0 & m_1 & m_2 & \ldots & m_{n-2} \\ 0 & 0 & m_0 & m_1 & \ldots & m_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & m_1 \\ 0 & 0 & 0 & 0& \ldots & m_0 \\ \end{array} \right) \left( \begin{array}{cccccc} a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-1} \\ 0 & a_0 & a_1 & a_2 & \ldots & a_{n-2} \\ 0 & 0 & a_0 & a_1 & \ldots & a_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & a_1 \\ 0 & 0 & 0 & 0& \ldots & a_0 \\ \end{array} \right)=$$ $$\left( \begin{array}{cccccc} c_0 & c_1 & c_2 & c_3 & \ldots & c_{n-1} \\ 0 & c_0 & c_1 & c_2 & \ldots & c_{n-2} \\ 0 & 0 & c_0 & c_1 & \ldots & c_{n-3}\\ \vdots & \vdots &\vdots& \vdots & \ddots &\vdots \\ 0 & 0 & 0 & 0 & \ldots & c_1 \\ 0 & 0 & 0 & 0& \ldots & c_0 \\ \end{array} \right),\; \mathrm{where} $$ $c_i=m_0\sigma^{0}(a_i)+m_1\sigma^1(a_{i-1})+m_2\sigma^2(a_{i-2})+\cdots+m_i\sigma^{i}(a_0)$ for each $0\leq i\leq n-1$. \par We denote elements of $V_n(R,\sigma)$ by $(a_0,a_1,\cdots,a_{n-1})$ and elements of $V_n(M,\sigma)$ by $(m_0,m_1,\cdots,m_{n-1})$. There is a ring isomorphism $\varphi\colon R[x;\sigma]/(x^n)\rightarrow V_n(R,\sigma)$ given by $\varphi(a_0+a_1x+a_2x^2+\cdots+a_{n-1}x^{n-1}+(x^n))=(a_0,a_1,a_2,\cdots,a_{n-1})$, and an abelian group isomorphism $\phi\colon M[x;\sigma]/M[x;\sigma](x^n)\rightarrow V_n(M,\sigma)$ given by $\phi(m_0+m_1x+m_2x^2+\cdots+m_{n-1}x^{n-1}+M[x;\sigma](x^n))=(m_0,m_1,m_2,\cdots,m_{n-1})$ such that $$\phi(N(x)A(x))=\phi(N(x))\varphi(A(x))$$ for any $N(x)=m_0+m_1x+m_2x^2+\cdots+m_{n-1}x^{n-1}+M[x;\sigma](x^n)\in M[x;\sigma]/M[x;\sigma](x^n)$ and $A(x)=a_0+a_1x+a_2x^2+\cdots+a_{n-1}x^{n-1}+(x^n)\in R[x;\sigma]/(x^n)$. The endomorphism $\sigma$ of $R$ can be extended to $V_n(R,\sigma)$ and $R[x;\sigma]$, and we will denote it in both cases by $\overline{\sigma}$.
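For instance, when $n=2$ the scalar product reads $(m_0,m_1)(a_0,a_1)=(m_0a_0,\; m_0a_1+m_1\sigma(a_0))$; under the isomorphisms $\phi$ and $\varphi$, this corresponds to the computation $$(m_0+m_1x)(a_0+a_1x)=m_0a_0+(m_0a_1+m_1\sigma(a_0))x+m_1\sigma(a_1)x^2\equiv m_0a_0+(m_0a_1+m_1\sigma(a_0))x$$ modulo $M[x;\sigma](x^2)$.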
\begin{theorem} A module $M_R$ is $\sigma$-skew McCoy if and only if $V_n(M,\sigma)$ is $\overline{\sigma}$-skew McCoy as a $V_n(R,\sigma)$-module for any integer $n\geq 2$. \end{theorem} \begin{proof}We shall adapt the proof of \cite[Theorem 14]{baser/2009} to this situation. Note that $V_n(R,\sigma)[x,\overline{\sigma}]\cong V_n(R[x,\sigma], \overline{\sigma})$ and $V_n(M,\sigma)[x,\overline{\sigma}]\cong V_n(M[x,\sigma], \overline{\sigma})$. We only prove the case $n=2$, since the other cases can be proved in the same manner. Suppose that $M_R$ is $\sigma$-skew McCoy. Let $0\neq m(x)\in V_2(M,\sigma)[x,\overline{\sigma}]$ and $0\neq f(x)\in V_2(R,\sigma)[x,\overline{\sigma}]$ be such that $m(x)f(x)=0$, where $$m(x)=\sum_{i=0}^p \left(\begin{array}{cc} m_{11}^{(i)} & m_{12}^{(i)} \\ 0 & m_{11}^{(i)} \\ \end{array}\right)x^i= \left(\begin{array}{cc} \sum_{i=0}^pm_{11}^{(i)}x^i & \sum_{i=0}^pm_{12}^{(i)}x^i \\ 0 & \sum_{i=0}^pm_{11}^{(i)}x^i \\ \end{array}\right)=\left(\begin{array}{cc} \alpha_{11} & \alpha_{12} \\ 0 & \alpha_{11} \\ \end{array}\right)$$ and $$f(x)=\sum_{j=0}^q \left(\begin{array}{cc} a_{11}^{(j)} & a_{12}^{(j)} \\ 0 & a_{11}^{(j)} \\ \end{array}\right)x^j= \left(\begin{array}{cc} \sum_{j=0}^q a_{11}^{(j)}x^j & \sum_{j=0}^q a_{12}^{(j)}x^j \\ 0 & \sum_{j=0}^q a_{11}^{(j)}x^j \\ \end{array}\right)= \left(\begin{array}{cc} \beta_{11} & \beta_{12} \\ 0 & \beta_{11} \\ \end{array}\right).$$ Then $\left(\begin{array}{cc} \alpha_{11} & \alpha_{12} \\ 0 & \alpha_{11} \\ \end{array}\right)\left(\begin{array}{cc} \beta_{11} & \beta_{12} \\ 0 & \beta_{11} \\ \end{array}\right)=0$, which gives $\alpha_{11}\beta_{11}=0$ and $\alpha_{11}\beta_{12}+\alpha_{12}\overline{\sigma}(\beta_{11})=0$ in $M[x;\sigma]$. If $\alpha_{11}\neq 0$, then there exists $0\neq \beta\in \{\beta_{11},\beta_{12}\}$ such that $\alpha_{11}\beta=0$.
Since $M_R$ is $\sigma$-skew McCoy, there exists $0\neq c\in R$ which satisfies $\alpha_{11}c=0$; thus $\left(\begin{array}{cc} \alpha_{11} & \alpha_{12} \\ 0 & \alpha_{11} \\ \end{array}\right)\left(\begin{array}{cc} 0 & c \\ 0 & 0 \\ \end{array}\right)=\left(\begin{array}{cc} 0 & \alpha_{11}c \\ 0 & 0 \\ \end{array}\right)=0$. If $\alpha_{11}=0$, then $\left(\begin{array}{cc} 0 & \alpha_{12} \\ 0 & 0 \\ \end{array}\right)\left(\begin{array}{cc} 0 & c \\ 0 & 0 \\ \end{array}\right)=0$ for any $0\neq c\in R$. Therefore, $V_2(M,\sigma)$ is $\overline{\sigma}$-skew McCoy. \par Conversely, suppose that $V_2(M,\sigma)$ is a $\overline{\sigma}$-skew McCoy module. Let $0\neq m(x)=m_0+m_1x+\cdots+m_px^p\in M[x;\sigma]$ and $0\neq f(x)=a_0+a_1x+\cdots+a_qx^q\in R[x;\sigma]$ be such that $m(x)f(x)=0$. Then $\left(\begin{array}{cc} m(x) & 0 \\ 0 & m(x) \\ \end{array}\right)\left(\begin{array}{cc} f(x) & 0 \\ 0 & f(x) \\ \end{array}\right)=\left(\begin{array}{cc} m(x)f(x) & 0 \\ 0 & m(x)f(x) \\ \end{array}\right)=0$, so there exists $0\neq \left(\begin{array}{cc} a & b \\ 0 & a \\ \end{array}\right)\in V_2(R,\sigma)$ such that $\left(\begin{array}{cc} m(x) & 0 \\ 0 & m(x) \\ \end{array}\right)\left(\begin{array}{cc} a & b \\ 0 & a \\ \end{array}\right)=0$, because $V_2(M,\sigma)$ is $\overline{\sigma}$-skew McCoy. Thus $m(x)a=m(x)b=0$, where $a\neq 0$ or $b\neq 0$. Therefore, $M_R$ is $\sigma$-skew McCoy. \end{proof} \begin{corollary} For an integer $n\geq 2$, we have: \par$\mathbf{(1)}$ $M_R$ is $\sigma$-skew McCoy if and only if $M[x;\sigma]/M[x;\sigma](x^n)$ is $\overline{\sigma}$-skew McCoy. \par$\mathbf{(2)}$ $R$ is $\sigma$-skew McCoy if and only if $R[x;\sigma]/(x^n)$ is $\overline{\sigma}$-skew McCoy. \par$\mathbf{(3)}$ $M_R$ is McCoy if and only if $M[x]/M[x](x^n)$ is McCoy. \par$\mathbf{(4)}$ $R$ is McCoy if and only if $R[x]/(x^n)$ is McCoy. \end{corollary} \end{document}
\begin{document} \jl{1} \title[Bose-Hubbard dimers, Viviani's windows and pendulum dynamics] {Bose-Hubbard dimers, Viviani's windows and pendulum dynamics } \author{Eva-Maria Graefe$^{1}$, Hans J\"urgen Korsch$^2$ and Martin P. Strzys$^2$} \address{$^1$ Department of Mathematics, Imperial College London, London SW7 2AZ, UK } \address{$^2$ FB Physik, TU Kaiserslautern, D--67653 Kaiserslautern, Germany} \eads{\mailto{[email protected]}, \mailto{[email protected]}, \mailto{[email protected]}} \begin{abstract} The two-mode Bose-Hubbard model in the mean-field approximation is revisited, emphasizing a geometric interpretation where the system orbits appear as intersection curves of a (Bloch) sphere and a cylinder oriented parallel to the mode axis, which provide a generalization of Viviani's curve studied already in 1692. In addition, the dynamics is shown to agree with that of a simple mathematical pendulum. The areas enclosed by the generalized Viviani curves, i.e.\ the action integrals which can be used to semiclassically quantize the $N$-particle eigenstates, are evaluated. Furthermore the significance of the original Viviani curve for the quantum system is demonstrated. \end{abstract} \pacs{03.65.Sq, 03.75.Lm, 01.65.+g, 02.40.Yy} \submitto{\JPA} \section{Introduction} The two-mode Bose-Hubbard system is a popular model in multi-particle quantum theory. It describes $N$ bosons, hopping between two sites with on-site interaction, or a spinning particle with angular momentum $N/2$. Despite its simplicity, it offers a plethora of phenomena and applications motivating an increasing number of investigations, somewhat similar to the harmonic oscillator in single-particle quantum mechanics. In the limit of large $N$ it can be described by a non-linear Schr\"odinger or Gross-Pitaevskii equation which generates a classical Hamiltonian dynamics (see, e.g.,~\cite{Milb97,Smer97,Ragh99,Vard01b,Angl01,Link06} for some studies related to the following).
Moreover it has been shown recently \cite{07semiMP,Grae09dis,Bouk09,Chuc10,Niss10,Simo12} that a semiclassical WKB-type construction can be used to approximately recover quantum effects, such as the eigenstates and interference effects for finite (and even small) numbers of particles. This is based on classical action integrals, as usual in semiclassics. In the present note, we point out an interesting relation between these contemporary studies and much older ones by astronomers and mathematicians from the times of Galilei, such as Vincenzo Viviani (1622\,--\,1703) \cite{Viviani}, or even earlier in ancient Greece, such as Eudoxus of Cnidus (408\,BC\,--\,355\,BC) \cite{Hippo}. The basic link between these studies is the family of closed curves on the surface of a sphere and the surface area they enclose, appearing as phase space action integrals in the Bose-Hubbard dimer. The paper is organized as follows: In section \ref{s-viviani} we give a brief review of the background of Viviani curves, and introduce a generalization. We furthermore calculate the areas enclosed by the generalized Viviani curves. In section \ref{sec_BH_MF} we describe the Bose-Hubbard dimer and its mean-field approximation and show that the dynamical trajectories of the latter are given by generalized Viviani curves. We also show how the dynamics of the mean-field system can be directly related to that of a mathematical pendulum. Finally we analyze the relevance of the Viviani curve for the quantum Bose-Hubbard dimer. We conclude with a summary in section \ref{sec_sum}. \section{Viviani curves} \label{s-viviani} In 1692 Vincenzo Viviani, a disciple of Galileo Galilei, proposed the problem of constructing four equal windows cut out of a hemispherical cupola such that the remaining surface area can be exactly squared \cite{Viviani} (see also \cite{cadd01} and references therein).
The solution to this problem is given by an intersection of the hemisphere and a cylinder whose diameter equals the radius of the hemisphere. This intersection curve of a sphere and a cylinder tangent to a diameter of the sphere and to its equator is the famous Viviani curve, which can be generalized to an intersection with an arbitrary cylinder. For a recent extended study of such curves see \cite{Krol05,Krol07}. \subsection{(Generalized) Viviani curves} \label{ss-viviani} Consider a cylinder of radius $r$ centered around an axis in the $z$-direction shifted by $a$ in the $x$-direction: \begin{equation}\label{eqn_cyl} (x-a)^2+y^2 =r^2. \end{equation} We will parametrize the basis of the cylinder by an angle $\phi$, \begin{equation}\label{eqn_circle} x=a+r\cos \phi \ ,\quad y=r\sin \phi \, \end{equation} as shown in figure \ref{fig-circles12}. \begin{figure} \caption{\label{fig-circles12}} \end{figure} \begin{figure} \caption{\label{fig_viv_05}} \end{figure} The generalized Viviani curves are given by the intersection curves of this cylinder with the unit sphere centered at the origin, illustrated in figures \ref{fig_viv_05} and \ref{fig_viv_07}. The radius $r$ of the cylinder varies between $r=0$ and $r=1+a$; otherwise there is no intersection with the sphere. For $r=1+a$ the cylinder touches the sphere at $x=-1$. We have to distinguish two cases (see figure \ref{fig-circles12}): \begin{figure} \caption{\label{fig_viv_07}} \end{figure} \begin{itemize} \item[(i)] For $0< a<1$ and $0<r <1-a$ all points on the circle (\ref{eqn_circle}) lead to a full intersection of the cylinder and the sphere. We obtain two single intersection loops on the northern and southern hemispheres containing the points $(a,0,+\sqrt{1-a^2})$ and $(a,0,-\sqrt{1-a^2})$, respectively, as illustrated on the left of figures \ref{fig_viv_05} and \ref{fig_viv_07}.
\item[(ii)] For $0< a<1$ and $1-a<r<1+a$, as well as for $a>1$ and $a-1<r<1+a$, there is no intersection of the cylinder and the sphere for $|\phi |<\phi_0$ with \begin{equation}\label{w0} \phi_0=\arccos\frac{1-a^2-r^2}{2ar}\,. \end{equation} The curve of intersection with the sphere is a single loop extending from the northern to the southern hemisphere (see figures \ref{fig_viv_05} and \ref{fig_viv_07} (right)). \end{itemize} \begin{figure} \caption{\label{fig-circlesabc}} \end{figure} \noindent Three special situations are of interest (see figure \ref{fig-circlesabc}): \begin{itemize} \item[(a)] For $r = a$ the cylinder passes through the center of the sphere and the intersection loop passes through the north and south poles. As we shall see later, for the Bose-Hubbard dimer this is the frequently considered situation where the system is initially prepared in the lower or upper state. \item[(b)] For $r =1-a$, which is only possible for $a\le 1$, the cylinder is tangent to the sphere at the point ${\boldsymbol s}_{0+}=(1,0,0)$. This implies that the intersection curve is a figure-eight loop with a self-intersection at ${\boldsymbol s}_{0+}$ (see figure \ref{fig_viv_07} (middle)). This curve is also known as the Hippopede of the Greek astronomer and mathematician Eudoxus of Cnidus (408\,BC -- 355\,BC) \cite{Hippo,Yave01}. In the Bose-Hubbard mean-field dynamics it appears as a separatrix curve, which separates the flow inside (for smaller values of $r$) from the flow outside (for larger values of $r$). \item[(c)] In the most singular situation cases (a) and (b) coincide, i.e.~the cylinder passes through the center and touches the sphere. This happens only for $r=a=\case{1}{2}$ and is the original Viviani case illustrated in figure \ref{fig_viv_05} (middle).
Using the parametrization (\ref{eqn_circle}) the Viviani curve is given as \begin{equation}\label{viviani-1} x=\case{1}{2}+\case{1}{2}\cos \phi\ ,\quad y=\case{1}{2}\sin \phi \end{equation} and $z^2=1-x^2-y^2= (1-\cos \phi)/2=\sin^2(\phi/2)$ and therefore $z=\pm \sin (\phi/2)$\,. This implies the relation \,$\varphi=\phi/2$\, between the azimuthal polar angle $\varphi$ and the angle $\phi$, which can also be found by purely geometric arguments. Therefore the Viviani curve is given by \begin{equation}\label{viviani-2} (x,\,y,\,z)=(\cos^2\varphi,\,\cos \varphi \sin \varphi,\,\sin \varphi)\,. \end{equation} Comparing with spherical polar coordinates (see (\ref{eqn_polar}) below) we observe that the Viviani curve (\ref{viviani-2}) can also be defined by the simple condition that the azimuthal angle is equal to the polar angle measured from the equator: \begin{equation}\label{viviani-3} \varphi=\pi/2-\vartheta\,. \end{equation} \end{itemize} In spite of the fact that only curve (\ref{viviani-2}) is the one originally described by Viviani \cite{Viviani}, we will deliberately denote all the intersection curves of a cylinder and a sphere as (generalized) {\it Viviani curves}. Alternatively these curves are known as {\it euclidean spherical ellipses}. This is due to their remarkable property that the sum of the euclidean distances to two focal points $x_F=a/(1+r)$, $y_F=0$, $z_F=\pm\sqrt{1-x_F^2}$ on the sphere equals a constant $2c$ with $c^2=z_F^2a/x_F$ \cite{Krol05}. These euclidean spherical ellipses must be distinguished from the more popular version, where the distance is measured by the arc length on the surface. The latter version has found many applications in navigation.
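As a quick numerical sanity check (an illustration, not part of the original text), one can verify that the parametrization (\ref{viviani-2}) lies on both the unit sphere and the cylinder (\ref{eqn_cyl}) with $r=a=1/2$:

```python
import math

# Check that (cos^2(varphi), cos(varphi)*sin(varphi), sin(varphi)) lies on the
# unit sphere x^2 + y^2 + z^2 = 1 and on the cylinder (x - 1/2)^2 + y^2 = 1/4.
def viviani_point(varphi):
    c, s = math.cos(varphi), math.sin(varphi)
    return (c * c, c * s, s)

for k in range(16):
    x, y, z = viviani_point(2 * math.pi * k / 16)
    assert abs(x * x + y * y + z * z - 1.0) < 1e-12    # on the sphere
    assert abs((x - 0.5) ** 2 + y * y - 0.25) < 1e-12  # on the cylinder
```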
Figure \ref{fig_vivianicurve} shows the original Viviani curve (\ref{viviani-2}), whose focal points are at $(1/3,0,\pm \sqrt{8}/3)$, as well as two generalized Viviani curves, i.e.~euclidean spherical ellipses, which are chosen to possess the same focal points, however with $c$ smaller or larger than the value $c=2/\sqrt{3}$ for the Viviani curve. Let us point out again that these euclidean spherical ellipses can appear as two disconnected loops. \begin{figure} \caption{\label{fig_vivianicurve}} \end{figure} Note also that the projections of the generalized Viviani curves on the $(y,z)$-plane, the curves \begin{equation} (z^2-1+a^2+r^2)^2+4a^2y^2=4a^2r^2\,, \label{yzprojection} \end{equation} are well-known curves: for $r=a$, where the cylinder passes through the midpoint of the sphere (case (a)), they are Cassini ovals, and for $r=1-a$, where the cylinder is tangent to the sphere, one obtains an eight-curve, also known as the Lemniscate of Gerono. The previous considerations exclusively discussed the geometry of the intersection curves. These curves can, however, also be generated by the trajectories of a time evolution. For the (original) Viviani curve, such a dynamical generation is often realized on the basis of eq.~(\ref{viviani-3}) by simply assuming a combined rotation along the circle of latitude, $\vartheta=\pi/2-\omega t$, and the meridian, $\varphi=\omega t$. Also the more general separatrix (b), the Hippopede of Eudoxus, was constructed in an effort to explain the retrograde motion of the planets by rotations of nested spheres sharing a common center, with the same frequency but around different axes \cite{Hippo}. As we shall see later, the same curves arise as the dynamical trajectories of the mean-field approximation for the Bose-Hubbard dimer.
\subsection{Area integrals} \label{s-area} The area enclosed by a generalized Viviani curve, or more precisely the sum of the enclosed areas if this curve consists of two loops, is of particular interest. Historically, of course, this is because it is the origin of the old pseudo-architectural problem posed and solved by Vincenzo Viviani in 1692, as stated already at the beginning of this section. Today, in the context of the Bose-Hubbard dimer discussed in section \ref{sec_BH_MF}, the area is the basis for a semiclassical quantization of the eigenvalues of the $N$-particle Bose-Hubbard dimer \cite{07semiMP,Grae09dis,Simo12}. The calculation of an area $S$ enclosed by a curve on the sphere can be conveniently carried out by means of the area-conserving projection of the unit sphere onto a cylinder touching the sphere along the equator, which was already known to Archimedes: \begin{equation} \label{cyl-pro} (x,y,z) \longrightarrow (x',y',z')=(\cos \varphi, \sin \varphi, z), \end{equation} that is, \begin{equation} \label{cyl-prob} \varphi=\arctan\, y/x\quad {\rm for}\quad z\ne \pm 1\,. \end{equation} The poles $(0,0,\pm 1)$ are projected onto the circles $(\cos \varphi, \sin \varphi, \pm 1)$. The area element is ${\rm d} S=z(\varphi)\,{\rm d} \varphi$, where $z(\varphi)$ is the $z$-component of the points $(x,y,z)$ on the curve parametrized by the azimuthal angle $\varphi$. As a first example we calculate the area on the sphere enclosed by the Viviani curve. The full area enclosed by the figure-eight-shaped Viviani curve is given by \begin{equation}\label{S-viviani-10a} S_V=2\pi-2S_0\,, \end{equation} where $S_0$ is the area between one half of the Viviani curve and the equator.
From the $z$-component given in (\ref{viviani-2}) we see that the cylinder projection (\ref{cyl-pro}) maps the Viviani curve onto a sine function and thus we find \begin{equation}\label{S-viviani-1_0} S_0=2\int_0^{\pi/2}\!\!z(\varphi)\,{\rm d} \varphi=2\int_0^{\pi/2}\!\!\sin\varphi\,{\rm d} \varphi=2\,. \end{equation} Note that on a more general sphere with radius $R$ this area is given by $4R^2$ as required from the solution to the original Viviani problem. The area enclosed by the Viviani curve is thus given by \begin{equation}\label{S-viviani-1} S_V=2\pi-2S_0=2\pi-4\approx 2.2832\,. \end{equation} For evaluating the area integral in the general case, it is convenient to transform to the variable $\phi $ by means of (\ref{eqn_circle}): \begin{equation} \fl \quad z(\varphi)\,{\rm d} \varphi=z(\phi)\,\frac{{\rm d} \varphi}{{\rm d} \phi}\,{\rm d} \phi =r \sqrt{1-r^2-a^2-2ar\cos\phi \ }\ \frac{r+a\cos \phi}{a^2+r^2+2ar\cos\phi }\,{\rm d} \phi, \end{equation} and the area outside the curve is given by \begin{equation}\label{S0-1} S_0=r \int_{\phi_0}^\pi \sqrt{1-r^2-a^2-2ar\cos\phi }\ \frac{r+a\cos \phi}{a^2+r^2+2ar\cos\phi }\,{\rm d} \phi\,, \end{equation} where the lower bound $\phi_0$ is equal to zero for case (i) and otherwise given by (\ref{w0}). The integrand is proportional to $1/(r-a)$ for $\phi=\pi$ so that we have an integrable singularity for $r=a$. The area $S$ inside the curve is then equal to zero for $a>1$, \ $r<a-1$ and equal to $4\pi$ for $r>1+a$; otherwise it is given by \begin{equation}\label{S-gen} S=\left\{ \begin{array}{ll} -4S_0 \quad &{\rm for}\quad r<a \\ 2\pi -2S_0\quad &{\rm for}\quad r=a\\ 4\pi -4S_0\quad &{\rm for}\quad r>a\, . \end{array}\right.
\end{equation} For the special cases distinguished above the area integral can be evaluated in closed form: \begin{itemize} \item[(a)] For $r=a$, the transformation from $\varphi$ to $\phi$ simplifies to $\varphi=\phi/2$ and the integral (\ref{S0-1}) to \begin{eqnarray} S_0&=& \int_{\phi_0}^\pi \sqrt{1-2a^2-2a^2\cos\phi \ }\,{\rm d} \phi \nonumber\\[2mm] &=&\left\{\begin{array}{ll} 2E(4a^2) \ &{\rm for}\ a\le1/2\\[2mm] 4a\Big(E(1/4a^2)-\big(1-1/4a^2\big)K(1/4a^2)\Big)\ &{\rm for}\ a>1/2 \end{array}\right.\label{S0-3} \end{eqnarray} where $E(m)$ and $K(m)$ are the complete elliptic integrals of the second and first kind, respectively, with parameter $m$. \item[(b)] For $r=1-a$ (special case (b) above, only possible for $a<1$), where the intersection curve is an eight-shaped curve with a double point at the fixed point ${\boldsymbol s}_{0+}=(1,0,0)$ where sphere and cylinder touch each other, the area integral can also be calculated in closed form with the amazingly simple result \cite[sect.~3.2]{Krol07} \begin{equation}\label{S-sep} S=8\arcsin \sqrt{1-a\,}\ -8\sqrt{a(1-a)\,}\,. \end{equation} \item[(c)] For the Viviani case $r=a=1/2$ both results agree with $S_V$ in (\ref{S-viviani-1}). \end{itemize} \begin{figure} \caption{\label{fig_S_rho_a}} \end{figure} As an example, figure \ref{fig_S_rho_a} shows on the left the area $S$ enclosed by the (generalized) Viviani loops as a function of the radius $r$ of the cylinder for three values of the displacement $a$ chosen in the different regions. In each case, the area increases monotonically from $0$ to $4\pi$. The plot on the right shows the area as a function of the inverse cylinder displacement $a^{-1}$ for the curves (a) passing through the poles ($r=a$). These pole trajectories exist for all values of $a$. In the limit $a^{-1}\rightarrow 0$ the cylinder with $r=a$ approaches the $(y,z)$-plane and intersects the sphere in a great circle, which divides it into two equal hemispheres of area $2\pi$.
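The closed-form results can be cross-checked numerically; the following sketch (midpoint rule, step count our own choice) reproduces $S_0=2E(1)=2$ for the Viviani case $r=a=1/2$, and hence $S_V=2\pi-4$:

```python
import math

# Midpoint-rule evaluation of S_0 = int_0^pi sqrt(1 - 2a^2 - 2a^2 cos(phi)) dphi
# for the pole orbits r = a; at a = 1/2 this should give S_0 = 2E(1) = 2.
def s0_pole_orbit(a, steps=100000):
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        phi = (k + 0.5) * h
        val = 1.0 - 2 * a * a - 2 * a * a * math.cos(phi)
        total += math.sqrt(max(0.0, val)) * h
    return total

S0 = s0_pole_orbit(0.5)
S_V = 2 * math.pi - 2 * S0   # area enclosed by the Viviani curve, ~ 2.2832
```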
With increasing $a^{-1}$ the intersection loop is deformed and the area enclosed shrinks, but it is still a single closed curve up to the critical point $a=1/2$, the Viviani case, where it bifurcates into two loops encircling the two extrema.\\ Also shown in this figure is the area integral for case (b), see eq.~(\ref{S-sep}). These figure-eight-shaped trajectories only exist for $a<1$ and, with decreasing $a$, the area enclosed grows. For $a=1/2$ the cases (a) and (b) coincide, and both curves pass through the Viviani area $S/4\pi\approx 0.1817$ according to (\ref{S-viviani-1}). For still smaller values of $a$ the cylinder center approaches the center of the sphere and the area enclosed by the loops (a) through the poles goes to zero as $S\approx 2\pi a^2$, whereas the area enclosed by the loops through the touching point approaches the full surface $4\pi$. We shall return to discussing the role of the area integrals for the quantum spectrum of the Bose-Hubbard dimer in section \ref{s-quantum}, after reviewing the quantum Hamiltonian and its mean-field approximation in the following. \section{The Bose-Hubbard dimer and the mean-field approximation} \label{sec_BH_MF} The two-mode Bose-Hubbard system, describing cold bosonic atoms on a `lattice' consisting of only two sites, is a standard model in the field of cold atoms \cite{Milb97,Smer97,Ragh99}. It is described by the Hamiltonian \begin{equation} \hat H=\epsilon(\hat n_1 -\hat n_2) + v (\hat a_1^\dagger \hat a_2 + \hat a_1 \hat a_2^\dagger) +\case{1}{2} c (\hat n_1 - \hat n_2)^2, \label{BH-Hamiltonian} \end{equation} with mode energies $\pm\epsilon$, coupling $v$ and on-site interaction $c$. The $\hat a_j$ and $\hat a_j^\dagger$ denote particle annihilation and creation operators in mode $j$, respectively. The total particle number $\hat N=\hat n_1+\hat n_2=\hat a_1^\dagger\hat a_1+\hat a_2^\dagger\hat a_2$ is a conserved quantity.
Despite its simple theoretical structure the Bose-Hubbard dimer is of considerable importance also from an experimental point of view, describing for example ultracold atoms in a double-well trap or in the ground state of an external trap with two internal degrees of freedom \cite{Albi05,Zibo10}. Introducing self-adjoint angular momentum operators $\hat L_x$, $\hat L_y$ and $\hat L_z$ according to the Schwinger representation \begin{eqnarray} \nonumber \hat L_x =\case{1}{2}(\hat a_1^\dagger\hat a_2+\hat a_1\hat a_2^\dagger)\,,\\ \label{lxlylz}\hat L_y=\case{1}{2\rmi}\,(\hat a_1^\dagger\hat a_2-\hat a_1\hat a_2^\dagger)\,,\\ \nonumber\hat L_z=\case{1}{2}(\hat a_1^\dagger\hat a_1-\hat a_2^\dagger\hat a_2), \end{eqnarray} with $SU(2)$ commutation relation $[\hat{L}_x,\hat{L}_y]={\rm i}\hat{L}_z$ and cyclic permutations, the Hamiltonian (\ref{BH-Hamiltonian}) can be written as \begin{equation}\label{BH-hamiltonian-L} \hat H=2 \epsilon\hat L_z+2v \hat L_x+2c \hat L^2_z. \end{equation} Here the conservation of the particle number $N$ appears as the conservation of $L^2=\frac{N}{2}\big(\frac{N}{2}+1\big)$, i.e. the rotational quantum number $L =N/2$. Note that the Hamiltonian (\ref{BH-hamiltonian-L}) is also known as the Lipkin-Meshkov-Glick Hamiltonian, an angular momentum model originally introduced in nuclear physics as a solvable model against which to check typical approximation schemes of many-particle physics \cite{Lipk65,Mesh65,Glic65}. The mean-field approximation in the context of cold atoms is closely related to the classical approximation for the Lipkin-Meshkov-Glick system, which has been the subject of many studies in applications to quantum information theory or quantum phase transitions in the past (see, e.g., \cite{Vida04b,Lato05,Ribe07,Orus08} and references therein). Both the static and dynamic properties of the full many-particle system (\ref{BH-Hamiltonian}) can be numerically deduced in a straightforward manner.
This makes it an ideal model for the investigation of the correspondence to the approximate mean-field description, which we shall briefly discuss in the following. \subsection{Mean-field dynamics} The mean-field approximation can be formally obtained by replacing the bosonic operators by c-numbers: $\hat a_j\to \psi_j$, $\hat a_j^\dagger\to \psi_j^*$. Taking the macroscopic limit $N\to\infty$ with $cN=g={\rm const}$ yields the discrete nonlinear Schr\"odinger or Gross-Pitaevskii equation \begin{eqnarray} \rmi\dot{\psi_1}&=&\left(\epsilon+g(|\psi_1|^2-|\psi_2|^2)\right)\psi_1+v\psi_2\nonumber\\ \label{eqn-GPE-herm} \rmi \dot{\psi_2}&=& v\psi_1-\left(\epsilon+g(|\psi_1|^2-|\psi_2|^2)\right)\psi_2, \end{eqnarray} for $\hbar=1$, where $g=c N$ is the overall interaction between the particles. Similar to linear time-dependent Schr\"odinger equations, the nonlinear dynamics (\ref{eqn-GPE-herm}) also possesses a canonical structure \cite{Dira27,Wein89}: it can be derived from a classical Hamiltonian function \begin{eqnarray} H_{\rm cl}=\epsilon(|\psi_1|^2-|\psi_2|^2)+v(\psi_1^*\psi_2+\psi_1 \psi_2^*)+\frac{g}{2}(|\psi_1|^2-|\psi_2|^2)^2, \label{classical-E-psi} \end{eqnarray} with canonical equations $\rmi\dot \psi_j=\partial H_{\rm cl}/\partial\psi_j^*$. One can also formulate the nonlinear two-mode system in terms of the Bloch vector of a spin-$\case{1}{2}$ system, introducing the spin components \begin{eqnarray} \nonumber s_x=\case{1}{2} \big(\psi_1^*\psi_2+\psi_1\psi_2^*\big)\,,\\ \label{sxsysz} s_y= \case{1}{2\rmi} \big(\psi_1^*\psi_2-\psi_1\psi_2^*\big)\,,\\ \nonumber s_z=\case{1}{2} \big(\psi_1\psi_1^*-\psi_2\psi_2^*\big)\,,\ \end{eqnarray} in accordance with (\ref{lxlylz}).
In these variables the total energy takes the form \begin{equation}\label{eqn_Hamfct-s} H_{\rm cl}=2\epsilon s_z+2v s_x+2g s_z^2, \end{equation} and the nonlinear Schr\"odinger equation (\ref{eqn-GPE-herm}) translates to nonlinear Bloch equations of the form \begin{eqnarray} \dot s_x &=& -2\epsilon s_y-4g s_ys_z\nonumber\\ \dot s_y &=& 2\epsilon s_x+4gs_xs_z -2v s_z \\ \dot s_z &=& 2v s_y\,.\nonumber \label{bloch-eq-s} \end{eqnarray} These equations conserve the normalization, that is, the motion of the spin vector ${\boldsymbol s}=(s_x,s_y,s_z)$ is confined to the surface of the Bloch sphere with radius $|{\boldsymbol s}|=1/2$. Alternatively one can investigate the system in terms of the coordinates $p=|\psi_1|^2-|\psi_2|^2$ and $q=(\arg{(\psi_2)}-\arg{(\psi_1)})/2$, which are related to the coordinates on the Bloch sphere via \begin{eqnarray}\label{pq-transf} p=2s_z \ ,\quad 2q=\arctan s_y/s_x\,. \end{eqnarray} That is, they are similar to the (area preserving) projection of the vector ${\boldsymbol s}$ on a cylinder touching the sphere at the equator as introduced before in equations (\ref{cyl-pro}) and (\ref{cyl-prob}). The classical energy, that is, the Hamiltonian function, in terms of $p$ and $q$ reads \begin{eqnarray} H_{\rm cl}=\epsilon p+v\sqrt{1-p^2}\cos(2q)+\frac{g}{2}p^2, \label{classical-E-pq} \end{eqnarray} and the equations of motion (\ref{eqn-GPE-herm}) \begin{eqnarray} \label{eq-motion-p} \dot{p}&=&2v\sqrt{1-p^2}\sin(2q)\\ \label{eq-motion-q} \dot{q}&=& \epsilon-v\frac{p}{\sqrt{1-p^2}}\cos(2q)+gp, \end{eqnarray} are again canonical, that is, $\dot q=\partial H_{\rm cl}/\partial p$, $\dot p=-\partial H_{\rm cl}/\partial q$. This describes the motion of a pendulum whose length, however, depends dynamically on the momentum $p$. The energy $E=H_{\rm cl}$ is conserved. In the following we will confine ourselves to the case of a symmetric Bose-Hubbard dimer, $\epsilon=0$, which captures important characteristics of the dynamics.
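As a consistency check (a sketch, not part of the original text; the parameter values and the state are arbitrary choices), the three equivalent forms of the classical energy, (\ref{classical-E-psi}), (\ref{eqn_Hamfct-s}) and (\ref{classical-E-pq}), can be evaluated for a normalized state $(\psi_1,\psi_2)$:

```python
import cmath
import math

# Evaluate the classical energy in the psi-, Bloch- and (p, q)-variables for a
# normalized state (psi_1, psi_2); all three expressions must coincide.
eps, v, g = 0.3, 1.0, 0.7                  # assumed parameter values
psi1 = cmath.exp(0.4j) * math.cos(0.8)
psi2 = cmath.exp(1.1j) * math.sin(0.8)

n_diff = abs(psi1) ** 2 - abs(psi2) ** 2
H_psi = (eps * n_diff
         + 2 * v * (psi1.conjugate() * psi2).real
         + 0.5 * g * n_diff ** 2)

sx = (psi1.conjugate() * psi2).real        # spin components (sxsysz)
sy = (psi1.conjugate() * psi2).imag
sz = 0.5 * n_diff
H_s = 2 * eps * sz + 2 * v * sx + 2 * g * sz ** 2

p = 2 * sz                                 # canonical variables (pq-transf)
q = (cmath.phase(psi2) - cmath.phase(psi1)) / 2
H_pq = eps * p + v * math.sqrt(1 - p * p) * math.cos(2 * q) + 0.5 * g * p ** 2
```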
In addition we will assume $v>0$ without loss of generality. In this case, the fixed points of the dynamics of the Bloch vector (\ref{bloch-eq-s}), corresponding to stationary solutions of the Gross-Pitaevskii equation, are given by \begin{equation} s_y=0\ ,\quad s_z\in\Big\{0,\, 0,\, \pm\case{1}{2}\sqrt{1-v^2/g^2}\, \Big\}. \end{equation} Thus we have two fixed points for $|g|\leq v$ with energies $E=-v$ and $E=v$ which are a minimum and a maximum of the energy surface. If the interaction strength $|g|$ is increased one of these extrema bifurcates at the critical interaction $|g|=v$ into a saddle point, still at the equator, and two extrema located away from the equator at $s_z=\pm\case{1}{2}\sqrt{1-v^2/g^2}$ with energy $E_{\rm m}=\frac{g}{2}(1+v^2/g^2)$, two maxima for repulsive interaction $g>0$ or two minima for attractive interaction. In both cases the other extremum (minimum with energy $E=-v$ for $g>0$ or maximum with energy $E=v$ for $g<0$) remains at the equator. In this supercritical case the two fixed points with $s_z\ne 0$ correspond to stationary states mainly populating one of the levels, the \textit{self trapping} states. For the following discussion it will be convenient to rescale the spin components as $x=2s_x$, $y=2s_y$, $z=2s_z$, where the nonlinear Bloch equations (\ref{bloch-eq-s}) in the symmetric case, \begin{eqnarray} \dot x &=& -2g yz\nonumber\\ \dot y &=& +2g x z -2v z \label{bloch-eq-r}\\ \dot z &=& 2v y\,,\nonumber \end{eqnarray} restrict the motion of the vector ${\boldsymbol s}=(x,y,z)$ to the unit sphere $|{\boldsymbol s}|^2=x^2+y^2+z^2=1$. We will assume that the parameters $v$ and $g$ are non-negative, otherwise the dynamics can be obtained by simple symmetry arguments. We will also use spherical polar coordinates \begin{equation}\label{eqn_polar} (x,y,z)=(\sin \vartheta \cos \varphi,\,\sin \vartheta \sin \varphi,\,\cos \vartheta)\,.
\end{equation} Of basic importance for the structure of the flow on the sphere are the fixed points \begin{equation}\label{fixpoint} {\boldsymbol s}_{0\pm}=(\pm 1,0,0) \quad {\rm and} \quad {\boldsymbol s}_{1\pm}=(a,0,\pm\sqrt{1-a^2}) \quad {\rm for} \quad a^2<1 \end{equation} with $a=v/g$. The energy \begin{equation}\label{eqn_energy} E=vx+\case{1}{2}\,g\,z^2 \end{equation} (see eq.~(\ref{eqn_Hamfct-s})) is conserved and the fixed points ${\boldsymbol s}_{0\pm}$ correspond to a maximum and a minimum energy $E_{0\pm}=\pm v$. At $a=1$ the maximum bifurcates into a saddle point at ${\boldsymbol s}_{0+}$ and two maxima at ${\boldsymbol s}_{1\pm}$ with energy $E_{1\pm}=g(1+a^2)/2$ in the self trapping transition. It is now easy to see that the dynamical trajectories (\ref{bloch-eq-r}) are exactly given by the generalized Viviani curves introduced in section \ref{ss-viviani}: Inserting $z^2=1-x^2-y^2$ we can rewrite equation (\ref{eqn_energy}) in the form (\ref{eqn_cyl}) with \begin{equation}\label{eqn_a_rho} r=\sqrt{1+a^2-2E/g\,}\,. \end{equation} Thus, the trajectories generated by the flow (\ref{bloch-eq-r}) can be interpreted as curves of intersection of the unit sphere with a cylinder as illustrated in figures \ref{fig_viv_05} and \ref{fig_viv_07}, that is, Viviani curves. Of special interest for the two-mode Bose-Hubbard dynamics is the case where initially the system is in the lower or upper state, i.e., at one of the poles on the Bloch sphere. This imposes the condition $r=a$. For $a>\case{1}{2}$ (i.e. $g<2v$) this trajectory traces out a closed euclidean ellipse on the sphere. In the limit $a\rightarrow\infty$ this is a great circle in the $(y,z)$-plane, which tightens as $a$ is reduced. Note that the intersection points of this ellipse with the equator at $x=1/(2a)$, $y=\pm\sqrt{1-1/(4a^2)\,}$ approach each other until they meet for $a=\case{1}{2}$ at the fixed point ${\boldsymbol s}_{0+}=(1,0,0)$, a double point of the figure-eight-shaped trajectory.
For $a<\case{1}{2}$ the ellipse consists of two loops on the northern and southern hemispheres which end up as circles around the poles for $a=0$. Let us now return to the time dependence generated by the flow (\ref{bloch-eq-r}). The motion is periodic with a period $T$ derived by several authors before (see, e.g., \cite{Kenk86,Holt01a,Simo12}). Here we present a very simple derivation, which also provides new insight into the dynamics. For the simplest case, $g=0$, the flow equations (\ref{bloch-eq-r}) are linear. The $x$-component is conserved, $x=x_0$, and we find a global rotation around the $x$-axis with frequency $\omega_{0x}=2v$. In our picture this corresponds to the limit $a,r\rightarrow \infty$ with fixed $a-r=x_0$. Somewhat more interesting is the case $v=0$. Here the $z$-component is conserved, $z=z_0$, and we find a rotation around the $z$-axis. In this case we have \begin{equation} \dot x=-2gz_0y\ , \quad \dot y=2gz_0\,x\quad \rightarrow \quad \ddot x=-2gz_0\,\dot y =-4g^2z_0^2\,x \end{equation} and the frequency \begin{equation} \omega_{0z}= 2g|z_0|=2g\sqrt{1-r^2} \end{equation} varies from a maximum $\omega=2g$ at the poles to $\omega=0$ at the equator. For the general case, inserting the time derivative of the transformation (\ref{eqn_circle}) into the flow equations (\ref{bloch-eq-r}) we find \begin{equation}\label{dgl-penulum} \dot \phi=2gz=\pm 2g\sqrt{1-a^2-r^2-2ar \cos \phi\,}\,, \end{equation} which is just the angular velocity of a simple mathematical pendulum. This is even more evident from the second time derivative: \begin{equation}\label{dgl-pendulum1} \ddot \phi=2g\dot z=4vgy=4vgr\sin \phi\, \end{equation} or with $\phi=\pi+\theta$ \begin{equation}\label{dgl-pendulum2} \ddot \theta+4vgr\sin \theta =0\,, \end{equation} which is the equation of motion for a mathematical pendulum, where the ratio between gravitational acceleration and pendulum length is replaced by the energy-dependent constant $4vgr$.
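A short numerical experiment (a sketch with assumed couplings, not from the paper) illustrates that the flow (\ref{bloch-eq-r}) indeed stays on the unit sphere and on the energy cylinder $r=\sqrt{1+a^2-2E/g}$ of (\ref{eqn_a_rho}):

```python
import math

# Integrate the symmetric Bloch equations (bloch-eq-r) with a standard RK4 step
# and verify conservation of |s|^2 and of the energy E = v*x + g*z^2/2.
def bloch_rhs(s, v, g):
    x, y, z = s
    return (-2 * g * y * z, 2 * g * x * z - 2 * v * z, 2 * v * y)

def rk4_step(s, dt, v, g):
    def shift(p, k, c):
        return tuple(pi + c * ki for pi, ki in zip(p, k))
    k1 = bloch_rhs(s, v, g)
    k2 = bloch_rhs(shift(s, k1, dt / 2), v, g)
    k3 = bloch_rhs(shift(s, k2, dt / 2), v, g)
    k4 = bloch_rhs(shift(s, k3, dt), v, g)
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

v, g = 1.0, 2.0              # assumed couplings, a = v/g = 1/2
a = v / g
s = (0.0, 0.0, 1.0)          # start at the north pole, i.e. the r = a orbit
E = v * s[0] + 0.5 * g * s[2] ** 2
r = math.sqrt(1 + a * a - 2 * E / g)
for _ in range(4000):
    s = rk4_step(s, 0.0025, v, g)
x, y, z = s
```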
The two cases (i) and (ii) distinguished in section \ref{ss-viviani} are simply the well-known librational and rotational motions of the pendulum.\\[2mm] (i) {\it Rotation:} With the abbreviations \begin{equation}\label{dgl-pendulumb} b=\frac{a^2+r^2-1}{2ar}\ ,\quad B=2g\sqrt{2ar}\, \end{equation} we obtain the period as twice the integral over $\dot\theta^{-1}$ from $0$ to $\theta_0=\pi$ as \begin{equation}\label{sol-periodR} T_{\rm R}=\frac{2}{B}\int_0^{\pi}\frac{{\rm d}\theta}{\sqrt{\cos \theta -b\,}} =\frac{1}{g\sqrt{mar}}\,K(1/m)\,, \end{equation} (note that $|b|>1$) with \begin{equation}\label{parameter-m} m=\frac{1-b}{2}=\frac{1-(r-a)^2}{4ar}\geq 1 \end{equation} and $K$ is the complete elliptic integral of the first kind.\\[2mm] (ii) {\it Libration:} For $|b|\le 1$ we have a hindered rotation restricted to the interval $|\theta|\le \theta_0 =\arccos b$, and the period is $4$ times the integral over the interval $0<\theta<\theta_0$, that is, \begin{equation}\label{sol-periodL} T_{\rm L}=\frac{4}{B}\int_0^{\theta_0}\frac{{\rm d}\theta}{\sqrt{\cos \theta -b\,}} =\frac{2}{g\sqrt{ar}}\,K(m)\,, \end{equation} where the parameter (\ref{parameter-m}) satisfies $0\le m\le1$. This, as well as the explicit construction of the solution in terms of Jacobi elliptic functions, is, of course, well known for the pendulum and also for the Bose-Hubbard dimer \cite{Holt01a} (see, e.g., \cite{Reic90,Reic04}). It should be noted that the integrals of section \ref{s-area} also appear as action integrals for the pendulum.
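The period formulas can be evaluated to machine precision with the arithmetic-geometric mean, using the parameter convention $K(m)=\int_0^{\pi/2}{\rm d}\psi/\sqrt{1-m\sin^2\psi}$. The sketch below checks the AGM value against a direct Simpson quadrature of the Legendre form and evaluates $T_{\rm L}$ for a sample libration orbit (the values $v=1$, $g=2$, $r=0.7$ are illustrative).

```python
import math

def K(m):
    # complete elliptic integral of the first kind, parameter convention:
    # K(m) = pi / (2 * agm(1, sqrt(1 - m)))
    x, y = 1.0, math.sqrt(1.0 - m)
    while abs(x - y) > 1e-15:
        x, y = 0.5 * (x + y), math.sqrt(x * y)
    return math.pi / (2.0 * x)

def K_quad(m, n=20001):
    # independent check: composite Simpson rule on the Legendre form
    h = (math.pi / 2.0) / (n - 1)
    s = 0.0
    for i in range(n):
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        s += w / math.sqrt(1.0 - m * math.sin(i * h) ** 2)
    return s * h / 3.0

v, g = 1.0, 2.0
a = v / g                                     # a = 1/2
r = 0.7                                       # libration: a + r > 1, |b| <= 1
b = (a * a + r * r - 1.0) / (2.0 * a * r)     # eq. (dgl-pendulumb)
m = (1.0 - b) / 2.0                           # eq. (parameter-m)
assert abs(m - (1.0 - (r - a) ** 2) / (4.0 * a * r)) < 1e-12
assert abs(b) <= 1.0 and 0.0 <= m <= 1.0

T_L = 2.0 / (g * math.sqrt(a * r)) * K(m)     # eq. (sol-periodL)
assert abs(K(m) - K_quad(m)) < 1e-9
assert T_L > 0.0
```

The AGM iteration converges quadratically, so a handful of steps suffices for double precision.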
Some limiting cases may be of interest:\\[2mm] (1) For trajectories with $r$ close to the boundary of the allowed dynamical region, i.e.~for $r \gtrsim a-1$ for $a>1$ or for $r \lesssim a+1$, we have $m \approx 0$ and with $K(0)=\pi/2$ the period (\ref{sol-periodL}) reduces to \begin{equation} T_{\rm L\pm}=\frac{\pi}{g\sqrt{a(a\mp 1)}}= \frac{\pi}{\sqrt{v(v\mp g)}}\,, \label{T_L} \end{equation} where one recognizes the celebrated Bogoliubov frequency $\Omega =2\pi/T=2\sqrt{v(v \mp g)}$ for small excitations around the ground- and highest excited state, respectively \cite{Bogo47,Pita03}. Clearly for this small angle oscillation this result can be directly obtained from the pendulum equation (\ref{dgl-pendulum2}). On the Bloch sphere this is an oscillation in the vicinity of the fixed points ${\boldsymbol s}_{0\pm}=(\pm 1, 0,0)$ (compare eq.~(\ref{fixpoint})).\\[1mm] (2) For $a<1$ and $r=0$ we find the two fixed points ${\boldsymbol s}_{1\pm}=(a,0,\pm\sqrt{1-a^2})$ from (\ref{fixpoint}). The oscillation frequency in their vicinity can be found from (\ref{dgl-pendulum1}) as \begin{equation} \omega_{\rm R}=2g\sqrt{1-a^2}=2\sqrt{g^2-v^2}\,. \label{T_R} \end{equation} (3) In the special case (b) mentioned in section \ref{ss-viviani}, where $a<1$ and $r=1-a$ (i.e.~$b=-1$ and $m=1$), the orbit approaches the unstable fixed point ${\boldsymbol s}_{0+}=(1,0,0)$ along the separatrix and the period becomes infinite.\\[2mm] (4) For $r=a$ the orbit passes through the poles (case (a) in section \ref{ss-viviani}) and the period is given by \begin{equation}\label{periodphoa} T=\Bigg\{ \begin{array}{lll} \frac{2}{g}\,K(4a^2) \quad &{\rm for}\quad a < \case{1}{2} \\ \frac{2}{v}\,K(1/(4a^2))\quad &{\rm for}\quad a>\case{1}{2} \end{array} \end{equation} as shown in figure \ref{fig_S_rho_a}. We have shown that the restriction to the cylinder (\ref{eqn_cyl}) reduces the dynamics of the symmetric dimer on the Bloch sphere to a simple pendulum motion on the level sets of constant energy.
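The limiting cases can be checked against the general period formulas; the sketch below (illustrative parameter values, AGM evaluation of $K$ in the parameter convention) verifies the Bogoliubov limit for $r$ close to $a+1$ and the reduction of the generic libration and rotation periods to the two branches of eq.~(\ref{periodphoa}) at $r=a$.

```python
import math

def K(m):   # complete elliptic integral (parameter convention) via the AGM
    x, y = 1.0, math.sqrt(1.0 - m)
    while abs(x - y) > 1e-15:
        x, y = 0.5 * (x + y), math.sqrt(x * y)
    return math.pi / (2.0 * x)

v, g = 1.0, 2.0
a = v / g

# (1) Bogoliubov limit: r close to a + 1 gives m ~ 0, T_L -> pi/sqrt(v(v+g))
r = a + 1.0 - 0.01
b = (a * a + r * r - 1.0) / (2.0 * a * r)
m = (1.0 - b) / 2.0
T_L = 2.0 / (g * math.sqrt(a * r)) * K(m)
assert abs(T_L - math.pi / math.sqrt(v * (v + g))) < 0.02

# (4) orbits through the poles (r = a): both branches of eq. (periodphoa)
a1 = 0.8                        # a > 1/2: libration branch (here v = g*a1)
b1 = (2.0 * a1 * a1 - 1.0) / (2.0 * a1 * a1)
m1 = (1.0 - b1) / 2.0
assert abs(m1 - 1.0 / (4.0 * a1 * a1)) < 1e-12
T1 = 2.0 / (g * math.sqrt(a1 * a1)) * K(m1)               # generic T_L at r = a
assert abs(T1 - (2.0 / (g * a1)) * K(1.0 / (4.0 * a1 * a1))) < 1e-12

a2 = 0.3                        # a < 1/2: rotation branch, m = 1/(4a^2) > 1
m2 = 1.0 / (4.0 * a2 * a2)
T2 = 1.0 / (g * math.sqrt(m2 * a2 * a2)) * K(1.0 / m2)    # generic T_R at r = a
assert abs(T2 - (2.0 / g) * K(4.0 * a2 * a2)) < 1e-12
```

In the rotation branch the simplification rests on $\sqrt{m a^2}=1/2$ for $m=1/(4a^2)$, which is what collapses the prefactor to $2/g$.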
A similar construction for Euler's equations for the free rigid body can be found in \cite{Holm91,Holm08}. \subsection{Full many-particle description} \label{s-quantum} In this section we will illustrate some implications of the properties of the mean-field approximation discussed above for the many-particle two-mode Bose-Hubbard Hamiltonian (\ref{BH-Hamiltonian}) with $\hbar=1$, where we confine ourselves to the supercritical case $g>v$ ($v>0$, $g>0$). \begin{figure} \caption{\label{fig_density}} \end{figure} \begin{figure} \caption{\label{fig_husimi}} \end{figure} Diagonalizing the $N$-particle Hamiltonian we obtain the $N+1$ energy eigenvalues $E_n$, $n=0,\ldots,N$, which are found in the classically allowed mean-field interval \begin{equation} \label{E-interval} -v<\frac{E_n}{N}<\frac{g}{2}\Big(1+\frac{v^2}{g^2}\Big) \end{equation} as discussed above. Figure \ref{fig_density} shows the level density \begin{equation} \rho=\frac{\Delta n}{\Delta E} \end{equation} as a function of the mean-field energy $E$ for $N=1000$ particles and $v=1$, $g=2$, where the energy interval is discretized into 30 equidistant boxes $\Delta E$. Semiclassically the individual quantum energy eigenvalues $E_n$ can approximately be calculated from the classical action integrals by the Bohr-Sommerfeld quantization scheme (see, e.g., \cite{Gutz90}), i.e.~by the area $S(E)$ on the Bloch sphere enclosed by the classical orbit \cite{07semiMP,Simo12}, and the level density is related to the energy derivative of the action, i.e.~the period $T$, see eqs.~(\ref{T_L}) and (\ref{T_R}), which is also shown in the figure (compare also figure \ref{fig_S_rho_a}). The mean-field period $T(E)$ diverges for trajectories passing through the saddle point, which agrees with the Viviani case for the chosen parameter values.
For energies above the Viviani energy $E_V=v=1$ the action area consists of two disconnected loops with the same area and therefore the period is multiplied by a factor of two. The Viviani action $S_V\approx 2.28$ in (\ref{S-viviani-1}) determines semiclassically the number of states supported by the area of the Viviani window, i.e.~the number $N_V\approx S_V(N+1)/(4\pi)\approx 182$ of states above the Viviani energy $E_V$. Hence we expect $N+1-N_V=819$ states below the Viviani energy. In the vicinity of this Viviani energy the quantum energy density shows a pronounced maximum \cite{07semiMP,Chuc10}. For larger energies we have two almost degenerate states with opposite symmetry. As an example figure \ref{fig_husimi} shows the Husimi phase space distributions $|\langle \vartheta,\varphi|\Psi_n\rangle|^2$ of the eigenstates $|\Psi_n\rangle$ on the Bloch sphere for $n=790$, $819$, $820$ and $837$ with energies $E(n)/N = 0.9733$, $1.0007$, $1.0017$, and $1.0158$. The first state localizes on a single closed classical orbit, the second one, $n=819$, almost exactly at the saddle point in agreement with the semiclassical estimate above. For the other two states the Husimi densities are localized on the two disconnected loops encircling the classical stationary points ${\boldsymbol s}_{1\pm}$ for the corresponding energies. \begin{figure} \caption{\label{fig_MFST2}} \end{figure} Let us conclude with a brief discussion of the implications of the classical Viviani curve on the quantum dynamics for the important case when the system is prepared at time $t=0$ in one of the modes. The left panel of figure \ref{fig_MFST2} shows the nonlinearity dependence of the time evolution of the mean-field population imbalance $z=|\psi_1|^2-|\psi_2|^2$ (shown in false colors), where the system is prepared in mode $2$ corresponding to the south pole of the Bloch sphere \cite{10nhbh}.
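The semiclassical state counts quoted above amount to a one-line Bohr-Sommerfeld estimate and can be reproduced directly:

```python
import math

N = 1000
S_V = 2.28                                 # Viviani action, eq. (S-viviani-1)
N_V = S_V * (N + 1) / (4.0 * math.pi)      # states above the Viviani energy
assert round(N_V) == 182
assert (N + 1) - round(N_V) == 819         # states below the Viviani energy
```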
For vanishing nonlinearity one would recover the usual sinusoidal Rabi oscillations between the two modes. For larger nonlinearities the oscillation period increases but one still observes a periodic complete population transfer between the modes. Although the self trapping bifurcation happens at $g=v$, one only observes a characteristic change in the dynamics at the Viviani critical value of $g=2v=2$. This can be understood in the following way: At the self trapping transition a saddle point appears, but the orbit passing through the north pole does not reach this point and consequently does not change its characteristics of a complete population transfer. With increasing nonlinearity, however, it coincides with the separatrix orbit at the critical point $g=2v$, the period of which diverges, as can be nicely observed in figure \ref{fig_MFST2}. For larger values we still find oscillatory behavior; however, $z$ stays confined to negative values, so the system mainly populates the second mode and performs self-trapping oscillations. Also shown in figure \ref{fig_MFST2} (right panel) is the corresponding quantum many-particle expectation value $2\langle\hat L_z\rangle/N$ for $N=1000$ particles and an initial coherent state localized at the south pole. For vanishing interaction the mean-field description is exact, but for nonvanishing interactions one observes a decay of the population imbalance, in particular in the vicinity of the critical interaction. The mean-field approximation is still restricted to the Bloch sphere, whereas the many-particle angular momentum expectation value can penetrate the sphere. This breakdown of the mean-field approximation \cite{Vard01b,Angl01} is a consequence of a mean-field representation by a single phase space point and can be partly cured by the Liouville dynamics approach \cite{07phase,09phaseappl,Grae09dis,Henn12}, i.e.
by averaging over an ensemble of initial conditions mimicking the Husimi distribution in classical phase space. \section{Summary} \label{sec_sum} An interesting and unexpected interconnection between contemporary cold atom quantum physics and much older studies in mathematics and astronomy is provided by the celebrated Bose-Hubbard dimer. We have identified the mean-field trajectories of the symmetric Bose-Hubbard dimer as (generalized) Viviani curves or Euclidean spherical ellipses. Furthermore we have shown that the dynamics reduces to the oscillation of a mathematical pendulum on a circle with an energy dependent radius as illustrated in figure \ref{fig-circles12}. In the librational case (ii) the pendulum motion is restricted to an angular region inside the sphere in the $(x,y)$-plane (the full line in the figure). The corresponding motion in the $z$-direction extends from the northern to the southern hemisphere and the spherical ellipse is a single closed loop. For the rotational pendulum motion (case (i)) the spherical ellipse consists of two loops on the northern and southern hemisphere. The projection on the $z$-axis is restricted to an interval on the positive or negative half-axis, respectively. In the language of the Bose-Hubbard system, $z$ is the population imbalance and therefore the population is trapped in a certain interval, an effect known as self-trapping. The self-trapping transition occurs for $g=v$, i.e.~when the shift $a$ of the center of the cylinder equals the radius of the sphere. The areas enclosed by the Viviani curves are thus the action integrals needed for a semiclassical quantization of the many-particle spectrum, and govern the energy density. The Viviani curve appears as dynamical separatrix between full oscillations and self-trapping oscillations when the system is prepared in one of the two modes. \section*{Acknowledgments} EMG gratefully acknowledges support via the Imperial College JRF scheme.
\section*{References} \end{document}
\begin{document} \title{Connections between Romanovski and other polynomials} \author{H. J. Weber\\Department of Physics\\University of Virginia\\ Charlottesville, VA 22904, USA} \maketitle \begin{abstract} A connection between Romanovski polynomials and those polynomials that solve the one-dimensional Schr\"odinger equation with the trigonometric Rosen-Morse and hyperbolic Scarf potential is established. The map is constructed by reworking the Rodrigues formula in an elementary and natural way. The generating function is summed in closed form from which recursion relations and addition theorems follow. Relations to some classical polynomials are also given. \end{abstract} \leftline{MSC: 33C45, 42C15, 33C30, 34B24} \leftline{PACS codes: 02.30.Gp; 02.30.Hq; 02.30.Jr, 03.65.Ge} \leftline{Keywords: Romanovski polynomials; complexified Jacobi polynomials;} \leftline{~~~~~~~~~~~~~~~generating function; recursion relations; addition theorems} \section{Introduction and review of basic results} Romanovski polynomials were discovered in 1884 by Routh~\cite{routh} in the form of complexified Jacobi polynomials on the unit circle in the complex plane and were then rediscovered as real polynomials by Romanovski~\cite{rom} in a statistics framework. Recently real polynomial solutions of the Scarf~\cite{cot} and Rosen-Morse potentials~\cite{ck} in (supersymmetric) quantum mechanics were recognized~\cite{rwack} to be related to the Romanovski polynomials. Here we apply to Romanovski polynomials a recently introduced natural method of reworking the Rodrigues formula~\cite{hjw} that leads to connections with other polynomials. The paper is organized as follows. 
In the Introduction we define the complementary polynomials $Q_\nu^{(\alpha,-a)}(x)$ then establish both a recursive differential equation satisfied by them and the procedure for the systematic construction of the $Q_{\nu}^{(\alpha,-a)}(x),$ derive their Sturm-Liouville differential equation (ODE), their generating function and its consequences, all based on the results of ref.~\cite{hjw}. Section~2 deals with a parameter addition theorem, Sect.~3 with orthogonality integrals and Sect.~4 with connections of these Romanovski polynomials to the classical polynomials. Sect.~5 deals with further applications using auxiliary polynomials. {\bf Definition.} The Rodrigues formula that generates the polynomials is given by \begin{eqnarray} P_l^{(a,\alpha)}(x)=\frac{1}{w_l(x)}\frac{d^{l}}{dx^{l}}[w_l(x)\sigma(x)^{l}] =\frac{\sigma(x)^l}{w_0(x)}\frac{d^l w_0(x)}{dx^l}, \quad l=0,1,...\ . \label{rds} \end{eqnarray} where $\sigma(x)\equiv 1+x^2$ is the coefficient of $y''$ of the hypergeometric ODE (1) of ref.~\cite{hjw} that the polynomials satisfy. The variable $x$ is real and ranges from $-\infty$ to $+\infty.$ The corresponding weight functions \begin{eqnarray} w_l(x)=\sigma(x)^{-(l+a+1)} e^{-\alpha \cot^{-1} x}=\sigma(x)^{-l} w_0(x) \label{wf} \end{eqnarray} depend on the parameters $a, \alpha$ that are independent of the degree $l$ of the polynomial $P_l^{(a,\alpha)}(x).~\diamond$ {\it Lemma.} {\it The weight functions of the $P_l^{(a,\alpha)}(x)$ polynomials satisfy Pearson's first-order ODE} \begin{eqnarray} \sigma(x)\frac{dw_l(x)}{dx}=[\alpha-2x(l+a+1)] w_l(x). 
\label{pearson} \end{eqnarray} {\bf Proof.} This is straightforward to check using $d\cot^{-1} x/dx=-1/ \sigma(x).~\diamond$ We now apply the method of ref.~\cite{hjw} and introduce the {\it complementary polynomials} $Q_\nu^{(\alpha,-a)}(x)$ defining them inductively according to the Rodrigues representation (see Eq.~(5) of ref.~\cite{hjw}) \begin{equation} P_l^{(a,\alpha)}(x)=\frac{1}{w_l(x)}\frac{d^{l-\nu}}{dx^{l-\nu}} [\sigma(x)^{l-\nu}w_l(x)Q_\nu^{(\alpha,-a)}(x)],\,\,\,\,\, \nu=0, 1, \ldots, l. ~\diamond \label{Qdef} \end{equation} For Eq.~(\ref{Qdef}) to agree with the Rodrigues formula~(\ref{rds}) for $\nu=0$ requires $Q_0^{(\alpha,-a)}(x)\equiv 1.$ Then comparing Eq.~(\ref{pearson}) with Eq.~(4) of ref.~\cite{hjw} we find the coefficient $\tau(x)=\alpha- \sigma'(x)(a+l)$ of $y'$ in the ODE of the polynomials. Comparing instead with the ODE~(37) of ref.~\cite{rwack} gives their parameter $\beta=-a.$ We now {\it identify the polynomials of ref.~\cite{hjw} with those defined in Eq.~(\ref{Qdef})} \begin{eqnarray} {\cal P}_\nu(x;l)=Q_\nu^{(\alpha,-a)}(x),~l\geq \nu.\label{id} \end{eqnarray} We will show below that the polynomials $Q_\nu^{(\alpha,-a)}(x)$ are independent of the parameter $l.~\diamond$ {\it Theorem~1.1. $Q_\nu^{(\alpha,-a)}(x)$ is a polynomial of degree $\nu$ that satisfies the recursive differential relation} \begin{eqnarray}\nonumber Q_{\nu+1}^{(\alpha,-a)}(x)&=&\sigma(x)\frac{dQ_{\nu }^{(\alpha,-a)}(x)}{dx} +[\tau(x)+2x(l-\nu-1)]Q_{\nu}^{(\alpha,-a)}(x)\\\nonumber &=&\sigma(x)\frac{dQ_{\nu }^{(\alpha,-a)}(x)}{dx} +[\alpha-2x(a+\nu+1)]Q_{\nu}^{(\alpha,-a)}(x),\\ \nu&=&0, 1, \ldots.\ \label{rode} \end{eqnarray} {\bf Proof.} The inductive proof of Theorem~2.2 of ref.~\cite{hjw} applied to the polynomial $Q_\nu^{(\alpha,-a)}(x)$ proves this theorem, and Eq.~(\ref{rode}) agrees with Eq.~(76) of ref.~\cite{rwack} provided their parameter $\beta=-a.$
Since Eq.~(\ref{rode}) is independent of the parameter $l,$ so are the polynomials $Q_\nu^{(\alpha,-a)}(x)$ that are generated from it. $\diamond$ Comparing the recursive ODE~(\ref{rode}) with one stated in~\cite{rom} leads us to the {\it identification of our polynomials} \begin{eqnarray} Q_{k}^{(\alpha,k-m)}(x)=\varphi_k(m,x),\ Q_\nu^{(\alpha,-a)}(x)= \varphi_{\nu}(a+\nu, x) \label{roq} \end{eqnarray} as a {\it Romanovski polynomial} (with its parameter depending on its degree), and comparing with Eq.~(69) of ref.~\cite{rwack}, \begin{eqnarray} Q_\nu^{(\alpha,-a)}(x)=R_\nu^{(\alpha,\beta-\nu)}(x),~\beta=-a.~\diamond \label{comp} \end{eqnarray} Notice that the parameter $\alpha$ ($-\nu$ in ref.~\cite{rom}) is suppressed in Romanovski's notation. The fact that the integer index $\nu$ of the complementary polynomials occurs in the parameter $m$ of the Romanovski polynomials is sometimes disadvantageous (for orthogonality), but also occasionally a definite advantage (for the generating function). Moreover, \begin{eqnarray} {\cal P}_l(x;l)=Q_l^{(\alpha,-a)}(x)=K_l C_l^{(\alpha,-a)}(x), \label{cs} \end{eqnarray} where $K_l$ is a normalization constant and the $C_l^{(\alpha,-a)}(x),$ after a change of variables, become part of the solutions of the Scarf and Rosen-Morse potentials in the Schr\"odinger equation~\cite{cot},\cite{ck}. However, for $C_l^{(\alpha,-a)}(x)$ to be part of the solution of the Schr\"odinger equation with the trigonometric Rosen-Morse potential requires $\alpha=\alpha_l=\frac{2b}{l+a}$~\cite{ck}; but the polynomials may be defined for the general parameter $\alpha.$ Recursion relations and recursive ODEs are practical tools to systematically generate the polynomials. {\it Theorem~1.2. The polynomial $Q_\nu^{(\alpha,-a)}(x)$ satisfies the basic recursive ODE} \begin{eqnarray} \frac{dQ^{(\alpha,-a)}_{\nu}(x)}{dx}=-\nu(2a+\nu+1)Q^{(\alpha,-a)}_{\nu-1}(x) \equiv -\lambda_\nu Q^{(\alpha,-a)}_{\nu-1}(x). 
\label{bode} \end{eqnarray} {\bf Proof.} Eq.~(\ref{bode}) follows from a comparison of the recursive ODE~(\ref{rode}) with a three-term recursion relation as outlined in Corollary~4.2 of ref.~\cite{hjw}, coincides with ODE~(32) of ref.~\cite{hjw}, and agrees with Eq.~(75) of ref.~\cite{rwack} provided their $\beta=-a,$ which is consistent with our previous statements. $\diamond$ Thus, taking a derivative of $Q^{(\alpha,-a)}_{\nu}(x)$ just lowers its degree (and index) by unity, up to a constant factor, a property the Romanovski polynomials share with all classical polynomials. {\it Theorem~1.3. The polynomials $Q_{\nu}^{(\alpha,-a)}(x)$ satisfy the differential equation of Sturm-Liouville type} \begin{eqnarray} \sigma(x)\frac{d^2 Q^{(\alpha,-a)}_\nu(x)}{dx^2}+[\alpha-\sigma'(x)(a+\nu)] \frac{d Q^{(\alpha,-a)}_\nu(x)}{dx}=-\lambda_\nu Q^{(\alpha,-a)}_\nu(x). \label{qode} \end{eqnarray} {\bf Proof.} Substituting the basic ODE~(\ref{bode}) in the recursive ODE~(\ref{rode}) yields \begin{eqnarray} Q^{(\alpha,-a)}_{\nu+1}(x)=-\frac{\sigma(x)}{\lambda_{\nu+1}} \frac{d^2Q^{(\alpha,-a)}_{\nu+1}(x)}{dx^2}-\frac{1}{\lambda_{\nu+1}}[\alpha- (a+\nu+1)\sigma']\frac{dQ^{(\alpha,-a)}_{\nu+1}(x)}{dx} \end{eqnarray} which, for $\nu\to \nu-1$, is the ODE of the theorem. Again, the ODE is independent of the parameter $l$ in Eq.~(\ref{id}). $\diamond$ {\it Theorem~1.4. The polynomial $P_l(x)$ satisfies the ODE} \begin{eqnarray} \sigma(x)\frac{d^2 P_l(x)}{dx^2}+\tau(x)\frac{d P_l(x)}{dx}=-\lambda_l P_l(x).
\label{pode} \end{eqnarray} {\bf Proof.} For $\nu=l,$ we use $P_l(x)={\cal P}_l(x;l)$ in the notation of ref.~\cite{hjw} to rewrite the recursive ODE~(\ref{rode}) of ref.~\cite{hjw} and Eq.~(\ref{bode}) as \begin{eqnarray}\nonumber {\cal P}_l(x;l)&=&\sigma(x){\cal P'}_{l-1}(x;l)+\tau(x){\cal P}_{l-1}(x;l) =P_l(x)\\&=&-\frac{\sigma(x)}{\lambda_l}P''_l(x)-\frac{\tau(x)}{\lambda_l} P'_l(x), \label{Qode} \end{eqnarray} which is the ODE~(1) in ref.~\cite{hjw} for the polynomial $P_l(x).$ (Note that $\tau(x)$ is given right after Eq.~(\ref{Qdef}).) $\diamond$ {\it Theorem~1.5. The polynomials $Q^{(\alpha,-a)}_{\nu}(x)$ satisfy the generalized Rodrigues formulas} \begin{eqnarray} Q^{(\alpha,-a)}_{\nu}(x)&=&w^{-1}_l(x) \sigma(x)^{\nu-l} \frac{d^{\nu}}{dx^{\nu}}[w_l(x)\sigma(x)^{l}]=\frac{\sigma(x)^\nu}{w_0(x)} \frac{d^{\nu}w_0(x)}{dx^{\nu}}; \label{qrod}\\\nonumber Q^{(\alpha,-a)}_{\nu}(x)&=&w_l^{-1}(x)\sigma(x)^{\nu-l}\frac{d^{\nu-\mu}} {dx^{\nu-\mu}}\left(\sigma(x)^{l-\mu}w_l(x) Q^{(\alpha,-a)}_{\mu}(x)\right), \\ \mu&=&0, 1, \ldots, \nu . \label{gqrod} \end{eqnarray} {\bf Proof.} These Rodrigues formulas are those of Theorem~2.3 of ref.~\cite{hjw}; they agree with Eqs.~(72) and (73) of ref.~\cite{rwack} provided their $\beta=-a,$ as we found earlier. Note that, from Eq.~(\ref{wf}) the product $w_l(x)\sigma(x)^l$ does not depend on $l,$ so there is no $l$ dependence in Eqs.~(\ref{qrod},\ref{gqrod}). $\diamond$ The $Q^{(\alpha,-a)}_n(x)$ polynomials generalize the $P_n(x)$ in the sense of allowing any power of $\sigma(x)$ in the Rodrigues formula, not just $\sigma(x)^n$ as for the $P_n(x).$ In other words, the $Q^{(\alpha,-a)}_n(x)$ are associated $P_n(x)$ (or $C_n^{(\alpha,-a)}(x))$ polynomials, as in the relationship between Laguerre (Legendre) and associated Laguerre (Legendre) polynomials. 
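The recursions above are easy to check by machine. The sketch below builds the $Q_\nu^{(\alpha,-a)}$ from the recursive differential relation (\ref{rode}) with exact rational coefficients (the parameter values $a=1/2$, $\alpha=3/2$ are illustrative) and verifies $Q_1=\alpha-2x(a+1)$, the basic recursive ODE (\ref{bode}), and the three-term recursion (\ref{recu3}).

```python
from fractions import Fraction as F

# polynomials as coefficient lists, coeffs[k] = coefficient of x^k
def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def deriv(p):
    return trim([k * c for k, c in enumerate(p)][1:] or [F(0)])

def add(p, q):
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p))
    q = q + [F(0)] * (n - len(q))
    return trim([ci + di for ci, di in zip(p, q)])

def mul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, dj in enumerate(q):
            out[i + j] += ci * dj
    return trim(out)

SIGMA = [F(1), F(0), F(1)]            # sigma(x) = 1 + x^2

def Q_list(alpha, a, nmax):
    # recursive differential relation (rode):
    # Q_{nu+1} = sigma * Q_nu' + [alpha - 2x(a + nu + 1)] * Q_nu
    Qs = [[F(1)]]
    for nu in range(nmax):
        lin = [alpha, -2 * (a + nu + 1)]
        Qs.append(add(mul(SIGMA, deriv(Qs[nu])), mul(lin, Qs[nu])))
    return Qs

alpha, a = F(3, 2), F(1, 2)           # illustrative parameter values
Qs = Q_list(alpha, a, 5)

assert Qs[1] == [alpha, -2 * (a + 1)]              # Q_1 = alpha - 2x(a+1)

for nu in range(1, 6):                             # basic ODE (bode)
    lam = nu * (2 * a + nu + 1)
    assert deriv(Qs[nu]) == trim([-lam * c for c in Qs[nu - 1]])

for nu in range(1, 5):                             # three-term recursion (recu3)
    lin = [alpha, -2 * (a + nu + 1)]
    rhs = add(mul(lin, Qs[nu]),
              mul([-nu * (2 * a + nu + 1)], mul(SIGMA, Qs[nu - 1])))
    assert Qs[nu + 1] == rhs
```

Exact rational arithmetic avoids any floating-point ambiguity in comparing coefficient lists.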
A generalization of Eq.~(\ref{qrod}), \begin{eqnarray} Q^{(\alpha,-a-l)}_{\nu}(x)=\frac{\sigma(x)^{\nu+l}}{w_0(x)} \frac{d^{\nu}}{dx^{\nu}}\left(\sigma(x)^{-l}w_0(x)\right),~l=0,\pm 1,\ldots, \end{eqnarray} just reproduces the same polynomial with a shifted parameter $a\to a+l.$ {\it The generating function for the $Q_\nu^{(\alpha,-a)}(x)$ polynomials is defined as} \begin{eqnarray} Q(x,y;\alpha,-a)=\sum_{\nu=0}^{\infty}\frac{y^{\nu}}{\nu!} Q^{(\alpha,-a)}_{\nu}(x).~\diamond \label{genf} \end{eqnarray} The generating function is our main tool for deriving recursion relations. {\it Theorem~1.6. The generating function can be summed in the closed form} \begin{eqnarray}\nonumber w_l(x)Q(x,y;\alpha,-a)&=&\sigma(x)^{-l}\sum_{\nu=0}^{\infty} \frac{(y\sigma(x))^{\nu}}{\nu!}\frac{d^{\nu}}{dx^{\nu}} \left(\sigma(x)^{-(a+1)}e^{-\alpha\cot^{-1}x}\right)\,,\\\nonumber Q(x,y;\alpha,-a)&=&(1+x^2)^{a+1}e^{\alpha\cot^{-1}x} [1+(x+y(1+x^2))^2]^{-(a+1)}\\&\cdot&e^{-\alpha\cot^{-1}(x+y(1+x^2))}\ , \label{3_8} \end{eqnarray} \begin{eqnarray}\nonumber &&\frac{\partial^{\mu}}{\partial y^{\mu}}\left( w_l(x)\sigma(x)^{l} Q(x,y;\alpha,-a)\right)\\&=&\sigma(x)^{\mu}\sum_{\nu=\mu}^{\infty} \frac{(y\sigma(x))^{\nu-\mu}}{(\nu-\mu)!}\frac{d^{\nu-\mu}}{dx^{\nu-\mu}} (\sigma(x)^{-(a+\mu+1)}e^{-\alpha\cot^{-1}x}Q^{(\alpha,-a)}_{\mu}(x)),\\\nonumber &&\frac{\partial^{\mu}Q(x,y;\alpha,-a)}{\partial y^{\mu}}=w_l(x)^{-1} \sigma(x)^{\mu-l}[1+(x+y\sigma(x))^2]^{-(a+\mu+1)}\\ &\cdot& e^{-\alpha\cot^{-1}(x+y\sigma(x))}Q^{(\alpha,-a)}_{\mu}(x+y\sigma(x)) . \label{3_9} \end{eqnarray} {\it Both Taylor series converge if $x$ and $x+y\sigma(x)$ are regular points of the weight function.} {\bf Proof.} The first relation is derived in Theorem~3.2 of ref.~\cite{hjw} by substituting the Rodrigues formula~(\ref{qrod}) in the defining series~(\ref{genf}) of the generating function and recognizing it as a Taylor series. The other follows similarly. $\diamond$ {\it Theorem~1.7.
The generating function satisfies the partial differential equation (PDE)} \begin{eqnarray} \frac{\partial Q(x,y;\alpha,-a)}{\partial y}= \frac{\sigma(x)Q(x,y;\alpha,-a)}{1+[x+y\sigma(x)]^2} Q^{(\alpha,-a)}_1(x+y\sigma(x)). \label{pde} \end{eqnarray} {\bf Proof.} This PDE is derived by straightforward differentiation in Theorem~3.3 of ref.~\cite{hjw} in preparation for recursion relations by translating the case $\mu=1$ in Eq.~(\ref{3_9}) into a partial differential equation. $\diamond$ One of the main consequences of Theorem~1.7 is a general recursion relation. {\it Theorem~1.8. The $Q^{(\alpha,-a)}_{\nu}(x)$ polynomials satisfy the three-term recursion relation} \begin{eqnarray}\nonumber Q^{(\alpha,-a)}_{\nu+1}(x)=[\alpha-2x(a+\nu+1)]Q^{(\alpha,-a)}_{\nu}(x) -\nu\sigma(x)(2a+\nu+1)Q^{(\alpha,-a)}_{\nu-1}(x).\\ \label{recu3} \end{eqnarray} {\bf Proof.} Equation~(\ref{pde}) translates into this recursion relation by substituting Eq.~(\ref{genf}) defining the generating function and thus rewriting this as \begin{eqnarray}\nonumber &&(1+y^2\sigma(x)+2xy)\sum_{\nu=1}^{\infty}\frac{y^{\nu-1}}{(\nu-1)!} Q^{(\alpha,-a)}_{\nu}(x)\\&=&[\alpha-2x(a+1)-2y(a+1)\sigma(x)] \sum_{\nu=0}^{\infty}\frac{y^{\nu}}{\nu!}Q^{(\alpha,-a)}_{\nu}(x).~\diamond \end{eqnarray} Just like the recursive ODE~(\ref{rode}), this recursion allows for a systematic construction of the Romanovski polynomials, in contrast to the Rodrigues formulas which become impractical for large values of the degree $\nu.$ {\it Theorem~1.9. The polynomials $Q_{\nu}^{(\alpha,-a)}(x)$ satisfy the differential equation of Sturm-Liouville type} \begin{eqnarray}\nonumber \frac{d}{dx}\left(\sigma(x)^{l-\nu+1}w_l(x)\frac{dQ_{\nu}^{(\alpha,-a)}(x)} {dx}\right)&=&-\lambda_{\nu}\sigma(x)^{l-\nu}w_l(x) Q_{\nu}^{(\alpha,-a)}(x);\\\lambda_{\nu}&=&\nu(2a+\nu+1)\ . \label{qsl} \end{eqnarray} {\bf Proof.} This ODE is equivalent to the ODE~(\ref{qode}) and agrees with Eq.~(78) of ref.~\cite{rwack} if $\beta=-a$ there.
Note that the inductive proof in Theorem~5.1 in ref.~\cite{hjw} is much lengthier than our proof of Eqs.~(\ref{qode},\ref{pode}). $\diamond$ \section{Parameter Addition} The multiplicative structure of the generating function of Eq.~(\ref{3_8}) involving the two parameters in the exponents of two separate functions, as displayed in \begin{eqnarray} Q\left(x,\frac{y-x}{\sigma(x)};\alpha,-a\right)=\left(\frac{\sigma(x)} {\sigma(y)}\right)^{a+1}e^{\alpha(\cot^{-1}x-\cot^{-1}y)}, \end{eqnarray} allows for the following theorems. {\it Theorem~2.1. The $Q_{\nu}^{(\alpha,-a)}(x)$ polynomials satisfy the parameter addition relation} \begin{eqnarray} Q_{N}^{(\alpha_1+\alpha_2,-a_1-a_2-1)}(x)=\sum_{\nu_1=0}^N \left(N\atop \nu_1\right)Q_{\nu_1}^{(\alpha_1,-a_1)}(x) Q_{N-\nu_1}^{(\alpha_2,-a_2)}(x). \end{eqnarray} {\bf Proof.} This formula follows from the Taylor expansion \begin{eqnarray} \sum_{\nu_1, \nu_2=0}^{\infty}\frac{y^{\nu_1+\nu_2}}{\nu_1!\nu_2!} Q_{\nu_1}^{(\alpha_1,-a_1)}(x)Q_{\nu_2}^{(\alpha_2,-a_2)}(x)= \sum_{N=0}^{\infty}\frac{y^{N}}{N!} Q_{N}^{(\alpha_1+\alpha_2,-a_1-a_2-1)}(x) \end{eqnarray} of the {\it generating function identity} \begin{eqnarray} Q(x,y;\alpha_1,-a_1)Q(x,y;\alpha_2,-a_2)=Q(x,y;\alpha_1+\alpha_2,-a_1-a_2-1).~ \diamond \end{eqnarray} Given the complexity of the polynomials, the elegance and simplicity of this relation are remarkable. {\bf Example.} The case $N=0$ is trivial, and $N=1$ becomes the additive identity \begin{eqnarray}\nonumber Q_1^{(\alpha_1+\alpha_2,-(a_1+a_2+1))}(x)&=&\alpha_1+\alpha_2-2x(a_1+a_2+2)\\ \nonumber&=&[\alpha_1-2x(a_1+1)]+[\alpha_2-2x(a_2+1)]\\&=& Q_1^{(\alpha_1,-a_1)}(x)+Q_1^{(\alpha_2,-a_2)}(x).
\end{eqnarray} The first case involving additive and multiplicative aspects of the polynomials is $N=2$, which we decompose and multiply out as follows: \begin{eqnarray}\nonumber Q_{2}^{(\alpha_1+\alpha_2,-a_1-a_2-1)}&=& [\alpha_1+\alpha_2-2x(a_1+a_2+2)][\alpha_1+\alpha_2-2x(a_1+a_2+3)]\\\nonumber &-&2\sigma(x)(a_1+a_2+2)=\{[\alpha_1-2x(a_1+1)]\\\nonumber &+& [\alpha_2-2x(a_2+1)]\}\{[\alpha_1-2x(a_1+2)]\\\nonumber&+&[\alpha_2-2x(a_2+1)] \}-2\sigma(x)[(a_1+1)+(a_2+1)]\\\nonumber&=&[\alpha_1-2x(a_1+1)][\alpha_1 -2x(a_1+2)]-2\sigma(x)(a_1+1)\\\nonumber &+&[\alpha_2-2x(a_2+1)][\alpha_2-2x(a_2+2)]-2\sigma(x)(a_2+1)\\\nonumber &+&[\alpha_1-2x(a_1+1)][\alpha_2-2x(a_2+1)]\\\nonumber&+&[\alpha_2-2x(a_2+1)] [\alpha_1-2x(a_1+1)]\\\nonumber&=& Q_{2}^{(\alpha_1,-a_1)}+Q_{2}^{(\alpha_2,-a_2)}+Q_{1}^{(\alpha_1,-a_1)} Q_{1}^{(\alpha_2,-a_2)}\\&+&Q_{1}^{(\alpha_2,-a_2)}Q_{1}^{(\alpha_1,-a_1)}.~\diamond \end{eqnarray} The addition theorem is consistent with {\it the homogeneous polynomial theorem in the variables} $x, \alpha, \sqrt{\sigma}$ (without using $\sigma=x^2+1$) {\it which the polynomials satisfy} and can be generalized to an arbitrary number of parameters. {\it Theorem~2.2. The $Q_\nu^{(\alpha,-a)}(x)$ polynomials satisfy the more general polynomial identity} \begin{eqnarray}\nonumber &&Q_N^{(\alpha_1+\alpha_2+\cdots +\alpha_n,-(a_1+a_2+\cdots +a_n+n-1))}(x)\\&=& \sum_{0\leq \nu_j\leq N, \nu_1+\cdots +\nu_n=N}\frac{N!} {\prod_1^n \nu_j!}\prod_{j=1}^n Q_{\nu_j}^{(\alpha_j,-a_j)}(x).
\end{eqnarray} {\bf Proof.} This follows similarly from the Taylor expansion of the {\it product identity of $n$ generating functions} \begin{eqnarray}\nonumber \prod_{j=1}^n Q(x,y;\alpha_j,-a_j)=Q(x,y;\alpha_1+\alpha_2+\cdots +\alpha_n, -(a_1+a_2+\cdots +a_n+n-1)).\\ \end{eqnarray} As an application of the parameter addition theorem we now separate the two parameters $a$ and $\alpha$ into two sets of simpler polynomials $Q_{\nu}^{(0,-a)}$ and $Q_{\mu}^{(\alpha,1)}.$ To this end, we expand the generating functions in the {\it identity} \begin{eqnarray} Q(x,y;\alpha,-a)=Q(x,y;0,-a)Q(x,y;\alpha,1) \label{decomp} \end{eqnarray} in terms of their defining polynomials. This yields {\it Theorem~2.3. The $Q_\nu^{(\alpha,-a)}(x)$ polynomials satisfy the decomposition identity} \begin{eqnarray} Q^{(\alpha,-a)}_N(x)=\sum_{\nu=0}^N\left(N\atop \nu\right) Q^{(0,-a)}_{\nu}(x)Q^{(\alpha,1)}_{N-\nu}(x). \label{simpl} \end{eqnarray} {\bf Proof.} This identity follows from a Taylor expansion of the generating function identity~(\ref{decomp}) in terms of sums of products of polynomials involving only one parameter each. $\diamond$ {\bf Definition.} The generating function \begin{eqnarray} e^{\alpha [\cot^{-1}x-\cot^{-1}(x+y\sigma(x))]}=\sum_{\nu=0}^{\infty} \frac{y^{\nu}}{\nu!}Q^{(\alpha,1)}_{\nu}(x) \end{eqnarray} defines the second set of the polynomials, while the first one will be treated in detail below upon expanding the polynomials $Q^{(0,-a)}_\nu(x)$ as finite sums of Gegenbauer polynomials in Sect.~4 or finite power series in Eq.~(\ref{expl}). $\diamond$ We also note that $Q_\nu^{(0,-a)}(x)=K_\nu C_\nu^{(0,-a)}(x),$ so the latter also have a Gegenbauer polynomial expansion. {\it Theorem~2.4. The $Q_\nu^{(\alpha,-a)}(x)$ polynomials satisfy the (parity) symmetry relation} \begin{eqnarray} Q_{\nu}^{(-\alpha,-a)}(x)=(-1)^{\nu}Q_{\nu}^{(\alpha,-a)}(-x).
\end{eqnarray} {\bf Proof.} This relation derives from the {\it generating function identity} \begin{eqnarray} Q(-x,-y;-\alpha,-a)=Q(x,y;\alpha,-a) \end{eqnarray} which holds because $\alpha\cot^{-1}(x+y\sigma(x))$ in the generating function Eq.~(\ref{genf}) stays invariant under $\alpha\to -\alpha, x\to -x, y\to -y,$ and $\sigma(-x)=\sigma(x).~\diamond$ \section{Orthogonality Integrals} This section deals with an application of the generating function to integrals that are relevant for studying the orthogonality of the polynomials. {\bf Definition.} We define orthogonality integrals for the $Q_\nu^{(\alpha,-a)}(x)$ polynomials by~\cite{rwack} \begin{eqnarray}\nonumber O^{(a,\alpha)}_{\mu,\nu}&=&\int_{-\infty}^{\infty}dx \frac{Q^{(\alpha,-a)}_{\mu}(x)Q^{(\alpha,-a)}_{\nu}(x)e^{-\alpha\cot^{-1}x}} {\sigma(x)^{(\mu+\nu)/2+a+2}}=0,\\a&>&-3/2,~\mu+\nu~\rm{even}, \label{orthdef} \end{eqnarray} while for $\mu+\nu$ odd there needs to be an extra $\sqrt{\sigma(x)}$ in the numerator for the orthogonality integrals to vanish. $\diamond$ Thus, the $Q_\nu^{(\alpha,-a)}(x)$ polynomials form two infinite subsets, each with general orthogonality, but polynomials from different subsets are not mutually orthogonal. While displaying infinite orthogonality, this property falls short of the general orthogonality of all classical polynomials. The $Q_\nu^{(\alpha,-a)}(x)$ polynomials form a partition of the set of all Romanovski polynomials, as shown in Eq.~(\ref{comp}), with upper index dependent on the running index $\nu,$ though. The Romanovski polynomials $R_\nu^{(\alpha,\beta)}(x)$ with upper indices independent of the degree $\nu,$ the running index, form another partition that has the finite orthogonality, as discussed in more detail in ref.~\cite{rwack}. 
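The simplest even-sum instance of Eq.~(\ref{orthdef}), $O^{(a,\alpha)}_{0,2}=0$, can be confirmed numerically (it also follows by integrating the Sturm-Liouville form (\ref{qsl}) once over the line). The sketch below uses illustrative parameter values and the substitution $x=\tan t$, which maps the real line to $(-\pi/2,\pi/2)$ and absorbs the decay at infinity into powers of $\cos t$.

```python
import math

alpha, a = 0.7, 1.0        # illustrative parameter values (a > -3/2)

# Q_2 from the recursive relation: Q_1 = alpha - 2x(a+1),
# Q_2 = sigma * Q_1' + [alpha - 2x(a+2)] * Q_1, with sigma = 1 + x^2
def Q2(x):
    return ((1.0 + x * x) * (-2.0 * (a + 1.0))
            + (alpha - 2.0 * x * (a + 2.0)) * (alpha - 2.0 * x * (a + 1.0)))

def simpson(f, lo, hi, n=20001):
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        s += w * f(lo + i * h)
    return s * h / 3.0

def integrand_02(t):
    # x = tan t, dx = (1+x^2) dt, cot^{-1} x = pi/2 - t; the weight
    # sigma^{-(a+3)} of O_{0,2} combines with dx to cos(t)^(2a+4)
    x = math.tan(t)
    return Q2(x) * math.exp(-alpha * (math.pi / 2.0 - t)) * math.cos(t) ** (2.0 * a + 4.0)

def integrand_22(t):
    # same substitution for the (manifestly positive) diagonal integral O_{2,2}
    x = math.tan(t)
    return Q2(x) ** 2 * math.exp(-alpha * (math.pi / 2.0 - t)) * math.cos(t) ** (2.0 * a + 6.0)

eps = 1e-9   # stay off the exact endpoints, where tan overflows
O02 = simpson(integrand_02, -math.pi / 2.0 + eps, math.pi / 2.0 - eps)
O22 = simpson(integrand_22, -math.pi / 2.0 + eps, math.pi / 2.0 - eps)
assert abs(O02) < 1e-6     # the mu + nu even orthogonality relation
assert O22 > 0.0
```

The contrast between the vanishing off-diagonal integral and the positive diagonal one is the numerical face of the partial orthogonality discussed above.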
The orthogonality of the $C_n^{(\alpha_n,-a)}(x)$ polynomials from the Schr\"odinger equation with $\alpha=\alpha_n$, as discussed below Eq.~(\ref{cs}), is yet another form of orthogonality similar to that of hydrogenic wave functions, which also differs from the mathematical orthogonality of associated Laguerre polynomials, the subject of Exercise~13.2.11 in ref.~\cite{aw}. The orthogonality integrals of Eq.~(\ref{orthdef}) suggest analyzing the following integral of the generating functions \begin{eqnarray}\nonumber I(y)&=&\int_{-\infty}^{\infty}\frac{dx}{\sigma(x)^{a+2}}\left( Q(x,\frac{y}{\sqrt{\sigma}};\alpha,-a)\right)^2 e^{-\alpha \cot^{-1}x}\\ \nonumber &=&\sum_{\nu_1,\nu_2=0}^{\infty}\frac{y^{\nu_1+\nu_2}}{\nu_1! \nu_2!}\int_{-\infty}^{\infty}dx\frac{Q^{(\alpha,-a)}_{\nu_1}(x) Q^{(\alpha,-a)}_{\nu_2}(x)e^{-\alpha\cot^{-1}x}} {\sigma(x)^{(\nu_1+\nu_2)/2+a+2}}\\&=&\sum_{\nu_1,\nu_2=0}^{\infty} \frac{y^{\nu_1+\nu_2}}{\nu_1!\nu_2!}O^{(a,\alpha)}_{\nu_1, \nu_2} \label{orthint} \end{eqnarray} which is written directly in terms of the orthogonality integrals $O^{(a,\alpha)}_{\nu_1, \nu_2}$ defined in Eq.~(\ref{orthdef}). On the other hand, we can express the integral as \begin{eqnarray}\nonumber I(y)&=&\int_{-\infty}^{\infty}\frac{dx e^{-\alpha\cot^{-1}x+2\alpha\cot^{-1}x -2\alpha\cot^{-1}(x+y\sqrt{\sigma})}}{\sigma(x)^{a+2}[1+y^2+\frac{2xy} {\sqrt{\sigma}}]^{2(a+1)}}\\\nonumber&=&\int_0^{\infty}\frac{dx}{\sigma^{a+2}} \frac{e^{\alpha\cot^{-1}x-2\alpha\cot^{-1}(x+y\sqrt{\sigma})}} {[1+y^2+\frac{2xy}{\sqrt{\sigma}}]^{2(a+1)}}\\&+&\int_0^{\infty}\frac{dx} {\sigma^{a+2}}\frac{e^{-\alpha\cot^{-1}x+2\alpha\cot^{-1}(x-y\sqrt{\sigma})}} {[1+y^2-\frac{2xy}{\sqrt{\sigma}}]^{2(a+1)}}, \end{eqnarray} which is manifestly not even in the variable $y.$ If the $Q^{(\alpha,-a)}_{\nu}$ polynomials were orthogonal, then the double sum in $I$ of Eq.~(\ref{orthint}) would collapse to a single sum over normalization integrals multiplied by even powers of $y,$ i.e.
$I$ would be an even function of $y.$ This result shows that the $Q^{(\alpha,-a)}_{\nu}$ polynomials are not orthogonal in the conventional sense. In fact, the extra $\sqrt{\sigma}$ in the orthogonality integrals $O_{2\nu,2\mu+1}^{(a,\alpha)}$ is not built into the generating function. In other words, the fact that $I(y)\neq I(-y)$ indirectly confirms that the $Q^{(\alpha,-a)}_{\nu}$ polynomials have more complicated orthogonality properties than the Romanovski polynomials with parameters that are independent of the degree of the polynomial, as discussed in more detail in ref.~\cite{rwack}. Let us next consider the special parameter $\alpha=0$ and analyze similarly the integral \begin{eqnarray}\nonumber I_0(y)&=&\int_{-\infty}^{\infty}\frac{dx}{\sigma(x)^{a+2}} Q(x,\frac{y}{\sqrt{\sigma}};0,-a)=\sum_{\nu=0}^{\infty}\frac{y^{\nu}}{\nu!} \int_{-\infty}^{\infty}\frac{dx Q^{(0,-a)}_{\nu}(x)}{\sigma^{\nu/2+2+a}}\\ \nonumber&=&\sum_{\nu=0}^{\infty}\frac{y^{\nu}}{\nu!}O^{(a, 0)}_{\nu, 0} =\int_0^{\infty}\frac{dx}{\sigma^{a+2}}\frac{1}{[1+y^2+\frac{2xy} {\sqrt{\sigma}}]^{a+1}}+\int_0^{\infty}\frac{dx}{\sigma^{a+2}}\frac{1} {[1+y^2-\frac{2xy}{\sqrt{\sigma}}]^{a+1}},\\ \label{int0} \end{eqnarray} which is an even function of $y.$ If the $Q^{(0,-a)}_{\nu}(x)$ are orthogonal to $Q^{(0,-a)}_{0}(x)=1$ then the sum in Eq.~(\ref{int0}) will collapse to its first term and $I_0$ is a constant. It is quite a surprise that this actually happens in the interval $-1\leq y\leq 1$ for all parameter values $a$ for which the integral $I_0$ converges. For example, $I_0(y)=r(a)\pi=$const. with a rational number $r(a)$ that depends on the exponent $a,$ where $r(0)=1/2, r(1)=3/8, r(2)=5/16, r(3)=35/128, r(4)=3^2\cdot7/2^8,$ if $a$ is a non-negative integer; in general $I_0(y)=\sqrt{\pi}\Gamma(a+3/2)/\Gamma(a+2),$ which is the value of the integral at $y=0.$ For $|y|>1,$ $I_0(y)$ varies and deviates from the constant value. From the structure of the integral, this anomalous behavior of $I_0$ is rather unexpected.
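This constancy is easy to probe numerically. A minimal sketch (Python/mpmath; `I0` is an illustrative helper name) evaluates the integral~(\ref{int0}) over the whole real line and compares it with $\sqrt{\pi}\,\Gamma(a+3/2)/\Gamma(a+2)$, both inside and outside the interval $-1\leq y\leq 1$:

```python
import mpmath as mp

def I0(y, a):
    # I_0(y) = int dx sigma^{-(a+2)} [1 + y^2 + 2xy/sqrt(sigma)]^{-(a+1)},  sigma = 1+x^2
    f = lambda x: (1 + x**2)**(-(a + 2)) * \
                  (1 + y**2 + 2*x*y/mp.sqrt(1 + x**2))**(-(a + 1))
    return mp.quad(f, [-mp.inf, 0, mp.inf])

for a in (0, 1, 2):
    const = mp.sqrt(mp.pi) * mp.gamma(a + mp.mpf(3)/2) / mp.gamma(a + 2)
    for y in (0, 0.25, 0.5, 0.9):
        assert abs(I0(y, a) - const) < 1e-8   # constant for |y| <= 1 ...
    assert abs(I0(2, a) - const) > 1e-3       # ... but not for |y| > 1
```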
Since for $\alpha=0$ parity is conserved in the ODE~(\ref{qode}), and since it is shown in ref.~\cite{rwack} that the orthogonality integrals $O_{2\nu,2\mu}^{(a,0)}$ vanish for $\mu\neq \nu,$ the $Q^{(0,-a)}_{\nu}(x)$ polynomials are orthogonal in the conventional sense. Since each $C_l^{(0,-a)}(x)$ is proportional to $Q^{(0,-a)}_{l}(x),$ the $C_l^{(0,-a)}(x)$ polynomials are orthogonal. This is confirmed by $I_0$, whose constancy in the interval $-1\leq y\leq 1$ is thus proved. The restriction to parameter value $\alpha=0$ can be removed: \begin{eqnarray}\nonumber I_1(y)&=&\int_{-\infty}^{\infty}\frac{dx}{\sigma(x)^{a+2}} Q(x,\frac{y}{\sqrt{\sigma}};\alpha,-a)=\sum_{\nu=0}^{\infty}\frac{y^{\nu}} {\nu!}O^{(a, \alpha)}_{\nu, 0}\\\nonumber &=&\int_{-\infty}^{\infty}\frac{dx}{\sigma^{a+2}}\frac{e^{-\alpha\cot^{-1} (x+y\sqrt{\sigma})}}{[1+y^2+\frac{2xy}{\sqrt{\sigma}}]^{a+1}}\\ &=&\int_0^{\infty}\frac{dx}{\sigma^{a+2}}\frac{e^{-\alpha\cot^{-1}(x+y \sqrt{\sigma})}}{[1+y^2+\frac{2xy}{\sqrt{\sigma}}]^{a+1}}+\int_0^{\infty} \frac{dx}{\sigma^{a+2}}\frac{e^{\alpha\cot^{-1}(x-y\sqrt{\sigma})}} {[1+y^2-\frac{2xy}{\sqrt{\sigma}}]^{a+1}}, \label{int1} \end{eqnarray} which is neither even in $y$ nor independent of $y.$ Therefore, if we wish to find the normalizations of the $Q_{\nu}^{(\alpha,-a)}$ polynomials we have to split up the generating function into its even and odd parts in $y$ and integrate them separately, each with the proper power of $\sigma(x)$ in the orthogonality integral. \section{Relations with Gegenbauer Polynomials} The relation of the Romanovski polynomials to complexified Jacobi polynomials on the unit circle in the complex plane is described in detail in ref.~\cite{rwack}. Therefore, we focus here on relations with Gegenbauer polynomials.
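The finite Gegenbauer sums of this section can likewise be cross-checked symbolically. The sketch below (Python/SymPy; helper names are illustrative) expands the generating function of Eq.~(\ref{g00}) and compares the Taylor coefficients with the finite Gegenbauer sum stated in Theorem~4.1 below:

```python
import sympy as sp

x, y = sp.symbols('x y')

def Q00(m):
    # Q_m^{(0,0)}(x) = m! * [y^m] 1/(1 + 2xy + y^2(1+x^2)), cf. Eq. (g00)
    g = 1/(1 + 2*x*y + y**2*(1 + x**2))
    return sp.expand(sp.factorial(m) * g.series(y, 0, m + 1).removeO().coeff(y, m))

def gegenbauer_sum(m):
    # finite Gegenbauer sum of Theorem 4.1
    return sp.expand(sp.factorial(m) * sum(
        (-1)**n * x**(2*n) * sp.gegenbauer(m - 2*n, n + 1, -x)
        for n in range(m//2 + 1)))

for m in range(6):
    assert sp.expand(Q00(m) - gegenbauer_sum(m)) == 0
```

For instance, both sides give $Q_2^{(0,0)}(x)=6x^2-2$.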
We start with the simplest case of parameter values $a=0=\alpha,$ which also happens to be relevant for physics~\cite{ck}, to derive from the generating function an expression for $Q_{\nu}^{(\alpha,-a)}(x)$ in terms of a finite sum of Gegenbauer polynomials. For $a=0=\alpha,$ Eq.~(\ref{3_8}) takes the explicit form \begin{eqnarray} Q(x,y;0,0)=(1+x^2)[1+(x+y(1+x^2))^2]^{-1}=\frac{1}{1+2xy+y^2(1+x^2)}. \label{g00} \end{eqnarray} {\it Theorem~4.1. The $Q_{m}^{(0, 0)}(x)$ polynomials have the expansion into Gegenbauer polynomials} \begin{eqnarray} Q_{m}^{(0, 0)}(x)=m!\sum_{n=0}^{[m/2]}(-1)^nx^{2n}C_{m-2n}^{(n+1)}(-x),\ m=0, 1,\ldots . \end{eqnarray} {\bf Proof.} For $|x^2y^2/(1+2xy+y^2)|<1,$ the generating function identity~(\ref{g00}) may be expanded as an absolutely converging geometric series \begin{eqnarray} Q(x,y;0,0)=\sum_{n=0}^{\infty}\frac{(-x^2y^2)^n}{(1+2xy+y^2)^{n+1}}. \end{eqnarray} Substituting the generating function of Gegenbauer polynomials~\cite{aw} $C_l^{(n+1)}(x)$, \begin{eqnarray} (1-2xy+y^2)^{-(n+1)}=\sum_{l=0}^{\infty}C_l^{(n+1)}(x)y^l, \end{eqnarray} we obtain the expansion \begin{eqnarray}\nonumber Q(x,y;0,0)&=&\sum_{n=0}^{\infty}(-x^2y^2)^n \sum_{l=0}^{\infty}C_l^{(n+1)}(-x)y^l\\&=&\sum_{m=0}^{\infty}y^m \sum_{n=0}^{[m/2]}(-1)^nx^{2n}C_{m-2n}^{(n+1)}(-x), \end{eqnarray} where $m=l+2n$ was used upon interchanging the summations, with $[m/2]$ the integer part of $m/2$. On comparing with Eq.~(\ref{3_8}) defining the generating function $Q(x,y;\alpha,-a)$ we obtain the expansion of the $Q_{m}^{(0, 0)}(x)$ polynomials as a finite sum of Gegenbauer polynomials of Theorem~4.1. $\diamond$ Since $Q_{m}^{(0, 0)}(x)=K_{m}C_{m}^{(0, 0)}(x),$ this result is also valid for the $C_{m}^{(0, 0)}(x)$ polynomials. It can be generalized to parameter values $a\neq 0:$ {\it Theorem~4.2.
The $Q_{N}^{(0,-a)}(x)$ polynomials have the Gegenbauer polynomial expansion} \begin{eqnarray} Q_{N}^{(0,-a)}(x)=N!\sum_{n=0}^{[N/2]}\left(-a-1\atop n\right)x^{2n} C^{(n+a+1)}_{N-2n}(-x). \end{eqnarray} {\bf Proof.} This relation follows from expanding the generating function \begin{eqnarray}\nonumber Q(x,y;0,-a)&=&\left( \frac{\sigma(x)}{1+x^2+y^2\sigma^2(x)+2xy\sigma(x)} \right)^{a+1}=\frac{1}{[1+2xy+y^2+x^2y^2]^{a+1}}\\\nonumber &=&\sum_{n=0}^{\infty}\left(-a-1\atop n\right)\frac{(x^2y^2)^n} {[1+2xy+y^2]^{n+a+1}}\\\nonumber&=&\sum_{n, l=0}^{\infty}\left(-a-1\atop n \right)(x^2y^2)^nC^{(n+a+1)}_l(-x) y^l\\&=&\sum_{N=0}^{\infty} y^{N} \sum_{n=0}^{[N/2]}\left(-a-1\atop n\right)x^{2n}C^{(n+a+1)}_{N-2n}(-x) \end{eqnarray} in terms of the binomial series and then again using the generating functions of the Gegenbauer polynomials. $\diamond$ {\it Theorem~4.3. The $Q_{N}^{(\alpha,-a)}(x)$ polynomials have the general Gegenbauer polynomial expansion} \begin{eqnarray} \frac{1}{N!}Q_{N}^{(\alpha,-a)}(x)=\sum_{\nu=0}^N\frac{Q^{(\alpha,1)}_{N-\nu}(x)}{(N-\nu)!} \sum_{n=0}^{[\nu/2]}\left(-a-1\atop n\right)x^{2n} C^{(n+a+1)}_{\nu-2n}(-x). \end{eqnarray} {\bf Proof.} Substituting the expansion of Theorem~4.2 into Eq.~(\ref{simpl}), in which the Gegenbauer polynomials depend only on the parameter $a$ while the $Q^{(\alpha,1)}_{\nu}(x)$ depend only on $\alpha,$ yields the desired expansion, since $\left(N\atop \nu\right)\nu!/N!=1/(N-\nu)!.$ $\diamond$ The Gegenbauer polynomials are well-known generalizations of Legendre polynomials. The {\it hyperbolic Gegenbauer} ODE \begin{eqnarray} \sigma(x)y''-(2\lambda+1)xy'+\Lambda_l^{(\lambda)}y=0 \label{ggode} \end{eqnarray} becomes the ODE~(\ref{qode}) for $\alpha=0, \nu=l$ and $2\lambda+1=2(a+\nu),$ so the solutions of Eq.~(\ref{ggode}) are the $Q_{l}^{(0,-(\lambda-l+1/2))}(x)$ polynomials. In fact, for $\alpha=0$ we can directly solve the ODE~(\ref{qode}) for the $Q_{\nu}^{(0,-a)}(x)$ polynomial solutions in terms of finite power series. {\it Theorem~4.4.
The $Q_{N}^{(0,-a)}(x)$ polynomials have the explicit finite power series} \begin{eqnarray}\nonumber Q_{N}^{(0,-a)}(x)&=&\sum_{\mu=0}^{[N/2]}x^{N-2\mu}a_{\mu},\ a_{\mu}=-\frac{(N-2\mu+2)(N-2\mu+1)}{2\mu(2a+2\mu+1)}a_{\mu-1},\\\nonumber a_1&=&-\frac{N(N-1)}{2(2a+3)},\ a_{\mu}=(-1)^{\mu} \frac{N(N-1)\cdots (N-2\mu+1)}{2^{\mu}\mu!(2a+3)(2a+5)\cdots (2a+2\mu+1)}\\ \label{expl} \end{eqnarray} \begin{eqnarray} Q_{N}^{(\alpha,-a)}(x)=\sum_{\nu=0}^N\left(N\atop \nu\right) Q^{(\alpha,1)}_{N-\nu}(x)\sum_{\mu=0}^{[\nu/2]}x^{\nu-2\mu}a_{\mu} \end{eqnarray} with $a_\mu$ from Eq.~(\ref{expl}) (with $N$ replaced by $\nu$ there) and $[\nu/2]$ denoting the integer part of $\nu/2.$ {\bf Proof.} Since the proof by mathematical induction is straightforward, we just give the results. As the ODE is invariant under the parity transformation, $x\to -x,$ we have even and odd solutions. Substituting Eq.~(\ref{expl}) in Eq.~(\ref{simpl}) yields the second relation stated in Theorem~4.4. $\diamond$ This is also valid for the $C_l^{(0,-a)}(x)$ polynomials up to the normalization $K_l.$ \section{Auxiliary polynomials} Carrying out the innermost derivative of the Rodrigues formula~(\ref{rds}), we find \begin{eqnarray}\nonumber P_l(x)&=&\frac{\sigma^l}{w_0}\frac{d^{l-1}}{dx^{l-1}}\left(\frac{w_0}{\sigma} [\alpha-\sigma'(a+1)]\right)\\&=&\alpha Q^{(\alpha,-a-1)}_{l-1}(x)- (a+1)\frac{\sigma^l}{w_0}\frac{d^{l-1}}{dx^{l-1}}\left(\frac{\sigma' w_0} {\sigma}\right), \label{aux} \end{eqnarray} and are led to define {\it auxiliary polynomials:} \begin{eqnarray} S_{l+1}(x)=\frac{\sigma(x)^{l+1}}{w_0(x)}\frac{d^l}{dx^l} \left(\frac{\sigma'(x)w_0(x)}{\sigma(x)}\right).~\diamond \label{defx} \end{eqnarray} {\bf Example.} \begin{eqnarray} S_1(x)=\sigma'(x),~S_2(x)=\sigma''\sigma(x)+\sigma'(x)[\alpha-\sigma'(x)(a+2)], \ldots .~\diamond \end{eqnarray} So \begin{eqnarray} S_l(x)=\frac{\alpha}{a+1}Q^{(\alpha,-a-1)}_{l-1}(x)-\frac{P_l(x)}{a+1}.
\label{alt1} \end{eqnarray} Applying a derivative to $w_0S_l/\sigma^l$ yields \begin{eqnarray} \frac{d^l}{dx^l}\left(\frac{\sigma'(x)w_0(x)}{\sigma(x)}\right)=\frac{\alpha} {a+1}\frac{d}{dx}\left(\frac{w_0}{\sigma^l}Q^{(\alpha,-a-1)}_{l-1}(x)\right) -\frac{1}{a+1}\frac{d}{dx}\left(\frac{w_0 P_l(x)}{\sigma^l}\right). \end{eqnarray} Using the recursive ODEs for $Q^{(\alpha,-a-1)}_{l-1}$ and $P_l$ yields \begin{eqnarray}\nonumber \frac{\sigma(x)^{l+1}}{w_0(x)}\frac{d^l}{dx^l}\left(\frac{\sigma'(x)w_0(x)} {\sigma(x)}\right)&=&\frac{\alpha}{a+1}(Q^{(\alpha,-a-1)}_{l}(x)-\sigma' Q^{(\alpha,-a-1)}_{l-1}(x))\\&-&\frac{Q^{(\alpha,-a)}_{l+1}(x)}{a+1}. \label{alt2} \end{eqnarray} A comparison of Eqs.~(\ref{alt1},\ref{alt2}) yields \begin{eqnarray} P_{l+1}(x)=\alpha\sigma'(x)Q^{(\alpha,-a-1)}_{l-1}(x)+Q^{(\alpha,-a)}_{l+1}(x), \end{eqnarray} complementing the relation ${\cal P}_l(x;l)=K_lC_l^{(\alpha,-a)}(x) =Q^{(\alpha,-a)}_l(x)=P_l(x).$ For Laguerre polynomials $\sigma(x)=x$ and definition~(\ref{defx}) corresponds to \begin{eqnarray} S_{l+1}(x)=x^{l+1}e^x\frac{d^l}{dx^l}\left(\frac{e^{-x}}{x}\right) =l!L_l^{-l-1}(x), \end{eqnarray} while Eq.~(\ref{aux}) becomes \begin{eqnarray} l L_l(x)=l L_{l-1}(x)-xL^1_{l-1}(x). \end{eqnarray} For Jacobi polynomials $\sigma'(x)=-2x.$ As $-2x=1-x-(1+x),$ where $1\pm x$ can be incorporated into the weight functions $w(x)=(1-x)^a(1+x)^b,$ there is no need to introduce auxiliary polynomials. For example, Eq.~(\ref{aux}) becomes \begin{eqnarray} 2l P_l^{(a,b)}(x)=(a+l)(1+x)P_{l-1}^{(a,b+1)}(x)-(b+l)(1-x)P_{l-1}^{(a+1,b)}(x). \end{eqnarray} \section{Discussion} We have used a simple and natural method for constructing polynomials $Q_\nu^{(\alpha,-a)}(x)$ that are complementary to the $C_n^{(\alpha,-a)}(x)$ polynomials and related to them by a Rodrigues formula.
Similar to the classical orthogonal polynomials, the $Q_\nu^{(\alpha,-a)}(x)$ appear as solutions of a second-order Sturm-Liouville ordinary differential equation and obey Rodrigues formulas themselves. On the other hand, and in contrast to the classical polynomials, their infinite sets of orthogonality integrals are not the standard ones. These real orthogonal polynomials and their nontrivial orthogonality properties are closely related to Romanovski polynomials and to physical phenomena. In summary, all basic properties of Romanovski polynomials derive from the Rodrigues formula~(\ref{rds}) except for the orthogonality integrals. \section{Acknowledgments} It is a pleasure to thank M. Kirchbach for introducing me to the $C_n^{(\alpha,-a)}(x)$ polynomials. Thanks are also due to V. Celli for help with some of the orthogonality integrals. \end{document}
\begin{document} \title{The Segal-Bargmann Transform in Clifford Analysis} \author{Swanhild Bernstein and Sandra Schufmann} \date{TU Bergakademie Freiberg, Institute of Applied Analysis} \maketitle \begin{abstract} The Segal-Bargmann transform plays an essential role in signal processing, quantum physics, infinite-dimensional analysis, function theory and further topics. The connection to signal processing is the short-time Fourier transform, which can be used to describe the Segal-Bargmann transform. The classical Segal-Bargmann transform $\mathcal{B}$ maps a square integrable function to a holomorphic function that is square-integrable with respect to a Gaussian density. In signal processing terms, a signal from the position space $L_2(\mathbb{R}^m,\mathbb{R})$ is mapped to the phase space of wave functions, or Fock space, $\mathcal{F}^2(\mathbb{C}^m,\mathbb{C})$. We extend the classical Segal-Bargmann transform to a space of Clifford algebra-valued functions. We show how the Segal-Bargmann transform is related to the short-time Fourier transform and use this connection to demonstrate that $\mathcal{B}$ is unitary up to a constant and maps Sommen's orthonormal Clifford Hermite functions $\left\{\phi_{l,k,j}\right\}$ to an orthonormal basis of the Segal-Bargmann module $\mathcal{F}^2(\mathbb{C}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{C}})$. We also show that the Segal-Bargmann transform can be expanded into a convergent series with a dictionary of $\mathcal{F}^2(\mathbb{C}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{C}})$. In other words, we analyse the signal $f$ in one basis and reconstruct it in a basis of the Segal-Bargmann module. \end{abstract} \section{Introduction} \label{intro} Due to the importance of the Segal-Bargmann transform, there are various generalizations to quaternionic and Clifford analysis.
In particular, the Segal-Bargmann transform has been studied in the theory of slice monogenic functions \cite{ACSS}, \cite{Diki}, \cite{DG}, \cite{MNQ}. Our interest does not lie in these theories, however. We are interested in the Segal-Bargmann transform because of its connection to the windowed Fourier transform and time-frequency analysis. Time-frequency analysis is an important method in signal processing, because it allows one to analyse a given signal simultaneously in the time and frequency domains. A well-known tool is the short-time Fourier transform. Another closely related tool is the Segal-Bargmann transform, which is our main focus in this paper. The classical Segal-Bargmann transform maps a square integrable function to a holomorphic function that is square-integrable with respect to a Gaussian density. In signal processing terms, a signal from the position space $L_2(\mathbb{R}^m,\mathbb{R})$ is mapped to the phase space of wave functions $\mathcal{F}^2(\mathbb{C}^m,\mathbb{C})$. In the early 1960s, V. Bargmann and I. Segal independently investigated this space \cite{Bargmann,Segal}. While Bargmann developed a theory about the space and the corresponding transform in the finite-dimensional case, Segal focused primarily on the infinite-dimensional version of what is now called the Segal-Bargmann space(s) \cite{Hall}. The space $\mathcal{F}^2(\mathbb{C}^m,\mathbb{C})$ has a wide range of applications, for example in infinite-dimensional analysis and stochastic distribution theory. As early as 1932, V.~Fock introduced a more general, infinite-dimensional version of this space as a quantum states space for an unknown number of particles \cite{Fock}, which is now called \textit{Fock} space. In quantum mechanics, the reproducing kernels of the Fock spaces are the so-called coherent states.
Segal and Bargmann showed that an infinite union of the spaces $\mathcal{F}^2(\mathbb{C}^m,\mathbb{C})$, $m\in\mathbb{N}$, is isomorphic to a certain case of the Fock space, which is why the Segal-Bargmann spaces are sometimes also called Segal-Bargmann-Fock space(s) or simply Fock space. For this work, we will stick to the notion of Segal-Bargmann space. In signal and image processing, not only scalar-valued but also quaternion- and Clifford-valued signals are of interest. A monogenic signal \cite{FS,BBRH,BHRHSS}, for example, consists of a scalar-valued signal and vector components, which are the Riesz transforms of the scalar-valued signal. Other applications deal with colour images whose colour channels are separated and considered as components of a Clifford-valued signal, see for example \cite{BS,DCF,ES,ZWSCCSS}. The main purpose of this paper is to investigate the Segal-Bargmann transform $\mathcal{B}$ of Clifford algebra-valued functions, which has also been the focus of D. Pe{\~n}a Pe{\~n}a, I. Sabadini and F. Sommen \cite{PSS}. We will define and examine the Segal-Bargmann module $\mathcal{F}^2(\mathbb{C}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{C}})$, a higher-dimensional analogue of the classical Segal-Bargmann space. It is known that there is a close relationship between the Gabor transform (short-time Fourier transform with a Gaussian window) and the Segal-Bargmann transform. Recently, this connection has been used to filter a signal embedded in white noise \cite{Flandrin,AHKR}. Therefore, we investigate the mapping properties of the Segal-Bargmann transform in the context of Clifford estimators.
We prove that $\mathcal{B}$ is a unitary operator up to a scaling constant, and that it maps an orthonormal basis of $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$ to an orthonormal basis of the Segal-Bargmann module $\mathcal{F}^2(\mathbb{C}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{C}})$. For that, we will use Sommen's Clifford-Hermite functions $\left\{\phi_{l,k,j}\right\}$ as an $L^2$ basis. We also lay out that the Segal-Bargmann transform can be expanded into a series $\big(\mathcal{B}f\big)(z)=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\langle\phi_{l,k,j},f\rangle$ with a dictionary $\left\{\Psi_{l,k,j}\right\}$ of the Segal-Bargmann module and that this series converges absolutely and locally uniformly. The paper is organised as follows. In Sections \ref{subsec:Cliffordalgebras} and \ref{subsec:HilbertCliffmod} we give an overview of basic Clifford analysis and of Hilbert Clifford-modules, which replace Hilbert spaces in our context. Section \ref{sub:Pk} deals with a certain class of Clifford-valued functions, the inner spherical monogenics, which are central to the construction of a basis for the function spaces that we deal with. In Section \ref{sub:STFT}, we present the short-time Fourier transform as an important tool for our work. After we have established these preliminaries, we introduce Sommen's generalized Clifford Hermite polynomials and their relevant properties in Section \ref{sec:CliffHemitepol}. In Section \ref{sec:BargmannTransform}, we formally introduce the Segal-Bargmann transform and the Segal-Bargmann space of the classical, non-Clifford case, before we establish its analogue, the Segal-Bargmann module, in Section \ref{sec:BargmannModules} and show some important properties of the Segal-Bargmann transform of Clifford algebra-valued functions.
We conclude our paper with Section \ref{sec:dictionary} by constructing a dictionary $\left\{\Psi_{l,k,j}\right\}$ for the Segal-Bargmann transform and proving the convergence of the series representation $\sum\limits_{l=0}^{\infty}\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\langle\phi_{l,k,j},f\rangle$. \section{Preliminaries} \label{sec:prelim} \subsection{Clifford algebras} \label{subsec:Cliffordalgebras} While real Clifford algebras have gained much interest in mathematical research since W. Clifford wrote about them in 1878, cf. \cite{Funktionentheorie}, complex Clifford algebras are a fairly recent topic of interest. In our work, we deal with both cases. We take notations and properties mainly from \cite{BDS-1982}, in which the real version is displayed, and adapt them to fit the complex case. For that, we work closely along J. Ryan's \textit{Complexified Clifford analysis} \cite{Ryan}, in which a detailed extension of real to complex Clifford algebras is developed. We will write $\mathbb{N}=\left\{1,2,3,\dots\right\}$ and $\mathbb{N}_0=\left\{0,1,2,\dots\right\}$. Let $n\in\mathbb{N}_0$, let $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{R}}$ denote the real Clifford algebra over $\mathbb{R}^n$ and $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$ the complex Clifford algebra over $\mathbb{C}^n$. Both are based on the multiplication rules $$ \begin{array}{cl} e_ie_j + e_je_i = 0, & i\not= j, \\ e_j^2 = -1, & j=1,2,\ldots , n, \end{array} $$ and have $e_0 \equiv 1$ as their unit element.
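The two multiplication rules above already fix the product of any pair of basis blades $e_Ae_B$: anticommutations reorder the indices, and each repeated index contributes a factor $-1$. A minimal sketch (Python; `blade_mul` is an illustrative helper, not notation from the paper):

```python
def blade_mul(A, B):
    """Product e_A * e_B of basis blades, with e_ie_j = -e_je_i (i != j)
    and e_j^2 = -1.  A, B are strictly increasing index tuples; the result
    is returned as (sign, C), meaning e_A e_B = sign * e_C."""
    sign, out = 1, list(A)
    for b in B:
        pos = len(out)
        while pos > 0 and out[pos - 1] > b:   # swap e_b past a larger index
            pos -= 1
            sign = -sign
        if pos > 0 and out[pos - 1] == b:     # e_b e_b = -1, indices cancel
            del out[pos - 1]
            sign = -sign
        else:
            out.insert(pos, b)
    return sign, tuple(out)
```

For example, `blade_mul((1,), (2,))` gives `(1, (1, 2))` while `blade_mul((2,), (1,))` gives `(-1, (1, 2))`, reflecting $e_1e_2=-e_2e_1$.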
An arbitrary element of $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{R}}$ or $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$ is called a Clifford number and is given by $$ a = \sum_{A} a_{A}e_{A},$$ where $a_{A}\in\mathbb{R}$ or $a_{A}\in\mathbb{C}$, resp., and for each $A=(n_1,\dots,n_l)$ with \linebreak $1\leq n_1 < n_2 < \ldots < n_l \leq n$, it is $e_{A} = e_{n_1}e_{n_2}\dots e_{n_l}$. The coefficient $a_0$ is called the scalar part of $a$ and $\underline{a} = \sum_{j=1}^n a_je_j$ a Clifford vector. Similarly to the complex conjugation $\overline{\phantom{x}}^{\mathbb{C}}$, we can define involutions $\overline{\phantom{x}}$ for the real and $^\dagger$ for the complex Clifford algebra. Let \[\bar{e}_A=\left(-1\right)^{\frac{|A|(|A|+1)}{2}}e_A.\] Then \[\bar{a}=\sum\limits_Aa_A\bar{e}_A\] for $a\in\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{R}}$, and \[a^{\dagger}=\sum\limits_A\overline{a}_A^{\mathbb{C}}\bar{e}_A\]\label{dagger} for $a\in\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$. We refer to \cite{GilbertMurray} and state that $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{R}}$ becomes a finite dimensional Hilbert space with the inner product $$ (a,b)_0 = [\overline{a}b]_0 = \sum_A a_Ab_A$$ for all $a,b \in \mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{R}}$, and has Hilbert space norm $$ |a|_0 = \sqrt{ (a, a )_0} = \sqrt{\sum_A |a_A|^2}. $$ The inner product on $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{R}}$ extends to a sesqui-linear inner product $$ (a,b)_0 = [a^{\dagger}b]_0 = \sum_A \overline{a_A}^{\mathbb{C}}b_A$$ for $a,b \in \mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$. It can be shown that Clifford algebras are $C^*$-algebras, see \cite{GilbertMurray}.
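In finite dimensions these operations are directly computable. A small sketch (Python; the dictionary encoding of a Clifford number by its coefficients $a_A$ is illustrative) implements the involution $\bar{e}_A=(-1)^{|A|(|A|+1)/2}e_A$ and the real inner product $(a,b)_0=\sum_A a_Ab_A$:

```python
import math

# a = 1 + 2 e_1 + 3 e_1e_2, encoded as {blade index tuple: coefficient a_A}
a = {(): 1.0, (1,): 2.0, (1, 2): 3.0}

def bar(c):
    # involution: bar(e_A) = (-1)^{|A|(|A|+1)/2} e_A, extended linearly
    return {A: (-1)**(len(A)*(len(A) + 1)//2) * cA for A, cA in c.items()}

def ip0(c, d):
    # (a,b)_0 = [bar(a) b]_0 = sum_A a_A b_A  (real Clifford algebra)
    return sum(cA * d.get(A, 0.0) for A, cA in c.items())

norm_a = math.sqrt(ip0(a, a))   # |a|_0 = sqrt(1^2 + 2^2 + 3^2) = sqrt(14)
```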
\begin{prop} Under the involution $a \to a^{\dagger}$ each $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$ is a complex $C^*$-algebra which is a complexification of the real $C^*$-algebra $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{R}}.$ \end{prop} \subsection{Hilbert Clifford-modules} \label{subsec:HilbertCliffmod} We want to consider spaces of $\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}}$- or $\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{C}}$-valued functions. For that purpose, we need an analogue of the classical $L^2$ spaces. Since the elements of a Clifford algebra do not form a field, we work in Clifford-modules. The following two definitions are taken from \cite{BDS-1982} and adapted for the complex case; the real case is contained implicitly. \begin{defn} $X_{(r)}$ is a \textbf{unitary right $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$-module}, when $(X_{(r)}, +)$ is an abelian group and the mapping $(f, a) \to fa$ from $ X_{(r)}\times\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}} \to X_{(r)}$ is defined such that for all $a,b\in \mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$ and $f,g\in X_{(r)}:$ \begin{enumerate} \item $f(a + b) = fa + fb,$ \item $f(ab) = (fa)b, $ \item $(f+g)a = fa + ga, $ \item $fe_0 = f.$ \end{enumerate} \end{defn} We define an inner product on a unitary right $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$-module as follows. \begin{defn} Let $H_{(r)}$ be a unitary right $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$-module.
Then a function\linebreak $\langle\cdot, \cdot \rangle: H_{(r)}\times H_{(r)} \to \mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}} $ is an \textbf{inner product} on $H_{(r)}$ if for all $f,g,h \in H_{(r)}$ and $a \in \mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}},$ \begin{enumerate} \item $\langle f, g+h\rangle = \langle f,g\rangle + \langle f,h\rangle$ \item $\langle f,ga\rangle = \langle f,g\rangle a $ \item $\langle f,g\rangle = \langle g,f\rangle^{\dagger}$ \item $\langle f,f\rangle _0\in\mathbb{R}_0^+$ and $\langle f,f\rangle _0=0$ if and only if $f=0$ \item $\langle fa,fa\rangle _0\leq|a|_0^2\langle f,f\rangle _0.$ \end{enumerate} The accompanying \textbf{norm} on $H_{(r)}$ is $\left\|f\right\|^2=\langle f,f\rangle_0$. \end{defn} We now give an important property of the inner product. \begin{prop}\label{prop:fg0} If $\langle\cdot,\cdot\rangle$ is an inner product on a unitary right $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$-module $H_{(r)}$ and $\left\|f\right\|^2=\langle f,f\rangle_0$ then \[\left|\langle f,g\rangle\right|_0\leq 2^n\left\|f\right\|\,\left\|g\right\|\] for all $f,g\in H_{(r)}$. \end{prop} Proof: We use the definition of the norm on $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$, $|a|_0^2=\sum\limits_A |a_A|^2$, and the fact that \begin{equation} \left[ae_A\right]_0=\left[\sum\limits_Ba_Be_Be_A\right]_0=\left[a_Ae_Ae_A\right]_0=\pm a_A\label{eq:aeA} \end{equation} for all $a\in\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$, where the sign depends only on $|A|$. Also, if we consider $H_{(r)}$ to be a vector space over $\mathbb{C}$ with inner product $\langle\cdot,\cdot\rangle_0$, we know that the Cauchy-Schwarz inequality \begin{eqnarray}\label{eq:Norm-CS} \left|\langle f,g\rangle_0\right|^2\leq\langle f,f\rangle_0\cdot\langle g,g\rangle_0=\left\|f\right\|^2\,\left\|g\right\|^2 \end{eqnarray} has to be true.
Now, we get \begin{align*} \left|\langle f,g\rangle\right|_0^2&=\sum\limits_A\left|\langle f,g\rangle_A\right|^2 \stackrel{(\ref{eq:aeA})}{=}\sum\limits_A\left|\left[\langle f,g\rangle e_A\right]_0\right|^2\\ &\stackrel{(ii)}{=}\sum\limits_A\left|\langle f,ge_A\rangle_0\right|^2 \stackrel{(\ref{eq:Norm-CS})}{\leq}\sum\limits_A\|f\|^2\,\|ge_A\|^2\\ &\stackrel{(v)}{\leq}\sum\limits_A\|f\|^2\,\|g\|^2\cdot |e_A|_0^2 =\sum\limits_A\|f\|^2\,\|g\|^2\\ &=2^n\|f\|^2\,\|g\|^2. \quad \square \end{align*} As an analogue to Hilbert (vector) spaces, we now define Hilbert modules. \begin{defn} Let $H_{(r)}$ be a unitary right $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$-module provided with an inner product $(\cdot, \cdot).$ Then it is called a \textbf{right Hilbert $\mathcal{C}\hspace*{-0.2em}\ell_n^{\mathbb{C}}$-module} if it is complete for the norm topology derived from the inner product. \end{defn} Let $m\in\mathbb{N}=\{1,2,3,\dots\}$. We now consider the unitary right $\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}}$-module of functions from $\mathbb{R}^m$ to $\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}}$. A function ${f:\Omega\subset\mathbb{R}^m\to\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}}}$ maps the vector variable $\underline{x} = \sum_{j=1}^m x_je_j$ to a Clifford number and can be written as \[f(\underline{x})=\sum\limits_Ae_Af_A(\underline{x}),\] where $f_A:\mathbb{R}^m\to\mathbb{R}$ \cite{BDS-1982}. We define an inner product as follows.
\begin{defn} Let $h$ be a positive function on $\mathbb{R}^m.$ Then the inner product $\langle\cdot,\cdot\rangle_{L^2(\mathbb{R}^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})}$ is defined as $$\langle f,g\rangle_{L^2(\mathbb{R}^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})} = \int_{\mathbb{R}^m} \overline{f(\underline{x})} g(\underline{x}) h(\underline{x}) d\underline{x},$$ where $ d\underline{x}$ stands for the Lebesgue measure on $\mathbb{R}^m,$ and the associated norm is $||f||_{L^2(\mathbb{R}^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})}^2 = \left[\langle f, f \rangle_{L^2(\mathbb{R}^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})}\right]_0 .$ \end{defn} The unitary right Clifford-module of measurable functions on $\mathbb{R}^m$ for which $||f||_{L^2(\mathbb{R}^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})} < \infty $ is a right Hilbert Clifford-module, which we denote by $L^2(\mathbb{R}^m, h, \mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$. In this paper, we will focus on the case where $h(\underline{x})=1$. Then the right Hilbert Clifford-module will simply be denoted by $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$ and the inner product by $\langle\cdot,\cdot\rangle_{L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})}$. We also consider functions with values in a complex Clifford algebra, i.e. $f:\Omega\subset\mathbb{C}^m\to\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{C}}$. For $\underline{z}=\sum_{j=1}^m z_je_j$, with complex $z_j$, $j=1,\dots,m$, we have \[f(\underline{z})=\sum\limits_Ae_Af_A(\underline{z})\] with $f_A:\mathbb{C}^m\to\mathbb{C}$. Analogously to the real case, we can define the right Hilbert Clifford-module $L^2(\mathbb{C}^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{C}})$, where $h$ is a positive function over $\mathbb{C}^m$.
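As a concrete illustration for $m=1$ and $h\equiv 1$ (a SymPy sketch with an arbitrarily chosen $f$): since $[\overline{f}f]_0=\sum_A f_A^2$ in the real case, the norm reduces to a componentwise integral.

```python
import sympy as sp

x = sp.symbols('x', real=True)
# f(x) = e^{-x^2/2} + e_1 x e^{-x^2/2} in L^2(R, Cl_1^R), stored by blade
f = {(): sp.exp(-x**2/2), (1,): x*sp.exp(-x**2/2)}
# ||f||^2 = [<f,f>]_0 = int sum_A f_A(x)^2 dx   (here h = 1)
norm2 = sp.integrate(sum(c**2 for c in f.values()), (x, -sp.oo, sp.oo))
# norm2 evaluates to sqrt(pi) + sqrt(pi)/2 = 3*sqrt(pi)/2
```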
Here, \[\langle f,g\rangle_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})} = \int_{\mathcal{C}\hspace*{-0.2em}\ellC^m} f^\dagger(\underline{z}) g(\underline{z}) h(\underline{z}) d\underline{x}\,d\underline{y}\] with $\underline{z}=\underline{x}+i\underline{y}$, where $\dagger$ denotes the involution on $\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}$, cf. page \pageref{dagger}. The associated norm is $||f||_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}^2=\left[\langle f,f\rangle_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}\right]_0$. Particularly important to our work will be those spaces $L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})$ for which $h$ is defined as the Gaussian function $h(\underline{z})=\frac{e^{-|\underline{z}|^2/2}}{\pi^m}$, cf. section \ref{sec:BargmannModules}. 
\begin{prop}\label{prop:normintnorm} \begin{enumerate} \item Let $f\in L^2(\mathbb{R}^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}}).$ Then \[||f||_{L^2(\mathbb{R}^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})}^2=\int\limits_{\mathbb{R}^m}\left|f(\underline{x})\right|_0^2h(\underline{x}) d\underline{x}.\] \item Let $f\in L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}).$ Then \[||f||_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}^2=\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\left|f(\underline{z})\right|_0^2h(\underline{z}) d\underline{x}\,d\underline{y}.\] \end{enumerate} \end{prop} Proof: We only prove (ii), as the real case is analogous: \begin{align*} ||f||_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}^2&=\left[\langle f,f\rangle_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,h,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}\right]_0\\ &=\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\left[f^{\dagger}(\underline{z})f(\underline{z})\right]_0h(\underline{z}) d\underline{x}\,d\underline{y} =\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\left|f(\underline{z})\right|_0^2h(\underline{z}) d\underline{x}\,d\underline{y}. \quad \square \end{align*} \subsection{Inner spherical monogenics} \label{sub:Pk} Since many of the following results are similar for functions of real and complex Clifford algebras, we will state them for the real case. Of particular importance, when dealing with Clifford algebra-valued functions, is the Dirac operator \[D_{\underline{x}} = \sum_{j=1}^m e_j\partial_{x_j}\quad (D_{\underline{z}} = \sum\limits_{j=1}^me_j\partial_{z_j}).\] Left nullsolutions of $D_{\underline{x}}$ ($D_{\underline{z}}$) are called (complex) left monogenic functions.
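The scalar-part identity $\left[f^{\dagger}(\underline{z})f(\underline{z})\right]_0=\left|f(\underline{z})\right|_0^2=\sum_A|f_A(\underline{z})|^2$ used in the proof above can be checked mechanically. The following minimal sketch (our code; the dictionary representation and all names are ours, not from the paper) models the real case $\mathcal{C}\hspace*{-0.2em}\ell_2^{\mathbb{R}}$, assuming the standard relations $e_je_k+e_ke_j=-2\delta_{jk}$ and the conjugation $\overline{e_A}=(-1)^{|A|(|A|+1)/2}e_A$:

```python
from itertools import product

def blade_mul(A, B):
    """Multiply basis blades e_A * e_B in Cl_m with e_j^2 = -1.
    A, B are sorted tuples of generator indices; returns (sign, blade)."""
    seq = list(A) + list(B)
    sign = 1
    i = 0
    while i < len(seq) - 1:
        if seq[i] > seq[i + 1]:
            seq[i], seq[i + 1] = seq[i + 1], seq[i]   # swapping distinct generators
            sign = -sign                              # flips the sign
            i = max(i - 1, 0)
        elif seq[i] == seq[i + 1]:
            del seq[i:i + 2]                          # e_j * e_j = -1
            sign = -sign
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(seq)

def mul(f, g):
    """Product of two Clifford numbers given as {blade: coefficient} dicts."""
    h = {}
    for (A, fa), (B, gb) in product(f.items(), g.items()):
        s, C = blade_mul(A, B)
        h[C] = h.get(C, 0.0) + s * fa * gb
    return h

def conj(f):
    """Clifford conjugation: e_A -> (-1)^(|A|(|A|+1)/2) e_A."""
    return {A: (-1) ** (len(A) * (len(A) + 1) // 2) * c for A, c in f.items()}

# generators of Cl_2
e1, e2 = {(1,): 1.0}, {(2,): 1.0}

# sanity checks: generator relations and the scalar part of conj(f)*f
assert mul(e1, e1) == {(): -1.0}
assert mul(e2, e1) == {(1, 2): -1.0}
f = {(): 1.0, (1,): 2.0, (2,): 3.0, (1, 2): 4.0}      # f = 1 + 2e1 + 3e2 + 4e12
assert mul(conj(f), f)[()] == sum(c * c for c in f.values())   # [conj(f) f]_0
```

The last assertion is exactly the pointwise identity behind Proposition \ref{prop:normintnorm}: the scalar part of $\overline{f}f$ collapses to the sum of squared coefficients.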
Let $m\in\mathbb{N}$ and $\mathfrak{P}^s$ be the space of scalar-valued polynomials in $\mathbb{R}^m$ ($\mathcal{C}\hspace*{-0.2em}\ellC^m$). Then a Clifford polynomial is an element of $\mathfrak{P}^s \otimes \mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}}$ ($\mathfrak{P}^s \otimes \mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}$). An important class of polynomials is that of the so-called (complex) inner spherical monogenics. A left inner spherical monogenic of order $k$ is a left monogenic homogeneous Clifford polynomial $P_k$ of degree $k$. The set of all left inner spherical monogenics of order $k$ is denoted by $M_l^+(k)$ and has dimension \cite{BDSS} \[\mathrm{dim}(M_l^+(k))=\binom{m+k-2}{k},\] (with $\mathrm{dim}(M_l^+(0))=1$ for all $m\in\mathbb{N}$). We will deal with inner spherical monogenics over both $\mathbb{R}^m$ and $\mathcal{C}\hspace*{-0.2em}\ellC^m$. To differentiate, we will write $P_k(\underline{x}):\mathbb{R}^m\to\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}}$ and $P_k(\underline{z}):\mathcal{C}\hspace*{-0.2em}\ellC^m\to\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}$. \subsection{Short-time Fourier Transform} \label{sub:STFT} An important tool in time-frequency analysis is the short-time Fourier Transform. It allows one to analyse a given signal simultaneously in the time and in the frequency domain, because it calculates the Fourier Transform not over the whole signal, but over small blocks of it.
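To make the inner spherical monogenics of section \ref{sub:Pk} concrete, here is a small hand-worked example of ours (not taken from \cite{BDSS}): for $m=2$ the dimension formula gives $\mathrm{dim}(M_l^+(k))=\binom{k}{k}=1$, and a generator can be written down explicitly.

```latex
% For m = 2, write \underline{x} = x_1e_1 + x_2e_2 and set (our example)
P_k(\underline{x}) = (x_1 - e_1e_2\,x_2)^k,
% a homogeneous Clifford polynomial of degree k. For k = 1,
D_{\underline{x}}P_1(\underline{x})
   = e_1\,\partial_{x_1}(x_1 - e_1e_2x_2) + e_2\,\partial_{x_2}(x_1 - e_1e_2x_2)
   = e_1 - e_2e_1e_2
   = e_1 - e_1 = 0,
% using e_2e_1e_2 = -e_1e_2e_2 = e_1. Since e_j(x_1 - e_1e_2x_2)
% = (x_1 + e_1e_2x_2)e_j for j = 1,2, the same cancellation occurs for
% every power, so each P_k is left monogenic and spans M_l^+(k).
```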
Given a signal $f(\underline{t})$ and a window function $\varphi(\underline{t})$, the short-time Fourier Transform $(V_{\varphi}f)(\underline{t},\underline{\omega})$ is classically defined as \[(V_{\varphi}f)(\underline{t},\underline{\omega}) = \frac{1}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m}f(\underline{x})\overline{\varphi(\underline{x}-\underline{t})}^{\mathcal{C}\hspace*{-0.2em}\ellC}e^{-i\underline{\omega}\cdot\underline{x}}d\underline{x}.\] A commonly used window function is the Gaussian window because it provides a very good resolution of the studied signal \cite{groechenig}. It is given by $\varphi(\underline{x})=e^{-\frac{|\underline{x}|^2}{4}}$. \section{Clifford Hermite polynomials} \label{sec:CliffHemitepol} We will now consider Clifford Hermite polynomials as a special class of Clifford polynomials. In the classical case, the Hermite polynomials over $\mathbb{R}$ can be obtained from the Taylor expansion of the generating function $t\mapsto e^{2xt-t^2}$, $$e^{2xt-t^2} = \sum_{n=0}^{\infty} H_n(x)\frac{t^n}{n!}.$$ They can also be calculated explicitly by \[H_n(x)=(-1)^ne^{x^2}\frac{ d^n}{ dx^n}e^{-x^2}.\] Through a similar expansion for $\mathbb{R}^m$, F. Sommen defined \textit{radial Hermite polynomials} \cite{So}, which are explicitly given by $$H_{k,m}(\underline{x}) = (-1)^ke^{\frac{|\underline{x}|^2}{2}}D_{\underline{x}}^k e^{-\frac{|\underline{x}|^2}{2}}.$$ Since the radial Hermite polynomials only span a proper subspace of $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$, Sommen developed a richer set of polynomials, starting from the monogenic extension of $e^{-|\underline{x}|^2/2}P_k(\underline{x})$, where $P_k(\underline{x})$ is a left inner spherical monogenic of degree $k$, cf. section \ref{sub:Pk}. This led him to what he called the generalized Hermite polynomials, which can be used to construct a basis of $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$.
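In the simplest case $m=1$, the Rodrigues-type formula above produces, up to powers of $e_1$, the probabilists' Hermite polynomials $He_k(x)=(-1)^ke^{x^2/2}\frac{d^k}{dx^k}e^{-x^2/2}$, which satisfy $\int_{\mathbb{R}}He_l(x)He_t(x)e^{-x^2/2}\,dx=\delta_{l,t}\,l!\sqrt{2\pi}$. A quick numerical sanity check of this orthogonality (our sketch, not from the paper) using NumPy's Gauss-HermiteE quadrature, whose weight is exactly $e^{-x^2/2}$:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss-HermiteE nodes/weights integrate against the weight exp(-x^2/2)
x, w = He.hermegauss(40)

def He_poly(l, x):
    """Evaluate the probabilists' Hermite polynomial He_l at x."""
    return He.hermeval(x, [0.0] * l + [1.0])

for l in range(5):
    for t in range(5):
        integral = np.sum(w * He_poly(l, x) * He_poly(t, x))
        expected = math.factorial(l) * math.sqrt(2.0 * math.pi) if l == t else 0.0
        assert abs(integral - expected) < 1e-8, (l, t)
```

The quadrature is exact for polynomial integrands of degree below $2\cdot 40$, so the assertions hold up to floating-point error.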
\begin{defn}\label{def:CliffHerm} The \textbf{generalized Clifford Hermite polynomials} $H_{l,m,k}$, $l,k\in\mathbb{N}_0$, are given by \begin{equation}\label{eq:CliffHermitePolynomials} H_{l,m,k}(\underline{x})P_k(\underline{x}) = (-1)^le^{\frac{|\underline{x}|^2}{2}}D_{\underline{x}}^l\left(e^{-\frac{|\underline{x}|^2}{2}}P_k(\underline{x})\right) \end{equation} where $P_k(\underline{x})$ is a left inner spherical monogenic of degree $k$. \end{defn} An important property of the generalized Clifford Hermite polynomials is their orthogonality \cite{BDSKS}. \begin{thm}\label{th:CliffordHermiteOrtho} Let $H_{l,m,k_1}$ and $H_{t,m,k_2}$ be generalized Clifford Hermite polynomials and $P_{k_1}(\underline{x})$ and $P_{k_2}(\underline{x})$ inner spherical monogenics of order $k_1$ and $k_2$, resp. Then $$\int_{\mathbb{R}^m} e^{-\frac{|\underline{x}|^2}{2}}\, \overline{H_{l,m,k_1}(\underline{x})P_{k_1}(\underline{x})}\, H_{t,m,k_2}(\underline{x})P_{k_2}(\underline{x})\, d\underline{x} = \gamma_{l,k_1} \delta_{l,t} \delta_{k_1,k_2},$$ with \begin{align*} \gamma_{2p,k}&=\frac{2^{2p+m/2+k}p!\sqrt{\pi}^m\Gamma\left(\frac{m}{2}+k+p\right)}{\Gamma\left(\frac{m}{2}\right)},\\ \gamma_{2p+1,k}&=\frac{2^{2p+m/2+k+1}p!\sqrt{\pi}^m\Gamma\left(\frac{m}{2}+k+p+1\right)}{\Gamma\left(\frac{m}{2}\right)}. \end{align*} \end{thm} Building on the orthogonality, Sommen and his colleagues established an orthonormal basis of $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$, cf. \cite{So,BDSKS}. \begin{thm}\label{th:L2basis} Let $\gamma_{l,k}$, $k,l\in\mathbb{N}_0$, be as defined in Theorem \ref{th:CliffordHermiteOrtho}. For each $k\in\mathbb{N}_0$, let further $\left\{P_k^{(j)}(\underline{x})\right\}_{j=1,2,\dots,\mathrm{dim}(M_l^+(k))}$ be an orthonormal basis of $M_l^+(k)$.
Then the set \begin{equation}\label{eq:L2basis} \left\{\frac{1}{\sqrt{\gamma_{l,k}}}H_{l,m,k}(\underline{x})P_k^{(j)}(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}:l,k\in\mathbb{N}_0,j\leq\mathrm{dim}(M_l^+(k))\right\} \end{equation} forms an orthonormal basis of $L^2(\mathbb{R}^m, \mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$. \end{thm} Each element of (\ref{eq:L2basis}) depends on $l$, $k$ and the chosen basis of $M_l^+(k)$, which contains $\mathrm{dim}(M_l^+(k))=\binom{m+k-2}{k}$ elements, cf. section \ref{sub:Pk}. \section{Segal-Bargmann transform} \label{sec:BargmannTransform} The first very general version of the Segal-Bargmann space goes back to V. Fock's 1932 theory of the quantum state space of particles \cite{Fock}. Here, we will consider the more specific finite-dimensional version of the following definitions taken from \cite{PSS}. We will transfer those definitions to the Clifford case in section \ref{sec:BargmannModules}. \begin{defn} The \textbf{Segal-Bargmann space} $\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)$ is defined as the \linebreak Hilbert space of entire functions $f$ in $\mathcal{C}\hspace*{-0.2em}\ellC^m$ which are square-integrable with respect to the $2m$-dimensional Gaussian density, i.e., $$ \frac{1}{\pi^m} \int_{\mathcal{C}\hspace*{-0.2em}\ellC^m} e^{-|\underline{z}|^2} |f(\underline{z})|^2 \, d\underline{x}\,d\underline{y} < \infty, \quad \underline{z} = \underline{x} + i\underline{y}. $$ It is equipped with the inner product $$ \langle f,g \rangle_{\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)} = \frac{1}{\pi^m} \int_{\mathcal{C}\hspace*{-0.2em}\ellC^m} e^{-|\underline{z}|^2} \overline{f(\underline{z})} g(\underline{z}) \, d\underline{x}\,d\underline{y}. $$ \end{defn} The Segal-Bargmann transform connects the Bargmann space with the Hilbert space $L^2(\mathbb{R}^m,\mathbb{R})$ by mapping the latter onto the former.
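Before defining the transform itself, a standard orthogonality property of this space is worth recording: for $m=1$ the monomials satisfy $\langle z^n,z^k\rangle_{\mathcal{F}^2}=\delta_{n,k}\,n!$. A numerical sanity check of this fact (our sketch; function names and quadrature choices are ours):

```python
import math
import numpy as np

def fock_inner(n, k, r_max=8.0, nr=80, nth=64):
    """Approximate <z^n, z^k>_{F^2} = (1/pi) * iint conj(z)^n z^k e^{-|z|^2} dx dy
    in polar coordinates (the tail beyond r_max is negligible)."""
    # Gauss-Legendre nodes mapped to [0, r_max] for the radial integral
    u, wu = np.polynomial.legendre.leggauss(nr)
    r = 0.5 * r_max * (u + 1.0)
    wr = 0.5 * r_max * wu
    th = 2.0 * math.pi * np.arange(nth) / nth        # uniform grid, exact for trig polys
    R, T = np.meshgrid(r, th, indexing="ij")
    Z = R * np.exp(1j * T)
    vals = np.conj(Z) ** n * Z ** k * np.exp(-R ** 2) * R   # extra R: polar Jacobian
    integral = (wr @ vals).sum() * (2.0 * math.pi / nth)
    return integral / math.pi

# <z^n, z^k> = delta_{n,k} * n!
assert abs(fock_inner(3, 3) - math.factorial(3)) < 1e-9
assert abs(fock_inner(2, 4)) < 1e-9
```

The angular sum vanishes exactly unless $n=k$, and the radial integral reduces to $2\int_0^\infty r^{2n+1}e^{-r^2}\,dr=n!$.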
\begin{defn} The \textbf{Segal-Bargmann transform} $\mathcal{B}$ from $L^2(\mathbb{R}^m,\mathbb{R})$ to\linebreak $\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)$ is defined by \begin{equation}\label{def:BargmannTransform} \big(\mathcal{B}f\big)(\underline{z}) = \frac{1}{\sqrt{2\pi}^m} \int_{\mathbb{R}^m} e^{-\frac{\underline{z}\cdot\underline{z}}{2} + \underline{x} \cdot\underline{z} - \frac{\underline{x}\cdot\underline{x}}{4}} f(\underline{x})\, d\underline{x}, \end{equation} with $\underline{x} \cdot \underline{z} = \sum_{j=1}^m x_jz_j,$ for any $f\in L^2(\mathbb{R}^m,\mathbb{R}).$ \end{defn} The Segal-Bargmann transform is a linear operator. It can also be expressed in terms of a short-time Fourier Transform (cf. section \ref{sub:STFT}). \begin{prop}\label{prop:STFT-B} Let $\mathcal{B}$ be the Segal-Bargmann transform and $V_\varphi$ the short-time Fourier Transform with window $\varphi(\underline{x})=e^{-\frac{|\underline{x}|^2}{4}}.$ Then for all $f\in L^2(\mathbb{R}^m,\mathbb{R})$, \[(V_{\varphi}f)(2\underline{t},-\underline{\omega})= e^{-\frac{|\underline{z}|^2}{2}}e^{i\underline{t}\cdot \underline{\omega}}\big(\mathcal{B}f\big)(\underline{z}),\quad \underline{z} = \underline{t} +i\underline{\omega}.\] \end{prop} Proof: \begin{align*} (V_\varphi f)(2\underline{t},-\underline{\omega}) & = \frac{1}{\sqrt{2\pi}^m} \int_{\mathbb{R}^m} f(\underline{x}) e^{-\frac{|\underline{x}-2\underline{t}|^2}{4}} e^{i\underline{\omega}\cdot\underline{x}} d\underline{x} \\ & = \frac{1}{\sqrt{2\pi}^m} \int_{\mathbb{R}^m} f(\underline{x})e^{-\frac{|\underline{x}|^2}{4}+\underline{x}\cdot\underline{t} -|\underline{t}|^2}e^{i\underline{\omega}\cdot \underline{x}} d\underline{x} \\ & = \frac{1}{\sqrt{2\pi}^m} e^{-\frac{|\underline{t}|^2}{2}}e^{-\frac{|\underline{\omega}|^2}{2}}e^{i\underline{t}\cdot\underline{\omega}}\int_{\mathbb{R}^m} f(\underline{x}) e^{-\frac{|\underline{x}|^2}{4} +\underline{x}\cdot(\underline{t}+i\underline{\omega}) - 
\frac{(\underline{t}+i\underline{\omega})^2}{2}} d\underline{x} \\ & = \frac{1}{\sqrt{2\pi}^m} e^{-\frac{|\underline{t}|^2}{2}}e^{-\frac{|\underline{\omega}|^2}{2}}e^{i\underline{t}\cdot\underline{\omega}}\int_{\mathbb{R}^m} f(\underline{x}) e^{-\frac{|\underline{x}|^2}{4} + \underline{x}\cdot \underline{z} -\frac{\underline{z}\cdot\underline{z}}{2}} d\underline{x} \\ & = e^{-\frac{|\underline{z}|^2}{2}}e^{i\underline{t}\cdot\underline{\omega}}\big(\mathcal{B}f\big)(\underline{z}) \quad \square \end{align*} A well-known property of the Segal-Bargmann transform is that it is a unitary operator up to a scaling constant. \begin{prop}\label{prop:isometry-classical} Let $\mathcal{B}:L^2(\mathbb{R}^m,\mathbb{R})\to\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)$ be the Segal-Bargmann transform (\ref{def:BargmannTransform}). Then \[\langle\mathcal{B}f,\mathcal{B}g\rangle_{\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)}=\frac{1}{\sqrt{2\pi}^m}\langle f,g\rangle_{L^2(\mathbb{R}^m,\mathbb{R})}.\] \end{prop} Proof: We use Proposition \ref{prop:STFT-B}, i.e. 
\[\big(\mathcal{B}f\big)(\underline{z})= e^{\frac{|\underline{z}|^2}{2}}e^{-i\underline{t}\cdot \underline{\omega}}(V_{\varphi}f)(2\underline{t},-\underline{\omega}),\] with $\underline{z}=\underline{t}+i\underline{\omega}$ and $\varphi(\underline{x})=e^{-\frac{|\underline{x}|^2}{4}}.$ Then \begin{align*} \langle\mathcal{B}f,\mathcal{B}g\rangle_{\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)} &=\frac{1}{\pi^m}\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\overline{\big(\mathcal{B}f\big)(\underline{z})}^{\mathcal{C}\hspace*{-0.2em}\ellC}\big(\mathcal{B}g\big)(\underline{z})e^{-|\underline{z}|^2} d\underline{x}\,d\underline{y}\\ &=\frac{1}{\pi^m}\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}e^{\frac{|\underline{z}|^2}{2}}e^{i\underline{t}\cdot\underline{\omega}}\overline{\big(V_\varphi f\big)(2\underline{t},-\underline{\omega})}^{\mathcal{C}\hspace*{-0.2em}\ellC}\\ &\quad\quad\quad\quad\cdot e^{\frac{|\underline{z}|^2}{2}}e^{-i\underline{t}\cdot\underline{\omega}}\big(V_\varphi g\big)(2\underline{t},-\underline{\omega})e^{-|\underline{z}|^2} d\underline{\omega}\, d\underline{t}\\ &=\frac{1}{\pi^m}\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\overline{\big(V_\varphi f\big)(2\underline{t},-\underline{\omega})}^{\mathcal{C}\hspace*{-0.2em}\ellC}\big(V_\varphi g\big)(2\underline{t},-\underline{\omega}) d\underline{\omega}\, d\underline{t}\\ &=\frac{1}{\pi^m}\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\overline{\frac{1}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m}f(\underline{x})e^{-\frac{|\underline{x}-2\underline{t}|^2}{4}}e^{i\underline{\omega}\cdot\underline{x}} d\underline{x}}^{\mathcal{C}\hspace*{-0.2em}\ellC}\\ &\quad\quad\quad\quad\cdot\frac{1}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m}g(\underline{x})e^{-\frac{|\underline{x}-2\underline{t}|^2}{4}}e^{i\underline{\omega}\cdot\underline{x}} d\underline{x}\, d\underline{\omega}\, d\underline{t}. 
\end{align*} Let $\varphi(\cdot-2\underline{t})$ denote the Gaussian window translated by $-2\underline{t}$ and $\mathcal{F}$ the Fourier Transform in $\mathbb{R}^m$. Thus \[\langle\mathcal{B}f,\mathcal{B}g\rangle_{\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)} =\frac{1}{\pi^m}\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\overline{\mathcal{F}^{-1}\big(f\cdot\varphi(\cdot-2\underline{t})\big)(\underline{\omega})}^{\mathcal{C}\hspace*{-0.2em}\ellC}\mathcal{F}^{-1}\big(g\cdot\varphi(\cdot-2\underline{t})\big)(\underline{\omega}) d\underline{\omega}\; d\underline{t}.\] The Plancherel Theorem now gives us \begin{align*} \langle\mathcal{B}f,\mathcal{B}g\rangle_{\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)} &=\frac{1}{\pi^m}\int\limits_{\mathbb{R}^m}\int\limits_{\mathbb{R}^m}\overline{f(\underline{x})e^{-\frac{|\underline{x}-2\underline{t}|^2}{4}}}^{\mathcal{C}\hspace*{-0.2em}\ellC}g(\underline{x})e^{-\frac{|\underline{x}-2\underline{t}|^2}{4}} d\underline{t}\, d\underline{x}\\ &=\frac{1}{\pi^m}\int\limits_{\mathbb{R}^m}f(\underline{x})g(\underline{x})\int\limits_{\mathbb{R}^m}\left(e^{-\frac{|\underline{x}-2\underline{t}|^2}{4}}\right)^2 d\underline{t}\, d\underline{x}. \end{align*} Last, we substitute $\underline{u}=2\underline{t}-\underline{x}$, and use the fact that $\int\limits_{\mathbb{R}^m}e^{-\frac{|\underline{u}|^2}{2}} d\underline{u}=\sqrt{2\pi}^m$. So, \begin{align*} \langle\mathcal{B}f,\mathcal{B}g\rangle_{\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)} &=\frac{1}{\pi^m}\int\limits_{\mathbb{R}^m}f(\underline{x})g(\underline{x}) d\underline{x}\int\limits_{\mathbb{R}^m}e^{-\frac{|u|^2}{2}}\frac{ d\underline{u}}{2^m}\\ &=\frac{1}{\sqrt{2\pi}^m}\langle f,g\rangle_{L^2(\mathbb{R}^m,\mathbb{R})}. \quad \square \end{align*} Another important property of the Segal-Bargmann transform is its invertibility. 
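Before turning to invertibility, the scaling constant of Proposition \ref{prop:isometry-classical} can be sanity-checked on a closed-form example of ours: for $m=1$, take $f(x)=e^{-x^2/4}$, so that $\|f\|_{L^2(\mathbb{R},\mathbb{R})}^2=\int_{\mathbb{R}}e^{-x^2/2}\,dx=\sqrt{2\pi}$.

```latex
\big(\mathcal{B}f\big)(z)
  = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-\frac{z^2}{2}+xz-\frac{x^2}{2}}\,dx
  = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{z^2}{2}}\,\sqrt{2\pi}\,e^{\frac{z^2}{2}}
  = 1,
\qquad
\langle\mathcal{B}f,\mathcal{B}f\rangle
  = \frac{1}{\pi}\iint e^{-|z|^2}\,dx\,dy
  = 1
  = \frac{1}{\sqrt{2\pi}}\,\langle f,f\rangle_{L^2(\mathbb{R},\mathbb{R})},
```

using $\int_{\mathbb{R}}e^{-x^2/2+xz}\,dx=\sqrt{2\pi}\,e^{z^2/2}$; both sides of the proposition equal $1$ here.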
\vspace*{1cm} \begin{prop}\label{prop:Binvertible} \leavevmode \begin{enumerate} \item $\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)$ is the image of $L^2(\mathbb{R}^m,\mathbb{R})$ under the Segal-Bargmann transform. \item The Segal-Bargmann transform is invertible. \end{enumerate} \end{prop} Proof: For the proof of (i) we refer to \cite{groechenig}. (ii) then follows directly from the fact that the transform is unitary up to a constant, cf. Proposition \ref{prop:isometry-classical}. $\square$ \section{Segal-Bargmann modules} \label{sec:BargmannModules} We will now look at how the Segal-Bargmann transform of Definition \ref{def:BargmannTransform} acts on Clifford algebra-valued functions. So, from now on, let $f$ be an element of $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$. Then, \[\big(\mathcal{B}f\big)(\underline{z}) = \frac{1}{\sqrt{2\pi}^m} \int_{\mathbb{R}^m} e^{-\frac{\underline{z}\cdot\underline{z}}{2} + \underline{x} \cdot\underline{z} - \frac{\underline{x}\cdot\underline{x}}{4}} f(\underline{x})\, d\underline{x},\] is a function with values in the complex Clifford algebra, $\mathcal{B}f:\mathcal{C}\hspace*{-0.2em}\ellC^m\to\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}$. Note that Proposition \ref{prop:STFT-B} holds for functions of $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^\mathbb{R})$ as well. Consider the function space $L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})$ as defined in section \ref{subsec:HilbertCliffmod}. Just as in the real case, the Segal-Bargmann transform of Clifford algebra-valued functions is unitary up to a scaling constant, as the following proposition shows.
\begin{prop}\label{prop:SegalBargmannIsometry} Let $f,g\in L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$. Then \[\langle \mathcal{B}f, \mathcal{B}g \rangle_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}=\frac{1}{\sqrt{2\pi}^m}\langle f, g \rangle_{L^2(\mathbb{R}^m, \mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})}.\] \end{prop} Proof: Since the Segal-Bargmann transform is linear, $f=\sum\limits_A f_Ae_A$ implies that $\mathcal{B}f=\sum\limits_A \mathcal{B}f_Ae_A$. Hence, \begin{align*} \langle \mathcal{B}f, \mathcal{B}g \rangle_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})} & = \frac{1}{\pi^m}\int_{\mathcal{C}\hspace*{-0.2em}\ellC^m} \big(\mathcal{B}f\big)^{\dagger}(\underline{z}) \big(\mathcal{B}g\big)(\underline{z}) e^{-|\underline{z}|^2} d\underline{x}\,d\underline{y} \\ & = \sum_{A,B} \frac{1}{\pi^m}\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\overline{\big(\mathcal{B}f_A\big)}^{\mathcal{C}\hspace*{-0.2em}\ellC} (\underline{z})\overline{e_A} \big(\mathcal{B}g_B\big)(\underline{z})e_B e^{-|\underline{z}|^2} d\underline{x}\,d\underline{y} \\ & = \sum_{A,B} \langle \mathcal{B}f_A, \mathcal{B}g_B \rangle_{\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)} \overline{e_A}e_B. \end{align*} In Proposition \ref{prop:isometry-classical} we have shown that $\langle \mathcal{B}f, \mathcal{B}g \rangle_{\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m, \mathcal{C}\hspace*{-0.2em}\ellC)}=\frac{1}{\sqrt{2\pi}^m}\langle f, g \rangle_{L^2(\mathbb{R}^m, \mathbb{R})}$ is true for the classical Segal-Bargmann transform $\mathcal{B}: L^2(\mathbb{R}^m,\mathbb{R})\to\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)$.
Therefore \begin{align*} \langle \mathcal{B}f, \mathcal{B}g \rangle_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})} & = \sum_{A,B} \frac{1}{\sqrt{2\pi}^m}\langle f_A, g_B \rangle_{L^2(\mathbb{R}^m,\mathbb{R})} \overline{e_A}e_B \\ & = \frac{1}{\sqrt{2\pi}^m}\sum\limits_{A,B}\,\int\limits_{\mathbb{R}^m}\overline{f_A(\underline{x})}g_B(\underline{x})\overline{e_A}e_B d\underline{x}\\ & = \frac{1}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m}\overline{f(\underline{x})}g(\underline{x}) d\underline{x} = \frac{1}{\sqrt{2\pi}^m}\langle f, g \rangle_{L^2(\mathbb{R}^m, \mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})} \quad \square \end{align*} A direct consequence is the following corollary. \begin{cor}\label{cor:SegalBargmannIsometry} Let $\left\|\cdot\right\|_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}=\sqrt{\left[\langle\cdot,\cdot\rangle_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}\right]_0}$. Then \[\left\|\mathcal{B}f\right\|_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}^2=\frac{1}{\sqrt{2\pi}^m}\left\|f\right\|_{L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})}^2.\] Thus $\mathcal{B}$ is an isometry from $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$ into $L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})$ up to $\frac{1}{\sqrt{2\pi}^m}$. \end{cor} In Theorem \ref{th:L2basis}, an orthonormal basis of the space $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$ was established.
The following theorem shows that the Segal-Bargmann transform maps the elements of this basis onto functions $\underline{z}^lP_k(\underline{z})$. \begin{thm}\label{th:zlPk} Let $\mathcal{B}$ be the Segal-Bargmann transform, $H_{l,m,k}$ a generalized Clifford Hermite Polynomial as defined in Definition \ref{def:CliffHerm} and $P_k$ an inner spherical monogenic of degree $k$. Then \[\left(\mathcal{B}\left(H_{l,m,k}(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}P_k(\underline{x})\right)\right)(\underline{z})=\underline{z}^lP_k(\underline{z})\] \end{thm} Proof: Our first step follows \cite{PSS}. Here, \begin{align*} &\left(\mathcal{B}\left(H_{l,m,k}(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}P_k(\underline{x})\right)\right)(\underline{z})\\ &\quad\quad\quad\quad=\frac{1}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m}e^{-\frac{\underline{z}\cdot\underline{z}}{2}+\underline{x}\cdot\underline{z}-\frac{\underline{x}\cdot\underline{x}}{2}}H_{l,m,k}(\underline{x})P_k(\underline{x}) d\underline{x}\\ &\quad\quad\quad\quad\stackrel{(\ref{eq:CliffHermitePolynomials})}{=}\frac{(-1)^l}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m}e^{-\frac{\underline{z}\cdot\underline{z}}{2}+\underline{x}\cdot\underline{z}}D_{\underline{x}}^l\left(e^{-\frac{|\underline{x}|^2}{2}}P_k(\underline{x})\right) d\underline{x}\\ &\quad\quad\quad\quad=\frac{1}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m}D_{\underline{x}}^l\left(e^{-\frac{\underline{z}\cdot\underline{z}}{2}+\underline{x}\cdot\underline{z}}\right)e^{-\frac{|\underline{x}|^2}{2}}P_k(\underline{x})d\underline{x}\\ &\quad\quad\quad\quad=\frac{1}{\sqrt{2\pi}^m}\underline{z}^l\int\limits_{\mathbb{R}^m}e^{-\frac{\underline{z}\cdot\underline{z}}{2}+\underline{x}\cdot\underline{z}-\frac{\underline{x}\cdot\underline{x}}{4}}P_k(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}d\underline{x}\\ &\quad\quad\quad\quad=\underline{z}^l\left(\mathcal{B}\left(P_k(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}\right)\right)(\underline{z}). 
\end{align*} Next, we calculate $\mathcal{B}(P_k(\underline{x})e^{-|\underline{x}|^2/4}) $ using the short-time Fourier Transform. We obtain \begin{align*} V_{\varphi}\left(P_k(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}\right)(2\underline{t}, -\underline{\omega}) & = \frac{1}{\sqrt{2\pi}^m} \int_{\mathbb{R}^m} P_k(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}e^{-\frac{|\underline{x}-2\underline{t}|^2}{4}}e^{i\underline{\omega}\cdot\underline{x}} d\underline{x} \\ & = \frac{1}{\sqrt{2\pi}^m} P_k\left(-i\partial_{\underline{\omega}}\right) \int_{\mathbb{R}^m} e^{-\frac{|\underline{x}|^2}{4}}e^{-\frac{|\underline{x}|^2}{4}+\underline{x}\cdot\underline{t} -|\underline{t}|^2}e^{i\underline{\omega}\cdot\underline{x}} d\underline{x} \\ & = \frac{1}{\sqrt{2\pi}^m} P_k\left(-i\partial_{\underline{\omega}}\right) \int_{\mathbb{R}^m} e^{-\frac{|\underline{x}|^2}{2}+\underline{x}\cdot\underline{t} -\frac{|\underline{t}|^2}{2}}e^{-\frac{|\underline{t}|^2}{2}}e^{i\underline{\omega}\cdot\underline{x}} d\underline{x} \\ & = \frac{1}{\sqrt{2\pi}^m} e^{-\frac{|\underline{t}|^2}{2}}P_k\left(-i\partial_{\underline{\omega}}\right) \int_{\mathbb{R}^m} e^{-\frac{|\underline{x}-\underline{t}|^2 }{2}}e^{i\underline{\omega}\cdot\underline{x}} d\underline{x} \\ & = \frac{1}{\sqrt{2\pi}^m} e^{-\frac{|\underline{t}|^2}{2}}P_k\left(-i\partial_{\underline{\omega}}\right)\left(e^{i\underline{\omega}\cdot\underline{t}} \int_{\mathbb{R}^m} e^{-\frac{|\underline{x}|^2 }{2}}e^{i\underline{\omega}\cdot\underline{x}} d\underline{x} \right) \end{align*} Since $\frac{1}{\sqrt{2\pi}^m}\int_{\mathbb{R}^m}e^{-\frac{|\underline{x}|^2}{2}}e^{i\underline{\omega}\cdot\underline{x}}\, d\underline{x}$ is the inverse Fourier Transform of the Gaussian $e^{-\frac{|\underline{x}|^2}{2}}$, which is invariant under the (inverse) Fourier Transform and hence equals $e^{-\frac{|\underline{\omega}|^2}{2}}$, we get \begin{align*} V_{\varphi}\left(P_k(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}\right)(2\underline{t}, -\underline{\omega}) & =
e^{-\frac{|\underline{t}|^2}{2}}P_k\left(-i\partial_{\underline{\omega}}\right)\left(e^{i\underline{\omega}\cdot\underline{t} -\frac{|\underline{\omega}|^2}{2}}\right) \\ & = e^{-\frac{|\underline{t}|^2}{2}}P_k\left(-i(i\underline{t}-\underline{\omega})\right) \left(e^{i\underline{\omega}\cdot\underline{t} -\frac{|\underline{\omega}|^2}{2}}\right) \\ & = e^{-\frac{|\underline{t}|^2}{2}} \left(e^{i\underline{\omega}\cdot\underline{t} -\frac{|\underline{\omega}|^2}{2}}\right) P_k(\underline{z}) \\ & = e^{-\frac{|\underline{z}|^2}{2}} e^{i\underline{\omega}\cdot \underline{t}}P_k(\underline{z}) \end{align*} with $\underline{z}=\underline{t}+i\underline{\omega}$. Because of Proposition \ref{prop:STFT-B}, this leads to \[\left(\mathcal{B}\left(P_k(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}\right)\right)(\underline{z})=P_k(\underline{z}).\] Together with the first step, the proof is complete. $\square$ We can now define an analogue to the classical Segal-Bargmann space. \begin{defn} The closure of \[\mathrm{span}\left\{\underline{z}^lP_k^{(j)}(\underline{z})\middle|l,k\in\mathbb{N}_0,j=1,\dots,\mathrm{dim}(M_l^+(k))\right\}\] is called the \textbf{Segal-Bargmann module} $\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})$. \end{defn} \begin{rem} In this definition and what follows we drop the property that a function of the Segal-Bargmann module (or space) has to be an entire function. That means we consider the Segal-Bargmann module just as a weighted $L^2$-module. \end{rem} A consequence of Theorem \ref{th:zlPk} is the following. \begin{cor}\label{cor:F2basis} For all $l,k\in\mathbb{N}_0$, let $\left\{P_k^{(j)}(\underline{x})\right\}_{j=1,2,\dots,\mathrm{dim}(M_l^+(k))}$ be an orthonormal basis of $M_l^+(k)$ and $\gamma_{l,k}$ defined as in Theorem \ref{th:CliffordHermiteOrtho}.
Then \[\left\{\sqrt{\frac{\sqrt{2\pi}^m}{\gamma_{l,k}}}\underline{z}^lP_k^{(j)}(\underline{z})\middle|l,k\in\mathbb{N}_0,j=1,\dots,\mathrm{dim}(M_l^+(k))\right\}\] is an orthonormal basis of the Segal-Bargmann module $\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})$. \end{cor} Proof: Since the Segal-Bargmann transform is linear, Theorem \ref{th:zlPk} shows that it maps an element $\phi_{l,k,j}(\underline{x})=\frac{1}{\sqrt{\gamma_{l,k}}}H_{l,m,k}(\underline{x})P_k^{(j)}(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}$ of the orthonormal basis of $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$ (see Theorem \ref{th:L2basis}) onto \[\left(\mathcal{B}\phi_{l,k,j}\right)(\underline{z})=\frac{1}{\sqrt{\gamma_{l,k}}}\underline{z}^lP_k^{(j)}(\underline{z}).\] The statement now follows directly from Proposition \ref{prop:SegalBargmannIsometry} and Corollary \ref{cor:SegalBargmannIsometry}, which say that $\left\|\mathcal{B}\phi_{l,k,j}\right\|^2_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}=\frac{1}{\sqrt{2\pi}^m}\left\|\phi_{l,k,j}\right\|^2_{L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})}=\frac{1}{\sqrt{2\pi}^m}$ and that $\mathcal{B}$ is unitary up to this scaling constant; normalizing by the factor $\sqrt{\sqrt{2\pi}^m}$ yields the claim. $\square$ \begin{thm} The Segal-Bargmann module is the image of $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$ under the Segal-Bargmann transform, i.e. \[\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})=L^2\left(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}\right).\] \end{thm} Proof: First, let $F\in\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})$.
By construction there exists a function $f\in L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$ so that $\mathcal{B}f=F$. Since $\mathcal{B}$ is unitary up to a constant, we know that $F\in L^2\left(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}\right)$. Hence $\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})\subseteq L^2\left(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}\right)$. We now show the opposite inclusion. Let $F\in L^2\left(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}\right)$. Then $F$ can be written as $F=\sum_AF_Ae_A$ with $F_A:\mathcal{C}\hspace*{-0.2em}\ellC^m\to\mathcal{C}\hspace*{-0.2em}\ellC$ for all $A$.
Since \begin{align*} \left\|F\right\|^2_{L^2\left(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}\right)}&=\left[\langle\sum_AF_Ae_A,\sum_BF_Be_B\rangle\right]_0\\ &=\sum_A\left\|F_A\right\|^2_{L^2\left(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}\right)}=\sum_A\left\|F_A\right\|^2_{L^2\left(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ellC\right)} \end{align*} is finite if and only if $\left\|F_A\right\|_{L^2\left(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ellC\right)}$ is finite for every $A$, we know that $F_A\in L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ellC)=\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)$, cf. Proposition \ref{prop:Binvertible} (i). Proposition \ref{prop:Binvertible} (ii) tells us that $\mathcal{B}:L^2(\mathbb{R}^m,\mathbb{R})\to\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ellC)$ is invertible, so for each $A$ there exists $f_A\in L^2(\mathbb{R}^m,\mathbb{R})$ so that $\mathcal{B}f_A=F_A$. Since $\mathcal{B}$ is linear, \[F=\sum_AF_Ae_A=\sum_A(\mathcal{B}f_A)e_A=\mathcal{B}\left(\sum_Af_Ae_A\right),\] so there exists a function $\sum_Af_Ae_A=f\in L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$ such that $\mathcal{B}f=F$. Therefore $F\in\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})$.
$\square$ \section{A dictionary for the Segal-Bargmann transform} \label{sec:dictionary} In this section, we give a series representation for the Segal-Bargmann transform $\mathcal{B}$ on the right Clifford-module $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$. By demonstrating that this representation converges absolutely and locally uniformly, we will show that $\mathcal{B}f$ is well-defined and can be represented in kernel form. Our approach closely follows that of R. Bardenet and A. Hardy \cite{Bardenet-Hardy}, who have shown analogous properties of the classical Segal-Bargmann transform on $L^2(\mathbb{R}^m,\mathbb{R})$ and of other transforms. For the rest of this section, we will shorten our notation by writing\linebreak $L^2=L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$, $\mathcal{F}^2=\mathcal{F}^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})$, $\langle\cdot,\cdot\rangle_{\mathcal{F}^2}=\langle\cdot,\cdot\rangle_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}$ and $\|\cdot\|_{\mathcal{F}^2}=\|\cdot\|_{L^2(\mathcal{C}\hspace*{-0.2em}\ellC^m,\frac{e^{-|\underline{z}|^2}}{\pi^m},\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC})}$. 
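Before deriving the series representation, a quick numerical sanity check of the integral formula for $\mathcal{B}$ may be helpful. The sketch below is an editorial illustration (function names are ours, not from the text): in the scalar case $m=1$ the transform reads $(\mathcal{B}f)(z)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-z^2/2+xz-x^2/4}f(x)\,dx$, and the Gaussian $f(x)=e^{-x^2/4}$ is mapped to the constant function $1$.

```python
import cmath
import math

def segal_bargmann_1d(f, z, R=12.0, n=4000):
    # Trapezoidal approximation of
    #   (Bf)(z) = (2*pi)^(-1/2) * Integral_{-R}^{R} exp(-z^2/2 + x*z - x^2/4) f(x) dx
    # in the scalar case m = 1; the integrand decays like a Gaussian, so
    # truncation at |x| = R and a fine uniform grid give high accuracy.
    h = 2.0 * R / n
    total = 0.0 + 0.0j
    for i in range(n + 1):
        x = -R + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * cmath.exp(-z * z / 2 + x * z - x * x / 4) * f(x)
    return total * h / math.sqrt(2 * math.pi)

# The Gaussian f(x) = exp(-x^2/4) should be mapped to the constant function 1,
# since its image is proportional to the monomial z^0.
f0 = lambda x: math.exp(-x * x / 4)
for z in (0j, 1.0 + 0.5j, -0.3 + 2.0j):
    assert abs(segal_bargmann_1d(f0, z) - 1) < 1e-6
```

The closed form behind the check: $\int_{\mathbb{R}}e^{xz-x^2/2}\,dx=\sqrt{2\pi}\,e^{z^2/2}$, which cancels the $e^{-z^2/2}$ prefactor exactly.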
Since the set $\left\{\phi_{l,k,j}\right\}_{l,k\in\mathbb{N}_0,j\in\{1,\dots,\mathrm{dim}(M_l^+(k))\}}$ of Hermite functions \begin{equation} \phi_{l,k,j}(\underline{x})=\frac{1}{\sqrt{\gamma_{l,k}}}H_{l,m,k}(\underline{x})P_k^{(j)}(\underline{x})e^{-\frac{|\underline{x}|^2}{4}}\label{eq:phi2} \end{equation} is a basis of $L^2$, see section \ref{sec:CliffHemitepol}, each Clifford algebra-valued square integrable function $f(\underline{x})$ can be expanded as \[f(\underline{x})=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\phi_{l,k,j}(\underline{x})\langle\phi_{l,k,j},f\rangle_{L^2}.\] Hence, \begin{align} &\big(\mathcal{B}f\big)(\underline{z})\nonumber\\ &\quad=\frac{1}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m} \left(\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\phi_{l,k,j}(\underline{x})\langle\phi_{l,k,j},f\rangle_{L^2}\right)e^{-\frac{\underline{z}\cdot\underline{z}}{2}+\underline{x}\cdot\underline{z}-\frac{\underline{x}\cdot\underline{x}}{4}}d\underline{x}\nonumber\\ &\quad=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\left(\frac{1}{\sqrt{2\pi}^m}\int\limits_{\mathbb{R}^m}\phi_{l,k,j}(\underline{x})e^{-\frac{\underline{z}\cdot\underline{z}}{2}+\underline{x}\cdot\underline{z}-\frac{\underline{x}\cdot\underline{x}}{4}}d\underline{x}\right)\langle\phi_{l,k,j},f\rangle_{L^2}\nonumber\\ &\quad=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\big(\mathcal{B}\phi_{l,k,j}\big)(\underline{z})\langle\phi_{l,k,j},f\rangle_{L^2}\nonumber\\ &\quad=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\frac{1}{\sqrt{\gamma_{l,k}}}\underline{z}^lP_k^{(j)}(\underline{z})\langle\phi_{l,k,j},f\rangle_{L^2}\nonumber\\ 
&\quad=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\langle\phi_{l,k,j},f\rangle_{L^2}\label{eq:Psi} \end{align} with $\Psi_{l,k,j}(\underline{z})=\frac{1}{\sqrt{\gamma_{l,k}}}\underline{z}^lP_k^{(j)}(\underline{z})$; the termwise integration in the second equality is justified a posteriori by the locally uniform convergence established below. To be able to show convergence of the series expansion (\ref{eq:Psi}), we need the following two lemmas. \begin{lem}\label{lem:Pk-z} Let $P_s(\underline{z})=\sum\limits_{|\alpha|=s}a_\alpha\underline{z}^\alpha$ be a homogeneous $\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}$-polynomial of degree $s$, with $a_\alpha\in\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}$ for all $|\alpha|=s$. Then \begin{enumerate} \item $\left\|P_s(\underline{z})\right\|^2_{\mathcal{F}^2}=\sum\limits_{|\alpha|=s}\left|a_{\alpha}\right|_0^2\alpha!$, \item $\left|P_s(\underline{z})\right|_0^2\leq\frac{1}{s!}\left\|P_s(\underline{z})\right\|^2_{\mathcal{F}^2}\left|\underline{z}\right|_0^{2s}$. 
\end{enumerate} \end{lem} Proof: \begin{enumerate} \item We have \begin{align*} \left\|P_s(\underline{z})\right\|_{\mathcal{F}^2}^2 &=\left[\langle P_s(\underline{z}),P_s(\underline{z})\rangle_{\mathcal{F}^2}\right]_0 =\frac{1}{\pi^m}\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\left[P_s^{\dagger}(\underline{z})P_s(\underline{z})\right]_0e^{-|\underline{z}|^2} d\underline{x}\,d\underline{y}\\ &=\frac{1}{\pi^m}\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}\left[\left(\sum\limits_{|\alpha|=s}a_{\alpha}^{\dagger}(\overline{\underline{z}}^{\mathcal{C}\hspace*{-0.2em}\ellC})^\alpha\right)\left(\sum\limits_{|\beta|=s}a_\beta\underline{z}^\beta\right)\right]_0e^{-|\underline{z}|^2} d\underline{x}\,d\underline{y}\\ &=\frac{1}{\pi^m}\sum\limits_{|\alpha|=s}\sum\limits_{|\beta|=s}\left[a_{\alpha}^{\dagger}a_{\beta}\right]_0\int\limits_{\mathcal{C}\hspace*{-0.2em}\ellC^m}(\overline{\underline{z}}^{\mathcal{C}\hspace*{-0.2em}\ellC})^\alpha\underline{z}^\beta e^{-|\underline{z}|^2} d\underline{x}\,d\underline{y}. \end{align*} We solve the integral by transforming the complex coordinates to polar coordinates, i.e. $z_j = r_je^{i\varphi_j}$, $j = 1,\dots,m$. Then, \begin{align*} \left\|P_s(\underline{z})\right\|_{\mathcal{F}^2}^2 =\frac{1}{\pi^m} & \sum\limits_{|\alpha|=s}\sum\limits_{|\beta|=s}\left[a_{\alpha}^{\dagger}a_{\beta}\right]_0\int\limits_{[0,\infty)^m}\int\limits_{[0,2\pi]^m}r_1^{\alpha_1+\beta_1}\dots r_m^{\alpha_m+\beta_m}\\ &\cdot e^{i(\beta_1-\alpha_1)\varphi_1}\dots e^{i(\beta_m-\alpha_m)\varphi_m}e^{-r_1^2-\dots-r_m^2}r_1\dots r_m d\underline{\varphi}\, d\underline{r}. \end{align*} The integral $\int\int\dots d\underline{\varphi}\,d\underline{r}$ is $0$ if $\alpha_j\neq\beta_j$ for some $j=1,\dots,m$. 
So, using $\int_0^\infty r^{2n+1}e^{-r^2} dr=\frac{n!}{2}$, we get \begin{align*} \left\|P_s(\underline{z})\right\|_{\mathcal{F}^2}^2 &=\frac{1}{\pi^m}\sum\limits_{|\alpha|=s}\left[a_{\alpha}^{\dagger}a_{\alpha}\right]_0(2\pi)^m\int\limits_{[0,\infty)^m}r_1^{2\alpha_1+1}\dots r_m^{2\alpha_m+1}e^{-r_1^2-\dots-r_m^2} d\underline{r}\\ &=2^m\sum\limits_{|\alpha|=s}|a_{\alpha}|_0^2\prod\limits_{j=1}^m\frac{\alpha_j!}{2}\\ &=\sum\limits_{|\alpha|=s}|a_{\alpha}|_0^2\alpha! \end{align*} \item We use the multinomial theorem, \begin{equation}\label{eq:BinomialTheorem} |\underline{z}|_0^{2s}=\left(|z_1|^2+\dots+|z_m|^2\right)^s=\sum\limits_{|\alpha|=s}\frac{s!}{\alpha!}|\underline{z}|^{2\alpha}, \end{equation} and the Cauchy--Schwarz inequality (CS) to get \begin{align*} \left|P_s(\underline{z})\right|_0^2 &=\left|\sum\limits_{|\alpha|=s}a_{\alpha}\underline{z}^{\alpha}\right|_0^2\\ &\leq\left(\sum\limits_{|\alpha|=s}\left|a_{\alpha}\underline{z}^{\alpha}\right|_0\right)^2 =\left(\sum\limits_{|\alpha|=s}\sqrt{\frac{\alpha!}{s!}}\left|a_{\alpha}\right|_0\sqrt{\frac{s!}{\alpha!}}\left|\underline{z}^\alpha\right|\right)^2\\ &\stackrel{CS}{\leq}\left(\frac{1}{s!}\sum\limits_{|\alpha|=s}\alpha!\left|a_{\alpha}\right|_0^2\right)\left(\sum\limits_{|\alpha|=s}\frac{s!}{\alpha!}\left|\underline{z}^\alpha\right|^2\right)\\ &\stackrel{(i),(\ref{eq:BinomialTheorem})}{=}\frac{1}{s!}\left\|P_s(\underline{z})\right\|^2_{\mathcal{F}^2}\left|\underline{z}\right|_0^{2s}. \quad \square \end{align*} \end{enumerate} \begin{lem}\label{lem:sumPsi-infty} Let $\Psi_{l,k,j}$ be defined as in (\ref{eq:Psi}). Then, \[\sup\limits_{\underline{z}\in K}\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}|\Psi_{l,k,j}(\underline{z})|_0^2<\infty\] for any compact set $K\subset\mathcal{C}\hspace*{-0.2em}\ellC^m$. 
\end{lem} Proof: Let $\mathrm{SUP}=\sup\limits_{\underline{z}\in K}\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}|\Psi_{l,k,j}(\underline{z})|_0^2$. We first note that each $\Psi_{l,k,j}(\underline{z})=\frac{1}{\sqrt{\gamma_{l,k}}}\underline{z}^lP_k^{(j)}(\underline{z})$ is a homogeneous $\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathcal{C}\hspace*{-0.2em}\ellC}$-polynomial of degree $l+k$. Hence, with Lemma \ref{lem:Pk-z} (ii), we get \[\mathrm{SUP}\leq\sup\limits_{\underline{z}\in K}\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\frac{1}{(l+k)!}\left\|\Psi_{l,k,j}\right\|^2_{\mathcal{F}^2}\left|\underline{z}\right|_0^{2l+2k}.\] We know that $\Psi_{l,k,j}(\underline{z})=\big(\mathcal{B}\phi_{l,k,j}\big)(\underline{z})$ (cf. Theorem \ref{th:zlPk} and the proof of Corollary \ref{cor:F2basis}) and that \[\left\|\mathcal{B}f\right\|_{\mathcal{F}^2}^2=\frac{1}{\sqrt{2\pi}^m}\left\|f\right\|_{L^2}^2\] for all $f\in L^2$ (cf. Corollary \ref{cor:SegalBargmannIsometry}). Hence, \[\left\|\Psi_{l,k,j}\right\|^2_{\mathcal{F}^2}=\frac{1}{\sqrt{2\pi}^m}\left\|\phi_{l,k,j}\right\|^2_{L^2}=\frac{1}{\sqrt{2\pi}^m}.\] We also know that $\mathrm{dim}(M_l^+(k))=\binom{m+k-2}{k}$, cf. section \ref{sub:Pk}. 
Together, we get \begin{align*} \mathrm{SUP}&\leq\frac{1}{\sqrt{2\pi}^m}\sup\limits_{\underline{z}\in K}\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\binom{m+k-2}{k}\frac{1}{(l+k)!}\left|\underline{z}\right|_0^{2l+2k}\\ &\leq\frac{1}{\sqrt{2\pi}^m}\sup\limits_{\underline{z}\in K}\left(\sum\limits_{l=0}^\infty\frac{1}{l!}\left|\underline{z}\right|_0^{2l}\right)\left(\sum\limits_{k=0}^\infty\binom{m+k-2}{k}\frac{1}{k!}\left|\underline{z}\right|_0^{2k}\right)\\ &=\frac{1}{\sqrt{2\pi}^m}\sup\limits_{\underline{z}\in K}\left(\sum\limits_{l=0}^\infty\frac{1}{l!}\left|\underline{z}\right|_0^{2l}\right)\left(\sum\limits_{k=0}^\infty\binom{m+k-2}{k}\frac{1}{2^{mk}k!}\left(2^m\left|\underline{z}\right|_0^2\right)^k\right), \end{align*} where the second inequality uses $\frac{1}{(l+k)!}\leq\frac{1}{l!\,k!}$. It can be shown via induction that $\binom{m+k-2}{k}\leq 2^{mk}$ for all $k\in\mathbb{N}_0$. Hence, \begin{align*} \mathrm{SUP}&\leq\frac{1}{\sqrt{2\pi}^m}\sup\limits_{\underline{z}\in K}\left(\sum\limits_{l=0}^\infty\frac{1}{l!}\left|\underline{z}\right|_0^{2l}\right)\left(\sum\limits_{k=0}^\infty\frac{1}{k!}\left(2^m\left|\underline{z}\right|_0^2\right)^k\right)\\ &=\frac{1}{\sqrt{2\pi}^m}\sup\limits_{\underline{z}\in K}e^{|\underline{z}|_0^2}\cdot e^{2^m\left|\underline{z}\right|_0^2}<\infty. \quad \square \end{align*} We are now fully equipped to show convergence of the series expansion (\ref{eq:Psi}). \begin{prop}\label{prop:convergence} Let $\phi_{l,k,j}$ be defined as in (\ref{eq:phi2}), let $\Psi_{l,k,j}$ be defined as in (\ref{eq:Psi}) and let $f\in L^2$. Then, for each compact set $K\subset\mathcal{C}\hspace*{-0.2em}\ellC^m$, \[\sup\limits_{\underline{z}\in K}\left|\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\langle\phi_{l,k,j},f\rangle_{L^2}\right|_0<\infty.\] \end{prop} Proof: Let $\mathrm{SUM}=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\langle\phi_{l,k,j},f\rangle_{L^2}$. 
Since $|\cdot|_0$ is submultiplicative, we have \begin{align*} \left|\mathrm{SUM}\right|_0 &\leq\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\left|\Psi_{l,k,j}(\underline{z})\langle\phi_{l,k,j},f\rangle_{L^2}\right|_0\\ &\leq\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\left|\Psi_{l,k,j}(\underline{z})\right|_0\cdot\left|\langle\phi_{l,k,j},f\rangle_{L^2}\right|_0. \end{align*} Applying the Cauchy--Schwarz inequality to the sequences $\left(\left|\Psi_{l,k,j}(\underline{z})\right|_0\right)$ and $\left(\left|\langle\phi_{l,k,j},f\rangle_{L^2}\right|_0\right)$ gives \[ \left|\mathrm{SUM}\right|_0 \leq\sqrt{\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\left|\Psi_{l,k,j}(\underline{z})\right|_0^2}\cdot\sqrt{\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\left|\langle\phi_{l,k,j},f\rangle_{L^2}\right|_0^2}. \] By Proposition \ref{prop:fg0} and a Bessel-type estimate for the coefficient sequence of $f$ with respect to the orthonormal system $\{\phi_{l,k,j}\}$ (recall $\left\|\phi_{l,k,j}\right\|_{L^2}=1$), the second factor is bounded by $2^m\left\|f\right\|_{L^2}$, so \[ \left|\mathrm{SUM}\right|_0 \leq 2^m\left\|f\right\|_{L^2}\sqrt{\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\left|\Psi_{l,k,j}(\underline{z})\right|_0^2}. \] Together with Lemma \ref{lem:sumPsi-infty} the proof is complete. $\square$ Proposition \ref{prop:convergence} shows that $\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\langle\phi_{l,k,j},f\rangle_{L^2}$ converges absolutely and locally uniformly in $\underline{z}\in\mathcal{C}\hspace*{-0.2em}\ellC^m$. 
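The elementary facts used in the two lemmas above — the Gaussian moment $\int_0^\infty r^{2n+1}e^{-r^2}\,dr=\frac{n!}{2}$, the multinomial identity (\ref{eq:BinomialTheorem}), and the bound $\binom{m+k-2}{k}\leq 2^{mk}$ — can be verified computationally. The following sketch is an editorial illustration (function names are ours):

```python
import math
from itertools import product

# (1) Gaussian moment: int_0^infty r^(2n+1) e^(-r^2) dr = n!/2
#     (substitute u = r^2 to reduce it to Gamma(n+1)/2).
def gaussian_moment(n, R=8.0, steps=40_000):
    h = R / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * r ** (2 * n + 1) * math.exp(-r * r)
    return total * h

for n in range(6):
    assert abs(gaussian_moment(n) - math.factorial(n) / 2) < 1e-5

# (2) Multinomial theorem: (|z_1|^2+...+|z_m|^2)^s = sum_{|alpha|=s} (s!/alpha!) |z|^(2*alpha).
def multinomial_check(vals, s):
    lhs = sum(v * v for v in vals) ** s
    rhs = 0.0
    for alpha in product(range(s + 1), repeat=len(vals)):
        if sum(alpha) == s:
            coeff = math.factorial(s)
            for a in alpha:
                coeff //= math.factorial(a)   # exact integer multinomial coefficient
            term = float(coeff)
            for v, a in zip(vals, alpha):
                term *= (v * v) ** a
            rhs += term
    return lhs, rhs

lhs, rhs = multinomial_check([0.7, 1.3, 0.4], 4)
assert abs(lhs - rhs) < 1e-9

# (3) Dimension bound: C(m+k-2, k) <= 2^(m*k) for m >= 2, k >= 0.
for m in range(2, 7):
    for k in range(13):
        assert math.comb(m + k - 2, k) <= 2 ** (m * k)
```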
Since $\mathcal{B}f$ is the uniform limit of the triple sum on every compact subset of $\mathcal{C}\hspace*{-0.2em}\ellC^m$, it is well-defined and $\mathcal{B}$ can be represented as \begin{align*} \big(\mathcal{B}f\big)(\underline{z}) &=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\langle\phi_{l,k,j},f\rangle_{L^2}\\ &=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\int\limits_{\mathbb{R}^m}\overline{\phi_{l,k,j}(\underline{x})}f(\underline{x}) d\underline{x}\\ &=\int\limits_{\mathbb{R}^m}\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\overline{\phi_{l,k,j}(\underline{x})}f(\underline{x}) d\underline{x}\\ &=\left<\overline{\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\Psi_{l,k,j}(\underline{z})\overline{\phi_{l,k,j}}}\;,f\right>_{L^2}\\ &=\left<\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\phi_{l,k,j}\overline{\Psi_{l,k,j}(\underline{z})}\;,f\right>_{L^2}. \end{align*} Thus, \[T(\underline{x},\underline{z})=\sum\limits_{l=0}^\infty\sum\limits_{k=0}^\infty\sum\limits_{j=1}^{\mathrm{dim}(M_l^+(k))}\phi_{l,k,j}(\underline{x})\overline{\Psi_{l,k,j}(\underline{z})} =\frac{1}{\sqrt{2\pi}^m}e^{-\frac{\underline{z}\cdot\underline{z}}{2}+\underline{x}\cdot\underline{z}-\frac{\underline{x}\cdot\underline{x}}{4}}\] is the kernel of the Segal-Bargmann transform on $L^2(\mathbb{R}^m,\mathcal{C}\hspace*{-0.2em}\ell_m^{\mathbb{R}})$. \end{document}
\begin{document} \begin{frontmatter} \title{Double barrier reflected BSDEs with stochastic Lipschitz coefficient} \author{\inits{M.}\fnm{Mohamed}\snm{Marzougue}\corref{cor1}}\email{[email protected]} \author{\inits{M.}\fnm{Mohamed}\snm{El Otmani}}\email{[email protected]} \cortext[cor1]{Corresponding author.} \address{Laboratoire d'Analyse Math\'ematique et Applications (LAMA)\\ Facult\'{e} des Sciences Agadir, {Universit\'{e} Ibn Zohr}, {Maroc}} \markboth{M. Marzougue, M. El Otmani}{Double barrier reflected BSDEs with stochastic Lipschitz coefficient} \begin{abstract} This paper proves the existence and uniqueness of a solution to doubly reflected backward stochastic differential equations where the coefficient is stochastic Lipschitz, by means of the penalization method. \end{abstract} \begin{keywords} \kwd{BSDE and reflected BSDE} \kwd{Stochastic Lipschitz coefficient} \end{keywords} \begin{keywords}[2010] \kwd{60H20} \kwd{60H30} \kwd{65C30} \end{keywords} \received{21 July 2017} \revised{14 November 2017} \accepted{14 November 2017} \publishedonline{8 December 2017} \end{frontmatter} \section{Introduction} Backward Stochastic Differential Equations (BSDEs) were introduced (in the non-linear case) by Pardoux and Peng \cite{PP}. Precisely, given data $(\xi,f)$ consisting of a square-integrable random variable $\xi$ and a progressively measurable function $f$, a solution to the BSDE associated with the data $(\xi,f)$ is a pair of $\mathcal {F}_t$-adapted processes $(Y,Z)$ satisfying \begin{equation} \label{BSDE} Y_{t}=\xi+\int_{t}^{T} f(s,Y_{s},Z_{s})ds-\int_{t}^{T}Z_{s}dB_{s}, \quad0\leq t\leq T. \end{equation} These equations have attracted great interest due to their connections with mathematical finance \cite{EQ,EPeQ}, stochastic control and stochastic games \cite{Bi,HL} and partial differential equations \cite {Pa,PP2}. 
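To make equation (\ref{BSDE}) concrete, here is a small numerical illustration (ours, not taken from the references): a backward Euler scheme on a recombining binomial tree for the linear driver $f(y)=-cy$ with terminal condition $\xi=B_T^2$. For such a linear BSDE the explicit formula $Y_t=e^{-c(T-t)}\mathbb{E}[\xi\,|\,\mathcal{F}_t]$ holds, so $Y_0=Te^{-cT}$.

```python
import math

def solve_linear_bsde(T=1.0, N=1000, c=0.5):
    """Implicit backward Euler for Y_t = xi + int_t^T f(Y_s) ds - int_t^T Z_s dB_s
    with driver f(y) = -c*y and terminal condition xi = B_T^2, where the
    Brownian motion is approximated by a recombining binomial tree."""
    dt = T / N
    h = math.sqrt(dt)
    # terminal values: node k at step N carries B_T = (2k - N)*h
    Y = [((2 * k - N) * h) ** 2 for k in range(N + 1)]
    for i in range(N - 1, -1, -1):
        # implicit step: Y_i = E[Y_{i+1} | F_i] - c*Y_i*dt  =>  divide by (1 + c*dt)
        Y = [0.5 * (Y[k] + Y[k + 1]) / (1 + c * dt) for k in range(i + 1)]
    return Y[0]

# exact value of this linear BSDE: Y_0 = exp(-c*T) * E[B_T^2] = T * exp(-c*T)
y0 = solve_linear_bsde()
assert abs(y0 - 1.0 * math.exp(-0.5)) < 1e-3
```

The implicit discretization is unconditionally stable here and converges at rate $O(\Delta t)$; the tree gives $\mathbb{E}[B_T^2]=T$ exactly, so the only error is the time discretization of the driver.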
In their seminal paper \cite{PP}, Pardoux and Peng proved existence and uniqueness results in a Brownian framework under a Lipschitz condition on the coefficient. Moreover, many efforts have been made to relax the Lipschitz condition on the coefficient. In this context, Bender and Kohlmann \cite{BK} considered the so-called stochastic Lipschitz condition introduced by El Karoui and Huang \cite{ELH}. Further, El Karoui et al. \cite{EKPPQ} have introduced the notion of reflected BSDEs (RBSDEs in short), which is a BSDE whose solution is forced to stay above a lower barrier. In detail, a solution to such equations is a triple of processes $(Y,Z,K)$ satisfying \begin{equation} \label{RBSDE} Y_{t}=\xi+\int_{t}^{T} f(s,Y_{s},Z_{s})ds+K_T-K_t-\int _{t}^{T}Z_{s}dB_{s},\quad Y_t\geq L_t,\ 0\leq t\leq T, \end{equation} where $L$, the so-called barrier, is a given stochastic process. The role of the continuous increasing process $K$ is to push the state process upward with the minimal energy, in order to keep it above $L$; in this sense, it satisfies $\int_0^T(Y_t-L_t)dK_t=0$. The authors have proved that equation (\ref{RBSDE}) has a unique solution under square integrability of the terminal condition $\xi$ and the barrier $L$, and the Lipschitz property of the coefficient $f$. RBSDEs have been proven to be powerful tools in mathematical finance \cite{EPeQ}, in mixed game problems \cite{CK}, and in providing a probabilistic formula for the viscosity solution to an obstacle problem for a class of parabolic partial differential equations \cite{EKPPQ}. Later, Cvitanic and Karatzas \cite{CK} studied doubly reflected BSDEs (DRBSDEs in short). 
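The Skorokhod minimality condition $\int_0^T(Y_t-L_t)\,dK_t=0$ has a transparent discrete analogue: in a backward scheme, $K$ increases only at nodes where $Y$ is clamped to the barrier. The following toy scheme is our illustration (driver $f=0$, illustrative barrier $L_t=(\text{strike}-B_t)^+$ on a binomial tree), checking this property node by node:

```python
import math

def reflected_scheme(T=1.0, N=200, strike=1.0):
    """Discrete reflected BSDE with f = 0 on a binomial tree:
    Y_i = max(L_i, E[Y_{i+1}|F_i]), terminal condition xi = L_T,
    and local K-increment Y_i - E[Y_{i+1}|F_i]."""
    dt = T / N
    h = math.sqrt(dt)
    payoff = lambda x: max(strike - x, 0.0)   # barrier evaluated at B = x
    Y = [payoff((2 * k - N) * h) for k in range(N + 1)]
    skorokhod_ok = True
    for i in range(N - 1, -1, -1):
        newY = []
        for k in range(i + 1):
            x = (2 * k - i) * h
            cont = 0.5 * (Y[k] + Y[k + 1])    # E[Y_{i+1} | F_i]
            y = max(payoff(x), cont)
            dK = y - cont                     # push of the increasing process K
            # minimality: K may only increase where Y touches the barrier
            if dK > 1e-12 and y - payoff(x) > 1e-12:
                skorokhod_ok = False
            newY.append(y)
        Y = newY
    return Y[0], skorokhod_ok

y0, ok = reflected_scheme()
assert ok                      # K acts only on the set {Y = L}
assert y0 >= 1.0 - 1e-12       # Y_0 >= L_0 = (strike - B_0)^+ = 1
```

By construction $dK>0$ forces $Y$ to equal the barrier at that node, which is exactly the discrete form of the minimality condition.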
A solution to a DRBSDE with generator $f$, terminal condition $\xi$ and two barriers $L$ and $U$ is a quadruple $(Y, Z,K^+, K^-)$ which satisfies \begin{eqnarray} \label{a} &&\hspace{-1cm} \begin{cases} Y_{t}=\xi+\displaystyle\int_{t}^{T} f(s,Y_{s},Z_{s})ds+\bigl(K_{T}^{+}-K_{t}^{+}\bigr)-\bigl(K_{T}^{-}-K_{t}^{-}\bigr)- \displaystyle\int_{t}^{T}Z_{s}dB_{s} \\ L_{t}\leq Y_{t}\leq U_{t},\ \forall t\leq T \mbox{ and }\displaystyle\int_{0}^{T}(Y_{t}-L_{t})dK_{t}^{+}=\displaystyle\int_{0}^{T}(U_{t}-Y_{t})dK_{t}^{-}=0. \end{cases} \end{eqnarray} In this case, a solution $Y$ has to remain between the lower barrier $L$ and the upper barrier $U$. This is achieved by the cumulative action of two continuous, increasing reflecting processes $K^\pm$. The authors proved the existence and uniqueness of the solution when $f(t,\omega ,y,z)$ is Lipschitz in $(y,z)$ uniformly in $(t,\omega)$, provided that one of the barriers $L$ or $U$ is regular, or that they satisfy the so-called Mokobodski condition, which amounts to the existence of a difference of two non-negative supermartingales between $L$ and $U$. In addition, many efforts have been made to relax the conditions on $f$, $L$ and $U$ \cite{BHM,HH,HHd,LSM,LS,Xu,ZZ} or to deal with other issues \cite{CM,EL,ELM,EOH,RE}. Consider the pricing problem of an American game option driven by the Black--Scholes market model, which is given by the following system of stochastic differential equations \begin{eqnarray*} &&\left\{ \begin{array}{@{}ll} dS^0_t=r(t)S^0_tdt, & \hbox{$S^0_0>0$;} \\ dS_t=S_t\bigl(\bigl(r(t)+\theta(t)\sigma(t)\bigr)dt+\sigma(t)dB_t\bigr), & \hbox{$S_0>0$,} \end{array} \right. \end{eqnarray*} where $r(t)$ is the interest rate process, $\theta(t)$ is the risk premium process and $\sigma(t)$ is the volatility process of the market. 
The fair price of the American game option is defined by\vadjust{\goodbreak} \begin{equation*} Y_t=\inf_{\tau\in\Im_{[0,T]}}\sup_{\nu\in\Im_{[0,T]}} \mathbb{E} \bigl[e^{-r(t)\sigma(t)\wedge\theta(t)}J(\tau,\nu)|\mathcal{F}_t \bigr], \end{equation*} where $\Im_{[0,T]}$ is the collection of all stopping times $\tau$ with values between $0$ and $T$, and $J$ is the \textit{payoff} given by \[ J(\tau,\nu)=U_{\nu}\mathbh{1}_{\{\nu<\tau\}}+L_{\tau} \mathbh{1}_{\{\tau \leq\nu\}}+\xi\mathbh{1}_{\{\nu\wedge\tau=T\}}. \] Here $r(t)$, $\sigma(t)$ and $\theta(t)$ are stochastic; moreover, they are not bounded in general. So the existence results of Cvitanic and Karatzas \cite{CK}, and of Li and Shi \cite{LS} with completely separated barriers, cannot be applied. Motivated by the above works, the purpose of the present paper is to consider a class of DRBSDEs driven by a Brownian motion with a stochastic Lipschitz coefficient. We obtain the existence and uniqueness of solutions to these DRBSDEs by means of the penalization method and the fixed point theorem. Furthermore, the comparison theorem for the solutions to DRBSDEs will be established. The paper is organized as follows: in Section~\ref{s1}, we give some notations and assumptions needed in this paper. In Section~\ref{s2}, we establish the a priori estimates of solutions to DRBSDEs. In Section~\ref{s3}, we prove the existence and uniqueness of solutions to DRBSDEs via the penalization method, first when one barrier is regular and then when the barriers are completely separated. In Section~\ref{s4}, we give the comparison theorem for the solutions to DRBSDEs. Finally, an Appendix is devoted to the special case of RBSDEs with a lower barrier when the generator only depends on $y$; furthermore, the corresponding comparison theorem will be established under the stochastic Lipschitz coefficient. 
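Before turning to the technical sections, the mechanism of the penalization method can be illustrated on a deterministic caricature of system (\ref{a}) (our sketch, with $f=0$, $Z=0$ and illustrative barriers): the penalized solutions are monotone in the penalty parameter $n$ and converge to the reflected one as $n\to\infty$.

```python
def backward_solutions(n, T=1.0, steps=20_000, xi=0.0):
    """Deterministic caricature (f = 0, Z = 0) of the doubly reflected system:
    the 'reflected' recursion  Y_i = min(U_i, max(L_i, Y_{i+1}))  versus the
    implicit-Euler penalized equation
        Y_t = xi + int_t^T [ n*(Y_s - L_s)^- - n*(Y_s - U_s)^+ ] ds,
    with illustrative barriers L(t) = 0.8 - t and U(t) = 1.5 - t."""
    dt = T / steps
    L = lambda t: 0.8 - t
    U = lambda t: 1.5 - t
    y_ref, y_pen = xi, xi
    for i in range(steps - 1, -1, -1):
        t = i * dt
        y_ref = min(U(t), max(L(t), y_ref))
        if y_pen < L(t):
            # lower penalty active: solve y = y_next + dt*n*(L - y) for y
            y_pen = (y_pen + dt * n * L(t)) / (1 + dt * n)
        elif y_pen > U(t):
            # upper penalty active: solve y = y_next - dt*n*(y - U) for y
            y_pen = (y_pen + dt * n * U(t)) / (1 + dt * n)
    return y_ref, y_pen

# here the reflected solution satisfies Y_0 = L(0) = 0.8; the penalized
# solutions stay below it and approach it as the penalty parameter n grows
y_ref, y1000 = backward_solutions(1000)
_, y10 = backward_solutions(10)
assert abs(y_ref - 0.8) < 1e-9
assert y10 < y1000 < y_ref
assert abs(y1000 - y_ref) < 5e-3
```

The lag $L - Y^n \approx 1/n$ visible in the output is the deterministic shadow of the penalization estimates of Section~\ref{s3}.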
\section{Notations}\label{s1} Let $(\varOmega,\mathcal{F},(\mathcal{F}_{t})_{t\leq T},\mathbb{P})$ be a filtered probability space. Let $(B_{t})_{t\leq T}$ be a $d$-dimensional Brownian motion. We assume that $(\mathcal {F}_{t})_{t\leq T}$ is the standard filtration generated by the Brownian motion $(B_{t})_{t\leq T}$. We will denote by $|\cdot|$ the Euclidean norm on $\mathbb{R}^{d}$. Let us introduce some spaces: \begin{itemize} \item$\mathcal{L}^{2}$ is the space of $\mathbb{R}$-valued and $\mathcal{F}_{T}$-measurable random variables $\xi$ such that \[ \|\xi\|^{2}=\mathbb{E} \bigl[|\xi|^{2} \bigr] < +\infty. \] \item$\mathcal{S}^{2}$ is the space of $\mathbb{R}$-valued and $\mathcal{F}_t$-progressively measurable processes $(K_{t})_{t\leq T}$ such that \[ \|K\|^{2}=\mathbb{E} \Bigl[\sup\limits _{0\leq t\leq T}|K_{t}|^{2} \Bigr] < +\infty. \] \end{itemize} Let $\beta> 0$ and $(a_{t})_{t\leq T}$ be a non-negative $\mathcal {F}_{t}$-adapted process. We define the increasing continuous process $A(t)= \int_0^t a^2(s)ds$, for all $t\leq T$, and introduce the following spaces: \begin{itemize} \item$\mathcal{L}^{2}(\beta,a)$ is the space of $\mathbb{R}$-valued and $\mathcal{F}_{T}$-measurable random variables $\xi$ such that \[ \|\xi\|_\beta^{2}=\mathbb{E} \bigl[e^{\beta A(T)}| \xi|^{2} \bigr] < +\infty. \] \item$\mathcal{S}^{2}(\beta,a)$ is the space of $\mathbb{R}$-valued and $\mathcal{F}_t$-adapted continuous processes $(Y_t)_{t\leq T}$ such that \[ \|Y\|_\beta^{2}=\mathbb{E} \Bigl[\sup\limits _{0\leq t\leq T}e^{\beta A(t)}|Y_{t}|^{2} \Bigr] < +\infty. \] \item$\mathcal{S}^{2,a}(\beta,a)$ is the space of $\mathbb{R}$-valued and $\mathcal{F}_t$-adapted processes $(Y_t)_{t\leq T}$ such that \[ \|aY\|_\beta^{2}=\mathbb{E} \Biggl[\int _{0}^{T}e^{\beta A(t)}\big|a(t)Y_{t}\big|^{2}dt \Biggr] < +\infty. 
\] \item$\mathcal{H}^{2}(\beta,a)$ is the space of $\mathbb{R}^d$-valued and $\mathcal{F}_t$-progressively measurable processes $(Z_t)_{t\leq T}$ such that \[ \|Z\|_\beta^{2}=\mathbb{E} \Biggl[\int_{0}^{T}e^{\beta A(t)}|Z_{t}|^{2}dt \Biggr] < +\infty. \] \item$\mathfrak{B}^2$ is the Banach space of the processes $(Y,Z)\in (\mathcal{S}^{2}(\beta,a)\cap\mathcal{S}^{2,a}(\beta,a) )\times\mathcal{H}^{2}(\beta,a)$ with the norm \[ \big\|(Y,Z)\big\|_\beta=\sqrt{\|aY\|_\beta^{2}+ \|Z\|_\beta^{2}}. \] \end{itemize} We consider the following conditions: \begin{description} \item[{$(H1)$} ] The terminal condition $\xi\in\mathcal{L}^{2}(\beta,a)$. \end{description} \noindent The coefficient $f : \varOmega\times[0,T] \times\mathbb{R} \times\mathbb{R}^{d}\longrightarrow\mathbb{R}$ satisfies \begin{description} \item[{$(H2)$}] There exist two non-negative $\mathcal{F}_t$-adapted processes $\mu$ and $\gamma$ such that, for all $t\in[0,T]$ and all $(y,z,y',z')\in\mathbb{R} \times\mathbb{R}^{d}\times\mathbb{R} \times\mathbb{R}^{d}$, \[ \big|f(t,y,z)-f\bigl(t,y',z'\bigr)\big|\leq \mu(t)\big|y-y'\big|+\gamma(t)\big|z-z'\big|. \] \item[{$(H3)$}] There exists $\epsilon>0$ such that $a^2(t):=\mu (t)+\gamma^2(t) \geq\epsilon$. \item[{$(H4)$}] For all $(y,z)\in\mathbb{R} \times\mathbb{R}^{d}$, the process $(f(t,y,z))_t$ is progressively measurable and such that \[ \frac{f(.,0,0)}{a}\in\mathcal{H}^{2}(\beta,a). \] \end{description} \noindent The two reflecting barriers $L$ and $U$ are two $\mathcal {F}_t$-adapted and continuous real-valued processes which satisfy \begin{description} \item[{$(H5)$}]\ \vspace*{-9pt} \[ \mathbb{E} \Bigl[\sup_{0 \leq t\leq T}e^{2\beta A(t)}\big|L_{t}^+\big|^{2} \Bigr] +\mathbb{E} \Bigl[\sup_{0 \leq t\leq T}e^{2\beta A(t)}\big|U_{t}^-\big|^{2} \Bigr]< +\infty, \] where $L^+$ and $U^-$ are the positive and negative parts of $L$ and $U$, respectively. 
\item[{$(H6)$}] $U$ is regular: i.e., there exists a sequence $(U^{n})_{n\geq0}$ such that \begin{description} \item[$(i)$] $\forall t\leq T$, $U_{t}^{n}\leq U_{t}^{n+1}$ and $\lim\limits_{n\rightarrow+\infty}U_{t}^{n}=U_{t}$ $ \mathbb{P}$-a.s. \item[$(ii)$] $\forall n\geq0$, $\forall t\leq T$, \[ U_{t}^{n}=U_{0}^{n}+\int _{0}^{t}u_{n}(s)ds+\int _{0}^{t}v_{n}(s)dB_{s} \] where the processes $u_{n}$ and $ v_{n}$ are $\mathcal{F}_{t}$-adapted such that \[ \sup_{n\geq0}\sup_{0\leq t\leq T} \bigl(u_{n}(t) \bigr)^{+}\leq C \quad\mbox {and}\quad\mathbb{E} \Biggl[\int _{0}^{T}\big|v_{n}(s)\big|^{2}ds \Biggr]^{\frac {1}{2}}<+\infty. \] \end{description} \end{description} \begin{defin} Let $\beta>0$ and $a$ be a non-negative $\mathcal{F}_t$-adapted process. A solution to DRBSDE is a quadruple $(Y,Z,K^+,K^-)$ satisfying (\ref{a}) such that \begin{itemize} \item$(Y,Z)\in(\mathcal{S}^{2}(\beta,a)\cap\mathcal{S}^{2,a}(\beta ,a))\times\mathcal{H}^{2}(\beta,a)$, \item$K^\pm\in\mathcal{S}^{2}$ are two continuous and increasing processes with $K^\pm_0=0$. \end{itemize} \end{defin} \section{A priori estimate}\label{s2} \begin{lemma} Let $\beta>0$ be large enough and assume that $(H1)$--$(H6)$ hold. Let $(Y,Z,K^+,\allowbreak{}K^-)\in(\mathcal{S}^{2}(\beta,a)\cap\mathcal {S}^{2,a}(\beta,a))\times\mathcal{H}^{2}(\beta,a)\times\mathcal {S}^{2}\times\mathcal{S}^{2}$ be a solution to DRBSDE with data $(\xi ,f,L,U)$. Then there exists a constant $C_\beta$ depending only on $\beta$ such that \begin{align} & \mathbb{E} \Biggl[\sup_{0\leq t\leq T}e^{\beta A(t)}|Y_{t}|^{2} +\int_{0}^{T}e^{\beta A(t)} \bigl(a^2(t)|Y_{t}|^{2}+|Z_{t}|^{2} \bigr)dt+\big|K_{T}^{+}\big|^2+\big|K_{T}^{-}\big|^2 \Biggr] \nonumber \\[-2pt] &\quad\leq C_\beta\mathbb{E} \Biggl[e^{\beta A(T)}|\xi|^2+ \int_{0}^{T}e^{\beta A(t)}\frac{|f(t,0,0)|^2}{a^2(t)}dt \nonumber \\[-2pt] &\qquad+\sup_{0\leq t\leq T}e^{2\beta A(t)} \bigl(\big|L_{t}^+\big|^{2}+\big|U_{t}^-\big|^{2} \bigr) \Biggr]. 
\end{align} \end{lemma} \begin{proof} Applying It\^{o}'s formula and Young's inequality, combined with the stochastic Lipschitz assumption $(H2)$ we can write \begin{align*} & e^{\beta A(t)}|Y_t|^2+\int_{t}^{T} \beta e^{\beta A(s)}a^2(s)|Y_{s}|^2ds+\int _{t}^{T}e^{\beta A(s)}|Z_{s}|^2ds \\[-2pt] &\quad\leq e^{\beta A(T)}|\xi|^2+\frac{\beta}{2}\int _{t}^{T}e^{\beta A(s)}a^2(s)|Y_{s}|^2ds +\frac{2}{\beta}\int_{t}^{T}e^{\beta A(s)} \frac {|f(s,Y_s,Z_s)|^2}{a^2(s)}ds \\[-2pt] &\qquad+2\int_{t}^{T}e^{\beta A(s)}Y_{s}dK_{s}^{+}-2 \int_{t}^{T}e^{\beta A(s)}Y_{s}dK_{s}^{-}-2 \int_{t}^{T}e^{\beta A(s)}Y_{s}Z_{s}dB_{s} \\[-2pt] &\quad\leq e^{\beta A(T)}|\xi|^2+\frac{\beta}{2}\int _{t}^{T}e^{\beta A(s)}a^2(s)|Y_{s}|^2ds+ \frac{6}{\beta}\int_{t}^{T}e^{\beta A(s)}a^2(s)|Y_{s}|^2ds \\[-2pt] &\qquad+\frac{6}{\beta}\int_{t}^{T}e^{\beta A(s)}|Z_{s}|^2ds+ \frac{6}{\beta }\int_{t}^{T}e^{\beta A(s)} \frac{|f(s,0,0)|^2}{a^2(s)}ds \\ &\qquad+2\int_{t}^{T}e^{\beta A(s)}Y_{s}dK_{s}^{+}-2 \int_{t}^{T}e^{\beta A(s)}Y_{s}dK_{s}^{-}-2 \int_{t}^{T}e^{\beta A(s)}Y_{s}Z_{s}dB_{s}. \end{align*} Using the fact that $dK_{s}^{+}=\mathbh{1}_{\{Y_s=L_s\}}dK_{s}^{+}$ and $dK_{s}^{-}=\mathbh{1}_{\{Y_s=U_s\}}dK_{s}^{-}$, we have \begin{align} \label{e1} & e^{\beta A(t)}|Y_t|^2+\biggl( \frac{\beta}{2}-\frac{6}{\beta}\biggr)\int_{t}^{T} e^{\beta A(s)}a^2(s)|Y_{s}|^2ds +\biggl(1- \frac{6}{\beta}\biggr)\int_{t}^{T}e^{\beta A(s)}|Z_{s}|^2ds \nonumber \\[-2pt] &\quad\leq e^{\beta A(T)}|\xi|^2+\frac{6}{\beta}\int _{t}^{T}e^{\beta A(s)}\frac{|f(s,0,0)|^2}{a^2(s)}ds+2\int_{t}^{T}e^{\beta A(s)}L_{s}dK_{s}^{+} \nonumber \\[-2pt] &\qquad-2 \int_{t}^{T}e^{\beta A(s)}U_{s}dK_{s}^{-}-2 \int_{t}^{T}e^{\beta A(s)}Y_{s}Z_{s}dB_{s}. 
\end{align} Taking expectation on both sides above, we get \begin{align} \label{e2} &\mathbb{E} \Biggl[\int_{0}^{T} e^{\beta A(s)}a^2(s)|Y_{s}|^2ds+\int _{0}^{T}e^{\beta A(s)}|Z_{s}|^2ds \Biggr] \nonumber \\[-2pt] &\quad\leq c_\beta\mathbb{E} \Biggl[e^{\beta A(T)}|\xi|^2+ \int_{0}^{T}e^{\beta A(s)}\frac{|f(s,0,0)|^2}{a^2(s)}ds \nonumber \\[-2pt] &\qquad+\sup_{0\leq t\leq T}e^{\beta A(t)} \bigl\vert L^+_{t} \bigr\vert ^2+ \bigl\vert K_{T}^{+} \bigr\vert ^2 +\sup_{0\leq t\leq T}e^{\beta A(t)} \bigl \vert U^-_{t} \bigr\vert ^2+ \bigl\vert K_{T}^{-} \bigr\vert ^2 \Biggr] \end{align} and by the Burkholder--Davis--Gundy inequality we obtain \begin{align} & \mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}|Y_{t}|^{2} \nonumber \\[-2pt] &\quad\leq\mathcal{C}_\beta\mathbb{E} \Biggl[e^{\beta A(T)}| \xi|^2+\int_{0}^{T}e^{\beta A(s)} \frac{|f(s,0,0)|^2}{a^2(s)}ds \nonumber \\[-2pt] &\qquad+2\int_{t}^{T}e^{\beta A(s)}L_{s}dK_{s}^{+}-2 \int_{t}^{T}e^{\beta A(s)}U_{s}dK_{s}^{-} \Biggr]\label{e3} \\[-2pt] &\quad\leq\mathcal{C}_\beta\mathbb{E} \Biggl[e^{\beta A(T)}| \xi|^2+\int_{0}^{T}e^{\beta A(s)} \frac{|f(s,0,0)|^2}{a^2(s)}ds \nonumber \\[-2pt] &\qquad+\sup_{0\leq t\leq T}e^{2\beta A(t)} \bigl(\big|L_{t}^+\big|^{2}+\big|U_{t}^-\big|^{2} \bigr) +\big|{K_{T}^{+}}\big|^2+\big|{K_{T}^{-}}\big|^2 \Biggr].\label{e31} \end{align} To conclude, we now give an estimate of ${K_{T}^{+}}^2$ and ${K_{T}^{-}}^2$. From the equation \[ K_{T}^{+}-K_{T}^{-}=Y_0- \xi-\int_0^Tf(s,Y_s,Z_s)ds+ \int_0^TZ_sdB_s \] and the stochastic Lipschitz property $(H2)$, we have \begin{align*} & \mathbb{E} \bigl[\big|K_{T}^{+}-K_{T}^{-}\big|^2 \bigr] \\[-2pt] &\quad\leq 4\mathbb{E} \Biggl[\sup_{0\leq t\leq T}e^{\beta A(t)}|Y_{t}|^{2}+| \xi|^2+\biggl(1+\frac{3}{\beta}\biggr)\int_0^Te^{\beta A(s)}|Z_s|^2ds \\ &\qquad+\frac{3}{\beta}\int_{0}^{T}e^{\beta A(s)}a^2(s)|Y_{s}|^{2}ds +\frac{3}{\beta}\int_{0}^{T}e^{\beta A(s)} \frac {|f(s,0,0)|^2}{a^2(s)}ds \Biggr]. 
\end{align*} Combining this with (\ref{e3}), we derive that \begin{align} \label{e4} \mathbb{E} \bigl\vert K_{T}^{+} \bigr\vert ^2+\mathbb{E} \bigl\vert K_{T}^{-} \bigr\vert ^2 &\leq\mathfrak{C}_\beta\mathbb{E} \Biggl[e^{\beta A(T)}|\xi|^{2} +\int_{0}^{T}e^{\beta A(s)} \frac{|f(s,0,0)|^2}{a^2(s)}ds \nonumber \\ &\quad+\sup_{0\leq t\leq T}e^{2\beta A(t)} \bigl(\big|L_{t}^+\big|^{2}+\big|U_{t}^-\big|^{2} \bigr) \Biggr] +\frac{1}{2}\mathbb{E} \bigl\vert {K_{T}^{+}} \bigr\vert ^2+\frac{1}{2}\mathbb {E} \bigl\vert {K_{T}^{-}} \bigr\vert ^2. \end{align} The desired result is obtained by estimates (\ref{e2}), (\ref{e31}) and (\ref{e4}). \end{proof} \section{Existence and uniqueness of solution}\label{s3} \subsection{The obstacle $U$ is regular} In this part, we apply the penalization method and the fixed point theorem to give the existence of the solution to the DRBSDE (\ref{a}). We first consider the special case when the generator does not depend on $(y,z)$: \[ f(t,y,z)=g(t). \] \begin{thm}\label{t} Assume that $\frac{g}{a}\in\mathcal{H}^{2}(\beta,a)$ and ${(H1)}$--${(H6)}$ hold. Then, the doubly reflected BSDE (\ref{a}) with data $(\xi,g,L,U)$ has a unique solution $(Y,Z,K^+,K^-)$ that belongs to $(\mathcal{S}^{2}(\beta,a)\cap\mathcal{S}^{2,a}(\beta,a))\times \mathcal{H}^{2}(\beta,a)\times\mathcal{S}^{2}\times\mathcal{S}^{2}$. \end{thm} For all $n\in\mathbb{N}$, let $(Y^n,Z^n,K^{n+})$ be the triple of $\mathcal {F}_t$-adapted processes in $(\mathcal{S}^{2}(\beta,a)\cap \mathcal{S}^{2,a}(\beta,a))\times\mathcal{H}^{2}(\beta,a)\times \mathcal{S}^{2}$ solving the reflected BSDE with data $(\xi , g(t)-n(y-U_t)^+,L)$. 
That is \begin{eqnarray} \label{r1} &&\hspace{-1cm} \displaystyle \left\{ \begin{array}{@{}ll} Y^n_{t}=\xi+\displaystyle\int_{t}^{T} g(s)ds-n\displaystyle\int_{t}^{T}\bigl(Y^n_s-U_s\bigr)^+ds+K^{n+}_{T}-K^{n+}_{t}-\displaystyle\int_{t}^{T}Z^n_{s}dB_{s}& \hbox{}\\ Y^n_{t}\geq L_{t}, \ \forall t\leq T \ \mbox{and}\ \displaystyle \int_{0}^{T}\bigl(Y^n_{t}-L_{t}\bigr)dK^{n+}_{t}=0.& \hbox{} \end{array} \right. \end{eqnarray} We denote $K^{n-}_{t}:=n\int_0^t(Y^n_s-U_s)^+ds$ and $g^n(s,y):=g(s)-n(y-U_s)^+$. We have divided the proof of Theorem \ref{t} into a sequence of lemmas. \begin{lemma}\label{l1} There exists a positive constant $C$ such that \[ \sup_{0\leq t\leq T} n\bigl(Y_{t}^{n}-U_{t} \bigr)^{+}\leq C\quad\mathbb{P}\hbox{-}a.s. \] \end{lemma} \begin{proof} For all $n,m \geq0$, let $(Y^{n,m},Z^{n,m})$ be the solution to the following BSDE \[ Y_{t}^{n,m} =\xi+\int_{t}^{T} \bigl\{ g(s)+m\bigl(Y_{s}^{n,m}-L_{s} \bigr)^{-}-n\bigl(Y_{s}^{n,m}-U_{s} \bigr)^{+} \bigr\}ds-\int_{t}^{T}Z_{s}^{n,m}dB_{s}. \] We denote $\bar{Y}^{n,m}=Y^{n,m}-U^{m}$. Then we have \begin{align*} \bar{Y}_{t}^{n,m}&=\xi-U_{T}^{m}+\int _{t}^{T}\bigl(g(s)+u_{m}(s)\bigr)ds-n \int_{t}^{T}\bigl(\bar{Y}_{s}^{n,m}- \bigl(U_{s}-U_{s}^m\bigr)\bigr)^{+}ds \\ &\quad +m\int_{t}^{T}\bigl(\bar{Y}_{s}^{n,m}- \bigl(L_{s}-U_{s}^m\bigr)\bigr)^{-}ds- \int_{t}^{T}\bigl(Z_{s}^{n,m}-v_{n}(s) \bigr)dB_{s}. \end{align*} For $n\geq0$, let $\mathcal{D}_{n}$ be the class of $\mathcal {F}_t$-progressively measurable processes taking values in $[0,n]$. For $\nu\in\mathcal{D}_{n}$ and $\lambda\in\mathcal{D}_{m}$ we denote $R_{t}=e^{-\int_{0}^{t}(\nu(s)+\lambda(s))ds}$.
Applying It\^{o}'s formula to $R_{t}\bar{Y}_{t}^{n,m}$ and using the same arguments as on page 2042 of \cite{CK}, one can show that \[ \bar{Y}_{t}^{n,m} \leq\operatorname*{ess\,sup}_{\lambda\in\mathcal{D}_{m}} \operatorname *{ess\,inf}_{\nu\in\mathcal{D}_{n}}\mathbb{E} \Biggl[ \int_{t}^{T}e^{-\int _{t}^{s}(\nu(r)+\lambda(r))dr}\big|u_{m}(s)\big|ds |\mathcal{F}_{t} \Biggr]. \] From the assumption $(H6)(ii)$, we can write $\bar{Y}_{t}^{n,m}\vee0 \leq\frac{C}{n}$. It follows that \[ \forall t\leq T,\qquad n\bigl(\bar{Y}_{t}^{n,m}\vee0\bigr) \xrightarrow[m\to +\infty]{} n\bigl(Y_{t}^{n}-U_{t} \bigr)^{+}\leq C \quad\mathbb{P}\hbox{-}a.s.\qedhere \] \end{proof} \begin{lemma}\label{lll} There exists a positive constant $C'_\beta$ depending only on $\beta$ such that for all $n\geq0$ \begin{align*} &\mathbb{E} \Biggl[\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}\big|^{2}+ \int_{0}^{T}e^{\beta A(t)}a^2(t)\big|Y_{t}^{n}\big|^{2}dt +\int_{0}^{T}e^{\beta A(t)}\big|Z_{t}^{n}\big|^{2}dt+\big|K_{T}^{n+}\big|^2 \Biggr] \nonumber \\ &\quad\leq C'_\beta\mathbb{E} \Biggl[e^{\beta A(T)}| \xi|^2+\int_{0}^{T}e^{\beta A(t)} \biggl\vert \frac{g(t)}{a(t)} \biggr\vert ^2dt \\ &\qquad+\sup_{0\leq t\leq T}e^{2\beta A(t)}\big|U_{t}^-\big|^{2}+ \sup_{0\leq t\leq T}e^{2\beta A(t)}\big|L_{t}^+\big|^{2} \Biggr]. \end{align*} \end{lemma} \begin{proof} It\^{o}'s formula implies for $t\leq T$: \begin{align*} &\beta\mathbb{E}\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}\big|^{2}ds+ \mathbb{E}\int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}\big|^{2}ds \\ &\quad\leq \mathbb{E}e^{\beta A(T)}|\xi|^2 +\frac{\beta}{2} \mathbb{E}\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}\big|^2ds +\frac{2}{\beta}\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \frac {|g(s)|^2}{a^2(s)}ds \\ &\qquad+2\mathbb{E} \Biggl[\sup_{n\geq0}\sup _{0\leq t\leq T}n\bigl(Y_t^n-U_t \bigr)^+\int_{t}^{T}e^{\beta A(s)}U^-_sds \Biggr] +2\mathbb{E} \Biggl[\int_{t}^{T}e^{\beta A(s)}L_sdK_s^{n+} \Biggr].
\end{align*} Here we used the fact that $-nY^n_s(Y^n_s-U_s)^+\leq nU^-_s(Y^n_s-U_s)^+$ and $dK^{n+}_s=\mathbh{1}_{\{Y^n_s=L_s\}}dK^{n+}_s$. We conclude, by the Burkholder--Davis--Gundy inequality, that \begin{align*} &\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y^n_t\big|^2+ \mathbb{E}\int_{0}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}\big|^{2}ds+ \mathbb {E}\int_{0}^{T}e^{\beta A(s)}\big|Z_{s}^{n}\big|^{2}ds \\ &\quad\leq c'_p\mathbb{E} \Biggl[e^{\beta A(T)}| \xi|^2 +\int_{0}^{T}e^{\beta A(s)} \frac{|g(s)|^2}{a^2(s)}ds \\ &\qquad+\sup_{0\leq t\leq T}e^{2\beta A(t)}\big|U_t^-\big|^2+ \sup_{0\leq t\leq T}e^{2\beta A(t)}\big|L_{t}^+\big|^{2}+\big|K_{T}^{n+}\big|^2 \Biggr]. \end{align*} In the same way as (\ref{e4}), we can prove that \begin{align*} \mathbb{E}\big|K_{T}^{n+}\big|^2&\leq \mathcal{C}'_p\mathbb {E} \Biggl[e^{\beta A(T)}| \xi|^2 +\int_{0}^{T}e^{\beta A(s)} \frac{|g(s)|^2}{a^2(s)}ds \\ &\quad+\sup_{0\leq t\leq T}e^{2\beta A(t)}\big|U_t^-\big|^2+ \sup_{0\leq t\leq T}e^{2\beta A(t)}\big|L_{t}^{+}\big|^{2} \Biggr]. \end{align*} We obtain the desired result. \end{proof} \begin{lemma}\label{ll} There exist two $\mathcal{F}_t$-adapted processes $(Y_t)_{t\leq T}$ and $(K^+_t)_{t\leq T}$ such that $Y^n\searrow Y$, $K^{n+}\nearrow K^+$ and \[ \mathbb{E} \Bigl[\sup_{0\leq t\leq T}\big|K_{t}^{n+}-K_{t}^{+}\big|^{2} \Bigr]\xrightarrow[n\to+\infty]{}0. \] \end{lemma} \begin{proof} The comparison Theorem \ref{tc} (below) shows that $Y^0_t\geq Y^n_t\geq Y_t^{n+1}$ and $K^{n+}_t\leq K^{(n+1)+}_t$ for all $t\leq T$. Therefore, there exist processes $Y$ and $K^+$ such that, as $n\rightarrow+\infty$, for all $t\leq T$, $Y^n_t\searrow Y_t$ and $K^{n+}_t\nearrow K^+_t$.
Since the process $K^+$ is continuous, it follows by Dini's theorem that \[ \mathbb{E} \Bigl[\sup_{0\leq t\leq T}\big|K_{t}^{n+}-K_{t}^{+}\big|^{2} \Bigr]\xrightarrow[n\to+\infty]{}0.\qedhere \] \end{proof} \begin{lemma}\label{l} \[ \mathbb{E} \Bigl[\sup_{0\leq t\leq T}e^{\beta A(t)}\big| \bigl(Y_{t}^{n}-U_{t}\bigr)^+\big|^{2} \Bigr]\xrightarrow[n\to+\infty]{}0. \] \end{lemma} \begin{proof} Since $Y_t\leq Y_t^n\leq Y_t^0$, we can replace $U_t$ by $U_t\vee Y^0$; that is, we may assume that $\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)} \vert U_{t} \vert ^{2}<+\infty$. Let $(\widetilde{Y}^n,\widetilde{Z}^n,\widetilde{K}^n)$ be the solution to the following reflected BSDE associated with $(\xi, g-n(y-U), L)$: \begin{eqnarray} \label{d2} &&\hspace{-1cm} \displaystyle \left\{ \begin{array}{@{}ll} \widetilde{Y}^n_{t}=\xi+\displaystyle\int_{t}^{T}\bigl(g(s)-n\bigl(\widetilde {Y}^n_s-U_s\bigr)\bigr)ds+\widetilde{K}^n_{T}-\widetilde{K}^n_{t}-\displaystyle \int_{t}^{T}\widetilde{Z}^n_{s}dB_{s}& \hbox{}\\ \widetilde{Y}^n_{t}\geq L_t,\ \forall t\leq T \ \mbox{and}\ \displaystyle\int_{0}^{T}\bigl(\widetilde{Y}^n_{t}-L_{t}\bigr)d\widetilde {K}^n_{t}=0. & \hbox{} \end{array} \right. \end{eqnarray} The comparison Theorem \ref{tc} shows that $Y^n\leq\widetilde{Y}^n$ and $d\widetilde{K}^n\leq dK^{n+} \leq dK^+$. Let $\tau\leq T$ be a stopping time. Then we can write \[ \widetilde{Y}^n_{\tau}=\mathbb{E} \Biggl[e^{-n(T-\tau)} \xi+\int_{\tau }^{T}e^{-n(s-\tau)} \bigl(g(s)+nU_s\bigr)ds+\int_{\tau}^{T}e^{-n(s-\tau )}d \widetilde{K}^n_{s}|\mathcal{F}_{\tau} \Biggr]. \] Since $\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}|U_{t}|^{2}<+\infty$, we obtain \[ e^{-n(T-\tau)}\xi+n\int_{\tau}^{T}e^{-n(s-\tau)}U_{s}ds \xrightarrow[n\to+\infty]{} \xi\mathbh{1}_{\tau=T}+U_{\tau}\mathbh {1}_{\tau<T} \quad\mathbb{P}\hbox{-}\mathrm{a.s.} \; \mathrm{in} \; \mathcal{L}^2 \] and the conditional expectation converges also in $\mathcal{L}^2$.
Moreover, \[ \Biggl\vert \int_{\tau}^{T}e^{-n(s-\tau)} g(s)ds \Biggr\vert ^2 \leq\int_{\tau}^{T}e^{\beta A(s)} \biggl\vert \frac{g(s)}{a(s)} \biggr\vert ^2ds\int _{\tau}^{T}e^{-2n(s-\tau)}e^{-\beta A(s)}a^2(s)ds. \] Then \[ \int_{\tau}^{T}e^{-n(s-\tau)} g(s)ds\xrightarrow[n \to+\infty]{}0 \quad \mathbb{P}\hbox{-}\mbox{a.s. in }\mathcal{L}^2. \] In addition, \[ 0\leq\int_{\tau}^{T}e^{-n(s-\tau)}d \widetilde{K}_{s}^{n} \leq\int_{\tau}^{T}e^{-n(s-\tau)}dK^+_{s} \xrightarrow[n\to+\infty ]{}0\mbox{ in }\mathcal{L}^1. \] Consequently, \[ \widetilde{Y}^n_{\tau}\xrightarrow[n\to+\infty]{} \xi \mathbh{1}_{\tau=T}+U_{\tau}\mathbh{1}_{\tau<T} \quad\mathbb {P}\mbox{-a.s. in }\mathcal{L}^1. \] Therefore, $Y_\tau\leq U_\tau$ $\mathbb{P}$-a.s. We deduce, from Theorem 86, p. 220, in Dellacherie and Meyer \cite{DM}, that $Y_t\leq U_t$ for all $t\leq T$ $\mathbb{P}$-a.s. and then $e^{\beta A(t)}(Y^n_t-U_t)^+\searrow0$ for all $t\leq T$ $\mathbb{P}$-a.s. By Dini's theorem, we have $\sup_{0\leq t\leq T}e^{\beta A(t)}(Y^n_t-U_t)^+\searrow0$ $\mathbb{P}\mbox{-a.s.}$ and the result follows from Lebesgue's dominated convergence theorem. \end{proof} \begin{lemma} There exist two processes $(Z_t)_{t\leq T}$ and $(K^-_t)_{t\leq T}$ such that \[ \mathbb{E}\int_{0}^{T}e^{\beta A(t)}a^2(t)\big|Y_{t}^{n}-Y_{t}\big|^2dt +\mathbb{E}\int_{0}^{T}e^{\beta A(t)}\big|Z_{t}^{n}-Z_{t}\big|^{2}dt \xrightarrow[n\to+\infty]{}0. \] Moreover, \[ \mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}\big|^{2} +\mathbb{E}\sup_{0\leq t\leq T}\big|K_{t}^{n-}-K_{t}^{-}\big|^{2} \xrightarrow[n\to+\infty]{}0.
\] \end{lemma} \begin{proof} For all $n\geq p\geq0$ and $t\leq T$, applying It\^{o}'s formula and taking expectation yields that \begin{align*} &\mathbb{E} \Biggl[e^{\beta A(t)}\big|Y_{t}^{n}\,{-}\,Y_{t}^p\big|^{2} \,{+}\,\beta\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}\,{-}\,Y_{s}^p\big|^2ds\,{+} \int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}\,{-}\,Z_{s}^p\big|^2ds \Biggr] \\ &\quad\leq2\mathbb{E} \Biggl[\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{p}-U_{s}\bigr)^+n \bigl(Y_{s}^{n}- U_{s}\bigr)^+ds \Biggr] \\ &\qquad+2\mathbb{E} \Biggl[\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{n}-U_{s}\bigr)^+p \bigl(Y_{s}^{p}- U_{s}\bigr)^+ds \Biggr] \\ &\quad\leq\mathbb{E} \Bigl[\sup_{0\leq t\leq T} \bigl(e^{\beta A(t)}\bigl(Y_{t}^{p}-U_{t} \bigr)^{+}\bigr)^{2} \Bigr]^{\frac{1}{2}} \mathbb{E} \Biggl[ \Biggl(\int_{t}^{T}n\bigl(Y_{s}^{n}-U_{s} \bigr)^{+}ds \Biggr)^{2} \Biggr]^{\frac{1}{2}} \\ &\qquad+\mathbb{E} \Bigl[\sup_{0\leq t\leq T} \bigl(e^{\beta A(t)}\bigl(Y_{t}^{n}-U_{t} \bigr)^{+}\bigr)^{2} \Bigr]^{\frac{1}{2}} \mathbb{E} \Biggl[ \Biggl(\int_{t}^{T}p\bigl(Y_{s}^{p}-U_{s} \bigr)^{+}ds \Biggr)^{2} \Biggr]^{\frac{1}{2}} \end{align*} since $(Y_{s}^{n}-Y_{s}^p)d(K_{s}^{n+}- K_{s}^{p+})\leq0$. Therefore, using Lemmas \ref{l1} and \ref{l}, we obtain \[ \mathbb{E}\int_{0}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}-Y_{s}^p\big|^2ds +\mathbb{E}\int_{0}^{T}e^{\beta A(s)}\big|Z_{s}^{n}-Z_{s}^p\big|^{2}ds \xrightarrow[n,p\to+\infty]{}0. \] It follows that $(Z^n)_{n\geq0}$ is a Cauchy sequence in the complete space $\mathcal{H}^2(\beta,a)$. Hence there exists an $\mathcal{F}_t$-progressively measurable process $(Z_t)_{t\leq T}$ such that the sequence $(Z^n)_{n\geq0}$ converges to $Z$ in $\mathcal{H}^2(\beta,a)$.
On the other hand, by the Burkholder--Davis--Gundy inequality, one can derive that \begin{align*} &\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^p\big|^{2} \\ &\quad\leq\mathbb{E} \Bigl[\sup_{0\leq t\leq T}\bigl(e^{\beta A(t)} \bigl(Y_{t}^{p}-U_{t}\bigr)^{+} \bigr)^{2} \Bigr]^{\frac{1}{2}} \mathbb{E} \Biggl[ \Biggl(\int _{t}^{T}n\bigl(Y_{s}^{n}-U_{s} \bigr)^{+}ds \Biggr)^{2} \Biggr]^{\frac{1}{2}} \\ &\qquad+\mathbb{E} \Bigl[\sup_{0\leq t\leq T} \bigl(e^{\beta A(t)}\bigl(Y_{t}^{n}-U_{t} \bigr)^{+}\bigr)^{2} \Bigr]^{\frac{1}{2}} \mathbb{E} \Biggl[ \Biggl(\int_{t}^{T}p\bigl(Y_{s}^{p}-U_{s} \bigr)^{+}ds \Biggr)^{2} \Biggr]^{\frac{1}{2}} \\ &\qquad+\frac{1}{2}\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^p\big|^{2} +2c^2\mathbb{E}\int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}-Z_{s}^p\big|^2ds \end{align*} where $c$ is a universal non-negative constant. It follows that \[ \mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^p\big|^{2} \xrightarrow[n,p\to+\infty]{}0 \] and then \[ \mathbb{E} \Biggl[\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}\big|^{2} +\int_{0}^{T}e^{\beta A(t)}a^2(t)\big|Y_{t}^{n}-Y_{t}\big|^2dt \Biggr]\xrightarrow[n\to+\infty]{}0. \] Now, we set \[ K^-_t=Y_t-Y_0+\int_0^tg(s)ds+K^+_t-K^+_0- \int_0^tZ_sdB_s. \] One can show, at least for a subsequence (which we still index by $n$), that \[ \mathbb{E}\sup_{0\leq t\leq T}\big|K_{t}^{n-}-K_{t}^{-}\big|^{2} \xrightarrow [n\to+\infty]{}0. \] The proof is completed. \end{proof} \begin{proof}[Proof of Theorem \ref{t}] Obviously, the process $(Y_t,Z_t,K^+_t,K^-_t)_{t\leq T}$ satisfies, for all $t\leq T$, \[ Y_{t}=\xi+\int_{t}^{T} g(s)ds+ \bigl(K_{T}^{+}-K_{t}^{+}\bigr)- \bigl(K_{T}^{-}-K_{t}^{-}\bigr)-\int _{t}^{T}Z_{s}dB_{s}. \] Since $Y^n_t\geq L_t$ and from Lemma \ref{l} we have $L_t\leq Y_t \leq U_t$.
In the following, we want to show that \[ \int_{0}^{T}(Y_{t}-L_{t})dK_{t}^{+}= \int_{0}^{T}(U_{t}-Y_{t})dK_{t}^{-}=0 \quad\mathbb{P}\mbox{-a.s.} \] Note that \[ \int_{0}^{T}(Y_{t}-L_{t})dK_{t}^{+}= \int_{0}^{T}\bigl(Y_{t}-Y_{t}^n \bigr)dK_{t}^{+}+\int_{0}^{T} \bigl(Y_{t}^n-L_{t}\bigr) \bigl(dK_{t}^{+}-dK_{t}^{n+} \bigr). \] Let $\omega\in\varOmega$ be fixed. It follows from Lemma \ref{ll} that, for any $\varepsilon>0$, there exists $n(\omega)$ such that $\forall n\geq n(\omega)$, $Y_t(\omega)\leq Y_t^n(\omega)+\varepsilon$. Hence \begin{equation} \label{f} \int_{0}^{T}\bigl(Y_{t}( \omega)-Y_{t}^n(\omega)\bigr)dK^{+}_{t}( \omega)\leq \varepsilon K^{+}_{T}(\omega). \end{equation} On the other hand, since the function $(Y_{t}(\omega)-L_{t}(\omega))_{t\leq T}$ is continuous, there exists a sequence of non-negative step functions $(f^m(\omega))_{m\geq0}$ which converges uniformly on $[0,T]$ to $Y_{t}(\omega)-L_{t}(\omega)$. That is, for $m$ large enough, \[ \big|Y_{t}(\omega)-L_{t}(\omega)-f_t^m( \omega)\big|<\varepsilon. \] It follows that \begin{align*} &\int_{0}^{T}\bigl(Y_{t}( \omega)-L_{t}(\omega)\bigr)d \bigl(K_{t}^{+}( \omega )-K_{t}^{n+}(\omega) \bigr) \\ &\quad\leq\varepsilon\bigl(K_{T}^{+}(\omega)+K_{T}^{n+}( \omega)\bigr)+\int_{0}^{T}f_t^m( \omega)d \bigl(K_{t}^{+}(\omega)-K_{t}^{n+}( \omega) \bigr). \end{align*} Further, \[ \varepsilon\bigl(K_{T}^{+}(\omega)+K_{T}^{n+}( \omega)\bigr)\xrightarrow[n\to +\infty]{}2\varepsilon K_T^+(\omega) \] and, since each $f^m(\omega)$ is a step function, \[ \int_{0}^{T}f_t^m( \omega)d\bigl(K_{t}^{+}(\omega)-K_{t}^{n+}( \omega )\bigr)\xrightarrow[m\to+\infty]{}0. \] Therefore, we have \[ \limsup_{n\to+\infty}\int_{0}^{T} \bigl(Y_{t}^n-L_{t}\bigr)d\bigl(K_{t}^{+}-K_{t}^{n+} \bigr)\leq2\varepsilon K_T^+(\omega). \] From (\ref{f}) we deduce that \[ \int_{0}^{T}(Y_{t}-L_{t})dK_{t}^{+} \leq3\varepsilon K_T^+(\omega). \] The arbitrariness of $\varepsilon$, together with $Y\geq L$, shows that $\int_{0}^{T}(Y_{t}-L_{t})dK_{t}^{+}=0$.
Further, by Lemma \ref{ll} and the result treated on p. 465 of Saisho \cite{YS} we can write \begin{equation} \label{e7} \int_{0}^{T}\bigl(U_{s}-Y^n_{s} \bigr)n\bigl(Y^n_{s}-U_{s}\bigr)^+ds\xrightarrow[n \to+\infty ]{}\int_{0}^{T}(U_{s}-Y_{s})dK^{-}_{s}. \end{equation} Since $\int_{0}^{T}(U_{s}-Y^n_{s})n(Y^n_{s}-U_{s})^+ds=\int_{0}^{T}(U_{s}-Y^n_{s})dK^{n-}_{s}\leq0$ for each $n\geq0$ $\mathbb{P}$-a.s. and for each $n,m\geq0$, $n\neq m$, \[ \mathbb{E} \Biggl[ \Biggl\vert \int_0^T \bigl(Y_{s}^{n}-Y_{s}^m \bigr)dK^{m-}_s \Biggr\vert \Biggr] \leq\mathbb{E} \Bigl[ \sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^m\big|K^{m-}_T \Bigr]\xrightarrow[n,m\to+\infty]{}0. \] Then we have \begin{equation} \label{e8} \limsup_{n\rightarrow+\infty}\int_0^T \bigl(U_{s}-Y^n_{s}\bigr)dK^{n-}_s \leq 0\quad\mathbb{P}\mbox{-a.s.} \end{equation} Combining (\ref{e7}) and (\ref{e8}), we get $\int_0^T(U_{s}-Y_{s})dK^{-}_s\leq0\ \mathbb{P}\mbox{-a.s.}$ Noting that $Y\leq U$, we conclude that $\int_0^T(U_{s}-Y_{s})dK^{-}_s=0$. Consequently, $(Y_t,Z_t,K^+_t,K^-_t)$ is the solution to (\ref{a}) associated to the data $(\xi, g, L, U)$. \end{proof} We can now state the main result: \begin{thm}\label{fpt} Assume $(H1)$--$(H6)$ hold for a sufficiently large $\beta$. Then the DRBSDE~(\ref{a}) has a unique solution $(Y,Z,K^+,K^-)$ that belongs to $(\mathcal{S}^{2}(\beta,a)\cap\mathcal{S}^{2,a}(\beta,a))\times \mathcal{H}^{2}(\beta,a)\times\mathcal{S}^{2}\times\mathcal{S}^{2}$. \end{thm} \begin{proof} Given $(\phi,\psi)\in\mathfrak{B}^2$, consider the following DRBSDE: \begin{align} \label{b1} \displaystyle \left\{ \begin{array}{@{}l} Y_{t}=\xi\,{+}\displaystyle\int_{t}^{T} f(s,\phi_{s},\psi _{s})ds\,{+}\,(K_{T}^{+}-K_{t}^{+})-(K_{T}^{-}-K_{t}^{-})\,{-}\displaystyle\int_{t}^{T}Z_{s}dB_{s} \quad t\,{\leq}\, T \\ L_t\leq{Y}_{t}\leq U_t,\ \forall t\leq T \ \mbox{and}\ \displaystyle\int_{0}^{T}({Y}_{t}-L_{t})d{K}^+_{t}=\displaystyle\int_{0}^{T}(U_{t}-Y_t)d{K}^-_{t}=0. \end{array} \right.
\end{align} From ${(H2)}$ and ${(H3)}$, we have \[ \big|f(t,\phi_{t},\psi_{t})\big|^2 \leq3 \bigl(a(t)^4|\phi_t|^2+a(t)^2| \psi_t|^2+\big|f(t,0,0)\big|^2 \bigr). \] It follows from ${(H4)}$ that $\frac{f}{a}\in\mathcal{H}^{2}(\beta,a)$ and then (\ref{b1}) has a unique solution $(Y,Z,K^{+},K^{-})$. We define a mapping \[ \begin{array}{ccccc} \varphi&: & \mathfrak{B}^2 &\longrightarrow& \mathfrak{B}^2 \\ & & (\phi,\psi)&\longmapsto& (Y,Z)\\ \end{array} \] Let $\varphi(\phi,\psi)=(Y,Z)$ and $\varphi(\phi',\psi')=(Y',Z')$, where $(Y,Z,K^{+},K^{-})$ (resp. $(Y',Z',K^{+'},K^{-'})$) is the unique solution to the DRBSDE associated with data $(\xi,f(.,\phi,\psi),L,U)$ (resp. $(\xi,f(.,\phi',\psi'),L,U)$). Denote $\Delta{\varGamma}=\varGamma-\varGamma'$ for $\varGamma=Y,Z,K^+,K^-,\phi,\psi$ and $\Delta f_t=f(t,\phi_{t},\psi_{t})-f(t,{\phi'}_{t},{\psi'}_{t})$. Applying It\^{o}'s formula to $e^{\beta A(t)}|\Delta Y_t|^2$ and taking expectation we have \begin{align*} &\mathbb{E}e^{\beta A(t)}|\Delta Y_t|^2 +\beta \mathbb{E}\int_{t}^{T} e^{\beta A(s)}a^2(s)| \Delta Y_{s}|^2ds +\mathbb{E}\int_{t}^{T}e^{\beta A(s)}| \Delta Z_{s}|^2ds \\ &\quad\leq2\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \Delta Y_{s} \Delta f_s ds \\ &\quad\leq\alpha\beta\mathbb{E}\int_{t}^{T} e^{\beta A(s)}a^2(s)|\Delta Y_{s}|^2ds + \frac{2}{\alpha\beta}\mathbb{E}\int_{t}^{T} e^{\beta A(s)}\bigl(a^2(s)|\Delta\phi_s|^2+| \Delta\psi_s|^2\bigr)ds. \end{align*} We have used the fact that $\Delta Y_{s}d(\Delta K_{s}^{+}-\Delta K_{s}^{-})\leq0$. Choosing $\alpha\beta=4$ and $\beta>5$, we can write \[ \bigl\Vert \varphi(\phi,\psi) \bigr\Vert _\beta^2 \leq \frac{1}{2} \bigl\Vert (\phi ,\psi) \bigr\Vert _\beta^2. \] It follows that $\varphi$ is a strict contraction on $\mathfrak {B}^2$ and hence has a unique fixed point, which is the solution to the DRBSDE (\ref{a}).
\end{proof} \begin{remark} If we take $U=+\infty$, we obtain the BSDE with one continuous reflecting barrier $L$; in this case the arguments above prove the existence and uniqueness of the solution to the RBSDE (\ref{RBSDE}) by means of a penalization method. Before this work, Wen L\"{u} \cite{WL} showed the existence and uniqueness result for this class of equations via the Snell envelope approach. \end{remark} \subsection{Completely separated barriers} In this section we prove the existence of a solution to (\ref{a}) when the barriers are completely separated, i.e., $L_t<U_t$, $\forall t\leq T$. We assume: \begin{itemize} \item[$(\mathcal{H}7)$] there exists a continuous semimartingale \[ H_t=H_0+\int_0^th_sdB_s-V^+_t+V^-_t, \quad H_T=\xi \] where $h\in\mathcal{H}^2(0,a)$ and $V^\pm\in\mathcal{S}^2$ ($V^\pm_0=0$) are two nondecreasing continuous processes, such that \begin{equation} \label{h} L_t\leq H_t\leq U_t \qquad0 \leq t\leq T. \end{equation} \end{itemize} We will show the existence by the general penalization method. We first consider the special case when the generator does not depend on $(y,z)$: \[ f(t,y,z)=f(t). \] Let $(Y^{n},Z^{n})\in(\mathcal{S}^{2}(\beta,a)\cap\mathcal {S}^{2,a}(\beta,a))\times\mathcal{H}^{2}(\beta,a)$ be the solution to the following BSDE \begin{align} \label{b} Y^{n}_{t}&=\xi+\int_{t}^{T}f(s)ds-n \int_{t}^{T}\bigl(Y^{n}_s-U_s \bigr)^+ds+n\int_{t}^{T}\bigl(Y^{n}_s-L_s \bigr)^-ds \nonumber \\ &\quad-\int_{t}^{T}Z^{n}_{s}dB_{s}. \end{align} We denote $K^{n+}_t:=n\int_{0}^{t}(Y^{n}_s-L_s)^-ds$, $K^{n-}_t:=n\int_{0}^{t}(Y^{n}_s-U_s)^+ds$, $K^{n}_t=K^{n+}_t-K^{n-}_t$ and $f^{n}(s,y)=f(s)-n(y-U_s)^++n(y-L_s)^-$. Now let us derive the uniform a priori estimates of $(Y^{n},Z^{n},K^{n+},K^{n-})$.
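Before stating the estimates, the mechanism of the two-sided penalization in (\ref{b}) can be illustrated numerically. The sketch below is not part of the proofs: it takes the degenerate case $B\equiv0$, $Z\equiv0$ with constant data ($\xi$, $f$, $L$, $U$ are illustrative values chosen here, not from the paper), in which (\ref{b}) reduces to a backward ODE that an implicit Euler scheme solves directly; the overshoot above $U$ then decays like $1/n$, in line with the $C/n$ bound behind Lemma \ref{l1}.

```python
def penalized_Y(n, xi=0.5, T=1.0, steps=4000, L=-1.0, U=1.0, f_const=2.0):
    # Deterministic toy version of the doubly penalized BSDE (b): with
    # B = 0 and Z = 0 and constant data (all numerical values here are
    # illustrative assumptions), Y solves the backward ODE
    #   Y_t = xi + int_t^T [ f_const - n(Y_s - U)^+ + n(Y_s - L)^- ] ds.
    # The penalty terms are stiff for large n, so they are treated
    # implicitly while stepping backward from t = T to t = 0.
    dt = T / steps
    y = xi                                     # terminal condition Y_T = xi
    path = [y]
    for _ in range(steps):
        y_pred = y + dt * f_const              # explicit step for the generator
        if y_pred > U:                         # implicit step for -n(Y - U)^+
            y = (y_pred + n * dt * U) / (1 + n * dt)
        elif y_pred < L:                       # implicit step for +n(Y - L)^-
            y = (y_pred + n * dt * L) / (1 + n * dt)
        else:
            y = y_pred
        path.append(y)
    return path[::-1]                          # path[0] approximates Y_0
```

With these values the unpenalized solution $0.5+2(T-t)$ would exceed $U=1$; the scheme instead settles at $Y^n_0\approx U+f_{\mathrm{const}}/n$, so $n(Y^n-U)^+$ stays bounded in $n$, mirroring the uniform bound used in the lemmas that follow.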
\begin{lemma}\label{l2} There exists a positive constant $\kappa$ independent of $n$ such that, $\forall n\geq0$, \begin{align*} &\mathbb{E} \Biggl[\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}\big|^{2}+ \int_{0}^{T}e^{\beta A(t)}a^2(t)\big|Y_{t}^{n}\big|^{2}dt \\ &\quad+\int_{0}^{T}e^{\beta A(t)}\big|Z_{t}^{n}\big|^{2}dt+\big|K_{T}^{n+}\big|^2+\big|K_{T}^{n-}\big|^2 \Biggr]\leq\kappa. \end{align*} \end{lemma} \begin{proof} Consider the RBSDE with data $(\xi,f,L)$. That is, \begin{eqnarray} \label{c} && \displaystyle \left\{ \begin{array}{@{}ll} \overline{Y}_{t}=\xi+\displaystyle\int_{t}^{T}f(s)ds+\overline {K}_{T}-\overline{K}_{t}-\displaystyle\int_{t}^{T}\overline {Z}_{s}dB_{s}& \hbox{}\\ \overline{Y}_{t}\geq L_t,\ \forall t\leq T \ \mbox{and}\ \displaystyle\int_{0}^{T}(\overline{Y}_{t}-L_{t})d\overline{K}_{t}=0. & \hbox{} \end{array} \right. \end{eqnarray} From Appendix~\ref{apdx}, there exists a unique triplet of processes $(\overline{Y},\overline{Z},\overline{K})\in(\mathcal{S}^{2}(\beta ,a)\cap\mathcal{S}^{2,a}(\beta,a))\times\mathcal{H}^{2}(\beta ,a)\times\mathcal{S}^{2}$ which is the solution to the RBSDE (\ref{c}). We consider the penalization equation associated with the RBSDE (\ref{c}), for $n\in\mathbb{N}$, \begin{equation*} \overline{Y}^n_{t}=\xi+\int_{t}^{T} f(s)ds+n\int_{t}^{T}\bigl(\overline {Y}^n_s-L_s\bigr)^-ds-\int _{t}^{T}\overline{Z}^n_{s}dB_{s}. \end{equation*} Remark \ref{Rmqcomp} implies that $\overline{Y}^0_{t}\leq\overline {Y}^n_{t}\leq\overline{Y}^{n+1}_{t}$ and $Y^n_{t}\leq\overline{Y}^n_{t}$ for all $t\leq T$. Therefore, as $n\rightarrow+\infty$, $\overline{Y}^n_t\nearrow\overline{Y}_t$ for all $t\leq T$. Hence $Y^{n}_{t}\leq\overline{Y}_{t}$. Similarly, we consider the RBSDE with data $(\xi,f,U)$.
There exists a unique triplet of processes $(\underline{Y},\underline{Z},\underline {K})\in(\mathcal{S}^{2}(\beta,a)\cap\mathcal{S}^{2,a}(\beta ,a))\times\mathcal{H}^{2}(\beta,a)\times\mathcal{S}^{2}$, which satisfies \begin{eqnarray} \label{c1} && \displaystyle \left\{ \begin{array}{@{}ll} \underline{Y}_{t}=\xi+\displaystyle\int_{t}^{T}f(s)ds-(\underline {K}_{T}-\underline{K}_{t})-\displaystyle\int_{t}^{T}\underline {Z}_{s}dB_{s}& \hbox{}\\ \underline{Y}_{t}\leq U_t,\ \forall t\leq T \ \mbox{and}\ \displaystyle\int_{0}^{T}(U_t-\underline{Y}_{t})d\underline{K}_{t}=0. & \hbox{} \end{array} \right. \end{eqnarray} By the penalization equation associated with the RBSDE (\ref{c1}) \begin{equation*} \underline{Y}^n_{t}=\xi+\int_{t}^{T} f(s)ds-n\int_{t}^{T}\bigl(\underline {Y}^n_s-U_s\bigr)^+ds-\int _{t}^{T}\underline{Z}^n_{s}dB_{s} \end{equation*} and Remark \ref{Rmqcomp}, we deduce that $Y^{n}_{t}\geq\underline {Y}_{t}$ for all $t\leq T$. Then we can write \begin{equation} \label{m} \mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y^{n}_t\big|^2 \leq\max \Bigl\{\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}|\overline {Y}_t|^2,\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}| \underline {Y}_t|^2 \Bigr\} \leq\kappa.
\end{equation} On the other hand, applying It\^{o}'s formula and taking expectation yields, for $t\leq T$: \begin{align*} &\beta\mathbb{E}\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}\big|^{2}ds+ \mathbb{E}\int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}\big|^{2}ds \\ &\quad\leq\mathbb{E}e^{\beta A(T)}|\xi|^2+2\mathbb{E}\int _{t}^{T}e^{\beta A(s)}Y_{s}^{n}f(s)ds \\ &\qquad-2n\mathbb{E}\int_{t}^{T}e^{\beta A(s)}Y_{s}^{n} \bigl(Y_s^{n}-U_s\bigr)^+ds+2n\mathbb{E}\int _{t}^{T}e^{\beta A(s)}Y_{s}^{n} \bigl(Y_s^{n}-L_s\bigr)^-ds \\ &\quad\leq\mathbb{E}e^{\beta A(T)}|\xi|^2+\frac{\beta}{2}\mathbb{E} \int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}\big|^2ds+ \frac{2}{\beta}\mathbb {E}\int_{t}^{T}e^{\beta A(s)} \frac{|f(s)|^2}{a^2(s)}ds \\ &\qquad+2n\mathbb{E}\int_{t}^{T}e^{\beta A(s)}U_{s}^{-} \bigl(Y_s^{n}-U_s\bigr)^+ds+2n\mathbb{E}\int _{t}^{T}e^{\beta A(s)}L_{s}^{+} \bigl(Y_s^{n}-L_s\bigr)^-ds. \end{align*} Hence \begin{align} \label{ekk} &\frac{\beta}{2}\mathbb{E}\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}\big|^{2}ds+ \mathbb{E}\int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}\big|^{2}ds \nonumber \\ &\quad\leq\mathbb{E}e^{\beta A(T)}|\xi|^2+ \frac{2}{\beta }\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \biggl\vert \frac{f(s)}{a(s)} \biggr\vert ^2ds+ \frac{1}{\alpha}\mathbb{E}\sup_{0\leq t\leq T}e^{2\beta A(t)} \bigl(\big|L_{t}^{+}\big|^2+\big|U_{t}^{-}\big|^2 \bigr) \nonumber \\ &\qquad+\alpha\mathbb{E} \Biggl[\int_{t}^{T}n \bigl(Y_s^{n}-U_s\bigr)^+ds \Biggr]^2+\alpha\mathbb{E} \Biggl[\int_{t}^{T}n \bigl(Y_s^{n}-L_s\bigr)^-ds \Biggr]^2. \end{align} Now we need to estimate $\mathbb{E} [\int_{t}^{T}n(Y_s^{n}-U_s)^+ds ]^2+ \mathbb{E} [\int_{t}^{T}n(Y_s^{n}-L_s)^-ds ]^2$. For this, let us consider the following stopping times \[ \displaystyle \left\{ \begin{array}{@{}ll} \tau_{0}=0 , & \hbox{} \\ \tau_{2l+1}=\inf\big\{t>\tau_{2l}\;\; |\;\; Y_t^{n}\leq L_t\big\}\wedge T, & l\geq0 \\ \tau_{2l+2}=\inf\big\{t>\tau_{2l+1}\;\; |\;\; Y_t^{n}\geq U_t\big\}\wedge T, & l\geq0. \end{array} \right.
\] Since $Y^{n}$, $L$ and $U$ are continuous processes and $L<U$, $\tau_l<\tau _{l+1}$ on the set $\{\tau_{l+1}<T\}$. In addition, the sequence $(\tau_l)_{l\geq 0}$ is of stationary type (i.e., $\forall\omega\in\varOmega$, there exists $l_0(\omega)$ such that $\tau _{l_0}(\omega)=T$). Indeed, let us set $G=\{\omega\in\varOmega:\ \tau _{l}(\omega)<T,\ \forall l\geq0\}$, and we will show that $\mathbb{P}(G)=0$. We assume that $\mathbb{P}(G)>0$; therefore, for $\omega\in G$, we have $Y^{n}_{\tau_{2l+1}}\leq L_{\tau_{2l+1}}$ and $Y^{n}_{\tau_{2l}}\geq U_{\tau_{2l}}$. Since $(\tau_l)_{l\geq 0}$ is a nondecreasing sequence, $\tau_l\nearrow\tau$; hence $U_\tau\leq Y^{n}_\tau\leq L_\tau$, which is a contradiction since $L<U$. We deduce that $\mathbb{P}(G)=0$. Obviously $Y^{n}\geq L$ on the interval $[\tau_{2l},\tau_{2l+1}]$; hence the BSDE (\ref{b}) becomes \begin{equation} \label{eee1} Y^{n}_{\tau_{2l}}=Y^{n}_{\tau_{2l+1}} + \int_{\tau_{2l}}^{\tau _{2l+1}}f(s)ds-n\int_{\tau_{2l}}^{\tau_{2l+1}} \bigl(Y^{n}_s-U_s\bigr)^+ds-\int _{\tau_{2l}}^{\tau_{2l+1}}Z^{n}_{s}dB_{s}. \end{equation} On the other hand, using the assumption $(\mathcal{H}7)$, we get \begin{align*} Y_{\tau_{2l}}^{n}&\geq H_{\tau_{2l}} \mbox{ on } \{\tau_{2l}< T\}\quad\mbox {and}\quad Y_{\tau_{2l}}^{n}=H_{\tau_{2l}}=\xi\mbox{ on } \{\tau_{2l}=T\} ,\\ Y_{\tau_{2l+1}}^{n}&\leq H_{\tau_{2l+1}} \mbox{ on } \{\tau_{2l+1}< T\} \quad\mbox{and}\quad Y_{\tau_{2l+1}}^{n}=H_{\tau_{2l+1}}=\xi\mbox{ on }\{\tau _{2l+1}=T\}. \end{align*} From (\ref{eee1}) and the definition of process $H$ we obtain \begin{align*} n\int_{\tau_{2l}}^{\tau_{2l+1}}\bigl(Y_s^{n}-U_s \bigr)^+ds &\leq H_{\tau_{2l+1}}-H_{\tau_{2l}}+\int_{\tau_{2l}}^{\tau_{2l+1}}f(s)ds -\int_{\tau_{2l}}^{\tau_{2l+1}}Z_s^{n}dB_s \\ &\leq\int_{\tau_{2l}}^{\tau_{2l+1}}\bigl(h_s-Z_s^{n} \bigr)dB_s+\int_{\tau _{2l}}^{\tau_{2l+1}}\big|f(s)\big|ds+V^-_{\tau_{2l+1}}-V^-_{\tau_{2l}}.
\end{align*} Summing over $l$ and using the fact that $Y^{n}\leq U$ on the intervals $[\tau_{2l+1},\tau_{2l+2}]$, we can write, for $t\leq T$, \begin{align} \label{c2} \mathbb{E} \Biggl[n\int_{t}^{T} \bigl(Y_s^{n}-U_s\bigr)^+ds \Biggr]^2&\leq 4 \Biggl(\mathbb{E}\int_{t}^{T}|h_s|^2ds+ \mathbb{E}\int_{t}^{T}e^{\beta A_s}\big|Z_s^{n}\big|^2ds \nonumber \\ &\quad+\frac{T}{\beta}\mathbb{E}\int_{t}^{T}e^{\beta A_s} \frac{|f(s)|^2}{a^2(s)}ds+\mathbb{E}\big|V^-_T\big|^2 \Biggr). \end{align} In the same way, we obtain \begin{align} \label{c3} \mathbb{E} \Biggl[n\int_{t}^{T} \bigl(Y_s^{n}-L_s\bigr)^-ds \Biggr]^2&\leq 4 \Biggl(\mathbb{E}\int_{t}^{T}|h_s|^2ds+ \mathbb{E}\int_{t}^{T}e^{\beta A_s}\big|Z_s^{n}\big|^2ds \nonumber \\ &\quad+\frac{T}{\beta}\mathbb{E}\int_{t}^{T}e^{\beta A_s} \frac{|f(s)|^2}{a^2(s)}ds+\mathbb{E}\big|V^+_T\big|^2 \Biggr). \end{align} Combining (\ref{c2}), (\ref{c3}) with (\ref{ekk}), we obtain the desired result. \end{proof} \begin{lemma}\label{l3}\quad \begin{enumerate} \item\quad$\mathbb{E}\sup\limits_{0\leq t\leq T}e^{\beta A(t)}|(Y_{t}^{n}-U_{t})^+|^{2}\xrightarrow[n\to+\infty]{}0$. \item\quad$\mathbb{E}\sup\limits_{0\leq t\leq T}e^{\beta A(t)}|(Y_{t}^{n}-L_{t})^-|^{2}\xrightarrow[n\to+\infty]{}0$. \end{enumerate} \end{lemma} \begin{proof} Consider the following BSDE for each $n\in\mathbb{N}$ \begin{align*} \widehat{Y}^{n}_{t}&=\xi+\int_{t}^{T}f(s)ds+n \int_{t}^{T}\bigl(L_s- \widehat{Y}^{n}_s\bigr)ds -\int_{t}^{T} \widehat{Z}^{n}_{s}dB_{s} \\ &=\xi+\int_{t}^{T}f(s)ds+n\int _{t}^{T}\bigl(\widehat{Y}^{n}_s-L_s \bigr)^-ds -n\int_{t}^{T}\bigl(L_s- \widehat{Y}^{n}_s\bigr)^-ds\,{-}\int_{t}^{T} \widehat{Z}^{n}_{s}dB_{s}. \end{align*} By Remark \ref{Rmqcomp}, we have $Y^{n}_{t}\geq\widehat{Y}^{n}_{t}$ for all $t\leq T$. Let $\nu$ be a stopping time such that $\nu\leq T$. Then \begin{equation} \label{ek4} \widehat{Y}^n_{\nu}=\mathbb{E} \Biggl[e^{-n(T-\nu)}\xi+\int_{\nu }^{T}e^{-n(s-\nu)}f(s)ds +n\int_{\nu}^{T}e^{-n(s-\nu)}L_sds| \mathcal{F}_{\nu} \Biggr].
\end{equation} It is easily seen that \[ e^{-n(T-\nu)}\xi+n\int_{\nu}^{T}e^{-n(s-\nu)}L_{s}ds \xrightarrow[n\to +\infty]{} \xi\mathbh{1}_{\nu=T}+L_{\nu} \mathbh{1}_{\nu<T} \qquad \mathbb{P}\mbox{-a.s.\ in\ } \mathcal{L}^2. \] Moreover, the conditional expectation converges also in $\mathcal {L}^2$. In addition, by the H\"{o}lder inequality, we have \begin{align*} & \Biggl\vert \int_{\nu}^{T}e^{-n(s-\nu)} f(s)ds \Biggr\vert ^2 \\ &\quad\leq \Biggl(\int_{\nu}^{T}e^{\beta A(s)} \biggl \vert \frac{f(s)}{a(s)} \biggr\vert ^2ds \Biggr) \Biggl(\int _{\nu}^{T}e^{-2n(s-\nu)-\beta A(s)}a^2(s)ds \Biggr)\xrightarrow[n\to+\infty]{}0. \end{align*} Thus $\int_{\nu}^{T}e^{-n(s-\nu)} f(s)ds\xrightarrow[n\to+\infty]{}0$ $\mathbb{P}\mbox{-a.s. \ in \ }\mathcal{L}^2$. Now, we denote \begin{align*} &\widehat{y}^n_t:=e^{-n(T-t)}\xi+\int _{t}^{T}e^{-n(s-t)}\bigl(f(s)+nL_s \bigr)ds,\\ &\tilde{y}^{n}_t:=e^{-n(T-t)}L_T+\int _{t}^{T}e^{-n(s-t)}\bigl(f(s)+nL_s \bigr)ds \end{align*} and \[ X^n_t:=e^{-n(T-t)}L_T+n\int _{t}^{T}e^{-n(s-t)}L_sds-L_t. \] Since $L$ is uniformly continuous on $[0,T]$, it can be shown that the sequence $(X^n_t)_{n\geq1}$ converges to $0$ uniformly in $t$, and the same holds for $(X^{n-}_t)_{n\geq1}$. Lebesgue's dominated convergence theorem implies that \begin{align*} &\lim\limits _{n\rightarrow+\infty}\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big| \bigl(\widehat{y}^{n}_t-L_t \bigr)^-\big|^2 =\lim_{n\rightarrow+\infty}\mathbb{E}\sup _{0\leq t\leq T}e^{\beta A(t)}\big|\bigl(\tilde{y}^{n}_t-L_t \bigr)^-\big|^2 \\ &\quad\leq2\lim_{n\rightarrow+\infty}\mathbb{E} \Biggl[\sup _{0\leq t\leq T}e^{\beta A(t)}\big|X^{n-}_t\big|^2 +\sup_{0\leq t\leq T}e^{\beta A(t)} \Biggl\vert \int _{t}^{T}e^{-n(s-t)}f(s)ds \Biggr\vert ^2 \Biggr]=0. \end{align*} So, from (\ref{ek4}), Jensen's inequality and Doob's maximal quadratic inequality (see Theorem 20, p.
11 in \cite{Pro}), we have \begin{align*} \mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|\bigl( \widehat{Y}^{n}_t-L_t\bigr)^-\big|^2 & \leq\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)} \bigl\vert \mathbb{E} \bigl[\bigl(\widehat{y}^{n}_t-L_t \bigr)^-|\mathcal{F}_t \bigr] \bigr\vert ^2 \\ &\leq4\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)} \bigl\vert \bigl(\widehat {y}^{n}_t-L_t\bigr)^- \bigr \vert ^2\xrightarrow[n\to+\infty]{}0. \end{align*} From the fact that $Y^{n}_{t}\geq\widehat{Y}^{n}_{t}$ for all $t\leq T$ we deduce that \[ \lim\limits _{n\rightarrow+\infty}\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big| \bigl(Y^{n}_t-L_t\bigr)^-\big|^2=0. \] Similarly to the proof of Lemma \ref{l}, we obtain \[ \lim\limits _{n\rightarrow+\infty}\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big| \bigl(Y^{n}_t-U_t\bigr)^+\big|^2=0.\qedhere \] \end{proof} \begin{lemma}\label{l4} For each $n\geq p\geq0$, we have \begin{align*} &\mathbb{E} \Biggl[\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^{p}\big|^{2}+ \int_{0}^{T}e^{\beta A(t)}a^2(t)\big|Y_{t}^{n}-Y_{t}^p\big|^{2}dt \\ &\quad+\int_{0}^{T}e^{\beta A(t)}\big|Z_{t}^{n}-Z_{t}^p\big|^{2}dt+ \sup_{0\leq t\leq T}\big|K_{t}^{n}-K_{t}^{p}\big|^{2} \Biggr]\xrightarrow[n,p\to+\infty]{}0.
\end{align*} \end{lemma} \begin{proof} It\^{o}'s formula implies that \begin{align*} &\mathbb{E}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^p\big|^{2}+ \mathbb{E}\int_{t}^{T}e^{\beta A(s)}\bigl(\beta a^2(s)\big|Y_{s}^{n}-Y_{s}^p\big|^2+\big|Z_{s}^{n}-Z_{s}^p\big|^2 \bigr)ds \\ &\quad\leq2\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{n}-Y_{s}^p\bigr) \bigl(dK_s^{n+}-dK_s^{p+}\bigr) \\ &\qquad-2\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{n}-Y_{s}^p\bigr) \bigl(dK_s^{n-}-dK_s^{p-}\bigr) \\ &\quad\leq2\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{n}-L_{s}\bigr)^-dK_s^{p+} +2\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{p}-L_{s}\bigr)^-dK_s^{n+} \\ &\qquad+2\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{n}-U_{s}\bigr)^+dK_s^{p-} +2\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{p}-U_{s}\bigr)^+dK_s^{n-}. \end{align*} Hence \begin{align*} &\beta\mathbb{E}\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}-Y_{s}^p\big|^2ds +\mathbb{E}\int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}-Z_{s}^p\big|^2ds \\ &\quad\leq2\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)} \bigl(Y_{t}^{n}-L_{t}\bigr)^-K_T^{p+} +2\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\bigl(Y_{t}^{p}-L_{t} \bigr)^-K_T^{n+} \\ &\qquad+2\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\bigl(Y_{t}^{n}-U_{t} \bigr)^+K_T^{p-} +2\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)} \bigl(Y_{t}^{p}-U_{t}\bigr)^+K_T^{n-}. \end{align*} Lemma \ref{l3} implies that \begin{equation} \mathbb{E}\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}-Y_{s}^p\big|^2ds +\mathbb{E}\int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}-Z_{s}^p\big|^2ds \xrightarrow[n,p\to+\infty]{}0. \end{equation} On the other hand, by the Burkholder--Davis--Gundy inequality, we get \begin{equation} \mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^p\big|^2 \xrightarrow[n,p\to+\infty]{}0.
\end{equation} From the equation \begin{equation} K^n_t=Y^n_0-Y^n_t- \int_0^tf(s)ds+\int_0^tZ^n_sdB_s \quad0\leq t\leq T, \end{equation} we can conclude that \begin{equation} \mathbb{E}\sup_{0\leq t\leq T}\big|K_{t}^{n}-K_{t}^p\big|^2 \xrightarrow[n,p\to +\infty]{}0. \end{equation} The proof is completed. \end{proof} The main result of this section is the following: \begin{thm}\label{tt} Assume that $L<U$. Then the DRBSDE (\ref{a}) has a unique solution $(Y,Z,K^+,K^-)$ that belongs to $(\mathcal{S}^{2}(\beta,a)\cap\mathcal {S}^{2,a}(\beta,a))\times\mathcal{H}^{2}(\beta,a)\times\mathcal {S}^{2}\times\mathcal{S}^{2}$. \end{thm} \begin{proof} From Lemma \ref{l4}, we obtain that there exists an adapted process $(Y,Z,K)\in(\mathcal{S}^{2}(\beta,a)\cap\mathcal{S}^{2,a}(\beta ,a))\times\mathcal{H}^{2}(\beta,a)\times\mathcal{S}^{2}$ such that \begin{align} \label{c4} &\mathbb{E} \Biggl[\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}\big|^{2}+ \int_{0}^{T}e^{\beta A(t)}a^2(t)\big|Y_{t}^{n}-Y_{t}\big|^{2}dt \\ &\quad+\int_{0}^{T}e^{\beta A(t)}\big|Z_{t}^{n}-Z_{t}\big|^{2}dt+ \sup_{0\leq t\leq T}\big|K_{t}^{n}-K_{t}\big|^{2} \Biggr]\xrightarrow[n\to+\infty]{}0 \nonumber . \end{align} Then, passing to the limit as $n\rightarrow+\infty$ in the equation \[ Y^n_t=\xi+\int_t^Tf(s)ds+K^n_T-K^n_t- \int_t^TZ^n_sdB_s, \] we obtain \[ Y_t=\xi+\int_t^Tf(s)ds+K_T-K_t- \int_t^TZ_sdB_s. \] Let $\tau\leq T$ be a stopping time. By Lemma \ref{l2}, the sequences $K^{n\pm}_\tau$ are bounded in $\mathcal{L}^2$; consequently, there exist $\mathcal{F}_\tau$-measurable random variables $K^{\pm}_\tau$ in $\mathcal{L}^2$ such that subsequences of $K^{n\pm}_\tau$ converge weakly to $K^{\pm}_\tau$. Now we set $\mathcal{K}_\tau=K^+_\tau-K^-_\tau$. By \cite{Yosida} (Mazur's Lemma, p.
120), there exists, for every $n\in\mathbb{N}$, an integer $N\geq n$ and a convex combination $\sum_{j=n}^N\zeta_j^{\tau ,n}(K^{\pm}_\tau)_j$ with $\zeta_j^{\tau,n}\geq0$ and $\sum_{j=n}^N\zeta_j^{\tau,n}=1$ such that \begin{equation} \label{c6} \mathcal{K}_\tau^{n\pm}:=\sum _{j=n}^N\zeta_j^{\tau,n} \bigl(K^{\pm}_\tau \bigr)_j\xrightarrow[n\to+ \infty]{}K^{\pm}_\tau. \end{equation} Denoting $\mathcal{K}_\tau^{n}=\mathcal{K}_\tau^{n+}-\mathcal{K}_\tau ^{n-}$, it follows that \begin{equation} \label{c5} \mathbb{E}\big|\mathcal{K}_\tau^{n}- \mathcal{K}_\tau\big|^2\xrightarrow[n\to +\infty]{}0. \end{equation} Thanks to (\ref{c4}), for every $\varepsilon>0$ we have $\|K_\tau^{n}-K_\tau\|_{\mathcal {L}^2}<\varepsilon$ for $n$ large enough. Therefore \begin{align*} \big\|\mathcal{K}_\tau^{n}-K_\tau \big\|_{\mathcal{L}^2}&=\Bigg\|\sum_{j=n}^N\zeta _j^{\tau,n}\bigl(\bigl(K_\tau \bigr)_j-K_\tau\bigr)\Bigg\|_{\mathcal{L}^2} \\ &\leq \sum_{j=n}^N \zeta_j^{\tau,n}\big\|\bigl(K_\tau \bigr)_j-K_\tau\big\|_{\mathcal {L}^2}<\varepsilon. \end{align*} Hence \begin{equation} \label{c5new} \mathbb{E}\big|\mathcal{K}_\tau^{n}-K_\tau\big|^2 \xrightarrow[n\to+\infty]{}0. \end{equation} Combining (\ref{c5}) and (\ref{c5new}), we obtain $\mathcal{K}_\tau=K_\tau $ a.s. Therefore, from Theorem 86, p. 220 in \cite{DM} we have $\mathcal {K}_t=K_t$ for all $t\leq T$. On the other hand, (\ref{c6}) implies that, for $\tau=T$, there exists a subsequence of $\mathcal {K}_T^{n+}:=\sum_{j=n}^N\zeta_j^{T,n}(K^{+}_T)_j$ (resp. $\mathcal {K}_T^{n-}:=\sum_{j=n}^N\zeta_j^{T,n}(K^{-}_T)_j$) converging a.s. to $K^+_T$ (resp. $K^-_T$). Then for $\mathbb{P}$-a.s. $\omega\in\varOmega$, the sequence $\mathcal{K}^{n+}_T(\omega)$ (resp. $\mathcal {K}^{n-}_T(\omega)$) is bounded. By Theorem 4.3.3, p. 88 in \cite{Chung}, there exists a subsequence of $\mathcal{K}_t^{n+}(\omega)$ (resp. $\mathcal{K}_t^{n-}(\omega)$) converging weakly to $K^{+}_t(\omega)$ (resp. $K^{-}_t(\omega)$).
On the other hand, by the definition of the stopping times $(\tau_l)_{l\geq 0}$, we have \[ \left\{ \begin{array}{@{}ll} Y_t^{n}> L_t , & \hbox{on $[\tau_{2l},\tau_{2l+1}[$;} \\ Y_t^{n}< U_t , & \hbox{on $[\tau_{2l+1},\tau_{2l+2}[$.} \end{array} \right. \] Then \[ L_t \mathbh{1}_{[\tau_{2i},\tau_{2i+1}]}(t)\leq Y_t^{n} \leq U_t \mathbh {1}_{[\tau_{2i+1},\tau_{2i+2}]}(t). \] Summing over $i=0,\ldots, l$ and passing to the limit in $n$, we obtain $L_t\leq Y_t\leq U_t$. Now we show that the Skorokhod conditions hold. Indeed, since $\mathcal{K}_t^{n+}(\omega)$ tends to $K^{+}_t(\omega)$, using the result on p. 465 of \cite{YS} we can write \begin{equation} \label{c7} \int_{0}^{T}\bigl(Y^n_{t}( \omega)-L_t(\omega)\bigr)d\mathcal{K}_t^{n+}( \omega )\xrightarrow[n\to+\infty]{}\int_{0}^{T} \bigl(Y_{t}(\omega)-L_t(\omega )\bigr)dK^{+}_{t}( \omega). \end{equation} Since $\int_{0}^{T}(Y^n_{t}-L_t)dK^{n+}_{t}\leq0$, $\forall n\geq0$ a.s., and $\forall n,m\geq0$, $n\neq m$, \[ \mathbb{E} \Biggl[ \Biggl\vert \int_0^T \bigl(Y_{t}^{n}-Y_{t}^m \bigr)dK^{m+}_t \Biggr\vert \Biggr] \leq\mathbb{E} \Bigl[ \sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^m\big|K^{m+}_T \Bigr]\xrightarrow[n,m\to+\infty]{}0, \] then by \[ \int_0^T\bigl(Y_{t}^{n}-L_{t} \bigr)dK^{m+}_t=\int_0^T \bigl(Y_{t}^{n}-Y_{t}^m \bigr)dK^{m+}_t+\int_0^T \bigl(Y_{t}^{m}-L_{t}\bigr)dK^{m+}_t \] we have \begin{equation} \label{c8} \limsup_{n\rightarrow+\infty}\int_0^T \bigl(Y^n_{t}-L_t\bigr)d\mathcal {K}^{n+}_t\leq0\quad\mathbb{P}\mbox{-a.s.} \end{equation} Combining (\ref{c7}) and (\ref{c8}), we get $\int_0^T(Y_{t}-L_t)dK^{+}_t\leq0\ \mathbb{P}\mbox{-a.s.}$ Noting that $Y\geq L$, we conclude that $\int_0^T(Y_{t}-L_t)dK^{+}_t=0$. By a similar argument, we can prove $\int_0^T(U_{t}-Y_t)dK^{-}_t=0$.
Finally, using the fixed point theorem, we construct a strict contraction mapping $\varphi$ on $\mathfrak{B}^2$ and conclude that $(Y_t,Z_t,K^+_t,K^-_t)$ is the unique solution to the DRBSDE (\ref{a}) associated with the data $(\xi,f,L,U)$. \end{proof} \section{Comparison theorem}\label{s4} In this section we prove a comparison theorem for the DRBSDE under stochastic Lipschitz assumptions on the generators. \begin{thm}\label{tcc} Let $(Y^1,Z^1,K^{1+},K^{1-})$ and $(Y^2,Z^2,K^{2+},K^{2-})$ be respectively the solutions to the DRBSDE with data $(\xi ^1,f^1,L^1,U^1)$ and $(\xi^2,f^2,L^2,U^2)$. Assume in addition the following: \begin{itemize} \item$\xi^1\leq\xi^2$ a.s. \item$f^1(t,Y^2,Z^2)\leq f^2(t,Y^2,Z^2)$ \quad$\forall t\in[0,T]$ a.s. \item$L^1_{t}\leq L^2_{t}$ and $U^1_{t}\leq U^2_{t}$\quad$\forall t\in[0,T]$ a.s. \end{itemize} Then \[ \forall t\leq T,\qquad Y^1_{t}\leq Y^2_{t} \quad\mathit{a.s.} \] \end{thm} \begin{proof} Let $\bar{\Re}=\Re^1-\Re^2$ for $\Re=Y,Z,K^+,K^-,\xi$ and \begin{itemize} \item$\zeta_t=\mathbh{1}_{\{\bar{Y}_{t}\neq0\}}\displaystyle\frac {f^1(t,Y^1_{t},Z^1_{t})-f^1(t,Y^2_{t},Z^1_{t})}{\bar{Y}_{t}}$; \item$\eta_t=\mathbh{1}_{\{\bar{Z}_{t}\neq0\}}\displaystyle\frac {f^1(t,Y^2_{t},Z^1_{t})-f^1(t,Y^2_{t},Z^2_{t})}{\bar{Z}_{t}}$; \item$\delta_t=f^1(t,Y^2_{t},Z^2_{t})-f^2(t,Y^2_{t},Z^2_{t})$. \end{itemize} By the Meyer--It\^{o} formula (Theorem 66, p. 210 in \cite{Pro}), there exists a continuous nondecreasing process $(\mathcal{A}_t)_{t\leq T}$ such that \begin{align*} \big|\bar{Y}_{t}^{+}\big|^{2} &=2\int _{t}^{T}\bar{Y}_{s}^{+} ( \zeta_s\bar{Y}_{s}+\eta_s\bar {Z}_{s}+\delta_s )ds -2\int_{t}^{T} \bar{Y}_{s}^{+}\bar{Z}_{s}dB_{s} \\ &\quad+2\int_{t}^{T}\bar{Y}_{s}^{+}d \bar{K}^+_{s}-2\int_{t}^{T}\bar {Y}_{s}^{+}d\bar{K}^-_{s}-(\mathcal{A}_T- \mathcal{A}_t). \end{align*} Suppose in addition that \[ \mathbb{E}\int_0^T|\zeta_t|dt<+\infty \quad\mbox{ and }\quad \mathbb{E}\int_0^T| \eta_t|^2dt<+\infty.
\] Let $\{\varGamma_{t,s}, 0\leq t\leq s\leq T\}$ be the process defined as \[ \varGamma_{t,s}=\exp \Biggl\{ \int_{t}^{s} \biggl(\zeta_{u}-\frac{1}{2}|\eta _{u}|^{2} \biggr)du+\int_{t}^{s}\eta_{u}dB_{u} \Biggr\}>0, \] which solves the linear stochastic differential equation \[ \varGamma_{t,s}=1+ \int_{t}^{s} \zeta_{u}\varGamma_{t,u}du+\int_{t}^{s} \eta _{u}\varGamma_{t,u}dB_{u}. \] Applying integration by parts and taking expectations yields \begin{align*} &\mathbb{E} \bigl[e^{\beta A(t)}\big|\bar{Y}_{t}^{+}\big|^{2} \bigr] +\beta\mathbb{E}\int_{0}^{T}e^{\beta A(s)} \varGamma_{t,s}a^2(s)\big|\bar {Y}_{s}^{+}\big|^2ds \\ &\quad\leq\mathbb{E} \Biggl[\int_{t}^{T}e^{\beta A(s)} \varGamma_{t,s}\zeta _s{\big|\bar{Y}_{s}^+\big|}^2 ds \Biggr] +2\mathbb{E} \Biggl[\int_{t}^{T}e^{\beta A(s)} \varGamma_{t,s}\delta_s\bar {Y}_{s}^{+}ds \Biggr] \\ &\qquad+2\mathbb{E}\int_t^Te^{\beta A(s)} \varGamma_{t,s}\bar{Y}_{s}^{+}dK^+_s -2\mathbb{E}\int_t^Te^{\beta A(s)} \varGamma_{t,s}\bar{Y}_{s}^{+}dK^-_s. \end{align*} Remark that \[ \bar{Y}_{s}^{+}d\bar{K}^+_s= \bigl(L^1_s-Y^2_s\bigr)\mathbh {1}_{Y^1_s>Y^2_s}dK^{1+}_s-\bigl(Y^1_s-L^2_s \bigr)\mathbh {1}_{Y^1_s>Y^2_s}dK^{2+}_s\leq0 \] and \[ \bar{Y}_{s}^{+}d\bar{K}^-_s= \bigl(Y^1_s-U^2_s\bigr)\mathbh {1}_{Y^1_s>Y^2_s}dK^{2-}_s-\bigl(U^1_s-Y^2_s \bigr)\mathbh {1}_{Y^1_s>Y^2_s}dK^{1-}_s\leq0. \] Since $\delta_s\leq0$ and $|\zeta_s|\leq a^2(s)$, one can derive that \[ \mathbb{E} \bigl[e^{\beta A(t)}\big|\bar{Y}_{t}^{+}\big|^{2} \bigr]\leq0. \] It follows that $\bar{Y}_{t}^{+}=0$, i.e., $Y^1_{t}\leq Y^2_{t}$ for all $t\leq T$ a.s. \end{proof} \begin{remark}\label{Rmqcomp} \mbox{} \begin{itemize} \item If $U^i=+\infty$ for $i=1,2$, then $dK^{i-}=0$ and the comparison holds also for the reflected BSDE (\ref{RBSDE}). \item If $U^i=+\infty$ and $L^i=-\infty$ for $i=1,2$, then $dK^{i\pm }=0$ and the comparison holds also for the BSDE (\ref{BSDE}).
\end{itemize} \end{remark} \begin{appendix} \section{Appendix}\label{apdx} In this section, we study a special case of the reflected BSDE when the generator depends only on $y$. We consider the following reflected BSDE \begin{eqnarray} \label{r} && \displaystyle \left\{ \begin{array}{@{}ll} Y_{t}=\xi+\displaystyle\int_{t}^{T} f(s,Y_{s})ds+K_{T}-K_{t}-\displaystyle\int_{t}^{T}Z_{s}dB_{s} & \hbox{} \\ Y_{t}\geq L_{t} \ \forall t\leq T \ \mbox{ and }\ \displaystyle\int_{0}^{T}(Y_{t}-L_{t})dK_{t}=0 & \hbox{} \end{array} \right. \end{eqnarray} where $(\xi,f,L)$ satisfies the following assumptions: \begin{itemize} \item$\xi\in\mathcal{S}^2(\beta,a)$; \item$f$ is Lipschitz, i.e., there exists a positive constant $\mu$ such that $\forall(t,y,y')\in[0,T]\times\mathbb{R}\times\mathbb{R}$ \[ \big|f(t,y)-f\bigl(t,y'\bigr)\big|\leq\mu\big|y-y'\big|; \] \item$\displaystyle\frac{f(t,0)}{a}\in\mathcal{H}^2(\beta,a)$; \item$\mathbb{E} [\sup\limits_{0\leq t\leq T}e^{2\beta A(t)}|L_t^+|^2 ]<+\infty$. \end{itemize} As in \cite{EKPPQ}, we prove the existence and uniqueness of a solution to (\ref{r}) by means of the penalization method. Indeed, for each $n\in \mathbb{N}$, we consider the following BSDE: \begin{equation} \label{rr} Y_{t}^{n}=\xi+\int_{t}^{T} f\bigl(s,Y_{s}^{n}\bigr)ds+n\int_{t}^{T} \bigl(Y^n_s-L_s\bigr)^-ds-\int _{t}^{T}Z_{s}^{n}dB_{s}. \end{equation} We denote $K_{t}^{n}:=n\int_{0}^{t}(Y_{s}^{n}-L_{s})^{-}ds$ and $f^n(t,y)=f(t,y)+n(y-L_t)^-$. Remark that $f^n$ is Lipschitz and \begin{align*} \mathbb{E}|\xi|^{2}+\mathbb{E}\int_0^T \bigl\vert f^n(t,0) \bigr\vert ^{2}dt &\leq\mathbb{E} \bigl[e^{\beta A(T)}|\xi|^{2} \bigr] +\frac{2}{\beta}\mathbb{E} \Biggl[\int_0^Te^{\beta A(t)} \biggl\vert \frac {f(t,0)}{a(t)} \biggr\vert ^{2}dt \Biggr]\\ &\quad+2n^2T\mathbb{E} \Bigl[\sup_{0\leq t\leq T}e^{2\beta A(t)}\big|L_t^+\big|^2 \Bigr]. \end{align*} From \cite{PP}, there exists a unique process $(Y^n,Z^n)$ solving the BSDE (\ref{rr}).
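To see concretely how the penalty term $n(Y^n_s-L_s)^-$ pushes the solution above the barrier, one can look at the deterministic special case $f\equiv0$, $Z\equiv0$, where (\ref{rr}) collapses to a backward ODE. The sketch below is only an illustration of the penalization mechanism, not part of the proof; the barrier $L_t=1-t$, the terminal value $\xi=0$, and the implicit Euler discretization are ad hoc choices. As $n$ grows, $Y^n_0$ increases monotonically towards $\max(\xi,\sup_{t}L_t)=1$, the value of the reflected solution, and $Y^n$ undershoots the barrier by only $O(1/n)$.

```python
import numpy as np

def penalized_backward(xi, L, n_pen, T=1.0, steps=2000):
    """Implicit Euler for the backward ODE dY = -n_pen (Y - L)^- dt, Y_T = xi,
    the deterministic analogue of the penalized BSDE with f = 0 and Z = 0."""
    dt = T / steps
    t = np.linspace(0.0, T, steps + 1)
    Y = np.empty(steps + 1)
    Y[-1] = xi
    for k in range(steps - 1, -1, -1):
        # implicit step: Y_k = Y_{k+1} + n_pen*dt*(Y_k - L_k)^-, solved by cases
        y_free = Y[k + 1]
        if y_free >= L[k]:
            Y[k] = y_free                                   # penalty inactive
        else:
            Y[k] = (y_free + n_pen * dt * L[k]) / (1.0 + n_pen * dt)
    return t, Y

T, steps = 1.0, 2000
t = np.linspace(0.0, T, steps + 1)
L = 1.0 - t        # decreasing barrier with L_T = 0 (illustrative choice)
xi = 0.0           # terminal value sits on the barrier

for n_pen in (10, 100, 1000):
    _, Y = penalized_backward(xi, L, n_pen, T, steps)
    print(n_pen, Y[0])   # Y^n_0 increases towards 1 as n grows
```

For this barrier the continuous penalized solution satisfies $Y^n_t \approx L_t - 1/n$ in the active region, so $Y^n_0 \approx 1-1/n$, matching the monotone convergence $Y^n\nearrow Y$ used above.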
The sequence $(Y^n,Z^n,K^n)_n$ satisfies the uniform estimate \begin{align*} &\mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}\big|^{2} +\mathbb{E} \Biggl[\int_{0}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}\big|^{2}ds +\mathbb{E}\int_{0}^{T}e^{\beta A(s)}\big|Z_{s}^{n}\big|^{2}ds \Biggr] \\ &\quad\leq C\mathbb{E} \Biggl[e^{\beta A(T)}|\xi|^2+\int _{0}^{T}e^{\beta A(s)}\frac{|f(s,0)|^2}{a^2(s)}ds +\sup _{0\leq t\leq T}e^{2\beta A(s)}\big|L_{s}^+\big|^2 \Biggr], \end{align*} where $C$ is a positive constant depending only on $\beta$, $\mu$ and $\epsilon$. Now we establish the convergence of the sequence $(Y^n,Z^n,K^n)$ to the solution to (\ref{r}). Obviously $f^n(t,y)\leq f^{n+1}(t,y)$ for each $n\in\mathbb{N}$, and it follows from Remark \ref{Rmqcomp} that $Y^n\leq Y^{n+1}$. Hence there exists a process $Y$ such that $Y^n_t\nearrow Y_t$ for all $0\leq t\leq T$ a.s. From the a priori estimates and Fatou's lemma, we have \begin{equation*} \mathbb{E} \Bigl[\sup_{0\leq t\leq T}e^{\beta A(t)}|Y_{t}|^{2} \Bigr]\leq \liminf_{n\rightarrow+\infty}\mathbb{E} \Bigl[\sup _{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}\big|^{2} \Bigr] \leq C. \end{equation*} Then, by dominated convergence, one can derive that \[ \mathbb{E} \Biggl[\int_{0}^{T}e^{\beta A(s)}\big|Y_{s}^{n}-Y_{s}\big|^2ds \Biggr]\xrightarrow[n\to+\infty]{}0. \] On the other hand, for all $n\geq p\geq0$ and $t\leq T$, we have \begin{align*} &\mathbb{E}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^p\big|^{2} +\biggl(\beta-\frac{2\mu}{\epsilon}\biggr)\mathbb{E}\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}-Y_{s}^p\big|^2ds \\ &\quad+\mathbb{E}\int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}-Z_{s}^p\big|^2ds \nonumber \\ &\qquad\leq2\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{n}-L_{s}\bigr)^-dK_{s}^{p} +\mathbb{E}\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{p}-L_{s}\bigr)^-dK_{s}^{n}.
\end{align*} Similarly to Lemma \ref{l3}, we can easily prove that \begin{equation} \label{e11} \mathbb{E}\sup_{0\leq t\leq T}e^{\beta A(t)}\big| \bigl(Y_{t}^{n}-L_{t}\bigr)^-\big|^{2} \xrightarrow[n\to+\infty]{}0. \end{equation} By the above result and the a priori estimates, one can derive that \[ \mathbb{E} \Biggl[\int_{t}^{T}e^{\beta A(s)} \bigl(Y_{s}^{n}-L_{s}\bigr)^-dK_{s}^{p} +\int_{t}^{T}e^{\beta A(s)}\bigl(Y_{s}^{p}-L_{s} \bigr)^-dK_{s}^{n} \Biggr]\xrightarrow[n,p\to+\infty]{}0. \] Thus \[ \mathbb{E} \Biggl[\int_{t}^{T}e^{\beta A(s)}a^2(s)\big|Y_{s}^{n}-Y_{s}^p\big|^2ds+ \int_{t}^{T}e^{\beta A(s)}\big|Z_{s}^{n}-Z_{s}^p\big|^2ds \Biggr]\xrightarrow[n,p\to+\infty]{}0. \] Moreover, by the Burkholder--Davis--Gundy inequality, one can derive that \[ \mathbb{E} \Bigl[\sup\limits _{0\leq t\leq T}e^{\beta A(t)}\big|Y_{t}^{n}-Y_{t}^p\big|^{2} \Bigr]\xrightarrow[n,p\to+\infty]{}0. \] Further, from the equation (\ref{rr}), we also have \[ \mathbb{E} \Bigl[\sup\limits _{0\leq t\leq T}\big|K_{t}^{n}-K_{t}^p\big|^{2} \Bigr]\xrightarrow[n,p\to+\infty]{}0. \] Consequently, there exists a pair of progressively measurable processes $(Z,K)$ such that \[ \mathbb{E}\int_0^Te^{\beta A(t)}\big|Z_{t}^{n}-Z_{t}\big|^{2}dt +\mathbb{E}\sup_{0\leq t\leq T}\big|K_{t}^{n}-K_{t}\big|^{2} \xrightarrow[n\to +\infty]{}0. \] Obviously the triplet $(Y,Z,K)$ satisfies (\ref{r}). It remains to check the Skorokhod condition. We have just seen that the sequence $(Y^n,K^n)$ tends to $(Y,K)$ uniformly in $t$ in probability. Then the measure $dK^n$ tends to $dK$ weakly in probability, hence \[ \int_0^T\bigl(Y^n_t-L_t \bigr)dK^n_t\xrightarrow[n\to+\infty]{\mathbb{P}}\int _0^T(Y_t-L_t)dK_t. \] We deduce from the equation (\ref{e11}) that $\int_0^T(Y^n_t-L_t)dK^n_t\leq0$, $n\in\mathbb{N}$, which implies that $\int_0^T(Y_t-L_t)dK_t\leq0$. On the other hand, since $Y_t\geq L_t$ then $\int_0^T(Y_t-L_t)dK_t\geq0$. Hence $\int_0^T(Y_t-L_t)dK_t=0$.
\begin{remark}[Special cases] The coefficients $g^n(s,y)=g(s)-n(y-U_s)^+$ and $\tilde {g}^n(s,y)=g(s)-n(y-U_s)$ are Lipschitz and satisfy \begin{align*} &\mathbb{E}\int_0^Te^{\beta A(s)} \biggl \vert \frac{g^n(s,0)}{a(s)} \biggr\vert ^{2}ds +\mathbb{E}\int _0^Te^{\beta A(s)} \biggl\vert \frac{\tilde {g}^n(s,0)}{a(s)} \biggr\vert ^{2}ds \\ &\quad\leq 4\mathbb{E}\int_0^Te^{\beta A(s)} \biggl\vert \frac{g(s)}{a(s)} \biggr\vert ^{2}ds + \frac{4n^2T}{\epsilon}\mathbb{E} \Bigl[\sup_{0\leq t\leq T}e^{2\beta A(t)}\big|U_t^-\big|^2 \Bigr]<+\infty. \end{align*} Then the Reflected BSDEs (\ref{r1}) and (\ref{d2}) each have a unique solution. \end{remark} \begin{thm}[Comparison theorem]\label{tc} Let $(Y^1,Z^1,K^1)$ and $(Y^2,Z^2,K^2)$ be solutions to the Reflected BSDE (\ref{r}) with data $(\xi^1,f^1,L)$ and $(\xi^2,f^2,L)$ respectively. If we have \begin{itemize} \item$f^1(t,y)\leq f^2(t,y)$ a.s. $\forall(t,y)$, \item$\xi^1\leq\xi^2$ a.s., \end{itemize} then $Y_t^1\leq Y_t^2$ and $K_t^1\geq K_t^2$ $\forall t\in[0,T]$ a.s. \end{thm} \begin{proof} We consider the penalized equations relative to the Reflected BSDE with data $(\xi^i,f^i,L)$ for $i=1,2$ and $n\in\mathbb{N}$, as follows \begin{equation*} Y_{t}^{n,i}=\xi^i +\int_{t}^{T} f^i\bigl(s,Y_{s}^{n,i}\bigr)ds+n\int _{t}^{T}\bigl(Y_{s}^{n,i}-L_{s} \bigr)^{-}ds-\int_{t}^{T}Z_{s}^{n,i}dB_{s}. \end{equation*} Let $f_n^i(t,y):=f^i(t,y)+n(y-L_{t})^{-}$. So, by the comparison theorem, we have $Y_{t}^{n,1}\leq Y_{t}^{n,2}$ for $t\leq T$. Since $K_{t}^{n,i}=n\int_{0}^{t}(Y_{s}^{n,i}-L_{s})^{-}ds$ for $i=1,2$, we deduce that $K_{t}^{n,1}\geq K_{t}^{n,2}$ for $t\leq T$. But $Y_{t}^{n,i}\nearrow Y_{t}^{i}$ and $K_{t}^{n,i}\longrightarrow K_{t}^{i}$ as $n\longrightarrow+\infty$ for $i=1,2$, and it follows that $Y_t^1\leq Y_t^2$ and $K_t^1\geq K_t^2$ for $t\leq T$. \end{proof} \end{appendix} \end{document}
\begin{document} \begin{frontmatter} \title{Numerical Study of Quantized Vortex Interaction in Complex Ginzburg--Landau Equation on Bounded Domains} \author[1]{Wei Jiang} \ead{[email protected]} \address[1]{Beijing Computational Science Research Center, Beijing 100084, P. R. China} \author[2]{Qinglin Tang} \ead{[email protected]} \address[2]{Department of Mathematics and Center for Computational Science and Engineering, National University of Singapore, Singapore 119076, Singapore} \begin{abstract} In this paper, we numerically study the dynamics and interaction of quantized vortices in the two-dimensional complex Ginzburg-Landau equation (CGLE) with a dimensionless parameter $\varepsilon>0$ on bounded domains under either Dirichlet or homogeneous Neumann boundary condition. We begin with a review of the reduced dynamical laws (RDLs) for the time evolution of quantized vortex centers in the CGLE and show how to solve these nonlinear ordinary differential equations numerically. Then we present efficient and accurate numerical methods for solving the CGLE on either a rectangular or a disk domain under either Dirichlet or homogeneous Neumann boundary condition. Based on these efficient and accurate numerical methods for the CGLE and the RDLs, we explore the rich and complicated quantized vortex dynamics and interaction of the CGLE with different $\varepsilon$ and under different initial physical setups, including single vortex, vortex pair, vortex dipole and vortex lattice, compare them with those obtained from the corresponding RDLs, and identify the cases where the RDLs agree qualitatively and/or quantitatively as well as fail to agree with those from the CGLE on vortex interaction. Finally, we also obtain numerically different patterns of the steady states for quantized vortex lattices in the CGLE dynamics on bounded domains. \end{abstract} \begin{keyword} Complex Ginzburg-Landau equation, Quantized vortex dynamics, Bounded domain, Reduced dynamical laws.
\end{keyword} \end{frontmatter} \section{Introduction} \label{sec: intro} Vortices are waves that possess phase singularities (topological defects) and rotational flows around the singular points. They arise in many physical areas of different scale and nature, ranging from liquid crystals and superfluidity to non-equilibrium patterns and cosmic strings \cite{DKT, PS}. Quantized vortices in two dimensions are particle-like vortices whose centers are zeros of the order parameter; they possess a localized phase singularity whose topological charge (also called the winding number or index) is quantized. They have been widely observed in many different physical systems, such as liquid helium, type-II superconductors, atomic gases and nonlinear optics \cite{A, BDZ, D, FHL1, JN}. Quantized vortices are key signatures of superconductivity and superfluidity, and their study has been one of the most important and fundamental problems since Lars Onsager predicted them in 1947 in connection with superfluid helium. In this paper, we consider the vortex dynamics and interactions in the two-dimensional complex Ginzburg--Landau equation (CGLE), which is one of the most studied nonlinear equations in the physics community \cite{AK}. It has attracted ever-increasing attention because it can describe various phenomena ranging from nonlinear waves to second-order phase transitions, from superconductivity, superfluidity and Bose-Einstein condensation to liquid crystals and strings in field theory \cite{AK, FPR, GP, RB}.
The specific form of the CGLE we study here reads as: \begin{equation} \label{cgle} (\lambda_\varepsilon+i\beta ) \partial_{t}\psi^{\varepsilon} ({\bf x},t)=\Delta \psi^{\varepsilon}+\frac{1}{\varepsilon^{2}} (1-|\psi^{\varepsilon}|^{2})\psi^{\varepsilon},\qquad {\bf x}\in\mathcal{D},\quad t>0, \end{equation} with initial condition \begin{equation}\label{ini_con} \psi^{\varepsilon}({\bf x},0) = \psi^{\varepsilon}_{0}({\bf x}), \qquad {\bf x}\in\overline{\mathcal D }, \end{equation} and under either Dirichlet boundary condition (BC) \begin{equation}\label{dir} \psi^{\varepsilon}({\bf x},t)=g({\bf x})=e^{i\omega({\bf x})},\qquad {\bf x}\in\partial\mathcal{D}, \quad t\ge0, \end{equation} or homogeneous Neumann BC \begin{equation}\label{neu-bc} \frac{\partial \psi^{\varepsilon}({\bf x},t)}{\partial \nu} =0,\qquad {\bf x}\in\partial\mathcal{D}, \quad t\ge0, \end{equation} where $\mathcal{D}\subset \mathbb{R}^{2}$ is a bounded and simply connected domain throughout the paper, $t$ is time, ${\bf x}=(x,y)\in \mathbb{R}^2$ is the Cartesian coordinate vector, $\psi^\varepsilon:=\psi^\varepsilon({\bf x},t)$ is a complex-valued wave function (order parameter), $\omega$ is a given real-valued function, $\psi_0^\varepsilon$ and $g$ are given smooth and complex-valued functions satisfying the compatibility condition $\psi_0^\varepsilon({\bf x})=g({\bf x})$ for ${\bf x}\in\partial\mathcal{D}$, $\nu=(\nu_1,\nu_2)$ and $\nu_\perp=(-\nu_2,\nu_1)\in \mathbb{R}^2$ satisfying $|\nu|=\sqrt{\nu_1^2+\nu_2^2}=1$ are the outward normal and tangent vectors along $\partial \mathcal{D}$, respectively, $i=\sqrt{-1}$ is the unit imaginary number, $0<\varepsilon<1$ is a given dimensionless constant, and $\lambda_\varepsilon$, $\beta$ are two positive constants. Actually, the CGLE covers many different equations arising in various physical fields. For example, when $\lambda_\varepsilon\neq0$, $\beta=0$, it reduces to the Ginzburg-Landau equation (GLE) for modelling superconductivity.
When $\lambda_\varepsilon=0$, $\beta=1$, the CGLE collapses to the nonlinear Schr\"{o}dinger equation (NLSE) for modelling Bose-Einstein condensation or superfluidity. Denote the Ginzburg-Landau (GL) functional (`energy') as \cite{CJ, JS, LX} \begin{equation} \label{glf} \mathcal{E}^\varepsilon(t):=\int_{\mathcal{D}} \left[\frac{1}{2}|\nabla\psi^{\varepsilon}|^2+ \frac{1}{4\varepsilon^2}\left(1-|\psi^{ \varepsilon}|^2\right)^2\right]d{\bf x} =\mathcal{E}_{\rm kin}^\varepsilon(t)+\mathcal{E}_{\rm int}^\varepsilon(t), \quad t\ge0, \end{equation} where the kinetic and interaction energies are defined as \begin{equation*} \mathcal{E}_{\rm kin}^\varepsilon(t):=\frac{1}{2}\int_{\mathcal{D}} |\nabla\psi^{\varepsilon}|^2d\textbf{x},\quad \mathcal{E}_{\rm int}^\varepsilon(t):=\frac{1}{4\varepsilon^2}\int_{\mathcal{D}} \left(1-|\psi^{\varepsilon}|^2\right)^2d\textbf{x},\quad t\ge0, \end{equation*} respectively. Then, it is easy to verify that the CGLE and GLE dissipate the energy, while the NLSE conserves the energy for all time. During the last several decades, the construction and analysis of vortex solutions, as well as studies of quantized vortex dynamics and interaction for the CGLE (\ref{cgle}) under different scalings, have been carried out extensively in the literature. For the whole-space case, i.e., $\mathcal{D}=\mathbb{R}^{2}$, Neu \cite{JN} studied dynamics and interaction of well-separated quantized vortices for the GLE with $\lambda_\varepsilon=1$ and the NLSE under the scaling $\varepsilon=1$. He found numerically that quantized vortices with winding number $m=\pm1$ are dynamically stable, while those with $|m|>1$ are dynamically unstable, in the GLE dynamics. Moreover, he found that vortices behave like point vortices in an ideal fluid. Using asymptotic analysis, he derived the reduced dynamical laws (RDLs), which are sets of ordinary differential equations (ODEs) governing the dynamics of the vortex centers to leading order.
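As a simple numerical companion to the GL functional (\ref{glf}), the sketch below evaluates $\mathcal{E}_{\rm kin}^\varepsilon$ and $\mathcal{E}_{\rm int}^\varepsilon$ by central differences on a uniform grid; the test profile $\psi\approx\tanh(r/\varepsilon)e^{i\theta}$ for a single $+1$ vortex, the grid and the parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gl_energy(psi, dx, dy, eps):
    """Kinetic and interaction parts of the Ginzburg-Landau functional
    E = \int [ |grad psi|^2/2 + (1-|psi|^2)^2/(4 eps^2) ] dx,
    approximated by central differences on interior nodes only."""
    px = (psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * dx)
    py = (psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * dy)
    kin = 0.5 * np.sum(np.abs(px)**2 + np.abs(py)**2) * dx * dy
    inter = np.sum((1 - np.abs(psi[1:-1, 1:-1])**2)**2) / (4 * eps**2) * dx * dy
    return kin, inter

# single +1 vortex ansatz on [-1,1]^2 (ad hoc core profile tanh(r/eps))
eps = 0.1
x = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y)
psi = np.tanh(r / eps) * np.exp(1j * np.arctan2(Y, X))
kin, inter = gl_energy(psi, x[1] - x[0], x[1] - x[0], eps)
print(kin, inter)
```

For a vortex the kinetic part dominates, growing like $\pi\ln(1/\varepsilon)$, while the interaction part stays $O(1)$ in the core; both vanish identically for the constant state $\psi\equiv1$.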
Recently, Neu's results were extended by Bethuel et al. to investigate the asymptotic behaviour of vortices as $\varepsilon\to 0$ in the GLE dynamics under the accelerating time scale $\lambda_\varepsilon=\frac{1}{\ln\frac{1}{\varepsilon}}$ \cite{BOS, BOS1, BOS2} and in the NLSE dynamics \cite{BJS}. The corresponding RDLs that govern the motion of the limiting vortices have also been derived. Inspired by Neu's work, many other papers have been dedicated to the study of the vortex states and dynamics for the GLE and NLSE with $0<\varepsilon<1$ on a bounded domain under different BCs. For the GLE case, Lin \cite{FHL1,FHL2,FHL3} considered the dynamics of vortices in the asymptotic limit $\varepsilon\to 0$ under various scales of $\lambda_\varepsilon$ and with either Dirichlet BC (\ref{dir}) or homogeneous Neumann BC (\ref{neu-bc}). He derived the RDLs that govern the motion of these vortices and rigorously proved that vortices move with velocities of order $|\ln\varepsilon|^{-1}$ if $\lambda_\varepsilon=1$. Similar studies have also been conducted by E \cite{EW}, Jerrard et al. \cite{RLJHMH}, Jimbo et al. \cite{SJYM2, SJYM3} and Sandier et al. \cite{ESSS}. Unfortunately, all those RDLs are only valid up to the first time that the vortices collide or exit the domain and cannot describe the motion of multiple-degree vortices. Recently, Serfaty \cite{SE} extended the RDLs to the dynamics of the vortices after collisions. For the NLSE case, Mironescu \cite{M} and Lin \cite{L} investigated the stability of the vortices in the NLSE with (\ref{dir}). Subsequently, Lin and Xin \cite{LX} studied the vortex dynamics on a bounded domain with either Dirichlet or Neumann BC, which was further investigated by Jerrard and Spirn \cite{JS}. In addition, Colliander and Jerrard \cite{CJ, CJ1} studied the vortex structures and dynamics on a torus or under periodic BC.
In these studies, RDLs were put forth to describe the asymptotic behaviour of the vortices as $\varepsilon\to 0$, which indicate that to the leading order the vortices move according to the Kirchhoff law in the bounded-domain case. However, these RDLs cannot capture the radiation and/or sound propagation created by highly co-rotating or overlapping vortices. In fact, it remains a fascinating and fundamental open problem to understand the vortex-sound interaction \cite{NBCT} and how the sound waves modify the motion of vortices \cite{FPR}. For the CGLE, under the scaling $\lambda_\varepsilon\sim O(\frac{1}{\ln(1/\varepsilon)})$, Miot \cite{MIOT} studied the asymptotic dynamics of vortices as $\varepsilon\to 0$ in the whole-plane case, and Kurzke et al. \cite{KMMS} investigated it in the bounded-domain case; the corresponding RDLs were derived to govern the motion of the limiting vortices in each setting. Their results showed that the RDL for the CGLE is actually a hybrid of the RDL for the GLE and that for the NLSE. More recently, Serfaty and Tice \cite{SI} studied the vortex dynamics in a more complicated CGLE which involves an electromagnetic field and pinning effects. On the numerical side, finite element methods were proposed to investigate numerical solutions of the GLE and related Ginzburg-Landau models for modelling superconductivity \cite{QDMGJP, QDu0, KM, Aftalion2001, CD}. Recently, by proposing efficient and accurate numerical methods for solving the CGLE (\ref{cgle}) in the whole space, Zhang et al. \cite{YZWBQD1,YZWBQD2} compared the dynamics of quantized vortices from the RDLs obtained by Neu with the direct numerical simulation results from the GLE and NLSE under different parameters and initial setups.
Very recently, the second author designed some efficient and accurate numerical methods for studying vortex dynamics and interactions in the GLE and/or NLSE on bounded domains with either Dirichlet or Neumann BCs \cite{BT, BT1}. These numerical methods can be extended and applied to study the rich and complicated phenomena related to vortex dynamics and interaction in the CGLE (\ref{cgle}) with either Dirichlet BC (\ref{dir}) or homogeneous Neumann BC (\ref{neu-bc}) on bounded domains. The main purposes of this paper are: (i) to present efficient and accurate numerical methods for solving the RDLs and the CGLE (\ref{cgle}) on bounded domains under different BCs; (ii) to understand numerically how the boundary condition and the geometry of the domain affect vortex dynamics and interaction; (iii) to study numerically vortex interaction in the CGLE dynamics and compare it with that from the RDLs under different initial setups and parameter regimes; (iv) to identify cases where the reduced dynamical laws agree qualitatively and/or quantitatively as well as fail to agree with those from the CGLE on vortex interaction. The rest of the paper is organized as follows. In section 2, we briefly review the reduced dynamical laws of vortex interaction under the CGLE (\ref{cgle}) with either Dirichlet or homogeneous Neumann BC and present numerical methods to discretize them. In section 3, efficient and accurate numerical methods are briefly outlined for solving the CGLE on bounded domains with different BCs. In section 4 and section 5, ample numerical results are reported for studying vortex dynamics and interaction of the CGLE under the Dirichlet BC and the homogeneous Neumann BC, respectively. Finally, some conclusions are drawn in section 6.
\section{The reduced dynamical laws and their discretization} \label{sec: rdl} The CGLE can be thought of as a hybrid equation between the GLE and NLSE, and it has been proved that vortices in the GLE dynamics move with a velocity of order $|\ln\varepsilon|^{-1}$ if $\lambda_\varepsilon=1$. Therefore, to obtain nontrivial vortex dynamics, hereafter in this paper, we always choose \begin{equation} \label{lam_vep} \lambda_\varepsilon=\frac{\alpha}{\ln(1/\varepsilon)},\qquad\qquad 0<\varepsilon<1, \end{equation} where $\alpha$ is a positive number. In this section, we review the RDLs governing the dynamics of vortex centers in the CGLE (\ref{cgle}) with either Dirichlet or homogeneous Neumann BC. To simplify our discussion, for $j=1,\cdots,M$, hereafter we let ${\bf x}^0_j(t)=(x^0_j,y^0_j)$ and ${\bf x}^\varepsilon_j(t)=(x^\varepsilon_j(t),y^\varepsilon_j(t))$ be the locations of the $M$ distinct and isolated vortex centers in the initial data $\psi^{\varepsilon}_0$ (\ref{ini_con}) and in the solution of the CGLE (\ref{cgle}) with initial condition (\ref{ini_con}) at time $t\ge0$, respectively. By denoting \[X^0:=({\bf x}_1^0,{\bf x}_2^0,\ldots,{\bf x}_M^0), \qquad X^\varepsilon:=X^\varepsilon(t)=({\bf x}^\varepsilon_1(t), {\bf x}^\varepsilon_2(t),\ldots,{\bf x}^\varepsilon_M(t)), \quad t\ge0,\] then we have \cite{KMMS, CJ, FHL3}: \begin{theorem} \label{thm: rdl_gen} As $\varepsilon\rightarrow 0$, for $j=1,\cdots,M$, the vortex center ${\bf x}^\varepsilon_j(t)$ will converge to a point ${\bf x}_j(t)$ satisfying: \begin{align} &\label{eqn: rdl_gen} (\alpha I+\beta n_{j} J) \frac{d{\bf x}_{j}(t)}{dt}=-\nabla_{{\bf x}_{j}}W(X), \quad 0\leq t<T,\\ & {\bf x}_j(t=0)={\bf x}^{0}_j.
\end{align} \end{theorem} \noindent In equation (\ref{eqn: rdl_gen}), $T$ is the first time that either two vortices collide or a vortex exits the domain, $n_j=+1$ or $-1$ is the winding number of the $j$-th vortex, $X:=X(t)=({\bf x}_1(t), {\bf x}_2(t),\ldots,{\bf x}_M(t)),$ \[I=\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix},\qquad J=\begin{pmatrix} 0 & -1 \\ 1 & 0 \\ \end{pmatrix},\] are the $2\times2$ identity and symplectic matrices, respectively. Moreover, the function $W(X)$ is the so-called renormalized energy defined as: \begin{equation} \label{re_ener} W(X):= W_{\rm cen}(X) +W_{\rm bc}(X), \end{equation} where $W_{\rm cen}$ is the renormalized energy associated with the $M$ vortex centers, defined as \begin{equation} \label{re_ener_cen} W_{\rm cen}(X) = -\sum_{1\leq i\neq j\leq M}n_{i}n_{j}\ln|{\bf x}_{i}-{\bf x}_{j}|, \end{equation} and $W_{\rm bc}(X)$ is the renormalized energy involving the effect of the BC (\ref{dir}) and/or (\ref{neu-bc}), which takes different forms in different cases.
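For intuition, the RDL (\ref{eqn: rdl_gen}) can be integrated directly once $W$ is specified. The sketch below keeps only the inter-vortex part $W_{\rm cen}$, i.e., it drops the boundary term $W_{\rm bc}$ entirely (an illustrative simplification, not the full bounded-domain law), and evolves a pair of vortices by forward Euler with ad hoc parameters: like-signed vortices repel while a dipole attracts, and the $\beta n_j J$ term superimposes a rotation/translation.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])   # 2x2 symplectic matrix

def rdl_rhs(X, n, alpha, beta):
    """Velocity field of (alpha I + beta n_j J) dx_j/dt = -grad_{x_j} W_cen,
    where -grad_{x_j} W_cen = 2 n_j sum_{l != j} n_l (x_j - x_l)/|x_j - x_l|^2.
    The boundary term W_bc is dropped (assumption for illustration)."""
    V = np.zeros_like(X)
    for j in range(len(n)):
        F = np.zeros(2)
        for l in range(len(n)):
            if l != j:
                d = X[j] - X[l]
                F += 2.0 * n[j] * n[l] * d / np.dot(d, d)
        A = alpha * np.eye(2) + beta * n[j] * J
        V[j] = np.linalg.solve(A, F)
    return V

def evolve(X, n, alpha, beta, dt=1e-3, steps=2000):
    X = X.copy()
    for _ in range(steps):                # forward Euler (illustrative)
        X = X + dt * rdl_rhs(X, n, alpha, beta)
    return X

# like-signed pair: mutual repulsion plus rotation from the beta*n_j*J term
X0 = np.array([[-0.3, 0.0], [0.3, 0.0]])
n = [1, 1]
X1 = evolve(X0, n, alpha=1.0, beta=1.0)
print(np.linalg.norm(X1[0] - X1[1]))      # separation has grown from 0.6
```

With $W_{\rm cen}$ alone the like-signed separation obeys $d^2(t)=d^2(0)+\frac{4\alpha}{\alpha^2+\beta^2}\,2t$-type growth, while a dipole ($n=[1,-1]$) closes at the analogous rate, which is why the RDL is only valid up to the collision time $T$.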
\subsection{Under Dirichlet boundary condition}\label{sec: rdl_dir} For the CGLE (\ref{cgle}) with initial condition (\ref{ini_con}) under the Dirichlet BC (\ref{dir}), it has been derived formally and rigorously \cite{FHL1, KMMS, LX1, CJ, BBH, SE} that $W_{\rm bc}(X)=W_{\rm dbc}(X)$ in the renormalized energy $(\ref{re_ener})$ admits the form \begin{equation} \label{re_ener_bc_dir} W_{\rm dbc}(X) := -\sum_{j=1}^{M}n_{j}R({\bf x}_{j};X) +\int_{\partial\mathcal{D}}\left[R({\bf x};X)+ \sum_{j=1}^{M}n_{j}\ln|{\bf x}-{\bf x}_{j}|\right] \frac{\partial_{\nu_\perp} \omega({\bf x})}{2\pi} \;ds, \end{equation} where, for any fixed $X\in \mathcal{D}^M$, $R({\bf x};X)$ is a harmonic function in ${\bf x}$, i.e., \begin{equation}\label{harmR} \Delta R({\bf x};X) = 0,\qquad {\bf x}\in \mathcal{D}, \end{equation} satisfying the Neumann BC \begin{equation}\label{harmRC} \frac{\partial R({\bf x};X)}{\partial \nu} = \partial_{\nu_\perp} \omega({\bf x}) -\frac{\partial}{\partial \nu}\sum_{l=1}^{M}n_{l}\ln|{\bf x}-{\bf x}_{l}|, \qquad {\bf x}\in \partial\mathcal{D}. \end{equation} Notice that to calculate $\nabla_{{\bf x}_j} W(X)$ we need to calculate $\nabla_{{\bf x}_j} R$; since, for $j=1,\cdots,M$, ${\bf x}_j$ enters $R({\bf x};X)$ implicitly as a parameter, $\nabla_{{\bf x}_j} R$ is difficult to evaluate and thus the RDL (\ref{eqn: rdl_gen}) with (\ref{re_ener})--(\ref{re_ener_bc_dir}) is difficult to solve, even numerically. However, by using an identity in \cite{BBH} (see Eq. (51) on page 84), \[ \nabla_{{\bf x}_j}\left[W_{\rm cen}(X)+ W_{\rm dbc}(X)\right]=-2n_{j}\nabla_{{\bf x}}\left[R({\bf x};X)+\sum_{l=1\& l\ne j}^Mn_{l}\ln|{\bf x}-{\bf x}_{l}|\right]_{{\bf x}={\bf x}_{j}}, \] we have the following simplified equivalent form of (\ref{eqn: rdl_gen}).
\begin{lem} \label{lem: equi_dir_rdl1} For $1\le j\le M$ and $t>0$, system (\ref{eqn: rdl_gen}) can be simplified as \begin{equation} \label{eqn: reduced1} (\alpha I+\beta n_{j} J) \frac{d}{dt}{\bf x}_{j}(t)=2n_j\left[\nabla_{\bf x} R\left({\bf x};X\right)|_{{\bf x}={\bf x}_{j}(t)}+\sum_{l=1\& l\ne j}^Mn_{l}\frac{{\bf x}_{j}(t)-{\bf x}_{l}(t)}{|{\bf x}_{j}(t) -{\bf x}_{l}(t)|^2}\right]. \end{equation} \end{lem} \noindent Moreover, for any fixed $X\in \mathcal{D}^M$, by introducing the functions $H({\bf x};X)$ and $Q({\bf x};X)$, both harmonic in ${\bf x}$ and satisfying, respectively, the boundary conditions \cite{RLJHMH, LX}: \begin{align} &\label{harmRC_tan} \frac{\partial H({\bf x};X)}{\partial \nu_\perp} = \partial_{\nu_\perp} \omega({\bf x}) -\frac{\partial}{\partial \nu}\sum_{l=1}^{M}n_{l}\ln|{\bf x}-{\bf x}_{l}|, \qquad {\bf x}\in \partial\mathcal{D},\\ &\label{harmRC_dir} Q({\bf x};X)= \omega({\bf x})-\sum_{l=1}^{M}n_{l}\theta({\bf x}-{\bf x}_{l}), \qquad {\bf x}\in \partial\mathcal{D}, \end{align} with the function $\theta: \ {\mathbb R}^2 \to [0,2\pi)$ defined as \begin{equation}\label{theta} \cos(\theta({\bf x}))=\frac{x}{|{\bf x}|}, \qquad \sin(\theta({\bf x}))=\frac{y}{|{\bf x}|}, \qquad 0\ne {\bf x}=(x,y)\in {\mathbb R}^2, \end{equation} we have the following lemma on the equivalence of the RDL (\ref{eqn: reduced1}) \cite{BT, BT1}: \begin{lem} \label{lem: equi_dir_rdl2}
For any fixed $X\in \mathcal{D}^M$, we have the following identity \begin{equation}\label{equi_dir} J\nabla_{\bf x} Q\left({\bf x};X\right)=\nabla_{\bf x} R\left({\bf x};X\right)=J\nabla_{\bf x} H\left({\bf x};X\right), \qquad {\bf x}\in \mathcal{D}, \end{equation} which immediately implies the equivalence between system (\ref{eqn: reduced1}) and the following two systems: for $t>0$ \begin{align*} & (\alpha I+\beta n_{j} J) \frac{d}{dt}{\bf x}_{j}(t)=2n_j\left[J\nabla_{\bf x} H\left({\bf x};X\right)|_{{\bf x}= {\bf x}_{j}(t)}+\sum_{l=1\& l\ne j}^Mn_{l}\frac{{\bf x}_{j}(t)-{\bf x}_{l}(t)}{|{\bf x}_{j}(t) -{\bf x}_{l}(t)|^2}\right], \\ & (\alpha I+\beta n_{j} J) \frac{d}{dt}{\bf x}_{j}(t)=2n_j\left[J\nabla_{\bf x} Q\left({\bf x};X\right)|_{{\bf x}= {\bf x}_{j}(t)}+\sum_{l=1\& l\ne j}^Mn_{l}\frac{{\bf x}_{j}(t)-{\bf x}_{l}(t)}{|{\bf x}_{j}(t) -{\bf x}_{l}(t)|^2}\right]. \end{align*} \end{lem} \noindent {\bf Proof.} For any fixed $X\in \mathcal{D}^M$, since $Q$ is a harmonic function, there exists a function $\varphi_1({\bf x})$ such that $$J\nabla_{\bf x} Q\left({\bf x};X\right)=\nabla\varphi_1({\bf x}),\qquad {\bf x}\in \mathcal{D}.$$ Thus, $\varphi_1({\bf x})$ satisfies the Laplace equation \begin{equation}\label{proof_dir1} \Delta\varphi_1({\bf x})=\nabla\cdot(J\nabla_{\bf x} Q({\bf x};X))=\partial_{yx}Q({\bf x};X)-\partial_{xy}Q({\bf x};X)=0, \quad {\bf x}\in \mathcal{D}, \end{equation} with the following Neumann BC \begin{equation}\label{proof_dir2} \partial_{\nu}\varphi_1({\bf x})=(J\nabla_{\bf x} Q({\bf x};X))\cdot\nu= \nabla_{\bf x} Q({\bf x};X)\cdot\nu_\perp=\partial_{\nu_\perp}Q({\bf x};X), \quad{\bf x}\in\partial\mathcal{D}.
\end{equation} Noticing (\ref{harmRC_dir}), we obtain for ${\bf x}\in\partial\mathcal{D}, \begin{equation}\label{proof_dir3} \partial_{\nu}\varphi_1({\bf x})= \partial_{\nu_\perp} \omega({\bf x}) -\frac{\partial}{\partial \nu_\perp}\sum_{l=1}^{M}n_{l}\theta({\bf x}-{\bf x}_{l})= \partial_{\nu_\perp} \omega({\bf x}) -\frac{\partial}{\partial \nu}\sum_{l=1}^{M}n_{l}\ln|{\bf x}-{\bf x}_{l}|. \end{equation} Combining (\ref{proof_dir1}), (\ref{proof_dir3}), (\ref{harmR}) and (\ref{harmRC}), we get \begin{equation}\label{proof_dir4} \Delta(R({\bf x};X)-\varphi_1({\bf x}))=0, \quad {\bf x}\in\mathcal{D}, \qquad \partial_{\nu}\left(R({\bf x};X)-\varphi_1({\bf x})\right)=0,\quad {\bf x}\in\partial\mathcal{D}. \end{equation} Thus $$R({\bf x};X)=\varphi_1({\bf x})+{\rm constant}, \qquad {\bf x}\in\mathcal{D},$$ which immediately implies the first equality in (\ref{equi_dir}). Similarly, since $H$ is a harmonic function, there exists a function $\varphi_2({\bf x})$ such that $$J\nabla_{\bf x} H\left({\bf x};X\right)=\nabla\varphi_2({\bf x}),\qquad {\bf x}\in \mathcal{D}.$$ Thus, $\varphi_2({\bf x})$ satisfies the Laplace equation \begin{equation}\label{proof_dir5} \Delta\varphi_2({\bf x})=\nabla\cdot(J\nabla_{\bf x} H({\bf x};X))=\partial_{yx}H({\bf x};X)-\partial_{xy}H({\bf x};X)=0, \qquad {\bf x}\in \mathcal{D}, \end{equation} with the following Neumann BC \begin{equation}\label{proof_dir6} \partial_{\nu}\varphi_2({\bf x})=(J\nabla_{\bf x} H({\bf x};X))\cdot\nu= \nabla_{\bf x} H({\bf x};X)\cdot\nu_\perp=\partial_{\nu_\perp}H({\bf x};X), \qquad{\bf x}\in\partial\mathcal{D}. \end{equation} Combining (\ref{proof_dir5}), (\ref{proof_dir6}), (\ref{harmR}), (\ref{harmRC}) and (\ref{harmRC_tan}), we get \begin{equation}\label{proof_dir7} \Delta(R({\bf x};X)-\varphi_2({\bf x}))=0, \quad {\bf x}\in\mathcal{D}, \qquad \partial_{\nu}\left(R({\bf x};X)-\varphi_2({\bf x})\right)=0,\quad {\bf x}\in\partial\mathcal{D}.
\end{equation} Thus $$R({\bf x};X)=\varphi_2({\bf x})+{\rm constant}, \qquad {\bf x}\in\mathcal{D},$$ which immediately implies the second equality in (\ref{equi_dir}). $\square$ \subsection{Under homogeneous Neumann boundary condition}\label{sec: rdl_neu} For the CGLE (\ref{cgle}) with initial condition (\ref{ini_con}) under the homogeneous Neumann BC (\ref{neu-bc}), it has been derived formally and rigorously \cite{JS,KMMS,CJ} that $W_{\rm bc}(X)$ in the renormalized energy $(\ref{re_ener})$ admits the form \begin{equation}\label{renorm_ener_neu} W_{\rm bc}(X)=W_{\rm nbc}(X):= -\sum_{j=1}^{M}n_{j}\tilde{R}({\bf x}_{j};X), \end{equation} and by using the identity \begin{equation} \nabla_{{\bf x}_j}\left[W_{\rm cen}(X)+ W_{\rm nbc}(X)\right]=-2n_{j}\nabla_{{\bf x}}\left[\tilde{R}({\bf x};X)+\sum_{l=1\& l\ne j}^Mn_{l}\ln|{\bf x}-{\bf x}_{l}|\right]_{{\bf x}={\bf x}_{j}}, \end{equation} we have the following simplified equivalent form of (\ref{eqn: rdl_gen}): \begin{lem} For $1\le j\le M$ and $t>0$, system (\ref{eqn: rdl_gen}) can be simplified as \begin{equation}\label{eqn: reduced-neu1} (\alpha I+\beta n_{j} J) \frac{d}{dt}{\bf x}_{j}(t)=2n_j\left[\nabla_{\bf x} \tilde{R}\left({\bf x};X\right)|_{{\bf x} ={\bf x}_{j}(t)}+\sum_{l=1\& l\ne j}^Mn_{l}\frac{{\bf x}_{j}(t)-{\bf x}_{l}(t)}{|{\bf x}_{j}(t) -{\bf x}_{l}(t)|^2}\right].
\end{equation} \end{lem} \noindent Moreover, for any fixed $X\in \mathcal{D}^M$, by introducing the functions $\tilde{H}({\bf x};X)$ and $\tilde{Q}({\bf x};X)$, both harmonic in ${\bf x}$ and satisfying, respectively, the boundary conditions \cite{SJYM1, SJYM2, SJYM3, LX}: \begin{align} &\label{harmRC_neu_tan} \frac{\partial \tilde{H}({\bf x};X)}{\partial \nu_\perp} = -\frac{\partial}{\partial \nu}\sum_{l=1}^{M}n_{l}\theta({\bf x}-{\bf x}_{l}), \qquad {\bf x}\in \partial\mathcal{D},\\ &\label{harmRC_neu} \frac{\partial \tilde{Q}({\bf x};X)}{\partial \nu} =-\frac{\partial}{\partial \nu}\sum_{l=1}^{M}n_{l}\theta({\bf x}-{\bf x}_{l}), \qquad {\bf x}\in \partial\mathcal{D}, \end{align} with the function $\theta: \ {\mathbb R}^2 \to [0,2\pi)$ being defined in (\ref{theta}), we have the following lemma on the equivalence of the RDL (\ref{eqn: reduced-neu1}) \cite{BT, BT1}: \begin{lem} For any fixed $X\in \mathcal{D}^M$, we have the following identity \begin{equation}\label{equi_neu} J\nabla_{\bf x} \tilde{Q}\left({\bf x};X\right)=\nabla_{\bf x} \tilde{R}\left({\bf x};X\right)=J\nabla_{\bf x} \tilde{H}\left({\bf x};X\right), \qquad {\bf x}\in \mathcal{D}, \end{equation} which immediately implies the equivalence of system (\ref{eqn: reduced-neu1}) and the following two systems: for $t>0$ \begin{align*} & (\alpha I+\beta n_{j} J) \frac{d}{dt}{\bf x}_{j}(t)=2n_j\left[J\nabla_{\bf x} \tilde{H}\left({\bf x};X\right)|_{{\bf x}= {\bf x}_{j}(t)}+\sum_{l=1\& l\ne j}^Mn_{l}\frac{{\bf x}_{j}(t)-{\bf x}_{l}(t)}{|{\bf x}_{j}(t) -{\bf x}_{l}(t)|^2}\right], \\ & (\alpha I+\beta n_{j} J) \frac{d}{dt}{\bf x}_{j}(t)=2n_j\left[J\nabla_{\bf x} \tilde{Q}\left({\bf x};X\right)|_{{\bf x}= {\bf x}_{j}(t)}+\sum_{l=1\& l\ne j}^Mn_{l}\frac{{\bf x}_{j}(t)-{\bf x}_{l}(t)}{|{\bf x}_{j}(t) -{\bf x}_{l}(t)|^2}\right]. \end{align*} \end{lem} \begin{proof} The proof follows the lines of the proof of Lemma \ref{lem: equi_dir_rdl2}; we omit the details here for brevity.
\end{proof} \section{Numerical methods} \label{sec: num_method} In this section, we briefly outline some efficient and accurate numerical methods for solving the CGLE (\ref{cgle}) on either a rectangle or a disk with initial condition (\ref{ini_con}) and under either the Dirichlet BC (\ref{dir}) or the homogeneous Neumann BC (\ref{neu-bc}). The key ideas of our numerical methods are: (i) applying a time-splitting technique, which has been widely used for nonlinear partial differential equations, to decouple the nonlinearity in the CGLE \cite{GRTP, S, BJM, YZWBQD1}; and (ii) adopting a proper finite difference/element and/or spectral method to discretize a gradient flow with constant coefficients \cite{BDZ, BT, BT1}. Let $\tau >0$ be the time step size and denote $t_{n}=n\tau$ for $n\ge0$. For $n=0,1,\ldots$, from time $t=t_{n}$ to $t=t_{n+1}$, the CGLE (\ref{cgle}) is solved in two splitting steps. One first solves \begin{equation}\label{step1} (\lambda_\varepsilon+i\beta)\partial_{t}\psi^{\varepsilon}({\bf x},t) =\frac{1}{\varepsilon^{2}}(1-|\psi^{\varepsilon}|^{2})\psi^{\varepsilon}, \qquad {\bf x}\in \mathcal{D}, \quad t\ge t_n, \end{equation} for the time step of length $\tau$, followed by solving \begin{equation}\label{step2} (\lambda_\varepsilon+i\beta)\partial_{t}\psi^{\varepsilon}({\bf x},t)= \Delta \psi^{\varepsilon}, \qquad {\bf x}\in\mathcal{D}, \quad t\ge t_n, \end{equation} for the same time step. Methods to discretize equation (\ref{step2}) will be outlined later. For $t\in[t_n,t_{n+1}]$, we can easily obtain from equation (\ref{step1}) the following ODE for $\rho^{\varepsilon}({\bf x},t)=|\psi^{\varepsilon}({\bf x},t)|^2$: \begin{equation}\label{eqn: rho_step1} \partial_t\rho^{\varepsilon}({\bf x},t)=\eta[1-\rho^{\varepsilon}({\bf x},t)]\rho^{\varepsilon}({\bf x},t),\quad {\bf x}\in\mathcal{D}, \quad t_n\le t\le t_{n+1}, \end{equation} where $\eta=\frac{2\lambda_\varepsilon}{\varepsilon^2(\lambda_\varepsilon^2+\beta^2)}$.
Solving equation (\ref{eqn: rho_step1}), we have \begin{equation}\label{slrho2} \rho^{\varepsilon}({\bf x},t)=\frac{\rho^{\varepsilon}({\bf x},t_n)}{\rho^{\varepsilon}({\bf x},t_n)+(1-\rho^{\varepsilon}({\bf x},t_n))\exp[-\eta (t-t_n)]}. \end{equation} Plugging (\ref{slrho2}) into (\ref{step1}), we can integrate it exactly to get \begin{equation}\label{explicit} \psi^{\varepsilon}({\bf x},t)=\psi^{\varepsilon}({\bf x},t_{n}) \sqrt{\hat{P}({\bf x},t)}\exp\left[-\frac{i\beta}{2\lambda_\varepsilon}\ln \hat{P}({\bf x},t) \right], \end{equation} where \begin{equation} \label{p} \hat{P}({\bf x},t)= \frac{1}{|\psi^{\varepsilon}({\bf x},t_n)|^2+(1-|\psi^{\varepsilon}({\bf x},t_n)|^2)\exp(-\eta(t-t_n))}. \end{equation} We remark here that, in practice, we always use the second-order Strang splitting \cite{S}, that is, from time $t=t_n$ to $t=t_{n+1}$: (i) evolve (\ref{step1}) for half a time step $\tau/2$ with initial data given at $t=t_n$; (ii) evolve (\ref{step2}) for one step $\tau$ starting with the new data; and (iii) evolve (\ref{step1}) for half a time step $\tau/2$ again with the newer data. When $\mathcal{D}=[a,b]\times[c,d]$ is a rectangular domain, we denote $h_{x}=\frac{b-a}{N}$ and $h_{y}=\frac{d-c}{L}$, with $N$ and $L$ two even positive integers, as the mesh sizes in the $x$- and $y$-directions, respectively. Similar to the discretization of the gradient flow with constant coefficients \cite{BT}, when the Dirichlet BC (\ref{dir}) is used, equation (\ref{step2}) can be discretized by using the fourth-order compact finite difference discretization for the spatial derivatives followed by the Crank-Nicolson (CNFD) scheme for the temporal derivative \cite{BT, BT1}; when the homogeneous Neumann BC (\ref{neu-bc}) is used, equation (\ref{step2}) can be discretized by using the cosine spectral discretization for the spatial derivatives followed by integrating in time {\sl exactly} \cite{BT, BT1}. The details are omitted here for brevity.
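To make the splitting concrete, here is a minimal sketch of one Strang step for the homogeneous Neumann case on a rectangle (our own illustrative code, not the production implementation; cell-centered sampling and all function names are assumptions). The nonlinear substep uses the closed form (\ref{explicit}), with the phase factor obtained by integrating the phase equation of (\ref{step1}) exactly, and the linear substep (\ref{step2}) is diagonalized in the cosine basis and integrated exactly in time:

```python
import numpy as np
from scipy.fft import dctn, idctn

def nonlinear_substep(psi, tau, eps, lam, beta):
    """Exact solution of the nonlinear substep over a time interval tau.

    The modulus follows the logistic law and the phase factor
    exp(-1j*beta*ln(P)/(2*lam)) results from integrating the phase exactly.
    """
    eta = 2.0 * lam / (eps**2 * (lam**2 + beta**2))
    rho = np.abs(psi) ** 2
    P = 1.0 / (rho + (1.0 - rho) * np.exp(-eta * tau))
    return psi * np.sqrt(P) * np.exp(-1j * beta * np.log(P) / (2.0 * lam))

def linear_substep_neumann(psi, tau, lx, ly, lam, beta):
    """Exact-in-time cosine pseudospectral step for
    (lam + 1j*beta) d_t psi = Laplace(psi) with homogeneous Neumann BC,
    sampling psi at cell centers of an lx-by-ly rectangle (DCT-II modes)."""
    N, L = psi.shape
    mu = (np.pi * np.arange(N)[:, None] / lx) ** 2 \
       + (np.pi * np.arange(L)[None, :] / ly) ** 2   # eigenvalues of -Laplacian
    decay = np.exp(-mu * tau / (lam + 1j * beta))
    # the DCT is real-linear: transform real and imaginary parts separately
    hat = dctn(psi.real, norm='ortho') + 1j * dctn(psi.imag, norm='ortho')
    hat *= decay
    return idctn(hat.real, norm='ortho') + 1j * idctn(hat.imag, norm='ortho')

def strang_step(psi, tau, eps, lam, beta, lx, ly):
    """One second-order Strang step: half nonlinear, full linear, half nonlinear."""
    psi = nonlinear_substep(psi, 0.5 * tau, eps, lam, beta)
    psi = linear_substep_neumann(psi, tau, lx, ly, lam, beta)
    return nonlinear_substep(psi, 0.5 * tau, eps, lam, beta)
```

Both substeps are exact in time, so the only time-discretization error of `strang_step` is the $O(\tau^2)$ splitting error.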
Combining the CNFD and cosine pseudospectral discretizations for the Dirichlet and homogeneous Neumann BCs, respectively, with the second-order Strang splitting, we obtain the time-splitting Crank-Nicolson finite difference (TSCNFD) and time-splitting cosine pseudospectral (TSCP) discretizations for the CGLE (\ref{cgle}) on a rectangle with the Dirichlet BC (\ref{dir}) and the homogeneous Neumann BC (\ref{neu-bc}), respectively. Both the TSCNFD and TSCP discretizations are unconditionally stable and second-order accurate in time; their memory cost is $O(NL)$ and their computational cost per time step is $O\left(NL\ln(NL)\right)$. In addition, TSCNFD is fourth-order accurate in space and TSCP is spectrally accurate in space. When $\mathcal{D}=\{{\bf x} \ |\ |{\bf x}|<R\}:=B_R({\bf 0})$ is a disk with $R>0$ a fixed constant, similar to the discretization of the GPE with an angular momentum rotation \cite{B,BDZ,YZWBQD1} and/or the gradient flow with constant coefficients \cite{BT}, it is natural to adopt polar coordinates $(r,\theta)$ in the numerical discretization, using the standard Fourier pseudospectral method in the $\theta$-direction \cite{JSTT}, the finite element method in the $r$-direction, and the Crank-Nicolson method in time \cite{B,BDZ,YZWBQD1}. Again, the details are omitted here for brevity. \section{Numerical results under Dirichlet BC} \label{sec: CGLE_Dir} In this section, we report numerical results for vortex interactions of the CGLE (\ref{cgle}) under the Dirichlet BC (\ref{dir}) and compare them with those obtained from the corresponding RDLs. For simplicity, from now on, we take the parameters $\alpha=1$ in (\ref{lam_vep}) and $\beta=1$ in (\ref{cgle}). We study how the dimensionless parameter $\varepsilon$, the initial setup, the boundary value and the geometry of the domain $\mathcal{D}$ affect the dynamics and interaction of vortices.
For a given bounded domain $\mathcal{D}$, the CGLE (\ref{cgle}) is unchanged by the re-scaling ${\bf x}\to l{\bf x}$, $t\to l^2t$ and $\varepsilon\to l\varepsilon$ with $l$ the diameter of $\mathcal{D}$. Thus, without loss of generality and unless otherwise specified, we hereafter assume that the diameter of $\mathcal{D}$ is $O(1)$. The function $g$ in the Dirichlet BC (\ref{dir}) is given as \begin{equation*} g({\bf x})=e^{i(h({\bf x})+\sum_{j=1}^{M}n_j\theta({\bf x}-{\bf x}^0_j))},\qquad {\bf x}\in\partial\mathcal{D}, \end{equation*} and the initial condition $\psi_0^\varepsilon$ in (\ref{ini_con}) is chosen as \begin{equation} \label{initdbc0} \psi_0^\varepsilon({\bf x})=\psi_0^\varepsilon(x,y)=e^{ih({\bf x})}\prod_{j=1}^{M} \phi^\varepsilon_{n_j} ({\bf x}-{\bf x}^0_j), \qquad {\bf x}=(x,y)\in \overline{\mathcal{D}}, \end{equation} where $M>0$ is the total number of vortices in the initial data, the phase shift $h({\bf x})$ is a harmonic function, $\theta({\bf x})$ is defined in (\ref{theta}) and, for $j=1,2,\ldots,M$, $n_j=1$ or $-1$ and ${\bf x}^0_j=(x_j^0,y_j^0)\in \mathcal{D}$ are the winding number and the initial location of the $j$-th vortex, respectively. Moreover, for $j=1,\ldots,M$, the function $\phi^\varepsilon_{n_j}({\bf x})$ is chosen as a single vortex centered at the origin with winding number $n_j=1$ or $-1$, which was computed numerically and depicted in Section 4 of \cite{BT, BT1}.
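The initial data (\ref{initdbc0}) can be assembled numerically once a single-vortex profile is available. Below is a minimal sketch in which the numerically computed profile of \cite{BT, BT1} is replaced, purely for illustration, by the standard $\tanh$ approximation $\tanh(r/\varepsilon)$ (an assumption, not the profile actually used in our computations):

```python
import numpy as np

def single_vortex(X, Y, eps, n):
    """Approximate single vortex phi_n centered at the origin:
    modulus tanh(r/eps) (a stand-in for the numerically computed profile)
    and phase n*theta(x, y)."""
    r = np.hypot(X, Y)
    theta = np.arctan2(Y, X)
    return np.tanh(r / eps) * np.exp(1j * n * theta)

def initial_data(X, Y, eps, h, centers, winds):
    """psi_0 = exp(i*h(x)) * prod_j phi_{n_j}(x - x_j^0), cf. the text."""
    psi = np.asarray(np.exp(1j * h(X, Y)), dtype=complex)
    for (x0, y0), n in zip(centers, winds):
        psi = psi * single_vortex(X - x0, Y - y0, eps, n)
    return psi
```

A useful sanity check is that summing the unwrapped phase increments along a small loop around a vortex center recovers its winding number.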
In addition, in the following sections, we mainly consider six different modes of the phase shift $h({\bf x}):$ \begin{itemize} \item Mode 0: $h({\bf x})=0,$ \qquad \qquad \qquad \quad\;\; Mode 1: $h({\bf x})=x+y, $ \item Mode 2: $h({\bf x})=x-y, $ \qquad \qquad \qquad Mode 3: $h({\bf x})=x^2-y^2, $ \item Mode 4: $h({\bf x})=x^2-y^2+2xy,$ \qquad\ Mode 5: $h({\bf x})=x^2-y^2-2xy.$ \end{itemize} To simplify our discussion, for $j=1,2,\ldots,M$, hereafter we let ${\bf x}^{\rm r}_j(t)$ be the $j$-th vortex center in the reduced dynamics and denote $d_j^\varepsilon(t)=|{\bf x}^\varepsilon_j(t)-{\bf x}^{\rm r}_j(t)|$ as the distance between the vortex centers in the CGLE dynamics and in the reduced dynamics. Furthermore, in the presentation of figures, the initial location of a vortex with winding number $+1$, that of a vortex with winding number $-1$, and the location at which two vortices merge are marked as `+', `$\circ$' and `$\diamond$', respectively. Finally, in our computations, if not specified, we take $\mathcal{D}=[-1,1]^2$, mesh sizes $h_x=h_y=\frac{\varepsilon}{10}$ and time step $\tau=10^{-6}$. The CGLE (\ref{cgle}) with (\ref{dir}) and (\ref{ini_con}) is solved by the method TSCNFD presented in section \ref{sec: num_method}. \subsection{Single vortex} \label{sec: cgle_NRfSVD} In this subsection, we present numerical results of the motion of a single quantized vortex in the CGLE dynamics and the corresponding reduced dynamics. We choose the parameters as $M=1$, $n_1=1$ in (\ref{initdbc0}).
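As required of the phase shift $h({\bf x})$ in (\ref{initdbc0}), the six modes listed above are all harmonic polynomials. This can be confirmed with a five-point finite-difference Laplacian, which is exact for quadratic polynomials (a sanity check for illustration only, not part of the solver):

```python
import numpy as np

def fd_laplacian_max(h, half_width=1.0, N=64):
    """Max |Laplace h| over the interior grid points of a square,
    using the standard 5-point stencil (exact for quadratic polynomials)."""
    x = np.linspace(-half_width, half_width, N)
    dx = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing='ij')
    H = h(X, Y)
    lap = (H[2:, 1:-1] + H[:-2, 1:-1] + H[1:-1, 2:] + H[1:-1, :-2]
           - 4.0 * H[1:-1, 1:-1]) / dx**2
    return np.max(np.abs(lap))

# the six phase-shift modes from the text
modes = [lambda x, y: 0.0 * x, lambda x, y: x + y, lambda x, y: x - y,
         lambda x, y: x**2 - y**2, lambda x, y: x**2 - y**2 + 2 * x * y,
         lambda x, y: x**2 - y**2 - 2 * x * y]
```

Each mode yields a discrete Laplacian of size zero up to rounding, while a non-harmonic function such as $x^2+y^2$ does not.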
To study how the initial phase shift $h({\bf x})$, the initial location of the vortex ${\bf x}_1^0$ and the domain geometry affect the motion of the vortex, and to understand the validity of the RDL, we consider the following 16 cases: \begin{itemize} \item Case I-III: ${\bf x}_1^0=(0,0)$, $h({\bf x})$ is chosen as Mode 1, 2 or 3, and $\mathcal{D}$ is type I; \item Case IV-VIII: ${\bf x}_1^0=(0.1,0)$, $h({\bf x})$ is chosen as Mode 1, 2, 3, 4 or 5, and $\mathcal{D}$ is type I; \item Case IX-XII: ${\bf x}_1^0=(0.1,0.2)$, $h({\bf x})$ is chosen as Mode 2, 3, 4 or 5, and $\mathcal{D}$ is type I; \item Case XIII-XIV: ${\bf x}_1^0=(0,0)$, $h({\bf x})=x+y$ and $\mathcal{D}$ is chosen as type II or III; \item Case XV-XVI: ${\bf x}_1^0=(0.1,0.2)$, $h({\bf x})=x^2-y^2$ and $\mathcal{D}$ is chosen as type II or III, \end{itemize} where three different types of domains $\mathcal{D}$ are considered: type I: $\mathcal{D}=[-1,1]\times[-1,1],$ type II: $\mathcal{D}=[-1,1]\times[-0.65,0.65],$ type III: $\mathcal{D}=B_1({\bf 0}).$ \begin{figure} \caption{Trajectory of the vortex center in the CGLE dynamics under Dirichlet BC when $\varepsilon=\frac{1}{32}$ (cases II--IV and VI), and time evolution of $d_1^\varepsilon(t)$ for different $\varepsilon$ (cases II and VI).} \label{fig: cgle-one-vortex-conver-D} \end{figure} Fig. \ref{fig: cgle-one-vortex-conver-D} shows the trajectory of the vortex center when $\varepsilon=\frac{1}{32}$ for cases II-IV and VI as well as the time evolution of $d_1^\varepsilon(t)$ for different $\varepsilon$ for cases II and VI, and the trajectories of the vortex center for cases V-VIII and cases IX-XII with $\varepsilon=\frac{1}{32}$ in the CGLE are shown in Fig. \ref{fig: cgle-one-vortex-BC_Eff-D} and Fig. \ref{fig: cgle-one-vortex-Geo_Eff-D}, respectively. Based on these ample numerical results (although some results are not shown here for brevity), we make the following observations for the single vortex dynamics: \begin{figure} \caption{Trajectory of the vortex center in the CGLE dynamics under Dirichlet BC when $\varepsilon=\frac{1}{32}$ for cases V--VIII.} \label{fig: cgle-one-vortex-BC_Eff-D} \end{figure} (i).
When $h({\bf x})\equiv0$, the vortex center does not move, which is similar to the vortex dynamics in the whole space in the GLE and NLSE dynamics. (ii). When $h({\bf x})=(x+by)(x-\frac{y}{b})$ with $b\neq 0$, the vortex does not move if ${\bf x}_1^0=(0,0)$, while it does move if ${\bf x}_1^0\neq (0,0)$ (see cases III and VI for $b=1$). This is the same phenomenon as in the GLE and NLSE dynamics. (iii). When $h({\bf x})\ne0$ and $h({\bf x})\neq(x+by)(x-\frac{y}{b})$ with $b\neq 0$, in general, the vortex center moves to a point different from its initial location and then stays there forever. This is quite different from the corresponding case in the whole space, since in that case a single vortex may move to infinity under the initial data (\ref{initdbc0}) with $h({\bf x})\ne0$. (iv). In general, the initial location, the geometry of the domain and the boundary value all affect the motion of the vortex center. (v). When $\varepsilon\rightarrow 0$, the dynamics of the vortex center in the CGLE dynamics converges uniformly in time to that in the reduced dynamics (see Fig. \ref{fig: cgle-one-vortex-conver-D}), which numerically verifies the validity of the RDLs. In fact, based on our extensive numerical experiments, the motion of the vortex center from the RDLs agrees with that from the CGLE dynamics qualitatively when $0<\varepsilon<1$ and quantitatively when $0<\varepsilon\ll 1$. \begin{figure} \caption{Trajectory of the vortex center in the CGLE dynamics under Dirichlet BC when $\varepsilon=\frac{1}{32}$ for cases IX--XII.} \label{fig: cgle-one-vortex-Geo_Eff-D} \end{figure} \subsection{Vortex pair} \label{sec: cgle_NRfVPD} Here we present numerical results of the interaction of a vortex pair in the CGLE dynamics and its corresponding reduced dynamics. In the following numerical simulations of this subsection, we take $M=2$, $n_1=n_2=1$, ${\bf x}_1^0=(-0.3,0)$ and ${\bf x}_2^0=(0.3,0)$ in (\ref{initdbc0}). Fig.
\ref{fig: cgle-vpd-modes-ep25} depicts the trajectory of the vortex centers and the corresponding time evolution of the GL functionals when $\varepsilon=\frac{1}{25}$ in the CGLE with different $h({\bf x})$ in (\ref{initdbc0}), and Fig. \ref{fig: cgle-vpd-dens-trace-err} shows contour plots of $|\psi^{\varepsilon}({\bf x},t)|$ for $\varepsilon=\frac{1}{25}$ at different times as well as the time evolution of ${\bf x}_1^{\varepsilon} (t)$, ${\bf x}_1^{\rm r} (t)$ and $d_1^{\varepsilon} (t)$ for different $\varepsilon$ with $h({\bf x})=0$ in (\ref{initdbc0}). \begin{figure} \caption{Trajectory of the vortex centers (a) and the corresponding time evolution of the GL functionals (b) in the CGLE dynamics under Dirichlet BC when $\varepsilon=\frac{1}{25}$ for different $h({\bf x})$.} \label{fig: cgle-vpd-modes-ep25} \end{figure} According to our ample numerical experiments, we make the following observations for the interaction of a vortex pair in the CGLE dynamics with Dirichlet BC: (i). The motion of the vortex pair may be thought of as a kind of combination of the motions in the GLE and NLSE dynamics with Dirichlet BC. From Figs. \ref{fig: cgle-vpd-modes-ep25}-\ref{fig: cgle-vpd-dens-trace-err}, we observe that the two vortices undergo a repulsive interaction: they first rotate around each other while moving apart towards the boundary of the domain, and then stop somewhere near the boundary, which indicates that the boundary of the domain imposes a repulsive force on the two vortices. As shown in previous studies \cite{BT,BT1}, a vortex pair in the GLE dynamics moves outward along the line connecting the two vortices and finally stays static near the boundary, while in the NLSE dynamics the two vortices always rotate around each other periodically.
In fact, based on our extensive numerical results, we found that the larger the value of $\beta$ (or $\alpha$), the closer the motion in the CGLE dynamics is to that in the NLSE (or GLE) dynamics, which gives sufficient numerical evidence for the above conclusion. (ii). The phase shift $h({\bf x})$ affects the motion of the vortices significantly. When $h({\bf x})=(x+by)(x-\frac{y}{b})$ with $b\neq 0$, the vortices move outward symmetrically with respect to the origin, i.e., ${\bf x}_1(t)=-{\bf x}_2(t)$ (see Fig. \ref{fig: cgle-vpd-modes-ep25}). (iii). When $\varepsilon\rightarrow 0$, the dynamics of the two vortex centers in the CGLE dynamics converges uniformly in time to that in the reduced dynamics (see Fig. \ref{fig: cgle-vpd-dens-trace-err}), which numerically verifies the validity of the RDLs in this case. In fact, based on our extensive numerical experiments, the motions of the two vortex centers from the RDLs agree with those from the CGLE dynamics qualitatively when $0<\varepsilon<1$ and quantitatively when $0<\varepsilon\ll 1$. (iv). During the dynamic evolution of the CGLE, the GL functional and its kinetic part decrease as time evolves, its interaction part changes dramatically when $t$ is small, and when $t\to \infty$, all three quantities converge to constants (see Fig. \ref{fig: cgle-vpd-modes-ep25}), which immediately indicates that a steady state solution is reached when $t\to\infty$. \begin{figure} \caption{Contour plots of $|\psi^\varepsilon({\bf x},t)|$ at different times, and time evolution of ${\bf x}_1^\varepsilon(t)$, ${\bf x}_1^{\rm r}(t)$ and $d_1^\varepsilon(t)$ for different $\varepsilon$, for the vortex pair under Dirichlet BC with $h({\bf x})=0$.} \label{fig: cgle-vpd-dens-trace-err} \end{figure} \subsection{Vortex dipole} \label{sec: cgle_NRfVDD} Here we present numerical results of the interaction of a vortex dipole under the CGLE dynamics and its corresponding reduced dynamical laws. We choose the parameters in the simulations as $M=2$, $n_1=-n_2=-1$, ${\bf x}_2^0=-{\bf x}_1^0=(0.3,0)$ in (\ref{initdbc0}). Fig.
\ref{fig: cgle-vdd-modes-ep25} depicts the trajectory of the vortex centers and the corresponding time evolution of the GL functionals when $\varepsilon=\frac{1}{25}$ in the CGLE with different $h({\bf x})$ in (\ref{initdbc0}), and Fig. \ref{fig: cgle-vdd-trace-err} shows contour plots of $|\psi^{\varepsilon}({\bf x},t)|$ for $\varepsilon=\frac{1}{25}$ at different times as well as the time evolution of ${\bf x}_1^{\varepsilon} (t)$, ${\bf x}_1^{\rm r} (t)$ and $d_1^{\varepsilon} (t)$ for different $\varepsilon$ with $h({\bf x})=0$ in (\ref{initdbc0}). \begin{figure} \caption{Trajectory of the vortex centers (a) and the corresponding time evolution of the GL functionals (b) in the CGLE dynamics under Dirichlet BC when $\varepsilon=\frac{1}{25}$ for different $h({\bf x})$.} \label{fig: cgle-vdd-modes-ep25} \end{figure} From Figs. \ref{fig: cgle-vdd-modes-ep25}-\ref{fig: cgle-vdd-trace-err} and ample numerical experiments (not shown here for brevity), we make the following observations for the interaction of a vortex dipole in the CGLE dynamics with Dirichlet BC: (i). The two vortices undergo an attractive interaction; they collide and annihilate each other. (ii). The phase shift $h({\bf x})$ and the initial distance between the two vortices affect the motion of the vortices significantly. If $h({\bf x})=0$, regardless of where the vortices are initially located, the vortex dipole finally merges. However, similar to the case in the GLE dynamics, if $h({\bf x})\not\equiv0$, say $h({\bf x})=x+y$ for example, there exists a critical distance $d_c^\varepsilon$, depending on the value of $\varepsilon$, that divides the motion of the vortex dipole into two cases: (a) if the initial distance between the two vortices satisfies $|{\bf x}_2^0-{\bf x}_1^0|>d_c^\varepsilon$, the vortices never merge; they finally stay static and separated somewhere near the boundary; (b) otherwise, they finally merge and annihilate. (iii).
For $h({\bf x})=0$, when $\varepsilon\rightarrow 0$, the dynamics of the two vortex centers in the CGLE dynamics converges uniformly in time to that in the reduced dynamics (see Fig. \ref{fig: cgle-vdd-trace-err}), which numerically verifies the validity of the RDLs in this case. In fact, based on our extensive numerical experiments, the motions of the two vortex centers from the RDLs agree with those from the CGLE dynamics qualitatively when $0<\varepsilon<1$ and quantitatively when $0<\varepsilon\ll 1$ before they merge. (iv). During the dynamic evolution of the CGLE, the GL functional decreases as time evolves, its kinetic and interaction parts do not change dramatically when $t$ is small, and all three quantities converge to constants when $t\to \infty$. Moreover, if finite time merging/annihilation happens, the GL functional and its kinetic and interaction parts change significantly during the collision. In addition, when $t\to\infty$, the interaction energy goes to $0$, which immediately implies that a steady state is reached in the form of $\phi^{\varepsilon}({\bf x})=e^{ic({\bf x})}$, where $c({\bf x})$ is a harmonic function satisfying $c({\bf x})|_{\partial \mathcal{D}}=h({\bf x})+\sum_{j=1}^{M}n_j\theta({\bf x}-{\bf x}^0_j)$. \begin{figure} \caption{Contour plots of $|\psi^\varepsilon({\bf x},t)|$ at different times, and time evolution of ${\bf x}_1^\varepsilon(t)$, ${\bf x}_1^{\rm r}(t)$ and $d_1^\varepsilon(t)$ for different $\varepsilon$, for the vortex dipole under Dirichlet BC with $h({\bf x})=0$.} \label{fig: cgle-vdd-trace-err} \end{figure} \subsection{Vortex lattice} \label{sec: cgle_NRfVLD} Here we present numerical results about the interaction of vortex lattices under the CGLE dynamics. We consider the following 15 cases: case I. $M=3$, $n_1=n_2=n_3=1$, ${\bf x}_1^0=(0.5, 0)$, ${\bf x}_2^0=(-0.25,\frac{\sqrt{3}}{4})$, ${\bf x}_3^0=(-0.25,-\frac{\sqrt{3}}{4})$; case II. $M=3$, $n_1=n_2=n_3=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(0,0)$, ${\bf x}_3^0=(0.4,0)$; case III. $M=3$, $n_1=n_2=n_3=1$, ${\bf x}_1^0=(0,0.3)$, ${\bf x}_2^0=(0.15,0.15)$, ${\bf x}_3^0=(0.3,0)$; case IV.
$M=3$, $-n_1=n_2=n_3=1$, ${\bf x}_1^0=(0.5, 0)$, ${\bf x}_2^0=(-0.25,\frac{\sqrt{3}}{4})$, ${\bf x}_3^0=(-0.25,-\frac{\sqrt{3}}{4})$; case V. $M=3$, $n_2=-1$, $n_1=n_3=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(0,0)$, ${\bf x}_3^0=(0.4,0)$; case VI. $M=3$, $n_1=-1$, $n_2=n_3=1$, ${\bf x}_1^0=(0.2,0.3)$, ${\bf x}_2^0=(-0.3,0.4)$, ${\bf x}_3^0=(-0.4,-0.2)$; case VII. $M=4$, $n_1=n_2=n_3=n_4=1$, ${\bf x}_1^0=(0.5,0)$, ${\bf x}_2^0=(0,0.5)$, ${\bf x}_3^0=(-0.5,0)$, ${\bf x}_4^0=(0,-0.5)$; case VIII. $M=4$, $n_1=n_3=1$, $n_2=n_4=-1$, ${\bf x}_1^0=(0.5,0)$, ${\bf x}_2^0=(0,0.5)$, ${\bf x}_3^0=(-0.5,0)$, ${\bf x}_4^0=(0,-0.5)$; case IX. $M=4$, $n_2=n_3=-1$, $n_1=n_4=1$, ${\bf x}_1^0=(0.5,0)$, ${\bf x}_2^0=(0,0.5)$, ${\bf x}_3^0=(-0.5,0)$, ${\bf x}_4^0=(0,-0.5)$; case X. $M=4$, $n_1=n_3=1$, $n_2=n_4=-1$, ${\bf x}_1^0=(0.5,0.5)$, ${\bf x}_2^0=(-0.5,0.5)$, ${\bf x}_3^0=(-0.5,-0.5)$, ${\bf x}_4^0=(0.5,-0.5)$; case XI. $M=4$, $n_2=n_3=-1$, $n_1=n_4=1$, ${\bf x}_1^0=(0.5,0.5)$, ${\bf x}_2^0=(-0.5,0.5)$, ${\bf x}_3^0=(-0.5,-0.5)$, ${\bf x}_4^0=(0.5,-0.5)$; case XII. $M=4$, $n_1=n_3=-1$, $n_2=n_4=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(-0.4/3,0)$, ${\bf x}_3^0=(0.4/3,0)$, ${\bf x}_4^0=(0.4,0)$; case XIII. $M=4$, $n_2=n_3=-1$, $n_1=n_4=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(-0.4/3,0)$, ${\bf x}_3^0=(0.4/3,0)$, ${\bf x}_4^0=(0.4,0)$; case XIV. $M=4$, $n_1=n_2=-1$, $n_3=n_4=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(-0.4/3,0)$, ${\bf x}_3^0=(0.4/3,0)$, ${\bf x}_4^0=(0.4,0)$; case XV. $M=4$, $n_1=n_3=-1$, $n_2=n_4=1$, ${\bf x}_1^0=(0.2,0.3)$, ${\bf x}_2^0=(-0.3,0.4)$, ${\bf x}_3^0=(-0.4,-0.2)$, ${\bf x}_4^0=(0.3,-0.3)$. \begin{figure} \caption{Trajectory of the vortex centers for the interaction of different vortex lattices in the CGLE dynamics under Dirichlet BC with $\varepsilon=\frac{1}{32}$ for cases I--XV.} \label{fig: cgle-vortex-lattice-D} \end{figure} Fig.
\ref{fig: cgle-vortex-lattice-D} shows the trajectory of the vortex centers when $\varepsilon=\frac{1}{32}$ in the CGLE (\ref{cgle}) and $h({\bf x})=0$ in (\ref{initdbc0}) for the above 15 cases. From Fig. \ref{fig: cgle-vortex-lattice-D} and ample numerical experiments (not shown here for brevity), we make the following observations: (i). The dynamics and interaction of vortex lattices under the CGLE dynamics with Dirichlet BC depend on the initial alignment of the lattice, the geometry of the domain $\mathcal{D}$ and the boundary value $g({\bf x})$. (ii). For a lattice of $M$ vortices, if they have the same winding number, then no collisions happen for any time $t\ge0$. On the other hand, if they have opposite winding numbers, e.g., $M^+>0$ vortices with winding number `$+1$' and $M^->0$ vortices with winding number `$-1$' satisfying $M^++M^-=M$, collisions always happen in finite time. More precisely, when $t$ is sufficiently large, there exist exactly $|M^+-M^-|$ vortices with winding number `$+1$' if $M^+> M^-$; while if $M^+< M^-$, there exist exactly $|M^+-M^-|$ vortices with winding number `$-1$'. \begin{figure} \caption{Contour plots of $|\phi^\varepsilon({\bf x})|$ of the steady states for different $M$ and domains.} \label{fig: cgle-lattice_hx3_diff_domain} \end{figure} In order to study how the geometry of the domain and the boundary conditions affect the alignment of vortices in the steady state patterns in the CGLE dynamics under Dirichlet BC, we make the following set-up for our numerical computations. We choose the parameters as $\varepsilon=\frac{1}{32}$, $$n_j=1,\qquad {\bf x}_j^{0}=0.5\left(\cos\left(\frac{2j\pi}{M}\right), \sin\left(\frac{2j\pi}{M}\right)\right),\qquad j=1,2,\ldots,M,$$ i.e., initially we have $M$ like vortices located uniformly on a circle centered at the origin with radius $R_1=0.5$. Denote $\phi^\varepsilon({\bf x})$ as the steady state, i.e., $\phi^\varepsilon({\bf x}) =\lim_{t\to\infty}\psi^\varepsilon({\bf x},t)$ for ${\bf x}\in \mathcal{D}$. Fig.
\ref{fig: cgle-lattice_hx3_diff_domain} depicts the contour plots of the amplitude $|\phi^\varepsilon|$ of the steady state in the CGLE dynamics with $h({\bf x})=0$ in (\ref{initdbc0}) for different $M$ and domains, and Fig. \ref{fig: cgle-lattice_den_disk_square_N12_diffhx} depicts similar results with $M=12$ for different $h({\bf x})$ in (\ref{initdbc0}). \begin{figure} \caption{Contour plots of $|\phi^\varepsilon({\bf x})|$ of the steady states with $M=12$ for different $h({\bf x})$ in (\ref{initdbc0}).} \label{fig: cgle-lattice_den_disk_square_N12_diffhx} \end{figure} Based on Figs. \ref{fig: cgle-lattice_hx3_diff_domain}-\ref{fig: cgle-lattice_den_disk_square_N12_diffhx} and ample numerical results (not shown here for brevity), we make the following observations for the steady state patterns of vortex lattices under the CGLE dynamics with Dirichlet BC: (i). Vortex lattices with the same winding number repel each other and finally settle near the boundary of the domain. During the evolution, no particle-like collision phenomena happen, and a steady state pattern is formed as $t\to\infty$. As a matter of fact, the steady state is also the solution of the following minimization problem \[\phi^\varepsilon =\displaystyle {\rm argmin}_{\phi({\bf x})|_{{\bf x}\in\partial\mathcal{D}}= \psi_0^\varepsilon({\bf x})|_{{\bf x}\in\partial\mathcal{D}} } \mathcal{E}^\varepsilon(\phi).\] (ii). Both the geometry of the domain and the phase shift, i.e. $h({\bf x})$, significantly affect the steady state patterns. (iii). At the steady state, the distance between the vortex centers and the boundary of the domain depends on $\varepsilon$ and $M$: if $M$ is fixed, the distance decreases as $\varepsilon$ decreases, while if $\varepsilon$ is fixed, the distance decreases as $M$ increases. We remark that it is an interesting open problem to determine how this distance depends on $\varepsilon$, the boundary condition and the geometry of the domain.
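The uniform circular placement of the $M$ like vortices above is easy to reproduce. The sketch below is our own illustration, not the paper's code; the multi-vortex phase $\sum_l n_l\theta({\bf x}-{\bf x}_l^0)$, with $\theta$ the polar angle, is the phase entering the initial data (\ref{initdbc0}).

```python
import numpy as np

def circle_positions(M, R=0.5):
    # Initial like-vortex centers x_j^0 = R (cos(2 j pi / M), sin(2 j pi / M)).
    j = np.arange(1, M + 1)
    return np.stack([R * np.cos(2 * j * np.pi / M),
                     R * np.sin(2 * j * np.pi / M)], axis=1)

def lattice_phase(x, y, centers, indices):
    # Multi-vortex phase sum_l n_l * theta(x - x_l^0), theta = polar angle.
    phase = np.zeros_like(np.asarray(x, dtype=float))
    for (x0, y0), n in zip(centers, indices):
        phase += n * np.arctan2(np.asarray(y) - y0, np.asarray(x) - x0)
    return phase
```

For example, `circle_positions(12)` yields the twelve centers used for Fig. \ref{fig: cgle-lattice_den_disk_square_N12_diffhx}.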
\section{Numerical results under Neumann BC} \label{sec: cgle_Neu} In this section, we report numerical results for vortex interactions of the CGLE (\ref{cgle}) under the homogeneous Neumann BC (\ref{neu-bc}) and compare them with those obtained from the corresponding RDLs. The initial condition $\psi_0^\varepsilon$ in (\ref{ini_con}) is again chosen of the form (\ref{initdbc0}), but with the harmonic function $h({\bf x})$ replaced by $h_n({\bf x})$, chosen so that the initial data satisfy the Neumann BC, i.e. \[\frac{\partial}{\partial {\bf n}}h_n({\bf x}) = -\frac{\partial}{\partial {\bf n}}\sum_{l=1}^{M} n_{l}\theta({\bf x}-{\bf x}_{l}^0), \quad {\bf x}\in\partial \mathcal{D}.\] The CGLE (\ref{cgle}) with (\ref{neu-bc}) and (\ref{initdbc0}) is solved by the numerical method TSCP presented in section \ref{sec: num_method} in the following simulations. \subsection{Single vortex} \label{sec: cgle_NRfSVN} In this subsection, we present numerical results for the motion of a single quantized vortex in the CGLE dynamics with Neumann BC and its corresponding reduced dynamical laws. We choose the parameters as $M=1$ and $n_1=1$ in (\ref{initdbc0}). Fig. \ref{fig: cgle-one-vortex-N} shows the trajectory of the vortex center for different ${\bf x}_1^0$ in (\ref{initdbc0}) when $\varepsilon=\frac{1}{25}$, as well as the time evolution of ${\bf x}_1^{\varepsilon}$ and $d_1^\varepsilon$ for different $\varepsilon$. From Fig. \ref{fig: cgle-one-vortex-N} and ample numerical simulation results (not shown here for brevity), we can see that: (i). The initial location of the vortex affects its motion significantly, which reflects the boundary effect coming from the Neumann BC. (ii). If ${\bf x}_1^0=(0,0)$, the vortex does not move at any time; otherwise, the vortex moves out of the domain and never comes back.
This phenomenon is quite different from the case with Dirichlet BC in bounded domains, in which a single vortex can never move out of the domain, and from the case with the initial condition (\ref{initdbc0}) in the whole space, in which a single vortex does not move at all, regardless of its initial location. (iii). As $\varepsilon\rightarrow 0$, the dynamics of the vortex center under the CGLE dynamics converges uniformly in time to that of the RDLs before it exits the domain, which numerically validates the RDLs in this case. Clearly, once the vortex center moves out of the domain, the reduced dynamical laws are no longer valid. Based on our extensive numerical experiments, the motion of the vortex center from the RDLs agrees with that from the CGLE dynamics qualitatively when $0<\varepsilon<1$ and quantitatively when $0<\varepsilon\ll 1$, before it moves out of the domain. \begin{figure} \caption{Trajectory of the vortex center when $\varepsilon=\frac{1}{25}$ for different ${\bf x}_1^0$, and time evolution of ${\bf x}_1^{\varepsilon}$ and $d_1^{\varepsilon}$ for different $\varepsilon$.} \label{fig: cgle-one-vortex-N} \end{figure} \begin{figure} \caption{Contour plots of $|\psi^{\varepsilon}({\bf x},t)|$ at different times when $\varepsilon=\frac{1}{25}$.} \label{fig: cgle-vortex-pair-N-dens} \end{figure} \subsection{Vortex pair} \label{sec: cgle_NRfVPN} Here we present numerical results for the interaction of a vortex pair under the CGLE dynamics with Neumann BC and its corresponding reduced dynamical laws. We choose the simulation parameters as $M=2$, $n_1=n_2=1$ and ${\bf x}_2^0=-{\bf x}_1^0=(d_0,0)$ with $0<d_0<1$ in (\ref{initdbc0}). Fig. \ref{fig: cgle-vortex-pair-N-dens} shows the contour plots of $|\psi^{\varepsilon}({\bf x},t)|$ at different times when $\varepsilon=\frac{1}{25}$, and Fig. \ref{fig: cgle-vortex-pair-N} shows the trajectory of the vortex pair when $\varepsilon=\frac{1}{25}$ as well as the time evolution of $x_1^{\varepsilon}(t)$ and $d_1^{\varepsilon}(t)$ for different $d_0$ in (\ref{initdbc0}).
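Reporting the trajectories ${\bf x}_1^{\varepsilon}(t)$ requires locating the vortex centers in the computed field. One simple way to do this (a sketch under our own assumptions, not the paper's post-processing) is to take the grid point where $|\psi^\varepsilon|$ is smallest, since the modulus vanishes at a vortex core:

```python
import numpy as np

def vortex_center(psi, x, y):
    # Locate a single vortex center as the grid point minimising |psi|.
    # Assumes psi[i, j] approximates psi(x_i, y_j) on a tensor grid.
    i, j = np.unravel_index(np.argmin(np.abs(psi)), psi.shape)
    return x[i], y[j]
```

With several vortices in the domain, one would instead search for local minima of $|\psi^\varepsilon|$ below a small threshold, one per vortex core.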
\begin{figure} \caption{Trajectory of the vortex pair when $\varepsilon=\frac{1}{25}$, and time evolution of $x_1^{\varepsilon}(t)$ and $d_1^{\varepsilon}(t)$ for different $d_0$.} \label{fig: cgle-vortex-pair-N} \end{figure} From Figs. \ref{fig: cgle-vortex-pair-N-dens}-\ref{fig: cgle-vortex-pair-N} and ample numerical results (not shown here for brevity), we make the following observations: (i). The initial location of the vortices, i.e., the value of $d_0$, affects their motion significantly, which reflects the boundary effect coming from the Neumann BC. (ii). For the CGLE with $\varepsilon$ fixed, there exists a sequence of critical values $d_1^{c,\varepsilon}>d_2^{c,\varepsilon}>d_3^{c,\varepsilon}>\cdots>d_k^{c,\varepsilon}>\cdots$ which determines how the vortex pair escapes from the domain. More precisely, if the value of $d_0$ falls into the interval $(d_{2n+1}^{c,\varepsilon},d_{2n}^{c,\varepsilon})$, where $n=0,1,\ldots$, and $d_{0}^{c,\varepsilon}=+\infty$, then the two vortices move out of the domain through the side boundary; otherwise, if it falls into the interval $(d_{2n+2}^{c,\varepsilon},d_{2n+1}^{c,\varepsilon})$, they move out of the domain through the top and bottom boundary. For the RDL, there also exists a corresponding sequence of critical values $\{d_k^{c,r}, k=0,1,\ldots\}$ which determines the trajectory of the vortex pair. We note that it is an interesting problem to find the values of $d_k^{c,\varepsilon}$ and $d_k^{c,r}$ and to study the convergence relation between them. (iii). The motion of the vortex pair exhibits hybrid properties of that in the GLE dynamics and that in the NLSE dynamics with Neumann BC. As shown in previous studies \cite{BT,BT1}, a vortex pair in the GLE dynamics always moves outward along the line connecting the two vortices and finally moves out of the domain, while in the NLSE dynamics the two vortices rotate around each other periodically.
Based on our extensive numerical results, we also found that, for a fixed initial setup, the larger the value of $\beta$, the more rotations the vortex pair makes before exiting the domain; that is, as $\beta$ increases, the motion in the CGLE dynamics becomes closer to that in the NLSE dynamics. On the other hand, the larger the value of $\alpha$, the earlier the vortex pair exits the domain, i.e. the motion in the CGLE dynamics becomes closer to that in the GLE dynamics. This gives strong numerical evidence for our conclusion. (iv). As $\varepsilon\rightarrow 0$, the dynamics of the vortex pair under the CGLE dynamics converges uniformly in time to that of the RDLs before either of the two vortices exits the domain, which numerically validates the RDLs in this case. (v). During the CGLE evolution, the GL functional and its kinetic parts decrease as time increases. They do not change much when $t$ is small and change dramatically when either of the two vortices moves out of the domain. When $t\to \infty$, all three quantities converge to 0 (see Fig. \ref{fig: cgle-vortex-pair-N-dens} (c) \& (d)), which indicates that a constant steady state has been reached, of the form $\phi^\varepsilon({\bf x})=e^{ic_{_0}}$ for ${\bf x}\in\mathcal{D}$ with $c_{_0}$ a constant. \begin{figure} \caption{Contour plots of $|\psi^{\varepsilon}({\bf x},t)|$ at different times when $\varepsilon=\frac{1}{25}$.} \label{fig: cgle-vortex-dipole-N-dens} \end{figure} \subsection{Vortex dipole} \label{sec: cgle_NRfVDN} Here we present numerical results for the interaction of a vortex dipole in the CGLE dynamics with Neumann BC and its corresponding reduced dynamics. We choose the simulation parameters as $M=2$, $n_2=-n_1=1$ and ${\bf x}_2^0=-{\bf x}_1^0=(d_0,0)$ with $0<d_0<1$ in (\ref{initdbc0}). Fig. \ref{fig: cgle-vortex-dipole-N-dens} shows the contour plots of $|\psi^{\varepsilon}({\bf x},t)|$ at different times when $\varepsilon=\frac{1}{25}$, and Fig.
\ref{fig: cgle-vortex-dipole-N} depicts the trajectory of the vortex dipole when $\varepsilon=\frac{1}{25}$ as well as the time evolution of $x_1^{\varepsilon}(t)$ and $d_1^{\varepsilon}(t)$ for different $d_0$ in (\ref{initdbc0}). \begin{figure} \caption{Trajectory of the vortex dipole when $\varepsilon=\frac{1}{25}$, and time evolution of $x_1^{\varepsilon}(t)$ and $d_1^{\varepsilon}(t)$ for different $d_0$.} \label{fig: cgle-vortex-dipole-N} \end{figure} From Figs. \ref{fig: cgle-vortex-dipole-N-dens}-\ref{fig: cgle-vortex-dipole-N} and ample numerical results (not shown here for brevity), we can make the following observations for the interaction of a vortex dipole under the CGLE dynamics with homogeneous Neumann BC: (i). The initial location of the vortices, i.e., the value of $d_0$, affects their motion significantly. (ii). For the CGLE with $\varepsilon$ fixed, there exists a critical value $d_c^{\varepsilon}$ such that: if $d_0>d_c^{\varepsilon}$, the two vortices exit the domain through the side boundary; otherwise, they merge somewhere inside the domain. For the RDL, there also exists a corresponding critical value $d_c^{r}$. We note that it is an interesting problem to find the values $d_c^{\varepsilon}$ and $d_c^{r}$, and to study the convergence relation between them. (iii). As $\varepsilon\rightarrow 0$, the dynamics of the two vortex centers under the CGLE dynamics converges uniformly in time to that of the RDLs before they move out of the domain or merge with each other, which numerically validates the RDLs in this case. \begin{figure} \caption{Trajectory of vortex centers for the interaction of different vortex lattices in the CGLE under Neumann BC with $\varepsilon=\frac{1}{32}$.} \label{fig: cgle-vortex-lattice-N} \end{figure} \subsection{Vortex lattice} \label{sec: cgle_NRfVLN} Here we present numerical results for the interaction of vortex lattices under the CGLE dynamics with Neumann BC. We consider the following 15 cases: case I.
$M=3$, $n_1=n_2=n_3=1$, ${\bf x}_1^0=(0.4, 0)$, ${\bf x}_2^0=(-0.2,\frac{\sqrt{3}}{5})$, ${\bf x}_3^0=(-0.2,-\frac{\sqrt{3}}{5})$; case II. $M=3$, $n_1=n_2=n_3=1$, ${\bf x}_1^0=(-0.4,0.2)$, ${\bf x}_2^0=(0,0.2)$, ${\bf x}_3^0=(0.4,0.2)$; case III. $M=3$, $n_1=n_2=n_3=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(0,0)$, ${\bf x}_3^0=(0.4,0)$; case IV. $M=3$, $-n_1=n_2=n_3=1$, ${\bf x}_1^0=(0.4, 0)$, ${\bf x}_2^0=(-0.2,\frac{\sqrt{3}}{5})$, ${\bf x}_3^0=(-0.2,-\frac{\sqrt{3}}{5})$; case V. $M=3$, $-n_2=n_1=n_3=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(0,0)$, ${\bf x}_3^0=(0.4,0)$; case VI. $M=3$, $-n_2=n_1=n_3=1$, ${\bf x}_1^0=(-0.7,0)$, ${\bf x}_2^0=(0,0)$, ${\bf x}_3^0=(0.7,0)$; case VII. $M=4$, $n_1=n_2=n_3=n_4=1$, ${\bf x}_1^0=(0.4,0)$, ${\bf x}_2^0=(0,0.4)$, ${\bf x}_3^0=(-0.4,0)$, ${\bf x}_4^0=(0,-0.4)$; case VIII. $M=4$, $n_1=n_3=-1$, $n_2=n_4=1$, ${\bf x}_1^0=(0.4,0)$, ${\bf x}_2^0=(0,0.4)$, ${\bf x}_3^0=(-0.4,0)$, ${\bf x}_4^0=(0,-0.4)$; case IX. $M=4$, $n_1=n_3=-1$, $n_2=n_4=1$, ${\bf x}_1^0=(0.59,0)$, ${\bf x}_2^0=(0,0.59)$, ${\bf x}_3^0=(-0.59,0)$, ${\bf x}_4^0=(0,-0.59)$; case X. $M=4$, $n_1=n_3=-1$, $n_2=n_4=1$, ${\bf x}_1^0=(0.7,0)$, ${\bf x}_2^0=(0,0.7)$, ${\bf x}_3^0=(-0.7,0)$, ${\bf x}_4^0=(0,-0.7)$; case XI. $M=4$, $n_2=n_3=-1$, $n_1=n_4=1$, ${\bf x}_1^0=(0.4,0)$, ${\bf x}_2^0=(0,0.4)$, ${\bf x}_3^0=(-0.4,0)$, ${\bf x}_4^0=(0,-0.4)$; case XII. $M=4$, $n_2=n_3=-1$, $n_1=n_4=1$, ${\bf x}_1^0=(0.6,0)$, ${\bf x}_2^0=(0,0.6)$, ${\bf x}_3^0=(-0.6,0)$, ${\bf x}_4^0=(0,-0.6)$; case XIII. $M=4$, $n_1=n_3=-1$, $n_2=n_4=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(-0.4/3,0)$, ${\bf x}_3^0=(0.4/3,0)$, ${\bf x}_4^0=(0.4,0)$; case XIV. $M=4$, $n_1=n_3=-1$, $n_2=n_4=1$, ${\bf x}_1^0=(-0.4,0)$, ${\bf x}_2^0=(-0.4/3,0)$, ${\bf x}_3^0=(0.4/3,0)$, ${\bf x}_4^0=(0.4,0)$; case XV. $M=4$, $n_1=n_3=-1$, $n_2=n_4=1$, ${\bf x}_1^0=(-0.6,0)$, ${\bf x}_2^0=(-0.1,0)$, ${\bf x}_3^0=(0.1,0)$, ${\bf x}_4^0=(0.6,0)$. Fig.
\ref{fig: cgle-vortex-lattice-N} shows the trajectory of the vortex centers for the above 15 cases when $\varepsilon=\frac{1}{32}$, and Fig. \ref{fig: cgle-vortex-lattice-dens-steady-N} depicts the contour plots of $|\psi^\varepsilon|$ for the initial data and the corresponding steady states for cases I, III, V, VI, VII and XIV. From Figs. \ref{fig: cgle-vortex-lattice-N} and \ref{fig: cgle-vortex-lattice-dens-steady-N} and ample numerical experiments (not shown here for brevity), we can make the following observations: (i). The dynamics and interaction of vortex lattices under the CGLE dynamics with Neumann BC depend on the initial alignment of the lattice and the geometry of the domain $\mathcal{D}$. (ii). For a lattice of $M$ vortices, if they all have the same index, then at least $M-1$ vortices move out of the domain in finite time and no collision happens at any time; on the other hand, if they have opposite indices, collisions happen in finite time. After the collisions, the leftover vortices continue to move, and at most one vortex may remain in the domain. When $t$ is sufficiently large, in most cases no vortex is left in the domain; but when the geometry and the initial setup are symmetric and $M$ is odd, one vortex may be left in the domain. (iii). If finally no vortex is left in the domain, the GL functionals vanish as $t\rightarrow\infty$, which indicates that the final steady state admits the form $\phi^{\varepsilon}({\bf x})=e^{ic_0}$ for ${\bf x}\in\mathcal{D}$ with $c_0$ a real constant.
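Observation (ii) involves counting how many vortices remain in the domain. Numerically this can be done (a sketch, not necessarily the authors' method) by computing the total winding number of $\psi^\varepsilon$ along a closed loop inside the domain, which equals the sum of the indices of the enclosed vortices:

```python
import numpy as np

def total_winding(psi_loop):
    # Total topological degree of psi sampled along a closed, positively
    # oriented loop: the total increment of the unwrapped phase over 2*pi.
    # The loop must be sampled finely enough that successive phase jumps
    # stay below pi, otherwise np.unwrap misreads the branch.
    ph = np.unwrap(np.angle(psi_loop))
    return int(round((ph[-1] - ph[0]) / (2 * np.pi)))
```

Sampling a loop near $\partial\mathcal{D}$ at successive times then shows the degree dropping by $2$ at each $\pm1$ collision and by $1$ each time a vortex crosses the boundary.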
\begin{figure} \caption{Contour plots of $|\psi^\varepsilon({\bf x},t)|$ for the initial data and the corresponding steady states for cases I, III, V, VI, VII and XIV.} \label{fig: cgle-vortex-lattice-dens-steady-N} \end{figure} \section{Conclusion}\label{sec: cgle_con} In this paper, we proposed efficient and accurate numerical methods to simulate the complex Ginzburg-Landau equation (CGLE) with a dimensionless parameter $0<\varepsilon<1$ on bounded domains with either Dirichlet or homogeneous Neumann BC, as well as its corresponding reduced dynamical laws (RDLs). Using these numerical methods, we studied numerically vortex dynamics and interaction in the CGLE and compared them with those obtained from the corresponding RDLs under different initial setups. To some extent, we found that vortex dynamics in the CGLE is a hybrid of that in the GLE and that in the NLSE, reflecting the fact that the CGLE interpolates between the GLE and the NLSE. Based on our extensive numerical results, we verified that the dynamics of vortex centers under the CGLE dynamics converges to that of the RDLs as $\varepsilon\to 0$ before the vortices collide and/or move out of the domain. Clearly, once a vortex center moves out of the domain, the reduced dynamical laws are no longer valid; however, the dynamics and interaction of quantized vortices are still physically interesting, and they can be obtained from direct numerical simulations of the CGLE with fixed $\varepsilon>0$ even after the vortices collide and/or move out of the domain. We also identified parameter regimes where the RDLs agree qualitatively and/or quantitatively with the CGLE dynamics, as well as regimes where they fail to agree. Some very interesting nonlinear phenomena related to quantized vortex interactions in the CGLE were also observed from our direct numerical simulations. Different steady state patterns of vortex lattices under the CGLE dynamics were obtained numerically.
From our numerical results, we observed that both the boundary conditions and the domain geometry significantly affect vortex dynamics and interaction, which can exhibit different interaction patterns compared with those in the whole space case \cite{YZWBQD1,YZWBQD2}. \end{document}
\begin{document} \title{Tensoring with infinite-dimensional\\ modules in $\scr O_0$} \author{Johan K\aa hrstr\"om} \maketitle \abstract{We show that the principal block $\scr O_0$ of the BGG category $\scr O$ for a semisimple Lie algebra $\mathfrak g$ acts faithfully on itself via exact endofunctors which preserve tilting modules, via right exact endofunctors which preserve projective modules and via left exact endofunctors which preserve injective modules. The origin of all these functors is tensoring with arbitrary (not necessarily finite-dimensional) modules in the category $\scr O$. We study such functors, describe their adjoints and show that they give rise to a natural (co)monad structure on $\scr O_0$. Furthermore, all this generalises to parabolic subcategories of $\scr O_0$. As an example, we present some explicit computations for the algebra $\mathfrak{sl}_3$.} \section{Introduction} \label{sec:intro} When studying the category $\scr O$ for a semisimple Lie algebra $\mathfrak g$, tensoring with finite dimensional $\mathfrak g$-modules gives rise to a class of functors of high importance, the so called projective functors. These functors were classified in~\cite{bg} and include the ``translation functors'', \cite{jantzen}, which can be used to prove equivalences of certain subcategories of $\scr O$. In the following we study tensoring with arbitrary (not necessarily finite dimensional) modules in $\scr O$. There is an immediate obstacle, namely the fact that, in general, the result is no longer finitely generated (in other words, such functors do not preserve $\scr O$). This can be remedied by projecting onto a fixed block of the category $\scr O$. In particular, by composing with projection to the principal block $\scr O_0$, we obtain a faithful, exact functor $G\colon M\mapsto G_M\mathrel{\mathop:}= M\otimes\underline{\phantom{J}}\delimiter"6223379_0$ from $\scr O_0$ to the category $\End(\scr O_0)$ of endofunctors on $\scr O_0$. 
By defining $F_M$ and $H_M$ to be the left and right adjoints of $G_M$, we obtain a right exact contravariant functor $F\colon M\mapsto F_M$ and a left exact contravariant functor $H\colon M\mapsto H_M$ from $\scr O$ to $\End(\scr O_0)$. In Section~\ref{sec:notation} we introduce the required notions and notation, and provide a setting for studying the tensor product of arbitrary modules in $\scr O$. In Section~\ref{sec:main} we define the three functors, and determine some of their properties. The main properties are given by Theorem~\ref{thm:maintheorem}, which shows that $F_M$ preserves projectives, $G_M$ preserves tilting modules, and $H_M$ preserves injectives, for any $M\in\scr O_0$. In Section~\ref{sec:comonad} we show that the particular functors $G_{\Delta(0)}$ and $G_{\nabla(0)}$ have natural comonad and monad structures, respectively. In Section~\ref{sec:parabolic} we show how the results from the previous section generalize to parabolic subcategories of $\scr O$. Finally, in Section~\ref{sec:example} we compute the `multiplication tables' $G_{M}N$ and $F_{M}N$ for the case $\mathfrak g=\mathfrak{sl}_3(\mathbb{C})$, where $M$ and $N$ run over the simple modules in $\scr O_0$. \noindent{\bf Acknowledgments:} This paper develops some ideas of S.~Ovsienko and V.~Mazorchuk. The author thanks V.~Mazorchuk for his many comments and suggestions. \section{Notation and preliminaries} \label{sec:notation} For any Lie algebra $\mathfrak a$, we let $\enva{\mathfrak a}$ denote its universal enveloping algebra. Fix $\mathfrak g$ to be a finite dimensional complex semisimple Lie algebra, with a chosen triangular decomposition $\mathfrak g = \mathfrak n_-\oplus \mathfrak h\oplus \mathfrak n_+$, let $\mathfrak b = \mathfrak h\oplus \mathfrak n_+$ denote the Borel subalgebra, and let $R$ denote the corresponding root system, with positive roots $R_+$, negative roots $R_-$, and basis $\Pi$.
Let $\scr O$ denote the corresponding BGG-category (see \cite{bgg} for details), which can be defined as the full subcategory of the category of $\mathfrak g$-modules consisting of weight modules that are finitely generated as $\enva{\mathfrak n_-}$-modules. For a weight module $M$, we denote by $M_\lambda$ the subspace of $M$ of weight $\lambda\in\mathfrak h^*$, and by $\Supp M\mathrel{\mathop:}=\{\,\lambda\in\mathfrak h^*\,\vert\,M_\lambda\neq\set{0}\,\}$ the support of $M$. For a weight vector $v\in M$, we denote by $\weight(v)$ the weight of $v$, i.e. $v\in M_{\weight(v)}$. Let $\mathbb{N}_0$ denote the non-negative integers, and let $\leqslant$ denote the natural partial order on $\mathfrak h^*$, i.e. $\lambda\leqslant\mu$ if and only if $\lambda-\mu\in\mathbb{N}_0R_-$. Given an anti-automorphism $\theta\colon\mathfrak g\rightarrow \mathfrak g$ of $\mathfrak g$ we define the corresponding restricted duality $d$ on the category of weight $\mathfrak g$-modules as follows. For a weight $\mathfrak g$-module $M$, let \[ dM\mathrel{\mathop:}= \bigoplus_{\lambda\in\mathfrak h^*}\Hom_{\mathbb{C}}\bigl(M_\lambda, \mathbb{C}\bigr), \] with the action of $\mathfrak g$ given by \[ (xf)(m) \mathrel{\mathop:}= f\bigl(\theta(x)m\bigr), \] for $x\in \mathfrak g$, $f\in dM$ and $m\in M$. We will use two different restricted dualities on weight $\mathfrak g$-modules: the duality given by the anti-automorphism $\mathfrak g\rightarrow \mathfrak g$, $x\mapsto -x$, which we will denote by $M^*$, and the duality given by the Chevalley anti-automorphism, which we will denote by $M^\star$. Note that $\Supp M^\star = \Supp M$, and thus $\star$ preserves the category $\scr O$, whereas $\Supp M^*=-\Supp M$. `The dual of $M$', `$M$ is self-dual' and similar statements will, unless otherwise stated, refer to the $\star$-duality.
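To illustrate the difference between the two dualities (a standard example, added here for orientation), write $\Delta(\lambda)$ for the Verma module with highest weight $\lambda$, as introduced formally below. Then
\[
\Supp \Delta(\lambda)^\star = \Supp\Delta(\lambda) = \lambda+\mathbb{N}_0R_-,
\]
so $\Delta(\lambda)^\star$ is again an object of $\scr O$ (namely the dual Verma module), whereas
\[
\Supp \Delta(\lambda)^* = -\lambda+\mathbb{N}_0R_+ ,
\]
so $\Delta(\lambda)^*$ is a lowest weight module, whose support is bounded from below rather than above, and hence does not lie in $\scr O$.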
Since $\scr O$ is not closed under tensor products (e.g.\ the tensor product of two Verma modules is never finitely generated and hence does not belong to $\scr O$), it is convenient to define the `enlarged' category $\widetilde{\catO}$, as the full subcategory of weight $\mathfrak g$-modules $M$ having the properties \begin{enumerate} \item[(OT1)] there are weights $\lambda_1$, $\dotsc$, $\lambda_k\in\mathfrak h^*$ with \[\Supp M\subseteq \bigcup_{i=1}^k\bigl(\lambda_i+\mathbb{N}_0R_-\bigr),\] \item[(OT2)] $\dim_\mathbb{C} M_\lambda<\infty$ for all $\lambda\in\mathfrak h^*$. \end{enumerate} \begin{lemma} The category $\widetilde{\catO}$ is closed under tensor products. \end{lemma} \begin{proof} Let $M, N\in\widetilde{\catO}$. Then $M\otimes N$ is a weight module, and since \begin{equation} \label{eq:tensorsupp} \Supp (M\otimes N) = \Supp M + \Supp N, \end{equation} it is easy to see that the property (OT1) is preserved under tensor products. Also, \begin{equation} \label{eq:dimotimes} \dim(M\otimes N)_\lambda = \sum_{\mathpalette\mathclapinternal{\substack{\mu\in\Supp M,\\\nu\in\Supp N,\\ \mu+\nu=\lambda}}} \dim M_\mu\cdot \dim N_\nu. \end{equation} By (OT1) the set of pairs $\mu\in\Supp M$, $\nu\in\Supp N$ with $\mu+\nu=\lambda$ is finite for any $\lambda\in\mathfrak h^*$. By (OT2) we have that $\dim M_\mu<\infty$ and $\dim N_\nu<\infty$ for any $\mu$ and $\nu$, so it follows that the right hand side of~\eqref{eq:dimotimes} is finite, i.e. $\dim(M\otimes N)_\lambda<\infty$. \end{proof} \begin{lemma} \label{lem:dualtens} The duality $\star$ commutes with tensor products in $\widetilde{\catO}$, that is \[ (M\otimes N)^\star\cong M^\star\otimes N^\star, \] natural in $M$ and $N$.
\end{lemma} \begin{proof} For $f^\star\in M^\star$ and $g^\star\in N^\star$, let $\psi(f^\star\otimes g^\star)\in (M\otimes N)^\star$ be defined by \[ \psi(f^\star\otimes g^\star)(m\otimes n) \mathrel{\mathop:}= f^\star(m)g^\star(n), \] for $m\in M$ and $n\in N$, and extended bilinearly to a map $M^\star\otimes N^\star\rightarrow (M\otimes N)^\star$. Straightforward verification shows that this is a homomorphism, natural in both $M$ and $N$. Let $m_1, m_2, \dotsc\in M$ and $n_1, n_2, \dotsc\in N$ be bases of weight vectors, and let $m_1^\star, m_2^\star, \dotsc\in M^\star$ and $n_1^\star, n_2^\star, \dotsc\in N^\star$ be the corresponding dual bases. Then we have that $\{\,m_i\otimes n_j\,\vert\,i, j=1, 2, \dotsc\,\}$ is a basis of $M\otimes N$, with the dual basis $\{\,(m_i\otimes n_j)^\star\,\vert\,i, j=1, 2, \dotsc\,\}$. Furthermore, \begin{align*} \psi(m_i^\star\otimes n_j^\star)(m_k\otimes n_l) &= m_i^\star(m_k)n_j^\star(n_l) \\ &= \delta_{ik}\delta_{jl} \\ &= (m_i\otimes n_j)^\star(m_k\otimes n_l), \end{align*} i.e. $\psi(m_i^\star\otimes n_j^\star) = (m_i\otimes n_j)^\star$, so $\psi$ is indeed an isomorphism. \end{proof} Note that $\scr O$ is the full subcategory of $\widetilde{\catO}$ consisting of finitely generated modules, and in particular the simple objects of $\widetilde{\catO}$ and $\scr O$ coincide. For $\lambda\in\mathfrak h^*$, let $L(\lambda)$ denote the simple highest weight module with highest weight $\lambda$, and let $P(\lambda)$ denote the projective cover of $L(\lambda)$. \begin{lemma} \label{lem:findecomp} All modules $M\in\widetilde{\catO}$ admit a (possibly infinite) composition series. Furthermore, for each $\lambda\in\mathfrak h^*$, the number $[M: L(\lambda)]$ of occurrences of $L(\lambda)$ as a composition factor in a composition series is finite and independent of the choice of composition series. 
\end{lemma} \begin{proof} Let $M\in\widetilde{\catO}$, and let $m_1$, $m_2$, $m_3$, $\dotsc$, $\in M$ be a basis of weight vectors such that $\weight(m_i)\leqslant\weight(m_j)$ implies that $j\leq i$. Such a basis exists due to (OT1) and (OT2). For $i\in\mathbb{N}_0$, let $M^{(i)}$ denote the submodule of $M$ generated by $\set{\,m_j\,\vert\,j\leq i\,}$. We thus obtain a series of finitely generated modules \[ \set{0} = M^{(0)} \subseteq M^{(1)}\subseteq M^{(2)}\subseteq M^{(3)}\subseteq\cdots, \] which, since the $m_i$:s constitute a basis of $M$, converges to $M$, i.e. \[ \bigcup_{i=0}^\infty M^{(i)} = M. \] Since the $M^{(i)}$:s are finitely generated, $M^{(i)}\in\scr O$ for all $i\in\mathbb{N}_0$. Thus, since all objects in $\scr O$ have finite length, this series can be refined to a composition series. Now, consider any composition series $(M^{(i)})$ of $M$, let $\lambda\in\mathfrak h^*$ be any weight of $M$, and let $N$ denote the submodule of $M$ generated by the weight space $M_\lambda$. Since $\dim M_\lambda<\infty$ there exists an index $k\in\mathbb{N}$ such that $M_\lambda\subseteq M^{(k)}$, and in particular such that $N$ is a submodule of $M^{(k)}$. Then $\bigl(M^{(i)}/N\bigr)_\lambda=\set{0}$ for all $i\geq k$, so \[ \bigl[(M^{(i)}/N): L(\lambda)\bigr] = 0 \] for all $i\geq k$, and thus \[ [M^{(i)}: L(\lambda)] = [N: L(\lambda)] \] for all $i\geq k$. As $N\in\scr O$, we get that $[M: L(\lambda)] = [N: L(\lambda)]$ is finite and independent of the choice of composition series. \end{proof} Recall that $\scr O$ has a block decomposition \[ \scr O = \bigoplus_{\mathpalette\mathclapinternal{\chi\in \cntr{\mathfrak g}^*}}\scr O_\chi, \] where $\cntr{\mathfrak g}$ denotes the centre of $\enva{\mathfrak g}$ and $\scr O_\chi$ denotes the full subcategory of $\scr O$ consisting of modules $M$ such that for all $z\in \cntr{\mathfrak g}$, $M$ is annihilated by some power of $\bigl(z-\chi(z)\bigr)$.
Hence, each module $M\in\scr O$ decomposes into a direct sum \begin{equation} \label{eq:odec} M = \bigoplus_{\mathpalette\mathclapinternal{\chi\in \cntr{\mathfrak g}^*}}M_\chi, \end{equation} where $M_\chi\in\scr O_\chi$ and $M_\chi\neq\set{0}$ for only finitely many $\chi$. From Lemma~\ref{lem:findecomp} it follows that we get a similar block decomposition for $\widetilde{\catO}$, where each module $M\in\widetilde{\catO}$ decomposes as in~\eqref{eq:odec}, but with possibly countably many non-zero summands (and with some restrictions on the weight spaces of the non-zero summands). This is similar to the situation for $\scr O$-like categories over a Kac-Moody algebra, see for example~\cite{neidhardt1, rc-w}. More precisely, we have the following. \begin{lemma} For all $M\in\widetilde{\catO}$ and all $\chi\in\cntr{\mathfrak g}^*$ there are unique modules (up to isomorphism) $M_1\in\scr O_\chi$, $M_2\in\widetilde{\catO}$, with $[M_2:L(\mu)]=0$ for all $\mu\in\mathfrak h^*$ with $L(\mu)\in\scr O_\chi$, such that \[ M \cong M_1\oplus M_2. \] \end{lemma} \begin{proof} Recall that, for two $\mathfrak g$-modules $K$ and $N$, the trace $\Tr_KN$ is defined as the sum of the images of all homomorphisms from $K$ to $N$. Now, let \[ M_1 \mathrel{\mathop:}= \sum_{\mathpalette\mathclapinternal{\substack{\lambda\in\mathfrak h^*,\\ P(\lambda)\in\scr O_\chi}}} \Tr_{P(\lambda)}M, \] and \[ M_2 \mathrel{\mathop:}= \sum_{\mathpalette\mathclapinternal{\substack{\lambda\in\mathfrak h^*,\\ P(\lambda)\not\in\scr O_\chi}}} \Tr_{P(\lambda)}M. \] As $\scr O$ has enough projectives, from the proof of Lemma~\ref{lem:findecomp} it follows that $M = M_1 + M_2$. Since the central characters occurring in $M_2$ are different from $\chi$, this sum must be direct.
\end{proof} For each $\chi\in \cntr{\mathfrak g}^*$ we thus obtain an exact projection functor $\underline{\phantom{J}}\delimiter"6223379_\chi\colon\widetilde{\catO}\rightarrow\scr O_\chi$, such that \begin{equation} \label{eq:otdec} M = \bigoplus_{\chi\in \cntr{\mathfrak g}^*}M\delimiter"6223379_\chi \end{equation} for any $M\in\widetilde{\catO}$. \begin{lemma} The tensor product commutes with infinite direct sums in $\widetilde{\catO}$. \end{lemma} \begin{proof} Let $N$, $M_1$, $M_2$, $\dotsc\in\widetilde{\catO}$ with \[ \bigoplus_{i=1}^\infty M_i\in\widetilde{\catO}, \] let $n_1$, $n_2$, $\dots\in N$ be a basis of $N$ and let $m_1^{(i)}$, $m_2^{(i)}$, $\dots\in M_i$ be a basis of $M_i$ for each $i\in\mathbb{N}$. Then it is immediate that \[ \bigl\{\,m_j^{(i)}\otimes n_k\,\big\vert\,i, j, k\in\mathbb{N}\,\bigr\} \] constitute a basis of both \[ \bigl(M_1\oplus M_2\oplus \dotsb\bigr)\otimes N \] and \[ (M_1\otimes N)\oplus (M_2\otimes N)\oplus\dotsb, \] giving the required isomorphism. \end{proof} For $\lambda\in\mathfrak h^*$, we denote by $\Delta(\lambda)$ the corresponding Verma module with highest weight $\lambda$, and $\nabla(\lambda)\mathrel{\mathop:}=\Delta(\lambda)^\star$ the corresponding dual Verma module. Let $\scr F(\Delta)$ and $\scr F(\nabla)$ denote the categories of modules $M\in\scr O$ having a Verma filtration and dual Verma filtration, respectively, and let $\scr T=\scr F(\Delta)\cap\scr F(\nabla)$ denote the category of tilting modules (see~\cite{ringel} for more details). Let $\widetilde{\scr F}(\Delta)$, $\widetilde{\scr F}(\nabla)$ and $\widetilde{\scr T}$ denote the corresponding categories for $\widetilde{\catO}$. As $\star$ commutes with direct sums, the decomposition~\eqref{eq:otdec} implies that $M\in\widetilde{\scr F}(\Delta)$ if and only if $M^\star\in\widetilde{\scr F}(\nabla)$. 
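To see these notions in action, here is a worked example for $\mathfrak g=\mathfrak{sl}_2(\mathbb{C})$ with positive root $\alpha$ (an illustration added for orientation, not part of the original text). Every weight space of an $\mathfrak{sl}_2$-Verma module is one-dimensional, so \eqref{eq:dimotimes} gives
\[
\dim\bigl(\Delta(\lambda)\otimes\Delta(\mu)\bigr)_{\lambda+\mu-n\alpha} = n+1,
\qquad n\in\mathbb{N}_0 .
\]
These multiplicities are finite, as (OT2) requires, but unbounded in $n$; since a finitely generated $\enva{\mathfrak n_-}=\mathbb{C}[f]$-module has weight multiplicities bounded by the number of generators, $\Delta(\lambda)\otimes\Delta(\mu)$ lies in $\widetilde{\catO}$ but not in $\scr O$. Moreover, by the freeness established in Proposition~\ref{prop:preservestilt} below, $\Delta(\lambda)\otimes\Delta(\mu)$ is free over $\mathbb{C}[f]$ with basis $\{f^pv_\lambda\otimes v_\mu\}_{p\in\mathbb{N}_0}$, and hence has a Verma filtration with subquotients $\Delta(\lambda+\mu-p\alpha)$, $p\in\mathbb{N}_0$, each occurring once. The weights $\lambda+\mu-p\alpha$ meet each dot-orbit of the Weyl group at most twice, so infinitely many central characters occur, and the decomposition~\eqref{eq:otdec} of $\Delta(\lambda)\otimes\Delta(\mu)$ has infinitely many non-zero summands.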
Note also that $\scr F(\Delta)$ and $\widetilde{\scr F}(\Delta)$ can be characterised as the objects in $\scr O$ and $\widetilde{\catO}$ respectively which are free as $\enva{\mathfrak n_-}$-modules. Similar to the situation in $\scr O$, we have the following result for $\widetilde{\catO}$ concerning tensor products involving (dual) Verma modules and tilting modules. \begin{proposition} \label{prop:preservestilt} For any $M\in\widetilde{\catO}$, $N\in\widetilde{\scr F}(\Delta)$, $K\in\widetilde{\scr F}(\nabla)$ and $T\in\widetilde{\scr T}$ we have $M\otimes N\in\widetilde{\scr F}(\Delta)$, $M\otimes K\in\widetilde{\scr F}(\nabla)$ and $M\otimes T\in\widetilde{\scr T}$. \end{proposition} \begin{proof} To show $M\otimes N\in\widetilde{\scr F}(\Delta)$, it suffices to show that $M\otimes N\in\widetilde{\scr F}(\Delta)$ for any $N\in\scr F(\Delta)$, since the general case then follows from the fact that any module in $\widetilde{\scr F}(\Delta)$ decomposes into a direct sum of modules in $\scr F(\Delta)$. Let $m_1$, $m_2$, $\dotsc$ $\in M$ be a basis of $M$ constructed as in the proof of Lemma~\ref{lem:findecomp} and let $v_1, \dotsc, v_k\in N$ be a basis of $N$ as a $\enva{\mathfrak n_-}$-module consisting of weight vectors. We will now show that $M\otimes N$ is $\enva{\mathfrak n_-}$-free with the basis $B\mathrel{\mathop:}=\{\,m_i\otimes v_j\,\vert\,i \in\mathbb{N}, 1\leq j\leq k\,\}$. We start by showing that $B$ generates $M\otimes N$ as a $\enva{\mathfrak n_-}$-module. A set that certainly generates $M\otimes N$ over $\enva{\mathfrak n_-}$ is \[ \bar B\mathrel{\mathop:}= \bigl\{\, m_i\otimes(uv_j)\,\big\vert\,i\in\mathbb{N}, u\in\enva{\mathfrak n_-}, 1\leq j\leq k\,\bigr\}, \] since $\{m_1, m_2, \dotsc\}$ is a basis of $M$ and \[ \sum_{j=1}^k\{\,uv_j\,\vert\,u\in\enva{\mathfrak n_-}\,\}=N. \] We will show, by induction on the degree of $u$, that $\bar B$ is contained in the $\enva{\mathfrak n_-}$-submodule generated by $B$. So, consider an element $m_i\otimes(uv_j)\in \bar B$.
If $u$ has degree $0$, then $u$ is a scalar, so $m_i\otimes(uv_j)=u(m_i\otimes v_j)$ is in the $\enva{\mathfrak n_-}$-submodule generated by $B$. Now assume $u$ has degree $d\geq 1$. Then \[ m_i\otimes(uv_j) = u(m_i\otimes v_j) + \sum_{l}(u'_lm_i)\otimes(u''_lv_j), \] for some elements $u'_l, u''_l\in\enva{\mathfrak n_-}$ with degree strictly less than $d$. Since we can rewrite the elements $u'_lm_i$ as linear combinations of $m_1, m_2, \dotsc$, the right hand side lies in the $\enva{\mathfrak n_-}$-submodule generated by $B$, by the induction hypothesis. Hence $\bar B$ is contained in the $\enva{\mathfrak n_-}$-submodule generated by $B$, so $B$ generates $M\otimes N$ as a $\enva{\mathfrak n_-}$-module. To see that $M\otimes N$ is $\enva{\mathfrak n_-}$-free with basis $B$, let $L_l$ denote the $\enva{\mathfrak n_-}$-submodule of $M\otimes N$ generated by \[ \bigl\{\,m_i\otimes v_j\,\big\vert\,i\in\mathbb{N}, i\leq l, 1\leq j\leq k\,\bigr\}, \] and let $\bar L_l$ denote the $\enva{\mathfrak n_-}$-submodule of $M\otimes N$ generated by \[ \bigl\{\,m_l\otimes v_j\,\vert\,1\leq j\leq k\,\bigr\}. \] By straightforward induction we see that any non-zero element in $L_l$ has a summand of the form $m_i\otimes n$ for some $1\leq i\leq l$, $n\in N$. On the other hand, no element of $\bar L_{l+1}$ has such a summand by the ordering of the $m_i$'s, and hence we have \[ L_{l+1} = \bar L_{l+1}\oplus L_l. \] Thus \[ M\otimes N = \bigoplus_{l=1}^\infty \bar L_l \] as a $\enva{\mathfrak n_-}$-module. Finally, we note that $\bar L_l$ is $\enva{\mathfrak n_-}$-free with the generators \[ \bigl\{\,m_l\otimes v_j\,\vert\,1\leq j\leq k\,\bigr\}, \] since $u(m_l\otimes v_j)$ has a summand of the form $m_l\otimes (uv_j)$ for all $u\in\enva{\mathfrak n_-}$. Hence $M\otimes N$ is $\enva{\mathfrak n_-}$-free, i.e. $M\otimes N\in\widetilde{\scr F}(\Delta)$.
To show that $M\otimes K\in\widetilde{\scr F}(\nabla)$, note that since $K^\star\in\widetilde{\scr F}(\Delta)$, by the previous paragraph we have $M^\star\otimes K^\star\in\widetilde{\scr F}(\Delta)$. By Lemma~\ref{lem:dualtens}, $\star$ commutes with tensor products, i.e. $M^\star\otimes K^\star=(M\otimes K)^\star$, and hence $M\otimes K\in\widetilde{\scr F}(\nabla)$. Finally, since $\widetilde{\scr T}=\widetilde{\scr F}(\Delta)\cap\widetilde{\scr F}(\nabla)$, from the first two statements it follows that $M\otimes T\in\widetilde{\scr T}$ for all $M\in\widetilde{\catO}$ and $T\in\widetilde{\scr T}$. \end{proof} \begin{corollary} For $M\in\widetilde{\scr F}(\Delta)$ and $N\in\widetilde{\scr F}(\nabla)$ we have $M\otimes N\in\widetilde{\scr T}$. Furthermore, if $\lambda_1, \lambda_2, \dotsc\in\mathfrak h^*$ and $\mu_1, \mu_2, \dotsc\in\mathfrak h^*$ are the highest weights, with multiplicities, of the Verma (respectively dual Verma) modules occurring in the Verma and dual Verma filtrations of $M$ and $N$, then \[ M\otimes N \cong \bigoplus_{i, j=1}^\infty\Delta(\lambda_i)\otimes \nabla(\mu_j). \] \end{corollary} \begin{proof} By Proposition~\ref{prop:preservestilt}, $M\otimes N\in\widetilde{\scr F}(\Delta)\cap\widetilde{\scr F}(\nabla)=\widetilde{\scr T}$. Furthermore, $\Delta(\lambda)\otimes\nabla(\mu)\in\widetilde{\scr T}$ for all $\lambda, \mu\in\mathfrak h^*$. Since the tensor product is taken over a field, $M\otimes N$ has a filtration with subquotients $\Delta(\lambda_i)\otimes\nabla(\mu_j)$, and since these are all tilting, the second statement now follows from the fact that tilting modules do not have self-extensions~\cite[Corollary 3]{ringel}. \end{proof} Following \cite{fiebig}, for $\lambda\in\mathfrak h^*$ and any weight module $M$ we define \[ M^{\leqslant\lambda}\mathrel{\mathop:}= M/M^{\nleqslant\lambda}, \] where $M^{\nleqslant\lambda}$ is the submodule of $M$ generated by all the weight spaces $M_\mu$ with $\mu\not\leqslant\lambda$.
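Before proceeding, we illustrate this truncation in the $\mathfrak{sl}_2$ case; the computation is elementary and included only for orientation.

\begin{remark}
Let $\mathfrak g=\mathfrak{sl}_2$, with weights identified with integers, and take $\lambda=-2$. All weights of modules in the principal block lie in $\{0, -2, -4, \dotsc\}$, so for such a module the condition $\mu\nleqslant -2$ forces $\mu=0$. The weight space $\Delta(0)_0$ contains the highest weight vector and thus generates $\Delta(0)$, whence $\Delta(0)^{\leqslant -2}=0$; on the other hand, $\nabla(0)_0$ spans the socle $L(0)\subseteq\nabla(0)$, whence $\nabla(0)^{\leqslant -2}=\nabla(0)/L(0)\cong\nabla(-2)$. Applying $\underline{\phantom{J}}^{\leqslant -2}$ to the short exact sequence $0\rightarrow L(-2)\rightarrow\Delta(0)\rightarrow L(0)\rightarrow 0$ yields $L(-2)\rightarrow 0\rightarrow 0$, so this construction is not left exact in general.
\end{remark}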
\begin{lemma} The assignment $\underline{\phantom{J}}^{\leqslant\lambda}\colon M\mapsto M^{\leqslant\lambda}$ defines a right exact functor on the category of weight $\mathfrak g$-modules. \end{lemma} \begin{proof} Let $M$ and $N$ be weight $\mathfrak g$-modules, and let $\varphi:M\rightarrow N$ be a homomorphism. Since homomorphisms preserve weights, $\varphi$ maps each weight space $M_\mu$ with $\mu\not\leqslant\lambda$ into $N_\mu$, and hence $\varphi\bigl(M^{\nleqslant\lambda}\bigr)\subseteq N^{\nleqslant\lambda}$. We thus obtain an induced homomorphism \[ \varphi^{\leqslant\lambda}\colon M^{\leqslant\lambda}\rightarrow N^{\leqslant\lambda}. \] It is immediate that $(\Id_{M})^{\leqslant\lambda} = \Id_{M^{\leqslant\lambda}}$ and $(\varphi\circ\psi)^{\leqslant\lambda}=\varphi^{\leqslant\lambda}\circ\psi^{\leqslant\lambda}$, so $\underline{\phantom{J}}^{\leqslant\lambda}$ is indeed a functor. Now, consider an exact sequence \[ K\xrightarrow{\psi}M\xrightarrow{\varphi}N\rightarrow 0 \] of weight $\mathfrak g$-modules. For any element $n+N^{\nleqslant\lambda}\in N^{\leqslant\lambda}$, there is an element $m\in M$ with $\varphi(m)=n$, so \[ \varphi^{\leqslant\lambda}\bigl(m+M^{\nleqslant\lambda}\bigr) = n+N^{\nleqslant\lambda}, \] and thus $\varphi^{\leqslant\lambda}$ is surjective. Finally, consider an element \[ m+M^{\nleqslant\lambda}\in\ker\varphi^{\leqslant\lambda}, \] i.e.\ $\varphi(m)\in N^{\nleqslant\lambda}$. Since $\varphi$ is surjective and weight-preserving, we have $\varphi(M^{\nleqslant\lambda})=N^{\nleqslant\lambda}$, so there is an element $\tilde m\in M^{\nleqslant\lambda}$ with $\varphi(\tilde m) = \varphi(m)$. Now let $m' = m-\tilde m$. Since \[ \varphi(m') = \varphi(m)-\varphi(\tilde m) = 0 \] we have $m'\in\ker\varphi$, and since $\tilde m\in M^{\nleqslant\lambda}$ we have \[ m'+M^{\nleqslant\lambda}=m+M^{\nleqslant\lambda}.
\] By exactness, there is an element $k\in K$ with $\psi(k)=m'$, so \[ \psi^{\leqslant\lambda}\bigl(k+K^{\nleqslant\lambda}\bigr) = m'+M^{\nleqslant\lambda} = m+M^{\nleqslant\lambda}. \] Hence $\im\psi^{\leqslant\lambda}=\ker\varphi^{\leqslant\lambda}$, and thus $\underline{\phantom{J}}^{\leqslant\lambda}$ is right exact. \end{proof} \begin{proposition} \label{prop:freeproj} Let $M$ be an $\enva{\mathfrak n_-}$-free module, say \[ M = \bigoplus_{i\in I}\enva{\mathfrak n_-}v_i \] as an $\enva{\mathfrak n_-}$-module with $\set{\,v_i\,\vert\,i\in I\,}$ being weight vectors. Then \[ M^{\leqslant\lambda} \cong \bigoplus_{\substack{i\in I,\\ \mathpalette\mathclapinternal{\weight(v_i)\leqslant\lambda}}} \enva{\mathfrak n_-}v_i \] as an $\enva{\mathfrak n_-}$-module. \end{proposition} \begin{proof} We claim that \[ M^{\nleqslant\lambda} = \sum_{\substack{i\in I,\\ \mathpalette\mathclapinternal{\weight(v_i)\nleqslant\lambda}}} \enva{\mathfrak n_-}v_i. \] To show this, let $N$ denote the set on the right hand side. We first show that $N$ is indeed a submodule of $M$, i.e. closed under the action of $\enva{\mathfrak g}$. By the Poincar\'e-Birkhoff-Witt Theorem we know that $\enva{\mathfrak g} = \enva{\mathfrak n_-}\,\enva{\mathfrak b}$, so \begin{align*} \enva{\mathfrak g}N &= \sum_{\substack{i\in I,\\ \mathpalette\mathclapinternal{\weight(v_i)\nleqslant\lambda}}} \enva{\mathfrak g}\,\enva{\mathfrak n_-}v_i \\ &= \sum_{\substack{i\in I,\\ \mathpalette\mathclapinternal{\weight(v_i)\nleqslant\lambda}}} \enva{\mathfrak g}v_i \\ &= \sum_{\substack{i\in I,\\ \mathpalette\mathclapinternal{\weight(v_i)\nleqslant\lambda}}} \enva{\mathfrak n_-}\,\enva{\mathfrak b}v_i \\ &\stackrel{(*)}{=} \sum_{\substack{i\in I,\\ \mathpalette\mathclapinternal{\weight(v_i)\nleqslant\lambda}}} \enva{\mathfrak n_-}v_i \\ &= N, \end{align*} where (*) holds since if $\weight(v_i)\nleqslant\lambda$, then $\mu\nleqslant\lambda$ for any $\mu\in\Supp(\enva{\mathfrak b}v_i)$. Moreover, $N$ contains every weight space $M_\mu$ with $\mu\nleqslant\lambda$: the component of such a weight space in a summand $\enva{\mathfrak n_-}v_i$ with $\weight(v_i)\leqslant\lambda$ vanishes, since all weights of that summand are $\leqslant\lambda$. Hence $M^{\nleqslant\lambda}\subseteq N$; conversely, $v_i\in M^{\nleqslant\lambda}$ whenever $\weight(v_i)\nleqslant\lambda$, so $N\subseteq M^{\nleqslant\lambda}$, which proves the claim.
Thus, as a $\enva{\mathfrak n_-}$-module, we have \[ M^{\nleqslant\lambda} = \bigoplus_{\substack{i\in I,\\ \mathpalette\mathclapinternal{\weight(v_i)\nleqslant\lambda}}} \enva{\mathfrak n_-}v_i, \] and hence we get that \[ M^{\leqslant\lambda} \cong \bigoplus_{\substack{i\in I,\\ \mathpalette\mathclapinternal{\weight(v_i)\leqslant\lambda}}} \enva{\mathfrak n_-}v_i \] as an $\enva{\mathfrak n_-}$-module. \end{proof} \begin{proposition} \label{prop:fpreservesdelta} For any $M\in\scr F(\Delta)$, $N\in\widetilde{\catO}$ and $\lambda\in\mathfrak h^*$ we have that $(M\otimes N^*)^{\leqslant\lambda}\in\scr F(\Delta)$. \end{proposition} \begin{proof} Let $m_1, \dotsc, m_k\in M$ be a basis of $M$ as a $\enva{\mathfrak n_-}$-module consisting of weight vectors, and let $n_1, n_2, \dotsc\in N$ be a basis of $N$ constructed as in the proof of Lemma~\ref{lem:findecomp}. By an argument completely analogous to the case where $N$ is finite dimensional (see for instance the proof of Theorem~2.2 in \cite{jantzen}), it follows that $M\otimes N^*$ is $\enva{\mathfrak n_-}$-free over the set \[ B=\bigl\{\,m_i\otimes n_j^*\,\vert\,1\leq i\leq k, j\in\mathbb{N}\,\bigr\}. \] By Proposition~\ref{prop:freeproj} it follows that $(M\otimes N^*)^{\leqslant\lambda}$ is $\enva{\mathfrak n_-}$-free, with a $\enva{\mathfrak n_-}$-basis consisting of the vectors in $B$ satisfying $\weight(m_i\otimes n_j^*)\leqslant\lambda$. Since $N\in\widetilde{\catO}$, the number of such vectors is finite, and hence $(M\otimes N^*)^{\leqslant\lambda}\in\scr O$, i.e.\ $(M\otimes N^*)^{\leqslant\lambda}\in\scr F(\Delta)$. \end{proof} \begin{corollary} \label{cor:tensdual} For each $M\in\scr O$, $N\in\widetilde{\catO}$ and $\lambda\in\mathfrak h^*$ we have \[ (M\otimes N^*)^{\leqslant\lambda}\in\scr O. \] \end{corollary} \begin{proof} Let $P\in\scr O$ be the projective cover of $M$.
As $\underline{\phantom{J}}^{\leqslant\lambda}$ is right exact, it suffices to prove that $(P\otimes N^*)^{\leqslant\lambda}\in\scr O$. But this follows from Proposition~\ref{prop:fpreservesdelta}, since every projective in $\scr O$ has a Verma flag. \end{proof} \section{The functors} \label{sec:main} We now restrict our attention to the principal block $\scr O_0$, i.e. the indecomposable block containing the trivial module $L(0)$. Let $\PFun(\scr O_0)$, $\TFun(\scr O_0)$ and $\IFun(\scr O_0)$ denote the categories of endofunctors on $\scr O_0$ which preserve the additive subcategories of projective, tilting and injective modules, respectively. Furthermore, let $\scr F_0(\Delta)=\scr F(\Delta)\cap\scr O_0$, and define $\scr F_0(\nabla)$ and $\scr T_0$ similarly. This section will be devoted to proving the following theorem, the main result of this paper, along with some of its consequences. \begin{theorem} \label{thm:maintheorem} There exist faithful functors \begin{align*} F&\colon\scr O_0\hookrightarrow\PFun(\scr O_0)^{\text{op}}, M\mapsto F_M,\\ G&\colon\scr O_0\hookrightarrow\TFun(\scr O_0), M\mapsto G_M,\\ H&\colon\scr O_0\hookrightarrow\IFun(\scr O_0)^{\text{op}}, M\mapsto H_M, \end{align*} all three satisfying $X_M\cong X_N$ if and only if $M\cong N$ (where $X=F, G, H$). \end{theorem} For $M\in\scr O_0$, we define the functor $G_M:\scr O_0\rightarrow\scr O_0$ by \[ G_MN \mathrel{\mathop:}= (M\otimes N)\delimiter"6223379_0 \] on objects, and \begin{align*} G_M\varphi&\colon G_MK\rightarrow G_ML, \\ G_M\varphi&= (\text{Id}_M\otimes \varphi)\delimiter"6223379_0 \mathrel{\mathop:}= \pi_{G_ML}\circ (\text{Id}_M\otimes \varphi) \circ \iota_{G_M K}, \end{align*} on morphisms $\varphi\colon K\rightarrow L$, where $\pi_{G_ML}\colon M\otimes L\twoheadrightarrow (M\otimes L)\delimiter"6223379_0$ and $\iota_{G_MK}\colon (M\otimes K)\delimiter"6223379_0\hookrightarrow M\otimes K$ denote the natural projection and inclusion. 
This defines $G_M$ as an endofunctor on $\scr O_0$. \begin{remark} \label{rem:ntransfactors} By central character considerations (i.e. from the fact that $G_ML\in\scr O_0$), it follows that $\pi_{G_ML}\circ(\text{Id}_M\otimes \varphi)$ factors through $G_M\varphi$, i.e. the diagram \[ \includegraphics{ntransfactors.1} \] commutes. \end{remark} For a homomorphism $\varphi\colon M\rightarrow N$ between two objects $M, N\in\scr O_0$, we define the corresponding natural transformation $G_{\varphi}\colon G_M\rightarrow G_N$ by \begin{align*} G_{\varphi}K&\colon G_MK\rightarrow G_NK, \\ G_{\varphi}K&\mathrel{\mathop:}= (\varphi\otimes\text{Id}_K)\delimiter"6223379_0, \end{align*} for $K\in\scr O_0$. This defines $G$ as a functor from the category $\scr O_0$ to the category of endofunctors on $\scr O_0$. Since both $M\otimes\underline{\phantom{J}}$ and $\underline{\phantom{J}}\delimiter"6223379_0$ are exact (as the tensor product is over a field), it follows that $G_M$ is exact. Recall that the category $\scr O_0$ is equivalent to $A$-mod, the category of $A$-modules, for some finite dimensional algebra $A$ (see~\cite{bgg}). Hence $G_M$ can be seen as an exact functor on $A$-mod, and in particular $G_M$ is right exact on $A$-mod. By abstract theory (e.g.\ \cite[Theorem~2.3]{bass}), $G_M$ is naturally isomorphic to a functor of the form $\overline M\otimes_A\underline{\phantom{J}}$ for some $A$-bimodule $\overline M$. We define \[ H_M\mathrel{\mathop:}=\Hom_A(\overline M, \underline{\phantom{J}}), \] the right adjoint of $G_M$. The dual $\star$ is a self-adjoint contravariant endo\-functor on $\scr O_0$, so for any modules $K, L\in\scr O_0$ we have the following natural isomorphisms \begin{align*} \Hom_{\scr O_0}\bigl(L,(G_{M^\star}K^\star)^\star\bigr) &\cong \Hom_{\scr O_0}(G_{M^\star}K^\star, L^\star) \\ &\cong \Hom_{\scr O_0}(K^\star, H_{M^\star}L^\star) \\ &\cong \Hom_{\scr O_0}\bigl((H_{M^\star}L^\star)^\star, K\bigr).
\end{align*} Furthermore, since $\star$ commutes with direct sums and tensor products, we see that \[ (G_{M^\star}K^\star)^\star = \bigl((M^\star\otimes K^\star)\delimiter"6223379_0\bigr)^\star = (M\otimes K)\delimiter"6223379_0 = G_MK. \] Thus $\star\circ H_{M^\star}\circ\star$ is the left adjoint of $G_M$, and we define \begin{equation} \label{eq:fmdef} F_M\mathrel{\mathop:}= \star\circ H_{M^\star}\circ\star. \end{equation} \begin{proposition} For any $M\in\scr O_0$ we have that $F_M\in\PFun(\scr O_0)$, $G_M\in\TFun(\scr O_0)$ and $H_M\in\IFun(\scr O_0)$. \end{proposition} \begin{proof} That $G_M\in\TFun(\scr O_0)$ follows from Proposition~\ref{prop:preservestilt}. Assume that $P\in\scr O_0$ is projective, i.e. the functor $\Hom(P, \underline{\phantom{J}})$ is exact. We need to show that $F_MP$ is projective, i.e. that $\Hom(F_MP, \underline{\phantom{J}})$ is exact. But \[ \Hom(F_MP, \underline{\phantom{J}})\cong\Hom(P, G_M\underline{\phantom{J}}), \] and the right hand side is the composition of two exact functors, so it is exact. The statement $H_M\in\IFun(\scr O_0)$ follows by duality. \end{proof} \begin{theorem} \label{thm:gadj} The left adjoint $F_M$ of $G_M$ is given by \[ F_MN = (M^*\otimes N)^{\leqslant 0}\big\delimiter"6223379_0, \] and the right adjoint $H_M$ by \[ H_M = \star\circ F_{M^\star}\circ\star. \] \end{theorem} \begin{proof} The second statement follows immediately from the definition \eqref{eq:fmdef}. The proof of the first assertion is a slight variation of the proof of Proposition~5.1 in \cite{fiebig}, also due to Fiebig. We begin by showing that we have a natural isomorphism \[ \Hom_{\mathfrak g}(M^*\otimes K, L)\cong\Hom_{\mathfrak g}(K, M\otimes L) \] for all $K, L, M\in\scr O_0$. Let $m_1, m_2, \dotsc\in M$ be a basis consisting of weight vectors, and let $m_1^*, m_2^*, \dotsc\in M^*$ denote the corresponding dual basis. 
For $f\in\Hom_{\mathfrak g}(M^*\otimes K, L)$, define $\hat f\in\Hom_{\mathfrak g}(K, M\otimes L)$ by \[ \hat f(k) \mathrel{\mathop:}= \sum_{i}m_i\otimes f(m_i^*\otimes k). \] Since $\Supp L\leqslant 0$, the sum on the right hand side is finite: indeed, $f(m_i^*\otimes k)=0$ for all $i$ with \[ \weight(m_i^*\otimes k)\nleqslant 0. \] For $g\in\Hom_{\mathfrak g}(K, M\otimes L)$, define $\tilde g\in\Hom_{\mathfrak g}(M^*\otimes K, L)$ by \[ \tilde g(m_i^*\otimes k)\mathrel{\mathop:}=\sum_jm_i^*(m_j)\cdot l_j, \] where $g(k)=\sum_jm_j\otimes l_j$ for some weight vectors $l_j\in L$, with $l_j=0$ for almost all $j$. The maps $\hat\cdot$ and $\tilde\cdot$ are indeed inverse to each other, since \[ \tilde{\hat f}(m_i^*\otimes k) = \sum_jm_i^*(m_j)\cdot f(m_j^*\otimes k) = f(m_i^*\otimes k), \] and \[ \hat{\tilde g}(k) = \sum_im_i\otimes \tilde g(m_i^*\otimes k) = \sum_{i, j}m_i\otimes \bigl(m_i^*(m_j)\cdot l_j\bigr) = \sum_im_i\otimes l_i = g(k), \] where again $g(k)=\sum_jm_j\otimes l_j$. Hence \[ \Hom_{\mathfrak g}(M^*\otimes K, L)\cong\Hom_{\mathfrak g}(K, M\otimes L), \] as claimed. As we saw above, any element $f\in\Hom_{\mathfrak g}(M^*\otimes K, L)$ is zero on $(M^*\otimes K)^{\nleqslant 0}$, and hence $f$ factors uniquely through $(M^*\otimes K)^{\leqslant 0}$, so \[ \Hom_{\mathfrak g}\bigl((M^*\otimes K)^{\leqslant 0}, L\bigr) \cong \Hom_{\mathfrak g}(M^*\otimes K, L). \] Also, since $L\in\scr O_0$, any element in $\Hom_{\mathfrak g}\bigl((M^*\otimes K)^{\leqslant 0}, L\bigr)$ is zero on any block of $(M^*\otimes K)^{\leqslant 0}$ outside of $\scr O_0$, so \[ \Hom_{\mathfrak g}\bigl((M^*\otimes K)^{\leqslant 0}, L\bigr) \cong \Hom_{\mathfrak g}\bigl((M^*\otimes K)^{\leqslant 0}\big\delimiter"6223379_0, L\bigr). \] Similarly, since $K\in\scr O_0$ we have \[ \Hom_{\mathfrak g}(K, M\otimes L) \cong \Hom_{\mathfrak g}(K, M\otimes L\delimiter"6223379_0).
\] Thus we have obtained a chain of natural isomorphisms \begin{align*} \Hom_{\mathfrak g}(F_MK, L) &= \Hom_{\mathfrak g}\bigl((M^*\otimes K)^{\leqslant 0}\big\delimiter"6223379_0, L\bigr) \cong \Hom_{\mathfrak g}\bigl((M^*\otimes K)^{\leqslant 0}, L\bigr) \\ &\cong \Hom_{\mathfrak g}(M^*\otimes K, L) \cong \Hom_{\mathfrak g}(K, M\otimes L) \\ &\cong \Hom_{\mathfrak g}(K, M\otimes L\delimiter"6223379_0) =\Hom_{\mathfrak g}(K, G_ML). \end{align*} \end{proof} \begin{corollary} $F$ and $H$ are contravariant functors, right and left exact respectively, from the category $\scr O_0$ to the category of endofunctors on $\scr O_0$. \end{corollary} \begin{proof} For $M, N\in\scr O_0$, we have by Theorem~\ref{thm:gadj} that \[ F_MN = (M^*\otimes N)^{\leqslant 0}\delimiter"6223379_0. \] Analogous to the definition of $G$, for a homomorphism $\varphi\colon M\rightarrow K$ between objects $M, K\in\scr O_0$ we define the corresponding natural transformation $F_\varphi\colon F_K\rightarrow F_M$ by \[ F_\varphi N\mathrel{\mathop:}= (\varphi^*\otimes \text{Id}_N)^{\leqslant 0}\delimiter"6223379_0 \colon F_KN \rightarrow F_MN. \] Hence, fixing $N\in\scr O_0$, and denoting by $F_{\underline{\phantom{J}}}N$ the assignment \begin{align*} F_{\underline{\phantom{J}}}N\colon x&\mapsto F_xN \end{align*} ($x$ being an object or morphism of $\scr O_0$), we see that \[ F_{\underline{\phantom{J}}}N = (\underline{\phantom{J}}\delimiter"6223379_0)\circ(\underline{\phantom{J}}^{\leqslant0}) \circ(\underline{\phantom{J}}\otimes N)\circ(\underline{\phantom{J}}^*). \] Since $\underline{\phantom{J}}^*$ is contravariant exact, $\underline{\phantom{J}}\otimes N$ is covariant exact, $\underline{\phantom{J}}^{\leqslant0}$ is covariant right exact, and $\underline{\phantom{J}}\delimiter"6223379_0$ is covariant exact, it follows that $F_{\underline{\phantom{J}}}N$ is a contravariant right exact endofunctor on $\scr O_0$, which proves the statement for $F$. The statement for $H$ follows by duality.
\end{proof} \begin{remark} \label{rem:lzero} Note that, since $L(0)^*\cong L(0)\cong{}_{\mathfrak g}\mathbb{C}$, with $\mathfrak g$ acting trivially on $\mathbb{C}$, we have isomorphisms \begin{align*} G_{L(0)}M &= M\otimes L(0)\delimiter"6223379_0 \cong M\delimiter"6223379_0=M, \text{ and} \\ F_{L(0)}M &= \bigl(M\otimes L(0)^*\bigr)^{\leqslant 0}\big\delimiter"6223379_0 \cong M^{\leqslant 0}\big\delimiter"6223379_0 = M \end{align*} natural in $M$, for any $M\in\scr O_0$. Hence we have natural isomorphisms \[ G_{L(0)}\cong F_{L(0)}\cong H_{L(0)}\cong \Id, \] where $\Id$ denotes the identity functor on $\scr O_0$. \end{remark} \begin{proposition} \label{prop:acyclic} For any $M\in\scr O_0$ the following holds. \begin{enumerate}[(a)] \item $F_M$ and $G_M$ preserve $\scr F_0(\Delta)$ and are acyclic on it. \item $G_M$ and $H_M$ preserve $\scr F_0(\nabla)$ and are acyclic on it. \end{enumerate} \end{proposition} \begin{proof} $G_M$ preserves $\scr F_0(\Delta)$ and $\scr F_0(\nabla)$ by Proposition~\ref{prop:preservestilt}. $G_M$ is also acyclic on $\scr F_0(\Delta)$ and $\scr F_0(\nabla)$ since $G_M$ is exact. $F_M$ preserves $\scr F_0(\Delta)$ by Proposition~\ref{prop:fpreservesdelta}. If $F_M$ is acyclic on $K$ and $Q$, and the sequence \[ 0\rightarrow K\rightarrow N\rightarrow Q\rightarrow 0 \] is exact, then it follows that $F_M$ is acyclic on $N$. Hence it suffices to show that $F_M$ is acyclic on Verma modules, by induction on the length of Verma flags. A right exact functor is always acyclic on projective modules, so in particular $F_M$ is acyclic on $\Delta(0)$, since $\Delta(0)$ is projective. Now let $\lambda\in\mathfrak h^*$ with $\lambda < 0$ and $\Delta(\lambda)\in\scr O_0$, and assume that $F_M$ is acyclic on $\Delta(\mu)$ for all $\mu\in\mathfrak h^*$ with $\lambda < \mu$ and $\Delta(\mu)\in\scr O_0$. 
All Verma modules fit in a short exact sequence \begin{equation} \label{eq:prop:acyclic} 0\rightarrow K\rightarrow P(\lambda)\rightarrow\Delta(\lambda)\rightarrow 0, \end{equation} where $P(\lambda)$ is projective, and $K\in\scr F_0(\Delta)$ is filtered by Verma modules $\Delta(\mu)$ with $\lambda<\mu$. In particular, $F_M$ is acyclic on $K$ by the induction hypothesis. Hence, in the induced long exact sequence \[ \dotsb\rightarrow \scr L_{i+1}F_MP(\lambda) \rightarrow\scr L_{i+1}F_M\Delta(\lambda) \rightarrow\scr L_{i}F_MK \rightarrow\dotsb \] we have $\scr L_{i+1}F_MP(\lambda)=0$ since $P(\lambda)$ is projective, and $\scr L_iF_MK=0$ by the induction hypothesis, so $\scr L_{i+1}F_M\Delta(\lambda)=0$ for all $i\geq 1$. It remains to show that $\scr L_1F_M\Delta(\lambda)=0$. Since $\scr L_1F_MP(\lambda)=0$, we have that \begin{equation} \label{eq:lonewhat} 0\rightarrow\scr L_1F_M\Delta(\lambda) \rightarrow F_MK\rightarrow F_MP(\lambda)\rightarrow F_M\Delta(\lambda)\rightarrow 0 \end{equation} is exact. Now, consider the short exact sequence \[ 0\rightarrow M^*\otimes K\rightarrow M^*\otimes P(\lambda) \rightarrow M^*\otimes \Delta(\lambda)\rightarrow 0 \] obtained from \eqref{eq:prop:acyclic} by applying the functor $M^*\otimes \underline{\phantom{J}}$. The modules in the above sequence are all $\enva{\mathfrak n_-}$-free, so by Proposition~\ref{prop:freeproj} we obtain an exact sequence \begin{equation} \label{eq:lonezero} 0\rightarrow F_MK\rightarrow F_MP(\lambda)\rightarrow F_M\Delta(\lambda)\rightarrow 0 \end{equation} by applying $\underline{\phantom{J}}^{\leqslant 0}$, and thus $\scr L_1F_M\Delta(\lambda)=0$, by comparing~\eqref{eq:lonewhat} and~\eqref{eq:lonezero}. Since $H_M=\star\circ F_{M^\star}\circ\star$, and $\star$ is a contravariant exact functor swapping $\scr F_0(\nabla)$ with $\scr F_0(\Delta)$, it follows by the dual argument to the previous paragraph that $H_M$ preserves $\scr F_0(\nabla)$ and is acyclic on it.
\end{proof} \begin{lemma} The functors $F$, $G$ and $H$ are faithful. \end{lemma} \begin{proof} Let $M, N\in\scr O_0$ with a non-zero homomorphism $\varphi\colon M\rightarrow N$. By the symmetry of the tensor product, we have $G_{\underline{\phantom{J}}}K\cong G_K\underline{\phantom{J}}$ for any $K\in\scr O_0$. In particular, it follows from Remark~\ref{rem:lzero} that $G_{\underline{\phantom{J}}}L(0)=\Id$. Thus $G_{\varphi}L(0)=(\varphi\otimes \text{Id}_{L(0)})\delimiter"6223379_0\neq 0$, so $G_{\varphi}$ is non-zero and hence $G$ is faithful. Now, let $m^*\in M^*$, $m^*\neq 0$, be a lowest weight vector of weight $\mu\in\mathfrak h^*$ in the image of the map $\varphi^*\colon N^*\rightarrow M^*$, and let $n^*\in N^*$ with $\varphi^*(n^*)=m^*$. Let $\lambda\in\mathfrak h^*$ be the antidominant weight, i.e. with $L(\lambda)=\Delta(\lambda)\in\scr O_0$, and consider $F_{\varphi}\Delta(\lambda)\colon F_{N}\Delta(\lambda)\rightarrow F_M\Delta(\lambda)$. Let $v\in\Delta(\lambda)$ denote a non-zero highest weight vector of $\Delta(\lambda)$. Since $\mu$ is a lowest weight of $\varphi^*(N^*)$ and $N\in\scr O_0$, it follows that $\lambda+\mu\leqslant 0$ and $\Delta(\lambda+\mu)\in\scr O_0$. In particular, by the proof of Proposition~\ref{prop:freeproj}, both $n^*\otimes v$ and $m^*\otimes v$ represent non-zero elements $\overline{n^*\otimes v}$ and $\overline{m^*\otimes v}$ in \[ (N^*\otimes \Delta(\lambda))^{\leqslant0}\big\delimiter"6223379_0 = F_N\Delta(\lambda) \] and \[ (M^*\otimes \Delta(\lambda))^{\leqslant0}\big\delimiter"6223379_0=F_M\Delta(\lambda), \] respectively. Therefore, since \[ F_{\varphi}\Delta(\lambda) = (\varphi^*\otimes \Id_{\Delta(\lambda)})^{\leqslant 0}\big\delimiter"6223379_0 \] we see that \[ \bigl(F_{\varphi}\Delta(\lambda)\bigr)(\overline{n^*\otimes v}) = \overline{\varphi^*(n^*)\otimes v} = \overline{m^*\otimes v}\neq 0. \] Hence $F_{\varphi}$ is non-zero, proving that $F$ is faithful. By duality, it follows that $H$ is faithful.
\end{proof} We now conclude the proof of Theorem~\ref{thm:maintheorem} by showing a slightly stronger statement than ``$X_M\cong X_N$ if and only if $M\cong N$''. \begin{proposition} Let $M, N\in\scr O_0$ with $M\ncong N$. Then \begin{enumerate}[(a)] \item $F_M\vert_\text{proj}\ncong F_N\vert_\text{proj}$, \item $G_M\vert_\text{tilt}\ncong G_N\vert_\text{tilt}$, and \item $H_M\vert_\text{inj}\ncong H_N\vert_\text{inj}$, \end{enumerate} where $\vert_\text{proj}$, $\vert_\text{tilt}$ and $\vert_\text{inj}$ denote the restrictions to the additive categories of projective, tilting and injective modules, respectively. \end{proposition} \begin{proof} We start by noting that if $G_M\cong G_N$, then \begin{equation} \label{eq:gmcgn} M \cong G_ML(0)\cong G_NL(0)\cong N. \end{equation} Assume that $F_M\vert_\text{proj}\cong F_N\vert_\text{proj}$. Since $F_M$ and $F_N$ are right exact, it follows by taking projective presentations that $F_MK\cong F_NK$ for any $K\in\scr O_0$, i.e. $F_N\cong F_M$. By the uniqueness of right adjoints, this implies that $G_M\cong G_N$ so $M\cong N$ by \eqref{eq:gmcgn}, and hence we have proved part (a). Part (c) follows from (a) by duality (as in the proof of Proposition~\ref{prop:acyclic}). For part (b), assume that $G_M\vert_\text{tilt}\cong G_N\vert_\text{tilt}$. We recall that each projective module $P\in\scr O_0$ has a tilting co-resolution, i.e. there are tilting modules $T_0, \dotsc, T_k\in\scr O_0$ such that the sequence \[ 0\rightarrow P\rightarrow T_0\rightarrow\cdots\rightarrow T_k\rightarrow 0 \] is exact (for details, see \cite[Lemma~6]{ringel}). Since $G_N$ and $G_M$ are exact and agree on the additive category of tilting modules, this induces the following commutative diagram with exact rows. 
\[ {\setlength{\arraycolsep}{2pt} \begin{array}{ccccccccccc} 0 & \rightarrow & G_MP & \rightarrow & G_MT_0 & \rightarrow & \cdots & \rightarrow & G_MT_k & \rightarrow & 0 \\ & & & & \rotatebox[origin=c]{270}{$\cong$} & & & & \rotatebox[origin=c]{270}{$\cong$} \\ 0 & \rightarrow & G_NP & \rightarrow & G_NT_0 & \rightarrow & \cdots & \rightarrow & G_NT_k & \rightarrow & 0 \end{array}} \] By the Five Lemma this induces an isomorphism $G_MP\cong G_NP$, which furthermore is natural, since all isomorphisms in the above diagram are natural. Hence $G_M$ and $G_N$ are naturally equivalent on projective modules, so by the right exactness $G_M\cong G_N$ as in the proof of part (a). By \eqref{eq:gmcgn} we have $M\cong N$, as required. \end{proof} \begin{proposition} \label{prop:mdnnprojtiltinj} For all $M\in\scr F_0(\Delta)$, $N\in\scr F_0(\nabla)$ we have that \begin{enumerate}[(a)] \item $F_NM$ is projective, \item $G_MN\cong G_NM$ is a tilting module, and \item $H_MN$ is injective. \end{enumerate} \end{proposition} \begin{proof} For part (a), we need to show that $\Hom(F_NM, \underline{\phantom{J}})$ is exact. Since \[ \Hom(F_NM, \underline{\phantom{J}}) \cong \Hom(M, G_N\underline{\phantom{J}}), \] it is equivalent to show that $\Hom(M, G_N\underline{\phantom{J}})$ is exact. By Proposition~\ref{prop:preservestilt}, $G_N\underline{\phantom{J}}$ maps any module to a module with a dual Verma flag, since $N\in\scr F(\nabla)$. Hence, as $G_N\underline{\phantom{J}}$ is exact, it maps an exact sequence to an exact sequence of modules in $\scr F(\nabla)$. Finally, $\Hom(M, \underline{\phantom{J}})$ is acyclic on $\scr F(\nabla)$ since $M\in\scr F(\Delta)$ (see \cite[Corollary~2]{ringel}), so applying $\Hom(M, \underline{\phantom{J}})$ to an exact sequence of modules in $\scr F(\nabla)$ again yields an exact sequence, i.e. $\Hom(M, G_N\underline{\phantom{J}})$ is exact. Part (c) follows from (a) by duality. 
Finally, part (b) follows directly from Proposition~\ref{prop:preservestilt}. \end{proof} \begin{corollary} For all $M\in\scr T_0$, $F_M$ maps tilting modules to projective modules, and $H_M$ maps tilting modules to injective modules. \end{corollary} In general it is quite difficult to compute $F_MN$ and $H_MN$, but the following is a nice special case. \begin{proposition} \label{prop:fnd} For each $\lambda\in\mathfrak h^*$ with $\Delta(\lambda)\in\scr O_0$ we have \begin{align*} F_{\nabla(\lambda)}\Delta(\lambda) &\cong \Delta(0), \text{ and} \\ H_{\Delta(\lambda)}\nabla(\lambda) &\cong \nabla(0). \end{align*} \end{proposition} \begin{proof} Let $\mu\in\mathfrak h^*$ be such that $\mu<0$ and $L(\mu)\in\scr O_0$. Since $\mu<0$ it follows that $\bigl(G_{\nabla(\lambda)}L(\mu)\bigr)_\lambda=\set{0}$, so \[ \dim\Hom\bigl(F_{\nabla(\lambda)}\Delta(\lambda), L(\mu)\bigr) = \dim\Hom\bigl(\Delta(\lambda), G_{\nabla(\lambda)}L(\mu)\bigr) = 0. \] On the other hand, we have \begin{align*} \dim\Hom\bigl(F_{\nabla(\lambda)}\Delta(\lambda), L(0)\bigr) &= \dim\Hom\bigl(\Delta(\lambda), G_{\nabla(\lambda)}L(0)\bigr) \\ &= \dim\Hom\bigl(\Delta(\lambda), \nabla(\lambda)\bigr) \\ &= 1, \end{align*} so $F_{\nabla(\lambda)}\Delta(\lambda)$ has simple top $L(0)$. By Proposition~\ref{prop:mdnnprojtiltinj}, $F_{\nabla(\lambda)}\Delta(\lambda)$ is projective, and hence \[ F_{\nabla(\lambda)}\Delta(\lambda)\cong\Delta(0). \] The second statement follows by duality. \end{proof} \begin{proposition} \label{prop:nattrans} There are natural transformations \begin{enumerate}[(a)] \item $G_{\Delta(0)}\twoheadrightarrow \Id$, $\Id\hookrightarrow G_{\nabla(0)}$, \item $\Id\hookrightarrow H_{\Delta(0)}$, $F_{\nabla(0)}\twoheadrightarrow\Id$.
\end{enumerate} \end{proposition} \begin{proof} Since $F_{L(0)}\cong G_{L(0)}\cong H_{L(0)}\cong\Id$, and since $F$ is right exact, $G$ is exact, and $H$ is left exact, this follows by applying the functors $F$, $G$ and $H$ to the canonical homomorphisms $\Delta(0)\twoheadrightarrow L(0)$ and $L(0)\hookrightarrow\nabla(0)$. \end{proof} \section{(Co-)Monad structures} \label{sec:comonad} We briefly recall the definition of a monad and a comonad (sometimes called triple and cotriple, respectively); for details see~\cite{maclane, weibel}. A monad $(\mho, \nabla, \eta)$ on a category $\scr C$ is an endofunctor $\mho\colon\scr C\rightarrow\scr C$ together with two natural transformations $\nabla\colon \mho^2\rightarrow \mho$ and $\eta\colon \Id\rightarrow \mho$ such that the diagrams \newsavebox{\midalignbox} \newcommand{\midalign}[1]{\savebox{\midalignbox}{#1}\raisebox{-0.5\ht\midalignbox}{\usebox{\midalignbox}}} \begin{equation} \label{eq:monaddiagrams} \midalign{\includegraphics{monad.1}} \qquad\text{and}\qquad \midalign{\includegraphics{monad.2}} \end{equation} commute. Dually, a comonad $(\Omega, \Delta, \varepsilon)$ on a category $\scr C$ is an endofunctor $\Omega\colon\scr C\rightarrow\scr C$ together with two natural transformations $\Delta\colon \Omega\rightarrow \Omega^2$ and $\varepsilon\colon \Omega\rightarrow \Id$ such that the diagrams \begin{equation} \label{eq:comonaddiagrams} \midalign{\includegraphics{monad.3}} \qquad\text{and}\qquad \midalign{\includegraphics{monad.4}} \end{equation} commute. Fix a non-zero highest weight vector $v$ of $\Delta(0)$. Recall that $\enva{\mathfrak g}$ admits a coalgebra structure with counit $\tilde\varepsilon\colon\enva{\mathfrak g}\rightarrow\mathbb{C}$ and comultiplication $\tilde\Delta\colon\enva{\mathfrak g}\rightarrow\enva{\mathfrak g}\otimes\enva{\mathfrak g}$.
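For concreteness, the coalgebra structure on $\enva{\mathfrak g}$ referred to here is the standard Hopf-algebra structure on a universal enveloping algebra: on an element $x\in\mathfrak g$ it is given by

```latex
\tilde\Delta(x) = x\otimes 1 + 1\otimes x, \qquad \tilde\varepsilon(x) = 0,
```

and both maps are extended to all of $\enva{\mathfrak g}$ as algebra homomorphisms; for instance $\tilde\Delta(xy)=xy\otimes 1+x\otimes y+y\otimes x+1\otimes xy$ for $x,y\in\mathfrak g$.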
This induces two homomorphisms \begin{align} \label{eq:comone} D&\colon\Delta(0)\hookrightarrow\Delta(0)\otimes\Delta(0), uv\mapsto \tilde\Delta(u)(v\otimes v), \\ \label{eq:comtwo} E&\colon\Delta(0)\twoheadrightarrow L(0), uv\mapsto \tilde\varepsilon(u), \end{align} for $u\in\enva{\mathfrak n_-}$, where we identify $L(0)$ with $\mathbb{C}$ via $\overline v\mapsto 1$. \begin{proposition} \label{prop:comonad} The homomorphisms~\eqref{eq:comone} and~\eqref{eq:comtwo} induce a co\-mon\-ad $(\Delta(0)\otimes\underline{\phantom{J}}, \Delta, \varepsilon)$ on $\widetilde{\catO}$ with $\Delta$ injective and $\varepsilon$ surjective, and dually a monad $(\nabla(0)\otimes\underline{\phantom{J}}, \nabla, \eta)$ with $\nabla$ surjective and $\eta$ injective. \end{proposition} \begin{proof} Fix $M\in\widetilde{\catO}$. Applying the functor $\underline{\phantom{J}}\otimes M$ to~\eqref{eq:comone} and~\eqref{eq:comtwo} we obtain the homomorphisms (where as above we identify $L(0)$ with $\mathbb{C}$) \[ \Delta_M\mathrel{\mathop:}= D\otimes \text{Id}_M \colon \Delta(0)\otimes M\hookrightarrow\Delta(0)\otimes\Delta(0)\otimes M, \] and \[ \varepsilon_M\mathrel{\mathop:}= E\otimes \text{Id}_M\colon \Delta(0)\otimes M\twoheadrightarrow M. \] By the proof of Proposition~\ref{prop:preservestilt}, $\Delta(0)\otimes M$ is generated by elements of the form $v\otimes m$, $m\in M$. For such an element, it is trivial to show that \[ \bigl((\varepsilon_{\Delta(0)\otimes M})\circ\Delta_M\bigr)(v\otimes m) = \bigl((\text{Id}_{\Delta(0)}\otimes \varepsilon_M)\circ\Delta_M\bigr)(v\otimes m) = v\otimes m, \] and \begin{align*} \bigl((\Delta_{\Delta(0)\otimes M})\circ\Delta_M\bigr)(v\otimes m) &= \bigl((\text{Id}_{\Delta(0)}\otimes \Delta_M)\circ\Delta_M\bigr)(v\otimes m) \\ &= v\otimes v\otimes v\otimes m, \end{align*} so the diagrams~\eqref{eq:comonaddiagrams} commute, proving that $(\Delta(0)\otimes\underline{\phantom{J}}, \Delta, \varepsilon)$ is a comonad on $\widetilde{\catO}$. 
Applying $\star\circ(\underline{\phantom{J}}\otimes M^\star)$ to~\eqref{eq:comone} and~\eqref{eq:comtwo} gives the homomorphisms \begin{align*} \nabla_M&\mathrel{\mathop:}= (\Delta_{M^\star})^\star\colon \nabla(0)\otimes\nabla(0)\otimes M\twoheadrightarrow\nabla(0)\otimes M, \text{ and}\\ \eta_M&\mathrel{\mathop:}= (\varepsilon_{M^\star})^\star\colon M\hookrightarrow \nabla(0)\otimes M. \end{align*} By duality, the diagrams~\eqref{eq:monaddiagrams} commute. \end{proof} We can refine this result to the category $\scr O_0$. \begin{theorem} \label{thm:comonado0} The homomorphisms~\eqref{eq:comone} and~\eqref{eq:comtwo} induce a comonad $(G_{\Delta(0)}, \bar\Delta, \bar\varepsilon)$ on $\scr O_0$, with $\bar\Delta$ injective and $\bar\varepsilon$ surjective, and dually a monad $(G_{\nabla(0)}, \bar\nabla, \bar\eta)$ on $\scr O_0$, with $\bar\nabla$ surjective and $\bar\eta$ injective. \end{theorem} We prove Theorem~\ref{thm:comonado0} in parts, throughout the rest of this section. Define \begin{align*} \bar\Delta_M&\colon G_{\Delta(0)}M\rightarrow G_{\Delta(0)}G_{\Delta(0)} M,\text{ and} \\ \bar\varepsilon_M&\colon G_{\Delta(0)}M\rightarrow M, \end{align*} by \begin{align*} \bar\Delta_M&\mathrel{\mathop:}= \pi_{G_{\Delta(0)}G_{\Delta(0)}M}\circ\Delta_M\circ\iota_{G_{\Delta(0)}M}, \text{ and} \\ \bar\varepsilon_M&\mathrel{\mathop:}= \varepsilon_M\circ\iota_{G_{\Delta(0)}M}, \end{align*} where $\pi_x$ and $\iota_x$, as before, denote the natural projections and injections. Let $\bar\Delta$ and $\bar\varepsilon$ be the natural transformations corresponding to $\bar\Delta_M$ and $\bar\varepsilon_M$. \begin{remark} \label{rem:comodfactors} As in the case of Remark~\ref{rem:ntransfactors}, by central character considerations we see that $\pi_{G_{\Delta(0)}G_{\Delta(0)}}\circ\Delta_M$ factors through $\bar\Delta_M$, and $\varepsilon_M$ factors through $\bar\varepsilon_M$, i.e.
the diagrams \[ \includegraphics{comodfactors.1}\qquad \includegraphics{comodfactors.2} \] commute. \end{remark} \begin{lemma} \label{lem:lcomm} The left of the diagrams~\eqref{eq:comonaddiagrams} for the triple $(G_{\Delta(0)}, \bar\Delta, \bar\varepsilon)$ commutes. \end{lemma} \begin{proof} Fix $M\in\scr O_0$, with a weight basis $m_1, m_2, \dots\in M$, and consider an element \[ \sum_{i=1}^k(u_iv)\otimes m_i\in G_{\Delta(0)}M, \] where $u_1$, $\dots$, $u_k\in\enva{\mathfrak n_-}$. Applying $\Delta_M$ yields, after collecting the elements of the form $v\otimes \underline{\phantom{J}}\otimes \underline{\phantom{J}}$, \begin{equation} \label{eq:blahhh} \sum_{i=1}^kv\otimes (u_iv)\otimes m_i + \sum_{i, j}(u'_{ij}v)\otimes (u''_{ij}v)\otimes m_i, \end{equation} where $\tilde\varepsilon(u'_{ij})=0$ for all $u'_{ij}$ in the sum on the right. Hence, when applying \[ (\varepsilon\otimes\text{Id}_{G_{\Delta(0)}M})\circ (\text{Id}_{\Delta(0)}\otimes \pi_{G_{\Delta(0)}M}), \] the right hand sum of~\eqref{eq:blahhh} maps to zero, while the left hand sum of~\eqref{eq:blahhh} maps to \[ \sum_{i=1}^k (u_iv)\otimes m_i. \] Hence $\bar\varepsilon_{G_{\Delta(0)}M}\circ\bar\Delta_M=\text{Id}_{G_{\Delta(0)}M}$, so the upper triangle of the left diagram of~\eqref{eq:comonaddiagrams} commutes. For the lower triangle, consider the following diagram. \[ \includegraphics{monad_o0.3} \] The left square and the triangle commute by Remark~\ref{rem:comodfactors}, the right quadrangle commutes by Remark~\ref{rem:ntransfactors}, and hence the diagram commutes. By Proposition~\ref{prop:comonad}, the top row equals $\text{Id}_{\Delta(0)\otimes M}$, and hence the bottom row equals $\text{Id}_{G_{\Delta(0)}M}$, as required. \end{proof} \begin{corollary} \label{cor:injsur} The homomorphism $\bar\varepsilon_M$ is surjective and the homomorphism $\bar\Delta_M$ is injective.
\end{corollary} \begin{proof} Since $\bar\varepsilon_M=\varepsilon_M\circ\iota_{G_{\Delta(0)}M}$ it follows that $\bar\varepsilon_M$ is surjective, as $\varepsilon_M$ is surjective. By Lemma~\ref{lem:lcomm} we have $G_{\Delta(0)}\bar\varepsilon_M\circ\bar\Delta_M=\text{Id}_{G_{\Delta(0)}M}$, so $\bar\Delta_M$ is injective since $\text{Id}_{G_{\Delta(0)}M}$ is injective. \end{proof} \begin{lemma} \label{lem:rcomm} The right of the diagrams~\eqref{eq:comonaddiagrams} for the triple $(G_{\Delta(0)}, \bar\Delta, \bar\varepsilon)$ commutes. \end{lemma} \begin{proof} We claim that the diagrams \[ \includegraphics[width=0.8\textwidth]{monad_o0.1} \] and \[ \includegraphics[width=0.8\textwidth]{monad_o0.2} \] commute. For the first diagram, the left and top right squares commute by Remark~\ref{rem:comodfactors}, and the bottom right square commutes by Remark~\ref{rem:ntransfactors}. For the second diagram, the left and bottom right squares commute by Remark~\ref{rem:comodfactors}. For the top right square, we note that \begin{align*} \Delta_{\Delta(0)\otimes M} &= D\otimes \text{Id}_{\Delta(0)\otimes M}, \text{ and} \\ \Delta_{G_{\Delta(0)}M} &= D\otimes \text{Id}_{G_{\Delta(0)}M}, \end{align*} so the square commutes, since \begin{align*} D\otimes \pi_{G_{\Delta(0)}M} &= (\text{Id}_{\Delta(0)\otimes \Delta(0)}\otimes \pi_{G_{\Delta(0)}M}) \circ (D\otimes \text{Id}_{\Delta(0)\otimes M}) \\ &= (D\otimes \text{Id}_{G_{\Delta(0)}M}) \circ (\text{Id}_{\Delta(0)}\otimes \pi_{G_{\Delta(0)}M}). \end{align*} Thus both diagrams commute. Hence, since \[ (\text{Id}_{\Delta(0)}\otimes\Delta_M)\circ\Delta_M = \Delta_{\Delta(0)\otimes M}\circ\Delta_M \] by Proposition~\ref{prop:comonad}, and the fact that projections commute, it follows that \[ G_{\Delta(0)}\bar\Delta_M\circ \bar\Delta_M = \bar\Delta_{G_{\Delta(0)}M}\circ\bar\Delta_M, \] and thus the right of the diagrams~\eqref{eq:comonaddiagrams} commutes.
\end{proof} From Lemma~\ref{lem:lcomm} and Lemma~\ref{lem:rcomm} it follows that $(G_{\Delta(0)}, \bar\Delta, \bar\varepsilon)$ is a comonad on $\scr O_0$, and $\bar\Delta$ is injective and $\bar\varepsilon$ is surjective by Corollary~\ref{cor:injsur}. Finally, as in the proof of Proposition~\ref{prop:comonad}, setting \begin{align*} \bar\nabla_M&\mathrel{\mathop:}= (\bar\Delta_{M^\star})^\star, \text{ and}\\ \bar\eta_M&\mathrel{\mathop:}= (\bar\varepsilon_{M^\star})^\star, \end{align*} gives a monad $(G_{\nabla(0)}, \bar\nabla, \bar\eta)$ with $\bar\nabla$ surjective and $\bar\eta$ injective, by duality, which concludes the proof of Theorem~\ref{thm:comonado0}. \section{Parabolic subcategories} \label{sec:parabolic} All the previous results can be generalized to the parabolic analogue of $\scr O$, in the sense of Rocha-Caridi (see for example~\cite{rc, irving}). Let $\mathfrak p\supseteq \mathfrak b$ be a parabolic subalgebra of $\mathfrak g$, let $\mathfrak m\subseteq \mathfrak n_-$ be such that \[ \mathfrak g = \mathfrak m\oplus \mathfrak p, \] and let $R_{\mathfrak m}$ be the roots of $\mathfrak m$. The parabolic analogues of $\scr O$, $\widetilde{\catO}$, $\scr F(\Delta)$, etc.~are obtained by substituting $\mathfrak n_-$ by $\mathfrak m$, $\mathfrak b$ by $\mathfrak p$, and $R_-$ by $R_{\mathfrak m}$ in the corresponding definition. Thus, for example, $\scr O^{\mathfrak p}$ is defined as the full subcategory of the category of $\mathfrak g$-modules consisting of weight modules that are finitely generated as $\enva{\mathfrak m}$-modules, and $\scr F^{\mathfrak p}(\Delta)$ is the full subcategory of $\scr O^{\mathfrak p}$ consisting of the modules that are free as $\enva{\mathfrak m}$-modules. Similarly, the partial order $\leqslant$ on $\mathfrak h^*$ is replaced by $\leqslant_{\mathfrak p}$, defined by $\lambda\leqslant_{\mathfrak p}\mu$ if and only if $\lambda-\mu\in\mathbb{N}_0R_{\mathfrak m}$, and so on.
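To fix ideas, here is the decomposition $\mathfrak g=\mathfrak m\oplus\mathfrak p$ in the smallest non-trivial case, anticipating the $\mathfrak{sl}_3(\mathbb{C})$ example of Section~\ref{sec:example}; we write $X_{-\gamma}$ for a root vector of weight $-\gamma$:

```latex
\mathfrak g = \mathfrak{sl}_3(\mathbb{C}), \qquad
\mathfrak p = \mathfrak p^{\alpha} = \mathfrak b \oplus \langle X_{-\alpha}\rangle_{\mathbb{C}}, \qquad
\mathfrak m = \langle X_{-\beta},\, X_{-(\alpha+\beta)}\rangle_{\mathbb{C}},
```

so that $R_{\mathfrak m}=\{-\beta,\,-(\alpha+\beta)\}$ and, since the two root vectors commute, $\enva{\mathfrak m}$ is a polynomial algebra in two variables.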
Recall that a generalised Verma module in $\scr O^{\mathfrak p}$ is an element of $\scr F^{\mathfrak p}(\Delta)$ that is generated by a highest weight vector (for details, see \cite{lepowsky}). We denote the generalised Verma module generated by a highest weight vector of weight $\lambda\in\mathfrak h^*$ by $\Delta^{\mathfrak p}(\lambda)$. Furthermore, the objects in $\scr F^{\mathfrak p}(\Delta)$ are precisely the objects in $\scr O^{\mathfrak p}$ that have a generalised Verma filtration. Almost all statements and proofs of the previous sections hold verbatim with these substitutions. The exception is Proposition~\ref{prop:freeproj}, which needs to be restated in the following (rather complicated) way. Let $\mathfrak g^{\mathfrak p}$ denote the semisimple part of $\mathfrak p$. \begin{proposition} \label{prop:pfreeproj} Let $M$ be a $\enva{\mathfrak m}$-free module with a $\enva{\mathfrak m}$-basis \[ \bigl\{\,v_{ij}\,\big\vert\,i\in I, 1\leq j\leq k_i\,\bigr\} \] for some index set $I$ and non-negative integers $k_i$ such that \[ L_i \mathrel{\mathop:}= \enva{\mathfrak g^{\mathfrak p}}\{\,v_{ij}\,\vert\,1\leq j\leq k_i\,\} \] is a $k_i$-dimensional $\mathfrak g^{\mathfrak p}$-module with basis $v_{i1}, \dotsc, v_{ik_i}$. Then \[ M^{\leqslant\lambda} = M^{\leqslant_{\mathfrak p}\lambda} \cong \bigoplus_{\substack{i\in I,\\ L_i\leqslant\lambda}} \enva{\mathfrak n_-}\{v_{i1}, \dotsc, v_{ik_i}\}, \] where $L_i\leqslant\lambda$ if $\weight(v_{ij})\leqslant\lambda$ for all $1\leq j\leq k_i$. \end{proposition} \begin{proof} By completely analogous arguments as in the proof of Proposition~\ref{prop:freeproj}, it follows that \[ M^{\nleqslant\lambda} = \sum_{\substack{i\in I,\\ L_i\nleqslant\lambda}} \enva{\mathfrak n_-}\{v_{i1}, \dotsc, v_{ik_i}\}, \] and hence the claim follows. 
\end{proof} All objects of $\scr F^{\mathfrak p}(\Delta)$ satisfy the requirements of Proposition~\ref{prop:pfreeproj}, and a straightforward argument shows that $M\otimes N^*$ does as well, for all $M\in\scr F^{\mathfrak p}(\Delta)$ and $N\in\scr O^{\mathfrak p}$. In particular, we conclude that the arguments used in Sections~\ref{sec:main} and~\ref{sec:comonad} all translate to the parabolic setting. The main results for the category $\scr O_0^{\mathfrak p}$ are thus the following. \begin{theorem} There exist faithful functors \begin{align*} F^{\mathfrak p}&\colon\scr O_0^{\mathfrak p}\hookrightarrow\PFun(\scr O_0^{\mathfrak p})^{\text{op}}, M\mapsto F^{\mathfrak p}_M,\\ G^{\mathfrak p}&\colon\scr O_0^{\mathfrak p}\hookrightarrow\TFun(\scr O_0^{\mathfrak p}), M\mapsto G^{\mathfrak p}_M,\\ H^{\mathfrak p}&\colon\scr O_0^{\mathfrak p}\hookrightarrow\IFun(\scr O_0^{\mathfrak p})^{\text{op}}, M\mapsto H^{\mathfrak p}_M, \end{align*} all three satisfying $X_M\cong X_N$ if and only if $M\cong N$ (where $X=F^{\mathfrak p}, G^{\mathfrak p}, H^{\mathfrak p}$). \end{theorem} \begin{proof} These are just the restrictions of $F$, $G$, and $H$ to $\scr O_0^{\mathfrak p}$. \end{proof} \begin{proposition} \label{prop:pacyclic} For any $M\in\scr O_0^{\mathfrak p}$ the following holds: \begin{enumerate}[(a)] \item $F^{\mathfrak p}_M$ and $G^{\mathfrak p}_M$ preserve $\scr F_0^{\mathfrak p}(\Delta)$ and are acyclic on it. \item $G^{\mathfrak p}_M$ and $H^{\mathfrak p}_M$ preserve $\scr F_0^{\mathfrak p}(\nabla)$ and are acyclic on it. \end{enumerate} \end{proposition} \begin{proposition} \label{prop:pmdnnprojtiltinj} For all $M\in\scr F_0^{\mathfrak p}(\Delta)$, $N\in\scr F_0^{\mathfrak p}(\nabla)$ we have that \begin{enumerate}[(a)] \item $F^{\mathfrak p}_NM$ is projective, \item $G^{\mathfrak p}_MN\cong G^{\mathfrak p}_NM$ is a tilting module, and \item $H^{\mathfrak p}_MN$ is injective. 
\end{enumerate} \end{proposition} \begin{corollary} For all $M\in\scr T_0^{\mathfrak p}$, $F^{\mathfrak p}_M$ maps tilting modules to projective modules, and $H^{\mathfrak p}_M$ maps tilting modules to injective modules. \end{corollary} \begin{proposition} \label{prop:pfnd} For each $\lambda\in\mathfrak h^*$ with $\Delta^{\mathfrak p}(\lambda)\in\scr O_0^{\mathfrak p}$ we have \begin{align*} F^{\mathfrak p}_{\nabla^{\mathfrak p}(\lambda)}\Delta^{\mathfrak p}(\lambda) &\cong \Delta^{\mathfrak p}(0), \text{ and} \\ H^{\mathfrak p}_{\Delta^{\mathfrak p}(\lambda)}\nabla^{\mathfrak p}(\lambda) &\cong \nabla^{\mathfrak p}(0). \end{align*} \end{proposition} \begin{proposition} The canonical homomorphisms $\Delta^{\mathfrak p}(0)\twoheadrightarrow L(0)$ and $\Delta^{\mathfrak p}(0)\hookrightarrow\Delta^{\mathfrak p}(0)\otimes\Delta^{\mathfrak p}(0)$ induce a comonad $(G_{\Delta^{\mathfrak p}(0)}, \Delta^{\mathfrak p}, \varepsilon^{\mathfrak p})$ on $\scr O_0^{\mathfrak p}$ with $\Delta^{\mathfrak p}$ injective and $\varepsilon^{\mathfrak p}$ surjective, and dually a monad $(G_{\nabla^{\mathfrak p}(0)}, \nabla^{\mathfrak p}, \eta^{\mathfrak p})$ with $\nabla^{\mathfrak p}$ surjective and $\eta^{\mathfrak p}$ injective. \end{proposition} \section{An example: $\mathfrak{sl}_3(\mathbb{C})$} \label{sec:example} In conclusion we will compute the `multiplication table' given by $G_MN$ and $F_MN$, where $M$ and $N$ run through the simple modules of $\scr O_0$ for the algebra $\mathfrak g=\mathfrak{sl}_3(\mathbb{C})$, see Tables~\ref{tab:gmult} and~\ref{tab:fmult}. Let $\alpha, \beta\in\mathfrak h^*$ denote the simple roots, let $s$ and $t$ be the corresponding simple reflections (i.e. with $s(\alpha)=-\alpha$ and $t(\beta)=-\beta$), and fix a Weyl-Chevalley basis $X_{\pm\alpha}$, $X_{\pm\beta}$, $X_{\pm(\alpha+\beta)}$, $H_\alpha$, $H_\beta$. 
The `dot' action of the Weyl group $W=S_3$ on $\mathfrak h^*$ is defined by \[ w\cdot\lambda\mathrel{\mathop:}= w(\lambda+\rho)-\rho \] for an element $w\in W$, where $\rho\in\mathfrak h^*$ is half the sum of the positive roots. We set $L(w)\mathrel{\mathop:}= L(w\cdot 0)$ for $w\in W$. Let $e$ denote the identity in $W$. There are two proper parabolic subalgebras, $\mathfrak p^{\alpha}\mathrel{\mathop:}=\mathfrak b+\langle X_{-\alpha}\rangle_\mathbb{C}$ and $\mathfrak p^{\beta}\mathrel{\mathop:}=\mathfrak b+\langle X_{-\beta}\rangle_\mathbb{C}$. \begin{figure} \caption{The simple modules in $\scr O_0$ for the algebra $\mathfrak{sl}_3(\mathbb{C})$.} \label{fig:sl3simple} \end{figure} \newcommand{\smd}[4]{\phantom{\scriptstyle#2\,} \hbox{\vtop{\vbox{\hbox{\clap{$\scriptstyle #1$}}\nointerlineskip \hbox{\clap{$\scriptstyle #2\;#3$}}}\nointerlineskip \hbox{\clap{$\scriptstyle#4$}}}}\phantom{\,\scriptstyle#3}} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline \vphantom{$\Big)$}$G_MN$ & $L(e)$ & $L(s)$ & $L(t)$ & $L(st)$ & $L(ts)$ & $L(sts)$ \\ \hline\hline \vphantom{$\Big)$}$L(e)$ & $L(e)$ & $L(s)$ & $L(t)$ & $L(st)$ & $L(ts)$ & $L(sts)$ \\ \hline $L(s)$ & $L(s)$ & $L(st)$ & $\smd{L(sts)}{L(st)}{L(ts)}{L(sts)}$ & $0$ & $L(sts)$ & $0$ \\ \hline $L(t)$ & $L(t)$ & $\smd{L(sts)}{L(st)}{L(ts)}{L(sts)}$ & $L(ts)$ & $L(sts)$ & $0$ & $0$ \\ \hline \vphantom{$\Big)$}$L(st)$ & $L(st)$ & $0$ & $L(sts)$ & $0$ & $0$ & $0$ \\ \hline \vphantom{$\Big)$}$L(ts)$ & $L(ts)$ & $L(sts)$ & $0$ & $0$ & $0$ & $0$ \\ \hline \vphantom{$\Big)$}$L(sts)$ & $L(sts)$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline \end{tabular} \caption{The ``multiplication table'' for the bifunctor $G$ on the simple modules in $\scr O_0$ for $\mathfrak{sl}_3(\mathbb{C})$.} \label{tab:gmult} \end{center} \end{table} \newcommand{\sms}[2]{\genfrac{}{}{0pt}{1}{#1}{#2}} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline \vphantom{$\Big)$}$F_MN$ & $L(e)$ & $L(s)$ & $L(t)$ & $L(st)$ & $L(ts)$ & $L(sts)$ \\
\hline\hline \vphantom{$\Big)$} $L(e)$ & $L(e)$ & $L(s)$ & $L(t)$ & $L(st)$ & $L(ts)$ & $\Delta(sts)$ \\ \hline \vphantom{$\Big)$}$L(s)$ & $0$ & $L(e)$ & $0$ & $\Delta^{\mathfrak p^\beta}(s)$ & $0$ & $\Delta(ts)\oplus P(t)$\\ \hline \vphantom{$\Big)$}$L(t)$ & $0$ & $0$ & $L(e)$ & $0$ & $\Delta^{\mathfrak p^\alpha}(t)$ & $\Delta(st)\oplus P(s)$\\ \hline \vphantom{$\Big)$}$L(st)$ & $0$ & $0$ & $0$ & $\Delta^{\mathfrak p^\beta}(e)$ & $0$ & $\Delta(t)$ \\ \hline \vphantom{$\Big)$}$L(ts)$ & $0$ & $0$ & $0$ & $0$ & $\Delta^{\mathfrak p^\alpha}(e)$ & $\Delta(s)$ \\ \hline \vphantom{$\Big)$}$L(sts)$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\Delta(e)$ \\ \hline \end{tabular} \caption{The ``multiplication table'' for the bifunctor $F$ on the simple modules in $\scr O_0$ for $\mathfrak{sl}_3(\mathbb{C})$.} \label{tab:fmult} \end{center} \end{table} The first row and column for the $G$-table follow from Remark~\ref{rem:lzero}. The zero entries are obtained by weight arguments (e.g.\ \eqref{eq:tensorsupp} and \eqref{eq:dimotimes}). Similarly one finds that $L(s)\otimes L(s)$ has a highest weight vector of weight $st\cdot 0$. Since $L(s)$ is not $\enva{\langle X_{-\beta}\rangle}$-free, it follows that $G_{L(s)}L(s)\cong L(st)$. By symmetry, $G_{L(t)}L(t)=L(ts)$. Finally, for $G_{L(s)}L(t)\cong G_{L(t)}L(s)$, counting dimensions of the weight spaces shows that $L(st)$ and $L(ts)$ each occur once in the Jordan-H\"older decomposition, and $L(sts)$ occurs twice. Furthermore, since $L(s)$ is $\enva{\langle X_{-\alpha}\rangle}$-free and $L(t)$ is $\enva{\langle X_{-\beta}\rangle}$-free, it follows that $G_{L(s)}L(t)$ is both $\enva{\langle X_{-\alpha}\rangle}$-free and $\enva{\langle X_{-\beta}\rangle}$-free. Hence neither $L(st)$ nor $L(ts)$ can occur in the socle of $G_{L(s)}L(t)$. Moreover, we have \[ \biggl(G_{L(s)}L(t)\biggr)^\star = G_{L(s)^\star}L(t)^\star = G_{L(s)}L(t), \] i.e. $G_{L(s)}L(t)$ is self-dual, so neither $L(st)$ nor $L(ts)$ can occur in the top of $G_{L(s)}L(t)$.
We conclude that the Loewy series of $G_{L(s)}L(t)$ is \[ G_{L(s)}L(t)\cong \smd{L(sts)}{L(st)}{L(ts)}{L(sts)}. \] The corresponding table for $F$ is given in Table~\ref{tab:fmult}. Since $F_{L(0)}M=M$, the first row is immediate. Furthermore, by Proposition~\ref{prop:fnd}, and the fact that $L(sts)=\Delta(sts)=\nabla(sts)$ we have $F_{L(sts)}L(sts)=\Delta(e)$. Similarly, by Proposition~\ref{prop:pfnd} and the fact that $L(st)=\Delta^{\mathfrak p^\beta}(st)=\nabla^{\mathfrak p^\beta}(st)$ we have $F_{L(st)}L(st)=\Delta^{\mathfrak p^\beta}(e)$ (and similarly for $F_{L(ts)}L(ts)$). Using the adjointness of $F$ and $G$, we can easily determine the top of $F_{L(i)}L(j)$, i.e. $L(k)$ is in the top of $F_{L(i)}L(j)$ if and only if $L(j)$ is in the socle of $G_{L(i)}L(k)$. In particular, this fact and the $G$-table give us all the $0$'s in the table. The remaining cases need some additional case-by-case arguments. We begin with $F_{L(s)}L(s)$. By adjointness, Table~\ref{tab:gmult} shows that $F_{L(s)}L(s)$ has a simple top $L(e)$. Since $L(s)\in\scr O_0^{\beta}$, it follows that the possible modules are $L(e)$ and \[ \Delta^{\mathfrak p^\beta}(e)=\sms{L(e)}{L(s)}. \] But by Proposition~\ref{prop:pacyclic} we have $G_{L(s)}\Delta^{\mathfrak p^\beta}(e)\in\scr F_0^\beta(\Delta)$, so by analysing the weights we see that \[ G_{L(s)}\Delta^{\mathfrak p^\beta}(e)=\Delta^{\mathfrak p^\beta}(s) = \sms{L(s)}{L(st)}. \] Hence \begin{align*} \dim\Hom_{\mathfrak g}\bigl(F_{L(s)}L(s), \Delta^{\mathfrak p^\beta}(e)\bigr) &= \dim\Hom_{\mathfrak g}\bigl(L(s), G_{L(s)}\Delta^{\mathfrak p^\beta}(e)\bigr) \\ &= \dim\Hom_{\mathfrak g}\bigl(L(s), \Delta^{\mathfrak p^\beta}(s)\bigr) \\ &= 0, \end{align*} and so $F_{L(s)}L(s)\neq \Delta^{\mathfrak p^\beta}(e)$ and we conclude that $F_{L(s)}L(s)=L(e)$. Analogously, we get $F_{L(t)}L(t)=L(e)$.
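For the reader's convenience, the weights $w\cdot 0$ entering these computations follow directly from the definition of the dot action; for $\mathfrak{sl}_3$ we have $\rho=\alpha+\beta$, $s(\alpha)=-\alpha$, $s(\beta)=\alpha+\beta$ (and symmetrically for $t$), so

```latex
s\cdot 0 = -\alpha, \qquad t\cdot 0 = -\beta, \qquad
st\cdot 0 = -2\alpha-\beta, \qquad
ts\cdot 0 = -\alpha-2\beta, \qquad
sts\cdot 0 = -2(\alpha+\beta),
```

e.g.\ $s\cdot 0 = s(\rho)-\rho = \beta-(\alpha+\beta) = -\alpha$.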
Since $L(sts)=\Delta(sts)$, we have $F_{L(st)}L(sts)\in\scr F_0(\Delta)$ by Proposition~\ref{prop:acyclic}, and by the proof of Proposition~\ref{prop:fpreservesdelta} we know that the Verma modules $\Delta(\lambda)$ occurring in the Verma flag of $F_{L(st)}L(sts)$ are the ones satisfying $\lambda\in sts\cdot0 - \Supp L(st)$ and $\lambda\leqslant 0$, with multiplicity equal to the dimension of the weight space of $L(st)$ of weight $sts\cdot 0 - \lambda$. The only such weight is $t\cdot 0$, with multiplicity $1$. Hence, $F_{L(st)}L(sts)=\Delta(t)$. Analogously, $F_{L(ts)}L(sts)=\Delta(s)$. Since $L(st)=\Delta^{\mathfrak p^\beta}(st)$, we have $F_{L(s)}L(st)\in\scr F_0^\beta(\Delta)$. By a similar analysis as for $F_{L(st)}L(sts)$, using the proof of Proposition~\ref{prop:pfreeproj}, we find that $F_{L(s)}L(st)$ has only one generalised Verma quotient, $\Delta^{\mathfrak p^\beta}(s)$, so $F_{L(s)}L(st)=\Delta^{\mathfrak p^\beta}(s)$. Similarly, $F_{L(t)}L(ts)=\Delta^{\mathfrak p^\alpha}(t)$. Finally, for $F_{L(s)}L(sts)$, by the same analysis as for $F_{L(st)}L(sts)$ we have that $F_{L(s)}L(sts)$ has a Verma flag with Verma quotients $\Delta(e)$, $\Delta(t)$ and $\Delta(ts)$, each with multiplicity $1$. Furthermore, using adjointness we find from Table~\ref{tab:gmult} that $F_{L(s)}L(sts)$ has top $L(ts)\oplus L(t)$. Thus, $F_{L(s)}L(sts)$ is a quotient of $P(ts)\oplus P(t)$. The module $P(ts)\oplus P(t)$ has the following standard filtration: \[ P(ts)\oplus P(t) = \smd{\Delta(ts)}{\Delta(t)}{\Delta(s)}{\Delta(e)} \oplus \sms{\Delta(t)}{\Delta(e)}. \] It is easy to see that this implies that \[ F_{L(s)}L(sts) = \Delta(ts)\oplus P(t). \] By symmetry, $F_{L(t)}L(sts) = \Delta(st)\oplus P(s)$, which completes the table. \noindent Department of Mathematics, Uppsala University, Box 480,\\ \mbox{SE-75106 Uppsala}, Sweden. e-mail: {\tt [email protected]}. \end{document}
\begin{document} \title{Deformations of symplectic vortices} \author{Eduardo Gonzalez} \address{Department of Mathematics, University of Massachusetts Boston, 100 William T. Morrissey Boulevard, Boston, MA 02125} \email{[email protected]} \author{Chris Woodward} \address{Mathematics-Hill Center, Rutgers University, 110 Frelinghuysen Road, Piscataway, NJ 08854-8019, U.S.A.} \email{[email protected]} \thanks{Partially supported by NSF grant DMS060509} \begin{abstract} We prove a gluing theorem for a symplectic vortex on a compact complex curve and a collection of holomorphic sphere bubbles. Using the theorem we show that the moduli space of regular stable symplectic vortices on a fixed curve with varying markings has the structure of a stratified-smooth topological orbifold. In addition, we show that the moduli space has a non-canonical $C^1$-orbifold structure. \end{abstract} \maketitle \parskip 0in \tableofcontents \parskip .1in \section{Introduction} In this paper we generalize the following result on existence of universal deformations for stable (pseudo-)holomorphic maps. Let $(X,\omega)$ be a compact symplectic manifold equipped with a compatible almost complex structure $J$, and $(\mul{\Sigma},\mul{j})$ a compact nodal complex curve. A map $\mul{u}: \mul{\Sigma} \to X$ is {\em holomorphic} if $$ \overline{\partial} \mul{u} := J_{\mul{u}} \circ {\on{d}} \mul{u} - {\on{d}} \mul{u} \circ \mul{j} = 0 $$ on each component of $\mul{\Sigma}$. One naturally has the notion of a stratified-smooth {\em family} of holomorphic maps, and hence the notion of a {\em deformation}, namely the germ of a family around the central fiber together with an isomorphism of the central fiber with the given map. Recall that a deformation is {\em universal} if any other deformation is obtained from it by pullback, in a unique way, by a map of parameter spaces.
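Writing $u$ for the restriction of the map to a single component and $z = s + it$ for a local holomorphic coordinate there, the holomorphicity condition is the familiar non-linear Cauchy--Riemann equation

```latex
\partial_s u + J(u)\, \partial_t u = 0 ,
```

whose linearization at $u$ is the Cauchy--Riemann operator entering the regularity condition.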
A holomorphic map $\mul{u}: \mul{\Sigma} \to X$ is {\em regular} if the linearized Cauchy-Riemann operator is surjective. The following theorem is the result of the well-known gluing construction for holomorphic maps, cf. Ruan-Tian \cite{rt:hi} or the text McDuff-Salamon \cite[Chapter 10]{ms:jh} in the case of genus zero: \begin{theorem} \label{premain} A regular holomorphic map $\mul{u}: \mul{\Sigma} \to X$ admits a stratified-smooth universal deformation iff it is stable. \end{theorem} The construction of the universal deformation proceeds via the implicit function theorem. For each element in the infinitesimal deformation space of the stable map one first produces an approximate solution and then applies the implicit function theorem to find an exact solution. Unfortunately one uses a different Sobolev space for each ``gluing parameter'' controlling the domain, which means that it is rather tricky to show that each nearby stable holomorphic map occurs only once in the resulting family. A slightly jazzed up version of the above theorem implies that the gluing construction gives rise to orbifold charts on the regular locus of the moduli space of stable holomorphic maps. Uniqueness of the universal deformations implies that the smooth structures on each stratum are independent of the Sobolev spaces used in the implicit function theorem. One can make these charts $C^1$-compatible by suitable choices of {\em gluing profiles}, that is, coordinates on the local deformation spaces; however the $C^1$-structure on the moduli space is not canonical. The first part of the paper contains an exposition of the above theorem, which is rather scattered in the literature. The main result of the paper is a generalization of the theorem above to certain {\em gauged (pseudo)holomorphic maps}, namely {\em symplectic vortices} as introduced by Mundet \cite{mun:ham} and Cieliebak, Gaio and Salamon, see \cite{ciel:vor}.
Let $G$ be a compact Lie group and $X$ a Hamiltonian $G$-manifold equipped with a moment map $\Phi: X \to \lie{g}^*$ and an invariant almost complex structure $J$. Let $\Sigma$ be a compact smooth holomorphic curve with complex structure $j$, equipped with an area form $\on{Vol}_\Sigma$. A {\em gauged holomorphic map} with values in $X$ consists of a smooth principal $G$-bundle $P \to \Sigma$, a connection $A$ on $P$, and a smooth section $u: \Sigma \to P(X) := P \times_G X$ such that $\overline{\partial}_A u = 0$, where $\overline{\partial}_A$ is defined using the splitting given by the connection $A$ and the complex structures $J, j$. Let $F_A \in \Omega^2(\Sigma, P(\lie{g}))$ denote the curvature of $A$ and $P(\Phi): P(X) \to P(\lie{g})$ the map induced by $\Phi$. The space of gauged holomorphic maps admits a formal symplectic structure depending on a choice of invariant metric on $\lie{g}$, so that the action of the group of gauge transformations is formally Hamiltonian. A {\em symplectic vortex} is a pair $(A,u)$ in the zero level set of the moment map, that is, a pair such that $$ \overline{\partial}_A u = 0, \quad F_A + u^* (P \times_G \Phi) \on{Vol}_\Sigma = 0 .$$ Thus the moduli space $M(\Sigma,X)$ of symplectic vortices is the symplectic quotient of the space of gauged maps by the group of gauge transformations. In certain cases where the moduli spaces are compact, Cieliebak, Gaio, Mundet, and Salamon \cite{ci:symvortex} and Mundet \cite{mun:ham} constructed invariants, which we will call {\em gauged Gromov-Witten invariants}, by integration over these moduli spaces. In general $M(\Sigma,X)$ admits a compactification $\overline{M}(\Sigma,X)$ consisting of {\em polystable symplectic vortices}, given by allowing $u$ to develop holomorphic sphere bubbles in the fibers of $P \times_G X$.
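For orientation, we recall the energy functional behind these equations; in the conventions of Cieliebak--Gaio--Salamon \cite{ciel:vor} (this normalization is quoted for background only and is not used verbatim below), vortices arise as absolute minimizers, in a fixed homology class, of the Yang--Mills--Higgs energy

```latex
E(A,u) \;=\; \frac{1}{2} \int_\Sigma
  \Bigl( |{\on{d}}_A u|^2 + |F_A|^2 + |\Phi(u)|^2 \Bigr)\, \on{Vol}_\Sigma ,
```

where the norms are taken with respect to the chosen invariant metric on $\lie{g}$ and the metric on $X$ determined by $J$.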
A polystable vortex is {\em strongly stable} if the principal component has finite automorphism group, and {\em regular} if a certain linearized operator is surjective, that is, if the moduli space is formally smooth. Our main result is the following: \begin{theorem} \label{main} Let $\Sigma,X$ be as above. A regular strongly stable symplectic vortex from $\Sigma$ to $X$ admits a universal stratified-smooth deformation. \end{theorem} \noindent Using the deformations constructed in Theorem \ref{main} we prove that the moduli space $\overline{M}^{\reg}(\Sigma,X)$ of regular strongly stable symplectic vortices admits the structure of an oriented stratified-smooth topological orbifold, and (non-canonically) the structure of a $C^1$-orbifold. The first statement implies that if $\overline{M}^\reg(\Sigma,X)$ is compact then it carries a rational fundamental class. The second statement implies, for example, that if the target carries a group action then the usual equivariant localization theorems hold for the induced group action on the moduli space. In the case that $X$ is a smooth projective variety, algebraic methods explained in \cite{cross} give similar results and provide virtual fundamental classes on the moduli space. However, the symplectic gluing construction is interesting in its own right, not least because it potentially extends to the case of Lagrangian boundary conditions. We understand that a forthcoming paper of Mundet i Riera and Tian gives a gluing construction for two symplectic vortices, when the structure group is the circle group. \noindent {\em Acknowledgments:} We thank Ignasi Mundet i Riera, Melissa Liu, and Robert Lipshitz for helpful comments and discussions.
\section{Deformations of holomorphic curves} The following section is essentially a review of material that can be found at the beginning of Siebert \cite{sie:gen}, with a few additional comments incorporating terminology of Hofer, Wysocki, and Zehnder \cite[Appendix]{ho:sc}. In the first part we review the holomorphic construction of universal deformations of stable curves. In the second part, we study smooth deformations of curves. \subsection{Holomorphic families of stable curves} \label{stablecurves} A compact, complex {\em nodal curve} $\mul{\Sigma}$ is obtained from a collection $(\Sigma_1,\ldots,\Sigma_k)$ of smooth, compact, complex curves by identifying a collection of distinct {\em nodal points} $$\mul{w} = \{ \{ w_1^-,w_1^+ \}, \ldots, \{ w_m^-,w_m^+ \} \} .$$ For $l = 1,\ldots, m$, we denote by $\Sigma_{i^\pm(l)}$ the components such that $w_l^\pm \in \Sigma_{i^\pm(l)}$. A point $z \in \mul{\Sigma}$ is {\em smooth} if it is not equal to any of the nodal points. A {\em marked nodal curve} is a nodal curve together with a collection $\mul{z} = (z_1,\ldots, z_n)$ of distinct, smooth points. An {\em isomorphism} of marked nodal curves from $(\mul{\Sigma}_0,\mul{z}_0)$ to $(\mul{\Sigma}_1,\mul{z}_1)$ is an isomorphism $\phi: \mul{\Sigma}_0 \to \mul{\Sigma}_1$ of nodal curves such that $\phi(z_{0,i}) = z_{1,i}$ for $i = 1,\ldots, n$. A marked nodal curve is {\em stable} if it has finite automorphism group, that is, each component of genus zero contains at least three marked or nodal points, and each component of genus one contains at least one such special point. The {\em combinatorial type} $\Gamma(\mul{\Sigma})$ of $\mul{\Sigma}$ is the graph whose vertices are the components and whose edges are the nodes and markings of $\mul{\Sigma}$. The map $\mul{\Sigma} \mapsto \Gamma(\mul{\Sigma})$ extends to a functor from the category of marked nodal curves to the category of graphs.
In particular, there is a canonical homomorphism $ \on{Aut} (\mul{\Sigma}) \to \on{Aut} (\Gamma(\mul{\Sigma}))$, whose kernel is the product of the automorphism groups of the components of $\mul{\Sigma}$. Let $S$ be a complex variety (or scheme). A {\em family of nodal curves} over $S$ is a complex variety $\mul{\Sigma}_S$ equipped with a proper flat morphism $\pi: \mul{\Sigma}_S \to S$, such that each fiber $\mul{\Sigma}_s, s \in S$, is a nodal curve. A {\em deformation} of a marked nodal curve $\mul{\Sigma}$ is a germ of a family of marked nodal curves $\mul{\Sigma}_S$ over a pointed space $(S,0)$, together with an isomorphism $\varphi: \mul{\Sigma}_0 \to \mul{\Sigma}$ of the {\em central fiber} $\mul{\Sigma}_0$ with $\mul{\Sigma}$. A deformation $(\mul{\Sigma}_S, \varphi)$ of $\mul{\Sigma}$ is {\em versal} iff any other deformation $(\mul{\Sigma}'_{S'} \to S', \varphi')$ is induced from a map $\psi: S' \to S$, in the sense that there exists an isomorphism $\phi$ of $\mul{\Sigma}'_{S'}$ with the fiber product $\mul{\Sigma}_S \times_S S'$ in a neighborhood of the central fiber $ \mul{\Sigma}_0$. A versal deformation is {\em universal} if the map $\phi$ is the unique such map inducing the identity on $\mul{\Sigma}_0$. A deformation has {\em fixed type} if the combinatorial type of the fiber is constant. A {\em universal deformation of fixed type} is a deformation of fixed type which is universal in the above sense for deformations of fixed type. The space $\on{Def}(\mul{\Sigma})$ of {\em infinitesimal deformations} of $\mul{\Sigma}$ is the tangent space $T_0 S $ of the base $S$ of a universal deformation, well-defined up to isomorphism. We write $\on{Def}_\Gamma(\mul{\Sigma})$ for the space of infinitesimal deformations of fixed type.
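For orientation, we recall the standard dimension count (a well-known fact, not needed in the sequel): for a smooth curve $\Sigma$ of genus $g$ with marked points $z_1, \ldots, z_n$ satisfying the stability condition $2g - 2 + n > 0$,

```latex
\dim_{\mathbb{C}} \on{Def}(\Sigma, \mul{z})
  = h^1(\Sigma, T\Sigma[-z_1 - \dots - z_n])
  = 3g - 3 + n .
```

For example, $g = 0$ and $n = 4$ gives a one-dimensional deformation space, matching the cross-ratio coordinate on $M_{0,4}$.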
Let $\tilde{\Sigma}$ be the normalization of $\mul{\Sigma}$, so that $\on{Def}_\Gamma(\mul{\Sigma})$ is isomorphic to the space of deformations of $\tilde{\Sigma}$ equipped with the additional markings $w_1^\pm, \ldots, w_m^\pm$ obtained by lifting the nodes. The general theory of deformations, see for example \cite{dou:ana} in the analytic setting, shows that any marked nodal curve $\mul{\Sigma}$ admits a versal deformation with smooth parameter space $S$; $\mul{\Sigma}$ admits a universal deformation $\mul{\Sigma}_S \to S$ if and only if $\mul{\Sigma}$ is stable. Furthermore, the space $\on{Def}(\mul{\Sigma})$ of infinitesimal deformations admits a canonical isomorphism with $H^{0,1} ( \mul{\Sigma}, T \mul{\Sigma}[ - z_1 - \ldots - z_n ]) ,$ where $T\mul{\Sigma}[-z_1 - \dots - z_n]$ is the sheaf of vector fields vanishing at $z_1, \dots, z_n$. The relationship between the various deformation spaces (in the case with markings, fixed type, etc.) is given as follows. The space of {\em infinitesimal automorphisms} $ \on{aut} (\mul{\Sigma},\mul{z})$ of $(\mul{\Sigma},\mul{z})$ is the space $\on{Vect}(\mul{\Sigma},\mul{z}) = H^0( \mul{\Sigma}, T \mul{\Sigma}[ - z_1 - \ldots - z_n ]) $ of holomorphic vector fields vanishing at the marked points. The short exact sequence of sheaves $$ 0 \to T \mul{\Sigma} [ - z_1 - \ldots - z_n ] \to T \mul{\Sigma} \to \bigoplus_{i=1}^n T_{z_i} \mul{\Sigma} \to 0$$ gives a long exact sequence in cohomology \cite[p. 94]{hm:mc} $$ 0 \to \on{Vect}(\mul{\Sigma},\mul{z}) \to \on{Vect}(\mul{\Sigma}) \to \bigoplus_{i=1}^n T_{z_i} \mul{\Sigma} \to \on{Def}(\mul{\Sigma},\mul{z}) \to \on{Def}(\mul{\Sigma}) \to 0.
$$ From now on, we omit the markings from the notation, and study deformations of a marked nodal curve $\mul{\Sigma} = (\mul{\Sigma}, \mul{z})$. By $T_{w_i^\pm} \mul{\Sigma}$ we mean the tangent space in the component of $\mul{\Sigma}$ containing $w_i^\pm$. A {\em gluing parameter} for the $i$-th node is an element $ \delta_i \in T_{w_i^+} \mul{\Sigma} \otimes T_{w_i^-} \mul{\Sigma} .$ The canonical conormal sequence \cite[p. 100]{hm:mc} gives rise to an exact sequence \begin{equation} \label{type} 0 \to \on{Def}_\Gamma(\mul{\Sigma}) \to \on{Def}(\mul{\Sigma}) \to \bigoplus_{i=1}^m T_{w_i^+} \mul{\Sigma} \otimes T_{w_i^-} \mul{\Sigma} \to 0 .\end{equation} After trivialization of the tangent spaces, the gluing parameters are identified with complex numbers. Universal deformations of a smooth marked curve can be constructed, for example, using Teichm\"uller theory \cite{earle:fibre} or by Hilbert scheme methods \cite[p. 102]{hm:mc}. Later we will need an explicit gluing construction of a universal deformation of a stable marked curve. This construction seems to be well-known, but the only proof we could find in the literature is Siebert \cite{sie:gen}. The idea is to remove small neighborhoods of the nodes, and glue the remaining components together. A {\em local coordinate near a smooth point} $z \in \mul{\Sigma}$ is a neighborhood $U$ of $z$ together with a holomorphic isomorphism $\kappa$ of $U$ with a neighborhood of $0$ in the tangent line $T_{z} \mul{\Sigma}$, whose differential $T_0 U \to T_z \mul{\Sigma}$ is the identity.
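To illustrate \eqref{type}: let $\mul{\Sigma}$ be the union of two projective lines joined at a single node $\{w^-, w^+\}$, with two marked points on each component. Each component then carries exactly three special points, so $\on{Def}_\Gamma(\mul{\Sigma}) = 0$ and \eqref{type} reduces to

```latex
0 \to 0 \to \on{Def}(\mul{\Sigma}) \to
  T_{w^+} \mul{\Sigma} \otimes T_{w^-} \mul{\Sigma} \cong \mathbb{C} \to 0 .
```

Thus the single gluing parameter $\delta$ serves as a local coordinate on the one-dimensional space $\overline{M}_{0,4}$ near the boundary point $[\mul{\Sigma}]$; in local coordinates $x = \kappa^+$, $y = \kappa^-$ near the node, the glued curves form the standard node-smoothing family $xy = \delta$.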
\begin{remark} \label{convex} The space of local coordinates near $z$ is convex, since if $\kappa_0,\kappa_1$ are local coordinates then any combination $t \kappa_0 + (1-t)\kappa_1 $ is still holomorphic and has the same differential at $z$, and so by the inverse function theorem is a holomorphic isomorphism in a neighborhood of $z$. \end{remark} \noindent Any gluing parameter $\delta_i$ induces an identification $$ T_{w_i^+} \mul{\Sigma}- \{ 0 \} \to T_{w_i^-} \mul{\Sigma} - \{ 0 \}, \ \ \lambda_i^+ \mapsto \delta_i/ \lambda_i^+ .$$ Given local coordinates for the nodes of $\mul{\Sigma}$ and a set of gluing parameters $\mul{\delta} = (\delta_1,\ldots,\delta_m)$, define a (possibly nodal) curve $\mul{\Sigma}^{\mul{\delta}}$ by gluing together small disks around the node $w_i$ by $z \mapsto \delta_i/z$, for every gluing parameter $\delta_i$ that is non-zero, where $z$ is the local coordinate given by $\kappa_i$. That is, \begin{equation} \label{glue1} \mul{\Sigma}^{\mul{\delta}} = \bigcup_{i=1}^k \Sigma_i - \{ w_1^\pm, \ldots, w_m^\pm \} / ( z \sim (\kappa_i^+)^{-1} (\delta_i / \kappa_i^-(z) ) , i = 1,\ldots, m ) \end{equation} for pairs of points in the two components such that both coordinates are defined. In particular, the choice of local coordinates near the nodes defines a splitting of the sequence \eqref{type}. The gluing construction works in families as follows. Let $I^{i,\pm}_\Gamma \to S_\Gamma $ resp. $\mul{I}_\Gamma \to S_\Gamma$ denote the vector bundle whose fiber at $s \in S_\Gamma$ is the tangent line at the $i$-th node resp.
tensor product of tangent lines at the nodes, \begin{equation} \label{gluingpar} I^{i,\pm}_{\Gamma,s} = T_{w_{i,s}^\pm} \mul{\Sigma}_s, \quad \mul{I}_{\Gamma,s} = \bigoplus_{i=1}^m T_{w_{i,s}^-} \mul{\Sigma}_s \otimes T_{w_{i,s}^+} \mul{\Sigma}_s .\end{equation} Let $\mul{\Sigma}_{S_\Gamma} \to S_\Gamma$ be a family of nodal curves of the same combinatorial type $\Gamma$, with nodal points $(w_{S_\Gamma,i}^\pm)_{i = 1}^m$. A {\em holomorphic system of local coordinates for the $i$-th node} is a holomorphic map $\kappa_i$ from a neighborhood $U_{i,\pm}$ of the zero section in $ I^{i,\pm}_\Gamma$ to $\mul{\Sigma}_{S_\Gamma}$ which is an isomorphism onto its image and induces the identity at any point in the zero section. Given a holomorphic system of coordinates for each node $\mul{\kappa} = (\kappa_1^+,\kappa_1^-, \ldots, \kappa_m^+,\kappa_m^-)$, the gluing construction \eqref{glue1} produces a family $ \mul{\Sigma}_S \to S $ over an open neighborhood $S$ of the zero section in the bundle $\mul{I}_\Gamma \to S_\Gamma$ of gluing parameters. \begin{theorem} \label{aut} \cite[Proposition 2.4]{sie:gen} If $\mul{\Sigma}_{S_\Gamma}$ is a family giving a universal deformation of fixed type, then $\mul{\Sigma}_S $ is a universal deformation of any of its fibers, and in particular is independent, up to isomorphism of deformations, of the choice of local coordinates $\mul{\kappa}$. \end{theorem} \noindent The following properties of universal deformations of stable curves will be used later: \begin{theorem} \label{props} \cite[Lemma 2.7]{sie:gen} For any universal deformation $\mul{\Sigma}_S$, the action of the automorphism group $ \on{Aut} (\mul{\Sigma})$ of $\mul{\Sigma}$ extends to an action of $ \on{Aut} (\mul{\Sigma})$ on $\mul{\Sigma}_S$, possibly after shrinking $S$.
For any universal deformation, there exists a neighborhood of the central fiber such that two fibers of $\mul{\Sigma}_S$ contained in the neighborhood are isomorphic if and only if they are related by an automorphism of $\mul{\Sigma}$. \end{theorem} If $\mul{\Sigma}$ is not stable, then the above construction produces a {\em minimal versal deformation} of $\mul{\Sigma}$. That is, $\mul{\Sigma}_S \to S$ is versal, and any other versal deformation given by a family $\mul{\Sigma}'_{S'} \to S'$ is obtained by pull-back by a map $S' \to S$. Algebraic families of connected stable nodal curves with genus $g$ and $n$ markings form the objects of a smooth {\em Deligne-Mumford stack} $\overline{M}_{g,n}$ \cite{dm:irr}, which admits a coarse moduli space with the structure of a normal projective variety. The maps $\on{Def}(\mul{\Sigma}) \to \overline{M}_{g,n}, \ s \mapsto [\mul{\Sigma}_s] $ (restricted to a neighborhood of $0$) provide $\overline{M}_{g,n}$ with an atlas of holomorphic orbifold charts. \subsection{Stratified-smooth families of stable curves} \label{parsmooth} We extend the definition of families and deformations to smooth and stratified-smooth settings. Given a family $\mul{\Sigma}_S \to S$ of compact complex nodal curves, let $$S = \bigcup S_\Gamma,\quad S_\Gamma = \{ s \in S \ | \ \Gamma(\mul{\Sigma}_s) = \Gamma \} $$ denote the stratification by combinatorial type of the fiber. It follows from the gluing construction of the previous section that if $\mul{\Sigma}_S \to S$ is a family giving a universal deformation, then each $S_\Gamma$ is a smooth manifold, and the restriction $\mul{\Sigma}_{\Gamma,S_\Gamma}$ of $\mul{\Sigma}_{S}$ to $S_\Gamma$ gives a universal deformation of fixed type $\Gamma$.
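For instance (a minimal sketch), if $\mul{\Sigma}$ is stable with a single node of type $\Gamma$, the base of the universal deformation constructed by gluing splits, after a choice of local coordinates, as a product of $\on{Def}_\Gamma(\mul{\Sigma})$ with a disk of gluing parameters $\delta$, and the stratification has exactly two strata:

```latex
S_{\Gamma} = S \cap \{ \delta = 0 \} \quad (\text{nodal fibers}), \qquad
S_{\Gamma'} = S \cap \{ \delta \neq 0 \} \quad (\text{smooth fibers}),
```

where $\Gamma'$ is the combinatorial type obtained from $\Gamma$ by smoothing the node.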
By a {\em smooth family} of curves of fixed type $\Gamma$ we mean a fiber bundle $\mul{\Sigma}_{\Gamma,S_\Gamma} \to S_\Gamma$ with fibers of type $\Gamma$ and smoothly varying complex structure. In the nodal case, it is obtained from a smooth family of smooth holomorphic curves, identified using a collection of pairs of smooth sections (nodes). \begin{lemma} \label{smoothhol} Holomorphic universal deformations of fixed type are also universal in the category of smooth deformations of $\mul{\Sigma}$. That is, let $\mul{\Sigma}_S \to S , \varphi$ be a universal holomorphic deformation of fixed type of a nodal curve $\mul{\Sigma}$. Any smooth deformation $\mul{\Sigma}_{S'}' \to S', \varphi'$ of nodal curves of fixed type is obtained by pull-back of $\mul{\Sigma}_S \to S$ by a smooth map $S' \to S$. \end{lemma} \begin{proof} By the construction of local slices for the action of diffeomorphisms in \cite{earle:fibre}, \cite[Chapter 9]{rob:const}. \end{proof} Similarly we can define {\em continuous} families of holomorphic curves, which correspond to continuous maps $S' \to S$ to the parameter space $S$ for a universal holomorphic deformation. The following spells out the definition without reference to the universal holomorphic deformation. \begin{definition} Let $\Gamma_0,\Gamma_1$ be graphs. A {\em simple contraction} $\tau$ is a pair consisting of a map $\operatorname{Vert}(\tau): \operatorname{Vert}(\Gamma_0) \to \operatorname{Vert}(\Gamma_1)$ and a bijection $\on{Edge}(\tau): \on{Edge}(\Gamma_0) \to \on{Edge}(\Gamma_1) \cup \{ \emptyset \}$ such that $\Gamma_1$ is obtained from $\Gamma_0$ by identifying the head and tail of the {\em contracting edge} $e$ such that $\on{Edge}(\tau)(e) = \emptyset$. A {\em contraction} is a sequence of simple contractions.
\end{definition} \begin{definition} \label{familycurves} A {\em continuous family} of nodal holomorphic curves consists of topological spaces $\mul{\Sigma}_S$ and $S$, a surjection $\mul{\Sigma}_S \to S$, and a collection of (possibly nodal) holomorphic structures $j_{{\mul{\Sigma}}_s}$ on the fibers $\mul{\Sigma}_s, s \in S$, which vary continuously in $s$ in the following sense: for every $s_0 \in S$ and every nearby combinatorial type $\Gamma$, there exist for $s$ in a neighborhood of $s_0$ \begin{enumerate} \item contractions $\tau_s : \Gamma(\mul{\Sigma}_{s_0}) \to \Gamma $, constant in $s \in S_\Gamma$; \item for every node $ \{ w_i^\pm \}$ collapsed under $\tau_s$, a pair of local coordinates $\kappa^\pm_i: W^\pm_i \to \mathbb{C}$; \item for every component $\Sigma_{s_0,i}$ of $\mul{\Sigma}_{s_0}$, a real number $\eps_s > 0$ and maps $$\phi_{i,s}: \Sigma_{s_0,i} - \cup_{w_k^\pm \in \Sigma_{s_0,i}, \tau_s(w_k^\pm) = \emptyset} (\kappa_k^\pm)^{-1}(B_{\eps_s}) \to \mul{\Sigma}_{s,\tau_s(i)}$$ \end{enumerate} such that \begin{enumerate} \item for any $s$, the images of the maps $\phi_{i,s}$ cover $\mul{\Sigma}_{s}$; \item for any nodal point $w_i^\pm$ of $\mul{\Sigma}_s$ joining components $\Sigma_{s,i^\pm(k)}$, there exists a constant $\lambda_s \in \mathbb{C}^*$ such that $(\kappa_k^+ \circ \phi_{s,i^+(k)}^{-1} \circ \phi_{s,i^-(k)} \circ (\kappa_k^-)^{-1})(z) = \lambda_s z $, where the former is defined, and $\lambda_s \to 0$ as $s \to s_0$;
\item for any $z \in \Sigma_{s_0,i}$ in the complement of the $W_{k,s}^\pm$, $\lim_{s \to s_0}(\phi_{i,s}(z)) = z$; \item $\phi_{i,s}^* j_{\Sigma_{s,\tau_s(i)}}$ converges to $ j_{\Sigma_{s_0,i}}$ uniformly in all derivatives on compact sets; \item if $z_i$ is contained in $\Sigma_{s_0,k}$, then $z_i = \lim_{s \to s_0} \phi_{s,k}^{-1}(z_{i,s})$. \end{enumerate} A {\em stratified-smooth} family of curves is a continuous family $\mul{\Sigma}_S \to S$ over a stratified base $S = \bigcup_{\Gamma} S_\Gamma$ such that the restriction $\mul{\Sigma}_{S_\Gamma}$ of $\mul{\Sigma}_S$ to $S_\Gamma$ is a smooth family of fixed type $\Gamma$. A {\em stratified-smooth deformation} of a nodal curve $\mul{\Sigma}$ is a germ of a stratified-smooth family of nodal curves $\mul{\Sigma}_S$ equipped with an isomorphism of the central fiber $\mul{\Sigma}_0$ with $\mul{\Sigma}$. A {\em universal stratified-smooth deformation} of $\mul{\Sigma}$ is a deformation with the property that any other stratified-smooth deformation $\mul{\Sigma}_{S'}' \to S'$ is obtained by pull-back by maps $\psi: S' \to S$, $\phi: \mul{\Sigma}_S \times_S S' \to \mul{\Sigma}'_{S'}$, and any two isomorphisms $\phi,\phi'$ inducing the identity on $\mul{\Sigma}$ are equal. \end{definition} Any universal holomorphic deformation is also a universal stratified-smooth deformation, essentially by Lemma \ref{smoothhol}.
In the stratified-smooth setting, the analog of Theorem \ref{props} fails, and we need an additional definition: \begin{definition} \label{excellent} A universal stratified-smooth deformation $(\pi: \mul{\Sigma}_S \to S,\phi)$ is {\em strongly universal} if $\pi$ is a universal deformation of any of its fibers, and two fibers of $\pi$ are isomorphic if and only if they are related by the action of $ \on{Aut} (\mul{\Sigma})$. \end{definition} The construction of universal deformations extends to the smooth setting as follows. Let $\mul{\Sigma}_{S_\Gamma} \to S_\Gamma$ be a smooth family of curves of fixed type $\Gamma$. A {\em smooth system of local coordinates for the $i$-th node of $\mul{\Sigma}_{S_\Gamma}$} is a smooth map $\kappa_i$ from a neighborhood $U_{i,\pm}$ of the zero section in $ I^{i,\pm}_\Gamma$ to $\mul{\Sigma}_{S_\Gamma}$ which is an isomorphism onto its image and induces the identity at any point in the zero section. Given a universal deformation $(\mul{\Sigma}_{S_\Gamma} \to S_\Gamma,\varphi)$ of fixed type $\Gamma$ and a smooth system of local coordinates, applying the gluing construction \eqref{glue1} gives a smooth family $ \mul{\Sigma}_S \to S$ over an open neighborhood $S$ of $0$ in $\on{Def}(\mul{\Sigma})$. We may identify $S$ with $\on{Def}(\mul{\Sigma})$, for simplicity of notation. \begin{theorem} \label{log} Let $\mul{\Sigma}$ be a stable curve. The family $\mul{\Sigma}_S \to S \subset \on{Def}(\mul{\Sigma})$ constructed by gluing from a family $\mul{\Sigma}_{\Gamma,S} \to S \subset \on{Def}_\Gamma(\mul{\Sigma})$ of fixed type, using any smooth family of local coordinates $\mul{\kappa}$ near the nodes, gives a strongly universal stratified-smooth deformation of $\mul{\Sigma}$.
\end{theorem} \begin{proof} Let $\mul{\Sigma}_{S^{\mul{\kappa}}}^{\mul{\kappa}} \to S^{\mul{\kappa}}$ be a family constructed via gluing using a smooth family of local coordinates $\mul{\kappa}$ as in \eqref{glue1}, and $\mul{\Sigma}_S \to S$ a universal deformation using a holomorphic family of local coordinates by the same construction \eqref{glue1}. By universality, there exists a map $\psi: S^{\mul{\kappa}} \to S$ so that $\mul{\Sigma}_{\psi(s)} \cong \mul{\Sigma}_s^{\mul{\kappa}}$. It suffices to show that $\psi$ is a diffeomorphism on each stratum. Consider the canonical map from $T_{\mul{\delta}} S^{\mul{\kappa}}$ to $\on{Def}(\mul{\Sigma}^{\mul{\delta}})$, which maps an infinitesimal change in the parameter space $S^{\mul{\kappa}}$ to the corresponding infinitesimal deformation of $\mul{\Sigma}^{\mul{\delta}}$, which we identify with an element of $\Omega^0(\mul{\Sigma}^{\mul{\delta}}, \on{End}(T \mul{\Sigma}^{\mul{\delta}}))$. Let $U \subset \mul{\Sigma}^{\mul{\delta}}$ denote the gluing region, that is, the image of the union of domains of the local coordinates. The deformations generated by the gluing parameters are supported in the gluing region $U$. On the other hand, linearly independent deformations of fixed type in $\on{Def}_\Gamma(\mul{\Sigma})$ generate deformations of the glued curve that are linearly independent on $\mul{\Sigma}^{\mul{\delta}} - U$, for sufficiently small $U$. (The generated deformations will not vanish on $U$, because of the varying local coordinates.)
Thus the map $\on{Def}_\Gamma(\mul{\Sigma}) \to \Omega^0(\mul{\Sigma}^{\mul{\delta}} - U)$ is injective; it follows that $TS^{\mul{\kappa}} \to \on{Def}(\mul{\Sigma}^{\mul{\delta}})$ is injective, hence an isomorphism by a dimension count. This shows that the map $S^{\mul{\kappa}} \to S$ is a covering. Let $\mul{\kappa}_t$ be a family of local coordinates interpolating between $\mul{\kappa}$ and a holomorphic family. The corresponding family $\psi_t$ interpolates between the identity and $\psi$. Since each $\psi_t$ is a covering and $\psi_0$ is the identity, each $\psi_t$ is a diffeomorphism. \end{proof} The strongly universal deformations above, defined using smooth families of local coordinates, provide smooth orbifold charts on $\overline{M}_{g,n}$. Since the space of local coordinates is convex, one can construct the local coordinates for each stratum compatibly. Namely, let $\Gamma'$ be a combinatorial type degenerating to $\Gamma$. Local coordinates for the nodes of $M_{g,n,\Gamma}$ induce local coordinates for $M_{g,n,\Gamma'}$, in a neighborhood of $M_{g,n,\Gamma}$, via the gluing construction \eqref{glue1}. \begin{definition} \label{compatcoord} A {\em compatible system of local coordinates} for $\overline{M}_{g,n}$ is a system of local coordinates for the nodes of each stratum $M_{g,n,\Gamma}$, so that the local coordinates on any stratum $M_{g,n,\Gamma'}$ are induced from those on $M_{g,n,\Gamma}$, in a neighborhood of $M_{g,n,\Gamma}$. \end{definition} \noindent Compatible systems of local coordinates can be constructed by induction on the dimension of $M_{g,n,\Gamma}$, using the convexity of the space of local coordinates in Remark \ref{convex}. One can modify the gluing construction above by choosing a different smooth structure on the space of gluing parameters.
In the language of Hofer, Wysocki and Zehnder \cite[Appendix]{ho:sc}: \begin{definition} \label{glueprof} A {\em gluing profile} is a diffeomorphism $\varphi: (0,1] \to [0,\infty)$. The diffeomorphism given by $\varphi(\delta) = -1 + 1/\delta$ will be called the {\em standard gluing profile}; $\varphi(\delta) = e^{1/\delta} - e $ will be called the {\em exponential gluing profile}, and $\varphi(\delta) = -\ln(\delta) $ the {\em logarithmic gluing profile}. \end{definition} The set of gluing profiles naturally forms a partially ordered set: write $\varphi_1 \geq \varphi_0$ and say $\varphi_1$ is {\em softer than} $\varphi_0$ if $\varphi_1^{-1} \circ \varphi_0$ extends to a diffeomorphism of $[0,1]$. Write $\varphi_1 > \varphi_0$ and say that $\varphi_1$ is {\em strictly softer} than $\varphi_0$ if the derivatives of $\varphi_1^{-1} \circ \varphi_0: [0,1] \to [0,1]$ vanish at $0$. The exponential gluing profile, standard gluing profile, and logarithmic gluing profile form a decreasingly soft sequence in this partial order. Fix a gluing profile $\varphi$, and consider once again the gluing construction.
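The arithmetic of these profiles can be checked numerically. The following sketch (ours, for illustration only, not part of the text) implements the three profiles and the composition $\varphi^{-1}(\varphi(\delta_1) + \varphi(\delta_2))$, which arises later when a component between two nodes is collapsed by a forgetful map:

```python
import math

# The three gluing profiles phi: (0,1] -> [0, infinity) from the text,
# each paired with its inverse.
profiles = {
    "standard":    (lambda d: -1.0 + 1.0 / d,
                    lambda x: 1.0 / (1.0 + x)),
    "exponential": (lambda d: math.exp(1.0 / d) - math.e,
                    lambda x: 1.0 / math.log(x + math.e)),
    "logarithmic": (lambda d: -math.log(d),
                    lambda x: math.exp(-x)),
}

def compose_node(name, d1, d2):
    """phi^{-1}(phi(d1) + phi(d2)): the gluing parameter obtained when a
    component separating two nodes is collapsed."""
    phi, phi_inv = profiles[name]
    return phi_inv(phi(d1) + phi(d2))

# For the logarithmic profile the composition is exactly multiplication,
# which is smooth in (d1, d2).
assert math.isclose(compose_node("logarithmic", 0.3, 0.2), 0.06)

# For the standard profile the composition is the rational function
# d1*d2 / (d1 + d2 - d1*d2), which fails to be smooth at the origin.
d1, d2 = 0.3, 0.2
assert math.isclose(compose_node("standard", d1, d2),
                    d1 * d2 / (d1 + d2 - d1 * d2))
```

For the logarithmic profile the composition is precisely $\delta_1\delta_2$, smooth in the gluing parameters; for the standard profile one obtains a rational function singular at the origin.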
\begin{definition} Given a nodal curve $\mul{\Sigma}$ with local coordinates $\mul{\kappa}$ near the nodes, and a collection of gluing parameters $\mul{\delta} = (\delta_1,\ldots,\delta_m)$, the {\em glued curve} $\mul{\Sigma}^{\mul{\delta},\varphi,\mul{\kappa}}$ is defined by gluing together small disks: \begin{equation} \label{gluecurve} \mul{\Sigma}^{\mul{\delta},\varphi,\mul{\kappa}} := \left( \bigcup_{i=1}^k \Sigma_i - \{ w_1^\pm, \ldots, w_m^\pm \} \right) / \sim \end{equation} \noindent where the equivalence relation $\sim$ is given by $$ z \sim (\kappa_i^+)^{-1}\left( \exp( - \varphi(|\delta_i|) + i \arg(\delta_i))/\kappa_i^-(z) \right) , \quad z \in U_i^-, \quad i = 1,\ldots, m .$$ More generally, given a family $\mul{\Sigma}_{S_\Gamma} \to S_\Gamma$ of curves of constant combinatorial type $\Gamma$ and a system of local coordinates near the nodes $ \mul{\kappa}$, the construction \eqref{gluecurve} produces a family of curves $ \mul{\Sigma}_{S^{\mul{\kappa},\varphi}}^{\mul{\kappa},\varphi} \to S^{\mul{\kappa},\varphi} $ where $S^{\mul{\kappa},\varphi}$ is the product of $S_\Gamma$ with the space of gluing parameters. \end{definition} Let $\mul{\Sigma}$ be a compact, complex nodal curve. For any gluing profile $\varphi$ and any collection $\mul{\kappa}$ of local coordinates near the nodes, the family $ \mul{\Sigma}_{S^{\mul{\kappa},\varphi}}^{\mul{\kappa},\varphi} \to S^{\mul{\kappa},\varphi} $ is a stratified-smooth strongly universal deformation, since it is so for the standard gluing profile. Let $\mul{\kappa} = (\mul{\kappa}_\Gamma)$ be a compatible system of local coordinates near the nodes, for each combinatorial type $\Gamma$.
Each stratified-smooth universal deformation above defines a {\em classifying map} \begin{equation} \label{gluemaps} S^{\mul{\kappa},\varphi} / \on{Aut} (\mul{\Sigma}) \to \overline{M}_{g,n}, \quad s \mapsto [\mul{\Sigma}_s] \end{equation} which is a homeomorphism onto its image, possibly after shrinking the parameter space $ S^{\mul{\kappa},\varphi} $. (To give a precise meaning to ``classifying map'' it is necessary to pass to the stacks-theoretic viewpoint, which we do not discuss here.) The maps \eqref{gluemaps} provide $\overline{M}_{g,n}$ with a compatible set of stratified-smooth orbifold charts, since the transition maps are the identity on the space of gluing parameters by construction, and smooth on each stratum. We denote by $\overline{M}_{g,n}^{\mul{\kappa},\varphi}$ the smooth structure on $\overline{M}_{g,n}$ defined by the system of local coordinates $\mul{\kappa}$ near the nodes and the gluing profile $\varphi$; the use of this smooth structure seems to have been suggested by Hofer. These smooth structures might depend on the choice of $\mul{\kappa}$, except in the case of the logarithmic gluing profile, for which one has a canonical smooth structure. The forgetful maps with respect to these non-standard smooth structures have regularity properties that are worse than those with respect to the standard smooth structure. For $2g + n > 3$ we have forgetful morphisms $f_i: \overline{M}_{g,n} \to \overline{M}_{g,n-1}$ given by forgetting the $i$-th marking and collapsing unstable components. There are two possibilities: a genus zero component with one marking and two nodes is replaced by a point; a genus zero component with two markings and one node is replaced by a single marking. For any gluing profile, the maps $f_i$ are smooth away from the locus where collapsing occurs.
We say a local coordinate on a genus zero curve is {\em standard} if it extends to an isomorphism with the projective line. The forgetful morphism $f_i$ is smooth near the locus of components with one node and two markings if the local coordinates are standard and $\delta \mapsto \exp(\varphi(\delta))^{-1}$ is smooth, that is, if $\varphi$ is at least as hard as the logarithmic gluing profile. The forgetful morphism $f_i$ is smooth near the locus of curves containing components with two nodes and one marking if the map $(\delta_1,\delta_{2}) \mapsto \varphi^{-1}( \varphi(\delta_1) + \varphi(\delta_{2})) $ is smooth. For example, for the logarithmic gluing profile this map is $(\delta_1, \delta_2) \mapsto \delta_1 \delta_2$, which is smooth, while for the standard gluing profile collapsing a component gives the map $ (\delta_1,\delta_2) \mapsto \delta_1 \delta_2 / (\delta_1 + \delta_2 - \delta_1 \delta_2) $ in the local gluing parameters, which is not smooth at the origin. \section{Deformations of holomorphic maps from curves} \label{maps} This section reviews the construction of stratified-smooth universal deformations for stable (pseudo)holomorphic maps. The proof relies on a gluing theorem of the sort given by Ruan-Tian \cite{rt:hi}; our approach follows that of McDuff-Salamon \cite{ms:jh}, who treat the genus zero case. A different set-up for gluing is described in Fukaya-Oh-Ohta-Ono \cite{fooo}, and explained in more detail in Abouzaid \cite{ab:ex}. The gluing construction gives rise to charts for the moduli space of regular stable maps. \subsection{Stable maps} Let $(X,\omega)$ be a compact symplectic manifold and $\mathcal{J}(X)$ the space of compatible almost complex structures on $X$. Let $J \in \mathcal{J}(X)$.
\begin{definition} A {\em marked nodal $J$-holomorphic map} to $X$ consists of a nodal curve $\mul{\Sigma}$, a collection $\mul{z} = (z_1,\ldots, z_n)$ of distinct, smooth points on $\mul{\Sigma}$, and a $J$-holomorphic map $\mul{u}: \mul{\Sigma} \to X$. An {\em isomorphism} of marked nodal maps from $(\mul{\Sigma}_0, \mul{z}_0, \mul{u}_0)$ to $(\mul{\Sigma}_1, \mul{z}_1, \mul{u}_1)$ is an isomorphism of nodal curves $\mul{\psi}: \mul{\Sigma}_0 \to \mul{\Sigma}_1$ such that $\mul{\psi}(z_{0,i}) = z_{1,i}$ for $i=1, \ldots, n$ and $\mul{u}_1 \circ \mul{\psi} = \mul{u}_0$. A marked nodal map $(\mul{\Sigma},\mul{u},\mul{z})$ is {\em stable} if it has finite automorphism group, or equivalently, each component $\Sigma_i$ of genus zero resp. one on which $u_i$ is constant has at least three resp. one special (nodal or marked) point. The {\em homology class} of a stable map $\mul{u}: \mul{\Sigma} \to X$ is $\mul{u}_* [\mul{\Sigma}] \in H_2(X,\mathbb{Z})$. \end{definition} A \emph{continuous family} of $J$-holomorphic maps over a topological space $S$ is a continuous family of nodal curves $\mul{\Sigma}_S\to S$ (see Definition \ref{familycurves}) together with a continuous map $\mul{u}: \mul{\Sigma}_S\to X$ which is fiberwise holomorphic.
That is, for each $s_0 \in S$ and each nearby combinatorial type $\Gamma$ we have \begin{enumerate} \item a family of contractions $\tau_s : \Gamma(\mul{\Sigma}_{s_0}) \to \Gamma $, constant in $s \in S_\Gamma$; \item for every node $ \{ w_i^\pm \}$ collapsed under $\tau_s$, a pair of local coordinates $\kappa^\pm_i: W^\pm_i \to \mathbb{C}$; \item for every component $\Sigma_{s_0,i}$ of $\mul{\Sigma}_{s_0}$, a real number $\eps_s > 0$ converging to $0$ as $s \to s_0$ and maps $$\phi_{i,s}: \Sigma_{s_0,i} - \bigcup_{w_k^\pm \in \Sigma_{s_0,i}, \tau_s(w_k^\pm) = \emptyset} (\kappa_k^\pm)^{-1}(B_{\eps_s}) \to \mul{\Sigma}_{s,\tau_s(i)}$$ \end{enumerate} such that \begin{enumerate} \item for any $s$, the images of the maps $\phi_{i,s}$ cover $\mul{\Sigma}_{s}$; \item for any nodal point $w_i^\pm$ of $\mul{\Sigma}_s$ joining components $\Sigma_{s,i^\pm(k)}$, there exists a constant $\lambda_s \in \mathbb{C}^*$ such that $(\kappa_k^+ \circ \phi_{s,i^+(k)}^{-1} \circ \phi_{s,i^-(k)} \circ (\kappa_k^-)^{-1})(z) = \lambda_s z $ where defined, and $\lambda_s \to 0$ as $s \to s_0$; \item for any $z \in \Sigma_{s_0,i}$ in the complement of the $W_{k,s}^\pm$, $\lim_{s \to s_0}(\phi_{i,s}(z)) = z$; \item $\phi_{i,s}^* j_{\Sigma_{s,\tau_s(i)}}$ converges to $ j_{\Sigma_{s_0,i}}$ uniformly in all derivatives on compact sets; \item if $z_i$ is contained in $\Sigma_{s_0,k}$, then $z_i = \lim_{s \to s_0} \phi_{s,k}^{-1}(z_{i,s})$; \item $\phi_{i,s}^* u_s$ converges to $u_{s_0}$ uniformly in all derivatives on compact sets. \end{enumerate}

\begin{remark} It follows from the assumption that $u_S: \Sigma_S \to X$ is continuous that the homology class $u_{s,*} [\Sigma_s]$ is locally constant in $s \in S$.
Indeed, continuity implies that for $s$ sufficiently close to $s_0$, $u_S$ is homotopic to a map of the form $v_S \circ \gamma_S$ where $\gamma_S: \Sigma_S \to \Sigma_{s_0}$ is a map to the central fiber $\Sigma_{s_0}$ which collapses the gluing regions to the nodes. Since each $\gamma_s = \gamma_S | \Sigma_s$ maps $[\Sigma_s]$ to $[\Sigma_{s_0}]$, the claim follows. \end{remark}

In particular, taking $S$ to be the topological space given as the closure of the set $S^*$ of rational numbers of the form $1/i$, $i \in \mathbb{Z}_{> 0}$, we say that a sequence of holomorphic maps $u_i: \Sigma_i \to X$ {\em Gromov converges} if it extends to a continuous family over $S$. To state the Gromov compactness theorem, recall that the {\em energy} of a map $u: \Sigma \to X$ is $$ E(u)=\frac{1}{2}\int_\Sigma |du|^2 .$$

\begin{theorem}[Gromov compactness] Let $X,\omega,J$ be as above. Any sequence $u_i: \Sigma_i \to X$ of stable holomorphic maps with bounded energy has a Gromov convergent subsequence. Furthermore, the limit is unique. \end{theorem}

For references and discussion see, for example, \cite[Theorem 1.8]{io:rel}. The definition of Gromov convergence passes naturally to equivalence classes of stable maps. A subset $C$ of $\overline{M}_{g,n}(X,d)$ is {\em Gromov closed} if any sequence in $C$ has a limit point in $C$, and {\em Gromov open} if its complement is Gromov closed. The Gromov open sets form a topology for which any convergent sequence is Gromov convergent, by an argument using \cite[Lemma 5.6.5]{ms:jh}. Furthermore, any convergent sequence has a unique limit. Gromov compactness implies that for any $E> 0$, the union of the spaces $\overline{M}_{g,n}(X,d)$ over $d \in H_2(X,\mathbb{Z})$ with $(d, [\omega]) < E$ is a compact, Hausdorff space.

\begin{definition} \label{ssfam} Let $X,\omega,J$ be as above.
A {\em stratified-smooth} family of nodal $J$-holomorphic maps over a space $S$ is a pair $(\mul{\Sigma}_S,\mul{u}_S)$ consisting of a stratified-smooth family of nodal curves $\mul{\Sigma}_S \to S$ together with a continuous map $\mul{u}_S: \mul{\Sigma}_S \to X$ such that the restriction $\mul{u}_s$ of $\mul{u}_S$ to any fiber $\mul{\Sigma}_s$ is holomorphic, and the restriction of $\mul{u}_S$ to any stratum $\mul{\Sigma}_\Gamma$ is smooth. A {\em stratified-smooth deformation} of a stable $J$-holomorphic map $(\mul{\Sigma}, \mul{u})$ is a germ of a stratified-smooth family $(\mul{\Sigma}_S, \mul{u}_S)$ together with an isomorphism of nodal maps $\iota: \mul{\Sigma}_0 \to \mul{\Sigma}$ such that $\iota^* \mul{u} = \mul{u}_0$. A deformation $(\mul{\Sigma}_S, \mul{u}_S,\iota)$ of $(\mul{\Sigma},\mul{u})$ is {\em versal} if any other germ of a family of marked nodal maps $(\mul{\Sigma}',\mul{u}')$ with central fiber $\mul{\Sigma}_0$ over a base $(S',0)$ is induced from a map $\psi: S' \to S$, in the sense that there exists an isomorphism $\phi: \mul{\Sigma}' \to \mul{\Sigma} \times_S S'$ in a neighborhood of the central fiber $ \mul{\Sigma}_0$, and $\mul{u}'$ is obtained by composing projection on the first factor with $\mul{u}$. A versal deformation is {\em universal} if the map $\phi$ above is the unique map inducing the identity on $\mul{\Sigma}_0$. \end{definition}

\subsection{Smooth universal deformations of regular stable maps of fixed combinatorial type}

Let $\mul{u} :\mul{\Sigma} \to X$ be a stable map.
For $p >2$ define a fiber bundle $\E \to \B$ by $$ \B = \mathcal{J}(\mul{\Sigma}) \times \Map(\mul{\Sigma},X)_{1,p} , \quad \E_{\zeta,\mul{u}} = \Omega^{0,1}(\mul{\Sigma}, \mul{u}^* TX)_{0,p} ,$$ where the latter is the space of $(0,1)$-forms with respect to the pair $(\mul{j}(\zeta),J)$. Consider the Cauchy-Riemann section $$ \overline{\partial}:\B \to \E, \quad (j,\mul{u}) \mapsto \overline{\partial}_j \mul{u}, \quad \overline{\partial}_j \mul{u} = \hh ( \d \mul{u} \circ \mul{j}(\zeta) - J_{\mul{u}} \circ \d \mul{u}) .$$ Let $$ \ev: \B \to X^{2m} , \quad \mul{u} \mapsto (\mul{u}(w_1^-), \mul{u}(w_1^+), \ldots, \mul{u}(w_m^-), \mul{u}(w_m^+) ) $$ denote the map evaluating at the nodal points. The space of stable maps of type $\Gamma$ is given as $(\overline{\partial},\ev)^{-1}(0)$. To obtain a Fredholm map, we quotient by diffeomorphisms of $\Sigma$, or equivalently, restrict to a minimal versal deformation $\mul{\Sigma}_S \to S$ of $\mul{\Sigma}$ of fixed type. This means that for each $\zeta \in \on{Def}_\Gamma(\mul{\Sigma})$ near $0$ we have a complex structure $\mul{j}(\zeta)$ on $\mul{\Sigma}$, which we may assume agrees with $\mul{j} = \mul{j}(0)$ near the nodes.
Then the Cauchy-Riemann section induces a map $$ \on{Def}_\Gamma(\Sigma) \times \Omega^0(\Sigma, u^* TX) \to \cE .$$ Linearizing the Cauchy-Riemann section, together with the differences at the nodes, gives rise to a Fredholm operator \begin{equation} \label{explinear} \begin{aligned} \tilde{D}_{\mul{u}} :\ & \on{Def}_\Gamma(\mul{\Sigma}) \times \Omega^0(\mul{\Sigma}, \mul{u}^* TX) \to \Omega^{0,1}(\mul{\Sigma},\mul{u}^* TX) \oplus \bigoplus_{i=1}^m T_{\mul{u}(w_i^\pm)} X, \\ & \tilde{D}_{\mul{u}} (\mul{\zeta},\mul{\xi}) := \Big( \pi^{0,1}_{\mul{\Sigma}} \big( \nabla \mul{\xi} - \hh J(\mul{u}) \,\d \mul{u} \, Dj(\mul{\zeta}) - \hh J_{\mul{u}} (\nabla_{\mul{\xi}} J )_{\mul{u}} \partial \mul{u} \big), \big( \mul{\xi}(w_i^+)- \mul{\xi}(w_i^-) \big)_{i=1}^m \Big) \end{aligned} \end{equation} given by the linearized Cauchy-Riemann operator on each component, and the difference of the values of the section at the nodes $w^\pm_1,\ldots, w^\pm_m$. The map $\mul{u} = (\mul{\Sigma},\mul{u},\mul{z})$ is {\em regular} if $\tilde{D}_{\mul{u}}$ is surjective. This is independent of the choice of representatives $\mul{j}(\zeta)$: any two such choices $\mul{j}'(\zeta),\mul{j}(\zeta)$ are related by a diffeomorphism of $\mul{\Sigma}$. The {\em space of infinitesimal deformations of $\mul{u}$ of fixed type} is $$ \on{Def}_\Gamma(\mul{u}) = \ker(\tilde{D}_{\mul{u}})/ \on{aut} (\mul{\Sigma}) .$$ The {\em space of infinitesimal deformations of $\mul{u}$} is $$ \on{Def}(\mul{u}) = \on{Def}_\Gamma(\mul{u}) \oplus \bigoplus_{i=1}^m T_{w_i^+} \mul{\Sigma} \otimes T_{w_i^-} \mul{\Sigma} $$ where $\Gamma$ is the type of $\mul{u}$.
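For orientation we recall the standard index computation (a sketch; real indices throughout): on a single smooth component $\Sigma_i$ of genus $g_i$, forgetting the variation of complex structure and the matching conditions, the linearized Cauchy-Riemann operator $D_{u_i}$ has index given by Riemann-Roch,
$$ \on{Ind}(D_{u_i}) = \dim(X)(1 - g_i) + 2 \, (c_1(TX), u_{i,*}[\Sigma_i]) .$$
Summing over components, adding the dimension of $\on{Def}_\Gamma(\mul{\Sigma})$, and subtracting $\dim(X)$ for the matching condition at each of the $m$ nodes gives the expected dimension of the space of maps of fixed type.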
\begin{theorem} \label{fixed} Let $X,\omega,J$ be as above. A regular marked nodal $J$-holomorphic map $\mul{u} = (\mul{\Sigma},\mul{u},\mul{z})$ admits a universal smooth deformation $(\mul{\Sigma}_S,\mul{u}_S,\mul{z}_S)$ of fixed type with parameter space $S \subset \on{Def}_\Gamma(\mul{u})$ if and only if $\mul{u}$ is stable. \end{theorem}

\begin{proof} Let $(\mul{\Sigma},\mul{u})$ be a stable map to $X$ and $\mul{\Sigma}_S \to S \subset \on{Def}_\Gamma(\mul{\Sigma})$ a minimal versal deformation of $\mul{\Sigma}$ of fixed type constructed in \eqref{glue1}. We may write any map $C^0$-close to $\mul{u}$ as $\exp_{\mul{u}} (\mul{\xi})$ for some $\mul{\xi} \in \Omega^0(\mul{\Sigma}, \mul{u}^* TX)$. Let $\Psi_{\mul{u}}(\mul{\xi}): \mul{u}^* TX \to \exp_{\mul{u}}(\mul{\xi})^* TX$ denote parallel transport along geodesics with respect to the Hermitian connection $ \tilde{\nabla} = \nabla - \hh J(\nabla J) ;$ here $\nabla$ is the Levi-Civita connection, see \cite[Chapter 2]{ms:jh}. This defines an isomorphism \begin{equation} \label{parallel} \Psi_{\mul{u}}(\mul{\xi})^{-1} : \Omega^{0,1}_{\mul{j}}( \mul{\Sigma}, \exp_{\mul{u}}(\mul{\xi})^* TX) \to \Omega^{0,1}_{\mul{j}}(\mul{\Sigma}, \mul{u}^* TX) ,\end{equation} where the subscript $\mul{j}$ indicates that the space of $(0,1)$-forms is taken with respect to the complex structure $\mul{j}$ on $\mul{\Sigma}$.
There is an isomorphism of $\Omega^{0,1}_{\mul{j}(\zeta)}(\mul{\Sigma}, \mul{u}^* TX)$ with $\Omega^{0,1}_{\mul{j}}(\mul{\Sigma},\mul{u}^*TX)$ given by composing the inclusion $$\Omega^{0,1}_{\mul{j}(\zeta)}(\mul{\Sigma},\mul{u}^* TX) \to \Omega^1(\mul{\Sigma}, \mul{u}^* TX)_{\mathbb{C}} = \Omega^1(\mul{\Sigma}, \mul{u}^* TX) \otimes_{\mathbb{R}} \mathbb{C}$$ with the projection $ \Omega^1(\mul{\Sigma}, \mul{u}^* TX) \otimes_{\mathbb{R}} \mathbb{C} \to \Omega^{0,1}_{\mul{j}}(\mul{\Sigma},\mul{u}^* TX) .$ We denote by $$ \Psi_j(\zeta): \Omega^{0,1}_{\mul{j}}(\mul{\Sigma}, \mul{u}^* TX) \to \Omega^{0,1}_{\mul{j}(\zeta)}(\mul{\Sigma},\mul{u}^* TX) $$ the resulting map; one can think of this as a connection, over the space of complex structures on $\mul{\Sigma}$, on the bundle whose fiber is the space of $(0,1)$-forms with respect to $\mul{j}(\zeta)$. By composing $\Psi_{\mul{u}}(\mul{\xi})^{-1}$ and $\Psi_j(\zeta)^{-1}$ we obtain an identification \begin{equation} \label{eq:para} \Psi_{j,\mul{u}}(\zeta,\xi)^{-1} : \Omega^{0,1}_{\mul{j}(\zeta)}(\mul{\Sigma},\exp_{\mul{u}}(\mul{\xi})^* TX) \to \Omega^{0,1}_{\mul{j}}(\mul{\Sigma},\mul{u}^*TX) .
\end{equation} Define $$ \cF_{\mul{u}}: \on{Def}_\Gamma(\mul{\Sigma}) \times \Omega^0(\mul{\Sigma},\mul{u}^* TX) \to \Omega^{0,1}_{\mul{j}}(\mul{\Sigma},\mul{u}^* TX) $$ $$ (\zeta,\mul{\xi}) \mapsto \Psi_{j,\mul{u}}(\zeta,\mul{\xi})^{-1} (\overline{\partial}_{\mul{j}(\zeta)}(\exp_{\mul{u}}(\mul{\xi}))) .$$ The operator $\tilde{D}_{\mul{u}}$ is the linearization of $\cF_{\mul{u}}$ at $(0,0)$. The implicit function theorem implies that if $\mul{u}$ is regular then the zero set of $\cF_{\mul{u}}$ is modelled locally on a neighborhood of $0$ in $\ker(\tilde{D}_{\mul{u}})$. Furthermore, by elliptic regularity the zero set consists entirely of smooth $J$-holomorphic maps \cite[Section B.4]{ms:jh}. Thus we obtain a smooth family of stable maps parametrized by a neighborhood of $0$ in $\ker(\tilde{D}_{\mul{u}})$. The action of $ \on{Aut} (\mul{u})$ on the space of stable maps with domain $\mul{\Sigma}$ induces an inclusion of the Lie algebra $ \on{aut} (\mul{u})$ into $\ker(\tilde{D}_{\mul{u}})$. Restricting to $\on{Def}_\Gamma(\mul{u})$, identified with a complement of $ \on{aut} (\mul{\Sigma})$ (that is, a slice for the $ \on{Aut} (\mul{u})$-action), gives a family $(\mul{\Sigma}_S, \mul{u}_S) \to S \subset \on{Def}_\Gamma(\mul{u})$ of fixed type. The family $(\mul{\Sigma}_S,\mul{u}_S)$, together with the canonical identification $\iota$ of the central fiber with $\mul{\Sigma}$, is a universal smooth deformation of fixed type. Indeed, another smooth family $(\mul{\Sigma}_{S'},\mul{u}_{S'})$ over a base $S'$ is in particular a deformation of the underlying curve.
After shrinking $S'$, each fiber $(\mul{\Sigma}_{s'}, \mul{u}_{s'})$ corresponds to a zero of $\cF_{\mul{u}}$, and so lies in the image of the map given by the implicit function theorem. The uniqueness part of the implicit function theorem gives a smooth map $\psi: S' \to \on{Def}_\Gamma(\mul{u})$ and an identification $\mul{\Sigma}_{S'} \to \psi^* \mul{\Sigma}_S$. Any two such identifications inducing the same map on the central fiber are close in a neighborhood of the central fiber. Since the automorphism group of the central fiber is discrete, any two such identifications defined in a neighborhood of the central fiber and equal on the central fiber must be equal in a neighborhood of the central fiber. This shows that the identification is unique, so that the deformation given by the gluing construction is universal. If $\mul{u}$ is not stable, then it has no universal deformation, since the identification with the central fiber is unique only up to a continuous family of automorphisms. \end{proof}

Let $M_{g,n,\Gamma}^{\reg}(X,d)$ denote the moduli space of regular stable maps of combinatorial type $\Gamma$. A family $ \mul{u}_S$ over $S \subset \on{Def}_\Gamma(\mul{u}) $ induces a map \begin{equation} \label{fixedcharts} S \to M_{g,n,\Gamma}^{\reg}(X,d), \quad s \mapsto [\mul{u}_s] \end{equation} where $[\mul{u}_s]$ denotes the isomorphism class of $\mul{u}_s: \mul{\Sigma}_s \to X$.

\begin{theorem} For any $g,n,d$ and combinatorial type $\Gamma$ with $m$ nodes, $M_{g,n,\Gamma}^{\reg}(X,d)$ has the structure of a smooth orbifold of dimension $(1-g)( \dim(X) - 6) + 2 (c_1 (TX),d) - 2m + 2n$, with tangent space at $[\mul{u}]$ isomorphic to $\on{Def}_\Gamma(\mul{u})$.
\end{theorem}

\begin{proof} By Theorem \ref{fixed}, the maps \eqref{fixedcharts} for families giving universal deformations are homeomorphisms onto their images and provide compatible charts. The dimension formula follows from Riemann-Roch: the index of $\tilde{D}_{\mul{u}}$ may be computed by deforming it to a complex linear operator, homotoping the zeroth order terms (which define a compact operator) to zero. \end{proof}

\subsection{Constructing stratified-smooth deformations of varying type}
\label{gluing}

The main result of this section is Theorem \ref{premain}, which is probably well-known, cf. \cite{rrs:mod}, \cite{rt:hi}, but for which we could not find an explicit reference. The theorem itself will not be used, but the estimates involved in the proof will be needed later for the corresponding result for vortices. The proof uses a gluing construction for holomorphic maps, which produces from a smooth family of holomorphic maps of fixed type a stratified-smooth family of maps of varying type.

\noindent {\em Step 1: Approximate solution.}

\begin{definition} Let $\mul{\Sigma}$ be a compact, complex nodal curve.
A {\em gluing datum} for $\mul{\Sigma}$ consists of \begin{enumerate} \item a collection of gluing parameters $\mul{\delta} = (\delta_1,\ldots,\delta_m)$ in the bundle $\mul{I}$ of \eqref{gluingpar}; \item local coordinates $\kappa_j^\pm$ near the nodes $w_j^\pm$ for $j=1,\ldots,m$; \item a parameter $\rho$ which describes the width of the annulus on which the gluing of maps is performed; \item a gluing profile $\varphi$, see Definition \ref{glueprof}; \item a smooth cutoff function \begin{equation} \label{firstcut} \alpha: \mathbb{C} \to [0,1 ], \quad \alpha(z) = \begin{cases} 0 & |z| \leq 1 \\ 1 & |z| \geq 2 \end{cases} .\end{equation} \end{enumerate} \end{definition}

We first treat the case that $\varphi$ is the standard gluing profile. Let a gluing datum be given, and let $ \mul{\Sigma}^{\mul{\delta}}$ denote the glued curve from \eqref{glue1}. Let $\mul{u}: \mul{\Sigma} \to X$ be a holomorphic map. Near each node $w_k$ let $i^\pm(k)$ denote the components on either side of $w_k$. In the neighborhoods $U_k^\pm$ (assuming they have been chosen sufficiently small) define maps $$ \xi_k^\pm: U_k^\pm \to T_{x_k} X, \quad u_{i^\pm(k)}(z) = \exp_{x_k}(\xi_k^\pm(z)) $$ where $x_k = u(w_k)$ and $\exp_{x_k}: T_{x_k} X \to X$ denotes geodesic exponentiation.
Given a holomorphic map $\mul{u}:\mul{\Sigma} \to X$ and a gluing datum $(\mul{\delta},\mul{\kappa},\rho,\varphi,\alpha)$, define the {\em pre-glued map} by interpolating between the maps on the various components using the given cutoff function and local coordinates: $\mul{u}^{\mul{\delta}}(z) = \mul{u}(z)$ for $z \notin \bigcup_k U_k^\pm$ and otherwise \begin{equation} \label{preglued} \mul{u}^{\mul{\delta}} (z) = \exp_{x_k}\big( \alpha\big(\kappa_k^\pm( z) / \rho |\delta_k|^{1/2} \big) \xi_k^\pm(z) \big), \quad z \in U_k^\pm .\end{equation}

\begin{remark} \label{nodalrem} The same formula, but with domain $\mul{\Sigma}$ (not the glued curve), defines an {\em intermediate map} $ \mul{u}_0^{\mul{\delta}}: \mul{\Sigma} \to X $ which is constant near the nodes. The right inverse of $\tilde{D}_{\mul{u}_0^{\mul{\delta}}}$ will be used in the gluing construction. \end{remark}

First we estimate the failure of $\mul{u}^{\mul{\delta}}$ to satisfy the Cauchy-Riemann equation. Define on $\mul{\Sigma}^{\mul{\delta}}$ the $C^0$-metric $g$ by the identification \begin{equation} \label{gluemetric} \mul{\Sigma}^{\mul{\delta}} = \Big( \mul{\Sigma} - \bigcup_{k,\pm} (\kappa_k^\pm)^{-1}(B_{|\delta_k|^{1/2}}(0)) \Big) \Big/ \left( (\kappa^+_k)^{-1}(\partial B_{|\delta_k|^{1/2}}(0)) \sim (\kappa^-_k)^{-1}(\partial B_{|\delta_k|^{1/2}}(0)) \right) \end{equation} using a K\"ahler metric on $\Sigma$, see Figure \ref{metric}.
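The pre-gluing construction \eqref{preglued} is easiest to see in the model case $X = \mathbb{C}^n$ with the flat metric (a sketch; here $\exp_{x}(\xi) = x + \xi$ and $\xi_k^\pm = u_{i^\pm(k)} - x_k$), where the pre-glued map is a cutoff interpolation:
$$ \mul{u}^{\mul{\delta}}(z) = x_k + \alpha\big(\kappa_k^\pm(z)/\rho|\delta_k|^{1/2}\big)\big(u_{i^\pm(k)}(z) - x_k\big), \quad z \in U_k^\pm .$$
In particular $\mul{u}^{\mul{\delta}}$ agrees with the original map where $|\kappa_k^\pm(z)| \geq 2\rho|\delta_k|^{1/2}$ and is constant equal to $x_k$ on the neck $|\kappa_k^\pm(z)| \leq \rho|\delta_k|^{1/2}$, so the failure of holomorphicity is concentrated on the intermediate annuli where $\alpha$ varies.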
\begin{figure} \includegraphics{metric.eps} \caption{Continuous metric on a glued curve} \label{metric} \end{figure}

\noindent The generalized Sobolev spaces $W^{l,p}$ with respect to this metric are defined for $p \ge 1$ and integers $l \in \{0,1\}$, see \cite{ad:so} or \cite{au:non}. For any vector bundle $E$ we denote by $\Omega(\mul{\Sigma}^{\mul{\delta}}, E)_{l,p,\mul{\delta}}$ the space of $W^{l,p}$ forms with values in $E$. If $p = \infty$ the norm is independent of $\mul{\delta}$ and we drop it from the notation. Let $\Vert \cdot \Vert_{l,p,\mul{\delta}}$ denote the Sobolev $W^{l,p}$-norm on $\Omega^0(\mul{u}^* TX)$ defined using the $\mul{\delta}$-dependent metric \eqref{gluemetric}.

\begin{proposition} \label{errorprop} Suppose that $\mul{u}: \mul{\Sigma} \to X$ is a stable map, and $\mul{u}^{\mul{\delta}}: \mul{\Sigma}^{\mul{\delta}} \to X$ is the pre-glued map of \eqref{preglued}, defined for $\mul{\delta}$ sufficiently small. There is a constant $c$ and an $\eps > 0$ such that if $ \Vert \mul{\delta} \Vert < \eps$, $\rho > 1/\eps$, and $ |\delta_k|^2 \rho < \eps$ for $k= 1,\ldots, m$, then $$\Vert \overline{\partial} \mul{u}^{\mul{\delta}} \Vert_{0,p,\mul{\delta}}^p \leq c \sum_{k=1}^m (|\delta_k|^{1/2} \rho)^{2} .$$ \end{proposition}

\begin{proof} Compare with McDuff-Salamon \cite[Chapter 10]{ms:jh}. The error term $\overline{\partial} \mul{u}^{\mul{\delta}}$ can be estimated by terms of two types: those involving derivatives of the cutoff function and those involving derivatives of the maps $\xi_k^\pm$. The derivative of $\exp_{x_k}$ is approximately the identity near the node.
The derivatives of $\alpha$ grow like $ 1/ \rho | \delta_k|^{1/2}$, while the norm of $\xi_k^\pm$ is bounded by a constant times $\rho |\delta_k |^{1/2}$ on the gluing region. The term involving the derivatives of $\alpha$ is therefore bounded, and supported on a region of area less than $ \pi \rho^2 |\delta_k|$ for each node. The derivatives of $\xi_k^\pm$ are also uniformly bounded, and the area bound gives the required estimate. \end{proof}

Let $\mul{\Sigma}_S \to S$ with $S \subset \on{Def}_\Gamma(\mul{\Sigma})$ be a family giving a minimal versal deformation of $\mul{\Sigma}$ of fixed type, and $\mul{\Sigma}_{S_{\mul{\delta}}} \to S_{\mul{\delta}} \subset \on{Def}(\mul{\Sigma}^{\mul{\delta}})$ a family giving a minimal versal deformation of $\mul{\Sigma}^{\mul{\delta}}$. The gluing construction \eqref{glue1} applied to the family $\mul{\Sigma}_S$ produces a map \begin{equation} \label{linglue} \on{Def}_\Gamma(\mul{\Sigma}) \to \mathcal{J}(\mul{\Sigma}^{\mul{\delta}}), \quad \zeta \mapsto j^{\mul{\delta}}(\zeta) \end{equation} which maps any deformation of the original curve to the corresponding deformation of the glued curve. In other words, any variation of complex structure on $\mul{\Sigma}$ of fixed type induces a variation of complex structure on $\mul{\Sigma}^{\mul{\delta}}$. Similarly, for any $\xi \in \Omega^0(\mul{\Sigma}, \mul{u}^* TX)$ we obtain an element $\xi^{\mul{\delta}} \in \Omega^0(\mul{\Sigma}^{\mul{\delta}}, \mul{u}^{\mul{\delta},*} TX)$.

\begin{proposition} \label{errorprop2} Suppose that $\mul{u}, \mul{u}^{\mul{\delta}}$ are as above, and $(\zeta,\xi) \in \on{Def}_\Gamma(\mul{u})$.
There is a constant $c$ and an $\eps > 0$ such that if $ \Vert \mul{\delta} \Vert < \eps$, $\rho > 1/\eps$, $\Vert \zeta \Vert + \Vert \mul{\xi} \Vert_{1,p} \leq \eps$, and $ |\delta_k|^2 \rho < \eps$ for $ k= 1,\ldots, m$, then $$\Vert \overline{\partial}_{\mul{j}^{\mul{\delta}}(\zeta)} \exp_{\mul{u}^{\mul{\delta}}}(\mul{\xi}^{\mul{\delta}}) \Vert_{0,p,\mul{\delta}}^p \leq c \sum_{k=1}^m (|\delta_k|^{1/2} \rho)^{2} .$$ \end{proposition}

\noindent {\em Step 2: Uniformly bounded right inverse.}

We wish to show that the map in Proposition \ref{errorprop2} can be corrected to obtain a holomorphic map. Define \begin{equation} \label{cFd} \cF_{\mul{u}}^{\mul{\delta}}: \on{Def}_\Gamma(\mul{\Sigma}) \times \Omega^0(\mul{\Sigma}^{\mul{\delta}},\mul{u}^{\mul{\delta},*} TX) \to \Omega^{0,1}(\mul{\Sigma}^{\mul{\delta}},\mul{u}^{\mul{\delta},*} TX) \end{equation} $$ (\zeta,\xi) \mapsto \Psi_{j,\mul{u}^{\mul{\delta}}}(\zeta,\xi)^{-1} (\overline{\partial}_{\mul{j}^{\mul{\delta}}(\zeta)}(\exp_{\mul{u}^{\mul{\delta}}}(\xi))) .$$ Here the operator $\Psi_{j,\mul{u}^{\mul{\delta}}}$ is as in \eqref{eq:para}. Let $\tilde{D}_{\mul{u}}^{\mul{\delta}}(\xi)$ be the associated linear operator, that is, the linearization of \eqref{cFd} at $\xi$. This operator naturally extends to a map from the $W^{1,p}$-completion of the second factor of the domain to the $W^{0,p}$-completion of the codomain.
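The strategy rests on a standard Neumann series argument: if a bounded operator $T$ satisfies $\Vert \tilde{D}_{\mul{u}}^{\mul{\delta}} T - I \Vert \leq \hh$ on the completions above, then $\tilde{D}_{\mul{u}}^{\mul{\delta}} T$ is invertible and
$$ Q := T \, (\tilde{D}_{\mul{u}}^{\mul{\delta}} T)^{-1} = \sum_{k \geq 0} T \, (\tilde{D}_{\mul{u}}^{\mul{\delta}} T - I)^k $$
is an exact right inverse with $\Vert Q \Vert \leq \Vert T \Vert \sum_{k \geq 0} 2^{-k} = 2 \Vert T \Vert$. Thus it suffices to produce an approximate inverse $T_{\mul{\delta}}$ satisfying such estimates uniformly in $\mul{\delta}$.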
We set $\tilde{D}_{\mul{u}}^{\mul{\delta}} := \tilde{D}_{\mul{u}}^{\mul{\delta}}(0)$. We will construct an approximate inverse \begin{equation} \label{eq:approx} T_{\mul{\delta}} : \Omega^{0,1}(\mul{\Sigma}^{\mul{\delta}},\mul{u}^{\mul{\delta},*} TX ) \to \on{Def}_\Gamma(\mul{\Sigma}) \oplus \Omega^0(\mul{\Sigma}^{\mul{\delta}},\mul{u}^{\mul{\delta},*} TX) \end{equation} to $\tilde{D}_{\mul{u}}^{\mul{\delta}}$. The construction depends on a carefully chosen cutoff function:

\begin{lemma} \label{careful} \cite[Section 10.3]{ms:jh} For any $\delta > 0$, $\rho > 1$ there exists a function $\beta_{\rho,\delta} : \mathbb{R}^2 \to [0,1]$ that satisfies $$ \beta_{\rho,\delta}(z) = \begin{cases} 0 & | z| \leq \sqrt{\delta /\rho} \\ 1 & | z| \geq \sqrt{\delta \rho} \end{cases} $$ and for all $ \xi \in W^{1,p}(B_{\sqrt{\delta\rho}})$ satisfying $ \xi(0) = 0$ \begin{equation} \label{beta1p} \Vert (\nabla \beta_{\rho,\delta}) \xi \Vert_{0,p} \leq c \log( \rho^2 )^{-1 + 1/p} \Vert \xi \Vert_{1,p} , \quad \Vert \beta_{\rho,\delta} \Vert_{1,2} \leq C \log(\rho^{2})^{-1/2}. \end{equation} \end{lemma}

Recall the intermediate map $\mul{u}_0^{\mul{\delta}}$ from Remark \ref{nodalrem}.

\begin{lemma} \label{rightex} For sufficiently small $\mul{\delta}$ there exists a right inverse $Q_{\mul{u}_0^{\mul{\delta}}}$ of $\tilde{D}_{\mul{u}_0^{\mul{\delta}}}$ with image the $L^2$-perpendicular of the kernel of $\tilde{D}_{\mul{u}_0^{\mul{\delta}}}$.
\end{lemma}

\begin{proof} Consider the maps defined by parallel transport using the modified Levi-Civita connection, $$\Pi_{\mul{u}_0^{\mul{\delta}}}^{\mul{u}}: \Omega^0(\mul{\Sigma}, (\mul{u}_0^{\mul{\delta}})^* TX) \to \Omega^0(\mul{\Sigma}, (\mul{u}^{\mul{\delta}})^* TX), \quad \Psi_{\mul{u}_0^{\mul{\delta}}}^{\mul{u}}: \Omega^{0,1}(\mul{\Sigma}, (\mul{u}_0^{\mul{\delta}})^* TX) \to \Omega^{0,1}(\mul{\Sigma}, (\mul{u}^{\mul{\delta}})^* TX) .$$ The operator $ \Psi_{\mul{u}_0^{\mul{\delta}}}^{\mul{u}} \tilde{D}_{\mul{u}_0^{\mul{\delta}}} (\Pi_{\mul{u}_0^{\mul{\delta}}}^{\mul{u}})^{-1} $ approaches the operator $\tilde{D}_{\mul{u}}$ as $\mul{\delta} \to 0$, c.f. \cite[Remark 10.2.2]{ms:jh}. The statement of the lemma follows. \end{proof}

Define the approximate right inverse for $\tilde{D}_{\mul{u}}^{\mul{\delta}}$ by composing the right inverse $Q_{\mul{u}_0^{\mul{\delta}}}$ with a cutoff and an extension operator, $$ T_{\mul{\delta}} := P_{\mul{\delta}} Q_{\mul{u}_0^{\mul{\delta}}} K_{\mul{\delta}} ,$$ defined as follows.
The {\em cutoff operator} $$ K_{\mul{\delta}}: \Omega^{0,1}(\mul{u}^{\mul{\delta},*} TX)_{0,p,\mul{\delta}} \to \Omega^{0,1}(\mul{u}_0^{\mul{\delta},*} TX)_{0,p} $$ is defined by $$ (K_{\mul{\delta}}(\eta))(z) = \begin{cases} \eta(z) & z \notin \bigcup_{k,\pm} B_{| \delta_k|^{1/2}} (w_k^\pm) \\ 0 & \text{otherwise} \end{cases} .$$ We have $ \Vert K_{\mul{\delta}} \eta \Vert_{0,p} \leq \Vert \eta \Vert_{0,p,\mul{\delta}} $ by definition of the $(0,p,\mul{\delta})$-norm. The {\em extension operator} $$ P_{\mul{\delta}}: \on{Def}_\Gamma(\mul{\Sigma}) \oplus \Omega^0(\mul{\Sigma},\mul{u}_0^{\mul{\delta},*} TX)_{1,p,\mul{\delta}} \to \on{Def}(\mul{\Sigma}^{\mul{\delta}}) \oplus \Omega^0(\mul{\Sigma}^{\mul{\delta}},\mul{u}^{\mul{\delta},*} TX)_{1,p,\mul{\delta}} $$ is defined as follows.
For each component $\Sigma_i$ let $\Sigma_i^*$ denote the complement of small balls around the nodes, $$ \Sigma_i^* = \Sigma_i - \bigcup_{l, w_l^\pm \in \Sigma_i} B_{ | \delta_l|^{1/2}/\rho} (w_l^\pm) ,$$ so that the inclusion $\pi_i : \Sigma_i^* \to \mul{\Sigma}^{\mul{\delta}} $ induces a map $ \pi_{i,*} : \Omega^0( \Sigma_i^*,u_i^* TX) \to \Omega^0(\mul{\Sigma}^{\mul{\delta}}, \mul{u}^{\mul{\delta},*} TX )_{0,p} .$ Define $$ P_{\mul{\delta}}(\zeta,\xi) = (\zeta^{\mul{\delta}},\xi^{\mul{\delta}}) $$ where $ \zeta^{\mul{\delta}}$ is the image of $\zeta$ under the gluing map \eqref{linglue} and $\xi^{\mul{\delta}}$ is obtained by patching together the sections $\xi_i$; on the gluing region arising from gluing the $k$-th node $w_k$ the section $\xi^{\mul{\delta}}$ is given by the sum $$ \pi_{i^+(k),*} \beta_{\rho,\delta_k} (\xi_{i^+(k)} - \xi(w_k)) + \pi_{i^-(k),*} \beta_{\rho,\delta_k} (\xi_{i^-(k)} - \xi(w_k)) + \xi(w_k). $$ Fix a metric $\Vert \cdot \Vert$ on the finite-dimensional space $\on{Def}_\Gamma(\mul{\Sigma})$ and define $$\Vert (\zeta,\xi) \Vert_{1,p,\mul{\delta}} = \left( \Vert \zeta \Vert^p + \Vert \xi \Vert_{1,p,\mul{\delta}}^p \right)^{1/p} .$$

\begin{proposition} \label{right} Let $\mul{u} : \mul{\Sigma} \to X$ be a stable map.
There exist constants $c, C > 0$ such that if $\Vert \ul{\delta} \Vert < c$ then the approximate inverse $T_{\ul{\delta}}$ of \eqref{eq:approx} satisfies $$ \Vert (\tilde{D}_{\ul{u}}^{\ul{\delta}} T_{\ul{\delta}} - I) \eta \Vert_{0,p,\ul{\delta}} \leq \hh \Vert \eta \Vert_{0,p,\ul{\delta}}, \quad \Vert T_{\ul{\delta}} \Vert < C. $$ \end{proposition} \begin{proof} By construction $T_{\ul{\delta}}$ is an exact right inverse for $\tilde{D}_{\ul{u}}^{\ul{\delta}}$ away from the gluing region. In the gluing region the variation of complex structure on the curve vanishes and $D_{\ul{u}}^{\ul{\delta}} = D_{x_k}$, the standard Cauchy-Riemann operator with values in $T_{x_k} X$. So \begin{eqnarray*} \tilde{D}_{\ul{u}}^{\ul{\delta}} T_{\ul{\delta}} \eta - \eta &=& \sum_k D_{x_k} \beta_{2;\rho,\delta}(z) ( \xi_{i^\pm(k)}(z) - \xi_{i^\pm(k)}(w_k)) \\ &=& \sum_k (D_{x_k} \beta_{2;\rho,\delta}(z)) \, \xi_{i^\pm(k)}(z) \end{eqnarray*} since $K_{\ul{\delta}} \eta = 0$ on $B_{|\delta_k|^{1/2}}(0)$ in the components adjacent to the node. Since $p > 2$, the $0,p,\ul{\delta}$-norm of the right hand side is controlled by the ordinary $L^p$ norm.
By \eqref{beta1p} we have $$ \Vert \tilde{D}_{\ul{u}}^{\ul{\delta}} T_{\ul{\delta}} \eta - \eta \Vert_{0,p,\ul{\delta}} \leq \sum_k c \, |\log(\rho)|^{2/p-2} \Vert \xi_{i^\pm(k)} - \xi(w_k) \Vert_{1,p}. $$ The last factor is bounded by $\Vert K_{\ul{\delta}} \eta \Vert_{0,p}$, by the uniform bound on $Q_{\ul{\delta}}$, and hence by $\Vert \eta \Vert_{0,p,\ul{\delta}}$, by the uniform bound on $K_{\ul{\delta}}$. \end{proof} Define a right inverse $Q_{\ul{\delta}}$ to $\tilde{D}_{\ul{u}}^{\ul{\delta}}$ by the formula $$ Q_{\ul{\delta}} = T_{\ul{\delta}} (\tilde{D}_{\ul{u}}^{\ul{\delta}} T_{\ul{\delta}})^{-1} = \sum_{k \geq 0} T_{\ul{\delta}} (I - \tilde{D}_{\ul{u}}^{\ul{\delta}} T_{\ul{\delta}})^k. $$ The uniform bound on $T_{\ul{\delta}}$ from Proposition \ref{right} implies a uniform bound on $Q_{\ul{\delta}}$. \noindent {\em Step 3: Uniform quadratic estimate} \begin{proposition} \label{quadratic} Let $\ul{u}: \ul{\Sigma} \to X$ be a stable map, $\ul{\delta}$ a collection of gluing parameters, and $\ul{u}^{\ul{\delta}}: \ul{\Sigma}^{\ul{\delta}} \to X$ the approximate solution defined above.
For every constant $c > 0$ there exist constants $c_0, \delta_0 > 0$ such that if $\ul{u} \in \Map(\ul{\Sigma},X)_{1,p}$, $\xi \in \Omega^0(\ul{\Sigma}^{\ul{\delta}},\ul{u}^{\ul{\delta},*}TX)_{1,p}$, and $$ \Vert \d \ul{u} \Vert_{0,p} \leq c_0, \quad \Vert \xi \Vert_{L^\infty} \leq c_0, \quad \Vert \zeta \Vert \leq c_0, \quad \Vert \ul{\delta} \Vert < \delta_0 $$ then $$ \Vert D \cF_{\ul{u}}^{\ul{\delta}}(\zeta,\xi,\zeta_1,\xi_1) - \tilde{D}_{\ul{u}}^{\ul{\delta}}(\zeta_1,\xi_1) \Vert_{0,p,\ul{\delta}} \leq c \, \Vert (\zeta,\xi) \Vert_{1,p} \, \Vert (\zeta_1,\xi_1) \Vert_{1,p}. $$ \end{proposition} \noindent Here $D \cF_{\ul{u}}^{\ul{\delta}}(\zeta,\xi,\zeta_1,\xi_1)$ denotes the derivative evaluated at $(\zeta,\xi)$, applied to $(\zeta_1,\xi_1)$. We use similar notation throughout the discussion. The proof uses a uniform estimate for the Sobolev embedding: \begin{lemma} \label{unifsob1} There exists a constant $c > 0$ independent of $\delta$ such that the embedding $$ \Omega^0(\ul{\Sigma}^{\ul{\delta}}, \ul{u}^{\ul{\delta},*} T X)_{1,p,\ul{\delta}} \to \Omega^0(\ul{\Sigma}^{\ul{\delta}}, \ul{u}^{\ul{\delta},*} T X)_{0,\infty} $$ has norm less than $c$. \end{lemma} \begin{proof} One writes the Sobolev norms as a sum of contributions from the components of the curve $\ul{\Sigma}$. Then on each piece, the metric near the boundary is uniformly comparable with the flat metric.
The claim then follows from \cite[Chapter 4]{ad:so}, which shows that the constants in the Sobolev embeddings depend only on the dimensions of the cone in the cone condition. \end{proof} \begin{proof}[Proof of Proposition \ref{quadratic}] For simplicity we assume a single gluing parameter $\delta$. Let $ \Psi_{\ul{u}}^{\delta,x}(\zeta,\xi): \Lambda^{0,1} T_z^* \ul{\Sigma}^{\delta} \otimes T_x X \to \Lambda^{0,1}_{\ul{j}^{\ul{\delta}}(\zeta)} T_z^* \ul{\Sigma} \otimes T_{\exp_x(\xi)} X $ denote pointwise parallel transport as in \eqref{parallel}, defined using the modified Levi-Civita connection and projecting onto the $0,1$-part of the form with respect to the complex structure $\ul{j}^{\ul{\delta}}(\zeta)$ obtained from gluing $\ul{j}(\zeta)$, see \eqref{linglue}. Let $$ \Theta_{\ul{u}}^{\delta,x}(\zeta,\xi,\zeta_1,\xi_1;\eta) = \tilde{\nabla}_t \Psi_{\ul{u}}^{\delta,x}(\zeta + t\zeta_1, \xi + t\xi_1) \, \eta. $$ For $\xi, \eta$ sufficiently small there exists a constant $c$ such that \begin{equation} \label{Psiest} |\Theta_{\ul{u}}^{\delta,x}(\zeta,\xi,\zeta_1,\xi_1;\eta)| \leq c \, \Vert (\xi,\zeta) \Vert \, \Vert (\xi_1,\zeta_1) \Vert \, \Vert \eta \Vert \end{equation} where the norms on the right-hand side are any norms on the finite-dimensional vector spaces $T_{\ul{\Sigma}} M_{g,n,\Gamma}$ and $T_x X$. This estimate is uniform in $\delta$, since the variation in complex structure vanishes in a neighborhood of the nodes.
Differentiate the equation $ \Psi_{\ul{u}}^{\delta}(\zeta,\xi) \, \cF_{\ul{u}}^{\delta}(\zeta,\xi) = \overline{\partial}_{\ul{j}^{\delta}(\zeta)}(\exp_{\ul{u}^{\delta}}(\xi)) $ with respect to $(\zeta_1,\xi_1)$ to obtain \begin{multline} \Theta_{\ul{u}}^{\delta}(\zeta,\xi,\zeta_1,\xi_1; \cF_{\ul{u}}^{\delta}(\zeta,\xi)) + \Psi_{\ul{u}}^{\delta}(\zeta,\xi)( D \cF_{\ul{u}}^{\delta}(\zeta,\xi,\zeta_1,\xi_1)) = \\ \tilde{D}_{\ul{u}}^{\delta}(\xi, D\ul{j}^{\delta}(\zeta,\zeta_1), D\exp_{\ul{u}^{\delta}}(\xi,\xi_1)). \end{multline} Using the pointwise inequality $$ |\cF_{\ul{u}}^{\delta}(\zeta,\xi)| < c \, |\d \exp_{\ul{u}^{\delta}}(\xi)| < c \, (|\d \ul{u}^{\delta}| + |\nabla \xi|) $$ for $\zeta,\xi$ sufficiently small, the estimate \eqref{Psiest} on $\Theta$ produces a pointwise estimate $$ |(\Psi_{\ul{u}}^{\delta})^{-1}(\xi) \, \Theta_{\ul{u}}^{\delta}(\zeta,\xi,\zeta_1,\xi_1;\cF_{\ul{u}}^{\delta}(\zeta,\xi))| \leq c \, (|\d \ul{u}^{\delta}| + |\nabla \xi|) \, |(\xi,\zeta)| \, |(\xi_1,\zeta_1)|. $$ Hence \begin{multline} \Vert \Psi_{\ul{u}^{\delta}}^{-1}(\xi) \, \Theta_{\ul{u}}^{\delta}(\zeta,\xi,\zeta_1,\xi_1;\cF^{\delta}_{\ul{u}}(\zeta,\xi)) \Vert_{0,p} \\ \leq c \, (1 + \Vert \d \ul{u}^{\delta} \Vert_{0,p} + \Vert \nabla \xi \Vert_{0,p}) \, \Vert (\xi,\zeta) \Vert_{0,\infty} \, \Vert (\xi_1,\zeta_1) \Vert_{0,\infty}. \end{multline}
It follows that \begin{equation} \label{firstclaim} \Vert \Psi_{\ul{u}}^{\delta}(\xi)^{-1} \Theta_{\ul{u}}^{\delta}(\zeta,\xi,\zeta_1,\xi_1;\cF_{\ul{u}}^{\delta}(\zeta,\xi)) \Vert_{0,p} \leq c \, \Vert (\xi,\zeta) \Vert_{1,p} \, \Vert (\xi_1,\zeta_1) \Vert_{1,p} \end{equation} since the $W^{1,p}$ norm controls the $L^\infty$ norm by Lemma \ref{unifsob1}. We next show that there exists a constant $c > 0$ such that, uniformly in $\delta$, \begin{equation} \label{secondclaim} \Vert \Psi_{\ul{u}^{\delta}}(\xi)^{-1} \tilde{D}_{\ul{u}}^{\delta}(\xi, D\ul{j}^{\ul{\delta}}(\zeta,\zeta_1), D\exp_{\ul{u}^{\delta}}(\xi,\xi_1)) - \tilde{D}_{\ul{u}}^{\delta}(\zeta_1,\xi_1) \Vert_{0,p} \leq c \, \Vert (\zeta,\xi) \Vert_{1,p} \, \Vert (\zeta_1,\xi_1) \Vert_{1,p}. \end{equation} Indeed, differentiate \eqref{cFd} to obtain \begin{multline} \tilde{D}_{\ul{u}}^{\delta}(\xi, D\ul{j}^{\delta}(\zeta,\zeta_1), D\exp_{\ul{u}^{\delta}}(\xi,\xi_1)) = \nabla^{0,1}_{j^{\delta}(\zeta)} D\exp_{\ul{u}^{\delta}}(\xi,\xi_1) \\ - \hh \, \pi^{0,1}_{j^{\delta}(\zeta)} J_{\exp_{\ul{u}^{\delta}}(\xi)} \, \d \exp_{\ul{u}^{\delta}}(\xi) \, D\ul{j}^{\delta}(\zeta,\zeta_1) \\ - J_{\exp_{\ul{u}^{\delta}}(\xi)} (\nabla_{D \exp_{\ul{u}^{\delta}}(\xi,\xi_1)} J_{\exp_{\ul{u}^{\delta}}(\xi)}) \, \partial \exp_{\ul{u}^{\delta}}(\xi).
\end{multline} Hence $$ \tilde{D}_{\ul{u}}^{\delta}(\xi, D\ul{j}^{\delta}(\zeta,\zeta_1), D\exp_{\ul{u}^{\delta}}(\xi,\xi_1)) - \Psi_{\ul{u}^{\delta}}(\xi) \tilde{D}_{\ul{u}}^{\delta}(\zeta_1,\xi_1) = \Pi_1 + \Pi_2 + \Pi_3 $$ where the three terms $\Pi_1,\Pi_2,\Pi_3$ are $$ \Pi_1 = \nabla^{0,1}_{\ul{j}^{\delta}(\zeta)} D\exp_{\ul{u}^{\delta}}(\xi,\xi_1) - \Psi_{\ul{u}^{\delta}}(\xi) \nabla^{0,1}_{\ul{j}^{\delta}(0)} \xi_1, $$ $$ \Pi_2 = - \hh \, \pi^{0,1}_{\ul{j}^{\delta}(\zeta)} J_{\exp_{\ul{u}^{\delta}}(\xi)} \, \d \exp_{\ul{u}^{\delta}}(\xi) \, D\ul{j}^{\delta}(\zeta,\zeta_1) + \hh \, \Psi_{\ul{u}^{\delta}}(\xi) \, \pi^{0,1}_{\ul{j}^{\delta}(0)} J_{\ul{u}^{\delta}} \, \d \ul{u}^{\delta} \, D\ul{j}^{\delta}(0,\zeta_1), $$ $$ \Pi_3 = - \hh \, J_{\exp_{\ul{u}^{\delta}}(\xi)} (\nabla_{D \exp_{\ul{u}^{\delta}}(\xi,\xi_1)} J_{\exp_{\ul{u}^{\delta}}(\xi)}) \, \partial_{\ul{j}^{\delta}(\zeta)} \exp_{\ul{u}^{\delta}}(\xi) + \hh \, \Psi_{\ul{u}^{\delta}}(\xi) J_{\ul{u}^{\delta}} (\nabla_{\xi_1} J_{\ul{u}^{\delta}}) \, \partial_{\ul{j}^{\delta}(0)} \ul{u}^{\delta}. $$ The first difference has norm bounded by \begin{multline} | \pi_{\ul{j}^{\delta}(\zeta)}^{0,1} ( \nabla ( D \exp_{\ul{u}^{\delta}}(\xi,\xi_1)) - \Psi_{\ul{u}^{\delta}}(\xi) \nabla \xi_1 ) | \\ \leq | \pi_{\ul{j}^{\delta}(\zeta)}^{0,1} ( \nabla ( D \exp_{\ul{u}^{\delta}}(\xi,\xi_1)) - D\exp_{\ul{u}^{\delta}}(\xi,\nabla \xi_1) )| +
|\pi^{0,1}_{\ul{j}^{\delta}(\zeta)} ( D\exp_{\ul{u}^{\delta}}(\xi,\nabla \xi_1) - \Psi_{\ul{u}^{\delta}}(\xi) \nabla \xi_1 ) | \\ \leq c \, | \nabla \xi | \, | \xi_1| + c \, ( | \zeta| + | \xi |) \, |\nabla \xi_1| + c \, | \d \ul{u}^{\delta} | \, | \xi| \, | \xi_1| .\end{multline} For the second difference we write \begin{multline} | \pi^{0,1}_{\ul{j}^{\delta}(\zeta)}( J_{\exp_{\ul{u}^{\delta}}(\xi)} \, \d \exp_{\ul{u}^{\delta}}(\xi) \, D \ul{j}^{\delta}(\zeta,\zeta_1) - \Psi_{\ul{u}^{\delta}}(\xi) J_{\ul{u}^{\delta}} \, \d \ul{u}^{\delta} \, D\ul{j}^{\delta}(0,\zeta_1))| \\ \leq | \pi^{0,1}_{\ul{j}^{\delta}(\zeta)} ( J_{\exp_{\ul{u}^{\delta}}(\xi)} \, \d \exp_{\ul{u}^{\delta}}(\xi) \, D\ul{j}^{\delta}(\zeta,\zeta_1) - J_{\exp_{\ul{u}^{\delta}}(\xi)} \Psi_{\ul{u}^{\delta}}(\xi) \, \d\ul{u}^{\delta} \, D\ul{j}^{\delta}(0,\zeta_1))| \\ + |\pi^{0,1}_{\ul{j}^{\delta}(\zeta)} ( J_{\exp_{\ul{u}^{\delta}}(\xi)} \Psi_{\ul{u}^{\delta}}(\xi) \, \d\ul{u}^{\delta} \, D\ul{j}^{\delta}(0,\zeta_1) - \Psi_{\ul{u}^{\delta}}(\xi) J_{\ul{u}^{\delta}} \, \d \ul{u}^{\delta} \, D\ul{j}^{\delta}(0,\zeta_1))| \\ \leq c \, ( |\zeta| + |\xi| + | \d \ul{u}^{\delta}| + | \nabla \xi |) \, |\zeta_1| .
\end{multline} The third term can be estimated pointwise by $$ | J_{\exp_{\ul{u}^{\delta}}(\xi)} (\nabla_{D \exp_{\ul{u}^{\delta}}(\xi,\xi_1)} J_{\exp_{\ul{u}^{\delta}}(\xi)} ) \, \partial_{\ul{j}^{\delta}(\zeta)} \exp_{\ul{u}^{\delta}}(\xi) - \Psi_{\ul{u}^{\delta}}(\xi) J_{\ul{u}^{\delta}} (\nabla_{\xi_1} J_{\ul{u}^{\delta}} ) \, \partial_{\ul{j}^{\delta}(0)} \ul{u}^{\delta}| $$ $$ \leq | J_{\exp_{\ul{u}^{\delta}}(\xi)} (\nabla_{D \exp_{\ul{u}^{\delta}}(\xi,\xi_1)} J_{\exp_{\ul{u}^{\delta}}(\xi)} ) \, \partial_{\ul{j}^{\delta}(\zeta)} \exp_{\ul{u}^{\delta}}(\xi) - J_{\exp_{\ul{u}^{\delta}}(\xi)} (\nabla_{D \exp_{\ul{u}^{\delta}}(\xi,\xi_1)} J_{\exp_{\ul{u}^{\delta}}(\xi)} ) \, \partial_{\ul{j}^{\delta}(0)} \exp_{\ul{u}^{\delta}}(\xi)| $$ $$ + | J_{\exp_{\ul{u}^{\delta}}(\xi)} (\nabla_{D \exp_{\ul{u}^{\delta}}(\xi,\xi_1)} J_{\exp_{\ul{u}^{\delta}}(\xi)} ) \, \partial_{\ul{j}^{\delta}(0)} \exp_{\ul{u}^{\delta}}(\xi) - \Psi_{\ul{u}^{\delta}}(\xi) J_{\ul{u}^{\delta}} (\nabla_{\xi_1} J_{\ul{u}^{\delta}} ) \, \partial_{\ul{j}^{\delta}(0)} \ul{u}^{\delta}| $$ $$ \leq c \, | \zeta | \, (| \d \ul{u}^{\delta}| + |\nabla \xi |) \, |\xi_1| + c \, ( | \d \ul{u}^{\delta} | + | \nabla \xi |) \, | \xi_1 | $$ for $\xi$ sufficiently small. Combining these estimates and integrating, using the $0,p,\delta$-norms on $\d \ul{u}$, $\nabla \xi$, $\nabla \xi_1$ and the $L^\infty$ norms on the other factors, together with Lemma \ref{unifsob1}, completes the proof.
\end{proof} \noindent {\em Step 4: Implicit Function Theorem} For any $(\zeta_0,\xi_0) \in \on{Def}_\Gamma (\ul{u})$ we denote by $\zeta_0^{\ul{\delta}}$ the deformation of $\ul{\Sigma}^{\ul{\delta}}$ defined in \eqref{linglue} and by $\xi_0^{\ul{\delta}}$ the section of $\ul{u}^{\ul{\delta},*} TX$ defined as in \eqref{preglued}. \begin{theorem} \label{gluingstable} Let $\ul{u}:\ul{\Sigma} \to X$ be a stable map. There exist constants $\eps_0,\eps_1 > 0$ such that for any $(\zeta_0,\xi_0,\ul{\delta}) \in \ker \tilde{D}_{\ul{u}} \times \mathbb{R}^m$ of norm at most $\eps_0$, there is a unique $(\zeta_1,\xi_1) = (\tilde{D}_{\ul{u}}^{\ul{\delta}})^* \eta_1$ of norm at most $\eps_1$ such that the map $\exp_{\ul{u}^{\ul{\delta}}}(\xi_0^{\ul{\delta}} + \xi_1)$ is $\ul{j}^{\delta}(\zeta_0 + \zeta_1)$-holomorphic; the solution depends smoothly on $(\zeta_0,\xi_0)$. \end{theorem} \begin{proof} The first claim is an application of the quantitative version of the implicit function theorem (see for example \cite[Appendix A.3]{ms:jh}) using the uniform error bound from Proposition \ref{errorprop}, the uniformly bounded right inverse from Proposition \ref{right}, and the uniform quadratic estimate from Proposition \ref{quadratic}. \end{proof} \noindent {\em Step 5: Rigidification} In the previous step we constructed a family of stable maps which we will eventually show gives rise to a parametrization of all nearby stable maps. A more natural way of parametrizing nearby stable maps involves examining the intersections with a family of codimension two submanifolds. This is, for example, the construction of charts given in the algebraic geometry approach of Fulton-Pandharipande \cite{fu:st}.
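As an aside, the quantitative scheme of Steps 1--4 (approximate solution, approximate right inverse corrected by a Neumann series, Picard iteration with corrections in the image of the adjoint) can be illustrated by a finite-dimensional toy computation; this is only a numerical sketch, and the map `F`, the linearization `D`, and all tolerances below are invented for the example.

```python
import numpy as np

def F(x):
    # Invented nonlinear map R^2 -> R^1, a stand-in for the
    # Cauchy-Riemann operator on the pregluing.
    return np.array([x[0] + x[1] ** 2 - 0.01])

x0 = np.zeros(2)            # approximate solution: |F(x0)| = 0.01 is small
D = np.array([[1.0, 0.0]])  # fixed linearization DF(x0)

# Exact right inverse whose image lies in the row space of D
# (the finite-dimensional analogue of Im D*):
Q = D.T @ np.linalg.inv(D @ D.T)

# Picard iteration x -> x - Q F(x); the correction stays in Im D^T.
x = x0.copy()
for _ in range(20):
    x = x - (Q @ F(x)).ravel()

print(np.linalg.norm(F(x)))  # essentially zero: an exact solution
print(x - x0)                # correction lies in Im D^T = span{(1,0)}
```

The iteration converges because the initial error is small and the quadratic remainder of `F` is uniformly controlled, mirroring the hypotheses of the quantitative implicit function theorem.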
In order to carry this out in the symplectic approach, we study the differentiability of the evaluation maps. Let $\ul{u}_S: \ul{\Sigma}_S \to X$, over a parameter space $S \subset \on{Def}(\ul{u})$, be the family of maps defined in the previous step. The following is similar to \cite[Lemma A1.59]{fooo}. \begin{theorem} \label{diffev} If $\ul{u}_S$ is constructed using the exponential gluing profile $\varphi$ and $U \subset \ul{\Sigma}$ is an open neighborhood of the nodes, then the map $(z,s) \mapsto \ul{u}_s(z)$ is differentiable on a neighborhood of $(\ul{\Sigma} - U ) \times \{ 0 \}$. \end{theorem} \begin{proof} For simplicity, we assume that there is a single gluing parameter $\delta$. Differentiability in $\delta$ is studied in McDuff-Salamon \cite[Section 10.6]{ms:jh}. The discussion in our case is somewhat easier, because we use a fixed right inverse in the gluing construction. Given $(\zeta,\xi) \in \on{Def}_\Gamma(\ul{\Sigma}) \times \Omega^0( \ul{\Sigma}^{\delta}, (\ul{u}^{\delta})^* TX)$, we constructed a unique correction $(\zeta_1,\xi_1)$ in the image of the right inverse such that $ \overline{\partial}_{\ul{j}^{\delta}(\zeta_0 + \zeta_1)} \exp_{\ul{u}^{\delta}}(\xi_0 + \xi_1) = 0. $ For $\delta$ fixed, $(\zeta_1,\xi_1)$ depends smoothly on $(\zeta_0,\xi_0)$, by the implicit function theorem. Hence the evaluation at $z \in \ul{\Sigma} - U$ also depends smoothly on $(\zeta_0,\xi_0)$. The computation of the derivative with respect to the gluing parameter is complicated by the fact that for each $\delta$ a different implicit function theorem is applied to obtain the correction. Let $\tilde{D}_{\delta} = D \cF_{\ul{u}^{\delta}}$.
Differentiating the equation $\cF_{\ul{u}^{\delta}}(\zeta_0^{\delta} + \zeta_1,\xi_0^{\delta} + \xi_1) = 0$ with respect to $\delta$ gives $$ \tilde{D}_{\delta} \left( \frac{d \zeta_1}{d\delta}, D\exp_{\ul{u}^{\delta}} \left(\xi_0^{\delta} + \xi_1 , 0 , \frac{d \xi_1}{d\delta} \right) \right) = - \tilde{D}_{\delta} \left( 0 , D\exp_{\ul{u}^{\delta}} \left(\xi_0^{\delta} + \xi_1 , \frac{d \ul{u}^{\delta}}{d\delta} , 0 \right) \right). $$ From \eqref{preglued} we have, in the gluing region, \begin{eqnarray*} \frac{d}{d\delta} \overline{\partial} \ul{u}^{\delta} &=& \frac{d}{d\delta} \overline{\partial} \exp_{x} \left( \alpha( \rho \varphi^{-1/2} |z|) \xi(z) \right) \\ &=& D\exp_{x} \left( \alpha( \rho \varphi^{-1/2} |z| ) \xi(z), \ \alpha'( \rho \varphi^{-1/2} |z| ) |z| \xi(z) \, \rho \frac{d \varphi^{-1/2}}{d\delta} \right)^{0,1} .\end{eqnarray*} Hence there exists a constant $C$ depending on $\rho,\alpha$ but not on $\delta$ such that \begin{equation} \label{point1} \left| \frac{d}{d\delta} \overline{\partial} \ul{u}^{\delta} \right| \leq C \left| \frac{d \varphi^{-1/2}}{d\delta} \right| .\end{equation} Now $\d \varphi^{-1/2}$ is given by $$ \d ( e^{1/\delta} - e)^{-1/2} = \hh \, (e^{1/\delta} - e)^{-3/2} e^{1/\delta} \delta^{-2} \, \d \delta = \hh \, ( e^{1/(3\delta)} - e^{1 - 2/(3\delta)})^{-3/2} \delta^{-2} \, \d \delta .$$ For $\delta$ small, this is less than $\hh e^{-1/(2\delta)} \delta^{-2}$. Integrating and using the pointwise estimate \eqref{point1}, we obtain for some constant $C > 0$ $$ \left\Vert \frac{d}{d\delta} \overline{\partial} \ul{u}^{\delta} \right\Vert_{0,p} \leq C e^{-1/(2\delta)} \delta^{-2} \leq C e^{-1/(3\delta)} $$ for sufficiently small $\delta$.
Now the uniform quadratic estimates imply that $\tilde{D}_{\delta} = D \cF_{\ul{u}^{\delta}}(\zeta,\xi)$ is uniformly bounded from below on the image of the right inverse of $\tilde{D}_{\ul{u}}^{\delta} = D \cF_{\ul{u}}^{\delta}(0,0)$, for $(\zeta_0,\xi_0)$ sufficiently small. It follows that $$ \left\Vert \left( \frac{d \zeta_1}{d\delta}, \frac{d \xi_1}{d\delta} \right) \right\Vert_{1,p} \leq C e^{-1/(3\delta)} $$ for $\zeta_0,\xi_0,\delta$ sufficiently small as well. Hence the same is true for the evaluation $\frac{d \xi_1}{d\delta}(z)$ for $z \in \ul{\Sigma} - U$. In particular, $\lim_{\delta \to 0 } (\partial_{\delta} \exp_{\ul{u}^{\delta}}(\xi_0^{\delta} + \xi_1))(z) = 0. $ It follows that the differential of the evaluation map has a continuous limit as $\delta \to 0$, which completes the proof of the theorem. \end{proof} Using the evaluation maps from the previous step, we construct embeddings of the families constructed above into suitable moduli spaces of stable marked curves, given by adding additional marked points which map to fixed submanifolds in $X$. A codimension two submanifold $Y \subset X$ is {\em transverse} to $\ul{u}: \ul{\Sigma} \to X$ if $\ul{u}$ meets $Y$ transversally in a single point $\ul{u}(z)$. \begin{definition} Let $\ul{u}: \ul{\Sigma} \to X$ be a stable map.
Given any family $\ul{Y} = (Y_1,\ldots, Y_\ell)$ of codimension two submanifolds transverse to $\ul{u}$ and a family $(\ul{\Sigma}_S,\ul{u}_S,\ul{z}_S)$ with parameter space $S$ of an $n$-marked stable map $(\ul{\Sigma},\ul{u},\ul{z})$, the {\em rigidified family} of $(n+\ell)$-marked nodal surfaces is defined by \begin{equation} \label{rigidify} \ul{\Sigma}_S^{\ul{Y},\ul{u}} := (\ul{\Sigma}_S, (z_{1,S}, \ldots , z_{n+\ell,S})) \to S, \quad \ul{u}_s(z_{n+i,s}) \in Y_i .\end{equation} \end{definition} \begin{proposition}\label{diffprop} Let $\ul{u}_S$ be a family of stable maps over a parameter space $S \subset \on{Def}(\ul{u})$ given by the gluing construction using a gluing profile $\varphi$ and system of coordinates $\ul{\kappa}$. Suppose that the evaluation map $\ev: (\ul{\Sigma} - U) \times S \to X$ is $C^1$, and that the rigidified family has stable underlying curves. Then the rigidified family of curves $\ul{\Sigma}_S^{\ul{Y},\ul{u}}$ is $C^1$ with respect to the gluing profile and local coordinates, that is, the map $$ S \to \overline{M}_{g,n+\ell}, \quad s \mapsto \ul{\Sigma}_s^{\ul{Y},\ul{u}} $$ is $C^1$ with respect to the smooth structure defined by $\varphi, \ul{\kappa}$. \end{proposition} \begin{proof} This follows from the implicit function theorem for $C^1$ maps and the differentiability of evaluation maps from the previous subsection. \end{proof} \begin{definition} \label{compat} Let $\ul{Y},\ul{u}$ be as above.
The pair $(\ul{Y},\ul{u})$ is {\em compatible} if \begin{enumerate} \item each $Y_j$ intersects $\ul{u}$ transversally in a single point $z_{n+j} \in \ul{\Sigma}$; \item if $\xi \in \ker(\tilde{D}_{\ul{u}})$ satisfies $\xi(z_{n+j}) \in T_{\ul{u}(z_{n+j})} Y_j$ for $j =1,\ldots, \ell$ then $\xi = 0$; \item the curve $\ul{\Sigma}$ marked with the additional points $z_{n+1},\ldots, z_{n+\ell}$ is stable; \item if some automorphism of $(\ul{\Sigma},\ul{u})$ maps $z_i$ to $z_j$ then $Y_i$ is equal to $Y_j$. \end{enumerate} \end{definition} The second condition says that there are no infinitesimal deformations which do not change the positions of the extra markings. \begin{proposition} \label{versalrigid} Let $\ul{u}$ be a parametrized regular stable map, and $\ul{u}_S$ the stratified-smooth universal deformation constructed in Theorem \ref{gluingstable} with base $S \subset \on{Def}(\ul{u})$. There exists a collection $\ul{Y}$ compatible with $\ul{u}$. Furthermore, if the evaluation map is $C^1$ as in Proposition \ref{diffprop} then $\ul{\Sigma}^{\ul{Y},\ul{u}}_S$ defines a $C^1$-immersion of $S$ into $\on{Def}(\ul{\Sigma}^{\ul{Y},\ul{u}})$. \end{proposition} \begin{proof} First we show the existence of a compatible collection. Given a regular stable map $(\ul{\Sigma},\ul{z} = (z_1,\ldots,z_n),\ul{u}:\ul{\Sigma} \to X)$, choose $Y_1,\ldots,Y_k$ transverse to $\ul{u}$ on the unstable components of $\ul{\Sigma}$, so that $\ul{\Sigma}_1 = (\ul{\Sigma}, (z_1,\ldots,z_{n+k}))$ is a stable curve. Let $\ul{\Sigma}_{S_1,1} \to S_1$ denote a universal deformation of $\ul{\Sigma}_1$. By universality, the family $\ul{\Sigma}_S^{\ul{Y},\ul{u}}$ is induced by a map $\psi : S \to S_1$.
We successively add marked points until $\psi$ is an immersion. Suppose that $\psi$ is not an immersion. Then we may choose an additional marked point $z_{n+k+1} \in \ul{\Sigma}$ such that $\d \ev_{n+k+1}$ is non-trivial on $\ker D \psi$. Since $\ul{u}$ is holomorphic, $\d \ul{u}$ has rank two at $z_{n+k+1}$. Let $Y_{n+k+1} \subset X$ be a codimension two submanifold containing $\ul{u}(z_{n+k+1})$ such that $\ul{u}$ is transverse to $Y_{n+k+1}$ at $z_{n+k+1}$, and $Y_{n+k+1}$ is transverse to $\ev_{n+k+1}$ at $(\ul{\Sigma},\ul{u})$. Suppose $z_{n+k+1}$ has orbit $z_{n+k+1}, z_{n+k+2},\ldots, z_{n+l}$ under the group $\on{Aut}(\ul{u})$. Using the same submanifold for each marking related by automorphisms gives a collection invariant under the action of automorphisms. The map $\psi_1$ for the new family has the property that $\ker(D\psi_1)$ has dimension at least two less than that of $\ker(D\psi)$. It follows that the procedure terminates after adding a finite number of markings. The last claim follows from the second condition in Definition \ref{compat}. \end{proof} \noindent {\em Step 6: Surjectivity} In this step, we show that the family constructed above contains a Gromov neighborhood of the central fiber.
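The surjectivity argument rests on the linear-algebra fact that the domain of a surjective operator with bounded right inverse splits as the direct sum of its kernel and the image of its adjoint. In finite dimensions this splitting can be checked directly; the following numerical sketch is only an illustration, with an arbitrary matrix `D` standing in for the linearized operator.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((2, 5))   # surjective "linearized operator" R^5 -> R^2

# Orthogonal projection onto Im D^T (the analogue of Im D*):
P = D.T @ np.linalg.inv(D @ D.T) @ D

v = rng.standard_normal(5)        # an arbitrary small deformation
v_im = P @ v                      # component in Im D^T
v_ker = v - v_im                  # component in ker D

print(np.allclose(D @ v_ker, 0))        # True: v_ker lies in ker D
print(np.allclose(v_ker + v_im, v))     # True: the splitting recovers v
```

This is the decomposition used below: a nearby deformation is split into a kernel part (which feeds into the family of approximate solutions) and a part in the image of the adjoint (which the implicit function theorem recovers as the correction).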
First we show: \begin{proposition} \label{surject} There exists a constant $\eps > 0$ such that any stable map $(\ul{\Sigma}_1,\ul{u}_1)$ with complex structure on $\ul{\Sigma}_1$ given by $j^{\delta}(\zeta)$ for some $\zeta \in \on{Def}(\ul{\Sigma}^{\ul{\delta}})$, and $ \ul{u}_1 := \exp_{\ul{u}^{\ul{\delta}}}(\ul{\xi}) $ with $ \Vert \zeta \Vert^2 + \Vert \xi \Vert^2_{1,p,\ul{\delta}} < \eps $, is of the form in Theorem \ref{gluingstable} for some $(\zeta_1,\xi_1) \in \Im (\tilde{D}_{\ul{u}}^{\ul{\delta}})^*$. \end{proposition} \begin{proof} Compare with \cite[Section 10.7.3]{ms:jh}. Let $(\zeta,\ul{\xi})$ be as in the statement of the proposition. We claim that $(\zeta,\ul{\xi}) = (\zeta_0^{\ul{\delta}},\ul{\xi}_0^{\ul{\delta}}) + (\zeta_1,\ul{\xi}_1)$ for some $(\zeta_0,\ul{\xi}_0) \in \ker(\tilde{D}_{\ul{u}})$ and $(\zeta_1,\ul{\xi}_1) \in \Im ((\tilde{D}_{\ul{u}}^{\ul{\delta}})^*)$ with small norm. It then follows by the implicit function theorem that $(\zeta_1,\ul{\xi}_1)$ is the solution given in Theorem \ref{gluingstable}. Now $\zeta = \zeta_0^{\ul{\delta}}$ for some $\zeta_0 \in \on{Def}_\Gamma(\ul{\Sigma})$ and gluing parameters $\ul{\delta}$, because $\on{Def}(\ul{\Sigma}^{\ul{\delta}})$ is the direct sum of the image of $\on{Def}_\Gamma(\ul{\Sigma})$ and $\mathbb{C}^m$. By the gluing theorem for indices (see e.g.
\cite[Theorem 2.4.1]{orient}), the image of $\on{Def}_\Gamma(\ul{u})$ under the gluing map projects isomorphically onto $\ker(\tilde{D}_{\ul{u}}^{\ul{\delta}})$ for $\ul{\delta}$ sufficiently small, and so $\on{Def}_\Gamma(\ul{u})$ is transverse to $\Im (\tilde{D}_{\ul{u}}^{\ul{\delta}})^*$ for $\ul{\delta}$ sufficiently small. The claim then follows from the inverse function theorem. \end{proof} Given a regular stable map $\ul{u}$ with stable domain, consider the family of $J$-holomorphic maps $\ul{u}_S$ produced by Theorem \ref{gluingstable}, with parameter space a neighborhood $S$ of $0$ in $\on{Def}(\ul{u})$, equipped with a canonical identification $\iota$ of the central fiber with the original map $\ul{u}$. In the case that the domain $\ul{\Sigma}$ is not a stable (marked) curve, we choose codimension two submanifolds $\ul{Y} = (Y_1,\ldots, Y_\ell)$ meeting $\ul{u}$ transversally so that $\ul{\Sigma}$ with the additional marked points is stable. Applying this to the family $\ul{u}_S$ gives a family of marked stable maps $\ul{u}_S^{\ul{Y}}$ with $n + \ell$ marked points over a parameter space $S \subset \on{Def}(\ul{u}^{\ul{Y}})$ in the deformation space of the map with the additional marked points. Now $\on{Def}(\ul{u}^{\ul{Y}}) \cong \on{Def}(\ul{u}) \oplus \bigoplus_{i=1}^{\ell} T_{z_{n+i}} \ul{\Sigma}$ includes the deformations of the markings, but these are fixed by requiring that the additional marked points map to the given collection $\ul{Y}$. Forgetting the additional marked points gives a family $\ul{u}_S$ of stable maps with $n$ marked points over a neighborhood of $0$ in $\on{Def}(\ul{u})$.
\begin{proposition} $(\mul{u}_S,\iota)$ is a versal stratified-smooth deformation of $\mul{u}$, and in fact $\mul{u}_S$ gives a versal stratified-smooth deformation of any of its fibers. \end{proposition} \begin{proof} First suppose that $\mul{\Sigma}$ is stable. Let $(\mul{u}_{S^1}^1,\iota^1)$ be another stratified-smooth deformation of $\mul{u}$ with parameter space $S^1$. Let $\mul{\Sigma}_S \to S \subset \on{Def}(\mul{\Sigma})$ be a minimal versal deformation of $\mul{\Sigma}$. The family $\mul{\Sigma}_{S^1}^1$ is obtained by pull-back of $\mul{\Sigma}_S$ by a stratified-smooth map $\psi: S^1 \to S$. By definition the map $\mul{u}_s^1$ converges to the central fiber in the Gromov topology as $s$ converges to the base point $0 \in S^1$. The exponential decay estimate of \cite[Lemma 4.7.3]{ms:jh} for holomorphic cylinders of small energy implies that for $s$ sufficiently close to $0$, the pair $(\mul{\Sigma}_s^1,\mul{u}_s^1)$ is given by exponentiation, $ \mul{u}_s^1 = \exp_{\mul{u}^{\mul{\delta}}}(\mul{\xi}) $ for some $\mul{\xi} \in \Omega^0( \mul{u}^{\mul{\delta},*} TX)$ with $\Vert \mul{\xi} \Vert_{1,p} < \eps_1$. Proposition \ref{surject} produces a stratified-smooth map $\psi : S^1 \to \on{Def}(\mul{u})$ such that $\mul{u}_{S^1}^1$ is the pull-back of $\mul{u}_S$ under $\psi$. To show that the deformation $(\mul{u}_S, \iota)$ is universal, let $ \phi_j: \mul{\Sigma}_{S^1}^1 \to \psi_j^* \mul{\Sigma}_S$, $j = 0,1$, be isomorphisms of families inducing the identity on the central fiber.
The difference between the two automorphisms is an automorphism of the family $\mul{\Sigma}_{S^1}^1$ inducing the identity on the central fiber; since the automorphism group of the central fiber is discrete, the automorphism must be the identity. In the case that $\mul{\Sigma}$ is not stable, after adding marked points passing through $Y_1,\ldots, Y_l$, we obtain a family $\mul{u}_{S^1}^{1,\mul{Y}}$ of stable maps with $n+l$ marked points. By the case with stable domain, this family is obtained by pull-back of $\mul{u}_S^{\mul{Y}}$ by some map $S^1 \to S$. Hence $\mul{u}_{S^1}^1$ is obtained by pull-back by the same map. The argument for an arbitrary fiber is similar and left to the reader. \end{proof} \begin{remark} In the case that $\mul{\Sigma}$ is unstable, it seems likely that restricting the family of Theorem \ref{gluingstable} to $\on{Def}(\mul{u})$ (that is, the perpendicular of $\on{aut}(\mul{\Sigma})$) also gives a universal deformation, but we do not know how to prove this. The problem is that in this case, several different gluing parameters give the same curve, and we do not have an implicit function theorem for varying gluing parameter. \end{remark} \noindent {\em Step 7: Injectivity.} By injectivity, we mean that the family constructed above contains each nearby stable map exactly once, up to the action of $\on{Aut}(u)$. This is part of what we called ``strongly universal'' in Definition \ref{excellent}. \begin{theorem} The versal deformations constructed in Step 6 above are strongly universal. \end{theorem} \begin{proof} Let $\mul{u}_S$ be a deformation constructed as in Step 6, using the exponential gluing profile. Let $\mul{\Sigma}_{1,S^1} \to S^1$ be a family giving a universal deformation of the curve $\mul{\Sigma}^{\mul{Y},\mul{u}}$ obtained by adding the additional markings mapping to the given submanifolds.
By Definition \ref{compat}, the family $\mul{\Sigma}_{S}^{\mul{Y},\mul{u}}$ induces a map $\phi: S \to S_1$ whose differential is injective in a neighborhood of $0$. By the inverse function theorem for $C^1$ maps, $\phi$ induces a homeomorphism onto its image. In particular, any two distinct fibers of $\mul{\Sigma}_S^{\mul{Y},\mul{u}}$ are non-isomorphic, and so two fibers of $\mul{\Sigma}_S$ are isomorphic if and only if they are related by a permutation of the markings. After shrinking $S$, this happens only if the permutation is induced by an automorphism of $\mul{u}$. Given another family $\mul{u}_{S'}' : \mul{\Sigma}_{S'} \to S'$ corresponding to a deformation of a fiber of $\mul{u}_S \to S$, the uniqueness part of the implicit function theorem provides a map $\phi': S' \to S$ so that $\mul{u}_{S'}'$ is obtained by pull-back from $\mul{u}_S$, and this map is unique by the injectivity just proved. This shows that $\mul{u}_S$ gives a stratified-smooth universal deformation of any of its fibers, and so is strongly universal. \end{proof} The theorem implies that the families in the universal deformations constructed above define stratified-smooth-compatible charts for the moduli space $\overline{M}_{g,n}(X,d)$. That is, for any stratum $M_{g,n,\Gamma}(X,d)$, the restrictions of the charts given by the universal deformation of some map of type $\Gamma$ to $M_{g,n,\Gamma}(X,d)$ are smoothly compatible. \begin{corollary} \label{orbi} Let $X,J$ be as above. For any $g \ge 0, n \ge 0$, the strongly universal stratified-smooth deformations of parametrized regular stable maps provide $\overline{M}_{g,n}^{\reg}(X)$ with the structure of a stratified-smooth topological orbifold. \end{corollary} In order to apply localization one needs to know that the fixed point sets admit tubular neighborhoods.
For this it is helpful to know that $\overline{M}_{g,n}^\reg(X,d)$ admits a $C^1$ structure. In order to obtain compatible charts, we construct the local coordinates inductively as in Definition \ref{compatcoord}, starting with the strata of highest codimension. \begin{proposition} \label{diffable} Let $X,J$ be as above. For any compatible system of local coordinates near the nodes, the strongly universal deformations constructed using the exponential gluing profile equip $\overline{M}_{g,n}^\reg(X)$ with the structure of a $C^1$-orbifold. \end{proposition} \begin{proof} We claim that the charts induced by the universal deformations are $C^1$-compatible, assuming they are constructed from the same system of local coordinates near the nodes. Given two sets of submanifolds $\mul{Y}_1,\mul{Y}_2$, define $\mul{Y} = \mul{Y}_1 \cup \mul{Y}_2$. The family $\mul{\Sigma}^{\mul{Y},\mul{u}}$ admits proper \'etale forgetful maps $ \mul{\Sigma}^{\mul{Y},\mul{u}}_S \to \mul{\Sigma}_S^{\mul{Y}_j,\mul{u}}, \quad j= 1,2 .$ The fiber consists of reorderings of the additional marked points induced by the action of $\on{Aut}(\mul{\Sigma},\mul{u})$, and the diagram provided by $\mul{\Sigma}^{\mul{Y},\mul{u}}$ expresses the composition as a $C^1$-morphism of orbifolds. \end{proof} \begin{remark} \label{smooth} Any compact $C^1$ orbifold admits a compatible $C^\infty$ structure, in analogy with the situation for manifolds. Indeed, as is well known, any orbifold admits a presentation as the quotient of a manifold (namely its orthogonal frame bundle) by a locally free group action, and so the orbifold case follows from the equivariant case proved in Palais \cite{pal:act}. Hence $\overline{M}^{\reg}_{g,n}(X,d)$, if compact, admits a (non-canonical) smooth structure.
Presumably the compactness assumption may be removed, but we have not proved that this is so. See however the construction of smoothly compatible Kuranishi charts in \cite[Appendix]{fooo}. \end{remark} \section{Deformations of symplectic vortices} We begin by reviewing the theory of symplectic vortices introduced by Mundet i Riera \cite{mun:ham} and Salamon and collaborators \cite{ciel:vor}. Let $\Sigma$ be a compact complex curve, $G$ a compact Lie group, and $\pi: P \to \Sigma$ a smooth principal $G$-bundle. Given any left $G$-manifold $F$ we have a left action of $G$ on $P \times F$ given by $g(p,f) = (pg^{-1},gf)$, and we denote by $P(F) = (P \times F)/G$ the quotient, that is, the associated fiber bundle with fiber $F$. Let $X$ be a compact Hamiltonian $G$-manifold with symplectic form $\omega$ and moment map $\Phi:X \to \lie{g}^*$. The action of $G$ on $X$ induces an action on the space $\mathcal{J}(X)$ of almost complex structures on $X$; we denote by $\mathcal{J}(X)^G$ the invariant subspace. Let $\psi: \Sigma \to BG$ be a classifying map for $P$, so that $P \cong \psi^* EG$ and $P(X) \cong \psi^* EG \times_G X \cong \psi^* X_G$, where $X_G = EG \times_G X$. Continuous sections $u: \Sigma \to P(X)$ are in one-to-one correspondence with lifts of $\psi$ to $X_G$. The homology class $\deg(u)$ of the section $u$ is defined to be the homology class $\deg(u) \in H_2^G(X,\mathbb{Z})$ of the corresponding lift. Let $\A(P)$ be the space of smooth connections on $P$, and $P(\lie{g})$ the adjoint bundle. For any $A \in \A(P)$, let $F_A \in \Omega^2(\Sigma,P(\lie{g}))$ denote the curvature of $A$. Any connection $A \in \A(P)$ induces a map of spaces of almost complex structures $$ \mathcal{J}(X)^G \to \mathcal{J}(P(X)), \ \ J \mapsto J_A$$ by combining the almost complex structures on $X$ and $\Sigma$ using the splitting defined by the connection.
Let $\Gamma(\Sigma,P(X))$ denote the space of smooth sections of $P(X)$. Consider the vector bundle \begin{equation} \label{crop} \bigcup_{u \in \Gamma(\Sigma,P(X)) } \Omega^{0,1}(\Sigma, u^*T^{\operatorname{vert}} P(X)) \to \Gamma(\Sigma,P(X)) . \end{equation} We denote by $\overline{\partial}_A$ the section given by the Cauchy-Riemann operator defined by $J_A$. A {\em gauged map} from $\Sigma$ to $X$ is a datum $(P,A,u)$ where $A \in \A(P)$ and $u: \Sigma \to P(X)$ is a section. A {\em gauged holomorphic map} is a gauged map $(P,A,u)$ such that $\overline{\partial}_A u = 0 $. Let $\mathbb{H}(P,X)$ be the space of gauged holomorphic maps with underlying bundle $P$. Let $\G(P)$ denote the group of gauge transformations $$ \G(P) = \{ a: P \to P, \ a(pg) = a(p)g, \ \ \pi \circ a = \pi \} .$$ The Lie algebra of $\G(P)$ is the space of sections $\Omega^0(\Sigma,P(\lie{g}))$ of the adjoint bundle $P(\lie{g}) = P \times_G \lie{g}$. We identify $\lie{g} \cong \lie{g}^*$, and hence $P(\lie{g}) \cong P(\lie{g}^*)$, using an invariant metric on $\lie{g}$. Let $P(\Phi): P(X) \to P(\lie{g})$ denote the map induced by the equivariant map $\Phi : X \to \lie{g}$. \begin{definition} A gauged holomorphic map $(A,u) \in \mathbb{H}(P,X)$ is a {\em symplectic vortex} (or vortex for short) if it satisfies $$ F_A + \Vol_{\Sigma} u^* P(\Phi) = 0 .$$ An {\em $n$-marked} symplectic vortex is a vortex $(A,u)$ together with an $n$-tuple $\mul{z} = (z_1,\ldots, z_n)$ of distinct points on $\Sigma$. A marked vortex $(A,u,\mul{z})$ is {\em stable} if it has finite automorphism group. \end{definition} The equation in the definition can be interpreted as the zero level set condition for a formal moment map for the action of the group of gauge transformations; see \cite{mun:ham}, \cite{ciel:vor}.
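A basic example, stated here up to sign and normalization conventions (which vary in the literature), is $G = S^1$ acting on $X = \mathbb{C}$ by rotation (noncompact, so strictly outside the compactness assumption above), with moment map $\Phi(u) = \frac{1}{2}(|u|^2 - 1)$ after identifying $\lie{g} \cong \mathbb{R}$. The equations $\overline{\partial}_A u = 0$ and $$ F_A + \frac{1}{2}\,(|u|^2 - 1) \Vol_\Sigma = 0 $$ are then the classical abelian vortex equations studied by Jaffe-Taubes and Bradlow.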
The {\em energy} of a gauged holomorphic map $(A,u)$ is given by $$ E(A,u) = \frac{1}{2} \int_\Sigma \left(| \d_A u |^2 + |F_A|^2 + |u^* P(\Phi)|^2 \right) \Vol_\Sigma.$$ The \emph{equivariant symplectic area} of a pair $(A,u)$ is the pairing of the homology class $\deg(u)$ with the class $[\omega_G] \in H^2(X_G)$ of the equivariant symplectic form $\omega_G = \omega + \Phi$, $$ D(A,u) = (\deg(u), [\omega_G]) = ([\Sigma], u^* [\omega_G] ) .$$ \begin{lemma} Suppose $\Vol_\Sigma$ is the K\"ahler form for the metric on $\Sigma$. The energy and equivariant area are related by \begin{equation} \label{energyaction} E(A,u) = D(A,u) + \int_\Sigma \left( | \overline{\partial}_A u |^2 + \frac{1}{2} | F_A + \Vol_\Sigma u^* P(\Phi) |^2 \right) \Vol_\Sigma. \end{equation} \end{lemma} \begin{proof} See \cite[Proposition 2.2]{ci:symvortex}. \end{proof} In particular, for any symplectic vortex the energy and action are equal. Let $M(P,X)$ denote the moduli space of vortices $$ M(P,X) := \mathbb{H}(P,X) / \G(P) = \{ F_A + \Vol_{\Sigma} u^* P(\Phi) = 0 \}/\G(P) .$$ Let $M_n(P,X)$ denote the moduli space of $n$-marked vortices, up to gauge transformation, and $M_n(\Sigma,X) = \bigcup_{P \to \Sigma} M_n(P,X)$ the union over types of bundles $P$. Clearly, $M_n(\Sigma,X)$ is homeomorphic to the product $M(\Sigma,X) \times M_n(\Sigma)$, where $M(\Sigma,X) := M_0(\Sigma,X)$ and $M_n(\Sigma)$ denotes the configuration space of $n$-tuples of distinct points on $\Sigma$. We wish to study families and deformations of symplectic vortices.
For families with smooth domain, the definitions are straightforward: \begin{definition} \label{strongly universal} A {\em smooth family of vortices} on a principal $G$-bundle $P$ on $\Sigma$ over a parameter space $S$ consists of a family of connections depending smoothly on $s \in S$, that is, a smooth map $A_S : S \times P \to T^*P \otimes \lie{g}$ such that the restriction $A_s$ of $A_S$ to any $ \{s \} \times P$ is a connection, together with a smooth family of (pseudo)holomorphic sections $u_S = (u_s)_{s \in S}$, such that each pair $(A_s,u_s), s \in S$, is a symplectic vortex. A {\em deformation} of $(A,u)$ is a germ of a smooth family $(A_S,u_S)$ together with an isomorphism (gauge transformation) relating $(A_0,u_0)$ with $(A,u)$. A deformation is {\em universal} if it satisfies the condition in Definition \ref{ssfam}, and {\em strongly universal} if it satisfies the conditions in Definition \ref{excellent}. \end{definition} We define a {\em linearized operator} associated to a vortex as follows. Define \begin{equation} \label{linearized1} \d_{A,u}: \Omega^1(\Sigma,P(\lie{g})) \oplus \Omega^0(\Sigma,u^* TP(X) ) \to \Omega^2(\Sigma,P(\lie{g})) \end{equation} $$ \d_{A,u}(a,\xi) := \d_A a + \Vol_{\Sigma} u^* L_\xi P(\Phi). $$ Here $L_\xi P(\Phi)$ denotes the derivative of $P(\Phi)$ with respect to the vector field generated by $\xi$, and $u^* L_\xi P(\Phi)$ its evaluation at $u$. Define an operator \begin{equation} \label{linearized2} \d_{A,u}^*: \Omega^1(\Sigma,P(\lie{g})) \oplus \Omega^0(\Sigma,u^* TP(X) ) \to \Omega^0(\Sigma,P(\lie{g})) \end{equation} $$ \d_{A,u}^*(a,\xi) = \d_A^*a + u^* L_{J \xi} P(\Phi) .$$ (This is not the adjoint of the operator in \eqref{linearized}, but rather is defined by analogy with the case where $X$ is trivial.)
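For instance, when $X$ is a point the terms involving $P(\Phi)$ vanish, and the condition $\d_{A,u}^*(a,\xi) = 0$ reduces to the usual Coulomb gauge condition $$ \d_A^* a = 0 $$ for the connection perturbation alone.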
It is shown in \cite[Section 4]{ciel:vor} that if $(A,u)$ is stable then the set $$ W_{A,u} = \{ (A + a, \exp_u(\xi)), \ (a,\xi) \in \ker \d_{A,u}^*\} $$ is a slice for the gauge group action near $(A,u)$. Define \begin{multline} \label{Feps} \cF_{A,u} : \Omega^1(\Sigma,P(\lie{g})) \oplus \Omega^0(\Sigma,u^* T^{\operatorname{vert}} P(X)) \\ \to (\Omega^0 \oplus \Omega^2)(\Sigma,P(\lie{g})) \oplus \Omega^{0,1}(\Sigma,u^* T^{\operatorname{vert}} P(X)) \\ (a,\xi) \mapsto \left( F_{A + a} + \Vol_\Sigma \exp_u(\xi)^* P(\Phi), \d_{A,u}^* (a,\xi), \Psi_u(\xi)^{-1} \overline{\partial}_{A + a} \exp_u(\xi) \right) .\end{multline} Let $$\Omega^1(\Sigma,P(\lie{g})) \to \Omega^1(\Sigma,u^* T^{\operatorname{vert}}P(X)), \ \ \ a \mapsto a_X $$ denote the map induced by the infinitesimal action. The linearization of the last component of \eqref{Feps} is $$ D_{A,u} (a,\xi) = (\nabla_A \xi)^{0,1} + \frac{1}{2} J_u (\nabla_\xi J )_u \partial_A u + a_X^{0,1}.$$ Here the superscript $0,1$ denotes projection onto the $(0,1)$-component. The linearized operator for a vortex $(A,u)$ is the operator \begin{multline} \label{linearized} \tilde{D}_{A,u} = (\d_{A,u}, \d_{A,u}^*, D_{A,u} ): \Omega^1(\Sigma,P(\lie{g})) \oplus \Omega^0(\Sigma, u^* T^{\operatorname{vert}}P(X)) \\ \to (\Omega^0 \oplus \Omega^2)(\Sigma,P(\lie{g})) \oplus \Omega^{0,1}(\Sigma, u^* T^{\operatorname{vert}}P(X)) . \end{multline} A vortex $(A,u)$ is {\em regular} if the operator $\tilde{D}_{A,u}$ is surjective. A marked vortex $(A,u,\mul{z})$ is regular if the underlying unmarked vortex is regular.
The {\em space of infinitesimal deformations} of $(A,u)$ is $ \on{Def}(A,u) := \ker(\tilde{D}_{A,u}) .$ \begin{theorem} \label{deformvortex} Any regular vortex $(A,u)$ with smooth domain has a strongly universal smooth deformation if and only if it is stable. \end{theorem} \begin{proof} Give the spaces of connections and sections the structure of Banach manifolds by taking completions with respect to the Sobolev norms $W^{1,p}$ for $1$-forms and $L^p$ for $0$- and $2$-forms. For $p>2$, the map $\cF_{A,u}$ extends to a smooth map of Banach spaces \begin{multline} \cF_{A,u} : \Omega^1(\Sigma,P(\lie{g}))_{1,p} \oplus \Omega^0(\Sigma,u^* T^{\operatorname{vert}} P(X))_{1,p} \\ \to (\Omega^0 \oplus \Omega^2)(\Sigma,P(\lie{g}))_{0,p} \oplus \Omega^{0,1}(\Sigma,u^* T^{\operatorname{vert}} P(X))_{0,p} \end{multline} equivariant for the action of the group $\G(P)_{2,p}$ of gauge transformations of class $W^{2,p}$. Suppose that $(A,u)$ is regular and stable. By the implicit function theorem, there is a local homeomorphism $$ \ker(\tilde{D}_{A,u}) \to \left\{ \begin{array}{c} F_{A + a} + \Vol_\Sigma (\exp_u(\xi))^* P(\Phi) = 0 \\ \overline{\partial}_{A + a} (\exp_u(\xi)) = 0 \\ \d_{A,u}^*(a,\xi) = 0 \end{array} \right\} .$$ This gives rise to a family $(A_S,u_S) \to S$ over a neighborhood $S$ of $0$ in $\ker(\tilde{D}_{A,u})$. By \cite[Theorem 3.1]{ciel:vor}, $(A_S,u_S)$ is a smooth family, assuming $(A,u)$ is smooth. Given any other family $(A_{S'}',u_{S'}') \to S'$ of stable vortices with $(A'_0,u_0') = (A,u)$, the implicit function theorem provides a smooth map $S' \to S$ so that $(A'_{S'},u'_{S'})$ is obtained from $(A_S,u_S)$ by pull-back.
The first property of the universal deformation is a consequence of the slice condition; the second property follows from the fact that the projection $\ker(\tilde{D}_{A,u}) \to \ker(\tilde{D}_{A_s,u_s})$ is an isomorphism for sufficiently small $s$. \end{proof} \noindent Let $M^\reg_n(\Sigma,X)$ denote the moduli space of regular, stable $n$-marked symplectic vortices from $\Sigma $ to $X$. We denote by $(c_1^G(TX),d)$ the pairing of $d$ with the first Chern class $c_1^G(TX)$ of the bundle $P(TX) \to P(X)$. \begin{theorem} \label{smoothreg} Let $\Sigma,X,J$ be as above. $M^\reg_n(\Sigma,X)$ has the structure of a smooth orbifold with tangent space at $[A,u]$ isomorphic to $\on{Def}(A,u)$, and the dimension of the component of homology class $d \in H_2^G(X)$ is given by $$ \dim(M^\reg_n(\Sigma,X,d)) = (1-g)(\dim(X) - 2\dim(G)) + 2((c_1^G(TX),d) + n) .$$ \end{theorem} \begin{proof} Charts for $M^{\reg}_n(\Sigma,X)$ are provided by the strongly universal deformations. The dimension of the tangent space at $[A,u]$ is given by the index of the linearized operator $\tilde{D}_{A,u}$, which deforms via Fredholm operators to the sum of the operator $\d_A \oplus \d_A^*$ for the connection, which has index $2\dim(G)(g-1)$, and the linearized Cauchy-Riemann operator on the curve, which has index $(1-g)\dim(X) + 2n + 2(c_1^G(TX),d)$ by Riemann-Roch, if $(A,u)$ has equivariant homology class $d$ (which determines the first Chern class of $P$ by projection). \end{proof} \subsection{Polystable vortices} The moduli space of symplectic vortices admits a compactification which allows bubbling of the section in the fibers.
\begin{definition} \label{nodal} A {\em nodal gauged marked holomorphic map} from $\Sigma$ to $X$ consists of a datum $(\hat{\Sigma},P,A,\mul{u},\mul{z})$ where $P \to \Sigma$ is a principal $G$-bundle, $A \in \A(P)$ is a connection, $\hat{\Sigma}$ is a marked nodal curve, $v: \hat{\Sigma} \to \Sigma$ is a holomorphic map of degree $[\Sigma]$, and $\mul{u}: \hat{\Sigma} \to P(X)$ is a $J_A$-holomorphic map from the nodal curve $\hat{\Sigma}$ such that $\pi \circ \mul{u}$ has class $[\Sigma]$. In other words, \begin{enumerate} \item $\hat{\Sigma}$ is a connected nodal complex curve consisting of a {\em principal component} $\Sigma_0$ equipped with an isomorphism with $\Sigma$, together with a number of projective lines $\Sigma_1,\ldots,\Sigma_k$. We denote by $w_1^\pm,\ldots,w_m^\pm$ the nodes. For each $i = 1,\ldots, k$, we denote by $w_i^0 \in \Sigma_0$ the attaching point to the principal component. \item $(A,u) \in \mathbb{H}(P,X)$ is a gauged holomorphic map from $\Sigma$ to $X$; \item for each non-principal component $\Sigma_i$, a holomorphic map $u_i: \Sigma_i \to P(X)_{w_i^0}$; \item $\mul{z} = (z_1,\ldots,z_n) \in \hat{\Sigma}$ are distinct, smooth points of $\hat{\Sigma}$. \end{enumerate} \end{definition} Let $\mathbb{H}(\hat{\Sigma},P(X))$ denote the space of nodal gauged holomorphic sections with domain $\hat{\Sigma}$ and bundle $P$.
The group of gauge transformations $\G(P)$ acts on $\mathbb{H}(\hat{\Sigma},P(X))$ by $ g (A,\mul{u}) = (g^* A, g \circ \mul{u}) .$ The generating vector field for $\zeta \in \Omega^0(\Sigma,P(\lie{g}))$ acting on $\mathbb{H}(\hat{\Sigma},P(X))$ at $(\hat{\Sigma},A,\mul{u})$ is the tuple given by \begin{equation} \label{generate} \zeta_{\mathbb{H}(\hat{\Sigma},P(X))}(\hat{\Sigma},A,\mul{u}) = (\d_A \zeta, u_0^* P(\zeta_X), (u_i^* P(\zeta_X(w_i^0)))_{i=1}^k ) \end{equation} in $ \Omega^1(\Sigma,P(\lie{g})) \oplus \Omega^0(\hat{\Sigma},\mul{u}^* T^{\operatorname{vert}} P(X))$. Here $P(\zeta_X) \in \Omega^0(\Sigma, P( \Vect(X)))$ is the fiber-wise vector field generated by $\zeta$, and $u_0^* P(\zeta_X) \in T^{\operatorname{vert}} P(X)$ is the evaluation at $u_0$. Similarly for the bubble components $u_1,\ldots, u_k$ in the fibers $P(X)_{w^0_1},\ldots, P(X)_{w^0_k}$. A slice is given by taking the perpendicular to the tangent spaces to the $\G(P)$-orbits. We will assume for simplicity that the stabilizer of the $\G(P)$-action on the principal component is finite, so that a slice is given locally by the kernel of $\d_{A,u_0}^*$, that is, the Coulomb gauge condition on the principal component. The implicit function theorem shows that any nearby pair $(A_1,\mul{u}_1)$ is complex gauge equivalent to a pair of the form $(A + a, \exp_{\mul{u}}(\mul{v}))$ with $(a,\mul{v}) \in \ker \d_{A,u_0}^*$. \begin{definition} A {\em nodal vortex} is a stable nodal gauged holomorphic map such that the principal component is a vortex. A nodal vortex $(\hat{\Sigma},A,\mul{u},\mul{z})$ is {\em polystable} if each sphere bubble $\Sigma_i$ on which $u_i$ is constant has at least three marked or singular points, and {\em stable} if it has finite automorphism group.
An {\em isomorphism} of nodal vortices $(\hat{\Sigma},A,\mul{u},\mul{z})$, $(\hat{\Sigma}',A',\mul{u}',\mul{z}')$ consists of an automorphism of the domain, acting trivially on the principal component, and a corresponding automorphism of the principal bundle mapping $(A,\mul{u}) $ to $(A',\mul{u}')$ and mapping the markings $\mul{z}$ to $\mul{z}'$. For any nodal section $\mul{u}:\hat{\Sigma} \to P(X)$, the {\em homology class} of $\mul{u}$ is defined as the sum of the homology class $d_0 \in H_2^G(X,\mathbb{Z})$ of the principal component $u_0$ and the homology classes $d_i \in H_2(X,\mathbb{Z}), i = 1,\ldots,k$, of the sphere bubbles, using the inclusion $H_2(X,\mathbb{Z}) \to H_2^G(X,\mathbb{Z})$ given by equivariant formality. The {\em combinatorial type} $\Gamma(\hat{\Sigma},A,\mul{u},\mul{z})$ of a gauged nodal map is a rooted graph whose vertices represent the components of $\hat{\Sigma}$, whose finite edges represent the nodes, whose semi-infinite edges represent the markings, and whose root vertex represents the principal component. \end{definition} \noindent Note that there is no stability condition for points on the principal component. In particular, nodal gauged holomorphic maps with no markings can be polystable. The term {\em polystable} is borrowed from the vector bundle case. In that situation, a bundle is {\em stable} if it is flat and has only central automorphisms, and {\em polystable} if it is a direct sum of stable bundles of the same slope. Any flat bundle is automatically polystable; a bundle is {\em semistable} if it is grade equivalent to a polystable bundle. In particular, the moduli space of stable bundles is definitely {\em not} compact, and we feel that the vortex terminology should include this fact as a special case. \noindent From now on, we fix the bundle $P$. \begin{definition} Let $X$ be as above.
A {\em smooth family of fixed type} of nodal vortices to $X$ consists of a smooth family $\hat{\Sigma}_S \to S$ of nodal curves of fixed type, a smooth family of holomorphic maps $v_S: \hat{\Sigma}_S \to \Sigma$ of class $[\Sigma]$, a smooth family $\mul{u}_S : \hat{\Sigma}_S \to P(X)$ of maps, and a smooth family $A_S : S \times P \to T^* P$ of connections over $S$. A {\em smooth deformation} of a nodal vortex $(A,\hat{\Sigma},\mul{u},\mul{z})$ of fixed type consists of a germ of a smooth family $(A_S,\hat{\Sigma}_S,\mul{u}_S,\mul{z}_S)$ of nodal vortices of fixed type together with an identification $\iota$ of the central fiber with $(A,\hat{\Sigma},\mul{u},\mul{z})$. A {\em stratified-smooth family of marked nodal symplectic vortices} is a datum $(\hat{\Sigma}_S,A_S,\mul{u}_S,\mul{z}_S)$ consisting of a stratified-smooth family $\hat{\Sigma}_S \to S$ of nodal curves, a stratified-smooth family of holomorphic maps $v: \hat{\Sigma}_S \to \Sigma$ of class $[\Sigma]$, a stratified-smooth family $A_S$ of connections on $P$, and a stratified-smooth family of maps $\mul{u}_S: \hat{\Sigma}_S \to P(X)$, such that each tuple $(\hat{\Sigma}_s,A_s,\mul{u}_s,\mul{z}_s)$ is a marked nodal symplectic vortex. A {\em family of polystable symplectic vortices} is a family of marked nodal symplectic vortices such that any fiber is polystable. \end{definition} \noindent A smooth vector bundle $\mul{E} \to \hat{\Sigma}$ is a collection of smooth vector bundles $E_i$ over the components $\Sigma_i$ of $\hat{\Sigma}$, equipped with identifications of the fibers at nodal points $E_{i^+(w^+_k)} \to E_{i^-(w^-_k)}, k = 1,\ldots, m$.
We denote by $\Omega(\hat{\Sigma},\mul{E})$ the sum over components, $ \Omega(\hat{\Sigma},\mul{E}) = \bigoplus_{i=1}^k \Omega(\Sigma_i, E_i) $, where $E_i = \mul{E} | \Sigma_i .$ \begin{definition} For a polystable vortex $(\hat{\Sigma},A,\mul{u})$, let $\tilde{D}_{A,\mul{u}}$ denote the {\em linearized operator} \begin{multline} \Omega^1({\Sigma},P(\lie{g})) \oplus \Omega^0(\hat{\Sigma},\mul{u}^* T^{\operatorname{vert}} P(X)) \\ \to (\Omega^{0} \oplus \Omega^2)({\Sigma},P(\lie{g})) \oplus \Omega^{0,1}(\hat{\Sigma},\mul{u}^* T^{\operatorname{vert}} P(X)) \oplus \bigoplus_{k=1}^m T_{\mul{u}(w_k^\pm)}^{\operatorname{vert}} P(X) \end{multline} given by the linearized vortex operator $(\d_{A,u_0}, D_{A,u_0})$ on the principal component, the linearized Cauchy-Riemann operator $\tilde{D}_{u_i}$ on the bubbles, the slice operator $\d_{A,u_0}^*$, and the difference operator on the fibers over the nodes $$ \Omega^0(\hat{\Sigma},\mul{u}^* T^{\operatorname{vert}} P(X)) \to \bigoplus_{i=1}^m T_{\mul{u}(w_i^\pm)}^{\operatorname{vert}} P(X), \quad \mul{\xi} \mapsto ( \mul{\xi}(w_i^+) - \mul{\xi}(w_i^-))_{i=1}^m .$$ $(A,\mul{u})$ is {\em regular} if $\tilde{D}_{A,\mul{u}}$ is surjective. The space of infinitesimal deformations of $(A,\mul{u})$ of fixed type is $ \on{Def}_\Gamma(A,\mul{u}) := \ker \tilde{D}_{A,\mul{u}}/ \on{aut}(\hat{\Sigma}) $, where $ \on{aut}(\hat{\Sigma})$ denotes the space of infinitesimal automorphisms acting trivially on the principal component.
The space of infinitesimal deformations of $(A,\mul{u})$ is $$ \on{Def}(A,\mul{u}) := \on{Def}_\Gamma(A,\mul{u}) \oplus \bigoplus_{j=1}^m T_{w_j^+} \hat{\Sigma} \otimes T_{w_j^-} \hat{\Sigma}, $$ consisting of a deformation of fixed type together with a collection of gluing parameters. \end{definition} \subsection{Constructing deformations of symplectic vortices} First we construct deformations of fixed type. \begin{theorem} \label{deformnodalvortex} A regular polystable vortex has a strongly universal smooth deformation of fixed type if and only if it is stable. \end{theorem} \noindent The proof is by the implicit function theorem applied to the map \begin{multline} \cF_{A,\mul{u}} (a,\mul{\xi}) = (F_{A + a} + \Vol_\Sigma (\exp_{u_0}(\xi_0))^* P(\Phi), \d_{A,u_0}^* (a,\mul{\xi}), \\ \Psi_{u_0}(\xi_0)^{-1} \overline{\partial}_A u_0, (\Psi_{u_i}(\xi_i)^{-1} \overline{\partial} u_i)_{i=1}^k, (\mul{\xi}(w_i^+) - \mul{\xi}(w_i^-))_{i=1}^m ), \end{multline} whose linearization is $\tilde{D}_{A,\mul{u}}$. The proof is left to the reader. We denote by $M_{n,\Gamma}(\Sigma,X,d)$ the moduli space of isomorphism classes of polystable vortices of combinatorial type $\Gamma$ and homology class $d \in H_2^G(X,\mathbb{Z})$, and by $M^{\reg}_{n,\Gamma}(\Sigma,X,d)$ the regular locus. \begin{corollary} \label{stablemreg} $M_{n,\Gamma}^\reg(\Sigma,X,d)$ has the structure of a smooth orbifold of dimension given, if $\Sigma$ is connected, by $(1 - g)(\dim(X) - 2\dim(G)) + 2( n + (c_1^G(TX),d) - m)$, where $m$ is the number of nodes. \end{corollary} \label{gluingv} We now prove that a regular stable symplectic vortex from $\Sigma$ to $X$ admits a strongly universal stratified-smooth deformation if it is strongly stable, that is, we prove Theorem \ref{main}.
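Note that, at a regular stable vortex, where the stratum is modeled on $\on{Def}_\Gamma(A,\mul{u})$, adding the $m$ complex gluing parameters recovers the dimension of Theorem \ref{smoothreg}: $$ \dim \on{Def}(A,\mul{u}) = \dim \on{Def}_\Gamma(A,\mul{u}) + 2m = (1-g)(\dim(X) - 2\dim(G)) + 2\bigl(n + (c_1^G(TX),d)\bigr). $$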
We explain the construction for a single bubble only, so that $\hat{\Sigma}$ is the union of a principal component $\Sigma_+ = \Sigma$ and a holomorphic sphere $\Sigma_-$, attached at a single node with branches $w_\pm$. We denote by $(A,u_+)$ the restriction to the principal component and by $u_-$ the bubble, so that $x := u_+(w_+) = u_-(w_-)$ and $\mul{u} = (u_+,u_-)$. We choose a local coordinate near the node, equivariant for the action of the automorphism group $\on{Aut}(A,\mul{u})$ in the sense that $\on{Aut}(A,\mul{u})$ acts on the local coordinate by multiplication by roots of unity. The construction depends on the following choices: \begin{definition} A {\em gluing datum} for $(\hat{\Sigma},A,\mul{u})$ consists of \begin{enumerate} \item neighborhoods $U_\pm$ of the nodes $w_\pm$; \item {\em local coordinates} $\mul{\kappa} = (\kappa_+,\kappa_-)$ on $U_\pm$; \item a trivialization $ P |_{U_0} \to G \times U_0 $ on $U_0$; \item a {\em gluing parameter} $\delta$; \item an {\em annulus parameter} $\rho$; \item a {\em cutoff function} $\alpha$ as in \eqref{firstcut}. \end{enumerate} \end{definition} \noindent {\em Step 1: Approximate solution.} Given a nodal vortex $(A,\mul{u})$ as above and a gluing datum, we wish to define an approximate solution $(A, \mul{u}^\delta)$ to the vortex equations. Let $\exp_x: T_x X \to X $ denote the exponential map defined by the metric on $X$. Define sections $$\xi_\pm: U_\pm \to T_x X, \quad u_\pm(z)=\exp_{x}(\xi_\pm(z)) .$$ Let $\hat{\Sigma}^\delta$ denote the surface obtained by gluing; since the bubble has genus zero, this surface is isomorphic to $\Sigma$, but not canonically.
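In a local model, under one standard convention (the precise neck size being fixed by the annulus parameter $\rho$), $\hat{\Sigma}^{\delta}$ is obtained by deleting small disks around $w_\pm$ and identifying the resulting annular collars via $$ z_+ \sim z_- \quad \text{iff} \quad \kappa_+(z_+)\, \kappa_-(z_-) = \delta . $$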
Define the {\em pre-glued section} $\mul{u}^\delta: \hat{\Sigma}^\delta \to P(X)$, \begin{multline} \mul{u}^\delta(z) = \exp_{x} \left( \alpha(|\kappa_+(z)|/ \rho |\delta|^{1/2}) (\xi_+(z) - \xi_\pm(w^\pm)) \right. +\\ \left. \alpha (|\kappa_-(z)|/ \rho |\delta|^{1/2} ) (\xi_-(z) - \xi_\pm(w^\pm)) + \xi_\pm(w^\pm) \right) \end{multline} for $| \kappa_\pm(z) | \leq 2|\delta|^{1/2} \rho^2$; elsewhere let $\mul{u}^\delta(z) = \mul{u}(z)$, using the identification of $\Sigma$ with $\hat{\Sigma}$ away from the gluing region. We do not modify $A$ in the bubble region; this is because, after rescaling, the connection on the bubble is already close to the trivial connection. The pair $(A,\mul{u}^\delta)$ is an approximate solution to the vortex equations in the following Sobolev norms. Let $\Omega(\Sigma^\delta,P^\delta(\lie{g}))_{k,p}$ denote the $W^{k,p}$-completion using the standard metric. Let $g^\delta$ denote the $C^0$ metric on the glued surface in \eqref{gluemetric}, and let $\Omega^{0,1}(\Sigma^\delta, u^{\delta,*} T^{\operatorname{vert}}P(X))_{k,p,\delta}$ denote the completion with respect to the norms defined by $g^\delta$.
Let $$ \cH_{\delta} := \Omega^2(\Sigma,P(\lie{g}))_{{0,3}} \oplus \Omega^0(\Sigma,P(\lie{g}))_{0,3} \oplus \Omega^{0,1}(\Sigma^\delta, u^{\delta,*} T^{\operatorname{vert}}P(X))_{{0,3},\delta}$$ with norm \begin{equation} \label{deltanorm} \Vert (\phi,\psi,\eta) \Vert_{\delta}^2 = \Vert \phi \Vert_{{0,3}}^2 + \Vert \psi \Vert_{0,3}^2 + \Vert \eta \Vert_{0,3,\delta}^2 .\end{equation} Let $$ \cI_{\delta} := \Omega^1(\Sigma, P(\lie{g}))_{{1,3}} \oplus \Omega^0(\Sigma^\delta, u^{\delta,*} T^{\operatorname{vert}}P(X))_{{1,3},\delta} $$ with norm $$ \Vert (a, \xi ) \Vert_{\delta}^2 = \Vert a \Vert_{{1,3}}^2 + \Vert \xi \Vert_{{1,3},\delta}^2 .$$ Locally the moduli space of polystable vortices is in bijection with the zero set of the map \begin{multline} \label{Fdel} \cF_{A,\mul{u}}^{\delta} : \cI_{\delta} \to \cH_{\delta} \\ (a,\xi) \mapsto \left( F_{A + a} + \Vol_\Sigma \exp_{\mul{u}^{\delta}}(\xi)^* P(\Phi), d_{A,u_+}^* (a,\xi), \Psi_{\mul{u}^{\delta}}(\xi)^{-1} \overline{\partial}_{A + a} \exp_{\mul{u}^{\delta}}(\xi) \right) .\end{multline} Here the second component enforces a slice condition.
That $\cF_{A,\mul{u}}^{\delta}$ is well-defined follows from Sobolev embedding: in particular, there is a $\delta$-uniform embedding \begin{equation} \label{unifsob} \Omega^0(\Sigma^\delta, u^{\delta,*} T^{\operatorname{vert}}P(X))_{{1,3},\delta} \to \Omega^0(\Sigma^\delta, u^{\delta,*} T^{\operatorname{vert}}P(X))_{0,\infty}, \end{equation} since the dimensions of the cone in the cone condition \cite[Chapter 4]{ad:so} are uniformly bounded in $\delta$, and the metric is uniformly comparable to the flat metric. \begin{lemma} \label{approx} Let $(A,\mul{u})$ be a symplectic vortex on a nodal curve with a single node $w = (w^+,w^-)$. There exist constants $c_0,c_1 > 0$ such that if $|\delta| < c_1$, $\rho > 1/c_1 $ and $ |\delta| \rho^4 < c_1 $ then the pair $(A,\mul{u}^\delta) \in \A(P) \times \Gamma(P(X))$ satisfies \begin{equation} \Vert \cF_{A,\mul{u}}^{\delta}(0,0) \Vert \leq c_{0} |\delta|^{1/3} \rho^{2/3}. \end{equation} \end{lemma} \begin{proof} The expression $\overline{\partial}_{A} \mul{u}^\delta$ can be expressed as a sum of terms involving derivatives of the cutoff function $\alpha$, terms involving derivatives of $\xi_j$, and terms involving the connection $A$ on the bubble region. The derivative of $\alpha$ is bounded by $ C/ \rho | \delta|^{1/2}$, while the norm of $\xi_j$ is bounded by $C\rho |\delta |^{1/2}$ on the gluing region. Hence the term involving the derivative of $\alpha$ is bounded and supported on a region of area less than $ C|\delta| \rho^2$.
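The exponent asserted in the lemma can be checked for this cutoff term by H\"older's inequality; the following display is a sketch of the arithmetic, combining the pointwise bounds just stated with the area bound.

```latex
% Sup-bound times (area)^{1/3}, using the three bounds above:
\[
\Vert d\alpha \cdot \xi_j \Vert_{0,3}
 \;\le\;
 \frac{C}{\rho\,|\delta|^{1/2}}
 \cdot C\rho\,|\delta|^{1/2}
 \cdot \bigl(C\,|\delta|\,\rho^{2}\bigr)^{1/3}
 \;=\;
 C''\,|\delta|^{1/3}\rho^{2/3},
\]
% which matches the exponent in the statement of the lemma.
```
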
In the given trivialization we have $$ \overline{\partial}_A u = \overline{\partial} u + A_X^{0,1}(u), $$ where $A \in \Omega^1(B_R,\lie{g})$ is the connection $1$-form in the local trivialization and $A_X^{0,1}(u)$ is the corresponding $(0,1)$-form with values in $T^{\operatorname{vert}} P(X) \otimes_\mathbb{R} \mathbb{C}$. We have $$ \Vert A_X^{0,1}(u) \Vert_{0,3,\delta} \leq \Vert A_X^{0,1}(u) \Vert_{0,3} $$ since here $p = 3 \ge 2$: for $p = 2$ the $W^{0,p,\delta}$ and $W^{0,p}$ norms are the same, by conformal invariance, while for $p > 2$ the $W^{0,p}$-norm is strictly greater. Hence $$ \Vert \overline{\partial}_{A} u^\delta \Vert_{0,3,\delta} \leq C \max(|\delta|^{1/3} \rho^{2/3},|\delta|). $$ The moment map term $F_{A} + (\mul{u}^\delta)^* P(\Phi) \Vol_\Sigma$ vanishes except on $|\kappa_+| \leq \rho |\delta|^{1/2}$, where it is uniformly bounded. Hence for $\delta$ small $$ \Vert F_A + (\mul{u}^\delta)^* P(\Phi) \Vol_\Sigma \Vert_{0,3} \leq C \rho^{2/3} |\delta|^{1/3} .$$ The statement of the lemma follows. \end{proof} We also wish to perform the gluing construction in families: for each nearby vortex and gluing parameter we wish to find a solution to the vortex equations on $\Sigma^\delta$.
Define \begin{multline} \label{Fdeldef} \cF_{A,\mul{u}}^{D,\delta} : \on{Def}_\Gamma(A,\mul{u}) \times \cI_{\delta} \to \cH_{\delta}, \quad (a_0,\mul{\xi}_0,a_1,\xi_1) \mapsto \\ \left( F_{A + a_0 + a_1} + \Vol_\Sigma \exp_{\mul{u}^{\delta}}(\mul{\xi}_0^\delta + \xi_1)^* P(\Phi), d_{A,\mul{u}^\delta}^* (a_0 + a_1,\mul{\xi}_0^\delta + \xi_1), \right. \\ \left. \Psi_{\mul{u}^{\delta}}(\mul{\xi}_0^\delta + \xi_1)^{-1} \overline{\partial}_{A + a_0 + a_1} \exp_{\mul{u}^{\delta}}(\mul{\xi}_0^\delta + \xi_1) \right) . \end{multline} The following is proved in the same way as Lemma \ref{approx} and left to the reader: \begin{lemma} \label{approx2} Let $(A,\mul{u})$ be as above. There exist constants $c_0,c_1 > 0$ such that if $|\delta| < c_1$, $\rho > 1/c_1 $, $\Vert (a,\xi) \Vert \leq c_1$ and $|\delta| \rho^4 < c_1 $ then \begin{equation} \Vert \cF_{A,\mul{u}}^{D,\delta}(a,\xi,0,0) \Vert \leq c_{0} |\delta|^{1/3} \rho^{2/3}. \end{equation} \end{lemma} \noindent {\em Step 2: Uniformly bounded right inverse} In preparation for the construction of a uniformly bounded right inverse of $\tilde{D}_\delta$ we define the {\em intermediate family} $(A,\mul{u}_0^\delta)$ of gauged holomorphic maps on the nodal curve $\hat{\Sigma}$ to be the family defined by the pre-gluing construction of Step 1, using the identification of $\hat{\Sigma}$ and $\hat{\Sigma}^\delta$ away from the gluing region. Thus $u_0^\delta$ is constant in a neighborhood of the node $w^\pm$.
We identify $(\mul{u}_0^\delta)^* T^{\operatorname{vert}} P(X)$ with $\mul{u}^* T^{\operatorname{vert}} P(X)$ by geodesic parallel transport. \begin{lemma} \label{linconverge} The operator $\tilde{D}_{A,\mul{u}_0^\delta}$ converges in the operator norm to $\tilde{D}_{A,\mul{u}}$ as $\delta \to 0$, $\rho \to \infty$, $\delta \rho^4 \to 0$. \end{lemma} \begin{proof} The section $\mul{u}_0^\delta$ converges in the $W^{1,3}$ norm to $u$ as $\rho^2 |\delta|^{1/2} \to 0$. It follows that the operator $\xi \mapsto \Vol_\Sigma (u^\delta_0)^* L_\xi P(\Phi)$ converges to $\xi \mapsto \Vol_\Sigma u^* L_\xi P(\Phi)$. Hence $ d_{A,u_0^\delta,\eps}$ converges to $d_{A,u,\eps}$, and similarly for $d_{A,u_0^\delta,\eps}^*$. The operator $ D_{A,\mul{u}_0^\delta}$ converges to $D_{A,\mul{u}}$, as in Lemma \ref{rightex}. \end{proof} \begin{proposition} \label{rightvor} Let $(A,\mul{u})$ be a nodal vortex, and $(A,\mul{u}^\delta)$ the approximate solution constructed above.
There exist constants $c,C > 0$ such that if $| \delta | < c$ then there exists an approximate right inverse $T_{\delta}$ of the parametrized linear operator $\tilde{D}_{\delta} := \tilde{D}_{A,\mul{u}^\delta}$, that is, a map $T_{\delta} : \cH_{\delta} \to \cI_{\delta} $ such that $$ \Vert (\tilde{D}_{\delta} T_{\delta} - I) \eta \Vert_{\delta} \leq \hh \Vert \eta \Vert_{\delta}, \quad \Vert T_{\delta} \eta \Vert_{\delta} \leq C \Vert \eta \Vert_{\delta} .$$ \end{proposition} Given such an approximate inverse, we obtain a uniformly bounded right inverse $Q_{\delta}$ to $\tilde{D}_{\delta}$ by the formula $$ Q_{\delta} = T_{\delta} (\tilde{D}_{\delta} T_{\delta})^{-1} = \sum_{k \ge 0} T_{\delta} (\tilde{D}_{\delta} T_{\delta} - I )^k .$$ \begin{proof}[Proof of Proposition \ref{rightvor}] By the regularity assumption, $\tilde{D}_{A,\mul{u}} $ is surjective when restricted to the space of vectors $(a,\mul{\xi})$ such that $\xi_0(0) = \xi_1(\infty)$. By Lemma \ref{linconverge}, $\tilde{D}^0_\delta := \tilde{D}_{A,\mul{u}_0^\delta}$ is surjective for sufficiently small $\delta$ and sufficiently large $\rho$, when restricted to the same space. The approximate right inverse is constructed by composing a cutoff operator $K_\delta$, a right inverse $Q_\delta$ for the operator on the nodal curve, and a gluing operator $R_\delta$, as follows. (In other words, the right inverse is defined by truncating the given functions, applying the right inverse for the linearized operator on the nodal curve, and then gluing back together using cutoff functions again.)
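Before constructing $T_\delta$, note that the two displayed bounds already pin down the norm of $Q_\delta$; the following is the standard Neumann-series estimate (assuming, as elsewhere, that $\hh$ denotes $1/2$).

```latex
% Since \Vert \tilde{D}_\delta T_\delta - I \Vert \le 1/2, the geometric
% series for (\tilde{D}_\delta T_\delta)^{-1} converges and
\[
\Vert Q_\delta \Vert
 \;\le\; \Vert T_\delta \Vert \sum_{k \ge 0}
          \Vert \tilde{D}_\delta T_\delta - I \Vert^{k}
 \;\le\; C \sum_{k \ge 0} 2^{-k} \;=\; 2C,
\]
% while \tilde{D}_\delta Q_\delta = (\tilde{D}_\delta T_\delta)
% (\tilde{D}_\delta T_\delta)^{-1} = I, so Q_\delta is a genuine
% right inverse with a \delta-uniform bound.
```
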
For simplicity we assume that there is a single node. Define the {\em cutoff operator} \begin{multline} K_{\delta}: \Omega^{0,1}(\hat{\Sigma}^\delta,\mul{u}^{\delta,*} T^{\operatorname{vert}} P(X))_{0,3,\delta} \to \Omega^{0,1}(\hat{\Sigma},(\mul{u}^\delta_0)^{*} T^{\operatorname{vert}}P(X))_{0,3} \\ K_\delta(\eta) = \begin{cases} 0 & \kappa_\pm( z) \in B_{| \delta|^{1/2}} (0) \\ \eta & \text{otherwise} \end{cases} .\end{multline} Then $ \Vert K_\delta(\eta) \Vert_{0,3} \leq \Vert \eta \Vert_{0,3,\delta} .$ Define the {\em gluing operator} $$ R_{\delta}: \Omega^0(\hat{\Sigma},(\mul{u}^\delta_0)^{*} TX)_{{1,3}} \to \Omega^0(\hat{\Sigma}^{\delta},u^{\delta,*} T^{\operatorname{vert}}P(X))_{{1,3},\delta} $$ as follows. Let $\hat{\Sigma}_\pm^*$ denote the complements of small balls around the nodes, $ \hat{\Sigma}_\pm^* = \Sigma_\pm - B_{\rho^2 | \delta|^{1/2}} (w^\pm)$, and let $\pi_\pm : \hat{\Sigma}_\pm^* \to \hat{\Sigma}^\delta $ denote the inclusions. These induce maps of sections with compact support in $\hat{\Sigma}_\pm^*$, $ \pi_{\pm,*}: \, \Omega^0_c( \hat{\Sigma}_\pm^*,u_\pm^* TX) \to \Omega^0(\hat{\Sigma}^\delta, u^{\delta,*} TX ) .$ Define $ R_\delta(\xi) = \xi^\delta $ where $$ \xi^\delta = \pi_{+,*} \beta_{\rho,\delta} (\xi_+ - \xi_+(w_+)) + \pi_{-,*} \beta_{\rho,\delta} (\xi_- - \xi_-(w_-)) + \xi(w) $$ for $ \kappa_\pm(z) \in B_{| \delta|^{1/2} \rho^2} (0) $, and $ \xi^\delta = \xi$ otherwise.
Here $\xi(w) = \xi_\pm(w^\pm)$ denotes the common value of $\xi$ at the node. Define $ T_\delta := (I \times R_\delta) Q_\delta (I \times K_\delta) .$ That is, if $ (a,\xi) = Q_\delta K_\delta(\phi,\psi,\eta) $ then $ T_\delta(\phi,\psi,\eta) = (a,\xi^\delta) .$ The map $T_\delta$ is the required approximate right inverse. The difference $(\tilde{D}_\delta T_\delta - I)(\phi,\psi,\eta)$ is the sum of the terms $$ d_{A,u^\delta} (a,\xi^\delta) - \phi, \quad d_{A,u_0}^* (a,\xi^\delta) - \psi, \quad D_{A,u^\delta} (a,\xi^\delta) - \eta .$$ By definition $d_{A,u_0^\delta}(a,\xi) = \phi$, so the first difference has contributions involving the difference between $u_0^\delta$ and $u^\delta$, and between $\xi^\delta$ and $\xi$. But since $d_{A,u}(a,\xi) = d_A a + u^* L_{\xi_X} P(\Phi)$, these terms are zeroth order in $u,\xi$, $$ \Vert (u^\delta)^* L_{\xi^\delta_X} P(\Phi) - (u_0^\delta)^* L_{\xi_X} P(\Phi) \Vert_{0,3} \leq C | \delta | \Vert \xi \Vert_{C^0} ;$$ that is, the difference is bounded by a constant times the area of $\pi_{-,*} (\hat{\Sigma}_-^*)$, which goes to zero as $\delta$ does. A similar discussion applies to the second difference. The third difference has terms arising from the cutoff function and the term $a_X^{0,1}(u)$ on the bubble region $|\kappa_+(z)| \leq | \delta|^{1/2}$.
We have $$ \Vert a_X^{0,1}(u) \Vert_{0,3,\delta} \leq \Vert a_X^{0,1}(u) \Vert_{0,3} \leq C |\delta| , $$ again since for $p = 2$ the $W^{0,p,\delta}$ and $W^{0,p}$ norms agree by conformal invariance, while for $p > 2$ the $W^{0,p}$-norm is strictly greater. The term involving the derivative of the cutoff function satisfies $$ \Vert d \beta_{\rho,\delta} (\xi - \xi(w)) \Vert_{0,3,\delta} \leq c \log( \rho^2 )^{-2/3} \Vert \xi - \xi(w) \Vert_{1,3} $$ by Lemma \ref{careful}, and so vanishes in the limit $\rho \to \infty$. Using the uniform bound on $Q_\delta$, the total difference is bounded by $C ( \log( \rho^{2})^{-2/3} + |\delta|) \Vert (\phi, \psi,\eta) \Vert $, and so vanishes in the limit $\delta \to 0$, $\rho \to \infty$, $|\delta| \rho^2 \to 0$. The uniform bound on $T_{\delta}$ follows from the uniform bounds on $Q_\delta$ and on the cutoff and extension operators.
\end{proof} \noindent {\em Step 3: Uniform quadratic estimate} \begin{proposition} \label{quad} There exist constants $c,C > 0$ such that if $\Vert \xi \Vert_{1,3,\delta} < c$, $ |\delta| < c$, $\rho > 1/c$ and $ |\delta| \rho^4 < c $ then the map $\cF^\delta_{A,\mul{u}}$ satisfies the quadratic bound $$ \Vert D\cF_{A,\mul{u}}^\delta(a,\xi)(a_1,\xi_1) - \tilde{D}_{A,\mul{u}^\delta}(a_1,\xi_1) \Vert_{\delta} \leq C \Vert (a,\xi) \Vert_{\delta} \Vert (a_1, \xi_1) \Vert_{\delta} .$$ \end{proposition} \begin{proof} The norm $\Vert [a,a_1] \Vert_{{0,3}}$ of the non-linear part of the curvature is bounded by Sobolev multiplication. The other term appearing in the first vortex equation satisfies $$ \Vert \exp_{\mul{u}^\delta} (\xi_0)^* L_{\xi_{1,X}} P(\Phi) - (\mul{u}^\delta)^* L_{\xi_{1,X}} P(\Phi) \Vert_{{0,3}} \leq C \Vert \xi_0 \Vert_{1,3,\delta} \Vert \xi_1 \Vert_{1,3,\delta} $$ for some constant $C$ independent of $\delta$, using that the $W^{1,3,\delta}$ norm controls the $W^{0,3}$ norm uniformly. The non-linear terms in the Cauchy-Riemann equation are estimated as in \cite[Section 3.5, Lemma 10.3.1]{ms:jh}; note that we are fixing the complex structure on $\Sigma$, which avoids the more complicated analysis we gave in the previous section.
The second vortex equation also involves a term of mixed type, $ \Psi_{u}(\xi + \xi_1)^{-1} (a_1)_X^{0,1}(\exp_u(\xi + \xi_1)) - \Psi_u(\xi)^{-1} (a_1)_X^{0,1}(\exp_u(\xi)) .$ It follows from uniform Sobolev embedding that this difference has $(0,3,\delta)$-norm bounded by $C \Vert a_1 \Vert_{{1,3}} \Vert \xi_1 \Vert_{1,3,\delta}$ for some constant $C$ independent of $\delta$. \end{proof} \noindent {\em Step 4: Implicit Function Theorem} \begin{theorem} \label{gluevor} Let $(A,\mul{u})$ be a regular stable nodal vortex of combinatorial type $\Gamma$. There exist constants $\eps_0,\eps_1 > 0$ such that for every $(a_0,\mul{\xi}_0, \delta) \in \on{Def}_\Gamma(A,\mul{u})$ with norm less than $\eps_0$, there exists a unique $(\phi,\psi,\eta)$ of norm less than $\eps_1$ such that if $(a_1,\xi_1) = Q_\delta (\phi,\psi,\eta)$ then $(A + a_0 + a_1, \exp_{\mul{u}^{\delta}}(\mul{\xi}_0^{\delta} + \xi_1))$ is a symplectic vortex in Coulomb gauge with respect to $(A,\mul{u}^\delta)$. The solution depends smoothly on $a_0,\mul{\xi}_0$, and transforms equivariantly under the action of $\G(P)_{A,\mul{u}}$ on $\on{Def}_\Gamma(A,\mul{u})$. \end{theorem} \begin{proof} The uniform error and quadratic estimates are those for $\cF_{A,\mul{u}}^\delta$ in Lemma \ref{approx} and Propositions \ref{rightvor} and \ref{quad}, in a uniformly bounded neighborhood of $0$ in $\on{Def}_\Gamma(A,\mul{u})$. The first claim is then an application of the quantitative version of the implicit function theorem (see for example \cite[Appendix A.3]{ms:jh}). Equivariance follows from uniqueness of the solution given by the implicit function theorem, since the map $\cF^{D,\delta}_{A,\mul{u}}$ is equivariant for the action of $\G(P)_{A,\mul{u}}$.
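For orientation, the quantitative implicit function theorem being invoked has roughly the following shape; this is a schematic sketch in the notation of the preceding steps, with illustrative constants, not a precise statement.

```latex
% Schematic: write \cF for the vortex map at the approximate solution,
% \tilde{D} for its linearization at zero, Q_\delta for the right inverse.
% Suppose
%   (i)   \Vert \cF(0) \Vert \le \eps                     (error estimate),
%   (ii)  \Vert Q_\delta \Vert \le C  uniformly in \delta (Step 2),
%   (iii) \Vert D\cF(x) - \tilde{D} \Vert \le C' \Vert x \Vert (Step 3).
% Then, for \eps small enough relative to C and C', the iteration
\[
  x_0 = 0, \qquad x_{\nu+1} = x_\nu - Q_\delta \, \cF(x_\nu)
\]
% converges to the unique small solution x = Q_\delta(\phi,\psi,\eta)
% of \cF(x) = 0, with \Vert x \Vert bounded by a multiple of C \eps.
```

The uniformity of the constants in $\delta$ is what makes the solution survive the degeneration $\delta \to 0$.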
\end{proof} \noindent {\em Step 5: Rigidification} As in the case of holomorphic maps in the previous section, there is a more natural way of parametrizing nearby symplectic vortices, which involves examining the intersections of the sections with submanifolds of $P(X)$, and framings induced by parallel transport. First we study the differentiability of the evaluation maps. The gluing construction of the previous step gives rise to a deformation $(A_S,\mul{u}_S)$ of $(A,\mul{u})$ with parameter space a neighborhood $S$ of $0$ in $\on{Def}(A,\mul{u})$, and so a map $ S \to \overline{M}_n(\Sigma,X), \ s \mapsto (\hat{\Sigma}_s,A_s,\mul{u}_s) .$ Consider the map \begin{equation} \label{evs} \ev: (\hat{\Sigma} - U) \times S \to P(X), \quad (z,s) \mapsto \mul{u}_s(z) . \end{equation} \begin{proposition} \label{diffv} The map $\ev$ of \eqref{evs} is $C^1$ for the family constructed by gluing in Theorem \ref{gluevor} using the exponential gluing profile. \end{proposition} \begin{proof} We denote by $\mul{u}_S^{\pre}: \hat{\Sigma}_S \to X$ the family obtained by pre-gluing only, that is, omitting the step which solves for an exact solution, and by $\ev^{\pre}$ the map $$ \ev^{\pre} : (\hat{\Sigma} - U) \times S \to P(X), \quad (z,s) \mapsto \mul{u}_s^{\pre}(z) .$$ This map is independent of the gluing parameters, and is therefore $C^1$. We write $s = (a_0,\xi_0)$ and $A_s = A + a_0 + a_1$, $\mul{u}_s = \exp_{\mul{u}^{\pre}_s}(\xi_0^\delta + \xi_1)$. The corrections $a_1,\xi_1$ depend smoothly on $a_0,\xi_0$, by the implicit function theorem, and so $\xi_1(z)$ depends smoothly on $a_0,\xi_0$. Next we take the derivative with respect to the gluing parameter.
Let $(A,\mul{u})$ be a nodal symplectic vortex, $(A,\mul{u}^\delta)$ the pre-glued pair (we omit the parameter $\rho$ controlling the diameter of the gluing region from the notation), and consider the equation $ \cF_{A,\mul{u}^\delta} (a_0 + a_1, \xi_0^\delta + \xi_1) = 0 .$ Let $\tilde{D}_\delta$ denote the derivative of $\cF_{A,\mul{u}^\delta}$. Differentiating with respect to $\delta$ gives $$ \tilde{D}_\delta \left(\frac{d a_1}{d\delta} , D\exp_{\mul{u}^\delta}\left(\xi_0^\delta; 0 , \frac{d \xi_1}{d\delta} \right)\right) = - \tilde{D}_\delta \left(0, D\exp_{\mul{u}^\delta}\left(\xi_0^\delta; \frac{d \mul{u}^\delta}{d\delta}, \frac{d \xi_0^\delta}{d\delta}\right)\right) .$$ The same arguments as in the proof of Theorem \ref{diffev} show that there exists a constant $C > 0 $ such that the right-hand side is bounded in norm by $C e^{-1/\delta}$. On the other hand, the norm of the left-hand side is bounded from below by a uniform constant times the norm of $\left(\frac{d a_1}{d\delta}, \frac{d \xi_1}{d\delta}\right)$, by the quadratic estimates. It follows that $\left(\frac{d a_1}{d\delta}, \frac{d \xi_1}{d\delta}\right)$ is also bounded in norm by $C e^{-1/\delta}$. Hence $ \lim_{\delta \to 0} \partial_\delta \ev = 0 $. It follows that $D\ev$ has a continuous limit as $\delta \to 0$. \end{proof} Choose a path $\gamma: [0,1] \to \Sigma$ in the principal component and an element $\phi_0 \in P_{\gamma(0)}$. Let $\tau_\gamma(A): P_{\gamma(0)} \to P_{\gamma(1)}$ denote parallel transport. By an {\em $m$-framed} family of marked curves, we mean a family of curves together with an $m$-tuple of points in $P$.
Given a family $(\hat{\Sigma}_S,A_S,\mul{u}_S)$ of gauged holomorphic maps over a parameter space $S$, a collection of codimension two submanifolds $\mul{Y} = (Y_1,\ldots,Y_k)$ in $P(X)$, and a collection of paths $\mul{\gamma} = (\gamma_1,\ldots, \gamma_l)$ from the same initial point $y_0$ to $y_j$, $j = 1,\ldots, l$, define a family of marked, framed curves $\hat{\Sigma}_S^{\mul{Y},\mul{u},\mul{\gamma},A} \to S$ by requiring that the additional marked points $z_{n+i}$ map to $Y_i$, with the framings given by parallel transport along the paths $\gamma_i$. \begin{definition} \label{compat2} The data $\mul{Y},\mul{\gamma},A,\mul{u}$ are {\em compatible} if \begin{enumerate} \item each $Y_j$ intersects $u_j$ transversally in a single point $z_j \in \hat{\Sigma}$; \item if $(a,\xi) \in \ker \tilde{D}_{A,\mul{u}}$ satisfies $\xi(z_{n+j}) \in T_{\mul{u}(z_{n+j})} Y_j $ for $j = 1,\ldots, k$ and $D_A \tau_{\gamma_i}(a) = 0 $ for $i = 1,\ldots, l$, then $(a,\xi) = 0$; \item the curve $\hat{\Sigma}$ marked with the additional points $z_{n+1},\ldots, z_{n+k}$ is stable; \item if some automorphism of $(\hat{\Sigma},\mul{u})$ maps $z_i$ to $z_j$ then $Y_i$ is equal to $Y_j$. \end{enumerate} \end{definition} The second condition says that there are no infinitesimal deformations which do not change the positions of the extra markings or framings. \begin{proposition} \label{versalrigid2} Let $(A,\mul{u})$ be a parametrized regular stable nodal vortex, and $(A_S, \mul{u}_S) \to S$ the stratified-smooth universal deformation constructed by the gluing construction. There exists a collection $(\mul{Y},\mul{\gamma})$ compatible with $(A,\mul{u})$.
Furthermore, if $(\mul{Y},\mul{\gamma})$ is compatible with $(A,\mul{u})$, then the rigidified family $\hat{\Sigma}_S^{\mul{Y},\mul{u},\mul{\gamma},A} \to S$ of \eqref{rigidify} is a stratified-smooth deformation of the marked-curve-with-framings $\hat{\Sigma}^{\mul{Y},\mul{u},\mul{\gamma},A}$ which defines an immersion of $S$ into the parameter space for the universal deformation of the central fiber. \end{proposition} \begin{proof} First we show the existence of a compatible collection. Suppose that the second condition is not satisfied for some $(a,\xi)$. Suppose first that $\xi \neq 0$. Let $z_{n+1}$ be a point with $\xi(z_{n+1}) \neq 0$, and choose a codimension two submanifold $Y_{n+1}$ transverse to $u$ near $u(z_{n+1})$, and such that $TY_{n+1}$ does not contain $\xi(z_{n+1})$. Adding $Y_{n+1}$ to the list of submanifolds decreases the dimension of the space of $(a,\xi)$ satisfying the second condition by at least one. Repeating this process, we may assume that the only elements satisfying the condition have $\xi =0 $. Suppose then that $\xi$ is zero, so that $a$ is necessarily non-zero. Choose an additional marked point $y_{l+1}$ and a path $\gamma_{l+1}$ from the base point $y_0$ to $y_{l+1}$ such that the derivative of the parallel transport over $\gamma_{l+1}$ with respect to $a$ is non-zero. Appending $\gamma_{l+1}$ to the list of paths decreases the dimension of the space of $(a,\xi)$ satisfying the condition by at least one. Hence the process stops after finitely many steps, after which the kernel is trivial. The proof of the second claim is similar to that of Proposition \ref{versalrigid} and will be omitted. \end{proof} \noindent {\em Step 6: Surjectivity} We show that any nearby vortex appears in the family constructed above. First, we show: \begin{proposition} \label{vorsurj} Let $(A,\mul{u})$ be a regular strongly stable symplectic vortex.
There exists a constant $\eps > 0$ such that if $(A_1,u_1) = (A + a, \exp_{\mul{u}^\delta}(\xi))$ with $\Vert a \Vert_{1,3} + \Vert \xi \Vert_{1,3,\delta} \leq \eps$, then after a gauge transformation we have $(A_1,u_1) = (A+ a_0 + a_1,\exp_{\mul{u}^\delta}(\xi_0^\delta + \xi_1))$ for some $(a_0,\xi_0) \in \ker(\tilde{D}_{A,\mul{u}})$ and $(a_1,\xi_1)$ in the image of $Q_\delta$. \end{proposition} \begin{proof} We claim that for some constant $C > 0$, we have $(a,\xi) = (a_0,\xi_0^{\delta}) + (a_1,\xi_1)$ for some $(a_0,\xi_0) \in \ker \tilde{D}_{A,\mul{u}}$ and $(a_1,\xi_1) \in \Im \tilde{D}_{A,\mul{u}^{\delta}}^*$ with norm $ \Vert (a_1,\xi_1) \Vert \leq C \Vert (a,\xi) \Vert.$ Given the claim, the proposition follows from the uniqueness statement of the implicit function theorem. For any $c > 0$ there exists $\delta_0$ such that for $\delta < \delta_0$, $$ \Vert \tilde{D}_{A,\mul{u}^{\delta}} ( a_0^{\delta},\mul{\xi}_0^{\delta}) \Vert \leq c \Vert (a_0^{\delta},\mul{\xi}_0^{\delta}) \Vert $$ by estimates similar to those of Lemma \ref{approx}. Thus the image of $\ker \tilde{D}_{A,\mul{u}}$ is transverse to $\Im \tilde{D}_{A,\mul{u}^\delta}^*$ for $\delta$ sufficiently small, since it meets $\Im \tilde{D}_{A,\mul{u}^\delta}^*$ trivially and projects isomorphically onto $\ker \tilde{D}_{A,\mul{u}^\delta}$, by gluing for indices, as in \cite[Theorem 2.4.1]{orient}. The claim then follows from the inverse function theorem.
\end{proof} Given a strongly stable symplectic vortex $(A,\mul{u})$ with stable domain $\hat{\Sigma}$, let $(A_S,\mul{u}_S)$ be the family given by the gluing construction above. Otherwise, if $\hat{\Sigma}$ is not stable, let $\mul{Y} = (Y_1,\ldots, Y_l)$ be a collection of codimension two submanifolds of $P(X)$, and consider the family $(A,\mul{u}^{\mul{Y}})$ with additional marked points given by requiring that the additional marked points $z_{n+i}$ map to $Y_i$. Let $(A_S,\mul{u}_S)$ denote the family obtained by applying the gluing construction to $(A_S,\mul{u}_S^{\mul{Y}})$, and then forgetting the additional marked points. \begin{lemma} Suppose that $(A_i,u_i)$ Gromov converges to $(A,u)$. After a sequence of gauge transformations, for any $\eps$ there exists $i_0$ such that if $i > i_0$ then there exist $\delta$ and $(a_i,\xi_i)$ satisfying $(A_i,u_i) = (A + a_i, \exp_{\mul{u}^\delta}(\xi_i))$ with $\Vert a_i \Vert_{1,3} + \Vert \xi_i \Vert_{1,3,\delta} \leq \eps$. \end{lemma} \begin{proof} By definition of Gromov convergence, after gauge transformation $A_i$ $C^0$-converges to $A$, and converges uniformly in all derivatives on the complement of the bubbling set \cite{ott:thesis}. The exponential decay estimate \cite[Lemma A.2.2]{ott:thesis} shows that $u_i$ converges to $u$ on the complement of the nodes, uniformly in all derivatives on compact sets, with derivative on the gluing region uniformly bounded in the $\delta$-dependent metric. It follows that $u_i = \exp_{u^\delta}(\xi_i)$ for some $\delta$ and $\xi_i \in \Omega^0(\Sigma^\delta, (u^\delta)^* T^{\operatorname{vert}} P(X))$ with $\Vert \xi_i \Vert_{1,3,\delta} < \eps$.
To obtain the improved convergence for the connection, note that $ F_{A_i} + (u_i)^* P(\Phi) = 0 $ and the corresponding equation for the limit $(A,u)$ imply that $$ F_{A_i} - F_A = d_A (A_i - A) + (A_i - A) \wedge (A_i - A) = u^* P(\Phi) - (u_i)^* P(\Phi) .$$ Since $u_i^* P(\Phi)$ is bounded and converges to $u^* P(\Phi)$ on the complement of the bubbling set, and $A_i$ converges to $A$ in $C^0$, hence in $W^{0,3}$, the right-hand side converges to $0$ in $W^{0,3}$ as $i \to \infty .$ After gauge transformation we may assume that $ d_A^* (A - A_i) = 0$. Then the elliptic estimate for the operator $d_A \oplus d_A^*$ implies that $A - A_i$ converges to zero in $W^{1,3}$. \end{proof} \begin{corollary} \label{versalvor} $(A_S,\mul{u}_S)$ is a stratified-smooth versal deformation of $(\hat{\Sigma},A,\mul{u})$. \end{corollary} \begin{proof} Proposition \ref{vorsurj} implies that any family $(A_{S^1}^1,\mul{u}_{S^1}^1)$ is obtained by pull-back from $(A_S,\mul{u}_S)$, in case $\hat{\Sigma}$ is stable, or obtained from the family with the additional marked points mapping to submanifolds, in general. \end{proof} \noindent {\em Step 7: Injectivity} We show that any nearby vortex appears only once in our family, up to the action of $ \on{Aut} (A,u)$; this is part of the following: \begin{theorem} Any family $(A_S,\mul{u}_S)$ constructed by gluing using the exponential gluing profile is a strongly universal stratified-smooth deformation of $(A,\mul{u})$.
\end{theorem} \begin{proof} Let $\overline{Z}_n(P,X)$ denote the moduli space of marked symplectic vortices up to equivalences that involve only the identity gauge transformation, so that $ \overline{M}_n(P,X) = \overline{Z}_n(P,X) / \mathcal{G}(P) .$ Let $(A,\underline{u})$ be a stable marked vortex, and $W_{A,\underline{u}}$ a slice for the gauge group action on $\overline{Z}(P,X)$, so that $$W_{A,\underline{u}}/\mathcal{G}(P)_{A,\underline{u}} \to \overline{M}_n(P,X) $$ is a homeomorphism onto its image. Let $\operatorname{Aut}_0(A,\underline{u})$ denote the subgroup of $\operatorname{Aut}(A,u)$ acting trivially on $P$, so that $\mathcal{G}(P)_{A,\underline{u}} = \operatorname{Aut}(A,u)/\operatorname{Aut}_0(A,\underline{u})$ is the stabilizer of $(A,\underline{u})$ under the gauge action. Let $(A_S,\underline{u}_S)$ denote a universal deformation of $(A,\underline{u})$ constructed by gluing using the exponential gluing profile. We claim that the map \begin{equation} \label{glueS3} S/\operatorname{Aut}_0(A,\underline{u}) \to W_{A,\underline{u}}, \quad [s] \mapsto [A_s,\underline{u}_s] \end{equation} is an injection. Indeed, rigidification produces an injection \begin{equation} \label{glueS4} S/\operatorname{Aut}_0(A,\underline{u}) \to \overline{M}_{n+k,l}(\Sigma)/\operatorname{Aut}_0(A,\underline{u}), \quad [s] \mapsto [\Sigma^{A_s,\underline{u}_s,\underline{Y},\underline{\gamma}}] \end{equation} where $\operatorname{Aut}_0(A,\underline{u})$ acts by re-ordering the marked points. Since this map factors through \eqref{glueS3}, the claim follows. If $(A_{S^1},\underline{u}_{S^1}^1)$ is a family of symplectic vortices giving a deformation of any fiber of $(A_S,\underline{u}_S)$, then Corollary \ref{versalvor} together with injectivity shows that this family is obtained by pull-back by some map $S^1 \to S$. Hence $(A_S,\underline{u}_S)$ is a stratified-smooth strongly universal deformation of $(A,\underline{u})$.
\end{proof} \begin{theorem} \label{regvortex} Let $X$ be a Hamiltonian $G$-manifold equipped with a compatible invariant almost complex structure $J \in \mathcal{J}(X)^G$. The maps \begin{equation} S \to \overline{M}_n(\Sigma,X), \quad s \mapsto [A_s,\underline{u}_s] \end{equation} associated to the universal deformations constructed above equip the locus $\overline{M}^{\mathrm{reg}}_n(\Sigma,X)$ of regular stable symplectic vortices with the structure of a stratified-smooth orbifold. If the local coordinates near the nodes are chosen compatibly and the gluing profile is the exponential gluing profile, then the deformations provide $\overline{M}^{\mathrm{reg}}_n(\Sigma,X)$ with the structure of a $C^1$-orbifold. \end{theorem} \begin{proof} It suffices to show that the charts given by two sets $\underline{Y}_j,\underline{\gamma}_j$ are compatible. Define $\underline{Y} = \underline{Y}_1 \cup \underline{Y}_2$ and let $m = m_1 + m_2$ be the total number of extra points. Similarly, let $\underline{\gamma}$ be the union of $\underline{\gamma}_1$ and $\underline{\gamma}_2$, of total number $l = l_1 + l_2$. The family $ \hat{\Sigma}^{\underline{Y},\underline{u},\underline{\gamma},A}_S$ admits a proper \'etale forgetful map $ \hat{\Sigma}_S^{\underline{Y},\underline{u},\underline{\gamma},A} \to \hat{\Sigma}_S^{\underline{Y}_j,\underline{u},\underline{\gamma}_{j},A}, \quad j = 1,2,$ whose fiber consists of the re-orderings of the points for $\underline{Y}$ induced by automorphisms in $\operatorname{Aut}(A,\underline{u})$ that fix the ordering for $\underline{Y}_j$. It follows that the corresponding charts are $C^1$-compatible. \end{proof} \begin{remark} As discussed in Remark \ref{smooth}, the Theorem implies that if $\overline{M}^{\mathrm{reg}}_n(\Sigma,X)$ is compact then it admits a (non-canonical) smooth structure.
\end{remark} Let $\overline{M}_n(\Sigma)$ denote the moduli space of stable maps to $\Sigma$ with homology class $[\Sigma]$, $n$ markings and genus equal to that of $\Sigma$; in other words, {\em parametrized} stable curves with principal component isomorphic to $\Sigma$. Forgetting the pair $(A,u)$ gives a forgetful morphism $ \overline{M}^{\mathrm{reg}}_n(\Sigma,X) \to \overline{M}_n(\Sigma) .$ Using the differentiable structure defined above, the evaluation maps are differentiable, but unfortunately the forgetful morphisms are not, unless one uses a different gluing profile for the moduli space of vortices with one less marking. More precisely, the forgetful morphism $\overline{M}^{\mathrm{reg}}_n(\Sigma,X) \to \overline{M}_n(\Sigma)$ is continuous, is $C^1$ near any pair $(A,\underline{u})$ whose domain is stable as an element of $\overline{M}_n(\Sigma)$, and is a submersion near the boundary of $\overline{M}_n(\Sigma)$. For the standard smooth structure on $\overline{M}_n(\Sigma)$, the forgetful morphism $\overline{M}^{\mathrm{reg}}_n(\Sigma,X) \to \overline{M}_n(\Sigma)$ is smooth. The gluing construction has various parametrized versions. For example, in \cite{cross} we consider a moduli space of {\em polystable polarized vortices}, which consist of a vortex together with a lift of the connection to the Chern-Simons line bundle. In each of these cases one applies the implicit function theorem, using the linearized operator for the parametrized problem, to prove that any {\em parametrized regular} polystable vortex has a strongly universal deformation in the parametrized sense. In particular, any regular polystable polarized vortex has a strongly universal deformation.
\begin{thebibliography}{10} \bibitem{ab:ex} Mohammed Abouzaid. \newblock Framed bordism and {L}agrangian embeddings of exotic spheres, 2008. \newblock arXiv.org:0812.4781. \bibitem{ad:so} Robert~A. Adams. \newblock {\em Sobolev spaces}. \newblock Academic Press [A subsidiary of Harcourt Brace Jovanovich, Publishers], New York-London, 1975. \newblock Pure and Applied Mathematics, Vol. 65. \bibitem{au:non} Thierry Aubin. \newblock {\em Some nonlinear problems in {R}iemannian geometry}. \newblock Springer Monographs in Mathematics. Springer-Verlag, Berlin, 1998. \bibitem{ci:symvortex} Kai Cieliebak, A.~Rita Gaio, Ignasi Mundet~i Riera, and Dietmar~A. Salamon. \newblock The symplectic vortex equations and invariants of {H}amiltonian group actions. \newblock {\em J. Symplectic Geom.}, 1(3):543--645, 2002. \bibitem{ciel:vor} Kai Cieliebak, Ana~Rita Gaio, and Dietmar~A. Salamon. \newblock {$J$}-holomorphic curves, moment maps, and invariants of {H}amiltonian group actions. \newblock {\em Internat. Math. Res. Notices}, (16):831--882, 2000. \bibitem{dm:irr} P.~Deligne and D.~Mumford. \newblock The irreducibility of the space of curves of given genus. \newblock {\em Inst. Hautes \'Etudes Sci. Publ. Math.}, (36):75--109, 1969. \bibitem{dou:ana} A.~Douady. \newblock Le probl\`eme des modules locaux pour les espaces {$\mathbf{C}$}-analytiques compacts. \newblock {\em Ann. Sci. \'Ecole Norm. Sup. (4)}, 7:569--602 (1975), 1974. \bibitem{earle:fibre} Clifford~J. Earle and James Eells. \newblock A fibre bundle description of {T}eichm\"uller theory. \newblock {\em J. Differential Geometry}, 3:19--43, 1969.
\bibitem{fooo} Kenji Fukaya, Yong-Geun Oh, Hiroshi Ohta, and Kaoru Ono. \newblock {\em Lagrangian intersection {F}loer theory: anomaly and obstruction}, volume~46 of {\em AMS/IP Studies in Advanced Mathematics}. \newblock American Mathematical Society, Providence, RI, 2009. \bibitem{fu:st} W.~Fulton and R.~Pandharipande. \newblock Notes on stable maps and quantum cohomology. \newblock In {\em Algebraic geometry---Santa Cruz 1995}, pages 45--96. Amer. Math. Soc., Providence, RI, 1997. \bibitem{cross} E.~Gonzalez and C.~Woodward. \newblock Area-dependence in gauged {G}romov-{W}itten theory. \newblock arXiv:0811.3358. \bibitem{hm:mc} Joe Harris and Ian Morrison. \newblock {\em Moduli of curves}, volume 187 of {\em Graduate Texts in Mathematics}. \newblock Springer-Verlag, New York, 1998. \bibitem{ho:sc} H.~Hofer, K.~Wysocki, and E.~Zehnder. \newblock {$sc$}-smoothness, retractions and new models for smooth spaces. \newblock {\em Discrete and Continuous Dyn. Systems}, 28:665--788, 2010. \bibitem{io:rel} Eleny-Nicoleta Ionel and Thomas~H. Parker. \newblock Relative {G}romov-{W}itten invariants. \newblock {\em Ann. of Math. (2)}, 157(1):45--96, 2003. \bibitem{ms:jh} Dusa McDuff and Dietmar Salamon. \newblock {\em {$J$}-holomorphic curves and symplectic topology}, volume~52 of {\em American Mathematical Society Colloquium Publications}. \newblock American Mathematical Society, Providence, RI, 2004. \bibitem{mun:ham} Ignasi Mundet~i Riera. \newblock Hamiltonian {G}romov-{W}itten invariants. \newblock {\em Topology}, 42(3):525--553, 2003. \bibitem{ott:thesis} A.~Ott. \newblock {\em Non-local vortex equations and gauged Gromov-Witten invariants}. \newblock PhD thesis, ETH Zurich, 2010. \bibitem{pal:act} Richard~S. Palais. \newblock {$C^{1}$} actions of compact {L}ie groups on compact manifolds are {$C^{1}$}-equivalent to {$C^{\infty}$} actions. \newblock {\em Amer.
J. Math.}, 92:748--760, 1970. \bibitem{rrs:mod} Joel~W. Robbin, Yongbin Ruan, and Dietmar~A. Salamon. \newblock The moduli space of regular stable maps. \newblock {\em Math. Z.}, 259(3):525--574, 2008. \bibitem{rob:const} Joel~W. Robbin and Dietmar~A. Salamon. \newblock A construction of the {D}eligne-{M}umford orbifold. \newblock {\em J. Eur. Math. Soc. (JEMS)}, 8(4):611--699, 2006. \bibitem{rt:hi} Yongbin Ruan and Gang Tian. \newblock Higher genus symplectic invariants and sigma models coupled with gravity. \newblock {\em Invent. Math.}, 130(3):455--516, 1997. \bibitem{sie:gen} Bernd Siebert. \newblock {G}romov-{W}itten invariants of general symplectic manifolds. \newblock arXiv:dg-ga/9608005. \bibitem{orient} K.~Wehrheim and C.~T. Woodward. \newblock Orientations for pseudoholomorphic quilts. \newblock In preparation. \end{thebibliography} \end{document}
\begin{document} \author{Michel J.\,G. Weber} \address{IRMA, UMR 7501, 7 rue Ren\'e Descartes, 67084 Strasbourg Cedex, France.} \email{[email protected]} \keywords{Arithmetical function, Erd\H{o}s-Zaremba's function, Davenport's function. \vskip 1 pt \emph{2010 Mathematics Subject Classification}: Primary 11D57, Secondary 11A05, 11A25.} \begin{abstract} Erd\H{o}s and Zaremba showed that $ \limsup_{n\to \infty} \frac{\Phi(n)}{(\log\log n)^2}=e^\gamma$, $\gamma$ being Euler's constant, where $\Phi(n)=\sum_{d|n} \frac{\log d}{d}$. We extend this result to the function $\Psi(n)= \sum_{d|n} \frac{(\log d )(\log\log d)}{d}$ and some other functions. We show that $ \limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log\log n)^2(\log\log\log n)}\,=\, e^\gamma$. The proof requires a new approach. As an application, we prove that for any $\eta>1$ and any finite sequence of reals $\{c_k, k\in K\}$, $\sum_{k,\ell\in K} c_kc_\ell \, \frac{\gcd(k,\ell)^{2}}{k\ell} \le C(\eta) \sum_{\nu\in K} c_\nu^2(\log\log\log \nu)^\eta \Psi(\nu) $, where $C(\eta)$ depends on $\eta$ only. This improves a recent result obtained by the author. \end{abstract} \maketitle \section{\bf Introduction.} \label{s1} \rm Erd\H{o}s and Zaremba showed in \cite{EZ} the following result concerning the arithmetical function $\Phi(n)=\sum_{d|n} \frac{\log d}{d}$: \begin{equation}\label{EZ1}\limsup_{n\to \infty} \frac{\Phi(n)}{(\log\log n)^2}=e^\gamma, \end{equation} where $\gamma$ is Euler's constant. This function appears in the study of good lattice points in numerical integration; see Zaremba \cite{Z}.
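As a quick numerical illustration of \eqref{EZ1} (a Python sketch of ours, not part of the paper), one can evaluate $\Phi(n)$ directly from its definition:

```python
import math

def divisors(n):
    # All divisors of n by trial division (adequate for small n).
    return [d for d in range(1, n + 1) if n % d == 0]

def Phi(n):
    # Erdos-Zaremba's function Phi(n) = sum over d | n of (log d)/d.
    return sum(math.log(d) / d for d in divisors(n))

# For a prime p, only the divisor p contributes.
assert abs(Phi(7) - math.log(7) / 7) < 1e-12

# Products of many small primes make Phi(n) large; the limsup in (EZ1)
# is e^gamma, approximately 1.781, and is approached very slowly.
n = 2 * 3 * 5 * 7 * 11 * 13  # 30030
print(Phi(n) / math.log(math.log(n)) ** 2)
```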
The proof is based on the identity \begin{eqnarray}\label{formule}\Phi(n) =\sum_{i=1}^r \sum_{\nu_i=1}^{\alpha_i}\frac{\log p_i^{\nu_i}}{p_i^{\nu_i}}\sum_{\delta|n p_i^{-\alpha_i}}\frac{1}{\delta} , \qquad\qquad ( n= p_1^{\alpha_1}\ldots p_r^{\alpha_r}), \end{eqnarray} which follows from \begin{eqnarray}\label{base} \sum_{d|n} \frac{\log d}{d}&=& \sum_{\mu_1=0}^{\alpha_1}\ldots \sum_{\mu_r=0}^{\alpha_r} \frac{1}{ p_1^{\mu_1}\ldots p_{r}^{\mu_{r}}}\Big(\sum_{i=1}^{r}\mu_i\log p_i\Big) . \end{eqnarray} Let $h(n)$ be non-decreasing on the integers, with $h(n)= o(\log n)$, and consider the slightly larger function \begin{eqnarray}\label{phi}\Phi_h(n)=\sum_{d|n} \frac{(\log d )\,h(d)}{d}. \end{eqnarray} In this case a formula similar to \eqref{base} no longer holds, the ``log-linearity'' being lost due to the extra factor $h(n)$. The study of this function requires a new approach. We study in this work the case $h(n)= \log\log n$, that is, the function \begin{eqnarray}\label{psi}\Psi (n)=\sum_{d|n} \frac{(\log d )(\log\log d)}{d}. \end{eqnarray} We extend Erd\H{o}s-Zaremba's result to this function, as well as to the functions \begin{eqnarray*} \Phi_1(n)&=&\sum_{ p_1^{\mu_1}\ldots p^{\mu_{r}}_{r}|n}\frac{\sum_{i=1}^{r} \mu_i(\log p_i)(\log\log p_i)}{p_1^{\mu_1}\ldots p^{\mu_{r}}_{r}} \\ \Phi_2(n)&=&\sum_{d|n}\frac{(\log d) \log \Omega(d) }{d}, \end{eqnarray*} where $\Omega(d)$ denotes as usual the total number of prime factors of $d$ counted with multiplicity. These functions are linked to $\Psi$. \vskip 2 pt Throughout, $\log\log x$ (resp. $\log \log\log x$) equals $1$ if $0\le x \le e^{e}$ (resp. $0\le x \le e^{e^e}$), and equals $\log\log x$ (resp. $\log \log\log x$) in the usual sense if $x> e^{e}$ (resp. $x> e^{e^e}$).
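Identity \eqref{base} merely reorganizes the divisor sum along the prime factorization $n= p_1^{\alpha_1}\cdots p_r^{\alpha_r}$. A small Python check (ours, not part of the paper) confirms that the two sides agree:

```python
import math
from itertools import product

def factorize(n):
    # Prime factorization of n as a list of (p, alpha) pairs.
    fac, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            fac.append((p, a))
        p += 1
    if n > 1:
        fac.append((n, 1))
    return fac

def Phi_divisor_sum(n):
    # Left side of (base): sum over divisors d of (log d)/d.
    return sum(math.log(d) / d for d in range(1, n + 1) if n % d == 0)

def Phi_exponent_sum(n):
    # Right side of (base): sum over exponent vectors (mu_1, ..., mu_r).
    fac = factorize(n)
    total = 0.0
    for mus in product(*(range(a + 1) for _, a in fac)):
        d = math.prod(p ** m for (p, _), m in zip(fac, mus))
        total += sum(m * math.log(p) for (p, _), m in zip(fac, mus)) / d
    return total

assert abs(Phi_divisor_sum(360) - Phi_exponent_sum(360)) < 1e-9
```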
\vskip 3pt One verifies using standard arguments that \begin{equation} \label{phipsi} \limsup_{n\to \infty}\, \frac{\Phi_1(n)}{(\log \log n)^2(\log\log\log n)}\,\ge \,e^\gamma, \qquad \limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log \log n)^2(\log\log\log n)}\,\ge \,e^\gamma,\end{equation} and in fact that \begin{equation} \label{Phi1est} \limsup_{n\to \infty}\, \frac{\Phi_1(n)}{(\log \log n)^2(\log\log\log n)}\,= \,e^\gamma . \end{equation} \vskip 2 pt By the observation made after \eqref{base}, the corresponding extension of this result to $\Psi(n)$ is technically more delicate. It follows from \eqref{EZ1} that \begin{equation}\label{trois} \limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log \log n)^3}\, \le \, e^\gamma. \end{equation} The question thus arises whether the exponent of $\log\log n$ in \eqref{trois} can be replaced by $2+\varepsilon$, with $\varepsilon>0$ small. \vskip 2 pt We answer this question affirmatively by establishing the following precise result, which is the main result of this paper. \begin{theorem}\label{t1} \begin{equation*} \limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log \log n)^2(\log\log\log n)}\, =\, e^\gamma.\end{equation*} \end{theorem} An application of this result is given in Section \ref{s6}. The upper bound is obtained, via the inequality \begin{eqnarray}\label{convexdec} \Psi(n)\ \le \ \Phi_1(n) + \Phi_2(n),\end{eqnarray} as a combination of an estimate of $\Phi_1(n)$ and the following estimate of $\Phi_2(n)$. Recall that Davenport's function $w(n)$ is defined by $w(n)=\sum_{p|n}\frac{\log p}{p}$. According to Theorem 4 in \cite{D} we have \begin{equation}\label{wdavenport}\limsup_{n\to \infty}\frac{w(n)}{\log\log n}=1. \end{equation} Let also $\omega(n)$ be the number of prime divisors of $n$ counted without multiplicity.
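Inequality \eqref{convexdec} can be spot-checked numerically. The following Python sketch (ours, not part of the proof) implements $\Psi$, $\Phi_1$ and $\Phi_2$ with the paper's convention $\log\log x = 1$ for $x\le e^{e}$:

```python
import math
from itertools import product

E_E = math.e ** math.e

def loglog(x):
    # Paper's convention: log log x := 1 for 0 <= x <= e^e.
    return math.log(math.log(x)) if x > E_E else 1.0

def factorize(n):
    fac, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            fac.append((p, a))
        p += 1
    if n > 1:
        fac.append((n, 1))
    return fac

def Psi(n):
    return sum(math.log(d) * loglog(d) / d for d in range(2, n + 1) if n % d == 0)

def Phi1(n):
    fac = factorize(n)
    total = 0.0
    for mus in product(*(range(a + 1) for _, a in fac)):
        d = math.prod(p ** m for (p, _), m in zip(fac, mus))
        total += sum(m * math.log(p) * loglog(p) for (p, _), m in zip(fac, mus)) / d
    return total

def Phi2(n):
    # Phi_2(n) = sum over divisors d of (log d) log Omega(d) / d.
    fac = factorize(n)
    total = 0.0
    for mus in product(*(range(a + 1) for _, a in fac)):
        Om = sum(mus)
        if Om == 0:
            continue  # d = 1 contributes nothing (convention 0 log 0 = 0)
        d = math.prod(p ** m for (p, _), m in zip(fac, mus))
        total += math.log(d) * math.log(Om) / d
    return total

for n in (12, 360, 1024, 2310, 30240):
    assert Psi(n) <= Phi1(n) + Phi2(n) + 1e-9
```

For $n=12$, every divisor and every prime factor lies below $e^e$, so $\Psi(12)=\Phi_1(12)$ and the inequality reduces to $\Phi_2(12)\ge 0$.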
\begin{theorem}\label{t2} For all odd numbers $n$ we have \begin{eqnarray*} \Phi_2(n)\, \le \, C\, (\log\log\log \omega(n))(\log \omega(n))\,w(n) ,\end{eqnarray*} where $C$ is an absolute constant.\end{theorem} Here and elsewhere $C$ (resp. $C(\eta)$) denotes some positive absolute constant (resp. some positive constant depending only on a parameter $\eta$). \vskip 2 pt The approach used for proving Theorem \ref{t2} can be adapted with no difficulty to other arithmetical functions of similar type. \vskip 5 pt Before continuing, we mention some other existing extensions, due to Sitaramaiah and Subbarao \cite{SS2,SS1}. For instance, the case when $\log d$ is replaced by a non-negative additive function $g$ (i.e.\ $S(n)=\sum_{d|n} \frac{g(d)}{d}$) is studied in \cite{SS1}. In our case, note that $g(d)= (\log d)(\log\log d)$ (see \eqref{phi}), which is obviously not additive. It is proved that if $T(d)$ is one of the three arithmetical functions $\frac{\omega(d)}{d}$, $\frac{\Omega(d)}{d}$, $\frac{\log \tau(d)}{d}$, then \begin{equation}\label{sisu1}\limsup_{n\to \infty} \frac{\sum_{d|n}T(d)}{(\log\log n)(\log\log \log n)}=c_T\,e^\gamma, \end{equation} where $c_T= 1$ in the first two cases, and $c_T= \log 2$ in the third case. See also Remark 2.3. A key point of their proof is the observation that $S(n)/\sigma_{-1}(n)$ is additive. Further, it is proved in \cite{SS2} that for each positive integer $k$, \begin{equation}\label{sisu2}\limsup_{n\to \infty} \frac{S_k(n)}{(\log\log n)^{k+1}} =c_k\,e^\gamma,\qquad \qquad \Big(S_k(n)=\sum_{d|n}\frac{(\log d)^k}{d}\Big) \end{equation} where $c_k$ is a positive explicit constant. The proof is elegant and based on the derivation formula $S_k(n)=(-1)^kf^{(k)}(1)$, where $f(u)= \sigma_{-u}(n)$ and $f^{(k)}$ is the $k$-th derivative of $f$; this is specific to these sums. \vskip 4 pt The paper is organized as follows.
Sections \ref{s4} and \ref{s5} form the main part of the paper and consist of the proof of Theorem \ref{t2}, which is long and technical and involves the building of a binary tree (subsection \ref{subsection4.2.1}). The proof of Theorem \ref{t1} is given in Section \ref{s5}. Section \ref{s2} contains complementary results and the proofs of \eqref{phipsi} and \eqref{Phi1est}. Section \ref{s6} concerns the aforementioned application of Theorem \ref{t1}. Additional remarks and results are given in Section \ref{s7}. \vskip 2 pt \section{\bf Proof of Theorem \ref{t2}.} \label{s4} We use a chaining argument. We make throughout the convention $0\log 0=0$. \vskip 6 pt Let $n= p_1^{\alpha_1}\ldots p_r^{\alpha_r}$ be an odd number. We will use repeatedly the fact that \begin{eqnarray}\label{min}\min_{1\le i\le r} p_i\ge 3. \end{eqnarray} \vskip 10 pt We note that \begin{eqnarray}\label{basic} \Phi_2(n)&=&\sum_{\mu_1=0}^{\alpha_1}\ldots \sum_{\mu_r=0}^{\alpha_r} \frac{1}{ p_1^{\mu_1}\ldots p_{r}^{\mu_{r}}}\sum_{i=1}^{r} \mu_i\big(\log p_i\big)\log \Big(\sum_{i=1}^{r} \mu_i \Big) \\ &=& \sum_{i=1}^r\underbrace{\sum_{\mu_1=0}^{\alpha_1}\ldots \sum_{\mu_r=0}^{\alpha_r}}_{\substack{ \hbox{\small the sum relative}\\ \hbox{\small to $ \mu_i$ is excluded} } } \underbrace{\frac{1}{ p_1^{\mu_1}\ldots p_{r}^{\mu_{r}}}}_{\substack{\hbox{\small $ p_i^{\mu_i}$}\\ \hbox{\small is excluded}}}\ \Big(\sum_{\mu_i=0}^{\alpha_i} \frac{\mu_i\big(\log p_i\big)}{p_i^{\mu_i}}\Big)\log \Big(\sum_{i=1}^{r} \mu_i \Big) . \end{eqnarray} As there is no order relation on the sequence $p_1, \ldots, p_r$, it suffices to study the sum \begin{eqnarray} \label{Phi_2(r,n)} \Phi_2(r,n)&:=& \sum_{\mu_1=0}^{\alpha_1}\ldots \sum_{\mu_{r-1}=0}^{\alpha_{r-1}} \frac{1}{ p_1^{\mu_1}\ldots p_{r-1}^{\mu_{r-1}}}\sum_{\mu_r=0}^{\alpha_r}\frac{\mu_r \log p_r}{p_r^{\mu_r}}\log \Big[\sum_{i=1}^{r-1} \mu_i + \mu_r\Big].\end{eqnarray} The sub-sums in \eqref{Phi_2(r,n)} will be estimated by using a recursion argument.
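The decomposition \eqref{basic} can be verified mechanically; the following Python check (ours, not part of the proof) compares the divisor-sum definition of $\Phi_2$ with the nested sum over exponent vectors:

```python
import math
from itertools import product

def factorize(n):
    # Prime factorization of n as a list of (p, alpha) pairs.
    fac, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            fac.append((p, a))
        p += 1
    if n > 1:
        fac.append((n, 1))
    return fac

def Phi2_divisor_sum(n):
    # Phi_2(n) = sum over divisors d of (log d) log Omega(d) / d.
    total = 0.0
    for d in range(2, n + 1):
        if n % d == 0:
            Om = sum(a for _, a in factorize(d))
            total += math.log(d) * math.log(Om) / d
    return total

def Phi2_exponent_sum(n):
    # The nested form (basic): sum over exponent vectors (mu_1, ..., mu_r).
    fac = factorize(n)
    total = 0.0
    for mus in product(*(range(a + 1) for _, a in fac)):
        Om = sum(mus)
        if Om == 0:
            continue  # d = 1 contributes nothing (0 log 0 = 0)
        d = math.prod(p ** m for (p, _), m in zip(fac, mus))
        total += sum(m * math.log(p) for (p, _), m in zip(fac, mus)) * math.log(Om) / d
    return total

assert abs(Phi2_divisor_sum(360) - Phi2_exponent_sum(360)) < 1e-9
```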
\vskip 3 pt We first explain its principle and examine the structure of the sum $\Phi_2(r,n)$, anticipating somewhat the calculations. The last sum $\sum_{\mu_r=0}^{\alpha_r}\frac{\mu_r \log p_r}{p_r^{\mu_r}}\log \big[\sum_{i=1}^{r-1} \mu_i + \mu_r\big]$ is of the type $$\sum_{\mu=0}^{\alpha_r}\alpha \mu \big(\log (A+\mu)\big)e^{-\alpha \mu}, \qquad \qquad \hbox{$\alpha =\log p_r$, \quad $A=\sum_{i=1}^{r-1}\mu_i$}.$$ It is easy to observe with \eqref{49} that the bound obtained in Lemma \ref{E1} will induce on the sum in $\mu_{r-1}$ a logarithmic factor $\log \big[h+ \sum_{i=1}^{r-2} \mu_i + \mu_{r-1}\big]$, where $h$ is a positive integer, and so on. More precisely, \begin{eqnarray*} & & \sum_{\mu=0}^{\infty}\alpha \mu \big(\log (A+\mu)\big)e^{-\alpha \mu}\ \le \ \alpha \big(\log (A+1)\big)e^{-\alpha}+ 2\alpha \big(\log (A+2)\big)e^{-2\alpha} \\ & &\qquad + \Big\{3\alpha \log (A+3)+3 \log (A+3)+ \frac{1}{\alpha} \log (A+3) + \frac{1}{\alpha} + \frac{1}{\alpha^2(A+3)}\Big\} e^{-3\alpha},\end{eqnarray*} provided $A\ge 1$ and $\alpha \ge 1$. Whence the bound \begin{eqnarray*} & & \frac{\log p_r}{p_r} \big(\log (A+1)\big)+ \frac{2\log p_r}{p^2_r}\big(\log (A+2)\big) + \frac{1}{p^3_r} \Big\{3\log p_r \log (A+3)+3 \log (A+3)\\ & &+ \frac{1}{\log p_r} \log (A+3) + \frac{1}{\log p_r} + \frac{1}{(\log p_r)^2(A+3)}\Big\}.\end{eqnarray*} By inserting this bound into \eqref{Phi_2(r,n)}, we get sums of the type \begin{eqnarray*} \sum_{\mu_1=0}^{\alpha_1}\ldots \sum_{\mu_{r-1}=0}^{\alpha_{r-1}} \frac{\log \big[\sum_{i=1}^{r-1} \mu_i + h\big]}{ p_1^{\mu_1}\ldots p_{r-1}^{\mu_{r-1}}}, \qquad h=1,2,3,\end{eqnarray*} with new coefficients; this is displayed in \eqref{h1234}.
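The display above (the content of Lemma \ref{E1} below) can be checked numerically; this Python sketch (ours, not part of the proof) compares both sides for several admissible pairs $(A,\alpha)$:

```python
import math

def lhs(A, alpha, terms=500):
    # sum over mu >= 0 of alpha * mu * log(A + mu) * exp(-alpha * mu)
    return sum(alpha * m * math.log(A + m) * math.exp(-alpha * m)
               for m in range(terms))

def rhs(A, alpha):
    # Right-hand side of the bound (A >= 1, alpha >= 1).
    e = math.exp
    return (alpha * math.log(A + 1) * e(-alpha)
            + 2 * alpha * math.log(A + 2) * e(-2 * alpha)
            + (3 * alpha * math.log(A + 3) + 3 * math.log(A + 3)
               + math.log(A + 3) / alpha + 1 / alpha
               + 1 / (alpha ** 2 * (A + 3))) * e(-3 * alpha))

for A in (1, 2, 5, 10):
    for alpha in (1.0, math.log(3), math.log(7), 3.0):
        assert lhs(A, alpha) <= rhs(A, alpha) + 1e-12
```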
By using \eqref{sumphi2}, the last sum is bounded by \begin{eqnarray*} & & \log \big[\sum_{i=1}^{r-2} \mu_i + h\big] + \frac{1}{p_{r-1}}\Big(\log \big[\sum_{i=1}^{r-2} \mu_i + h+1\big]\\ & &+ \frac{\log\big[\sum_{i=1}^{r-2} \mu_i + h+1\big]}{\log p_{r-1}} + \frac{1}{(\log p_{r-1})^2(\sum_{i=1}^{r-2} \mu_i + h+1)}\Big) . \end{eqnarray*} Recursing once more allows one to bound $\Phi_2(r,n)$ again. The remaining sums will then all be of the same type. The factor $ \log \big[\sum_{i=1}^{r-1} \mu_i + h\big]$ induces on the sum of order $(r-2)$ a factor $\log\big[\sum_{i=1}^{r-2} \mu_i + h+1\big]$. The whole matter thus consists in understanding how the new coefficients are generated, and in particular in checking that a coefficient of order $1+\varepsilon$ will not produce by iteration a coefficient of order $(1+\varepsilon)^r$. A recurrence inequality established in Lemma \ref{E3} will allow one to control their magnitude efficiently. \subsection{Preparation} Some technical lemmas will be needed. \begin{lemma}\label{phivar} {\rm (i)} Let $\varphi_1(x)=x\big(\log (A+x)\big) e^{-\alpha x}$ and $\varphi_2(x)=\big(\log (A+x)\big) e^{-\alpha x}$. Then $\varphi_1(x)$ is non-increasing on $[3, \infty)$ if $A\ge 1$ and $\alpha\ge \log 2$. Further, $\varphi_2(x)$ is non-increasing on $[1,\infty)$ if $A\ge 1$ and $\alpha\ge 1$. \vskip 3 pt \noindent {\rm (ii)} Assume that $A\ge 1$ and $\alpha\ge \log 2$. For any integer $m\ge 1$, \begin{eqnarray} \label{phi.intest1} \alpha \int_m^\infty x \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d}x&\le & \frac{1}{\alpha^2(A+m)}e^{-\alpha m}+ \frac{1}{\alpha}e^{-\alpha m} + \frac{1}{\alpha}\big(\log (A+m)\big)e^{-\alpha m} \\ & & + m\big(\log (A+m)\big) e^{-\alpha m}.\end{eqnarray} \vskip 3 pt \noindent {\rm (iii)} Assume that $A\ge 1$ and $\alpha\ge 1$. Then \begin{eqnarray}\label{intest2} \int_1^\infty \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d} x&\le &\frac{\log (A+1)}{\alpha} e^{-\alpha}+\frac{1}{\alpha^2(A+1)}e^{-\alpha}.
\end{eqnarray} \end{lemma} \begin{proof} (i) We have $\varphi_1'(x) = \big(\log (A+x)\big)e^{-\alpha x}+ \frac{x}{A+x}e^{-\alpha x} -\alpha x \big(\log (A+x)\big) e^{-\alpha x}$. Since $\varphi_1'(x)\le 0 \Leftrightarrow \frac{1}{x}+ \frac{1}{(A+x)\log (A+x)}\le \alpha$, and by assumption $$ \frac{1}{x}+ \frac{1}{(A+x)\log (A+x)}\le\frac{1}{3}+\frac{1}{8\log 2} \le \frac{1}{3}+\frac{1}{5}<\log 2\le \alpha,$$ the first claim follows. Similarly $ \varphi_2'(x)=\frac{1}{A+x}e^{-\alpha x}-\alpha \big(\log (A+x)\big) e^{-\alpha x}$. As $\varphi_2'(x)\le 0 \Leftrightarrow (A+x)\log (A+x)\ge \frac{1}{\alpha}$, we also get $$(A+x)\log (A+x)\ge 2\log 2>1\ge \frac{1}{\alpha}.$$ \noindent (ii) Rearranging the formula for $\varphi_1'(x)$ in (i) gives \begin{equation}\label{49} \alpha x \big(\log (A+x)\big) e^{-\alpha x}= \big(\log (A+x)\big)e^{-\alpha x}+ \frac{x}{A+x}e^{-\alpha x}-\Big( x \big(\log (A+x)\big) e^{-\alpha x}\Big)' . \end{equation} By integrating, \begin{eqnarray*} \alpha \int_m^\infty x \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d}x&=& \int_m^\infty \big(\log (A+x)\big)e^{-\alpha x}\hbox{\rm d} x+ \int_m^\infty \frac{x}{A+x}e^{-\alpha x}\hbox{\rm d} x \\ & & \quad + m\big(\log (A+m)\big) e^{-\alpha m}.\end{eqnarray*} Similarly \begin{equation*} \alpha \int_m^\infty \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d} x=\int_m^\infty \frac{1}{A+x}e^{-\alpha x}\hbox{\rm d} x+ \big(\log (A+m)\big) e^{-\alpha m}.\end{equation*} By combining we get \begin{eqnarray*} \alpha \int_m^\infty x \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d}x&=& \frac{1}{\alpha}\int_m^\infty\frac{1}{A+x}e^{-\alpha x}\hbox{\rm d} x+ \int_m^\infty \frac{x}{A+x}e^{-\alpha x}\hbox{\rm d} x \\ & & + \frac{1}{\alpha}\big(\log (A+m)\big)e^{-\alpha m} + m\big(\log (A+m)\big) e^{-\alpha m}.
\end{eqnarray*} Therefore \begin{eqnarray*} \alpha \int_m^\infty x \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d}x&\le & \frac{1}{\alpha^2(A+m)}e^{-\alpha m}+ \frac{1}{\alpha}e^{-\alpha m} + \frac{1}{\alpha}\big(\log (A+m)\big)e^{-\alpha m} \\ & & + m\big(\log (A+m)\big) e^{-\alpha m}.\end{eqnarray*} \noindent (iii) Integrating by parts, \begin{eqnarray*} \int_1^N \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d} x&=&\frac{1}{\alpha}\int_1^N\frac{1}{A+x}e^{-\alpha x}\hbox{\rm d} x\\ & & \qquad +\frac{1}{\alpha}\Big(\big(\log (A+1)\big) e^{-\alpha}-\big(\log (A+N)\big) e^{-\alpha N}\Big).\end{eqnarray*} As $\frac{1}{\alpha}\int_1^N\frac{1}{A+x}e^{-\alpha x}\hbox{\rm d} x\le \frac{1}{\alpha^2(A+1)}e^{-\alpha}$, letting $N$ tend to infinity gives \begin{eqnarray*} \int_1^\infty \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d} x&\le &\frac{\log (A+1)}{\alpha} e^{-\alpha}+\frac{1}{\alpha^2(A+1)}e^{-\alpha}.\end{eqnarray*} \end{proof} \begin{lemma}\label{E1} Assume that $A\ge 1$ and $\alpha \ge 1$. Then \begin{eqnarray*} & & \sum_{\mu=0}^{\infty}\alpha \mu \big(\log (A+\mu)\big)e^{-\alpha \mu}\ \le \ \alpha \big(\log (A+1)\big)e^{-\alpha}+ 2\alpha \big(\log (A+2)\big)e^{-2\alpha} \\ & &\qquad\qquad + \Big\{3\alpha \log (A+3)+3 \log (A+3)+ \frac{1}{\alpha} \log (A+3) + \frac{1}{\alpha} + \frac{1}{\alpha^2(A+3)}\Big\} e^{-3\alpha}.\end{eqnarray*} \end{lemma} \begin{proof} As \begin{eqnarray*}\sum_{\mu=0}^{\infty}\alpha \mu \big(\log (A+\mu)\big)e^{-\alpha \mu}&=&\alpha \big(\log (A+1)\big)e^{-\alpha}+ 2\alpha \big(\log (A+2)\big)e^{-2\alpha} \\ & & + 3\alpha \big(\log (A+3)\big)e^{-3\alpha}+ \alpha\sum_{\mu=4}^\infty \mu \big(\log (A+\mu)\big)e^{-\alpha \mu}, \end{eqnarray*} by applying Lemma \ref{phivar}-(ii), together with the monotonicity in (i), we get \begin{eqnarray*} \alpha\sum_{\mu=4}^\infty \mu \big(\log (A+\mu)\big)e^{-\alpha \mu}&\le &\alpha\int_{3}^\infty x \big(\log (A+x)\big)e^{-\alpha x} \hbox{\rm d} x \\ &\le & \frac{1}{\alpha^2(A+3)}e^{-3\alpha}+ \frac{1}{\alpha}e^{-3\alpha } + \frac{\log (A+3)}{\alpha}e^{-3\alpha} + 3\big(\log (A+3)\big) e^{-3\alpha}.\end{eqnarray*} Whence \begin{eqnarray*} & & \sum_{\mu=0}^{\infty}\alpha \mu \big(\log (A+\mu)\big)e^{-\alpha \mu}\ \le \ \alpha \big(\log (A+1)\big)e^{-\alpha}+ 2\alpha \big(\log (A+2)\big)e^{-2\alpha} \\ & & + \Big\{3\alpha
\log (A+3)+3 \log (A+3)+ \frac{1}{\alpha} \log (A+3) + \frac{1}{\alpha} + \frac{1}{\alpha^2(A+3)}\Big\} e^{-3\alpha}.\end{eqnarray*} \end{proof} \begin{lemma}\label{E3a} Under assumption \eqref{min} we have \begin{eqnarray*}\sum_{\mu_s=0}^{\infty} \frac{\log \big(\sum_{i=1}^{s} \mu_i +h\big)}{p_s^{\mu_s}}&\le & \log \big(\sum_{i=1}^{s-1} \mu_i + h\big) + \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}}\Big)\log \big(\sum_{i=1}^{s-1} \mu_i + h+1\big) \\ & &\quad +\frac{1}{(1+(\sum_{i=1}^{s-1} \mu_i + 2))(\log p_s)^2p_s}. \end{eqnarray*} In particular, \begin{eqnarray*}\sum_{\mu_s=0}^{\infty} \frac{\log \big(\sum_{i=1}^{s-1} \mu_i +h\big)}{p_s^{\mu_s}} &\le & \Big(1+ \frac{1}{p_{s}}\big( 1 + \frac{1}{ \log p_{s}} +\frac{1}{3(\log p_s)^2}\big) \Big)\log \big(\sum_{i=1}^{s-1} \mu_i + h+2\big). \end{eqnarray*}\end{lemma} \begin{proof} As \begin{eqnarray*}\sum_{\mu=0}^{\infty} \big(\log (A+\mu)\big) e^{-\alpha \mu} &=& \log A + \big(\log (A+1)\big) e^{-\alpha }+ \sum_{\mu=2}^{\infty} \big(\log (A+\mu)\big) e^{-\alpha \mu} \\ &\le & \log A + \big(\log (A+1)\big) e^{-\alpha }+ \int_{1}^{\infty} \big(\log (A+x)\big) e^{-\alpha x}\hbox{\rm d} x,\end{eqnarray*} we deduce from Lemma \ref{phivar}-(iii) that \begin{equation}\label{sumphi2}\sum_{\mu=0}^{\infty} \big(\log (A+\mu)\big) e^{-\alpha \mu} \ \le \ \log A + e^{-\alpha}\Big(\log (A+1)+ \frac{\log (A+1)}{\alpha} + \frac{1}{\alpha^2(A+1)}\Big) . \end{equation} Consequently, \begin{eqnarray*}\sum_{\mu_s=0}^{\infty} \frac{\log \big(\sum_{i=1}^{s} \mu_i + h\big)}{p_s^{\mu_s}}&\le & \log \big(\sum_{i=1}^{s-1} \mu_i + h\big)+ \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}}\Big)\log \big(\sum_{i=1}^{s-1} \mu_i + h+1\big)\\ & &\quad +\frac{1}{(1+(\sum_{i=1}^{s-1} \mu_i + 2))(\log p_s)^2p_s}. \end{eqnarray*} Finally, \begin{eqnarray*}\sum_{\mu_s=0}^{\infty} \frac{\log \big(\sum_{i=1}^{s-1} \mu_i + h\big)}{p_s^{\mu_s}} &\le & \Big(1+ \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}} +\frac{1}{3(\log p_s)^2}\Big)\Big)\log \big(\sum_{i=1}^{s-1} \mu_i + h+2\big) .
\end{eqnarray*} \end{proof} \begin{corollary}\label{E2} Assume that condition \eqref{min} is satisfied. \vskip 3 pt {\rm (i)} If $\sum_{i=1}^{r-1}\mu_i\ge 1$, then \begin{align*} \sum_{\mu_r=0}^{\alpha_r }&\frac{\mu_r\log p_{r}}{p_r^{\mu_r}}\log \big(\sum_{i=1}^{r-1} \mu_i + \mu_r\big) \le \frac{\log p_r}{p_r}\log \big( \sum_{i=1}^{r-1} \mu_i + 1\big)+ \frac{2\log p_r}{p_r^2} \log \big( \sum_{i=1}^{r-1} \mu_i + 2\big) \\ &+ \frac{1}{p_r^3} \Big(3\log p_r + 3+ \frac{1}{\log p_r} \Big)\log \big( \sum_{i=1}^{r-1} \mu_i + 3\big) + \frac{1}{p_r^3\log p_r}\Big( 1+ \frac{1}{(\sum_{i=1}^{r-1} \mu_i + 3)\log p_r}\Big). \end{align*} Further, \begin{eqnarray*} \sum_{\mu_r=0}^{\alpha_r }\frac{\mu_r\log p_{r}}{p_r^{\mu_r}}\log \big(\sum_{i=1}^{r-1} \mu_i + \mu_r\big)& \le & 5\, \frac{\log p_r}{p_r}\log \big( \sum_{i=1}^{r-1} \mu_i + 3\big).\end{eqnarray*} \vskip 3 pt {\rm (ii)} If $\sum_{i=1}^{r-1}\mu_i=0$, then \begin{eqnarray*} \sum_{\mu_r=0}^{\alpha_r }\frac{\mu_r\log p_{r}}{p_r^{\mu_r}}\log \big(\sum_{i=1}^{r-1} \mu_i + \mu_r\big) &\le & 18\, \frac{\log p_{r}}{p_{r}} .\end{eqnarray*}\end{corollary} \begin{proof} (i) The first inequality follows from Lemma \ref{E1} with the choice $\alpha =\log p_r$ and $A=\sum_{i=1}^{r-1}\mu_i$, noting that by assumption \eqref{min}, $\alpha >1$. As $p_r\ge 3$, it is also immediate that \begin{align*} & \sum_{\mu_r=0}^{\alpha_r }\frac{\mu_r\log p_{r}}{p_r^{\mu_r}} \log \big(\sum_{i=1}^{r-1} \mu_i + \mu_r\big) \\ &\ \le \Big\{3 \frac{\log p_r}{p_r} + \frac{\log p_r }{9p_r} \Big(3 + \frac{3}{\log p_r}+ \frac{1}{(\log p_r)^2} \Big)\Big\}\log \big( \sum_{i=1}^{r-1} \mu_i + 3\big) + \frac{1}{9p_r\log p_r}\Big( 1+ \frac{1}{4\log p_r}\Big) \\ &\ \le \ 5\, \frac{\log p_r}{p_r}\log \big( \sum_{i=1}^{r-1} \mu_i + 3\big).\end{align*} \noindent (ii) If $\sum_{i=1}^{r-1}\mu_i=0$, the sums relative to $\mu_i$, $1\le i\le r-1$, do not contribute.
Further, \begin{eqnarray*} \sum_{\mu_r=0}^{\alpha_r }\frac{\mu_r\log p_{r}}{p_r^{\mu_r}}\log \big(\sum_{i=1}^{r-1} \mu_i + \mu_r\big)&=&\sum_{\mu_r=2}^{\alpha_r}\frac{\mu_r\log p_{r}}{p_{r}^{\mu_r}}\log \mu_r\ =\ \sum_{\mu=1}^{\alpha_r-1}\frac{(\mu+1)\log p_{r}}{p_{r}^{\mu+1}}\log (\mu+1) \\ &\le &\frac{1}{p_{r}}\Big\{\sum_{\mu=1}^{\infty}\frac{\mu\log p_{r}}{p_{r}^{\mu}}\log (\mu+1)+\sum_{\mu=1}^{\infty}\frac{\log p_{r}}{p_{r}^{\mu}}\log (\mu+1)\Big\} . \end{eqnarray*} Lemma \ref{E1} applied with $A=1$ and $\alpha =\log p_{r}$ gives the bound \begin{eqnarray*} \sum_{\mu=1}^{\infty}\frac{\mu\log p_{r}}{p_{r}^{\mu}}\log (\mu+1) &\le & \frac{(\log 2)\log p_{r}}{p_{r}} + \frac{2(\log 3)\log p_{r}}{p_{r}^2} + \frac{1}{p_{r}^3}\Big\{(6\log 2)(\log p_{r}) \\ & & +6 \log 2+ \frac{2\log 2}{\log p_{r}} + \frac{1}{\log p_{r}} + \frac{1}{4(\log p_{r})^2}\Big\} \\ &\le & 8\Big(\frac{\log p_{r}}{p_{r}}+\frac{1}{p_{r}^3} \Big) .\end{eqnarray*} Next, estimate \eqref{sumphi2} applied with $A=1$ and $\alpha =\log p_{r}$ further gives \begin{eqnarray*}\sum_{\mu=1}^{\infty}\frac{\log p_{r}}{p_{r}^{\mu}}\log (\mu+1) &\le & \frac{\log p_{r}}{p_{r}}\Big(\log 2+ \frac{\log 2}{\log p_{r}} + \frac{1}{2(\log p_{r})^2}\Big) \ \le \ \frac{2\log p_{r}}{p_{r}}.\end{eqnarray*} Whence $ \sum_{\mu_r=0}^{\alpha_r }\frac{\mu_r\log p_{r}}{p_r^{\mu_r}}\log \big(\sum_{i=1}^{r-1} \mu_i + \mu_r\big) \le 18\, \frac{\log p_{r}}{p_{r}} .$\end{proof} \begin{remark}\rm As $\log \big(\sum_{i=1}^{s} \mu_i + h\big)\le \log \big(\Omega(n)+3\big)$, one can deduce from Corollary \ref{E2} that \begin{eqnarray*} \Phi_2(r,n) &\le &18\, \frac{\log p_{r}}{p_{r}}\log (\Omega(n)+3) \prod_{i=1}^{r}\Big(\frac{1}{1-p_i^{-1}}\Big) .\end{eqnarray*} So that, by the observation made at the beginning of Section \ref{s4}, \begin{eqnarray*} \Phi_2(n)&\le &18\, \big(\log (\Omega(n)+3)\big)\Big( \sum_{j=1}^r\frac{\log p_{j}}{p_{j}}\Big) \prod_{i=1}^{r}\Big(\frac{1}{1-p_i^{-1}}\Big) .\end{eqnarray*} By combining this with the bound for $\Phi_1(n)$ established in Lemma \ref{phi1maj},
and then using inequality \eqref{convexdec}, we obtain \begin{align}\label{convexdec1} {\mathbb P}si(n)\ \le\ \Big(\prod_{j=1 }^r \frac{1}{1-p_j^{ -1}} \Big)\bigg\{\sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} + 18\ \Big( \sum_{i=1}^r\frac{\log p_{i}}{p_{i}}\Big)\log (\O(n)+3) \bigg\},\end{align} recalling that $r=\o(n)$. Whence, by invoking Lemma \ref{tEZm} and noticing that $\o(n)\le\O(n)\le \log_{2} n$, \begin{eqnarray*} {\mathbb P}si(n)&\le & e^\g(1+o(1)) (\log\log n)^2\big(\log\log\log n+ 18 w(n)\big).\end{eqnarray*} \vskip 3 pt The finer estimate of ${\mathbb P}si(n)$ will be derived from a more precise study of the coefficients of ${\mathbb P}hi_2(r,n)$. This is the object of the next subsection. \end{remark} \subsection{Estimates of $\boldsymbol{ {\mathbb P}hi_2(r,n)}$.} \rm ${}$ We define successively \begin{eqnarray}\label{n1}{}\begin{cases} \ \ \ \ \ \ \ \ \m\, =\, (\m_1, \ldots, \m_r), \qq (\m_1, \ldots, \m_r)\in \displaystyle{\prod_{i=1}^r\big([0,\a_i]\cap {\mathbb N}\big)}, \cr &\cr \ \ \ p_\m(s)\, =\, p_1^{-\m_1}\ldots p_s^{-\m_s}, \qquad 1\le s\le r, \cr &\cr \quad \ \ \ {\mathbb P}i_s\, =\, \displaystyle{\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_s=0}^{\a_s}p_\m(s)\, =\, \prod_{\ell = 1}^s\Big(\frac{1-p_\ell^{-\a_\ell -1}}{1-p_\ell^{ -1}}\Big) }. \end{cases} \end{eqnarray} Next, \begin{eqnarray*} {\mathbb P}hi_s(h)= \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_s=0}^{\a_s}p_\m(s) \log \big(\sum_{i=1}^{s} \m_i + h\big), \qq \quad 1\le s\le r-1. \end{eqnarray*} We also set \begin{eqnarray}\label{n5} \begin{cases}c_1\, =\, 1, \qq c_2\, =\, \frac{2}{p_r}, \qq c_3\, =\, \frac{1}{p_r^2}\big(3 + \frac{3}{\log p_r}+ \frac{1}{(\log p_r)^2}\big), \cr c_4\, =\, \frac{1}{p_r^3\log p_r}\big(1 + \frac{1}{3\log p_r}\big), \cr c_0\, =\, \frac{\log p_r}{ p_r}, \qq\qq\qq \ \, c\, =\, \sum_{i=1}^3 c_i, \cr b_s\, =\, \frac{1}{ p_s}\big( 1+ \frac{1}{ \log p_s}\big), \qq \ \ \b_s= \frac{1}{2 p_s (\log p_s)^2}.
\qq\qq\qq\end{cases} \end{eqnarray} \subsubsection{\cmit Recurrence inequality.}\label{subsection4.2.1} We deduce from the first part of Lemma \ref{E3a} that \begin{eqnarray*}{\mathbb P}hi_{s}(h)&=& \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{s-1}=0}^{\a_{s-1}}p_\m(s-1)\bigg\{\sum_{\m_s=0}^{\a_s}p_s^{-\m_s} \log \big(\sum_{i=1}^{s} \m_i + h\big)\bigg\} \cr &\le & \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{s-1}=0}^{\a_{s-1}}p_\m(s-1)\bigg\{\sum_{\m_s=0}^{\infty}p_s^{-\m_s} \log \big(\sum_{i=1}^{s} \m_i + h\big)\bigg\} \cr &\le &\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{s-1}=0}^{\a_{s-1}}p_\m(s-1)\bigg\{\log \big(\sum_{i=1}^{s-1} \m_i + h\big) + \cr & & \quad \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}}\Big)\log \big(\sum_{i=1}^{s-1} \m_i + h+1\big)+\frac{1}{(1+(\sum_{i=1}^{s-1} \m_i + 2))(\log p_s)^2p_s}\bigg\} \cr &\le &{\mathbb P}hi_{s-1}(h)+ \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}}\Big){\mathbb P}hi_{s-1}(h+1)+ \frac{1}{2(\log p_s)^2p_s}{\mathbb P}i_{s-1}. \end{eqnarray*} \vskip 5 pt Whence with the previous notation, \begin{lemma}\label{E3} Under assumption \eqref{min}, we have for $s=2,\ldots , r-1$,\begin{eqnarray*}{\mathbb P}hi_{s}(h)&\le &{\mathbb P}hi_{s-1}(h)+ b_s{\mathbb P}hi_{s-1}(h+1)+\b_s{\mathbb P}i_{s-1}. 
\end{eqnarray*} \end{lemma} \vskip 5 pt Now by using estimate (i) of Corollary \ref{E2} and the notation introduced, we have, under assumption \eqref{min}, if $\sum_{i=1}^{r-1}\m_i\ge 1$, \begin{align}\label{h1234} \sum_{\m_r=0}^{\a_r }&\frac{\m_r\log p_{r}}{p_r^{\m_r}}\log \big(\sum_{i=1}^{r-1} \m_i + \m_r\big) \le \frac{\log p_r}{p_r}\log \big( \sum_{i=1}^{r-1} \m_i + 1\big)+ \frac{2\log p_r}{p_r^2} \log \big( \sum_{i=1}^{r-1} \m_i + 2\big) \cr &+ \frac{1}{p_r^3} \Big(3\log p_r + 3+ \frac{1}{\log p_r} \Big)\log \big( \sum_{i=1}^{r-1} \m_i + 3\big) + \frac{1}{p_r^3\log p_r}\Big( 1+ \frac{1}{(\sum_{i=1}^{r-1} \m_i + 3)\log p_r}\Big) \cr & \le c_0c_1 \log \big( \sum_{i=1}^{r-1} \m_i + 1\big)+ c_0c_2 \log \big( \sum_{i=1}^{r-1} \m_i + 2\big) +c_0c_3 \log \big( \sum_{i=1}^{r-1} \m_i + 3\big) + c_4 \cr & =c_0\sum_{h=1}^3c_h \log \big( \sum_{i=1}^{r-1} \m_i + h\big)+c_4,\end{align} since $\frac{1}{p_r^3\log p_r}\big(1+ \frac{1}{(\sum_{i=1}^{r-1} \m_i + 3)\log p_r}\big)\le c_4$. \vskip 3 pt Therefore, under assumption \eqref{min}, if $\sum_{i=1}^{r-1}\m_i\ge 1$, \begin{eqnarray}{\mathbb P}hi_2(r,n)&\le & c_0 \underbrace{\sum_{h=1}^3 c_h {\mathbb P}hi_{r-1}(h)}_{(1)} + c_4 {\mathbb P}i_{r-1}.\end{eqnarray} Indeed, \begin{eqnarray*}{\mathbb P}hi_2(r,n)&=& \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}}\sum_{\m_r=0}^{\a_r}\frac{\m_r \log p_r}{p_r^{\m_r}}\log \Big[\sum_{i=1}^{r-1} \m_i + \m_r\Big] \cr &\le & \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}}\Big\{c_0\sum_{h=1}^3c_h \log \big( \sum_{i=1}^{r-1} \m_i + h\big)+c_4\Big\} \cr &= & c_0 \underbrace{\sum_{h=1}^3 c_h {\mathbb P}hi_{r-1}(h)}_{(1)} + c_4 {\mathbb P}i_{r-1}.\end{eqnarray*} By applying the recurrence inequality with $s=r-1$ to ${\mathbb P}hi_{r-1}(h)$, one gets \begin{eqnarray*} {\mathbb P}hi_2(r,n) &\le & c_0 \underbrace{\sum_{h=1}^3 c_h \big[{\mathbb P}hi_{r-2}(h)}_{(1)}+
\underbrace{b_{r-1}{\mathbb P}hi_{r-2}(h+1)}_{(2)} \big]+c_0c\b_{r-1}{\mathbb P}i_{r-2}+ c_4 {\mathbb P}i_{r-1}.\end{eqnarray*} By applying the recurrence inequality this time to ${\mathbb P}hi_{r-2}(h)$ and ${\mathbb P}hi_{r-2}(h+1)$, one also gets \begin{eqnarray*} {\mathbb P}hi_2(r,n) &\le & c_0 \underbrace{\sum_{h=1}^3 c_h {\mathbb P}hi_{r-3}(h)}_{(1)}+c_0\underbrace{\sum_{h=1}^3 c_h b_{r-2}{\mathbb P}hi_{r-3}(h+1)}_{(3)}+c_0c \b_{r-2}{\mathbb P}i_{r-3} \cr& &+c_0\underbrace{\sum_{h=1}^3 c_h b_{r-1}{\mathbb P}hi_{r-3}(h+1)}_{(2)}+c_0\underbrace{\sum_{h=1}^3 c_h b_{r-1}b_{r-2}{\mathbb P}hi_{r-3}(h+2)}_{(4)}+c_0cb_{r-1}\b_{r-2}{\mathbb P}i_{r-3} \cr& &+c_0c\b_{r-1}{\mathbb P}i_{r-2}+ c_4 {\mathbb P}i_{r-1}.\end{eqnarray*} One easily verifies (see the expressions underlined by (1)) that the coefficient of ${\mathbb P}hi_{r-1}(h)$ is the same as that of ${\mathbb P}hi_{r-2}(h)$ and of ${\mathbb P}hi_{r-3}(h)$. The same holds for ${\mathbb P}hi_{r-2}(h+1)$; see the expressions underlined by (2). New expressions, underlined by (3) and (4) and linked to ${\mathbb P}hi_{r-3}(h+1)$, ${\mathbb P}hi_{r-3}(h+2)$, appear. \vskip 2 pt Each new coefficient is kept until the end of the iteration process generated by the recurrence inequality of Lemma \ref{E3}. \vskip 2 pt We also verify, when applying this inequality, that we pass from an upper bound expressed in terms of ${\mathbb P}hi_{r-1}(h)$, ${\mathbb P}i_{r-1}$ {\cmit only}, to an upper bound expressed in terms of ${\mathbb P}hi_{r-2}$ (in $h$ or $h+1$) and ${\mathbb P}i_{r-2}$, ${\mathbb P}i_{r-1}$ {\cmit only}. \vskip 2 pt This rule is general, and one verifies that when iterating this recurrence relation, we obtain at each step a bound depending on ${\mathbb P}hi_{r-d}$ and the products ${\mathbb P}i_{r-d}, {\mathbb P}i_{r-d+1},\ldots,$ $ {\mathbb P}i_{r-1}$ only.
\vskip 4 pt $\underline{\hbox{\cmssqi Binary tree}}$\,: The shift of length $h$ or $h+1$ generates a binary tree whose branches, at each division (the steps corresponding to the preceding iterations), are either stationary\,: ${\mathbb P}hi_{r-d}(h)\to {\mathbb P}hi_{r-d-1}(h)$, or create new coefficients\,: ${\mathbb P}hi_{r-d}(h)\to {\mathbb P}hi_{r-d-1}(h+1)$. One can represent this by the diagram below drawn from Lemma \ref{E3}. \vskip 5 pt \centerline{ ${}_\downarrow$ \hbox{\small shift\,+1, new coefficients\ }${}_\downarrow$\hskip +52 pt} \vskip -15 pt \begin{eqnarray*}{\mathbb P}hi_{s}(h)&\le &{\mathbb P}hi_{s-1}(h)+ b_s{\mathbb P}hi_{s-1}(h+1)+\b_s{\mathbb P}i_{s-1}. \end{eqnarray*} \centerline{ ${}^\uparrow$ \hbox{\small stationarity} ${}^\uparrow$ \hskip +120 pt} \vskip -3pt \centerline{\sevenrm Figure 1.}\par \vskip 6 pt \noi Before continuing, we recall that by \eqref{sumphi2}, \begin{eqnarray*}\sum_{\m=0}^{\a_s} \big(\log (A+\m)\big) e^{-\a \m} &\le & \log A + e^{-\a}\Big(\log (A+1)+ \frac{\log (A+1)}{\a} + \frac{1}{\a^2(A+1)}\Big) . \end{eqnarray*} \noi Thus \begin{eqnarray*} {\mathbb P}hi_1(v)&\le & \sum_{\m=0}^\infty p_1^{-\m}\log \big(\m+v\big) \ =\ \sum_{\m=0}^\infty \frac{\log (v+\m)}{p_1^{\m}} \cr &\le & \log v + \frac{1}{p_1}\Big( \log (v+1) +\frac{\log (v+1)}{\log p_1}+ \frac{1}{v(\log p_1)^2}\Big) \qq (v\ge 1). \end{eqnarray*} Hence, \begin{eqnarray*} {\mathbb P}hi_1(h) &\le & C\log h \qq\quad (h\ge 2). \end{eqnarray*} One easily verifies that each product $b_{i_1}\ldots b_{i_d}$ of $d$ distinct factors $b_i$ occurs as the coefficient of a term ${\mathbb P}hi_{\cdot}(h+d)$. The terms having ${\mathbb P}hi_{\cdot}(h+\cdot)$ as factor form the sum \begin{eqnarray} \label{somme} c_0\, \sum_{d=1}^{r-1}\Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) {\mathbb P}hi_1(h+d), \end{eqnarray} once the iteration process is achieved, that is, after having applied the recurrence inequality of Lemma \ref{E3} $(r-1)$ times.
This sum can thus be bounded from above, up to a multiplicative constant, by (recalling that $h=1,2$ or $3$) \begin{eqnarray*} c_0\sum_{d=1}^{r-1}(\log d)\, \Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) . \end{eqnarray*} But, for all positive reals $a_{ 1},\ldots, a_{r}$ and $1\le d\le r$, we have $$\Big(\sum_{i=1}^r a_i\Big)^d\ge d! \sum_{ 1\le i_1<\ldots<i_d\le r } a_{i_1}\ldots a_{i_d}. $$ Thus \begin{eqnarray*} \sum_{d=1}^{r-1}(\log d)\, \Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) &\le & \sum_{d=1}^{r-1}\frac{(\log d)}{d!}\, \Big( \sum_{i=1}^r b_{i}\Big)^d . \end{eqnarray*} As moreover, $$b_{i}=\frac{1}{p_{i}}\big(1 + \frac{1}{\log p_{i}}\big)\le \frac{1}{p(i)} + \frac{1}{p(i)\log p(i)},$$ one has by means of \eqref{p(i)est}, \begin{eqnarray*} \sum_{i=1}^r b_{i}&\le & C+ \sum_{i=2}^r\big(\frac{1}{i \log i} + \frac{1}{i (\log i)^2}\big) \ \le \ \log\log r +C. \end{eqnarray*} Thus \begin{eqnarray*} \sum_{d=1}^{r-1}(\log d)\, \Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) &\le & C \sum_{d=1}^{r-1}\frac{(\log d)}{d!}\, (\log\log r +C)^d . \end{eqnarray*} On the one hand, \begin{eqnarray*} \sum_{\log d\le 1+\e +\log\log\log r}\frac{(\log d)}{d!}\,(\log\log r +C)^d&\le & \big(1+\e +\log\log\log r\big) \sum_{d>1}\frac{(\log\log r +C)^{d}}{d!} \cr &\le &C \big(1+\e +\log\log\log r\big) \log r. \end{eqnarray*} On the other hand, utilizing the classical estimate $d\,!\ge C \sqrt d \,d^d\,e^{-d}$, one has \begin{eqnarray*} \sum_{\log d> 1+\e+\log\log\log r}\frac{(\log d)}{d!}\, (\log\log r)^d&\le &\sum_{\log d> 1+\e+\log\log\log r}\frac{(\log d)}{\sqrt d}\,e^{-d(\log d-1 - \log\log\log r)} \cr &\le & \sum_{d>1}\frac{(\log d)}{\sqrt d}\,\,e^{-\e d}<\infty.
\end{eqnarray*} One thus deduces, concerning the sum in \eqref{somme}, that \begin{equation} \label{somme1} c_0\, \sum_{d=1}^{r-1}\Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) {\mathbb P}hi_1(h+d)\ \le \ C\frac{\log p_r}{p_r}\big(1+\log\log\log r\big)\log r. \end{equation} \vskip 4 pt \subsubsection{\cmit Coefficients related to $ {\mathbb P}i_s$.} \vskip 3 pt By applying the recurrence inequality (Lemma \ref{E3}), one successively generates \begin{eqnarray*} & & c_4{\mathbb P}i_{r-1} \cr & & c_4{\mathbb P}i_{r-1} + c_0c\b_{r-1}{\mathbb P}i_{r-2} \cr & & c_4{\mathbb P}i_{r-1} + c_0c\b_{r-1}{\mathbb P}i_{r-2} + c_0c\b_{r-2}\big(1 + b_{r-1} \big){\mathbb P}i_{r-3} \cr & &c_4{\mathbb P}i_{r-1} + c_0c\b_{r-1}{\mathbb P}i_{r-2} + c_0c\b_{r-2}\big(1 + b_{r-1} \big){\mathbb P}i_{r-3} \cr & &\qq\quad \ +c_0c\b_{r-3}\big( 1+ b_{r-2} + b_{r-1} + b_{r-1} b_{r-2}\big){\mathbb P}i_{r-4}. \end{eqnarray*} $\underline{\hbox{\cmssqi Coefficients}}$\,: \begin{eqnarray*} \qq\qq {\mathbb P}i_{r-1}: c_4\qq\qq \qq\qq \, {\mathbb P}i_{r-2}: c_0c\b_{r-1} \qq\qq\qq\qq\qq\qq\qq\qq\cr {\mathbb P}i_{r-3}: c_0c\b_{r-2}(1+ b_{r-1})\qq {\mathbb P}i_{r-4}: c_0c\b_{r-3}(1+ b_{r-2}+b_{r-1}+b_{r-1}b_{r-2}).\qq\qq \end{eqnarray*} It is easy to check that the coefficients of ${\mathbb P}i_{r-x}$ are exactly those of ${\mathbb P}hi_{r-x+1}(\cdot)$ multiplied by the factor $c_0c\b_{r-x+1}$. The products form the sum \begin{eqnarray} \label{sumpi} c_0c \sum_{d=0}^{r-2}\b_{r-d}\Big(1+ \sum_{1\le i_1<\ldots <i_d<r} b_{r-i_1}\ldots b_{r-i_d}\Big){\mathbb P}i_{r-d-1}.
\end{eqnarray} By \eqref{p(i)est}, one has \begin{eqnarray}\label{beta}\b_j= \frac{1}{2 p_j (\log p_j)^2}\le \frac{1}{2 p(j) (\log p(j))^2}\le \frac{1}{2j (\log j)^3},\qq \hbox{if $j\ge 2$.} \end{eqnarray} Moreover, \eqref{p(i)est} and \eqref{prod} imply that \begin{eqnarray*} {\mathbb P}i_j= \prod_{\ell =1}^j \Big(\frac{1}{1-\frac{1}{p_\ell}} \Big)&\le &\prod_{\ell =1}^j \Big(\frac{1}{1-\frac{1}{p(\ell)}} \Big)\ \le \ \prod _{p\le j(\log j +\log\log j)}\Big(\frac{1}{1- \frac{1}{p}}\Big) \cr &\le & C (\log j), \qq \hbox{if $j\ge 6$.} \end{eqnarray*} For $2\le j\le 5$, we also have, directly from the definition of ${\mathbb P}i_j$, $${\mathbb P}i_j\le \max_{\ell \le 5}\prod_{p\le p(\ell)} \frac{1}{1 -\frac{1}{p}}= C_0.$$ We deduce that \begin{eqnarray}\label{Piest}{\mathbb P}i_j &\le &C( \log j) ,\qq\qq \hbox{if $j\ge 2$.} \end{eqnarray} Consequently, \eqref{Piest} and \eqref{beta} imply that \begin{eqnarray}\label{betapi}\b_{j+1}{\mathbb P}i_j &\le & \frac{C}{j (\log j)^2} ,\qq\qq \hbox{if $j\ge 2$.} \end{eqnarray} This implies that the sum in \eqref{sumpi} can be bounded as follows: \begin{eqnarray} \label{estsumpi} & & c_0c \sum_{d=0}^{r-2}\b_{r-d}\Big(1+ \sum_{1\le i_1<\ldots <i_d<r} b_{r-i_1}\ldots b_{r-i_d}\Big){\mathbb P}i_{r-d-1} \cr &\le &c_0c \prod _{i=1}^{r-2}\big( 1+b_{r-i}\big)\cdot \sum_{d=0}^{r-2}\b_{r-d}\, {\mathbb P}i_{r-d-1} \ =\ c_0c \prod _{j=2}^{r-1}\big( 1+b_{j}\big)\cdot \sum_{d=0}^{r-2}\b_{r-d}\, {\mathbb P}i_{r-d-1} \cr &\le &c_0c\, C \prod _{j=2}^{r-1}\big( 1+b_{j}\big)\cdot \sum_{d=0}^{r-2}\frac{1}{(r-d)\big(\log (r-d)\big)^2} \cr &\le &c_0c\, C \prod _{j=2}^{r-1}\big( 1+b_{j}\big)\cdot \sum_{\d=2}^{\infty}\frac{1}{\d (\log \d)^2} \cr &\le & c_0c\, C \prod _{j=2}^{r-1}\big( 1+b_{j}\big). \end{eqnarray} We recall that \begin{eqnarray*} \sum_{p\le x }\frac{1}{p}\le \log\log x +C. \end{eqnarray*} See for instance \cite{RS}, inequality (3.20).
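The bound just recalled can be tested numerically in the sharper form we read off \cite{RS}, inequality (3.20), namely $\sum_{p\le x}\frac1p\le \log\log x + B + \frac{1}{(\log x)^2}$; the value $B=0.2615$ (just above Mertens' constant $0.26149\ldots$) and the checked range are our own choices, and this sketch is purely illustrative:

```python
import math

def primes_up_to(n):
    # sieve of Eratosthenes
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n**0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i in range(2, n + 1) if is_prime[i]]

B = 0.2615  # slightly above the Mertens constant
ps = primes_up_to(10000)
running = 0.0
p_iter = iter(ps)
next_p = next(p_iter, None)
# check sum_{p <= x} 1/p <= loglog x + B + 1/(log x)^2 at x = 10, ..., 10000
for x in range(10, 10001):
    while next_p is not None and next_p <= x:
        running += 1 / next_p
        next_p = next(p_iter, None)
    assert running <= math.log(math.log(x)) + B + 1 / math.log(x)**2, x
```

The margin is smallest at prime values of $x$ (about $10^{-3}$ at $x=19$), which reflects how sharp the constant in the reference inequality is.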
Thus, since $\prod_{i=1}^{r}\big(1+b_i\big)\le \exp\big(\sum_{i=1}^{r}b_i\big)$ and $\sum_{i=1}^{r}b_i\le \log\log r+C$, \begin{eqnarray}\label{1plusbi} \prod_{i=1}^{r}\big(1+b_i\big)&\le & C\, \log r.\end{eqnarray} Now estimate \eqref{1plusbi} implies that \begin{eqnarray}\label{estsumpi2} c_0c \sum_{d=0}^{r-2}\b_{r-d}\Big(1+ \sum_{1\le i_1<\ldots <i_d<r} b_{r-i_1}\ldots b_{r-i_d}\Big){\mathbb P}i_{r-d-1}&\le & c_0c\, C \log r \cr &\le & C\, \frac{ \log p_r}{p_r} \log r.\end{eqnarray} We thus deduce from \eqref{somme1} and \eqref{estsumpi2} that \begin{eqnarray}\label{estS2} {\mathbb P}hi_2(r,n)&\le & C\,\frac{\log p_r}{p_r}\big(1+\log\log\log r\big)\log r + C\, \frac{ \log p_r}{p_r} \log r \cr &\le & C\,\frac{\log p_r}{p_r}(\log r)(\log\log\log r) \, .\end{eqnarray} As a result, by taking into account the observation made at the beginning of section \ref{s4}, we obtain \begin{equation}\label{estS2a} {\mathbb P}hi_2(n) \le C\, (\log\log\log r)(\log r)\sum_{i=1}^r\frac{ \log p_i}{p_i}\ =\ C\, (\log\log\log r)(\log r)w(n) \, .\end{equation} \vskip 3 pt By combining \eqref{estS2a} with the upper estimate for ${\mathbb P}hi_1(n)$ established in Lemma \ref{phi1maj} and using inequality \eqref{convexdec}, we arrive at \begin{equation}\label{convexdec1a} {\mathbb P}si(n)\le \Big(\prod_{j =1 }^r \frac{1}{1-p_j^{ -1}} \Big)\sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} + C\, (\log\log\log r)(\log r)w(n) ,\end{equation} recalling that $p_j\ge 3$ by assumption \eqref{min}. \section{\bf Proof of Theorem \ref{t1}.}\label{s5} First we prove inequality \eqref{convexdec}. We recall the convention $0\log 0=0$. Inequality \eqref{convexdec} is an immediate consequence of the following convexity lemma. \begin{lemma}\label{lconvexe} For any integers $\m_i\ge 0$, $p_j\ge 2$, we have \begin{eqnarray*} \sum_{i=1}^{r}\big(\m_i\log p_i\big) \log \Big(\sum_{i=1}^{r} \m_i \log p_i\Big)&\le & \sum_{i=1}^{r} \m_i\big(\log p_i\big)\big(\log\log p_i\big) \cr & &\qq +\sum_{i=1}^{r} \m_i\big(\log p_i\big)\log \Big(\sum_{i=1}^{r} \m_i \Big) .
\end{eqnarray*} \end{lemma} \begin{proof} We may restrict to the case $\sum_{i=1}^{r} \m_i\ge 1$, since otherwise the inequality is trivial. Let $M=\sum_{i=1}^{r} \m_i$ and write that \begin{eqnarray*} \sum_{i=1}^{r}\m_i\big(\log p_i\big) \log \Big(\sum_{i=1}^{r} \m_i \log p_i\Big) &= & M\bigg\{ \sum_{i=1}^{r}\frac{\m_i}{M}\big(\log p_i\big) \log \Big\{\sum_{i=1}^{r} \frac{\m_i}{M} \log p_i\Big\} \cr & & \quad +\sum_{i=1}^{r}\frac{\m_i}{M}\big(\log p_i\big)(\log M) \bigg\} . \end{eqnarray*} By using convexity of $\psi(x)=x\log x$ on ${\mathbb R}_+$, we get $$ \sum_{i=1}^{r}\frac{\m_i}{\sum_{i=1}^{r}\m_i}\big(\log p_i\big) \log \Big\{\sum_{i=1}^{r} \frac{\m_i}{\sum_{i=1}^{r}\m_i} \log p_i\Big\}\le \sum_{i=1}^{r}\frac{\m_i}{\sum_{i=1}^{r}\m_i}\big(\log p_i\big)\big(\log \log p_i\big).$$ Thus \begin{eqnarray*} \sum_{i=1}^{r}\big(\m_i\log p_i\big) \log \Big(\sum_{i=1}^{r} \m_i \log p_i\Big) \le \sum_{i=1}^{r} \m_i\big(\log p_i\big)\big(\log\log p_i\big)+\sum_{i=1}^{r} \m_i\big(\log p_i\big)\log \Big(\sum_{i=1}^{r} \m_i \Big) . \end{eqnarray*} \end{proof} The odd case (i.e.\ when condition \eqref{min} is satisfied) is obtained by combining \eqref{estS2a} with Corollary \ref{ests1} and utilizing inequality \eqref{convexdec}. Since $r\le \log_2 n$, by taking into account the estimate of $w(n)$ given in \eqref{wdavenport}, we get \begin{eqnarray}\label{convexdec2} {\mathbb P}si(n)&\le& e^\g(1+o(1)) (\log\log n)^2(\log \log\log n) + C\, (\log\log\log\log n)(\log\log n)^2 \cr &= & e^\g(1+o(1)) (\log\log n)^2(\log \log\log n). \end{eqnarray} \vskip 5 pt Passing from the odd case to the general case is not easy; this step necessitates an extra analysis of some further properties of ${\mathbb P}si(n)$. \vskip 3 pt We first exclude the trivial case when $n$ is a pure power of $2$, since ${\mathbb P}si(2^k) \le C$ uniformly in $k$, for some finite constant $C$.
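The uniform boundedness of ${\mathbb P}si(2^k)$ invoked here is easily confirmed numerically; the following sketch (ours, illustrative only; the bound $1$ is an empirical observation, not a constant claimed in the text) evaluates ${\mathbb P}si(2^k)=\sum_{j=1}^{k}\frac{(j\log 2)\log (j\log 2)}{2^j}$, the divisor $d=1$ contributing $0$ by the convention $0\log 0=0$:

```python
import math

def psi_pow2(k):
    # Psi(2^k): sum over divisors d = 2^j, 1 <= j <= k, of (log d)(loglog d)/d;
    # the j = 1 term is negative since loglog 2 < 0
    return sum(
        (j * math.log(2)) * math.log(j * math.log(2)) / 2**j
        for j in range(1, k + 1)
    )

values = [psi_pow2(k) for k in range(1, 400)]
assert all(v < 1 for v in values)            # uniformly bounded
assert abs(values[-1] - values[-2]) < 1e-60  # the series converges very fast
```

The limit is about $0.73$, so the constant $C$ above can be taken absolute and small.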
\vskip 3 pt Now if $2$ divides $n$, writing $n=2^vm$, $2 \not| m$, we have \begin{eqnarray*} {\mathbb P}si(n)&=&\sum_{d|n} \frac{(\log d )(\log\log d)}{d}\ =\ \sum_{k=0}^v\sum_{\d| m}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d}. \end{eqnarray*} As the function $x\mapsto \frac{(\log x )(\log\log x)}{x}$ decreases on $[x_0,\infty)$ for some positive real $x_0$, we can write \begin{eqnarray*} & & \sum_{k=0}^v\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} \cr &\le &\sum_{k=0}^{k_0-1}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} + \sum_{k=k_0+1}^v\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} \cr &\le &\sum_{k=0}^{k_0-1}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} + \int_{2^{k_0}\d}^{\infty} \frac{(\log u )(\log\log u)}{u^2}\dd u, \end{eqnarray*} where $k_0$ depends on $x_0$ only. Moreover $$\Big(\frac{(\log u )(\log\log u)}{u}\Big)'\ge - \frac{(\log u )(\log\log u)}{u^2}.$$ Thus \begin{eqnarray*} & &\sum_{k=0}^v\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} \cr &\le &\sum_{k=0}^{k_0-1}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} + \frac{(\log (2^{k_0}\d) )(\log\log (2^{k_0}\d))}{2^{k_0}\d}, \end{eqnarray*} whence \begin{eqnarray}\label{psik_0}{\mathbb P}si(n) &\le & \sum_{k=0}^{k_0}\sum_{\d|m} \frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d}. \end{eqnarray} Let $m=p_1^{b_1}\ldots p_{\m}^{b_{\m}}$.
We have by \eqref{convexdec1a} \begin{eqnarray*} {\mathbb P}si(m)&\le& \Big(\prod_{j =1 }^{\m} \frac{1}{1-p_j^{ -1}} \Big)\sum_{i=1}^{\m} \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} + C\, (\log\log\log \m)(\log \m)w(m) \cr &\le& \Big(\prod_{j =2 }^{\m+1} \frac{1}{1-p(j)^{ -1}} \Big)\sum_{i=1}^\m \frac{(\log p(i))\big(\log\log p(i)\big)}{p(i)-1} + C\, (\log\log\log \m)(\log \m)w(m) \cr &=& \frac12\, \Big(\prod_{j =1 }^{\m+1} \frac{1}{1-p(j)^{ -1}} \Big)\sum_{i=1}^\m \frac{(\log p(i))\big(\log\log p(i)\big)}{p(i)-1} + C\, (\log\log\log \m)(\log \m)w(m) \cr &\le & \frac{ e^{\g}}2\, \big( \log \m + \mathcal O(1) \big)\sum_{i=1}^\m \frac{(\log p(i))\big(\log\log p(i)\big)}{p(i)-1} + C\, (\log\log\log \m)(\log \m)w(m),\end{eqnarray*} by using Mertens' estimate \eqref{prod} and since $p(\m)\sim \m\log \m$. Furthermore, by using estimate \eqref{phi1.sumr} and since $2^\m\le m$, we get \begin{eqnarray}\label{psi(m)est} {\mathbb P}si(m) &\le & \frac{ e^{\g}}2\, \big( \log \m + \mathcal O(1) \big)(1+\e)(\log \m)(\log\log \m) + C\, (\log\log\log \m)(\log \m)w(m) \cr &\le & \frac{ e^{\g}}2\, \big( \log \frac{\log m}{\log2} + \mathcal O(1) \big)(1+\e)(\log \frac{\log m}{\log2})(\log\log \frac{\log m}{\log2}) \cr & & + C\, (\log\log\log \frac{\log m}{\log2})(\log \frac{\log m}{\log2})(1+o(1))\log\log m \cr &\le & \frac{ e^{\g}}2\,(1+2\e) (\log \log m)^2(\log\log\log m), \end{eqnarray} for $m$ large. Now let $\psi(2^km)=\sum_{\d|m} \frac{(\log (2^k\d) )(\log\log (2^k\d))}{\d}$, $0\le k\le k_0$. If $n$ is not a pure power of $2$, we may assume that its odd component $m$ tends to infinity with $n$ (otherwise ${\mathbb P}si(n)$ remains bounded). Thus with \eqref{psik_0}, \begin{eqnarray}\label{estevencase} \frac{{\mathbb P}si(n)}{(\log \log n)^2(\log\log\log n)} &\le & \sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m} \frac{\frac{(\log (2^k\d) )(\log\log (2^k\d))}{\d}}{ (\log \log m)^2(\log\log\log m)} .
\end{eqnarray} But \begin{eqnarray}\label{estevencasea}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{\d}&=&\frac{(k(\log 2) )(\log\log (2^k\d))+ (\log \d)(\log\log (2^k\d)) }{\d} \cr &\le &k_0(\log 2) \frac{\log\big(k_0(\log 2)+\log\d\big)}{\d}+ \frac{(\log \d)(\log\log (2^{k_0}\d)) }{\d}. \end{eqnarray} Now we have the inequality: $\log \log (a+x)\le \log (b\log x)$ where $b\ge (a+e)$ and $a\ge1$, which is valid for $x\ge e$. Thus \begin{eqnarray}\label{estevencaseb}\log\big(k_0(\log 2)+\log\d\big)\le \log (k_0\log 2+ e)+\log \log \d. \end{eqnarray} Consequently \begin{eqnarray}\label{estevencase1} & & \sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m} \frac{k_0(\log 2) \frac{\log (k_0(\log 2)+\log\d )}{\d}}{(\log \log m)^2(\log\log\log m)} \cr &\le &\sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m} \frac{k_0(\log 2)\frac{\log (k_0\log 2+ e)}{\d}}{(\log \log m)^2(\log\log\log m)} \cr & & \quad + \sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m} \frac{k_0(\log 2) \frac{\log\log\d}{\d}}{(\log \log m)^2(\log\log\log m)} \cr&\le &2k_0(\log 2)\big(\log (k_0\log 2+ e)\big) \frac{\s_{-1}(m)}{(\log \log m)^2(\log\log\log m)} \cr & & \quad + \frac{2k_0(\log 2)}{(\log \log m)^2(\log\log\log m)}\sum_{\d|m} \frac{\log\log\d}{\d} \cr&\le &C(k_0)\Big\{\frac{1}{(\log \log m)(\log\log\log m)} + \frac{\s_{-1}(m)}{(\log \log m)(\log\log\log m)}\Big\} \cr&\le &\frac{C(k_0)}{\log\log\log m} \quad \to \ 0\quad \hbox{ as $m$ tends to infinity}.
\end{eqnarray} Further, \begin{eqnarray} \label{estevencase2}\sum_{k=0}^{k_0} \frac{1}{2^k}\frac{\sum_{\d|m}\frac{(\log \d)(\log\log (2^{k_0}\d)) }{\d}}{(\log \log m)^2(\log\log\log m)} & \le & \sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m}\frac{(\log \d)( \log (k_0\log 2+ e)+\log \log \d) }{\d(\log \log m)^2(\log\log\log m)} \cr &\le &\frac{\log (k_0\log 2+ e)}{(\log \log m)^2(\log\log\log m)}\,\sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m}\frac{(\log \d) }{\d} \cr & &\quad+2\,\frac{{\mathbb P}si(m)}{(\log \log m)^2(\log\log\log m)} \cr &\le &\frac{2\log (k_0\log 2+ e)\,\s_{-1}(m)}{(\log \log m)(\log\log\log m)} \cr & &\quad+2\,\frac{{\mathbb P}si(m)}{(\log \log m)^2(\log\log\log m)} \cr &\le &\frac{C(k_0)}{\log\log\log m} +2\,\frac{ e^{\g}}2 \,(1+2\e)\,\frac{ (\log \log m)^2(\log\log\log m)}{(\log \log m)^2(\log\log\log m)} \cr &\le &\frac{C(k_0)}{\log\log\log m} + e^{\g} \,(1+2\e)\, , \end{eqnarray} for $m$ large, where we used estimate \eqref{psi(m)est}. \vskip 3pt Plugging estimates \eqref{estevencase1} and \eqref{estevencase2} into \eqref{estevencase} finally leads, in view of \eqref{estevencasea}, to \begin{eqnarray}\label{estevencasef} \frac{{\mathbb P}si(n)}{(\log \log n)^2(\log\log\log n)} &\le& \frac{C}{\log\log\log m} + e^{\g} \,(1+2\e)\, \end{eqnarray} for $m$ large, where $C$ depends on $k_0$ only. As $\e$ can be arbitrarily small, we finally obtain \begin{eqnarray}\label{evencasef} \limsup_{n\to \infty}\frac{{\mathbb P}si(n)}{(\log \log n)^2(\log\log\log n)} &\le& e^{\g}. \end{eqnarray} This establishes Theorem \ref{t1}. \section{\bf Complementary results.}\label{s2} In this section we prove complementary estimates for ${\mathbb P}hi_1$, ${\mathbb P}hi_2$ and ${\mathbb P}si$, notably estimates \eqref{phipsi} and \eqref{Phi1est}. \subsection{Upper estimates.} \begin{lemma}\label{phi1maj} We have the following estimate: \begin{eqnarray*} {\mathbb P}hi_1(n)&\le&\Big(\prod_{j =1 }^r \frac{1}{1-p_j^{ -1}} \Big)\sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1}.
\end{eqnarray*} \end{lemma} \begin{proof}[\cmit Proof] \rm We have $${\mathbb P}hi_1(n)=\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r}\ \frac{ \m_1(\log p_1)(\log\log p_1)+\ldots +\m_r(\log p_r)(\log\log p_r)}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}} $$ The $i$-th term of the numerator yields the sum $$\underbrace{\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r}}_{\substack{\hbox{\small the sum relative}\\ \hbox{\small to $ \m_i$ is excluded}}} \underbrace{\frac{1}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}}}_{\substack{\hbox{\small $ p_i^{\m_i}$}\\ \hbox{\small is excluded}}}\ \Big(\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)\big(\log\log p_i\big)}{p_i^{\m_i}}\Big). $$ Consequently, \begin{eqnarray}\label{Phi1formula} {\mathbb P}hi_1(n) &=& \sum_{i=1}^r\underbrace{\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r}}_{\substack{\hbox{\small the sum relative}\\ \hbox{\small to $ \m_i$ is excluded}}} \underbrace{\frac{1}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}}}_{\substack{\hbox{\small $ p_i^{\m_i}$}\\ \hbox{\small is excluded}}}\ \Big(\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)\big(\log\log p_i\big)}{p_i^{\m_i}}\Big)\cr &=& \sum_{i=1}^r\prod_{\substack{j =1\\ j \neq i}}^r\Big(\frac{1-p_j^{-\a_j -1}}{1-p_j^{ -1}} \Big)\Big[\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)\big(\log\log p_i\big)}{p_i^{\m_i}}\Big] . \end{eqnarray} Now as $$\sum_{\m=0}^{\a_i} \frac{\m}{p_i^\m}\le\sum_{j=0}^{\infty} \frac{j}{p_i^j} =\frac{1}{(p_i-1)(1-p_i^{-1})},$$ we obtain \begin{eqnarray*} {\mathbb P}hi_1(n)&\le &\sum_{i=1}^r\prod_{\substack{j =1\\ j \neq i}}^r\Big(\frac{1-p_j^{-\a_j -1}}{1-p_j^{ -1}}\Big)\,.\,\frac{(\log p_i)\big(\log\log p_i\big)}{(p_i-1)(1-p_i^{-1})} \cr &\le &\Big(\prod_{j =1 }^r \frac{1}{1-p_j^{ -1}} \Big)\sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1}. 
\end{eqnarray*} \end{proof} \begin{corollary}\label{ests1} We have the following estimate: \begin{eqnarray*} \limsup_{n\to \infty}\frac{{\mathbb P}hi_1(n)}{(\log\log n)^2(\log \log\log n)} &\le & e^{\g}.\end{eqnarray*} \end{corollary} \begin{proof}[\cmit Proof]\rm Let $p(j)$ denote the $j$-th consecutive prime number, and recall (see \cite[(3.12)--(3.13)]{RS}) that \begin{eqnarray}\label{p(i)est} p(i) &\ge& \max(i \log i, 2), \qq\quad\ \ i\ge 1, \cr p(i)&\le& i(\log i + \log\log i ), \qq \ \! i\ge 6. \end{eqnarray} Let $\e>0$ and let $r_0\ge 4$ be an integer. If $r\le r_0$, then \begin{eqnarray}\label{phi1.sumr1} \sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} &\le &\d\,r_0, \qq \qq \d= \sup_{p\ge 3}\frac{(\log p)\big(\log\log p\big)}{p-1}<\infty\,. \end{eqnarray} If $r>r_0$, then \begin{eqnarray*} \sum_{i=r_0+1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} &\le & \Big(\max_{i>r_0}\frac{p(i)}{p(i)-1}\Big)\sum_{i=r_0+1}^r \frac{(\log p(i))\big(\log\log p(i)\big)}{p(i)} \cr &\le & \Big(\max_{i>r_0}\frac{p(i)}{p(i)-1}\Big)\sum_{i=r_0+1}^r \frac{(\log (i\log i))\big(\log\log (i\log i)\big)}{i\log i}. \end{eqnarray*} We choose $r_0=r_0(\e)$ so that $\log r_0 \ge 1/\e$ and the preceding expression is bounded from above by $$(1+\e)\sum_{i=r_0+1}^r \frac{\log\log i}{i}.$$ We thus have \begin{eqnarray}\label{phi1.sumr2} \sum_{i=r_0+1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} &\le &(1+\e)\int_{r_0}^r\frac{\log\log t}{t}\dd t \cr &\le & (1+\e)(\log r)(\log\log r). \end{eqnarray} Consequently, for some $r(\e)$, \begin{eqnarray}\label{phi1.sumr} \sum_{i= 1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} &\le & (1+\e)(\log r)(\log\log r), \qq r\ge r(\e).
\end{eqnarray} By using Mertens' estimate \begin{eqnarray}\label{prod}\prod_{p\le x}\Big(\frac{1}{1-\frac{1}{p}}\Big)=e^{\g}\log x + \mathcal O(1)\qq \quad x\ge 2, \end{eqnarray} we further have \begin{equation}\label{p(i)estappl} \prod_{\ell =1}^r \Big(\frac{1}{1-\frac{1}{p_\ell}} \Big)\,\le\, \prod_{\ell =1}^r \Big(\frac{1}{1-\frac{1}{p(\ell)}} \Big) \le \prod _{p\le r(\log r +\log\log r)}\Big(\frac{1}{1- \frac{1}{p}}\Big) \,\le \, e^{\g} (\log r) + C\, , \end{equation} if $r\ge 6$, and so for any $r\ge 1$, modifying $C$ if necessary. As $r=\o(n)$ and $2^{\o(n)}\le n$, we consequently have \begin{eqnarray*} {\mathbb P}hi_1(n) &\le & e^{\g}(1+ C\e)^2 (\log\log n)^2(\log \log\log n),\end{eqnarray*} if $r>r_0$. If $r\le r_0$, we have \begin{eqnarray*} {\mathbb P}hi_1(n)&\le & \d e^{\g}(1+\e) \big((\log r_0) + C\big):=C(\e).\end{eqnarray*} Whence, \begin{eqnarray*} {\mathbb P}hi_1(n) &\le & e^{\g}(1+ C\e)^2 (\log\log n)^2(\log \log\log n)+ C(\e).\end{eqnarray*} As $\e$ can be arbitrarily small, the result follows. \end{proof} The following lemma is nothing but the upper bound part of \eqref{EZ1}. We omit the proof. \begin{lemma} \label{tEZm} We have the following estimate: \begin{eqnarray*} \sum_{d|n} \frac{\log d}{d} &\le &\prod_{p|n}\Big(\frac{1}{1-p^{-1}}\Big) \ \sum_{p|n}\frac{\log p}{p-1}. \end{eqnarray*} Moreover, \begin{eqnarray*} \limsup_{n\to \infty}\ \frac{1}{ (\log\log n)(\log \o(n))}\sum_{d|n} \frac{\log d}{d} &\le & e^{\g}. \end{eqnarray*} \end{lemma} \subsection{Lower estimates.}\label{s3} We recall that the smallest prime divisor of an integer $n$ is denoted by $P^-(n)$. \begin{lemma}\label{phi1min} Let $n=p_1^{\a_1}\ldots p_r^{\a_r}$, $r\ge 1$, $\a_i\ge1$.
Then, \begin{eqnarray*} {\mathbb P}hi_1(n) &\ge & \Big(1-\frac{1}{P^-(n)}\Big)\prod_{j =1}^r\big(1+p_j^{-1} \big)\Big[ \sum_{i=1}^r\frac{ \big(\log p_i\big)\big(\log\log p_i\big)}{p_i}\Big].\end{eqnarray*} \end{lemma} \begin{proof}[\cmit Proof] By \eqref{Phi1formula}, \begin{eqnarray*} {\mathbb P}hi_1(n) &=&\sum_{i=1}^r\prod_{\substack{j =1\\ j \neq i}}^r\Big(\frac{1-p_j^{-\a_j -1}}{1-p_j^{ -1}} \Big)\Big[\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)\big(\log\log p_i\big)}{p_i^{\m_i}}\Big] \cr &\ge &\sum_{i=1}^r\prod_{\substack{j =1\\ j \neq i}}^r\Big(\frac{1-p_j^{-\a_j -1}}{1-p_j^{ -1}} \Big)\Big[ \frac{\big(\log p_i\big)\big(\log\log p_i\big)}{p_i}\Big] \cr &\ge & \prod_{j =1}^r\big(1+p_j^{-1} \big)\Big[ \sum_{i=1}^r\frac{(1-p_i^{ -1})\big(\log p_i\big)\big(\log\log p_i\big)}{p_i}\Big].\end{eqnarray*} Thus \begin{eqnarray*} {\mathbb P}hi_1(n) &\ge & \Big(1-\frac{1}{P^-(n)}\Big)\prod_{j =1}^r\big(1+p_j^{-1} \big)\Big[ \sum_{i=1}^r\frac{ \big(\log p_i\big)\big(\log\log p_i\big)}{p_i}\Big].\end{eqnarray*} \end{proof} We easily deduce from Lemma \ref{phi1maj} and Lemma \ref{phi1min} the following corollary. \begin{corollary}\label{phi1est} Let $n=p_1^{\a_1}\ldots p_r^{\a_r}$, $r\ge 1$, $\a_i\ge1$. Then, \begin{eqnarray*}\big(1-\frac{1}{P^-(n)}\big)\prod_{j =1}^r\big(1+p_j^{-1} \big) \ \le \frac{{\mathbb P}hi_1(n)}{\sum_{i=1}^r\frac{ (\log p_i)(\log\log p_i)}{p_i}}\ \le 2\, \prod_{j =1}^r\Big(\frac{1}{1-p_j^{ -1}} \Big).\end{eqnarray*} \end{corollary} \begin{proposition} \label{tEZ} We have the following estimates: \begin{eqnarray*}\hbox{$\rm a)$}& & \limsup_{n\to \infty}\ \frac{1}{ (\log\log n)^2} \sum_{d|n} \frac{(\log d)}{d }\ \ge \ e^{\g}, \cr \hbox{$\rm b)$} & &\limsup_{n\to \infty}\, \frac{{\mathbb P}si(n)}{(\log \log n)^2(\log\log\log n)}\,\ge \,e^\g, \cr \hbox{$\rm c)$} & &\limsup_{n\to \infty}\, \frac{{\mathbb P}hi_1(n)}{(\log \log n)^2(\log\log\log n)}\,\ge \,e^\g.
\end{eqnarray*} \end{proposition} \begin{proof}[\cmit Proof] \rm Case a) is Erd\H os--Zaremba's lower bound for the function ${\mathbb P}hi(n)$. Since it is used in the proof of b) and c), we provide a detailed proof for the sake of completeness. \vskip 3 pt a) Let $n_j=\prod_{p<e^j}p^j$. Recall that $p(i) \ge \max(i \log i, 2)$ if $i\ge 1$. Let $r(j)$ be the integer defined by the condition $p(r(j))< e^j< p(r(j)+1)$. By using \eqref{formule} and following Gronwall's proof \cite{Gr}, we have \begin{eqnarray*}& & \sum_{d|n_j} \frac{\log d}{d} \ =\ \sum_{i=1}^{r(j)}\prod_{\substack{ \ell=1\\ \ell\neq i}}^{r(j)}\Big(\frac{1-p(\ell)^{-j-1}}{1-p(\ell)^{-1}}\Big)\Big[\sum_{\m=0}^{j}\frac{\m\log p(i)}{p(i)^\m}\Big] \cr &\ge & \frac{1}{\zeta(j+1)}\prod_{\ell=1}^{r(j)}\Big(\frac{1}{1-p(\ell)^{-1}}\Big)\sum_{i=1}^{r(j)} (1-p(i)^{-1})\frac{\log p(i)}{p(i)}\Big[1+ \frac{1}{p(i)}+\ldots +\frac{1}{p(i)^{j-1}}\Big] \cr &= & \frac{1}{\zeta(j+1)}\prod_{\ell=1}^{r(j)}\Big(\frac{1}{1-p(\ell)^{-1}}\Big)\sum_{i=1}^{r(j)} \frac{\log p(i)}{p(i)}\big(1-p(i)^{-j}\big). \end{eqnarray*} Recall that $\vartheta(x)=\sum_{p\le x}\log p$ is Chebyshev's function and that $\vartheta(x)\ge(1-\e(x))x$, $x\ge 2$, where $\e(x)\to 0$ as $x$ tends to infinity. Thus $\log n_j = j\vartheta(e^j)= je^j(1+ o(1))$, so that $\log\log n_j = j(1+ o(1))$. \vskip 2 pt On the one hand, by \eqref{prod}, \begin{equation}\label{prodnj}\prod_{\ell=1}^{r(j)}\big(1-p(\ell)^{-1}\big)= \prod_{p<e^j}\big(1-p^{-1}\big)=\frac{e^{-\g}}{j}\big(1+ \mathcal O(\frac{1}{j})\big). \end{equation} On the other hand, by Mertens' estimate, \begin{equation}\label{sumnj}\sum_{p<e^j} \frac{\log p}{p}=j+\mathcal O(1)\ge (1+o(1)) \log \log n_j . \end{equation} Thus \begin{eqnarray} \label{lbeta1} \sum_{d|n_j} \frac{\log d}{d} &\ge & (1+o(1))e^{\g}(\log\log n_j)^{2} \qq\qq j\to \infty\,, \end{eqnarray} since $\zeta(j+1)\to 1$ as $j\to \infty$. \vskip 3 pt b) Let $\s'_{-1}(n)= \sum_{d|n\,,\, d\ge 3} 1/d$.
Let also $X$ be a discrete random variable taking the value $\log d$, for each divisor $d$ of $n$ with $d\ge 3$, with probability $1/(d\s'_{-1}(n))$. By using the convexity of the function $x\log x$ on $[1,\infty)$, we get \begin{eqnarray*} {\mathbb E \,} X\log X&=& \sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)(\log \log d)}{d\s'_{-1}(n) }\ \ge\ ({\mathbb E \,} X)\log\,({\mathbb E \,} X) \cr &= & \Big(\sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)}{d\s'_{-1}(n) }\Big)\log \Big(\sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)}{d\s'_{-1}(n) }\Big) \cr &\ge & \Big(\sum_{\substack{d|n\\ d\ge 1}} \frac{(\log d)}{d\s'_{-1}(n) }-C\Big)\Big(\log \Big(\sum_{\substack{d|n\\ d\ge 1}} \frac{(\log d)}{d }-C\Big)-\log \s_{-1}(n)\Big) . \end{eqnarray*} Whence \begin{eqnarray*} \sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)(\log \log d)}{d} &\ge & \Big(\sum_{\substack{d|n\\ d\ge 1}} \frac{(\log d)}{d }-C\s_{-1}(n)\Big)\Big(\log \Big(\sum_{\substack{d|n\\ d\ge 1}} \frac{(\log d)}{d }-C\Big) \cr & & - \log \s_{-1}(n)\Big). \end{eqnarray*} Letting $n=n_j$, we deduce from \eqref{lbeta1} that \begin{eqnarray*} \Psi(n) &\ge &\sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)(\log \log d)}{d} \ \ge \ \Big((1+o(1))e^{\g}(\log\log n_j)^{2} -C\log\log n_j\Big) \cr & & \qq \times \Big(\log \big\{(1+o(1))e^{\g}(\log\log n_j)^2-C\big\} - \log \big(C \log\log n_j\big)\Big) \cr & \ge &(1+o(1))e^{\g}(\log\log n_j)^2\log\log\log n_j. \end{eqnarray*} Consequently, \begin{eqnarray*} \limsup_{n\to \infty}\frac{\Psi(n)}{(\log\log n)^2\log\log\log n} & \ge &e^{\g}.
\end{eqnarray*} \vskip 3 pt c) We have \begin{eqnarray*} \Phi_1(n_j) &=&\sum_{i=1}^{r(j)}\prod_{\substack{\ell=1\\ \ell\neq i}}^{r(j)}\Big(\frac{1-p(\ell)^{-j-1}}{1-p(\ell)^{-1}}\Big)\Big[\sum_{\m=0}^{j}\frac{\m (\log p(i))(\log\log p(i))}{p(i)^\m}\Big] \cr &\ge & \frac{1}{\zeta(j+1)}\prod_{\ell=1}^{r(j)}\Big(\frac{1}{1-p(\ell)^{-1}}\Big)\cr & & \quad\times\ \sum_{i=1}^{r(j)} (1-p(i)^{-1})\frac{(\log p(i))(\log\log p(i))}{p(i)}\Big[1+ \frac{1}{p(i)}+\ldots +\frac{1}{p(i)^{j-1}}\Big] \cr &\ge & \frac{1}{\zeta(j+1)}(e^{\g}j)\big(1+ \mathcal O(\frac{1}{j})\big)\sum_{i=1}^{r(j)} \frac{(\log p(i))(\log\log p(i))}{p(i)}\big(1-p(i)^{-j}\big), \end{eqnarray*} by \eqref{prodnj}. Let $0<\e <1$. By using \eqref{sumnj}, we also have, for all $j$ large enough, \begin{eqnarray*}\sum_{p<e^j} \frac{(\log p)(\log\log p)}{p} &\ge & \sum_{e^{\e j}\le p<e^j} \frac{(\log p)(\log\log p)}{p} \cr &\ge & (1+o(1))\big(\log(\e j)\big)\sum_{e^{\e j}\le p<e^j} \frac{(\log p)}{p} \cr &\ge & (1+o(1))(1-\e)j\big(\log(\e j)\big)\big(1+ \mathcal O({1}/{j})\big) \cr &\ge & (1+o(1))(1-\e)(\log \log n_j)\big(\log (\e \log \log n_j)\big).\end{eqnarray*} As $\log (\e \log \log n_j) \sim \log\log \log n_j$, $j\to \infty$, we have \begin{eqnarray*} \limsup_{j\to \infty}\frac{\Phi_1(n_j)}{(\log \log n_j)^2(\log\log \log n_j)} &\ge & e^{\g}(1-\e). \end{eqnarray*} As $\e$ can be arbitrarily small, this proves (c).
\end{proof} \rm \begin{lemma}\label{Phi_2(r,n)min} We have the following estimate: \begin{eqnarray*} \Phi_2(n) &\ge & (\log 2)\,\Big(\frac{P^-(n)}{P^-(n)+1}\Big) \,\Big(\prod_{i=1}^{r}\big(1+\frac{1}{ p_i}\big)\Big)\sum_{j=1}^r\big(\frac{ \log p_j}{p_j}\big).\end{eqnarray*} \end{lemma} \begin{proof} We observe from \eqref{Phi_2(r,n)} that \begin{eqnarray*} \Phi_2(r,n)&\ge & \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}}\frac{ \log p_r}{p_r}\log \Big[\sum_{i=1}^{r-1} \m_i + 1\Big].\end{eqnarray*} A term of the above multiple sum is non-zero only if $\max_{i=1}^{r-1} \m_i \ge 1$, in which case $\log\, [\,\sum_{i=1}^{r-1} \m_i + 1]\ge \log 2$. We thus have \begin{eqnarray} \label{Phi2(r,n)min} \Phi_2(r,n)&\ge & (\log 2)\big(\frac{ \log p_r}{p_r}\big)\, \underbrace{ \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}}}_{\hbox{$\max_{i=1}^{r-1} \m_i \ge 1$}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}} \cr &= & (\log 2)\big(\frac{ \log p_r}{p_r}\big)\,\prod_{i=1}^{r-1}\Big(1+\sum_{\m_i=0}^{\a_i}\frac{1}{ p_i^{\m_i}}\Big) \cr &\ge & (\log 2)\big(\frac{ \log p_r}{p_r}\big)\,\prod_{i=1}^{r-1}\Big(1+\frac{1}{ p_i}\Big).\end{eqnarray} Consequently, \begin{eqnarray} \label{Phi2(rn)min} \Phi_2(n) &\ge & (\log 2)\,\sum_{j=1}^r\big(\frac{ \log p_j}{p_j}\big)\,\prod_{\substack{i=1\\ i\neq j}}^{r}\Big(1+\frac{1}{ p_i}\Big) \cr &\ge & (\log 2)\,\Big(\frac{P^-(n)}{P^-(n)+1}\Big) \,\Big(\prod_{i=1}^{r}\big(1+\frac{1}{ p_i}\big)\Big)\sum_{j=1}^r\big(\frac{ \log p_j}{p_j}\big).\end{eqnarray} \end{proof} \section{\bf An application.} \label{s6} We deduce from Theorem \ref{t1} the following result. \begin{theorem}\label{t3} Let $\eta>1$.
There exists a constant $C(\eta)$ depending on $\eta$ only, such that for any finite set $K$ of distinct integers, and any sequence of reals $\{c_k, k\in K\}$, we have \begin{eqnarray}\label{approx}\sum_{k,\ell\in K} c_kc_\ell \frac{(k,\ell)^{2}}{k\ell}&\le & C(\eta) \sum_{\nu\in K} c_\nu^2 \,\,(\log\log\log \nu)^\eta\,\Psi (\nu) . \end{eqnarray} Further, \begin{eqnarray} \label{approx1} \sum_{k,\ell \in K} c_k c_\ell\frac{(k,\ell)^{2}}{k\ell}&\le& C(\eta)\sum_{\nu \in K} c_\nu^2 (\log\log \nu)^2 (\log\log\log \nu)^{1+\eta}. \end{eqnarray} \end{theorem} This considerably improves Theorem 2.5 in \cite{W1}, where a specific question related to G\'al's inequality was investigated; see \cite{W1} for details. The interest of inequality \eqref{approx} is naturally that the bound obtained depends tightly on the arithmetical structure of the support $K$ of the coefficient sequence, while being close to the optimal order of magnitude $(\log\log \nu)^2$. \vskip 2 pt Theorem \ref{t3} is obtained as a combination of Theorem \ref{t1} with a slightly more general and sharper formulation of Theorem 2.5 in \cite{W1}. \begin{theorem}\label{t5} Let $\eta >1$. Then, for any real $s$ such that $0<s\le 1$ and any sequence of reals $\{c_k, k\in K\}$, we have \begin{eqnarray}\label{t1m}\sum_{k,\ell\in K} c_kc_\ell \frac{(k,\ell)^{2s}}{k^s\ell^s}&\le & C(\eta) \sum_{\nu\in K} c_\nu^2(\log\log\log \nu)^\eta \sum_{\d|\nu} \frac{(\log \d )(\log\log \d)}{\d^{2s-1}}. \end{eqnarray} The constant $C(\eta)$ depends on $\eta$ only. \end{theorem} \begin{remark}\label{rems}\rm From Theorem 2.5-(i) in \cite{W1}, it follows that for every $s>1/2$, \begin{eqnarray}\label{i1} \sum_{k,\ell\in K} c_k c_\ell\frac{(k,\ell)^{2s}}{k^s\ell^s} &\le&\zeta(2s) \inf_{0< \e\le 2s-1} \frac{1+\e}{\e } \, \sum_{\nu \in K} c_\nu^2 \, \s_{ 1+\e-2s}(\nu) , \end{eqnarray} $\s_{u}(\nu)$ being the sum of the $u$-th powers of the divisors of $\nu$, for any real $u$.
As \begin{eqnarray*} \sum_{\d|\nu} \frac{(\log \d )(\log\log \d)}{\d^{2s-1}}\ll \sum_{\d|\nu} \frac{1}{\d^{2s-1-\e}} =\s_{ 1+\e-2s}(\nu) , \end{eqnarray*} estimate \eqref{t1m} is much better than the one given in \eqref{i1}. \end{remark} \begin{proof}[\cmit Proof of Theorem \ref{t5}] \rm The proof is similar to, and shorter than, that of Theorem 2.5 in \cite{W1}. Let $\e>0$ and let $J_\e$ denote the generalized Euler function. We recall that \begin{eqnarray}\label{jordan} J_\e(n)= \sum_{d|n} d^\e \m(\frac{n}{d}). \end{eqnarray} We extend the sequence $\{c_k, k\in K\}$ to all ${\mathbb N}$ by putting $c_k= 0$ if $k\notin K$. By M\"obius' formula, we have $n^\e =\sum_{d|n} J_\e (d)$. By using the Cauchy-Schwarz inequality, we successively obtain \begin{eqnarray} \label{HS1a} L&:=& \sum_{k,\ell=1}^n c_k c_\ell\frac{(k,\ell)^{2s}}{k^s\ell^s}\ =\ \sum_{k,\ell \in K} \frac{c_k c_\ell }{k^s\ell^s}\Big\{\sum_{d\in F(K)} J_{2s} (d) {\bf 1}_{d|k} {\bf 1}_{d|\ell}\Big\} \cr \hbox{($k=ud$, $\ell=vd$)} &\le& \sum_{u,v\in F(K)} \frac{1}{u^sv^s} \Big(\sum_{d\in F(K)} \frac{J_{2s} (d)}{d^{2s}}c_{ud}c_{vd} \Big) \cr &\le & \sum_{u,v\in F(K)} \frac{1}{u^sv^s} \Big(\sum_{d\in F(K)} \frac{J_{2s} (d)}{d^{2s}}c_{ud}^2 \Big)^{1/2}\Big(\sum_{d\in F(K)} \frac{J_{2s} (d)}{d^{2s}} c_{vd}^2 \Big)^{1/2} \cr &=& \Big[\sum_{u \in F(K)} \frac{1}{u^s } \Big(\sum_{d\in F(K)} \frac{J_{2s} (d)}{d^{2s}}c_{ud}^2 \Big)^{1/2}\Big]^2 \cr &\le& \Big(\sum_{u \in F(K)} \frac{1}{u^s\psi(u) } \Big)\Big(\sum_{\nu \in K} \frac{ c_\nu^2}{ \nu^{2s} } \sum_{\substack{u \in F(K)\\ u|\nu} } J_{2s}\big( \frac{\nu}{u }\big) u^{ s} \psi(u) \Big) , \end{eqnarray} where $\psi (u)>0$ is a non-decreasing function on ${\mathbb R}^+$.
We then choose $$\psi(u) = u^{-s} \psi_1(u)\sum_{t|u} t (\log t)(\log\log t),\qq \qq \psi_1(u)= (\log\log\log u)^\eta.$$ Hence, \begin{eqnarray*} L &\le& \Big(\sum_{u \in F(K)} \frac{1}{\psi_1(u)\sum_{t|u} t (\log t)(\log\log t) } \Big)\Big(\sum_{\nu \in K} \frac{ c_\nu^2}{ \nu^{2s} } \sum_{\substack{u \in F(K)\\ u|\nu} } J_{2s}\big( \frac{\nu}{u }\big) \psi_1(u)\sum_{t|u} t (\log t)(\log\log t)\Big) \cr &\le& \Big(\sum_{u \in F(K)} \frac{1}{\psi_1(u)\sum_{t|u} t (\log t)(\log\log t) } \Big)\Big(\sum_{\nu \in K} \frac{ c_\nu^2 \psi_1(\nu) }{ \nu^{2s} } \sum_{\substack{u \in F(K)\\ u|\nu}} J_{2s}\big( \frac{\nu}{u }\big)\sum_{t|u} t (\log t)(\log\log t) \Big) . \end{eqnarray*} As $\nu \in K$, we can write \begin{eqnarray}\label{f} \sum_{\substack{u \in F(K)\\ u|\nu }} J_{2s}\big( \frac{\nu}{u }\big) \sum_{t|u} t (\log t)(\log\log t)&=& \sum_{u|\nu}\sum_{d|\frac {\nu}u}d^{2s}\m \Big(\frac {\nu}{ud}\Big)\sum_{t|u} t (\log t)(\log\log t) \cr & = &\sum_{d|\nu}d^{2s}\sum_{u|\frac {\nu}d}\m \Big(\frac {\nu}{ud}\Big)\sum_{t|u} t (\log t)(\log\log t) \cr \hbox{(writing $u=tx$)} &=& \sum_{d|\nu}d^{2s}\sum_{t|\frac {\nu}d}t (\log t)(\log\log t)\sum_{x|\frac {\nu}{dt}}\m \Big(\frac {\nu}{dtx}\Big) \cr \hbox{(writing $\frac {\nu}{dt}=x\theta$)}& =& \sum_{d|\nu}d^{2s}\sum_{t|\frac {\nu}d}t (\log t)(\log\log t)\sum_{\theta|\frac {\nu}{dt}}\m (\theta) \cr&=& \sum_{d|\nu}d^{2s}(\frac {\nu}d) (\log (\frac {\nu}d))(\log\log (\frac {\nu}d)), \end{eqnarray} where in the last equality we used the fact that $\sum_{d|n}\m(d)$ equals $1$ or $0$ according as $n=1$ or $n>1$.
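As a quick independent check (an illustrative aside, not part of the proof): the interchange of summations in \eqref{f} is pure M\"obius inversion and therefore holds for an arbitrary summand in place of $t(\log t)(\log\log t)$. The following Python sketch verifies it numerically for the integer case $2s=2$, with the convention that the summand vanishes for $t\le 2$ so that $\log\log t$ is defined; all function names are ours.

```python
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    # Moebius function via trial factorisation.
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def jordan(k, m):
    # J_k(m) = sum_{d|m} d^k mu(m/d)   (generalized Euler function of the paper)
    return sum(d ** k * mobius(m // d) for d in divisors(m))

def g(t):
    # summand t (log t)(log log t); set to 0 for t <= 2 so log log is defined
    return t * math.log(t) * math.log(math.log(t)) if t >= 3 else 0.0

def lhs(nu, two_s):
    # sum over u | nu of J_{2s}(nu/u) * sum_{t|u} g(t)
    return sum(jordan(two_s, nu // u) * sum(g(t) for t in divisors(u))
               for u in divisors(nu))

def rhs(nu, two_s):
    # collapsed form: sum over d | nu of d^{2s} g(nu/d)
    return sum(d ** two_s * g(nu // d) for d in divisors(nu))

checks = [math.isclose(lhs(nu, 2), rhs(nu, 2), rel_tol=1e-12, abs_tol=1e-9)
          for nu in range(1, 200)]
```

The agreement for every $\nu < 200$ reflects the fact that the inner M\"obius sum collapses exactly, independently of the chosen summand.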
Consequently, \begin{eqnarray*} L &\le& \Big(\sum_{u \in F(K)} \frac{1}{\psi_1(u)\sum_{t|u} t (\log t)(\log\log t) } \Big)\Big(\sum_{\nu \in K} \frac{ c_\nu^2 \psi_1(\nu) }{ \nu^{2s} } \sum_{d|\nu}d^{2s}(\frac {\nu}d) (\log (\frac {\nu}d))(\log\log (\frac {\nu}d)) \Big) \cr &=& \Big(\sum_{u \in F(K)} \frac{1}{\psi_1(u)\sum_{t|u} t (\log t)(\log\log t)} \Big)\Big(\sum_{\nu \in K} c_\nu^2 \psi_1(\nu) \sum_{\d|\nu} \frac{1}{\d^{2s}}\,\d (\log \d)(\log\log \d) \Big) . \end{eqnarray*} \vskip 7 pt From the trivial estimate $\sum_{t|u}t (\log t)(\log\log t)\ge u (\log u)(\log\log u)$, it follows that \begin{eqnarray} \label{s} \sum_{k,\ell=1}^n c_k c_\ell\frac{(k,\ell)^{2s}}{k^s\ell^s} &\le& \Big(\sum_{u \ge 1 } \frac{1}{u (\log u)(\log\log u) (\log\log\log u)^\eta } \Big) \cr & &\times \Big(\sum_{\nu \in K} c_\nu^2 (\log\log\log \nu)^\eta\sum_{\d|\nu} \frac{ (\log \d)(\log\log \d) }{\d^{2s-1}} \Big) \cr & = & C(\eta)\ \sum_{\nu \in K} c_\nu^2 (\log\log\log \nu)^\eta\sum_{\d|\nu} \frac{ (\log \d)(\log\log \d) }{\d^{2s-1}} . \end{eqnarray} \end{proof} \begin{proof}[\cmit Proof of Theorem \ref{t3}] \rm Letting $s=1$ in Theorem \ref{t5}, we get \eqref{approx}; next, using Theorem \ref{t1}, we obtain \begin{eqnarray} \label{1} \sum_{k,\ell=1}^n c_k c_\ell\frac{(k,\ell)^{2}}{k\ell} &\le& C(\eta)\, \sum_{\nu \in K} c_\nu^2 (\log \log \nu)^2 (\log\log\log \nu)^{1+\eta} , \end{eqnarray} which is \eqref{approx1}, and thus proves Theorem \ref{t3}. \end{proof} \rm \vskip 3 pt \section{\bf Concluding Remarks.} \label{s7} \rm The proof of Theorem \ref{t2} can be adapted with no difficulty to similar arithmetical functions, for instance with powers of $\log\log d$, but not to the functions $S_k(n)$, $k\ge 1$, which depend specifically on a derivation formula; see after \eqref{sisu2}. We remark that a simple convexity argument shows that \begin{eqnarray}\label{Phietamin}\limsup_{n\to \infty}\ \frac{S_k(n)}{ (\log\log n)^{1+k}} &\ge&e^{\g} .
\end{eqnarray} Let indeed $X$ be a discrete random variable equal to $\log d$ if $d|n$, with probability $1/(d\s_{-1}(n))$. Then, $${\mathbb E \,} X^k= \sum_{d|n} \frac{(\log d)^k}{d\s_{-1}(n)}\ge ({\mathbb E \,} X)^k= \big(\sum_{d|n} \frac{\log d}{d\s_{-1}(n)}\big)^k.$$ Whence, $$ S_k(n)=\sum_{d|n} \frac{(\log d)^k}{d }\ge \s_{-1}(n)^{1-k}\big(\sum_{d|n} \frac{\log d}{d}\big)^k.$$ As $\s_{-1}(n)\le (1+o(1))e^\g\log\log n$, by using \eqref{lbeta1} we deduce that \begin{eqnarray*} S_k(n_j)&\ge& (1+o(1))e^{(1-k)\g}(\log\log n_j)^{1-k}\big(e^\g(\log\log n_j)^2)^k\cr &= &(1+o(1))e^{\g}(\log\log n_j)^{1+k}. \end{eqnarray*} Moreover for integers $n$ having sufficiently spaced prime divisors, this lower bound is optimal. More precisely, there exists a constant $C(k)$ depending on $k$ only, such that for any integer $n=\prod_{i=1}^r p_i^{\a_i}$ satisfying the condition $\sum_{i=1}^r\frac{1}{p_i-1}<2^{1-k}, $ one has \begin{eqnarray}\label{Phietaminmajex}S_k(n)\ \le \ C(k)(\log\log n)^{k} \s_{-1}(n) . \end{eqnarray} As $\s_{-1}(n)\le C\log\log n$, it follows that $S_k(n)\ \le \ C(\eta)(\log\log n)^{1+\eta}$. \vskip 7 pt \vskip 3pt \hskip -2pt We conclude with some remarks concerning Davenport's function $w(n)$. At first, if $p_1,\ldots,p_r$ are the $r$ first consecutive prime numbers and $n=p_1 \ldots p_r$, then $w( n)\sim\log\,\o(n)$. Next, the obvious bound $w( n)\ll\log\log\log n$ holds true when the prime divisors of $n$ are large, for instance when for a given positive number $B$, these prime divisors, write them $p_1,\ldots, p_r$, satisfy \begin{eqnarray}\label{prop.pfinite} \sum_{j=1}^r\frac{\log p_{j}}{p_{j}} \le B \qq \hbox{ and} \qq p_1\ldots p_r\gg e^{e^B}. \end{eqnarray} More generally, one can establish the following result. Let $\{p_i, i\ge 1\}$ be an increasing sequence of prime numbers enjoying the following property \begin{eqnarray}\label{prop.p} p_1\ldots p_s&\le & p_{s+1}\qq\qq s=1,2,\ldots\, . 
\end{eqnarray} Numbers of the form $n=p_1\ldots p_\nu$ with $p_1\ldots p_{i-1}\le p_i$, $2\le i\le \nu$, $\nu=1,2,\ldots$ appear as extremal numbers in some divisor questions; see Erd\H os and Hall \cite{EH}. \begin{lemma}\label{b(n)}Let $\{p_i, i\ge 1\}$ be an increasing sequence of prime numbers satisfying condition \eqref{prop.p}. There exists a constant $C$ such that if $p_1\ge C$, then for any integer $n= p_1^{\a_1}\ldots p_r^{\a_r}$ such that $\a_i\ge 1$ for each $i$, we have $w(n)\le \log\log\log n$. \end{lemma} \begin{proof}[{\cmit Proof.}] \rm We use the following inequality. Let $0<\theta<1$. There exists a number $h_\theta$ such that for any $h\ge h_\theta$ and any $H$ such that $e^{\frac{\theta}{(1-\theta)\log 2}}\le H\le h$, we have \begin{eqnarray}\label{hH} h&\le &e^h\, \log \frac{\log(H+h)}{\log H}\, . \end{eqnarray} Indeed, note that $\log (1+x) \ge \theta x$ if $0\le x \le (1-\theta)/\theta$. Let $h_\theta$ be such that if $h\ge h_\theta$, then $h\log h \le \theta(\log 2) e^h$. Thus \begin{eqnarray*}h& \le & e^h\,\theta\frac{\log 2}{\log h}\le e^h\, \theta\frac{\log 2}{\log H} \le e^h\,\log \Big(1+\frac{\log 2}{\log H}\Big)=e^h\,\log \Big(\frac{\log (2H)}{\log H}\Big)\cr&\le& e^h\,\log \Big(\frac{\log (H+h)}{\log H}\Big) \, . \end{eqnarray*} We shall show by induction on $r$ that \begin{eqnarray}\label{lll} \sum_{i=1}^r\frac{\log p_i}{p_i}&\le & \log\log\log (p_1\ldots p_r)\, . \end{eqnarray} This is trivially true if $r=1$, by the convention made in the Introduction and since $p_1\ge 2$. Assume that \eqref{lll} is fulfilled for $s=1, \ldots , r-1$. Then, by the induction hypothesis, \begin{eqnarray*} \sum_{i=1}^r\frac{\log p_i}{p_i}&\le & \log\log\log (p_1\ldots p_{r-1} )+ \frac{\log p_r}{p_r}\, . \end{eqnarray*} Put $H=\sum_{i=1}^{r-1}\log p_i$, $h=\log p_r$.
It suffices to show that \begin{eqnarray*} \frac{\log p_r}{p_r}\ =\ \frac{h}{e^h} &\le & \log\frac{\log \big(\sum_{i=1}^r\log p_i\big)}{\log \big(\sum_{i=1}^{r-1}\log p_i\big)}\ =\ \, \log\frac{\log (H+h)}{\log H}. \end{eqnarray*} But $H\le h$ by assumption \eqref{prop.p}. Choose $C=\exp\big(e^{\frac{\theta}{(1-\theta)\log 2}}\big)$. Then $H\ge \log p_1\ge e^{\frac{\theta}{(1-\theta)\log 2}}$. The desired inequality thus follows from \eqref{hH}. Let $n= p_1^{\a_1}\ldots p_r^{\a_r}$, where $\a_i\ge 1$ for each $i$. We have $w(n)\le \log\log\log (p_1\ldots p_r)\le \log\log\log n$ by \eqref{lll}. \end{proof} \vskip 6 pt \noindent {\bf Acknowledgements.} The author is very grateful to an anonymous referee for bringing to his attention the papers of Sitaramaiah and Subbarao \cite{SS2,SS1}, and for useful remarks. The author also thanks a second referee for useful remarks. \end{document}
\begin{document} \title{Convergence rate estimates for Trotter product approximations of solution operators for non-autonomous Cauchy problems} \begin{abstract} In the present paper we advocate the Howland-Evans approach to the solution of the abstract non-autonomous Cauchy problem (non-ACP) in a separable Banach space $X$. The main idea is to reformulate this problem as an autonomous Cauchy problem (ACP) in a new Banach space $ L^p(\cI,X)$, $p \in [1,\infty)$, consisting of $X$-valued functions on the time-interval $\cI$. The fundamental observation is a one-to-one correspondence between solution operators (propagators) for the non-ACP and the corresponding evolution semigroups for the ACP in $L^p(\cI,X)$. We show that the latter also allows one to apply the full power of operator-theoretical methods to scrutinise the non-ACP, including the proof of Trotter product approximation formulae with an operator-norm estimate of the rate of convergence. The paper extends and improves some recent results in this direction, in particular for Hilbert spaces. \end{abstract} \thanks{\noindent Keywords: Trotter product formula, convergence rate, approximation, evolution equations, solution operator, extension theory, perturbation theory, operator splitting\\ Primary 34G10, 47D06, 34K30; Secondary 47A55.} \tableofcontents \section{Introduction} \label{sec:1} The theory of evolution equations plays an important role in various areas of pure and applied mathematics, physics and other natural sciences. Since the early 1950s, starting with papers by T.Kato \cite{Kato1953} and R.S.Phillips \cite{Phillips1953}, research in this field has been very active and it still enjoys a lot of attention. A comprehensive introduction to this topic is presented in \cite[Chapter VI.9]{EngNag2000} and also in the book by H.Tanabe \cite{Tan1979}.
A general Cauchy problem for linear non-autonomous evolution equations in a Banach space has the form \begin{align}\label{CauchyProblem} \dot u(t) = -C(t)u(t), ~~ u(s)=u_s \in X, ~~ 0 <s\leq t\leq T, \end{align} where $\{C(t)\}_{t\in \cI}$ is a one-parameter (time-dependent) family of closed linear operators in the separable Banach space $X$. Here the time-interval is $\cI:=[0,T]\subset \mathbb{R}$, and we also introduce $\cI_0 : =(0,T]$. To solve the non-autonomous Cauchy problem (non-ACP) \eqref{CauchyProblem} means to find a so-called \textit{solution operator} (or \textit{propagator}) $\{U(t,s)\}_{(t,s)\in \Delta}$, $\Delta=\{(t,s)\in \cI_0\times \cI_0: 0<s\leq t\leq T\}$, with the property that $u(t)=U(t,s)u_s$, $(t,s)\in \Delta$, is in a certain sense a solution of the problem \eqref{CauchyProblem} for an appropriate set of initial data $u_s$. By definition, a propagator $\{U(t,s)\}_{(t,s)\in \Delta}$ is a strongly continuous operator-valued function $U(\cdot ,\cdot):\Delta\rightarrow \mathcal{B}(X)$ satisfying the properties: \begin{align*}\label{FunctionalEquation} &U(t,t)=I \quad \mbox{for} \quad t\in \cI_0 \ ,\\ &U(t,r)U(r,s)=U(t,s) \quad \mbox{for} \quad t,r,s\in \cI_0 \quad \mbox{with~} \quad s\leq r\leq t \ ,\\ &\|U\|_{\cB(X)} :=\sup_{(t,s)\in\Delta}\|U(t,s)\|_{\mathcal{B}(X)}<\infty \ . \end{align*} For details see Definition \ref{SolutionDefinition} in \S\ref{sec:3.1}. We note that there are essentially two different approaches to solving the abstract linear non-ACP \eqref{CauchyProblem} in normed vector spaces. The first method consists in approximating the operator family $\{C(t)\}_{t\in \cI}$ by operators $\{\{C_n(t)\}_{t\in \cI}\}_{n\in \mathbb{N}}$, for which the corresponding Cauchy problem \begin{align*} \dot u(t) = -C_n(t)u(t), ~~ u(s)=u_s \in X, ~~ 0<s\leq t\leq T \end{align*} can be easily solved.
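To make the first approach concrete, here is a hedged numerical sketch (a scalar toy model of our own choosing, not from the paper): for $\dot u(t) = -c(t)u(t)$ the exact propagator is $U(t,s)=\exp(-\int_s^t c(r)\,dr)$, and freezing $c$ on $n$ subintervals, as in the piecewise-constant approximation, yields a composed propagator whose error decays as the mesh is refined.

```python
import math

def exact_propagator(c_antideriv, t, s):
    # U(t, s) = exp(-int_s^t c(r) dr) for the scalar problem u' = -c(t) u
    return math.exp(-(c_antideriv(t) - c_antideriv(s)))

def stepwise_propagator(c, t, s, n):
    # freeze c on each subinterval [s + j h, s + (j+1) h], h = (t - s)/n,
    # and compose the resulting autonomous (constant-coefficient) propagators
    h = (t - s) / n
    prod = 1.0
    for j in range(n):
        prod *= math.exp(-h * c(s + j * h))
    return prod

c = lambda r: 1.0 + math.sin(r)   # sample time-dependent coefficient (our choice)
C = lambda r: r - math.cos(r)     # an antiderivative of c
s, t = 0.5, 2.0
exact = exact_propagator(C, t, s)
errors = [abs(stepwise_propagator(c, t, s, n) - exact) for n in (4, 16, 64, 256)]
```

The errors shrink roughly like $1/n$, which is the expected behaviour of this left-endpoint freezing scheme for a smooth coefficient.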
Often, the family of operators $\{C(t)\}_{t\in \cI}$ is approximated by piecewise constant operators; see T.Kato \cite{Kato1970, Kato1973}. Then one encounters the problem: in which sense does the sequence of approximating propagators $\{\{U_n(t,s)\}_{(t,s)\in \Delta}\}_{n\in\mathbb{N}}$ converge to the solution operator $\{U(t,s)\}_{(t,s)\in \Delta}$ of the non-ACP \eqref{CauchyProblem}? Another approach allows one to solve the problem \eqref{CauchyProblem} using perturbation, or extension, theory for linear operators. It does not need any approximation scheme, see for example \cite{EngNag2000,Nei1981,NeiZag2009}. This approach is quite flexible and can be used in very general settings. Its main idea can be described as follows: the non-ACP in $X$ can be reformulated as an \textit{autonomous} Cauchy problem (ACP) in a new Banach space $ L^p(\cI,X)$, $p \in [1,\infty)$, of $p$-summable functions on the time-interval $\cI$ with values in the Banach space $X$. In the second approach a central notion is the \textit{evolution generator} $\mathcal K$ on $ L^p(\cI,X)$. It generates a semigroup $\{\cU(\gt) = e^{-\gt \cK}\}_{\gt\ge0}$ on $L^p(\cI,X)$ which is called an \textit{evolution semigroup}. In turn, the evolution semigroup on $L^p(\cI,X)$ is entirely determined by the propagator $\{U(t,s)\}_{(t,s)\in{\Delta}}$ in such a way that the representation \begin{equation}\label{CorrespondenceEvolutionSemigroupPropagator} \begin{split} (e^{-\tau \mathcal K}f)(t)=(\mathcal U(\tau)f)(t)= \begin{cases} U(t,t-\tau)f(t-\tau), &{\rm ~~if~} t\in (\tau, T] \ , \\ 0, &{\rm ~~if~} 0\leq t\leq \tau \ , \end{cases} \end{split} \end{equation} holds for any $f\in L^p(\cI,X)$. In the following we use the short notation \begin{align*} (e^{-\tau \mathcal K}f)(t)=(\mathcal U(\tau)f)(t)=U(t,t-\tau)\chi_\cI(t-\tau)f(t-\tau) \ .
\end{align*} It turns out that there is a one-to-one correspondence between the set of all evolution generators and the set of all propagators. Moreover, the important observation is that the set of all evolution generators in $L^p(\cI,X)$ can be characterised quite independently of propagators; see Theorem \ref{PropagatorEvolutionGeneratorCorrespondeceLpSetting}. Notice that in this paper we use a definition of the generator of a semigroup which differs from the usual one by sign; see \eqref{CorrespondenceEvolutionSemigroupPropagator}. It turns out that this choice of definition is more convenient for our presentation. The first problem we have to solve is: how to find the evolution generator for the non-ACP~\eqref{CauchyProblem}? To this aim we introduce the so-called \textit{evolution pre-generator} \begin{align*} \widetilde{\mathcal K} = D_0 + \mathcal C, \quad \mathrm{dom}(\widetilde{\mathcal K}) = \mathrm{dom}(D_0)\cap \mathrm{dom}(\mathcal C)\subset L^p(\cI,X) \ , \end{align*} where $D_0$ is the generator of the right-shift semigroup and $\mathcal C$ is the multiplication operator induced by the operator family $\{C(t)\}_{t\in \cI}$ in $L^p(\cI,X)$. Appropriate assumptions on the family $\{C(t)\}_{t\in \cI}$ guarantee that the operator $\mathcal C$ is a generator in $ L^p(\cI,X)$. If $\{U(t,s)\}_{(t,s)\in\Delta}$ is the solution operator of the non-ACP \eqref{CauchyProblem}, then it turns out that the generator $\cK$ of the associated evolution semigroup $\{\cU(\gt)\}_{\gt \ge 0}$ defined by \eqref{CorrespondenceEvolutionSemigroupPropagator} is a closed operator extension of the evolution pre-generator $\widetilde{\mathcal K}$. Conversely, if the evolution pre-generator $\widetilde{\cK}$ admits a closed extension, which is an evolution generator, then the corresponding propagator $\{U(t,s)\}_{(t,s)\in{\Delta}}$ can be regarded as a solution operator of the non-ACP \eqref{CauchyProblem}.
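The semigroup law behind the representation \eqref{CorrespondenceEvolutionSemigroupPropagator} follows directly from the cocycle identity $U(t,r)U(r,s)=U(t,s)$. As a purely illustrative aside (a scalar model with concrete choices of our own, not from the paper), this can be checked numerically: with $U(t,s)=\exp(-\int_s^t c(r)\,dr)$, the shifted operators $\mathcal U(\tau)$ compose as a semigroup.

```python
import math

T = 3.0
C = lambda r: r - math.cos(r)   # antiderivative of c(r) = 1 + sin r (sample choice)

def U(t, s):
    # propagator of u' = -c(t) u; satisfies the cocycle U(t, r) U(r, s) = U(t, s)
    return math.exp(-(C(t) - C(s)))

def evo(tau, f):
    # (e^{-tau K} f)(t) = U(t, t - tau) f(t - tau) for t > tau, and 0 otherwise
    def g(t):
        return U(t, t - tau) * f(t - tau) if t > tau else 0.0
    return g

f = lambda t: math.cos(3 * t)            # sample element of L^p(I, X)
t1, t2 = 0.43, 0.71                      # chosen off the sample grid below
pts = [0.05 * k for k in range(1, 60)]   # sample points in (0, T)

# semigroup property: evo(t1) evo(t2) = evo(t1 + t2), pointwise on the samples
max_dev = max(abs(evo(t1, evo(t2, f))(t) - evo(t1 + t2, f)(t)) for t in pts)
```

The deviation is at floating-point level, reflecting that the cutoff in \eqref{CorrespondenceEvolutionSemigroupPropagator} and the cocycle identity fit together exactly.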
In general, it is difficult to answer the question whether an evolution pre-generator admits a closed extension which is an evolution generator. However, the problem simplifies if the pre-generator is closable and its closure is a generator. In this case one gets that the closure is already an \textit{evolution generator}. Then obviously the evolution pre-generator admits only \textit{one} extension which is a generator and, hence, which is an evolution generator. This means that the non-ACP \eqref{CauchyProblem} is uniquely solvable. From the point of view of operator theory, the problem formulated above fits into the question whether the sum of two generators is \textit{essentially} a generator, i.e. whether the closure of the sum of two generators is a generator. If the sum of two generators $A$ and $B$ of contraction semigroups in some Banach space is essentially a generator, then the so-called \textit{Trotter product formula} \begin{align*} e^{-\tau C} = \slim (e^{-\tau A/n}e^{-\tau B/n})^n, \quad C:= \overline{A+B} \ , \end{align*} in the \textit{strong} operator topology, is valid for the closure $\overline{A+B}$. This formula goes back to Sophus Lie (1875) for bounded linear operators. Later it was generalised by H.Trotter to unbounded generators of contraction semigroups; see \cite{Trotter1959}. The formula admits a further generalisation to an arbitrary pair $\{A,B\}$ of generators of semigroups if their semigroups satisfy a so-called Trotter \textit{stability} condition, cf. Proposition \ref{StabilityProposition}. Note that the generalisation \cite{Trotter1959} says nothing about the convergence rate of the Trotter product formula and, consequently, nothing about the error bound when approximating solution operators by this formula. To this aim one has to consider the convergence of the Trotter product formula in the operator-norm topology. For the case of Banach spaces see \cite{CachZag2001}.
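For intuition, the Trotter product formula can be observed numerically already in the simplest non-commutative setting. The sketch below ($2\times 2$ real matrices and a hand-rolled Taylor-series matrix exponential; all concrete choices are ours, not the paper's) exhibits the expected decay of the Trotter error as $n$ grows.

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def expm(X, terms=30):
    # e^X by a truncated Taylor series (adequate for the small matrices here)
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_scale(1.0 / k, mat_mul(term, X))
        result = mat_add(result, term)
    return result

def mat_pow(X, n):
    result = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        result = mat_mul(result, X)
    return result

def dist(X, Y):
    return max(abs(X[i][j] - Y[i][j]) for i in range(2) for j in range(2))

A = [[1.0, 0.5], [0.0, 1.5]]   # a non-commuting sample pair
B = [[0.8, 0.0], [0.3, 1.2]]
tau = 1.0
target = expm(mat_scale(-tau, mat_add(A, B)))   # e^{-tau (A + B)}

def trotter(n):
    # (e^{-tau A/n} e^{-tau B/n})^n
    step = mat_mul(expm(mat_scale(-tau / n, A)), expm(mat_scale(-tau / n, B)))
    return mat_pow(step, n)

errs = [dist(trotter(n), target) for n in (2, 8, 32, 128)]
```

In this bounded finite-dimensional setting the error decays like $O(1/n)$, driven by the commutator $[A,B]$; the delicate questions discussed in the paper concern unbounded generators, where no such naive computation is available.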
However, in \cite{CachZag2001} the operator $A$ was assumed to be a generator of a holomorphic semigroup. In our case, this assumption is not satisfied for a principal reason: the evolution semigroup (\ref{CorrespondenceEvolutionSemigroupPropagator}) can never be a holomorphic semigroup! Nevertheless, some observations of \cite{CachZag2001} admit a generalisation to the case of evolution semigroups. We discuss this point in Remark \ref{rem:7.9}. Finally, after having determined the convergence rate of the Trotter product formula in the operator norm, one has to carry over this result to the propagator approximations. It turns out that the Trotter product formula yields an approximation of the propagator $\{U(t,s)\}_{(t,s)\in {\Delta}}$ in the operator norm which has (uniformly in ${\Delta}$) the \textit{same} convergence rate as the Trotter product formula for the evolution semigroup; see Theorems \ref{TheoremEstimate} and \ref{EstimatePropagators}. We hope that these results might be useful in applications, since they give a uniform error estimate for a discretized approximation of the solution operator $\{U(t,s)\}_{(t,s)\in {\Delta}}$ for the non-ACP \eqref{CauchyProblem}, see \S \ref{sec:7}. In particular, this concerns numerical simulations, where, as a palliative approach, one uses domain-dependent error estimates for operator \textit{splitting} schemes in the strong operator topology \cite{Batkai2011}. Now we give an overview of the contents of the paper in more detail. Our aim is the analysis of a linear non-ACP of the form \begin{align}\label{EvolutionProblem} \dot u(t) = - Au(t) - B(t)u(t), ~~ u(s)=u_s \in X, ~~ 0<s\leq t\leq T \ , \end{align} where $A$ is a generator of a bounded holomorphic semigroup and $\{B(t)\}_{t\in \cI}$ is a family of closed linear operators in $X$ (for any time-interval $\cI=[0,T]$).
To proceed we make the following assumptions: \begin{assump}\label{ass:1.1} {\rm Let ${\alpha} \in (0,1)$ and let $X$ be a (separable) Banach space. \item[\;\;(A1)] The operator $A$ is a generator of a bounded holomorphic semigroup of class $\mathcal G(M_A,0)$ (\cite{Kato1980}, Ch.IX, \S1.4) with zero in the resolvent set: $0\in\varrho(A)$. \item[\;\;(A2)] The operators $\{B(t)\}_{t\in \cI}$ are densely defined and closed for a.e. $t\in \cI$, and it holds that $\mathrm{dom}(A)\subset\mathrm{dom}(B(t))$ for a.e. $t\in \cI$. Moreover, for all $x\in \mathrm{dom}(A)$ the function $t\mapsto B(t)x$ is strongly measurable. \item[\;\;(A3)] For a.e. $t\in \cI$ and some ${\alpha} \in (0,1)$ we demand that $\mathrm{dom}(A^\alpha)\subset\mathrm{dom}(B(t))$ and that \begin{align*} C_\alpha:=\mathrm{ess ~sup}_{t\in \cI}\|B(t)A^{-\alpha}\|_{\mathcal B(X)}<\infty \ . \end{align*} \item[\;\;(A4)] Let $\{B(t)\}_{t\in\cI}$ be a family of generators in $X$ that for all $t\in \cI$ belong to the same class $\mathcal G(M_B,\beta)$. The function $\cI\ni t\mapsto (B(t)+\xi)^{-1}x\in X$ is strongly measurable for any $x\in X$ and any $\xi>\beta$. \item [\;\;(A5)] We assume that $\mathrm{dom}(A^*)\subset \mathrm{dom}(B(t)^*)$ and \begin{align*} C_1^*:={\rm ~ess~sup}_{t\in \cI}\|B(t)^*(A^*)^{-1}\|_{\mathcal B(X^*)}<\infty, \end{align*} where $A^*$ and $B(t)^*$ denote the operators adjoint to $A$ and $B(t)$, respectively. \item[\;\;(A6)] There exist ${\beta} \in ({\alpha},1)$ and a constant $L_{\beta} > 0$ such that for a.e. $t,s\in \cI$ one has the estimate \begin{align*} \|A^{-1}(B(t)-B(s))A^{-\alpha}\|_{\mathcal B(X)}\leq L_\beta|t-s|^\beta \ . \end{align*} } \end{assump} We comment here that assumptions (A3) and (A4) imply assumption (A2); so, assuming (A3) and (A4), we can drop assumption (A2). Let $\mathcal A$ and $\cB$ be the multiplication operators \textit{induced} by $A$ and $\{B(t)\}_{t\in \cI}$ in $L^p(\cI,X)$.
Further, let $D_0$ be the generator of the right-shift semigroup in $L^p(\cI,X)$. Note that since $A$ is a semigroup generator in $X$, the operator $\mathcal A$ in the space $L^p(\cI,X)$ is also a generator. Moreover, the semigroup $\{e^{-\tau\mathcal A}\}_{\tau\geq0}$ commutes with the semigroup $\{e^{-\tau D_0}\}_{\tau\geq0}$. Therefore, the product $\{e^{-\tau\mathcal A}e^{-\tau D_0}\}_{\tau\geq0}$ defines a semigroup, whose generator is denoted by $\mathcal K_0$. Note that $\mathcal K_0 = \overline{D_0 + \cA}$, i.e. the closure of the operator sum $D_0 + \cA$. In general, the domain of the generator $\mathcal K_0$ can be larger than $\mathrm{dom}(\mathcal A)\cap\mathrm{dom}(D_0)$. A widely used assumption about the operator $A$ is its \textit{maximal parabolic regularity}, see \cite{Acquistapace1987, Prato1984, PruessSchnaubelt2001, Arendt2007}. This means that the operator sum $D_0+\mathcal A$ is already closed, i.e. $\cK_0 = D_0 + \cA$ and $\mathrm{dom}(\mathcal K_0)= \mathrm{dom}(\mathcal A)\cap \mathrm{dom}(D_0)$. In this paper maximal parabolic regularity is \textit{not} assumed for our purposes. Now the first of our main results can be formulated as follows: \begin{thm}\label{TheoremKbecomesGeneratorIntroduction} Let the assumptions (A1), (A2) and (A3) be satisfied. Then the operator $\mathcal K :=\mathcal K_0+\mathcal B$ with $\mathrm{dom}(\mathcal K)=\mathrm{dom}(\mathcal K_0)\cap \mathrm{dom}(\mathcal B)$ is an evolution generator in $L^p(\cI,X)$, $p \in [1,\infty)$, and the non-autonomous Cauchy problem \eqref{EvolutionProblem} has a unique solution operator $\{U(t,s)\}_{(t,s)\in {\Delta}}$ in the sense of Definition \ref{SolutionDefinition}. \end{thm} The proof of this theorem mainly uses a perturbation theorem due to J.Voigt \cite{Voigt1977}; see Proposition \ref{VoigtsTheorem}. Note that the theorem holds without assuming that the operators $\{B(t)\}_{t\in\cI}$ are generators.
We comment that if assumption (A4) is satisfied, then the induced multiplication operator $\mathcal B$ becomes a generator that also belongs to the class $\cG(M_B,{\beta})$. A pair $\{\cK_0,\cB\}$ is called \textit{Trotter-stable} if it satisfies the condition \begin{align*} \sup_{n\in \dN}\sup_{\gt \ge 0}\left\|\left(e^{-\tau\cK_0/n}e^{-\tau\cB /n}\right)^n\right\| < \infty \ . \end{align*} If the pair $\{\cK_0,\cB\}$ is Trotter-stable, then by the Trotter product formula the evolution semigroup $\{e^{-\gt \cK}\}_{\gt \ge 0}$ admits the representation \begin{align}\label{eq:1.4} e^{-\tau \mathcal K} = \slim (e^{-\tau\cB /n}e^{-\tau\cK_0/n})^n. \end{align} It turns out that the pair $\{\cB,\cK_0\}$ is Trotter-stable if the operator family $\{B(t)\}_{t\in\mathcal I}$ is $A$-stable (see Definition \ref{StabilityDefinition}). Let us mention here that the pair $\{\cK_0,\cB\}$ is Trotter-stable if and only if the pair $\{\cB,\cK_0\}$ is Trotter-stable, i.e. the estimate \begin{align*} \sup_{n\in \dN}\sup_{\gt \ge 0}\left\|\left(e^{-\tau\cB/n}e^{-\tau\cK_0 /n}\right)^n\right\| < \infty \ , \end{align*} holds. In particular, this yields that one can interchange the operators $\cK_0$ and $\cB$ in formula \eqref{eq:1.4}. Note that the Trotter stability condition is always satisfied for generators of contraction semigroups. Let $\{U(t,s)\}_{(t,s)\in \Delta}$ be the propagator corresponding to the evolution semigroup $\{e^{-\tau \mathcal K}\}_{\tau\geq0}$ via (\ref{CorrespondenceEvolutionSemigroupPropagator}). Then the Trotter product formula yields an approximation of the propagator $\{U(t,s)\}_{(t,s)\in \Delta}$ in the strong operator topology, and we prove in this paper the following assertion: \begin{thm}\label{StrongConvergencePropagatorIntroduction} Let the assumptions (A1), (A3) and (A4) be satisfied.
If the family $\{B(t)\}_{t\in\cI}$ is $A$-stable (see Definition \ref{StabilityDefinition}), then \begin{equation}\label{eq:1.6a} \lim_{n\to\infty}\sup_{\tau\in\cI}\int_0^{T-\tau}\|\{U_n(s+\tau,s)-U(s+\tau,s)\}x\|^p_Xds = 0, \quad x \in X, \end{equation} for any $p\in[1,\infty)$, where the Trotter product approximation $\{\{U_n(t,s)\}_{(t,s)\in \Delta}\}_{n\in\mathbb{N}}$ is defined by \begin{equation}\label{ApproximationPropagatorIntroduction} \begin{split} U_n(t,s) &:= \prod_{j=1}^{n\leftarrow} G_j(t,s\,;n), \quad n = 1,2,\ldots\:,\\ G_j(t,s\,;n) &:= e^{-\frac{t-s} n B(s+ j\frac{t-s} n)}e^{-\frac {t-s} n A},\quad j = 1,2,\ldots,n, \end{split} \end{equation} $(t,s) \in {\Delta}$, with the product increasingly ordered in $j$ from the right to the left. \end{thm} Our second main result shows that the convergence in \eqref{eq:1.6a} can be improved from the \textit{strong} to the \textit{operator-norm} topology and that the \textit{convergence rate} can be estimated from above. \begin{thm} Let the assumptions (A1), (A3), (A4), (A5), and (A6) be satisfied. If the family of generators $\{B(t)\}_{t\in\cI}$ is $A$-stable and ${\beta} \in ({\alpha},1)$, then there is a constant $C_{\alpha, \beta}>0$ such that \begin{equation}\label{eq:1.8} \esssup_{(t,s)\in{\Delta}}\|U_n(t,s)-U(t,s)\|_{\mathcal B(X)} \le \frac{C_{{\alpha},{\beta}}}{n^{{\beta}-{\alpha}}}, \quad n = 2,3,\ldots \ . \end{equation} \end{thm} Now a few remarks are in order. Recall that in \cite{IchinoseTamura1998} Ichinose and Tamura proved, under stronger assumptions, a convergence rate in the \textit{operator norm} sharper than (\ref{eq:1.8}). Namely, they showed that it is of the order $O({\ln(n)}/{n})$ if both $A$ and $B(t)$ are positive self-adjoint operators in a Hilbert space and the operators $\{B(t)\}_{t\geq 0}$ are Kato-infinitesimally small with respect to $A$.
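A finite-dimensional sketch may illustrate the Trotter product approximation $U_n(t,s)$ above: with matrices in place of the unbounded generators, all semigroups are matrix exponentials and the ordered product can be compared against a fine-step reference. The matrices $A$ and $B(t)$ and the tolerances are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Sketch of U_n(t,s) with X = R^2: a matrix A and a smooth (hence Hoelder-
# continuous) matrix family B(t) stand in for the operators of the theorem.
def expm(M, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[2.0, 0.0], [0.0, 1.0]])
def B(t):
    return np.array([[0.0, np.sin(t)], [np.sin(t), 0.0]])

def U_n(t, s, n):
    """Ordered product G_n ... G_1, G_j = e^{-h B(s+jh)} e^{-h A}, h = (t-s)/n."""
    h = (t - s) / n
    P = np.eye(2)
    for j in range(1, n + 1):          # j increases from right to left
        P = expm(-h * B(s + j * h)) @ expm(-h * A) @ P
    return P

U_ref = U_n(1.0, 0.0, 4096)            # fine-step product as stand-in for U(1,0)
errs = [np.linalg.norm(U_n(1.0, 0.0, n) - U_ref) for n in (8, 16, 32)]
print(errs)  # decreasing with n, consistent with an operator-norm rate
```

Doubling $n$ roughly halves the error here, which matches the first-order behaviour one expects in the matrix case.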
On the other hand, in \cite{Batkai2011, Batkai2012} B\'atkai \textit{et al} investigated approximations of solution operators for non-autonomous evolution equations by a different type of so-called \textit{operator splittings} in the \textit{strong} operator topology. These include, as particular cases, the symmetrised/nonsymmetrised time-dependent Trotter product approximations in the strong operator topology studied in \cite{VWZ2008, VWZ2009}, as well as some other Trotter-Kato product formulae, see e.g. \cite{NeiZag1998}. In the first paper \cite{Batkai2011}, the authors proved the strong operator convergence and established for the non-autonomous parabolic case an optimal \textit{domain-dependent} convergence rate for the (sequential) splitting approximation. The second paper \cite{Batkai2012} is devoted to a detailed analysis of the case of bounded perturbations. Equation \eqref{EvolutionProblem} describes various problems related to the linear non-ACP. As an example, we consider in \S \ref{sec:8} the diffusion equation perturbed by a time-dependent scalar potential $t \mapsto V(t,\cdot)$: \begin{align}\label{EvolProbLaplaceAndTimeDependentPotentialIntroduction} \dot u(t) = \Delta u(t) - V(t,x)u(t), ~~ u(s)=u_s \in L^q(\Omega), ~~ 0<s\leq t\leq T, ~x\in \Omega, \end{align} where $\Omega\subset\mathbb{R}^d$ is a bounded domain with $C^2$-boundary and $q\in(1,\infty)$. Let \begin{align*} V(t,x):\cI\times\Omega\rightarrow \mathbb{C}, ~~ {\rm Re}(V(t,x))\geq0 {\rm ~~for~t\in \cI, ~a.e.}~x\in\Omega, \end{align*} be a measurable scalar time-dependent potential. Assuming regularity of the potential $V(t,x)$, the conditions (A3), (A5) and (A6) can be satisfied. As an example for the case $d=3$, we have the following theorem. \begin{thm}\label{ExampleUniqueSolutionIntroduction} Let $\Omega\subset \mathbb{R}^3$ be a bounded domain with $C^2$-boundary. Let $\alpha\in(0, {1}/{2})$ and $q\in (3, {3}/{(2\alpha)})$.
Choose $\varrho\in[{3}/{(2\alpha)},\infty]$, $\beta\in(\alpha,1)$ and $\tau\in[{3}/{(2\alpha+2)},\infty]$. Let $B(t)f=V(t,\cdot)f$ define a scalar-valued multiplication operator in $X=L^q(\Omega)$ with $V\in L^\infty(\cI, L^{\varrho}(\Omega))\cap C^\beta(\cI, L^\tau(\Omega))$ and ${\rm Re}(V(t,x))\geq0$. Then the evolution problem (\ref{EvolProbLaplaceAndTimeDependentPotentialIntroduction}) has a unique solution operator $\{U(t,s)\}_{(t,s) \in {\Delta}}$, which admits the approximation \begin{align*} \sup_{(t,s)\in\Delta}\|U_n(t,s)-U(t,s)\|_{\mathcal B(L^q(\Omega))} = O(n^{-(\beta-\alpha)}), \end{align*} where for $(t,s) \in {\Delta}$ the approximating propagator $U_n(t,s)$ is defined by the product formula \begin{equation*} \begin{split} U_n(t,s) &:= \prod^{n\leftarrow}_{j=1} G_j(t,s\,;n),\quad n = 1,2,\ldots\; ,\\ G_j(t,s\,;n) &:= e^{-\frac{t-s}{n}V(s + j\frac{t-s}{n},\cdot)}e^{\frac{t-s}{n}\Delta}, \quad j = 1,2,\ldots,n \ . \end{split} \end{equation*} \end{thm} The conditions for other values of the parameters when $d\geq2$, $q\in(1,\infty)$, are formulated in \S\ref{sec:8}. This paper is organised as follows. In \S\ref{sec:2} we summarise some basic facts about semigroup theory, fractional powers of operators and multiplication operators. In \S\ref{sec:3} we describe our approach to the solution of non-ACPs. In \S\ref{sec:4} the existence of a unique solution operator for our case of the linear non-ACP is proved. \S\ref{sec:5} presents the basic properties of stability, whereas \S\ref{sec:6} investigates convergence of the Trotter-type product approximations in the strong topology. \S\ref{sec:7} contains the proof of the lifting of this convergence to the operator-norm topology. An application to a nonstationary diffusion equation is the subject of \S\ref{sec:8}. The Appendix (\S\ref{sec:9}) completes the presentation with some important auxiliary statements and formulae.
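Returning to the diffusion example above, the splitting can be sketched in one dimension with finite differences: the heat factor $e^{h\Delta}$ is a dense matrix (computed here spectrally), while $e^{-hV}$ is a cheap pointwise factor, which is precisely the attraction of the product formula. The grid size and the potential $V$ are illustrative assumptions.

```python
import numpy as np

# 1D finite-difference sketch of the splitting for u' = Delta u - V(t,x) u
# on (0,1) with Dirichlet boundary conditions (hypothetical discretization).
m = 40
x = np.linspace(0.0, 1.0, m + 2)[1:-1]
dx = x[1] - x[0]
L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / dx ** 2      # discrete Dirichlet Laplacian
w, Q = np.linalg.eigh(L)                           # eigenvalues w < 0

def heat(h):
    """e^{h Delta} on the grid via the spectral decomposition of L."""
    return Q @ np.diag(np.exp(h * w)) @ Q.T

def V(t):
    return 1.0 + t * np.sin(np.pi * x)             # Re V >= 0, as required

def U_n(t, s, n):
    h = (t - s) / n
    E, P = heat(h), np.eye(m)
    for j in range(1, n + 1):                      # ordered right to left
        P = np.diag(np.exp(-h * V(s + j * h))) @ E @ P
    return P

U_ref = U_n(0.1, 0.0, 512)                         # fine-step reference
errs = [np.linalg.norm(U_n(0.1, 0.0, n) - U_ref) for n in (4, 8, 16)]
print(errs)  # shrinking splitting error as n grows
```

Note that both factors are contractions here (negative Laplacian spectrum, $\mathrm{Re}\,V\ge0$), so the Trotter stability condition is automatic in this sketch.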
Finally, we point out that the paper is partially based on the master thesis \cite{Stephan2016} of one of the authors. There a special case was treated, in which the involved semigroups are contractions; this allows one to avoid the stability considerations formulated in \S\ref{sec:5}. In addition, in \cite{Stephan2016} a similar approach was also developed for the space $C_0(\cI,X)$ instead of $L^p(\cI,X)$. The $C_0(\cI,X)$-approach allows one to prove results similar to those for $L^p(\cI,X)$, however, under stronger regularity assumptions on the family $\{B(t)\}_{t\in\cI}$. \section{Recall from the theory of semigroups} \label{sec:2} Below we recall some basic facts from operator and semigroup theory which are indispensable for our presentation. Throughout this paper we are dealing with a separable Banach space denoted by $(X,\|\cdot\|_{X})$. Let $S$ and $T$ be two operators in $X$. If $\mathrm{dom}(S)\subset\mathrm{dom}(T)$ and there are constants $a,b\geq0$ such that \begin{align*} \|Tx\|_X\leq a\|Sx\|_X+b\|x\|_X, ~~x\in\mathrm{dom}(S), \end{align*} then the operator $T$ is called $S$-bounded with the relative bound $a$. We define the resolvent of an operator $A$ by $R(\lambda,A)=(A-\lambda)^{-1}:X\rightarrow \mathrm{dom}(A)$ for $\lambda$ from the resolvent set $\varrho(A)$. A family $\{T(t)\}_{t\geq0}$ of bounded linear operators on the Banach space $X$ is called a strongly continuous (one-parameter) semigroup if it satisfies the functional equation \begin{align*} T(0)=I, ~~T(t+s)=T(t)T(s), ~~t,s\geq0, \end{align*} and the orbit maps $[0,\infty) \ni t\mapsto T(t)x$ are continuous for every $x \in X$. In the following we simply call them semigroups. For a given semigroup its generator is the linear operator defined by the limit \begin{align*} Ax:=\lim_{h\searrow0}\frac{1}{h}(x - T(h)x) \end{align*} on the domain \begin{align*} \mathrm{dom}(A):=\{x\in X: \lim_{h\searrow0}\frac{1}{h}(x - T(h)x)~~ \mathrm{exists}\}.
\end{align*} Note that our definition of the semigroup \textit{generator} differs from the \textit{standard} one by a minus sign, cf. \cite{Kato1980}. It is well-known that the generator of a strongly continuous semigroup is a closed and densely defined linear operator, which uniquely determines the semigroup (see e.g. \cite[Theorem I.1.4]{EngNag2000}). For a given generator $A$ we will write $\{T(t)=e^{- t A}\}_{t \ge 0}$ for the corresponding semigroup. Recall that for any semigroup $\{T(t)\}_{t\geq0}$ there are constants $M_A,\gamma_A$ such that $\|T(t)\|\leq M_A e^{\gamma_A t}$ holds for all $t\geq0$. These semigroups are known as \textit{quasi-bounded} of class $\mathcal{G}(M_A,\gamma_A)$ and, following Kato's book, we write $A\in\mathcal{G}(M_A,\gamma_A)$ for its generator \cite[Ch.IX]{Kato1980}. If $\gamma_A\leq0$, $\{T(t)\}_{t\geq0}$ is called a \textit{bounded} semigroup. From any semigroup we can construct a bounded semigroup by adding some constant $\nu\geq\gamma_A$ to its generator. Then the operator $\tilde{A} := A+\nu$ generates a bounded semigroup $\{\tilde T(t)\}_{t\geq0}$ with $\|\tilde T(t)\|\leq M_A$. If $\|T(t)\|\leq1$, the semigroup is called a \textit{contraction} semigroup and, correspondingly, a \textit{quasi-contraction} semigroup if the property $\|T(t)\|\leq e^{\gamma_At}$ holds. It is known (see \cite[Ch.IX]{Kato1980}) that for a generator $A\in\mathcal{G}(M_A,\gamma_A)$ the open half-plane $\{z\in\mathbb{C}:{\rm Re}(z) < - \gamma_A\}$ is contained in the resolvent set $\varrho(A)$ of $A$ and one has the estimate $\|R(\lambda, A)^k\|\leq {M_A}/(-{\rm Re}(\lambda)-\gamma_A)^k$ for the resolvent $R(\lambda, A)= (A - \lambda)^{-1}$ and $k \in \mathbb{N}$. Note that if $A\in\mathcal{G}(M_A,\gamma_A)$, then $\tilde{A} = A + \nu \in \mathcal{G}(M_A, \gamma_A -\nu)$.
Therefore, the open half-plane $\{z\in\mathbb{C}: {\rm Re}(z) < \nu -\gamma_A\}$ is contained in the resolvent set $\varrho(\tilde A)$. Note that the semigroup $\{T(t)\}_{t\geq 0}$ on $X$ is called a \textit{bounded holomorphic} semigroup if its generator $A$ satisfies: $\mathrm{ran}(T(t))\subset\mathrm{dom}(A)$ for all $t>0$, and $\sup_{t>0}\|tAT(t)\|\leq M <\infty$. Recall that in this case the bounded semigroup $\{T(t)\}_{t\geq 0}$ has a unique analytic continuation into the open sector $\{z\in\mathbb{C}:|\arg(z)|<\delta(M)\} \subset\mathbb{C}$ of angle $\delta(M) >0$, which is a monotonically decreasing function of $M$ such that $\lim_{M\rightarrow \infty}\delta(M) = 0$, see e.g. \cite[Ch.1.5]{Zag2003}. For a short recall of the perturbation theory of semigroups see \S\ref{sec:4}. \subsection{Fractional powers} \label{sec:2.1} We recall here some facts about the fractional powers of linear operators, see e.g. \cite[Chapter 2.6]{Pazy1983}. To this end assume that $A$ is a generator of a bounded holomorphic semigroup $\{e^{-t A}\}_{t\geq0}$ and $0\in\varrho(A)$. Then the fractional power for $\alpha\in(0,1)$ is defined by \begin{align*} A^{-\alpha}=\frac{1}{\Gamma(\alpha)}\int_0^\infty t^{\alpha-1} e^{-tA}dt \ , \end{align*} where $\Gamma: \mathbb{R}_{+} \rightarrow \mathbb{R}$ is the Euler gamma-function. Moreover, we define $A^{0}=I$. Thus, the operator family $\{A^{-\alpha}\}_{\alpha\geq0}$ defines a semigroup of bounded linear operators and the operators $A^{-\alpha}$ for $\alpha>0$ are invertible \cite[2.6.5-6]{Pazy1983}. So, for $\alpha\geq0$ we can define $A^\alpha:=(A^{-\alpha})^{-1}$. With this definition, we get $\mathrm{dom}(A^\alpha)\subset\mathrm{dom}(A^\beta)$ for $\alpha\geq\beta>0$. In particular, we have $\mathrm{dom}(A)\subset\mathrm{dom}(A^\alpha)$ for every $\alpha\in(0,1)$. The following facts are also well-known.
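Before stating them, the defining integral for $A^{-\alpha}$ can be checked in a finite-dimensional sketch: for a symmetric positive definite matrix standing in for $A$ (so that $0\in\varrho(A)$), quadrature of the Bochner integral reproduces the spectral fractional power. All concrete numbers are illustrative assumptions.

```python
import numpy as np
from math import gamma

# Check A^{-alpha} = (1/Gamma(alpha)) int_0^infty t^{alpha-1} e^{-tA} dt
# for an SPD 2x2 matrix, against the spectral calculus.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
alpha = 0.5
vals, vecs = np.linalg.eigh(A)              # A = V diag(vals) V^T, vals > 0

# substitute t = u^2 to tame the t^{alpha-1} singularity at t = 0:
# int_0^infty t^{alpha-1} e^{-t lam} dt = int_0^infty 2 u^{2 alpha-1} e^{-u^2 lam} du
u = np.linspace(0.0, 8.0, 20001)

def scalar_integral(lam):
    y = 2.0 * u ** (2 * alpha - 1) * np.exp(-u ** 2 * lam)
    return np.sum((y[1:] + y[:-1]) * np.diff(u)) / 2.0 / gamma(alpha)

A_minus_alpha = vecs @ np.diag([scalar_integral(l) for l in vals]) @ vecs.T
exact = vecs @ np.diag(vals ** (-alpha)) @ vecs.T
print(np.linalg.norm(A_minus_alpha - exact))  # small quadrature error
```

For $\alpha=1/2$ the substituted integrand is smooth, so a plain trapezoidal rule already gives several correct digits.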
\begin{prop}\label{PropertiesFractionalPowers} Let $A$ be a generator of a bounded holomorphic semigroup. \begin{enumerate} \item [\;\;\rm (i)] Then there is a constant $C_0$ such that for all $\mu>0$ it holds that \begin{align*} \|A^{\alpha}(A+\mu)^{-1}\|\leq C_0\mu^{\alpha-1}. \end{align*} \item [\;\;\rm (ii)] For $\mu>0$ and $0<\alpha<1$ it holds that \begin{align*} \mathrm{dom}((A+\mu)^\alpha)=\mathrm{dom}(A^\alpha). \end{align*} \end{enumerate} \end{prop} One of the basic tools for the analysis of bounded holomorphic evolution semigroups is summarised by the following proposition. \begin{prop}[{\cite[Theorem 2.6.13]{Pazy1983}}]\label{HolomorphicSemigroupEstimate} Let $A$ be a generator of a bounded ho\-lomorphic semigroup $U(z)$ and $0\in\varrho(A)$. Then for $\alpha > 0$ we get \begin{align*} \sup_{t>0}\|t^\alpha A^\alpha U(t)\|=M^A_\alpha<\infty. \end{align*} \end{prop} \subsection{Multiplication operators} \label{sec:2.2} Let $\cI=[0,T]$ be a compact interval. We consider the Banach spaces $L^p(\cI,X)$, $p \in [1,\infty)$, of Lebesgue $p$-summable $X$-valued functions. The dual space $L^p(\cI,X)^*$ of $L^p(\cI,X)$ is defined by the sesquilinear duality relation $\langle \cdot , \cdot \rangle $, which generates bounded functionals: \begin{equation*} L^p(\cI,X) \times L^p(\cI,X)^* \ni (f, \Gamma) \mapsto \langle f , \Gamma \rangle \in \mathbb{C} \ . \end{equation*} Then the following statement characterises the space $L^p(\cI,X)^*$. \begin{prop}[{\cite[Theorem 1.5.4]{CembranosMendoza1997}}]\label{CharaterizationLpdual} Let $1\leq p<\infty$, and let $p'$ be defined by $(p')^{-1} + p^{-1}=1$. Then for each $\Gamma\in L^p(\cI,X)^*$ there exists a function $\Psi:\cI\rightarrow X^*$ such that{\rm :} \item[\;\;\rm (i)] $\Psi$ is $w^*$-measurable, i.e.
the functions $t\mapsto \langle f(t),\Psi(t)\rangle$ are measurable for all $f\in L^p(\cI,X)$, \item[\;\;\rm (ii)] the function $\|\Psi(\cdot)\|_{X^*}:t\mapsto \|\Psi(t)\|_{X^*}$ is measurable and belongs to $L^{p'}(\cI)$, \item[\;\;\rm (iii)] if $\langle f , \Gamma \rangle = \int_\cI dt \, \langle f(t),\Psi(t) \rangle $ for all $f\in L^p(\cI,X)$, then $\|\Gamma\|=\|~\|\Psi(\cdot)\|_{X^*} ~\|_{L^{p'}}$. Conversely, each $w^*$-measurable function $\Psi:\cI\rightarrow X^*$, for which there is $g\in L^{p'}(\cI)$ such that $\|\Psi(t)\|_{X^*}\leq g(t)$ for a.e. $t\in \cI$, induces by {\rm (iii)} a continuous linear functional $\Gamma$ on $L^p(\cI,X)$, whose norm is less than or equal to $\|g\|_{L^{p'}}$. \end{prop} An important role in the following is played by the so-called \textit{multiplication} operators on the Banach space $L^p(\cI,X)$, $p \in [1,\infty)$. A function $\phi\in L^{\infty}(\cI)$ defines a multiplication operator $M(\phi)$ on $L^p(\cI,X)$ by \begin{align*} (M(\phi)f)(t):=\phi(t)f(t) ~~{\rm for ~a.e.~} t\in \cI, ~~\mathrm{dom}(M(\phi))=L^p(\cI,X). \end{align*} Moreover, let $C(\cI,X)$ be the Banach space of all continuous functions $f:\cI \longrightarrow X$ endowed with the supremum norm. By $C_0(\cI, X)$ we denote the subspace of $C(\cI,X)$ of all continuous functions which vanish at $t = 0$. \begin{defi}\label{DefinitionDenseCrossSection} We say that the set $\mathcal D\subset L^p(\cI,X)$ has a dense \textit{cross-section} in $X$ if \item[\;\;\rm (i)] $\mathcal D\subset L^p(\cI,X)\cap C(\cI,X)$ , \item[\;\;\rm (ii)] for any $t\in\cI_0 := \cI \setminus \{0\}$ the set $ [\mathcal D]_t := \{x\in X: \exists f\in\mathcal D {\rm ~such~ that~} \hat f (t)=x\}$ is dense in $X$ , where $\hat{f}$ denotes the unique \textit{continuous} representative of $f \in \mathcal D$.
\end{defi} Using the definition of the multiplication operator $M(\phi)$ and the cross-section density property, we obtain a condition for a linear set to be dense in $L^p(\cI,X)$. Let us denote by $W^{k,p}(\cI)$, $k \in \mathbb{N}$, $p \in [1,\infty]$, the Sobolev space over $\cI$. Then one gets the following statement. \begin{prop}\label{DensityDenseCrossSectionLp} If a linear set $\mathcal D\subset L^p(\cI,X)$ has a dense cross-section in $X$ and if for every $\phi\in W^{1,\infty}(\cI)$ one has $M(\phi)\mathcal D\subset \mathcal D$, then $\mathcal D$ is dense in $L^p(\cI,X)$. \end{prop} \begin{proof} Let $\Gamma \in L^p(\cI,X)^*$ be a functional on $L^p(\cI,X)$ such that \begin{align*} \langle f, \Gamma\rangle=0, ~~f\in\mathcal D. \end{align*} Now we use the characterisation of the dual space $L^p(\cI,X)^*$ given by Proposition \ref{CharaterizationLpdual}. Then there is a $w^*$-measurable function $\Psi:\cI\rightarrow X^*$ such that \begin{align*} 0=\langle f, \Gamma \rangle =\int_\cI \langle f(t), \Psi(t)\rangle~ dt, {\rm ~for~} f \in\mathcal D. \end{align*} By virtue of $M(\phi)\mathcal D\subset\mathcal D$ for $\phi\in W^{1,\infty}(\cI)$, it follows that \begin{align*} 0=\int_\cI \langle \phi(t)f(t), \Psi(t)\rangle~ dt=\int_\cI \phi(t)\langle f(t), \Psi(t)\rangle~ dt \ , \end{align*} where the function $t\mapsto \langle f(t), \Psi(t)\rangle$ belongs to $L^1(\cI)$. Since $\phi\in W^{1,\infty}(\cI)$ is arbitrary, we conclude that \begin{align*} 0=\langle f(t), \Psi(t)\rangle, \rm{~for~ a.e.~}t\in \cI \ . \end{align*} Then the condition that $\mathcal D$ has a dense cross-section implies $\Psi(t)=0$ for a.e. $t\in \cI$ and hence $\Gamma=0$. \end{proof} Now, let $\{C(t)\}_{t\in \cI}$ be a family of linear operators in the Banach space $X$. We note that the domains of the operators $\{C(t)\}_{t\in \cI}$ may depend on the parameter $t$.
The multiplication operator $\mathcal C$ in $L^p(\cI, X)$ induced by $\{C(t)\}_{t\in \cI}$ is defined by \begin{equation}\label{Mult-OperatorLp} \begin{split} (\mathcal{C}f)(t)&:=C(t)f(t) \ , \ \ {\rm{with \ domain}} \\ \mathrm{dom}(\mathcal{C})&:=\left\{ f\in L^p(\cI,X)~: \begin{matrix} &f(t)\in\mathrm{dom}(C(t)) {\rm ~for~ a.e.~} t\in \cI\\ &\cI \ni t\mapsto C(t)f(t)\in L^p(\cI,X) \end{matrix} \right\}. \end{split} \end{equation} \begin{prop}\label{ClosenesInducedOperatorLp} If $\{C(t)\}_{t\in \cI}$ is a family of closed linear operators in $X$, then the induced operator $\mathcal{C}$ is also closed. \end{prop} \begin{proof} Let the sequence $\{f_n\}_{n\geq 1}\subset \mathrm{dom}(\mathcal{C})$ be such that the limits $\lim_{n\rightarrow\infty} f_n = f$ and $\lim_{n\rightarrow\infty} {\mathcal C} f_n = g$ exist in the $L^p(\cI, X)$-topology. This implies that by a diagonal procedure one can find a subsequence $\{f_{n_k}\}_{k\geq 1}$ such that $\lim_{{n_k}\rightarrow\infty} f_{n_k}(t) = f(t)$ and $\lim_{{n_k}\rightarrow\infty} ({\mathcal C} f_{n_k})(t) = g(t)$ for a.e. $t\in \cI$. Since for a.e. $t\in \cI$ the operator $C(t)$ is closed in $X$, we conclude that $f(t)\in\mathrm{dom}(C(t))$ and $ C(t) f(t) = g(t)$ for almost all $t\in \cI$. On the other hand, since $g\in L^p(\cI, X)$, it follows that $f\in\mathrm{dom}(\mathcal C)$ and that $(\mathcal C f)(t) = g (t)$ almost everywhere in $\cI$. The latter proves that the operator $\mathcal C$ is closed. \end{proof} For a family of generators, we have the following theorem. \begin{thm}\label{InducedGeneratorLp} Let $\{C(t)\}_{t\in \cI}$ be a family of generators in $X$ such that for almost all $t\in \cI$ it holds that $C(t)\in \mathcal G(M,\beta)$ for some $M\geq 1$ and $\beta\in \mathbb{R}$.
If the function $\cI\ni t\mapsto (C(t)+\xi)^{-1}x\in X$ is strongly measurable for $\xi > \beta$, $x\in X$, then the induced multiplication operator $\mathcal{C}$ is a generator in $L^p(\cI, X)$, $p \in [1,\infty)$, and the corresponding semigroup $\{e^{- \tau \mathcal{C}}\}_{\tau \geq 0}$ is given by \begin{align*} (e^{-\tau \mathcal{C}}f)(t)=e^{-\tau C(t)}f(t) {\rm ~~for~ a.e.~} t\in \cI. \end{align*} In particular, one obtains that $\cC \in \cG(M,{\beta})$. \end{thm} \begin{proof} Let $\mathcal J\subset\cI$ be a Borel set with characteristic function $\chi_\mathcal J(\cdot)$. For $\xi>\beta$ and $x\in X$ we define the mapping \begin{align*} f_{\mathcal J,\xi}:= \xi(C(\cdot)+\xi)^{-1}\chi_\mathcal J(\cdot)x:\cI\rightarrow X. \end{align*} Then, by definition, $f_{\mathcal J,\xi}$ is an element of $L^p(\cI,X)$, $f_{\mathcal J,\xi}(t)\in\mathrm{dom}(C(t))$ for a.e. $t\in \cI$ and \begin{align*} C(t)f_{\mathcal J,\xi}(t)= \xi \, \chi_\mathcal J(t)\, x - \xi^2 \, (C(t)+\xi)^{-1}\chi_\mathcal J(t)\, x \end{align*} is also an element of $L^p(\cI,X)$. Hence, $f_{\mathcal J,\xi}\in\mathrm{dom}(\mathcal C)$. Since for a.e. $t\in \cI$ the operator $C(t)$ is a generator in $X$, the Yosida approximation argument yields that \begin{align*} f_{\mathcal J,\xi}(t)\rightarrow \chi_\mathcal J(t)x, {\rm~for~}\xi\rightarrow \infty, ~x\in X, {\rm ~a.e.~}t\in \cI. \end{align*} Note that this holds for any Borel set $\mathcal J\subset\cI$, and linear combinations of the functions $\chi_\mathcal J(\cdot)x$ are dense in $L^p(\cI,X)$. Therefore, $\mathrm{dom}(\mathcal C)$ is dense in $L^p(\cI,X)$. Now, we estimate the iterated resolvents. Recall that for a.e. $t\in \cI$ the operators $C(t)$ belong to the same class $\mathcal G(M,\beta)$. Thus, for any $k\in \mathbb{N}$ we have \begin{align*} \|(C(t)+\lambda)^{-k}\|_{\mathcal B(X)}\leq \frac{M}{(\lambda-\beta)^k}, ~~\lambda>\beta.
\end{align*} Hence, for almost every $t\in \cI$ and any $f\in L^p(\cI,X)$, we obtain \begin{align*} \|((\mathcal C+\lambda)^{-k}f)(t)\|_X=\|(C(t)+\lambda)^{-k}(f(t))\|_X\leq \frac{M}{(\lambda-\beta)^k}\|f(t)\|_X, ~~\lambda>\beta. \end{align*} This implies that \begin{align*} \|(\mathcal C+\lambda)^{-k}f\|_{L^p}\leq\frac{M}{(\lambda-\beta)^k}\|f\|_{L^p}, ~~\lambda>\beta, \end{align*} and therefore, by the Hille-Yosida Theorem (see e.g. \cite[Theorem 2.3.8]{EngNag2000}), it follows that $\mathcal C$ is a generator in $L^p(\cI,X)$. The corresponding semigroup is given by the Euler limit \begin{align*} e^{-\tau\mathcal C}f = \lim_{n\rightarrow \infty} \left(I+\frac{\tau}{n}\mathcal C \right)^{-n}f \ , \ f \in L^p(\cI,X) \ . \end{align*} For any $n\geq1$, we have \begin{align*} \left(\left(I+\frac{\tau}{n}\mathcal C \right)^{-n}f\right)(t) = \left(I+\frac{\tau}{n}C(t) \right)^{-n}f(t) \ . \end{align*} This yields \begin{align*} (e^{-\tau\mathcal C}f)(t) &= \lim_{n\rightarrow \infty}\left(\left(I+\frac{\tau}{n}\mathcal C \right)^{-n}f\right)(t) = \lim_{n\rightarrow \infty} \left(I+\frac{\tau}{n}C(t) \right)^{-n}f(t) = e^{-\tau C(t)}f(t) \ , \end{align*} which coincides with the expression claimed in the theorem. \end{proof} \begin{rem} We note that the domain of the generator $\mathcal C$ does not necessarily have a dense cross-section in $X$, since its elements might not be continuous. \end{rem} An operator $A$ in $X$ that does not depend on the time parameter $t$ trivially induces a multiplication operator $\mathcal A$ in $L^p(\cI, X)$ given by \begin{align*} (\mathcal{A}f)(t):=Af(t) ~~{\rm for~ a.e.~}t\in \cI \end{align*} with \begin{align*} \mathrm{dom}(\mathcal{A}):= \left\{f\in L^p(\cI, X): \begin{matrix} &f(t)\in\mathrm{dom}(A) ~{\rm for~ a.e.~}t\in \cI\\ &\cI \ni t\mapsto Af(t)\in L^p(\cI, X) \end{matrix} \right\}.
\end{align*} Then Theorem \ref{InducedGeneratorLp} immediately yields the following corollary: \begin{cor}\label{InducedConstantGeneratorLp} Let $A$ be a generator in $X$. Then the induced multiplication operator $\mathcal{A}$ is a generator in $L^p(\cI,X)$ and its semigroup is given by \begin{align*} (e^{-\tau \mathcal{A}}f)(t)=e^{-\tau A}f(t), {\rm ~~a.e.~}t\in \cI. \end{align*} \end{cor} The next lemma shows how the domains of two induced multiplication operators in $L^p(\cI, X)$ are related to the domains of the corresponding operators in the space $X$. \begin{lem}\label{DomainsInclusionsLp} Let the assumptions (A1) and (A2) be satisfied. If \begin{align}\label{a:2.2} \esssup_{t\in\cI}\|B(t)x\|_X\leq C_0 \|Ax\|_X<\infty \quad \mbox{for} \quad x\in \mathrm{dom}(A) \ , \end{align} is valid, then $\mathrm{dom}(\mathcal A)\subset\mathrm{dom}(\mathcal B)$. \end{lem} \begin{proof} Let $f\in\mathrm{dom}(\mathcal A)$. Then, by definition of $\mathrm{dom}(\mathcal A)$, one gets $f(t)\in \mathrm{dom}(A)$ for a.e. $t\in \cI$ and hence $f(t)\in \mathrm{dom}(B(t))$ for a.e. $t\in \cI$. Consequently, by virtue of \eqref{a:2.2} we obtain \begin{align*} \esssup_{t\in\cI}\|B(t)A^{-1}\|_{\mathcal B(X)}\leq C_0 \ . \end{align*} Hence, one gets \begin{align*} \|B(t)f(t)\|_X = \|B(t)A^{-1}Af(t)\|_X \leq C_0\|Af(t)\|_X \ , \end{align*} which yields that the function $t\mapsto B(t)f(t)$ is in $L^p(\cI, X)$. Thus, $f\in\mathrm{dom}(\mathcal B)$, i.e. $\mathrm{dom}(\mathcal A)\subset\mathrm{dom}(\mathcal B)$. \end{proof} Note that a family $\{F(t)\}_{t\in\cI}$ of bounded operators is called measurable if the map $\cI \ni t \mapsto F(t)x \in X$ is measurable for each $x \in X$. The following proposition is very useful for our purposes.
\begin{prop}[{\cite{Evans1976}}]\label{OperatorNormsBoundedOperator} Let $\{F(t)\}_{t\in \cI}$ be a measurable family of bounded linear operators on $X$. Then, for the induced multiplication operator $\mathcal F$ on $L^p(\cI,X)$, its norm can be expressed as \begin{align*} \|\mathcal F\|_{\mathcal B(L^p(\cI,X))}= \esssup_{t\in\cI}\|F(t)\|_{\mathcal B(X)} \ . \end{align*} \end{prop} \section{Non-autonomous Cauchy problems and the evolution semigroup approach to solving them} \label{sec:3} Let us consider the non-ACP \eqref{CauchyProblem} in the separable Banach space $X$. We are going to explain an approach to solving it by using \textit{evolution semigroups}. \subsection{Evolution semigroup approach} \label{sec:3.1} Crucial for this approach is the notion of the \textit{evolution pre-generator}. \begin{defi}\label{evol-pre-gener} An operator $\mathcal K$ in $L^p(\cI,X)$, $p \in [1,\infty)$, is called an evolution pre-generator if \item[\;\;\rm (i)] $\mathrm{dom}(\mathcal K)\subset C(\cI,X)$ and $M(\phi)\mathrm{dom}(\mathcal K)\subset\mathrm{dom}(\mathcal K)$ for $\phi\in W^{1,\infty}(\cI)$, \item[\;\;\rm (ii)] $\mathcal KM(\phi)f-M(\phi)\mathcal Kf=M(\dot{\phi})f, ~~f\in\mathrm{dom}(\mathcal K), ~~\phi\in W^{1,\infty}(\cI)$, where $\dot \phi=\partial_t\phi$, \item[\;\;\rm (iii)] the domain $\mathrm{dom}(\mathcal K)$ has a dense cross-section in $X$ (see Definition \ref{DefinitionDenseCrossSection}).\\[-2ex] \noindent If, in addition, the operator $\mathcal K$ is a generator of a semigroup in $L^p(\cI,X)$, then $\mathcal K$ is called an \textit{evolution generator}. \end{defi} \begin{rem} The domain $\mathrm{dom}(\mathcal K)$ of an evolution pre-generator is dense in the Banach space $L^p(\cI,X)$. Indeed, the dense cross-section property {\rm (iii)} together with {\rm (i)} and Proposition \ref{DensityDenseCrossSectionLp} imply the density of $\mathrm{dom}(\mathcal K)\subset L^p(\cI,X)$.
\end{rem} Now, we can present the main idea concerning the solution of the problem \eqref{CauchyProblem}. The next theorem explains why we are interested in such a notion as \textit{evolution semigroups}. \begin{thm}[{\cite[Theorem 4.12]{Nei1981}}]\label{PropagatorEvolutionGeneratorCorrespondeceLpSetting} Between the set of all semigroups $\{e^{-\tau \mathcal K}\}_{\tau\geq0}$ on the Banach space $L^p(\cI,X)$, $p \in [1,\infty)$, generated by an evolution generator $\mathcal K$ and the set of all solution operators (propagators) $\{U(t,s)\}_{(t,s)\in\Delta}$ on the Banach space $X$ there exists a one-to-one correspondence such that the relation \begin{align}\label{CorrespondenceEvolutionSemigroupPropagatorFormula} (e^{-\tau \mathcal K}f)(t)=U(t,t-\tau)\chi_\cI(t-\tau)f(t-\tau), \end{align} holds for $f\in L^p(\cI, X)$ and for a.e. $t\in \cI$. \end{thm} In other words, there is a \textit{one-to-one} correspondence between evolution semigroups and the propagators that solve the non-ACP \eqref{CauchyProblem}. One important example of an evolution generator is $D_0:=\partial_t$ defined in the space $L^p(\cI,X)$ by \begin{align*} D_0f(t):=\partial_t f(t), ~ \mathrm{dom}(D_0):=\{f\in W^{1,p}([0,T],X): f(0)=0\} \ . \end{align*} The operator $D_0$ is a generator of class $\mathcal{G}(1,0)$ of the \textit{right-shift} evolution semigroup $\{S(\tau)\}_{\tau\geq0}$, which has the form \begin{align*} (e^{-\tau D_0}f)(t)=(S(\tau) f)(t):=f(t-\tau)\chi_\cI(t-\tau), ~~f\in L^p(\cI,X), {\rm ~~a.e.~} t\in \cI. \end{align*} The propagator corresponding to the right-shift evolution semigroup is the {identity propagator}, i.e. $U(t,s) = I$ for $(t,s) \in \Delta \subset \cI_0\times \cI_0$, where $\cI_0 = \cI \setminus \{0\}$.
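On a grid discretization of $L^p([0,T],\mathbb{R})$ the right-shift semigroup and its nilpotency can be checked directly; the grid parameters and the test function are illustrative assumptions.

```python
import numpy as np

# Grid discretization of the right-shift semigroup:
# (S(tau) f)(t) = f(t - tau) chi_I(t - tau) on I = [0, T].
T, N = 1.0, 500

def S(tau, f):
    k = int(round(tau / T * N))            # shift by k grid points
    g = np.zeros_like(f)
    if k < N:
        g[k:] = f[:N - k]
    return g

t = np.linspace(0.0, T, N, endpoint=False)
f = np.exp(t)
semigroup_ok = bool(np.allclose(S(0.2, S(0.3, f)), S(0.5, f)))
nilpotent = bool(np.all(S(1.002, f) == 0.0))   # S(tau) = 0 once tau > T
print(semigroup_ok, nilpotent)  # True True
```

The second check is the discrete counterpart of nilpotency, which is the reason the generator $D_0$ has empty spectrum.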
We note that the generator $D_0$ has empty spectrum, since the semigroup $\{S(\tau)\}_{\tau\geq0}$ is \textit{nilpotent} and therefore the integral $\int_0^\infty d\tau \, e^{-\tau\lambda}S(\tau)f$ exists for any $\lambda\in \mathbb{C}$ and for any $f\in L^p(\cI,X)$. For a given operator family $\{C(t)\}_{t\in\cI}$ in $X$ the induced multiplication operator $\mathcal C$ in $L^p(\cI,X)$ is defined by (\ref{Mult-OperatorLp}). We consider in $L^p(\cI,X)$ the operator \begin{align}\label{DefinitionKTilde} \widetilde {\mathcal K} := D_0+\mathcal C \, , ~~\mathrm{dom}(\widetilde {\mathcal K}) := \mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal C) \ . \end{align} \begin{lem}\label{DenseCrossSectionEvolutionOperatorC} If $\mathrm{dom}(\widetilde{\mathcal K})$ has a dense cross-section, then the operator $\widetilde{\mathcal K}$ is an evolution pre-generator. \end{lem} \begin{proof} By (\ref{DefinitionKTilde}) we get $\mathrm{dom}(\widetilde {\mathcal K})\subset\mathrm{dom}(D_0)\subset C(\cI,X)$. Since $\mathcal C$ is an induced multiplication operator, by definition (\ref{Mult-OperatorLp}) it commutes with the operator $M(\phi)$ for $\phi\in W^{1,\infty}(\cI)$. So, with $\mathrm{dom}(\widetilde{\mathcal K})=\mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal C)$ we get $M(\phi)\mathrm{dom}(\widetilde {\mathcal K}) \subset \mathrm{dom}(\widetilde{\mathcal K})$. Then the relation $\widetilde{\mathcal K}M(\phi)f-M(\phi)\widetilde{\mathcal K}f=M(\dot{\phi})f$ for $f\in\mathrm{dom}(\widetilde{\mathcal K})$ (see Definition \ref{evol-pre-gener}, {\rm (ii)}) follows by the Leibniz rule for $(D_0M(\phi)f)(t)=\partial_t (\phi f)(t)$. \end{proof} Now, we make precise the notion of the \textit{solution operator} of the problem \eqref{CauchyProblem} versus the \textit{propagator} $\{U(t,s)\}_{(t,s)\in\Delta}$ on the Banach space $X$ that we first described in the Introduction, \S\ref{sec:1}.
\begin{defi}\label{SolutionDefinition}~ \item[\;\;\rm (i)] The evolution non-ACP \eqref{CauchyProblem} is called \textit{correctly posed} in $\cI_0 = \cI \setminus \{0\}$ if $\widetilde{\mathcal K}$ defined by (\ref{DefinitionKTilde}) is an evolution pre-generator. \item[\;\;\rm (ii)] A propagator $\{U(t,s)\}_{(t,s)\in\Delta}$ is called a \textit{solution operator} of the correctly posed evolution problem \eqref{CauchyProblem} if the corresponding evolution generator $\mathcal K$ (Theorem \ref{PropagatorEvolutionGeneratorCorrespondeceLpSetting}) is an operator extension of $\widetilde{\mathcal K}$, i.e. $\widetilde{\mathcal K} \subseteq \mathcal K$. \item[\;\;\rm (iii)] The evolution problem \eqref{CauchyProblem} has a unique solution operator if $\widetilde {\mathcal K}$ admits only one extension that is an evolution generator. \end{defi} \begin{rem}~ \item[\;\;(i)] It is an open problem whether an evolution pre-generator can admit several extensions which are evolution generators. If this is the case, then the non-ACP \eqref{CauchyProblem} has more than one solution operator. \item[\;\;(ii)] Our Definition \ref{SolutionDefinition} (i) of a correctly posed non-ACP is a \textit{weak} property. For example, the notion of well-posedness developed in \cite{Nickel1996} implies this property. \end{rem} To find extensions of the evolution pre-generator $\widetilde{\mathcal K}$ which are evolution generators is, in general, a nontrivial problem. However, there is a special case that easily guarantees the existence of such an extension and, moreover, its uniqueness. \begin{thm}\label{EvolutionProblemUniqueSolution} Assume that the non-ACP \eqref{CauchyProblem} is correctly posed in $\cI_0$. If the evolution pre-generator $\widetilde {\mathcal K}$ is closable in $L^p(\cI,X)$ and its closure $\mathcal K$ is a generator, then the evolution problem \eqref{CauchyProblem} has a unique solution operator.
\end{thm} \begin{proof} Assume that $\mathcal K$ belongs to the class $\mathcal G(M,\beta)$. Then by Lemma 2.16 of \cite{Nei1981} the estimate \begin{align*} \|f(t)\|_X\leq \frac{M}{(\xi-\beta)^{(p-1)/p}}\|(\mathcal K+\xi)f\|_{L^p}\ , ~~f\in \mathrm{dom}(\mathcal K)\, , \end{align*} holds a.e. in $\cI$ for all $\xi>\beta$. In particular, one gets for any $f\in \mathrm{dom}(\widetilde {\mathcal K})$: \begin{align*} \|f\|_{C(\cI,X)}\leq \frac{M}{(\xi-\beta)^{(p-1)/p}}\|(\widetilde {\mathcal K}+\xi)f\|_{L^p} \, . \end{align*} Hence, for the closure $\mathcal K$ of $\widetilde{\mathcal K}$ one concludes that $\mathrm{dom}(\mathcal K)\subset C(\cI,X)$. Now, we show that ${\mathcal K}$ is an evolution generator. Let $f\in\mathrm{dom}(\mathcal K)$. Then, by the closedness of $\mathcal K$, there is a sequence $f_n\in\mathrm{dom}(\widetilde{\mathcal K})$ such that $f_n\rightarrow f$ and $\widetilde{\mathcal K} f_n\rightarrow \mathcal Kf$, both in $L^p(\cI,X)$. Let $\phi\in W^{1,\infty}(\cI)$. Since $\widetilde{\mathcal K}$ is an evolution pre-generator, Definition \ref{evol-pre-gener} {\rm (ii)} yields \begin{align*} \widetilde {\mathcal K} M(\phi)f_n=M(\phi)\widetilde{\mathcal K} f_n+M(\dot \phi)f_n. \end{align*} Note that the right-hand side converges to $M(\phi)\mathcal K f+M(\dot \phi)f$. Therefore, we conclude that $M(\phi)f\in\mathrm{dom}(\mathcal K)$ and $\mathcal KM(\phi)f=M(\phi)\mathcal K f+M(\dot \phi)f$. Hence, $\mathcal K$ is an evolution generator. Now let $\mathcal K'$ be a further extension of $\widetilde{\mathcal K}$ which is an evolution generator. Since $\mathcal K$ is the closure of $\widetilde {\mathcal K}$ and $\mathcal K'$ is closed, we get $\mathrm{dom}(\mathcal K) \subseteq \mathrm{dom}(\mathcal K')$ and the restriction $\mathcal K'\upharpoonright\mathrm{dom}(\mathcal K)=\mathcal K$. Recall that $e^{-s\mathcal K}(\mathrm{dom}(\mathcal K))\subseteq \mathrm{dom}(\mathcal K)$ for $s\geq 0$.
Then for all $f\in\mathrm{dom}(\mathcal K)$ and $0 < s < \tau$ we obtain \begin{align*} \frac{d}{ds}\{ e^{-(\tau-s)\mathcal K'}e^{-s\mathcal K}f\}=e^{-(\tau-s)\mathcal K'}(\mathcal K'-\mathcal K)e^{-s\mathcal K}f = 0 \ . \end{align*} Hence, the function $s\mapsto e^{-(\tau-s)\mathcal K'}e^{-s\mathcal K}f$ is constant for each $f\in\mathrm{dom}(\mathcal K)$. Comparing its values at $s=0$ and $s=\tau$ yields $e^{-\tau\mathcal K'}f=e^{-\tau\mathcal K}f$. Thus, the semigroup generated by $\mathcal K'$ coincides with the one generated by $\mathcal K$, which implies $\mathcal K=\mathcal K'$. \end{proof} These considerations suggest the following strategy for solving the non-ACP: \\ To find the unique solution operator of the problem \eqref{CauchyProblem} it is sufficient to prove that the evolution pre-generator $\widetilde {\mathcal K}$, defined by \eqref{DefinitionKTilde}, is an \textit{essential generator}, i.e., the closure of $\widetilde{\mathcal K}$ is a generator. \subsection{A special class of evolution equations} \label{sec:3.2} We are interested in non-ACPs of a special form. Setting $C(t) :=A+B(t)$, $t\in \cI$, $\mathrm{dom}(C(t))=\mathrm{dom}(A)\cap\mathrm{dom}(B(t))$, we see that this problem fits into the framework of \eqref{CauchyProblem}. The operator $A$ in $X$ trivially induces a multiplication operator $\mathcal{A}$ in the Banach space $L^p(\cI,X)$. The operator family $\{B(t)\}_{t\in \cI}$ induces a multiplication operator $\mathcal B$. Our aim is to show that the closure of the evolution pre-generator \begin{align*} \widetilde{\mathcal K} := D_0+\mathcal A+\mathcal B,~~\mathrm{dom}(\widetilde {\mathcal K}) := \mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal A)\cap\mathrm{dom}(\mathcal B) \end{align*} becomes an evolution generator under appropriate assumptions on the operator $A$ and the operator family $\{B(t)\}_{t\in\cI}$. First, we consider the operator sum $D_0+\mathcal{A}$. Let $A$ be a generator in $X$ with semigroup $\{e^{-\tau A}\}_{\tau\geq0}$.
Then $\mathcal{A}$ is a generator in $L^p(\cI,X)$ with semigroup $\{e^{-\tau \mathcal A}\}_{\tau\geq0}$ given by $(e^{-\tau \mathcal{A}}f)(t)=e^{-\tau A}f(t)$ for a.e. $t\in \cI$ (cf. Lemma \ref{InducedConstantGeneratorLp}). Since $A$ is time-independent, the operators $\mathcal A$ and $D_0$ commute. Hence, the product \begin{align*} e^{-\tau D_0}e^{-\tau \mathcal A}f=\chi_\cI(\cdot-\tau)(e^{-\tau\mathcal{A}}f)(\cdot-\tau) \end{align*} defines a semigroup on $ L^p(\cI,X)$. The generator of this semigroup is denoted by $\mathcal K_0$ and satisfies the following properties: \begin{lem}\label{L0Properties} Let $A$ be a generator in $X$ inducing the multiplication operator $\mathcal A$ in $L^p(\cI,X)$. Let $D_0$ be the generator of the right-shift semigroup on $L^p(\cI,X)$. Then, the following holds: \item[\;\;\rm (i)] The set $\mathcal D:=\mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal A)$ is dense in $L^p(\cI,X)$ and it has a dense cross-section in $X$. In particular, $\mathrm{dom}(\mathcal K_0)$ has a dense cross-section in $X$. \item[\;\;\rm (ii)] The restriction $\mathcal K_0\upharpoonright\mathcal D =: \widetilde {\mathcal K}_0 = D_0 + \mathcal A$ and the closure $\overline{\widetilde{\mathcal K}_0} = \mathcal K_0$. \item[\;\;\rm (iii)] $\|e^{-\tau \mathcal K_0}\|_{\mathcal B(L^p(\cI,X))}=\|e^{-\tau \mathcal{A}}\|_{\mathcal B(L^p(\cI,X))}$ for $\tau \in \cI $. In particular, the generators $A$, $\mathcal A$ and $\mathcal K_0$ belong to the same class $\mathcal G(M,\beta)$. \end{lem} \begin{proof} (i) Note that for any $\phi\in W^{1,\infty}(\cI)$ we have $M(\phi)\mathcal D\subset \mathcal D$. Now we prove that $\mathcal D$ has a dense cross-section in $X$. To this aim, let $t_0\in \cI \setminus \{0\}$ and $x_0\in X$ be fixed. Since $A$ is a generator in $X$, by the Yosida approximation it follows that \begin{align*} \mathrm{dom}(A)\ni x_\xi:=\xi(A+\xi)^{-1}x_0\rightarrow x_0, {\rm~~as~}\xi\rightarrow\infty.
\end{align*} Therefore, for any $\epsilon>0$ there exists $\xi>0$ such that $\|x_\xi-x_0\|_X<\epsilon$. Let $\psi\in C^\infty(\cI)$ be such that $\psi(0)=0$ and $\psi(t_0)=1$. Then, $g$ defined by $g(t)=\psi(t)x_\xi$ is in $\mathcal D$ and $\|g(t_0)-x_0\|_X<\epsilon$. Assertion (ii) holds by definition and assertion (iii) follows immediately from the fact that $\mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal A)$ is dense in $L^p(\cI,X)$ and that the operator $D_0$ belongs to the class $\mathcal G(1,0)$. \end{proof} \begin{rem}\label{MaxParReg} In general, the operator $\widetilde{\mathcal K}_0 = D_0+\mathcal A$ need \textit{not} be a closed operator, and the domain of its closure $\mathcal K_0$ may be \textit{larger} than $\mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal A)$. Let \begin{align*} \partial_{t}u(t) = - Au(t), \quad u(0) = u_0 \ , \end{align*} be the evolution problem associated with the densely defined and closed operator $A$. Let us recall that if $A$ satisfies the condition of \textit{maximal parabolic regularity}, see e.g. \cite{Acquistapace1987, Prato1984, PruessSchnaubelt2001, Arendt2007}, then $A$ has to be the generator of a \textit{holomorphic} semigroup and the operator $\widetilde{\mathcal K}_0$ is closed. Hence, $\widetilde{\mathcal K}_0= \mathcal K_0$. However, if $A$ is the generator of a holomorphic semigroup, then in general it does not follow that $A$ satisfies the condition of maximal parabolic regularity. This is only true for Hilbert spaces. \end{rem} \section{Existence and uniqueness of the solution operator of the evolution equation}\label{sec:4} In this section we want to find the solution operator for the non-ACP \eqref{EvolutionProblem} in the sense of Definition \ref{SolutionDefinition}. In particular, we show that the closure $\overline{\widetilde{\mathcal K}}$ of the operator $\widetilde{\mathcal K} = D_0+\mathcal A+\mathcal B$ is a generator (cf.
Theorem \ref{EvolutionProblemUniqueSolution}). In fact, we are going to prove that $\mathcal K := \mathcal K_0+\mathcal{B}$ is an evolution generator. Note that, since we deal with several generators, we need to investigate their sums. To this aim we recall two results from the perturbation theory for semigroup generators. \begin{prop}[{\cite[Corollary IX.2.5]{Kato1980}}]\label{KatosPerturbationResults}~ Let $A$ be the generator of a holomorphic semigroup and let $B$ be $A$-bounded with relative bound zero. Then $A + B$ is also the generator of a holomorphic semigroup. \end{prop} The next result is due to J. Voigt \cite{Voigt1977}. It allows one to treat perturbations with non-zero relative bounds. \begin{prop}[{\cite[Theorem 1]{Voigt1977}}]\label{VoigtsTheorem} Let $\{T(t)\}_{t\geq0}$ be a semigroup acting on the Banach space $X$ with generator $A\in\mathcal{G}(M_A,\gamma_A)$. Let $B$ be a densely defined linear operator in $ X$ and assume there is a dense subspace $\mathcal D\subset X$ such that: \item[\;\;\rm (i)] $\mathcal D\subset\mathrm{dom}(A)\cap\mathrm{dom}(B)$, $T(t)\mathcal D\subset \mathcal D$ for $t\geq0$ and for all $x\in \mathcal D$ the function $t\mapsto BT(t)x$ is continuous, \item[\;\;\rm (ii)] there are constants $\beta_1\in(0,\infty]$ and $\beta_2\in[0,1)$ such that for all $x\in \mathcal D$ it holds that \begin{align*} \int_0^{\beta_1} dt \, e^{-\gamma_A t}\|BT(t)x\|\leq \beta_2\|x\| \ . \end{align*} Then there exists a unique semigroup $\{S(t)\}_{t\geq0}$ whose generator $C$ is the closure of the restriction $(A+B)\upharpoonright\mathcal D$, with domain $\mathrm{dom}(C)=\mathrm{dom}(A)$. Moreover, the operator $B\upharpoonright\mathcal D$ is $A\upharpoonright\mathcal D$-bounded and can be extended uniquely to an $A$-bounded operator $\widehat B$ with domain $\mathrm{dom}(\widehat B)=\mathrm{dom}(A)$. For this extension one gets that $C=A+\widehat B$.
In particular, if $B$ is closed, then $B$ is $A$-bounded and $C=A+B$. Moreover, the following estimate holds: \begin{align*} \|S(t)\|\leq \frac{M_A}{1-\beta_2}\left(\frac{M_A}{1-\beta_2}\right)^{t/\beta_1}e^{\gamma_A t}, \quad t \ge 0 \ . \end{align*} \end{prop} \begin{lem}\label{ClBClAalphaBounded} Assume (A1), (A2) and (A3) for the operator $A$ and the operator family $\{B(t)\}_{t\in\cI}$. Then, we get $\|\mathcal B\mathcal A^{-\alpha}\|_{\mathcal B(L^p(\cI,X))}\leq C_\alpha$. \end{lem} \begin{proof} The claim follows directly from Lemma \ref{ClosenesInducedOperatorLp} and Lemma \ref{DomainsInclusionsLp}. \end{proof} \begin{prop}\label{KbecomesGenerator} Let the assumptions (A1), (A2) and (A3) be satisfied. Then $\mathcal K= \mathcal K_0+\mathcal{B}$ is a generator in $L^p(\cI,X)$, $p \in [1,\infty)$, with domain $\mathrm{dom}(\mathcal K)=\mathrm{dom}(\mathcal K_0)$. \end{prop} \begin{proof} We want to apply Proposition \ref{VoigtsTheorem}. Let $\mathcal D=\mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal A)\cap\mathrm{dom}(\mathcal B)$. Since $\mathrm{dom}(\mathcal A)\subset\mathrm{dom}(\mathcal B)$, we have $\mathcal D=\mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal A)$. Using Lemma \ref{L0Properties}, we conclude that $\mathcal D$ is a dense subspace of $L^p(\cI,X)$, $\cI =[0,T]$, which is invariant under the semigroup $\{e^{-\tau \mathcal K_0}\}_{\tau \geq0}$. From Proposition \ref{HolomorphicSemigroupEstimate} we get that for fixed $\alpha\in (0,1)$ and any $\tau\in (0, T] = \cI_0$ there exists a constant $M^A_\alpha$ (which depends only on $\alpha$) such that $\|A^\alpha e^{-\tau A}\|\leq {M^A_\alpha}/{\tau^\alpha}$. We now verify conditions (i) and (ii) of Proposition \ref{VoigtsTheorem}. Let $f\in \mathcal D=\mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal A) \subset C_0(\cI,X)$.
Then for $\alpha\in(0,1)$ and $\tau>0$ we conclude, using $(e^{-\tau\mathcal K_0}f)(t)=\chi_\cI(t-\tau)e^{-\tau A}f(t-\tau)$, that \begin{align*} \|\mathcal{B}e^{-\tau \mathcal K_0}f\|_{L^p}^p &=\int_\cI dt\, \|B(t)A^{-\alpha}A^\alpha e^{-\tau A}f(t-\tau)\chi_\cI(t-\tau)\|_X^p \\ &\leq \esssup_{t\in \cI}\|B(t)A^{-\alpha}\|_{\mathcal B(X)}^p\cdot\frac{(M^A_\alpha)^p}{\tau^{\alpha p}}\,\|f\|_{L^p}^p \leq C_\alpha^p \frac{(M^A_\alpha)^p}{\tau^{\alpha p}}\|f\|_{L^p}^p. \end{align*} Hence, we get $\|\mathcal{B}e^{-\tau \mathcal K_0}f\|_{L^p} \leq C_\alpha {M^A_\alpha} \tau^{-\alpha} \, \|f\|_{L^p}$. Moreover, for $f\in \mathcal D$ we have \begin{align*} \|\mathcal B (e^{-\tau \mathcal K_0}&-I)f\|_{L^p} = \Big\|\int_0^\tau \mathcal B e^{-\sigma \mathcal K_0} \mathcal K_0f \, d\sigma\Big\|_{L^p} = \Big\|\int_0^\tau \mathcal B e^{-\sigma \mathcal A} e^{-\sigma D_0} \mathcal K_0f \, d\sigma\Big\|_{L^p} \leq \\ \leq & \|\mathcal B \mathcal A^{-\alpha}\|_{\mathcal B(L^p(\cI,X))} \int_0^\tau \|\mathcal A^\alpha e^{-\sigma \mathcal A}\|_{\mathcal B(L^p(\cI,X))}\,d\sigma\, \|\mathcal K_0f \|_{L^p(\cI,X)} \leq\\ \leq & C_\alpha M^A_\alpha \int_0^\tau \frac{d\sigma}{\sigma ^\alpha }\, \|\mathcal K_0f \|_{L^p(\cI,X)} = \frac{C_\alpha M^A_\alpha}{1-\alpha} \tau^{1-\alpha} \|\mathcal K_0f \|_{L^p(\cI,X)}, \end{align*} which yields continuity at $\tau = 0$; hence, the function $\cI\ni \tau\mapsto \mathcal{B}e^{-\tau \mathcal K_0}f\in L^p(\cI,X)$ is continuous. Moreover, we get \begin{align*} \int_0^a\|\mathcal{B}e^{-\tau \mathcal K_0}f\|_{L^p(\cI,X)}\, d\tau \leq C_\alpha M^A_\alpha\|f\|_{L^p} \int_0^a\frac{d\tau}{\tau^\alpha} =\frac{C_\alpha M^A_\alpha}{1-\alpha}a^{1-\alpha}\|f\|_{L^p}. \end{align*} Now, take $a< \left({(1-\alpha)}/{(C_\alpha M^A_\alpha)}\right)^{1/(1-\alpha)}$. Hence, all conditions of Proposition \ref{VoigtsTheorem} are satisfied.
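Note in passing (an observation we do not use below) that, with $\beta_1 := a$ and $\beta_2 := C_\alpha M^A_\alpha a^{1-\alpha}/(1-\alpha)<1$, and recalling that $\mathcal K_0\in\mathcal G(M,\beta)$ by Lemma \ref{L0Properties} (iii), the estimate of Proposition \ref{VoigtsTheorem} also provides the quantitative bound \begin{align*} \|e^{-\tau\mathcal K}\|_{\mathcal B(L^p(\cI,X))}\leq \frac{M}{1-\beta_2}\left(\frac{M}{1-\beta_2}\right)^{\tau/a}e^{\beta\tau}, \quad \tau\ge 0 \ . \end{align*}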
So we conclude that the operator $\mathcal K = \mathcal K_0+\mathcal{B}$ with domain $\mathrm{dom}(\mathcal K)=\mathrm{dom}(\mathcal K_0)$ is a generator. \end{proof} Now we can state the main theorem concerning the non-ACP \eqref{EvolutionProblem}. \begin{thm}\label{OurEvolutionProblemUniqueSolution} Let the assumptions (A1), (A2) and (A3) of Assumption \ref{ass:1.1} be satisfied. Then the evolution problem \eqref{EvolutionProblem} has a unique solution operator in the sense of Definition \ref{SolutionDefinition}. \end{thm} \begin{proof} The evolution problem is correctly posed since the set $\mathrm{dom}(D_0)\cap\mathrm{dom}(\mathcal A)$ has a dense cross-section in $X$ (cf. Lemma \ref{L0Properties}). Using Theorem \ref{EvolutionProblemUniqueSolution}, Proposition \ref{KbecomesGenerator} and Proposition~\ref{VoigtsTheorem}, the assertion follows. \end{proof} \begin{rem}~ \begin{enumerate} \item The existence result does \textit{not} require that the operators $B(t)$ are generators. \item The assumption $0\in\varrho(A)$ is made just for simplicity. Otherwise, the generator $A$ can be shifted by a constant $\eta>0$; Proposition \ref{PropertiesFractionalPowers} ensures that the domain of the fractional power of $A$ does not change under this shift. \item The assumptions (A1), (A2), (A3) imply that for a.e. $t\in\cI$ the operator $B(t)$ is infinitesimally small with respect to $A$. Indeed, fix $t\in\cI$; then \begin{align*} \mathrm{dom}(A+\eta)=\mathrm{dom}(A)\subset\mathrm{dom}( A^{\alpha})\subset\mathrm{dom}(B(t)) \end{align*} for $\eta>0$, and so by Proposition \ref{PropertiesFractionalPowers} we have \begin{align*} \|B(t)(A+\eta)^{-1}\|_{\mathcal B(X)}&\leq\|B(t)A^{-\alpha}\|_{\mathcal B(X)}\cdot\| A^{\alpha}( A+\eta)^{-1} \|_{\mathcal B(X)} \leq \frac{C_\alpha C_0}{\eta^{1-\alpha}}.
\end{align*} Therefore, for any $x\in\mathrm{dom}( A)\subset\mathrm{dom}( B(t))$ we get \begin{align*} \|B(t)x\|_X\leq \frac{C_\alpha C_0}{\eta^{1-\alpha}}\cdot\|(A+\eta)x\|_X\leq C_\alpha C_0\eta^{\alpha} \left(\frac{1}{\eta}\|Ax\|_X+\|x\|_X \right). \end{align*} Since the relative bound can be made arbitrarily small by choosing the shift $\eta>0$ large, the perturbation result of Proposition \ref{KatosPerturbationResults} yields that $A+B(t)$ is the generator of a holomorphic semigroup. Hence, the problem \eqref{EvolutionProblem} is a parabolic evolution equation. \end{enumerate} \end{rem} \section{Stability condition} \label{sec:5} As we have already mentioned, the existence result holds even if the operators $B(t)$ are not generators. In the following, we are going to approximate the solution using a Trotter product formula. To this end, we have to take into account condition (A4) of Assumption \ref{ass:1.1}. \begin{rem} \item[\,\;(i)] In (A2) we assumed that the function $t\mapsto B(t)x$ for $x\in\mathrm{dom}(A)\subset\mathrm{dom}(B(t))$ is strongly measurable. Assumption (A4) implies this property, as can easily be seen using the Yosida approximation. Hence, using (A3), (A4) and $\mathrm{dom}(A)\subset\mathrm{dom}(A^\alpha)\subset\mathrm{dom}(B(t))$, assumption (A2) is not needed anymore. \item[\;\;(ii)] By Theorem \ref{InducedGeneratorLp}, assumption (A4) implies that the induced operator $\mathcal B$ is a generator in $L^p(\cI,X)$. \end{rem} Now, let us consider the operator sums $A+B(t)$ and $\mathcal A+\mathcal B$. \begin{lem}\label{SumClAClBContractionGenerator} Let the operators $A$ and $\{B(t)\}_{t\in\cI}$ satisfy assumptions (A1), (A3) and (A4) and let $\mathcal A$ and $\mathcal B$ be the corresponding induced multiplication operators in $ L^p(\cI,X)$.
Then $C(t) := A+B(t)$ is the generator of a holomorphic semigroup on $X$ and it induces the multiplication operator $\mathcal C = \mathcal A+\mathcal B$, which is in turn the generator of a holomorphic semigroup on $ L^p(\cI,X)$. \end{lem} \begin{proof} Using Lemma \ref{ClBClAalphaBounded} and Proposition \ref{KatosPerturbationResults}, we obtain that $C(t)$ and $\mathcal C$ generate holomorphic semigroups. \end{proof} A fundamental tool for approximating the solution operator (propagator) $\{U(t,s)\}_{(t,s)\in\Delta}$ of the evolution equation \eqref{EvolutionProblem} is the \textit{Trotter product formula}. The first step is to establish a general sufficient condition for the existence of this formula in the case of evolution semigroups (\S\ref{sec:3.1}). \begin{prop}[{\cite[Theorem 3.5.8]{EngNag2000}}]\label{ClassicalTrotter} Let $A$ and $B$ be two generators in $X$. If there are constants $M > 0$ and ${\omega} \in \mathbb{R}$ such that the condition \begin{align}\label{eq:5.1} \|(e^{-\tau/n A}e^{-\tau/n B})^n\|_{\mathcal B(X)}\leq Me^{\omega \tau} \ , \end{align} is satisfied for all $\tau\geq0$ and $n \in \mathbb{N}$, and if the closure $C=\overline{A + B}$ of the sum is in turn a generator, then the corresponding semigroup is given by the Trotter product formula \begin{align}\label{eq:5.2a} e^{-\tau C}x=\lim_{n\rightarrow \infty}(e^{-\tau/n A}e^{-\tau/n B})^n \, x,~~x\in X \ , \end{align} with uniform convergence in $\tau$ on compact intervals. \end{prop} \begin{rem} The condition \eqref{eq:5.1} is called the Trotter \textit{stability condition} for the pair of operators $\{A,B\}$. It turns out that if the Trotter stability condition is satisfied for the pair $\{A,B\}$, then it holds also for the pair $\{B,A\}$, i.e.
there are constants $M' > 0$ and $\omega' \in \mathbb{R}$ such that \begin{align*} \|(e^{-\tau/n B}e^{-\tau/n A})^n\|_{\mathcal B(X)}\leq M'e^{\omega' \tau} \end{align*} for all $\tau\geq 0$ and $n\in \mathbb{N}$. In particular, the operators $A$ and $B$ can be interchanged in formula \eqref{eq:5.2a} without modification of the left-hand side. \end{rem} In the following, we consider two different splittings of the evolution semigroup generator $\mathcal K $, see \S\ref{sec:3.2}: \begin{align*} \mathcal K = \overline{D_0 + (\mathcal A + \mathcal B)} = \overline{D_0 + \mathcal C}, ~~{\rm and~~}\mathcal K = \mathcal K_0 + \mathcal B \ . \end{align*} For them we want to apply the Trotter product formula \eqref{eq:5.2a}. Note that the Trotter stability condition \eqref{eq:5.1} can be expressed in terms of the operators $A$ and $B(t)$. \begin{defi}\label{StabilityDefinition} Let $X$ be a separable Banach space. \begin{enumerate} \item Let $\{C(t)\}_{t\in\cI}$ be a family of generators in $X$. The family $\{C(t)\}_{t\in\cI}$ is called stable if there is a constant $M > 0$ such that \begin{align}\label{eq:5.2} \esssup_{(t,s) \in {\Delta}}\left\|\prod_{j=1}^{n\leftarrow} e^{-\frac{(t-s)}{n} C(s + \frac j n(t-s))}\right\|_{\mathcal B(X)} \le M \end{align} holds for any $n\in\mathbb{N}$. \item Let $A$ be a generator and let $\{B(t)\}_{t\in\cI}$ be a family of generators in $X$. The family $\{B(t)\}_{t\in\cI}$ is called $A$-stable if there is a constant $M > 0$ such that \begin{align}\label{eq:5.3} \esssup_{(t,s)\in {\Delta}}\left\|\prod_{j=1}^{n\leftarrow} G_j(t,s;n)\right\|_{\mathcal B(X)} \le M \end{align} holds for any $n\in\mathbb{N}$, where $G_j(t,s;n)$ is defined by \eqref{ApproximationPropagatorIntroduction}. \end{enumerate} In both cases the products are ordered with the index $j$ increasing from right to left.
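A simple sufficient condition for stability (an illustration only, not assumed in the sequel): if the family is uniformly quasi-contractive, i.e. $\|e^{-\tau C(t)}\|_{\mathcal B(X)}\leq e^{\omega\tau}$ for all $\tau\geq0$ and a.e. $t\in\cI$ with an $\omega\in\mathbb{R}$ independent of $t$, then submultiplicativity of the operator norm yields \begin{align*} \left\|\prod_{j=1}^{n\leftarrow} e^{-\frac{(t-s)}{n} C(s + \frac j n(t-s))}\right\|_{\mathcal B(X)}\leq \left(e^{\omega\frac{t-s}{n}}\right)^{n} = e^{\omega(t-s)}\leq e^{|\omega| T}, \end{align*} so that \eqref{eq:5.2} holds with $M=e^{|\omega| T}$; an analogous computation applies to \eqref{eq:5.3} when $A$ and all the $B(t)$ are uniformly quasi-contractive.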
\end{defi} \begin{rem} There are different types of stability conditions known for evolution equations. One of them is the condition of \textit{Kato-stability}, which is equivalent to the \textit{renormalizability} condition for the underlying Banach space, see \cite[Definition 4.1]{NeiZag2009}. We note that the Trotter stability condition below involves only the products (\ref{eq:5.2}), (\ref{eq:5.3}) evaluated for equidistant time steps $(t-s)/n$. Therefore, it is \textit{weaker} than the Kato-stability condition. \end{rem} \begin{prop}\label{StabilityProposition} Let $A$ be a generator and let $\{B(t)\}_{t\in\cI}$ be a family of generators in the separable Banach space $X$. Let $\mathcal A$ and $\mathcal B$ be the multiplication operators in $L^p(\cI,X)$ induced, respectively, by $A$ and by $B(t)$. Let $\mathcal K_0 : =\overline{D_0+\mathcal A}$. \item[\rm \;\;(i)] If the operator family $\{C(t)\}_{t\in\mathcal I}$ is stable (\ref{eq:5.2}), then the pair $\{D_0,\cC\}$ is Trotter-stable (\ref{eq:5.1}). \item[\rm\;\;(ii)] If the family $\{B(t)\}_{t\in\mathcal I}$ is $A$-stable (\ref{eq:5.3}), then the pair $\{\cK_0,\cB\}$ is Trotter-stable (\ref{eq:5.1}). \end{prop} \begin{proof} (i) The right-shift semigroup $\{S(\tau)\}_{\tau\geq0}$ (\S \ref{sec:3.1}) is nilpotent, and hence the product is zero for $\tau \ge T$. We have \begin{align*} ((e^{-\frac{\tau}{n} \cC}e^{-\frac{\tau}{n} D_0})^nf)(t) = \prod^{\rightarrow n-1}_{j=0} e^{-\frac{\tau}{n}C(t - j\frac{\tau}{n})}\chi_\cI(t-\tau)f(t-\tau),\quad t\in\mathbb{R}, \end{align*} $f \in L^p(\cI,X)$, $p \in [1,\infty)$, where the product is increasingly ordered in $j$ from the left to the right. Let us introduce the left-shift semigroup $L(\tau)$, $\tau \ge 0$, \begin{align}\label{L-shift} (L(\tau)f)(t) := \chi_{\cI}(t+\tau)f(t+\tau), \quad t\in \cI, \quad f \in L^p(\cI,X),\quad p \in [1,\infty).
\end{align} Using this semigroup we find that \begin{align*} \left( L(\tau) \left(e^{-\frac \tau n\cC}e^{-\frac \tau n D_0}\right)^n f\right)(t) = \left(\prod_{j=1}^{n\leftarrow} e^{-\frac\tau n C(t+ j\frac{\tau}{n})}\right)\chi_\cI(t+\tau)f(t), \quad t \in \cI, \end{align*} for $f \in L^p(\cI,X)$, $p \in [1,\infty)$. Therefore, the operator on the left-hand side is a multiplication operator induced by \begin{align*} \left\{\prod_{j=1}^{n\leftarrow} e^{-\frac\tau n C(t+j\frac{\tau}{n})} \chi_\cI(t+\tau)\right\}_{t\in\cI}. \end{align*} Using Proposition \ref{OperatorNormsBoundedOperator} and assuming that $\{C(t)\}_{t\in\cI}$ is stable \eqref{eq:5.2}, we obtain the estimate \begin{align*} \|L(\tau)\left(e^{-\frac \tau n\cC}e^{-\frac \tau n D_0}\right)^n\|_{\mathcal B(L^p(\cI,X))}= \esssup_{0 \le t \le T-\tau} \left\|\prod_{j=1}^{n\leftarrow} e^{-\frac{\tau}{n} C(t+j\frac{\tau}{n})} \right\|_{\cB(X)} \le M. \end{align*} Since for $\tau \in [0,T)$ one has \begin{align}\label{L-shift-id} \|\left(e^{-\frac \tau n\cC}e^{-\frac \tau n D_0}\right)^n\|_{\mathcal B(L^p(\cI,X))} = \|L(\tau)\left(e^{-\frac \tau n\cC}e^{-\frac \tau n D_0}\right)^n\|_{\mathcal B(L^p(\cI,X))} \ , \end{align} this estimate proves claim (i). In a similar manner one proves claim (ii). \end{proof} Now, let us introduce the operator family \begin{align*} T(\tau) = e^{-\tau\mathcal B}e^{-\tau \mathcal K_0}, \quad \tau \ge 0. \end{align*} Note that if the family $\{B(t)\}_{t\in\mathcal I}$ is $A$-stable \eqref{eq:5.3}, then \begin{align}\label{T<M} \|T\left(\frac\tau n\right)^n\|_{\mathcal B(L^p(\mathcal I,X))} \leq M, ~~{\rm for}~~ n\in\mathbb{N} ~~{\rm and}~~ \tau\geq 0.
\end{align} \begin{lem}\label{StabilityLemma} If the operator family $\{B(t)\}_{t\in\mathcal I}$ is $A$-stable, then \begin{align*} \|T\left(\frac\tau n\right)^m\|_{\mathcal B(L^p(\mathcal I,X))} \leq M \end{align*} for any $m \in \mathbb{N}$, $n \in \mathbb{N}$ and $\tau \ge 0$. In particular, we have \begin{align*} \|T(\tau)^m\|_{\mathcal B(L^p(\mathcal I,X))} \leq M \end{align*} for any $m \in \mathbb{N}$ and $\tau \ge 0$. \end{lem} \begin{proof} After the change of variables $\tau = {\sigma n}/{m}$, the first statement reduces to the estimate (\ref{T<M}): \begin{align*} \sup_{\tau \geq 0} \|T\left(\frac\tau n\right)^m\|_{\mathcal B(L^p(\mathcal I,X))} = \sup_{\sigma \geq 0} \|T\left(\frac\sigma m\right)^m\|_{\mathcal B(L^p(\mathcal I,X))}\leq M \ . \end{align*} Setting $n =1$ we get the second statement. \end{proof} \section{Convergence in the strong topology}\label{sec:6} Theorem \ref{OurEvolutionProblemUniqueSolution} yields the existence and uniqueness of a solution operator $U(t,s)$, $(t,s)\in\Delta$, for the evolution equation \eqref{EvolutionProblem}. This solution operator may be approximated by product-type formulae in different operator topologies under the hypotheses of Assumption \ref{ass:1.1} and suitable stability conditions. We start with the claim that the classical Trotter formula can be used to prove the \textit{strong} operator convergence in $L^p(\cI,X)$ of the product approximants for the semigroup generated by $\mathcal K$. \begin{thm}\label{TheoTPFstrTopInFrakX} Let the assumptions (A1), (A3) and (A4) be satisfied. Let $\mathcal{A}$ and $\mathcal{B}$ be the induced multiplication operators in $L^p(\cI,X)$. Define $\mathcal K_0:=\overline{D_0+\mathcal{A}}$ and let $\mathcal K=\mathcal K_0+\mathcal B$.
\item[\;\;\rm(i)] If the operator family $\{C(t)\}_{t\in\cI}$ is stable, then \begin{align*} e^{-\tau \mathcal K} = \slim (e^{-\frac{\tau}{n} D_0}e^{-\frac{\tau}{n}(\mathcal A+\mathcal B)})^n= \slim (e^{-\frac{\tau}{n}(\mathcal A+\mathcal B)}e^{-\frac{\tau}{n} D_0})^n \end{align*} in the strong operator topology uniformly in $\tau\ge 0$. \item[\;\;\rm(ii)] If the operator family $\{B(t)\}_{t\in\cI}$ is $A$-stable, then \begin{align*} e^{-\tau \mathcal K} = \slim (e^{-\frac{\tau}{n}\mathcal K_0}e^{-\frac{\tau}{n}\mathcal B})^n= \slim (e^{-\frac{\tau}{n}\mathcal B}e^{-\frac{\tau}{n}\mathcal K_0})^n \end{align*} in the strong operator topology uniformly in $\tau \ge 0$. \end{thm} \begin{proof} The proof follows immediately from Proposition \ref{KbecomesGenerator}, Proposition \ref{ClassicalTrotter} and Proposition \ref{StabilityProposition}. \end{proof} Theorem \ref{TheoTPFstrTopInFrakX} provides information about the strong convergence of the Trotter product formula in $L^p(\cI,X)$. Notice that the two different operator splittings of the operator $\cK$ yield in Theorem~\ref{TheoTPFstrTopInFrakX} the two different product approximations (i) and (ii). Let $\{\{U_n(t,s)\}_{(t,s)\in{\Delta}}\}_{n\in\mathbb{N}}$ be the operator family defined by \eqref{ApproximationPropagatorIntroduction} and let $\{\mathcal U(\tau)\}_{\tau\geq0}$ be the semigroup generated by $\mathcal K$, i.e. $e^{-\tau \mathcal K}=e^{-\tau(\mathcal B+\mathcal K_0)}=\mathcal U(\tau)$. Then for any $f\in L^p(\cI,X)$ one gets \begin{align*} ((e^{-\frac \tau n \mathcal B}e^{-\frac \tau n \mathcal K_0})^nf)(t)= U_n(t,t-\tau)\chi_\cI(t-\tau)f(t-\tau), \quad \quad t\in \cI \ .
\end{align*} Since \begin{align*} (e^{-\tau \mathcal K}f)(t)=(e^{-\tau(\mathcal B+\mathcal K_0)}f)(t)=(\mathcal U(\tau)f)(t)=U(t,t-\tau)\chi_\cI(t-\tau)f(t-\tau) \ , \end{align*} we conclude that \begin{align}\label{eq:6.main} (\{(e^{-\frac \tau n \mathcal B}e^{-\frac \tau n \mathcal K_0})^n&-e^{-\tau(\mathcal B+\mathcal K_0)}\}f)(t) =\{U_n(t,t-\tau)-U(t,t-\tau)\}\chi_\cI(t-\tau)f(t-\tau) \ . \end{align} Note that the product formula in the reverse order yields \begin{align*} (\{(e^{-\frac{\tau}{n}\mathcal K_0}e^{-\frac{\tau}{n}\mathcal B})^n&-e^{-\tau \mathcal K}\}f)(t)=\left\{U^\prime_n(t,t-\tau)-U(t,t-\tau)\right\}\chi_\cI(t-\tau)f(t-\tau) \ , \end{align*} where the approximating propagator is given by \begin{equation}\label{Un2} \begin{split} U^\prime_n(t,s) &:=\prod^{n-1\leftarrow}_{j=0} G^\prime_j(t,s\,;n), \quad n =1,2,\ldots\;,\\ G^\prime_j(t,s\,;n) &:= e^{-\frac{t-s} n A}e^{-\frac{t-s}{n} B(s+j\frac{t-s}{n})}, \quad j = 0,1,\ldots,n-1 \ . \end{split} \end{equation} Note that in the case of Theorem \ref{TheoTPFstrTopInFrakX} (i) the Trotter product approximations take the form \begin{equation}\label{Vn1} V_n(t,s) := \prod^{n\leftarrow}_{j=1} e^{-\frac{t-s} n C(s+ j\frac{t-s}{n})} \quad \mbox{and}\quad V^\prime_n(t,s) := \prod^{n-1\leftarrow}_{j=0} e^{-\frac{t-s} n C(s+j\frac{t-s}{n})} \ , \end{equation} for $(t,s) \in {\Delta}$, $n = 1,2,\ldots$ and $C(t)=A+B(t)$. \begin{thm}\label{TheoTPFstrTopInX} Let the assumptions (A1), (A3) and (A4) be satisfied.
\item[\rm\;\;(i)] If the family $\{C(t)\}_{t\in \cI}$ is stable, then \begin{equation*} \begin{split} 0 &= \lim_{n\to\infty}\sup_{\tau\in\cI}\int_0^{T-\tau}ds \, \|\{V_n(s+\tau,s)-U(s+\tau,s)\}x\|^p_X \\ &= \lim_{n\to\infty}\sup_{\tau\in\cI}\int_0^{T-\tau}ds\, \|\{V^\prime_n(s+\tau,s)-U(s+\tau,s)\}x\|^p_X \ , \end{split} \end{equation*} for any $p\in[1,\infty)$ and $x \in X$, where the families $\{\{V_n(t,s)\}_{(t,s)\in{\Delta}}\}_{n\in\mathbb{N}}$ and $\{\{V^\prime_n(t,s)\}_{(t,s)\in{\Delta}}\}_{n\in\mathbb{N}}$ are defined by \eqref{Vn1}. \item[\rm\;\;(ii)] If the family $\{B(t)\}_{t\in\cI}$ is $A$-stable, then \begin{equation*} \begin{split} 0 &= \lim_{n\to\infty}\, \sup_{\tau\in\cI}\int_0^{T-\tau}ds\,\|\{U_n(s+\tau,s)-U(s+\tau,s)\}x\|^p_X \\ &=\lim_{n\to\infty}\, \sup_{\tau\in\cI}\int_0^{T-\tau}ds\,\|\{U^\prime_n(s+\tau,s)-U(s+\tau,s)\}x\|^p_X \ , \end{split} \end{equation*} for any $p\in[1,\infty)$ and $x \in X$, where the families $\{\{U_n(t,s)\}_{(t,s)\in{\Delta}}\}_{n\in\mathbb{N}}$ and $\{\{U'_n(t,s)\}_{(t,s)\in{\Delta}}\}_{n\in\mathbb{N}}$ are defined, respectively, by \eqref{ApproximationPropagatorIntroduction} and \eqref{Un2}. \end{thm} \begin{proof} We prove only the statement for $\{\{U_n(t,s)\}_{(t,s)\in {\Delta}}\}_{n\in\mathbb{N}}$. The other statements can be proved similarly. Take $f=\phi \otimes x\in L^p(\cI)\otimes X \subset L^p(\cI, X)$ for $x\in X$ and $\phi\in L^p(\cI)$. Then, we have \begin{align*} \|&\left((e^{-\frac{\tau}{n}\mathcal B}e^{-\frac{\tau}{n}\mathcal K_0})^n-e^{-\tau \mathcal K}\right)f\|^p_{L^p} = \int_0^T ds\, \|\{U_n(s,s-\tau)-U(s,s-\tau)\}\chi_\cI(s-\tau)f(s-\tau)\|^p_X = \\ & = \int_0^{T-\tau}ds\, \|\{U_n(s+\tau,s)-U(s+\tau,s)\}f(s)\|^p_X = \int_0^{T-\tau}ds\, |\phi(s)|^p\|\{U_n(s+\tau,s)-U(s+\tau,s)\}x\|^p_X , \end{align*} $\tau \in \cI$. Choosing $\phi(t)=1$ a.e. in $\cI$ and using that, by Theorem \ref{TheoTPFstrTopInFrakX} (ii), the left-hand side tends to zero uniformly in $\tau$, the claim follows.
\end{proof} \begin{rem} We note that the corresponding convergences in Theorem \ref{TheoTPFstrTopInFrakX} and in Theorem \ref{TheoTPFstrTopInX} are equivalent. \end{rem} \section{Convergence in the operator-norm topology} \label{sec:7} In \S \ref{sec:6} we proved the convergence of the product approximants $U_n(t,s)$ to the solution operator $U(t,s)$ in the strong operator topology. In applications, convergence in the operator-norm topology is more useful, especially if the rate of convergence can be estimated. Then, in contrast to the analysis in \cite{Batkai2011}, we obtain a vector-independent estimate of the accuracy of the approximation of the solution by the product formulae. In this section, we want to estimate the rate of convergence in $\sup_{(t,s)\in{\Delta}}\|U(t,s)-U_n(t,s)\|_{\mathcal B(X)} \longrightarrow 0$ as $n\to \infty$. An important ingredient for that is an error bound for the Trotter product formula approximation of the evolution semigroup: $\sup_{\tau \ge 0}\|(e^{-\tau\cK_0/n}e^{-\tau\cB/n})^n-e^{-\tau\mathcal K}\|_{\mathcal B( L^p(\cI,X))} \longrightarrow 0$ as $n\to \infty$. \subsection{Technical Lemmata}\label{sec:7.1} Here we state and prove all technical lemmata that we need to demonstrate convergence and to estimate the error bound of the Trotter product formula approximations in the operator norm of $ L^p(\cI,X)$. \begin{lem} Assume (A1), (A2) and (A5). Then, the operator $\overline{\mathcal A^{-1}\mathcal B}$ is bounded on $L^p(\cI,X)$ with norm $\|\overline{\mathcal A^{-1}\mathcal B}\|_{\mathcal B( L^p(\cI,X))}\leq C_1^*$, $p \in [1,\infty)$. \end{lem} \begin{proof} Let $x\in \mathrm{dom}(A)\subset \mathrm{dom}(B(t))$ and $\xi \in X^*$. Then, it holds that \begin{align*} |\langle A^{-1}B(t)x, \xi \rangle| = |\langle x, B(t)^*(A^{-1})^*\xi \rangle| \leq C_{1}^* \|x\|\,\|\xi\|, {\rm ~~a.e. ~}t\in\cI.
\end{align*} Since $\mathrm{dom}(A)\subset X$ is dense, we conclude that ${\rm ~ess~sup}_{t\in \cI} \|\overline{A^{-1}B(t)}\|_{\mathcal B(X)}\leq C_1^*$. Let $\Gamma \in L^p(\cI,X)^*$. By Proposition \ref{CharaterizationLpdual} (iii) we find \begin{align*} \Gamma(\mathcal A^{-1}\mathcal Bf) = \int_{\cI}\langle A^{-1}B(t)f(t),\Psi(t)\rangle dt, \quad f \in \mathrm{dom}(\mathcal B), \end{align*} where $\Psi$ is the function representing $\Gamma$. Then the estimate ${\rm ~ess~sup}_{t\in \cI}\|\overline{A^{-1}B(t)}\|_{\mathcal B(X)}\leq C_1^*$ implies \begin{align*} |\Gamma(\mathcal A^{-1}\mathcal Bf) | \le C^*_1 \|f\|_{L^p(\cI,X)}\left(\int_{\cI}\|\Psi(t)\|^{p'} dt\right)^{1/p'}, \end{align*} $f \in \mathrm{dom}(\mathcal B)$, $\Gamma \in L^p(\cI,X)^*$, which yields \begin{align*} |\Gamma(\mathcal A^{-1}\mathcal Bf) | \le C^*_1 \|f\|_{L^p(\cI,X)} \|\Gamma\|_{L^p(\cI,X)^*}, \quad f \in \mathrm{dom}(\mathcal B). \end{align*} Hence we get $\|\mathcal A^{-1}\mathcal Bf\|_{L^p(\cI,X)} \le C^*_1\|f\|_{L^p(\cI,X)}$, $f \in \mathrm{dom}(\mathcal B)$, which proves the claim. \end{proof} \begin{rem} By assumption $(A1)$ the operator $\mathcal A$ generates a holomorphic semigroup. Note that the operator $\mathcal K_0$ is not a generator of a holomorphic semigroup. Indeed, we have \begin{align*} (e^{-\tau \mathcal K_0}f)(t)=(e^{-\tau D_0}e^{-\tau A}f)(t)= e^{-\tau A}f(t-\tau)\chi_\cI(t-\tau), \quad f \in L^p(\cI,X) \ . \end{align*} Since the right-hand side is zero for $\tau\geq t$, the semigroup admits no analytic extension to the complex plane $\mathbb{C}$. \end{rem} We remark that, in general, $\mathrm{dom}(\mathcal K_0)\subset\mathrm{dom}(\mathcal A)$ does not hold, but the following inclusion can be proved. \begin{lem}\label{L0AalphaDomainsInclusions} Let the assumption (A1) be satisfied. Then for $\alpha\in[0,1)$ one gets that \begin{equation}\label{eq:6.1} \mathrm{dom}(\mathcal K_0)\subset\mathrm{dom}(\mathcal A^\alpha) \ .
\end{equation} \end{lem} \begin{proof} The semigroup generated by $\mathcal K_0$ is nilpotent, i.e.\ $e^{-\tau \mathcal K_0}=0$ holds for $\tau\geq T$. Hence, the generator $\mathcal K_0$ has empty spectrum. Since the semigroup is nilpotent, one gets \begin{align*} \mathcal A^\alpha \mathcal K_0^{-1}f=\mathcal A^\alpha \int_0^\infty e^{-\tau \mathcal K_0}f d\tau= \int_0^T e^{-\tau D_0}\mathcal A^\alpha e^{-\tau \mathcal A}f d\tau, \quad f \in L^p(\cI,X) \ . \end{align*} For $\alpha\in[0,1)$ and $\tau>0$, we obtain $\|\cA^\alpha e^{-\tau \mathcal A}\|\leq {M^A_\alpha }/{\tau^\alpha}$ and, in addition, \begin{align*} \int_0^T \tau^{-\alpha}d\tau = \frac{T^{1-\alpha}}{1-\alpha}<\infty \ . \end{align*} Hence the integrand $e^{-\tau D_0}\mathcal A^\alpha e^{-\tau \mathcal A}f$ is integrable on $[0,T]$, which implies the claim. \end{proof} \begin{lem}\label{TechnicalLemma1} Let the assumptions (A1), (A2), and (A3) be satisfied. Then there is a constant ${\Lambda}_{\alpha} > 0$ such that \begin{equation}\label{eq:6.2} \|\mathcal A^\alpha e^{-\tau \mathcal K}\|_{\mathcal B( L^p(\cI,X))}\leq \frac{{\Lambda}_{\alpha}}{\tau^\alpha}, \quad \gt > 0. \end{equation} \end{lem} \begin{proof} Note that the following holds: \begin{align*} \frac{d}{d\sigma} e^{-(\tau-\sigma) \mathcal K_0}e^{-\sigma \mathcal K}f &= e^{-(\tau-\sigma)\mathcal K_0}\mathcal K_0e^{-\sigma \mathcal K}f + e^{-(\tau-\sigma)\mathcal K_0}(-\mathcal K)e^{-\sigma \mathcal K}f=-e^{-(\tau-\sigma)\mathcal K_0}\mathcal Be^{-\sigma \mathcal K}f \ .
\end{align*} So, we get \begin{align*} \int_0^\tau e^{-(\tau-\sigma)\mathcal K_0}\mathcal Be^{-\sigma \mathcal K}f d\sigma=e^{-\tau \mathcal K_0}f-e^{-\tau \mathcal K}f \ , \end{align*} and hence \begin{align*} \mathcal A^\alpha e^{-\tau \mathcal K} f = \mathcal A^\alpha e^{-\tau \mathcal K_0}f - \mathcal A^\alpha \int_0^\tau e^{-(\tau-\sigma)\mathcal K_0}\mathcal Be^{-\sigma \mathcal K}f d\sigma \ . \end{align*} Now we estimate the two terms on the right-hand side. First we find that \begin{align*} \|\mathcal A^\alpha e^{-\tau \mathcal K_0}f\|_{L^p(\cI,X)}\leq \|A^\alpha e^{-\tau A}f(\cdot-\tau)\|_{L^p(\cI,X)}\leq \frac{M^A_\alpha}{\tau^\alpha}\|f\|_{L^p(\cI,X)} \ . \end{align*} Now, let $f\in\mathrm{dom}(\mathcal K)$. Since $\mathrm{dom}(\mathcal K)=\mathrm{dom}(\mathcal K_0)\subset\mathrm{dom}(\mathcal A^\alpha)$ (see Lemma \ref{L0AalphaDomainsInclusions}), one gets \begin{align*} \mathcal A^\alpha\int_0^\tau d\sigma \, e^{-(\tau-\sigma)\mathcal K_0}\mathcal Be^{-\sigma \mathcal K} f =\int_0^\tau d\sigma\, A^\alpha e^{-(\tau-\sigma)\mathcal K_0}\mathcal B \mathcal A ^{-\alpha} \mathcal A ^\alpha e^{-\sigma \mathcal K} f \ . \end{align*} There is a constant $C_\alpha>0$ such that $\|\mathcal B \mathcal A^{-\alpha}\|_{\mathcal B( L^p(\cI,X))}\leq C_\alpha$ (cf. Lemma \ref{ClBClAalphaBounded}). Then we find the estimate for the sum of the two terms: \begin{align*} &\|\mathcal A^\alpha e^{-\tau \mathcal K}f\|_{L^p(\cI,X)} \leq \\ &\leq \frac{M^A_\alpha}{\tau^\alpha}\|f\|_{L^p(\cI,X)} +\int_0^\tau \|A^\alpha e^{-(\tau-\sigma)\mathcal K_0}\|_{\mathcal B( L^p(\cI,X))}\cdot \|\mathcal B \mathcal A ^{-\alpha} \|_{\mathcal B( L^p(\cI,X))}\cdot\| \mathcal A ^\alpha e^{-\sigma \mathcal K} f \|_{L^p(\cI,X)} d\sigma \leq \\ &\leq \frac{M^A_\alpha}{\tau^\alpha}\|f\|_{L^p(\cI,X)} + C_\alpha\int_0^\tau \frac{M^A_\alpha} {(\tau-\sigma)^\alpha}\|\mathcal A^\alpha e^{-\sigma \mathcal K}f\|_ {L^p(\cI,X)} d\sigma \ .
\end{align*} Let $\|f\|_{L^p(\cI,X)} \le 1$ and introduce $F(\gt) := \|\cA^{\alpha} e^{-\gt \cK}f\|_{L^p(\cI,X)}$. Then the Gronwall-type inequality \begin{align*} 0\leq F(\tau) \leq c_1 \tau^{-\alpha}+c_2\int_0^\tau F(\sigma) (\tau-\sigma)^{-\alpha} d\sigma \end{align*} is satisfied, where $c_1=M^A_{\alpha}$ and $c_2= C_\alpha M^A_{\alpha}$. The Gronwall-type lemma (see Appendix, Lemma \ref{GronwallLemma}) states that the estimate $F(\tau)\tau^\alpha\leq 2c_1$ is valid for $\tau\in [0,\tau_0]$, where $\tau_0=\sigma_\alpha\cdot\min\left\{{1}/{c_2} ,\left({1}/{c_2}\right)^{1/(1-\alpha)}\right\}$ and $\sigma_\alpha$ depends only on $\alpha$. Hence, we obtain \begin{equation}\label{eq:6.3a} \|\mathcal A^\alpha e^{-\tau \mathcal K}f\| \leq \frac{2M^A_\alpha}{\tau^\alpha} \ , \end{equation} for $\gt \in (0,\gt_0]$. Since $e^{-\gt \cK}f = 0$ for $\gt \ge T$, it remains to consider the case $0 < \gt_0 < T$. If $\gt \in (\gt_0,T]$, then we find \begin{displaymath} \|\mathcal A^\alpha e^{-\tau \mathcal K}f\| \le \|\mathcal A^\alpha e^{-\tau_0 \mathcal K}e^{-(\gt-\gt_0)\cK}f\| \le \frac{2M^A_\alpha \,M_\cK}{\tau^\alpha_0} \ , \end{displaymath} where $\|e^{-\gt \cK}f\| \le M_\cK$ for $\gt \ge 0$ and $\|f\|_{L^p(\cI,X)} \le 1$. Here we have used that any evolution semigroup is bounded. Hence \begin{displaymath} \|\mathcal A^\alpha e^{-\tau \mathcal K}f\| \le \frac{\gt^{\alpha}}{\gt^{\alpha}_0}\frac{2M^A_\alpha \,M_\cK}{\tau^\alpha} \le \frac{T^{\alpha}}{\gt^{\alpha}_0}\frac{2M^A_\alpha \,M_\cK}{\tau^\alpha} \ , \end{displaymath} for $\gt \in (\gt_0,T]$ and $\|f\|_{L^p(\cI,X)} \le 1$. Setting ${\Lambda}_{\alpha} := \max\left\{2M^A_{\alpha}, ({2M^A_\alpha \,M_\cK T^{\alpha}})/{\gt^{\alpha}_0}\right\}$ and taking the supremum over the unit ball, we complete the proof. \end{proof} \begin{lem}\label{TechnicalLemma2} Let the assumptions (A1), (A3) and (A4) be satisfied.
If the family of generators $\{B(t)\}_{t\in\cI}$ is $A$-stable, then there exist constants $c_1 > 0$ and $c_2>0$ such that \begin{equation}\label{eq:6.3} \|\overline{T(\gt)^k\mathcal A}\|_{\mathcal B( L^p(\cI,X))}\leq \frac{c_1}{\tau^{\alpha}} +\frac{c_2}{k\tau}, \quad \gt > 0 \ , \quad k \in \mathbb{N}. \end{equation} \end{lem} \begin{proof} For $k\gt \ge T$ we have $T(\gt)^k = 0$. Hence, one has to prove the estimate \eqref{eq:6.3} only for $k\gt \le T$. By Lemma \ref{StabilityLemma} we get that $\|T(\gt)^k\|_{\mathcal B( L^p(\cI,X))}\leq M$ for some positive constant $M$. Let $f\in\mathrm{dom}(\cA)$. Then \begin{align*} \|T(\gt)^k\cA f\| &\leq \|(T(\gt)^k-e^{-k\tau \mathcal K_0})\mathcal Af\| + \|e^{-k\tau \mathcal K_0}\cA f\|\\ & \leq \|\sum_{j=0}^{k-1}T(\gt)^{k-(j+1)}(e^{-\tau \mathcal{B}}-I)e^{-(j+1)\tau \mathcal K_0}\cA f\| + \|e^{-k\tau \mathcal K_0}\cA f\|\\ & \leq M \sum_{j=0}^{k-1}\int_0^\tau d\sigma\|e^{-\sigma\mathcal{B}}\mathcal{B}{\mathcal A}^{-\alpha}\|\; \|\cA^{1+\alpha}e^{-(j+1)\tau \mathcal K_0}f\| +\|e^{-k\tau \mathcal K_0}\cA f\|, \end{align*} where we used $I-e^{-\tau \mathcal B}=\int_0^\tau d\sigma \, \mathcal Be^{-\sigma\mathcal B}$. We have $\|e^{-\sigma\mathcal{B}}\|\leq M_B e^{\beta_\mathcal B \sigma}\leq M_B e^{\beta_{\mathcal B} T} =: M_B'$. By Proposition \ref{HolomorphicSemigroupEstimate} we get \begin{align*} \|\cA^{1+\alpha}e^{-(j+1)\tau \mathcal K_0}f\|&\leq\frac{M^A_{1+{\alpha}}}{((j+1)\tau)^{1+\alpha}}\|f\| \ ,\\ \|\mathcal A e^{-k\tau \mathcal K_0}f\|&\leq\frac{M^A_{1}}{k\tau}\|f\| \ .
\end{align*} Therefore, one obtains the estimate \begin{align*} \|T(\gt)^k\mathcal Af\| &\leq \frac{ M M_B'M^A_{1+\alpha} C_\alpha\tau }{\tau^{\alpha+1}}\sum_{j=0}^{k-1} \frac{1}{(j+1)^{\alpha+1}}\|f\|+\frac{ M^A_1}{k\tau}\|f\|\\ &\leq \frac{MM_B' M^A_{1+\alpha} C_\alpha \zeta(\alpha+1)}{\tau^\alpha}\|f\|+\frac{M^A_1}{k\tau}\|f\| \ , \end{align*} where $\zeta(\alpha+1) :=\sum_{j=1}^{\infty}{1}/{j^{\alpha+1}}$ is the Riemann $\zeta$-function. Since $T(\gt)^k = 0$ for $\gt k \ge T$, we find \begin{align*} \|T(\gt)^k\mathcal Af\| \leq \frac{MM_B' M^A_{1+\alpha} C_\alpha \zeta(\alpha+1)}{\tau^\alpha}\|f\|+ \frac{M^A_1}{k\tau}\|f\|, \quad f \in \mathrm{dom}(\cA) \ . \end{align*} Then estimate \eqref{eq:6.3} follows by taking the supremum over the unit ball in $\mathrm{dom}(\cA)$. \end{proof} \begin{lem}\label{TechnicalLemma3} Let the assumptions (A1), (A3), (A4), and (A5) be satisfied. Then there is a constant $c>0$ such that for $\tau \geq 0$ the following inequalities hold{\rm :} \begin{equation*} \|(T(\gt)-e^{-\tau \mathcal K})\cA^{-{\alpha}}\|_{\mathcal B(L^p(\cI,X))}\leq c\tau ~~{\rm and~~} \|\mathcal A^{-1}(T(\gt)-e^{-\tau \mathcal K})\|_{\mathcal B(L^p(\cI,X))}\leq c\tau \ . \end{equation*} \end{lem} \begin{proof} (i) Using the representation \begin{displaymath} T(\gt)g -e^{-\tau \mathcal K}g =(e^{-\tau\mathcal B}-I)e^{-\tau \mathcal K_0}g + (e^{-\tau \mathcal K_0}-e^{-\tau \mathcal K})g, \quad g \in L^p(\cI,X), \end{displaymath} we get \begin{equation}\label{eq:6.6} \|(T(\gt)-e^{-\tau \mathcal K})\mathcal A^{-\alpha}f\| \leq \; \|(e^{-\tau\mathcal B}-I)\mathcal A^{-\alpha}e^{-\tau \mathcal K_0}f\| + \|(e^{-\tau \mathcal K_0}-e^{-\tau \mathcal K}) {\mathcal A}^{-\alpha}f\| \end{equation} for $\tau \ge 0$, $g = \cA^{-{\alpha}}f$ and $f \in L^p(\cI,X)$.
From the representation \begin{align*} (I - e^{-\tau\mathcal B})g = \int^\tau_0 e^{-s\mathcal B} \mathcal B\, g \,ds, \qquad g \in \mathrm{dom}(\mathcal B), \end{align*} we obtain \begin{align*} \|(I - e^{-\tau\mathcal B}){\mathcal A}^{-\alpha} g\| \le \tau\,\|\mathcal B {\mathcal A}^{-\alpha}\| \,M_{\mathcal B} \, e^{\beta_{\mathcal B} T}\|g\|, \qquad g \in \mathrm{dom}(\mathcal B), \quad \tau \ge 0 \ . \end{align*} Then setting $g = e^{-\tau \mathcal K_0}f$, $f \in L^p(\cI,X)$, and using the estimate $\|e^{-\tau \mathcal K_0}f\| \le M_{\mathcal A}e^{\beta_{\mathcal A}T}\|f\|$, $\tau \ge 0$, one gets for the first term on the right-hand side of (\ref{eq:6.6}) the estimate \begin{align}\label{eq:6.7} \|(I - e^{-\tau\mathcal B}){\mathcal A}^{-\alpha} e^{-\tau \mathcal K_0}f\| \le \tau\, C_\alpha \,M_{\mathcal B} \, M_{\mathcal A}\, e^{{\beta}_\cB T}\|f\|, \quad \tau \ge 0 \ , \end{align} where we also used that $e^{-\gt \cK_0} = 0$ for $\gt \ge T$. To estimate the second term, note that \begin{align*} e^{-\tau \mathcal K}g -e^{-\tau \mathcal K_0}g = \int_0^\tau\frac{d}{d\sigma}\{e^{-\sigma \mathcal K}e^{-(\tau-\sigma) \mathcal K_0}\}g d\sigma = -\int_0^\tau e^{-\sigma \mathcal K}\mathcal B e^{-(\tau-\sigma ) \mathcal K_0}g d\sigma \ , \end{align*} $g \in \mathrm{dom} (\mathcal K_0)$. Let $g = {\mathcal A}^{-{\alpha}}f$, $f \in \mathcal A^{\alpha} \,\mathrm{dom}(\mathcal K_0)$. Then \begin{align*} (e^{-\tau \mathcal K} -e^{-\tau \mathcal K_0}){\mathcal A}^{-{\alpha}}f = -\int_0^\tau e^{-\sigma \mathcal K}\mathcal B e^{-(\tau-\sigma ) \mathcal K_0}{\mathcal A}^{-{\alpha}}f d\sigma \ , \end{align*} which leads to the estimate \begin{align}\label{eq:6.8} \|(e^{-\tau \mathcal K} -e^{-\tau \mathcal K_0}){\mathcal A}^{-{\alpha}}f\| \le \tau \, C_\alpha M_{\mathcal K}\,M_{\mathcal A} \|f\|, \qquad \tau \ge 0 \ , \end{align} $f \in \mathcal A^{\alpha}\,\mathrm{dom}(\mathcal K_0)$.
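The Duhamel-type identity just used can be sanity-checked in a finite-dimensional toy model (constant $2\times 2$ matrices, with $A$ standing in for $\mathcal K_0$ and $A+B$ for $\mathcal K$; all names below are illustrative and not part of the paper):

```python
import numpy as np

def expm2(M, terms=40):
    # Matrix exponential of a 2x2 matrix via truncated Taylor series
    # (adequate here, since the matrix norms involved are small).
    out, P = np.eye(2), np.eye(2)
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

A = np.array([[1.0, 0.0], [0.0, 2.0]])   # plays the role of K_0
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # the perturbation
tau, N = 0.5, 400
h = tau / N

# Duhamel: e^{-tau(A+B)} - e^{-tau A} = -int_0^tau e^{-s(A+B)} B e^{-(tau-s)A} ds
vals = [expm2(-s * (A + B)) @ B @ expm2(-(tau - s) * A)
        for s in (i * h for i in range(N + 1))]
integral = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])  # trapezoid rule
lhs = expm2(-tau * (A + B)) - expm2(-tau * A)

residual = np.abs(lhs + integral).max()
```

The residual is of the order of the quadrature error, i.e. far below $10^{-4}$ for this step count.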
Since $\mathcal A^{\alpha}\,\mathrm{dom}(\mathcal K_0)$ is dense in $L^p(\cI,X)$, the estimate extends to $f \in L^p(\cI,X)$. Taking into account \eqref{eq:6.6}, \eqref{eq:6.7}, and \eqref{eq:6.8}, we get the first of the inequalities claimed in the lemma. (ii) To prove the second inequality, we note that \begin{align}\label{eq:6.9} \|&\mathcal A^{-1}(e^{-\tau \mathcal B}e^{-\tau \mathcal K_0}-e^{-\tau \mathcal K})f\| \leq \|\mathcal A^{-1}(e^{-\tau\mathcal B}-I) e^{-\tau \mathcal K_0}f\|+\|\mathcal A^{-1}(e^{-\tau \mathcal K_0}-e^{-\tau \mathcal K})f\| \end{align} for $f \in \mathrm{dom}(\mathcal K_0) = \mathrm{dom}(\mathcal K)$. Using \begin{align*} \mathcal A^{-1}(I - e^{-\tau\mathcal B}) e^{-\tau \mathcal K_0}f = \int^\tau_0 d\sigma \, \mathcal A^{-1}\mathcal B e^{-\sigma \mathcal B}e^{-\tau \mathcal K_0}f \ , \end{align*} one finds the estimate for the first term on the right-hand side of (\ref{eq:6.9}): \begin{align}\label{eq:6.10} \|\mathcal A^{-1}(I - e^{-\tau\mathcal B}) e^{-\tau \mathcal K_0}f\| \le \tau\, C^*_1 M_{\mathcal B}\,M_{\mathcal A}\, e^{{\beta}_\cB T}\,\|f\|, \quad \tau \ge 0 \ . \end{align} For the second term we start with the identity \begin{align*} e^{-\tau \mathcal K_0}f -e^{-\tau \mathcal K}f = \int_0^\tau\frac{d}{d\sigma}\{e^{-\sigma \mathcal K_0}e^{-(\tau-\sigma) \mathcal K}\}fd\sigma =\int_0^\tau d\sigma \, e^{-\sigma \mathcal K_0}\mathcal B e^{-(\tau-\sigma ) \mathcal K}f \ , \end{align*} which leads to \begin{align*} \mathcal A^{-1}(e^{-\tau \mathcal K_0} -e^{-\tau \mathcal K})f =\int_0^\tau d\sigma \, \mathcal A^{-1}\mathcal Be^{-\sigma \mathcal K_0}e^{-(\tau-\sigma ) \mathcal K}f \ , \end{align*} for any $f \in \mathrm{dom}(\mathcal K) = \mathrm{dom}(\mathcal K_0)$. Hence, we get the estimate \begin{align}\label{eq:6.11} \|\mathcal A^{-1}(e^{-\tau \mathcal K_0} -e^{-\tau \mathcal K})f\| \le \tau \,C^*_1 M_{\mathcal A}\,M_{\mathcal K}\|f\|, \quad \tau \ge 0 \ .
\end{align} Summarising \eqref{eq:6.9}, \eqref{eq:6.10}, and \eqref{eq:6.11}, we obtain the second of the inequalities claimed in the lemma. \end{proof} \begin{lem}\label{TechnicalLemma4} Let the assumptions (A1), (A3), (A4), (A5), and (A6) be satisfied. If ${\beta} \in ({\alpha},1)$, then there exists a constant $Z({\beta}) > 0$ such that \begin{equation} \|\mathcal A^{-1}(T(\tau)-e^{-\tau \mathcal K})\mathcal A^{-{\beta}}\|_{\mathcal B( L^p(\cI,X))} \leq Z({\beta}) \tau^{1+{\beta}}, \quad \gt \ge 0. \end{equation} \end{lem} \begin{proof} Let $f\in \mathrm{dom}(\mathcal K_0)=\mathrm{dom}(\mathcal K)$. Then the identity \begin{align*} \frac{d}{d\sigma }&T(\sigma )e^{-(\tau-\sigma )\mathcal K}f = \frac{d}{d\sigma }e^{-\sigma \mathcal B} e^{-\sigma \mathcal K_0}e^{-(\tau-\sigma )\mathcal K}f \\ =&-e^{-\sigma \mathcal B}\mathcal Be^{-\sigma \mathcal K_0}e^{-(\tau-\sigma )\mathcal K}f - e^{-\sigma \mathcal B} e^{-\sigma \mathcal K_0}\mathcal K_0e^{-(\tau-\sigma )\mathcal K}f +e^{-\sigma \mathcal B}e^{-\sigma \mathcal K_0}\mathcal Ke^{-(\tau-\sigma )\mathcal K}f\\ =&-e^{-\sigma \mathcal B}\mathcal Be^{-\sigma \mathcal K_0}e^{-(\tau-\sigma )\mathcal K}f + e^{-\sigma \mathcal B} e^{-\sigma \mathcal K_0}\mathcal B e^{-(\tau-\sigma )\mathcal K}f\\ =& e^{-\sigma \mathcal B}\{e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\} e^{-(\tau-\sigma )\mathcal K}f \end{align*} yields \begin{align}\label{eq:6.13a} &T(\tau)f-e^{-\tau \mathcal K}f = \int_0^\tau\frac{d}{d\sigma }T(\sigma )e^{-(\tau-\sigma )\mathcal K}fd\sigma=\int_0^\tau e^{-\sigma \mathcal B}\{e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\} e^{-(\tau-\sigma )\mathcal K}fd\sigma .
\end{align} On the other hand, we also have the following identity: \begin{align*} e^{-\sigma \mathcal B}&\left(e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\right) e^{-(\tau-\sigma ) \mathcal K} f\\ =&\;(e^{-\sigma \mathcal B}-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}(e^{-(\tau-\sigma ) \mathcal K}-e^{-(\tau-\sigma )\mathcal K_0})f+\\ &+(e^{-\sigma \mathcal B}-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}e^{-(\tau-\sigma ) \mathcal K_0}f+\\ &+\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}(e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma ) \mathcal K_0})f+\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}e^{-(\tau-\sigma )\mathcal K_0}f \ , \end{align*} which yields for $f=\mathcal A^{-{\beta}}g$ \begin{equation}\label{eq:6.13} \begin{split} \mathcal A^{-1}&e^{-\sigma \mathcal B}\left(e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\right) e^{-(\tau-\sigma )\mathcal K}\mathcal A^{-{\beta}}g=\\ =&\;\mathcal A^{-1}(e^{-\sigma \mathcal B}-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\} (e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma )\mathcal K_0})\mathcal A^{-{\beta}}g+\\ &+\mathcal A^{-1}(e^{-\sigma \mathcal B}-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}\mathcal A^{-{\beta}} e^{-(\tau-\sigma )\mathcal K_0}g+\\ &+\mathcal A^{-1}\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}(e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma ) \mathcal K_0})\mathcal A^{-{\beta}}g+\\ &+\mathcal A^{-1}\{(e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})\mathcal B-\mathcal B(e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})\} e^{-(\tau-\sigma )\mathcal K_0}\mathcal A^{-{\beta}}g+\\ &+\mathcal A^{-1}(e^{-\sigma D_0}\mathcal B-\mathcal Be^{-\sigma D_0})\mathcal A^{-{\beta}}e^{-(\tau-\sigma )\mathcal K_0}g.
\end{split} \end{equation} In the following, we estimate separately the five terms on the right-hand side of identity (\ref{eq:6.13}). To this end we note that $\mathcal A$ and $\mathcal K_0$ commute. This implies that \begin{align*} (e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma )\mathcal K_0})\mathcal A^{-{\beta}}g=\int_0^{\tau-\sigma }dr \, e^{-(\tau-\sigma -r)\mathcal K}\mathcal B\mathcal A^{-{\beta}}e^{-r\mathcal K_0}g \ . \end{align*} Thus, for the first term we get \begin{align*} \mathcal A^{-1}&(e^{-\sigma \mathcal B}-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\} (e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma )\mathcal K_0})\mathcal A^{-{\beta}}g\\ &=-\int_0^\sigma dr \, \mathcal A^{-1}\mathcal Be^{-r\mathcal B}\,[e^{-\sigma \mathcal K_0},\mathcal B]\mathcal A^{-{\beta}}\times \int_0^{\tau-\sigma }dr \, \mathcal A^{{\beta}}e^{-(\tau-\sigma -r)\mathcal K}\mathcal B\mathcal A^{-{\beta}}e^{-r\mathcal K_0}g \ , \end{align*} where \begin{align*} [e^{-\sigma \mathcal K_0},\mathcal B]f := \{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}f, \quad f \in \mathrm{dom}(\mathcal K_0), \quad \sigma \ge 0. \end{align*} Then, using Lemma \ref{TechnicalLemma1}, we obtain the estimate \begin{equation}\label{eq:6.14} \begin{split} \|\mathcal A^{-1}(e^{-\sigma \mathcal B}&-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\} (e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma )\mathcal K_0})\mathcal A^{-{\beta}}g\| \\ &\leq \sigma\; 2C_1^*C_{\beta}^2{\Lambda}_{\beta} M_{\mathcal B}M^2_{\mathcal A}e^{{\beta}_\cB T}\int_0^{\tau-\sigma }dr \, \frac{1}{(\tau-\sigma -r)^{\beta}} \; \|g\|\\ &\leq \sigma(\tau-\sigma )^{1-{\beta}}\; \frac{2C_1^*C_{\beta}^2{\Lambda}_{\beta} M_{\mathcal B}M^2_{\mathcal A}e^{{\beta}_\cB T}} {1-{\beta}}\;\|g\| \end{split} \end{equation} for ${\sigma} \in [0,\gt]$ and $\gt \ge 0$.
For the second term, one can readily establish the estimate \begin{equation}\label{eq:6.15} \begin{split} \|\mathcal A^{-1}&(e^{-\sigma \mathcal B}-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}\mathcal A^{-{\beta}} e^{-(\tau-\sigma )\mathcal K_0}g\|\leq \sigma \;2C_1^*C_{\beta} M_\cB M^2_\cA e^{{\beta}_\cB T}\|g\| \end{split} \end{equation} for ${\sigma} \in [0,\gt]$ and $\gt \ge 0$. Now note that, by virtue of the relation \begin{align*} (e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma )\mathcal K_0})h=\int_0^{\tau-\sigma }dr \, e^{-(\tau-r-\sigma )\mathcal K}\mathcal Be^{-r\mathcal K_0}h \ , \quad h\in \mathrm{dom}(\cK_0) \ , \end{align*} one obtains for the third term the estimate \begin{equation}\label{eq:6.16} \begin{split} \|\mathcal A^{-1}&\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}(e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma ) \mathcal K_0})\mathcal A^{-{\beta}}g\|\leq (\tau-\sigma )\; 2C_1^*C_{\beta} M^2_\cA M_\cK \;\|g\| \end{split} \end{equation} for ${\sigma} \in [0,\gt]$ and $\gt \ge 0$.
Moreover, using the equality \begin{align*} (e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})h =-\int_0^\sigma dr \, e^{-r\mathcal K_0}\mathcal A e^{-(\sigma -r)D_0}h \ , \end{align*} we get for the fourth term: \begin{align*} &\mathcal A^{-1}\{(e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})\mathcal B-\mathcal B(e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})\} e^{-(\tau-\sigma )\mathcal K_0}\mathcal A^{-{\beta}}g =\\ &\Bigl(-\int_0^\sigma dr \, e^{-r\mathcal K_0}e^{-(\sigma -r)D_0} \mathcal B\mathcal A^{-{\beta}}+ \mathcal A^{-1}\mathcal B\int_0^\sigma dr \, e^{-r\mathcal K_0}\mathcal A^{1-{\beta}}e^{-(\sigma -r)D_0}\Bigr) e^{-(\tau-\sigma )\mathcal K_0}g \ , \end{align*} which yields the estimate \begin{equation}\label{eq:6.17} \begin{split} &\|\mathcal A^{-1}\{(e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})\mathcal B-\mathcal B(e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})\} e^{-(\tau-\sigma )\mathcal K_0}\mathcal A^{-{\beta}}g\|\\ &\leq \sigma\; C_{\beta} M_\cA \|g\| +C_1^*M_\cA \,M^A_{1-{\beta}}\int_0^\sigma dr \, \frac{1}{r^{1-{\beta}}} \;\|g\| = \left({\sigma} \;C_{\beta} M_\cA+ {\sigma}^{\beta} \;\frac{C_1^*\,M_\cA\,M^A_{1-{\beta}}}{{\beta}}\right)\; \|g\| \end{split} \end{equation} for ${\sigma} \in [0,\gt]$ and $\gt \ge 0$.
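The term estimates above repeatedly invoke the holomorphic-semigroup bound $\|\mathcal A^{\gamma}e^{-\sigma \mathcal A}\|\le M^A_{\gamma}/\sigma^{\gamma}$. For a positive diagonal matrix this bound can be checked directly, with the explicit constant $(\gamma/e)^{\gamma}$ obtained by maximizing $x^{\gamma}e^{-\sigma x}$ over $x>0$ (a toy illustration, not part of the paper's argument):

```python
import math

def bound_holds(lams, gamma, sigma):
    # For A = diag(lams) with lams > 0, the operator norm of A^gamma e^{-sigma A}
    # is max_i lam_i^gamma e^{-sigma lam_i}; elementary calculus gives the
    # uniform bound sup_{x>0} x^gamma e^{-sigma x} = (gamma/e)^gamma / sigma^gamma.
    norm = max(l ** gamma * math.exp(-sigma * l) for l in lams)
    return norm <= (gamma / math.e) ** gamma / sigma ** gamma + 1e-12

# check a grid of exponents gamma and times sigma for one spectrum
checks = [bound_holds([1.0, 5.0, 20.0], g, s)
          for g in (0.25, 0.5, 0.75)
          for s in (0.05, 0.5, 2.0)]
```

Every check holds, since each eigenvalue contribution is dominated by the scalar supremum; the blow-up as $\sigma\downarrow 0$ is exactly of order $\sigma^{-\gamma}$.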
To estimate the fifth term, we note that \begin{align*} (e^{-\sigma D_0}\mathcal B&-\mathcal Be^{-\sigma D_0})f=e^{-\sigma D_0}B(\cdot)f(\cdot)-\mathcal B \chi_\cI(\cdot-\sigma ) f(\cdot-\sigma )=\\ &=\chi_\cI(\cdot-\sigma )B(\cdot-\sigma )f(\cdot-\sigma )-B(\cdot)\chi_\cI(\cdot-\sigma )f(\cdot-\sigma )=\\ &=\chi_\cI(\cdot-\sigma )\{B(\cdot-\sigma )-B(\cdot)\}f(\cdot-\sigma ) \ , \end{align*} and therefore one gets \begin{equation}\label{eq:6.18} \begin{split} \|\mathcal A^{-1}(e^{-\sigma D_0}\mathcal B&-\mathcal Be^{-\sigma D_0})\mathcal A^{-{\beta}}e^{-(\tau-\sigma )\mathcal K_0}g\|=\|\mathcal A^{-1}\{e^{-\sigma D_0}\mathcal B-\mathcal Be^{-\sigma D_0}\}\mathcal A^{-{\beta}}g\|\\ \leq &\esssup_{t\in\cI}\|A^{-1}\{B(t-\sigma )-B(t)\}A^{-{\beta}}\|_{\mathcal B(X)}\;\|g\|\leq L_{\beta} \sigma ^{\beta}\|g\| \end{split} \end{equation} for ${\sigma} \in [0,\gt]$ and $\gt \ge 0$. From identity \eqref{eq:6.13} we deduce the estimate \begin{displaymath} \begin{split} \|\mathcal A^{-1}&e^{-\sigma \mathcal B}\left(e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\right) e^{-(\tau-\sigma )\mathcal K}\mathcal A^{-{\beta}}g\|\\ \le &\;\|\mathcal A^{-1}(e^{-\sigma \mathcal B}-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}(e^{-(\tau-\sigma ) \mathcal K}-e^{-(\tau-\sigma )\mathcal K_0})\mathcal A^{-{\beta}}g\|\\ &+\|\mathcal A^{-1}(e^{-\sigma \mathcal B}-I)\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}\mathcal A^{-{\beta}} e^{-(\tau-\sigma )\mathcal K_0}g\|\\ &+\|\mathcal A^{-1}\{e^{-\sigma \mathcal K_0}\mathcal B-\mathcal Be^{-\sigma \mathcal K_0}\}(e^{-(\tau-\sigma )\mathcal K}-e^{-(\tau-\sigma ) \mathcal K_0})\mathcal A^{-{\beta}}g\|\\ &+\|\mathcal A^{-1}\{(e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})\mathcal B-\mathcal B(e^{-\sigma \mathcal K_0}-e^{-\sigma D_0})\} e^{-(\tau-\sigma )\mathcal K_0}\mathcal A^{-{\beta}}g\| \\
&+\|\mathcal A^{-1}(e^{-\sigma D_0}\mathcal B-\mathcal Be^{-\sigma D_0})\mathcal A^{-{\beta}}e^{-(\tau-\sigma )\mathcal K_0}g\| \end{split} \end{displaymath} for ${\sigma} \in [0,\gt]$ and $\gt \ge 0$. Now, taking into account \eqref{eq:6.14}, \eqref{eq:6.15}, \eqref{eq:6.16}, \eqref{eq:6.17}, and \eqref{eq:6.18}, we find the estimate \begin{align*} \|\mathcal A^{-1}&e^{-\sigma \mathcal B}\left(e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\right) e^{-(\tau-\sigma )\mathcal K}\mathcal A^{-\beta}g\|\leq\\ \leq \Bigl\{&\sigma (\tau-\sigma )^{1-\beta}\frac{2C_1^*C_{\beta}^2{\Lambda}_{\beta} M_{\mathcal B} M^2_{\mathcal A}e^{{\beta}_\cB T}}{1-{\beta}} + \sigma\; 2C_1^*C_{\beta} M_\cB M^2_\cA e^{{\beta}_\cB T}+\\ &(\tau-\sigma )\;2C_1^*C_{\beta} M^2_\cA M_\cK + \sigma \;C_{\beta} M_\cA + \sigma ^{\beta} \; \frac{C_1^*\,M_\cA\,M^A_{1-{\beta}}}{{\beta}}+ \sigma^{\beta} \;L_\beta\Bigr\}\|g\| \end{align*} for ${\sigma} \in [0,\gt]$ and $\gt \ge 0$. Then, setting \begin{align*} Z_1 &:= \frac{2C_1^*C_{\beta}^2{\Lambda}_{\beta} M_{\mathcal B}M^2_{\mathcal A}e^{{\beta}_\cB T}}{1-{\beta}}\ ,\\ Z_2 &:= 2C_1^*C_{\beta} M_\cB M^2_\cA e^{{\beta}_\cB T} + C_{\beta} M_\cA\ ,\\ Z_3 &:= 2C_1^*C_{\beta} M^2_\cA M_\cK \ ,\\ Z_4 &:= \frac{C_1^*\,M_\cA\,M^A_{1-{\beta}}}{{\beta}} + L_{\beta}\ , \end{align*} we obtain \begin{align}\label{eq:6.20} \|\mathcal A^{-1}&e^{-\sigma \mathcal B}\left(e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\right) e^{-(\tau-\sigma )\mathcal K}\mathcal A^{-{\beta}}g\| \leq \Bigl\{Z_1\;\sigma (\tau-\sigma )^{1-{\beta}} + Z_2\;\sigma+ Z_3\; (\tau-\sigma )+ Z_4 \; \sigma ^{\beta}\Bigr\}\|g\|.
\end{align} Now we remark that \eqref{eq:6.13a} gives the representation \begin{align*} \cA^{-1}&(T(\tau)-e^{-\tau \mathcal K})\cA^{-{\beta}}g =\int_0^\tau d\sigma \, \cA^{-1}e^{-\sigma \mathcal B}\{e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\} e^{-(\tau-\sigma )\mathcal K}\cA^{-{\beta}}g \ , \end{align*} which yields the estimate \begin{align*} \|\cA^{-1}&(T(\tau)-e^{-\tau \mathcal K})\cA^{-{\beta}}g\|\leq\int_0^\tau d\sigma \, \|\cA^{-1}e^{-\sigma \mathcal B}\{e^{-\sigma \mathcal K_0}\mathcal B - \mathcal Be^{-\sigma \mathcal K_0}\} e^{-(\tau-\sigma )\mathcal K}\cA^{-{\beta}}g\| \ . \end{align*} Inserting \eqref{eq:6.20} into this estimate and using \begin{align*} \int_0^\tau \sigma \,(\tau-\sigma )^{1-{\beta}}d\sigma =\tau^{3-{\beta}}\int_0^1 \, x(1-x)^{1-{\beta}}dx=\tau^{3-{\beta}} B(2, 2-\beta) \end{align*} (where $B$ is the \textit{Beta-function}), we find for $\gt \ge 0$ the estimate \begin{displaymath} \|\cA^{-1}(T(\tau)-e^{-\tau \mathcal K})\cA^{-{\beta}}g\| \le Z_1 B(2, 2-\beta) \;\tau^{3-{\beta}} + \frac{Z_2 + Z_3}{2}\; \gt^2 + \frac{Z_4}{1+{\beta}}\;\gt^{1+{\beta}} \ , \end{displaymath} and consequently \begin{displaymath} \|\cA^{-1}(T(\tau)-e^{-\tau \mathcal K})\cA^{-{\beta}}g\| \le \left(Z_1 B(2, 2-\beta)\gt^{2-2{\beta}} + \frac{Z_2 + Z_3}{2}\gt^{1-{\beta}}+ Z_4\right)\;\gt^{1+{\beta}} \ . \end{displaymath} Since $T(\gt) = 0$ and $e^{-\gt \cK} = 0$ for $\gt \ge T$, we finally obtain \begin{displaymath} \|\cA^{-1}(T(\tau)-e^{-\tau \mathcal K})\cA^{-{\beta}}g\| \le \left(Z_1B(2, 2-\beta)T^{2-2{\beta}} + \frac{Z_2 + Z_3}{2} T^{1-{\beta}} + Z_4\right)\;\gt^{1+{\beta}} \ , \end{displaymath} which proves the lemma. \end{proof} \subsection{The Trotter product formula in operator-norm topology} \label{sec:7.2} \begin{thm}\label{TheoremEstimate} Let the assumptions (A1), (A3), (A4), (A5), and (A6) be satisfied.
If the family of generators $\{B(t)\}_{t\in\cI}$ is $A$-stable and ${\beta} \in ({\alpha},1)$, then there exists a constant $C_{\alpha, \beta}>0$ such that \begin{equation}\label{6.22aa} \| (e^{-\gt \mathcal B/n}e^{-\gt \cK_0/n})^{n}-e^{-\tau\cK}\|_{\mathcal B( L^p(\cI,X))}\leq \frac{C_{\alpha,\beta}}{n^{{\beta}-{\alpha}}} \ , \end{equation} for $\gt \ge 0$ and $n = 2,3,\ldots$. \end{thm} \begin{proof} Let $T({\sigma}) := e^{-{\sigma} \cB}e^{-{\sigma} \cK_0}$ and $U({\sigma}) := e^{-{\sigma} \mathcal K}$, ${\sigma} \ge 0$. Then the following identity holds: \begin{align*} T(\sigma)^n&-U(\sigma)^n =\sum_{m=0}^{n-1} T(\sigma)^{n-m-1}(T(\sigma)-U(\sigma))U(\sigma)^m\\ = &\;T(\sigma)^{n-1}(T(\sigma)-U(\sigma))+(T(\sigma)-U(\sigma))U(\sigma)^{n-1}+\sum_{m=1}^{n-2} T(\sigma)^{n-m-1}(T(\sigma)-U(\sigma)) U(\sigma)^m\\ = &\;T(\sigma)^{n-1}\mathcal A\mathcal A^{-1}(T(\sigma)-U(\sigma))+(T(\sigma)-U(\sigma))\mathcal A^{-\alpha} \mathcal A^{\alpha}U(\sigma)^{n-1}+\\ &+\sum_{m=1}^{n-2} T(\sigma)^{n-m-1}\mathcal A \mathcal A^{-1}(T(\sigma)-U(\sigma))\mathcal A^{-{\beta}} \mathcal A^{{\beta}} U(\sigma)^m \ . \end{align*} This readily yields the inequality \begin{align*} \|T(\sigma)^n-U(\sigma)^n\| \le& \;\|\overline{T(\sigma)^{n-1}\mathcal A}\|\; \|\mathcal A^{-1}(T(\sigma)-U(\sigma))\| + \|(T(\sigma)-U(\sigma)) \mathcal A^{-{\alpha}}\| \;\|\mathcal A^{{\alpha}}U(\sigma)^{n-1}\|+\\ &\;+\sum_{m=1}^{n-2} \|\overline{T(\sigma)^{n-m-1}\mathcal A}\|\; \|\mathcal A^{-1}(T(\sigma)-U(\sigma))\mathcal A^{-{\beta}}\|\; \|\mathcal A^{{\beta}} U(\sigma)^m\| \ . \end{align*} From Lemma \ref{TechnicalLemma2} we get for ${\sigma} \in (0,\gt]$ and $n \ge 2$ the estimate \begin{equation}\label{eq:6.21} \|\overline{T({\sigma})^{n-1}\mathcal A}\| \le \frac{c_1}{{\sigma}^{\alpha}} +\frac{c_2}{(n-1){\sigma}} \ .
\end{equation} Now, from Lemma \ref{TechnicalLemma3} we find that \begin{displaymath} \|\cA^{-1}(T(\sigma)-U(\sigma))\| \le c \, {\sigma} \quad \mbox{and} \quad \|(T(\sigma)-U(\sigma))\mathcal A^{-\alpha}\| \le c \, {\sigma} \ . \end{displaymath} This implies that \begin{displaymath} \|\overline{T({\sigma})^{n-1}\mathcal A}\|\;\|\cA^{-1}(T(\sigma)-U(\sigma))\| \le c_1 c\,{\sigma}^{1-{\alpha}} + \frac{c_2\,c}{n-1} \ , \end{displaymath} and \begin{displaymath} \|(T(\sigma)-U(\sigma))\mathcal A^{-\alpha}\| \;\|\mathcal A^{\alpha}U(\sigma)^{n-1}\| \le \frac{c \,{\Lambda}_{\alpha}}{(n-1)^{\alpha}}\; {\sigma}^{1-{\alpha}} \ , \end{displaymath} where we also used Lemma \ref{TechnicalLemma1}, \eqref{eq:6.2}. Since by Lemma \ref{TechnicalLemma4} \begin{align*} \|\mathcal A^{-1}(T({\sigma})-e^{-{\sigma} \mathcal K})\mathcal A^{-{\beta}}\|_{\mathcal B( L^p(\cI,X))}\leq Z({\beta})\;{\sigma}^{1+{\beta}} \ , \quad {\sigma} \ge 0 \ , \end{align*} one gets \begin{displaymath} \begin{split} \|\overline{T(\sigma)^{n-m-1}\mathcal A}\|&\; \|\mathcal A^{-1}(T(\sigma)-U(\sigma))\mathcal A^{-{\beta}}\|\; \|\mathcal A^{\beta} U(\sigma)^m\|\\ &\le c_1\,Z({\beta})\,{\Lambda}_{\beta}\frac{{\sigma}^{1-{\alpha}}}{m^{\beta}} + c_2\,Z({\beta})\,{\Lambda}_{\beta}\frac{1}{(n-1 -m)\,m^{\beta}} \ .
\end{split} \end{displaymath} From Lemma \ref{lem:8.2} of the Appendix (\S \ref{sec:9}) we obtain the inequalities \begin{displaymath} \begin{split} \sum^{n-2}_{m=1} &\|\overline{T(\sigma)^{n-m-1}\mathcal A}\|\; \|\mathcal A^{-1}(T(\sigma)-U(\sigma))\mathcal A^{-{\beta}}\|\; \|\mathcal A^{{\beta}} U(\sigma)^m\| \\ & \le c_1\,Z({\beta})\,{\Lambda}_{\beta}\,{\sigma}^{1-{\alpha}}\sum^{n-2}_{m=1} \frac{1}{m^{\beta}} + c_2\,Z({\beta})\,{\Lambda}_{\beta} \sum^{n-2}_{m=1}\frac{1}{(n-1 -m)\,m^{\beta}}\\ & \le \frac{c_1\,Z({\beta})\,{\Lambda}_{\beta}}{1-{\beta}} (n-1)^{1-{\beta}}{\sigma}^{1-{\alpha}} + \frac{2c_2\,Z({\beta})\,{\Lambda}_{\beta}}{1-{\beta}} \frac{1}{(n-1)^{\beta}} + c_2\,Z({\beta})\,{\Lambda}_{\beta}\frac{\ln(n-1)}{(n-1)^{{\beta}}} \ . \end{split} \end{displaymath} Summarising all these ingredients, one gets the estimate \begin{displaymath} \begin{split} \|T&({\sigma})^n - U({\sigma})^n\| \\ \le & \;c_1 c\,{\sigma}^{1-{\alpha}} + \frac{c_2\,c}{n-1} + \frac{c \,{\Lambda}_{\alpha}}{(n-1)^{\alpha}}\;{\sigma}^{1-{\alpha}}\\ & + \frac{c_1\,Z({\beta})\,{\Lambda}_{\beta}}{1-{\beta}} (n-1)^{1-{\beta}}{\sigma}^{1-{\alpha}} + \frac{2c_2\,Z({\beta})\,{\Lambda}_{\beta}}{1-{\beta}} \frac{1}{(n-1)^{\beta}} + c_2\,Z({\beta})\,{\Lambda}_{\beta}\frac{\ln(n-1)}{(n-1)^{{\beta}}} \ . \end{split} \end{displaymath} If we set ${\sigma} := \tau/n$, then \begin{displaymath} \begin{split} \|T&(\tau/n)^n - U(\tau/n)^n\| \\ \le & \;\frac{c_1 c\;T^{1-{\alpha}}}{(n-1)^{1-{\alpha}}} + \frac{c_2\,c}{n-1} + \frac{c \,{\Lambda}_{\alpha}\,T^{1-{\alpha}}}{(n-1)}\\ & + \frac{c_1\,Z({\beta})\,{\Lambda}_{\beta}\,T^{1-{\alpha}}}{1-{\beta}}\frac{1}{(n-1)^{{\beta}-{\alpha}}} + \frac{2c_2\,Z({\beta})\, {\Lambda}_{\beta}}{1-{\beta}}\frac{1}{(n-1)^{\beta}} + c_2\,Z({\beta})\,{\Lambda}_{\beta}\frac{\ln(n-1)}{(n-1)^{{\beta}}} \end{split} \end{displaymath} for $\gt \ge 0$ and $n = 2,3,\ldots$.
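The rate $n^{-({\beta}-{\alpha})}$ obtained here is dictated by the unbounded generators; for bounded (matrix) generators the same Lie--Trotter scheme converges at rate $O(1/n)$, which a small numerical sketch can illustrate (toy matrices chosen for illustration only, not from the paper):

```python
import numpy as np

def expm2(M, terms=60):
    # Matrix exponential via truncated Taylor series (fine for these small norms).
    out, P = np.eye(2), np.eye(2)
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

A = np.array([[1.0, 0.0], [0.0, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # does not commute with A
tau = 1.0
exact = expm2(-tau * (A + B))

def trotter_error(n):
    # || (e^{-tau B/n} e^{-tau A/n})^n - e^{-tau(A+B)} ||, entrywise max norm
    step = expm2(-(tau / n) * B) @ expm2(-(tau / n) * A)
    return np.abs(np.linalg.matrix_power(step, n) - exact).max()

errors = [trotter_error(n) for n in (4, 8, 16, 32)]
```

Since $[A,B]\neq 0$, the error is genuinely of first order in $1/n$: doubling $n$ roughly halves it.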
Hence, there exists a constant $C_{{\alpha},{\beta}} > 0$ such that \eqref{6.22aa} holds. \end{proof} \begin{rem}\label{rem:7.9} \item[\;\; (i)] If in condition (A6) we put ${\beta}=1$, then for each ${\gamma} \in ({\alpha},1)$ there exists a constant $C_{{\alpha},{\gamma}} > 0$ such that \begin{equation}\label{eq:6.24a} \| (e^{-\tau B/n}e^{-\tau \cK_0/n})^{n}-e^{-\tau \cK}\|_{\mathcal B( L^p(\cI,X))} \leq C_{{\alpha},{\gamma}} \, \frac{1}{n^{{\gamma}-{\alpha}}} \ , \end{equation} for $\tau \ge 0$ and $n = 2,3,\ldots$. \item[\;\;(ii)] It is worth noting that our result depends only on the domains of the operators $A$ and $B(t)$. \item[\;\;(iii)] Until now, operator-norm error estimates for the Trotter product formula on \textit{Banach spaces} in the time-independent case have been proven only under the assumption that at least one of the involved operators is the generator of a holomorphic contraction semigroup, see \cite{CachZag2001}. Therefore, although motivated by \cite{CachZag2001}, Theorem \ref{TheoremEstimate} is the first result in which this assumption is dropped. \item[\;\;(iv)] If the family of generators is independent of $t \in \cI$, i.e. $B(t) = B$, then condition (A6) is automatically satisfied for any ${\beta} \ge 0$. In particular, we can set ${\beta} = 1$. Since $\cA$ and $\cB$ commute with $D_0$ we get \begin{equation}\label{eq:6.24} (e^{-\tau \cB/n}e^{-\tau \cK_0/n})^n = (e^{-\tau \cB/n}e^{-\tau \cA/n})^n e^{-\tau D_0}, \quad \tau \ge 0, \quad n \in \mathbb{N}.
\end{equation} We note that if one of the operators $A$ or $B$ is the generator of a holomorphic contraction semigroup and the other is the generator of a contraction semigroup on a Banach space $X$, then Theorem 3.6 of \cite{CachZag2001} yields the existence of constants $b_1 > 0$, $b_2 > 0$ and $\eta > 0$ such that the estimate \begin{displaymath} \|(e^{-\tau B/n}e^{-\tau A/n})^n - e^{-\tau C}\|_{\mathcal B(X)} \le (b_1 + b_2 \tau^{1-{\alpha}})e^{\tau \eta} \ \frac{\ln(n)}{n^{1-{\alpha}}} \ , \end{displaymath} holds for $\tau \ge 0$ and $n \in \mathbb{N}$. Applying this result to \eqref{eq:6.24} we immediately obtain the existence of a constant $R > 0$ such that \begin{displaymath} \|(e^{-\tau \cB/n}e^{-\tau \cK_0/n})^n - e^{-\tau \cK}\|_{\mathcal B( L^p(\cI,X))} \le R \ \frac{\ln(n)}{n^{1-{\alpha}}} \ , \end{displaymath} is valid for $\tau \ge 0$ and $n \in \mathbb{N}$. Note that this estimate is sharper than the estimate in \eqref{eq:6.24a}. \end{rem} \subsection{Norm convergence for propagators}\label{sec:7.3} We investigate here the consequences of Theorem \ref{TheoremEstimate} for the convergence of the approximants $\{U_n(t,s)\}_{(t,s)\in\Delta}$, $n \in \mathbb{N}$, \eqref{ApproximationPropagatorIntroduction}, to the propagator $\{U(t,s)\}_{(t,s)\in {\Delta}}$ which solves the non-ACP \eqref{EvolutionProblem}. Recall that by (\ref{eq:6.main}) one gets the relation \begin{align}\label{eq:7main} (\{(e^{-\frac \tau n \mathcal B}e^{-\frac \tau n \mathcal K_0})^n-&e^{-\tau (\mathcal B+\mathcal K_0)}\}f)(t) =\{U_n(t,t-\tau )-U(t,t-\tau )\}\chi_\cI(t-\tau )f(t-\tau ) \end{align} for $(t,t-\tau) \in {\Delta}$ and $f \in L^p(\cI,X)$, where the Trotter product approximation for the propagator $U(t,s)$ has the form \begin{displaymath} U_n(t,s):=\stackrel{\longrightarrow}\prod_{j=1}^n e^{-\frac{t-s} n B(s+(n-j+1)\frac{t-s} n)} e^{-\frac {t-s} n A}, \quad (t,s) \in {\Delta} \ .
\end{displaymath} Here the factors of the product are ordered increasingly from the left to the right. \begin{thm}\label{EstimatePropagators} Let the assumptions (A1), (A3), (A4), (A5), and (A6) be satisfied. If the family of generators $\{B(t)\}_{t\in\cI}$ is $A$-stable and ${\beta} \in ({\alpha},1)$, then there exists a constant $C_{\alpha, \beta}>0$ such that \begin{equation}\label{eq:6.26} \esssup_{(t,s)\in{\Delta}}\|U_n(t,s)-U(t,s)\|_{\mathcal B(X)} \le \frac{C_{{\alpha},{\beta}}}{n^{{\beta}-{\alpha}}}, \quad n = 2,3,\ldots \ . \end{equation} The constant $C_{{\alpha},{\beta}}$ coincides with that in the estimate \eqref{6.22aa} of Theorem \ref{TheoremEstimate}. \end{thm} \begin{proof} We set \begin{displaymath} S_n(t,s) := U_n(t,s) - U(t,s), \quad (t,s) \in {\Delta}, \quad n \in \mathbb{N} \ , \end{displaymath} and \begin{align*} \mathcal S_n(\tau) := L(\tau)\{(e^{-\frac \tau n \mathcal B}e^{-\frac \tau n \mathcal K_0})^n-e^{-\tau (\mathcal B+\mathcal K_0)}\}: L^p(\cI,X) \rightarrow L^p(\cI,X) \ , \end{align*} for $\tau \ge 0$ and $n =2,3,\ldots \ $. Here $L(\tau)$, $\tau \ge 0$, is the left-shift semigroup (\ref{L-shift}). Then by (\ref{eq:7main}) we get \begin{align}\label{Sn-Sn} (\mathcal S_n(\tau)g)(t) = S_n(t+\tau,t)\chi_\cI(t+\tau)g(t), \quad t \in \cI_0, \quad g \in L^p(\cI,X). \end{align} Hence, for any $\tau \in\cI$ and $n \in \mathbb{N}$, the operator $\mathcal S_n(\tau)$ is a multiplication operator on $ L^p(\cI,X)$ induced by the family $\{S_n(\cdot+\tau,\cdot)\chi_\cI(\cdot+\tau)\}_{\tau\in\cI}$ of bounded operators.
Applying first (\ref{L-shift-id}) and then Proposition \ref{OperatorNormsBoundedOperator} to (\ref{Sn-Sn}), one gets for $\tau \geq 0$ the equality \begin{align}\label{main-equal} \|&(e^{-\frac \tau n \mathcal B}e^{-\frac \tau n \mathcal K_0})^n-e^{-\tau (\mathcal B+\mathcal K_0)}\|_{\mathcal B( L^p(\cI,X))} = \|L(\tau)\{(e^{-\frac \tau n \mathcal B}e^{-\frac \tau n \mathcal K_0})^n-e^{-\tau (\mathcal B+\mathcal K_0)}\}\|_{\mathcal B( L^p(\cI,X))} \nonumber\\ &= \|\mathcal S_n(\tau )\|_{\mathcal B( L^p(\cI,X))} = \esssup_{t\in\cI_0} \|S_n(t+\tau,t)\chi_\cI(t+\tau)\|_{\mathcal B(X)} \nonumber \\ &= \esssup_{t\in\cI_0} \|\{U_n(t+\tau ,t)-U(t+\tau ,t)\}\chi_\cI(t+\tau )\|_{\mathcal B(X)} = \esssup_{t\in(0,T-\tau ]} \|U_n(t+\tau ,t)-U(t+\tau ,t)\|_{\mathcal B(X)}. \end{align} Now taking into account Theorem \ref{TheoremEstimate} we find \begin{displaymath} \esssup_{t\in(0,T-\tau ]} \|U_n(t+\tau,t)-U(t+\tau,t)\|_{\mathcal B(X)} \le \frac{C_{{\alpha},{\beta}}}{n^{{\beta}-{\alpha}}}, \quad \tau \ge 0,\quad n = 2,3,\ldots \ , \end{displaymath} which yields \eqref{eq:6.26}. \end{proof} \begin{rem} \item[\;\;(i)] The equality (\ref{main-equal}) shows that the estimates \eqref{6.22aa} and \eqref{eq:6.26} are equivalent. \item[\;\;(ii)] We note that \textit{a priori}, for fixed $n \in \mathbb{N}$, the operator family $\{U_n(t,s)\}_{(t,s)\in \Delta}$ does not define a propagator, since the \textit{co-cycle equation} is, in general, not satisfied. But one can check that \begin{align*} U_n(t,s)=U_{n-k}\left(t,s+\frac{k}{n}(t-s)\right)U_k\left(s+\frac{k}{n}(t-s), s\right), \end{align*} is satisfied for $0<s\leq t\leq T$, $n\in \mathbb{N}$ and any $k\in\{0,1,\dots, n\}$.
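The splitting identity above is purely algebraic, so it can be sanity-checked in a finite-dimensional toy model. A minimal sketch, assuming hypothetical $2\times 2$ matrices in the roles of $A$ and $B(t)$ (the matrices below are illustrative, not from the text):

```python
import numpy as np

def expm(M, terms=30):
    # truncated Taylor series; adequate for the small-norm matrices used here
    E = np.eye(M.shape[0])
    P = np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        E = E + P
    return E

# hypothetical generators: A constant, B(t) time-dependent
A = np.array([[1.0, 0.3],
              [0.0, 2.0]])

def B(t):
    return np.array([[np.sin(t), 0.1],
                     [0.2, 1.0 + t]])

def U(n, t, s):
    # ordered Trotter approximant: the j = 1 factor stands leftmost
    if n == 0:
        return np.eye(2)
    h = (t - s) / n
    prod = np.eye(2)
    for j in range(1, n + 1):
        prod = prod @ expm(-h * B(s + (n - j + 1) * h)) @ expm(-h * A)
    return prod

# splitting identity U_n(t,s) = U_{n-k}(t, s + k(t-s)/n) U_k(s + k(t-s)/n, s)
t, s, n = 1.0, 0.2, 6
for k in range(n + 1):
    m = s + k * (t - s) / n
    assert np.allclose(U(n, t, s), U(n - k, t, m) @ U(k, m, s))
```

Since both sides multiply exactly the same factors in the same order, the identity holds up to floating-point rounding, which is why `np.allclose` is used.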
\end{rem} \section{Example: Diffusion equation perturbed by a time-dependent potential} \label{sec:8} Here we investigate a non-autonomous problem in which the diffusion equation is perturbed by a time-dependent potential. To this aim consider the Banach space $X=L^q(\Omega)$, where $\Omega\subset\mathbb{R}^d$, $d\geq 2$, is a bounded domain with $C^{2}$-boundary and $q\in (1,\infty)$. Then the equation for the non-ACP reads \begin{align} \dot u(t)=\Delta u(t) -B(t)u(t), ~~u(s)=u_s\in L^q(\Omega), ~~t,s\in \cI_0 \ , \label{EvolProbLaplaceAndTimeDependentPotential} \end{align} where $\Delta$ denotes the Laplace operator in $L^q(\Omega)$ with Dirichlet boundary conditions defined by the mapping \begin{align*} \Delta:\mathrm{dom}(\Delta)=H^2_q(\Omega)\cap \mathring H^1_q(\Omega)\rightarrow L^q(\Omega). \end{align*} Then the operator $-\Delta$ is the generator of a holomorphic contraction semigroup on $L^q(\Omega)$, \cite[Theorem 7.3.5/6]{Pazy1983}, and $0\in\varrho(A)$. Let $B(t)$ denote a time-dependent scalar-valued multiplication operator given by \begin{align*} (B(t)f)(x)=V(t,x)f(x), ~~\mathrm{dom}(B(t))=\{f\in L^q(\Omega): V(t,x)f(x)\in L^q(\Omega) \} \ , \end{align*} where \begin{align*} V:\cI\times\Omega\rightarrow \mathbb{C}, ~~ V(t,\cdot) \in L^{\varrho}(\Omega) \ . \end{align*} For $\alpha \in (0,1)$, the fractional powers of $-\Delta$ are defined on the domain $\mathring{H}^{2\alpha}_q(\Omega)$ by \begin{align*} (-\Delta)^{\alpha}: \mathring{H}^{2\alpha}_q(\Omega)\rightarrow L^q(\Omega). \end{align*} Note that for $2\alpha< 1/q$ it holds that $\mathring{H}^{2\alpha}_q(\Omega) = H^{2\alpha}_q(\Omega)$. The operator $\Delta^*$ is dual to $\Delta$ and is defined on the domain \begin{equation*} \mathrm{dom}(\Delta^*)=H^2_{q'}(\Omega)\cap \mathring{H}^1_{q'}(\Omega)\subset L^{q'}(\Omega) \ , \end{equation*} where ${1}/{q}+ {1}/{q'}=1$.
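In a finite-dimensional discretization, the fractional powers above reduce to the spectral calculus for a symmetric positive matrix. The following sketch uses a 1-D finite-difference Dirichlet Laplacian as an illustrative stand-in (the grid size and the sample potential are assumptions, not from the text) and forms the operator $B\,(-\Delta)^{-\alpha}$, whose boundedness is the discrete counterpart of the inclusion $\mathrm{dom}((-\Delta)^\alpha)\subset\mathrm{dom}(B(t))$:

```python
import numpy as np

# 1-D finite-difference Dirichlet Laplacian on (0, 1): an illustrative
# stand-in for the operator Delta above (grid size is an assumption)
N = 200
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
L = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2  # matrix of -Delta

# fractional powers via the spectral decomposition of the symmetric matrix L
alpha = 0.5
w, P = np.linalg.eigh(L)                    # eigenvalues w > 0, eigenvectors P
frac = P @ np.diag(w**alpha) @ P.T          # (-Delta)^alpha
frac_inv = P @ np.diag(w**(-alpha)) @ P.T   # (-Delta)^(-alpha)

# sample multiplication potential V(x) = x^(-1/4), blowing up at the boundary
B = np.diag(x**(-0.25))

# B (-Delta)^(-alpha): composing the singular potential with the smoothing
# operator still gives a matrix of moderate operator norm
print(np.linalg.norm(B @ frac_inv, 2))
```

The printed norm stays of moderate size although the potential itself is singular, heuristically mirroring the bookkeeping of the exponent tables below.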
Since the operators $B(t)$ are scalar-valued, one gets that the dual is $B(t)^* = \overline{B(t)}: \mathrm{dom}(B(t)^*)\subset L^{q'}(\Omega)\rightarrow L^{q'}(\Omega)$. \begin{rem} Note that the operator $A=-\Delta$ in $L^p(\Omega)$, $p\in(1,\infty)$, with Dirichlet boundary conditions satisfies the \textit{maximal parabolic regularity} condition, see \cite{Arendt2007}. In particular, this means that $\widetilde{\mathcal K_0} = D_0 +\mathcal A$ is closed and hence coincides with its closure: $\widetilde{\mathcal K_0} = \mathcal K_0$. \end{rem} To prove the existence and uniqueness of the solution of the non-ACP \eqref{EvolProbLaplaceAndTimeDependentPotential}, and in order to construct the product approximants for this solution, we have to verify the assumptions (A1)--(A6). In particular, we have to determine the required regularity of $V(t,\cdot)\in L^\varrho(\Omega)$ to ensure that (\cite[Corollary 3.7]{CachZag2001}) \begin{equation*} \mathrm{dom}((-\Delta)^\alpha)\subset\mathrm{dom}(B(t)) \ \ \ {\rm{and}} \ \ \ \mathrm{dom}(\Delta^*)\subset \mathrm{dom}(B(t)^*) \ , \end{equation*} or, in other words, \begin{align}\label{OurAimedEmbedding} H^{2\alpha}_q(\Omega),H^{2}_{q'}(\Omega)\subset \mathrm{dom}(B(t)) \ . \end{align} Using Sobolev embeddings, one obtains the following general description of the embeddings \begin{align} H^s_{\gamma_1} (\Omega) \subset L^{\gamma_2}(\Omega) {\rm ~~for ~} \begin{cases} \gamma_2\in[\gamma_1, \frac{d \gamma_1/s}{d/s - \gamma_1}], {\rm ~if~} \gamma_1 \in (1, {d}/{s})\\ \gamma_2\in[\gamma_1, \infty), {\rm ~if~} \gamma_1 \in [{d}/{s}, \infty) \end{cases}\label{SobolevEmbeddings} \ . \end{align} For our case \eqref{OurAimedEmbedding}, we obtain $ H^{2\alpha}_q(\Omega)\subset L^r(\Omega) {\rm ~~and~~} H^{2}_{q'}(\Omega)\subset L^\rho(\Omega)$, for some constants $r,\rho\in(1,\infty]$.
Hence, it suffices to ensure $L^r(\Omega)\subset\mathrm{dom}(B(t))$ and $L^\rho(\Omega)\subset\mathrm{dom}(B(t)^*)$. The parameters $r, \rho$ define $\tilde r, \tilde \rho$ via \begin{align}\label{RelationRTildeRhoTilde} \frac{1}{r}+ \frac{1}{\tilde r} = \frac{1}{q}, ~~\frac{1}{\rho}+ \frac{1}{\tilde \rho} = \frac{1}{q'} \ , \end{align} and since the operator $B(t)$ is a multiplication operator defined by $V(t,\cdot)$, the regularity of $V(t,\cdot)$ has to be at least \begin{align*} \varrho = \max\{\tilde r, \tilde \rho\} \ . \end{align*} We distinguish the cases, collecting them in Table 1: \begin{enumerate} \item For $d\geq 4$, or for $d=3$ and $\alpha\in(0,\frac{1}{2})$: \begin{center} \begin{tabular}{l|l|l||l|l|l} $q\in$ & $\tilde r\in$ & $\varrho\in$ & $q\in$ & $\tilde r\in$ & $\varrho\in$ \\ \hline $(1, \frac{d}{d-2\alpha}]$ &$[\frac d {2\alpha},\infty]$ & $(q', \infty]$ & $(\frac{d}{d-2\alpha}, \frac{d}{d-2}]$&$[\frac d {2\alpha},\infty]$ & $(\frac{d}{2\alpha}, \infty]$ \\ $(\frac{d}{d-2}, \frac{d}{2\alpha})$ &$[\frac d {2\alpha},\infty]$ &$[\frac{d}{2\alpha}, \infty]$ & $[\frac{d}{2\alpha}, \infty)$ & $(q,\infty]$&$(q,\infty]$ \end{tabular} \end{center} \item For $d=3$ and $\alpha \in [\frac{1}{2}, \frac 3 4)$: \begin{center} \begin{tabular}{l|l|l||l|l|l} $q\in$ & $\tilde r\in$ & $\varrho\in$& $q\in$ & $\tilde r\in$ & $\varrho\in$\\ \hline $(1, \frac{3}{3-2\alpha}]$&$[\frac{3}{2\alpha}, \infty]$ & $(q', \infty]$ & $(\frac{3}{3-2\alpha}, 2]$ & $[\frac{3}{2\alpha}, \infty]$ & $(q', \infty]$ \\ $(2, \frac{3}{2\alpha})$ &$[\frac 3 {2\alpha},\infty]$& $[\frac{3}{2\alpha}, \infty]$ & $[\frac{3}{2\alpha}, \infty)$ &$(q,\infty]$& $(q,\infty]$ \end{tabular} \end{center} \item For $d=3$ and $\alpha \in [\frac{3}{4}, 1)$: \begin{center} \begin{tabular}{l|l|l||l|l|l} $q\in$ & $\tilde r \in$ & $\varrho\in$& $q\in$ &$\tilde r \in$& $\varrho\in$\\ \hline $(1, 2]$ & $[\frac 3 {2\alpha}, \infty]$ & $(q', \infty]$ & $(2, \infty)$ &
$(q, \infty]$& $(q, \infty]$ \end{tabular} \end{center} \item For $d=2$ and $\alpha\in(0,\frac 1 2]$: \begin{center} \begin{tabular}{l|l|l||l|l|l||l|l|l} $q\in$ & $\tilde r\in$ & $\varrho\in$& $q\in$ & $\tilde r\in$ & $\varrho\in$ & $q\in$ & $\tilde r\in$ & $\varrho\in$\\ \hline $(1, 2)$ & $[\frac 1 \alpha, \infty]$ & $[\max\{q', \frac 1 \alpha\}, \infty]$ & $[2, \frac 1 \alpha)$ & $[\frac 1 \alpha, \infty]$ & $[\frac 1 \alpha, \infty]$ & $[\frac 1 \alpha, \infty)$ &$(q, \infty]$ & $(q, \infty]$ \end{tabular} \end{center} \item For $d=2$ and $\alpha\in(\frac 1 2, 1)$: \begin{center} \begin{tabular}{l|l|l||l|l|l||l|l|l} $q\in$ & $\tilde r\in$ & $\varrho\in$ & $q\in$ & $\tilde r\in$ & $\varrho\in$ & $q\in$ & $\tilde r\in$ & $\varrho\in$\\ \hline $(1, \frac 1 \alpha)$ & $[\frac 1 \alpha, \infty]$ & $[q', \infty]$ & $[\frac 1 \alpha, 2)$ & $(q, \infty]$ & $[q', \infty]$ & $[2, \infty)$ & $(q, \infty]$ & $(q, \infty]$ \end{tabular} \end{center} \end{enumerate} The Existence Theorem \ref{OurEvolutionProblemUniqueSolution} yields the following theorem. \begin{thm}\label{ExampleUniqueSolution} Let $\Omega\subset \mathbb{R}^d$ be a bounded domain with $C^2$-boundary, let $q\in (1, \infty)$ and let $\alpha\in(0, 1)$. Let $B(t)f=V(t,\cdot)f$ define a scalar-valued multiplication operator on $L^q(\Omega)$ with $V\in L^\infty(\cI, L^{\tilde r}(\Omega))$, where $\tilde r\in(1,\infty)$ is chosen from the above tables. Then the non-ACP \eqref{EvolProbLaplaceAndTimeDependentPotential} has a unique solution operator (propagator). \end{thm} \begin{proof} Using relation \eqref{RelationRTildeRhoTilde} and the Sobolev embeddings \eqref{SobolevEmbeddings}, it is easy to see that $\mathrm{dom}((-\Delta)^\alpha)\subset\mathrm{dom}(B(t))$ holds. Hence, the assumptions of Theorem \ref{OurEvolutionProblemUniqueSolution} are satisfied.
\end{proof} \begin{rem} In \cite{PruessSchnaubelt2001}, the existence of a solution operator for equation \eqref{EvolProbLaplaceAndTimeDependentPotential} is shown assuming weaker regularity of the potential in space and time. We assume uniform boundedness of $\|B(t)(-\Delta)^{-\alpha}\|_{\mathcal B(X)}$, which is indeed stronger, but important for the further considerations. \end{rem} Now, we study the convergence of the Trotter product approximants of the solution operator. We assume that the real part of the potential $V(t,x)$ is non-negative: \begin{align*} {\rm Re}(V(t,x)) \geq 0, {\rm ~~for~a.e.~} (t,x)\in \cI\times\Omega \ . \end{align*} Then, for any $t\in \cI$, the multiplication operator $B(t)$ is the generator of a contraction semigroup on $X=L^q(\Omega)$ (\cite[Theorem I.4.11-12]{EngNag2000}). Moreover, assumption (A4) is satisfied. Now let \begin{align*} F(t):=(-\Delta)^{-1}B(t)(-\Delta)^{-\alpha} : L^q(\Omega)\rightarrow H^{2}_q(\Omega)\cap \mathring H^1_q(\Omega)\subset L^q(\Omega). \end{align*} Assuming $V\in L^\infty(\cI,L^\varrho(\Omega))$, where $\varrho$ is chosen from Table 1, we find that $\mathrm{dom}((-\Delta)^\alpha)\subset\mathrm{dom}(B(t))$ and $\mathrm{dom}(\Delta^*)\subset\mathrm{dom}(B(t)^*)$. Hence, the operator $F(t)$ is uniformly bounded. It remains only to prove the H\"older-continuity of the operator-valued function $t\mapsto F(t)$. Let $f\in L^q(\Omega)$ and $g\in L^{q'}(\Omega)$. Define $\tilde f = (-\Delta)^{-\alpha} f \in \mathring{H}^{2\alpha}_q(\Omega)\subset L^r(\Omega)$ and $\tilde g = ((-\Delta)^{-1})^*g=(-\Delta^*)^{-1}g\in H^2_{q'}(\Omega)\cap \mathring H^1_{q'} (\Omega)\subset L^{\rho}(\Omega)$. Then, we get for $t\in \cI$ \begin{align*} \langle F(t) f, g\rangle &= \langle (-\Delta)^{-1}B(t)(-\Delta)^{-\alpha}f, g\rangle =\langle (-\Delta)^{-\alpha}f, B(t)^* (-\Delta^*)^{-1}g\rangle = \langle \tilde f, B(t)^* \tilde g\rangle \ .
\end{align*} The boundedness of $\langle \tilde f, B(t)^* \tilde g\rangle$ can be ensured by $V(t,\cdot)\in L^\tau(\Omega)$, where $\tau\in(1,\infty)$ is defined via \begin{align} \frac{1}{r}+\frac{1}{\tau}+\frac{1}{\rho}=1\label{RelationTau}. \end{align} This yields the following Table 2 of parameters: \begin{enumerate} \item For $d\geq 4$, or for $d=3$ and $\alpha\in(0,\frac{1}{2})$: \begin{center} \begin{tabular}{l|l||l|l||l|l} $q\in$ & $\tau \in$ & $q\in$ & $\tau \in$ & $q\in$ & $\tau \in$\\ \hline $(1, \frac{d}{d-2}]$ &$(\frac{\frac d {2\alpha}q}{\frac{d}{2\alpha}(q-1) + q},\infty]$ & $[\frac{d}{d-2}, \frac{d}{2\alpha}]$ & $[\frac{d}{2\alpha+2},\infty] $ & $(\frac d {2\alpha},\infty]$ & $[\frac{\frac d 2 q'} {\frac d 2(q'-1) +q'}, \infty]$ \end{tabular} \end{center} \item For $d=3$ and $\alpha \in [\frac 1 2, 1)$: \begin{center} \begin{tabular}{l|l||l|l||l|l} $q\in$ & $\tau \in$ & $q\in$ & $\tau \in$ & $q\in$ & $\tau \in$ \\ \hline $(1, \frac 3 {2\alpha}]$ & $(\frac{\frac 3 {2\alpha}q}{\frac 3 {2\alpha}(q-1) +q }, \infty]$ & $[\frac 3{2\alpha}, 3]$ & $(1, \infty]$ & $(3, \infty]$& $[\frac{\frac 3 2 q'} {\frac 3 2(q'-1) +q'}, \infty]$ \end{tabular} \end{center} \item For $d=2$ and $\alpha\in(0,1)$: \begin{center} \begin{tabular}{l||l} $q\in$ & $\tau\in$\\ \hline $(1, \infty)$ & $(1, \infty]$ \end{tabular} \end{center} \end{enumerate} Since $r\geq q$, it holds that $\tau\leq \tilde \rho$ and hence $\tau \leq \varrho = \max\{\tilde r, \tilde \rho\}$. We are now able to estimate the product approximation of the solution of the non-ACP \eqref{EvolProbLaplaceAndTimeDependentPotential}. \begin{thm} Let $\Omega\subset \mathbb{R}^d$ be a bounded domain with $C^2$-boundary, let $q\in (1, \infty)$, $\alpha\in(0, 1)$ and $\beta\in(\alpha,1)$. Choose $\varrho, \tau\in(1,\infty)$ from Tables 1 and 2 above.
Let $B(t)f=V(t,\cdot)f$ define a scalar-valued multiplication operator in $L^q(\Omega)$ with \begin{align*} V\in L^\infty(\cI, L^{\varrho}(\Omega))\cap C^\beta(\cI, L^\tau(\Omega)). \end{align*} Moreover, let ${\rm Re}(V(t,x))\geq0$ for $t\in \cI$ and for a.e. $x\in \Omega$. Then the solution operator $U(t,s)$ of \eqref{EvolProbLaplaceAndTimeDependentPotential} can be approximated in the operator-norm topology with the error bound \begin{align*} {\rm sup}_{(t,s)\in\Delta}\|U_n(t,s)-U(t,s)\|_{\mathcal B(L^q(\Omega))} = O(n^{-(\beta-\alpha)}) \ , \end{align*} by the Trotter product approximants \begin{align}\label{ApproximationPropagatorExample} U_n(t,s) = \stackrel{\longrightarrow} \prod_{j=1}^n e^{-\frac{t-s}{n}V(\frac{n-j+1}{n}t+ \frac{j-1}{n}s,\cdot)}e^{\frac{t-s}{n}\Delta} \ . \end{align} \end{thm} \begin{proof} One can easily verify that the operator $(-\Delta)^{-1}B(t)(-\Delta)^{-\alpha}$ is bounded. The stability condition is satisfied, since we deal with generators of contraction semigroups. The $\esssup$ becomes indeed a $\sup$, since the propagator approximants depend continuously on time. Then the claim follows by virtue of Theorem \ref{EstimatePropagators}. \end{proof} We conclude this section with a number of remarks. \begin{rem}~ \item[\;\;(i)] We focused on bounded domains with $C^2$-boundaries. Our arguments can be extended to more general domains. \item[\;\;(ii)] Although the propagator approximants $\{U_n(t,s)\}_{(t,s)\in\Delta}$ defined in \eqref{ApproximationPropagatorExample} look elaborate, they have a simple structure. The semigroup in $L^q(\mathbb{R}^d)$ generated by the Laplace operator is given by the Gauss-Weierstrass semigroup (see for example \cite[Chapter 2.13]{EngNag2000}) defined via \begin{align*} (e^{t\Delta}u)(x) = (T(t)u)(x) = (4\pi t)^{-d/2}\int_{\mathbb{R}^d} dy \, e^{-\frac{|x-y|^2}{4t}}u(y) \ .
\end{align*} The factors $e^{-\tau V(t_j)}$, $j = 1,2, \ldots, n$, in the product approximant \eqref{ApproximationPropagatorExample} are scalar-valued and can be easily computed. \item[\;\;(iii)] In \cite{Batkai2011}, see Theorem 5.2, the authors proved for the same approximation (called there the sequential splitting procedure) a \textit{vector-dependent} convergence rate on a subspace of $L^q(\mathbb{R}^d)$, where the potential $V$ is bounded and its commutator with the Laplacian satisfies a supplementary \textit{commutator} condition. \end{rem} \section{Appendix}\label{sec:9} The next Gronwall-type lemma is useful. It can be proved by iterating the Volterra integral equation \cite[Theorem 2.25]{Hunter2001}. \begin{lem}\label{GronwallLemma} Let $F$ be a real function satisfying \begin{align*} 0\leq F(t) \leq c_1 t^{-\alpha}+c_2\int_0^t F(s) (t-s)^{-\alpha} ds, ~~t>0 \ , \end{align*} for some positive constants $c_1, c_2>0$ and $\alpha\in(0,1)$. Then there is a constant $C=2c_1$ and a time value $t_0=\sigma_\alpha\cdot\min\left\{{1}/{c_2} , \left({1}/{c_2}\right)^{1/(1-\alpha)}\right\}$ (where $\sigma_\alpha$ depends only on $\alpha\in(0,1)$) such that $F(t)t^{\alpha}\leq C$ for $t\in(0,t_0)$. \end{lem} Further we prove the following lemma. \begin{lem}\label{lem:8.2} Let ${\beta} \in [0,1)$. Then the estimates \begin{equation}\label{eq:9.1} \sum^{n-1}_{m=1} \frac{1}{m^{\beta}} \le \frac{n^{1-{\beta}}}{1-{\beta}} \ , \end{equation} and \begin{equation}\label{eq:9.2} \sum^{n-1}_{m=1}\frac{1}{(n-m) m^{\beta}} \le \frac{2}{1-{\beta}}\frac{1}{n^{\beta}} + \frac{\ln(n)}{n^{\beta}} \ , \end{equation} are valid for $n = 2,3,\ldots$. \end{lem} \begin{proof} The function $f(x) = x^{-{\beta}}$, $x > 0$, is decreasing.
Hence \begin{displaymath} \sum^{n-1}_{m=1} \frac{1}{m^{\beta}} \le \int^{n-1}_0 dx \, \frac{1}{x^{\beta}} \le \frac{(n-1)^{1-{\beta}}}{1-{\beta}} \le \frac{n^{1-{\beta}}}{1-{\beta}} \ , \end{displaymath} for $n =2,3,\ldots$, which proves (\ref{eq:9.1}). Further, we have \begin{displaymath} \sum^{n-1}_{m=1}\frac{1}{(n-m) m^{\beta}} = \frac{1}{n}\sum^{n-1}_{m=1}\frac{n}{(n-m)m^{\beta}} = \frac{1}{n}\sum^{n-1}_{m=1}\frac{1}{m^{\beta}} + \frac{1}{n}\sum^{n-1}_{m=1}\frac{m^{1-{\beta}}}{n-m} \ . \end{displaymath} Since $\sum^{n-1}_{m=1} {1}/{m^{\beta}} \le n^{1-{\beta}}/({1-{\beta}})$ we get the estimate \begin{equation}\label{eq:9.3} \sum^{n-1}_{m=1}\frac{1}{(n-m) m^{\beta}} \le \frac{1}{1-{\beta}}\frac{1}{n^{\beta}} + \frac{1}{n}\sum^{n-1}_{m=1} \frac{m^{1-{\beta}}}{n-m} \ . \end{equation} Note that the function $f(x) := {x^{1-{\beta}}}/{(n-x)}$, $x \in [0,n)$, is increasing. Hence for $n\geq 2$, \begin{displaymath} \begin{split} \sum^{n-2}_{m=1}\frac{m^{1-{\beta}}}{n-m} &\le \int^{n-1}_1 dx \, \frac{x^{1-{\beta}}}{n-x} \le n^{1-{\beta}}\int^1_{\tfrac{1}{n}} ds \, \frac{(1-s)^{1-{\beta}}}{s} \le n^{1-{\beta}}\int^1_{\tfrac{1}{n}} ds \, \frac{1}{s} = n^{1-{\beta}}\ln(n). \end{split} \end{displaymath} Consequently we obtain the estimate \begin{displaymath} \frac{1}{n}\sum^{n-1}_{m=1}\frac{m^{1-{\beta}}}{n-m} = \frac{1}{n}\sum^{n-2}_{m=1}\frac{m^{1-{\beta}}}{n-m} + \frac{1}{n}(n-1)^{1-{\beta}} \le \frac{\ln(n)}{n^{{\beta}}} + \frac{1}{1-\beta}\frac{1}{n^{{\beta}}}, \quad n = 2,3,\ldots\, , \end{displaymath} which together with (\ref{eq:9.3}) proves (\ref{eq:9.2}). \end{proof} \begin{thebibliography}{10} \bibitem{Acquistapace1987} P. Acquistapace and B. Terreni. \newblock A unified approach to abstract linear nonautonomous parabolic equations. \newblock {\em Rend. Sem. Mat. Univ. Padova}, 78:47--107, 1987.
\bibitem{Arendt2007} W.~Arendt, R.~Chill, S.~Fornaro, and C.~Poupaud. \newblock {$L^p$}-maximal regularity for non-autonomous evolution equations. \newblock {\em J. Differential Equations}, 237(1):1--26, 2007. \bibitem{Batkai2011} A. B\'atkai, P. Csom\'os, B. Farkas, and G. Nickel. \newblock Operator splitting for non-autonomous evolution equations. \newblock {\em Journal of Functional Analysis}, 260:2163--2190, 2011. \bibitem{Batkai2012} A. B\'atkai and E. Sikolya. \newblock The norm convergence of a Magnus expansion method. \newblock {\em Cent. Eur. J. Math.}, 10(1):150--158, 2012. \bibitem{CachZag2001} V.~Cachia and V.~A. Zagrebnov. \newblock Operator-norm convergence of the Trotter product formula for holomorphic semigroups. \newblock {\em J. Operator Theory}, 46:199--213, 2001. \bibitem{CembranosMendoza1997} P.~Cembranos and J.~Mendoza. \newblock {\em Banach spaces of vector-valued functions}, volume 1676 of {\em Lecture Notes in Mathematics}. \newblock Springer-Verlag, Berlin, 1997. \bibitem{EngNag2000} K.-J. Engel and R.~Nagel. \newblock {\em One-parameter semigroups for linear evolution equations}. \newblock Springer-Verlag, New York, 2000. \bibitem{Evans1976} D.~E. Evans. \newblock Time dependent perturbations and scattering of strongly continuous groups on {B}anach spaces. \newblock {\em Math. Ann.}, 221(3):275--290, 1976. \bibitem{Hunter2001} J.~K. Hunter and B.~Nachtergaele. \newblock {\em Applied Analysis}. \newblock World Scientific Publishing Co., Singapore, 2001. \bibitem{IchinoseTamura1998} T.~Ichinose and H.~Tamura. \newblock Error estimate in operator norm of exponential product formulas for propagators of parabolic evolution equations. \newblock {\em Osaka J. Math.}, 35(4):751--770, 1998. \bibitem{Kato1953} T.~Kato. \newblock Integration of the equation of evolution in a {B}anach space. \newblock {\em J. Math. Soc. Japan}, 5:208--234, 1953.
\bibitem{Kato1970} T.~Kato. \newblock Linear evolution equations of ``hyperbolic'' type. \newblock {\em J. Fac. Sci. Univ. Tokyo Sect. I}, 17:241--258, 1970. \bibitem{Kato1973} T.~Kato. \newblock Linear evolution equations of ``hyperbolic'' type. {II}. \newblock {\em J. Math. Soc. Japan}, 25:648--666, 1973. \bibitem{Kato1980} T.~Kato. \newblock {\em Perturbation theory for linear operators}. \newblock Classics in Mathematics. Springer-Verlag, Berlin, 1995. \bibitem{Nei1981} H.~Neidhardt. \newblock {On abstract linear evolution equations. {I}.} \newblock {\em Math. Nachr.}, 103:283--298, 1981. \bibitem{NeiZag1998} H.~Neidhardt and V.~A. Zagrebnov. \newblock On error estimates for the Trotter--Kato product formula. \newblock {\em Letters in Mathematical Physics}, 44:169--186, 1998. \bibitem{NeiZag2009} H.~Neidhardt and V.~A. Zagrebnov. \newblock Linear non-autonomous {C}auchy problems and evolution semigroups. \newblock {\em Adv. Differential Equations}, 14(3-4):289--340, 2009. \bibitem{Nickel1996} G. Nickel. \newblock On evolution semigroups and nonautonomous {C}auchy problems. \newblock {\em Diss. Summ. Math.}, 1(1-2):195--202, 1996. \bibitem{Pazy1983} A.~Pazy. \newblock {\em Semigroups of linear operators and applications to partial differential equations}. \newblock Springer-Verlag, New York, 1983. \bibitem{Phillips1953} R.~S. Phillips. \newblock Perturbation theory for semi-groups of linear operators. \newblock {\em Trans. Amer. Math. Soc.}, 74:199--221, 1953. \bibitem{Prato1984} G. da Prato and P. Grisvard. \newblock Maximal regularity for evolution equations by interpolation and extrapolation. \newblock {\em Journal of Functional Analysis}, 58:107--124, 1984. \bibitem{PruessSchnaubelt2001} J.~Pr{\"u}ss and R.~Schnaubelt. \newblock Solvability and maximal regularity of parabolic evolution equations with coefficients continuous in time. \newblock {\em J. Math. Anal.
Appl.}, 256(2):405--430, 2001. \bibitem{Stephan2016} A.~Stephan. \newblock {\it On operator-norm estimates for approximations of solutions of evolution equations using the Trotter product formula}. \newblock Master's thesis, Humboldt-Universit\"at zu Berlin, 2016. \bibitem{Tan1979} H.~Tanabe. \newblock {\em Equations of evolution}. \newblock Pitman (Advanced Publishing Program), Boston, Mass.-London, 1979. \bibitem{Trotter1959} H.~F. Trotter. \newblock On the product of semi-groups of operators. \newblock {\em Proc. Amer. Math. Soc.}, 10:545--551, 1959. \bibitem{Voigt1977} J.~Voigt. \newblock On the perturbation theory for strongly continuous semigroups. \newblock {\em Math. Ann.}, 229(2):163--171, 1977. \bibitem{VWZ2008} P.-A. Vuillermot, W.~F. Wreszinski, and V.~A. Zagrebnov. \newblock A Trotter-Kato product formula for a class of non-autonomous evolution equations. \newblock {\em Nonlinear Analysis}, 69:1067--1072, 2008. \bibitem{VWZ2009} P.-A. Vuillermot, W.~F. Wreszinski, and V.~A. Zagrebnov. \newblock A general Trotter-Kato formula for a class of evolution operators. \newblock {\em J. Funct. Anal.}, 257:2246--2290, 2009. \bibitem{Zag2003} V.~A. Zagrebnov. \newblock {\em Topics in the Theory of Gibbs Semigroups}. \newblock KU Leuven University Press, Leuven, 2003. \end{thebibliography} \end{document}
\begin{document} \title[Prandtl equation] { Long time well-posedness\\ of the Prandtl equations in Sobolev space} \author[C.-J. Xu]{Chao-Jiang Xu} \date{May 8, 2016} \address{Chao-Jiang XU \newline\indent School of Mathematics and Statistics, Wuhan University \newline\indent 430072, Wuhan, P. R. China \newline \indent and \newline \indent Universit\'e de Rouen, CNRS, UMR 6085-Laboratoire de Math\'ematiques \newline \indent 76801 Saint-Etienne du Rouvray, France } \email{[email protected]} \author[X. Zhang]{Xu Zhang} \address{Xu ZHANG \newline\indent School of Mathematical Sciences, Xiamen University, Xiamen, Fujian 361005, China \newline \indent and \newline \indent Universit\'e de Rouen, CNRS, UMR 6085-Laboratoire de Math\'ematiques \newline \indent 76801 Saint-Etienne du Rouvray, France} \email{[email protected]} \keywords{Prandtl boundary layer equation, energy method, well-posedness theory, monotonic condition, Sobolev space} \subjclass[2010]{35M13, 35Q35, 76D10, 76D03, 76N20} \begin{abstract} In this paper, we study the long time well-posedness for the nonlinear Prandtl boundary layer equation on the half plane. When the initial data are small perturbations of a monotonic shear profile, we prove the existence, uniqueness and stability of solutions in weighted Sobolev spaces by energy methods. The key point is that the life span of the solution can be any large $T$ as long as the initial data are a perturbation around the monotonic shear profile of small size like $e^{-T}$. The nonlinear cancellation properties of the Prandtl equations under the monotonicity assumption are the main ingredients in establishing a new energy estimate.
\end{abstract} \maketitle \tableofcontents \section{Introduction} In this work, we study the initial-boundary value problem for the Prandtl boundary layer equation in two dimensions, which reads \begin{equation*} \begin{cases} \partial_t u + u \partial_x u + v\partial_y u + \partial_x p = \partial^2_y u,\quad t>0,\, (x, y)\in\mathbb{R}^2_+, \\ \partial_x u +\partial_y v =0, \\ u|_{y=0} = v|_{y=0} =0 , \ \lim\limits_{y\to+\infty} u = U(t,x), \\ u|_{t=0} =u_0 (x,y)\, , \end{cases} \end{equation*} where $\mathbb{R}^2_+=\{(x, y)\in \mathbb{R}^2;\, y>0\}$, $u(t,x,y)$ represents the tangential velocity and $v(t, x, y)$ the normal velocity. Here $p(t, x)$ and $U(t, x)$ are the boundary values of the Euler pressure and of the Euler tangential velocity, determined by Bernoulli's law: $ \partial_t U(t,x) + U(t,x)\partial_x U(t,x) + \partial_x p =0.$ The Prandtl equations are a major achievement in the progress of understanding the famous d'Alembert paradox in fluid mechanics. In a word, d'Alembert's paradox states that a solid body moving in an incompressible, inviscid potential flow undergoes neither drag nor lift. This, of course, contradicts everyday experience. In 1904, Prandtl observed that, in a fluid of small viscosity, the behavior of the fluid near the boundary is completely different from that away from the boundary. The part away from the boundary can almost be regarded as an ideal fluid, while the part near the boundary is deeply affected by the viscous force and is described by the Prandtl boundary layer equation, first derived formally by Prandtl in 1904 (\cite{prandtl1904uber}). From the mathematical point of view, the well-posedness and the justification of the Prandtl boundary layer theory do not yet have a satisfactory theory and remain open for general cases. During the past century, many mathematicians have investigated these problems.
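Note that $v$ is not an independent unknown in the system above: integrating the divergence-free condition $\partial_x u + \partial_y v = 0$ in $y$ and using the boundary condition $v|_{y=0}=0$ gives

```latex
v(t,x,y) = -\int_0^y \partial_x u(t,x,\tilde y)\, d\tilde y ,
```

so the Prandtl system is effectively a single evolution equation for $u$, with $v\,\partial_y u$ acting as a nonlocal transport term.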
The Russian school has contributed a lot to the boundary layer theory and their works were collected in \cite{oleinik1999prandtl}. Up to now, the local existence theory for the Prandtl boundary layer equation has been achieved when the initial data belong to some special functional spaces: 1) analytic spaces, or spaces analytic with respect to the tangential variable \cite{vicol2013,sammartino2003onlyx,sammartino1998analytic-1,sammartino1998analytic-2}; 2) Sobolev spaces or H\"older spaces under the monotonicity assumption \cite{xu2014local,wangyaguang2014three,masmoudi2012local,oleinik1999prandtl,xin2004global}; 3) recently \cite{masmoudi2013gevrey} in the Gevrey class with a non-degenerate critical point. See also \cite{vicol2014}, where the initial data are monotone on a number of intervals and analytic on the complement. Besides explaining d'Alembert's paradox, the Prandtl equations play a vital role in a challenging problem: the inviscid limit. { Indeed, as pointed out by Grenier-Guo-Nguyen \cite{ggn1,ggn2,ggn3}, the long time behavior of the Prandtl equations is important for progress towards the inviscid limit of the Navier-Stokes equations. We must understand the behavior of solutions on a time interval longer than the one on which the instability used to prove ill-posedness develops. } To the best of our knowledge, under the monotonicity assumption, by using the Crocco transformation, Oleinik (\cite{oleinik1999prandtl}) obtained long-time smooth solutions in H\"older spaces for the Prandtl equation defined on the interval $0\le x\le L$ with $L$ very small. Xin-Zhang (\cite{xin2004global}) proved the global existence of weak solutions if the pressure gradient has a favorable sign, that is $\partial_x p\le 0$. See \cite{wangyaguang2015global} for a similar work in the 3-D case. The global existence of smooth solutions in the monotonic case remains open.
In the analytic setting, Ignatova-Vicol (\cite{vicol2015}) recently obtained an almost global-in-time solution which is analytic with respect to the tangential variable; see also \cite{zhangping2014longtime} for a similar attempt using a refined Littlewood-Paley analysis. On the other hand, without the monotonicity assumption, E and Engquist in \cite{e-2} constructed finite time blowup solutions to the Prandtl equation. After this work, many instability or strong ill-posedness results appeared. In particular, G\'erard-Varet and Dormy \cite{gerard2010ill} showed that the linearized Prandtl equation around a shear flow with a non-degenerate critical point is ill-posed in the sense of Hadamard in Sobolev spaces. See also \cite{e-1,gerard2010remark,grenier-2,guo2011nonlinear,renardy2009ill-steady-hydrostatic} for related works. Besides, the Crocco transformation cannot be applied to the Navier-Stokes equations. The best choice left is to obtain long time well-posedness by energy methods, since energy methods work well for both the Navier-Stokes and the Euler equations. Recently, in two works \cite{xu2014local,masmoudi2012local}, local-in-time well-posedness was obtained by different kinds of energy methods: one by a Nash-Moser-H\"ormander iteration, the other by uniform estimates for a regularized parabolic equation together with the maximum principle. Motivated by the above analysis, in this work, using a direct energy method, we prove the long time existence of smooth solutions of the Prandtl equations in Sobolev spaces. { In detail, for any fixed $T>0$, we show that if the initial perturbation is of size $e^{-T}$ small enough, then the life span of solutions to the Prandtl equations is at least $T$.} In what follows, we choose the uniform outflow $ U(t,x)= 1$ which implies $p_x=0$.
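This choice kills the pressure term: substituting $U(t,x)\equiv 1$ into Bernoulli's law recalled above gives
$$
\partial_t U + U\partial_x U + \partial_x p \;=\; 0 + 1\cdot 0 + \partial_x p \;=\;0,
$$
hence $\partial_x p = 0$.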
In other words, the following problem for the Prandtl equation is considered: \begin{equation}\label{full-prandtl} \begin{cases} \partial_t u + u \partial_x u + v\partial_y u = \partial^2_y u,\,\, t>0,\,\, (x, y)\in\mathbb{R}^2_+, \\ \partial_x u +\partial_y v =0, \\ u|_{y=0} = v|_{y=0} =0 , \ \lim\limits_{y\to+\infty} u =1, \\ u|_{t=0} =u_0(x, y). \end{cases} \end{equation} The weighted Sobolev spaces (similar to \cite{masmoudi2012local}) are defined as follows: \begin{equation*} \|f\|_{H^n_\lambda(\mathbb{R}^2_+)}^2 = \sum\limits_{\alpha_1+\alpha_2\le n}\int_{\mathbb{R}^2_+} \langle y \rangle^{2\lambda+2\alpha_2} |\partial^{\alpha_1}_{x}\partial^{\alpha_2}_{y} f|^2 dx dy\, ,~~~ \lambda > 0,~n \in \mathbb{N}^+. \end{equation*} In particular, $\|f\|_{L^2_\lambda(\mathbb{R}^2_+)} = \|f\|_{H^0_\lambda(\mathbb{R}^2_+)} $ and $H^n$ stands for the usual Sobolev space. {\bf Initial data of shear flow.} Loosely speaking, a shear flow is a solution to the Prandtl equations which is independent of $x$. For more details, please check the {\it analysis of shear flow } part in Section \ref{section2} and Lemma \ref{shear-profile}. We denote the shear flow by $u^s$. From now on, we consider solutions to the Prandtl equations as perturbations around some shear flow. That is to say, \[ u(t, x, y) = u^s(t, y) + \tilde{u}(t, x, y), \quad t \ge 0.\] Assume that $u^s_0$ (the initial datum of the shear flow) satisfies the following conditions: \begin{align} \label{shear-critical-momotone} \begin{cases} u^s_0\in C^{m+4}([0, +\infty[),\,\,\, \lim\limits_{y \to + \infty} u^s_0(y)=1;\\ (\partial^{2p}_y u^s_0)(0) = 0,\,\,\,0\le 2p\le m+4;\\ c_1\langle y \rangle^{-k}\le (\partial_y u^s_0)(y)\le c_2 \langle y \rangle^{-k}, ~~ \forall\,y\ge 0,\\ |(\partial_y^p u^s_0)(y)| \le c_2 \langle y \rangle^{-k-p+1},\,\, \forall\,\,y\ge 0,\,\, 1\le p\le m+4, \end{cases} \end{align} for certain $c_1, c_2>0$ and an even integer $m$. We have the following long time well-posedness results.
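Profiles satisfying \eqref{shear-critical-momotone} are easy to exhibit; one concrete choice (given here only for illustration) is
$$
u^s_0(y) \;=\; \frac{1}{C_k}\int_0^y \langle s\rangle^{-k}\, ds,
\qquad
C_k:=\int_0^{+\infty} \langle s\rangle^{-k}\, ds<\infty \quad (k>1).
$$
Since $\langle s\rangle^{-k}$ is even, this $u^s_0$ is odd in $y$, so all even-order $y$-derivatives vanish at $y=0$; moreover $\lim\limits_{y\to+\infty}u^s_0(y)=1$, $\partial_y u^s_0(y) = C_k^{-1}\langle y\rangle^{-k}$ exactly, and each further $y$-derivative gains one power of decay, so that $|\partial_y^p u^s_0(y)|\le c_2\langle y\rangle^{-k-p+1}$ for a suitable $c_2$.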
\begin{theorem}\label{main-theorem} Let $m\ge 6$ be an even integer, $k>1$ and $-\frac12<\nu<0$. Assume that $u^s_0$ satisfies \eqref{shear-critical-momotone}, the initial data $\tilde u_0=(u_0-u^s_0) \in H^{m+3}_{k + \nu }(\mathbb{R}^2_+)$, and $\tilde u_0$ satisfies the compatibility condition up to order $m+2$. Then for any $T>0$, there exists $\delta_0>0$ small enough such that if \begin{equation}\label{initial-small} \|\tilde u_0 \|_{H^{m+1}_{k + \nu }(\mathbb{R}^2_+)}\le \delta_0, \end{equation} then the initial-boundary value problem \eqref{full-prandtl} admits a unique solution $(u, v)$ with $$ (u-u^s)\in L^\infty([0, T]; H^{m}_{k+\nu-\delta'}(\mathbb{R}^2_+)),\,\, v\in L^\infty([0, T]; L^\infty(\mathbb{R}_{y, +}; H^{m-1}(\mathbb{R}_x))), $$ where $\delta'>0$ satisfies $\nu+\frac 12<\delta'<\nu+1$ and $k+\nu-\delta'>\frac 12$. Moreover, we have stability with respect to the initial data in the following sense: given any two initial data $$ u^1_0=u^s_0+\tilde{u}^1_0,\quad u^2_0=u^s_0+\tilde{u}^2_0, $$ if $u^s_0$ satisfies \eqref{shear-critical-momotone} and $\tilde{u}^1_0, \,\tilde{u}^2_0$ satisfy \eqref{initial-small}, then the solutions $u^1$ and $u^2$ to \eqref{full-prandtl} satisfy $$ \|u^1-u^2\|_{L^\infty([0, T]; H^{m-3}_{k+\nu-\delta'}(\mathbb{R}^2_+))} \le C\|u^1_0-u^2_0 \|_{ H^{m+1}_{k +\nu}(\mathbb{R}^2_+)}, $$ where the constant $C$ depends on the norms of $\partial_y{u}^1, \partial_y{u}^2$ in $L^\infty([0, T]; H^m_{k+\nu-\delta'+1}(\mathbb{R}^2_+))$. \end{theorem} \begin{remark}~ \begin{itemize} \item [1.] We can also verify that \begin{align*} \partial_y (u-u^s)\in L^\infty([0, T]; H^{m}_{k+\nu-\delta' + 1}(\mathbb{R}^2_+)),\, \partial_y v\in L^\infty([0, T]; H^{m-1}_{k+\nu-\delta'}(\mathbb{R}^2_+)). \end{align*} \item [2.] From \eqref{c-tilde} and \eqref{bound-2}, the relationship between the life span $T$ and the size of the initial data is: $$ \delta_0\,\,\approx\,\, e^{-T}. $$ \item [3.]
The results of the main theorem can be generalized to the periodic case where $x$ lies in the torus. \item [4.] We find that the weight of the solution $u(t) - u^s(t)$ is smaller than that of the initial datum $u_0 - u^s_0$. This means that there is a loss of decay of order $\delta'>0$, which may be very small. It results from the term $v\,\partial_y u$, which is the major difficulty in the analysis of the Prandtl equation. \end{itemize} \end{remark} This article is arranged as follows. In Section \ref{section2}, we explain the main difficulties in the study of the Prandtl equation and present an outline of our approach. In Section \ref{section3}, we study the approximate solutions to \eqref{full-prandtl} given by a parabolic regularization. In Section \ref{section4}, we prepare some technical tools and the formal transformation for the Prandtl equations. Section \ref{section5} is dedicated to the uniform estimates of the approximate solutions obtained in Section \ref{section3}. We finally prove the main theorem in Sections \ref{section7}-\ref{section8}. \noindent {\bf Notations: } The letter $C$ stands for various suitable constants, independent of the functions and the special parameters involved, which may vary from line to line and step to step. When it depends on some crucial parameters in particular, we put a sub-index such as $C_\epsilon$ etc., which may also vary from line to line. \section{Preliminary}\label{section2} \noindent {\bf Difficulties and our approach.} We now explain the main difficulties in proving Theorem \ref{main-theorem}, and present the strategies of our approach. It is well-known that the major difficulty in the study of the Prandtl equation \eqref{full-prandtl} is the term $v\,\partial_y u$, where the vertical velocity behaves like $$ v(t, x, y)=-\int^y_0 \partial_x u(t, x, \tilde y)d\tilde y, $$ by the divergence-free condition and the boundary conditions. So it introduces a loss of one $x$-derivative. The $y$-integration also creates a loss of weight with respect to the $y$-variable.
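To see the loss concretely, apply $\partial_x^m$ to the troublesome term: among the resulting terms one finds
$$
\partial_x^m\big(v\,\partial_y u\big) \;\ni\; (\partial_x^m v)\,\partial_y u
\;=\; -\Big(\int^y_0 \partial_x^{m+1} u(t, x, \tilde y)\, d\tilde y\Big)\,\partial_y u,
$$
which contains $m+1$ derivatives in $x$, one more than an $H^m$-type energy controls, while the integration in $\tilde y$ consumes one power of the weight $\langle y\rangle$.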
Then the standard energy estimates do not work. This explains why there are few existence results in the literature. Recall that in \cite{xu2014local} (see also \cite{masmoudi2012local} for a similar transformation), under the monotonicity assumption $\partial_y u>0$, we divide the Prandtl equation by $\partial_y u$ and then take the derivative with respect to $y$, to obtain an equation for the new unknown function $f=\left(\frac{u}{\partial_y u}\right)_y$. In the new equation, the term $v$ disappears thanks to the divergence-free condition. Here, slightly differently from \cite{xu2014local}, we use $ g_m =\left(\frac{\partial_x^m u}{\partial_y u}\right)_y$, where $m$ stands for the highest order of derivatives with respect to $x$. From \cite{masmoudi2012local}, we can observe that we only need to worry about the highest derivative with respect to $x$. This is why we only define $g_m$. In order to prove the existence of solutions, following the idea of Masmoudi-Wong (\cite{masmoudi2012local}), we will construct an approximation scheme and study the parabolic regularized Prandtl equation \eqref{shear-prandtl-approxiamte}, which preserves the nonlinear structure of the original Prandtl equation \eqref{full-prandtl}, as well as the nonlinear cancellation properties. Then by uniform energy estimates for the approximate solutions, the existence of solutions to the original Prandtl equation \eqref{full-prandtl} follows. This energy estimate also implies the uniqueness and the stability. The uniform energy estimate for the approximate solutions is the main task of this paper.
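Let us indicate schematically why this substitution removes the loss of derivative (a formal computation; the rigorous weighted version is carried out in Sections \ref{section4}-\ref{section5}). Differentiating the Prandtl equation with respect to $y$ and using $\partial_y v=-\partial_x u$, the terms $\partial_x u\,\partial_y u$ and $\partial_y v\,\partial_y u$ cancel, leaving
$$
\big(\partial_t + u\,\partial_x + v\,\partial_y\big)\partial_y u = \partial^3_y u.
$$
Hence the quotient $h_m:=\partial_x^m u/\partial_y u$ satisfies a transport-diffusion equation in which $v$ enters only through the undifferentiated factor $\partial_x^m v$. Applying one more $\partial_y$ replaces $\partial_x^m v$ by $-\partial_x^{m+1}u$, and the identity
$$
\partial_x^{m+1} u = \partial_x\big(\partial_y u \cdot h_m\big) = \partial_y u\,\partial_x h_m + (\partial_x\partial_y u)\, h_m
$$
shows that its top-order part $\partial_y u\,\partial_x h_m$ is absorbed by the commutator created when $\partial_y$ hits the transport term $u\,\partial_x h_m$; the resulting equation for $g_m=\partial_y h_m$ therefore contains no $x$-derivative of order $m+1$.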
\noindent {\bf Analysis of shear flow.} We write the solution $(u, v)$ of system \eqref{full-prandtl} as \begin{align*} u(t, x, y) = u^s(t, y) + \tilde{u}(t, x, y),\,\, v(t, x, y)=\tilde v(t, x, y), \end{align*} where $u^s(t,y)$ is the solution of the following heat equation \begin{align} \label{shear-flow} \begin{cases} \partial_t u^s - \partial_y^2 u^s =0,\\ u^s|_{y=0} = 0, \lim\limits_{y \to + \infty} u^s(t,y) = 1,\\ u^s|_{t=0} = u^s_0(y). \end{cases} \end{align} Then \eqref{full-prandtl} can be written as \begin{equation}\label{non-shear-prandtl} \begin{cases} \partial_t\tilde{u} + (u^s + \tilde{u}) \partial_x\tilde{u} + \tilde v (u^s_y +\partial_y \tilde{u}) = \partial^2_y\tilde{u}, \\ \partial_x\tilde{u} +\partial_y\tilde{v} =0, \\ \tilde{u}|_{y=0} = \tilde{v}|_{y=0} =0 , \ \lim\limits_{y\to+\infty} \tilde{u} = 0, \\ \tilde{u}|_{t=0} =\tilde{u}_0 (x,y)\, . \end{cases} \end{equation} We first study the shear flow. \begin{lemma} \label{shear-profile} Assume that the initial datum $u^s_0$ satisfies \eqref{shear-critical-momotone}. Then for any $T>0$, there exist $\tilde{c}_1, \tilde{c}_2, \tilde{c}_3>0$ such that the solution $u^s(t,y)$ of the initial boundary value problem \eqref{shear-flow} satisfies \begin{align} \label{shear-critical-momotone-2} \begin{cases} \tilde{c}_1\langle y \rangle^{-k}\le \partial_y u^s(t, y) \le \tilde{c}_2 \langle y \rangle^{-k}, ~~ \forall\,(t, y)\in [0, T]\times \bar{\mathbb{R}}_+,\\ |\partial_y^p u^s(t, y)| \le \tilde{c}_3 \langle y \rangle^{-k-p+1},\,\, \forall\,\,(t, y)\in [0, T]\times \bar{\mathbb{R}}_+,\,\, 1\le p\le m+4, \end{cases} \end{align} where $\tilde{c}_1, \tilde{c}_2, \tilde{c}_3$ depend on $T$.
\end{lemma} \begin{proof} Firstly, the solution of \eqref{shear-flow} can be written as \begin{align*} u^s (t,y) &=\frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}-e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) u_0^s (\tilde{y}) d\tilde{y}\\ &=\frac{1}{\sqrt {\pi}} \Big(\int^{+\infty}_{- {\frac{y}{2\sqrt t}}} e^{-\xi^2} u_0^s (2\sqrt t \xi +y) d\xi - \int^{+\infty}_{ {\frac{y}{2\sqrt t}}} e^{-\xi^2}u_0^s (2\sqrt t \xi -y)d\xi \Big), \end{align*} which gives \begin{align*} \partial_t u^s(t, y) =& \frac{1}{\sqrt {\pi t}} \Big( \int^{+\infty}_{- {\frac{y}{2\sqrt t}}} {\xi}\, e^{-\xi^2} (\partial_y u_0^s) (2\sqrt t \xi +y) d\xi \\ &\qquad- \int^{+\infty}_{ {\frac{y}{2\sqrt t}}}{\xi}\, e^{-\xi^2}(\partial_y u_0^s) (2\sqrt t \xi -y)d\xi \Big). \end{align*} By using $(\partial_y^{2j}u_0^s)(0)=0$ for $0\le 2j\le m+4$, it follows \begin{align}\label{u-0} \begin{split} \partial^p_y u^s(t, y) =& \frac{1}{\sqrt \pi} \Big( \int^{+\infty}_{- {\frac{y}{2\sqrt t}}} e^{-\xi^2} (\partial^p_yu_0^s) (2\sqrt t \xi+y) d\xi\\ &\quad+ (-1)^{p+1}\int^{+\infty}_{ {\frac{y}{2\sqrt t}}} e^{-\xi^2}(\partial^p_yu_0^s) (2\sqrt t \xi -y)d\xi \Big)\\ &=\frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}+ (-1)^{p+1}e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) (\partial^p_yu_0^s) (\tilde{y}) d\tilde{y}, \end{split} \end{align} for all $1\le p\le m+4$. For $p=1$, we have, \begin{align*} \partial_y u^s(t, y) =& \frac{1}{\sqrt \pi} \Big( \int^{+\infty}_{- {\frac{y}{2\sqrt t}}} e^{-\xi^2} (\partial_yu_0^s) (2\sqrt t \xi+y) d\xi\\ &\quad+\int^{+\infty}_{ {\frac{y}{2\sqrt t}}} e^{-\xi^2}(\partial_yu_0^s) (2\sqrt t \xi -y)d\xi \Big)\\ &=\frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}+e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) (\partial_yu_0^s) (\tilde{y}) d\tilde{y}\,. 
\end{align*} Thanks to the monotonic assumption \eqref{shear-critical-momotone}, we have that \begin{align*} \partial_y u^s(t, y) &\approx \frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}+e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) \langle \tilde{y}\rangle^{-k} d\tilde{y}\\ &\approx \frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{-\infty} e^{-\frac{(y+\tilde{y})^2} {4 t}} \langle \tilde{y}\rangle^{-k} d\tilde{y}\,. \end{align*} Recalling now Peetre's inequality, for any $\lambda\in\mathbb{R}$ \begin{equation*} \tilde{c}_0\langle y\rangle^{\lambda}\langle y +\tilde{y}\rangle^{-|\lambda|}\le \langle \tilde{y}\rangle^{\lambda}\le \tilde{c}^{-1}_0\langle {y}\rangle^{\lambda}\langle y+ \tilde{y}\rangle^{|\lambda|}, \end{equation*} then for $\lambda=-k$, we get the first estimate of \eqref{shear-critical-momotone-2} with \begin{equation}\label{c-tilde} \tilde{c}_1=c_1\tilde{c}_0 (1+T)^{-\frac k2},\,\,\tilde{c}_2=c_2\tilde{c}^{-1}_0 (1+T)^{\frac k2}. \end{equation} For the second estimate of \eqref{shear-critical-momotone-2}, \eqref{u-0} implies \begin{align*} |\partial^p_y u^s(t, y)| &\le \frac{c_2}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}+e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) \langle \tilde{y}\rangle^{-k-p+1} d\tilde{y}\\ &\le \frac{c_2}{2\sqrt {\pi t}} \int^{+\infty}_{-\infty} e^{-\frac{(y+\tilde{y})^2} {4 t}} \langle \tilde{y}\rangle^{-k-p+1} d\tilde{y}\,. \end{align*} Using now Peetre's inequality, with $\lambda=-k-p+1$, we get \begin{align*} |\partial^p_y u^s(t, y)|\le c_2\tilde{c}^{-1}_0(1+T)^{\frac {k+p-1}2}\langle {y}\rangle^{-k-p+1}, \end{align*} for any $(t, y)\in [0, T]\times \mathbb{R}_+$. \end{proof} \noindent {\bf Compatibility conditions and reduction of boundary data.} We give now the precise version of the compatibility condition for the nonlinear system \eqref{non-shear-prandtl} and the reduction properties of boundary data. 
\begin{proposition}\label{prop-comp} Let $m\ge 6$ be an even integer, and assume that $\tilde{u}$ is a smooth solution of the system \eqref{non-shear-prandtl}. Then the initial data $\tilde u_0$ have to satisfy the following compatibility conditions up to order $m+2$: \begin{equation}\label{compatibility-a1} \begin{cases} &\tilde{u}_0(x, 0)=0, \quad\,(\partial^2_y \tilde{u}_0)(x, 0)=0, \,\,\forall x\in \mathbb{R},\\ &(\partial^4_y \tilde u_0)(x, 0)=\big(\partial_yu^s_0(0) + (\partial_y\tilde{u}_0)(x, 0)\big) (\partial_y\partial_x\tilde{u}_0)(x, 0),\forall x\in \mathbb{R}, \end{cases} \end{equation} and for $4\le 2p\le m$, \begin{equation}\label{compatibility-a2} (\partial^{2(p+1)}_y \tilde{u}_0)(x, 0)=\sum^p_{q=2}\sum_{(\alpha, \beta)\in \Lambda_q}C_{\alpha,\beta}\prod\limits_{j=1}^q \partial_x^{\alpha_j}\partial_y^{\beta_j +1} \big( u^s_0 + \tilde{u}_0 \big)\big|_{y=0}\,,\, \,\,\forall x\in \mathbb{R}, \end{equation} where \begin{align}\label{Lambda-p} \begin{split} \Lambda_q=&\bigg\{ (\alpha, \beta)=(\alpha_1, \cdots, \alpha_q; \beta_1, \cdots, \beta_q)\in \mathbb{N}^{q}\times\,\mathbb{N}^{q};\\ &\qquad\alpha_j+\beta_j\le 2p-1,\,\,\, 1\le j\le q;\,\,~\sum^q_{j=1}(3\alpha_j + \beta_j) = 2p +1;\\ &\qquad\quad\quad\qquad~~\sum\limits_{j=1}^{q}\beta_j \le 2p -2,~\,0<\sum\limits_{j=1}^{q} \alpha_j \le p - 1\bigg\}\, . \end{split} \end{align} \end{proposition} Remark that for $\alpha_j>0$, we have $\partial_x^{\alpha_j}\partial_y^{\beta_j +1} \big( u^s+ \tilde{u} \big)=\partial_x^{\alpha_j}\partial_y^{\beta_j +1} \tilde{u}$. So the condition $0<\sum\limits_{j=1}^{q} \alpha_j$ implies that each term of \eqref{compatibility-a2} contains at least one factor of the form $\partial_x^{\alpha_j}\partial_y^{\beta_j +1} \tilde{u}_0$. \begin{proof} By the assumption of this proposition, $\tilde u$ is a smooth solution.
For the trace of $\partial_y^{m+2} \tilde u$ on $y=0$ to exist, we at least need to assume that $\tilde{u}\in L^\infty([0, T]; H^{m+3}_{k+\ell-1}(\mathbb{R}^2_+))$. Recalling the boundary condition in \eqref{non-shear-prandtl}: \begin{equation*} \tilde u(t, x, 0)=0, \quad \tilde v(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}, \end{equation*} the following is obvious: \begin{equation*} (\partial_t\partial^n_x\tilde u)(t, x, 0)=0, \quad (\partial_t\partial^n_x \tilde v)(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}, \, 0\le n \le m. \end{equation*} Thus the first result of \eqref{compatibility-a1} is exactly the compatibility of the solution with the initial data at $t=0$. For the second result of \eqref{compatibility-a1}, using the equation of \eqref{non-shear-prandtl}, we find that, for $0\le n\le m$, \begin{equation*} (\partial^2_{y}\partial^n_x\tilde{u})(t, x, 0)=0,\quad (\partial_t\partial^2_{y}\partial^n_x\tilde{u})(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}. \end{equation*} Differentiating the equation of \eqref{non-shear-prandtl} with respect to $y$, $$ \partial_t\partial_y\tilde{u} + \partial_y\big((u^s + \tilde{u}) \partial_x\tilde{u}\big) +\partial_y\big(\tilde {v} (u^s_y + \partial_y \tilde{u})\big) = \partial^3_{y}\tilde{u}, $$ and observing \begin{align*} \big[\partial_y\big((u^s + \tilde{u}) \partial_x\tilde{u}\big) +\partial_y\big(\tilde {v} (u^s_y + \partial_y \tilde{u})\big)\big]\big|_{y=0}=0, \end{align*} we get \begin{equation*} (\partial_t\partial_y\tilde{u})|_{y=0} = (\partial^3_{y}\tilde{u})|_{y=0} .
\end{equation*} Differentiating the equation of \eqref{non-shear-prandtl} once more with respect to $y$, $$ \partial_t\partial^2_y\tilde{u} + \partial^2_y\big((u^s + \tilde{u}) \partial_x\tilde{u}\big) +\partial^2_y\big(\tilde {v} (u^s_y + \partial_y \tilde{u})\big) = \partial^4_{y}\tilde{u}, $$ and using the Leibniz formula \begin{align*} &\partial^2_y\big((u^s + \tilde{u}) \partial_x\tilde{u}\big) +\partial^2_y\big(\tilde {v} (u^s_y + \partial_y \tilde{u})\big)\\ &=(\partial^2_y(u^s + \tilde{u})) \partial_x\tilde{u} +(\partial^2_y\tilde {v})(u^s_y + \partial_y \tilde{u})\\ &\quad+(u^s + \tilde{u}) \partial^2_y\partial_x\tilde{u} +\tilde {v} \partial^2_y(u^s_y + \partial_y \tilde{u})\\ &\qquad+2(\partial_y(u^s + \tilde{u})) \partial_y\partial_x\tilde{u} +2(\partial_y\tilde {v})\partial_y(u^s_y + \partial_y \tilde{u}), \end{align*} we obtain \begin{equation*} (\partial^4_y \tilde u)(t, x, 0)=\big(u^s_y(t, 0) + (\partial_y\tilde{u})(t, x, 0)\big) (\partial_y\partial_x\tilde{u})(t, x, 0), \end{equation*} and \begin{equation}\label{boundary-a15} \begin{split} (\partial_t\partial^4_y \tilde u)(t, x, 0)=&\big(\partial_y u^s(t, 0) + (\partial_y\tilde{u})(t, x, 0)\big)(\partial^3_y\partial_x\tilde{u})(t, x, 0) \\ & + \big(\partial^3_y u^s(t, 0) + (\partial^3_y\tilde{u})(t, x, 0)\big)(\partial_y\partial_x\tilde{u})(t, x, 0).
\end{split} \end{equation} For $p=2$, we have $$ \partial_t\partial^4_y\tilde{u}+ \partial^4_y\big((u^s + \tilde{u}) \partial_x\tilde{u}\big) +\partial^4_y\big(\tilde {v} (u^s_y + \partial_y \tilde{u})\big) = \partial^6_{y}\tilde{u}, $$ and using the Leibniz formula \begin{align*} &\partial^4_y\big((u^s + \tilde{u}) \partial_x\tilde{u}\big) +\partial^4_y\big(\tilde {v} (u^s_y + \partial_y \tilde{u})\big)\\ &=(\partial^4_y(u^s + \tilde{u})) \partial_x\tilde{u} +(\partial^4_y\tilde {v})(u^s_y + \partial_y \tilde{u}) +(u^s + \tilde{u}) \partial^4_y\partial_x\tilde{u} +\tilde {v} \partial^4_y(u^s_y + \partial_y \tilde{u})\\ &\qquad+\sum_{1\le j\le 3}C^4_j \big((\partial^j_y(u^s + \tilde{u})) \partial^{4-j}_y\partial_x\tilde{u} +(\partial^j_y\tilde {v})\partial^{4-j}_y(u^s_y + \partial_y \tilde{u})\big), \end{align*} we get, by \eqref{boundary-a15}, \begin{equation}\label{boundary-16-0} \begin{split} &(\partial^6_y \tilde u)(t, x, 0)= (\partial_t\partial^4_y \tilde u)(t, x, 0) -(\partial^3_y\partial_x\tilde{u})(u^s_y + \partial_y \tilde{u})(t, x, 0)\\ &\quad+\sum_{1\le j\le 3}C^4_j \big((\partial^j_y(u^s + \tilde{u})) \partial^{4-j}_y\partial_x\tilde{u} +(\partial^j_y\tilde {v})\partial^{4-j}_y(u^s_y + \partial_y \tilde{u})\big)(t, x, 0)\\ &= \big(\partial^3_y u^s(t, 0) + (\partial^3_y\tilde{u})(t, x, 0)\big)(\partial_y\partial_x\tilde{u})(t, x, 0)\\ &\quad+\sum_{1\le j\le 3}C^4_j \big((\partial^j_y(u^s + \tilde{u})) \partial^{4-j}_y\partial_x\tilde{u} -(\partial^{j-1}_y\partial_x\tilde {u})\partial^{4-j}_y(u^s_y + \partial_y \tilde{u})\big)(t, x, 0). \end{split} \end{equation} Taking the values at $t=0$, we obtain \eqref{compatibility-a2} for $p=2$. The case $p\ge 3$ then follows by induction.
\end{proof} \begin{remark} By similar methods, we can prove that if $\tilde u$ is a smooth solution of the system \eqref{non-shear-prandtl}, then we have \begin{equation*} \begin{cases} &\tilde{u}(t, x, 0)=0, \,\,(\partial^2_y \tilde{u})(t, x, 0)=0, \,\,\forall (t, x)\in [0, T]\times \mathbb{R},\\ &(\partial^4_y \tilde u)(t, x, 0)=\big(u^s_y(t, 0) + (\partial_y\tilde{u})(t, x, 0)\big) (\partial_y\partial_x\tilde{u})(t, x, 0),\forall (t, x)\in [0, T]\times \mathbb{R}, \end{cases} \end{equation*} and for $4\le 2p\le m$, \begin{equation}\label{boundary-data1-e} (\partial^{2(p+1)}_y \tilde{u})(t, x, 0)=\sum^p_{q=2}\sum_{(\alpha, \beta)\in \Lambda_q}C_{\alpha,\beta}\prod\limits_{j=1}^q \partial_x^{\alpha_j}\partial_y^{\beta_j +1} \Big( u^s(t, 0) + \tilde{u}(t, x, 0) \Big), \end{equation} for all $ (t, x)\in [0, T]\times \mathbb{R}$, where $\Lambda_q$ is defined in \eqref{Lambda-p}. See Lemma 5.9 of \cite{masmoudi2012local} and Lemma 4 of \cite{masmoudi2013gevrey} for similar results. \end{remark} Remark that the condition $0<\sum\limits_{j=1}^{q} \alpha_j$ implies that each term of \eqref{boundary-data1-e} contains at least one factor of the form $\partial_x^{\alpha_j}\partial_y^{\beta_j +1} \tilde{u}(t, x, 0)$. \section{The approximate solutions} \label{section3} To prove the existence of solutions to the Prandtl equation, we study a parabolic regularized equation, for which existence can be obtained by the classical energy method.
\noindent {\bf Nonlinear regularized Prandtl equation.} We study the following nonlinear regularized Prandtl equation, for $0<\epsilon\le 1$, \begin{equation}\label{shear-prandtl-approxiamte} \left\{\begin{array}{l} \partial_t\tilde{u}_\epsilon + (u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon +{v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon) = \partial^2_{y}\tilde{u}_\epsilon + \epsilon \partial^2_{x}\tilde{u}_\epsilon, \\ \partial_x\tilde{u}_\epsilon +\partial_y{v}_\epsilon =0, \\ \tilde{u}_\epsilon|_{y=0} = {v}_\epsilon|_{y=0} =0 , \ \lim\limits_{y\to+\infty} \tilde{u}_\epsilon = 0, \\ \tilde{u}_\epsilon|_{t=0}=\tilde{u}_{0, \epsilon} =\tilde{u}_0+\epsilon \mu_\epsilon \, , \end{array}\right. \end{equation} where we choose the corrector $\epsilon \mu_\epsilon $ such that $\tilde{u}_0 +\epsilon \mu_\epsilon $ satisfies the compatibility condition up to order $m+2$ for the regularized system \eqref{shear-prandtl-approxiamte}. We now study the boundary data of the solution for the regularized nonlinear system \eqref{shear-prandtl-approxiamte}, which also gives the precise version of the compatibility condition for the system \eqref{shear-prandtl-approxiamte}; see \cite{cannone-non1,cannone-non2} for the Prandtl equation with non-compatible data. \begin{proposition}\label{prop-comp-b} Let $m\ge 6$ be an even integer, $k>1$, $0< \ell<\frac12$ and $k+\ell>\frac 32$, and assume that $\tilde{u}_0$ satisfies the compatibility conditions \eqref{compatibility-a1} and \eqref{compatibility-a2} for the system \eqref{non-shear-prandtl}, and $\mu_\epsilon \in H^{m+3}_{k +\ell'-1}(\mathbb{R}^2_+)$ for some $\frac 12 <\ell'<\ell+\frac 12$ such that $\tilde{u}_0 +\epsilon \mu_\epsilon $ satisfies the compatibility conditions up to order $m+2$ for the regularized system \eqref{shear-prandtl-approxiamte}.
If $\tilde{u}_\epsilon \in L^\infty ([0, T]; H^{m+3}_{k +\ell}(\mathbb{R}^2_+))\cap Lip([0, T]; H^{m+1}_{k +\ell}(\mathbb{R}^2_+))$ is a solution of the system \eqref{shear-prandtl-approxiamte}, then we have \begin{equation*} \begin{cases} &\tilde{u}_\epsilon(t, x, 0)=0, \,\,(\partial^2_y \tilde{u}_\epsilon)(t, x, 0)=0, \,\,\forall (t, x)\in [0, T]\times \mathbb{R},\\ &(\partial^4_y \tilde u_\epsilon)(t, x, 0)=\big(u^s_y(t, 0) + (\partial_y\tilde{u}_\epsilon)(t, x, 0)\big) (\partial_y\partial_x\tilde{u}_\epsilon)(t, x, 0),\forall (t, x)\in [0, T]\times \mathbb{R}, \end{cases} \end{equation*} and for $4\le 2p\le m$, \begin{equation}\label{boundary-data1b} \begin{split} (\partial^{2(p+1)}_y \tilde{u}_\epsilon)(t, x, 0)=& \sum^p_{q=2}\sum^{q-1}_{l=0}\epsilon^l\sum_{(\alpha^l, \beta^l)\in \Lambda^l_q}C_{\alpha^l,\beta^l} \\& \qquad\times\, \prod\limits_{j=1}^q \partial_x^{\alpha^l_j}\partial_y^{\beta^l_j +1} \big( u^s(t, 0) + \tilde{u}_\epsilon(t, x, 0) \big), \end{split} \end{equation} for all $ (t, x)\in [0, T]\times \mathbb{R}$, where \begin{equation*} \begin{split} \Lambda^l_q=&\bigg\{(\alpha, \beta)=(\alpha_1, \cdots, \alpha_q; \beta_1, \cdots, \beta_q)\in \mathbb{N}^{q}\times \mathbb{N}^q;\\ &\qquad \alpha_j+\beta_j\le 2p-1,~~1\le j\le q; \,\,~\sum^q_{j=1}(3\alpha_j + \beta_j) = 2p +4l+1;\\ &\qquad\qquad\sum\limits_{j=1}^{q}\beta_j \le 2p -2l-2,~\,0<\sum\limits_{j=1}^{q} \alpha_j \le p +2l - 1\bigg\}. \end{split} \end{equation*} \end{proposition} \begin{remark}\label{remark3.2}~ \begin{itemize} \item[1.] Remark that the condition $0<\sum\limits_{j=1}^{q} \alpha^l_j$ implies that each term of \eqref{boundary-data1b} contains at least one factor of the form $\partial_x^{\alpha^l_j}\partial_y^{\beta^l_j +1} \tilde{u}_\epsilon(t, x, 0)$. \item[2.] Here we change the notation for the weight index of the function spaces; in fact, using the notation of Theorem \ref{main-theorem}, we have $$ \ell=\nu-\delta'+1,\quad \ell'=\nu+1.
$$ \end{itemize} \end{remark} \begin{proof} Firstly, for $ p\le \frac m2$, we have $\partial^{2p+2}_y\tilde{u}_\epsilon \in L^\infty ([0, T]; H^{1}_{k +\ell + 2p + 1}(\mathbb{R}^2_+))$. So the trace of $\partial^{2p+2}_y\tilde{u}_\epsilon$ exists on $y=0$. Using the boundary condition of \eqref{shear-prandtl-approxiamte}, we have, for $0\le n\le m+2$, \begin{equation*} \partial^n_x\tilde u_\epsilon(t, x, 0)=0, \quad \partial^n_xv_\epsilon(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}, \end{equation*} and for $0\le n\le m$, \begin{equation*} (\partial_t\partial^n_x\tilde u_\epsilon)(t, x, 0)=0, \quad (\partial_t\partial^n_x v_\epsilon)(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}. \end{equation*} From the equation of \eqref{shear-prandtl-approxiamte}, we also get \begin{equation}\label{boundary-12b} (\partial^2_{y}\partial^n_x\tilde{u}_\epsilon)(t, x, 0)=0,\quad (\partial_t\partial^2_{y}\partial^n_x\tilde{u}_\epsilon)(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}. \end{equation} On the other hand, $$ \partial_t\partial_y\tilde{u}_\epsilon + \partial_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big) = \partial^3_{y}\tilde{u}_\epsilon + \epsilon \partial^2_{x}\partial_y\tilde{u}_\epsilon, $$ and observing \begin{align*} \big[\partial_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big)\big]\big|_{y=0}=0, \end{align*} we get \begin{equation*} (\partial_t\partial_y\tilde{u}_\epsilon)|_{y=0} = (\partial^3_{y}\tilde{u}_\epsilon)|_{y=0} + \epsilon (\partial^2_{x}\partial_y\tilde{u}_\epsilon)|_{y=0}.
\end{equation*} We have also $$ \partial_t\partial^2_y\tilde{u}_\epsilon + \partial^2_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial^2_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big) = \partial^4_{y}\tilde{u}_\epsilon + \epsilon \partial^2_{x}\partial^2_y\tilde{u}_\epsilon, $$ using Leibniz formula \begin{align*} &\partial^2_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial^2_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big)\\ &=(\partial^2_y(u^s + \tilde{u}_\epsilon)) \partial_x\tilde{u}_\epsilon +(\partial^2_y{v}_\epsilon)(u^s_y + \partial_y \tilde{u}_\epsilon)\\ &\quad+(u^s + \tilde{u}_\epsilon) \partial^2_y\partial_x\tilde{u}_\epsilon +{v}_\epsilon \partial^2_y(u^s_y + \partial_y \tilde{u}_\epsilon)\\ &\qquad+2(\partial_y(u^s + \tilde{u}_\epsilon)) \partial_y\partial_x\tilde{u}_\epsilon +2(\partial_y{v}_\epsilon)\partial_y(u^s_y + \partial_y \tilde{u}_\epsilon), \end{align*} thus, \begin{equation}\label{boundary-14} (\partial^4_y \tilde u_\epsilon)(t, x, 0)=\left(u^s_y(t, 0) + (\partial_y\tilde{u}_\epsilon)(t, x, 0)\right) (\partial_y\partial_x\tilde{u}_\epsilon)(t, x, 0). \end{equation} Applying $\partial_t$ to \eqref{boundary-14}, we have \begin{equation*} \begin{split} &(\partial_t\partial^4_y \tilde u_\epsilon)(t, x, 0)=\left(\partial^3_y u^s(t, 0) + (\partial^3_y\tilde{u}_\epsilon)(t, x, 0)+\epsilon (\partial^2_x\partial_y \tilde u_\epsilon)(t, x, 0)\right)(\partial_y\partial_x\tilde{u}_\epsilon)(t, x, 0)\\ &+\left(u^s_y(t, 0) + (\partial_y\tilde{u}_\epsilon)(t, x, 0)\right) \left((\partial^3_y\partial_x\tilde{u}_\epsilon)(t, x, 0)+\epsilon (\partial^3_x\partial_y \tilde u_\epsilon)(t, x, 0)\right). 
\end{split} \end{equation*} On the other hand, we have $$ \partial_t\partial^4_y\tilde{u}_\epsilon + \partial^4_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial^4_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big) = \partial^6_{y}\tilde{u}_\epsilon + \epsilon \partial^2_{x}\partial^4_y\tilde{u}_\epsilon, $$ and using the Leibniz formula \begin{align*} &\partial^4_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial^4_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big)\\ &=(\partial^4_y(u^s + \tilde{u}_\epsilon)) \partial_x\tilde{u}_\epsilon +(\partial^4_y{v}_\epsilon)(u^s_y + \partial_y \tilde{u}_\epsilon)\\ &\quad+(u^s + \tilde{u}_\epsilon) \partial^4_y\partial_x\tilde{u}_\epsilon +{v}_\epsilon \partial^4_y(u^s_y + \partial_y \tilde{u}_\epsilon)\\ &\qquad+\sum_{1\le j\le 3}C^4_j \big((\partial^j_y(u^s + \tilde{u}_\epsilon)) \partial^{4-j}_y\partial_x\tilde{u}_\epsilon +(\partial^j_y{v}_\epsilon)\partial^{4-j}_y(u^s_y + \partial_y \tilde{u}_\epsilon)\big), \end{align*} we obtain \begin{equation*} \begin{split} &(\partial^6_y \tilde u_\epsilon)(t, x, 0)= (\partial_t\partial^4_y \tilde u_\epsilon)(t, x, 0) -(\partial^3_y\partial_x\tilde{u}_\epsilon)(u^s_y + \partial_y \tilde{u}_\epsilon)(t, x, 0)\\ &\quad+\sum_{1\le j\le 3}C^4_j \big[(\partial^j_y(u^s + \tilde{u}_\epsilon)) \partial^{4-j}_y\partial_x\tilde{u}_\epsilon +(\partial^j_y{v}_\epsilon)\partial^{4-j}_y(u^s_y + \partial_y \tilde{u}_\epsilon)\big](t, x, 0)\\ &\qquad\qquad -\underline{\epsilon \partial^2_{x}\partial^4_y\tilde{u}_\epsilon(t, x, 0)}.
\end{split} \end{equation*} Using \eqref{boundary-14}, we then get \begin{equation}\label{boundary-16} \begin{split} &(\partial^6_y \tilde u_\epsilon)(t, x, 0) = \big(\partial^3_y u^s(t, 0) + \partial^3_y\tilde{u}_\epsilon (t, x, 0)\big)\partial_y\partial_x\tilde{u}_\epsilon(t, x, 0)\\ &\hskip 5cm -\underline{2\epsilon \partial_x\partial_y\tilde{u}_\epsilon(t, x, 0) (\partial_y\partial_x^2\tilde{u}_\epsilon)(t, x, 0)}\\ &+\sum_{1\le j\le 3}C^4_j \big[(\partial^j_y(u^s + \tilde{u}_\epsilon)) \partial^{4-j}_y\partial_x\tilde{u}_\epsilon - \partial^{j - 1}_y\partial_x \tilde{u}_\epsilon\partial^{4-j}_y(u^s_y + \partial_y \tilde{u}_\epsilon)\big](t, x, 0). \end{split} \end{equation} Compared to \eqref{boundary-16-0}, the underlined term is the new term. This is precisely Proposition \ref{prop-comp-b} for $p=2$. We can complete the proof of Proposition \ref{prop-comp-b} by induction. \end{proof} The proof of the above Proposition also implies the following result. \begin{corollary}\label{coro-boundary} Let $m\ge 6$ be an even integer and assume that $\tilde{u}_0$ satisfies the compatibility conditions \eqref{compatibility-a1} - \eqref{compatibility-a2} for the system \eqref{non-shear-prandtl} and $\partial_y\tilde{u}_{0}\in H^{m+2}_{k+\ell'}(\mathbb{R}^2_+)$. Then there exists $\epsilon_0>0$, and for any $0<\epsilon\le \epsilon_0$ there exists $\mu_\epsilon \in H^{m+3}_{k +\ell'-1}(\mathbb{R}^2_+)$ such that $\tilde{u}_0 +\epsilon \mu_\epsilon $ satisfies the compatibility conditions up to order $m+2$ for the regularized system \eqref{shear-prandtl-approxiamte}. Moreover, for any $m\le \tilde m\le m+2$, $$ \|\partial_y\tilde{u}_{0, \epsilon}\|_{H^{\tilde m}_{k+\ell'}(\mathbb{R}^2_+)}\le \frac 32 \| \partial_y\tilde{u}_{0}\|_{H^{\tilde m}_{k+\ell'}(\mathbb{R}^2_+)}, $$ and $$ \lim_{\epsilon\to 0}\|\partial_y\tilde{u}_{0, \epsilon}-\partial_y\tilde{u}_{0}\|_{H^{\tilde m}_{k+\ell'}(\mathbb{R}^2_+)}=0.
$$ \end{corollary} \begin{proof} We follow the proof of Proposition \ref{prop-comp-b}. Taking the values at $t=0$ in \eqref{boundary-12b}, \eqref{compatibility-a1} implies that the function $\mu_\epsilon$ satisfies \begin{equation*} (\partial^n_x\mu_\epsilon )(x, 0)=0, \quad (\partial^2_y \partial^n_x\mu_\epsilon )(x, 0)=0,\quad x\in \mathbb{R}\,. \end{equation*} Taking $t=0$ in \eqref{boundary-14}, we have \begin{align*} (\partial^4_y \tilde u_0)(x, 0)+\epsilon(\partial^4_y \mu_\epsilon)(x, 0)=&\big[\partial_yu^s_0(0) + (\partial_y\tilde{u}_0)(x, 0)+\epsilon(\partial_y \mu_\epsilon)(x, 0)\big]\\ &\times\big[(\partial_y\partial_x\tilde{u}_0)(x, 0)+\epsilon(\partial_y \partial_x\mu_\epsilon)(x, 0)\big], \end{align*} and, using \eqref{compatibility-a1}, we see that $\mu_\epsilon$ satisfies \begin{equation*} \begin{split} (\partial^4_y \mu_\epsilon )(x, 0)=&\big(\partial_yu^s_0(0) + (\partial_y\tilde{u}_0)(x, 0)\big)(\partial_y \partial_x\mu_\epsilon )(x, 0)\\ &+(\partial_y \mu_\epsilon )(x, 0)(\partial_y\partial_x\tilde{u}_0)(x, 0)\\ &+\epsilon(\partial_y \mu_\epsilon )(x, 0)(\partial_y \partial_x\mu_\epsilon )(x, 0). \end{split} \end{equation*} We also have \begin{equation*} \begin{split} (\partial_t\partial^4_y \tilde u_\epsilon)(0, x, 0)=&\big(\partial^3_y u^s_0(0) + (\partial^3_y\tilde{u}_\epsilon)(0, x, 0)+\epsilon (\partial^2_x\partial_y \tilde u_\epsilon)(0, x, 0)\big)\\ &\times \big((\partial^3_y\partial_x\tilde{u}_\epsilon)(0, x, 0)+\epsilon (\partial^3_x\partial_y \tilde u_\epsilon)(0, x, 0)\big).
\end{split} \end{equation*} Taking the values at $t=0$ in \eqref{boundary-16}, we obtain a constraint condition for $(\partial^6_y \mu_\epsilon)(x, 0)$, \begin{align*} \partial_y^6 \mu_\epsilon (x, 0) & =( (\partial_y^3 u^s_0 + \partial_y^3 \tilde{u}_0 ) \partial_y \partial_x \mu_\epsilon)|_{y=0} + \partial_y^3 \mu_\epsilon \partial_y \partial_x \tilde{u}_0|_{y=0} + \epsilon \partial_y^3 \mu_\epsilon \partial_y \partial_x \mu_\epsilon|_{y=0}\\ & - \underline{2 \partial_x\partial_y\tilde{u}_0(x, 0) (\partial_y\partial_x^2\tilde{u}_0)( x, 0)}- 2 \epsilon \partial_x\partial_y\tilde{u}_0( x, 0) (\partial_y\partial_x^2\mu_\epsilon)(x, 0) \\ & - 2 \epsilon \partial_x\partial_y\mu_\epsilon(x, 0) (\partial_y\partial_x^2\tilde{u}_0)(x, 0) - 2 \epsilon^2 \partial_x\partial_y\mu_\epsilon(x, 0) (\partial_y\partial_x^2\mu_\epsilon)(x, 0) \\ & + \sum\limits_{1 \le j \le 3}C_j^4 \big[ \partial_y^j\big( u^s_0 + \tilde{u}_0 \big)\partial_y^{4 - j}\partial_x \mu_\epsilon + \partial_y^j \mu_\epsilon \partial_y^{4 - j} \partial_x \tilde{u}_0 + \epsilon\partial_y^j \mu_\epsilon \partial_y^{4 - j} \partial_x \mu_\epsilon \big]\big|_{y=0}\\ & - \sum\limits_{ 1 \le j \le 3 }C_j^4 \big[ \partial_y^{j-1} \partial_x \tilde{u}_0 \partial_y^{4 - j} \partial_y \mu_\epsilon + \epsilon \partial_y^{j - 1} \partial_x \mu_\epsilon \partial_y^{4 - j} \partial_y \mu_\epsilon \big]\big|_{y = 0}\\ & - \sum\limits_{ 1 \le j \le 3 }C_j^4 \partial_y^{j - 1} \partial_x \mu_\epsilon \partial_y^{4 - j}( \partial_y u^s_0 + \partial_y \tilde{u}_0 ) \big|_{y = 0}, \end{align*} thus \begin{equation}\label{mu-6} \begin{split} \partial_y^6 \mu_\epsilon (x, 0) & = - ~ \underline{2 \partial_x\partial_y\tilde{u}_0(x, 0) (\partial_y\partial_x^2\tilde{u}_0)( x, 0)}\\ & + \sum\limits_{\alpha_1, \beta_1; \alpha_2, \beta_2}C_{\alpha_1, \beta_1; \alpha_2, \beta_2} \partial_x^{\alpha_1}\partial_y^{\beta_1 + 1} ( u^s_0 + \tilde{u}_0) \partial_x^{\alpha_2}\partial_y^{\beta_2 + 1}\mu_\epsilon(x, 0)\\ & + \sum\limits_{\alpha_1, \beta_1;
\alpha_2, \beta_2}C_{\alpha_1, \beta_1; \alpha_2, \beta_2} \partial_x^{\alpha_1}\partial_y^{\beta_1 + 1} \mu_\epsilon \partial_x^{\alpha_2}\partial_y^{\beta_2 + 1}\mu_\epsilon(x, 0), \end{split} \end{equation} where the summation is over the indices $\alpha_2+\beta_2\le 3$, $\alpha_1+\beta_1+\alpha_2+\beta_2\le 3$. The underlined term in the above equality is deduced from the underlined term in \eqref{boundary-16}. All these underlined terms come from the added regularizing term $\epsilon\partial_x^2 \tilde{u}$ in the equation \eqref{shear-prandtl-approxiamte}. This means that the regularizing term $\epsilon \partial_x^2 \tilde{u}$ has an effect on the boundary. This is why we add a corrector term. More generally, for $6\le 2p\le m$, we have that $(\partial^{2(p+1)}_y \mu_\epsilon)(x, 0) $ is a linear combination of terms of the form $$ \prod\limits_{j=1}^{q_1}\left( \partial_x^{\alpha^1_j}\partial_y^{\beta^1_j +1} \big( u^s_0 + \tilde{u}_0 \big)\right)\bigg|_{y=0},\,\quad \prod\limits_{i=1}^{q_2}\left( \partial_x^{\alpha^2_i}\partial_y^{\beta^2_i+1} \mu_\epsilon\right)\bigg|_{y=0}\,, $$ and $$ \prod\limits_{j=1}^{q_1}\left( \partial_x^{\alpha^1_j}\partial_y^{\beta^1_j +1} \big( u^s_0 + \tilde{u}_0 \big)\right)\bigg|_{y=0}\,\times \, \prod\limits_{i=1}^{q_2}\left( \partial_x^{\alpha^2_i}\partial_y^{\beta^2_i+1} \mu_\epsilon\right)\bigg|_{y=0}\,, $$ where the coefficients of the combination may depend on $\epsilon$, but only through non-negative powers of $\epsilon$. We also have $\alpha^l_j+\beta^l_j+1\le 2p$, $l=1, 2$; thus $(\partial^{2(p+1)}_y \mu_\epsilon)(x, 0)$ is determined by the lower-order derivatives of $\mu_\epsilon$ and those of $\tilde u_0$.
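To make the recursion concrete, the leading coefficient $(\partial^6_y \mu_\epsilon)(x, 0)=-2\,(\partial_x\partial_y\tilde{u}_0)(x, 0)\,(\partial_y\partial^2_x\tilde{u}_0)(x, 0)$ from \eqref{mu-6} can be evaluated symbolically; the profile $\tilde u_0$ below is a hypothetical choice made only for illustration, not one taken from this paper:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Hypothetical perturbation profile, decaying in y (illustration only).
u0 = sp.sin(x) * y * sp.exp(-y)

# Leading Taylor coefficient of the corrector, from (mu-6):
#   mu6(x) = -2 (d_x d_y u0)(x, 0) * (d_y d_x^2 u0)(x, 0)
dxdy_u0 = sp.diff(u0, x, 1, y, 1).subs(y, 0)
d2xdy_u0 = sp.diff(u0, x, 2, y, 1).subs(y, 0)
mu6 = sp.simplify(-2 * dxdy_u0 * d2xdy_u0)

# For this profile mu6 = 2 sin(x) cos(x) = sin(2x), a nonzero coefficient.
assert sp.simplify(mu6 - sp.sin(2 * x)) == 0
```

That $\mu^6_\epsilon$ does not vanish for such a generic profile matches the observation that the corrector is in general non-trivial.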
We now construct a polynomial function $\tilde \mu_\epsilon$ in $y$ via the following Taylor expansion, $$ \tilde \mu_\epsilon(x, y)=\sum^{\frac m2+1}_{p=3} \tilde \mu^{2p}_\epsilon(x)\frac{ y^{2p}}{(2p)!}\,, $$ where $$ \tilde \mu^{6}_\epsilon(x)=-2 (\partial_x\partial_y\tilde{u}_0)(x, 0)(\partial_y\partial^2_x\tilde{u}_0)(x, 0), $$ and $\tilde \mu^{2p}_\epsilon(x)$ is given successively by $(\partial^{2q}_y \mu_\epsilon)(x, 0)$, with $(\partial^{2q+1}_y \mu_\epsilon)(x, 0)=0$, $q=0, \cdots, m$; it is then determined by $(\partial^\alpha_x\partial^\beta_y\tilde u_0)|_{y=0}$. Finally we take $\mu_\epsilon= \chi(y)\tilde \mu_\epsilon$ with $\chi\in C^\infty([0, +\infty[)$, $\chi(y)=1$ for $0\le y\le 1$, and $\chi(y)=0$ for $y\ge 2$. This completes the proof of the Corollary. \end{proof} \begin{remark}\label{remark-corrector} Suppose that $\tilde u_0$ satisfies the compatibility conditions up to order $m+2$ for the system \eqref{non-shear-prandtl} with $m\ge 4$. Then, for the regularized system \eqref{shear-prandtl-approxiamte}, if we want to obtain a smooth solution $\tilde w_\epsilon$, we have to add a non-trivial corrector $\mu_\epsilon$ to the initial data so that $\tilde u_0+\epsilon\mu_\epsilon$ satisfies the compatibility conditions up to order $m+2$ for the system \eqref{shear-prandtl-approxiamte}. In fact, if we take $\mu_\epsilon$ with $$ (\partial^{j}_y \mu_\epsilon)(x, 0)=0,\quad 0\le j\le 5, $$ then \eqref{mu-6} implies $$ (\partial^{6}_y \mu_\epsilon)(x, 0)=-2 (\partial_x\partial_y\tilde{u}_0)(x, 0)(\partial_y\partial^2_x\tilde{u}_0)(x, 0), $$ which is not equal to $0$. So adding a corrector to the initial data of the regularized system is necessary.
\end{remark} We will prove the existence of approximate solutions of the system \eqref{shear-prandtl-approxiamte} by using the equation for the vorticity $ \tilde{w}_\epsilon=\partial_y\tilde{u}_\epsilon $, which reads \begin{equation} \label{shear-prandtl-approxiamte-vorticity} \begin{cases} & \partial_t\tilde{w}_\epsilon + (u^s + \tilde{u}_\epsilon) \partial_x\tilde{w}_\epsilon +{v}_\epsilon (u^s_{yy} + \partial_y\tilde{w}_{\epsilon}) = \partial^2_{y}\tilde{w}_\epsilon + \epsilon \partial^2_{x}\tilde{w}_\epsilon, \\ & \partial_y\tilde{w}_{\epsilon}|_{y=0}=0,\\ & \tilde{w}_{\epsilon}|_{t=0}=\tilde{w}_{0, \epsilon}=\tilde{w}_0+\epsilon \partial_y\mu_{\epsilon}, \end{cases} \end{equation} where \begin{equation}\label{u-v-w} \tilde{u}_\epsilon(t, x, y)=-\int^{+\infty}_y \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y,\quad \tilde{v}_\epsilon(t, x, y)=-\int^{y}_0\partial_x \tilde{u}_\epsilon(t, x, \tilde y) d\tilde y. \end{equation} We have the following theorem on the existence of approximate solutions. \begin{theorem}\label{theorem3.1} Let $m\ge 6$ be an even integer, $k>1$, $0\le \ell<\frac12$, $k+\ell>\frac32$, let $\partial_y \tilde{u}_{0}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$, and assume that $\tilde{u}_0$ satisfies the compatibility conditions of order $m+2$ for the system \eqref{non-shear-prandtl}. Suppose that the shear flow satisfies $$ |\partial^{p+1}_y u^s(t, y)|\le C\langle y \rangle^{-k-p}, \quad (t, y)\in [0, T_1]\times \mathbb{R}_+,\,\, 0\le p\le m+2.
$$ Then, for any $0<\epsilon\le \epsilon_0$ and $0<\bar\zeta$, there exists $T_\epsilon>0$, depending on $\epsilon$ and $\bar\zeta$, such that if $$ \|\tilde{w}_{0}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \bar\zeta, $$ then the system \eqref{shear-prandtl-approxiamte-vorticity}-\eqref{u-v-w} admits a unique solution $$ \tilde{w}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+)), $$ which satisfies \begin{equation}\label{2-estimate} \|\tilde{w}_\epsilon\|_{L^\infty([0, T_\epsilon];H^m_{k+\ell}(\mathbb{R}^2_+))}\le \frac 43 \|\tilde{w}_{0, \epsilon}\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\le 2 \|\tilde{w}_0\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}. \end{equation} \end{theorem} \begin{remark} \begin{itemize} \item[(1)] Remark that $T_\epsilon$ depends on $\epsilon$ and $\bar\zeta$, and $T_\epsilon\to 0$ as $\epsilon \to 0$. So this does not give a uniform bound for the approximate solution sequence $\{u^s+\tilde{u}_\epsilon;\, 0<\epsilon\le \epsilon_0\}$, where $\epsilon_0>0$ is given in Corollary \ref{coro-boundary}. When the initial data $\tilde u_{0}$ is small enough, we observe that $u^s+\tilde{u}_\epsilon$ preserves the monotonicity and convexity of the shear flow on $[0, T_\epsilon]$. \item [(2)] In this theorem, for the regularized Prandtl equation, there are no constraint conditions on the initial data, meaning that we need neither the monotonicity nor the convexity of the shear flow $u^s$, and $\bar\zeta$ is arbitrary. \end{itemize} \end{remark} If $ \tilde{w}_\epsilon$ is a solution of the system \eqref{shear-prandtl-approxiamte-vorticity}-\eqref{u-v-w}, then \eqref{Hardy1} together with $\lim_{y\to +\infty} \tilde u_\epsilon=0$ implies $$ \tilde{u}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell-1}(\mathbb{R}^2_+)), $$ and $$ \tilde v_\epsilon\in L^\infty([0, T_\epsilon]; L^\infty(\mathbb{R}_{y, +}; H^{m+1}(\mathbb{R}_x))).
$$ Integrating the equation of \eqref{shear-prandtl-approxiamte-vorticity} over $[y, +\infty[$ implies that $(\tilde u_\epsilon, \tilde v_\epsilon)$ is a solution of the system \eqref{shear-prandtl-approxiamte}, except for the boundary condition, which remains to be checked: \begin{equation}\label{u-w-0} \tilde{u}_\epsilon(t, x, 0)=-\int^{+\infty}_0 \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y=0,\quad (t, x)\in [0, T_\epsilon]\times \mathbb{R}. \end{equation} In fact, setting $f(t, x)=-\int^{+\infty}_0 \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y =\tilde{u}_\epsilon(t, x, 0)$, a direct calculation gives \begin{equation} \label{3.00} \begin{cases} & \partial_t f+f \partial_x f = \epsilon \partial^2_{x}f, \quad (t, x)\in ]0, T_\epsilon]\times \mathbb{R};\\ & f|_{t=0}=0, \end{cases} \end{equation} where we use \begin{align*} \int^{+\infty}_0 {v}_\epsilon (u^s_{yy} + \partial_y\tilde{w}_{\epsilon}) dy&= \big[{v}_\epsilon (u^s_{y} + \tilde{w}_{\epsilon})\big]^{+\infty}_0 -\int^\infty_0 (\partial_y{v}_\epsilon) (u^s_{y} + \tilde{w}_{\epsilon}) dy\\ &=\int^\infty_0 (\partial_x{u}_\epsilon) \partial_y(u^s + \tilde{u}_{\epsilon}) dy\\ &= \big[(\partial_x{u}_\epsilon) (u^s + \tilde{u}_{\epsilon})\big]\big|^{+\infty}_0 -\int^\infty_0 (\partial_x{w}_\epsilon) (u^s+ \tilde{u}_{\epsilon}) dy\\ &=- f \partial_x f -\int^\infty_0 (\partial_x{w}_\epsilon) (u^s+ \tilde{u}_{\epsilon}) dy. \end{align*} Since $f\in L^\infty([0, T_\epsilon], H^{m+2}(\mathbb{R}))$, the uniqueness of the solution of equation \eqref{3.00} implies that $f=0$ on $[0, T_\epsilon]\times \mathbb{R}$. Then \eqref{u-w-0} also implies \begin{equation*} \tilde{u}_\epsilon(t, x, y)=-\int^{+\infty}_y \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y= \int^{y}_0 \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y,\quad (t, x, y)\in [0, T_\epsilon]\times \mathbb{R}^2_+.
\end{equation*} We will prove Theorem \ref{theorem3.1} via the following three Propositions, the first of which is devoted to the local existence of the approximate solution $\tilde{w}_\epsilon$ of \eqref{shear-prandtl-approxiamte-vorticity}. \begin{proposition}\label{prop3.0} Let $m\ge 6$ be an even integer, $k>1$, $0\le \ell<\frac12$, $k+\ell> \frac 32$, and let $\tilde{w}_{0, \epsilon}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$ satisfy the compatibility conditions up to order $m+2$ for \eqref{shear-prandtl-approxiamte-vorticity}. Suppose that the shear flow satisfies $$ |\partial^{p+1}_y u^s(t, y)|\le C\langle y \rangle^{-k-p}, \quad (t, y)\in [0, T_1]\times \mathbb{R}_+,\,\, 0\le p\le m+2. $$ Then, for any $0<\epsilon\le 1$ and $\bar\zeta>0$, there exists $T_\epsilon>0$ such that if $$ \|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \bar\zeta, $$ then the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $$ \tilde{w}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))\, . $$ \end{proposition} \begin{remark}\label{remark3.5} If $\tilde{w}_{0}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$ is the initial data in Theorem \ref{theorem3.1}, then, using Corollary \ref{coro-boundary}, there exists $\epsilon_0>0$, and for any $0<\epsilon\le \epsilon_0$, there exists $\mu_\epsilon \in H^{m+3}_{k+\ell}(\mathbb{R}^2_+)$ such that $\tilde{w}_{0,\epsilon}= \tilde{w}_{0}+\epsilon \partial_y \mu_\epsilon $ satisfies the compatibility conditions up to order $m+2$ for the system \eqref{shear-prandtl-approxiamte-vorticity}, and $$ \|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \frac 32 \|\tilde{w}_{0}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}. $$ Then, using Proposition \ref{prop3.0}, we also obtain the existence of the approximate solution under the assumptions of Theorem \ref{theorem3.1}.
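On the numerical side, the smallness of the correction $\epsilon\,\partial_y\mu_\epsilon$ in the weighted norm is easy to check on a grid with a discrete version of $\|\cdot\|_{L^2_{k+\ell}}$; the profiles for $\tilde w_0$ and $\partial_y\mu_\epsilon$ below are hypothetical, and the quadrature is only a crude sketch of the weighted norm in the $y$ variable:

```python
import numpy as np

def weighted_l2(f, y, sigma):
    """Crude Riemann-sum approximation of || <y>^sigma f ||_{L^2} in y."""
    w = (1.0 + y**2) ** (sigma / 2.0)   # <y> = (1 + |y|^2)^{1/2}
    dy = y[1] - y[0]
    return np.sqrt(np.sum((w * f) ** 2) * dy)

k_ell = 2.0                              # plays the role of k + ell (assumed value)
y = np.linspace(0.0, 40.0, 4001)

w0 = y * np.exp(-y)                      # hypothetical initial vorticity profile
dmu = y**5 * np.exp(-y)                  # hypothetical d_y mu_eps, flat at y = 0
dmu /= weighted_l2(dmu, y, k_ell)        # normalize so the correction has size eps

n0 = weighted_l2(w0, y, k_ell)
for eps in (1e-1, 1e-2, 1e-3):
    n_eps = weighted_l2(w0 + eps * dmu, y, k_ell)
    assert n_eps <= 1.5 * n0             # the 3/2 bound holds for small eps
```

By the triangle inequality, $\|w_0+\epsilon\,\partial_y\mu_\epsilon\|\le \|w_0\|+\epsilon$ after the normalization above, so the $\frac32$ bound holds as soon as $\epsilon\le \frac12\|w_0\|$, which is what the loop confirms.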
\end{remark} The proof of this Proposition is standard, since the equation in \eqref{shear-prandtl-approxiamte-vorticity} is of parabolic type. Firstly, we establish the {\it a priori} estimate, and then we prove the existence of a solution by standard iteration and weak convergence methods. Because we work in weighted Sobolev spaces and the computations are not trivial, we give a detailed proof in Appendix \ref{section-a3} to make the paper self-contained. So the rest of this section is devoted to proving the estimate \eqref{2-estimate}. \noindent {\bf Uniform estimate with loss of $x$-derivative } In the proof of Proposition \ref{prop3.0} (see Lemma \ref{lemmab.2}), we already obtained the {\it a priori} estimate for $\tilde{w}_\epsilon$. Now we prove the estimate \eqref{2-estimate} in a new way; our goal is to establish a uniform estimate with respect to $\epsilon>0$. We first treat the easy part in this subsection. We define the non-isotropic Sobolev norm \begin{equation}\label{norm-1} \|f\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}= \sum_{|\alpha_1+\alpha_2|\le m, \alpha_1\le m-1}\|\langle y\rangle^{k+\ell+\alpha_2}\,\partial^{\alpha_1}_x \partial^{\alpha_2}_y f\|_{L^{2}(\mathbb{R}^2_+)}^2, \end{equation} in which the $m$-th order derivative with respect to the $x$ variable does not appear. Then $$ \|f\|^2_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}= \|f\|^2_{H^{m,m-1}_{k+\ell}(\mathbb{R}^2_+)}+\|\partial^m_x f\|^2_{L^{2}_{k+\ell}(\mathbb{R}^2_+)}.
$$ \begin{proposition}\label{prop3.1} Let $m\ge 6$ be an even integer, $k>1$, $0< \ell<\frac12$, $k+\ell> \frac 32$, and assume that $\tilde{w}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$ is a solution to \eqref{shear-prandtl-approxiamte-vorticity}; then we have \begin{equation} \label{approx-less-k} \begin{split} &\frac{d}{dt}\|\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+ \|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}\\ &\qquad+ \epsilon\|\partial_x\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)} \le C_1\bigg( \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 + \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^m \bigg), \end{split} \end{equation} where $C_1>0$ is independent of $\epsilon$. \end{proposition} \noindent {\bf Remark.} The above estimate is uniform with respect to $\epsilon>0$, but the term $\|\partial^{m}_x\tilde{w}_\epsilon\|_{L^{2}_{k+\ell}}^2$ is missing on the left-hand side of \eqref{approx-less-k}. This is because we cannot control the term $$ \partial^{m}_x\tilde{v}_\epsilon(t, x, y)=- \int^y_0\partial^{m+1}_x\tilde{u}_\epsilon(t, x, \tilde{y}) d\tilde{y}, $$ which is the major difficulty in the study of the Prandtl equation. We will first study this term in the next Proposition with a non-uniform estimate, and then focus on proving the uniform estimate in the rest of this paper. \begin{proof} For $|\alpha|=\alpha_1+\alpha_2\le m, \alpha_1\le m-1$, we have \begin{equation}\label{non-approx-est-less-s} \begin{split} &\partial_t \partial^{\alpha} \tilde{w}_\epsilon - \epsilon \partial^2_x\partial^{\alpha} \tilde{w}_\epsilon - \partial_y^2 \partial^{\alpha}\tilde{w}_\epsilon \\ &= - \partial^{\alpha} \big((u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon \big) - \partial^{\alpha} \big( \tilde{v}_\epsilon ( u^s_{yy}+ \partial_y\tilde{w}_{\epsilon} ) \big).
\end{split} \end{equation} Multiplying \eqref{non-approx-est-less-s} by $ \langle y \rangle^{2(k+\ell+{\alpha_2})} \partial^{\alpha} \tilde{w}_\epsilon $ and integrating over $\mathbb{R}^2_+$, we obtain \begin{equation*} \begin{split} &\int_{\mathbb{R}^2_+} (\partial_t \partial^{\alpha} \tilde{w}_\epsilon) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy - \epsilon \int_{\mathbb{R}^2_+} (\partial^2_x\partial^{\alpha} \tilde{w}_\epsilon) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy \\ &\qquad\qquad- \int_{\mathbb{R}^2_+} (\partial^2_y\partial^{\alpha} \tilde{w}_\epsilon) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy \\ &= - \int_{\mathbb{R}^2_+} \partial^{\alpha} \big((u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon + \tilde{v}_\epsilon ( u^s_{yy}+ \partial_y\tilde{w}_{\epsilon} )\big) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy . \end{split} \end{equation*} Remark that for $\tilde{w}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$, all the above integrals are well defined in the classical sense. We deal with each term on the left-hand side in turn.
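The weighted integration by parts used for the $\partial_y^2$ term reduces, in one dimension, to the identity $-\int_0^{+\infty} (\partial_y^2 w)\,\varphi\, w\,dy = \int_0^{+\infty} (\partial_y w)^2\varphi\,dy + \int_0^{+\infty} (\partial_y w)\,\varphi'\, w\,dy + (\partial_y w)(0)\,\varphi(0)\,w(0)$ for decaying $w$ and weight $\varphi=\langle y\rangle^{2(k+\ell+\alpha_2)}$. A symbolic check, with an assumed test function and an assumed integer weight exponent:

```python
import sympy as sp

y = sp.symbols('y', nonnegative=True)
sigma = 2                                 # plays the role of k + ell + alpha_2 (assumed integer)

w = sp.exp(-y)                            # assumed decaying test function
phi = (1 + y**2) ** sigma                 # weight <y>^{2 sigma}

lhs = sp.integrate(-sp.diff(w, y, 2) * phi * w, (y, 0, sp.oo))
rhs = (sp.integrate(sp.diff(w, y) ** 2 * phi, (y, 0, sp.oo))
       + sp.integrate(sp.diff(w, y) * sp.diff(phi, y) * w, (y, 0, sp.oo))
       + (sp.diff(w, y) * phi * w).subs(y, 0))   # boundary contribution at y = 0

# Both sides equal -7/4 for this choice of w and sigma.
assert sp.simplify(lhs - rhs) == 0
```

Note the boundary contribution is nonzero here, which is exactly why the trace terms $\int_{\mathbb{R}}(\partial^\alpha\partial_y\tilde w_\epsilon\,\partial^\alpha\tilde w_\epsilon)|_{y=0}\,dx$ have to be analyzed case by case below.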
After integration by parts, we have \begin{align*} & \int_{\mathbb{R}^2_+} (\partial_t \partial^{\alpha} \tilde{w}_\epsilon) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy =\frac 12 \frac{d}{ dt}\| \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}^2,\\ & -\epsilon\int_{\mathbb{R}^2_+} ( \partial_x^{2} \partial^{\alpha} \tilde{w}_\epsilon )\langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy =\epsilon\|\partial_x \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}^2, \end{align*} and \begin{align*} &\quad -\int_{\mathbb{R}^2_+} \partial_y^{2} \partial^{\alpha} \tilde{w}_\epsilon \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy \\ &=\|\partial_y \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}^2+\int_{\mathbb{R}^2_+} \partial^{\alpha}\partial_y\tilde{w}_\epsilon (\langle y \rangle^{2(k+\ell)+2{\alpha_2}} )'\partial^{\alpha} \tilde{w}_\epsilon dx dy\\ &\qquad\qquad+\int_{\mathbb{R}} \left(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon\right)\big|_{y=0} dx . \end{align*} The Cauchy-Schwarz inequality implies \begin{align*} &\left|\int_{\mathbb{R}^2_+} \partial^{\alpha}\partial_y\tilde{w}_\epsilon (\langle y \rangle^{2(k+\ell)+2{\alpha_2}} )'\partial^{\alpha} \tilde{w}_\epsilon dx dy\right|\\ &\qquad\le\frac{1}{16} \|\partial_y \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}^2 +C\| \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2-1}(\mathbb{R}^2_+)}^2. \end{align*} We now study the term $$ \int_{\mathbb{R}} \left(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon\right)\big|_{y=0} dx.
$$ {\bf Case : $|\alpha|\le m-1$}, using the trace Lemma \ref{lemma-trace}, we have \begin{align*} \left|\int_{\mathbb{R}} \left(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon\right)\big|_{y=0} dx\right| &\le\|(\partial^{\alpha}\partial_y\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})} \|(\partial^{\alpha}\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}\\ &\le C\|\partial^{\alpha}\partial^2_y\tilde{w}_\epsilon\|_{L^2_{k+\ell}(\mathbb{R}^2_+)} \|\partial^{\alpha}\partial_y\tilde{w}_\epsilon\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}\\ &\le C\|\partial_y\tilde{w}_\epsilon\|_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}\|\tilde{w}_\epsilon\|_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}\\ &\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|\tilde{w}_\epsilon\|^2_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. \end{align*} {\bf Case : $\alpha_1=m-1, \alpha_2=1$}, using \eqref{boundary-12b}, we have $$ (\partial^{\alpha}\tilde{w}_\epsilon)|_{y=0}= (\partial_x^{\alpha_1}\partial_y^{2} \tilde{u}_\epsilon)|_{y=0} = 0, $$ thus $$ \int_{\mathbb{R}} \left(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon\right)|_{y=0} dx=0. $$ {\bf Case : $\alpha_1=0, \alpha_2=m$}. Only in this case, we need to suppose that $m$ is even. Using again the trace Lemma \ref{lemma-trace}, we have \begin{align*} \left|\int_{\mathbb{R}} \left(\partial^{m+1}_y\tilde{w}_\epsilon \partial^{m}_y \tilde{w}_\epsilon\right)|_{y=0} dx\right| &\le\|(\partial^{m+2}_y\tilde{u}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})} \|(\partial^{m}_y\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}\\ &\le C\|(\partial^{m+2}_y\tilde{u}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})} \|\partial^{m+1}_y\tilde{w}_\epsilon\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}\\ &\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|(\partial^{m+2}_y\tilde{u}_\epsilon) |_{y=0}\|^2_{L^2(\mathbb{R})}. 
\end{align*} Using Proposition \ref{prop-comp-b} and the trace Lemma \ref{lemma-trace}, we can estimate the last term above, $\|(\partial^{m+2}_y\tilde{u}_\epsilon) |_{y=0}\|^2_{L^2(\mathbb{R})}$, by a finite sum of terms of the form $$ \|\prod^{p}_{j=1} (\partial^{\alpha_j}_x\partial^{\beta_j+1}_y( u^s + \tilde u_\epsilon))|_{y=0}\|^2_{L^2(\mathbb{R})}\le C\|\partial_y\prod^{p}_{j=1} (\partial^{\alpha_j}_x\partial^{\beta_j+1}_y( u^s + \tilde u_\epsilon)) \|^2_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)} $$ with $2\le p\le \frac m2$, $\alpha_j + \beta_j \le m -1$ and $\{j; \alpha_j>0\}\not=\emptyset$. Then, using the Sobolev inequality and $m\ge 6$, we get $$ \|(\partial^{m+2}_y\tilde{u}_\epsilon) |_{y=0}\|_{L^2(\mathbb{R})}\le C \|\tilde{w}_\epsilon\|^{m/2}_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. $$ {\bf Case : $1\le \alpha_1\le m-2, \alpha_1+\alpha_2=m, \alpha_2$ even}, using the same argument as in the preceding case, we have \begin{align*} &\left|\int_{\mathbb{R}}(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon)|_{y=0}dx \right|=\left|\int_{\mathbb{R}} (\partial^{\alpha_1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon \partial^{\alpha_1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}dx \right|\\ &\qquad\le \|(\partial^{\alpha_1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})} \|(\partial^{\alpha_1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}\\ &\qquad\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|(\partial^{\alpha_1}_x \partial^{\alpha_2+2}_y\tilde{u}_\epsilon)|_{y=0}\|^2_{L^2(\mathbb{R})}\\ &\qquad\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|\tilde{w}_\epsilon\|^{\alpha_2}_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}.
\end{align*} {\bf Case : $1\le \alpha_1\le m-2, \alpha_1+\alpha_2=m, \alpha_2$ odd}, integration by parts with respect to the $x$ variable implies \begin{align*} &\left|\int_{\mathbb{R}} (\partial^{\alpha_1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon \partial^{\alpha_1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}dx \right|=\left|\int_{\mathbb{R}} (\partial^{\alpha_1-1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon \partial^{\alpha_1+1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}dx \right|\\ &\qquad\le \|(\partial^{\alpha_1-1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})} \|(\partial^{\alpha_1+1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}\\ &\qquad\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|(\partial^{\alpha_1+1}_x \partial^{\alpha_2+1}_y\tilde{u}_\epsilon)|_{y=0}\|^2_{L^2(\mathbb{R})}\\ &\qquad\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|\tilde{w}_\epsilon\|^{\alpha_2-1}_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. \end{align*} Finally, we have proven \begin{align*} \begin{split} & \int_{\mathbb{R}^2_+} \big(\partial_t \partial^{\alpha} \tilde{w}_\epsilon - \partial_y^2\partial^{\alpha} \tilde{w}_\epsilon - \epsilon \partial^2_x\partial^{\alpha} \tilde{w}_\epsilon\big) \langle y \rangle^{2 (k+\ell+\alpha_2)} \partial^{\alpha} \tilde{w}_\epsilon dx dy\\ & \ge\frac12 \frac{d}{ dt}\|\partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}}^2+ \epsilon\|\partial_x\partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}}^2 +\|\partial_y \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}}^2\\ &\qquad-\frac{1}{4} \|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}-C\|\tilde{w}_\epsilon\|^{m}_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. \end{split} \end{align*} We now estimate the right-hand side of \eqref{non-approx-est-less-s}.
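The right-hand side is handled through commutator expansions, which rest on the Leibniz formula $\partial^\alpha(a g)=a\,\partial^\alpha g+\sum_{\beta\le\alpha,\,|\beta|\ge 1}C^\beta_\alpha\,\partial^\beta a\,\partial^{\alpha-\beta}g$. A quick symbolic check for the mixed derivative $\partial^\alpha=\partial_x\partial_y$, with hypothetical smooth functions standing in for $u^s+\tilde u_\epsilon$ and $\partial_x\tilde w_\epsilon$:

```python
import sympy as sp

x, y = sp.symbols('x y')
a = sp.exp(x) * sp.cos(y)        # plays the role of u^s + u~ (assumed smooth)
g = sp.sin(x * y)                # plays the role of d_x w~ (assumed smooth)

# alpha = (1, 1): d_x d_y (a g) expanded by the Leibniz formula.
full = sp.diff(a * g, x, 1, y, 1)
leibniz = (a * sp.diff(g, x, 1, y, 1)          # beta = (0, 0)
           + sp.diff(a, x) * sp.diff(g, y)     # beta = (1, 0)
           + sp.diff(a, y) * sp.diff(g, x)     # beta = (0, 1)
           + sp.diff(a, x, 1, y, 1) * g)       # beta = (1, 1)
assert sp.simplify(full - leibniz) == 0
```

The commutator with $\partial^\alpha$ then consists of exactly the $|\beta|\ge 1$ terms, each carrying at least one derivative of the coefficient, which is what the weighted estimates below exploit.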
For the first term, we split it into two parts: \begin{align*} - \partial^{\alpha} \big((u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon \big) = - (u^s + \tilde{u}_\epsilon)\partial_x \partial^{\alpha} \tilde{w}_\epsilon + [ (u^s + \tilde{u}_\epsilon), \partial^{\alpha}]\partial_x \tilde{w}_\epsilon. \end{align*} Firstly, we have \begin{align*} \left|\int_{\mathbb{R}^2_+} \big((u^s + \tilde{u}_\epsilon) \partial_x \partial^{\alpha} \tilde{w}_\epsilon\big) \langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \tilde{w}_\epsilon dx dy\right| \le \|\partial_x \tilde{u}_\epsilon\|_{L^\infty}\|\partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+{\alpha_2}}}^2, \end{align*} then, using \eqref{sobolev-1}, we get \begin{align*} \left|\int_{\mathbb{R}^2_+} \big((u^s + \tilde{u}_\epsilon) \partial_x \partial^{\alpha} \tilde{w}_\epsilon\big)\langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \tilde{w}_\epsilon dx dy\right| \le \| \tilde{w}_\epsilon\|_{H^3_1}\|\partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+{\alpha_2}}}^2. \end{align*} The commutator can be written as \begin{align*} & [ (u^s + \tilde{u}_\epsilon), \partial^{\alpha}]\partial_x \tilde{w}_\epsilon = \sum\limits_{\beta \le \alpha,\, 1\le|\beta|}C^\beta_\alpha \,\,\partial^{\beta}( u^s + \tilde{u}_\epsilon) \partial^{\alpha - \beta}\partial_x \tilde{w}_\epsilon. \end{align*} Then for $|\alpha|\le m, m\ge 4$, using the Sobolev inequality again and Lemma \ref{inequality-hardy}, $$ \| [ (u^s + \tilde{u}_\epsilon), \partial^{\alpha}]\partial_x \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}}\le C( \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}} +\|\tilde{w}_\epsilon\|^2_{H^m_{k+\ell}}).
$$ Thus \begin{align*} \left|\int_{\mathbb{R}^2_+} \langle y \rangle^{2(k+\ell+\alpha_2)}\big( [ (u^s + \tilde{u}_\epsilon), \partial^{\alpha}]\partial_x \tilde{w}_\epsilon \big)\cdot \partial^{\alpha} \tilde{w}_\epsilon dx dy \right|\le C \big( \|\tilde{w}_\epsilon\|^2_{H^m_{k+\ell}} +\|\tilde{w}_\epsilon\|^3_{H^m_{k+\ell}} \big) , \end{align*} and \begin{align*} \left|\int_{\mathbb{R}^2_+} \langle y \rangle^{2(k+\ell+\alpha_2)}\big( \partial^{\alpha} \big( (u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon \big)\big)\partial^{\alpha} \tilde{w}_\epsilon dx dy\right| \le C \big( \|\tilde{w}_\epsilon\|^2_{H^m_{k+\ell}} +\|\tilde{w}_\epsilon\|^3_{H^m_{k+\ell}} \big), \end{align*} where $C$ is independent of $\epsilon$. For the second term, similarly to the first term in \eqref{non-approx-est-less-s}, we have \begin{align*} \partial^{\alpha} \big( \tilde{v}_\epsilon ( u^s_{yy}+\partial_y\tilde{w}_\epsilon ) \big) & = \tilde{v}_\epsilon \partial_y \partial^{\alpha} \tilde{w}_\epsilon - [\tilde{v}_\epsilon, \partial^{\alpha} ] \partial_y \tilde{w}_\epsilon+ \partial^{\alpha} (\tilde{v}_\epsilon u^s_{yy} ).
\end{align*} Then \begin{align*} \left|\int_{\mathbb{R}^2_+} \tilde{v}_\epsilon \langle y \rangle^{ 2(k+\ell+\alpha_2)} (\partial_y\partial^{\alpha} \tilde{w}_\epsilon)\cdot \partial^{\alpha} \tilde{w}_\epsilon dx dy\right| &\le \|\tilde{v}_\epsilon\|_{L^\infty(\mathbb{R}^2_+)} \|\partial_y \tilde{w}_\epsilon\|_{H^m_{k+\ell}}\| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}\\ &\le \frac 1{4} \|\partial_y \tilde{w}_\epsilon\|^2_{H^m_{k+\ell}(\mathbb{R}^2_+)}+C\| \tilde{w}_\epsilon\|^4_{H^m_{k+\ell}(\mathbb{R}^2_+)}, \end{align*} where we have used \begin{equation*} \begin{split} &\|\tilde{v}_{\epsilon}\|_{L^\infty(\mathbb{R}^2_+)} \le C\|\partial_x\tilde{u}_{\epsilon}\|_{L^\infty(\mathbb{R}_x; L^2_{\frac 12 +\delta}(\mathbb{R}_{y, +}))}\\ &\le C \Big(\int_{\mathbb{R}^2_+}\langle y\rangle ^{1+2\delta}(|\partial_x\tilde{u}_{\epsilon}|^2+|\partial^2_x\tilde{u}_{\epsilon}|^2) dx dy\Big)^{\frac 12}\\ &\le C \Big(\int_{\mathbb{R}^2_+}\langle y\rangle ^{3+2\delta}(|\partial_x\tilde{w}_{\epsilon}|^2+|\partial^2_x\tilde{w}_{\epsilon}|^2) dx dy\Big)^{\frac 12}\le C\| \tilde{w}_\epsilon\|_{H^2_{\frac 32 +\delta}}, \end{split} \end{equation*} where $\delta>0$ is small. Notice that \begin{align*} [ \tilde{v}_\epsilon, \partial^{\alpha} ] \partial_y \tilde{w}_\epsilon & = \sum\limits_{ \beta \le \alpha, 1\le |\beta|} C^\beta_\alpha \,\,\, \partial^{\beta} \tilde{v}_\epsilon \,\,\partial^{\alpha - \beta}\partial_y \tilde{w}_\epsilon. \end{align*} Since $H^m_\ell$ is an algebra for $m\ge 6$, we only need to pay attention to the orders of the derivatives in the above formula. Firstly, for $|\beta|\ge 1$ and $|\alpha-\beta|+1\le m$, we have $$ -\partial^{\beta} \tilde{v}_\epsilon=\partial^{\beta_1}_x \partial^{\beta_2}_y \int^y_0\tilde{u}_{\epsilon, x} d\tilde {y} =\left\{\begin{array}{ll} \partial^{\beta_1+1}_x \partial^{\beta_2-1}_y \tilde{u}_{\epsilon},&\quad \beta_2\ge 1,\\ \int^y_0 \partial^{\beta_1+1}_x \tilde{u}_{\epsilon} d\tilde {y},&\quad\beta_2=0. \end{array}\right.
$$ Now, using the hypothesis $\beta\le \alpha$, $1\le |\beta|$, $\beta_1\le \alpha_1\le m-1$ and Lemma \ref{inequality-hardy}, we get $$ \| [ \tilde{v}_\epsilon, \partial^{\alpha} ] \partial_y \tilde{w}_\epsilon \|_{L^2_{k+\ell+\alpha_2}}\le C\|\tilde{w}_\epsilon\|^2_{H^m_{k+\ell}}. $$ On the other hand, if $\alpha_2=0$, using $-1+\ell<-\frac 12$, we can get $$ \| \partial^{m-1}_x( \tilde{v}_\epsilon u^s_{yy}) \|_{L^2_{k+\ell}}\le C\|\partial^{m}_x\tilde{u}_\epsilon\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}\|u^s_{yy} \|_{L^2_{k+\ell}(\mathbb{R}_+)} \le C\|\tilde{w}_\epsilon\|_{H^m_{\frac 32+\delta}}. $$ By similar computations in the other cases, we get, for $\alpha_2>0, \alpha_1+\alpha_2\le m$, $$ \| \partial^{\alpha}( \tilde{v}_\epsilon u^s_{yy}) \|_{L^2_{k+\ell+\alpha_2}}\le C\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}. $$ Combining the above estimates, we finish the proof of Proposition \ref{prop3.1}. \end{proof} \noindent {\bf Smallness of approximate solutions.} To close the energy estimate, we still need to estimate the term $\partial^m_x \tilde{w}_\epsilon$. \begin{proposition} \label{prop3.2} Under the hypothesis of Theorem \ref{theorem3.1}, and with the same notations as in Proposition \ref{prop3.1}, we have \begin{align} \label{approx-k} \begin{split} & \frac 12\frac{d}{dt}\|\partial_x^m \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3\epsilon}{4}\|\partial_x^{m+1}\tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3}{4} \|\partial_y \partial_x^m \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 \\ & \le C\big( \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2 + \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}^3 \big) + \frac{32}{\epsilon}\big(\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^4+\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2\big).
\end{split} \end{align} \end{proposition} \begin{proof} We have \begin{align*} \partial_t \partial_x^m \tilde{w}_\epsilon - \partial_y^2 \partial_x^m \tilde{w}_\epsilon - \epsilon \partial_x^m\partial^2_x \tilde{w}_{\epsilon}= - \partial_x^m \big((u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon \big) - \partial_x^m \big( \tilde{v}_\epsilon (\partial_y\tilde{w}_\epsilon + u^s_{yy}) \big), \end{align*} then the same computations as in Proposition \ref{prop3.1} give \begin{align} \label{approx-k-part-1} \begin{split} & \frac 12\frac{d}{dt}\|\partial_x^m \tilde{w}_\epsilon\|_{L^2_{{k+\ell}}}^2 + \epsilon\|\partial_x^{m+1} \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3}{4} \|\partial_y \partial_x^m\tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2\\ &\le C( \|\tilde{w}_\epsilon \|_{H^m_{k+\ell}}^2+\|\tilde{w}_\epsilon \|_{H^m_{k+\ell}}^3)\\ &+\left|\int_{\mathbb{R}^2_+} \partial_x^m \big( \tilde{v}_\epsilon (\partial_y\tilde{w}_\epsilon + u^s_{yy}) \big) \langle y \rangle^{2(k+\ell)} \partial_x^m \tilde{w}_\epsilon dxdy\right|, \end{split} \end{align} where the boundary terms are easier to control, since $$ (\partial_y \partial_x^m\tilde{w}_\epsilon)(t, x, 0)=(\partial^2_y \partial_x^m\tilde{u}_\epsilon)(t, x, 0)=0,\,\,\,(t, x)\in [0, T]\times \mathbb{R}. $$ The estimate of the last term on the right-hand side is the main obstacle in the study of the Prandtl equations. We decompose \begin{align*} \partial_x^m \big( \tilde{v}_\epsilon (\partial_y\tilde{w}_\epsilon + u^s_{yy}) \big) & = \tilde{v}_\epsilon \partial_x^m \partial_y \tilde{w}_\epsilon + (\partial^m_x\tilde{v}_\epsilon) (\partial_y \tilde{w}_\epsilon+u^s_{yy})\\ &\quad +\sum_{1\le j\le m-1}C^j_m\, \partial^j_x\tilde{v}_\epsilon\partial^{m-j}_x \partial_y\tilde{w}_\epsilon.
\end{align*} For the first term, we have \begin{align*} \int_{\mathbb{R}^2_+} \tilde{v}_\epsilon(\partial_x^m \partial_y \tilde{w}_\epsilon) \langle y \rangle^{ 2(k+\ell)} (\partial_x^m \tilde{w}_\epsilon)dx dy &=\frac{1}{2}\int \tilde{v}_\epsilon \langle y \rangle^{ 2(k+\ell)} \partial_y(\partial_x^m \tilde{w}_\epsilon)^2 dx dy \\ & = \frac{1}{2}\int \tilde{u}_{\epsilon, x} \langle y \rangle^{ 2(k+\ell)} (\partial_x^m \tilde{w}_\epsilon)^2 dx dy\\ & - (k+\ell) \int \tilde{v}_\epsilon \,\frac{y}{\langle y \rangle}\, \langle y \rangle^{ 2(k+\ell) - 1} (\partial_x^m \tilde{w}_\epsilon)^2 dx dy\\ & \le C \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^3, \end{align*} where we have used $\tilde{v}_\epsilon|_{y=0}=0$, and \begin{align*} \left|\int_{\mathbb{R}^2_+} \big(\sum_{1\le j\le m-1}C^j_m\, \partial^j_x\tilde{v}_\epsilon\partial^{m-j}_x \partial_y\tilde{w}_\epsilon) \langle y \rangle^{ 2(k+\ell)} (\partial_x^m \tilde{w}_\epsilon\big)dx dy\right| \le C \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^3. \end{align*} Finally, for the worst term, we have \begin{align*} &\left|\int_{\mathbb{R}^2_+} (\partial^m_x \tilde{v}_\epsilon)(\partial_y \tilde{w}_\epsilon +u^s_{yy}) \langle y \rangle^{ 2(k+\ell)} (\partial_x^m \tilde{w}_\epsilon)dx dy \right|\\ &\le C\|\partial^m_x \tilde{v}_\epsilon\|_{L^2(\mathbb{R}_x; L^\infty(\mathbb{R}_+))} \|\partial_y\tilde{w}_\epsilon\|_{L^\infty(\mathbb{R}_x; L^2_{k+\ell}(\mathbb{R}_+))} \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}\\ &\qquad\qquad +\|\partial^m_x \tilde{v}_\epsilon u^s_{yy}\|_{L^2_{k+\ell}(\mathbb{R}^2_+)} \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}.
\end{align*} On the other hand, observing $$ \partial^m_x \tilde{v}_\epsilon (t, x, y)=-\int^y_0 \partial^{m+1}_x \tilde{u}_\epsilon(t, x, \tilde{y})d\tilde y, $$ then using the Sobolev inequality and Lemma \ref{inequality-hardy}, for $\delta>0$ small, we get $$ \|\partial^m_x \tilde{v}_\epsilon\|_{L^2(\mathbb{R}_x; L^\infty(\mathbb{R}_+))}\le C\|\partial^{m+1}_x \tilde{u}_\epsilon\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)} \le C\|\partial^{m+1}_x \tilde{w}_\epsilon\|_{L^2_{\frac 32+\delta}(\mathbb{R}^2_+)}. $$ Using the hypothesis for the shear flow $u^s$ and $\ell-1<-\frac 12$, \begin{align*} \|\partial^m_x (\tilde{v}_\epsilon u^s_{yy})\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}&\le \|\partial^m_x \tilde{v}_\epsilon\|_{L^2(\mathbb{R}_x; L^\infty(\mathbb{R}_+))}\| u^s_{yy}\|_{L^2_{k+\ell}(\mathbb{R}_+)}\\ &\le C\|\partial^{m+1}_x \tilde{w}_\epsilon\|_{L^2_{\frac 32+\delta}(\mathbb{R}^2_+)}, \end{align*} and for $k+\ell\ge \frac 32+\delta$, $$ \|\partial_y\tilde{w}_\epsilon\|_{L^\infty(\mathbb{R}_x; L^2_{k+\ell}(\mathbb{R}_+))} \le C \|\partial_y\tilde{w}_\epsilon\|_{H^1(\mathbb{R}_x; L^2_{k+\ell}(\mathbb{R}_+))}\le C\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}. $$ Thus, we have \begin{align} \label{approx-k-part-3} \begin{split} &\int \big(\partial_x^m \big( \tilde{v}_\epsilon (\partial_y\tilde{w}_\epsilon + u^s_{yy}) \big)\big) \, \langle y \rangle^{2(k+\ell)} \partial_x^m \tilde{w}_\epsilon dx dy \\ & \le C \| \tilde{w}_\epsilon \|_{H^m_{k+\ell}}^3 + \frac{32}{\epsilon}\big(\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^4+\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2\big) + \frac{\epsilon}{4}\| \partial_x^{m+1} \tilde{w}_\epsilon\|_{L^2_{\frac 32+\delta}}^2.
\end{split} \end{align} From \eqref{approx-k-part-1} and \eqref{approx-k-part-3}, we have, if $k+\ell >\frac 32$, \begin{align*} \begin{split} &\frac 12 \frac{d}{dt}\|\partial_x^m \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3\epsilon}{4}\|\partial_x^{m+1}\tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3}{4} \|\partial_y \partial_x^m \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 \\ & \le C\big( \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2 + \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}^3 \big) + \frac{32}{\epsilon}\big(\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^4+\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2\big). \end{split} \end{align*} \end{proof} \begin{proof}[{\bf End of proof of Theorem \ref{theorem3.1}}] Combining \eqref{approx-less-k} and \eqref{approx-k}, for $m\ge 6, k>1, \frac 32 -k<\ell<\frac 12 $ and $0<\epsilon\le 1$, we get \begin{align} \label{approx-total} \frac{d}{dt}\| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 &\le \frac{C}{\epsilon}\big( \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 + \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^m \big), \end{align} with $C>0$ independent of $\epsilon$. From \eqref{approx-total}, by a nonlinear Gronwall inequality, we have $$ \| \tilde{w}_\epsilon(t)\|^{m-2}_{H^m_{k+\ell}(\mathbb{R}^2_+)} \le\, \frac{\|\tilde{w}_\epsilon(0)\|_{H^m_{k+\ell}}^{m-2}} {e^{-\frac{C}{\epsilon}t(\frac m2-1)}-(\frac m2-1)\frac{C}{\epsilon}t \|\tilde{w}_\epsilon(0)\|_{H^m_{k+\ell}}^{m-2} },\,\,~~0< t \le T_\epsilon, $$ where we choose $T_\epsilon>0$ such that \begin{align} \label{time-1} \left(e^{-\frac{C}{\epsilon}T_\epsilon(\frac m2-1)}-(\frac m2-1)\frac{C}{\epsilon}T_\epsilon \bar \zeta^{\,m-2} \right)^{-1}=\left(\frac 43\right)^{m-2}.
\end{align} Finally, we get, for any $\| \tilde{w}_\epsilon(0)\|_{H^m_{k+\ell}}\le \bar\zeta$ and $0<\epsilon\le \epsilon_0$, \begin{align*} \| \tilde{w}_\epsilon(t)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)} \le \frac 43\| \tilde{w}_\epsilon(0)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\le 2\| \tilde{w}_0\|_{H^m_{k+\ell}(\mathbb{R}^2_+)},~~~~~0< t \le T_\epsilon. \end{align*} \end{proof} The rest of this paper is dedicated to improving the result of Proposition \ref{prop3.2}, in order to obtain a uniform estimate with respect to $\epsilon$. Of course, we have to recall the assumption on the shear flow in the main Theorem \ref{main-theorem}. \section{Formal transformations}\label{section4} Since the estimate \eqref{approx-less-k} is independent of $\epsilon$, we only need to treat \eqref{approx-k} in a new way to get an estimate which is also independent of $\epsilon$. To simplify the notation, from now on we drop the tilde and the subscript $\epsilon$; that is, with no risk of confusion, we write $$ u=\tilde{u}_\epsilon,\quad v=\tilde{v}_\epsilon,\quad w=\tilde{w}_\epsilon. $$ Let $w\in L^\infty ([0, T]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$, with $m\ge 6, k>1, 0<\ell<\frac12,~~ \frac{1}{2}<\ell' < \ell + \frac{1}{2},~ k+\ell>\frac 32$, be a classical solution of \eqref{shear-prandtl-approxiamte-vorticity} which satisfies the following {\em a priori} condition \begin{equation}\label{apriori} \|w\|_{ L^\infty ([0, T]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta.
\end{equation} Then \eqref{sobolev-1} gives \begin{align*} \|\langle y\rangle ^{k+\ell} w\|_{ L^\infty ([0, T]\times\mathbb{R}^2_+)}\le &C(\|\langle y\rangle ^{\frac 12+\delta}(\langle y\rangle ^{k+\ell}w)_y \|_{ L^\infty ([0, T]; L^2(\mathbb{R}^2_+))}\\ &+ \|\langle y\rangle ^{\frac 12+\delta}(\langle y\rangle ^{k+\ell}w)_{xy} \|_{ L^\infty ([0, T]; L^2(\mathbb{R}^2_+))})\\ &\le C_m \|w\|_{ L^\infty ([0, T]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}, \end{align*} which implies $$ |\partial_y u(t, x, y)|=|w(t, x, y)|\le C_m\, \zeta \, \langle y\rangle^{-k-\ell},\quad (t, x, y)\in [0, T]\times \mathbb{R}^2_+. $$ We assume that {\bf $\zeta$ is small enough} such that \begin{equation}\label{C0} C_m\, \zeta \le \frac{\tilde c_1}{4}, \end{equation} where $C_m$ is the above Sobolev embedding constant. Then we have for $\ell\ge 0$, \begin{align} \label{pior-2}\frac{\tilde c_1}4 \langle y \rangle^{-k} \le |u^s_y + u_y|\le 4\tilde c_2 \langle y \rangle^{-k},\quad (t, x, y)\in [0, T]\times \mathbb{R} \times \mathbb{R}^+. \end{align} \noindent {\bf The formal transformation of equations.} Under the conditions \eqref{C0} and \eqref{pior-2}, in this subsection, we will introduce the following formal transformations of system \eqref{shear-prandtl-approxiamte}. { Set, for $0\le n\le m$ \[ g_n = \left( \frac{\partial^n_x u}{u^s_y + u_y} \right)_y,\,~\eta_1 = \frac{u_{xy}}{u^s_y + u_y},~ \eta_2 = \frac{u^s_{yy} + u_{yy}}{{u^s_y +u_y}}, \,\forall (t, x, y)\in [0, T]\times \mathbb{R}^2_+ . 
\] Formally, we will use the following notations: \[ \partial^{-1}_y g_n(t, x, y) = \frac{\partial^n_x u}{u^s_y + {u}_y}(t,x, y),\, \partial_y \partial_y^{-1} g_n = g_n,~\forall (t, x, y)\in [0, T]\times \mathbb{R}^2_+. \] Applying $\partial^n_x$ to \eqref{shear-prandtl-approxiamte}, we have \begin{align}\label{shear-prandtl-approxiamte-aa} \begin{split} \partial_t\partial^n_x u + (u^s + {u}) \partial_x\partial^n_x u &+(\partial^n_x {v}) (u^s_y + \partial_y {u})\\ &= \partial^2_{y}\partial^n_x u + \epsilon \partial^2_{x}\partial^n_x u+ A^1_n+A^2_n, \end{split} \end{align} where \begin{align*} A^1_n=-[\partial^n_x,\, (u^s + {u})] \partial_x u=- \sum_{i=1}^{n}C^i_n\partial_x^i {u} \, \partial^{n + 1 -i}_x {u}, \,\,\\ A^2_n=-[\partial^n_x,\, (u^s_y + \partial_y {u})] {v}= -\sum_{i=1}^{n}C^i_n\partial_x^i {w} \,\partial^{n -i}_x {v}. \end{align*} Dividing \eqref{shear-prandtl-approxiamte-aa} by $(u^s_y + {u}_y)$ and applying $\partial_y$ to the resulting equation, observing $$ \partial_x\partial^n_x u +\partial_y\partial^n_x v= \partial^n_x(\partial_x u +\partial_y v)=0, $$ we have \begin{equation*} \begin{split} & \partial_y\left(\frac{\partial_t {\partial^n_x u}}{u^s_y + {u}_y}\right) + (u^s + {u}) \partial_y\left(\frac{\partial_x{\partial^n_x u}} {u^s_y + {u}_y}\right)\\ &\qquad\qquad= \partial_y\left(\frac{\partial^2_{y}{\partial^n_x u} + \epsilon \partial^2_{x}{\partial^n_x u}}{u^s_y + {u}_y}\right) + \partial_y\left(\frac{A^1_n+A^2_n}{u^s_y + {u}_y}\right).
\end{split} \end{equation*} We now compute each term as follows: \begin{align*} \partial_y\left(\frac{\partial_t {\partial^n_x u} }{u^s_y + {u}_y}\right) &= \partial_y\bigg(\partial_t \frac{{\partial^n_x u} }{u^s_y + {u}_y} + \partial_y^{-1} g_n \, \frac{\partial_t {u}_y + \partial_t u^s_y }{u^s_y + {u}_y} \bigg)\\ &= \partial_t g_n + \partial_y\bigg( \partial_y^{-1} g_n \, \frac{ \partial_t u^s_y+\partial_t {u}_y}{u^s_y + {u}_y} \bigg), \end{align*} \begin{align*} (u^s + {u})\partial_y \left(\frac{\partial_x {\partial^n_x u}}{u^s_y + {u}_y}\right) & = (u^s + {u})\bigg\{\partial_x\partial_y\left(\frac{{\partial^n_x u}}{u^s_y + {u}_y}\right) +\partial_y \left(\frac{{\partial^n_xu}}{u^s_y +{u}_y}\right) \, \frac{{u}_{xy}}{u^s_y + {u}_y} \\ &\hskip 3cm + \left(\frac{\partial^n_x{u}}{u^s_y + {u}_y}\right)\partial_y\left( \frac{{u}_{xy}}{u^s_y + {u}_y}\right)\bigg\}\\ & = (u^s + {u})(\partial_x g_n + g_n\, \eta_1 + \partial_y^{-1} g_n \, \partial_y \eta_1), \end{align*} \begin{align*} \frac{{\partial^2_{y} {\partial^n_x u}}}{u^s_y + {u}_y} =\partial^2_y \left(\frac{{ {\partial^n_x u}}}{u^s_y + {u}_y} \right) + 2\left(\frac{ \partial_y {\partial^n_x u}}{u^s_y + {u}_y}\right)\frac{u^s_{yy} + {u}_{yy}}{{u^s_y + {u}_y}} - {\partial^n_x u} \, \partial^2_y\left(\frac{1}{u^s_y + {u}_y}\right), \end{align*} \begin{align*} \partial^2_y \left(\frac{1}{u^s_y + {u}_y}\right) = - \partial_y\left(\frac{u_{yy}^s + {u}_{yy}}{(u^s_y + {u}_y)^2}\right)= -\frac{u_{yyy}^s + {u}_{yyy}}{(u^s_y + {u}_y)^2} + 2 \left(\frac{u_{yy}^s + {u}_{yy}}{(u^s_y + {u}_y)}\right)^2 \frac{1}{u^s_y + {u}_y}, \end{align*} \begin{align*} \frac{{ \partial_y {\partial^n_x u}}}{u^s_y + {u}_y}\frac{u^s_{yy} +{u}_{yy}}{{u^s_y + {u}_y}} = \left(\frac{{{\partial^n_xu}}}{u^s_y + {u}_y}\right)_y \frac{u^s_{yy} + {u}_{yy}}{{u^s_y + {u}_y}} - \frac{{{\partial^n_xu}}}{u^s_y + {u}_y}\left(\frac{u_{yy}^s + {u}_{yy}}{(u^s_y + {u}_y)}\right)^2.
\end{align*} So \begin{align*} \frac{{\partial^2_{y}{\partial^n_x u}}}{u^s_y + {u}_y} = \partial_y g_n + 2(g_n \eta_2 - 2 \partial^{-1}_y g_n \eta_2^2) + \partial^{-1}_y g_n \left(\frac{u_{yyy}^s + {u}_{yyy}}{(u^s_y + {u}_y)}\right), \end{align*} \begin{align*} \partial_y\left(\frac{{\partial^2_{y} {\partial^n_x u}}}{u^s_y + {u}_y}\right)&=\partial^2_y g_n + 2 (\partial_y g_n) \eta_2 + 2g_n \partial_y \eta_2 - 4g_n\eta_2^2\\ &- 8 \partial_y^{-1} g_n \eta_2 \partial_y \eta_2 + \partial_y\bigg(\partial_y^{-1} g_n\, \frac{u_{yyy}^s + {u}_{yyy}}{u^s_y + {u}_y}\bigg). \end{align*} Similarly, we have \begin{align*} \frac{{\partial^2_{x} {\partial^n_x u}}}{u^s_y + {u}_y} &= \partial^2_x\left(\frac{{{\partial^n_x u}}}{u^s_y + {u}_y} \right) + 2\left(\frac{{{\partial^n_x u}}}{u^s_y + {u}_y}\right)_x\frac{ {u}_{xy}}{{u^s_y + {u}_y}} \\ &- 2\frac{{{\partial^n_x u}}}{u^s_y +{u}_y}\left( \frac{{u}_{xy}}{(u^s_y +{u}_y)}\right)^2 + \frac{{{\partial^n_x u}}}{u^s_y + {u}_y}\frac{{u}_{xxy}}{(u^s_y +{u}_y)}, \end{align*} \begin{align*} \partial_y \left(\frac{{\partial^2_{x}{\partial^n_x u}}}{u^s_y + {u}_y}\right)& = \partial^2_xg_n + 2\partial_x g_n \eta_1 + 2\partial_x \partial_y^{-1} g_n \partial_y \eta_1 \\ &- 2g_n\eta_1^2 - {4 \partial_y^{-1} g_n \eta_1 \partial_y \eta_1} +\partial_y\bigg(\partial_y^{-1} g_n\,\frac{ {u}_{xxy}}{u^s_y +{u}_y}\bigg)\, . \end{align*} We now turn to the boundary condition. From \eqref{shear-prandtl-approxiamte-aa} and the boundary condition for $(u, v)$ in \eqref{shear-prandtl-approxiamte}, we observe $$ \partial^n_x u|_{y=0}=0,\,\,\partial^2_y \partial^n_x u|_{y=0}=0,\,\, (u^s_y+u_y)|_{y=0}\not=0.
$$ At the same time, \begin{align*} 0=\frac{{\partial^2_{y}{\partial^n_x u}}}{u^s_y + {u}_y}\bigg|_{y=0} &= \partial_y g_n|_{y=0} + 2(g_n \eta_2 - 2 (\partial^{-1}_y g_n )\eta_2^2)|_{y=0}\\ & \qquad+ \partial^{-1}_y g_n \left(\frac{u_{yyy}^s + {u}_{yyy}}{(u^s_y + {u}_y)}\right)\bigg|_{y=0}, \end{align*} and $$ \eta_2|_{y=0} = \frac{u^s_{yy} + u_{yy}}{{u^s_y +u_y}}\bigg|_{y=0}=0, \quad \partial^{-1}_y g_n(t, x, y)|_{y=0} = \frac{\partial^n_x u}{u^s_y + {u}_y}(t,x, y)\bigg|_{y=0}=0,\, $$ so we then get $$ (\partial_y g_n)|_{y=0}=0,\quad 0\le n\le m. $$ Finally, we have \begin{equation} \begin{cases} \label{non-monotone-transformation} \partial_t g_n + ( u^s + {u}) \partial_x g_n - \partial^2_y g_n - \epsilon \partial^2_xg_n\\ \qquad\qquad\qquad - 2\epsilon \, (\partial_x \partial_y^{-1} g_n) \partial_y \eta_1 = M_n,\\ (\partial_y g_n)|_{y=0}=0, \\ g_n|_{t=0}= g_{n,0}, \end{cases} \end{equation} with $M_n=\sum^6_{j=1}M_j^n$, \begin{align*} & M^n_1 = -(u^s + {u})( g_n \eta_1 +( \partial_y^{-1} g_n ) \partial_y \eta_1) ,\\ & M^n_2 = 2(\partial_y g_n) \eta_2 + 2g_n (\partial_y \eta_2 - 2\eta_2^2) - 8 (\partial_y^{-1} g_n)\, \eta_2 \partial_y \eta_2 ,\\ & M^n_3 = \epsilon \big(2(\partial_x g_n) \eta_1 - 2 g_n\,\eta_1^2 - 4{(\partial_y^{-1} g_n) \eta_1 \partial_y \eta_1}\big), \\ & M^n_4 = \partial_y\bigg(\partial_y^{-1} g_n \, \frac{(u^s + {u}) {w}_x + {v} ({w}_y + u^s_{yy})}{u^s_y + {u}_y}\bigg),\\ & M^n_5 = -\partial_y\bigg(\frac{\sum_{i=1}^{n}C^i_n\partial_x^i {u} \cdot \partial^{n + 1 -i}_x {u} }{ u^s_y +{u}_y} \bigg)\,,\\ & M^n_6 = -\partial_y\bigg(\frac{ \sum_{i=1}^{n}C^i_n\partial_x^i {w} \cdot \partial^{n -i}_x {v}}{ u^s_y +{u}_y} \bigg)\, , \end{align*} where we have used the relation $$ \partial_t u^s_y+\partial_t {u}_y -(u_{yyy}^s + {u}_{yyy})-\epsilon {u}_{xxy}=-(u^s + {u}) {w}_x + {v} (u^s_{yy}+{w}_y).
$$ \section{Uniform estimate}\label{section5} In a later application (see Lemma \ref{lemma-g-h-w}), we need the weight of $g_m$ to be larger than $\frac 12$; however, by definition, $ w\in H^{m+2}_{k + \ell}(\mathbb{R}^2_+)$ implies only $g_m\in H^{2}_{\ell}(\mathbb{R}^2_+)$ with $0<\ell<\frac 12$. The first step is therefore to improve this weight when the initial data carries a larger weight. We first have \begin{lemma}\label{lemma-initial-deta} If $\tilde w_0\in H^{m+2}_{k + \ell'}(\mathbb{R}^2_+)$, with $m\ge 6, k>1, 0< \ell<\frac12,~~ \frac{1}{2}<\ell' < \ell + \frac{1}{2},~ k+\ell>\frac 32$, satisfies \eqref{apriori}-\eqref{C0} with $0<\zeta\le1$, then $g_m(0)\in H^2_{\ell'}(\mathbb{R}^2_+)$, and we have $$ \| g_m(0)\|_{H^2_{\ell'}(\mathbb{R}^2_+)}\le C \|\tilde w_0\|_{H^{m+2}_{k + \ell'}(\mathbb{R}^2_+)}. $$ \end{lemma} \noindent {\bf Remark.} In fact, observing \begin{align*} g_m(0)= \left( \frac{\partial_x^m \tilde{u}_0}{u^s_{0,y} + \tilde u_{0, y}} \right)_y= \frac{\partial_y\partial_x^m \tilde{u}_0}{u^s_{0,y} + \tilde u_{0, y}}- \frac{\partial_x^m \tilde{u}_0}{u^s_{0,y} + \tilde u_{0, y}} \eta_2(0), \end{align*} then \eqref{pior-2} implies \begin{align*} \langle y \rangle ^{\ell'}| g_m(0)|\le C\langle y \rangle ^{k + \ell'}| \partial_x^m \tilde{w}_0|+C\langle y \rangle ^{k + \ell'-1}| \partial_x^m \tilde{u}_0|, \end{align*} which proves the lemma. \begin{proposition}\label{lemma-non-x-k-monotone-part1-2} Let $w \in L^\infty ([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+)), m\ge 6, k>1, 0\le \ell<\frac12, ~\ell'> \frac{1}{2},~~\ell' - \ell < \frac{1}{2},~k+\ell>\frac 32$, satisfy \eqref{apriori}-\eqref{C0} with $0<\zeta\le1$.
Assume that the shear flow $u^s$ verifies the conclusion of Lemma \ref{shear-profile}, and that $g_n$ satisfies the equation \eqref{non-monotone-transformation} for $ 1\le n\le m$. Then we have the following estimate, for $t\in [0, T]$: \begin{equation} \label{uniform-part2-2} \begin{split} \frac{d}{dt}\sum^m_{n=1}\| g_n \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 & + \sum^m_{n=1}\| \partial_y g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \epsilon \sum^m_{n=1}\| \partial_x g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\ & \le C_2(\sum^m_{n=1}\| g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \|{w}\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2), \end{split} \end{equation} where $C_2$ is independent of $\epsilon$. \end{proposition} \noindent {\bf\em Approach of the proof of Proposition \ref{lemma-non-x-k-monotone-part1-2}:} We cannot prove \eqref{uniform-part2-2} directly, since the approximate solution $w_\epsilon$ obtained in Theorem \ref{theorem3.1} belongs to $L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$, which implies only $ g_n\in L^\infty([0, T_\epsilon]; H^{2}_{\ell}(\mathbb{R}^2_+))$. Hence we cannot use $\langle y\rangle^{2\ell'} g_n \in L^\infty([0, T_\epsilon]; H^{2}_{\ell-2\ell'}(\mathbb{R}^2_+)) $ as a test function for the equation \eqref{non-monotone-transformation}. To overcome this difficulty, we regard \eqref{non-monotone-transformation} as a linear system for $ g_n, n=1, \cdots, m$, with coefficients and source terms depending on $w$ and its derivatives up to order $m$; we will make this precise in the proof of Proposition \ref{lemma-non-x-k-monotone-part1-2} below. We now prove the estimate \eqref{uniform-part2-2} by the following approach: for the linear system \eqref{non-monotone-transformation}, we first prove \eqref{uniform-part2-2} as an {\em a priori} estimate.
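Before proceeding, let us record a direct consequence of \eqref{uniform-part2-2} (not stated explicitly in the text, but useful to keep in mind): once the a priori estimate is established, Gronwall's inequality yields a bound that is uniform in $\epsilon$,
\begin{equation*}
\sum^m_{n=1}\| g_n(t) \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2
\le e^{C_2 t}\Big(\sum^m_{n=1}\| g_n(0)\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2
+ C_2\int_0^t \|{w}(s)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2\, ds\Big),
\qquad t\in[0,T].
\end{equation*}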
Lemma \ref{lemma-initial-deta} implies that $ {g}_{n}(0)\in H^2_{\ell'}(\mathbb{R}^2_+), n=1,\cdots, m$; then, by the Hahn--Banach theorem, this {\em a priori} estimate implies the existence of solutions $$ g_n\in L^\infty([0, T]; H^2_{\ell'}(\mathbb{R}^2_+)),\quad n=1, \cdots, m. $$ Finally, by uniqueness, these solutions coincide with the functions $g_n$ defined from $w$, so the estimate \eqref{uniform-part2-2} holds for them. The proof of Proposition \ref{lemma-non-x-k-monotone-part1-2} is thus reduced to the proof of the {\em a priori} estimate \eqref{uniform-part2-2}. \begin{proof}[{\bf Proof of the {\em a priori} estimate \eqref{uniform-part2-2}}] We multiply the linear system \eqref{non-monotone-transformation} by $\langle y\rangle^{2\ell'}g_{n}\in L^\infty([0, T]; H^2_{-\ell'}(\mathbb{R}^2_+)) $ and integrate over $\mathbb{R} \times \mathbb{R}^+$. We first treat the left-hand side of \eqref{non-monotone-transformation}. We have \begin{align*} \int_{\mathbb{R}^2_+} \partial_t g_{n} \, \langle y\rangle^{2\ell'} g_{n} dx dy = \frac 12\frac{d}{dt}\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2, \end{align*} and \begin{align*} \int_{\mathbb{R}^2_+} ( u^s + {u})\partial_x g_{n} \, \langle y\rangle^{2\ell'} g_{n}dx dy =& \frac{1}{2}\int_{\mathbb{R}^2_+} ( u^s + {u}) \cdot \partial_x (\langle y\rangle^{2\ell'} g_{n}^2) dx dy \\ &\le \frac 12 \|{u}_x\|_{L^\infty(\mathbb{R}^2_+)}\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\ &\le C\|w\|_{H^2_{1}(\mathbb{R}^2_+)}\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2.
\end{align*} Integrating by parts (the boundary terms vanish), \begin{align*} - \int_{\mathbb{R}^2_+} \partial_y^2 g_{n} \, \langle y\rangle^{2\ell'} g_{n}dx dy &= \|\partial_y g_{n} \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \int_{\mathbb{R}^2_+} \partial_y g_{n} (\langle y\rangle^{2\ell'})' g_{n} dx dy\\ & \ge \frac{3}{4}\| \partial_y g_{n} \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 - 4 \| g_{n}\|_{L^2(\mathbb{R}^2_+)}^2, \end{align*} and \begin{align*} -\epsilon\int_{\mathbb{R}^2_+} \partial_x^2 g_{n} \, \langle y\rangle^{2\ell'} g_{n} dx dy =\epsilon\| \partial_x g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} We also have \begin{align*} &-\epsilon \int_{\mathbb{R}^2_+} \big(\partial_x \partial_y^{-1} g_{n} \big)\partial_y \eta_1 \langle y\rangle^{2\ell'}g_{n} dx dy \\ & = \epsilon \int_{\mathbb{R}^2_+} \partial_y^{-1} g_{n} \partial_y \eta_1 \langle y\rangle^{2\ell'} \partial_x g_{n} dx dy\\ &\quad+ \epsilon \int_{\mathbb{R}^2_+} \partial_y^{-1} g_{n} (\partial_y \partial_x \eta_1)\langle y\rangle^{2\ell'} g_{n}dx dy\\ & \le \epsilon\| \partial_y^{-1} g_{n} \partial_y \eta_1 \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \frac{\epsilon}{8}\|\partial_x g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\ & \quad+ \epsilon\| \partial_y^{-1} g_{n}\partial_y \partial_x \eta_1\|_{L^2(\mathbb{R}^2_+)}^2 + \epsilon\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} So by \eqref{non-monotone-transformation} and $0<\epsilon\le 1$, we obtain \begin{equation*} \begin{split} &\frac{d}{dt}\|g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}+\| \partial_yg_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)} +\epsilon \|\partial_x g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}\\ &\le C \|g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}+\|(\partial_y^{-1} g_{n}) \partial_y \eta_1 \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 \\ &\qquad+ \| (\partial_y^{-1} g_{n}) \partial_y \partial_x \eta_1\|_{L^2(\mathbb{R}^2_+)}^2 + 2\sum_{j=1}^{6}\bigg|\int_{\mathbb{R}^2_+} {M}^n_{j} \, \langle y\rangle^{2\ell'}g_{n} dx dy\bigg|.
\end{split} \end{equation*} Then we can finish the proof of the {\em a priori} estimate \eqref{uniform-part2-2} by the following four lemmas. \end{proof} \begin{lemma}\label{lemma5.1} Under the assumption of Proposition \ref{lemma-non-x-k-monotone-part1-2}, we have \begin{equation*} \begin{split} \| \partial_y^{-1} g_{n} \partial_y \eta_1 \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \| \partial_y^{-1} g_{n} \partial_y \partial_x \eta_1\|_{L^2(\mathbb{R}^2_+)}^2 \le C \|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2, \end{split} \end{equation*} where $C$ is independent of $\epsilon$. \end{lemma} \begin{proof} Notice that \eqref{apriori} and \eqref{C0} imply \begin{align*} &|\eta_1| \le C \langle y \rangle^{-\ell},\quad |\partial_x \eta_1 |\le C\langle y \rangle^{-\ell},\\ &|\partial_y \eta_1 |\le C \langle y \rangle^{-\ell - 1},\quad |\partial_y \partial_x \eta_1| \le C \langle y \rangle^{-\ell - 1}. \end{align*} Then $\ell'>\frac 12$ and $\ell'-\ell<\frac12$ imply \begin{align*} \begin{split} \|\partial_y^{-1} g_{n} (\partial_y \partial_x \eta_1)\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 & \le C\int_{\mathbb{R}^2_+}\langle y \rangle^{2(\ell' - \ell-1)} \Big(\int^y_0 g_n (t, x, \tilde y)d\tilde y \Big)^2 dx dy\\ & \le C\| g_{n} \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{split} \end{align*} Similarly, we also obtain \begin{align*} \begin{split} \| \partial_y^{-1} g_{n} \partial_y \eta_1\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 \le C\| g_{n} \|_{L^2_{ \ell'}(\mathbb{R}^2_+)}^2.
\end{split} \end{align*} \end{proof} \begin{lemma}\label{lemma5.2} Under the assumption of Proposition \ref{lemma-non-x-k-monotone-part1-2}, we have \begin{equation*} \begin{split} \left|\int_{\mathbb{R}^2_+} \sum^4_{j = 1}{M}^n_{j } \, \langle y\rangle^{2\ell'}g_{n} dx dy\right|\le & \frac18\|\partial_y g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}+ \frac{\epsilon}8\| \partial_x g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)} \\ &\quad+ \tilde{C}(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2), \end{split} \end{equation*} where $\tilde{C}$ is independent of $\epsilon$. \end{lemma} \begin{proof} Recalling $ {M}^n_{1 } = -(u^s + {u})\big( g_{n} \eta_1 +( \partial_y^{-1} g_{n} ) \partial_y \eta_1 \big) $, by Lemma \ref{lemma5.1}, \begin{align*} \left|\int_{\mathbb{R}^2_+} (u^s + {u})g_{n} \eta_1 \, \langle y\rangle^{2\ell'}g_{n} dxdy\right| &\le C\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2,\\ \int_{\mathbb{R}^2_+} |(u^s + {u}) ( \partial_y^{-1} g_n) \partial_y \eta_1\, \langle y \rangle^{2\ell'} g_{n} | dy dx &\le C \| w\|_{H^n_{k+ \ell}}^2 + C \| g_{n}\|_{L^2_{\ell'}}^2. \end{align*} Hence, we have \begin{align*} \left|\int_{\mathbb{R}^2_+} {M}^n_{1 } \, \langle y\rangle^{2\ell'}g_{n} dx dy\right|\le {C}(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{\ell'}(\mathbb{R}^2_+)}^2). \end{align*} The estimates of $ {M}^n_2$ and $ {M}^n_3$ need the following decay rates of $\eta_2$: \begin{align*} &|\eta_2| \le C \langle y \rangle^{-1},\quad |\partial_x \eta_2| \le C \langle y \rangle^{-\ell-1},\\ &|\partial_y \eta_2 |\le C \langle y \rangle^{-2},\quad |\partial_y \partial_x \eta_2| \le C \langle y \rangle^{-\ell - 2}. \end{align*} Recall $ {M}_{2 }^n= 2 \partial_y g_{n} \eta_2 + 2g_{n} (\partial_y \eta_2 - 2\eta_2^2) - 8 \partial_y^{-1} g_{n}\, \eta_2 \partial_y \eta_2$.
We have \begin{align*} & \left|\int_{\mathbb{R}^2_+} 2g_{n} (\partial_y \eta_2 - 2\eta_2^2) \, \langle y\rangle^{2\ell'} g_{n}dx dy\right| \le C\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2,\\ &\left|\int_{\mathbb{R}^2_+} (\partial_y g_{n}) \eta_2 \, \langle y\rangle^{2\ell'}g_{n} dx dy\right| \le C\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \frac{1}{8}\| \partial_y g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2,\\ &\left|8 \int_{\mathbb{R}^2_+} \partial_y^{-1} g_{n} \eta_2 \, \partial_y \eta_2 \langle y\rangle^{2\ell'}g_{n} dx dy\right| \\ &\qquad\qquad \le C\| \langle y \rangle^{\ell'-3}\partial_y^{-1} g_n \|_{L^2}^2 + \| g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\le C\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} Altogether, we conclude \begin{align*} \left|\int_{\mathbb{R}^2_+} {M}^n_{2 } \langle y\rangle^{2\ell'}g_{n} dx dy\right| &\le C(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{\ell'}(\mathbb{R}^2_+)}^2)+ \frac{1}{8}\| \partial_y g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2, \end{align*} and exactly the same computation also gives \begin{align*} \left|\int_{\mathbb{R}^2_+} {M}^n_{3 } \langle y\rangle^{2\ell'}g_{n} dx dy\right| &\le C(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{k+\ell}(\mathbb{R}^2_+)}^2)+ \frac{\epsilon}{8}\|\partial_x g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} Now using \eqref{apriori}-\eqref{C0} and $m\ge 6$, with the same computation as above, we can get $$ \left|\int_{\mathbb{R}^2_+} {M}^n_{4 } \langle y\rangle^{2\ell'}g_{n} dx dy\right| \le C\left(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{k+\ell}(\mathbb{R}^2_+)}^2\right), $$ which finishes the proof of Lemma \ref{lemma5.2}.
\end{proof} \begin{lemma}\label{lemma5.3} Under the assumption of Proposition \ref{lemma-non-x-k-monotone-part1-2}, we have \begin{equation*} \begin{split} &\left|\int_{\mathbb{R}^2_+} {M}^n_5 \, \langle y\rangle^{2\ell'}g_{n} dx dy\right|\le \tilde{C}\left(\sum^n_{p=1}\| g_p\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{k + \ell}(\mathbb{R}^2_+)}^2\right), \end{split} \end{equation*} where $\tilde{C}$ is independent of $\epsilon$. \end{lemma} \noindent \begin{proof} Recall \begin{align*} {M}_{5 }^n & =\sum_{i\ge 4}C^i_ng_{i}\, \partial^{n + 1 -i}_x u + \sum_{1\le i\le 3}C^i_n\partial_x^i u\, g_{n+1-i }\\ &\quad+\sum_{i\ge 4}C^i_n \partial_y^{-1} g_{i} \partial^{n + 1 -i}_x w + \sum_{1\le i\le 3}C^i_n\partial_x^i w\, \partial_y^{-1} g_{n+ 1 - i } , \end{align*} where, if $n\le 3$, only the sums over $1\le i\le 3$ remain. Then, for { $\|w\|_{H^{m}_{k+\ell}}\le \zeta\le 1, m\ge 6$}, \begin{align*} &\sum_{i\ge 4}C^i_n\|g_{i}\, \partial^{n + 1 -i}_x u\|_{L^2_{\ell'}(\mathbb{R}^2_+)} + \sum_{1\le i\le 3}\|\partial_x^i u\, g_{n+1-i }\|_ {L^2_{\ell'}(\mathbb{R}^2_+)}\\ &\le \sum_{i\ge 4}C^i_n\|g_{i}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}\|\partial^{n + 1 -i}_x u\|_{L^\infty(\mathbb{R}^2_+)} \\ &\qquad+ \sum_{1\le i\le 3}C^i_n\|\partial_x^i u\|_{L^\infty(\mathbb{R}^2_+)}\, \| g_{n+1-i }\|_ {L^2_{\ell'}(\mathbb{R}^2_+)}\\ &\le C \sum_{i\ge 4}C^i_n\|g_{i}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}\|w\|_{H^{n + 3 -i}_1} \\ &\qquad+ C \sum_{1\le i\le 3}C^i_n\|w\|_{H^{i + 3}_1}\, \|g_{n+1-i }\|_ {L^2_{\ell'}(\mathbb{R}^2_+)}\\ & \le C \sum_{i=1}^{n} \| g_{i}\|_{L^2_{\ell'}}.
\end{align*} Similarly, for the second line in ${M}^n_{5}$, by Lemma \ref{lemma5.1}, we have \begin{align*} \sum_{i\ge 4}C^i_n\| (\partial^{-1}_y g_{i}) \partial^{n + 1 -i}_x w\|_{L^2_{\ell'}(\mathbb{R}^2_+)} &\le \sum_{i\ge 4}C^i_n\| \langle y \rangle^{\ell' - \ell - 1} (\partial^{-1}_y g_{i}) \|_{L^2(\mathbb{R}^2_+)} \| \langle y \rangle^{\ell + 1}\partial^{n + 1 -i}_x w\|_{L^\infty}\\ & \le C \sum_{i = 1}^{n} \|g_{i}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}. \end{align*} This proves Lemma \ref{lemma5.3}. \end{proof} \begin{lemma}\label{lemma5.4} Under the assumption of Proposition \ref{lemma-non-x-k-monotone-part1-2}, we have \begin{equation*} \begin{split} &\left|\int_{\mathbb{R}^2_+} {M}^n_6 \, \langle y\rangle^{2\ell'}g_{n} dx dy\right|\\ &\quad\le \frac 1{8m} \sum^n_{p=1}\|\partial_y g_p\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}+ \tilde{C}\left(\sum^n_{p=1}\| g_p\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{k+\ell}(\mathbb{R}^2_+)}^2\right), \end{split} \end{equation*} where $\tilde{C}$ is independent of $\epsilon$. \end{lemma} \begin{proof} Recall \begin{align*} {M}^n_{6 } & = \sum_{i=1}^{n}C^i_n g_{i}\eta_2 \partial^{n -i}_x {v}+ \sum_{i=1}^{n}C^i_n g_{i} \partial^{n+1 -i}_x {u} + \sum_{i=1}^{n}C^i_n \partial_y g_{i} \partial^{n -i}_x {v}\\ & + \sum_{i=1}^{n} \partial_y^{-1} g_{i} \left( C_n^i\partial^{n -i}_x {v} \partial_y \eta_2 + C_n^i \partial^{n+1 -i}_x {u} \eta_2\right).
\end{align*} {In ${M}_6^n$, we only study the term $\partial_y g_{1} \partial_x^{n - 1} v $ as an example; the other terms are similar. Integrating by parts, \begin{align*} \int_{\mathbb{R}^2_+} \partial_y g_{1 } \partial_x^{n - 1} v \, \langle y \rangle^{2\ell'} g_{n} dx dy & = - \int_{\mathbb{R}^2_+} g_{1 } \partial_x^{n - 1} v \, \langle y \rangle^{2\ell'} \partial_y g_{n} dx dy\\ & + \int_{\mathbb{R}^2_+} g_{1 } \partial_x^{n } u\, \langle y \rangle^{2\ell'} g_{n} dx dy, \end{align*} \begin{align*} \int_{\mathbb{R}^2_+} g_{1 } \partial_x^{n - 1} v \, \langle y \rangle^{2\ell'} \partial_y g_{n} dx dy & \le \frac{1}{8m} \|\partial_y g_{n}\|_{L^2_{\ell'}}^2 + C\| g_{1 } \partial_x^{n - 1} v \|_{L^2_{\ell'}}^2, \end{align*} \begin{align*} \| g_{1 } \partial_x^{n - 1} v \|_{L^2_{\ell'}}^2 & \le \sup_{x \in \mathbb{R}} \int_{0}^{+\infty} \langle y \rangle^{2\ell'}g_{1 }^2 dy \, \sup_{y \in \mathbb{R}_+} \int_{-\infty}^{+\infty} \left|\int_{0}^{y}\partial_x^{n} u dz\right|^2 dx\\ & \le \bigg( \|g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 + \|\partial_x g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 \bigg) \int_{-\infty}^{+\infty} \left|\int_{0}^{+\infty}|\partial_x^{n} u| dz\right|^2 dx\\ & \le C \bigg( \|g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 + \|\partial_x g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 \bigg)\\ &\qquad \times \int_{-\infty}^{+\infty} \left|\int_{0}^{+\infty}\langle y \rangle^{- k - \ell + 1} \, \langle y \rangle^{k + \ell - 1}|\partial_x^{n} u| dz\right|^2 dx\\ & \le C \bigg( \|g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 + \| g_{2 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 + \|w\|_{H^m_{k +\ell}}^2\bigg)\\ & \qquad\times \int_{-\infty}^{+\infty} \left|\int_{0}^{+\infty}\langle y \rangle^{- k - \ell + 1} \, \langle y \rangle^{k + \ell - 1}|\partial_x^{n} u| dz\right|^2 dx\\ & \le C\sum_{i=1}^{2}\|g_{i}\|_{L^2_{\ell'}}^2 + C \|w\|_{H^m_{k +\ell}}^2. 
\end{align*} Here we have used Lemma \ref{lemma5.1} and \[ k + \ell - 1 > \frac{1}{2},~~ \| w\|_{H^m_{k + \ell}} \le 1, \] and \[ \partial_x g_j = g_{j+1} - g_j \eta_1 - \partial_y^{-1} g_{j} \cdot \partial_y \eta_1. \] } The remaining terms are handled by similar arguments, which completes the proof of this lemma. \end{proof} \section{Existence of the solution}\label{section7} Now, we can conclude the following energy estimate for the sequence of approximate solutions. \begin{theorem}\label{energy} Assume $u^s$ satisfies Lemma \ref{shear-profile}. Let $m\ge 6$ be an even integer, $k + \ell >\frac{3}{2}$, $0 < \ell<\frac12$, $\frac{1}{2}<\ell' < \ell + \frac{1}{2}$, and $\tilde{u}_{0}\in H^{m+3}_{k + \ell'-1}(\mathbb{R}^2_+)$ which satisfies the compatibility conditions \eqref{compatibility-a1}-\eqref{compatibility-a2}. Suppose that $\tilde{w}_\epsilon \in L^\infty ([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$ is a solution to \eqref{shear-prandtl-approxiamte-vorticity} such that \begin{equation*} \|\tilde{w}_\epsilon\|_{ L^\infty ([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))}\le \zeta \end{equation*} with $$ 0<\zeta\le1, \quad C_m\zeta\le \frac{\tilde c_1}{2}, $$ where $0<T\le T_1$, $T_1$ is the lifespan of the shear flow $u^s$ in Lemma \ref{shear-profile}, and $C_m$ is the Sobolev embedding constant in \eqref{C0}. Then there exists $C_T>0$ such that \begin{equation}\label{energy estimate-A} \|\tilde{w}_\epsilon\|_{ L^\infty ([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))}\le C_T\|\tilde{u}_{0}\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}, \end{equation} where $C_T>0$ is increasing with respect to $0<T\le T_1$ and independent of $0<\epsilon\le 1$. \end{theorem} Firstly, we collect some results to be used from Sections \ref{section3}--\ref{section5}. We come back to the notations with tilde and the sub-index $\epsilon$. Then $g^\epsilon_m, h^\epsilon_m$ are the functions defined by $\tilde{u}_\epsilon$. 
Under the hypothesis of Theorem \ref{energy}, we have proven the estimates \eqref{approx-less-k} and \eqref{uniform-part2-2}: \begin{equation} \label{approx-less-k-b} \begin{split} \frac{d}{ dt}\| \tilde{w}_\epsilon\|_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}^2 &+ \|\partial_y\tilde{w}_\epsilon\|_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}^2\\ &+ \epsilon\|\partial_x\tilde{w}_\epsilon\|_{H^{m, m- 1}_{k+\ell}(\mathbb{R}^2_+)}^2 \le C_1 \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2, \end{split} \end{equation} \begin{equation} \label{uniform-part2-1b} \begin{split} \frac{d}{dt}\sum^m_{n=1}\| g^\epsilon_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 & + \sum^m_{n=1}\| \partial_y g^\epsilon_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \epsilon \sum^m_{n=1}\| \partial_x g^\epsilon_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\ & \le C_2(\sum^m_{n=1}\| g^\epsilon_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2)\,. \end{split} \end{equation} \begin{lemma}\label{lemma-initial} For the initial data, we have \begin{align*} T^\epsilon_m(g, w)(0)&= \sum^m_{n=1}\| g^\epsilon_n(0)\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2+\| \tilde{w}_\epsilon(0)\|_{H^{m, m -1}_{k+\ell}(\mathbb{R}^2_+)}^2\\ &\le C\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2, \end{align*} where $C$ is independent of $\epsilon$. 
\end{lemma} \begin{proof} Notice that, for any $1\le n\le m$, \[ g^\epsilon_n = \big(\frac{\partial_x^n \tilde{u}_\epsilon}{u^s_y + \tilde{w}_\epsilon}\big)_y = \frac{\partial_x^n\partial_y \tilde{u}_\epsilon}{u^s_y + \tilde{w}_\epsilon} - \frac{\partial_x^n \tilde{u}_\epsilon}{u^s_y + \tilde{w}_\epsilon} \eta_2, \] and $\tilde{u}_\epsilon(0)=\tilde{u}_0$; then we deduce, for any $1\le n\le m$, \begin{align*} &\| g^\epsilon_n(0)\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\le 2\left\| \frac{\partial_x^n\partial_y \tilde{u}_0}{u^s_{0, y} + \tilde{w}_0}\right\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2+2\left\| \frac{\partial_x^n\tilde{u}_0}{u^s_{0, y} + \tilde{w}_0}\eta_2(0)\right\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\ &\le C\big(\| \partial_x^n\partial_y \tilde{u}_0\|_{L^2_{k+\ell'}(\mathbb{R}^2_+)}^2+\|\partial_x^n\tilde{u}_0\|_{L^2_{k+\ell'-1}(\mathbb{R}^2_+)}^2\big) \le C\|\tilde{u}_0\|_{H^{m+1}_{k+\ell'-1}(\mathbb{R}^2_+)}^2. \end{align*} \end{proof} From \eqref{approx-less-k-b} and \eqref{uniform-part2-1b}, we have \begin{align} \label{uniform-full-1b} \begin{split} & \| g^\epsilon_m\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \| \tilde{w}_\epsilon\|_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}^2 \\ & \le C_8 e^{C_2 t}\int^t_0e^{-C_2 \tau}\| \tilde{w}_\epsilon(\tau)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 d\tau+C_9e^{C_2 t}\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2 . \end{split} \end{align} \begin{lemma}\label{lemma-g-h-w} We also have the following estimate: $$ \|\partial^m_x\tilde{w}_\epsilon\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}^2\le \tilde C \|{g}_m^\epsilon\|_{L^2_{\ell'}}^2, $$ where $\tilde C$ is independent of $\epsilon$. 
\end{lemma} \begin{proof} By definition, \[ \partial_x^m \tilde{u}_\epsilon(t, x, y) = ( u^s_y+ \tilde{w}_\epsilon)\int_{0}^{y} g^\epsilon_m(t, x, \tilde y) d\tilde y,\quad y\in \mathbb{R}_+. \] Therefore, \[ \partial_x^m \tilde{w}_\epsilon = ( u^s_{yy}+(\tilde{w}_\epsilon)_y)\int_{0}^{y} g^\epsilon_m(t, x, \tilde y) d\tilde y + ( u^s_y+ \tilde{w}_\epsilon) g^\epsilon_m(t, x, y) ,\,\,~~ y \ge 0 , \] and \begin{align*} \| \partial_x^m \tilde{w}_\epsilon\|_{L^2_{k + \ell}}^2 & \le C \int_{\mathbb{R}_+^2}\langle y \rangle^{2\ell - 2} \bigg( \int_{0}^{y} g_m^\epsilon(t,x, z) dz \bigg)^2 dx dy + \|g_m^\epsilon(t)\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2\\ & \le C \|g_m^\epsilon(t)\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2, \end{align*} where we have used $\ell - 1 < - \frac{1}{2}$ and $\frac12 < \ell'$. \end{proof} \begin{proof}[{\bf End of proof of Theorem \ref{energy}}] Combining \eqref{uniform-full-1b}, Lemma \ref{lemma-initial} and Lemma \ref{lemma-g-h-w}, we get, for any $t\in ]0, T]$, \begin{align*} \begin{split} \|\tilde{w}_\epsilon(t)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}^2\le & \tilde C_8 e^{C_2 t}\int^t_0e^{-C_2 \tau}\| \tilde{w}_\epsilon(\tau)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 d\tau\\ &+\tilde C_9e^{C_2 t}\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2, \end{split} \end{align*} with $\tilde C_8, \tilde C_9$ independent of $0<\epsilon\le 1$. By Gr\"onwall's inequality, we have, for any $t\in ]0, T]$, $$ \|\tilde{w}_\epsilon(t)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}^2\le \tilde C_9e^{(C_2+\tilde C_8) t}\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2. $$ So it is enough to take \begin{align}\label{bound-2} C^2_T=\tilde C_9e^{(C_2+\tilde C_8) T} \end{align} which gives \eqref{energy estimate-A}; moreover, $C_T$ is increasing with respect to $T$. This finishes the proof of Theorem \ref{energy}. 
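For the reader's convenience, the Gr\"onwall step above can be spelled out as follows (a standard integral form, stated here with generic constants $a, b, A\ge 0$): if a continuous function $\phi\ge 0$ satisfies \[ \phi(t)\le A\, e^{a t}+ b\, e^{a t}\int_0^t e^{-a \tau}\phi(\tau)\, d\tau, \qquad t\in [0, T], \] then, applying the classical Gr\"onwall lemma to $\psi(t)=e^{-a t}\phi(t)$, \[ \phi(t)\le A\, e^{(a+b) t}, \qquad t\in [0, T]. \] Here one takes $\phi(t)=\|\tilde{w}_\epsilon(t)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}^2$, $a=C_2$, $b=\tilde C_8$ and $A=\tilde C_9\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2$.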
\end{proof} \begin{theorem}\label{uniform-existence} Assume $u^s$ satisfies Lemma \ref{shear-profile}, and let $\tilde{u}_{0}\in H^{m+3}_{k + \ell'-1}(\mathbb{R}^2_+)$, $m\ge 6$ be an even integer, $k>1, 0<\ell<\frac12,~~ \frac{1}{2}<\ell' < \ell + \frac{1}{2},~k+\ell>\frac 32$, and $$ 0<\zeta\le 1\,\,\,\mbox{with}\,\,\, C_m\zeta\le \frac{\tilde c_1}{2}, $$ where $C_m$ is the Sobolev embedding constant. Then there exists $\zeta_0>0$ small enough such that, if \begin{equation*} \|\tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}\le \zeta_0, \end{equation*} then there exists $\epsilon_0>0$ such that, for any $0<\epsilon\le \epsilon_0$, the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $\tilde{w}_\epsilon$ with $$ \|\tilde{w}_\epsilon\|_{L^\infty ([0, T_1]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta, $$ where $T_1$ is the lifespan of the shear flow $u^s$ in Lemma \ref{shear-profile}. \end{theorem} \begin{remark}\label{remark7.1} Under the uniform monotonicity assumption \eqref{shear-critical-momotone}, the results of the above theorem hold for any fixed $T>0$. But $\zeta_0$ decreases as $T$ increases, according to \eqref{c-tilde}. \end{remark} \begin{proof} For any $\tilde{w}_{0}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$, Theorem \ref{theorem3.1} ensures that there exist $\epsilon_0>0$ and, for any $0<\epsilon\le \epsilon_0$, a time $T_\epsilon>0$ such that the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $\tilde{w}_\epsilon \in L^\infty ([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$ \|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \frac 43 \|\tilde{w}_\epsilon(0)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}\le 2 \|\tilde{u}_0\|_{H^{m+1}_{k+\ell-1}(\mathbb{R}^2_+)}. $$ Now choose $\zeta_0$ such that \begin{equation*} \max\{2, C_{T_1}\} \zeta_0\le \frac{\zeta}{2}. 
\end{equation*} On the other hand, taking $\tilde{w}_\epsilon(T_\epsilon)$ as initial data for the system \eqref{shear-prandtl-approxiamte-vorticity}, Theorem \ref{theorem3.1} ensures that there exists $T'_\epsilon>0$, which is defined by \eqref{time-1} with $\bar\zeta=\frac{\zeta}{2}$, such that the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $\tilde{w}'_\epsilon \in L^\infty ([T_\epsilon, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$ \|\tilde{w}'_\epsilon\|_{L^\infty ([T_\epsilon, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \frac 43 \|\tilde{w}_\epsilon(T_\epsilon)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}\le \zeta. $$ Now we extend $\tilde{w}_\epsilon$ to $[0, T_\epsilon+T'_\epsilon]$ by $\tilde{w}'_\epsilon$ and obtain a solution $\tilde{w}_\epsilon \in L^\infty ([0, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$ \|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta. $$ So if $T_\epsilon+T'_\epsilon<T_1$, we can apply Theorem \ref{energy} to $\tilde{w}_\epsilon$ with $T=T_\epsilon+T'_\epsilon$ and use \eqref{energy estimate-A}; this gives $$ \|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le C_{T_1} \|\tilde{u}_0\|_{H^{m+1}_{k+\ell-1}(\mathbb{R}^2_+)}\le \frac{\zeta}{2}. 
$$ Now taking $\tilde{w}_\epsilon(T_\epsilon+T'_\epsilon)$ as initial data for the system \eqref{shear-prandtl-approxiamte-vorticity} and applying again Theorem \ref{theorem3.1}, for the same $T'_\epsilon>0$, the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $\tilde{w}'_\epsilon \in L^\infty ([T_\epsilon+T'_\epsilon, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$ \|\tilde{w}'_\epsilon\|_{L^\infty ([T_\epsilon+T'_\epsilon, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \frac 43 \|\tilde{w}_\epsilon(T_\epsilon+T'_\epsilon)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}\le \zeta. $$ Now we extend $\tilde{w}_\epsilon$ to $[0, T_\epsilon+2T'_\epsilon]$ by $\tilde{w}'_\epsilon$ and obtain a solution $\tilde{w}_\epsilon \in L^\infty ([0, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$ \|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta. $$ So if $T_\epsilon+2T'_\epsilon<T_1$, we can apply Theorem \ref{energy} to $\tilde{w}_\epsilon$ with $T=T_\epsilon+2T'_\epsilon$ and use \eqref{energy estimate-A}; this gives again $$ \|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le C_{T_1} \|\tilde{u}_0\|_{H^{m+1}_{k+\ell-1}(\mathbb{R}^2_+)}\le \frac{\zeta}{2}. $$ Then, by induction, we can extend the solution $\tilde{w}_\epsilon$ to $[0, T_1]$; hence the lifespan of the approximate solution equals that of the shear flow provided the initial data $\tilde{u}_0$ is small enough. \end{proof} \label{sec-exi} We have obtained the following estimate, for $m\ge 6$ and $0<\epsilon\le\epsilon_0$, \begin{align*} \|\tilde{w}_\epsilon(t)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)} \le \zeta,\quad t \in [0, T_1]. 
\end{align*} By using the equation \eqref{shear-prandtl-approxiamte-vorticity} and the Sobolev inequality, we get, for $0<\delta<1$, \[ \|\tilde{w}_\epsilon\|_{Lip ([0, T_1]; C^{2, \delta}(\mathbb{R}^2_+))}\le M<+\infty. \] Then, taking a subsequence, we have, for $0<\delta'<\delta$, \[ \tilde{w}_\epsilon \to \tilde{w}\,\,({\epsilon\,\to\,0}),\,\, \text{locally strongly in }~~C^0([0, T_1]; C^{2, \delta'}(\mathbb{R}^2_+))\,, \] and \[ \partial _t \tilde{w} \in L^\infty ([0, T_1]; H^{m-2}_{k+\ell}(\mathbb{R}^2_+)),\quad \tilde{w} \in L^\infty ([0, T_1]; H^{m}_{k+\ell}(\mathbb{R}^2_+)), \] with \begin{align*} \|\tilde{w}\|_{L^\infty ([0, T_1]; H^{m}_{k+\ell}(\mathbb{R}^2_+))} \le \zeta. \end{align*} Then we have \begin{align*} \tilde{u}=\partial^{-1}_y \tilde{w}\in L^\infty([0, T_1]; H^{m}_{k+\ell-1}(\mathbb{R}^2_+)), \end{align*} where we use the Hardy inequality \eqref{Hardy1}, since $$ \lim_{y\to+\infty} \tilde{u}(t, x, y)=-\lim_{y\to+\infty} \int^{+\infty}_y \tilde{w}(t, x, \tilde{y} )d\tilde{y}=0. $$ In fact, we also have $$ \lim_{y\to 0} \tilde{u}(t, x, y)=\lim_{y\to 0} \int^y_0 \tilde{w}(t, x, \tilde{y} )d\tilde{y}=0. $$ Using the condition $k+\ell-1>\frac 12$, we also have $$ \tilde{v}=-\int^y_0 \tilde{u}_x\, d \tilde{y} \in L^\infty ([0, T_1]; L^\infty(\mathbb{R}_{+, y}; H^{m-1}(\mathbb{R}_x))). $$ We have proven that $\tilde{w}$ is a classical solution to the following vorticity Prandtl equation \begin{align*} \begin{cases} & \partial_t\tilde{w} + (u^s + \tilde{u}) \partial_x\tilde{w} + \tilde{v} \partial_y(u^s_y+\tilde{w}) = \partial^2_y\tilde{w},\\ & \partial_y \tilde{w}|_{y=0} = 0,\\ & \tilde{w}|_{t=0} = \tilde{w}_0, \end{cases} \end{align*} and $(\tilde{u}, \tilde{v})$ is a classical solution to \eqref{non-shear-prandtl}. Finally, $(u, v)=(u^s+\tilde{u}, \tilde{v})$ is a classical solution to \eqref{full-prandtl}, and satisfies \eqref{main-energy}. In conclusion, we have proved the following theorem, which is the existence part of the main Theorem \ref{main-theorem}. 
\begin{theorem}\label{main-theorem-bis} Let $m\ge 6$ be an even integer, $k>1, 0< \ell<\frac12,~ \frac 12< \ell' < \ell+ \frac 12,~ k+\ell>\frac 32$, assume that $u^s_0$ satisfies \eqref{shear-critical-momotone}, the initial data $\tilde{u}_0 \in H^{m+3}_{k + \ell' -1 }(\mathbb{R}^2_+)$, and $\tilde{u}_0 $ satisfies the compatibility conditions \eqref{compatibility-a1}-\eqref{compatibility-a2} up to order $m+2$. Then there exists $T>0$ such that, if \begin{equation*} \|\tilde{u}_0 \|_{H^{m+1}_{k + \ell' -1 }(\mathbb{R}^2_+)}\le \delta_0, \end{equation*} for some $\delta_0>0$ small enough, the initial-boundary value problem \eqref{non-shear-prandtl} admits a solution $(\tilde{u}, \tilde{v})$ with \begin{align*} &\tilde{u}\in L^\infty([0, T]; H^{m}_{k+\ell-1}(\mathbb{R}^2_+)),\quad \partial_y\tilde{u}\in L^\infty([0, T]; H^{m}_{k+\ell}(\mathbb{R}^2_+)). \end{align*} Moreover, we have the following energy estimate: \begin{align}\label{main-energy} \begin{split} \|\partial_y\tilde{u}\|_{L^\infty([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))} \le C\|\tilde{u}_0 \|_{ H^{m+1}_{k + \ell' -1 }(\mathbb{R}^2_+)}. \end{split} \end{align} \end{theorem} \section{Uniqueness and stability}\label{section8} Now we study the stability of solutions, which immediately implies the uniqueness of the solution. Let $\tilde{u}^1, \tilde{u}^2$ be two solutions obtained in Theorem \ref{main-theorem-bis} with respect to the initial data $\tilde{u}^1_0, \tilde{u}^2_0$ respectively. Denote $\bar u = \tilde{u}^1 - \tilde{u}^2$ and $\bar v= \tilde{v}^1-\tilde{v}^2$; then \begin{equation*} \begin{cases} \partial_t \bar{u} + (u^s + \tilde{u}_1)\partial_x \bar{u} + (u^s_y + \tilde{u}_{1, y})\bar{v} = \partial^2_y \bar{u} - \tilde{v}_2 \partial_y\bar{u} -(\partial_x\tilde{u}_2) \bar{u},\\ \partial_x \bar{u}+\partial_y\bar{v}=0,\\ \bar{u}|_{y=0}=\bar{v}|_{y=0}=0,\\ \bar{u}|_{t=0}=\tilde{u}^1_0 - \tilde{u}^2_0 . \end{cases} \end{equation*} So this is a linear system for $(\bar{u}, \bar{v})$. 
We also have, for the vorticity $\bar w= \partial_y \bar u$, \begin{equation}\label{stability-2} \begin{cases} \partial_t \bar{w} + (u^s + \tilde{u}_1)\partial_x \bar{w} + (u^s_{yy} + \tilde{w}_{1, y})\bar{v} = \partial^2_y \bar{w} - \tilde{v}_2 \partial_y\bar{w} -(\partial_x\tilde{w}_2) \bar{u},\\ \partial_y\bar{w}|_{y=0}=0,\\ \bar{w}|_{t=0}=\tilde{w}^1_0 - \tilde{w}^2_0 . \end{cases} \end{equation} \noindent {\bf Estimate with a loss of $x$-derivative.} Firstly, for the vorticity $\bar w=\partial_y \bar u$, we deduce an energy estimate with a loss of $x$-derivative in the anisotropic norm defined by \eqref{norm-1}. \begin{proposition}\label{prop8.1} Let $\tilde{u}^1, \tilde{u}^2$ be two solutions obtained in Theorem \ref{main-theorem-bis} with respect to the initial data $\tilde{u}^1_0, \tilde{u}^2_0$ respectively. Then we have \begin{equation} \label{w-bar-less-k} \begin{split} \frac{d}{ dt}\| \bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}^2+ \|\partial_y\bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}^2\le \bar C_1\| \bar{w}\|_{H^{m-2}_{k+\ell}}^2, \end{split} \end{equation} where the constant $\bar C_1$ depends on the norm of $\tilde{w}^1, \tilde{w}^2$ in $L^\infty([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))$. \end{proposition} \begin{proof} The proof of this Proposition is similar to the proof of Proposition \ref{prop3.1}, and we need to use that $m-2$ is even. We only give the calculation for the terms which need a different argument. Moreover, we also explain why we only get the estimate on $\|\bar{w}\|_{H^{m-2}_{k+\ell}}^2$ but require the norm of $\tilde{w}^1, \tilde{w}^2$ in $L^\infty([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))$. Without loss of generality, we suppose that $\|\bar{w}\|_{H^{m-2}_{k+\ell}}\le 1, \|\tilde{w}^1\|_{H^{m}_{k+\ell}}\le 1$ and $\|\tilde{w}^2\|_{H^{m}_{k+\ell}}\le 1$. 
Applying $\partial^\alpha=\partial^{\alpha_1}_x\partial^{\alpha_2}_y$ to the equation \eqref{stability-2}, for $|\alpha|=\alpha_1+\alpha_2\le m-2, \alpha_1\le m-3$, we get \begin{align}\label{8.1} \begin{split} &\partial_t \partial^{\alpha} \bar{w} - \partial_y^2 \partial^{\alpha}\bar{w}= - \partial^{\alpha} \big((u^s + \tilde{u}_1)\partial_x \bar{w}+ \tilde{v}_2 \partial_y\bar{w}\\ &\qquad\qquad+( u^s_{yy}+ \tilde{w}_{1, y} )\bar{v} +(\partial_x\tilde{w}_2) \bar{u} \big). \end{split} \end{align} Multiplying the above equation with $ \langle y \rangle^{2(k + \ell+{\alpha_2})} \partial^{\alpha} \bar{w}$, the same computation as in the proof of Proposition \ref{prop3.1} (in particular, the reduction of the boundary data is the same) gives \begin{align*} \begin{split} & \int_{\mathbb{R}^2_+} \bigg(\partial_t \partial^{\alpha} \bar{w} - \partial_y^2\partial^{\alpha} \bar{w} \bigg) \langle y \rangle^{2 (k+\ell+\alpha_2)} \partial^{\alpha} \bar{w} dx dy\\ & \ge\frac 12 \frac{d}{dt}\|\partial^{\alpha}\bar{w}\|_{L^2_{k+\ell+\alpha_2}}^2+ \frac{3}{4} \|\partial_y \bar{w}\|_{H^{m-2, m-3}_{k+\ell}}^2 - C\|\bar{w}\|_{H^{m-2}_{k+\ell}}^2. \end{split} \end{align*} As for the right-hand side of \eqref{8.1}, we split the first term into two parts: \begin{align*} - \partial^{\alpha} \bigg((u^s + \tilde{u}_1)\partial_x \bar{w}\bigg) = - (u^s + \tilde{u}_1)\partial_x \partial^{\alpha} \bar{w} + [ (u^s + \tilde{u}_1), \partial^{\alpha}]\partial_x \bar{w}. \end{align*} Firstly, we have \begin{align*} \left|\int_{\mathbb{R}^2_+} \big((u^s + \tilde{u}_1) \partial_x \partial^{\alpha} \bar{w}\big)\langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \bar{w} dx dy\right| \le \| \tilde{w}_1\|_{H^3_1}\|\partial^{\alpha} \bar{w}\|_{L^2_{k+\ell+{\alpha_2}}}^2. 
\end{align*} For the commutator term, we have \begin{align*} \| [ (u^s + \tilde{u}_1), \partial^{\alpha}]\partial_x \bar{w}\|_{L^2_{k+\ell+\alpha_2}}&\le C \|\tilde{w}_1\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}\|\bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}. \end{align*} Notice that this term involves no loss of $x$-derivative. By a similar method, for the term $\tilde{v}_2 \partial_y\bar{w}$ we get \begin{align*} \left|\int_{\mathbb{R}^2_+} \partial^{\alpha}\big(\tilde{v}_2 \partial_y\bar{w}\big)\langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \bar{w} dx dy\right| &\le C\| \tilde{w}_2\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}\| \bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}^2. \end{align*} For the next term, we have $$ \partial^{\alpha} \bigg(( u^s_{yy}+\partial_y\tilde{w}_1) \bar{v} \bigg) = \sum\limits_{ \beta \le \alpha } C^\alpha_\beta\, \partial^{\beta} ( u^s_{yy}+\partial_y\tilde{w}_1) \partial^{\alpha - \beta}\bar{v}, $$ and thus \begin{align*} &\left\|\sum\limits_{ \beta \le \alpha, 1\le |\beta|<|\alpha| } C^\alpha_\beta\, \partial^{\beta} ( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{\alpha - \beta}\bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}\\ &\qquad\le C\| \tilde{w}_1\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}\| \bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}. 
\end{align*} On the other hand, using Lemma \ref{inequality-hardy} and $\frac 32 -k<\ell<\frac 12$, \begin{align*} &\left\|\big(\partial^{\alpha} ( u^s_{yy}+\partial_y\tilde{w}_1)\big)\bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}\le \left\|\big(\partial^{\alpha} u^s_{yy}\big)\bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}+ \left\|\big(\partial^{\alpha} \partial_y\tilde{w}_1\big)\bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}\\ &\qquad\qquad\le C \left\|\bar{v}\right\|_{L^2(\mathbb{R}_x; L^\infty(\mathbb{R}_+))}+ C\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}\left\|\bar{v}\right\|_{L^\infty(\mathbb{R}^2_+)}\\ &\qquad\qquad\le C \left\|\bar{u}_x\right\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}+ C\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}(\left\|\bar{u}_x\right\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)} +\left\|\bar{u}_{xx}\right\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)})\\ &\qquad\qquad\le C(1+\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)})\left\|\bar{w}\right\|_{H^2_{\frac 12+\delta}(\mathbb{R}^2_+)}\\ &\qquad\qquad\le C(1+\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)})\left\|\bar{w}\right\|_{H^2_{k+\ell}(\mathbb{R}^2_+)}. \end{align*} So this term requires the norm $\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}$. 
Moreover, if $\alpha_2\not =0$, \begin{align*} &\left\|( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{\alpha} \bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}= \left\|( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{\alpha_1}_x\partial^{\alpha_2-1}_y \bar{u}_x\right\|_{L^2_{k+\ell+{\alpha_2}}}\\ &\qquad\qquad\le C (1+\| \tilde{w}_1\|_{H^{m-1}_{k+\ell}(\mathbb{R}^2_+)})\left\|\bar{w} \right\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}, \end{align*} and also if $\alpha_2 =0$, \begin{align*} &\left\|( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{\alpha_1}_x \bar{v}\right\|_{L^2_{k+\ell}}= \left\|( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{-1}_y\partial^{\alpha_1}_x \bar{u}_x\right\|_{L^2_{k+\ell}}\\ &\qquad\qquad\le C (1+\| \tilde{w}_1\|_{H^{m-1}_{k+\ell}(\mathbb{R}^2_+)})\left\|\partial^{\alpha_1+1}_x\bar{w}\right\|_{L^2_{\frac 32+\delta}(\mathbb{R}^2_+)}. \end{align*} These two cases imply the loss of $x$-derivative. A similar argument also gives \begin{align*} \left|\int_{\mathbb{R}^2_+} \partial^{\alpha}\big((\partial_x\tilde{w}_2) \bar{u}\big)\langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \bar{w} dx dy\right| \le C \| \tilde{w}_2\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\| \bar{w}\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}^2, \end{align*} which finishes the proof of Proposition \ref{prop8.1}. \end{proof} \noindent {\bf Estimate on the loss term.} To close the estimate \eqref{w-bar-less-k}, we need to study the term $\|\partial^{m-2}_x \bar w\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}$, which is missing from its left-hand side. Similar to the argument in Section \ref{section7}, we will recover this term via estimates of the functions \begin{align*} \bar g_n& = \left( \frac{\partial_x^n \bar{u}}{u^s_y + \tilde u_{1,y}} \right)_y, \quad \forall (t, x, y)\in [0, T]\times \mathbb{R} \times \mathbb{R}^+. 
\end{align*} \begin{proposition} \label{prop8.2b} Let $\tilde{u}^1, \tilde{u}^2$ be two solutions obtained in Theorem \ref{main-theorem-bis} with respect to the initial data $\tilde{u}^1_0, \tilde{u}^2_0$ respectively. Then we have \begin{equation*} \begin{split} \frac{d}{dt}\sum^{m-2}_{n=1}\| \bar g_n \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 & + \sum^{m-2}_{n=1}\| \partial_y \bar g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\ & \le \bar C_2(\sum^{m-2}_{n=1}\| \bar g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \|\bar {w}\|_{H^{m-2}_{k+\ell}}^2), \end{split} \end{equation*} where the constant $\bar C_2$ depends on the norm of $\tilde{w}^1, \tilde{w}^2$ in $L^\infty([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))$. \end{proposition} This Proposition can be proven by using exactly the same calculation as in Section \ref{section5}. The only difference is that, when we use the Leibniz formula, for the term where the order of derivatives is $|\alpha|=m-2$, it acts on the coefficient which depends on $\tilde{u}^1, \tilde{u}^2$. Therefore, we need their norms at the order $(m-2)+1$. So we omit the proof of this Proposition here. By an argument similar to the proof of Theorem \ref{energy}, we get \begin{equation*} \|\bar{w}\|_{ L^\infty ([0, T]; H^{m-2}_{k+\ell}(\mathbb{R}^2_+))}\le C \|\bar{u}_{0}\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}, \end{equation*} which finishes the proof of Theorem \ref{main-theorem}. \appendix \section{Some inequalities} We will use the following Hardy type inequalities. \begin{lemma}\label{inequality-hardy} Let $f : \mathbb{R} \times \mathbb{R}^+\to \mathbb{R}$. 
Then \begin{itemize} \item[(i)] if $\lambda > - \frac{1}{2}$ and $ \lim\limits_{y \to \infty} f(x,y) = 0$, then \begin{equation} \label{Hardy1} \|\langle y \rangle^\lambda f\|_{L^2 (\mathbb{R}^2_+)} \le C_\lambda \|\langle y \rangle^{\lambda +1} \partial_y f\|_{L^2 (\mathbb{R}^2_+)}; \end{equation} \item[(ii)] if $-1 \le \lambda < - \frac{1}{2}$ and $f(x, 0) = 0$, then \begin{equation*} \|\langle y \rangle^\lambda f\|_{L^2 (\mathbb{R}^2_+)} \le C_\lambda \| \langle y \rangle^{\lambda + 1} \partial_y f \|_{L^2 (\mathbb{R}^2_+)}. \end{equation*} \end{itemize} Here $C_\lambda \to +\infty$ as $\lambda \to -\frac 12$. \end{lemma} We need the following trace theorem in the weighted Sobolev space. \begin{lemma}\label{lemma-trace} Let $\lambda>\frac 12$. Then there exists $C>0$ such that, for any function $f$ defined on $\mathbb{R}^2_+$ with $\partial_y f\in L^2_{\lambda}(\mathbb{R}^2_+)$, $f$ admits a trace on $\mathbb{R}_x\times\{0\}$ which satisfies $$ \|\gamma_0(f)\|_{L^2 (\mathbb{R}_x)}\le C \|\partial_y f\|_{L^2_\lambda(\mathbb{R}^2_+)}, $$ where $\gamma_0(f)(x)=f(x, 0)$ is the trace operator. \end{lemma} The proof of the above two Lemmas is elementary, so we leave it to the reader. We also use the following Sobolev inequality and algebraic properties of $H^m_{k+\ell}(\mathbb{R}^2_+)$. \begin{lemma}\label{lemma2.4} For suitable functions $f, g$, we have: \noindent 1) If the function $f$ satisfies $f(x, 0) = 0$ or $\lim_{y \to +\infty}f(x, y)=0$, then for any small $\delta>0$, \begin{equation}\label{sobolev-1} \begin{split} \|f\|_{L^\infty(\mathbb{R}^2_+)}\le C(\| f_y\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}+\| f_{x y}\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}). 
\end{split} \end{equation} 2) For $m\ge 6, k+\ell>\frac 32$, and any $\alpha, \beta\in \mathbb{N}^2$ with $|\alpha|+|\beta|\le m$, we have \begin{equation}\label{sobolev-2} \begin{split} \|(\partial^\alpha f)(\partial^\beta g)\|_{L^2_{k+\ell+\alpha_2+\beta_2}(\mathbb{R}^2_+)}\le C\|f\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\|g\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}. \end{split} \end{equation} 3) For $m\ge 6, k+\ell>\frac 32$, and any $\alpha\in \mathbb{N}^2, p\in\mathbb{N}$ with $|\alpha|+p\le m$, we have $$ \|(\partial^\alpha f)(\partial^p_x (\partial^{-1}_y g))\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}\le C\|f\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\|g\|_{H^m_{\frac 12+\delta}(\mathbb{R}^2_+)}, $$ where $\partial^{-1}_y$ is the inverse of the derivative $\partial_y$, meaning $\partial^{-1}_y g=\int^y_0 g(x, \tilde y) \, d\tilde{y}$. \end{lemma} \begin{proof} For (1), using $f(x, 0) = 0$, we have \begin{equation*} \begin{split} \|f\|_{L^\infty(\mathbb{R}^2_+)}&= \left\|\int^y_0 (\partial_y f)(x, \tilde y) \, d\tilde{y}\right\|_{L^\infty(\mathbb{R}^2_+)} \le C\|\partial_y f\|_{L^\infty(\mathbb{R}_x; L^2_{\frac 12+\delta}(\mathbb{R}_+))}\\ &\le C(\| \partial_y f\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}+\| \partial_x\partial_y f\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}). \end{split} \end{equation*} If $\lim_{y\to+\infty} f(x, y)=0$, we use $$ f(x, y)=-\int^\infty_y (\partial_y f)(x, \tilde y) \, d\tilde{y}. $$ For (2), firstly, $m\ge 6$ and $|\alpha|+|\beta|\le m$ imply that $|\alpha|\le m-2$ or $|\beta|\le m-2$; without loss of generality, we suppose that $|\alpha|\le m-2$. 
Then, using the conclusion of (1), we have \begin{equation*} \begin{split} \|(\partial^\alpha f)(\partial^\beta g)\|_{L^2_{k+\ell+\alpha_2+\beta_2}(\mathbb{R}^2_+)} &\le \|\langle y\rangle^{\alpha_2}(\partial^\alpha f)\|_{L^\infty(\mathbb{R}^2_+)}\|\partial^\beta g\|_{L^2_{k+\ell+\beta_2}(\mathbb{R}^2_+)}\\ &\le C\|f\|_{H^{|\alpha|+2}_{\frac 12 +\delta}(\mathbb{R}^2_+)}\|\partial^\beta g\|_{L^2_{k+\ell+\beta_2}(\mathbb{R}^2_+)}, \end{split} \end{equation*} which gives \eqref{sobolev-2}. For (3), if $|\alpha|\le m-2$, we have \begin{equation*} \begin{split} \|(\partial^\alpha f)&(\partial^p_x (\partial^{-1}_y g))\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}\\ &\le \|\langle y\rangle^{k+\ell+\alpha_2}(\partial^\alpha f)\|_{L^2(\mathbb{R}_{y, +}; L^\infty(\mathbb{R}_x))}\|\partial^p_x (\partial^{-1}_y g)\|_{L^\infty(\mathbb{R}_{y, +}; L^2(\mathbb{R}_x))}\\ &\le C\|f\|_{H^{|\alpha|+2}_{k+\ell}(\mathbb{R}^2_+)}\|\partial^p_x g\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}. \end{split} \end{equation*} If $p\le m-2$, we have \begin{equation*} \begin{split} \|(\partial^\alpha f)&(\partial^p_x (\partial^{-1}_y g))\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}\\ &\le \|\langle y\rangle^{k+\ell+\alpha_2}(\partial^\alpha f)\|_{L^2(\mathbb{R}^2_+)}\|\partial^p_x (\partial^{-1}_y g)\|_{L^\infty(\mathbb{R}^2_+)}\\ &\le C\|f\|_{H^{|\alpha|}_{k+\ell}(\mathbb{R}^2_+)}\|\partial^p_x g\|_{L^\infty(\mathbb{R}_x; L^2_{\frac 12+\delta}(\mathbb{R}_{y, +}))}\\ &\le C\|f\|_{H^{|\alpha|}_{k+\ell}(\mathbb{R}^2_+)}\|g\|_{H^m_{\frac 12+\delta}(\mathbb{R}^2_+)}. \end{split} \end{equation*} We have completed the proof of the Lemma. 
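For completeness, we also sketch the elementary one-dimensional computation behind the Hardy inequality \eqref{Hardy1} (case (i) of Lemma \ref{inequality-hardy}), whose proof was left to the reader; for simplicity we write it with the homogeneous weight $y^{\lambda}$, the case of $\langle y\rangle^{\lambda}$ being similar after splitting $\{y\le 1\}$ and $\{y\ge 1\}$. For fixed $x$, assuming enough decay at infinity to discard the boundary terms, integration by parts gives \begin{align*} \int_0^{\infty} y^{2\lambda} f^2\, dy &= \frac{1}{2\lambda+1}\int_0^{\infty} \partial_y\big(y^{2\lambda+1}\big)\, f^2\, dy = -\frac{2}{2\lambda+1}\int_0^{\infty} y^{2\lambda+1} f\, \partial_y f\, dy\\ &\le \frac{2}{2\lambda+1}\Big(\int_0^{\infty} y^{2\lambda} f^2\, dy\Big)^{1/2}\Big(\int_0^{\infty} y^{2\lambda+2} (\partial_y f)^2\, dy\Big)^{1/2}, \end{align*} so that one may take $C_\lambda=\frac{2}{2\lambda+1}$, which indeed tends to $+\infty$ as $\lambda\to-\frac 12$; integrating the resulting inequality in $x$ yields \eqref{Hardy1}.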
\end{proof} \section{The existence of approximate solutions} \label{section-a3} Now we prove Proposition \ref{prop3.0}, the existence of a solution to the equation for the vorticity $ \tilde{w}_\epsilon=\partial_y\tilde{u}_\epsilon $. We suppose that $m, k, \ell$ and $u^s(t, y)$ satisfy the assumptions of Proposition \ref{prop3.0}: \begin{align} \label{apendix-vorticity-bb} \begin{cases} & \partial_t\tilde{w}_\epsilon + (u^s + \tilde{u}_\epsilon) \partial_x\tilde{w}_\epsilon +\tilde{v}_\epsilon (u^s_{yy} + \partial_y \tilde{w}_{\epsilon}) = \partial^2_{y}\tilde{w}_\epsilon + \epsilon \partial^2_{x}\tilde{w}_\epsilon, \\ & \partial_y \tilde{w}_{\epsilon}|_{y=0}=0,\\ &\tilde{w}_\epsilon|_{t=0}=\tilde{w}_{0, \epsilon}, \end{cases} \end{align} where \begin{equation*} \tilde{u}_\epsilon(t, x, y)=-\int^{+\infty}_y \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y,\quad \tilde{v}_\epsilon(t, x, y)=-\int^{y}_0\partial_x \tilde{u}_\epsilon(t, x, \tilde y) d\tilde y. \end{equation*} We will use the following iteration process to prove the existence of the solution, where $ w^0=\tilde{w}_{0, \epsilon}$: \begin{align} \label{apendix-vorticity-iteration} \begin{cases} & \partial_tw^n + (u^s+{u}^{n-1} )\partial_x w^{n} + (u^s_{yy} +\partial_y w^{n-1}){v}^{n} = \partial^2_{y}w^n + \epsilon \partial^2_{x}w^n , \\ & \partial_y w^n |_{y=0}=0,\\ &w^n|_{t=0}=\tilde{w}_{0, \epsilon}, \end{cases} \end{align} with \begin{equation*} u^{n-1}(t, x, y)=-\int^{+\infty}_y w^{n-1}(t, x, \tilde y) d\tilde y, \end{equation*} and \begin{align*} v^{n}(t, x, y)&=-\int^{y}_0\partial_x u^{n}(t, x, \tilde y) d\tilde y\\ &=\int^{y}_0\int^{+\infty}_{\tilde y} \partial_x w^{n}(t, x, z) dz d\tilde y. 
\end{align*} Here, for the boundary data, we have $$ \partial^3_{y}w^n|_{y=0}=((u^s_y+{w}^{n-1} )\partial_x w^{n})|_{y=0}, $$ \begin{equation*} \begin{split} &\qquad(\partial^5_y w^n)(t, x, 0)\\ &= \left(\partial^3_y u^s(t, 0) + \partial^2_y w^{n-1}(t, x, 0)+\epsilon (\partial^2_x w^{n-1})(t, x, 0)\right)( \partial_x w^n )(t, x, 0)\\ &\qquad+\left(u^s_y(t, 0) + (w^{n-1})(t, x, 0)\right) \left((\partial^2_y\partial_x w^{n})(t, x, 0)+\epsilon (\partial^3_x w^{n})(t, x, 0)\right)\\ &\qquad\qquad\qquad\qquad-(\partial_y\partial_x w^{n})(u^s_y + w^{n-1})(t, x, 0)\\ &\quad+\sum_{1\le j\le 3}C^4_j \bigg((\partial^j_y(u^s + u^{n-1})) \partial^{4-j}_y\partial_x u^{n} - (\partial^{j - 1}_y\partial_x \tilde{u}^n )\partial^{4-j}_y(u^s_y + w^{n-1})\bigg)(t, x, 0)\\ &\qquad\qquad -\epsilon \partial^2_{x}\bigg(\left(u^s_y(t, 0) + (w^{n-1})(t, x, 0)\right) (\partial_x w^n )(t, x, 0)\bigg). \end{split} \end{equation*} Moreover, for $3\le p\le \frac m2+1$, $\partial^{2p+1}_{y}w^n|_{y=0}$ is a linear combination of terms of the form \begin{align}\label{mu-4} \prod^{q_1}_{j=1}\bigg(\partial_x^{\alpha_j} \partial_y^{\beta_j + 1}\big( u^s + {u}^n \big) \bigg)\bigg|_{y=0}\times \prod^{q_2}_{l=1} \bigg(\partial_x^{\tilde \alpha_l} \partial_y^{\tilde\beta_l + 1}\big( u^s + {u}^{n-i} \big)\bigg)\bigg|_{y=0}\,\, , \end{align} where $2\le q_1+q_2\le p,\,\, 1\le i \le \min\{n,\, p\} $ and \begin{align*} &\alpha_j + \beta_j\le 2p - 1, \,\, 1\le j\le q_1;\,\, \tilde\alpha_l + \tilde\beta_l \le 2p - 1,\,\, 1\le l\le q_2;&\\ &\sum^{q_1}_{j=1} (3\alpha_j + \beta_j) +\sum^{q_2}_{l=1}(3\tilde \alpha_l + \tilde\beta_l )= 2p +1\,;&\\ &~\sum\limits_{j=1}^{q_1}\beta_j+\sum\limits_{l=1}^{q_2}\tilde \beta_l \le 2p -2;\,\,~\sum\limits_{j=1}^{q_1} \alpha_j +\sum\limits_{l=1}^{q_2} \tilde\alpha_l \le p - 1, \,\,\,0<\sum\limits_{j=1}^{q_1} \alpha_j .& \end{align*} Remark that the condition $0<\sum\limits_{j=1}^{q_1} \alpha_j$ implies that, in \eqref{mu-4}, there is at least one factor of the form
$\partial_x^{\alpha_j}\partial_y^{\beta_j +1} {u}^n(t, x, 0)$. Given $w^{n-1}$, we have ${u}^{n-1}=\partial^{-1}_yw^{n-1} $ and ${v}^{n}=-\partial^{-1}_y{u}^{n}_{x}$. We first prove the existence and boundedness in $L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$ of the sequence $\{ w^n, n\in\mathbb{N}\}$ of solutions to the linear equation \eqref{apendix-vorticity-iteration}; the existence of a solution to \eqref{apendix-vorticity-bb} then follows by the standard weak convergence method. \begin{lemma}\label{lemma-app-vorticity-iteration-1} Assume that $w^{n-i}\in L^\infty([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+)), 1\le i \le \min\{n, \frac{m}{2} +1\}$, and that $\tilde w_{0, \epsilon}$ satisfies the compatibility condition up to order $m+2$ for the system \eqref{apendix-vorticity-bb}. Then the initial-boundary value problem \eqref{apendix-vorticity-iteration} admits a unique solution $w^n$ such that, for any $t\in [0, T]$, \begin{equation}\label{appendix-ck-a} \frac {d}{dt}\|w^n (t)\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2\le B^{n-1}_T\|{w}^n(t)\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+ D^{n-1}_T\| w^n\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}, \end{equation} where \begin{align*} B^{n-1}_T=C\bigg(1+&{ \sum\limits_{i=1}^{\min\{n, {m}/{2} +1\}}}\|w^{n-i}\|_{L^\infty([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))}\\ &+ (1+\frac{1}{\epsilon} ) { \sum\limits_{i=1}^{\min\{n, {m}/{2} +1\}}}\|w^{n-i}\|^2_{L^\infty([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))} \bigg), \end{align*} and $$ D^{n-1}_T=C{ \sum\limits_{i=1}^{\min\{n, {m}/{2} +1\}}}\|w^{n-i}\|^{m+2}_{L^\infty([0, T];H^{m+2}_{k+\ell}(\mathbb{R}^2_+))}\,. $$ \end{lemma} \begin{proof} Once we have the {\em a priori} estimate for this linear problem, the existence of a solution is guaranteed by the Hahn--Banach theorem, so we only prove the {\em a priori} estimate for smooth solutions.
For any $\alpha\in \mathbb{N}^2, |\alpha|\le m+2$, applying $\partial^\alpha$ to equation \eqref{apendix-vorticity-iteration}, multiplying the resulting equation by $\langle y \rangle^{2k + 2\ell+2\alpha_2} \partial^\alpha w^n $ and integrating by parts over $\mathbb{R}^2_+$, one obtains \begin{align}\label{appendix-ck} \begin{split} &\frac{1}{2}\frac{d}{dt}\|w^n \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 + \|\partial_y w^n \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 + \epsilon \|\partial_x w^n \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2\\ & =\sum_{|\alpha|\le {m+2}}\int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2 }\partial^\alpha\big((u^s+{u}^{n-1})\partial_x w^{n} \\ &\qquad\qquad\qquad \qquad -(\partial^{-1}_y {u}^n_{x})(u^s_{yy} +\partial_y {w}^{n-1})\big)\partial^\alpha w^n dx dy\\ &\quad+\sum_{|\alpha|\le {m+2}}\int_{\mathbb{R}^2_+} (\langle y \rangle^{2k+ 2\ell +2\alpha_2 })'\partial^\alpha w^{n}\,\partial^\alpha\partial_y w^{n}dxdy \\ &\qquad \qquad +\sum_{|\alpha|\le {m+2}}\int_{\mathbb{R}} (\partial^\alpha\partial_y w^{n}\,\partial^\alpha w^{n})\big|_{y=0}dx. \end{split} \end{align} With similar analysis to Section \ref{section5}, we have \begin{align*} \begin{split} &\left|\int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2 }(u^s+{u}^{n-1})\partial_x\partial^\alpha w^{n}\partial^\alpha w^n dx dy\right|\\ &=\left|-\frac 12 \int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2 }\partial_x(u^s+{u}^{n-1})\partial^\alpha w^{n}\partial^\alpha w^n dx dy\right|\\ &\le C \|{u}^{n-1}\|_{L^\infty(\mathbb{R}^2_+)} \|w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}, \end{split} \end{align*} and \begin{align*} \begin{split} &\left|\int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2 }[\partial^\alpha, (u^s+{u}^{n-1})]\partial_x w^{n}\partial^\alpha w^n dx dy\right|\\ &\le C(1+ \|{w}^{n-1}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} ) \|w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}.
\end{split} \end{align*} For the second term on the right-hand side of \eqref{appendix-ck}, by the Leibniz formula, we need only pay particular attention to the following two terms: \begin{align*} \begin{split} &\left|\int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2}\big(\partial^\alpha\partial^{-1}_y {u}^n_{x}\big)(u^s_{yy} +\partial_y {w}^{n-1})\partial^\alpha w^n dx dy\right|\\ &\le C(1+\|{w}^{n-1}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)})\|\partial_x w^n\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \|w^n\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\\ &\le\frac{\epsilon}{2}\|\partial_x w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+ \frac{C}{\epsilon}(1+\|{w}^{n-1}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)})^2 \|w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}, \end{split} \end{align*} and \begin{align*} \begin{split} &\int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2 } {v}^n\big(\partial^\alpha\partial_y {w}^{n-1}\big)\partial^\alpha w^n dx dy\\ &=-\int_{\mathbb{R}^2_+}\partial_y \big( \langle y \rangle^{2k+ 2\ell +2\alpha_2 }(\partial^{-1}_y {u}^n_{x})\big)\big(\partial^\alpha{w}^{n-1}\big)\partial^\alpha w^n dx dy\\ &\quad-\int_{\mathbb{R}^2_+}\big( \langle y \rangle^{2k+ 2\ell +2\alpha_2 }(\partial^{-1}_y {u}^n_{x})\big)\big(\partial^\alpha {w}^{n-1}\big)\partial_y \partial^\alpha w^n dx dy, \end{split} \end{align*} where we have used $v^n|_{y=0}=0$; thus \begin{align*} \begin{split} &\left|\int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2} {v}^n\big(\partial^\alpha\partial_y {w}^{n-1}\big)\partial^\alpha w^n dx dy\right|\\ &\le C\|{w}^{n-1}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \big(\|w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+\|\partial_y w^n\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \|w^n\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\big).
\end{split} \end{align*} For the boundary term, similar to the proof of Proposition \ref{prop3.1}, we can get \begin{align*} &\sum_{|\alpha|\le {m+2}}\left|\int_{\mathbb{R}} (\partial^\alpha\partial_y w^{n}\,\partial^\alpha w^{n})\big|_{y=0}dx\right|\\ &\le \frac 1{16} \|\partial_y w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+ C\|w^{n-1}\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\| w^n\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}. \end{align*} Finally, we obtain \begin{align*} \begin{split} \frac {d}{dt}\|w^n (t)\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 &+ \|\partial_y w^n(t) \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 + \epsilon \|\partial_x w^n(t) \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 \\ &\le B^{n-1}_T\|{w}^n(t)\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+ D^{n-1}_T\| w^n\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\, . \end{split} \end{align*} \end{proof} \begin{lemma}\label{lemmab.2} Suppose that $m, k, \ell$ and $u^s(t, y)$ satisfy the assumption of Proposition \ref{prop3.0} and $\bar\zeta>0$. Then for any $0<\epsilon\le 1$, there exists $T_\epsilon>0$ such that for any $\tilde{w}_{0, \epsilon}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$ with $$ \|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \bar \zeta, $$ the iteration equations \eqref{apendix-vorticity-iteration} admit a sequence of solutions $\{w^n, n\in\mathbb{N}\}$ such that, for any $t\in [0, T_\epsilon]$, $$ \|w^n(t)\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \frac 43\|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} ,\quad \forall n\in\mathbb{N}. $$ \end{lemma} \noindent {\bf Remark.} Here $\bar\zeta$ is arbitrary. \begin{proof} Integrating \eqref{appendix-ck-a} over $[0, t]$ for $0<t\le T$ with $T>0$ small gives $$ \|w^n(t)\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \frac{\|\tilde{w}_{0, \epsilon}\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}}{e^{-\frac m2 B^{n-1}_T t}-\frac m2 D^{n-1}_T t\|\tilde{w}_{0, \epsilon}\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}}. $$ Indeed, setting $Z(t)=\|w^n(t)\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}$, \eqref{appendix-ck-a} gives $Z'\le \frac m2 B^{n-1}_T Z + \frac m2 D^{n-1}_T Z^2$, and solving this Bernoulli-type differential inequality yields the bound above. We prove the Lemma by induction.
For $n=1$, we have \begin{align*} B^{0}_T&=C\left(1+\|\tilde w_{0, \epsilon}\|_{ H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+ (1+\frac{1}{\epsilon} ) \|\tilde w_{0, \epsilon}\|^2_{ H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \right)\\ &\le C\left(1+\bar \zeta+ (1+\frac{1}{\epsilon} ) \bar \zeta^2 \right) , \end{align*} and $$ D^{0}_T=C\|\tilde w_{0, \epsilon}\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le C\bar \zeta^{m+2}. $$ Choose $T_\epsilon>0$ small such that $$ \left(e^{-\frac m2 C\left(1+2\bar \zeta+ 4(1+\frac{1}{\epsilon} ) \bar \zeta^2 \right) T_\epsilon}-\frac m2 C(2\bar \zeta)^{m+2} T_\epsilon (2\bar \zeta)^{m}\right)^{-1}=\left(\frac 43\right)^m; $$ we get $$ \|w^1(t) \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \le \frac 43 \|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}. $$ Now the induction hypothesis is: for $0\le t\le T_\epsilon$, $$ \|w^{n-1}(t) \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \le \frac 43 \|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}. $$ Thanks to the choice of $T_\epsilon$, we also have $$ \left(e^{-\frac m2 B^{n-1}_{T_\epsilon} T_\epsilon}-\frac m2 D^{n-1}_{T_\epsilon} T_\epsilon\|\tilde{w}_{0, \epsilon}\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\right)^{-1}\le \left(\frac 43\right)^m $$ for any $t\in [0, T_\epsilon]$, which finishes the proof of Lemma \ref{lemmab.2}. \end{proof} \section*{Acknowledgments} The first author was partially supported by ``the Fundamental Research Funds for the Central Universities'' and the NSF of China (No. 11171261). The second author was supported by a sixteen-month scholarship from the State Scholarship Fund of China, and he would like to thank the ``Laboratoire de math\'ematiques Rapha\"el Salem de l'Universit\'e de Rouen'' for its hospitality. \end{document}
\begin{document} \title{Rich Words in the Block Reversal of a Word} \author{Kalpana Mahalingam, Anuran Maity, Palak Pandoh} \address {Department of Mathematics,\\ Indian Institute of Technology Madras, Chennai, 600036, India} \email{[email protected], [email protected], [email protected]} \keywords{Combinatorics on words, rich words, run-length encoding, block reversal} \maketitle \begin{abstract} The block reversal of a word $w$, denoted by $\mathtt{BR}(w)$, is a generalization of the concept of the reversal of a word, obtained by concatenating the blocks of the word in the reverse order. We characterize non-binary and binary words whose block reversal contains only rich words. We prove that for a binary word $w$, richness of all elements of $\mathtt{BR}(w)$ depends on $l(w)$, the length of the run sequence of $w$. We show that if all elements of $\mathtt{BR}(w)$ are rich, then $2\leq l(w)\leq 8$. We also provide the structure of such words. \end{abstract} \section{Introduction} Inversions, insertions, deletions, duplications, substitutions and translocations are some of the operations that transform a DNA sequence from a primitive sequence (see \cite{Cantone2013,Cantone2010,Zhong2004,mahalingam2020}). A rearrangement of chromosomes can happen when a single sequence undergoes breakage and one or more segments of the chromosome are shifted by some form of dislocation (\cite{Zhong2004}). Mahalingam et al. (\cite{blore}) defined the block reversal of a word, which is a rearrangement of strings when dislocations happen through inversions. The authors generalized the concept of the reversal of a word where, in place of reversing individual letters, they decomposed the word into factors or blocks and considered the new word such that the blocks appear in the reverse order.
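The block-decomposition operation just described is easy to realize by brute force: enumerate every way of cutting $w$ at its interior gaps and concatenate the resulting blocks in reverse order. The following sketch (the function name is ours, not from \cite{blore}) does exactly that.

```python
from itertools import combinations

def block_reversal(w: str) -> set[str]:
    """All words B_t...B_1 obtained from a decomposition w = B_1 B_2 ... B_t."""
    results = set()
    n = len(w)
    # Choose k cut positions among the n - 1 interior gaps of w.
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            blocks = [w[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
            results.add("".join(reversed(blocks)))
    return results
```

Since distinct decompositions can produce the same word, the result is a set; for $u=abbc$ this enumeration returns exactly the seven words listed later in Example \ref{e2}.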
The block reversal operation of a word $w$, denoted by $\mathtt{BR}(w)$, is represented in Figure \ref{f2}. \begin{figure} \caption{For $w \in \Sigma^*$, $w'\in \mathtt{BR}(w)$.} \label{f2} \end{figure} If the word $w$ can be expressed as a concatenation of its factors or blocks $B_i$ such that $w=B_1B_2\cdots B_k$, then $w'=B_kB_{k-1}\cdots B_1$ is an element of $\mathtt{BR}(w)$. Since there are multiple ways to divide a word into blocks, the block reversal of a word forms a set. Mahalingam et al. (\cite{blore}) proved that there is a strong connection between the block reversal and the non-overlapping inversion of a word. A non-overlapping inversion of a word is a set of inversions that do not overlap with each other. In 1992, Sch\"oniger et al. (\cite{Schon1992}) presented a heuristic for computing the edit distance when non-overlapping inversions are allowed. They presented an $\mathcal{O}(n^6)$ exact solution for the alignment with the non-overlapping inversion problem and showed that the non-overlapping inversion operation ensures that all inversions occur in one mutation step. Instances of problems considering the non-overlapping inversions include the string alignment problem, the edit distance problem, the approximate matching problem, etc. (\cite{Cantone2010,Cantone20,Kece93,Augusto2006}). Kim et al. (\cite{Kim2015}) studied the non-overlapping inversion on strings from a formal language theoretic approach. A word is a palindrome if it is equal to its reverse. Let $|w|$ be the length of the word $w$. It was proved by Droubay et al. (\cite{epi}) that a word $w$ has at most $|w|$ non-empty distinct palindromic factors. The words that achieve the bound were referred to as rich words by Glen et al. (\cite{palrich}). Several properties of rich words were studied in the literature (see \cite{pal3,epi,palrich,guo}). Droubay et al.
(\cite{epi}) proved that a word $w$ contains exactly $|w|$ non-empty distinct palindromic factors iff the longest palindromic suffix of any prefix $p$ of $w$ occurs exactly once in $p$. Guo et al. (\cite{guo}) provided necessary and sufficient conditions for richness in terms of the run-length encoding of binary words. It is known that on a binary alphabet, the set of rich words contains factors of the period-doubling words, factors of Sturmian words, factors of complementary symmetric Rote words, etc. (see \cite{palrot1, epi, schaclo}). Over a non-binary alphabet, the set of rich words contains, for example, factors of Arnoux--Rauzy words and factors of words coding symmetric interval exchange. There are many results in the literature regarding the occurrence of rich words in infinite and finite words, but there are significantly fewer results about the occurrence of rich words in a language. The occurrence of rich words in the conjugacy class of a word $w$, denoted by $C(w)$, is a well-studied concept in the literature (see \cite{careymusic, palrich, restivobwt, oeis1}). Shallit et al. (\cite{oeis1}) calculated the number of binary words $w$ of a particular length such that every conjugate of $w$ is rich. A word $w$ is said to be circularly rich if all of the conjugates of $w$ (including itself) are rich, and $w$ is a product of two palindromes. Glen et al. (\cite{palrich}) studied circularly rich words and proved equivalent conditions for circularly rich words. They proved that a word $w$ is circularly rich iff the infinite word $w^\omega$ is rich iff $ww$ is rich, where $w^\omega$ is the word formed by concatenating infinitely many copies of $w$. Restivo et al. (\cite{restivobwt, bwtRESTIVO}) outlined relationships between circularly rich words and the Burrows--Wheeler transform, which underlies highly efficient data compression algorithms. In many musical contexts, scale and rhythmic patterns are extended beyond a single iteration of the interval of periodicity.
From the equivalent conditions for circularly rich words, proved by Glen et al. (\cite{palrich}), if $w$ is the step pattern of an octave-based scale and is rich, and if $ww$ is also rich, then the property can be extended without limit $(w^\omega)$. Lopez et al. (\cite{lopezmusic}) observed that circular palindromic richness is inherent in numerous musical contexts, including all well-formed and maximally even sets, and also in non-well-formed scales which display three different step sizes. Carey (\cite{careymusic}) also deeply studied circularly rich words from a music theory perspective. He proposed that perfectly balanced scales that display circular palindromic richness and also exhibit relatively few step differences may prove to be advantageous from a cognitive and musical perspective. Since the block reversal operation is a generalization of conjugation, the study of rich words in the block reversal of a word has possible applications in music theory and data compression techniques. In this paper, we characterize words whose block reversal contains only rich words. We find a necessary and sufficient condition for a non-binary word such that all elements in its block reversal are rich. For a binary word $w'$, we prove that the richness of elements of $\mathtt{BR}(w')$ depends on $l(w')$, the length of the run sequence of $w'$. We show that for a binary word $w$, if all elements of $\mathtt{BR}(w)$ are rich, then $2\leq l(w) \leq 8$. We also find the structure of binary words whose block reversal consists of only rich words. The paper is organized as follows. In Section \ref{sec3}, we prove that for a non-binary word $w$, all elements of $\mathtt{BR}(w)$ are rich iff $w$ is either of the form $a_1a_2a_3 \cdots a_k$ or $a_j^{|w|}$ where each $a_i\in \Sigma$ is distinct.
In Section \ref{sec4}, we show that for a binary word $w$, all elements of $\mathtt{BR}(w)$ are rich if $l(w) =2$. We also show that if all elements of $\mathtt{BR}(w)$ are rich, then $2\leq l(w) \leq 8$. We discuss the case when $3\leq l(w)\leq 8$ separately in detail and provide the structure of words such that all elements in their block reversal are rich. We end the paper with a few concluding remarks. \section{Basic definitions and notations}\label{sec2} Let $\Sigma$ be a non-empty set of letters. A word $w=[a_{i}]$ over $\Sigma$ is a finite sequence of letters from $\Sigma$ where $a_i$ is the $i$-$th$ letter of $w$. We denote the empty word by $\lambda$. By $\Sigma^*$, we denote the set of all words over $\Sigma$ and $\Sigma^+=\Sigma^*\setminus \{\lambda\}$. The length of a word $w$, denoted by $|w|$, is the number of letters in $w$. $\Sigma^n$ and $\Sigma^{\geq n}$ denote the set of all words of length $n$ and the set of all words of length greater than or equal to $n$, respectively. For $a \in \Sigma$, $|w|_a$ denotes the number of occurrences of $a$ in $w$. A word $u$ is a factor or block of the word $w$ if $w=puq$ for some $p, q\in \Sigma^*$. If $p= \lambda$, then $u$ is a prefix of $w$ and if $q = \lambda$, then $u$ is a suffix of $w$. Let $Fac(w)$ denote the set of all factors of the word $w$. $\ALPH(w)$ denotes the set of all letters in $w$. Two words $u$ and $v$ are called conjugates of each other if there exist $x,y\in \Sigma^*$ such that $u=xy$ and $v=yx$. For a word $w= w_1w_2\cdots w_n$ such that $w_i\in \Sigma$, the reversal of $w$, denoted by $w^R$, is the word $w_n\cdots w_2w_1$. A word $w$ is a palindrome if $w=w^R$. By $P(w)$, we denote the number of all non-empty palindromic factors of $w$. A word $w$ has at most $|w|$ distinct non-empty palindromic factors. The words that achieve the bound are called rich words.
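Richness as defined above can be tested directly by counting distinct non-empty palindromic factors, which is convenient for checking the small examples appearing throughout the paper (a brute-force sketch; the function names are ours):

```python
def palindromic_factors(w: str) -> set[str]:
    """Distinct non-empty palindromic factors of w."""
    return {w[i:j]
            for i in range(len(w))
            for j in range(i + 1, len(w) + 1)
            if w[i:j] == w[i:j][::-1]}

def is_rich(w: str) -> bool:
    """A word is rich iff it attains the maximal count |w|."""
    return len(palindromic_factors(w)) == len(w)
```

For instance, $abbc$ is rich (palindromic factors $a, b, c, bb$), while its block reversal $bcab$ is not (only $a, b, c$), in line with the example given in Section \ref{sec3}.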
Every non-empty word $w$ over $\Sigma$ has a unique encoding of the form $w = a_1^{n_1}a_2^{n_2}\ldots a_k^{n_k},$ where $n_i \geq 1$, $a_i \neq a_{i+1}$ and $a_i\in \Sigma$ for all $i$. This encoding is called the run-length encoding of $w$ (\cite{guo}). The word $a_1 a_2 \ldots a_k$ is called the trace of $w$. The sequence $(n_1, n_2, \ldots, n_k)$ is called the run sequence of $w$ and the length of the run sequence of $w$ is $k$. For any binary word $w$ over $\Sigma = \{a,b\}$, the complement of $w$, denoted by $w^c$, is the word $\phi(w)$ where $\phi$ is the morphism such that $\phi(a)=b$ and $\phi(b)=a$. For example, if $w= ababb$, then $w^c = babaa$. We recall the definition of the block reversal of a word from Mahalingam et al. (\cite{blore}). \begin{definition}\cite{blore} \label{br} Let $w,\;B_i \in \Sigma^+$ for all $i$. The block reversal of $w$, denoted by $\mathtt{BR}(w)$, is the set $$\mathtt{BR}(w) =\{ B_tB_{t-1}\cdots B_1 \;:\; w=B_1B_2 \ldots B_t, \; t\ge 1\}.$$ \end{definition} Note that a word can be divided into a maximum of $|w|$ blocks. We illustrate Definition \ref{br} with the help of an example. \begin{example}\label{e2} Let $\Sigma=\{a, b,c\}$. Consider $u=abbc$ over $\Sigma$. Then, $$\mathtt{BR}(u) = \{ cbab, cbba, cabb, bbca, bcab, abbc, bcba \}.$$ \end{example} For more information on words, the reader is referred to Lothaire (\cite{Lothaire1997}) and Shyr (\cite{Shyr2001}). \section{Block Reversal of Non-binary Words}\label{sec3} It is well known that a rich word $w$ contains exactly $|w|$ distinct palindromic factors. In this section, we find a necessary and sufficient condition for a non-binary word such that all elements in its block reversal are rich. We recall the following from Glen et al. (\cite{palrich}).
\begin{theorem}\cite{palrich}\label{tglen} For any word $w$, the following properties are equivalent:\\ (i) $w$ is rich;\\ (ii) for any factor $u$ of $w$, if $u$ contains exactly two occurrences of a palindrome $p$, as a prefix and as a suffix only, then $u$ is itself a palindrome. \end{theorem} \begin{lemma}\cite{palrich}\label{rich} If $w$ is rich, then \begin{itemize} \item all factors of $w$ are rich; \item $w^R$ is rich. \end{itemize} \end{lemma} We first give a necessary condition under which $\mathtt{BR}(w)$ contains at least one rich word. \begin{lemma}\label{45tn} Let $w \in \Sigma^n$. If $\mathtt{BR}(w)$ has no rich element, then $|\ALPH(w)|< n-1$. \end{lemma} \begin{proof} Let $w \in \Sigma^n$ such that $\mathtt{BR}(w)$ contains no rich element. We prove that if $|\ALPH(w)| \geq n-1$, then there exists at least one rich word in $\mathtt{BR}(w)$. If $|\ALPH(w)|=n $, then all elements in $\mathtt{BR}(w)$ are rich. If $|\ALPH(w)|=n-1 $, then for $u_1, u_2, u_3 \in \Sigma^*$, $w = u_1 a u_2 a u_3$ such that $a \notin \ALPH(u_i)$ for all $i$ and $\ALPH(u_i) \cap \ALPH(u_j) = \emptyset$ for $i\neq j$. Now, $u_3u_2a^2u_1 \in \mathtt{BR}(w)$ is a rich word. \end{proof} We now give an example of a word $w \in \Sigma^n$ with $|\ALPH(w)|=n-2$ such that $\mathtt{BR}(w)$ contains no rich word. \begin{example}\label{45tnr} For $a, b \in \Sigma$, consider $w= u_1 \textbf{a} u_2 \textbf{b} u_3 \textbf{b} u_4 \textbf{a} u_5$ such that $a, b \notin \ALPH(u_i)$, $|u_i|\geq 3$ for each $i$, $\ALPH(u_i) \cap \ALPH(u_j)=\emptyset$ for $i\neq j$ and $\sum_{i=1}^{i=5}|\ALPH(u_i)|=|w|-4$. Then, $|\ALPH(w)|=|w|-2$. We denote by $\pi(w)$ the set of all permutations of the word $w$, i.e., $\pi(w)=\{u\in \Sigma^*\;|\; |u|_a=|w|_a \text{ for all } a\in \Sigma \}$.
One can easily observe that $\mathtt{BR}(w)$ is a subset of $\pi(w)$. If $\pi(w)$ has no rich words, then $\mathtt{BR}(w)$ also has no rich words. Suppose there is a $ v \in \pi(w)$ such that $v$ is rich. Then, as $|\ALPH(v)|=|v|-2$ and $|v|_a=|v|_b=2$, by Theorem \ref{tglen}, we have $\{ a \alpha a, b \alpha' b~ |~ \alpha, \alpha' \in \Sigma^{\geq 2} \text{ such that } \alpha \neq bzb, \alpha' \neq az'a \text{ where } z, z' \in \Sigma \cup \{\lambda\} \}$ $\cap \; Fac(v) = \emptyset$. Otherwise, if $a \alpha a$ or $b \alpha' b$ lies in $Fac(v)$, then as $\alpha \neq bzb$, $\alpha' \neq az'a$ where $z, z' \in \Sigma \cup \{\lambda\}$, $\alpha, \alpha' \in \Sigma^{\geq 2}$, $|\alpha|_a = 0$ and $|\alpha'|_b = 0$, we have that $a \alpha a$ and $b \alpha' b$ are not palindromes, which contradicts Theorem \ref{tglen}. This implies that $v$ is of one of the following forms: \begin{align}\label{algrt1} v_1 x^2 v_2 y^2 v_3,\; v_1 x x_1 x v_2 y x_2 y v_3, \; v_1 xy^2x v_2,\; v_1 xx_1x v_2 y^2 v_3, \; v_1 y^2 v_2 xx_1x v_3, \; v_1 xyxy v_2, \; v_1 x y x_1 y x v_2 \end{align} where $x \neq y \in \{a, b\}$, $ x_1, x_2 \in \Sigma \setminus \{a, b\}$ and $v_i \in \Sigma^*$ for all $i$. Now, from the structure of $w$, we observe that $\mathtt{BR}(w)$ does not contain any element of the forms in (\ref{algrt1}). Thus, $v \notin \mathtt{BR}(w)$. Since $\mathtt{BR}(w) \subseteq \pi(w)$ and every rich element of $\pi(w)$ is of one of the forms in (\ref{algrt1}), no element of $\mathtt{BR}(w)$ is rich. \end{example} Note that some elements of $\mathtt{BR}(w)$ may not be rich even when $w$ is rich. For example, the word $w=abbc$ is rich, but $ bcab \in \mathtt{BR}(w)$ is not rich. We now give a necessary and sufficient condition on a non-binary word $w$ such that all elements of $\mathtt{BR}(w)$ are rich. We need the following results.
\begin{lemma} \label{nori} Let $w = a_1^{n_1}a_2uv$ where $a_1\neq a_2$, $u \in \{a_1,a_2\}^+$, $v \in (\Sigma \setminus \{ a_1, a_2\})^+$ and $n_1 \geq 1$. Then, there exists an element in $\mathtt{BR}(w)$ which is not rich. \end{lemma} \begin{proof} Let $w = a_1^{n_1} a_2 u v$ such that $a_1\neq a_2$, $u \in \{a_1,a_2\}^+$, $v \in (\Sigma \setminus \{ a_1, a_2\})^+$ and $n_1 \geq 1$. Let $u = u' a_i$ where $i=1$ or $2$. Then, $w = a_1^{n_1} a_2 u'a_i v $. Note that $w'=u' a_i v a_2 a_1^{n_1} \in \mathtt{BR}(w)$ and $w''=a_i v a_1^{n_1} a_2 u' \in \mathtt{BR}(w)$. The factor $ a_i v a_2 a_1$ of $w'$ is not a palindrome for $i=1$ as $a_i\notin \ALPH(v)$ and, similarly, the factor $ a_iv a_1^{n_1} a_2$ of $w''$ is not a palindrome for $i=2$. Hence, by Theorem \ref{tglen} and Lemma \ref{rich}, $w'\in \mathtt{BR}(w)$ is not rich when $i=1$ and $w''\in \mathtt{BR}(w)$ is not rich when $i=2$. \end{proof} We also have the following: \begin{lemma}\label{notrich} For $u_1, u_2, u_3, u_4, u_5 \in \Sigma^*$, consider $w= u_1 a_i u_2 a_j u_3 a_{k} u_4 a_i u_5$ such that $a_j \neq a_k$, $a_j \neq a_i \neq a_k$ and $a_k$ is not a suffix of $u_3$. Then, there exists an element in $\mathtt{BR}(w)$ which is not rich. \end{lemma} \begin{proof} For $u_1, u_2, u_3, u_4, u_5 \in \Sigma^*$, consider $w= u_1 a_i u_2 a_j u_3 a_{k} u_4 a_i u_5$ where $a_j \neq a_k$, $a_j \neq a_i \neq a_k$ and $a_k$ is not a suffix of $u_3$. We have the following cases: \begin{itemize} \item $a_i\notin \ALPH(u_3) : $ Then, $w'= u_5 u_4 a_i a_k u_3 a_j a_i u_2 u_1 \in \mathtt{BR}(w)$ and $a_j \neq a_k$ implies $a_i a_k u_3 a_j a_i$ is not a palindromic factor of $w'$. Then, by Theorem \ref{tglen}, $w'$ is not rich. \item $a_i\in \ALPH(u_3) : $ Then, let $u_3 = u_3' a_i u_3''$ such that $u_3', u_3'' \in \Sigma^*$ and $|u_3''|_{a_i}=0$.
Now, $w''= u_5 u_4 a_i a_{k} u_3'' a_i u_1 a_i u_2 a_j u_3' \in \mathtt{BR}(w)$. If $w''$ is not rich, then we are done. If $w''$ is rich, then $a_i a_{k} u_3'' a_i\in Fac(w'')$ and, since $|u_3''|_{a_i}=0$, by Theorem \ref{tglen}, $a_i a_{k} u_3'' a_i$ is a palindrome. This implies $u_3''=\lambda$ as $a_k$ is not a suffix of $u_3$. Then, $w=u_1 a_i u_2 a_j u_3' a_i a_{k} u_4 a_i u_5$. Now, $w'''= u_4 a_i u_5 u_3' a_i a_{k} a_j a_i u_2 u_1 \in \mathtt{BR}(w) $. Then, $a_i a_{k} a_j a_i \in Fac(w''')$ is not a palindrome as $a_j\neq a_k$. Therefore, by Theorem \ref{tglen}, $w'''$ is not rich. \end{itemize} \end{proof} Now, we find a necessary and sufficient condition for a non-binary word such that all elements in its block reversal are rich. \begin{theorem} Let $w$ be a non-binary word. Then, all elements of $\mathtt{BR}(w)$ are rich iff $w$ is either of the form $a_1a_2a_3 \cdots a_k$ or $a_i^{|w|}$ where the $a_i\in \Sigma$ are distinct. \end{theorem} \begin{proof} Let $w \in \Sigma^*$. If $|\ALPH(w)|=1$, we are done. Assume $|\ALPH(w)|\geq 3$ and consider the run-length encoding of $w$ to be $ a_1^{n_1}a_2^{n_2}a_3^{n_3} \cdots a_k^{n_k}$ where $a_i \neq a_{i+1}\in \Sigma$, $k \geq 3$ and $n_i \geq 1$. Let all elements of $\mathtt{BR}(w)$ be rich. We have the following cases: \begin{itemize} \item All $a_t$'s are distinct for $1\leq t\leq k$ : We prove that $n_t=1$ for $1\leq t\leq k$. Assume, if possible, that there exists at least one $n_j \geq 2$ for some $j$, i.e., $n_j = 2m +s$ for $m\geq 1$ and $ s\in \{0,1\}$. Let $\gamma = a_1^{n_1} a_2^{n_2} \cdots a_{j-1}^{n_{j-1}}$ and $\gamma'=a_{j+1}^{n_{j+1}} a_{j+2}^{n_{j+2}} \cdots a_k^{n_k}$. Then, $w = \gamma a_j^{2m+s}\gamma'$. Since $a_j^m \gamma' \gamma a_j^m a_j^s \in \mathtt{BR}(w)$ is rich and all $a_t$'s are distinct, by Theorem \ref{tglen}, $u = a_j^m \gamma' \gamma a_j^m$ is a palindrome.
Now, as $|\ALPH(w)|\geq 3$ and all $a_t$'s are distinct, $u$ is not a palindrome, which is a contradiction. Therefore, $n_t = 1$ for $1\leq t\leq k$ and $w=a_1a_2a_3 \cdots a_k$. \item Otherwise, suppose $i$ is the least index such that $|a_1a_2\cdots a_k|_{a_i}\geq 2$ and $a_j=a_i$ where $a_l\neq a_i$ for $i+1\leq l\leq j-1$, i.e., $j$ is the first position at which $a_i$ repeats for $i<j$. We have the following cases : \begin{itemize} \item $i\geq 3$ : Note that $a_i \neq a_1$ and $a_i \neq a_2$. Let $n_i \geq n_j$ such that $n_i = n_j + s' $ where $s' \geq 0$. Now, for $\delta = a_3^{n_3} a_4^{n_4} \cdots a_{i-2}^{n_{i-2}}$, $\alpha = a_{i+1}^{n_{i+1}} a_{i+2}^{n_{i+2}} \cdots a_{j-1}^{n_{j-1}}$ and $\beta = a_{j+1}^{n_{j+1}} a_{j+2}^{n_{j+2}}\cdots a_{k}^{n_{k}}$, we have $$w = a_1^{n_1}a_2^{n_2} \delta a_{i-1}^{n_{i-1}} a_i^{n_j + s'} \alpha a_j^{n_j} \beta. $$ Since $\beta a_j^{n_j} a_1^{n_1} a_2^{n_2} \delta a_{i-1}^{n_{i-1}} a_i^{n_j + s'} \alpha \in \mathtt{BR}(w)$ is rich, by Theorem \ref{tglen}, we get that $a_j^{n_j} a_1^{n_1} a_2^{n_2} \delta a_{i-1}^{n_{i-1}} a_i^{n_j}$ is a palindrome, and hence, $a_1 = a_{i-1}$. Similarly, as $\beta a_j^{n_j} a_2^{n_2} \delta a_{i-1}^{n_{i-1}} a_i^{n_j} a_i^{s'} \alpha a_1^{n_1} \in \mathtt{BR}(w)$, by Theorem \ref{tglen}, we have that $a_j^{n_j} a_2^{n_2} \delta a_{i-1}^{n_{i-1}} a_i^{n_j}$ is a palindrome, which gives $a_2 = a_{i-1}$. Thus, $a_1 = a_2$, which is a contradiction. A symmetrical argument holds for the case $n_i< n_j$. \item $i\leq 2$ : Let $i=1$, i.e., $a_1$ has a repetition and there exists an index $l>1$ such that $a_1 = a_l$. If all elements in $\mathtt{BR}(w)$ are rich, then by Lemma \ref{notrich}, there exists at most one distinct letter between $a_1$ and $a_l$. Similarly, between any two occurrences of $a_2$, there exists at most one distinct letter.
Then, $w$ can only be of one of the forms $a_1^{n_1} a_2^{n_2} a_3^{n_3} a_2 z'$ or $ a_1^{n_1} a_2 u v$ where $a_1 \neq a_3$, $z' \in \{\Sigma \setminus \{a_1\}\}^*$, $u \in \{ a_1, a_2 \}^+$ and $v \in (\Sigma \setminus \{ a_1, a_2 \} )^+$. If $w$ is of the form $a_1^{n_1} a_2^{n_2} a_3^{n_3} a_2 z'$, then $z'a_2 a_3^{n_3} a_1^{n_1} a_2^{n_2} \in \mathtt{BR}(w)$. Since $a_2 a_3^{n_3} a_1^{n_1} a_2$ is not a palindrome, by Theorem \ref{tglen}, $z'a_2 a_3^{n_3} a_1^{n_1} a_2^{n_2} $ is not rich, a contradiction. Now, consider $w$ of the form $ a_1^{n_1} a_2 u v$. Note that as $|\ALPH(w)|\geq 3,$ $v\neq \lambda$. By Lemma \ref{nori}, there exists an element in $\mathtt{BR}(w)$ which is not rich, a contradiction. \end{itemize} \end{itemize} Thus, if $|\ALPH(w)|\geq 3$ and all elements of $\mathtt{BR}(w)$ are rich, then $w=a_1a_2a_3 \cdots a_k$ where the $a_i\in \Sigma$ are distinct.\\ The converse is straightforward. \end{proof} \section{Block Reversal of Binary Words}\label{sec4} Anisiu et al. (\cite{pcofw}) showed that any binary word of length greater than $8$ has at least $8$ non-empty palindromic factors. A set of words that achieve the bound of having exactly $8$ palindromic factors was given by Fici et al. (\cite{lepin}). We recall the definition of the $k$-th power of $u \in \Sigma^*$ from Brandenburg (\cite{FUPH}): it is the shortest prefix $u'$ of $u^n$, for $n\geq k$, such that $|u'|\geq k|u|$. For example, the $\frac{5}{3}$-th power of $aba$ is $ (aba)^{(\frac{5}{3})} = abaab$. Fici et al. (\cite{lepin}) showed that for all $u\in C(v)$ where $v= abbaba$, $P(u^{(\frac{n}{6})})=8$, $n\geq 9$. Mahalingam et al. (\cite{lep}) characterized words $w$ such that $P(w) =8$. They proved that a binary word $w$ has $8$ palindromic factors iff $w$ is of the form $u^{(\frac{n}{6})}$ where $u\in C(v) \cup C(v^R)$ and $v=abbaba$. In this section, we discuss the case of binary words.
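All three notions used in this section — the palindromic-factor count $P(w)$, richness, and the block reversal $\mathtt{BR}(w)$ — are finite and small enough to check by brute force. The following Python sketch (the function names and the exhaustive approach are ours, not taken from the works cited above) implements them, with $\mathtt{BR}(w)$ obtained by reversing the order of the blocks over all decompositions of $w$ into non-empty blocks, and checks the value $P(u^{(\frac{9}{6})})=8$ for $u=abbaba$ recalled above.

```python
from itertools import combinations

def palindromic_factors(w):
    # All distinct non-empty palindromic factors of w.
    return {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1)
            if w[i:j] == w[i:j][::-1]}

def is_rich(w):
    # w is rich iff it has exactly |w| distinct non-empty palindromic factors.
    return len(palindromic_factors(w)) == len(w)

def block_reversal(w):
    # BR(w): for every decomposition w = D_1 D_2 ... D_k into non-empty
    # blocks, collect the word D_k ... D_2 D_1.
    n, result = len(w), set()
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            bounds = (0,) + cuts + (n,)
            blocks = [w[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
            result.add(''.join(reversed(blocks)))
    return result

# The (9/6)-th power of u = abbaba is the shortest prefix of u^2 of length >= 9.
u = "abbaba"
print(len(palindromic_factors((u * 2)[:9])))   # prints 8
```

For instance, `block_reversal("ab")` returns `{"ab", "ba"}`; the exhaustive enumeration over cut positions is exponential in $|w|$, which is harmless for the short words considered here.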
Let $l(w)$ be the length of the run sequence of a binary word $w$. We prove that if $l(w)=2$, then all elements of $\mathtt{BR}(w)$ are rich, and if $l(w)\geq 9$, then there exists an element in $\mathtt{BR}(w)$ that is not rich. Then, we study the block reversal of binary words with $3\leq l(w)\leq 8$. The results in this section also hold for complement words as we have considered the unordered alphabet $\Sigma=\{a,b\}$. \subsection{\textbf{Block reversal of binary words $\bf{w}$ with} $\bf{l(w)= 2}$ $\bf{\&}$ $\bf{l(w)\geq 9}$} Now, we discuss the block reversal of binary words $w$ with $l(w)=2$ $\&$ $l(w)\geq 9$. We first recall the following from Guo et al. (\cite{guo}). \begin{proposition} \cite{guo}\label{runlen} Every binary word having a run sequence of length less than or equal to $4$ is rich. \end{proposition} It was verified by Anisiu et al. (\cite{pcofw}) that for all short binary words (up to $|w|=7$), $P(w)=|w|$. We observe that for words $w$ with $|w|> 7$, some elements of $\mathtt{BR}(w)$ may not be rich even when $w$ is rich. For example, $w = a^2b^3a^3$ is rich, but $a^2bab^2a^2 \in \mathtt{BR}(w)$ is not rich. We discuss the case when $l(w)=2$ for a word $w$ in the following. \begin{proposition}\label{u1} If $w = a^{n_1}b^{n_2}$ where $n_1, n_2 \geq 1$, then all elements of $\mathtt{BR}(w)$ are rich. \end{proposition} \begin{proof} Let $w=a^{n_1}b^{n_2}$ where $n_1, n_2 \geq 1$. Then, $$\mathtt{BR}(w) = \{b^{n_2'}a^{n_1'}b^{n_2-n_2'}a^{n_1-n_1'}\; : \;0 \leq n_1' \leq n_1, \; 0 \leq n_2' \leq n_2\}.$$ Since the length of the run sequence of each element of $\mathtt{BR}(w)$ is less than or equal to $4$, by Proposition \ref{runlen}, all elements of $\mathtt{BR}(w)$ are rich.
\end{proof} We now prove that for a binary word $w$ whose run sequence has length greater than $8$, there exists an element in $\mathtt{BR}(w)$ that is not rich. We recall the following from Mahalingam et al. (\cite{blore}). \begin{lemma}\label{hhh}\cite{blore} $\mathtt{BR}(v) \mathtt{BR}(u) \subseteq \mathtt{BR}(uv)$ for $u,\; v\in \Sigma^*$. \end{lemma} We now have the following: \begin{proposition}\label{u3} Let $w \in \{a, b\}^*$ such that $l(w)\geq 9$. Then, there exists an element in $\mathtt{BR}(w)$ that is not rich. \end{proposition} \begin{proof} Let $w \in \{a, b\}^*$ such that $l(w) \geq 9$. Since $l(w) \geq 9$, for $n_i\geq 1$, consider $w' = a^{n_1} b^{n_2} a^{n_3} b^{n_4} a^{n_5} b^{n_6} a^{n_7} b^{n_8} a^{n_9}$ to be a prefix of $w$. If $w$ is not rich, then we are done. Otherwise, $w$ is rich, and by Lemma \ref{rich}, all factors of $w$ are rich. Also, from Lemma \ref{hhh}, we have $\mathtt{BR}(v) \mathtt{BR}(u) \subseteq \mathtt{BR}(uv)$ for $u,\; v\in \Sigma^*$. We show that there exists an element in $\mathtt{BR}(w')$ that is not rich, which completes the proof. Suppose to the contrary that all elements of $\mathtt{BR}(w')$ are rich. Let $w_1, w_2, w_3 \in \mathtt{BR}(w')$ where $$w_1 = a^{n_9+n_7} b^{n_8+n_6} a b a^{n_5-1+n_3}b^{n_4-1+n_2}a^{n_1},$$ $$w_2 = b^{n_8} a^{n_9+n_7} b a b^{n_6-1+n_4} a^{n_5-1+n_3+n_1}b^{n_2} \text{ \;\;and}$$ $$ w_3 = a^{n_9} b^{n_8+n_6} a^{n_7+n_5} b a b^{n_4-1+n_2} a^{n_3-1+n_1}.$$ We have the following: \begin{itemize} \item If $n_3+n_5\geq 3$, then $a^2 b^{n_8+n_6} a b a^2$ is a factor of $w_1$ that contains exactly two occurrences of a palindrome $a^2$ as a prefix and as a suffix.
By Theorem \ref{tglen}, if $w_1$ is rich, then $a^2 b^{n_8+n_6} a b a^2$ is a palindrome, which is a contradiction. Hence, $n_3 = n_5 =1$. \item If $n_4+n_6\geq 3$, then $a^2 b a b^{n_6-1+n_4} a^2$ is a factor of $w_2$ that contains exactly two occurrences of a palindrome $a^2$ as a prefix and as a suffix. By Theorem \ref{tglen}, if $w_2$ is rich, then $ a^2 bab^{n_6-1+n_4} a^2$ is a palindrome, which is a contradiction. Hence, $n_4 =n_6=1$. \item If $n_2 \geq 2$, then $b^2 a^{n_7+n_5} b a b^2$ is a factor of $w_3$ that contains exactly two occurrences of a palindrome $b^2$ as a prefix and as a suffix. By Theorem \ref{tglen}, if $w_3$ is rich, then $ b^2 a^{n_5+n_7} b a b^2$ is a palindrome, which is a contradiction. Hence, $n_2=1$. \end{itemize} Hence, we have $n_2=n_3=n_4=n_5=n_6=1$ and $w' = a^{n_1} b a b a b a^{n_7} b^{n_8} a^{n_9}$. Let $w_4,\; w_5\in \mathtt{BR}(w')$ where $w_4= a^{n_9+n_7} b^{n_8} a b^2 a^{n_1+1} b$ and $w_5 = a^{n_9+n_7} b^{n_8+1} a b^2 a^{1+n_1}$. Now, $a^2 b^{n_8}a b^2 a^2$ is a factor of $w_4$ that contains exactly two occurrences of a palindrome $a^2$ as a prefix and as a suffix. By Theorem \ref{tglen}, since $w_4$ is rich, $n_8=2$. Also, $a^2 b^{n_8+1}a b^2 a^2$ is a factor of $w_5$ that contains exactly two occurrences of a palindrome $a^2$ as a prefix and as a suffix. By Theorem \ref{tglen}, since $w_5$ is rich, $n_8=1$, which is a contradiction. Thus, there always exists an element in $\mathtt{BR}(w')$ that is not rich. \end{proof} \subsection{\textbf{Block reversal of binary words $\bf{w}$ with} $\bf{3\leq l(w)\leq 8}$} We now consider the case of a binary word $w$ such that $3\leq l(w)\leq 8$. We observe that the result varies with the structure of the word. We compile all results towards the end of this section. We first recall the following from Anisiu et al. (\cite{pcofw}). \begin{theorem}\cite{pcofw}\label{7ric} If $w$ is a binary word of length less than $8$, then $P(w)=|w|$.
If $w$ is a binary word of length $8$, then $7\leq P(w)\leq 8$, and $P(w)= 7$ iff $w$ is of the form $aabbabaa$ or $aababbaa$. \end{theorem} Now, with the help of examples, we illustrate that all elements of $\mathtt{BR}(w)$ may be rich for a binary word $w$ such that $3\leq l(w)\leq 8$. \begin{example}\label{u2} For $3\leq l(w)\leq 8$, consider \[w=\left \{\begin{array}{cc} a(ba)^i & \text{for $i=\frac{l(w)-1}{2}$ and $l(w)$ odd,}\\ (ab)^i & \text{for $i=\frac{l(w)}{2}$ and $l(w)$ even.}\end{array} \right.\] It can be observed that all elements of $\mathtt{BR}(w)$ are rich. \end{example} Thus, from Propositions \ref{u1} and \ref{u3} and Example \ref{u2}, we conclude the following. \begin{theorem} Let $w$ be a binary word and $l(w)$ be the length of the run sequence of $w$. If all elements of $\mathtt{BR}(w)$ are rich, then $2\leq l(w)\leq 8$. \end{theorem} We now consider the following example of a binary word $v$ with $3\leq l(v)\leq 8$ such that there exists an element in $\mathtt{BR}(v)$ that is not rich. \begin{example}\label{u4} For $3\leq l(v)\leq 8$, consider \[v=\left \{\begin{array}{cc} a^2b^3a^3(ba)^i & \text{for $i=\frac{l(v)-3}{2}$ and $l(v)$ odd,}\\ a^2b^3a^3(ba)^ib & \text{for $i=\frac{l(v)-4}{2}$ and $l(v)$ even,}\end{array} \right.\] and \[v'=\left \{\begin{array}{cc} (ba)^i a^2 b^2 a b a^2 & \text{for $i=\frac{l(v)-3}{2}$ and $l(v)$ odd,}\\ (ba)^i b a^2 b^2 a b a^2 & \text{for $i=\frac{l(v)-4}{2}$ and $l(v)$ even.} \end{array} \right.\] It can be observed that $v' \in \mathtt{BR}(v)$. Note that by Lemma \ref{rich}, $v'$ is not rich as $a^2 b^2 a b a^2$ is not rich. \end{example} We conclude from Examples \ref{u2} and \ref{u4} that all elements of $\mathtt{BR}(w)$ may or may not be rich for a binary word $w$ such that $3\leq l(w)\leq 8$.
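Examples \ref{u2} and \ref{u4} are finite checks and can be verified by brute force. The self-contained Python sketch below (helper names are ours, not from the text) does so for the case $l(w)=8$ of Example \ref{u2} and the case $i=0$, i.e., $l(v)=3$, of Example \ref{u4}.

```python
from itertools import combinations

def pal_factors(w):
    # Distinct non-empty palindromic factors of w.
    return {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1)
            if w[i:j] == w[i:j][::-1]}

def is_rich(w):
    # Rich: exactly |w| distinct non-empty palindromic factors.
    return len(pal_factors(w)) == len(w)

def block_reversal(w):
    # All words obtained by reversing the order of the blocks of a
    # decomposition of w into non-empty blocks.
    n, result = len(w), set()
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            b = (0,) + cuts + (n,)
            blocks = [w[b[i]:b[i + 1]] for i in range(len(b) - 1)]
            result.add(''.join(reversed(blocks)))
    return result

# Example u2 with l(w) = 8: every element of BR((ab)^4) is rich.
print(all(is_rich(x) for x in block_reversal("abababab")))   # prints True

# Example u4 with i = 0: v = a^2 b^3 a^3 and v' = a^2 b^2 a b a^2.
v, vp = "aabbbaaa", "aabbabaa"
print(vp in block_reversal(v), is_rich(vp))                  # prints True False
```

The witness decomposition for the second check is $v = aa\cdot b\cdot bba\cdot aa$, whose block reversal is $aa\cdot bba\cdot b\cdot aa = v'$.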
Now, we find the structure of binary words $w$ with $3\leq l(w)\leq 8$ such that the block reversal of $w$ contains only rich words. We recall the following from Mahalingam et al. (\cite{blore}). \begin{lemma}\label{f1}\cite{blore} For $w\in \Sigma^+$, $(\mathtt{BR}(w))^R = \mathtt{BR}(w^R)$. \end{lemma} We conclude the following from Lemmas \ref{rich} and \ref{f1}. \begin{remark}\label{o1} All elements of $\mathtt{BR}(w)$ are rich iff all elements of $\mathtt{BR}(w^R)$ are rich. \end{remark} We first study the case when the length of the run sequence of the word is equal to $8$. \begin{proposition}\label{g8} Let $w\in \{a, b\}^*$ and $l(w)=8$. Then, all elements of $\mathtt{BR}(w)$ are rich iff $w=abababab$. \end{proposition} \begin{proof} Let $w$ be a binary word with $l(w)=8$ such that all elements of $\mathtt{BR}(w)$ are rich. Consider $w=a^{n_1}b^{n_2}a^{n_3}b^{n_4}a^{n_5}b^{n_6}a^{n_7}b^{n_8}$ to be the run-length encoding of $w$ where $n_i\geq 1$ for all $i$. Let $$w'=b^{n_8+n_6} a^{n_7+n_5} b a b^{n_4-1+n_2} a^{n_3-1+n_1} \in \mathtt{BR}(w).$$ Then, $w'$ is rich. If $n_4 \geq 2$ or $n_2\geq 2$, then since $v=b^{2} a^{n_7+n_5} b a b^{2}\in Fac(w')$ and $v$ contains exactly two occurrences of $b^2$, by Theorem \ref{tglen}, $b^{2} a^{n_7+n_5} b a b^{2}$ is a palindrome, which is a contradiction. Thus, $n_2=n_4=1$. Now, by Remark \ref{o1}, we get $n_7=n_5=1$. Thus, $w=a^{n_1}ba^{n_3}bab^{n_6}ab^{n_8}$. Now, consider $$w''= b^{n_8} a^{2} b^{n_6+1} a b a^{n_3-1+n_1} \in \mathtt{BR}(w).$$ Then, $w''$ is rich. If $n_1\geq 2$ or $n_3\geq 2$, then since $v'=a^{2} b^{n_6+1} a b a^{2} \in Fac(w'')$ and $v'$ contains exactly two occurrences of $a^2$, by Theorem \ref{tglen}, $ a^{2} b^{n_6+1} a b a^{2}$ is a palindrome, which is a contradiction. Thus, $n_1=n_3=1$.
Now, by Remark \ref{o1}, we get $n_8=n_6=1$. Thus, $w=abababab$. The converse follows from Theorem \ref{7ric}. \end{proof} We conclude the following from Proposition \ref{g8}. \begin{remark} Let $w$ be a binary word such that $l(w)=8$ and $|w|>8$. Then, there exists an element in $\mathtt{BR}(w)$ that is not rich. \end{remark} We now consider the case when the length of the run sequence of the word is $7$. For a binary word $w$, if $l(w)=7$, then $|w|\geq 7$. If $|w|=7$, it is well known (\cite{pcofw}) that all elements of $\mathtt{BR}(w)$ are rich. We consider the case when $|w|>7$ in the following. \begin{proposition}\label{g7} Let $w$ be a binary word with $|w|>7$ and $l(w)=7$. Then, all elements of $\mathtt{BR}(w)$ are rich iff $w$ is $ababab^2a$, $abab^2aba$ or $ab^2ababa$. \end{proposition} \begin{proof} Let $w$ be a binary word with $l(w)=7$ such that all elements of $\mathtt{BR}(w)$ are rich. Consider $w=a^{n_1}b^{n_2}a^{n_3}b^{n_4}a^{n_5}b^{n_6}a^{n_7}$ to be the run-length encoding of $w$ where $n_i\geq 1$ for all $i$. Let $$\alpha = a^{n_7-1+n_5} b^{n_6} a b^{n_4+n_2} a^{n_3+n_1} \in \mathtt{BR}(w)$$ and $$\beta =a^{n_7-1+n_5} b^{n_6} a b^{n_4} a^{n_3+n_1} b^{n_2} \in \mathtt{BR}(w) .$$ Then, $\alpha,\; \beta$ are rich. If $n_7 \geq 2$ or $n_5 \geq 2$, then by Theorem \ref{tglen}, $a^{2} b^{n_6} a b^{n_4+n_2} a^{2}$ and $a^{2} b^{n_6} a b^{n_4} a^{2}$ are palindromic factors of $\alpha$ and $\beta $, respectively. This implies $n_6=n_4+n_2$ and $n_6=n_4$, which contradicts the fact that $n_2 \geq 1$. Thus, $n_7 = n_5 =1$. \\ Since $n_7=n_5=1$, by Remark \ref{o1}, we get $n_1=n_3=1$. Thus, $w=a b^{n_2}a b^{n_4}a b^{n_6} a$. We now show that $n_6\leq 2$. Consider $\gamma = b^{n_6-1} aa b a b^{n_4+n_2} a \in \mathtt{BR}(w)$. Then, $\gamma$ is rich.
If $n_6\geq 3$, then by Theorem \ref{tglen}, $b^{2} a^2 b a b^{2}$ is a palindromic factor of $\gamma$, which is a contradiction. Thus, $n_6\leq 2$. We have the following cases: \begin{enumerate} \item $n_6=2$: If $n_4\geq 2$ or $n_2\geq 2$, then $b^2 a^2 ba b^{n_4-1+n_2} a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, in this case, $n_4=n_2=1$. We have $w=a b a b a b^{2} a$. \item $n_6=1$: Here, $w=a b^{n_2}a b^{n_4}a b a$. If $n_4\geq 2$ and $n_2\geq 2$, then $b^{n_4} a b a^2 b^{n_2} a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. So, either $n_2=1$ or $n_4=1$. Note that if $n_2=n_4=1$, then $|w|=7$, which is a contradiction. We are left with the following cases: \begin{itemize} \item $n_2=1$ and $n_4 \geq 2$: Here, $w=a b a b^{n_4}a b a$. If $n_4\geq 3$, then $b^{n_4-1} a b a^2 b^2 a \in \mathtt{BR}(w)$ is not rich, a contradiction. Thus, $n_4 =2$ and $w=a b a b^{2}a b a$. \item $n_4=1$ and $n_2\geq 2$: Here, $w=a b^{n_2} a ba b a$. If $n_2\geq 3$, then $ a b^2 a b a^2 b^{n_2-1} \in \mathtt{BR}(w)$ is not rich, a contradiction. Thus, $n_2=2$ and $w=a b^2 a ba b a$. \end{itemize} \end{enumerate} The converse follows from Theorem \ref{7ric}. \end{proof} We conclude the following from Proposition \ref{g7}. \begin{remark} Let $w$ be a binary word such that $l(w)=7$ and $|w|>8$. Then, there exists an element in $\mathtt{BR}(w)$ that is not rich. \end{remark} We now consider the case when the length of the run sequence of the word is $6$. We need the following: \begin{remark}\label{rem2} We consider the block reversal of the following words: \begin{enumerate} \item Let $w=a^2 b a b a b^{n_6}$ for $n_6<4$. \begin{itemize} \item If $n_6=1$ or $2$, then as $|w|\leq 8$, by Theorem \ref{7ric}, all elements of $\mathtt{BR}(w)$ are rich.
\item If $n_6=3$, then $b a b^2 a^2 bab \in \mathtt{BR}(w)$ is not rich, which implies that not all elements of $\mathtt{BR}(w)$ are rich. \end{itemize} \item Let $w=a^3 b a b a b^{n_6}$ for $n_6<4$. \begin{itemize} \item If $n_6=1$, then by Theorem \ref{7ric}, all elements of $\mathtt{BR}(w)$ are rich. \item If $n_6=2$, then $a b a b^2 a^2 b a \in \mathtt{BR}(w)$ is not rich, which implies that not all elements of $\mathtt{BR}(w)$ are rich. \item If $n_6=3$, then $ba b^2 a^3 bab \in \mathtt{BR}(w)$ is not rich, which implies that not all elements of $\mathtt{BR}(w)$ are rich. \end{itemize} \item Let $w=a^{n_1} b a b a b$ for $n_1\geq 1$. We show that all elements of $\mathtt{BR}(w)$ are rich. Let $w=D_1 D_2 D_3 \cdots D_k$ where $k \geq 2$ and each $D_i \in \Sigma^+$. Then, either $D_1 = a^j$, $D_2=a^{n_1-j}x$ and $D_3 \cdots D_k = y$ where $x, y \in \Sigma^*$, $x y =babab$ and $1 \leq j <n_1$, or $D_1 = a^{n_1} x'$ and $D_2 D_3 \cdots D_k=y'$ where $x' \in \Sigma^*, y' \in \Sigma^+$ and $x' y' = babab$. Thus, to obtain all distinct elements of $\mathtt{BR}(w)$, it suffices to divide $w$ into at most seven non-empty blocks. \begin{itemize} \item When we divide $w$ into two non-empty blocks, then $\mathtt{BR}(w)$ contains the following:\\ Let $A_2 = \{ b a^{n_1} baba, ab a^{n_1} bab, bab a^{n_1} ba, abab a^{n_1} b, babab a^{n_1}, a^{n_1-i}babab a^i~ |~ 1 \leq i \leq n_1-1 \}.$ We can observe that each element of $A_2$ is rich.
\item When we divide $w$ into three non-empty blocks, then $\mathtt{BR}(w)$ contains the following:\\ Let $A_3 = \{ ba a^{n_1} bab, bba a^{n_1}ba, baba a^{n_1}b, bbaba a^{n_1}, b a^{n_1-i} baba a^{i}, abba^{n_1}ba, ababa^{n_1}b, ab bab a^{n_1},\\ ab a^{n_1-i} bab a^{i}, babba a^{n_1}, bab a^{n_1-i} ba a^{i}, abab b a^{n_1}, abab a^{n_1-i} b a^{i}, babab a^{n_1} ~|~ 1 \leq i \leq n_1-1 \}$. We can observe that each element of $A_3$ is rich. \item When we divide $w$ into four non-empty blocks, then $\mathtt{BR}(w)$ contains the following:\\ Let $A_4 = \{ bab a^{n_1} ba, baab a^{n_1} b, babab a^{n_1}, ba a^{n_1-i} bab a^{i}, bbaa a^{n_1} b, bb aba a^{n_1}, bba a^{n_1-i} ba a^{i}, baba a^{n_1-i} b a^{i},\\ b baba a^{n_1}, abba a^{n_1} b, abbba a^{n_1}, abb a^{n_1-i} ba a^{i}, ababb a^{n_1}, abab a^{n_1-i} b a^{i}, abb ab a^{n_1}, babba a^{n_1}, ababb a^{n_1} ~|~ 1 \leq i \leq n_1-1 \}$. We can observe that each element of $A_4$ is rich. \item When we divide $w$ into five non-empty blocks, then $\mathtt{BR}(w)$ contains the following:\\ Let $A_5 =\{ baba a^{n_1} b, babba a^{n_1}, bab a^{n_1-i} ba a^{i}, baab b a^{n_1}, baab a^{n_1-i} b a^{i}, ba bab a^{n_1}, bbaa b a^{n_1}, bb aa a^{n_1-i} b a^{i},\\ bbaba a^{n_1}, abbab a^{n_1}, abba a^{n_1-i} b a^{i}, ababb a^{n_1}, a bbb a a^{n_1} ~|~ 1 \leq i \leq n_1-1 \}$. We can observe that each element of $A_5$ is rich. \item When we divide $w$ into six non-empty blocks, then $\mathtt{BR}(w)$ contains the following:\\ Let $A_6 = \{ babab a^{n_1}, baba a^{n_1-i} b a^{i}, babba a^{n_1}, baabb a^{n_1}, bbaab a^{n_1}, abb ab a^{n_1} ~|~ 1 \leq i \leq n_1-1\}$. We can observe that each element of $A_6$ is rich. \item When we divide $w$ into seven non-empty blocks, then $\mathtt{BR}(w)$ contains the following:\\ Let $A_7=\{ babab a^{n_1} \}$. Clearly, $babab a^{n_1}$ is rich. Thus, each element of $A_7$ is rich.
Also, as $babab a^{n_1}$ is rich, $w=a^{n_1} babab $ is rich. \end{itemize} Therefore, as $\mathtt{BR}(w)= \{w\} \cup A_2 \cup A_3 \cup A_4 \cup A_5 \cup A_6 \cup A_7$, each element of $\mathtt{BR}(w)$ is rich. In a similar fashion, one can show that each element of $\mathtt{BR}(ababab^{n_1})$ is also rich. \end{enumerate} \end{remark} We now have the following: \begin{proposition}\label{g6} Let $w$ be a binary word with $|w|>7$ and $l(w)=6$. Then, all elements of $\mathtt{BR}(w)$ are rich iff $w\in \{u, (u^c)^R\;|\; u\in T\}$ where $$T=\{a b^2 a b a^2 b, a b a b^2 a^2 b, a b a b a^2 b^2, a^2 b a b^2 a b, a^2 b a b a b^2, a b a^2 b^2 a b, a^{n_1} b a b a b\; |\;n_1\geq 3\}.$$ \end{proposition} \begin{proof} Let $w$ be a binary word with $|w|>7$ and $l(w)=6$ such that all elements of $\mathtt{BR}(w)$ are rich. Consider $w=a^{n_1}b^{n_2}a^{n_3}b^{n_4}a^{n_5}b^{n_6}$ to be the run-length encoding of $w$ where $n_i\geq 1$ for all $i$. If $n_5\geq 3$, then by Theorem \ref{tglen}, $ b^{n_6-1} a^{n_5-1} b a b^{n_4+n_2} a^{n_3+n_1} \in \mathtt{BR}(w)$ is not rich. This implies $n_5\leq 2$. By Remark \ref{o1}, we get $n_2\leq 2$. We have the following cases: \begin{itemize} \item $n_5 = 2$: If $n_1 \geq 2$ or $n_3\geq 2$, then by Theorem \ref{tglen}, $ b^{n_6-1} a^{2} b a b^{n_4+n_2} a^{n_3-1+n_1} \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_1 =n_3= 1$. Thus, in this case, we get $w = a b^{n_2} a b^{n_4} a^2 b^{n_6}$ where $n_2\leq 2$. We have the following cases: \begin{itemize} \item $n_2=2$: Then, $w^R = b^{n_6} a^{2} b^{n_4} a b^{2} a$. By Remark \ref{o1}, all elements of $\mathtt{BR}(w^R)$ are rich. Here, $n_4=n_6=1$, otherwise $a b^2 a b a^2 b^{n_4-1+n_6} \in \mathtt{BR}(w^R)$ is not rich. Thus, $w = a b^{2} a b a^2 b$.
\item $n_2 = 1$: Then, $w = a b a b^{n_4} a^2 b^{n_6}$. We have the following cases: \begin{itemize} \item $n_4 \geq 2$: If $n_6 \geq 2$, then by Theorem \ref{tglen}, $b^{n_6} a^2 b a b^{n_4} a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Otherwise, $n_6 =1$ and $w = a b a b^{n_4} a^2 b$. If $n_4 \geq 3$, then by Theorem \ref{tglen}, $b^{n_4-1} a^2 b a b^2 a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_4 =2$ and $w = a b a b^{2} a^2 b$. \item $n_4=1$: If $n_6 \geq 3$, then by Theorem \ref{tglen}, $ b^{n_6-1} a^2 b a b^2 a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_6\leq 2$. If $n_6=1$, then $|w|=7$; thus, $n_6=2$ and $w = a b a b a^2 b^{2}$. \end{itemize} \end{itemize} \item $n_5 = 1$: Here, $w=a^{n_1}b^{n_2}a^{n_3}b^{n_4} a b^{n_6}$. We have the following cases: \begin{itemize} \item $n_1 \geq 2$: If $n_3\geq 2$, then by Theorem \ref{tglen}, $ a^{n_3} b^{n_4} a b^{n_6} a^{n_1} b^{n_2} \in \mathtt{BR}(w)$ for $n_4 \neq n_6$ (respectively, $ a^{n_3} b^{n_4} a b^{n_6+n_2} a^{n_1} \in \mathtt{BR}(w)$ for $n_4 = n_6$) is not rich, which is a contradiction. So, $n_3=1$ and $w= a^{n_1}b^{n_2} a b^{n_4}a b^{n_6}$ where $n_2\leq 2$. We have the following cases: \begin{itemize} \item $n_2 = 2$: Now, $w^R = b^{n_6} a b^{n_4} a b^2 a^{n_1}$. By Remark \ref{o1}, all elements of $\mathtt{BR}(w^R)$ are rich. If $n_4\geq 2$ or $n_6\geq 2$, then $a^{n_1-1} b^2 a b a^2 b^{n_4-1+n_6} \in \mathtt{BR}(w^R)$ is not rich, a contradiction. Thus, $n_4 = n_6=1$. We get $w= a^{n_1}b^{2} a b a b $. If $n_1\geq 3$, then by Theorem \ref{tglen}, $b a^{n_1-1} b^2 a b a^2 \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_1=2$ and $w= a^2 b^{2} a b a b$. \item $n_2 = 1$: Then, $w= a^{n_1} b a b^{n_4}a b^{n_6}$.
We have the following cases: \begin{itemize} \item $n_4\geq 2$: If $n_6 \geq 2$, then by Theorem \ref{tglen}, $b^{n_6} a^{n_1} b a b^{n_4}a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Otherwise, $n_6=1$. Here, $n_1=2$, otherwise, by Theorem \ref{tglen}, $b a^{n_1-1} b a b^{n_4} a^2 \in \mathtt{BR}(w)$ is not rich. Thus, $w= a^{2} b a b^{n_4} a b$. Also, $n_4=2$, otherwise, by Theorem \ref{tglen}, $ab b^{n_4-2} a^2 b a b^2 \in \mathtt{BR}(w)$ is not rich. Thus, $w= a^{2} b a b^{2} a b$. \item $n_4 = 1$: If $n_6\geq 4$, then by Theorem \ref{tglen}, $b^{n_6-2} a^{n_1} b a b a b^2\in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $w=a^{n_1} b a b a b^{n_6}$ for $1\leq n_6\leq 3$. In this case, if $n_1\geq 4$ and $2\leq n_6 \leq 3$, then by Theorem \ref{tglen}, $a^{n_1-2} b a b a b^{n_6} a^2 \in \mathtt{BR}(w)$ is not rich. We are left with the words $a^2 b a b a b^{n_6}$, $a^3 b a b a b^{n_6}$ for $n_6<4$, and $a^{n_1} b a b a b$ for $n_1\geq 1$. By Remark \ref{rem2}, one can conclude that if all elements of $\mathtt{BR}(w)$ are rich, then $w$ is $a^{n_1} b a b a b$ or $a^{2} b a b a b^{2}$ where $n_1 \geq 3$. \end{itemize} \end{itemize} \item $n_1=1$: Similarly to the case $n_1=2$, one can prove that if all elements of $\mathtt{BR}(w)$ are rich, then $w$ is of one of the following forms: $a b^2 a^2 b a b,\; a b a^2 b^2 a b, \; a b a^2 b a b^2,\; a b a b a b^{n_6} $, where $n_6 \geq 3$. \end{itemize} \end{itemize} The converse follows from Theorem \ref{7ric} and Remark \ref{rem2}. \end{proof} We now consider the case when the length of the run sequence of the word is $5$. \begin{proposition}\label{g5} Let $w$ be a binary word with $|w|>7$ and $l(w)=5$.
Then, all elements of $\mathtt{BR}(w)$ are rich iff $w$ is $a^{n_1}ba^{n_3}ba^{n_5}$, or one of $ab^{n_2}ab^{n_4}a^2$, $a^2b^{n_2}ab^{n_4}a$, $ab^{n_2}a^2b^{n_4}a$ with $n_2+n_4 =4$, where $n_i\geq 1$. \end{proposition} \begin{proof} Let $w$ be a binary word with $|w|>7$ and $l(w)=5$ such that all elements of $\mathtt{BR}(w)$ are rich. Consider $w=a^{n_1}b^{n_2}a^{n_3}b^{n_4}a^{n_5}$ to be the run-length encoding of $w$ where $n_i\geq 1$ for all $i$. If $n_2 = n_4 = 1$, then by Theorem \ref{tglen}, all elements of $\mathtt{BR}(w)$ are rich. Now, consider $n_2 + n_4 \geq 3$. If $n_5 \geq 3$, then by Theorem \ref{tglen}, $a^{n_5-1} b a b^{n_4-1+n_2} a^{n_3+n_1} \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_5\leq 2$. By Remark \ref{o1}, we get $n_1\leq 2$. We are left with the following cases: \begin{itemize} \item $n_5=2$: Then, $w=a^{n_1}b^{n_2}a^{n_3}b^{n_4}a^{2}$ where $n_2 + n_4 \geq 3$ and $n_1\leq 2$. If $n_3\geq 2$ or $n_1= 2$, then by Theorem \ref{tglen}, $a^{2} b a b^{n_4-1+n_2} a^{n_3-1+n_1} \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_3=n_1=1$ and, in this case, $w=a b^{n_2} a b^{n_4} a^{2}$ where $n_2 + n_4 \geq 3$. As $|w|>7$, we get $n_2+n_4\geq 4$. If $n_2+n_4= 4$, then by Theorem \ref{7ric}, all elements of $\mathtt{BR}(w)$ are rich. We now consider the case when $n_2 + n_4 \geq 5$. We have the following cases: \begin{itemize} \item $n_2=i$ for $1\leq i\leq 3$: Then, $n_4 \geq 5-i$ and $w=a b^i a b^{n_4} a^{2}$. By Theorem \ref{tglen}, we get that $b^{n_4-3+i} a^2 b a b^2 a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. \item $n_2 \geq 4$: Then, $n_4 \geq 1$ and $w=a b^{n_2} a b^{n_4} a^{2}$. If $n_4 = 1$, then $b^{n_2-2} a b a^2 b^2 a \in \mathtt{BR}(w)$ is not rich, which is a contradiction.
If $n_4 \geq 2$, then $b^{n_4} a^2 b a b^{n_2-1} a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. \end{itemize} Hence, if $n_5=2$ and all elements of $\mathtt{BR}(w)$ are rich, then $w=a b^{n_2} a b^{n_4} a^{2}$ where $n_2+n_4=4$. \item $n_5= 1$: If $n_1=2$, then by Remark \ref{o1}, we conclude from the case $n_5=2$ that if all elements of $\mathtt{BR}(w)$ are rich, then $w= a^2 b^{n_2} a b^{n_4} a$ where $n_2+n_4=4$. Otherwise, $n_1=1$. Then, $w=ab^{n_2}a^{n_3}b^{n_4}a$ where $n_2+n_4 \geq 3$. We have the following cases: \begin{itemize} \item $n_3 \geq 3$: If $n_2=n_4$, then since $n_2+n_4 \geq 3$, we get both $n_2,\;n_4 \geq 2$. By Theorem \ref{tglen}, $a^{n_3-1} b^{n_4} a b a^{2} b^{n_2-1} \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_2 \neq n_4$. By Theorem \ref{tglen}, $a^{n_3-1} b^{n_4} a b^{n_2} a^{2} \in \mathtt{BR}(w)$ is not rich, which is a contradiction. \item $n_3\leq 2$: Then, $w = a b^{n_2} a^{n_3} b^{n_4} a$ where $n_2+n_4\geq 3$. As $|w|>7$, we get $n_2+n_4\geq 4$. If $n_2+n_4 = 4$, then by Theorem \ref{7ric}, all elements of $\mathtt{BR}(w)$ are rich. Thus, $w = a b^{n_2} a^{n_3} b^{n_4} a$ where $n_2+n_4 = 4$. Now, we consider the case when $n_2+n_4\geq 5$. We have the following cases: \begin{itemize} \item $n_2=i$ for $1\leq i\leq 3$: Then, $n_4 \geq 5-i$ and $w=a b^i a^{n_3} b^{n_4} a$. By Theorem \ref{tglen}, we get that $b^{n_4-3+i} a b a^2 b^2 a \in \mathtt{BR}(w)$ for $n_3=2$ (respectively, $b^{n_4-3+i} a^2 b a b^2 \in \mathtt{BR}(w)$ for $n_3=1$) is not rich, which is a contradiction. \item $n_2 \geq 4$: Then, $n_4 \geq 1$ and $w = a b^{n_2} a^{n_3} b^{n_4} a$.
If $n_4=1$, then by Theorem \ref{tglen}, $b^{n_2-2} a^2 b a b^2 a \in \mathtt{BR}(w)$ for $n_3 =2$ (respectively, $b^{n_2-2} a b a^2 b^2 \in \mathtt{BR}(w)$ for $n_3 =1$) is not rich, which is a contradiction. Otherwise, $n_4 \geq 2$. By Theorem \ref{tglen}, $b^{n_4} a b a^2 b^{n_2-1} a \in \mathtt{BR}(w)$ for $n_3=2$ (respectively, $b^{n_4} a^2 b a b^{n_2-1} \in \mathtt{BR}(w)$ for $n_3=1$) is not rich, which is a contradiction. \end{itemize} \end{itemize} \end{itemize} The converse follows from Theorems \ref{tglen} and \ref{7ric}. \end{proof} We consider the case when the length of the run sequence of the word is $4$ and the length of the word is greater than $7$. We have the following result. \begin{proposition}\label{g4} Let $w$ be a binary word with $|w|>7$ and $l(w)=4$. Then, all elements of $\mathtt{BR}(w)$ are rich iff $w$ is of the form $ab^{n_2}ab^{n_4}$ or $a^{n_1}b a^{n_3}b$ or $w\in S$ where \[S=\big\{ a^{n_1}b^{n_2}a^{n_3}b^{n_4}\;|\; (n_2,n_4), (n_1,n_3) \in \{(3,1),(2,2),(1,3)\}\big\} \] and $n_i\geq 1$. \end{proposition} \begin{proof} Let $w$ be a binary word with $l(w)=4$ such that all elements of $\mathtt{BR}(w)$ are rich. Consider $w=a^{n_1}b^{n_2}a^{n_3}b^{n_4}$ to be the run-length encoding of $w$ where $n_i\geq 1$ for all $i$. If $n_1=n_3=1$ or $n_2= n_4 = 1$, then all elements of $\mathtt{BR}(w)$ are rich. Now, we consider the case $n_1+n_3 \geq 3$ and $n_2 + n_4 \geq 3$. If $n_3 \geq 4$, then for $j \neq r$ with $i+j=n_4$ and $r+s = n_2$, consider $\alpha = b^i a^{n_3-2} b^j a b^r a^{1+n_1} b^s \in \mathtt{BR}(w)$. By Theorem \ref{tglen}, $\alpha$ is not rich, which is a contradiction. Thus, $n_3 \leq 3$.
We have the following cases: \begin{itemize} \item $n_3=3$: Then, for $j' \neq r'$ with $i'+j'=n_4$ and $r'+s' = n_2$, consider $\beta = b^{i'} a^{2} b^{j'} a b^{r'} a^{n_1} b^{s'} \in \mathtt{BR}(w)$. If $n_1 \geq 2$, then by Theorem \ref{tglen}, $\beta$ is not rich, which is a contradiction. Thus, if $n_3=3$, then $n_1=1$. \item $n_3=2$: Then, for $j'' \neq n_2$ with $i''+j''=n_4$, consider $\gamma = b^{i''} a^{2} b^{j''} a b^{n_2} a^{n_1-1} \in \mathtt{BR}(w) $. If $n_1 \geq 3$, then $\gamma $ is not rich, which is a contradiction. Thus, if $n_3=2$, then $n_1\leq 2$. \item $n_3=1$: Then, by Theorem \ref{tglen}, for $n_1 \geq 4$, $a^{n_1-2}b^{n_2}a b^{n_4} a^2 \in \mathtt{BR}(w)$ is not rich when $n_2 \neq n_4$, and $ b a^{n_1-2}b^{n_2}a b^{n_4-1} a^2 \in \mathtt{BR}(w)$ is not rich when $n_2 = n_4$. Thus, if $n_3=1$, then $n_1\leq 3$. \end{itemize} Now, $w^R = b^{n_4} a^{n_3} b^{n_2} a^{n_1}$. Then, from Remark \ref{o1}, one can similarly deduce that $n_2\leq 3$, and we also conclude the following: \begin{itemize} \item If $n_2=3$, then $n_4=1$. \item If $n_2=2$, then $n_4\leq 2$. \item If $n_2=1$, then $n_4\leq 3$. \end{itemize} Hence, as $|w|>7$, we get $w\in S$ where \[S=\big\{ a^{n_1}b^{n_2}a^{n_3}b^{n_4}\;|\; (n_2,n_4), (n_1,n_3) \in \{(3,1),(2,2),(1,3)\}\big\} \] for $n_i\geq 1$ and $n_1+n_2+n_3+n_4>7$.\\ The converse follows from Theorems \ref{tglen} and \ref{7ric}. \end{proof} We consider the case when the length of the run sequence of the word is $3$ and the length of the word is greater than $7$. We need the following result. \begin{lemma}\label{l1} Let $w=a^{n_1} b^{n_2} a$ or $a b^{n_2} a^{n_1}$ where $n_1\geq 1$ and $n_2\in \{3, 4\}$. Then, all elements of $\mathtt{BR}(w)$ are rich. \end{lemma} \begin{proof} We prove the result for $w=a^{n_1} b^{n_2} a$. The proof for $w=a b^{n_2} a^{n_1}$ is similar.
Let $w=a^{n_1} b^{n_2} a$, where $n_1\geq 1$ and $n_2\in \{3, 4\}$. Consider $w' \in \mathtt{BR}(w)$. Then, one can observe that $2\leq l(w')\leq 6$. If $l(w')\leq 4$, then from Proposition \ref{runlen}, $w'$ is rich. We are left with the following cases: \begin{itemize} \item $l(w')=5$: Then, $w'$ is either $a b^{i_1} a^{i_2} b^{i_3} a^{i_4}$ or $b^{j_1} a b^{j_2} a^{n_1} b^{j_3}$, where $i_1+i_3=n_2$, $i_2+i_4 = n_1$ and $j_1+j_2+j_3=n_2$. By Theorem \ref{tglen}, $w'$ is rich. \item $l(w')=6$: Then, $w'= b^{k_1} a b^{k_2} a^{k_3} b^{k_4} a^{k_5}$, where $k_1+k_2+k_4=n_2$ and $k_3+k_5=n_1$. By Theorem \ref{tglen}, $w'$ is rich. \end{itemize} \end{proof} We have the following: \begin{proposition}\label{g3} Let $w$ be a binary word with $|w|>7$ and $l(w)=3$. Then, all elements of $\mathtt{BR}(w)$ are rich iff $w$ has one of the following forms: \begin{itemize} \item $a^2 b^4 a^2$, $ab^{m_2}a$, $a^{m_1}ba^{m_3}$, $a^{m_1}b^2a^{m_3}$. \item $a b^{n_2} a^{n_3}, a^{n_1} b^{n_2} a $, where $n_2 \in \{ 3, 4\}$ and $ n_1, n_3 \geq 3$. \end{itemize} where $m_i\geq 1$. \end{proposition} \begin{proof} Let $w$ be a binary word with $l(w)=3$ such that all elements of $\mathtt{BR}(w)$ are rich. Let $w=a^{n_1}b^{n_2}a^{n_3}$ be the run-length encoding of $w$, where $n_i\geq 1$ for all $i$. If $n_1=n_3=1$ or $n_2 \in \{1, 2\}$, then all elements of $\mathtt{BR}(w)$ are rich. Here, $w=ab^{n_2}a$ or $a^{n_1}ba^{n_3}$ or $a^{n_1}b^2a^{n_3}$. Now, we consider $n_1+n_3 \geq 3$ and $n_2 \geq 3$. Let $n_2\geq 5$, and for $j \neq r$, $i+j=n_3$ and $r+s=n_1$, consider $\alpha = a^i b^2 a^j b a^r b^{n_2-3} a^s\in \mathtt{BR}(w)$. Since $b^2 a^j b a^r b^{2}$ is not a palindrome, by Theorem \ref{tglen}, $\alpha$ is not rich, which is a contradiction. Thus, $n_2\leq 4$. So, $n_2 \in \{ 3, 4\}$.
We have the following cases: \begin{itemize} \item $n_3\geq 3$: Then, consider $\beta = a^{n_3-1} b a b^2 a^{n_1} b^{n_2-3} \in \mathtt{BR}(w)$. If $n_1\geq 2$, then since $a^{2} b a b^2 a^{2}$ is not a palindrome, by Theorem \ref{tglen}, $\beta$ is not rich, which is a contradiction. Thus, if $n_3 \geq 3$, then $n_1=1$. Here, $w= a b^{n_2} a^{n_3}$, where $n_2 \in \{ 3, 4\}$ and $ n_3 \geq 3$. \item $n_3=2$: Then, consider $\beta' = a^2 b a b^{n_2-1} a^{n_1-1}\in \mathtt{BR}(w)$. If $n_1\geq 3$, then $\beta'$ is not rich. Thus, if $n_3=2$, then $n_1 \leq 2$. Thus, as $|w|\geq 8$, $w= a^2 b^4 a^2$. \item $n_3=1$: Then, by Lemma \ref{l1}, all elements of $\mathtt{BR}(w)$ are rich. \end{itemize} The converse follows from Theorems \ref{tglen} and \ref{7ric} and Lemma \ref{l1}. \end{proof} We conclude the following from Propositions \ref{g8}, \ref{g7}, \ref{g6}, \ref{g5}, \ref{g4} and \ref{g3} for binary words with $3\leq l(w) \leq 8$. \begin{theorem} Let $w$ be a binary word with $|w|>7$.
Then, all elements of $\mathtt{BR}(w)$ are rich for \begin{enumerate} \item $l(w)=8$ iff $w=abababab$; \item $l(w)=7$ iff $w$ is $ababab^2a$, $abab^2aba$ or $ab^2ababa$; \item $l(w)=6$ iff $w\in \{u, (u^c)^R\;|\; u\in T\}$, where $$T=\{a b^2 a b a^2 b, a b a b^2 a^2 b, a b a b a^2 b^2, a^2 b a b^2 a b, a^2 b a b a b^2, a b a^2 b^2 a b, a^{n_1} b a b a b\; |\;n_1\geq 3\};$$ \item $l(w)=5$ iff $w$ is $a^{n_1}ba^{n_3}ba^{n_5}$, $ab^{n_2}ab^{n_4}a^2$, $a^2b^{n_2}ab^{n_4}a$ or $ab^{n_2}a^2b^{n_4}a$, where $n_2+n_4 =4$; \item $l(w)=4$ iff $w$ is $ab^{n_2}ab^{n_4}$ or $a^{n_1}b a^{n_3}b$, or $w\in S$, where \[S=\big\{ a^{n_1}b^{n_2}a^{n_3}b^{n_4}\;|\; (n_2,n_4), (n_1,n_3) \in \{(3,1),(2,2),(1,3)\}\big\};\] \item $l(w)=3$ iff $w$ has one of the following forms: \begin{itemize} \item $a^2 b^4 a^2$, $ab^{n_2}a$, $a^{n_1}ba^{n_3}$, $a^{n_1}b^2a^{n_3}$; \item $a b^{n_2} a^{n_3}, a^{n_1} b^{n_2} a $, where $n_2 \in \{ 3, 4\}$ and $ n_1, n_3 \geq 3$; \end{itemize} \end{enumerate} where $n_i\geq 1$. \end{theorem} \section{Conclusions} In this paper, we have characterized the words whose block reversal contains only rich words. We have found necessary and sufficient conditions on a non-binary word such that all elements of its block reversal are rich. For a binary word, we have shown that the result varies with the length of the run sequence of the word and the structure of the word. In the future, we would like to find an upper bound on the number of elements in the block reversal of a binary word. It would also be interesting to study other combinatorial properties, such as counting primitive words, bordered and unbordered words, in the block reversal set of a word. \section{Appendix} Proof of the subcase $n_5=1$ and $n_1=1$ in Proposition \ref{g6}: \begin{proof} Let $n_1 = 1$. Then, $w= a b^{n_2} a^{n_3} b^{n_4} a b^{n_6}$, where $n_2\leq 2$. We have the following cases: \begin{itemize} \item $n_2 = 2$: Then, $w^R = b^{n_6} a b^{n_4} a^{n_3} b^{2} a$.
If $n_4 \geq 2$ or $n_6 \geq 2$, then by Theorem \ref{tglen}, $b^2 a b a^{n_3+1} b^{n_4-1+n_6} \in \mathtt{BR}(w^R)$ is not rich, which is a contradiction. Thus, $n_4 = n_6=1$ and $w= a b^{2} a^{n_3} b a b $. If $n_3\geq 3$, then by Theorem \ref{tglen}, $a^{n_3-1} b a b^3 a^2 \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, as $|w|>7$, $n_3=2$ and $w= a b^{2} a^{2} b a b $. \item $n_2 = 1$: Then, $w= a b a^{n_3} b^{n_4} a b^{n_6}$. If $n_4 \geq 2$ and $n_6 \geq 2$, then by Theorem \ref{tglen}, for $n_3 \geq 2$, $b^{n_6} a b a^{n_3} b^{n_4}a \in \mathtt{BR}(w)$ and, for $n_3=1$, $b^{n_6} a^2 b a b^{n_4} \in \mathtt{BR}(w)$ are not rich, which is a contradiction. Then, we have the following cases: \begin{itemize} \item $n_4 \geq 2$ and $n_6 = 1$: Then, by Theorem \ref{tglen}, for $n_3 \geq 3$, $b a^{n_3-1} b^{n_4} a b a^2 \in \mathtt{BR}(w)$ is not rich, which is a contradiction. For $n_3=2$, $w= a b a^2 b^{n_4} a b$. If $n_4\geq 3$, then by Theorem \ref{tglen}, $a b^{n_4-1} a b a^2 b^2 \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_4=2$ and $w= a b a^2 b^{2} a b$. For $n_3=1$, $w= a b a b^{n_4} a b$. Since $|w|>7$, $n_4 \geq 3$. Then, by Theorem \ref{tglen}, $b^2 a^2 b a b^{n_4-1} \in \mathtt{BR}(w)$ is not rich, which is a contradiction. \item $n_6 \geq 2$ and $n_4 = 1$: Then, $w= a b a^{n_3} b a b^{n_6}$. Now, by Theorem \ref{tglen}, for $n_3 \geq 3$, $a^{n_3-1} b a b^{n_6} a^2 b\in \mathtt{BR}(w)$ is not rich, which is a contradiction. For $n_3=2$, $w= a b a^{2} b a b^{n_6}$. Then, by Theorem \ref{tglen}, for $n_6 \geq 3$, $b^{n_6-1}a^2 b a b^2 a \in \mathtt{BR}(w)$ is not rich, which is a contradiction. Thus, $n_6=2$ and $w= a b a^{2} b a b^{2}$. For $n_3=1$, $w= a b a b a b^{n_6}$.
\item $n_4 = 1$ and $n_6 = 1$: Then, $w= a b a^{n_3} b a b$. Now, by Theorem \ref{tglen}, for $n_3 \geq 3$, $a^{n_3-1} b a b^2 a^2 \in \mathtt{BR}(w)$ is not rich, which is a contradiction. For $n_3\leq 2$, $|w|<8$. Hence, we omit this case. \end{itemize} \end{itemize} \end{proof} \end{document}
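The case analyses above are easy to check computationally. The following is a minimal sketch (the function names are ours, not from the paper), using the standard characterization that a word $w$ is rich iff it has exactly $|w|$ distinct nonempty palindromic factors, and taking the block reversal $\mathtt{BR}(w)$ to be the set of all words obtained by cutting $w$ into blocks and concatenating the blocks in reverse order:

```python
from itertools import combinations

def palindromic_factors(w):
    # all distinct nonempty palindromic factors of w
    return {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1)
            if w[i:j] == w[i:j][::-1]}

def is_rich(w):
    # a word is rich iff it has |w| distinct nonempty palindromic factors
    return len(palindromic_factors(w)) == len(w)

def block_reversal(w):
    # BR(w): split w into blocks in every possible way, reverse the block order
    n, out = len(w), set()
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            pos = (0,) + cuts + (n,)
            blocks = [w[pos[i]:pos[i + 1]] for i in range(len(pos) - 1)]
            out.add("".join(reversed(blocks)))
    return out

# Item (1) of the theorem: for w = abababab every element of BR(w) is rich.
print(all(is_rich(v) for v in block_reversal("abababab")))  # True
```

Brute force suffices here because $|\mathtt{BR}(w)| \le 2^{|w|-1}$ and richness is checked in $O(|w|^3)$ time per word.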
\begin{document} \title{Testing nonclassicality in multimode fields:\\ a unified derivation of classical inequalities} \author{Adam Miranowicz} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Faculty of Physics, Adam Mickiewicz University, PL-61-614 Pozna\'n, Poland} \author{Monika Bartkowiak} \affiliation{Faculty of Physics, Adam Mickiewicz University, PL-61-614 Pozna\'n, Poland} \author{Xiaoguang Wang} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Zhejiang Institute of Modern Physics, Department of Physics, Zhejiang University, Hangzhou 310027, China} \author{Yu-xi Liu} \affiliation{Institute of Microelectronics, Tsinghua University, Beijing 100084, China} \affiliation{Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University, Beijing 100084, China} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \author{Franco Nori} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Physics Department, The University of Michigan, Ann Arbor, Michigan 48109-1040, USA} \date{\today} \begin{abstract} We consider a way to generate operational inequalities to test the nonclassicality (or quantumness) of multimode bosonic fields (or multiparty bosonic systems) that unifies the derivation of many known inequalities and allows us to propose new ones. The nonclassicality criteria are based on Vogel's criterion, corresponding to analyzing the positivity of multimode $P$~functions or, equivalently, the positivity of matrices of expectation values of, e.g., creation and annihilation operators. We analyze not only monomials, but also polynomial functions of such moments, which can sometimes enable simpler derivations of physically relevant inequalities.
As an example, we derive various classical inequalities which can be violated only by nonclassical fields. In particular, we show how the criteria introduced here easily reduce to the well-known inequalities describing: (a) multimode quadrature squeezing and its generalizations, including sum, difference and principal squeezing; (b) two-mode one-time photon-number correlations, including sub-Poisson photon-number correlations and effects corresponding to violations of the Cauchy-Schwarz and Muirhead inequalities; (c) two-time single-mode photon-number correlations, including photon antibunching and hyperbunching; and (d) two- and three-mode quantum entanglement. Other simple inequalities for testing nonclassicality are also proposed. We have found some general relations between the nonclassicality and entanglement criteria, in particular, those resulting from the Cauchy-Schwarz inequality. It is shown that some known entanglement inequalities can be derived as nonclassicality inequalities within our formalism, while some other known entanglement inequalities can be seen as sums of more than one inequality derived from the nonclassicality criterion. This approach enables a deeper analysis of the entanglement for a given nonclassicality. \end{abstract} \pacs{42.50.Ar, 42.50.Xa, 03.67.Mn} \maketitle \pagenumbering{arabic} \section{Introduction} Testing whether a given state of a system cannot be described within a classical theory has been one of the fundamental problems of quantum theory from its beginnings to current studies in, e.g., quantum optics~\cite{Glauber,Sudarshan,DodonovBook,VogelBook,MandelBook,PerinaBook,Walls79,Loudon80,Loudon87,Smirnov87,Klyshko96,Dodonov02}, condensed matter (see, e.g., Refs.~\cite{DodonovBook,Nori}), nanomechanics~\cite{Schwab05,Wei06}, and quantum biology (see, e.g., Ref.~\cite{qbiology}).
Macroscopic quantum superpositions (being at the heart of the Schr\"odinger-cat paradox) and related entangled states (which are at the core of the Einstein-Podolsky-Rosen paradox and Bell's theorem) are famous examples of nonclassical states, which are not only physical curiosities but are now fundamental resources for quantum-information processing~\cite{Nielsen}. All states (or phenomena) are quantum, i.e., nonclassical. Thus, it is quite arbitrary to call some states ``classical''. Nevertheless, some states are closer to their classical approximation than other states. The most classical {\em pure} states of the harmonic oscillator are coherent states. Thus, usually, they are considered classical, while all other pure states of the harmonic oscillator are deemed nonclassical. The nonclassicality criterion for {\em mixed} states is more complicated and is based on the Glauber-Sudarshan $P$~function~\cite{Glauber,Sudarshan}. A commonly accepted formal criterion which enables one to distinguish nonclassical from classical states reads as follows~\cite{DodonovBook,VogelBook,MandelBook,PerinaBook}: A quantum state is {\em nonclassical} if its Glauber-Sudarshan $P$~function cannot be interpreted as a {\em true} probability density. Note that, according to this definition, any entangled state is nonclassical, but not every separable state is classical. Various operational criteria of nonclassicality (or quantumness) of single-mode fields have been proposed~(see, e.g., Refs.~\cite{DodonovBook,VogelBook,Richter02,Rivas} and references therein). In particular, Agarwal and Tara~\cite{Agarwal92} and Shchukin, Richter and Vogel (SRV)~\cite{NCL1,NCL2} proposed nonclassicality criteria based on matrices of moments of annihilation and creation operators for single-mode fields. Moreover, an efficient method for measuring such moments was also developed by Shchukin and Vogel~\cite{SVdetect}.
It is not always sufficient to analyze a single-mode field, i.e., an elementary excitation of a normal mode of the field confined to a one-dimensional cavity. To describe the generation or interaction of two or more bosonic fields, the standard analysis of single-system nonclassicality should be generalized to the two- and multi-system (multimode) case. Simple examples of such bosonic fields are multimode number states, multimode coherent and squeezed light, or fields generated in multi-wave mixing, multimode scattering, or multi-photon resonance. Here, we study in greater detail and modify an operational criterion of nonclassicality for multimode radiation fields of Vogel~\cite{Vogel08}, which is a generalized version of the SRV nonclassicality criterion~\cite{NCL1,NCL2} for single-mode fields. It not only describes the multimode fields, but can also be applied in the analysis of the dynamics of radiation sources. This could be important for the study of, e.g., time-dependent correlation functions, which are related to time-dependent field commutation rules (see, e.g., subsections 2.7 and 2.8 in Ref.~\cite{VogelBook}). A variety of multimode nonclassicality inequalities has been proposed in quantum optics~(see, e.g., textbooks~\cite{DodonovBook,VogelBook,MandelBook,PerinaBook}, reviews~\cite{Walls79,Loudon80,Loudon87,Klyshko96,Smirnov87}, and Refs.~\cite{Yuen76,Kozierowski77,Caves85,Reid86,Dalton86,Schleich87,Agarwal88,Luks88,Hillery89,Lee90,Zou90,Klyshko96pla,Miranowicz99a,Miranowicz99b,An99,An00,Jakob01}) and tested experimentally (see, e.g., Refs.~\cite{Clauser74,Kimble77,Short83,Slusher85,Grangier86,Hong87,Lvovsky02}). The nonclassicality criterion described here enables a simple derivation of them. Moreover, it offers an effective way to derive new inequalities, which might be useful in testing the nonclassicality of specific states generated in experiments.
It is worth noting that we are analyzing nonclassicality criteria but {\em not} a degree of nonclassicality. We admit that the latter problem is experimentally important and a few ``measures'' of nonclassicality have been proposed~\cite{Hillery87,Lee92,Lutkenhaus95,Dodonov00,Marian02,Dodonov03,Malbouisson03,Kenfack04,Asboth05,Boca09}. Analogously to the SRV nonclassicality criteria, Shchukin and Vogel~\cite{SV05} proposed an entanglement criterion based on the matrices of moments and partial transposition. This criterion was later amended~\cite{MP06} and generalized~\cite{MPHH09} to replace partial transposition by nondecomposable positive maps and contraction maps (e.g., realignment). A similar approach for entanglement verification, based on the construction of matrices of expectation values, was also investigated in Refs.~\cite{Rigas06,Korbicz06,Moroder08,Haseler08}. Here we analyze relations between classical inequalities derived from the two- and three-mode nonclassicality criteria and the above-mentioned entanglement criterion. The article is organized as follows: In Sect.~\ref{Sect2}, a nonclassicality criterion for multimode bosonic fields is formulated. We apply the criterion to rederive known and a few apparently new nonclassicality inequalities. In subsection~\ref{Sect3a}, we summarize the Shchukin-Vogel entanglement criterion~\cite{SV05,MP06}. In subsection~\ref{Sect3b}, we apply it to show that some known entanglement inequalities (including those of Duan {\em et al.}~\cite{Duan} and Hillery and Zubairy~\cite{Hillery06}) exactly correspond to unique nonclassicality inequalities. In subsection~\ref{Sect3c}, we analyze such entanglement inequalities (including Simon's criterion~\cite{Simon}) that are represented apparently not by a single inequality but by sums of inequalities derived from the nonclassicality criterion. Moreover, other entanglement inequalities are derived in subsection~\ref{Sect3c2}.
The discussed nonclassicality and entanglement criteria are summarized in Tables~I and~II. We conclude in Sect.~\ref{Sect4}. \section{Nonclassicality criteria for multimode fields\label{Sect2}} An $M$-mode bosonic state $\hat\rho$ can be completely described by the Glauber-Sudarshan $P$~function defined by~\cite{Glauber,Sudarshan}: \begin{eqnarray} \hat\rho &=& \intda P(\bm{\alpha,\alpha}^*)|\bm{\alpha}\rangle\langle \bm{\alpha}|, \label{N01} \end{eqnarray} where $|\bm{\alpha}\rangle= \prod_{m=1}^M|\alpha_m\rangle$ and $|\alpha_m\rangle$ is the $m$th-mode coherent state, i.e., the eigenstate of the $m$th-mode annihilation operator $\hat a_m$, $\bm{\alpha}$ denotes the complex multivariable $(\alpha_1,\alpha_2,...,\alpha_M)$, and ${\rm d}^2 \bm{\alpha}=\prod_{m}{\rm d}^2\alpha_m$. The density matrix $\hat\rho$ can be supported on the tensor product of either infinite-dimensional or finite-dimensional Hilbert spaces. For the sake of simplicity, we assume the number $M$ of modes to be finite, but there is no problem in generalizing our results to an infinite number of modes. A criterion of nonclassicality is usually formulated as follows~\cite{Titulaer65}: \begin{criterion} A multimode bosonic state $\hat\rho$ is considered to be nonclassical if its Glauber-Sudarshan $P$~function cannot be interpreted as a classical probability density, i.e., it is nonpositive or more singular than Dirac's delta function. Conversely, a state is called classical if it is described by a $P$~function being a classical probability density. \end{criterion} It is worth noting that Criterion~1 (and the following criteria) does not have a fundamental indisputable validity, and it was the subject of criticism by, e.g., W\"unsche~\cite{Wunsche04}, who made the following two observations.
(i) In the vicinity of any classical state there are nonclassical states, as can be illustrated by analyzing modified thermal states. So, arbitrarily close to any classical state there is a nonclassical state giving, to arbitrary precision, exactly the same outcomes as for the classical state in any measurement. Note that analogous problems can be raised for entanglement criteria~\cite{MPHH09} for continuous-variable systems, as in the vicinity of any separable state there are entangled states.~\footnote{It is worth stressing that this is the case only for continuous-variable systems: in the finite-dimensional case, the set of separable states has finite volume.} (ii) There are intermediate quasiclassical (or unorthodox classical) states, which {\em cannot} be clearly classified as classical or nonclassical according to Criterion~1. This can be illustrated by analyzing the squeezing of thermal states, which does not lead immediately from classical to nonclassical states. Due to the singularity of the $P$~function, Criterion~1 is not operationally useful, as it is extremely difficult (although sometimes possible~\cite{Kiesel}) to directly reconstruct the $P$~function from experimental data. Recently, Shchukin, Richter and Vogel~\cite{NCL1,NCL2} proposed a hierarchy of operational criteria of nonclassicality of single-mode bosonic states. This approach is based on the normally ordered moments of, e.g., annihilation and creation operators or position and momentum operators. An infinite set of these criteria (by inclusion of the correction analogous to that given in Ref.~\cite{MP06}) corresponds to a single-mode version of Criterion~1.
Let us consider a (possibly infinite) countable set $\hat F=(\hat f_{1},\hat f_{2},\ldots,\hat f_{i},\ldots)$ of $M$-mode operators $\hat f_i\equiv \hat f_i (\hat {\bf a},\hat {\bf a}^\dagger)$, each a function of annihilation, $\hat {\bf a}\equiv (\hat a_1,\hat a_2,...,\hat a_M)$, and creation, $\hat {\bf a}^\dagger$, operators. For example, we may choose such operators as monomials \begin{eqnarray} \label{eq:product} \hat f_{i} = \prod_{m=1}^M (\hat a^\dagger_m)^{i_{2m-1}} \hat a_m^{i_{2m}}, \end{eqnarray} where $i$ stands in this case for the multi-index ${\bf i}\equiv (i_{1},i_{2},...,i_{2M})$, but the $\hat f_{i}$'s can be more complicated functions, for example polynomials in the creation and annihilation operators. If \begin{equation} \label{N02} \hat f = \sum_{i} c_{i} \hat f_{i}, \end{equation} where $c_{i}$ are arbitrary complex numbers, then with the help of the $P$~function one can directly calculate the normally ordered (denoted by $::$) mean values of the Hermitian operator $\hat f^\dagger \hat f$ as follows~\cite{NCL1,Korbicz}: \begin{eqnarray} \normal{\hat f^\dagger \hat f } &=& \intda |f(\bm{\alpha,\alpha}^*)|^2 P(\bm{\alpha,\alpha}^*) . \label{N03} \end{eqnarray} The crucial observation of SRV~\cite{NCL1} in the derivation of their criterion is the following: \begin{observation} \label{obs:obsSRV} If the $P$~function for a given state is a classical probability density, then ${\normal{\hat f^\dagger \hat f}}\ge 0$ for any function $\hat f$. Conversely, if $\normal{ \hat f^\dagger \hat f } < 0$ for some $\hat f$, then the $P$~function is not a classical probability density. \end{observation} The condition based on nonpositivity of the $P$~function is usually considered a necessary and sufficient condition of nonclassicality.
In fact, as shown by Sperling~\cite{Sperling}, if the $P$~function is more singular than Dirac's $\delta$-function [e.g., given by the $n$th derivative of $\delta(\alpha)$ for $n=1,2,...$], then it is also nonpositive. With the help of Eq.~(\ref{N02}), $\normal{\hat f^\dagger \hat f}$ can be given by \begin{eqnarray} \normal{\hat f^\dagger \hat f} &=& \sum_{i,j} c^*_{i}c_{j} M^{\rm (n)}_{ij}(\hat\rho) \label{N04} \end{eqnarray} in terms of the normally ordered correlation functions \begin{eqnarray} M^{\rm (n)}_{ij}(\hat\rho) &=& {\rm Tr}\,(: \hat f_{i}^\dagger \hat f_{j}:\, \hat\rho), \label{N05} \end{eqnarray} where the superscript $(n)$ (similarly to $:\,:$) denotes the normal order of field operators. In the special case of two modes, analyzed in detail in the next sections, and with the choice of $\hat f_i$ given by Eq.~\eqref{eq:product}, Eq.~(\ref{N05}) can be simply written as \begin{equation} M^{\rm (n)}_{ij}(\hat\rho) ={\rm Tr}\,\big[:({\hat a}^{\dagger i_{1}}{\hat a}^{i_{2}}{\hat b}^{\dagger i_{3}}{\hat b}^{i_{4}})^\dagger ({\hat a}^{\dagger j_{1}}{\hat a}^{j_{2}}{\hat b} ^{\dagger j_{3}}{\hat b}^{j_{4}}):\hat\rho\big], \label{N06} \end{equation} where $\hat a=\hat a_1$ and $\hat b=\hat a_2$. It is worth noting that there is an efficient optical scheme~\cite{SVdetect} for measuring the correlation functions~(\ref{N06}). With a set $\hat F=(\hat f_{1},\hat f_{2},\ldots,\hat f_{i},\ldots)$ fixed, the correlations \eqref{N05} form a (possibly infinite) Hermitian matrix \begin{eqnarray} M^{\rm (n)}(\hat\rho) = [M^{\rm (n)}_{ij}(\hat\rho)]. \label{N07} \end{eqnarray} In order to emphasize the dependence of~(\ref{N07}) on the choice of $\hat F$, we may write $M^{\rm (n)}_{\hat F}(\hat\rho)$.
Moreover, let $[M^{\rm (n)}(\hat\rho)]_{\bf r}$, with ${\bf r}=(r_1,\ldots,r_N)$, denote the $N \times N$ principal submatrix of $M^{\rm (n)}(\hat\rho)$ obtained by deleting all rows and columns except the ones labeled by $r_1,\ldots,r_N$. Analogously to Vogel's approach~\cite{Vogel08}, by applying Sylvester's criterion (see, e.g., \cite{Strang,MP06}) to the matrix~(\ref{N07}), a generalization of the single-mode SRV criterion to multimode fields can be formulated as follows: \begin{criterion} For any choice of $\hat F=(\hat f_{1},\hat f_{2},\ldots,\hat f_{i},\ldots)$, a multimode state $\hat\rho$ is nonclassical if there exists a negative principal minor, i.e., $\det [M_{\hat F}^{\rm (n)}(\hat\rho)]_{\bf r}< 0$, for some ${\bf r}\equiv(r_1,\ldots,r_N)$, with $1\le r_1< r_2<\ldots < r_{N}$. \end{criterion} According to Vogel~\cite{Vogel08}, this criterion (and the following Criterion~3) can also be applied to describe the nonclassicality of space-time correlations and the dynamics of radiation sources by applying the generalized $P$~function: \begin{eqnarray} P(\bm{\alpha,\alpha}^*) &=& \left\langle \dd \prod_{i=1}^M \delta(\hat a_i - \alpha_i)\dd \right\rangle, \label{VogelP} \end{eqnarray} where $\bm{\alpha}=(\alpha_1,...,\alpha_M)$, with $\alpha_i=\alpha_i({\bf r}_i,t_i)$ depending on the space-time arguments $({\bf r}_i,t_i)$. In contrast to the standard definition of the $P$~function, the symbol $\dd\,\dd$ denotes both the normal order of field operators and the time order, i.e., time arguments increase to the right (left) in products of creation (annihilation) operators~\cite{VogelBook}. As an example, we will apply this generalized criterion to show the nonclassicality of photon antibunching and hyperbunching effects in Appendix C.
Note that Criterion~2, even for the choice of $\hat f_i$ given by Eq.~\eqref{eq:product} and in the special case of single-mode fields, does not exactly reduce to the SRV criterion as it appeared in Ref.~\cite{NCL2}. To show this, let us denote by $M^{\rm (n)}_N(\hat\rho)$ the submatrix corresponding to the first $N$ rows and columns of $M^{\rm (n)}(\hat\rho)$. According to the original SRV criterion (Theorem~3 in Ref.~\cite{NCL2}), a single-mode state is nonclassical if there exists an $N$ such that the leading principal minor is negative, i.e., $\det M^{\rm (n)}_N(\hat\rho)<0$. The criterion formulated in this way fails for singular (i.e., $\det M^{\rm (n)}_N(\hat\rho) =0$) matrices of moments, as explained in detail in the context of quantum entanglement in Ref.~\cite{MP06}. Considering $[M^{\rm (n)}_{\hat F}(\hat\rho)]_{\bf r}$ is equivalent to considering the correlation matrix corresponding to a subset $\hat F'\subset \hat F$, with $\hat F'=(\hat f_{r_1},\hat f_{r_2},...,\hat f_{r_N})$, i.e., $[M^{\rm (n)}_{\hat F}(\hat\rho)]_{\bf r}=M^{\rm (n)}_{\hat F'}(\hat\rho)$. We note that the subset symbol is used for brevity although it is not very precise, as the $\hat F$s are ordered collections of operators.
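The failure of the leading-minor test for singular matrices, mentioned above, is easy to see on a toy example (an illustrative sketch of ours, not taken from the cited references): a matrix can have all leading principal minors nonnegative and still fail to be positive semidefinite, which is why all principal minors must be inspected.

```python
import numpy as np

# A singular 2x2 Hermitian matrix: both leading principal minors vanish,
# so the leading-minor test reports nothing negative, yet the matrix has
# a negative eigenvalue. The 1x1 principal minor M[1,1] = -1 catches it.
M = np.array([[0.0, 0.0],
              [0.0, -1.0]])
leading_minors = [np.linalg.det(M[:k, :k]) for k in (1, 2)]
print(leading_minors)               # neither minor is negative
print(np.linalg.eigvalsh(M).min())  # -1.0: not positive semidefinite
```

This is precisely the degenerate situation that the amended criterion, which tests all principal minors $\det [M^{\rm (n)}]_{\bf r}$, is designed to handle.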
Thus, by denoting \begin{eqnarray} M^{\rm (n)}_{\hat F'}(\hat\rho) \equiv [M^{\rm (n)}_{\hat F}(\hat\rho)]_{\bf r} \hspace{4.5cm} \nonumber\\ = \Mat{ \normal{\hat f_{r_1}^\dagger \hat f_{r_1}} & \normal{\hat f_{r_1}^\dagger \hat f_{r_2}} & \cdots & \normal{\hat f_{r_1}^\dagger \hat f_{r_N}} } { \normal{\hat f_{r_2}^\dagger \hat f_{r_1}} & \normal{\hat f_{r_2}^\dagger \hat f_{r_2}} & \cdots & \normal{\hat f_{r_2}^\dagger \hat f_{r_N}} } { \vdots & \vdots & \ddots & \vdots } { \normal{\hat f_{r_N}^\dagger \hat f_{r_1}} & \normal{\hat f_{r_N}^\dagger \hat f_{r_2}} & \cdots & \normal{\hat f_{r_N}^\dagger \hat f_{r_N}} }, \label{N08} \end{eqnarray} and its determinant \begin{eqnarray} \dfnnew{\hat F'}(\hat\rho)\equiv \det \, M^{\rm (n)}_{\hat F'}(\hat\rho), \label{N08a} \end{eqnarray} we can equivalently rewrite Criterion~2 as: \begin{criterion} A multimode bosonic state $\hat\rho$ is nonclassical if there exists $\hat F$, such that $\dfnnew{\hat F}(\hat\rho)$ is negative. \end{criterion} This can be written more compactly as: \begin{eqnarray} \hat\rho \textrm{~is classical} &\Rightarrow& \forall { \hat F}: \quad \dfn(\hat\rho) \ge 0, \nonumber \\ \hat\rho \textrm{~is nonclassical} &\Leftarrow& \exists {\hat F}: \quad \dfn(\hat\rho) <0. \label{N09} \end{eqnarray} In the following, we use the symbol $\ncl$ to emphasize that a given inequality {\em can} be satisfied only for {\em nonclassical} states and the symbol $\cl$ to indicate that an inequality {\em must} be satisfied for all {\em classical} states. Let us comment further on the relation between Criteria 2 and 3 and the SRV criterion (in its amended version that takes into account the issue of singular matrices). Criterion 3 corresponds to checking the positivity of an infinite matrix $M_{ij}^{(n)}$ defined as in \eqref{N05} with the $\hat f_i$'s chosen to be all possible monomials given by Eq.~\eqref{eq:product}.
Considering the positivity of larger and larger submatrices of this matrix leads to a hierarchy of criteria: testing the positivity of some submatrix $M^{(n)}_N$ leads to a stronger criterion than testing the positivity of a submatrix $M^{(n)}_{N'}$, with $N'< N$. Nonetheless, when one invokes Sylvester's criterion in order to transform the test of positivity of a matrix into the test of positivity of its many principal minors, it is arguably difficult to speak of a ``hierarchy''. Indeed, because of the issue of the possible singularity of the matrix, we cannot simply consider, e.g., leading principal minors involving larger and larger submatrices. As regards the general formalism, of course, by adding operators to the set $\hat F$, and therefore increasing the dimension of the matrix $M^{(n)}_{\hat F'}$, one obtains a hierarchy of \emph{matrix} conditions on classicality. Nonetheless, also in our case, when moving to scalar inequalities by considering determinants, we face the issue of the possible singularity of matrices. Motivated also by this difficulty, in the present article we do not focus so much on the idea of a hierarchy of criteria, but rather explore the approach that, by using matrices of expectation values, it is possible to easily obtain criteria of nonclassicality and entanglement in the form of inequalities. As already explained, this is done by referring to Observation \ref{obs:obsSRV} and considering $\hat f_i$'s possibly more general than monomials, e.g., polynomials. Indeed, when we choose a set of operators $\hat F=(\hat f_1,\hat f_2,\dots)$, compute the corresponding matrix of expectation values, and check its positivity, what we are doing is equivalent to checking the positivity of, e.g., ${\normal{\hat f^\dagger \hat f}}$ for all $\hat f$'s that can be written as a linear combination of the operators in $\hat F$: $\hat f=\sum_ic_i\hat f_i$.
As polynomials can be expanded into monomials, it is clear that checking the positivity of a matrix $M^{(n)}_{\hat F}$, with $\hat F$ consisting of polynomials, cannot give a stronger criterion than checking the positivity of a matrix $M^{(n)}_{\hat F'}$, where $\hat F'$ is given by all the monomials appearing in the elements of $\hat F$. Of course, to have a stronger \emph{matrix} criterion of classicality we pay a price in terms of the dimension of the matrix $M^{(n)}_{\hat F'}$, which is larger than that of $M^{(n)}_{\hat F}$. Further, as we will see, by considering general sets $\hat F$---that is, not only those containing monomials---one can straightforwardly obtain interesting and physically relevant inequalities, which may be difficult to pinpoint when considering monomials as ``building blocks''. It is worth noting that the possibility of using polynomial functions of moments was also discussed in Ref.~\cite{SV05} in the context of entanglement criteria. Finally, we remark that to make the above criteria sensitive in detecting nonclassicality, the $\hat f_i$ must be chosen such that the normal ordering is important in determining $M^{(n)}$. In particular, assuming this special structure for the $\hat f_i$'s, there must be some combination of creation and annihilation operators. On the contrary, the inclusion of only creation or only annihilation operators would give a matrix $M^{(n)}$ that is positive for every state, and thus completely useless for detecting nonclassicality.
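As a minimal numerical illustration of Criterion 3 (our sketch, not part of the original text), take the single-mode choice $\hat F=(1,\hat n)$, for which $\det M^{\rm (n)}_{\hat F}=\langle{:\hat n^2:}\rangle-\langle\hat n\rangle^2 =\langle\hat a^{\dagger 2}\hat a^2\rangle-\langle\hat a^\dagger\hat a\rangle^2$. For the Fock state $|1\rangle$ this determinant equals $-1$, revealing its sub-Poissonian nonclassicality:

```python
import numpy as np

N = 10                                      # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)  # annihilation operator in Fock basis
ad = a.conj().T                             # creation operator

def expval(op, psi):
    return (psi.conj() @ op @ psi).real

psi = np.zeros(N); psi[1] = 1.0             # Fock state |1>

# Normally ordered moment matrix for F = (1, n):
#   M = [[<1>,   <:n:>  ],
#        [<:n:>, <:n^2:>]],   with <:n^2:> = <a† a† a a>
n_avg = expval(ad @ a, psi)
n2_norm = expval(ad @ ad @ a @ a, psi)
M = np.array([[1.0, n_avg], [n_avg, n2_norm]])
print(np.linalg.det(M))                     # -1.0 < 0: |1> is nonclassical
```

A coherent state, by contrast, gives $\langle\hat a^{\dagger 2}\hat a^2\rangle=\langle\hat n\rangle^2$, so the same determinant vanishes, consistent with the inequalities holding for all classical states.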
\begin{table*}[tbp]
\caption{Criteria for single-time nonclassical effects in two-mode (TM) and multimode (MM) fields, and two-time nonclassical effects in single-mode (SM) fields.}
\begin{center}
\begin{tabular}{l c c}
\hline\hline
{\bf Nonclassical effect} & {\bf Criterion} & Equations \\
\hline\hline
MM quadrature squeezing & $\dn(1,\hat X_{\bm{\phi}})<0$ & (\ref{N10}),~(\ref{N15})\\[5pt]
TM principal squeezing of Luk\v{s} {\em et al.}~\cite{Luks88} & $\dn(\Delta \hat a_{12}^\dagger,\Delta \hat a_{12})=\dn(1, \hat a_{12}^\dagger, \hat a_{12})<0$ & (\ref{N16})--(\ref{z36}) \\[5pt]
TM sum squeezing of Hillery~\cite{Hillery89} & $\dn(1,\hat V_{\phi})<0$ & (\ref{N20}),~(\ref{N22}) \\[5pt]
MM sum squeezing of An-Tinh~\cite{An99} & $\dn(1,\hat {\cal V}_{\phi})<0$ & (\ref{z9}),~(\ref{z10})\\[5pt]
TM difference squeezing of Hillery~\cite{Hillery89} & $\dn(1,\hat W_{\phi})<- \frac12 \min \left(\mean{\hat n_1},\mean{\hat n_2}\right)$ & (\ref{N23}),~(\ref{N26}),~(\ref{z1})\\[5pt]
MM difference squeezing of An-Tinh~\cite{An00}& $\dn(1,\hat {\cal W}_{\phi})<-\frac14 \left||\mean{\hat C}|-\mean{\hat D}\right|$ & (\ref{z18}),~(\ref{z19}) \\[5pt]
TM sub-Poisson photon-number correlations & $\dn(1,\hat n_1 \pm\hat n_2)<0$ & (\ref{N28}),~(\ref{N30}) \\[5pt]
Cauchy-Schwarz inequality violation& $\dn(\hat f_1,\hat f_2)<0$ & (\ref{x94}),~(\ref{x96}) \\[5pt]
TM Cauchy-Schwarz inequality violation via Agarwal's test~\cite{Agarwal88}& $\dn(\hat n_1,\hat n_2)<0$ & (\ref{x15}),~(\ref{x17}) \\[5pt]
TM Muirhead inequality violation via Lee's test~\cite{Lee90}& $\dn(\hat n_1-\hat n_2)<0$ & (\ref{x30}),~(\ref{x30a}) \\[5pt]
SM photon antibunching& $\dn[\hat n(t),\hat n(t+\tau)]<0$ & (\ref{y05}),~(\ref{x23}) \\[5pt]
SM photon hyperbunching& $\dn[\Delta\hat n(t),\Delta\hat n(t+\tau)]$ & (\ref{y05a}),~(\ref{x27}),~(\ref{z34}) \\
&\quad $=\dn[1,\hat n(t),\hat n(t+\tau)]<0$ & \\[4pt]
Other TM
nonclassical effects & $\dn(1,\hat a_1\hat a_2,\hat a_1^\dagger\hat a_2^\dagger)<0$ & (\ref{x72}) \\[5pt]
 & $\dn(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger \hat a_2)<0$ & (\ref{x78}) \\[5pt]
 & $\dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger +\hat a_2)<0$ & (\ref{x84}) \\[5pt]
 & $\dn(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)<0$ & (\ref{x90}) \\[5pt]
 & $\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2)<0$ & (\ref{x36}) \\[2pt]
\hline \hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[tbp]
\caption{Entanglement criteria via nonclassicality criteria.}
\begin{center}
\begin{tabular}{l c c c}
\hline\hline
Reference & {\bf Entanglement criterion} & {\bf Equivalent nonclassicality criterion} & Equations \\
\hline\hline
Duan {\em et al.}~\cite{Duan} & $\dPT(\Delta\hat a_1,\Delta\hat a_2)=\dPT(1,\hat a_1,\hat a_2)<0$ & $\dn(\Delta\hat a_1,\Delta\hat a_2^\dagger)=\dn(1,\hat a_1,\hat a_2^\dagger)<0$ & (\ref{x7})--(\ref{z30}) \\[5pt]
Simon~\cite{Simon} & $\dPT(1,\hat a_1,\hat a_1^\dagger,\hat a_2,\hat a_2^\dagger)<0$ & $\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2) +\dn(1,\hat a_1,\hat a_2^\dagger)$ & (\ref{x43}) \\[5pt]
 & & ~~~$+\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger)+\dn(1,\hat a_1,\hat a_2^\dagger,\hat a_2)<0$ & \\[5pt]
Mancini {\em et al.}~\cite{Mancini} & $\dPT(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)<0$ & $\dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger+\hat a_2) + 2 \dn(1,\hat a_1+\hat a_2^\dagger) +1<0$ & (\ref{x81}),~(\ref{x57}) \\[5pt]
Hillery \& Zubairy~\cite{Hillery06} & $\dPT(1,\hat a_1\hat a_2)<0$ & $\dn(1,\hat a_1\hat a_2^\dagger)<0$ & (\ref{x1}),~(\ref{x2}) \\[5pt]
{\em ditto} & $\dPT(1,\hat a_1^m \hat a_2^n)<0$ & $\dn(1,\hat a_1^m (\hat a_2^\dagger)^n)<0$ & (\ref{x60})--(\ref{x63}) \\[5pt]
{\em ditto} & $\dPT(\hat a_1,\hat a_2)<0$ & $\dn(\hat a_1,\hat a_2^\dagger)<0$ & (\ref{x4}),~(\ref{x6}) \\[5pt]
{\em ditto} & $\dPT(1,\hat a_1\hat a_2\hat a_3)<0$ & $\dn(1,\hat a_1^\dagger\hat a_2\hat a_3)<0$ & (\ref{x34}),~(\ref{x46}) \\[5pt]
Miranowicz {\em et al.}~\cite{MPHH09}& $\dPT(\hat a_1,\hat a_2\hat a_3)<0$ & $\dn(\hat a_1^\dagger,\hat a_2\hat a_3)<0$ & (\ref{x49}) \\[4pt]
Other entanglement tests & $\dPT(1,\hat a_1^{k} \hat a_2^{l}\hat a_3^{m})<0$ & $\dn(1,(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m})<0$ & (\ref{z24}),~(\ref{z25}) \\[5pt]
 & $\dPT(\hat a_1^{k}, \hat a_2^{l}\hat a_3^{m})<0$ & $\dn((\hat a_1^\dagger)^{k}, \hat a_2^{l}\hat a_3^{m})<0$ & (\ref{z26}),~(\ref{z27}) \\[5pt]
 & $\dPT(1,\hat a_1\hat a_2,\hat a_1^\dagger \hat a_2^\dagger)<0$ & $\dn(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger\hat a_2) + (\mean{\hat n_{1}+\hat n_{2}}+1)\, \dn(1,\hat a_1\hat a_2^\dagger)<0$ & (\ref{x69}),~(\ref{x56}) \\[5pt]
 & $\dPT(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger\hat a_2)<0$ & $\dn(1,\hat a_1\hat a_2,\hat a_1^\dagger\hat a_2^\dagger)+\<\hat n_1\>\<\hat n_2\> + \mean{\hat n_{1}+\hat n_{2}}\, \dn(1,\hat a_1\hat a_2)<0$ & (\ref{x75}),~(\ref{x59}) \\[5pt]
 & $\dPT(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)<0$ & $\dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger+\hat a_2) + 2 \dn(1,\hat a_1+\hat a_2^\dagger)<0$ & (\ref{x87}),~(\ref{x58}) \\[2pt]
\hline \hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Nonclassicality and the Cauchy-Schwarz inequality}
The Cauchy-Schwarz inequality (CSI) for operators can be written as follows (see, e.g., Ref.~\cite{MandelBook}):
\begin{eqnarray}
\mean{\hat A^{\dagger} \hat A} \mean{\hat B^{\dagger} \hat B} &\ge& |\mean{\hat A^{\dagger} \hat B}|^2,
\label{x92}
\end{eqnarray}
where $\hat A$ and $\hat B$ are arbitrary operators for which the above expectation values exist. Indeed, $\mean{\hat A^{\dagger} \hat B}\equiv {\rm Tr}\,(\hat\rho\hat{A}^\dagger \hat B)$ is a valid inner product because of the positivity of $\hat\rho$. Similarly, one can define a valid scalar product for a positive $P$~function.
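As a quick numerical aside (ours, not from the original text; `fock_moment` is a hypothetical helper), for a two-mode Fock state the normally ordered moments reduce to falling factorials, so the normally ordered analog of the CSI can be checked in a few lines; the twin Fock state $|1,1\rangle$ violates it, which is Agarwal's test of Table I.

```python
from math import perm  # math.perm(n, k) = n!/(n-k)!, and 0 when k > n

def fock_moment(n1, n2, p, q):
    """<n1,n2| (a1^dag)^p a1^p (a2^dag)^q a2^q |n1,n2> for a two-mode Fock state."""
    return perm(n1, p) * perm(n2, q)

n1 = n2 = 1                          # twin Fock state |1,1>
m11 = fock_moment(n1, n2, 2, 0)      # <:n1^2:> = <(a1^dag)^2 a1^2> = 0
m22 = fock_moment(n1, n2, 0, 2)      # <:n2^2:> = 0
m12 = fock_moment(n1, n2, 1, 1)      # <:n1 n2:> = <n1 n2> = 1
d = m11 * m22 - m12 ** 2             # Agarwal-type determinant d^(n)(n1, n2)
print(d)  # -1: the normally ordered CSI fails, so |1,1> is nonclassical
```

For a classical field the same determinant is an ordinary Cauchy-Schwarz determinant of the positive $P$ distribution and hence nonnegative.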
In detail, by identifying $\hat A=\hat f_1(\hat{\bf a},\hat{\bf a}^\dagger)$ and $\hat B=\hat f_2(\hat{\bf a},\hat{\bf a}^\dagger)$, one can define the scalar product
\begin{equation} \label{x97}
\begin{split}
\normal{\hat f_i^{\dagger} \hat f_j} = \intda f^*_i(\bm{\alpha},\bm{\alpha}^*) f_j(\bm{\alpha},\bm{\alpha}^*) P(\bm{\alpha},\bm{\alpha}^*).
\end{split}
\end{equation}
Then, a CSI can be written as:
\begin{eqnarray}
\normal{\hat f_1^{\dagger} \hat f_1} \normal{\hat f_2^{\dagger} \hat f_2} \ge |\normal{\hat f_1^{\dagger} \hat f_2}|^2.
\label{x94}
\end{eqnarray}
Such a CSI, for a given choice of the operators $\hat f_1$ and $\hat f_2$, can be violated by nonclassical fields described by a $P$~function that is not positive everywhere, i.e., for which Eq.~\eqref{x97} does not actually define a scalar product. We then say that the state of the fields violates the CSI. The nonclassicality of states violating the CSI can be shown by analyzing Criterion~3 for $\hat F=(\hat f_1,\hat f_2)$, which results in
\begin{eqnarray}
\dfn &=& \DET{\normal{\hat f_1^{\dagger} \hat f_1} & \normal{\hat f_1^{\dagger} \hat f_2}}{ \normal{\hat f_1 \hat f_2^{\dagger}} &\normal{\hat f_2^{\dagger} \hat f_2}} \ncl 0.
\label{x96}
\end{eqnarray}
\subsection{A zoo of nonclassical phenomena\label{Sect2b}}
In Table I, we present a variety of multimode nonclassicality criteria, which can be derived by applying Criterion 3 as shown in this subsection and in greater detail in Appendices A--C. In the following, we give a few simple examples of other classical inequalities, which---to our knowledge---have not been discussed in the literature. In particular, we analyze inequalities based on determinants of the following form:
\begin{eqnarray}
D(x,y,z) &=& \left| \begin{array}{lll}
1 & x & x^* \\
x^* & z & y^* \\
x & y & z
\end{array} \right|.
\label{x66}
\end{eqnarray}
(i) By applying Criterion~3 for $\hat F=(1,\hat a_1\hat a_2,\hat a_1^\dagger\hat a_2^\dagger)$, we obtain
\begin{equation}
\dfn=D(\langle\hat a_1\hat a_2\rangle,\langle\hat a_1^2\hat a_2^2\rangle,\langle\hat n_1\hat n_2\rangle)\ncl 0,
\label{x72}
\end{equation}
where $\hat n_1=\hat a_1^\dagger \hat a_1$ and $\hat n_2=\hat a_2^\dagger \hat a_2$.
\noindent (ii) For $\hat F=(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger \hat a_2)$ one obtains
\begin{equation}
\dfn=D(\langle\hat a_1\hat a_2^\dagger\rangle,\langle\hat a_1^2(\hat a_2^\dagger)^2\rangle,\langle\hat n_1\hat n_2\rangle)\ncl 0.
\label{x78}
\end{equation}
(iii) For $\hat F=(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger +\hat a_2)$, Criterion~3 leads to
\begin{equation}
\dfn=D(\langle\hat a_1+\hat a_2^\dagger\rangle,\langle(\hat a_1+\hat a_2^\dagger)^2\rangle,z)\ncl 0,
\label{x84}
\end{equation}
where $z=\langle\hat n_1\rangle+\langle\hat n_2\rangle+2{\rm Re}\langle\hat a_1\hat a_2\rangle$.
\noindent (iv) For $\hat F=(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)$ one has
\begin{equation}
\dfn=D(\langle\hat a_1+\hat a_2\rangle,\langle(\hat a_1+\hat a_2)^2\rangle,z)\ncl 0,
\label{x90}
\end{equation}
where $z=\langle\hat n_1\rangle+\langle\hat n_2\rangle+2{\rm Re}\langle\hat a_1\hat a_2^\dagger\rangle$. These nonclassicality criteria, given by Eqs.~(\ref{x72})--(\ref{x90}), will be related to the entanglement criteria in subsection~\ref{Sect3c2}. Another example, which is closely related to the Simon entanglement criterion~\cite{Simon}, as will be shown in subsection~\ref{Sect3c1}, can be obtained from Criterion~3 assuming $\hat F=(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2)$.
Thus, we obtain:
\begin{equation}
\dfn= \left| \begin{array}{lllll}
1 & \mean{\hat a_1} & \mean{\hat a_1^\dagger} & \mean{\hat a_2^\dagger} & \mean{\hat a_2} \\
\mean{\hat a_1^\dagger} & \mean{\hat a_1^\dagger\hat a_1} & \mean{(\hat a_1^\dagger)^2} & \mean{\hat a_1^\dagger\hat a_2^\dagger} & \mean{\hat a_1^\dagger\hat a_2} \\
\mean{\hat a_1} & \mean{\hat a_1^2} & \mean{\hat a_1^\dagger\hat a_1} & \mean{\hat a_1\hat a_2^\dagger} & \mean{\hat a_1\hat a_2} \\
\mean{\hat a_2} & \mean{\hat a_1\hat a_2} & \mean{\hat a_1^\dagger\hat a_2} & \mean{\hat a_2^\dagger\hat a_2} & \mean{\hat a_2^2} \\
\mean{\hat a_2^\dagger} & \mean{\hat a_1\hat a_2^\dagger} & \mean{\hat a_1^\dagger\hat a_2^\dagger} & \mean{(\hat a_2^\dagger)^2} & \mean{\hat a_2^\dagger\hat a_2} \\
\end{array} \right| \ncl 0.
\label{x36}
\end{equation}
\section{Entanglement and nonclassicality criteria}
Here, we express various two- and three-mode entanglement inequalities in terms of nonclassicality inequalities derived from Criterion~3; they are summarized in Table~II. First, we briefly describe the Shchukin-Vogel entanglement criterion, which enables the derivation of various entanglement inequalities.
\subsection{The Shchukin-Vogel entanglement criterion\label{Sect3a}}
Criterion~3 for nonclassicality resembles the Shchukin-Vogel (SV) criterion~\cite{SV05,MP06,MPHH09} for distinguishing states with positive partial transposition (PPT) from those with nonpositive partial transposition (NPT).
In analogy to Eqs.~(\ref{N06}) and~(\ref{N07}), one can define a matrix $M(\hat\rho)=[M_{ij}(\hat\rho)]$ of moments as follows:
\begin{equation}
M_{ij}(\hat\rho) ={\rm Tr}\,\big[({\hat a}^{\dagger i_{1}}{\hat a}^{i_{2}}{\hat b}^{\dagger i_{3}}{\hat b}^{i_{4}})^\dagger ({\hat a}^{\dagger j_{1}}{\hat a}^{j_{2}}{\hat b}^{\dagger j_{3}}{\hat b}^{j_{4}})\hat\rho\big],
\label{N06x}
\end{equation}
where the subscripts $i$ and $j$ correspond to the multi-indices $(i_1,i_2,i_3,i_4)$ and $(j_1,j_2,j_3,j_4)$, respectively. Note that, contrary to Eq.~(\ref{N06}), the creation and annihilation operators are {\em not} normally ordered. As discussed in Ref.~\cite{MPHH09}, the matrix $M(\hat\rho)$ of moments for a separable state $\hat\rho$ is also separable, i.e.,
\begin{equation}
\hat\rho=\sum_i p_i \hat\rho^A_i\otimes\hat\rho^B_i \Rightarrow M(\hat\rho)=\sum_i p_i M^A(\hat\rho^A_i)\otimes M^B(\hat\rho^B_i),
\label{Nsep}
\end{equation}
where $p_i\geq0$, $\sum_ip_i=1$, and $M^A(\hat\rho^A)= \sum_{i'j'}M_{i'j'}(\hat\rho^A)|i'\rangle\langle j'|$ is expressed in a formal basis $\{|i'\rangle\}$ with $i'=(i_1,i_2,0,0)$ and $j'=(j_1,j_2,0,0)$; $M^B(\hat\rho^B)$ is defined analogously. Reference~\cite{SV05} proved the following criterion:
\begin{criterion}
A bipartite quantum state $\hat\rho$ is NPT if and only if $M(\hat\rho^\Gamma)$ is NPT.
\label{t1}
\end{criterion}
The elements of the matrix of moments, $M(\hat\rho^\Gamma)=[M_{ij}(\hat\rho^\Gamma)]$, where $\Gamma$ denotes partial transposition in some fixed basis, can be simply calculated as
\begin{eqnarray}
M_{ij}(\hat\rho^\Gamma) ={\rm Tr}\,\big[({\hat a}^{\dagger i_{1}}{\hat a}^{i_{2}}{\hat b}^{\dagger i_{3}}{\hat b}^{i_{4}})^\dagger ({\hat a}^{\dagger j_{1}}{\hat a}^{j_{2}}{\hat b}^{\dagger j_{3}}{\hat b}^{j_{4}})\hat\rho^\Gamma\big] \nonumber\\
={\rm Tr}\,\big[({\hat a}^{\dagger i_{1}}{\hat a}^{i_{2}}{\hat b}^{\dagger j_{3}}{\hat b}^{j_{4}})^\dagger ({\hat a}^{\dagger j_{1}}{\hat a}^{j_{2}}{\hat b}^{\dagger i_{3}}{\hat b}^{i_{4}})\hat\rho\big].\;
\label{N06y}
\end{eqnarray}
Let us define
\begin{eqnarray}
d^{\Gamma}_{\hat F}(\hat\rho)= \Det{ \langle\hat f_{r_1}^\dagger \hat f_{r_1}\rangle^{\Gamma} & \langle\hat f_{r_1}^\dagger \hat f_{r_2}\rangle^{\Gamma} & \cdots & \langle\hat f_{r_1}^\dagger \hat f_{r_N} \rangle^{\Gamma} }{ \langle\hat f_{r_2}^\dagger \hat f_{r_1}\rangle^{\Gamma} & \langle\hat f_{r_2}^\dagger \hat f_{r_2}\rangle^{\Gamma} & \cdots & \langle\hat f_{r_2}^\dagger \hat f_{r_N}\rangle^{\Gamma} }{ \vdots & \vdots & \ddots & \vdots }{ \langle\hat f_{r_N}^\dagger \hat f_{r_1}\rangle^{\Gamma} & \langle\hat f_{r_N}^\dagger \hat f_{r_2}\rangle^{\Gamma} & \cdots & \langle\hat f_{r_N}^\dagger \hat f_{r_N}\rangle^{\Gamma} },
\label{t4a}
\end{eqnarray}
in terms of $\langle\hat f_{r_i}^\dagger \hat f_{r_j}\rangle^{\Gamma}\equiv \mean{(\hat f_{r_i}^\dagger \hat f_{r_j})^\Gamma}$ ($i,j=1,...,N$). For example, if $\hat X$ is an operator acting on two or more modes, and we take the partial transposition with respect to the first mode, then
$$ \hat X^\Gamma=(T\otimes{\rm id})(\hat X), $$
where $T$ is the transposition acting on the first mode and ${\rm id}$ is the identity operation on the remaining modes.
Then the SV Criterion~4, for brevity referred to here as the entanglement criterion, can be formulated as follows~\cite{MPHH09}:
\begin{criterion}
A bipartite state $\hat\rho$ is NPT if and only if there exists ${\hat F}$ such that $d^{\Gamma}_{\hat F}(\hat\rho)$ is negative.
\end{criterion}
This Criterion~5 can be written more compactly as follows:
\begin{eqnarray}
\hat \rho \textrm{~is PPT} &\Leftrightarrow& \forall {\hat F}: \quad d^\Gamma_{\hat F}(\hat\rho) \ge 0, \nonumber \\
\hat \rho \textrm{~is NPT} &\Leftrightarrow& \exists {\hat F}: \quad d^\Gamma_{\hat F}(\hat\rho) <0.
\label{N08b}
\end{eqnarray}
As in the case of the nonclassicality criteria, the original SV criterion actually refers to a set $\hat F$ given by monomials in the creation and annihilation operators. This entanglement criterion can be applied not only to two-mode fields but also to multimode fields~\cite{SV06multi,MPHH09}. Note that Criterion~5 does not detect PPT-entangled states (which are part, and possibly the only members, of the family of the so-called bound entangled states)~\cite{HorodeckiReview}. Analogously to the notation $\ncl$, we use the symbol $\ent$ to indicate that a given inequality can be fulfilled {\em only} by entangled states. Here we show that various well-known entanglement inequalities can be derived from the nonclassicality Criterion~3, including the criteria of Hillery and Zubairy~\cite{Hillery06}, Duan {\em et al.}~\cite{Duan}, Simon~\cite{Simon}, and Mancini {\em et al.}~\cite{Mancini}. We also derive new entanglement criteria and show their relation to the nonclassicality criterion. Other examples of entanglement inequalities, which can be easily derived from nonclassicality criteria, include those of Refs.~\cite{Raymer,Agarwal05,Song}; however, for brevity, we do not discuss them here.
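For concreteness, the index-swap rule of Eq. (N06y) can be implemented in a few lines. The sketch below (ours, not part of the original text; helper names and the sample state are our own assumptions) builds the $2\times2$ matrix entering $d^\Gamma_{\hat F}$ for $\hat F=(1,\hat a_1\hat a_2)$ and the single-photon entangled state $(|01\rangle+|10\rangle)/\sqrt2$, detecting it as NPT in accordance with Criterion 5.

```python
from math import sqrt

def apply_ops(state, ops):
    """Apply two-mode ladder operators (tokens 'A1', 'A1d', 'A2', 'A2d';
    the rightmost token acts on the ket first) to a Fock dict."""
    for tok in reversed(ops.split()):
        mode = int(tok[1]) - 1
        create = tok.endswith('d')
        new = {}
        for ns, amp in state.items():
            n = ns[mode]
            if not create and n == 0:
                continue
            coeff = sqrt(n + 1) if create else sqrt(n)
            nn = list(ns)
            nn[mode] += 1 if create else -1
            new[tuple(nn)] = new.get(tuple(nn), 0.0) + coeff * amp
        state = new
    return state

def expval(psi, ops):
    phi = apply_ops(dict(psi), ops)
    return sum(amp.conjugate() * phi[k] for k, amp in psi.items() if k in phi)

def pt_entry(psi, i, j):
    """Moment <f_i^dag f_j> of rho^Gamma via the index-swap rule: the mode-2
    exponents (i3, i4) and (j3, j4) of the two monomials are exchanged.
    i, j encode exponents (p, q, r, s) of a^dag^p a^q b^dag^r b^s."""
    i1, i2, i3, i4 = i
    j1, j2, j3, j4 = j
    ops = ("A2d " * j4 + "A2 " * j3 + "A1d " * i2 + "A1 " * i1    # daggered factor
           + "A1d " * j1 + "A1 " * j2 + "A2d " * i3 + "A2 " * i4)  # direct factor
    return expval(psi, ops.strip())

# F = (1, a1 a2) on the single-photon Bell state (|01> + |10>)/sqrt(2):
r = 1.0 / sqrt(2.0)
psi = {(0, 1): r, (1, 0): r}
one, a1a2 = (0, 0, 0, 0), (0, 1, 0, 1)
d_pt = (pt_entry(psi, one, one) * pt_entry(psi, a1a2, a1a2)
        - abs(pt_entry(psi, one, a1a2)) ** 2)
print(round(d_pt, 12))  # -0.25 < 0: the state is NPT, hence entangled
```

Note that the resulting determinant equals $\dn(1,\hat a_1\hat a_2^\dagger)$ evaluated on the same state, in agreement with the correspondences collected in Table II.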
\subsection{Entanglement and the Cauchy-Schwarz inequality \label{Marco}}
The matrix $M_{\hat F}^{(n)}(\hat \rho)$ is linear in the state $\hat \rho=\sum_ip_i\hat \rho_i$. Therefore, we have
\begin{eqnarray}
M_{\hat F}^{(n)}(\hat \rho)=\sum_ip_i M_{\hat F}^{(n)}(\hat \rho_i)\geq 0
\end{eqnarray}
if $M_{\hat F}^{(n)}(\hat \rho_i)\geq 0$ for all $\hat \rho_i$. Thus, $M_{\hat F}^{(n)}$ is positive for separable states if it is positive on factorized states. Let
\begin{eqnarray}
\hat F=(\hat f_1,\ldots,\hat f_N)
\end{eqnarray}
with functions $\hat f_{i}=\hat f_{i1}\hat f_{i2}\cdots \hat f_{iM}$, where
\begin{eqnarray}
\hat f_{ij}=\begin{cases} 1 & \textrm{if }i\neq k_j,\\ \textrm{either }g_j(\hat a_j)\textrm{ or }g_j(\hat a^\dagger_j)& \textrm{if }i=k_j. \end{cases}
\end{eqnarray}
Here, $i$ is the index of the element $\hat f_i$ in $\hat F$, and the index $j$ refers to the mode. Thus, $\hat f_{ij}$ is possibly different from the identity for one unique value $i=k_j$, and in that case it is equal to a function $g_j$ of either the creation or the annihilation operator of mode $j$, but not of both.
Writing the matrix $M_{\hat F}^{(n)}$ in a formal basis $\{|k\rangle\}$, one then has
\begin{eqnarray}
\begin{aligned}
M_{\hat F}^{(n)}&=\sum_{kl}\langle :\hat f_k^{\dagger}\hat f_l : \rangle |k\rangle\langle l |\\
&=\sum_{kl}\langle :\hat f_{k1}^{\dagger}\hat f_{l1}\ldots \hat f_{kM}^{\dagger}\hat f_{lM}: \rangle |k\rangle\langle l |.\\
\end{aligned}
\end{eqnarray}
For factorized states, one has
\begin{eqnarray}
\begin{aligned}
M_{\hat F}^{(n)}&=\sum_{kl}\langle :\hat f_{k1}^{\dagger}\hat f_{l1}:\rangle \cdots \langle : \hat f_{kM}^{\dagger}\hat f_{lM}: \rangle |k\rangle\langle l |\\
&=\sum_{k}\langle :\hat f_{k1}^{\dagger}\hat f_{k1}:\rangle \cdots \langle : \hat f_{kM}^{\dagger}\hat f_{kM}: \rangle |k\rangle\langle k |\\
&\quad+\sum_{k\neq l}\langle :\hat f_{k1}^{\dagger}\hat f_{l1}:\rangle \cdots \langle : \hat f_{kM}^{\dagger}\hat f_{lM}: \rangle |k\rangle\langle l |\\
&=\sum_{k}\langle :\hat f_{k1}^{\dagger}\hat f_{k1}:\rangle \cdots \langle : \hat f_{kM}^{\dagger}\hat f_{kM}: \rangle |k\rangle\langle k |\\
&\quad+\sum_{k\neq l}\langle \hat f_{k1}^{\dagger}\rangle\langle \hat f_{l1} \rangle \cdots \langle \hat f_{kM}^{\dagger}\rangle\langle \hat f_{lM}\rangle |k\rangle\langle l |\\
&\geq \sum_{k}\langle \hat f_{k1}^{\dagger}\rangle \langle \hat f_{k1} \rangle \cdots \langle \hat f_{kM}^{\dagger}\rangle \langle \hat f_{kM} \rangle |k\rangle\langle k |\\
&\quad+\sum_{k\neq l}\langle \hat f_{k1}^{\dagger}\rangle\langle \hat f_{l1} \rangle \cdots \langle \hat f_{kM}^{\dagger}\rangle\langle \hat f_{lM}\rangle |k\rangle\langle l |\\
&=\Big(\sum_{k}\langle \hat f_{k1}^{\dagger}\rangle \cdots \langle \hat f_{kM}^{\dagger}\rangle |k\rangle\Big) \Big(\sum_{l}\langle \hat f_{l1}\rangle \cdots \langle \hat f_{lM}\rangle \langle l|\Big)\geq0.
\label{Marco1}
\end{aligned}
\end{eqnarray}
The first equality comes from the state being factorized. The third equality is due to the fact that the $\hat f_{ij}$'s are functions of either annihilation or creation operators, but not of both, so $\langle :\hat f_{k1}^{\dagger}\hat f_{l1}:\rangle= \langle \hat f_{k1}^{\dagger}\hat f_{l1}\rangle$ or $\langle :\hat f_{k1}^{\dagger}\hat f_{l1}:\rangle= \langle \hat f_{l1} \hat f_{k1}^{\dagger}\rangle$, and to the fact that for $k\neq l$ at least one among $\hat f_{k1}^{\dagger}$ and $\hat f_{l1}$, say $\hat f_{l1}$, is equal to the identity; in particular, this implies that its expectation value is $\langle \hat f_{l1}\rangle=1$. The first inequality is due to the fact that $\langle :\hat f_{k1}^{\dagger}\hat f_{k1}:\rangle= \langle \hat f_{k1}^{\dagger}\hat f_{k1}\rangle$ or ${\langle :\hat f_{k1}^{\dagger}\hat f_{k1}:\rangle}= \langle \hat f_{k1} \hat f_{k1}^{\dagger}\rangle$, and to the Cauchy-Schwarz inequality.
\subsection{Entanglement criteria {\em equal} to nonclassicality criteria\label{Sect3b}}
By applying the nonclassicality Criterion~3, we give a few examples of classical inequalities which can be violated {\em only} by entangled states.
\subsubsection{Hillery-Zubairy's entanglement criteria}
Hillery and Zubairy~\cite{Hillery06} derived a few entanglement inequalities both for two-mode fields:
\begin{eqnarray}
\mean{\hat n_1\hat n_2} \ent |\mean{\hat a_1\hat a_2^\dagger}|^2, \label{x1} \\
\mean{\hat n_1}\mean{\hat n_2} \ent |\mean{\hat a_1\hat a_2}|^2, \label{x4}
\end{eqnarray}
and for three-mode fields:
\begin{eqnarray}
\mean{\hat n_1\hat n_2\hat n_3} \ent |\mean{\hat a_1^\dagger\hat a_2\hat a_3}|^2.
\label{x34}
\end{eqnarray}
These inequalities can be derived from the entanglement Criterion~5~\cite{SV05,MPHH09} assuming: $\hat F=(1,\hat a_1\hat a_2)$ to derive Eq.~(\ref{x1}), $\hat F=(\hat a_1,\hat a_2)$ for Eq.~(\ref{x4}), and $\hat F=(1,\hat a_1\hat a_2\hat a_3)$ for Eq.~(\ref{x34}). On the other hand, Eq.~(\ref{x1}) can be obtained from the nonclassicality Criterion~3 assuming $\hat F=(1,\hat a_1\hat a_2^\dagger)$, which gives
\begin{eqnarray}
\dfn &=& \DET{1&\mean{\hat a_1\hat a_2^\dagger}}{\mean{\hat a_1^\dagger\hat a_2}& \mean{\hat n_1\hat n_2}} \ncl 0.
\label{x2}
\end{eqnarray}
Analogously, assuming $\hat F=(\hat a_1,\hat a_2^\dagger)$, one gets
\begin{eqnarray}
\dfn &=& \DET{\mean{\hat n_1}&\mean{\hat a_1^\dagger\hat a_2^\dagger}} {\mean{\hat a_1\hat a_2}&\mean{\hat n_2}} \ncl 0,
\label{x6}
\end{eqnarray}
which corresponds to Eq.~(\ref{x4}). By choosing a set of three-mode operators $\hat F=(1,\hat a_1^\dagger\hat a_2\hat a_3)$, one readily obtains
\begin{eqnarray}
\dfn &=& \DET {1&\mean{\hat a_1^\dagger\hat a_2\hat a_3}} {\mean{\hat a_1\hat a_2^\dagger\hat a_3^\dagger}& \mean{\hat n_1\hat n_2\hat n_3}} \ncl 0,
\label{x46}
\end{eqnarray}
which corresponds to Eq.~(\ref{x34}). By applying Criterion~3 with $\hat F=(\hat a_1^\dagger,\hat a_2\hat a_3)$, we find another inequality,
\begin{eqnarray}
\dfn &=& \DET {\mean{\hat n_1}&\mean{\hat a_1\hat a_2\hat a_3}} {\mean{\hat a_1\hat a_2\hat a_3}^*& \mean{\hat n_2\hat n_3}} \ncl 0,
\label{x49}
\end{eqnarray}
which was derived in Ref.~\cite{MPHH09} from the entanglement Criterion~5. Using the Cauchy-Schwarz inequality, Hillery and Zubairy~\cite{Hillery06} also found a more general form of the inequality in Eq.~(\ref{x1}), which reads as follows:
\begin{eqnarray}
\mean{(\hat a_1^\dagger)^m\hat a_1^m (\hat a_2^\dagger)^n\hat a_2^n} \ent |\mean{\hat a_1^m (\hat a_2^\dagger)^n}|^2.
\label{x60}
\end{eqnarray}
This inequality can be derived from the nonclassicality Criterion~3 for $\hat F=(1,\hat a_1^m (\hat a_2^\dagger)^n)$, which leads to
\begin{eqnarray}
\dfn = \DET {1&\mean{\hat a_1^m (\hat a_2^\dagger)^n}} {\mean{(\hat a_1^\dagger)^m \hat a_2^n} & \mean{ (\hat a_1^\dagger)^m\hat a_1^m (\hat a_2^\dagger)^n\hat a_2^n}} \ncl 0.
\label{x62}
\end{eqnarray}
Alternatively, Eq.~(\ref{x60}) can be derived from the entanglement Criterion~5 for $\hat F=(1,\hat a_1^m \hat a_2^n)$. Thus, we see that
\begin{equation}
\dn(1,\hat a_1^m (\hat a_2^\dagger)^n) = \dPT(1,\hat a_1^m \hat a_2^n) \ent 0,
\label{x63}
\end{equation}
where, for clarity, we use the notation $d^{k}(\hat F)$ instead of $d^{k}_{\hat F}$ for $k=(n),\Gamma$. Moreover, we can generalize the entanglement inequality given by Eq.~(\ref{x46}) as follows:
\begin{equation}
\mean{\hat n_1^{k}\hat n_2^{l}\hat n_3^{m} } \ent |\mean{(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m}}|^2
\label{z24}
\end{equation}
for arbitrary integers $k,l,m>0$. This inequality can be proved by applying both Criteria~3 and~5:
\begin{eqnarray}
\dn(1,(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m})= \dPT(1,\hat a_1^{k} \hat a_2^{l}\hat a_3^{m}) \hspace{3cm} \nonumber \\
=\DET{1&\mean{(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m}}} {\langle(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m}\rangle^*& \mean{\hat n_1^{k}\hat n_2^{l}\hat n_3^{m} }}\ncl 0, \hspace{1cm}
\label{z25}
\end{eqnarray}
where the first mode is partially transposed.
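The two- and three-mode tests above can be checked numerically. In the sketch of ours below (all helper names and test states are our own assumptions), the state $(|01\rangle+|10\rangle)/\sqrt2$ violates Eq. (x2), and $(|0,1,1\rangle+|1,0,0\rangle)/\sqrt2$ violates the $k=l=m=1$ case of Eq. (z25), i.e., Eq. (x46).

```python
from math import sqrt

def apply_ops(state, ops):
    """Apply ladder operators to an N-mode Fock state (dict: tuple -> amp).
    Tokens such as 'A2d' (creation, mode 2) or 'A3' (annihilation, mode 3);
    the rightmost token acts on the ket first."""
    for tok in reversed(ops.split()):
        mode = int(tok[1]) - 1
        create = tok.endswith('d')
        new = {}
        for ns, amp in state.items():
            n = ns[mode]
            if not create and n == 0:
                continue
            coeff = sqrt(n + 1) if create else sqrt(n)
            nn = list(ns)
            nn[mode] += 1 if create else -1
            new[tuple(nn)] = new.get(tuple(nn), 0.0) + coeff * amp
        state = new
    return state

def expval(psi, ops):
    phi = apply_ops(dict(psi), ops)
    return sum(amp.conjugate() * phi[k] for k, amp in psi.items() if k in phi)

r = 1.0 / sqrt(2.0)

# Two-mode test, Eq. (x2), on (|01> + |10>)/sqrt(2):
psi2 = {(0, 1): r, (1, 0): r}
d2 = (expval(psi2, "A1d A2d A1 A2")            # <n1 n2> = 0
      - abs(expval(psi2, "A2d A1")) ** 2)      # |<a1 a2^dag>|^2 = 1/4
print(round(d2, 12))  # -0.25 < 0: entangled

# Three-mode test, Eq. (x46), on (|0,1,1> + |1,0,0>)/sqrt(2):
psi3 = {(0, 1, 1): r, (1, 0, 0): r}
d3 = (expval(psi3, "A1d A2d A3d A1 A2 A3")     # <n1 n2 n3> = 0
      - abs(expval(psi3, "A1d A2 A3")) ** 2)   # |<a1^dag a2 a3>|^2 = 1/4
print(round(d3, 12))  # -0.25 < 0: entangled
```

Both states are single-excitation superpositions, for which the diagonal moments vanish while the off-diagonal coherence survives, so the determinants turn negative.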
Analogously, Eq.~(\ref{x49}) can be generalized to the following entanglement inequality:
\begin{equation}
\mean{\hat n_1^{k}}\mean{\hat n_2^{l}\hat n_3^{m} } \ent |\mean{\hat a_1^{k} \hat a_2^{l}\hat a_3^{m}}|^2,
\label{z26}
\end{equation}
which can be shown by applying Criteria~3 and~5:
\begin{eqnarray}
\dn((\hat a_1^\dagger)^{k}, \hat a_2^{l}\hat a_3^{m})&=& \dPT(\hat a_1^{k}, \hat a_2^{l}\hat a_3^{m}) \nonumber \\
&=&\DET{\mean{\hat n_1^{k}}&\mean{\hat a_1^{k} \hat a_2^{l}\hat a_3^{m}}} {\langle\hat a_1^{k} \hat a_2^{l}\hat a_3^{m}\rangle^* & \mean{\hat n_2^{l}\hat n_3^{m} }} \ncl 0. \hspace{5mm}
\label{z27}
\end{eqnarray}
It is worth remarking that, in all the above cases, once the $\ncl$ inequalities are found as nonclassicality inequalities, it is easy to check that they can be satisfied only by entangled states, that is, they really are $\ent$ inequalities. Indeed, the determinant condition is the only nontrivial one for establishing the positivity of the involved $2\times 2$ matrices. Further, these matrices are linear in the state with respect to which the expectation values are calculated. Thus, if we prove that the matrices are positive for factorized states, then they are necessarily positive for any separable state, and so are the determinants. For the sake of concreteness and clarity, we prove the positivity of the $2\times 2$ matrix of Eq.~\eqref{x6} for a factorized state. The positivity of the other matrices for factorized states is proved analogously.
For a factorized state, as a special case of the inequalities given in Eq.~(\ref{Marco1}), we have
\begin{equation}
\begin{aligned}
\left( \begin{array}{cc} \mean{\hat n_1}&\mean{\hat a_1^\dagger\hat a_2^\dagger}\\ \mean{\hat a_1\hat a_2}&\mean{\hat n_2} \end{array} \right) &= \left( \begin{array}{cc} \mean{\hat a_1^\dagger\hat a_1}&\mean{\hat a_1^\dagger}\mean{\hat a_2^\dagger}\\ \mean{\hat a_1}\mean{\hat a_2}&\mean{\hat a_2^\dagger\hat a_2} \end{array} \right)\\
&\geq \left( \begin{array}{cc} \mean{\hat a_1^\dagger}\mean{\hat a_1}&\mean{\hat a_1^\dagger}\mean{\hat a_2^\dagger}\\ \mean{\hat a_1}\mean{\hat a_2}&\mean{\hat a_2^\dagger}\mean{\hat a_2} \end{array} \right)\\
&= \left( \begin{array}{c} \mean{\hat a_1^\dagger}\\ \mean{\hat a_2} \end{array} \right) \left( \begin{array}{cc} \mean{\hat a_1}&\mean{\hat a_2^\dagger} \end{array} \right)\geq0,
\end{aligned}
\end{equation}
where the first inequality is due to the Cauchy-Schwarz inequality $\mean{\hat{X}^\dagger\hat{X}}\geq|\mean{\hat{X}}|^2$.
\subsubsection{Entanglement criterion of Duan et al.}
A sharpened version of the entanglement criterion of Duan {\em et al.}~\cite{Duan} can be formulated as follows~\cite{SV05}:
\begin{eqnarray}
\mean{\Delta\hat a_1^\dagger\Delta\hat a_1}\mean{\Delta\hat a_2^\dagger\Delta\hat a_2}\ent |\mean{\Delta\hat a_1\Delta\hat a_2}|^2,
\label{x7}
\end{eqnarray}
where $\Delta\hat a_i=\hat a_i-\mean{\hat a_i}$ for $i=1,2$. Equation~(\ref{x7}) follows from the entanglement Criterion~5 for $\hat F=(1,\hat a_1,\hat a_2)$~\cite{SV05} or, equivalently, for $\hat F=(\Delta\hat a_1,\Delta\hat a_2)$. It can also be derived from the nonclassicality Criterion~3 for $\hat F=(\Delta\hat a_1,\Delta\hat a_2^\dagger)$.
Thus, we obtain
\begin{eqnarray}
\dfn &=& \DET {\mean{\Delta\hat a_1^\dagger\Delta\hat a_1}&\mean{\Delta\hat a_1^\dagger\Delta\hat a_2^\dagger}} {\mean{\Delta\hat a_1\Delta\hat a_2}& \mean{\Delta\hat a_2^\dagger\Delta\hat a_2}} \ncl 0,
\label{x9}
\end{eqnarray}
which corresponds to Eq.~(\ref{x7}). Alternatively, by choosing $\hat F=(1,\hat a_1,\hat a_2^\dagger)$, one obtains
\begin{equation}
\dfn = \DETT {1&\mean{\hat a_1}&\mean{\hat a_2^\dagger}} {\mean{\hat a_1^\dagger}&\<\hat n_1\>&\mean{\hat a_1^\dagger\hat a_2^\dagger}} {\mean{\hat a_2}&\mean{\hat a_1\hat a_2}&\<\hat n_2\>},
\label{z30}
\end{equation}
which is equal to Eq.~(\ref{x9}). Thus, it is seen that this nonclassicality criterion is equal to the entanglement criterion. Moreover, the advantage of using polynomial, instead of monomial, functions of moments in $\hat F$ is apparent. The same conclusion was drawn by comparing Eqs.~(\ref{N18}) and~(\ref{z36}) or Eqs.~(\ref{x27}) and~(\ref{z34}).
\subsection{Entanglement criteria via sums of nonclassicality criteria\label{Sect3c}}
Here, we present a few examples of classical inequalities derived from the entanglement Criterion~5 and the nonclassicality Criterion~3 that are apparently not equal. More specifically, in subsection~\ref{Sect3b} we have presented examples of classical inequalities which can be derived from the entanglement Criterion~5 for a given $\hat F_1$ or, equivalently, from the nonclassicality Criterion~3 for $\hat F_2$ equal to a partial transpose of $\hat F_1$. In this section, we give examples of entanglement inequalities which {\em cannot} be derived from Criterion~3 for $\hat F_2=\hat F_1^\Gamma$. States satisfying Criterion~5 for entanglement must be nonclassical, as any entangled state is necessarily nonclassical in the sense of Criterion 1. We will provide specific examples in which satisfying an entanglement inequality implies satisfying one or more nonclassicality inequalities.
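Before turning to those examples, we note that the Duan-type determinant of Eq. (x9) can itself be evaluated numerically. The sketch below (ours, not from the original text; helper names, squeezing value, and Fock truncation are our own assumptions) uses a truncated two-mode squeezed vacuum, for which the determinant is negative (analytically, $-\sinh^2 r$).

```python
from math import sqrt, tanh, cosh, sinh

def apply_ops(state, ops):
    """Apply two-mode ladder operators (tokens 'A1', 'A1d', 'A2', 'A2d';
    rightmost token first) to a Fock dict (n1, n2) -> amplitude."""
    for tok in reversed(ops.split()):
        mode = int(tok[1]) - 1
        create = tok.endswith('d')
        new = {}
        for ns, amp in state.items():
            n = ns[mode]
            if not create and n == 0:
                continue
            coeff = sqrt(n + 1) if create else sqrt(n)
            nn = list(ns)
            nn[mode] += 1 if create else -1
            new[tuple(nn)] = new.get(tuple(nn), 0.0) + coeff * amp
        state = new
    return state

def expval(psi, ops):
    phi = apply_ops(dict(psi), ops)
    return sum(amp.conjugate() * phi[k] for k, amp in psi.items() if k in phi)

r, N = 0.5, 30          # squeezing strength and Fock truncation (our choices)
psi = {(n, n): tanh(r) ** n / cosh(r) for n in range(N + 1)}

def dmean(a, b):
    """<Delta A Delta B> = <A B> - <A><B> for operator strings a and b."""
    return expval(psi, a + " " + b) - expval(psi, a) * expval(psi, b)

d = (dmean("A1d", "A1") * dmean("A2d", "A2")
     - dmean("A1d", "A2d") * dmean("A1", "A2"))     # determinant of Eq. (x9)
print(d < 0)  # True: analytically d = -sinh(r)^2 for this state
```

The truncation error is of order $\tanh^{2N} r$ and is negligible for the chosen parameters.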
This approach enables an analysis of the entanglement of a state for a given nonclassicality. The main problem is to express $\dfPT\equiv\dPT(\hat F)$ as a linear combination of some $\dn(\hat F^{(k)})$, i.e.,
\begin{eqnarray}
\dfPT = \sum_k c_k \dn(\hat F^{(k)}),
\label{x52}
\end{eqnarray}
where $c_k>0$. To find such expansions explicitly, we apply the following three properties of determinants: (i) The Laplace expansion formula along any row (or column): $\det M=\sum_{j}(-1)^{i+j}M_{ij}\mu_{ij}$, where $\mu_{ij}$ is a minor of the matrix $M=(M_{ij})$. (ii) Swapping rule: By exchanging any two rows (or columns) of a determinant, the value of the determinant changes sign. (iii) Summation rule: If some (or all) of the elements of a column (row) are sums of two terms, then the determinant can be written as the sum of two determinants, e.g., $\det(a+a',b+b';c,d)=\det(a,b;c,d)+\det(a',b';c,d)$.
\subsubsection{Simon's entanglement criterion\label{Sect3c1}}
As the first example of such a nontrivial relation between the nonclassicality and entanglement criteria, let us consider Simon's entanglement criterion~\cite{Simon}. As shown in Ref.~\cite{SV05}, it can be obtained from Criterion~5 as $\dfPT\ent 0$ for $\hat F=(1,\hat a_1,\hat a_1^\dagger,\hat a_2,\hat a_2^\dagger)$. We found that Simon's criterion can be expressed as a sum of nonclassicality criteria as follows:
\begin{eqnarray}
\dfPT &=& \dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2) +\dn(1,\hat a_1,\hat a_2^\dagger) \nonumber\\
&&+\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger)+\dn(1,\hat a_1,\hat a_2^\dagger,\hat a_2),
\label{x43}
\end{eqnarray}
where $\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2)$ is given by Eq.~(\ref{x36}).
Moreover, $\dfn$ for $\hat F=(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger)$, $\hat F=(1,\hat a_1,\hat a_2^\dagger,\hat a_2)$, and $\hat F=(1,\hat a_1,\hat a_2^\dagger)$ can be obtained from Eq.~(\ref{x36}) by analyzing its principal minors. Thus, one can prove the entanglement for a given nonclassicality by checking the violation of specific classical inequalities resulting from the nonclassicality Criterion~3.
\subsubsection{Other entanglement criteria\label{Sect3c2}}
Now, we present a few entanglement inequalities, which are simpler than Simon's criterion, but still correspond to sums of nonclassicality inequalities. Let us define the following determinant:
\begin{eqnarray}
D(x,y,z,z') &=& \left| \begin{array}{lll}
1 & x & x^* \\
x^* & z & y^* \\
x & y & z'
\end{array} \right|.
\label{x65}
\end{eqnarray}
(i) Criterion~5 for $\hat F=(1,\hat a_1\hat a_2,\hat a_1^\dagger \hat a_2^\dagger)$ results in
\begin{equation}
\dfPT=D\left(\langle\hat a_1\hat a_2^\dagger\rangle,\langle\hat a_1^2(\hat a_2^\dagger)^2\rangle, \<\hat n_1 \hat n_2\>,z'\right)\ent 0,
\label{x69}
\end{equation}
where $z'=\mean{(\hat n_1+1)(\hat n_2+1)}$. By using the aforementioned properties of determinants, we find that the entanglement criterion in Eq.~(\ref{x69}) can be given as the following sum of nonclassicality inequalities resulting from Criterion~3:
\begin{eqnarray}
\dfPT &=& \dn(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger\hat a_2) \nonumber \\
&&+ (\<\hat n_1\> +\<\hat n_2\>+1)\, \dn(1,\hat a_1\hat a_2^\dagger).
\label{x56}
\end{eqnarray}
(ii) Criterion~5 for $\hat F=(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger\hat a_2)$ leads to
\begin{equation}
\dfPT=D(\langle\hat a_1\hat a_2\rangle,\langle\hat a_1^2\hat a_2^2\rangle,z,z')\ent 0,
\label{x75}
\end{equation}
where $z=\<\hat n_1 \hat n_2\> +\<\hat n_1\>$ and $z'=\<\hat n_1 \hat n_2\> +\<\hat n_2\>$.
Analogously to Eq.~(\ref{x56}), we find that the following sum of the nonclassicality criteria corresponds to the entanglement criterion in Eq.~(\ref{x75}): \begin{eqnarray} \dfPT &=& \dn(1,\hat a_1\hat a_2,\hat a_1^\dagger\hat a_2^\dagger)+\<\hat n_1\>\<\hat n_2\> \nonumber \\ &&+ (\<\hat n_1\> +\<\hat n_2\>)\, \dn(1,\hat a_1\hat a_2). \label{x59} \end{eqnarray} (iii) For $\hat F=(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger +\hat a_2)$, one obtains \begin{equation} \dfPT=D(\langle\hat a_1+\hat a_2\rangle,\langle(\hat a_1+\hat a_2)^2\rangle,z,z)\ent 0, \label{x81} \end{equation} where $z=\langle\hat n_1\rangle+\langle\hat n_2\rangle+2{\rm Re}\langle\hat a_1\hat a_2^\dagger\rangle+1$. Analogously to the former cases, we find the relation between the entanglement criterion in Eq.~(\ref{x81}) and the nonclassicality Criterion~3 as follows: \begin{eqnarray} \dfPT &=& \dn(1,\hat a_1+\hat a_2,\hat a_1^\dagger+\hat a_2^\dagger) \notag\\ && + 2 \dn(1,\hat a_1+\hat a_2) +1. \label{x57} \end{eqnarray} (iv) As a final example, let us consider the entanglement Criterion~5 for $\hat F=(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)$. One obtains \begin{equation} \dfPT=D(\langle\hat a_1+\hat a_2^\dagger\rangle,\langle(\hat a_1+\hat a_2^\dagger)^2\rangle,z,z')\ent 0, \label{x87} \end{equation} where $z=\langle\hat n_1\rangle+\langle\hat n_2\rangle+2{\rm Re}\langle\hat a_1\hat a_2\rangle$ and $z'=z+2$, which is related to the nonclassicality Criterion~3 as follows: \begin{equation} \dfPT = \dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger+\hat a_2) + 2 \dn(1,\hat a_1+\hat a_2^\dagger), \label{x58} \end{equation} where $\dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger+\hat a_2)$ is given by Eq.~(\ref{x84}), and $\dn(1,\hat a_1+\hat a_2^\dagger)$ is given by its principal minor. Equation~(\ref{x87}) corresponds to the entanglement criterion of Mancini {\em et al.} \cite{Mancini} (see also~\cite{SV05}).
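The constant term in Eq.~(\ref{x57}) can be traced to the commutator $[\hat a_i,\hat a_i^\dagger]=1$: with $\hat b=\hat a_1+\hat a_2$, $\beta=\mean{\hat b}$ and $y=\mean{\hat b^2}$, one has $\mean{:\hat b^\dagger\hat b:}=z-1$, so the diagonal moments entering $\dfPT$ exceed the normally ordered ones by one, and the summation rule yields the determinant identity $D(\beta,y,z,z)=D(\beta,y,z-1,z-1)+2[(z-1)-|\beta|^2]+1$. A plain-Python numerical check (the values of $\beta$, $y$, $z$ are arbitrary placeholders, not moments of a particular state):

```python
def det3(m):
    # Laplace expansion along the first row
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def D(x, y, z, zp):
    # D(x, y, z, z') as defined in Eq. (x65)
    return det3([[1, x, x.conjugate()],
                 [x.conjugate(), z, y.conjugate()],
                 [x, y, zp]])

# arbitrary placeholder values for beta = <b>, y = <b^2>, and z
beta, y, z = 0.4 - 0.2j, 1.1 + 0.9j, 3.7

lhs = D(beta, y, z, z)                        # d_F^(PT) of Eq. (x81)
dn3 = D(beta, y, z - 1, z - 1)                # dn(1, b, b^dagger)
dn2 = (z - 1) - abs(beta)**2                  # dn(1, b)
assert abs(lhs - (dn3 + 2*dn2 + 1)) < 1e-12   # Eq. (x57)
```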
\section{Conclusions\label{Sect4}} We derived classical inequalities for multimode bosonic fields, which can {\em only} be violated by {\em nonclassical} fields, so they can serve as a nonclassicality (or quantumness) test. Our criteria are based on Vogel's criterion~\cite{Vogel08}, which is a generalization of analogous criteria for single-mode fields of Agarwal and Tara~\cite{Agarwal92} and, more directly, of Shchukin, Richter, and Vogel (SRV)~\cite{NCL1,NCL2}. The nonclassicality criteria correspond to analyzing the positivity of matrices of normally ordered moments of, e.g., annihilation and creation operators, which, by virtue of Sylvester's criterion, corresponds to analyzing the positivity of the Glauber-Sudarshan $P$~function. We used not only monomial, but also polynomial functions of moments. We showed that this approach can enable a simpler and more intuitive derivation of physically relevant inequalities. We demonstrated how the nonclassicality criteria introduced here easily reduce to the well-known inequalities (see, e.g., textbooks~\cite{DodonovBook,VogelBook,MandelBook,PerinaBook}, reviews~\cite{Walls79,Loudon80,Loudon87,Klyshko96}, and Refs.~\cite{Yuen76,Kozierowski77,Caves85,Reid86,Dalton86,Schleich87,Agarwal88,Luks88,Hillery89,Lee90,Zou90,Klyshko96pla,Miranowicz99a,Miranowicz99b,An99,An00,Jakob01}) describing various multimode nonclassical effects, for short referred to as the nonclassicality inequalities. Our examples, summarized in Tables~I and~II, include the following: (i)~Multimode quadrature squeezing~\cite{VogelBook} and its generalizations, including the sum and difference squeezing defined by Hillery~\cite{Hillery89}, and An and Tinh~\cite{An99,An00}, as well as the principal squeezing related to the Schr\"odinger-Robertson indeterminacy relation~\cite{SR} as defined by Luk\v{s} {\em et al.}~\cite{Luks88}.
(ii)~Single-time photon-number correlations of two modes, including squeezing of the sum and difference of photon numbers (which is also referred to as the photon-number sum/difference sub-Poisson photon-number statistics)~\cite{PerinaBook}, violations of the Cauchy-Schwarz inequality~\cite{MandelBook}, and violations of the Muirhead inequality~\cite{Muirhead,Lee90}, which is a generalization of the arithmetic-geometric mean inequality. (iii)~Two-time photon-number correlations of single modes, including photon antibunching~\cite{VogelBook,MandelBook,Miranowicz99a} and photon hyperbunching~\cite{Jakob01,Miranowicz99b} for stationary and nonstationary fields. (iv)~Two- and three-mode quantum entanglement inequalities (e.g., Refs.~\cite{Duan,Hillery06,Simon,Mancini}). We have shown that some known entanglement inequalities (e.g., of Duan {\em et al.}~\cite{Duan}, and Hillery and Zubairy~\cite{Hillery06}) can be derived as nonclassicality inequalities. Other entanglement inequalities (e.g., of Simon~\cite{Simon}) can be represented by sums of nonclassicality inequalities. Moreover, we developed a general method of expressing inequalities derived from the Shchukin-Vogel entanglement criterion~\cite{SV05,MP06} as a sum of inequalities derived from the nonclassicality criteria. This approach enables a deeper analysis of the entanglement for a given nonclassicality. We also presented a few inequalities derived from the nonclassicality and entanglement criteria, which, to our knowledge, have not yet been described in the literature. It is seen that the nonclassicality criteria based on matrices of moments offer an effective way to derive specific inequalities, which might be useful in the verification of the nonclassicality of particular states generated in experiments. The quantum-information community seems to have largely overlooked nonclassicality, even though it is closely related to quantum entanglement.
We hope that this article presents a useful approach in the direction of a common treatment of both types of phenomena. \begin{acknowledgments} We are very grateful to Marco Piani for his help in clarifying and generalizing some results of this article. We also thank Werner Vogel and Jan Sperling for their comments. A.M. acknowledges support from the Polish Ministry of Science and Higher Education under Grant No. 2619/B/H03/2010/38. X.W. was supported by the National Natural Science Foundation of China under Grant No. 10874151, the National Fundamental Research Programs of China under Grant No. 2006CB921205, and the Program for New Century Excellent Talents in University (NCET). Y.X.L. was supported by the National Natural Science Foundation of China under Grant No. 10975080. F.N. acknowledges partial support from the National Security Agency, Laboratory of Physical Sciences, Army Research Office, National Science Foundation Grant No. 0726909, JSPS-RFBR Contract No. 09-02-92114, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and FIRST (Funding Program for Innovative R\&D on S\&T). \end{acknowledgments} \begin{appendix} \section{Unified derivations of criteria for quadrature squeezing and its generalizations} Here and in the following appendices, we present a unified derivation of the known criteria for various multimode nonclassicality phenomena, which are summarized in Table~I.
\subsection{Multimode quadrature squeezing} The {\em quadrature squeezing} of multimode fields can be defined by a negative value of the normally ordered variance~\cite{Caves85,Loudon87,VogelBook} \begin{equation} \varn{X_{\bm{\phi}}}<0 \label{N10} \end{equation} with $\Delta\hat{X}_{\bm{\phi}} =\hat{X}_{\bm{\phi}}-\langle\hat{X}_{\bm{\phi}}\rangle$, of the multimode quadrature operator \begin{equation} \hat X_{\bm{\phi}} = \sum_{m=1}^M c_m\; \hat x_m(\phi_m), \label{N11} \end{equation} which is given in terms of the single-mode phase-rotated quadratures \begin{equation} \hat x_m(\phi_m)= \hat a_m \exp(i\phi_m) + \hat a_m^\dagger \exp(-i\phi_m). \label{N12} \end{equation} It is a straightforward generalization of the single-mode quadrature squeezing~\cite{Yuen76,Walls79}. In~(\ref{N11}), $\bm{\phi}=(\phi_1,...,\phi_M)$ and the $c_m$ are real parameters. In the analysis of physical systems, it is convenient to work with annihilation ($\hat a_m$) and creation ($\hat a_m^\dagger$) operators corresponding to slowly varying operators. Usually, $\hat x_m(0)$ and $\hat x_m(\pi/2)$ are interpreted as canonical position and momentum operators, although this interpretation can be applied to any two quadratures of orthogonal phases, $\hat x_m(\phi_m)$ and $\hat x_m(\phi_m+\pi/2)$. The normally ordered variance can be directly calculated from the $P$~function as follows: \begin{equation} \varn{X_{\bm{\phi}}} = \intda P(\bm{\alpha},\bm{\alpha}^*)[X_{\bm{\phi}} (\bm{\alpha},\bm{\alpha}^*)-\langle\hat X_{\bm{\phi}}\rangle]^2, \label{N13} \end{equation} where \begin{equation} X_{\bm{\phi}}(\bm{\alpha},\bm{\alpha}^*) = \sum_{m=1}^M c_m (\alpha_m e^{i\phi_m} +\alpha^*_m e^{-i\phi_m}) \label{N14} \end{equation} and $\bm{\alpha}=(\alpha_1,...,\alpha_M)$.
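As a concrete single-mode illustration ($M=1$, $c_1=1$) of condition~(\ref{N10}), consider a squeezed vacuum with real squeezing parameter $r$, for which, in the standard conventions, $\mean{\hat a}=0$, $\mean{\hat a^\dagger\hat a}=\sinh^2 r$, and $\mean{\hat a^2}=-\sinh r\cosh r$ (this numerical sketch is illustrative and not part of the derivation). Then $\varn{X_{\phi}}=2\sinh^2 r-2\sinh r\cosh r\cos 2\phi$, which equals $e^{-2r}-1<0$ at $\phi=0$, while the conjugate quadrature is anti-squeezed:

```python
import math

def varn_X(phi, r):
    """Normally ordered variance <:(Delta X_phi)^2:> for a squeezed vacuum.

    Uses <a> = 0, <a^dagger a> = sinh^2 r, <a^2> = -sinh r cosh r, so
    <:X^2:> = e^{2i phi}<a^2> + e^{-2i phi}<a^dagger^2> + 2<a^dagger a>.
    """
    return 2*math.sinh(r)**2 - 2*math.sinh(r)*math.cosh(r)*math.cos(2*phi)

r = 0.8
assert varn_X(0.0, r) < 0                # squeezed quadrature: nonclassical
assert varn_X(math.pi/2, r) > 0          # conjugate quadrature: anti-squeezed
# closed forms e^{-2r} - 1 and e^{2r} - 1
assert abs(varn_X(0.0, r) - (math.exp(-2*r) - 1)) < 1e-12
assert abs(varn_X(math.pi/2, r) - (math.exp(2*r) - 1)) < 1e-12
```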
From Eq.~(\ref{N13}) it is seen that a negative value of $\varn{X_{\bm{\phi}}}$ implies the nonpositivity of the $P$~function in some regions of phase space, so the multimode quadrature squeezing is a nonclassical effect. This conclusion can also be drawn by applying Criterion~3. In fact, by choosing $\hat F=(1,\hat X_{\bm{\phi}})$, one obtains \begin{equation} d_{\hat F}^{\rm (n)} = \DET{1 & \langle\hat X_{\bm{\phi}}\rangle} {\langle\hat X_{\bm{\phi}}\rangle & \langle:\hat X_{\bm{\phi}}^2:\rangle} = \varn{X_{\bm{\phi}}} \ncl 0, \label{N15} \end{equation} which is the squeezing condition (\ref{N10}). \subsection{Two-mode principal squeezing} For simplicity, we analyze below the two-mode ($M=2$) case for $c_1=c_2=1$ and $\phi_2-\phi_1=\pi/2$. The two-mode {\em principal} (quadrature) squeezing can be defined as the $\bm{\phi}$-optimized squeezing defined by Eq.~(\ref{N10}): \begin{equation} \min_{\bm{\phi}:\phi_2-\phi_1=\pi/2} \varn{X_{\bm{\phi}}} < 0. \label{N16} \end{equation} By applying the Schr\"odinger-Robertson indeterminacy relation~\cite{SR}, Luk\v{s} {\em et al.}~\cite{Luks88} have given the following necessary and sufficient condition for the two-mode {\em principal squeezing}: \begin{equation} \langle\Delta \hat a_{12}^\dagger \Delta \hat a_{12} \rangle < |\langle (\Delta \hat a_{12})^2 \rangle| , \label{N17} \end{equation} where $$\hat a_{12}=\hat a_{1}+\hat a_{2},\quad\Delta \hat a_{12}=\hat a_{12}-\langle\hat a_{12}\rangle.$$ This condition for principal squeezing can be derived from Criterion~3 by choosing $\hat F=(\Delta \hat a_{12}^\dagger,\Delta \hat a_{12})$, which leads to: \begin{equation} d_{\hat F}^{\rm (n)} = \DET{\langle\Delta \hat a_{12}^\dagger \Delta \hat a_{12} \rangle & \langle (\Delta \hat a_{12})^2 \rangle} {\langle (\Delta \hat a_{12}^\dagger)^2 \rangle & \langle\Delta \hat a_{12}^\dagger \Delta \hat a_{12} \rangle} \ncl 0.
\label{N18} \end{equation} Equivalently, by applying Criterion~3 for $\hat F=(1, \hat a_{12}^\dagger, \hat a_{12})$, one obtains: \begin{equation} d_{\hat F}^{\rm (n)} =\DETT{1&\mean{\hat a^\dagger_{12}}&\mean{\hat a_{12}}} {\mean{\hat a_{12}}&\mean{\hat n_{12}}&\mean{(\hat a_{12})^2}} {\mean{\hat a^\dagger_{12}}&\mean{(\hat a^\dagger_{12})^2} &\mean{\hat n_{12}}}, \label{z36} \end{equation} where $$\hat n_{12}=\hat a^\dagger_{12}\hat a_{12}=\hat n_1+\hat n_2 +2{\rm Re}(\hat a_1^\dagger\hat a_2).$$ The determinants given by Eqs.~(\ref{N18}) and~(\ref{z36}) are equal to each other and equivalent to Eq.~(\ref{N17}). This example shows that the application of polynomial functions of moments, instead of monomials, can lead to matrices of moments of lower dimension. Thus, the polynomial-based approach can enable simpler and more intuitive derivations of physically relevant criteria. \subsection{Sum squeezing} According to Hillery~\cite{Hillery89}, a two-mode state exhibits {\em sum squeezing} in the direction $\phi$ if the variance of \begin{eqnarray} \hat V_{\phi} &=& \frac12 (\hat a_1 \hat a_2 e^{-i\phi}+ \hat a_1^\dagger \hat a_2^\dagger e^{i\phi} ) \label{N19} \end{eqnarray} satisfies \begin{eqnarray} \var{V_{\phi}} &<& \frac12 \langle\hat V_z\rangle, \label{N20} \end{eqnarray} where $$\hat V_z=\frac12(\hat n_1+\hat n_2+1)$$ and $\hat n_m=\hat a_m^\dagger \hat a_m$ for $m=1,2$. As in the case of quadrature squeezing, $\hat a_1$ and $\hat a_2$ usually correspond to slowly varying operators. Let us denote $\hat V_x=\hat V(\phi=0)$ and $\hat V_y=\hat V(\phi=\pi/2)$. It is worth mentioning that the operators $\hat V_x$, $(-\hat V_y)$, and $\hat V_z$ are the generators of the SU(1,1) Lie algebra.
Equation~(\ref{N20}) can be readily justified by noting that $[\hat V_x,\hat V_y]=i\hat V_z$, which implies the Heisenberg uncertainty relation $$\var{V_{x}}\var{V_{y}}\ge \frac14\langle\hat V_z\rangle^2.$$ By analogy with the standard quadrature squeezing, sum squeezing occurs when $\min\{\var{V_{x}},\var{V_{y}}\}<\langle\hat V_z\rangle/2$ or, more generally, if Eq.~(\ref{N20}) is satisfied. We note that, in analogy to the principal quadrature squeezing, one can define the principal sum squeezing by minimizing $\var{V_{\phi}}$ over $\phi$: \begin{equation} \min_{\phi}\var{V_{\phi}} < \frac12 \langle\hat V_z\rangle. \label{N20a} \end{equation} Conditions~(\ref{N20}) and~(\ref{N20a}) can be easily derived from Criterion~3. In fact, by noting that \begin{eqnarray} \var{V_{\phi}} &=& \varn{V_{\phi}} + \frac12 \langle\hat V_z\rangle, \label{N21} \end{eqnarray} the condition for sum squeezing can equivalently be given by a negative value of the variance $\varn{V_{\phi}}$. On the other hand, by applying Criterion~3 for $\hat F=(1,\hat V_{\phi})$, one obtains \begin{eqnarray} d_{\hat F}^{\rm (n)} = \DET{1 & \mean{\hat V_{\phi}}} {\mean{\hat V_{\phi}} & \mean{:\hat V^2_{\phi}:}} =\varn{V_{\phi}} \ncl 0, \label{N22} \end{eqnarray} which is equivalent to Eq.~(\ref{N20}). So it is seen that sum squeezing is a nonclassical effect---in the sense of Criterion~1. Two-mode sum squeezing can be generalized to any number of modes by defining the following $M$-mode phase-dependent operator~\cite{An99}: \begin{equation} \hat {\cal V}_\phi = \frac12 \left( {\rm e}^{-i \phi}\prod_{j} \hat a_j+ {\rm e}^{i \phi}\prod_{j} \hat a_j^\dagger \right) \label{z2} \end{equation} satisfying the commutation relation \begin{equation} [\hat {\cal V}_\phi ,\hat {\cal V}_{\phi+\pi/2} ]=\frac{i}2 \hat C, \quad\hat C=\prod_{j}(1+\hat n_j)-\prod_{j}\hat n_j.
\label{z5} \end{equation} Hereafter $j=1,...,M$, and we note that $|\mean{\hat C}|=\mean{\hat C}$. Thus, multimode sum squeezing along the direction $\phi$ occurs if \begin{equation} \var{{\cal V}_\phi} < \frac{|\mean{\hat C}|}4. \label{z9} \end{equation} One can find that \begin{equation} \var{{\cal V}_\phi}=\varn{{\cal V}_\phi}+ \frac{|\mean{\hat C}|}4. \label{z6} \end{equation} Thus, by applying the nonclassicality Criterion~3 for $\hat F=(1,\hat {\cal V}_\phi)$, we obtain the sum squeezing condition \begin{equation} \varn{{\cal V}_\phi} = \dfn \ncl 0, \label{z10} \end{equation} which is equivalent to the condition in Eq.~(\ref{z9}). \subsection{Difference squeezing} As defined by Hillery~\cite{Hillery89}, a two-mode state exhibits {\em difference squeezing} in the direction $\phi$ if \begin{eqnarray} \var{W_{\phi}} &<& \frac12 |\langle\hat W_z\rangle|, \label{N23} \end{eqnarray} where \begin{eqnarray} \hat W_{\phi} &=& \frac12 (\hat a_1 \hat a_2^\dagger e^{i\phi}+ \hat a_1^\dagger \hat a_2 e^{-i\phi} ) \label{N24} \end{eqnarray} and $\hat W_z=\frac12(\hat n_1-\hat n_2)$. The principal difference squeezing can be defined as: \begin{equation} \min_{\phi}\var{W_{\phi}} < \frac12 |\langle\hat W_z\rangle|, \label{N23a} \end{equation} in analogy to the principal quadrature squeezing and the principal sum squeezing. In contrast to the $\hat V_{i}$ operators for sum squeezing, the operators $\hat W_x=\hat W(\phi=0)$, $\hat W_y=\hat W(\phi=\pi/2)$, and $\hat W_z$ are generators of the SU(2) Lie algebra. The uncertainty relation $\var{W_{x}}\var{W_{y}}\ge (1/4)|\langle\hat W_z\rangle|^2$ justifies defining difference squeezing by Eq.~(\ref{N23}). One can find that \begin{equation} \var{W_{\phi}} = \varn{W_{\phi}} + \frac14 (\langle\hat n_1\rangle+\langle\hat n_2\rangle).
\label{N25} \end{equation} By recalling Criterion~3 for $\hat F=(1,\hat W_{\phi})$, it is seen that \begin{eqnarray} d_{\hat F}^{\rm (n)} =\varn{W_{\phi}} \ncl 0, \label{N26} \end{eqnarray} in analogy to Eq.~(\ref{N22}). Then the condition for difference squeezing, given by Eq.~(\ref{N23}), can be formulated as: \begin{equation} \dfn < - \frac12 \min_{i=1,2} \mean{\hat n_i}. \label{z1} \end{equation} So, states exhibiting difference squeezing are nonclassical. But also states satisfying \begin{eqnarray} \frac14|\langle\hat n_1\rangle-\langle\hat n_2\rangle| \le \var{W_{\phi}} < \frac14(\langle\hat n_1\rangle+\langle\hat n_2\rangle) \label{N27} \end{eqnarray} are nonclassical, although they do {\em not} exhibit difference squeezing. The first inequality in Eq.~(\ref{N27}) is the condition opposite to the squeezing condition given by Eq.~(\ref{N23}). Criterion~3 can also be applied to the multimode generalization of difference squeezing, which can be defined via the operator~\cite{An00}: \begin{equation} \hat {\cal W}_\phi = \frac12 {\rm e}^{-i \phi}\prod_{k=1}^K \hat a_k \prod_{m=K+1}^M \hat a_m^\dagger + {\rm H.c.} \label{z12} \end{equation} for any $K<M$. For simplicity, hereafter we skip the limits of multiplication in $\prod_{k}$ and $\prod_{m}$. The commutation relation \begin{equation} [\hat {\cal W}_\phi ,\hat {\cal W}_{\phi+\pi/2} ]=\frac{i}2 \hat C, \label{z13} \end{equation} where \begin{equation} \hat C=\prod_{k}(1+\hat n_k)\prod_{m}\hat n_m -\prod_{k}\hat n_k \prod_{m} (1+\hat n_m), \label{z14} \end{equation} justifies the choice of the following condition for multimode difference squeezing along the direction $\phi$~\cite{An00}: \begin{equation} \var{{\cal W}_\phi} < \frac{|\mean{\hat C}|}4.
\label{z18} \end{equation} We find that \begin{equation} \var{{\cal W}_\phi}=\varn{{\cal W}_\phi}+ \frac{|\mean{\hat D}|}4, \label{z15} \end{equation} where \begin{equation} \hat D=\prod_{k}(1+\hat n_k)\prod_{m}\hat n_m +\prod_{k}\hat n_k \prod_{m} (1+\hat n_m)-2 \prod_{j=1}^M\hat n_j. \label{z17} \end{equation} By applying Criterion~3 for $\hat F=(1,\hat {\cal W}_\phi)$, we obtain the following condition for multimode difference squeezing: \begin{equation} \dfn = \varn{{\cal W}_\phi} < \frac14 \left(|\mean{\hat C}|-\mean{\hat D}\right), \label{z19} \end{equation} which corresponds to the original condition, given by Eq.~(\ref{z18}). For states exhibiting difference squeezing, the right-hand side of Eq.~(\ref{z19}) is negative. In fact, if $\mean{\hat C}>0$ then \begin{equation} \hat C - \hat D = -2 \prod_{k}\hat n_k \left(\prod_{m}(1+\hat n_m) -\prod_{m}\hat n_m \right) < 0, \label{z20} \end{equation} otherwise \begin{equation} -\hat C - \hat D = -2 \left(\prod_{k}(1+\hat n_k) -\prod_{k}\hat n_k \right) \prod_{m}\hat n_m < 0. \label{z21} \end{equation} It is seen that the difference squeezing condition is stronger than the nonclassicality condition $\dfn\ncl 0$. This means that states satisfying the inequalities \begin{eqnarray} \frac14 \left(|\mean{\hat C}|-\mean{\hat D}\right) \le \varn{{\cal W}_\phi} < 0 \label{z21a} \end{eqnarray} are nonclassical but do {\em not} exhibit difference squeezing. \section{Unified derivations of criteria for one-time photon-number correlations} Various criteria for the existence of nonclassical photon-number intermode phenomena in two-mode radiation fields have been proposed (see, e.g., Refs.~\cite{Reid86,Agarwal88,Lee90,DodonovBook,VogelBook,MandelBook,PerinaBook}). Here, we give a few examples of such nonclassical phenomena revealed by single-time moments.
\subsection{Sub-Poisson photon-number correlations} The {\em squeezing} of the sum ($\hat n_+=\hat n_1 +\hat n_2$) or difference ($\hat n_-=\hat n_1 -\hat n_2$) of photon numbers occurs if \begin{eqnarray} \varn{n_{\pm}} &<& 0, \label{N28} \end{eqnarray} which can be interpreted as the photon-number sum/difference {\em sub-Poisson statistics}, respectively~\cite{PerinaBook}. These are nonclassical effects, as can be seen by analyzing the $P$~function: \begin{equation} \varn{n_{\pm}} = \intda P(\bm{\alpha},\bm{\alpha}^*) [(|\alpha_1|^2\pm|\alpha_2|^2) -\langle\hat n_{\pm}\rangle]^2, \label{N29} \end{equation} where $\bm{\alpha}=(\alpha_1,\alpha_2)$. Thus, photon-number squeezing implies the nonpositivity of the $P$~function. The same conclusion can also be drawn by applying Criterion~3 for $\hat F_{\pm}=(1,\hat n_{\pm})$, which leads to \begin{eqnarray} d_{\hat F_{\pm}}^{\rm (n)} = \DET{1&\mean{\hat n_{\pm}}} {\mean{\hat n_{\pm}}&\normal{\hat n_{\pm}^2}} = \varn{n_{\pm}} \ncl 0. \label{N30} \end{eqnarray} \subsection{Agarwal's nonclassicality criterion} Here, we consider an example of the violation of the CSI for two modes at the same evolution time. Other examples of violations of the CSI for a single mode, but at two different evolution times, are discussed in Appendix~C in relation to photon antibunching and hyperbunching. By considering the violation of the following CSI: \begin{eqnarray} \normal{\hat n_1^2}\normal{\hat n_2^2} &\cl& \langle \hat n_1 \hat n_2\rangle^2, \label{x15} \end{eqnarray} Agarwal~\cite{Agarwal88} introduced the following nonclassicality parameter: \begin{equation} I_{12} = \frac{\sqrt{\langle :\hat{n}_1^2:\rangle \langle :\hat{n}_2^2:\rangle}} {\mean{\hat{n}_1 \hat{n}_2}}-1.
\label{x14} \end{equation} Explicitly, the nonclassicality of phenomena described by a negative value of $I_{12}$ is also implied by Criterion~3 for $\hat F=(\hat n_1,\hat n_2)$, which results in \begin{eqnarray} \dfn &=& \DET{\normal{\hat n_1^2} & \mean{\hat{n}_1 \hat{n}_2}} {\mean{\hat{n}_1 \hat{n}_2} & \normal{\hat n_2^2}} \ncl 0. \label{x17} \end{eqnarray} \subsection{Lee's nonclassicality criterion} The Muirhead classical inequality~\cite{Muirhead} is a generalization of the arithmetic-geometric mean inequality. Lee has formulated this inequality as follows~\cite{Lee90}: \begin{equation} D_{12} = \langle :\hat{n}_1^2:\rangle + \langle :\hat{n}_2^2:\rangle - 2 \langle \hat{n}_1 \hat{n}_2\rangle \cl 0. \label{x30} \end{equation} The nonclassicality of correlations with a negative value of the parameter $D_{12}$ is readily seen by applying Criterion~3 for $\hat F=(\hat n_1-\hat n_2)\equiv (\hat n_{-})$, which yields \begin{equation} D_{12} = \normal{\hat{n}_{-}^2} \ncl 0. \label{x30a} \end{equation} For comparison, let us analyze Criterion~3 for $\hat F=(1,\hat n_{-})$, which leads to \begin{equation} \dfn = \normal{\hat{n}_{-}^2} -\mean{\hat{n}_{-}}^2 \cl 0. \label{x31} \end{equation} Clearly, \begin{equation} D_{12} < 0 \Rightarrow \dfn \ncl 0. \label{x31a} \end{equation} Thus, the criterion given by Eq.~(\ref{x31}) detects more nonclassical states than that based on the $D_{12}$ parameter. Alternatively, a direct application of the relation \begin{equation} D_{12} = \intda P(\bm{\alpha},\bm{\alpha}^*) (|\alpha_1|^2- |\alpha_2|^2)^2 \ncl 0 \label{x91} \end{equation} also implies the nonpositivity of the $P$~function in some regions of phase space. \section{Unified derivations of criteria for two-time photon-number correlations} Here, we consider the two-time single-mode photon-number nonclassical correlations on the examples of photon antibunching and photon hyperbunching.
\subsection{Photon antibunching} The {\em photon antibunching}~\cite{Kimble77,Walls79,Loudon80,VogelBook,MandelBook} of a stationary or nonstationary single-mode field can be defined via the two-time second-order intensity correlation function given by \begin{eqnarray} G^{(2)}(t,t+\tau) &=& \normalo{\hat{n}(t)\hat{n}(t+\tau)} \notag\\ &=&\langle \hat{a}^{\dagger}(t)\hat{a}^{\dagger}(t+\tau) \hat{a}(t+\tau)\hat{a}(t)\rangle\quad\quad \label{y01} \end{eqnarray} or its normalized intensity correlation function defined as \begin{equation} g^{(2)}(t,t+\tau )= \frac{G^{(2)}(t,t+\tau )}{\sqrt{ G^{(2)}(t,t)G^{(2)}(t+\tau ,t+\tau )}}, \label{y02} \end{equation} where $\dd\,\dd$ denotes the time ordering and normal ordering of the field operators. Photon antibunching occurs if $g^{(2)}(t,t+\tau )$, considered as a function of $\tau$, has a strict local minimum at $\tau =0$ (see, e.g., Refs.~\cite{MandelBook,Miranowicz99a}): \begin{eqnarray} g^{(2)}(t,t+\tau ) > g^{(2)}(t,t). \label{y05} \end{eqnarray} {\em Photon bunching} occurs if $g^{(2)}(t,t+\tau)$ decreases, while {\em photon unbunching} appears if $g^{(2)}(t,t+\tau)$ is locally constant. For {\em stationary} fields [i.e., those satisfying $G^{(2)}(t,t+\tau)=G^{(2)}(\tau)$, so that $g^{(2)}(t,t+\tau)=g^{(2)}(\tau)$], Eq.~(\ref{y05}) reduces to the standard definition of photon antibunching~\cite{VogelBook,MandelBook}: \begin{eqnarray} g^{(2)}(\tau )> g^{(2)}(0). \label{y05b} \end{eqnarray} Photon antibunching, defined by Eq.~(\ref{y05}), is a nonclassical effect, as it corresponds to the violation of the Cauchy-Schwarz inequality: \begin{eqnarray} G^{(2)}(t,t)G^{(2)}(t+\tau ,t+\tau ) \cl \big[G^{(2)}(t,t+\tau )\big]^2. \label{y09} \end{eqnarray} As shown in Ref.~\cite{Vogel08}, this property follows from Criterion~3 based on the generalized definition of the space-time $P$~function, given by~(\ref{VogelP}).
In fact, this follows by assuming $\hat F=(\hat n(t),\hat n(t+\tau))$, which leads to \begin{eqnarray} \dfn &=& \DET{\normalo{\hat n^2(t)} & \normalo{\hat n(t)\hat n(t+\tau)}} {\normalo{\hat n(t)\hat n(t+\tau)} & \normalo{\hat n^2(t+\tau)}} \notag \\ &=& \DET{G^{(2)}(t,t) & G^{(2)}(t,t+\tau)} {G^{(2)}(t,t+\tau) & G^{(2)}(t+\tau,t+\tau)} \ncl 0.\quad\quad \label{x23} \end{eqnarray} \subsection{Photon hyperbunching} {\em Photon hyperbunching}~\cite{Jakob01}, also referred to as a photon antibunching effect~\cite{Miranowicz99b}, can be defined as: \begin{eqnarray} \overline{g}^{(2)}(t,t+\tau )> \overline{g}^{(2)}(t,t), \label{y05a} \end{eqnarray} given in terms of the correlation coefficient~\cite{Berger93} \begin{equation} \overline{g}^{(2)}(t,t+\tau )= \frac{\overline{G}^{(2)}(t,t+\tau )}{\sqrt{ \overline{G}^{(2)}(t,t)\overline{G}^{(2)}(t+\tau ,t+\tau )}}, \label{y07} \end{equation} where the covariance $\overline{G}^{(2)}(t,t+\tau)$ is given by \begin{equation} \overline{G}^{(2)}(t,t+\tau) = G^{(2)}(t,t+\tau) -G^{(1)}(t) G^{(1)}(t+\tau), \label{y08} \end{equation} and $G^{(1)}(t)=\langle \hat n(t)\rangle =\langle \hat{a}^{\dagger }(t) \hat{a}(t)\rangle$ is the light intensity. It is worth noting that, for {\em stationary} fields, the definitions given by Eqs.~(\ref{y05}) and~(\ref{y05a}) are equivalent to each other and to definitions of photon antibunching based on other normalized correlation functions, e.g., \begin{equation} \tilde g^{(2)}(t,t+\tau )=\frac{G^{(2)}(t,t+\tau )}{[ G^{(1)}(t)] ^{2}}. \label{y08a} \end{equation} However, for {\em nonstationary} fields, these definitions correspond, in general, to different photon antibunching effects~\cite{Miranowicz99a,Miranowicz99b,Jakob01}.
Analogously to Eq.~(\ref{y05}), the photon hyperbunching, defined by Eq.~(\ref{y05a}), can occur for nonclassical fields violating the Cauchy-Schwarz inequality: \begin{equation} \overline{G}^{(2)}(t,t)\overline{G}^{(2)}(t+\tau ,t+\tau ) \cl \big[\overline{G}^{(2)}(t,t+\tau )\big]^2. \label{y10} \end{equation} Again, the nonclassicality of this effect can be shown by applying Criterion~3 for the space-time $P$~function, given by~(\ref{VogelP}), assuming $\hat F=(\Delta \hat n(t),\Delta \hat n(t+\tau))$, where $\Delta \hat n(t) =\hat n(t)-\mean{\hat n(t)}$. Thus, one obtains \begin{equation} \dfn = \DET{\overline G^{(2)}(t,t) & \overline G^{(2)}(t,t+\tau)} {\overline G^{(2)}(t,t+\tau) & \overline G^{(2)}(t+\tau,t+\tau)} \ncl 0, \label{x27} \end{equation} which is equivalent to Eq.~(\ref{y05a}). Alternatively, by choosing $\hat F=(1,\hat n(t),\hat n(t+\tau))$, one finds \begin{equation} \dfn = \DETT {1&\mean{\hat n(t)}&\mean{\hat n(t+\tau)}} {\mean{\hat n(t)}&\mean{\dd\hat n^2(t)\dd} &\mean{\dd \hat n(t) \hat n(t+\tau)\dd}} {\mean{\hat n(t+\tau)}&\mean{\dd \hat n(t) \hat n(t+\tau)\dd}& \mean{\dd\hat n^2(t+\tau)\dd}}, \label{z34} \end{equation} which is equal to the determinant given by Eq.~(\ref{x27}). A comparison of Eqs.~(\ref{x27}) and~(\ref{z34}), analogously to Eqs.~(\ref{N18}) and~(\ref{z36}), again shows the advantage of using polynomial, instead of monomial, functions of moments in $\hat F$. Finally, it is worth noting that the {\em single-mode sub-Poisson} photon-number statistics, defined by the condition $\varn{n}<0$, although also referred to as {\em photon antibunching}, is an effect different from those defined by Eqs.~(\ref{y05}) and~(\ref{y05a}), as shown by examples in Ref.~\cite{Zou90}. \end{appendix} \begin{thebibliography}{99} \bibitem{Glauber} R. J. Glauber, \extra{Coherent and incoherent states of the radiation field,} Phys. Rev. \textbf{131}, 2766 (1963). \bibitem{Sudarshan} E. C. G.
Sudarshan, \extra{Equivalence of semiclassical and quantum mechanical descriptions of statistical light beams,} \prl \textbf{10}, 277 (1963). \bibitem{DodonovBook} V. V. Dodonov and V. I. Man'ko (eds.), {\em Theory of Nonclassical States of Light} (Taylor \& Francis, New York, 2003). \bibitem{VogelBook} W. Vogel and D.-G. Welsch, {\em Quantum Optics} (Wiley-VCH, Weinheim, 2006). \bibitem{MandelBook} L. Mandel and E. Wolf, {\em Optical Coherence and Quantum Optics} (Cambridge Univ. Press, Cambridge, 1995). \bibitem{PerinaBook} J. Pe\v{r}ina, {\em Quantum Statistics of Linear and Nonlinear Optical Phenomena} (Reidel, Dordrecht, 1991). \bibitem{Walls79} D. F. Walls, \extra{Evidence for the quantum nature of light,} Nature {\bf 280}, 451 (1979). \bibitem{Loudon80} R. Loudon, \extra{Non-classical effects in the statistical properties of light,} Rep. Prog. Phys. {\bf 43}, 913 (1980). \bibitem{Loudon87} R. Loudon and P. L. Knight, \extra{Squeezed light,} J. Mod. Opt. {\bf 34}, 709 (1987). \bibitem{Smirnov87} D. F. Smirnov and A. S. Troshin, \extra{New phenomena in quantum optics: photon antibunching, sub-Poisson photon statistics, and squeezed states,} Sov. Phys. Usp. {\bf 30}, 851 (1987) [Usp. Fiz. Nauk {\bf 153}, 233 (1987)]. \bibitem{Klyshko96} D. N. Klyshko, \extra{The nonclassical light,} Usp. Fiz. Nauk {\bf 166}, 613 (1996) [Sov. Phys. Usp. {\bf 39}, 573 (1996)]. \bibitem{Dodonov02} V. V. Dodonov, \extra{`Nonclassical' states in quantum optics: a `squeezed' review of the first 75 years,} J. Opt. B: Quantum Semiclass. Opt. {\bf 4}, R1 (2002). \bibitem{Nori} X. Hu and F. Nori, \extra{Phonon squeezed states: Quantum noise reduction in solids,} Physica B {\bf 263}, 16 (1999); S. N. Shevchenko, A. N. Omelyanchouk, A. M. Zagoskin, S. Savel'ev, and F. Nori, \extra{Distinguishing quantum from classical Rabi oscillations in a phase qubit,} New J. Phys.
{\bf 10}, 073026 (2008); A.M. Zagoskin, E. Il'ichev, M.W. McCutcheon, J. F. Young, and F. Nori, \extra{Generation of squeezed states of microwave radiation in a superconducting resonant circuit,} \prl {\bf 101}, 253602 (2008); N. Lambert, C. Emary, Y. N. Chen, and F. Nori, \extra{Distinguishing quantum and classical transport through nanostructures,} e-print arXiv:1002.3020. \bibitem{Schwab05} K. C. Schwab and M. L. Roukes, \extra{Putting mechanics into quantum mechanics,} Phys. Today {\bf 58} (7), 36 (2005). \bibitem{Wei06} L. F. Wei, Y. X. Liu, C. P. Sun, and F. Nori, \extra{Probing tiny motions of nanomechanical resonators: classical or quantum mechanical,} \prl {\bf 97}, 237201 (2006); N. Lambert and F. Nori, \extra{Detecting quantum-coherent nanomechanical oscillations using the current-noise spectrum of a double quantum dot,} \prb {\bf 78}, 214302 (2008). \bibitem{qbiology} G.~S.~Engel, T. R. Calhoun, E. L. Read, T. K. Ahn, T. Man\v{c}al, Y. C. Cheng, R. E. Blankenship, and G. R. Fleming, \extra{Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems,} Nature (London) {\bf 446}, 782 (2007). \bibitem{Nielsen} M. A. Nielsen and I. L. Chuang, \textit{Quantum Computation and Quantum Information} (Cambridge University Press, Cambridge, 2000). \bibitem{Richter02} Th. Richter and W. Vogel, \extra{Nonclassicality of quantum states: a hierarchy of observable conditions,} \prl {\bf 89}, 283601 (2002). \bibitem{Rivas} A. Rivas and A. Luis, \extra{Nonclassicality of states and measurements by breaking classical bounds on statistics}, \pra {\bf 79}, 042105 (2009). \bibitem{Agarwal92} G.~S. Agarwal and K.~Tara, \extra{Nonclassical character of states exhibiting no squeezing or sub-Poissonian statistics,} \pra {\bf 46}, 485 (1992). \bibitem{NCL1} E. Shchukin, Th. Richter, and W.
Vogel, \extra{Nonclassicality criteria in terms of moments,} \pra {\bf 71}, 011802(R) (2005). \bibitem{NCL2} E. V. Shchukin and W. Vogel, \extra{Nonclassical moments and their measurement,} \pra {\bf 72}, 043808 (2005). \bibitem{SVdetect} E. Shchukin and W. Vogel, \extra{Universal measurement of quantum correlations of radiation,} \prl {\bf 96}, 200403 (2006). \bibitem{Vogel08} W. Vogel, \extra{Nonclassical correlation properties of radiation fields,} \prl {\bf 100}, 013605 (2008). \bibitem{Yuen76} H. P. Yuen, \extra{Two-photon coherent states of the radiation field,} \pra {\bf 13}, 2226 (1976). \bibitem{Kozierowski77} M. Kozierowski and R. Tana\'s, \extra{Quantum fluctuations in second-harmonic light generation,} Opt. Commun. {\bf 21}, 229 (1977). \bibitem{Caves85} C. M. Caves and B. L. Schumaker, \extra{New formalism for two-photon quantum optics,} \pra {\bf 31}, 3068 (1985). \bibitem{Reid86} M. D. Reid and D. F. Walls, \extra{Violations of classical inequalities in quantum optics,} \pra {\bf 34}, 1260 (1986). \bibitem{Dalton86} B.J. Dalton, \extra{Effect of internal atomic relaxation on quantum fields,} Phys. Scr. T {\bf 12}, 43 (1986). \bibitem{Schleich87} W. Schleich and J. A. Wheeler, \extra{Oscillations in photon distribution of squeezed states and interference in phase space,} Nature (London) {\bf 326}, 574 (1987). \bibitem{Agarwal88} G. S. Agarwal, \extra{Nonclassical statistics of fields in pair coherent states,} J. Opt. Soc. Am. B {\bf 5}, 1940 (1988). \bibitem{Luks88} A. Luk\v{s}, V. Pe\v{r}inov\'a and J. Pe\v{r}ina, \extra{Principal squeezing of vacuum fluctuations,} Opt. Commun. {\bf 67}, 149 (1988); A. Luk\v{s}, V. Pe\v{r}inov\'a and Z. Hradil, \extra{Principal squeezing,} Acta Phys. Polon. {\bf A74}, 713 (1988). \bibitem{Hillery89} M.
Hillery, \extra{Sum and difference squeezing of the electromagnetic field,} \pra \textbf{40}, 3147 (1989). \bibitem{Lee90} C. T. Lee, \extra{Many-photon antibunching in generalized pair coherent states,} \pra {\bf 41}, 1569 (1990); \extra{Nonclassical photon statistics of two-mode squeezed states,} {\em ibid.} {\bf 42}, 1608 (1990). \bibitem{Zou90} X. T. Zou and L. Mandel, \extra{Photon-antibunching and sub-Poissonian photon statistics,} \pra \textbf{41}, 475 (1990). \bibitem{Klyshko96pla} D. N. Klyshko, \extra{Observable signs of nonclassical light,} Phys. Lett. A {\bf 213}, 7 (1996). \bibitem{Miranowicz99a} A. Miranowicz, J. Bajer, H. Matsueda, M. R. B. Wahiddin and R. Tana\'s, \extra{Comparative study of photon antibunching of non-stationary fields (part I),} J. Opt. B: Quantum Semiclass. Opt. \textbf{1}, 511 (1999). \bibitem{Miranowicz99b} A. Miranowicz, H. Matsueda, J. Bajer, M. R. B. Wahiddin and R. Tana\'s, \extra{Comparative study of photon bunching of classical fields (part II),} J. Opt. B: Quantum Semiclass. Opt. \textbf{1}, 603 (1999). \bibitem{An99} N. B. An and V. Tinh, \extra{General multimode sum-squeezing,} Phys. Lett. A {\bf 261}, 34 (1999). \bibitem{An00} N. B. An and V. Tinh, \extra{General multimode difference-squeezing,} Phys. Lett. A {\bf 270}, 27 (2000). \bibitem{Jakob01} M. Jakob, Y. Abranyos, and J. A. Bergou, \extra{Comparative study of hyperbunching in the fluorescence from a bichromatically driven atom,} J. Opt. B: Quantum Semiclass. Opt. {\bf 3}, 130 (2001). \bibitem{Clauser74} J. F. Clauser, \extra{Experimental distinction between the quantum and classical field-theoretic predictions for the photoelectric effect,} \prd {\bf 9}, 853 (1974). \bibitem{Kimble77} H. J. Kimble, M. Dagenais, and L. Mandel, \extra{Photon antibunching in resonance fluorescence,} \prl {\bf 39}, 691 (1977). \bibitem{Short83} R. Short and L.
Mandel, \extra{Observation of sub-Poissonian photon statistics,} \prl {\bf 51}, 384 (1983). \bibitem{Slusher85} R. E. Slusher, L. W. Hollberg, B. Yurke, J. C. Mertz, and J. F. Valley, \extra{Observation of squeezed states generated by four-wave mixing in an optical cavity,} \prl {\bf 55}, 2409 (1985). \bibitem{Grangier86} P. Grangier, G. Roger, and A. Aspect, \extra{Experimental evidence for a photon anticorrelation effect on a beam splitter: A new light on single-photon interferences,} Europhys. Lett. {\bf 1}, 173 (1986). \bibitem{Hong87} C. K. Hong, Z. Y. Ou, and L. Mandel, \extra{Measurement of subpicosecond time intervals between two photons by interference,} \prl {\bf 59}, 2044 (1987). \bibitem{Lvovsky02} A. I. Lvovsky and J. H. Shapiro, \extra{Nonclassical character of statistical mixtures of the single-photon and vacuum optical states}, \pra {\bf 65}, 033830 (2002). \bibitem{Hillery87} M. Hillery, \extra{Nonclassical distance in quantum optics,} \pra {\bf 35}, 725 (1987). \bibitem{Lee92} C. T. Lee, \extra{Moments of P functions and nonclassical depths of quantum states,} \pra {\bf 45}, 6586 (1992). \bibitem{Lutkenhaus95} N. L\"utkenhaus and S. M. Barnett, \extra{Nonclassical effects in phase space,} \pra {\bf 51}, 3340 (1995). \bibitem{Dodonov00} V. V. Dodonov, O. V. Man'ko, V. I. Man'ko and A. W\"unsche, \extra{Hilbert-Schmidt distance and non-classicality of states in quantum optics,} J. Mod. Opt. {\bf 47}, 633 (2000). \bibitem{Marian02} P. Marian, T. A. Marian, and H. Scutaru, \extra{Quantifying nonclassicality of one-mode Gaussian states of the radiation field,} \prl {\bf 88}, 153601 (2002). \bibitem{Dodonov03} V. V. Dodonov and M. B. Ren\'o, \extra{Classicality and anticlassicality measures of pure and mixed quantum states,} Phys. Lett. A {\bf 308}, 249 (2003). \bibitem{Malbouisson03} J. M. C. Malbouisson and B.
Baseia, \extra{On the measure of nonclassicality of field states,} Phys. Scr. {\bf 67}, 93 (2003). \bibitem{Kenfack04} A. Kenfack and K. \.Zyczkowski, \extra{Negativity of the Wigner function as an indicator of non-classicality,} J. Opt. B: Quantum Semiclass. Opt. {\bf 6}, 396 (2004). \bibitem{Asboth05} J. K. Asb\'oth, J. Calsamiglia, and H. Ritsch, \extra{Computable measure of nonclassicality for light,} \prl {\bf 94}, 173602 (2005). \bibitem{Boca09} M. Boca, I. Ghiu, P. Marian, and T. A. Marian, \extra{Quantum Chernoff bound as a measure of nonclassicality for one-mode Gaussian states,} \pra {\bf 79}, 014302 (2009). \bibitem{SV05} E. Shchukin and W. Vogel, \extra{Inseparability criteria for continuous bipartite quantum states,} \prl {\bf 95}, 230502 (2005). \bibitem{MP06} A. Miranowicz and M. Piani, \extra{Comment on ``Inseparability criteria for continuous bipartite quantum states'',} \prl {\bf 97}, 058901 (2006). \bibitem{MPHH09} A. Miranowicz, M. Piani, P. Horodecki and R. Horodecki, \extra{Inseparability criteria based on matrices of moments,} \pra {\bf 80}, 052303 (2009). \bibitem{Rigas06} J. Rigas, O. G\"uhne, and N. L\"utkenhaus, \extra{Entanglement verification for quantum-key-distribution systems with an underlying bipartite qubit-mode structure,} \pra {\bf 73}, 012341 (2006). \bibitem{Korbicz06} J. K. Korbicz and M. Lewenstein, \extra{Group-theoretical approach to entanglement,} \pra {\bf 74}, 022318 (2006). \bibitem{Moroder08} T. Moroder, M. Keyl, and N. L\"utkenhaus, \extra{Truncated su(2) moment problem for spin and polarization states,} J. Phys. A {\bf 41}, 275302 (2008). \bibitem{Haseler08} H. H\"aseler, T. Moroder, and N. L\"utkenhaus, \extra{Testing quantum devices: practical entanglement verification in bipartite optical systems,} \pra {\bf 77}, 032303 (2008). \bibitem{Duan} L.~M. Duan, G.~Giedke, J.~I.
Cirac, and P.~Zoller, \extra{Inseparability criterion for continuous variable systems,} \prl {\bf 84}, 2722 (2000). \bibitem{Hillery06} M. Hillery and M. S. Zubairy, \extra{Entanglement conditions for two-mode states,} \prl {\bf 96}, 050503 (2006). \bibitem{Simon} R.~Simon, \extra{Peres-Horodecki separability criterion for continuous variable systems,} \prl {\bf 84}, 2726 (2000). \bibitem{Mancini} S.~Mancini, V.~Giovannetti, D.~Vitali, and P.~Tombesi, \extra{Entangling macroscopic oscillators exploiting radiation pressure,} \prl {\bf 88}, 120401 (2002). \bibitem{Titulaer65} U. M. Titulaer and R. J. Glauber, \extra{Correlation functions for coherent fields,} Phys. Rev. {\bf 140}, B676 (1965). \bibitem{Wunsche04} A. W\"unsche, \extra{About the nonclassicality of states defined by nonpositivity of the $P$-quasiprobability,} J. Opt. B: Quantum Semiclass. Opt. {\bf 6}, 159 (2004). \bibitem{Kiesel} T. Kiesel, W. Vogel, V. Parigi, A. Zavatta and M. Bellini, \extra{Experimental determination of a nonclassical Glauber-Sudarshan $P$ function,} \pra {\bf 78}, 021804(R) (2008). \bibitem{Korbicz} J. K. Korbicz, J. I. Cirac, J. Wehr, and M. Lewenstein, \extra{Hilbert's 17th problem and the quantumness of states,} \prl {\bf 94}, 153601 (2005). \bibitem{Sperling} J. Sperling, private communication. \bibitem{Strang} G. Strang, {\em Linear Algebra and Its Applications} (Academic Press, New York, 1980). \bibitem{SV06multi} E. Shchukin and W. Vogel, \extra{Conditions for multipartite continuous-variable entanglement,} \pra {\bf 74}, 030302(R) (2006). \bibitem{HorodeckiReview} R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, \extra{Quantum entanglement,} Rev. Mod. Phys. {\bf 81}, 865 (2009). \bibitem{Raymer} M.~G. Raymer, A.~C. Funk, B.~C.
Sanders, and H.~de~Guise, \extra{Separability criterion for separate quantum systems,} \pra {\bf 67}, 052104 (2003). \bibitem{Agarwal05} G.~S. Agarwal and A. Biswas, \extra{Quantitative measures of entanglement in pair-coherent states,} J. Opt. B: Quantum Semiclass. Opt. {\bf 7}, 350 (2005). \bibitem{Song} L. Song, X. Wang, D. Yan, and Z. S. Pu, \extra{Entanglement conditions for tripartite systems via indeterminacy relations,} J. Phys. B {\bf 41}, 015505 (2008). \bibitem{SR} E. Schr\"odinger, Sitz. Ber. Preuss. Akad. Wiss. (Phys.-Math. Kl.) \textbf{19}, 296 (1930); H. P. Robertson, \extra{An indeterminacy relation for several observables and its classical interpretation,} Phys. Rev. \textbf{46}, 794 (1934). \bibitem{Muirhead} R. F. Muirhead, \extra{Some methods applicable to identities and inequalities of symmetric algebraic functions of $n$ letters,} Proc. Edinburgh Math. Soc. {\bf 21}, 144 (1903). \bibitem{Berger93} M. A. Berger, {\em An Introduction to Probability and Stochastic Processes} (Springer, New York, 1993). \end{thebibliography} \end{document}
\begin{document} \title{Randomly perturbed switching dynamics of a dc/dc converter} \author{Chetan D. Pahlajani} \address{Discipline of Mathematics\\ Indian Institute of Technology Gandhinagar\\ Palaj, Gandhinagar 382355\\ India} \email{[email protected]} \date{\today} \begin{abstract} In this paper, we study the effect of small Brownian noise on a switching dynamical system which models a first-order {\sc dc}/{\sc dc} buck converter. The state vector of this system comprises a continuous component whose dynamics switch, based on the {\sc on}/{\sc off} configuration of the circuit, between two ordinary differential equations ({\sc ode}), and a discrete component which keeps track of the {\sc on}/{\sc off} configurations. Assuming that the parameters and initial conditions of the unperturbed system have been tuned to yield a stable periodic orbit, we study the stochastic dynamics of this system when the forcing input in the {\sc on} state is subject to small white noise fluctuations of size $\varepsilon$, $0<\varepsilon \ll 1$. For the ensuing stochastic system whose dynamics switch at random times between a small noise stochastic differential equation ({\sc sde}) and an {\sc ode}, we prove a functional law of large numbers which states that in the limit of vanishing noise, the stochastic system converges to the underlying deterministic one on time horizons of order $\mathscr{O}(1/\varepsilon^\nu)$, $0 \le \nu < 2/3$. \end{abstract} \maketitle \color{black} \section{Introduction}\label{S:Intro} Ordinary differential equations ({\sc ode}) and dynamical systems play a fundamental role in modelling and analysis of various phenomena arising in science and engineering. In many applications, however, the smooth evolution of the {\sc ode} dynamics is punctuated by discrete instantaneous events which give rise to switching or non-smooth behaviour.
Examples include instantaneous switching between different governing {\sc ode} in a power electronic circuit \cite{BKYY,BV_PowerElectronics,dBGGV}, discontinuous change in velocity for an oscillator impacting a boundary \cite{SH83,Nor91}, etc. In such instances, the dynamical system involves functions which are not smooth, but only piecewise-smooth in their arguments. Such piecewise-smooth dynamical systems \cite{dBBCK} display a wealth of phenomena not seen in their smooth counterparts, and have hence been the subject of much current research. Dynamical systems arising in practice are almost always subject to random disturbances, owing perhaps to fluctuating external forces, or uncertainties in the system, or unmodelled dynamics, etc. A more accurate picture can therefore be obtained by modelling such systems (at least in the continuous-time case) using {\it stochastic differential equations} ({\sc sde}); intuitively, this corresponds to adding a ``noise'' term to the {\sc ode}. For cases where the perturbing noise is small, it is natural to ask whether the stochastic (perturbed) system converges to the deterministic (unperturbed) one in the limit of vanishing noise, and if yes, how the asymptotic behaviour of the fluctuations may be quantified. Such questions have played a significant role in the development of limit theorems for stochastic processes; see, for instance, \cite{DZ98,EK86,FW_RPDS,PS_Multiscale}. Although smooth dynamical systems perturbed by noise have been analysed in great depth over the past few decades, the effect of random noise on non-smooth or switching dynamical systems remains, with some exceptions (see, for instance, \cite{CL_TAC_2007,CL_SICON,HBB1,HBB2,SK14,SK_SD,SK_JNS}), relatively unexplored. One of the challenges in such an undertaking is that even in the absence of noise, the dynamics of switching systems can prove rather difficult to analyse.
Part of the reason is the frequently encountered intractability of such systems to analytic computation \cite{BC_Boost, dBGGV}, even in cases when the component subsystems are linear. Our primary interest is the study of stochastic processes which arise due to small random perturbations of (non-smooth) switching dynamical systems. These problems are of immediate relevance in the analysis of {\sc dc/dc} converters in power electronics---naturally susceptible to noise---in which time-varying circuit topology leads to mathematical models characterised by switching between different governing {\sc ode}. In the purely deterministic setting, the dynamics of these systems have been extensively studied, with much of the work focussing on {\it buck converters} \cite{BKYY,dBGGV,HDJ92,FO96}; these are circuits used to transform an input {\sc dc} voltage to a lower output {\sc dc} voltage. Perhaps the simplest of these is the first-order buck converter; this is a system which switches between two linear first-order {\sc ode}. While this circuit is pleasantly amenable to some explicit computation, it nevertheless displays rich dynamics in certain parameter regimes. Periodic orbits, bifurcations and chaos for this converter have been studied in \cite{BKYY,HDJ92}. In the present paper, we study small random perturbations of a switching dynamical system which models a first-order buck converter. The state vector of this system comprises a continuous component (the inductor current) governed by one of two different {\sc ode}, and a discrete component which takes values $1$ or $0$ depending on whether the circuit is in the {\sc on} versus {\sc off} configurations, respectively. 
Assuming that the parameters and initial conditions of the unperturbed system have been tuned to yield a stable periodic orbit, we study the stochastic dynamics of this system when the forcing (input {\sc dc} voltage) is subject to small white noise fluctuations of size $\varepsilon$, with $0< \varepsilon \ll 1$. Our main result is a {\it functional law of large numbers} ({\sc flln}) which states that, as $\varepsilon \searrow 0$, the solution of the stochastically perturbed system converges to that of the underlying deterministic system, over time horizons $\mathsf{T}_\varepsilon$ of order $\mathscr{O}(1/\varepsilon^\nu)$ for any $0 \le \nu < 2/3$. Part of the novelty of this work, in the context of the literature on switching diffusions (see, e.g., \cite{BBG_JMAA_1999,LuoMao,YZ10,YZ_book}), is that the switching in our problem is {\it not} driven by a discrete-state stochastic process (whose transitions may occur at a rate depending on the continuous component of the state); rather, the switching occurs whenever the continuous component of the state hits a threshold ({\sc on} $\to$ {\sc off}), or upon the arrival of a time-periodic signal ({\sc off} $\to$ {\sc on}). Our switching is thus {\it entirely} determined by the continuous component, together with a periodic clock signal. We also note that since the input {\sc dc} voltage in the buck converter influences the inductor current only in the {\sc on} state \cite{BKYY}, the perturbed system has alternating stochastic and deterministic evolutions: the dynamics switch {\it at random times} between an {\sc sde} driven by a small Brownian motion of size $\varepsilon$ in the {\sc on} state, and an {\sc ode} in the {\sc off} state. The import of our results is that even in the presence of small stochastic perturbations, one may expect the buck converter to function close to its desired operation for ``reasonably long" times. The rest of the paper is organised as follows. 
In Section \ref{S:ProblemStatement}, we describe the switching systems (deterministic and stochastic) in some detail, and we pose our problem of interest. Next, in Section \ref{S:MainResult}, we state our main result (Theorem \ref{T:FLLN}) and outline the steps to the proof through a sequence of auxiliary lemmas and propositions. A few of these auxiliary results are proved in Section \ref{S:MainResult}, with the remainder (the slightly lengthier ones) being deferred to Section \ref{S:Proofs}. \section{Problem Description}\label{S:ProblemStatement} In this section, we formulate our problem of interest. We start with a description, including the governing {\sc ode} and the switching mechanism, of a dynamical system modelling a first-order buck converter in Section \ref{SS:Det_sw_sys}. Random perturbations of this system, which lead to a switching {\sc sde}/{\sc ode} model, are discussed in Section \ref{SS:Stoch_sw_sys}. In Section \ref{SS:explicit_formulas}, we obtain explicit formulas for solutions to both the {\sc sde} and the {\sc ode} {\it between} switching times, and we piece these together {\it at} switching times to obtain expressions describing the overall evolution of both the perturbed (stochastic) and unperturbed (deterministic) switching systems. Finally, after showing in Section \ref{SS:stable_periodic_orbit} how problem parameters can be tuned and initial conditions chosen to ensure that the unperturbed system has a stable periodic orbit, we pose our questions of interest. Before proceeding further, we note that we have a {\it hybrid} system. Indeed, the full state of the system is specified by a vector $z=(x,y)$ taking values in $\mathscr{Z} \triangleq \mathbb{R} \times \{0,1\}$; here, $x \in \mathbb{R}$ is the continuous component of the state---corresponding to the inductor current in the buck converter---while the discrete component $y$ takes values $1$ or $0$ depending on whether the switch is {\sc on} or {\sc off}.
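(The following numerical sketch is not part of the paper; it is an illustration of the hybrid {\sc on}/{\sc off} evolution outlined above. Because each phase is governed by a linear first-order {\sc ode}, the trajectory can be pieced together from exact exponential solutions: integrate the {\sc on} dynamics until the current hits the reference level, then decay in the {\sc off} state until the next integer clock time. The parameter values below are my own illustrative choices, selected to satisfy the stability constraints imposed later in this section.)

```python
import math

# Illustrative parameter values (my own choice; they are consistent with the
# constraints on alpha_on, beta, alpha_off stated later in this section).
ALPHA_ON, ALPHA_OFF, BETA, X_REF = 0.5, 0.55, 1.1, 1.0

def simulate_clock_samples(x0, n_cycles):
    """Exact event-driven simulation of the deterministic switching system.

    Starting ON at a clock instant with inductor current x0 in (0, X_REF),
    alternate between the ON dynamics dx/dt = -ALPHA_ON*x + BETA (until x
    hits X_REF) and the OFF dynamics dx/dt = -ALPHA_OFF*x (until the next
    integer clock time), recording x at the successive clock times.
    """
    xinf = BETA / ALPHA_ON          # rest point of the ON dynamics
    samples, x, t = [x0], x0, 0.0
    for _ in range(n_cycles):
        # ON phase: x(t) = xinf + (x - xinf) * exp(-ALPHA_ON*(t - t0));
        # solve x(t_hit) = X_REF for the ON -> OFF switching time.
        t_hit = t + math.log((xinf - x) / (xinf - X_REF)) / ALPHA_ON
        # OFF phase: exponential decay from X_REF until the next clock
        # pulse (clock pulses arriving during the ON phase are ignored).
        s = math.floor(t_hit) + 1.0
        x = X_REF * math.exp(-ALPHA_OFF * (s - t_hit))
        t = s
        samples.append(x)
    return samples

xs = simulate_clock_samples(0.5, 100)
print(xs[-1])  # the clock-sampled current settles onto a fixed value
```

With these parameters every {\sc on}/{\sc off} cycle fits within one clock period, so the recorded samples are exactly the clock-time values of the trajectory, and their rapid settling to a fixed value is the stable periodic orbit discussed below.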
\subsection{Deterministic switching system}\label{SS:Det_sw_sys} As noted above, the state of our system at time $t \in [0,\infty)$ will be specified by a vector $z(t) \triangleq (x(t),y(t))$ taking values in $\mathscr{Z} \triangleq \mathbb{R} \times \{0,1\}$. We will assume that the dynamics of $x(t)$ when $y(t)=1$ ({\sc on} configuration) are governed by the {\sc ode} \begin{equation}\label{E:ode-on} \frac{dx}{dt}= - \alpha_\textsf{on} \thinspace x + \beta, \end{equation} while the dynamics of $x(t)$ when $y(t)=0$ ({\sc off} configuration) are described by \begin{equation}\label{E:ode-off} \frac{dx}{dt}= - \alpha_\textsf{off} \thinspace x. \end{equation} Here, $\beta$, $\alpha_\textsf{on}$, $\alpha_\textsf{off}$ are fixed positive parameters with $\beta$ representing the (rescaled) input voltage of an external power source, while $\alpha_\textsf{on}$ and $\alpha_\textsf{off}$ denote the (rescaled) resistances in the {\sc on} and {\sc off} configurations, respectively.\footnote{More precisely, $\beta = V_{\mathsf{in}}/L$, $\alpha_\textsf{on}=R/L$ and $\alpha_\textsf{off}=(R+r_d)/L$, where $V_{\mathsf{in}}$ is the input voltage, $R$ denotes the load resistance, and $r_d$ is the diode resistance \cite{BKYY}.} The switching between the {\sc on} and {\sc off} configurations is effected as follows. A reference level $x_\textsf{ref} \in \left(0,\beta/\alpha_\textsf{on} \right)$ is fixed. Suppose the system starts in the {\sc on} configuration, i.e., $y(0)=1$, with $x(0) \in (0,x_\textsf{ref})$. The current $x(t)$ increases according to \eqref{E:ode-on}, with $y(t)$ staying at 1, until $x(t)$ hits the level $x_{\sf ref}$. At this point, an {\sc on} $\to$ {\sc off} transition occurs: $y(t)$ jumps to 0 and $x(t)$ now evolves according to \eqref{E:ode-off}.
This continues until the next arrival of a periodic clock signal with period 1 (which arrives at times $n \in \mathbb{N}$) triggers an {\sc off} $\to$ {\sc on} transition: $y(t)$ jumps back to 1, $x(t)$ again evolves according to \eqref{E:ode-on}, and the cycle continues. Note that if a clock pulse arrives in the {\sc on} configuration, it is ignored. Of course, if one starts in the {\sc off} configuration, $x(t)$ evolves according to \eqref{E:ode-off} until the next clock pulse, at which point the system goes {\sc on}, and the subsequent dynamics are as described above. An important assumption in our analysis is that $x(t)$ is continuous across switching times. \subsection{Random perturbations}\label{SS:Stoch_sw_sys} We now suppose that the forcing term $\beta$ in \eqref{E:ode-on} is subjected to small white noise perturbations of size $\varepsilon$, $0 < \varepsilon \ll 1$; for the buck converter, this corresponds to small random fluctuations in the input voltage. In this setting, the state of the system at time $t \in [0,\infty)$ is given by a stochastic process $Z^\varepsilon_t \triangleq (X^\varepsilon_t,Y^\varepsilon_t)$ taking values in $\mathscr{Z}$. The dynamics of $X^\varepsilon_t$ in the {\sc on} configuration ($Y^\varepsilon_t=1$) are now governed by the {\sc sde} \begin{equation}\label{E:sde-on} dX^\varepsilon_t = (-\alpha_\textsf{on} X^\varepsilon_t + \beta)dt + \varepsilon dW_t, \end{equation} where $W_t$ is a standard one-dimensional Brownian motion, while the evolution of $X^\varepsilon_t$ in the {\sc off} state ($Y^\varepsilon_t=0$) is governed by the {\sc ode} \eqref{E:ode-off}, as before. The switching mechanism is similar to that in the unperturbed case, but with the stochastic processes $X^\varepsilon_t$, $Y^\varepsilon_t$ playing the roles of $x(t)$, $y(t)$.
Note, in particular, that the times for {\sc on} $\to$ {\sc off} transitions are given by {\it passage times} of $X^\varepsilon_t$ (governed by \eqref{E:sde-on}) to the level $x_\textsf{ref}$. As before, $X^\varepsilon_t$ is assumed to be continuous across switching times. \subsection{Explicit formulas}\label{SS:explicit_formulas} The foregoing discussion makes clear how $z(t)=(x(t),y(t))$ and $Z^\varepsilon_t = (X^\varepsilon_t,Y^\varepsilon_t)$ are to be obtained, once an initial condition $z_0=(x_0,y_0) \in \mathscr{Z}$ has been specified: the evolutions of $x(t)$ and $X^\varepsilon_t$ are given, respectively, by {\it concatenating} solutions to \eqref{E:ode-on} and \eqref{E:ode-off}, and solutions to \eqref{E:sde-on} and \eqref{E:ode-off}, at the respective switching times, maintaining continuity. The function $y(t)$ and the sample paths of $Y^\varepsilon_t$---which are piecewise constant and take values in $\{0,1\}$---will be assumed to be right-continuous. Below, we obtain expressions for $z(t)$ and $Z^\varepsilon_t$ starting from initial condition $z_0=(x_0,1)$ where $x_0 \in (0,x_\textsf{ref})$. We note that starting with $y_0=1$ entails no real loss of generality; indeed, as will become apparent, the expressions below can be easily modified to accommodate the case when $y_0=0$. In the sequel, we will use $1_A$ to denote the indicator function of the set $A$, and for real numbers $a,b$, we let $a \wedge b$ and $a \vee b$ denote the minimum and maximum of $a$ and $b$, respectively. \subsubsection{Solution of deterministic switching system} As indicated above, we fix an initial condition $z_0=(x_0,1)$ with $x_0 \in (0,x_\textsf{ref})$. Let $s_0 \triangleq 0$, and set $\mathsf{x}^{0,\textsf{off}}(t) \equiv x_0$. Next, define $\mathsf{x}^{1,\textsf{on}}(t) \triangleq 1_{[s_0,\infty)}(t) \cdot \left\{\beta/\alpha_\textsf{on} + \left(x_0 - \beta/\alpha_\textsf{on} \right) e^{-\alpha_\textsf{on} t}\right\}$ for $t \ge 0$.
Let $t_1 \triangleq \inf\{t > 0:\mathsf{x}^{1,\textsf{on}}(t)=x_\textsf{ref}\}$ be the first time that $\mathsf{x}^{1,\textsf{on}}(t)$ reaches level $x_\textsf{ref}$ and define $\mathsf{x}^{1,\textsf{off}}(t) \triangleq 1_{[t_1,\infty)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-t_1)}.$ Let $s_1 \triangleq \inf \{t>t_1:t \in \mathbb{Z}\}$ be the time of arrival of the next clock pulse. The solution of the deterministic switching system on the interval $[s_0,s_1)$ is now given by $\mathsf{x}^1(t) \triangleq \mathsf{x}^{1,\textsf{on}}(t) \cdot 1_{[s_0,t_1)}(t)+ \mathsf{x}^{1,\textsf{off}}(t) \cdot 1_{[t_1,s_1)}(t)$. In general, given the solution over $[s_0,s_{n-1})$, the solution $\mathsf{x}^{n}(t)$ over $[s_{n-1},s_n)$ is obtained as follows. We let \begin{equation}\label{E:det_state_pieces} \begin{aligned} \mathsf{x}^{n,\textsf{on}}(t) &\triangleq 1_{[s_{n-1},\infty)}(t) \cdot \left\{\frac{\beta}{\alpha_\textsf{on}} + \left(\mathsf{x}^{n-1,\textsf{off}}(s_{n-1}) - \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (t-s_{n-1})}\right\} \qquad \text{for $t \ge 0$,}\\ t_{n} &\triangleq \inf\{t>s_{n-1}:\mathsf{x}^{n,\textsf{on}}(t)=x_\textsf{ref}\},\\ \mathsf{x}^{n,\textsf{off}}(t) &\triangleq 1_{[t_{n},\infty)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-t_{n})} \qquad \text{for $t \ge 0$,}\\ s_{n} &\triangleq \inf\{t>t_{n}: t \in \mathbb{Z}\},\\ \mathsf{x}^{n}(t) &\triangleq \mathsf{x}^{n,\textsf{on}}(t) \cdot 1_{[s_{n-1},t_n)}(t) + \mathsf{x}^{n,\textsf{off}}(t) \cdot 1_{[t_{n},s_{n})}(t). \end{aligned} \end{equation} The evolution of the deterministic switching system over $[0,\infty)$ is now given by \begin{equation}\label{E:det_state_full} z(t) =(x(t),y(t)) \qquad \text{where} \quad x(t) \triangleq \sum_{n \ge 1} \mathsf{x}^{n}(t), \quad y(t) \triangleq \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t).
\end{equation} We have thus decomposed the evolution into a sequence of {\sc on}/{\sc off} cycles with the switching times $t_n$ and $s_n$ corresponding to the $n$-th {\sc on} $\to$ {\sc off} and {\sc off} $\to$ {\sc on} transitions, respectively; next, we have solved the {\sc ode} \eqref{E:ode-on} and \eqref{E:ode-off} between switching times, and then linked the pieces together while maintaining continuity of $x(t)$ at switching times. \subsubsection{Solution of stochastic switching system} We now provide a similar detailed construction of the stochastic process $Z^\varepsilon_t=(X^\varepsilon_t,Y^\varepsilon_t)$ starting from the same initial condition $z_0=(x_0,1)$. Let $W=\{W_t, \mathscr{F}_t:0 \le t < \infty\}$ be a standard one-dimensional Brownian motion on the probability space $(\Omega,\mathscr{F},\mathbb{P})$. We introduce, for each $n \in \mathbb{N}$, the processes $\mathsf{X}^{n,\varepsilon,\textsf{on}}_t$, $\mathsf{X}^{n,\varepsilon,\textsf{off}}_t$, $\mathsf{X}^{n,\varepsilon}_t$ and random switching times $\tau_n^\varepsilon$, $\sigma_n^\varepsilon$, which are defined recursively as follows. Set $\sigma_0^\varepsilon \triangleq 0$ and define $\mathsf{X}^{0,\varepsilon,\textsf{off}}_t \equiv x_0$. Now, let $\mathsf{X}^{1,\varepsilon,\textsf{on}}_t \triangleq 1_{[\sigma_0^\varepsilon,\infty)}(t) \cdot \left\{\beta/\alpha_\textsf{on} + \left(x_0- \beta/\alpha_\textsf{on} \right) e^{-\alpha_\textsf{on} t} + \varepsilon \int_{0}^t e^{-\alpha_\textsf{on}(t-u)} dW_u\right\}$ for $t \ge 0$, and let $\tau_1^\varepsilon \triangleq \inf\{t >0:\mathsf{X}^{1,\varepsilon,\textsf{on}}_t=x_\textsf{ref} \}$ be the first passage time of $\mathsf{X}^{1,\varepsilon,\textsf{on}}_t$ to level $x_\textsf{ref}$. 
We next define $\mathsf{X}^{1,\varepsilon,\textsf{off}}_t \triangleq 1_{[\tau_1^\varepsilon,\infty)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-\tau_1^\varepsilon)}$ and let $\sigma_1^\varepsilon \triangleq \inf\{t> \tau_1^\varepsilon:t \in \mathbb{Z}\}$ be the time of arrival of the next clock pulse. We now set $\mathsf{X}^{1,\varepsilon}_t \triangleq \mathsf{X}^{1,\varepsilon,\textsf{on}}_t \cdot 1_{[\sigma_0^\varepsilon,\tau_1^\varepsilon)}(t) + \mathsf{X}^{1,\varepsilon,\textsf{off}}_t \cdot 1_{[\tau_1^\varepsilon,\sigma_1^\varepsilon)}(t)$. To compactly express $\mathsf{X}^{n,\varepsilon,\textsf{on}}_t$ for general $n$, let $I=\{I_t:0 \le t < \infty\}$ be the process defined by $I_t \triangleq \int_0^t e^{\alpha_\textsf{on} u} dW_u$ for $t \ge 0$. Note that $I$ is a continuous, square-integrable Gaussian martingale. We now define, for each $n \in \mathbb{N}$, \begin{equation}\label{E:stoch_state_pieces} \begin{aligned} \mathsf{X}^{n,\varepsilon,\textsf{on}}_t &\triangleq 1_{[\sigma_{n-1}^\varepsilon,\infty)}(t) \cdot \left\{ \frac{\beta}{\alpha_\textsf{on}} + \left(\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}- \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (t-\sigma_{n-1}^\varepsilon)} + \varepsilon \thinspace e^{-\alpha_\textsf{on} t} \left(I_t - I_{t\wedge \sigma_{n-1}^\varepsilon} \right) \right\},\\ \tau_{n}^\varepsilon &\triangleq \inf\{t>\sigma_{n-1}^\varepsilon: \mathsf{X}^{n,\varepsilon,\textsf{on}}_t = x_\textsf{ref}\},\\ \mathsf{X}^{n,\varepsilon,\textsf{off}}_t &\triangleq 1_{[\tau_{n}^\varepsilon,\infty)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-\tau_{n}^\varepsilon)},\\ \sigma_{n}^\varepsilon &\triangleq \inf\{t > \tau_{n}^\varepsilon: t \in \mathbb{Z}\},\\ \mathsf{X}^{n,\varepsilon}_t &\triangleq \mathsf{X}^{n,\varepsilon,\textsf{on}}_t \cdot 1_{[\sigma_{n-1}^\varepsilon,\tau_{n}^\varepsilon)}(t) + \mathsf{X}^{n,\varepsilon,\textsf{off}}_t \cdot
1_{[\tau_{n}^\varepsilon,\sigma_{n}^\varepsilon)}(t). \end{aligned} \end{equation} Our stochastic process of interest is now given by \begin{equation}\label{E:stoch_state_full} Z^\varepsilon_t \triangleq (X^\varepsilon_t,Y^\varepsilon_t), \qquad \text{where} \qquad X^\varepsilon_t \triangleq \sum_{n \ge 1} \mathsf{X}^{n,\varepsilon}_t, \quad Y^\varepsilon_t \triangleq \sum_{n \ge 1} 1_{[\sigma_{n-1}^\varepsilon,\tau_n^\varepsilon)}(t). \end{equation} Once again, the evolution comprises a sequence of {\sc on}/{\sc off} cycles, with the quantities above admitting a natural interpretation which parallels the unperturbed (deterministic) case. \subsection{Stable periodic orbit}\label{SS:stable_periodic_orbit} We now describe the assumptions on the problem parameters that ensure the existence of a stable periodic solution to \eqref{E:det_state_pieces}, \eqref{E:det_state_full}. The argument proceeds by analysing the {\it stroboscopic} map \cite{BKYY}, which takes the system state at one clock instant to the state at the next. Map-based techniques are used extensively in analysing the switching dynamics of power electronic circuits; see also \cite{dBGGV,HDJ92}. \begin{assumption}\label{A:assumptions_real} Fix $x_\textsf{ref}>0$ and $0< \alpha_\textsf{on} < \log 2$. Select $\beta>0$ such that \begin{equation}\label{E:beta_assumption_real} 2 x_\textsf{ref} \thinspace \alpha_{\textsf{on}} < \beta < \left( \frac{e^{\alpha_\textsf{on}}}{e^{\alpha_\textsf{on}}-1}\right) x_\textsf{ref} \thinspace \alpha_\textsf{on}. \end{equation} Let $\alpha_\textsf{off}>0$ be such that \begin{equation}\label{E:alpha_off_assumption_real} \alpha_\textsf{on} < \alpha_\textsf{off} < \left(\frac{\beta/\alpha_\textsf{on} - x_\textsf{ref}}{x_\textsf{ref}}\right) \alpha_\textsf{on}.
\end{equation} \end{assumption} We now define a map $f:[0,x_\textsf{ref}] \to [0,x_\textsf{ref}]$ which maps $x_0 \in [0,x_\textsf{ref}]$ to the solution $x(t)$ at time 1, subject to the initial condition being $(x_0,1)$, i.e., $f:x_0 \mapsto x(1;x_0,1)$. We are interested in the case when $f$ is only {\it piecewise smooth}. Put another way, if we let $x_\textsf{border} \triangleq \beta/\alpha_\textsf{on} + \left( x_\textsf{ref} - \beta/\alpha_\textsf{on} \right) e^{\alpha_\textsf{on}}$ be the particular value of $x_0$ for which the corresponding $\mathsf{x}^{1,\textsf{on}}(t)$ satisfies $\mathsf{x}^{1,\textsf{on}}(1)=x_\textsf{ref}$, we would like $x_\textsf{border} \in (0,x_\textsf{ref})$. It is easily seen that the upper bound on $\beta$ in \eqref{E:beta_assumption_real} ensures that this is indeed the case. The map $f:x_0 \mapsto x(1;x_0,1)$ is then given by \begin{equation}\label{E:psmap} f(x) \triangleq \begin{cases} \frac{\beta}{\alpha_\textsf{on}} + \left( x-\frac{\beta}{\alpha_\textsf{on}} \right) e^{-\alpha_\textsf{on}} \qquad & \text{if $0 \le x \le x_\textsf{border}$,}\\ x_\textsf{ref} \thinspace e^{-\alpha_\textsf{off}} \left( \frac{\beta/\alpha_\textsf{on} - x}{\beta/\alpha_\textsf{on} - x_\textsf{ref}} \right)^{\alpha_\textsf{off}/\alpha_\textsf{on}} \qquad & \text{if $x_\textsf{border}<x\le x_\textsf{ref}$.} \end{cases} \end{equation} \begin{proposition}\label{P:fixed_point} Suppose Assumption \ref{A:assumptions_real} holds. Then, the mapping $f:[0,x_\textsf{ref}] \to [0,x_\textsf{ref}]$ has a unique fixed point $x^*$ which lies in the interval $(x_\textsf{border},x_\textsf{ref})$. Further, $|f^\prime(x^*)|<1$, implying that $x^*$ is a stable fixed point of the discrete-time dynamical system $x_n \mapsto f(x_n)$. \end{proposition} \begin{proof} Let $h(x) \triangleq f(x)-x$. Note that $h(0)=f(0)>0$ and $h(x_\textsf{ref})=f(x_\textsf{ref})-x_\textsf{ref}=x_\textsf{ref}(e^{-\alpha_\textsf{off}}-1)<0$.
Since $h$ is continuous, there exists $x^* \in (0,x_\textsf{ref})$ such that $h(x^*)=0$, i.e., $f(x^*)=x^*$. Since $f(x)>x$ for all $x \in [0,x_\textsf{border}]$, we must have $x^* \in (x_\textsf{border},x_\textsf{ref})$. Further, since $f$ is strictly decreasing on $(x_\textsf{border},x_\textsf{ref})$, it can have at most one fixed point. It is easily checked that for $x \in (x_\textsf{border},x_\textsf{ref})$, we have \begin{equation*} |f^\prime(x)| = \frac{\alpha_\textsf{off} \thinspace f(x)}{\beta - \alpha_\textsf{on} \thinspace x} \le \frac{\alpha_\textsf{off} \thinspace x_\textsf{ref}}{\beta - \alpha_\textsf{on} \thinspace x_\textsf{ref}} < 1, \end{equation*} where the last inequality follows from the upper bound on $\alpha_\textsf{off}$ in \eqref{E:alpha_off_assumption_real}. This proves stability of $x^*$. \end{proof} We can now pose our principal questions of interest. Suppose $z(\cdot)$, $Z^\varepsilon_\cdot$ are obtained from \eqref{E:det_state_full}, \eqref{E:stoch_state_full}, respectively, with initial conditions $z(0)=Z^\varepsilon_0=(x^*,1)$, where $x^*$ is as in Proposition \ref{P:fixed_point}. \begin{itemize} \item For any fixed $\mathsf{T} \in \mathbb{N}$, do the dynamics of $Z^\varepsilon_\cdot$ converge to those of $z(\cdot)$ in a suitable sense as $\varepsilon \searrow 0$? \item If yes, can the results be strengthened to the case when $\mathsf{T}=\mathsf{T}_\varepsilon \in \mathbb{N}$ grows to infinity, but ``not too fast'', as $\varepsilon \searrow 0$? \end{itemize} In the next section, we will show that both these questions can be answered in the affirmative, provided $\mathsf{T}_\varepsilon = \mathscr{O}(1/\varepsilon^\nu)$ with $0 \le \nu < 2/3$.
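Proposition \ref{P:fixed_point} also lends itself to a quick numerical check. The sketch below is illustrative only: the parameter values are hypothetical, chosen to satisfy Assumption \ref{A:assumptions_real}; it iterates the stroboscopic map $f$ and confirms that the iterates settle at a fixed point $x^*$ in $(x_\textsf{border},x_\textsf{ref})$ with $|f'(x^*)|<1$.

```python
import math

# Hypothetical parameter values satisfying Assumption 1: alpha_on < log 2,
# beta in (2*x_ref*alpha_on, x_ref*alpha_on*e^a/(e^a - 1)) = (1.0, 1.2708...),
# alpha_off in (alpha_on, (beta/alpha_on - x_ref)*alpha_on/x_ref) = (0.5, 0.7).
x_ref, alpha_on, beta, alpha_off = 1.0, 0.5, 1.2, 0.55

x_border = beta / alpha_on + (x_ref - beta / alpha_on) * math.exp(alpha_on)

def f(x):
    """Stroboscopic map: the state at the next clock pulse given state x at a pulse."""
    if x <= x_border:
        # the ON phase lasts the whole clock period
        return beta / alpha_on + (x - beta / alpha_on) * math.exp(-alpha_on)
    # ON until x_ref is reached, then OFF until the next pulse
    ratio = (beta / alpha_on - x) / (beta / alpha_on - x_ref)
    return x_ref * math.exp(-alpha_off) * ratio ** (alpha_off / alpha_on)

# Iterate the map; the orbit converges to the unique fixed point x*.
x = 0.0
for _ in range(200):
    x = f(x)
x_star = x

# |f'(x*)| on the second branch equals alpha_off * f(x*) / (beta - alpha_on * x*).
deriv_at_fp = alpha_off * f(x_star) / (beta - alpha_on * x_star)
```

The derivative bound $\alpha_\textsf{off} f(x)/(\beta - \alpha_\textsf{on} x) < 1$ is exactly the quantity appearing in the proof above.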
\section{Main Result}\label{S:MainResult} Recall that the state space for the evolution of $z(t)$ and $Z^\varepsilon_t$ is $\mathscr{Z}=\mathbb{R} \times \{0,1\}$, which inherits the metric $r(z_1,z_2) \triangleq \left\{|x_1-x_2|^2 + |y_1-y_2|^2 \right\}^{1/2}$ for $z_1=(x_1,y_1)$, $z_2=(x_2,y_2) \in \mathscr{Z}$, from $\mathbb{R}^2$. If $I$ is a closed subinterval of $[0,\infty)$, we let $D(I;\mathscr{Z})$ be the space of functions $\mathsf{z}:I \to \mathscr{Z}$ which are right-continuous with left limits. This space can be equipped with the Skorokhod metric $d_I$ \cite{ConvProbMeas,EK86}, which renders it complete and separable. If $\mathsf{z} \in D(I;\mathscr{Z})$ and $J$ is a closed subinterval of $I$, then the restriction of $\mathsf{z}$ to $J$ is an element of $D(J;\mathscr{Z})$ which, for simplicity of notation, will also be denoted by $\mathsf{z}$. For our switching systems of interest, we note that the function $z(t)$ in \eqref{E:det_state_full}, and the sample paths of the process $Z^\varepsilon_t$ in \eqref{E:stoch_state_full}, belong to $D([0,\infty);\mathscr{Z})$. Our goal here is to study the convergence, as $\varepsilon \searrow 0$, of $Z^\varepsilon_\cdot$ to $z(\cdot)$ in the space $D([0,\mathsf{T}_\varepsilon];\mathscr{Z})$ for time horizons $\mathsf{T}_\varepsilon = \mathscr{O}(1/\varepsilon^\nu)$ where $0 \le \nu <2/3$.
We start by defining the Skorokhod metric $d_I$ on the space $D(I;\mathscr{Z})$, where $I=[0,\mathsf{T}]$ for some $\mathsf{T} >0$.\footnote{See \cite{ConvProbMeas,EK86} for the case $I=[0,\infty)$.} Let $\tilde{\Lambda}_\mathsf{T}$ be the set of all strictly increasing continuous mappings from $[0,\mathsf{T}]$ onto itself,\footnote{Thus, we have $\lambda(0)=0$ and $\lambda(\mathsf{T})=\mathsf{T}$ for all $\lambda \in \tilde{\Lambda}_\mathsf{T}$.} and let $\Lambda_\mathsf{T}$ be the set of functions $\lambda \in \tilde{\Lambda}_\mathsf{T}$ for which \begin{equation*} \gamma_\mathsf{T}(\lambda) \triangleq \sup_{0 \le s < t \le \mathsf{T}} \left| \log \frac{\lambda(t)-\lambda(s)}{t-s} \right| < \infty. \end{equation*} For $\mathsf{z}_1$, $\mathsf{z}_2 \in D([0,\mathsf{T}];\mathscr{Z})$, we now define \begin{equation}\label{E:Sk_metric} d_{[0,\mathsf{T}]}(\mathsf{z}_1,\mathsf{z}_2) \triangleq \inf_{\lambda \in \Lambda_\mathsf{T}} \left\{ \gamma_\mathsf{T}(\lambda) \vee \sup_{0 \le t \le \mathsf{T}} r\left(\mathsf{z}_1(t),\mathsf{z}_2(\lambda(t))\right) \right\}. \end{equation} Note that if $d^u_{[0,\mathsf{T}]}(\mathsf{z}_1,\mathsf{z}_2) \triangleq \sup_{0 \le t \le \mathsf{T}} r(\mathsf{z}_1(t),\mathsf{z}_2(t))$ is the uniform metric on $D([0,\mathsf{T}];\mathscr{Z})$, then $d_{[0,\mathsf{T}]}(\mathsf{z}_1,\mathsf{z}_2) \le d^u_{[0,\mathsf{T}]}(\mathsf{z}_1,\mathsf{z}_2)$. Indeed, the latter corresponds to the specific choice $\lambda(t) \equiv t$. We now state our main result. \begin{theorem}[Main Theorem]\label{T:FLLN} Fix $\mathfrak{T} \in \mathbb{N}$ and $0 \le \nu<2/3$. For $\varepsilon \in (0,1)$, let $\mathsf{T}_\varepsilon \in \mathbb{N}$ be such that $\mathsf{T}_\varepsilon \le \mathfrak{T}/\varepsilon^\nu$.
Suppose $z(\cdot)$, $Z^\varepsilon_\cdot$ are given by \eqref{E:det_state_full}, \eqref{E:stoch_state_full}, respectively, with initial conditions $z(0)=Z^\varepsilon_0=(x^*,1)$, where $x^*$ is as in Proposition \ref{P:fixed_point}. Then, for any $p \in [1,\infty)$, we have that $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z) \to 0$ in $L^p$, i.e., \begin{equation}\label{E:conv-in-Lp} \lim_{\varepsilon \searrow 0} \mathbb{E} \left[\left(d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)\right)^p \right]=0. \end{equation} \end{theorem} \begin{remark}\label{R:conv-in-prob} Of course, Theorem \ref{T:FLLN} implies that $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)$ converges to $0$ in probability, i.e., for any $\vartheta>0$, we have $\lim_{\varepsilon \searrow 0} \mathbb{P} \left\{ d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z) \ge \vartheta \right\}=0$. \end{remark} To explain the intuition behind Theorem \ref{T:FLLN}, we note that when $\varepsilon \ll 1$, the likely behaviour of $X^\varepsilon_t$ is to closely track $x(t)$. Therefore, one expects that with high probability, we have $\tau_n^\varepsilon \approx t_n$, $\sigma^\varepsilon_n=s_n = n$ for each $1 \le n \le \mathsf{T}_\varepsilon$ (at least if $\mathsf{T}_\varepsilon$ is not too large). On this ``good'' event, a random time-deformation $\lambda$, for which $\gamma_{\mathsf{T}_\varepsilon}(\lambda)$ is small, can be used to align the jumps of $Y^\varepsilon_t$ and $y(t)$ so that $Y^\varepsilon_{\lambda(t)} \equiv y(t)$. Continuity now ensures that $X^\varepsilon_{\lambda(t)}$ is close to $x(t)$, and we get that $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)$ can be bounded above by a term which goes to zero as $\varepsilon \searrow 0$. It now remains to show that the probability of the complement of this event, i.e., the event where one or more of the $\tau_n^\varepsilon$ differ from $t_n$ by a significant amount, is small.
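The role of the time deformation is visible already in a toy computation: two unit steps whose jump times differ by $\delta$ are at uniform distance $1$, while a piecewise-linear $\lambda$ aligning the jumps yields a Skorokhod distance of order $\delta$. The sketch below uses illustrative data of this kind, not the processes of Theorem \ref{T:FLLN}.

```python
import math

# Two step functions with jump times 1 and 1 + delta on [0, T] (illustrative data).
T, delta = 4.0, 0.01

def y1(t):
    return 1.0 if t < 1.0 else 0.0

def y2(t):
    return 1.0 if t < 1.0 + delta else 0.0

def lam(t):
    """Piecewise-linear time deformation: lam(0) = 0, lam(1) = 1 + delta, lam(T) = T."""
    if t <= 1.0:
        return t * (1.0 + delta)
    return (1.0 + delta) + (t - 1.0) * (T - 1.0 - delta) / (T - 1.0)

grid = [i * T / 100000 for i in range(100001)]
d_uniform = max(abs(y1(t) - y2(t)) for t in grid)            # stays at 1 however small delta is
gamma = max(abs(math.log(1.0 + delta)),
            abs(math.log((T - 1.0 - delta) / (T - 1.0))))    # sup |log slope| of lam
d_aligned = max(abs(y1(t) - y2(lam(t))) for t in grid)       # 0: the jumps are aligned
sk_upper = max(gamma, d_aligned)    # lam is one admissible choice, so this bounds the Skorokhod distance
```

Since the infimum in the Skorokhod metric runs over all admissible deformations, the candidate `lam` gives an upper bound of order $\delta$, whereas the uniform metric cannot see the near-coincidence of the jump times.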
The proof is organised as follows. First, we introduce an additional scale $\delta \searrow 0$ to quantify proximity of $\tau^\varepsilon_n$ to $t_n$; we will later take $\delta = \varepsilon^\varsigma$ for suitable $\varsigma>0$. Now, for $\varepsilon,\delta \in (0,1)$, set $G_0^{\varepsilon,\delta} \triangleq \Omega$, and for $n \ge 1$, define \begin{equation*} \begin{aligned} G_n^{\varepsilon,\delta} & \triangleq \{\omega \in G_{n-1}^{\varepsilon,\delta}: |\tau_n^\varepsilon(\omega)-t_n| \le \delta\},\\ B_n^{\varepsilon,\delta} & \triangleq \{\omega \in G_{n-1}^{\varepsilon,\delta}: |\tau_n^\varepsilon(\omega)-t_n| > \delta\} = G_{n-1}^{\varepsilon,\delta} \setminus G_n^{\varepsilon,\delta}. \end{aligned} \end{equation*} Note that the $G_n^{\varepsilon,\delta}$'s are decreasing, i.e., $G_0^{\varepsilon,\delta} \supset G_1^{\varepsilon,\delta} \supset \dots$, and that the $B_n^{\varepsilon,\delta}$'s are pairwise disjoint. Consequently, $ \cap_{n=1}^{\mathsf{T}_\varepsilon} G_n^{\varepsilon,\delta}=G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta} $ and $\mathbb{P} \left( \cup_{n=1}^{\mathsf{T}_\varepsilon} B_n^{\varepsilon,\delta} \right) = \sum_{n=1}^{\mathsf{T}_\varepsilon} \mathbb{P} \left( B_n^{\varepsilon,\delta} \right)$. We now outline the principal steps in proving Theorem \ref{T:FLLN}. First, in Proposition \ref{P:pth_power_estimate}, we derive a path-wise estimate for $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)$ and its positive powers. This result assures us that our quantity of interest is indeed small on the event $G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$ and of order $1$ on its complement. Then, in Proposition \ref{P:bn}, we obtain an upper bound on $\mathbb{P}\left( B_n^{\varepsilon,\delta} \right)$ in terms of the tail of the standard normal distribution. These two propositions enable us to complete the proof of Theorem \ref{T:FLLN}.
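The bookkeeping above can be exercised on synthetic data: in the sketch below, the switching-time perturbations are drawn from an arbitrary hypothetical distribution (not from the model), each sample path lands either in $G_{\mathsf{T}}^{\varepsilon,\delta}$ or in exactly one $B_n^{\varepsilon,\delta}$, and the probabilities add up accordingly.

```python
import random

# Synthetic experiment: for each "path", draw perturbed switching times
# tau_n = t_n + noise and record the first index n with |tau_n - t_n| > delta.
# first_bad = 0 encodes membership in G_T (no bad index up to T).
rng = random.Random(7)
T_cycles, delta, n_paths = 10, 0.05, 20000

def first_bad_index():
    for n in range(1, T_cycles + 1):
        if abs(rng.gauss(0.0, 0.05)) > delta:   # plays the role of |tau_n - t_n| > delta
            return n
    return 0

counts = [0] * (T_cycles + 1)
for _ in range(n_paths):
    counts[first_bad_index()] += 1

# The B_n are pairwise disjoint and their union is the complement of G_T,
# so the empirical probabilities below sum to one.
prob_G_T = counts[0] / n_paths
prob_B = [c / n_paths for c in counts[1:]]
```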
Both Propositions \ref{P:pth_power_estimate} and \ref{P:bn} are proved through a series of lemmas, whose proofs are deferred to Section \ref{S:Proofs}. We start by introducing some notation. Let $\mathsf{t}_\textsf{on} \triangleq t_n-s_{n-1}$ and $\mathsf{t}_\textsf{off} \triangleq s_n-t_n$ denote the fractions of time in each clock interval $[n-1,n]$ for which the deterministic system is in the {\sc on} and {\sc off} states respectively, and let $\mathsf{t}_\mathsf{min} \triangleq \mathsf{t}_\textsf{on} \wedge \mathsf{t}_\textsf{off}$. \begin{proposition}\label{P:pth_power_estimate} For every $p>0$, there exists a constant $C_p>0$ such that for all $\varepsilon,\delta \in (0,1)$, $\mathsf{T}_\varepsilon \in \mathbb{N}$ satisfying $0 <\delta \le \mathsf{t}_\mathsf{min}/(4\mathsf{T}_\varepsilon)$, we have \begin{equation}\label{E:pth_power_estimate} (d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z))^p \le C_p \left\{ (\mathsf{T}_\varepsilon \delta)^p + \delta^p + 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} + \varepsilon^p \thinspace \mathsf{T}_\varepsilon^p \sup_{0 \le t \le \mathsf{T}_\varepsilon} |W_t|^p \right\}.
\end{equation} \end{proposition} To prove Proposition \ref{P:pth_power_estimate}, we will employ the (random) time-deformation $\lambda^\varepsilon:\Omega \to \Lambda_{\mathsf{T}_\varepsilon}$ defined by \begin{multline}\label{E:time_deformation_a} \lambda^\varepsilon_t (\omega) \triangleq \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{ s_{n-1} + \left( \frac{\tau_n^\varepsilon(\omega)-s_{n-1}}{t_n-s_{n-1}}\right) (t-s_{n-1}) \right\}\\ + \sum_{n \ge 1} 1_{[t_n,s_n)}(t) \cdot \left\{ \tau_n^\varepsilon(\omega) + \left( \frac{s_n-\tau_n^\varepsilon(\omega)}{s_n-t_n}\right) (t-t_n) \right\} \qquad \text{if $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$,} \end{multline} and \begin{equation}\label{E:time_deformation_b} \lambda^\varepsilon_t (\omega) \triangleq t \qquad \text{if $\omega \in \Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$.} \end{equation} Note that in actuality, $\lambda^\varepsilon=\lambda^{\varepsilon,\delta}$. However, we have suppressed the $\delta$-dependence to reduce clutter and also because we will eventually take $\delta=\delta(\varepsilon)$. The first step is to show that $\gamma_{\mathsf{T}_\varepsilon}(\lambda^\varepsilon)$ is small; this is accomplished in Lemma \ref{L:time_deformation_norm} below. Next, in Lemma \ref{L:full_state_close}, we estimate $\sup_{0 \le t \le \mathsf{T}_\varepsilon} | X^\varepsilon_{\lambda^\varepsilon_t}-x(t) |^p$ and $\sup_{0 \le t \le \mathsf{T}_\varepsilon} | Y^\varepsilon_{\lambda^\varepsilon_t}-y(t) |^p$ for $p>0$. \begin{lemma}\label{L:time_deformation_norm} Let $\mathsf{T}_\varepsilon \in \mathbb{N}$.
If $0 <\delta \le \mathsf{t}_\mathsf{min}/(4\mathsf{T}_\varepsilon)$, then for each $\omega \in \Omega$, we have \begin{equation}\label{E:time_deformation_norm} \gamma_{\mathsf{T}_\varepsilon}(\lambda^\varepsilon(\omega)) \le \frac{4\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}}. \end{equation} \end{lemma} \begin{lemma}\label{L:full_state_close} For every $p>0$, there exists a constant $c_p>0$ such that for all $\varepsilon,\delta \in (0,1)$, we have \begin{equation}\label{E:full_state_close_ms} \begin{aligned} \sup_{0 \le t \le \mathsf{T}_\varepsilon} | X^\varepsilon_{\lambda^\varepsilon_t}-x(t) |^p & \le c_p \left\{ \delta^p\thinspace 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} + 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} + \varepsilon^p \thinspace \mathsf{T}_\varepsilon^p \sup_{0 \le t \le \mathsf{T}_\varepsilon} |W_t|^p \right\},\\ \sup_{0 \le t \le \mathsf{T}_\varepsilon} |Y^\varepsilon_{\lambda^\varepsilon_t}-y(t)|^p & \le 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}. \end{aligned} \end{equation} \end{lemma} Lemmas \ref{L:time_deformation_norm} and \ref{L:full_state_close} are proved in Section \ref{S:Proofs}. We now have \begin{proof}[Proof of Proposition \ref{P:pth_power_estimate}] Let $\lambda^\varepsilon \in \Lambda_{\mathsf{T}_\varepsilon}$ be as in \eqref{E:time_deformation_a}, \eqref{E:time_deformation_b}. It is easily seen from \eqref{E:Sk_metric} that \\ $(d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z))^p \le 3^p \left\{ (\gamma_{\mathsf{T}_\varepsilon}(\lambda^\varepsilon))^p + \sup_{0 \le t \le \mathsf{T}_\varepsilon} |X^\varepsilon_{\lambda^\varepsilon_t}-x(t)|^p + \sup_{0 \le t \le \mathsf{T}_\varepsilon} |Y^\varepsilon_{\lambda^\varepsilon_t}-y(t)|^p \right\}$. The claim \eqref{E:pth_power_estimate} now follows from Lemmas \ref{L:time_deformation_norm}, \ref{L:full_state_close}.
\end{proof} We now estimate $\mathbb{P}\left( B_n^{\varepsilon,\delta} \right)$. Let $\mathfrak{n}tail$ denote the right tail of the standard normal distribution, i.e., \begin{equation*} \mathfrak{n}tail(x) \triangleq \frac{1}{\sqrt{2 \pi}} \int_x^\infty e^{-t^2/2} dt \qquad \text{for $x \ge 0$.} \end{equation*} A simple integration by parts yields \begin{equation}\label{E:gaussian_tail_estimate_1} \mathfrak{n}tail(x) \le \frac{3}{\sqrt{2\pi}} x^2 e^{-x^2/2}, \qquad \text{for $x \ge 1$.} \end{equation} \begin{proposition}\label{P:bn} There exist $\delta_0 \in (0,1)$ and $K>0$ such that whenever $0<\delta<\delta_0$, $\varepsilon \in (0,1)$, and $n \ge 1$, we have \begin{equation}\label{E:prob_bn} \mathbb{P}\left( B_n^{\varepsilon,\delta} \right) \le 3 \mathfrak{n}tail \left( K \frac{\delta}{\varepsilon} \right). \end{equation} \end{proposition} To prove this proposition, we write the event $B_n^{\varepsilon,\delta}$ as the disjoint union $B_n^{\varepsilon,\delta,-}\cup B_n^{\varepsilon,\delta,+}$ where $B_n^{\varepsilon,\delta,-} \triangleq \{\omega \in G_{n-1}^{\varepsilon,\delta}: \tau_n^\varepsilon(\omega) < t_n - \delta \}$ and $B_n^{\varepsilon,\delta,+} \triangleq \{\omega \in G_{n-1}^{\varepsilon,\delta}: \tau_n^\varepsilon(\omega) > t_n + \delta \}$. The quantities $\mathbb{P}( B_n^{\varepsilon,\delta,-})$ and $\mathbb{P}( B_n^{\varepsilon,\delta,+})$ are now estimated separately in Lemmas \ref{L:bnminus} and \ref{L:bnplus} below, whose proofs are given in Section \ref{S:Proofs}. \begin{lemma}\label{L:bnminus} There exists $K_->0$ such that for any $n \ge 1$ and $\varepsilon,\delta \in (0,1)$, we have \begin{equation}\label{E:prob_bnminus} \mathbb{P}\left( B_n^{\varepsilon,\delta,-} \right) \le 2 \mathfrak{n}tail \left( K_- \frac{\delta}{\varepsilon} \right).
\end{equation} \end{lemma} \begin{lemma}\label{L:bnplus} There exist $\delta_+ \in (0,1)$ and $K_+>0$ such that whenever $0 < \delta < \delta_+$, $\varepsilon \in (0,1)$, and $n \ge 1$, we have \begin{equation}\label{E:prob_bnplus} \mathbb{P}\left( B_n^{\varepsilon,\delta,+} \right) \le \mathfrak{n}tail \left( K_+ \frac{\delta}{\varepsilon} \right). \end{equation} \end{lemma} We now provide \begin{proof}[Proof of Proposition \ref{P:bn}] From Lemmas \ref{L:bnminus} and \ref{L:bnplus}, we take $K \triangleq K_-\wedge K_+$, $\delta_0 \triangleq \delta_+$. Now, noting that $\mathfrak{n}tail$ is strictly decreasing, we use \eqref{E:prob_bnminus} and \eqref{E:prob_bnplus} to get \eqref{E:prob_bn}. \end{proof} Finally, we have \begin{proof}[Proof of Theorem \ref{T:FLLN}] Fix $p \in [1,\infty)$ and let $\delta \triangleq \varepsilon^\varsigma$ where $\nu<\varsigma<1$. By the Burkholder-Davis-Gundy inequalities \cite[Theorem 3.3.28]{KS91}, there exists a universal positive constant $k_{p/2}$ such that $\mathbb{E} \left[ \sup_{0 \le t \le \mathsf{T}_\varepsilon} | W_t |^p \right] \le k_{p/2} \mathsf{T}_\varepsilon^{p/2}$. Noting that $\mathbb{E}[1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}]=\sum_{n=1}^{\mathsf{T}_\varepsilon} \mathbb{P}(B_n^{\varepsilon,\delta})$, we see from Propositions \ref{P:pth_power_estimate} and \ref{P:bn} that for $\varepsilon \in (0,1)$ small enough,\footnote{One can check that $0 < \varepsilon < \left(\frac{\mathsf{t}_\mathsf{min}}{4\mathfrak{T}}\right)^{1/(\varsigma-\nu)}\wedge \delta_{0}^{1/\varsigma}$ will suffice.} \begin{equation*} \mathbb{E}[(d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z))^p] \le C_p \left\{ \mathfrak{T}^p \varepsilon^{(\varsigma-\nu)p} + \varepsilon^{\varsigma p} + \frac{3 \mathfrak{T}}{\varepsilon^\nu} \mathfrak{n}tail \left(K \frac{1}{\varepsilon^{1-\varsigma}}\right) + k_{p/2} \mathfrak{T}^{3p/2} \varepsilon^{(1-3\nu/2)p} \right\}.
\end{equation*} Since $0 \le \nu<2/3$ and $\nu<\varsigma<1$, straightforward calculations using \eqref{E:gaussian_tail_estimate_1} yield \eqref{E:conv-in-Lp}. \end{proof} \section{Proofs of Lemmas}\label{S:Proofs} \begin{proof}[Proof of Lemma \ref{L:time_deformation_norm}] For $\omega \in \Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, we have $\gamma_{\mathsf{T}_\varepsilon}(\lambda^\varepsilon(\omega))=0$, implying \eqref{E:time_deformation_norm}. So, fix $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$. Note that the function $\lambda^\varepsilon_\cdot(\omega)$ is piecewise-linear with ``corners'' at $0=s_0<t_1<s_1<\dots<t_{\mathsf{T}_\varepsilon}<s_{\mathsf{T}_\varepsilon}=\mathsf{T}_\varepsilon$. Since $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, we have $\tau_n^\varepsilon(\omega) \in [t_n-\delta,t_n+ \delta]$ for $1 \le n \le \mathsf{T}_\varepsilon$. Recalling that $\lambda^\varepsilon_{t_n}(\omega) = \tau_n^\varepsilon(\omega)$, it is now easy to see that \begin{equation}\label{E:time_def_aux_1} \max_{1 \le n \le \mathsf{T}_\varepsilon} \left\{ \left| \frac{\lambda^\varepsilon_{t_n}(\omega)-\lambda^\varepsilon_{s_{n-1}}(\omega)}{t_n-s_{n-1}} - 1 \right| \vee \left| \frac{\lambda^\varepsilon_{s_n}(\omega)-\lambda^\varepsilon_{t_n}(\omega)}{s_n-t_n} - 1 \right| \right\} \le \frac{\delta}{\mathsf{t}_\mathsf{min}}. \end{equation} Now, let $0 \le s < t \le \mathsf{T}_\varepsilon$ and let $\{u_0,u_1,\dots,u_k\}$ be a sequential enumeration of all corners starting just to the left of $s$ and ending just to the right of $t$, i.e., $u_0 \le s < u_1 < \dots < u_{k-1} < t \le u_k$.
By the triangle inequality, we have \begin{multline*} |\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_s(\omega) - (t-s)| \le |t-u_{k-1}| \left| \frac{\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_{u_{k-1}}(\omega)}{t-u_{k-1}} - 1 \right| \\+ \sum_{i=2}^{k-1} |u_i-u_{i-1}| \left| \frac{\lambda^\varepsilon_{u_i}(\omega)-\lambda^\varepsilon_{u_{i-1}}(\omega)}{u_i-u_{i-1}} - 1 \right| + |u_1-s| \left| \frac{\lambda^\varepsilon_{u_1}(\omega)-\lambda^\varepsilon_s(\omega)}{u_1-s} - 1 \right|. \end{multline*} Noting that $|t-u_{k-1}|, |u_{k-1}-u_{k-2}|,\dots,|u_1-s|$ are less than $|t-s|$, recalling the piecewise-linear nature of $\lambda^\varepsilon_t(\omega)$, and using \eqref{E:time_def_aux_1}, we get \begin{equation*} \left| \frac{\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_s(\omega)}{t-s} - 1 \right| \le \sum_{i=1}^{k} \left| \frac{\lambda^\varepsilon_{u_i}(\omega)-\lambda^\varepsilon_{u_{i-1}}(\omega)}{u_i-u_{i-1}} - 1 \right| \le \frac{2\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}}. \end{equation*} Thus, \begin{equation*} \log \left( 1 - \frac{2\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}} \right) \le \log \left( \frac{\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_s(\omega)}{t-s} \right) \le \log \left( 1+ \frac{2\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}} \right). \end{equation*} Using the estimate $|\log(1\pm x)| \le 2|x|$ for $|x| \le 1/2$ \cite[pp. 127]{ConvProbMeas}, we get that for $0<\delta \le \mathsf{t}_\mathsf{min}/(4\mathsf{T}_\varepsilon)$, we have \begin{equation*} -\frac{4\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}} \le \log \left( \frac{\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_s(\omega)}{t-s} \right) \le \frac{4\mathsf{T}_\varepsilon\delta}{\mathsf{t}_\mathsf{min}}.
\end{equation*} Since $s,t$ are arbitrary, \eqref{E:time_deformation_norm} follows. \end{proof} \begin{proof}[Proof of Lemma \ref{L:full_state_close}] We first bound $\sup_{0 \le t \le \mathsf{T}_\varepsilon} | X^\varepsilon_{\lambda^\varepsilon_t}-x(t) |^p$. Write $X^\varepsilon_{\lambda^\varepsilon_t}-x(t) = \sum_{i=1}^3 \mathsf{L}^{i,\varepsilon}_t$ where \begin{equation*} \begin{aligned} \mathsf{L}^{1,\varepsilon}_t & \triangleq \sum_{n \ge 1} 1_{[\sigma_{n-1}^\varepsilon,\tau_n^\varepsilon)}(\lambda^\varepsilon_t) \cdot \left\{ \frac{\beta}{\alpha_\textsf{on}} + \left(\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}- \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (\lambda^\varepsilon_t-\sigma_{n-1}^\varepsilon)} \right\}\\ & \phantom{\triangleq} - \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{\frac{\beta}{\alpha_\textsf{on}} + \left(\mathsf{x}^{n-1,\textsf{off}}(s_{n-1}) - \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (t-s_{n-1})}\right\},\\ \mathsf{L}^{2,\varepsilon}_t & \triangleq \sum_{n \ge 1} 1_{[\tau_{n}^\varepsilon,\sigma^\varepsilon_n)}(\lambda^\varepsilon_t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(\lambda^\varepsilon_t-\tau_{n}^\varepsilon)} - \sum_{n \ge 1} 1_{[t_{n},s_n)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-t_{n})}, \\ \mathsf{L}^{3,\varepsilon}_t & \triangleq \sum_{n \ge 1} 1_{[\sigma_{n-1}^\varepsilon,\tau_n^\varepsilon)}(\lambda^\varepsilon_t) \cdot \varepsilon \thinspace e^{-\alpha_\textsf{on} \lambda^\varepsilon_t} \left(I_{\lambda^\varepsilon_t} - I_{\lambda^\varepsilon_t \wedge \sigma_{n-1}^\varepsilon} \right). \end{aligned} \end{equation*} We start by noting that $\mathsf{L}^{1,\varepsilon}_t(\omega)$, $\mathsf{L}^{2,\varepsilon}_t(\omega)$ are bounded for all $(t,\omega) \in [0,\mathsf{T}_\varepsilon]\times\Omega$.
We will now show that for $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, $|\mathsf{L}^{1,\varepsilon}_t|$, $|\mathsf{L}^{2,\varepsilon}_t|$ are in fact of order $\delta$. We note that if $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, then $\lambda^\varepsilon_t(\omega) \in [\sigma^\varepsilon_{n-1}(\omega),\tau^\varepsilon_n(\omega))$ iff $t \in [s_{n-1},t_n)$ for all $1 \le n \le \mathsf{T}_\varepsilon$. Thus, we have for each $t \in [0,\mathsf{T}_\varepsilon]$, \begin{equation*} \begin{aligned} 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \mathsf{L}^{1,\varepsilon}_t &= 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} \cdot \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{ \left(\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}- \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (\lambda^\varepsilon_t-\sigma_{n-1}^\varepsilon)}\right.\\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left. - \left(\mathsf{x}^{n-1,\textsf{off}}(s_{n-1}) - \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (t-s_{n-1})} \right\}\\ &= 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} \cdot \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{ \left(\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon} - \mathsf{x}^{n-1,\textsf{off}}(s_{n-1})\right) e^{-\alpha_\textsf{on}(\lambda^\varepsilon_t-\sigma^\varepsilon_{n-1})} \right.\\ &\left. \qquad \qquad \qquad \qquad \qquad + \left(\mathsf{x}^{n-1,\textsf{off}}(s_{n-1})-\frac{\beta}{\alpha_\textsf{on}}\right) \left(e^{-\alpha_\textsf{on}(\lambda^\varepsilon_t-\sigma^\varepsilon_{n-1})} - e^{-\alpha_\textsf{on}(t-s_{n-1})}\right) \right\}.
\end{aligned} \end{equation*} It is now easy to see that \begin{equation*} \left| 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \mathsf{L}^{1,\varepsilon}_t \right| \le 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} \cdot \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{ x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace \delta + \left(\frac{\beta}{\alpha_\textsf{on}}-x^*\right) \alpha_\textsf{on} \delta\right\}. \end{equation*} Hence, there exists $K_1>0$ such that for all $t \in [0,\mathsf{T}_\varepsilon]$, \begin{equation}\label{E:cont_state_close_ms_1} |\mathsf{L}^{1,\varepsilon}_t(\omega)| \le 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}(\omega) K_1 \delta + 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}(\omega) K_1. \end{equation} Turning to $\mathsf{L}^{2,\varepsilon}_t$, we note that \begin{equation*} 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \mathsf{L}^{2,\varepsilon}_t = 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \sum_{n \ge 1} 1_{[t_{n},s_n)}(t)\cdot x_\textsf{ref} \medspace \left\{ e^{-\alpha_\textsf{off}(\lambda^\varepsilon_t-\tau_{n}^\varepsilon)} - e^{-\alpha_\textsf{off}(t-t_{n})}\right\}, \end{equation*} which gives \begin{equation*} \left|1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \mathsf{L}^{2,\varepsilon}_t \right| \le 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \sum_{n \ge 1} 1_{[t_{n},s_n)}(t) \cdot x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace |\lambda^\varepsilon_t-\tau^\varepsilon_n-t+t_n| \le 2 x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace \delta.
\end{equation*} Hence, there exists $K_2>0$ such that for all $t \in [0,\mathsf{T}_\varepsilon]$, \begin{equation}\label{E:cont_state_close_ms_2} |\mathsf{L}^{2,\varepsilon}_t(\omega)| \le 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}(\omega) K_2 \delta + 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}(\omega) K_2. \end{equation} Turning now to $\mathsf{L}^{3,\varepsilon}_t$, we use integration by parts to get $I_t = e^{\alpha_\textsf{on} t} W_t - \alpha_\textsf{on} \int_0^t W_s e^{\alpha_\textsf{on} s}ds$. It now follows that for any $t \in [0,\mathsf{T}_\varepsilon]$, \begin{equation*} \mathsf{L}^{3,\varepsilon}_t = \varepsilon \thinspace \sum_{n \ge 1} 1_{[\sigma_{n-1}^\varepsilon,\tau_n^\varepsilon)}(\lambda^\varepsilon_t) \cdot\left[ W_{\lambda^\varepsilon_t} - e^{-\alpha_\textsf{on} [\lambda^\varepsilon_t - \lambda^\varepsilon_t \wedge \sigma_{n-1}^\varepsilon]} W_{\lambda^\varepsilon_t \wedge \sigma_{n-1}^\varepsilon} -\alpha_\textsf{on} e^{-\alpha_\textsf{on} \lambda_t^\varepsilon} \int_{\lambda_t^\varepsilon\wedge\sigma^\varepsilon_{n-1}}^{\lambda^\varepsilon_t} W_s e^{\alpha_\textsf{on} s} ds \right], \end{equation*} whence $|\mathsf{L}^{3,\varepsilon}_t | \le \varepsilon \left( 2 + \alpha_\textsf{on}\right) \mathsf{T}_\varepsilon \sup_{0 \le t \le \mathsf{T}_\varepsilon} |W_t|$. Recalling \eqref{E:cont_state_close_ms_1} and \eqref{E:cont_state_close_ms_2}, we see that for $p>0$, there exists $c_p>0$ such that the first line of \eqref{E:full_state_close_ms} holds. To bound $\sup_{0 \le t \le \mathsf{T}_\varepsilon} | Y^\varepsilon_{\lambda^\varepsilon_t}-y(t) |^p$, note that for $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, we have $Y^\varepsilon_{\lambda^\varepsilon_t}=y(t)$ for all $t \in [0,\mathsf{T}_\varepsilon]$.
Consequently, $\sup_{0 \le t \le \mathsf{T}_\varepsilon} |Y^\varepsilon_{\lambda^\varepsilon_t}-y(t)|^p = 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} \cdot \sup_{0 \le t \le \mathsf{T}_\varepsilon} |Y^\varepsilon_{\lambda^\varepsilon_t}-y(t)|^p \le 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}$. \end{proof} To state and prove Lemmas \ref{L:bnminus} and \ref{L:bnplus}, we will need some notation. For $n \in \mathbb{N}$, $\xi \in (0,x_\textsf{ref})$, we let $a_{n}(t;\xi) \triangleq 1_{[n-1,\infty)}(t) \cdot \left\{\beta/\alpha_\textsf{on} + \left( \xi - \beta/\alpha_\textsf{on} \right) e^{-\alpha_\textsf{on}(t-(n-1))}\right\}$. It is now easily checked that for $t \ge n-1$ and $\varkappa \in (0,1)$ small enough, \begin{equation}\label{E:an_envelope} a_n(t;x^*)-\varkappa \le a_n(t;x^*-\varkappa) \le a_n(t;x^*) \le a_n(t;x^* + \varkappa) \le a_n(t;x^*) + \varkappa. \end{equation} We will also find it helpful to express the continuous square-integrable martingale $I_t = \int_0^t e^{\alpha_\textsf{on} u}dW_u$ as a time-changed Brownian motion. The quadratic variation process of $I$ given by $\langle I \rangle_t = \int_0^t e^{2 \alpha_\textsf{on} u} du = (e^{2 \alpha_\textsf{on} t} - 1)/(2 \alpha_\textsf{on})$ for $t \ge 0$ is strictly increasing with $\lim_{t \to \infty} \langle I \rangle_t=\infty$. It therefore follows \cite[Theorem 3.4.6]{KS91} that the process $V = \{ V_s, \mathscr{G}_s:0 \le s < \infty\}$ defined by $V_s \triangleq I_{T(s)}$, $\mathscr{G}_s \triangleq \mathscr{F}_{T(s)}$ where $T(s) \triangleq \inf\{t \ge 0:\langle I \rangle_t > s\}= [\log \left( 1 + 2 \alpha_\textsf{on} s \right)]/(2 \alpha_\textsf{on})$, is a standard one-dimensional Brownian motion, and further, that $I_t = V_{\langle I \rangle_t}$ for all $t \ge 0$.
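For concreteness, the closed form for $T(s)$ quoted above is simply the inverse of $t \mapsto \langle I \rangle_t$; solving $\langle I \rangle_{T(s)} = s$ gives
\begin{equation*}
\frac{e^{2 \alpha_\textsf{on} T(s)} - 1}{2 \alpha_\textsf{on}} = s
\;\Longleftrightarrow\; e^{2 \alpha_\textsf{on} T(s)} = 1 + 2 \alpha_\textsf{on} s
\;\Longleftrightarrow\; T(s) = \frac{\log \left( 1 + 2 \alpha_\textsf{on} s \right)}{2 \alpha_\textsf{on}},
\end{equation*}
so that $T(\langle I \rangle_t) = t$ for every $t \ge 0$.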
Below, we will use the fact that if $\omega \in G_{n-1}^{\varepsilon,\delta}$ (where $n \in \mathbb{N}$), then $\sigma_{n-1}^\varepsilon=n-1$, and $|\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}(\omega)-x^*| \le x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace |\tau_{n-1}^\varepsilon(\omega)-t_{n-1}| \le x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace \delta$. Set \begin{equation}\label{E:mu_definition} \mu \triangleq \beta - (\alpha_\textsf{on} + \alpha_\textsf{off}) x_\textsf{ref}, \qquad \varkappa \triangleq x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace \delta. \end{equation} Note that, on account of the upper bound on $\alpha_\textsf{off}$ in \eqref{E:alpha_off_assumption_real}, we have $\mu>0$. \begin{proof}[Proof of Lemma \ref{L:bnminus}] We start by noting that for $n \ge 1$, \begin{equation*} \mathsf{X}^{n,\varepsilon,\textsf{on}}_t \cdot 1_{G_{n-1}^{\varepsilon,\delta}} (\omega) = 1_{G_{n-1}^{\varepsilon,\delta}} (\omega) \cdot 1_{[n-1,\infty)}(t) \cdot \left\{ a_n\left( t; \mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon} \right) + \varepsilon e^{-\alpha_\textsf{on} t} \left( I_t - I_{n-1} \right) \right\}.
\end{equation*} Using the fact that for $\omega \in G_{n-1}^{\varepsilon,\delta}$, we have $\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}(\omega) \in [x^*-\varkappa,x^*+\varkappa]$, together with \eqref{E:an_envelope}, we get \begin{multline*} B_n^{\varepsilon,\delta,-} \subset \left\{ \omega \in G_{n-1}^{\varepsilon,\delta}: \sup_{t \in [n-1,t_n-\delta]} \left( a_n\left( t; \mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon} \right) + \varepsilon e^{-\alpha_\textsf{on} t} \left( I_t - I_{n-1} \right) \right) \ge x_\textsf{ref} \right\}\\ \subset \left\{ \omega \in G_{n-1}^{\varepsilon,\delta}: \sup_{t \in [n-1,t_n-\delta]} e^{-\alpha_\textsf{on} t} \left( I_t - I_{n-1} \right) \ge \frac{x_\textsf{ref} - a_n(t_n-\delta; x^*) - \varkappa }{\varepsilon} \right\}\\ \subset \left\{ \omega \in \Omega: \sup_{t \in [n-1,t_n-\delta]} e^{-\alpha_\textsf{on} t} \left( I_t - I_{n-1} \right) \ge \frac{\mu \delta}{\varepsilon} \right\} \end{multline*} where the latter set inclusion uses the fact that $x_\textsf{ref} - a_n(t_n-\delta;x^*) - \varkappa \ge \delta \mu$. We now easily get that \begin{equation*} \mathbb{P} \left( B_n^{\varepsilon,\delta,-} \right) \le \mathbb{P} \left\{ \sup_{t \in [n-1,t_n-\delta]} \left( V_{\left\langle I \right\mathscr{R}gle_t} - V_{\left\langle I \right\mathscr{R}gle_{n-1}} \right) \ge \frac{\mu \delta e^{\alpha_\textsf{on}(n-1)}}{\varepsilon} \right\}. 
\end{equation*} Letting $u_n \triangleq \langle I \rangle_{n-1}$, $q \triangleq \langle I \rangle_t-u_n$, $v_n \triangleq \langle I \rangle_{t_n-\delta}$, and noting that $\hat{V}_q \triangleq V_{u_n + q} - V_{u_n}$ is a Brownian motion, we get \begin{equation*} \mathbb{P} \left( B_n^{\varepsilon,\delta,-} \right) \le \mathbb{P} \left\{ \sup_{q \in [0,v_n-u_n]} \hat{V}_q \ge \frac{\mu \delta e^{\alpha_\textsf{on}(n-1)}}{\varepsilon} \right\} = 2 \, \overline{\Phi} \left( \frac{\mu \delta}{\varepsilon \sqrt{ \frac{e^{2 \alpha_\textsf{on}(t^*-\delta)} - 1} {2 \alpha_\textsf{on}} }} \right), \end{equation*} where $\overline{\Phi}$ denotes the upper tail probability of the standard normal distribution, and where we have explicitly computed $u_n$, $v_n$, and also used \cite[Remark 2.8.3]{KS91}. Since $e^{2\alpha_\textsf{on}(t^*-\delta)}-1 \le e^{2 \alpha_\textsf{on} t^*}$, we easily get \eqref{E:prob_bnminus} with $K_- \triangleq \sqrt{2 \alpha_\textsf{on}} e^{-\alpha_\textsf{on} t^*} \mu$. \end{proof} \begin{proof} [Proof of Lemma \ref{L:bnplus}] Using the fact that for $\omega \in G_{n-1}^{\varepsilon,\delta}$, we have $\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}(\omega) \in [x^*-\varkappa,x^*+\varkappa]$, together with \eqref{E:an_envelope}, we get \begin{multline*} B_n^{\varepsilon,\delta,+} = \left\{\omega \in G_{n-1}^{\varepsilon,\delta}: \sup_{t \in [n-1,t_n+\delta]} \left( a_n(t;\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}) + \varepsilon e^{-\alpha_\textsf{on} t} (I_t - I_{n-1}) \right) < x_\textsf{ref} \right\} \\ \subset \left\{ \omega \in G_{n-1}^{\varepsilon,\delta}: a_n(t_n+\delta;x^*) - \varkappa + \varepsilon e^{-\alpha_\textsf{on} (t_n + \delta)} \left(I_{t_n+\delta}-I_{n-1}\right) < x_\textsf{ref} \right\} \\ \subset \left\{ \omega \in \Omega: e^{-\alpha_\textsf{on} (t_n + \delta)} \left(I_{t_n+\delta}-I_{n-1}\right) < \left( \frac{x_\textsf{ref}+\varkappa-a_n(t_n+\delta;x^*)}{\varepsilon} \right) \right\}. \end{multline*} Recalling Assumption
\ref{A:assumptions_real}, a bit of computation reveals that if we let \begin{equation*}\label{E:delta_zero} \delta_+ \triangleq \frac{1}{\alpha_\textsf{on}} \log \left[ \frac{2 \beta - 2 \alpha_\textsf{on} x_\textsf{ref}}{\beta - \alpha_\textsf{on} x_\textsf{ref} + \alpha_\textsf{off} x_\textsf{ref}} \right] > 0, \end{equation*} then, for $0 < \delta <\delta_+$, we have \begin{equation*} \frac{x_\textsf{ref}+\varkappa-a_n(t_n+\delta;x^*)}{\varepsilon} < -\left( \frac{\mu}{2} \right) \frac{\delta}{\varepsilon} < 0. \end{equation*} Since $e^{-\alpha_\textsf{on}(t_n+\delta)} (I_{t_n+\delta}-I_{n-1}) \sim \mathscr{N} \left(0, \frac{1-e^{-2\alpha_\textsf{on}(t^*+\delta)}}{2 \alpha_\textsf{on}} \right)$, a straightforward calculation yields that for $0<\delta<\delta_+$, \eqref{E:prob_bnplus} holds with $K_+ \triangleq \mu\sqrt{\alpha_\textsf{on}/2}$. \end{proof} \end{document}
\begin{document}

\title{On Succinct Representations of Binary Trees\thanks{An abstract of some of the results in this paper appeared in \emph{Computing and Combinatorics: Proceedings of the 18th Annual International Conference COCOON 2012}, Springer LNCS 7434, pp. 396--407, 2012.}}

\author{Pooya Davoodi\\New York University, Polytechnic School of Engineering\\\texttt{[email protected]} \and Rajeev Raman\\ {University of Leicester}\\ \texttt{[email protected]}\\ \and {Srinivasa Rao Satti}\\ {Seoul National University}\\ \texttt{[email protected]} }

\maketitle

\begin{abstract}
We observe that a standard transformation between \emph{ordinal} trees (arbitrary rooted trees with ordered children) and binary trees leads to interesting succinct binary tree representations. There are four symmetric versions of these transformations. Via these transformations we get four succinct representations of $n$-node binary trees that use $2n + n/(\log n)^{O(1)}$ bits and support (among other operations) navigation, inorder numbering, one of pre- or post-order numbering, subtree size and lowest common ancestor (LCA) queries. The ability to support inorder numbering is crucial for the well-known range-minimum query (RMQ) problem on an array $A$ of $n$ ordered values. While this functionality, and more, is also supported in $O(1)$ time using $2n + o(n)$ bits by Davoodi et al.'s (\emph{Phil. Trans. Royal Soc. A} \textbf{372} (2014)) extension of a representation by Farzan and Munro (\emph{Algorithmica} \textbf{68} (2014)), their \emph{redundancy}, or the $o(n)$ term, is much larger, and their approach may not be suitable for practical implementations. One of these transformations is related to Zaks' sequence (S.~Zaks, \emph{Theor. Comput. Sci.} \textbf{10} (1980)) for encoding binary trees, and we thus provide the first succinct binary tree representation based on Zaks' sequence. Another of these transformations is equivalent to Fischer and Heun's (\emph{SIAM J. Comput.} \textbf{40} (2011)) 2d-Min-Heap\ structure for this problem. Yet another variant allows an encoding of the Cartesian tree of $A$ to be constructed from $A$ using only $O(\sqrt{n} \log n)$ bits of working space.
\end{abstract}

\section{Introduction} \label{sec:intro} Binary trees are ubiquitous in computer science, and are integral to many applications that involve indexing very large text collections. In such applications, the space usage of the binary tree is an important consideration. While a standard representation of a binary tree node would use three pointers---to its left and right children, and to its parent---the space usage of this representation, which is $\Theta(n\log n)$ bits to store an $n$-node tree, is unacceptable in large-scale applications. A simple counting argument shows that the minimum space required to represent an $n$-node binary tree is $2n - O(\log n)$ bits in the worst case. A number of \emph{succinct} representations of static binary trees have been developed that support a wide range of operations in $O(1)$ time\footnote{These results, and all others discussed here, assume the word RAM model with word size $\Theta(\log n)$ bits.}, using only $2n + o(n)$ bits \cite{jacobson89,fm-swat08}. However, succinct binary tree representations have limitations. A succinct binary tree representation can usually be divided into two parts: a \emph{tree encoding}, which gives the structure of the tree, and takes close to $2n$ bits, and an \emph{index} of $o(n)$ bits which is used to perform operations on the given tree encoding. It appears that the tree encoding constrains both the way nodes are numbered and the operations that can be supported in $O(1)$ time using a $o(n)$-bit index.
For example, the earliest succinct binary tree representation was due to Jacobson \cite{jacobson89}, but this only supported a level-order numbering, and while it supported basic navigational operations such as moving to a parent, left child or right child in $O(1)$ time, it did not support operations such as \emph{lowest common ancestor (LCA)} and reporting the size of the subtree rooted at a given node, in $O(1)$ time (indeed, it still remains unknown if there is a $o(n)$-bit index to support these operations in Jacobson's encoding). The importance of having a variety of node numberings and operations is illustrated by the \emph{range minimum query (RMQ)} problem, defined as follows. Given an array $A[1..n]$ of totally ordered values, the RMQ problem is to preprocess $A$ into a data structure to answer the query $\rmqarg{i}{j}$: given two indexes $1 \le i \le j \le n$, return the index of the minimum value in $A[i..j]$. The aim is to minimize the query time, the space requirement of the data structure, as well as the time and space requirements of the preprocessing. This problem finds a variety of applications including range searching~\cite{Saxena2009}, text indexing~\cite{Abouelhoda2004,Sadakane2007}, text compression~\cite{Chen2008}, document retrieval~\cite{Muthukrishnan2002,Sadakane2007a,Valimaki2007}, flowgraphs~\cite{Georgiadis2004}, and position-restricted pattern matching~\cite{Iliopoulos2008}. Since many of these applications deal with huge datasets, highly space-efficient solutions to the RMQ problem are of great interest. A standard approach to solve the RMQ problem is via the~\emph{Cartesian tree}~\cite{Vuillemin1980}. The Cartesian tree of $A$ is a binary tree obtained by labelling the root with the index $i$ such that $A[i]$ is the smallest element in $A$. The left subtree of the root is the Cartesian tree of $A[1..i-1]$ and the right subtree of the root is the Cartesian tree of $A[i+1..n]$ (the Cartesian tree of an empty sub-array is the empty tree).
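To make the definition and its LCA property concrete, here is a minimal Python sketch (the function names are ours, not from the paper; ties are broken towards the leftmost minimum, and node labels are $0$-based array indexes):

```python
def cartesian_tree(A, lo=0, hi=None):
    """Cartesian tree of A[lo:hi] as nested tuples (index, left, right),
    or None for an empty sub-array; `index` is the position of the minimum."""
    if hi is None:
        hi = len(A)
    if lo >= hi:
        return None
    i = min(range(lo, hi), key=lambda k: A[k])  # leftmost minimum of A[lo:hi]
    return (i, cartesian_tree(A, lo, i), cartesian_tree(A, i + 1, hi))

def rmq_via_lca(root, i, j):
    """Answer RMQ(i, j) by locating the LCA of the nodes labelled i and j.
    Labels are inorder ranks, so the LCA is the first node on the search
    path whose label falls inside [i, j]."""
    k, left, right = root
    if j < k:
        return rmq_via_lca(left, i, j)
    if k < i:
        return rmq_via_lca(right, i, j)
    return k  # i <= k <= j: this node is the LCA; A[k] is the range minimum

A = [3, 1, 4, 1, 5, 9, 2, 6]
root = cartesian_tree(A)
assert all(rmq_via_lca(root, i, j) == min(range(i, j + 1), key=lambda k: A[k])
           for i in range(len(A)) for j in range(i, len(A)))
```

Note that `rmq_via_lca` only walks the tree and never reads $A$, which is why $A$ can be discarded after preprocessing.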
As far as the RMQ problem is concerned, the key property of the Cartesian tree is that the answer to the query $\rmqarg{i}{j}$ is the label of the node that is the \emph{lowest common ancestor (LCA)} of the nodes labelled $i$ and $j$ in the Cartesian tree. Answering RMQs this way does \emph{not} require access to $A$ at query time: this means that $A$ can be (and often is) discarded after pre-processing. Since Farzan and Munro \cite{fm-swat08} showed how to represent binary trees in $2n + o(n)$ bits and support LCA queries in $O(1)$ time, it would appear that there is a fast and highly space-efficient solution to the RMQ problem. Unfortunately, this is not true. The difficulty is that the label of a node in the Cartesian tree (the index of the corresponding array element) is its rank in the \emph{inorder} traversal of the Cartesian tree. Until recently, none of the known binary tree representations \cite{jacobson89,fm-swat08,frs-icalp09} was known to support inorder numbering. Indeed, the first $2n + o(n)$-bit and $O(1)$-time solution to the RMQ problem used an \emph{ordinal} tree, or an arbitrary rooted, ordered tree, to answer RMQs \cite{fh-sjc11}. Recently, Davoodi et al. \cite{NavarroDRS14} augmented the representation of \cite{fm-swat08} to support inorder numbering. However, Davoodi et al.'s result has some shortcomings. The $o(n)$ additive term---the \emph{redundancy}---can be a significant overhead for practical values of $n$. Using the results of \cite{DBLP:conf/focs/Patrascu08} (expanded upon in \cite{sn-soda10}), the redundancy of Jacobson's binary tree representation, as well as Fischer and Heun's RMQ solution, can be reduced to $n/(\log n)^{O(1)}$: this is not known to be true for the results of \cite{fm-swat08,NavarroDRS14}. 
Furthermore, there are several good practical implementations of ordinal trees \cite{acns09,ottaviano,sdsl}, but the approach of \cite{fm-swat08,NavarroDRS14} is complex and needs significant work before its practical potential can even be evaluated. Finally, the results of \cite{fm-swat08,NavarroDRS14} do not focus on the construction space or time, while this is considered in \cite{fh-sjc11}. \subsubsection*{Our Results.} We recall that there is a well-known transformation between binary trees and ordinal trees (in fact, there are four symmetric versions of this transformation). This allows us to represent a binary tree succinctly by transforming it into an ordinal tree, and then representing the ordinal tree succinctly. We note a few interesting properties of the resulting binary tree representations: \begin{itemize} \item The resulting binary tree representations support inorder numbering in addition to either postorder or preorder. \item The resulting binary tree representations support a number of operations including basic navigation, subtree size and LCA in $O(1)$ time; the latter implies in particular that they are suitable for the RMQ problem. \item The resulting binary tree representations use $2n + n/(\log n)^{O(1)}$ bits of space, and so have low redundancy. \item Since there are implementations of ordinal trees that are very fast in practice \cite{acns09,ottaviano,sdsl}, we believe the resulting binary tree representations will perform well in practice. \item One of the binary tree representations, when applied to represent the Cartesian tree, gives the same data structure as the 2d-Min-Heap\ of Fischer and Heun \cite{fh-sjc11}. \item If one represents the ordinal tree using the BP encoding \cite{DBLP:conf/birthday/Raman013} then the resulting binary tree encoding is Zaks' sequence\ \cite{DBLP:journals/tcs/Zaks80}; we believe this to be the first succinct binary tree representation based on Zaks' sequence.
\end{itemize} Finally, we also show in Section~\ref{sec:construction-space} that using these representations, we can make some improvements to the preprocessing phase of the RMQ problem. Specifically, we show that given an array $A$, a $2n+O(1)$-bit encoding of the tree structure of the Cartesian tree of $A$ can be created in linear time using only $O(\sqrt{n} \log n)$ bits of working space. This encoding can be augmented with additional data structures of size $o(n)$ bits, using only $o(n)$ bits of working space, thereby ``improving'' the result of~\cite{fh-sjc11} where $n+o(n)$ working space is used (the accounting of space is slightly different). \subsection{Preliminaries} \label{sec:prelim} \subsubsection*{Ordinal Tree Encodings.} We now discuss two encodings of ordinal trees. The first encoding is the natural \emph{balanced parenthesis (BP)} encoding~\cite{jacobson89,mr-sjc01}. The BP~sequence of an ordinal tree is obtained by performing a depth-first traversal, and writing an opening parenthesis each time a node is visited, and a closing parenthesis immediately after all its descendants are visited. This gives a $2n$-bit encoding of an $n$-node ordinal tree as a sequence of balanced parentheses. The \emph{depth-first unary degree sequence (DFUDS)} \cite{bdmrrr-05} is another encoding of an $n$-node ordinal tree as a sequence of balanced parentheses. This again visits the nodes in depth-first order (specifically, pre-order) and outputs the \emph{degree} $d$ of each node---defined here as the number of children it has---in unary as $(^d )$. The resulting string is of length $2n -1$ bits and is converted to a balanced parenthesis string by adding an open parenthesis to the start of the string. See Figure~\ref{fig:tree-encodings} for an example. 
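To illustrate the two encodings, the following Python sketch (our own minimal representation: an ordinal tree node is simply the list of its children) computes both sequences for a small tree:

```python
def bp(node):
    """BP encoding: '(' on entering a node, ')' after all its descendants."""
    return "(" + "".join(bp(c) for c in node) + ")"

def dfuds(node):
    """DFUDS encoding: visit nodes in preorder, writing the degree d of each
    node as d opening parentheses followed by one closing parenthesis; a
    single '(' is prepended to make the sequence balanced."""
    def core(v):
        return "(" * len(v) + ")" + "".join(core(c) for c in v)
    return "(" + core(node)

t = [[[], []], []]  # root with two children; the first child has two leaves
assert bp(t) == "((()())())"      # 2n = 10 parentheses for n = 5 nodes
assert dfuds(t) == "((()(())))"   # (2n - 1) + 1 = 10 parentheses
```

Both encodings are balanced-parenthesis strings of the same length, but they group the parentheses differently: BP matches each node with the end of its subtree, while DFUDS groups a node's children together.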
\begin{figure} \caption{BP and DFUDS encodings of ordinal trees.} \label{fig:tree-encodings} \end{figure} \subsubsection*{Succinct Ordinal Tree Representations.} Table~\ref{tab:ops} gives a subset of operations that are supported by various ordinal tree representations (see \cite{DBLP:conf/birthday/Raman013} for other operations). \begin{table} \caption{A list of operations on ordinal trees. In all cases below, if the value returned by the operation is not defined (e.g. performing $\text{parent}()$ on the root of the tree) an appropriate ``null'' value is returned.} \label{tab:ops} \centering \begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{Operation} & \multicolumn{1}{|c|}{Return Value}\\ \hline $\text{parent}(x)$ & the parent of node $x$\\ $\text{child}(x,i)$ & the $i$-th child of node $x$, for $i\geq1$\\ $\text{next-sibling}(x)$ & the next sibling of $x$\\ $\text{prev-sibling}(x)$ & the previous sibling of $x$\\ $\text{depth}(x)$ & the depth of node $x$\\ $\text{select}_{o}(j)$ & the $j$-th node in $o$-order, for $o \in \{ \text{preorder}, \text{postorder},$ \\ & \hspace*{1in}$\text{preorder}Right, \text{postorder}Right\}$\\ $\text{leftmost-leaf}(x)$ & the leftmost leaf of the subtree rooted at node $x$\\ $\text{rightmost-leaf}(x)$ & the rightmost leaf of the subtree rooted at node $x$\\ $\text{subtree-size}(x)$ & the size of the subtree rooted at node $x$ (excluding $x$ itself)\\ $\text{LCA}(x, y)$ & the lowest common ancestor of the nodes $x$ and $y$\\ $\text{level-ancestor}(x,i)$ & the ancestor of node $x$ at depth $d-i$, where $d = \text{depth}(x)$, for $i\geq0$\\ \hline \end{tabular} \end{table} \begin{theorem}[\cite{sn-soda10}] \label{thm:ordinal-tree-rep} There is a succinct ordinal tree representation that supports all operations in Table~\ref{tab:ops} in $O(1)$ time and uses $2n + n/(\log n)^{O(1)}$ bits of space, where $n$ denotes the number of nodes in the represented ordinal tree.
\end{theorem} \section{Succinct Binary Trees Via Ordinal Trees} \label{sec:transform} We describe succinct binary tree representations which are based on ordinal tree representations. In other words, we present a method that transforms a binary tree to an ordinal tree (indeed, we study several similar transformations) whose properties can be used to simulate various operations on the original binary tree. We show that using known succinct ordinal tree representations from the literature, we can build succinct binary tree representations that were not previously known. We also introduce a relationship between two standard ordinal tree encodings (BP and DFUDS) and study how this relationship can be utilized in our binary tree representations. \subsection{Transformations} We define four transformations \trans{1}, \trans{2}, \trans{3}, and \trans{4} that transform a binary tree into an ordinal tree. We describe the $i$-th transformation by stating how we generate the output ordinal tree \transTree{i} given an input binary tree $T_b$. Let $n$ be the number of nodes in $T_b$. The number of nodes in each of \transTree{1}, \transTree{2}, \transTree{3}, and \transTree{4} is $n+1$, where each node corresponds to a node in $T_b$\ except the root (think of the root as a dummy node). In the following, we show the correspondence between the nodes in $T_b$\ and the nodes in \transTree{i} by using the notation \transOrdering{i}{$u$}{$v$}, which means the node $u$ in $T_b$\ corresponds to the node $v$ in \transTree{i}.
Given a node $u$ in $T_b$, and its corresponding node $v$ in \transTree{i}, we show which nodes in \transTree{i} correspond to the left and right children of $u$: \\ \setlength{\tabcolsep}{0.6cm} \begin{tabular}[h]{l l} \transOrdering{1}{left-child$(u)$}{first-child$(v)$} & \transOrdering{1}{right-child$(u)$}{next-sibling$(v)$} \\ \transOrdering{2}{left-child$(u)$}{last-child$(v)$} & \transOrdering{2}{right-child$(u)$}{previous-sibling$(v)$} \\ \transOrdering{3}{left-child$(u)$}{next-sibling$(v)$} & \transOrdering{3}{right-child$(u)$}{first-child$(v)$} \\ \transOrdering{4}{left-child$(u)$}{previous-sibling$(v)$} & \transOrdering{4}{right-child$(u)$}{last-child$(v)$} \\ \end{tabular} \\ The example in Figure \ref{fig:transformations} shows a binary tree which is transformed to an ordinal tree by each of the four transformations. Notice that \transTree{2} and \transTree{4} are the mirror images of \transTree{1} and \transTree{3}, respectively. \begin{figure} \caption{Top: a binary tree, nodes numbered in inorder. Bottom, from left to right: the ordinal trees \transTree{1}, \transTree{2}, \transTree{3}, and \transTree{4}.} \label{fig:transformations} \end{figure} \subsection{Effect of Transformations on Orderings} It is clear that the order in which the nodes appear in $T_b$\ is different from the node ordering in each of \transTree{1}, \transTree{2}, \transTree{3}, and \transTree{4}; however, there is a surprising relation between these orderings. Consider the three standard orderings \text{inorder}, \text{preorder}, and \text{postorder}\ on binary trees and ordinal trees\footnote{\text{inorder}\ is only defined on binary trees.}. While \text{preorder}\ and \text{postorder}\ arrange the nodes from left to right by default, we define their symmetries which arrange the nodes from right to left: \text{preorder}Right\ and \text{postorder}Right\ (i.e., visiting the children of each node from right to left in traversals).
Each of the four transformations maps two of the binary tree orderings to two of the ordinal tree orderings. For example, if the first transformation maps \text{inorder}\ to \text{postorder}, that means a node with \text{inorder}\ rank $k$ in $T_b$\ corresponds to a node with \text{postorder}\ rank $k$ in \transTree{1}. In the following, we show all of these mappings ($ -1 $ means that the mapping is off by 1, due to the presence of the dummy root): \setlength{\tabcolsep}{0.5cm} \begin{tabular}{l l l} \trans{1}: & $\text{inorder}\ \rightarrow \text{postorder}$ & $\text{preorder}\ \rightarrow \text{preorder}\ -1 $ \\ \trans{2}: & $\text{inorder}\ \rightarrow \text{postorder}Right$ & $\text{preorder}\ \rightarrow \text{preorder}Right\ -1 $ \\ \trans{3}: & $\text{inorder}\ \rightarrow \text{preorder}Right -1$ & $\text{postorder}\ \rightarrow \text{postorder}Right $ \\ \trans{4}: & $\text{inorder}\ \rightarrow \text{preorder} -1$ & $\text{postorder} \rightarrow \text{postorder} $ \\ \end{tabular} \subsection{Effect of Transformations on Ordinal Tree Encodings} The four transformation methods define four ordinal trees \transTree{1}, \transTree{2}, \transTree{3}, \transTree{4} (from a given binary tree) which may look completely different, however they have relationships. In the following, we show a pairwise connection between \transTree{1} and \transTree{3} and between \transTree{2} and \transTree{4}. 
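To see the first of these mappings in action, the following Python sketch (our own minimal encoding: a binary node is a tuple `(label, left, right)`, an ordinal node a pair `(label, children)`) applies \trans{1} and checks that inorder in $T_b$\ becomes postorder in \transTree{1}, while preorder is preserved up to the dummy root:

```python
def t1_forest(node):
    """Apply T1 to the subtree rooted at `node`: the left child becomes the
    first child, and the right child becomes the next sibling. The result is
    the list of ordinal-tree siblings generated by `node` and its right spine;
    hanging this forest under a dummy root gives T1(T_b)."""
    if node is None:
        return []
    label, left, right = node
    return [(label, t1_forest(left))] + t1_forest(right)

def inorder(node):
    if node is None:
        return []
    label, left, right = node
    return inorder(left) + [label] + inorder(right)

def preorder(node):
    if node is None:
        return []
    label, left, right = node
    return [label] + preorder(left) + preorder(right)

def post_forest(forest):   # postorder of a forest of (label, children) nodes
    return [x for lab, ch in forest for x in post_forest(ch) + [lab]]

def pre_forest(forest):    # preorder of a forest of (label, children) nodes
    return [x for lab, ch in forest for x in [lab] + pre_forest(ch)]

# A small binary tree whose nodes are labelled 1..6 in inorder.
bt = (4, (2, (1, None, None), (3, None, None)), (6, (5, None, None), None))
assert inorder(bt) == post_forest(t1_forest(bt)) == [1, 2, 3, 4, 5, 6]
assert preorder(bt) == pre_forest(t1_forest(bt))  # dummy root accounts for the -1
```

The analogous checks for \trans{2}, \trans{3}, and \trans{4} follow by mirroring the children lists and/or swapping the roles of first child and next sibling.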
\paragraph{From paths to siblings and vice versa.} Recall the following transformation functions on the binary tree nodes: \setlength{\tabcolsep}{1cm} \begin{tabular}[h]{l l} \transOrdering{1}{left-child$(u)$}{first-child$(v)$} & \transOrdering{1}{right-child$(u)$}{next-sibling$(v)$} \\ \transOrdering{3}{left-child$(u)$}{next-sibling$(v)$} & \transOrdering{3}{right-child$(u)$}{first-child$(v)$} \end{tabular} \\ If we combine the above functions, we can conclude the following: $$ \text{first-child$ (v) $ in \trans{1}} = \text{next-sibling$ (v) $ in \trans{3}} $$ $$ \text{next-sibling$ (v) $ in \trans{1}} = \text{first-child$ (v) $ in \trans{3}} $$ The above relation yields a connection between paths in \transTree{1} and the siblings in \transTree{3}. Consider any downward path $(u_1, u_2, \ldots, u_k)$ in \transTree{1}, where $u_1$ is the $i$-th child of a node for any $i>1$ and all $u_2, \ldots, u_k$ are the first child of their parents. All the nodes $u_1, u_2, \ldots, u_k$ become siblings in \transTree{3} and they are all the children of a node which corresponds to the previous-sibling of $u_1$ in \transTree{1}. Also, if $u_1$ is the root of \transTree{1} then $u_1$ corresponds to the root in \transTree{3} and all $u_2, \ldots, u_k$ become the children of the root of \transTree{3}. The paths in \transTree{3} are related to siblings in \transTree{1} in a similar way. Also, a similar connection exists between \transTree{2} and \transTree{4}, with the only difference that first child, next sibling, and previous sibling should be turned into last child, previous sibling, and next sibling, respectively. \paragraph{From BP to \text{DFUDS}\ and vice versa.} Section~\ref{sec:prelim} introduced two standard ordinal tree encodings: BP and \text{DFUDS}.
The relation between paths and siblings in the different types of transforms suggests a relation between the BP and \text{DFUDS}\ sequences of these transformations, as paths of the kind considered above lead to consecutive sequences of parentheses, which in \text{DFUDS}\ can be viewed as the encoding of a group of siblings. We now formalize this intuition. Let \text{DFUDS}Right\ be the \text{DFUDS}\ sequence where the children of every node are visited from right to left. \begin{figure} \caption{Illustrating how the results of transformations \trans{1} and \trans{4} are combined in the inductive step of Theorem~\ref{thm:equivalence}.} \label{fig:join} \end{figure} \begin{theorem} \label{thm:equivalence} For any binary tree $T_b$, let \transTree{i} denote the result of \trans{i} applied to $T_b$. Then: \begin{enumerate} \item The BP sequences of \transTree{1} and \transTree{3} are the reverses of the BP sequences of \transTree{2} and \transTree{4}, respectively. \item The \text{DFUDS}\ sequences of \transTree{1} and \transTree{3} are equal to the \text{DFUDS}Right\ sequences of \transTree{2} and \transTree{4}, respectively. \item The BP sequence of \transTree{1} is equal to the \text{DFUDS}\ sequence of \transTree{4}. \item The BP sequence of \transTree{2} is equal to the \text{DFUDS}\ sequence of \transTree{3}. \end{enumerate} \end{theorem} \begin{proof} (1) and (2) follow directly from the definitions of the transformations. We prove (3) by induction ((4) is analogous). For any ordinal tree $T$ denote by BP($T$) and \text{DFUDS}($T$) the BP and \text{DFUDS}\ encodings of $T$. For the base case, if $T_b$ consists of a singleton node, then BP(\transTree{1}) = \text{DFUDS}(\transTree{4}) = (()). Before going to the inductive case, observe that for any binary tree $T$, BP(\trans{i}($T$)) is a string of the form $(Y)$, where the parenthesis pair enclosing $Y$ represents the dummy root of \trans{i}($T$), and $Y$ is the BP representation of the ordered forest obtained by deleting the dummy root.
On the other hand, \text{DFUDS}(\trans{i}($T$)), which can also be viewed as a string of the form $(Y)$, is interpreted differently: the first open parenthesis is a dummy, while $Y)$ is the \emph{core} \text{DFUDS}\ representation of the entire tree \trans{i}($T$), including the dummy root. Now let $T_b$ be a binary tree with root $x$, left subtree $L$ and right subtree $R$. By induction, BP(\trans{1}($L$)) = \text{DFUDS}(\trans{4}($L$)) = $(Y)$ and BP(\trans{1}($R$)) = \text{DFUDS}(\trans{4}($R$)) = $(Z)$. Note that \transTree{1} is obtained as follows. Replace the dummy root of \trans{1}($L$) by $x$, and insert $x$ as the first child of the dummy root of \trans{1}($R$) (see Figure~\ref{fig:join}). Thus, BP(\transTree{1}) = $((Y)Z)$. Next, \transTree{4} is obtained as follows: replace the dummy root of \trans{4}($R$) by $x$, and insert $x$ as the last child of the dummy root of \trans{4}($L$) (see Figure~\ref{fig:join}). The core \text{DFUDS}\ representation of \transTree{4} (i.e. excluding the dummy open parenthesis) is obtained as follows: add an open parenthesis to the front of the core \text{DFUDS}\ representation of \trans{4}($L$), to indicate that the root of \trans{4}($L$), which is also the root of \transTree{4}, has one additional child. This gives the parenthesis sequence $(Y)$. To this we append the core \text{DFUDS}\ representation of \trans{4}($R$), giving $(Y)Z)$. Now we add the dummy open parenthesis, giving $((Y)Z)$, which is the same as BP(\transTree{1}). \end{proof} \subsection{Succinct Binary Tree Representations} We consider the problem of encoding a binary tree in a succinct data structure that supports the operations: \text{left-child}, \text{right-child}, \text{parent}, \text{subtree-size}, \text{LCA}, \text{inorder}, and one of \text{preorder}\ or \text{postorder}\ (we give two data structures supporting \text{preorder}\ and two other ones supporting \text{postorder}).
We design such a data structure by transforming the input binary tree into an ordinal tree using any of the transformations \trans{1}, \trans{2}, \trans{3}, or \trans{4}, and then encoding the ordinal tree by a succinct data structure that supports appropriate operations. For each of the four ordinal trees that can be obtained by the transformations, we give a set of algorithms, each of which performs one of the binary tree operations. In other words, we show how each of the binary tree operations can be performed by ordinal tree operations. Here, we show this for transformation \trans{1}, and later in the section we present the pseudocode for all the transformations. In the following, $u$ denotes any given node in a binary tree $T_b$, and \transarg{1}{u} denotes the node corresponding to $u$ in \transTree{1}: \paragraph{\text{left-child}.} The left child of $u$ is the first child of \transarg{1}{u}, which can be determined by the operation \text{child}(\transarg{1}{u},1) on \transTree{1}. \paragraph{\text{right-child}.} The right child of $u$ is the next sibling of \transarg{1}{u}, which can be determined by the operation \text{next-sibling}(\transarg{1}{u}) on \transTree{1}. \paragraph{\text{parent}.} To determine the parent of $u$, there are two cases: (1) if \transarg{1}{u} is the first child of its parent, then the answer is the parent of \transarg{1}{u}, determined by \text{parent}(\transarg{1}{u}) on \transTree{1}; (2) if \transarg{1}{u} is not the first child of its parent, then the answer is the previous sibling of \transarg{1}{u}, determined by the operation \text{prev-sibling}(\transarg{1}{u}) on \transTree{1}. \paragraph{\text{subtree-size}.} The subtree size of $u$ is equal to the subtree size of \transarg{1}{u} plus the sum of the subtree sizes of all the siblings to the right of \transarg{1}{u}. Let $\ell$ be the rightmost leaf in the subtree of the parent of \transarg{1}{u}.
To obtain the above sum, we subtract the preorder number of \transarg{1}{u} from the preorder number of~$\ell$, which can be performed using the operations \text{rightmost-leaf}(\text{parent}(\transarg{1}{u})) and \text{preorder}. \paragraph{\text{LCA}.} Let $w$ be the LCA of the two nodes $u$ and $v$ in $T_b$. We want to find $w$ given $u$ and $v$, assuming w.l.o.g. that \text{preorder}($u$) is smaller than \text{preorder}($v$). Let $\mathit{lca}$ be the LCA of \transarg{1}{u} and \transarg{1}{v} in \transTree{1}. Observe that $\mathit{lca}$ is a child of \transarg{1}{w} and an ancestor of \transarg{1}{u}. Thus, we only need to find the ancestor of \transarg{1}{u} at level $i$, where $i-1$ is the depth of $\mathit{lca}$. To compute this, we utilize the operations \text{LCA}, \text{depth}, and \text{level-ancestor}\ on \transTree{1}. See Tables \ref{tbl:operations-t1} through \ref{tbl:operations-t4} for the pseudocode of all the binary tree operations on all four transformations. \input{bintree-alg} \begin{theorem} \label{thm:transform} There is a succinct binary tree representation of size $2n + n/(\log n)^{O(1)}$ bits that supports all the operations \text{left-child}, \text{right-child}, \text{parent}, \text{subtree-size}, \text{LCA}, \text{inorder}, and one of \text{preorder}\ or \text{postorder}\ in $O(1)$ time, where $n$ denotes the number of nodes in the represented binary tree. \end{theorem} \begin{proof} We can use any of the transformations \trans{1}, \trans{2}, \trans{3}, or \trans{4} and use any ordinal tree representation that uses $2n + n/(\log n)^{O(1)}$ bits and supports the operations required by our algorithms (Theorem~\ref{thm:ordinal-tree-rep}). \end{proof} \subsection{BP for Binary Trees (Zaks' sequence)} \label{sec:binary-tree-bp} Let $T_b$\ be a binary tree which is transformed to the ordinal tree \transTree{1} by the first transformation \trans{1}. We show that the BP sequence of \transTree{1} is a special sequence for $T_b$.
This sequence is called Zaks' sequence, and our transformation methods show that Zaks' sequence\ can indeed be used to encode binary trees into a succinct data structure that supports various operations. In the following, we give a definition of Zaks' sequence\ of $T_b$\ and we show that it is indeed equal to the BP sequence of~\transTree{1}. \paragraph{Zaks' sequence.} First, extend the binary tree $T_b$\ by adding external nodes wherever there is a missing child. Now, label the internal nodes with an open parenthesis, and the external nodes with a closing parenthesis. Zaks' sequence\ of $T_b$\ is derived by traversing $T_b$\ in \text{preorder}\ and printing the labels of the visited nodes. If $T_b$\ has $n$ nodes, Zaks' sequence\ is of length $2n +1$. If we insert an open parenthesis at the beginning of Zaks' sequence, then it becomes a balanced-parentheses sequence. Notice that Zaks' sequence\ is different from Jacobson's~\cite{jacobson89} approach, where $T_b$\ is traversed in level-order. See Figure \ref{fig:binary-tree-bp} for an example. Each pair of matching open and close parentheses in Zaks' sequence, except the extra open parenthesis at the beginning and its matching close parenthesis, is (conceptually) associated with a node in $T_b$. Observe that the open parentheses in the sequence from left to right correspond to the nodes in preorder, and the closing parentheses from left to right correspond to the nodes in inorder. \begin{figure} \caption{Illustrating the connection between Zaks' sequence\ of $T_b$\ and the BP sequence of \transTree{1}} \label{fig:binary-tree-bp} \end{figure} \paragraph{Zaks' sequence\ and transformation \trans{1}.} Zaks' sequence\ of $T_b$, with the extra open parenthesis inserted at the beginning, is the same as the BP sequence of \transTree{1}.
This is stated in the following theorem: \begin{theorem} Zaks' sequence\ of a binary tree, prepended with an open parenthesis, is the same as the BP sequence of the ordinal tree obtained by applying the first transformation \trans{1} to the binary tree. \end{theorem} \begin{proof} Let $T_b$\ and \transTree{1} be the binary tree and ordinal tree respectively, and let $S$ be Zaks' sequence\ of $T_b$\ prepended with an open parenthesis. We prove the theorem by induction on the number of nodes in $T_b$. For the base case, if $T_b$\ has only one node, then Zaks' sequence\ of $T_b$\ is $ ()) $ and the BP sequence of \transTree{1} is $ (()) $. Take as the induction hypothesis that the theorem holds for binary trees with fewer than $ n $ nodes. In the inductive step, let $T_b$\ have $ n $ nodes. Let $u_1, u_2, \ldots, u_k$ be the rightmost path in $T_b$, where $ 1\le k \le n $. Let $l_i$ be the left subtree of $ u_i $ for $ 1\le i \le k $. Thus, $ S = ( (z_1 (z_2 \cdots (z_k ) $, where $ z_i $ is Zaks' sequence\ of $ l_i $. Notice that the size of each $ l_i $ is smaller than $ n $. Therefore, by the hypothesis, $(z_i $ is the BP sequence of the ordinal tree \trans{1}($l_i$). The ordinal tree \transTree{1} has a dummy root with $ k $ subtrees; the $i$-th subtree is rooted at \transarg{1}{u_i} and is identical to \trans{1}($l_i$) with its dummy root replaced by \transarg{1}{u_i}. Therefore, the BP sequence of \transTree{1} is $(b_1 b_2 \cdots b_k) $, where $ b_i = (z_i $ is the BP sequence of the subtree rooted at \transarg{1}{u_i}. Thus, the BP sequence of \transTree{1} is the same as $ S $. \end{proof} \section{Cartesian Tree Construction in $o(n)$ Working Space} \label{sec:construction-space} Given an array $A$, we now show how to construct one of the succinct tree representations implied in Theorem~\ref{thm:transform} in small working space. The model is that the array $A$ is present in read-only memory.
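The inductive argument above can be checked on small instances with the following sketch; the tuple/list encodings of binary and ordinal trees are our own, purely illustrative, conventions, not the paper's data structure:

```python
# Illustrative sketch of the theorem above, not the paper's representation.
# A binary tree is a pair (left, right), with None for a missing subtree;
# an ordinal tree node is simply the list of its children.

def zaks(t):
    # Preorder over the extended tree: "(" at internal, ")" at external nodes.
    if t is None:
        return ")"
    left, right = t
    return "(" + zaks(left) + zaks(right)

def trans1(t):
    # Transformation T1: the children of the dummy root are the nodes
    # u_1, ..., u_k on the rightmost path of t, and the subtree of u_i is
    # trans1 of the left subtree l_i (with its dummy root renamed to u_i).
    children = []
    while t is not None:
        left, t = t
        children.append(trans1(left))
    return children

def bp(node):
    # Balanced-parentheses sequence of an ordinal tree given as child lists.
    return "(" + "".join(bp(c) for c in node) + ")"

leaf = (None, None)
assert "(" + zaks(leaf) == bp(trans1(leaf)) == "(())"        # base case
assert "(" + zaks((leaf, leaf)) == bp(trans1((leaf, leaf)))  # inductive step
```

Running `zaks` on the one-node tree gives `())`, matching the base case of the proof.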
There is a working memory, which is read-write, and the succinct Cartesian tree representation is written to a readable, but write-once, output memory. All memory is assumed to be randomly accessible. A straightforward way to construct a succinct representation of a Cartesian tree is to construct the standard pointer-based representation of the Cartesian tree from the given array in linear time~\cite{gbt-stoc84}, and then construct the succinct representation using the pointer-based representation. The drawback of this approach is that the space used during the construction is $O(n \log n)$ bits, although the final structure uses only $O(n)$ bits. Fischer and Heun~\cite{fh-sjc11} show that the construction space can be reduced to $n + o(n)$ bits. In this section, we show how to improve the construction space to $o(n)$ bits. In fact, we first show that a parenthesis sequence corresponding to the Cartesian tree can be output in linear time using only $O(\sqrt{n} \log n)$ bits of working space; subsequently, using methods from \cite{grrr-tcs06}, the auxiliary data structures can also be constructed in $o(n)$ working space. \begin{theorem} Given an array $A$ of $n$ values, let $T_c$ be the Cartesian tree of $A$. We can output BP(\trans{4}($T_c$)) in $O(n)$ time using $O(\sqrt{n} \log n)$ bits of working space. \end{theorem} \begin{proof} The algorithm reads $A$ from left to right and outputs a parenthesis sequence as follows: when processing $A[i]$, for $i = 1, \ldots, n$, we compare $A[i]$ with all the suffix minima of $A[1..i-1]$---if $A[i]$ is smaller than $j \ge 0$ of the suffix minima, then we output the string $(\, )^j$. Finally we add the string $(\,)^{j+1}$ to the end, where $j$ is the number of suffix minima of $A[1..n]$. Given any ordinal tree $T$, define the \emph{post-order degree sequence} parenthesis string obtained from $T$, denoted by PODS($T$), as follows. Traverse $T$ in post-order, and at each node that has $d$ children, output the string $(\, )^d$.
At the end, output an additional $)$. Let $T_c$ be the Cartesian tree of $A$ and for $i = 1, \ldots, 4$, let \transTree{i} = \trans{i}($T_c$) as before. Observe that in \transTree{1}, for $i=1, \ldots, n$, the children of the $i$-th node in post-order, which is the $i$-th node of $T_c$ in in-order and hence corresponds to $A[i]$, are precisely those suffix minima that $A[i]$ was smaller than when $A[i]$ was processed. Furthermore, the children of the (dummy) root of \transTree{1} are the suffix minima of $A[1..n]$. Thus, the string output by the above processing of $A$ is PODS(\transTree{1}). We now show that PODS(\transTree{1}) = BP(\transTree{4}). The proof is along the lines of Theorem~\ref{thm:equivalence}. If $L$ and $R$ denote the left and right subtrees of the root of $T_c$, assume inductively that BP(\trans{4}($L$)) = PODS(\trans{1}($L$)) = $(Y)$ and BP(\trans{4}($R$)) = PODS(\trans{1}($R$)) = $(Z)$. Using the reasoning in Theorem~\ref{thm:equivalence} (summarized in Figure~\ref{fig:join}) we see immediately that BP(\transTree{4}) = $(Y(Z))$, and by a reasoning similar to the one used for \text{DFUDS}\ in Theorem~\ref{thm:equivalence}, it is easy to see that PODS(\transTree{1}) = $(Y(Z))$ as well, as required. While the straightforward approach would be to maintain a linked list of the locations of the current suffix minima, this list could contain $\Theta(n)$ locations and could take $\Theta(n \log n)$ bits. Our approach is to use the output string itself to encode the positions of the suffix minima. One can observe that if the output string is created by the above process, it will be of the form $( b_1 ( \cdots ( b_k $ where each $b_i$ is a (possibly empty) maximal balanced parenthesis string---the remaining parentheses are called \emph{unmatched}.
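For concreteness, the one-pass scan in the proof can be sketched as follows. This illustrative version keeps the current suffix minima on an explicit stack ($O(n)$ words) rather than encoding them inside the output string, and it assumes distinct array values:

```python
def cartesian_bp(A):
    # One pass over A, outputting PODS(trans1(T_c)) = BP(trans4(T_c)) for
    # the (min-)Cartesian tree T_c of A.  Illustrative only: the suffix
    # minima are kept in a plain Python list instead of being recovered
    # from the unmatched parentheses of the output string.
    out = []
    minima = []                      # current suffix minima, left to right
    for x in A:
        j = 0
        while minima and minima[-1] > x:
            minima.pop()             # x is smaller than this suffix minimum
            j += 1
        minima.append(x)
        out.append("(" + ")" * j)
    # Final string ( )^{j+1}, where j is the number of suffix minima of A.
    out.append("(" + ")" * (len(minima) + 1))
    return "".join(out)

# Example: A = [2, 1, 3] has its Cartesian tree rooted at the value 1.
assert cartesian_bp([2, 1, 3]) == "(()(()))"
```

The output always has length $2n+2$, matching the BP sequence of the $(n+1)$-node ordinal tree \transTree{4}.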
It is not hard to see that the unmatched parentheses encode the positions of the suffix minima, in the sense that if the unmatched parentheses (except the opening one) are the $i_1, i_2, \ldots, i_k$-th opening parentheses in the current output sequence, then the positions $i_1 - 1, \ldots, i_k - 1$ are precisely the suffix minima positions. Our task is therefore to sequentially access the next unmatched parenthesis, starting from the end, when adding the new element $A[i+1]$. We conceptually break the string into blocks of size $\lfloor \sqrt{n} \rfloor$. For each block that contains at least one unmatched parenthesis, store the following information: \begin{itemize} \item its block number (in the original parenthesis string) and the total number of open parentheses in the current output string before the start of the block. \item the position $p$ of the rightmost unmatched parenthesis in the block, and the number of open parentheses before it in the block. \item a pointer to the next block with at least one unmatched parenthesis. \end{itemize} This takes $O(\log n)$ bits per block, which is $O(\sqrt{n} \log n)$ bits. \begin{itemize} \item For the rightmost block (in which we add the new parentheses), keep the positions of all the unmatched parentheses: the space for this is also $O(\sqrt{n} \log n)$ bits. \end{itemize} When we process the next element of $A$, we compare it with the suffix minima encoded by the unmatched parentheses in the rightmost block, which takes $O(1)$ time per unmatched parenthesis that we compared the new element with, as in the algorithm of \cite{gbt-stoc84}. Updating the last block is also trivial. Suppose we have compared $A[i]$ and found it smaller than all suffix minima in the rightmost block. Then, using the linked list, we find the rightmost unmatched parenthesis (say at position $p$) in the next block in the list, which takes $O(1)$ time, and compare with it (this is also $O(1)$ time).
If $A[i]$ is smaller, then sequentially scan this block leftwards starting at position $p$, skipping over maximal balanced parenthesis sequences, to find the next unmatched parenthesis in that block. The time for this sequential scan is $O(n)$ overall, since we never sequentially scan the same parenthesis twice. Updating the blocks is straightforward. Thus, the creation of the output string can be done in linear time using $O(\sqrt{n} \log n)$ bits of working memory. \end{proof} \input{arxiv-paper.bbl} \end{document}
\begin{document} \begin{center} \begin{large} Logarithmic intertwining operators \\ and\\ the space of conformal blocks over the projective line \end{large} \end{center} \vskip5ex \begin{center} Yusuke Arike \vskip1ex Department of Mathematics, Graduate School of Science\\ Osaka University\\ [email protected] \end{center} \vskip2ex \abstract{ We show that the space of logarithmic intertwining operators among logarithmic modules for a vertex operator algebra is isomorphic to the space of $3$-point conformal blocks over the projective line. This can be considered a generalization of Zhu's result for ordinary intertwining operators among ordinary modules.} \vskip2ex \section{Introduction} One of the most important problems in the representation theory of vertex operator algebras is to determine fusion rules, which are the dimensions of the spaces of intertwining operators among three modules for vertex operator algebras. Intertwining operators of the type $\fusion{M^1}{M^2}{M^3}$ are linear maps $I(-,z):M^1\to\Hom_\mathbb{C}(M^2,M^3)[[z,z^{-1}]]$ satisfying several axioms (see \cite{FHL}), where $M^i\,(i=1,2,3)$ are modules for a vertex operator algebra. The definition of intertwining operators given in \cite{FHL} treats modules on which $L_0$ acts as a semisimple operator. However, in general, we have to consider modules which do not decompose into $L_0$-eigenspaces but do decompose into generalized $L_0$-eigenspaces. Such modules are called {\it logarithmic modules} in \cite{M1}. A notion of {\it logarithmic intertwining operators} among logarithmic modules is introduced in \cite{M1}. Logarithmic intertwining operators may involve logarithmic terms. It is shown in \cite{M1} that a logarithmic intertwining operator among ordinary modules is nothing but an intertwining operator in the usual sense. Several examples of logarithmic modules have been found, and logarithmic intertwining operators among these modules have been constructed (see e.g. \cite{M1}, \cite{M2}, \cite{AM}).
On the other hand, an important feature of conformal field theory is the notion of conformal blocks associated with vertex operator algebras. A mathematically rigorous formulation of $N$-point conformal blocks on Riemann surfaces associated with vertex operator algebras is given in \cite{Z1} under the assumption that the corresponding vertex operator algebra is quasi-primary generated. It is shown in \cite{Z1} that the space of $3$-point conformal blocks over the projective line $\mathbb{C}P^1$ is isomorphic to the space of intertwining operators among ordinary modules for a vertex operator algebra. In this paper we give a generalization of Zhu's result to the case where the modules are logarithmic. More precisely, we prove that the space of $3$-point conformal blocks over the projective line is isomorphic to the space of logarithmic intertwining operators, without the assumption that the vertex operator algebra is quasi-primary generated. By adopting the formulation of the space of coinvariants in \cite{NT}, we do not have to assume that the vertex operator algebra is quasi-primary generated. The study of logarithmic intertwining operators is important because one would hope to determine their dimensions from the $S$-matrix obtained from formal characters; in fact, if a vertex operator algebra is rational and satisfies several additional conditions, the dimensions of the spaces of intertwining operators are completely determined by the $S$-matrix. The logarithmic case is left for further study. The paper is organized as follows. In section 2 we recall the definition of vertex operator algebras and their modules, including the definition of logarithmic modules. We also describe the space of conformal blocks over $\mathbb{C}P^1$ according to \cite{NT}. In section 3 we recall the definition and properties of logarithmic intertwining operators, which are given in \cite{M1} and \cite{HLZ}. We state the main theorem of this paper and give a proof in section 4.
The linear maps between the space of logarithmic intertwining operators and the space of $3$-point conformal blocks are defined, and it is proved that these maps are well-defined and mutually inverse. \section{Vertex operator algebras and the space of conformal blocks over the projective line} Throughout this paper we use the notation $\mathbb{N}=\{0,1,2,\dots\}$. \subsection{Vertex operator algebras and current Lie algebras} A {\it vertex operator algebra} is an $\mathbb{N}$-graded vector space $V=\bigoplus_{k=0}^\infty V_k$ with $\dim V_k<\infty \,(k\in\mathbb{N})$ equipped with a linear map \begin{equation} Y(-,z):V\to\End(V)[[z,z^{-1}]],\,Y(a,z)=\sum_{n\in\mathbb{Z}}a_{(n)}z^{-n-1} \end{equation} and with distinguished vectors $\mathbf{1}\in V_0$ called the {\it vacuum vector} and $\omega\in V_2$ called the {\it Virasoro vector} satisfying the following axioms (see e.g. \cite{FHL}, \cite{MN}): \vskip 1ex \noindent (1) For any pair of vectors $a, b\in V$ there exists a nonnegative integer $N$ such that $a_{(n)}b=0$ for all integers $n\ge N$.\\ \noindent (2) For any vectors $a, b, c\in V$ and integers $p, q, r\in\mathbb{Z}$, \begin{equation} \begin{split} &\sum_{i=0}^\infty\binom{p}{i}(a_{(r+i)}b)_{(p+q-i)}c\\ &\qquad\qquad=\sum_{i=0}^\infty(-1)^i\binom{r}{i}(a_{(p+r-i)}b_{(q+i)}c-(-1)^rb_{(q+r-i)}a_{(p+i)}c) \end{split} \end{equation} holds.\\ \noindent (3) $Y(\mathbf{1},z)=\id_V$.\\ (4) $Y(a,z)\mathbf{1}\in V[[z]]$ and $a_{(-1)}\mathbf{1}=a$.\\ \noindent (5) Set $L_n=\omega_{(n+1)}$. Then $\{L_n\,|\,n\in\mathbb{Z}\}$ together with the identity map on $V$ give a representation of the Virasoro algebra on $V$ with central charge $c_V\in\mathbb{C}$.\\ \noindent (6) $L_0a=ka$ for any $a\in V_k$ and any nonnegative integer $k$. \vskip 1ex \noindent (7) $\dfrac{d}{dz}Y(a,z)=Y(L_{-1}a,z)$ for any $a\in V$. \noindent (8) Write $|a|=k$ for $a\in V_k$; then \begin{equation} |a_{(n)}b|=|a|+|b|-1-n \end{equation} for any homogeneous $a, b\in V$ and $n\in\mathbb{Z}$.
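As an illustration of how the identity in axiom (2) is used, setting $r=0$ kills every term with $i>0$ on the right-hand side (since $\binom{0}{i}=0$ for $i>0$) and yields the usual commutator formula:

```latex
\sum_{i=0}^\infty\binom{p}{i}(a_{(i)}b)_{(p+q-i)}c
  \;=\; a_{(p)}b_{(q)}c - b_{(q)}a_{(p)}c
  \;=\; [a_{(p)}, b_{(q)}]c .
```

Note that the sum on the left-hand side is finite by axiom (1).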
In order to define the space of conformal blocks we introduce the spaces $V^{(1)}=\bigoplus_{k=0}^\infty V_k\otimes \mathbb{C}((\xi))(d\xi)^{1-k}$ and $V^{(0)}=\bigoplus_{k=0}^\infty V_k\otimes\mathbb{C}((\xi))(d\xi)^{-k}$. Let $\nabla:V^{(0)}\to V^{(1)}$ be the linear map defined by \begin{equation} v\otimes f(\xi)(d\xi)^{-k}\mapsto L_{-1}v\otimes f(\xi)(d\xi)^{-k}+v\otimes\frac{df(\xi)}{d\xi}(d\xi)^{1-k}\qquad(v\in V_k). \end{equation} We set $\mathfrak{g}=V^{(1)}/\nabla V^{(0)}$ and denote the image of $a\otimes f(\xi)(d\xi)^{1-k}\in V_k\otimes\mathbb{C}((\xi))(d\xi)^{1-k}$ by $J(a,f)$. Then we have: \begin{proposition}[\cite{NT}, Proposition 2.1.1]\label{prop-current} The vector space $\mathfrak{g}$ is a Lie algebra with the bracket \begin{equation} [J(a,f), J(b,g)]=\sum_{m=0}^{|a|+|b|-1}\frac{1}{m!}J\bigl(a_{(m)}b, \frac{d^mf}{d\xi^m}g\bigr) \end{equation} for homogeneous $a, b\in V$. \end{proposition} The Lie algebra $\mathfrak{g}$ is called the {\it current Lie algebra}. Let us denote $J_n(a)=J(a,\xi^{n+|a|-1})$. Applying the construction of the current Lie algebra to the vector space $\bigoplus_{k=0}^\infty V_k\otimes\mathbb{C}[\xi,\xi^{-1}](d\xi)^{1-k}$, we have a graded Lie algebra $\bar{\mathfrak{g}}=\bigoplus_{n\in\mathbb{Z}}\bar{\mathfrak{g}}_n$ where the vector space $\bar{\mathfrak{g}}_n$ is linearly spanned by $J_n(a)\,(a\in V)$. The Lie algebra $\bar{\mathfrak{g}}$ is a Lie subalgebra of $\mathfrak{g}$. The following proposition plays an important role when we define the duality functor on the category of $V$-modules. \begin{proposition}[\cite{NT}, Proposition 4.1.1]\label{prop-dual} The linear map $\theta:\bar{\mathfrak{g}}\to\bar{\mathfrak{g}}$ defined by \begin{equation}\label{eq-involution} \theta(J_n(a))=(-1)^{|a|}\sum_{j=0}^\infty\frac{1}{j!}J_{-n}(L_1^ja) \end{equation} for $a\in V$ and $n\in\mathbb{Z}$ is an anti-Lie algebra involution.
\end{proposition} \subsection{Modules for vertex operator algebras} Let $M$ be a weak $V$-module (see \cite{DLM} for the definition). A weak $V$-module $M$ is called {\it $\mathbb{N}$-gradable} if it admits a decomposition $M=\bigoplus_{n\in\mathbb{N}}M_{(n)}$ such that \begin{equation} a_{(n)}M_{(k)}\subseteq M_{(|a|+k-1-n)} \end{equation} for homogeneous $a\in V$ and $n\in\mathbb{Z}$. A weak $V$-module $M$ is called a {\it logarithmic module} if it decomposes into a direct sum of generalized $L_0$-eigenspaces. Let $M=\bigoplus_{n\in\mathbb{Z}_{\ge0}}M_{(h+n)}$ be a logarithmic module with a complex number $h$ and \begin{equation} M_{(h+n)}=\{x\in M\,|\,(L_0-h-n)^{k+1}x=0 \text{ for some nonnegative integer }k \}. \end{equation} Obviously $M$ is an $\mathbb{N}$-gradable $V$-module. In this paper a $V$-module $M$ is always a logarithmic $V$-module satisfying the following conditions:\\ \noindent i) There exist a complex number $h$ and a nonnegative integer $k$ such that $M=\bigoplus_{n=0}^\infty M_{(h+n)}$ with $M_{(h+n)}=\{u\in M\,|\,(L_0-h-n)^{k+1}u=0\}$ for all $n\in\mathbb{N}$.\\ \noindent ii) $\dim M_{(h+n)}<\infty$ for all nonnegative integers $n$. We denote $|u|=h+n$ for $u\in M_{(h+n)}$ for short. We remark that any $V$-module in this paper is a $V$-module in the sense of \cite{NT} and \cite{MNT}. Let $k$ be a nonnegative integer and let $\mathscr{C}_k$ be the category consisting of $V$-modules whose homogeneous subspaces are annihilated by $(L_0-h-n)^{k+1}$. Then it follows that $\mathscr{C}_0\subseteq \mathscr{C}_1\subseteq\dotsb \subseteq\mathscr{C}_k\subseteq\dotsb$. Any $V$-module $M$ is a $\bar{\mathfrak{g}}$-module by the action \begin{equation}\label{eq-action} J_n(a)u=a_{(|a|-1+n)}u \end{equation} for any homogeneous $a\in V$ and $u\in M$ (cf. \cite{DLM}, \cite{NT}). For any $a\in V$ and $u\in M$, there exists a nonnegative integer $n_0$ such that $a_{(n)}u=0$ for all $n\ge n_0$.
Therefore, the $V$-module $M$ is also a $\mathfrak{g}$-module by the action \eqref{eq-action}. Let us denote the restricted dual of a $V$-module $M=\bigoplus_{n=0}^\infty M_{(h+n)}$ by $D(M)=\bigoplus_{n=0}^\infty M_{(h+n)}^\ast$ where $M_{(h+n)}^\ast=\Hom_\mathbb{C}(M_{(h+n)},\mathbb{C})$. A $\bar{\mathfrak{g}}$-module structure on $D(M)$ can be defined by letting \begin{equation} \langle J_n(a)\varphi, u\rangle=\langle \varphi, \theta(J_n(a))u\rangle \end{equation} for all $\varphi\in D(M)$ and $u\in M$. The following proposition is known. \begin{proposition}[\cite{NT}, Proposition 4.2.1, cf. \cite{FHL}, Theorem 5.2.1] There exists a unique $V$-module structure on $D(M)$ which extends the $\bar{\mathfrak{g}}$-module structure. \end{proposition} Since $\langle L_n\varphi,u\rangle=\langle \varphi, L_{-n}u\rangle$ for all $\varphi\in D(M)$ and $u\in M$, we see that $D(M)=\oplus_{n=0}^{\infty}D(M)_{(h+n)}$ and $D(M) \in\mathscr{C}_k$ for any $M\in\mathscr{C}_k$. \subsection{The space of conformal blocks over the projective line} Let $\mathbb{C}P^1=\mathbb{C}\cup\{\infty\}$ be the projective line and $z$ its inhomogeneous coordinate. Let $A=\{1,2,\dotsc,N,\infty\}$ and let us fix a set $p_A=(p_a)_{a\in A}$ of $N+1$ distinct points $p_a\in\mathbb{C}P^1\,(a\in A)$ with $p_\infty=\infty$. We write $\xi_a=z-p_a\,(a\not=\infty)$ and $\xi_\infty=z$, respectively. We denote by $H^0(\mathbb{C}P^1,\Omega^{1-k}(\ast p_A))$ the vector space of global meromorphic $(1-k)$-differentials whose poles are located only at $p_a\,(a\in A)$. Set $H(V,\ast p_A)^{(1)}=\bigoplus_{k=0}^\infty V_k\otimes H^0(\mathbb{C}P^1,\Omega^{1-k}(\ast p_A))$ and $H(V,\ast p_A)^{(0)}=\bigoplus_{k=0}^\infty V_k\otimes H^0(\mathbb{C}P^1,\Omega^{-k}(\ast p_A))$. Define the linear map $\nabla:H(V,\ast p_A)^{(0)}\to H(V,\ast p_A)^{(1)}$ by \begin{equation} a\otimes f(z)(dz)^{-k}\mapsto L_{-1}a\otimes f(z)(dz)^{-k}+a\otimes \frac{df(z)}{dz}(dz)^{1-k}\qquad(a\in V_k).
\end{equation} We set \begin{equation} \mathfrak{g}(\mathbb{C}P^1,\ast p_A)=H(V,\ast p_A)^{(1)}/\nabla H(V,\ast p_A)^{(0)}. \end{equation} It is shown (cf. \cite[Proposition 5.1.1]{NT}) that the vector space $\mathfrak{g}(\mathbb{C}P^1,\ast p_A)$ is a Lie algebra with the bracket \begin{multline} \bigl[a\otimes f(z)(dz)^{1-|a|}, b\otimes g(z)(dz)^{1-|b|}\bigr]\\ =\sum_{m=0}^\infty\frac{1}{m!}a_{(m)}b\otimes \frac{d^mf(z)}{dz^m}g(z)(dz)^{2-|a|-|b|+m}. \end{multline} For each $a\in A$ we define the linear map \begin{equation} i_a:H^0(\mathbb{C}P^1,\Omega^k(\ast p_A))\to \begin{cases} \mathbb{C}((\xi_a))(d\xi_a)^k, & a\in A\backslash\{\infty\}\\ &\\ \mathbb{C}((\xi_\infty^{-1}))(d\xi_\infty)^k, & a=\infty \end{cases} \end{equation} by taking the Laurent expansion at $z=p_a$ in terms of the coordinate $\xi_a$. We denote $i_a f(z)(dz)^k$ by $f_a(\xi_a)(d\xi_a)^k$. For any $a\in A\backslash\{\infty\}$, we define the linear map $j_a:\mathfrak{g}(\mathbb{C}P^1,\ast p_A)\to\mathfrak{g}$ by $j_a(a\otimes f(z)(dz)^{1-k})=a\otimes f_a(\xi_a)(d\xi_a)^{1-k}$ and the linear map $j_\infty:\mathfrak{g}(\mathbb{C}P^1,\ast p_A)\to\mathfrak{g}$ by $j_\infty(a\otimes f(z)(dz)^{1-k})=-\theta(a\otimes f_\infty(\xi_\infty)(d\xi_\infty)^{1-k})$. Then the linear map $j_\infty$ is well-defined since \begin{equation} j_\infty(a\otimes f(z)(dz)^{1-k})=-\sum_{n\le n_0}f_n\theta(J_n(a)),\; \theta(J_n(a))=(-1)^kJ_{-n}(e^{L_1}a), \end{equation} where $f_\infty(\xi_\infty)=\sum_{n\le n_0}f_n \xi_\infty^{n+k-1}$ (see \cite{NT}). The following proposition is fundamental. \begin{proposition}[\cite{NT}, Proposition 5.1.3]\label{prop-Lie} For any $a\in A$, the linear map $j_a:\mathfrak{g}(\mathbb{C}P^1,\ast p_A)\to\mathfrak{g}$ is a Lie algebra homomorphism. \end{proposition} Let $M^a\,(a\in A)$ be $V$-modules. We set $M_A=\bigotimes_{a\in A}M^a$ and $\mathfrak{g}_A=\mathfrak{g}^{\oplus|A|}$. Let $\rho_a:\mathfrak{g}\to\End(M^a)$ be the representation defined by \eqref{eq-action} for $a\in A$.
Then the linear map $\rho_A:\mathfrak{g}_A\to\End(M_A)$ defined by $\rho_A=\oplus_{a\in A}\rho_a$ is a representation of the Lie algebra $\mathfrak{g}_A$ on $M_A$. We denote the image of the Lie algebra homomorphism $j_A=\sum_{a\in A}j_a$ by $\mathfrak{g}_{p_A}^{out}$, which acts on $M_A$ via $\rho_A$. The following definition is given in \cite{NT}. \begin{definition} The vector space $C^*(M_A, p_A)=\Hom_\mathbb{C}(M_A/\mathfrak{g}_{p_A}^{out}M_A,\mathbb{C})$ is called the space of conformal blocks at $p_A$. \end{definition} \section{Logarithmic intertwining operators} In this section, we recall the notion of logarithmic intertwining operators and their properties according to \cite{M1}. \subsection{Definition} \begin{definition}[\cite{M1}] Let $M^1,\,M^2$ and $M^3$ be weak $V$-modules. A {\it logarithmic intertwining operator of the type $\fusion{M^1}{M^2}{M^3}$} is a linear map \begin{align} &I(-,z):M^1\to\Hom_\mathbb{C}(M^2,M^3)\{z\}[\log z]\\ &I(u,z)=\sum_{n=0}^d\sum_{\alpha\in\mathbb{C}}u_{(\alpha)}^nz^{-\alpha-1}(\log z)^n \end{align} with the following properties:\\ \noindent i)\,(Truncation condition) For any $u_1\in M^1$, $u_2\in M^2$ and $0\le k\le d$, \begin{equation} (u_1)_{(\alpha)}^ku_2=0 \end{equation} for sufficiently large $\mathrm{Re}(\alpha)$.\\ \noindent ii)\,($L_{-1}$-derivative property) For any $u_1\in M^1$, \begin{equation} I(L_{-1}u_1,z)=\frac{d}{dz}I(u_1,z). \end{equation} \noindent iii)\, For all $a\in V$, $u_1\in M^1$, $\alpha\in\mathbb{C}$, $0\le n\le d$ and $p,q\in\mathbb{Z}$, the following identity of operators on $M^2$ holds: \begin{equation}\label{eq-borcherds} \begin{split} &\sum_{i=0}^\infty\binom{p}{i} (a_{(q+i)}u_1)_{(\alpha+p-i)}^n\\ &\qquad\qquad=\sum_{i=0}^\infty(-1)^i\binom{q}{i}(a_{(p+q-i)}(u_1)^n_{(\alpha+i)}-(-1)^q(u_1)_{(\alpha+q-i)}^na_{(p+i)}).
\end{split} \end{equation} We denote the space of logarithmic intertwining operators of the type $\fusion{M^1}{M^2}{M^3}$ by $I\fusion{M^1}{M^2}{M^3}$, that is, we use the same notation as for usual intertwining operators. \end{definition} Setting $q=0$ and $p=0$ in \eqref{eq-borcherds}, respectively, we have \begin{align} &[a_{(p)},(u_1)_{(\alpha)}^n]=\sum_{i=0}^\infty \binom{p}{i}(a_{(i)}u_1)_{(\alpha+p-i)}^n,\label{eq-com}\\ &(a_{(q)}u_1)_{(\alpha)}^n=\sum_{i=0}^\infty(-1)^i\binom{q}{i}\label{eq-ass} \{a_{(q-i)}(u_1)_{(\alpha+i)}^n-(-1)^q(u_1)_{(\alpha+q-i)}^na_{(i)}\} \end{align} and we call these, by abuse of terminology, the {\it commutator formula} and the {\it associativity formula}, respectively. By the commutator formula, we have \begin{equation} [L_{-1}, u_{(\alpha)}^n]=(L_{-1}u)_{(\alpha)}^n\label{eq-derivation} \end{equation} for any $u\in M^1$ and $0\le n\le d$. By the associativity formula, \eqref{eq-derivation} and the $L_{-1}$-derivative property, we have \begin{equation} (L_0u)_{(\alpha)}^n= \begin{cases} [L_0, (u)_{(\alpha)}^n]+(\alpha+1)(u)_{(\alpha)}^n-(n+1) (u)_{(\alpha)}^{n+1} & 0\le n\le d-1,\\ &\\ [L_0, (u)_{(\alpha)}^n]+(\alpha+1)(u)_{(\alpha)}^n & n=d\\ \end{cases} \label{eq-fund} \end{equation} for any $u\in M^1$. \subsection{Properties of logarithmic intertwining operators} Let $M^i=\bigoplus_{n=0}^\infty M^i_{(h_i+n)}\,(i=1,2,3)$ be objects in $\mathscr{C}_{k_i}$ for nonnegative integers $k_i \,(i=1,2,3)$ and complex numbers $h_i\,(i=1,2,3)$. Suppose that a logarithmic intertwining operator $I(-,z)$ of the type $\fusion{M^1}{M^2}{M^3}$ is of the form \begin{equation} I(u_1,z)=\sum_{n=0}^d\sum_{\alpha\in\mathbb{C}}(u_1)_{(\alpha)}^nz^{-\alpha-1}(\log z)^n. \end{equation} For any homogeneous element $u_i\in M^i\,(i=1,2)$, we introduce the notations \begin{align} x_1(u_1)_{(\alpha)}^nu_2&=((L_0-|u_1|)u_1)_{(\alpha)}^nu_2,\\ x_2(u_1)_{(\alpha)}^nu_2&=(u_1)_{(\alpha)}^n(L_0-|u_2|)u_2,\\ x_3(u_1)_{(\alpha)}^nu_2&=(L_0+\alpha+1-|u_1|-|u_2|)(u_1)_{(\alpha)}^nu_2.
\end{align} Note that the operations $x_1$ and $x_2$ are mutually commutative (see \cite{M1}). By using these operations we get: \begin{lemma}[\cite{HLZ}, Lemma 3.8]\label{lemma-fund} Let \begin{equation} I(-,z)=\sum_{n=0}^d\sum_{\alpha\in\mathbb{C}}(-)_{(\alpha)}^nz^{-\alpha-1}(\log z)^n \end{equation} be a logarithmic intertwining operator of the type $\fusion{M^1}{M^2}{M^3}$ and let $p, q$ be integers such that $p\ge0$ and $0\le q\le d$. Then \[ x_3^p(u_1)_{(\alpha)}^qu_2=\sum_{\ell=0}^N\binom{p}{\ell}\frac{(q+\ell)!}{q!} (x_1+x_2)^{p-\ell}(u_1)_{(\alpha)}^{q+\ell}u_2 \] for homogeneous $u_1\in M^1$ and $u_2\in M^2$, where $N=\min\{p,\,d-q\}$. \end{lemma} The following proposition is proved in \cite{M1} by using differential equations and in \cite[Proposition 3.9]{HLZ} by using Lemma \ref{lemma-fund}. \begin{proposition}[\cite{M1}, Proposition 1.10]\label{prop-eigen} Suppose that $M^i\in\mathscr{C}_{k_i}\,(i=1,2,3)$ for nonnegative integers $k_i$ and that $M^i=\bigoplus_{n=0}^\infty M^i_{(h_i+n)}$ for complex numbers $h_i\,(i=1,2,3)$. Let $I(-,z)\in I\fusion{M^1}{M^2}{M^3}$ be a logarithmic intertwining operator such that \begin{equation} I(u_1,z)=\sum_{n=0}^d\sum_{\alpha\in\mathbb{C}}(u_1)_{(\alpha)}^n z^{-\alpha-1}(\log z)^n \qquad (u_1\in M^1). \end{equation} \noindent {\rm(1)}\,For any homogeneous $u_i\in M^i\,(i=1,2)$ we have $|(u_1)_{(\alpha)}^nu_2|=|u_1|+|u_2|-1-\alpha$ for all $0\le n\le d$.\\ \noindent {\rm(2)}\, For any $u_i\in M^i\,(i=1,2)$ we have \[ I(u_1,z)u_2\in\sum_{n=0}^{k_1+k_2+k_3}M^3((z))z^{h_3-h_1-h_2}(\log z)^n. \] \end{proposition} \section{The space of $3$-point conformal blocks and logarithmic intertwining operators} In this section we focus on $3$-point conformal blocks in conformal field theories over the projective line. We prove that the space of $3$-point conformal blocks over $\mathbb{C}P^1$ is isomorphic to the space of logarithmic intertwining operators.
An almost identical result is found in \cite{Z1}; however, our category of modules and the one in \cite{Z1} are slightly different. \subsection{Main theorem} Set $A=\{1,2,\infty\}$ and let $p_A=\{0,1,\infty\}$ be the set of points on $\mathbb{C}P^1$. Let $z$ be the inhomogeneous coordinate of $\mathbb{C}P^1$. Then $\xi_0=z$, $\xi_1=z-1$ and $\xi_\infty=z$ are local coordinates of $\mathbb{C}P^1$ at $0$, $1$ and $\infty$, respectively. Take $V$-modules $M^1,M^2$ and $M^3$. We assume that there exist complex numbers $h_i\in\mathbb{C} \,(i=1,2,3)$ such that $M^i=\bigoplus_{n=0}^\infty M^i_{(h_i+n)}$ and that $M^i\in\mathscr{C}_{k_i}\,(i=1,2,3)$ for nonnegative integers $k_i \,(i=1,2,3)$. Let us set $M_A=M^1\otimes M^2\otimes M^3$. We denote the space of conformal blocks at $p_A=\{0,1,\infty\}$ by $C^*(M_A,p_A)$. We can now state the main theorem of the paper, which is a generalization of Zhu's result \cite[Proposition 7.4]{Z1}. \begin{theorem}\label{thm-1} Let $M^i\,(i=1,2,3)$ be $V$-modules with $M^i=\bigoplus_{n=0}^\infty M^i_{(h_i+n)}$ and $M^i\in\mathscr{C}_{k_i}\,(i=1,2,3)$ for nonnegative integers $k_i \,(i=1,2,3)$. The space of conformal blocks $C^*(M_A,p_A)$ at $p_A=\{0,1,\infty\}$ is isomorphic to the space of logarithmic intertwining operators of the type $\fusion{M^2}{M^1}{D(M^3)}$. \end{theorem} Let $C_2(V)$ be the vector subspace of $V$ spanned by vectors of the form $a_{(2)}b\,(a,b\in V)$. If $\dim V/C_2(V)<\infty$, we say that $V$ satisfies {\it Zhu's finiteness condition}, which was introduced in \cite{Z2}. Combining \cite[Theorem 5.8.1]{NT} with the theorem, we get: \begin{corollary} If $V$ satisfies Zhu's finiteness condition then the space of logarithmic intertwining operators is finite-dimensional.
\end{corollary} \subsection{Proof of Theorem \ref{thm-1}} For any logarithmic intertwining operator $I(-,z)$ of the type $\fusion{M^2}{M^1}{D(M^3)}$, we define $F\in\Hom_\mathbb{C}(M_A,\mathbb{C})$ by \begin{equation}\label{eq-main-def-1} \left\langle F, u_1\otimes u_2\otimes u_3\right\rangle=\left\langle I(u_2,1)u_1,u_3\right\rangle \end{equation} for any $u_1\in M^1$, $u_2\in M^2$ and $u_3\in M^3$. For any $V$-module $M\in\mathscr{C}_{k}$ we define the operator $z^{L_0}:M\to M\{z\}[\log z]$ by \begin{equation} z^{L_0}u=\sum_{j=0}^{k}\frac{1}{j!}(L_0-|u|)^jz^{|u|}(\log z)^j. \end{equation} For any $x\in C^*(M_A,p_A)$, we define $I_x(-,z)\in\Hom_\mathbb{C}(M^1, D(M^3))\{z\}[\log z]$ by \begin{multline}\label{eq-main-def-2} \left\langle I_x(u_2,z)u_1,u_3\right\rangle\\ =\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle \text{ for all }u_i\in M^i\,(i=1,2,3). \end{multline} We prove the theorem in three steps. In Step 1 we prove that $F$ belongs to $C^*(M_A,p_A)$; in Step 2 we show that $I_x$ is a logarithmic intertwining operator among $V$-modules; in the final step we prove that the correspondence between $F$ and $I_x$ is one-to-one. \noindent {\bf (Step 1)} In order to prove that $F$ belongs to $C^*(M_A,p_A)$, by the definition of the space of conformal blocks, it is sufficient to prove that \begin{equation}\label{eq-main-0} \begin{split} &\left\langle F, j_0(a\otimes f(z)(dz)^{1-k})u_1\otimes u_2\otimes u_3\right\rangle\\ &\quad\qquad\qquad+\left\langle F, u_1\otimes j_1(a\otimes f(z)(dz)^{1-k})u_2\otimes u_3\right\rangle\\ &\qquad\qquad\qquad\qquad\qquad+\left\langle F, u_1\otimes u_2\otimes j_\infty(a\otimes f(z)(dz)^{1-k})u_3\right\rangle =0 \end{split} \end{equation} for all $a\in V_k$ and $f(z)(dz)^{1-k}\in H^0(\mathbb{C}P^1,\Omega^{1-k}(\ast p_A))$.
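Before expanding the sections $f(z)(dz)^{1-k}$ at each puncture, it may help to record the simplest logarithmic instance of the operator $z^{L_0}$ defined above: for $M\in\mathscr{C}_1$ and homogeneous $u\in M$ the defining sum has only two terms,
\begin{equation*}
z^{L_0}u=z^{|u|}\bigl(u+(L_0-|u|)u\,\log z\bigr),
\end{equation*}
so a single nilpotent step of $L_0$ produces exactly one power of $\log z$.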
It is well known that $\{z^p(z-1)^q(dz)^{1-k}\,|\,p,q\in\mathbb{Z}\}$ is a topological basis of $H^0(\mathbb{C}P^1,\Omega^{1-k}(\ast p_A))$. Therefore, it is enough to show \eqref{eq-main-0} for $f(z)=z^p(z-1)^q\,(p,q\in\mathbb{Z})$. First of all we have \begin{equation}\label{eq-main-1} \begin{split} j_0(a\otimes z^p(z-1)^q(dz)^{1-k})u_1&=\Bigl(\sum_{i=0}^\infty(-1)^{q-i}\binom{q}{i}a\otimes \xi_0^{p+i}(d\xi_0)^{1-k}\Bigr)u_1\\ &=\sum_{i=0}^\infty(-1)^{q-i}\binom{q}{i}J_{p+i-k+1}(a)u_1\\ &=\sum_{i=0}^\infty(-1)^{q-i}\binom{q}{i}a_{(p+i)}u_1 \end{split} \end{equation} and secondly \begin{equation}\label{eq-main-2} \begin{split} j_1(a\otimes z^p(z-1)^q(dz)^{1-k})u_2&=\Bigl(\sum_{i=0}^\infty\binom{p}{i}a\otimes \xi_1^{q+i}(d\xi_1)^{1-k}\Bigr)u_2\\ &=\sum_{i=0}^\infty\binom{p}{i}J_{q+i-k+1}(a)u_2\\ &=\sum_{i=0}^\infty\binom{p}{i}a_{(q+i)}u_2 \end{split} \end{equation} and finally \begin{equation}\label{eq-main-3} \begin{split} j_\infty(a\otimes z^p(z-1)^q(dz)^{1-k})u_3&=-\theta\Bigl(\sum_{i=0}^\infty(-1)^i\binom{q}{i}a\otimes \xi_\infty^{p+q-i}(d\xi_\infty)^{1-k}\Bigr)u_3\\ &=-\sum_{i=0}^\infty(-1)^i\binom{q}{i}\theta(J_{p+q-i-k+1}(a))u_3 \end{split} \end{equation} for all $a\in V_k$ and $p,q\in\mathbb{Z}$. By \eqref{eq-main-1}--\eqref{eq-main-3}, the definition of the functional $F$, Proposition \ref{prop-dual} and Proposition \ref{prop-eigen}, the left-hand side of \eqref{eq-main-0} is equal to \begin{equation} \begin{split} &\sum_{i=0}^\infty(-1)^{q-i}\binom{q}{i}\left\langle (u_2)^0_{(\alpha+q-i)}a_{(p+i)}u_1, u_3\right\rangle\\ &\quad\qquad\qquad+\sum_{i=0}^\infty\binom{p}{i}\left\langle(a_{(q+i)}u_2)_{(\alpha+p-i)}^0u_1,u_3\right\rangle\\ &\qquad\qquad\qquad\qquad\qquad-\sum_{i=0}^\infty(-1)^i\binom{q}{i}\left\langle a_{(p+q-i)}(u_2)_{(\alpha-i)}^0u_1, u_3 \right\rangle \end{split} \end{equation} where $\alpha=|u_1|+|u_2|-|u_3|+k-2-p-q$, which vanishes by \eqref{eq-borcherds}. Hence \eqref{eq-main-0} is proved.
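To illustrate \eqref{eq-main-1} in the simplest nontrivial case, take $q=1$: since $z^p(z-1)=\xi_0^{p+1}-\xi_0^p$ near $0$, the expansion reduces to
\begin{equation*}
j_0(a\otimes z^p(z-1)(dz)^{1-k})u_1=a_{(p+1)}u_1-a_{(p)}u_1,
\end{equation*}
so $j_0$ simply acts by the modes of $a$ attached to the Laurent expansion of $f$ at $0$, and likewise for $j_1$ and $j_\infty$ at the other two punctures.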
\noindent {\bf(Step 2)} We now prove that $I_x(-,z)\in I\fusion{M^2}{M^1}{D(M^3)}$. Since $M^i=\bigoplus_{n=0}^\infty M^i_{(h_i+n)}\in\mathscr{C}_{k_i}\,(i=1,2,3)$ we have \begin{equation}\label{eq-main-4} \begin{split} &z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3= \sum_{n_1=0}^{k_1}\sum_{n_2=0}^{k_2}\sum_{n_3=0}^{k_3} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!}\\ &\qquad\qquad\qquad\qquad\times(L_0-|u_1|)^{n_1}u_1\otimes(L_0-|u_2|)^{n_2}u_2\otimes (L_0-|u_3|)^{n_3}u_3\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times z^{|u_3|-|u_1|-|u_2|}(\log z)^{n_1+n_2+n_3} \end{split} \end{equation} for homogeneous $u_1\in M^1$, $u_2\in M^2$ and $u_3\in M^3$. Then the right-hand side of \eqref{eq-main-def-2} is an element in $\mathbb{C}[z,z^{-1}]z^{-h_1-h_2+h_3}[\log z]$. Therefore $\left\langle I_x(u_2,z)u_1,-\right\rangle=\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}-\right\rangle$ gives an element of the space \begin{equation} \Hom_\mathbb{C}(M^3,\mathbb{C})[[z,z^{-1}]]z^{-h_1-h_2+h_3}[\log z], \end{equation} which shows $I_x(u_2,z)\in \Hom_\mathbb{C}(M^1, D(M^3))[[z,z^{-1}]]z^{-h_1-h_2+h_3}[\log z]$. Therefore we can write \begin{equation} I_x(u_2,z)=\sum_{n=0}^d\sum_{\alpha\in\mathbb{Z}+h_1+h_2-h_3}(u_2)_{(\alpha)}^nz^{-\alpha-1}(\log z)^n. \end{equation} For fixed $u_1\in M^1_{(h_1+\ell_1)}$ and $u_2\in M^2_{(h_2+\ell_2)}$ with nonnegative integers $\ell_1$ and $\ell_2$, we have by \eqref{eq-main-4} \begin{equation}\label{eq-main-10} \langle I_x(u_2,z)u_1, u_3\rangle=\sum_{n=0}^{k_1+k_2+k_3}\sum_{\ell=0}^\infty c_\ell^n z^{h_3-h_1-h_2-\ell_1-\ell_2+\ell} (\log z)^n \end{equation} where the $c_\ell^n$ are complex numbers. Equation \eqref{eq-main-10} implies that $(u_2)_{(\alpha)}^nu_1=0$ for $\alpha>h_1+h_2-h_3+\ell_1+\ell_2-1$. Hence $I_x(-,z)$ satisfies the truncation condition.
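As a consistency check, note what happens in the non-logarithmic case: if $k_1=k_2=k_3=0$, then $z^{\pm L_0}u=z^{\pm|u|}u$ for homogeneous $u$, so \eqref{eq-main-def-2} reduces to
\begin{equation*}
\left\langle I_x(u_2,z)u_1,u_3\right\rangle
= z^{|u_3|-|u_1|-|u_2|}\left\langle x, u_1\otimes u_2\otimes u_3\right\rangle
\end{equation*}
with no $\log z$ terms, and $I_x(-,z)$ is an ordinary intertwining operator as in \cite{Z1}.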
In order to prove the $L_{-1}$-derivative property, we first note that \begin{equation}\label{eq-main-11} \begin{split} &\left\langle x, j_0(\omega\otimes z(dz)^{-1})z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle\\ &\quad\qquad+\left\langle x, z^{-L_0}u_1\otimes j_1(\omega\otimes z(dz)^{-1})z^{-L_0}u_2\otimes z^{L_0}u_3 \right\rangle\\ &\quad\qquad\qquad+\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes j_\infty(\omega\otimes z(dz)^{-1})z^{L_0}u_3 \right\rangle=0. \end{split} \end{equation} The left-hand side of \eqref{eq-main-11} turns out to be \begin{equation}\label{eq-deri} \begin{split} &\left\langle x,L_0z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle\\ &\quad\qquad\qquad\qquad+\left\langle x, z^{-L_0}u_1\otimes (L_0+L_{-1})z^{-L_0}u_2\otimes z^{L_0}u_3 \right\rangle\\ &\quad\qquad\qquad\qquad\qquad\qquad\qquad-\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes L_0z^{L_0}u_3 \right\rangle. \end{split} \end{equation} By \eqref{eq-main-11} this expression vanishes. We now simplify each term in \eqref{eq-deri}. Let us consider the second term of \eqref{eq-deri}. Since $[L_{-1},L_0]=-L_{-1}$, we have \begin{equation} \left\langle x, z^{-L_0}u_1\otimes L_{-1}z^{-L_0}u_2\otimes z^{L_0}u_3 \right\rangle =z\left\langle I_x(L_{-1}u_2,z)u_1, u_3\right\rangle, \end{equation} which shows \begin{equation}\label{eq-deri1} \begin{split} z\left\langle I_x(L_{-1}u_2,z)u_1,u_3\right\rangle =&-\left\langle x,L_0z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle\\ &-\left\langle x, z^{-L_0}u_1\otimes L_0z^{-L_0}u_2\otimes z^{L_0}u_3 \right\rangle\\ &+\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes L_0z^{L_0}u_3 \right\rangle.
\end{split} \end{equation} The first term of \eqref{eq-deri1} can be calculated to be \begin{equation}\label{eq-deri2} \begin{split} &\left\langle x,L_0z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle\\ &=\sum_{n_1=0}^{k_1-1}\sum_{n_2=0}^{k_2}\sum_{n_3=0}^{k_3} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!} z^{-|u_1|-|u_2|+|u_3|}(\log z)^{n_1+n_2+n_3}\\ &\quad\times\left\langle x,(L_0-|u_1|)^{n_1+1}u_1\otimes (L_0-|u_2|)^{n_2}u_2\otimes (L_0-|u_3|)^{n_3}u_3\right\rangle\\ &\qquad\qquad+|u_1|\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle. \end{split} \end{equation} Similarly, the second term of \eqref{eq-deri1} becomes \begin{equation}\label{eq-deri3} \begin{split} &\left\langle x,z^{-L_0}u_1\otimes L_0z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle\\ &=\sum_{n_1=0}^{k_1}\sum_{n_2=0}^{k_2-1}\sum_{n_3=0}^{k_3} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!}z^{-|u_1|-|u_2|+|u_3|}(\log z)^{n_1+n_2+n_3}\\ &\quad\times\left\langle x,(L_0-|u_1|)^{n_1}u_1\otimes (L_0-|u_2|)^{n_2+1}u_2\otimes (L_0-|u_3|)^{n_3}u_3\right\rangle\\ &\qquad\qquad+|u_2|\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle. \end{split} \end{equation} Finally, the third term is \begin{equation}\label{eq-deri4} \begin{split} &\left\langle x,z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes L_0z^{L_0}u_3\right\rangle\\ &=\sum_{n_1=0}^{k_1}\sum_{n_2=0}^{k_2}\sum_{n_3=0}^{k_3-1} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!}z^{-|u_1|-|u_2|+|u_3|}(\log z)^{n_1+n_2+n_3}\\ &\quad\times\langle x,(L_0-|u_1|)^{n_1}u_1\otimes (L_0-|u_2|)^{n_2}u_2\otimes (L_0-|u_3|)^{n_3+1}u_3\rangle\\ &\qquad\qquad-|u_3|\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle. \end{split} \end{equation} In all of the calculations given above we have used the fact that each $M^i$ is an object in $\mathscr{C}_{k_i}$. 
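The mechanism behind \eqref{eq-deri2}--\eqref{eq-deri4} is the same in all three cases: for homogeneous $u$ in a module $M\in\mathscr{C}_{k}$ one splits
\begin{equation*}
L_0z^{-L_0}u=(L_0-|u|)z^{-L_0}u+|u|\,z^{-L_0}u,
\end{equation*}
and the factor $(L_0-|u|)$ raises the power of $(L_0-|u|)$ in the defining sum of $z^{-L_0}u$ by one; since $(L_0-|u|)^{k+1}u=0$, the top term drops out, which is why the corresponding summation range shrinks to $k-1$.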
By using \eqref{eq-deri1}--\eqref{eq-deri4} we obtain \begin{align} &\frac{d}{dz}\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle\\ &\qquad =-z^{-1}\left\langle x,L_0z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle\notag\\ &\qquad\qquad -z^{-1}\left\langle x, z^{-L_0}u_1\otimes L_0z^{-L_0}u_2\otimes z^{L_0}u_3\right\rangle\notag\\ &\qquad\qquad\qquad+z^{-1}\left\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes L_0z^{L_0}u_3\right\rangle,\notag \end{align} which, combined with \eqref{eq-deri1} and the definition \eqref{eq-main-def-2}, shows \begin{equation} \left\langle I_x(L_{-1}u_2,z)u_1,u_3\right\rangle =\frac{d}{dz}\left\langle I_x(u_2,z)u_1,u_3\right\rangle. \end{equation} Hence we have proved the $L_{-1}$-derivative property \begin{equation} I_x(L_{-1}u_2,z)u_1=\dfrac{d}{dz}I_x(u_2,z)u_1. \end{equation} Finally we will show \eqref{eq-borcherds}. Since $x$ is a conformal block, for any $p,\,q\in\mathbb{Z}$ and $a\in V_k$, we have \begin{equation} \begin{split} &\langle x, j_0(a\otimes z^p(z-1)^q(dz)^{1-k})z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\rangle\\ &\quad+\langle x, z^{-L_0}u_1\otimes j_1(a\otimes z^p(z-1)^q(dz)^{1-k})z^{-L_0}u_2\otimes z^{L_0}u_3\rangle\\ &\qquad+\langle x, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes j_\infty(a\otimes z^p(z-1)^q(dz)^{1-k})z^{L_0}u_3\rangle=0. \end{split} \end{equation} By \eqref{eq-main-1}--\eqref{eq-main-3}, \eqref{eq-involution} and $[L_0, a_{(n)}]=(k-n-1)a_{(n)}$ we have \begin{equation}\label{eq-borcherds-prove} \begin{split} &\sum_{i=0}^\infty(-1)^{q+i}\binom{q}{i}z^{-p-i+k-1}\langle I_x(u_2,z)a_{(p+i)}u_1, u_3\rangle\\ &\quad+\sum_{i=0}^\infty\binom{p}{i}z^{-q-i+k-1}\langle I_x(a_{(q+i)}u_2,z)u_1, u_3\rangle\\ &\qquad-\sum_{i=0}^\infty(-1)^i\binom{q}{i}z^{-p-q+i+k-1}\langle a_{(p+q-i)}I_x(u_2, z)u_1, u_3\rangle =0. \end{split} \end{equation} Recall that $I_x(u_2,z)u_1=\sum_{n=0}^d\sum_{\alpha\in\mathbb{C}}(u_2)_{(\alpha)}^nu_1z^{-\alpha-1}(\log z)^n$.
Then taking the coefficient of $z^{-\alpha-p-q+k-2}(\log z)^n$ in \eqref{eq-borcherds-prove} gives \eqref{eq-borcherds}. \noindent {\bf (Step 3)} We will show that the two correspondences are mutually inverse: $F=x$ for any $x\in C^*(M_A,p_A)$, where $F$ is constructed from $I_x$, and $I_{F}(-,z)=I(-,z)$ for any $I(-,z)\in I\fusion{M^2}{M^1}{D(M^3)}$, where $F$ is constructed from $I(-,z)$. Suppose that $I_x(u_2,z)=\sum_{n=0}^d\sum_{\alpha\in\mathbb{C}}(u_2)_{(\alpha)}^nz^{-\alpha-1}(\log z)^n$. By \eqref{eq-main-4}, we have \begin{equation} \begin{split} \langle F, u_1\otimes u_2\otimes u_3\rangle&=\langle I_x(u_2,1)u_1, u_3\rangle\\ &=\langle (u_2)_{(|u_1|+|u_2|-|u_3|-1)}^0u_1, u_3 \rangle\\ &=\langle x, u_1\otimes u_2\otimes u_3\rangle \end{split} \end{equation} for any homogeneous $u_i\in M^i\,(i=1,2,3)$, which implies $F=x$. Conversely, we see that \begin{equation}\label{eq-425} \begin{split} &\langle I_{F}(u_2,z)u_1, u_3\rangle\\ &=\langle F, z^{-L_0}u_1\otimes z^{-L_0}u_2\otimes z^{L_0}u_3\rangle\\ &=\sum_{n_1=0}^{k_1}\sum_{n_2=0}^{k_2}\sum_{n_3=0}^{k_3} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!}\\ &\quad\times\langle F, (L_0-|u_1|)^{n_1}u_1\otimes (L_0-|u_2|)^{n_2}u_2 \otimes (L_0-|u_3|)^{n_3}u_3\rangle\\ & \quad\times z^{-|u_1|-|u_2|+|u_3|}(\log z)^{n_1+n_2+n_3}\\ &=\sum_{n_1=0}^{k_1}\sum_{n_2=0}^{k_2}\sum_{n_3=0}^{k_3} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!}\\ &\times\langle ((L_0-|u_2|)^{n_2}u_2)_{(\alpha)}^0(L_0-|u_1|)^{n_1}u_1, (L_0-|u_3|)^{n_3}u_3\rangle\\ &\quad\times z^{-|u_1|-|u_2|+|u_3|}(\log z)^{n_1+n_2+n_3}\\ &=\sum_{n_1=0}^{k_1}\sum_{n_2=0}^{k_2}\sum_{n_3=0}^{k_3} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!}\\ &\times\langle (L_0-|u_3|)^{n_3} ((L_0-|u_2|)^{n_2}u_2)_{(\alpha)}^0(L_0-|u_1|)^{n_1}u_1, u_3\rangle\\ &\quad\times z^{-|u_1|-|u_2|+|u_3|}(\log z)^{n_1+n_2+n_3} \end{split} \end{equation} where $\alpha=|u_1|+|u_2|-|u_3|-1$.
On the other hand, by Lemma \ref{lemma-fund}, we have \begin{equation}\label{eq-426} \begin{split} &\sum_{n_1+n_2+n_3=k} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!} (L_0-|u_3|)^{n_3} ((L_0-|u_2|)^{n_2}u_2)_{(\alpha)}^0(L_0-|u_1|)^{n_1}u_1\\ &=\sum_{n_1+n_2+n_3=k} \frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!} \sum_{\ell=0}^{n_3}\binom{n_3}{\ell}\ell!(x_1+x_2)^{n_3-\ell}x_1^{n_1}x_2^{n_2} (u_2)_{(\alpha)}^{\ell}u_1\\ &=\sum_{\ell=0}^{k}\sum_{n_3=\ell}^{k}\Bigl(\sum_{n_1+n_2=k-n_3}\frac{(-1)^{n_1+n_2}}{n_1!n_2!(n_3-\ell)!} (x_1+x_2)^{n_3-\ell}x_1^{n_1}x_2^{n_2} (u_2)_{(\alpha)}^{\ell}u_1\Bigr)\\ &=\sum_{\ell=0}^{k}\sum_{n_3=0}^{k-\ell}\Bigl(\sum_{n_1+n_2=k-\ell-n_3}\frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!} (x_1+x_2)^{n_3}x_1^{n_1}x_2^{n_2} (u_2)_{(\alpha)}^{\ell}u_1\Bigr)\\ &=\sum_{\ell=0}^{k}\Bigl(\sum_{n_1+n_2+n_3=k-\ell}\frac{(-1)^{n_1+n_2}}{n_1!n_2!n_3!} (x_1+x_2)^{n_3}x_1^{n_1}x_2^{n_2} (u_2)_{(\alpha)}^{\ell}u_1\Bigr)\\ &=\sum_{\ell=0}^k \frac{1}{(k-\ell)!}(-x_1-x_2+x_1+x_2)^{k-\ell} (u_2)_{(\alpha)}^{\ell}u_1\\ &=(u_2)_{(\alpha)}^{k}u_1. \end{split} \end{equation} Therefore, by combining \eqref{eq-425} and \eqref{eq-426} we obtain \begin{equation} \langle I_{F}(u_2,z)u_1, u_3\rangle=\langle I(u_2,z)u_1, u_3\rangle \end{equation} for homogeneous $u_i\in M^i\,(i=1,2,3)$. The theorem is proved. \end{document}
\begin{document} \sloppy \title{Near-Optimal Distributed Implementations of Dynamic Algorithms for Symmetry Breaking Problems} \iflong \else \begin{abstract} The field of dynamic graph algorithms aims at achieving a thorough understanding of real-world networks whose topology evolves with time. Traditionally, the focus has been on the classic sequential, centralized setting where the main quality measure of an algorithm is its update time, i.e.\ the time needed to restore the solution after each update. While real-life networks are very often distributed across multiple machines, the fundamental question of finding efficient \emph{dynamic, distributed graph algorithms} received little attention to date. The goal in this setting is to optimize both the \emph{round} and \emph{message} complexities incurred per update step, ideally achieving a message complexity that matches the centralized update time in $O(1)$ (perhaps amortized) rounds. Toward initiating a systematic study of \emph{dynamic, distributed algorithms}, we study some of the most central symmetry-breaking problems: maximal independent set (MIS), maximal matching/(approx-) maximum cardinality matching (MM/MCM), and $(\Delta + 1)$-vertex coloring. This paper focuses on dynamic, distributed algorithms that are \emph{deterministic}, and in particular --- robust against an \emph{adaptive} adversary. Most of our focus is on the MIS algorithm, which achieves $O\left(m^{2/3}\log^2 n\right)$ amortized messages in $O\left(\log^2 n\right)$ amortized rounds in the \textsc{Congest} model. Notably, the amortized message complexity of our algorithm matches the amortized update time of the best-known deterministic {\em centralized} MIS algorithm by Gupta and Khan [SOSA'21] up to a $\operatorname{polylog} n$ factor. 
The previous best deterministic distributed MIS algorithm, by Assadi et al.\ [STOC'18], uses $O(m^{3/4})$ amortized messages in $O(1)$ amortized rounds, i.e., we achieve a polynomial improvement in the message complexity by a $\operatorname{polylog} n$ increase to the round complexity; moreover, the algorithm of Assadi et al. makes an implicit assumption that the network is connected at all times, which seems excessively strong when it comes to dynamic networks. Using techniques similar to the ones we developed for our MIS algorithm, we also provide deterministic algorithms for MM, approximate MCM and $(\Delta + 1)$-vertex coloring whose message complexities match or nearly match the update times of the best centralized algorithms, while having either constant or $\operatorname{polylog}(n)$ round complexities. \end{abstract} \fi \iflong \ifanonymous \author{Anonymous Author(s)} \else \author{Shiri Antaki\thanks{Tel Aviv University \href{mailto:[email protected]}{[email protected]}} \and Quanquan C. Liu\thanks{Massachusetts Institute of Technology \href{mailto:[email protected]}{[email protected]}} \and Shay Solomon \thanks{Tel Aviv University \href{mailto: [email protected]}{ [email protected]}}} \fi \maketitle \iflong \begin{abstract} The field of dynamic graph algorithms aims at achieving a thorough understanding of real-world networks whose topology evolves with time. Traditionally, the focus has been on the classic sequential, centralized setting where the main quality measure of an algorithm is its update time, i.e.\ the time needed to restore the solution after each update. While real-life networks are very often distributed across multiple machines, the fundamental question of finding efficient \emph{dynamic, distributed graph algorithms} received little attention to date. 
The goal in this setting is to optimize both the \emph{round} and \emph{message} complexities incurred per update step, ideally achieving a message complexity that matches the centralized update time in $O(1)$ (perhaps amortized) rounds. Toward initiating a systematic study of \emph{dynamic, distributed algorithms}, we study some of the most central symmetry-breaking problems: maximal independent set (MIS), maximal matching/(approx-) maximum cardinality matching (MM/MCM), and $(\Delta + 1)$-vertex coloring. This paper focuses on dynamic, distributed algorithms that are \emph{deterministic}, and in particular --- robust against an \emph{adaptive} adversary. Most of our focus is on our MIS algorithm, which achieves $O\left(m^{2/3}\log^2 n\right)$ amortized messages in $O\left(\log^2 n\right)$ amortized rounds in the \textsc{Congest} model. Notably, the amortized message complexity of our algorithm matches the amortized update time of the best-known deterministic {\em centralized} MIS algorithm by Gupta and Khan [SOSA'21] up to a $\operatorname{polylog} n$ factor. The previous best deterministic distributed MIS algorithm, by Assadi et al.\ [STOC'18], uses $O(m^{3/4})$ amortized messages in $O(1)$ amortized rounds, i.e., we achieve a polynomial improvement in the message complexity by a $\operatorname{polylog} n$ increase to the round complexity; moreover, the algorithm of Assadi et al. makes an implicit assumption that the network is connected at all times, which seems excessively strong when it comes to dynamic networks. Using techniques similar to the ones we developed for our MIS algorithm, we also provide deterministic algorithms for MM, approximate MCM and $(\Delta + 1)$-vertex coloring whose message complexities match or nearly match the update times of the best centralized algorithms, while having either constant or $\operatorname{polylog}(n)$ round complexities. 
\end{abstract} \fi \section{Introduction}\label{sec:intro} Traditional graph algorithms process a \emph{static} graph on a single (centralized) machine and are sequential; thus, their runtime is at least linear in the graph size. Even linear-time static algorithms, which are traditionally considered extremely fast and optimal, are often inadequate for coping with \emph{modern big data}, which dynamically changes and evolves at a rapid pace. Often, such big data cannot be stored in one machine and hence distributed methods are required to process it. Efficiently coping with such data is widely recognized as one of the most important challenges of modern computation. A \emph{dynamic} graph algorithm is one that efficiently deals with rapid changes to the input graph, where a common goal is to maintain a subgraph with some key property while the underlying graph changes over time. A \emph{distributed} graph algorithm is one that efficiently deals with graph data stored in multiple machines, where the corresponding processors work \emph{in parallel} in order to achieve a common goal by communicating and coordinating their actions via message passing. The focus of dynamic graph algorithms, up until the past few years, has almost exclusively been in the classic \emph{sequential, centralized} setting, where the main quality measure is the algorithm's {\em update time}, i.e., the time needed to update the graph structure of interest per update step. Meanwhile, the focus of distributed algorithms has been mainly on the \emph{static} setting, primarily on the \emph{round complexity} of {\em static tasks}. Surprisingly, the fundamental question of \emph{distributing} known sequential dynamic graph algorithms received very little attention to date. Most previous works \cite{AOSS18,BKM19,BEG18,CDK20,CHK16,KG18,KS18,PPS16,PS16,LPR09} focused on small amortized round complexity. A few works considered message complexity~\cite{AOSS18,PS16} but only for certain problems. 
Minimizing the number of messages is an important goal. A small number of messages implies a small load on the communication links, which in some cases enables running multiple algorithms (or multiple instances of the same algorithm) concurrently, and can allow for pipelined implementations. Moreover, this measure is useful when considering cases where one cares about the total work done (as captured by the number of messages sent), e.g.\ when the total network bandwidth is limited. We remark that many real-world systems are often bandwidth limited; examples of bandwidth limited systems include systems with poor wireless connections, an over-saturated network (many independent agents are on the same network, as in, e.g., a large company), or a mobile data network in a poorly connected area. On a low bandwidth network, an algorithm that uses fewer rounds but more messages can, in principle, take longer to finish its execution than an algorithm that uses fewer messages but more rounds. We define the \emph{message-efficiency} of our algorithms to be the ratio between the \emph{amortized message complexity} per update of our algorithm and the sequential amortized update time of the best-known centralized algorithm. Note that although we consider message-efficiency as a separate property, we still want our algorithms to run in $\poly\log n$ or $O(1)$ rounds, as is standard. Our goal is to design distributed algorithms with (nearly) constant message-efficiency, meaning the amortized message complexity asymptotically matches the amortized update time of the best centralized algorithm for the problem. Of course, at the same time, we want the amortized number of rounds of such algorithms to be (nearly) constant. We allow $\poly \log n$ slack in both the message and round complexities.
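In symbols (the notation $\operatorname{msg}(\mathcal{A})$, $t_{\mathrm{seq}}$ and $\operatorname{eff}(\mathcal{A})$ is ad hoc, introduced only for this illustration), if $\operatorname{msg}(\mathcal{A})$ denotes the amortized message complexity per update of a dynamic distributed algorithm $\mathcal{A}$ and $t_{\mathrm{seq}}$ the amortized update time of the best-known centralized algorithm for the same problem, then the message-efficiency is
\begin{equation*}
\operatorname{eff}(\mathcal{A})=\frac{\operatorname{msg}(\mathcal{A})}{t_{\mathrm{seq}}};
\end{equation*}
e.g., an MIS algorithm with $\operatorname{msg}(\mathcal{A})=O(m^{2/3}\log^2 n)$ measured against the $O(m^{2/3})$ centralized update time of \cite{GuptaK18} has message-efficiency $O(\log^2 n)$.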
In this paper, we aim at initiating a systematic study of dynamic, distributed message-efficient algorithms by considering a single edge update per step, as in the classic, sequential centralized setting. Even for this basic setting, there are many challenges underlying the adaptation of centralized algorithms to the distributed setting. Very recently, Censor-Hillel et al.\ \cite{CDK20} and Bamberger et al.\ \cite{BKM19} studied the question of simultaneously handling concurrent updates in a distributed setting. However, their algorithms require $\Omega(m)$ amortized messages. Thus, it seems crucial to first thoroughly understand the base case of a single update and only later extend to more general settings. We focus on classic \emph{symmetry-breaking} problems: maximal independent set (MIS), maximal matching (MM), $3/2$-approximate maximum cardinality matching (MCM), and $(\Delta + 1)$-vertex coloring. Symmetry-breaking constitutes one of the most important challenges in distributed computing, since in many distributed systems processors might be in the same state, yet one must somehow break the symmetry to perform almost any nontrivial computation. One challenge in optimizing the message-efficiency is that many centralized algorithms for these problems rely on global variables and data structures, and often on periodic \emph{global restarts}. The idea behind a periodic global restart is for the algorithm to handle some number of updates ``lazily'', essentially without changing the data structure, until sufficiently many updates have accumulated. Such centralized approaches are problematic in the distributed setting as each individual vertex does not have access to global information.
Moreover, many centralized algorithms~\cite{AOSS18,DZ18,GuptaK18,KNNP20,NS13} also assume knowledge of the number of edges in the graph at any time, which loses no generality in a centralized setting but is a very strong assumption to make in (dynamic) distributed settings -- such information can be acquired, but through potentially expensive communication. As a result, distributing centralized dynamic algorithms is a challenging task. We shall restrict our attention to the standard $\textsc{CONGEST}\xspace$ model of communication (\cite{PelB00}), where message size is bounded by $O(\log n)$ bits. In particular, we use the most studied model of the distributed dynamic setting, the \emph{local wakeup ($\textsc{CONGEST}\xspace$) model} (cf.\ \cite{AOSS18,CHK16,KS18,PPS16,PS16}), where following an edge update $(u,v)$, only the updated vertices $u$ and $v$ wake up. The update procedure proceeds in fault-free synchronous rounds during which every processor {\em that has been woken up} is allowed to exchange $O(\log n)$-bit messages with its neighbors until finishing its execution. This differs from the standard static setting in a crucial aspect: in the static setting all the vertices are woken up at the outset and engage in the algorithm, whereas in the dynamic setting a vertex has to be woken up as part of the update procedure---by receiving a message that propagated from either $u$ or $v$---in order to engage in the update procedure. In particular, to achieve good message-efficiency, vertices cannot blindly participate in the update procedure as even the size of the $2$-hop neighborhood of an updated vertex can be large, leading to large message complexity; this poses a highly nontrivial challenge for algorithms which need to maintain some global invariant. The round and message complexities can be viewed as the ``runtime'' and ``total work'' of the algorithm, respectively.
Our paper focuses on solving the dynamic MIS problem via a deterministic, \textsc{CONGEST}\xspace algorithm that minimizes message complexity and number of rounds using new techniques to resolve the issue surrounding an unknown number of edges in the entire graph. We then use similar techniques to those used in our algorithm for solving MIS to solve the other problems discussed in this paper, namely, MM, $3/2$-approximate MCM, and $(\Delta + 1)$-vertex coloring. For MM, $3/2$-approximate MCM, and $(\Delta + 1)$-coloring, there exist $O(\Delta)$-message, $O(1)$-round naive algorithms where each node queries its immediate neighbors for the desired property (their color, whether they are in the matching, etc.); the goal is to beat these algorithms in the number of messages in the setting where $\Delta = \Omega(\sqrt{m})$, where $\Delta$ is a fixed upper bound on the maximum degree in the graph. For MM, $3/2$-approximate MCM, and $(\Delta + 1)$-coloring, we provide the first non-trivial $O(1)$-message-efficient deterministic, distributed algorithms that match the $O(\sqrt{m})$ update times of the best-known centralized algorithms~\cite{KNNP20,NS13}. Our algorithms run in $O(1)$ worst-case rounds for MM and $(\Delta + 1)$-coloring and $O(\log \Delta)$ worst-case rounds for $3/2$-approximate MCM. Furthermore, our algorithms for these problems are simple and, hence, practically implementable. \myparagraph{Related Work for MIS} We give a more comprehensive survey of the related work on MIS since this problem has received more attention in the dynamic, distributed setting. The MIS problem has been intensively studied over the years since the celebrated works of \cite{AlonBI86,Linial87,Luby86} from the mid 1980s. Most of the literature on the problem arguably revolves around parallel and distributed settings, perhaps due to the practical applications of MIS in these settings.
In recent years there has been a growing body of work on the problem of \emph{maintaining} an MIS in dynamic networks \cite{AOSS18,AOSS19,BDHSS19,CHK16,CZ19,DZ18,GuptaK18,KW13}. Although most of this work has focused on the standard centralized, sequential setting, Censor-Hillel et al.\ \cite{CHK16} gave a {randomized} algorithm for maintaining an MIS against an \emph{oblivious adversary} in distributed dynamic networks.\footnote{In the \emph{oblivious adversarial model} (see, e.g., \cite{CW77b,KKM13}), the adversary has complete knowledge of all the edges in the graph and their arrival order, as well as of the algorithm, but does not have access to the random bits used by the algorithm, and so cannot adapt its updates in response to the random choices of the algorithm.} The distributed algorithm of \cite{CHK16} achieves an (amortized) message complexity of $\Omega(\Delta)$ and an \emph{expected} round complexity of $O(1)$. König and Wattenhofer \cite{KW13} gave an algorithm for maintaining an MIS that requires a constant number of rounds but, as opposed to \cite{CHK16}, assumes that nodes leave the network gracefully and that messages may have unbounded size. Moreover, the number of broadcasts performed may be large. Assadi et al.~\cite{AOSS18} showed that their main centralized algorithm for MIS can be naturally adapted into a deterministic distributed algorithm with amortized message complexity $O(\min\{m^{3/4},\Delta\})$ and an amortized round complexity of $O(1)$. However, this algorithm operates under the assumption that knowledge of up-to-date estimates of $m$ is provided to all vertices; this leads to an implicit assumption that the graph remains connected throughout the course of the algorithm, which is a strong assumption to make, especially in dynamic networks.
In the centralized, sequential setting, the deterministic $O(m^{3/4})$ bound of Assadi et al.\ \cite{AOSS18} for general graphs was improved to $O(m^{2/3})$ by Gupta and Khan \cite{GuptaK18} and independently to $O(m^{2/3} \sqrt{\log m})$ by Du and Zhang \cite{DZ18}. Allowing randomization against an oblivious adversary, the update time in general graphs was reduced to $O(\sqrt{m})$ \cite{DZ18}, further to $\widetilde{O}(\min\{m^{1/3},\sqrt{n}\})$ \cite{AOSS19}, and ultimately to $\poly \log n$ \cite{BDHSS19,CZ19}. To the best of our knowledge, the only distributed algorithms for the problem are the two mentioned above \cite{AOSS18, CHK16}. That said, it seems rather straightforward to distribute the randomized algorithms with amortized update time $\operatorname{polylog}(n)$ \cite{BDHSS19,CZ19}, to obtain a distributed algorithm with both message and round complexities bounded by $O(\operatorname{polylog}(n))$.\footnote{We did not make any effort to verify this claim, as our goal was to achieve an efficient {\em deterministic} algorithm, or at least one that does not assume an {\em oblivious} adversary.} However, the disadvantage of these randomized algorithms is that they crucially require the oblivious adversary assumption. While such an assumption might be fine in the centralized setting, in the distributed setting it is easier for adversaries to corrupt links in the network as well as to corrupt and/or eavesdrop on the messages sent through such links. Thus, in such settings the oblivious adversary assumption seems excessively strong and impractical. In this paper, we shall restrict our attention to deterministic algorithms, which are, in particular, robust against an adaptive adversary. 
Prior to this work there was no deterministic (or even randomized against a non-oblivious adversary) distributed algorithm for MIS with a message complexity of $o(m^{3/4})$, and moreover, it seemed highly unclear whether the deterministic algorithms of \cite{DZ18,GuptaK18}, with amortized update time $\tilde O(m^{2/3})$, could be distributed efficiently. \subsection{Our Contributions} \myparagraph{Maximal independent set} We present a deterministic distributed algorithm that achieves amortized message and round complexities of $O(m^{2/3}\log^2{n})$ and $O(\log^2{n})$, respectively. To this end, we reduce the problem of dynamically maintaining an MIS to that of statically computing it. Our reduction builds on the aforementioned (centralized, sequential) algorithm of Gupta and Khan \cite{GuptaK18} in a nontrivial way. \begin{theorem} \label{t1} Equipped with a black-box static deterministic algorithm for computing an MIS within $T(n)$ rounds for any $n$-vertex distributed network, an MIS can be maintained \emph{deterministically} (in the local wakeup model) over any sequence of edge insertions and deletions that starts from an empty distributed network on $n$ vertices, within $O(T(n))$ \emph{amortized round complexity} and $O(m^{2/3} \cdot T(n))$ \emph{amortized message complexity}, where $m$ denotes the dynamic number of edges. \end{theorem} Using the MIS algorithm of \cite{GGR20,RG20}, which runs in $O(\log^5 n)$ rounds, on top of the transformation of Theorem \ref{t1}, the amortized bounds on the round and message complexities of the resulting distributed algorithm are $O(\log^5 n)$ and $O(m^{2/3} \log^5 n)$, respectively. While the black-box static MIS algorithm used in the transformation of Theorem \ref{t1} applies to arbitrary graphs, bounded-diameter graphs admit faster MIS algorithms. Next, we strengthen the transformation of Theorem \ref{t1} to achieve a diameter-sensitive transformation.
We stress that the algorithm returned as output of this transformation applies to any dynamic graph; the restriction on the diameter applies only to the black-box static MIS algorithm. The black-box MIS algorithm should also satisfy another property: given as input an independent set $M' \subseteq V$ of the graph, the output MIS should be a superset of the input set $M'$; we call this an {\em input-respecting MIS}. \begin{theorem} \label{t2} Equipped with a black-box static deterministic algorithm for computing an input-respecting MIS within $T'(n)$ rounds for any $n$-vertex distributed network {\em with diameter at most 6}, an MIS can be maintained \emph{deterministically} over any sequence of edge insertions and deletions that starts from an empty network on $n$ vertices, within $O(T'(n))$ \emph{amortized round complexity} and $O(m^{2/3} \cdot T'(n))$ \emph{amortized message complexity}, where $m$ denotes the dynamic number of edges.\footnote{For our purposes it suffices to take a constant of 6, but any fixed constant $c \geq 6$ works.} \end{theorem} The MIS algorithm of \cite{CPS20} runs in $O(D \log^2 n)$ rounds in distributed graphs of diameter $D$. We adapt the algorithm of \cite{CPS20} to return an input-respecting MIS. Plugging the resulting MIS algorithm into the transformation of Theorem \ref{t2} yields: \begin{corollary}\label{cor} Starting from an empty distributed network on $n$ vertices, an MIS can be maintained \emph{deterministically} over any sequence of edge insertions and deletions with $O(\log^2 n)$ \emph{amortized round complexity} and $O(m^{2/3} \log^2 n)$ \emph{amortized message complexity}. \end{corollary} Our algorithm uses unicast rather than broadcast messages, which allows each processor to communicate differently with each of its neighbors, and, more concretely, to communicate with a subset of its neighbors---otherwise there is no hope to achieve a message complexity of $o(\Delta)$.
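For concreteness, the input-respecting property can be illustrated by a minimal centralized sketch in Python; the names (`input_respecting_mis`, `adj`, `m_prime`) are ours and the actual black-box is a distributed algorithm, but the output contract is the same: greedily extend the given independent set to a maximal one.

```python
def input_respecting_mis(adj, m_prime):
    """Greedily extend the independent set m_prime to a maximal
    independent set of the graph given by adjacency sets `adj`.
    The output is always a superset of m_prime."""
    mis = set(m_prime)
    dominated = set()              # vertices adjacent to the current MIS
    for v in mis:
        dominated |= adj[v]
    for v in sorted(adj):          # deterministic scan order
        if v not in mis and v not in dominated:
            mis.add(v)
            dominated |= adj[v]
    return mis
```

With $M' = \emptyset$ this degenerates to the standard greedy MIS computation.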
Prior to this work, the distributed algorithm of \cite{AOSS18} was the only one providing a deterministic algorithm with $o(m)$ amortized message complexity. By allowing the amortized round complexity to grow from constant to a small polylogarithmic factor, we obtain a polynomial improvement in the message complexity. Perhaps more important than this improvement, in contrast to the work of \cite{AOSS18} that relies on the network being connected at all times, our algorithm does not make any assumptions on the network's connectivity. Assuming that the network is always connected seems to be too much to ask for --- particularly for dynamic networks. To cope efficiently with disconnected graphs, our algorithm has to bypass several nontrivial challenges, some of which are discussed next in~\cref{sec:tech-overview}. \myparagraph{Other symmetry-breaking problems} We also show the following results for MM, $(3/2)$-approximate MCM, and $(\Delta+1)$-vertex coloring, which were not known prior to this paper. All of our results for these problems are $O(1)$-message-efficient, matching the sequential running time of the best centralized deterministic algorithms for these problems~\cite{KNNP20,NS13}. \begin{theorem}\label{thm:mm} Starting from an empty distributed network on $n$ vertices, a maximal matching (MM) and a $(3/2)$-approximate maximum matching can be maintained \emph{deterministically} over any sequence of edge insertions and deletions with $O(1)$ and $O(\log \Delta)$ rounds, worst-case, respectively, and $O(\sqrt{m})$ worst-case and amortized message complexity, respectively. \end{theorem} \begin{theorem}\label{thm:coloring} Starting from an empty distributed network on $n$ vertices, a $(\Delta + 1)$-vertex coloring can be maintained \emph{deterministically} over any sequence of edge insertions and deletions with $O(1)$ rounds, worst-case, and $O(\sqrt{m})$ worst-case message complexity. 
\end{theorem} \subsection{Paper Organization} In~\cref{sec:tech-overview}, we provide a technical overview of our algorithms. Then, in~\cref{sec:simple} we review the edge orientation technique and present our algorithms for MM and $(\Delta+1)$-coloring. In~\cref{sec:MCM}, we present our algorithm for $(3/2)$-approximate MCM. Finally, in~\cref{sec:MIS}, we provide our new algorithm for MIS that improves on~\cite{AOSS18}. \section{Proof Overview and Technical Challenges}\label{sec:tech-overview} Throughout the paper we show that oftentimes centralized algorithms that require a {\em global} property such as the current value of $m$ can instead use information obtained from each vertex's {\em local} neighborhood to approximate the global property. Similarly, instead of maintaining {\em global} invariants, we settle for local, weaker variations of such invariants, and demonstrate that they are sufficiently strong for our purposes. Our insights and techniques may be applied more broadly to obtain deterministic, dynamic, distributed algorithms beyond the specific problems we study in this paper. We first consider the maximal matching and $(\Delta + 1)$-vertex coloring problems, and our proposed algorithms for them can be viewed as a ``warm-up'' for our later algorithms. Our algorithms employ {\em dynamic edge orientations} to achieve $O(\sqrt{m})$ message complexity (matching the centralized update time), together with constant round complexity, without knowing the current number of edges $m$ and without making any connectivity assumptions. We demonstrate that dynamic edge orientations are useful for ensuring that each vertex sends information to at most $O(\sqrt{m})$ neighbors per update. By maintaining the \emph{inherently local} invariant that edges are oriented from vertices with low-degree\xspace to vertices with high-degree\xspace, we can achieve a maximum outdegree of $O\left(\sqrt{m}\right)$. 
Using this, we can make sure that each vertex has complete information on all its incoming neighbors, and only needs to spend $O(\sqrt{m})$ messages to find the remaining information on its out-neighbors. More specifically, for algorithms where vertices only need to know information about their direct neighbors, we show that such an orientation is enough to ensure $O(\sqrt{m})$ message complexity (and $O(1)$ round complexity). We then proceed to the $3/2$-maximum cardinality matching (MCM) problem. To achieve an efficient algorithm in this case, the idea of dynamic edge orientations is insufficient because vertices need not only information about their direct neighbors but also information about their $2$-hop neighborhood. To ensure that the number of messages stays $O(\sqrt{m})$, we rely on two key insights. First, using the same high-degree\xspace, low-degree\xspace partitioning as for MM and $(\Delta + 1)$-coloring (cutoff of $\Theta(\sqrt{m})$ but without knowing $m$), we ensure that \emph{all high-degree vertices are always matched}. We show that the following is true for each \emph{unmatched} high-degree\xspace vertex $w$: either $w$ can find a free (unmatched) neighbor in the first $\Theta(\sqrt{m})$ neighbors it queries, or $w$ can find a neighbor that is matched to a low-degree\xspace vertex (we call this the \emph{surrogate}) in the first $\Theta(\sqrt{m})$ neighbors it queries. This fact allows us to ensure that $w$ is always matched. The second key insight is that searching $O(\sqrt{m})$ neighbors of any vertex allows us to determine whether the vertex is high-degree\xspace or low-degree\xspace, and, importantly, we do this without needing to know the value of $m$.
This insight allows us to do the following for each vertex incident to a matching edge that was deleted: we can search its neighbors by guessing the value of $\sqrt{m}$ in $O(\log \Delta)$ attempts; starting with one neighbor, we successively search two times the previous number of neighbors until either we find a free neighbor or a surrogate (or we run out of neighbors). Once we find a free neighbor or a surrogate, we are done since we have successfully matched the vertex. By our observation above, it is sufficient to search $\Theta(\sqrt{m})$ neighbors for each high-degree\xspace vertex before finding a free neighbor or a surrogate. Since we double the number of neighbors we search, we overestimate the number of such neighbors by at most a factor of $2$ and so the number of messages sent is $O(\sqrt{m})$ (dominated by the last guess). The number of rounds is $O(\log \Delta)$ since we require this many guesses before guessing $\Delta$. These insights show that to approximate a high-degree\xspace/low-degree\xspace partition for the MCM problem, it is enough to look at (part of) a vertex's $2$-hop neighborhood. Our main result for this paper is solving the dynamic, distributed MIS problem; hence, we spend the remainder of this section on its challenges and solutions. We use the ideas we obtain from the other problems combined with several new insights in order to achieve our final algorithm. The main challenge we face is that we are no longer able to use only the $2$-hop neighborhood in our algorithm; instead we must use (part of) the $6$-hop neighborhood and we formulate a new \emph{restart} procedure (discussed a bit later) for this purpose. Our distributed algorithm for maintaining an MIS, as summarized in Corollary \ref{cor}, is a direct consequence of the reduction of Theorem \ref{t2}, which strengthens that of Theorem \ref{t1}. 
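The doubling search just described can be sketched as follows. This is a centralized Python sketch under our own naming; `is_free` and `is_surrogate` are hypothetical predicates standing in for one-message queries to a neighbor.

```python
def rematch_by_doubling(neighbors, is_free, is_surrogate):
    """Search the neighbor list in exponentially growing batches
    (1, 2, 4, ...) until a free neighbor or a surrogate is found.
    The total number of queries is dominated by the last batch, so
    it is at most twice the index of the first hit."""
    queried, batch_size = 0, 1
    while queried < len(neighbors):
        for u in neighbors[queried:queried + batch_size]:
            queried += 1
            if is_free(u):
                return ("free", u)
            if is_surrogate(u):
                return ("surrogate", u)
        batch_size *= 2
    return None                     # ran out of neighbors
```

The number of attempts before the batch size reaches the full degree is logarithmic, mirroring the $O(\log \Delta)$ round bound above.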
The starting point of both our reductions is the centralized sequential algorithm of \cite{GuptaK18}, hereafter {Algorithm [GK]}, with amortized update time $O(m^{2/3})$. We start by giving a high-level overview of [GK]. [GK] dynamically partitions the vertex set $V$ into two subsets $V_H$ and $V_L$ of high-degree and low-degree vertices, respectively, according to a degree cutoff of $m^{2/3}$, giving priority to the vertices of $V_L$ to be in the MIS over those of $V_H$. For dynamic graphs of maximum degree $\le \Delta$, there is a deterministic algorithm, hereafter Algorithm $\mathcal{A}$, for maintaining an MIS with amortized update time $O(\Delta)$ \cite{AOSS18}. The subgraph $G_L = G[V_L]$ of $G$ induced by $V_L$ has maximum degree $\le m^{2/3}$, thus an MIS $M_L$ for $V_L$ can be maintained with update time $O(m^{2/3})$ by applying Algorithm $\mathcal{A}$ on $G_L$. No vertex of $V_H$ that is incident to an MIS vertex of $V_L$ can be in the MIS. Let $V^*_H$ be the set of remaining vertices of $V_H$, namely those that are neither in the MIS nor adjacent to any MIS vertex, and let $G^*_H = G[V^*_H]$ be the subgraph of $G$ induced by $V^*_H$. Since every vertex of $V_H$ has degree at least $m^{2/3}$, we have $|V_H| = O(m^{1/3})$, hence $G^*_H$ contains at most $O(|V^*_H|^2) = O(|V_H|^2) = O(m^{2/3})$ edges, and a fresh MIS $M_H$ for $G^*_H$ can be computed in $O(m^{2/3})$ time following every edge update by running any linear-time static MIS algorithm. Specifically, the algorithm used in [GK] is the trivial one of iteratively picking an arbitrary $v \in V^*_H$ to add to $M_H$ and then removing $v$ and all adjacent vertices from $V^*_H$; the procedure runs until $G^*_H$ is empty. The output MIS, given by $M := M_L \cup M_H$, is thus maintained in $O(m^{2/3})$ update time. We next highlight some of the challenges that we faced on the way to achieving an efficient distributed implementation of [GK]. \begin{challenge} \label{c1} The task of computing a fresh MIS $M_H$ on $G^*_H$ following every edge update cannot be distributed efficiently.
In fact, this task as is cannot be distributed {\em even inefficiently}. \end{challenge} First and foremost, computing an MIS on $G^*_H$ requires all the vertices in $V^*_H$ to wake up. Alas, in the local wakeup model, only the updated vertices are woken up following an edge update, hence waking up all the vertices in $V^*_H$ may be prohibitive if the diameter is large and even infeasible (if the graph is not connected). Instead of computing an MIS on the entire $G^*_H$, we propose to apply the static MIS computation to a {\em carefully chosen} subgraph of $G^*_H$, denoted here by $\tilde G_H$, where all the vertices of $\tilde G_H$ are at constant distance from the updated vertices. Simply waking up all vertices at constant distance from the updated vertices in $O(1)$ rounds to participate in a static MIS distributed algorithm will not work, as an MIS computation over the corresponding subgraph will cause conflicts due to edges and vertices lying outside of that subgraph. Consequently, we first need to identify, among those vertices, the ones that will not trigger any further conflicts, and compute the subgraph $\tilde G_H$ induced on them only. Moreover, for the reduction of Theorem \ref{t2}, we would need $\tilde G_H$ to have a bounded diameter. This is impossible in general, since, even if the diameter of the entire graph is small, we have no control whatsoever on the diameter of $G^*_H$, let alone on the diameter of $\tilde G_H$. To overcome this difficulty, we must add to $\tilde G_H$ some vertices of $V_L$---those lying on the shortest paths between the updated vertices and the vertices of $\tilde G_H$ that we wish to apply the static MIS algorithm on. We then face another challenge: the resulting graph, denoted by $G'_H$, contains vertices of $V_L$ on the one hand, but on the other hand the static MIS algorithm that we apply on $G'_H$ must not affect the MIS $M_L$ for $V_L$, as that would blow up both the round and message complexities.
To this end, we need to apply an input-respecting MIS algorithm on top of $G'_H$, where the input independent set should contain all vertices of $V_L$ in $G'_H$ that belong to $M_L$. This, in turn, requires us to adapt the algorithm of \cite{CPS20} to be input-respecting. \begin{challenge} \label{c2} Vertices (processors) do not know the number of edges in the graph, which dynamically changes. \end{challenge} The update times of various (centralized) dynamic graph algorithms depend polynomially on the dynamic number of edges $m$. For example, in the context of symmetry breaking problems, for MIS there is an $O(m^{3/4})$ update time algorithm \cite{AOSS18} and $\widetilde{O}(m^{2/3})$ update time algorithms \cite{DZ18,GuptaK18}; for maximal and $(3/2 + \varepsilon)$-approximate maximum matching there are $O(\sqrt{m})$ and $O(m^{1/4})$ update time algorithms \cite{GP13,BS16,NS13}; and for $(\Delta+1)$-coloring there is an $O(\sqrt{m})$ update time algorithm \cite{DBLP:journals/corr/abs-1909-07854}. In these algorithms and others, achieving an update time polynomial in $m$ usually requires knowledge of $m$ or some approximation of it, where this knowledge is directly translated into the dynamic maintenance of data structures and invariants with respect to $m$. In particular, the standard way to cope with a dynamic number of edges $m$ in a centralized setting is to apply a global ``restart'' procedure every time the number of edges grows or shrinks by a factor of 2, where the role of a restart is not merely to compute the new number of edges, but also to rebuild the data structures with respect to the new value of $m$, thereby restoring the validity of the invariants. In the local wakeup model, we shall assume that each update step has a running timestamp associated with it. We believe this assumption, which was used before (cf.\ \cite{AOSS18}), should be acceptable in practice. 
Moreover, a variant of our algorithm does not require the use of timestamps, with the caveat that its amortized message complexity is $O(m^{2/3}_{max}\log^2{n})$ (see~\cref{sec:restart} for details). For brevity, here, we focus on our algorithm that uses timestamps. Importantly, only the updated vertices learn about the timestamp of the corresponding update. {\em If the graph were always connected}, a global restart procedure could be distributed in a straightforward way, at least in terms of computing the up-to-date number of edges. In this case, every vertex would store the timestamp $t_{last}$ of the last restart as well as the number of edges $m_{last}$ in the graph at that time, and once an update step occurs with timestamp $t_{last} + m_{last}/2$, one can compute the up-to-date number of edges $m$ in the graph within $O(m)$ rounds and messages, and then broadcast this number as well as the current timestamp to all vertices. Such a restart procedure was employed by Assadi et al.\ \cite{AOSS18} for distributing their centralized algorithm. We believe that the assumption that the graph is always connected is too strong when dynamic graphs are involved. Indeed, a standard assumption in many dynamic graph algorithms with amortized time bounds (including that of \cite{AOSS18}) is that the initial graph is empty. Moreover, the underlying goal is to improve over the $\Delta = O(n)$ message complexity of the algorithm of \cite{AOSS18}, which means that the relevant regime of graph densities is $m = o(n^{3/2})$. If the graph is not connected, however, it seems hopeless for vertices to learn the up-to-date number of edges in the graph. This is, in fact, a rather generic problem, which arises naturally when attempting to distribute any centralized dynamic graph algorithm that requires knowledge of $m$ \cite{BS16,DZ18,GuptaK18,GP13,DBLP:journals/corr/abs-1909-07854,NS13}.
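The connected-case restart rule described above can be sketched as follows. The names are ours and hypothetical; `count_edges` abstracts the $O(m)$-round global edge count, which is precisely the step that becomes unavailable once the graph disconnects.

```python
class RestartTracker:
    """Per-vertex state for the global restart rule: remember the
    timestamp t_last and edge count m_last of the last restart, and
    trigger a new restart once m_last/2 further update steps occur."""
    def __init__(self):
        self.t_last = 0
        self.m_last = 0

    def on_update(self, t_now, count_edges):
        # count_edges() stands in for the O(m)-round/message global
        # edge count, realizable only in a connected graph.
        if t_now - self.t_last >= max(1, self.m_last // 2):
            self.m_last = count_edges()
            self.t_last = t_now
            return True             # restart triggered
        return False
```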
When $m$ is not known by the vertices, we can no longer maintain the invariant that the sets $V_L$ and $V_H$ consist of the low-degree and high-degree vertices, respectively. Indeed, a global change in the number of edges also changes the degree cutoff of $m^{2/3}$, hence it is possible for this invariant to hold at some point and to be drastically violated by many vertices at a later step. This implies that vertices of $V_L$ could have very high degree, hence applying a distributed implementation of $\mathcal{A}$ on top of the subgraph $G_L$ of $G$ induced by $V_L$ would blow up the message complexity; symmetrically, vertices of $V_H$ could have very low degree, hence their number may be much larger than $m^{1/3}$, in which case the subgraph $G'_H$ of $G$ on which a static MIS computation is applied may contain many more edges than $m^{2/3}$, which too may blow up the message complexity. Overcoming this hurdle is not a minor technicality; it is in fact the crux of our distributed algorithm. To this end we propose several new structural insights into the problem, which enable us to replace the global invariants of the centralized algorithm of \cite{GuptaK18} by local, carefully-tailored invariants that can be maintained efficiently in the local wakeup model, to ultimately achieve our result. The key idea behind our approach is that we can use the local neighborhood (instead of the global neighborhood) of a vertex $v$ to determine whether $v$ should be categorized as low-degree or high-degree. Importantly, this local neighborhood approach does not require the graph to be connected and we show that it is enough to obtain our desired message complexity, together with small $\operatorname{polylog}(n)$ round complexity. \section{Preliminaries} Given an undirected graph $G = (V, E)$, which represents a distributed network, each vertex $v \in V$ has access to an adjacency list of its current neighbors. We let $deg(v)$ denote the degree of $v$ or the size of $v$'s adjacency list.
As commonly assumed, each vertex can distinguish its neighbors via unique IDs assigned to each vertex. Following standard convention in dynamic graph algorithms, the graph is empty at the outset, and there is a single edge update per step. As mentioned, we consider the local wakeup model where, following an edge update (insertion or deletion), only the two endpoints incident to that edge \emph{wake up} and may initiate an update procedure. Any other vertex remains asleep until receiving a message from a neighbor, at which stage it can start participating in the update procedure (by exchanging messages with its neighbors and performing other actions). Computation proceeds in synchronous rounds; thus all vertices (those that are awake) know when a round of communication starts and ends, and a vertex cannot respond to messages it received in the same round; it can do so in the next round. Our algorithms operate in the \textsc{CONGEST}\xspace model so messages have size $O(\log n)$ bits. In addition to considering worst-case rounds and messages per update, in this paper, we also consider \emph{amortized round complexity} (or {\em amortized update time}) and \emph{amortized message complexity}, which bound the \emph{average} number of communication rounds and $O(\log n)$-bit messages sent, respectively, needed to update the solution per edge update, over a \emph{worst-case} sequence of updates. In contrast to the static setting, the amortized message complexity of distributed tasks could be sublinear in the graph size, which makes it natural to optimize both measures of amortized round and message complexities rather than just the former. We consider three different notions of the number of edges in the graph: $m_{max}$, $m_{avg}$ and $m_{cur}$. $m_{max}$ is the maximum number of edges in the graph over all updates in the update sequence. $m_{avg}$ is the average number of edges in the graph over the update sequence.
Finally, $m_{cur}$ is the current number of edges in the graph after the most recent update. When the particular definition being used is obvious from context, we simply use $m$. \section{Dynamic MM and \texorpdfstring{$(\Delta + 1)$}{}-Coloring via Dynamic Edge Orientations}\label{sec:simple} We begin our discussion of our dynamic, distributed algorithms for symmetry-breaking problems with our simple warm-up algorithms for MM and $(\Delta + 1)$-coloring. We first present the edge orientation technique which is one way of ensuring that vertices only need to send messages to a small enough subset of neighbors upon an update. This technique is based on the idea that for each vertex $v$, if the edge $(u,v)$ is oriented towards $v$, then $v$ has all the relevant information about $u$. For every edge insertion or deletion that causes $u$ to change its properties, $u$ notifies all its outgoing neighbors. In such a case, when $v$ needs to make a decision using all its neighbors' properties, it only needs to ask its outgoing neighbors about their properties to have a full picture of all its neighbors. By using the edge orientation technique we can solve problems like dynamic, distributed MM and $(\Delta + 1)$-coloring. All round and message complexities in this section are in terms of $m_{cur}$, the current number of edges in the graph. For convenience, we define $m = m_{cur}$ in this section. \subsection{Edge Orientation Algorithm}\label{para:orientation} We orient the edges in the following way: provided an edge insertion $(u, v)$, the edge is oriented from the vertex with lower degree to the vertex with higher degree. If the degrees of the two vertices are equal, then the edge is oriented arbitrarily. Each vertex $v$ also maintains a number $p_v$ indicating its degree during the last time it \emph{checked the orientation} of all its adjacent edges and \emph{reoriented} any of the checked edges. Initially, we set $p_v = 1$ for all vertices. 
If an edge insertion or edge deletion $(u, v)$ causes $v$'s degree to fall outside the range $[p_v/2, 2p_v]$, $v$ asks its neighbors for their degree. For each neighbor $w$ of $v$, $v$ directs the edge $(w, v)$ from $w$ to $v$ if $deg(w) \leq deg(v)$. Otherwise, $v$ directs the edge $(v, w)$ from $v$ to $w$. After performing the reorientation of edges, $v$ sets $p_v = deg(v)$. The reorientation of the edges is done gradually, $20$ edges per update step, which means that this reorientation process would be carried over during the course of the \emph{next} $deg(v)/10$ updates incident to $v$. This means that we finish updating the changes before the next set of updates causes $deg(v)$ to fall outside the range $[p_v/2, 2p_v]$ again.\\ The pseudocode for this algorithm is given in~\cref{alg:orient} and~\cref{alg:reorient-edges}. \iflong \begin{algorithm} \caption{Orient\_edges$(u,v)$}\label{alg:orient} \SetAlgoLined \LinesNumbered \KwResult{Orient the edges properly on an edge insertion or deletion.} \textbf{Input: } An edge insertion or deletion $(u,v)$.\\ \If{$deg(v) > deg(u)$}{Orient the edge from $u$ to $v$.} \Else{Orient the edge from $v$ to $u$.} \For{$z \in \{u,v\}$} {\If{$deg(z) > 2p_z$ or $deg(z) < p_z/2$}{Mark $z$ as in process of checking reorientation.\\$z$ updates $p_z = deg(z)$.} \If{$z$ is marked as in process of checking reorientation}{Reorient\_edges($z$)}} \end{algorithm} \begin{algorithm} \caption{Reorient\_edges($z$)}\label{alg:reorient-edges} \SetAlgoLined \LinesNumbered \KwResult{Reorient the edges adjacent to $z$ gradually} \textbf{Input: } A vertex $z$ that needs to reorient its adjacent edges.\\ \If{all the edges adjacent to $z$ are marked as checked}{Mark $z$ as done.
\\ \Return} \For{$20$ adjacent edges $(z,x)$ of $z$ that still might need reorientation}{ \If{$deg(z) > deg(x)$}{Orient the edge from $x$ to $z$.} \Else{Orient the edge from $z$ to $x$.} Mark $(z,x)$ as checked} \end{algorithm} \fi The following invariant is crucial for the update algorithm. \begin{invariant}\label{inv:edge-orientation} For any edge $(u, v)$, if the edge is oriented from $u$ to $v$, then $deg(u) \leq 4deg(v)$. \end{invariant} \iflong We first prove, using the algorithms above and~\cref{obs:reorient}, that the invariant is always maintained by the update algorithm~\cref{alg:orient}. \begin{observation}\label{obs:reorient} An edge $(u,v)$ that was oriented from $u$ to $v$ is reoriented in~\cref{alg:reorient-edges} only if $deg(v)>deg(u)$. \end{observation} \begin{lemma}\label{lem:orient-update} \cref{inv:edge-orientation} is always maintained. \end{lemma} \begin{proof} We prove this lemma via induction on the update step. For the basis of the induction, we consider the graph at the beginning (before any edge update takes place), at which stage the graph is empty. Therefore, the first update would be an edge insertion $(u,v)$. If after this insertion the edge is oriented from $u$ to $v$, then by~\cref{alg:orient} $deg(u) < deg(v)$, so trivially $deg(u)\leq 4deg(v)$. Assume by induction that \cref{inv:edge-orientation} is maintained after update $i$; we now prove that \cref{inv:edge-orientation} is maintained after update $i+1$. Let $(u,v)$ be an edge in the graph after update $i+1$, such that the edge is oriented from $u$ to $v$. \begin{enumerate} \item If $(u,v)$ was inserted during update $i+1$: by~\cref{alg:orient}, the edge is oriented from $u$ to $v$ only if $deg(v)>deg(u)$, so trivially $deg(u)\leq 4deg(v)$.
Note that there is no scenario in which $(u,v)$ was first oriented towards $v$ and then $u$ or $v$ would reorient this edge; this is because the orientation of an edge (both upon insertion and upon reorientation) is determined according to the degrees of its endpoints, and since the degrees of $u$ and $v$ did not change between the insertion and the reorientation, $(u,v)$ will not be reoriented. \item If $(u,v)$ was oriented from $v$ to $u$ after update $i$: Since $(u,v)$ is oriented from $u$ to $v$ after update $i+1$, it means that an edge reorientation was made. By~\cref{obs:reorient} this happens only if $deg(v)>deg(u)$. \item If $(u,v)$ was oriented from $u$ to $v$ after update $i$: by our induction hypothesis, after update $i$ it holds that $deg(u)\leq 4deg(v)$. If the degrees of $u$ and $v$ did not change during update $i+1$, then it still holds that $deg(u)\leq 4deg(v)$. If the degree of $u$ or $v$ changed, then an edge reorientation might occur; in that case, since $(u,v)$ is oriented from $u$ to $v$ after update $i+1$, $deg(v)$ must be greater than $deg(u)$. The only case left to handle is when the degree of $u$ or $v$ changed but an edge reorientation did not occur. Let $t$ be the time when the orientation of $(u, v)$ was last checked and let $deg_t(u)$ and $deg_t(v)$ be the degree of $u$ and $v$, respectively, at time $t$. Since $(u, v)$ is oriented from $u$ to $v$, it must be the case that $deg_t(u) \leq deg_t(v)$. Moreover, $deg(u)$ and $deg(v)$ must be within a factor of two of $deg_t(u)$ and $deg_t(v)$, respectively (otherwise, the edge would have been checked at a later time than $t$). So $deg(u) \leq 2deg_t(u)$ and $deg(v) \geq deg_t(v)/2$. Combining these inequalities, $deg(u)/2 \leq deg_t(u) \leq deg_t(v) \leq 2deg(v)$, which means that $deg(u) \leq 4deg(v)$. \end{enumerate} \end{proof} We prove the following property of the orientation maintained in this manner.
\else \cref{inv:edge-orientation} immediately allows us to prove the following property regarding the outdegree of any vertex in the graph. \fi \begin{lemma}\label{lem:outgoing} Each vertex $v$ has at most $4\sqrt{m}$ outgoing neighbors. \end{lemma} \iflong \begin{proof} Suppose for contradiction that there exists a vertex $v$ with more than $4\sqrt{m}$ outgoing neighbors. Let $D$ be the degree of $v$. Then, by~\cref{inv:edge-orientation}, each outgoing neighbor of $v$ has degree at least $D/4$. By our assumption, because $v$ has $> 4\sqrt{m}$ outgoing neighbors, $D = deg(v) > 4\sqrt{m}$. This means that each outgoing neighbor of $v$ has degree $> \sqrt{m}$. Thus the sum of the degrees in the graph is greater than $4\sqrt{m} \cdot \sqrt{m} = 4m$. Since there are at most $m$ edges, the sum of the degrees cannot be greater than $2m$, a contradiction. Hence, each vertex $v$ has at most $4\sqrt{m}$ outgoing neighbors. \end{proof} Now, every time a vertex $v$ changes its state, $v$ updates all its outgoing neighbors about this change. When the orientation of an edge $(u,v)$ changes, $v$ must send its state to $u$ if the edge is now oriented towards $u$; otherwise, $u$ must send its state to $v$. For the $(\Delta + 1)$-coloring problem we define the state of a vertex as its color, and for the maximal matching problem we define the state of a vertex to be either free or matched. This guarantees that vertices know the color of their incoming neighbors or which of their incoming neighbors are matched. We will see later that by using~\cref{alg:orient} and maintaining the states of the outgoing neighbors as described above, we can solve those problems efficiently. \begin{lemma}\label{lem:incoming-neighbors-property} Each vertex $u$ has complete information on the states of all its incoming neighbors after each update. \end{lemma} \begin{proof} We prove this lemma via induction on the update step.
For the induction basis, we consider the graph at the beginning (before any edge update takes place), at which stage the graph is empty and therefore the condition holds trivially. Assume by induction that all vertices have complete information on the states of their incoming neighbors after update $i$. Let $u$ be an arbitrary vertex in the graph; we prove that $u$ has complete information on the states of its incoming neighbors after update $i+1$. For each incoming neighbor $v$ of $u$, if $v$ was an incoming neighbor of $u$ after update $i$, then by our explanation above, if the state of $v$ changed during update $i+1$, $v$ must have updated $u$ about it. Otherwise, by our induction hypothesis, $u$ has the state of $v$ after update $i$, and since it didn't change during update $i+1$, $u$ still has up-to-date information about $v$. If $v$ wasn't an incoming neighbor of $u$ after update $i$, it means that it became an incoming neighbor of $u$ during update $i+1$. This could happen if either $(u,v)$ was inserted during update $i+1$ or the orientation of $(u,v)$ was changed during this update. According to the explanation above, in both cases $v$ updates $u$ about its state, so $u$ must have the current state of $v$. As this holds for each incoming neighbor $v$ of $u$, the lemma follows. \end{proof} \fi \begin{lemma}\label{lem:message-round-complexity-orient} Given an edge insertion or deletion $(u, v)$,~\cref{alg:orient} requires $O(1)$ message complexity and $O(1)$ round complexity, both worst-case. \end{lemma} \iflong \begin{proof} Whenever the degree of a vertex $u$ falls out of the range $[p_u/2, 2p_u]$, we must check all the edges incident to $u$ and reorient if necessary. This naively costs $O(deg(u))$ messages. To achieve $O(1)$ worst-case message complexity, we use the following procedure. When $deg(u)$ falls outside the range $[p_u/2, 2p_u]$, we first update $p_u$ to be the new degree.
Then, we reorient $20$ edges incident to $u$ during each update incident to $u$, such that after $p_u/10$ updates incident to $u$, all the edges have been reoriented if necessary. The next $p_u/10$ updates incident to $u$ cannot increase the outdegree of $u$ by more than $p_u/10$; thus, the number of outgoing neighbors of $u$ is still $O(\sqrt{m})$. Furthermore, the next $p_u/10$ updates cannot cause $deg(u)$ to fall outside the range $[p_u/4, 4p_u]$, and so another reorientation update cannot occur before the current update has been processed. Finally, since each edge update is incident to two vertices, we charge at most $40$ reorientations to the update (if both endpoints are performing reorientations). As for the number of rounds, in each update $u$ uses at most eight rounds of communication, performing the following tasks (as described in~\cref{alg:orient} and in the above explanation about the state of a vertex): \begin{enumerate} \item If the edge $(u,v)$ was inserted, ask $v$ for its degree and receive it. \item If $deg(u)$ falls outside the range $[p_u/2, 2p_u]$: \begin{enumerate} \item Ask $20$ of its neighbors for their degrees. \item Receive the sent degrees. \item Reorient the edges if necessary. \end{enumerate} \item For new incoming neighbors (due to insertion or reorientation): \begin{enumerate} \item Ask the new incoming neighbors to send their states. \item Receive the sent states. \item Store the received states. \end{enumerate} \end{enumerate} \end{proof} \fi \subsection{Distributed \texorpdfstring{$(\Delta + 1)$}{}-Coloring} In this section, we maintain a $(\Delta + 1)$-coloring in the graph using the edge orientation technique we presented in~\cref{para:orientation}. First, given an edge update, we run~\cref{alg:orient}. For each edge $(u,v)$ that was oriented from $u$ to $v$ and is reoriented (such that it is now oriented from $v$ to $u$), $v$ sends its color to $u$.
Therefore, the following property is maintained: for every edge $(u,v)$ that is oriented from $u$ towards $v$, $v$ knows the current color of $u$. The algorithm works as follows: given an edge insertion $(u, v)$ between two vertices $u$ and $v$ with the same color, pick one of the two vertices arbitrarily, w.l.o.g. $u$, to recolor. $u$ queries all its outgoing neighbors for their colors. Then, $u$ picks a color that does not conflict with any of its neighbors. This can be done since $u$ already has complete information about all its incoming neighbors, so by asking its outgoing neighbors it receives information about all its neighbors' colors. Finally, $u$ informs all its outgoing neighbors of its new color. All of $u$'s outgoing neighbors store $u$'s color. \iflong \begin{algorithm} \caption{$(\Delta + 1)$-Coloring}\label{alg:coloring} \SetAlgoLined \LinesNumbered \KwResult{Maintains a $(\Delta + 1)$-coloring in the graph upon edge insertion or deletion} \textbf{Input: } An edge insertion or deletion $(u,v)$.\\ Call~\cref{alg:orient} on $(u,v)$.\\ \If{there were edge flips}{\For{an edge $(z,x)$ that was flipped, where $z \in \{u,v\}$}{ \If{the edge is oriented towards $x$}{$z$ updates $x$ about its color} \Else{$x$ updates $z$ about its color}}} \If{$(u,v)$ is an inserted edge}{\If{$u$ and $v$ have the same color}{ pick a vertex from $\{u,v\}$ arbitrarily to recolor (w.l.o.g. 
$u$ is picked)\\ \For{each outgoing neighbor $w$ of $u$}{$u$ asks $w$ for its color\\ $w$ responds to $u$ with its color} recall that $u$ has complete information about the colors of all its incoming neighbors (\cref{lem:incoming-neighbors-property}) and now has complete information about all its neighbors\\ $u$ picks a color different from those of all its neighbors\\ \For{each outgoing neighbor $w$ of $u$}{$u$ informs $w$ about its new color}}} \end{algorithm} \fi Because~\cref{alg:coloring} uses the edge orientation technique presented above, \cref{inv:edge-orientation} and~\cref{lem:incoming-neighbors-property} are maintained after each edge insertion and deletion, and the algorithm has the same message and round complexity. \begin{theorem}\label{lem:message-round-complexity-coloring} Starting from an empty graph, there exists a dynamic, distributed \textsc{CONGEST}\xspace $(\Delta + 1)$-coloring algorithm (\cref{alg:coloring}) that maintains a valid coloring of the distributed network in $O(\sqrt{m})$ message complexity and $O(1)$ round complexity, both worst-case. \end{theorem} \iflong \begin{proof} By~\cref{lem:message-round-complexity-orient}, reorienting the edges if necessary costs $O(1)$ worst-case messages and at most $O(1)$ rounds. Now we show that whenever $u$ is recolored, it sends at most $O(\sqrt{m})$ messages and~\cref{alg:coloring} uses $O(1)$ rounds. When $u$ is recolored, it sends its color only to its outgoing neighbors. By~\cref{lem:outgoing}, $u$ has at most $4\sqrt{m}$ outgoing neighbors. Thus, $u$ requires at most $O(\sqrt{m})$ messages to send its color to its outgoing neighbors. This only requires one round of communication. Finally, we can show that the number of messages is $O(\sqrt{m})$ and the number of rounds is $O(1)$ for each edge insertion or deletion. Following any edge deletion, no vertices need to be recolored. A vertex $v$ would only need to check all its edges if its degree decreases below $p_v/2$.
Then, by~\cref{lem:message-round-complexity-orient}, this results in $O(1)$ worst-case messages and $O(1)$ rounds per update. Following any edge insertion $(u, v)$ where $u$ and $v$ have the same color, one of the two must recolor itself. As proved above, $u$ or $v$ requires $O(\sqrt{m})$ messages to send its new color in $O(1)$ rounds. If $u$ and $v$ have different colors, neither of them needs to recolor itself. \end{proof} \fi \subsection{Maximal Matching} We shall follow the above approach for $(\Delta+1)$-coloring to maintain a maximal matching under edge updates in $O(\sqrt{m})$ messages and $O(1)$ rounds, both worst-case, per update. For any edge $(u, v)$ that is oriented from $u$ to $v$, $v$ knows whether $u$ is currently matched or free. Then, given any update, we first perform the edge orientation algorithm. (Any edge reorientation $(v, u)$ causes $v$ to send a message to $u$ indicating whether it is matched or free.) For an edge insertion, no additional updates need to be made. For an edge deletion $(u, v)$, if the deleted edge was an edge in the matching, then we do the following for $u$ and after that for $v$; we describe how to handle $u$, and $v$ is handled in the same way. $u$ first checks whether any of its incoming neighbors are free. If any are free, $u$ arbitrarily picks such an incoming neighbor to match with. If no incoming neighbors of $u$ are free, $u$ asks its outgoing neighbors whether they are free (and its outgoing neighbors send back their answers). If any are free, $u$ arbitrarily picks such a neighbor to match with. This algorithm allows us to achieve the same message and round complexity as our $(\Delta + 1)$-coloring algorithm. The pseudocode for our algorithm is provided in~\cref{alg:matching}.
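The deletion-repair step just described can be sketched sequentially as follows. This is our own illustration, not the distributed pseudocode: `incoming`/`outgoing` reflect the edge orientation, `free` is the set of unmatched vertices, and the message costs are noted in comments.

```python
# Sequential sketch of the repair step after a matched edge incident to z
# is deleted: scan incoming neighbors first (their status is already known
# to z locally), and only then query outgoing neighbors, of which there
# are O(sqrt(m)) by the outdegree bound.
def rematch_after_deletion(z, incoming, outgoing, free, mate):
    for x in incoming.get(z, []):
        if x in free:                     # known locally, no messages
            free.discard(x); free.discard(z)
            mate[z], mate[x] = x, z
            return x
    for w in outgoing.get(z, []):         # one query message per neighbor
        if w in free:
            free.discard(w); free.discard(z)
            mate[z], mate[w] = w, z
            return w
    return None

# After deleting the matched edge (u, v): here u has no free incoming
# neighbor ('a' is matched elsewhere), but a free outgoing neighbor 'b'.
free = {'u', 'v', 'b'}
mate = {}
print(rematch_after_deletion('u', {'u': ['a']}, {'u': ['b']}, free, mate))  # prints: b
```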
\begin{algorithm} \caption{Maximal Matching}\label{alg:matching} \footnotesize \SetAlgoLined \LinesNumbered \KwResult{Maintains a maximal matching in the graph following any edge update} \textbf{Input:} An edge insertion or deletion $(u,v)$.\\ Call~\cref{alg:orient} on $(u,v)$.\\ \If{there were edge flips}{\For{an edge $(z,x)$ that was flipped, where $z \in \{u,v\}$}{ \If{the edge is oriented towards $x$}{$z$ updates $x$ about its current status (whether it is matched or free)} \Else{$x$ updates $z$ about its current status}}} \If{$(u,v)$ is a deleted edge}{\If{$u$ and $v$ were matched}{First do the following algorithm on $u$ and then on $v$ (to avoid the situation in which both of them choose the same neighbor to be matched with)\\ \For{$z \in \{u,v\}$}{ $z$ informs all its outgoing neighbors that it is unmatched\\ \If{there exists an incoming neighbor $x$ of $z$ that is unmatched}{match $z$ with $x$\\ $x$ informs all its outgoing neighbors that it is matched\\ $z$ informs all its outgoing neighbors that it is matched} \Else{ \For{each outgoing neighbor $w$ of $z$}{$z$ asks $w$ if it is unmatched\\ $w$ responds to $z$ if it's unmatched}\If{there exists an unmatched outgoing neighbor $w$ of $z$}{match $z$ with $w$\\ $w$ informs all its outgoing neighbors that it is matched\\ $z$ informs all its outgoing neighbors that it is matched}}}}} \end{algorithm} \begin{theorem}\label{lem:message-round-complexity-mm} Starting from an empty graph, there exists a dynamic, distributed \textsc{CONGEST}\xspace MM algorithm (\cref{alg:matching}) that maintains a maximal matching in the distributed network in $O(\sqrt{m})$ message complexity and $O(1)$ round complexity, both worst-case. \end{theorem} \iflong \begin{proof} For an edge update $(u,v)$, we handle $u$ and $v$ separately. We next show how to handle $u$, but $v$ should be handled in the same way.
If $(u,v)$ was deleted and $u$ and $v$ were matched to each other: $u$ has complete information about its incoming neighbors by~\cref{lem:incoming-neighbors-property}, and, therefore, it can check if one of them is free. If a free incoming neighbor exists, it costs $O(1)$ rounds and messages to match $u$ with this neighbor. After that, notifying all the outgoing neighbors of $u$ about this change costs $O(1)$ rounds and $O(\sqrt{m})$ messages, since $u$ has at most $4\sqrt{m}$ outgoing neighbors by~\cref{lem:outgoing}. If $u$ does not have an unmatched incoming neighbor, then checking all of $u$'s outgoing neighbors and choosing a match, if one exists, also costs $O(1)$ rounds and $O(\sqrt{m})$ messages. If $u$ and $v$ weren't matched to each other, or if $(u,v)$ was inserted, there is no need to look for a match for $u$ or $v$. The edge insertion or deletion might cause reorientation of edges; by~\cref{lem:message-round-complexity-orient}, reorienting the edges, if necessary, costs at most $O(1)$ rounds and messages. \end{proof} \fi \section{\texorpdfstring{$3/2$-Approximate Maximum Cardinality Matching}{}}\label{sec:MCM} In this section, we provide a distributed algorithm that uses $O(\sqrt{m})$ messages and $O(\log \Delta)$ rounds in the CONGEST model to maintain a $3/2$-approximate maximum cardinality matching (MCM) in the input graph under edge insertions and deletions. Our algorithm is based on the sequential algorithm of~\cite{NS13}. The main challenge we must surmount in the distributed setting is the fact that vertices do not know $m_{cur}$, the current number of edges in the graph. Unfortunately, unlike the case with MM and $(\Delta + 1)$-coloring, it is no longer sufficient to bound the number of outgoing edges of a vertex using the edge orientation algorithm.
Thus, in this section, we show a technique to differentiate a low-degree\xspace vertex from a high-degree\xspace vertex using information obtained from the $2$-hop neighborhood of a vertex, without knowledge of the global $m_{cur}$. As in~\cref{sec:simple}, we define for convenience $m = m_{cur}$. The message complexity is given in terms of $m_{avg}$ since it is an amortized bound in this case. The round complexity is given in terms of $m_{cur}$ since it is a worst-case bound. Let $G=(V,E)$ be an arbitrary graph and let $M$ be an arbitrary matching for $G$. The edges of $M$ are called \emph{matched} edges and the \emph{unmatched} edges are the remaining edges $E \setminus M$. An \emph{augmenting path} with respect to $M$ is a path whose edges alternate between edges in $M$ and edges in $E \setminus M$ and which starts and ends at different \emph{free} vertices (i.e.\ vertices which are not matched). It is well-known that if $G$ does not have an augmenting path of length $3$, then $M$ is a $3/2$-approximate MCM \cite{HK73}. The natural algorithmic idea is therefore to eliminate all augmenting paths of length at most $3$ from the graph. Ideally, we would like to use the edge orientation technique to efficiently maintain a $3/2$-approximate MCM, as we did for MM and $(\Delta + 1)$-coloring. We know how to use the edge orientation technique to efficiently maintain a maximal matching. However, to determine whether or not there is an augmenting path of length $3$ in the graph, and if so to find it, the vertices adjacent to the edge update must have updated information not only about their neighbors, but also about their neighbors' neighbors, which makes the edge orientation technique insufficient for solving this problem.
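To make the length-$3$ case concrete, here is a minimal sequential illustration (our own naming, not the paper's pseudocode) of eliminating an augmenting path $(u, v, v', w)$, where $(v, v')$ is matched and $u, w$ are free, by replacing the matched edge $(v, v')$ with $(u, v)$ and $(v', w)$:

```python
# Eliminating a length-3 augmenting path (u, v, vp, w), where (v, vp) is
# matched and u, w are free: replace (v, vp) with (u, v) and (vp, w).
# The matching grows by exactly one edge.
def augment_length3(matching, u, v, vp, w):
    new_matching = set(matching)
    new_matching.remove(frozenset((v, vp)))
    new_matching.add(frozenset((u, v)))
    new_matching.add(frozenset((vp, w)))
    return new_matching

M = {frozenset(('v', 'vp'))}
M2 = augment_length3(M, 'u', 'v', 'vp', 'w')
print(len(M2) - len(M))  # prints: 1
```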
To find an augmenting path, if one exists, we first partition (using the procedure explained below) the vertices into low-degree\xspace and high-degree\xspace vertices based on a threshold of $\Theta(\sqrt{m})$ (high-degree\xspace vertices have degree greater than $\sqrt{2m}$ and low-degree\xspace vertices have degree at most $\sqrt{2m}$). We ensure that a high-degree\xspace vertex $w$ does not go through all its neighbors each round to find an augmenting path. We do this by using an algorithm that finds a \emph{surrogate} for each high-degree\xspace vertex $w$ that becomes free. A \emph{surrogate} is a vertex $v'$ that is matched to a neighbor $v$ of $w$, such that the degree of $v'$ is at most $\sqrt{2m}$. The key observation that was made in \cite{NS13} is the following: \emph{each unmatched high-degree\xspace vertex can always find a free neighbor or surrogate in the first $O(\sqrt{m})$ neighbors it queries}. Given this fact, we are able to look only at as many neighbors of $w$ as necessary to find a surrogate. However, since we are working in the distributed setting, we don't know the current value of $m$. Therefore, the algorithm works as follows: for a vertex $v$ that became free during an edge update, in each round we ``guess'' the number of neighbors we would like to check by successively \emph{doubling our number of neighbors to check} (starting with $1$ neighbor), until we find a free neighbor or surrogate, or there are no more neighbors to check. Later on we show that this algorithm checks at most $O(\sqrt{m})$ neighbors, even though it may not have a good estimate of the value of $m$. Using this algorithm we maintain the invariant that a high-degree\xspace vertex is \emph{always} matched after an update that is incident to it.
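The successive-doubling guess can be sketched sequentially as follows. This is a sketch under our own simplification (neighbors are probed cumulatively and the loop stops when they are exhausted, whereas the pseudocode bounds the loop by $2\log(deg(u))$); the threshold $\lfloor\sqrt{2^i}\rfloor$ matches the degree bound checked in the surrogate test.

```python
import math

# Sequential sketch of the doubling surrogate search: in iteration i, probe
# up to floor(sqrt(2^i)) further neighbors of z and accept a neighbor's
# mate of degree at most that threshold as a surrogate.
def find_surrogate(z, neighbors, deg, mate):
    i, checked = 1, 0
    while checked < len(neighbors):
        threshold = math.floor(math.sqrt(2 ** i))
        batch = neighbors[checked:checked + threshold]
        checked += len(batch)
        for v in batch:
            v_mate = mate.get(v)
            if v_mate is not None and deg[v_mate] <= threshold:
                return v_mate          # a low-degree surrogate was found
        i += 1
    return None                        # no surrogate: z must be low-degree

deg = {'a': 5, 'b': 5, 'c': 5, 'x': 9, 'y': 9, 's': 2}
mate = {'a': 'x', 'x': 'a', 'b': 'y', 'y': 'b', 'c': 's', 's': 'c'}
print(find_surrogate('z', ['a', 'b', 'c'], deg, mate))  # prints: s
```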
Furthermore, assuming there were no augmenting paths of length $3$ before the update, after the update only vertices that are incident to the edge update or vertices that became free during the update can be part of new augmenting paths of length $3$. For a vertex that is incident to an edge update, if it is matched after the update then it cannot be the start or the end of an augmenting path of length $3$, since an augmenting path must start and end at unmatched vertices. This means that a high-degree vertex that is incident to an edge update will not be the start or the end of any augmenting path during this update unless it becomes low-degree. Hence, we only need to look for augmenting paths that start at free low-degree vertices. This leads to a natural procedure: for a free low-degree vertex $u$, we look at each of its matched neighbors, say $v$, and ask $v$ whether its mate $v'$ has a free neighbor, say $w$. For any such path $(u, v, v', w)$, we remove from the matching the edge $(v,v')$ and add instead the edges $(u, v)$ and $(v', w)$. It is easy to verify (e.g.\ \cite{NS13}) that this eliminates all augmenting paths that start or end at $u$. We describe this algorithm in detail below. This algorithm allows us to obtain our desired result: \begin{theorem}\label{lem:message-round-complexity-mcm} Starting from an empty graph, there exists a dynamic, distributed \textsc{CONGEST}\xspace $(3/2)$-approximate MCM algorithm (\cref{alg:maximum-matching-deletion}) that maintains a $(3/2)$-approximate MCM in the graph in $O(\sqrt{m_{avg}})$ amortized message complexity and $O(\log \Delta)$ worst-case round complexity. \end{theorem} \iflong \myparagraph{Data Structures} Each vertex $v$ maintains the following information in its local memory: \begin{enumerate} \item A variable $mate_v$ that keeps its mate in the matching (if it has one; otherwise it keeps $\varnothing$). \item A variable $d_v$ that keeps its degree.
\item A linked list $N_v$ that maintains all its neighbors. \item A counter $f_v$ of the number of its free neighbors. \item A linked list $F_v$ that maintains all its free neighbors. \end{enumerate} \begin{algorithm} \caption{$3/2$-approximate MCM: Edge insertion $(u, v)$}\label{alg:maximum-matching-insertion} \SetAlgoLined \LinesNumbered \KwResult{A $3/2$-approximate MCM in the graph.} \textbf{Input: } An edge insertion $(u, v)$.\\ \If{$u$ and $v$ are both free}{Match($u,v$).\\ Add edge $(u,v)$ to the matching.\\ $u$ notifies all its neighbors that it is matched.\\ $v$ notifies all its neighbors that it is matched.\\} \ElseIf{$u$ is free}{Find\_Mate($u$,$v$)}\If{$v$ is free}{Find\_Mate($v$,$u$)} \end{algorithm} \begin{algorithm} \caption{$3/2$-approximate MCM: Edge deletion $(u, v)$}\label{alg:maximum-matching-deletion} \SetAlgoLined \small \LinesNumbered \KwResult{$3/2$-approximate MCM in the graph.} \textbf{Input: } An edge deletion $(u, v)$.\\ \For{$z\in\{u,v\}$}{ \If{$u$ and $v$ were matched to each other}{ \If{$z$ has at least one free neighbor}{ $z$ chooses a free neighbor $w$ arbitrarily.\\ Match($z,w$).\\ Add edge $(z,w)$ to the matching.\\ $w$ notifies all its neighbors that it is matched.\\} {\Else{//Makes a call to~\cref{alg:maximum-matching-surrogate}.\\$w'$ = Surrogate($z$)\\ \If{$w' \neq \varnothing$\\ //In this case, it is guaranteed that $z$ is matched and $w'$ is free.\\}{Match\_Surrogate($w'$)\\} \Else{$z$ notifies all its neighbors that it is free.\\ $z$ changes its mate: $mate_z \leftarrow \varnothing$.\\Aug-path($z$)}}}} \ElseIf{$z$ is free\\ //We want to make $z$ a matched vertex in case it is high-degree\xspace.\\}{$w'$ = Surrogate($z$)\label{line:find-surrogate}\\ \If{$w' \neq \varnothing$}{ Match\_Surrogate($w'$)\\ $z$ informs all its neighbors that it is matched.\\ }}} \end{algorithm} \begin{algorithm} \caption{Find\_Mate$(u, v)$}\label{alg:find-mate} \SetAlgoLined \LinesNumbered \KwResult{Find a mate to vertex $u$ if it exists.} 
\textbf{Input: } A vertex $u$ that is free and a vertex $v$ that is $u$'s neighbor and is matched.\\ $u$ asks $v$ for $v$'s mate. $v' \leftarrow mate_v$.\\ \If{$v'$ has a free neighbor which is not $u$\label{line:mate-aug-path}}{$v'$ chooses a free neighbor $x$ arbitrarily.\\ Match$(u,v)$.\\ Match$(v',x)$.\\ $u$ notifies all its neighbors that it is matched.\\ $x$ notifies all its neighbors that it is matched.} \Else{//Makes a call to~\cref{alg:maximum-matching-surrogate}.\\$w'$ = Surrogate($u$)\label{line:surrogate-mate}.\\ \If{$w' \neq \varnothing$\\ //In this case, $u$ is matched and $w'$ is free.\\} {$u$ notifies all its neighbors that it is matched.\\ Match\_Surrogate($w'$)}} \end{algorithm} \begin{algorithm} \caption{Match$(u,v)$}\label{alg:maximum-matching-match} \SetAlgoLined \LinesNumbered \KwResult{Match between two vertices $u$ and $v$} \textbf{Input: } Two vertices $u$ and $v$ such that the edge $(u,v)$ is in the graph ($u$ and $v$ might be free or matched before the call to this procedure, so updating their neighbors that they became matched is taken care of in the procedures that make the call to this algorithm).\\ update the mate of $u$ to be $v$\\ update the mate of $v$ to be $u$\\ \end{algorithm} \begin{algorithm} \caption{Aug-path$(u)$}\label{alg:aug-path} \SetAlgoLined \LinesNumbered \KwResult{Find an augmenting path starting from $u$ if it exists.} \textbf{Input: } A vertex $u$ that is free and does not have a free neighbor.\\ $u$ sends a message to all of its neighbors, notifying that it is looking for an augmenting path.\\ \For{each vertex $w$ that received a message from $u$ in the previous round} {$w$ sends a message to its mate $w'$ and asks if it has a free neighbor.\\ \If{$w'$ replies back that it has a free neighbor} {$w$ sends a message to $u$ that $w'$ is an option for an augmenting path.}} \If{$u$ receives back an option for an augmenting path}{$u$ chooses one of the options arbitrarily (say $w'$) and sends back to $w$ (the neighbor 
of $w'$ that is connected to $u$) that $w'$ is chosen.\\ Match($u,w$)\\ $u$ notifies all its neighbors that it is matched.\\ $w'$ chooses a free neighbor $x$ arbitrarily.\\ Match($w',x$)\\ $x$ notifies all its neighbors that it is matched.} \end{algorithm} \begin{algorithm} \caption{Surrogate$(u)$}\label{alg:maximum-matching-surrogate} \SetAlgoLined \LinesNumbered \KwResult{Find a surrogate for $u$ if exists} \textbf{Input: } A vertex $u$\\ Initialize $i=1$\\ \While{$u$ did not find a surrogate and $i\le 2\log(deg(u))$}{ $u$ sends a message to $\floor*{\sqrt{2^i}}$ arbitrary neighbors, notifying that it looks for a surrogate.\\ \For{each vertex $w$ that received a message from $u$ in the previous round}{$w$ sends a message to its mate $w'$ and asks for its degree.\\ \If{$w'$ replies back that $deg(w') \le \floor*{\sqrt{2^i}}$}{$w$ sends a message to $u$ that $w'$ is a candidate for a surrogate}} \If{$u$ receives back a candidate for a surrogate}{$u$ chooses one of the candidates arbitrarily (say $w'$) and sends back to $w$ (the neighbor of $w'$ that is connected to $u$) that $w'$ is chosen\\ Match($u,w$)\\ $w'$ notifies all its neighbors that it is free.\\ $w'$ changes its mate: $mate_{w'} \leftarrow \varnothing$.\\ //In this case the output is a surrogate $w'$ which is now free.\\ \Return $w'$} \Else{$i = i + 1$}} \Return $\varnothing$ \end{algorithm} \begin{observation} After~\cref{alg:maximum-matching-surrogate} is called on $u$, if the return value of this algorithm is a surrogate $w'$ of $u$, then $u$ is matched and $w'$ is free. 
\end{observation} \begin{algorithm} \caption{Match\_Surrogate$(u')$}\label{alg:match-surrogate} \SetAlgoLined \LinesNumbered \KwResult{Find a match for $u'$ if one exists.} \textbf{Input: } A vertex $u'$ which is a surrogate to a vertex $u$; in particular $u'$ is free.\\ \If{$u'$ has a free neighbor}{$u'$ chooses a free neighbor $w'$ arbitrarily\\ Match ($u',w'$)\\ $w'$ notifies all its neighbors that it is matched\\}{\Else{Aug-path($u'$)}\label{line:find-aug-deletion}} \end{algorithm} \subsection{Analysis} \begin{lemma}\label{obs:surrogate} Assume~\cref{alg:maximum-matching-surrogate} is called with $u$ as its input (i.e., for finding surrogate($u$)). If $u$ is not matched after the call to this algorithm then $deg(u) \le \sqrt{2m}$. \end{lemma} \begin{proof} As proved in \cite{NS13}, if a vertex $u$ has more than $\sqrt{2m}$ neighbors, then at least one of them has a mate that can serve as a surrogate. A surrogate of $u$ is a vertex $w'$ that has degree $deg(w') \le \sqrt{2m}$ and is a mate of one of $u$'s neighbors. Assume by contradiction that $u$ has no surrogate but has more than $\sqrt{2m}$ neighbors. Since each of $u$'s neighbors has a different mate, there are more than $\sqrt{2m}$ vertices that are mates of neighbors of $u$. By our assumption, none of them can serve as a surrogate; by the definition of a surrogate, this means that all of them have degree $> \sqrt{2m}$. Hence, the sum of the degrees of all these vertices exceeds $\sqrt{2m} \cdot \sqrt{2m} = 2m$. Since the sum of the degrees in the graph is $2m$, we get a contradiction. We say that vertex $w'$ is a \textit{potential surrogate of $u$} if $w'$ is the mate of some neighbor $w$ of $u$. In~\cref{alg:maximum-matching-surrogate}, at each iteration $i$ of the loop we check, for each potential surrogate $w'$, whether $deg(w') \le \floor*{\sqrt{2^i}}$.
If we go through all of $u$'s potential surrogates without finding a surrogate, then $deg(u) \le \sqrt{2m}$: otherwise, the loop runs for at least $\log(2m)$ iterations, since the loop goes up to iteration $2\log(deg(u))$, which in this case is at least $2\log(\sqrt{2m}) = \log(2m)$. At iteration $\floor{\log(2m)}$ we would check whether $deg(w') \le \floor*{\sqrt{2^{\log(2m)}}} = \sqrt{2m}$ for each potential surrogate $w'$, and by our proof above it is not possible that all of the potential surrogates have degree greater than $\sqrt{2m}$. If $deg(u) > \sqrt{2m}$, we must find a surrogate after at most $\log(2m)$ iterations, as explained above. Therefore, we get that if $u$ is not matched after this algorithm then $deg(u) \le \sqrt{2m}$. \end{proof} \begin{invariant}\label{inv:high-degree} For a vertex $u$ that is adjacent to an edge update $(u,v)$, if $deg(u) > \sqrt{2m}$ then after the update $u$ must be matched. \end{invariant} \begin{lemma}\label{lem:high-deg-matched} \cref{inv:high-degree} is maintained by our algorithm after each update. \end{lemma} \begin{proof} Suppose an edge update $(u,v)$ occurred such that $deg(u)>\sqrt{2m}$. \paragraph{If $(u,v)$ was deleted:} In that case, if $u$ is matched and its mate is not $v$, then after the update it is still matched and there is nothing to do. If $u$ was matched to $v$, then we must find it a new mate. If $u$ has a free neighbor $w$, then we match it with $w$. If not, then we find a surrogate for $u$ by calling~\cref{alg:maximum-matching-surrogate}. By~\cref{obs:surrogate}, the algorithm finds a surrogate (and therefore a match) for $u$ since $deg(u) > \sqrt{2m}$. Thus, $u$ becomes matched after the update. \paragraph{If $(u,v)$ was inserted:} If $u$ is matched then the update doesn't affect it and it stays matched. If $u$ is free, then we first check if $u$ can be matched to $v$ (either if $v$ is free or if $u$ and $v$ are part of an augmenting path).
If $u$ can't be matched to $v$, then~\cref{alg:maximum-matching-surrogate} is called on $u$. By~\cref{obs:surrogate}, since $deg(u) > \sqrt{2m}$, $u$ will be matched after running this algorithm. \\\\ In both cases, $u$ is matched after the update. \end{proof} \begin{lemma}\label{lem:aug-path} If~\cref{alg:aug-path} (Aug-path($u$)) is called with a vertex $u$ as its input and there exists an augmenting path of length $3$ such that $u$ is one of its endpoints, then~\cref{alg:aug-path} will find it. \end{lemma} \begin{proof} The algorithm is called on a vertex $u$ that became free during some update and has no free neighbors. The only way for an augmenting path of length $3$ to start or end with $u$ is for one of $u$'s matched neighbors $v$ to have a mate $v'$ which has a free neighbor $w$. During~\cref{alg:aug-path}, $u$ checks with all of its matched neighbors' mates to see if any has a free neighbor. If such a free vertex $w$ exists, then $u$ and $v$ become matched together and $v'$ and $w$ become matched together ($v$ is no longer matched to $v'$). Since $u$ is now matched, it cannot be the start or the end of any augmenting path of length $3$. \end{proof} \begin{lemma} \cref{alg:maximum-matching-deletion} and~\cref{alg:maximum-matching-insertion} maintain a $(3/2)$-approximate maximum matching in the graph after each edge update. \end{lemma} \begin{proof} We prove by induction on the update step that after each update, there are no augmenting paths of length at most $3$ with respect to the current matching, which implies that the maintained matching is a $(3/2)$-approximate maximum matching. In the base case, the graph is empty and a $(3/2)$-approximate maximum matching exists trivially. We assume as our induction hypothesis that after the $k$-th update step, there are no augmenting paths of length at most $3$. We now prove this for the $(k + 1)$-st step. Suppose that there is an edge update between the vertices $u,v$ for the $(k + 1)$-st step.
\paragraph{If $(u,v)$ was deleted from the graph:} Since the algorithm is symmetric with respect to $u$ and $v$, we prove that the lemma holds for $u$, and so it also holds for $v$. \begin{enumerate} \item If $u$ and $v$ weren't matched: then the deletion of the edge wouldn't affect the matching. However, since we need to maintain the invariant (\cref{inv:high-degree}) that if $deg(u) > \sqrt{2m}$ then it should be matched after the update, we search for a surrogate for $u$ as described in~\cref{line:find-surrogate} in~\cref{alg:maximum-matching-deletion}. If $deg(u) > \sqrt{2m}$ then by~\cref{obs:surrogate} a surrogate must exist; otherwise, if $u$ does not have any free neighbor and it does not have a surrogate, then we know that $deg(u) \leq \sqrt{2m}$. If we find a surrogate $u'$, then $u'$ becoming free might create augmenting paths of length $3$ (with $u'$ as one of the endpoints of the augmenting paths). Therefore, if $u'$ doesn't have a free neighbor, we look for an augmenting path that starts at $u'$, as described in~\cref{line:find-aug-deletion} of~\cref{alg:match-surrogate}. By~\cref{lem:aug-path}, if such a path exists then we will find it. Note that if $u'$ has a free neighbor $w$, they become matched. After becoming matched, $u'$ can't be part of an augmenting path since, by the maximality of the matching, $w$ could not have had a free neighbor (otherwise, $w$ would have been matched). \item If $u$ and $v$ were matched, by~\cref{alg:maximum-matching-deletion}, we look for a surrogate or a mate for $u$. \begin{enumerate} \item If $u$ did not find any mate, then by~\cref{obs:surrogate}, $deg(u) \leq \sqrt{2m}$ and $u$ doesn't have a free neighbor. We still might find an augmenting path with $u$ as one of its endpoints, so~\cref{alg:aug-path} is called with $u$ as its input. By~\cref{lem:aug-path}, if there is an augmenting path of length at most $3$ with $u$ as one of its endpoints,~\cref{alg:aug-path} must find it.
Note that any new augmenting path must contain one of $u$ or $v$ as an endpoint; no other augmenting path exists by our induction hypothesis (if $u$ or $v$ is not part of the augmenting path, then the augmenting path existed before this update, which is not possible, and if $u$ and/or $v$ are part of an augmenting path but are not one of the endpoints of the path, then the maximality of the matching is not preserved). \item If we do find a match for $u$ then there is no augmenting path that $u$ is part of. First, assume that $u$ had a free neighbor $w$ and therefore was matched to it. In this case, an augmenting path that includes $u$ can exist only if $w$ has a free neighbor, and this is not possible by the maximality of the matching. If $u$ doesn't have a free neighbor then $u$ must have a surrogate $u'$. In~\cref{alg:match-surrogate} we check if $u'$ has a free neighbor, and if so we match $u'$ with its free neighbor. If $u'$ doesn't have a free neighbor, then we check for an augmenting path that starts or ends at $u'$. Again, by~\cref{lem:aug-path}, if there is an augmenting path of length $3$ that has $u'$ as one of its endpoints,~\cref{alg:aug-path} must find it. Since the only vertex that became and remained free during this update is $u'$, there can't be another augmenting path of length $3$ in the graph by our induction hypothesis. Again, if $u$ and/or $u'$ are matched, then there can't be an augmenting path that $u$ or $u'$ is part of. \end{enumerate} We have shown that following any edge deletion, there is no augmenting path of length at most $3$ in the graph. \end{enumerate} \paragraph{If $(u,v)$ was inserted to the graph:} \begin{enumerate} \item If $u$ and $v$ are both free then we match them.
After that, we can't have any augmenting path of length at most $3$ that contains $(u, v)$ in the middle, since that would imply that both $u$ and $v$ had a free neighbor before the update; by our induction hypothesis, the matching was maximal before the update, so this case is impossible. Also, trivially, $u$ and $v$ can't be endpoints of an augmenting path, since they are matched. \item If exactly one of $u$ and $v$ is free, w.l.o.g. assume it is $u$: then a new augmenting path might occur between $u$, $v$ and $v$'s mate (say $v'$). By~\cref{alg:find-mate}, we first check if $v'$ has a free neighbor; if so, we have an augmenting path, and therefore we remove the edge $(v,v')$ from the matching, match $u$ with $v$ and match $v'$ with its free neighbor. After that, there is no other possibility for an augmenting path of length $3$: such a path would have already existed before this update, in contradiction to our induction hypothesis. Since there was a maximal matching before the update, $u$ doesn't have a free neighbor after the update and maximality is preserved. However, after this update $u$ might be a high-degree vertex; therefore we look for a surrogate for $u$ as described in~\cref{alg:find-mate} in~\cref{line:surrogate-mate}. The surrogate $w'$ of $u$ (if one exists) might have a free neighbor or might be the start of an augmenting path of length $3$; therefore, we first match $w'$ with a free neighbor if one exists, and otherwise call~\cref{alg:aug-path} on $w'$. By~\cref{lem:aug-path}, if there is such a path, we will find it. As we noted above, if $w'$ is matched to a free neighbor, it can't be part of an augmenting path by the maximality of the matching. \end{enumerate} Hence, in the insertion case as well, there is no augmenting path of length $3$ in the graph.
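As an illustration, the length-at-most-$3$ augmenting-path search that the case analysis above relies on can be sketched in Python. This is a hypothetical centralized helper, not the paper's distributed pseudocode: `adj` is an adjacency list and `mate` maps each matched vertex to its partner.

```python
def find_aug_path(u, adj, mate):
    """Try to enlarge the matching from the free vertex u.

    Returns True if u became matched, either directly to a free
    neighbor or via a length-3 augmenting path u - w - w' - x.
    """
    assert u not in mate
    # Length-1 path: u has a free neighbor.
    for w in adj[u]:
        if w not in mate:
            mate[u], mate[w] = w, u
            return True
    # Length-3 path: u - w - mate[w] - x with x free.  All of u's
    # neighbors are matched here, since the loop above failed.
    for w in adj[u]:
        wp = mate[w]                      # w is matched; wp is its mate
        for x in adj[wp]:
            if x != u and x not in mate:  # free endpoint on the far side
                mate[u], mate[w] = w, u   # augment: unmatch (w, wp) ...
                mate[wp], mate[x] = x, wp # ... and flip along the path
                return True
    return False
```

The augmentation simply overwrites the old pairing of $(w, w')$, which matches the message-count intuition in the proof: only the endpoints and the middle edge change.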
This completes the induction step: there are no augmenting paths of length at most $3$, and therefore we maintain a valid $3/2$-approximate maximum matching in the graph after each update. \end{proof} \begin{observation}\label{obs:surrogate-deg} If $u'$ is a surrogate of a vertex $u$ that was found during~\cref{alg:maximum-matching-surrogate}, then $deg(u') \le \sqrt{2m}$. \end{observation} \begin{proof} By~\cref{alg:maximum-matching-surrogate}, if we found the surrogate $u'$ at iteration $i$, then its degree must satisfy $deg(u') \le \floor*{\sqrt{2^i}}$. As explained in the proof of~\cref{obs:surrogate}, if we find a surrogate we must find it within at most $\log(2m)$ iterations, so $i \le \log(2m)$, which implies that $deg(u') \le \floor*{\sqrt{2^{\log(2m)}}} = \sqrt{2m}$. \end{proof} \begin{observation}\label{obs:aug-path} Only vertices $v$ with degree $deg(v) \le \sqrt{2m}$ can be an input to~\cref{alg:aug-path} as used by our algorithm. \end{observation} \begin{proof} In~\cref{alg:find-mate} and~\cref{alg:match-surrogate} we call~\cref{alg:aug-path} only with a surrogate as its input. The surrogate $u'$ can have degree at most $\sqrt{2m}$ by~\cref{obs:surrogate-deg}. In~\cref{alg:maximum-matching-deletion} we first call~\cref{alg:maximum-matching-surrogate} on a vertex $u$ that was incident to the edge deletion, and only if a surrogate for $u$ hasn't been found do we call~\cref{alg:aug-path}. By~\cref{obs:surrogate}, a vertex that is not matched after the call to~\cref{alg:maximum-matching-surrogate} must have degree $\le \sqrt{2m}$. \end{proof} \begin{observation}\label{obs:iterations} In~\cref{alg:maximum-matching-surrogate}, the while loop must end after at most $2\log(\Delta)$ iterations.
\end{observation} \begin{proof} In every iteration in which we didn't find a surrogate for a vertex $u$, $i$ is incremented by $1$; the loop terminates either when we find a surrogate or when $i$ reaches $2\log(deg(u)) \le 2\log(\Delta)$, so at most $O(\log{\Delta})$ iterations of the while loop are performed. \end{proof} We are now ready to prove~\cref{lem:message-round-complexity-mcm}. \begin{proof} We show that each procedure takes $O(\sqrt{m_{avg}})$ amortized messages and at most $O(\log{\Delta})$ rounds in the worst case. \paragraph{Finding a surrogate (\cref{alg:maximum-matching-surrogate}):} The while loop in the algorithm takes at most $O(\log{\Delta})$ rounds: by~\cref{obs:iterations} the number of iterations of the while loop is at most $O(\log{\Delta})$, and each iteration takes $O(1)$ rounds, so the total number of rounds for the loop is $O(\log{\Delta})$. The rest of the algorithm takes at most $O(1)$ rounds, and therefore the total number of rounds in the worst case is $O(\log{\Delta})$. As for the number of messages, in iteration $i$ the number of messages is at most $O(\sqrt{2^i})$. Since $i$ increments by $1$ in each iteration and the number of iterations is at most $2\log(\Delta) \le 2\log(m)$, the total number of messages in the while loop is $\sum_{i=1}^{\log{m}}\sqrt{2^i} = O(\sqrt{m})$. In the rest of the algorithm the number of messages is at most $O(1)$, and therefore the total number of messages is $O(\sqrt{m})$. \paragraph{Finding an augmenting path (\cref{alg:aug-path}):} It is easy to see that all the operations in the algorithm take $O(1)$ rounds. It is also easy to verify that the number of messages is $O(deg(u)+deg(x))$, where $u$ is the input of the algorithm and $x$ is the free vertex that is part of the augmenting path, if such a path exists. By~\cref{obs:aug-path}, the degree of a vertex that is an input to this algorithm is at most $O(\sqrt{m})$, so the total number of messages is $O(\sqrt{m} + deg(x))$.
We have no control over the degree of $x$, hence we next bound the \emph{amortized} number of messages sent due to this algorithm. If $deg(x) \le c\sqrt{2m}$ for some constant $c$ then we are done; henceforth, assume that $deg(x)>c\sqrt{2m}$ for some constant $c>1$. First, note that $x$ was free before this algorithm and became matched during this algorithm. By~\cref{inv:high-degree} we know that a vertex with degree $> \sqrt{2m}$ that is adjacent to an edge update would be matched after the update; therefore, if $x$ has degree $> \sqrt{2m}$ and is free at the beginning of the update, it had degree $\le \sqrt{2m}$ after processing the previous update adjacent to it. Observe that $x$ can change its high-degree/low-degree designation without being adjacent to an edge update only if the number of edges $m$ in the graph changes (some of the updates might be adjacent to $x$, but its degree during these updates is at most $\sqrt{2m}$; otherwise, by~\cref{inv:high-degree}, $x$ would have to be matched). For the degree of a vertex $x$ to change from $\le \sqrt{2m}$ to $> c\sqrt{2m}$ without edge updates adjacent to it, at least $\Omega(m)$ updates need to occur. After those $\Omega(m)$ updates, the number of vertices like $x$, which became high-degree without edge updates adjacent to them and remained free, is at most $\sqrt{2m}$ (this is because the number of vertices with degree $> \sqrt{2m}$ in the graph is at most $\sqrt{2m}$), so if the cost of handling each such vertex is $O(m)$, the total cost of handling all of these vertices is $O(m\sqrt{m})$ messages. However, over $\Omega(m)$ updates, we can collect a total of $\Omega(m\sqrt{m})$ coins by charging $\Theta(\sqrt{m})$ extra coins to each update. Therefore, we charge an extra $O(\sqrt{m})$ coins per update, and get an amortized bound of $O(\sqrt{m})$ on the number of messages.
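In symbols, the charging scheme above balances as follows (a restatement of the argument, not an additional claim):

```latex
\[
\underbrace{\Omega(m)}_{\text{updates}}
\cdot
\underbrace{\Theta(\sqrt{m})}_{\text{extra coins per update}}
\;=\;
\Omega(m\sqrt{m})
\;\ge\;
\underbrace{O(\sqrt{2m})}_{\text{free high-degree vertices}}
\cdot
\underbrace{O(m)}_{\text{messages per such vertex}} .
\]
```

Hence the $O(m\sqrt{m})$ total message cost of handling these vertices is fully paid for by the extra $O(\sqrt{m})$ charge per update.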
\paragraph{Handling edge insertion $(u,v)$ (\cref{alg:maximum-matching-insertion}):} If $u$ and $v$ are both free then matching them takes $O(1)$ rounds and messages, and notifying all of their neighbors about their new status takes $O(1)$ rounds and $O(deg(u)+deg(v))$ messages. If only one of them is free, w.l.o.g. $u$, we first check if there is an augmenting path that includes $u$ and $v$ (as described in~\cref{alg:find-mate} in~\cref{line:mate-aug-path}). Checking whether such an augmenting path exists costs $O(1)$ rounds and messages. If such a path exists, we match $u$ with $v$ and the mate of $v$ with the free vertex that we found (denote it by $x$). That costs $O(1)$ rounds and $O(deg(u)+deg(x))$ messages. If there isn't an augmenting path that $u$ and $v$ are part of, we then try to find a surrogate for $u$, which costs $O(\log{\Delta})$ rounds and $O(\sqrt{m})$ messages as analyzed above. If $u$ has a surrogate $u'$, then notifying all the neighbors of $u$ that it is matched costs $O(1)$ rounds and $O(deg(u))$ messages. Then, $u'$ also checks if it has a free neighbor $w'$; if so, matching them and notifying all the neighbors of $w'$ costs $O(1)$ rounds and $O(deg(w'))$ messages. If $u'$ doesn't have a free neighbor then we call~\cref{alg:aug-path} with $u'$ as its input; by~\cref{obs:surrogate-deg} and as analyzed above, this costs $O(1)$ rounds and $O(\sqrt{m})$ amortized messages. Overall, the total cost of this algorithm is at most $O(\log{\Delta})$ rounds and $O(deg(u)+deg(v)+deg(x)+deg(w')+\sqrt{m})$ messages. Since $u, v, x, w'$ were free before this algorithm and became matched during it, we can bound their cost as in the analysis of~\cref{alg:aug-path} above, and get that the total cost of this algorithm is at most $O(\log{\Delta})$ rounds and $O(m)$ messages in the worst case; amortizing the messages over all updates yields $O(\sqrt{m})$ amortized messages.
\paragraph{Handling edge deletion $(u,v)$ (\cref{alg:maximum-matching-deletion}):} We analyze the round and message complexities of this algorithm with respect to one of the endpoints of this edge, w.l.o.g.\ $u$; the argument for $v$ is symmetric. If $u$ and $v$ were matched, then checking if $u$ has a free neighbor and, if so, matching it costs $O(1)$ rounds and messages. $u$'s new match $w$ must update all its neighbors, which costs $O(deg(w))$ messages. If $u$ doesn't have a free neighbor then finding a surrogate for $u$ costs $O(\log{\Delta})$ rounds and $O(\sqrt{m})$ messages. If $u$ has a surrogate $z'$, then finding a free neighbor $w'$ of $z'$ and matching them costs $O(1)$ rounds and messages. Notifying all neighbors of $w'$ that $w'$ is matched costs $O(deg(w'))$ messages. If $z'$ doesn't have a free neighbor then we call~\cref{alg:aug-path} on $z'$; this costs $O(1)$ rounds and $O(\sqrt{m})$ amortized messages as analyzed above. If $u$ doesn't have a surrogate then notifying all of $u$'s neighbors that it is free costs $O(1)$ rounds and $O(\sqrt{m})$ messages using~\cref{obs:surrogate}. Then, calling~\cref{alg:aug-path} on $u$ costs $O(1)$ rounds and $O(\sqrt{m})$ amortized messages (using the analysis of~\cref{alg:aug-path} above). If $u$ and $v$ weren't matched but $u$ was free, we follow a similar process as analyzed above, and its total cost is $O(\log{\Delta})$ rounds and $O(deg(w') + \sqrt{m})$ messages. Overall, the total cost of this algorithm is at most $O(\log{\Delta})$ rounds and $O(deg(w) + deg(w') + \sqrt{m})$ messages. Note that $w$ and $w'$ are both free before the update and become matched during the update. As in the analysis of~\cref{alg:aug-path}, the cost of those vertices can be amortized to $O(\sqrt{m})$ messages per update, and therefore the total cost of this algorithm is at most $O(\log{\Delta})$ rounds in the worst case and $O(\sqrt{m})$ amortized messages.
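To make the surrogate accounting concrete, here is a hedged Python sketch of the doubling search: in iteration $i$ we probe at most $\floor*{\sqrt{2^i}}$ matched neighbors and accept a neighbor's mate as a surrogate if its degree is at most $\floor*{\sqrt{2^i}}$, so the per-iteration message count grows geometrically as in the analysis above. The bookkeeping is simplified relative to the actual distributed procedure, and `adj`, `mate`, and `deg` are hypothetical local views.

```python
import math

def find_surrogate(u, adj, mate, deg):
    """Doubling search for a surrogate of u: a matched neighbor whose
    mate w' satisfies deg(w') <= floor(sqrt(2^i)) at iteration i.
    Probes per iteration are O(sqrt(2^i)), so the total over all
    iterations is a geometric sum dominated by the last term."""
    if deg[u] == 0:
        return None
    # At most 2*log2(deg(u)) iterations, mirroring the while-loop bound.
    for i in range(1, 2 * max(1, math.ceil(math.log2(deg[u] + 1))) + 1):
        cap = math.isqrt(2 ** i)          # floor(sqrt(2^i))
        for w in adj[u][:cap]:            # probe at most `cap` neighbors
            wp = mate.get(w)
            if wp is not None and wp != u and deg[wp] <= cap:
                return wp                 # surrogate found
    return None
```

A surrogate returned at iteration $i$ automatically has degree at most $\floor*{\sqrt{2^i}}$, which is the property the degree bound in the analysis relies on.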
\end{proof} \fi \section{Maximal Independent Set}\label{sec:MIS} Building on the techniques used in the previous sections, we now present our main algorithm for dynamic, distributed MIS. Similar to our algorithm for MCM, our algorithm for MIS also needs to partition vertices into low-degree\xspace and high-degree\xspace vertices without knowledge of $m$. However, instead of looking at the two-hop neighborhood as in our algorithm for MCM, our algorithm for MIS instead looks at a subset of vertices in the \emph{$6$-hop} neighborhood to determine whether each updated vertex is low-degree\xspace or high-degree\xspace. Since the procedure for determining this information is much more complicated, we dedicate an entire section to its explanation (\cref{sec:restart}). Our MIS algorithm is first analyzed with respect to the \emph{maximum number of edges}, denoted by $m_{max}$, that exist in the graph at any point in time, and later analyzed with respect to the \emph{average number of edges}, $m_{avg}$. The latter analysis is much more challenging, since we do not assume that the graph is connected, hence vertices do not have up-to-date estimates of the current number $m$ of edges. To carry out this analysis, we assume that edge updates are tagged with a global timestamp; only the two vertices adjacent to the edge update can read the timestamp. We assume that the timestamps are given by a global running number, where each timestamp designates the number of the current update step. Of course, since the update sequence can be arbitrarily long, the update steps (and timestamps) could become prohibitively large, so the system is allowed to periodically reset the timestamps in order to make sure that each timestamp can be represented via $O(\log(n))$ bits. We do not address here the technical details behind such periodic resets of timestamps, as this may change from one system to another; this optimization lies within the responsibility of the designers of such systems. 
(The same assumption was made also in the work of \cite{AOSS18}, even though their work also made an additional connectivity assumption.) In this section, we present our main deterministic algorithm that maintains an MIS under edge insertions and deletions. Our deterministic distributed algorithm is inspired at a high level by the sequential algorithm given in~\cite{GuptaK18}, which partitions the vertices into \emph{high-degree} and \emph{low-degree} vertices while giving priority to the low-degree vertices to be included in the MIS. Because each vertex does not have access to the precise value of $m$ (neither $m_{max}$ nor $m_{avg}$), we instead partition the vertices based on information from the local neighborhood of each vertex. We update the degree designations of each vertex as its local neighborhood changes. We provide our full algorithm in the next few sections. \subsection{High-Degree/Low-Degree Partitioning}\label{sec:main} First, we provide some necessary characteristics and invariants maintained by vertices in our algorithm. \paragraph{High-Degree and Low-Degree Vertices} Our algorithm ensures that all vertices in the input graph are always partitioned into \emph{high-degree\xspace} and \emph{low-degree\xspace} vertices, denoted by $V_H$ and $V_L$, respectively. We first provide an intuitive definition of these two concepts and then provide the formal definition as it relates to our algorithm. \paragraph{Intuitive definition based on~\cite{GuptaK18}} In the sequential algorithm of~\cite{GuptaK18}, each vertex $v \in V$ that has degree $deg(v) > m^{2/3}$ in $G$ (where $m$ is the current number of edges in the graph) is labeled as a high-degree\xspace vertex; otherwise, it is labeled as a low-degree\xspace vertex. Since, in the centralized setting, the algorithm has access to the current number of edges in the graph, such a definition suffices for this setting.
However, in the distributed setting, a vertex does not know the current number of edges in the graph, as described in~\cref{c2}. Thus, we must perform the partition differently in the distributed model. \paragraph{High-degree/low-degree\xspace partitioning in the distributed setting} We provide the partitioning algorithm in terms of $m_{max}$ in this section and update it accordingly for $m_{avg}$ in \cref{sec:m-avg}. Each vertex $v$ stores a value $deg'(v)$: its current estimate of the \emph{movement bound}, i.e., the degree threshold for moving from $V_L$ to $V_H$ or vice versa. If a vertex $v$ has degree $deg(v) > deg'(v)$ then it labels itself high-degree\xspace; otherwise, it labels itself low-degree\xspace. We initialize $deg'(v) = 2$ for all $v \in V$ at the beginning, when $deg(v) = 0$ and the graph is empty. We describe how to update $deg'(v)$ in~\cref{sec:high-degree}. Intuitively, we show that $deg'(v)$ approximates $m_{max}^{2/3}$. When we consider $m_{avg}$ in~\cref{sec:m-avg}, $deg'(v)$ will not always approximate $m_{avg}^{2/3}$, and therefore we instead use the timestamp of the current update to determine when we need to update a vertex's degree designation (see \cref{sec:m-avg} for more details). Let $G_H = (V_H, E_H)$ be the subgraph induced by the high-degree\xspace vertices in $G$ and $G_L = (V_L, E_L)$ be the subgraph induced by the low-degree\xspace vertices in $G$. In our distributed setting, $G_H$ and $G_L$ are composed of the vertices which currently (in the present round) consider themselves to be high-degree\xspace and low-degree\xspace, respectively. \subsection{Algorithm Overview} We consider low-degree vertices and high-degree vertices separately. The general theme of our algorithm is that low-degree vertices are given priority to be in the MIS: they do not care about whether a high-degree neighbor is in the MIS.
In other words, a low-degree\xspace vertex adds itself to the MIS if and only if it has no low-degree\xspace neighbors in the MIS. On the other hand, a high-degree\xspace vertex removes itself from the MIS whenever a low-degree\xspace neighbor adds itself to the MIS. Thus, low-degree\xspace vertices and high-degree\xspace vertices follow different algorithms for adding themselves to the MIS. We provide such algorithms in detail in~\cref{sec:low-degree} and~\cref{sec:high-degree}. However, with insertions and deletions of edges, $m_{max}$ may change. We provide a \emph{restart procedure} that allows vertices to reassign their degree designations when $m_{max}$ changes by enough. The restart procedure checks, after every update incident to a vertex, whether that vertex is a low-degree\xspace or a high-degree\xspace vertex. Finally, each vertex $v$ in the graph maintains a counter $c_v$ that indicates the number of $v$'s low-degree\xspace neighbors that are in the MIS. More specifically, our algorithm contains the following procedures (described briefly here and expanded upon in the following sections): \begin{enumerate} \item Edge updates between two vertices, at least one of which is low-degree, are processed following the algorithms in~\cref{sec:low-degree}. There are two possible scenarios: \begin{enumerate} \item If an edge update causes a low-degree vertex $v$'s counter to become $0$, $c_v = 0$, then $v$ must add itself to the MIS. Then, it must inform its high-degree neighbors that it was added to the MIS. All high-degree neighbors are processed according to~\cref{alg:low-deg-enters}. \item If an edge insertion occurs between two low-degree vertices both in the MIS, then one of them, $v$, removes itself from the MIS and informs all neighbors. Then, $v$'s low-degree neighbors are processed using~\cref{alg:low-neighbor} and $v$'s high-degree neighbors are processed using~\cref{sec:low-deg-left}.
\end{enumerate} \item Edge updates between two vertices, both of which are high-degree, are processed in~\cref{sec:high-degree}. There are two scenarios: \begin{enumerate} \item If any update causes a high-degree vertex to have no neighbors in the MIS, it adds itself to the MIS. \item If any such update causes any high-degree vertex $v$ to leave the MIS, then it must call~\cref{alg:high} to determine which of its neighbors should enter the MIS (with $v$ as the leader). \end{enumerate} \item A \emph{restart} procedure given by~\cref{alg:clean vertices} is called before handling high-degree vertices. This restart procedure ensures that the \emph{number of high-degree} vertices remains bounded by $m_{max}^{1/3}$.~\label{item:high-restart} \item If a low-degree vertex $v$'s degree, $deg(v)$, exceeds some internally maintained threshold, $deg'(v)$, then $v$ changes its degree designation to high-degree. (Note that $v$ can become low-degree again via~\cref{item:high-restart}.) \end{enumerate} As defined above, a vertex $v$ is high-degree\xspace if $deg(v) > deg'(v)$.\\ Our algorithm maintains the following invariants: \begin{invariant}\label{inv:low} If a high-degree\xspace vertex $u$ is in the MIS then none of its low-degree\xspace neighbors $w \in N_{low}(u)$ have $c_w = 0$. \end{invariant} \begin{invariant}\label{inv:restart} Throughout the execution, $deg'(v) < 4m_{max}^{2/3}$ for all $v \in V$. \end{invariant} \cref{inv:low} ensures that a high-degree\xspace vertex is in the MIS if and only if none of its low-degree\xspace neighbors can enter the MIS. \cref{inv:restart} helps us maintain that vertices designated as low-degree\xspace have degree $O\left(m_{max}^{2/3}\right)$. \subsection{Updates on Low-Degree Vertices}\label{sec:low-degree} We first describe our algorithm on low-degree vertices. Specifically, we describe what happens when an edge insertion or deletion occurs between two vertices where at least one of these two vertices is a low-degree vertex.
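The local designation rule and its initialization can be sketched as follows. This is a minimal model in Python: how the restart procedure raises $deg'(v)$ is deliberately left out, and the class name is ours.

```python
class Vertex:
    """Local partition state: v labels itself high-degree iff
    deg(v) > deg'(v).  deg'(v) starts at 2 in the empty graph."""

    def __init__(self):
        self.deg = 0        # current degree deg(v)
        self.deg_prime = 2  # movement bound deg'(v)

    def is_high(self):
        return self.deg > self.deg_prime

    def on_edge_update(self, delta):
        """Apply a +1/-1 degree change; return True iff the
        high/low designation flipped (neighbors must be notified)."""
        before = self.is_high()
        self.deg += delta
        return self.is_high() != before
```

Returning whether the designation flipped captures the rule that a vertex notifies its neighbors only when it crosses the movement bound, not on every update.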
Since the graph is initially empty (contains no edges), all vertices are initially low-degree\xspace. When a high-degree\xspace vertex becomes low-degree\xspace, or vice versa, it immediately notifies all its neighbors of its new designation. When a low-degree vertex becomes high-degree, it further needs to ensure that each of its low-degree\xspace neighbors is in the MIS if it can be. In other words, it needs to ensure that none of its low-degree neighbors are depending on it to be in the MIS. We describe this procedure in detail and prove later on that this procedure is not too costly. \iflong \else The full detailed procedure for low-degree vertices can be found in~\cref{app:low-deg}. For the sake of space, we describe the \emph{hard case} when an edge insertion occurs between two low-degree vertices both of which are in the MIS. Then, one low-degree vertex must remove itself from the MIS and inform its neighbors that it removed itself from the MIS. Its low-degree neighbors must add themselves to the MIS if they have no low-degree neighbors in the MIS. These low-degree neighbors perform the algorithm given in~\cref{alg:low-neighbor}. \paragraph{Low-Degree Vertex Exits the MIS} The vertex $v$ that removed itself from the MIS designates itself as the leader and coordinates a schedule for its low-degree neighbors to enter the MIS one-by-one in some sequential order. If a low-degree neighbor enters the MIS, it sends a message to all its neighbors that it entered the MIS. Then, neighbors which occur later in the schedule do not enter the MIS if an earlier neighbor entered. We can afford to pay for the rounds of communication and messages in the amortized sense due to the observation that \emph{at most two low-degree vertices leave the MIS after any edge update}. We show how to handle high-degree neighbors of $v$ in the next section.
\fi Upon an edge insertion, each vertex first sends its newest neighbor an $O(1)$-bit message indicating whether it is in the MIS \emph{and} whether it is low-degree\xspace or high-degree\xspace. The algorithms we describe below follow this general intuition. \paragraph{Edge Insertion between Two Low-Degree Vertices} We denote by $(u,v)$ the edge that was inserted between two initially low-degree\xspace vertices. If after this insertion, w.l.o.g., $deg(u) > deg'(u)$, then $u$ re-designates itself as a high-degree\xspace vertex and notifies all its neighbors. Any vertex, w.l.o.g. $u$, that becomes high-degree\xspace after the insertion performs the following procedure. If $u$ is in the MIS, each of $u$'s neighbors, $w \in N(u)$, decreases its counter $c_w$ by one; if any low-degree\xspace neighbor $w$ now has $c_w = 0$, then $w$ sends an $O(1)$-bit message to $u$ informing $u$ that its $c_w$ is $0$. Let $W$ be the set of low-degree\xspace neighbors $w$ of $u$ that have $c_w=0$ and therefore want to add themselves to the MIS. If $|W| > 0$, then $u$ removes itself from the MIS. $u$ then determines an arbitrary sequential order for the vertices in $W$ to add themselves to the MIS. $u$ sends each $w \in W$ a number using $O(\log n)$ bits indicating $w$'s order number. Since the rounds are synchronous, each $w \in W$ adds itself to the MIS sequentially in this order if it does not have any low-degree\xspace neighbors in the MIS. If any $w \in W$ becomes dominated by another low-degree\xspace neighbor before its turn, it informs $u$ using an $O(1)$-bit message and $u$ updates the turn numbers of all vertices after $w$ in the order. Pseudocode for this algorithm is provided in~\cref{alg:low-neighbor}.
\begin{algorithm} \small \caption{Low-degree neighbors entering MIS.}\label{alg:low-neighbor} \LinesNumbered \SetAlgoLined \KwResult{Neighbors $w \in N_{low}(u)$ add themselves to the MIS if none of their low-degree neighbors are in the MIS.} \textbf{Input: } Given $u$ which is a low-degree\xspace vertex which removed itself from the MIS or which is a low-degree\xspace vertex that is in the MIS and became high-degree\xspace.\\ $u$ sends an $O(1)$-bit message to each of its neighbors to decrease their counter by $1$.\\ \For{each $w \in N_{low}(u)$ (low-degree\xspace neighbor of $u$)} {$w$ decrements its counter $c_w$ by 1.\\ \If{$c_w = 0$}{$w$ sends an $O(1)$-bit message to $u$ indicating it wants to enter the MIS.} } Let $W$ be the set of low-degree\xspace neighbors of $u$ that want to enter the MIS. $u$ determines an arbitrary sequential order for $w \in W$ to be added to the MIS.\\ \For{each $w \in W$}{ $u$ sends to $w$ an $O(\log n)$-bit message indicating its order.\\ \If{no neighbor $x \in N_{low}(w)$ is in the MIS}{ $w$ adds itself to the MIS.\\ $w$ informs all neighbors it is in the MIS. \\ Each neighbor $x \in N(w)$ increments $c_{x}$ by $1$.\\ High-degree neighbors of $w$ run the algorithm provided in~\cref{alg:low-deg-enters}. } \If{$w$ has a low-degree\xspace neighbor which enters the MIS before its turn}{ $w$ informs $u$ using an $O(1)$-bit message.\\ $u$ sends all $w' \in W$ after $w$ in their sequential order a new turn number.\\ } } \If{u left the MIS}{\For{each $w' \in N_{high}(u)$ (high-degree\xspace neighbor of $u$)}{ Perform the algorithm detailed in~\cref{sec:high-degree}. }} \end{algorithm} After~\cref{alg:low-neighbor} has finished running, all of $u$'s high-degree\xspace neighbors $w \in N_{high}(u)$ which do not have any neighbors in the MIS perform the procedure detailed in~\cref{sec:high-degree}.
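The sequential-turn scheduling of \cref{alg:low-neighbor} can be sketched as follows: a centralized Python simulation of the leader's schedule, with message passing abstracted away (the function name and data structures are ours).

```python
def schedule_entries(order, low_nbrs, in_mis):
    """Candidates in `order` (the set W, in the leader's arbitrary
    order) try to enter the MIS one per turn; a candidate skips its
    turn if one of its low-degree neighbors has entered in the
    meantime.  `low_nbrs[w]` is the set of w's low-degree neighbors;
    `in_mis` is the current MIS and is mutated in place."""
    entered = []
    for w in order:
        if not (low_nbrs.get(w, set()) & in_mis):
            in_mis.add(w)        # w enters and informs its neighbors
            entered.append(w)
    return entered
```

The skip condition models the `$O(1)$-bit "dominated before my turn" message: a later candidate yields exactly when an earlier one in the schedule is among its low-degree\xspace neighbors.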
After updating the degree designation, the algorithm continues as if $(u,v)$ were an edge insertion between a low-degree\xspace vertex and a high-degree\xspace vertex (described below). If both $u$ and $v$ have $deg(u) > deg'(u)$ and $deg(v) > deg'(v)$, and both are in the MIS, then we perform~\cref{alg:low-neighbor} for both of them and continue the algorithm as if $(u,v)$ were an edge insertion between two high-degree\xspace vertices (described in~\cref{sec:updates-high-deg}). If, however, $(u, v)$ is inserted between two low-degree vertices, neither of which becomes high-degree\xspace, we perform the following algorithm depending on whether neither, both, or one of the two vertices is in the MIS: \begin{itemize} \item \textbf{At most one is in the MIS.} $u$ and $v$ update their counters $c_u$ and $c_v$, respectively, accordingly. \item \textbf{Both are in the MIS.} In this case, both $u$ and $v$ have informed each other that they are low-degree\xspace and in the MIS. One of $u$ or $v$ (the one with the lower ID), w.l.o.g. assume $u$, removes itself from the MIS and sends to all its neighbors $N(u)$ an $O(1)$-bit message that it is no longer in the MIS. Then, it proceeds to follow the algorithm outlined in~\cref{alg:low-neighbor}. We also need to take care of the high-degree neighbors of $u$ after it removed itself from the MIS, as well as the high-degree\xspace neighbors of the low-degree vertices $w \in N_{low}(u)$ which have added themselves to the MIS. We take care of these high-degree\xspace neighbors separately in~\cref{sec:high-degree}. \end{itemize} \paragraph{Edge Deletion between Two Low-Degree Vertices} If neither $u$ nor $v$ is in the MIS, nothing happens. If, w.l.o.g., $u$ is in the MIS and $v$ is not in the MIS, $v$ decrements $c_v$ by $1$. If $c_v = 0$, $v$ adds itself to the MIS and informs all neighbors. Otherwise, it does nothing.
Again, if $v$ adds itself to the MIS, we have to take care of the high-degree neighbors of $v$ which may be in the MIS; we describe this procedure in~\cref{alg:low-deg-enters} (with $v$ as the leader). \paragraph{Edge Insertion between One Low-Degree and One High-Degree Vertex} Suppose $u$ is the low-degree vertex and $v$ is the high-degree vertex. If $deg(u) > deg'(u)$ after the insertion, $u$ becomes high-degree\xspace and performs~\cref{alg:low-neighbor}. In this case, we treat the edge insertion as an edge insertion between two high-degree\xspace vertices and handle such an insertion in~\cref{sec:updates-high-deg}. Now suppose $u$ does not change its degree designation after the insertion. If $u$ or $v$, or both, are not in the MIS, then nothing needs to be done except for changing the counters. If both $u$ and $v$ are in the MIS, then $v$ needs to perform the procedure outlined in~\cref{alg:low-deg-enters}; this is the procedure for a high-degree\xspace vertex in the MIS when one of its low-degree\xspace neighbors enters the MIS. Obtaining this information about $u$ requires $O(1)$ messages and $O(1)$ rounds of communication. \paragraph{Edge Deletion between One Low-Degree and One High-Degree Vertex} Suppose $u$ is the low-degree vertex and $v$ is the high-degree vertex. First, if after the deletion $deg(v) \leq deg'(v)$, then $v$ moves itself to $V_L$ and updates all its neighbors. If $v$ is not in the MIS and has only high-degree\xspace neighbors in the MIS, it must enter the MIS and all its high-degree\xspace neighbors must perform the algorithm detailed in~\cref{sec:high-degree}; the update then continues as an edge deletion between two low-degree\xspace vertices.
Otherwise, if $u$ is in the MIS and $v$ has no more low-degree neighbors in the MIS (i.e.\ $c_v = 0$ after the deletion), then $v$ needs to perform the procedures outlined in~\cref{sec:low-deg-left} for a high-degree\xspace vertex that is not in the MIS and no longer has any low-degree\xspace neighbors in the MIS. If this is not the case, it does not need to do anything. Note that it is never the case that $v$ was in the MIS and $c_u = 0$ before the deletion, by~\cref{inv:low}. We now describe our algorithm for high-degree\xspace vertices, which ensures that all high-degree vertices with recent changes in their low-degree neighbors add or remove themselves from the MIS as necessary. We describe two related but different procedures that use static, distributed MIS algorithms as black-box subroutines. One of the two procedures is conceptually simpler but obtains worse dependence in the round and message complexity; the other is conceptually more complex but obtains better dependence. By using our restart procedure (\cref{sec:restart}) for determining our degree threshold for each vertex, a high-degree vertex can afford to inform all its high-degree neighbors whenever it adds itself to the MIS. Thus, all high-degree vertices know whether \emph{any} of their neighbors are in the MIS. (This is in contrast to low-degree vertices, which only know if their low-degree neighbors are in the MIS.) There are two cases high-degree vertices must handle when dealing with updates that cause their low-degree neighbors to enter or leave the MIS. The detailed algorithm for handling all cases involving high-degree vertices is given in~\cref{sec:high-degree}. We again describe the \emph{hardest case}, when several low-degree vertices enter the MIS and the high-degree neighbors must leave the MIS.
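The basic reaction of a single high-degree vertex can be sketched as follows: it leaves the MIS and collects the candidate set that the static MIS subroutine is later run on. This is a hypothetical centralized model in Python; the names and data structures are ours.

```python
def handle_low_neighbor_entering(u, adj, in_mis, is_high):
    """Sketch: the high-degree vertex u leaves the MIS because a
    low-degree neighbor entered it, and we collect the set V_H' of
    u's high-degree neighbors that now have no neighbor in the MIS
    -- the candidates handed to the static MIS subroutine."""
    in_mis.discard(u)                     # u leaves the MIS
    candidates = set()
    for w in adj[u]:                      # u informs its high-degree neighbors
        if is_high[w] and not any(x in in_mis for x in adj[w]):
            candidates.add(w)
    return candidates
```

A neighbor that still has some MIS neighbor is excluded, mirroring the rule that a high-degree vertex enters the MIS only when none of its neighbors are in it.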
\paragraph{Several High-Degree Neighbors Leave MIS} When one or more high-degree neighbors must leave the MIS, we must find the set of high-degree vertices that no longer have neighbors in the MIS and add them to the MIS. We perform this procedure by waking up \emph{all} high-degree neighbors of the high-degree vertices that left the MIS. Then, we run some \emph{static, distributed MIS algorithm} on the high-degree vertices that are awake. As the static, distributed MIS algorithm is costly in terms of the number of messages, we must not call it on too many vertices. To ensure that we run such an algorithm on a small enough number of high-degree vertices, namely $O(m^{1/3})$ such vertices, we call our restart procedure (detailed in the next section). During our restart procedure, some high-degree vertices may become low-degree, and we do not run the static MIS algorithm on such vertices. We include the pseudocode for our restart procedure in~\cref{alg:find-subgraph}. We define $v$ to be the low-degree vertex which left the MIS, $L$ to be the set of low-degree neighbors of $v$ which enter the MIS, $U'$ to be the set of high-degree neighbors of each $w \in L$ which are in the MIS, and $U$ to be the set of high-degree neighbors of each $u \in U'$. We use these definitions in~\cref{alg:find-subgraph}. By noting that the neighborhood of affected vertices has diameter $6$, we can use a more involved procedure,~\cref{alg:high}, to improve the message and round bounds by a factor of $O(\log^3 n)$. We include this more complicated procedure in~\cref{sec:high-degree}. \iflong \subsection{Handling High-Degree Vertices}\label{sec:high-degree} \subsubsection{Low-Degree Neighbor Enters the MIS}\label{alg:low-deg-enters} We assume an edge insertion between a low-degree\xspace vertex $v$ and a high-degree\xspace vertex $u$ falls under this category if $v$ is in the MIS.
A low-degree vertex can also enter the MIS if it moves from $V_H$ to $V_L$, or if one of its low-degree\xspace neighbors left the MIS due to an update. Suppose $u$ is a high-degree\xspace vertex that satisfies~\cref{inv:low} and is in the MIS. If a low-degree neighbor $v \in N_{low}(u)$ enters the MIS, then $u$ must leave the MIS and inform all its high-degree neighbors. We use one of two procedures to determine the set of high-degree vertices that must enter the MIS due to $u$ leaving the MIS. For convenience, we denote the set of high-degree\xspace vertices which want to enter the MIS by $V_H'$. There are several situations which may cause a low-degree\xspace vertex to enter the MIS. Furthermore, our algorithms described below rely on several characteristics of the sets of vertices that are affected by the change. Specifically we define these roles: \begin{enumerate} \item \textbf{Leader $v$}: The leader $v$ coordinates the addition into the MIS of high-degree\xspace vertices that want to enter the MIS, so that no collisions occur. \item \textbf{Set $U$}: The set of high-degree\xspace vertices that want to \emph{enter} the MIS. \item \textbf{Set $U'$}: The set of high-degree\xspace vertices that want to \emph{leave} the MIS. \item \textbf{Set $L$}: The set of low-degree\xspace neighbors of $v$ that want to \emph{enter} the MIS after the current update. \end{enumerate} We set the above roles as follows for each of the below situations. We consider the roles \emph{after} performing the restart procedure. This means that no vertices switch from $V_L$ to $V_H$ or vice versa after being assigned to the below roles. \begin{enumerate} \item \textbf{Edge insertion $(u, v)$ where $u$ is low-degree\xspace and $v$ is high-degree\xspace and $u, v$ are in the MIS}: $u$ is the leader, $U' = \left\{v\right\}$, $U$ is the set of high-degree\xspace neighbors in $N_{high}(v)$ which have no neighbors in the MIS, $L = \emptyset$. 
\item \textbf{$v$ leaves $V_H$ and moves to $V_L$, then enters the MIS}: $v$ is the leader, $U'$ is the set of high-degree\xspace neighbors in $N_{high}(v)$ which are in the MIS, $U$ is the set of high-degree\xspace neighbors $N_{high}(w)$ of vertices $w \in U'$ which have no neighbors in the MIS, $L = \emptyset$. \item \textbf{Edge insertion $(u, v)$ where both $u$ and $v$ are low-degree\xspace and $u, v$ are in the MIS}: w.l.o.g. $v$ leaves the MIS. $v$ is the leader, $U$ consists of the high-degree\xspace neighbors in $N_{high}(v)$ and the set of high-degree\xspace neighbors $N_{high}(w)$ of vertices $w \in U'$ that have no neighbors in the MIS, $U'$ consists of the high-degree\xspace neighbors that are in the MIS, $N_{high}(w)$, of $w \in N_{low}(v)$ where $w$ was added to the MIS, and $L$ consists of the set of low-degree\xspace neighbors $w \in N_{low}(v)$ whose $c_w = 0$. \item \textbf{$v$ leaves $V_L$ and moves to $V_H$, $v$ was originally in the MIS}: Suppose $v$ leaves the MIS when it moves into $V_H$. $v$ is the leader, $U$ consists of $w \in N_{high}(v)$ and the set of high-degree\xspace neighbors $N_{high}(w)$ of vertices $w \in U'$ that have no neighbors in the MIS, $L$ consists of $w \in N_{low}(v)$ where $c_w = 0$, and $U'$ consists of neighbors of $w$ in $N_{high}(w)$ that are in the MIS for each $w \in N_{low}(v)$ that entered the MIS. \item \textbf{Edge deletion between $(u, v)$ where $u, v$ are low-degree\xspace and $u$ is in the MIS, $c_v = 1$}: $v$ enters the MIS in this case. $v$ is the leader, $U'$ consists of $w \in N_{high}(v)$ where $w$ is in the MIS, $U$ is the set of high-degree\xspace neighbors $N_{high}(w)$ of vertices $w \in U'$ that have no neighbors in the MIS, $L = \emptyset$. \end{enumerate} \paragraph{Finding an MIS in the Subgraph Induced by $V_H'$} Suppose we are given some subset of high-degree\xspace vertices $V_H' \subseteq V_H$. 
Our first procedure uses the result of~\cite{GGR20} as a black box on the subgraph induced by $V_H'$. To use the result of~\cite{GGR20}, we must first determine $V_H'$ so that every vertex in $V_H'$ knows to participate in the static MIS procedure that determines which vertices of $V_H'$ enter the MIS. We first perform the procedure provided in~\cref{alg:find-subgraph} so that all vertices in $V_H'$ learn that they participate in the MIS algorithm. Then, each vertex in $V_H'$ runs~\cite{GGR20}, given in~\cref{thm:mis-subgraph}, as a black box to obtain the set of vertices which need to enter the MIS. Our procedures also use a \emph{restart} algorithm which we describe in~\cref{alg:clean vertices} of~\cref{sec:restart}. \fi \begin{algorithm}\caption{Find MIS within $V_H'$}\label{alg:find-subgraph} \footnotesize \SetAlgoLined \LinesNumbered \KwResult{A set of high-degree\xspace vertices which enter the MIS.} \textbf{Input: } Leader $v$, sets $U$, $U'$ and $L$ as defined above.\\ Suppose vertex $v$ is the designated leader of this procedure.\\ $v$ informs all neighbors it entered/left the MIS using $O(1)$-bit messages.\\ \If{$v$ entered the MIS}{ Let $B$ be the set of all high-degree\xspace neighbors of $v$.} \Else{\Comment{If $v$ exited the MIS}\\ Recall $L$ is the set of all low-degree\xspace neighbors $w$ of $v$ that want to enter the MIS because their $c_w = 0$.\\ Perform~\cref{alg:low-neighbor} on $L$ to determine the vertices that enter the MIS.\\ Let $J$ be the set of vertices of $L$ picked to enter the MIS.\\ Let $B$ be the set of all high-degree\xspace neighbors of $v$.\\ \For{each $a \in J$}{ Every high-degree neighbor of $a$ becomes part of the set $B$.
} } Let $W$ be the set of all high-degree\xspace neighbors of $u \in B$.\\ Perform~\cref{alg:clean vertices} (the restart procedure) on $\{v\} \cup J \cup B \cup W$.\\ \For{each $u \in U'$}{ $u$ leaves the MIS.\\ $u$ informs all its high-degree neighbors that it left the MIS using $O(1)$-bit messages.\\ } \For{each $w \in J$}{ $w$ enters the MIS.\\ $w$ informs all its neighbors that it entered the MIS using $O(1)$-bit messages.\\ \For{each $x \in N(w)$}{ $x$ increments $c_x = c_x + 1$.\\ } } \For{each $u \in U$}{ $u$ informs all its high-degree neighbors that it is part of $V_H'$.\\ All vertices which are in $V_H'$ now know which of their neighbors are in $V_H'$.\\ Each vertex $w \in V_H'$ runs the algorithm given by~\cref{thm:mis-subgraph} to find an MIS in the induced subgraph defined by $V_H'$.\\ } \end{algorithm} \iflong \begin{theorem}[Corollary 1.2~\cite{GGR20}]\label{thm:mis-subgraph} There exists a deterministic distributed algorithm that computes a maximal independent set in $O(\log^5 n)$ rounds in the \textsc{CONGEST}\xspace model. \end{theorem} Since the algorithm of~\cite{GGR20} operates in the distributed model where each vertex initially does not know the topology of the graph (except for its set of adjacent neighbors), we can directly apply this algorithm to our induced subgraph $V_H'$: initially, all vertices in $V_H'$ do not know the entire topology of $V_H'$ but do know their set of neighbors (each of which has a unique ID). \paragraph{Finding an MIS from a Partial MIS} Our second approach relies on an input-respecting subroutine that obtains an MIS from a partial MIS; although this approach is technically more complex, it obtains a round complexity that is better than the one obtained via our first approach. We use the following algorithm, which incorporates the deterministic algorithm of~\cite{CPS20}, to accomplish this goal.
We prove an extension of~\cite{CPS20} in~\cref{lem:cps} to handle the case when the input is a graph with some number of vertices which are already in the MIS, and use this procedure as a black box in our detailed procedure provided in~\cref{alg:high}. \begin{algorithm} \caption{High-Degree Vertices Entering MIS.}\label{alg:high} \LinesNumbered \SetAlgoLined \mysize \selectfont \KwResult{A set of high-degree\xspace vertices which enter the MIS.} Suppose vertex $v$ is the designated leader of this procedure; $v$ informs all neighbors it entered/left the MIS using $O(1)$-bit messages.\\ \If{$v$ entered the MIS}{ Let $B$ be the set of all high-degree\xspace neighbors of $v$.\\} \Else{ Recall $L$ is the set of all low-degree\xspace neighbors $w$ of $v$ with $c_w = 0$.\\ Perform~\cref{alg:low-neighbor} on $L$ to determine vertices that enter the MIS.\\ Let $J$ be the set of vertices of $L$ picked to enter the MIS.\\ \For{each $a \in J$}{ Every high-degree neighbor of $a$ becomes part of the set $B$. } } Let $W$ be the set of all high-degree\xspace neighbors of $u \in B$.\\ Perform~\cref{alg:clean vertices} (the restart procedure) on $\{v\} \cup J \cup B \cup W$.\\ \For{each $w \in J$}{ $w$ enters the MIS.\\ $w$ informs all its neighbors that it entered the MIS using $O(1)$-bit messages.\\ \For{each $x \in N(w)$}{ $x$ increments $c_x = c_x + 1$.\\ } } \For{each $u \in U'$}{ $u$ leaves the MIS.\\ $u$ informs all high-degree neighbors that it left the MIS using $O(1)$-bit messages.\\ } \For{each $u \in U$}{ $u$ informs all high-degree\xspace neighbors (that remained high-degree\xspace) in $U'$ that it is part of $V_H'$.\\ \If{$u$ is a neighbor of $v$}{ $u$ also informs $v$ that it is a part of $V_H'$. } } \For{each $w \in U'$}{ \If{received a message from a vertex $u \in U$}{ $w$ picks one neighbor $g \in N_{low}(w) \cap(J \cup \{v\})$ and informs $g$ that it has high-degree\xspace neighbors in $V_H'$.
} } \For{each $y \in J$ that received a message from a vertex $u \in U$}{ $y$ informs $v$ of the existence of $V_H'$ (but not the members of $V_H'$). } $v$ becomes the leader in our procedure.\\ $v$ coordinates computing an MIS among $w \in V_H'$ using~\cref{lem:cps} on the subgraph induced by $\{v\} \cup U' \cup U \cup J$.\\ If any $w\in V_H'$ enters the MIS, it informs all its high-degree neighbors.\\ This procedure proceeds until no additional vertex $w \in V_H'$ wants to enter the MIS.\\ \end{algorithm} We provide a self-contained description of the algorithm of~\cite{CPS20} in~\cref{app:cps}. \begin{lemma}[Modified Theorem 1.5~\cite{CPS20}]\label{lem:cps} Given a connected graph $G = (V, E)$, a leader $v$ which is within distance $D$ of all vertices in the graph, and a set of vertices $S \subseteq V$ already in the MIS, there exists a deterministic MIS algorithm that finds an MIS among all vertices $V \setminus S$ in $G$ that completes in $O(D\log^2 |V|)$ rounds and sends at most $O(D|E|\log^2 |V|)$ messages in the \textsc{CONGEST}\xspace model. \end{lemma} \begin{proof} We use the same algorithm as that presented in Theorem 1.5 of~\cite{CPS20}. Since $G$ is connected, the paths from all vertices in $V$ to $v$ (and vice versa) form a tree with $v$ as the root. All vertices $s \in S$ and all neighbors $N(s)$ of $s$ do not participate in the conditional expectation calculation of the algorithm described in Section 4.1 of~\cite{CPS20}. All other vertices perform the calculations associated with the method of conditional expectations. Then, as in Theorem 1.5 of~\cite{CPS20}, all conditional expectation values are aggregated at $v$ by a standard aggregation algorithm: each intermediate vertex sums the conditional expectation values of all its children and sends this value to its parent. The leader $v$ takes the aggregate value, determines a bit, and then sends each bit of the seed to the entire graph.
The rest of the algorithm proceeds as in~\cite{CPS20} in $O(D\log^2 |V|)$ rounds. During each round, each vertex sends at most one message to each of its neighbors of $O(\log |V|)$ bits. Hence, the total number of messages sent is $O(D|E|\log^2 |V|)$. \end{proof} \subsubsection{Low-Degree Neighbor Leaves MIS}\label{sec:low-deg-left} We assume an edge deletion between a low-degree\xspace vertex $u$ and a high-degree\xspace vertex $v$ falls under this category if $u$ is in the MIS. If a low-degree neighbor $u$ leaves the MIS and its high-degree neighbor $v$ does not have any neighbors in the MIS, $v$ enters the MIS and informs all its high-degree neighbors. All high-degree\xspace neighbors $N_{high}(v) \cup \{v\}$ perform the restart procedure provided in~\cref{alg:clean vertices}. \subsubsection{Edge Updates Between High-Degree Vertices}\label{sec:updates-high-deg} In this section, we describe our algorithms for edge updates between two high-degree\xspace vertices. \paragraph{Edge Insertion Between Two High-Degree Neighbors} If at most one of the endpoints is in the MIS, both do nothing except for changing their counter. If both high-degree neighbors are in the MIS, one leaves (e.g.\ $u$) and performs~\cref{alg:find-subgraph} or~\cref{alg:high} with $u$ as the leader. \paragraph{Edge Deletion Between Two High-Degree Neighbors} If w.l.o.g. the degree of one of the vertices (say $v$), $deg(v)$, becomes less than $deg'(v)$, then $v$ becomes low-degree\xspace and informs all its neighbors. If $v$ has only high-degree\xspace neighbors which are in the MIS, $v$ enters the MIS and all its high-degree\xspace neighbors perform~\cref{alg:find-subgraph} or~\cref{alg:high} with $v$ as the leader. After that, the update continues as an edge deletion between one high-degree\xspace vertex and one low-degree\xspace vertex. 
If the degrees of both of the vertices $u$ and $v$, $deg(u)$ and $deg(v)$, become less than $deg'(u)$ and $deg'(v)$, respectively, then both of them become low-degree\xspace, inform all their neighbors, and enter the MIS if needed; the update continues as an edge deletion between two low-degree\xspace neighbors. Otherwise, if no degree assignment changes and neither of the endpoints is in the MIS, nothing happens. If exactly one endpoint, e.g.\ $u$, is in the MIS, the other, e.g.\ $v$, adds itself to the MIS if it has no neighbors in the MIS and informs all its neighbors; \cref{alg:clean vertices} is then called on all neighbors $N_{high}(v) \cup \{v\}$. \fi \subsection{Restart Procedures}\label{sec:restart} The first of two restart procedures is called whenever~\cref{alg:find-subgraph} or~\cref{alg:high} runs, or when a high-degree\xspace vertex informs its neighbors that it entered or left the MIS. This procedure ensures that all high-degree\xspace neighbors of such vertices become low-degree\xspace if necessary. This might occur if the number of edges in the graph increases over time. We first provide intuition for our algorithm and then a detailed description of it. \subsubsection{Maintaining Degree Bounds under $m_{max}$} \paragraph{Intuition} Since the vertices in the graph do not know the number of edges in the graph, the partition into high-degree\xspace and low-degree\xspace vertices can be meaningless. Therefore, for each vertex we maintain a guess of the maximum number of edges to ever exist in the graph, which we tighten during the process. We present a restart procedure that maintains~\cref{inv:restart} and establishes~\cref{thm:restart} with an $O(1)$ amortized number of rounds and $O(m_{max}^{2/3})$ amortized communication complexity, where $m_{max}$ is the maximum number of edges in the graph throughout the updates. For the remainder of this section, we let $m$ denote $m_{max}$.
\paragraph{The Algorithm} Let $deg'(v)$ be the \emph{approximate movement degree}: when a vertex $v$ has degree $deg(v) \ge deg'(v)$ it considers itself a high-degree\xspace vertex, and if $deg(v) < deg'(v)$ it considers itself a low-degree\xspace vertex. Whenever its degree crosses this threshold, it informs all its neighbors of the change. Since the algorithm starts from an empty graph (no edges), for each $v \in V$, $deg(v) = 0$ and we initialize $deg'(v) = 2$. It might be the case that for some vertices $deg'(v) \ll m^{2/3}$, and therefore more than $m^{1/3}$ vertices will be in $V_H$. To be able to perform~\cref{alg:find-subgraph} or~\cref{alg:high} and allow high-degree\xspace vertices to inform their high-degree\xspace neighbors of whether they entered or left the MIS, we must first ``clean'' $V_H$ and move the unnecessary vertices to $V_L$. This procedure is described below in~\cref{alg:clean vertices}, which uses~\cref{alg:bfs} and~\cref{alg:approx m} as subroutines. Namely, the crux of the subroutines is to determine the total degree of all the vertices in the subgraph and move vertices whose degrees are too small into $V_L$. Denote by $G' = (V', E')$ the subgraph induced by the vertices that participate in~\cref{alg:find-subgraph} or~\cref{alg:high}. Let this subgraph consist of the vertices $\{v\} \cup L \cup B \cup W$, where $v$ is the vertex that started the restart (the leader).\\ We call~\cref{alg:bfs} on $G'$.
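The threshold bookkeeping above can be sketched as follows in a centralized toy (names are illustrative, not from the implementation): a vertex is high-degree\xspace exactly when $deg(v) \ge deg'(v)$, and when the restart procedure demotes it to $V_L$, its threshold is doubled to $deg'(v) = 2 \cdot deg(v)$.

```python
# Minimal sketch of the approximate movement degree deg'(v).
# Class and method names (DegreeTracker, on_restart_demotion) are illustrative.

class DegreeTracker:
    def __init__(self):
        self.deg = 0
        self.deg_prime = 2  # initial threshold on an empty graph

    def is_high(self):
        # high-degree iff deg(v) >= deg'(v)
        return self.deg >= self.deg_prime

    def on_restart_demotion(self):
        """Called when the restart procedure moves this vertex to V_L."""
        self.deg_prime = 2 * self.deg

v = DegreeTracker()
v.deg = 8
assert v.is_high()          # 8 >= 2
v.on_restart_demotion()     # deg'(v) becomes 16; v is now low-degree
assert not v.is_high()
# v needs at least deg(v) = 8 more adjacent insertions to become
# high-degree again, which is what the amortized analysis charges against.
for _ in range(8):
    v.deg += 1
assert v.is_high()          # 16 >= 16
```

The doubling is exactly what makes the later charging argument work: a demoted vertex cannot return to $V_H$ without at least $deg(v)$ adjacent insertions.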
\begin{algorithm} \small \caption{Construct BFS tree.}\label{alg:bfs} \LinesNumbered \SetAlgoLined \KwResult{A BFS tree $T'$ in a carefully chosen subgraph $G'$ of $G$} \For{each $u \in N(v) \cap V'$ (neighbor of $v$ in $G'$)} {$v$ sends an $O(1)$-bit message to $u$ indicating that $v$ is $u$'s parent in $T'$.\\ } \While{there exists at least one high-degree\xspace vertex in $V'$ that does not yet have a parent}{ \For{each $u$ that received a message from its parent in the previous round}{ $u$ sends an $O(1)$-bit message to each $w \in N(u)\cap V'$ that does not have a parent yet, indicating that $u$ is $w$'s parent in $T'$.\\ If $w$ receives multiple parent messages, it arbitrarily chooses one sender to be its parent.\\ $w$ stores its chosen parent.\\ $w$ informs all its neighbors in $G'$ that it has a parent.\\ }} The process ends when all the vertices in $G'$ have a parent (except $v$, which is the root). \end{algorithm} \begin{algorithm} \small \caption{Estimate a lower bound of $m$ in $G'$.}\label{alg:approx m} \LinesNumbered \SetAlgoLined \KwResult{A lower-bound estimate of $m$.} Construct a BFS tree $T'$ according to~\cref{alg:bfs} and mark $v$ as the root.\\ \For{each $u$ which is a child of $v$ in $T'$} {$v$ sends an $O(1)$-bit message to $u$ asking for the sum of the degrees of all vertices in $u$'s subtree.\\} \While{there exists a vertex $u$ that received a message from its parent in the previous round}{ \If{$u$ has children in $T'$}{$u$ sends the same message to all its children and waits for a response.\\ $u$ sums the values it received from its children and adds $deg(u)$ to the sum.\\ $u$ sends the sum to its parent.\\} \If{$u$ has no children in $T'$}{$u$ sends $deg(u)$ to its parent.} } \If{$v$ receives a non-zero sum of the degrees of all its children} {$v$ computes a final sum of the values received from its children and adds $deg(v)$ to the sum.
Let this sum be $S$.\\ \For{each child $u$ of $v$ in $T'$} {$v$ sends an $O(\log n)$-bit message to $u$ containing $S$.} \For{each $u$ that received $S$ from its parent}{$u$ sends $S$ to its children.\\}} The process ends when all the vertices in $G'$ know $S$. \end{algorithm} In~\cref{alg:approx m}, since each vertex has exactly one parent (because $T'$ is a BFS tree), the degree of each vertex is added to the sum only once, and therefore the sum is $S = \sum_{v \in V'} deg(v)$. \begin{algorithm} \caption{Restart procedure.}\label{alg:clean vertices} \LinesNumbered \SetAlgoLined \KwResult{Some vertices $v$ with degree $deg(v) <4m^{2/3}$ move to $V_L$, and the number of vertices remaining in $G'$ (to be used in~\cref{alg:find-subgraph} or~\cref{alg:high}) is at most $2m^{1/3}$.} Let $S$ be the sum calculated in~\cref{alg:approx m}.\\ Assume, as in the previous algorithms, that $v$ is the leader and the root of the tree.\\ \Comment{$v$ broadcasts using $T'$ and tells all its descendants to become low-degree\xspace (i.e.\ move to $V_L$) if necessary.}\\ \For{each $u$ which is a child of $v$ in $T'$} {$v$ sends an $O(\log n)$-bit message to $u$ telling it to become low-degree\xspace (i.e.\ move to $V_L$) if $deg(u)<S^{2/3}$.\\} \For{each descendant $u$ of $v$ that got the message}{$u$ propagates the message to its children in $T'$ (if any).\\ \If{$deg(u) < S^{2/3}$}{$u$ considers itself a low-degree\xspace vertex (i.e.\ moves to $V_L$).\\ If $u$ has no low-degree\xspace neighbors in the MIS, it enters the MIS.\\ $u$ notifies all its neighbors in $G$ that it is low-degree\xspace and whether it is in the MIS or not.\\ $u$ updates $deg'(u) = 2deg(u)$.\\}} \end{algorithm} Note that after performing~\cref{alg:clean vertices}, a vertex that had only high-degree\xspace neighbors in the MIS might move to $V_L$ and enter the MIS. After that, all its high-degree\xspace neighbors must leave the MIS and perform another restart (by calling~\cref{alg:find-subgraph} or~\cref{alg:high}).
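As a sanity check on the threshold rule, the following centralized sketch (illustrative names; not the distributed implementation, which aggregates $S$ up the BFS tree) computes $S$ for a toy $G'$ and demotes every vertex with degree below $S^{2/3}$:

```python
import math

# Toy, centralized sketch of the restart threshold: compute S, the sum of
# degrees in G', then keep only vertices with deg >= S^(2/3).
# The function name and example degrees are illustrative assumptions.

def restart(degrees):
    """degrees: dict vertex -> degree in G'. Returns (S, surviving high-degree vertices)."""
    S = sum(degrees.values())        # aggregated up the BFS tree in the paper
    threshold = S ** (2 / 3)
    remaining = [v for v, d in degrees.items() if d >= threshold]
    return S, remaining

degrees = {"a": 100, "b": 100, "c": 3, "d": 5, "e": 2}
S, remaining = restart(degrees)
assert S == 210
# Counting argument of the analysis: each survivor has degree >= S^(2/3),
# so there can be at most S / S^(2/3) = S^(1/3) survivors.
assert len(remaining) <= math.ceil(S ** (1 / 3))
assert remaining == ["a", "b"]
```

Here $S^{2/3} \approx 35.3$, so only the two degree-$100$ vertices survive, consistent with the bound of at most $S^{1/3}$ surviving vertices proved below.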
We show in our analysis that the overhead of this ``chain'' of restarts is constant on average. \paragraph{Analysis of the Restart Procedure for $m_{max}$} We now provide the analysis of the restart procedure described above. This analysis will be crucial in our analysis of our main algorithm provided in~\cref{sec:analysis-max}. In the following analysis, we assume that $m = m_{max}$. \begin{observation}\label{obs:S} Throughout the execution of~\cref{alg:clean vertices}, a vertex $u$ in $G'$ moves from $V_H$ to $V_L$ if and only if its degree $deg(u)$ is smaller than $S^{2/3}$. \end{observation} \begin{lemma}\label{thm:restart} After performing~\cref{alg:clean vertices}, the number of vertices that remain in $G'$ is at most $2m^{1/3}$. \end{lemma} \begin{proof} By~\cref{obs:S}, each vertex $u$ in $G'$ with degree $deg(u) < S^{2/3}$ moves to $V_L$. The vertices that remain in $G'$ are the vertices that have degree greater than or equal to $S^{2/3}$. Since $S$ is the sum of the degrees in $G'$ before~\cref{alg:clean vertices} runs,~\cref{obs:S} implies that the number of vertices that remain in $G'$ is at most $S^{1/3}$ (otherwise, the sum of the degrees in $G'$ would be greater than $S^{1/3}\cdot S^{2/3}=S$). Since $S = \sum_{v \in V'} deg(v) \le 2m$, we have $S^{1/3} \le (2m)^{1/3} < 2m^{1/3}$, which means that the number of vertices that remain in $G'$ is at most $2m^{1/3}$. \end{proof} \begin{lemma}\label{lemma1} If a vertex $v \in V'$ moves from $V_H$ to $V_L$ during~\cref{alg:clean vertices}, then $deg(v) < 2m^{2/3}$. \end{lemma} \begin{proof} By~\cref{obs:S}, a vertex $v \in V'$ that moves from $V_H$ to $V_L$ during~\cref{alg:clean vertices} moves only because its degree satisfies $deg(v) < S^{2/3}$. Since $S = \sum_{v \in V'} deg(v) \le 2m$, we get $deg(v) < S^{2/3} \le (2m)^{2/3} < 2m^{2/3}$. \end{proof} We can now prove~\cref{inv:restart}. \begin{lemma}\label{lem:d'v} Throughout all the updates, $deg'(v) < 4m^{2/3}$ for all $v \in V$ when $m \geq 1$.
\end{lemma} \begin{proof} Assume for contradiction that there exists an update such that after it occurs there is a vertex $v$ with $deg'(v) \ge 4m^{2/3}$. Denote by $t$ the update in which $deg'(v)$ was last updated. $deg'(v)$ can only change when $v$ moves from $V_H$ to $V_L$. Let $deg_t(v)$ be the degree of $v$ at update $t$. Since $v$ moved from $V_H$ to $V_L$ at update $t$, it changed $deg'(v)$ to be $deg'(v) = 2deg_t(v)$. By~\cref{lemma1}, we know that if $v$ moves from $V_H$ to $V_L$ then its degree satisfies $deg(v) < 2m^{2/3}$; in particular, at update $t$, $deg_t(v) < 2m^{2/3}$. By our assumption, $deg'(v) \geq 4m^{2/3}$, which implies that $deg'(v) = 2deg_t(v) \geq 4m^{2/3}$ and $deg_t(v) \geq 2m^{2/3}$, a contradiction. \end{proof} \subsubsection{Runtime Analysis of the Restart Procedure for $m_{max}$}\label{subsec:restart-analysis} We first provide a worst-case analysis and afterwards an amortized analysis. In~\cref{alg:bfs}, since the diameter of $G'$ is at most $6$, building $T'$ takes $O(diameter) = O(1)$ rounds. In each round, each vertex $v$ sends a constant number of messages to all of its neighbors in $G'$, so the total number of messages is $O(|E'|)$. The same analysis as for~\cref{alg:bfs} holds for~\cref{alg:approx m}. As we described in each of these procedures, the messages have size $O(\log n)$. In~\cref{alg:clean vertices}, the broadcast costs $O(diameter) = O(1)$ rounds and $O(|E'|)$ messages, as explained for~\cref{alg:bfs}. However, each vertex $u$ that moves from $V_H$ to $V_L$ during this algorithm must check whether it needs to enter the MIS and notify all its neighbors about this change. Let $V_{low}$ be the set of all vertices from $V'$ that moved from $V_H$ to $V_L$ during the restart, and let $S' = \sum_{v\in V_{low}} deg(v)$ be the sum of their degrees in $G$. All the vertices that move to $V_L$ need to notify their neighbors about the movement and might enter the MIS; however, this must be done sequentially.
Therefore, notifying all the neighbors about the change costs at most $O(|V_{low}|)$ rounds and $O(S')$ messages. Overall, one restart costs $O(|V_{low}|)$ rounds and $O(|E'| + S')$ messages. \paragraph{Amortized analysis} In the worst case, one restart costs $O(|V_{low}|)$ rounds and $O(|E'| + S')$ messages. We now present the amortized analysis of one restart. \begin{lemma}\label{lem:m-max-runtime} \cref{alg:bfs},~\cref{alg:approx m}, and~\cref{alg:clean vertices} take $O(1)$ amortized rounds and $O(m^{2/3})$ amortized messages over the set of updates in our graph. \end{lemma} \begin{proof} As defined above, $V_{low}$ is the set of all vertices from $V'$ that moved from $V_H$ to $V_L$ during~\cref{alg:clean vertices}, $S'$ is the sum of their degrees in $G$, and $L$ is the set of low-degree\xspace vertices that enter the MIS. We have $|E'| = |E(V' \setminus L)| + \sum_{a \in L}deg(a)$. By construction of $L$, all low-degree\xspace vertices in $L$ are those that \emph{enter the MIS}. Thus, we can charge the cost of $\sum_{a \in L} deg(a)$ to the cost of adding $L$ to the MIS. This cost is computed later in~\cref{sec:analysis-max} by~\cref{lem:insertion-low}. Then, $|E(V' \setminus L)| \le m^{2/3} + \sum_{v\in V_{low}} deg(v) = m^{2/3} + S'$. This inequality holds due to the following argument. Recall that $G'$ consists of the vertex sets $\{v\} \cup L \cup B \cup W$ and all edges in the induced subgraph containing these vertices. If $v$ is low-degree\xspace, then $deg(v) = O(m^{2/3})$; if $v$ is high-degree\xspace, then it only communicates with other high-degree\xspace vertices, so its degree in $G'$ is $< m^{1/3} + |\{(v, w) \mid (v, w) \in E \wedge w \in V_{low}\}|$.
Since $B$ and $W$ only contain high-degree\xspace vertices (including those from $V_{low}$), the number of vertices $|B \cup W| - |V_{low}| < 2m^{1/3}$ according to~\cref{alg:clean vertices}, and the number of induced edges between vertices in $B \cup W$ is $< 4m^{2/3} + \sum_{v\in V_{low}} deg(v)\leq 4m^{2/3} + S'$. So the total number of messages is $O(|E(V' \setminus L)| + S') = O(m^{2/3} +S')$. Therefore, we only need to account for the extra cost of $O(\sum_{v\in V_{low}} deg(v)) = O(S')$ messages and $O(|V_{low}|)$ rounds. We can amortize this cost via the following charging argument. Each $v\in V_{low}$ changes $deg'(v)$ to be $2deg(v)$ when the transition occurs, so it takes at least $deg(v)$ updates \emph{that are adjacent to $v$} until $v$ moves back to $V_H$. Since each update is adjacent to at most two vertices, in each such update before $v$ moves back to $V_H$, we can afford to pay $O(1)$ coins for the number of rounds and messages incurred during the last restart process $v$ participated in. If $v$ never returns to $V_H$, we can use the fact that this is its last restart and pay for it in advance by paying an extra $O(1)$ coins on each edge insertion adjacent to $v$ before the \emph{previous} restart. To conclude, we pay $O(1)$ amortized rounds and $O(m^{2/3})$ amortized messages for the restart process. \end{proof} We now analyze how we can pay for a chain of restarts. \iflong \begin{lemma}\label{lem:const} During each update, the number of low-degree\xspace vertices that leave the MIS is constant. \end{lemma} \begin{proof} A vertex $v \in V_L$ can leave the MIS in two cases: \begin{itemize} \item $v$ is in the MIS, and after the update $deg'(v) < deg(v)$, so $v$ moves to $V_H$. \item both $u \in V_L$ and $v$ are in the MIS before the update, and the update is the insertion of $(u,v)$. \end{itemize} In both cases, at most $2$ vertices move to $V_H$ and might leave the MIS. If the vertices stay in $V_L$, then at most $1$ leaves the MIS.
Since those are the only relevant cases, we get that the number of low-degree\xspace vertices that leave the MIS is constant. \end{proof} \fi \begin{restatable}{lemma}{mmaxrestartcost} All restarts that happen during an update cost $O(1)$ amortized rounds and $O(m^{2/3})$ amortized messages. \end{restatable} \iflong \begin{proof} After the first restart, only a vertex that moved from $V_H$ to $V_L$ and had only high-degree\xspace neighbors in the MIS can cause the occurrence of another restart. By~\cref{lem:const}, the number of low-degree\xspace vertices that leave the MIS in an update is constant. Since the algorithm starts from an empty graph and all vertices are initially in the MIS, when a vertex leaves the MIS we can pay another $O(1)$ rounds and $O(m^{2/3})$ messages for the restart procedure it might cause. Note that the rest of the restart's cost (which is $O(S')$ messages and $O(|V_{low}|)$ rounds) is paid as described above in~\cref{lem:m-max-runtime} for one restart. This works since in each such restart in the chain, only low-degree\xspace vertices that \emph{did not participate in a previous restart} for this update participate. Formally, given a chain of restarts $r_1,r_2,\ldots,r_k$ and the corresponding sets of vertices that move from $V_H$ to $V_L$, $V^1_{low}, V^2_{low}, \ldots, V^k_{low}$, for any pair $i, j$ with $1 \le i < j \le k$ we have $V^i_{low}\cap V^j_{low} = \emptyset$. Therefore, the payment for each such restart is given by different vertices. To conclude, we get that the cost of a chain of restarts is $O(1)$ amortized rounds and $O(m^{2/3})$ amortized messages. \end{proof} \fi \subsubsection{Analysis of the Main Algorithm under $m_{max}$}\label{sec:analysis-max} \iflong In this section, we provide the analysis of our main algorithm, which uses the analysis of our restart procedures detailed above. \else We provide the full analysis of our main algorithm under $m_{max}$ in~\cref{app:max-analysis}.
The analysis of our main algorithm follows straightforwardly from~\cref{inv:low},~\cref{inv:restart}, and the analysis of our restart procedures above. We obtain the following theorem with respect to $m_{max}$. \fi \iflong \begin{lemma}\label{lem:deletion-low} Given an edge deletion between two low-degree vertices $(u, v)$ where w.l.o.g.\ $u$ is in the MIS and $c_v = 0$ after the deletion, $v$ adds itself to the MIS after $O(1)$ rounds of communication and after sending $O(m_{max}^{2/3})$ messages. \end{lemma} \begin{proof} By~\cref{inv:restart}, $deg'(v) < 4m_{max}^{2/3}$ for vertex $v$. Since $v$ is low-degree, $deg(v) < deg'(v)$. Thus, if $c_v = 0$, $v$ adds itself to the MIS and informs all its neighbors. Since $v$ has at most $deg(v)$ neighbors, this requires at most $O(m_{max}^{2/3})$ messages, which are $O(1)$-bit each, and $O(1)$ rounds of communication. \end{proof} \begin{lemma}\label{lem:insertion-low} Given an edge insertion between low-degree vertices $(u, v)$ where both are in the MIS,~\cref{alg:low-neighbor} requires $O(1)$ amortized rounds and $O(m_{max}^{2/3})$ amortized messages using $O(\log n)$-bit messages. \end{lemma} \begin{proof} The correctness of the procedure follows from the observation that only vertices which do not have low-degree neighbors in the MIS are added to the MIS. We prove the amortized message complexity using a charging argument similar to that given in~\cite{AOSS18}. By~\cref{lem:const}, the number of low-degree\xspace vertices that leave the MIS in an update is constant. In addition, by~\cref{inv:restart}, every time a low-degree\xspace vertex enters the MIS, it needs at most $O(1)$ rounds and $O(m_{max}^{2/3})$ messages to notify all its neighbors. Since the algorithm starts with an empty graph and all the vertices are in the MIS, each time a vertex leaves the MIS it pays another $O(1)$ rounds and $O(m_{max}^{2/3})$ messages toward its next insertion into the MIS.
Therefore, the total cost of~\cref{alg:low-neighbor} is $O(1)$ amortized rounds and $O(m_{max}^{2/3})$ amortized messages. Note that the insertion of low-degree\xspace vertices into the MIS could cause~\cref{alg:find-subgraph} or~\cref{alg:high} to run. Their costs are analyzed below in~\cref{lem:high-deg-computation} and~\cref{cor:high-2-runtime}. \end{proof} We move on to the runtimes of edge updates between a high-degree\xspace vertex and a low-degree\xspace vertex. \begin{lemma}\label{lem:high-deg-computation} \cref{alg:high} requires $O(\log^2 n)$ amortized rounds and $O(m_{max}^{2/3}\log^2 n)$ amortized messages. \end{lemma} \begin{proof} The first set of steps of the procedure builds the graph $G' = (V', E')$, which undergoes the restart procedure given by~\cref{alg:clean vertices}. We can charge building this graph to the cost of the restart procedure. Then, by~\cref{lem:m-max-runtime}, the cost of performing the restart is $O(1)$ amortized rounds and $O(m_{max}^{2/3})$ amortized messages. The next set of steps adds low-degree\xspace vertices in $L$ to the MIS. By~\cref{lem:insertion-low}, these steps require $O(1)$ amortized rounds and $O(m^{2/3})$ amortized messages. Then, vertices in $U'$ inform their high-degree\xspace neighbors that they are leaving the MIS. By~\cref{thm:restart}, the number of vertices that remain in $G'$ after the restart is at most $O(m_{max}^{1/3})$. Since $U'$ consists of high-degree\xspace vertices that remained high-degree\xspace after the restart, the cost of the vertices in $U'$ informing all their high-degree\xspace neighbors that they left the MIS is $O(1)$ rounds and $O(m_{max}^{2/3})$ messages. The remaining part of~\cref{alg:high} finds an MIS for the set of vertices in $U$. First, finding $V_H'$ requires $O(1)$ rounds and $O(m^{2/3})$ messages. This is due to the fact that the number of edges in the induced subgraph is $O(m^{2/3})$ and the diameter is $O(1)$.
Then, the last $4$ steps of the remaining part of the algorithm can be performed in $O(D\log^2 n)$ rounds, where $D$ is the distance to the leader, by a modified version of Theorem 1.5 of~\cite{CPS20} (\cref{lem:cps}). Since the distance to the leader in this case is $\leq 4$, the number of rounds necessary by our algorithm is $O(\log^2 n)$. Finally, the number of messages sent by the vertices is upper bounded by high-degree vertices which enter the MIS and must inform their high-degree neighbors. Since each high-degree vertex attempts to enter the MIS at most $O(\log^2 n)$ times and enters the MIS at most once, the number of messages sent by the high-degree vertices is bounded by $O(m_{max}^{2/3}\log^2 n)$, equal to the number of edges between all high-degree vertices times $O(\log^2 n)$. Because both~\cite{CPS20} and~\cite{GGR20} are implemented in the \textsc{Congest} model, our algorithm can also be run with small bandwidth. \end{proof} \begin{corollary}\label{cor:high-2-runtime} \cref{alg:find-subgraph} requires $O(\log^5 n)$ amortized rounds and $O(m_{max}^{2/3}\log^5 n)$ amortized messages. \end{corollary} The proof of the above corollary is almost identical to the proof of~\cref{lem:high-deg-computation}. Since~\cref{alg:find-subgraph} is a subroutine of~\cref{alg:high} (except for the usage of~\cite{GGR20}), the proof of the above corollary immediately follows. \begin{lemma}\label{lem:edge-insert-high-low} Given an edge insertion, $(u, v)$, between a low-degree\xspace vertex $u$ and a high-degree\xspace vertex $v$, restoring an MIS in the graph requires $O(\log^2 n)$ amortized rounds and $O(m_{max}^{2/3}\log^2 n)$ amortized messages. \end{lemma} \begin{proof} Suppose, first, that $u$ is not in the MIS; then, regardless of whether $v$ is in the MIS, nothing happens. Then, suppose $u$ is in the MIS. If $v$ is not in the MIS, nothing happens. However, if $v$ is in the MIS, then we need to run~\cref{alg:high} with $u$ as the leader.
Computing $U$ requires $O(1)$ rounds and $O(m_{max}^{2/3})$ messages since $v$ just needs to send a message to each of its high-degree neighbors that it left the MIS. The vertices $w \in N_{high}(v)$ that do not have neighbors in the MIS compose $U$. Then, by~\cref{lem:high-deg-computation},~\cref{alg:high} runs in $O(\log^2 n)$ rounds and $O(m_{max}^{2/3}\log^2 n)$ messages. \end{proof} \begin{lemma}\label{lem:edge-del-high-low} Given an edge deletion, $(u, v)$, between a low-degree\xspace vertex $u$ and a high-degree\xspace vertex $v$, restoring an MIS in the graph requires $O(1)$ amortized rounds and $O(m_{max}^{2/3})$ amortized messages. \end{lemma} \begin{proof} If $u$ is not in the MIS, nothing happens. If $u$ is in the MIS, then $v$ needs to check whether it has any neighbors remaining after the deletion that are in the MIS. If not, we need to add $v$ to the MIS and inform all its high-degree\xspace neighbors. We first perform~\cref{alg:clean vertices} to restart $N_{high}(v) \cup \{v\}$. This requires $O(1)$ amortized rounds and $O(m_{max}^{2/3})$ amortized messages by~\cref{lem:m-max-runtime}. For $v$ to inform all its high-degree\xspace neighbors after the restart requires $O(1)$ rounds and $O(m_{max}^{2/3})$ messages. \end{proof} Finally, we give the runtimes of edge insertions and deletions between two high-degree\xspace vertices. \begin{lemma}\label{lem:hd-insert} Given an edge insertion, $(u, v)$, between two high-degree\xspace vertices, restoring an MIS in the graph requires $O(\log^2 n)$ amortized rounds and $O(m_{max}^{2/3}\log^2 n)$ amortized messages. \end{lemma} \begin{proof} If at most one of $u$ or $v$ is in the MIS, nothing happens. Otherwise, one of the two vertices must leave the MIS; w.l.o.g. suppose $v$ leaves the MIS. We must run~\cref{alg:high} with $v$ as the leader. This requires $O(\log^2 n)$ amortized rounds and $O(m_{max}^{2/3}\log^2 n)$ amortized messages by~\cref{lem:high-deg-computation}.
\end{proof} \begin{lemma}\label{lem:hd-delete} Given an edge deletion, $(u, v)$, between two high-degree\xspace vertices, restoring an MIS in the graph requires $O(1)$ amortized rounds and $O(m_{max}^{2/3})$ amortized messages. \end{lemma} \begin{proof} If neither $u$ nor $v$ is in the MIS, nothing happens. If w.l.o.g. $u$ is in the MIS, then if $v$ does not have any neighbors in the MIS, it must enter the MIS. We run~\cref{alg:clean vertices} on $\{v\} \cup N_{high}(v)$ which costs $O(1)$ amortized rounds and $O(m_{max}^{2/3})$ amortized messages by~\cref{lem:m-max-runtime}. Then, $v$ informs all its high-degree\xspace neighbors it is in the MIS in $O(1)$ rounds and $O(m_{max}^{1/3})$ messages. \end{proof} \cref{lem:insertion-low,lem:deletion-low,lem:edge-del-high-low,lem:edge-insert-high-low,lem:hd-delete,lem:hd-insert} immediately give the following theorem. \fi \begin{restatable}{theorem}{mmaxfinal}\label{thm:mmax-main} There exists a deterministic algorithm in the \textsc{CONGEST}\xspace model that maintains an MIS in a graph $G = (V, E)$ under edge insertions/deletions in $O(\log^2 n)$ amortized rounds and $O(m_{max}^{2/3}\log^2 n)$ amortized messages per update. \end{restatable} \subsection{\texorpdfstring{Extending from $m_{max}$ to the Average Number of Edges $m_{avg}$}{}}\label{sec:m-avg} Thus far, our message and round complexities have been in terms of the maximum number of edges present in the graph at any time. In this section, we extend our results to $m_{avg}$, the average number of edges in the graph over the \emph{entire} update sequence. We do this by dividing the update sequence into \emph{phases}, such that if the number of edges at the beginning of phase $i$ is denoted by $m_i$ then the length of phase $i$ is $m_i^{2/3}$ updates (if $m_i^{2/3} < 1$, the phase consists of a single update).
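The phase partition just described can be sketched as follows. This is a sketch for intuition only: the helper name \texttt{phase\_starts} and the array \texttt{edge\_counts} are hypothetical, and in the algorithm itself no vertex computes phases centrally; the timestamps $t_v$ play this role implicitly.

```python
import math

def phase_starts(edge_counts):
    """Return the update indices at which each phase begins.

    edge_counts[j] is the number of edges in the graph just before
    update j (an assumption of this sketch). A phase that starts when
    the graph has m_i edges spans max(1, floor(m_i^(2/3))) updates.
    """
    starts, j = [], 0
    while j < len(edge_counts):
        starts.append(j)
        m_i = edge_counts[j]
        j += max(1, math.floor(m_i ** (2 / 3)))
    return starts
```

For example, with a constant $100$ edges each phase spans $\lfloor 100^{2/3}\rfloor = 21$ updates, while on an empty graph every phase is a single update.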
In each phase, we obtain $\tilde{O}(1)$ amortized rounds and $\tilde{O}(m_i^{2/3})$ amortized messages. This result, using H\"{o}lder's inequality, will imply that the amortized message complexity is $\tilde{O}(m_{avg}^{2/3})$ (see~\cref{thm:holder} for more details). To obtain this bound, we assume that each edge insertion/deletion is tagged with a \emph{timestamp} indicating the number of edge updates (including itself) that have occurred since the beginning. We assume that when an edge update $(u, v)$ occurs, both $u$ and $v$ receive the timestamp $t_{(u, v)}$ associated with the update. Using this information, we modify our algorithms above for handling updates in the $m_{avg}$ case. Each vertex $v$ stores two additional pieces of information: $S_v$, indicating the number of edges it thinks are in the graph, and $t_v$, indicating the timestamp of the last time $S_v$ was updated. Initially $S_v$ is set to $0$ when the graph is empty. If an edge insertion adjacent to $v$ occurs, and after the update $deg(v)>S_v$, $v$ updates $S_v$ such that $S_v \geq deg(v)$ is always satisfied; $t_v$ is updated with the timestamp of the edge insertion that caused $S_v$ to be updated. $S_v$ is \emph{not} updated when an edge deletion that is adjacent to $v$ occurs (if no restart algorithm is called). $S_v$ is also updated with a new value when~\cref{alg:clean vertices} runs on $v$. When~\cref{alg:clean vertices} runs on $v$, $v$ stores $S_v = S$ to be its new estimate of the number of edges in its sub-component and $t_v$ to be the timestamp of the update that caused~\cref{alg:clean vertices} to be called. \paragraph{Updated main algorithm (based upon \cref{sec:main} with some minor changes)} Consider an edge update $(u,v)$ (insertion or deletion) such that at least one vertex from $u$ and $v$ is low-degree\xspace, w.l.o.g. say $u$ is low-degree\xspace.
In this case, if $u$ needs to send a message to its neighbors, $u$ first checks whether $t_{(u,v)} - t_u > S_u/4$, where $t_{(u,v)}$ is the update timestamp it received. If $t_{(u,v)} - t_u > S_u/4$ then $u$ henceforth considers itself a high-degree\xspace vertex. Therefore, $u$ notifies all its neighbors about its movement to $V_H$ and performs \cref{alg:low-neighbor} if necessary. If $u$ was in the MIS, every neighbor of $u$ that wakes up (say $w$) and needs to enter the MIS receives $t_{(u,v)}$ from $u$ and first checks whether $t_{(u,v)} - t_w > S_w/4$ (notifying all of $u$'s neighbors of the timestamp requires $O(\log n)$-bit messages, assuming the timestamp resets to $0$ after sufficiently many updates). If so, it follows the same process as $u$. Note that more than one vertex can move from $V_L$ to $V_H$ in a single update. However, we will prove later that we can afford to pay for those movements. Once a vertex moves to $V_H$, it can return to $V_L$ only by a restart, which means that even if $w$ moved to $V_H$ but $deg(w) \le deg'(w)$, $w$ will not return to $V_L$ until a restart. The rest of the process occurs as we described in the main algorithm. \subsubsection{Updated restart procedure} Our restart procedure works like the restart procedure given in~\cref{sec:restart} except for the following change: each vertex that moves from $V_H$ to $V_L$, say $w$, not only updates $deg'(w)$ but also saves $S_w \leftarrow S$ and updates $t_w \leftarrow t_j$, where $t_j$ is the current update's timestamp. Since we changed the definition of $m$, we also need to update \cref{thm:restart} and \cref{lemma1} to be the following. \begin{invariant}\label{inv:restart-low-deg} Any vertex adjacent to an edge update that happens during phase $i$ and is low-degree after our updated restart has degree $O(m_{i}^{2/3})$.
\end{invariant} \begin{invariant}\label{inv:restart-high-deg} Any vertex adjacent to an edge update that happens during phase $i$ and is high-degree after our updated restart has at most $O(m_{i}^{1/3})$ high-degree vertices in its neighborhood when participating in~\cref{alg:high}. \end{invariant} \iflong \begin{lemma}\label{lem:remain-avg} After the updated \cref{alg:clean vertices} ends (at phase $i$), the number of vertices that remain in $G'$ is at most $2m_i^{1/3}$. \end{lemma} \begin{proof} As in \cref{thm:restart}, the number of vertices that remain in $G'$ is at most $S^{1/3}$. Note that $S = \sum_{v \in V'} deg(v) \le 2m$ where $m$ is the current number of edges. Since we can bound $m$ by $m \le m_i + m_i^{2/3} \le 2m_i$, we obtain $S \le 2m \le 4m_i$ and therefore $S^{1/3} \le 4^{1/3}m_i^{1/3} \le 2m_i^{1/3}$. \end{proof} Using~\cref{lem:remain-avg}, we can prove our~\cref{inv:restart-high-deg} below. \begin{lemma}\label{lem:inv-high-deg} \cref{inv:restart-high-deg} holds after all updated restarts. \end{lemma} \begin{proof} By definition of~\cref{alg:clean vertices}, the set of vertices in $G'$ are the ones that participate in~\cref{alg:high}. Hence, by~\cref{lem:remain-avg}, the number of high-degree vertices in the neighborhood of each high-degree vertex participating in~\cref{alg:high} is $O(m_{i}^{1/3})$. \end{proof} \begin{lemma}\label{lem:updated-d'v} If a vertex $v \in V'$ moves from $V_H$ to $V_L$, then $deg(v) \leq 3m_i^{2/3}$. \end{lemma} \begin{proof} By~\cref{alg:clean vertices}, we obtain that if a vertex $v \in V'$ moves from $V_H$ to $V_L$ then $deg(v) < S^{2/3}$. Since $S \le 2m \le 4m_i$, we conclude that if $v \in V'$ moves from $V_H$ to $V_L$, then $deg(v) < S^{2/3} \le (4m_i)^{2/3} \le 3m_i^{2/3}$. \end{proof} \fi Using~\cref{lem:updated-d'v}, we can prove our~\cref{inv:restart-low-deg} below. \begin{lemma}\label{lem:inv3-holds} \cref{inv:restart-low-deg} holds after all updated restarts.
\end{lemma} \begin{proof} Suppose $v$ is a vertex adjacent to an edge update $(u,v)$ at phase $i$, and after the update $v \in V_L$. If $v \in V_L$ before the update, then after the update $v$ checks whether $deg(v) > deg'(v)$ and whether $t_{(u,v)}- t_v \ge S_v/4$. Since $v$ stays in $V_L$, it means that $deg(v) \le deg'(v)$ and $t_{(u,v)}- t_v < S_v/4$. $deg'(v)$, $t_v$, and $S_v$ were updated during the last time $v$ moved from $V_H$ to $V_L$. By the proof of \cref{lemma1} and according to~\cref{alg:clean vertices}, we obtain that when $v$ moved from $V_H$ to $V_L$ it updated $deg'(v)$ to be $2deg(v)$, which is at most $2S^{2/3}$, and updated $S_v = S$. By the assumption, there were at most $S_v/4$ updates since the last restart and the number of edges in the graph during the restart was at least $S_v/2$. Therefore, the number of edges in the graph is at least $S_v/4$. In addition, since the update occurred at phase $i$, the number of edges in the graph is $O(m_i)$, and hence $S_v/4 = O(m_i)$. By the fact that $deg(v) \le deg'(v) \le 2S_v^{2/3}$ we get that $deg(v) = O(m_i^{2/3})$. If $v \in V_H$ before the update, then by~\cref{lem:updated-d'v} we get that $deg(v) = O(m_i^{2/3})$. \end{proof} Note that \cref{lem:d'v} is not true anymore because $m$ can decrease significantly throughout phases but $deg'(v)$ can stay the same for more than one phase, and in that case $deg'(v)$ could be more than $m_i$. Hence, the timestamp that a vertex $v$ receives during its movement from $V_H$ to $V_L$ is critical. \begin{restatable}{lemma}{avgconst}\label{lem:avg-const} The average number of low-degree\xspace vertices that enter or leave the MIS during an update is constant. \end{restatable} \iflong \begin{proof} In order to prove this lemma, we show that only a constant number of low-degree\xspace vertices leave the MIS during an update. A low-degree\xspace vertex $u$ leaves the MIS for one of three reasons: \begin{itemize} \item an edge insertion $(u,v)$ occurs and $deg(u) > deg'(u)$ after the insertion.
\item an edge update $(u,v)$ occurs and $t_{(u,v)}- t_u > S_u/4$. \item an edge insertion $(u,v)$ occurs such that both $u$ and $v$ are low-degree\xspace vertices and both are in the MIS. \end{itemize} In the first and the second cases, at most $2$ vertices might move to $V_H$ and leave the MIS ($u$ and $v$); that could happen if, for both, at least one of the two conditions is met. In the third case, if none of the first two conditions is met, only one of $u$ and $v$ would leave the MIS. Note that it is \emph{not} the case that only vertices that are adjacent to an edge update can move to $V_H$. However, the other low-degree\xspace vertices that can move to $V_H$ are not in the MIS. Only vertices that need to make an action (either enter or leave the MIS) check their timestamp and their degree. In addition, those vertices first check their timestamp and only after that enter or leave the MIS (if it is still necessary). When a vertex $u$ leaves the MIS, all its low-degree\xspace neighbors might enter the MIS, but before they enter they first check their timestamp; therefore, if they leave $V_L$ to become part of $V_H$, they first move to $V_H$ and only later enter the MIS as high-degree\xspace vertices if necessary. So, for these vertices, we ensure that they do not enter the MIS only to immediately leave it because they moved to $V_H$. Overall, we get that at most $2$ low-degree\xspace vertices leave the MIS during an update. Thus, the average number of low-degree\xspace vertices that enter the MIS during an update is also constant since a vertex can only enter the MIS if it is not currently in the MIS. \end{proof} \fi \subsubsection{Updated analysis of the Restart Procedure for $m_{avg}$} \begin{theorem}\label{thm:phase_analysis} During phase $i$ for some $i\ge 0$, the amortized round complexity is $\Tilde{O}(1)$ and the amortized message complexity is $\Tilde{O}(m_i^{2/3})$ for each update.
\end{theorem} To prove \cref{thm:phase_analysis}, we divide the restart procedure into two kinds of restarts: \begin{itemize} \item \textbf{Heavy restart:} a restart that happens during phase $i$ in which the estimated number of edges in the graph satisfies $S > 4m_i^{2/3}$. \item \textbf{Light restart:} a restart in which the estimated number of edges in the graph satisfies $S\le 4m_i^{2/3}$. \end{itemize} \begin{lemma}\label{lem:light-restart} A light restart takes $O(1)$ amortized rounds and $O(m_i^{2/3})$ messages. \end{lemma} \begin{proof} Since the only change we made to our restart procedure is to save two more known values for each vertex that moves from $V_H$ to $V_L$, the running time of the restart did not change. Denote by $V'_M$ the vertices that moved from $V_H$ to $V_L$ during the restart and entered the MIS after the restart. We know from~\cref{subsec:restart-analysis} that the restart procedure takes at most $O(|V'_{M}|)$ rounds and $O(|E'| + S')$ messages. Since $|E'| + S' \le S \le 4m_i^{2/3}$, we obtain that the number of messages is at most $O(m_i^{2/3})$. By~\cref{lem:avg-const} we know that the average number of vertices that enter the MIS in an update is constant; therefore, the size of $V'_M$ is constant on average and the number of rounds is amortized $O(1)$. Note that this restart might cause other restarts in a chain (as we saw in the $m_{max}$ case); we show below that we can handle this case as well. \end{proof} The difficulty with our update procedure for $m_{avg}$ is that a heavy restart can trivially cost more than $O(m_i^{2/3})$ messages in the worst case, and since we cannot bound the number of heavy restarts during a phase, we might need to pay a lot for heavy restarts during phase $i$. In the next lemma, we prove that a vertex $v$ would participate in a heavy restart only if it could ``pay'' for it.
\begin{restatable}{lemma}{numrestarts}\label{lem:num-of-restarts} If a vertex $v$ participated in a heavy restart $r_a$ during phase $i$ and moved from $V_H$ to $V_L$ during $r_a$, it can move back to $V_H$ and participate in another restart $r_b$ during phase $i$ only if its degree $deg(v)$ was doubled after $r_a$. \end{restatable} \iflong \begin{proof} Suppose $v$ is a vertex that moved to $V_L$ after the restart $r_a$. In order for $v$ to move back to $V_H$ during an update $j$ that occurs in phase $i$ after $r_a$, one of the following two conditions needs to be met: \begin{itemize} \item $deg(v) > deg'(v)$ \item $t_j - t_v > S_v/4$ \end{itemize} where $t_j$ is the timestamp of update $j$. Since $t_v$ was updated during $r_a$, and $r_a$ and $j$ both occurred during phase $i$, we get that $t_j - t_v < m_i^{2/3}$. In addition, $S_v = S_{r_a} > 4m_i^{2/3}$ because $r_a$ is a heavy restart, which means that during any update $j$ in phase $i$, $t_j - t_v < m_i^{2/3} < S_v/4$, so the second condition would not occur during phase $i$. Therefore, $v$ can participate in another restart only if $deg(v) >deg'(v)$ and since $deg'(v) = 2d^{r_a}_v$ (where $d^{r_a}_v$ is the degree of $v$ during the restart at $r_a$), then $deg(v)$ must exceed $2d^{r_a}_v$, which means the degree of $v$ was doubled. \end{proof} \fi \begin{restatable}{lemma}{heavyrestartcost}\label{lem:heavy-restart-cost} During phase $i$, the cost of a heavy restart is $O(1)$ amortized rounds and $O(m_i^{2/3})$ amortized messages. \end{restatable} \iflong \begin{proof} Overall, we get that in phase $i$ each heavy restart costs $O(|E'| + S')$ messages, which is at most $O(m_i^{2/3} + S')$. We use the same argument as in the proof of~\cref{lem:m-max-runtime} to ignore the runtime contribution of set $L$ to $|E'|$. The inequality follows because $m_i$ can grow to at most $m_i + m_i^{2/3}$ during phase $i$.
Since $S' = \sum_{v\in V_{low}} deg(v)$ (where $V_{low}$ is the group of vertices that moved from $V_H$ to $V_L$ during the restart), we can pay for each such vertex $v \in V_{low}$ if its degree doubled after the restart. By \cref{lem:num-of-restarts}, each vertex $v \in V_{low}$ that did not double its degree after the heavy restart will not participate in another restart during phase $i$. The number of vertices that do not pay for a heavy restart during phase $i$ is at most $n$, and therefore the cost of those vertices is at most $O(\sum_{v\in V} deg(v))$; however, since $deg(v)$ can change during phase $i$, we will calculate the cost according to $deg^{max}(v)$, which is the maximum degree of $v$ during phase $i$. So the maximum cost we pay for vertices that participated in a heavy restart but did not pay their part of the restart by doubling their degree is $O(\sum_{v\in V} deg^{max}(v)) = O(\sum_{v\in V} deg(v) + m_i^{2/3}) = O(m_i + m_i^{2/3}) = O(m_i)$, so if we charge on each update during phase $i$ another $O(m_i^{1/3})$ messages, we would pay at the end of phase $i$ for all those vertices and get $O(m_i^{2/3})$ amortized message complexity during this phase. The round complexity of a heavy restart is at most $O(|V'_M|)$, where again $V'_M$ is the set of vertices that moved from $V_H$ to $V_L$ during the restart and entered the MIS after the restart. By~\cref{lem:avg-const} we get that the average number of low-degree\xspace vertices that enter the MIS in an update is constant; therefore, the average size of $V'_M$ is constant and the restart costs amortized $O(1)$ rounds. \end{proof} \fi By \cref{lem:light-restart}, during phase $i$, we know that a light restart costs $O(1)$ amortized rounds and $O(m_i^{2/3})$ messages, and by \cref{lem:heavy-restart-cost} we know that a heavy restart also costs $O(1)$ amortized rounds and $O(m_i^{2/3})$ amortized messages. The last difficulty we encounter is that one restart can cause a chain of restarts.
However, recall that after a restart only a vertex that moved from $V_H$ to $V_L$ and entered the MIS can cause another restart. Since the average number of low-degree\xspace vertices that enter the MIS during an update is constant (by~\cref{lem:avg-const}), we get that the average number of restarts during a phase is also constant, and by our lemmas above we obtain our desired costs. Overall, the payment for all the restarts during a phase is $O(1)$ amortized rounds and $O(m_i^{2/3})$ amortized messages. \begin{restatable}{lemma}{movement}\label{lem:movement} The total cost of moving vertices from $V_L$ to $V_H$ during phase $i$ is amortized $O(1)$ rounds and $O(m_i^{2/3})$ messages. \end{restatable} \iflong \begin{proof} Let $\Tilde{V}$ be all the vertices $v \in V$ with degree $deg(v) > 4m_i^{2/3}$ at the beginning of phase $i$. Note that each such vertex $v \in \Tilde{V}$ can move to $V_H$ only once during phase $i$. That is because according to \cref{lem:updated-d'v} a vertex $v \in V_H$ moves to $V_L$ only if $deg(v) < 3m_i^{2/3}$, and since each vertex $v \in \Tilde{V}\cap V_H$ has degree $deg(v) > 4m_i^{2/3}$, its degree can decrease by at most $m_i^{2/3}$ during phase $i$, so it remains above $3m_i^{2/3}$, which is not low enough for $v$ to move back to $V_L$. Since the sum of the degrees of vertices in $\Tilde{V}$ is $O(m_i)$, we can pay for the movement of those vertices by charging another $O(m_i^{1/3})$ messages in each update. Now consider the vertices in $V \setminus \Tilde{V}$: each such vertex $v$ has degree $deg(v) \le 4m_i^{2/3}$, so the cost of the movement of $v$ is at most $O(1)$ rounds and $O(m_i^{2/3})$ messages. However, the movement of $v$ might cause the movement of other vertices. Recall that only vertices that want to make an action (enter or leave the MIS) first check their stored timestamp and their degree and move to $V_H$ if necessary. By~\cref{lem:avg-const}, the average number of such low-degree\xspace vertices is constant.
Therefore, the total cost of the movement of vertices from $V \setminus \Tilde{V}$ is $O(1)$ amortized rounds and $O(m_i^{2/3})$ amortized messages. \end{proof} \fi By~\cref{lem:movement}, and since a vertex can move from $V_H$ to $V_L$ only through a restart, we get that overall the running time of the movements in phase $i$ is $O(1)$ amortized rounds and $O(m_i^{2/3})$ amortized messages. \subsubsection{Analysis of the Main Algorithm under $m_{avg}$}\label{sec:analysis-avg} \iflong Using our updated restart procedure for $m_{avg}$ provided in~\cref{sec:m-avg}, we provide an updated analysis of our main algorithm in terms of $m_{avg}$ in this section. \begin{restatable}{lemma}{deletionlow}\label{lem:deletion-avg-low} Given an edge deletion during phase $i$ between two low-degree vertices $(u, v)$ (where $u$ and $v$ are low-degree after any restart) where w.l.o.g. $u$ is in the MIS and $c_v = 0$ after the deletion, $v$ adds itself to the MIS after $O(1)$ rounds of communication and after sending $O(m_{i}^{2/3})$ messages. \end{restatable} \begin{proof} By~\cref{inv:restart-low-deg}, the degree of $v$ after the restart is $O(m_{i}^{2/3})$. Hence, by our procedure $v$ enters the MIS after informing all its neighbors that it is entering the MIS. Hence, because $deg(v) = O(m_i^{2/3})$, $v$ adds itself to the MIS after $O(1)$ rounds of communication and after sending $O(m_i^{2/3})$ messages. \end{proof} \begin{lemma}\label{lem:insertion-avg-low} Given an edge insertion during phase $i$ between low-degree vertices $(u, v)$ (where $u$ and $v$ are low-degree after any restart) such that both are in the MIS,~\cref{alg:low-neighbor} requires $O(1)$ amortized rounds and $O(m_{i}^{2/3})$ amortized messages using $O(\log n)$-bit messages. \end{lemma} \begin{proof} By~\cref{inv:restart-low-deg}, the degree of $u$ and $v$ after the restart is $O(m_{i}^{2/3})$. Therefore, w.l.o.g.
if $u$ leaves the MIS then it notifies all its neighbors with $O(1)$ rounds and $O(m_i^{2/3})$ messages. Each of $u$'s low-degree\xspace neighbors that needs to enter the MIS first checks whether it needs to move to $V_H$. If so, it moves to $V_H$ and notifies all its neighbors (we analyzed the cost of this movement above). If not, by the proof of~\cref{lem:inv3-holds} we get that its degree is at most $O(m_i^{2/3})$ and therefore it pays $O(1)$ rounds and $O(m_i^{2/3})$ messages to notify all its neighbors. By~\cref{lem:avg-const} we know that the average number of low-degree\xspace vertices that enter the MIS during an update is constant. Therefore, the amortized cost of this update is $O(1)$ rounds and $O(m_i^{2/3})$ messages. \end{proof} \begin{lemma} Determining the set of high-degree neighbors to enter the MIS after an edge insertion between a high-degree node in the MIS and a low-degree node in the MIS requires $O(\log^2 n)$ rounds and $O(m_{i}^{2/3}\log^2 n)$ messages using~\cref{alg:high}. \end{lemma} \begin{proof} By~\cref{inv:restart-high-deg} we get that after the restart, the number of high-degree\xspace vertices that participate in~\cref{alg:high} is at most $O(m_i^{1/3})$. Repeating the proof of~\cref{lem:high-deg-computation} with $m_i$ in place of $m_{max}$ suffices. \end{proof} The rest of the lemmas, \cref{lem:edge-del-high-low,lem:hd-insert,lem:hd-delete}, follow immediately for phase $i$ from our invariants and by replacing $m_{max}$ by $m_i$. We show as our last step that we can translate our costs from $m_i$ to $m_{avg}$. \begin{lemma}\label{thm:holder} If there are $t$ updates, where in phase $i$ the average number of messages during each update is $\Tilde{O}(m_i^{2/3})$, such that $O(m_i)$ is the maximum number of edges in this phase, then the average number of messages during each update is $\Tilde{O}(m_{avg}^{2/3})$.
\end{lemma} \begin{proof} To prove the lemma, we show that the average number of messages over all the updates is at most $\Tilde{O}(m_{avg}^{2/3})$. Since the average number of messages during each update in phase $i$ is $\Tilde{O}(m_i^{2/3})$ and the number of edges during this phase is $O(m_i)$, we can refer to the number of messages sent to resolve an update $j$ as $(c_jm_j)^{2/3}\log ^2n$ (for some constant $c_j$) and to the number of edges during this update as $c_jm_j$, respectively. So we need to prove that: \begin{align} \frac{\sum_{j=1}^t(c_jm_j)^{2/3}\log^2 n}{t} \le \left(\frac{\sum_{j=1}^tc_jm_j}{t}\right)^{2/3}\log^2 n \label{eq:avg} \end{align} and by the fact that \begin{align} \left(\frac{\sum_{j=1}^tc_jm_j}{t}\right)^{2/3} = m_{avg}^{2/3} \label{eq:mavg} \end{align} the lemma follows. Note that~\cref{eq:avg} is equivalent to: \begin{align} \sum_{j=1}^t(c_jm_j)^{2/3} \le \left(\sum_{j=1}^tc_jm_j\right)^{2/3}t^{1/3} \label{eq:avg2} \end{align} To prove~\cref{eq:avg2} we use H\"{o}lder's inequality: \begin{align} \label{eq:holder} \sum_{k=1}^n|x_ky_k| \le \left(\sum_{k=1}^n|x_k|^p\right)^{\frac{1}{p}} \left(\sum_{k=1}^n|y_k|^q\right)^{\frac{1}{q}} \end{align} where $(x_1,...,x_n), (y_1,...,y_n)\in \mathbb{R}^n$, $p,q \ge 1$ and $\frac{1}{p} + \frac{1}{q} = 1$. By choosing $p = \frac{3}{2}$, $q = 3$, and for each $1 \le j \le t$, $y_j = 1$, $x_j = (c_jm_j)^{2/3}$, we get that: \begin{align} \sum_{j=1}^t|(c_jm_j)^{\frac{2}{3}}| \le \left(\sum_{j=1}^t|(c_jm_j)^{\frac{2}{3}}|^{\frac{3}{2}}\right)^{\frac{2}{3}} \left(\sum_{j=1}^t|1|^3\right)^{\frac{1}{3}} \end{align} which is exactly~\cref{eq:avg2}. \end{proof} Using the above lemmas, we obtain our final result for this section with respect to $m_{avg}$.
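The H\"{o}lder step above (with $p = 3/2$, $q = 3$) can also be checked numerically. The sketch below uses synthetic values for the per-update edge counts $c_j m_j$; the helper name \texttt{holder\_sides} is ours, not part of the algorithm.

```python
import random

def holder_sides(costs):
    """LHS and RHS of: sum x_j^(2/3) <= (sum x_j)^(2/3) * t^(1/3)."""
    t = len(costs)
    lhs = sum(x ** (2 / 3) for x in costs)
    rhs = sum(costs) ** (2 / 3) * t ** (1 / 3)
    return lhs, rhs

# Check the inequality on random nonnegative inputs.
random.seed(0)
for _ in range(1000):
    costs = [random.randint(0, 10 ** 6) for _ in range(random.randint(1, 40))]
    lhs, rhs = holder_sides(costs)
    # tiny tolerance for floating-point rounding; equality holds when
    # all costs are equal (e.g. a single-element list)
    assert lhs <= rhs * (1 + 1e-12)
```

Equality is attained exactly when all the $c_j m_j$ are equal, which matches the intuition that the bound is tightest when every update is equally expensive.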
\else Using our updated restart procedure for $m_{avg}$ provided in~\cref{sec:m-avg}, we provide an updated analysis of our main algorithm in~\cref{app:m-avg} in terms of $m_{avg}$, due to space constraints. The analysis then provides our main theorem in terms of $m_{avg}$. \fi \begin{restatable}{theorem}{mavgfinal}\label{thm:m-avg-final} There exists a deterministic algorithm in the \textsc{CONGEST}\xspace model that maintains an MIS in a graph $G = (V, E)$ under edge insertions/deletions in $O(\log^2 n)$ amortized rounds and $O(m_{avg}^{2/3}\log^2 n)$ amortized messages per update. \end{restatable} \appendix \section{\texorpdfstring{$O(D\log^2 n)$ round Static MIS algorithm in \textsc{CONGEST}\xspace}{}}\label{app:cps} We describe the key components of~\cite{CPS20} that are necessary to prove~\cref{lem:cps}. The algorithm provided in~\cite{CPS20} operates under the \textsc{CONGEST}\xspace model. The algorithm consists of $O(\log n)$ phases, each of which requires $O(D \log n)$ rounds of communication, where $D$ is the diameter of the graph. Each phase is one round of a modified and derandomized version of~\cite{Ghaffari16}. Consider phase $t$ of the algorithm where we only consider the vertices that are not yet removed from the graph (i.e.\ the vertices that have not yet been added to the MIS nor are neighbors of vertices in the MIS). Each vertex stores in its local memory a (large enough) family of hash functions, and the vertices choose a hash function based on a shared random seed that is computed deterministically. Let the seed $Y = (y_1, \dots, y_{\gamma})$ be the $\gamma$ random variables that are used to select a hash function from the hash function family. The value of $y_i$ in the seed is computed in the following way. Suppose $y_1 = b_1, \dots, y_{i -1} = b_{i - 1}$ have already been computed. By Ghaffari's algorithm, a vertex can be in one of two types of \emph{golden-rounds}.
If a vertex $v$ is in \emph{golden round-1}, $v$ uses the IDs of its neighbors (that are not yet removed) and some additional information from its neighbors to compute \begin{align*} E(\psi_{v, t}|Y_{i, b}) = Pr[m_{v, t} = 1| Y_{i, b}] - \sum_{u \in N(v)} Pr[m_{u, t} = 1 | Y_{i, b}] \end{align*} where $Pr[m_{v, t} = 1 | Y_{i, b}]$ is the conditional probability that $v$ is marked in phase $t$. For vertices in \emph{golden-round-2}, the vertex $v$ first finds a subset of the neighbors $W(v) \subseteq N(v)$ satisfying $\sum_{w \in W(v)} p_t(w) \in [1/40, 1/4]$. Then, let $M_{t, b}(u)$ be the conditional probability on $Y_{i, b}$ that $u$ is marked but none of its neighbors are marked. Let $M_{t, b}(u, W(v))$ be the conditional probability that another neighbor of $v$, $w \in W(v)$, other than $u$ is marked. Using these values, $v$ then computes \begin{align*} E(\psi_{v, t}|Y_{i, b}) = \sum_{u \in W(v)} Pr[m_{u, t} = 1 | Y_{i, b}] - M_{t, b}(u) - M_{t, b}(u, W(v)). \end{align*} Each vertex then sends $E(\phi'_{v, t}|Y_{i, b})$ (computed using $E(\psi_{v, t}|Y_{i, b})$) to its parent, who sums up all the values received. The leader receives the aggregate values $\sum_{v \in V'} E(\phi'_{v, t} | Y_{i, b} = 0)$ and $\sum_{v \in V'} E(\phi'_{v, t} | Y_{i, b} = 1)$ from its descendants and decides on $y_i = 0$ if $\sum_{v \in V'} E(\phi'_{v, t} | Y_{i, b} = 0) \geq \sum_{v \in V'} E(\phi'_{v, t} | Y_{i, b} = 1)$ and $y_i = 1$ otherwise. The leader then sends the bit to all its descendants. Once the vertices have computed $Y$, they can simulate phase $t$ of Ghaffari's algorithm. Each vertex that gets marked enters the MIS and removes itself from the subgraph running the algorithm if none of its neighbors are marked. All neighbors of such vertices also remove themselves. The algorithm proceeds with this smaller subgraph. \end{document}
\begin{document} \title{Monogamy of Measurement-Induced NonLocality} \author{Ajoy Sen} \email{[email protected]} \affiliation{Department of Applied Mathematics, University of Calcutta, 92, A.P.C. Road, Kolkata-700009, India.} \author{Debasis Sarkar} \email{[email protected]} \affiliation{Department of Applied Mathematics, University of Calcutta, 92, A.P.C. Road, Kolkata-700009, India.} \author{Amit Bhar} \email{[email protected]} \affiliation{Department of Mathematics, Jogesh Chandra Chaudhuri College, 30, Prince Anwar Shah Road, Kolkata-700033, India.} \begin{abstract} Measurement-induced nonlocality was introduced by Luo and Fu (Phys. Rev. Lett. \textbf{106}, 120401 (2011)) as a measure of nonlocality in a bipartite state. In this paper we discuss the monogamy property of measurement-induced nonlocality for some three- and four-qubit classes of states. Unlike for discord, we find quite surprising results in this situation. Both the GHZ and W states satisfy monogamy relations in the three-qubit case; however, in general there are violations of the monogamy relations in both the GHZ-class and W-class states. In the case of the four-qubit system, monogamy holds for most of the states in the generic class. The four-qubit GHZ state does not satisfy the monogamy relation, but the W state does. We provide several numerical results, including counterexamples, regarding the monogamy nature of measurement-induced nonlocality. We also extend our results on the generalized W class to $n$ qubits. \end{abstract} \date{\today} \pacs{} \maketitle \section{INTRODUCTION} Nonlocality is at the heart of the quantum world. From Bell's theorem\cite{bell}, it is understood that no local hidden variable theory could replace quantum theory as a theory of the physical world. The existence of entangled states in composite quantum systems assures the nonlocal behavior of quantum theory. Generally, violation of Bell's inequality is taken as the signature of nonlocality.
Entangled states play an important role in showing the violation of Bell's inequality. However, nonlocality can be viewed from other perspectives. For example, there exist sets of locally indistinguishable orthogonal pure product states\cite{bennett}, used to show nonlocality without entanglement. The recently introduced measurement-induced nonlocality\cite{luo} (in short, MIN) is one of the ways to detect nonlocality in quantum states by some locally invariant measurements. Locally invariant measurements cannot affect global states in classical theory, but this is possible in quantum theory. So MIN is a type of nonlocal correlation which can exist only in the quantum domain. MIN is defined in such a way that it is non-negative for all states, invariant under local unitaries, and vanishes on product states. In this sense, MIN can be viewed as a type of nonlocal correlation which is induced by certain measurements. Although it is induced by some measurement, it is not a measure of quantum correlation in the true sense. But it can be treated as a measure of nonlocality induced by some kind of measurement. Now a natural question arises about the shareability of quantum correlations in multipartite states. It may be monogamous or it may be polygamous. It is known that entanglement is monogamous\cite{wootters}. In this work we will investigate the monogamous behavior of MIN. We will check the monogamy property for some classes of states in both three- and four-qubit systems. Unlike discord\cite{giorgi,prabhu,bruss}, here we will show that both the three-qubit generalized GHZ and W states satisfy monogamy relations; however, there are violations of the monogamy relation if we consider the whole generic class of pure three-qubit states. In the case of the pure four-qubit system we consider the most important generic class of states. It contains the usual GHZ state and maximally entangled states in the sense of Gour et al.\cite{gour}. Most of the states satisfy monogamy relations.
There are two important subclasses of the generic class, namely $\mathcal{M}$ and $\tau_{min}$. In one subclass monogamy holds but in the other it does not. In particular, the GHZ state violates the monogamy relation and the W state satisfies it. Therefore, the monogamy relations of MIN are quite different from those of some important measures of correlation\cite{dakic,zurek}, and monogamy acts as a distinguishing feature of some classes of states. Our paper is organized as follows: in section II we review some of the basic properties of MIN. In section III we explain and discuss some four-qubit classes of states which we will require in our work. Section IV is devoted to the notion of monogamy for MIN. Section V contains results on pure three-qubit systems and section VI contains results on the four-qubit system. Several counterexamples and numerical figures are discussed in both of the above sections. \section{OVERVIEW ON MIN} Let $\rho$ be any bipartite state shared between two parties A and B. Then MIN (denoted by $N(\rho)$) is defined as\cite{luo} \begin{equation} N(\rho):=\max_{\Pi^{A}}\parallel \rho-\Pi^{A}(\rho)\parallel^2 \end{equation} where the maximum is over all von Neumann measurements $\Pi^{A}$ which do not disturb $\rho_{A}$, the local density matrix of A, i.e., $\Sigma_{k}\Pi_{k}^{A}\rho_{A}\Pi_{k}^{A}=\rho_{A}$, and $\|.\|$ is the Hilbert-Schmidt norm (i.e., $\parallel X \parallel=[Tr(X^{\dag}X)]^{\frac{1}{2}}$). It is, in some sense, dual to the geometric measure of discord. Physically, MIN quantifies the global effect caused by locally invariant measurements. MIN has applications in general dense coding, quantum state steering, etc. MIN vanishes for product states and remains positive for entangled states. For pure states MIN reduces to the linear entropy, like geometric discord\cite{dakic}.
Explicit formulas of MIN for the $ 2 \otimes{n} $ system and the $ m \otimes{n} $ system (if $\rho_{A}$ is non-degenerate), and an explicit upper bound for the $ m \otimes{n} $ system, were obtained by Luo and Fu in \cite{luo}. Later, Mirafzali \textit{et al.}\ \cite{mirafzali} formulated a way to reduce the problem of degeneracy in the $ m \otimes{n} $ system and evaluated it for $ 3 \otimes{n} $ dimensional systems. MIN is invariant under local unitaries, i.e., in the true sense, it is a nonlocal correlation measure. The set of all zero-MIN states is non-convex. Guo and Hou \cite{guo} derived conditions for the nullity of MIN. They found that the set of states with zero MIN is a proper subset of the set of all classical-quantum states, i.e., zero-discord states. MIN for classical-quantum states vanishes if each eigen-subspace of $\rho_{A}$ is one-dimensional. This reveals that non-commutativity is the cause of this kind of nonlocality in quantum states. Recently, in \cite{xi}, MIN has been quantified in terms of relative entropy to give it another physical interpretation. However, in our work we use the original definition of MIN. Suppose $H^{A}$, $H^{B}$ are the Hilbert spaces associated with parties A and B respectively, and $L(H^{A})$, $L(H^{B})$ denote the Hilbert spaces of linear operators acting on $H^{A}$, $H^{B}$, with the inner product defined by $\langle X|Y\rangle:=tr X^{\dag}Y $. We state two important results which we will use in our work.
\\ {\textbf{Theorem 1}: (Luo and Fu \cite{luo}) \textit{Let $|\psi\rangle_{AB}$ be any bipartite pure state with Schmidt decomposition $|\psi\rangle_{AB}=\sum_{i}\sqrt{s_{i}}|\alpha_{i}\rangle_{A}|\beta_{i}\rangle_{B};$ then $N(|\psi\rangle_{AB})=1-\sum_{i}s_{i}^{2}$}.\\\\ \textbf{Theorem 2}: (Luo and Fu \cite{luo}) \textit{Let $\rho_{AB}$ be any state of a $2\otimes{n}$ dimensional system written in the form \begin{equation} \begin{split} \rho_{AB}=\frac{1}{\sqrt{2n}}\frac{I^{A}}{\sqrt{2}}\otimes{\frac{I^{B}}{\sqrt{n}}}+\sum_{i=1}^{3}x_{i}X_{i}^{A}\otimes{\frac{I^{B}}{\sqrt{n}}} +\frac{I^{A}}{\sqrt{2}}\otimes{\sum_{i=1}^{n^{2}-1}y_{i}Y_{i}^{B}}+\sum_{i=1}^{3}\sum_{j=1}^{n^{2}-1}t_{ij}X_{i}^{A}Y_{j}^{B} \end{split} \end{equation} where $\{X_{i}^{A}:\, i=0,1,2,3\}$ and $\{Y_{j}^{B}:\, j=0,1,2,...,n^{2}-1\}$ are the orthonormal Hermitian operator bases for $L(H^{A})$ and $L(H^{B})$ respectively, with $X_{0}^{A}=I^{A}/\sqrt{2},Y_{0}^{B}=I^{B}/\sqrt{n} $. Then \begin{equation} \begin{split} N(\rho_{AB})&=tr TT^{t}-\frac{1}{\|\textbf{x}\|^{2}}\textbf{x}^{t}TT^{t}\textbf{x}\qquad{if\textbf{\,x}\neq{\textbf{0}}}\\ &=trTT^{t}-\lambda_{3}\,\,\,\quad\quad\qquad\qquad{if\textbf{\,x}=\textbf{0}} \end{split} \end{equation} where the matrix $T=(t_{ij})$, with $\lambda_{3}$ being the minimum eigenvalue of $TT^{t}$ and $\|\textbf{x}\|^{2}:=\sum_{i}x_{i}^{2}$ with $\textbf{x}=(x_{1},x_{2},x_{3})^{t}$.} Before discussing the monogamy properties of measurement-induced nonlocality, specifically for three- and four-qubit systems, we first mention some important classes of states in four-qubit systems with some discussion of their entanglement behavior. \section{SOME SPECIAL FOUR-QUBIT CLASSES} Four-qubit pure states can be classified into nine groups\cite{moor}.
Among them, the generic class is given by \begin{equation} \mathcal{A}\equiv{\{\sum_{j=0}^{3}z_{j}u_{j}:\sum_{j=0}^{3}|z_{j}|^{2}=1, z_{i}\in{\mathbb{C}},i=0,1,2,3\}} \end{equation} where $u_{0}\equiv{|\phi^{+}\rangle|\phi^{+}\rangle}$, $u_{1}\equiv{|\phi^{-}\rangle|\phi^{-}\rangle}$, $u_{2}\equiv{|\psi^{+}\rangle|\psi^{+}\rangle}$, $u_{3}\equiv{|\psi^{-}\rangle|\psi^{-}\rangle}$ and $|\phi^{\pm}\rangle=(|00\rangle\pm|11\rangle)/\surd{2}$, $|\psi^{\pm}\rangle=(|01\rangle\pm|10\rangle)/\surd{2}$. Consider two important subclasses $\mathcal{M}$ and $\tau_{min}$ of the generic class $\mathcal{A}$, which are defined as \begin{equation} \mathcal{M}=\{\sum_{j=0}^{3}z_{j}u_{j}: \sum_{j=0}^{3}|z_{j}|^{2}=1, \sum_{j=0}^{3}z_{j}^{2}=0\} \end{equation} and \begin{equation} \tau_{min}=\{\sum_{j=0}^{3}x_{j}u_{j}: \sum_{j=0}^{3}x_{j}^{2}=1, x_{j}\in{\mathbb{R}}, j=0,1,2,3\}. \end{equation} These two subclasses are important in the sense that $\mathcal{M}$ is the maximally entangled class and $\tau_{min}$ has the least amount of bipartite entanglement according to the definition of maximally entangled states given by Gour \textit{et al.}\ \cite{gour}. Consider a pure bipartite state $|\psi_{AB}\rangle\in\mathbb{C}^{m}\otimes{\mathbb{C}^{n}}$. Then the tangle is defined as \begin{equation} \tau_{AB}=2(1-tr\rho_{A}^{2}) \end{equation} where $\rho_{A}=Tr_{B}|\psi_{AB}\rangle\langle\psi_{AB}|$.
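Both the pure-state MIN of Theorem 1 and the tangle above depend only on the squared Schmidt coefficients, so each can be checked with a few lines of linear algebra. A minimal numerical sketch (our illustration, assuming only the stated formulas):

```python
import numpy as np

def schmidt_probs(psi, dA, dB):
    # Squared Schmidt coefficients s_i: squared singular values of the
    # coefficient matrix of the bipartite pure state.
    M = psi.reshape(dA, dB)
    return np.linalg.svd(M, compute_uv=False) ** 2

def min_pure(psi, dA, dB):
    # Theorem 1 (Luo and Fu): N(|psi>) = 1 - sum_i s_i^2.
    s = schmidt_probs(psi, dA, dB)
    return 1.0 - np.sum(s ** 2)

def tangle(psi, dA, dB):
    # tau = 2(1 - tr rho_A^2); note tr rho_A^2 = sum_i s_i^2.
    s = schmidt_probs(psi, dA, dB)
    return 2.0 * (1.0 - np.sum(s ** 2))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
prod = np.array([0, 1, 0, 0], dtype=float)   # |01>
print(min_pure(bell, 2, 2))   # 0.5: maximal for two qubits
print(tangle(bell, 2, 2))     # 1.0
print(min_pure(prod, 2, 2))   # 0.0: MIN vanishes on product states
```

This also makes explicit that, for pure states, $\tau_{AB} = 2\,N(|\psi\rangle_{AB})$: both are the linear entropy of the reduced state up to a factor of two.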
Now for a pure state $|\psi_{ABCD}\rangle$, shared between four parties, three quantities $\tau_{1},\tau_{2},\tau_{ABCD}$ are defined as \begin{equation} \tau_{1}\equiv{\frac{1}{4}(\tau_{A|BCD}+\tau_{B|ACD}+\tau_{C|ABD}+\tau_{D|ABC})} \end{equation} \begin{equation} \tau_{2}\equiv{\frac{1}{3}(\tau_{AB|CD}+\tau_{CA|BD}+\tau_{DA|BC})} \end{equation} \begin{equation} \tau_{ABCD}=4\tau_{1}-3\tau_{2} \end{equation} For the above two subclasses $\mathcal{M}$ and $\tau_{min}$ we have $\tau_{1}(\mathcal{M})=1$, $\tau_{2}(\mathcal{M})=\frac{4}{3}$, $\tau_{ABCD}(\mathcal{M})=0$ and $\tau_{1}(\tau_{min})=1$, $\tau_{2}(\tau_{min})=1$, $\tau_{ABCD}(\tau_{min})=1$. The four-qubit GHZ state belongs to the class $\tau_{min}$. \section{MONOGAMY} Monogamy is an important aspect of our physical world which restricts the shareability of bipartite correlations. Entanglement is an example of a quantum correlation which is monogamous w.r.t.\ the tangle. Mathematically, a correlation measure $Q$ is said to be monogamous iff for any n-party state $\rho_{A_{1}A_{2}...A_{n}}$ the relation \begin{equation} \sum_{k=1,k\neq{i}}^{n}Q(\rho_{A_{i}A_{k}})\leq{Q(\rho_{A_{i}|A_{1}A_{2}...A_{i-1}A_{i+1}...A_{n}})} \end{equation} holds for all $i=1,2,...,n$. Now consider an n-party state $\rho_{12...n}$. Let the locally invariant measurement be performed on party 1. Then MIN is defined as $N(\rho_{1|2...n})=\parallel \rho_{12...n}-\Pi_{1}^{*}(\rho_{12...n})\parallel^{2}$, where $\Pi_{1}^{*}=\{\pi_{1k}^{*}\}$ is the optimal measurement performed by party 1 which does not change its local density matrix, i.e., $\rho_{1}=\Sigma_{k}\pi_{1k}^{*}\rho_{1}\pi_{1k}^{*}$. On the other hand, since $\rho_{1}=Tr_{2,3,...,n}(\rho_{12...n})=Tr_{j}(\rho_{1j}), j=2,3,...,n$, the optimal measurement also does not change the local density matrices of all two-party reduced states of $\rho_{12...n}$ of the kind $\rho_{1j}=Tr_{2,3,...,j-1,j+1,...,n}(\rho_{12...n})$.
Then $\Sigma_{j}N(\rho_{1j})\geq{\sum_{j}\|\rho_{1j}-\Pi_{1}^{*}(\rho_{1j})\|^{2}}$. So in the case of polygamy $N(\rho_{1|2...n})<\sum_{j}\|\rho_{1j}-\Pi_{1}^{*}(\rho_{1j})\|^{2}$, and if $N(\rho_{1|2...n})\geq\sum_{j}\|\rho_{1j}-\Pi_{1}^{*}(\rho_{1j})\|^{2}$ then the state is monogamous w.r.t.\ MIN. \section{MONOGAMY IN THREE QUBIT SYSTEM} Any three-qubit pure state $|\psi_{ABC}\rangle$ has the generic form $\lambda_{0}|000\rangle+\lambda_{1}e^{i\theta}|100\rangle+\lambda_{2}|101\rangle+\lambda_{3}|110\rangle+\lambda_{4}|111\rangle$, where $\lambda_{i}\in{\mathbb{R}},\theta\in{[0,\pi]},\sum_{i}\lambda_{i}^{2}=1 $\cite{acin}. This class includes the GHZ class. The W class can be obtained by putting $\lambda_{4}=0$. For the general state $|\psi_{ABC}\rangle$, the reduced density matrix $\rho_{AB}=Tr_{C}|\psi_{ABC}\rangle\langle\psi_{ABC}|$ has the form \[ \rho_{AB}= \begin{bmatrix} \lambda_{0}^{2} & 0 & \lambda_{0}\lambda_{1}e^{-i\theta} & \lambda_{0}\lambda_{3}\\ 0 & 0 & 0 & 0 \\ \lambda_{0}\lambda_{1}e^{i\theta} & 0& \lambda_{1}^{2}+\lambda_{2}^{2} & \lambda_{1}\lambda_{3}e^{i\theta}+\lambda_{2}\lambda_{4} \\ \lambda_{0}\lambda_{3} & 0 & \lambda_{1}\lambda_{3}e^{-i\theta}+\lambda_{2}\lambda_{4} & \lambda_{3}^{2}+\lambda_{4}^{2} \end{bmatrix} \] where the correlation matrix $ T=(t_{ij})$ is obtained from the relation $t_{ij}=tr (\rho_{AB} \frac{\sigma_{i}}{\sqrt{2}}\otimes{\frac{\sigma_{j}}{\sqrt{2}}}),i,j=1,2,3$; the $\sigma_{i}$'s being the Pauli matrices: \[ T= \begin{bmatrix} \lambda_{0}\lambda_{3} & 0 & \lambda_{0}\lambda_{1}\cos{\theta} \\ 0 & -\lambda_{0}\lambda_{3} & -\lambda_{0}\lambda_{1}\sin{\theta} \\ -\lambda_{1}\lambda_{3}\cos{\theta}-\lambda_{2}\lambda_{4} & -\lambda_{1}\lambda_{3}\sin{\theta}& 0.5-\lambda_{1}^{2}-\lambda_{3}^{2} \\ \end{bmatrix} \] The other reduced density matrix $\rho_{AC}$ and its corresponding correlation matrix can be written from the expressions for $\rho_{AB}$ and $T$ by interchanging $\lambda_{2}$ and $\lambda_{3}$.
The coherent vector for both reduced density matrices is $\textbf{x}=(\lambda_{0}\lambda_{1}\cos{\theta},-\lambda_{0}\lambda_{1}\sin{\theta},\lambda_{0}^{2}-0.5)^{t}$.\\ Clearly, $\parallel \textbf{x} \parallel=0$ iff $\lambda_{0}^{2}=0.5,\lambda_{1}^{2}=0$. In the case $\parallel \textbf{x} \parallel=0$ we have \begin{eqnarray} N(\rho_{AB})&=&2a+c-\min\{a,\frac{1}{2}(a+c-\sqrt{(a-c)^{2}+4b^{2}})\}\\ N(\rho_{AC})&=&2g+k-\min\{g,\frac{1}{2}(g+k-\sqrt{(g-k)^{2}+4b^{2}})\} \end{eqnarray} \begin{eqnarray} N(\rho_{A|BC})&=&0.5 \end{eqnarray} where \begin{eqnarray} a&=&\lambda_{0}^{2}\lambda_{3}^{2} \\ b&=&-\lambda_{0}\lambda_{2}\lambda_{3}\lambda_{4} \\ c&=&\lambda_{2}^{2}\lambda_{4}^{2}+(0.5-\lambda_{3}^{2})^{2} \\ g&=&\lambda_{0}^{2}\lambda_{2}^{2} \\ k&=&\lambda_{3}^{2}\lambda_{4}^{2}+(0.5-\lambda_{2}^{2})^{2} \end{eqnarray} Now, the minimum value of $N(\rho_{AB})+N(\rho_{AC})$ is 0.5. So in the case $ \parallel \textbf{x} \parallel=0$, monogamy is violated for most of the states. For example, we can consider a state with $\lambda_{2}=\lambda_{3}=\lambda_{4}=\sqrt{\frac{1}{6}}$. In this case $N(\rho_{AB})+N(\rho_{AC})=0.516046>0.5$.
Numerical simulation of $10^{6}$ random states (the states are generated by choosing random $\lambda_{i}$'s, $i=0,1,...,4$, with uniform distribution) shows that around $0.02\% $ of the GHZ-class states and around $ 20\% $ of the W-class states satisfy equality in the monogamy relation.\\ When $\parallel \textbf{x} \parallel\neq 0$ we have \begin{eqnarray} N(\rho_{A|BC})&=&2\lambda_{0}^{2}(\lambda_{2}^{2}+\lambda_{3}^{2}+\lambda_{4}^{2}) \end{eqnarray} \begin{eqnarray} N(\rho_{AB})&=&a+b+c-\frac{1}{\|\textbf{x}\|^{2}}\textbf{x}^{t}TT^{t}\textbf{x}\\ N(\rho_{AC})&=&g+f+k-\frac{1}{\|\textbf{x}\|^{2}}\textbf{x}^{t}TT^{t}\textbf{x} \end{eqnarray} where \begin{eqnarray} a&=&\lambda_{0}^{2}\lambda_{3}^{2}+\lambda_{0}^{2}\lambda_{1}^{2}\cos^{2}\theta \\ b&=&\lambda_{0}^{2}\lambda_{3}^{2}+\lambda_{0}^{2}\lambda_{1}^{2}\sin^{2}\theta \\ c&=&(\lambda_{2}\lambda_{4}+\lambda_{1}\lambda_{3}\cos\theta)^{2}+\lambda_{1}^{2}\lambda_{3}^{2}\sin^{2}\theta+(0.5-\lambda_{1}^{2}-\lambda_{3}^{2})^{2}\\ g&=&\lambda_{0}^{2}\lambda_{2}^{2}+\lambda_{0}^{2}\lambda_{1}^{2}\cos^{2}\theta\\ f&=&\lambda_{0}^{2}\lambda_{2}^{2}+\lambda_{0}^{2}\lambda_{1}^{2}\sin^{2}\theta\\ k&=&(\lambda_{3}\lambda_{4}+\lambda_{1}\lambda_{2}\cos\theta)^{2}+\lambda_{1}^{2}\lambda_{2}^{2}\sin^{2}\theta+(0.5-\lambda_{1}^{2}-\lambda_{2}^{2})^{2} \end{eqnarray} Now the maximum value of $ N(\rho_{AB})+N(\rho_{AC})$ is 0.5. Hence three-qubit pure states with $\parallel \textbf{x} \parallel\neq 0$ satisfy the monogamy relation. Specifically, the three-qubit generalized GHZ class of pure states ($\alpha|000\rangle +\beta|111\rangle$) is monogamous in the region $\alpha\neq \beta$, and the monogamy relation holds with equality if $\alpha= \beta(=\frac{1}{\sqrt{2}})$. For the three-qubit generalized W-class states ($\alpha|100\rangle +\beta|010\rangle+\gamma|001\rangle$) we have $N(\rho_{AB})+N(\rho_{AC})=N(\rho_{A|BC})=2|\alpha|^{2}(1-|\alpha|^{2})$. That is, the monogamy relation holds with equality.
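The generalized-GHZ statement above can be verified numerically by evaluating Theorem 2 directly on the reduced states. The partial-trace and Pauli-expansion helpers below are our illustrative sketch, not the authors' code:

```python
import numpy as np

SIG = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def reduced(rho, drop):
    # Partial trace of a 3-qubit density matrix over qubit `drop` (0=A,1=B,2=C).
    r = rho.reshape(2, 2, 2, 2, 2, 2)
    return np.trace(r, axis1=drop, axis2=drop + 3).reshape(4, 4)

def min_2x2(rho):
    # Theorem 2 for a 2x2 system, in the orthonormal basis sigma_i/sqrt(2).
    T = np.array([[0.5 * np.trace(rho @ np.kron(SIG[i], SIG[j])).real
                   for j in range(3)] for i in range(3)])
    x = np.array([0.5 * np.trace(rho @ np.kron(SIG[i], np.eye(2))).real
                  for i in range(3)])
    TTt = T @ T.T
    if np.linalg.norm(x) > 1e-12:
        return np.trace(TTt) - x @ TTt @ x / (x @ x)
    return np.trace(TTt) - np.linalg.eigvalsh(TTt)[0]

def ghz_monogamy(alpha):
    beta = np.sqrt(1 - alpha ** 2)
    psi = np.zeros(8, dtype=complex)
    psi[0], psi[7] = alpha, beta              # alpha|000> + beta|111>
    rho = np.outer(psi, psi.conj())
    n_ab = min_2x2(reduced(rho, 2))           # trace out C -> rho_AB
    n_ac = min_2x2(reduced(rho, 1))           # trace out B -> rho_AC
    n_glob = 2 * alpha ** 2 * (1 - alpha ** 2)  # N(rho_{A|BC}) via Theorem 1
    return n_ab + n_ac, n_glob

print(ghz_monogamy(0.6))             # strict monogamy for alpha != beta
print(ghz_monogamy(1 / np.sqrt(2)))  # equality at alpha = beta = 1/sqrt(2)
```

For $\alpha\neq\beta$ the two reduced MINs vanish, while at $\alpha=\beta$ the coherent vector $\textbf{x}$ vanishes and each reduced MIN equals $1/4$, reproducing the equality case.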
\section{MONOGAMY IN FOUR QUBIT SYSTEM} Consider a four-qubit generic pure state $|\psi_{ABCD}\rangle$ shared between four parties $A, B, C, D$, i.e., from the class $\mathcal{A}$, where \begin{equation} |\psi_{ABCD}\rangle=\sum_{j=0}^{3}z_{j}u_{j}, \quad{\sum_{j=0}^{3}|z_{j}|^{2}=1}. \end{equation} We consider the two-qubit reduced density matrices $\rho_{AB}, \rho_{AC}$ and $\rho_{AD}$ of $\rho=|\psi_{ABCD}\rangle\langle\psi_{ABCD}|$. Each reduced density matrix is of the form \[ \frac{1}{4} \begin{bmatrix} \alpha & 0 & 0 & \beta\\ 0 & \gamma & \delta & 0 \\ 0 & \delta & \gamma & 0 \\ \beta & 0 & 0 & \alpha \end{bmatrix} \] where $\alpha,\beta,\gamma,\delta$ are some suitable functions of $z_{0},z_{1},z_{2},z_{3}$ such that $\alpha,\beta,\gamma,\delta\in{\mathbb{R}}$ and $\alpha+\gamma=2$. We define four quantities $a=z_{0}+z_{1},b=z_{0}-z_{1},c=z_{2}+z_{3},d=z_{2}-z_{3}$.\\ Then for $\rho_{AB}$: \begin{eqnarray} \alpha&=&2(|z_{0}|^{2}+|z_{1}|^{2}) \\ \beta&=&2(|z_{0}|^{2}-|z_{1}|^{2}) \\ \gamma&=&2(|z_{2}|^{2}+|z_{3}|^{2}) \\ \delta&=&2(|z_{2}|^{2}-|z_{3}|^{2}) \end{eqnarray} for $\rho_{AC}$: \begin{eqnarray} \alpha&=&(|a|^{2}+|c|^{2}) \\ \beta&=&2 Re(\overline{a}c) \\ \gamma&=&(|b|^{2}+|d|^{2}) \\ \delta&=&2 Re(\overline{b}d) \end{eqnarray} and for $\rho_{AD}$: \begin{eqnarray} \alpha&=&(|a|^{2}+|d|^{2}) \\ \beta&=&2 Re(\overline{a}d) \\ \gamma&=&(|b|^{2}+|c|^{2}) \\ \delta&=&2 Re(\overline{b}c) \end{eqnarray} The elements of the correlation matrix $T$ can be obtained from $t_{ij}=tr(\rho_{AX} X_{i}\otimes{Y_{j}}), ~X=B,C,D$. The eigenvalues of the matrix $TT^{t}$ are of the form $k(\beta\pm\delta)^{2},k(\alpha-\gamma)^2$ with $k=\frac{1}{16}$. The coherent vector $\textbf{x}=(x_{1},x_{2},x_{3})^{t}$, obtained from the relation $x_{i}=tr(\rho_{AX} X_{i}\otimes{I}), ~X=B,C,D$, is zero in all three cases.
Hence we have (by Theorem 2) \begin{eqnarray} N(\rho_{AX})&=&k[2(\beta^{2}+\delta^{2})+(\alpha-\gamma)^{2}-\lambda_{3}]\quad \text{where}\quad X=B,C,D\\ \lambda_{3}&=&min\{(\beta+\delta)^{2},(\alpha-\gamma)^2,(\beta-\delta)^{2}\} \end{eqnarray} On the other hand, we can write the pure state $|\psi_{ABCD}\rangle$ in the form $\frac{1}{\sqrt{2}}(|0\phi_{0}\rangle+|1\phi_{1}\rangle)$ where $|\phi_{0}\rangle, |\phi_{1}\rangle$ are mutually orthonormal. Since this is a pure state, we have $N(\rho_{A|BCD})=0.5$ (using Theorem 1). Numerical simulation for $10^{6}$ generic states shows that about 66\% of them satisfy the monogamy relation. Hence, in general, the four-qubit generic class is not monogamous w.r.t.\ MIN. So there exist quantum states whose locally shared nonlocality exceeds the amount of globally shared nonlocality.\\ \begin{figure} \caption{\textit{Results for $10^{5}$ random generic states.}} \label{fig:1} \end{figure}\\ However, for the subclasses $\mathcal{M}$ and $\tau_{min}$ we can still check whether the monogamy relation holds or not, and if not, what the amount of violation is. This will be illustrated in the next few results.\\\\ \textbf{Theorem 3}: \textit{Consider a four-qubit system. Then for any pure state $\rho$ belonging to the generic class $\mathcal{A}$ we have $N(\rho_{XY})\leq\sum_{i=1}^{3}\lambda_{i}$, where $\rho_{XY}$ denotes any bipartite reduced density matrix of $\rho$ and the $\lambda_{i}$'s are the eigenvalues of $TT^{t}$, $T$ being the correlation matrix of $\rho_{XY}$}.\\\\ \textit{Proof}: The proof of the theorem is very easy, as it directly follows from Theorem 2, keeping in mind that the eigenvalues of $TT^{t}$} are all positive and the Bloch vector $\textbf{x}=\textbf{0}$ for all bipartite reduced states of any generic state. $\blacksquare$ \\ We will use this theorem in our proofs of the next two results. \\\\ \textbf{Theorem 4}: \textit{Let $\rho_{ABCD}$ be any four-qubit pure state belonging to the class $\mathcal{M}$.
Then $N(\rho_{AB})+N(\rho_{AC})+N(\rho_{AD})\leq \frac{1}{4}$. }\\\\ \textit{Proof}: Let us consider any four-qubit pure state $\rho_{ABCD}=|\psi\rangle_{ABCD}\langle\psi|$ belonging to the class $\mathcal{M}$, i.e., $|\psi\rangle_{ABCD}=\sum_{j=0}^{3}z_{j}u_{j}$, with $\sum_{j=0}^{3}|z_{j}|^{2}=1, \sum_{j=0}^{3}z_{j}^{2}=0$. Then, taking $z_{j}=x_{j}+iy_{j}$ $\forall j=0,1,2,3; x_{j},y_{j}\in{\mathbb{R}}$, and utilizing the above restrictions, we get $\sum_{j=0}^{3}x_{j}^{2}=\sum_{j=0}^{3}y_{j}^{2}=\frac{1}{2}$ and $\sum_{j=0}^{3}x_{j}y_{j}=0$. Applying Theorem 3 together with simple algebraic manipulations using these relations leads to the fact that $N(\rho_{AB})+N(\rho_{AC})+N(\rho_{AD})\leq \frac{1}{4}$.$\blacksquare$ \\\\ \textbf{Theorem 5}: \textit{Let $\rho_{ABCD}$ be any four-qubit pure state belonging to the class $\tau_{min}$. Then $\frac{1}{2}\leq N(\rho_{AB})+N(\rho_{AC})+N(\rho_{AD})\leq \frac{3}{4}$ }.\\\\ \textit{Proof}: Let us consider any four-qubit pure state $\rho_{ABCD}=|\psi\rangle_{ABCD}\langle\psi|$ belonging to the class $\tau_{min}$, i.e., $|\psi\rangle_{ABCD}=\sum_{j=0}^{3}x_{j}u_{j}$, with $\sum_{j=0}^{3}x_{j}^{2}=1 $ and $x_{j}\in\mathbb{R}, j=0,1,2,3 $. Now let us denote the sets of eigenvalues of $TT^{t}$ corresponding to $\rho_{AB},\rho_{AC}$ and $\rho_{AD}$ as $\{\lambda_{1}^{AB},\lambda_{2}^{AB},\lambda_{3}^{AB} \}$, $\{\lambda_{1}^{AC},\lambda_{2}^{AC},\lambda_{3}^{AC} \}$ and $\{\lambda_{1}^{AD},\lambda_{2}^{AD},\lambda_{3}^{AD} \}$ respectively. Without loss of generality we can consider $\lambda_{i}^{AB}\geq \lambda_{j}^{AB}\geq \lambda_{k}^{AB}$ for some distinct $i,j,k\in\{1,2,3\}$. This consideration gives natural connections among the eigenvalues of $\rho_{AC}$ and also $\rho_{AD}$: namely $\lambda_{i}^{AC}\geq \lambda_{j}^{AC}\geq \lambda_{k}^{AC}$ and $\lambda_{i}^{AD}\geq \lambda_{j}^{AD}\geq \lambda_{k}^{AD}$ for the same distinct $i,j,k$.
A straightforward derivation based on the behavior of the eigenvalues then establishes the desired result, i.e., $\frac{1}{2}\leq N(\rho_{AB})+N(\rho_{AC})+N(\rho_{AD})\leq \frac{3}{4}$.$\blacksquare$ \\\\ These two results give us information about the monogamous behavior of MIN in two important classes of the four-qubit system. Since $N(\rho_{A|BCD})={\frac{1}{2}}$ in the whole class $\mathcal{A}$, MIN is monogamous in the subclass $\mathcal{M}$ but polygamous in the other class $\tau_{min}$. Further, all the states that are connected to these classes by LU share the same fate. The four-qubit GHZ state belongs to the class $\tau_{min}$. Hence the GHZ state and its LU-equivalent states are not monogamous w.r.t.\ MIN. Another two states (and obviously their LU-equivalent states), $|L\rangle=\frac{1}{\sqrt{3}}(u_{0}+\omega u_{1}+\omega^{2}u_{2})$ where $\omega=e^{\frac{2i\pi}{3}}$ and $|M\rangle= \frac{i}{\sqrt{2}}u_{0}+\frac{1}{\sqrt{6}}(u_{1}+u_{2}+u_{3})$, which maximize the Tsallis $\alpha$-entropy for different regions of $\alpha$ \cite{gour}, satisfy the monogamy relation of MIN. On the other hand, the four-qubit cluster states satisfy the monogamy relation, as their two-party reduced density matrices are completely mixed (i.e., MIN is zero). For the four-qubit generalized W states $\rho^{W}=|\psi^{W}\rangle\langle\psi^{W}|$ with $|\psi^{W}\rangle=\alpha|1000\rangle+\beta|0100\rangle+\gamma|0010\rangle+\delta|0001\rangle$, where $|\alpha|^{2}+|\beta|^{2}+|\gamma|^{2}+|\delta|^{2}=1$, the monogamy relation holds with equality; i.e., it can easily be shown (as in the three-qubit case) that $N(\rho_{AB}^{W})+N(\rho_{AC}^{W})+N(\rho_{AD}^{W})=N(\rho_{A|BCD}^{W})=2|\alpha|^{2}(1-|\alpha|^{2})$. Hence for this type of states nonlocality shows an additive property with respect to each party.
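The polygamy of the four-qubit GHZ state claimed above is easy to confirm numerically from Theorem 2; note $(|0000\rangle+|1111\rangle)/\sqrt{2}=(u_{0}+u_{1})/\sqrt{2}\in\tau_{min}$. The partial-trace helper below is our illustrative sketch:

```python
import numpy as np

SIG = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def reduced_pair(rho, j):
    # rho_{A, qubit j} of a 4-qubit density matrix: trace out the other
    # two qubits (highest-index first, so axis positions stay valid).
    r = rho.reshape([2] * 8)
    for q in sorted((q for q in range(1, 4) if q != j), reverse=True):
        n = r.ndim // 2
        r = np.trace(r, axis1=q, axis2=q + n)
    return r.reshape(4, 4)

def min_2x2(rho):
    # Theorem 2 for a 2x2 system, in the orthonormal basis sigma_i/sqrt(2).
    T = np.array([[0.5 * np.trace(rho @ np.kron(SIG[i], SIG[j])).real
                   for j in range(3)] for i in range(3)])
    x = np.array([0.5 * np.trace(rho @ np.kron(SIG[i], np.eye(2))).real
                  for i in range(3)])
    TTt = T @ T.T
    if np.linalg.norm(x) > 1e-12:
        return np.trace(TTt) - x @ TTt @ x / (x @ x)
    return np.trace(TTt) - np.linalg.eigvalsh(TTt)[0]

ghz = np.zeros(16, dtype=complex)
ghz[0] = ghz[15] = 1 / np.sqrt(2)            # (|0000> + |1111>)/sqrt(2)
rho = np.outer(ghz, ghz.conj())
mins = [min_2x2(reduced_pair(rho, j)) for j in (1, 2, 3)]
n_global = 0.5   # N(rho_{A|BCD}) = 0.5 for any generic state (Theorem 1)
print(sum(mins), n_global)   # ~0.75 > 0.5: GHZ is polygamous w.r.t. MIN
```

Each two-qubit marginal of the GHZ state is $(|00\rangle\langle00|+|11\rangle\langle11|)/2$, contributing MIN $1/4$; the three marginals together exceed the global value $1/2$, consistent with Theorem 5.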
This result for the generalized W class can be further extended to the $n$-qubit system with the same conclusion.\\ \section{Conclusion} Thus we have explored the monogamy nature of MIN and found certain classes of states on which MIN shows a monogamous nature. Unlike geometric discord\cite{bruss}, MIN can be polygamous for pure states, as revealed in some subclasses of the three- and four-qubit generic classes. On the other hand, the W class seems to satisfy the monogamy relation with equality in the $n$-qubit system. So for the W class MIN becomes additive in terms of sharing between the parties. The monogamous nature of W-class states w.r.t.\ MIN in any dimension indicates a distinguishing feature of this class of states. Thus the existence of monogamy of this type of correlation puts a restriction on the amount of shared nonlocality. The monogamous nature of MIN for these classes of states may be exploited in designing cryptographic protocols. {\bf Acknowledgement.} The author A. Sen acknowledges the financial support from the University Grants Commission, New Delhi, India. \end{document}
\begin{document} \baselineskip 13.75pt \title[ ]{An extension of the Glauberman ZJ-Theorem } \author{{M.Yas\.{I}r} K{\i}zmaz } \address{Department of Mathematics, Bilkent University, 06800 Bilkent, Ankara, Turkey} \email{[email protected]} \subjclass[2010]{20D10, 20D20} \keywords{controlling fusion, ZJ-theorem, $p$-stable groups} \maketitle \begin{abstract} Let $p$ be an odd prime and let $J_o(X)$, $J_r(X)$ and $J_e(X)$ denote the three different versions of Thompson subgroups for a $p$-group $X$. In this article, we first prove an extension of Glauberman's replacement theorem (\cite[Theorem 4.1]{Gla}). Secondly, we prove the following: Let $G$ be a $p$-stable group and $P\in Syl_p(G)$. Suppose that $C_G(O_{p}(G))\leq O_{p}(G)$. If $D$ is a strongly closed subgroup in $P$, then $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ are normal subgroups of $G$. Thirdly, we show the following: Let $G$ be a $\text{Qd}(p)$-free group and $P\in Syl_p(G)$. If $D$ is a strongly closed subgroup in $P$, then the normalizers of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ control strong $G$-fusion in $P$. We also prove a similar result for a $p$-stable and $p$-constrained group. Lastly, we give a $p$-nilpotency criterion, which is an extension of the Glauberman-Thompson $p$-nilpotency theorem. \end{abstract} \section{Introduction} Throughout the article, all groups considered are finite. Let $P$ be a $p$-group. For each abelian subgroup $A$ of $P$, let $m(A)$ be the rank of $A$, and let $d_r(P)$ be the maximum of the numbers $m(A)$. Similarly, $d_o(P)$ is defined to be the maximum of the orders of abelian subgroups of $P$, and $d_e(P)$ is defined to be the maximum of the orders of elementary abelian subgroups of $P$.
Define $$\mathcal A_r(P)=\{A\leq P \mid A \textit{ is abelian} \textit{ and } m(A)=d_r(P) \}, $$ $$\mathcal A_o(P)=\{A\leq P \mid A \textit{ is abelian} \textit{ and } |A|=d_o(P) \}$$ and $$\mathcal A_e(P)=\{A\leq P \mid A \textit{ is elementary abelian} \textit{ and } |A|=d_e(P) \}.$$ Now we are ready to define the three different versions of the Thompson subgroup: $J_r(P)$, $J_o(P)$ and $J_e(P)$ are the subgroups of $P$ generated by all members of $\mathcal A_r(P), \mathcal A_o(P)$ and $\mathcal A_e(P)$, respectively. Thompson proved his normal complement theorem according to $J_r(P)$ in \cite{Thmp}, which states that ``if $N_G(J_r(P))$ and $C_G(Z(P))$ are both $p$-nilpotent and $p$ is odd, then $G$ is $p$-nilpotent''. Later, Thompson introduced ``a replacement theorem'' and a subgroup similar to $J_o(P)$ in \cite{Thmp2}. Due to the compatibility of the replacement theorem with $J_o(P)$, Glauberman worked with $J_o(P)$; indeed, he extended the replacement theorem of Thompson for odd primes (see \cite[Theorem 4.1]{Gla}). We should note that Glauberman's replacement theorem is one of the important ingredients of the proof of his ZJ-theorem.\\\\ \textbf{Theorem (Glauberman).} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Suppose that $C_G(O_{p}(G))\leq O_{p}(G)$. Then $Z(J_o(P))$ is a characteristic subgroup of $G$.\\\\ There are many important consequences of his theorem. One of the striking ones is that $N_G(Z(J_o(P)))$ controls strong $G$-fusion in $P$ when $G$ does not involve a subquotient isomorphic to $Q_d(p)$ (see \cite[Theorem B]{Gla}). Another consequence of his theorem is an improvement of the Thompson normal complement theorem. This result says that if $N_G(Z(J_o(P)))$ is $p$-nilpotent and $p$ is odd, then $G$ is $p$-nilpotent. There is still active research on the properties of Thompson subgroups. A recent article \cite{Pt} describes algorithms for determining $J_e(P)$ and $J_o(P)$.
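As a toy illustration of these definitions, the Thompson subgroups can be computed by brute force for a small $p$-group. The sketch below models the extraspecial group of order $27$ and exponent $3$ as triples over $GF(3)$ (our encoding, not from the paper); there $d_e(P)=9$, the members of $\mathcal A_e(P)$ are the four maximal abelian subgroups, and they generate $J_e(P)=P$. Since the exponent is $3$, we also have $J_e(P)=J_o(P)=J_r(P)$ here.

```python
from itertools import product

p = 3  # Heisenberg group of order p^3 and exponent p, elements (a, b, c)
def mul(x, y):
    a, b, c = x
    d, e, f = y
    return ((a + d) % p, (b + e) % p, (c + f + a * e) % p)

ID = (0, 0, 0)
elems = list(product(range(p), repeat=3))

def span2(x, y):
    # Subgroup generated by two commuting elements (abelian, exponent p).
    powers = lambda g: [ID, g, mul(g, g)]
    return frozenset(mul(u, v) for u in powers(x) for v in powers(y))

# Members of A_e(P): elementary abelian subgroups of maximal order d_e(P) = 9
# (P is nonabelian, so no abelian subgroup of order 27 exists).
max_elem_ab = {span2(x, y) for x in elems for y in elems
               if mul(x, y) == mul(y, x) and len(span2(x, y)) == 9}

# J_e(P): closure of the union of all members of A_e(P) under multiplication.
J = set().union(*max_elem_ab)
while True:
    new = {mul(x, y) for x in J for y in J} - J
    if not new:
        break
    J |= new

print(len(max_elem_ab), len(J))   # 4 subgroups of order 9; J_e(P) = P, order 27
```

Every abelian subgroup of order $9$ here contains the center and corresponds to one of the four lines in $P/Z(P)\cong C_3\times C_3$, which is why exactly four subgroups appear.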
We also refer to \cite{Pt} and \cite{Kho} for more extensive discussions of the literature and of replacement theorems, which we do not state here. It deserves to be mentioned separately that Glauberman obtained remarkably more general versions of the Thompson replacement theorem in his later works (see \cite{Gla4} and \cite{Gla3}). We should also note that even if \cite[Theorem 1]{Pt} is attributed to the Thompson replacement theorem \cite{Thmp} in \cite{Pt}, it seems that the correct reference is the Isaacs replacement theorem (see \cite{Isc2}). In \cite{Crv}, the ZJ-theorem is given according to $J_e(P)$ (see \cite[Theorem 1.21, Definition 1.16]{Crv}). Although it might be natural to think that the Glauberman ZJ-theorem is also correct for ``$J_e(P)$ and $J_r(P)$'', there is no reference verifying this. We should also mention that Isaacs proved the Thompson normal complement theorem according to $J_e(P)$ in his book (see \cite[Chapter 7]{Isc}). However, the ZJ-theorem is not contained in his book. One of the purposes of this article is to generalize the Glauberman replacement theorem (see \cite[Theorem 4.1]{Gla}), which was used by Glauberman in the proof of his ZJ-theorem. We also note that our replacement theorem is an extension of the Isaacs replacement theorem (see \cite{Isc2}) when we consider odd primes. The following is the first main theorem of our article: \begin{theorem}\label{A} Let $G$ be a $p$-group for an odd prime $p$ and $A\leq G$ be abelian. Suppose that $B\leq G$ is of class at most $2$ such that $B'\leq A$, $A\leq N_G(B)$ and $B\nleq N_G(A)$. Then there exists an abelian subgroup $A^*$ of $G$ such that \begin{enumerate}[label=(\alph*)] \item\textit{ $|A|=|A^*|,$} \item \textit{$A\cap B <A^*\cap B,$} \item \textit{$A^*\leq N_G(A)\cap A^G,$} \item \textit{ the exponent of $A^*$ divides the exponent of $A$.
Moreover, $rank(A)\leq rank(A^*)$.} \end{enumerate} \end{theorem} One of the main differences from \cite[Theorem 4.1]{Gla} is that we do not take $A$ to be of maximal order. By removing the order condition, we obtain more flexibility in applying the replacement theorem. Since our replacement theorem is easily applicable to all versions of the Thompson subgroup, and there is a gap in the literature as to whether the ZJ-theorem holds for the other versions, we shall prove our extensions of the ZJ-theorem for all the different versions of Thompson subgroups. \begin{definition*}\cite[pg 22]{Gla2}\label{def:p-stable} A group $G$ is called \textbf{$p$-stable} if it satisfies the following condition: Whenever $P$ is a $p$-subgroup of $G$, $g\in N_G(P)$ and $[P,g,g]=1$, then the coset $gC_G(P)$ lies in $O_p(N_G(P)/C_G(P))$. \end{definition*} Let $K$ be a $p$-group. We write $\Omega(K)$ to denote the subgroup $\langle \{x\in K\mid x^{p}=1 \}\rangle$ of $K$. Note that $\text{Qd}(p)$ is defined to be the semidirect product of $\mathbb Z_p \times \mathbb Z_p$ with $SL(2,p)$ by the natural action of $SL(2,p)$ on $\mathbb Z_p \times \mathbb Z_p$. Here is the second main theorem of the article: \begin{theorem}\label{B} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Suppose that $C_G(O_{p}(G))\leq O_{p}(G)$. If $D$ is a strongly closed subgroup in $P$, then $Z(J_o(D)), \ \Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ are normal subgroups of $G$. \end{theorem} We prove Theorem B mainly by following the original proof given by Glauberman, with the help of Theorem A. When we take $D=P$, we obtain that $Z(J_o(P)),\Omega(Z(J_r(P)))$ and $\Omega(Z(J_e(P)))$ are characteristic subgroups of $G$ under the hypothesis of Theorem B. Both $Z(J_r(P))$ and $Z(J_e(P))$ need the extra operation ``$\Omega$'', and it does not seem possible to remove ``$\Omega$'' by the method used here.
\begin{corollary}\label{C} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Suppose that $C_G(O_{p}(G))\leq O_{p}(G)$ and $D$ is a strongly closed subgroup in $P$. If the exponent of $\Omega(D)$ is $p$, then $Z(J_o(\Omega(D))), \ Z(J_r(\Omega(D)))$ and $Z(J_e(\Omega(D)))$ are normal subgroups of $G$. \end{corollary} \begin{proof}[\textbf{Proof}] Suppose that the exponent of $\Omega(D)$ is $p$. Let $U\leq \Omega(D)$ and $U^g\leq P$ for some $g\in G$. Then we see that $U^g \leq D$ as $D$ is strongly closed in $P$. Since the exponent of $U$ is $p$, we get that $U^g\leq \Omega(D)$. Thus $\Omega(D)$ is strongly closed in $P$, and so $Z(J_o(\Omega(D)))\lhd G$ by Theorem B. On the other hand, $J_o(\Omega(D))=J_e(\Omega(D))=J_r(\Omega(D))$ since the exponent of $\Omega(D)$ is $p$. Then the result follows. \end{proof} Note that the condition on the exponent of $\Omega(D)$ is automatically satisfied if $\Omega(D)$ is a regular $p$-group, and it is well known that $p$-groups of class at most $p-1$ are regular. Thus, in particular, we may apply Corollary C when $|\Omega(D)|\leq p^p$. One of the advantages of working with $\Omega(D)$ is that $J_x(\Omega(D))$ can be determined more easily than $J_x(D)$ for most $p$-groups, where $x\in \{o,r,e\}$. \begin{definition*}\cite[pg 268]{Gor} A group $G$ is called \textbf{$p$-constrained} if $C_G(U)\leq O_{p',p}(G)$ for a Sylow $p$-subgroup $U$ of $O_{p',p}(G)$. \end{definition*} \begin{theorem}\label{E} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Assume that $N_G(U)$ is $p$-constrained for each nontrivial subgroup $U$ of $P$. If $D$ is a strongly closed subgroup in $P$ then the normalizers of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ control strong $G$-fusion in $P$.
\end{theorem} \begin{remark} In \cite{H}, it is proved, using the classification of finite simple groups, that if $G$ is $p$-stable and $p>3$ then $G$ is $p$-constrained (see Proposition 2.3 in \cite{H}). Thus, the assumption ``$N_G(U)$ is $p$-constrained for each nontrivial subgroup $U$ of $P$'' is automatically satisfied when $p>3$ and $G$ is a $p$-stable group. \end{remark} \begin{theorem}\label{F} Let $p$ be an odd prime, $G$ be a \text{Qd}(p)-free group, and $P\in Syl_p(G)$. If $D$ is a strongly closed subgroup in $P$ then the normalizers of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ control strong $G$-fusion in $P$. \end{theorem} \begin{remark} In Theorem \ref{F}, if we take $D=P$, then the proof of this special case follows from Theorem \ref{B} and \cite[Theorem 6.6]{Gla2}. However, the general case requires some extra work. Indeed, we shall prove Theorem \ref{F} by constructing an appropriate section conjugacy functor depending on $D$ and applying \cite[Theorem 6.6]{Gla2}. \end{remark} The following is an easy corollary of Theorem \ref{F}. \begin{corollary} Let $p$ be an odd prime, $G$ be a $\text{Qd}(p)$-free group, $P\in Syl_p(G)$, and $D$ be a strongly closed subgroup in $P$. If the exponent of $\Omega(D)$ is $p$, then the normalizers of the subgroups $Z(J_o(\Omega(D))),Z(J_r(\Omega(D)))$ and $Z(J_e(\Omega(D)))$ control strong $G$-fusion in $P$. \end{corollary} \begin{proof}[\textbf{Proof}] As in the proof of Corollary \ref{C}, we see that $\Omega(D)$ is strongly closed in $P$ since the exponent of $\Omega(D)$ is $p$. Thus, $J_o(\Omega(D))=J_r(\Omega(D))=J_e(\Omega(D))$ and the result follows by Theorem \ref{F}. \end{proof} Lastly, we state an extension of the Glauberman-Thompson $p$-nilpotency theorem. \begin{theorem}\label{H} Let $p$ be an odd prime, $G$ be a group and $P\in Syl_p(G)$. If $D$ is a strongly closed subgroup in $P$ then $G$ is $p$-nilpotent if the normalizer of one of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ is $p$-nilpotent.
\end{theorem} \section{The proof of Theorem A} We first state the following lemma, which is extracted from the proof of the Glauberman replacement theorem. \begin{lemman}[Glauberman]\label{Glb} Let $p$ be an odd prime and $G$ be a $p$-group. Suppose that $G=BA$ where $B$ is a normal subgroup of $G$ such that $B'\leq Z(G)$ and $A$ is an abelian subgroup of $G$ such that $[B,A,A,A]=1$. Then $[b,A]$ is an abelian subgroup of $G$ for each $b\in B$. \end{lemman} \begin{proof}[\textbf{Proof}] Let $x,y\in A$. Our aim is to show that $[b,x]$ and $[b,y]$ commute. Set $u=[b,y]$. If we apply the Hall--Witt identity to the triple $(b,x^{-1},u)$, we obtain that $$[b,x,u]^{x^{-1}}[x^{-1},u^{-1},b]^{u}[u,b^{-1},x^{-1}]^b=1.$$ Note that the above commutators of weight $3$ lie in the center of $G$ since $B'\leq Z(G)$. Thus we may remove the conjugations in the above equation. Moreover, $[u,b^{-1},x^{-1}]=1$ as $[u,b^{-1}]\in B'$. Thus we obtain that $[b,x,u][x^{-1},u^{-1},b]=1$, and so $$[b,x,u]=[x^{-1},u^{-1},b]^{-1}.$$ Since $[x^{-1},u^{-1},b]=[[x^{-1},u^{-1}],b]\in Z(G)$, we see that $$[x^{-1},u^{-1},b]^{-1}=[[x^{-1},u^{-1}],b]^{-1}=[[x^{-1},u^{-1}]^{-1},b]=[[u^{-1},x^{-1}],b]$$ by \cite[Lemma 2.2.5(ii)]{Gor}. As a consequence, we get that $[b,x,u]=[[u^{-1},x^{-1}],b]$. By inserting $u=[b,y]$, we obtain $$[[b,x],[b,y]]=[[[b,y]^{-1},x^{-1}],b].$$ Now set $\overline G=G/B'$. Then clearly $\overline B$ is abelian. It follows that $[\overline B,\overline A, \overline A]\leq Z(\overline G)$ since $[B,A,A,A]=1$ and $\overline B$ is abelian. Then we have $$[[b,y]^{-1},x^{-1}]\equiv [[b,y]^{-1},x]^{-1}\equiv [[b,y],x] \pmod{B'} $$ by applying \cite[Lemma 2.2.5(ii)]{Gor} to $\overline G$. Since $x$ and $y$ commute and $\overline {[b,A]}\subseteq \overline B$ is abelian, we see that $$[b,y,x]\equiv [b,x,y] \pmod{B'}$$ by \cite[Lemma 2.2.5(i)]{Gor}. Finally we obtain $$[[b,x],[b,y]]=[[[b,y]^{-1},x^{-1}],b]= [[[b,y],x],b]=[[b,x,y],b].$$ By symmetry, we also have that $[[b,y],[b,x]]=[[b,x,y],b]$.
Then it follows that $[[b,y],[b,x]]=[[b,y],[b,x]]^{-1}$, and so $[[b,x],[b,y]]=1$ since $G$ is of odd order. \end{proof} \begin{lemman}\label{rank lemma} Let $A$ be an abelian $p$-group and $E$ be the largest elementary abelian subgroup of $A$. Then $rank(E)=rank(A)$. \end{lemman} \begin{proof}[\textbf{Proof}] Consider the homomorphism $\phi:A\to A$ given by $\phi(a)=a^p$ for each $a\in A$. Notice that $\phi(A)=\Phi(A)$ and $E=Ker(\phi)$, and so $|A/\Phi(A)|=|E|$. Since both $E$ and $A/\Phi(A)$ are elementary abelian groups of the same order, we get $rank(E)=rank(A/\Phi(A))$. On the other hand, $rank(A/\Phi(A))=rank(A)$ and the result follows. \end{proof} \begin{proof}[\textbf{Proof of Theorem A}] We proceed by induction on the order of $G$. We can certainly assume that $G=AB$. Since $A$ is not normal in $G$, there exists a maximal subgroup $M$ of $G$ such that $A\leq M$. Clearly $A$ normalizes $M\cap B$ as both $M$ and $B$ are normal in $G$. Suppose that $M\cap B$ does not normalize $A$. By induction applied to $M$ (with $M\cap B$ in place of $B$), there exists a subgroup $A^*$ of $M$ such that $A^*$ satisfies the conclusion of the theorem. Then $A^*$ also satisfies $(a),\ (c)$ and $(d)$ in $G$. Moreover, $A\cap (M\cap B)=A\cap B< A^*\cap B$, and so the theorem holds in $G$. Hence, we can assume that $M\cap B\leq N_G(A)$. Notice that $M=M\cap AB=A(M\cap B)$, and so $M=N_G(A)$. Clearly $M\cap B$ is a maximal subgroup of $B$. Then $A$ acts trivially on $B/(M\cap B)$, and so $[B,A]\leq M=N_G(A)$. Thus, we see that $[B,A,A]\leq A$, which yields $[B,A,A,A]=1.$ Moreover, we have that $B'\leq Z(G)$ since $B'\leq A$ and $B'\leq Z(B)$. It follows that $[b,A]$ is abelian for any $b\in B$ by Lemma \ref{Glb}. Let $b\in B\setminus M$. Then $A\neq A^b\lhd M$. Set $H=AA^b$ and $Z=A\cap A^b$. Then clearly $H$ is a group and $Z\leq Z(H)$. On the other hand, $H$ is of class at most $2$ since $H/Z$ is abelian. Note that the identity $(xy)^n=x^ny^n[y,x]^{\frac{n(n-1)}{2}}$ holds for all $x,y\in H$, as $H$ is of class at most $2$.
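The class-two power identity just invoked can be checked numerically. The following Python sketch (illustrative only, not part of the argument; with the convention $[x,y]=x^{-1}y^{-1}xy$, under which the identity reads $(xy)^n=x^ny^n[y,x]^{n(n-1)/2}$) verifies it exhaustively in the Heisenberg group modulo $5$, a $p$-group of class exactly $2$:

```python
# Sanity check (illustrative only, not part of the paper's argument): with the
# convention [x, y] = x^{-1} y^{-1} x y, the identity for groups of class at
# most 2 reads  (xy)^n = x^n y^n [y, x]^(n(n-1)/2).
# We verify it in the Heisenberg group mod p = 5: triples (a, b, c) with
#   (a, b, c)*(a', b', c') = (a + a', b + b', c + c' + a*b')  (entries mod p),
# a p-group of class exactly 2 and exponent p.
p = 5

def mul(x, y):
    a, b, c = x
    d, e, f = y
    return ((a + d) % p, (b + e) % p, (c + f + a * e) % p)

def inv(x):
    a, b, c = x
    return ((-a) % p, (-b) % p, (a * b - c) % p)

def power(x, n):
    r = (0, 0, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

def comm(x, y):  # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

G = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]
for x in G:
    for y in G:
        for n in range(1, 2 * p):
            lhs = power(mul(x, y), n)
            rhs = mul(mul(power(x, n), power(y, n)),
                      power(comm(y, x), n * (n - 1) // 2))
            assert lhs == rhs
print("class-2 power identity verified mod", p)
```

In particular, taking $n=p$ here shows that this class-$2$ group of odd order has exponent $p$, mirroring the exponent argument for $H$ above.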
It follows that the exponent of $H$ is the same as the exponent of $A$. Indeed, if $x,y\in H$ have order dividing the exponent $e$ of $A$, then $[y,x]\in H'\leq Z\leq A$ and $e$ is odd, so $(xy)^e=x^ey^e([y,x]^{e})^{(e-1)/2}=1$. Now we shall show that $H\cap B$ is abelian. First we claim that $H\cap B=(A\cap B)[A,b]$. Clearly, we have $[A,b]\subseteq H\cap B$ since $H=AA^b$. It follows that $(A\cap B)[A,b]\subseteq H\cap B$ as $A\cap B \leq H\cap B$. Next we obtain the reverse inclusion. Let $x\in H\cap B$. Then $x=ac^b$ for some $a,c\in A$ such that $ac^b\in B$. Since $B\lhd G$, we see that $[c,b]\in B$, and so $ac\in B$ as $ac[c,b]=ac^b\in B$. It follows that $ac\in A\cap B$ and $x=ac[c,b]\in (A\cap B)[A,b]$, which proves the equality. Since $B'\leq A$, we see that $A\cap B\lhd B$. Then $A\cap B=A^b\cap B$ and hence $A\cap B=Z\cap B$. In particular, we see that $A\cap B\leq Z\leq Z(H)$. It follows that $H\cap B=(A\cap B)[A,b]$ is abelian since $[A,b]$ is an abelian subgroup of $H$ and $(A\cap B)\leq Z(H)$. Now set $A^*=(H\cap B)Z$. Note that $A^*$ is abelian as $H\cap B$ is abelian and $Z\leq Z(H)$. Now we shall show that $A^*$ is the desired subgroup. Clearly, the exponent of $A^*$ divides the exponent of $H$, which shows the first part of $(d)$. Note that $A<H$ and $H=H\cap AB=A(H\cap B)$, and so $H\cap B>A\cap B$. It follows that $A^*\cap B\geq H\cap B>A\cap B$, which shows $(b)$. On the other hand, $$A^*\leq H=AA^b\leq M\cap A^G=N_G(A)\cap A^G,$$ which shows $(c)$. It remains to prove $(a)$ and the second part of $(d)$. Since $A^*=(H\cap B)Z$, we have $$|A^*|=\dfrac{|H\cap B||Z|}{|Z\cap B|}=\dfrac{|H\cap B||Z|}{|A\cap B|}.$$ On the other hand, $H=AA^b=A(H\cap B)$. Hence we have $$\dfrac{|A A^b|}{|A^b|}=\dfrac{|A|}{|A\cap A^b|}=\dfrac{|A|}{|Z|}=\dfrac{|H\cap B|}{|A\cap B|}.$$ Thus, we see that $|A|=|A^*|$ as desired. Now let $E$ be the largest elementary abelian subgroup of $A$. We shall observe that $E$ and $A$ enjoy some similar properties. Note that $E\lhd M=N_G(A)$ since $E$ is a characteristic subgroup of $A$. Hence, $EE^b$ is a group. Now set $H_1=EE^b$, $Z_1=E\cap E^b$ and $E^*=(H_1\cap B)Z_1$.
First observe that $Z_1\leq Z(H_1)$, and so $H_1$ is of class at most $2$. It follows that the exponent of $E^*$ is $p$ since $H_1$ is of class at most $2$ and of odd order. Thus, $E^*$ is elementary abelian as $E^*\leq A^*$ and $A^*$ is abelian. Note also that $E\cap B=E\cap (A\cap B)$, and so $E\cap B$ is characteristic in $A\cap B$. Then we see that $E\cap B\lhd B$ as $A\cap B\lhd B$. This also yields that $E\cap B=(E\cap B)^b=E^b\cap B$, and hence $E\cap B=Z_1\cap B$. Lastly, observe that $H_1=EE^b=EE^b\cap EB=E(H_1\cap B)$. Now we can show that $|E|=|E^*|$ by the same method used to show that $|A|=|A^*|$. Then we see that $ rank(A)=rank(E)=rank(E^*)\leq rank(A^*)$ by Lemma \ref{rank lemma}. \end{proof} \section{The proof of Theorem \ref{B}} \begin{lemman}Let $P$ be a $p$-group and $R$ be a subgroup of $P$. If there exists $A\in \mathcal A_{x}(P)$ such that $A\leq R$, then $J_x(R)\leq J_x(P)$ for $x\in \{ o,r,e\}$. Moreover, $J_x(P)=J_x(R)$ if and only if $J_x(P)\subseteq R$ for $x\in \{ o,r,e\}$. \end{lemman} The above lemma is an easy observation and we shall use it without any further reference. \begin{lemman}\cite[Theorem 8.1.3]{Gor} \label{opg} Let $G$ be a $p$-stable group such that $C_G(O_p(G))\leq O_p(G)$. If $P\in Syl_p(G)$ and $A$ is an abelian normal subgroup of $P$ then $A\leq O_p(G)$. \end{lemman} \begin{proof}[\textbf{Proof}] Since $O_p(G)$ normalizes $A$, we see that $[O_p(G),A,A]=1$. Write $C=C_G(O_p(G))$. Then we have $AC/C\leq O_p(G/C)$. Note that $O_p(G/C)=O_p(G)/C $ since $C\leq O_p(G)$. It follows that $A\leq O_p(G)$. \end{proof} \begin{definition}\label{strongly closed set} Let $G$ be a group, $P\in Syl_p(G)$ and $D$ be a nonempty subset of $P$. We say that $D$ is a strongly closed subset in $P$ (with respect to $G$) if for all $U\subseteq D$ and $g\in G$ such that $U^g\subseteq P$, we have $U^g\subseteq D$. \end{definition} \begin{lemman}\label{strogly closed} Let $G$ be a group and $P\in Syl_p(G)$.
Suppose that $D$ is a strongly closed subset in $P$. If $N\lhd G$ and $D\cap N$ is nonempty then $D\cap N$ is also a strongly closed subset in $P$. Moreover, $G=N_G(D\cap N)N$. \end{lemman} \begin{proof}[\textbf{Proof}] Let $Q=P\cap N$ and write $D^*=D\cap N$. Then we see that $Q\in Syl_p(N)$. Let $U\subseteq D^*$ and $g\in G$ such that $U^g\subseteq P$. It follows that $U^g\subseteq D$ as $U\subseteq D$ and $D$ is strongly closed in $P$. Since $N\lhd G$, we see that $U^g\subseteq N$, which yields $U^g\subseteq N\cap D=D^*$. This shows the first part. We already know that $G=N_G(Q)N$ by the Frattini argument. Thus, it is enough to show that $N_G(Q)\leq N_G(D^*)$. Let $x\in N_G(Q)$. Then $(D^*)^x\subseteq Q\leq P$. Since $D^*$ is strongly closed in $P$, we see that $(D^*)^x= D^*$. It follows that $x\in N_G(D^*)$, as desired. \end{proof} \begin{lemman}\label{crucial lemma} Let $P$ be a $p$-group, $p$ be odd, and let $B,N\unlhd P$. Suppose that $B$ is of class at most $2$ and $B'\leq A$ for all $A\in \mathcal A_x(N)$. Then there exists $A\in \mathcal A_x(N)$ such that $B$ normalizes $A$, where $x\in \{o,r,e\}.$ \end{lemman} \begin{proof}[\textbf{Proof}] First suppose that $x=e$. Now choose $A\in \mathcal A_e(N)$ such that $|A\cap B|$ is as large as possible. If $B$ does not normalize $A$ then there exists an abelian subgroup $A^*\leq P$ such that $|A^*|=|A|$, $A^*\leq A^P\cap N_P(A)$, $A^*\cap B>A\cap B$ and the exponent of $A^*$ divides that of $A$ by Theorem A. We first observe that $A^*$ is an elementary abelian subgroup as the exponent of $A$ is $p$. Since $A\leq N\lhd P$, we see that $A^*\leq A^P\leq N$. Hence, $A^*\in \mathcal A_e(N)$, which contradicts the maximality of $|A\cap B|$. Thus $B$ normalizes $A$ as desired. Now suppose that $x=r$ and let $ A\in \mathcal A_r(N)$. Then we apply Theorem A in a similar way and find $A^*\leq N $ with $rank(A^*)\geq rank(A)$. Since the rank of $A$ is the maximum possible in $N$, we see that $A^*\in \mathcal A_r(N)$.
The rest of the argument follows similarly. The case $x=o$ also follows in a similar fashion. \end{proof} \begin{theoremn}\label{maim thm} Let $p$ be an odd prime, $G$ be a $p$-stable group, and $P\in Syl_p(G)$. Let $D$ be a strongly closed subset in $P$ and $B$ be a normal $p$-subgroup of $G$. Write $K=\langle D \rangle$, $Z_o=Z(J_o(K)), \ Z_r=\Omega(Z(J_r(K))) \textit{ and } Z_e=\Omega(Z(J_e(K)))$. If all members of $\mathcal A_x(K)$ are included in the set $D$, then $Z_x\cap B\lhd G$ for each $x\in \{o,r,e\}$. \end{theoremn} \begin{proof}[\textbf{Proof}] Write $J(X)=J_e(X)$ for any $p$-subgroup $X$ and set $Z=Z_e$. We can clearly assume that $B\neq 1$. Let $G$ be a counterexample, and choose $B$ to be the smallest possible normal $p$-subgroup contradicting the theorem. Notice that $K\unlhd P$ as $D$ is a normal subset of $P$, and so $Z\unlhd P$. In particular, $B$ normalizes $Z$. Set $B_1=(Z\cap B)^G$. Clearly $B_1\leq B$. Suppose that $B_1<B$. By our choice of $B$, we get $Z\cap B_1\lhd G$. Since $Z\cap B\leq B_1$, we have $Z\cap B\leq Z\cap B_1\leq Z\cap B$, and hence $Z\cap B= Z\cap B_1$. This contradiction shows that $B=B_1=(Z\cap B)^G$. Clearly $B'<B$, and hence $Z\cap B' \lhd G$ by our choice of $B$. Since $Z$ and $B$ normalize each other, $[Z\cap B,B]\leq Z\cap B'$. Since $B$ and $Z\cap B'$ are both normal subgroups of $G$, we obtain $[(Z\cap B)^g,B]\leq Z\cap B'$ for all $g\in G$. This yields $[(Z\cap B)^G,B]=[B,B]=B'\leq Z\cap B'.$ In particular, we have $B'\leq Z$, and so $[ Z\cap B, B']=1$. It follows that $[B,B']=1$ as $B=(Z\cap B )^G$. As a consequence, we see that $B$ is of class at most $2$. Notice that $Z\leq A$ for all $A\in \mathcal A_e(K)$ due to the fact that $AZ$ is an elementary abelian subgroup of $K$. Thus we see that, in particular, $B'\leq A$ for all $A\in \mathcal A_e(K).$ Let $N$ be the largest normal subgroup of $G$ that normalizes $Z\cap B$. Set $D^*=D\cap N$, which is nonempty by our hypothesis, and write $K^*=\langle D^* \rangle$.
We see that $G=N_G(D^*)N$ by Lemma \ref{strogly closed}, and so $G=N_G(K^*)N$. It follows that $G=N_G(J(K^*))N$ since $J(K^*)$ is a characteristic subgroup of $K^*$. Suppose that $J(K)\leq K^*$. Then we see that $J(K)=J(K^*)$, and hence $Z\cap B$ is normalized by $N_G(J(K^*))$. It follows that $Z\cap B\lhd G$. Thus we may assume that $J(K)\nsubseteq K^*.$ There exists $A\in \mathcal A_e(K)$ such that $B$ normalizes $A$ by Lemma \ref{crucial lemma}. Hence, $[B,A,A]=1$ since $[B,A]\leq A$. Since $G$ is $p$-stable and $B\lhd G$, we have that $AC/C\leq O_p(G/C)$ where $C=C_G(B)$. Note that $C$ normalizes $Z\cap B$, and so $C\leq N$ by the choice of $N$. It follows that $AN/N\leq O_p(G/N).$ Now we claim that $O_p(G/N)=1$. Let $L\lhd G$ such that $L/N=O_p(G/N)$. Then $L=(L\cap P)N$, and hence $L$ normalizes $Z\cap B$ as both $N$ and $L\cap P$ normalize $Z\cap B$. The maximality of $N$ forces $N=L$, which yields that $A\leq N$. Note that $A\subseteq D$ by hypothesis, and so $A\subseteq N\cap D=D^*\subseteq K^*$. We see that $Z\leq A\leq J(K^*)$, and so we have $J(K^*)\leq J(K)$. It follows that $Z \cap B\leq Z\leq \Omega(Z(J(K^*)))$. Set $X=\Omega(Z(J(K^*)))$. Then we see that $G=NN_G(X)$ since $G=NN_G(K^*)$ and $X$ is characteristic in $K^*$. Since $N$ normalizes $Z\cap B$, every conjugate of $Z\cap B$ is obtained via an element of $N_G(X).$ Thus, $B=(Z\cap B)^G=(Z\cap B)^{N_G(X)}\leq X$. Since $J(K)\nsubseteq K^*$, some members of $\mathcal A_e(K)$ do not lie in $ K^*$. Among such members choose $ A_1 \in \mathcal A_e(K)$ such that $|A_1\cap B|$ is as large as possible. Note that $B$ does not normalize $A_1$, since otherwise $A_1\leq K^*$ as in the previous paragraphs. Then there exists $A^*\leq P$ such that $|A^*|=|A_1|$, $A^*\cap B>A_1\cap B$, $A^*\leq A_1^P\cap N_P(A_1)$ and the exponent of $A^*$ divides the exponent of $A_1$ by Theorem A. Since $A_1$ is elementary abelian, we see that $A^*$ is also elementary abelian.
Moreover, $A^*\leq K$ as $A_1^P\leq K\lhd P$. It follows that $A^*\in \mathcal A_e(K)$, and so $A^*\leq K^*$ due to the choice of $A_1$. We see that $XA^*$ is a group and $A^*\in \mathcal A_e(K^*)$, and hence $B\leq X\leq A^*$. It follows that $B\leq A^*\leq N_P(A_1)$, which is the final contradiction. Thus, our proof is complete for $Z_e$. Almost the same proof works for $Z_r$ and $Z_o$ without any difficulty. \end{proof} When we work with $J_o(K)$, we do not need the $\Omega$ operation, due to the fact that $Z(J_o(K))\leq A$ for all $A\in \mathcal A_o(K)$. However, this need not hold for $Z(J_e(K))$ and $Z(J_r(K))$. In these cases the rank conditions force that $\Omega(Z(J_x(K)))\leq A$ for all $ A\in \mathcal A_{x}(K)$ for $x\in \{e,r\}$. This difference makes the use of the $\Omega$ operation necessary for $Z(J_e(K))$ and $Z(J_r(K))$. \begin{proof}[\textbf{Proof of Theorem \ref{B}}] As in our hypothesis, let $G$ be a $p$-stable group such that $C_G(O_p(G))\leq O_p(G)$ and let $D$ be a strongly closed subgroup in $P$. Since the subgroups $Z(J_o(D)), \ \Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ are all abelian normal subgroups of $P$, they must lie in $O_p(G)$ by Lemma \ref{opg}. Note that $D$ is also a strongly closed subset in $P$ and satisfies the hypothesis of Theorem \ref{maim thm}. Then the results follow from Theorem \ref{maim thm}. \end{proof} In this section, we give another application of Theorem \ref{maim thm} by proving the following theorem, which we shall need in the next section. \begin{theoremn}\label{main thm2} Let $p$ be an odd prime, $G$ be a $p$-stable and $p$-constrained group, and $P\in Syl_p(G)$. Let $D$ be a strongly closed subset in $P$. Write $K=\langle D \rangle$, $Z_o=Z(J_o(K)), \ Z_r=\Omega(Z(J_r(K))) \textit{ and } Z_e=\Omega(Z(J_e(K)))$. If all members of $\mathcal A_x(K)$ are included in the set $D$, then the normalizer of $Z_x(K)$ controls strong $G$-fusion in $P$ for each $x\in \{o,r,e\}$.
\end{theoremn} We need the following lemma in the proof of Theorem \ref{main thm2}. \begin{lemman}\cite[Lemma 7.2]{Gla}\label{p-stable} If $G$ is a $p$-stable group, then $G/O_{p'}(G)$ is also $p$-stable. \end{lemman} Since the definition of $p$-stability used here is not the same as that of \cite{Gla}, and \cite[Lemma 7.2]{Gla} also has the extra assumption that $O_p(G)\neq 1$, it is appropriate to give a proof of this lemma here. \begin{proof}[\textbf{Proof}] Write $N=O_{p'}(G)$ and $\overline G=G/N$. Let $V$ be a $p$-subgroup of $\overline G$. Then there exists a $p$-subgroup $U$ of $G$ such that $\overline U=V$. Let $\overline x \in N_{\overline G}(\overline U)$ such that $[\overline U, \overline x, \overline x ]=\overline 1$. Clearly, we can write $\overline x=\overline x_1 \overline x_2$ such that $\overline x_1$ is a $p$-element, $\overline x_2$ is a $p'$-element and $[\overline x_1,\overline x_2]=\overline 1$ for some $x_1,x_2\in G$. It follows that $[\overline U, \overline x_i, \overline x_i ]=\overline 1$ for $i=1,2$. Then we see that $\overline x_2\in C_{\overline G}(\overline U)$ by \cite[Lemma 4.29]{Isc}. Thus, it is enough to show that $\overline x_1 \in O_p(N_{\overline G}(\overline U)/C_{\overline G}(\overline U))$ to finish the proof. Since $\overline x_1$ is a $p$-element of $\overline G$, we have $x_1=sn$ where $n\in N$ and $s$ is a $p$-element of $G$, which yields that $\overline x_1=\overline s$. Then we see that $[UN,s,s]\subseteq N$ and $s\in N_G(UN)$ by the previous paragraph. Note that $U\in Syl_p(UN)$ and $| Syl_p(UN)|$ is a $p'$-number. Consider the action of $\langle s \rangle $ on $Syl_p(UN)$. Then we observe that $s$ normalizes $U^n$ for some $n\in N$. Thus, we get that $[U^n,s,s]\leq U^n\cap N=1$. Note that $\overline U=\overline {U^n}$, and so we may take $U^n=U$ without loss of generality. Let $K\leq N_G(U)$ such that $K/C_G(U)=O_p(N_G(U)/C_G(U))$. Thus we observe that $s\in K$ as $G$ is $p$-stable.
Note that $N_{\overline G}(\overline U)=\overline {N_G(U)}$ and $C_{\overline G}(\overline U)=\overline {C_G(U)}$ by \cite[Lemma 7.7]{Isc}. Hence, we see that $\overline x_1=\overline s \in \overline K$ and $\overline K/\overline {C_G(U)}\leq O_p(\overline {N_G(U)}/\overline {C_G(U)})=O_p(N_{\overline G}(\overline U)/C_{\overline G}(\overline U))$, which completes the proof. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{main thm2}}] Write $\overline G=G/O_{p'}(G)$. Then $\overline G$ is $p$-stable by Lemma \ref{p-stable}. Since $G$ is $p$-constrained, we have $C_{\overline G}(O_{p}(\overline G))\leq O_p(\overline G)$ by \cite[Theorem 1.1(ii)]{Gor}. Note that $Z_x(\overline K)\leq O_p(\overline G)$ by Lemma \ref{opg} for $x\in \{o,e,r\}$. We see that $\overline G$ satisfies the hypotheses of Theorem \ref{maim thm} as $\overline P$ is isomorphic to $P$ and $\overline D$ is the desired strongly closed set in $\overline P$. It follows that $Z_x(\overline K)\lhd \overline G$ by Theorem \ref{maim thm}, and so we get $G=O_{p'}(G)N_G(Z_x(K))$ for $x\in \{o,e,r\}$. Hence, $N_G(Z_x(K))$ controls strong $G$-fusion in $P$ by \cite[Lemma 7.1]{Gla} for $x\in \{o,e,r\}$. \end{proof} \section{The Proofs of Theorems \ref{E}, \ref{F} and \ref{H}} \begin{lemman}\label{strongly closed2} Let $P\in Syl_p(G)$ and $D$ be a strongly closed subset in $P$. Let $H\leq G$, $N\lhd G$ and $g\in G$ such that $P^g\cap H\in Syl_p(H)$. Then \begin{enumerate}[label=(\alph*)] \item\textit{ $D^g\cap H$ is strongly closed in $P^g\cap H$ with respect to $H$ if $D^g\cap H$ is nonempty.} \item \textit{$DN/N$ is strongly closed in $PN/N$ with respect to $G/N$.} \end{enumerate} \end{lemman} \begin{proof}[\textbf{Proof}] $(a)$ Let $U\subseteq D^g\cap H$ and $h\in H$ such that $U^h\subseteq P^g\cap H$. Since $U\subseteq D^g$ and $U^h\subseteq P^g$, we see that $U^h\subseteq D^g$ as $D^g$ is strongly closed in $P^g$ with respect to $G$. Thus, $U^h\subseteq D^g\cap H$ as $U^h\subseteq H$.
$(b)$ Let $U/N\subseteq DN/N$ and suppose that $(U/N)^y\subseteq PN/N$ for some $y\in G$. By an easy argument, we can find $V\subseteq D$ such that $U/N=VN/N$. Then we see that $VN\subseteq DN$ and $(VN)^y=V^yN\subseteq PN$. We need to show that $V^yN\subseteq DN$. Notice that $\langle V^y \rangle=\langle V \rangle^y $ is a $p$-subgroup of $PN$. Since $P\in Syl_p(PN)$, there exists $x\in PN$ such that $V^y\subseteq P^x$. Since $D^x$ is strongly closed in $P^x$ and $V^x\subseteq D^x$, we see that $V^y\subseteq D^x$. Thus, $V^yN\subseteq D^xN$. Write $x=mn$ for $m\in P$ and $n\in N$. Note that $D^x=D^{mn}=D^n$ as $D$ is a normal set in $P$. It follows that $D^xN=D^nN=DN$. Consequently, $V^yN\subseteq DN$ as desired. \end{proof} Let $\mathcal L_p(G)$ be the set of all $p$-subgroups of $G$. A map $W:\mathcal L_p(G)\to \mathcal L_p(G)$ is called \textbf{a conjugacy functor} if the following hold for each $U\in \mathcal L_p(G)$: \begin{enumerate} \item[(i)]\textit{ $W(U)\leq U$}, \item[(ii)] \textit{$W(U)\neq 1 $ unless $U=1$, and} \item[(iii)] \textit{$W(U)^g=W(U^g)$ for all $g\in G$.} \end{enumerate} \textbf{A section of $G$} is a quotient group $H/K$ where $K\unlhd H\leq G$. Let $\mathcal L_p^*(G)$ be the set of all sections of $G$ that are $p$-groups. A map $W:\mathcal L_p^*(G)\to \mathcal L_p^*(G)$ is called \textbf{a section conjugacy functor} if the following hold for each $H/K \in \mathcal L_p^*(G)$: \begin{enumerate} \item[(i)]\textit{ $W(H/K)\leq H/K$}, \item[(ii)] \textit{$W(H/K)\neq 1 $ unless $H/K=1$,} \item[(iii)] \textit{$W(H/K)^g=W(H^g/K^g)$ for all $g\in G$, and} \item[(iv)] Suppose that $N\lhd H$, $N\leq K$ and $K/N$ is a $p'$-group. Let $P/N$ be a Sylow $p$-subgroup of $H/N$ and set $W(P/N)=L/N$. Then $W(H/K)=LK/K$. \end{enumerate} For more information about section conjugacy functors and their properties, we refer to \cite{Gla2}.
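Strong closure in the sense of Definition \ref{strongly closed set} is easy to test by brute force in small groups. The following Python sketch (illustrative only; it uses $p=2$ and $S_4$, so it lies outside the odd-prime setting of the theorems, and all names in it are ours) checks two subsets of a Sylow $2$-subgroup of $S_4$: the Klein four-group, which is strongly closed, and the center of the Sylow subgroup, which is not. Since conjugation acts elementwise, it suffices to test singletons.

```python
# Brute-force illustration (not from the paper): D is strongly closed in P
# (with respect to G) iff for every u in D and g in G with u^g in P we have
# u^g in D.  Permutations of {0,1,2,3} are stored as image tuples.
from itertools import permutations

def mul(p, q):          # apply p first, then q
    return tuple(q[i] for i in p)

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def conj(x, g):         # a conjugate of x by g
    return mul(mul(inv(g), x), g)

G = [tuple(s) for s in permutations(range(4))]          # S_4
e = (0, 1, 2, 3)
r = (1, 2, 3, 0)        # the 4-cycle (0 1 2 3)
s = (2, 1, 0, 3)        # the transposition (0 2)

# Sylow 2-subgroup P = <r, s>, dihedral of order 8, built by closure:
P = {e}
frontier = {e}
while frontier:
    frontier = {mul(x, g) for x in frontier for g in (r, s)} - P
    P |= frontier

def strongly_closed(D):
    return all(conj(u, g) in D for u in D for g in G if conj(u, g) in P)

V = {e, (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}   # Klein four-group, normal in S_4
Z = {e, (2, 3, 0, 1)}                               # center of P

assert len(P) == 8
assert strongly_closed(V)        # V is strongly closed in P
assert not strongly_closed(Z)    # (0 2)(1 3) fuses to (0 1)(2 3), which is in P but not in Z
print(strongly_closed(V), strongly_closed(Z))
```

The second subset fails precisely because an element of $Z(P)$ is conjugate in $G$ to an element of $P$ outside $Z(P)$, which is the kind of fusion obstruction the definition rules out.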
Note that a sufficient condition for $(iii)$ and $(iv)$ is the following: whenever $Q,R \in \mathcal L_p^*(G)$ and $\phi:Q\to R$ is an isomorphism, $\phi (W(Q))=W(R).$ Thus, operations such as $ZJ_x$, $\Omega ZJ_x$ and $J_x$ are section conjugacy functors for $x \in \{o,r,e\}$. \begin{lemman}\label{conjugacy functor} Let $P\in Syl_p(G)$ and $D$ be a strongly closed subset in $P$. Let $W:\mathcal{L}_p(G)\to \mathcal{L}_p(G)$ be a conjugacy functor. For each $p$-subgroup $U$ of $P$ define $$W_D(U)=\begin{cases} W(\langle U\cap D \rangle) & \text{if } \langle U\cap D \rangle\neq 1, \\ W(U) & \text{if } \langle U\cap D \rangle=1, \\ \end{cases}$$ and for all $V\in \mathcal{L}_p(G)$ and $x\in G$ such that $V^x\leq P$ define $W_D(V)=(W_D(V^x))^{x^{-1}}.$ Then the map $W_D:\mathcal{L}_p(G)\to \mathcal{L}_p(G)$ is a conjugacy functor. Moreover, for each $y\in G$, $W_D=W_{D^y}$. \end{lemman} \begin{proof}[\textbf{Proof}] Since $W$ is a conjugacy functor, it is easy to see that $W_D(U)\leq U$ and that $W_D(U)\neq 1$ unless $U=1$ for each $U\in \mathcal{L}_p(G)$ by construction. Now we need to show that $W_D(U)^g=W_D(U^g)$ for all $g\in G$ and $U\in \mathcal{L}_p(G)$, and that $W_D$ is indeed well defined. First suppose that $U,U^g\leq P$ for some $g\in G$. We first show that $W_D(U)^g=W_D(U^g)$ in this special case. Note that $(U\cap D)^g\subseteq U^g\leq P$, and so $(U\cap D)^g\subseteq U^g\cap D$ as $D$ is strongly closed in $P$. On the other hand, $(U^g\cap D)^{g^{-1}}\subseteq U \leq P$, and so $(U^g\cap D)^{g^{-1}}\subseteq U\cap D$ as $D$ is strongly closed in $P$. Combining the two inclusions, we obtain that $(U\cap D)^g= U^g\cap D.$ Now if $\langle U\cap D \rangle=1$ then $\langle U^g\cap D \rangle=1$ and $W_D(U)^g=W(U)^g=W(U^g)=W_D(U^g)$. The second equality holds as $W$ is a conjugacy functor.
On the other hand, we get $W_D(U)^g=W(\langle U\cap D \rangle)^g=W(\langle U\cap D \rangle^g)=W(\langle U^g\cap D \rangle )=W_D(U^g)$ when $\langle U\cap D \rangle \neq 1.$ Now let $V\in \mathcal{L}_p(G)$ and $x,y\in G$ such that $V^x,V^y\leq P$. Then by setting $U=V^x$ and $g=x^{-1}y$, we have $U^g=V^y$ and $W_D(U)^g=W_D(U^g)$ by the previous paragraph. It follows that $W_D(V^y)=W_D(V^x)^{x^{-1}y}$. Then $W_D(V^y)^{y^{-1}}=W_D(V^x)^{x^{-1}}$, and so $W_D$ is well defined. Now let $z\in G$. Then $W_D(V^z)=W_D(V^x)^{x^{-1}z}=(W_D(V^x)^{x^{-1}})^z=W_D(V)^z$, which completes the proof of the first part. Lastly, since $D^y$ is strongly closed in $P^y$, $W_{D^y}$ is a conjugacy functor for $y\in G$ by the first part. It is routine to check that they are indeed the same function. \end{proof} \begin{remark}\label{emtpty remark} Although a strongly closed set is nonempty according to Definition \ref{strongly closed set}, if we take $D=\emptyset$ in the previous lemma, we get $W_{\emptyset}(U)=W(U)$. Thus, we set $W_{\emptyset}=W$ for any conjugacy functor $W$. \end{remark} \begin{lemman}\label{section conjugacy functor} Let $P\in Syl_p(G)$ and $D$ be a strongly closed subset in $P$. Let $K\unlhd H\leq G$, $N\lhd G$ and $g\in G$ such that $P^g\cap H\in Syl_p(H)$. Let $W:\mathcal{L}^*_p(G)\to \mathcal{L}^*_p(G)$ be a section conjugacy functor. Then the following hold: \begin{enumerate}[label=(\alph*)] \item\textit{ $W_{D^g\cap H}:\mathcal{L}_p(H)\to \mathcal{L}_p(H)$ is a conjugacy functor.} \item \textit{$W_{DN/N}:\mathcal{L}_p(G/N)\to \mathcal{L}_p(G/N)$ is a conjugacy functor.} \item \textit{$W_{(D^g\cap H)K/K}:\mathcal{L}_p(H/K)\to \mathcal{L}_p(H/K)$ is a conjugacy functor.} \end{enumerate} \end{lemman} \begin{proof}[\textbf{Proof}] $(a)$ By taking the restriction of $W$ to the section $H/1$, we obtain a conjugacy functor $W:\mathcal{L}_p(H)\to \mathcal{L}_p(H)$.
By Lemma \ref{strongly closed2} (a), $ D^g\cap H$ is strongly closed in $H\cap P^g$ with respect to $H$ if $D^g\cap H$ is nonempty. Then the result follows from Lemma \ref{conjugacy functor} and Remark \ref{emtpty remark}. Similarly, $(b)$ follows by Lemma \ref{strongly closed2} (b) and Lemma \ref{conjugacy functor}. Part $(c)$ also follows in a similar fashion. \end{proof} \begin{remark}\label{resrection} It should be noted that we only need $W$ to be a conjugacy functor to establish Lemma \ref{section conjugacy functor} (a). Now assume the hypotheses and notation of Lemma \ref{section conjugacy functor}. Let $U\in \mathcal L_p(H)$. Then it is easy to see that $W_{D^g}(U)=W_{D^g\cap H}(U)$ by their definitions, and so $W_D(U)=W_{D^g\cap H}(U)$ by Lemma \ref{conjugacy functor}. Thus, the map $W_{D^g\cap H}$ is equal to the restriction of $W_D$ to $\mathcal L_p(H)$. \end{remark} \begin{lemman}\label{final} Assume the hypothesis and notation of Lemma \ref{section conjugacy functor}. We define $W^*_D:\mathcal{L}^*_p(G)\to \mathcal{L}^*_p(G)$ by setting $W^*_D(H/K)=W_{(D^g\cap H)K/K}(H/K)$ for each $H/K\in \mathcal{L}^*_p(G)$. Then $$W^*_D(H/K)=\begin{cases} W(\langle D^g\cap H \rangle K/ K) & \text{if } D^g\cap H\nsubseteq K, \\ W(H/K) & \text{if } D^g\cap H\subseteq K. \\ \end{cases} $$ Moreover, $W^*_D$ is a section conjugacy functor. \end{lemman} \begin{proof}[\textbf{Proof}] First suppose that $D^g\cap H\subseteq K$. Then $H/K\cap (D^g\cap H)K/K=K/K$, and so $W_{(D^g\cap H)K/K}(H/K)=W(H/K)$. If $D^g\cap H\nsubseteq K$ then $H/K\cap (D^g\cap H)K/K\neq K/K$, and so $W_{(D^g\cap H)K/K}(H/K)=W(\langle D^g\cap H \rangle K/ K)$ by its definition, which shows the first part. Note that $W^*_D(H/K)\leq H/K$ and $W^*_D(H/K)\neq 1$ unless $H/K=1$ by Lemma \ref{section conjugacy functor}(c). Now, we need to show that $(iii)$ and $(iv)$ in the definition of a section conjugacy functor hold. Pick $x\in G$.
Since $(D^g\cap H)K/K$ is a strongly closed subset in $(P^g\cap H)K/K$, $(D^g\cap H)^xK^x/K^x$ is a strongly closed subset in $(P^g\cap H)^xK^x/K^x$. Moreover, $D^g\cap H\subseteq K$ if and only if $D^{gx}\cap H^x\subseteq K^x$. Thus, if $W^*_D(H/K)=W(H/K)$, then $W^*_D(H^x/K^x)=W(H^x/K^x)$. It follows that $$W^*_D(H^x/K^x)=W(H^x/K^x)=W(H/K)^x=W^*_D(H/K)^x.$$ The second equality holds as $W$ is a section conjugacy functor. Now if $W^*_D(H/K)=W(\langle D^g\cap H \rangle K/ K)$ then $$W^*_D(H^x/K^x)=W(\langle D^{gx}\cap H^x \rangle K^x/ K^x)=W((\langle D^g\cap H \rangle K/ K)^x)=W^*_D(H/K)^x.$$ The last equality holds as $W$ is a section conjugacy functor. Thus we see that $(iii)$ is satisfied. Now let $N\lhd H$ such that $N\leq K$ and $K/N$ is a $p'$-group. Let $X/N$ be a Sylow $p$-subgroup of $H/N$. We need to show that if $W^*_D(X/N)=L/N$ then $W^*_D(H/K)=LK/K$. Now pick $h\in H$ such that $(X/N)^h\supseteq (D^g\cap H)N/N$. By part $(iii)$, we have $W^*_D(X/N)^h=L^h/N^h=L^h/N$. If we could show that $W^*_D(H/K)=L^hK/K$, we could conclude that $$W^*_D(H/K)=W^*_D(H/K)^{h^{-1}}=(L^hK/K)^{h^{-1}}=LK/K$$ by part $(iii)$. Thus, we see that it is enough to show the claim for $(X/N)^h$, and so we may simply assume that $(D^g\cap H)N/N \subseteq X/N$. Clearly $\langle D^g\cap H\rangle$ is a $p$-group. Since $K/N$ is a $p'$-group, we see that $D^g\cap H\subseteq K$ if and only if $D^g\cap H\subseteq N$. Thus, if $W^*_D(H/K)=W(H/K)$ then $W^*_D(X/N)=W(X/N)$. It follows that $W^*_D(H/K)=LK/K$ as $W$ is a section conjugacy functor. Assume that $D^g\cap H\nsubseteq K$. Then $W^*_D(H/K)=W(\langle D^g\cap H \rangle K/ K)$ and $W^*_D(X/N)=W(\langle D^g\cap H \rangle N/ N)=L/N$. Now write $H^*=\langle D^g\cap H\rangle K$ and $P^*=\langle D^g\cap H\rangle N$. Observe that $P^*/N\in Syl_p(H^*/N)$ and recall that $K/N$ is a $p'$-group. Since $W$ is a section conjugacy functor and $W(P^*/N)=L/N$, we get $W(H^*/K)=LK/K$. Then the result follows.
\end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{E}}] Let $p$ be an odd prime, $G$ be a $p$-stable group and $P\in Syl_p(G)$. Suppose that $D$ is a strongly closed subgroup in $P$. Let $H$ be a $p$-constrained subgroup of $G$ and $g\in G$ such that $P^g\cap H\in Syl_p(H)$. Since each $p$-subgroup of $H$ is also a $p$-subgroup of $G$, we see that $H$ is also a $p$-stable group. Now let $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. It follows that $W_{D^g\cap H}$ is a conjugacy functor by Lemma \ref{section conjugacy functor}(a). Note that $W_{D^g\cap H}(P^g\cap H)\in \{W(D^g\cap H), W(P^g\cap H)\}$, and so $N_H(W_{D^g\cap H}(P^g\cap H))$ controls strong $H$-fusion in $P^g\cap H$ by Theorem \ref{main thm2} in both cases. Note also that $W_{D^g\cap H}(P^g\cap H)=W_D(P^g\cap H)$ by Remark \ref{resrection}. Now assume that $N_G(U)$ is $p$-constrained for each nontrivial subgroup $U$ of $P$. Fix $U\leq P$ and let $S\in Syl_p(N_G(U))$. Then by the arguments in the first paragraph, we see that the normalizer of $W_D(S)$ in $N_G(U)$ controls strong $N_G(U)$-fusion in $S$, and so we obtain that $N_G(W_D(P))$ controls strong $G$-fusion in $P$ by \cite[Theorem 5.5(i)]{Gla2}. It follows that the normalizers of the subgroups $Z(J_o(D))$, $\Omega(Z(J_r(D)))$ and $\Omega(Z(J_e(D)))$ control strong $G$-fusion in $P$. \end{proof} \begin{lemman}\label{ff} Let $p$ be an odd prime, $G$ be a group, and $P\in Syl_p(G)$. Suppose that $D$ is a strongly closed subgroup in $P$. Let $G^*$ be a section of $G$ such that $G^*$ is $p$-stable and $C_{G^*}(O_p(G^*))\leq O_p(G^*)$. If $S\in Syl_p(G^*)$, then $W^*_D(S)\lhd G^*$ for each $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. \end{lemman} \begin{proof}[\textbf{Proof}] Note that $D$ is also a strongly closed set in $P$. We assume the notation of Lemma \ref{final}. Let $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. Then clearly $W$ is a section conjugacy functor.
It follows that $W^*_D:\mathcal L^*_p(G)\to \mathcal L^*_p(G)$ is a section conjugacy functor by Lemma \ref{final}. Let $G^*=X/K$ be a section of $G$ such that $$C_{G^*}(O_p(G^*))\leq O_p(G^*).$$ Let $H/K\in Syl_p(G^*)$. Then we see that $W^*_D(H/K)=W(H/K)$ if $D^g\cap H\subseteq K$. In this case, $W(H/K)=Z(J_o(H/K)), \ \Omega (Z(J_e(H/K)))$ or $\Omega (Z(J_r(H/K))) $, which are normal subgroups of $G^*$ by Theorem \ref{B}. If $D^g\cap H\nsubseteq K$ then $(D^g\cap H)K/K$ is a strongly closed subgroup in $H/K$ with respect to $G^*$. Write $D^*=(D^g\cap H)K/K$; then $$W^*_D(H/K)=W(D^*)=Z(J_o(D^*)), \ \Omega (Z(J_e(D^*))), \ \text{or} \ \Omega (Z(J_r(D^*))), $$ which are normal subgroups of $G^*$ by Theorem \ref{B}. Thus we see that $W_D^*(H/K)\unlhd G^*$ in all cases. \end{proof} Now we are ready to prove Theorems \ref{F} and \ref{H}. \begin{proof}[\textbf{Proof of Theorem \ref{F}} ] Let $p$ be an odd prime, $G$ be a $\text{Qd}(p)$-free group, and $P\in Syl_p(G)$ as in our hypothesis. Since $G$ does not involve a section isomorphic to $\text{Qd}(p)$, every section of $G$ is $p$-stable by \cite[Proposition 14.7]{Gla2}. Now let $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. Then we have that $W^*_D:\mathcal L^*_p(G)\to \mathcal L^*_p(G)$ is a section conjugacy functor by Lemma \ref{final}. Let $G^*$ be a section of $G$ such that $C_{G^*}(O_p(G^*))\leq O_p(G^*)$ and let $S\in Syl_p(G^*)$. Then we see that $W^*_D(S)\lhd G^*$ by Lemma \ref{ff}. It follows that $N_G(W^*_D(P))$ controls strong $G$-fusion in $P$ by \cite[Theorem 6.6]{Gla2}. We see that $W^*_D(P)=Z(J_o(D)), \ \Omega (Z(J_e(D))), \ \text{or} \ \Omega (Z(J_r(D)))$ according to the choice of $W$, which completes the proof. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{H}}] Let $W\in \{ZJ_o, \Omega ZJ_e, \Omega ZJ_r \}$. Then $W^*_D:\mathcal L^*_p(G)\to \mathcal L^*_p(G)$ is a section conjugacy functor by Lemma \ref{final}.
Let $G^*$ be a section of $G$ such that $C_{G^*}(O_p(G^*))\leq O_p(G^*)$ and $G^*/O_p(G^*)$ is $p$-nilpotent. Suppose also that $S^*\in Syl_p(G^*) $ is a maximal subgroup of $G^*$. Let $H$ be the normal Hall $p'$-subgroup of $G^*/O_p(G^*)$. Write $S=S^*/O_p(G^*)$. Then $S$ is also maximal in $G^*/O_p(G^*)$ and $S$ acts on $H$ via coprime automorphisms. If $1<U\leq H$ is $S$-invariant then $SU=G^*/O_p(G^*)$ by the maximality of $S$. Since $SH=G^*/O_p(G^*)$ and $S\cap H=1$, we see that $U=H$. Thus, there is no proper nontrivial $S$-invariant subgroup of $H$. On the other hand, we may choose an $S$-invariant Sylow subgroup of $H$ by \cite[Theorem 3.23(a)]{Isc}. This forces $H$ to be a $q$-group for some prime $q$, and so $H'<H$. It follows that $H$ is abelian due to the fact that $H'$ is $S$-invariant. Let $H^*$ be a Hall $p'$-subgroup of $G^*$. Then we see that $H^*O_p(G^*)/O_p(G^*)\cong H^*$. Thus, we observe that Hall $p'$-subgroups of $G^*$ are also abelian. Since $p$ is odd, we see that a Sylow $2$-subgroup of $G^*$ is abelian. This yields that $G^*$ does not involve a section isomorphic to $SL(2,p)$, and so every section of $G^*$ is $p$-stable by \cite[Proposition 14.7]{Gla2}. Then we obtain that $W^*_D(S^*)\lhd G^*$ by Lemma \ref{ff}. It follows that $G$ is $p$-nilpotent by \cite[Theorem 8.7]{Gla2}. \end{proof} \end{document}
\begin{document} \author{ Heinz H.\ Bauschke\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \texttt{[email protected]}.}, ~ Walaa M.\ Moursi\thanks{ Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA and Mansoura University, Faculty of Science, Mathematics Department, Mansoura 35516, Egypt. E-mail: \texttt{[email protected]}.} ~and~ Xianfu Wang\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \texttt{[email protected]}. }} \title{\textsc{Generalized monotone operators\\ and their averaged resolvents}} \date{February 22, 2019} \maketitle \begin{abstract} \noindent The correspondence between the monotonicity of a (possibly) set-valued operator and the firm nonexpansiveness of its resolvent is a key ingredient in the convergence analysis of many optimization algorithms. Firmly nonexpansive operators form a proper subclass of the more general -- but still pleasant from an algorithmic perspective -- class of averaged operators. In this paper, we introduce the new notion of conically nonexpansive operators, which generalize nonexpansive mappings. We characterize averaged operators as being resolvents of comonotone operators under appropriate scaling. As a consequence, we characterize the proximal point mappings associated with hypoconvex functions as cocoercive operators, or equivalently, as displacement mappings of conically nonexpansive operators. Several examples illustrate our analysis and demonstrate the tightness of our results. \end{abstract} {\small \noindent {\bfseries 2010 Mathematics Subject Classification:} {Primary 47H05, 47H09, Secondary 49N15, 90C25. } \noindent {\bfseries Keywords:} averaged operator, cocoercive operator, firmly nonexpansive mapping, hypoconvex function, maximally monotone operator, nonexpansive mapping, proximal operator.
} \section{Introduction} In this paper, we assume that \begin{empheq}[box=\mybluebox]{equation*} \label{T:assmp} \text{$X$ is a real Hilbert space}, \end{empheq} with inner product $\innp{\cdot,\cdot}$ and induced norm $\norm{\cdot}$. Monotone operators form a beautiful class of operators that play a crucial role in modern optimization. This class includes subdifferential operators of proper lower semicontinuous convex functions as well as matrices with positive semidefinite symmetric part. (For detailed discussions on monotone operators and the connection to optimization problems, we refer the reader to \cite{BC2017}, \cite{Borwein50}, \cite{Brezis}, \cite{BurIus}, \cite{Comb96}, \cite{Comb04}, \cite{Mord18}, \cite{Rock98}, \cite{Simons1}, \cite{Simons2}, \cite{Zeidler2a}, \cite{Zeidler2b}, and the references therein.) The correspondence between the maximal monotonicity of an operator and the firm nonexpansiveness of its \emph{resolvent} is of central importance from an algorithmic perspective: to find a critical point of the former, iterate the latter! Indeed, firmly nonexpansive operators belong to the more general and pleasant class of \emph{averaged} operators. Let $x_0\in X$ and let $T\colon X\to X$ be averaged. Thanks to the Krasnosel'ski\u{\i}--Mann iteration (see \cite{krans}, \cite{Mann} and also \cite[Theorem~5.14]{BC2017}), the sequence $(T^n x_0)_{n\in \ensuremath{\mathbb N}}$ converges weakly to a fixed point of $T$. When $T$ is the \emph{proximal mapping} associated with a proper lower semicontinuous convex function $f$, the set of fixed points of $T$ is the set of critical points of $f$; equivalently, the set of minimizers of $f$. In fact, iterating $T$ in this case produces the famous proximal point algorithm, see \cite{Rock76}.
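The proximal point iteration just described can be made concrete on a one-line toy instance. The following example is our illustration (it is not taken from the works cited above) and uses only the notation of this paper:

```latex
% Toy illustration (ours, not from the cited literature): the proximal point
% algorithm for f = (1/2)||.||^2, whose unique minimizer is 0.
\begin{example}
Let $f=\tfrac{1}{2}\normsq{\cdot}$. Then $\partial f=\ensuremath{\operatorname{Id}}$, and the proximal mapping is
\begin{equation*}
T=J_{\partial f}=(\ensuremath{\operatorname{Id}}+\partial f)^{-1}=\tfrac{1}{2}\ensuremath{\operatorname{Id}},
\end{equation*}
which is firmly nonexpansive, hence $\tfrac{1}{2}$-averaged. The proximal point
iterates are $T^n x_0=2^{-n}x_0\to 0$, and $0$ is indeed the unique minimizer of $f$.
\end{example}
```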
\emph{The main goal of this paper is to answer the question: Can we explore a new correspondence between a set-valued operator and its resolvent which generalizes the fundamental correspondence between monotone operators and firmly nonexpansive mappings (see \cref{fact:corres})? Our approach relies on the new notion of \emph{\text{conically nonexpansive}\ }operators as well as the notions of $\rho$-monotonicity (respectively $\rho$-comonotonicity) which, depending on the value of $\rho$, reduce to strong monotonicity, monotonicity or hypomonotonicity (respectively cocoercivity, monotonicity or cohypomonotonicity).} Although some correspondences between a monotone operator $(\rho\geq 0)$ and its resolvent have been established in \cite{BMW2}, our analysis here not only provides more quantifications but also goes beyond monotone operators. We now summarize the three main results of this paper: \begin{itemize} \item[{\bf R1}] \namedlabel{R:1}{\bf R1} We show that, when $\rho>-1$, the resolvent of a $\rho$-monotone operator as well as the resolvent of its inverse are single-valued and have full domain. This allows us to extend the classical theorem by Minty (see \cref{thm:minty}) to this class of operators (see \cref{thm:Minty:type}). \item[{\bf R2}] \namedlabel{R:2}{\bf R2} We characterize \text{conically nonexpansive}\ operators (respectively averaged operators and nonexpansive operators) to be resolvents of $\rho$-comonotone operators with $\rho>-1$ (respectively $\rho>-\tfrac{1}{2}$ and $\rho\ge -\tfrac{1}{2}$) (see \cref{cor:eq:nexp:-0.3} and also \cref{tab:1}).
\item[{\bf R3}] \namedlabel{R:3}{\bf R3} As a consequence of \ref{R:2}, we obtain a novel characterization of the proximal point mapping associated with a \emph{hypoconvex} function\footnote{This is also known as a \emph{weakly convex} function.} (under appropriate scaling of the function) to be a \text{conically nonexpansive}\ mapping, or equivalently, the displacement mapping of a cocoercive operator (see \cref{prop:hypocon:av}). \end{itemize} The remainder of this paper is organized as follows. \cref{sec:mono:comono} is devoted to the study of the properties of $\rho$-monotone and $\rho$-comonotone operators. In \cref{sec:aver}, we provide a characterization of averaged operators as resolvents of $\rho$-comonotone operators. \cref{sec:JA:RA} provides useful correspondences between an operator and its resolvent as well as its reflected resolvent. In \cref{sec:linear}, we focus on $\rho$-monotone and $\rho$-comonotone linear operators. In the final \cref{sec:hypocon}, we establish the connection to hypoconvex functions. The notation we use is standard and follows, e.g., \cite{BC2017} or \cite{Rock98}. \section{$\rho$-monotone and $\rho$-comonotone operators} \label{sec:mono:comono} Let $A\colon X\rras X$. Recall that the \emph{resolvent} of $A$ is $J_A=(\ensuremath{\operatorname{Id}}+A)^{-1} $ and the \emph{reflected resolvent} of $A$ is $R_A=2J_A-\ensuremath{\operatorname{Id}} $, where $\ensuremath{\operatorname{Id}}\colon X\to X\colon x\mapsto x$. The \emph{graph} of $A$ is $\gra A=\menge{(x,u)\in X\times X}{u\in Ax}$. Let $T\colon X\to X$ and let $\alpha \in\left]0,1\right[$. Recall that \begin{enumerate} \item $T$ is \emph{nonexpansive} if $(\forall (x,y)\in X\times X)$ $\norm{Tx-Ty}\le \norm{x-y}$.
\item $T $ is \emph{$\alpha$-averaged} if there exists a nonexpansive operator $N\colon X\to X$ such that $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$; equivalently, $(\forall (x,y)\in X\times X)$ we have \begin{equation} (1-\alpha)\norm{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}^2 \le \alpha(\norm{x-y}^2-\norm{Tx-Ty}^2). \end{equation} \item $T $ is \emph{firmly nonexpansive} if $T$ is $\tfrac{1}{2}$-averaged. Equivalently, $(\forall (x,y)\in X\times X)$ $\normsq{Tx-Ty}+\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\le \normsq{x-y}$. \end{enumerate} We begin this section by stating the following two useful facts. \begin{fact}{\rm (see, e.g., \cite[Theorem~2]{EckBer})} \label{fact:corres} Let $D$ be a nonempty subset of $X$, let $T\colon D\to X$, and set $A=T^{-1}-\ensuremath{\operatorname{Id}}$. Then $T=J_A$. Moreover, the following hold: \begin{enumerate} \item $T$ is firmly nonexpansive if and only if $A$ is monotone. \item $T$ is firmly nonexpansive and $D=X$ if and only if $A$ is maximally monotone. \end{enumerate} \end{fact} \begin{fact}[{\bf Minty's Theorem}]{\rm\cite{Minty} (see also \cite[Theorem~21.1]{BC2017})} \label{thm:minty} Let $A\colon X\rras X$ be monotone. Then \begin{equation} \label{eq:Minty} \gra A=\menge{(J_A x, (\ensuremath{\operatorname{Id}}-J_A)x)}{x\in \ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)}. \end{equation} Moreover, \begin{equation} \label{eq:Minty:2} \text{$A$ is maximally monotone $\siff$ $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)=X$.} \end{equation} \end{fact} \begin{defn} Let $A\colon X\rras X$ and let $\rho\in \ensuremath{\mathbb R}$. Then \begin{enumerate} \item $A$ is \emph{$\rho$-monotone} if $(\forall (x,u)\in \gra A)$ $(\forall (y,v)\in \gra A)$ we have \begin{equation} \label{eq:def:hmon} \innp{x-y,u-v}\ge \rho\normsq{x-y}.
\end{equation} \item $A$ is \emph{maximally $\rho$-monotone} if $A$ is $\rho$-monotone and there is no $\rho$-monotone operator $B\colon X\rras X$ such that $\gra B$ properly contains $\gra A$, i.e., for every $(x,u)\in X\times X$, \begin{equation} (x,u)\in \gra A ~\siff~ (\forall (y,v)\in \gra A) ~\innp{x-y,u-v}\ge \rho\normsq{x-y}. \end{equation} \item $A$ is \emph{\rhmon} if $(\forall (x,u)\in \gra A)$ $(\forall (y,v)\in \gra A)$ we have \begin{equation} \label{eq:def:cohmon} \innp{x-y,u-v}\ge \rho\normsq{u-v}. \end{equation} \item $A$ is \emph{\rhmaxmon} if $A$ is \rhmon\ and there is no \rhmon\ operator $B\colon X\rras X$ such that $\gra B$ properly contains $\gra A$, i.e., for every $(x,u)\in X\times X$, \begin{equation} (x,u)\in \gra A ~\siff~ (\forall (y,v)\in \gra A) ~\innp{x-y,u-v}\ge \rho\normsq{u-v}. \end{equation} \end{enumerate} \end{defn} Some comments are in order. \begin{rem}\ \begin{enumerate} \item When $\rho=0$, both $\rho$-monotonicity of $A$ and $\rho$-comonotonicity of $A$ reduce to the monotonicity of $A$; equivalently to the monotonicity of $A^{-1}$. \item When $\rho< 0$, $\rho$-monotonicity is known as $\rho$-\emph{hypomonotonicity}, see \cite[Example~12.28]{Rock98} and \cite[Definition~6.9.1]{BurIus}. In this case, the $\rho$-comonotonicity is also known as \emph{$\rho$-cohypomonotonicity} (see \cite[Definition~2.2]{CombPenn04}). \item In passing, we point out that when $\rho>0$, $\rho$-monotonicity of $A$ reduces to $\rho$-strong monotonicity of $A$, while $\rho$-comonotonicity of $A$ reduces to $\rho$-cocoercivity\footnote{Let $\beta>0$ and let $T\colon X\to X$. Recall that $T$ is $\beta$-\emph{cocoercive} if $\beta T$ is firmly nonexpansive, i.e., $(\forall (x,y)\in X\times X)$ $\innp{x-y,Tx-Ty}\ge \beta \normsq{Tx-Ty}$. } of $A$.
\end{enumerate} \end{rem} Unlike classical monotonicity, $\rho$-comonotonicity of $A$ is \emph{not} equivalent to $\rho$-comonotonicity of $A^{-1}$. Instead, we have the following correspondences. \begin{lemma} \label{lem:bmon:inv} Let $A\colon X\rras X$ and let $\rho\in \ensuremath{\mathbb R}$. The following are equivalent: \begin{enumerate} \item \label{lem:bmon:inv:i} $A$ is \rhmon. \item \label{lem:bmon:inv:ii} $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ is monotone. \item \label{lem:bmon:inv:iii} $A^{-1}$ is $\rho$-monotone, i.e., $(\forall (x,u)\in \gra A^{-1})$ $(\forall (y,v)\in \gra A^{-1})$ $ \innp{x-y,u-v}\ge \rho \normsq{x-y}. $ \end{enumerate} \end{lemma} \begin{proof} ``\ref{lem:bmon:inv:i}$\RA$\ref{lem:bmon:inv:ii}": Let $\{(x,u),(y,v) \}\subseteq X\times X$. Then $\{(x,u),(y,v) \}\subseteq\gra (A^{-1}-\rho\ensuremath{\operatorname{Id}})$ $\siff$ [$u\in A^{-1}x-\rho x $ and $v\in A^{-1}y-\rho y $] $\siff$ $\{(x, u+\rho x),(y, v+\rho y)\}\subseteq \gra A^{-1}$ $\siff$ $\{( u+\rho x, x),(v+\rho y,y)\}\subseteq \gra A$ $\RA$ $\innp{x-y,u-v+\rho(x-y)}\ge \rho\normsq{x-y}$ $\siff$ $\rho\normsq{x-y}+\innp{x-y,u-v}\ge \rho\normsq{x-y}$ $\siff$ $\innp{u-v,x-y}\ge 0$. ``\ref{lem:bmon:inv:ii}$\RA$\ref{lem:bmon:inv:iii}": Let $\{(x,u),(y,v) \}\subseteq\gra A^{-1}$. Then $\{(x,u-\rho x),(y,v-\rho y) \}\subseteq\gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$. Hence $\innp{x-y, u-v-\rho(x-y)}\ge 0$; equivalently $\innp{x-y, u-v}\ge\rho\normsq{x-y}$. ``\ref{lem:bmon:inv:iii}$\RA$\ref{lem:bmon:inv:i}": Let $\{(x,u),(y,v) \}\subseteq X\times X$. Then $\{(x,u),(y,v) \}\subseteq\gra A$ $\siff$ $\{(u,x),(v,y) \}\subseteq \gra A^{-1}$ $\RA $ $\innp{x-y,u-v}\ge \rho\normsq{u-v}$. \end{proof} \begin{lemma} \label{lem:gra:conn} Let $A\colon X\rras X$ and let $\rho\in \ensuremath{\mathbb R}$. Then the following hold: \begin{enumerate} \item \label{eq:grA:grainv} $\gra A=\menge{(u+\rho x,x)}{(x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})}$.
\item \label{eq:grainv:grA} $\gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})=\menge{(u,x-\rho u)}{(x,u)\in \gra A}$. \end{enumerate} \end{lemma} \begin{proof} \ref{eq:grA:grainv}: Let $(x,u)\in X\times X$. Then $(x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$ $\siff$ $u\in A^{-1}x-\rho x$ $\siff$ $u+\rho x\in A^{-1} x$ $\siff$ $x\in A(u+\rho x)$ $\siff$ $(u+\rho x,x)\in \gra A$. This proves ``$\supseteq$" in \ref{eq:grA:grainv}. The opposite inclusion can be proved similarly. \ref{eq:grainv:grA}: The proof proceeds similarly to that of \ref{eq:grA:grainv}. \end{proof} \begin{lemma} \label{lem:bmax:inv} Let $A\colon X\rras X$ and let $\rho\in \ensuremath{\mathbb R}$. The following are equivalent: \begin{enumerate} \item \label{lem:bmax:inv:i} $A$ is \rhmaxmon. \item \label{lem:bmax:inv:ii} $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ is maximally monotone. \end{enumerate} \end{lemma} \begin{proof} Note that \cref{lem:bmon:inv} implies that $A$ is \rhmon\ $\siff $ $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ is monotone. ``\ref{lem:bmax:inv:i}$\RA$\ref{lem:bmax:inv:ii}": Let $(y,v)\in X\times X$. Then $(y,v)$ is monotonically related to $\gra(A^{-1}-\rho\ensuremath{\operatorname{Id}})$ $\siff$ $(\forall (x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}}))$ $\innp{x-y,u-v}\ge 0$ $\siff$ $(\forall (x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}}))$ $\innp{x-y,u-v}+\rho\normsq{x-y}\ge \rho\normsq{x-y}$ $\siff$ $(\forall (x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}}))$ $\innp{x-y,u+\rho x-(v+\rho y)}\ge\rho\normsq{x-y}$. Because the last inequality holds for all $(x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$, the parametrization of $\gra A$ given in \cref{lem:gra:conn}\ref{eq:grA:grainv} and the \emph{maximal} $\rho$-comonotonicity of $A$ imply that $(v+\rho y, y) \in \gra A$. Therefore, by \cref{lem:gra:conn}\ref{eq:grainv:grA}, $(y,v)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$.
``\ref{lem:bmax:inv:ii}$\RA$\ref{lem:bmax:inv:i}": Let $(y,v)\in X\times X$. Then $(y,v)$ is $\rho$-comonotonically related to $\gra A$ $\siff$ $(\forall (x,u)\in \gra A)$ $\innp{x-y,u-v}\ge \rho \normsq{u-v}$ $\siff$ $(\forall (x,u)\in \gra A)$ $\innp{x-\rho u-(y-\rho v),u-v}\ge 0$. It follows from \cref{lem:gra:conn}\ref{eq:grainv:grA} and the \emph{maximal} monotonicity of $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ that $(v, y-\rho v)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$, equivalently, using \cref{lem:gra:conn}\ref{eq:grA:grainv}, $(y,v)\in \gra A$. \end{proof} \begin{rem} Note that when $\rho<0$, the (maximal) monotonicity of $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ is equivalent to the (maximal) monotonicity of the Yosida approximation $(A^{-1}-\rho\ensuremath{\operatorname{Id}})^{-1}$. Such a characterization is presented in \cite[Proposition~6.9.3]{BurIus}. \end{rem} \begin{prop} \label{prop:surject} Let $A\colon X\rras X$ be \rhmaxmon\ where $\rho>-1$. Then $\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1})=X$. \end{prop} \begin{proof} By \cref{lem:bmax:inv}, $A^{-1}-\rho \ensuremath{\operatorname{Id}}$ is maximally monotone. Consequently, because $1+\rho>0$, the operator $\tfrac{1}{1+\rho}(A^{-1}-\rho\ensuremath{\operatorname{Id}})$ is maximally monotone. Applying \cref{eq:Minty:2} to $\tfrac{1}{1+\rho}(A^{-1}-\rho\ensuremath{\operatorname{Id}})$ we have $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A^{-1})=\ensuremath{\operatorname{ran}} ((1+\rho)\ensuremath{\operatorname{Id}}+(A^{-1}-\rho \ensuremath{\operatorname{Id}})) =(1+\rho)\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+\tfrac{1}{1+\rho}(A^{-1}-\rho\ensuremath{\operatorname{Id}}))=(1+\rho)X=X$. \end{proof} \begin{prop} \label{prop:gen:min} Let $A\colon X\rras X$.
Then the following hold: \begin{enumerate} \item \label{prop:gen:min:i} $J_{A^{-1}}=\ensuremath{\operatorname{Id}}-J_A.$ \item \label{prop:gen:min:ii} $\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1})=\ensuremath{\operatorname{dom}}(\ensuremath{\operatorname{Id}}-J_A)=\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)$. \end{enumerate} \end{prop} \begin{proof} \ref{prop:gen:min:i}: This follows from \cite[Proposition~23.7(ii)~and~Definition~23.1]{BC2017}. \ref{prop:gen:min:ii}: Using \ref{prop:gen:min:i}, we have $\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1}) =\ensuremath{\operatorname{dom}} (\ensuremath{\operatorname{Id}}+A^{-1})^{-1} =\ensuremath{\operatorname{dom}} J_{A^{-1}} =\ensuremath{\operatorname{dom}} (\ensuremath{\operatorname{Id}}-J_A) =(\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{Id}})\cap (\ensuremath{\operatorname{dom}} J_A) =\ensuremath{\operatorname{dom}} J_A =\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)$. \end{proof} \begin{corollary}[\bf{surjectivity of $\ensuremath{\operatorname{Id}}+A$ and $\ensuremath{\operatorname{Id}}+A^{-1}$}] \label{cor:surj} Let $A\colon X\rras X$ be \rhmaxmon\ where $\rho>-1$. Then \begin{equation} \ensuremath{\operatorname{dom}} J_A=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)=X, \end{equation} and \begin{equation} \ensuremath{\operatorname{dom}} (\ensuremath{\operatorname{Id}}-J_A)=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1})=X. \end{equation} \end{corollary} \begin{proof} Combine \cref{prop:surject} and \cref{prop:gen:min}\ref{prop:gen:min:i}\&\ref{prop:gen:min:ii}. \end{proof} \begin{prop}[\bf{single-valuedness of the resolvent}] \label{prop:s:v} Let $A\colon X\rras X$ be \rhmon\ where $\rho>-1$. Then $J_A=(\ensuremath{\operatorname{Id}}+A)^{-1}$ and $J_{A^{-1}}=\ensuremath{\operatorname{Id}}-J_A$ are at most single-valued.
\end{prop} \begin{proof} Let $x\in \ensuremath{\operatorname{dom}} J_A=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)$ and let $(u,v)\in X\times X$. Then $\{u,v\}\subseteq J_A x$ $\siff$ [$x-u\in Au$ and $x-v\in Av$] $\RA$ $\innp{(x-u)-(x-v),u-v}\ge \rho\normsq{u-v}$ $\siff$ $-\normsq{u-v}\ge \rho\normsq{u-v}$. Since $\rho>-1$, the last inequality implies that $u=v$. Now combine with \cref{prop:gen:min}\ref{prop:gen:min:i}. \end{proof} \begin{corollary}[{See also \cite[Proposition~3.4]{PhanDao18}}] Let $A\colon X\rras X$ be \rhmaxmon\ where $\rho>-1$. Then $J_A=(\ensuremath{\operatorname{Id}}+A)^{-1}$ and $J_{A^{-1}}=\ensuremath{\operatorname{Id}}-J_A$ are single-valued and $\ensuremath{\operatorname{dom}} J_A=\ensuremath{\operatorname{dom}} J_{A^{-1}}=X$. \end{corollary} In \cref{ex:sv:fr:fail} below, we illustrate that the assumption that $\rho >-1$ is critical in the conclusion of \cref{cor:surj} and \cref{prop:s:v}. \begin{example} \label{ex:sv:fr:fail} Suppose that $X\neq \{0\}$. Let $C$ be a nonempty closed convex subset of $X$, let $r\in \ensuremath{\mathbb R}_+$, set $B=-\ensuremath{\operatorname{Id}}-rP_C$, set $A=B^{-1}$ and set $\rho=-(1+r)\le -1$. Then the following hold: \begin{enumerate} \item \label{ex:sv:fr:fail:i} $B-\rho \ensuremath{\operatorname{Id}} $ is maximally monotone. \item \label{ex:sv:fr:fail:ii} $A$ is \rhmaxmon. \item \label{ex:sv:fr:fail:iii} $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1})=(\rho+1)C=-rC$. \item \label{ex:sv:fr:fail:iii:b} $\ensuremath{\operatorname{Id}}+A$ is surjective $\siff$ $[C=X \text{~and~}r>0]$. \item \label{ex:sv:fr:fail:iv} $J_A$ is at most single-valued $\siff$ $J_{A^{-1}}$ is at most single-valued $\siff$ $[C=X \text{~and~}r>0]$.
\end{enumerate} \end{example} \begin{proof} \ref{ex:sv:fr:fail:i}: Indeed, $B-\rho\ensuremath{\operatorname{Id}}=-\ensuremath{\operatorname{Id}}-rP_C+(1+r)\ensuremath{\operatorname{Id}}=r(\ensuremath{\operatorname{Id}}-P_C)$. It follows from \cite[Example~23.4~\&~Proposition~23.11(i)]{BC2017} that $\ensuremath{\operatorname{Id}}-P_C$ is maximally monotone. Because $r\ge 0$, the operator $B-\rho\ensuremath{\operatorname{Id}}=r(\ensuremath{\operatorname{Id}}-P_C)$ is maximally monotone as well. \ref{ex:sv:fr:fail:ii}: Combine \ref{ex:sv:fr:fail:i} and \cref{lem:bmax:inv}. \ref{ex:sv:fr:fail:iii}: The first identity is \cref{prop:gen:min}\ref{prop:gen:min:ii}. Now $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A^{-1})=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+B)=\ensuremath{\operatorname{ran}}(-rP_C) =-r\ensuremath{\operatorname{ran}} P_C=-rC=(\rho+1)C$. \ref{ex:sv:fr:fail:iii:b}: This is a direct consequence of \ref{ex:sv:fr:fail:iii}. \ref{ex:sv:fr:fail:iv}: The first equivalence follows from \cref{prop:gen:min}\ref{prop:gen:min:i}. Note that $[r=0 \text{~or~} C=\{0\}]$ $\siff rC=\{0\}$ $\siff rP_C\equiv 0$ $\siff$ $B=-\ensuremath{\operatorname{Id}}$ $\siff$ $\gra J_{A^{-1}}=\gra J_B=\{0\}\times X$. Now suppose that $r> 0$. Then $J_{A^{-1}}=J_B=(\ensuremath{\operatorname{Id}}+B)^{-1}=(-rP_C)^{-1} =(\ensuremath{\operatorname{Id}}+N_C)\circ(-r^{-1}\ensuremath{\operatorname{Id}})$ which is at most single-valued $\siff $ $C=X$, by e.g., \cite[Theorem~7.4]{BC2017}. \end{proof} \begin{prop} \label{prop:surj:ow} Let $A\colon X\ensuremath{\rightrightarrows} X$ be \rhmon, where $\rho>-1$, and such that $\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)=X$. Then $A$ is \rhmaxmon. \end{prop} \begin{proof} Let $(x,u)\in X\times X$ be such that $(\forall (y,v)\in \gra A)$ \begin{equation} \label{eq:min:od} \innp{x-y,u-v}\ge \rho\normsq{u-v}.
\end{equation} It follows from the surjectivity of $\ensuremath{\operatorname{Id}}+A$ that there exists $(y,v)\in X\times X$ such that $v\in Ay$ and $x+u=y+v\in (\ensuremath{\operatorname{Id}} +A)y$. Consequently, \cref{eq:min:od} implies that $\rho\normsq{u-v}\le \innp{x-y,u-v} =\innp{-(u-v),u-v}=-\normsq{u-v}$. Hence, because $\rho> -1$, we have $u=v$ and thus $x=y$ which proves the maximality of $A$. \end{proof} \begin{thm}[{\bf Minty parametrization}] \label{thm:Minty:type} Let $A\colon X\rras X$ be \rhmon\ where $\rho>-1$. Then \begin{equation} \gra A=\menge{(J_A x, (\ensuremath{\operatorname{Id}}-J_A)x)}{x\in \ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)}. \end{equation} Moreover, $A$ is \rhmaxmon\ $\siff$ $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)=X$, in which case \begin{equation} \gra A=\menge{(J_A x, (\ensuremath{\operatorname{Id}}-J_A)x)}{x\in X}. \end{equation} \end{thm} \begin{proof} Let $(x,u)\in X\times X$. In view of \cref{prop:s:v} we have $(x,u)\in \gra A$ $\siff$ $u\in Ax$ $\siff$ $x+u\in x+Ax=(\ensuremath{\operatorname{Id}} +A)x$ $\siff$ $x=J_A(x+u)$ $\siff$ [$z:=x+u\in \ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}} +A)$, $x=J_A z$ and $u=x+u-x=x+u-J_A(x+u)=(\ensuremath{\operatorname{Id}}-J_A)z$]. The equivalence of maximal $\rho$-comonotonicity of $A$ and the surjectivity of $\ensuremath{\operatorname{Id}} +A$ follows from combining \cref{cor:surj} and \cref{prop:surj:ow}. \end{proof} \begin{corollary} \label{lem:gr:RA} Suppose that $A\colon X\rras X$ is maximally $\rho$-comonotone where $\rho>-1$ and let $(x,u)\in X\times X$. Then the following hold: \begin{enumerate} \item \label{lem:gr:RA:i} $ (x,u)\in \gra J_A\siff (u, x-u) \in \gra A $. \item \label{lem:gr:RA:ii} $ (x,u)\in \gra R_A\siff \bigl(\tfrac{1}{2}(x+u), \tfrac{1}{2}(x-u) \bigr) \in \gra A $.
\end{enumerate} \end{corollary} \begin{proof} Let $(x,u)\in X\times X$ and note that, in view of \cref{prop:s:v} and \cref{thm:Minty:type}, $J_A\colon X\to X$ and consequently $R_A\colon X\to X$ are single-valued. \ref{lem:gr:RA:i}: We have $(x,u)\in \gra J_A$ $\siff$ $u=J_Ax$ $\siff$ $x-u=(\ensuremath{\operatorname{Id}}-J_A)x$. Now use \cref{thm:Minty:type}. \ref{lem:gr:RA:ii}: We have $(x,u) \in\gra R_A\siff u=R_A x=2J_Ax-x$ $\siff x+u=2J_A x$ $\siff J_A x=\tfrac{1}{2}(x+u)$ $\siff x-J_Ax=x-\tfrac{1}{2}(x+u)=\tfrac{1}{2}(x-u)$ $\siff (\tfrac{1}{2}(x+u), \tfrac{1}{2}(x-u)) \in \gra A$, where the last equivalence follows from \cref{thm:Minty:type}. \end{proof} \section{$\rho$-comonotonicity and averagedness} \label{sec:aver} We start this section with the following definition. \begin{definition} \label{def:con:nonexp} Let $T\colon X\to X$ and let $\alpha\in \left ]0,+\infty\right [$. Then $T $ is $\alpha$-\emph{\text{conically nonexpansive}} if there exists a nonexpansive operator $N\colon X\to X$ such that $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$. \end{definition} \begin{remark} \label{rem:con:nonexp} In view of \cref{def:con:nonexp}, it is clear that $T$ is $\alpha$-averaged if and only if [$T$ is $\alpha$-\text{conically nonexpansive}\ and $\alpha\in \left ]0,1\right [$]. Similarly, $T$ is nonexpansive if and only if $T$ is $1$-\text{conically nonexpansive}. \end{remark} The proofs of the next two results are straightforward and hence omitted. \begin{lemma} \label{lem:coco:conc} Let $T\colon X\to X$ and let $\alpha\in \left ]0,+\infty\right [$. Then \begin{equation} \text{ $T $ is $\alpha$-\text{conically nonexpansive}\ $\siff$ $\ensuremath{\operatorname{Id}}-T$ is $\tfrac{1}{2\alpha}$-cocoercive}.
\end{equation} \end{lemma} \begin{lemma} \label{lem:lip:cute} Let $D$ be a nonempty subset of $X$, let $N\colon D\to X$ be nonexpansive, let $\alpha\in\left[1,+\infty\right[$ and set $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$. Then $(\forall (x,y)\in D\times D)$ we have \begin{equation} \norm{Tx-Ty}\le (2\alpha-1)\norm{x-y}, \end{equation} i.e., $T$ is Lipschitz with constant $2\alpha-1$. \end{lemma} One can directly verify the following result. \begin{lemma} \label{lem:gen:Hilb} Let $(x,y)\in X\times X$ and let $\alpha\in \ensuremath{\mathbb R}$. Then \begin{equation} \alpha^2\normsq{x}-\normsq{(\alpha-1)x+y} =2\alpha\innp{x-y,y}-(1-2\alpha)\normsq{x-y}. \end{equation} \end{lemma} \begin{lemma} \label{lem:av:ch} Let $D$ be a nonempty subset of $X$, let $N \colon D\to X$, let $\alpha\in \ensuremath{\mathbb R}$ and set $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$. Then $N$ is nonexpansive if and only if $(\forall (x,y)\in D\times D)$ we have \begin{equation} 2\alpha\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge (1-2\alpha)\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}. \end{equation} \end{lemma} \begin{proof} Let $(x,y)\in D\times D$. Applying \cref{lem:gen:Hilb} with $(x,y)$ replaced by $(x-y, Tx-Ty)$, we learn that \begin{subequations} \begin{align} &\qquad2\alpha\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}- (1-2\alpha)\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\\ &=\alpha^2\normsq{x-y}-\normsq{(\alpha-1)(x-y)+(1-\alpha)(x-y)+\alpha(Nx-Ny)}\\ &=\alpha^2\big(\normsq{x-y}-\normsq{Nx-Ny}\big). \end{align} \end{subequations} Now $N$ is nonexpansive $\siff$ $\normsq{x-y}-\normsq{Nx-Ny}\ge 0$ and the conclusion directly follows. \end{proof} We now provide new characterizations of averaged and nonexpansive operators.
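As a quick sanity check on \cref{lem:coco:conc} (this example is our illustration and not part of the development above), the cocoercivity constant $\tfrac{1}{2\alpha}$ is attained already in the simplest nontrivial case:

```latex
% Illustration (ours): the constant 1/(2*alpha) in \cref{lem:coco:conc}
% is attained with equality for T = -Id.
\begin{example}
Let $T=-\ensuremath{\operatorname{Id}}$. Then $T$ is $1$-\text{conically nonexpansive}\ (take $N=-\ensuremath{\operatorname{Id}}$
and $\alpha=1$ in \cref{def:con:nonexp}), and $\ensuremath{\operatorname{Id}}-T=2\ensuremath{\operatorname{Id}}$.
For every $(x,y)\in X\times X$,
\begin{equation*}
\innp{x-y,2(x-y)}=2\normsq{x-y}=\tfrac{1}{2}\normsq{2(x-y)},
\end{equation*}
so $\ensuremath{\operatorname{Id}}-T$ is $\tfrac{1}{2\alpha}$-cocoercive with $\tfrac{1}{2\alpha}=\tfrac{1}{2}$,
and equality holds throughout.
\end{example}
```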
\begin{corollary} \label{cor:non:av:ch} Let $D$ be a nonempty subset of $X$, let $T\colon D\to X$, let $\alpha\in\left]0, +\infty\right[$ and let $(x,y)\in D\times D$. Then the following hold: \begin{enumerate} \item \label{cor:non:av:ch:i} $T$ is nonexpansive $\siff $ $2\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge -\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}$. \item \label{cor:non:av:ch:ii} $T$ is $\alpha$-\text{conically nonexpansive}\ $\siff$ $2\alpha\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge (1-2\alpha)\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}$. \end{enumerate} \end{corollary} \begin{proof} \ref{cor:non:av:ch:i}: Apply \cref{lem:av:ch} with $\alpha=1$. \ref{cor:non:av:ch:ii}: A direct consequence of \cref{lem:av:ch}. \end{proof} \begin{prop} \label{prop:gen:avn} Let $D$ be a nonempty subset of $X$, let $T\colon D\to X$, let $\alpha\in \left]0,+\infty\right[$, set $A=T^{-1}-\ensuremath{\operatorname{Id}}$ and set $N=\tfrac{1}{\alpha}T-\tfrac{1-\alpha}{\alpha}\ensuremath{\operatorname{Id}}$, i.e., $T=J_A=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$. Then the following hold: \begin{enumerate} \item \label{prop:gen:avn:iii} $T$ is $\alpha$-\text{conically nonexpansive}\ $\siff$ $N$ is nonexpansive $\siff$ $A$ is $\big(\tfrac{1}{2\alpha}-1\big)$-comonotone. \item \label{prop:gen:avn:iv} {\rm [}$T$ is $\alpha$-\text{conically nonexpansive}\ and $D=X${\rm ]} $\siff$ {\rm [}$N$ is nonexpansive and $D=X${\rm ] } $\siff$ $A$ is maximally $\big(\tfrac{1}{2\alpha}-1\big)$-comonotone. \end{enumerate} \end{prop} \begin{proof} \ref{prop:gen:avn:iii}: The first equivalence is \cref{def:con:nonexp}. We now turn to the second equivalence. ``$\RA$": Let $\{(x,u),(y,v)\}\subseteq \gra A$. 
Then $(x,u)=(T(x+u),(\ensuremath{\operatorname{Id}}-T)(x+u))$ and likewise $(y,v)=(T(y+v),(\ensuremath{\operatorname{Id}}-T)(y+v))$. It follows from \cref{lem:av:ch} applied with $(x,y)$ replaced by $(x+u,y+v)$ that $2\alpha\innp{x-y,u-v}\ge (1-2\alpha)\normsq{u-v}$. Since $\alpha>0$, the conclusion follows by dividing both sides of the last inequality by $2\alpha$. ``$\LA$": Using \cref{thm:Minty:type}, we learn that $(\forall(x,y)\in D\times D)$ $\{(Tx,(\ensuremath{\operatorname{Id}}-T)x),(Ty,(\ensuremath{\operatorname{Id}}-T)y)\}\subseteq\gra A$ and hence $\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge \bk{\tfrac{1}{2\alpha}-1}\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}$. Thus $2\alpha\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge(1-2\alpha)\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}$. Now use \cref{lem:av:ch}. \ref{prop:gen:avn:iv}: Note that $\ensuremath{\operatorname{dom}} N=\ensuremath{\operatorname{dom}} T=\ensuremath{\operatorname{ran}} T^{-1}=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)$. Now combine \ref{prop:gen:avn:iii} and \cref{thm:Minty:type}. \end{proof} \begin{prop} \label{prop:-0.3:av} Let $D$ be a nonempty subset of $X$, let $T\colon D\to X$, let $\alpha\in \left]0,+\infty\right[$, set $A=T^{-1}-\ensuremath{\operatorname{Id}}$, i.e., $T=J_A$, and set $\rho=\tfrac{1}{2\alpha}-1>-1$. Then the following equivalences hold: \begin{enumerate} \item \label{prop:-0.3:conic:ii} $T$ is $\alpha$-\text{conically nonexpansive}\ $\siff$ $A$ is $\rho$-comonotone. \item \label{prop:-0.3:conic:iii} {\rm [}$T$ is $\alpha$-\text{conically nonexpansive}\ and $D=X$ {\rm ]} $\siff$ $A$ is maximally $\rho$-comonotone. \item \label{prop:-0.3:nonexp:ii} $T$ is nonexpansive $\siff$ $A$ is $\bigl(-\tfrac{1}{2}\bigr)$-comonotone. 
\item \label{prop:-0.3:nonexp:iii} {\rm [}$T$ is nonexpansive and $D=X$ {\rm ]} $\siff$ $A$ is maximally $\bigl(-\tfrac{1}{2}\bigr)$-comonotone. \end{enumerate} If we assume that $\alpha\in \left]0,1\right[$, equivalently, $\rho>-\tfrac{1}{2}$, then we additionally have: \begin{enumerate} \setcounter{enumi}{4} \item \label{prop:-0.3:av:ii} $T$ is $\alpha$-averaged $\siff$ $A$ is \bmon. \item \label{prop:-0.3:av:iii} {\rm [}$T$ is $\alpha$-averaged and $D=X${\rm ]} $\siff$ $A$ is \bmaxmon. \end{enumerate} \end{prop} \begin{proof} \ref{prop:-0.3:conic:ii}\&\ref{prop:-0.3:conic:iii}: This follows from \cref{prop:gen:avn}\ref{prop:gen:avn:iii}\&\ref{prop:gen:avn:iv}. \ref{prop:-0.3:nonexp:ii}--\ref{prop:-0.3:av:iii}: Combine \ref{prop:-0.3:conic:ii} and \ref{prop:-0.3:conic:iii} with \cref{rem:con:nonexp}. \end{proof} \begin{corollary}{\rm\bf(The characterization corollary).} \label{cor:eq:nexp:-0.3} Let $T\colon X\to X$. Then the following hold: \begin{enumerate} \item \label{cor:eq:nexp:-0.3:i} $T$ is nonexpansive if and only if it is the resolvent of a maximally $\bigl(-\tfrac{1}{2}\bigr)$-comonotone operator $A\colon X\rras X$. \item \label{cor:eq:nexp:-0.3:0} Let $\alpha\in \left]0,+\infty\right[$. Then $T$ is $\alpha$-\text{conically nonexpansive}\ if and only if it is the resolvent of a \bmon\ operator $A\colon X\rras X$, where $\rho=\tfrac{1}{2\alpha}-1>-1$ \big(i.e., $\alpha=\tfrac{1}{2(\rho+1)}$\big). \item \label{cor:eq:nexp:-0.3:ii} Let $\alpha\in \left]0,1\right[$. Then $T$ is $\alpha$-averaged if and only if it is the resolvent of a \bmon\ operator $A\colon X\rras X$ where $\rho=\tfrac{1}{2\alpha}-1>-\tfrac{1}{2}$ (i.e., $\alpha=\tfrac{1}{2(\rho+1)}$). \end{enumerate} \end{corollary} \begin{example} Suppose that $U$ is a closed linear subspace of $X$ and set $N=2P_U-\ensuremath{\operatorname{Id}}$. 
Let $\alpha\in \left]0,+\infty\right[$, set $T_\alpha=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$, and set $A_\alpha=(T_\alpha)^{-1}-\ensuremath{\operatorname{Id}}$. Then for every $\alpha\in \left]0,+\infty\right[$, $T_\alpha$ is $\alpha$-conically nonexpansive and \begin{equation} A_\alpha =\begin{cases} N_U,&\text{if }\alpha=\tfrac{1}{2};\\ \tfrac{2\alpha}{1-2\alpha}P_{U^\perp}, &\text{otherwise}. \end{cases} \end{equation} Moreover, $A_\alpha$ is $\bigl(\tfrac{1}{2\alpha}-1\bigr)$-comonotone. \end{example} \begin{proof} First note that $T_\alpha=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha (2P_U-\ensuremath{\operatorname{Id}}) =(1-2\alpha)\ensuremath{\operatorname{Id}}+2\alpha P_U$. The case $\alpha=\tfrac{1}{2}$ is clear by, e.g., \cite[Example~23.4]{BC2017}. Now suppose that $\alpha\in \left]0,+\infty\right[ \smallsetminus \{\tfrac{1}{2}\}$, and let $y\in X$. Then $y\in A_\alpha x$ $\siff x+y\in (\ensuremath{\operatorname{Id}}+A_\alpha) x$ $\siff x=T_\alpha (x+y)=(1-2\alpha) (x+y)+2\alpha P_U(x+y)$ $\siff x=x+y-2\alpha(\ensuremath{\operatorname{Id}}-P_U)(x+y)$ $\siff y=2\alpha P_{U^\perp}(x+y) =2\alpha P_{U^\perp} x+2\alpha P_{U^\perp}y =2\alpha P_{U^\perp} x+2\alpha y$, using that $y\in U^\perp$ by the first equality. Therefore, $y=\tfrac{2\alpha}{1-2\alpha} P_{U^\perp} x$, and the conclusion follows in view of \cref{cor:eq:nexp:-0.3}\ref{cor:eq:nexp:-0.3:0}. \end{proof} \begin{prop} \label{prop:av:-0.3} Let $A\colon X\rras X$ be such that $\ensuremath{\operatorname{dom}} A\neq \fady$, let $\rho\in \left]-1,+\infty\right[$, set $D=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)$, set $T = {J_A}$, i.e., $A=T^{-1}-\ensuremath{\operatorname{Id}}$, and set $N= 2(\rho+1)T-(2\rho+1)\ensuremath{\operatorname{Id}}$, i.e., $T=\tfrac{2\rho+1}{2(\rho+1)}\ensuremath{\operatorname{Id}}+\tfrac{1}{2(\rho+1)}N$. Then the following equivalences hold: \begin{enumerate} \item \label{prop:av:-0.3:ii} $A$ is $\rho$-comonotone $\siff$ $N$ is nonexpansive.
\item \label{prop:av:-0.3:iii} $A$ is maximally $\rho$-comonotone $\siff$ $N$ is nonexpansive and $D=X$. \end{enumerate} \end{prop} \begin{proof} \ref{prop:av:-0.3:ii}: Set $\alpha=\tfrac{1}{2(\rho+1)}$ and note that $\alpha>0$. It follows from \cref{prop:s:v} that $T=J_A$ is single-valued. Now use \cref{prop:gen:avn}\ref{prop:gen:avn:iii}. \ref{prop:av:-0.3:iii}: Combine \ref{prop:av:-0.3:ii} and \cref{prop:gen:avn}\ref{prop:gen:avn:iv}. \end{proof} \begin{prop} \label{prop:all:oth} Let $A\colon X\rras X$ be such that $\ensuremath{\operatorname{dom}} A\neq \fady$, let $\rho\in \left]-1,+\infty\right[$, set $D=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)$, set $T={J_A}$, i.e., $A=T^{-1}-\ensuremath{\operatorname{Id}}$, and set $\alpha=\tfrac{1}{2(\rho+1)}$. Then we have the following equivalences: \begin{enumerate} \item \label{prop:all:oth:i:i} $A$ is $\rho$-comonotone $\siff$ $T$ is $\alpha$-\text{conically nonexpansive}. \item \label{prop:all:oth:i:ii} $A$ is maximally $\rho$-comonotone $\siff$ $T$ is $\alpha$-\text{conically nonexpansive}\ and $D=X$. \item \label{prop:all:oth:ii} $A$ is $\bigl(-\tfrac{1}{2}\bigr)$-comonotone $\siff$ $T$ is nonexpansive. \item \label{prop:all:oth:iii} $A$ is maximally $\bigl(-\tfrac{1}{2}\bigr)$-comonotone $\siff$ $T$ is nonexpansive and $D=X$. \item \label{prop:all:oth:iv}\ {\rm[}$A$ is \bmon\ and $\rho>-\tfrac{1}{2}${\rm]} $\siff$ $T$ is $\alpha$-averaged. \item \label{prop:all:oth:v}\ {\rm [}$A$ is \bmaxmon\ and $\rho>-\tfrac{1}{2}${\rm]} $\siff$ {\rm[}$T$ is $\alpha$-averaged and $D=X${\rm]}. \end{enumerate} \end{prop} \begin{proof} \ref{prop:all:oth:i:i}--\ref{prop:all:oth:v}: Use \cref{prop:-0.3:av}. \end{proof} \begin{corollary} Let $\rho\in\left]-\tfrac{1}{2},+\infty\right[$ and let $A\colon X\rras X$ be maximally \bmon. Then $J_A$ is $\tfrac{1}{2(\rho+1)}$-averaged. \end{corollary} The following corollary provides an alternative proof of \cite[Proposition~6.9.6]{BurIus}.
\begin{corollary} \label{cor:zer:cl:con} Let $A\colon X\rras X$ be maximally \bmon\ and $\rho\ge -\tfrac{1}{2}$. Then $\ensuremath{\operatorname{zer}} A$ is closed and convex. \end{corollary} \begin{proof} It is clear that $\ensuremath{\operatorname{zer}} A=\ensuremath{\operatorname{Fix}} J_A$. The conclusion now follows from combining \cite[Corollary~4.14]{BC2017} and \cref{prop:all:oth}\ref{prop:all:oth:iii}. \end{proof} Table~\ref{tab:1} below summarizes the main results of this section. \begin{table}[H] \resizebox{0.78\textwidth}{!}{\begin{minipage}{\textwidth} \begin{tabular}{p{3.5cm}|p{2.6cm}p{0.35cm}|p{2.2cm} p{0.35cm}|p{3.6cm}p{0.35cm}|p{3.5cm}} \toprule $\rho$& $A$ & &$A^{-1}$& &$J_A$ & & $J_{A^{-1}}$ \\ \hline \begin{tikzpicture}[baseline=-0.3ex] \draw[>=triangle 45, <->] (-2,0) -- (1.5,0); \draw [line width=0.75mm] (0,0)--(1.4,0); \foreach \x in {0} \draw (\x,1pt) -- (\x,-1pt) node[anchor=north]{\footnotesize\x}; \node[circle,draw=black, fill=white, inner sep=0pt,minimum size=5pt] at (0,0) {}; \end{tikzpicture} & $\rho$-cocoercive &$\siff$ & $\rho$-strongly monotone & $\siff$ & $\tfrac{1}{2(\rho+1)}$-\text{conically nonexpansive}\ &$\siff$ & $(\rho+1)$-cocoercive \\ \begin{tikzpicture}[baseline=-0.3ex] \draw[>=triangle 45, <->] (-2,0) -- (1.5,0); \draw [line width=0.75mm] (0,0)--(0,0); \foreach \x in {0} \draw (\x,1pt) -- (\x,-1pt) node[anchor=north] {\footnotesize\x}; \node[circle,draw=black, fill=black, inner sep=0pt,minimum size=5pt] at (0,0) {}; \end{tikzpicture} &monotone &$\siff$ & monotone &$\siff$ &firmly nonexpansive&$\siff$ & firmly nonexpansive \\ \begin{tikzpicture}[baseline=-0.3ex] \draw[>=triangle 45, <->] (-2,0) -- (1.5,0); \draw [line width=0.75mm] (-1/2,0)--(0,0); \foreach \x in {0,-0.5} \draw (\x,1pt) -- (\x,-1pt) node[anchor=north] {\footnotesize\x}; \node[circle,draw=black, fill=white, inner sep=0pt,minimum size=5pt] at (0,0) {}; \node[circle,draw=black, fill=white, inner sep=0pt,minimum size=5pt] at (-0.5,0) {}; \end{tikzpicture} 
&$\rho$-comonotone &$\siff$ & $\rho$-monotone &$\siff$ &$\tfrac{1}{2(\rho+1)}$-averaged &$\siff$ & $(\rho+1)$-cocoercive \\ \begin{tikzpicture}[baseline=-0.3ex] \draw[>=triangle 45, <->] (-2,0) -- (1.5,0); \foreach \x in {-0.5} \draw (\x,1pt) -- (\x,-1pt) node[anchor=north] {\footnotesize \x}; \node[circle,draw=black, fill=black, inner sep=0pt,minimum size=5pt] at (-0.5,0){}; \end{tikzpicture} & $\rho$-comonotone &$\siff$ & $\rho$-monotone &$\siff$ & nonexpansive &$\siff$ & $\tfrac{1}{2}$-cocoercive \\ \begin{tikzpicture}[baseline=-0.3ex] \draw[>=triangle 45, <->] (-2,0) -- (1.5,0); \draw [line width=0.75mm] (-1,0)--(-0.5,0); \foreach \x in {-1,-0.5} \draw (\x,1pt) -- (\x,-1pt) node[anchor=north] {\footnotesize \x}; \node[circle,draw=black, fill=white, inner sep=0pt,minimum size=5pt] at (-0.5,0) {}; \node[circle,draw=black, fill=white, inner sep=0pt,minimum size=5pt] at (-1,0) {}; \end{tikzpicture} & $\rho$-comonotone &$\siff$ & $\rho$-monotone &$\siff$ & $\tfrac{1}{2(\rho+1)}$-\text{conically nonexpansive}\ &$\siff$ & $(\rho+1)$-cocoercive \\ \begin{tikzpicture}[baseline=-0.3ex] \draw[>=triangle 45, <->] (-2,0) -- (1.5,0); \draw [line width=0.75mm] (-1.85,0)--(-1,0); \foreach \x in {-1} \draw (\x,1pt) -- (\x,-1pt) node[anchor=north] {\footnotesize \x}; \node[circle,draw=black, fill=black, inner sep=0pt,minimum size=5pt] at (-1,0){}; \end{tikzpicture} & $\rho$-comonotone &$\siff$ & $\rho$-monotone & $\RA$ & may fail to be at most single-valued & $\siff$ & may fail to be at most single-valued \\ \toprule \end{tabular} \end{minipage}} \caption{Properties of an operator $A$ and its inverse $A^{-1} $ along with the corresponding resolvents $J_A$ and $J_{A^{-1}}$ respectively, for different values of $\rho\in \ensuremath{\mathbb R}$. 
Here, $A$ satisfies the implication: $\{(x,u),(y,v)\}\subseteq \gra A \RA \innp{x-y,u-v}\ge \rho\normsq{u-v}$.} \label{tab:1} \end{table} \section{Further properties of the resolvent $J_A$ and the reflected resolvent $R_A$} \label{sec:JA:RA} We start this section with the following useful lemma. \begin{lemma} \label{lem:JA:RA:corr} Let $T\colon X\to X$, let $\alpha\in \left[0,1\right[$. Then the following hold: \begin{enumerate} \item \label{lem:JA:RA:corr:ii} $T$ is $\alpha$-averaged $\siff$ $ 2T-\ensuremath{\operatorname{Id}}=(1-2\alpha)\ensuremath{\operatorname{Id}} +2\alpha N$ for some nonexpansive $N\colon X\to X$. \item \label{lem:JA:RA:corr:iv} $[T=\tfrac{\alpha}{2}(\ensuremath{\operatorname{Id}}+N)$ and $N$ is nonexpansive$]$ $\siff $ $-(2T-\ensuremath{\operatorname{Id}})$ is $\alpha$-averaged\footnote{This is also known as $\alpha$-negatively averaged (see \cite[Definition~3.7]{Gisel17}).}, in which case $T$ is a Banach contraction with Lipschitz constant $\alpha<1$. \item \label{lem:JA:RA:corr:iii} $T$ is $\tfrac{1}{2}$-strongly monotone $\siff $ $2T-\ensuremath{\operatorname{Id}}$ is monotone. \end{enumerate} \end{lemma} \begin{proof} \ref{lem:JA:RA:corr:ii}: We have: $T$ is $\alpha$-averaged $\siff $ [$T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$ and $N$ is nonexpansive] $\siff$ [$2T-\ensuremath{\operatorname{Id}}=(2-2\alpha)\ensuremath{\operatorname{Id}}+2\alpha N-\ensuremath{\operatorname{Id}}=(1-2\alpha)\ensuremath{\operatorname{Id}}+2\alpha N$ and $N$ is nonexpansive]. \ref{lem:JA:RA:corr:iv}: Indeed, $[T=\tfrac{\alpha}{2}(\ensuremath{\operatorname{Id}}+N)$ and $N$ is nonexpansive$]$ $\siff $ $2T-\ensuremath{\operatorname{Id}}=(\alpha-1)\ensuremath{\operatorname{Id}}+\alpha N=-((1-\alpha)\ensuremath{\operatorname{Id}}+\alpha(-N))$, equivalently $2T-\ensuremath{\operatorname{Id}}$ is $\alpha$-negatively averaged. 
\ref{lem:JA:RA:corr:iii}: We have: $T$ is $\tfrac{1}{2}$-strongly monotone $\siff$ $T-\tfrac{1}{2}\ensuremath{\operatorname{Id}}$ is monotone $\siff$ $2T-\ensuremath{\operatorname{Id}}$ is monotone. \end{proof} Before we proceed, we recall the following useful fact (see, e.g., \cite[Proposition~4.35]{BC2017}). \begin{fact} \label{Proposition:4.35} Let $T\colon X\to X$, let $(x,y)\in X\times X$ and let $\alpha\in \left]0,1\right[$. Then \begin{equation} \text{$T$ is $\alpha$-averaged $\siff$ $\normsq{Tx-Ty}+(1-2\alpha) \normsq{x-y} \le 2(1-\alpha)\innp{x-y,Tx-Ty}$.} \end{equation} \end{fact} \begin{prop} \label{p:corres:A:RA} Let $\alpha\in \left]0,1\right[$, let $\beta\in \bigl]-\tfrac{1}{2},+\infty\bigr[$, let $A\colon X\rras X$ and suppose that $A$ is $\beta$-comonotone. Then the following hold: \begin{enumerate} \item \label{p:corres:A:RA:ii} $A$ is $\beta$-comonotone $\siff$ $J_A$ is $\tfrac{1}{2(1+\beta)}$-averaged $\siff$ $R_A=\big(1-\tfrac{1}{1+\beta}\big )\ensuremath{\operatorname{Id}}+\tfrac{1}{1+\beta}N$ for some nonexpansive $N\colon X\to X$. \item \label{p:corres:A:RA:iv} $A$ is $\beta$-strongly monotone $\siff$ $[J_A=\tfrac{1}{2(\beta+1)}(\ensuremath{\operatorname{Id}}+N)$ and $N$ is nonexpansive$]$ $\siff$ $-R_A$ is $\tfrac{1}{\beta+1}$-averaged, in which case $J_A$ is a Banach contraction with Lipschitz constant $\tfrac{1}{\beta+1}<1$. \item \label{p:corres:A:RA:iii} $A$ is nonexpansive $\siff$ $J_A$ is $\tfrac{1}{2}$-strongly monotone $\siff$ $R_A$ is monotone. \item \label{p:corres:A:RA:i} $A$ is $\alpha$-averaged $\siff$ $R_A$ is $\tfrac{1-\alpha}{\alpha}$-cocoercive. \item \label{p:corres:A:RA:i:d} $A$ is firmly nonexpansive $\siff$ $R_A$ is firmly nonexpansive. \end{enumerate} \end{prop} \begin{proof} Let $\{(x,u),(y,v)\}\subseteq X \times X$. Using \cref{lem:gr:RA}\ref{lem:gr:RA:i}, we have $ \{(x,u),(y,v)\}\subseteq\gra J_A$ $\siff \{(u, x-u), (v,y-v)\}\subseteq\gra A$, which we shall use repeatedly.
\ref{p:corres:A:RA:ii}: Let $\{(x,u),(y,v)\}\subseteq\gra J_A$. We have \begin{subequations} \begin{align} &~A \text{~is~} \beta\text{-comonotone~} \nonumber\\ \siff&~\beta \normsq{(x-y)-(u-v)}\le \innp{(x-y)-(u-v), u-v} \\ \siff&~\beta\normsq{x-y}+\beta\normsq{u-v}-2\beta\innp{x-y,u-v} \le \innp{x-y,u-v}-\normsq{u-v} \\ \siff&~ \beta\normsq{x-y}+(\beta+1)\normsq{u-v} \le (2\beta+1) \innp{x-y,u-v} \\ \siff & ~\normsq{u-v} +\tfrac{\beta}{\beta+1}\normsq{x-y} \le \tfrac{2\beta+1}{\beta+1} \innp{x-y,u-v} \\ \siff &~ \normsq{u-v} +\big(1-\tfrac{1}{\beta+1}\big)\normsq{x-y} \le 2\big(1-\tfrac{1}{2(\beta+1)}\big) \innp{x-y,u-v} \\ \siff &~ J_A \text{~is~} \tfrac{1}{2(\beta+1)}\text{-averaged} \\ \siff &~ \text{$R_A=\big(1-\tfrac{1}{1+\beta}\big )\ensuremath{\operatorname{Id}}+\tfrac{1}{1+\beta}N$ for some nonexpansive $N\colon X\to X$,} \end{align} \end{subequations} where the last two equivalences follow from \cref{Proposition:4.35} and \cref{lem:JA:RA:corr}\ref{lem:JA:RA:corr:ii}, respectively. \ref{p:corres:A:RA:iv}: We start by proving the equivalence of the first and third statements (see \cite[Proposition~5.4]{Gisel17} for ``$\RA$" and also \cite[Proposition~2.1(iii)]{MV18}). Let $\{(x,u),(y,v)\}\subseteq \gra (-R_A)$, i.e., $\{(x,-u),(y,-v)\}\subseteq \gra R_A$. In view of \cref{lem:gr:RA}\ref{lem:gr:RA:ii}, this is equivalent to $ \{(\tfrac{1}{2}(x-u), \tfrac{1}{2}(x+u)), (\tfrac{1}{2}(y-v), \tfrac{1}{2}(y+v))\}\subseteq\gra A$.
We have \begin{subequations} \begin{align} & A \text{ is $\beta$-strongly monotone} \nonumber\\ \siff~& \innp{(x-y)+(u-v),(x-y)-(u-v)}\ge \beta \normsq{(x-y)-(u-v)} \\ \siff~& \normsq{x-y}-\normsq{u-v}\ge \beta\normsq{x-y}+\beta\normsq{u-v} -2\beta\innp{x-y,u-v} \\ \siff~& 2\beta\innp{x-y,u-v} \ge (\beta-1)\normsq{x-y}+(\beta+1)\normsq{u-v} \\ \siff~& \tfrac{ 2\beta}{\beta+1}\innp{x-y,u-v} \ge \tfrac{\beta-1}{\beta+1}\normsq{x-y}+\normsq{u-v} \\ \siff~& 2\bk{1-\tfrac{1}{\beta+1}}\innp{x-y,u-v} \ge\bk{1-\tfrac{2}{\beta+1}}\normsq{x-y}+\normsq{u-v} \\ \siff~& -R_A \text{~is~} \tfrac{1}{\beta+1}\text{-averaged}, \end{align} \end{subequations} where the last equivalence follows from \cref{Proposition:4.35}. Now apply \cref{lem:JA:RA:corr}\ref{lem:JA:RA:corr:iv} to prove the equivalence of the second and third statements in \ref{p:corres:A:RA:iv}. \ref{p:corres:A:RA:iii}: Let $\{(x,u),(y,v)\}\subseteq\gra J_A$ and note that \cref{lem:gr:RA}\ref{lem:gr:RA:i} implies that $x-u\in Au$, $y-v\in Av$, $2u-x\in (\ensuremath{\operatorname{Id}} - A)u$ and $2v-y\in (\ensuremath{\operatorname{Id}} - A)v$. It follows from \cref{cor:non:av:ch}\ref{cor:non:av:ch:i} applied with $(T,x,y)$ replaced by $(A,u,v)$ that \begin{subequations} \begin{align} A \text{~is nonexpansive} \siff~& \innp{(x-y)-(u-v),2(u-v)-(x-y)} \nonumber \\ &\ge -\tfrac{1}{2}\normsq{2(u-v)-(x-y)} \\ \siff~&-\normsq{x-y}-2\normsq{u-v}+3\innp{x-y,u-v} \nonumber\\ &\ge -2\normsq{u-v}-\tfrac{1}{2}\normsq{x-y}+2\innp{x-y,u-v} \\ \siff ~& \innp{x-y, u-v}\ge \tfrac{1}{2}\normsq{x-y} \\ \siff ~& J_A \text{~is~} \tfrac{1}{2}\text{-strongly monotone} \\ \siff ~& R_A \text{~is~} \text{ monotone}, \end{align} \end{subequations} where the last equivalence follows from \cref{lem:JA:RA:corr}\ref{lem:JA:RA:corr:iii}. \ref{p:corres:A:RA:i}: Let $\{(x,u),(y,v)\}\subseteq X \times X$. 
Using \cref{lem:gr:RA} we have $ \{(x,u),(y,v)\}\subseteq\gra R_A$ $\siff \big\{\big(\tfrac{1}{2}(x+u), \tfrac{1}{2}(x-u)\big), \big(\tfrac{1}{2}(y+v), \tfrac{1}{2}(y-v)\big)\big\}\subseteq\gra A$. Let $\{(x,u),(y,v)\}\subseteq\gra R_A$. Applying \cref{cor:non:av:ch}\ref{cor:non:av:ch:ii} with $(T,x,y)$ replaced by $\big(A, \tfrac{1}{2}(x+u),\tfrac{1}{2}(y+v)\big)$ and \cref{rem:con:nonexp}, we learn that \begin{subequations} \begin{align} A \text{~is~} \alpha\text{-averaged~} \siff~& 2\alpha\innp{\tfrac{1}{2}((x-y)-(u-v)),u-v} \ge (1-2\alpha)\normsq{u-v} \\ \siff~&\alpha\innp{x-y,u-v}-\alpha\normsq{u-v} \ge (1-2\alpha)\normsq{u-v} \\ \siff ~& \tfrac{\alpha}{1-\alpha}\innp{x-y, u-v}\ge \normsq{u-v}, \end{align} \end{subequations} equivalently $R_A$ is $\tfrac{1-\alpha}{\alpha}$-cocoercive. \ref{p:corres:A:RA:i:d}: Apply \ref{p:corres:A:RA:i} with $\alpha=\tfrac{1}{2}$. \end{proof} \begin{rem} \cref{p:corres:A:RA}\ref{p:corres:A:RA:ii} generalizes the conclusion of \cite[Proposition~5.3]{Gisel17}. Indeed, if $\beta>0$ we have $A$ is $\beta$-cocoercive, equivalently $R_A$ is $\tfrac{1}{\beta+1}$-averaged. \end{rem} \section{$\rho$-monotone and $\rho$-comonotone linear operators} \label{sec:linear} Let $A\in \ensuremath{\mathbb R}^{n\times n}$ and set $A_{s}=\tfrac{A+A\tran}{2}$. In the following we use $\lam_{\min}(A)$ and $\lam_{\max}(A)$ to denote the smallest and largest eigenvalue of $A$, respectively, provided all eigenvalues of $A$ are real. \begin{prop} \label{prop:linear:hypo} Suppose that $A\in \ensuremath{\mathbb R}^{n\times n}$. Then the following hold: \begin{enumerate} \item \label{prop:linear:hypo:i} $A$ is $\rho$-monotone $\siff$ $\lam_{\min}(A_{s})\ge \rho$. \item \label{prop:linear:hypo:ii} $A$ is $\rho$-comonotone $\siff$ $\lam_{\min}(A_{s}-\rho A\tran A)\ge 0$. \end{enumerate} \end{prop} \begin{proof} Let $ x\in \ensuremath{\mathbb R}^n$. 
\cref{prop:linear:hypo:i}: $A$ is $\rho$-monotone $\siff$ $\innp{x,Ax}\ge \rho \normsq{x}$ $\siff$ $\innp{x,(A-\rho \ensuremath{\operatorname{Id}})x}\ge 0$ $\siff$ $\innp{x,(A-\rho \ensuremath{\operatorname{Id}})_sx}\ge 0$ $\siff$ $\innp{x,(A_s-\rho \ensuremath{\operatorname{Id}})x}\ge 0$ $\siff$ $A_s-\rho \ensuremath{\operatorname{Id}}\succeq 0$ $\siff$ $A_s\succeq\rho \ensuremath{\operatorname{Id}}$ $\siff$ $\lam_{\min}(A_s)\ge \rho$. \cref{prop:linear:hypo:ii}: $A$ is $\rho$-comonotone $\siff$ $\innp{x,Ax}\ge \rho \normsq{Ax}$ $\siff$ $\innp{x,(A_s-\rho A\tran A)x}\ge 0$ $\siff$ $A_s-\rho A\tran A\succeq 0$ $\siff$ $\lam_{\min}(A_s-\rho A\tran A)\geq 0$. \end{proof} \begin{example} \label{ex:linear:rmono:rcomono} Suppose that $N\colon X\to X$ is continuous and linear such that $N^*=-N$ and $N^2=-\ensuremath{\operatorname{Id}}$. Then $N$ is nonexpansive. Moreover, let $\lambda\in \left]0,1\right[$, set $T_\lambda=(1-\lambda)\ensuremath{\operatorname{Id}}+\lambda N$ and set $A_\lambda=(T_\lambda)^{-1}-\ensuremath{\operatorname{Id}}$. Then the following hold: \begin{enumerate} \item \label{ex:linear:rmono:rcomono:i} We have \begin{equation} A_\lambda=\tfrac{\lambda}{(1-\lambda)^2+\lambda^2} \big((1-2\lambda)\ensuremath{\operatorname{Id}}-N\big). \end{equation} \item \label{ex:linear:rmono:rcomono:ii} $ A_\lambda$ is $\rho$-monotone with optimal $\rho =\tfrac{\lambda(1-2\lambda)}{\lam^2+(1-\lam)^2}$. \item \label{ex:linear:rmono:rcomono:iii} $ A_\lambda$ is $\rho$-comonotone with optimal $\rho= \tfrac{1-2\lambda}{2\lambda} $. \end{enumerate} \end{example} \begin{proof} Let $x\in X$. Then $\norm{Nx}^2 =\innp{Nx,Nx} =\innp{x,N^*Nx} =\innp{x,-N^2x} =\innp{x,x}=\normsq{x} $. Hence $N$ is nonexpansive; in fact, $N$ is an isometry. Now set \begin{equation} B_\lambda = \tfrac{\lambda}{(1-\lambda)^2+\lambda^2} \big((1-2\lambda)\ensuremath{\operatorname{Id}}-N\big).
\end{equation} \cref{ex:linear:rmono:rcomono:i}: We have \begin{subequations} \begin{align} (\ensuremath{\operatorname{Id}}+B_\lambda)T_\lambda &=\left(\ensuremath{\operatorname{Id}}+\tfrac{\lambda}{(1-\lambda)^2 +\lambda^2}\big((1-2\lambda)\ensuremath{\operatorname{Id}}-N\big)\right) \big((1-\lambda)\ensuremath{\operatorname{Id}}+\lambda N\big) \\ &=\tfrac{1}{(1-\lambda)^2+\lambda^2} \big((1-\lambda)\ensuremath{\operatorname{Id}}-\lambda N\big)\big((1-\lambda)\ensuremath{\operatorname{Id}}+\lambda N\big) \\ &=\tfrac{1}{(1-\lambda)^2+\lambda^2} \big((1-\lambda)^2\ensuremath{\operatorname{Id}}-\lambda^2N^2\big)=\ensuremath{\operatorname{Id}}. \end{align} \end{subequations} Similarly, one can show that $T_\lambda(\ensuremath{\operatorname{Id}}+B_\lambda)=\ensuremath{\operatorname{Id}}$ and the conclusion follows. \cref{ex:linear:rmono:rcomono:ii}: Using \cref{ex:linear:rmono:rcomono:i}, we have \begin{subequations} \begin{align} \innp{x, A_\lambda x} &= \frac{\lambda}{(1-\lambda)^2+\lambda^2} \big((1-2\lambda)\norm{x}^2-\innp{Nx,x}\big)\\ &= \frac{\lambda(1-2\lambda)}{(1-\lambda)^2+\lambda^2} \norm{x}^2. \label{sub:eq:lin} \end{align} \end{subequations} \cref{ex:linear:rmono:rcomono:iii}: Using \cref{ex:linear:rmono:rcomono:i}, we have \begin{subequations} \begin{align} \normsq{A_\lambda x} &= \frac{\lambda^2}{((1-\lambda)^2+\lambda^2)^2} \big((1-2\lam)^2\normsq{x}+\normsq{Nx}\big) \\ &= \frac{\lam^2}{((1-\lam)^2+\lam^2)^2} \big((1-2\lam)^2+1\big)\normsq{x}. \end{align} \end{subequations} Therefore, combining with \cref{sub:eq:lin} we obtain \begin{subequations} \begin{align} \innp{x, A_\lambda x} &= \frac{(1-2\lam)((1-\lam)^2+\lam^2)}{\lam ((1-2\lam)^2+1)} \cdot \frac{\lam^2((1-2\lam)^2+1)}{((1-\lam)^2+\lam^2)^2} \normsq{x} \\ &=\frac{(1-2\lam)((1-\lam)^2+\lam^2)}{\lam ((1-2\lam)^2+1)} \normsq{A_\lambda x},\\ &=\frac{1-2\lambda}{2\lambda} \normsq{A_\lambda x}, \end{align} \end{subequations} and the conclusion follows. 
\end{proof} \section{Hypoconvex functions} \label{sec:hypocon} In this section, we apply results in the previous sections to characterize proximal mappings of hypoconvex functions. We shall assume that $f\colon X\to \left]-\infty,+\infty\right]$ is a proper lower semicontinuous function minorized by a concave quadratic function: there exist $\nu\in\ensuremath{\mathbb R}$, $\beta\in \ensuremath{\mathbb R}$ and $\alpha\geq 0$ such that $$(\forall x\in X)\quad f(x)\geq -\alpha\|x\|^2-\beta\|x\|+\nu.$$ For $\mu>0$, the Moreau envelope of $f$ is defined by $$e_{\mu}f(x)=\inf_{y\in X}\Big(f(y)+\frac{1}{2\mu}\|x-y\|^2\Big),$$ and the associated proximal mapping $\ensuremath{\operatorname{Prox}}_{\mu f}$ by \begin{equation} \ensuremath{\operatorname{Prox}}_{\mu f}(x) =\underset{y\in X}{\ensuremath{\operatorname{argmin}}}\Big(f(y)+\frac{1}{2\mu}\|x-y\|^2\Big), \end{equation} where $x\in X$. We shall use $\partial f$ for the subdifferential mapping from convex analysis. \begin{definition}\label{def:abst} An abstract subdifferential $\pars$ associates a subset $\pars f(x)$ of $X$ to $f$ at $x\in X$, and it satisfies the following properties: \begin{enumerate} \item\label{i:p1} $\pars f=\partial f$ if $f$ is a proper lower semicontinuous convex function; \item\label{i:p1.5} $\pars f=\nabla f$ if $f$ is continuously differentiable; \item\label{i:p2} $0\in\pars f(x)$ if $f$ attains a local minimum at $x\in\ensuremath{\operatorname{dom}} f$; \item\label{i:p3} for every $\beta\in\ensuremath{\mathbb R}$, $$\pars\Big(f+\beta\frac{\|\cdot-x\|^2}{2}\Big)=\pars f +\beta (\ensuremath{\operatorname{Id}}-x).$$ \end{enumerate} \end{definition} The Clarke--Rockafellar subdifferential, Mordukhovich subdifferential, and Fr\'echet subdifferential all satisfy Definition~\ref{def:abst}\ref{i:p1}--\ref{i:p3}, see, e.g., \cite{Clarke90}, \cite{Mord06, Mord18}, so each of them is an admissible choice of $\pars$. Related but different abstract subdifferentials have been used in \cite{aussel95, ioffe84, thibault95}.
Recall that $f$ is $\tfrac{1}{\lambda}$-hypoconvex (see \cite{Rock98, Wang2010}) if \begin{equation} f((1-\tau)x+\tau y)\le (1-\tau) f(x) +\tau f(y) +\frac{1}{2\lambda}\ \tau(1-\tau) \norm{x-y}^2, \end{equation} for all $(x,y)\in X\times X$ and $\tau\in \left]0,1\right[$. \begin{proposition}\label{p:sub:for} If $f\colon X\to \left]-\infty,+\infty\right]$ is a proper lower semicontinuous $\frac{1}{\lambda}$-hypoconvex function, then \begin{equation}\label{e:hypoc:sub} \pars f=\partial\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big)-\frac{1}{\lambda}\ensuremath{\operatorname{Id}}. \end{equation} Consequently, for a hypoconvex function the Clarke--Rockafellar, Mordukhovich, and Fr\'echet subdifferential operators all coincide. \end{proposition} \begin{proof} For the convex function $f+\frac{1}{2\lambda}\|\cdot\|^2$, apply Definition~\ref{def:abst}\ref{i:p1} and \ref{i:p3} to obtain $$\partial\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big) =\pars\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big) =\pars f+\frac{1}{\lambda}\ensuremath{\operatorname{Id}}$$ from which \eqref{e:hypoc:sub} follows. \end{proof} Let $f^*$ denote the Fenchel conjugate of $f$. The following result is well known in $\ensuremath{\mathbb R}^n$, see, e.g., \cite[Exercise~12.61(b)(c), Example~11.26(d)~and~Proposition~12.19]{Rock98}, and \cite{Wang2010}. In fact, it also holds in a Hilbert space. \begin{prop}\label{prop:hypo:func} The following are equivalent: \begin{enumerate} \item \label{prop:hypo:func:i} $f$ is $\tfrac{1}{\lambda}$-hypoconvex. \item \label{prop:hypo:func:ii} $f+\tfrac{1}{2\lam} \normsq{\cdot}$ is convex. \item \label{prop:hypo:func:iii} $\ensuremath{\operatorname{Id}}+\lambda \pars f$ is maximally monotone. 
\item \label{prop:hypo:func:vi} $(\forall \mu\in \left] 0,\lam\right[)$ $\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\lambda/(\lambda-\mu)$-Lipschitz continuous with \begin{equation} \label{eq:prox:res} \ensuremath{\operatorname{Prox}}_{\mu f} =J_{\mu\pars f} =(\ensuremath{\operatorname{Id}}+\mu \pars f )^{-1}. \end{equation} \item \label{prop:hypo:func:v} $(\forall \mu\in \left] 0,\lam\right[)$ $\ensuremath{\operatorname{Prox}}_{\mu f}$ is single-valued and continuous. \end{enumerate} \end{prop} \begin{proof} ``\ref{prop:hypo:func:i}$\Leftrightarrow$\ref{prop:hypo:func:ii}": Simple algebraic manipulations. ``\ref{prop:hypo:func:ii}$\Rightarrow$\ref{prop:hypo:func:iii}": As $$\partial\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big)= \pars\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big)=\pars f +\frac{1}{\lambda}\ensuremath{\operatorname{Id}}$$ is maximally monotone, $\ensuremath{\operatorname{Id}}+\lambda\pars f$ is maximally monotone. ``\ref{prop:hypo:func:iii}$\Rightarrow$\ref{prop:hypo:func:vi}": By Definition~\ref{def:abst}\ref{i:p2} and \ref{i:p3}, $y\in \ensuremath{\operatorname{Prox}}_{\mu f}(x)$ implies that $$0\in\pars\Big(f(y)+\frac{1}{2\mu}\|y-x\|^2\Big)=\pars f(y)+\frac{1}{\mu}(y-x).$$ Thus, one has \begin{equation}\label{e:prox:abst} (\forall x\in X)\ \ensuremath{\operatorname{Prox}}_{\mu f}(x)\subseteq (\ensuremath{\operatorname{Id}}+\mu \pars f)^{-1}(x). \end{equation} Using $$\ensuremath{\operatorname{Id}}+\mu\pars f=\frac{\lambda-\mu}{\lambda}\Big(\ensuremath{\operatorname{Id}}+\frac{\mu}{\lambda-\mu}(\ensuremath{\operatorname{Id}}+\lambda\pars f)\Big)$$ yields $$(\ensuremath{\operatorname{Id}}+\mu\pars f)^{-1}=J_{A}\circ\Big(\frac{\lambda}{\lambda-\mu}\ensuremath{\operatorname{Id}}\Big),$$ where $A=\frac{\mu}{\lambda-\mu}(\ensuremath{\operatorname{Id}}+\lambda\pars f)$ is maximally monotone by the assumption. Since $J_{A}$ is nonexpansive on $X$, $(\ensuremath{\operatorname{Id}}+\mu\pars f)^{-1}$ is $\lambda/(\lambda-\mu)$-Lipschitz.
Together with \eqref{e:prox:abst}, we obtain $\ensuremath{\operatorname{Prox}}_{\mu f}=(\ensuremath{\operatorname{Id}}+\mu\pars f)^{-1}.$ ``\ref{prop:hypo:func:vi}$\Rightarrow$\ref{prop:hypo:func:v}": Clear. ``\ref{prop:hypo:func:v}$\Rightarrow$\ref{prop:hypo:func:ii}": Let $x\in X$ and let $\mu\in \left]0, \lambda\right[$. We have \begin{equation} \label{e:conj} e_{\mu}f(x)=\frac{1}{2\mu}\|x\|^2-\Big(f+\frac{1}{2\mu}\|\cdot\|^2\Big)^*\Big(\frac{x}{\mu}\Big), \end{equation} and $e_{\mu}f$ is locally Lipschitz, see, e.g., \cite[Proposition 3.3(b)]{Jourani14}. By \cite[Proposition 5.1]{Bernard05}, \ref{prop:hypo:func:v} implies that $e_{\mu}f$ is Fr\'echet differentiable with $\nabla e_{\mu}f=\mu^{-1}(\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{Prox}}_{\mu f})$. Then $\big(f+\frac{1}{2\mu}\|\cdot\|^2\big)^*$ is Fr\'echet differentiable by \eqref{e:conj}. It follows from \cite[Theorem 1]{Stromberg11} that $ f+\frac{1}{2\mu}\|\cdot\|^2$ is convex. Since this holds for every $\mu\in \left]0,\lambda\right[$, letting $\mu\uparrow\lambda$ yields \ref{prop:hypo:func:ii}. \end{proof} We now provide a new refined characterization of hypoconvex functions in terms of the cocoercivity of their proximal operators; equivalently, of the conical nonexpansiveness of the displacement mapping of their proximal operators. \begin{theorem} \label{prop:hypocon:av} Let $\mu\in \left] 0,\lambda\right[$. Then the following are equivalent. \begin{enumerate} \item \label{prop:hypocon:av:i} $f$ is $\tfrac{1}{\lambda}$-hypoconvex. \item \label{prop:hypocon:av:ii} $\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}. \item \label{prop:hypocon:av:iii} $\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\tfrac{\lambda-\mu}{\lambda}$-cocoercive.
\end{enumerate} \end{theorem} \begin{proof} ``\ref{prop:hypocon:av:i}$\siff$\ref{prop:hypocon:av:ii}": Using $0<\tfrac{\mu}{\lambda}<1$ we have \begin{align*} &\qquad\text{$f$ is $\tfrac{1}{\lambda}$-hypoconvex} \\ &\siff \text{$\ensuremath{\operatorname{Id}}+\lambda \pars f$ is maximally monotone} \tag{\text{by \cref{prop:hypo:func}}} \\ &\siff \text{$\tfrac{\mu}{\lambda}\ensuremath{\operatorname{Id}}+ \mu\pars f$ is maximally monotone} \\ &\siff \text{$\mu\pars f$ is maximally $\bigl(-\tfrac{\mu}{\lambda}\bigr)$-monotone} \\ &\siff \text{$(\mu\pars f)^{-1}$ is maximally $\bigl(-\tfrac{\mu}{\lambda}\bigr)$-comonotone} \tag{\text{by \cref{lem:bmax:inv}}} \\ &\siff \text{$J_{(\mu\pars f)^{-1}}$ } \text{is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}} \tag{\text{by \cref{cor:eq:nexp:-0.3}\ref{cor:eq:nexp:-0.3:0}}} \\ &\siff \text{$ \ensuremath{\operatorname{Id}} - J_{\mu\pars f}$ } \text{is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}} \tag{\text{by \cref{prop:gen:min}\ref{prop:gen:min:i} }} \\ &\siff \text{$\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{Prox}}_{\mu f} $ is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}} \tag{\text{by \cref{eq:prox:res}}}. \end{align*} ``\ref{prop:hypocon:av:ii}$\siff$\ref{prop:hypocon:av:iii}": Use \cref{lem:coco:conc}. \end{proof} \begin{corollary} \label{cor:Lips:grad} Suppose that $f\colon X\to \ensuremath{\mathbb R}$ is Fr\'echet differentiable such that $\grad f$ is Lipschitz with a constant $1/\lambda$. Then the following hold: \begin{enumerate} \item \label{cor:Lips:grad:i} $\ensuremath{\operatorname{Id}} +\lambda \grad f$ is maximally monotone. \item \label{cor:Lips:grad:ii} $f$ is $\tfrac{1}{\lambda}$-hypoconvex. \item \label{cor:Lips:grad:iii} $f+\tfrac{1}{2\lambda} \normsq{\cdot}$ is convex. \item \label{cor:Lips:grad:iv} $(\forall \mu\in \left] 0,\lambda \right[)$ $\ensuremath{\operatorname{Prox}}_{\mu f}$ is single-valued. 
\item \label{cor:Lips:grad:v} $(\forall \mu\in \left] 0,\lambda \right[)$ $\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\tfrac{\lambda-\mu}{\lambda}$-cocoercive. \item \label{cor:Lips:grad:vi} $(\forall \mu\in \left] 0,\lambda \right[)$ $ \ensuremath{\operatorname{Prox}}_{\mu f} =J_{\mu\pars f} =(\ensuremath{\operatorname{Id}}+\mu \grad f )^{-1}. $ \item \label{cor:Lips:grad:vii} $(\forall \mu\in \left] 0,\lambda \right[)$ $\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}. \end{enumerate} \end{corollary} \begin{proof} Definition~\ref{def:abst}\ref{i:p1.5} implies that $(\forall x\in X)$ $\pars f(x)=\{\grad f(x)\}$. \ref{cor:Lips:grad:i}: Indeed, $\lambda \grad f$ is nonexpansive. Now the conclusion follows from \cite[Example~20.29]{BC2017}. \ref{cor:Lips:grad:ii}--\ref{cor:Lips:grad:vii}: Combine \ref{cor:Lips:grad:i} with \cref{prop:hypo:func} and \cref{prop:hypocon:av}. \end{proof} Finally, we give two examples to illustrate our results. \begin{example} Suppose that $X=\ensuremath{\mathbb R}$. For every $\lambda>0$, set $f_\lambda \colon x\mapsto \exp(x)-\tfrac{1}{2\lambda}x^2$. Then $f_\lambda$ is $\tfrac{1}{\lambda}$-hypoconvex by \cref{prop:hypo:func}, $f'_\lambda\colon x\mapsto \exp(x) -\tfrac{x}{\lambda}$, and $\ensuremath{\operatorname{Id}}+\lambda f'_\lambda=\lambda \exp$ is maximally monotone.
Moreover, for every $\mu\in \left]0,\lambda\right ]$ we have \begin{subequations} \begin{align} \ensuremath{\operatorname{Prox}}_{\mu f_\lambda}(x) &=\bigl( \ensuremath{\operatorname{Id}}+\mu f'_\lambda\bigr)^{-1}(x) =\bigl((1-\tfrac{\mu}{\lambda})\ensuremath{\operatorname{Id}}+\mu\exp\bigr)^{-1}(x) \label{se:1} \\ &=\begin{cases} \ln\bigl(\tfrac{x}{\mu}\bigr) , &\text{if~~}\mu=\lambda;\\ \tfrac{\lambda x}{\lambda-\mu} -W \bigl(\tfrac{\lambda\mu\exp(\lambda x/(\lambda-\mu ))}{\lambda-\mu }\bigr), &\text{if $\mu\in \left]0,\lambda\right[$}, \end{cases} \end{align} \end{subequations} where $W$ denotes the principal branch of the Lambert $W$ function and the first identity in \cref{se:1} follows from \cref{cor:Lips:grad}\ref{cor:Lips:grad:vi}. \end{example} \begin{example} Let $D$ be a nonempty closed convex subset of $X$ and, for every $\lambda>0$, set $f_\lambda = \iota_D -\tfrac{1}{2\lambda}\norm{\cdot}^2$. Then $f_\lambda$ is $\tfrac{1}{\lambda}$-hypoconvex by \cref{prop:hypo:func}, and $\pars f_\lambda =N_D-\tfrac{1}{\lambda}\ensuremath{\operatorname{Id}}$ by Proposition~\ref{p:sub:for}. Moreover, for every $\lambda>0$, we have that $\ensuremath{\operatorname{Id}}+\lambda \pars f_\lambda=N_D$ is maximally monotone. Finally, using \cref{eq:prox:res} and \cite[Example~23.4]{BC2017} we have, for every $\mu\in \left]0,\lambda\right [$, \begin{subequations} \begin{align} \ensuremath{\operatorname{Prox}}_{\mu f_\lambda} &=\bigl( \ensuremath{\operatorname{Id}}+\mu \pars f_\lambda\bigr)^{-1} =\bigl((1-\tfrac{\mu}{\lambda})\ensuremath{\operatorname{Id}}+\mu N_D\bigr)^{-1}\\ &=\bigl((1-\tfrac{\mu}{\lambda})(\ensuremath{\operatorname{Id}}+ N_D)\bigr)^{-1} =P_D\circ \bigl(\tfrac{\lambda}{\lambda-\mu}\ensuremath{\operatorname{Id}}\bigr). \end{align} \end{subequations} In particular, if $D$ is a closed convex cone, we learn that $ \ensuremath{\operatorname{Prox}}_{\mu f_\lambda}=\tfrac{\lambda}{\lambda-\mu}P_D$.
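As a quick numerical sanity check, not part of the original text, both closed-form prox formulas above can be compared against a brute-force grid minimization of $y\mapsto f_\lambda(y)+\frac{1}{2\mu}|y-x|^2$ on $X=\ensuremath{\mathbb R}$. The sketch below uses a hand-rolled Newton iteration for the principal branch of the Lambert $W$ function and specializes the second example to the (freely chosen) set $D=[0,1]$:

```python
import math

lam, mu, x = 2.0, 0.5, 0.7

def lambert_w(z):
    # principal branch: solve w * exp(w) = z for z > 0 by Newton's method
    w = math.log(1.0 + z)                        # rough initial guess
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

def grid_argmin(obj, lo, hi, n=200_001):
    # minimizer of obj on [lo, hi], located on a uniform grid of n points
    step = (hi - lo) / (n - 1)
    best = min(range(n), key=lambda i: obj(lo + i * step))
    return lo + best * step

# Example 1: f(y) = exp(y) - y^2/(2*lam); compare grid prox to the W-formula
prox_grid = grid_argmin(
    lambda y: math.exp(y) - y * y / (2 * lam) + (y - x) ** 2 / (2 * mu),
    -5.0, 5.0)
prox_W = lam * x / (lam - mu) - lambert_w(
    lam * mu * math.exp(lam * x / (lam - mu)) / (lam - mu))
assert abs(prox_grid - prox_W) < 1e-4

# Example 2: f(y) = iota_D(y) - y^2/(2*lam) with D = [0, 1]
prox_grid2 = grid_argmin(
    lambda y: -y * y / (2 * lam) + (y - x) ** 2 / (2 * mu), 0.0, 1.0)
prox_proj = min(max(lam * x / (lam - mu), 0.0), 1.0)  # P_D(lam * x/(lam - mu))
assert abs(prox_grid2 - prox_proj) < 1e-4
```

For the tested parameters $(\lambda,\mu,x)=(2,\tfrac12,0.7)$, the grid minimizer agrees with both closed forms to within the grid resolution.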
\end{example} \section*{Acknowledgment} HHB, WMM, and XW were partially supported by the Natural Sciences and Engineering Research Council of Canada. \small \end{document}
\begin{document} \pagestyle{plain} \title {The Morava $K$-theory of $BO(q)$ and $MO(q)$} \author{Nitu Kitchloo} \author{W. Stephen Wilson} \address{Department of Mathematics, Johns Hopkins University, Baltimore, USA} \email{[email protected]} \email{[email protected]} \thanks{Nitu Kitchloo is supported in part by NSF through grant DMS 1307875.} \date{\today} {\abstract We give an easy proof that the Morava $K$-theories for $BO(q)$ and $MO(q)$ are in even degrees. Although this is a known result, it had followed from a difficult proof that $BP^*(BO(q))$ was Landweber flat. Landweber flatness follows from the even Morava $K$-theory. We go further and compute an explicit description of $K(n)_*(BO(q))$ and $K(n)_*(MO(q))$ and reconcile it with the purely algebraic construct from Landweber flatness. } \maketitle \section{Introduction} We are concerned with the (co)homology theory, Morava $K$-theory, $K(n)^*(-)$, where $K(n)_* = \mathbb Z/2 [v_n^{\pm 1}]$ with the degree of $v_n$ equal to $2(2^n - 1 ) $ (we are only concerned with $p=2$). What brought us to the problem of computing the Morava $K$-theories of the spaces $BO(q)$ was a real need to have $BP^*(BO(q))$ be Landweber flat (in the sense of \cite{Land:Hom}) for \cite{Kitch-Wil-BO}. $BP^*(BO(q))$ had been computed in \cite{WSW:BO} and was shown to be Landweber flat in \cite{KY}, with some seriously complex computations. Kono and Yagita went on to show that $K(n)^*(BO(q))$ was concentrated in even degrees because $BP^*(BO(q))$ was. The computation in \cite{KY} does not give an explicit answer to what $K(n)^*(BO(q))$ is, only that it is even degree. If it is known that $K(n)^*(BO(q))$ is even degree for all $n$, then the results of \cite{RWY} show that $BP^*(BO(q))$ is Landweber flat, without having to compute it. We present here an easy proof that $K(n)_*(BO(q))$ is even degree and then go further and give a basis. Duality for Morava $K$-theory is straightforward, so $K(n)^*(BO(q))$ is also even degree.
\begin{thm}[\cite{KY}] \label{even} \ \begin{enumerate}[(i)] \item $K(n)^*(BO(q))$ and $K(n)^*(MO(q))$ are even degree for all $n$. \item $BP^*(BO(q))$ is Landweber flat. \end{enumerate} \end{thm} As mentioned, $(ii)$ follows directly from $(i)$ using \cite{RWY} but Kono and Yagita prove $(ii)$ first and then $(i)$. $(i)$ will be proven in Section 3. We work with the homology version of the theories and have: \begin{thm} \label{detail} \ \begin{enumerate}[(i)] \item There are elements $b_{2i} \in K(n)_{2i}(BO(1))$ for $0 < i < 2^n$ coming from $K(n)_{2i}(RP^\infty)$. \item There are elements $c_{4i} \in K(n)_{4i}(BO(2))$ for $2^n \le i $. \item Using products from the standard maps $BO(i)\times BO(j) \rightarrow BO(i+j)$, a basis for the reduced homology, $\widetilde{K(n)}_*(BO(q))$, is: $$ \{b_{2i_1}b_{2i_2}\ldots b_{2i_k} c_{4j_1}c_{4j_2}\ldots c_{4j_m}\} \quad 0 < k+2m \le q. $$ $$ 0 < i_1 \le i_2 \le \ldots \le i_k < 2^n \le j_1 \le j_2 \le \ldots \le j_m $$ \item $\widetilde{K(n)}_*(MO(q))$ is as above with $k+2m = q$. \end{enumerate} \end{thm} In \cite{WSW:BO}, it was shown that \begin{equation} \label{alg} BP^*(BO(q)) \simeq BP^*[[c_1,c_2,\ldots,c_q]]/(c_1 - c_1^*, c_2 - c_2^*, \ldots,c_q - c_q^*), \end{equation} where $c_j$ is the Conner-Floyd Chern class and $c_j^*$ is its complex conjugate. In \cite{KY}, Kono and Yagita show that $BP^*(BO(q))$ is Landweber flat and that \begin{equation} K(n)^*(BO(q)) \simeq K(n)^* \widehat{\otimes}_{BP^*} BP^*(BO(q)). \end{equation} This shows that the Morava $K$-theory is even degree. We have computed Morava $K$-theory directly to show it is even degree, so the results of \cite{RWY} also give us Landweber flatness for $BP^*(BO(q))$. Either approach gives us: \begin{equation} \label{cohomology} K(n)^*(BO(q)) \simeq K(n)^*[[c_1,c_2,\ldots,c_q]]/(c_1 - c_1^*, c_2 - c_2^*, \ldots,c_q - c_q^*).
\end{equation} This is a purely algebraic construct that looks nothing like the answer given in this paper. In Section 5 we reconcile it with our direct computation of $K(n)_*(BO(q))$ by finding a basis for it that is consistent with what we find for $K(n)_*(BO(q))$. We review some facts about the standard homology of $BO(q)$ in Section 2 and prove the details of Theorem \ref{detail} in Section 4. \section{The standard homology of $BO(q)$ and $MO(q)$} We begin with some review of basic facts about the homology of $BO$ and $BO(n)$. All of our (co)homology will be with $\mathbb Z/2$ coefficients. We start with elements $$ b_i \in \tilde{H}_i(RP^\infty = BO(1)) \quad i > 0. $$ We have $$ \tilde{H}_*(BO(1)) = \mathbb Z/2\{b_i:i > 0\} $$ and maps $$ BO(1) \rightarrow \cdots \rightarrow BO(q-1) \rightarrow BO(q) \rightarrow \cdots \rightarrow BO. $$ The images of the above $b_i$ in $H_*(BO)$ give us the well-known homology of $BO$ as a polynomial algebra: $$ H_*(BO) = \mathbb Z/2[b_1,b_2,\ldots]. $$ We also have the usual maps \begin{equation} \label{product} BO(q) \times BO(k) \longrightarrow BO(q+k). \end{equation} For homology we only need $$ \prod^q BO(1) \longrightarrow BO(q). $$ Because $b_i b_j = b_j b_i$, we have elements: $$ b_{i_1} b_{i_2} \cdots b_{i_k} \in \tilde{H}_*(BO(q)) \quad \mbox{for} \quad 0 < k \le q \quad \mbox{and} \quad 0 < i_1 \le i_2 \le \cdots \le i_k. $$ These elements form a basis for the reduced homology of $BO(q)$. As an aside, if that is not commonly understood, we can quickly use the better known cohomology of $BO(q)$ to see that the size is right. We have $$ H^*(BO(q)) = \mathbb Z/2[w_1,w_2,\ldots,w_q], $$ a polynomial algebra on the Stiefel-Whitney classes. If, by induction, we know $H_*(BO(q-1))$, all we have to do to see the size is right is show that the elements with $k = q$ above are in one-to-one correspondence with the ideal generated by $w_q \in H^*(BO(q))$.
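As an independent count, not in the original, one can check for small $q$ and degrees $d$ that the monomials $b_{i_1}b_{i_2}\cdots b_{i_q}$ with $0<i_1\le \cdots \le i_q$ and $\sum_j i_j = d$ are equinumerous with the degree-$d$ monomials in the ideal generated by $w_q$ in $\mathbb Z/2[w_1,\ldots,w_q]$ (where $\deg w_k = k$); a plain-Python enumeration:

```python
from itertools import combinations_with_replacement

def b_monomials(q, d):
    # tuples 0 < i_1 <= ... <= i_q with i_1 + ... + i_q = d
    return sum(1 for t in combinations_with_replacement(range(1, d + 1), q)
               if sum(t) == d)

def ideal_wq(q, d):
    # degree-d monomials w_1^{a_1}...w_q^{a_q} with a_q >= 1 (deg w_k = k):
    # factor out one w_q and count all monomials of degree d - q
    def count(k, rem):
        if k == 0:
            return 1 if rem == 0 else 0
        return sum(count(k - 1, rem - k * a) for a in range(rem // k + 1))
    return count(q, d - q)

for q in range(1, 5):
    for d in range(q, 15):
        assert b_monomials(q, d) == ideal_wq(q, d)
```

Both counts are the number of partitions of $d$ into exactly $q$ parts (conjugation matches the second count, partitions of $d-q$ into parts of size at most $q$, with the first).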
That correspondence is easily given by $$ 0 < i_1 \le i_2 \le \cdots \le i_q \quad \mbox{ goes to } \quad w_q^{i_1} w_{q-1}^{i_2 - i_1} w_{q-2}^{i_3-i_2} \cdots w_1^{i_q - i_{q-1}}. $$ The Steenrod algebra operates on the mod 2 homology of $BO$ and $BO(q)$. As an element of the Steenrod algebra operates on an element $ b_{i_1} b_{i_2} \cdots b_{i_k} $, it does not alter the number of $b$'s, so we can define: $$ M_q = \mathbb Z/2\{ b_{i_1} b_{i_2} \cdots b_{i_q} \} \quad \mbox{for} \quad 0 < i_1 \le i_2 \le \cdots \le i_q $$ and we get the reduced homology \begin{equation} \label{equ1} \tilde{H}_*(BO(q)) = \bigoplus_{j=1}^q M_j \end{equation} and \begin{equation} \label{equ2} \tilde{H}_*(BO) = \bigoplus_{j=1}^\infty M_j \end{equation} as modules over the Steenrod algebra. From \cite{MitPrid} we know that stably $BO(q) \simeq \Wedge_{1\le i \le q} MO(i)$, so, stably, $BO(q) \simeq BO(q-1) \Wedge MO(q)$. From this we see that $M_q = H_*(MO(q))$. \section{The Morava $K$-theories of $BO(q)$ and $MO(q)$ are even} The first differential in the Atiyah-Hirzebruch spectral sequence (AHSS), $H_*(X;K(n)_*)$, is just the Milnor primitive, $Q_n$, which is easy to evaluate in $H_*(BO(1))$ as it just takes $b_{2k}$ to $b_{2k+1-2^{n+1}}$, as long as $2k > 2^{n+1}-1$. \begin{remark} After the first differential, the AHSS collapses for $K(n)_*(BO(1))$ because the AHSS is even degree. The reduced homology is $K(n)_*$ free on $\{b_2,b_4, \ldots, b_{2^{n+1}-2} \}$. \end{remark} \begin{remark} More interesting is that after the first differential for $BO$ we are also done, with the polynomial result, from the AHSS: $$ K(n)_*(BO) \simeq K(n)_*[b_2,b_4,\ldots,b_{2^{n+1}-2}]\otimes K(n)_*[b_{2i}^2 : i \ge 2^n ], $$ which was done in \cite{RWY}. The differential, or as we prefer to say, the $Q_n$ homology, is computed by pairing up what is missing above as $$ P(b_{2i+1})\otimes E(b_{2i+2^{n+1}}). $$ Each of these has trivial $Q_n$ homology.
This collapses after this first differential because it is even degree. Since $b_{2i}$, $i \ge 2^n$, is not an element, the notation is misleading. Later, we will give this generator the name $c_{4i}$. The element exists in $k(n)_*(BO)$ and reduces to $b_{2i}^2$ in $H_*(BO)$. \end{remark} \begin{proof}[Proof of Theorem \ref{even}] Now we know that the first differential of the AHSS is all it takes to get $K(n)_*(BO)$ and see that it is all in even degrees. The first differential is just an operation from the Steenrod algebra, $Q_n$. By Equation \eqref{equ2}, we must have the $Q_n$ homology of each $M_j$ in even degrees. From this we see that $K(n)_*(BO(q))$ and $K(n)_*(MO(q))$ must be in even degrees, and by standard Morava $K$-theory duality, $K(n)^*(BO(q))$ is in even degrees. This completes the proof of Theorem \ref{even}. \end{proof} \section{The details of the Morava $K$-theories of $BO(q)$ and $MO(q)$ } All of the homology of $BO(q)$ came from products of elements from $BO(1)$. For Morava $K$-theory we have to use elements from $BO(2)$ as well. Two kinds of elements in $K(n)_*(BO(2))$ come from $K(n)_*(BO(1))$. First we have the image coming from the map $BO(1) \rightarrow BO(2)$, i.e. $K(n)_*\{b_2,b_4, \ldots, b_{2^{n+1}-2} \}$. Our second kind comes from the product, $BO(1)\times BO(1) \rightarrow BO(2)$, which gives: $$ K(n)_*\{b_{2i_1}b_{2i_2}\} \quad 0 < i_1 \le i_2 < 2^n. $$ There are more elements that come from $M_2$ in $K(n)_*(BO(2))$. In particular, from the computation of $K(n)_*(BO)$ we know that all $b_{2j}^2$ survive. These elements live in $M_2$ so actually survive to $K(n)_*(BO(2))$. Consequently, between $K(n)_*(BO(1))$ and $K(n)_*(BO(2))$, we have all the multiplicative generators of $K(n)_*(BO)$. We easily see which $M_q$ these multiple products live in by the number of $b$'s. We can now pretty much read off the description of a basis for $K(n)_*(BO(q))$.
To make the description a little easier to read, we can consider the part that comes from $M_q$ and call it $M_q^K = K(n)_*(MO(q))$. Then we have: $$ K(n)_*(BO(q)) \simeq K(n)_*(BO(q-1)) \bigoplus K(n)_*(MO(q)). $$ We are not using the splitting from \cite{MitPrid} to compute $K(n)_*(BO(q))$, only to compute $K(n)_*(MO(q))$. Let's give new names to the elements in $K(n)_*(BO(2))$ represented by $b_{2j}^2$ so we won't have the non-existent product hanging around. Let's set $c_{4j} = b_{2j}^2$ for $j \ge 2^n$. We can now give an explicit description of $M_q^K = K(n)_*(MO(q))$. $$ M_q^K \simeq K(n)_*\{b_{2i_1}b_{2i_2}\ldots b_{2i_k} c_{4j_1}c_{4j_2}\ldots c_{4j_m}\} \quad k+2m=q. $$ $$ 0 < i_1 \le i_2 \le \ldots \le i_k < 2^n \le j_1 \le j_2 \le \ldots \le j_m $$ This completes the proof of Theorem \ref{detail}. There is still one bit of unaccounted-for structure that we should mention. Although $K(n)_*(BO(q))$ is not an algebra, it is a coalgebra. The coalgebra structure for the $b$'s comes from $BO(1)$, so, for $p < 2^n$, we get $$ \psi(b_{2p}) = \sum_{i+j=p} b_{2i} \otimes b_{2j}. $$ The $c_{4j}$ are written in terms of the $b$'s in the AHSS, so we also know their coproduct modulo $(v_n)$. It would just be $$ \psi(c_{4p}) = \psi(b_{2p}^2) = \sum_{i+j=p} b_{2i}^2 \otimes b_{2j}^2 \qquad \mod (v_n). $$ If $i \ge 2^n$, replace $b_{2i}^2$ with $ c_{4i}$. Do the same with $j$. We can work modulo $(v_n)$ because this single differential also computes $k(n)_*(BO(q))$ where we only have non-negative powers of $v_n$. We know that $K(n)_*(BO) \subset K(n)_*(BU)$. In \cite{KLW}, there are elements of $K(n)_*(BU)$ named $z_q$ that are our $c_{4(2^n+q)}$. In \cite[Theorem 3.14]{KLW}, the $z_q$ are computed in terms of $K(n)_*(BU)$ modulo $(v_n^2)$, and their complexity, and consequently the complexity of the coproduct, shows up here already. This is to be expected given the complexity of the dual algebra structure from Equation \eqref{cohomology}.
\section{Reconciliation} The map $BO(q) \rightarrow BU(q)$ automatically gives a map of the algebraic construct on the right side of Equation \eqref{alg} to $BP^*(BO(q))$. The work of \cite{WSW:BO} first involves showing the map is surjective, which is done with the Adams spectral sequence. To show injectivity, the algebraic construct is analyzed. We can use that analysis here to show what we want. We have to establish some notation first. We have $BP^*(CP^\infty) \simeq BP^*[[x]]$, $x \in BP^2(CP^\infty)$ and $$ \xymatrix{ BP^*(\prod^q CP^\infty) & \simeq & BP^*[[x_1,x_2,\ldots,x_q]] \\ \cup & & \cup \\ BP^*(BU(q)) & \simeq & BP^*[[c_1,c_2,\ldots,c_q]] } $$ The inclusion is given by all of the symmetric functions, which are generated by the elementary symmetric functions given by the $c_k$. For $I = (i_1,\ldots,i_q)$, let $x^I = x_1^{i_1} \ldots x_q^{i_q}$. Two monomials are equivalent if some permutation of the $x_i$ takes one to the other. Define the symmetric function $$ s_I = \sum x^I $$ where the sum goes over all monomials equivalent to $x^I$. The elementary symmetric function is $c_k = \sum x_1 \ldots x_k$. Theorem 1.30 of \cite[page 358]{WSW:BO} computes $c_k^*$ for $BP$ as $$ c_k^* = (-1)^k c_k + \sum_{i > 0} v_i s_{2^i,\underbrace{1,1,\ldots,1}_{k-1}} \quad \mod J^2 $$ where $J=(2,v_1,v_2,\ldots)$. We know that the generators of $BP^*(BO(q))$ all map non-trivially to the cohomology $H^*(BO(q))$. As a result, we can look at this relation using only the coefficients of $k(n)^* = \mathbb Z/2[v_n]$ and consider the relation modulo $(v_n^2)$. Inductively, the only relation we need is $k=q$. This reduces to $$ c_q - c_q^* = v_n s_{2^n,\underbrace{1,1,\ldots,1}_{q-1}} \qquad \mod (v_n^2). $$ Note that for $BU(q)$, our relation is divisible by $c_q = x_1 \ldots x_q$, i.e. $$ s_{2^n,1,1,\ldots,1} = c_q s_{2^n-1}. 
$$ Because $K(n)^*(BU(q))\simeq K(n)^* \widehat{\otimes} BP^*(BU(q))$, we can be quite sloppy with our powers of $v_n$ because we are going to invert $v_n$ to get our algebraic description in the end. The degree of $v_n$ is negative, so the more powers of $v_n$, the higher the degree of the symmetric function. The following theorem will reconcile our two different descriptions of $K(n)^*(BO(q))$. \begin{thm} A basis for $ K(n)^*[[c_1,\ldots,c_q]]/(c_1 - c_1^*, \ldots,c_q - c_q^*)$ in terms of symmetric functions is given by $$ s_{IJ} = \sum x_1^{i_1}\ldots x_m^{i_m} x_{m+1}^{j_1}\ldots x_{m+p}^{j_p} $$ where $0 < i_1 < \ldots < i_m < 2^n$ and $0 \le j_1 \le \ldots \le j_p$ with $j_{2i-1} = j_{2i}$. \end{thm} \begin{remark} The definition forces $p$ to be even. If we drop the $i_m < 2^n$ condition, any $s_K$ can be written in this form. First, just find all the pairs of equal exponents and create $J$. Finding $I$ is easy after that. \end{remark} \begin{remark} All we do in our proof is reduce arbitrary elements to those in our theorem. Because we know $K(n)_*(BO(q))$, we know that there can be no further reduction, so this is a basis. This does reconcile the two descriptions though. \end{remark} \begin{proof} The proof is by double induction. First, it is by induction on $q$. This is easy to start with $q=1$ where the result is well known and straightforward, but worth talking about anyway as it illustrates things to come in the proof. The relation in $k(n)^*(BU(1))$ that gives $k(n)^*(BO(1))$ and then $K(n)^*(BO(1))$ is just $0 = c_1 - c_1^* = v_n s_{2^n} = v_n x^{2^n}$ modulo $(v_n^2)$. The induction is on the degree of the symmetric function, which in this case is just powers of $x$. Inverting $v_n$, we see that $x^{2^n}$ is zero modulo higher powers of $x$. For any $s_{2^n+k} = x^{2^n+k}$, we have $$ 0 = s_{2^n} s_k = s_{2^n + k} \mod \text{ higher powers of $x$}. $$ That is, each $s_{2^n+k}$ is zero modulo higher degree symmetric products. 
By induction on the degree of the symmetric product (i.e. induction on $k$) we push the relation to higher and higher degrees. In the topology on $K(n)^*(BU(1)) \simeq K(n)^*[[x]]$, this converges to zero, and so each $s_{2^n+k}$, $k \ge 0$, is really zero. We remind the reader that our relation isn't really $s_{2^n,1,\ldots,1}=0$ modulo higher degree symmetric functions. The relation has a $v_n$ in front. Since our relation really is in $k(n)^*(-)$ because it comes from $BP^*(-)$, all powers of $v_n$ are positive. Since we are going to invert $v_n$ in the end in order to get $K(n)^*(-)$, we can be quite loose with our $v_n$'s. The same thing will happen in the general, arbitrary $q$, case. However, for $q > 1$, there are non-trivial basis elements in high degrees, so this process doesn't have to go to zero in the limit, but could settle on a basis element. Either way, it works for our proof. From our induction on $q$, we assume the result for $q-1$. Stably, $BO(q) \simeq BO(q-1) \Wedge MO(q)$ from \cite{MitPrid} as well as $BU(q) \simeq BU(q-1) \Wedge MU(q)$. From \cite{WSW:BO}, we know that $BP^*(MO(q))$ is the ideal in $BP^*(BO(q))$ generated by $c_q$, and so the same is true for $K(n)^*(BO(q))$. Of course, the same is true for $BP^*(MU(q))$, $BP^*(BU(q))$, and $K(n)^*(BU(q))$. Consequently, we can focus our attention on the symmetric functions divisible by $c_q$ when there are only $q$ variables. We know that $H^*(BU(q))$ is free on the symmetric functions $s_I$ with $I= ( i_1,\ldots,i_q)$. If all $i_k > 0$, this is a basis for $H^*(MU(q))$ and if some are not greater than 0, they are part of the basis for $H^*(BU(q-1))$. This splitting is only additive, not multiplicative. Because there is no torsion, this is all true for $BP^*(-)$, $k(n)^*(-)$ and $K(n)^*(-)$ as well. Our next induction is on the degree of the symmetric functions. We will show that elements not of the form in our theorem are zero modulo higher degree elements. 
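As an aside, not part of the proof, the divisibility identity $s_{2^n,1,\ldots,1}=c_q\,s_{2^n-1}$ used above can be confirmed for small $q$ and $n$ by direct bookkeeping with exponent vectors; in the plain-Python sketch below, a symmetric function is represented as a dictionary from exponent tuples to coefficients:

```python
from itertools import permutations

def s(I, q):
    # monomial symmetric function s_I in q variables: one term for each
    # distinct permutation of the exponent vector I (padded with zeros)
    I = tuple(I) + (0,) * (q - len(I))
    return {t: 1 for t in set(permutations(I))}

def mul(p1, p2):
    # product of two polynomials in this sparse representation
    out = {}
    for e1, c1 in p1.items():
        for e2, c2 in p2.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return out

for q in (2, 3, 4):
    for n in (1, 2, 3):
        c_q = s((1,) * q, q)                    # elementary symmetric c_q
        lhs = s((2**n,) + (1,) * (q - 1), q)    # s_{2^n,1,...,1}
        rhs = mul(c_q, s((2**n - 1,), q))       # c_q * s_{2^n - 1}
        assert lhs == rhs
```

Each term of $c_q\,s_{2^n-1}$ is $x_i^{2^n}\prod_{j\ne i}x_j$ for some $i$, and these are exactly the monomials equivalent to $x_1^{2^n}x_2\cdots x_q$, so the assertion holds in every listed case.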
We know that $K(n)^*(BO(q))$ is $K(n)^*(BU(q))$ modulo the relations already described and that $K(n)^*(BU(q))$ is just given by the usual symmetric functions. To prove our result, we will not mod out our relations, but work with $BU(q)$ and just describe how the relations accomplish what we want. This will suffice for our purposes. We begin our induction by noticing that all elements in degrees less than the degree of $s_{2^n,1,\ldots,1} =c_q s_{2^n-1}$ are in our desired basis. The only element in the degree of $s_{2^n,1,\ldots,1}$ not in the basis is our relation element, which is zero modulo higher degree symmetric functions (ignoring the $v_n$ as discussed above). An arbitrary element not of the form in the theorem simply has $i_m \ge 2^n$ instead of $i_m < 2^n$. Having fixed a degree, we first consider the cases where $i_m = 2^n + k$, with $k > 0$. Since we are working with elements divisible by $c_q$, we can divide by $c_q$ to get a new symmetric function, $s_{I'J'}$, with each $i_s$ replaced by $i_s - 1$ and the same for the $j_s$. This symmetric function has $i_m' = 2^n +k -1$. Since $k > 0$, this is known to be zero modulo higher degrees by our induction on degree. Multiplying by $c_q$ to get our original symmetric function, we see it must be zero modulo higher degrees. Note that we are using our induction on $q$ here. If $i_1$ or $j_1$ (or both) are equal to $1$, then $s_{I'J'}$ is in $K(n)^*(BU(q-1))$ because it is not divisible by $c_q$. By our induction, we know the behavior of the relations here. In our fixed degree, we have eliminated all of the bad elements except those with $i_m = 2^n$. From such a symmetric function $s_{IJ}$, we create a new symmetric function $s_{I'J'}$ by eliminating the $x_m^{i_m}=x_m^{2^n}$ term and subtracting $1$ from all of the other $i_s$ and $j_s$. We want to analyze $$ s_{2^n,1,1,\ldots,1}s_{I'J'}. $$ Since $s_{2^n,1,1,\ldots,1}$ is zero modulo higher degrees, this product is too.
Multiplying symmetric functions can be tricky because the result can be a sum of symmetric functions. The easy one to deal with is when $i_1$ and $j_1$ are greater than one (recall that $m+p=q$). In this case, if your $x^{2^n}$ term is multiplied by any power of $x$, we are in the situation where our product has $x^{2^n + k}$, with $k >0$, and we have dealt with those terms already. The only thing left is to multiply the $x^{2^n}$ back into the place it was removed from and then all of the other exponents are raised by 1, giving us back our original $s_{IJ}$. Things are slightly more complicated if $i_1$ or $j_1$ is 1. (They must be at least 1 because everything is divisible by $c_q$.) Again, if our $x^{2^n}$ is multiplied by a non-zero power of $x$, we get $x^{2^n+k}$ and these terms have been handled already. Our $x^{2^n}$ must hit an $x^0$ term, but by the definition of symmetric functions, these are all equivalent, so the other $x_i$ all just have their exponent raised by 1 in our product and we get our $s_{IJ}$ back, showing it is zero modulo higher degrees. \end{proof} \end{document}
\begin{document} \begin{center} \today\\[10pt] {\Large\bf Co-elementary Equivalence, Co-elementary Maps, and Generalized Arcs} \\[20pt] Paul Bankston\\ Department of Mathematics, Statistics and Computer Science\\ Marquette University\\ Milwaukee, WI 53233\\[20pt] \end{center} \begin{abstract} By a {\bf generalized arc\/} we mean a continuum with exactly two non-separating points; an {\bf arc} is a metrizable generalized arc. It is well known that any two arcs are homeomorphic (to the closed unit interval of the real line); we show that any two generalized arcs are co-elementarily equivalent, and that co-elementary images of generalized arcs are generalized arcs. We also show that if $f:X \to Y$ is a function between compact Hausdorff spaces and if $X$ is an arc, then $f$ is a co-elementary map if and only if $Y$ is an arc and $f$ is a monotone continuous surjection. \end{abstract} \section{Introduction and Outline of Results.}\label{1} A {\bf generalized arc\/} is a continuum (i.e., a connected compact Hausdorff space) that has exactly two non-separating points; an {\bf arc\/} is a metrizable generalized arc. The class of generalized arcs is precisely the class of linearly orderable continua; each generalized arc admits exactly two compatible linear orders. The class of (continuous images of) generalized arcs has been extensively studied over the years (see \cite{HY,NTT,Wil}); the most well-known results in this area being that any two arcs are homeomorphic (to the standard closed unit interval on the real line); and (Hahn-Mazurkiewicz) that a Hausdorff space is a continuous image of an arc if and only if that space is a locally connected metrizable continuum. In this paper, a continuation of \cite{Ban3}, we study the model-theoretic topology of generalized arcs; in particular, the ``dualized model theory'' of these spaces.
Many notions from classical first-order model theory, principally elementary equivalence and elementary embedding, may be phrased in terms of mapping conditions involving the ultraproduct construction. Because of the (Keisler-Shelah) ultrapower theorem (see, e.g., \cite{CK}), two relational structures are elementarily equivalent if and only if some ultrapower of one is isomorphic to some ultrapower of the other; a function from one relational structure to another is an elementary embedding if and only if there is an ultrapower isomorphism so that the obvious square mapping diagram commutes (see, e.g., \cite{Ban2,Ban5,Ekl}). The ultrapower construction in turn is a direct limit of direct products, and is hence capable of being transferred into a purely category-theoretic setting. In this paper we focus on the category $\bf CH$ of compact Hausdorff spaces and continuous maps, but perform the transfer into the opposite category (thus justifying the phrase ``dualized model theory'' above). In $\bf CH$ one then constructs ultracoproducts, and talks of co-elementary equivalence and co-elementary maps. Co-elementary equivalence is known \cite{Ban2,Ban5,Gur} to preserve important properties of topological spaces, such as being infinite, being Boolean (i.e., totally disconnected), having (Lebesgue) covering dimension $n$, and being a decomposable continuum. If $f:X \to Y$ is a co-elementary map in $\bf CH$, then of course $X$ and $Y$ are co-elementarily equivalent (in symbols $X \equiv Y$). Moreover, since $f$ is a continuous surjection (see \cite{Ban2}), additional information about $X$ is transferred to $Y$. For instance, continuous surjections in $\bf CH$ cannot raise {\bf weight\/} (i.e., the smallest cardinality of a possible topological base, and for many reasons the right cardinal invariant to replace cardinality in the dualized model-theoretic setting), so metrizability (i.e., being of countable weight in the compact Hausdorff context) is preserved.
Also local connectedness is preserved, since continuous surjections in $\bf CH$ are quotient maps. Neither of these properties is an invariant of co-elementary equivalence alone. When attention is restricted to the full subcategory of Boolean spaces, the dualized model theory matches perfectly with the model theory of Boolean algebras because of Stone duality. In the larger category there is no such match \cite{Bana,Ros}, however, and one is forced to look for other (less direct) model-theoretic aids. Fortunately there is a finitely axiomatizable Horn class of bounded lattices, the so-called {\it normal disjunctive\/} lattices \cite{Ban8} (also called {\it Wallman\/} lattices in \cite{Ban5}), comprising precisely the (isomorphic copies of) lattices that serve as bases for the closed sets of compact Hausdorff spaces. We go from lattices to spaces, as in the case of Stone duality, via the {\bf maximal spectrum\/} $S(\;)$, pioneered by H. Wallman \cite{Walm}. $S(A)$ is the space of maximal proper filters of $A$; a typical basic closed set in $S(A)$ is the set of elements of $S(A)$ containing a given element of $A$. $S(\;)$ is contravariantly functorial; if $f:A \to B$ is a homomorphism of normal disjunctive lattices and $M \in S(B)$, then $f^S(M)$ is the unique maximal filter in $A$ containing the pre-image of $M$ under $f$. It is a fairly straightforward task to show, then, that $S(\;)$ converts ultraproducts to ultracoproducts, elementarily equivalent lattices to co-elementarily equivalent compact Hausdorff spaces, and elementary embeddings to co-elementary maps (see \cite{Ban2,Ban4,Ban5,Ban8,Gur}). An important consequence of this is a L\"{o}wenheim-Skolem theorem for co-elementary maps: every compact Hausdorff space maps co-elementarily onto a compact metrizable space. (This result is used in \ref{2.4} and \ref{2.6} below.) 
In \cite{Ban3} we showed that any locally connected metrizable space co-elementarily equivalent to an arc is already an arc; here we present the following results. $(i)$ if $f:X \to Y$ is a co-elementary map in $\bf CH$, and if $Y$ is locally connected (in particular, a generalized arc), then $f$ is a monotone continuous surjection; $(ii)$ co-elementary images of (generalized) arcs are (generalized) arcs; $(iii)$ any two generalized arcs are co-elementarily equivalent; $(iv)$ if $X$ is a generalized arc and $f:X \to Y$ is an irreducible co-elementary map in $\bf CH$, then $f$ is a homeomorphism; $(v)$ if every locally connected co-elementary pre-image of an arc is a generalized arc, then every locally connected compact Hausdorff space co-elementarily equivalent to a generalized arc is also a generalized arc; and $(vi)$ if $X$ is an arc and $f$ is a function from $X$ to a compact Hausdorff space $Y$, then $f$ is a co-elementary map if and only if $Y$ is an arc and $f$ is a monotone continuous surjection. Local connectedness is necessarily a part of $(v)$ above. We do not know at present whether the hypothesis in $(v)$ is true; nor do we know whether monotone surjections between generalized arcs are always co-elementary maps. \subsection{Remark.}\label{1.1} By way of contrast, there is a Boolean analogue to some of the results above. Define a {\bf generalized Cantor set\/} to be any non-empty Boolean space with no isolated points, and a {\bf Cantor set\/} to be a metrizable generalized Cantor set. It is well known that any two Cantor sets are homeomorphic (to the standard Cantor middle thirds set in the real line), and that the generalized Cantor sets are precisely the Stone duals of the atomless Boolean algebras, constituting an elementary class whose first-order theory is $\aleph_0$-categorical, complete, and model complete. In $(ii)$ and $(iii)$, one may replace ``arc'' with ``Cantor set'' uniformly; a straightforward application of $\aleph_0$-categoricity. 
The analog of $(iv)$ is false (see Example 3.3.4$(iv)$ in \cite{Ban2}); the projective cover map to a generalized Cantor set is always an irreducible co-elementary map between (seldom-homeomorphic) generalized Cantor sets. As for $(v)$, it follows from the results on dimension in \cite{Ban2} that any compact Hausdorff space co-elementarily equivalent to a generalized Cantor set is itself a generalized Cantor set. Finally, regarding $(vi)$, {\it all\/} continuous surjections between generalized Cantor sets are co-elementary maps. This is a direct consequence of the model completeness of the theory of atomless Boolean algebras.\\ \section{Methods and Proofs.}\label{2} We begin with a proof of $(i)$ above. Recall that a map $f:X \to Y$ is {\bf monotone\/} (resp. {\bf strongly monotone\/}) if the inverse image of a point (resp. a closed connected subset) of $Y$ is connected in $X$. \subsection{Proposition.}\label{2.1} Let $f:X \to Y$ be a co-elementary map in $\bf CH$, with $Y$ locally connected. Then $f$ is a strongly monotone continuous surjection.\\ \noindent {\bf Proof.\/} Assume $f:X \to Y$ is co-elementary, $Y$ is locally connected, and $f$ is not strongly monotone. Then there is a subcontinuum $S$ of $Y$ such that the inverse image $A := f^{-1}[S]$ is disconnected. Since $A$ is closed, we can write $A = A_1 \cup A_2$ where each $A_i$ is closed non-empty, and $A_1 \cap A_2 = \emptyset$. Let $U_i$ be an open neighborhood of $A_i$, with $U_1 \cap U_2 = \emptyset$. If $C$ is a subcontinuum of $X$ containing $A$, then we can pick some $x_C \in C \setminus (U_1 \cup U_2)$. Let $B$ be the closure of the set of all such points $x_C$, as $C$ ranges over all subcontinua containing $A$. Since no point $x_C$ lies in $U_1 \cup U_2$, $B$ is disjoint from $A$, but intersects every subcontinuum of $X$ that contains $A$. Now $f[B]$ is closed in $Y$ and disjoint from $S$. Let $W$ be an open neighborhood of $S$ whose closure misses $f[B]$. 
Since $Y$ is locally connected, we have, for each $y \in S$, a connected open neighborhood $V_y$ of $y$ such that $V_y \subseteq W$. Since $S$ is connected, so also is $V := \bigcup_{y \in S}V_y$; and the closure $K$ of $V$ is a subcontinuum containing $S$. Since $V \subseteq W$, and the closure of $W$ is disjoint from $f[B]$, we know that $K$ is also disjoint from $f[B]$. We need a fact proved elsewhere.\\ \noindent {\bf Lemma.\/}(Lemma 2.8 in \cite{Ban5}) Let $f:X \to Y$ be a co-elementary map in $\bf CH$, with $K \subseteq Y$ a subcontinuum. Then there is a subcontinuum $C \subseteq X$ such that $K = f[C]$, and whenever $V \subseteq K$ is open in $Y$, $f^{-1}[V] \subseteq C$.\\ Using the Lemma, there exists a subcontinuum $C \subseteq X$ such that $f[C] = K$ and $f^{-1}[V] \subseteq C$. Let $x \in A$. Then there is a neighborhood $U$ of $x$ with $f[U] \subseteq V$. Thus $x \in U \subseteq f^{-1}[V] \subseteq C$, hence we infer $A \subseteq C$. Every subcontinuum of $X$ containing $A$ must intersect $B$, so $\emptyset \neq f[B \cap C] \subseteq f[B] \cap f[C] = f[B] \cap K = \emptyset$. This contradiction completes the proof. $\dashv$ \subsection{Remark.}\label{2.2} The Lemma above provides only a weak consequence of co-elementarity. Indeed, the usual projection map from the standard closed unit square in the plane onto its first co\"{o}rdinate is not co-elementary because it does not preserve topological dimension. Nevertheless, it does satisfy the conclusion of the Lemma.\\ Now we are in a position to prove $(ii)$. \subsection{Proposition.}\label{2.3} Let $f:X \to Y$ be a co-elementary map in $\bf CH$. If $X$ is a generalized arc, then so is $Y$.\\ \noindent {\bf Proof.\/} Let $f:X \to Y$ be a co-elementary map in $\bf CH$, with $X$ a generalized arc. $Y$ is a locally connected continuum because $X$ is locally connected and $f$ is a continuous surjection. By \ref{2.1}, $f$ is monotone; it remains to show $Y$ has precisely two non-separating points. 
Let $a, b \in X$ be the two non-separating points of $X$. $Y$ is non-degenerate because of co-elementarity; monotonicity then tells us that $f(a) \neq f(b)$. If $f(a)$ were to separate $Y$, then $X \setminus K$ would be disconnected, where $K:= f^{-1}[\{f(a)\}]$ is a subcontinuum (i.e., closed subinterval) containing the endpoint $a$. This is easily seen to be impossible for generalized arcs. Now let $y \in Y \setminus \{f(a),f(b)\}$, with $K := f^{-1}[\{y\}]$. Then $K$ is a subcontinuum of $X$ containing neither endpoint. Thus $X \setminus K$ is disconnected; hence $y$ separates $Y$. We therefore conclude that $Y$ is a generalized arc. $\dashv$\\ We can very quickly settle $(iii)$. \subsection{Proposition.}\label{2.4} Let $X$ and $Y$ be two generalized arcs. Then $X \equiv Y$.\\ \noindent {\bf Proof.\/} Let $X$ and $Y$ be generalized arcs. By the L\"{o}wenheim-Skolem theorem for co-elementary maps, there exist co-elementary maps $f:X \to X_0$ and $g:Y \to Y_0$, where $X_0$ and $Y_0$ are compact metrizable. By \ref{2.3}, the images are generalized arcs; being metrizable, they are arcs. Thus $X_0$ and $Y_0$ are homeomorphic, and we conclude $X \equiv Y$ because co-elementary equivalence is an honest equivalence relation \cite{Ban2}. $\dashv$\\ To handle $(iv)$, recall that a continuous surjection $f:X \to Y$ is {\bf irreducible\/} if $Y$ is not the image under $f$ of a proper closed subset of $X$. \subsection{Proposition.}\label{2.5} Let $f:X \to Y$ be an irreducible co-elementary map in $\bf CH$. If $X$ is a generalized arc, then $f$ is a homeomorphism.\\ \noindent {\bf Proof.\/} It suffices to show $f$ is one-one. Let $y \in Y$, with $K := f^{-1}[\{y\}]$, a subcontinuum of $X$ by \ref{2.1}. Since $X$ is a generalized arc, $K$ is either a singleton or a closed subinterval with non-empty interior. The latter case, however, easily contradicts the irreducibility of $f$.
$\dashv$\\ In \cite{Gur} it is shown that every infinite compact Hausdorff space is co-elementarily equivalent to a compact Hausdorff space that is not locally connected. (See also \cite{Ban5} for refinements.) This explains the necessity of the local connectedness hypothesis in $(v)$. \subsection{Proposition.}\label{2.6} Suppose every locally connected co-elementary pre-image of an arc is a generalized arc. Then every locally connected compact Hausdorff space co-elementarily equivalent to a generalized arc is itself a generalized arc.\\ \noindent {\bf Proof.\/} Suppose $X \in \bf CH$ is locally connected, $X \equiv Y$, and $Y$ is a generalized arc. As in the proof of \ref{2.4} above, we have co-elementary maps $f:X \to X_0$ and $g:Y \to Y_0$, where $X_0$ and $Y_0$ are metrizable. Furthermore, we know that $X_0$ is locally connected (continuous surjections preserve local connectedness) and that $Y_0$ is an arc (by \ref{2.3}). By the transitivity of co-elementary equivalence, we know $X_0 \equiv Y_0$; by the main result of \cite{Ban3}, we know $X_0$ is an arc. Our hypothesis then tells us that $X$ is a generalized arc. $\dashv$\\ We finish with a proof of $(vi)$. If $X$ is an arc and $f:X \to Y$ is a co-elementary map in $\bf CH$, then $Y$ is an arc and $f$ is a monotone continuous surjection by \ref{2.1} and \ref{2.3}. So it suffices to prove the following. \subsection{Proposition.}\label{2.7} Every monotone continuous surjection from an arc to itself is a co-elementary map.\\ \noindent {\bf Proof.\/} Let us take our arc to be the standard closed unit interval ${\bf I}$ with its usual order, and let $f$ be a monotone continuous surjection from ${\bf I}$ to itself. $f$ is either $\leq$-preserving or $\leq$-reversing, so we lose no generality in assuming $f$ to be the former. For any topological space $X$, we denote the closed set lattice of $X$ by $F(X)$. $F(\;)$ converts continuous maps contravariantly into lattice homomorphisms, and serves as a right inverse for $S(\;)$: $S(F(X))$ is naturally homeomorphic to $X$ for any compact Hausdorff $X$.
Monotone continuous surjections from $\bf I$ to itself are strongly monotone; hence $f^F:F({\bf I}) \to F({\bf I})$ is a lattice embedding that takes closed intervals (in this case the connected elements of the lattice) to closed intervals. However, $f^F$ will take atoms to non-atoms when $f$ is not injective. Thus $f^F$ cannot be an elementary embedding unless it is an isomorphism. The idea is to restrict the domain and range of $f^F$ in such a way that the resulting lattice embedding, call it $g$, is elementary, and $g^S = f$. Our plan is to create an elementary lattice embedding $g:{\cal A} \to {\cal B}$, where ${\cal A}$ and ${\cal B}$ are atomless lattice bases for ${\bf I}$ (i.e., both $\cal A$ and $\cal B$ are atomless, as well as meet-dense in $F({\bf I})$), and $g$ agrees with the restriction of $f^F$ to ${\cal A}$. Since $S(\cal A)$ and $S(\cal B)$ are naturally homeomorphic to $\bf I$, and $f$ is just $g^S$ conjugated with these homeomorphisms, $f$ is a co-elementary map provided $g^S$ is. For each $y \in \bf I$, let $\lambda (y) := \mbox{inf}(f^{-1}[\{y\}])$ and $\rho (y) := \mbox{sup}(f^{-1}[\{y\}])$. Then for any closed interval $[x,y] \in F(\bf I)$, $f^F([x,y]) = [\lambda (x),\rho (y)]$. Both $\lambda$ and $\rho$ are right inverses for $f$, and are hence strictly increasing (but not necessarily continuous). Of course $\lambda (0) = 0$ and $\rho (1) = 1$. Let $L,R \subseteq \bf I$, with $0 \in L$ and $1 \in R$. If ${\cal I}(L,R)$ denotes the set of all finite unions of intervals $[x,y]$ with $x \in L$ and $y \in R$, then ${\cal I}(L,R)$ is a sublattice of $F(\bf I)$, which is atomless just in case $L \cap R = \emptyset$. If $L$ and $R$ are dense in $\bf I$, then ${\cal I}(L,R)$ is a lattice base as well. Now fix $L,R \subseteq \bf I$ to be disjoint countable dense subsets, with $0 \in L$ and $1 \in R$, and set ${\cal A} := {\cal I}(L,R)$. Then the image of $\cal A$ under $f^F$ is ${\cal I}(\lambda [L],\rho[R])$.
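To see the maps $\lambda$ and $\rho$ in action, take the (hypothetical, chosen only for illustration) monotone continuous surjection $f:{\bf I} \to {\bf I}$ that collapses $[1/3,2/3]$ to the point $1/2$. A Python sketch with exact rational arithmetic confirms that $\lambda$ and $\rho$ are right inverses for $f$ and that $f^F([x,y]) = [\lambda(x),\rho(y)]$ on a sample interval:

```python
from fractions import Fraction as Fr

def f(x):
    """Monotone continuous surjection of [0,1] collapsing [1/3, 2/3] to 1/2."""
    if x <= Fr(1, 3):
        return Fr(3, 2) * x
    if x < Fr(2, 3):
        return Fr(1, 2)
    return Fr(3, 2) * x - Fr(1, 2)

def lam(y):
    """lambda(y) = inf of the fiber f^{-1}[{y}]."""
    return Fr(2, 3) * y if y <= Fr(1, 2) else Fr(2, 3) * y + Fr(1, 3)

def rho(y):
    """rho(y) = sup of the fiber f^{-1}[{y}]."""
    return Fr(2, 3) * y if y < Fr(1, 2) else Fr(2, 3) * y + Fr(1, 3)

grid = [Fr(i, 24) for i in range(25)]
# lambda and rho are right inverses for f ...
assert all(f(lam(y)) == y and f(rho(y)) == y for y in grid)
# ... and f^F([a,b]) = f^{-1}[[a,b]] = [lambda(a), rho(b)]:
a, b = Fr(1, 4), Fr(7, 12)
assert all((a <= f(t) <= b) == (lam(a) <= t <= rho(b)) for t in grid)
```

Here $\lambda$ and $\rho$ agree except at $y = 1/2$, where $\lambda(1/2) = 1/3$ and $\rho(1/2) = 2/3$; neither is continuous at that point, matching the parenthetical remark above.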
Clearly $\lambda [L] \cap \rho [R] = \emptyset$, $0 \in \lambda [L]$, and $1 \in \rho [R]$. Let $L', R' \subseteq \bf I$ be disjoint countable dense subsets, with $\lambda [L] \subseteq L'$, $\rho [R] \subseteq R'$, and set ${\cal B} := {\cal I}(L',R')$. Then $\cal B$ is a countable atomless lattice base for $F(\bf I)$, and we denote by $g:{\cal A} \to \cal B$ the embedding $f^F$ with its domain and range so restricted. It remains to show that $g$ is an elementary embedding, and for this it suffices to show that for each finite set $S$ in $\cal A$ and each $b \in \cal B$, there is an automorphism on $\cal B$ that fixes $g[S]$ pointwise and takes $b$ into $g[\cal A]$. Let $x_1, ..., x_n$ be a listing, in increasing order, of the endpoints of the component intervals of $g[S] \cup \{b\}$ (so each $x_i$ is in $L' \cup R'$), with $X_i := f^{-1}[\{f(x_i)\}]$, $1 \leq i \leq n$. Each $X_i$ is either a singleton or a non-degenerate closed interval, and for $1 \leq i < j \leq n$, either $X_i = X_j$ or each element of $X_i$ is less than each element of $X_j$. Let $U_i$ be an open-interval neighborhood of $X_i$ such that $U_i \cap U_j = \emptyset$ whenever $X_i \neq X_j$. Since $f$ is a $\leq$-preserving surjection and the sets $L$ and $R$ are dense in $\bf I$, each $U_i$ has infinite intersection with both $\lambda [L]$ and $\rho [R]$. If $x_i \in \lambda [L] \cup \rho [R]$, set $x_i' := x_i$. Otherwise we know $x_i$ is an endpoint of a component interval of $b$; and we choose $x_i' \in U_i$ in such a way that $x_i' \in \lambda [L]$ if and only if $x_i \in L'$, and $x_i' < x_j'$ whenever $x_i < x_j$ and $X_i = X_j$. This procedure produces an increasing sequence $x_1',...,x_n'$ of elements of $\lambda [L] \cup \rho [R]$; $x_i' \in \lambda [L]$ if and only if $x_i \in L'$. For each $a \in g[S] \cup \{b\}$, let $a'$ be built up using the endpoints $x_i'$ in the same way as $a$ is built up using the endpoints $x_i$. Then $a' = a$ for each $a \in g[S]$, and $b' \in g[\cal A]$. 
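The final step of the proof appeals to a Cantor-style back-and-forth construction on countable dense orders. As a standalone illustration (our sketch; the finite dense chains below are hypothetical stand-ins for the countable dense sets in the proof), one can run finitely many back-and-forth steps and verify that the partial matching produced is order-preserving:

```python
from fractions import Fraction

def extend(pairs, a, B):
    """Match a with some unused b in B consistent with the order constraints."""
    lo = max((q for p, q in pairs if p < a), default=None)
    hi = min((q for p, q in pairs if p > a), default=None)
    for b in B:
        if all(q != b for _, q in pairs) and \
           (lo is None or lo < b) and (hi is None or b < hi):
            return pairs + [(a, b)]
    raise ValueError("enumeration not dense enough")

def back_and_forth(A, B, steps):
    pairs = []
    for k in range(steps):
        if k % 2 == 0:   # forth: next unmatched element of A
            a = next(x for x in A if all(p != x for p, _ in pairs))
            pairs = extend(pairs, a, B)
        else:            # back: next unmatched element of B
            b = next(y for y in B if all(q != y for _, q in pairs))
            pairs = [(p, q) for q, p in extend([(q, p) for p, q in pairs], b, A)]
    return pairs

# two (finite truncations of) countable dense chains in (0,1)
A = sorted({Fraction(p, q) for q in range(2, 40) for p in range(1, q)})
B = sorted({Fraction(p, 2**k) for k in range(1, 12) for p in range(1, 2**k, 2)})
m = back_and_forth(A, B, 30)
assert all((p1 < p2) == (q1 < q2) for p1, q1 in m for p2, q2 in m)
```

With genuinely dense countable orders the same procedure never gets stuck, which is the content of Cantor's theorem invoked next.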
Now by a classic (Cantor) back and forth argument, there is an order automorphism on $L' \cup R'$ that fixes $L'$ and $R'$ setwise and takes $x_i$ to $x_i'$ for $1 \leq i \leq n$. This order automorphism gives rise to the lattice automorphism on $\cal B$ that we require. $\dashv$\\ \end{document}
\begin{document} \title{Voltage operations on maniplexes} \author{Isabel Hubard} \address{Institute of Mathematics, National Autonomous University of Mexico (IM UNAM), 04510 Mexico City, Mexico} \email{[email protected]} \author{Elías Mochán} \address{College of Science, Northeastern University, 02115 Boston, USA} \email{[email protected]} \author{Antonio Montero} \address{Institute of Mathematics, National Autonomous University of Mexico (IM UNAM), 04510 Mexico City, Mexico} \email{[email protected]} \maketitle \begin{abstract} Classical geometric and topological operations on polyhedra, maps and polytopes often give rise to structures with the same symmetry group as the original one, but with more flags. In this paper we introduce the notion of \emph{voltage operations} on maniplexes, as a way to generalize such operations. Our technique provides a way to study classical operations in a graph theory setting. In fact, voltage operations can be applied to symmetry type graphs, and more generally to $n$-valent properly $n$-edge colored graphs. We focus on studying the interactions between voltage operations and the symmetries of the operated object, and show that they can be potentially used to build maniplexes with prescribed symmetry type graphs. Moreover, a complete characterization of when an operation can be seen as a voltage operation is given. \end{abstract} \section{Introduction} Traditionally, operations on polyhedra introduce local transformations on the vertices, edges or faces. Classical examples include the Wythoffian ones, such as truncation and medial. Operations in maps and polytopes have been studied previously. For example, in \cite{OrbanicPellicerWeiss_2010_MapOperations$k$}, Orbanić, Pellicer and Weiss explore map operations to build $k$-orbit maps. They take a combinatorial approach and divide each flag (incident triplets vertex-edge-face) of a map into several new ones.
In a very recent paper \cite{CunninghamPellicerWilliams_StratifiedOperationsManiplexes_preprint} Cunningham, Pellicer and Williams introduce \emph{stratified operations} to study monodromy groups of some important families of non-reflexive maniplexes (a generalization of polytopes, or rather of their flag graphs, in which some conditions are relaxed). This concept is closely related to our concept of voltage operation and many of their results can be written in our language and vice versa. Classical operations often give rise to maps or polytopes with the same symmetry group as the original one, but with more flags. Thus, by applying one of these operations to a polytope we get a new polytope with more flag orbits. Furthermore, it is not difficult to see that, often, the ``local configuration'' of the flags does not depend on the original polytope but on the operation per se. Could we use operations to get \emph{any} local configuration of the flag orbits? We shall explain this question more precisely. The flag graph of a polytope $\mathcal{P}$ is a properly edge-colored $n$-valent graph whose vertices are the flags of $\mathcal{P}$ and two of them are adjacent if the corresponding flags differ in exactly one element. The colors of the edges are given by the type of element they differ in. The quotient of the flag graph of $\mathcal{P}$ by its symmetry group is the \emph{symmetry type graph} of $\mathcal{P}$. The symmetry type graph can be thought of as a way to represent the local configuration of the flag orbits. We can notice that when we apply a classical operation to different regular polyhedra, the resulting polyhedra often have isomorphic symmetry type graphs. (For example, if $\med$ represents the medial operation and $\mathcal{P}$ is a regular polyhedron, the symmetry type graph of $\med(\mathcal{P})$ will have either one single vertex, or two vertices and an edge of color $2$ joining them, depending on whether or not $\mathcal{P}$ is self-dual.)
Moreover, if we apply an operation $\oo$ to two polyhedra, both with the same symmetry type graph $\mathcal{T}$, it is very likely that both resulting polyhedra have the same symmetry type graph $\mathcal{T}'$. A natural question to ask is: given a properly edge-colored $n$-valent graph $\mathcal{T}$, is there a maniplex whose symmetry type graph is $\mathcal{T}$? Going back to operations, can we find an operation $\oo$ such that if $\mathcal{P}$ is a reflexible maniplex, then the symmetry type graph of $\oo(\mathcal{P})$ is precisely $\mathcal{T}$? These questions give rise to \emph{voltage operations} as a potentially useful tool to find answers. Voltage operations define a technique that uses voltage graphs to generalize the above mentioned classical operations, as well as many others, even in higher dimensions (or ranks). These operations can be defined not only for maniplexes, but also for \emph{premaniplexes}, a more general concept that includes both maniplexes and their symmetry type graphs. It is important to remark that voltage graphs have been used before in constructing polytope-like structures. Notably, in \cite{PellicerPotocnikToledo_2019_ExistenceResultTwo} the authors use voltage graphs to build $2$-orbit $n$-maniplexes (for $n \geq 4$) with prescribed symmetry type graphs. In \cite{CunninghamDelRioFrancosHubardToledo_2015_SymmetryTypeGraphs} the authors use voltage graphs, without explicitly mentioning them, to find generators and relations for the automorphism group of $3$-orbit polytopes. In \cite{Mochan_2021_AbstractPolytopesTheir_PhDThesis} the second author of this manuscript exploited extensively the use of voltage graphs to solve some relevant problems in the area, namely, problems 1 and 2 in \cite{CunninghamPellicer_2018_OpenProblems$k$}. We start this paper by giving some basic definitions that we shall need throughout the manuscript.
In \cref{sec:operations} we give the definition of a voltage operation and look in detail into the mix as a voltage operation; we also study some connectivity properties of the voltage operations. In Section\nobreakspace \ref {sec:ClassicalExamples} we explore some examples of classical operations on polytopes and polyhedra. In particular we show how pyramids, prisms, Wythoffian operations, among others, can be seen as voltage operations. In Section\nobreakspace \ref {sec:automorphisms} we describe how automorphisms and voltage operations interact. We show that if $\mathcal{M}$ is a maniplex with symmetry type $\mathcal{T}$ and $\oo$ is a voltage operation, then every automorphism of $\mathcal{M}$ induces an automorphism of $\oo(\mathcal{M})$. That is, the automorphism group of $\mathcal{M}$ is a subgroup of the automorphism group of $\oo(\mathcal{M})$. Moreover, we show that the symmetry type graph of $\oo(\mathcal{M})$ with respect to the automorphism group of $\mathcal{M}$ can be obtained by applying the voltage operation to $\mathcal{T}$. In Section\nobreakspace \ref {sec:composition} we show that voltage operations are closed under composition and we find a simple way to describe the composition of two voltage operations. Finally, in Section\nobreakspace \ref {sec:universal} we give conditions for two voltage operations to be equivalent. We also characterize voltage operations in terms of what they do to the universal maniplex. More precisely, in Theorem\nobreakspace \ref {thm:covers} we prove that if $\oo$ is a mapping that assigns a premaniplex $\oo(\mathcal{X})$ to each premaniplex $\mathcal{X}$, then $\oo$ is a voltage operation if and only if $\oo(\mathcal{U}/\Gamma) \cong \oo(\mathcal{U})/\Gamma$ for every group $\Gamma \leq \aut(\mathcal{U})$.
\section{Preliminaries}\label{sec:defs} \subsection{Graphs} In this work we use the definition of graph used in \cite{MalnicNedelaSkoviera_2000_LiftingGraphAutomorphisms}, which is slightly more general than the usual definition. A \emph{graph} $X$ is a quadruple $(D,V,I,(\cdot)^{-1})$ where $D$ and $V$ are disjoint sets, $I:D \to V$ is a mapping and $(\cdot)^{-1}: D \to D$ is an involutory permutation of $D$. The set $V$ is the set of \emph{vertices} of $X$, the set $D$ is the set of \emph{darts} of $X$. For a dart $d$, the vertex $I(d)$ is the \emph{initial vertex} or \emph{starting point} of $d$ and $d^{-1}$ is the \emph{inverse} of $d$. The \emph{terminal vertex} or \emph{endpoint} of $d$ is the starting point of $d^{-1}$. The \emph{edges} are the orbits of $D$ under the action of $(\cdot)^{-1}$. If an edge consists only of a single dart, that is, a dart $d$ that satisfies $d^{-1}=d$, then it is called a \emph{semiedge}. A \emph{loop} is an edge consisting of two darts whose initial vertex is the same. A \emph{link} is an edge that is not a loop or a semiedge. That is, an edge whose two darts are different and have different starting points. Usually a graph $X$ is defined by its vertex set $V(X)$ and the set $E(X)$ of edges. We can recover the set of darts as the set of formal pairs \[D(X)= \left\lbrace \pth[e]{v} : v \in V(X), e \in E(X) \text{ and $v$ is incident to $e$} \right\rbrace. \] The function $I$ is given by $I(\pth[e]{v})=v$ and if an edge $e$ is incident to the vertices $u$ and $v$, then $\left( \pth[e]{v} \right)^{-1} = \pth[e]{u}$. Formally speaking, this definition turns loops into semiedges. In this work we will not use graphs with loops, hence we allow this small abuse.
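The quadruple definition is easy to prototype. The sketch below (our own, with hypothetical names) stores a graph as the functions $I$ and $(\cdot)^{-1}$ on a finite set of darts and classifies each edge as a semiedge, loop, or link:

```python
class Graph:
    """A graph in the dart formalism: I maps each dart to its starting
    vertex; inv is the involutory inverse map on darts."""
    def __init__(self, I, inv):
        self.I, self.inv = I, inv

    def edges(self):
        """Edges are the orbits of darts under inv, tagged by kind."""
        seen, out = set(), []
        for d in self.I:
            if d in seen:
                continue
            e = {d, self.inv[d]}
            seen |= e
            if len(e) == 1:
                kind = 'semiedge'              # d is its own inverse
            elif self.I[d] == self.I[self.inv[d]]:
                kind = 'loop'                  # two darts, same starting point
            else:
                kind = 'link'                  # two darts, different endpoints
            out.append((frozenset(e), kind))
        return out

# a semiedge at u, a loop at u, and a link from u to v
X = Graph({'a': 'u', 'b': 'u', 'B': 'u', 'c': 'u', 'C': 'v'},
          {'a': 'a', 'b': 'B', 'B': 'b', 'c': 'C', 'C': 'c'})
assert sorted(k for _, k in X.edges()) == ['link', 'loop', 'semiedge']
```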
If $X=(V(X),D(X), I_{X}, (\cdot)^{-1}_{X})$ and $Y=(V(Y),D(Y), I_{Y}, (\cdot)^{-1}_{Y})$ are graphs, a \emph{graph homomorphism} $f: X \to Y$ is a pair of functions $f=(f_V, f_D)$ with $f_V: V(X) \to V(Y)$ and $f_D: D(X) \to D(Y)$ satisfying \[\begin{aligned} I_{Y}\left( d f_{D} \right) &= \left( I_{X} (d) \right)f_V && \text{and} \\ \left( (d)f_D \right)_{Y}^{-1} &= ( d^{-1}_{X} )f_D \end{aligned} \] for every dart $d \in D(X)$. Note that we evaluate graph homomorphisms on the right. If $v\in V(X)$ and $d \in D(X)$, we shall write $vf$ and $df$ instead of $vf_{V}$ and $df_{D}$. If both $f_{V}$ and $f_{D}$ are bijective and $f^{-1}:=(f_{V}^{-1},f_{D}^{-1})$ is also a graph homomorphism, then we say that $f$ (and $f^{-1})$ is an \emph{isomorphism} and that $X$ and $Y$ are \emph{isomorphic} (and write $X \cong Y$). Naturally, for a graph $X$, an \emph{automorphism} is an isomorphism $f:X \to X$. A \emph{path} is a finite sequence $W=(d_{1}, \dots, d_{k})$ of darts such that the endpoint of $d_{i}$ is the starting point of $d_{i+1}$ for $i \in \lbrace 1, \dots, k-1\rbrace$. We usually omit commas and parentheses and simply write $W=d_{1} \dots d_{k}$. The number $k$ is the \emph{length} of $W$. The starting point of $d_1$ is the \emph{starting point} of $W$; we also say that $W$ starts at $I(d_{1})$. Similarly, the endpoint of $W$ is the endpoint of $d_{k}$ and we say that $W$ ends at this vertex. A single vertex is a path of length $0$. A path is \emph{closed} if its starting point and its endpoint are the same vertex. Notions such as degree of a vertex, cycle, subgraphs, connectivity, connected components and trees extend naturally from the classic definition of graphs and paths. Given a path $W = d_{1} \dots d_{k}$, an \emph{elementary (graph) move} consists of inserting or removing a pair of consecutive inverse darts at any point in the sequence $d_{1}, \dots, d_{k}$.
If a path $V$ can be obtained from a path $W$ by applying a series of elementary moves then we say that the paths are \emph{graph-homotopic} (and write $W \sim V$). Clearly graph-homotopy is an equivalence relation and we often identify a path with its homotopy class. Observe that if $W$ is graph-homotopic to a path of length $0$ then $W$ must be closed. Given two paths $W$ and $V$ we say that they are \emph{compatible} if the endpoint of $W$ is the starting point of $V$. We can operate compatible paths by concatenation. That is, if $W=d_{1} \dots d_{k}$ and $V=a_{1} \dots a_{\ell}$ then $WV= d_{1} \dots d_{k} a_{1} \dots a_{\ell}$. If $W \sim W'$ and $V \sim V'$ then $WV \sim W'V'$, which implies that we can operate not only compatible paths but homotopy classes of compatible paths. The \emph{fundamental groupoid} of a graph $X$, denoted by $\fg(X)$, is the set of graph-homotopy classes of paths in $X$ with the partial operation defined above. If $u$ is a vertex in $X$, then the fundamental group of $X$ at $u$, denoted by $\fg^{u}(X)$, is the set of graph-homotopy classes of closed paths at $u$ with concatenation as operation. Observe that $\fg^{u}(X)$ is actually a group and that if $V$ is a path from $v$ to $u$ then $\fg^{v}(X) = V \fg^{u}(X) V^{-1} $. If $X$ is a graph and $G$ is a group, a \emph{voltage assignment} is a function $\xi: \fg(X) \to G$ that satisfies $\xi(WV) = \xi(V)\xi(W)$ for any two compatible paths $W$ and $V$\footnote{Usually a voltage assignment $\xi$ is defined such that $\xi(WV)=\xi(W)\xi(V)$ (cf. \cite{MalnicNedelaSkoviera_2000_LiftingGraphAutomorphisms}). The reason we do it the other way is because we are considering right actions, as is customary in the polytopes and maniplexes literature.}. The group $G$ is called the \emph{voltage group} of $\xi$ and the pair $(X,\xi)$ is called a \emph{voltage graph}.
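Graph-homotopy reduction is a one-pass stack computation: cancelling a dart against its predecessor whenever the two are mutually inverse realizes a sequence of elementary moves and leaves a reduced representative of the homotopy class. An illustrative sketch (dart names are made up):

```python
def reduce_path(path, inv):
    """One left-to-right pass of cancellations; the result contains no
    adjacent mutually inverse darts, i.e. it is homotopy-reduced."""
    out = []
    for d in path:
        if out and out[-1] == inv[d]:
            out.pop()        # remove a consecutive inverse pair (elementary move)
        else:
            out.append(d)
    return out

inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
assert reduce_path(['a', 'b', 'B', 'A'], inv) == []     # homotopic to a point
assert reduce_path(['a', 'A', 'a'], inv) == ['a']
```

Since a voltage assignment satisfies $\xi(d^{-1}) = \xi(d)^{-1}$, the voltage of a path is unchanged by these cancellations, which is why $\xi$ is well defined on homotopy classes.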
Since every path can be thought of as the product of its darts, we can regard a voltage assignment as a function $\xi:D(X) \to G$ such that for every dart $d$, $\xi(d^{-1})= \xi(d)^{-1}$. The voltage of a path $W= d_{1}, \dots, d_{k}$ is simply $\xi(d_{k}) \cdots \xi(d_{1})$. If $(X,\xi)$ is a voltage graph, the \emph{derived graph} is the graph $X^{\xi}$ whose vertices and darts are the elements in $V(X) \times G$ and $D(X) \times G$, respectively. The initial vertex of a dart $(d,g)$ is $(I(d),g)$ and its inverse is $(d^{-1}, \xi(d)g)$. Informally speaking, the vertices adjacent to a given vertex $(x,g)$ are the vertices $(y, \xi(d)g)$, where $d$ runs over the darts starting at $x$ and $y$ denotes the endpoint of $d$. If $X$ is a graph and $\xi: \fg(X) \to \Gamma$ and $\zeta: \fg(X) \to \Gamma$ are voltage assignments, then $\xi$ and $\zeta$ are \emph{equivalent} if there is an isomorphism $X^{\xi} \to X^{\zeta}$ such that the following diagram commutes: \begin{equation}\label{eq:EquivVolts} \begin{tikzcd} [column sep = small] X^\xi \arrow[dashed]{rr} \arrow{rd}{} & & X^{\zeta} \arrow{dl}{} \\ & X & \end{tikzcd} \end{equation} where the arrows pointing to $X$ are the projection to the first coordinate, which is a homomorphism. The following theorem is well known (see \cite{MalnicNedelaSkoviera_2000_LiftingGraphAutomorphisms}): \begin{thm}\label{thm:VoltsEquiv} If $(\mathcal{X},\xi)$ is a connected voltage graph and $x_0$ is a vertex in $\mathcal{X}$, there exists a voltage assignment $\xi'$ on $\mathcal{X}$ satisfying that: \begin{itemize} \item $\xi'$ is equivalent to $\xi$; \item for every dart $d$ in $\mathcal{X}$, $\xi'(d)\in\xi(\fg^{x_0}(\mathcal{X}))$; and \item there is a spanning tree $T$ of $\mathcal{X}$ such that for every dart $d$ in $T$, $\xi'(d)=1$. \end{itemize} \end{thm} Note that if $\mathcal{X}$ is connected, without loss of generality, we can assume that a voltage assignment $\xi$ on $\mathcal{X}$ satisfies the last two conditions of Theorem\nobreakspace \ref {thm:VoltsEquiv}.
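The derived-graph construction is mechanical enough to check by machine. In the sketch below (ours; the helper names are not from the paper) the base graph is a single vertex with one loop carrying voltage $1$ in $\mathbb{Z}_5$, and the derived graph comes out as the $5$-cycle, as expected:

```python
def derived_graph(V, darts, I, inv, xi, n):
    """Derived graph of a voltage graph with voltage group Z_n (written
    additively): vertices are V x Z_n, and the dart (d, g) starts at
    (I(d), g) with inverse (inv(d), xi(d) + g)."""
    verts = [(v, g) for v in V for g in range(n)]

    def start(dg):
        d, g = dg
        return (I[d], g)

    def inverse(dg):
        d, g = dg
        return (inv[d], (xi[d] + g) % n)

    # the neighbor reached along (d, g) is the starting point of its inverse
    adj = {v: set() for v in verts}
    for d in darts:
        for g in range(n):
            adj[start((d, g))].add(start(inverse((d, g))))
    return adj

# one vertex 'v', one loop with darts d and D = d^{-1}, voltage 1 in Z_5
adj = derived_graph(['v'], ['d', 'D'], {'d': 'v', 'D': 'v'},
                    {'d': 'D', 'D': 'd'}, {'d': 1, 'D': -1}, 5)
assert len(adj) == 5 and all(len(nbrs) == 2 for nbrs in adj.values())  # a 5-cycle
```

Each vertex $(\mathtt{v}, g)$ is joined to $(\mathtt{v}, g \pm 1)$, which is exactly the $5$-cycle covering the one-vertex base graph.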
We can think of the spanning tree $T$ as a \emph{fundamental region} for the group $G$. \subsection{Premaniplexes and maniplexes} A \emph{properly $n$-edge-colored graph} is a graph $X$ together with a function $c: D(X) \to \left\lbrace 0, \dots, n-1 \right\rbrace $ such that $c(d) = c(d^{-1})$ for every dart $d$ and if $d_{1}$ and $d_{2}$ are such that $I(d_{1}) = I(d_{2})$, then $c(d_{1})\neq c(d_{2})$. Observe that $c$ induces a proper edge-coloring in the classical sense. A \emph{premaniplex of rank $n$} or simply $n$-premaniplex is a properly $n$-edge-colored graph $\mathcal{X}$ such that every vertex is the starting point of one dart of each color and if $i,j \in \left\lbrace0, \dots, n-1 \right\rbrace $ are such that $\left| i-j \right| \geq 2 $, then the alternating paths of length 4 with colors $i,j$ are closed. If a premaniplex $\mathcal{X}$ is connected and simple, that is, if there are no semiedges or parallel edges, then we say that $\mathcal{X}$ is a maniplex. Maniplexes were introduced in \cite{Wilson_2012_ManiplexesPart1} as a combinatorial generalization of maps to higher ranks. Constructions of maniplexes with given symmetry properties can be found in \cite{PellicerPotocnikToledo_2019_ExistenceResultTwo}. Natural examples of maniplexes are the flag graphs of polytopes. In fact, if $\mathcal{X}$ is a maniplex, we usually call its vertices \emph{flags}. Moreover, if $\mathcal{M}$ is an $n$-maniplex and $i \in \lbrace0, \dots, n-1\rbrace$, the \emph{$i$-faces} of $\mathcal{M}$ are the connected components of $\mathcal{M}$ after removing the edges of color $i$. More generally, if $0 \leq k < \ell \leq n-1$ the \emph{$(k,\ell)$-sections} of $\mathcal{M}$ are the connected components of $\mathcal{M}$ after removing the edges of color $i$ if $i < k$ or $i > \ell$. Note that when $\mathcal{M}$ is the flag graph of a polytope $\mathcal{P}$, the faces and sections of $\mathcal{M}$ are in correspondence with the faces and sections of $\mathcal{P}$.
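As a concrete check of the premaniplex conditions, the following sketch (our illustration) builds the flag graph of the tetrahedron from incident triples $(v,e,f)$ and verifies that each $r_i$ is a fixed-point-free involution and that $r_0$ and $r_2$ commute (equivalently, that the alternating paths of length $4$ in colors $0,2$ are closed):

```python
from itertools import combinations

V = range(4)                                   # tetrahedron on 4 vertices
E = [frozenset(e) for e in combinations(V, 2)]
F = [frozenset(f) for f in combinations(V, 3)]
flags = [(v, e, f) for f in F for e in E if e < f for v in e]

def r(i, flag):
    """Flip the rank-i entry of a flag, keeping the other two fixed."""
    v, e, f = flag
    if i == 0:     # the other vertex of the edge e
        return (next(iter(e - {v})), e, f)
    if i == 1:     # the other edge of the face f through v
        return (v, next(x for x in E if x != e and v in x and x < f), f)
    # the other face containing the edge e
    return (v, e, next(x for x in F if x != f and e < x))

assert len(flags) == 24
for fl in flags:
    for i in range(3):
        # each r_i is a fixed-point-free involution on flags
        assert r(i, fl) != fl and r(i, r(i, fl)) == fl
    # |i - j| >= 2 forces the colors to commute
    assert r(0, r(2, fl)) == r(2, r(0, fl))
```

The same two checks (involutions of each color at every vertex, commutation for far-apart colors) constitute the full premaniplex test for any properly $n$-edge-colored graph given by its color permutations.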
This close relation between flag graphs of polytopes and maniplexes allows us to think of polytopes in graph-theoretical terms. In this paper, we slightly abuse language and whenever we talk of a polytope $\mathcal{P}$ we actually refer to the flag graph $\mathcal{P}$. In \cite{GarzaVargasHubard_2018_PolytopalityManiplexes} Garza-Vargas and Hubard characterize when a maniplex is the flag graph of a polytope. Finally, observe that the notions of flags, sections and faces extend naturally to premaniplexes. If $\mathcal{X}$ is a premaniplex, whenever we write $x \in \mathcal{X}$ we mean that $x$ is a flag (vertex) in $\mathcal{X}$. If $x$ is a vertex in an $n$-premaniplex, and $i \in \left\lbrace 0, \dots, n-1 \right\rbrace $ we denote by $\pth{x}$ the dart of color $i$ whose starting point is $x$. We denote by $x^{i}$ the \emph{$i$-adjacent vertex} of $x$, that is, the endpoint of the dart $\pth{x}$. Given two $n$-premaniplexes $\mathcal{X}$ and $\mathcal{X}'$, a \emph{(premaniplex) homomorphism} from $\mathcal{X}$ to $\mathcal{X}'$ is a function that preserves $i$-adjacencies, for $i\in\lbrace0, \dots, n-1\rbrace$. In the particular case where $\mathcal{X}$ and $\mathcal{X}'$ are prepolytopes, these homomorphisms are called rap-maps. It is easy to prove that if $\mathcal{X}'$ is connected, every homomorphism with codomain $\mathcal{X}'$ is surjective (see \cite[Lemma 2.5]{MonsonPellicerWilliams_2014_MixingMonodromyAbstract} for a proof for rap-maps that naturally extends to homomorphisms of premaniplexes). In general, every connected component of $\mathcal{X}'$ is either contained completely in the image of a homomorphism or in its complement. The class of all $n$-premaniplexes together with the premaniplex homomorphisms as arrows forms a category which we will call $\mathrm{pMpx}^n$. A surjective homomorphism is called a \emph{covering}. If there is a covering from $\mathcal{X}$ to $\mathcal{X}'$ we say that \emph{$\mathcal{X}$ covers $\mathcal{X}'$}.
It is easy to see that if $\mathcal{X}'$ is connected, any homomorphism from $\mathcal{X}$ to $\mathcal{X}'$ is a covering. The notions of \emph{isomorphism} and \emph{automorphism} of premaniplexes are defined in the usual way. The automorphism group of $\mathcal{X}$ is denoted by $\aut(\mathcal{X})$, and acts naturally on flags, $i$-faces and $(k,\ell)$-sections. Such actions will be considered as right actions. A premaniplex $\mathcal{X}$ is \emph{regular} if $\aut(\mathcal{X})$ acts transitively on vertices. We say that $\mathcal{X}$ is a \emph{$k$-orbit} premaniplex if $\aut(\mathcal{X})$ induces $k$ orbits on vertices. These notions coincide with the analogous notions for polytopes. For example, the flag graph of a polytope is a regular maniplex if and only if the polytope is regular itself. If $\mathcal{X}$ is an $n$-premaniplex and $\Gamma \leq \aut(\mathcal{X})$ the \emph{quotient} $\mathcal{X}/\Gamma$ is the $n$-colored graph whose vertices are the orbits $\lbrace x\Gamma : x \in V(\mathcal{X})\rbrace$ and for $i \in \lbrace0, \dots, n-1\rbrace$, $(x\Gamma)^{i} = (x^{i})\Gamma$. Observe that the adjacencies are well-defined since $(x^{i}) \gamma = (x \gamma)^{i}$ for every $\gamma \in \aut(\mathcal{X})$. Moreover, it is straightforward that $\mathcal{X}/\Gamma$ is an $n$-premaniplex. If $\mathcal{M}$ is a maniplex and $\Gamma \leq \aut(\mathcal{M})$, the \emph{symmetry type graph} of $\mathcal{M}$ with respect to $\Gamma$ is the quotient $\mathcal{M}/\Gamma$. In particular, when $\Gamma = \aut(\mathcal{M})$ then the quotient $\mathcal{M}/\Gamma$ is the \emph{symmetry type graph} of $\mathcal{M}$ (see \cite{CunninghamDelRioFrancosHubardToledo_2015_SymmetryTypeGraphs, Mochan_2021_AbstractPolytopesTheir_PhDThesis}).
The \emph{universal rank-$n$ Coxeter group} (with string diagram) is the group $\mathcal{C}^{n}=\gen{\rho_{0}, \dots, \rho_{n-1}}$ defined by the following relations: \begin{equation}\label{eq:relsReg} \begin{aligned} \rho_{i}^{2} &= \epsilon && \text{for all } i \in \lbrace0, \dots, n-1\rbrace, \\ (\rho_{i}\rho_{j})^{2} &= \epsilon && \text{if } |i-j| \geq 2. \end{aligned} \end{equation} The \emph{universal $n$-maniplex} $\mathcal{U}^{n}$ is the Cayley graph associated with the universal Coxeter group $\mathcal{C}^{n}$. That is, the vertex set of $\mathcal{U}^{n}$ is $\mathcal{C}^{n}$ and, for $\gamma \in \mathcal{C}^{n}$, the $i$-adjacent vertex of $\gamma$ is $\rho_{i}\gamma$. The maniplex $\mathcal{U}^{n}$ is in fact the flag graph of the universal polytope (see \cite[Theorem 5.2]{Hartley_1999_AllPolytopesAre}). We omit the rank of both the universal Coxeter group and the universal maniplex whenever it is implicit. Since the universal Coxeter group $\mathcal{C}$ acts transitively by automorphisms on the universal maniplex $\mathcal{U}$, the maniplex $\mathcal{U}$ is regular. In fact, $\aut(\mathcal{U})$ is precisely $\mathcal{C}$. We denote by $r_{i}$ the permutation of $V(\mathcal{U})$ that swaps every vertex $x$ with $x^{i}$. More precisely, for $\gamma \in \mathcal{C} = V(\mathcal{U})$, $r_i: \gamma \mapsto \rho_i\gamma$. The \emph{monodromy group} of $\mathcal{U}$, denoted by $\mon(\mathcal{U})$, is the permutation group of $V(\mathcal{U})$ generated by $\lbrace r_{0}, \dots, r_{n-1}\rbrace$. We call the elements of $\mon(\mathcal{U})$ \emph{monodromies}. The permutations $r_0, \dots, r_{n-1}$ define a (left) action on $\mathcal{U}$ that is compatible with the (right) action of $\aut(\mathcal{U})$, that is, \[(r_{i}x) \gamma = r_i(x\gamma)\] for every $x \in V(\mathcal{U})$, $\gamma \in \aut(\mathcal{U})$ and $i \in \lbrace0,\dots, n-1\rbrace$. This induces an action of $\mon(\mathcal{U})$ on $V(\mathcal{U})$.
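Equality of two words in $\mathcal{C}^{n}$ can be decided using only the defining relations above: repeatedly delete an adjacent repeated generator and swap adjacent generators with color difference at least $2$. By Tits' solution to the word problem for Coxeter groups, these moves suffice. A brute-force Python sketch (ours, practical only for short words):

```python
def normal_form(word, n):
    """Canonical reduced word equivalent to `word` (a tuple of generator
    indices in range(n)) in the universal Coxeter group C^n."""
    word = tuple(word)
    while True:
        # closure of the current word under swaps of commuting generators
        seen, frontier = {word}, [word]
        while frontier:
            w = frontier.pop()
            for k in range(len(w) - 1):
                if abs(w[k] - w[k + 1]) >= 2:
                    s = w[:k] + (w[k + 1], w[k]) + w[k + 2:]
                    if s not in seen:
                        seen.add(s)
                        frontier.append(s)
        # delete an adjacent repeated generator wherever one appears
        shorter = [w[:k] + w[k + 2:] for w in seen
                   for k in range(len(w) - 1) if w[k] == w[k + 1]]
        if not shorter:
            return min(seen)   # reduced: pick the shortlex-minimal word
        word = min(shorter)
```

For instance, `normal_form((0, 2, 0, 2), 3)` is the empty tuple, reflecting $(\rho_0\rho_2)^2=\epsilon$, while `normal_form((0, 1, 0, 1), 3)` is not, since no relation shortens $\rho_0\rho_1\rho_0\rho_1$.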
Observe that the elements $r_0, \dots, r_{n-1}$ satisfy the relations \begin{equation}\label{eq:monU} \begin{aligned} r_{i}^{2} &= 1 && \text{for all } i \in \lbrace0, \dots, n-1\rbrace, \\ (r_{i}r_{j})^{2} &= 1 && \text{if } |i-j| \geq 2. \end{aligned} \end{equation} In fact, there exists an isomorphism from $\mathcal{C}$ to $\mon(\mathcal{U})$ mapping $\rho_{i}$ to $r_{i}$ (see \cite[Theorem 3.9]{MonsonPellicerWilliams_2014_MixingMonodromyAbstract}). The relations in Equation~(\ref{eq:monU}) imply that if $\mathcal{X}$ is any $n$-premaniplex, then $\mon(\mathcal{U})$ acts on $\mathcal{X}$ by $r_i: x \mapsto x^{i}$ for every vertex $x$ and $i \in \lbrace0, \dots, n-1 \rbrace$. The \emph{monodromy group} of $\mathcal{X}$ is the permutation group induced by this action. Naturally, we denote this group by $\mon(\mathcal{X})$. It is important to remark that the term \emph{connection group} has also been used for what we call the monodromy group and has gained popularity in the last few years. Recall that for a vertex $x$ and $i \in \lbrace0, \dots, n-1\rbrace$, $\pth{x}$ denotes the dart of color $i$ whose starting point is $x$. We can generalize this notation: if $W$ is a path starting at $x$ and following the colors $i_{1}, i_{2}, \dots, i_{k}$, then we write $W=\pth[i_{k}, i_{k-1}, \dots, i_{1} ]{x}$. For every vertex $x$ and $i_{1}, i_{2}, \dots, i_{k} \in \left\lbrace 0, \dots, n-1 \right\rbrace$, the path $\pth[i_{k}, i_{k-1}, \dots, i_{1} ]{x}$ ends at the vertex $r_{i_{k}} \cdots r_{i_{1}}x$. Given a vertex $u$ of $\mathcal{U}$ and a path $W=\pth[i_k,i_{k-1},\ldots,i_1]{x}$ in a premaniplex $\mathcal{X}$, the \emph{lift of $W$ starting at $u$} is simply the path $\pth[i_k,i_{k-1},\ldots,i_1]{u}$ in $\mathcal{U}$. An \emph{elementary (maniplex) move} on a path $\pth[w]{x}$ with $w=i_{k}, i_{k-1}, \dots, i_{1}$ consists of either adding or removing the same color two times at any two consecutive positions (i.e.
if $v=i_k,\ldots,i_\ell,j,j,i_{\ell-1},\ldots,i_1$, then $\pth[w]{x}\mapsto \pth[v]{x}$ and $\pth[v]{x}\mapsto \pth[w]{x}$ are elementary moves) or swapping two non-consecutive colors in consecutive positions (more precisely, if $|i_\ell-i_{\ell-1}|>1$ and $u=i_k,\ldots,i_{\ell+1},i_{\ell-1},i_\ell,i_{\ell-2},\ldots,i_1$, then $\pth[w]{x}\mapsto \pth[u]{x}$ is an elementary move). We say that two paths in a premaniplex are \emph{maniplex-homotopic} if we can turn one into the other by a finite sequence of elementary (maniplex) moves. Observe that two paths in a premaniplex are homotopic if and only if their lifts to the universal maniplex $\mathcal{U}$ starting at the same vertex also end at the same vertex. Thus, the homotopy class of a path in a premaniplex is uniquely determined by its starting vertex and a monodromy of the universal maniplex. If $w \in \mon(\mathcal{U}^{n})$ and $w=r_{i_{k}} \cdots r_{i_{1}}$ is a word representing $w$, then we denote by $\pth[w]{x}$ the homotopy class of the path $\pth[i_{k}, i_{k-1}, \dots, i_{1} ]{x}$. Observe that any two words representing $w$ yield homotopic paths and that if $W_{1}, W_{2} \in \pth[w]{x}$, then both paths end at $wx$. Let $\mathcal{X}$ be a premaniplex and let $\fg(\mathcal{X})$ be its fundamental groupoid. If $\xi:\fg(\mathcal{X})\to \Gamma$ is a voltage assignment such that $\xi(W)$ is the identity in $\Gamma$ whenever $W$ is a path of length 4 alternating between two non-consecutive colors, we say that the pair $(\mathcal{X},\xi)$ is a \emph{voltage premaniplex}. In other words, a voltage premaniplex is a voltage graph where the voltage of the maniplex-homotopy class of a path is well defined. \section{Voltage operations} \label{sec:operations} Let $\mathcal{X}$ be an $n$-premaniplex and let $\mathcal{Y}$ be an $m$-premaniplex. Consider a voltage assignment $\eta: \fg(\mathcal{Y}) \to \mon(\mathcal{U}^{n})$.
We define the $m$-colored graph $\mathcal{X} \ertimes \mathcal{Y}$ in the following way: the vertex set of $\mathcal{X} \ertimes \mathcal{Y}$ is $\vr(\mathcal{X}) \times \vr(\mathcal{Y})$ and, for each $i\in \lbrace0,1,\dots, m-1\rbrace$, there is an edge of color $i$ from $(x,y)$ to $\left(\eta(\pth{y})x,r_i y\right)$. Since, in $\mathcal{Y}$, there is an $i$-edge from $y$ to $r_iy$, the darts $\pth{y}$ and $\pth{r_iy}$ are inverse to each other, implying that $\eta(\pth{r_iy})\eta(\pth{y})$ is the identity. Thus, each vertex of the graph $\mathcal{X} \ertimes \mathcal{Y}$ has exactly one $i$-edge and, hence, it has degree $m$. In the terminology of \cite{MalnicNedelaSkoviera_2000_LiftingGraphAutomorphisms}, $\mathcal{X}\ertimes \mathcal{Y}$ is the derived graph of the \emph{voltage space} $(\mathcal{X},\mon(\mathcal{U});\eta)$ with $\mathcal{X}$ being the abstract fiber. A pair $\left( \mathcal{Y},\eta \right)$ is called a \emph{voltage operator} if it is a voltage premaniplex with $\eta: \fg(\mathcal{Y}) \to \mon(\mathcal{U}^{n})$ (or, if the rank of $\mathcal{Y}$ is $m$, an $\left( n,m \right)$-\emph{voltage operator}). Similarly, we say that the mapping $\mathcal{X} \mapsto \mathcal{X} \ertimes \mathcal{Y}$, where $\mathcal{X}$ runs over all $n$-premaniplexes and $(\mathcal{Y},\eta)$ is a fixed voltage operator, is a \emph{voltage operation} (or an $(n,m)$-\emph{voltage operation}). Given premaniplexes $\mathcal{X}$ and $\mathcal{Y}$ and a voltage assignment $\eta$, we shall see that $\mathcal{X} \ertimes \mathcal{Y}$ is a premaniplex itself, although it need not be a maniplex (even if $\mathcal{X}$ is a maniplex). Before showing this, we give some straightforward examples of voltage operations. Let us denote by ${\mathbf{1}}^{n}$ the symmetry type graph of a regular $n$-maniplex, that is, the premaniplex with only one vertex and $n$ semiedges (see Figure~\ref{fig:1ton}).
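The construction of $\mathcal{X} \ertimes \mathcal{Y}$ is completely mechanical, and can be sketched in code. In the encoding below (ours, not the paper's) a premaniplex is a list of color involutions, and the voltage of the dart of color $i$ at $y$ is stored as a word of generator indices acting on $\mathcal{X}$ on the left:

```python
def act(word, x, X):
    """Apply the monodromy word r_{i_k}...r_{i_1} (a tuple of generator
    indices, rightmost acting first) to a vertex x of the premaniplex X."""
    for i in reversed(word):
        x = X[i][x]
    return x

def voltage_product(X, Y, eta):
    """X: n-premaniplex, Y: m-premaniplex (lists of color dicts);
    eta[(y, i)]: word assigned to the dart of color i starting at y.
    Returns the m-premaniplex X x|_eta Y on the vertex set V(X) x V(Y)."""
    m = len(Y)
    verts = [(x, y) for x in X[0] for y in Y[0]]
    # edge of color i from (x, y) to (eta(d^i_y) x, r_i y)
    return [{(x, y): (act(eta[(y, i)], x, X), Y[i][y]) for (x, y) in verts}
            for i in range(m)]

# flag graph of a triangle (6-cycle, colors alternating 0 and 1)
triangle = [{0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4},
            {0: 5, 5: 0, 1: 2, 2: 1, 3: 4, 4: 3}]
one = [{0: 0}, {0: 0}]              # one-vertex operator 1^2
eta = {(0, 0): (1,), (0, 1): (0,)}  # color i carries the voltage r_{1-i}
dual = voltage_product(triangle, one, eta)
```

Here the one-vertex operator with voltages $[r_1, r_0]$ interchanges the two colors of the flag graph of the triangle, producing the flag graph of its dual polygon.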
Moreover, if $G$ is a group and $\eta: {\mathbf{1}}^{n} \to G$ is a voltage assignment that assigns $g_{i}$ to the semiedge of color $i$, then we denote this voltage graph by $\left( {\mathbf{1}}^{n}, [g_{0}, \dots, g_{n-1}] \right)$. \begin{figure} \caption{The premaniplex ${\mathbf{1}}^{n}$.} \label{fig:1ton} \caption{The premaniplex ${\mathbf{2}}^{n}_{I}$.} \label{fig:2_I} \caption{Premaniplexes with $1$ and $2$ vertices.} \label{fig:EjemplosRango3} \end{figure} \begin{exam}\label{eg:d-auto} \leavevmode The concept of a $d$-automorphism of a polytope is defined in \cite{HubardOrbanicIvicWeiss_2009_MonodromyGroupsSelf}, and it can be generalized to maniplexes in a straightforward way. Let ${\mathcal M}$ be an $n$-maniplex, consider $\mon({\mathcal U}^n) =\langle r_0, r_1, \dots, r_{n-1} \rangle$ and let $d$ be an automorphism of $\mon(\mathcal{U}^n)$. We shall keep the vertices of $\mathcal{M}$ and the action of $\mon(\mathcal{U}^n)$ on them, but we choose a new set of labeled generators of the same permutation group acting on the same set of flags to obtain a new maniplex, ${\mathcal M}^d$. More precisely, the maniplex ${\mathcal M}^d$ is defined as follows: the vertices of $\mathcal{M}^d$ are the vertices of $\mathcal{M}$ and, given a vertex $x$, the dart of color $i$ starting at $x$ ends at $d(r_i)x$. In other words, in $\mathcal{M}^d$, $x^i = d(r_i)x$. If $\mathcal{M} \cong \mathcal{M}^d$, we call each isomorphism $\varphi:\mathcal{M}\to\mathcal{M}^d$ a \emph{$d$-automorphism} of $\mathcal{M}$, and we say that $\mathcal{M}$ is \emph{$d$-automorphic}. The maniplex $\mathcal{M}^d$ can easily be seen as an $(n,n)$-voltage operation: \begin{center} if $(\mathcal{Y},\eta) = ({\mathbf{1}}^{n},[d(r_0), d(r_1), \dots, d(r_{n-1})])$, then $\mathcal{M} \ertimes \mathcal{Y} = \mathcal{M}^d$.
\end{center} Classical examples of the above operation are the dual and the Petrial of an $n$-maniplex $\mathcal{X}$: \begin{itemize} \item If $(\mathcal{Y},\eta) = ({\mathbf{1}}^{n},[r_0, \dots, r_{n-1}])$, then $\mathcal{X} \ertimes \mathcal{Y}$ is $\mathcal{X}$ itself. \item If $(\mathcal{Y},\eta) = ({\mathbf{1}}^{n},[r_{n-1}, \dots, r_{0}])$, then $\mathcal{X} \ertimes \mathcal{Y}$ is the dual of $\mathcal{X}$. \item If $(\mathcal{Y},\eta) = ({\mathbf{1}}^{n},[r_0, \dots, r_{n-4}, r_{n-3}r_{n-1}, r_{n-2}, r_{n-1}])$, then $\mathcal{X} \ertimes \mathcal{Y}$ is the generalized Petrial of $\mathcal{X}$. \end{itemize} In the above examples we know that whenever $\mathcal{X}$ is in fact a maniplex, then so is $\mathcal{X} \ertimes \mathcal{Y}$. \end{exam} If $(\mathcal{Y}, \eta)$ is a voltage operator, we want to show that $\mathcal{X} \ertimes \mathcal{Y}$ is a premaniplex for every premaniplex $\mathcal{X}$. Thus, we need to show that the alternating paths in $\mathcal{X} \ertimes \mathcal{Y}$ of length $4$ with colors $i$ and $j$, with $\left| i-j \right| \geq 2$, are closed. In other words, given a premaniplex $\mathcal{X}$ and a voltage operator $(\mathcal{Y}, \eta)$, we want $\pth[i,j,i,j]{(x,y)}$ to be closed whenever $\left| i-j \right| \geq 2$, for every $x\in\mathcal{X}$ and $y \in \mathcal{Y}$. More generally, we are interested in obtaining properties of the paths in $\mathcal{X} \ertimes \mathcal{Y}$ from the paths in $\mathcal{X}$ or $\mathcal{Y}$. The following result is a straightforward but useful observation in this direction. \begin{rem}\label{rem:endsPaths} Let $(\mathcal{Y}, \eta)$ be a voltage operator. Assume that $W=\pth[i_{k},\dots, i_{1}]{y}$ is a path in $\mathcal{Y}$ starting at $y$. For every $x \in \mathcal{X}$, the path $\pth[i_{k},\dots, i_{1}]{(x,y)}$ in $\mathcal{X} \ertimes \mathcal{Y}$ that starts at $(x,y)$ and follows the same colors as $W$ finishes at $\left(\eta(W)x, r_{i_{k}} \cdots r_{i_{1}}y\right)$.
\end{rem} By Remark~\ref{rem:endsPaths}, the path $\pth[i,j,i,j]{(x,y)}$ starts at $(x,y)$ and finishes at $\left(\eta(\pth[i,j,i,j]{y})x, r_ir_jr_ir_j y\right)$. Thus, the path $\pth[i,j,i,j]{(x,y)}$ is closed if and only if \[\begin{aligned} \eta(\pth[i,j,i,j]{y})x &= x && \text{and} \\ r_{i}r_{j}r_{i}r_{j}y &= y. \end{aligned}\] Note now that since $\mathcal{Y}$ is a premaniplex, $r_{i}r_{j}r_{i}r_{j}y = y$ for every $y\in \mathcal{Y}$, which implies that $\pth[i,j,i,j]{(x,y)}$ is closed if and only if $\eta(\pth[i,j,i,j]{y})$ fixes $x$. But $\eta(\pth[i,j,i,j]{y})$ is the identity, since $(\mathcal{Y}, \eta)$ is a voltage premaniplex. Therefore, we have the following proposition. \begin{prop} Given a voltage premaniplex $(\mathcal{Y}, \eta)$ and a premaniplex $\mathcal{X}$, the product $\mathcal{X} \ertimes \mathcal{Y}$ is a premaniplex. \end{prop} Since we are working with voltage premaniplexes, for the rest of the paper, whenever we refer to paths being ``homotopic'' or to the ``homotopy class'' of a path, we are thinking in terms of maniplex homotopy. In the same way, the notation $\fg(\mathcal{Y})$ will denote the fundamental groupoid consisting of paths in $\mathcal{Y}$ considered up to maniplex homotopy. Although we have special interest in voltage operations that give rise to connected premaniplexes, there are interesting examples of voltage operations in which we (often) obtain disconnected objects. A \emph{rooted} premaniplex is a pair $(\mathcal{X},x)$ where $\mathcal{X}$ is a connected premaniplex and the \emph{root} $x$ is a vertex of $\mathcal{X}$. If $\mathcal{X}$ is not connected, then $(\mathcal{X},x)$ denotes the rooted premaniplex whose underlying premaniplex is the connected component of $x$ in $\mathcal{X}$.
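Since the product may be disconnected, the rooted product keeps only the connected component of the root; extracting that component is a breadth-first search over all colors. A short sketch in the same spirit as before (our encoding, not the paper's):

```python
from collections import deque

def component(perms, root):
    """Vertex set of the connected component of `root` in a premaniplex
    given as a list of color dicts perms[i][v] = v^i."""
    seen, queue = {root}, deque([root])
    while queue:
        v = queue.popleft()
        for p in perms:          # follow the dart of every color
            if p[v] not in seen:
                seen.add(p[v])
                queue.append(p[v])
    return seen

# a disconnected rank-2 premaniplex: two disjoint cycles alternating colors 0, 1
perms = [{0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4, 6: 7, 7: 6},
         {1: 2, 2: 1, 3: 0, 0: 3, 5: 6, 6: 5, 7: 4, 4: 7}]
```

Rooting at $0$ selects the component $\{0,1,2,3\}$; rooting at $4$ selects the other one.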
If $(\mathcal{X},x)$ and $(\mathcal{Y},y)$ are rooted premaniplexes and $(\mathcal{Y},\eta)$ is a voltage operator, then $(\mathcal{X},x)\ertimes(\mathcal{Y},y)$ denotes the rooted premaniplex $(\mathcal{X}\ertimes \mathcal{Y}, (x,y))$. \begin{exam}\label{eg:sections} \leavevmode Let $\mathcal{X}$ be an $n$-premaniplex, let $-1 \leq k < \ell \leq n$ and let $(\mathcal{Y},\eta) = ({\mathbf{1}}^{\ell - k - 1},[r_{k+1}, \dots, r_{\ell-1}])$. Then $\mathcal{X} \ertimes \mathcal{Y}$ is a graph whose connected components are the $(k,\ell)$-sections of $\mathcal{X}$. In particular, if $k=-1$ and $\mathcal{X}$ is a maniplex, then $\mathcal{X} \ertimes \mathcal{Y}$ determines the set of all $\ell$-faces of $\mathcal{X}$. Moreover, if $y$ denotes the only vertex in $\mathcal{Y}$, then for each $x \in \mathcal{X}$ the rooted premaniplex $(\mathcal{X},x)\ertimes (\mathcal{Y},y)$ is the $\ell$-face of $\mathcal{X}$ that contains $x$. \end{exam} Note that if $\mathcal{Y}$ is disconnected, then $\mathcal{X}\ertimes\mathcal{Y}$ consists of disjoint copies of $\mathcal{X}\rtimes_{\eta_i}\mathcal{Y}_i$, where $\mathcal{Y}_i$ runs over the connected components of $\mathcal{Y}$ and $\eta_i$ is the restriction of $\eta$ to $\fg(\mathcal{Y}_i)$. In a recent paper \cite{CunninghamPellicerWilliams_StratifiedOperationsManiplexes_preprint} the term \emph{stratified operations} is introduced. These are operations on maniplexes defined in terms of a set $A$ of \emph{strata} admitting a left action of $\mon(\mathcal{U})$ and a function $\varphi$ that assigns a monodromy to each pair $(a,r_i)$, where $a \in A$ and $r_i$ is a generator of $\mon(\mathcal{U})$. We can describe stratified operations in terms of voltage operations.
If $\oo$ is a stratified operation with strata set $A$, we can define the voltage operator $(\mathcal{Y},\eta)$ by taking the vertices of $\mathcal{Y}$ to be the set $A$, defining the adjacencies by $a^i=r_i a$, and defining the voltage assignment $\eta$ by $\eta(\pth{a})=\varphi(a,r_i)$. There may be a subtle difference between the stratified operation $\oo$ and the voltage operation defined by $(\mathcal{Y},\eta)$. Mainly, if $\mathcal{M}$ is a maniplex, then $\oo(\mathcal{M})$ must be a maniplex too, and therefore connected, while $\mathcal{M}\ertimes\mathcal{Y}$ might have more than one connected component. One can check that all stratified operations are the result of applying a voltage operation and then choosing one connected component. The converse, however, is not true. There are voltage operations that do not define stratified operations when one chooses a connected component of the result. This is because the definition of a stratified operation requires that the natural projection $\oo(\mathcal{M})\to\mathcal{M}$ be surjective. The snub operation described in Section~\ref{sec:ClassicalExamples} and discussed further in Section~\ref{sec:automorphisms} is a voltage operation, but not a stratified operation, since each connected component only uses half of the flags of the original maniplex when this is orientable. \subsection{The mix as a voltage operation on premaniplexes}\label{sec:mix} The mix of two regular polytopes was defined in terms of their automorphism groups (see \cite{McMullenSchulte_2002_MixRegularPolytope, MonsonPellicerWilliams_2014_MixingMonodromyAbstract}). In \cite{CunninghamPellicer_2018_OpenProblems$k$} the mix is defined for rooted maniplexes as a natural generalization of the parallel product of rooted maps (see \cite{Wilson_1994_ParallelProductsGroups}), and hence it is also defined for rooted polytopes.
If $(\mathcal{M}, \Phi)$ and $(\mathcal{N}, \Psi)$ are rooted maniplexes, then $(\mathcal{M},\Phi) \diamondsuit (\mathcal{N},\Psi)$ is the smallest maniplex that covers both $(\mathcal{M}, \Phi)$ and $(\mathcal{N}, \Psi)$. In particular, it is connected. We shall see that the mix can be defined as a voltage operation on premaniplexes. Let $\mathcal{Y}$ be an $n$-premaniplex. We denote by $\diamondsuiter$ the voltage assignment that maps each dart of color $i$ to $r_{i}$; in other words, $\diamondsuiter(\pth[\omega]{y}):=\omega$. We call $\diamondsuiter$ the \emph{mixing voltage (for $\mathcal{Y}$)} and we say that $(\mathcal{Y},\diamondsuiter)$ is a \emph{mix operator}. Then $\mathcal{X} \diamondsuittimes \mathcal{Y}$ is the \emph{mix} $\mathcal{X} \diamondsuit \mathcal{Y}$, and it is easy to see that this generalizes the same concept previously defined for regular abstract polytopes. By rooting the premaniplexes, this voltage operation generalizes the one for rooted maniplexes. With our definition of the mix as a voltage operation we allow the resulting graph to be disconnected. However, each connected component of $\mathcal{X} \diamondsuittimes \mathcal{Y}$ covers both $\mathcal{X}$ and $\mathcal{Y}$, so if at least one of them is simple (that is, a maniplex), these components are maniplexes as well. Moreover, the connected component of $\mathcal{X}\diamondsuit\mathcal{Y}$ containing the vertex $(x,y)$ is the smallest premaniplex that covers both $(\mathcal{X},x)$ and $(\mathcal{Y},y)$ (cf. \cite[Proposition 3.10]{CunninghamPellicer_2018_OpenProblems$k$}). We can use this definition of the mix to find the smallest cover of a maniplex satisfying some property, as we shall see in the next example. \begin{eg}\label{eg:DoubleCovers} Let $I\subset\lbrace0,1,\ldots,n-1\rbrace$.
We denote by ${\mathbf{2}}^n_I$ the $n$-premaniplex with $2$ vertices that has semiedges of the colors in $I$ at each vertex and links of the colors not in $I$ between the $2$ vertices (see Figure~\ref{fig:2_I}). We can use these premaniplexes to describe certain properties of maniplexes: for example, \emph{orientable} $n$-maniplexes (those that are bipartite) are those that cover ${\mathbf{2}}^n_\emptyset$, and \emph{vertex-bipartite} maniplexes (those whose $1$-skeleton is bipartite) are those that cover ${\mathbf{2}}^n_{\lbrace1,2,\ldots,n-1\rbrace}$. In general, if $\mathcal{X}$ covers ${\mathbf{2}}^n_I$ it means that there is a coloring of the vertices (or flags) of $\mathcal{X}$ with two colors such that $i$-adjacent flags are of the same color if and only if $i\in I$. See \cite{KoikePellicerRaggiWilson_2017_FlagBicoloringsPseudo, PellicerPotocnikToledo_2019_ExistenceResultTwo} for a detailed discussion on flag colorings. If a maniplex $\mathcal{X}$ does not cover ${\mathbf{2}}^n_I$, then $\mathcal{X}\diamondsuit {\mathbf{2}}^n_I$ is a maniplex that covers $\mathcal{X}$ but also covers ${\mathbf{2}}^n_I$. We call this the \emph{double cover of $\mathcal{X}$ with respect to $I$}. If $\mathcal{X}$ is non-orientable, $\mathcal{X}\diamondsuit {\mathbf{2}}^n_\emptyset$ is the so-called \emph{orientable double cover of $\mathcal{X}$}. Note that if $\mathcal{X}$ does cover ${\mathbf{2}}^n_I$, then $\mathcal{X}\diamondsuit {\mathbf{2}}^n_I = \mathcal{X} \diamondsuittimes {\mathbf{2}}^n_I$ (with $\diamondsuiter$ the mixing voltage) consists of two isomorphic copies of $\mathcal{X}$, but the flags of these copies will be colored by the vertices of ${\mathbf{2}}^n_I$. More precisely, if we color the vertices of ${\mathbf{2}}^n_I$ with white and black, then each vertex (flag) of $\mathcal{X}\diamondsuit {\mathbf{2}}^n_I$ is colored white or black according to its second coordinate.
So $\mathcal{X}\diamondsuit {\mathbf{2}}^n_I$ consists of two copies of $\mathcal{X}$, each together with an $I$-compatible coloring, and the two copies have opposite colorings. In particular, if $\mathcal{X}$ is orientable and $I=\emptyset$, the two copies in $\mathcal{X} \diamondsuit {\mathbf{2}}^{n}_{\emptyset}$ are mirror images; if $\mathcal{X}$ is a chiral maniplex, then $\mathcal{X} \diamondsuittimes {\mathbf{2}}^{n}_{\emptyset}$ consists of its two enantiomorphic forms. \end{eg} Before ending this section let us note that the mixing voltages are the only voltages for which the product is naturally commutative. More precisely: \begin{prop}\label{prop:MixConmuta} Let $(\mathcal{X},\eta_\mathcal{X})$ and $(\mathcal{Y},\eta_\mathcal{Y})$ be $(n,n)$-voltage operators. Then the function $(x,y)\mapsto (y,x)$ is an isomorphism between $\mathcal{X}\ertimes[\eta_\mathcal{Y}]\mathcal{Y}$ and $\mathcal{Y}\ertimes[\eta_\mathcal{X}]\mathcal{X}$ if and only if both $\eta_\mathcal{X}$ and $\eta_\mathcal{Y}$ are mixing voltages. \end{prop} \begin{proof} The alleged function is an isomorphism if and only if it maps $(x,y)^i = (\eta_\mathcal{Y}(\pth{y})x,y^i)$ to $(y,x)^i = (\eta_\mathcal{X}(\pth{x})y,x^i)$, for all $x\in\mathcal{X}$, $y\in\mathcal{Y}$ and $i\in\lbrace0,1,\ldots,n-1\rbrace$. This means that $\eta_\mathcal{Y}(\pth{y}) = r_i$ and $\eta_\mathcal{X}(\pth{x}) = r_i$ for all such $x$, $y$ and $i$; that is, both voltages are mixing voltages. \end{proof} \begin{question} Is it possible that $\mathcal{X}\ertimes[\eta_\mathcal{Y}]\mathcal{Y}$ and $\mathcal{Y}\ertimes[\eta_\mathcal{X}]\mathcal{X}$ are isomorphic without $\eta_\mathcal{X}$ and $\eta_\mathcal{Y}$ being mixing voltages? \end{question} \subsection{Connectivity}\label{sec:connectivity} We now turn our attention to the connectivity of $\mathcal{X}\ertimes \mathcal{Y}$.
We say that an $(n,m)$-voltage operator $(\mathcal{Y},\eta)$ \emph{preserves connectivity} if whenever $\mathcal{X}$ is connected, $\mathcal{X}\ertimes \mathcal{Y}$ is connected as well (in the context of \cite{CunninghamPellicerWilliams_StratifiedOperationsManiplexes_preprint}, these are called \emph{fully stratified operations}). We first analyze when we can find a path between two vertices of $\mathcal{X} \ertimes \mathcal{Y}$ and use this to determine when a voltage operator preserves connectivity. \begin{lemma}\label{prop:paths} Let $(x,y)$ and $(x',y')$ be two vertices of $\mathcal{X} \ertimes \mathcal{Y}$. There is a path in $\mathcal{X} \ertimes \mathcal{Y}$ that starts at $(x,y)$ and ends at $(x',y')$ if and only if there exists a path $W$ in $\mathcal{Y}$ that connects $y$ with $y'$ and such that $\eta(W)x= x'$. \end{lemma} \begin{proof} Assume there is a path $\pth[i_{k},\dots,i_{1}]{(x,y)}$ that ends at $(x',y')$. By Remark~\ref{rem:endsPaths}, the path $\pth[i_{k}, \dots, i_{1}]{(x,y)}$ ends at $\left( \eta(W)x, r_{i_{k}}\cdots r_{i_{1}}y\right)$, where $W$ denotes the path $\pth[i_{k}, \dots, i_{1}]{y}$. Hence, $\eta(W)x = x'$ and $r_{i_{k}}\cdots r_{i_{1}}y = y'$. Moreover, $r_{i_{k}}\cdots r_{i_{1}}y$ is precisely the endpoint of $W$, which implies that $W$ connects $y$ with $y'$. Conversely, assume that there exists a path $W$ that connects $y$ with $y'$ and satisfies $\eta(W)x = x'$. Since $W$ starts at $y$, it can be written as $W=\pth[i_{k}, \dots, i_{1}]{y}$ for some $i_{k}, \dots, i_{1} \in \left\lbrace 0, \dots, m-1 \right\rbrace$. This implies that $y' = r_{i_{k}}\cdots r_{i_{1}}y$. By Remark~\ref{rem:endsPaths}, the path $\pth[i_{k}, \dots, i_{1}]{(x,y)}$ ends at $\left( \eta(W)x, r_{i_{k}}\cdots r_{i_{1}}y\right) = (x',y')$.
\end{proof} \begin{prop} \label{prop:connectedness} The graph $\mathcal{X} \ertimes \mathcal{Y}$ is connected if and only if $\mathcal{Y}$ is connected and $\eta(\fg^{y_0}(\mathcal{Y}))$ acts transitively on $\mathcal{X}$ for some vertex $y_0 \in \mathcal{Y}$. \end{prop} \begin{proof} We start by assuming that $\mathcal{X} \ertimes\mathcal{Y}$ is connected, and let $x,x'\in \mathcal{X}$. First note that $\mathcal{X}\ertimes\mathcal{Y}$ covers $\mathcal{Y}$, so $\mathcal{Y}$ must be connected as well. Now we want to prove that $\eta(\fg^{y_0}(\mathcal{Y}))$ acts transitively on $\mathcal{X}$. Since $\mathcal{X} \ertimes\mathcal{Y}$ is connected, there is a path from $(x,y_0)$ to $(x',y_0)$. By Lemma~\ref{prop:paths} there exists $W\in \fg^{y_0}(\mathcal{Y})$ such that $\eta(W)x=x'$. Since $x$ and $x'$ were arbitrary, we have proved that $\eta(\fg^{y_0}(\mathcal{Y}))$ acts transitively on $\mathcal{X}$. Now we assume that $\mathcal{Y}$ is connected and $\eta(\fg^{y_0}(\mathcal{Y}))$ acts transitively on $\mathcal{X}$. We will prove that $\mathcal{X} \ertimes \mathcal{Y}$ is connected. Let $x_0 \in \mathcal{X}$ be fixed, and let $x\in\mathcal{X}$ and $y\in \mathcal{Y}$. Since $\mathcal{Y}$ is connected, there is a path $V$ from $y_0$ to $y$ in $\mathcal{Y}$. Let $\sigma=\eta(V)$. Since $\eta(\fg^{y_0}(\mathcal{Y}))$ acts transitively on $\mathcal{X}$, there exists $W \in \fg^{y_0}(\mathcal{Y})$ such that $\eta(W)x_0=\sigma^{-1}x$. Then the path $VW$ (first $W$, then $V$) goes from $y_0$ to $y$ and satisfies $\eta(VW)x_0=\sigma\eta(W)x_0=x$. By Lemma~\ref{prop:paths}, the existence of this path implies that there exists a path in $\mathcal{X} \ertimes \mathcal{Y}$ from $(x_0,y_0)$ to $(x,y)$. That is, $(x_0,y_0)$ is connected to every vertex of $\mathcal{X} \ertimes \mathcal{Y}$, implying that $\mathcal{X} \ertimes \mathcal{Y}$ is connected.
\end{proof} \begin{coro}\label{coro:connected} A voltage operator $(\mathcal{Y},\eta)$ preserves connectivity if and only if $\mathcal{Y}$ is connected and $\eta(\fg^{y_0}(\mathcal{Y})) = \mon(\mathcal{U})$ for some $y_0\in \mathcal{Y}$. \end{coro} In \cite{CunninghamDelRioFrancosHubardToledo_2015_SymmetryTypeGraphs} the authors give a way to find generators of the automorphism group of a maniplex, given its symmetry type graph (STG). They do so in terms of a spanning tree of the STG with trivial voltage in all its darts. Hence, it will be helpful to see Corollary~\ref{coro:connected} in the light of such a spanning tree of $\mathcal{Y}$. \begin{coro} Let $(\mathcal{Y},\eta)$ be a voltage operator. Assume that $\mathcal{Y}$ is connected and has a spanning tree $T$ such that all darts in $T$ have trivial voltage. Then $(\mathcal{Y},\eta)$ preserves connectivity if and only if $\mon(\mathcal{U})$ is generated by the voltages of the darts of $\mathcal{Y}$ not in $T$. \end{coro} \begin{proof} The group $\fg^{y_0}(\mathcal{Y})$ is generated by the paths $C_d$, where $d$ varies among the darts of $\mathcal{Y}$ not in $T$ and $C_d$ denotes the path that starts at $y_0$, goes to the initial vertex of $d$ through $T$, then uses the dart $d$, and then goes back to $y_0$ through $T$. This implies that $\eta(\fg^{y_0}(\mathcal{Y}))$ is generated by the voltages of those paths, and $\eta(C_d)=\eta(d)$. \end{proof} \section{More examples of voltage operations} \label{sec:ClassicalExamples} In this section we show that several known operations on maps and polytopes can be seen as voltage operations. In fact, once an operation on polytopes (or maps) is described as a voltage operation via its voltage operator, the operation becomes defined for all maniplexes and premaniplexes.
\subsubsection*{Wythoffian constructions} We start by recalling the Wythoffian constructions from convex regular polytopes (see for example \cite{coxeter1999beauty, schulte2016wythoffian}). Given a regular convex polytope, its symmetry group is generated by reflections $\lbrace\rho_0, \dots, \rho_{n-1}\rbrace$. We call these reflections the {\em generating reflections} of the group and note that their mirrors bound a cone that, intersected with the polytope, results in a fundamental region for the polytope with respect to its symmetry group. For a Wythoffian construction, first choose a non-empty set $A$ of generating reflections and pick a point $v$ of the fundamental region that is not fixed by any of the reflections in $A$, but is fixed by every generating reflection not in $A$ (if any exist). The vertices of the new convex polytope are the orbit of $v$ under $\gen{\rho_0, \dots, \rho_{n-1}}$ and the polytope is the convex hull of such vertices. This can be represented on the Coxeter diagram (a graph with $n$ nodes, each representing a generating reflection, with an edge between two nodes labeled by the order of the product of the corresponding reflections whenever this order is bigger than $2$) by marking the nodes representing the generating reflections in $A$. If $n=3$, then $A$ is a non-empty subset of a $3$-element set, so there are seven possibilities for $A$, each giving a different construction. In particular, for $A = \lbrace\rho_0\rbrace$ and $A = \lbrace\rho_2\rbrace$ the Wythoffian constructions give rise to the identity and dual operations, respectively, which have been given as voltage operations in Example~\ref{eg:d-auto}. It is not difficult to generalize the Wythoffian constructions of regular polyhedra to maps on surfaces, by investigating the flag adjacencies of the constructed polyhedra.
In particular, the medial and truncation of a map have been studied in \cite{hubard2013medial, HubardOrbanicIvicWeiss_2009_MonodromyGroupsSelf} and \cite{OrbanicPellicerWeiss_2010_MapOperations$k$}, respectively. Figure~\ref{fig:whytoffian} shows voltage operators $(\mathcal{Y}, \eta)$ for five of the Wythoffian constructions, so that each of them can be seen as a voltage operation. (As pointed out above, the two remaining ones correspond to $A = \lbrace\rho_0\rbrace$ and $A = \lbrace\rho_2\rbrace$, which are the identity and dual operations, respectively.) \begin{figure} \caption{Medial.} \label{fig:medial} \caption{Truncation.} \label{fig:truncation} \caption{Truncation of the dual.} \label{fig:trunc_dual} \caption{$A=\left\lbrace \rho_{0}, \rho_{2} \right\rbrace$.} \label{fig:whyt_02} \caption{$A=\left\lbrace \rho_{0}, \rho_{1}, \rho_{2} \right\rbrace$.} \label{fig:whyt_012} \caption{Wythoffian operators for rank $3$. The edges in red, green and blue represent $0$-, $1$- and $2$-adjacencies, respectively.} \label{fig:whytoffian} \end{figure} \begin{figure} \caption{Snub operator.} \label{fig:snub} \end{figure} The \emph{omnitruncation} in rank $3$ (Figure~\ref{fig:whyt_012}) can be easily generalized to higher dimensions for a regular convex polytope $\mathcal{P}$: it corresponds to the Wythoffian construction where $A$ is the set of all reflections $\lbrace\rho_0, \rho_1, \dots, \rho_{n-1}\rbrace$. We can define this operation when $\mathcal{P}$ is an abstract polytope (regular or not) in the following way. For $i\in\lbrace0,1,\ldots,n-1\rbrace$, the $i$-faces of $\ot(\mathcal{P})$ are the chains of $\mathcal{P}$ that contain neither the greatest nor the least element and have $n-i$ elements, and given two faces $C$ and $C'$ of $\ot(\mathcal{P})$ we say that $C\leq_{\ot(\mathcal{P})} C'$ if and only if $C'\subset C$. In particular, the vertices of $\ot(\mathcal{P})$ are the flags of $\mathcal{P}$ and an edge of the form $\Phi\setminus\lbrace\Phi_i\rbrace$ joins the vertices $\Phi$ and $\Phi^i$.
This means that the 1-skeleton of $\ot(\mathcal{P})$ is the flag graph of $\mathcal{P}$. Another way to construct $\ot(\mathcal{P})$ is as the colorful polytope of the flag graph of $\mathcal{P}$ \cite{AraujoPardoHubardOliverosSchulte_2013_ColorfulPolytopesGraphs}. The omnitruncation in rank $n$ can be thought of as a voltage operation. To see this, use the following construction for the voltage operator. Let $\mathcal{S}$ be the flag graph of an $(n-1)$-simplex. First, change the color of each edge by increasing it by 1 (i.e., if two flags of $\mathcal{S}$ are $i$-adjacent, they will be joined by an edge of color $i+1$). All these edges will have the identity voltage. Note that now each vertex has an edge of color $i$ for each $i\in \lbrace1,2,\dots, n-1\rbrace$. Now, label each connected component of the subgraph with edges of colors $1, \dots, n-2$ with a different number from $0$ to $n-2$. Then, add a semiedge of color $0$ at each vertex with voltage $r_j$, where $j$ is the label of the connected component of the vertex. The resulting voltage operator $(\mathcal{Y},\eta)$ satisfies that the flag graph of $\ot(\mathcal{P})$ is $\mathcal{P}\ertimes \mathcal{Y}$. As can be expected, all Wythoffian constructions in higher dimensions can be seen as voltage operations. We do not give all the voltage operators here, as for rank $n$ such an operator can have up to $n!$ vertices. However, Theorem~\ref{thm:derivedAsProduct} gives a way to find each of them. The {\em snub} of a polyhedron can be thought of as a generalized Wythoffian construction, where the rotational subgroup is used. We shall not give details of this kind of construction, but rather note some interesting differences between the snub and the Wythoffian constructions described above. The {\em snub operator} is given in Figure~\ref{fig:snub}. In contrast with the Wythoffian operations, which preserve connectivity of any maniplex, the snub operator only preserves connectivity for non-orientable premaniplexes.
If we apply this snub operation to an orientable premaniplex $\mathcal{X}$ (in particular to a regular convex polyhedron) we get two copies of what is usually referred to as \emph{the snub $\mathcal{X}$}, each copy having all the ``rotational'' symmetry of $\mathcal{X}$ but not the reflection symmetry of $\mathcal{X}$. We will explore this phenomenon further in Section\nobreakspace \ref {sec:automorphisms}. \subsubsection*{Prisms and pyramids over polytopes} Let $\mathcal{P}$ be an abstract polytope. The \emph{pyramid over $\mathcal{P}$}, denoted by $\pyr(\mathcal{P})$, is the poset $\mathcal{P}\times \lbrace0,1\rbrace$ with the product order, that is, $(F,\ell)\leq(G,\ell')$ if and only if $F\leq G$ and $\ell\leq \ell'$. Similarly, the \emph{prism over $\mathcal{P}$}, denoted by $\prism(\mathcal{P})$, is defined as the poset $((\mathcal{P}\setminus\mathcal{P}_{-1})\times \Lambda) \cup\lbrace F_{-1}\rbrace$ where $\Lambda$ is the poset $\lbrace\lbrace\lambda_0\rbrace,\lbrace\lambda_1\rbrace,\lbrace\lambda_0,\lambda_1\rbrace\rbrace$ ordered by inclusion and $F_{-1}$ is the unique minimum element of the prism. Both the prism and the pyramid over an $n$-polytope are $(n+1)$-polytopes. They are particular cases of products of polytopes, in the sense of \cite{GleasonHubard_2018_ProductsAbstractPolytopes}: the pyramid is the join product by a vertex, while the prism is the direct product by an edge. One can see that each flag $\Psi$ in $\pyr(\mathcal{P})$ is of the form \[ \Psi = \lbrace(\Phi_{-1},0),(\Phi_0,0),\ldots,(\Phi_t,0),(\Phi_t,1),(\Phi_{t+1},1),\ldots,(\Phi_n,1)\rbrace, \] where $\Phi$ is a flag in $\mathcal{P}$ and $t\in\lbrace-1,0,1,\ldots,n\rbrace$. Hence, we identify the set of flags of $\pyr(\mathcal{P})$ with $\mathcal{F}(\mathcal{P})\times \lbrace-1,0,\ldots,n\rbrace$.
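This identification already determines the number of flags of the pyramid: $|\mathcal{F}(\pyr(\mathcal{P}))| = |\mathcal{F}(\mathcal{P})|\cdot(n+2)$. The following small computational sketch (illustrative only; all names are ours, and it uses the standard fact that a polyhedron has $4E$ flags) checks this count for the pyramid over a square.

```python
# Illustrative check: flags of pyr(P) are identified with
# F(P) x {-1, 0, ..., n}, so |flags(pyr(P))| = |F(P)| * (n + 2).

def polygon_flags(q):
    # a q-gon has 2q flags: one per incident (vertex, edge) pair
    return 2 * q

def polyhedron_flags(num_edges):
    # standard fact: every edge of a polyhedron lies in exactly 4 flags
    return 4 * num_edges

n = 2                                   # the square is an n-polytope with n = 2
flags_base = polygon_flags(4)           # 8 flags of the square
flags_pyramid = flags_base * (n + 2)    # identification F(P) x {-1, 0, ..., n}

# the square pyramid has 8 edges (4 in the base, 4 lateral)
assert flags_pyramid == polyhedron_flags(8) == 32
```

The two counts agree: the square pyramid has $32$ flags, as predicted by the identification.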
In a similar way, one can identify the set of flags of $\prism(\mathcal{P})$ with $\mathcal{F}\times \lbrace0,1,\ldots,n\rbrace\times\lbrace\lambda_0,\lambda_1\rbrace$, where the last coordinate tells us if the 0-face is of the form $(\Phi_0,\lbrace\lambda_0\rbrace)$ or $(\Phi_0,\lbrace\lambda_1\rbrace)$. Let us first consider the pyramid. The $i$-adjacencies of the flags (see \cite{GleasonHubard_2018_ProductsAbstractPolytopes}), for $i \in \lbrace0,1,\dots, n\rbrace$, are given by \[ (\Phi,t)^i=\begin{cases} (\Phi^i,t) & \text{if } 0\leq i<t,\\ (\Phi,t-1) & \text{if } i=t,\\ (\Phi,t+1) & \text{if } i=t+1,\\ (\Phi^{i-1},t) & \text{if } t+1<i. \end{cases} \] Let $(\mathcal{Y},\eta)$ be the $(n,n+1)$-voltage operator whose vertices are the numbers $\lbrace-1,0,\ldots,n\rbrace$ with the $i$-adjacencies (for $i \in \lbrace0,1,\dots, n\rbrace$) given by: \[ t^i=\begin{cases} t & \text{if } i<t \text{ or } t+1< i, \\ t-1 & \text{if } i=t,\\ t+1 & \text{if } i=t+1, \end{cases} \] and the voltage assignment given by: \[ \eta(\pth{t})=\begin{cases} r_i & \text{if } i<t,\\ 1 & \text{if } i=t \text{ or } i=t+1,\\ r_{i-1} & \text{if } t+1<i. \end{cases} \] (see Figure\nobreakspace \ref {fig:pyr}). Then the flag graph of $\pyr(\mathcal{P})$ is $\mathcal{G}(\mathcal{P})\ertimes \mathcal{Y}$. \begin{figure} \caption{Voltage operator for the pyramid over an $n$-polytope} \label{fig:pyr} \end{figure} We now turn our attention to the prism, where the $i$-adjacencies of the flags (see \cite{GleasonHubard_2018_ProductsAbstractPolytopes}), for $i \in \lbrace0,1,\dots, n\rbrace$, are given by \[ (\Phi,t,\lambda)^i=\begin{cases} (\Phi^i,t,\lambda) & \text{if } i<t,\\ (\Phi,t,\lambda') & \text{if } 0=i=t, \\ (\Phi,t-1,\lambda) & \text{if } 0<i=t,\\ (\Phi,t+1,\lambda) & \text{if } i=t+1,\\ (\Phi^{i-1},t,\lambda) & \text{if } t+1<i,\\ \end{cases} \] where $\lambda'$ is $\lambda_1$ if $\lambda=\lambda_0$ and vice versa.
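As a quick consistency check, the vertex adjacencies $t^i$ of the pyramid operator above must be involutions on $\lbrace-1,0,\ldots,n\rbrace$ for $(\mathcal{Y},\eta)$ to be a premaniplex. A short computational sketch (the helper name `pyr_adj` is ours, chosen for illustration):

```python
# Sketch: the vertex adjacencies t^i of the pyramid operator on
# {-1, 0, ..., n}, following the displayed case analysis.
def pyr_adj(t, i):
    if i == t:
        return t - 1
    if i == t + 1:
        return t + 1
    return t  # i < t or t + 1 < i: a semiedge at t

n = 4  # an arbitrary small rank, for illustration
for i in range(n + 1):                  # colors 0, ..., n
    for t in range(-1, n + 1):          # vertices -1, 0, ..., n
        s = pyr_adj(t, i)
        assert -1 <= s <= n             # adjacency stays in the vertex set
        assert pyr_adj(s, i) == t       # each r_i acts as an involution
```

The check also confirms that the edges of color $i=t$ link consecutive vertices $t$ and $t-1$, so the operator is connected.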
Let $(\mathcal{Y}',\eta')$ be the $(n,n+1)$-voltage operator whose vertices are $\lbrace0,\ldots,n\rbrace\times\lbrace\lambda_0,\lambda_1\rbrace$ with the adjacencies given by: \[ (t,\lambda)^i=\begin{cases} (t,\lambda) & \text{if } i<t \text{ or } i>t+1,\\ (t,\lambda') & \text{if } 0=i=t, \\ (t-1,\lambda) & \text{if } i=t>0,\\ (t+1,\lambda) & \text{if } i=t+1,\\ \end{cases} \] and the voltage assignment given by: \[ \eta'(\pth{t})=\begin{cases} r_i & \text{if } i<t,\\ 1 & \text{if } i=t \text{ or } i=t+1,\\ r_{i-1} & \text{if } t+1<i. \end{cases} \] We show this operator in Figure\nobreakspace \ref {fig:prism}. The flag graph of $\prism(\mathcal{P})$ is $\mathcal{G}(\mathcal{P})\ertimes[\eta']\mathcal{Y}'$. \begin{figure} \caption{Voltage operator for the prism over an $n$-polytope} \label{fig:prism} \end{figure} \subsubsection*{The trapezotope} Given a polytope $\mathcal{P}$, we define the \emph{trapezotope over $\mathcal{P}$}, denoted by $\trp(\mathcal{P})$, as follows: the faces of rank $i$ of the trapezotope are the ordered pairs $(F,G)$ where $F$ and $G$ are faces of $\mathcal{P}$, $F\leq G$ and $\rk(G)-\rk(F)=i$. In particular, the vertices of $\trp(\mathcal{P})$ are the pairs $(F,F)$, which are in correspondence with the faces of $\mathcal{P}$, and the edges of $\trp(\mathcal{P})$ are in correspondence with the edges of the Hasse diagram of $\mathcal{P}$. Thus, the $1$-skeleton of the trapezotope of $\mathcal{P}$ is the Hasse diagram of $\mathcal{P}$. The trapezotope over $\mathcal{P}$ is in fact the dual of the \emph{antiprism over $\mathcal{P}$}, defined in \cite{GleasonHubard_2021_AntiprismAbstractPolytope}.
In \cite{GleasonHubard_2021_AntiprismAbstractPolytope} the authors showed that the trapezotope of a polytope $\mathcal{P}$ (or rather its dual) is in fact a polytope and that its flags are in one-to-one correspondence with the set $\mathcal{F}(\mathcal{P})\times \mathbb{Z}_2^{n+1}$. When we denote a flag of $\trp(\mathcal{P})$ by $(\Phi,v)$, the $i$-th coordinate of the vector $v$ tells us if the faces $(\Phi,v)_i=(F_i,G_i)$ and $(\Phi,v)_{i-1}=(F_{i-1},G_{i-1})$ differ in the first coordinate (when $v_i=0$) or in the second one (when $v_i=1$). Using this natural correspondence, one can prove that the $i$-adjacent flag of the flag $(\Phi,v)$ is: \[ (\Phi,v)^i= \begin{cases} (\Phi,v+e_0) & \text{if } i=0,\\ (\Phi^j,v) & \text{if } v_{i+1}=v_i \text{ and } i\neq 0,\\ (\Phi,v+e_i+e_{i+1}) & \text{if } v_{i+1} \neq v_i \text{ and } i\neq 0, \end{cases} \] where $j$ is the rank of $F_i$ if $v_i=0$ or the rank of $G_i$ if $v_i=1$, and $e_{i}$ denotes the vector in $\mathbb{Z}_2^{n+1}$ all of whose coordinates are zero except the $i$-th one. Let $\mathcal{Y}$ be the premaniplex whose vertices are $\mathbb{Z}_2^{n+1}$ and whose adjacencies are given by: \[ v^i= \begin{cases} v+e_0 & \text{if } i=0,\\ v & \text{if } v_{i+1}=v_i \text{ and } i\neq 0,\\ v+e_i+e_{i+1} & \text{if } v_{i+1} \neq v_i \text{ and } i\neq 0, \end{cases} \] and let $\eta$ be the voltage assignment given by \[ \eta(\pth{v})= \begin{cases} 1 & \text{if } i=0 \text{ or } v_{i+1} \neq v_i,\\ r_j & \text{if } v_{i+1} = v_i \text{ and } i\neq 0, \end{cases} \] where $j$ is defined the same way as before. One can confirm that the voltage operator $(\mathcal{Y},\eta)$ satisfies that for every polytope $\mathcal{P}$, the flag graph of $\trp(\mathcal{P})$ is $\mathcal{G}(\mathcal{P})\ertimes \mathcal{Y}$. \begin{figure} \caption{The trapezotope operator in rank $3$.
Edges in red, green and blue represent $0$-, $1$- and $2$-adjacencies, respectively.} \label{fig:tpr} \end{figure} \subsubsection*{The $k$-bubble} The last example that we shall see in this section is the $k$-bubble of a polytope. This operation was introduced by Helfand in \cite{Helfand_2013_ConstructionsKOrbit_PhDThesis} as a generalization of the truncation of the vertices. Let $\mathcal{P}$ be an abstract $n$-polytope. The $k$-bubble $[\mathcal{P}]_k$ of $\mathcal{P}$ is defined as follows. Let $\mathcal{P}_i$ denote the set of $i$-faces of $\mathcal{P}$. The set $([\mathcal{P}]_k)_i$ of $i$-faces of $[\mathcal{P}]_{k}$ is defined by \[([\mathcal{P}]_k)_i= \begin{cases} \mathcal{P}_i & \text{ if } i<k, \\ \lbrace(F,G):F\in \mathcal{P}_k,G\in \mathcal{P}_{k+1}, F<G\rbrace & \text{ if } i=k,\\ \mathcal{P}_i \cup \lbrace(F,G):F\in \mathcal{P}_k,G\in \mathcal{P}_{i+1}, F<G\rbrace & \text{ if } i>k. \end{cases} \] The order in $[\mathcal{P}]_k$ is defined by the following rules: Let $H$ and $H'$ be faces in $\mathcal{P}$ with ranks different from $k$, let $F$ be a $k$-face of $\mathcal{P}$ and let $G$ and $G'$ be faces of $\mathcal{P}$ properly containing $F$: \begin{itemize} \item $H<H'$ in $[\mathcal{P}]_k$ if and only if $H<H'$ in $\mathcal{P}$. \item $H<(F,G)$ in $[\mathcal{P}]_k$ if and only if $H<F$ in $\mathcal{P}$. \item $(F,G)<H$ in $[\mathcal{P}]_k$ if and only if $G\leq H$ in $\mathcal{P}$. \item $(F,G)<(F,G')$ in $[\mathcal{P}]_k$ if and only if $G<G'$ in $\mathcal{P}$. \end{itemize} One can see that the flags of $[\mathcal{P}]_k$ are of the form \[\lbrace F_0,\ldots,F_{k-1},(F_k,F_{k+1}),\ldots,(F_k,F_\ell),F_\ell,\ldots,F_{n-1}\rbrace\] where $\lbrace F_0,\ldots,F_{n-1}\rbrace$ is a flag in $\mathcal{P}$. So every flag of $[\mathcal{P}]_k$ is determined by a flag $\Phi$ of $\mathcal{P}$ and a number $\ell\in\lbrace k+1,\ldots,n\rbrace$.
If we denote this flag by $(\Phi,\ell)$ then the flag adjacencies on $[\mathcal{P}]_k$ are described by the following rules: \[ (\Phi,\ell)^i= \begin{cases} (r_i\Phi,\ell) & \text{ if } i\leq k,\\ (r_{i+1}\Phi,\ell) & \text{ if } k+1\leq i \leq \ell-2, \\ (\Phi,\ell-1) & \text{ if } k+1\leq i =\ell-1,\\ (\Phi,\ell+1) & \text{ if } i=\ell,\\ (r_i\Phi,\ell) & \text{ if } i \geq \ell+1. \end{cases} \] \begin{comment} Now we are ready to construct the voltage operator $(\mathcal{Y},\eta)$. The vertex set will be $\lbrace k+1,\ldots,n\rbrace$. Let $k+1\leq \ell \leq n$. Given a flag $(\Phi,\ell)\in p^{-1}(\ell)$ we have to find its $i$-adjacent flag. This will depend on how $i$ compares to $k$ and to $\ell$: For $i\leq k$ the flag $i$-adjacent to $(\Phi,\ell)$ is $(r_i\Phi,\ell)$, so we draw a semi-edge of color $i$ at the vertex $\ell$ in $\mathcal{Y}$ and assign to it the voltage $r_i$. For $k+1\leq i \leq \ell-2$ we can see that the $i$-adjacent flag to $(\Phi,\ell)$ is $(r_{i+1}\Phi,\ell)$, so we draw also a semi-edge of color $i$ at the vertex $\ell$ but we assign $r_{i+1}$ as its voltage. The $\ell$-adjacent flag to the flag $(\Phi,\ell)$ is $(\Phi,\ell+1)$, so we draw an edge of color $\ell$ between the vertices $\ell$ and $\ell+1$ in $\mathcal{Y}$ and assign the trivial voltage to it. Finally, if $i>\ell$, the $i$-adjacent flag to $(\Phi,\ell)$ is again $(r_i\Phi,\ell)$, so we draw a semi-edge of color $i$ at the vertex $\ell$ with voltage $r_i$. We end up with the voltage operator in Figure\nobreakspace \ref {fig:kBubble}. \end{comment} With this information we can see that the $k$-bubble is described by the voltage operator in Figure\nobreakspace \ref {fig:kBubble}. \begin{figure} \caption{The $k$-bubble operator.
The vertices (from left to right) can be labeled with $\ell \in \lbrace k+1, \dots, n\rbrace$ and the voltage of the dart $\pth{\ell}$ is trivial.} \label{fig:kBubble} \end{figure} \subsubsection*{The $2^\mathcal{M}$ and $\hat{2}^\mathcal{M}$ operations} Given a finite maniplex $\mathcal{M}$ of rank $n$, the maniplex $2^\mathcal{M}$ was defined in \cite{douglas2015twist} as follows: Label the $0$-faces of $\mathcal{M}$ with the set $\lbrace1, \dots, \ell \rbrace$. The flag set of $2^\mathcal{M}$ is $\mathcal{M}\times \mathbb{Z}_2^{\ell}$. Then the adjacencies of a flag $(\Psi, v)$, with $\Psi\in\mathcal{M}$ and $v \in \mathbb{Z}_2^{\ell}$, are given by the following formula, where $e_{k} \in \mathbb{Z}_2^{\ell}$ is the vector that has $0$ in all its entries except the $k$-th one: \[ (\Psi,v)^i= \begin{cases} (\Psi,v+e_k) & \text{if } i=0, \text{ and the $0$-face of $\Psi$ is labeled with $k$,}\\ (\Psi^{i-1},v) &\text{otherwise}. \end{cases} \] It is well-known that if $\mathcal{M}$ is the flag graph of a polytope $\mathcal{P}$, then $2^\mathcal{M}$ is the flag graph of the polytope $2^\mathcal{P}$ defined by Danzer in \cite{danzer1984regular} (see \cite{Mochan_2021_AbstractPolytopesTheir_PhDThesis} for details). The maniplex $\hat{2}^\mathcal{M}$ is defined as the dual of $2^{\mathcal{M}^{\ast}}$, where $\mathcal{M}^{\ast}$ is the dual of $\mathcal{M}$. Thus, the set of flags of $\hat{2}^\mathcal{M}$ is $\mathcal{M} \times \mathbb{Z}_2^{m}$, where the set of facets of $\mathcal{M}$ is labeled with the set $\lbrace1, \dots, m\rbrace$. The adjacencies in $\hat{2}^\mathcal{M}$ are given by: \[ (\Psi,v)^i= \begin{cases} (\Psi^i,v)& \text{if } i<n,\\ (\Psi,v + e_k) & \text{if } i=n, \text{ and the facet of $\Psi$ is labeled with $k$,} \end{cases} \] where $e_{k}\in\mathbb{Z}_2^{m}$ is the vector that has $0$ in all its entries except the $k$-th one. These operations on maniplexes are very interesting.
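The adjacency rules for $2^\mathcal{M}$ can be checked computationally on a small case. The sketch below (illustrative; all names are ours, and it labels the $0$-faces of the square $0,\dots,3$ rather than $1,\dots,\ell$) builds the flags of $2^{\lbrace4\rbrace}$, verifies that each $r_i$ is an involution, and confirms that the resulting flag graph is connected with $8\cdot 2^4 = 128$ flags.

```python
from itertools import product

# Flags of the square {4} as pairs (v, e): vertex v on edge e;
# r_0 swaps the vertices of e, r_1 swaps the edges at v.
edges = [frozenset({j, (j + 1) % 4}) for j in range(4)]
flags = [(v, e) for e in edges for v in sorted(e)]      # the 8 flags of {4}

def r(i, flag):
    v, e = flag
    if i == 0:
        (w,) = e - {v}                 # the other vertex of e
        return (w, e)
    (e2,) = [g for g in edges if v in g and g != e]
    return (v, e2)                     # the other edge at v

# Flags of 2^{square}: (flag of {4}, vector in Z_2^4), one bit per 0-face.
big_flags = [(f, x) for f in flags for x in product((0, 1), repeat=4)]

def R(i, bf):
    f, x = bf
    if i == 0:                         # flip the bit of the 0-face of f
        k = f[0]
        return (f, tuple(b ^ (j == k) for j, b in enumerate(x)))
    return (r(i - 1, f), x)            # (Psi^{i-1}, v) otherwise

assert len(big_flags) == 8 * 2 ** 4 == 128
for i in range(3):                     # 2^{square} has rank 3
    assert all(R(i, R(i, bf)) == bf for bf in big_flags)

# connectivity: a search over the three colors reaches all 128 flags
seen, stack = {big_flags[0]}, [big_flags[0]]
while stack:
    bf = stack.pop()
    for i in range(3):
        nb = R(i, bf)
        if nb not in seen:
            seen.add(nb)
            stack.append(nb)
assert len(seen) == 128
```

The count $128$ agrees with the $4E$ flags of the toroidal map $\lbrace4,4\rbrace_{(4,0)}$, which has $32$ edges.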
In particular, in the context of voltage operations, Example\nobreakspace \ref {eg:2^maniplex} will show that one cannot find a premaniplex $\mathcal{Y}$ that satisfies that $2^\mathcal{M} = \mathcal{M} \ertimes \mathcal{Y}$ for every maniplex $\mathcal{M}$ (not even if we fix the rank of $\mathcal{M}$, or ask $\mathcal{M}$ to be regular). However, we shall see that if $\mathcal{M}$ is a regular premaniplex, then there exists a premaniplex $\mathcal{Y}_\mathcal{M}$ and a voltage assignment $\eta$ such that $\hat{2}^\mathcal{M} = \mathcal{M} \ertimes \mathcal{Y}_\mathcal{M}$. Let $\mathcal{M}$ be a regular $n$-premaniplex and let $\rho_0, \rho_1, \dots, \rho_{n-1}$ be the distinguished generators of $\aut(\mathcal{M})$ with respect to the base flag $\Phi$. We define the premaniplex $\mathcal{Y}:=\mathcal{Y}_\mathcal{M}$ as follows. The set of vertices of $\mathcal{Y}$ is the set $\mathbb{Z}_2^{m}$. Given $v \in \mathcal{Y}$, the entries of $v$ correspond to facets of $\mathcal{M}$. Assume that the facets of $\mathcal{M}$ are labeled with the set $\lbrace1, \dots, m\rbrace$, as above, in such a way that the facet corresponding to the base flag is labeled with $1$. Notice that every element in $\aut(\mathcal{M})$ permutes the facets of $\mathcal{M}$, which in turn induces an action of $\aut(\mathcal{M})$ on $\mathbb{Z}_2^{m}$ by permutation of coordinates. More precisely, for $\alpha \in \aut(\mathcal{M})$ and $v=(v_1, \dots, v_{m}) \in \mathbb{Z}_2^{m}$, let $v \alpha$ denote the vector $(v_{1\alpha^{-1}}, \dots, v_{m\alpha^{-1}})$, that is, the vector resulting from $v$ after permuting the coordinates according to $\alpha$. The adjacencies in $\mathcal{Y}$ are given as follows: for $i\in \lbrace0,1,\dots, n-1\rbrace$ there is an edge of color $i$ between $v$ and $v^i:=v \rho_{i}$. Further, there is an edge of color $n$ between $v$ and $v+e_{1}$. We now define the voltage assignment on $\mathcal{Y}$ as: \[ \eta(^iv)= \begin{cases} r_i& \text{if } i<n,\\ 1 & \text{if } i=n. 
\end{cases} \] We shall show that $\hat{2}^\mathcal{M} \cong\mathcal{M} \ertimes \mathcal{Y}$. For this, we recall that $\mathcal{M}$ is regular, and hence any flag of $\mathcal{M}$ can be written in a unique way as $\Phi\alpha$, where $\Phi$ is the base flag and $\alpha \in \aut(\mathcal{M})$. Consider $\varphi:\hat{2}^\mathcal{M} \to \mathcal{M} \ertimes \mathcal{Y}$ defined as \[ \varphi(\Phi\alpha, v)=(\Phi\alpha, v \alpha^{-1}).\] Then, $\varphi$ preserves incidences. In fact, we observe that if $k= 1\alpha$: \[\begin{aligned} \Big(\varphi(\Phi\alpha, v)\Big)^n &=\Big(\Phi\alpha, v \alpha^{-1} \Big)^n \\ &= \Big( \eta(^n(v \alpha^{-1}))(\Phi\alpha), ( v \alpha^{-1} )^n \Big) \\ &= \Big( \Phi\alpha, v \alpha^{-1} + e_{1} \Big) \\ &= \Big( \Phi\alpha, (v + e_{1 \alpha}) \alpha^{-1} \Big) \\ &= \Big( \Phi\alpha, (v+e_{k}) \alpha^{-1}\Big)\\ &= \varphi \Big( \Phi\alpha, v+e_{k}\Big) \\ &= \varphi\Big( (\Phi\alpha,v)^n \Big). \end{aligned}\] On the other hand, for $i<n$ we have that: \[\begin{aligned} \Big(\varphi( \Phi\alpha, v)\Big)^i &= \Big( \Phi\alpha, v \alpha^{-1}\Big)^i\\ &= \Big( \eta\big(^i(v \alpha^{-1})\big)(\Phi\alpha), (v\alpha^{-1})^i\Big)\\ &= \Big( r_i\Phi\alpha, (v\alpha^{-1})^i\Big)\\ &= \Big( \Phi^i\alpha, v\alpha^{-1}\rho_i \Big)\\ &= \Big( \Phi^i\alpha, v (\rho_i \alpha)^{-1} \Big)\\ &= \Big( \Phi \rho_{i} \alpha, v (\rho_i \alpha)^{-1} \Big)\\ &= \varphi \Big( \Phi\rho_i\alpha, v\Big)\\ &= \varphi \Big( \big((\Phi\alpha)^i, v\big)\Big) \\ &= \varphi\Big( (\Phi\alpha,v)^i \Big). \end{aligned}\] The computation above shows that $\hat{2}^\mathcal{M}$ and $\mathcal{M} \ertimes \mathcal{Y}$ are isomorphic. It is clear that if we drop the regularity condition on $\mathcal{M}$, the given voltage operator $(\mathcal{Y}, \eta)$ is not well defined, as it depends heavily on the existence of the distinguished generators of the automorphism group. Thus, we can ask ourselves the following questions.
\begin{question}\label{Q1} Is it true that for any maniplex $\mathcal{M}$, there exists a voltage operator $(\mathcal{Y}_\mathcal{M},\eta)$ such that $\hat{2}^\mathcal{M} \cong \mathcal{M} \ertimes \mathcal{Y}_\mathcal{M}$? \end{question} For example, we have not been able to determine if there exists such a voltage operator for the case when $\mathcal{M}$ is the pyramid over a digon. If the answer to the above question is negative, then we could ask: \begin{question} Give necessary and sufficient conditions on a maniplex $\mathcal{M}$ under which there exists a voltage operator $(\mathcal{Y}_\mathcal{M},\eta)$ such that $\hat{2}^\mathcal{M} \cong \mathcal{M} \ertimes \mathcal{Y}_\mathcal{M}$. \end{question} Note that a sufficient condition for the last question is for $\mathcal{M}$ to be regular, as shown above. However, this condition is likely not necessary. For example, we believe that if the symmetry type graph of $\mathcal{M}$ is itself a regular premaniplex, the above voltage operator might exist. \section{Automorphisms} \label{sec:automorphisms} In this section we will study the interplay between the automorphism groups of a premaniplex and the resulting premaniplex after a voltage operation, as well as their symmetry type graphs. Let $\mathcal{X}$ be a premaniplex and let $(\mathcal{Y}, \eta)$ be a voltage operator. Observe that the elements of $\aut(\mathcal{X})$ act as automorphisms on $\mathcal{X} \ertimes \mathcal{Y}$. Indeed, if $\gamma \in \aut(\mathcal{X})$, then the mapping $\overline{\gamma}: (x,y) \mapsto (x\gamma, y)$ induces an automorphism of $\mathcal{X} \ertimes \mathcal{Y}$. To see this, just note that $\overline{\gamma}$ commutes with the monodromies: \[\begin{aligned} r_{i} \left( (x,y) \overline{\gamma} \right) &= r_{i}\left( x\gamma,y \right) \\ &= \left( \eta (^{i}y)x \gamma, r_{i}y \right) \\ &= \left( \eta(^{i} y)x, r_{i}y \right)\overline{\gamma}\\ &= \left( r_{i}(x,y) \right) \overline{\gamma}. 
\end{aligned}\] Thus, $\aut(\mathcal{X}) \leq \aut(\mathcal{X} \ertimes \mathcal{Y})$ for every voltage operator $(\mathcal{Y}, \eta)$, and we may write $\gamma$ instead of $\overline{\gamma}$ to simplify notation. In particular, $\aut(\mathcal{X})$ acts by automorphisms on $\mathcal{X} \ertimes \mathcal{Y}$. Note that $\mathcal{X}$ naturally covers $\mathcal{X}/\Gamma$ for every group $\Gamma \leq \aut(\mathcal{X})$. As expected, $\mathcal{X} \ertimes \mathcal{Y}$ naturally covers $(\mathcal{X} \ertimes \mathcal{Y})/\Gamma$. In fact, we have a slightly more general result: \begin{thm}\label{thm:covers} Let $\mathcal{X}$ and $\mathcal{X}'$ be $n$-premaniplexes and let $(\mathcal{Y},\eta)$ be an $(n,m)$-voltage operator. If $\mathcal{X}$ covers $\mathcal{X}'$, then $\mathcal{X}\ertimes \mathcal{Y}$ covers $\mathcal{X}'\ertimes\mathcal{Y}$. In particular, when $\mathcal{X}'$ is obtained from $\mathcal{X}$ as a quotient by a group $\Gamma \leq \aut(\mathcal{X})$, we get that \[\mathcal{X}' \ertimes \mathcal{Y} \cong \left( \mathcal{X} / \Gamma \right) \ertimes \mathcal{Y} \cong \left( \mathcal{X} \ertimes \mathcal{Y} \right)/ \Gamma.\] \end{thm} \begin{proof} Let $\varphi:\mathcal{X}\to\mathcal{X}'$ be a covering homomorphism. If $\mathcal{X}'=\mathcal{X}/\Gamma$ we choose $\varphi$ to be the natural projection $x\mapsto x\Gamma$. Define $\varphi' : \mathcal{X} \ertimes \mathcal{Y} \to \mathcal{X}'\ertimes \mathcal{Y}$ by $\varphi': (x,y) \mapsto (\varphi(x), y)$. Now, using the fact that $\varphi$ is a homomorphism, we simply check that \[ \begin{aligned} r_i\left(\varphi'(x,y)\right) &= r_i\left(\varphi(x),y\right)\\ &= \left(\eta(\pth{y})\varphi(x), r_i y\right)\\ &= \left(\varphi(\eta(\pth{y})x), r_i y \right)\\ &= \varphi'\left( \eta(\pth{y})x, r_i y \right)\\ &= \varphi' \left(r_i(x,y)\right). \end{aligned} \] This proves that $\varphi'$ is a homomorphism, and it is trivial to see that it is surjective. Therefore it is a covering. 
In the particular case when $\mathcal{X}'=\mathcal{X}/\Gamma$ we get that $\varphi'(x,y) = (x\Gamma,y)$, but by using the natural action of $\Gamma$ on $\mathcal{X}\ertimes \mathcal{Y}$ we get that $(x\Gamma,y) = (x,y)\Gamma$. \end{proof} \begin{comment} \begin{thm}\label{thm:covers} Let $\mathcal{X}$ be an $n$-premaniplex and $\Gamma \leq \aut(\mathcal{X})$. Let $(\mathcal{Y},\eta)$ be an $(n,m)$-voltage operator. Then \[\left( \mathcal{X} \ertimes \mathcal{Y} \right)/ \Gamma \cong \left( \mathcal{X} / \Gamma \right) \ertimes \mathcal{Y}.\] \end{thm} \begin{proof} There is a natural bijection $\varphi : \left( \mathcal{X} \ertimes \mathcal{Y} \right) / \Gamma \to \left( \mathcal{X}/\Gamma \right) \ertimes \mathcal{Y}$ given by $\varphi: (x,y)\Gamma \mapsto (x\Gamma, y)$. As pointed out above, the elements of $\Gamma$ act as automorphisms of $\mathcal{X} \ertimes \mathcal{Y}$, implying that $r_{i}\left( (x,y)\Gamma \right) = \left( r_{i}(x,y) \right) \Gamma$. Hence, we obtain that \[r_{i}\left( (x,y)\Gamma \right) = \left( r_{i}(x,y) \right) \Gamma = (\eta(^{i}y)x, r_i y)\Gamma.\] On the other hand, \[r_{i}\left(x \Gamma, y \right) = \left( \eta(^{i}y)x \Gamma, r_{i}y \right), \] which implies that $\varphi$ is a premaniplex isomorphism. \end{proof} \end{comment} The above theorem is of particular interest when $\mathcal{X}$ is a maniplex and $\Gamma =\aut(\mathcal{X})$. In this case, $\mathcal{T}:=\mathcal{X}/\Gamma$ is the symmetry type graph of $\mathcal{X}$. Theorem\nobreakspace \ref {thm:covers} tells us that the symmetry type graph of $\mathcal{X} \ertimes \mathcal{Y}$ with respect to $\Gamma$ is precisely $\mathcal{T} \ertimes \mathcal{Y}$. However, it is important to remark that $\Gamma$ might be a proper subgroup of $\aut(\mathcal{X} \ertimes \mathcal{Y})$, that is, $\mathcal{T} \ertimes \mathcal{Y}$ might not be the symmetry type graph of $\mathcal{X} \ertimes \mathcal{Y}$, even when $\mathcal{X} \ertimes \mathcal{Y}$ is a maniplex. 
An example of such a situation will be given in Example\nobreakspace \ref {eg:PyramidMedialVolts}. This phenomenon will be explored more deeply in \cite{HubardMochanMontero_MoreVoltageOperations_preprint}. \emph{Words of caution}: the fact that $\aut(\mathcal{X})$ acts by automorphisms on $\mathcal{X}\ertimes\mathcal{Y}$ {\em does not} imply that it acts by automorphisms on each connected component of $\mathcal{X}\ertimes\mathcal{Y}$. For example, as pointed out in Section\nobreakspace \ref {sec:ClassicalExamples}, when $\mathcal{X}$ is an orientable (pre)maniplex of rank 3 and $(\mathcal{Y},\eta)$ is the snub operator of Figure\nobreakspace \ref {fig:snub}, $\mathcal{X}\ertimes \mathcal{Y}$ has two connected components, say the \emph{left snub} (whose flags are those of the form $(x,y)$ with $x\in\mathcal{X}$ a white flag), and the \emph{right snub} (whose flags are those of the form $(x,y)$ with $x\in\mathcal{X}$ a black flag). Each component has all the ``rotational'' symmetry of $\mathcal{X}$ but none of the ``reflection'' symmetries of $\mathcal{X}$. However, if there exists a reflection $\tau \in \aut(\mathcal{X})$, that is, an automorphism that interchanges white flags with black flags, then $\tau$ acts on $\mathcal{X}\ertimes \mathcal{Y}$ by swapping the left snub with the right snub. This means that if we consider rooted premaniplexes $(\mathcal{X},x)$ and $(\mathcal{Y},y)$, then the rooted premaniplex $(\mathcal{X}\ertimes \mathcal{Y}, (x,y))$ may not have all the symmetries of $\mathcal{X}$. In fact, we have the following result: \begin{prop} Let $(\mathcal{X},x)$ be a rooted premaniplex, $((\mathcal{Y},y),\eta)$ a rooted voltage operator and $\gamma\in\aut(\mathcal{X})$. Then $\gamma\in\aut(\mathcal{X}\ertimes \mathcal{Y}, (x,y))$ if and only if there exists a closed path $W$ based at $y$ such that $x\gamma = \eta(W)x$. \end{prop} \begin{proof} Assume that $\gamma\in\aut(\mathcal{X}\ertimes \mathcal{Y}, (x,y))$. 
This means that $(x,y)\gamma$ is an element of $(\mathcal{X}\ertimes \mathcal{Y}, (x,y))$; in other words, $(x,y)\gamma$ and $(x,y)$ are in the same connected component of $\mathcal{X}\ertimes \mathcal{Y}$. Hence, there exists a monodromy $\omega$ such that $\omega(x,y) = (x,y)\gamma$. Note that on one hand, $(x,y)\gamma = (x\gamma, y)$, while on the other hand $\omega(x,y) = (\eta(\pth[\omega]{y})x, \omega y)$. Thus, $(x\gamma, y) = (\eta(\pth[\omega]{y})x, \omega y)$, which implies that $y=\omega y$ and therefore $W:= \pth[\omega]{y} \in \fg^y(\mathcal{Y})$; moreover, $x\gamma = \eta(W)x$, as required. Conversely, suppose that there exists $W\in \fg^y(\mathcal{Y})$ such that $x\gamma = \eta(W)x$. The fact that $W\in \fg^y(\mathcal{Y})$ implies that there exists a monodromy $\omega$ such that $\omega y = y$ and $W=\pth[\omega]{y}$. Hence, $(x,y)\gamma = (x\gamma, y) = (\eta(W)x, y) = (\eta(\pth[\omega]{y})x, y) = (\eta(\pth[\omega]{y})x, \omega y) = \omega(x,y)$, which implies that $(x,y)\gamma$ and $(x,y)$ are in the same connected component of $\mathcal{X}\ertimes \mathcal{Y}$. Since we already know that $\gamma$ is an automorphism of $\mathcal{X}\ertimes \mathcal{Y}$, this implies that $\gamma\in\aut(\mathcal{X}\ertimes \mathcal{Y}, (x,y))$. \end{proof} Note that we may use Theorem\nobreakspace \ref {thm:covers} to determine when an operation is not a voltage operation, as we see in the following example. \begin{eg}\label{eg:2^maniplex} In this example we will prove that there is no $(n,n+1)$-voltage operator $(\mathcal{Y},\eta)$ such that $2^\mathcal{M} = \mathcal{M}\ertimes \mathcal{Y}$ for every $n$-maniplex $\mathcal{M}$ (as was hinted at in Question\nobreakspace \ref {Q1}). For a $2$-gon $\lbrace2\rbrace$, the polytope $2^{\lbrace2\rbrace}$ is the quadrangular dihedron $\lbrace4,2\rbrace$ (the map on the sphere consisting of two square faces sharing all the vertices and edges). 
For the square $\lbrace4\rbrace$, we have that $2^{\lbrace4\rbrace}$ is the toroidal map $\lbrace4,4\rbrace_{(4,0)}$ (a $4\times 4$ grid on the torus). We can get a $2$-gon from a square by taking its quotient by the cyclic group of order two generated by the half-turn around the center, that is, $\lbrace2\rbrace=\lbrace4\rbrace/\mathbb{Z}_2$. However, it is not possible to get the quadrangular dihedron by taking a quotient of $\lbrace4,4\rbrace_{(4,0)}$ by any group of order 2. In other words, $2^{\lbrace4\rbrace} / \mathbb{Z}_2 \ncong 2^{\lbrace4\rbrace/\mathbb{Z}_2}$. Hence, the conclusion of Theorem\nobreakspace \ref {thm:covers} fails for the operation $2^\mathcal{M}$. Thus, there is no voltage operator $(\mathcal{Y},\eta)$ such that $2^\mathcal{M} = \mathcal{M} \ertimes \mathcal{Y}$ for every maniplex $\mathcal{M}$. \end{eg} The previous example uses Theorem\nobreakspace \ref {thm:covers} to prove that there does not exist a voltage operation $(\mathcal{Y}, \eta)$ such that $2^\mathcal{M} \cong \mathcal{M}\ertimes \mathcal{Y}$ for all maniplexes $\mathcal{M}$; in other words, $\mathcal{M}\mapsto 2^\mathcal{M}$ is not a voltage operation. Observe that this fact can also be established in a more elementary way by simply comparing the number of flags in $\mathcal{M} \ertimes \mathcal{Y}$ and in $2^{\mathcal{M}}$. We use Theorem\nobreakspace \ref {thm:covers} to emphasize the relation between quotients and voltage operations. In fact, Theorem\nobreakspace \ref {thm:covers} characterizes all voltage operations; this shall be shown in Theorem\nobreakspace \ref {thm:OpsAsProducts}. It is easy to come up with examples of operations that are not voltage operations, for instance, by defining them differently for different cases. 
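The elementary flag-counting argument is quick to make explicit: $2^\mathcal{M}$ has $|\mathcal{F}(\mathcal{M})|\cdot 2^{v}$ flags, where $v$ is the number of $0$-faces of $\mathcal{M}$, and since automorphisms of a maniplex act freely on flags, a quotient by a group of order $2$ halves the flag count. A back-of-the-envelope sketch (the variable names are ours):

```python
# |F(2^M)| = |F(M)| * 2^(number of 0-faces of M); automorphisms of a
# maniplex act freely on flags, so an order-2 quotient halves the count.
flags_square, verts_square = 8, 4    # the square {4}
flags_digon, verts_digon = 4, 2      # the 2-gon {2}

f_2_square = flags_square * 2 ** verts_square   # flags of {4,4}_{(4,0)}
f_2_digon = flags_digon * 2 ** verts_digon      # flags of {4,2}

assert f_2_square == 128 and f_2_digon == 16
# 2^{square}/Z_2 would have 64 flags, so it cannot be 2^{square/Z_2}:
assert f_2_square // 2 != f_2_digon
```

So the two sides of the would-be identification differ already in size.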
However, the operation $\mathcal{M}\mapsto 2^\mathcal{M}$ is of interest because it is a functor between the categories $\mathrm{pMpx}^n$ and $\mathrm{pMpx}^{n+1}$; in other words, if there is a homomorphism $p:\mathcal{M} \to \mathcal{M}'$ then there is a natural way to define a homomorphism $\tilde{p}:2^\mathcal{M}\to 2^{\mathcal{M}'}$, namely $\tilde{p}:(\Phi,v)\mapsto (p(\Phi),u)$ where \[u_i=\sum_{j\in p^{-1}(i)} v_j.\] It is straightforward to prove that $\tilde{p}$ is a maniplex homomorphism and that $p\mapsto \tilde{p}$ is indeed a functor. Let $\mathcal{M}$ be a premaniplex, let $\Gamma$ be a group of automorphisms of $\mathcal{M}$ and let $\mathcal{X}=\mathcal{M}/\Gamma$. If $(\mathcal{Y},\eta)$ is a voltage operation, Theorem\nobreakspace \ref {thm:covers} tells us that $(\mathcal{M}\ertimes \mathcal{Y})/\Gamma$ is isomorphic to $\mathcal{X} \ertimes \mathcal{Y}$. Hence, there exists a voltage assignment $\xi$ on $\mathcal{X}$ with voltage group $\Gamma$ such that $\mathcal{X}^\xi$ is isomorphic to $\mathcal{M}$, and a voltage assignment $\theta:\fg(\mathcal{X} \ertimes \mathcal{Y})\to \Gamma$ such that $(\mathcal{X} \ertimes \mathcal{Y})^\theta$ is isomorphic to $\mathcal{M} \ertimes \mathcal{Y}$. The following theorem tells us how to find $\theta$ in terms of $\eta$ and $\xi$. \begin{thm}\label{thm:theta} Let $(\mathcal{X},\xi)$ be a voltage premaniplex with voltage group $\Gamma$ and let $(\mathcal{Y},\eta)$ be a voltage operator. Define $\theta = \theta(\eta,\xi):\fg(\mathcal{X} \ertimes \mathcal{Y})\to \Gamma$ as follows: \begin{equation}\label{eq:theta} \theta(\pth[\omega]{(x,y)}):=\xi(\pth[{\eta(\pth[\omega]{y})}]{x}). \end{equation} Then $(\mathcal{X} \ertimes \mathcal{Y})^\theta$ is isomorphic to $\mathcal{X}^\xi \ertimes \mathcal{Y}$. \end{thm} \begin{proof} Start by noticing that, by definition, $r_i(x,\gamma) = (r_ix, \xi(^ix)\gamma)$ and similarly $r_i((x,y),\gamma) = (r_i(x,y), \theta(^i(x,y))\gamma)$. 
Now, define the function $$\varphi:(\mathcal{X} \ertimes \mathcal{Y})^\theta \to \mathcal{X}^\xi \ertimes \mathcal{Y}$$ given by $$\varphi((x,y),\gamma):=((x,\gamma),y).$$We shall show that $\varphi$ is a premaniplex isomorphism. Since both $\mathcal{X}^\xi$ and $(\mathcal{X} \ertimes \mathcal{Y})^\theta$ have $\Gamma$ as voltage group, $\varphi$ is a bijection. We next see that $\varphi$ preserves $i$\nobreakdash-adjacencies for $i\in\lbrace0,1,\ldots,m-1\rbrace$. In fact: \[ \begin{aligned} \varphi\left(r_i((x,y),\gamma)\right) &= \varphi\left(r_i(x,y),\theta(\pth{(x,y)})\gamma\right)\\ &= \varphi\left((\eta(\pth{y})x,r_i y),\theta(\pth{(x,y)})\gamma\right)\\ &= \left((\eta(\pth{y})x,\theta(\pth{(x,y)})\gamma),r_i y\right)\\ &= \left((\eta(\pth{y})x,\xi(\pth[\eta(\pth{y})]{x})\gamma),r_i y\right)\\ &= \left(\eta(\pth{y})(x,\gamma), r_i y\right)\\ &= r_i \left((x,\gamma),y\right)\\ &= r_i \varphi \left((x,y),\gamma\right). \end{aligned} \] \end{proof} \begin{eg}\label{eg:PyramidMedialVolts} It is not difficult to see that the antiprism over a $q$-gon can be obtained by taking the medial of the pyramid over a $q$-gon. Thus, we shall use Theorem\nobreakspace \ref {thm:theta} to recover the antiprism over a $q$-gon as a voltage maniplex. To do so, we first construct the pyramid over a $q$-gon as a voltage maniplex and then apply the medial operator (as a voltage operator). Note that the automorphism group of a $q$-gonal pyramid coincides with the automorphism group of its base, which is the dihedral group \[ \mathbb{D}_q = \gen{\rho_0,\rho_1\mid \rho_0^2=\rho_1^2=(\rho_0\rho_1)^q=1},\] where $\rho_0$ is thought of as the reflection in a plane orthogonal to the base that contains the midpoint of an edge $e$ of the base, and $\rho_1$ as the reflection in a plane orthogonal to the base that contains one of the vertices incident to $e$. 
Hence, the $q$-gonal pyramid can be recovered from its symmetry type graph $\mathcal{X}$ via the voltage premaniplex $(\mathcal{X},\xi)$ shown in Figure\nobreakspace \ref {fig:theta_pyr}. (As in previous examples, the colors red, green and blue for the edges represent $0$-, $1$- and $2$-adjacencies, respectively.) Let $(\mathcal{Y},\eta)$ be the medial operator. Then, by Theorem\nobreakspace \ref {thm:theta}, $(\mathcal{X}\ertimes \mathcal{Y},\theta)$ is the antiprism over a $q$-gon, where $\theta$ is defined as in the theorem. Theorem\nobreakspace \ref {thm:theta} gives us a way to construct such an antiprism. When we want to calculate, for example, the endpoint and voltage of the dart $\pth[0]{(a,x)}$ (the red dart that starts at $(a,x)$), we first see that $\eta(\pth[0]{x}) = r_1$, so we follow the path $\pth[1]{a}$ and see that it ends in $a$ and has voltage $\xi(\pth[1]{a}) = \rho_1$. Therefore, $(a,x)^0 = (\eta(\pth[0]{x})a,x^0) = (a,x)$ (i.e., the dart is a semiedge) and the dart $\pth[0]{(a,x)}$ has voltage $\theta(\pth[0]{(a,x)}) = \rho_1$. Note that $\mathcal{X}\ertimes \mathcal{Y}$ is the symmetry type graph of the antiprism with respect to the automorphism group of its base, but not with respect to the full automorphism group of the antiprism. In fact, the antiprism over any polygon always has extra symmetry, which is induced by the isomorphism between the base of the antiprism and its dual, so antiprisms over polygons usually have four orbits on flags.
The antiprism over the triangle is in fact an octahedron and therefore regular, so in that case the actual symmetry type graph is ${\mathbf{1}}^3$.\end{eg} \begin{figure} \caption{A voltage premaniplex for the medial of a pyramid, where the colors red, green and blue for the edges represent $0$-, $1$- and $2$-adjacencies, respectively.} \label{fig:theta_pyr} \end{figure} A consequence of Theorem\nobreakspace \ref {thm:theta} is the following corollary: \begin{coro}\label{coro:OperatingRegulars} Let $\mathcal{X}$ be a regular $n$-premaniplex with automorphism group $\langle \rho_{0}, \dots, \rho_{n-1} \rangle$ and let $(\mathcal{Y},\eta)$ be an $(n,m)$-voltage operator. Then $\mathcal{X}\ertimes \mathcal{Y}$ is isomorphic to the derived graph $\mathcal{Y}^\nu$, where $\nu:\fg(\mathcal{Y})\to\aut(\mathcal{X})$ is the voltage assignment obtained from $\eta$ by replacing each $r_i$ with $\rho_i$. \end{coro} \begin{proof} Since $\mathcal{X}$ is regular, it is isomorphic to the derived graph $({\mathbf{1}}^n)^\xi$ where, if $x$ is the only vertex of ${\mathbf{1}}^n$, $\xi(\pth{x})=\rho_i$. Then Theorem\nobreakspace \ref {thm:theta} tells us that $\mathcal{X}\ertimes \mathcal{Y}$ is isomorphic to $({\mathbf{1}}^n\ertimes \mathcal{Y})^\theta$ with $\theta(\pth[\omega]{(x,y)}) = \xi(\pth[{\eta(\pth[\omega]{y})}]{x})$. This means precisely that $\theta$ replaces each occurrence of $r_i$ in $\eta$ by the voltage of the semiedge of color $i$ in ${\mathbf{1}}^n$, which is exactly $\rho_i$. By applying the natural isomorphism ${\mathbf{1}}^n \ertimes \mathcal{Y}\to \mathcal{Y}$ we get the desired result. \end{proof} \section{Composition of voltage operations}\label{sec:composition} The result of applying a voltage operation to a premaniplex is again a premaniplex. Thus, it is natural to think about the composition of two voltage operations.
In contrast, the result of applying a voltage operation to a maniplex is not always a maniplex (in particular, it can be disconnected), so one must be careful about this when composing operations. It is interesting to note that the composition of two voltage operations can in fact be written as a new voltage operation. In this section we describe how to do this by using the operator $\theta$ defined in Theorem\nobreakspace \ref {thm:theta}, but instead of using an arbitrary voltage premaniplex $(\mathcal{X},\xi)$ and a voltage operator $(\mathcal{Y},\eta)$ we use two voltage operators $(\mathcal{Y}_1,\eta_1)$ and $(\mathcal{Y}_2,\eta_2)$. \begin{thm}\label{thm:composition} Let $\mathcal{X}$ be an $n$-premaniplex, $(\mathcal{Y}_1,\eta_1)$ an $(n,m)$-voltage operator and $(\mathcal{Y}_2,\eta_2)$ an $(m,\ell)$-voltage operator. Then $$(\mathcal{X} \ertimes[\eta_1] \mathcal{Y}_1)\ertimes[\eta_2] \mathcal{Y}_2 \cong \mathcal{X} \ertimes[\theta] (\mathcal{Y}_1\ertimes[\eta_2] \mathcal{Y}_2),$$ where $\theta=\theta(\eta_2,\eta_1)$ is defined as in Equation\nobreakspace \textup {(\ref {eq:theta})}.
\end{thm} \begin{proof} We define $\varphi:(\mathcal{X} \ertimes[\eta_1] \mathcal{Y}_1)\ertimes[\eta_2] \mathcal{Y}_2 \to \mathcal{X} \ertimes[\theta] (\mathcal{Y}_1\ertimes[\eta_2] \mathcal{Y}_2)$ in the natural way, that is, $$\varphi\left((x,y_1),y_2\right) = \left(x,(y_1,y_2)\right).$$ It is clear that $\varphi$ is a bijection, so we only need to prove that it commutes with $r_i$ for all $i\in\lbrace0,1,\ldots,\ell-1\rbrace$: \[ \begin{aligned} \varphi\left(r_i\left((x,y_1),y_2\right)\right) &= \varphi\left(\eta_2(\pth{y_2})(x,y_1), r_i y_2\right)\\ &= \varphi\left((\eta_1(\pth[{\eta_2(\pth{y_2})}]{y_1})x,\eta_2(\pth{y_2})y_1), r_i y_2\right)\\ &= \varphi\left((\theta(\pth{(y_1,y_2)})x,\eta_2(\pth{y_2})y_1), r_i y_2\right)\\ &= \left(\theta(\pth{(y_1,y_2)})x,(\eta_2(\pth{y_2})y_1, r_i y_2)\right)\\ &= \left(\theta(\pth{(y_1,y_2)})x,r_i(y_1, y_2)\right)\\ &= r_i \left( x,(y_1,y_2)\right)\\ &= r_i \varphi \left((x,y_1),y_2\right). \end{aligned} \] \end{proof} The above theorem can be applied in different contexts; we give some examples here. Let $({\mathbf{1}}^m,d)$ be the dual operator and consider an $(n,m)$-voltage operator $(\mathcal{Y},\eta)$. If $\mathcal{X}$ is a premaniplex, then $(\mathcal{X}\ertimes \mathcal{Y})\ertimes[d] {\mathbf{1}}^m$ is the dual of $\mathcal{X}\ertimes \mathcal{Y}$. Theorem\nobreakspace \ref {thm:composition} tells us that this dual is in fact isomorphic to $\mathcal{X}\ertimes[\theta](\mathcal{Y}\ertimes[d]{\mathbf{1}}^m)$. Thus, $(\mathcal{Y}\ertimes[d]{\mathbf{1}}^m,\theta)$ is the dual of $\mathcal{Y}$, where the voltages of the darts are the same as those of the darts of the dual color in $(\mathcal{Y},\eta)$. In other words, if $(\mathcal{Y}^*,\eta^*)$ denotes the voltage operator we get by recoloring the darts of color $i$ in $\mathcal{Y}$ with the color $m-1-i$, then the dual of $\mathcal{X}\ertimes \mathcal{Y}$ is $\mathcal{X}\ertimes[\eta^*]\mathcal{Y}^*$.
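The constructions above are concrete enough to experiment with on small examples. The following Python sketch (the encoding and all function names are ours, not from the paper) implements the voltage operation $\mathcal{X}\ertimes\mathcal{Y}$ for finite premaniplexes, recording voltages as monodromy words, and checks the dual-operator discussion above on the flag graph of a square:

```python
def act(word, x, X):
    """Apply a monodromy word (a list of colors; rightmost letter acts first)
    to the flag x of the premaniplex X, encoded as {color: {flag: flag}}."""
    for j in reversed(word):
        x = X[j][x]
    return x

def operate(X, Y, eta):
    """Voltage operation: r_i(x, y) = (eta[(i, y)] . x, r_i y)."""
    flags = set(next(iter(X.values())))
    return {i: {(x, y): (act(eta[(i, y)], x, X), y2)
                for y, y2 in Yi.items() for x in flags}
            for i, Yi in Y.items()}

# Flag graph of a square: 8 flags on a cycle, alternating colors 0 and 1.
r0 = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4, 6: 7, 7: 6}
r1 = {1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5, 7: 0, 0: 7}
X = {0: r0, 1: r1}

# One-vertex operator playing the role of (1^2, d): a single vertex 'y'
# with two semiedges, the color-i semiedge carrying the voltage r_{1-i}.
Y = {0: {'y': 'y'}, 1: {'y': 'y'}}
eta = {(0, 'y'): [1], (1, 'y'): [0]}

D = operate(X, Y, eta)
# The product is the dual: color i now acts as color 1-i did on X.
assert all(D[i][(f, 'y')] == (X[1 - i][f], 'y')
           for i in (0, 1) for f in range(8))
```

The assertion verifies, on this toy case, that the product is exactly $\mathcal{X}$ with its two colors interchanged, as the dual-operator discussion predicts.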
More generally, Theorem\nobreakspace \ref {thm:composition} lets us define the composition of two operators. If $(\mathcal{Y}_1,\eta_1)$ is an $(n,m)$-operator and $(\mathcal{Y}_2,\eta_2)$ is an $(m,\ell)$-operator, we define the \emph{composition of $(\mathcal{Y}_1,\eta_1)$ with $(\mathcal{Y}_2,\eta_2)$} as $(\mathcal{Y}_1\ertimes[\eta_2] \mathcal{Y}_2,\theta(\eta_2,\eta_1))$ and denote it by $(\mathcal{Y}_1,\eta_1)\circ (\mathcal{Y}_2,\eta_2)$. Theorem\nobreakspace \ref {thm:composition} tells us that $(\mathcal{Y}_1,\eta_1)\circ (\mathcal{Y}_2,\eta_2)$ is an $(n,\ell)$-operator and that the composition of operators is associative. This allows us to define a new category: recall that $\mathrm{pMpx}^n$ denotes the class of all premaniplexes of rank $n$ and let $\overline{\mathrm{pMpx}}$ be the category whose objects are the classes $\mathrm{pMpx}^n$ with $n \geq 1$, and whose arrows are voltage operators. An $(n,m)$-operator is an arrow from $\mathrm{pMpx}^n$ to $\mathrm{pMpx}^m$ and the composition of arrows is defined as above. The neutral element at the object $\mathrm{pMpx}^n$ is the arrow $({\mathbf{1}}^n,\diamondsuiter)$ where $\diamondsuiter$ is the mixing voltage, and the isomorphisms are precisely the voltage operators described in Example\nobreakspace \ref {eg:d-auto}. It might also be interesting to study the analogous category obtained by considering rooted voltage operations. Observe that the snub operation seems to act differently on orientable maniplexes than on non-orientable ones. However, the result of applying the snub operation to a non-orientable maniplex $\mathcal{M}$ is one of the connected components of the result of applying the same operation to the orientable double cover of $\mathcal{M}$.
This phenomenon is easy to understand with the following results: \begin{thm}\label{thm:mixandfix} Let $\mathcal{X}$ be an $n$-premaniplex, $(\mathcal{Y}_1,\diamondsuiter)$ be a mix $n$-operator and $(\mathcal{Y}_2,\eta)$ be an $(n,m)$-voltage operator with $\mathcal{Y}_2$ connected. Let $y_1\in\mathcal{Y}_1$ and $y_2\in\mathcal{Y}_2$ be fixed and suppose that $\eta(\fg^{y_2}(\mathcal{Y}_2))$ fixes $y_1$. Then, the induced (colored) graph of $(\mathcal{X}\diamondsuittimes\mathcal{Y}_1)\ertimes \mathcal{Y}_2$ with vertex set $\lbrace((x,y_1),y):x\in\mathcal{X}, y\in\mathcal{Y}_2\rbrace$ is isomorphic to $\mathcal{X}\ertimes\mathcal{Y}_2$. \end{thm} \begin{proof} Because of Theorem\nobreakspace \ref {thm:VoltsEquiv}, we may assume without loss of generality that $\eta(W)\in\eta(\Pi^{y_2}(\mathcal{Y}_2))$ for every path $W\in \fg(\mathcal{Y}_2)$, and thus $\eta(W)$ fixes $y_1$. First we notice that the induced (colored) graph of $\mathcal{Y}_1\ertimes\mathcal{Y}_2$ with vertex set $\lbrace(y_1,y):y\in \mathcal{Y}_2\rbrace$ forms an isomorphic copy of $\mathcal{Y}_2$. This is easy to see since the $i$-adjacent flag to $(y_1,y)$ (in $\mathcal{Y}_1\ertimes\mathcal{Y}_2$) is $(\eta(\pth{y})y_1,y^i) = (y_1,y^i)$, because, by assumption, the voltages of paths in $\mathcal{Y}_2$ fix $y_1$. Next we use Theorem\nobreakspace \ref {thm:composition} to see $(\mathcal{X}\diamondsuittimes{\mathcal{Y}_1})\ertimes\mathcal{Y}_2$ as $\mathcal{X}\ertimes[\theta] (\mathcal{Y}_1\ertimes \mathcal{Y}_2)$ with $\theta=\theta(\eta,\diamondsuiter)$. This means that we can consider the $i$-adjacent flag to $((x,y_1),y)$ in $(\mathcal{X}\diamondsuittimes{\mathcal{Y}_1})\ertimes\mathcal{Y}_2$ as the $i$-adjacent flag to $(x,(y_1,y))$ in $\mathcal{X}\ertimes[\theta] (\mathcal{Y}_1\ertimes \mathcal{Y}_2)$; we write $((x,y_1),y)\leftrightarrow (x,(y_1,y))$ to denote that these two points are in correspondence under the isomorphism. 
On one hand, observe that by definition of $\theta$, for any $y\in\mathcal{Y}_2$ and any monodromy $\omega$, we have that $\theta(\pth[\omega]{(y_1,y)})$ is $\diamondsuiter(\pth[{\eta(\pth[\omega]{y})}]{y_1})$. On the other hand, by definition of $\diamondsuiter$ we have that $\diamondsuiter(\pth[{\eta(\pth[\omega]{y})}]{y_1})=\eta(\pth[\omega]{y})$. Hence, \[ \begin{aligned} \big((x,y_1),y\big)^i &\leftrightarrow \big(x,(y_1,y)\big)^i \\ &= \big( \theta(^i(y_1,y))x, (y_1,y)^i\big) \\ &= \big(\diamondsuiter(^{\eta(^iy)}y_1)x, (y_1,y)^i\big) \\ &= \big(\eta(^iy)x , (y_1,y)^i\big) \\ &= \big(\eta(^iy)x , (\eta(^iy)y_1, y^i)\big) \\ &= \big(\eta(^iy)x , (y_1,y^i)\big), \end{aligned} \] and since $\big(\eta(^iy)x , (y_1,y^i)\big) \leftrightarrow \big((\eta(^iy)x , y_1),y^i\big)$, the theorem follows. \end{proof} \begin{coro}\label{coro:mixandfixalot} Let $\mathcal{X}$ be an $n$-premaniplex, $(\mathcal{Y}_1,\diamondsuiter)$ be a mix $n$-operator and $(\mathcal{Y}_2,\eta)$ be an $(n,m)$-voltage operator with $\mathcal{Y}_2$ connected. Suppose that $\eta(\fg^{y_2}(\mathcal{Y}_2))$ fixes every vertex of $\mathcal{Y}_1$. Then $(\mathcal{X}\diamondsuittimes \mathcal{Y}_1)\ertimes \mathcal{Y}_2$ has a copy of $\mathcal{X}\ertimes \mathcal{Y}_2$ for each vertex of $\mathcal{Y}_1$. \end{coro} We now have the tools to understand the relation between the snub of a non-orientable 3-maniplex $\mathcal{M}$ and the snub of its double cover. In Example\nobreakspace \ref {eg:DoubleCovers} we saw that the orientable double cover of $\mathcal{M}$ is $\mathcal{M}\diamondsuit {\mathbf{2}}^3_\emptyset=\mathcal{M}\diamondsuittimes{{\mathbf{2}}^3_\emptyset}$. Hence, if $(\mathcal{Y},\eta)$ is the snub operator (see Figure\nobreakspace \ref {fig:snub}), then Corollary\nobreakspace \ref {coro:mixandfixalot} tells us that $(\mathcal{M}\diamondsuit {\mathbf{2}}^3_\emptyset)\ertimes\mathcal{Y}$ consists of two copies of $\mathcal{M}\ertimes \mathcal{Y}$.
(Note that the voltages of $\eta$ take values in $\mon^+(\mathcal{U}^3)$, the group that fixes the vertices of ${\mathbf{2}}^3_\emptyset$.) In other words, the snub of $\mathcal{M}$ is a connected component of the unrooted snub of $\mathcal{M}\diamondsuit {\mathbf{2}}^3_\emptyset$. The following is a similar example. Given a maniplex $\mathcal{M}$ with a bipartite 1-skeleton (that is, the graph with the $0$-faces as vertices and the $1$-faces as edges, with the induced incidence), one can find a voltage operator $(\mathcal{Y},\eta)$ such that $\mathcal{M}\ertimes\mathcal{Y}$ has two connected components and in each component the vertices of one part of the bipartition are truncated while the ones in the other part remain unchanged. If we now apply this same operation to a maniplex $\mathcal{M}'$ whose 1-skeleton is not bipartite, the result is one of the connected components of $(\mathcal{M}'\diamondsuit {\mathbf{2}}_{\lbrace1,2,\ldots,n-1\rbrace}^n)\ertimes \mathcal{Y}$. \section{Voltage operations on the universal maniplex}\label{sec:universal} Since a voltage operator $(\mathcal{Y},\eta)$ is a voltage premaniplex, it is natural to consider the derived premaniplex $\mathcal{Y}^\eta$. In this section we see that this derived premaniplex is in fact the one obtained by applying the corresponding voltage operation to the universal maniplex $\mathcal{U}$. We shall use this fact to prove that Theorem\nobreakspace \ref {thm:covers} characterizes voltage operations. \begin{thm}\label{thm:derivedAsProduct} Let $\mathcal{U}$ be the universal $n$-maniplex and let $(\mathcal{Y},\eta)$ be an $(n,m)$-voltage operator. Then $\mathcal{Y}^\eta$ is isomorphic to $\mathcal{U} \ertimes \mathcal{Y}$. \end{thm} \begin{proof} Let $\Phi$ be a base flag of $\mathcal{U}$. Consider the function $\varphi: \mathcal{Y}^{\eta} \to \mathcal{U}\ertimes \mathcal{Y}$ given by $\varphi(y,\omega) = (\omega \Phi, y)$.
Recall that any flag of $\mathcal{U}$ is of the form $\omega \Phi$ for some (unique) $\omega \in \mon(\mathcal{U})$, which implies that the function $\varphi$ is bijective. Finally, observe that \[\begin{aligned} \varphi(r_i(y,\omega)) &= \varphi(r_iy,\eta(\pth{y})\omega)\\ &= (\eta(\pth{y})\omega \Phi, r_iy )\\ &= r_i(\omega \Phi, y)\\ &= r_i(\varphi(y,\omega)). \end{aligned} \] \end{proof} The proof of Theorem\nobreakspace \ref {thm:derivedAsProduct} is based on the fact that since $\mathcal{U}$ is regular, its monodromy group acts regularly on its vertices (flags). We can generalize Theorem\nobreakspace \ref {thm:derivedAsProduct} as follows. Let $\mathcal{X}$ be a regular premaniplex and let $(\mathcal{Y},\eta)$ be a voltage operator. If $S_\mathcal{X} = \lbrace\omega \in \mon(\mathcal{U}): \omega x= x \text{ for all } x\in \mathcal{X}\rbrace$, that is, the kernel of the projection $\mon(\mathcal{U}) \to \mon(\mathcal{X})$, then $\mon(\mathcal{X}) \cong \mon(\mathcal{U})/ S_{\mathcal{X}}$. Let $\eta_\mathcal{X}:\fg(\mathcal{Y})\to \mon(\mathcal{X})$ be the voltage assignment on $\mathcal{Y}$ we get by reducing $\eta$ to $\mon(\mathcal{X})$, that is, $\eta_\mathcal{X}(W):=\eta(W) S_\mathcal{X}$. Then by using the exact same argument as in the proof of Theorem\nobreakspace \ref {thm:derivedAsProduct} we can see that $\mathcal{X}\ertimes \mathcal{Y}$ is isomorphic to $\mathcal{Y}^{\eta_\mathcal{X}}$. An immediate consequence of Theorems\nobreakspace \ref {thm:covers} and\nobreakspace \ref {thm:derivedAsProduct} is that operators $(\mathcal{Y}, \eta_{1})$ and $(\mathcal{Y},\eta_{2})$ with equivalent voltages $\eta_{1}$ and $\eta_{2}$ yield equivalent voltage operations. More precisely, \begin{prop}\label{prop:equivalentVolt} Let $(\mathcal{Y},\eta_{1})$ and $(\mathcal{Y},\eta_{2})$ be two $(n,m)$-operators with equivalent voltages $\eta_{1}$ and $\eta_{2}$.
Then for every $n$-premaniplex $\mathcal{X}$, \[\mathcal{X} \ertimes[\eta_{1}] \mathcal{Y} \cong \mathcal{X} \ertimes[\eta_{2}] \mathcal{Y}.\] Moreover, there is an isomorphism $\mathcal{X}\ertimes[\eta_{1}] \mathcal{Y} \to \mathcal{X} \ertimes[\eta_{2}] \mathcal{Y}$ such that the following diagram commutes: \begin{equation}\label{eq:EquivVoltsOps} \begin{tikzcd}[column sep =small] \mathcal{X}\ertimes[\eta_1]\mathcal{Y} \arrow[dashed]{rr} \arrow{rd}{} & & \mathcal{X}\ertimes[\eta_2]\mathcal{Y} \arrow{dl}{} \\ & \mathcal{Y} & \end{tikzcd} \end{equation} \end{prop} \begin{proof} From \cite{Hartley_1999_AllPolytopesAre} we know that $\mathcal{X} \cong \mathcal{U}/ \Gamma$ for some $\Gamma \leq \aut(\mathcal{U})$; now \[\begin{aligned} \mathcal{X} \ertimes[\eta_{1}] \mathcal{Y} &\cong (\mathcal{U} / \Gamma) \ertimes[\eta_{1}] \mathcal{Y} \\ &\cong \left(\mathcal{U} \ertimes[\eta_{1}] \mathcal{Y} \right) / \Gamma \\ &\cong \mathcal{Y}^{\eta_{1}} / \Gamma \\ &\cong \mathcal{Y}^{\eta_{2}} / \Gamma \\ &\cong \left(\mathcal{U} \ertimes[\eta_{2}] \mathcal{Y} \right) / \Gamma \\ &\cong (\mathcal{U} / \Gamma) \ertimes[\eta_{2}] \mathcal{Y} \\ &\cong \mathcal{X} \ertimes[\eta_{2}] \mathcal{Y}.
\end{aligned}\] If we start with a vertex $(x,y)$ in $\mathcal{X}\ertimes[\eta_1]\mathcal{Y}$ and apply the natural isomorphisms between consecutive terms in the equation above, we get the following sequence, where $\Phi$ is a base flag of $\mathcal{U}$: \[ \begin{aligned} (x,y)&\mapsto (\Psi\Gamma,y)&& \text{for some } \Psi\in\mathcal{U}\\ &\mapsto (\Psi,y)\Gamma\\ &\mapsto (y,\omega)\Gamma&&\text{where } \omega\in\mon(\mathcal{U}) \text{ is such that } \Psi=\omega \Phi \\ &\mapsto (y,\tilde{\omega})\Gamma &&\text{for some } \tilde{\omega}\in\mon(\mathcal{U}) \text{ (since $\eta_1$ is equivalent to $\eta_2$)}\\ &\mapsto (\tilde{\Psi},y)\Gamma && \text{where } \tilde{\Psi}=\tilde{\omega}\Phi\\ &\mapsto (\tilde{\Psi}\Gamma,y)\\ &\mapsto (\tilde{x},y) &&\text{for some } \tilde{x}\in\mathcal{X}. \end{aligned} \] We can see that this isomorphism makes the diagram in Equation\nobreakspace \textup {(\ref {eq:EquivVoltsOps})} commute. \end{proof} In the proof of Proposition\nobreakspace \ref {prop:equivalentVolt} we made essential use of the fact that if $\mathcal{X} \cong \mathcal{U}/\Gamma$ for $\Gamma \leq \aut(\mathcal{U})$, then $\mathcal{X} \ertimes \mathcal{Y} \cong (\mathcal{U} \ertimes \mathcal{Y})/\Gamma$ for any voltage operator $(\mathcal{Y},\eta)$ (which is Theorem\nobreakspace \ref {thm:covers} of this paper). In the following result we show that this property characterizes voltage operations. \begin{thm}\label{thm:OpsAsProducts} Let $\oo$ be a mapping that assigns an $m$-premaniplex $\oo(\mathcal{X})$ to each $n$-premaniplex $\mathcal{X}$. Assume that there is an action of $\aut(\mathcal{U})$ on $\oo(\mathcal{U})$ such that $\oo(\mathcal{U} / \Gamma) \cong \oo(\mathcal{U})/\Gamma$ for every $\Gamma \leq \aut(\mathcal{U})$. Then there exists an $(n,m)$-voltage operator $(\mathcal{Y}, \eta)$ such that \[\oo(\mathcal{X}) \cong \mathcal{X} \ertimes \mathcal{Y}\] for every premaniplex $\mathcal{X}$.
\end{thm} \begin{proof} Let ${\mathbf{1}}^{n}$ denote the unique $n$-premaniplex with one vertex and define $\mathcal{Y}$ as the premaniplex $\oo({\mathbf{1}}^{n})$. Observe that \[ \oo(\mathcal{U})/\aut(\mathcal{U}) \cong \oo\left( \mathcal{U}/\aut(\mathcal{U}) \right) \cong \oo({\mathbf{1}}^{n}) = \mathcal{Y}, \] which implies that there exists a voltage assignment $\eta:\fg(\mathcal{Y}) \to \aut(\mathcal{U}) \cong \mon(\mathcal{U})$ such that $\mathcal{Y}^{\eta} \cong \oo(\mathcal{U})$. The pair $(\mathcal{Y}, \eta)$ defines a voltage operator and, by Theorem\nobreakspace \ref {thm:derivedAsProduct}, $\mathcal{U} \ertimes \mathcal{Y} \cong \mathcal{Y}^{\eta} \cong \oo(\mathcal{U}) $. Finally, if $\mathcal{X}$ is a premaniplex and $\Gamma \leq \aut(\mathcal{U})$ is such that $\mathcal{X} \cong \mathcal{U}/\Gamma$, then \[ \oo(\mathcal{X}) \cong \oo(\mathcal{U}/\Gamma) \cong \oo(\mathcal{U})/ \Gamma \cong (\mathcal{U} \ertimes \mathcal{Y})/\Gamma \cong \left( \mathcal{U}/\Gamma \right) \ertimes \mathcal{Y} \cong \mathcal{X} \ertimes \mathcal{Y}. \qedhere \] \end{proof} We often come across operations that are well defined for some family $F$ of premaniplexes (for example, for convex polytopes) but such that it is not immediately evident how to generalize them to all premaniplexes. One may ask if it is possible to extend operations of this kind to all premaniplexes (of the given rank) in such a way that the operation is a voltage operation. As an example, in Section\nobreakspace \ref {sec:ClassicalExamples} we have found voltage operations that extend the Wythoffian operations, the $k$-bubble and the trapezotope operation to all premaniplexes. Theorem\nobreakspace \ref {thm:OpsAsProducts} and Corollary\nobreakspace \ref {coro:OperatingRegulars} answer this question and find the corresponding voltage operation, when possible.
The idea is as follows: suppose there is some regular premaniplex $\mathcal{P}$ and an operation $\oo$ such that we already know $\oo(\mathcal{P})$. According to the proof of Theorem\nobreakspace \ref {thm:OpsAsProducts}, if $\oo$ is indeed a voltage operation $\mathcal{X}\mapsto \mathcal{X}\ertimes \mathcal{Y}$, then $\mathcal{Y}$ should be $\oo({\mathbf{1}}^n)$, and by Theorem\nobreakspace \ref {thm:covers} $\mathcal{Y}$ must coincide with $\oo(\mathcal{P})/\aut(\mathcal{P})$. Now let $\nu:\fg(\mathcal{Y})\to\aut(\mathcal{P})$ be the voltage assignment such that $\oo(\mathcal{P})$ is isomorphic to $\mathcal{Y}^\nu$. Corollary\nobreakspace \ref {coro:OperatingRegulars} tells us that $\nu$ is obtained by replacing each occurrence of $r_i$ by $\rho_i$ in the voltage assignment $\eta:\fg(\mathcal{Y})\to \mon(\mathcal{U})$. So we can recover $\eta$ by replacing every instance of $\rho_i$ by $r_i$ in $\nu$. Note that, in general, such a replacement is not well defined unless $\mathcal{P}$ is the universal maniplex $\mathcal{U}$; however, with some intuition we can find the right way to do it for naturally occurring operations. Observe, moreover, that one can carry out this procedure without knowing whether $\oo$ is in fact a voltage operation. To decide whether it is, one must check that $\oo(\mathcal{X})=\mathcal{X}\ertimes \mathcal{Y}$ for every $\mathcal{X}$ in the family $F$. In fact, if for some $\mathcal{X} \in F$ we observe that $\oo(\mathcal{X})\neq\mathcal{X}\ertimes \mathcal{Y}$, then $\oo$ cannot be seen as a voltage operation. \section{Final remarks and open problems} We have seen that voltage operations generalize classical operations on maps and polytopes and allow us to define such classical operations on premaniplexes. One can see that voltage operations naturally generalize to hypertopes (thin residually connected geometries), ``complexes'', and their quotients.
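The derived-graph bookkeeping in the recipe above can be tested on a toy case. The following Python sketch (our own encoding of the dihedral group; all names are hypothetical) realizes the flag graph of a $q$-gon as the derived graph of ${\mathbf{1}}^2$ whose color-$i$ semiedge carries the reflection $\rho_i\in\mathbb{D}_q$, as in the proof of Corollary \ref{coro:OperatingRegulars}:

```python
def dq_mul(a, b, q):
    """Multiply pairs (k, s) encoding rho^k sigma^s in the dihedral group D_q,
    where rho is the rotation of order q and sigma a reflection."""
    (k1, s1), (k2, s2) = a, b
    return ((k1 + (k2 if s1 == 0 else -k2)) % q, s1 ^ s2)

def derived_graph(q):
    """Derived graph of the one-vertex premaniplex 1^2 whose color-i semiedge
    carries the reflection rho_i:  r_i(gamma) = rho_i * gamma.
    Returned as {color: {group element: group element}}."""
    rho = [(0, 1), (1, 1)]            # two reflections generating D_q
    elems = [(k, s) for k in range(q) for s in (0, 1)]
    return {i: {g: dq_mul(rho[i], g, q) for g in elems} for i in (0, 1)}

G = derived_graph(5)
# Each color is a fixed-point-free involution on the 10 group elements...
assert all(G[i][G[i][g]] == g and G[i][g] != g for i in (0, 1) for g in G[0])
# ...and r_0 r_1 has order 5, so the derived graph is the 10-cycle of
# flags of a pentagon.
g = (0, 0)
for _ in range(5):
    g = G[0][G[1][g]]
assert g == (0, 0)
```

Replacing each $\rho_i$ in these voltages by $r_i$ recovers the corresponding operator voltage $\eta$, which is the substitution described above.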
Following \cite{Wilson_2012_ManiplexesPart1}, a {\em complex} is a properly $n$-edge-colored $n$-valent graph (note that a complex is a combinatorial map in the sense of Vince \cite{Vince_1983_CombinatorialMaps}). Complexes generalize (the ``chamber graphs'' of) hypertopes. Thus, one can define a precomplex as an $n$-valent pregraph that has been properly $n$-edge-colored. In this context, the definition of a voltage operator as well as the results in this paper hold if we require that $\eta:\Pi(\mathcal{Y})\to W^n$, where $W^n$ is the group generated by $n$ involutions with no other relations among the generators. In particular, this allows us to extend operations like the ones given in Sections~\ref{sec:operations} and~\ref{sec:ClassicalExamples} to hypertopes. Symmetry type graphs and voltage assignments have proven to be strong tools to study polytopes and maniplexes. A natural (and well-known) problem that arises when dealing with symmetry type graphs is the following: \begin{problem}\label{prob:stg} Given a premaniplex $T$, does there exist a polytope (or maniplex) such that its symmetry type graph (with respect to the full automorphism group) is $T$? \end{problem} A particular instance of this problem is the question posed in the early 1990's by Schulte and Weiss of whether or not there exist chiral polytopes of all ranks (which was solved by Pellicer in 2010 \cite{Pellicer_2010_ConstructionHigherRank}). Of course, one can generalize \protect \MakeUppercase {P}roblem\nobreakspace \ref {prob:stg} to hypertopes. \begin{problem} Given a precomplex $T$, does there exist a hypertope (or complex) such that its symmetry type graph (with respect to the full automorphism group) is $T$? \end{problem} One thing that one tries to do when dealing with the above question is to use voltage assignments on $T$ to lift it to a maniplex $\mathcal{M}$. In \cite{Mochan_2021_AbstractPolytopesTheir_PhDThesis} the voltage assignments that give a polytope as the derived graph are characterized.
Of course, the voltage group acts by automorphisms on $\mathcal{M}$ and the quotient of $\mathcal{M}$ by it is precisely $T$. However, $\mathcal{M}$ can have (and often does have) symmetries not coming from the voltage group. In the context of voltage operations, we have shown that given a voltage operator $(\mathcal{Y}, \eta)$ and a premaniplex $\mathcal{X}$, all automorphisms of $\mathcal{X}$ act as automorphisms of $\mathcal{X}\ertimes[\eta]\mathcal{Y}$. However, again, $\mathcal{X}\ertimes[\eta]\mathcal{Y}$ might have extra symmetry (for example, in the case when one applies the medial operation to a self-dual map). It is natural to ask when it is true that automorphisms of $\mathcal{Y}$ also lift to $\mathcal{X}\ertimes[\eta]\mathcal{Y}$, and when all automorphisms of $\mathcal{X}\ertimes[\eta]\mathcal{Y}$ come either from $\mathcal{X}$ or from $(\mathcal{Y}, \eta)$. We refer to \cite{HubardMochanMontero_MoreVoltageOperations_preprint} for more details about these questions. Answering these questions, at least partially, might be of great help with \protect \MakeUppercase {P}roblem\nobreakspace \ref {prob:stg}. \begin{problem} Give necessary conditions on $(\mathcal{Y},\eta)$ so that one can compute the symmetry type graph of $\mathcal{X}\ertimes[\eta]\mathcal{Y}$ with respect to its full automorphism group in terms of $\mathcal{X}$ and $(\mathcal{Y},\eta)$. \end{problem} If one's interest is in polytopes rather than in (pre)maniplexes, one can ask when a voltage operation preserves polytopality, though we might not care if the result of the operation is not connected. More precisely, \begin{problem} Give necessary and sufficient conditions on $(\mathcal{Y}, \eta)$ for $(\mathcal{P},\Phi)\ertimes(\mathcal{Y},y)$ to be a polytope, for all rooted polytopes $(\mathcal{P}, \Phi)$ and $y\in\mathcal{Y}$.
\end{problem} Similarly, \begin{problem} Give necessary and sufficient conditions on $(\mathcal{Y}, \eta)$ for $({\mathcal H},\Phi)\ertimes(\mathcal{Y},y)$ to be a hypertope, for all rooted hypertopes $({\mathcal H}, \Phi)$ and $y\in\mathcal{Y}$. \end{problem} \end{document}
\begin{document} \maketitle \begin{abstract} This article considers the spatially inhomogeneous, non-cutoff Boltzmann equation. We construct a large-data classical solution given bounded, measurable initial data with uniform polynomial decay of mild order in the velocity variable. Our result requires no assumption of strict positivity for the initial data, except locally in some small ball in phase space. We also obtain existence results for weak solutions when our decay and positivity assumptions for the initial data are relaxed. Because the regularity of our solutions may degenerate as $t\to 0$, uniqueness is a challenging issue. We establish weak-strong uniqueness under the additional assumption that the initial data possesses no vacuum regions and is H\"older continuous. As an application of our short-time existence theorem, we prove global existence near equilibrium for bounded, measurable initial data that decays at a finite polynomial rate in velocity. \end{abstract} \tableofcontents \section{Introduction} We consider the Boltzmann equation, a fundamental kinetic integro-differential equation from statistical physics \cite{cercignani1969kinetic, truesdell-muncaster, cercignani1988book, chapmancowling, villani2002review}. The unknown function $f(t,x,v)\geq 0$ models the particle density of a diffuse gas in phase space at time $t \geq 0$, location $x \in \mathbb R^3$, and velocity $v\in \mathbb R^3$. The equation reads \begin{equation}\label{e.boltzmann} (\partial_t + v\cdot \nabla_x) f = Q(f,f), \end{equation} where the left-hand side is a transport term, and $Q(f,g)$ is the Boltzmann collision operator with {\it non-cutoff} collision kernel, which we describe in detail below. The purpose of this article is to develop a well-posedness theory for \eqref{e.boltzmann} on a time interval $[0,T]$, making minimal assumptions on the initial data $f_{\rm in}(x,v)\geq 0$.
In particular, we would like our local existence theory to properly encapsulate the {\it regularizing effect} of the non-cutoff Boltzmann equation. This effect comes from the nonlocal diffusion produced by $Q(f,f)$ in the velocity variable, and has been studied extensively, as we survey below in Section \ref{s:related}. In light of this regularizing effect, it is natural and desirable to construct a solution $f$ with initial data in a low-regularity (ideally zeroth-order) space, such that $f$ has at least enough regularity for positive times to evaluate the equation in a pointwise sense. However, so far this has only been achieved in the close-to-equilibrium \cite{alonso2020giorgi,silvestre2022nearequilibrium} and space homogeneous (i.e., $x$-independent) \cite{desvillettes2004homogeneous} regimes. For the general case, essentially all of the local existence results for classical solutions in the literature \cite{amuxy2010regularizing, amuxy2011bounded, amuxy2013mild, morimoto2015polynomial, HST2020boltzmann, henderson2021existence} require $f_{\rm in}$ to lie in a weighted Sobolev space of order at least $4$. The current article fills this gap by constructing a solution with initial data in a weighted $L^\infty$-space. Another goal of our analysis is to optimize the requirement on the decay of $f_{\rm in}$ for large velocities. Because of the nonlocality of $Q$, decay of solutions is intimately tied to regularity, and since we work in the physical regime $\gamma \leq 0$ (see \eqref{e.B-def}), the decay of $f$ for positive times is limited by the decay of $f_{\rm in}$. In our main existence result, we require $f_{\rm in}$ to have pointwise polynomial decay of order $2s+3$, where $2s\in (0,2)$ is the order of the diffusion (see \eqref{e.b-bounds}).
In particular, the energy density $\int_{\mathbb R^3} |v|^2 f(t,x,v) \, \mathrm{d} v$ of our solutions may be infinite, which places them outside the regime where the conditional regularity estimates of Imbert-Silvestre \cite{imbert2020review} may be applied out of the box. The possible presence of vacuum regions in the initial data is a key source of difficulty. The regularization coming from $Q(f,f)$ relies on positivity properties of $f$, in a complex way that reflects the nonlocality of $Q$. In the space homogeneous setting, conservation of mass provides sufficient positivity of $f$ for free. The close-to-equilibrium assumption would also ensure that $f$ has regions of strict positivity at all times. By contrast, in the case of general initial data, any lower bounds for $f$ must degenerate at a severe rate as $t\searrow 0$, which impacts the regularity of the solution for small times and causes complications for the well-posedness theory. Our main existence theorem requires a weak positivity assumption, namely that $f_{\rm in}$ is uniformly positive in some small ball in phase space. We consider solutions posed on the spatial domain $\mathbb R^3$, with no assumption that the solution or the initial data decay for large $|x|$. This regime includes the physically important example of a localized disturbance away from a Maxwellian equilibrium $M(v) = c_1 e^{-c_2|v|^2}$; that is $f = M(v) + g(t,x,v)$, where $g(t,x,v) \to 0$ as $|x|\to\infty$ but $g$ is not necessarily small. Our regime also includes spatially periodic solutions as a special case. The lack of integrability in $x$ is a nontrivial source of difficulty and in particular makes energy methods much less convenient. Also, the total mass, energy, and entropy of the solution could be infinite, so we do not have access to the usual bounds coming from conservation of mass and energy and monotonicity of entropy. 
In Section \ref{s:prior-existence}, we give a more complete bibliography of well-posedness results for \eqref{e.boltzmann}.

\subsection{The collision operator}

Boltzmann's collision operator is a bilinear integro-differential operator defined for functions $f,g:\mathbb R^3\to\mathbb R$ by
\begin{equation}\label{e.Q}
Q(f,g) := \int_{\mathbb R^3} \int_{\mathbb S^2} B(v-v_*,\sigma) [f(v_*')g(v') - f(v_*) g(v)] \, \mathrm{d} \sigma \, \mathrm{d} v_*.
\end{equation}
Because collisions are assumed to be elastic, the pre- and post-collisional velocities all lie on a sphere of diameter $|v-v_*|$ parameterized by $\sigma\in\mathbb S^2$, and are related by the formulas
\[ v' = \frac{v+v_*} 2 + \frac{|v-v_*|} 2 \sigma, \quad v_*' = \frac{v+v_*} 2 - \frac{|v-v_*|} 2 \sigma.\]
We take the standard \emph{non-cutoff} collision kernel $B$ defined by
\begin{equation}\label{e.B-def}
B(v-v_*,\sigma) = b(\cos\theta) |v-v_*|^\gamma, \quad \cos\theta = \sigma \cdot \frac{v-v_*}{|v-v_*|},
\end{equation}
for some $\gamma > -3$. The angular cross-section $b$ is singular as $\theta$ (the angle between pre- and post-collisional velocities) approaches $0$ and satisfies the bounds
\begin{equation}\label{e.b-bounds}
c_b \theta^{-1-2s} \leq b(\cos\theta) \sin \theta \leq \frac 1 {c_b} \theta^{-1-2s},
\end{equation}
for some $c_b>0$ and $s\in (0,1)$. This implies $b$ has the asymptotics $b(\cos\theta) \approx \theta^{-2-2s}$ as $\theta \to 0$. The parameters $\gamma$ and $s$ reflect the modeling choices made in defining $Q(f,g)$. When interactions between particles are governed by an inverse power law potential of the form $\phi(x) = c|x|^{1-p}$ for some $p>2$, then one has $\gamma = (p-5)/(p-1)$ and $s = 1/(p-1)$. As is common in the literature, we consider arbitrary pairs $(\gamma,s)$ and disregard the parameter $p$.
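For concreteness (a worked instance of these formulas, not a case singled out in our analysis), the choice $p = 3$ corresponds to
\[
\gamma = \frac{3-5}{3-1} = -1, \qquad s = \frac{1}{3-1} = \frac 1 2, \qquad \gamma + 2s = 0,
\]
and more generally $\gamma + 2s = (p-3)/(p-1)$, so the regime $\gamma + 2s < 0$ corresponds to very soft potentials with $p \in (2,3)$, while $\gamma < 0$ corresponds to $p < 5$.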
For our main results, we assume
\[ \gamma < 0, \]
but otherwise, we do not place any restriction on $\gamma$ and $s$. The integral in \eqref{e.Q} has two singularities: as $\theta\to 0$, and as $v_*\to v$. The non-integrable singularity at $\theta \approx 0$ (grazing collisions), which is related to the long-range interactions taken into account by the physical model, is the source of the regularizing properties of the operator $Q$.

\subsection{Main results}

For $1\leq p\leq \infty$, define the velocity-weighted $L^p$ norms
\[ \|f\|_{L^p_q(\mathbb R^3)} = \|\langle v\rangle^q f\|_{L^p(\mathbb R^3)}, \quad \|f\|_{L^p_q(\Omega\times\mathbb R^3)} = \|\langle v\rangle^q f\|_{L^p(\Omega\times\mathbb R^3_v)}, \]
where $\Omega$ is any subset of $\mathbb R^3_x$ or $[0,\infty)\times\mathbb R^3_x$. Our results involve kinetic H\"older spaces $C^{\beta}_{\rm \ell}$ that are defined precisely in Section \ref{s:holder-spaces} below. These spaces are based on a distance $d_\ell$ that is adapted to the scaling and translation symmetries of the Boltzmann equation. Roughly speaking, a function in $C^\beta_\ell$ for some $\beta>0$ is $C^\beta$ in $v$, $C^{\beta/(1+2s)}$ in $x$, and $C^{\beta/(2s)}$ in $t$. Note that the subscript $q$ in $L^p_q$ refers to a decay exponent, while the subscript $\ell$ in $C^\beta_\ell$ refers to the distance $d_\ell$. In the statement of our main results, for brevity's sake, we make the convention that all constants may depend on the parameters $\gamma$, $s$, and $c_b$ from the collision kernel, even if they are not specifically mentioned.

Our first main result is about the existence of classical solutions:

\begin{theorem}\label{t:existence}
Let $\gamma\in (-3,0)$ and $s\in (0,1)$.
Assume that $f_{\rm in} \geq 0$ lies in $L^\infty_q(\mathbb R^6)$ with $q > 2s+3$, and that for some $x_m,v_m \in \mathbb R^3$ and $\delta, r>0$,
\begin{equation}\label{e.mass-core}
f_{\rm in}(x,v) \geq \delta, \quad \text{ for } (x,v) \in B_r(x_m,v_m).
\end{equation}
Then there exists $T>0$ depending on $q$ and $\|f_{\rm in}\|_{L^\infty_q}$ and a solution $f(t,x,v) \geq 0$ to the Boltzmann equation \eqref{e.boltzmann} in $L^\infty_q([0,T]\times \mathbb R^6)$. This solution is locally of class $C^{2s}_\ell$. More precisely, for each compact $\Omega\subset (0,T]\times \mathbb R^6$, there exist $C, \alpha>0$ depending on $q$, $\Omega$, $x_m$, $v_m$, $r$, $\delta$, and $\|f_{\rm in}\|_{L^\infty_q(\mathbb R^6)}$, such that
\[ \|f\|_{C^{2s+\alpha}_\ell(\Omega)} \leq C.\]
Furthermore, for any $m\geq 0$ and partial derivative $D^k f$, where $k$ is a multi-index in $(t,x,v)$ variables, there exists $q(k,m)>0$ such that for any compact $\Omega\subset (0,T]\times\mathbb R^3$,
\[ f_{\rm in} \in L^\infty_{q(k,m)}(\mathbb R^6) \quad \Rightarrow \quad \|D^k f\|_{L^\infty_{m}(\Omega\times\mathbb R^3)} \leq C,\]
with $C$ depending on $k$, $m$, $\Omega$, and the initial data. If $f_{\rm in}$ decays faster than any polynomial, i.e.\ $f_{\rm in} \in L^\infty_q(\mathbb R^6)$ for all $q>0$, then the solution $f$ is $C^\infty$ in all three variables for positive times, and $D^k f\in L^\infty_{m}(\Omega\times\mathbb R^3)$ for all $m\geq 0$, multi-indices $k$, and compact sets $\Omega$.

At $t=0$, the solution agrees with $f_{\rm in}$ in the following weak sense: for all $\varphi \in C^1_{t,x} C^2_v$ with compact support in $[0,T)\times \mathbb R^6$,
\begin{equation}\label{e.weak-initial-data}
\int_{\mathbb R^6} f_{\rm in}(x,v) \varphi(0,x,v) \, \mathrm{d} v \, \mathrm{d} x = \int_0^T \int_{\mathbb R^6} [f(\partial_t + v\cdot \nabla_x)\varphi + Q(f,f) \varphi] \, \mathrm{d} v \, \mathrm{d} x \, \mathrm{d} t.
\end{equation}
\end{theorem}

Some comments on the theorem statement are in order:
\begin{itemize}
\item The local regularity of $f$ of order $2s+\alpha$, where $\alpha>0$ is uniform on compact sets, is enough to make pointwise sense of $Q(f,f)$, as we prove in Lemma \ref{l:Q-makes-sense}. The norm $C^{2s+\alpha}_\ell$ also controls the material derivative $(\partial_t + v\cdot \nabla_x)f$ (see \cite[Lemma 2.7]{imbert2018schauder}). Therefore, although $\partial_t f$ and $\nabla_x f$ do not necessarily exist classically, the two sides of equation \eqref{e.boltzmann} have pointwise values and are equal at every $(t,x,v)\in (0,T]\times \mathbb R^6$.
\item In general, our solutions may have a discontinuity at $t=0$. If we make the additional assumption that $f_{\rm in}$ is continuous, then $f$ is continuous as $t\to 0$ and agrees with $f_{\rm in}$ pointwise. This is proven in Proposition \ref{p:cont-match}.
\item It is not \emph{a priori} obvious that the time integral on the right in \eqref{e.weak-initial-data} converges, since the regularity required to make pointwise sense of $Q(f,f)$ degenerates as $t\to 0$. Using the weak formulation of the collision operator (see \eqref{e.weak-collision} below), one can bound $\int_{\mathbb R^3} Q(f,f) \varphi \, \mathrm{d} v$ from above using only bounds for $\varphi$ in $W_v^{2,\infty}$ and $f$ in $L^\infty\cap L^1_{\gamma+2s}$. This implies that the formula \eqref{e.weak-initial-data} is well-defined.
\item It seems that one can deduce existence of a classical solution in the $\gamma=0$ case fairly easily via \Cref{t:existence}. Indeed, the $C^{2s+\alpha}_\ell$ estimates are uniform as $\gamma\nearrow 0$, so one can use local convergence of solutions $f^\gamma$ to obtain $f^0$ that solves~\eqref{e.boltzmann}, after performing a suitable convergence analysis for $Q(f,f)$ as $\gamma \nearrow 0$. In the interest of brevity, we do not analyze this case in detail.
\end{itemize}

Although our main interest is in constructing classical solutions, our approach is robust enough to prove the existence of weak solutions when the decay and positivity conditions on $f_{\rm in}$ are relaxed. In particular, we obtain a well-defined notion of weak solution for any $f_{\rm in}\in L^\infty_q(\mathbb R^6)$, $q>\gamma+2s+3$, without any quantitative lower bound assumptions on $f_{\rm in}$. These weak solutions do not have enough regularity to evaluate $Q(f,f)$ pointwise, so we define the weak formulation of the collision operator\footnote{This weak form of $Q(f,f)$ is very classical and goes back to James Clerk Maxwell's 1867 work on the theory of gases \cite{maxwell1867}.} as follows:
\begin{equation}\label{e.weak-collision}
W(g,h,\varphi) := \frac 1 2 \int_{\mathbb R^3} \int_{\mathbb S^2} B(v-v_*,\sigma) g(v) h(v_*) [\varphi(v_*') + \varphi(v') - \varphi(v_*) - \varphi(v)] \, \mathrm{d} \sigma \, \mathrm{d} v_* .
\end{equation}
When $f$ is sufficiently smooth and rapidly decaying, the identity $\int_{\mathbb R^3} \varphi Q(f,f) \, \mathrm{d} v = \int_{\mathbb R^3} W(f,f,\varphi) \, \mathrm{d} v$ follows from the pre-post-collisional change of variables and symmetrization, see, e.g. \cite[Chapter 1, Section 2.3]{villani2002review}. Our result on weak solutions is as follows:

\begin{theorem}\label{t:weak-solutions}
Let $\gamma$ and $s$ be as in Theorem \ref{t:existence}. Assume that $f_{\rm in}\geq 0$ lies in $L^\infty_{q}(\mathbb R^6)$ for some $q> \gamma+2s+3$.
Then there exists $T>0$ depending on $q$ and $\|f_{\rm in}\|_{L^\infty_{q}(\mathbb R^6)}$ and $f: [0,T]\times \mathbb R^6\to [0,\infty)$ such that, for any $\varphi \in C^1_{t,x} C^2_v$ with compact support in $[0,T)\times\mathbb R^6$, there holds
\begin{equation}\label{e.weak-solution-def}
\int_{\mathbb R^6} f_{\rm in}(x,v) \varphi(0,x,v) \, \mathrm{d} v \, \mathrm{d} x = \int_0^T \int_{\mathbb R^6}[ f (\partial_t + v\cdot \nabla_x)\varphi +W(f,f,\varphi)] \, \mathrm{d} v \, \mathrm{d} x \, \mathrm{d} t.
\end{equation}
If, in addition, there exist $\delta, r>0$ and $x_m,v_m\in \mathbb R^3$ with
\[ f_{\rm in}(x,v) \geq \delta, \quad \text{ for } (x,v) \in B_r(x_m)\times B_r(v_m),\]
then $f$ is locally H\"older continuous: for any compact $\Omega\subset (0,T]\times\mathbb R^6$, there exist $C,\alpha>0$ depending on $\Omega$, $x_m$, $v_m$, $r$, $\delta$, and $\|f\|_{L^\infty_q(\mathbb R^6)}$, with
\[ \|f\|_{C^\alpha_\ell(\Omega)}\leq C.\]
\end{theorem}

This definition of weak solution is similar to one used by Alexandre \cite{alexandre2001solutions}, who worked under stricter hypotheses on the initial data. Next, we present our main result on uniqueness. This is a challenging issue because of the generality of our existence theorem. We discuss some of the specific difficulties in Section \ref{s:intro-uniqueness} below.

\begin{figure}
\begin{overpic}[scale=.175] {f_0.jpg}
\put(-2,25){\color{red} $R$}
\put(100,25){\color{red} $R$}
\put(16,20.75){\color{red}$x$}
\put(29,8.75){\color{red} $v_x$}
\put(61.5,20.75){\color{red} $x$}
\put(75.75,14.25){\color{red} $v_x$}
\end{overpic}
\caption{A cartoon depicting two $f_{\rm in}$ satisfying the condition~\eqref{e.uniform-lower}. The set where $f_{\rm in}\geq \delta$ is depicted by the gray shading. Notice that, for every $x$, there is a $v_x$ between the dashed red lines such that a translation of the red ball ($B_r$) centered at $(x,v_x)$ is within the shaded region.
This would not be the case if, e.g., $f_{\rm in} \equiv 0$ for all $x \in B_1(0)$.}
\label{f.uniform-lower}
\end{figure}

For this result, we make additional assumptions on $f_{\rm in}$: there are $\delta$, $r$, $R>0$ so that
\begin{equation}\label{e.uniform-lower}
\text{for each } x\in \mathbb R^3, \ \text{ there is } \ v_x\in B_R(0) \ \text{ such that }\ f_{\rm in} \geq \delta 1_{B_r(x,v_x)}.
\end{equation}
(See Figure \ref{f.uniform-lower}.) This condition is stronger than \eqref{e.mass-core} and rules out vacuum regions in the spatial domain at $t=0$. We also need to assume that $f_{\rm in}$ is H\"older continuous. Our uniqueness theorem is as follows:

\begin{theorem}\label{t:uniqueness}
Let $\gamma \in (-3,0)$ and $s\in(0,1)$. For any $\alpha>0$ and for $q>0$ sufficiently large, depending only on $\alpha$, $\gamma$, $s$, and $c_b$, assume that $f_{\rm in}\in L^\infty_q(\mathbb R^6)\cap C^\alpha_\ell(\mathbb R^6)$, and that $f_{\rm in}$ satisfies the lower bound assumption \eqref{e.uniform-lower}. Let $f$ be the classical solution on $[0,T]\times\mathbb R^6$ constructed in Theorem \ref{t:existence} with initial data $f_{\rm in}$. Then there exists $T_U>0$ depending on $\delta$, $r$, $R$, $\alpha$, $\|f_{\rm in}\|_{L^\infty_q(\mathbb R^6)}$, and $\|f_{\rm in}\|_{C^\alpha_\ell(\mathbb R^6)}$, such that for any weak solution $g$ in the sense of Theorem \ref{t:weak-solutions} with initial data $f_{\rm in}$, and such that
\[ g\in L^1([0,T_U],L^\infty_q(\mathbb R^6)), \]
the equality $f(t,x,v)=g(t,x,v)$ holds everywhere in $[0,\min(T,T_U)]\times\mathbb R^6$.
\end{theorem}

Let us make the following comments on the statement of this theorem:
\begin{itemize}
\item This uniqueness result holds up to a time $T_U$ that depends on the $C^\alpha$ bound for $f_{\rm in}$, and may be smaller than $T$, the time of existence granted by Theorem \ref{t:existence}.
This makes sense because our proof of uniqueness breaks down as $\alpha$ is sent to 0.
\item The admissible values of the parameter $q$ in Theorem \ref{t:uniqueness} are explicitly computable from our proof.
\end{itemize}

The uniqueness or non-uniqueness of the solutions constructed in Theorem \ref{t:existence}, without any regularity assumption for $f_{\rm in}$, remains an interesting open question. For other examples of nonlinear evolution equations where uniqueness is not understood in the same generality as existence, even though the system regularizes instantaneously, we refer to \cite{kiselev2008blowup, HST2020landau, anceschi2021fokkerplanck}.

\subsection{Application: global existence near equilibrium}

In our main results, we do not assume that our initial data is close to equilibrium. In the case that $f_{\rm in}$ is sufficiently close to a Maxwellian equilibrium state, solutions are known to exist globally in time and converge to equilibrium as $t\to\infty$, and there is a large literature about this regime (see Section \ref{s:prior-existence} below). Although in general the near-equilibrium and far-from-equilibrium regimes seem very different mathematically, we are nevertheless able to prove a new result in the close-to-equilibrium regime as an application of our Theorem \ref{t:existence}:

\begin{corollary}\label{c:global}
Let $\gamma$ and $s$ be as in Theorem \ref{t:existence}, let $M(x,v) = (2\pi)^{-3/2} e^{-|v|^2/2}$, and let $q_0>5$ be fixed.
There exists $q_1> q_0$ depending on $q_0$, $\gamma$, $s$, and $c_b$, such that for any $f_{\rm in}:\mathbb T^3\times\mathbb R^3\to [0,\infty)$ such that
\[ f_{\rm in} \in L^\infty_{q_1}(\mathbb T^3\times\mathbb R^3), \]
and for any $\varepsilon\in (0,\frac 1 2)$, there exists a $\delta>0$, depending on $\varepsilon$ and $\|f_{\rm in}\|_{L^\infty_{q_1}(\mathbb T^3\times\mathbb R^3)}$, such that if
\[ \|f_{\rm in} - M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)} < \delta, \]
then there exists a global classical solution $f:[0,\infty) \times\mathbb T^3\times\mathbb R^3 \to [0,\infty)$ to the Boltzmann equation \eqref{e.boltzmann}, with
\[ \|f(t) - M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)} < \varepsilon, \]
for all $t\geq 0$.
\end{corollary}

By the results of \cite{desvillettes2005global}, this solution converges to $M$ as $t\to\infty$ faster than any polynomial rate. The proof of Corollary \ref{c:global} is based on a strategy developed by the second named author, jointly with Silvestre \cite{silvestre2022nearequilibrium}. The idea is to combine a short-time existence theorem, the conditional regularity estimates of Imbert-Silvestre \cite{imbert2020smooth}, and the global trend-to-equilibrium result of Desvillettes-Villani \cite{desvillettes2005global}. The result in \cite{silvestre2022nearequilibrium} worked under the assumption $\gamma+2s\geq0$. Taking advantage of the generality of the short-time existence theorem in the current paper, Corollary \ref{c:global} improves on \cite{silvestre2022nearequilibrium} in two ways: by including the case $\gamma+2s<0$, and by working with initial data that decays at a finite polynomial rate, rather than at a rate faster than any polynomial. For the regime $\gamma+2s<0$, this seems to be the first result proving global existence near equilibrium for initial data in a zeroth-order space.
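Schematically, suppressing the precise norms and constants (this outline is our paraphrase of the strategy of \cite{silvestre2022nearequilibrium}, not a statement proved in this exact form), the continuation loop behind Corollary \ref{c:global} runs:
\[
\begin{aligned}
&\text{(i) } \|f(t_0)-M\|_{L^\infty_{q_0}} \text{ small} \;\Longrightarrow\; f \text{ exists on } [t_0,t_0+\tau] \quad \text{(short-time existence),}\\
&\text{(ii) smallness on } [t_0,t_0+\tau] \;\Longrightarrow\; \text{uniform regularity and decay bounds} \quad \text{(conditional regularity),}\\
&\text{(iii) uniform bounds} \;\Longrightarrow\; \|f(t)-M\|_{L^\infty_{q_0}} \text{ decays in } t \quad \text{(trend to equilibrium),}
\end{aligned}
\]
so the smallness hypothesis in (i) is never saturated and the solution extends to $[0,\infty)$.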
\subsection{Related work}

\subsubsection{Prior well-posedness results and comparison}\label{s:prior-existence}

Existence results for the non-cutoff Boltzmann equation fall into the following categories:
\begin{itemize}
\item {\it Spatially homogeneous solutions.} Local well-posedness and smoothing are well understood for the space homogeneous equation, and classical solutions are known to exist globally when $\gamma+2s\geq 0$. We refer to \cite{ukai1984gevrey, desvillettes2004homogeneous, fournier2008uniqueness, desvillettes2009stability, morimoto2009homogeneous, chen2011smoothing, he2012homogeneous-boltzmann, glangetas2016sharp} and the references therein, as well as \cite{villani1998weak} for so-called $H$-solutions and \cite{lu2012measure, morimoto2016measure} for measure-valued solutions.
\item {\it Close-to-equilibrium solutions.} For close-to-equilibrium solutions, we refer to \cite{gressman2011boltzmann, amuxy2011global, alexandre2012global, alonso2018polynomial, alonso2020giorgi, herau2020regularization, zhang2020global, duan2021global, silvestre2022nearequilibrium, cao2022moments} and the references therein. The first result for near-equilibrium initial data in a weighted $L^\infty_{x,v}$ space was \cite{alonso2020giorgi}, which applied to $\gamma>0$. This was extended to the case $\gamma+2s\in [0,2]$ in \cite{silvestre2022nearequilibrium}, and we extend it to $\gamma+2s<0$ in our Corollary \ref{c:global}. We also refer to \cite{duan2021global, morimoto2016global, zhang2020global} for other results in low-regularity spaces. Results that work with polynomially decaying initial data include \cite{herau2020regularization, alonso2018polynomial, alonso2020giorgi, cao2022moments}.
\item {\it Close-to-vacuum solutions.} Recently, global solutions that are close to the vacuum state $f\equiv 0$ have been constructed by Chaturvedi \cite{chaturvedi2021vacuum}, with initial data in a tenth-order Sobolev space with Gaussian weight.
\item {\it Weak solutions.} A generalized notion of solution for the non-cutoff Boltzmann equation, called {\it renormalized solution with defect measure}, was constructed by Alexandre-Villani \cite{alexandre2002longrange}. The uniqueness and regularity of these solutions are not understood, but they exist globally in time, for any initial data $f_{\rm in}$ such that
\[ \int_{\mathbb R^6} f_{\rm in}(x,v)(1+|v|^2 + |x|^2 + \log f_{\rm in}(x,v)) \, \mathrm{d} x \, \mathrm{d} v < \infty. \]
This assumption is weaker than ours in terms of local integrability and $v$-decay, but stronger in terms of $x$-decay. If $f_{\rm in}$ satisfies the assumptions of both \cite{alexandre2002longrange} and our Theorem \ref{t:weak-solutions}, then our weak solutions are, in particular, renormalized solutions with defect measure, as can be seen from the stability theorem \cite[Theorem 2]{alexandre2002longrange} and the fact that our weak solutions are obtained as a limit of classical solutions of \eqref{e.boltzmann}.
\item {\it Short-time solutions.} Early results on local existence in the non-cutoff case were due to the AMUXY group (Alexandre, Morimoto, Ukai, Xu, and Yang) \cite{amuxy2010regularizing, amuxy2011bounded} and required initial data to lie in Sobolev spaces of order 4 with Gaussian velocity weights. Later results relaxed the decay assumption by treating initial data with finite polynomial decay, at the cost of increasing the regularity requirement on $f_{\rm in}$. The first result in this direction was from Morimoto-Yang \cite{morimoto2015polynomial}, who worked with $s\in (0,\frac 1 2)$ and $\gamma \in (-\frac 3 2, 0]$ and took $\langle v\rangle^q f_{\rm in}\in H^6(\mathbb T^3\times\mathbb R^3)$ with $q> 13$. Next, the work \cite{HST2020boltzmann} by the current authors assumed $\max\{-3,-\frac 3 2 - 2s\} < \gamma < 0$ and required $\langle v\rangle^q f_{\rm in} \in H^5(\mathbb T^3\times\mathbb R^3)$ for some non-explicit $q$.
See also \cite{amuxy2011uniqueness} for an earlier uniqueness result in a similar regime to \cite{HST2020boltzmann}. Most recently, Henderson-Wang \cite{henderson2021existence} extended the result of \cite{HST2020boltzmann} to the case $\gamma+2s<-\frac 3 2$. The only prior results that require fewer than 4 derivatives for $f_{\rm in}$ are restricted to the case $s\in (0,\frac 1 2)$: see \cite{amuxy2013mild}, which requires at least two Sobolev derivatives as well as spatial localization, and \cite[Theorem 1.2]{henderson2021existence}, which requires only $\langle v\rangle^q f_{\rm in}\in C^1(\mathbb T^3\times\mathbb R^3)$ and $q$ sufficiently large, but uses a specific argument that cannot be generalized to $s>\frac 1 2$. Our Theorem \ref{t:existence} represents a significant improvement, in terms of the decay and regularity assumptions on $f_{\rm in}$, and applies for any $\gamma< 0$ and $s\in (0,1)$.
\end{itemize}

\subsubsection{Regularizing effect}\label{s:related}

The regularizing effect of the non-cutoff Boltzmann equation is a major theme and motivation of this work. The first rigorous understanding of this effect came in the 1990s\footnote{Much earlier, the idea that $Q(f,g)$ behaves like a fractional differentiation operator in $v$ was understood on a heuristic level by Cercignani \cite{cercignani1969kinetic}.} with Desvillettes' work on the two-dimensional homogeneous setting \cite{desvillettes1995kac, desvillettes1996homogeneous, desvillettes1997nonradial}, as well as functional estimates for $Q(f,g)$ in Sobolev spaces by various authors \cite{lions1998compactness, alexandre1998entropy, villani1999entropy}, culminating in the sharp entropy dissipation estimate of Alexandre-Desvillettes-Villani-Wennberg \cite{alexandre2000entropy}.
The key property for many of these estimates is the following functional identity for the collision operator:
\begin{equation}\label{e.entropy-dissipation}
\int_{\mathbb R^3} Q(f,f) \log f \, \mathrm{d} v = -\frac 1 4\int_{\mathbb R^6}\int_{\mathbb S^2} B(v-v_*,\sigma)\left( f' f_*' - f f_*\right) \log \frac{f' f_*'}{ff_*} \, \mathrm{d} \sigma \, \mathrm{d} v_*\, \mathrm{d} v \leq 0.
\end{equation}
This identity implies the entropy $\int_{\mathbb R^6} f\log f \, \mathrm{d} v \, \mathrm{d} x$ is nonincreasing for solutions of \eqref{e.boltzmann}, but even more, it implies a smoothing effect in the $v$ variable, because the quantity on the right---called the {\it entropy dissipation}---turns out to control $\|\sqrt f\|_{H^s_v}$, up to a lower-order correction term. In the context of the homogeneous Boltzmann equation, this fractional smoothing effect can be iterated to show solutions are $C^\infty$. For the full inhomogeneous equation, the matter is more difficult because the diffusion acts only in velocity, and the smoothing effect of \eqref{e.boltzmann} is therefore hypoelliptic rather than parabolic. Results such as \cite{amuxy2010regularizing, chen2012smoothing} proved that any solution that lies in $H^5_{x,v}(\mathbb R^6)$ uniformly in time, decays faster than any polynomial, and satisfies a lower bound on the mass density, is in fact $C^\infty$. More recently, the breakthrough result of Imbert-Silvestre \cite{imbert2020smooth}, which finished off a long program of the two authors and Mouhot \cite{silvestre2016boltzmann, imbert2016weak, imbert2018schauder, imbert2018decay, imbert2020lowerbounds} (see also the survey articles \cite{mouhot2018review,imbert2020review, silvestre2022review}), established $C^\infty$ estimates for solutions of \eqref{e.boltzmann} that depend only on bounds for the mass, energy, and entropy densities of the solution, as well as (when $\gamma<0$) the polynomial decay rates of the initial data.
See also \cite{loher2022quantitative} for a quantitative version of the H\"older estimate of \cite{imbert2016weak}. These results do not use entropy dissipation estimates, relying instead on understanding the ellipticity of $Q(f,g)$ as an integro-differential operator. The current article adapts some techniques from the Imbert-Silvestre program.

\subsubsection{Comparison with the Landau equation}\label{s:landau}

The Landau equation is a kinetic model that can be derived from the Boltzmann equation \eqref{e.boltzmann} in the limit as grazing collisions (collisions with $\theta \approx 0$ in \eqref{e.b-bounds}) predominate (see e.g. \cite{desvillettes1992grazing, alexandre2004landau}). This equation reads
\[ \partial_t f + v\cdot \nabla_x f = Q_L(f,f), \]
where the Landau collision operator takes the form
\begin{equation}\label{e.QL}
Q_L(f,g) = \nabla_v\cdot(\bar a^f \nabla_v g) + \bar b^f \cdot \nabla_v g + \bar c^f g,
\end{equation}
and $\bar a^f$, $\bar b^f$, and $\bar c^f$ are defined in terms of velocity integrals of $f$. This is an important model in plasma physics and has also attracted a great deal of interest as an equation with similar mathematical properties to the Boltzmann equation. In \cite{HST2020landau}, we proved existence and uniqueness results for the Landau equation that are in a similar spirit to Theorem \ref{t:existence} and Theorem \ref{t:uniqueness} above. While \cite{HST2020landau} provides a helpful outline for the current study, the Boltzmann case turns out to be much more challenging. This is partly because $Q_L(f,g)$ defined in \eqref{e.QL} is a second-order differential operator which is {\it local} in $g$, unlike $Q(f,g)$, which is nonlocal in both $f$ and $g$. The local structure of the Landau equation is more amenable to barrier arguments because, letting $f$ be a solution and $g$ be an upper (resp.
lower) bounds for $Q_L(f,g)$ at a crossing point between $f$ and $g$, using information about $g$ only at the crossing point. In the Boltzmann case, bounding $Q(f,g)$ is more subtle because one has to take into account the values of both $f$ and $g$ in the entire velocity domain $\mathbb R^3$. Since we use barrier arguments extensively in this work, the ``double nonlocality'' of $Q$ is a significant source of difficulty.

Compared to \cite{HST2020landau}, the current study makes a much less stringent positivity assumption on the initial data: the main result of \cite{HST2020landau} required that no $x$ location in $\mathbb R^3$ be too far from a region in which $f_{\rm in}$ is uniformly positive, whereas our Theorem \ref{t:existence} only requires that $f_{\rm in}$ be uniformly positive in one single region. This is due to improvements in our method, rather than differences between the two equations.

We also refer to the well-posedness theorem of \cite{anceschi2021fokkerplanck} for a nonlinear Fokker-Planck equation (studied earlier in \cite{imbert2021nonlinear, liao2018fokkerplanck}) that shares some properties with the Boltzmann and Landau equations. With an approach similar to~\cite{HST2020landau}, the authors construct a solution with initial data in $L^\infty_{x,v}(\mathbb R^6)$. As in \cite{HST2020landau} and the current article, an extra assumption of H\"older continuity is needed to prove uniqueness. We note that the authors also prove an interesting estimate on the diffusion asymptotics of the solution.

\subsection{Difficulties and proof ideas}

\subsubsection{Existence}

The prior large-data existence results for \eqref{e.boltzmann} cited above are based on the energy method. To demonstrate some disadvantages of this method, let us integrate \eqref{e.boltzmann} against $\varphi(x) f$ for a compactly supported cutoff $\varphi$, which is needed because $f$ and its derivatives may not decay as $|x|\to \infty$.
This gives
\begin{equation}\label{e.energy}
\frac 1 2 \frac d {dt} \int_{\mathbb R^6} \varphi f^2 \, \mathrm{d} x \, \mathrm{d} v \leq \int_{\mathbb R^6}\left[ \varphi f Q(f,f) -\frac 1 2 f^2 v\cdot \nabla_x \varphi \right] \, \mathrm{d} v \, \mathrm{d} x.
\end{equation}
One would like to bound the right-hand side in terms of $\int \varphi f^2 \, \mathrm{d} v \, \mathrm{d} x$, but this is not possible with either term. First, the collision operator $Q(f,f)$ cannot be controlled using only $L^2_v$-based norms of $f$ due to the kinetic factor $|v-v_*|^\gamma$ (recall $\gamma \in (-3,0)$). Instead, an $L^p_v$ bound is needed, with $p>2$ depending on $\gamma$ and $s$. Therefore, to continue with $L^2$-based energy estimates, one must seek bounds on higher derivatives of $f$ in order to use an embedding theorem. Second, the $Q(f,f)$ integral involves three $f$ terms, so an $L^2$-estimate will not close\footnote{One might hope to use the fact that, in some sense, $Q(f,\cdot)$ involves an average over $v$ in order to close the estimate. Unfortunately, $Q$ has no such average in $x$, meaning $L^\infty_x$-regularity of $f$ is required.}. One cannot sidestep this by working in an $L^p$ space with $p \in (2,\infty)$: the analogous integral will involve $p+1$ copies of $f$ and, hence, will not close. For these reasons, the energy method seems incompatible with working in a zeroth-order space. Similarly, the growth of $v\cdot \nabla_x \varphi$ for large $v$ means the second term on the right cannot be controlled by $\int \varphi f^2 \, \mathrm{d} v \, \mathrm{d} x$. (When $\gamma+2s>0$, there are also terms coming from $Q(f,f)$ that grow as $|v|\to\infty$, leading to a similar issue, even in the spatially periodic case where $\varphi$ is not needed.)
One standard way to overcome this issue \cite{amuxy2010regularizing, amuxy2011bounded, amuxy2013mild} is to divide $f$ by a time-dependent Gaussian $e^{(\rho-\kappa t) \langle v\rangle^2}$, which adds a term proportional to $\langle v\rangle^2 f$ to the equation, with the correct sign to absorb the terms with growing velocity dependence. However, this requires $f_{\rm in}$ to have velocity decay proportional to a Gaussian. More intricate methods, based on the coercivity properties of $Q(f,f)$, have been found to deal with this velocity growth \cite{morimoto2015polynomial, HST2020boltzmann, henderson2021existence}, but these also require working with polynomial decay of relatively high degree.

Instead of the energy method, we use a barrier argument to propagate decay estimates in $L^\infty_q$ from $t=0$ forward in time, using barriers of the form $g = N e^{\beta t} \langle v\rangle^{-q}$ with $N, \beta, q>0$. The function $g$ is a valid barrier if $q>\gamma+2s+3$ and if $f$ also decays at a rate proportional to $\langle v\rangle^{-q}$, which we show via a detailed analysis of $Q(f, g)$ in Lemma \ref{l:Q-polynomial}. This argument gives a closed estimate in the space $L^\infty_q([0,T_q]\times\mathbb R^6)$ for some $T_q>0$ depending on $\|f_{\rm in}\|_{L^\infty_q(\mathbb R^6)}$.

To understand the regularity of our solutions for positive times, we need to propagate higher decay estimates for $f$, because each step of the regularity bootstrap uses up a certain number of velocity moments. This brings up a subtle difficulty: since our time of existence should depend only on some fixed $L^\infty_{q_0}$ norm of $f_{\rm in}$ with $q_0$ small, we need to propagate higher $L^\infty_q$ norms to a common time interval $[0,T]$ depending only on the norm of $f_{\rm in}$ in $L^\infty_{q_0}$. We note that this is one place where the double nonlocality (see Section \ref{s:landau}) causes issues.
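Schematically (and ignoring the approximation and compactness issues that the actual proof must address), the comparison behind the barrier argument reads as follows: if $t_0$ is the first time at which $f$ touches a barrier of the form $g = N e^{\beta t}\langle v\rangle^{-q}$ from below at some point $(x_0,v_0)$, then at that point $\partial_t(f-g) \geq 0$ and $\nabla_x f = 0$, so
\[
\beta\, g(t_0,v_0) = (\partial_t + v\cdot\nabla_x)\, g\big|_{(t_0,x_0,v_0)} \leq (\partial_t + v\cdot\nabla_x) f\big|_{(t_0,x_0,v_0)} = Q(f,f)(t_0,x_0,v_0),
\]
and since $f\leq g$ on $[0,t_0]$ with equality at the touching point, the right-hand side is essentially controlled by $Q(f,g)(t_0,x_0,v_0)$. A crossing is therefore ruled out whenever $Q(f,g) < \beta g$, which is the kind of inequality our analysis of $Q(f,g)$ is designed to produce for $q>\gamma+2s+3$ and $\beta$ chosen large depending on $\|f_{\rm in}\|_{L^\infty_q}$.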
To overcome this, we return to our barrier argument, proceeding more carefully in order to extract a small gain in the exponent $q$, which can then be iterated to bound any $L^\infty_q$ norm on $[0,T]$, provided $\|f_{\rm in}\|_{L^\infty_q}$ is finite (see Proposition \ref{p:upper-bounds}).

Once we have propagated sufficient decay forward in time, we would like to apply the global regularity estimates of \cite{imbert2020smooth}. These estimates are an important tool in our study, but applying them to the problem we consider is not straightforward, for several reasons. First, the authors of \cite{imbert2020smooth} work under the assumption $\gamma+2s\in [0,2]$, while we treat any $\gamma\in (-3,0)$ and $s\in (0,1)$. Therefore, we need to extend the analysis of \cite{imbert2020smooth} to the case $\gamma+2s< 0$, with suitably modified hypotheses (see Section \ref{s:global-reg}). The change of variables developed in \cite{imbert2020smooth} to pass from local to global regularity estimates is defined in a way that does not generalize well to the case $\gamma+2s< 0$, and the main novelty of our work in Section \ref{s:global-reg} is defining a suitable change of variables for this case. Second, the estimates of \cite{imbert2020smooth} require a uniform-in-$x$ positive lower bound on the mass density $\int_{\mathbb R^3} f(t,x,v)\, \mathrm{d} v$, but it does not seem possible to propagate such a bound forward from time zero with current techniques (unlike in the space homogeneous case). Instead, we work with initial data that is pointwise positive in a small ball, and spread this positivity to the whole domain via our result in \cite{HST2020lowerbounds}. This means we need to re-work the regularity estimates of \cite{imbert2020smooth} to depend quantitatively on pointwise lower bounds for $f$ rather than a lower mass density bound.
Finally, we need to understand how the regularity of $f$ degenerates as $t\searrow 0$ in the case of irregular initial data, which requires us to revisit some of the arguments in \cite{imbert2020smooth} to track the dependence on $t$. Our extension of the global regularity estimates and change of variables of \cite{imbert2020smooth} to the case $\gamma+2s< 0$ may be of independent interest. \subsubsection{Uniqueness}\label{s:intro-uniqueness} In this section, we discuss some of the difficulties in proving uniqueness. Given two solutions $f$, $g$, the bilinearity of $Q$ implies, with $h=f-g$, \begin{equation}\label{e.h-eqn} \partial_t h + v\cdot \nabla_x h = Q(h,f) + Q(g,h). \end{equation} Any standard strategy for proving uniqueness would involve bounding $h$ in some norm, using this equation or its equivalent. For the sake of discussion, we set aside the (nontrivial) difficulties related to velocity growth on the right-hand side of \eqref{e.h-eqn}, as well as the potential lack of decay for large $x$, to focus on a more serious difficulty: that some regularity of $f$ in the $v$ variable is needed to control the term $Q(h,f)$, either of order $2s$ for a pointwise bound or order $s$ for integrals like $\int g Q(h,f) \, \mathrm{d} v$. In the context of irregular initial data, regularity estimates for $f$ must degenerate as $t\searrow 0$, but one may still get a good bound for $h$ if this degeneration is slow enough for a Gr\"onwall-style argument. Let us distinguish between two very different regimes: {\it If vacuum regions are present in the initial data}, then the known lower bounds for $f$, which are expected to be sharp, degenerate very quickly in such regions, at a rate like $e^{-c/t}$ (see \cite{HST2020lowerbounds}). The available regularization mechanisms, such as entropy dissipation or linear De Giorgi estimates, rely on lower bounds for the mass density of $f$, and are therefore useless as $t\searrow 0$.
For this reason, uniqueness of solutions in this regime is expected to be very difficult and require completely new ideas, if it even holds. {\it If there are no vacuum regions in the initial data}, the situation appears more hopeful, because we can use our earlier result \cite{HST2020lowerbounds} to obtain positive lower bounds for $f$ and $g$ that are uniform for small times. Because $f$ and $g$ satisfy good lower and upper bounds on some time interval, they enjoy the regularity provided by entropy dissipation on that interval, and one might try to exploit this regularity to prove uniqueness. Let us make a brief digression to explain why this approach does not work: As described in Section \ref{s:related}, one has an {\it a priori} bound on $\sqrt f$ in $L^2_{t,x}H^s_v([0,T]\times\mathbb R^6)$ via the formal identity \eqref{e.entropy-dissipation}, which can be improved to a bound for $f$ itself in the same space, using our $L^\infty_q$ estimates for $f$. The same bounds apply to $g$ and (by the triangle inequality) $h$. Integrating \eqref{e.h-eqn} against $h$ and using coercivity and trilinear estimates for $Q$ that are standard in the literature, one would obtain an estimate of the following form (recall that we are ignoring velocity weights and the possible lack of decay for large $x$): \[ \frac d {dt} \|h(t)\|_{L^2_{x,v}}^2 \lesssim \int \|h(t,x)\|_{L^2_v} \|f(t,x)\|_{H^s_v} \|h(t,x)\|_{H^s_v} \, \mathrm{d} x. \] To close this estimate, one would need to bound the right-hand side by a constant times $\|h(t)\|_{L^2_{x,v}}^2$. An $L^\infty_x$-bound on $\|f(t,x)\|_{H^s_v}$ and a bound like $\|h(t,x)\|_{H^s_v} \lesssim \|h(t,x)\|_{L^2_v}$ would be sufficient to do this, but unfortunately, we only have bounds for $f$ and $h$ in $L^2_x H^s_v$, so it is not at all clear how to close the above argument.
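For comparison, if one did have an $L^\infty_x$ bound on $\|f(t,x)\|_{H^s_v}$ together with a (generally unavailable) bound $\|h(t,x)\|_{H^s_v} \lesssim \|h(t,x)\|_{L^2_v}$, the estimate above would yield
\[
\frac{d}{dt}\|h(t)\|_{L^2_{x,v}}^2 \lesssim \Big(\sup_{x}\|f(t,x)\|_{H^s_v}\Big) \|h(t)\|_{L^2_{x,v}}^2,
\]
and Gr\"onwall's inequality, together with $h(0) = 0$ (both solutions share the same initial data), would force $h \equiv 0$ as long as $\sup_x \|f(t,x)\|_{H^s_v}$ remains integrable in time.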
This gap between an $L^2_x$ estimate arising from the formal structure of the equation, and a desired estimate in $L^\infty_x$, is reminiscent of the current state of the global well-posedness problem for \eqref{e.boltzmann}: $L^\infty_x$ bounds for the mass, energy, and entropy densities would be sufficient to extend large-data solutions globally in time \cite{imbert2020smooth}, but the natural conservation laws of the equation only provide bounds in $L^1_x$ for these densities. Bridging this gap is widely considered to be out of reach with current techniques. Based on this apparent similarity, we believe that our assumption of H\"older continuity for $f_{\rm in}$ in Theorem \ref{t:uniqueness} is more than a technicality, and that removing it may be a difficult problem. Instead of entropy dissipation, one may try to apply the global H\"older estimates of De Giorgi and Schauder type from \cite{imbert2020smooth} to obtain enough regularity to bound $Q(h,f)$ pointwise. Although these estimates on $[\tau,T]\times\mathbb R^6$ are uniform in $x$, they also must degenerate as $\tau\to 0$ since they include the case of irregular initial data. In Proposition \ref{p:nonlin-schauder}, we determine the explicit dependence on $\tau$ when Schauder estimates are applied to $f$: ignoring velocity weights, one has \begin{equation}\label{e.fake-schauder} \|f\|_{C^{2s+\alpha'}_\ell([\tau,T]\times\mathbb R^6)} \leq C \tau^{-1 + \frac{\alpha-\alpha'}{2s}} \|f\|_{C^\alpha_\ell([\tau/2,T]\times\mathbb R^6)}^{1+(\alpha+2s)/\alpha'}, \end{equation} with $\alpha' = \alpha \frac {2s}{1+2s}$. This exponent of $\tau$ is consistent with a gain of regularity of order $2s+\alpha' - \alpha$ on a kinetic cylinder of width $\sim\tau^{\frac 1 {2s}}$ in the time variable (see \eqref{e.cylinder}).
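To make this consistency check explicit: with $r := \tau^{1/(2s)}$ denoting the cylinder width,
\[
\tau^{-1+\frac{\alpha-\alpha'}{2s}} = \tau^{-\frac{2s+\alpha'-\alpha}{2s}} = r^{-(2s+\alpha'-\alpha)},
\]
which is precisely the scaling factor associated with gaining $2s+\alpha'-\alpha$ derivatives at scale $r$.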
By a similar heuristic, the global $C^\alpha_\ell$ estimate (Theorem \ref{t:degiorgi}) on $[\tau/2,T]\times\mathbb R^6$ should have a constant proportional to $\tau^{-\frac \alpha {2s}}$. Combining this with \eqref{e.fake-schauder}, a bound for the $C^{2s+\alpha'}_\ell$ norm in terms of $\|f\|_{L^\infty}$ would give an overall $\tau$ dependence of \[ \tau^{-1 + \frac{\alpha - \alpha'}{2s} - \frac \alpha {2s}(1+\frac{\alpha+2s}{\alpha'})}, \] which is not integrable as $\tau\to 0$. Therefore, this line of argument does not seem feasible without any additional regularity assumptions for $f_{\rm in}$. On the other hand, if $f$ were bounded in $C^\alpha_\ell$ uniformly for small times, an estimate of the form \eqref{e.fake-schauder} would be sufficient to derive a time-integrable bound on $Q(h,f)$ in \eqref{e.h-eqn}. This motivates our extra assumption that $f_{\rm in}$ is H\"older continuous, and the following step-by-step strategy for proving uniqueness: \begin{enumerate} \item Prove that the H\"older modulus of $f_{\rm in}$ in $(x,v)$ variables is propagated forward to positive times. To do this, we study the function $g$ defined for $(t,x,v,\chi,\nu) \in \mathbb R^{13}$ and $m>0$ by \begin{equation*} g(t,x,v,\chi,\nu) = \frac{f(t,x+\chi,v+\nu) - f(t,x,v)}{(|\chi|^2+|\nu|^2)^{\alpha/2}} \langle v\rangle^m. \end{equation*} Bounding $g$ in $L^\infty$ on a short time interval is equivalent to controlling the weighted $\alpha$-H\"older seminorm of $f$. Note that this is the H\"older seminorm with respect to the Euclidean scaling on $\mathbb R^6$, not the kinetic scaling that one might expect. This choice is imposed on us by the proof. Using \eqref{e.boltzmann}, we derive an equation satisfied by $g$ and use Gr\"onwall's inequality to bound $g$ on a short time interval.
This step requires a detailed analysis of the quantity $Q(f,f)(t,x+\chi,v+\nu) - Q(f,f)(t,x,v)$, the repeated use of annular decompositions of the velocity integrals defining $Q$, and an estimate of the form~\eqref{e.fake-schauder} coming from a carefully scaled version of the Schauder estimates. This approach to propagating H\"older continuity is inspired by \cite{constantin2015SQG}. \item Show that the H\"older regularity for $f$ in $(x,v)$ from the previous step implies H\"older regularity in $t$ as well. This property is clearly false for general functions on $\mathbb R^7$, so we must exploit the equation \eqref{e.boltzmann}. The proof is surprisingly intricate and is based on controlling a finite difference in $t$ of $f$ via well-chosen barriers. \item Using the regularity from the prior two steps, apply Schauder estimates to conclude $C^{2s+\alpha'}_\ell$ regularity for $f$, for some $\alpha'>0$. \item Armed with this regularity for $f$, return to \eqref{e.h-eqn} and use (for the only time in this paper) the energy method to bound $h= f-g$ in a weighted $L^2$-norm and establish weak-strong uniqueness. The energy method is chosen because of its compatibility with our notion of weak solution, but one must contend with the lack of decay for large $x$. This step of the proof combines the strategy for $L^2$-estimates developed in \cite{HST2020boltzmann, henderson2021existence} with a spatial localization method that is compatible with the transport term. The particular form of our localizing cutoff function (which depends on both $x$ and $v$) leads to extra difficulties because we cannot deal with the $x$ and $v$ integrations separately. \end{enumerate} We should note that this strategy requires working with regularity in all three variables because of the application of Schauder estimates, even though the important ingredient for proving uniqueness is the regularity in $v$.
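In this connection, the non-integrability of the $\tau$-exponent displayed earlier in this subsection can be checked by direct computation: since $\alpha' = \alpha\frac{2s}{1+2s}$ gives $\alpha - \alpha' = \frac{\alpha}{1+2s}$ and $\frac{\alpha+2s}{\alpha'} = \frac{(\alpha+2s)(1+2s)}{2s\alpha}$, the exponent equals
\[
-1 + \frac{\alpha-\alpha'}{2s} - \frac{\alpha}{2s}\Big(1 + \frac{\alpha+2s}{\alpha'}\Big)
= -1 - \frac{\alpha}{1+2s} - \frac{(\alpha+2s)(1+2s)}{4s^2} < -1
\]
for every $\alpha\in(0,1)$ and $s\in(0,1)$, so $\tau$ raised to this power is never integrable near $\tau = 0$.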
\subsection{Open problems} \subsubsection{Relaxing the H\"older continuity assumption in Theorem \ref{t:uniqueness}} As discussed in Section \ref{s:intro-uniqueness}, proving uniqueness without any regularity assumptions on $f_{\rm in}$ may be a difficult problem. In the Landau case, a recent work \cite{henderson2022schauder} by the first named author, jointly with W.~Wang, derived a uniqueness theorem that requires $f_{\rm in}$ to be H\"older continuous in $x$ but only logarithmically H\"older in $v$, via Schauder estimates with time-irregular coefficients. See \cite{biagi2022schauder} for a similar Schauder estimate. It is likely that an analogous improvement is available for the Boltzmann equation via a refinement of the Schauder estimates in \cite{imbert2018schauder}, though this would be nontrivial to prove. Even with such an improvement, H\"older regularity in $x$ would be needed for the initial data. \subsubsection{The case $\gamma > 0$} In the case $\gamma > 0$, the analysis of the Boltzmann equation is somewhat different because the kinetic term $|v-v_*|^\gamma$ in $Q(f,g)$ becomes a growing weight instead of a singularity. Our argument in this paper for local existence uses the assumption $\gamma\leq 0$ crucially, and a different argument would be required for $\gamma > 0$. We have proven some of our intermediate results in this paper without the restriction $\gamma\leq 0$, with a mind to eventually filling this gap. \subsubsection{Classical solutions without a locally uniform lower bound} Our construction of classical solutions requires a locally uniform positive lower bound at time zero (condition \eqref{e.mass-core} in Theorem \ref{t:existence}). This is automatically true if the initial data is continuous and not identically zero, but our initial data may be discontinuous, so \eqref{e.mass-core} is an extra assumption we have to make.
In either limit $\delta\searrow 0$ or $r\searrow 0$, we lose all quantitative control on the pointwise regularity of our solutions, and we can only recover a weak solution in the sense of Theorem \ref{t:weak-solutions}. On the other hand, if $f_{\rm in}$ is identically 0, then the solution is also identically zero for positive times, and is therefore perfectly smooth. This leaves open the question of regularity for solutions with initial data $f_{\rm in}\in L^\infty_q(\mathbb R^6)$ that is not identically zero but nowhere uniformly positive. \subsubsection{Decay estimates and continuation} Continuation criteria for \eqref{e.boltzmann} are highly relevant because they represent partial progress toward the outstanding open problem of global existence of non-perturbative solutions. As with any short-time existence result, our Theorem \ref{t:existence} implies a continuation criterion: solutions can be extended past any time $T$ such that $\|f(t)\|_{L^\infty_q(\mathbb R^6)}$ remains finite for $t\in[0,T]$, for some $q>2s+3$.\footnote{By \cite{HST2020lowerbounds}, the lower bound condition \eqref{e.mass-core} is automatically satisfied for any positive time $T$, with constants depending on $T$, as long as it holds at time zero. Note that the time of existence granted by Theorem \ref{t:existence} does not depend quantitatively on $\delta$, $r$, or $v_m$.} On the other hand, the continuation criterion of \cite{HST2020lowerbounds} (which combined the lower bounds of \cite{HST2020lowerbounds} with the continuation criterion of \cite{imbert2020smooth}) states that solutions can be continued as long as $\|f(t)\|_{L^\infty_x (L^1_2)_v(\mathbb R^6)}$ remains finite. The continuation criterion of \cite{HST2020lowerbounds} only applies to solutions that are smooth, rapidly decaying, and spatially periodic, and applies only when $\gamma+2s\in [0,2]$.
Ideas related to the decay analysis in the current paper could likely strengthen \cite{HST2020lowerbounds} by enlarging the class of solutions and ranges of $(\gamma,s)$ that can be handled, and possibly by replacing $L^\infty_x (L^1_2)_v$ with a weaker $L^\infty_x (L^1_q)_v$ norm. We plan to explore this question in a future article. \subsection{Notation}For any $\lambda\in \mathbb R$, we write $\lambda_+ = \max\{\lambda,0\}$ and $\lambda_- = \max\{-\lambda,0\}$. We call a constant \emph{universal} if it depends only on $\gamma$, $s$, and the constant $c_b$ in \eqref{e.b-bounds}. Inside of proofs, to keep the notation clean, we often write $A \lesssim B$ to mean $A\leq CB$ for a constant $C>0$ depending on $\gamma$, $s$, $c_b$, and the quantities in the statement of the lemma or theorem being proven. We also write $A\approx B$ when $A\lesssim B$ and $B\lesssim A$. Throughout the manuscript, it is always assumed that $\gamma < 0$ unless otherwise indicated (some results apply to the case $\gamma \in [0,1]$ as well). We say that a solution to~\eqref{e.boltzmann} is classical if $(\partial_t + v\cdot\nabla_x) f$ and $Q(f,f)$ are continuous and~\eqref{e.boltzmann} holds pointwise. \subsection{Outline of the paper} In Section \ref{s:prelim}, we recall and slightly extend some results from the literature that are needed for our study. Section \ref{s:global-reg} extends the change of variables and global regularity estimates of \cite{imbert2020smooth} to the case $\gamma+2s<0$. Section \ref{s:existence} is devoted to the proof of existence. Section \ref{s:time-reg} addresses the extension of H\"older regularity from $(x,v)$ variables to the $t$ variable. Section \ref{s:holder-xv} propagates a H\"older modulus from $t=0$ to positive times, and Section \ref{s:uniqueness} finishes the proof of uniqueness. Section \ref{s:global} proves existence of global solutions near equilibrium.
Appendix \ref{s:cov-appendix} proves the key properties of the change of variables defined in Section \ref{s:global-reg}, and Appendix \ref{s:lemmas} contains some technical lemmas. \section{Preliminaries}\label{s:prelim} \subsection{Kinetic H\"older spaces}\label{s:holder-spaces} To study the regularity properties of the Boltzmann equation, we use the kinetic H\"older spaces from \cite{imbert2020smooth, imbert2018schauder}, which we briefly recall now. First, let us recall two transformations that are well-adapted to the symmetries of linear kinetic equations with velocity diffusion of order $2s$. For $z_1 = (t_1,x_1,v_1)$ and $z = (t,x,v)$ points of $\mathbb R^7$, define the Lie product \[ z_1 \circ z = ( t_1 + t, x_1 + x + tv_1, v_1 + v)\] and the dilation \begin{equation}\label{e.dilation-def} \delta_r(z) = (r^{2s}t, r^{1+2s}x, rv), \quad r>0. \end{equation} Next, define the distance\footnote{In fact, $d_\ell$ does not satisfy the triangle inequality if $s< 1/2$. (See \cite{imbert2018schauder}.) This fact causes no issues in our analysis, and we refer to $d_\ell$ as a distance regardless.} \begin{equation}\label{e.dl} d_\ell(z_1,z_2) := \min_{w\in \mathbb R^3} \max\{ |t_1-t_2|^{\frac 1 {2s}}, |x_1 - x_2 - (t_1-t_2) w|^{\frac 1 {1+2s}}, |v_1 - w|, |v_2 - w|\}. \end{equation} This distance is invariant under left translations (hence the $\ell$ subscript, which stands for left-invariant) and dilations: for any $z_1, z_2, \xi\in \mathbb R^7$ and $r>0$, \begin{align} d_\ell(\xi\circ z_1, \xi\circ z_2) &= d_\ell(z_1,z_2),\label{e.left}\\ d_\ell(\delta_r(z_1), \delta_r(z_2)) &= rd_\ell(z_1,z_2).\label{e.dilation} \end{align} The distance $d_\ell$ is not invariant under right translations.
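The left invariance \eqref{e.left} can be seen directly from \eqref{e.dl}: writing $\xi = (t_\xi,x_\xi,v_\xi)$, the time difference of $\xi\circ z_1$ and $\xi\circ z_2$ is again $t_1-t_2$, while
\[
(x_\xi + x_1 + t_1 v_\xi) - (x_\xi + x_2 + t_2 v_\xi) - (t_1-t_2)w = x_1 - x_2 - (t_1-t_2)(w-v_\xi), \qquad (v_\xi + v_i) - w = v_i - (w - v_\xi),
\]
so the substitution $w \mapsto w + v_\xi$ in the minimum over $w$ returns the original expression. The dilation identity \eqref{e.dilation} follows in the same way by substituting $w \mapsto rw$.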
However, for right translations in the velocity variable, one has the useful property \begin{equation}\label{e.right-translation} \begin{split} d_\ell(z_1\circ(0,0,w), z_2\circ(0,0,w)) &\leq d_\ell(z_1,z_2) + |t_1-t_2|^{1/(1+2s)} |w|^{1/(1+2s)}\\ &\leq d_\ell(z_1,z_2) + d_\ell(z_1,z_2)^{2s/(1+2s)} |w|^{1/(1+2s)}. \end{split} \end{equation} We define the kinetic cylinders in a way that respects the transformations \eqref{e.left}, \eqref{e.dilation}: \begin{equation}\label{e.cylinder} Q_r(z_0) = \{z=(t,x,v) \in \mathbb R^7 :t_0-r^{2s}< t\leq t_0, |x-x_0-(t-t_0)v_0|<r^{1+2s}, |v-v_0| < r\}. \end{equation} We often write $Q_r = Q_r(0)$. Note that $Q_r = \delta_r(Q_1)$, and $Q_r(z_0) = z_0 \circ Q_r$. The kinetic H\"older spaces are defined in terms of approximation by polynomials. For any monomial $m$ in the variables $t,x,v$ of the form \[ m(t,x,v) = c t^{\alpha_0} x_1^{\alpha_1} x_2^{\alpha_2} x_3^{\alpha_3} v_1^{\alpha_4} v_2^{\alpha_5} v_3^{\alpha_6},\] with $c\neq 0$, we define the kinetic degree as \[\mbox{deg}_k m = 2s\alpha_0 + (1+2s)(\alpha_1 + \alpha_2 + \alpha_3) + \alpha_4 + \alpha_5 + \alpha_6.\] This definition is compatible with the scaling $(t,x,v) \mapsto \delta_r(t,x,v)$. For any nonzero polynomial $p(t,x,v)$, we define its kinetic degree as the maximum of $\mbox{deg}_k$ over all monomial terms in $p$. Now we are ready to define the kinetic H\"older spaces: \begin{definition} Given any $\alpha>0$ and any open set $D\subset \mathbb R^7$, a continuous function $f:D\to \mathbb R$ is \emph{$\alpha$-H\"older continuous} at $z_0\in D$ if there exists a polynomial $p(t,x,v)$ with $\mbox{deg}_k(p)< \alpha$ such that \begin{equation}\label{e.f-holder} |f(z) - p(z)| \leq C d_\ell(z,z_0)^\alpha, \quad z\in D. \end{equation} We say $f\in C^\alpha_\ell(D)$ if the inequality \eqref{e.f-holder} holds at all points of $D$.
The semi-norm $[f]_{C^\alpha_\ell(D)}$ is the smallest value of the constant $C$ such that \eqref{e.f-holder} holds for all $z,z_0\in D$ (with the polynomial $p$ depending on $z_0$). The norm $\|f\|_{C^\alpha_\ell(D)}$ is defined as $\|f\|_{L^\infty(D)} + [f]_{C^\alpha_\ell(D)}$. \end{definition} For functions defined on open subsets $D\subset \mathbb R^6$, the seminorm $[f]_{C^\alpha_{\ell,x,v}(D)}$ can be defined similarly as the smallest constant $C>0$ such that for every $(x_0,v_0)\in D$, there is a polynomial $p(x,v)$ with $\mbox{deg}_k(p)< \alpha$, such that \[ |f(x,v) - p(x,v)|\leq Cd_\ell((0,x_0,v_0), (0,x,v))^\alpha. \] We also define the global kinetic H\"older spaces with polynomial weights: \begin{definition}\label{d:C-alpha-q} Given $\alpha, q>0$ and $0<\tau<T$, we define the weighted semi-norm \[ [f]_{C^\alpha_{\ell,q}([\tau,T]\times\mathbb R^6)} := \sup\left\{ (1+|v|)^q [f]_{C^\alpha_\ell(Q_r(z))} : r\in (0,1] \text{ and } Q_r(z) \subset [\tau,T]\times\mathbb R^6\right\}.\] We say $f\in C^\alpha_{\ell,q}([\tau,T]\times\mathbb R^6)$ if the norm \[ \|f\|_{C^\alpha_{\ell,q}([\tau,T]\times\mathbb R^6)} = \|f\|_{L^\infty_q([\tau,T]\times\mathbb R^6)} + [f]_{C^\alpha_{\ell,q}([\tau,T]\times\mathbb R^6)}\] is finite. \end{definition} \subsection{Well-posedness for regular initial data} As part of our existence proof, we need to construct solutions corresponding to smooth, rapidly decaying approximations of our initial data. For this, we use the following proposition, which combines two short-time existence results from the literature. We state here a non-sharp result with assumptions that are uniform in $\gamma$ and $s$ for the sake of brevity (and because we do not need the sharp version). \begin{proposition}\label{p:prior-existence} Let $\gamma \in (-3,0)$ and $s\in (0,1)$. Let $\mathbb T_M^3$ be the 3-dimensional torus of side length $M>0$.
For any $k\geq 6$, there exist $n_0,p_0>0$ depending on universal constants and $k$, such that for any initial data $f_{\rm in}\geq 0$ defined for $(x,v)\in \mathbb T_M^3\times \mathbb R^3$ with $f_{\rm in} \in H^k_{n}\cap L^\infty_{p}(\mathbb T^3_M\times \mathbb R^3)$ with $n\geq n_0$ and $p\geq p_0$, there exists a unique solution $f\geq 0$ to \eqref{e.boltzmann} in $C^0([0,T], H^k_{n}\cap L^\infty_{p}( \mathbb T_M^3\times \mathbb R^3))$ for some $T>0$ depending on $\|f_{\rm in} \|_{H^k_{n}} + \|f_{\rm in}\|_{L^\infty_{p}}$, with $f(0,x,v) = f_{\rm in}(x,v)$. \end{proposition} The proofs for the case $M=1$ can be found in the following works: for any $s\in (0,1)$ and $\max\{-3,-3/2-2s\} < \gamma < 0$, see \cite{HST2020boltzmann}. For $s\in (0,1)$ and $\gamma \in (-3, -2s)$, see \cite{henderson2021existence}. To extend this result to the case of general $M>0$, we rescale to the torus $\mathbb T^3$ of side length 1 by defining \[ \tilde f_{\rm in} := M^{\gamma+3} f_{\rm in} (M x, M v), \quad x\in \mathbb T^3, v\in \mathbb R^3. \] The result for the $M=1$ case gives us a solution $\tilde f$ on $[0,T]\times \mathbb T^3\times\mathbb R^3$, and to scale back to the torus of size $M$, we define \[ f(t,x,v) := \frac 1 {M^{\gamma+3}}\tilde f\left( t, \frac x {M}, \frac v {M}\right), \quad t\in [0,T], x\in \mathbb T_M^3, v\in \mathbb R^3. \] By a direct calculation, $f$ solves the Boltzmann equation \eqref{e.boltzmann}, with initial data $f_{\rm in}$. The function $f$ lies in the same regularity spaces as $\tilde f$. \subsection{Carleman representation}\label{s:carleman} The collision operator $Q(f,g)$ defined in \eqref{e.Q} can be written as a sum of two terms $Q = Q_{\rm s} + Q_{\rm ns}$, where the first (``\textbf{\textit{s}}ingular'') term $Q_{\rm s}$ acts as a nonlocal diffusion operator of order $2s$. The second (``\textbf{\textit{n}}on\textbf{\textit{s}}ingular'') term $Q_{\rm ns}$ is a lower-order convolution term.
By adding and subtracting $f(v_*')g(v)$ inside the integral in \eqref{e.Q}, one has \begin{equation} \begin{split} &Q_{\rm s}(f,g) = \int_{\mathbb R^3} \int_{{\mathbb S}^2} (g(v')-g(v)) f(v_*') B(|v-v_*|,\sigma)\, \mathrm{d} \sigma \, \mathrm{d} v_* \qquad\text{and}\\ &Q_{\rm ns}(f,g) = g(v) \int_{\mathbb R^3} \int_{{\mathbb S}^2} (f(v_*')-f(v_*)) B(|v-v_*|,\sigma) \, \mathrm{d} \sigma \, \mathrm{d} v_*. \end{split} \end{equation} It can be shown \cite{alexandre2000noncutoff, silvestre2016boltzmann} that $Q_{\rm s}(f,\cdot)$ is equal to an integro-differential operator with kernel depending on $f$: \begin{lemma}{\cite[Section 4]{silvestre2016boltzmann}}\label{l:Q1} The term $Q_{\rm s}(f,g)$ can be written \begin{equation}\label{e.Q1-new} Q_{\rm s}(f,g) = \int_{\mathbb R^3} (g(v')-g(v)) K_f (v, v') \, \mathrm{d} v', \end{equation} with kernel \begin{equation}\label{e.kernel} K_f(v,v') = \frac {1} {|v'-v|^{3+2s} }\int_{(v'-v)^\perp} f(v+w) |w|^{\gamma+2s+1} \tilde b(\cos\theta) \, \mathrm{d} w, \end{equation} where $\tilde b$ is uniformly positive and bounded. \end{lemma} Above, we have used the shorthand $(v'-v)^\perp$ to mean $\{w: w\cdot(v'-v) = 0 \}$. For the term $Q_{\rm ns}$, we have the following formula, which is related to the Cancellation Lemma of \cite{alexandre2000entropy}: \begin{lemma}\label{l:Q2} The term $Q_{\rm ns}(f,g)$ can be written \[ Q_{\rm ns}(f,g) = Cg(v)\int_{\mathbb R^3} f(v+w) |w|^\gamma \, \mathrm{d} w, \] for a constant $C>0$ depending only on the bounds \eqref{e.b-bounds} for the collision cross-section $b$.
\end{lemma} \subsection{Self-generating lower bounds} The main result of \cite{HST2020lowerbounds} states that if $f_{\rm in}$ is uniformly positive in some ball in $(x,v)$ space, this positivity is spread instantly to the entire domain: \begin{theorem}{\cite[Theorem 1.2]{HST2020lowerbounds}}\label{t:lower-bounds} Let $\gamma \in (-3,1)$ and $s\in (0,1)$. Suppose that $f$ is a classical solution ($C^1$ in $t,x$ and $C^2$ in $v$) of \eqref{e.boltzmann} on $[0,T]\times \mathbb R^6$, with initial data $f(0,x,v)\geq 0$ satisfying the lower bound \eqref{e.mass-core}, i.e. \begin{equation*} f(0,x,v) \geq \delta, \quad (x,v) \in B_r(x_m,v_m), \end{equation*} for some $x_m,v_m\in \mathbb R^3$ and $\delta, r>0$. Assume that $f$ satisfies \begin{equation}\label{e.hydro-general} \begin{split} &\sup_{t\in [0,T],x\in \mathbb R^3} \int_{\mathbb R^3} \langle v\rangle^{(\gamma+2s)_+} f(t,x,v) \, \mathrm{d} v \leq K_0, \qquad \text{ and}\\ &\sup_{t\in [0,T],x\in \mathbb R^3} \| f(t,x,\cdot)\|_{L^p(\mathbb R^3)} \leq P_0 \quad \text{ for some } p>\frac{3}{3+\gamma+2s} \quad (\mbox{only if } \gamma + 2s < 0). \end{split} \end{equation} Then \[ f(t,x,v) \geq \mu(t,x) e^{-\eta(t,x)|v|^2}, \quad (t,x,v) \in (0,T]\times\mathbb R^6,\] where $\mu(t,x)$ and $\eta(t,x)$ are uniformly positive and bounded on any compact subset of $(0,T]\times \mathbb R^3$, and depend only on $t$, $T$, $|x-x_m|$, $\delta$, $r$, $v_m$, $K_0$, and $P_0$. Furthermore, near the point $x_m$, the lower bounds are uniform up to time zero: \begin{equation}\label{e.small-t-lower} f(t,x,v) \geq \mu e^{-\eta|v|^2}, \quad (t,x,v) \in (0,T]\times B_{r/2}(x_m)\times \mathbb R^3, \end{equation} for constants $\mu, \eta>0$ depending on $\delta$, $r$, $v_m$, $K_0$, and $P_0$. \end{theorem} As stated in \cite{HST2020lowerbounds}, this theorem requires an upper bound on the energy density $\int_{\mathbb R^3} |v|^2 f(t,x,v) \, \mathrm{d} v$.
However, it is clear from the proof that a bound on the $\gamma+2s$ moment is sufficient. More specifically, the only purpose of the energy density bound is to estimate $Q_{\rm s}$ from above via Lemma \ref{l:C2Linfty} below, and a bound for $\int_{\mathbb R^3} \langle v\rangle^{\gamma+2s} f\, \mathrm{d} v$ suffices to estimate the convolution $f\ast|v|^{\gamma+2s}$ in Lemma \ref{l:C2Linfty}. We should also note that \eqref{e.small-t-lower} is not stated as part of the main result of \cite{HST2020lowerbounds}, but follows immediately from \cite[Lemma 3.1 and Proposition 3.3]{HST2020lowerbounds}. The following lemma gives a cone of nondegeneracy for the collision kernel $K_f$. When combined with the previous theorem, it provides coercivity estimates for $Q_{\rm s}(f,\cdot)$ that depend only on the initial data and the quantities in \eqref{e.hydro-general}. \begin{lemma}{\cite[Lemma 4.1]{HST2020lowerbounds}}\label{l:cone} Let $f:\mathbb R^3\to \mathbb R$ be a nonnegative function with $f(v) \geq \delta 1_{B_r(v_m)}$ for some $\delta, r > 0$ and $v_m\in \mathbb R^3$. There exist constants $\lambda, \mu, C > 0$ (depending on $\delta$, $r$, and $|v_m|$) such that for each $v\in \mathbb R^3$, there is a symmetric subset of the unit sphere $A(v)\subset \mathbb S^2$ such that: \begin{itemize} \item $|A(v)|_{\mathcal H^2}\geq \mu (1+|v|)^{-1}$, where $|\cdot|_{\mathcal H^2}$ is the $2$-dimensional Hausdorff measure. \item For all $\sigma \in A(v)$, $|\sigma\cdot v|\leq C$. \item Whenever $(v-v')/|v-v'| \in A(v)$, \[ K_f(v,v') \geq \lambda (1+|v|)^{1+\gamma+2s} |v'-v|^{-3-2s}. \] \end{itemize} \end{lemma} These results also give a pointwise upper bound that we will need in our analysis.
The following proposition combines the $L^\infty$ bounds of \cite{silvestre2016boltzmann} with the cone of nondegeneracy of Lemma \ref{l:cone}: \begin{proposition}{\cite[Proposition 4.2]{HST2020lowerbounds}}\label{p:Linfty} Let $f$ be a solution of the Boltzmann equation \eqref{e.boltzmann} on $[0,T]\times\mathbb T^3\times\mathbb R^3$ that satisfies \eqref{e.hydro-general}. Assume that the initial data satisfies \eqref{e.mass-core}. Then $f$ satisfies an $L^\infty$ bound that is uniform away from $t=0$, i.e. \[ \|f(t,\cdot,\cdot)\|_{L^\infty(\mathbb T^3\times\mathbb R^3)} \leq C(1+t^{-3/(2s)}),\] for a constant depending on $T$, $\delta$, $r$, $v_m$, $K_0$, and (if $\gamma+2s<0$) $P_0$. \end{proposition} \subsection{Local regularity estimates} We recall the local regularity estimates of \cite{imbert2016weak} and \cite{imbert2018schauder} for linear kinetic equations of the following type: \begin{equation}\label{e.linear-kinetic} \partial_t f + v\cdot \nabla_x f = \int_{\mathbb R^3} (f(t,x,v') - f(t,x,v))K(t,x,v,v') \, \mathrm{d} v' + h, \end{equation} where the kernel $K$ satisfies suitable ellipticity assumptions.
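In our application, the Boltzmann equation itself takes this form: by the decomposition of Section \ref{s:carleman}, a solution of \eqref{e.boltzmann} satisfies \eqref{e.linear-kinetic} with
\[
K(t,x,v,v') = K_{f(t,x,\cdot)}(v,v') \quad\text{and}\quad h(t,x,v) = Q_{\rm ns}(f,f)(t,x,v) = C f(t,x,v)\int_{\mathbb R^3} f(t,x,v+w)|w|^\gamma \, \mathrm{d} w,
\]
so the estimates below apply to $f$ itself once the kernel $K_f$ is checked to satisfy the required hypotheses (see Remark \ref{r:ellipticity}).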
First, we have a De Giorgi-type estimate that gives H\"older continuity of solutions: \begin{theorem}[\cite{imbert2016weak}]\label{t:degiorgi} Let $K: (-1,0]\times B_1 \times B_2 \times \mathbb R^3 \to \mathbb R_+$ be a kernel satisfying the following ellipticity conditions, uniformly in $t$ and $x$, for some $\lambda, \Lambda > 0$: \begin{flalign} &\text{For all } v\in B_2, r>0,\quad \inf_{|e|=1}\int_{B_r(v)} ((v'-v)\cdot e)^2_+ K(t,x,v,v') \, \mathrm{d} v' \geq \lambda r^{2-2s},\quad \text{(if $s< 1/2$)},\label{e.coercivity1} \end{flalign} \begin{flalign} &\text{For any } f(v) \text{ supported in } B_2, \label{e.coercivity2}&\\ &\phantom{ \text{For any } f(v) } \iint_{B_2\times\mathbb R^3} f(v) (f(v) - f(v')) K(t,x,v,v') \, \mathrm{d} v' \, \mathrm{d} v \geq \lambda \|f\|_{\dot H^s(\mathbb R^3)}^2 - \Lambda \|f\|_{L^2(\mathbb R^3)}^2,\nonumber \end{flalign} \begin{flalign} &\text{For all } v\in B_2 , r>0, \quad \int_{\mathbb R^3\setminus B_r(v)} K(t,x,v,v') \, \mathrm{d} v' \leq \Lambda r^{-2s},&\label{e.upper1} \end{flalign} \begin{flalign} &\text{For all } v'\in B_2 , r>0, \quad \int_{\mathbb R^3\setminus B_r(v')} K(t,x,v,v') \, \mathrm{d} v \leq \Lambda r^{-2s},&\label{e.upper2} \end{flalign} \begin{flalign} &\text{For all } v \in B_{7/4}, \quad \left| \mbox{p.v.} \int_{B_{1/4}(v)}(K(t,x,v,v') - K(t,x,v',v)) \, \mathrm{d} v'\right| \leq \Lambda,&\label{e.cancellation1} \end{flalign} \begin{flalign} &\text{For all } r\in [0,1/4] \text{ and } v\in B_{7/4}, \label{e.cancellation2}&\\ &\phantom{\text{For }}\left| \mbox{p.v.} \int_{B_r(v)} (K(t,x,v,v') - K(t,x,v',v)) (v'-v) \, \mathrm{d} v'\right| \leq \Lambda (1+r^{1-2s}), \quad \text{(if $s\geq 1/2$)}.\nonumber \end{flalign} Let $f:(-1,0]\times B_1\times \mathbb R^3\to\mathbb R$ be a bounded function that is a solution of \eqref{e.linear-kinetic} in $Q_1$, for some bounded function $h$.
Then $f$ is H\"older continuous in $Q_{1/2}$, and \[ \|f\|_{C^\alpha_\epsll(Q_{1/2})} \leq C\left( \|f\|_{L^\infty((-1,0]\times B_1\times \mathbb R^3)} + \|h\|_{L^\infty(Q_1)}\right).\] The constants $C>0$ and $\alpha \in (0,1)$ depend only on $\lambda$ and $\Lambda$. \epsnd{theorem} Next, we recall Schauder-type estimates for linear kinetic integro-differential equations of the form \epsqref{e.linear-kinetic}. As in \cite{imbert2018schauder}, the kernel is assumed to be elliptic in the sense of the following definition: \begin{equation}gin{definition}[Ellipticity class]\label{d:ellipticity} Given $s\in (0,1)$ and $0< \lambda < \Lambda$, a kernel $K:\mathbb R^3\setminus\{0\}\to \mathbb R_+$ lies in the ellipticity class of order $2s$ if \begin{equation}gin{itemize} \item $K(w) = K(-w)$. \item For all $r>0$, \begin{equation}gin{equation}\label{e.upper-bound-schauder} \int_{B_r} |w|^2 K(w) \, \mathrm{d} w \leq \Lambda r^{2-2s}. \epsnd{equation} \item For any $R>0$ and $\varphi \in C^2(B_R)$, \begin{equation}gin{equation}\label{e.coercivity3} \iint_{B_R\times B_R} |\varphi(v) - \varphi(v')|^2 K(v'-v) \, \mathrm{d} v' \, \mathrm{d} v \geq \lambda \iint_{B_{R/2}\times B_{R/2}} |\varphi(v) - \varphi(v')|^2 |v'- v|^{-3-2s} \, \mathrm{d} v' \, \mathrm{d} v. \epsnd{equation} \item If $s<1/2$, assume in addition that for each $r>0$, \begin{equation}gin{equation}\label{e.coercivity4} \inf_{|e|=1} \int_{B_r} (w\cdot e)^2_+ K(w) \, \mathrm{d} w \geq \lambda r^{2-2s}. \epsnd{equation} \epsnd{itemize} \epsnd{definition} For technical convenience, we quote the scaled form of the Schauder estimate on cylinders $Q_{2r}$ with $r>0$, as in \cite[Theorem 4.5]{imbert2020smooth}: \begin{equation}gin{theorem}[\cite{imbert2018schauder, imbert2020smooth}]\label{t:schauder} Let $0 < \alpha < \min(1,2s)$, and let $\alpha' = \frac {2s}{1+2s}\alpha$. 
Let $f:(-(2r)^{2s},0]\times B_{(2r)^{1+2s}} \times \mathbb R^3\to \mathbb R$ be a solution of the linear equation \eqref{e.linear-kinetic} in $Q_{2r}$ for some bounded function $h$ and some integral kernel $K_z(w) = K(t,x,v,v+w): (-(2r)^{2s}, 0]\times B_{(2r)^{1+2s}}\times \mathbb R^3\times\mathbb R^3\to [0,\infty)$ satisfying, for each $t$, $x$, and $v$, the ellipticity assumptions of Definition \ref{d:ellipticity} for uniform constants $0< \lambda< \Lambda$, as well as the H\"older continuity assumption \begin{equation}\label{e.kernel-holder} \int_{B_\rho} |K_{z_1}(w) - K_{z_2}(w)| |w|^2 \, \mathrm{d} w \leq A_0 \rho^{2-2s} d_\ell(z_1,z_2)^{\alpha'}, \quad z_1,z_2\in Q_{2r}, \rho>0, \end{equation} for some $A_0>0$. If $f\in C^\alpha_\ell((-(2r)^{2s},0]\times B_{(2r)^{1+2s}} \times \mathbb R^3)$ and $h\in C^\alpha_\ell(Q_{2r})$, then \[\begin{split} \|f\|_{C^{2s+\alpha'}_\ell(Q_{r})} &\leq C\left(\max\left(r^{-2s-\alpha'+\alpha}, A_0^{(2s+\alpha'-\alpha)/\alpha'} \right)[f]_{C^\alpha_\ell([-(2r)^{2s},0]\times B_{(2r)^{1+2s}}\times\mathbb R^3)}\right.\\ &\qquad \left. + [h]_{C^{\alpha'}_\ell(Q_{2r})} + \max(r^{-\alpha'}, A_0)\|h\|_{L^\infty(Q_{2r})}\right). \end{split}\] The constant $C$ depends on $s$, $\lambda$, and $\Lambda$. \end{theorem} \begin{remark}\label{r:ellipticity} The local estimates of Theorems \ref{t:degiorgi} and \ref{t:schauder} impose a number of conditions on the integral kernel $K$. When the kernel is defined in terms of a function $f$ according to the formula for $K_f$ from \eqref{e.kernel}, one must place appropriate conditions on $f$ so that the kernel $K_f$ satisfies all the hypotheses of these two theorems.
Regarding the coercivity conditions \eqref{e.coercivity1}, \eqref{e.coercivity2}, \eqref{e.coercivity3}, and \eqref{e.coercivity4}, it is understood in the literature (see \cite{imbert2016weak, chaker2020coercivity, imbert2020smooth}) that all of these conditions follow from the existence of a cone of nondegeneracy as in Lemma \ref{l:cone}. The upper bound conditions \eqref{e.upper1} and \eqref{e.upper2} from Theorem \ref{t:degiorgi} hold for $K_f$ (locally in $v$) whenever the convolution $f\ast|\cdot|^{\gamma+2s} = \int_{\mathbb R^3} f(v+w)|w|^{\gamma+2s} \, \mathrm{d} w$ is bounded. This is shown in \cite[Lemmas 3.4 and 3.5]{imbert2016weak}. The cancellation conditions \eqref{e.cancellation1} and \eqref{e.cancellation2} hold whenever the convolutions $f\ast |\cdot|^\gamma$ and $f\ast |\cdot|^{\gamma+1}$ are bounded, from \cite[Lemmas 3.6 and 3.7]{imbert2016weak}. In particular, these conditions all hold whenever $f\in L^\infty_q(\mathbb R^3)$ for $q>\gamma+2s+3$. We emphasize that these four lemmas from \cite{imbert2016weak} are proven for any $\gamma$ and $s$ such that $\gamma+2s\leq 2$, including in the case $\gamma+2s<0$. From Lemma \ref{l:K-upper-bound-2}, we see that the upper bound \eqref{e.upper-bound-schauder} is also satisfied whenever the convolution $f\ast |\cdot|^{\gamma+2s}$ is bounded. To sum up, this discussion shows that the kernel $K_f$ defined by \eqref{e.kernel} satisfies the hypotheses of Theorems \ref{t:degiorgi} and \ref{t:schauder} whenever $f\in L^\infty_q(\mathbb R^3)$ with $q>\gamma+2s+3$ and $f$ satisfies a pointwise lower bound condition as in Lemma \ref{l:cone}, with constants depending on $|v|$, $\|f\|_{L^\infty_q(\mathbb R^3)}$, $q$, $\delta$, $r$, and $|v_m|$. In general, these estimates for $K_f$ degenerate as $|v|\to \infty$, which means $K_f$ is uniformly elliptic on any fixed bounded domain in velocity space, but not uniformly elliptic globally.
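As a standalone numerical illustration of the moment condition $q>\gamma+2s+3$ (this sketch is not part of the argument; the model profile $f(v)=\langle v\rangle^{-q}$ and the helper names are ours), the following checks the convolution $f\ast|\cdot|^{\gamma+2s}$ at $v=0$ against its Beta-function closed form $4\pi\cdot\tfrac12 B\big(\tfrac{p+3}{2},\tfrac{q-p-3}{2}\big)$ with $p=\gamma+2s$:

```python
import math

def beta(x, y):
    # Euler Beta function B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def convolution_at_origin(p, q, R=60.0, n=200000):
    # Midpoint-rule approximation of (f * |.|^p)(0) in R^3 for f(v) = <v>^(-q):
    # 4*pi * int_0^R r^(p+2) (1 + r^2)^(-q/2) dr  (tail beyond R is negligible
    # here since the integrand decays like r^(p+2-q) with p + 2 - q < -1)
    h = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += r ** (p + 2) * (1.0 + r * r) ** (-q / 2.0)
    return 4.0 * math.pi * h * total

# Model case gamma = -2, s = 1/2, so p = gamma + 2s = -1; take q = 5 > p + 3.
p, q = -1.0, 5.0
a = p + 3.0
exact = 4.0 * math.pi * 0.5 * beta(a / 2.0, (q - a) / 2.0)  # equals 4*pi/3 here
approx = convolution_at_origin(p, q)
assert abs(approx - exact) / exact < 1e-3
```

The quadrature only converges because $q>p+3$; lowering $q$ below that threshold makes the radial integrand non-integrable at infinity, mirroring the failure of the upper ellipticity bounds.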
The change of variables $\mathcal T_0$, described in \cite{imbert2020smooth} and Section \ref{s:global-reg} below, addresses this difficulty. \end{remark} \subsection{Estimates for the collision operator} First, we have an integral estimate on annuli for the kernel $K_f$ defined in \eqref{e.kernel}: \begin{lemma}{\cite[Lemma 4.3]{silvestre2016boltzmann}}\label{l:K-upper-bound} For any $r>0$, \[ \int_{B_{2r}\setminus B_r} K_f(v,v+w) \, \mathrm{d} w \leq C \left( \int_{\mathbb R^3} f(v+w)|w|^{\gamma+2s} \, \mathrm{d} w\right) r^{-2s}. \] \end{lemma} The following two closely related estimates can be proven by writing the integral over $B_r$ (respectively, $B_r^c$) as a sum of integrals over $B_{r2^{-n}}\setminus B_{r2^{-n-1}}$ for $n=0,1,2,\ldots$ (respectively $n=-1,-2,\ldots$) and applying Lemma \ref{l:K-upper-bound} for each $n$: \begin{lemma}\label{l:K-upper-bound-2} For any $r>0$, \[ \int_{B_{r}} K_f(v,v+w) |w|^2 \, \mathrm{d} w \leq C \left( \int_{\mathbb R^3} f(v+w)|w|^{\gamma+2s} \, \mathrm{d} w\right) r^{2-2s}. \] \[ \int_{B_r^c} K_f(v,v+w) \, \mathrm{d} w \leq C \left( \int_{\mathbb R^3} f(v+w)|w|^{\gamma+2s} \, \mathrm{d} w\right) r^{-2s}. \] \end{lemma} \begin{lemma}{\cite[Lemma 2.3]{imbert2020lowerbounds}}\label{l:C2Linfty} For any bounded, $C^2$ function $\varphi$ on $\mathbb R^3$, the following inequality holds: \[ |Q_s(f, \varphi)| \leq C\left(\int_{\mathbb R^3} f(v+w) |w|^{\gamma+2s} \, \mathrm{d} w\right) \|\varphi\|_{L^\infty(\mathbb R^3)}^{1-s} \|D^2 \varphi\|_{L^\infty(\mathbb R^3)}^s.\] \end{lemma} The next lemma, which appears to be new, is related to \cite[Proposition 3.1(v)]{HST2020boltzmann}, but the statement here is sharper in terms of the decay exponent. The small gain in the exponent provided by this lemma will be crucial in propagating higher polynomial decay estimates forward in time.
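Before stating the lemma, here is a quick numerical sanity check of the dyadic-annuli argument behind Lemma \ref{l:K-upper-bound-2} (a standalone sketch with our naming, not part of the proofs), using the model kernel $K(w)=|w|^{-3-2s}$, for which both the second moment on $B_r$ and the tail integrals are explicit:

```python
import math

S = 0.3  # fractional order s in (0,1); model kernel K(w) = |w|^(-3-2s)

def exact_second_moment(r):
    # int_{B_r} |w|^2 |w|^(-3-2s) dw = 4*pi * r^(2-2s) / (2-2s)
    return 4.0 * math.pi * r ** (2 - 2 * S) / (2 - 2 * S)

def dyadic_bound(r, nmax=200):
    # On the annulus B_{r 2^-n} \ B_{r 2^-(n+1)}, bound |w|^2 by (r 2^-n)^2 and
    # use the tail identity int_{|w| >= rho} K dw = (4*pi/(2s)) rho^(-2s).
    total = 0.0
    for n in range(nmax):
        rn = r * 2.0 ** (-n)
        total += rn ** 2 * (4.0 * math.pi / (2 * S)) * (rn / 2.0) ** (-2 * S)
    return total

for r in (0.1, 1.0, 10.0):
    ex, bd = exact_second_moment(r), dyadic_bound(r)
    assert ex <= bd  # the dyadic sum really is an upper bound
    # the geometric series scales exactly like r^(2-2s), as in the lemma
    assert abs(bd / r ** (2 - 2 * S) / dyadic_bound(1.0) - 1.0) < 1e-9
```

The same dyadic sum with $n=-1,-2,\ldots$ gives the second inequality of the lemma with the rate $r^{-2s}$.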
\begin{lemma}\label{l:Q-polynomial} For any $q_0> \gamma + 2s + 3$, let $f\in L^\infty_{q_0}(\mathbb R^3)$ be a nonnegative function, and choose $q\in [q_0, q_0-\gamma]$. (Recall that $\gamma < 0$.) Then there holds \[ Q(f,\langle \cdot\rangle^{-q})(v) \leq C\|f\|_{L^\infty_{q_0}(\mathbb R^3)}\langle v\rangle^{-q}.\] The constant $C$ depends on universal constants and $q$. \end{lemma} \begin{proof} Writing $Q = Q_{\rm s} + Q_{\rm ns}$, from Lemma \ref{l:Q2} we have \[ Q_{\rm ns} (f, \langle \cdot\rangle^{-q})(v) \approx \langle v\rangle^{-q} \int_{\mathbb R^3} f(v-w) |w|^{\gamma} \, \mathrm{d} w \lesssim \langle v\rangle^{-q} \|f\|_{L^\infty_{q_0}(\mathbb R^3)},\] by the convolution estimate Lemma \ref{l:convolution}, since $q_0> \gamma+2s+3 > \gamma + 3$. For the singular term, Lemma \ref{l:Q1} gives \[ Q_{\rm s} (f,\langle \cdot \rangle^{-q})(v)= \int_{\mathbb R^3} K_f(v,v') [\langle v'\rangle^{-q} - \langle v\rangle^{-q}] \, \mathrm{d} v'.\] If $|v|\leq 2$, then Lemma \ref{l:C2Linfty} implies \[\begin{split} Q_{\rm s}(f,\langle \cdot \rangle^{-q})(v) &\lesssim \left(\int_{\mathbb R^3} f(v+w) |w|^{\gamma+2s} \, \mathrm{d} w \right) \|\langle v\rangle^{-q} \|_{L^\infty(\mathbb R^3)}^{1-s} \|D^2\langle v\rangle^{-q}\|_{L^\infty(\mathbb R^3)}^s\\ &\lesssim \|f\|_{L^\infty_{q_0}(\mathbb R^3)} \lesssim \|f\|_{L^\infty_{q_0}(\mathbb R^3)} \langle v\rangle^{-q}, \end{split} \] since $v$ lives in a bounded domain and $q_0 > \gamma + 2s + 3$. When $|v|\geq 2$, we write the integral over $\mathbb R^3$ as an infinite sum by defining, for each integer $k$, the annulus $A_k(v) = B_{2^k|v|}(v) \setminus B_{2^{k-1}|v|}(v)$.
For the terms with $k\leq -1$, for $v' \in A_k(v) \subset B_{|v|/2}(v)$ we Taylor expand $g(v) := \langle v\rangle^{-q}$ to obtain \[ g(v') - g(v) = (v'-v) \cdot \nabla g(v) + \frac 1 2 (v'-v) \cdot (D^2g (z) (v'-v)), \quad \text{for some } z\in B_{|v|/2}(v).\] The symmetry of the kernel $K_f$ implies $\int_{A_k(v)} (v'-v) \cdot \nabla g(v) K_f(v,v') \, \mathrm{d} v' = 0$. We then have, using Lemma \ref{l:K-upper-bound} and that $q_0 > 3 + \gamma + 2s$, \[\begin{split} \Big|\sum_{k\leq -1} \int_{A_k(v)} &K_f(v,v') [\langle v'\rangle^{-q} - \langle v\rangle^{-q}] \, \mathrm{d} v'\Big| \lesssim \|D^2 g\|_{L^\infty(B_{|v|/2}(v))} \sum_{k\leq -1} \int_{A_k(v)}|v'-v|^2 K_f(v,v') \, \mathrm{d} v'\\ &\lesssim \langle v\rangle^{-q-2}\left(\int_{\mathbb R^3} f(v+w) |w|^{\gamma+2s}\, \mathrm{d} w\right) \sum_{k\leq -1} (2^{k-1} |v|)^{2-2s}\\ &\lesssim \langle v\rangle^{-q-2} \|f\|_{L^\infty_{q_0}(\mathbb R^3)} \langle v\rangle^{(\gamma+2s)_+} |v|^{2-2s} \lesssim \langle v\rangle^{-q}\|f\|_{L^\infty_{q_0}(\mathbb R^3)}. \end{split}\] For the terms with $k\geq 0$, we further divide $A_k(v)$ into $A_k(v) \cap B_{|v|/2}(0)$ and $A_k(v) \setminus B_{|v|/2}(0)$. (Note that $A_k(v) \cap B_{|v|/2}(0)$ is empty unless $k = 0$ or $k=1$.) In $A_k(v) \setminus B_{|v|/2}(0)$, we use $\langle v'\rangle^{-q} \lesssim \langle v\rangle^{-q}$ and Lemma \ref{l:K-upper-bound} to write \[\begin{split} \sum_{k\geq 0}\int_{A_k(v)\setminus B_{|v|/2}} K_f(v,v') [\langle v'\rangle^{-q} - \langle v\rangle^{-q}] \, \mathrm{d} v' &\lesssim \langle v\rangle^{-q} \left(\int_{\mathbb R^3} f(v+w) |w|^{\gamma+2s} \, \mathrm{d} w\right)\sum_{k\geq 0} (2^{k-1}|v|)^{-2s}\\ &\lesssim \|f\|_{L^\infty_{q_0}(\mathbb R^3)} \langle v\rangle^{-q} , \end{split} \] where we again used that $q_0> \gamma+2s+3$. It only remains to bound the integral over $v' \in B_{|v|/2}(0)$.
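Before treating this last piece, we note that the two geometric series used above can be checked numerically; the following standalone sketch (the variable names are ours) compares partial sums with their closed forms and verifies the exponent inequality $(\gamma+2s)_+ + 2 - 2s \leq 2$ that absorbs the powers of $|v|$:

```python
# Partial sums vs. closed forms for the two geometric series in the proof.
s, v = 0.4, 5.0

# sum_{k <= -1} (2^(k-1) |v|)^(2-2s): first term 2^(-2(2-2s)), ratio 2^(-(2-2s))
low = sum((2.0 ** (k - 1) * v) ** (2 - 2 * s) for k in range(-1, -200, -1))
low_closed = v ** (2 - 2 * s) * 2.0 ** (-2 * (2 - 2 * s)) / (1 - 2.0 ** (-(2 - 2 * s)))
assert abs(low - low_closed) / low_closed < 1e-12

# sum_{k >= 2} (2^(k-1) |v|)^(-2s): first term 2^(-2s), ratio 2^(-2s)
high = sum((2.0 ** (k - 1) * v) ** (-2 * s) for k in range(2, 400))
high_closed = v ** (-2 * s) * 2.0 ** (-2 * s) / (1 - 2.0 ** (-2 * s))
assert abs(high - high_closed) / high_closed < 1e-12

# Exponent check: <v>^(-q-2) <v>^((gamma+2s)_+) |v|^(2-2s) <= <v>^(-q)
# requires (gamma+2s)_+ + 2 - 2s <= 2, which holds precisely because gamma <= 0.
for gamma in (-3.0, -2.0, -1.0, -0.5):
    for s_ in (0.1, 0.5, 0.9):
        assert max(gamma + 2 * s_, 0.0) + 2 - 2 * s_ <= 2.0
```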
From \cite[Lemma 2.4]{henderson2021existence}, we have \[ |K_f(v,v')| \lesssim \frac 1{|v-v'|^{3+2s}} \|f\|_{L^\infty_{q_0}} \langle v\rangle^{\gamma+2s+3-q_0} \qquad \text{if } |v'| \leq |v|/2. \] This implies \[ \begin{split} \int_{B_{|v|/2}} K_f(v,v') &[\langle v'\rangle^{-q} - \langle v\rangle^{-q}] \, \mathrm{d} v' \lesssim \|f\|_{L^\infty_{q_0}} \int_{B_{|v|/2}}\frac{ \langle v\rangle^{\gamma+2s+3-q_0}}{|v-v'|^{3+2s}} \langle v'\rangle^{-q} \, \mathrm{d} v'\\ &\approx \|f\|_{L^\infty_{q_0}} \langle v\rangle^{-q_0+\gamma} \int_{B_{|v|/2}} \langle v'\rangle^{-q}\, \mathrm{d} v', \end{split} \] where we used the nonnegativity of $K_f$ to discard the term $-\langle v\rangle^{-q}$ inside the integral and we also used that $|v'-v|\geq |v|/2$. When $q >3$, the final integral is finite and we clearly find \begin{equation} \int_{B_{|v|/2}} K_f(v,v') [\langle v'\rangle^{-q} - \langle v\rangle^{-q}] \, \mathrm{d} v' \lesssim \|f\|_{L^\infty_{q_0}} \langle v\rangle^{-q_0 + \gamma} \leq \|f\|_{L^\infty_{q_0}}\langle v\rangle^{- q}, \end{equation} due to the condition $q \leq q_0 - \gamma$. When $q < 3$, the above becomes \begin{equation} \begin{split} \int_{B_{|v|/2}} K_f(v,v') [\langle v'\rangle^{-q} - \langle v\rangle^{-q}] \, \mathrm{d} v' &\lesssim \|f\|_{L^\infty_{q_0}} \langle v\rangle^{-q_0 + \gamma + 3 - q} \\&\leq \|f\|_{L^\infty_{q_0}} \langle v\rangle^{-(3+\gamma + 2s) + \gamma + 3 - q} = \|f\|_{L^\infty_{q_0}} \langle v\rangle^{- 2s - q} \end{split} \end{equation} since $q_0 > 3 + \gamma + 2s$. The case $q=3$ is the same up to an additional $\log \langle v\rangle$ factor, which is absorbed by the extra factor $\langle v\rangle^{-2s}$. Hence, in all cases, we find \begin{equation} \int_{B_{|v|/2}} K_f(v,v') [\langle v'\rangle^{-q} - \langle v\rangle^{-q}] \, \mathrm{d} v' \lesssim \|f\|_{L^\infty_{q_0}} \langle v\rangle^{- q}. \end{equation} This completes the proof.
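The exponent bookkeeping in the three cases above can also be verified numerically; the following standalone sketch (our notation, not part of the argument) samples admissible parameters and checks the two inequalities that were used:

```python
import random

# Exponent bookkeeping from the proof: for q0 > 3 + gamma + 2s with gamma < 0,
# and q in [q0, q0 - gamma]:
#   (i)  -q0 + gamma <= -q               (used in the case q > 3),
#   (ii) -q0 + gamma + 3 - q <= -2s - q  (used in the cases q <= 3).
random.seed(0)
for _ in range(10000):
    gamma = random.uniform(-3.0, -0.01)
    s = random.uniform(0.01, 0.99)
    q0 = 3.0 + gamma + 2.0 * s + random.uniform(0.01, 5.0)  # q0 > 3 + gamma + 2s
    q = random.uniform(q0, q0 - gamma)                       # q in [q0, q0 - gamma]
    assert -q0 + gamma <= -q + 1e-12
    assert -q0 + gamma + 3.0 - q <= -2.0 * s - q + 1e-12
```

Inequality (i) is exactly the condition $q \leq q_0-\gamma$, and (ii) is exactly $q_0 > 3+\gamma+2s$, so both hold on the whole admissible parameter range.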
\end{proof} \section{Change of variables and global regularity estimates}\label{s:global-reg} The regularity estimates and continuation criterion of Imbert-Silvestre \cite{imbert2020smooth} apply to the case $\gamma + 2s\in [0,2]$. In this section, we discuss the extension of these results to $\gamma + 2s < 0$, with suitably modified hypotheses. As mentioned above, our current study requires these regularity estimates to establish the smoothness of our solutions for positive times. More generally, the global estimates and the change of variables used to prove them are important tools in the study of the non-cutoff Boltzmann equation, so extending these tools to the case $\gamma+2s<0$ may be of independent interest. The key obstacle in passing from local estimates (Theorems \ref{t:degiorgi} and \ref{t:schauder}) to global estimates on $[0,T]\times\mathbb R^3\times\mathbb R^3$ is the degeneration of the upper and lower ellipticity bounds for the collision kernel $K_f(t,x,v,v')$ as $|v|\to \infty$. To overcome this, the authors of \cite{imbert2020smooth} developed a change of variables that ``straightens out'' the anisotropic ellipticity of the kernel and allows one to track precisely the behavior of the estimates for large $|v|$. Their change of variables is defined as follows: for a fixed reference point $z_0 = (t_0,x_0,v_0) \in [0,\infty)\times \mathbb R^6$ with $|v_0|>2$ and with $\gamma + 2s\geq 0$, define the linear transformation $T_0:\mathbb R^3\to \mathbb R^3$ by \[ T_0 (av_0 + w) := \frac a {|v_0|}v_0 + w \quad \text{ where } a\in \mathbb R, \, w\cdot v_0 = 0. \] Next, for $(t,x,v) \in Q_1(z_0)$ and when $\gamma + 2s \geq 0$, define \begin{equation}\label{e.cov+} \begin{split} (\bar t, \bar x, \bar v) = \mathcal T_0(t,x,v) &:= \left( t_0 + \frac t {|v_0|^{\gamma+2s}}, x_0 + \frac {T_0x + tv_0}{|v_0|^{\gamma+2s}}, v_0 + T_0v\right) \\ &= z_0 \circ \left( \frac t {|v_0|^{\gamma+2s}}, \frac {T_0 x} {|v_0|^{\gamma+2s}}, T_0v\right).
\end{split} \end{equation} When $|v_0| \leq 2$, let $\mathcal T_0(t,x,v) = z_0 \circ z$. The definition \eqref{e.cov+} applies when $\gamma + 2s \in [0,2]$, and does not generalize well to the case $\gamma + 2s< 0$. Indeed, the solution $f$ of~\eqref{e.boltzmann} is not defined at the point $(\bar t, \bar x, \bar v)$ if $\bar t < 0$, which occurs if, e.g., $\gamma + 2s < 0$ and $v_0$ is sufficiently large. Thus, for the case $\gamma+2s < 0$, we introduce a new definition: first, extend the definition of $T_0$ as follows: \begin{equation}\label{e.T0-def} T_0 (av_0 + w) := \frac 1 {|v_0|^\frac{(\gamma + 2s)_-}{2s}} \Big(\frac a {|v_0|}v_0 + w\Big) \quad \text{ where } a\in \mathbb R, \, w\cdot v_0 = 0. \end{equation} Then, let \begin{equation}\label{e.cov-} \begin{split} (\bar t, \bar x, \bar v) = \mathcal T_0(t,x,v) &:= \left( t_0 + t, x_0 + T_0x + tv_0, v_0 + T_0v\right) \\ &= z_0 \circ \left(t , T_0 x, T_0v\right). \end{split} \end{equation} As above, when $|v_0|\leq 2$, we take $\mathcal T_0z = z_0\circ z$. It is interesting to note that sending $s\to 1$ in \eqref{e.cov-} recovers the change of variables that was applied to the Landau equation in \cite{henderson2017smoothing} in the very soft potentials regime. For $r>0$ and $z_0 = (t_0,x_0,v_0)\in \mathbb R^7$, define \[ \mathcal E_r(z_0) := \mathcal T_0(Q_r), \quad E_r(v_0) := v_0 + T_0(B_r), \] and \[ \mathcal E_r^{t,x}(z_0) := \begin{cases} \Big\{\Big(t_0 +\frac{t}{|v_0|^{\gamma+2s}}, x_0 + \frac{1}{|v_0|^{\gamma+2s}}(T_0 x + tv_0)\Big) : t\in [-r^{2s},0], x\in B_{r^{1+2s}}\Big\}, &\gamma+2s\geq 0, \\ \big\{\big(t_0 +t, x_0 + T_0x + tv_0\big) :t\in [-r^{2s},0], x\in B_{r^{1+2s}}\big\}, &\gamma+2s<0, \end{cases} \] so that \begin{equation} \mathcal E_r(z_0) = \mathcal E_r^{t,x}(z_0) \times E_r(v_0).
\end{equation} Now, for a solution $f$ of the Boltzmann equation \eqref{e.boltzmann} and $z_0$ such that $t_0\geq 1$, let us define \[ \bar f(t,x,v) = f(\bar t, \bar x, \bar v).\] By direct computation, $\bar f$ satisfies \[ \partial_t \bar f + v\cdot \nabla_x \bar f = \mathcal L_{\bar K_f} \bar f + \bar h, \quad \text{ in } Q_1,\] where \begin{equation}\label{e.barKf} \bar K_f(t,x,v,v') := \begin{cases} \frac{1}{|v_0|^{1+\gamma+2s}} \, K_f(\bar t, \bar x, \bar v, v_0 + T_0v'), &\qquad \gamma+2s\geq 0, \\ |v_0|^{2+\frac{3\gamma}{2s}} \, K_f(\bar t, \bar x, \bar v, v_0 +T_0 v'), &\qquad\gamma+2s<0, \end{cases} \end{equation} and \[ \bar h(t,x,v) = c_b |v_0|^{-(\gamma+2s)_+} f(\bar t, \bar x, \bar v) [f\ast |\cdot|^\gamma](\bar t, \bar x, \bar v).\] For use in the convolution defining $K_f$, we also define \[ \bar v' := v_0 + T_0 v'. \] In order to apply the local regularity estimates (Theorems \ref{t:degiorgi} and \ref{t:schauder} above) to $\bar f$, one needs to check that the kernel $\bar K_f$ satisfies the hypotheses of these two theorems. In the case $\gamma+2s\in [0,2]$, this was done in \cite[Section~5]{imbert2020smooth}. As stated, the results in \cite[Section~5]{imbert2020smooth} require a uniform upper bound on the energy density $\int_{\mathbb R^3} |v|^2 f(t,x,v) \, \mathrm{d} v$, but it is clear from the proof that this can be replaced by a bound on the $2s$-moment $\int_{\mathbb R^3} |v|^{2s}f(t,x,v) \, \mathrm{d} v$. \begin{proposition}{\cite[Theorems 5.1 and 5.4]{imbert2020smooth}}\label{p:moderately-soft-kernel} Assume $\gamma + 2s \in [0,2]$. Let $z_0 = (t_0,x_0,v_0)$ be arbitrary, and let $\mathcal E_1(z_0)$ be defined as above.
If $f$ satisfies \[ \begin{aligned} &0<m_0\leq \int_{\mathbb R^3}f(t,x,v) \, \mathrm{d} v \leq M_0, &\qquad&\qquad& &\int_{\mathbb R^3}|v|^{2s}f(t,x,v) \, \mathrm{d} v \leq \tilde E_0, \\ & \int_{\mathbb R^3} f(t,x,v)\log f(t,x,v) \, \mathrm{d} v \leq H_0, &\quad&\text{and }\quad& &\sup_{v\in\mathbb R^3}\int_{\mathbb R^3}f(t,x,v+u)|u|^\gamma \, \mathrm{d} u \leq C_\gamma, \end{aligned} \] for all $(t,x) \in \mathcal E_1^{t,x}(z_0)$, then the kernel $\bar K_f(t,x,v,v')$ defined in \eqref{e.barKf} ($\gamma+2s\geq 0$ case) satisfies the conditions \eqref{e.coercivity1}--\eqref{e.cancellation2} of Theorem \ref{t:degiorgi}, with constants depending only on $\gamma$, $s$, $m_0$, $M_0$, $\tilde E_0$, $H_0$, and $C_\gamma$. In particular, all constants are independent of $v_0$. Furthermore, for each $z=(t,x,v) \in \mathcal E_1$, the kernel $\bar K_z(w) = \bar K_f(t,x,v,v+w)$ lies in the ellipticity class defined in Definition \ref{d:ellipticity}, with constants as in the previous paragraph. \end{proposition} For the case $\gamma +2s<0$, the result corresponding to Proposition \ref{p:moderately-soft-kernel} is contained in the following proposition, which we prove in Appendix \ref{s:cov-appendix}: \begin{proposition}\label{p:very-soft-kernel} Assume $\gamma+2s <0$. Let $z_0$ and $\mathcal E_1(z_0)$ be as in Proposition \ref{p:moderately-soft-kernel}, and assume that \[ \|f(t,x,\cdot)\|_{L^\infty_q(\mathbb R^3)} \leq L_0, \qquad \text{for all } (t,x) \in \mathcal E_1^{t,x}(z_0),\] for some $q>2s+3$, and \[ f(t,x,v) \geq \delta, \quad \text{for all } (t,x,v) \in \mathcal E_1^{t,x}(z_0)\times B_r(v_m),\] for some $v_m\in \mathbb R^3$, and $\delta, r>0$. Then the kernel $\bar K_f(t,x,v,v')$ defined in \eqref{e.barKf} ($\gamma+2s<0$ case) satisfies the conditions \eqref{e.coercivity1}--\eqref{e.cancellation2} of Theorem \ref{t:degiorgi}, with constants depending only on $L_0$, $\delta$, $r$, and $v_m$.
Furthermore, for each $z\in \mathcal E_1$, the kernel $\bar K_z(w) = \bar K_f(t,x,v,v+w)$ lies in the ellipticity class defined in Definition \ref{d:ellipticity}, with constants as in the previous paragraph. \end{proposition} We note that one should be able to obtain a more optimized version of \Cref{p:very-soft-kernel} by working in a weighted $L^p_v$-based space with $p < \infty$; however, this is not necessary for our work here, so we state and prove the simpler version above. It is also interesting to note that our definition \eqref{e.cov-} would not work well in the case $\gamma+2s\geq 0$, so the separate definitions seem to be unavoidable. Let us give a more concise, and less sharp, restatement of the previous two propositions that is sufficient for our purposes. When $q>2s+3$, the norm $\|f\|_{L^\infty_q(\mathbb R^3)}$ controls the mass and entropy densities, as well as the $2s$-moment and the constant $C_\gamma$, so we have the following: \begin{proposition}\label{p:concise-cov} Let $z_0$ and $\mathcal E_1(z_0)$ be as in Proposition \ref{p:moderately-soft-kernel}, and assume that \[ \|f(t,x,\cdot)\|_{L^\infty_q(\mathbb R^3)} \leq L_0, \qquad \text{for all } (t,x) \in \mathcal E_1^{t,x}(z_0),\] for some $q>2s+3$. Assume further that \[ f(t,x,v) \geq \delta, \qquad \text{for all } (t,x,v) \in \mathcal E_1^{t,x}(z_0)\times B_r(v_m), \] for some $v_m\in \mathbb R^3$, and $\delta, r>0$. Then the kernel $\bar K_f(t,x,v,v')$ defined in \eqref{e.barKf} satisfies all the conditions of Theorem \ref{t:degiorgi} and belongs to the ellipticity class defined in Definition \ref{d:ellipticity}, with constants depending only on $L_0$, $\delta$, $r$, and $v_m$. \end{proposition} The H\"older regularity of the kernel is also required for applying the Schauder estimate of Theorem \ref{t:schauder}. In the $\gamma+2s\geq 0$ case, this regularity is given by \cite[Lemma 5.20]{imbert2020smooth}.
For $\gamma+2s<0$, we prove it in Lemma \ref{l:cov-holder} below. We summarize these two results here: \begin{lemma}\label{l:concise-holder} For any $f:[0,T]\times \mathbb R^6 \to [0,\infty)$ such that $f\in C^\alpha_{\ell,q_1}([0,T]\times\mathbb R^6)$ with \[ q_1> \begin{cases} 5+\frac{\alpha}{1+2s} &\quad \text{ if }\gamma+2s\geq 0, \\ 3 + \frac{\alpha}{1+2s} &\quad \text{ if }\gamma+2s< 0, \end{cases} \] and for any $|v_0|>2$ and $r\in (0,1]$, let \[ \bar K_{f,z}(w) := \bar K_f(t,x,v,v+w) \qquad \text{ for } z = (t,x,v) \in Q_{2r}. \] Then we have \[ \int_{B_\rho} |\bar K_{f,z_1}(w) - \bar K_{f,z_2}(w)| |w|^2 \, \mathrm{d} w \leq \bar A_0 \rho^{2-2s} d_\ell(z_1,z_2)^{\alpha'}, \quad \rho > 0, z_1,z_2\in Q_{2r}, \] with $\alpha' = \alpha \frac {2s} {1+2s}$ and \begin{equation}\label{e.A_0} \bar A_0 \leq C |v_0|^{P(\gamma,s,\alpha)} \|f\|_{C^\alpha_{\ell, q_1}([0,T]\times\mathbb R^6)}, \end{equation} where \[P(\gamma,s,\alpha) = \begin{cases} \frac \alpha {1+2s}(1-2s-\gamma)_+, &\gamma+2s\geq 0,\\ 2+\frac \alpha {1+2s}, & \gamma+2s<0,\end{cases}\] and the constant $C$ depends on universal quantities, $\alpha$, and $q_1$, but is independent of $|v_0|$. \end{lemma} Next, we discuss global regularity estimates for solutions of the Boltzmann equation. The following is a global (in $v$) H\"older estimate that combines the local De Giorgi/Nash/Moser-type estimate of Theorem \ref{t:degiorgi} with the change of variables $\mathcal T_0$. It extends \cite[Corollary 7.4]{imbert2020smooth} to the case where $\gamma+2s$ may be negative. \begin{theorem}\label{t:global-degiorgi} Let $f$ be a solution of the Boltzmann equation \eqref{e.boltzmann} in $[0,T]\times \mathbb R^6$.
For some domain $\Omega \subseteq \mathbb R^3$ and $\tau \in (0,T)$, assume there are $\delta, r , R>0$ such that for each $x\in \Omega$, there exists $v_x\in B_R$ with \[ f(t,x,v) \geq \delta, \quad \text{in } [\tau/2,T]\times(B_r(x,v_x) \cap\Omega\times \mathbb R^3), \] and assume that $\|f\|_{L^\infty_{q_0}([0,T]\times\mathbb R^6)} \leq L_0$ for some $q_0> 2s+3$. Define \begin{equation}\label{e.barc} \bar c = \begin{cases} 1 &\qquad \text{ if }\gamma+2s\geq 0,\\ - \frac{\gamma}{2s} &\qquad \text{ if }\gamma+2s<0. \end{cases} \end{equation} Then, for any $q>3$ such that $f\in L^\infty_{q}([0,T]\times\mathbb R^6)$, and any $\Omega'$ compactly contained in $\Omega$, there exists $\alpha_0>0$ such that for any $\alpha \in (0,\alpha_0]$, one has $f\in C^\alpha_{\ell,q-\bar c\alpha}$, with \begin{equation}\label{e.dg-est} \|f\|_{C^\alpha_{\ell,q-\bar c\alpha}([\tau,T]\times\Omega'\times\mathbb R^3)} \leq C\|f\|_{L^\infty_{q}([0,T]\times \mathbb R^6)}. \end{equation} The constants $C$ and $\alpha_0$ depend on $L_0$, $q$, $\gamma$, $s$, $\delta$, $r$, $R$, $\tau$, $\Omega$, and $\Omega'$. If $\Omega = \mathbb R^3$, then we can replace $\Omega'$ with $\mathbb R^3$ in \eqref{e.dg-est}. \end{theorem} \begin{proof} Choose a point $z_0= (t_0,x_0,v_0)$ and $r\in (0,1)$. We prove an interior estimate in the cylinder $Q_r(z_0)$, which implies the statement of the theorem via a standard covering argument. We claim that for any $\bar z_1, \bar z_2 \in \mathcal E_{r/2}(z_0)$, \begin{equation}\label{e.holder-local} |f(\bar z_1) - f(\bar z_2)| \leq C (1+|v_0|)^{-q} d_\ell(z_1,z_2)^\alpha, \end{equation} with $C, \alpha>0$ as in the statement of the theorem. If $|v_0|\leq 2$, then \eqref{e.holder-local} follows from Theorem \ref{t:degiorgi}. The dependence of the constants $C$ and $\alpha$ in this case is made clear from the discussion in Remark \ref{r:ellipticity}.
Note that $2s+3\geq \gamma+2s+3$. If $|v_0|>2$, the proof of \eqref{e.holder-local} proceeds exactly as in \cite[Proposition 7.1]{imbert2020smooth}\footnote{The proof of Proposition 7.1 in \cite{imbert2020smooth} makes use of Lemma \ref{l:convolution} below, as well as \cite[Lemma 6.3]{imbert2020smooth}. Although \cite{imbert2020smooth} works under the global assumption $\gamma+2s\in [0,2]$, it is clear from the proof that \cite[Lemma 6.3]{imbert2020smooth} applies to both cases $\gamma+2s\geq 0$ and $\gamma+2s<0$, with the same statement.} and uses the change of variables \eqref{e.cov+} or \eqref{e.cov-}. The key point is that we can apply Theorem \ref{t:degiorgi} with the constants $\lambda$ and $\Lambda$ depending on $L_0$, $\delta$, $r$, and $v_m$, and independent of $v_0$, by Proposition \ref{p:concise-cov}. We omit the details of the proof of \eqref{e.holder-local} since they are the same as in \cite{imbert2020smooth}. Estimate \eqref{e.holder-local} is equivalent to \[ [\bar f]_{C^\alpha_\ell(Q_{r/2}(z_0))} \lesssim (1+|v_0|)^{-q}. \] Using Lemma \ref{l:holder-cov} or \cite[Lemma 5.19]{imbert2020smooth} to translate from $\bar f$ to $f$, we obtain \[ \|f\|_{C^\alpha_\ell(\mathcal E_{r/2}(z_0))} \lesssim (1+|v_0|)^{\bar c\alpha-q} + \|f\|_{L^\infty(\mathcal E_{r/2}(z_0))}, \] with $\bar c$ as in the statement of the theorem. In order to estimate the $C^{\alpha}_{\ell,q}$ seminorm of $f$ (see Definition \ref{d:C-alpha-q}), we need to work with local seminorms of $f$ on cylinders $Q_{r/2}(z_0)$ rather than on the twisted cylinders $\mathcal E_{r/2}(z_0)$. From the definitions \eqref{e.cov+} and \eqref{e.cov-} of the $\mathcal T_0$ change of variables, we see that $\mathcal E_{r/2}(z_0)\supset Q_{(r/2)|v_0|^{-\bar c}}(z_0)$.
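This inclusion reflects the fact that the smallest stretch factor of $T_0$ is $|v_0|^{-\bar c}$, so that $T_0(B_\rho)\supset B_{\rho|v_0|^{-\bar c}}$. The following standalone sketch (a model computation in our notation, for $|v_0|>2$) checks this for both signs of $\gamma+2s$:

```python
def t0_factors(v0_norm, gamma, s):
    """Stretch factors of T_0 along v_0 and orthogonal to v_0, following the
    two definitions of T_0 in the text (model computation, |v_0| > 2)."""
    if gamma + 2 * s >= 0:
        # T_0(a v_0 + w) = (a/|v_0|) v_0 + w
        along, orth = 1.0 / v0_norm, 1.0
    else:
        # extra prefactor |v_0|^(-(gamma+2s)_-/(2s)) = |v_0|^((gamma+2s)/(2s))
        pre = v0_norm ** ((gamma + 2 * s) / (2 * s))
        along, orth = pre / v0_norm, pre
    return along, orth

def cbar(gamma, s):
    return 1.0 if gamma + 2 * s >= 0 else -gamma / (2 * s)

for gamma, s in ((-0.5, 0.5), (-2.0, 0.25), (-2.5, 0.25)):
    for v0 in (2.5, 10.0, 100.0):
        along, orth = t0_factors(v0, gamma, s)
        # smallest stretch factor of T_0 equals |v_0|^(-cbar), which gives
        # T_0(B_rho) ⊃ B_{rho |v_0|^(-cbar)} and hence the cylinder inclusion
        ref = v0 ** (-cbar(gamma, s))
        assert abs(min(along, orth) / ref - 1.0) < 1e-9
```

In the $\gamma+2s<0$ case the factor along $v_0$ is $|v_0|^{\gamma/(2s)}=|v_0|^{-\bar c}$, while the orthogonal factor $|v_0|^{(\gamma+2s)/(2s)}$ is larger, which is why only $\bar c$ enters the inclusion.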
Using \cite[Lemma 3.5]{imbert2020smooth}, we extend the upper bound to the larger set $Q_{r/2}(z_0)$: \[ \begin{split} [f]_{C^\alpha_\ell(Q_{r/2}(z_0))} &\lesssim \left((1+|v_0|)^{\bar c\alpha - q} + (|v_0|^{-\bar c})^{-\alpha}\|f\|_{L^\infty(Q_{r/2}(z_0))}\right)\\ & \lesssim (1+|v_0|)^{\bar c\alpha - q} \|f\|_{L^\infty_q([0,T]\times\mathbb R^6)}. \end{split} \] This estimate implies the desired upper bound on $[f]_{C^{\alpha}_{\ell,q-\bar c \alpha}([0,T]\times\mathbb R^6)}$, which concludes the proof. \end{proof} Next, we have a global Schauder estimate that improves $C^\alpha_\ell$-regularity as in Theorem \ref{t:global-degiorgi} to $C^{2s+\alpha'}_\ell$-regularity. In order to apply the estimate to derivatives of $f$, this estimate is stated for the linear Boltzmann equation \begin{equation}\label{e.linear-boltzmann} \partial_t g + v\cdot \nabla_x g = Q_{\rm s}(f,g) + h. \end{equation} Choosing $g=f$ and $h = Q_{\rm ns}(f,f)$ would recover the original Boltzmann equation \eqref{e.boltzmann}. Unlike in the previous theorem, we work out the explicit dependence on $\tau$ of the estimate in Theorem \ref{t:global-schauder}. The dependence on $\tau$ is needed in Proposition \ref{prop:holder_propagation}, which is one ingredient of our proof of uniqueness. \begin{theorem}\label{t:global-schauder} Let $f:[0,T]\times\mathbb R^6\to [0,\infty)$, and assume that for some domain $\Omega\subseteq \mathbb R^3$, there exist $\delta, r, R>0$ such that, if $x \in \Omega$, there is $v_x\in B_R$ with \[ f(t,x,v) \geq \delta, \quad \text{ in } [0,T]\times \left(B_r(x,v_x) \cap (\Omega \times \mathbb R^3)\right). \] Assume that $\|f\|_{L^\infty_{q_0}([0,T]\times\mathbb R^6)} \leq L_0$ for some $q_0>2s+3$. Furthermore, assume $f\in C^\alpha_{\ell,q_1}([0,T]\times\mathbb R^6)$ for some $\alpha\in (0,\min(1,2s))$ and $q_1$ as in Lemma \ref{l:concise-holder}.
Let $g$ be a solution of \eqref{e.linear-boltzmann} with $h\in C^{\alpha'}_{\ell,q_1}([0,T]\times\mathbb R^6)$, where $\alpha' = \frac {2s}{1+2s}\alpha$ and $2s+\alpha'\not\in\{1,2\}$. Then, for any $\tau\in (0,T)$ and $\Omega'$ compactly contained in $\Omega$, one has the estimate \begin{equation}\label{e.2salpha} \begin{split} &\|g\|_{C^{2s+\alpha'}_{\ell,q_1-\kappa}([\tau,T]\times\Omega'\times\mathbb R^3)} \\ &\quad\leq C\left(1 + \tau^{-1 + \frac{\alpha-\alpha'}{2s}}\right) \|f\|_{C^\alpha_{\ell, q_1}([0,T]\times\mathbb R^6)}^\frac{\alpha -\alpha'+ 2s}{\alpha'} \left( \|g\|_{C^\alpha_{\ell,q_1}([0,T]\times\mathbb R^6)} + \|h\|_{C^{\alpha'}_{\ell, q_1}([0,T]\times \mathbb R^6)}\right), \end{split} \end{equation} where the moment loss for higher order regularity of $g$ is \[\begin{split} \kappa& := \begin{cases} (1-2s-\gamma)_+ \left(1 + \frac \alpha {1+2s} - \frac \alpha {2s}\right) + 2s+\alpha', &\gamma+2s\geq 0,\\ (2 + \frac \alpha {1+2s})(\frac{1+2s}\alpha- \frac 1 {2s}) - \gamma \left(1+\frac \alpha{1+2s}\right), &\gamma+2s < 0,\end{cases} \end{split}\] and the constant $C$ depends on $\gamma$, $s$, $\alpha$, $q_0$, $q_1$, $L_0$, $\delta$, $r$, $R$, $\Omega$, and $\Omega'$. If $\Omega = \mathbb R^3$, then we can replace $\Omega'$ with $\mathbb R^3$ in \eqref{e.2salpha}. \end{theorem} \begin{proof} We follow the proof of \cite[Proposition 7.5]{imbert2020smooth}, suitably modifying the argument in order to allow the case $\gamma + 2s < 0$ and to determine the explicit dependence on $\tau$. In this proof, to keep the notation clean, any norm or seminorm given without a domain, such as $\|f\|_{C^\alpha_{\ell,q}}$, is understood to be over $[0,T]\times\mathbb R^6$. Let $z_0 = (t_0,x_0,v_0)\in (\tau,T]\times\Omega'\times\mathbb R^3$ be fixed, and define \[ \rho = \min\big(1, (t_0/2)^\frac{1}{2s}\big).
\] Since we do not track the explicit dependence of this estimate on $\Omega'$ and $\Omega$, we assume without loss of generality that $\mathcal E_\rho(z_0)\subset [0,T]\times\Omega\times\mathbb R^3$. If $|v_0|>2$, then let $\varphi\in C_c^\infty(\mathbb R^3)$ be a cutoff function supported in $B_{|v_0|/8}$ and identically 1 in $B_{|v_0|/9}$. Define $\bar g := [(1-\varphi)g]\circ \mathcal T_0$. The function $\bar g$ is defined on $Q_\rho$ and satisfies \[ \partial_t \bar g + v\cdot \nabla_x \bar g = \int_{\mathbb R^3} (\bar g' - \bar g) \bar K_f(t,x,v,v') \, \mathrm{d} v' + \bar h + \bar h_2,\] where $\bar g' = \bar g(t,x,v')$, and $\bar K_f$ is defined in \eqref{e.barKf}. The source terms are defined by \[ \begin{split} \bar h &= |v_0|^{-(\gamma+2s)_+} h \circ \mathcal T_0,\\ \bar h_2 &= |v_0|^{-(\gamma+2s)_+} \int_{\mathbb R^3} \varphi(v') g(\bar t, \bar x, v') K_f(\bar t, \bar x, \bar v, v') \, \mathrm{d} v'. \end{split} \] By Proposition \ref{p:concise-cov}, the kernel $\bar K_f$ lies in the ellipticity class of Definition \ref{d:ellipticity}, with constants depending on $L_0$, $\delta$, $r$, and $R$. Applying the local Schauder estimate Theorem \ref{t:schauder} to $\bar g$, we obtain \begin{equation}\label{e.local-schauder} \begin{split} [\bar g]_{C^{2s+\alpha'}_\ell(Q_{\rho/2})} &\lesssim \max\left( \rho^{-2s-\alpha' + \alpha}, \bar A_0^{(2s+\alpha' - \alpha)/\alpha'}\right) [\bar g]_{C^\alpha_{\ell}((-\rho^{2s},0]\times B_{\rho^{1+2s}} \times\mathbb R^3)}\\ &\quad + [\bar h + \bar h_2]_{C_\ell^{\alpha'}(Q_\rho)} + \max(\rho^{-\alpha'},\bar A_0)\|\bar h + \bar h_2\|_{L^\infty(Q_\rho)}, \end{split} \end{equation} where we recall the definition of $\bar A_0$ from~\eqref{e.A_0}. Let us estimate the terms on the right-hand side one by one.
To estimate $\bar A_0$, we use Lemma \ref{l:concise-holder}: \[ \begin{split} \bar A_0 &\lesssim |v_0|^{P(\gamma,s,\alpha)} \|f\|_{C^\alpha_{\ell,q_1}}, \end{split} \] with $q_1$ and $P(\gamma, s, \alpha)$ as in Lemma \ref{l:concise-holder}. Next, Lemma \ref{l:holder-cov} below (if $\gamma+2s<0$) or \cite[Lemma 5.19]{imbert2020smooth} (if $\gamma+2s\geq 0$) implies \[ [\bar g]_{C^\alpha_\ell((-\rho^{2s},0]\times B_{\rho^{1+2s}}\times\mathbb R^3)} \leq \|(1-\varphi) g\|_{C^\alpha_\ell} \lesssim |v_0|^{-q_1}\|g\|_{C^\alpha_{\ell,q_1}}, \] with $q_1$ as above. The last inequality follows because $1-\varphi$ is supported in $\{|v|\geq |v_0|/9\}$. For the terms involving $\bar h$ and $\bar h_2$, note that \[\begin{split} \|\bar h\|_{L^\infty(Q_\rho)} &= |v_0|^{-(\gamma+2s)_+} \|h\|_{L^\infty(\mathcal E_\rho(z_0))} \leq |v_0|^{-(\gamma+2s)_+ - q_1}\|h\|_{L^\infty_{q_1}},\\ [\bar h]_{C^{\alpha'}_\ell(Q_\rho)} &\leq |v_0|^{-(\gamma+2s)_+} [h]_{C^{\alpha'}_\ell(\mathcal E_\rho(z_0))} \leq |v_0|^{-(\gamma+2s)_+-q_1} [h]_{C^{\alpha'}_{\ell,q_1}}, \end{split}\] where the second line used Lemma \ref{l:holder-cov} or \cite[Lemma 5.19]{imbert2020smooth}. For $\bar h_2$, we use \cite[Lemma 6.3 with $q=q_1$ and Corollary 6.7 with $q = q_1 - \alpha/(1+2s)$]{imbert2020smooth} to write \[ \begin{split} \|\bar h_2\|_{L^\infty(Q_\rho)} &\lesssim |v_0|^{-q_1 +\gamma - (\gamma+2s)_+}\|f\|_{L^\infty_{q_1}}\|g\|_{L^\infty_{q_1}} ,\\ [\bar h_2]_{C^{\alpha'}_\ell(Q_\rho)} &\lesssim |v_0|^{-q_1 + \gamma -(\gamma+2s)_+ + 2\alpha/(1+2s)} \|f\|_{C^\alpha_{\ell,q_1}} \|g\|_{C^\alpha_{\ell,q_1}}.
\end{split} \] Combining all of these inequalities with \eqref{e.local-schauder}, we have \[ \begin{split} & [\bar g]_{C^{2s+\alpha'}_\ell(Q_{\rho/2})}\\ &\lesssim \max\Big( t_0^\frac{-2s+ (\alpha -\alpha')}{2s}, |v_0|^{P(\gamma,s,\alpha)\frac{2s- \alpha + \alpha'}{\alpha'}}\|f\|_{C^\alpha_{\ell,q_1}}^\frac{2s-\alpha +\alpha'}{\alpha'}\Big) |v_0|^{-q_1} \|g\|_{C^\alpha_{\ell,q_1}} \\&\quad +|v_0|^{-(\gamma+2s)_+ - q_1}[h]_{C^{\alpha'}_{\ell,q_1}} + |v_0|^{-q_1+\gamma-(\gamma+2s)_+ + \frac{2\alpha}{1+2s}}\|f\|_{C^\alpha_{\ell,q_1}} \|g\|_{C^\alpha_{\ell,q_1}}\\ &\quad + \max\Big(t_0^{-\frac{\alpha'}{2s}},|v_0|^{P(\gamma,s,\alpha)}\|f\|_{C^\alpha_{\ell,q_1}}\Big) \Big(|v_0|^{-(\gamma+2s)_+-q_1}\| h\|_{L^\infty} + |v_0|^{-q_1+\gamma-(\gamma+2s)_+}\|g\|_{L^\infty_{q_1}} \|f\|_{L^\infty_{q_1}}\Big). \end{split} \] Keeping only the largest powers of $t_0^{-1}$, $|v_0|$, and $\|f\|_{C^\alpha_{\ell,q_1}}$, we have (recall that $\alpha<2s$) \[ [\bar g ]_{C^{2s+\alpha'}_\ell(Q_{\rho/2})} \lesssim \mathcal A, \] where we have introduced the shorthand \begin{equation} \mathcal A := \Big(1 + t_0^{-1+\frac{\alpha-\alpha'}{2s}}\Big) |v_0|^{-q_1 + P(\gamma,s,\alpha)\big(1+\frac{2s-\alpha}{\alpha'}\big)} \|f\|_{C^\alpha_{\ell,q_1}}^{1+\frac{2s-\alpha}{\alpha'}} \left( \|g\|_{C^\alpha_{\ell,q_1}} + \|h\|_{C^{\alpha'}_{\ell,q_1}} \right). \end{equation} Now, we apply Lemma \ref{l:holder-cov} or \cite[Lemma 5.19]{imbert2020smooth} to translate from $\bar g$ back to $g$: \[ [g]_{C^{2s+\alpha'}_\ell(\mathcal E_{\rho/2}(z_0))} \lesssim |v_0|^{\bar c (2s+\alpha')} \|\bar g\|_{C^{2s+\alpha'}_\ell(Q_{\rho/2}(z_0))} \lesssim |v_0|^{\bar c(2s+\alpha')}\left( \mathcal A + \|g\|_{L^\infty(Q_{\rho/2}(z_0))}\right), \] with $\bar c$ as in \eqref{e.barc}. As in the proof of Theorem \ref{t:global-degiorgi}, we need to pass from twisted cylinders $\mathcal E_\rho(z_0)$ to kinetic cylinders $Q_\rho(z_0)$.
Since $\mathcal E_\rho(z_0)\supset Q_{\rho|v_0|^{-\bar c}}(z_0)$, we use \cite[Lemma 3.5]{imbert2020smooth} to obtain \[ [g]_{C^{2s+\alpha'}_\ell(Q_\rho(z_0))} \lesssim |v_0|^{\bar c(2s+\alpha')} \mathcal A + |v_0|^{\bar c(2s+\alpha')} \|g\|_{L^\infty(Q_\rho(z_0))}. \] We finally have \begin{equation}\label{e.schauder-cylinder} [g]_{C^{2s+\alpha'}_\ell(Q_\rho(z_0))} \lesssim \Big(1 + t_0^{-1+\frac{\alpha-\alpha'}{2s}}\Big) (1+|v_0|)^{-q_1 + \kappa} \|f\|_{C^\alpha_{\ell,q_1}}^{1+\frac{2s-\alpha}{\alpha'}} \left( \|g\|_{C^\alpha_{\ell,q_1}} + \|h\|_{C^{\alpha'}_{\ell,q_1}} \right), \end{equation} with \[ \kappa := P(\gamma,s,\alpha)\Big(1+\frac{2s-\alpha}{\alpha'}\Big)+\bar c(2s+\alpha'),\] as in the statement of the theorem. We have derived \eqref{e.schauder-cylinder} under the assumption $|v_0|>2$. If $|v_0|\leq 2$, then we may apply Theorem \ref{t:schauder} directly to $g$, without using the change of variables. Proceeding as above, we obtain \eqref{e.schauder-cylinder} in this case as well, using $1\lesssim ( 1+ |v_0|)^{-q_1+\kappa}$. This completes the proof, since $t_0\gtrsim \tau$. \end{proof} When we apply the linear estimate of Theorem \ref{t:global-schauder} to solutions of the Boltzmann equation, we obtain the following time-weighted Schauder estimate: \begin{proposition}\label{p:nonlin-schauder} Let $f:[0,T]\times\mathbb R^6\to [0,\infty)$ be a classical solution to \eqref{e.boltzmann} satisfying the assumptions for $f$ in \Cref{t:global-schauder}.
For any $t_0\in (0,T)$, $\alpha>0$, and $q_1$ as in Lemma \ref{l:concise-holder}, the estimate \[ \|f\|_{C^{2s+\alpha'}_{\ell,q_1}([t_0/2,t_0]\times\mathbb R^6)} \leq C t_0^{-1+(\alpha-\alpha')/(2s)} \|f\|_{C^\alpha_{\ell,q_1+\kappa+\alpha/(1+2s)+\gamma}([0,t_0]\times\mathbb R^6)}^{1+(\alpha+2s)/\alpha'} \] holds whenever the right-hand side is finite, where $\alpha' = \alpha\frac{2s}{1+2s}$, $C>0$ is a constant depending on $\gamma$, $s$, $\alpha$, $q_0$, $q_1$, $L_0$, $\delta$, $r$, $R$, $\Omega$, and $\Omega'$, and $\kappa$ is the constant from Theorem \ref{t:global-schauder}. \end{proposition} \begin{proof} Theorem \ref{t:global-schauder} with $g=f$ and $h=Q_{\rm ns}(f,f)$ implies \begin{equation*} \begin{split} &\|f\|_{C^{2s+\alpha'}_{\ell,q_1}([t_0/2,t_0]\times\mathbb R^6)}\\ & \qquad\leq C t_0^{-1+(\alpha-\alpha')/(2s)} \|f\|_{C^\alpha_{\ell,q_1+\kappa}([0,t_0]\times\mathbb R^6)}^{(\alpha-\alpha'+2s)/\alpha' }\left(\|f\|_{C^\alpha_{\ell,q_1+\kappa}([0,t_0]\times\mathbb R^6)} + \|Q_{\rm ns}(f,f)\|_{C^{\alpha'}_{\ell,q_1+\kappa}([0,t_0]\times\mathbb R^6)}\right)\\ &\qquad\leq C t_0^{-1+(\alpha-\alpha')/(2s)} \|f\|_{C^\alpha_{\ell,q_1+\kappa+\alpha/(1+2s)+\gamma}([0,t_0]\times\mathbb R^6)}^{1+(\alpha+2s)/\alpha'} , \end{split} \end{equation*} using Lemma \ref{l:Q2holder} to bound $Q_{\rm ns}(f,f)$ and the identity $(\alpha-\alpha'+2s)/\alpha' + 2 = 1+(\alpha+2s)/\alpha'$ to collect the exponents. \end{proof} Next, we discuss higher regularity estimates for the solution $f$. The following proposition is in some sense a restatement of the main theorem of \cite{imbert2020smooth}, using hypotheses that are convenient for our purposes ($L^\infty_q$ bounds and pointwise lower bounds for $f$, rather than the mass, energy, and entropy density bounds used in \cite{imbert2020smooth}).
This result extends the higher regularity estimates to the case $\gamma+2s<0$, although we should point out that the hypotheses here are stronger than in \cite{imbert2020smooth}, and it is not currently known how to prove global regularity estimates {\it depending only on mass, energy, and entropy bounds} in the case $\gamma+2s<0$. \begin{proposition}[Higher regularity]\label{p:higher-reg} Let $f$ be a classical solution to \eqref{e.boltzmann} on $[0,T]\times\mathbb R^6$. Assume that for some $\Omega \subseteq \mathbb R^3$, there exist $\delta, r, R>0$ such that for any $x\in \Omega$, there is a $v_x\in B_R$ such that $f(t,x,v) \geq \delta$ whenever $(x,v)\in B_r(x,v_x)\cap(\Omega\times\mathbb R^3)$. Fix any $m \geq 0$ and multi-index $k$ in the $(t,x,v)$ variables. Then there exists $q(k,m)$ such that, if $f \in L^\infty_{q(k,m)}([0,T]\times \mathbb R^6)$, then \begin{equation} \label{e.higher-reg-est} \|D^k f\|_{L^\infty_m([\tau,T]\times\Omega'\times\mathbb R^3)} \leq C. \end{equation} The constant $q(k,m)$ depends on $k$, $m$, $\delta$, $r$, $R$, $\tau$, $\Omega'$, and $\Omega$. The constant $C$ depends on the same quantities as well as $\|f\|_{L^\infty_{q(k,m)}}$. Furthermore, $q(k,m)$ and $C$ are nonincreasing functions of $\tau$. If $\Omega = \mathbb R^3$, then we can replace $\Omega'$ with $\mathbb R^3$ in \eqref{e.higher-reg-est}. \end{proposition} If we were working on a periodic spatial domain and only considering the case $\gamma+2s\geq 0$, we could remove the dependence on higher $L^\infty_q$-norms of $f$ by using decay estimates as in \cite{imbert2018decay}. However, we need to apply these estimates on the whole space $\mathbb R^3_x$ and also in the case $\gamma+2s<0$, so we state the result in the form above. We intend to apply Proposition \ref{p:higher-reg} in situations where higher $L^\infty_q$-norms of $f$ are bounded in terms of the initial data and weaker norms of $f$.
The proof of Proposition \ref{p:higher-reg} is the same as the proof of \cite[Theorem 1.2]{imbert2020smooth} and consists of the following ingredients: \begin{itemize} \item The change of variables and global estimates described in this section. \item The bootstrapping procedure explained in Section 9 of \cite{imbert2020smooth}, which consists of applying Theorems \ref{t:global-degiorgi} and \ref{t:global-schauder} to partial derivatives and increments of $f$. This makes use of certain facts about increments that are proven in Section 8 of \cite{imbert2020smooth}. The analysis in Sections 8 and 9 of \cite{imbert2020smooth} does not use the sign of $\gamma+2s$ in any way. At each step of the bootstrapping, a certain (non-explicit) number of velocity moments is used up. The result of \cite{imbert2020smooth} proves estimates for all partial derivatives of $f$, but one can stop the process after a finite number of iterations, which gives rise to the condition $f \in L^\infty_{q(k,m)}([0,T]\times\mathbb R^6)$ in our statement. \item Bilinear estimates for the operators $Q_{\rm s}$ and $Q_{\rm ns}$ in H\"older spaces, which are also needed in the course of the bootstrapping. These lemmas are also essentially independent of the sign of $\gamma+2s$, but we record them in Appendix \ref{s:lemmas} for the sake of completeness: see Lemmas \ref{l:Q1holder} and \ref{l:Q2holder}. \end{itemize} Finally, we have a continuation criterion for smooth solutions, which will be used in our proof of Theorem \ref{t:existence}. It is intended mainly as an internal result: it works with solutions defined on a torus of general side length, and it is certainly not sharp. \begin{proposition}[Continuation criterion]\label{p:continuation} Let $\mathbb T_M^3$ be the periodic torus of side length $M>0$. Let $f$ be a classical solution to \eqref{e.boltzmann} in $[0,T)\times \mathbb T_M^3\times \mathbb R^3$ with $T>0$.
Suppose that the initial data $f_{\rm in}$ is smooth and rapidly decaying in $v$, and that there are $\delta, r>0$, $x_m \in \mathbb T_M^3$, and $v_m\in \mathbb R^3$ such that \[ f_{\rm in}(x,v)\geq \delta, \quad (x,v)\in B_r(x_m, v_m). \] Then there exists $q_{\rm cont}$ such that, if \begin{equation}\label{e.c62701} \|f\|_{L^\infty_{q_{\rm cont}}([0,T)\times \mathbb T^3_M\times\mathbb R^3)} < \infty, \end{equation} then $f$ can be extended to a classical solution on $[0,T+\varepsilon]\times \mathbb T_M^3\times \mathbb R^3$ for some $\varepsilon>0$. The decay rate $q_{\rm cont}$ depends on $\gamma$, $s$, $T$, $\delta$, $r$, and $|v_m|$. The constant $\varepsilon$ depends on the same quantities as well as $\|f\|_{L^\infty_{q_{\rm cont}}([0,T)\times \mathbb T^3_M\times\mathbb R^3)}$. Furthermore, $q_{\rm cont}$ is a nonincreasing function of $T$, and $\varepsilon$ is a nondecreasing function of $T$. \end{proposition} \begin{proof} First, by scaling, we may reduce to the case $M=1$. Indeed, defining $f^M(t,x,v) := M^{\gamma+3}f(t,Mx,Mv)$, it is clear that: (i) $f^M$ solves~\eqref{e.boltzmann} on $[0,T)\times\mathbb T^3\times \mathbb R^3$, (ii) $f^M$ exists on $[0,T+\varepsilon]\times\mathbb T^3\times\mathbb R^3$ if and only if $f$ exists on $[0,T+\varepsilon]\times\mathbb T_M^3\times\mathbb R^3$, and (iii) inequality \eqref{e.c62701} holds if and only if $\|f^M\|_{L^\infty_{q_{\rm cont}}([0,T)\times\mathbb T^3\times\mathbb R^3)} < \infty$. Fix $k= 6$, and let $n$ and $p$ be the corresponding constants from \Cref{p:prior-existence}. We can apply \Cref{p:higher-reg} for all multi-indices $j$ of order at most $10$ with the choice $m =n$ to find $q(10,n)$ such that \begin{equation}\label{e.c62901} \sup_{|j| \leq 10} \|D^j f\|_{L^\infty_{n}([T/2,T]\times \mathbb T^3\times \mathbb R^3)} \leq C_0. \end{equation} Note that the lower bound condition required to apply \Cref{p:higher-reg} follows from \Cref{t:lower-bounds} and the compactness of $\mathbb T^3$.
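For completeness, the scaling in step (i) can be verified directly; the computation below assumes the standard non-cutoff normalization $B(|v-v_*|,\cos\theta) = |v-v_*|^\gamma b(\cos\theta)$ for the collision kernel. With $f^M(t,x,v)=M^{\gamma+3}f(t,Mx,Mv)$, the transport part obeys \[ \partial_t f^M + v\cdot\nabla_x f^M = M^{\gamma+3}\left[\partial_t f + (Mv)\cdot\nabla_x f\right](t,Mx,Mv), \] while the homogeneity of the kernel and the substitution $v_*\mapsto v_*/M$ (so that $\mathrm{d} v_* \mapsto M^{-3}\,\mathrm{d} v_*$ and $|v-v_*|^\gamma \mapsto M^{-\gamma}|Mv-v_*|^\gamma$) give \[ Q(f^M,f^M)(t,x,v) = M^{2(\gamma+3)}\, M^{-\gamma-3}\, Q(f,f)(t,Mx,Mv) = M^{\gamma+3}\, Q(f,f)(t,Mx,Mv), \] so the two sides of \eqref{e.boltzmann} pick up the same factor $M^{\gamma+3}$, and $f^M$ solves the equation whenever $f$ does.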
Define $q_{\rm cont} = \max\{p, q(10,n)\}$, and note that $q_{\rm cont}$ depends on the quantities claimed in the statement, by Proposition \ref{p:higher-reg}. The constant $C_0$ in \eqref{e.c62901} depends on $T$, $n$, $p$, $\delta$, $r$, $|v_m|$, $\|f\|_{L^\infty_{q_{\rm cont}}([0,T)\times \mathbb T^3\times\mathbb R^3)}$, and $\|f\|_{C^\alpha([0,T]\times\mathbb T^3\times\mathbb R^3)}$, again as a result of \Cref{p:higher-reg}. We claim that if $\|f\|_{L^\infty_{q_{\rm cont}}([0,T)\times\mathbb T^3\times\mathbb R^3)}< \infty$, then $f$ can be extended past time $T$. Indeed, for any $t \in [T/2,T)$, the estimate \eqref{e.c62901} plus Sobolev embedding in $\mathbb T^3\times\mathbb R^3$ provides a uniform bound for \begin{equation} f(t) \in H^6_{n} \cap L^\infty_{q_{\rm cont}} (\mathbb T^3 \times \mathbb R^3), \end{equation} depending only on the constants above. Since $q_{\rm cont}\geq p$, and by our choice of $k$, $n$, and $p$, we may apply \Cref{p:prior-existence} at any $t\in [T/2,T)$ to obtain a solution $\tilde f$ in $C^0([t, t+\varepsilon'), H^{k}_{n} \cap L^\infty_{q_{\rm cont}}(\mathbb T^3\times \mathbb R^3))$ with $\varepsilon'$ depending on the constant $C_0$ in \eqref{e.c62901}. From Proposition \ref{p:higher-reg}, $C_0$ is a nonincreasing function of $\tau = T/2$, which implies $\varepsilon'$ is nondecreasing in $T$. Since $k= 6$, Sobolev embedding implies $\tilde f$ is a classical solution (in particular, it is twice differentiable in $x$ and $v$, and, via the equation \eqref{e.boltzmann}, once differentiable in $t$). The proof is then finished by choosing $\varepsilon = \varepsilon'/2$ and $t \in (T-\varepsilon'/2,T)$ and concatenating $f$ and $\tilde f$. \end{proof} \section{Existence of solutions}\label{s:existence} This section is devoted to the proofs of Theorems \ref{t:existence} and \ref{t:weak-solutions}. \subsection{Decay estimates} We begin with novel decay estimates that are needed for our construction.
These estimates are stated for any suitable solution $f$ of the Boltzmann equation \eqref{e.boltzmann} on a periodic spatial domain $\mathbb T_M^3$, i.e. the torus of side length $M>0$. In Section \ref{s:construct} below, we apply these estimates to our approximating sequence. Throughout this subsection, we assume the initial data corresponding to $f$ satisfies \begin{equation}\label{e.initial-data-good} f_{\rm in} \in C^\infty(\mathbb T^3_M\times \mathbb R^3) \cap L^\infty_{q'}(\mathbb T_M^3\times\mathbb R^3) \quad \text{for all } q'\geq 0. \end{equation} However, we do not assume {\it a priori} that $f$ satisfies polynomial decay of all orders for positive times. First, using a barrier argument, we show that polynomial upper bounds of order larger than $\gamma+2s+3$ are propagated forward in time: \begin{lemma}\label{l:simple-bound} Let $q_0 > \gamma+2s+3$ and $q\in [q_0,q_0-\gamma]$ be fixed. Let $f$ be a solution of \eqref{e.boltzmann} on $[0,T]\times\mathbb T_M^3\times\mathbb R^3$ for some $M>0$, and assume $f\in L^\infty_{q'}([0,T]\times\mathbb T_M^3\times\mathbb R^3)$ for some $q'> q$ and that $f_{\rm in}$ satisfies \eqref{e.initial-data-good}. Then \[ \|f(t)\|_{L^\infty_{q}(\mathbb T^3_{M}\times \mathbb R^3)} \leq \|f_{\rm in}\|_{L^\infty_{q}(\mathbb T_M^3\times \mathbb R^3)} \exp(C_0 \|f\|_{L^\infty_{q_0}([0,T]\times\mathbb T_M^3\times\mathbb R^3)} t), \quad 0\leq t\leq T,\] for a constant $C_0>0$ depending only on universal quantities, $q$, and $q_0$. In particular, $C_0$ is independent of $f_{\rm in}$ and of the norm of $f$ in $L^\infty_{q'}$. \end{lemma} \begin{proof} Define the barrier function $g(t,x,v) =N e^{\beta t} \langle v\rangle^{-q}$, with $N,\beta>0$ to be chosen later. By taking $N> \|f_{\rm in}\|_{L^\infty_{q}(\mathbb T^3_{M}\times \mathbb R^3)}$, we ensure $f(0,x,v) < g(0,x,v)$ for all $x$ and $v$.
We would like to show \begin{equation}\label{e.claim} f(t,x,v) < g(t,x,v), \quad (t,x,v) \in [0,T]\times \mathbb T_M^3\times \mathbb R^3. \end{equation} If this bound fails, then because $f$ decays at a rate strictly faster than $\langle v\rangle^{-q}$, and $f$ is periodic in the $x$ variable, there must be a first time $t_{\rm cr}\in (0,T]$ and location $(x_{\rm cr},v_{\rm cr})$ at which $f$ and $g$ cross. At the first crossing time, we have the following equalities and inequalities: \[ \begin{split} \partial_t f(t_{\rm cr},x_{\rm cr},v_{\rm cr}) &\geq \partial_t g(t_{\rm cr},x_{\rm cr},v_{\rm cr}),\\ \nabla_x f(t_{\rm cr},x_{\rm cr},v_{\rm cr}) &= 0,\\ f(t_{\rm cr},x,v) &\leq g(t_{\rm cr},x,v), \quad x\in \mathbb T_M^3, \ v\in \mathbb R^3. \end{split} \] Combining this with the Boltzmann equation \eqref{e.boltzmann}, we have \begin{equation}\label{e.crossing} \partial_t g(t_{\rm cr},x_{\rm cr},v_{\rm cr}) \leq Q(f,f) (t_{\rm cr},x_{\rm cr},v_{\rm cr}) \leq Q(f,g)(t_{\rm cr},x_{\rm cr},v_{\rm cr}). \end{equation} To justify the last inequality, write $Q(f,f) = Q_{\rm s}(f,f) + Q_{\rm ns}(f,f)$ and use the nonnegativity of the kernel $K_{f}$ to obtain \[ \begin{split} Q_{\rm s} (f, f)(t_{\rm cr},x_{\rm cr},v_{\rm cr}) &= \int_{\mathbb R^3} K_{f}(v,v') [f(t_{\rm cr},x_{\rm cr},v') - f(t_{\rm cr},x_{\rm cr},v_{\rm cr})] \, \mathrm{d} v'\\ &\leq \int_{\mathbb R^3} K_{f}(v,v') [g(t_{\rm cr},x_{\rm cr},v') - g(t_{\rm cr},x_{\rm cr},v_{\rm cr})] \, \mathrm{d} v', \end{split} \] since $f(t_{\rm cr},x_{\rm cr},v) \leq g(t_{\rm cr},x_{\rm cr},v)$ and $f(t_{\rm cr},x_{\rm cr},v_{\rm cr}) = g(t_{\rm cr},x_{\rm cr},v_{\rm cr})$. Next, note that $Q_{\rm ns}(f,f)(v) = f(t_{\rm cr},x_{\rm cr},v) [f\ast |\cdot|^\gamma](t_{\rm cr},x_{\rm cr},v) \leq g(t_{\rm cr},x_{\rm cr},v)[f\ast |\cdot|^\gamma](t_{\rm cr},x_{\rm cr},v)$. This establishes the last inequality in \eqref{e.crossing}.
From Lemma \ref{l:Q-polynomial} we have, for some $C_0>0$ as in the statement of the lemma, \[ Q(f,g)(t_{\rm cr},x_{\rm cr},v_{\rm cr}) = N e^{\beta t_{\rm cr}} Q(f,\langle \cdot \rangle^{-q})(t_{\rm cr},x_{\rm cr},v_{\rm cr}) \leq C_0 N e^{\beta t_{\rm cr}} \|f(t_{\rm cr})\|_{L^\infty_{q_0}(\mathbb T_M^3 \times \mathbb R^3)} \langle v_{\rm cr}\rangle^{-q}. \] Together with \eqref{e.crossing} and $\partial_t g = N\beta e^{\beta t} \langle v\rangle^{-q}$, this implies \[ N \beta e^{\beta t_{\rm cr}} \langle v_{\rm cr} \rangle^{-q} \leq C_0 N e^{\beta t_{\rm cr}} \|f\|_{L^\infty_{q_0}([0,T]\times\mathbb T_M^3\times\mathbb R^3)} \langle v_{\rm cr}\rangle^{-q},\] which is a contradiction if we choose $\beta = 2C_0\|f\|_{L^\infty_{q_0}([0,T]\times\mathbb T_M^3\times\mathbb R^3)}$. Hence,~\eqref{e.claim} must hold. After choosing \[ N = \|f_{\rm in}\|_{L^\infty_{q}(\mathbb T_M^3\times\mathbb R^3)} + \nu \] in \eqref{e.claim} and sending $\nu \to 0$, the proof is complete. \end{proof} Consider the $q= q_0$ case of Lemma \ref{l:simple-bound}. Intuitively, we would like to iterate this estimate to obtain a uniform \emph{a priori} bound on $\|f(t)\|_{L^\infty_{q_0}}$ up to some positive time. To do this precisely, we use the following technical lemma, which encodes the result of such an iteration.
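For intuition, the mechanism of the lemma below can be previewed by a continuity argument (a sketch in the lemma's notation, not the proof from \cite{HST2020landau}): suppose $H$ is continuous and increasing with $H(t)\leq Ae^{BtH(t)}$. Taking $t=0$ gives $H(0)\leq A < eA$, so if $H$ ever reaches the value $eA$, there is a first time $t_*$ at which $H(t_*) = eA$, and the hypothesis forces \[ eA = H(t_*) \leq A e^{B t_* H(t_*)} = A e^{eAB\, t_*}, \qquad\text{i.e.}\qquad 1 \leq eAB\, t_*. \] Hence $t_* \geq \frac{1}{eAB}$, so $H< eA$ on $[0,\min(T,\tfrac 1{eAB}))$, with the value at the endpoint controlled by continuity.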
\begin{lemma}[{\cite[Lemma 2.4]{HST2020landau}}]\label{l:annoying} If $H:[0,T]\to \mathbb R_+$ is a continuous increasing function, and $H(t) \leq A e^{Bt H(t)}$ for all $t\in [0,T]$ and some constants $A, B > 0$, then \[ H(t) \leq e A \quad \text{ for } \quad 0\leq t\leq \min\left( T, \frac 1 {eAB}\right).\] \end{lemma} Next, we prove the key result of this subsection, which allows us to bound higher $L^\infty_q$ norms of $f(t)$ in terms of lower decay norms and the initial data: \begin{lemma}\label{p:upper-bounds} Let $q_{\rm base}>\gamma+2s+3$ be fixed, and let $f\in L^\infty_{q_{\rm base}}([0,T]\times\mathbb T_M^3\times\mathbb R^3)$ be a solution to \eqref{e.boltzmann}, with $f_{\rm in}$ satisfying \eqref{e.initial-data-good}. There exists $C>0$ depending on universal quantities and $q_{\rm base}$, such that for $T_f$ given by \[ T_f = \frac C {\|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times \mathbb R^3)}}, \] the following hold: \begin{enumerate} \item[(a)] The solution $f$ satisfies \[ \|f(t)\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times\mathbb R^3)} \leq C\|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times \mathbb R^3)}, \quad 0\leq t\leq \min(T_f,T). \] \item[(b)] If $q> q_{\rm base}$ and $f\in L^\infty_{q}([0,T]\times\mathbb T_M^3\times\mathbb R^3)$, there holds \begin{equation}\label{e.epsilon-bound} \|f(t)\|_{L^\infty_q(\mathbb T_M^3\times\mathbb R^3)} \leq \|f_{\rm in}\|_{L^\infty_q(\mathbb T_M^3\times\mathbb R^3)} \exp\left[M_q\left(t,\|f_{\rm in}\|_{L^\infty_q(\mathbb T_M^3\times\mathbb R^3)}\right) \right], \quad 0\leq t\leq \min(T_f,T), \end{equation} for some increasing function $M_q:\mathbb R_+\times \mathbb R_+\to \mathbb R_+$ depending on universal constants, $q$, $q_{\rm base}$, and $\|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times\mathbb R^3)}$.
\end{enumerate} \end{lemma} \begin{proof} To prove (a), for any $t\in (0,\min(T_f,T)]$, we define \[ H(t):= \|f\|_{L^\infty_{q_{\rm base}}([0,t]\times \mathbb T_M^3\times\mathbb R^3)}. \] Applying Lemma \ref{l:simple-bound} with $T = t$, we obtain $H(t) \leq \|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times\mathbb R^3)} \exp( C_0 H(t) t)$. Lemma \ref{l:annoying} with $A = \|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times\mathbb R^3)}$ and $B = C_0$ implies \begin{equation}\label{e.propagation} \|f(t)\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times\mathbb R^3)} \leq C \|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times\mathbb R^3)}, \quad 0\leq t\leq \min(T_f,T), \end{equation} with $T_f$ as in the statement of the lemma. This establishes (a). For (b), given $q>q_{\rm base}$, let $N\in \mathbb Z_{\geq 0}$ and $\theta\in [0,1)$ be such that $q = q_{\rm base} + (N+\theta)|\gamma|$. Applying Lemma \ref{l:simple-bound}, followed by \eqref{e.propagation}, we have \[ \begin{split} \|f(t)\|_{L^\infty_{q_{\rm base}+|\gamma|}(\mathbb T_M^3\times\mathbb R^3)} &\leq \|f_{\rm in}\|_{L^\infty_{q_{\rm base}+|\gamma|}(\mathbb T_M^3\times\mathbb R^3)}\exp(C \|f\|_{L^\infty_{q_{\rm base}}([0,T]\times\mathbb T_M^3\times\mathbb R^3)} t), \end{split} \] for $0\leq t\leq \min(T_f,T)$, where $C$ is the constant from \eqref{e.propagation}. We have shown that, up to time $\min(T_f,T)$, the bound \eqref{e.epsilon-bound} holds in the case $q= q_{\rm base} + |\gamma|$, with $M_{q_{\rm base}+|\gamma|}(t,z) = C t \|f\|_{L^\infty_{q_{\rm base}}}$.
We now iterate this argument $N$ times to obtain \[ \begin{split} &\|f(t)\|_{L^\infty_{q_{\rm base}+N|\gamma|}(\mathbb T_M^3\times\mathbb R^3)}\\ &~~\leq \|f_{\rm in}\|_{L^\infty_{q_{\rm base}+N|\gamma|}(\mathbb T_M^3\times\mathbb R^3)}\exp(C \|f\|_{L^\infty_{q_{\rm base}+(N-1)|\gamma|}([0,T]\times\mathbb T_M^3\times\mathbb R^3)} t)\\ &~~\leq \|f_{\rm in}\|_{L^\infty_{q_{\rm base}+N|\gamma|}(\mathbb T_M^3\times\mathbb R^3)}\exp\left(C \|f_{\rm in}\|_{L^\infty_{q_{\rm base}+(N-1)|\gamma|}} \exp\left[ M_{q_{\rm base}+(N-1)|\gamma|}\left(t,\|f_{\rm in}\|_{L^\infty_{q_{\rm base}+(N-1)|\gamma|}}\right)\right] t\right) \\&~~ \leq \|f_{\rm in}\|_{L^\infty_{q_{\rm base}+N|\gamma|}(\mathbb T_M^3\times\mathbb R^3)}\exp\left(C \|f_{\rm in}\|_{L^\infty_{q_{\rm base}+N|\gamma|}} \exp\left[ M_{q_{\rm base}+(N-1)|\gamma|}\left(t,\|f_{\rm in}\|_{L^\infty_{q_{\rm base}+N|\gamma|}}\right)\right] t\right) \\&~~ = \|f_{\rm in}\|_{L^\infty_{q_{\rm base}+N|\gamma|}(\mathbb T_M^3\times\mathbb R^3)}\exp\left(M_{q_{\rm base} + N |\gamma|}(t, \|f_{\rm in}\|_{L^\infty_{q_{\rm base}+N|\gamma|}})\right), \end{split} \] where we use the recursive definition $M_{q_{\rm base}+N|\gamma|}(t,z) = C t z \exp(M_{q_{\rm base}+(N-1)|\gamma|}(t,z))$. Finally, for any small $\nu>0$, we apply Lemma \ref{l:simple-bound} again, with exponents $q-\nu =q_{\rm base} + (N+\theta)|\gamma| - \nu$ and $q_{\rm base}+N|\gamma|$, and argue similarly to obtain \begin{equation}\label{e.propagation-q} \|f(t)\|_{L^\infty_{q-\nu}(\mathbb T_M^3\times\mathbb R^3)} \leq \|f_{\rm in}\|_{L^\infty_q(\mathbb T_M^3\times\mathbb R^3)} \exp\left[ M_q\left(t,\|f_{\rm in}\|_{L^\infty_q(\mathbb T_M^3\times\mathbb R^3)}\right)\right], \quad 0\leq t\leq \min(T_f,T), \end{equation} with $M_q(t,z) = C_0 z t \exp(M_{q_{\rm base}+N|\gamma|}(t,z))$.
The functions $M_q(t,z)$ depend on $\|f\|_{L^\infty_{q_{\rm base}}([0,t]\times \mathbb T_{M}^3\times \mathbb R^3)}$, but this quantity is bounded in terms of $\|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb T_M^3\times\mathbb R^3)}$, by (a). Since $q-\nu< q$, our applications of Lemma \ref{l:simple-bound} are justified. The right-hand side is independent of $\nu$, so we can send $\nu\to 0$ and conclude (b). \end{proof} \subsection{Construction of approximate solutions}\label{s:construct} Consider initial data $f_{\rm in}\in L^\infty_{q_{\rm base}}(\mathbb T_M^3\times\mathbb R^3)$ with $q_{\rm base}>\gamma+2s+3$. Our first step is to approximate $f_{\rm in}$ by smoothing, cutting off large values of $x$ and $v$, extending by $x$-periodicity, and adding a region of uniform positivity. This will give rise to a sequence of solutions $f^\varepsilon$ that solve \eqref{e.boltzmann} with initial data $f_{\rm in}$ in the limit as $\varepsilon\to 0$. The same construction of $f^\varepsilon$ will be used to build both classical solutions and weak solutions. In more detail, define the following functions. First, let $\psi:\mathbb R^6\to \mathbb R_+$ be a standard smooth mollifier supported in $B_1(0)$ with $\int_{B_1(0)} \psi \, \mathrm{d} x = 1$, and for $\varepsilon>0$ denote \begin{equation} \psi_\varepsilon(x,v) = \varepsilon^{-6}\psi(x/\varepsilon,v/\varepsilon). \end{equation} Next, for any $r\in (0,1)$, let $\zeta_r:\mathbb R^3\to \mathbb R_+$ be a smooth cutoff such that $\sup_{\mathbb R^3} |\nabla \zeta_r|\lesssim 1$ and \begin{equation} \zeta_r(\xi) = \begin{cases} 1 \quad \text{ for } \xi \in B_{1/r},\\ 0 \quad \text{ for } \xi \in B_{1/r+1}^c. \end{cases} \end{equation} Then, for any $\varepsilon>0$, define \begin{equation}\label{e.f0eps} f_{\rm in}^\varepsilon(x,v) := \zeta_\varepsilon(x)\zeta_\varepsilon(v) [f_{\rm in}\ast \psi_\varepsilon](x,v)+ \varepsilon \psi(x,v).
\end{equation} Letting $\mathbb T_{M_\varepsilon}^3$ be the three-dimensional torus of side length $M_\varepsilon := 2(1/\varepsilon+2)$ centered at $(0,0,0)$, we extend $f_{\rm in}^\varepsilon$ by $x$-periodicity to obtain a smooth function on $\mathbb T_{M_\varepsilon}^3\times\mathbb R^3$, or equivalently, a smooth function on $\mathbb R^6$ that is $M_\varepsilon$-periodic in the $x$ variable. The following construction of an approximate solution $f^\varepsilon$ is more intricate than one might expect. First, the time of existence provided by Proposition \ref{p:prior-existence} depends on the $H^k_n \cap L^\infty_p$ space one chooses, so we need an extra argument to obtain a smooth, rapidly decaying solution on a uniform time interval, even though $f_{\rm in}^\varepsilon$ is smooth and rapidly decaying. Second, the exponent $q_{\rm cont}$ in the continuation criterion of Proposition \ref{p:continuation} may degenerate to $+\infty$ as $T\searrow 0$, so for small times we must perform the continuation ``by hand'' using Proposition \ref{p:prior-existence} and our decay estimates above. For each $\varepsilon>0$, by Proposition \ref{p:prior-existence}, there is a time $T_\varepsilon>0$ and a solution $f^\varepsilon(t,x,v) \geq 0$ defined on $[0,T_\varepsilon]\times \mathbb T_{M_\varepsilon}^3\times \mathbb R^3$, continuous up to $t=0$, with $f^\varepsilon(0,x,v) =f_{\rm in}^\varepsilon(x,v)$. We may assume $T_\varepsilon \leq T_f$ from Lemma \ref{p:upper-bounds}, since our goal is to show that $f^\varepsilon$ exists up to time $T_f$. Noting that $f_{\rm in}^\varepsilon$ is smooth in $(x,v)$ and rapidly decaying in $v$, we choose some large (fixed) values of $k$, $n$, and $p$ when we apply Proposition \ref{p:prior-existence}. We then have \begin{equation}\label{e.epsilon-smooth} f^\varepsilon(t) \in H^k_{n}\cap L^\infty_{p}(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3) \quad\text{for } t\in [0,T_\varepsilon].
\end{equation} Choosing $T_\varepsilon$ smaller if necessary, we also have \begin{equation}\label{e.twice} \|f^\varepsilon(t)\|_{H^k_n(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)} \leq 2\left(\|f_{\rm in}^\varepsilon\|_{H^k_n(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)} + \|f_{\rm in}^\varepsilon\|_{L^\infty_p(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)}\right), \quad t\in [0,T_\varepsilon]. \end{equation} Now, let $q>p$ be arbitrary. Applying Proposition \ref{p:prior-existence} a second time in the space $H^k_n \cap L^\infty_{q}$, we see there is some $T_{\varepsilon,q}\in (0,T_\varepsilon]$ such that $\|f^\varepsilon(t)\|_{L^\infty_{q}(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)} < \infty$ when $t\in [0,T_{\varepsilon,q}]$. Lemma \ref{p:upper-bounds}(b) then implies that estimate \eqref{e.epsilon-bound} holds up to time $T_{\varepsilon,q}$. Since the right-hand side of \eqref{e.epsilon-bound} is bounded uniformly in $t\in [0,T_{\varepsilon}]$, we can combine this with \eqref{e.twice} to bound the norm of $f^\varepsilon(t)$ in the space $H^k_n \cap L^\infty_q$ by a constant depending only on $T_\varepsilon$ and the initial data $f_{\rm in}^\varepsilon$. We apply Proposition \ref{p:prior-existence} again to conclude that $f^\varepsilon$ lies in $L^\infty_q([0,T_{\varepsilon,q} + T_{\varepsilon,q}']\times\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)$ for some $T_{\varepsilon,q}'$ depending only on the upper bound in \eqref{e.epsilon-bound}. Lemma \ref{p:upper-bounds} then implies that the estimate \eqref{e.epsilon-bound} can be extended to the time interval $[0,T_{\varepsilon,q} + T_{\varepsilon,q}']$. Combining this with \eqref{e.twice}, the process can be iterated finitely many times until $f^\varepsilon\in L^\infty_q([0,T_\varepsilon]\times\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)$, with estimate \eqref{e.epsilon-bound} valid up to time $T_\varepsilon$.
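The continuation mechanism used here (and again below) rests on a simple counting fact: each application of the short-time existence result advances the solution by a time that is bounded below (because the a priori estimate keeps the relevant norm below a fixed constant), so any fixed target time is reached in finitely many steps. A minimal toy sketch, in which the hypothetical stand-ins \texttt{local\_time} and \texttt{norm\_bound} play the roles of the existence time from Proposition \ref{p:prior-existence} and the uniform bound from \eqref{e.epsilon-bound}:

```python
# Toy model (not the PDE argument itself) of the iteration scheme: the step
# length depends only on the current norm bound, which stays fixed, so the
# loop below terminates after finitely many steps.
def continue_solution(T_f, norm_bound, local_time):
    """Advance from t=0 to t=T_f in steps of guaranteed length local_time(norm_bound)."""
    t, steps = 0.0, 0
    while t < T_f:
        t = min(T_f, t + local_time(norm_bound))
        steps += 1
    return t, steps

# With a step of exactly 1/8 (dyadic, so the float arithmetic is exact),
# eight steps suffice to reach T_f = 1.
t, steps = continue_solution(T_f=1.0, norm_bound=2.0, local_time=lambda n: 1.0 / (4 * n))
assert (t, steps) == (1.0, 8)
```

The point of the sketch is only that the number of iterations is finite and controlled by $T_f$ divided by the guaranteed step length.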
Since $q>p$ was arbitrary, we conclude that all $L^\infty_q(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)$ norms of $f^\varepsilon(t)$ are finite, with the estimate \eqref{e.epsilon-bound} valid, up to time $T_\varepsilon$. Note that $f_{\rm in}^\varepsilon$ satisfies the lower bound condition \eqref{e.mass-core} with $\delta =\varepsilon$, $r =1$, and $(x_m, v_m) = (0,0)$, so we can apply the continuation criterion of Proposition \ref{p:continuation} to extend the solution $f^\varepsilon$ to the time interval $[0,T_\varepsilon + \eta]$, with $\eta$ depending on $T_\varepsilon$, the initial data, and the $L^\infty_{q_{\rm cont}}$-norm of $f^\varepsilon$ on $[0,T_\varepsilon]\times\mathbb T_{M_\varepsilon}^3\times\mathbb R^3$, which is controlled by \eqref{e.epsilon-bound}. Since Proposition \ref{p:continuation} implies $f^\varepsilon(t) \in L^\infty_{q_{\rm cont}}$ for $t< T_\varepsilon + \eta$, Lemma \ref{p:upper-bounds} tells us that the bound \eqref{e.epsilon-bound} holds up to time $T_\varepsilon+\eta$, and we can repeat this argument finitely many times until $f^\varepsilon$ exists up to time $T_f$ and lies in $L^\infty_{q_{\rm cont}}([0,T_f]\times\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)$. To extend higher $L^\infty_q$ estimates up to time $T_f$, we combine Proposition \ref{p:prior-existence} and Lemma \ref{p:upper-bounds} in the same way as above; we omit the details of this step. We now have a solution $f^\varepsilon \in L^\infty_q([0,T_f]\times\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)$ for every $q>0$. By Proposition \ref{p:higher-reg}, $f^\varepsilon$ is also smooth, with regularity estimates depending on $\varepsilon$. Let us summarize the results of the last two subsections: \begin{proposition}\label{p:upper-bounds2} Let $q_{\rm base}> \gamma+2s+3$, and let $T_f = C \|f_{\rm in}\|_{L^\infty_{q_{\rm base}}}^{-1}$ be as in Lemma \ref{p:upper-bounds}.
For any $\varepsilon>0$, with $f_{\rm in}^\varepsilon$ defined as in \eqref{e.f0eps}, there exists a smooth, rapidly decaying solution $f^\varepsilon\geq 0$ to \eqref{e.boltzmann} on $[0,T_f]\times\mathbb T_{M_\varepsilon}^3\times\mathbb R^3$ with initial data $f_{\rm in}^\varepsilon$. These $f^\varepsilon$ satisfy the estimates \begin{equation}\label{e.base-est} \|f^\varepsilon(t)\|_{L^\infty_{q_{\rm base}}(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)} \leq C\|f_{\rm in}^\varepsilon\|_{L^\infty_{q_{\rm base}}(\mathbb T_{M_\varepsilon}^3\times \mathbb R^3)}, \quad 0\leq t\leq T_f, \end{equation} and, for all $q>0$, \begin{equation}\label{e.higher-decay-est} \|f^\varepsilon(t)\|_{L^\infty_q(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)} \leq \|f_{\rm in}^\varepsilon\|_{L^\infty_q(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)} \exp\left[M_q\left(t,\|f_{\rm in}^\varepsilon\|_{L^\infty_q(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)}\right) \right], \quad 0\leq t\leq T_f, \end{equation} with $M_q$ as in Lemma \ref{p:upper-bounds}. \end{proposition} Recall that $\|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb R^6)}< \infty$ by assumption. For $\varepsilon>0$ sufficiently small, we clearly have $\|f_{\rm in}^\varepsilon\|_{L^\infty_{q_{\rm base} }(\mathbb T_{M_\varepsilon}^3\times\mathbb R^3)} \leq 2 \|f_{\rm in}\|_{L^\infty_{q_{\rm base}}(\mathbb R^6)}$. Therefore, the right-hand side of \eqref{e.base-est} is bounded independently of $\varepsilon$. On the other hand, the right-hand side of \eqref{e.higher-decay-est} is bounded uniformly in $\varepsilon$ if and only if $f_{\rm in} \in L^\infty_q(\mathbb R^6)$. Even if this upper bound is not uniform in $\varepsilon$, the quantities involved are still finite up to time $T_f$ (which is independent of $\varepsilon$). In the next subsection, we derive regularity estimates that are uniform in $\varepsilon$. \subsection{Regularity of $f^\varepsilon$ for positive times} In the next two subsections, we prove the existence of classical solutions (Theorem \ref{t:existence}).
Therefore, we work under the assumption that $f_{\rm in}$ satisfies the quantitative lower bound \eqref{e.mass-core}, and that $f_{\rm in}\in L^\infty_{q_0}(\mathbb R^6)$ with $q_0>2s+3$. We no longer need to use the compactness of our spatial domain. From now on, we consider $f^\varepsilon$ to be defined on $[0,T_f]\times\mathbb R^6$, periodic in $x$ with period $M_\varepsilon$. Recall that $M_\varepsilon\to \infty$ as $\varepsilon\to 0$. The estimates in this subsection apply on domains that are bounded in the $x$ variable, so for any fixed such domain, the $x$-periodicity is irrelevant for $\varepsilon$ small enough. For brevity, we implicitly assume throughout this subsection that $\varepsilon$ is small enough for any statement we make about a bounded $x$ domain. In order to apply regularity estimates in an $\varepsilon$-independent way, we first need suitable lower bounds for the solutions $f^\varepsilon$ for positive times. For $\varepsilon$ sufficiently small depending on $\delta$, $|x_m|$, and $|v_m|$, the hypothesis \eqref{e.mass-core} implies \begin{equation}\label{e.c62704} f_{\rm in}^\varepsilon(x,v) \geq \frac \delta 2, \quad (x,v) \in B_r(x_m,v_m). \end{equation} Applying Theorem \ref{t:lower-bounds} to the smooth solutions $f^\varepsilon$, we have \begin{equation}\label{e.uniform-lower-b} f^\varepsilon(t,x,v) \geq \mu(t,x) e^{-\eta(t,x) |v|^2}, \end{equation} with $\mu$ and $\eta$ uniformly positive and bounded on any compact subset of $(0,T_f]\times\mathbb R^3$, and depending only on $\delta$, $r$, $t$, $|x-x_m|$, $v_m$, and $\|f^\varepsilon\|_{L^\infty_{q_0}([0,T_f]\times\mathbb R^6)}$. Because of Proposition \ref{p:upper-bounds2}, the norm $\|f^\varepsilon\|_{L^\infty_{q_0}([0,T_f]\times\mathbb R^6)}$ is bounded above by a constant times $\|f_{\rm in}\|_{L^\infty_{q_0}(\mathbb R^6)}$, and therefore $\mu$ and $\eta$ can be chosen independently of $\varepsilon$. Now we apply regularity estimates.
Let $z_0 = (t_0,x_0,v_0)\in (0,T_f]\times\mathbb R^6$, and let $r_0 \leq \min(1,(t_0/2)^{1/(2s)})$. With $\bar c$ as in Theorem \ref{t:global-degiorgi}, choose $\alpha>0$ small enough that $q_1 := q_0 - \bar c \alpha > 2s+3$. Since $q_0>2s+3>3$ and \eqref{e.uniform-lower-b} holds, Theorem \ref{t:global-degiorgi} gives \begin{equation}\label{e.C-alpha-ell} \|f^\varepsilon\|_{C^\alpha_{\ell, q_1}(Q_{r_0}^{t,x}(z_0)\times\mathbb R^3)} \leq C_{t_0} \|f^\varepsilon\|_{L^\infty_{q_0}([0,T]\times\mathbb R^6)}, \end{equation} where $Q_{r_0}^{t,x}(z_0) := (t_0-r_0^{2s},t_0]\times \{x: |x-x_0 - (t-t_0)v_0| < r_0^{1+2s}\}$. The next step is to apply Schauder estimates. Since we only assume decay of order $q_0>2s+3$ for $f_{\rm in}$, we cannot afford to use the global (in $v$) Schauder estimate of Theorem \ref{t:global-schauder}, so we proceed with the local Schauder estimate of Theorem \ref{t:schauder} instead. For any $z=(t,x,v)\in Q_{r_0}(z_0)$, we define $K_{f^\varepsilon,z}(w) = K_{f^\varepsilon}(t,x,v,v+w)$. We need to check that the kernel satisfies the H\"older hypothesis of Theorem \ref{t:schauder}: \begin{lemma}\label{l:f-eps-kernel} With $q_1$, $\alpha$, $z_0$, $r_0$, and $f^\varepsilon$ as above, but under the additional condition that $(q_1 - 3 - \gamma-2s)(1+2s) > \alpha$, the kernel $K_{f^\varepsilon,z}(w)$ satisfies the H\"older continuity condition \eqref{e.kernel-holder}, with constant $A_0$ depending on universal constants, $q_1$, $\alpha$, $t_0$, $x_0$, $r_0$, $\delta$, $|v_m|$, $|x_0-x_m|$, and $\|f_{\rm in}\|_{L^\infty_{q_0}}$. It has no dependence on $\varepsilon$ as long as $\varepsilon$ is small enough that~\eqref{e.c62704} holds.
\end{lemma} \begin{proof} For $z_1,z_2 \in Q_{r_0}(z_0)$, we have \[ \begin{split} K_{f^\varepsilon,z_1}(w) - K_{f^\varepsilon,z_2}(w) &= |w|^{-3-2s} \int_{\{h\cdot w = 0\}} |h|^{\gamma+2s+1} [f^\varepsilon(z_1\circ(0,0,h)) - f^\varepsilon(z_2\circ(0,0,h))] \tilde b(w,h) \, \mathrm{d} h\\ & = K_{g,z_1}(w), \end{split} \] where $g(z) = f^\varepsilon(z) - f^\varepsilon((z_2\circ z_1^{-1}) \circ z)$. With Lemma \ref{l:K-upper-bound-2}, this implies, for $\rho>0$, \begin{equation}\label{e.Brho} \begin{split} \int_{B_\rho}&|K_{f^\varepsilon,z_1}(w) - K_{f^\varepsilon,z_2}(w)||w|^2 \, \mathrm{d} w = \int_{B_\rho} |K_{g,z_1}(w)| |w|^2\, \mathrm{d} w\\ &\leq \left(\int_{\mathbb R^3}|w|^{\gamma+2s} |g(t_1,x_1,v_1+w) |\, \mathrm{d} w \right) \rho^{2-2s}\\ &= \left(\int_{\mathbb R^3}|w|^{\gamma+2s} |f^\varepsilon(z_1\circ(0,0,w)) - f^\varepsilon(z_2\circ(0,0,w))| \, \mathrm{d} w \right) \rho^{2-2s}. \end{split} \end{equation} Next, we estimate $|f^\varepsilon(z_1\circ(0,0,w)) - f^\varepsilon(z_2\circ(0,0,w))|$. Note that for $w\in \mathbb R^3$, one has $z_i\circ(0,0,w) \in [t_0/4, T]\times\mathbb R^6$ for $i=1,2$. We claim that, for any $w\in \mathbb R^3$, \[ \begin{split} &|f^\varepsilon(z_1\circ(0,0,w)) - f^\varepsilon(z_2\circ(0,0,w))| \\&\qquad\lesssim \|f^\varepsilon\|_{C^\alpha_{\ell,q_1}(Q_{r_0}^{t,x}(z_0)\times\mathbb R^3)} \langle v_1 + w\rangle^{-q_1} d_\ell(z_1\circ(0,0,w),z_2\circ(0,0,w))^\alpha, \end{split} \] where $q_1 = q_0 -\bar c\alpha$ as above. Indeed, this formula follows by using the seminorm $[f^\varepsilon]_{C^\alpha_{\ell,q_1}(Q_{r_0}^{t,x}(z_0)\times\mathbb R^3)}$ when $d_\ell(z_1\circ(0,0,w),z_2\circ(0,0,w))< 1$ and the norm $\|f^\varepsilon\|_{L^\infty_{q_1}(Q_{r_0}^{t,x}(z_0)\times\mathbb R^3)}$ when $d_\ell(z_1\circ(0,0,w),z_2\circ(0,0,w))\geq 1$. We have also used $\langle v_1 + w\rangle \approx \langle v_2 + w\rangle$, since $v_1, v_2 \in B_{r_0}(v_0)$.
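The comparability $\langle v_1 + w\rangle \approx \langle v_2 + w\rangle$, uniformly in $w$, is an instance of Peetre's inequality $\langle a\rangle \le \sqrt 2\, \langle b\rangle \langle a-b\rangle$ (a standard fact, not restated in this section). As a quick numerical sanity check, independent of the surrounding argument:

```python
import math, random

random.seed(1)

def jb(u):
    # Japanese bracket <u> = sqrt(1 + |u|^2)
    return math.sqrt(1 + sum(x * x for x in u))

# Peetre's inequality: <a> <= sqrt(2) <b> <a-b>; with a = v1 + w and
# b = v2 + w this gives <v1+w>/<v2+w> <= sqrt(2) <v1-v2>, uniformly in w.
for _ in range(1000):
    a = [random.uniform(-10, 10) for _ in range(3)]
    b = [random.uniform(-10, 10) for _ in range(3)]
    amb = [x - y for x, y in zip(a, b)]
    assert jb(a) <= math.sqrt(2) * jb(b) * jb(amb) + 1e-12
```

(The inequality follows from $1+|a|^2 \le 2(1+|b|^2)(1+|a-b|^2)$.)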
Now, using \eqref{e.right-translation}, we have \[ \begin{split} |f^\varepsilon(z_1\circ(0,0,w)) &- f^\varepsilon(z_2\circ(0,0,w))| \\ & \lesssim \|f^\varepsilon\|_{C^\alpha_{\ell,q_1}(Q_{r_0}^{t,x}(z_0)\times\mathbb R^3)} \langle v_1 + w\rangle^{-q_1} \Big(d_\ell(z_1,z_2) + d_\ell(z_1,z_2)^\frac{2s}{1+2s} |w|^\frac{1}{1+2s}\Big)^{\alpha}. \end{split} \] Returning to \eqref{e.Brho}, we have \begin{equation} \begin{split} &\int_{B_\rho}|K_{f^\varepsilon,z_1}(w) - K_{f^\varepsilon,z_2}(w)||w|^2 \, \mathrm{d} w\\ &\leq \rho^{2-2s}\|f^\varepsilon\|_{C^\alpha_{\ell,q_1}(Q_{r_0}^{t,x}(z_0)\times\mathbb R^3)} \int_{\mathbb R^3} |w|^{\gamma+2s} \langle v_1 + w\rangle^{-q_1} \Big(d_\ell(z_1,z_2) + d_\ell(z_1,z_2)^\frac{2s}{1+2s} |w|^\frac{1}{1+2s}\Big)^{\alpha} \, \mathrm{d} w\\ &\leq \rho^{2-2s}\|f^\varepsilon\|_{C^\alpha_{\ell,q_1}(Q_{r_0}^{t,x}(z_0)\times\mathbb R^3)} \langle v_0\rangle^{\gamma+2s+\frac{\alpha}{1+2s}} d_\ell(z_1,z_2)^{\alpha'}, \end{split} \end{equation} with $\alpha' = \alpha \frac {2s}{1+2s}$. We have used $q_1 > \gamma+2s +\alpha/(1+2s)+3$, which is precisely the additional condition in the statement of the lemma. Applying \eqref{e.C-alpha-ell}, we see that $\|f^\varepsilon\|_{C^\alpha_{\ell,q_1}(Q_{r_0}^{t,x}(z_0)\times\mathbb R^3)}$ is bounded independently of $\varepsilon$. This implies \eqref{e.kernel-holder} holds, as in the statement of the lemma. \end{proof} Because of Lemma \ref{l:f-eps-kernel}, we may apply Theorem \ref{t:schauder} to $f^\varepsilon$ and obtain \[ \|f^\varepsilon\|_{C^{2s+\alpha'}_\ell(Q_{r_0/2}(z_0))} \leq C_0, \] with $C_0$ depending on $t_0$, $|v_0|$, and the initial data $f_{\rm in}$, but independent of $\varepsilon$. \subsection{Convergence as $\varepsilon \to 0$ and the conclusion of \Cref{t:existence}} For each compact subset $\Omega\subset (0,T_f]\times \mathbb R^6$, our work above implies that $f^\varepsilon$ is bounded in $C^{2s+\alpha'}_\ell(\Omega)$ for some $\alpha'$ depending on $\Omega$.
(Note that the dependence of $\alpha'$ on $\Omega$ follows from the dependence of $\alpha_0$ on the domain in \Cref{t:global-degiorgi}.) This implies the sequence $f^\varepsilon$ is precompact in $C^{2s+\alpha''}_\ell(\Omega)$ for any $\alpha''\in (0,\alpha')$, and some subsequence of $f^\varepsilon$ converges in $C^{2s+\alpha''}_\ell(\Omega)$ to a function $f$. Since $\Omega$ was arbitrary, $f$ can be defined as an element of $C^{2s}_{\ell, \rm loc}((0,T_f]\times \mathbb R^6)$, and for any compact $\Omega$, there is an $\alpha''$ with $f\in C^{2s+\alpha''}_\ell(\Omega)$. Since $f^\varepsilon \to f$ pointwise, $f$ also lies in $L^\infty_{q_0}([0,T_f]\times \mathbb R^6)$, by Proposition \ref{p:upper-bounds2}. From \cite[Lemma 2.7]{imbert2018schauder}, the norm $C^{2s+\alpha''}_\ell$ controls the material derivative $(\partial_t + v\cdot \nabla _x)f^\varepsilon$ (but not the separate terms $\partial_t f^\varepsilon$ and $\nabla_x f^\varepsilon$). In particular, for each compact $\Omega$, \[ \|(\partial_t + v\cdot \nabla_x) f^\varepsilon\|_{C^{\alpha''}_\ell(\Omega)} \leq C\|f^\varepsilon\|_{C^{2s+\alpha''}_\ell(\Omega)},\] and the convergence of $f^\varepsilon$ in $C^{2s+\alpha''}_\ell$ implies $(\partial_t+v\cdot \nabla_x)f$ is a locally H\"older continuous function. To analyze the convergence of $Q(f^\varepsilon,f^\varepsilon)$ as $\varepsilon\to 0$, we use the following lemma: \begin{lemma}\label{l:Q-makes-sense} Let $g, h\in L^\infty_{q}(\mathbb R^3)$ with $q> \gamma +2s +3$. For some $v_0\in \mathbb R^3$ and $\alpha>0$, assume $h\in C^{2s+\alpha}(B_1(v_0))$. Then \begin{equation}\label{e:Qgh} |Q(g,h)(v)| \leq C\|g\|_{L^\infty_{q}(\mathbb R^3)}( \|h\|_{C^{2s+\alpha}(B_1(v_0))}+\|h\|_{L^\infty(\mathbb R^3)}) \langle v_0\rangle^{(\gamma+2s)_+}, \quad v\in B_1(v_0).
\end{equation} \end{lemma} \begin{proof} Writing $Q = Q_{\rm s} + Q_{\rm ns}$ as usual, the singular term is handled by \cite[Lemma 4.6]{imbert2020smooth}, which implies \[ |Q_{\rm s}(g,h)(v)| \leq C\left(\int_{\mathbb R^3} g(w)|v+w|^{\gamma+2s} \, \mathrm{d} w\right) \|h\|_{L^\infty(\mathbb R^3)}^{\frac \alpha {2s+\alpha}} [h]_{C^{2s+\alpha}(v)}^{\frac{2s}{2s+\alpha}},\] where $[h]_{C^{2s+\alpha}(v)}$ denotes the smallest constant $N>0$ such that there exists a polynomial $p$ of degree less than $2s+\alpha$ with $|h(v+w) - p(w)|\leq N |w|^{2s+\alpha}$ for all $w\in \mathbb R^3$. Using $a^{\frac{\alpha}{2s+\alpha}}b^{\frac{2s}{2s+\alpha}}\lesssim a + b$, and noting that \[ [h]_{C^{2s+\alpha}(v)} \leq [h]_{C^{2s+\alpha}(B_1(v))} + \|h\|_{L^\infty(B_1(v)^c)},\] we see that $Q_{\rm s}(g,h)(v)$ is bounded by the right-hand side of \eqref{e:Qgh}, using the convolution estimate of Lemma \ref{l:convolution} and $\langle v\rangle\approx \langle v_0\rangle$. For $Q_{\rm ns}(g,h)(v) = c_b h(v) [g\ast |\cdot|^\gamma](v)$, another application of Lemma \ref{l:convolution} implies \[ |Q_{\rm ns}(g,h)(v)| \leq C \|h\|_{L^\infty(\mathbb R^3)} \|g\|_{L^\infty_q(\mathbb R^3)}\langle v\rangle^\gamma, \] since $q>3$. The conclusion of the lemma follows. \end{proof} Using bilinearity, we write $Q(f^\varepsilon, f^\varepsilon) - Q(f,f) = Q(f^\varepsilon, f^\varepsilon-f) + Q(f^\varepsilon-f, f)$. Let $q$ equal the average of $q_0$ and $\gamma+2s+3$. Then, since $f^\varepsilon$ and $f$ share a common uniform bound in $L^\infty_{q_0}([0,T_f]\times\mathbb R^6)$ with $q< q_0$, and $f^\varepsilon\to f$ uniformly on compact sets, we in fact have $f^\varepsilon \to f$ strongly in $L^\infty_{q}([\tau,T_f]\times\Omega_x\times\mathbb R^3)$ for any $\tau\in (0,T_f)$ and compact $\Omega_x\subset \mathbb R^3$.
Together with the convergence in $C^{2s+\alpha'}_\ell(\Omega)$ for compact $\Omega$, this is enough to apply Lemma \ref{l:Q-makes-sense} and conclude $Q(f^\varepsilon,f^\varepsilon) \to Q(f,f)$ locally uniformly. In particular, $Q(f,f)$ is well-defined. We have shown that $f$ satisfies the Boltzmann equation \eqref{e.boltzmann} in the pointwise sense. To address the initial data, we multiply the equation \eqref{e.boltzmann} satisfied by $f^\varepsilon$ by some $\varphi\in C^1_{t,x} C^2_v$ with compact support in $[0,T_f) \times \mathbb R^6$, and integrate by parts: \begin{equation}\label{e.phi-0} \begin{split} \iint_{\mathbb R^3\times\mathbb R^3} \varphi(0,x,v) f_{\rm in}^\varepsilon(x,v) \, \mathrm{d} v \, \mathrm{d} x &= \int_0^{T_f} \iint_{\mathbb R^3\times\mathbb R^3} f^\varepsilon (\partial_t \varphi + v\cdot \nabla_x \varphi)\, \mathrm{d} v \, \mathrm{d} x \, \mathrm{d} t \\ &\quad +\int_0^{T_f} \iint_{\mathbb R^3\times\mathbb R^3} \varphi Q(f^\varepsilon, f^\varepsilon) \, \mathrm{d} x \, \mathrm{d} v \, \mathrm{d} t. \end{split} \end{equation} The left-hand side converges to $\iint_{\mathbb R^3\times\mathbb R^3} \varphi(0,x,v) f_{\rm in}(x,v) \, \mathrm{d} x \, \mathrm{d} v$ by the convergence of $f_{\rm in}^\varepsilon$ to $f_{\rm in}$ in $L^1(\supp(\varphi(0,\cdot,\cdot)))$ (recall the definition of $f_{\rm in}^\varepsilon$ in~\eqref{e.f0eps}). The convergence of the first integral on the right in \eqref{e.phi-0} is also straightforward, by the uniform upper bounds for $f^\varepsilon$ in $L^\infty_{q_0}$ and the pointwise convergence of $f^\varepsilon$ to $f$. For the second integral on the right, we need to proceed more carefully. The continuity properties needed to apply Lemma \ref{l:Q-makes-sense} and control $Q(f^\varepsilon, f^\varepsilon)$ pointwise may degenerate as $t\to 0$ at a potentially severe rate. Therefore, we use the weak formulation of the collision operator to bound this integral.
This is made precise in the following lemma: \begin{lemma}\label{l:weak-estimate} For any $\varphi\in C^2(\mathbb R^3)$, and $v, v_*\in \mathbb R^3$, there holds \begin{equation}\label{e.sphere-est} \left|\int_{\mathbb S^2} B(v-v_*,\sigma) [\varphi(v_*') + \varphi(v') - \varphi(v_*) - \varphi(v)] \, \mathrm{d} \sigma\right| \leq C \|\varphi\|_{C^2(\mathbb R^3)}|v-v_*|^\gamma (1+|v-v_*|^{2s}), \end{equation} for a universal constant $C$. In particular, for any functions $g, h$ on $\mathbb R^3$ such that the right-hand side is finite, one has \begin{equation*} \left|\int_{\mathbb R^3}W(g,h,\varphi) \, \mathrm{d} v\right| \leq C \|\varphi\|_{C^{2}(\mathbb R^3)} \iint_{\mathbb R^3\times \mathbb R^3} g(v_*) h(v) |v-v_*|^\gamma (1+|v-v_*|^{2s}) \, \mathrm{d} v_* \, \mathrm{d} v, \end{equation*} where $W(g,h,\varphi)$ is defined as in \eqref{e.weak-collision}. \end{lemma} Estimates of this general type are common in the Boltzmann literature, see e.g. \cite[Chapter 2, Formula (112)]{villani2002review}. However, we could not find a reference with the (apparently sharp) asymptotics $|v-v_*|^{\gamma+2s}$ inside the integral. The sharp asymptotics will be important in the proof of Theorem \ref{t:weak-solutions} below, where we only assume enough velocity decay to control the $L^1$ moment of order $\gamma+2s$ of our weak solutions. \begin{proof} Let us recall some facts about the geometry of elastic collisions: \begin{align} |v_*' - v_*| &= |v' - v|,\label{e.pre-post}\\ v_*' + v' &= v_*+v,\label{e.momentum}\\ |v' - v| &\approx \theta |v-v_*|,\label{e.cosine-law} \end{align} where we recall $\theta$ from~\eqref{e.B-def}. The second fact \eqref{e.momentum} corresponds to conservation of momentum. Let us also introduce the standard abbreviations $F' = F(v')$, $F_*' = F(v_*')$, $F_* = F(v_*)$, and $F = F(v)$ for any function $F$.
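These identities can be checked directly from the $\sigma$-representation of the post-collisional velocities, $v' = \tfrac{v+v_*}{2} + \tfrac{|v-v_*|}{2}\sigma$ and $v_*' = \tfrac{v+v_*}{2} - \tfrac{|v-v_*|}{2}\sigma$ (we assume this standard convention, which is not restated in this section). As a numerical sanity check, with $\cos\theta = \sigma\cdot(v-v_*)/|v-v_*|$, the exact version of the third identity is $|v'-v| = |v-v_*|\sin(\theta/2)$:

```python
import math, random

random.seed(0)

def norm(u): return math.sqrt(sum(x * x for x in u))
def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]
def scale(c, u): return [c * x for x in u]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def post_collisional(v, vs, sigma):
    # sigma-representation of the post-collisional velocities v', v_*'
    mid = scale(0.5, add(v, vs))
    half = scale(0.5 * norm(sub(v, vs)), sigma)
    return add(mid, half), sub(mid, half)

for _ in range(200):
    v = [random.gauss(0, 1) for _ in range(3)]
    vs = [random.gauss(0, 1) for _ in range(3)]
    sigma = [random.gauss(0, 1) for _ in range(3)]
    sigma = scale(1.0 / norm(sigma), sigma)
    vp, vsp = post_collisional(v, vs, sigma)
    r = norm(sub(v, vs))
    # |v_*' - v_*| = |v' - v|
    assert abs(norm(sub(vsp, vs)) - norm(sub(vp, v))) < 1e-9
    # v_*' + v' = v_* + v (conservation of momentum)
    assert all(abs(d) < 1e-9 for d in sub(add(vp, vsp), add(v, vs)))
    # kinetic energy is conserved as well
    assert abs(dot(vp, vp) + dot(vsp, vsp) - dot(v, v) - dot(vs, vs)) < 1e-9
    # |v' - v| = |v - v_*| sin(theta/2)
    ct = dot(sigma, scale(1.0 / r, sub(v, vs)))
    theta = math.acos(max(-1.0, min(1.0, ct)))
    assert abs(norm(sub(vp, v)) - r * math.sin(theta / 2)) < 1e-9
```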
Recalling that $B(v-v_*,\sigma) = |v-v_*|^\gamma \theta^{-2-2s} \tilde b(\theta)$, with $\tilde b (\theta) \approx 1$, we divide the integral over $\mathbb S^2$ in \eqref{e.sphere-est} into two domains: \[ D_1 := \{ \sigma : \theta \leq |v-v_*|^{-1}\}, \quad D_2 := \mathbb S^2 \setminus D_1,\] where $D_2$ is empty if $|v-v_*|\leq 1/\pi$. In $D_1$, following a common method for controlling the angular singularity, we Taylor expand $\varphi$ and use the identities \eqref{e.momentum} and \eqref{e.pre-post}. We obtain \[\begin{split} \varphi' + \varphi_*' - \varphi_* - \varphi &= \nabla \varphi_*\cdot (v_*' - v_*) + \nabla \varphi\cdot (v' - v) + O(\|D^2\varphi\|_{L^\infty} |v'-v|^2)\\ &= (\nabla \varphi - \nabla \varphi_*)\cdot (v' - v) + O(\|D^2\varphi\|_{L^\infty}|v' - v|^2). \end{split} \] From \eqref{e.cosine-law}, the second term on the right in the last expression is bounded by a constant times $\|D^2\varphi\|_{L^\infty} \theta^2|v-v_*|^2$. To handle the first term on the right, we parameterize $\mathbb S^2$ with spherical coordinates $\sigma = (\theta, \eta) \in [0,\pi]\times [0,2\pi]$, where $\theta=0$ corresponds to $v = v'$. A simple geometric argument shows $\left| \int_0^{2\pi} (v' - v) \, \mathrm{d} \eta\right| \lesssim |v-v_*|\theta^2$. Therefore, we have \[ \left|\int_0^{2\pi} [ \varphi' + \varphi_*' - \varphi_* - \varphi] \, \mathrm{d} \eta\right|\lesssim \|D^2\varphi\|_{L^\infty} \theta^2 |v-v_*|^2, \] which implies \[ \begin{split} \Big|\int_{D_1} &B(v-v_*,\sigma)[\varphi' + \varphi_*' - \varphi_* - \varphi] \, \mathrm{d} \sigma \Big|\\ &\lesssim |v-v_*|^{\gamma+2} \int_0^{|v-v_*|^{-1}} \theta^{-2-2s} \|D^2\varphi\|_{L^\infty} \theta^2 \sin \theta \, \mathrm{d} \theta \lesssim \|D^2\varphi\|_{L^\infty} |v-v_*|^{\gamma+2s} .
\end{split} \] For the integral over $D_2$, since $\theta\geq |v-v_*|^{-1}$, we have \[ \begin{split} \Big| \int_{D_2} &|v-v_*|^\gamma \theta^{-2-2s} \tilde b(\theta)[\varphi' + \varphi_*' - \varphi_* - \varphi] \, \mathrm{d} \sigma \Big|\\ &\lesssim \|D^2\varphi\|_{L^\infty} |v-v_*|^{\gamma} \int_{|v-v_*|^{-1}}^\pi \int_0^{2\pi} \theta^{-2-2s} \sin\theta \, \mathrm{d} \eta \, \mathrm{d} \theta \lesssim \|D^2\varphi\|_{L^\infty} |v-v_*|^{\gamma}(1+|v-v_*|^{2s}), \end{split} \] which establishes \eqref{e.sphere-est}. Next, recalling the weak formulation \eqref{e.weak-collision}, we have \begin{equation*} \int_{\mathbb R^3} W(g,h,\varphi) \, \mathrm{d} v = \frac 1 2 \int_{\mathbb R^3} \iint_{\mathbb R^3\times \mathbb S^2} B(v-v_*,\sigma) g h_* [\varphi' + \varphi_*'- \varphi_* - \varphi] \, \mathrm{d} \sigma \, \mathrm{d} v_* \, \mathrm{d} v, \end{equation*} and the last conclusion of the lemma follows directly from \eqref{e.sphere-est}. \end{proof} Returning to \eqref{e.phi-0}, for each $t\in (0,T_f]$, the locally uniform convergence of $Q(f^\varepsilon,f^\varepsilon)$ to $Q(f,f)$ implies \[\iint_{\mathbb R^3\times\mathbb R^3} \varphi Q(f^\varepsilon,f^\varepsilon) \, \mathrm{d} x \, \mathrm{d} v \to \iint_{\mathbb R^3\times\mathbb R^3} \varphi Q(f,f) \, \mathrm{d} x \, \mathrm{d} v.\] By Lemma \ref{l:weak-estimate} and our uniform upper bound on $\|f^\varepsilon\|_{L^\infty_{q_0}(\mathbb R^6)}$ with $q_0>2s+3$, we may apply the Dominated Convergence Theorem to the time integral of $\varphi Q(f^\varepsilon, f^\varepsilon)$ and conclude that $f$ agrees with the initial data $f_{\rm in}$ in the sense of \eqref{e.weak-initial-data}. Finally, we consider the higher regularity of $f$.
The approximate solutions $f^\varepsilon$ are smooth and rapidly decaying, so for any compact $\Omega\subset (0,T_f]\times\mathbb R^3$, partial derivative $D^k$, and $\alpha, m>0$, Proposition \ref{p:higher-reg} provides a $q(k,m)>0$ such that $\|D^k f^\varepsilon\|_{C^\alpha_{\ell,m}(\Omega\times\mathbb R^3_v)}$ is bounded for positive times in terms of $\|f_{\rm in}^\varepsilon\|_{L^\infty_{q(k,m)}(\mathbb R^6)}$. From Proposition \ref{p:upper-bounds2}, this bound is independent of $\varepsilon$ if the initial data $f_{\rm in}$ is bounded in $L^\infty_{q(k,m)}(\mathbb R^6)$. Applying a standard compactness argument, these $C^\alpha_{\ell,m}$ estimates for $D^k f^\varepsilon$ imply $L^\infty_{m}$ estimates for $D^k f$ in the limit as $\varepsilon\to 0$. This concludes the proof of Theorem \ref{t:existence}. \subsection{\Cref{t:weak-solutions}: the existence of weak solutions} In this subsection, we prove Theorem \ref{t:weak-solutions}. The proof is based on the same approximating sequence $f^\varepsilon$ from the proof of Theorem \ref{t:existence}. The relaxed conditions on $f_{\rm in}$ result in weaker uniform regularity for $f^\varepsilon$, and correspondingly, a different notion of convergence as $\varepsilon\to 0$. In more detail, assume that $f_{\rm in}\geq 0$ lies in $L^\infty_{q_0}(\mathbb R^6)$ with $q_0>\gamma+2s+3$. This initial data need not satisfy any uniform positivity condition. Let $f^\varepsilon_{\rm in}$ be defined as in \eqref{e.f0eps}, and as above, let $f^\varepsilon$ be smooth solutions to \eqref{e.boltzmann} with initial data $f^\varepsilon_{\rm in}$. By Proposition \ref{p:upper-bounds2}, these solutions exist on a uniform time interval $[0,T_f]$, and are uniformly bounded in $L^\infty_{q_0}([0,T_f]\times\mathbb R^6)$.
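Uniform boundedness yields a limit point only through weak-$\ast$ compactness; no strong convergence in $L^\infty$ can be expected. A generic one-dimensional illustration of this dichotomy (independent of the Boltzmann setting, included only as a sketch) is an oscillating sequence:

```python
import math

# f_n(x) = sin(n x) is bounded in L^inf(0, 2*pi) uniformly in n and has no
# strongly convergent subsequence, yet it converges weak-* to 0: pairings
# against any fixed integrable test function vanish as n grows.
def pairing(n, phi, a=0.0, b=2.0 * math.pi, steps=40000):
    # midpoint-rule approximation of \int_a^b sin(n x) phi(x) dx
    h = (b - a) / steps
    return sum(h * math.sin(n * (a + (k + 0.5) * h)) * phi(a + (k + 0.5) * h)
               for k in range(steps))

phi = lambda x: math.exp(-x)  # a fixed integrable test function
vals = [abs(pairing(n, phi)) for n in (1, 10, 100)]
assert vals[1] < vals[0] and vals[2] < vals[1]  # pairings decay in n
assert vals[2] < 2e-2
# but sup |sin(n x)| = 1 for every n, so there is no strong convergence
```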
Since $L^\infty_{q_0}([0,T_f]\times\mathbb R^6)$ is the dual of $L^1_{-q_0}([0,T_f]\times\mathbb R^6)$, some sequence of $f^\varepsilon$ converges in the weak-$\ast$ $L^\infty_{q_0}$ sense to a function $f\in L^\infty_{q_0}([0,T_f]\times\mathbb R^6)$. To show $f$ is a weak solution of \eqref{e.boltzmann}, note that for each $\varepsilon>0$ and $\varphi \in C^1_{t,x} C^2_v$ with compact support in $[0,T_f)\times\mathbb R^6$, integrating by parts implies the weak formulation \eqref{e.phi-0} holds for $f^\varepsilon$. The left-hand side of \eqref{e.phi-0} converges by the $L^1$-convergence of $f^\varepsilon_{\rm in}$ to $f_{\rm in}$ on $\supp(\varphi(0,\cdot,\cdot))$, exactly as above. For the right-hand side, we have \[ \int_0^{T_f} \iint_{\mathbb R^3\times\mathbb R^3} (f^\varepsilon - f) (\partial_t\varphi+v\cdot\nabla_x\varphi) \, \mathrm{d} v\, \mathrm{d} x \, \mathrm{d} t \to 0, \] by the weak-$\ast$ convergence of $f^\varepsilon$ to $f$, since $(\partial_t\varphi + v\cdot\nabla_x \varphi) \in L^1_{-q_0}([0,T_f]\times\mathbb R^6)$. For the collision term, since $f^\varepsilon$ is smooth and rapidly decaying, we may apply the identity $\int_{\mathbb R^3} \varphi Q(f^\varepsilon,f^\varepsilon) \, \mathrm{d} v = \int_{\mathbb R^3} W(f^\varepsilon,f^\varepsilon,\varphi) \, \mathrm{d} v$. Using bilinearity, we have, for each $t,x$, \[ \begin{split} \int_{\mathbb R^3}W(f^\varepsilon,&f^\varepsilon,\varphi) \, \mathrm{d} v - \int_{\mathbb R^3} W(f,f,\varphi) \, \mathrm{d} v = \int_{\mathbb R^3} W(f^\varepsilon,f^\varepsilon-f,\varphi) \, \mathrm{d} v + \int_{\mathbb R^3} W(f,f^\varepsilon-f, \varphi) \, \mathrm{d} v.
\end{split} \] The first term on the right is equal to \begin{equation}\label{e.first-term} \begin{split} \frac 1 2 \int_{\mathbb R^3}f^\varepsilon\int_{\mathbb R^3}\int_{\mathbb S^2} (f^\varepsilon_* - f_*) B(v-v_*,\sigma)[\varphi' + \varphi_*' - \varphi_* - \varphi] \, \mathrm{d} \sigma \, \mathrm{d} v_* \, \mathrm{d} v. \end{split} \end{equation} From \eqref{e.sphere-est} in Lemma \ref{l:weak-estimate}, we see that for fixed $v\in \mathbb R^3$, \[ \int_{\mathbb S^2} B(v-v_*,\sigma) [\varphi' + \varphi_*' - \varphi_* - \varphi] \, \mathrm{d} \sigma \in L^1_{-q_0}(\mathbb R^3_{v_*}),\] since $q_0> \gamma+2s+3$. This implies \eqref{e.first-term} converges to 0 as $\varepsilon\to 0$. The same argument, after exchanging the $v$ and $v_*$ integrals, implies $\int_{\mathbb R^3} W(f,f^\varepsilon- f,\varphi) \, \mathrm{d} v \to 0$. We conclude $f$ is a weak solution to \eqref{e.boltzmann} in the sense of Theorem \ref{t:weak-solutions}. Next, consider the additional assumption that $f_{\rm in}(x,v)\geq \delta$ for all $(x,v) \in B_r(x_m,v_m)$. As above, this implies lower bounds of the form \eqref{e.uniform-lower-b} for $f^\varepsilon$ that are independent of $\varepsilon$. Together with our uniform bound on $\|f^\varepsilon\|_{L^\infty_{q_0}([0,T_f]\times\mathbb R^6)}$ with $q_0>\gamma+2s+3$, this allows us to apply the local De Giorgi estimate of Theorem \ref{t:degiorgi}. (Note that under such an assumption on $q_0$, we cannot necessarily apply the global De Giorgi estimate of Theorem \ref{t:global-degiorgi}.) We obtain, for any compact $\Omega\subset (0,T_f]\times\mathbb R^6$, \[ \|f^\varepsilon\|_{C^\alpha(\Omega)} \leq C,\] with $C,\alpha>0$ depending on $\Omega$, $\delta$, $r$, $v_m$, $x_m$, and $\|f_{\rm in}\|_{L^\infty_{q_0}(\mathbb R^6)}$. Since this bound is independent of $\varepsilon$, the same conclusion applies to $f$. This concludes the proof of Theorem \ref{t:weak-solutions}.
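The role of the decay threshold can be seen concretely: the $v_*$-moment generated by \eqref{e.sphere-est} is integrable against $\langle v_*\rangle^{-q_0}$ precisely when $q_0>\gamma+2s+3$. The following sketch (with the illustrative values $\gamma=1$, $s=1/2$, chosen only for the check) tests the radial reduction $\int_0^\infty r^{2+\gamma}\langle r\rangle^{-q_0}(1+r^{2s})\,\mathrm{d} r$ numerically:

```python
import math

def truncated_moment(q0, gamma, s, R, steps=20000):
    # midpoint rule for \int_0^R r^{2+gamma} (1+r^2)^{-q0/2} (1+r^{2s}) dr,
    # the radial reduction of \int_{R^3} <w>^{-q0} |w|^gamma (1+|w|^{2s}) dw
    h = R / steps
    total = 0.0
    for k in range(steps):
        r = (k + 0.5) * h
        total += h * r ** (2 + gamma) * (1 + r * r) ** (-q0 / 2) * (1 + r ** (2 * s))
    return total

gamma, s = 1.0, 0.5  # threshold: gamma + 2s + 3 = 5
# q0 = 6 > 5: truncated integrals stabilize, so the moment is finite
I50, I500 = truncated_moment(6.0, gamma, s, 50), truncated_moment(6.0, gamma, s, 500)
assert abs(I500 - I50) < 0.1 * I50
# q0 = 4 < 5: truncated integrals keep growing, so the moment diverges
J50, J500 = truncated_moment(4.0, gamma, s, 50), truncated_moment(4.0, gamma, s, 500)
assert J500 > 2 * J50
```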
\subsection{Continuous matching with initial data} Here we show, under the assumption that $f_{\rm in}$ is continuous, that the solution $f$ is continuous as $t\searrow 0$. We prove this as a consequence of a more general result for linear kinetic integro-differential equations of the following form: \begin{equation}\label{e.kide} \partial_t f + v\cdot \nabla_x f = \int_{\mathbb R^3} K(t,x,v,v') (f(t,x,v') - f(t,x,v)) \, \mathrm{d} v' + c f, \end{equation} where $K: [0,T]\times \mathbb R^3\times\mathbb R^3\times\mathbb R^3\to [0,\infty)$ and $c: [0,T]\times\mathbb R^3\times\mathbb R^3 \to [0,\infty)$ satisfy, for all $(t,x,v)\in [0,T]\times\mathbb R^3\times\mathbb R^3$, $w\in \mathbb R^3$, and $r>0$, \begin{equation}\label{e.K_c_assumptions} \begin{split} & K(t,x,v,v+w) = K(t,x,v,v-w),\\ &\int_{B_{2r}(v)\setminus B_r(v)} K(t,x,v,v') \, \mathrm{d} v' \leq \Lambda \langle v\rangle^{\gamma+2s} r^{-2s}, \quad \text{ and} \\ &|c(t,x,v)| \leq \Lambda \langle v\rangle^\gamma, \end{split} \end{equation} for some constants $\Lambda>0$, $\gamma>-3$, and $s\in (0,1)$. From the Carleman decomposition of $Q(f,f)$ described in Section \ref{s:carleman}, together with Lemma \ref{l:K-upper-bound}, one sees that equation \eqref{e.kide} includes the Boltzmann equation \eqref{e.boltzmann} as a special case. \begin{proposition}\label{p:cont-match} Let $f \in C^2_\ell\cap L^{\infty}([0,T]\times\mathbb R^6)$ be a solution to~\eqref{e.kide} with initial data $f_{\rm in}$. Then, for any $(x_0,v_0)$ and $\eta>0$, there exist $t_\eta, r_\eta>0$ such that, if \[ |(x,v) - (x_0,v_0)| < r_\eta \quad\text{ and }\quad t\in [0,t_\eta), \] then \begin{equation}\label{e.c7146} |f(t,x,v) - f_{\rm in}(x_0,v_0)| < \eta. \end{equation} The constants $t_\eta$ and $r_\eta$ depend only on $|v_0|$, $\eta$, $\Lambda$, $s$, $\|f\|_{L^\infty}$, and the modulus of continuity of $f_{\rm in}$ at $(x_0,v_0)$. In particular, the constants do not depend on $\|f\|_{C^2_\ell}$.
\end{proposition} To apply this proposition to the solution $f$ to \eqref{e.boltzmann} constructed in Theorem \ref{t:existence}, we use the smooth approximating sequence $f^\varepsilon$. Proposition \ref{p:cont-match} applies to the smooth solutions $f^\varepsilon$, with constants depending on $\Lambda \lesssim \|f^\varepsilon\|_{L^\infty_{q_0}(\mathbb R^6)}$, which is bounded independently of $\varepsilon$ by Proposition \ref{p:upper-bounds2}. Since $f^\varepsilon \to f$ pointwise in $[0,T]\times\mathbb R^6$, the conclusion of Proposition \ref{p:cont-match} also applies to $f$. Note that we allow both cases $\gamma < 0$ and $\gamma \geq 0$ in Proposition \ref{p:cont-match}. \begin{proof} In the proof, we only obtain the upper bound on $f - f_{\rm in}$ in~\eqref{e.c7146}. The lower bound can be obtained in an exactly analogous way, so we omit it. Without loss of generality, we assume that $x_0 = 0$. Fix $\delta\in(0,1)$ sufficiently small so that \begin{equation}\label{e.c7141} |f_{\rm in}(x,v) - f_{\rm in}(0,v_0)| < \frac{\eta}{2} \qquad\text{ if } |x|^2 + |v-v_0|^2 \leq \delta^2. \end{equation} Let \begin{equation}\label{e.c_t_epsilon} T_\delta = \frac{\delta}{4( |v_0| + 2\delta)}. \end{equation} Our goal is to construct a supersolution $F$ for $f$ on $[0, T_\delta]\times B_\delta(0,v_0)$.
To begin, we let \begin{equation}\label{e.c7142} F(t,x,v) = e^{2\Lambda \langle v_0\rangle^\gamma t} \left(\|f\|_{L^\infty([0,T]\times\mathbb R^6)} \psi\left(\frac{|x-vt|^2 + |v-v_0|^2}{\delta^2} \right) + \frac{\eta}{2} + \rho t + f_{\rm in}(0,v_0)\right), \end{equation} where $\psi \in C^2(\mathbb R)$ satisfies \begin{equation}\label{e.c7147} \psi(s) = \begin{cases} 0 \qquad &\text{ if } s \leq 0,\\ 1 \qquad &\text{ if } s \geq 1/2, \end{cases} \qquad 0 \leq \psi' \leq 4, \qquad\text{ and } \qquad |\psi''| \leq 32, \end{equation} and \begin{equation}\label{e.c7148} \rho = A \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \frac{\langle v_0\rangle^{\gamma + 2s}}{\delta^{2s}} e^{2\Lambda T_\delta\langle v_0\rangle^\gamma}, \end{equation} for a large constant $A>0$ to be chosen depending only on $\Lambda$, $\langle v_0\rangle$, $\gamma$, and $s$. We claim that \begin{equation}\label{e.barf-f} F > f \quad \text{ on } [0,T_\delta)\times \mathbb R^6. \end{equation} Note that all terms in $F$ except $f_{\rm in}(0,v_0)$ can be made smaller than $\eta$ by choosing $r_\eta>0$ and $t_\eta\in (0,T_\delta)$ sufficiently small. Therefore, the proof is complete once we establish \eqref{e.barf-f}. First we note that~\eqref{e.barf-f} holds initially. Indeed, from~\eqref{e.c7141} and~\eqref{e.c7142}, it is clear that $F> f$ for $(t,x,v) \in \{0\}\times \mathbb R^6$. Next, we show that~\eqref{e.barf-f} holds away from $(0,v_0)$: \begin{equation}\label{e.c62702} F > f \qquad\text{ for } (t,x,v) \in (0,T_\delta]\times B_\delta(0,v_0)^c. \end{equation} Fix any $(t,x,v) \in (0,T_\delta]\times B_\delta(0,v_0)^c$. If $|v-v_0| \geq \delta/\sqrt 2$, then, by the definition of $\psi$~\eqref{e.c7147}, \begin{equation}\label{e.c62703} F(t, x, v) > \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \psi\left(\frac{|x-vt|^2 + |v-v_0|^2}{\delta^2} \right) = \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \geq f(t,x,v). \end{equation} Hence,~\eqref{e.c62702} is established in this case.
Assume now that $|v-v_0| < \delta/\sqrt 2$. Then \begin{equation} |x-tv|^2 + |v - v_0|^2 = |x|^2 + |v-v_0|^2 - 2 t x \cdot v + t^2 |v|^2 \geq |x|^2 + |v-v_0|^2 - 2 t x \cdot v. \end{equation} Next, we use Young's inequality and then that $t< T_\delta$, where $T_\delta$ is defined in~\eqref{e.c_t_epsilon}, and $|v| \leq |v_0| + \delta/\sqrt 2$, to find \begin{equation} |x-tv|^2 + |v - v_0|^2 \geq \frac{3|x|^2}{4} + |v-v_0|^2 - 4 t^2 |v|^2 \geq \frac{3\delta^2}{4} - 4 \frac{\delta^2}{4^2(|v_0|+2\delta)^2} (|v_0| + \delta/\sqrt2)^2 \geq \frac{\delta^2}{2}. \end{equation} The argument of~\eqref{e.c62703} then applies to establish~\eqref{e.c62702}. Hence,~\eqref{e.c62702} holds in both cases. Due to~\eqref{e.c62702}, if $F \not> f$ on $[0,T_\delta]\times \mathbb R^6$, then, defining the crossing time as \begin{equation} t_{\rm cr} = \inf\{t : F(t,x,v) = f(t,x,v) \quad\text{for some}\quad (x,v) \in \mathbb R^6\}, \end{equation} it follows from the continuity of $F$ and $f$ that $t_{\rm cr} \in (0,T_\delta]$, and that there exists a crossing point $(x_{\rm cr}, v_{\rm cr})\in B_\delta(0,v_0)$ such that \begin{equation}\label{e.matching_tch} F(t_{\rm cr}, x_{\rm cr}, v_{\rm cr}) = f(t_{\rm cr}, x_{\rm cr}, v_{\rm cr}). \end{equation} Since $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$ is a minimum of $F- f$ on $(0,T_\delta)\times B_\delta(0,v_0)$, at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$ we have \begin{equation}\label{e.c7143} \begin{split} &F = f, \qquad \partial_t F \leq \partial_t f, \qquad \nabla_x F = \nabla_x f, \qquad \text{and} \\& \int_{B_\delta(v_{\rm cr})} (F' - F_{\rm cr}) K(t_{\rm cr}, x_{\rm cr}, v_{\rm cr},v') \, \mathrm{d} v' \geq \int_{B_\delta(v_{\rm cr})} ( f' - f_{\rm cr}) K(t_{\rm cr}, x_{\rm cr},v_{\rm cr},v') \, \mathrm{d} v'. \end{split} \end{equation} Note that, in the last integral, we have used the notation that, for any $v'$, \[ f' = f(t_{\rm cr}, x_{\rm cr}, v') \quad\text{ and }\quad f_{\rm cr} = f(t_{\rm cr}, x_{\rm cr}, v_{\rm cr}), \] and similarly for $F$.
It follows, from~\eqref{e.kide} and~\eqref{e.c7143}, that, at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$, \begin{equation}\label{e.c7144} \partial_t F + v\cdot \nabla_x F \leq \partial_t f + v\cdot\nabla_x f = \mathcal L f + c F, \end{equation} where $\mathcal L$ denotes the integral operator on the right-hand side of \eqref{e.kide} and we have used $F = f$ at the crossing point. By a direct calculation with \eqref{e.c7142}, we also have \[ \partial_t F + v\cdot \nabla_x F = 2\Lambda\langle v_0\rangle^\gamma F + \rho e^{2\Lambda \langle v_0\rangle^\gamma t}. \] Note that $c(t,x_{\rm cr},v_{\rm cr})\leq \Lambda\langle v_{\rm cr}\rangle^\gamma$ by~\eqref{e.K_c_assumptions}. Hence, up to decreasing $\delta$, $c(t,x_{\rm cr},v_{\rm cr}) < 2\Lambda \langle v_0\rangle^\gamma$. Therefore, we only need to show that, at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$, \begin{equation}\label{e.c7145} \mathcal{L} f \leq \rho e^{2\Lambda \langle v_0\rangle^\gamma t_{\rm cr}}, \end{equation} in order to obtain a contradiction with \eqref{e.c7144} and conclude the proof. To prove \eqref{e.c7145}, we start by using~\eqref{e.c7143} to write, at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$, \begin{equation}\label{e.c714_I1_I2} \begin{split} \mathcal{L} f &\leq \int_{B_\delta(v_{\rm cr})^c} (f' - f_{\rm cr}) K(t_{\rm cr}, x_{\rm cr},v_{\rm cr}, v') \, \mathrm{d} v' + \int_{B_\delta(v_{\rm cr})} (F' - F_{\rm cr}) K(t_{\rm cr},x_{\rm cr},v_{\rm cr}, v') \, \mathrm{d} v'\\ &=: I_1 + I_2. \end{split} \end{equation} First, we bound $I_1$.
Using~\eqref{e.K_c_assumptions}, we have, with $A_k = B_{2^{k+1}\delta}(v_{\rm cr})\setminus B_{2^{k}\delta}(v_{\rm cr})$, \begin{equation}\label{e.c714_I1} \begin{split} I_1 &\lesssim \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \int_{B_\delta(v_{\rm cr})^c} K(t_{\rm cr}, x_{\rm cr},v_{\rm cr}, v') \, \mathrm{d} v'\\ &\lesssim \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \sum_{k=0}^\infty \int_{A_k} K(t_{\rm cr}, x_{\rm cr}, v_{\rm cr}, v') \, \mathrm{d} v' \\&\lesssim \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \langle v_{\rm cr}\rangle^{\gamma + 2s} \sum_{k=0}^\infty (\delta 2^k)^{-2s} \lesssim \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \langle v_{\rm cr}\rangle^{\gamma+2s} \delta^{-2s}. \end{split} \end{equation} Next, we bound $I_2$. From the integral estimate for $K$ in \eqref{e.K_c_assumptions}, applied on a sequence of dyadic annuli, it is easy to show that \begin{equation}\label{e.K-ball} \int_{B_r(v)} |v-v'|^2 K(t,x,v,v') \, \mathrm{d} v' \leq \Lambda \langle v\rangle^{\gamma+2s} r^{2-2s} \end{equation} for all $r>0$. Using the symmetry of $K$ with respect to $(v'-v_{\rm cr})$ as in~\eqref{e.K_c_assumptions}, we see that \[ \int_{B_\delta(v_{\rm cr})} (v' - v_{\rm cr}) \cdot \nabla_v F(t_{\rm cr},x_{\rm cr},v_{\rm cr}) K(t_{\rm cr},x_{\rm cr},v_{\rm cr}, v') \, \mathrm{d} v' = 0. \] Then, using a Taylor expansion and the definition of $F$, we see that \[ \begin{split} &\left|F' - F_{\rm cr} - (v' - v_{\rm cr}) \cdot \nabla_v F(v_{\rm cr})\right|\\ & \lesssim e^{2\Lambda\langle v_0\rangle^\gamma t_{\rm cr}} \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \left(\frac {t_{\rm cr}^2 + 1}{\delta^2} + \frac{t_{\rm cr}^2 |x_{\rm cr}-v_{\rm cr} t_{\rm cr}|^2 + |v_{\rm cr}-v_0|^2}{\delta^4} \right)|v' - v_{\rm cr}|^2 \\&\lesssim e^{2\Lambda \langle v_0\rangle^\gamma t_{\rm cr}} \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \frac{1}{\delta^2}|v' - v_{\rm cr}|^2, \end{split} \] using $|x_{\rm cr}|^2 + |v_{\rm cr}- v_0|^2< \delta^2$ and $t_{\rm cr} < T_\delta$.
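For completeness, the dyadic-annuli computation behind~\eqref{e.K-ball} is the following (up to enlarging $\Lambda$): decomposing $B_r(v)$ into the annuli $B_{2^{1-k}r}(v)\setminus B_{2^{-k}r}(v)$ for $k\geq 1$ and applying the annulus bound from~\eqref{e.K_c_assumptions} on each one, \[ \int_{B_r(v)} |v-v'|^2 K(t,x,v,v') \, \mathrm{d} v' \leq \sum_{k=1}^\infty (2^{1-k}r)^2\, \Lambda \langle v\rangle^{\gamma+2s} (2^{-k}r)^{-2s} \lesssim \langle v\rangle^{\gamma+2s}\, r^{2-2s}, \] where the series is geometric with ratio $2^{2s-2}<1$ because $s<1$.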
Putting the above together and applying~\eqref{e.K-ball}, we find \begin{equation}\label{e.c714_I2} \begin{split} I_2 &\lesssim e^{2\Lambda\langle v_0\rangle^\gamma t_{\rm cr}} \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \frac{1}{\delta^2} \int_{B_\delta(v_{\rm cr})} |v' - v_{\rm cr}|^2 K(t_{\rm cr},x_{\rm cr},v_{\rm cr}, v')\, \mathrm{d} v'\\ &\lesssim e^{2\Lambda\langle v_0\rangle^\gamma t_{\rm cr}} \|f\|_{L^\infty([0,T]\times\mathbb R^6)}\frac{\langle v_{\rm cr}\rangle^{\gamma + 2s}}{\delta^{2s}}. \end{split} \end{equation} Combining~\eqref{e.c714_I1_I2},~\eqref{e.c714_I1}, and~\eqref{e.c714_I2}, we find, at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$, \[ \mathcal{L} f \lesssim \|f\|_{L^\infty([0,T]\times\mathbb R^6)} \frac{\langle v_{\rm cr}\rangle^{\gamma + 2s}}{\delta^{2s}}\left( 1+ e^{2\Lambda\langle v_0\rangle^\gamma t_{\rm cr}}\right). \] Using the choice of $\rho$ from~\eqref{e.c7148}, the fact that $\langle v_0\rangle \approx \langle v_{\rm cr}\rangle$ (recall that $|v_0 - v_{\rm cr}| < \delta < 1$), and increasing $A$ as necessary yields the claim~\eqref{e.c7145}. This concludes the proof. \end{proof} \section{Time regularity in kinetic integro-differential equations}\label{s:time-reg} As part of our proof of uniqueness, we need to show that solutions $f$ of \eqref{e.boltzmann} are H\"older continuous for positive times, with bounds that remain uniform as $t\searrow0$, as long as the initial data $f_{\rm in}$ is H\"older continuous. This will be accomplished in Section \ref{s:holder-xv}. To prove this, we first need to understand the following fundamental property: if $f(t,\cdot,\cdot)$ is H\"older in $(x,v)$ for each time value $t$, is it then H\"older in $(t,x,v)$? The corresponding fact for linear parabolic equations (regularity in $x$ implies regularity in $(t,x)$, in the suitable scaling) is classical, but it is a nontrivial task to extend it to kinetic integro-differential equations~\eqref{e.kide} (including Boltzmann).
For potential future applications, we prove this property for general linear kinetic equations of the form \eqref{e.kide} given above, with $K$ and $c$ satisfying \eqref{e.K_c_assumptions}. Recall the local kinetic H\"older seminorm $[f]_{C^\alpha_{\ell,x,v}(D)}$, defined for subsets $D\subset \mathbb R^6$ in Section \ref{s:holder-spaces}. Since this section concerns H\"older exponents $\alpha < s< 1$, this seminorm can equivalently be defined as \[ [f]_{C^\alpha_{\ell,x,v}(D)} = \sup_{(x,v), (x_0,v_0)\in D} \frac{|f(x,v) - f(x_0,v_0)|}{d_\ell((0,x,v),(0,x_0,v_0))^\alpha}.\] As usual, we define $\|f\|_{C^\alpha_{\ell,x,v}(D)} = \|f\|_{L^\infty(D)} + [f]_{C^\alpha_{\ell,x,v}(D)}$. The main result of this section is as follows. We note that this proposition holds in both cases $\gamma< 0$ and $\gamma\geq 0$. \begin{proposition}\label{prop:holder_in_t} Suppose that $f \in C^\alpha_{\ell, q}([0,T]\times \mathbb R^6)$ for some $\alpha\in (0,\min\{1,2s\})$ and $q\geq 0$, and that $f$ solves~\eqref{e.kide}. Then we have \begin{equation*} \|f\|_{C^\alpha_{\ell,q}([0,T]\times\mathbb R^6)} \leq C \sup_{t\in[0,T]} \left\|\langle v\rangle^{q+\alpha(\gamma+2s)_+/(2s)}f(t)\right\|_{C^\alpha_{\ell,x,v}(\mathbb R^6)}. \end{equation*} The constant $C$ depends only on universal constants, $\alpha$, and $\Lambda$. \end{proposition} The key lemma for proving Proposition \ref{prop:holder_in_t} is the following (recall the definition of the dilation $\delta_r$ in~\eqref{e.dilation-def}): \begin{lemma}\label{lem:scaling_holder} Under the assumptions of \Cref{prop:holder_in_t}, let $z_1 = (t_1,x_1,v_1) \in [0,T]\times \mathbb R^6$ and $r\in (0,1]$ be arbitrary, and let $z_2 = (t_2,x_2,v_2)$ be such that $t_2 \in[0, \langle v_1 \rangle^{-(\gamma+2s)_+}]$, $r^{2s} t_2 + t_1 \in [0,T]$, and $|x_2|, |v_2| < 1$.
Then we have \begin{equation*} \begin{split} |f(z_1 \circ \delta_r (z_2)) - f(z_1)| \lesssim r^\alpha \left(\|f\|_{L^\infty([0,T]\times \mathbb R^6)} + [f(t_1,\cdot,\cdot)]_{C^\alpha_{\ell,x,v}(B_1(x_1,v_1))}\right), \end{split} \end{equation*} with the implied constant depending only on universal constants, $\alpha$, and $\Lambda$. \end{lemma} \begin{proof} The proof is based on a barrier argument. Let $z_1$ be as in the statement of the lemma. Without loss of generality, we may assume that $t_1 = 0$ and $x_1 = 0$. Then $z_2 \in [0, \min\{\langle v_1\rangle^{-(2s+\gamma)_+}, T\}]\times B_1(0) \times B_1(v_1)$. \noindent {\em Step 1: An auxiliary function and its equation.} Let us set some useful notation. For any $r\geq 0$ and any function $g$, let \[ g_r(z) = g(z_1 \circ \delta_r(z)). \] Then, let \[ \tilde F(z) = f_r(z) - f(0, 0, v_1). \] It is straightforward to check that \[ \partial_t \tilde F + v\cdot \nabla_x \tilde F - \mathcal{L} f_r = r^{2s} c_r f_r, \] where we have defined the nonlocal operator and kernel \begin{equation} \begin{split} &\mathcal{L} f_r = \int_{\mathbb R^3} (f_r(v') - f_r(v)) \mathcal{K} (t,x,v,v') \, \mathrm{d} v' \\&\text{and } \quad \mathcal{K}(t,x,v,v') = r^{3+2s} K(r^{2s} t, r^{1+2s} x + r^{2s} v_1 t, rv+v_1, rv' + v_1).
\end{split} \end{equation} We first notice the following bounds derived from~\eqref{e.K_c_assumptions}: for any $L>0$ and $v$, \begin{equation}\label{e.cK2} \begin{split} \int_{B_{2L}(v) \setminus B_L(v)} &\mathcal{K}(v,v') \, \mathrm{d} v' = r^{3+2s}\int_{B_{2L}(v)\setminus B_L(v)} K(rv + v_1, rv' + v_1) \, \mathrm{d} v' \\&= r^{2s} \int_{B_{2rL}(rv+v_1)\setminus B_{rL}(rv+v_1)} K(rv +v_1, w) \, \mathrm{d} w \lesssim L^{-2s} \langle rv + v_1\rangle^{\gamma+2s} , \end{split} \end{equation} and, applying \eqref{e.cK2} on a decreasing sequence of annuli, \begin{equation}\label{e.cK1} \begin{split} &\int_{B_L(v)} |v'-v|^2 \mathcal{K}(v,v') \, \mathrm{d} v' =r^{3+2s}\int_{B_L(v)} |v'-v|^2 K(rv + v_1, rv' + v_1) \, \mathrm{d} v' \\&\quad= r^{-(2-2s)} \int_{B_{rL}(rv+v_1)} |w - (rv + v_1)|^2 K(rv +v_1, w) \, \mathrm{d} w \lesssim L^{2-2s} \langle rv + v_1\rangle^{\gamma+2s} , \end{split} \end{equation} where, for brevity, we have omitted the dependence on $(t,x)$. We also have, via \eqref{e.K_c_assumptions}, the symmetry property \[ \mathcal K(v,v+w) = r^{3+2s} K(r v + v_1, rv + v_1+rw) = r^{3+2s} K(rv+v_1,rv + v_1 - rw) = \mathcal K(v,v-w).\] Our goal is to obtain a local upper bound on $\tilde F$; that is, a bound at $(t_2, x_2, v_2)$ satisfying the smallness assumptions in the statement of the lemma. Hence, we use a suitable multiplicative cutoff function.
Let $\phi \in C_c^\infty(\mathbb R^6)$ be a cut-off function such that, for all $i,j \in \{1,2,3\}$, \begin{equation}\label{e.cutoff} \begin{split} &\phi\approx \left( |v|^2 + |x|^2 + 1 \right)^{-\alpha/2}, \\&|\partial_{x_i} \phi|\lesssim |x|\left(|v|^2+|x|^2+1 \right)^{-\alpha/2-1}, \\&|\partial_{x_ix_j} \phi| \lesssim |x|^2\left(|v|^2+|x|^2+1 \right)^{-\alpha/2-2}, \quad\text{ and} \end{split} \qquad \begin{split} &|\partial_{v_i} \phi|\lesssim |v|\left(|v|^2+|x|^2+1 \right)^{-\alpha/2-1}, \\&|\partial_{v_iv_j} \phi| \lesssim |v|^2\left(|v|^2+|x|^2+1 \right)^{-\alpha/2-2}, \\&|\partial_{x_iv_j} \phi| \lesssim |x||v|\left(|v|^2+|x|^2+1 \right)^{-\alpha/2-2}. \end{split} \end{equation} Define \begin{equation}\label{e.psi_twist} F = \phi \tilde F. \end{equation} Note that this is not the same function $F$ as in the proof of Proposition \ref{p:cont-match}. After a straightforward computation, we find \begin{equation}\label{e.F_equation} \begin{split} \partial_t F + v\cdot \nabla_x F = \frac{v\cdot \nabla_x \phi}{\phi} F + \phi \mathcal{L} \tilde F + r^{2s}\phi c_r f_r. \end{split} \end{equation} The goal is now to estimate $F$ from above. \noindent{\em Step 2: An upper barrier for $F$.} Fix \begin{equation}\label{e.R} R = \frac {\langle v_1 \rangle} {2r}. \end{equation} For $C_0$ to be determined, let \begin{equation}\label{e.overline F} \begin{split} \overline F(t) = 2 e^{t C_0 \langle v_1\rangle^{(\gamma+2s)_+}} \Big( &\|F(0,\cdot,\cdot)\|_{L^\infty(B_R\times B_R)} + \sup_{t'\in[0,t_2], \max\{|x|, |v|\} = R} F(t',x,v)_+ \\&+ \frac 1 2 r^{2s} t \Lambda \langle 2v_1\rangle^{\gamma_+}\|f\|_{L^\infty([0,T]\times \mathbb R^3\times B_{rR}(v_1))} \Big), \end{split} \end{equation} where $\Lambda$ is the constant from \eqref{e.K_c_assumptions}. Our goal is to show that $F \leq \overline F$ on $[0,t_2]\times B_R \times B_R$. Notice that, by construction, \[ \sup_{(x,v)\in \overline B_R \times \overline B_R} F(0,x,v) < \overline F(0).
\] Hence, if $F \leq \overline F$ does not hold everywhere, we can take the first crossing time $t_{\rm cr}$, at which $\sup_{(x,v)} F(t_{\rm cr},x,v) = \overline F(t_{\rm cr})$. By the assumed uniform continuity of $f$ in time, we immediately see that $t_{\rm cr}>0$. On the $(x,v)$ boundary, that is, when $\max\{|x|,|v|\} = R$, one has $F < \overline F$ by construction. Hence, any crossing point must occur in the interior, and we can find $(x_{\rm cr},v_{\rm cr}) \in B_R(0) \times B_R(0)$ such that \begin{equation}\label{e.c6241} F(t_{\rm cr},x_{\rm cr},v_{\rm cr}) = \overline F(t_{\rm cr}). \end{equation} Using that $(t_{\rm cr},x_{\rm cr},v_{\rm cr})$ is the first crossing point, we find the following: \begin{equation}\label{e.c6242} \begin{split} \partial_t F(t_{\rm cr},x_{\rm cr},v_{\rm cr}) &\geq \partial_t \overline F(t_{\rm cr}),\\ \nabla_{x,v} F(t_{\rm cr},x_{\rm cr},v_{\rm cr}) &= 0,\\ F(t_{\rm cr},x_{\rm cr},v_{\rm cr}) &\geq F(t_{\rm cr},x,v) ~~\text{for all }(x,v)\in\mathbb R^3\times B_R,\\ \mathcal{L} F(t_{\rm cr},x_{\rm cr},v_{\rm cr}) &\leq 0. \end{split} \end{equation} These facts imply, at the point $(t_{\rm cr}, x_{\rm cr},v_{\rm cr})$, \[ \begin{split} \partial_t F &+ v\cdot \nabla_x F \geq \partial_t \overline F \\&= C_0 \langle v_1\rangle^{(\gamma+2s)_+} \overline F + e^{t_{\rm cr} C_0 \langle v_1\rangle^{(\gamma+2s)_+}} r^{2s} \Lambda \langle 2v_1\rangle^{\gamma_+}\|f\|_{L^\infty([0,T]\times \mathbb R^3\times B_{rR}(v_1))}\\ &\geq C_0 \langle v_1\rangle^{(\gamma+2s)_+} F + r^{2s} \Lambda \langle 2v_1\rangle^{\gamma_+}\|f\|_{L^\infty([0,T]\times \mathbb R^3\times B_{rR}(v_1))}. \end{split} \] Above we used~\eqref{e.K_c_assumptions} and the smallness assumption on $t_2$.
This, combined with the equation \eqref{e.F_equation} satisfied by $F$, yields \[ C_0 \langle v_1\rangle^{(\gamma+2s)_+} F + r^{2s}\Lambda \langle 2v_1\rangle^{\gamma_+} \|f\|_{L^\infty([0,T]\times \mathbb R^3\times B_{rR}(v_1))} \leq \frac{v_{\rm cr} \cdot \nabla_x \phi}{\phi} F + \phi \mathcal{L} \tilde F + r^{2s} \phi c_r f_r. \] By~\eqref{e.cutoff} and Young's inequality, we see that $|(v\cdot\nabla_x \phi) / \phi|$ is bounded uniformly. Also,~\eqref{e.K_c_assumptions}, the fact that $|v_{\rm cr}|\leq R$, and the definition of $f_r$ imply that \begin{equation} r^{2s} \phi c_r f_r \leq r^{2s}\Lambda \langle 2v_1\rangle^{\gamma_+} \|f\|_{L^\infty([0,T]\times \mathbb R^3\times B_{rR}(v_1))}. \end{equation} Therefore, we will reach a contradiction if we can show that, at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$, \begin{equation}\label{e.holder1} \phi \mathcal{L} \tilde F < C_0\langle v_1 \rangle^{(\gamma+2s)_+} F. \end{equation} Once~\eqref{e.holder1} is established, we can conclude that $F\leq \overline F$ on $[0,t_2]\times B_R\times B_R$. To keep the proof clean, we adopt the notation $h' = h(t_{\rm cr},x_{\rm cr},v')$ and $h_{\rm cr} = h(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$ for any function $h$. Recall from~\eqref{e.c6242} that $F' \leq F_{\rm cr}$. Then, since $F = \phi \tilde F$, we have \begin{equation}\label{e.max-trick} \begin{split} \phi_{\rm cr} \mathcal{L} \tilde F_{\rm cr} = \int_{\mathbb R^3} \phi_{\rm cr} (\tilde F' - \tilde F_{\rm cr}) \mathcal{K} \, \mathrm{d} v' &= \int_{\mathbb R^3} \phi_{\rm cr}\left( \frac{F'}{\phi'} - \frac{F_{\rm cr}}{\phi_{\rm cr}}\right) \mathcal{K} \, \mathrm{d} v'\\ &\leq \int_{\mathbb R^3} \phi_{\rm cr} F_{\rm cr} \left( \frac{1}{\phi'} - \frac{1}{\phi_{\rm cr}}\right) \mathcal{K} \, \mathrm{d} v' = I.
\end{split} \end{equation} Fix $L = \langle v_{\rm cr} \rangle / 2$ and decompose \begin{equation}\label{e.c6243} I = \int_{B_L(v_{\rm cr})} \phi_{\rm cr} F_{\rm cr} \left( \frac{1}{\phi'} - \frac{1}{\phi_{\rm cr}}\right) \mathcal{K} \, \mathrm{d} v' + \int_{B_L(v_{\rm cr})^c} \phi_{\rm cr} F_{\rm cr} \left( \frac{1}{\phi'} - \frac{1}{\phi_{\rm cr}}\right) \mathcal{K} \, \mathrm{d} v' =: I_1 + I_2. \end{equation} Consider the first term $I_1$. Expanding $1/\phi'$ to second order in the $v'$ variable, and noting that the first-order term vanishes because of the symmetry of the kernel $\mathcal{K}$, we have \begin{equation}\label{e.c6254} \begin{split} I_1 &\leq \frac{1}{2}\phi_{\rm cr} F_{\rm cr} \sup_{v' \in B_L(v_{\rm cr})} |D_v^2(1/\phi')| \int_{B_L(v_{\rm cr})} |v'-v_{\rm cr}|^2 \mathcal{K} \, \mathrm{d} v'. \end{split} \end{equation} Now, using the properties~\eqref{e.cutoff} of $\phi$, as well as the upper bound~\eqref{e.cK1} for $\mathcal{K}$, we find \begin{equation}\label{e.c6251} I_1 \lesssim F_{\rm cr} (|v_{\rm cr}|^2+|x_{\rm cr}|^2+1)^{-\alpha/2} \langle rv_{\rm cr} + v_1\rangle^{\gamma+2s} L^{2-2s} \sup_{v' \in B_L(v_{\rm cr})}\left( |v'|^2 + |x_{\rm cr}|^2 + 1 \right)^{\alpha/2-2}. \end{equation} First, notice that, since $|v_{\rm cr}|<R = \langle v_1\rangle/(2r)$, we have $\langle rv_{\rm cr} + v_1\rangle \approx \langle v_1 \rangle$. Next, by the choice of $L$, we have $\langle v'\rangle \approx \langle v_{\rm cr}\rangle$ for $v' \in B_L(v_{\rm cr})$. Therefore, \eqref{e.c6251} becomes (up to increasing $C_0$) \begin{equation}\label{e.c6253} I_1 \lesssim \frac{F_{\rm cr} \langle v_1\rangle^{\gamma + 2s} \langle v_{\rm cr}\rangle^{2-2s}}{(|v_{\rm cr}|^2 + |x_{\rm cr}|^2 + 1)^2} < \frac{C_0}{2} F_{\rm cr} \langle v_1\rangle^{(\gamma+2s)_+}, \end{equation} as desired. We now turn to the second term $I_2$ in~\eqref{e.c6243}.
Since the $-1/\phi_{\rm cr}$ term in the integrand has a good sign, we immediately see that \[ I_2 \leq \phi_{\rm cr} F_{\rm cr} \int_{B_L(v_{\rm cr})^c} \frac{1}{\phi'} \mathcal{K} \, \mathrm{d} v'. \] Using the asymptotics~\eqref{e.cutoff} of $\phi$, we find \[ I_2 \lesssim \phi_{\rm cr} F_{\rm cr} \int_{B_L(v_{\rm cr})^c} (|v'|^2+|x_{\rm cr}|^2+1)^{\alpha/2} \mathcal{K} \, \mathrm{d} v' = \phi_{\rm cr} F_{\rm cr} \sum_{k=0}^\infty \int_{A_{k,L}} (|v'|^2+|x_{\rm cr}|^2+1)^{\alpha/2} \mathcal{K} \, \mathrm{d} v', \] where we define \[ A_{k,L} = B_{2^{k+1}L}(v_{\rm cr}) \setminus B_{2^kL}(v_{\rm cr}). \] On the annulus $A_{k,L}$, we have $|v'| \lesssim | v_{\rm cr}| + 2^{k+1}L$, so that $(|v'|^2+|x_{\rm cr}|^2+1)^{\alpha/2} \lesssim \langle v_{\rm cr} \rangle^\alpha + \langle x_{\rm cr} \rangle^\alpha + 2^{\alpha k} L^\alpha$. Using the bound~\eqref{e.cK2} for $\mathcal{K}$ and the fact that $\alpha < s$ yields \begin{equation}\label{e.c6255} \begin{split} I_2 &\lesssim \phi_{\rm cr} F_{\rm cr} \sum_{k=0}^\infty \left( \langle v_{\rm cr} \rangle^\alpha + \langle x_{\rm cr} \rangle^\alpha + 2^{\alpha k}L^\alpha \right) \int_{A_{k,L}} \mathcal{K} \, \mathrm{d} v' \\ &\lesssim \phi_{\rm cr} F_{\rm cr} \sum_{k=0}^\infty \left( \langle v_{\rm cr} \rangle^\alpha + \langle x_{\rm cr} \rangle^\alpha + 2^{\alpha k}L^\alpha \right) \langle r v_{\rm cr} + v_1 \rangle^{\gamma+2s} 2^{-2s k} L^{-2s}\\ &\lesssim \frac{\langle v_{\rm cr} \rangle^\alpha + \langle x_{\rm cr} \rangle^\alpha + L^{\alpha-2s}}{(|v_{\rm cr}|^2+|x_{\rm cr}|^2+1)^{\alpha/2}} F_{\rm cr} \langle v_1 \rangle^{\gamma+2s} \lesssim F_{\rm cr} \langle v_1 \rangle^{\gamma + 2s}, \end{split} \end{equation} where in the second-to-last step we used that $r|v_{\rm cr}| \leq rR \leq \langle v_1\rangle/2$, so that $\langle r v_{\rm cr} + v_1 \rangle \approx \langle v_1 \rangle$.
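Explicitly, the series appearing in~\eqref{e.c6255} are geometric: since $0<\alpha<2s$, \[ \sum_{k=0}^\infty 2^{-2sk} = \frac{1}{1-2^{-2s}} \quad\text{and}\quad \sum_{k=0}^\infty 2^{(\alpha-2s)k} = \frac{1}{1-2^{\alpha-2s}}, \] both of which are finite constants depending only on $\alpha$ and $s$; this is where the restriction $\alpha<2s$ enters.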
Using~\eqref{e.c6253} and~\eqref{e.c6255} in~\eqref{e.c6243}, we find \[ \phi \mathcal{L} \tilde F \leq I_1 + I_2 < C_0 F(t_{\rm cr}) \langle v_1 \rangle^{(\gamma+2s)_+}, \] which concludes the proof of~\eqref{e.holder1} and allows us to deduce the upper bound \begin{equation}\label{e.c62801} F \leq \overline F \qquad\text{ on } [0,t_2]\times B_R\times B_R. \end{equation} \noindent{\em Step 3: Quantitative bounds on $\overline F$.} We establish here the upper bound on $\overline F$: \begin{equation}\label{e.c6258} \overline F \lesssim r^\alpha \langle v_1\rangle^{-q} \left(\|f\|_{L_q^\infty([0,T]\times \mathbb R^6)} + [\langle v\rangle^q f(0,\cdot,\cdot)]_{C^{\alpha}_{\ell,x,v}(B_2(0,v_1))} \right) \quad \text{ in } [0,t_2]\times B_R\times B_R. \end{equation} To this end, fix any $(t,x,v)$ with $t \in [0,t_2]$. The first step is to notice that the exponential term in the definition~\eqref{e.overline F} of $\overline F$ can be bounded by $e^{C_0}$ since $t\leq t_2 \leq \langle v_1\rangle^{-(\gamma + 2s)_+}$. Similarly, we have $t\Lambda \langle 2v_1\rangle^{\gamma_+} \lesssim 1$. Using these two observations and the fact that $2s > \alpha$ yields \begin{equation}\label{e.c62503} \begin{split} \overline F(t) &\lesssim \|F(0,\cdot,\cdot)\|_{L^\infty(B_R\times B_R)} + \sup_{t'\in[0,t_2], \max\{|x'|, |v'|\} = R} F(t',x',v')_+ \\& \qquad + r^\alpha \|f\|_{L^\infty([0,T]\times\mathbb R^3\times B_{rR}(v_1))}. \end{split} \end{equation} We now bound the $F(0,\cdot,\cdot)$ term in~\eqref{e.c62503}. Fixing any $(x',v') \in B_R\times B_R$, we have \[ \begin{split} F(0,x',v') &= \phi(x',v') (f(z_1\circ\delta_r(0,x',v')) - f(0,0,v_1))\\ &= \phi(x',v') (f(0,r^{1+2s}x',rv'+v_1) - f(0,0,v_1)).
\end{split} \] If $r^{1+2s}|x'|, r|v'| \leq 1$, then, recalling the asymptotics of $\phi$ from~\eqref{e.cutoff} and the definition \eqref{e.dl} of $d_\ell$, we have \begin{equation}\label{e.c6259} \begin{split} |F(0,x',v')| &\lesssim (|v'|^2+|x'|^2+1)^{-\frac{\alpha}{2}} r^\alpha \left(\max\{|x'|^{\frac{1}{1+2s}}, |v'|\}\right)^\alpha \langle v_1\rangle^{-q}[\langle v\rangle^q f(0,\cdot,\cdot)]_{C^{\alpha}_{\ell,x,v}(B_1(0,v_1))} \\&\leq r^\alpha \langle v_1\rangle^{-q} [\langle v\rangle^q f(0,\cdot,\cdot)]_{C^{\alpha}_{\ell,x,v}(B_1(0,v_1))}. \end{split} \end{equation} On the other hand, if either $r^{1+2s}|x'| \geq 1$ or $r|v'| \geq 1$, we find \begin{equation}\label{e.c62501} \begin{split} |F(0,x',v')| &\lesssim (|v'|^2+|x'|^2+1)^{-\alpha/2} \langle v_1\rangle^{-q}\|f\|_{L_q^\infty([0,T]\times B_{rR}(0)\times B_{rR}(v_1))} \\& \leq r^\alpha \langle v_1\rangle^{-q}\|f\|_{L^\infty_q([0,T]\times\mathbb R^6)}, \end{split} \end{equation} since $r^{\alpha(1+2s)} \leq r^\alpha$ and since $\langle w\rangle \approx \langle v_1\rangle$ for $w \in B_{rR}(v_1)$. Combining~\eqref{e.c6259} and~\eqref{e.c62501}, we obtain \begin{equation}\label{e.c62502} |F(0,x',v')| \lesssim r^\alpha \langle v_1\rangle^{-q}\left([\langle v\rangle^q f(0,\cdot,\cdot)]_{C^{\alpha}_{\ell,x,v}(B_1(x_1,v_1))} + \|f\|_{L_q^\infty([0,T]\times\mathbb R^6)}\right), \end{equation} as desired. Next, we turn to the middle term on the right-hand side of~\eqref{e.c62503}. Fix any $(t',x',v') \in [0,t_2]\times \overline B_R \times \overline B_R$ with $\max\{|x'|,|v'|\} = R$. Again recalling the properties~\eqref{e.cutoff} of $\phi$ and the definition~\eqref{e.psi_twist} of $F$, we have \[ \begin{split} |F(t',x',v')| &\lesssim (|v'|^2+|x'|^2+1)^{-\alpha/2} \|f\|_{L^\infty([0,T]\times\mathbb R^3\times B_{rR}(v_1))} \\& \lesssim \min \left\{ \langle v'\rangle^{-\alpha}, \langle x' \rangle^{-\alpha} \right\} \langle v_1\rangle^{-q}\|f\|_{L_q^\infty([0,T]\times\mathbb R^6)}.
\end{split} \] Recall from~\eqref{e.R} that $R = \langle v_1\rangle/(2r)$. Then, clearly, \begin{equation}\label{e.c62504} |F(t',x',v')| \lesssim r^{\alpha} \langle v_1\rangle^{-q} \|f\|_{L_q^\infty([0,T]\times\mathbb R^6)}. \end{equation} Combining~\eqref{e.c62503}, \eqref{e.c62502}, and~\eqref{e.c62504}, we find that \eqref{e.c6258} holds true. \noindent{\em Step 4: Conclusion of the proof.} Recalling that $|t_2|, |x_2|, |v_2| \leq 1$, we have $\phi(x_2,v_2) \approx 1$. From Step 2, we have $F(t_2,x_2,v_2) \leq \overline F(t_2)$. This yields \[ \begin{split} f(z_1 \circ \delta_r(z_2)) - f(z_1) = \frac{1}{\phi(x_2,v_2)} F(t_2,x_2,v_2) \lesssim \overline F(t_2). \end{split} \] Combining this with~\eqref{e.c6258}, we find \[ f(z_1 \circ \delta_r(z_2)) - f(z_1) \lesssim r^\alpha \langle v_1\rangle^{-q} \left([\langle v\rangle^{q} f(0,\cdot,\cdot)]_{C^{\alpha}_{\ell,x,v}(B_1(0,v_1))} + \|f\|_{L_q^\infty([0,T]\times\mathbb R^6)} \right). \] The same proof, with $-\overline F$ as a lower barrier for $F$, gives \[ f(z_1 \circ \delta_r(z_2)) - f(z_1) \gtrsim - r^\alpha \langle v_1\rangle^{-q} \left([\langle v\rangle^q f(0,\cdot,\cdot)]_{C^{\alpha}_{\ell,x,v}(B_1(0,v_1))} + \|f\|_{L_q^\infty([0,T]\times\mathbb R^6)} \right). \] We deduce \[ |f(z_1\circ\delta_r(z_2)) - f(z_1)| \lesssim r^\alpha \langle v_1\rangle^{-q}\left([\langle v\rangle^q f(0,\cdot,\cdot)]_{C^{\alpha}_{\ell,x,v}(B_1(0,v_1))} + \|f\|_{L_q^\infty([0,T]\times\mathbb R^6)} \right), \] which concludes the proof.
\end{proof} Now we are ready to use \Cref{lem:scaling_holder} to prove \Cref{prop:holder_in_t}: \begin{proof}[Proof of \Cref{prop:holder_in_t}] We will show that \begin{equation}\label{e.unweighted} \begin{split} &\|f\|_{C^{\alpha}_\ell(Q_1(z_0)\cap ([0,T]\times \mathbb R^6))} \\&\qquad \lesssim \langle v_0\rangle^{-q + \alpha \frac{(\gamma+2s)_+}{2s}} \Big(\|f\|_{L_q^\infty([0,T]\times \mathbb R^6)} + \sup_{t \in [(t_0-1)_+,t_0]} [\langle v\rangle^q f(t,\cdot,\cdot)]_{C_{\ell, x,v}^{\alpha}(B_2(x_0,v_0))}\Big). \end{split} \end{equation} The conclusion of the proposition follows from \eqref{e.unweighted} in a straightforward way. Fix any $z_1$ and $z_2$ in $Q_1(z_0)\cap([0,T]\times\mathbb R^6)$, and assume without loss of generality that $t_2\geq t_1$. If $d_\ell(z_2, z_1) \geq \frac 1 2 \langle v_1\rangle^{-(\gamma+2s)_+/(2s)}$, then we simply have \[ \frac{|f(z_2) - f(z_1)|}{d_\ell(z_1,z_2)^{\alpha}} \lesssim \langle v_0\rangle^{\frac{(\gamma+2s)_+}{2s}\alpha} \langle v_0\rangle^{-q}\| f\|_{L_q^\infty([0,T]\times \mathbb R^6)}, \] since $\langle v_1\rangle \approx \langle v_0\rangle$. Therefore, for the rest of the proof, we assume that \begin{equation}\label{e.c1231} d_\ell(z_1,z_2) < \frac 1 2 \langle v_1 \rangle^{-\frac{(\gamma + 2s)_+}{2s}}. \end{equation} We set the notation \[ s_2 = \frac{t_2-t_1}{r^{2s}}, \quad w_2 = \frac{v_2-v_1}{r}, \quad\text{ and }\quad y_2 = \frac{x_2 - x_1 - r^{2s} s_2 v_1}{r^{1+2s}}, \] where $r \in (0,1]$ is to be chosen based on the two cases below. We immediately notice that \begin{equation}\label{e.c1233} z_2 = z_1 \circ \delta_r(s_2,y_2,w_2). \end{equation} The first, simpler case is when \begin{equation}\label{e.c1232} t_2-t_1 \leq \langle v_1 \rangle^{-(\gamma + 2s)_+} d_\ell(z_1,z_2)^{2s}. \end{equation} In this case, we let \[ r = 2d_\ell(z_1,z_2). \] Let us check that the hypotheses of Lemma \ref{lem:scaling_holder} are satisfied. From \eqref{e.c1231}, we have $r\leq 1$.
Also, \eqref{e.c1232} implies \[s_2 = \frac{t_2-t_1}{2^{2s}d_\ell(z_1,z_2)^{2s}} < \langle v_1\rangle^{-(2s+\gamma)_+}. \] From the definition \eqref{e.dl} of $d_\ell$, we have \[ |w_2| = \frac{|v_2-v_1|}{2d_\ell(z_1,z_2)} \leq 1, \qquad |y_2| = \frac{|x_2 - x_1 - (t_2-t_1)v_1|}{2^{1+2s}d_\ell(z_1,z_2)^{1+2s}} \leq 1.\] Therefore, we can apply~\Cref{lem:scaling_holder} to find \[ |f(z_2) - f(z_1)| \lesssim r^\alpha \langle v_0\rangle^{-q} \left(\|f\|_{L_q^\infty([0,T]\times \mathbb R^6)} + [\langle v\rangle^q f(t_1,\cdot,\cdot)]_{C^\alpha_{\ell,x,v}(B_2(x_0,v_0))}\right), \] using $B_1(x_1,v_1)\subset B_2(x_0,v_0)$. Since $r\approx d_\ell(z_1,z_2)$, this finishes the proof of \eqref{e.unweighted} in this case. Next, we consider the case where \begin{equation}\label{e.c1234} t_2-t_1 > \langle v_1 \rangle^{-(\gamma + 2s)_+} d_\ell(z_1,z_2)^{2s}. \end{equation} In this case, we let \[ r = 2(t_2-t_1)^\frac{1}{2s} \langle v_1 \rangle^\frac{(\gamma+2s)_+}{2s}. \] Notice that $r > 2^{1/(2s)} d_\ell(z_1,z_2)$ by~\eqref{e.c1234}. Once again, we would like to apply Lemma \ref{lem:scaling_holder}. From \eqref{e.c1231} and the definition of $d_\ell$, we have \begin{equation} r \leq 2d_\ell(z_1,z_2) \langle v_1\rangle^{(\gamma+2s)_+/(2s)} \leq 1. \end{equation} We also have $s_2 < \langle v_1\rangle^{-(\gamma+2s)_+}$ by construction.
Using $r> 2^{1/(2s)} d_\ell(z_1,z_2)$ and the definition of $d_\ell$, we have \[ |w_2| < \frac{|v_2-v_1|}{2d_\ell(z_1,z_2)} \leq 1, \qquad |y_2| = \frac{|x_2 - x_1 - (t_2-t_1)v_1|}{r^{1+2s}} < \frac{|x_2 - x_1 - (t_2-t_1)v_1|}{2^{1+\frac{1}{2s}}d_\ell(z_1,z_2)^{1+2s}}\leq 1.\] Therefore, we can apply~\Cref{lem:scaling_holder} as above to find \[ |f(z_2) - f(z_1)| \lesssim r^\alpha \langle v_1\rangle^{-q} \left(\|f\|_{L_q^\infty([0,T]\times \mathbb R^6)} + [\langle v\rangle^{q} f(t_1,\cdot,\cdot)]_{C^\alpha_{\ell,x,v}(B_2(x_0,v_0))}\right), \] which concludes the proof of \eqref{e.unweighted}, since $r\lesssim \langle v_1 \rangle^{(1+\gamma/(2s))_+} d_\ell(z_1,z_2)$ and $\langle v_1 \rangle \approx \langle v_0\rangle$. \end{proof} \section{Propagation of H\"older regularity}\label{s:holder-xv} In this section and the next, we need to place extra assumptions on our initial data, as in the statement of Theorem \ref{t:uniqueness}. We recall here the lower bound condition \eqref{e.uniform-lower}: there are $\delta$, $r$, and $R>0$ such that \begin{equation*} \text{for each } x\in \mathbb R^3, \, \exists \, v_x\in B_R(0) \text{ such that } f_{\rm in} \geq \delta 1_{B_r(x,v_x)}. \end{equation*} With $f$ the solution with initial data $f_{\rm in}$ constructed in Theorem \ref{t:existence}, the self-generating lower bounds of Theorem \ref{t:lower-bounds} (in particular, estimate \eqref{e.small-t-lower}) then imply that \begin{equation}\label{e.f-lower} f(t,x,v) \geq \mu e^{-\eta|v|^2}, \quad (t,x,v) \in (0,T]\times\mathbb R^6, \end{equation} for some $\mu, \eta>0$ depending on $T$, $\delta$, $r$, $R$, $\|f\|_{L^\infty_q([0,T]\times \mathbb R^6)}$, and $q> 3 + \gamma + 2s$. The uniformity of these lower bounds in $t$ and $x$ will allow us to control the time-dependence of the constants when we apply Schauder estimates. We also assume that the initial data is H\"older continuous.
The main result of this section propagates this H\"older regularity to positive times: \begin{proposition}\label{prop:holder_propagation} Let $f:[0,T]\times\mathbb R^6\to \mathbb R$ be the solution to \eqref{e.boltzmann} constructed in Theorem \ref{t:existence}. Suppose that $f_{\rm in}$ satisfies \eqref{e.uniform-lower} for some $\delta$, $r$, and $R$, that $\langle v\rangle^{m} f_{\rm in} \in C^{\alpha(1+2s)}_\ell(\mathbb R^6)$ for some $\alpha\in (0,\min\{1,2s\})$, that $f_{\rm in} \in L^\infty_q(\mathbb R^6)$, and that $m$ and $q$ are sufficiently large, depending on $\gamma$, $s$, and $\alpha$. Then there exists $T_U \in (0,T]$ such that \begin{equation}\label{e.holder-prop-bound} \|\langle v\rangle^{m-\alpha(\gamma+2s)_+/(2s)} f\|_{C^\alpha_\ell([0,T_U]\times \mathbb R^6)} \leq C \|\langle v\rangle^{m} f_{\rm in}\|_{C^{\alpha(1+2s)}_{\ell,x,v}(\mathbb R^6)}. \end{equation} The constants $C$ and $T_U$ depend only on universal constants, $m$, $q$, $\alpha$, $\delta$, $r$, $R$, $[f_{\rm in}]_{C^\alpha_{\ell,q}}$, and $\|f\|_{L^{\infty}_q([0,T]\times\mathbb R^6)}$. \end{proposition} To control the H\"{o}lder continuity of $f$, we adapt an idea from a previous work on well-posedness for the Landau equation \cite{HST2020landau}, originally inspired by a method of \cite{constantin2015SQG} for obtaining regularity for the SQG equation. For $(t,x,v,\chi,\nu) \in \mathbb R_+ \times \mathbb R^3 \times \mathbb R^3 \times B_1(0)^2$ and $m \in \mathbb N$, define \begin{equation}\label{e.c010404} \begin{split} &\tau f(t,x,v,\chi,\nu) := f(t,x+\chi, v+\nu),\\ &\delta f (t,x,v,\chi,\nu) := \tau f(t,x,v,\chi,\nu) - f(t,x,v),\\ &g(t,x,v,\chi,\nu) := \frac{\delta f (t,x,v,\chi,\nu)}{(|\chi|^{2} + |\nu|^2)^{\alpha/2}} \langle v\rangle^{m}. \end{split} \end{equation} Note that, if $f \in C^\infty$, then $\lim_{(\chi,\nu)\rightarrow (0,0)} g(t,x,v,\chi,\nu)$ exists for every $(t,x,v)$.
The function $g$, defined by this limit on $\mathbb R_+ \times \mathbb R^3 \times \mathbb R^3 \times \{(0,0)\}$, is then $C^\infty$. By symmetry, the maximum and the minimum of $g$ have the same magnitude. Thus, the maximum of $g$ is equivalent to the $L^\infty_{x,v,\chi,\nu}$ norm of $g$, which is, in turn, equivalent to the weighted $C^\alpha_{x,v}$ seminorm of $f$, as recorded in the following elementary lemma: \begin{lemma}\label{lem:holder_char} Fix any $f : \mathbb R^3 \times \mathbb R^3 \to \mathbb R$ and let $g: \mathbb R^3 \times \mathbb R^3 \times B_1\times B_1 \to \mathbb R$ be defined by \[ g(x,v,\chi,\nu) = \langle v\rangle^m \frac{\delta f(x,v,\chi,\nu) }{(|\chi|^2+|\nu|^2)^{\alpha/2}}. \] Then \[ \max_{(x,v,\chi,\nu) \in \mathbb R^6\times B_1^2} g = \|g\|_{L^\infty(\mathbb R^6\times B_1^2)} \approx \|\langle v \rangle^m f\|_{C_{x,v}^\alpha(\mathbb R^6)} \approx \sup_{(x,v)\in\mathbb R^6} \langle v\rangle^{m} \|f\|_{C_{x,v}^\alpha(B_1(x,v))}, \] where the implied constants depend only on $m$ and $\alpha$. Here, $C_{x,v}^\alpha$ denotes the standard H\"older space on $\mathbb R^6$. \end{lemma} We emphasize that $g$ measures the H\"older continuity of $f$ in the Euclidean metric of $\mathbb R^6$, rather than in the metric $d_\ell$ that matches the scaling of the equation. This choice is imposed on us by the proof: see the term $\chi\cdot \nu/(|\chi|^2 + |\nu|^2)\, g$ in \eqref{e.c010401} below. This term needs to be uniformly bounded, which would not be the case if the displacements $\chi$ and $\nu$ were given the natural exponents according to the kinetic scaling.
It is straightforward to show that the two H\"older norms control each other, although with a loss of exponent: for any suitable function $h$ and domain $\Omega\subset\mathbb R^6$, \begin{equation}\label{e.holder-compare} \|h\|_{C^\frac{\alpha}{1+2s}_{\ell,x,v}(\Omega)} \lesssim \|h\|_{C_{x,v}^\frac{\alpha}{1+2s}(\Omega)} \lesssim \|h\|_{C^\alpha_{\ell,x,v}(\Omega)}, \end{equation} where the $C^\alpha_{\ell,x,v}$ norm has been defined in Section \ref{s:holder-spaces}. Our strategy to prove Proposition \ref{prop:holder_propagation} is to bound $g$ from above using a barrier argument. The defining equation for the barrier $\overline G$ will correspond to the estimates that are available for $g$ at a first crossing point, so that we can derive a contradiction at that point. Therefore, we present the upper bounds for $g$ in the following key lemma before explaining the barrier argument. \begin{lemma}\label{lem:holder_prop} Let $f$, $\alpha$, $m$, and $q$ be as in Proposition~\ref{prop:holder_propagation}, and let $g$ be defined as in \eqref{e.c010404}. If, for some $t_{\rm cr}>0$, $g$ has a global maximum over $[0,t_{\rm cr}]\times \mathbb R^6\times B_1(0)^2$ at $(t_{\rm cr}, x_{\rm cr},v_{\rm cr}, \chi_{\rm cr}, \nu_{\rm cr})$, then \begin{equation} \label{e.g-ineq} \partial_t g \leq C \left(g + t_{\rm cr}^{\mu(\alpha,s)} g^{\theta(\alpha,s)}\right) \qquad\text{ at } (t_{\rm cr},x_{\rm cr}, v_{\rm cr}, \chi_{\rm cr},\nu_{\rm cr}), \end{equation} where $\alpha' = \alpha \frac{2s}{1+2s}$, \[ \begin{split} \mu(\alpha,s) &:= \left( - 1 + \frac{\alpha-\alpha'}{2s}\right)\left( \frac{2s+\alpha'/2}{2s+\alpha'}\right) \in (-1,0),\\ \theta(\alpha,s) &:= 1 + \left(1+\frac{\alpha+2s}{\alpha'}\right) \left( \frac{2s+\alpha'/2}{2s+\alpha'}\right), \end{split} \] and the constant $C>0$ depends on universal constants, $\alpha$, $q$, $m$, $\delta$, $r$, $R$, and $\|f\|_{L^\infty_q([0,T]\times\mathbb R^6)}$.
\end{lemma} Before proving Lemma~\ref{lem:holder_prop}, we show how to use it to conclude Proposition~\ref{prop:holder_propagation}. \begin{proof}[Proof of Proposition~\ref{prop:holder_propagation}] We begin by noting that we can assume, without loss of generality, that \begin{equation}\label{e.c12231} \|\langle v\rangle^{q'} D^k f\|_{L^\infty([0,T]\times\mathbb R^6)} < \infty \qquad\text{ for every $q'$ and multi-index $k$ sufficiently large.} \end{equation} Indeed, if not, we may use the approximating sequence $f^\varepsilon$ from the proof of Theorem \ref{t:existence}, which is sufficiently smooth, and the bound \eqref{e.holder-prop-bound}, which does not depend quantitatively on norms of order higher than $\alpha$, is inherited by $f$ in the limit. First, we claim that, with $f$, $m$, $\alpha$, and $T_U \in (0,T]$ as in the statement of the proposition, \begin{equation}\label{e.holder-prop-bound-xv} \|\langle v\rangle^{m} f\|_{L^\infty_t([0,T_U], C^\alpha(\mathbb R^6))} \leq 2 \|\langle v\rangle^{m} f_{\rm in}\|_{C^\alpha(\mathbb R^6)}. \end{equation} To prove \eqref{e.holder-prop-bound-xv}, we use the function $g$ defined in \eqref{e.c010404} and construct a barrier $\overline G$, on a small time interval, that controls $g$ from above. With $N > 0$ to be chosen later, define $\overline G$ to be the unique solution to \begin{equation}\label{e.g_supersoln} \begin{cases} \frac{d}{dt} \overline G(t) = N \left(\overline G + t^{\mu(\alpha,s)}\overline G(t)^{\theta(\alpha,s)}\right),\\ \overline G(0) = 1 + \|g(0,\cdot)\|_{L^\infty (\mathbb R^6 \times B_1(0)^2)} + N \| f \|_{L^\infty_q([0,T] \times \mathbb R^6)}, \end{cases} \end{equation} where $\mu(\alpha,s)$ and $\theta(\alpha,s)$ are as in Lemma \ref{lem:holder_prop}. This solution $\overline G$ exists on some time interval $[0,T_G]$, with $T_G$ depending only on $\alpha$, $s$, $N$, $\|g(0,\cdot)\|_{L^\infty}$, and $\| f \|_{L^\infty_q}$.
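For intuition, here is a sketch (not part of the argument above) of why $\overline G$ exists on a nontrivial interval despite the singular factor $t^{\mu(\alpha,s)}$: the key point is that $\mu(\alpha,s) > -1$, so $t^{\mu}$ is integrable at $t=0$. Writing $\mu = \mu(\alpha,s)$ and $\theta = \theta(\alpha,s)$ and keeping only the dominant power nonlinearity, separation of variables gives:

```latex
% Sketch: local solvability of G' = N t^mu G^theta with mu > -1, theta > 1.
% Separating variables and integrating from 0 to t:
\[
  \frac{d}{dt}\, \overline G^{\,1-\theta} = (1-\theta)\, N\, t^{\mu}
  \quad\Longrightarrow\quad
  \overline G(t)^{1-\theta}
    = \overline G(0)^{1-\theta} - N(\theta-1)\,\frac{t^{1+\mu}}{1+\mu}.
\]
% Since 1 + mu > 0, the right-hand side remains positive (so G stays finite) up to
\[
  T_G \sim \left( \frac{(1+\mu)\,\overline G(0)^{1-\theta}}{N(\theta-1)} \right)^{\frac{1}{1+\mu}},
\]
% which depends only on alpha, s, N, and G(0), consistent with the claim above.
```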
Later, we will choose $N$ depending only on $\| f \|_{L^\infty_q}$. Our goal is to show that $g(t,x,v,\chi,\nu) < \overline G(t)$ for all $t\in [0,T_G]$. Let $t_{\rm cr}$ be the first time at which $\|g\|_{L^\infty([0,t_{\rm cr}]\times \mathbb R^3 \times \mathbb R^3 \times B_1(0)^2)} = \overline G(t_{\rm cr})$. It is clear from \eqref{e.c12231} that $t_{\rm cr} >0$. We seek a contradiction at $t=t_{\rm cr}$. Next, we claim that we may assume the existence of a point $(x_{\rm cr},v_{\rm cr},\chi_{\rm cr},\nu_{\rm cr}) \in \mathbb R^3\times\mathbb R^3\times \overline B_1(0)^2$ such that \begin{equation}\label{e.c12233} g(t_{\rm cr},x_{\rm cr},v_{\rm cr}, \chi_{\rm cr}, \nu_{\rm cr}) = \|g(t_{\rm cr},\cdot,\cdot,\cdot,\cdot)\|_{L^\infty} = \overline G(t_{\rm cr}). \end{equation} Indeed, if not, we may take any sequence $(t_{\rm cr}, x_n, v_n, \chi_n, \nu_n)$ such that \begin{equation} g(t_{\rm cr},x_n,v_n, \chi_n, \nu_n) \to \|g(t_{\rm cr},\cdot,\cdot,\cdot,\cdot)\|_{L^\infty} = \overline G(t_{\rm cr}). \end{equation} Then define \begin{equation} f_n(t,x,v,\chi,\nu) = f(t, x+x_n, v, \chi, \nu) \quad\text{ and }\quad g_n(t,x,v,\chi,\nu) = g(t, x+x_n, v, \chi, \nu). \end{equation} Due to the fast decay of $f$ as $|v|\to\infty$ and its smoothness, see~\eqref{e.c12231}, it follows that, up to passing to a subsequence, there exist $\tilde f$ and $\tilde g$ such that $f_n \to \tilde f$ and $g_n \to \tilde g$ in $C^{2+\alpha}_{\ell}$ locally uniformly. Using again the fast decay and smoothness of $f$, it follows that $|v_n|$ is bounded, in which case, up to taking a subsequence, $v_n \to v_{\rm cr}$ for some $v_{\rm cr}\in \mathbb R^3$. Similarly, $(\chi_n, \nu_n) \to (\chi_{\rm cr},\nu_{\rm cr}) \in \overline B_1(0)^2$. It follows that \begin{equation}\label{e.c12232} \tilde g(t_{\rm cr}, 0, v_{\rm cr}, \chi_{\rm cr}, \nu_{\rm cr}) = \lim_{n\to\infty} g_n(t_{\rm cr}, 0, v_n, \chi_n, \nu_n) = \overline G(t_{\rm cr}).
\end{equation} Notice that $\tilde f$ inherits all of the same (global) bounds as $f$ and satisfies the Boltzmann equation~\eqref{e.boltzmann}, by the locally uniform convergence in $C^{2+\alpha}_\ell$. Therefore, without loss of generality, a crossing point exists as in \eqref{e.c12233}. Now, we show that, up to increasing $N$ if necessary, $(\chi_{\rm cr},\nu_{\rm cr}) \in B_1(0)^2$; that is, $\chi_{\rm cr}$ and $\nu_{\rm cr}$ lie in the interior of $B_1(0)$. Indeed, if $(\chi_{\rm cr},\nu_{\rm cr})$ were located on the boundary of $B_1(0)^2$, then a direct calculation using \eqref{e.c010404} and $|\chi_{\rm cr}|^2 + |\nu_{\rm cr}|^2 \geq 1$ shows \[ g(t_{\rm cr}, x_{\rm cr}, v_{\rm cr}, \chi_{\rm cr}, \nu_{\rm cr}) \leq \|f\|_{L^\infty_m} < \overline G(t_{\rm cr}), \] which contradicts~\eqref{e.c12233}. It follows that $(\chi_{\rm cr},\nu_{\rm cr})\in B_1(0)^2$. From Lemma~\ref{lem:holder_prop}, we find, at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr}, \chi_{\rm cr}, \nu_{\rm cr})$, \[ \partial_t g \leq C\left( g + t_0^{\mu(\alpha,s)} g^{\theta(\alpha,s)}\right). \] Since $g = \overline G$ at this point, we have \[ \partial_t g \leq C\left( \overline G + t_0^{\mu(\alpha,s)} \overline G^{\theta(\alpha,s)}\right). \] Hence, by increasing $N$ if necessary, we have, due to~\eqref{e.g_supersoln}, \begin{equation}\label{e.c12234} \partial_t g < \frac{d\overline G}{dt}. \end{equation} On the other hand, since $\overline G - g$ has a minimum at $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr}, \chi_{\rm cr}, \nu_{\rm cr})$, one has \begin{equation}\label{e.c12235} \partial_t( \overline G - g) \leq 0. \end{equation} The inequalities~\eqref{e.c12234} and~\eqref{e.c12235} contradict each other, which implies that the time $t_{\rm cr}$ does not exist. This establishes that $g< \overline G$ on $[0,T_G]$. The inequality \eqref{e.holder-prop-bound-xv} therefore holds up to a time $T_U = \min\{T, T_G\}$, which implies the existence of $T_U$ as in the statement of the proposition.
Since the statement we want to prove is in terms of the H\"older norms $C^\alpha_\ell$ with kinetic scaling, we apply \eqref{e.holder-compare} to both sides of \eqref{e.holder-prop-bound-xv} and obtain \[ \|\langle v\rangle^{m} f\|_{L^\infty_t([0,T_U], C^\alpha_{\ell,x,v}(\mathbb R^6))} \lesssim \|\langle v\rangle^m f_{\rm in}\|_{C^{\alpha(1+2s)}_{\ell,x,v}(\mathbb R^6)}. \] Finally, we apply Proposition \ref{prop:holder_in_t} to conclude \eqref{e.holder-prop-bound}, and the proof is complete. \end{proof} We now prove the key estimate of Lemma~\ref{lem:holder_prop}. \begin{proof}[Proof of Lemma \ref{lem:holder_prop}] We proceed in several steps. First, we convert the classical derivative terms in the equation for $f$ into derivatives of $g$. Next, we obtain bounds on the collision operator using the bounds on $g$ and the Schauder estimates for $f$. This involves an intricate decomposition of the collision kernel, each portion of which we treat in a separate step. \noindent{}{\it Step 1: The equation for $g$.} Taking a finite difference of the Boltzmann equation \eqref{e.boltzmann} yields \begin{equation} \label{e.delta} \partial_t \delta f + v \cdot \nabla_x \delta f + \nu \cdot \nabla_\chi \delta f = \tau Q(f,f) - Q(f,f). \end{equation} Multiplying by $\langle v\rangle^{m} (|\chi|^2 + |\nu|^2)^{-\alpha/2}$ and commuting the derivative operators yields \begin{equation}\label{e.c010401} \partial_t g + v \cdot \nabla_x g + \nu \cdot \nabla_\chi g + \alpha \frac{\chi \cdot \nu}{|\chi|^2 + |\nu|^2} g = \frac{\langle v\rangle^m}{(|\chi|^2 + |\nu|^2)^\frac{\alpha}{2}} \left( \tau Q(f,f) - Q(f,f)\right). \end{equation} We next consider the right-hand side of~\eqref{e.c010401}.
Applying the Carleman decomposition $Q = Q_{\rm s} + Q_{\rm ns}$ as usual, we write \begin{equation}\label{e.320} \tau(Q(f,f)) - Q(f,f) = Q_\text{s}(\delta f, \tau f) + Q_\text{s}(f, \delta f) + Q_\text{ns}(\delta f, \tau f) + Q_\text{ns}(f, \delta f). \end{equation} Since $(t_{\rm cr}, x_{\rm cr},v_{\rm cr},\chi_{\rm cr},\nu_{\rm cr})$ is the location of an interior maximum, we have $\nabla_x g = \nabla_\chi g = 0$ at this point. Hence~\eqref{e.c010401} becomes \begin{equation}\label{e.c010402} \begin{split} \partial_t g &+ \alpha \frac{\chi \cdot \nu}{|\chi|^2 + |\nu|^2} g = \frac{\langle v\rangle^m}{(|\chi|^2 + |\nu|^2)^\frac{\alpha}{2}} \left( Q_\text{s}(\delta f, \tau f) + Q_\text{s}(f, \delta f) + Q_\text{ns}(\delta f, \tau f) + Q_\text{ns}(f, \delta f)\right). \end{split} \end{equation} Since $|\chi\cdot \nu|/(|\chi|^2 + |\nu|^2) \leq 1$, the conclusion of the lemma follows if we can find suitable upper bounds for the terms on the right-hand side of \eqref{e.c010402}. We estimate these four terms one by one. \noindent{}{\it Step 2: Bounding the $Q_{\rm s}(\delta f, \tau f)$ term in~\eqref{e.c010402}.} As usual, since $Q_{\rm s}$ acts only in the velocity variable, we omit the dependence on $(t_{\rm cr}, x_{\rm cr})$ in the following calculation. First, we note a useful fact used often in the sequel: since $\nu \in B_1(0)$, it follows that \begin{equation}\label{e.c010403} \tau f (v) \langle v\rangle^k \lesssim \| f \|_{L^\infty_k} \qquad\text{ for any $k\geq 0$}. \end{equation} Next, we recall that $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr}, \chi_{\rm cr}, \nu_{\rm cr})$ is, crucially, the location of a maximum of $g$. For ease of notation, we drop the `cr' subscript and simply refer to the point as $(t,x,v,\chi,\nu)$. We set \begin{equation} J_1 = \frac{\langle v\rangle^m}{(|\chi|^2 + |\nu|^2)^{\alpha/2}} Q_{\rm s}(\delta f, \tau f).
\end{equation} Using \eqref{e.kernel}, this can be rewritten as \[ \begin{split} J_1 &= \int_{\mathbb R^3} (\tau f(v) - \tau f(v')) \frac{\langle v\rangle^m K_{\delta f}(v,v')}{(|\chi|^2 + |\nu|^2)^{\alpha/2}} \, \mathrm{d} v' = \int_{\mathbb R^3} (\tau f(v) - \tau f(v')) \tilde{K}_g (v,v') \, \mathrm{d} v', \end{split} \] where, recalling the definition~\eqref{e.c010404} of $g$, \[ \tilde{K}_g(v,v') = \frac{\langle v\rangle^m K_{\delta f}(v,v')}{(|\chi|^2 + |\nu|^2)^{\alpha/2}} \approx |v-v'|^{-3-2s} \int_{w \perp v-v'} g(v+w) \frac{\langle v\rangle^m}{\langle v+w \rangle^m} |w|^{\gamma+2s+1} \, \mathrm{d} w. \] Let us record some useful upper bounds for $\tilde K_g$: from Lemma \ref{l:K-upper-bound}, we have, for any $r>0$, \begin{equation} \label{e.K_annuli} \begin{split} \int_{B_{2r} \setminus B_r} &|\tilde{K}_g(v,v+z)|\, \mathrm{d} z \lesssim \frac{\langle v\rangle^m}{(|\chi|^2+|\nu|^2)^{\alpha/2}} \left( \int_{\mathbb R^3} \delta f(v+w) |w|^{\gamma+2s}\, \mathrm{d} w\right) r^{-2s}\\ &\lesssim r^{-2s} \int_{\mathbb R^3} \frac{ |g(v+w)| |w|^{\gamma+2s}\langle v\rangle^m}{ \langle v+w \rangle^m} \, \mathrm{d} w \leq C r^{-2s} \| g \|_{L^\infty}\langle v\rangle^{m+(\gamma+2s)_+}, \end{split} \end{equation} since $m> \gamma+2s+3$. Applying \eqref{e.K_annuli} over the infinite union of annuli $B_{2r}\setminus B_r$, $B_{4r}\setminus B_{2r}$, etc., we obtain \begin{equation} \label{e.K_tail} \int_{B_r^c} |\tilde{K}_g(v,v+z)| \, \mathrm{d} z \lesssim r^{-2s} \| g \|_{L^\infty} \langle v\rangle^{m+(\gamma+2s)_+}.
\end{equation} Finally, from \cite[Lemma 2.4]{henderson2021existence} (which holds for all ranges of $\gamma+2s$), we obtain the following pointwise upper bound: if $|v'| \leq |v|/3$, then \begin{equation} \label{e.K_near_zero} |\tilde K_g(v,v')| \lesssim \frac{\| g \|_{L^\infty}}{|v-v'|^{3+2s}} \langle v\rangle^{\gamma + 2s + 3} \lesssim \frac{\| g \|_{L^\infty}}{| v|^{3+2s}} \langle v\rangle^{\gamma + 2s + 3}. \end{equation} We require this estimate because we work in uniform spaces with weights in $v$, which means we sometimes encounter the quantity $\langle v\rangle / \langle v' \rangle$. This quantity is bounded, except when $|v'|$ is small compared to $|v|$, so we need the extra moment decay of \eqref{e.K_near_zero} to compensate in that case. The analysis now proceeds in two slightly different ways, based on two cases. \noindent{}{\it Case 1: $|v| \geq 1$.} Setting $R = 4 \langle v\rangle / 3$ and $r = |v|/3$, we write \[ \begin{split} J_1 &= \int_{B_R^c(v)} (\tau f(v) - \tau f(v')) \tilde K_g(v,v') \, \mathrm{d} v' + \int_{B_r(0)} (\tau f(v) - \tau f(v')) \tilde K_g(v,v') \, \mathrm{d} v' \\ &\quad\quad\quad\quad+ \int_{B_R(v) \setminus B_r(0)} (\tau f(v) - \tau f(v')) \tilde K_g(v,v') \, \mathrm{d} v' =: J_{1,1} + J_{1,2} + J_{1,3}. \end{split} \] Notice that the only term involving the singularity at $v=v'$ is $J_{1,3}$, due to the choice of $R$ and $r$.
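As an aside, the passage from the annulus bound \eqref{e.K_annuli} to the tail bound \eqref{e.K_tail} is the standard dyadic summation; spelled out for completeness (a routine computation, not part of the original argument):

```latex
% Dyadic decomposition of the complement B_r^c into annuli B_{2^{j+1} r} \ B_{2^j r}:
\[
  \int_{B_r^c} |\tilde K_g(v,v+z)| \,\mathrm{d} z
    = \sum_{j \geq 0} \int_{B_{2^{j+1} r} \setminus B_{2^{j} r}} |\tilde K_g(v,v+z)| \,\mathrm{d} z
    \lesssim \| g \|_{L^\infty}\, \langle v\rangle^{m+(\gamma+2s)_+} \sum_{j \geq 0} (2^{j} r)^{-2s},
\]
% and the geometric series converges because s > 0:
\[
  \sum_{j \geq 0} (2^{j} r)^{-2s} = \frac{r^{-2s}}{1 - 2^{-2s}} \approx r^{-2s}.
\]
```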
For the long-range term, using \eqref{e.K_tail} and the fact that $\langle v'\rangle\gtrsim \langle v\rangle$ when $v'\in B_R^c$, we have \[ \begin{split} J_{1,1} &\leq \int_{B_R^c(v)} \left(|\tau f(v)|\langle v\rangle^{m+(\gamma+2s)_+} + |\tau f(v')| \langle v'\rangle^{m+(\gamma+2s)_+}\right) \frac{|\tilde K_g(v,v')|}{\langle v\rangle^{m+(\gamma+2s)_+}} \, \mathrm{d} v'\\ &\lesssim \| f \|_{L^\infty_q} \int_{B_R^c(v)} \frac{|\tilde K_g(v,v')|}{\langle v\rangle^{m+(\gamma+2s)_+}} \, \mathrm{d} v' \lesssim \| f \|_{L^{\infty}_q} \langle v\rangle^{-2s} \| g \|_{L^\infty}. \end{split} \] In the second inequality, we used that $q > m + (\gamma + 2s)_+$, by assumption. The near-zero term is bounded using \eqref{e.K_near_zero} as follows: \[ \begin{split} J_{1,2} &\lesssim \int_{B_r(0)} \frac{|\tau f (v)| + |\tau f (v')|}{|v|^{3+2s}} \langle v\rangle^{\gamma + 2s + 3} \| g \|_{L^\infty} \, \mathrm{d} v' \\ &\lesssim \| g \|_{L^\infty} \| f \|_{L^\infty_q} \int_{B_r(0)}\left( \frac{\langle v\rangle^{\gamma+2s+3-q} + \langle v\rangle^{\gamma+2s+3} \langle v'\rangle^{-q}}{ |v|^{3+2s}}\right) \, \mathrm{d} v'. \end{split} \] Recalling that $|v| \geq 1$, so that $|v|\approx \langle v\rangle$, we then find \[ J_{1,2} \lesssim \|g\|_{L^\infty} \|f\|_{L^\infty_q}\int_{B_r(0)} \left(\langle v\rangle^{\gamma-q} + \langle v\rangle^\gamma \langle v'\rangle^{-q}\right) \, \mathrm{d} v' \lesssim \|g\|_{L^\infty} \|f\|_{L^\infty_q} (1+\langle v\rangle^\gamma),\] since $q>3$. The short-range term $J_{1,3}$ is the most difficult to bound because it contains the singularity at $v=v'$.
To handle this, we use the smoothness of $\tau f$, which follows from the Schauder estimate of Proposition \ref{p:nonlin-schauder}: with $p\geq 0$ to be chosen later, \begin{equation}\label{e.schauder1} \|f\|_{C^{2s+\alpha'}_{\ell,m-p}([t_0/2,t_0]\times\mathbb R^6)} \lesssim t_0^{-1+(\alpha-\alpha')/(2s)} \|f\|_{C^\alpha_{\ell,m-p+\kappa+\alpha/(1+2s)+\gamma}([0,t_0]\times\mathbb R^6)}^{1+(\alpha+2s)/\alpha'}. \end{equation} Since we need to bound $J_{1,3}$ in terms of $g$ (which corresponds to the weighted $C^\alpha$ norm of $f$ in the $(x,v)$-variables) rather than the $(t,x,v)$-H\"older norm, we combine \eqref{e.schauder1} with Proposition \ref{prop:holder_in_t} to write \begin{equation}\label{e.schauder-application} \begin{split} \|f\|_{C^{2s+\alpha'}_{\ell,m-p}([t_0/2,t_0]\times\mathbb R^6)} &\leq C t_0^{-1+(\alpha-\alpha')/(2s)} \sup_{t\in[0,t_0]} \left\|\langle v\rangle^{m} f(t)\right\|_{C^\alpha_{\ell,x,v}(\mathbb R^6)}^{1+(\alpha+2s)/\alpha'}\\ &\leq C t_0^{-1+(\alpha-\alpha')/(2s)} \|g\|_{L^\infty}^{1+(\alpha+2s)/\alpha'}, \end{split} \end{equation} using Lemma \ref{lem:holder_char} and \eqref{e.holder-compare}. Here, we have chosen \[ p :=\kappa + \frac{\alpha}{1+2s}+\gamma+\frac{\alpha}{2s}(\gamma+2s)_+. \] The decay exponent $m-p$ on the left in \eqref{e.schauder-application} is too weak for our estimates below.
To get around this, we use interpolation to trade regularity for decay: recalling that $\alpha' = \alpha \frac {2s}{1+2s}$, for \[q\geq m+((\gamma+2s)_+ + \alpha')\Big(2+\frac{4s}{\alpha'}\Big)+\Big(1+\frac{4s}{\alpha'}\Big)p,\] Lemma \ref{l:moment-interpolation} implies \begin{equation}\label{e.schauder-application2} \begin{split} \|f\|_{C^{2s+\alpha'/2}_{\ell,m+(\gamma+2s)_+ + \alpha'}([t_0/2,t_0]\times\mathbb R^6)} &\lesssim \|f\|_{C^{2s+\alpha'}_{\ell,m-p}([t_0/2,t_0]\times\mathbb R^6)}^{\frac{2s+\alpha'/2}{2s+\alpha'}}\|f\|_{L^\infty_{q}}^{\frac{\alpha'/2}{2s+\alpha'}} \\ &\lesssim \left(t_0^{-1+(\alpha-\alpha')/(2s)} \|g\|_{L^\infty}^{1+(\alpha+2s)/\alpha'}\right)^{\frac{2s+\alpha'/2}{2s+\alpha'}} \|f\|_{L^\infty_{q}}^{\frac{\alpha'/2}{2s+\alpha'}} \\ &\lesssim t_0^{\mu(\alpha,s)} \|g\|_{L^\infty}^{\eta(\alpha,s)}, \end{split} \end{equation} where \[ \begin{split} \mu(\alpha,s) &:= \left( - 1 + \frac{\alpha-\alpha'}{2s}\right)\left( \frac{2s+\alpha'/2}{2s+\alpha'}\right) \in (-1,0),\\ \eta(\alpha,s) &:= \left(1+\frac{\alpha+2s}{\alpha'}\right) \left( \frac{2s+\alpha'/2}{2s+\alpha'}\right), \end{split} \] and we have absorbed the dependence on $\|f\|_{L^\infty_{q}}$ into the implied constant. Now, we apply \eqref{e.schauder-application2} to the term $J_{1,3}$. The argument differs slightly based on whether $2s +\alpha'/2 > 1$ or not. We consider the former case, as it is more complicated. For $v'\in B_R(v)\setminus B_r(0)$, since $|v+\nu| \approx |v|$, estimate \eqref{e.schauder-application2} implies \begin{equation}\label{e.tau-est} \begin{split} &\frac{|\tau f (v') - \tau f (v) - (v-v')\cdot \nabla_v (\tau f)(v)|}{|v-v'|^{2s+\alpha'/2}} \langle v\rangle^{m+(\gamma+2s)_+ +\alpha'} \lesssim t_0^{\mu(\alpha,s)} \|g\|_{L^\infty}^{\eta(\alpha,s)}.
\end{split} \end{equation} Let $\mathcal{A}_j := B_{R/2^j}(v) \setminus B_{R/2^{j+1}}(v)$, with the modification that $\mathcal{A}_0 = (B_R(v) \setminus B_{R/2}(v)) \setminus B_r(0)$. We treat the cases $j=0$ and $j\geq 1$ separately. The simpler case is $j=0$, where the regularity of $f$ is not required because $v'$ is bounded away from $v$. In this case, using that $\langle v'\rangle \approx \langle v\rangle$ and the inequality~\eqref{e.K_annuli}, we find \begin{equation} \begin{split} \int_{\mathcal{A}_0} &|\tau f(v) - \tau f(v')| |\tilde K_g(v,v')| \, \mathrm{d} v' \lesssim \langle v\rangle^{-q} \int_{\mathcal{A}_0} |\tilde K_g(v,v')| \, \mathrm{d} v' \\& \lesssim \langle v\rangle^{-q} \int_{B_R(v) \setminus B_{R/2}(v)} |\tilde K_g(v,v')| \, \mathrm{d} v' \lesssim \langle v\rangle^{-q - 2s} \langle v\rangle^{m + (\gamma + 2s)_+} \|g\|_{L^\infty} \lesssim \|g\|_{L^\infty}. \end{split} \end{equation} Next we consider the case $j \geq 1$. Here we use the symmetry of $\tilde K_g(v,v')$ around $v$ (i.e., $\tilde K_g(v,v+h) = \tilde K_g(v,v-h)$), the regularity estimate~\eqref{e.tau-est}, and the fact that $\langle v'\rangle \approx \langle v\rangle$ to obtain \[ \begin{split} \sum_{j \geq 1} &\Big|\int_{\mathcal{A}_j} (\tau f(v') - \tau f(v)) \tilde K_g(v,v') \, \mathrm{d} v'\Big| \\ &= \sum_{j \geq 1} \Big|\int_{\mathcal{A}_j} (\tau f(v') - \tau f(v) - (v-v')\cdot \nabla_v (\tau f)(v) ) \tilde K_g(v,v') \, \mathrm{d} v'\Big| \\ &\lesssim t_0^{\mu(\alpha,s)}\|g\|_{L^\infty}^{\eta(\alpha,s)}\langle v\rangle^{-m-(\gamma+2s)_+-\alpha'}\sum_{j \geq 0} \int_{\mathcal{A}_j} |v-v'|^{2s+\alpha'/2} |\tilde K_g(v,v')| \, \mathrm{d} v' \\ &\lesssim t_0^{\mu(\alpha,s)}\|g\|_{L^\infty}^{\eta(\alpha,s)} \langle v\rangle^{-\alpha'} \sum_{j \geq 0} \frac{R^{2s+\alpha'/2}}{2^{j(2s+\alpha'/2)}}\| g \|_{L^\infty} \frac{2^{2s j}}{R^{2s}} \lesssim t_0^{\mu(\alpha,s)}\|g\|_{L^\infty}^{1+\eta(\alpha,s)}. \end{split} \] Putting together both estimates above, we find \begin{equation} J_{1,3} \lesssim \|g\|_{L^\infty} +
t_0^{\mu(\alpha,s)}\|g\|_{L^\infty}^{1 + \eta(\alpha,s)}. \end{equation} Putting the estimates of the $J_{1,i}$ together, we have that \begin{equation} \label{e.J1} J_1 \lesssim \|g\|_{L^\infty} + t_0^{\mu(\alpha,s)} \|g\|_{L^\infty}^{1+\eta(\alpha,s)}. \end{equation} \noindent{}{\it Case 2: $|v| < 1$.} This case is similar to Case 1, but simpler. It is necessary because the estimates we used for $J_{1,2}$ above relied on $|v|$ being bounded away from zero. In this case, however, we do not need the near-zero term at all. Instead, with $R = 4\langle v\rangle/3$ as above, we write \[ J_1 = \int_{B_R^c(v)} (\tau f(v) - \tau f(v')) \tilde K_g(v,v') \, \mathrm{d} v' + \int_{B_R(v)} (\tau f(v) - \tau f(v')) \tilde K_g(v,v') \, \mathrm{d} v' =: J_{1,1} + J_{1,4}, \] observing that the first term is indeed the same as $J_{1,1}$ in the previous case (and is bounded in the same way). The new term $J_{1,4}$ is bounded in the same way as $J_{1,3}$ in the previous case, now taking $\mathcal{A}_0 = B_R(v) \setminus B_{R/2}(v)$ and observing that, since $|v| < 1$, the smoothness estimate \eqref{e.tau-est} for $\tau f$ holds even when $v'$ is close to zero. Thus \eqref{e.J1} holds in both cases. \noindent{}{\it Step 3: Bounding the $Q_{\rm s}(f, \delta f)$ term in~\eqref{e.c010402}.} Define \[ J_2 := \frac {\langle v\rangle^m}{(|\chi|^2+|\nu|^2)^{\alpha/2}} Q_{\rm s}(f,\delta f) = \int_{\mathbb R^3} \left( g(v') \frac{\langle v\rangle^m}{\langle v' \rangle^m} - g(v) \right) K_f(v,v') \, \mathrm{d} v', \] where the second equality uses the definition of $g$.
Since $g(t_{\rm cr}, x_{\rm cr}, v_{\rm cr},\chi_{\rm cr}, \nu_{\rm cr}) = \overline G(t_{\rm cr}) > 0$ and this is the maximum of $g$ in all variables, we can proceed as in \eqref{e.max-trick} from the proof of Lemma \ref{lem:scaling_holder} to write $g(v') \leq g(v_{\rm cr})$, which yields \[ J_2 \leq \| g \|_{L^\infty} \int_{\mathbb R^3} \left( \frac{\langle v\rangle^m}{\langle v' \rangle^m} - 1 \right) K_f(v,v') \, \mathrm{d} v'. \] Defining $\phi(v') := \langle v\rangle^m / \langle v' \rangle^m - 1$, we have \[ J_2 \leq \|g\|_{L^\infty} \int_{\mathbb R^3} \phi(v') K_f(v,v') \, \mathrm{d} v'.\] Once again, the analysis is slightly different depending on the size of $v$. \noindent{}{\it Case 1: $|v| \geq 1$.} Define $R = |v|$ and $r = |v|/3$. Note that, for $v' \in B_R^c(0)$, one has $\phi(v') \leq 0$. Since we seek an upper bound, we can discard the integral over $B_R^c(0)$ from $J_2$. Setting $\mathcal{B} := (B_R(0)\setminus B_r(0)) \setminus B_r(v)$, we then split $J_2$ as \[ \begin{split} J_2 &\lesssim \| g \|_{L^\infty} \left( \int_{B_r(v)} \phi(v') K_f(v,v') \, \mathrm{d} v' + \int_{B_r(0)} \phi(v') K_f(v,v') \, \mathrm{d} v' + \int_{\mathcal{B}} \phi(v') K_f (v,v') \, \mathrm{d} v' \right)\\ &=: \| g \|_{L^\infty} \left( J_{2,1} + J_{2,2} + J_{2,3} \right). \end{split} \] For $J_{2,1}$, we Taylor expand $\phi$ to second order around $v' = v$ and note that (as in the proof of Lemma \ref{lem:scaling_holder}) the first-order term vanishes due to the symmetry of the kernel, since the domain of integration is a ball centered at $v$. This yields \[ J_{2,1} \lesssim \int_{B_r(v)} E(v,v') K_f (v,v') \, \mathrm{d} v', \] where $|E(v,v')| \lesssim |v-v'|^2 \langle v\rangle^m \sup_{w \in B_r(v)} \langle w \rangle^{-m-2} \lesssim |v-v'|^2 \langle v\rangle^{-2}$, since $\langle w\rangle\approx \langle v\rangle$ for $w \in B_r(v)$.
Using Lemma \ref{l:K-upper-bound-2} to bound $K_f$, we have \[ \begin{split} J_{2,1} &\lesssim \langle v\rangle^{-2} \int_{B_r(v)} |v-v'|^2 K_f(v,v') \, \mathrm{d} v'\\ &\lesssim \langle v\rangle^{-2} \left( \int_{\mathbb R^3} f(v+w)|w|^{\gamma+2s} \, \mathrm{d} w \right) r^{2-2s} \lesssim \| f \|_{L^{\infty}_q} \langle v\rangle^{(\gamma+2s)_+} \langle v\rangle^{-2s}\lesssim \|f\|_{L^\infty_q}, \end{split} \] since $q> \gamma+2s+3$. For $J_{2,2}$, within $B_r(0)$, we have no better estimate than $\phi(v') \leq \langle v\rangle^m$. However, we can use the pointwise upper bound of \cite[Lemma 2.4]{henderson2021existence}, and the fact that $|v| \geq 1$, to obtain \[ J_{2,2} \lesssim \int_{B_r(0)} \frac{\langle v\rangle^m}{|v-v'|^{3+2s}} \| f \|_{L^{\infty}_q} \langle v\rangle^{\gamma+2s+3-q} \, \mathrm{d} v' \lesssim \langle v\rangle^{3+m+\gamma-q} \| f \|_{L^{\infty}_q}, \] since $q\geq m+\gamma+3$. Lastly, if $v' \in \mathcal{B}$, then $\phi(v')$ is bounded by a constant independent of $v$. Since $\mathcal{B} \subset B_r^c(v)$, we use the tail estimate for $K_f$ (Lemma \ref{l:K-upper-bound-2}) to obtain \[ J_{2,3} \lesssim \int_{B_r^c(v)} K_f (v,v') \, \mathrm{d} v' \lesssim \| f \|_{L^{\infty}_q} \langle v\rangle^{-2s}. \] Combining the above estimates yields \begin{equation} \label{e.J2} J_2 \lesssim \| g \|_{L^\infty} \| f \|_{L^{\infty}_q}. \end{equation} \noindent{\it Case 2: $|v| < 1$.} As in Step 2 above, this case is less delicate than the large-$v$ case, but necessary, since the estimate used above for $J_{2,2}$ degenerates as $|v| \rightarrow 0$. In this case, we remark that $\phi$ is bounded uniformly, so we can use the splitting \begin{equation}\label{e.J2small} J_2 \lesssim \| g \|_{L^\infty} \left( \int_{B_1(v)} E(v,v') K_f(v,v') \, \mathrm{d} v' + \int_{B_1^c(v)} K_f(v,v') \, \mathrm{d} v' \right), \end{equation} with $E(v,v')$ defined as in Case 1.
Since $|E(v,v')| \lesssim |v-v'|^2$ on $B_1(v)$, the same calculation as for $J_{2,1}$ gives that the first term in \eqref{e.J2small} is bounded by $\| f \|_{L^{\infty}_q}$. The second term is also bounded by $\| f \|_{L^{\infty}_q}$ by Lemma \ref{l:K-upper-bound-2}, and we conclude that \eqref{e.J2} holds in this case as well. \noindent{}{\it Step 4: Bounding the $Q_{\rm ns}$ terms in~\eqref{e.c010402}.} The last two terms are more straightforward. Recalling that $\tau f(v) \langle v\rangle^q \lesssim \| f \|_{L^\infty_q}$, we have \[ \begin{split} \frac{\langle v\rangle^m}{(|\chi|^2+|\nu|^2)^{\alpha/2}} Q_{\rm ns}(\delta f, \tau f) &= c_b \tau f(v) \int_{\mathbb R^3} g(v+w) \frac{\langle v\rangle^m}{\langle v+w\rangle^m} |w|^\gamma \, \mathrm{d} w \\ &\lesssim \| g \|_{L^\infty}\| f \|_{L^\infty_q}\langle v\rangle^{m-q} \int_{\mathbb R^3} \frac{|w|^\gamma}{\langle v + w \rangle^m} \, \mathrm{d} w \lesssim \| g \|_{L^\infty} \| f \|_{L^\infty_q}, \end{split}\] since $m>\gamma+3$ and $q> m+\gamma$. We have used Lemma \ref{l:convolution} to estimate the convolution in the last line. Finally, \[ \begin{split} \frac{\langle v\rangle^m}{(|\chi|^2+|\nu|^2)^{\alpha/2}} Q_{\rm ns}(f,\delta f)&\approx g(v) \int_{\mathbb R^3} f(v+w) |w|^\gamma \, \mathrm{d} w\\ &\lesssim \| g \|_{L^\infty} \| f \|_{L^\infty_q} \int_{\mathbb R^3} \frac{|w|^\gamma}{\langle v+w \rangle^q} \, \mathrm{d} w \lesssim \| g \|_{L^\infty} \| f \|_{L^\infty_q}, \end{split}\] since $q>\gamma+3$. Combining our upper bounds for the four terms on the right in \eqref{e.c010402} and setting $\theta(\alpha,s) = 1+\eta(\alpha,s)$ completes the proof of the lemma. \end{proof} \section{Uniqueness}\label{s:uniqueness} In this section, we complete the proof of Theorem \ref{t:uniqueness}.
Letting $f$ be the classical solution guaranteed by Theorem \ref{t:existence} and $g$ a weak solution in the sense of Theorem \ref{t:weak-solutions}, the goal is to establish a Gr\"onwall-type inequality for $h := f-g$ in a space-localized, velocity-weighted, $L^2$-based norm. Following \cite{morimoto2015polynomial}, we define our cutoff as follows: let \[ \phi(x,v) := \frac{1}{1+|x|^2+|v|^2}, \] and for $a \in \mathbb R^3$, let \[ \phi_a(x,v) := \phi(x-a,v). \] Note that \begin{equation} \label{e.uniq_kin} \left| v \cdot \nabla_{x} \phi_a(x,v) \right| = \left| \frac{2 (x-a) \cdot v}{(1+|x-a|^2 + |v|^2)^2} \right| \lesssim \phi_a(x,v), \end{equation} where the last inequality follows from $2|(x-a)\cdot v| \leq |x-a|^2 + |v|^2$. For given $n\geq 0$, we define the space-localized, velocity-weighted, $L^2$-based space $X_n$ in terms of the following norm: \[ \| f \|_{X_n}^2 := \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a(x,v) \langle v\rangle^{2n} f^2(x,v) \, \mathrm{d} v \, \mathrm{d} x. \] We begin with two auxiliary lemmas. First, we have a modification of Lemma 4.2 from \cite{henderson2021existence}: \begin{lemma}\label{lem:HW21} Suppose that $\mu>-3$, $n > 3/2 + \mu$, and $l > 3/2 + \mu + (3/2 - n)_+$. If $g \in X_n$, then \begin{equation} \label{HW21 4.2} \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a(x,v) \langle v\rangle^{-2l} \left( \int_{\mathbb R^3} g(x,w) |v-w|^\mu \, \mathrm{d} w \right)^2 \, \mathrm{d} v \, \mathrm{d} x \lesssim \| g \|_{X_n}^2. \end{equation} \end{lemma} \begin{proof} Without loss of generality, $g \geq 0$. We split the inner-most (convolutional) integral into two regions according to the ball $B_{|v|/10}(v)$.
That is, \[ \begin{split} &\sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{-2l} \left( \int_{\mathbb R^3} g(w) |v-w|^\mu \, \mathrm{d} w \right)^2 \, \mathrm{d} v \, \mathrm{d} x \\ &\quad\quad \lesssim\sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{-2l} \left( \left( \int_{B_{|v|/10}(v)} g(w) |v-w|^\mu \, \mathrm{d} w \right)^2 \right.\\ &\quad \quad \qquad \qquad\qquad \qquad \left.+ \left( \int_{B_{|v|/10}(v)^C} g(w) |v-w|^\mu \, \mathrm{d} w \right)^2 \right) \, \mathrm{d} v \, \mathrm{d} x =: I_1 + I_2. \end{split} \] Note that, when $w \in B_{|v|/10}(v)$, we have that $\langle w \rangle \approx \langle v\rangle$ and $v \in B_{|w|/2}(w)$; in particular, $\sup_a \phi_a(x,v) / \phi_a(x,w) \approx 1$. Then, Cauchy-Schwarz and Fubini yield \[ \begin{split} I_1 &\lesssim \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3}\phi_a(x,v) \langle v\rangle^{-2l} \langle v\rangle^{3+\mu} \int_{B_{|v|/10}(v)} g(w)^2 |v-w|^\mu \, \mathrm{d} w \, \mathrm{d} v \, \mathrm{d} x \\ &\lesssim \sup_{a \in \mathbb R^3} \int_{\mathbb R^3} \int_{\mathbb R^3} g(w)^2 \int_{B_{|w|/2}(w)} \langle v\rangle^{-2l+3+\mu} \phi_a(x,v) |v-w|^\mu \, \mathrm{d} v \, \mathrm{d} w \, \mathrm{d} x \\ &\lesssim \sup_{a \in \mathbb R^3} \int_{\mathbb R^3}\int_{\mathbb R^3} \phi_a(x,w) g(w)^2 \langle w \rangle^{-2l+3+\mu} \int_{B_{|w|/2}(w)} |v-w|^\mu \, \mathrm{d} v \, \mathrm{d} w \, \mathrm{d} x \\ &\lesssim \sup_{a \in \mathbb R^3} \int_{\mathbb R^3}\int_{\mathbb R^3} \phi_a(x,w) g(w)^2 \langle w \rangle^{-2l + 2(3+\mu)} \, \mathrm{d} w \, \mathrm{d} x \lesssim \| g \|_{X_n}^2, \end{split} \] where we also used that $-l+3+\mu < n$.
For the remaining term, we again use Cauchy-Schwarz, followed by H\"older's inequality in $x$, to obtain \[ \begin{split} I_2 &\lesssim \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a(x,v) \langle v\rangle^{-2l} \left( \int_{B_{|v|/10}(v)^C} \phi_a(x,w) \langle w \rangle^{2n} g(w)^2 \, \mathrm{d} w \right)\\ &\qquad \qquad\times \left( \int_{B_{|v|/10}(v)^C} \frac{|v-w|^{2\mu}}{\langle w \rangle^{2n} \phi_a(x,w)} \, \mathrm{d} w \right) \, \mathrm{d} v \, \mathrm{d} x\\ &\lesssim \| g \|_{X_n}^2 \sup_{a \in \mathbb R^3} \sup_{x \in \mathbb R^3} \int_{\mathbb R^3} \phi_a(x,v) \langle v\rangle^{-2l}\int_{B_{|v|/10}(v)^C} \frac{|v-w|^{2\mu}}{\langle w \rangle^{2n}} \left( |x-a|^2 + \langle w \rangle^2 \right) \, \mathrm{d} w \, \mathrm{d} v \\ &\lesssim \| g \|_{X_n}^2 \sup_{z \in \mathbb R^3} \int_{\mathbb R^3} \langle v\rangle^{-2l} \frac{\langle v\rangle^{2\mu + (3-2n)_+}|z|^2 + \langle v\rangle^{2\mu + (5-2n)_+}}{|z|^2 + \langle v\rangle^2} \, \mathrm{d} v \lesssim \| g \|_{X_n}^2. \end{split} \] \end{proof} We also recall the following result from \cite{HST2020boltzmann}: \begin{lemma}[{\cite[Lemma A.1]{HST2020boltzmann}}]\label{l:A1} For any $\rho > 0$ and $v_0 \in \mathbb R^3$ such that $\rho \geq 2|v_0|$, and any $H: \mathbb R^3 \rightarrow [0,\infty)$ such that the right-hand side is finite, we have \begin{equation}\label{HST A1} \int_{\partial B_\rho(0)} \int_{(z-v_0)^\perp} H(z+w) \, \mathrm{d} w \, \mathrm{d} z \lesssim \rho^2 \int_{B_{\rho/2}^C(0)} \frac{H(w)}{|w|} \, \mathrm{d} w. \end{equation} \end{lemma} We are now ready to proceed with the proof of uniqueness. With $f$ and $g$ as above, we define $h = f-g$ and observe that \begin{equation} \label{e.uniq_dif} \partial_t h + v \cdot \nabla_x h = Q(h,f) + Q(g,h), \end{equation} in the weak sense.
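The right-hand side of \eqref{e.uniq_dif} is the standard consequence of the bilinearity of the collision operator $Q$ in each of its two arguments; spelled out (a one-line computation):

```latex
% Add and subtract Q(g,f), then use bilinearity of Q in each slot:
\[
  Q(f,f) - Q(g,g)
    = \bigl(Q(f,f) - Q(g,f)\bigr) + \bigl(Q(g,f) - Q(g,g)\bigr)
    = Q(f-g,\, f) + Q(g,\, f-g)
    = Q(h,f) + Q(g,h).
\]
```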
We integrate (in $x$ and $v$) \eqref{e.uniq_dif} against $\phi_a(x,v) h \langle v\rangle^{2n}$ for some $n\geq \frac 3 2$ to be determined later. Even though this is not an admissible test function for the weak solution $g$, these calculations can be justified by a standard approximation procedure, which we omit. Next, we take a supremum over $a \in \mathbb R^3$ to yield \[ \begin{split} \frac{1}{2} \frac{d}{dt} \| h \|_{X_n}^2 &\leq \frac 1 2 \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} v \cdot \nabla_x \phi_a h^2 \langle v\rangle^{2n} \, \mathrm{d} v \, \mathrm{d} x + \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a Q(h,f) h \langle v\rangle^{2n} \, \mathrm{d} v \, \mathrm{d} x\\ &\qquad + \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a Q(g,h) h \langle v\rangle^{2n} \, \mathrm{d} v \, \mathrm{d} x. \end{split} \] Using \eqref{e.uniq_kin}, we bound the first term on the right by \[ \frac 1 2 \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} |v \cdot \nabla_x \phi_a | h^2 \langle v\rangle^{2n} \, \mathrm{d} v \, \mathrm{d} x \leq \frac 1 2 \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a h^2 \langle v\rangle^{2n} \, \mathrm{d} v \, \mathrm{d} x = \frac 1 2 \| h \|_{X_n}^2, \] which yields \begin{equation} \label{e.uinq_gron} \begin{split} \frac {d}{dt} \| h \|_{X_n}^2 &\leq \| h \|_{X_n}^2 + 2 \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \left( Q_\text{s}(h,f) + Q_\text{ns}(h,f) + Q(g,h) \right) h \langle v\rangle^{2n} \, \mathrm{d} v \, \mathrm{d} x \\ &=: \| h \|_{X_n}^2 + I_1 + I_2 + I_3. \end{split} \end{equation} We bound the terms on this right-hand side one by one. Since $t$ plays no role in these estimates, we prove them for general functions $f, g, h$ defined for $(x,v)\in \mathbb R^6$, with the integrals $I_1$, $I_2$, and $I_3$ defined as in \eqref{e.uinq_gron}.
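To preview how the term-by-term bounds will be used (a schematic sketch, under the simplifying assumption that each $I_i$ is controlled by $C\|h\|_{X_n}^2$, and that $f$ and $g$ share the same initial data as in the uniqueness statement; the precise bounds established below may carry additional factors):

```latex
% Schematic Gronwall step, assuming I_1 + I_2 + I_3 <= C ||h(t)||_{X_n}^2:
\[
  \frac{d}{dt} \| h(t) \|_{X_n}^2 \leq (1 + C)\, \| h(t) \|_{X_n}^2
  \quad\Longrightarrow\quad
  \| h(t) \|_{X_n}^2 \leq e^{(1+C)\,t}\, \| h(0) \|_{X_n}^2 .
\]
% Since h(0) = f_in - g_in = 0, this forces ||h(t)||_{X_n} = 0, i.e. f = g.
```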
As above, norms such as $\|\cdot\|_{L^\infty_q}$ with no specified domain are understood to be over $\mathbb R^6$ throughout this section. For $I_1$, we need the following intermediate lemma:
\begin{lemma}\label{l:Chris_lemma}
For $v\in \mathbb R^3$, let
\[
{\mathcal K}(x,v) := \int_{B_{\langle v\rangle/2}} K_{|h|}(x,v,v') f(x,v') \, \mathrm{d} v',
\]
where $K_{|h|}(x,v,v')$ is defined in terms of the function $|h|(x,v)$ according to formula \eqref{e.kernel}. Assume that $n>\max(\frac 3 2,\gamma+2s+\frac 5 2)$. If $\gamma +2s > -2$, suppose that $q > 3$, and, if $\gamma +2s \leq -2$, suppose that $q > (3 +\gamma)/2$. Then there holds
\begin{equation}\label{e.Chris_lemma}
\sup_{a \in \mathbb R^3} \int_{\mathbb R^3} \int_{B_{10}(0)^C} \phi_a \langle v\rangle^{2n} {\mathcal K}^2 \, \mathrm{d} v \, \mathrm{d} x \leq C \| h \|_{X_n}^2 \| f \|_{L^\infty_q}^2,
\end{equation}
for a universal constant $C>0$ that tends to $\infty$ as $\gamma \nearrow 0$.
\end{lemma}
\begin{proof}
The proof is divided into two cases depending on $\gamma + 2s$.

\noindent {\it Case 1: $\gamma + 2s > -2$.} This argument is inspired by Proposition 3.1(i) of \cite{HST2020boltzmann}, but requires some modification due to the presence of the space-localizing weight $\phi_a$. Let $r = \langle v\rangle / 2$ and define
\[
H_a(x,z) := h(x,z)^2 \langle z \rangle^{2n} |z| \phi_a(x,z) \quad \text{ and } \quad \Phi_a(x,v,w) := \frac{|w|^{2\gamma+4s+2}}{\langle v+w \rangle^{2n} |v+w| \phi_a(x,v+w)}.
\]
Using Cauchy-Schwarz twice, and the fact that $\int_{\mathbb R^3} \langle v' \rangle^{-q} \, \mathrm{d} v' \lesssim 1$, we obtain
\begin{equation}\label{uniq: gjiurhub}
\begin{split}
\phi_a(x,v)^{\frac 1 2} {\mathcal K}(x,v) &\leq \| f \|_{L^\infty_q} \int_{B_r(0)} \phi_a(x,v)^{\frac 1 2} \langle v' \rangle^{-q} K_{|h|} (x,v,v') \, \mathrm{d} v' \\
&\lesssim \| f \|_{L^\infty_q} \int_{B_r(0)} \frac{\langle v' \rangle^{-q}}{|v-v'|^{3+2s}} \int_{(v-v')^\perp} \phi_a(x,v)^{\frac 1 2} |h(x,v+w)||w|^{\gamma+2s+1} \, \mathrm{d} w \, \mathrm{d} v' \\
& \lesssim \| f \|_{L^\infty_q} \int_{B_r(0)} \frac{\langle v' \rangle^{-q}}{|v-v'|^{3+2s}} \left( \int_{(v-v')^\perp} \phi_a(x,v) \Phi_a(x,v,w) \, \mathrm{d} w \right)^{\frac 1 2}\\
&\qquad \qquad \qquad \times \left( \int_{(v-v')^\perp} H_a(x,v+w) \, \mathrm{d} w \right)^{\frac 1 2} \, \mathrm{d} v'\\
& \lesssim \| f \|_{L^\infty_q} \left( \int_{B_r(0)} \frac{\langle v' \rangle^{-q}}{|v-v'|^{6+4s}} \left( \int_{(v-v')^\perp} \phi_a(x,v) \Phi_a(x,v,w) \, \mathrm{d} w \right)\right.\\
&\qquad \qquad \qquad \times\left. \left( \int_{(v-v')^\perp} H_a(x,v+w) \, \mathrm{d} w \right) \, \mathrm{d} v' \right)^{\frac 1 2}.
\end{split}
\end{equation}
At this point we observe a few important algebraic relations between the variables $v$, $v'$, and $w$. Since $|v| \geq 10$, we have $|v| \approx \langle v\rangle$, and since $|v'| < \langle v\rangle / 2$, we also have $|v-v'| \approx \langle v\rangle$. Furthermore, since $w \perp (v-v')$, we have that $|v+w| \approx |v| + |w| \approx \langle v\rangle + \langle w \rangle$ (that is, a converse to the triangle inequality for those two variables); see \cite[Lemma~2.4]{henderson2021existence}.
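This last equivalence can also be verified directly (a quick sketch of the computation behind the cited lemma): since $w \perp (v-v')$, and since $|v'| < \langle v\rangle/2 \leq \frac{11}{20}|v|$ when $|v| \geq 10$, we have
\[
|v\cdot w| = |(v-v')\cdot w + v'\cdot w| = |v'\cdot w| \leq \tfrac{11}{20} |v| |w| \leq \tfrac{11}{40}\big( |v|^2 + |w|^2 \big),
\]
and therefore
\[
\tfrac{9}{20} \big( |v|^2 + |w|^2 \big) \leq |v+w|^2 = |v|^2 + 2 v\cdot w + |w|^2 \leq \tfrac{31}{20} \big( |v|^2 + |w|^2 \big),
\]
so that $|v+w| \approx (|v|^2+|w|^2)^{1/2} \approx |v| + |w|$, as claimed.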
With these relations in hand, we have
\[
\begin{split}
&\int_{(v-v')^\perp} \phi_a(x,v) \Phi_a(x,v,w) \, \mathrm{d} w \\
&\quad\quad\quad= \int_{(v-v')^\perp} \frac{|w|^{2\gamma + 4s+2}(1+|x-a|^2+|v+w|^2)}{\langle v + w \rangle^{2n} |v+w| (1+|x-a|^2+|v|^2)} \, \mathrm{d} w \\
&\quad\quad\quad \lesssim \int_{(v-v')^\perp} \frac{|w|^{2\gamma + 4s+2}}{\langle v\rangle^{2n+1} + \langle w \rangle^{2n+1}} \, \mathrm{d} w + \int_{(v-v')^\perp} \frac{|w|^{2\gamma + 4s + 4}}{(\langle v\rangle^{2n+1}+\langle w \rangle^{2n+1})\langle v\rangle^2} \, \mathrm{d} w \\
&\quad\quad\quad \lesssim \langle v\rangle^{2\gamma+4s+3-2n},
\end{split}
\]
where we needed $\gamma + 2s > -2$ so that the singularities at $w=0$ are integrable, and $n > \gamma + 2s + \frac 5 2$ so the tails converge. Since $|v'| < \langle v\rangle /2$ and $|v| \geq 10$, we have that $|v-v'| \approx \langle v\rangle$. Thus, \eqref{uniq: gjiurhub} becomes
\begin{equation}\label{uniq: simplified gjiurhub}
\phi_a(x,v) {\mathcal K}(x,v)^2 \lesssim \| f \|_{L^\infty_q}^2 \langle v\rangle^{2\gamma - 3 -2n} \int_{B_r(0)} \langle v' \rangle^{-q} \int_{(v-v')^\perp} H_a(x,v+w) \, \mathrm{d} w \, \mathrm{d} v'.
\end{equation}
This implies, using Lemma \ref{l:A1} on $H_a$, and spherical coordinates for the $v$-integral, that
\[
\begin{split}
&\sup_{a \in \mathbb R^3} \int_{\mathbb R^3} \int_{B_{10}^C} \phi_a \langle v\rangle^{2n} {\mathcal K}(x,v)^2 \, \mathrm{d} v \, \mathrm{d} x \approx \sup_{a \in \mathbb R^3}\int_{10}^\infty \rho^{2n} \int_{\mathbb R^3} \int_{\partial B_\rho(0)} \phi_a(x,z) {\mathcal K}(x,z)^2 \, \mathrm{d} z \, \mathrm{d} x \, \mathrm{d} \rho\\
&\quad \lesssim \| f \|_{L^\infty_q}^2 \sup_{a \in \mathbb R^3}\int_{10}^\infty \rho^{2\gamma-3} \int_{B_r(0)} \langle v' \rangle^{-q} \int_{\mathbb R^3} \int_{\partial B_\rho(0)} \int_{(z-v')^\perp} H_a(x,z+w) \, \mathrm{d} w \, \mathrm{d} z \, \mathrm{d} x \, \mathrm{d} v' \, \mathrm{d} \rho \\
&\quad \lesssim \| f \|_{L^\infty_q}^2 \sup_{a \in \mathbb R^3} \int_{10}^\infty \rho^{2\gamma-1} \left( \int_{\mathbb R^3} \langle v' \rangle^{-q} \, \mathrm{d} v' \right) \int_{\mathbb R^3} \int_{B_{\rho/2}^C(0)} \frac{h(x,w)^2 \langle w \rangle^{2n} |w| \phi_a(x,w)}{|w|} \, \mathrm{d} w \, \mathrm{d} x \, \mathrm{d} \rho \\
&\quad \lesssim \| f \|_{L^\infty_q}^2 \sup_{a \in \mathbb R^3} \int_{10}^\infty \rho^{2\gamma-1} \int_{\mathbb R^3} \int_{B_{\rho/2}^C(0)} h(x,w)^2 \langle w \rangle^{2n} \phi_a(x,w) \, \mathrm{d} w \, \mathrm{d} x \, \mathrm{d} \rho \\
&\quad \lesssim \| f \|_{L^\infty_q}^2 \int_{10}^\infty \rho^{2\gamma - 1} \| h \|_{X_n}^2 \, \mathrm{d} \rho \lesssim \| f \|_{L^\infty_q}^2 \| h \|_{X_n}^2,
\end{split}
\]
as desired. Note that the last inequality uses that $\gamma < 0$ in an essential way.

\noindent {\it Case 2: $\gamma + 2s \leq -2$.} This case requires a detailed analysis of the integral kernel to deal with the more severe singularity. As in Case 1, whenever $|v| \geq 10$, $|v'| < \langle v\rangle/2$, and $w \perp (v-v')$, we have that $\langle v\rangle / \langle v+w \rangle \lesssim 1$.
Therefore,
\[
\langle v\rangle^n K_{|h|}(x,v,v') \approx \frac{1}{|v-v'|^{3+2s}} \int_{(v-v')^\perp} \langle v\rangle^n |h(x,v+w)||w|^{\gamma+2s+1} \, \mathrm{d} w \lesssim K_{\langle \cdot \rangle^n |h|}(x,v,v').
\]
Let us define $H(x,v) = \langle v\rangle^n |h(x,v)|$. To properly estimate ${\mathcal K}(x,v)$, we need to take advantage of the decay available from $f$. However, since the integral is over $B_r(0)$, we cannot exploit this smallness directly. Instead, we split the domain of integration $B_r(0)$ into $B_{r^s}(0)$ and $B_r(0) \setminus B_{r^s}(0)$ (recall that $r = \langle v\rangle / 2$ and $|v| \geq 10$, so that the splitting makes sense). Then we have
\begin{equation}\label{e.uniq-lemma-1}
\begin{split}
\int_{B_r(0) \setminus B_{r^s}(0)} K_H(x,v,v') f(x,v') \, \mathrm{d} v' &\lesssim \| f \|_{L^\infty_q} \langle v\rangle^{-sq}\int_{B_r(0)} K_H(x,v,v') \, \mathrm{d} v'\\
&\lesssim \| f \|_{L^\infty_q} \langle v\rangle^{-sq-2s} \int_{\mathbb R^3} H(x,v+w) |w|^{\gamma + 2s} \, \mathrm{d} w,
\end{split}
\end{equation}
using $B_r(0)\subset B_{2\langle v\rangle}(v) \setminus B_{\langle v\rangle/8}(v)$ and Lemma \ref{l:K-upper-bound}. Next, we note that $B_{r^s}(0) \subset B_{|v|+r^s}(v) \setminus B_{|v|-r^s}(v)$, so that
\[
\int_{B_{r^s}(0)} K_H(x,v,v') f(x,v') \, \mathrm{d} v' \lesssim \| f \|_{L^\infty_q} \int_{B_{|v|+r^s}(v) \setminus B_{|v|-r^s}(v)} K_H(x,v,v') \, \mathrm{d} v'.
\]
Expanding into spherical coordinates centered at $v$ (i.e.
$v' = v+\rho z$ with $z\in \partial B_1$), we have
\begin{equation}\label{e.uniq-lemma-2}
\begin{split}
&\int_{B_{r^s}} K_H(x,v,v') f(x,v') \, \mathrm{d} v' \lesssim \| f \|_{L^\infty_q} \int_{|v|-r^s}^{|v|+r^s} \frac{\rho^2}{\rho^{3+2s}} \int_{\partial B_1} \int_{z^\perp} H(x,v+w)|w|^{\gamma+2s+1} \, \mathrm{d} w \, \mathrm{d} z \, \mathrm{d}\rho \\
&\quad\quad\quad \lesssim \| f \|_{L^\infty_q} \langle v\rangle^{-1 - 2s} r^s \int_{\partial B_1} \int_{z^\perp} H(x,v+w)|w|^{\gamma+2s+1} \, \mathrm{d} w \, \mathrm{d} z \\
&\quad\quad\quad \lesssim \| f \|_{L^\infty_q} \langle v\rangle^{-1-s} \int_{\mathbb R^3} H(x,v+w)|w|^{\gamma+2s} \, \mathrm{d} w,
\end{split}
\end{equation}
where we used Lemma \ref{l:A1} with $\rho=1$ in the last line. Combining \eqref{e.uniq-lemma-1} and \eqref{e.uniq-lemma-2}, and recalling $q > (1-s)/s$ by assumption, we now have
\[
\langle v\rangle^n {\mathcal K}(x,v) \lesssim \| f \|_{L^\infty_q} \langle v\rangle^{-\min\{1+s, qs+2s\}} \int_{\mathbb R^3} H(x,v+w)|w|^{\gamma+2s} \, \mathrm{d} w,
\]
and so, using once more Lemma \ref{lem:HW21} with $\mu = \gamma + 2s \leq -2$, $n= 0 > 3/2 + \mu$, and $l = \min\{1+s, qs+2s\} > 3/2 + \mu + (3/2-n)_+$, we obtain
\[
\begin{split}
\sup_{a \in \mathbb R^3} \int_{\mathbb R^3} \int_{B_{10}^C} &\phi_a \left( \langle v\rangle^n {\mathcal K}(x,v) \right)^2 \, \mathrm{d} v \, \mathrm{d} x\\
&\lesssim \| f \|_{L^\infty_q}^2 \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{-2(1+s)} \left( \int_{\mathbb R^3} H(x,v+w) |w|^{\gamma+2s}\, \mathrm{d} w \right)^2 \, \mathrm{d} v \, \mathrm{d} x \\
& \lesssim \| f \|_{L^\infty_q}^2 \| h \|_{X_n}^2,
\end{split}
\]
using $\|H\|_{X_0} = \|h\|_{X_n}$. Altogether, this yields \eqref{e.Chris_lemma}.
\end{proof}

Now we are ready to bound the singular term in \eqref{e.uinq_gron}:
\begin{lemma}[Bound on $I_1$]\label{l:uniqI1}
Let $q$ and $n$ be as in Lemma \ref{l:Chris_lemma}, and assume in addition that $n> \frac 3 2 +(\gamma+2s)_+$ and $q> n+3+\gamma+4s$. Assume also that $\langle v\rangle^m f\in L^\infty_x C^{2s+\alpha}_v$ for some $\alpha\in (0,1)$, and that $m > n + \frac 3 2 + \gamma+2s+\alpha$. With $I_1$ defined as in \eqref{e.uinq_gron}, we have
\[
I_1 \leq C\| h \|_{X_n}^2 \left( \| f \|_{L^{\infty}_q} + \| \langle v\rangle^{m} f \|_{L^\infty_xC^{2s+\alpha}_v} \right),
\]
for any $h$ and $f$ such that the right-hand side is finite. The constant $C>0$ is universal.
\end{lemma}
\begin{proof}
We use the annular decomposition (defining $A_k(v) := B_{2^k|v|}(v) \setminus B_{2^{k-1}|v|}(v)$) to write
\[
Q_\text{s}(h,f) = \sum_{k \in \mathbb Z} \int_{A_k(v)} K_h(x,v,v')(f(x,v')-f(x,v)) \, \mathrm{d} v'.
\]
Then we have
\[
I_1 \lesssim \sup_{a \in \mathbb R^3} \sum_{k \in \mathbb Z} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2n} h \int_{A_k(v)} K_h(x,v,v') (f(x,v')-f(x,v)) \, \mathrm{d} v' \, \mathrm{d} v \, \mathrm{d} x.
\]
The analysis is divided into four cases, based on the range of $k$ and the relative sizes of $v$ and $v'$.

\noindent {\it Case 1: $k \leq -1$.} If $2s+\alpha \leq 1$, then we have
\[
|f(x,v') - f(x,v)| \leq \|f(x,\cdot)\|_{C^{2s+\alpha}_v(B_1(v))} |v'-v|^{2s+\alpha} \leq \|\langle v\rangle^{m} f(x,\cdot)\|_{C_v^{2s+\alpha}(\mathbb R^3_v)} \langle v\rangle^{-m} (2^k|v|)^{2s+\alpha}.
\]
With Lemma \ref{l:K-upper-bound}, we have for each $x\in \mathbb R^3$,
\begin{equation}\label{e.uniq-Holder-annulus}
\begin{split}
&\left| \int_{A_k(v)} K_h(x,v,v') (f(x,v')-f(x,v)) \, \mathrm{d} v' \right|\\
& \qquad\qquad\lesssim \langle v\rangle^{-m} (2^k |v|)^{\alpha} \|\langle v\rangle^m f(x,\cdot) \|_{C_v^{2s+\alpha}(\mathbb R^3)} \int_{\mathbb R^3} |h(x,w)| |v-w|^{\gamma+2s} \, \mathrm{d} w.
\end{split}
\end{equation}
On the other hand, if $2s+\alpha \in (1,2)$ (we may always assume $2s+\alpha< 2$ by taking $\alpha$ smaller if necessary), we have
\[
\begin{split}
|f(x,v') - f(x,v) - \nabla_v f(x,v)\cdot (v'-v)| &\leq \|f(x,\cdot)\|_{C^{2s+\alpha}_v(B_1(v))} |v'-v|^{2s+\alpha}\\
& \lesssim \|\langle v\rangle^{m} f(x,\cdot)\|_{C_v^{2s+\alpha}(\mathbb R^3_v)} \langle v\rangle^{-m} (2^k|v|)^{2s+\alpha}.
\end{split}
\]
As usual, the symmetry of $K_h(x,v,v')$ implies the first-order term integrates to zero over $A_k(v)$, and we obtain \eqref{e.uniq-Holder-annulus} in this case as well. From \eqref{e.uniq-Holder-annulus}, using Lemma \ref{lem:HW21} with $\mu = \gamma+2s$ and $l = m - n - \alpha$, we have
\begin{equation}
\begin{split}
&\sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a(x,v) \langle v\rangle^{2n} \left( \int_{A_k(v)} K_h(x,v,v') (f(x,v')-f(x,v)) \, \mathrm{d} v' \right)^2 \, \mathrm{d} v \, \mathrm{d} x \\
&\quad \lesssim \| f \|_{L^\infty_x C^{2s+\alpha}_{m,v}}^2 2^{2k\alpha} \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3}\phi_a \langle v\rangle^{-2(m - n-\alpha)} \left( \int_{\mathbb R^3} |h(x,w)| |v-w|^{\gamma+2s} \, \mathrm{d} w \right)^2 \, \mathrm{d} v \, \mathrm{d} x \\
&\quad \lesssim \| f \|_{L^\infty_xC^{2s+\alpha}_{m,v}}^2 2^{2k\alpha} \| h \|_{X_n}^2.
\end{split}
\end{equation}
The terms for $k \leq -1$ are summable, and we find that
\begin{equation}\label{unique I1 close range}
\begin{split}
&\sup_{a \in \mathbb R^3} \sum_{k \leq -1} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2n} h \int_{A_k(v)} K_h(x,v,v') (f(x,v')-f(x,v))\, \mathrm{d} v' \, \mathrm{d} v \, \mathrm{d} x \\
&\quad \lesssim \| h \|_{X_n} \sup_{a \in \mathbb R^3} \sum_{k \leq -1} \Big( \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2n} \Big( \int_{A_k(v)} K_h(x,v,v')(f(x,v')-f(x,v)) \, \mathrm{d} v' \Big)^2 \, \mathrm{d} v \, \mathrm{d} x \Big)^{\frac 1 2}\\
&\quad \lesssim \| f \|_{L^\infty_xC^{2s+\alpha}_{m,v}} \| h \|_{X_n}^2.
\end{split}
\end{equation}

\noindent {\it Case 2: $k \geq K_0$ for $K_0 = \log_2(\langle v\rangle/|v|)$.} Notice that the choice of $K_0$ yields $|v'| \geq \langle v\rangle/2$. Thus,
\begin{equation}\label{e.case2}
\begin{split}
&\left| \int_{A_k(v)} K_h(x,v,v')(f(x,v')-f(x,v)) \, \mathrm{d} v' \right| \lesssim \langle v\rangle^{-q} \| f \|_{L^{\infty}_q} \int_{A_k(v)} |K_h(x,v,v')| \, \mathrm{d} v' \\
&\quad\quad\quad \lesssim \langle v\rangle^{-q} \| f \|_{L^{\infty}_q} (2^k \langle v\rangle)^{-2s} \left( \int_{\mathbb R^3} |h(x,w)| |v-w|^{\gamma+2s} \, \mathrm{d} w \right).
\end{split}
\end{equation}
Here we used that $\langle v' \rangle \approx \langle v\rangle$ to obtain the $\langle v\rangle^{-q}\| f \|_{L^{\infty}_q}$ decay from $f(x,v')$.
Therefore,
\begin{equation}\label{unique I1 far large v'}
\begin{split}
&\sup_{a \in \mathbb R^3} \sum_{k \geq K_0} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2n} h \int_{A_k(v)} K_h(x,v,v')(f(x,v')-f(x,v)) \, \mathrm{d} v' \, \mathrm{d} v \, \mathrm{d} x\\
&\quad\quad\lesssim \| h \|_{X_n} \| f \|_{L^\infty_q} \sup_{a \in \mathbb R^3} \sum_{k > K_0} 2^{-2ks} \left( \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2(n-q-2s)} \left( \int_{\mathbb R^3} |h(x,w)| |v-w|^{\gamma+2s} \, \mathrm{d} w \right) \, \mathrm{d} v \, \mathrm{d} x \right)^{\frac 1 2} \\
&\quad\quad \lesssim \| f \|_{L^\infty_q} \| h \|_{X_n}^2,
\end{split}
\end{equation}
from Lemma \ref{lem:HW21}. We used $q >n+ 3 + 2\gamma+2s$.

\noindent {\it Case 3: $k \in [0,K_0]$ and $|v| < 10$.} Notice that $|v'| \leq \langle v\rangle/2$. This case is a formality, and uses exactly the same estimates as in Case 1. The only difference is in how we deduce that $\langle v' \rangle \approx \langle v\rangle$. In Case 1, this is because $|v-v'| \leq |v|/2$ from the condition on $k$ and the definition of $A_k$. Here, it is due to the choice of $K_0$ (the $\lesssim$ direction) and the smallness of $|v|$ (the $\gtrsim$ direction). Hence, we omit the details.

\noindent {\it Case 4: $k \in [0,K_0]$ and $|v| \geq 10$.} This case uses estimates similar to Case 2. With $r = \langle v\rangle/2$, we decompose as
\begin{equation}
\begin{split}
\sum_{k = 0}^{K_0} &\int_{A_k(v)} K_h(x,v,v')(f(x,v')-f(x,v)) \, \mathrm{d} v'\\
&= \sum_{k = 0}^{K_0} \left(\int_{A_k(v) \cap B_r(0)} + \int_{A_k(v) \cap B_r(0)^c}\right) K_h(x,v,v')(f(x,v')-f(x,v)) \, \mathrm{d} v' =: J_1 + J_2.
\end{split}
\end{equation}
We begin with $J_1$:
\[
\begin{split}
J_1 &= \sum_{k = 0}^{K_0} \int_{A_k(v) \cap B_r(0)} K_h(x,v,v')(f(x,v')-f(x,v)) \, \mathrm{d} v' \\
& \lesssim f(x,v) \int_{B_r(0)} K_{|h|}(x,v,v') \, \mathrm{d} v' + \int_{B_r(0)}K_{|h|}(x,v,v') f(x,v') \, \mathrm{d} v' =: J_{11} + J_{12}.
\end{split}
\]
For $J_{11}$, arguing as in \eqref{e.case2}, we note that
\[
\int_{B_r(0)} K_{|h|}(x,v,v') \, \mathrm{d} v' \leq \int_{B_{2\langle v\rangle}(v) \setminus B_{\langle v\rangle/4}(v)} K_{|h|}(x,v,v') \, \mathrm{d} v' \lesssim \langle v\rangle^{-2s} \int_{\mathbb R^3} |h(x,v')| |v-v'|^{\gamma+2s} \, \mathrm{d} v'.
\]
Then once more using Lemma \ref{lem:HW21} yields
\begin{equation}\label{uniq: dktluls}
\begin{split}
&\sup_{a \in \mathbb R^3} \sum_{k>0} \int_{\mathbb R^3} \int_{B_{10}(0)^C} \phi_a \langle v\rangle^{2n} h \int_{A_k(v) \cap B_r(0)} K_h(x,v,v')f(x,v)\, \mathrm{d} v'\, \mathrm{d} v \, \mathrm{d} x \\
&\quad\quad\quad \lesssim \| h \|_{X_n} \| f \|_{L^\infty_q} \sup_{a \in \mathbb R^3} \left( \int_{\mathbb R^3} \int_{\mathbb R^3} \langle v\rangle^{-2q + 2n - 4s} \left( \int_{\mathbb R^3} |h(x,w)| |v-w|^{\gamma+2s} \, \mathrm{d} w \right)^2 \, \mathrm{d} v \, \mathrm{d} x \right)^{\frac 1 2} \\
&\quad\quad\quad \lesssim \| f \|_{L^\infty_q} \| h \|_{X_n}^2.
\end{split}
\end{equation}
To estimate $J_{12}$, we recall the notation ${\mathcal K}(x,v) := \int_{B_r(0)} K_{|h|}(x,v,v') f(x,v') \, \mathrm{d} v'$.
Then
\begin{equation}\label{uniq: brgrt}
\begin{split}
\sup_{a \in \mathbb R^3} \sum_{k>0} \int _{\mathbb R^3}\int_{B_{10}^C} &\phi_a \langle v\rangle^{2n} h \int_{A_k(v) \cap B_r(0)} K_{|h|}(x,v,v') f(x,v')\, \mathrm{d} v' \, \mathrm{d} v \, \mathrm{d} x\\
&\lesssim \| h \|_{X_n} \left( \sup_{a \in \mathbb R^3} \int_{\mathbb R^3} \int_{B_{10}^C} \phi_a \langle v\rangle^{2n} {\mathcal K}^2 \, \mathrm{d} v \, \mathrm{d} x \right)^{\frac 1 2} \lesssim \|h\|_{X_n}^2 \|f\|_{L^\infty_q},
\end{split}
\end{equation}
by Lemma \ref{l:Chris_lemma}.

We now consider $J_2$. Here, we have $|v'| \geq \langle v\rangle/2$, so the arguments of Case 2 apply verbatim. Hence, we omit the argument. This completes the desired estimate for Case 4, which together with the first three cases establishes the conclusion of the lemma.
\end{proof}

\begin{lemma}[Bound on $I_2$]\label{l:uniqI2}
With $I_2$ defined as in \eqref{e.uinq_gron}, there holds
\[
I_2 \leq C \|h\|_{X_n}^2\|f\|_{L^\infty_q},
\]
whenever $n\geq \frac 3 2$ and $q>n+\frac 3 2 + \gamma$. The constant $C>0$ is universal.
\end{lemma}
\begin{proof}
With the simple observation that
\[
I_2 \lesssim \| h \|_{X_n} \| Q_{\text{ns}}(h,f) \|_{X_n},
\]
we use the definition of $Q_{\text{ns}}$ and Lemma \ref{lem:HW21} with $\mu = \gamma$ and $l = q-n$, to immediately find
\[
\begin{split}
\| Q_{\text{ns}}(h,f) \|_{X_n}^2 &\lesssim \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2n} f(x,v)^2 \left( \int_{\mathbb R^3} |h(x,z)| |v-z|^\gamma \, \mathrm{d} z \right)^2 \, \mathrm{d} v \, \mathrm{d} x \\
& \lesssim \| f \|_{L^\infty_q}^2 \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2(n-q)} \left( \int_{\mathbb R^3} |h(x,z)| |v-z|^\gamma \, \mathrm{d} z \right)^2 \, \mathrm{d} v \, \mathrm{d} x \lesssim \| f \|_{L^\infty_q}^2\| h \|_{X_n}^2,
\end{split}
\]
which yields the desired bound for $I_2$.
\end{proof}

\begin{lemma}[Bound on $I_3$]\label{l:uniqI3}
With $I_3$ defined as in \eqref{e.uinq_gron}, there exists a universal $C>0$ such that
\[
I_3 \leq C \|h\|_{X_n}^2 \|g\|_{L^\infty_q},
\]
whenever $n\geq 2$ and $q>2n+\gamma+5$.
\end{lemma}
\begin{proof}
First, define
\[
\Psi_a(x,v) := \phi_a(x,v)^{\frac 1 2} \langle v\rangle^n \quad \text{ and } \quad J_a(x,v) = \Psi_a(x,v) h(x,v).
\]
We begin by splitting $I_3$ into a coercive part and a commutator:
\[
\begin{split}
I_3 &= \sup_{a \in \mathbb R^3} \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2n} Q(g,h)h \, \mathrm{d} v \, \mathrm{d} x \\
&= \sup_{a \in \mathbb R^3} \left( \iint_{\mathbb R^3\times\mathbb R^3} J_a Q(g, J_a) \, \mathrm{d} v \, \mathrm{d} x + \iint_{\mathbb R^3\times\mathbb R^3} J_a \left( Q(g,h) \Psi_a - Q(g, J_a) \right) \, \mathrm{d} v \, \mathrm{d} x \right) \\
&=: \sup_{a \in \mathbb R^3} \left( I_{31} + I_{32} \right).
\end{split}
\]
We need to keep the supremum in $a$ on the outside since the ``coercive'' term $I_{31}$ will contribute a strong negative component which is needed to control $I_{32}$. Specifically, using a well-known symmetrization technique (see, e.g. \cite[Lemma 4.1]{amuxy2011bounded}), we have
\[
I_{31} = -\frac 1 2 D_a + \iint_{\mathbb R^3\times\mathbb R^3} Q(g, J_a^2) \, \mathrm{d} v \, \mathrm{d} x,
\]
where
\[
D_a := \iint_{\mathbb R^9 \times {\mathbb S}^2} (J_a(x,v')-J_a(x,v))^2 g(x,v_*) B(|v-v_*|,\cos \theta) \, \mathrm{d} \sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x.
\]
For the second term in $I_{31}$, recalling $Q = Q_{\rm s} + Q_{\rm ns}$, we use a change of variables and the Cancellation Lemma \cite[Lemma 1]{alexandre2000entropy} to write
\[
\begin{split}
\iint_{\mathbb R^3\times\mathbb R^3} Q_{\rm s}(g, J_a^2) \, \mathrm{d} v \, \mathrm{d} x &= \iint_{\mathbb R^3\times\mathbb R^3} J_a^2 \int_{\mathbb R^3} (K_g(x,v',v) - K_g(x,v,v')) \, \mathrm{d} v' \, \mathrm{d} v \, \mathrm{d} x\\
&\lesssim \iint_{\mathbb R^3\times\mathbb R^3} \phi_a \langle v\rangle^{2n} h^2 \int_{\mathbb R^3} g(x,z) |v-z|^\gamma \, \mathrm{d} z \, \mathrm{d} v \, \mathrm{d} x\lesssim \|h\|_{X_n}^2 \|g\|_{L^\infty_q},
\end{split}
\]
since $q>2n+\gamma+5> \gamma+3$. The nonsingular term is handled similarly:
\[
\begin{split}
\iint_{\mathbb R^3\times\mathbb R^3} Q_{\rm ns}(g, J_a^2) \, \mathrm{d} v \, \mathrm{d} x &\lesssim \iint_{\mathbb R^3\times\mathbb R^3} J_a^2 \int_{\mathbb R^3} g(x,z) |v-z|^\gamma \, \mathrm{d} z \, \mathrm{d} v \, \mathrm{d} x\lesssim \|h\|_{X_n}^2 \|g\|_{L^\infty_q}.
\end{split}
\]
We conclude
\begin{equation}\label{uinq: coercive I31}
I_{31}+\frac 1 2 D_a \lesssim \| h \|_{X_n}^2 \| g \|_{L^\infty_q}.
\end{equation}
For $I_{32}$, recalling the abbreviations $F = F(x,v)$, $F_* = F(x,v_*)$, $F' = F(x,v')$, and $F_*' = F(x,v_*')$ for any function $F$, and writing $B = B(|v-v_*|,\cos\theta)$, we have
\[
\begin{split}
I_{32} &= \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times\mathbb S^2} B J_a \left[ (g_*' h' - g_*h) \Psi_a - (g_*' J_a' - g_* J_a)\right] \, \mathrm{d} \sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x\\
&= \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times\mathbb S^2} B J_a g_*' h' (\Psi_a - \Psi_a')\, \mathrm{d} \sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x,
\end{split}
\]
since $J_a = \Psi_a h$.
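The algebra behind this last reduction is worth displaying: expanding the bracket and using $J_a = \Psi_a h$ (hence $J_a' = \Psi_a' h'$), we find
\[
(g_*' h' - g_* h)\Psi_a - (g_*' J_a' - g_* J_a) = g_*' h' \Psi_a - g_* h \Psi_a - g_*' h' \Psi_a' + g_* h \Psi_a = g_*' h' (\Psi_a - \Psi_a'),
\]
with the $g_* h \Psi_a$ terms cancelling exactly; no estimate is used at this step.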
Next, we apply the pre-post-collisional change of variables: $v\leftrightarrow v'$, $v_* \leftrightarrow v_*'$, $\sigma \mapsto \sigma' := (v-v_*)/|v-v_*|$. This transformation has unit Jacobian and leaves $B(|v-v_*|, \cos\theta)$ invariant. This gives
\begin{equation}\label{e.I32}
\begin{split}
I_{32} &= \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times\mathbb S^2} B J_a' g_* h (\Psi_a'-\Psi_a)\, \mathrm{d} \sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x \\
&= \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times\mathbb S^2} B J_a g_* h (\Psi_a'-\Psi_a)\, \mathrm{d} \sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x \\
&\quad + \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times\mathbb S^2} B (J_a' - J_a) g_* h (\Psi_a'-\Psi_a)\, \mathrm{d} \sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x\\
&=: I_{321} + I_{322}.
\end{split}
\end{equation}
We consider the term $I_{322}$ first. We want to extract a ``singular'' piece (in the form of $D_a$, defined just above \eqref{uinq: coercive I31}) which will cancel with the coercive part of $I_{31}$. Specifically, from the inequality $ab\leq \frac 1 2 (a^2+b^2)$, we have
\[
\begin{split}
I_{322} &\leq \frac 1 2 \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times{\mathbb S}^2} B (J_a'-J_a)^2 g_* \, \mathrm{d}\sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x \\
&\quad\quad\quad + \frac 1 2\iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times{\mathbb S}^2} B h^2 g_* (\Psi_a' - \Psi_a)^2 \, \mathrm{d}\sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x,
\end{split}
\]
so that
\[
I_{322} - \frac 1 2 D_a \leq \frac 1 2 \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times{\mathbb S}^2} B h^2 g_* (\Psi_a' - \Psi_a)^2 \, \mathrm{d}\sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x.
\]
Next, write $(\Psi'_a- \Psi_a)^2 = 2 \Psi_a (\Psi_a - \Psi_a') + (\Psi_a')^2 - \Psi_a^2$ to obtain
\begin{equation}
\begin{split}
I_{322} - \frac 1 2 D_a &\leq \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times{\mathbb S}^2} B h^2 g_* \Psi_a (\Psi_a - \Psi_a') \, \mathrm{d}\sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x\\
&\quad + \frac 1 2 \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times{\mathbb S}^2} B h^2 g_* ((\Psi_a')^2 - \Psi_a^2) \, \mathrm{d}\sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x.
\end{split}
\end{equation}
Since $J_a = \Psi_a h$, the first term on the right is the negative of $I_{321}$. Returning to \eqref{e.I32}, we now have
\begin{equation}\label{e.uniq3}
I_{32} - \frac 1 2 D_a \leq \frac 1 2 \iint_{\mathbb R^3\times\mathbb R^3} \iint_{\mathbb R^3\times{\mathbb S}^2} B h^2 g_* ((\Psi_a')^2 - \Psi_a^2) \, \mathrm{d}\sigma \, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x.
\end{equation}
It only remains to bound this right-hand side. To do this, we start with the Taylor expansion for $\Psi_a^2 = \phi_a(x,v) \langle v\rangle^{2n}$ in $v$ (in this proof, subscripts such as $\partial_i$ always denote differentiation in $v$):
\[
\Psi_a^2(x,v')-\Psi_a^2(x,v) = \partial_{i} \Psi_a^2(x,v) (v'-v)_i + \frac 1 2 \partial_{ij}^2 \Psi_a^2(x,\tilde{v}) (v'-v)_i (v'-v)_j,
\]
where we sum over repeated indices, and $\tilde{v} = \tau v' + (1-\tau) v$ for some $\tau \in (0,1)$. By a direct calculation, we have
\[
|\partial_i \Psi_a^2(x,v)| \lesssim \Psi_a^2(x,v) \langle v\rangle^{-1} \quad \text{ and } \quad |\partial_{ij}^2 \Psi_a^2(x,\tilde{v})| \lesssim \Psi_a^2(x,\tilde{v}) \left( \frac{1}{ \langle \tilde{v} \rangle^2} + \phi_a(x,\tilde{v}) \right).
\]
Noting that $v'-v = \frac 1 2 |v-v_*|(\sigma-\sigma' \cos \theta) + \frac 1 2 |v-v_*| (\cos\theta -1) \sigma'$, we have
\begin{equation}\label{e.uniq4}
\begin{split}
&\int_{{\mathbb S}^2} B ((\Psi_a')^2-\Psi_a^2) \, \mathrm{d} \sigma = \frac{|v-v_*|}{2} \nabla_v \Psi_a^2(x,v) \cdot \int_{{\mathbb S}^2}B (\sigma - (\sigma \cdot \sigma') \sigma') \, \mathrm{d}\sigma\\
&\quad\quad\quad + \frac{|v-v_*|}{2} \nabla_v \Psi_a^2(x,v) \cdot \sigma' \int_{{\mathbb S}^2}B (\cos\theta-1) \, \mathrm{d}\sigma +\frac 1 2 \int_{{\mathbb S}^2}B \partial_{ij}^2 \Psi_a^2(x,\tilde{v}) (v'-v)_i (v'-v)_j \, \mathrm{d}\sigma.
\end{split}
\end{equation}
The first term on the right is zero by symmetry. Since $B \approx \theta^{-2-2s}|v-v_*|^\gamma$, the second term is $\lesssim \Psi_a^2(x,v) |v-v_*|^{1+\gamma} \langle v\rangle^{-1}$. Noting that $|v-v'|^2 = \frac 1 2 |v-v_*|^2 (1-\cos\theta)$ and that $|v|^2 + |v_*|^2 = |v'|^2 + |v_*'|^2$, we bound the third term by
\[
\Psi_a^2(x,v) |v-v_*|^{2+\gamma} \int_{{\mathbb S}^2} \Theta_a(x,v,\tilde{v}) (1-\cos\theta) |\theta|^{-2-2s} \, \mathrm{d} \sigma,
\]
with
\[
\Theta_a(x,v,\tilde v) := \frac{\langle \tilde{v} \rangle^{2n}}{\langle v\rangle^{2n}} \frac{\phi_a(x,\tilde{v})}{\phi_a(x,v)} ( \langle \tilde{v} \rangle^{-2} + \phi_a(x,\tilde{v})).
\]
Note that, since $\tilde{v}$ depends on $v'$, it also implicitly depends on $\sigma$. To estimate $\Theta_a$, we split into three cases:

\noindent {\it Case 1: $|\tilde{v}| \geq |v|/2$.} In this case, $\phi_a(x,\tilde{v}) / \phi_a(x,v) \lesssim 1$. Using this, as well as $\langle \tilde{v} \rangle^2 \lesssim \langle v\rangle^2 + \langle v_* \rangle^2$, we obtain
\[
\Theta_a(x,v,\tilde{v}) \lesssim \left(1 + \frac{\langle v_* \rangle^{2n}}{\langle v\rangle^{2n}} \right) \langle v\rangle^{-2}.
\]

\noindent {\it Case 2: $|\tilde{v}| < |v|/2$ and $|x-a| \geq |v|$.} Here again $\phi_a(x,\tilde{v}) / \phi_a(x,v) \lesssim 1$ (due to the size of $|x-a|$) and also $\phi_a(x,\tilde{v}) \leq \langle v\rangle^{-2}$, so we have
\[
\Theta_a(x,v,\tilde{v}) \lesssim \langle v\rangle^{-2}.
\]

\noindent {\it Case 3: $|\tilde{v}| < |v|/2$ and $|x-a| < |v|$.} First, note that
\[
\langle \tilde{v} \rangle^{2n} \phi_a(x,\tilde{v}) (\langle \tilde{v} \rangle^{-2} + \phi_a(x,\tilde{v})) \lesssim \langle \tilde{v} \rangle^{2n-4} \lesssim \langle v\rangle^{2n-4} + \langle v_*\rangle^{2n-4},
\]
which is always true, but in this case we also have that $\langle v\rangle^{-2n} \phi_a(x,v)^{-1} \lesssim \langle v\rangle^{2-2n}$. Thus
\[
\Theta_a(x,v,\tilde{v}) \lesssim \frac{\langle v\rangle^{2n-4} + \langle v_* \rangle^{2n-4}}{\langle v\rangle^{2n-2}} = \left( 1 + \frac{\langle v_* \rangle^{2n-4}}{\langle v\rangle^{2n-4}} \right) \langle v\rangle^{-2} \lesssim \left( 1 + \frac{\langle v_*\rangle^{2n}}{\langle v\rangle^{2n}} \right) \langle v\rangle^{-2}.
\]
The last inequality followed from $(\frac a b)^{2n-4} \leq 1 + (\frac a b)^{2n}$, since $n\geq 2$. Putting all three cases together yields, for all $x$, $v$, $v_*$, and $\sigma$,
\[
\Theta_a(x,v,\tilde{v}) \lesssim \frac{\langle v\rangle^{2n} + \langle v_*\rangle^{2n}}{\langle v\rangle^{2n+2}}.
\]
With \eqref{e.uniq3} and \eqref{e.uniq4}, we then have
\begin{equation}\label{uniq: I321}
\begin{split}
I_{32} - \frac 1 2 D_a &\lesssim \iiint_{\mathbb R^9} J_a^2 g_* \left( |v-v_*|^{1+\gamma}\langle v\rangle^{-1} + |v-v_*|^{2+\gamma} \int_{{\mathbb S}^2} \Theta_a(x,v,\tilde{v}) \frac{1-\cos\theta}{|\theta|^{2+2s}} \, \mathrm{d}\sigma \right)\, \mathrm{d} v_* \, \mathrm{d} v \, \mathrm{d} x \\
&\lesssim \| h \|_{X_n}^2 \| g \|_{L^\infty_q} \sup_{v \in \mathbb R^3} \left( \int_{\mathbb R^3} \frac{|v-v_*|^{1+\gamma}}{\langle v_* \rangle^q \langle v\rangle} \, \mathrm{d} v_* + \int_{\mathbb R^3} \frac{|v-v_*|^{2+\gamma} (\langle v\rangle^{2n} + \langle v_* \rangle^{2n})}{\langle v_* \rangle^q \langle v\rangle^{2n+2}} \, \mathrm{d} v_* \right) \\
&\lesssim \| h \|_{X_n}^2 \| g \|_{L^\infty_q}.
\end{split}
\end{equation}
We used $q>\gamma+4$ in the first term, and $q>2n+\gamma+5$ in the second term. Putting \eqref{uniq: I321} together with \eqref{uinq: coercive I31} yields the desired bound on $I_3$.
\end{proof}

We are now able to complete the proof of uniqueness.
\begin{proof}[Proof of \Cref{t:uniqueness}]
First, note that, instead of the assumption $f_{\rm in} \in C^\alpha_{\ell, x, v}$, we may assume that $\langle v\rangle^{m} f_{\rm in} \in C^\alpha_{\ell,x,v}$, for any fixed $m>0$. Indeed, up to decreasing $\alpha$ and increasing the exponent $q$ from our hypothesis $f_{\rm in} \in L^\infty_q$, we can interpolate using \Cref{l:moment-interpolation} to trade regularity for velocity decay. Define $\beta = \frac{\alpha}{1+2s}$ and $\beta' = \beta \frac {2s}{1+2s} = \frac {2s\alpha}{(1+2s)^2}$.
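For later use, we note the small computation
\[
\beta - \beta' = \beta \Big( 1 - \frac{2s}{1+2s} \Big) = \frac{\beta}{1+2s} > 0,
\]
so any factor of the form $t^{-1+(\beta-\beta')/(2s)}$ is integrable near $t=0$:
\[
\int_0^{T_U} t^{-1+(\beta-\beta')/(2s)} \, \mathrm{d} t = \frac{2s}{\beta - \beta'} \, T_U^{(\beta-\beta')/(2s)} < \infty.
\]
This integrability is what allows the Gr\"onwall argument at the end of the proof to go through.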
Next, choose $m>0$ large enough to satisfy the hypotheses of Lemma \ref{l:uniqI1} and Proposition \ref{prop:holder_propagation}, and define
\[
m' := m-\kappa-\beta/(1+2s) - \gamma-\beta(\gamma+2s)_+/(2s),
\]
where $\kappa>0$ is the constant from Theorem \ref{t:global-schauder}. Now we apply the Schauder estimate of Proposition \ref{p:nonlin-schauder} (which relies on \eqref{e.f-lower}), followed by the small-time H\"older estimate of Proposition \ref{prop:holder_propagation} to obtain, for $t\in [0,T_U]$,
\begin{equation}\label{e.f-estimate-uniq}
\begin{split}
\| \langle v\rangle^{m'} f (t)\|_{L^\infty_x C^{2s+\beta'}_{v}} &\lesssim \|f\|_{C^{2s+\beta'}_{\ell,m'}([t/2,t]\times\mathbb R^6)}\\
&\lesssim t^{-1+(\beta-\beta')/(2s)}\|f\|_{C^\beta_{\ell,m' + \kappa + \beta/(1+2s) + \gamma}([0,t]\times\mathbb R^6)}^{1+(\beta+2s)/\beta'}\\
&\lesssim t^{-1+(\beta-\beta')/(2s)} \|\langle v\rangle^{m}f_{\rm in}\|_{C^{\alpha}_{\ell,x,v}(\mathbb R^6)}^{1+(\beta+2s)/\beta'},
\end{split}
\end{equation}
since $\alpha = \beta(1+2s)$. Now, combining Lemma \ref{l:uniqI1} (with $\beta'$ playing the role of $\alpha$), Lemma \ref{l:uniqI2}, and Lemma \ref{l:uniqI3} with inequality \eqref{e.uinq_gron}, we have
\[
\frac{d}{dt} \| h(t) \|_{X_n}^2 - \| h(t) \|_{X_n}^2 \lesssim \| h(t) \|_{X_n}^2 \left( \| g(t) \|_{L^\infty_q} + \| f(t) \|_{L^\infty_q} + \| \langle v\rangle^m f (t)\|_{L^\infty_x C^{2s+\beta'}_{v}} \right).
\] Using \eqref{e.f-estimate-uniq} for the last term on the right, and absorbing the norm of $f_{\rm in}$ into the implied constant, we now have \[ \frac{d}{dt} \| h \|_{X_n}^2 \leq C \left( \| g(t) \|_{L^\infty_q} + \| f(t) \|_{L^\infty_q} + t^{- 1 + (\beta-\beta')/2s} \right) \| h \|_{X_n}^2. \] Since $\beta' < \beta$, the factor $t^{-1+(\beta-\beta')/(2s)}$ is integrable at $t=0$. By our assumptions that $\|g(t)\|_{L^\infty_q(\mathbb R^6)} \in L^1([0,T_U])$ and $\|f(t)\|_{L^\infty_q} \in L^\infty([0,T_U])$, and $\|h(0)\|_{X_n}^2 = 0$, we conclude that $\|h(t)\|_{X_n} \equiv 0$ for all $t\in [0,T_U]$ by Gr\"onwall's inequality. After replacing $\alpha(1+2s)$ with $\alpha$, we obtain the statement of Theorem \ref{t:uniqueness}. \end{proof} \section{Global existence near equilibrium}\label{s:global} In this section, we prove Corollary \ref{c:global}. The proof mainly follows the approach of \cite{silvestre2022nearequilibrium}. To pass from the local existence result of Theorem \ref{t:existence} to a global existence result near equilibrium, we must first show that the time of existence depends on the distance of $f_{\rm in}$ to the Maxwellian $M$: \begin{lemma}\label{l:Max} Let $M(x,v) = (2\pi)^{-3/2} e^{-|v|^2/2}$, and let $q>\gamma+2s+3$ be fixed. Given $T>0$ and $\varepsilon\in (0,\frac 1 2)$, there exists $\delta>0$ such that if \[ \|f_{\rm in} - M\|_{L^\infty_{q}(\mathbb T^3\times\mathbb R^3)} < \delta,\] then the solution $f$ to \eqref{e.boltzmann} guaranteed by Theorem \ref{t:existence} exists up to time $T$, and satisfies \[ \|f(t,\cdot,\cdot) - M \|_{L^\infty_q(\mathbb T^3\times\mathbb R^3)} < \varepsilon, \quad t\in [0,T].\] \end{lemma} \begin{proof} To begin, we make the restriction $\|f_{\rm in} - M\|_{L^\infty(\mathbb T^3\times\mathbb R^3)}< \frac 1 2$.
From Theorem \ref{t:existence}, the solution $f$ exists on a time interval $[0,T_f]$, with $T_f$ depending only on $\|f_{\rm in}\|_{L^\infty_q(\mathbb T^3\times\mathbb R^3)} \leq \|M\|_{L^\infty_q(\mathbb T^3\times\mathbb R^3)} + \frac 1 2$. In particular, $T_f$ is bounded below by a constant depending only on $q$. Writing $f = M+\tilde f$, we have the following equation for $\tilde f$: \[ \partial_t \tilde f + v\cdot \nabla_x \tilde f = Q(M+\tilde f, M+\tilde f) = Q(M+\tilde f, \tilde f) + Q(\tilde f, M), \quad (t,x,v) \in [0,T_f]\times\mathbb T^3\times\mathbb R^3,\] since $Q(M,M) = 0$. We will derive an upper bound for $\|\tilde f\|_{L^\infty_q}$ using a barrier argument similar to the proof of Lemma \ref{l:simple-bound}. With $T$ and $\varepsilon$ as in the statement of the lemma, let $\delta, \beta>0$ be two constants such that \[ \delta e^{\beta T} < \varepsilon.\] The specific values of $\delta$ and $\beta$ will be chosen later. Defining $g(t,x,v) =\delta e^{\beta t} \langle v\rangle^{-q}$, and taking $\delta > \|f_{\rm in}-M\|_{L^\infty_{q}(\mathbb T^3\times \mathbb R^3)}$, we have $|\tilde f(0,x,v)| < g(0,x,v)$ for all $x$ and $v$. We claim $\tilde f(t,x,v)<g(t,x,v)$ in $[0,\min(T,T_f)]\times\mathbb T^3\times\mathbb R^3$. If not, then by making $q$ slightly smaller, but still larger than $\gamma+2s+3$, we ensure the function $\tilde f$ decays in $v$ at a polynomial rate faster than $\langle v\rangle^{-q}$. Together with the compactness of the spatial domain $\mathbb T^3$, this implies there is a first crossing point $(t_{\rm cr}, x_{\rm cr}, v_{\rm cr})$ with $t_{\rm cr}>0$, where $\tilde f(t_{\rm cr},x_{\rm cr},v_{\rm cr}) = g(t_{\rm cr},x_{\rm cr},v_{\rm cr})$. At this point, one has, as in the proof of Lemma \ref{l:simple-bound}, \begin{equation}\label{e.crossing2} \partial_t g\leq Q(M+\tilde f,\tilde f) + Q(\tilde f, M) \leq Q(M+\tilde f,g) + Q(\tilde f, M).
\end{equation} Lemma \ref{l:Q-polynomial} implies, at $(t_{\rm cr}, x_{\rm cr},v_{\rm cr})$, \begin{equation} Q(M+ \tilde f,g) = \delta e^{\beta t_{\rm cr}} Q(M+\tilde f,\langle \cdot \rangle^{-q}) \leq C \delta e^{\beta t_{\rm cr}} \|M+\tilde f(t_{\rm cr}, \cdot,\cdot)\|_{L^\infty_{q}(\mathbb T^3 \times \mathbb R^3)} \langle v_{\rm cr}\rangle^{-q}. \end{equation} Since $\tilde f(t_{\rm cr},x_{\rm cr},v_{\rm cr}) = g(t_{\rm cr}, x_{\rm cr},v_{\rm cr}) = \delta e^{\beta t_{\rm cr}} \langle v_{\rm cr}\rangle^{-q} < \frac 1 2 \langle v_{\rm cr}\rangle^{-q}$, and $(t_{\rm cr}, x_{\rm cr},v_{\rm cr})$ is the location of a maximum of $\langle v\rangle^q\tilde f$ in $(x,v)$ space, we conclude $\|\tilde f(t_{\rm cr},\cdot,\cdot)\|_{L^\infty_q(\mathbb T^3\times\mathbb R^3)} < \frac 1 2$. This implies $\|M+\tilde f(t_{\rm cr},\cdot,\cdot)\|_{L^\infty_q(\mathbb T^3\times\mathbb R^3)} \leq C$ for a constant $C$ depending only on $q$. We therefore have \begin{equation}\label{e.C1} Q(M+ \tilde f,g) \leq C \delta e^{\beta t_{\rm cr}} \langle v_{\rm cr}\rangle^{-q}. \end{equation} For the term $Q(\tilde f,M)$, which appeared as a result of recentering around $M$, we write $Q(\tilde f,M) = Q_{\rm s}(\tilde f,M) + Q_{\rm ns}(\tilde f,M)$. The singular part is handled by \cite[Lemma 3.8]{silvestre2022nearequilibrium}, whose proof does not depend on the sign of $\gamma+2s$ and is therefore valid under our assumptions. This lemma gives \begin{equation}\label{e.C2} |Q_{\rm s}(\tilde f, M)(t_{\rm cr}, x_{\rm cr},v_{\rm cr})|\leq C \delta e^{\beta t_{\rm cr}} \langle v_{\rm cr}\rangle ^{-q+\gamma}, \quad \text{ if } |v_{\rm cr}|\geq R, \end{equation} for universal constants $C, R>0$.
On the other hand, if $|v_{\rm cr}|< R$, the cruder estimate of Lemma \ref{l:C2Linfty} yields \begin{equation}\label{e.C3} \begin{split} |Q_{\rm s}(\tilde f,M)(t_{\rm cr}, x_{\rm cr},v_{\rm cr})| &\leq C \left(\int_{\mathbb R^3} \tilde f(v_{\rm cr}+w)|w|^{\gamma+2s}\, \mathrm{d} w\right) \|M\|_{L^\infty(\mathbb R^3)}^{1-s} \|D_v^2 M\|_{L^\infty(\mathbb R^3)}^s \\ &\leq C \|\tilde f\|_{L^\infty_q} \langle R\rangle^{(\gamma+2s)_+} \leq C\delta e^{\beta t_{\rm cr}} \langle v_{\rm cr}\rangle^{-q}, \end{split} \end{equation} since $\langle R\rangle^{(\gamma+2s)_+} \approx 1 \approx \langle v_{\rm cr}\rangle^{-q}$ and $\|\tilde f(t_{\rm cr},\cdot,\cdot)\|_{L^\infty_q(\mathbb T^3\times\mathbb R^3)} \leq \delta e^{\beta t_{\rm cr}}$. For the nonsingular part, we have \begin{equation}\label{e.C4} \begin{split} |Q_{\rm ns}(\tilde f, M)(t_{\rm cr},x_{\rm cr},v_{\rm cr})| &\leq C M(v_{\rm cr}) \int_{\mathbb R^3} \tilde f(t_{\rm cr}, x_{\rm cr}, w) |v_{\rm cr}+w|^\gamma \, \mathrm{d} w\\ & \leq C \|\tilde f(t_{\rm cr},\cdot,\cdot)\|_{L^\infty_{q}(\mathbb T^3\times\mathbb R^3)} \langle v_{\rm cr}\rangle^{-q} \leq C \delta e^{\beta t_{\rm cr}}\langle v_{\rm cr}\rangle^{-q}, \end{split} \end{equation} since $q>\gamma+2s+3>\gamma+3$ and $M$ decays much faster than $\langle v\rangle^{-q}$. Collecting all our inequalities and recalling \eqref{e.crossing2} and $\partial_t g = \delta\beta e^{\beta t} \langle v\rangle^{-q}$, we now have \[ \delta \beta e^{\beta t_{\rm cr}} \langle v_{\rm cr} \rangle^{-q} \leq C_0 \delta e^{\beta t_{\rm cr}} \langle v_{\rm cr}\rangle^{-q},\] where $C_0$ is the sum of the constants in \eqref{e.C1}, \eqref{e.C2}, \eqref{e.C3}, and \eqref{e.C4}. This implies a contradiction if we choose $\beta = 2C_0$.
We conclude $\tilde f(t,x,v)< g(t,x,v)$ on $[0,\min(T,T_f)]\times\mathbb T^3\times\mathbb R^3$, as claimed. A similar argument using $-g$ as a lower barrier for $\tilde f$ gives \[|\tilde f(t,x,v)|< g(t,x,v) = \delta e^{2C_0 t} \langle v\rangle^{-q}, \quad 0\leq t \leq \min(T,T_f).\] Finally, we choose $\delta = \min(\frac 1 4, \varepsilon e^{-2 C_0 T})$, so that $\langle v\rangle^q |\tilde f(t,x,v)| < \delta e^{2C_0 T} \leq \varepsilon$ whenever $t\leq \min(T,T_f)$, and $\|f_{\rm in} - M\|_{L^\infty_q(\mathbb T^3\times\mathbb R^3)} < \delta < \frac 1 2$. The above barrier argument does not depend quantitatively on the time of existence $T_f$. If $T> T_f$, then we have shown $\|\tilde f(T_f, \cdot, \cdot)\|_{L^\infty_q(\mathbb T^3\times\mathbb R^3)} < \varepsilon < \frac 1 2$, and by applying Theorem \ref{t:existence} again with initial data $f(T_f,\cdot,\cdot)$, we can continue the solution to the time interval $[0,2T_f]$. Repeating finitely many times, we continue the solution to $[0,NT_f]\times\mathbb T^3\times\mathbb R^3$ where $NT_f>T$, with $\|\tilde f\|_{L^\infty_q([0,T]\times\mathbb T^3\times\mathbb R^3)} < \varepsilon$, as desired. \end{proof} Next, we need the main result of \cite{desvillettes2005global}. The result in \cite{desvillettes2005global} is stated for solutions defined on $[0,\infty)\times\mathbb T^3\times\mathbb R^3$, and gives conditions under which solutions converge to a Maxwellian $M$ as $t\to\infty$.
Since the estimates at a fixed time $t$ do not depend on any information about the solution for times greater than $t$, we easily conclude (as in \cite{silvestre2022nearequilibrium}) the following restatement that applies to solutions defined on a finite time interval: \begin{theorem}\label{t:dv} Let $f\geq 0$ be a solution to \eqref{e.boltzmann} on $[0,T]\times \mathbb T^3\times \mathbb R^3$ satisfying, for a family of positive constants $C_{k,q}$, \[ \|f\|_{L^\infty([0,T],H^{k}_{q}(\mathbb T^3\times \mathbb R^3))} \leq C_{k,q} \quad \text{ for all } k,q\geq 0,\] and also satisfying the pointwise lower bound \[ f(t,x,v) \geq K_0 e^{-A_0 |v|^2}, \quad \text{ for all } (t,x,v).\] Then for any $p>0$ and any $k,q>0$, there exists $C_p>0$ depending on $\gamma$, $s$, $A_0$, $K_0$, the constant $c_b$ in \eqref{e.b-bounds}, and $C_{k',q'}$ for sufficiently large $k'$ and $q'$, such that for all $t \in [0,T]$, \[ \|f(t,\cdot,\cdot)- M\|_{H^{k}_{q}(\mathbb T^3\times \mathbb R^3)} \leq C_p t^{-p},\] where $M$ is the Maxwellian with the same total mass, momentum, and energy as $f$. \end{theorem} Along with Lemma \ref{l:Max} and Theorem \ref{t:dv}, the proof of Corollary \ref{c:global} relies on the global regularity estimates of \cite{imbert2020smooth}, which we extended to $\gamma +2s< 0$ in Proposition \ref{p:higher-reg}. We are now ready to give the proof: \begin{proof}[Proof of Corollary \ref{c:global}] First, let us assume $f_{\rm in}\in L^\infty_q(\mathbb T^3\times\mathbb R^3)$ for all $q>0$, i.e. $f_{\rm in}$ decays pointwise faster than any polynomial. For $q_0>5$ fixed, use Lemma \ref{l:Max} to select $\delta_1>0$ such that $f$ exists on $[0,1]\times \mathbb T^3\times\mathbb R^3$ with $\|f(t)-M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)} < \varepsilon$ for all $t\in [0,1]$, whenever $\|f_{\rm in}-M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)} < \delta_1$.
By our Theorem \ref{t:existence} and the rapid decay of $f_{\rm in}$, the solution $f$ is $C^\infty$ in all variables. Now, if the conclusion of the theorem is false, there is a first time $t_{\rm cr} >1$ such that \begin{equation}\label{e.equilibrium-crossing} \|f(t_{\rm cr},\cdot,\cdot)-M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)} = \varepsilon. \end{equation} Since $\|f-M\|_{L^\infty_{q_0}([0,t_{\rm cr}]\times\mathbb T^3\times\mathbb R^3)} \leq \varepsilon$, the solution $f$ satisfies uniform lower bounds for $(t,x,v)\in [0,t_{\rm cr}]\times B_1(0)\times B_1(0)$. These lower bounds, together with the bound on $\|f\|_{L^\infty_{q_0}}$, imply via Proposition \ref{p:higher-reg} that $f$ satisfies uniform estimates in $H^k_q([1,t_{\rm cr}]\times \mathbb T^3\times\mathbb R^3)$ for all $k,q>0$, with constants independent of $t$ and depending only on $q_0$ and the norms $\|f_{\rm in}\|_{L^\infty_q}$ of the initial data for $q>0$. On the time interval $[1,t_{\rm cr}]$, the function $f$ satisfies the hydrodynamic bounds \begin{equation}\label{e.hydro} 0< m_0\leq \int_{\mathbb R^3} f(t,x,v) \, \mathrm{d} v \leq M_0, \,\int_{\mathbb R^3} |v|^2f(t,x,v) \, \mathrm{d} v \leq E_0, \, \int_{\mathbb R^3} f(t,x,v) \log f(t,x,v) \, \mathrm{d} v \leq H_0, \end{equation} uniformly in $t$ and $x$, for some constants $m_0, M_0, E_0, H_0$ depending only on $q_0$, as a result of the inequality $\|f-M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)}\leq \varepsilon< \frac 1 2$. (This follows from a quick computation, or we may apply \cite[Lemma 2.3]{silvestre2022nearequilibrium}.) From \cite{imbert2020lowerbounds}, this implies\footnote{When $\gamma+2s<0$, the lower bounds of \cite{imbert2020lowerbounds} require a bound on the $L^\infty$ norm in addition to the bounds in \eqref{e.hydro}.
This $L^\infty$ bound also clearly follows from the inequality $\|f-M\|_{L^\infty_{q_0}} \leq \varepsilon$.} the lower Gaussian bound $f(t,x,v)\geq c_1 e^{-c_2|v|^2}$, with $c_1,c_2>0$ depending only on $q_0$. The hypotheses of \cite{desvillettes2005global}, restated above as Theorem \ref{t:dv}, are satisfied, so choosing $p=1$, we have \[ \|f(t_{\rm cr},\cdot,\cdot)-M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)} \leq C_1 t_{\rm cr}^{-1},\] where $C_1>0$ depends only on $\gamma$, $s$, $c_b$, $q_0$, and the norm of $f_{\rm in}$ in $L^\infty_{q_1}(\mathbb T^3\times\mathbb R^3)$ for some $q_1$ depending on $q_0$. Combining this with \eqref{e.equilibrium-crossing} gives $t_{\rm cr} \leq C_1/\varepsilon$. Letting $T = C_1/\varepsilon+1$, we use Lemma \ref{l:Max} again to select $\delta_2>0$ such that $f$ exists on $[0,T]\times \mathbb T^3\times\mathbb R^3$, with \[\|f(t,\cdot,\cdot)-M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)}< \varepsilon,\] for all $t\in [0,T]$. This inequality implies the first crossing time $t_{\rm cr}> C_1/\varepsilon+1$, a contradiction with $t_{\rm cr}\leq C_1/\varepsilon$. Therefore, if $\|f_{\rm in}- M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)} < \delta := \min(\delta_1, \delta_2)$, we conclude there is no crossing time $t_{\rm cr}$, and $\|f(t,\cdot,\cdot)-M\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)}< \varepsilon$ holds for all $t$ such that the solution $f$ exists. In particular, $\|f(t)\|_{L^\infty_{q_0}(\mathbb T^3\times\mathbb R^3)}$ is bounded by a constant independent of $t$, and Theorem \ref{t:existence} implies the solution can be extended for all time. Next, we consider the general case, where $f_{\rm in}$ decays at only a finite polynomial rate. Looking at the proof of the previous case, we see that the choice of $\delta$ depends only on $\varepsilon$, $q_0$, and the size of $f_{\rm in}$ in the $L^\infty_{q_1}$ norm for some $q_1$ depending on $q_0$.
Therefore, if $f_{\rm in} \in L^\infty_{q_1}(\mathbb T^3\times\mathbb R^3)$ and satisfies all the hypotheses of Corollary \ref{c:global}, we can approximate $f_{\rm in}$ by cutting off large velocities, apply the rapid-decay case considered above to obtain global solutions, and take the limit as the cutoff vanishes. We omit the details of this standard approximation procedure. \end{proof} \appendix \section{Change of variables}\label{s:cov-appendix} This appendix is devoted to the proof of Proposition \ref{p:very-soft-kernel}, which establishes the properties of the integral kernel $\bar K_f$ defined in \eqref{e.barKf} for the case $\gamma + 2s < 0$. Noting that the case $\gamma + 2s \geq 0$ has received a full treatment in~\cite{imbert2020smooth}, we only consider the regime \begin{equation} \gamma+2s<0 \end{equation} throughout this appendix. In order to prove Proposition \ref{p:very-soft-kernel}, we need to verify the following for the kernel $\bar K_f(t,x,v,v')$: coercivity, boundedness, cancellation, and H\"older continuity in $(t,x,v)$. The proof strategies and notation broadly follow \cite[Section 5]{imbert2020smooth}, which addressed the case $\gamma+2s\in [0,2]$. However, the details are sufficiently different that it is necessary to provide full proofs. When $|v_0|\leq 2$, the change of variables is defined as $\mathcal T_0z = z_0\circ z$, i.e. a simple recentering around the origin. In this case, $\bar K_f$ inherits the properties of $K_f$, which satisfies suitable ellipticity properties on any bounded velocity domain; see Remark \ref{r:ellipticity}. Therefore, in this appendix we focus only on the case $|v_0|> 2$. Recalling the definition \eqref{e.T0-def} of the linear transformation $T_0$, we see that in the current regime, \[ T_0 (av_0+w) = |v_0|^{\frac{\gamma+2s}{2s}}\left( \frac a {|v_0|} v_0 + w\right), \quad \text{where } a\in \mathbb R, \ w\cdot v_0 = 0, \ \gamma+2s< 0.
\] When we import facts involving this linear transformation from \cite{imbert2020smooth}, we use the notation $T_0^+$ for the transformation $T_0$ as it is defined in the case $\gamma+2s\geq 0$. Then one has \begin{equation}\label{e.T0-cases} T_0 v = |v_0|^{\frac{\gamma+2s}{2s}} T_0^+ v, \quad \text{ when } \gamma +2s < 0. \end{equation} Note that the definition of $T_0^+$ as a linear transformation does not depend on $\gamma$ or $s$. The $T_0^+$ notation is intended only for use in the current appendix. In the following lemmas about the change of variables, we omit the dependence of $\bar f$ and $\bar K_f$ on $\bar t$ and $\bar x$, since the conditions all hold uniformly in $t$ and $x$. \subsection{Coercivity} With $A(v) \subset \mathbb S^2$ the subset of the unit sphere given by Lemma \ref{l:cone}, define $\Xi(v)$ to be the corresponding cone in $\mathbb R^3$, $\Xi(v) := \{w\in \mathbb R^3 : \frac w{|w|} \in A(v)\}$. \begin{lemma}[Transformed cone of non-degeneracy] Let $f$, $\delta$, $r$, and $v_m$ satisfy the assumptions of Lemma \ref{l:cone}. Fix $v_0 \in \mathbb R^3$ and $v\in B_2$, and define \[ \begin{split} \bar A(v) &= \{\sigma \in \mathbb S^2 : T_0\sigma/|T_0\sigma| \in A(v_0+T_0v)\},\\ \bar \Xi(v) &= \{w \in \mathbb R^3 : T_0 w \in \Xi(v_0+ T_0v)\}. \end{split} \] Then there are constants $\lambda, k>0$, depending only on $\delta$, $r$, and $v_m$ (but not on $v_0$ or $v$), such that \begin{itemize} \item $\bar K_f(v,v+w) \geq \lambda |w|^{-3-2s}$ whenever $w \in \bar \Xi(v)$; \item $\mathcal H^2(\bar A(v)) \geq k$, where $\mathcal H^2$ is the $2$-dimensional Hausdorff measure.
\end{itemize} \end{lemma} \begin{proof} For the first bullet point, Lemma \ref{l:cone} and the definition \eqref{e.barKf} of $\bar K_f$ imply that, for $w\in \bar \Xi(v)$, \begin{equation} \begin{split} \bar K_f(v,v+w) &= |v_0|^{2+\frac{3\gamma}{2s}} K_f(v_0+T_0 v, v_0 + T_0 v + T_0 w)\\ & \geq \lambda |v_0|^{2+(3\gamma)/(2s)} |\bar v|^{\gamma+2s+1}|T_0 w|^{-3-2s} \geq \lambda |w|^{-3-2s}, \end{split} \end{equation} since $|\bar v|\approx |v_0|$ and $|T_0w|\leq |v_0|^{\frac{\gamma+2s}{2s}}|w|$. For the second bullet point, use \eqref{e.T0-cases} to write $v_0 + T_0 v = v_0 + T_0^+ \tilde v$ with $\tilde v = |v_0|^{\frac{\gamma+2s}{2s}} v \in B_2$. Next, recall the following fact from \cite[Lemma 5.6]{imbert2020smooth}: for any $\tilde v\in B_2$, \begin{equation}\label{e.H2} \mathcal H^2(\{\sigma \in \mathbb S^2 : T_0^+\sigma/|T_0^+\sigma| \in A(v_0 + T_0^+ \tilde v)\}) \geq k, \end{equation} for some $k>0$ depending on the constants of Lemma \ref{l:cone}, and independent of $v_0$. We note that the statement and proof of estimate \eqref{e.H2} do not depend on the values of $\gamma$ and $s$. For any $v\in B_2$, we conclude from \eqref{e.H2}, using $T_0^+ \tilde v = T_0 v$ and $T_0^+\sigma/|T_0^+\sigma| = T_0\sigma/|T_0\sigma|$, that \[ \mathcal H^2 (\bar A(v)) = \mathcal H^2(\{\sigma \in \mathbb S^2 : T_0\sigma/|T_0\sigma| \in A(v_0 + T_0v)\}) \geq k, \] as desired. \end{proof} \subsection{Boundedness conditions} Next, we address the upper ellipticity bounds for the kernel $\bar K_f$. The following lemma corresponds to \cite[Lemma 5.10]{imbert2020smooth}, but the proof must be modified to account for the extra powers of $|v_0|$ in the definition \eqref{e.barKf} of $\bar K_f$.
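For later use, we record the Jacobian computation behind that prefactor (a routine check from the scaling of $T_0$ described above): since $T_0$ scales by $|v_0|^{\frac{\gamma+2s}{2s}}$ in the two directions orthogonal to $v_0$, and by the additional factor $|v_0|^{-1}$ in the $v_0$ direction,
\[
\det T_0 = \left( |v_0|^{\frac{\gamma+2s}{2s}} \right)^{3} |v_0|^{-1} = |v_0|^{2+\frac{3\gamma}{2s}}.
\]
Hence the substitution $u = T_0(v'-v)$ satisfies $\mathrm{d} u = |v_0|^{2+\frac{3\gamma}{2s}} \, \mathrm{d} v'$, so this Jacobian exactly cancels the prefactor in \eqref{e.barKf}; this cancellation is used repeatedly in the computations below.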
\begin{lemma}\label{l:5-10} For $v_0 \in \mathbb R^3\setminus B_2$, $v\in B_2$, and $r>0$, \[ \int_{\mathbb R^3\setminus B_r(v)} \bar K_f(v,v') \, \mathrm{d} v' \leq \bar \Lambda r^{-2s}, \] with \[ \bar \Lambda := |v_0|^{-\gamma-2s} \int_{\mathbb R^3} f(\bar v + w)\left( |v_0|^2 - \left(v_0\cdot \frac w {|w|}\right)^2 + 1\right)^s |w|^{\gamma+2s} \, \mathrm{d} w. \] \end{lemma} \begin{proof} From the definition \eqref{e.barKf} of $\bar K_f$, we have \[ \begin{split} \int_{\mathbb R^3\setminus B_r(v)} \bar K_f(v,v') \, \mathrm{d} v' &= |v_0|^{2+(3\gamma)/(2s)} \int_{\mathbb R^3\setminus B_r(v)} K_f(\bar v, \bar v') \, \mathrm{d} v'\\ &= \int_{\mathbb R^3\setminus T_0(B_{ r})} K_f(\bar v, \bar v + u) \, \mathrm{d} u, \end{split} \] from the change of variables $u = \bar v' - \bar v = T_0(v' - v)$. Following \cite{imbert2020smooth}, we use \eqref{e.kernel} to write \begin{equation}\label{e.Kf-Er} \begin{split} \int_{\mathbb R^3\setminus B_r(v)} \bar K_f(v,v') \, \mathrm{d} v' &\lesssim \int_{u\in \mathbb R^3\setminus T_0(B_{ r})} |u|^{-3-2s} \int_{w\perp u} f(\bar v+w) |w|^{1+\gamma+2s} \, \mathrm{d} w \, \mathrm{d} u\\ &= \int_{w\in \mathbb R^3} \left(\int_{u\perp w, u\in \mathbb R^3\setminus T_0(B_{ r})} |u|^{-2-2s} \, \mathrm{d} u\right) f(\bar v+w)|w|^{\gamma+2s} \, \mathrm{d} w, \end{split} \end{equation} where we used \[ \int_u \int_{w\perp u} (\ldots) \, \mathrm{d} w \, \mathrm{d} u = \int_w \int_{u\perp w} (\ldots) \frac {|u|}{|w|} \, \mathrm{d} u \, \mathrm{d} w. \] Recall that $T_0(B_{r})$ is an ellipsoid with radius $\bar r := r |v_0|^\frac{\gamma+2s}{2s}$ in directions orthogonal to $v_0$ and radius $\bar r/|v_0| = r|v_0|^{\gamma/(2s)}$ in the $v_0$ direction.
Its intersection with the plane $\{u\perp w\}$ is an ellipse, whose smallest radius is \[ \rho := \frac {r|v_0|^\frac{\gamma+2s}{2s}}{ \sqrt{|v_0|^2\left(1 - \left(\frac{v_0\cdot w}{|v_0||w|}\right)^2\right) + \left(\frac{v_0\cdot w}{|v_0| |w|}\right)^2}}. \] This follows from formula (5.10) in \cite{imbert2020smooth}, with $\bar r = r|v_0|^\frac{\gamma+2s}{2s}$ replacing $r$. We therefore have $\mathbb R^3\setminus E_r \subset \mathbb R^3\setminus B_\rho$, and \[ \begin{split} \int_{u\perp w, u\in \mathbb R^3\setminus E_r} |u|^{-2-2s} \, \mathrm{d} u \lesssim \rho^{-2s}&\leq r^{-2s} |v_0|^{-\gamma-2s} \left( |v_0|^2 \left(1-\left(\frac{v_0\cdot w}{|v_0||w|}\right)^2\right) + \left(\frac{v_0\cdot w}{|v_0| |w|}\right)^2\right)^s\\ &\leq r^{-2s} |v_0|^{-\gamma-2s} \left( |v_0|^2 - \left( v_0\cdot \frac w {|w|}\right)^2 + 1\right)^s. \end{split} \] Combining this expression with \eqref{e.Kf-Er}, the conclusion of the lemma follows. \end{proof} \begin{lemma}[Boundedness conditions]\label{l:boundedness} If $\|f\|_{L^\infty_q(\mathbb R^3)}< \infty$ for some $q> 2s+3$, then the kernel $\bar K_f$ satisfies the two conditions \begin{align} \int_{\mathbb R^3\setminus B_r(v)} \bar K_f(v,v') \, \mathrm{d} v' \leq \Lambda r^{-2s}, \quad \text{ for all } v \in B_2 \text{ and } r>0,\label{e.first-boundedness}\\ \int_{\mathbb R^3\setminus B_r(v')} \bar K_f(v,v') \, \mathrm{d} v \leq \Lambda r^{-2s}, \quad \text{ for all } v' \in B_2 \text{ and } r>0,\label{e.second-boundedness} \end{align} for a constant $\Lambda\lesssim \|f\|_{L^\infty_q(\mathbb R^3)}$. In particular, $\Lambda$ is independent of the base point $v_0$. \end{lemma} \begin{proof} The proof of \eqref{e.first-boundedness} begins with estimating the expression $\bar \Lambda$ in Lemma \ref{l:5-10} from above.
First, note that \[\left(|v_0|^2 - \left(v_0 \cdot \frac w {|w|}\right)^2 + 1\right)^s \lesssim \left(|v_0|^2 - \left(v_0 \cdot \frac w {|w|}\right)^2\right)^{s} + 1,\] and we have \[ \bar \Lambda \leq \int_{\mathbb R^3} f(\bar v + w) \left(|v_0|^2 - \left(v_0 \cdot \frac w {|w|}\right)^2\right)^{s}\left(\frac {|w|}{|v_0|}\right)^{\gamma+2s} \, \mathrm{d} w + \int_{\mathbb R^3} f(\bar v + w) \left(\frac {|w|}{|v_0|}\right)^{\gamma+2s} \, \mathrm{d} w =: J_1 + J_2.\] To bound $J_2$, the convolution estimate of Lemma \ref{l:convolution} gives $J_2 \lesssim \|f\|_{L^\infty_q} |\bar v|^{\gamma+2s} |v_0|^{-\gamma-2s} \lesssim \|f\|_{L^\infty_q}$, since $|\bar v| \approx |v_0|$ and $q > \gamma+2s+3$. For $J_1$, letting $w = \alpha v_0/|v_0| + b$ with $b\cdot v_0 = 0$, one has \[ \left(\frac{|w|}{|v_0|}\right)^{\gamma+2s}\left( |v_0|^2 - \left( v_0 \cdot \frac w {|w|}\right)^2\right)^{s} = |v_0|^{-\gamma} |b|^{2s}|w|^\gamma.\] Noting that $|b|\leq |v_0+w|$, we have \[ \begin{split} \int_{\mathbb R^3} f(\bar v+w) |v_0|^{-\gamma} |b|^{2s} |w|^\gamma \, \mathrm{d} w &\leq |v_0|^{-\gamma} \|f\|_{L^\infty_q(\mathbb R^3)} \int_{\mathbb R^3} \langle \bar v+w\rangle^{-q} |v_0+w|^{2s} |w|^{\gamma} \, \mathrm{d} w\\ &\lesssim |v_0|^{-\gamma} \|f\|_{L^\infty_q(\mathbb R^3)} \int_{\mathbb R^3} \langle v_0+w\rangle^{-q+2s} |w|^{\gamma} \, \mathrm{d} w \lesssim \|f\|_{L^\infty_q(\mathbb R^3)}, \end{split} \] using $\langle \bar v+w\rangle \approx \langle v_0+w\rangle$ and $q> \gamma+2s+3$. This establishes the upper bound $\bar \Lambda \lesssim \|f\|_{L^\infty_q(\mathbb R^3)}$. Combining this with Lemma \ref{l:5-10} concludes the proof of the first boundedness condition \eqref{e.first-boundedness}. To establish \eqref{e.second-boundedness}, we assume as usual that $|v_0|>2$.
For any $v'\in B_2$ and $r>0$, changing variables with $\bar v = v_0 + T_0 v$, we have \[ \begin{split} \int_{\mathbb R^3\setminus B_r(v')} \bar K_f(v,v') \, \mathrm{d} v &= |v_0|^{2+\frac{3\gamma}{2s}} \int_{\mathbb R^3\setminus B_r(v')} K_f(\bar v, \bar v') \, \mathrm{d} v = \int_{\mathbb R^3\setminus E_r(\bar v')} K_f(\bar v, \bar v') \, \mathrm{d} \bar v. \end{split} \] The last integral is estimated in the proof of \cite[Lemma 5.13]{imbert2020smooth}, up to choosing a different value of $r$. More specifically, our $E_r$ would be $E_{\bar r}$ with $\bar r = r|v_0|^{\frac{\gamma+2s}{2s}}$ in the notation of \cite{imbert2020smooth}. Therefore, their calculation (which does not depend on the sign of $\gamma+2s$) implies \[ \begin{split} & \int_{\mathbb R^3\setminus B_r(v')} \bar K_f(v,v') \, \mathrm{d} v\\ & \leq \int_{\mathbb R^3} f(\bar v'+u)|u|^\gamma \Big( (r|v_0|^\frac{\gamma+2s}{2s})^{-2s} |u|^{2s} \Big( 1+ |v_0|^2 - \frac{(v_0\cdot u)^2}{|u|^2} \Big)^s + (r|v_0|^\frac{\gamma+2s}{2s})^{-s} |u|^s \Big| \frac u {|u|}\cdot v_0\Big|^s \Big) \, \mathrm{d} u \\ &\leq I_1 + I_2, \end{split} \] where \[ \begin{split} I_1 &= r^{-2s} |v_0|^{-\gamma-2s} \int_{\mathbb R^3} f(\bar v' + u) |u|^{\gamma+2s} \Big(1+|v_0|^2 - \frac{(v_0\cdot u)^2}{|u|^2}\Big)^s \, \mathrm{d} u,\\ I_2 &= r^{-s} |v_0|^{-\gamma/2} \int_{\mathbb R^3} f(\bar v' + u) |u|^{\gamma+s} \, \mathrm{d} u. \end{split} \] The term $I_1$ is bounded by a constant times $\|f\|_{L^\infty_q(\mathbb R^3)}r^{-2s}$, by our estimate of $\bar\Lambda$ at the beginning of the current proof. For $I_2$, we have \[ I_2 \leq r^{-s} |v_0|^{-\gamma/2} \|f\|_{L^\infty_q(\mathbb R^3)} \int_{\mathbb R^3} \langle \bar v' + u\rangle^{-q} |u|^{\gamma+s} \, \mathrm{d} u \lesssim r^{-s} |v_0|^{-\gamma/2} \|f\|_{L^\infty_q(\mathbb R^3)} \langle \bar v'\rangle^{\gamma+s}, \] since $q >\gamma+2s+3>\gamma+s+3$.
By assumption, $v' \in B_2$, which implies $\langle \bar v'\rangle = \langle v_0 + T_0v'\rangle \approx \langle v_0\rangle$, and hence $|v_0|^{-\gamma/2}\langle \bar v'\rangle^{\gamma+s} \lesssim |v_0|^{-\gamma/2+\gamma+s} = |v_0|^{(\gamma+2s)/2}$. Since $\gamma+2s<0$, the factor $|v_0|^{(\gamma+2s)/2}$ is bounded by 1, and we conclude $I_2 \lesssim r^{-s} \|f\|_{L^\infty_q(\mathbb R^3)}$. Note that values of $r$ greater than $2$ are irrelevant for \eqref{e.second-boundedness}, because the kernel $\bar K_f(v,v')$ is only defined for $v \in B_1$. For large $r$, the domain of integration in \eqref{e.second-boundedness} is empty. Therefore, we have $r^{-s} \lesssim r^{-2s}$, and the proof is complete. \end{proof} We remark that the bound of $I_2$ in the proof of the previous lemma is the only place where our definition \eqref{e.cov-} of the change of variables would not easily generalize to the case $\gamma+2s\geq 0$. We also have the following alternative characterization of the upper bounds for $\bar K_f$, which is needed as one of the hypotheses of Theorem \ref{t:schauder}. It follows from \eqref{e.first-boundedness} in Lemma \ref{l:boundedness} in the same way that Lemma \ref{l:K-upper-bound-2} above follows from Lemma \ref{l:K-upper-bound}: \begin{corollary}\label{c:w-squared} Let $f\in L^\infty_q([0,T]\times\mathbb R^6)$ for some $q>2s+3$. For $|v_0|>2$ and $z= (t,x,v)\in Q_1$, let $\bar K_{f,z}(w) = \bar K_f(t, x,v, v+w)$. Then for any $r>0$, there holds \[ \int_{B_r} \bar K_{f,z} (w) |w|^2 \, \mathrm{d} w \leq C \|f(\bar t,\bar x,\cdot)\|_{L^\infty_q(\mathbb R^3)} r^{2-2s},\] where the constant $C$ depends only on $\gamma$ and $s$. \end{corollary} \subsection{Cancellation conditions} Next, we establish two cancellation conditions for $\bar K_f$, which say that $\bar K_f(v,v')$ is not too far from being symmetric, on average. As a technical tool in proving these lemmas, one needs the following ``modified principal value'' result, which allows one to change variables according to $v' \mapsto \bar v'$ without altering the cancellation involved in defining principal value integrals.
This lemma is proven in \cite[Lemmas 5.14 and 5.16]{imbert2020smooth}, with an argument that does not use the sign of $\gamma+2s$. Therefore, the lemma remains valid in our context. \begin{lemma}\label{l:modified} Let $\gamma+2s<0$, and let $T_0$ be defined as above. \begin{enumerate} \item[(a)] If $f:\mathbb R^3\to \mathbb R$ is such that $\langle v\rangle^{\gamma+2s} D^2 f \in L^1(\mathbb R^3)$, then \[ \lim_{\rho\to 0+} \int_{B_\rho \setminus T_0(B_\rho)} (K_f(\bar v,\bar v+w) - K_f(\bar v+w,\bar v)) \, \mathrm{d} w = 0.\] \item[(b)] If $f:\mathbb R^3\to \mathbb R$ is such that $\langle v\rangle^{\gamma+2s}\nabla f \in L^1(\mathbb R^3)$, then \[ \lim_{\rho\to 0+} \int_{B_\rho\setminus T_0(B_\rho)} w K_f(\bar v+w, \bar v) \, \mathrm{d} w = 0.\] \end{enumerate} \end{lemma} This lemma is proven for $T_0^+$ rather than $T_0$ in \cite{imbert2020smooth}. However, $T_0$ and $T_0^+$ can easily be interchanged here, by rescaling the parameter $\rho$. Next, we prove the first cancellation condition, following the strategy of \cite[Lemma 5.15]{imbert2020smooth}: \begin{lemma}[First Cancellation Condition]\label{l:first-cancellation} Fix $q>3+\gamma$. Suppose that $f\in L^\infty_q(\mathbb R^3)$ and $\langle v\rangle^{\gamma+2s} D_v^2 f \in L^1(\mathbb R^3)$. Then the kernel $\bar K_f$ satisfies \[ \begin{split} \left| {\rm p.v.} \int_{\mathbb R^3} (\bar K_f(v,v') - \bar K_f(v',v)) \, \mathrm{d} v'\right| &\leq C \|f\|_{L^\infty_q(\mathbb R^3)},\end{split} \] where $C>0$ is universal. \end{lemma} \begin{proof} If $|v_0|\leq 2$, then the conclusion follows from the classical Cancellation Lemma, stated for example in \cite[Lemma 3.6]{imbert2016weak}.
If $|v_0|>2$, then we write, for any $v\in B_2$, \[ \begin{split} {\rm p.v.}\int_{\mathbb R^3}& (\bar K_f(v,v') - \bar K_f(v',v)) \, \mathrm{d} v'\\ &= |v_0|^{2+\frac{3\gamma}{2s}} {\rm p.v.}\int_{\mathbb R^3} (K_f(\bar v, \bar v') - K_f(\bar v',\bar v)) \, \mathrm{d} v'\\ &= |v_0|^{2+\frac{3\gamma}{2s}} \lim_{R\to 0+} \int_{\mathbb R^3\setminus B_{R}} (K_f(\bar v, \bar v + T_0 v') - K_f(\bar v + T_0 v', \bar v)) \, \mathrm{d} v'\\ &= \lim_{R\to 0+} \int_{\mathbb R^3\setminus T_0(B_R)} (K_f(\bar v, \bar v + w) - K_f(\bar v+ w, \bar v)) \, \mathrm{d} w\\ &= \lim_{R\to 0+} \int_{\mathbb R^3\setminus B_{R}} (K_f(\bar v, \bar v + w) - K_f(\bar v+ w, \bar v)) \, \mathrm{d} w\\ &= {\rm p.v.} \int_{\mathbb R^3} (K_f(\bar v, \bar v + w) - K_f(\bar v+ w, \bar v)) \, \mathrm{d} w, \end{split} \] where we have used Lemma \ref{l:modified}(a) with $\rho = R|v_0|^{\gamma/(2s)+1}$ in the next-to-last equality. Next, we apply \cite[Lemma 3.6]{imbert2016weak} and obtain, for $q> \gamma+3$, \[ \left| {\rm p.v.}\int_{\mathbb R^3} (\bar K_f(v,v') - \bar K_f(v',v)) \, \mathrm{d} v' \right| \leq C\left(\int_{\mathbb R^3} f(z) |\bar v - z|^\gamma \, \mathrm{d} z\right) \leq C \|f\|_{L^\infty_q(\mathbb R^3)}, \] as desired. \end{proof} \begin{lemma}[Second Cancellation Condition] Fix $s\in [\frac 1 2, 1)$. Suppose that $f\in L^\infty_q(\mathbb R^3)$ with $q>3+2s$ and $\langle v\rangle^{\gamma+2s} \nabla_v f \in L^1(\mathbb R^3)$. Then for all $r\in [0,\frac 1 4]$ and $v\in B_{7/4}$, there holds \[ \left| {\rm p.v.} \int_{B_r(v)} (\bar K_f(v,v') - \bar K_f(v',v)) (v'-v) \, \mathrm{d} v'\right| \leq \Lambda (1+r^{1-2s}),\] with $\Lambda$ depending on $\|f\|_{L^\infty_q(\mathbb R^3)}$. \end{lemma} \begin{proof} This proof is similar to the proof of \cite[Lemma 5.18]{imbert2020smooth}. Here, we give a sketch of the argument and discuss the changes needed for our setting.
First, we claim that for $|v_0|\geq 2$, $u\in B_1(v_0)$, and $r\in (0,1)$, there holds \begin{equation}\label{e.cancellation-claim} \int_{\mathbb R^3} f(u+z) |z|^{\gamma+1} \min(1, r^{2-2s} |z|^{2s-2}) \, \mathrm{d} z \lesssim \|f\|_{L^\infty_q(\mathbb R^3)} |v_0|^{\gamma+2s-1} r^{1-2s}. \end{equation} To show this, we divide the integral into $B_r$ and $\mathbb R^3\setminus B_r$, and write \[ \int_{B_r} f(u+z) |z|^{\gamma+1} \, \mathrm{d} z \leq \|f\|_{L^\infty_q(\mathbb R^3)} \int_{B_r} \langle u+ z\rangle^{-q} |z|^{\gamma+1} \, \mathrm{d} z \lesssim \|f\|_{L^\infty_q(\mathbb R^3)} |v_0|^{-q} r^{\gamma+4}.\] Since $s\in [\frac 1 2, 1)$ and $r\in (0,1)$, we have $r^{\gamma+4}\leq 1\leq r^{1-2s}$. Also, $q> 2s+3> \gamma+2s-1$, so $|v_0|^{-q} < |v_0|^{\gamma+2s-1}$. Next, since $q> 2s+3 > \gamma+2s+2$, \[ \begin{split} r^{2-2s}\int_{\mathbb R^3\setminus B_r} f(u+z) |z|^{\gamma+2s-1} \, \mathrm{d} z &\leq r^{2-2s} \|f\|_{L^\infty_q(\mathbb R^3)} \int_{\mathbb R^3\setminus B_r} \langle u+z\rangle^{-q} |z|^{\gamma+2s-1} \, \mathrm{d} z\\ & \leq r^{2-2s} \|f\|_{L^\infty_q(\mathbb R^3)} \langle u\rangle^{\gamma+2s-1} \leq r^{1-2s} |v_0|^{\gamma+2s-1} \|f\|_{L^\infty_q(\mathbb R^3)}, \end{split} \] using $|u|\approx |v_0|$, $\gamma + 2s \leq 0$, and $1 \leq r^{-1}$. This establishes \eqref{e.cancellation-claim}. Now, to prove the lemma, we may focus on the case $|v_0|>2$, by \cite[Lemma 3.7]{imbert2016weak}. By the symmetry property $K_f(\bar v, \bar v + \bar w) = K_f(\bar v, \bar v-\bar w)$ of $K_f$, one can easily show ${\rm p.v.} \int_{B_r(v)} \bar K_f(v',v) (v'-v) \, \mathrm{d} v' = 0$. Therefore, it suffices to bound the remaining term \[ {\rm p.v.} \int_{B_r(v)} \bar K_f(v,v') (v'-v) \, \mathrm{d} v'.
\] Using the definition \eqref{e.barKf} of $\bar K_f$ and changing variables according to $\bar w = T_0(v-v')$ (which is compatible with the principal value integral, by Lemma \ref{l:modified}(b)), this term equals \begin{equation}\label{e.Er-int} {\rm p.v.} \int_{E_r} (T_0^{-1} \bar w) K_f(\bar v - \bar w, \bar v) \, \mathrm{d} \bar w. \end{equation} With $\bar r = |v_0|^{\frac{\gamma+2s}{2s}} r$ as above, we decompose this integral as follows, using $E_r = T_0(B_r) = T_0^+(B_{\bar r})$: \begin{equation} \begin{split} {\rm p.v.} \int_{E_r} (T_0^{-1} \bar w) K_f(\bar v - \bar w, \bar v) \, \mathrm{d} \bar w &\leq \Big|{\rm p.v.} \int_{B_{\bar r}} (T_0^{-1} \bar w) K_f(\bar v - \bar w, \bar v) \, \mathrm{d} \bar w\Big| \\&\qquad + \Big| \int_{T_0^+(B_{\bar r}) \setminus B_{\bar r}} (T_0^{-1} \bar w) K_f(\bar v - \bar w, \bar v) \, \mathrm{d} \bar w\Big| =: I_1 + I_2. \end{split} \end{equation} For $I_2$, we use the following inequality from the proof of \cite[Lemma 5.18]{imbert2020smooth}, with $\bar r$ replacing $r$: \[ \begin{split} \Big| \int_{T_0^+(B_{\bar r}) \setminus B_{\bar r}} ((T_0^+)^{-1} \bar w) K_f(\bar v - \bar w, \bar v) \, \mathrm{d} \bar w\Big| &\lesssim |v_0| \int_{\mathbb R^3} f(\bar v' + z) |z|^{\gamma+1} \min(1, \bar r^{2-2s}|z|^{2s-2}) \, \mathrm{d} z \\ &\quad + \int_{\mathbb R^3}f(\bar v' + z) \bar r^{1-2s} |z|^{\gamma +2s} (1+|v_0|^2 - (v_0\cdot z)/|z|^2 )^s \, \mathrm{d} z. \end{split} \] The first term on the right is estimated using \eqref{e.cancellation-claim} with $r = \bar r$. The second term is estimated using our upper bound $\bar \Lambda \lesssim \|f\|_{L^\infty_q(\mathbb R^3)}$ from the proof of Lemma \ref{l:boundedness} with $\bar v'$ replacing $\bar v$, which is valid because $|\bar v|\approx |\bar v'|\approx |v_0|$.
In all, we have \[ \left| \int_{T_0^+(B_{\bar r}) \setminus B_{\bar r}} ((T_0^+)^{-1} \bar w) K_f(\bar v - \bar w, \bar v) \, \mathrm{d} \bar w\right| \lesssim {\bar r}^{1-2s}|v_0|^{\gamma+2s} = |v_0|^{\frac{\gamma+2s}{2s}} r^{1-2s}. \] Since $T_0^{-1}w = |v_0|^{-\frac{\gamma+2s}{2s}} (T_0^+)^{-1} \bar w$, this implies $I_2 \lesssim r^{1-2s}$. For $I_1$, we use another calculation quoted from the proof of \cite[Lemma 5.18]{imbert2020smooth}, where $\bar r$ once again plays the role of $r$: \[\begin{split} \left| \int_{B_{\bar r}} ((T_0^+)^{-1} \bar w) K_f(\bar v - \bar w, \bar v) \, \mathrm{d} \bar w\right| &\lesssim |v_0| \int_{\mathbb R^3} f(\bar v' + z) |z|^{\gamma+1} \min(1, \bar r^{2-2s}|z|^{2s-2}) \, \mathrm{d} z , \end{split} \] and using \eqref{e.cancellation-claim} again, we conclude \begin{equation} I_1 \lesssim |v_0|^{- \frac{\gamma+2s}{2s} + \gamma+2s}\bar r^{1-2s} = r^{1-2s}, \end{equation} which concludes the proof. \end{proof} \subsection{H\"older continuity} In this subsection, we establish the H\"older continuity of the kernel $\bar K_f$. First, we have a lemma on the kinetic H\"older spaces and their relationship to the change of variables \eqref{e.cov-}. \begin{lemma}\label{l:holder-cov} Given $z_0\in \mathbb R^{7}$ and $F:\mathcal E_R(z_0) \to \mathbb R$, define $\bar F :Q_R \to \mathbb R$ by $\bar F(z) = F(\mathcal T_0(z))$. Then, \[ \|\bar F\|_{C_\ell^\beta(Q_R)} \lesssim \|F\|_{C_\ell^\beta(\mathcal E_R(z_0))} \leq |v_0|^{\bar c \beta} \|\bar F\|_{C^\beta_\ell(Q_R)}, \] with $\bar c :=-\gamma/(2s)>0$. \end{lemma} \begin{proof} The argument is the same as \cite[Lemma 5.19]{imbert2020smooth}, but the powers of $|v_0|$ are different because of the different definition of $\mathcal T_0$ (recall~\eqref{e.cov-}).
We claim that for all $|v_0|>2$ and $z, z_1 \in \mathbb R^7$, \begin{align} d_\ell(\mathcal T_0^{-1}(z), \mathcal T_0^{-1}(z_1)) &\leq |v_0|^{-\frac \gamma {2s}} d_\ell(z,z_1),\label{e.1}\\ d_\ell(\mathcal T_0(z), \mathcal T_0(z_1)) &\leq d_\ell(z,z_1).\label{e.2} \end{align} For \eqref{e.1}, we use $\|T_0^{-1}\| = |v_0|^{1 - \frac{\gamma+2s}{2s}}$ and the definition \eqref{e.dl} of $d_\ell$ to write \[\begin{split} d_\ell(&\mathcal T_0^{-1}(z), \mathcal T_0^{-1}(z_1)) \\&= \min_{w\in \mathbb R^3} \max\Big( |t-t_1|^\frac{1}{2s}, |T_0^{-1}(x- x_1 - (t-t_1) w)|^\frac{1}{1+2s}, |T_0^{-1} (v-w)|, |T_0^{-1} (v_1-w)| \Big)\\ &\leq |v_0|^{-\frac{\gamma}{2s}}\min_{w\in \mathbb R^3} \max\left(|t-t_1|^\frac{1}{2s}, |x-x_1 - (t-t_1)w|^\frac{1}{1+2s}, |v-w|, |v_1-w|\right)\\ &\leq |v_0|^{-\frac{\gamma}{2s}} d_\ell(z,z_1). \end{split}\] The proof of \eqref{e.2} is similar, so we omit it. With \eqref{e.1} and \eqref{e.2}, the left invariance of $d_\ell$ implies that for $\bar z = \mathcal T_0 z$ and $\bar z_1 = \mathcal T_0 z_1$, \begin{equation}\label{e.3} d_\ell(\bar z, \bar z_1) \leq d_\ell(z,z_1) \lesssim |v_0|^{-\frac{\gamma}{2s}} d_\ell(\bar z, \bar z_1).\end{equation} To conclude the proof, fix $z, z_1 \in Q_R$, and let $\bar z = \mathcal T_0 z$ and $\bar z_1 = \mathcal T_0 z_1$. Let $p$ be the polynomial expansion of $\bar F$ at the point $z_1$ of degree $\deg_k p < \beta$, such that $|\bar F(z) - p(z)|\leq [\bar F]_{C_\ell^\beta} d_\ell(z,z_1)^\beta$.
Since $p\circ \mathcal T_0^{-1}$ is a polynomial of the same degree as $p$, there holds \[ |F(\bar z) - (p\circ \mathcal T_0^{-1})(\bar z)| = |\bar F(z) - p(z)|\leq [\bar F]_{C_\ell^\beta} d_\ell(z,z_1)^\beta \leq [\bar F]_{C_\ell^\beta} d_\ell(\bar z, \bar z_1)^\beta |v_0|^{-\frac{\beta\gamma}{2s}},\] from the second inequality in \eqref{e.3}. This implies the second inequality in the statement of the lemma. The first inequality in the lemma follows from the first inequality of \eqref{e.3} in a similar way. \end{proof} Next, we establish the H\"older regularity of the kernel $\bar K_f$. The following lemma extends \cite[Lemma 5.20]{imbert2020smooth}. \begin{lemma}\label{l:cov-holder} Assume $\gamma+2s < 0$. For any $f:[0,T]\times \mathbb R^3\times\mathbb R^3 \to \mathbb R$ such that $f\in C^\alpha_{\ell,q}$ with $q> 3 + \alpha/(1+2s)$, and for any $|v_0|>2$ and $r\in(0,1]$, let \[ \bar K_{f,z}(w) := \bar K_f(t,x,v,v+w), \quad z = (t,x,v) \in Q_{2r}.\] Then we have \[ \int_{B_\rho} |\bar K_{f,z_1}(w) - \bar K_{f,z_2}(w)| |w|^2 \, \mathrm{d} w \leq \bar A_0 \rho^{2-2s} d_\ell(z_1,z_2)^{\alpha'}, \quad \rho > 0, z_1,z_2\in Q_{2r}, \] with $\alpha' = \alpha \frac {2s} {1+2s}$ and \[ \bar A_0 \leq C |v_0|^{2+\frac{\alpha}{1+2s}} \|f\|_{C^\alpha_{\ell, q}}, \] where the constant $C$ depends on universal quantities, $\alpha$, $q$, and $T-\tau$, but is independent of $|v_0|$.
\end{lemma} \begin{proof} Changing variables according to $w \mapsto T_0^{-1} w$, \[ \begin{split} \int_{B_\rho} |\bar K_{f,z_1} (w) - \bar K_{f,z_2}(w)| |w|^2 \, \mathrm{d} w &= |v_0|^{2+\frac{3\gamma}{2s}} \int_{B_\rho} |K_{f,\bar z_1}(T_0w) - K_{f,\bar z_2}(T_0w)| |w|^2 \, \mathrm{d} w\\ &= \int_{E_\rho} |K_{f,\bar z_1}(w) - K_{f,\bar z_2}( w)| |T_0^{-1} w|^2 \, \mathrm{d} w\\ &\leq |v_0|^{-\frac{\gamma}{s}}\int_{B_{\rho|v_0|^{(\gamma+2s)/2s}}} |K_{f,\bar z_1}( w) - K_{f,\bar z_2}( w)| |w|^2 \, \mathrm{d} w, \end{split}\] using $\|T_0^{-1}\| = |v_0|^{1 - \frac{\gamma+2s}{2s}}$ and $E_\rho \subset B_{\rho |v_0|^{(\gamma+2s)/2s}}$. Next, it can be shown (see the proof of \cite[Lemma 5.20]{imbert2020smooth}) from the definition \eqref{e.kernel} of $K_f$ that \[ |K_{f,\bar z_1}(w) - K_{f,\bar z_2}(w)| \leq K_{\Delta f, \bar z_1}(w),\] where, following the notation of~\cite{imbert2020smooth}, \[ \Delta f(z) = |f(z) - f(\xi\circ z)| \] and $\xi = \bar z_2 \circ \bar z_1^{-1}$. Using Lemma \ref{l:K-upper-bound-2}, we now have \begin{equation}\label{e.BrhoK} \begin{split} \int_{B_\rho} |\bar K_{f,z_1} (w) - &\bar K_{f,z_2}(w)| |w|^2 \, \mathrm{d} w \lesssim |v_0|^{-\gamma/s} \int_{B_{\rho|v_0|^{(\gamma+2s)/2s}}} K_{\Delta f, \bar z_1}(w) |w|^2 \, \mathrm{d} w \\ &\lesssim |v_0|^{-\gamma/s} \left(\int_{\mathbb R^3} \Delta f(\bar t_1,\bar x_1,\bar v_1+u) |u|^{\gamma+2s} \, \mathrm{d} u\right) \left( \rho |v_0|^{\frac{\gamma+2s}{2s}}\right)^{2-2s}\\ &\lesssim |v_0|^{ 2 - \gamma - 2s} \left(\int_{\mathbb R^3} \Delta f(\bar t_1,\bar x_1,\bar v_1 +u) |u|^{\gamma+2s} \, \mathrm{d} u\right) \rho^{2-2s}.
\end{split} \end{equation} To estimate $\Delta f(\bar t_1, \bar x_1, \bar v_1 +u)$ from above, using \eqref{e.right-translation}, one can show (see formula (5.23) in \cite{imbert2020smooth} or the analysis of \eqref{e.Brho} above) that \[ |f(\bar z_1 \circ (0,0,u)) - f(\bar z_2\circ (0,0,u))| \lesssim \Big(d_\ell(\bar z_1, \bar z_2) + |\bar t_1 - \bar t_2|^\frac{1}{1+2s}|u|^{1/(1+2s)}\Big)^\alpha \langle\bar v_1 +u\rangle^{-q} \|f\|_{C^\alpha_{\ell, q}}, \] with $q := q' + \alpha/(1+2s)$. Therefore, \[ \begin{split} \Delta f(\bar t_1, \bar x_1, \bar v_1 + u) &\leq \langle \bar v_1 + u\rangle^{-q} \|f\|_{C^\alpha_{\ell, q}} \left( d_\ell(\bar z_1 , \bar z_2) + |\bar t_1 - \bar t_2|^\frac{1}{1+2s} |u|^\frac{1}{1+2s}\right)^\alpha\\ &\lesssim \langle \bar v_1 + u\rangle^{-q} \|f\|_{C^\alpha_{\ell, q}} \Big(d_\ell(z_1 , z_2)^\alpha + |t_1 - t_2|^{\frac{\alpha}{1+2s} }|u|^\frac{\alpha}{1+2s}\Big)\\ &\leq \langle \bar v_1 + u\rangle^{-q} |u|^\frac{\alpha}{1+2s} \|f\|_{C^\alpha_{\ell,q}} d_\ell(z_1,z_2)^{\alpha'} , \end{split} \] since $d_\ell(\bar z_1, \bar z_2) \leq d_\ell(z_1,z_2)$ and $|\bar t_1 - \bar t_2| = |t_1-t_2|\leq d_\ell(z_1,z_2)^{2s}$. Returning to \eqref{e.BrhoK}, we have \[ \begin{split} \int_{B_\rho} |\bar K_{f,z_1} (w) - \bar K_{f,z_2}(w)| |w|^2 \, \mathrm{d} w &\lesssim |v_0|^{2-\gamma-2s} \|f\|_{C^\alpha_{\ell,q}} d_\ell(z_1,z_2)^{\alpha'}\left(\int_{\mathbb R^3} |u|^{\gamma+2s+\alpha/(1+2s)} \langle \bar v_1 + u\rangle^{-q} \, \mathrm{d} u\right) \rho^{2-2s}\\ &\lesssim |v_0|^{2+\alpha/(1+2s)}\|f\|_{C^\alpha_{\ell,q}} d_\ell(z_1,z_2)^{\alpha'} \rho^{2-2s} \end{split} \] since $q>3 + \alpha/(1+2s)$ (recall $\gamma+2s<0$). We have used the convolution estimate from Lemma \ref{l:convolution} and the fact that $|\bar v_1|\approx |v_0|$. \end{proof} \section{Technical lemmas}\label{s:lemmas} In this appendix, we collect some technical lemmas.
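The first of these lemmas is a convolution bound of the form $\int_{\mathbb R^3} f(v+w)|w|^p \,\mathrm{d} w \lesssim \langle v\rangle^p$, which was used at the end of the preceding proof. As an informal numerical sanity check (not part of any proof), here is a minimal sketch of a one-dimensional analogue of that bound; the parameters $p=-1/2$, $q=3$ and the quadrature scheme are ad hoc illustrative choices, not taken from the text.

```python
import numpy as np

# 1D analogue of the convolution estimate: for p > -1 and q > 1 + max(p, 0),
#   I(v) = \int <v+w>^{-q} |w|^p dw   should satisfy   I(v) <= C <v>^p.
# Parameters below are illustrative only.
p, q = -0.5, 3.0

def bracket(x):
    """Japanese bracket <x> = sqrt(1 + x^2)."""
    return np.sqrt(1.0 + x * x)

def I(v, L=200.0, n=400_000):
    # Midpoint rule on an offset grid, so the integrable
    # singularity of |w|^p at w = 0 is never sampled.
    h = 2.0 * L / n
    w = -L + (np.arange(n) + 0.5) * h
    return float(np.sum(bracket(v + w) ** (-q) * np.abs(w) ** p) * h)

# The ratio I(v) / <v>^p should stay bounded as |v| grows; in this
# 1D toy case it tends to \int <u>^{-3} du = 2.
ratios = [I(v) / bracket(v) ** p for v in (2.0, 8.0, 32.0)]
```

In this toy case the computed ratios stay bounded and approach $2$ for large $|v|$, consistent with the $\langle v\rangle^p$ scaling claimed by the lemma.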
First, we have an estimate for convolutions with functions $v\mapsto |v|^p$. We state it without proof. \begin{lemma}\label{l:convolution} For any $p>-3$ and $f\in L^\infty_q(\mathbb R^3)$ with $q>3 + p_+$, there holds \[ \int_{\mathbb R^3} f(v+w) |w|^p \, \mathrm{d} w \leq C\|f\|_{L^\infty_q(\mathbb R^3)} \langle v\rangle^p,\] for a constant $C$ depending on $p$ and $q$. \end{lemma} The following interpolation lemma allows us to trade regularity for decay. We also omit the proof of this lemma, which is standard. \begin{lemma}\label{l:moment-interpolation} Suppose that $\phi:\mathbb R^6\to \mathbb R$ is such that $\phi\in L^\infty_{q_1}(\mathbb R^6)$ and $\phi\in C^\alpha_{\ell,q_2}(\mathbb R^6)$, for some $\alpha\in (0,1)$ and $q_1 \geq q_2 \geq 0$. If $\beta \in (0,\alpha)$ and $m\in [q_2, q_1]$ are such that \[ m \leq q_1 \left(1 - \frac{\beta}{\alpha}\right) + q_2 \frac{\beta}{\alpha}, \] then \[ \|\phi\|_{C^{\beta}_{\ell,m}(\mathbb R^6)} \lesssim [\phi]_{C^\alpha_{\ell,q_2}(\mathbb R^6)}^\frac{\beta}{\alpha} \|\phi\|_{L^{\infty}_{q_1}(\mathbb R^6)}^{1-\frac{\beta}{\alpha}}. \] \end{lemma} Next, we quote an estimate for $Q_{\rm s}(f,g)$ in H\"older norms. This lemma is stated in \cite{imbert2020smooth} for the case $\gamma+2s\geq0$, but it is clear from the proof that the same statement holds when $\gamma+2s<0$. \begin{lemma}{\cite[Lemma 6.8]{imbert2020smooth}}\label{l:Q1holder} Let $q> 3 + (\gamma+2s)_+$ and $\alpha \in (0,\min(1,2s))$, and assume that $f\in C^\alpha_{\ell,q}(\mathbb R^3)$ and $g \in C^{2s+\alpha}_{\ell,q}(\mathbb R^3)$.
Then $Q_1(f,g)\in C^{\alpha'}_{\ell,q-(\gamma+2s)_+ - \alpha/(1+2s)}(\mathbb R^3)$ for $\alpha' = \frac {2s}{1+2s}\alpha$, and \[ \|Q_1(f,g)\|_{C^{\alpha'}_{\ell,q-(\gamma+2s)_+ - \alpha/(1+2s)}(\mathbb R^3)} \leq C \|f\|_{C^\alpha_{\ell,q}(\mathbb R^3)}\|g\|_{C^{2s+\alpha}_{\ell,q}(\mathbb R^3)}.\] The constant $C$ depends only on $s$, $\gamma$, and the collision kernel. \end{lemma} Finally, we have an estimate for $Q_{\rm ns}(f,g)$ in H\"older norms: \begin{lemma}{\cite[Lemma 6.2]{imbert2020smooth}}\label{l:Q2holder} For $\alpha \in (0,\min(1,2s))$, let $q> 3 + \alpha/(1+2s)$ and $\alpha' = \frac {2s}{1+2s} \alpha$. If $f \in C^\alpha_{\ell,q}(\mathbb R^3)$ and $g\in C^{\alpha'}_{\ell,q+\alpha/(1+2s)+\gamma}(\mathbb R^3)$, then \[\|Q_{\rm ns}(f,g) \|_{C^{\alpha'}_{\ell,q}(\mathbb R^3)} \leq C\|f\|_{C^\alpha_{\ell,q}(\mathbb R^3)} \|g\|_{C^{\alpha'}_{\ell,q+\alpha/(1+2s) + \gamma}(\mathbb R^3)},\] with the constant $C$ depending only on $\gamma$, $s$, and $\alpha$. \end{lemma} \end{document}
\begin{document} \title{The warping degree of a link diagram} \begin{abstract} For an oriented link diagram $D$, the warping degree $d(D)$ is the smallest number of crossing changes which are needed to obtain a monotone diagram from $D$. We show that $d(D)+d(-D)+sr(D)$ is less than or equal to the crossing number of $D$, where $-D$ denotes the inverse of $D$ and $sr(D)$ denotes the number of components which have at least one self-crossing. Moreover, we give a necessary and sufficient condition for the equality. We also consider the minimal $d(D)+d(-D)+sr(D)$ for all diagrams $D$. For the warping degree and linking warping degree, we show some relations to the linking number, unknotting number, and the splitting number. \end{abstract} \section{Introduction} The warping degree and a monotone diagram are defined by Kawauchi for an oriented diagram of a knot, a link \cite{kawauchi} or a spatial graph \cite{kawauchi2}. The warping degree represents a complexity of a diagram, and depends on the orientation of the diagram. For an oriented link diagram $D$, we say that $D$ is monotone if we meet every crossing point as an over-crossing first when we travel along all components of the oriented diagram in order, starting from each base point. This notion was used earlier by Hoste \cite{hoste} and by Lickorish-Millett \cite{lickorish-millett} in computing polynomial invariants of knots and links. The warping degree $d(D)$ of an oriented link diagram $D$ is the smallest number of crossing changes which are needed to obtain a monotone diagram from $D$. We give the precise definitions of the warping degree and a monotone diagram in Section 2. Let $-D$ be the diagram $D$ with orientations reversed for all components, and we call $-D$ the inverse of $D$. Let $c(D)$ be the crossing number of $D$.
We have the following theorem in \cite{shimizu}, which is for a knot diagram: \phantom{x} \begin{thm}{\cite{shimizu}} Let $D$ be an oriented knot diagram which has at least one crossing point. Then we have $$d(D)+d(-D)+1\leq c(D). $$ Further, the equality holds if and only if $D$ is an alternating diagram. \label{dk} \end{thm} \phantom{x} \noindent Let $D$ be a diagram of an $r$-component link $L$ ($r\geq 1$). Let $D^i$ be the subdiagram of $D$ on a knot component $L^i$ of $L$, and we call $D^i$ a component of $D$. We define a property of a link diagram as follows: \phantom{x} \begin{defn} A link diagram $D$ has \textit{property $C$} if every component $D^i$ of $D$ is alternating, and the number of over-crossings of $D^i$ is equal to the number of under-crossings of $D^i$ in every subdiagram $D^i\cup D^j$ for each $i\neq j$. \end{defn} \phantom{x} \noindent Note that in the case $r=1$, a diagram $D$ has property $C$ if and only if $D$ is an alternating diagram. We generalize Theorem \ref{dk} to a link diagram: \phantom{x} \begin{thm} Let $D$ be an oriented link diagram, and $sr(D)$ the number of components $D^i$ such that $D^i$ has at least one self-crossing. Then we have $$d(D)+d(-D)+sr(D)\leq c(D).$$ Further, the equality holds if and only if $D$ has property $C$. \label{mainthm} \end{thm} \phantom{x} \noindent For example, the link diagram $D$ in Figure \ref{8-2_5} has $d(D)+d(-D)+sr(D)=3+3+2=8=c(D)$. \begin{figure}\label{8-2_5} \end{figure} \noindent Let $D$ be a diagram of a link. Let $u(D)$ be the unlinking number of $D$. As a lower bound for the value $d(D)+d(-D)+sr(D)$, we have the following inequality: \phantom{x} \begin{thm} We have $$2u(D)+sr(D)\leq d(D)+d(-D)+sr(D).$$ \end{thm} \phantom{x} \noindent The rest of this paper is organized as follows. In Section 2, we define the warping degree $d(D)$ of an oriented link diagram $D$. In Section 3, we define the linking warping degree $ld(D)$, and consider the value $d(D)+d(-D)$ to prove Theorem \ref{mainthm}.
In Section 4, we show relations of the linking warping degree and the linking number. In Section 5, we apply the warping degree to a link itself. In Section 6, we study relations to unknotting number and crossing number. In Section 7, we define the splitting number and consider relations between the warping degree and the splitting number. In Section 8, we show methods for calculating the warping degree and the linking warping degree. \section{The warping degree of an oriented link diagram} Let $L$ be an $r$-component link, and $D$ a diagram of $L$. We take a sequence ${\bf a}$ of base points $a_i$ ($i=1,2,\dots ,r$), where each component has exactly one base point, which is not at a crossing point. Then $D_{\bf a}$, the pair of $D$ and ${\bf a}$, is represented by $D_{\bf a}=D^1_{a_1}\cup D^2_{a_2}\cup \dots \cup D^r_{a_r}$ with the order of ${\bf a}$. A self-crossing point $p$ of $D^i_{a_i}$ is a \textit{warping crossing point of} $D^i_{a_i}$ if we meet the point first at the under-crossing when we go along the oriented diagram $D^i_{a_i}$ by starting from $a_i$ ($i=1,2,\dots ,r$). A crossing point $p$ of $D^i_{a_i}$ and $D^j_{a_j}$ is a \textit{warping crossing point between} $D^i_{a_i}$ \textit{and} $D^j_{a_j}$ if $p$ is the under-crossing of $D^i_{a_i}$ ($1\leq i<j\leq r$). A crossing point $p$ of $D_{\bf a}$ is a \textit{warping crossing point of} $D_{\bf a}$ if $p$ is a warping crossing point of $D^i_{a_i}$ or a warping crossing point between $D^i_{a_i}$ and $D^j_{a_j}$ \cite{kawauchi}. \begin{figure}\label{wcp} \end{figure} \noindent For example in Figure \ref{wcp}, $p$ is a warping crossing point of $D^1_{a_1}$, and $q$ is a warping crossing point between $D^1_{a_1}$ and $D^2_{a_2}$. We define the warping degree for an oriented link diagram \cite{kawauchi}. The \textit{warping degree} of $D_{\bf a}$, denoted by $d(D_{\bf a})$, is the number of warping crossing points of $D_{\bf a}$.
The \textit{warping degree} of $D$, denoted by $d(D)$, is the minimal warping degree $d(D_{\bf a})$ for all base point sequences ${\bf a}$ of $D$. Ozawa showed that a non-trivial link which has a diagram $D$ with $d(D)=1$ is a split union of a twist knot or the Hopf link and $r$ trivial knots ($r\geq 0$) \cite{ozawa}. Fung also showed that a non-trivial knot which has a diagram $D$ with $d(D)=1$ is a twist knot \cite{stoimenow}. \phantom{x} \noindent For an oriented link diagram and its base point sequence $D_{\bf a}=D^1_{a_1}\cup D^2_{a_2}\cup \dots \cup D^r_{a_r}$, we denote by $d(D^i_{a_i})$ the number of warping crossing points of $D^i_{a_i}$. We denote by $d(D^i_{a_i},D^j_{a_j})$ the number of warping crossing points between $D^i_{a_i}$ and $D^j_{a_j}$. By definition, we have that \begin{align*} d(D_{\bf a})=\sum _{i=1}^r d(D^i_{a_i})+\sum _{i<j} d(D^i_{a_i},D^j_{a_j}). \end{align*} \noindent Thus, the set of the warping crossing points of $D_{\bf a}$ is divided into two types, according to whether or not the warping crossing point is a self-crossing. \noindent The pair $D_{\bf a}$ is \textit{monotone} if $d(D_{\bf a})=0$. For example, $D_{\bf a}$ depicted in Figure \ref{monotone} is monotone. \begin{figure}\label{monotone} \end{figure} \noindent Note that a monotone diagram is a diagram of a trivial link. Hence we have $u(D)\leq d(D)$, where $u(D)$ is the unlinking number of $D$ (\cite{nakanishi}, \cite{taniyama}). \section{Proof of Theorem \ref{mainthm}} In this section, we prove Theorem \ref{mainthm}. We first define the linking warping degree, which is like a restricted warping degree and which has relations to the crossing number and the linking number (see also Section 4). The number of non-self warping crossing points does not depend on the orientation.
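The definitions of $d(D_{\bf a})$ and $d(D)$ above admit a direct computational phrasing: a crossing point of $D_{\bf a}$ is a warping crossing point exactly when its first visit, traveling along the components in order from their base points, is an under-crossing (for a crossing between $D^i_{a_i}$ and $D^j_{a_j}$ with $i<j$, the first visit occurs on $D^i_{a_i}$, so this agrees with the definition). Below is a minimal sketch under an assumed, hypothetical encoding in which each component is the list of crossing visits (crossing label, 'o' or 'u') met from its base point; minimizing over component orders and base points then gives $d(D)$.

```python
from itertools import permutations, product

def d_of(components):
    """d(D_a): count the crossings whose first visit, traveling along
    the components in the given order from the given base points,
    is an under-crossing ('u')."""
    seen, d = set(), 0
    for comp in components:
        for crossing, strand in comp:
            if crossing not in seen:
                seen.add(crossing)
                if strand == 'u':
                    d += 1
    return d

def rotations(comp):
    # moving the base point cyclically shifts the visit sequence
    return [comp[i:] + comp[:i] for i in range(len(comp))]

def warping_degree(components):
    """d(D): minimize d(D_a) over component orders and base points."""
    return min(
        d_of(list(seq))
        for order in permutations(components)
        for seq in product(*(rotations(c) for c in order))
    )

# standard alternating trefoil diagram (one component, three crossings)
trefoil = [[(1, 'o'), (2, 'u'), (3, 'o'), (1, 'u'), (2, 'o'), (3, 'u')]]
# standard Hopf link diagram (two components, two crossings)
hopf = [[(1, 'o'), (2, 'u')], [(1, 'u'), (2, 'o')]]
```

For the trefoil diagram this gives $d(D)=d(-D)=1$ (reversing the orientation reverses each visit list), so $d(D)+d(-D)+1=3=c(D)$, in accordance with Theorem \ref{dk} for alternating knot diagrams; the Hopf link diagram has $d(D)=1$.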
We define the \textit{linking warping degree} of $D_{\bf a}$, denoted by $ld(D_{\bf a})$, by the following formula: $$ld(D_{\bf a})=\sum _{i<j}d(D^i_{a_i},D^j_{a_j})=d(D_{\bf a})-\sum _{i=1}^rd(D_{a_i}^i),$$ where $D^i_{a_i}, D^j_{a_j}$ are components of $D_{\bf a}$ $(1\leq i<j\leq r)$. The \textit{linking warping degree} of $D$, denoted by $ld(D)$, is the minimal $ld(D_{\bf a})$ for all base point sequences ${\bf a}$. It does not depend on any choices of orientations of components. For example, the diagram $D$ in Figure \ref{stacked} has $ld(D)=2$. A pair $D_{\bf a}$ is \textit{stacked} if $ld(D_{\bf a})=0$. A diagram $D$ is \textit{stacked} if $ld(D)=0$. For example, the diagram $E$ in Figure \ref{stacked} is a stacked diagram. We remark that a similar notion is mentioned in \cite{hoste}. Note that a monotone diagram is a stacked diagram. A link $L$ is \textit{completely splittable} if $L$ has a diagram $D$ without non-self crossings. Notice that a completely splittable link has some stacked diagrams. \begin{figure}\label{stacked} \end{figure} \phantom{x} \noindent The \textit{linking crossing number} of $D$, denoted by $lc(D)$, is the number of non-self crossing points of $D$. Remark that $lc(D)$ is always even. For an unordered diagram $D$, we assume that $D^i$ and $D^i\cup D^j$ denote subdiagrams of $D$ with an order. We have the following relation of linking warping degree and linking crossing number. \phantom{x} \begin{lem} We have $$ld(D)\leq \frac{lc(D)}{2}.$$ Further, the equality holds if and only if the number of over-crossings of $D^i$ is equal to the number of under-crossings of $D^i$ in every subdiagram $D^i\cup D^j$ for every $i\neq j$. \begin{proof} Let ${\bf a}$ be a base point sequence of $D$, and $\tilde{{\bf a}}$ the base point sequence ${\bf a}$ with the order reversed. We call $\tilde{{\bf a}}$ the reverse of ${\bf a}$. Since we have that $ld(D_{\bf a})+ld(D_{\tilde{{\bf a}}})=lc(D)$, we have the inequality $ld(D)\leq lc(D)/2$. 
Let $D$ be a link diagram such that the number of over-crossings of $D^i$ is equal to the number of under-crossings of $D^i$ in every subdiagram $D^i\cup D^j$ for each $i\neq j$. Then we have $ld(D_{\bf a})=lc(D)/2$ for every base point sequence ${\bf a}$. Hence we have $ld(D)=lc(D)/2$. Conversely, we consider the case where the equality $2ld(D)=lc(D)$ holds. For an arbitrary base point sequence ${\bf a}$ of $D$ and its reverse $\tilde{{\bf a}}$, we have $$ld(D_{\bf a})\geq ld(D)=lc(D)-ld(D)\geq lc(D)-ld(D_{\bf a})=ld(D_{\tilde{{\bf a}}})\geq ld(D).$$ Then we have $lc(D)-ld(D_{\bf a})=ld(D)$. Hence we have $ld(D_{\bf a})=ld(D)$ for every base point sequence ${\bf a}$. Let ${\bf a}'=(a_1,a_2,\dots ,a_{k-1},a_{k+1},a_k,a_{k+2},\dots ,a_r)$ be the base point sequence which is obtained from ${\bf a}=(a_1,a_2,\dots ,a_k,a_{k+1},\dots ,a_r)$ by exchanging $a_k$ and $a_{k+1}$ ($k=1,2,\dots ,r-1$). Then, the number of over-crossings of $D^k$ is equal to the number of under-crossings of $D^k$ in the subdiagram $D^k\cup D^{k+1}$ of $D_{\bf a}$ because we have $ld(D_{\bf a})=ld(D_{{\bf a}'})$. This completes the proof. \end{proof} \label{2dc} \end{lem} \phantom{x} \noindent We next consider the value $d(D)+d(-D)$ for an oriented link diagram $D$ and the inverse $-D$. We have the following proposition: \phantom{x} \begin{prop} Let $D$ be an oriented link diagram. The value $d(D)+d(-D)$ does not depend on the orientation of $D$. \begin{proof} Let $D'$ be $D$ with the same order and another orientation. Since we have $d({D^i}')=d(D^i)$ or $d({D^i}')=d(-D^i)$, we have $d({D^i}')+d(-{D^i}')=d(D^i)+d(-D^i)$ for each $D^i$ and ${D^i}'$. Then we have \begin{align*} d(D')+d(-D')&=\sum _{i=1}^{r}d({D^i}')+ld(D')+\sum _{i=1}^{r}d(-{D^i}')+ld(-D')\\ &=\sum _{i=1}^{r}\{ d({D^i}')+d(-{D^i}')\} +2ld(D')\\ &=\sum _{i=1}^{r}\{ d(D^i)+d(-D^i)\} +2ld(D)\\ &=\sum _{i=1}^{r}d(D^i)+ld(D)+\sum _{i=1}^{r}d(-D^i)+ld(-D)\\ &=d(D)+d(-D).
\end{align*} \end{proof} \end{prop} \phantom{x} \noindent A link diagram is a \textit{self-crossing diagram} if every component of $D$ has at least one self-crossing. In other words, a diagram $D$ of an $r$-component link $L$ is a self-crossing diagram if $sr(D)=r$. We have the following lemma: \phantom{x} \begin{lem} Let $D$ be a self-crossing diagram of an $r$-component link. Then we have $$d(D)+d(-D)+r\leq c(D).$$ Further, the equality holds if and only if $D$ has property $C$. \phantom{x} \begin{proof} We have \begin{align*} d(D)+d(-D)+r&=\sum _{i=1}^{r}d(D^i)+ld(D)+\sum _{i=1}^{r}d(-D^i)+ld(-D)+r\\ &=\sum _{i=1}^r\{ d(D^i)+d(-D^i)+1\} +2ld(D)\\ &\leq \sum _{i=1}^r c(D^i)+2ld(D)\\ &\leq \sum _{i=1}^r c(D^i)+lc(D)\\ &=c(D), \end{align*} where the first inequality is obtained by Theorem \ref{dk}, and the second inequality is obtained by Lemma \ref{2dc}. Hence we have the inequality. The equality holds if and only if $D$ has property $C$, which follows from Theorem \ref{dk} and Lemma \ref{2dc}. \end{proof} \label{self-crossing} \end{lem} \phantom{x} \noindent We give an example of Lemma \ref{self-crossing}. \phantom{x} \begin{eg} In Figure \ref{parallel}, there are three diagrams with 12 crossings. The diagram $D$ is a diagram such that every component is alternating and has 3 over-non-self crossings and 3 under-non-self crossings. Then we have $d(D)+d(-D)+r=12=c(D)$. The diagram $D'$ is a diagram which has a non-alternating component diagram. Then we have $d(D')+d(-D')+r=10<c(D')$. The diagram $D''$ is a diagram such that a component has 2 over-non-self crossings and 4 under-non-self crossings. Then we have $d(D'')+d(-D'')+r=10<c(D'')$. \begin{figure}\label{parallel} \end{figure} \end{eg} \phantom{x} \noindent Lemma \ref{self-crossing} is only for self-crossing link diagrams. We now prove Theorem \ref{mainthm}, which holds for every link diagram.
\phantom{x} \noindent {\it Proof of Theorem \ref{mainthm}.} For every component $D^i$ which has no self-crossings, we apply a Reidemeister move of type I as shown in Figure \ref{lr-1}. \begin{figure}\label{lr-1} \end{figure} Then we obtain the diagram ${D^i}'$ from $D^i$, and ${D^i}'$ satisfies $d({D^i}')=d({-D^i}')=0=d(D^i)=d(-D^i)$ and $c({D^i}')=1=c(D^i)+1$. For example, the base points $a_i$, $b_i$ in Figure \ref{lr-1} satisfy $d(D_{a_i}^i)=d(D^i)=0$, $d(-D_{b_i}^i)=d(-D^i)=0$. We remark that every $D^i$ and ${D^i}'$ are alternating. We denote by $D'$ the diagram obtained from $D$ by this procedure. Since every component of $D'$ has at least one self-crossing, we can apply Lemma \ref{self-crossing} to $D'$. Then we have $$d(D')+d(-D')+r\leq c(D').$$ Then we obtain $$d(D)+d(-D)+r\leq c(D)+(r-sr(D)).$$ Hence we have $$d(D)+d(-D)+sr(D)\leq c(D).$$ The equality holds if and only if $D$ has property $C$. $\square$ \section{The linking warping degree and linking number} In this section, we consider the relation of the linking warping degree and the linking number. For a crossing point $p$ of an oriented diagram, $\varepsilon (p)$ denotes the sign of $p$, namely $\varepsilon (p)=+1$ if $p$ is a positive crossing, and $\varepsilon (p)=-1$ if $p$ is a negative crossing. For an oriented subdiagram $D^i\cup D^j$, the \textit{linking number} of $D^i$ with $D^j$ is defined to be $$\mathrm{Link}(D^i,D^j)=\frac{1}{2}\sum _{p\in D^i\cap D^j}\varepsilon (p).$$ The linking number of $D^i$ with $D^j$ is independent of the diagram (cf. \cite{cromwell}, \cite{kawauchi}). We have a relation of the linking warping degree and the linking number of a link diagram in the following proposition: \phantom{x} \begin{prop} For a link diagram $D$, we have the following (i) and (i\hspace{-1pt}i).
\begin{description} \item[(i)] We have $$\sum _{i<j} |\mathrm{Link}(D^i,D^j)|\leq ld(D).$$ Further, the equality holds if and only if under-crossings of $D^i$ in $D^i\cup D^j$ are all positive or all negative with an orientation for every subdiagram $D^i\cup D^j$ ($i<j$). \item[(i\hspace{-1pt}i)] We have \begin{align} \sum _{i<j} |\mathrm{Link}(D^i,D^j)|\equiv ld(D) \ (\bmod \ 2). \label{mod2} \end{align} \end{description} \phantom{x} \begin{proof} \begin{description} \item[(i)] For a subdiagram $D^i\cup D^j$ ($i<j$) with $d(D^i,D^j)=m$, we show that $$|\mathrm{Link}(D^i,D^j)|\leq d(D^i,D^j).$$ Let $p_1, p_2, \dots ,p_m$ be the warping crossing points between $D^i$ and $D^j$, and $\varepsilon (p_1)$,$\varepsilon (p_2)$,$\dots ,\varepsilon (p_m)$ the signs of them. Since a stacked diagram is a diagram of a completely splittable link, we have \begin{align} \mathrm{Link}(D^i,D^j)-(\varepsilon (p_1)+\varepsilon (p_2)+\dots +\varepsilon (p_m))=0 \label{sign-o-u} \end{align} by applying crossing changes at $p_1, p_2, \dots ,p_m$ for $D^i\cup D^j$. Then we have $$|\mathrm{Link}(D^i,D^j)|=|\varepsilon (p_1)+\varepsilon (p_2)+\dots +\varepsilon (p_m)|\leq m=d(D^i,D^j).$$ Hence we obtain $$\sum _{i<j} |\mathrm{Link}(D^i,D^j)|\leq ld(D).$$ The equality holds if and only if under-crossings of $D^i$ in $D^i\cup D^j$ are all positive or all negative with an orientation for every subdiagram $D^i\cup D^j$ ($i<j$). \\ \item[(i\hspace{-1pt}i)] By the above equality (\ref{sign-o-u}), we observe that $\mathrm{Link}(D^i,D^j)=\varepsilon (p_1)+\varepsilon (p_2)+\dots +\varepsilon (p_m)=\varepsilon (q_1)+\varepsilon (q_2)+\dots +\varepsilon (q_n)$, where $p_k$ (resp. $q_k$) is an under-crossing (resp. an over-crossing) of $D^i$ in $D^i\cup D^j$, $ld(D^i\cup D^j)=m$ and $lc(D^i\cup D^j)=m+n$. A similar fact is also mentioned in \cite{rolfsen}. 
We have \begin{align*} \mathrm{Link}(D^i,D^j)&=\varepsilon (p_1)+\varepsilon (p_2)+\dots +\varepsilon (p_m)\\ &\equiv m \ (\bmod \ 2)\\ &=d(D^i,D^j). \end{align*} Hence we have the modular equality \begin{align*} \sum _{i<j} |\mathrm{Link}(D^i,D^j)|\equiv ld(D) \ (\bmod \ 2). \end{align*} \end{description} \end{proof} \label{lld} \end{prop} \phantom{x} \begin{eg} In Figure \ref{ex3components}, $D$ has $(0,2,3)$, $E$ has $(0,2,2)$, and $F$ has $(4,4,4)$, where $(l,m,n)$ of $D$ denotes that $\sum_{i<j}|\mathrm{Link}(D^i,D^j)|=l$, $ld(D)=m$, and $lc(D)/2=n$. \begin{figure}\label{ex3components} \end{figure} \end{eg} \noindent The \textit{total linking number} of an oriented link $L$ is defined to be $\sum _{i<j}\mathrm{Link}(D^i,D^j)$ with a diagram and an order. We have the following corollary: \phantom{x} \begin{cor} We have $$\sum _{i<j}\mathrm{Link}(D^i,D^j)=\sum \{ \varepsilon (p) \ | \ p : \textit{a non-self warping crossing point of }D_{\bf a}\},$$ where ${\bf a}$ is a base point sequence of $D$. \label{total} \end{cor} \phantom{x} \noindent Corollary \ref{total} is useful in calculating the total linking number of a diagram. For example in Figure \ref{linkcor}, the diagram $D$ with $4$ components and $11$ crossing points has $ld(D)=4$. We have that the total linking number of $D$ is $0$ by summing the signs of only $4$ crossing points. \begin{figure}\label{linkcor} \end{figure} \section{To a link invariant} In this section, we consider the minimal $d(D)+d(-D)$ for minimal crossing diagrams $D$ of $L$ in the following formula: $$e(L)=\min \{ d(D)+d(-D) | D:\text{ a diagram of }L\text{ with }c(D)=c(L)\} ,$$ where $c(L)$ denotes the crossing number of $L$. In the case where $K$ is a non-trivial knot, we have \begin{align} e(K)+1\leq c(K). \label{ek} \end{align} Further, the equality holds if and only if $K$ is a prime alternating knot \cite{shimizu}.
Note that the condition for the equality of (\ref{ek}) requires that $D$ is a minimal crossing diagram in the definition of $e(L)$. We next define $c^*(L)$ and $e^*(L)$ as follows: $$c^*(L)=\min \{c(D) | D:\text{ a self-crossing diagram of }L \} ,$$ $$e^*(L)=\min \{d(D)+d(-D) | D:\text{ a self-crossing diagram of }L\text{ with }c(D)=c^*(L)\} .$$ As a generalization of the above inequality (\ref{ek}), we have the following theorem: \phantom{x} \begin{thm} For an $r$-component link $L$, we have $$e^*(L)+r\leq c^*(L).$$ Further, the equality holds if and only if every self-crossing diagram $D$ of $L$ with $c(D)=c^*(L)$ has property $C$. \begin{proof} Let $D$ be a self-crossing diagram of $L$ with $c(D)=c^*(L)$. We assume that $D$ satisfies the equality $d(D)+d(-D)=e^*(L)$. Then we have \begin{align*} e^*(L)+r&=d(D)+d(-D)+r\\ &=\sum _{i=1}^{r}d(D^i)+ld(D)+\sum _{i=1}^{r}d(-D^i)+ld(-D)+r\\ &=\sum _{i=1}^r\{ d(D^i)+d(-D^i)+1\} +2ld(D)\\ &\leq \sum _{i=1}^r c(D^i)+2ld(D)\\ &\leq \sum _{i=1}^r c(D^i)+lc(D)\\ &=c(D)=c^*(L), \end{align*} where the first inequality is obtained by Theorem \ref{dk}, and the second inequality is obtained by Lemma \ref{2dc}. If $D$ has a non-alternating component $D^i$, or $D$ has a diagram $D^i\cup D^j$ such that the number of over-crossings of $D^i$ is not equal to the number of under-crossings of $D^i$, then we have $e^*(L)+r<c^*(L)$. On the other hand, the equality holds if $D$ has property $C$. \end{proof} \end{thm} \phantom{x} \noindent We have the following example: \phantom{x} \begin{eg} For non-trivial prime alternating knots $L^1,L^2,\dots ,L^r$ ($r\geq 2$), we have a non-splittable link $L$ by performing $n_i$-full twists for every $L^i$ and $L^{i+1}$ ($i=1,2,\dots ,r$) with $L^{r+1}=L^1$ as shown in Figure \ref{full}, where we assume that $n_1$ and $n_r$ have the same sign. \begin{figure}\label{full} \end{figure} \noindent Note that we do not change the type of knot components $L^i$. Let $D$ be a diagram of $L$ with $c(D)=c(L)$. 
Then we notice that $D$ is a self-crossing diagram with $c(D)=c^*(L)$. We also notice that $D$ has property $C$ because $lc(D^i\cup D^j)=2|n_i|$ and $\mathrm{Link}(D^i,D^j)=n_i$, and $lc(D^1\cup D^r)=2|n_1+n_r|$ and $\mathrm{Link}(D^1,D^r)=n_1+n_r$ in the case where $r=2$. Hence we have $e^*(L)+r=c^*(L)$ in this case. \end{eg} \phantom{x} \noindent We have the following corollary: \phantom{x} \begin{cor} Let $L$ be an $r$-component link all of whose components are non-trivial. Then we have $$e(L)+r\leq c(L).$$ Further, the equality holds if and only if every diagram $D$ of $L$ with $c(D)=c(L)$ has property $C$. \begin{proof} Since every diagram $D$ of $L$ is a self-crossing diagram, we have $e(L)=e^*(L)$ and $c(L)=c^*(L)$. \end{proof} \end{cor} \phantom{x} \noindent We also consider the minimal $d(D)+d(-D)+sr(D)$ and the minimal $sr(D)$ for diagrams $D$ of $L$ in the following formulas: $$f(L)=\min \{ d(D)+d(-D)+sr(D) | D:\text{ a diagram of }L\},$$ $$sr(L)=\min \{ sr(D) | D:\text{ a diagram of }L\}.$$ \noindent Note that the values $f(L)$ and $sr(L)$ also do not depend on the orientation of $L$. Jin and Lee mentioned in \cite{jinlee} that every link has a diagram which restricts to a minimal crossing diagram for each component. Then we have the following proposition: \phantom{x} \begin{prop} The value $sr(L)$ is equal to the number of non-trivial knot components of $L$. \end{prop} \phantom{x} \noindent The following corollary is directly obtained from Theorem \ref{mainthm}. \phantom{x} \begin{cor} We have $$f(L)\leq c(L).$$ \begin{proof} For a diagram $D$ with $c(D)=c(L)$, we have $$f(L)\leq d(D)+d(-D)+sr(D)\leq c(D)=c(L),$$ where the second inequality is obtained by Theorem \ref{mainthm}. \end{proof} \label{fcl} \end{cor} \phantom{x} \noindent We have the following question: \phantom{x} \begin{q} When does the equality $f(L)=c(L)$ hold? \end{q} \phantom{x} \begin{eg} In Figure \ref{ex-fc}, there are two link diagrams $D$ and $E$. We assume that $D$ (resp. 
$E$) is a diagram of a link $L$ (resp. $M$). We have $f(L)=c(L)=5$ because we have $d(D)+d(-D)+sr(D)=2+2+1$ and we know $d(D^i)\geq u(3_1)=1$, $ld(D)\geq 1$, and $sr(D)\geq sr(L)=1$, where $D^i$ is any diagram of $3_1$. On the other hand, we have that $f(M)<c(M)$ because $f(M)\leq d(E)+d(-E)+sr(E)=3+3+1=7<10=c(M)$. \begin{figure}\label{ex-fc} \end{figure} \end{eg} \phantom{x} \section{Relations of warping degree, unknotting number, and crossing number} In this section, we enumerate several relations among the warping degree, the unknotting number or unlinking number, and the crossing number. Let $|D|$ be $D$ with orientation forgotten. We define the minimal warping degree of $D$ over all orientations as follows: $$d(|D|):=\min \{ d(D) | D: |D|\text{ with an orientation} \} .$$ Note that the minimal $d(|D|)$ over all diagrams $D$ of $L$ is equal to the ascending number $a(L)$ \cite{ozawa}: $$a(L)=\min \{ d(|D|) | D:\text{ a diagram of }L\} .$$ Let $E$ be a knot diagram, and $D$ a diagram of an $r$-component link. We review the relation between the unknotting number $u(E)$ (resp. the unlinking number $u(D)$) and the crossing number $c(E)$ (resp. $c(D)$) of $E$ (resp. $D$). The following inequalities are well-known \cite{nakanishi}: \begin{align} u(E)\leq \frac{c(E)-1}{2}, \label{ucdk} \end{align} \begin{align} u(D)\leq \frac{c(D)}{2}. \label{ucdl} \end{align} \noindent Moreover, Taniyama mentioned the following conditions \cite{taniyama}: \phantom{x} \noindent The necessary condition for the equality of (\ref{ucdk}) is that $E$ is a reduced alternating diagram of some $(2,p)$-torus knot, or $E$ is a diagram with $c(E)=1$. The necessary condition for the equality of (\ref{ucdl}) is that every $D^i$ is a simple closed curve on $\mathbb{S}^2$ and every subdiagram $D^i\cup D^j$ is an alternating diagram. 
\phantom{x} \noindent Hanaki and Kanadome characterized the link diagrams $D$ which satisfy $u(D)=(c(D)-1)/2$ as follows \cite{hanaki-kanadome}: \phantom{x} \noindent Let $D=D^1\cup D^2\cup \dots \cup D^r$ be a diagram of an $r$-component link. Then we have $$u(D)=\frac{c(D)-1}{2}$$ if and only if exactly one of $D^1,D^2,\dots ,D^r$ is a reduced alternating diagram of a $(2,p)$-torus knot, the other components are simple closed curves on $\mathbb{S}^2$, and the non-self crossings of the subdiagram $D^i\cup D^j$ are all positive, all negative, or empty for each $i\neq j$. In addition, they showed that any minimal crossing diagram $D$ of a link $L$ with $u(L)=(c(L)-1)/2$ satisfies $u(D)=(c(D)-1)/2$. \phantom{x} \noindent Abe and Higa studied the knot diagrams $D$ which satisfy $$u(D)=\frac{c(D)-2}{2}.$$ Let $D$ be a knot diagram with $u(D)=(c(D)-2)/2$. They showed in \cite{abe-higa} that for any crossing point $p$ of $D$, one of the components of $D_p$ is a reduced alternating diagram of a $(2,p)$-torus knot and the other component of $D_p$ has no self-crossings, where $D_p$ is the diagram obtained from $D$ by smoothing at $p$. In addition, they showed that any minimal crossing diagram $D$ of a knot $K$ with $u(K)=(c(K)-2)/2$ satisfies the above condition. \phantom{x} \noindent In addition to (\ref{ucdk}), we have the following corollary: \phantom{x} \begin{cor} For a knot diagram $E$, we have $$u(E)\leq d(|E|)\leq \frac{c(E)-1}{2}.$$ Further, if we have $$u(E)=d(|E|)=\frac{c(E)-1}{2},$$ then $E$ is a reduced alternating diagram of some $(2,p)$-torus knot, or $E$ is a diagram with $c(E)=1$. \end{cor} \phantom{x} \noindent In addition to (\ref{ucdl}), we have the following corollary. 
\phantom{x} \begin{cor} \begin{description} \item[(i)] For an $r$-component link diagram $D$, we have $$u(D)\leq d(|D|)\leq \frac{c(D)}{2}.$$ \item[(i\hspace{-1pt}i)] We have $$u(D)\leq d(|D|)=\frac{c(D)}{2}$$ if and only if every $D^i$ is a simple closed curve on $\mathbb{S}^2$ and the number of over-crossings of $D^i$ is equal to the number of under-crossings of $D^i$ in every subdiagram $D^i\cup D^j$ for each $i\neq j$. \\ \item[(i\hspace{-1pt}i\hspace{-1pt}i)] If we have $$u(D)=d(|D|)=\frac{c(D)}{2},$$ then every $D^i$ is a simple closed curve on $\mathbb{S}^2$ and for each pair $i$, $j$, the subdiagram $D^i\cup D^j$ is an alternating diagram. \end{description} \phantom{x} \begin{proof} \begin{description} \item[(i)] The inequality $u(D)\leq d(|D|)$ holds because $u(D)\leq d(D)$ holds for every oriented diagram. We show that $d(|D|)\leq c(D)/2$. Let $D$ be an oriented diagram which satisfies $$d(D)=\sum ^r_{i=1} d(D^i)+ld(D)=d(|D|).$$ Then $D$ also satisfies \begin{align} d(D^i)\leq \frac{c(D^i)}{2} \label{101} \end{align} for every component $D^i$ because of the orientation of $D$. By Lemma \ref{2dc}, we have \begin{align} ld(D)\leq \frac{lc(D)}{2}. \label{102} \end{align} Then we have $$\sum ^r_{i=1} d(D^i)+ld(D)\leq \sum ^r_{i=1} \frac{c(D^i)}{2}+\frac{lc(D)}{2}$$ by (\ref{101}) and (\ref{102}). Hence we obtain the inequality $$d(|D|)\leq \frac{c(D)}{2}.$$\\ \item[(i\hspace{-1pt}i)] Suppose that the equality $d(|D|)=c(D)/2$ holds. Then the equalities \begin{align} d(D^i)=\frac{c(D^i)}{2} \label{103} \end{align} and \begin{align} ld(D)=\frac{lc(D)}{2} \label{104} \end{align} hold by (\ref{101}) and (\ref{102}), where $D$ has an orientation such that $d(D)=d(|D|)$. The equality (\ref{103}) is equivalent to $c(D^i)=0$ for every $D^i$. We prove this by an indirect proof. We assume that $c(D^i)>0$ for a component $D^i$. 
In this case, we have the inequality \begin{align} d(D^i)+d(-D^i)+1\leq c(D^i) \label{105} \end{align} by Theorem \ref{dk} since $D^i$ has a self-crossing. We also have \begin{align} d(D^i)=d(-D^i)=\frac{c(D^i)}{2} \label{106} \end{align} because $d(D^i)\leq d(-D^i)$ and (\ref{103}). By substituting (\ref{106}) for (\ref{105}), we have $$c(D^i)+1\leq c(D^i).$$ This implies that the assumption $c(D^i)>0$ leads to a contradiction. Therefore every $D^i$ is a simple closed curve. The equality (\ref{104}) is equivalent, by Lemma \ref{2dc}, to the condition that the number of over-crossings of $D^i$ is equal to the number of under-crossings of $D^i$ in every subdiagram $D^i\cup D^j$ for each $i\neq j$. On the other hand, if every $D^i$ is a simple closed curve, and the number of over-crossings of $D^i$ is equal to the number of under-crossings of $D^i$ in every subdiagram $D^i\cup D^j$ for each $i\neq j$, then we have $$d(|D|)=ld(D)=\frac{lc(D)}{2}=\frac{c(D)}{2}.$$\\ \item[(i\hspace{-1pt}i\hspace{-1pt}i)] This holds by Corollary \ref{3cor}(i) and Taniyama's condition above. \end{description} \end{proof} \label{3cor} \end{cor} \phantom{x} \noindent Let $K$ be a knot, and $L$ an $r$-component link. Let $u(K)$ be the unknotting number of $K$, and $u(L)$ the unlinking number of $L$. The following inequalities are also well-known \cite{nakanishi}: \begin{align} u(K)\leq \frac{c(K)-1}{2}, \label{uck} \end{align} \begin{align} u(L)\leq \frac{c(L)}{2}. \label{ucl} \end{align} \noindent The following conditions are mentioned by Taniyama \cite{taniyama}: \phantom{x} \noindent The necessary condition for the equality of (\ref{uck}) is that $K$ is a $(2,p)$-torus knot ($p$ odd, $p\neq \pm 1$). The necessary condition for the equality of (\ref{ucl}) is that $L$ has a diagram $D$ such that every $D^i$ is a simple closed curve on $\mathbb{S}^2$ and every subdiagram $D^i\cup D^j$ is an alternating diagram. 
\phantom{x} \noindent In addition to (\ref{uck}) and (\ref{ek}), we have the following corollary: \phantom{x} \begin{cor} \begin{description} \item[(i)] We have $$u(K)\leq \frac{e(K)}{2}\leq \frac{c(K)-1}{2}.$$ \item[(i\hspace{-1pt}i)] We have $$u(K)\leq \frac{e(K)}{2}=\frac{c(K)-1}{2}$$ if and only if $K$ is a prime alternating knot. \\ \item[(i\hspace{-1pt}i\hspace{-1pt}i)] If we have $$u(K)=\frac{e(K)}{2}=\frac{c(K)-1}{2},$$ then $K$ is a $(2,p)$-torus knot ($p$ odd, $p\neq \pm 1$). \end{description} \end{cor} \phantom{x} \noindent In addition to (\ref{ucl}), we have the following corollary: \phantom{x} \begin{cor} For an unoriented $r$-component link $L$, we have $$u(L)\leq \frac{e(L)}{2}\leq \frac{c(L)}{2}.$$ Further, if the equality $u(L)=e(L)/2=c(L)/2$ holds, then $L$ has a diagram $D=D^1\cup D^2\cup \dots \cup D^r$ such that every $D^i$ is a simple closed curve on $\mathbb{S}^2$ and for each pair $i,j$, the subdiagram $D^i\cup D^j$ is an alternating diagram. \phantom{x} \begin{proof} We prove the inequality $u(L)\leq e(L)/2$. Let $D$ be a minimal crossing diagram of $L$ which satisfies $e(L)=d(D)+d(-D)$. Then we obtain $$e(L)=d(D)+d(-D)\geq 2u(D)\geq 2u(L).$$ The condition for the equality is due to Taniyama's condition above. \end{proof} \end{cor} \phantom{x} \section{Splitting number} In this section, we define the splitting number and enumerate relations between the warping degree and the complete splitting number. The \textit{splitting number} (resp. \textit{complete splitting number}) of $D$, denoted by $Split(D)$ (resp. $split(D)$), is the smallest number of crossing changes which are needed to obtain a diagram of a splittable (resp. completely splittable) link from $D$. The splitting number of a link, which is the minimal $Split(D)$ over all diagrams $D$, is defined by Adams \cite{adams}. The \textit{linking splitting number} (resp. \textit{linking complete splitting number}) of $D$, denoted by $lSplit(D)$ (resp. 
$lsplit(D)$), is the smallest number of non-self-crossing changes which are needed to obtain a diagram of a splittable (resp. completely splittable) link from $D$. We have the following proposition: \phantom{x} \begin{prop} \begin{description} \item[(i)] We have $$split(D)\leq d(|D|).$$\\ \item[(i\hspace{-1pt}i)] We have $$split(D)\leq lsplit(D)\leq ld(D)\leq \frac{lc(D)}{2}\leq \frac{c(D)}{2}.$$ \end{description} \label{lsplit} \end{prop} \phantom{x} \noindent We give examples of Proposition \ref{lsplit}. \begin{eg} The diagram $D$ in Figure \ref{ex-sd} has $split(D)=2<d(|D|)=3$. The diagram $E$ in Figure \ref{ex-sd} has $split(E)=d(|E|)=3$. \begin{figure}\label{ex-sd} \end{figure} \end{eg} \phantom{x} \begin{eg} The diagram $D$ in Figure \ref{ex-splitl} has $split(D)=1<lsplit(D)=2$. The diagram $E$ in Figure \ref{ex-splitl} has $split(E)=lsplit(E)=2$. \begin{figure}\label{ex-splitl} \end{figure} \end{eg} \phantom{x} \begin{eg} The diagram $D$ in Figure \ref{ex-lsplitld} has $lsplit(D)=3<ld(D)=5$. The diagram $E$ in Figure \ref{ex-lsplitld} has $lsplit(E)=ld(E)=5$. \begin{figure}\label{ex-lsplitld} \end{figure} \end{eg} \phantom{x} \noindent We raise the following question: \phantom{x} \begin{q} When does the equality $$split(D)=d(|D|),$$ $$split(D)=lsplit(D)$$ or $$lsplit(D)=ld(D)$$ hold? \end{q} \phantom{x} \section{Calculation of warping degree} In this section, we show methods for calculating the warping degree and the linking warping degree by using matrices. First, we give a method for calculating the warping degree $d(D)$ of an oriented knot diagram $D$. Let $a$ be a base point of $D$. We can easily obtain the warping degree $d(D_a)$ of $D_a$ by counting the warping crossing points. Let $[D_a]$ be a sequence of ``$o$'' and ``$u$'', which is obtained as follows. When we go along the oriented diagram $D$ from $a$, we write down ``$o$'' (resp. ``$u$'') if we reach a crossing point as an over-crossing (resp. under-crossing), in numerical order. 
We next perform normalization on $[D_a]$, by deleting the subsequence ``$ou$'' repeatedly, to obtain the normalized sequence $\lfloor D_a\rfloor $. Then we have $$d(D)=d(D_a)-\frac{1}{2}\sharp \lfloor D_a\rfloor ,$$ where $\sharp \lfloor D_a\rfloor $ denotes the number of entries in $\lfloor D_a\rfloor $. Thus, we obtain the warping degree $d(D)$ of $D$. In the following example, we find the warping degree of a knot diagram by using the above algorithm. \begin{eg} For the oriented knot diagram $D$ and the base point $a$ in Figure \ref{948}, we have $d(D_a)=4$ and $[D_a]=[oouuouuouuouoouoou]$. By normalizing $[D_a]$, we obtain $\lfloor D_a\rfloor =[uuoo]$. Hence we find the warping degree of $D$ as follows: $$d(D)=4-\frac{1}{2}\times 4=2.$$ \begin{figure}\label{948} \end{figure} \end{eg} \noindent For some types of knot diagram, this algorithm is useful in formulating the warping degree or looking into its properties. We enumerate the properties of an oriented diagram of a pretzel knot of odd type in the following example: \begin{eg} Let $D=P(\varepsilon _1n_1, \varepsilon _2n_2, \dots ,\varepsilon _mn_m)$ be an oriented pretzel knot diagram of odd type ($\varepsilon _i\in \{+1,-1\}$, $n_i,m$: odd$>0$), where the orientation is given as shown in Figure \ref{pretzel}. We take base points $a$, $b$ as in Figure \ref{pretzel}. Then we have $$d(D_a)=d(-D_b)=\frac{c(D)}{2}+\sum _i \frac{(-1)^{i+1}\varepsilon _i}{2}$$ and $$\sharp \lfloor D_a\rfloor =\sharp \lfloor -D_b\rfloor .$$ Hence we have $d(D)=d(-D)$ in this case. In particular, if $D$ is alternating, i.e., $\varepsilon _1=\varepsilon _2=\dots =\varepsilon _m=\pm 1$, then we have $$d(D)=\frac{c(D)}{2}-\frac{1}{2}.$$ \begin{figure}\label{pretzel} \end{figure} \end{eg} \noindent We next explore how to calculate the linking warping degree $ld(D)$ by using matrices. 
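Before turning to matrices, we note that the normalization of $[D_a]$ is a bracket-matching reduction (each ``$o$'' cancels an immediately following ``$u$''), so it can be carried out in one pass with a stack. The following Python sketch reproduces the worked example above; the function names are ours, not notation from the paper:

```python
def normalize(seq):
    """Repeatedly delete adjacent "ou" pairs from seq; returns the normal form."""
    stack = []
    for c in seq:
        if c == "u" and stack and stack[-1] == "o":
            stack.pop()  # an adjacent "ou" is deleted
        else:
            stack.append(c)
    return "".join(stack)

def warping_degree(d_a, seq):
    """d(D) = d(D_a) - (1/2) * (number of entries of the normalized sequence)."""
    return d_a - len(normalize(seq)) // 2

# Worked example: d(D_a) = 4 and [D_a] = [oouuouuouuouoouoou]
assert normalize("oouuouuouuouoouoou") == "uuoo"
print(warping_degree(4, "oouuouuouuouoouoou"))  # 2
```

Since the rewriting rule (delete an adjacent ``$ou$'') is terminating and confluent, the stack computes the same normal form as deleting the subsequences in any order.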
For a link diagram $D$ and a base point sequence ${\bf a}$ of $D$, we define an $r$-square matrix $M(D_{\bf a})=(m_{i j})$ by the following rule: \begin{itemize} \item For $i\neq j$, $m_{i j}$ is the number of crossings of $D^i$ and $D^j$ which are under-crossings of $D^i$. \item For $i=j$, $m_{i j}=d(D^i)$. \end{itemize} \noindent We show an example. \phantom{x} \begin{eg} For $D_{\bf a}$ and $D_{\bf b}$ in Figure \ref{matrix}, we have \begin{eqnarray*} M(D_{\bf a})= \left( \begin{array}{ccc} 0&1&0\\ 1&0&0\\ 2&2&0\\ \end{array} \right) ,\ M(D_{\bf b})= \left( \begin{array}{ccc} 0&2&2\\ 0&0&1\\ 0&1&0\\ \end{array} \right) . \end{eqnarray*} \begin{figure}\label{matrix} \end{figure} \end{eg} \phantom{x} \noindent We note that $ld(D_{\bf a})$ is obtained by summing the upper triangular entries of $M(D_{\bf a})$, that is $$ld(D_{\bf a})=\sum _{i<j}m_{i j},$$ and we notice that $$d(D_{\bf a})=\sum _{i\leq j}m_{i j},$$ where $m_{i j}$ is an entry of $M(D_{\bf a})$ ($i,j=1,2,\dots ,r$). For the base point sequence ${\bf a}'=(a_1,a_2,\dots ,a_{k+1},a_k,\dots ,a_r)$ which is obtained from a base point sequence ${\bf a}$ by exchanging $a_k$ and $a_{k+1}$ ($k=1,2,\dots ,r-1$), the matrix $M(D_{{\bf a}'})$ is obtained as follows: $$M(D_{{\bf a}'})=P_kM(D_{\bf a})P_k^{-1},$$ where $P_k$ is the transposition matrix \begin{eqnarray*} P_k= \left( \begin{array}{cccccccc} 1&&&&&&&\\ &\ddots &&&&&&\\ &&&0&1&&&\\ &&&1&0&&&\\ &&&&&&\ddots &\\ &&&&&&&1\\ \end{array} \right) ,\ \text{i.e., } (P_k)_{i j}=\left\{ \begin{array}{l} 1\text{ for }(i,j)=(k,k+1),\ (k+1,k)\\ \hspace{3mm}\text{and }(i,j)=(i,i)\ (i\neq k,k+1),\\ 0\text{ otherwise. } \end{array} \right. \end{eqnarray*} \phantom{x} \noindent With respect to the linking warping degree, we have $$ld(D_{{\bf a}'})=ld(D_{\bf a})-m_{k\, k+1}+m_{k+1\, k},$$ where $m_{k\, k+1}, m_{k+1\, k}$ are entries of $M(D_{\bf a})$. 
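These observations are easy to check numerically: $ld(D_{\bf a})$ is the sum of the strictly upper triangular entries of $M(D_{\bf a})$, and exchanging $a_k$ and $a_{k+1}$ conjugates the matrix by $P_k$. A small Python sketch using the two matrices of the example (helper names are ours; indices are $0$-based):

```python
def ld_from_matrix(M):
    """Linking warping degree ld(D_a): sum of the strictly upper triangular entries."""
    r = len(M)
    return sum(M[i][j] for i in range(r) for j in range(i + 1, r))

def exchange(M, k):
    """Conjugation by P_k: swap rows and columns k and k+1 (0-indexed)."""
    r = len(M)
    sigma = list(range(r))
    sigma[k], sigma[k + 1] = sigma[k + 1], sigma[k]
    return [[M[sigma[i]][sigma[j]] for j in range(r)] for i in range(r)]

M_a = [[0, 1, 0], [1, 0, 0], [2, 2, 0]]  # M(D_a) from the example
M_b = [[0, 2, 2], [0, 0, 1], [0, 1, 0]]  # M(D_b) from the example
print(ld_from_matrix(M_a), ld_from_matrix(M_b))  # 1 5

# ld(D_a') = ld(D_a) - m_{k,k+1} + m_{k+1,k}, here for the exchange of a_1 and a_2:
k = 0
assert ld_from_matrix(exchange(M_a, k)) == ld_from_matrix(M_a) - M_a[k][k + 1] + M_a[k + 1][k]
```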
To enumerate the permutation of the order of ${\bf a}=(a_1,a_2,\dots ,a_r)$, we consider a matrix $Q=P^{r-1}P^{r-2}\dots P^2P^1$, where $P^n$ denotes $P_nP_{n+1}\dots P_{k_n}$ ($n\leq k_n \leq r-1$) or the identity matrix $E_r$. Since $Q$ depends on the choices of $k_n$ ($n=1,2,\dots ,r-1$), we also denote $Q$ by $Q_{\mathbf{k}}$, where $\mathbf{k}=(k_1,k_2,\dots ,k_{r-1})$ ($n\leq k_n \leq r$) and we regard $P^n=E_r$ in the case $k_n=r$. Hence we obtain the following formula: $$ld(D)=\min _{\mathbf{k}} \{ \sum _{i<j}m_{i j} | m_{i j}:\text{ an entry of }Q_{\mathbf{k}}M(D_{\bf a})Q_{\mathbf{k}}^{-1}\}.$$ Thus, we obtain the warping degree of an oriented link diagram by summing the warping degrees $d(D^i)$ ($i=1,2,\dots ,r$) and the linking warping degree $ld(D)$. \title{The warping degree of a link diagram} \end{document}
\begin{document} \pdfpagewidth=8.5in \pdfpageheight=11.in \pagenumbering{roman} \title{Transfer Learning using Feature Selection} \author{Paramveer S. Dhillon} \date{April 17, 2009} \dept{Computer and Information Science} \supervisor{Lyle H. Ungar} \groupchair{Rajeev Alur} \submitdate{April 2009} \copyrightyear{2009} \beforepreface \tableofcontents \tablespagetrue \listoftables \figurespagetrue \listoffigures \prefacesection{Acknowledgements} I would like to thank Prof. Lyle Ungar for advising the work on this thesis, as well as Prof. Ben Taskar (CIS, University of Pennsylvania) and Prof. Dean Foster (Statistics, University of Pennsylvania) for serving on the thesis committee. Besides this, I would also like to thank Prof. Martha Palmer, University of Colorado (Boulder), U.S.A. for providing the Word Sense Disambiguation data, and Prof. Dana Pe'er, Columbia University, New York City, U.S.A. for providing the Yeast dataset. I would also like to thank Brian Tomasik, Computer Science Department, Swarthmore College, PA, U.S.A. for providing help with some of the experiments for MIC. \prefacesection{Abstract} We present three related ways of using Transfer Learning to improve feature selection. The three methods address different problems, and hence share different kinds of information between tasks or feature classes, but all three are based on the information theoretic Minimum Description Length (MDL) principle and share the same underlying Bayesian interpretation. The first method, MIC, applies when predictive models are to be built simultaneously for multiple tasks (``simultaneous transfer'') that share the same set of features. MIC allows each feature to be added to none, some, or all of the task models and is most beneficial for selecting a small set of predictive features from a large pool of features, as is common in genomic and biological datasets. 
Our second method, TPC (Three Part Coding), uses a similar methodology for the case where the features can be divided into feature classes. Our third method, Transfer-TPC, addresses the ``sequential transfer'' problem in which the task to which we want to transfer knowledge may not be known in advance and may have different amounts of data than the other tasks. Transfer-TPC is most beneficial when we want to transfer knowledge between tasks which have unequal amounts of labeled data, for example the data for disambiguating the senses of different verbs. We demonstrate the effectiveness of these approaches with experimental results on real world data pertaining to genomics and to Word Sense Disambiguation (WSD). \pagenumbering{arabic} \chapter{ Introduction} Classical supervised learning algorithms use a set of feature-label pairs to learn mappings from the features to the associated labels. They generally do this by considering each classification task (each possible label) in isolation and learning a model for that task. Learning models independently for different tasks often works well, but when the labeled data is limited and expensive to obtain, an attractive alternative is to build shared models for multiple related tasks. For example, when one is trying to predict a set of related responses (``tasks''), be they multiple clinical outcomes for patients or growth rates for yeast strains under different conditions, it may be possible to ``borrow strength'' by sharing information between the models for the different responses. Inductive transfer can be particularly valuable when we have disproportionate amounts of labeled data for ``similar tasks''. In such a case, if we build separate models for each task, then we often get poor predictive accuracies on tasks which have little data. 
Transfer learning can potentially be used to share information from the tasks with more labeled data to ``similar'' tasks with less data, significantly boosting their predictive accuracies. Transfer learning has been widely used~\cite{Caruana97multitasklearning,ando2005flp,NIPS2008,argyriou,bendavid,koller07,rainangkoller}, but generally for determining a shared latent space between tasks, and not for feature selection. Our contribution is to present three models for doing transfer learning that focus on feature selection. Each of the three models is best suited for a different problem structure. The problem of disambiguating word senses based on their context illustrates the three different types of applications. Firstly, each observation of a word (e.g. the sentence containing the verb ``fire'') is associated with multiple labels corresponding to each of the different possible meanings (e.g., firing a person, firing a gun, firing off a note, etc.). Rather than building separate models for each sense (``Is this word sense 1 or not?,'' ``Is this word sense 2 or not?,'' etc.), we can note that features that are useful for predicting one sense are likely to be useful for predicting the other senses (perhaps with coefficients of different signs). Secondly, when predicting whether a word has a given sense, we can group the features derived from its context into different classes. For example, there are features that characterize the specific words before and after the target word, features based on the part of speech labels of those words, and features characterizing the topic of the document that the ambiguous word is in. We can ``transfer knowledge'' between the features (not the tasks!) by noting that when one feature is selected from a class, other features are more likely to be selected from the same class. 
Finally, when predicting whether a word has a given sense, one might make use of the fact that models for predicting synonyms of that word are likely to share many of the same features. I.e., a model for disambiguating one sense of ``discharge'' is likely to use many of the same features as one for disambiguating the sense of ``fire'' which is its synonym. We address all three problems using penalized regression, where linear or logistic regression models of the form $y = x \beta$ are learned such that the coefficients (weights) $\beta$ minimize a penalized likelihood such as $$ \| Y - \hat{Y} \|_2 + \lambda \| \beta \|_0 .$$ We use an $\ell_0$ norm on $\beta$ (the number of nonzero coefficients) to encourage sparse solutions and, critically, we use information theory to pick the penalty $\lambda$ in a way that implements the transfer learning. We can broadly divide the above three problems into two categories. We address the first two problems using ``simultaneous transfer'': training data for all the tasks or feature classes are assumed to be present before learning. We then select a ``joint'' set of features shared across the related tasks or feature sets. We call the information theoretic penalties used in feature selection MIC (Multiple Inclusion Criterion) and TPC (Three Part Coding) for the multi-task and multi-feature class problems, respectively. We address the third problem, transferring between different tasks which do not share observations (as in the case of different words), using ``sequential transfer'': i.e., we assume that models for some tasks have been learned and are then used to aid feature selection in building a model for a new task. We call the method used for this problem ``Transfer-TPC.'' We now describe each of these methods (MIC, TPC, and Transfer-TPC) in slightly more detail. MIC addresses the classic multi-task learning problem \cite{Caruana97multitasklearning} where each observation is associated with multiple tasks (a.k.a. 
multiple labels or multiple responses, {\bf Y}), and allows each feature to be added to none, some, or all of the tasks; it is most beneficial for selecting a small set of predictive features from a large pool of features. For example, the tasks can be different senses of a word, to be predicted from the word context, or different phenotypes (human diseases or yeast growth rates) to be predicted from a set of gene expression values. Our second approach, TPC (Three Part Coding), is extremely similar to MIC, but applies when the features can be divided into feature classes. Feature classes are pervasive in real data, as shown in Fig.~\ref{featClasses}. For example, in gene expression data, the genes that serve as features may be grouped into classes based on their membership in gene families or pathways. When doing word sense disambiguation or named entity extraction, features fall into classes including adjacent words, their parts of speech, and the topic and venue of the document the word is in. When predictive features occur predominantly in a small number of feature classes, TPC significantly improves feature selection over naive methods which do not account for the classes. TPC does not expect the data to have multiple responses; rather, it assumes features are shared within classes, as opposed to MIC, where they are shared across tasks. The two methods could, of course, be used together. \begin{figure}\label{featClasses} \end{figure} MIC tends to include a given feature into more and more tasks, since by doing so the cost of that feature becomes ``cheap'', as explained below. TPC tends to include more and more features from a single feature class, as the cost of adding subsequent features from a feature class is lower. They differ slightly in their details due to different assumptions about the correlation structure of features and responses, but are otherwise effectively identical. 
Transfer-TPC, which uses ``sequential transfer'' from a set of already modeled ``similar'' tasks to guide feature selection on a new task, is somewhat different from classic multi-task learning methods, in that different feature values and different amounts of data are available for the different tasks. Transfer-TPC is most beneficial when we want to transfer knowledge between tasks which have unequal amounts of labeled data. For example, the VerbNet dataset has roughly six times more data for one sense of the word ``kill'' than for the distributionally similar senses of other words like ``arrest'' and ``capture''. In such cases, we can transfer knowledge between these similar senses of words to facilitate learning predictive models for the rarer word senses. Transfer-TPC gives significant improvement in performance in all cases, though the gain in predictive performance is more pronounced when the test task has less data than the training tasks, as we demonstrate in Section~\ref{TransTPC}. Our models use an $\ell_0$ penalty instead of the $\ell_1 / \ell_2$ penalty~\cite{argyriou,obozinski} to induce sparsity and select features. The exact $\ell_0$ penalty requires subset selection, known to be NP-hard \cite{natarajan1995sas}, but a close approximation can be found by stepwise search. Although approximate, stepwise $\ell_0$ methods generally yield sparser models than exact $\ell_1$ methods \cite{defensel0}. Moreover, they allow for a more flexible choice of penalties, as we illustrate later in the thesis. All three models use the information theoretic Minimum Description Length (MDL) principle \cite{Risannen} to derive an efficient coding scheme for stepwise regression. The rest of the thesis is organized as follows. In the next chapter we review relevant previous work. In Chapter 3 we provide background on basic feature selection methods and the MDL principle. Then in Chapter 4 we provide the general methodology used by all our models. 
In Chapters 5, 6 and 7 we describe the MIC, TPC and Transfer-TPC models in detail, and also show experimental results on real and synthetic data. In Chapter 8, we give a discussion of all three models and show some connections among them. We conclude in Chapter 9 by providing a brief summary. \chapter{ Related Work} ``Multi-Task Learning'' or ``Transfer Learning'' has been studied extensively \cite{Caruana97multitasklearning,ando2005flp,NIPS2008,argyriou,bendavid,koller07,rainangkoller} in the literature. To give a couple of examples: \cite{ando2005flp} do joint empirical risk minimization and treat the multi-response problem by introducing a low-dimensional subspace which is common to all the response variables. \cite{rainangkoller} construct a multivariate Gaussian prior with a full covariance matrix for a set of ``similar'' supervised learning tasks and then use semidefinite programming (SDP) to combine these estimates and learn a good prior for the current learning task. \cite{koller07} use the concept of meta-features; they learn meta-priors and feature weights from a set of similar prediction tasks using convex optimization. Some traditional methods such as neural networks also share parameters between the different tasks~\cite{Caruana97multitasklearning}. However, none of the above methods do feature selection. This limits their applicability in domains such as computational biology (e.g., genomics) and language (e.g., Word Sense Disambiguation) \cite{WSD06}, where often only a handful of the thousands of potential features are predictive and feature selection is very important. There has been a small amount of work which does feature selection for multi-task learning~\cite{argyriou,obozinski}. Both of these papers use an $\ell_2$ penalty over coefficients for all tasks associated with a single feature, combined with an $\ell_1$ penalty over features; this tends to put each feature into either all or none of the task models. 
\cite{argyriou} use this mixed-norm ($\ell_1 / \ell_2$) approach for multi-task feature selection and show that the general subspace selection problem can be formulated as an optimization problem involving the trace norm. \cite{obozinski} also use an $\ell_1 / \ell_2$ block-norm regularization, but they focus on the case where the trace norm is not required and instead use a homotopy-based approach to evaluate the entire regularization path efficiently \cite{lars}. \chapter{ Background } Standard feature selection methods for supervised learning assume a setting consisting of {\it n} observations and a fixed set of {\it m} candidate features. The goal of feature selection is to select the feature subset that leads to a model with the least prediction error on the test set. For many prediction tasks only a small fraction of the total {\it m} features are beneficial, so good feature selection methods can give large improvements in predictive accuracy \cite{nips03}. State-of-the-art feature selection methods use either an $\ell_0$ or an $\ell_1$ penalty on the coefficients. $\ell_1$ penalty methods such as the Lasso \cite{Lasso} and its variants \cite{fusedlasso,grouplasso}, being convex, can be solved by efficient optimization and give guaranteed optimal solutions \cite{lars}. On the other hand, $\ell_0$ penalty methods require an explicit search through the feature space (as in stepwise, stagewise and streamwise regression), but have the advantage that they allow the use of theory to select regularization penalties. As such, they avoid the cross validation usually needed by $\ell_1$ methods, and they can be easily extended to select penalties in more complex settings, as in this thesis. The most common of these $\ell_0$ penalty methods is stepwise feature selection. It is an iterative procedure in which, at each step, all candidate features are tested and the best feature is selected and added to the model.
The stepwise search terminates when either all of the {\it m} candidate features have been added to the model, or none of the remaining features are beneficial to the model, according to some measure such as a p-value threshold. Another recent method of interest is streamwise feature selection (SFS) \cite{zhou:06:jmlr}, which is a greedy online method. In this method each feature is evaluated for addition to the model only once; if the reduction in prediction error resulting from adding the feature to the model is more than an ``adaptively adjusted'' threshold, then that feature is added to the model. This contrasts with ``batch'' methods such as Support Vector Machines (SVMs) and neural nets, which require having all features in advance. SFS is somewhat similar to an alternate class of feature selection methods that control the False Discovery Rate (FDR) \cite{Benjamini}, and scales well to very large feature sets. \chapter{General Methodology} In this chapter we describe the basic methodology that all three of our models share: an MDL (Minimum Description Length) \cite{Risannen} based coding scheme, explained below, which specifies a penalized likelihood method. In general, penalized likelihood methods aim to minimize an objective function of the form \begin{equation} \label{penlikelihood} \text{score} = -2\log(\text{likelihood of {\bf Y} given {\bf X}}) + F \times q, \end{equation} where $q$ is the current number of features in the model. Various penalties $F$ have been proposed, including $F=2$, corresponding to AIC (Akaike Information Criterion), $F=\log n$, corresponding to BIC (Bayesian Information Criterion), and $F = 2 \log m$, corresponding to RIC (Risk Inflation Criterion---similar to a ``Bonferroni correction'')~\cite{FosterGeorge}. The penalties for these methods are summarized in Table~\ref{tab:PenIC}.
\begin{table} [htbp] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \caption{Penalties for different Information Criterion methods.} \begin{small} \label{tab:PenIC} \vskip 0.02in \begin{center} \begin{tabular}{lcc} \hline Name & Penalty & Assumption\\ \hline Akaike Information Criterion (AIC) & $2$ & - \\ Bayesian Information Criterion (BIC) & $\log n$ & $n \gg m$\\ Risk Inflation Criterion (RIC) & $2\log m$ & $m \gg n$\\ \hline \end{tabular} \end{center} \end{small} \vskip -0.02in \end{table} Each of these penalties can be interpreted within the framework of the Minimum Description Length (MDL) principle \cite{Risannen}. MDL envisions a ``sender,'' who knows {\bf X} and {\bf Y}, and a ``receiver,'' who knows only {\bf X}. In order to transmit {\bf Y} using as few bits as possible, the sender encodes not the raw {\bf Y} matrix but instead a model for {\bf Y} given {\bf X}, followed by the residuals of {\bf Y} about that model. The length $S$ of this message, in bits, is called the {\it description length} and is the sum of two components. The first is $S_E$, the number of bits for encoding the residual errors, which according to standard MDL is given by the negative log-likelihood of the data given the model; this can be identified with the first term of Equation~\ref{penlikelihood}. The second component, $S_M$, is the number of bits used to describe the model itself and can be seen as corresponding to the second term of Equation~\ref{penlikelihood}. For MIC, we use the term {\it total description length} (TDL) to denote the combined length of the message for all {\it h} tasks, and hence we select features for the $h$ responses (tasks)\footnote{The notion of ``task'' in this section is a separate response vector; it differs from the general notion of ``task'' in transfer learning (e.g., in Transfer-TPC), where a task need not be a separate response vector.} simultaneously to minimize $S$.
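To make the scoring concrete, the following sketch implements forward stepwise selection under the penalized-likelihood score of Equation~\ref{penlikelihood} for a single Gaussian response. The function names and the use of numpy's least squares are our own illustrative choices, not part of MIC itself; passing $F=2$, $F=\log n$, or $F=2\log m$ yields AIC-, BIC-, or RIC-style selection respectively.

```python
import math
import numpy as np

def penalized_score(X, y, support, F):
    # score = -2 log(likelihood) + F * q; for a Gaussian linear model the
    # first term reduces to n * log(RSS / n), up to an additive constant
    n = len(y)
    if support:
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        resid = y - X[:, support] @ beta
    else:
        resid = y - y.mean()
    rss = float(resid @ resid)
    return n * math.log(rss / n) + F * len(support)

def stepwise_select(X, y, F):
    # greedy forward search: repeatedly add the feature that most
    # reduces the score, stopping when no addition helps
    m = X.shape[1]
    support = []
    best = penalized_score(X, y, [], F)
    improved = True
    while improved:
        improved = False
        for j in range(m):
            if j in support:
                continue
            s = penalized_score(X, y, support + [j], F)
            if s < best:
                best, best_j, improved = s, j, True
        if improved:
            support.append(best_j)
    return support
```

With the RIC-style penalty $F = 2\log m$, each added feature must justify roughly $2\log m$ units of likelihood gain, which is what keeps the selection conservative when $m \gg n$.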
Thus, when we evaluate a feature for addition into the model, we want to maximize the reduction of TDL $\Delta S^k$ incurred by adding that feature to a subset $k$ of the $h$ tasks $(1 \leq k \leq h)$: \begin{equation} \nonumber \Delta S^k = \Delta S_E^k - \Delta S_M^k \end{equation} where $\Delta S_E^k > 0$ is the reduction in residual-error coding cost due to the data likelihood increase given the new feature, and $\Delta S_M^k > 0$ is the increase in model cost to encode the new feature.\footnote{ $\Delta S_E^k$ is always greater than zero, because even a spurious feature will slightly increase the data likelihood.} As will be seen in Section \ref{coding-schemes}, MIC's model cost (i.e., $\Delta S_M$) includes a component for coding feature coefficients that resembles the AIC or BIC penalty, plus a component for specifying which features are present in the model that resembles the RIC penalty. In the case of TPC and Transfer-TPC, the definition of the term {\it total description length} (TDL) is slightly different: there it is just the length of the message for the single response (task), and $\Delta S_M$ consists of three parts: the class of the feature being added, which feature within the class, and its coefficient. \chapter{ Model 1: MIC (Multiple Inclusion Criterion) } In this chapter we explain MIC, a model for transfer/multi-task learning that does ``simultaneous transfer'' (joint feature selection) for multiple related tasks which share the same set of features. It uses the MDL (Minimum Description Length) principle to derive an efficient coding scheme for multi-task stepwise regression. First, we describe the notation used and provide a basic overview of MIC. Then we describe the coding schemes used in MIC and provide a comparison of various MIC coding schemes. \section{Notation Used} The symbols used throughout this section are defined in Table~\ref{tab:Symbols}. All the values in the table are given by the data except $m^*$, which is unknown.
\begin{table} [thbp] \setlength{\abovecaptionskip}{1pt} \setlength{\belowcaptionskip}{1pt} \caption{Symbols used and their definitions.} \centering \label{tab:Symbols} \begin{center} \begin{tabular}{c|l} \toprule Symbol & Meaning\\ \midrule $n$ & Total number of observations\\ $m$ & Number of candidate features\\ $m^*$ & Number of beneficial features\\ $h$ & Total number of tasks\\ $k$ & Number of tasks into which a feature \\ & \ \ \ has been added\\ $j$ & Index of feature\\ $i$ & Index of observation\\ \bottomrule \end{tabular} \end{center} \end{table} Thus, we have an $n \times h$ response matrix {\bf Y} with a shared $n \times m$ feature matrix~{\bf X}. \section{Coding Schemes used in MIC}\label{coding-schemes} In this section we describe the coding scheme used by MIC for the general case, in which features can be added to a subset of tasks while the tasks share strength. In the next section we explore the special cases in which a feature is added to either all tasks or none, and in which features are added independently to each task (i.e., no transfer). \subsection{Code $\Delta S_{jE}^k$}\label{regression-code} Let {\bf E} be the residual error matrix: \begin{equation} \nonumber {\bf E = Y - \hat{Y}}, \end{equation} where {\bf Y} and ${\bf \hat{Y}}$ are the $n \times h$ response and prediction matrices, respectively. $\Delta S_{jE}^k$ is the decrease in negative log-likelihood that results from adding feature $j$ to some subset $k$ of the $h$ tasks. If all the tasks were independent, then $\Delta S_{jE}^k$ would simply be the sum of the changes in negative log-likelihood for each of the $h$ models separately. However, we may want our model to allow for nonzero covariance among the tasks.
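Concretely, the resulting error-coding cost $S_E$ (derived in Equation~\eqref{negloglike} below) can be computed directly from the residual matrix. The following numpy sketch estimates $\Sigma$ from {\bf E} itself, which is one plausible choice rather than a prescription, and returns the cost in bits:

```python
import numpy as np

def error_coding_cost_bits(E, diagonal=False):
    # S_E = (n/2) log2((2 pi)^h |Sigma|) + (1 / (2 ln 2)) sum_i e_i^T Sigma^{-1} e_i
    n, h = E.shape
    Sigma = np.atleast_2d(np.cov(E, rowvar=False, bias=True))
    if diagonal:
        Sigma = np.diag(np.diag(Sigma))        # drop the off-diagonal covariances
    _, logdet = np.linalg.slogdet(Sigma)       # log|Sigma| in natural log
    quad = np.einsum('ij,jk,ik->', E, np.linalg.inv(Sigma), E)
    return 0.5 * n * (h * np.log2(2 * np.pi) + logdet / np.log(2)) \
        + quad / (2 * np.log(2))
```

Setting \texttt{diagonal=True} corresponds to the diagonal-$\Sigma$ variant used in the experiments later in this chapter.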
This is particularly true for stepwise regression, because in the first iterations of a stepwise algorithm, the effects of features not present in the model show up as part of the ``noise'' error term, and if two tasks share a feature not yet in the model, the portion of the error term due to that feature will be the same. Thus, letting $e_i$, $i = 1, 2, \ldots, n$, denote the error for the $i^\text{th}$ row of {\bf E}, we assume $e_i \stackrel{\text{\tiny i.i.d.}}{\sim} \mathcal{N}(0,\Sigma)$, with $\Sigma$ an $h \times h$ covariance matrix. In other words, \begin{equation} \nonumber P(e_i)= \frac{1}{\sqrt{ (2\pi)^h |\Sigma|}} \exp \left( -\frac{1}{2} e_i^T \Sigma^{-1}e_i \right) \end{equation} in which $(\cdot)^T$, $(\cdot)^{-1}$, and $|\cdot|$ are the matrix transpose, inverse, and determinant, respectively. Therefore, \begin{equation}\label{negloglike} \begin{split} S_E&=-\log \prod_{i=1}^n P(e_i)\\ & = \frac{n}{2} \log{ \left( (2\pi)^h |\Sigma| \right) } + \frac{1}{2 \ln 2}\sum_{i=1}^n e_i^T \Sigma^{-1}e_i,\\ \end{split} \end{equation} where the $\frac{1}{\ln 2}$ factor appears because we use logarithm base 2 (here and throughout the remainder of the thesis). Note that the superscript $k$ in $\Delta S_{jE}^k$ indicates that the reduction is incurred by adding a new feature to $k$ tasks, but the calculation of $\Delta S_{jE}^k$ is over all {\it h} tasks; i.e., the whole residual error {\bf E} is taken into account. \subsection{Code $\Delta S_{jM}^k$} To describe $\Delta S_{jM}^k$ when a feature is added, MIC uses a three-part coding scheme: \begin{equation} \nonumber \Delta S_{jM}^k = \ell_I + \ell_H + \ell_\theta, \end{equation} where $\ell_I$ is the number of bits needed to describe which feature is being added, $\ell_H$ is the cost of specifying the subset of $k$ of the $h$ task models in which to include the feature, and $\ell_\theta$ is the description length of the $k$ nonzero feature coefficients.
We now consider different coding schemes for $\ell_I$, $\ell_H$, and $\ell_\theta$. \paragraph{Code $\ell_I$} For most data and feature sets, little is known \textit{a priori} about which features will be beneficial.\footnote{Following \cite{zhou2006sfs}, we define a ``beneficial'' feature as one which, if added to the model, would reduce error on a hypothetical infinite test set.} We therefore assume that if a feature $x_j$ is beneficial, its index $j$ is uniformly distributed over $\{1, 2, \ldots, m\}$. This implies $\ell_I = \log m$ bits to encode the index, reminiscent of the RIC penalty in equation \eqref{penlikelihood}. RIC often uses no bits to code the coefficients of the features that are added, based on the assumption that $m$ is so large that the $\log m$ term dominates. This assumption is not valid in the multiple-response setting, where the number of models $h$ could be large. If a feature is added to $k$ of the $h$ tasks, the cost of encoding the $k$ coefficients may be a major part of the cost. We describe the cost to code a coefficient below. \paragraph{Code $\ell_\theta$}~\label{AICcode} This term corresponds to the number of bits required to code the value of the coefficient of each feature. We could use either AIC or the more conservative BIC to code the coefficients; as explained below, we use $2$ bits for each coefficient, similar to AIC. Given a model, MDL chooses the values of the coefficients that maximize the likelihood of the data. \cite{Risannen2} proposes approximating $\theta$, the Maximum Likelihood Estimate (MLE), using a grid resolved to the nearest standard error. That is, instead of specifying $\theta$, we encode a rounded-integer value of $\theta$'s z-score $\hat{z}$, where $\theta = \theta_0 + \hat{z} \ \text{SE}(\theta)$, with $\theta_0$ being the default, null-hypothesis value (here, 0) and SE($\theta$) being the standard error of $\theta$.
We assume a ``universal prior'' distribution for $\hat{z}$, in which half of the probability is devoted to the null value $\theta_0$ and the other half is concentrated near $\theta_0$ and decays slowly. In particular, for $\theta \neq \theta_0$, the coding cost is $2 + \log^+\left|\hat{z}\right| + 2 \log^+\log^+\left|\hat{z}\right|$ bits. This prior distribution makes sense in hard feature selection problems, where beneficial features are just marginally significant. Since $\hat{z}$ is quite small in such hard problems, the 2 bits will dominate the other two terms. In fact, we simply assume $\ell_\theta = 2$. \paragraph{Code $\ell_H$} In order to specify the subset of task models that include a given feature, we encode two pieces of information: first, how many tasks $k$ have the feature; second, which subset of $k$ tasks those are. One way to encode $k$ is to use $\log h$ bits to specify an integer in $\{1, 2, \ldots, h\}$; this implicitly corresponds to a uniform prior distribution on $k$. However, since we generally expect that smaller values of $k$ are more likely, we instead use coding lengths inspired by the ``idealized universal code for the integers'' of \cite{elias1975} and \cite{rissanen1983}: The cost to code $k$ is $\log^* k + c_h$, where $\log^* k = \log k + \log \log k + \log \log \log k + \ldots$ so long as the terms remain positive, and $c_h$ is the constant required to normalize the implied probability distribution over $\{1, 2, \ldots, h\}$. $c_\infty \approx 2.865$ \cite{rissanen1983}, but for $h \in \{5, \ldots, 1000\}$, $c_h \approx 1$. Given $k$, there are $h \choose k$ subsets of tasks to which we can refer, which we can do by coding the index with $\log {h \choose k}$ bits. Thus, in total, we have \begin{equation} \label{lh_code} \nonumber \ell_H = \log^* k + c_h + \log {h \choose k}.
\end{equation} \section{Comparison of the Coding Schemes}\label{comparison} The preceding discussion outlined a coding scheme for what we might call ``Partially Dependent MIC,'' or ``Partial MIC,'' in which models for different tasks can share some or all features. As suggested earlier, we can also consider a ``Fully Dependent MIC,'' or ``Full MIC,'' scheme in which each feature is shared across all or none of the task models. This amounts to a restricted Partial MIC in which $k=0$ or $k=h$ for each feature. The advantage comes in not needing to specify the subset of tasks used, saving $\ell_H$ bits for each feature in the model; however, Full MIC may need to code more coefficient values than Partial MIC. A third coding scheme is simply to specify each task model in isolation from the others. We call this the ``RIC'' approach, because each model pays $\log m$ bits for each feature to code its index; this is equivalent, up to the base of the logarithm, to the $F = 2 \log m$ penalty in equation \ref{penlikelihood}. (However, we include an additional cost of $\ell_\theta$ bits to code a coefficient.) If the sum of the two costs is sufficiently less than the bits saved by the increase of the data likelihood from adding the feature to the model, the feature will be added to the model. RIC assumes that the beneficial features are not significantly shared across tasks. We compare the relative coding costs under these three schemes for the case where we evaluate a hypothetical feature, $x_j$, that is beneficial for $k$ tasks and spurious for the remaining $h - k$ tasks. Suppose that Partial MIC and RIC both add the feature to only the $k$ beneficial tasks, while Full MIC adds it to all $h$ tasks. We assume that if the feature is added, the three methods save approximately the same number of bits in encoding residual errors, $\Delta S_E^k$.
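The quantities $\log^* k$, $c_h$, and $\ell_H$ from the previous section are straightforward to compute numerically. The sketch below (base-2 logarithms throughout; function names ours) normalizes $c_h$ exactly rather than using the $c_h \approx 1$ approximation:

```python
import math

def log2star(k):
    # log* k = log k + log log k + ... , so long as the terms remain positive
    total, x = 0.0, float(k)
    while True:
        x = math.log2(x)
        if x <= 0:
            break
        total += x
    return total

def c_h(h):
    # constant normalizing the implied distribution 2^{-(log* k + c_h)} over {1..h}
    return math.log2(sum(2.0 ** -log2star(k) for k in range(1, h + 1)))

def ell_H(k, h):
    # bits to specify which k of the h task models receive the feature
    return log2star(k) + c_h(h) + math.log2(math.comb(h, k))
```

For example, with $m = 2000$ and $h = 20$, the $k=1$ Partial MIC cost $\log m + \ell_H(1, 20) + 2$ comes to about $18.4$ bits, matching, up to rounding, the bracketed entry in Table~\ref{tab:compare}.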
This would happen if, say, the additional $h-k$ coefficients that Full MIC adds to its models save a negligible number of residual-coding bits (because those features are spurious) and if the estimate for $\Sigma$ is sufficiently diagonal that the negative log-likelihood calculated using \eqref{negloglike} for Partial MIC approximately equals the sum of the negative log-likelihoods that RIC calculates for each response separately. Table~\ref{tab:compare} shows that RIC and Partial MIC are the best and the second best coding schemes when $k=1$, and that their difference is on the order of $\log h$. Full MIC and Partial MIC are the best and the second best coding schemes when $k = h$, and their difference is on the order of $\log^*h$. Partial MIC is best for $k=\frac{h}{4}$. \begin{table*} [htbp] \setlength{\abovecaptionskip}{1pt} \setlength{\belowcaptionskip}{1pt} \caption{Costs in bits for each of the three schemes to code a model with $k=1$, $k=\frac{h}{4}$, and $k=h$ nonzero coefficients. $m \gg h \gg 1$, $\ell_I = \log m$, $\ell_\theta=2$, and for $h \in \{5, \ldots, 1000\}$, $c_h \approx 1$. 
Examples of these values for $m=2{,}000$ and $h=20$ appear in brackets; the smallest of the costs appears in bold.} \centering \label{tab:compare} \begin{center} \begin{tabular}{c|cc|cc|cc} \toprule \small $k$ & \multicolumn{2}{c|}{Partial MIC} & \multicolumn{2}{c|}{Full MIC} & \multicolumn{2}{c}{RIC}\\ \midrule 1 & $\log m + c_h + \log h + 2$ & [18.4] & $\log m + 2 h$ & [51.0] & $\log m+ 2$ & {\bf [13.0]}\\ $\frac{h}{4}$ & $\log m + \log^*\left (\frac{h}{4} \right) + c_h + \log {h \choose h/4} + \frac{h}{2}$ & {\bf [39.8]} & $\log m + 2 h$ & [51.0] & $\frac{h}{4} \log m + \frac{h}{2}$ & [64.8]\\ $h$ & $\log m + \log^* h + c_h + 2 h$ & [59.7] & $\log m + 2 h$ & {\bf [51.0]} & $h \log m + 2 h$ & [259.3]\\ \bottomrule \end{tabular} \end{center} \end{table*} \section{Stepwise Search Method} To search for a model that approximately minimizes TDL, we use a modified greedy stepwise-search algorithm. For each feature, we evaluate the change in TDL that would result from adding that feature to the model with the optimal number of associated tasks. We add the best feature and then recompute changes in TDL for the remaining features. This continues until there are no more features that would reduce TDL. The number of evaluations of features is thus $\mathcal{O}(m m^*)$, where $m^*$ is the number of features eventually added. To select the optimal number $k$ of task models in which to include a given feature, we again use a stepwise-style search. In this case, we evaluate the reduction in TDL that would result from adding the feature to each task, add the feature to the best task, recompute the reduction in TDL for the remaining tasks, and continue.\footnote{A stepwise search that re-evaluates the quality of each task at each iteration is necessary because, if we take the covariance matrix $\Sigma$ to be nondiagonal, the values of the residuals for one task may affect the likelihood of residuals for other tasks. 
If we take $\Sigma$ to be diagonal, as we do in Section \ref{Experimental Results}, then an $\mathcal{O}(h)$ search through the tasks without re-evaluation suffices. } However, unlike a normal stepwise search, we continue this process until we have added the feature to all $h$ task models. The reason for this is two-fold. First, because we want to borrow strength across tasks, we need to avoid overlooking cases where the correlation of a feature with any single task is insufficiently strong to warrant addition, yet the correlations with all of the tasks jointly are. Second, the $\log \binom{h}{k}$ term in Partial MIC's coding cost does not increase monotonically with $k$, so even if adding the feature to an intermediate number of tasks does not look promising, adding it to all of them might still be worthwhile. Thus, for a given feature, we evaluate the description length of the model $\mathcal{O}(h^2)$ times. Since we need to identify the optimal $k$ for each feature evaluation, the entire algorithm requires $\mathcal{O}(h^2 m m^*)$ evaluations of TDL. However, with a few optimizations, this cost can be reduced with no practical impact on performance: \begin{itemize} \item We can quickly filter out most of the irrelevant features at each iteration by evaluating, for each feature, the decrease in negative log-likelihood that would result from simply adding it with all of its tasks, without doing any subset search. Then we keep only the top $t$ features according to this criterion, on which we proceed to do the full $\mathcal{O}(h^2)$ search over subsets. We use $t = 75$, but we find that as long as $t$ is bigger than, say, 10 or 20, it makes essentially no difference to the quality of results. This reduces the number of model evaluations to $\mathcal{O}(m m^* + m^* t h^2)$.
\item We can often short-circuit the $\mathcal{O}(h^2)$ search over task subsets by noting that a model with more nonzero coefficients always has lower negative log-likelihood than one with fewer nonzero coefficients. This allows us to get a lower bound on the description length for the current feature for each number $k \in \{1, \ldots, h\}$ of nonzero tasks that we might choose as \begin{equation}\label{boundEqn} \nonumber \begin{split} &(\text{Model cost for other features already in model}) \\ &+ (\text{negative log-likelihood of $Y$ if we included all $h$ tasks for this feature})\\ & + (\text{the increase in model cost if we included just $k$ of the tasks}). \end{split} \end{equation} We then need only check those values of $k$ for which \eqref{boundEqn} is smaller than the best description length for any candidate feature's best task subset seen so far. In practice, with $h=20$, we find that evaluating $k$ up to, say, 3 or 6 is usually enough; i.e., we typically only need to add $3$ to $6$ tasks in a stepwise manner before stopping, with a cost of only $3h$ to $6h$.\footnote{If $\Sigma$ is diagonal and we do not need to re-evaluate residual likelihoods at each iteration, the cost is only $3$ to $6$ evaluations of description length.} \end{itemize} Although we did not attempt to do so, it may be possible to formulate MIC using a \emph{regularization path}, or \emph{homotopy}, algorithm of the sort that have become popular for performing $\ell_1$ regularization without the need for cross-validation (e.g., \cite{friedman2008rpg}). If possible, this would be significantly faster than stepwise search. \section{Experimental Results}\label{Experimental Results} This section evaluates the MIC approach on three synthetic datasets, each of which is designed to match the assumptions of, respectively, the Partial MIC, Full MIC, and RIC coding schemes described in Section \ref{comparison}. 
We also test on two biological data sets, a Yeast Growth dataset \cite{litvin2009mig}, which consists of real-valued growth measurements of multiple strains of yeast under different drug conditions, and a Breast Cancer dataset \cite{vantveer}, which involves predicting prognosis, ER status, and three other descriptive variables from gene-expression values for different cell lines. We compare the three coding schemes of Section \ref{comparison} against two other multi-task algorithms: ``AndoZhang'' \cite{ando2005flp} and ``BBLasso'' \cite{obozinski}, as implemented in the Berkeley Transfer Learning Toolkit \cite{TLToolkit}. We did not compare MIC with other methods from the toolkit, as they all require the data to have additional structure, such as {\it meta-features} \cite{koller07,rainangkoller}, or expect the features to be frequency counts, as for the Hierarchical Dirichlet Processes algorithm. Also, none of the omitted methods does feature selection. For AndoZhang we use 5-fold CV to find the best $h$ parameter (the dimension of the subspace $\Theta$, not to be confused with $h$ as we use it in this thesis). We tried values in the range $[1, 100]$, as is done in \cite{ando2005flp}. For MIC, one can use either a full or a diagonal covariance matrix estimate. We found that substantial overfitting can occur when using a full covariance matrix, and therefore used a diagonal covariance matrix in all experiments presented below. MIC as presented in Section \ref{regression-code} is a regression algorithm, but AndoZhang and BBLasso are both designed for classification. Therefore, we binarized each of our responses to 0/1 values before applying MIC with a regular regression likelihood term. Once the features were selected, however, we used logistic regression applied to just those features to obtain MIC's actual model coefficients.
As noted in Section \ref{regression-code}, MIC's negative log-likelihood term can be computed with an arbitrary $h \times h$ covariance matrix $\Sigma$ among the $h$ tasks. On the data sets in this thesis, we found that estimating all $h^2$ entries of $\Sigma$ could lead to overfitting, so we instead took $\Sigma$ to be diagonal. Informal experiments showed that estimating $\Sigma$ as a convex combination of the full and diagonal estimates could also work well. \subsection{Evaluation on Synthetic Datasets}\label{syn-data-settings} We created synthetic data according to three separate scenarios---called Partial, Full, and Independent. For each scenario, we generated a matrix of continuous responses as \begin{equation} \nonumber {\bf Y}_{n \times h} = {\bf X}_{n \times m} {\bf \beta}_{m \times h} + \epsilon_{n \times h} \end{equation} where $m=2{,}000$ features, $h=20$ responses, and $n=100$ observations. Then, to produce binary responses, we set to 1 those response values that were greater than or equal to the average value for their column and set to 0 the rest; this produced a roughly 50-50 split between 1's and 0's. Each nonzero entry of $\beta$ was i.i.d. $\mathcal{N}(0,1)$, and each entry of $\epsilon$ was i.i.d. $\mathcal{N}(0, 0.1)$, with no covariance among the $\epsilon$ entries for different tasks. Each task had $m^*=4$ beneficial features, i.e., each column of $\beta$ had 4 nonzero entries. The scenarios differed according to the distribution of the beneficial features in $\beta$. \begin{itemize} \item In the Partial scenario, the first feature was shared across all 20 responses, the second was shared across the first 15 responses, the third across the first 10 responses, and the fourth across the first 5 responses. Because each response had four features, those responses ($6-20$) that did not have all of the first four features had their remaining features randomly distributed among the other features (5, 6, \ldots, 2000).
\item In the Full scenario, each response shared exactly features $1-4$, with none of features $5-2000$ being part of the model. \item In the Independent scenario, each response had four random features among $\{1, \ldots, 2000\}$. \end{itemize} For the synthetic data, we report precision and recall to measure the quality of feature selection. This can be done both at a coefficient level (Was each nonzero coefficient in $\beta$ correctly identified as nonzero, and vice versa?) and at an overall feature level (For features with \textit{any} nonzero coefficients, did we correctly identify them as having nonzero coefficients for any of the tasks, and vice versa?). Note that Full MIC and BBLasso always make entire rows of their estimated $\beta$ matrices nonzero and so tend to have larger numbers of nonzero coefficients. Table \ref{partialTablehIs20} shows the performance of each of the methods on five instances of the Partial, Full, and Independent synthetic data sets. On the Partial data set, {\it Partial MIC} performed the best, closely followed by {\it RIC}; on the Full synthetic data, {\it Full MIC} and {\it Partial MIC} performed equally well; and on the Independent synthetic data, the {\it RIC} algorithm performed the best, closely followed by {\it Partial MIC}. It is also worth noting that the best performing methods tended to have the best precision and recall on coefficient selection. The performance trends of the three methods are consistent with the theory of Section \ref{comparison}. The table also shows that in only one of the three cases does one of the other methods (AndoZhang or BBLasso) compete with the MIC methods: BBLasso on the Full synthetic data shows performance comparable to the MIC methods, but even there it has very low feature precision, since it added many more spurious features than the MIC methods.
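The Partial scenario above can be generated in a few lines. The sketch below follows the description in Section~\ref{syn-data-settings}; we read $\mathcal{N}(0, 0.1)$ as variance $0.1$, and the helper name is our own:

```python
import numpy as np

def make_partial_scenario(n=100, m=2000, h=20, seed=0):
    """Generate binarized data for the Partial scenario."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, m))
    beta = np.zeros((m, h))
    # features 0..3 are shared by the first 20, 15, 10 and 5 tasks respectively
    for j, shared in enumerate([20, 15, 10, 5]):
        beta[j, :shared] = rng.normal(size=shared)
    # pad each task to exactly 4 beneficial features with random private ones
    for t in range(h):
        need = 4 - np.count_nonzero(beta[:, t])
        if need > 0:
            private = rng.choice(np.arange(4, m), size=need, replace=False)
            beta[private, t] = rng.normal(size=need)
    eps = rng.normal(scale=np.sqrt(0.1), size=(n, h))  # N(0, 0.1): variance 0.1 assumed
    Y = X @ beta + eps
    # binarize: 1 where the response is at least its column average, else 0
    return X, (Y >= Y.mean(axis=0)).astype(int), beta
```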
\begin{table*}[t] \caption{Test-set accuracy, precision, and recall of MIC and other methods on 5 instances of various synthetic data sets generated as described in Section \ref{syn-data-settings}. Standard errors are reported over each task; that is, with 5 data sets and 20 tasks per data set, the standard errors represent the sample standard deviation of 100 values divided by $\sqrt{100}$. {\it Note:} AndoZhang's NA values are due to the fact that it does not explicitly select features.} \begin{tabular}{cccc} \hline Method & Test Error & Coefficient & Feature\\ & $\mu\pm\sigma$ & Precision/Recall& Precision/Recall\\ \hline \multicolumn{4}{c}{Partial Synthetic Dataset}\\ \hline True Model & $0.07 \pm 0.00$ & $1.00 \pm 0.00 $/$1.00 \pm 0.00 $& $1.00 \pm 0.00 $/$1.00 \pm 0.00 $\\ Partial MIC &{\bf 0.10 $\pm$ 0.00}& $0.84 \pm 0.02 $/$0.77 \pm 0.02 $& $0.99 \pm 0.01 $/$0.54 \pm 0.05 $\\ Full MIC & $0.17 \pm 0.01$ & $0.26 \pm 0.01 $/$0.71 \pm 0.03 $& $0.97 \pm 0.02 $/$0.32 \pm 0.03 $\\ Indep. & $0.12 \pm 0.01$ & $0.84 \pm 0.02 $/$0.56 \pm 0.02 $& $0.72 \pm 0.05 $/$0.62 \pm 0.04 $\\ AndoZhang & $0.50 \pm 0.02$ & NA & NA\\ BBLasso & $0.19 \pm 0.01$ & $0.04 \pm 0.00 $/$0.81 \pm 0.02 $& $0.20 \pm 0.03 $/$0.54 \pm 0.01 $\\ \hline \multicolumn{4}{c}{Full Synthetic Dataset}\\ \hline True Model & $0.07 \pm 0.00$& $1.00 \pm 0.00 $/$1.00 \pm 0.00 $& $1.00 \pm 0.00 $/$1.00 \pm 0.00 $\\ Partial MIC &{\bf 0.08 $\pm$ 0.00}& $0.98 \pm 0.01 $/$1.00 \pm 0.00 $& $0.80 \pm 0.00 $/$1.00 \pm 0.00 $\\ Full MIC &{\bf 0.08 $\pm$ 0.00}& $0.80 \pm 0.00 $/$1.00 \pm 0.00 $& $0.80 \pm 0.00 $/$1.00 \pm 0.00 $\\ Indep. 
& $0.11 \pm 0.01$& $0.86 \pm 0.02 $/$0.63 \pm 0.02 $& $0.36 \pm 0.06 $/$1.00 \pm 0.00 $\\ AndoZhang & $0.45 \pm 0.02$ & NA & NA\\ BBLasso & $0.09 \pm 0.00$& $0.33 \pm 0.03 $/$1.00 \pm 0.00 $& $0.33 \pm 0.17 $/$1.00 \pm 0.00 $\\ \hline \multicolumn{4}{c}{Independent Synthetic Dataset}\\ \hline True Model & $0.07 \pm 0.00$& $1.00 \pm 0.00 $/$1.00 \pm 0.00 $& $1.00 \pm 0.00 $/$1.00 \pm 0.00 $\\ Partial MIC & $0.17 \pm 0.01$& $0.95 \pm 0.01 $/$0.44 \pm 0.02 $& $1.00 \pm 0.00 $/$0.44 \pm 0.02 $\\ Full MIC & $0.36 \pm 0.01$& $0.06 \pm 0.01 $/$0.15 \pm 0.02 $& $1.00 \pm 0.00 $/$0.14 \pm 0.02 $\\ Indep. & {\bf 0.13 $\pm$ 0.01}& $0.84 \pm 0.02 $/$0.58 \pm 0.02 $& $0.83 \pm 0.02 $/$0.58 \pm 0.03 $\\ AndoZhang & $0.49 \pm 0.00$ & NA & NA\\ BBLasso & $0.35 \pm 0.01$& $0.02 \pm 0.00 $/$0.43 \pm 0.02 $& $0.30 \pm 0.05 $/$0.42 \pm 0.06 $\\ \hline \end{tabular} \label{partialTablehIs20} \end{table*} \subsection{Evaluation on Real Datasets} This section compares the performance of MIC methods with AndoZhang and BBLasso on a Yeast dataset and Breast Cancer dataset. These are typical of biological datasets in that only a handful of features are predictive from thousands of potential features. This is precisely the case in which MIC outperforms other methods. MIC not only gives better accuracy but does so by choosing fewer features than BBLasso's $\ell_1 / \ell_2$-based approach. \subsubsection{Yeast Dataset} Our Yeast dataset comes from \cite{litvin2009mig}. It consists of real-valued growth measurements of 104 strains of yeast ($n=104$ observations) under 313 drug conditions. In order to make computations faster, we hierarchically clustered these 313 conditions into 20 groups using correlation as the similarity measure. Taking the average of the values in each cluster produced $h=20$ real-valued responses (tasks), which we then binarized into two categories: values at least as big as the average for that response (set to 1) and values below the average (set to 0). 
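The clustering-and-binarization preprocessing just described can be sketched as follows. The thesis does not specify the linkage method, so average linkage over the correlation distance is an assumption here, as is the helper name:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_conditions(G, n_clusters=20):
    """Average growth per condition-cluster, then binarize against the column mean."""
    # columns of G are conditions; 1 - correlation serves as the distance
    d = pdist(G.T, metric='correlation')
    labels = fcluster(linkage(d, method='average'), n_clusters, criterion='maxclust')
    # one response per cluster: the mean over that cluster's conditions
    cols = [G[:, labels == k].mean(axis=1) for k in np.unique(labels)]
    Y = np.column_stack(cols)
    return (Y >= Y.mean(axis=0)).astype(int)
```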
The features consisted of 526 markers (binary values indicating major or minor allele) and 6,189 transcript levels in rich media for a total of $m = 6{,}715$ features. Table \ref{growthTable20} shows test errors from 5-fold CV on this data set. As can be seen from the table, Partial MIC performs better than BBLasso. BBLasso overfits substantially, as is shown by its large number of nonzero coefficients. We also note that RIC and Full MIC perform slightly worse than Partial MIC, underscoring the point that the more general MIC coding scheme is preferable to Full MIC or RIC; the latter methods make strong underlying assumptions that cannot always correctly capture sharing across tasks. Like Partial MIC, AndoZhang did well on this data set; however, because the algorithm scales poorly with large numbers of tasks, the computation took 39 days. \begin{table*}[t] \caption{Accuracy and number of coefficients and features selected on five folds of CV for the Yeast and Breast Cancer data sets. For the Yeast data, $h=20$, $m=6{,}715$, $n=104$. For the Breast Cancer data, $h=5$, $m=5{,}000$, $n=100$. Standard errors are over the five CV folds; i.e., they represent (sample standard deviation) / $\sqrt{5}$. {\it Note:} {\bf These are true cross validation accuracies and no parameters have been tuned on them.} AndoZhang's NA values are due to the fact that it does not explicitly select features. } \centering \begin{tabular}{cccccc} \toprule Method & Partial MIC & Full MIC & RIC & AndoZhang & BBLasso\\ \midrule \multicolumn{6}{c}{Yeast Dataset}\\ \midrule Test error &{\bf 0.38 $\pm$ 0.04} & $0.39 \pm 0.04$ & $0.41 \pm 0.05$ & $0.39 \pm 0.03$ & $0.43 \pm 0.03$ \\ Num. coeff. sel. & $22 \pm 4$ & $64 \pm 4$ & $9 \pm 1$ & NA & $1268 \pm 279$ \\ Num. feat. sel.
& $4 \pm 0$ & $3 \pm 0$ & $9 \pm 1$ & NA & $63 \pm 14$ \\ \midrule \multicolumn{6}{c}{Breast Cancer Dataset}\\ \midrule Test error & {\bf 0.33 $\pm$ 0.08} & $0.37 \pm 0.08$ & $0.36 \pm 0.08$ & $0.44 \pm 0.03$ &{\bf 0.33 $\pm$ 0.08} \\ Num. coeff. sel. & $3 \pm 0$ & $11 \pm 1$ & $2 \pm 0$ & NA & $61 \pm 19$ \\ Num. feat. sel. & $2 \pm 0$ & $2 \pm 0$ & $2 \pm 0$ & NA & $12 \pm 4$ \\ \bottomrule \end{tabular} \label{growthTable20} \end{table*} \subsubsection{Breast Cancer Dataset} Our second data set pertains to breast cancer and contains data from five of the seven data sets used in \cite{vantveer}. It contains $1{,}171$ observations of $22{,}268$ RMA-normalized gene-expression values. We considered five associated responses (tasks); two were binary---prognosis (``good'' or ``poor'') and ER status (``positive'' or ``negative'')---and three were not---age (in years), tumor size (in mm), and grade (1, 2, or 3). We binarized the three non-binary responses into two categories: response values at least as high as the average, and values below the average. Finally, we scaled the dataset down to $n=100$ and $m=5{,}000$ (the 5{,}000 features with the highest variance) to save computational resources. Table \ref{growthTable20} shows test errors from 5-fold CV on this data set. As is clear from the table, Partial MIC and BBLasso are the best methods here. But as was the case with the other datasets, BBLasso puts in more features, which is undesirable in domains (like biology and medicine) where simpler and hence more interpretable models are sought. \chapter{Model 2: TPC (Three Part Coding)}\label{TPCChapter} In this chapter we describe our second model, TPC. As mentioned earlier, TPC is quite similar to MIC, and extends the concept of ``joint'' feature selection to the case when the feature matrix has structure, i.e., the features are compartmentalized into feature classes.
The concept of feature classes is very similar to the concept of {\it meta-features}, which has been studied extensively in the literature \cite{tishby08,koller07}. In fact, feature classes are a special case of {\it meta-features} in which each feature has only one meta attribute, such as its gene class or word topic in our setting. More generically, starting from any set of features, one can generate new classes of features by using projections such as principal component analysis (PCA) or non-negative matrix factorization (NNMF), transformations such as log or square root, and interactions (products of features). Further ``synthetic'' feature classes can be created by finding clusters (e.g., using k-means) in the feature space, as shown later in the experiments section. We first describe the notation, then present the TPC scheme and compare it with standard RIC coding \cite{FosterGeorge}. We then present an algorithm for doing ``joint'' feature selection using TPC. \section{Notation Used} The symbols used throughout the rest of this section are defined in Table \ref{tab:SymbolsTPC}: \begin{table}[htbp] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \caption{Symbols used and their definitions.} \centering \begin{small} \label{tab:SymbolsTPC} \begin{center} \begin{tabular}{cl} \hline Symbol & Meaning\\ \hline $n$ & Number of observations\\ $m$ & Number of candidate features\\ $m^*$ & Number of beneficial features in the candidate \\ & feature set\\ $q$ & Number of features currently included in the model\\ $Q$ & Number of feature classes currently added in the\\ & model\\ $K$ & Total number of feature classes\\ $m_k$ & Total number of candidate features in the $k^{th}$ \\ & feature class\\ \hline \end{tabular} \end{center} \end{small} \end{table} All the above values are given by the data, except $m^*$, which is unknown, and $q$ and $Q$, which are determined by the search/optimization procedure.
\section{Coding Schemes used in TPC}\label{TPCcodingscheme} {\bf Coding Scheme for $\Delta S_E$ :} $\Delta S_E$ represents the increase in likelihood of the data obtained by adding the new feature to the model. When doing linear regression, we assume a Gaussian model and hence have: \begin {equation} P(e_i)= \frac{1}{\sqrt{2\pi\sigma^2}} \exp{\left(- \frac{e_i^2}{2\sigma^2}\right) } \end {equation} where $e_i$ is the $i^{th}$ row of the ${\bf E}$ matrix, i.e., of $({\bf Y} - {\bf X}\beta)$, and ${\sigma}^2$ is the variance of the Gaussian noise. Now we have: \begin{equation}\label{likelihoodTPC} S_E = -\log\left(\prod_{i=1}^{n}P(e_i)\right) \end{equation} Note that Equation~\ref{likelihoodTPC} is quite similar to the $S_E$ equation for MIC (Equation~\ref{negloglike}); the only difference is that here we have a single response (task). Intuitively, $\Delta S_E$ corresponds to the increase in benefit from adding the new feature to the model. It is always non-negative; even a spurious feature cannot decrease the training data likelihood. {\bf Coding Scheme for $\Delta S_M$ :} To code $\Delta S_M$, the model cost of adding a new feature, we use a three-part coding scheme. Let $l_C$ be the number of bits needed to code the index of the ``feature class'' of the evaluated feature, let $l_I$ be the number of bits used to code the index of the evaluated feature in that particular feature class, and let $l_\theta$ be the number of bits required to code the coefficient of the evaluated feature. Thus: \begin {equation}\label{deltasm} \Delta S_M = l_C + l_I + l_\theta \end {equation} This coding, as specified below, is the source of the power of our approach. Intuitively, if a feature class has many good (beneficial) features then we can share the cost of coding $l_C$ across those features and hence save many bits, as each feature costs roughly $\log(m_k)$ bits to code rather than the $\log(m)$ required by the standard RIC penalty.
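The likelihood term of Equation~\ref{likelihoodTPC} is straightforward to compute; the following Python sketch (function name ours, not part of the thesis code) evaluates $S_E$ for a given vector of fitted values under the Gaussian model, in nats (divide by $\ln 2$ for bits):

```python
import math

def s_e(y, y_hat, sigma2):
    """S_E: negative log-likelihood (in nats) of the residuals
    e_i = y_i - y_hat_i under i.i.d. Gaussian noise of variance sigma2."""
    n = len(y)
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    return 0.5 * n * math.log(2 * math.pi * sigma2) + sse / (2 * sigma2)
```

Refitting the model after adding a feature can only lower the residual sum of squares, so $\Delta S_E \ge 0$, matching the remark above.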
We will shortly give an exact mathematical analysis showing why this improvement occurs, but first we explain how to code each of the three terms on the right-hand side of Equation~\ref{deltasm}. \paragraph{\textbf{Code $l_C$:}} $l_C$ represents the number of bits required to code the index of the feature class to which the evaluated feature belongs. When doing feature selection using TPC, two cases can arise: \textbf{Case 1:} The feature class of the feature being evaluated is not yet included in the model. In this case, we code $l_C$ using $\log(K)$ bits, where $K$ is the total number of feature classes in the data. From now on, we will denote $l_C$ in this case by $l_C^1$. \textbf{Case 2:} The feature class of the feature being evaluated is already included in the model. In this case, we can save some bits by coding $l_C$ using $\log(Q)$ bits, where $Q$ is the number of feature classes included in the model so far. (Think of keeping an indexed list of length $Q$ of the feature classes that have been selected.) This is where TPC wins over other methods, as we do not need to waste bits on coding the feature class if it is already in the model. We denote $l_C$ in this case by $l_C^2$. We can summarize the coding scheme for $l_C$ as follows: \begin {equation} l_C = \left\{ \begin{array}{ll}\log(K) & \mbox{if the feature class is not in the}\\ & \mbox{model}\\ \log(Q) & \mbox{if the feature class is already in}\\ & \mbox{the model} \end{array} \right. \end {equation} \paragraph{\textbf{Code $l_I$:}} $l_I$ represents the number of bits required to code the index of the feature within its feature class. We have a total of $m_k$ features in the $k^{th}$ feature class. We use an RIC-style coding for $l_I$, i.e., we use $\log(m_k)$ bits to code the index of the feature. (This is equivalent to the widely used Bonferroni penalty.)
Since we also code the coefficient of the feature via $l_\theta$ (unlike standard RIC), we do not overfit even when the usual RIC assumption $n \ll m_k$ does not hold. \begin {equation} l_I = \log(m_k) \end {equation} \paragraph{\textbf{Code $l_\theta$:}} This term corresponds to the number of bits required to code the value of the coefficient of each feature. We could use either AIC or the more conservative BIC criterion to code the coefficients. We use 2 bits for each coefficient, which is quite similar to the AIC criterion. \begin {equation} l_\theta = 2 \end {equation} The detailed criterion for making this choice is explained in Section~\ref{AICcode}. \section{Analysis of TPC Scheme} We now compare the TPC coding scheme with a standard coding scheme (abbreviated SCS below) in which we use an RIC penalty for feature indexes and an AIC-like penalty (2 bits) for the coefficients of the features, as this is the standard feature selection setting that comes closest to TPC in theory and in performance. The total cost in bits used by SCS to code the $q$ selected features is: \begin{eqnarray} \label{TPC1} \mathit{TotalCost}_{SCS} &=& \overbrace{[q\log(m)]}^{RIC \hspace {0.01 in} Penalty} + \overbrace{[2q]}^{Coefficients} \nonumber \\ &=& q\log(K) + q\log\left(\frac{m}{K}\right) + 2q \end{eqnarray} The total cost used by TPC to code the same features is: \begin{eqnarray} \label{TPC2} \mathit{TotalCost}_{TPC} &=& \overbrace{Q\log(K)}^{l_C^1} + \overbrace{(q - Q)\log(Q)}^{l_C^2}\nonumber\\ & & + \overbrace{q\log(m_k)}^{l_I} + \overbrace{2q}^{l_\theta} \end{eqnarray} The savings in coding come from the $(q-Q)$ features that belong to classes that were already in the model.
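The three-part model cost $\Delta S_M$ of Equation~\ref{deltasm} can be sketched directly (a hypothetical helper, using $\log_2$ so the result is in bits):

```python
import math

def delta_s_m(K, Q, m_k, class_in_model):
    """Three-part TPC model cost (in bits) of adding one feature:
    l_C:     log K if its feature class is new; log Q if the class is
             already among the Q classes selected so far;
    l_I:     log m_k to index the feature within its class of m_k features;
    l_theta: 2 bits for the coefficient (AIC-like)."""
    l_c = math.log2(Q) if class_in_model else math.log2(K)
    l_i = math.log2(m_k)
    l_theta = 2.0
    return l_c + l_i + l_theta
```

For instance, with $K=16$ classes of which $Q=2$ are in the model, a feature from an already-selected class of size $m_k=8$ costs $1+3+2=6$ bits, versus $4+3+2=9$ bits for a feature whose class is new.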
\paragraph {\textbf{Case 1: All classes are of uniform size:}} In this case, $\log(m_k)$ in Equation \ref{TPC2} equals $\log(\frac{m}{K})$, as each feature class has the same size $\frac{m}{K}$, where $m$ is the total number of candidate features and $K$ is the total number of feature classes. So, subtracting Eq.~(\ref{TPC2}) from Eq.~(\ref{TPC1}) we get: \begin{equation}\label{netcost} \Delta \mathit{TotalCost} = (q-Q)\log\left(\frac{K}{Q}\right) \end{equation} Equation~\ref{netcost} shows that TPC gives a substantial improvement over SCS when either or both of the conditions $q \gg Q$ and $K \gg Q$ hold. In other words, TPC wins when more features $q$ than feature classes $Q$ are included in the model (i.e., there are multiple features per class), or when only a small fraction $Q/K$ of the feature classes contain selected features. In short, the real performance gain of TPC occurs when all or most of the beneficial selected features lie in a small number of feature classes. The best case occurs when all the beneficial selected features lie in one class, and the worst case occurs when the beneficial features are uniformly distributed across all the feature classes. In real datasets, the scenarios that we encounter lie somewhere between the best and the worst case, so we can expect a substantial performance gain from using TPC. \paragraph {\textbf{Case 2: Classes are of nonuniform size:}} In this case, much of the theory remains the same as in Case 1, except that in general $m_k \neq \frac{m}{K}$. Let $m_{avg} = \frac{m}{K}$ denote the average size of a feature class.
Then equation \ref{netcost} becomes: \begin{equation}\label{case2} \Delta TotalCost = (q-Q)log(\frac{K} {Q}) + \overbrace{q log(\frac{m_{avg}}{m_k}}^{Term 2}) \end {equation} Now, it can easily be inferred that $m_{avg}$ $>$ $m_k$ occurs in the case when the beneficial features are in feature classes whose size is less than the average size of a feature class. Intuitively, $m_{avg}$ = $m_k$ occurs if the size of all the feature classes is same (which was Case 1), so the performance of TPC will be improved in this case compared to Case 1 if the beneficial features lie in small classes. The improvement in performance over Case 1 will be quite significant when the beneficial features lie in a small class i.e. C is small or there are very big classes with no beneficial features in them, in either case the contribution of Term 2 in Equation \ref{case2} will increase. \section{ Algorithms for Feature Selection using TPC} Algorithm 1 give a standard stepwise feature selection algorithm that uses TPC coding scheme. The algorithm makes multiple passes through the data and at each iteration adds the best feature in the model (i.e., the feature that has the maximum $\Delta S$). It stops when no feature provides better $\Delta S$ than in the previous iteration. 
\begin{algorithm}[htdp] \small \caption{Forward Stepwise regression using TPC Scheme} \begin{algorithmic}[1] \STATE $flag = True$; // flag for indicating when to stop \STATE $model$ = \{\}; // initially no features in model \STATE $prev\_max = 0$; // keeps track of the value of $\Delta S$ in the previous iteration \WHILE {\{flag == True\}} \FOR {\{i = 1 to m\}} \STATE $Compute$ $\Delta S_E^i$; // Increase in likelihood by adding feature `i' to the model \STATE $Compute$ $\Delta S_M^i$; // Number of extra bits required to code the $i^{th}$ feature \STATE $\Delta S^i := \Delta S_E^i - \Delta S_M^i$; \ENDFOR \STATE $i_{max} := argmax_i\{\Delta S^i\}$; //The best feature in the current iteration \STATE $current\_max := max_i\{\Delta S^i\}$; //The best penalized likelihood change in the current iteration \IF {\{$current\_max > prev\_max$\}} \STATE $model := model \cup \{i_{max}\}$; // Add the current feature to model \STATE $prev\_max := current\_max$; \ELSE \STATE $flag := False$; \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} It can be the case that it is not worth adding one feature from a particular class, but it is still beneficial to add multiple features from that class. In that case, it is advantageous to use a mixed forward-backward stepwise regression strategy in which one continues the search past the stopping criterion given above and then sequentially removes the ``worst'' feature from the now overfit model. This slight increase in search cost can yield better solutions. Another algorithm which can be used is streamwise feature selection, which is greedier than the above stepwise regression methods and works well when there are millions of candidate features. In streamwise feature selection, each feature is considered only once for addition to the model; it is added if it gives a significant reduction in penalized likelihood, and otherwise discarded and not examined again.
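Algorithm 1 can be sketched in a few lines of Python; here `delta_s` is a hypothetical callback returning the penalized likelihood change $\Delta S^i = \Delta S_E^i - \Delta S_M^i$ of adding feature $i$ to the current model (features already in the model are excluded from the scan):

```python
def stepwise_tpc(m, delta_s):
    """Forward stepwise selection with the TPC penalty (sketch of Algorithm 1).
    m: number of candidate features.
    delta_s(i, model): penalized likelihood change of adding feature i."""
    model = set()
    prev_max = 0.0
    while True:
        candidates = [i for i in range(m) if i not in model]
        if not candidates:
            break
        scores = {i: delta_s(i, model) for i in candidates}
        i_max = max(scores, key=scores.get)
        current_max = scores[i_max]
        if current_max > prev_max:  # keep adding while the best gain improves
            model.add(i_max)
            prev_max = current_max
        else:
            break
    return model
```

The mixed forward-backward and streamwise variants described above differ only in this outer loop; the scoring of a single feature is unchanged.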
\section{Experimental Results} In this section we demonstrate the results of the TPC scheme on real datasets. For our experiments we use the stepwise TPC coding scheme and compare against standard stepwise regression with an RIC penalty, Lasso~\cite{Lasso}, Elastic Nets~\cite{elastic_net}, and Group Lasso/Multiple Kernel Learning~\cite{grouplasso}. For Group Lasso/Multiple Kernel Learning, we used a set of 13 candidate kernels, consisting of 10 Gaussian kernels (with bandwidths $\sigma=0.5 - 20$) and 3 polynomial kernels (with degrees 1-3) for each feature class, as is done by \cite{jmlr08}. In the end, the kernels which have nonzero weights are the ones that correspond to the selected feature classes. Since GL/MKL minimizes a mixed $\ell_1/\ell_2$ norm, it zeros out some feature classes. (Recall that GL/MKL gives no sparsity at the level of features within a feature class.) The Group Lasso \cite{grouplasso} and Multiple Kernel Learning are equivalent, as has been shown in \cite{bach08}; we therefore used the {\it SimpleMKL} toolbox \cite{jmlr08} implementation for our experiments. For Lasso and Elastic Nets we used their standard LARS (Least Angle Regression) implementations \cite{lars}. When running Lasso and Elastic Nets, we pre-screened the datasets and kept only the best $\sim$ 1,000 features (based on their p-values), as otherwise LARS is prohibitively slow. (The authors of the code we used do similar screening, for similar reasons.) For all our experiments on Elastic Nets \cite{elastic_net} we set the value of $\lambda_2$ (the weight on the $\ell_2$ penalty term) to $10^{-6}$. We demonstrate the results on real datasets pertaining to Word Sense Disambiguation (WSD) \cite{WSD1} and gene expression data \cite{GSEA}. As is shown below, the results were quite encouraging.
\subsection{Evaluation on Real Datasets (WSD and GSEA)} In order to benchmark the real-world performance of our TPC coding scheme, we chose two datasets pertaining to two diverse applications of feature selection methods, namely Natural Language Processing (NLP) and gene expression analysis. More information regarding the data and the experimental results is given below. \paragraph{\textbf{Word Sense Disambiguation (WSD) Dataset:}}\label{palmerdata} A WSD dataset consisting of 172 ambiguous verbs and a rich set of contextual features \cite{WSD1} was chosen for evaluation. It consists of hundreds of observations of noun-noun collocations, noun-adjective-preposition-verb relations (syntactic relations in a sentence), and noun-noun combinations (in a sentence or document). The size of the WSD data and other relevant information are summarized in Table \ref{tab:wsdData}. We show results on 10 verbs picked randomly from the entire set of 172 verbs. \begin{table}[htbp] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \caption{Word Sense Disambiguation Dataset.} \label{tab:wsdData} \begin{center} \begin{small} \begin{tabular}{lccc} \hline Dataset & \# Observations & \# Features & \# Classes\\ \hline acquire & 101 & 1081 & 43\\ care & 131 & 621 & 40\\ climb & 84 & 676 & 41\\ fire & 132 & 1217 & 43\\ add-1 & 320 & 2583 & 42\\ expand & 222 & 2144 & 42\\ allow & 344 & 2657 & 41\\ drive & 191 & 1584 & 43\\ identify & 102 & 964 & 43\\ promise & 111 & 929 & 41\\ \hline \end{tabular} \end{small} \end{center} \end{table} The following list of feature classes shows the typical features used; in each case, the part of a feature name before the underscore is its feature class.
Classes included pos (part of speech of the verb), morph (verb morphology), sub (the subject of the verb), subjsyn (the WordNet synonym set labels of the subject), dobj (the direct object of the verb), dobjsyn (dobj's WordNet synsets), word-1, word-2, word+1, word+2 (the words 1 or 2 before or after the verb), pos-1, pos-2, pos-3, pos-4 (the parts of speech of those words), bigrams of the words, and tp (the topics of the document). \begin{table*}[htbp] \caption{10-fold CV accuracies of various methods on the WSD dataset (10 verbs). {\it Note:} (\#f) represents the average number of features selected over the 10 folds. {\bf These are true cross validation accuracies and no parameters have been tuned on them.} All the accuracies are ($L_1$) classification accuracies.} \label{tab:wsdResults} \small \centering \begin{tabular}{lcc}\hline Method & acquire & care\\ & $\mu\pm\sigma$ ({\it \#f})& $\mu\pm\sigma$ ({\it \#f})\\ \hline Stepwise TPC & 95.1$\pm$0.7 (1.8) & {\bf97.7$\pm$0.5 (2.0)}\\ Stepwise RIC & 93.1$\pm$0.3 (5.1) &93.1$\pm$1.3 (12.3) \\ Elastic Nets & 90.5$\pm$0.2 (6.1) &96.1$\pm$0.2 (24.1) \\ Lasso & 90.0$\pm$0.4 (15.2) &85.4$\pm$0.9 (35.8)\\ Group Lasso/MKL & {\bf 96.0$\pm$0.1 (50.3)} & 96.0$\pm$0.3 (21.7)\\ \hline Method & expand & allow\\ & $\mu\pm\sigma$ ({\it \#f})& $\mu\pm\sigma$ ({\it \#f})\\ \hline Stepwise TPC & {\bf 99.5$\pm$0.4 (1.8)} & {\bf 95.6$\pm$1.1 (4.0)}\\ Stepwise RIC &96.4$\pm$0.4 (4.3) &88.5$\pm$3.1 (22.4)\\ Elastic Nets &99.1$\pm$0.3 (106.9) & 93.9$\pm$0.9 (5.8) \\ Lasso & {\bf 99.5$\pm$0.3 (81.9)} & 89.1$\pm$1.0 (69.9)\\ Group Lasso/MKL & 97.7$\pm$0.7 (53) & 92.6$\pm$2.3 (2294) \\ \hline Method & fire & add-1\\ & $\mu\pm\sigma$ ({\it \#f})& $\mu\pm\sigma$ ({\it \#f})\\ \hline Stepwise TPC & {\bf 99.2$\pm$0.6 (1.9)} &{\bf 96.6$\pm$0.4 (4.3)}\\ Stepwise RIC &95.5$\pm$1.4 (3) &91.9$\pm$2.3 (17.2)\\ Elastic Nets &95.4$\pm$1.2 (107.8) & 93.4$\pm$0.5 (1)\\ Lasso & 93.1$\pm$1.3 (106.7) & 93.8$\pm$0.2 (1) \\ Group Lasso/MKL &
97.5$\pm$0.3 (12) & 91.3$\pm$1.5 (1952)\\ \hline Method & identify & promise\\ & $\mu\pm\sigma$ ({\it \#f}) & $\mu\pm\sigma$ ({\it \#f})\\ \hline Stepwise TPC & {\bf 99.0$\pm$0.2 (2.2)} &{\bf 96.4$\pm$0.5 (3.0)}\\ Stepwise RIC &{\bf 99.0$\pm$0.5 (1.9)}&88.2$\pm$3.1 (6.4)\\ Elastic Nets & 89.0$\pm$1.1 (41.2)&91.9$\pm$1.7 (4.8)\\ Lasso & 86.0$\pm$0.9 (10)& 88.3$\pm$1.1 (20.6)\\ Group Lasso/MKL & 97.3$\pm$0.6 (1)& 90.4$\pm$1.2 (232)\\ \hline Method & climb & drive\\ & $\mu\pm\sigma$ ({\it \#f}) & $\mu\pm\sigma$ ({\it \#f})\\ \hline Stepwise TPC & {\bf 98.8$\pm$0.7 (1.9)} &{\bf 99.0$\pm$0.3 (1.4)}\\ Stepwise RIC &88.8$\pm$3.6 (3.7) &92.1$\pm$3.1 (6.0) \\ Elastic Nets &92.5$\pm$1.1 (91) &96.9$\pm$1.0 (1.3) \\ Lasso & 88.8$\pm$1.1 (84.9) & 92.1$\pm$1.4 (18.1) \\ Group Lasso/MKL & 95.9$\pm$0.6 (11) & 97.5$\pm$0.4 (28) \\ \hline \end{tabular} \end{table*} The results for the WSD dataset are presented in Table~\ref{tab:wsdResults}. They show that the number of features selected varies -- sometimes TPC selects more features than the other methods and sometimes fewer -- but the classification accuracy of TPC is higher than that of the other methods in $7$ out of $10$ cases. It equals the accuracy of the best method on $2$ occasions, and once it is slightly worse than GL/MKL. Overall, on the entire set of $172$ verbs, TPC is significantly better (at the 5\% significance level, paired $t$-test) than the competing methods on $160/172$ verbs and has the same accuracy as the best method on $4$ occasions. The accuracies averaged over all $172$ verbs\footnote{Note: These accuracies are for the (1 vs.\ all) 2-class prediction problem, i.e., predicting the most frequent sense. On the other hand, the accuracies given in Chapter~\ref{TransTPC} are for the multi-class problem where we want to predict the exact sense.} are shown in Table~\ref{accuracyAll}.
\begin{table}[htbp] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \caption{10-fold CV accuracies averaged over 172 verbs.} \label{accuracyAll} \begin{center} \begin{small} \begin{tabular}{ccccc} \hline Stepwise TPC & Stepwise RIC & Elastic Nets & Lasso & Group Lasso/MKL\\ \hline 89.81\% & 84.19\%& 86.29\%& 85.94\% & 87.63\% \\ \hline \end{tabular} \end{small} \end{center} \end{table} \paragraph{\textbf{Gene Set Enrichment Analysis (GSEA) Datasets:}} The second set of real datasets that we used for our experiments consisted of gene expression datasets from GSEA \cite{GSEA}. There are multiple gene expression datasets and multiple criteria by which the genes can be grouped into classes. For example, different ways of generating gene classes include C1: Positional Gene Sets, C2: Curated Gene Sets, C3: Motif Gene Sets, C4: Computational Gene Sets, and C5: GO Gene Sets. For our experiments, we used gene classes from the C1 and C2 collections. The gene sets in collection C1 consist of the genes of each human chromosome, divided into cytogenetic bands that have at least one gene. Collection C2 contains gene sets from various sources, such as online pathway databases and the knowledge of domain experts. The datasets that we used and their specifications are shown in Table~\ref{tab:GSEAData}. \begin{table}[htbp] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \caption{GSEA Datasets.} \label{tab:GSEAData} \begin{center} \begin{small} \begin{tabular}{lccc} \hline Dataset & \# Observations & \# Features & \# Classes\\ \hline leukemia (C1) & 48 & 10056 & 182\\ gender 1 (C1) & 32 & 15056 & 212\\ diabetes (C2) & 34 & 15056 & 318\\ gender 2 (C2) & 32 & 15056 & 318\\ p53 (C2) & 50 & 10100 & 308\\ \hline \end{tabular} \end{small} \end{center} \end{table} The results for these GSEA datasets are shown in Table~\ref{GSEA1} below: \begin{table*}[htbp] \caption{10-fold CV accuracies of various methods on the GSEA datasets.
{\it Note:} (\#f) represents the average number of features selected over the 10 folds. {\bf These are true cross validation accuracies and no parameters have been tuned on them.} All the accuracies are ($L_1$) classification accuracies.} \label{GSEA1} \small \centering \begin{tabular}{lcc}\hline Method & leukemia & diabetes\\ & $\mu\pm\sigma$ ({\it \#f})& $\mu\pm\sigma$ ({\it \#f})\\ \hline Stepwise TPC & {\bf 95.8$\pm$0.8 (6.3)} & {\bf80.1$\pm$1.1 (3.7)}\\ Stepwise RIC & 87.5$\pm$1.1 (4.1) &77.3$\pm$1.6 (4.4)\\ Elastic Nets & 91.1$\pm$0.6 (7.1) &78.0$\pm$0.3 (9.1) \\ Lasso & 89.9$\pm$0.5 (15.2) &77.6$\pm$0.7 (14.8)\\ Group Lasso/MKL & 93.0$\pm$0.1 (2263) & 78.7$\pm$0.4 (7139)\\ \hline Method & gender 1 & gender 2 \\ & $\mu\pm\sigma$ ({\it \#f})& $\mu\pm\sigma$ ({\it \#f})\\ \hline Stepwise TPC & {\bf 93.8$\pm$0.9 (4.2)} & {\bf 96.9$\pm$1.3 (4.0)}\\ Stepwise RIC &{\bf 93.8$\pm$0.9 (4.3)} &94.5$\pm$1.2 (4.5)\\ Elastic Nets &90.6$\pm$0.4 (13.1) & 92.9$\pm$0.7 (5.8)\\ Lasso & 90.4$\pm$0.3 (16.7) & 93.1$\pm$0.5 (7.9)\\ Group Lasso/MKL & {\bf 93.8$\pm$0.7 (5084)} & 93.4$\pm$1.4 (10150) \\ \hline Method & p53\\ & $\mu\pm\sigma$ ({\it \#f})\\ \hline Stepwise TPC & {\bf 74.0$\pm$2.1 (1.1)}\\ Stepwise RIC &66.0$\pm$1.4 (2.1)\\ Elastic Nets &70.1$\pm$0.8 (6.8)\\ Lasso & 69.8$\pm$0.4 (7.4)\\ Group Lasso/MKL & 72.5$\pm$0.3 (5792)\\ \hline \end{tabular} \end{table*} On these datasets, too, TPC is significantly better than the competing standard methods. It is interesting to note that the TPC methods sometimes selected substantially fewer features, yet still gave better performance than the other methods. This is consistent with the predictions of Equation \ref{netcost}: although the number of features selected, $q$, may be small, the number of classes, $K$, is quite large for the GSEA datasets.
\chapter{Model 3: Transfer-TPC}\label{TransTPC} In this chapter we describe our last model, Transfer-TPC, which falls in the second category of transfer learning models, those that do ``sequential transfer'', i.e., the task to which we want to transfer knowledge may not be known in advance, but it is similar to other tasks according to some ``similarity metric''. Transfer-TPC is most beneficial when we want to transfer knowledge between tasks that have unequal amounts of labeled data. Transfer-TPC not only improves learning on tasks that have less data, but also gives significant benefits in predictive accuracy on tasks that have comparable amounts of data. We first describe our transfer learning formulation, Transfer-TPC, which uses TPC, as described in the last chapter, to transfer knowledge between tasks. \section{Transfer Learning Formulation} \label{sect:pdf} Transfer-TPC uses TPC, as described in Chapter~\ref{TPCChapter}, as a baseline model. TPC provides more accurate predictions than competing methods and can easily be extended to incorporate prior information and share information between similar tasks, as shown below. Priors on features and feature classes, as learned by transfer from other ``similar'' tasks, change the cost of coding a feature or a feature class. The number of bits that should be used to code a fact, such as a feature being included in the model, is the negative log of the probability of that fact being true. Thus, having better estimates of how likely a feature is to be included in the model allows more efficient coding. Similarly, knowing how likely it is that some feature in a given class of features will be included in the model allows us to code that feature class more precisely. Using priors from similar tasks to better code features and feature classes is at the core of Transfer-TPC.
\subsection{Transfer TPC} For Transfer-TPC, we define two binary random variables $fc_i$ and $f_j$ $\in$ \{0,1\} that denote the events of the $i^{th}$ feature class and the $j^{th}$ feature being in or not in the model for the test task. To be more precise, $fc_i =1$ denotes the event of the $i^{th}$ feature class being in the model and $fc_i=0$ denotes the complementary event of this feature class not being selected by the model. Similar conditions hold for the features $f_j$. We can parameterize the distributions as follows: \begin{eqnarray} \label{equationdist} && p(fc_i=1 | \theta_i) =\theta_i \\ && p(f_j=1 | \mu_j) = \mu_j \end{eqnarray} In other words, we have a Bernoulli distribution over the feature classes and the features. It can be represented compactly as: \begin{eqnarray} && Bernoulli(fc_i| \theta_i) =\theta_i^{fc_i}(1 - \theta_i)^{1 - fc_i} \\ && Bernoulli(f_j| \mu_j) =\mu_j^{f_j}(1 - \mu_j)^{1 - f_j} \end{eqnarray} If we have a total of $t$ training tasks, then, given the data for the $j^{th}$ feature for all the training tasks, $\mathcal{D}_j =\{f_{j1},...,f_{jv},...,f_{jt}\}$, we can construct the likelihood functions from the data (under the i.i.d.\ assumption) as: \begin{eqnarray} \nonumber && p(\mathcal{D}_{fc_i} | \theta_i)= \prod_{u=1}^{t}p(fc_{iu} | \theta_i) = \prod_{u=1}^{t}\theta_i^{fc_{iu}}(1 - \theta_i)^{1-fc_{iu}} \\ \nonumber && p(\mathcal{D}_j | \mu_j)= \prod_{v=1}^{t}p(f_{jv} | \mu_j) = \prod_{v=1}^{t}\mu_j^{f_{jv}}(1 - \mu_j)^{1-f_{jv}} \end{eqnarray} Note: the total data vector for all the $m$ features can be represented as $\mathcal{D}~=~\{\mathcal{D}_1,...,\mathcal{D}_m\}$; the feature class data $\mathcal{D}_{fc_i}$ can be derived from this data by noting the simple fact that a feature class is selected, i.e., $(fc_i =1)$, if at least one feature from that feature class has been selected, i.e.,
$\mathcal{D}_{fc_i}=\{\mathcal{D}_1,..., \mathcal{D}_{m_i}\}$, where we assume that the $i^{th}$ feature class has features $\{1,...,m_i\}$. The posteriors can be calculated by putting a prior over the parameters $\theta_i$ and $\mu_j$ and using Bayes' rule as follows: \begin{equation} p(\theta_i| \mathcal{D}_{fc_i}) \propto p(\mathcal{D}_{fc_i}| \theta_i) \times p(\theta_i| a, b) \end{equation} where $a$ and $b$ are the hyperparameters of the Beta prior $(\theta_i^{a-1}(1-\theta_i)^{b-1})$, which is a conjugate prior for the Bernoulli distribution. Similarly, we can write the analogous equation involving $\mu_j$ for the posterior over features. Using the posterior obtained above, we can evaluate the predictive distribution of $\theta_i$ and $\mu_j$ as: \begin{equation} p(fc_i=1| \mathcal{D}_{fc_i})= \int_0^1p(fc_i = 1| \theta_i)p(\theta_i| \mathcal{D}_{fc_i})d\theta_i \end{equation} Substituting from Equation~\ref{equationdist} in the above equation, we get: \begin{equation} p(fc_i=1| \mathcal{D}_{fc_i})= \int_0^1\theta_i p(\theta_i| \mathcal{D}_{fc_i})d\theta_i = \mathbb{E}[\theta_i |\mathcal{D}_{fc_i}] \end{equation} Similarly, for the features we can write: \begin{equation} p(f_j=1| \mathcal{D}_j)= \int_0^1\mu_j p(\mu_j| \mathcal{D}_j)d\mu_j = \mathbb{E}[\mu_j |\mathcal{D}_j] \end{equation} Using the standard result for the posterior mean of a Beta distribution, we obtain: \begin{equation} \label{fcequation} p(fc_i=1| \mathcal{D}_{fc_i})= \frac{k + a}{k + l + a +b} \end{equation} where $k$ is the number of times that the $i^{th}$ feature class is selected and $l$ is the complement of $k$, i.e., the number of times the $i^{th}$ feature class is not selected in the training data. We discuss below how to choose the hyperparameters of the beta prior, $a$ and $b$.
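Equation~\ref{fcequation} is just a smoothed selection frequency; a minimal sketch (function name ours):

```python
def selection_prob(k, l, a, b):
    """Posterior predictive probability that an event occurs, given a
    Beta(a, b) prior and k occurrences / l non-occurrences observed
    in the training tasks: (k + a) / (k + l + a + b)."""
    return (k + a) / (k + l + a + b)
```

With no transfer data ($k=l=0$) and hyperparameters $a=1$, $b=K-1$ (the choice made later in this chapter), this reduces to $1/K$, whose negative log recovers the $\log(K)$ class cost of standard TPC.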
For the features we obtain a similar equation: \begin{equation} \label{fequation} p(f_j=1| \mathcal{D}_j)= \frac{a + c}{a + z + c +d} \end{equation} where, overloading notation slightly, $a$ here denotes the number of times the $j^{th}$ feature is selected and $z$ is its complement, i.e. the number of times the $j^{th}$ feature is not included in the model; as earlier, $c$ and $d$ are the hyperparameters of the Beta prior. \subsection{Discussion of Transfer-TPC} As can be seen from Equations~\ref{fcequation} and~\ref{fequation}, the probability that a feature class or a feature is selected is a ``smoothed'' average of the number of times it was selected in the models of tasks similar to the task under consideration, i.e. the training tasks ($\mathcal{D}$). We use these probabilities to formulate a coding scheme, which we call Transfer-TPC, that incorporates the prior information about the predictive quality of the various features and feature classes obtained from similar tasks. In light of the above, the coding scheme can be formulated as follows: \begin{equation} \label{firsteq} S_M = -\log{(p(fc_i=1| \mathcal{D}_{fc_i}))} -\log{(p(f_j=1| \mathcal{D}_j))} + 2 \end{equation} when that feature class has not yet been selected, and \begin{equation} \label{thirdeq} S_M = \log{(Q)} -\log{(p(f_j=1| \mathcal{D}_j))} + 2 \end{equation} when that feature class has already been selected, where $Q$ is the total number of feature classes selected up to that point. In both equations, the first term codes the feature classes, the second term codes the features, and the third term codes the coefficients. The minus signs make the first two terms positive, since they are logarithms of probabilities (numbers less than one); they also allow the coding scheme to be compared directly to the standard TPC coding scheme, as we explain shortly.
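As a numeric illustration of the first term of Equation~\ref{firsteq}: with the hyperparameter choice we adopt in the next subsection (prior mean $1/K$), the no-transfer case recovers the standard TPC cost $\log K$. The value of $K$, the counts, and the use of base-2 logarithms below are illustrative:

```python
import math

def class_predictive(k, l, a, b):
    """Smoothed probability that a feature class is selected,
    i.e. (k + a) / (k + l + a + b)."""
    return (k + a) / (k + l + a + b)

K = 40                # total number of feature classes (illustrative)
a, b = 1, K - 1       # Beta hyperparameters with prior mean 1/K

# No transfer (k = l = 0): the predictive is 1/K, so the code cost
# -log p(fc = 1) reduces to the standard TPC cost log K.
cost_no_transfer = -math.log2(class_predictive(0, 0, a, b))

# Transfer: the class was selected in 6 of 8 similar training tasks,
# making it cheaper to code, and hence easier to add to the model.
cost_transfer = -math.log2(class_predictive(6, 2, a, b))
```

Here `cost_no_transfer` equals $\log_2 40 \approx 5.32$ bits, while the transferred prior cuts the cost to roughly $2.8$ bits.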
The above equations are used as the coding scheme in Setting 1 of our experiments, as we explain later. For Setting 2, the coding scheme is slightly different, as we transfer a prior only on features; hence Equation~\ref{firsteq} changes to: \begin{equation} \label{secondeq} S_M = \log{(K)} -\log{(p(f_j=1| \mathcal{D}_{j}))} + 2 \end{equation} where $K$ is the total number of feature classes. Otherwise, the two settings are the same. \subsection{Choice of Hyperparameters} The hyperparameters $a$ and $b$ in Equation~\ref{fcequation} and $c$ and $d$ in Equation~\ref{fequation} control the ``smoothing'' of our probability estimates, i.e. how strongly we believe the evidence obtained from the training data (i.e. similar word senses) and how much effect it should have on the model that we learn for the test task. In all our experiments, we set $a=1$ and choose $b$ so that in the limiting case of no transfer ($k=l=0$ in Equation~\ref{fcequation}) the coding scheme reduces to the standard TPC scheme discussed in Section 2. Thus, we choose $b = K-1$, where $K$ is the total number of feature classes in the test task; similarly, we choose $c=1$ and $d=m_k-1$, where $m_k$ is the total number of features in the $k^{th}$ feature class for the test task. As a consequence of this choice of hyperparameters, we give less weight to the prior when there are few tasks in the training set: if there are only one or two tasks similar to the target test task, the prior on the test task will be weaker than if there were many similar tasks to transfer from. \section{Experiments} In this section we present experimental results for Transfer-TPC on real Word Sense Disambiguation (WSD) data in a variety of settings. The tasks in this case are the various senses of the different words. First, we give an overview of our algorithm, i.e.
how we applied Transfer-TPC to the WSD problem; then we describe our data and explain the similarity metric we used to define similarity between different word senses. \subsection{Overview of our algorithm} Our transfer learning approach has several steps: \begin{itemize} \item Learn a separate model for distinguishing the different senses of each word. This results in logistic regression models for distinguishing each sense from all other senses of that word. Use feature selection so that these models have relatively small sets of features. \item Cluster word senses based on those features from their models that are positively correlated with those particular word senses. That is, characterize each word sense by those features in its model that have positive regression coefficients. (In general, features with positive coefficients are associated with the given sense, and those with negative coefficients are associated with other senses of that word.) Clusters should contain only highly similar senses, so many senses will not end up in a cluster. We use a ``foreground-background'' clustering method that puts all singleton points into a ``background cluster'', which we then ignore. \item For each ``target'' word sense to be predicted, use the ``positive'' features of the other word senses in its cluster to estimate the probability of each feature being relevant for disambiguating the word that includes the target verb sense. These probabilities (priors) are used to specify the coding length (the negative log of the probability) when searching for the MDL model for disambiguating that word. \item Given models for distinguishing each sense of a word from all its other senses, disambiguate each occurrence of that word by choosing the sense whose model gives the highest score. \end{itemize} We share knowledge at the level of senses of the words rather than at the level of words, as there are very few words that are similar in ``all'' their senses.
There are, however, many words that have one or more senses that are similar to senses of other words. Transfer occurs in the third of the steps presented above, which uses the models learned for other ``similar'' word senses (i.e. word senses falling in the same cluster) to generate a prior on what features and feature classes should be selected for the test word sense. We show below that Transfer-TPC outperforms a variety of state-of-the-art methods that do not do transfer learning, including SVMs with a well-tuned kernel, TPC without transfer learning, and simple stepwise regression with a RIC (also known as ``Bonferroni'') penalty. We also show that transfer benefits both from sharing of semantic features (e.g., the topic of the document the word is in) and syntactic features (e.g., the parts of speech of the surrounding words). Transfer-TPC is particularly useful when transferring information from frequent to rare word senses, but gives significant benefits even for words having similar amounts of data. \subsection{Description of Data} We performed our experiments on VerbNet data for 172 verbs, obtained from Martha Palmer's lab~\cite{palmer00,kipper06}. This is the same data as was used for the TPC experiments in Section~\ref{palmerdata}. Each of the 172 verbs had 36--43 different feature classes and a total of 1000--10000 features. The number of senses varied from 2 (for example, ``add'') to 15 (for example, ``turn''). Note that some senses of a word may not show up in the data; for example, there are 3 senses of the word ``account'' according to WordNet and VerbNet but only two of them appear in our data, so we disambiguate among those two senses only. \subsection{Similarity Metric} Finding a good similarity metric between different word senses was perhaps one of the biggest challenges that we faced.
There are many human-annotated ``linguistic'' similarity lexicons, like words belonging to the same Levin classes~\cite{levin}, hypernyms or synonyms according to WordNet~\cite{wordnet1,wordnetlin}, or words having the same VerbNet classes~\cite{palmer00,kipper06}. In addition, people have used InfoMap (\texttt{http://infomap.stanford.edu})~\cite{rainangkoller}, which gives a distributional similarity score for words in the corpus. One can also do k-means or hierarchical agglomerative clustering of the word senses. But the main shortcoming of all these methods is that they assign every word sense to some cluster; in reality, many word senses are not similar to any other word sense, either semantically or syntactically. In such cases the distributional similarity scores returned by InfoMap mostly contain noise, and there is a risk of fitting noise and not doing a good job of transfer on the test word sense. In essence, what we need is a similarity measure that gives us very ``tight'' clusters of word senses and does not cluster the ``junk'' word senses which are not similar to any other word sense in the corpus. To overcome these shortcomings, we use the ``foreground-background'' clustering algorithm proposed by~\cite{kandylas07}. This algorithm gives highly cohesive clusters of word senses, called the ``foreground'', and puts all the ``junk'' word senses in the ``background''. It may help to think of the analogy with computer vision, where the foreground represents the region of interest and the background consists of everything else. In our setting we first find positively correlated features for each sense of the word separately, using Simes' method~\cite{simes,simes2}, as these are the ``true'' features for that particular word sense.
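The Simes procedure referred to here is the standard one: with sorted p-values $p_{(1)} \le \dots \le p_{(n)}$, the global null is rejected if $p_{(i)} \le i\alpha/n$ for some $i$. A minimal sketch, where the p-values are illustrative (in our setting they would come from per-feature tests of positive correlation with the sense):

```python
def simes_reject(pvalues, alpha=0.05):
    """Simes' test of the global null hypothesis: reject if
    p_(i) <= i * alpha / n for some i, where p_(1) <= ... <= p_(n)
    are the sorted p-values."""
    ps = sorted(pvalues)
    n = len(ps)
    return any(p <= (i + 1) * alpha / n for i, p in enumerate(ps))

# Illustrative p-values: one strong signal vs. none.
strong = simes_reject([0.001, 0.20, 0.80])   # 0.001 <= 1 * 0.05 / 3
weak = simes_reject([0.30, 0.50, 0.90])      # no p-value clears its threshold
```

One strongly significant feature (here $p = 0.001 \le 0.05/3$) suffices to reject, which is what makes the procedure suitable for screening a sense's ``true'' features.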
For example, for one sense of the word ``fire'' which means ``to dismiss somebody from employment'', the positive features were \begin{verbatim} `company', `executive', `friday', `hired', `interview', `job', `named', `probably', `sharon', `wally', `years', `join', `meet', `replace', `pay', `quoted',....,`VBD-VBN', `VB-VP',`VB-VP-NP-PRP', `PRP', `VBD-NP'... etc. \end{verbatim} whereas for another sense of the same word ``fire'' which means ``to ignite something'', the positive features were \begin{verbatim} `prehistoric', `same', `temperature',`israeli', `palestinian',`incident',`months', `retaliation',`showing'... `NNP-NP-S-VP-VBD', `NNP-NP-S-VP-VBD',`NNP',`NNP-NNP-VBD',`NNP-VBD' ... etc. \end{verbatim} Note that we have shown both semantic and syntactic positive features, though due to space constraints we have not shown all of them. We then cluster the word senses, where each word sense is represented by the above ``positive feature'' vector. These features include both semantic features (for example, those in the ``topic'' feature class) and syntactic features (for example, those in the ``pos'' and ``dobj'' feature classes). Only $67\%$ of all the word senses fall into foreground clusters in this setting. The sample clusters that we got using this approach included senses of words like \begin{verbatim} {`back',`stay', `walk', `step'}, {`kill', `capture', `arrest',`strengthen', `release'}. \end{verbatim} As is evident, these two clusters contain words with semantic distributional similarity only; we also had clusters like \begin{verbatim}{`love',`promise', `turn', `wear'}, \end{verbatim} where the words have semantic (for example, `love' and `promise') as well as syntactic similarity (for example, `love' and `wear', which share the features `leftpath1\_NNS-NP-S-VP-VBP', `leftpos1\_NNS' and `leftsurfpath1\_NNS-VBD', or `wear' and `turn'). We also report results in which we perform the clustering using only semantic features and only syntactic features.
The motivation behind doing this is that it would be interesting to see which kind of features is more responsible for the performance improvement due to transfer learning. Sample clusters for the case of only semantic similarity include \begin{verbatim} {`beat', `strike', `attack',`support'}, {`do', `die', `save'}, {`agree', `approve'}, {`end', `finish'}. \end{verbatim} For the case of only syntactic clustering, the clusters included \begin{verbatim} {`beat', `respond', `urge'}, {`note', `learn', `shake'}, {`sleep', `write-1', `write-2'}. \end{verbatim} These are just a small set of representative clusters; besides these, we had about 60-70 more clusters in each case. \subsection{Experimental Setup} We break the problem of WSD down from the level of words to the level of senses, i.e. if we have 10 verbs with 4 senses each, we break them up into $10 \times 4$ learning tasks. Such a partitioning makes sense because it is very difficult to find a good similarity metric at the level of words, i.e. it is very difficult to find two words which are similar in ``all'' their senses. But if we break the problem down to the level of senses, then we can find two or more words which are similar in one sense. For example, the words ``fire'' and ``dismiss'' are similar in only the sense that means ``to dismiss somebody from work''; their other senses are quite different from each other. In such a case it makes sense to have only these senses of ``fire'' and ``dismiss'' in the same cluster for doing transfer, rather than putting all their senses in the same cluster. Later, once we have learned models for each of these senses separately, we can recombine the senses and disambiguate the word as a whole. The predicted sense of the word is the sense whose model gives the highest score.
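This prediction rule is a simple argmax over per-sense model scores; a minimal sketch, where the scorers are hypothetical placeholders standing in for the learned logistic regression models (the feature weights are made up for illustration):

```python
def predict_sense(occurrence, sense_models):
    """One-vs-all prediction: return the sense whose model assigns the
    occurrence the highest score."""
    return max(sense_models, key=lambda sense: sense_models[sense](occurrence))

# Hypothetical linear scorers for two senses of the verb "fire".
models = {
    "fire.dismiss": lambda x: 2.0 * x.get("job", 0) + 1.5 * x.get("company", 0),
    "fire.ignite":  lambda x: 2.0 * x.get("temperature", 0) + 1.0 * x.get("incident", 0),
}
occurrence = {"job": 1, "company": 1}   # bag-of-features for one occurrence
predicted = predict_sense(occurrence, models)
```

An occurrence containing `job' and `company' scores highest under the ``dismiss'' model, so that sense is output; this is the ``one vs all'' evaluation protocol applied uniformly to all methods below.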
So, it is quite possible that for some senses of a word we can do lots of transfer, as there are many senses of other words similar to them, while for other senses of the same word there may not be many similar senses, and hence less transfer for those senses. In the end, it turns out that words all of whose senses have similar senses in the corpus achieve very high WSD performance, while words for which we could find similar senses for only some of their senses show a smaller improvement over the no-transfer baseline, which is quite intuitive. To ensure fairness of comparison, we adopt the same methodology of outputting the sense whose model has the highest score as the predicted sense for all the methods that we compare against. This kind of prediction in multi-class problems is commonly known as the ``one vs all'' approach. We do transfer in four slightly varied settings, to tease apart the method and learn more about its subtle aspects. In our main Transfer-TPC setting (Setting 1), we transfer a prior on both the features and the feature classes of the test word sense, and we cluster the word senses based on ``semantic and syntactic'' similarity. Setting 2 is similar to Setting 1 except that we transfer a prior only on features and not on feature classes. The coding scheme for Setting 1 is given by Equations~\ref{firsteq} and~\ref{thirdeq}, and the coding scheme for Setting 2 by Equations~\ref{secondeq} and~\ref{thirdeq}. As can be seen, these schemes differ only in the way they code the feature classes. We slightly modify the above settings to gain further insight into the linguistic aspects of the transfer. So, we transfer a prior on only the ``semantic'' features and feature classes, i.e.
features in feature classes like ``tp'' (topic of the document), and this time the clustering of word senses was also done based on only the ``semantic'' features (Setting 3). In Setting 4 we transfer a prior on only the ``syntactic'' features and feature classes, i.e. features in feature classes like ``pos'', ``dobj'' etc., and the clustering of word senses was done based on only ``syntactic'' features. \subsection{Results} We compare Transfer-TPC against standard TPC, stepwise RIC, and an SVM with a well-tuned radial basis function (RBF) kernel. We also compare against a baseline majority-voting algorithm which outputs the most frequent sense of the word as the predicted sense. For standard TPC we used the same coding scheme as in Section~\ref{TPCcodingscheme}. For the SVM we used the standard libSVM package~\cite{libsvm} and tried various kernels, including linear, polynomial and RBF; in the end we used the RBF kernel, as it gave the best performance on separate held-out data. We tuned the ``gamma'' parameter of the RBF kernel using cross-validation. The results for the various settings are shown in Table~\ref{TTPC}. As is evident from Table~\ref{TTPC}, Setting 1, i.e. the setting in which we put a prior on features as well as feature classes of the test word sense and do ``semantic + syntactic'' clustering, gave the best accuracy averaged over all 172 verbs, significant at the $5\%$ level (paired t-test). Settings 3 and 4, in which we cluster based on only ``semantic'' and only ``syntactic'' features respectively, also gave significant (at the $5\%$ level, paired t-test) improvements in accuracy over the competing methods. But these settings performed slightly worse than Setting 1, which suggests that it is a good idea to have clusters in which the word senses have ``semantic'' as well as ``syntactic'' distributional similarity.
Also worth noting, Setting 2, in which we put the prior on only the features of the test word, gave slightly worse performance than Settings 1, 3 and 4, which suggests that it helps to generalize across feature classes as well as features. In addition, we give some examples that reiterate the point made earlier, i.e. that transfer helps most when the test word sense has much less data than the training word senses. ``kill'' had roughly $5.5$--$6$ times more data than all other word senses in its cluster, i.e. ``arrest'', ``capture'', ``strengthen'' etc., and in this case Transfer-TPC gave $4.61$--$9.22\%$ higher accuracies than competing methods on these three words. Also, for the word ``do'', which had roughly $10$ times more data than the other word senses in its cluster, like ``die'' and ``save'', Transfer-TPC gave $6.11$--$8.63\%$ higher accuracies than other methods. For the word ``write'', which had 4 times more data than ``sleep'', transferring improved accuracy by $4.09\%$. It is worth noting that all these reported improvements in accuracy are much larger than the average improvement over the entire $172$ verbs reported in Table 1, which supports the claim that transfer makes the biggest difference when the test words have much less data than the training word senses; but even in cases where the words had similar amounts of data we obtained a $2.5$--$3.5\%$ increase in accuracy. We would also like to mention the case of negative transfer~\cite{Caruana97multitasklearning}, i.e. cases where transfer actually hurt performance; we observed this phenomenon for $8$ of the $172$ verbs. \begin{table*} [htbp] \setlength{\abovecaptionskip}{1pt} \setlength{\belowcaptionskip}{1pt} \caption{10-fold CV accuracies of various methods for the various transfer learning settings.
{\it Note:} {\bf These are true cross-validation accuracies and no parameters have been tuned on them.} In Settings 3 and 4 we did only ``semantic'' and only ``syntactic'' clustering respectively, in contrast to Settings 1 and 2, so the accuracies of the competing methods, i.e. TPC, SVM, stepwise RIC and majority vote, differ slightly across the settings: there were words some of whose senses fell in the ``foreground'' in Settings 1 and 2 but ``all'' of whose senses fell in the ``background'' in Settings 3 and 4, and vice versa, so the settings did not contain exactly the same words. } \centering \begin{small} \label{TTPC} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{Setting 1:}\\ \hline Transfer-TPC & Standard TPC & SVM & Stepwise RIC & Majority Vote\\ \hline $86.29\%$&$83.50\%$&$83.77\%$&$79.04\%$&$76.59\%$\\ \hline \multicolumn{5}{|c|}{Setting 2:}\\ \hline Transfer-TPC & Standard TPC & SVM & Stepwise RIC & Majority Vote\\ \hline $85.75\%$&$83.50\%$&$83.77\%$&$79.04\%$&$76.59\%$\\ \hline \multicolumn{5}{|c|}{Setting 3:}\\ \hline Transfer-TPC & Standard TPC & SVM & Stepwise RIC & Majority Vote\\ \hline $85.11\%$&$83.09\%$&$83.44\%$&$79.27\%$&$77.14\%$\\ \hline \multicolumn{5}{|c|}{Setting 4:}\\ \hline Transfer-TPC & Standard TPC & SVM & Stepwise RIC & Majority Vote\\ \hline $85.37\%$&$83.34\%$&$83.57\%$&$79.63\%$&$77.24\%$\\ \hline \end{tabular} \end{center} \end{small} \end{table*} \chapter{Discussion: A Unified View} So far, we have seen the three models in isolation. We now look for a unified representation of the three models and explore the connections between them. This provides deeper insight into how the models work and into how to select the best model for a given problem. We have presented the three methods using an information-theoretic approach, but they can be interpreted as Bayesian models by noting that the cost of coding an event (such as a feature being in a model) of probability $p$ is $-\log(p)$.
Thus, the RIC penalty of $\log(m)$ (the log of the number of candidate features) is just $-\log(p)$ where $p= 1/m$ assumes that one of the $m$ features will enter the model. Transfer-TPC estimates the probability of a feature entering the model as the fraction of times it was used in models on similar tasks. MIC and TPC, roughly speaking, model the probability of a feature being added to a model as the fraction of features in its feature class that have already been added to the model. As such, they have the flavor of an empirical Bayes model, which ends up using as a prior for a class the fraction of features already added from that class. \section{Connection between TPC and Transfer-TPC} TPC is the basic building block on which Transfer-TPC has been built, and in the case of no transfer the two are equivalent. The basic TPC scheme can be represented in a Bayesian way as follows: \begin{equation}\label{unify} P(f_{ij}=1) = P(f_{ij}=1 |fc_j=1) \cdot P(fc_j=1) \end{equation} where $P(fc_j=1)$ is the probability of the $j^{th}$ feature class being included in the model and $P(f_{ij}=1 |fc_j=1)$ is the probability of the $i^{th}$ feature from the $j^{th}$ feature class being included in the model given that the $j^{th}$ feature class is already in the model. In the case of standard TPC, $P(fc_j=1) = \frac{1}{K}$, where $K$ is the total number of feature classes in the data, and $P(f_{ij}=1 |fc_j=1)= \frac{1}{m_j}$, where $m_j$ is the total number of features in the $j^{th}$ feature class. Replacing the probabilities by their negative logarithms, Equation~\ref{unify} reduces to the TPC scheme explained in Chapter 6. It can easily be seen that in the case of Transfer-TPC the above equation still holds, but the values of the probabilities depend on whether those features and feature classes have been selected in the models of other ``similar tasks''.
In that case $P(fc_j=1) = \frac{k+a}{k+l+a+b}$ and $P(f_{ij}=1 |fc_j=1)= \frac{a + c }{a + z + c + d}$, where the symbols have the same meaning as explained in Chapter 7. \section{Connection between MIC and TPC} As pointed out earlier, MIC and TPC both do ``simultaneous transfer'' and can be used for ``joint feature selection'' for a set of related tasks which share the same set of features. Both put coefficients into classes; the key difference is that in MIC the coefficient class is the set of coefficients of a single feature across all tasks, while in TPC each feature class contains multiple features and is specified explicitly. In both cases, we first code whether any feature from a class is added, and then which features from within the class are added. This has the consequence that once one feature from a class has been added, other features in that class become much easier to add. The coding also ensures that subsequent features are increasingly easy to add. This is similar in spirit to widely used methods of controlling the false discovery rate in the absence of feature classes~\cite{Benjamini}. \section{Connection between MIC and Transfer-TPC} MIC and Transfer-TPC are the most different of the pairs of methods, as MIC does ``simultaneous transfer'' and expects all tasks to share the same set of features, whereas Transfer-TPC is more flexible and can work even when the tasks have unequal amounts of data and the task to which we want to transfer knowledge is not known in advance. In our implementation, we assume that all tasks in MIC are potentially related, but for Transfer-TPC we explicitly look for tasks that are ``similar'' to the target task being learnt. Transfer-TPC does not require that the different tasks share the same set of feature values (unlike MIC, which does).
In the case in which all the different tasks have the same set of features and all tasks are assumed to be ``similar'' to each other, there is a direct mapping between the MIC and Transfer-TPC settings, as we can rewrite the $n \times h$ matrix in the MIC problem as $h$ separate $n \times 1$ matrices, with all $h$ tasks being in the ``same cluster'' for doing Transfer-TPC. In short, under these conditions the MIC and Transfer-TPC settings coincide, and MIC emerges as a special case of Transfer-TPC in which we transfer from all $h-1$ remaining tasks. \chapter{Conclusion} In this thesis we presented three related ways of using transfer learning to improve feature selection. The three approaches share different kinds of information between tasks or feature classes, and are based on the information-theoretic Minimum Description Length (MDL) principle. Two of the models, MIC and TPC, do ``joint feature selection'' for a set of related prediction tasks which share the same set of features, while the third model, Transfer-TPC, does ``sequential transfer'' between tasks which do not share observations. Transfer-TPC is particularly useful when transferring knowledge between tasks which have unequal amounts of labeled data. All three models gave accuracies on a set of genomic and Word Sense Disambiguation datasets that are uniformly as good as or better than state-of-the-art methods, often using models that are more sparse. We also saw that under certain conditions and assumptions the three models are ``inter-reducible''. Thus, depending on the characteristics of the prediction problem at hand, we can choose one of the methods to improve feature selection by transferring knowledge. \end{document}
\begin{document} \title{Throttling Equilibria in Auction Markets} \author{Xi Chen\footnote{Supported by NSF grants IIS-1838154 and CCF-1703925.}} \maketitle \begin{abstract} Throttling is a popular method of budget management for online ad auctions in which the platform modulates the participation probability of an advertiser in order to smoothly spend her budget across many auctions. In this work, we investigate the setting in which all of the advertisers simultaneously employ throttling to manage their budgets, and we do so for both first-price and second-price auctions. We analyze the structural and computational properties of the resulting equilibria. For first-price auctions, we show that a unique equilibrium always exists, is well-behaved and can be computed efficiently via t\^atonnement-style decentralized dynamics. In contrast, for second-price auctions, we prove that even though an equilibrium always exists, the problem of finding an equilibrium is PPAD-complete, there can be multiple equilibria, and it is NP-hard to find the revenue maximizing one. We also compare the equilibrium outcomes of throttling to those of multiplicative pacing, which is the other most popular and well-studied method of budget management. Finally, we characterize the Price of Anarchy of these equilibria for liquid welfare by showing that it is at most 2 for both first-price and second-price auctions, and demonstrating that our bound is tight. \end{abstract} \section{Introduction} Online ad auctions are the workhorse of the internet advertising industry; when a user visits an internet-based platform, an auction is run among the interested advertisers to determine the ad to be displayed to the user. Due to the large volume of these auctions, many advertisers are budget constrained: if they were allowed to participate in all the auctions they are interested in, they would end up spending more than their budget. This motivates the platforms to offer budget-management services.
The focus of this paper is a popular budget-management service known as \emph{throttling} (or alternatively as \emph{probabilistic pacing}) which is offered by internet giants like Facebook~\citep{facebookguide}, Google~\citep{karande2013optimizing} and LinkedIn~\citep{agarwal2014budget}. Throttling manages the expenditure of an advertiser by controlling the probability with which she participates in each individual auction. The use of participation probability as a control lever allows the platform to evenly spread out an advertiser's expenditure throughout her advertising campaign, while ensuring that she does not spend more than her budget. Furthermore, in contrast to other budget-management methods like multiplicative pacing, throttling does not modify the bids of the advertisers to achieve this, which is essential for advertisers aiming to maintain a stable cost-per-opportunity~\citep{facebookguide}. Additionally, in practice, many advertisers do not opt into budget-management services that modify their bids, forcing the platform to satisfy their budget constraint by only controlling their participation probability, as in throttling~\citep{karande2013optimizing}. Importantly, throttling also gives advertisers a more representative sample of users for which they are eligible and their bid is competitive~\citep{karande2013optimizing}. This is in contrast to budget-management approaches that modify bids, such as multiplicative pacing, which biases the allocation towards users where the advertiser has a high probability of getting a click, relative to other advertisers. Many advertisers place a premium on the predictability and representative samples offered by unmodified bids, motivating the platforms to offer throttling as a budget-management option. 
Throttling has received significant attention in previous work, the vast majority of which studies it from the perspective of a single buyer participating in repeated generalized second-price auctions (see Section~\ref{sec:related-works}). In contrast, the scenario where all of the buyers simultaneously employ throttling to manage their budgets, and the resulting system-level properties, have received very little attention. In this paper, we attempt to remedy this situation by providing a structural and computational analysis of simultaneous multi-buyer throttling for both first-price and second-price auctions. More specifically, we analyze the resulting games, with an emphasis on equilibria and repeated play. \subsection{Main Contributions} We define a \emph{throttling game} with budget-constrained buyers (advertisers) and stochastic good types (user types), in which each buyer chooses the probability with which she participates in the auction, with the goal of maximizing her expected utility while satisfying her budget constraint in expectation. Repeated play of this throttling game captures the repeated online ad auction setting in which each buyer employs throttling to manage their budget. Furthermore, we define the concept of \emph{throttling equilibrium} for this game, show its equivalence to pure strategy Nash equilibrium, and analyze it with an emphasis on its structural and computational properties. We summarize our results below. \textbf{First-price Auctions:} We show that a throttling equilibrium always exists, and characterize it as the maximal element in the set of participation probabilities that result in all buyers satisfying their budgets (Theorem~\ref{thm:existence-first-price}). Furthermore, we use this characterization to establish its uniqueness. 
On the computational front, we describe decentralized dynamics in which buyers repeatedly play the throttling game and make simple t\^{a}tonnement-style adjustments to their participation probabilities based on their expected expenditure (Algorithm~\ref{alg:dynamics}). We show that these t\^{a}tonnement-style dynamics converge to an approximate throttling equilibrium in polynomial time (Theorem~\ref{thm:first-price-dynamics}). \textbf{Second-price Auctions:} We begin by establishing that a throttling equilibrium always exists for second-price auctions (Theorem~\ref{theorem:existence_second_price}), but find that it may not be unique, and for some games all throttling equilibria can be irrational. Next, we prove results about the computational complexity of finding throttling equilibria, which requires the use of terminology from computational complexity theory. Before summarizing those results, we make a note for readers who may not be familiar with complexity theory: In order to make our results more accessible, we provide an informal description of them at the head of every subsection, in an attempt to avoid letting complexity-theoretic terminology obscure the conclusions derived from the result. Continuing on with the summary of our results, we prove that the problem of computing approximate throttling equilibria is PPAD-hard even when each good has at most three bids (Theorem~\ref{thm:second-price-PPAD-hard}), by showing a reduction from the PPAD-hard problem of computing an approximate equilibrium of a threshold game~\citep{PB2021}. As a consequence, we show that, unlike first-price auctions, no dynamics can converge in polynomial time to a second-price throttling equilibrium (assuming PPAD-complete problems cannot be solved in polynomial time). 
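To give a feel for the contrast between the two auction formats, here is a toy sketch of t\^{a}tonnement-style budget adjustment for first-price auctions with identical goods and deterministic bids. The proportional update rule, the helper names, and all numbers are our own illustrative choices, not the algorithm or construction analyzed in the paper:

```python
def expected_spend(x, bids, i, n_goods):
    """Expected first-price spend of buyer i over n_goods identical goods:
    she wins (and pays her bid) when she participates and no buyer with a
    strictly higher bid participates. Ties are ignored for simplicity."""
    p_win = x[i]
    for j, b in enumerate(bids):
        if b > bids[i]:
            p_win *= 1.0 - x[j]
    return p_win * bids[i] * n_goods

def throttle_dynamics(bids, budgets, n_goods, sweeps=50):
    """Tatonnement-style sketch: each buyer rescales her participation
    probability toward the point where her expected spend meets her
    budget (an illustrative proportional update)."""
    x = [1.0] * len(bids)
    for _ in range(sweeps):
        for i in range(len(bids)):
            spend = expected_spend(x, bids, i, n_goods)
            if spend > 0:
                x[i] = min(1.0, x[i] * budgets[i] / spend)
    return x

# Two buyers, 100 identical goods: buyer 0 bids 2 with budget 50, buyer 1
# bids 1 with budget 100 (all numbers illustrative). Buyer 0 must throttle
# down to x_0 = 0.25 to respect her budget; buyer 1 can always participate.
x = throttle_dynamics(bids=[2.0, 1.0], budgets=[50.0, 100.0], n_goods=100)
```

In this toy model each buyer's expected spend is monotone in her own participation probability, so the budget-balancing adjustments settle quickly; the paper's second-price hardness results indicate that no comparably simple dynamics can work in general there.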
Furthermore, we place the problem of computing approximate throttling equilibria in the class PPAD by showing a reduction to the problem of finding a Brouwer fixed point of a Lipschitz mapping from a unit hypercube to itself (Theorem~\ref{thm:second-price-PPAD-membership}); the latter is known to be in PPAD via Sperner's lemma (e.g. see \citealt{chen2009settling}). We provide additional evidence of the computational challenges that afflict throttling for second-price auctions by proving the NP-hardness of finding a revenue-maximizing approximate throttling equilibrium (Theorem~\ref{thm:revenue_np_hard}). We complement these hardness results by describing a polynomial-time algorithm for computing throttling equilibria for the special case in which there are at most two bids on each good (Algorithm~\ref{alg:two_buyer}), thereby precisely delineating the boundary of tractability. \textbf{Comparing Pacing and Throttling:} As mentioned earlier, multiplicative pacing is another popular method of budget management, where buyers shade their bids to smoothly spend their budgets. In contrast to throttling, equilibria and dynamics in settings where all of the buyers use pacing have received significant attention~\citep{borgs2007dynamics, balseiro2019learning, conitzer2018multiplicative, conitzer2019pacing, chen2021complexity}. This allows us to compare two of the most popular methods of budget management~\citep{informsarticle}. We show that, for first-price auctions, the revenue of the unique throttling equilibrium and the unique pacing equilibrium, although incomparable directly, are always within a factor of 2 of each other (Theorem~\ref{thm:revenue-comparison}). Moreover, we find that pacing and throttling equilibria share a remarkably similar computational and structural landscape, as summarized in Table~\ref{table:first_price} and Table~\ref{table:second_price}. 
In view of this comparison, our work can be seen as providing the analogous set of results for throttling equilibria that \citet{borgs2007dynamics, conitzer2018multiplicative, conitzer2019pacing, chen2021complexity} proved for pacing equilibria. Our results reaffirm what the analysis of pacing suggested: budget management for first-price auctions is better behaved than for second-price auctions. \begin{table}[H] \scriptsize \centering \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Existence} & \textbf{Rationality} & \textbf{Multiplicity} & \textbf{Computational Complexity} & \textbf{Efficient Dynamics} \\ \hline Always & Not always & Always unique & Poly.-time for approx. eq. & For approx. eq. \\ \hline Always & Always & Always unique & Poly.-time for exact eq. & For approx. eq. \\ \citep{conitzer2019pacing} & \citep{conitzer2019pacing} & \citep{conitzer2019pacing} & \citep{conitzer2019pacing} & \citep{borgs2007dynamics} \\ \hline \end{tabular} \caption{Comparison of throttling (top row) and pacing equilibria (bottom row) for \textbf{first-price} auctions.} \label{table:first_price} \end{table} \begin{table}[H] \scriptsize \centering \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Existence} & \textbf{Rationality} & \textbf{Multiplicity} & \textbf{Computational Complexity} & \textbf{Revenue Max.} \\ \hline Always & Not always & Possibly infinite & PPAD-complete for approx. eq. & NP-hard \\ \hline Always & Always & Possibly multiple & PPAD-complete for exact and approx. eq. & NP-hard \\ \citep{conitzer2018multiplicative} & \citep{chen2021complexity} & \citep{conitzer2018multiplicative} & \citep{chen2021complexity} & \citep{conitzer2018multiplicative} \\ \hline \end{tabular} \caption{Comparison of throttling equilibria (top row) and pacing equilibria (bottom row) for \textbf{second-price} auctions.} \label{table:second_price} \end{table} \textbf{Price of Anarchy:} Liquid welfare \citep{dobzinski2014efficiency, azar2017liquid} is a measure of efficiency for settings with budget constraints, like the one considered in this work. It corresponds to the maximum revenue (liquidity) that can be extracted with full knowledge of the buyers' values/bids, and reduces to social welfare when the budgets are non-binding. We show that the liquid welfare under any throttling equilibrium is at most a factor of 2 away from the liquid welfare that can be obtained by a central planner with complete information of the buyers' bids/values, i.e., the Price of Anarchy is at most 2. We do so for both first-price and second-price auctions. Moreover, we provide examples to show that this bound is tight for both auction formats. \normalsize \subsection{Additional Related Work}\label{sec:related-works} Budget management in online ad auctions has received widespread attention in the literature. Here, we review the papers which most closely relate to ours. We begin by reviewing the work on throttling, then give a broad overview of the other models for bidding under budget constraints considered in the literature, and finally conclude with a discussion of the work on multiplicative pacing, which we use in our comparisons. Among work on throttling, \citet{balseiro2021budget} is the closest to ours, and acts as the inspiration for our model and terminology. In \citet{balseiro2021budget}, the authors study system equilibria under various budget management strategies in second-price auctions, one of them being throttling.
They show that throttling satisfies desirable incentive properties, and, under the special case when buyer values are symmetric, they compare equilibrium seller revenue and total welfare for throttling to a variety of other budget management techniques. Crucially, unlike our work, \citet{balseiro2021budget} lacks a computational analysis and focuses solely on second-price auctions. Throttling has also received significant attention in other lines of research. \citet{agarwal2014budget} study throttling in generalized second-price (GSP) auctions from the perspective of a single buyer, provide an algorithm which determines the participation probability based on user traffic forecasts, and analyze its performance empirically on real data from LinkedIn. Similarly, \citet{xu2015smart} provide and empirically evaluate practical algorithms for throttling on data from demand-side platforms. \citet{karande2013optimizing} use throttling (under the name Vanilla Probabilistic Throttling) as the benchmark in the GSP auction setting to evaluate the budget management algorithm they describe on data from Google, and find that it empirically outperforms throttling on the metrics they study. Importantly, they do not engage in an equilibrium analysis, and their algorithm does not provide a representative sample of the traffic to advertisers. There is also a significant body of work which proposes alternatives to pacing and throttling methods. \citet{charles2013budget} study regret-free budget-smoothing policies in which the platform selects the random subset of buyers that participate in the GSP auction for each good. They show that such policies always exist, and, under the small-bids assumption, give an efficient algorithm for the special case of second-price auctions. There is also a significant body of work that models budget management in first-price auctions as an allocation problem under stochastic and adversarial input, starting with the work of \citet{mehta2007adwords}. 
See \citet{mehta2013online} for a survey. We emphasize that in this work, our goal is to study the budget-management mechanisms that are employed in practice, which is why we focus on comparing pacing and throttling. Unlike throttling, equilibrium analysis for the setting where all buyers simultaneously use multiplicative pacing is well-understood. \citet{conitzer2018multiplicative} define and study pacing equilibria in second-price auctions. They show that a pacing equilibrium always exists and study its structural properties. \citet{chen2021complexity} recently analyzed the computational complexity of finding pacing equilibria in second-price auctions, and proved that the problem is PPAD-complete. For first-price auctions, \citet{borgs2007dynamics} describe a simple t\^{a}tonnement-style dynamics and prove its efficient convergence to a pacing equilibrium. \citet{conitzer2019pacing} characterize the structural properties of pacing equilibria in first-price auctions and give a market-equilibrium based algorithm for computing them. A summary of these results can be found in the second row of Table~\ref{table:first_price} and Table~\ref{table:second_price}. Finally, Price of Anarchy for liquid welfare has been studied in a variety of settings with budget-constrained buyers. \citet{azar2017liquid} show that the Price of Anarchy of liquid welfare is 2 for simultaneous first-price and second-price auctions in the multi-item setting. For mixed-strategy Nash equilibria they show an upper bound of 51.5, with both results requiring the ``no over-budgeting'' assumption, which requires the sum of bids to be less than the budget. Their results cannot be compared to ours because we only require the budget constraints to be satisfied in expectation, and more generally the settings have different action sets (throttling parameters versus bids).
\citet{balseiro2021contextual} study a contextual-value auction model with strategic agents and establish the existence of pacing-based equilibria for all standard auction formats, including first-price and second-price auctions as special cases. They go on to show that pacing equilibria have a Price of Anarchy of at most 2 for all standard auctions in their model. \citet{gaitonde2022budget} study the repeated-auction setting and show that the Price of Anarchy is bounded above by 2 even if the system is not in equilibrium, provided that all of the buyers use a generalization of the dual-descent-based pacing algorithm introduced in \citet{balseiro2019learning}. \section{Model} Consider a seller who has $m$ types of goods to sell, and $n$ budget-constrained buyers who are interested in buying these goods. The seller runs an auction amongst the buyers in order to make the sale. We assume that the type of good to be sold is drawn from some known distribution $d = (d_1, \dots, d_m)$, i.e., the good to be sold is of type $j$ with probability $d_j$. Buyer $i$ bids $\tilde{b}_{ij}$ on good type $j$, for all $i \in [n], j \in [m]$, and has a per-auction budget of $B_i > 0$. To control her budget expenditure, each buyer $i$ is associated with a \emph{throttling parameter} $\theta_i \in [0,1]$, which represents the probability with which she participates in the auction: each buyer $i$ independently flips a biased coin which comes up heads with probability $\theta_i$, and submits her bid $\tilde{b}_{ij}$ if the coin comes up heads, while submitting no bid if the coin comes up tails. We focus on the setting where each buyer wishes to satisfy her budget constraint in expectation, i.e., buyer $i$ would like to spend less than $B_i$ in expectation over the good types and participation coin flips of all buyers.
Requiring the budget constraints to be satisfied in expectation draws its motivation from the large number of auctions that are run by online-advertising platforms, in conjunction with concentration arguments, and has been employed by previous works on budget management in online auctions (see, e.g.,~\cite{gummadi2012repeated,abhishek2013optimal,balseiro2015repeated,balseiro2017budget,balseiro2019learning,conitzer2018multiplicative}). Additionally, in this paper, we restrict our attention to the two most commonly-used auction formats in online advertising: first-price auctions and second-price auctions. In a first-price auction, the participating buyer with the highest bid wins the good and pays her bid, whereas in a second-price auction, the participating buyer with the highest bid wins the good and pays the second-highest bid among the participating buyers. Our model can be interpreted as a discrete version of the one defined in \cite{balseiro2021budget}. Before proceeding further, we introduce some additional notation that allows us to capture the stochastic nature of the good types via a rescaling of the bids, thereby allowing us to analyze the setting as a deterministic multi-good auction problem: Set $b_{ij} \coloneqq d_j \tilde{b}_{ij}$ for all $i \in [n], j \in [m]$. Since the participation of buyers is independent of the good type and we are only concerned with expected payments, the good type distribution $d = (d_j)_j$ and the bids $\{\tilde{b}_{ij}\}_{i,j}$ are consequential only insofar as they determine $\{b_{ij}\}_{i,j}$. Therefore, with some abuse of terminology, going forward, we refer to $b_{ij}$ as the bid of buyer $i$ on good $j$ (instead of $\tilde{b}_{ij}$, which will no longer be used).\footnote{This deterministic view is equivalent to the model of \citet{conitzer2018multiplicative}, except that we focus on probabilistic throttling for managing budgets, whereas they focus on multiplicative pacing. 
Their model can similarly be viewed as a stochastic setting.} Furthermore, to simplify our analysis, we will assume that ties are broken lexicographically, i.e., the smaller buyer number wins in case of a tie. Our results continue to hold for all other tie-breaking priority orders over the buyers (even when they are different for each good). The lexicographic tie-breaking rule allows for simplified notation, albeit with some abuse: We will write $b_{ij} > b_{kj}$ to mean that either $b_{ij}$ is strictly greater than $b_{kj}$, or $b_{ij} = b_{kj}$ and $i<k$. Finally, we refer to any tuple $\left(n, m, (b_{ij}), (B_i)\right)$ as a \emph{throttling game}. In online-advertising auctions, the buyers (or, more typically in practice, the platform on behalf of the buyers) attempt to satisfy their budget constraints by adjusting their throttling parameters. This naturally leads to a game where each buyer's strategy is her throttling parameter. We use $p(\theta)_{ij}$ to denote the expected payment of buyer $i$ on good $j$ when buyers use $\theta = (\theta_1, \dots, \theta_n)$ to decide their participation probabilities. Let $X_i$ be a random variable such that $X_i = 1$ if buyer $i$ participates and $X_i = 0$ if buyer $i$ does not participate. Then, by our modeling assumptions, $X_i$ is a Bernoulli$(\theta_i)$ random variable. 
More concretely, $p(\theta)_{ij}$ can be defined as follows: \begin{itemize} \item First-price auction: $p(\theta)_{ij} = \mathbb{E} \left[X_i b_{ij} \prod_{k: b_{kj} > b_{ij}} (1 - X_k) \right] = \theta_i b_{ij} \prod_{k: b_{kj} > b_{ij}} (1 - \theta_k)$ \item Second-price auction: \begin{align*} p(\theta)_{ij} &= \mathbb{E} \left[ \sum_{\ell: b_{\ell j} < b_{ij}} b_{\ell j} X_i X_\ell \prod_{k \neq i: b_{kj} > b_{\ell j}} (1 - X_k) \right] = \sum_{\ell: b_{\ell j} < b_{ij}} b_{\ell j} \theta_i \theta_\ell \prod_{k \neq i: b_{kj} > b_{\ell j}} (1 - \theta_k) \end{align*} \end{itemize} We overload $p(\theta)_{ij}$ to represent the expected payment in both auction formats; the auction format will be clear from the context. We assume here that the participation probability of a buyer across goods is perfectly correlated for simplicity ($X_i$ is the same for all $j$). Any other correlation structure, e.g. independent across goods, would also lead to the same results due to linearity of expectation. Next, we define the equilibrium concept which will be the main object of study in this work. \begin{definition}[Throttling Equilibrium]\label{def:exact_throt_eq} Given a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$, a vector of throttling parameters $\theta = (\theta_1, \dots, \theta_n) \in [0,1]^n$ is called a throttling equilibrium if: \begin{enumerate} \item Budget constraints are satisfied: $\sum_j p(\theta)_{ij} \leq B_i$ for all $i \in [n]$ \item No unnecessary throttling occurs: If $\sum_j p(\theta)_{ij} < B_i$, then $\theta_i = 1$ \end{enumerate} \end{definition} The above definition applies to both first-price and second-price auctions using the corresponding payment rule $p(\theta)_{ij}$. Definition~\ref{def:exact_throt_eq} draws its motivation from the fact that, in a natural utility model, throttling equilibria are essentially equivalent to pure Nash equilibria, which we describe next.
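Before turning to that equivalence, we note that the payment formulas above translate directly into code. The following is a minimal sketch (the helper names \texttt{beats}, \texttt{p\_first}, and \texttt{p\_second} are ours, not from the paper), using the lexicographic tie-breaking convention adopted above:

```python
def beats(b, k, i, j):
    """True if buyer k outbids buyer i on good j (ties go to the smaller index)."""
    return b[k][j] > b[i][j] or (b[k][j] == b[i][j] and k < i)

def p_first(theta, b, i, j):
    """First-price: theta_i * b_ij * prod over outbidders k of (1 - theta_k)."""
    win = 1.0
    for k in range(len(theta)):
        if k != i and beats(b, k, i, j):
            win *= 1.0 - theta[k]
    return theta[i] * b[i][j] * win

def p_second(theta, b, i, j):
    """Second-price: sum over lower bidders l of b_lj * theta_i * theta_l
    times prod over k != i with b_kj > b_lj of (1 - theta_k)."""
    n, total = len(theta), 0.0
    for l in range(n):
        if l == i or not beats(b, i, l, j):
            continue  # only buyers that i outbids can set i's price
        prob = theta[i] * theta[l]
        for k in range(n):
            if k not in (i, l) and beats(b, k, l, j):
                prob *= 1.0 - theta[k]
        total += b[l][j] * prob
    return total
```

For instance, with two buyers bidding $2$ and $1$ on a single good and $\theta = (0.5, 0.5)$, the first-price payments are $1.0$ and $0.25$, while the second-price payment of the high bidder is $0.25$ and that of the low bidder is $0$.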
Consider a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$. Fix an auction format: either first-price or second-price. Suppose buyer $i$ has value $v_{ij}$ for good type $j$ for all $i \in [n], j \in [m]$. We make the natural assumption that the buyers bid at most their value, i.e., $d_j v_{ij} \geq b_{ij}$ for all $i \in [n], j \in [m]$ in second-price auctions, and strictly less than their value, i.e., $d_j v_{ij} > b_{ij}$ for all $i \in [n], j \in [m]$ in first-price auctions. Define a new game $G$ in which each buyer $i$'s strategy is her throttling parameter $\theta_i$ and her utility function is given by \begin{align*} u_i(\theta) = \begin{cases} \sum_j \left(v_{ij} d_j \theta_i \prod_{k: b_{kj} > b_{ij}} (1 - \theta_k) - p(\theta)_{ij} \right) & \text{if } \sum_j p(\theta)_{ij} \leq B_i\\ -\infty &\text{otherwise} \end{cases} \end{align*} This utility is simply the expected quasi-linear utility modified to ascribe a utility value of negative infinity for budget violations. Since all of the buyers get a non-negative utility from winning any good, increasing the throttling parameter improves utility so long as the budget constraint is satisfied. This makes it easy to see that every throttling equilibrium of the throttling game $\left(n, m, (b_{ij}), (B_i)\right)$ is a Nash equilibrium of the corresponding game $G$. Furthermore, it is also straightforward to check that a pure Nash equilibrium of $G$ is not a throttling equilibrium only in the following scenario: There is a buyer who spends 0 in the Nash equilibrium and has a throttling parameter strictly less than 1. In this scenario, setting the throttling parameter of all such buyers to 1 yields a throttling equilibrium with exactly the same expected allocation and payment for all the buyers as under the Nash equilibrium.
Hence, given a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$, for every pure Nash equilibrium of the corresponding game $G$, there is a throttling equilibrium with the same expected allocation and revenue. We conclude this section by defining an approximate version of throttling equilibrium, which allows us to side-step issues of irrationality that can plague exact equilibria (see Example~\ref{example:first-price-irrational} and Example~\ref{example:irrational_eq}). \begin{definition}[Approximate Throttling Equilibrium]\label{def:throt_eq} Given a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$, a vector of throttling parameters $\theta = (\theta_1, \dots, \theta_n) \in [0,1]^n$ is called a $\delta$-approximate throttling equilibrium if: \begin{enumerate} \item Budget constraints are satisfied: $\sum_j p(\theta)_{ij} \leq B_i$ for all $i \in [n]$ \item Not too much unnecessary throttling occurs: If $\sum_j p(\theta)_{ij} < (1- \delta) B_i$, then $\theta_i \geq 1- \delta$ \end{enumerate} \end{definition} \section{Throttling in First-price Auctions} We begin by studying throttling equilibria in the first-price auction setting, showing first that a unique throttling equilibrium always exists. We then describe a simple and efficient t\^{a}tonnement-style algorithm for computing approximate throttling equilibria. \subsection{Existence of First-Price Throttling Equilibria} To show existence, we will characterize the throttling equilibrium as the component-wise maximum of the set of all budget-feasible throttling parameters. This argument is inspired by the technique used in \citet{conitzer2019pacing} for pacing equilibria in first-price auctions. We use the following definition to make the argument precise.
\begin{definition}[Budget-feasible Throttling Parameters] Given a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$, a vector of throttling parameters $\theta \in [0,1]^n$ is called budget-feasible if every buyer satisfies her budget constraints, i.e., $\sum_j p(\theta)_{ij} \leq B_i$ for all buyers $i \in [n]$. \end{definition} The following lemma captures the crucial fact that the component-wise maximum of two budget-feasible throttling parameters is also budget-feasible. \begin{lemma}\label{lemma:pairwise_max} Given a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$, if $\theta, \tilde{\theta} \in [0,1]^n$ are budget-feasible, then $\max(\theta, \tilde{\theta}) \coloneqq (\max(\theta_i, \tilde{\theta}_i))_i$ is also budget-feasible. \end{lemma} \begin{proof} Fix $i \in [n]$ and $j \in [m]$. Without loss of generality, we assume that $\theta_i \geq \tilde{\theta}_i$. Observe that \begin{align*} p(\max(\theta, \tilde{\theta}))_{ij} = \prod_{k: b_{kj} > b_{ij}} (1 - \max(\theta_k, \tilde{\theta}_k)) \theta_i b_{ij} \leq \prod_{k: b_{kj} > b_{ij}} (1 - \theta_k) \theta_i b_{ij} = p(\theta)_{ij} \end{align*} Summing over all goods $j \in [m]$ completes the proof. \end{proof} The maximality property shown in \cref{lemma:pairwise_max} is analogous to the maximality property of multiplicative pacing: there it is also the case that component-wise maxima over pacing vectors preserve budget feasibility for first-price auctions, and this was used by \citet{conitzer2019pacing} to show several structural properties of pacing equilibria. Next we show that the maximality property allows us to establish the existence and uniqueness of throttling equilibria for first-price auctions.
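As a quick numerical sanity check of Lemma~\ref{lemma:pairwise_max}, the following sketch (with made-up bids and budgets; this is illustrative code of ours, not the paper's) verifies that the component-wise maximum of two budget-feasible vectors remains budget-feasible in a first-price auction:

```python
def spend_first(theta, b):
    """Total expected first-price spend of each buyer (lexicographic ties)."""
    n, m = len(b), len(b[0])
    spends = []
    for i in range(n):
        s = 0.0
        for j in range(m):
            win = 1.0
            for k in range(n):
                if b[k][j] > b[i][j] or (b[k][j] == b[i][j] and k < i):
                    win *= 1.0 - theta[k]
            s += theta[i] * b[i][j] * win
        spends.append(s)
    return spends

b = [[2.0, 1.0], [1.0, 3.0]]   # bids of 2 buyers on 2 goods (made-up numbers)
B = [1.5, 1.5]                 # budgets

def feasible(theta):
    return all(s <= Bi for s, Bi in zip(spend_first(theta, b), B))

th1, th2 = [0.5, 0.3], [0.2, 0.35]
mx = [max(x, y) for x, y in zip(th1, th2)]
# Both vectors are budget-feasible, and so is their componentwise max.
assert feasible(th1) and feasible(th2) and feasible(mx)
```

Here the spends under \texttt{th1} are $(1.35, 1.05)$ and under the maximum $(0.5, 0.35)$ they are $(1.325, 1.225)$, both within the budgets, as the lemma guarantees.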
\begin{theorem}\label{thm:existence-first-price} For every throttling game $\left(n, m, (b_{ij}), (B_i)\right)$, there exists a unique throttling equilibrium $\theta^* \in [0,1]^n$ which is given by $ \theta^*_i = \sup\{\theta_i \in [0,1] \mid \exists\ \theta_{-i} \text{ such that } \theta = (\theta_i, \theta_{-i}) \text{ is budget-feasible}\}.$ \end{theorem} \begin{proof} Set $\theta^*_i = \sup\{\theta_i \in [0,1] \mid \exists\ \theta_{-i} \text{ such that } \theta = (\theta_i, \theta_{-i}) \text{ is budget-feasible}\}$ for all $i \in [n]$. First, we show that $\theta^*$ is budget-feasible. Observe that the function $\theta \mapsto \left(\sum_j p(\theta)_{1j}, \dots, \sum_j p(\theta)_{nj} \right)$ is continuous. Therefore, the pre-image of the set $\bigtimes_{i=1}^n[0,B_i]$ under this function is closed. In other words, the set of all budget-feasible throttling parameters is closed. Fix $\epsilon > 0$. For all $i \in [n]$, by the definition of $\theta^*_i$, there exists $\theta^{(i)} \in [0,1]^n$ which is budget-feasible and $\theta^{(i)}_i > \theta_i^* - \epsilon$. Iterative application of Lemma~\ref{lemma:pairwise_max} yields the budget-feasibility of the vector $\theta$ defined by $\theta_i = \max_{k \in [n]} \theta^{(k)}_i$. Moreover, $\theta_i > \theta^*_i - \epsilon$ for all $i \in [n]$ because $\theta^{(i)}_i > \theta_i^* - \epsilon$ for all $i \in [n]$. Since $\epsilon> 0$ was arbitrary, we have shown that there exists a sequence of budget-feasible throttling parameters which converges to $\theta^*$, which implies that $\theta^*$ is budget-feasible because the set of budget-feasible throttling parameters is closed. Next, we show that $\theta^*$ also satisfies the `No unnecessary throttling' property. Suppose there exists $i \in [n]$ such that $\sum_j p(\theta^*)_{ij} < B_i$ and $\theta^*_i < 1$.
Then, by the continuity of $\theta \mapsto \sum_j p(\theta)_{ij}$, there exists $\theta_i$ such that $\theta^*_i < \theta_i< 1$ and $\sum_j p(\theta_i, \theta^*_{-i})_{ij} < B_i$. Moreover, raising buyer $i$'s participation probability can only decrease the expected payments of the other buyers in a first-price auction, so $(\theta_i, \theta^*_{-i})$ is budget-feasible, which contradicts the definition of $\theta^*$. Therefore, for all $i \in [n]$, we have that $\sum_j p(\theta^*)_{ij} < B_i$ implies $\theta^*_i = 1$. Thus, $\theta^*$ is a throttling equilibrium. Finally, we prove uniqueness of $\theta^*$. Suppose there is a throttling equilibrium $\theta \in [0,1]^n$ such that $\theta_i \neq \theta^*_i$ for some $i \in [n]$. Since $\theta$ is budget-feasible, the definition of $\theta^*$ yields $\theta_i \leq \theta^*_i$ for all $i \in [n]$, so the set of buyers $C \subset [n]$ for whom $\theta_i^* > \theta_i$ is non-empty. Note that $\theta_i < 1$ for all $i \in C$. Hence, every buyer in $C$ spends her entire budget under $\theta$. On the other hand, since $\theta_i^* > \theta_i$ for all $i \in C$ and $\theta_i^* = \theta_i$ for $i\notin C$, the total payment made by buyers in $C$ under $\theta^*$ is strictly greater than the payment under $\theta$, which contradicts the budget-feasibility of $\theta^*$. Therefore, $\theta^*$ is the unique throttling equilibrium. \end{proof} We conclude this subsection by noting that in Appendix~\ref{appendix:irrationality} we describe a throttling game for which the unique throttling equilibrium has irrational throttling parameters for some buyers. In other words, a rational throttling equilibrium need not exist. Since irrational numbers cannot be represented exactly with finitely many bits, this leads us to consider algorithms for finding \emph{approximate} throttling equilibria instead. \subsection{An Algorithm for Computing Approximate First-Price Throttling Equilibria} In this subsection, we define a simple t\^atonnement-style algorithm and prove that it yields an approximate throttling equilibrium in polynomial time.
\begin{algorithm}[H] \caption{Dynamics for First-price Auction} \label{alg:dynamics} \begin{algorithmic} \item[\textbf{Input:}] Throttling game $\left(n, m, (b_{ij}), (B_i)\right)$ and approximation parameter $\delta \in (0,1/2)$ \item[\textbf{Initialize:}] $\theta_i = \min\{B_i/ (2\sum_j b_{ij}), 1\}$ for all $i \in [n]$ \item[\textbf{While}] there exists a buyer $i \in [n]$ such that $\theta_i < 1 - \delta$ and $\sum_j p(\theta)_{ij} < (1 - \delta) B_i$: \begin{itemize} \item For all $i \in [n]$ such that $\theta_i < 1 - \delta$ and $\sum_j p(\theta)_{ij} < (1 - \delta) B_i$, set $\theta_i \leftarrow \theta_i/(1 - \delta)$ \end{itemize} \item[\textbf{Return:}] $\theta$ \end{algorithmic} \end{algorithm} Before proceeding to prove the correctness and efficiency of Algorithm~\ref{alg:dynamics}, we note some of its properties. Typically, in online advertising auctions, buyers participate in a large number of auctions throughout their campaign, and the platform periodically updates their throttling parameters to ensure that they don't exhaust their budgets prematurely and lose out on valuable advertising opportunities. The above algorithm is especially suited for this setting due to its decentralized and easy-to-implement updates to the throttling parameter. Moreover, it lends credence to the notion of throttling equilibrium as a solution concept because the update step in Algorithm~\ref{alg:dynamics} can be implemented independently by the buyers, resulting in decentralized dynamics which converge to a throttling equilibrium in polynomially-many steps. We refer the reader to \cite{borgs2007dynamics} for a detailed model under which Algorithm~\ref{alg:dynamics} can be naturally interpreted as decentralized dynamics for online advertising auctions. In the next lemma, we show that, throughout the run of Algorithm~\ref{alg:dynamics}, all the buyers always satisfy their budget constraints.
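Algorithm~\ref{alg:dynamics} can be sketched in a few lines of Python (variable names are ours; we assume every buyer has at least one positive bid so that the initialization is well-defined):

```python
def first_price_dynamics(b, B, delta):
    """Tatonnement-style dynamics for first-price throttling (Algorithm 1 sketch).
    b[i][j]: bid of buyer i on good j; B[i]: budget; delta in (0, 1/2)."""
    n, m = len(b), len(b[0])

    def spend(theta, i):
        # expected first-price spend of buyer i, lexicographic tie-breaking
        s = 0.0
        for j in range(m):
            win = 1.0
            for k in range(n):
                if b[k][j] > b[i][j] or (b[k][j] == b[i][j] and k < i):
                    win *= 1.0 - theta[k]
            s += theta[i] * b[i][j] * win
        return s

    # Initialize: theta_i = min(B_i / (2 * sum_j b_ij), 1)
    theta = [min(B[i] / (2.0 * sum(b[i])), 1.0) for i in range(n)]
    while True:
        lagging = [i for i in range(n)
                   if theta[i] < 1 - delta and spend(theta, i) < (1 - delta) * B[i]]
        if not lagging:
            return theta
        for i in lagging:
            theta[i] /= 1 - delta  # theta_i < 1 - delta, so this stays below 1
```

For instance, on a one-buyer instance with a single bid of $2$, budget $1$, and $\delta = 0.1$, the dynamics multiply $\theta_1$ up from the initial value $0.25$ until the expected spend reaches at least $(1-\delta)B_1 = 0.9$, terminating at $\theta_1 \approx 0.47$.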
\begin{lemma}\label{lemma:no_budget_violation} Consider the run of Algorithm~\ref{alg:dynamics} on the throttling game $\left(n, m, (b_{ij}), (B_i)\right)$ and approximation parameter $\delta \in (0,1/2)$. Then, after every iteration of the while loop, we have $\sum_{j} p(\theta)_{ij} \leq B_i$ for all $i \in [n]$. \end{lemma} \begin{proof} We prove the lemma using induction on the number of iterations of the while loop. Note that $\sum_{j} p(\theta)_{ij} \leq B_i$ for all $i \in [n]$ before the first iteration of the while loop by virtue of our initialization. Let $\theta$ and $\theta'$ represent the throttling parameters after the $t$-th iteration and the $(t+1)$-th iteration of the while loop. Suppose $\sum_{j} p(\theta)_{ij} \leq B_i$ for all $i \in [n]$. To complete the proof by induction, we need to show that $\sum_{j} p(\theta')_{ij} \leq B_i$ for all $i \in [n]$. Consider a buyer $i$. Suppose $\sum_{j} p(\theta)_{ij} \geq (1 - \delta) B_i$. By the update step of the algorithm, $\theta'_i = \theta_i$ and $\theta'_j \geq \theta_j$ for $j \neq i$. Therefore, \begin{align*} \sum_{j} p(\theta')_{ij} = \sum_j \prod_{k: b_{kj} > b_{ij}} (1 - \theta'_k) \theta'_i b_{ij} \leq \sum_j \prod_{k: b_{kj} > b_{ij}} (1 - \theta_k) \theta_i b_{ij} = \sum_{j} p(\theta)_{ij} \leq B_i \end{align*} On the other hand, suppose $\sum_{j} p(\theta)_{ij} < (1 - \delta) B_i$. Then, by the update step of the algorithm, $\theta'_i \leq \theta_i/(1- \delta)$ and $\theta'_j \geq \theta_j$ for $j \neq i$. Therefore, \begin{align*} \sum_{j} p(\theta')_{ij} = \sum_j \prod_{k: b_{kj} > b_{ij}} (1 - \theta'_k) \theta'_i b_{ij} \leq \frac{1}{1-\delta} \cdot\sum_j \prod_{k: b_{kj} > b_{ij}} (1 - \theta_k) \theta_i b_{ij} = \frac{1}{1-\delta} \cdot\sum_{j} p(\theta)_{ij} < B_i \end{align*} This completes the proof of $\sum_{j} p(\theta')_{ij} \leq B_i$ for all buyers $i \in [n]$. \end{proof} We conclude this subsection by proving the correctness and efficiency of Algorithm~\ref{alg:dynamics}. 
\begin{theorem}\label{thm:first-price-dynamics} Given a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$ and an approximation parameter $\delta \in (0,1/2)$ as input, Algorithm~\ref{alg:dynamics} returns a $\delta$-approximate throttling equilibrium in polynomial time. \end{theorem} \begin{proof} Since each iteration of the while loop only performs basic arithmetic operations, to establish a polynomial run-time complexity, it suffices to show that the while loop terminates in polynomially-many steps. Note that $c = \min_i \min\{B_i/ (2\sum_j b_{ij}), 1\}$ is a lower bound on the initial value of the throttling parameter of every buyer. Due to the termination condition of the while loop and the update step, at each iteration of the while loop, there exists a buyer $i \in [n]$ whose throttling parameter is updated, i.e., there exists $i \in [n]$ such that $\theta_i < 1- \delta$ and $\theta_i \leftarrow \theta_i/(1 - \delta)$. Hence, some buyer must be updated in at least $T/n$ of the first $T$ iterations; since each update multiplies her throttling parameter by $1/(1-\delta)$, her parameter starts at least at $c$, and throttling parameters never exceed $1$, the number of iterations $T$ of the while loop satisfies the following relationships: \begin{align*} \frac{c}{(1 -\delta)^{T/n}} \leq 1 \iff T \leq \frac{n \log(1/c)}{\log(1/ (1-\delta))} \leq \frac{n\log(1/c)}{\delta} \end{align*} The second sequence of inequalities upper bounds the number of iterations of the while loop by a polynomial function of the problem size. To complete the proof, it suffices to show that at the termination of the while loop, $\theta$ is a $\delta$-approximate throttling equilibrium. Budget-feasibility follows from Lemma~\ref{lemma:no_budget_violation}, and the `Not too much unnecessary throttling' condition is satisfied by virtue of the termination condition. \end{proof} \section{Throttling in Second-price Auctions} In this section, we study throttling equilibria in second-price auctions. We begin by establishing the existence of exact throttling equilibria. Next, we show that, in contrast to first-price auctions, it is PPAD-hard to compute an approximate throttling equilibrium.
To complete the characterization of its complexity, we then place the problem in PPAD. Moreover, we also show that, unlike first-price auctions, multiple throttling equilibria can exist for second-price auctions, and finding the revenue-maximizing one is NP-hard. Finally, we complement these negative results with an efficient algorithm for the case when each good has at most two positive bids. \subsection{Existence of Second-Price Throttling Equilibria} The following theorem establishes the existence of an exact throttling equilibrium for every bid profile by invoking Brouwer's fixed-point theorem on an appropriately defined function. \begin{theorem}\label{theorem:existence_second_price} For every throttling game $\left(n, m, (b_{ij}), (B_i)\right)$, there exists a throttling equilibrium $\theta^* \in [0,1]^n$. \end{theorem} \begin{proof} First, observe that we can write the expected payment of buyer $i$ on good $j$ under $\theta$ as \begin{align}\label{eq:exp_pay} p(\theta)_{ij} = \sum_{\ell: b_{\ell j} < b_{ij}} b_{\ell j} \theta_i \theta_\ell \prod_{k \neq i: b_{kj} > b_{\ell j}} (1 - \theta_k) = \theta_i \cdot \sum_{\ell: b_{\ell j} < b_{ij}} b_{\ell j} \theta_\ell \prod_{k \neq i: b_{kj} > b_{\ell j}} (1 - \theta_k) = \theta_i \cdot p(1, \theta_{-i})_{ij} \end{align} Define $f: [0,1]^n \to [0,1]^n$ as \begin{align*} f_i(\theta) = \min\left\{ \frac{B_i}{\sum_j p(1, \theta_{-i})_{ij}}, 1 \right\} \quad \forall \theta \in [0,1]^n \end{align*} where we assume that $f_i(\theta) = 1$ if $\sum_j p(1, \theta_{-i})_{ij} = 0$. Note that $p(1, \theta_{-i})_{ij}$ is a continuous function of $\theta$ because it is a polynomial. Therefore, $f$ is continuous as a function of $\theta$ and hence, by Brouwer's fixed-point theorem, there exists a $\theta^*$ such that $f(\theta^*) = \theta^*$.
As a consequence, for all buyers $i \in [n]$, we get the following equivalent statements \begin{align*} f_i(\theta^*) = \theta^*_i \iff \theta^*_i \cdot \sum_j p(1, \theta^*_{-i})_{ij} < B_i \text{ implies } \theta^*_i = 1 \iff \sum_j p(\theta^*)_{ij} < B_i \text{ implies } \theta^*_i = 1 \end{align*} where the last equivalence follows from equation~\ref{eq:exp_pay}. Moreover, by definition of $f$, we get \begin{align*} \theta^*_i = f_i(\theta^*) \leq \frac{B_i}{\sum_j p(1, \theta^*_{-i})_{ij}} \end{align*} which in conjunction with equation~\ref{eq:exp_pay} implies $\sum_j p(\theta^*)_{ij} \leq B_i$. Therefore, $\theta^*$ is a throttling equilibrium. \end{proof} Even though the above theorem establishes the existence of a throttling equilibrium for every throttling game, in Appendix~\ref{appendix:irrationality} we give an example of a throttling game for which all equilibria have a buyer with an irrational throttling parameter. This prompts us to study the problem of computing \emph{approximate} throttling equilibria, which we do in the following subsections. \subsection{Complexity of Finding Approximate Second-Price Throttling Equilibria} In the previous subsection, by way of our existence proof, we reduced the problem of finding an approximate throttling equilibrium to that of finding a Brouwer fixed point of the function $f$; but this is of little use if we want to actually compute an approximate throttling equilibrium: no known algorithm can compute a Brouwer fixed point in polynomial time and it is believed to be a hard problem.
This is because the problem of computing an approximate Brouwer fixed point is a complete problem for the class PPAD \citep{papadimitriou1994complexity}; informally, this means that it is as hard as any other problem in the class, such as computing Nash equilibria of bimatrix games~\citep{daskalakis2009complexity, chen2009settling} or computing a market equilibrium under piece-wise linear concave utilities~\citep{chen2009spending, vazirani2011market}. These problems have eluded a polynomial-time algorithm for decades despite intensive effort. However, through our reduction we have only shown that the problem of computing an approximate throttling equilibrium is no harder than the problem of computing a Brouwer fixed point: any algorithm for the latter can be employed to solve the former. Perhaps computing an approximate throttling equilibrium is strictly easier? Unfortunately, this is not the case and the goal of this subsection is to prove it. More precisely, we show that the problem of finding an approximate throttling equilibrium is PPAD-hard, which in informal terms means that it is as hard as any other problem in the class PPAD, in particular that of computing a Brouwer fixed point. Before stating the hardness result itself, we note a consequence of particular importance: Under the assumption that PPAD-hard problems cannot be solved in polynomial time, no dynamics can converge to an approximate throttling equilibrium in polynomial time, which is in stark contrast to throttling in first-price auctions. Now, we state the main result of the section. \begin{theorem}\label{thm:second-price-PPAD-hard} There is a positive constant $\delta<1$ such that the problem of finding a $\delta$-approximate throttling equilibrium in a throttling game is PPAD-hard. This holds even when the number of buyers with non-zero bids for each good is at most three.
\end{theorem} The proof of Theorem~\ref{thm:second-price-PPAD-hard} uses \emph{threshold games}, introduced recently by \citet{PB2021}. They showed that the problem of finding an approximate equilibrium in a threshold game is PPAD-complete. \begin{definition}[Threshold game of \citealt{PB2021}] \label{def:threshold} A threshold game is defined over a directed graph $\mathcal{G} = ([n], E)$. Each node $i\in [n]$ represents a player with strategy space $x_i\in [0,1]$. Let $N_i$ be the set of nodes~$j\in [n]$ with $(j,i)\in E$. Then $ (x_i:i\in [n]) \in [0, 1]^{n}$ is an $\epsilon$-\emph{approximate equilibrium} if every $x_i$ satisfies \begin{align*} x_i \in \begin{cases} [0,\epsilon] & \sum_{j \in N_i} x_j > 0.5 + \epsilon\\ [1-\epsilon,1] & \sum_{j \in N_i} x_j < 0.5 - \epsilon\\ [0,1] & \sum_{j \in N_i}x_j \in[ 0.5 - \epsilon, 0.5+\epsilon] \end{cases} \end{align*} \end{definition} \begin{theorem}[Theorem 4.7 of \citealt{PB2021}] \label{thm:ppad-threshold} There is a positive constant $\epsilon<1$ such that the problem of finding an $\epsilon$-approximate equilibrium in a threshold game is PPAD-hard. This holds even when the in-degree and out-degree of each node is at most three in the threshold game. \end{theorem} Given a threshold game $\mathcal{G}=([n],E)$, we let $O_i$ denote the set of nodes $j\in [n]$ with $(i,j)\in E$. So $|N_i|,|O_i|\le 3$ for every $i\in [n]$. To prove Theorem~\ref{thm:second-price-PPAD-hard}, we need to construct a throttling game $\mathcal{I}_\mathcal{G}$ from $\mathcal{G}$ such that any approximate throttling equilibrium of $\mathcal{I}_\mathcal{G}$ corresponds to an approximate equilibrium of the threshold game. Before rigorously diving into the construction, we give an informal description to build intuition. {With each node of $\mathcal{G}$, we will associate a collection of buyers and goods, with the goal of capturing the corresponding strategy and equilibrium conditions of the threshold game. Fix a node $i \in [n]$.
We will define a strategy buyer $S(i)$ and set the strategy $x_i$ for node $i$ to be proportional to $1 - \theta_{S(i)}$. Next, in order to implement the equilibrium condition of the threshold game, our goal will be to define buyers and goods such that the linear form $\sum_{j \in N_i} x_j$ ends up being proportional to the total payment of a buyer who we will refer to as the threshold buyer $T(i)$. For each in-neighbor $j \in N_i$, we will define a neighbor good $G(i,j)$, for which the strategy buyer of the neighbor $S(j)$ places the highest bid of $6$ and the threshold buyer $T(i)$ places a bid of 5. Furthermore, for a reason that will become clear shortly, the strategy buyer $S(i)$ places a bid of 4 on $G(i,j)$. With these bids, the payment made by the threshold buyer $T(i)$ on all the neighbor goods $\{G(i,j)\}_j$ is proportional to $\theta_{T(i)} \theta_{S(i)} \sum_{j \in N_i} (1 - \theta_{S(j)})$, i.e., to $\theta_{T(i)} \theta_{S(i)} \sum_{j \in N_i} x_j$. Now, if we are somehow able to ensure that the throttling parameter of the strategy buyer $\theta_{S(i)}$ is inversely proportional to the throttling parameter of the threshold buyer $\theta_{T(i)}$, then the payment made by the threshold buyer on all the neighbor goods will be proportional to $\sum_{j \in N_i} x_j$, as desired. To achieve this, we define a reciprocal good $R(i)$, on which $S(i)$ narrowly outbids $T(i)$ and spends essentially all of her budget, thereby pinning down the product $\theta_{S(i)}\theta_{T(i)}$. Finally, we set the budget of the threshold buyer $T(i)$ in such a way that comparing it to her payment, which is proportional to $\sum_{j \in N_i} x_j$, is tantamount to making a comparison between $\sum_{j \in N_i} x_j$ and $0.5$. The challenging part of the reduction lies in setting up the bids and budgets in a way that ensures that this comparison leads to an enforcement of the equilibrium condition of the threshold game.} \begin{figure} \caption{A diagrammatic representation of the non-zero bids made and received by the buyers and goods corresponding to a particular node of the threshold game.
Consider a particular node of the threshold game, represented here by the black rectangle.} \label{fig:PPAD} \end{figure} With this high level overview of the reduction in place, we move on to the rigorous construction of the throttling game $\mathcal{I}_\mathcal{G}$ from $\mathcal{G}$. First, $\mathcal{I}_\mathcal{G}$ contains the following set of goods: \begin{itemize} \item For each $i \in [n]$ and $j \in N_i$, there is a \emph{neighbor good} $G(i,j)$. \item For each $i \in [n]$, there is a \emph{reciprocal good} $R(i)$. \end{itemize} Next, setting two constants $\delta$ and $M$ as $\delta=\min \{\epsilon/(3 + \epsilon), \epsilon/2, 1/4 \}$ and $ M = {160}/{\delta}$, the throttling game $\mathcal{I}_\mathcal{G}$ has the following set of buyers: \begin{flushleft}\begin{itemize} \item For each $i\in [n]$, there is a \emph{threshold buyer} $T(i)$ who has budget $1/2$ and has non-zero bids only on the following goods: $b(T(i), G(i,j)) = 5$ for all $j \in N_i$; and $b(T(i), R(i)) = M|O_i|$. \item For each $i \in [n]$, there is a \emph{strategy buyer} $S(i)$ who has budget $M|O_i|/2$ and has non-zero bids only on the following goods: $b(S(i), R(i)) = M|O_i| + 1$; $b(S(i), G(i,j)) = 4$ for all $j \in N_i$; and $b(S(i), G(j, i)) = 6$ for all $j \in O_i$. \end{itemize}\end{flushleft} It is clear that $\mathcal{I}_\mathcal{G}$ can be constructed from $\mathcal{G}$ in polynomial time, and the number of buyers with non-zero bids for each good is at most three. Let $\theta$ be any $\delta$-approximate throttling equilibrium of $\mathcal{I}_\mathcal{G}$ and use it to define $(x_i:i\in [n])\in [0,1]^n$ as follows: \begin{equation}\label{heheeq} x_i = \min\big\{2(1 - \theta_{S(i)}),1\big\},\quad\text{for all $i \in [n]$.} \end{equation} To complete the reduction, we will show that $ (x_i:i\in [n])$ is an $\epsilon$-approximate equilibrium of the threshold game $\mathcal{G}$.
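For concreteness, the construction can also be written out programmatically. The following Python sketch (function and key names are ours, not part of the formal reduction) builds the bid and budget tables of $\mathcal{I}_\mathcal{G}$ from the edge list of $\mathcal{G}$ and makes it easy to check that no good receives more than three non-zero bids.

```python
from collections import defaultdict

def build_throttling_game(n, edges, eps):
    """Build the reduction's throttling game I_G from a threshold game
    G = ([n], edges), using delta = min{eps/(3+eps), eps/2, 1/4} and
    M = 160/delta as in the text."""
    delta = min(eps / (3 + eps), eps / 2, 0.25)
    M = 160 / delta
    N, O = defaultdict(set), defaultdict(set)  # in- and out-neighbors
    for (u, v) in edges:  # edge (u, v) means u -> v, so u is in N_v and v in O_u
        N[v].add(u)
        O[u].add(v)
    bids = defaultdict(dict)  # bids[good][buyer] = non-zero bid
    budgets = {}
    for i in range(n):
        T, S = ('T', i), ('S', i)
        budgets[T] = 0.5
        budgets[S] = M * len(O[i]) / 2
        # reciprocal good R(i): the strategy buyer narrowly outbids T(i)
        bids[('R', i)][S] = M * len(O[i]) + 1
        bids[('R', i)][T] = M * len(O[i])
        # neighbor good G(i, j) for every in-neighbor j of i
        for j in N[i]:
            bids[('G', i, j)][('S', j)] = 6  # highest bid
            bids[('G', i, j)][T] = 5
            bids[('G', i, j)][S] = 4
    return bids, budgets, delta
```

On a directed triangle, for example, every good receives at most three positive bids, matching the last claim of Theorem~\ref{thm:second-price-PPAD-hard}.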
Since we are considering a particular $\theta$, we will suppress the dependence on $\theta$ of the payment made by buyer $B$ on good $G$ and simply denote it by $p(B,G)$. The next lemma records the payments made by the buyers in $\mathcal{I}_\mathcal{G}$. \begin{lemma}\label{lemma:buyer_payments} For all $i \in [n]$, we have \begin{flushleft}\begin{enumerate} \item $p(T(i), G(i,j)) = \left(1 - \theta_{S(j)}\right)\theta_{T(i)}\theta_{S(i)} 4$, for all $j \in N_i$; and $p(T(i), R(i)) = 0$. \item $p(S(i), R(i)) = \theta_{S(i)} \theta_{T(i)} M |O_i|$; $p(S(i), G(i,j)) = 0$, for all $j \in N_i$; and $$p(S(i), G(j,i)) = \theta_{S(i)} \left[\theta_{T(j)} 5\ +\ (1 - \theta_{T(j)}) \theta_{S(j)} 4 \right],\quad\text{for all $j \in O_i$.}$$ \end{enumerate}\end{flushleft} \end{lemma} In the next lemma, we bound the total payment made by strategy buyer $S(i)$ on the neighbor goods and provide lower bounds on the throttling parameters: \begin{lemma} \label{lemma:neighbour_payment_bound} For all $i \in [n]$, we have $\theta_{S(i)} \geq (1 - 2 \delta)/2$, $\theta_{T(i)} \geq 1/32 $, and the total payment of $S(i)$ on the neighbor goods satisfies: \begin{align*} |O_i| \theta_{S(i)} \leq \sum_{j \in N_i} p(S(i), G(i,j)) + \sum_{j \in O_i} p(S(i), G(j,i)) \leq 5 |O_i| \theta_{S(i)} \end{align*} \end{lemma} \begin{proof} For $i \in [n]$, using Lemma~\ref{lemma:buyer_payments}, we get \begin{align*} \sum_{j \in N_i} p(S(i), G(i,j)) + \sum_{j \in O_i} p(S(i), G(j,i)) = \sum_{j \in O_i} \theta_{S(i)} \left[\theta_{T(j)} 5\ +\ (1 - \theta_{T(j)}) \theta_{S(j)} 4 \right] \leq 5 |O_i| \theta_{S(i)}. \end{align*} Suppose $\theta_{S(i)} < (1 - 2\delta)/2$ for some $i \in [n]$. Then, the total payment made by $S(i)$ is at most \begin{align*} \theta_{S(i)} \theta_{T(i)} M |O_i| + 5|O_i|\theta_{S(i)} < \frac{(1 - 2 \delta)M|O_i|}{2} + 5|O_i| < (1 - \delta) \cdot \frac{M|O_i|}{2} \end{align*} using $M>10/\delta$ by our choice of $M$.
This contradicts the definition of $\delta$-approximate throttling equilibrium because $S(i)$ has a budget of $M|O_i|/2$. Therefore, we have $\theta_{S(i)}\ge (1-2\delta)/2$ and in particular, $\theta_{S(i)} \ge 1/4$ for all $i \in [n]$ using $\delta\le 1/4$. Hence, \begin{align*} \sum_{j \in N_i} p(S(i), G(i,j)) + \sum_{j \in O_i} p(S(i), G(j,i)) &= \sum_{j \in O_i} \theta_{S(i)} \left[\theta_{T(j)} 5\ +\ (1 - \theta_{T(j)}) \theta_{S(j)} 4 \right]\\ &> \sum_{j \in O_i} \theta_{S(i)} \left[\theta_{T(j)} \theta_{S(j)} 4\ +\ (1 - \theta_{T(j)}) \theta_{S(j)} 4 \right]\\ &= 4\theta_{S(i)} \cdot \sum_{j \in O_i} \theta_{S(j)} \geq |O_i|\theta_{S(i)}. \end{align*} Suppose $\theta_{T(i)} < 1/32$. By Lemma~\ref{lemma:buyer_payments} the total payment of threshold buyer $T(i)$ is at most $$\sum_{j\in N_i} 4\theta_{T(i)}\le 12 \theta_{T(i)}<\frac{3}{8}\le \frac{1-\delta}{2}$$ using $|N_i|\le 3$ and $\delta\le 1/4$. Hence, we have obtained a contradiction with the definition of $\delta$-approximate throttling equilibria. Therefore, $\theta_{T(i)} \geq 1/32$. \end{proof} We are now ready to complete the reduction. \begin{lemma} $(x_i:i\in [n])$, as defined in (\ref{heheeq}), is an $\epsilon$-approximate equilibrium of the threshold game $\mathcal{G}$. \end{lemma} \begin{proof} Fix an $i \in [n]$. First consider the case when $\sum_{j \in N_i} x_j >0.5 + \epsilon$. Assume for a contradiction that $x_i > \epsilon$. Then, $\theta_{S(i)} < 1 - (\epsilon/2) \leq 1 -\delta$ using $\delta\le \epsilon/2$. By the definition of $\delta$-approximate throttling equilibrium, the total payment of strategy buyer $S(i)$ is at least $(1 -\delta)M|O_i|/2$.
Combining this observation with Lemma~\ref{lemma:buyer_payments} and Lemma~\ref{lemma:neighbour_payment_bound}, we get \begin{align*} \theta_{S(i)} \left[\theta_{T(i)} M |O_i| + 5|O_i|\right] \geq (1 -\delta) \cdot \frac{M|O_i|}{2} \end{align*} and thus (using $\theta_{T(i)}\ge 1/32$ from Lemma \ref{lemma:neighbour_payment_bound} and our choice of $M=160/\delta$), $$ \theta_{S(i)} \geq \frac{1-\delta}{2 \theta_{T(i)} + (10/M)} \geq \frac{1}{2 \theta_{T(i)}} \cdot \frac{1 -\delta}{1+\delta} \ \ \implies\ \ \theta_{T(i)}\theta_{S(i)} \geq \frac{1}{2 (1 + 2 \epsilon)} $$ using $\delta\le \epsilon/2$ and $\epsilon< 1$. Moreover, note that $\sum_{j \in N_i} x_j > 0.5 + \epsilon$ implies $$\sum_{j \in N_i} 2\left(1 - \theta_{S(j)}\right) > (1+2\epsilon)/2$$ Combining the above statements allows us to bound the total payment of buyer $T(i)$: \begin{align*} \sum_{j \in N_i} \left(1 - \theta_{S(j)}\right)\theta_{T(i)}\theta_{S(i)} 4 \geq \frac{4}{2 (1 + 2 \epsilon)} \cdot \sum_{j \in N_i} \left(1 - \theta_{S(j)}\right) > \frac{1}{2}. \end{align*} This yields a contradiction because $T(i)$ has budget $1/2$. Hence $x_i \leq \epsilon$ when $\sum_{j \in N_i} x_j > 0.5 + \epsilon$. Next consider the case of $\sum_{j \in N_i} x_j < 0.5 - \epsilon$. The budget constraint of $S(i)$ and Lemma~\ref{lemma:neighbour_payment_bound} yield \begin{align*} \theta_{S(i)} \left[\theta_{T(i)} M |O_i| + |O_i|\right] \leq \frac{M|O_i|}{2} \end{align*} which implies that \begin{align}\label{heheeq2}\theta_{S(i)} \left[\theta_{T(i)} M |O_i| + \theta_{T(i)}|O_i|\right] \leq \frac{M|O_i|}{2}\ \ \implies\ \ \theta_{T(i)}\theta_{S(i)} \leq \frac{1}{2(1 + (1/M))}< \frac{1}{2}. \end{align} By Lemma~\ref{lemma:neighbour_payment_bound}, we have $\theta_{S(j)} \geq (1 - 2 \delta)/2$ and thus, $2(1 - \theta_{S(j)}) \leq 1 + 2\delta$. Multiplying both sides by $(1 - 2\delta)$ yields $2(1 - \theta_{S(j)})(1 - 2\delta) \leq 1 - 4\delta^2 < 1$. 
In other words, we have \begin{equation}\label{heheeq3} 2(1 - \theta_{S(j)})(1 - 2\delta) \le \min\{2(1 - \theta_{S(j)}), 1\} = x_j \end{equation} for every $j\in [n]$. This together with $\sum_{j \in N_i} x_j < 0.5 - \epsilon$ implies $$(1 - 2\delta)\sum_{j \in N_i} 2\left(1 - \theta_{S(j)}\right) < (1-2\epsilon)/2.$$ Therefore, we get that the total payment of $T(i)$ satisfies the following bound \begin{align*} \sum_{j \in N_i} \left(1 - \theta_{S(j)}\right)\theta_{T(i)}\theta_{S(i)} 4 < \sum_{j \in N_i} 2\left(1 - \theta_{S(j)}\right) < \frac{1 - 2\epsilon}{2(1 - 2\delta)} \leq (1 - \delta) \cdot \frac{1}{2} \end{align*} using $\delta\le \epsilon/2$. As a consequence of the definition of $\delta$-approximate throttling equilibria, we have that $\theta_{T(i)} \geq 1 -\delta$. Finally, using (\ref{heheeq2}) and (\ref{heheeq3}), we have \begin{align*} x_i \geq 2(1 - \theta_{S(i)}) (1 - 2 \delta) \geq 2\left(1 - \frac{1}{2\theta_{T(i)}}\right) (1 - 2 \delta) \geq \frac{(1 - 2\delta)^2}{1 - \delta} > \frac{1 - 4 \delta}{1 - \delta} \geq 1 - \epsilon, \end{align*} where the last inequality follows from $\delta \leq \epsilon/(3 + \epsilon)$. \end{proof} This completes the reduction, and thereby the proof of Theorem~\ref{thm:second-price-PPAD-hard}, because we have shown that for any $\delta$-approximate throttling equilibrium of the throttling game $\mathcal{I}_\mathcal{G}$, the strategy $(x_i)_i$ is an $\epsilon$-approximate equilibrium of the threshold game $\mathcal{G}$. \paragraph{PPAD Membership of Approximate Second-Price Throttling Equilibria} Next, we show that the problem of computing a $\delta$-approximate throttling equilibrium belongs to PPAD by showing a reduction to BROUWER: the problem of computing an approximate fixed point of a Lipschitz continuous function from an $n$-dimensional unit cube to itself, which is known to be in PPAD~\citep{chen2009settling}.
Its proof is motivated by the argument for existence of exact throttling equilibria given in Theorem~\ref{theorem:existence_second_price} and can be found in Appendix~\ref{appendix:PPAD-membership}. \begin{theorem}\label{thm:second-price-PPAD-membership} The problem of computing an approximate throttling equilibrium is in PPAD. \end{theorem} \subsection{NP-hardness of Revenue Maximization under Throttling} To further strengthen our hardness result, next we establish the NP-hardness of computing the revenue-maximizing approximate throttling equilibrium. With revenue being one of the primary concerns of advertising platforms, this result provides further evidence of the computational difficulties which plague throttling equilibria in second-price auctions. We begin by defining the decision version of the revenue-maximization problem. \begin{definition}[REV] Given a throttling game $G$ and target revenue $R$ as input, decide if there exists a $\delta$-approximate throttling equilibrium of $G$, for any $\delta \in [0,1)$, with revenue greater than or equal to $R$. \end{definition} Note that we allow for arbitrarily bad approximations to the throttling equilibrium by allowing $\delta$ to be any number in $[0,1)$. Theorem~\ref{thm:revenue_np_hard} states that the problem of finding the revenue-maximizing approximate throttling equilibrium is NP-hard. Its proof is based on a reduction from 3-SAT to REV and has been relegated to Appendix~\ref{appendix:NP-hard}. \begin{theorem}\label{thm:revenue_np_hard} REV is NP-hard. \end{theorem} \subsection{An Algorithm for Second-Price Throttling Equilibria with Two Buyers Per Good} Next, we contrast the hardness results of the previous subsection with an algorithm for the case when each good receives at most two non-zero bids. Since goods with only one positive bid never result in a payment, without loss of generality, we can assume that every good has exactly two buyers with positive bids.
More precisely, in this subsection, we will assume that $|\{i \in [n] \mid b_{ij}>0\}| = 2$ for all $j \in [m]$. This special case demarcates the boundary of tractability for computing throttling equilibria in second-price auctions: Our PPAD-hardness result (Theorem~\ref{thm:second-price-PPAD-hard}) holds for the slightly more general case of each good receiving at most three positive bids. We begin by describing the algorithm (Algorithm~\ref{alg:two_buyer}). \begin{algorithm}[H] \caption{Algorithm for the Two Buyer Case} \label{alg:two_buyer} \begin{algorithmic} \item[\textbf{Input:}] Throttling game $\left(n, m, (b_{ij}), (B_i)\right)$ and parameter $\gamma > 0$ \item[\textbf{Initialize:}] $\theta_i = \min\{B_i/ (2\sum_j b_{ij}), 1\}$ for all $i \in [n]$ \item[\textbf{While}] there exists a buyer $i \in [n]$ such that $\theta_i < 1 - \gamma$ and $\sum_j p(\theta)_{ij} < (1 - \gamma)^3 B_i$: \begin{enumerate} \item For all $i \in [n]$ such that $\theta_i < 1 - \gamma$ and $\sum_j p(\theta)_{ij} < (1 - \gamma)^2 B_i$, set $\theta_i \leftarrow \theta_i/(1 - \gamma)$ \item For all $i \in [n]$ such that $\sum_j p(\theta)_{ij} > B_i$, set $\theta_i \leftarrow (1 - \gamma)\theta_i$ \end{enumerate} \item[\textbf{Return:}] $\theta$ \end{algorithmic} \end{algorithm} The following theorem, whose proof can be found in Appendix~\ref{appendix:two-buyer}, establishes the correctness and polynomial runtime of Algorithm~\ref{alg:two_buyer}. \begin{theorem}\label{thm:two-buyer} Algorithm~\ref{alg:two_buyer} returns a $(1 - 3 \gamma)$-approximate throttling equilibrium in time which is polynomial in the size of the instance and $1/\gamma$. \end{theorem} \section{Comparing Pacing and Throttling}\label{sec:TE-PE} In this section, we compare two of the most popular budget management strategies: multiplicative pacing and throttling. First, we restate the definition of pacing equilibrium, as it appears in \citet{conitzer2018multiplicative, conitzer2019pacing}. 
Under pacing, each buyer $i$ has a pacing parameter $\alpha_i$ and she bids $\alpha_i b_{ij}$ on good $j$. Let $p_j(\alpha)$ denote the price on good $j$ when all of the buyers use pacing, i.e., $p_j(\alpha)$ is the highest (second-highest) element among $\{\alpha_i b_{ij}\}_i$ for first-price (second-price) auctions. Then, a tuple $((\alpha_i), (x_{ij}))$ of pacing parameters and allocations $x_{ij}$ is called a pacing equilibrium if the following hold: \begin{itemize}[noitemsep] \item[(a)] Only buyers with the highest bid win the good: $x_{ij} > 0$ implies $\alpha_i b_{ij}= \max_k \alpha_k b_{kj}$. \item[(b)] Full allocation of each good with a positive bid: $\max_k \alpha_k b_{kj} > 0$ implies $\sum_{i\in [n]} x_{ij} = 1$. \item[(c)] Budgets are satisfied: $\sum_{j\in [m]} x_{ij} p_j(\alpha) \leq B_i$. \item[(d)] No unnecessary pacing: $\sum_{j\in [m]} x_{ij} p_j(\alpha) < B_i$ implies $\alpha_i=1$. \end{itemize} \paragraph{Comparing Pacing and Throttling in First-Price Auctions} We begin with a comparison of pacing equilibria and throttling equilibria for first-price auctions. In \citet{conitzer2019pacing}, the authors show that a unique pacing equilibrium always exists in first-price auctions and characterize it as the largest element in the collection of all budget-feasible vectors of pacing parameters. In Theorem~\ref{thm:existence-first-price}, we show the analogous result for throttling using similar techniques. However, unlike the pacing equilibrium, which is known to be rational \citep{conitzer2019pacing}, there exist throttling games where the throttling equilibrium is irrational, as we demonstrate through Example~\ref{example:first-price-irrational}. Furthermore, in \citet{borgs2007dynamics}, the authors develop t\^atonnement-style dynamics similar to those described in Algorithm~\ref{alg:dynamics}, which converge to an approximate pacing equilibrium in polynomial time.
In combination with Theorem~\ref{thm:first-price-dynamics}, this provides evidence supporting the tractability of budget management for first-price auctions. The uniqueness of pacing equilibrium and throttling equilibrium in first-price auctions is conducive to comparison, which we carry out for revenue. More specifically, in Theorem~\ref{thm:revenue-comparison}, we show that the revenue under the pacing equilibrium and the throttling equilibrium are always within a multiplicative factor of 2 of each other. Let REV(PE) and REV(TE) denote the revenue under the unique pacing equilibrium and the unique throttling equilibrium respectively. \begin{theorem}\label{thm:revenue-comparison} For any throttling game $\left(n, m, (b_{ij}), (B_i)\right)$, the revenue from the pacing equilibrium and the revenue from the throttling equilibrium are always within a factor of 2 of each other, i.e., REV(PE) $\leq 2 \times \text{REV(TE)}$ and REV(TE) $\leq 2 \times \text{REV(PE)}$. \end{theorem} \begin{proof} Consider a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$. Let $\theta = (\theta_i)_i$ be the unique throttling equilibrium (TE) and $\alpha = (\alpha_i)_i$ be the unique pacing equilibrium (PE) for this game. We will use $p_j(\theta)$ and $p_j(\alpha)$ to denote the (expected) payment made to the seller on good $j$ under the TE and PE respectively. Then, REV(TE) $= \sum_j p_j(\theta)$ and REV(PE) $= \sum_j p_j(\alpha)$. First, we show that REV(PE) $\leq 2 \times \text{REV(TE)}$. Let $N \coloneqq \{i \in [n] \mid \theta_i = 1\}$ be the set of buyers who are not budget constrained under the TE. 
Moreover, define \begin{align*} M \coloneqq \{j \in [m] \mid \exists\ i\in N \text{ such that } i \text{ wins a non-zero fraction of } j \text{ under the PE } \alpha \} \end{align*} Note that, since $\theta_i = 1$ and $\alpha_i \leq 1$ for all $i \in N$, the TE yields a higher revenue for the seller on all goods in the set $M$, i.e., $p_j(\theta) \geq p_j(\alpha)$ for all $j \in M$: such a buyer $i$ always participates under the TE, so the first-price payment on good $j$ is at least $b_{ij} \geq \alpha_i b_{ij} = p_j(\alpha)$. Therefore, REV(TE) $\geq \sum_{j \in M} p_j(\theta) \geq \sum_{j \in M} p_j(\alpha)$. Furthermore, the definition of throttling equilibrium implies that every buyer $i \notin N$ spends her entire budget $B_i$ under the TE. Hence, by our choice of $M$, we get $\sum_{j \notin M} p_j(\alpha) \leq \sum_{i \notin N} B_i \leq$ REV(TE). Combining the two statements yields REV(PE) $\leq 2 \times \text{REV(TE)}$, as desired. Next, we complete the proof by showing that REV(TE) $\leq 2 \times \text{REV(PE)}$. Let $S = \{i \in [n] \mid \alpha_i = 1\}$ be the set of buyers who are not budget constrained under the PE. Note that, for all $i \in S$ and $j \in [m]$, buyer $i$ bids $b_{ij}$ under the PE, which implies $p_j(\alpha) \geq \max_{i \in S} b_{ij}$ for all goods $j \in [m]$. Therefore, for any good $j \in [m]$, the total payment made by buyers in the set $S$ under the TE is at most $p_j(\alpha)$. As a consequence, the total payment made by buyers in $S$ under the TE is at most REV(PE). Furthermore, the buyers not in $S$ completely spend their budget under the PE, so the total payment made by buyers not in $S$ under the TE is at most $\sum_{i \notin S} B_i \leq$ REV(PE). Hence, we have the desired inequality REV(TE) $\leq 2 \times \text{REV(PE)}$. \end{proof} In Appendix~\ref{appendix:TE-PE}, we give examples to demonstrate that REV(TE) can be arbitrarily close to $2 \times \text{REV(PE)}$, and REV(PE) can be arbitrarily close to $(4/3) \times \text{REV(TE)}$.
In other words, for Theorem~\ref{thm:revenue-comparison}, the inequality REV(TE) $\leq 2 \times \text{REV(PE)}$ is tight and the inequality REV(PE) $\leq 2 \times \text{REV(TE)}$ is not too loose. \paragraph{Comparing Pacing and Throttling for Second-Price Auctions} This subsection is devoted to the comparison of pacing equilibria and throttling equilibria in second-price auctions. We begin by noting that, in stark contrast to first-price auctions, there could be infinitely many throttling equilibria for second-price auctions, as the following example demonstrates. \begin{example} There are 2 goods and 2 buyers. The bids are given by $b_{11} = b_{22} = 2$, $b_{12} = b_{21} = 1$, and the budgets are given by $B_1 = B_2 = 1/2$. Then, it is straightforward to check that any pair of throttling parameters $\theta_1, \theta_2 \in [1/2, 1]$ such that $\theta_1 \theta_2 = 1/2$ forms a throttling equilibrium. \end{example} \citet{conitzer2018multiplicative} demonstrate that multiplicity (albeit with only finitely many equilibria) also shows up for pacing equilibria in second-price auctions, which in combination with the multiplicity of throttling equilibria bodes unfavorably for potential comparisons of revenue. The similarities do not end with multiplicity: the problems of computing an approximate pacing equilibrium and computing an approximate throttling equilibrium are both PPAD-complete \citep{chen2021complexity}. As a consequence, we get that, unlike first-price auctions, no dynamics can converge to an approximate equilibrium in polynomial time for second-price auctions under either budget-management approach (assuming PPAD-hard problems cannot be solved in polynomial time). Furthermore, finding the revenue-maximizing throttling equilibrium and finding the revenue-maximizing pacing equilibrium are both NP-hard problems~\citep{conitzer2018multiplicative}. However, unlike throttling equilibria, a rational pacing equilibrium always exists \citep{chen2021complexity}.
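The continuum of equilibria in the example above is easy to verify numerically. The sketch below (function names are ours; bid ties are assumed away, as in the example) implements the expected-payment formula of equation~(\ref{eq:exp_pay}) together with the two equilibrium conditions:

```python
def expected_payment(i, theta, bids):
    """Expected second-price payment of buyer i over all goods, following
    p(theta)_{ij} = sum_{l: b_lj < b_ij} b_lj * theta_i * theta_l
                    * prod_{k != i: b_kj > b_lj} (1 - theta_k).
    Here bids[j][k] is buyer k's bid on good j."""
    total = 0.0
    for bj in bids:
        for l, b_l in enumerate(bj):
            if l == i or b_l >= bj[i] or b_l == 0:
                continue  # i pays b_l only when b_l is below her own bid
            pr = theta[i] * theta[l] * b_l
            for k, b_k in enumerate(bj):
                if k != i and b_k > b_l:
                    pr *= 1 - theta[k]  # everyone bidding above b_l is absent
            total += pr
    return total

def is_throttling_eq(theta, bids, budgets, tol=1e-9):
    """Budget feasibility plus 'no unnecessary throttling', up to tol."""
    for i, B in enumerate(budgets):
        p = expected_payment(i, theta, bids)
        if p > B + tol or (p < B - tol and theta[i] < 1 - tol):
            return False
    return True

# the 2x2 example: goods are the rows, b_11 = b_22 = 2 and b_12 = b_21 = 1
bids = [[2, 1], [1, 2]]
budgets = [0.5, 0.5]
```

For any $\theta_1 \in [1/2, 1]$ with $\theta_2 = 1/(2\theta_1)$, both buyers spend exactly their budget of $1/2$, so the conditions hold, whereas $(\theta_1, \theta_2) = (1, 1)$ overspends both budgets.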
\section{Price of Anarchy} In this section, we study the efficiency of throttling equilibria in first-price and second-price auctions. We will use liquid welfare \citep{dobzinski2014efficiency} to measure efficiency. It is an alternative to social welfare which is more suitable for settings with budget constraints, and it reduces to social welfare when budgets are infinite. \begin{definition}\label{definition:liquid-welfare} For an allocation $y = \{y_{ij}\}$, where $y_{ij} \in [0,1]$ denotes the probability of allocating good $j$ to buyer $i$, its liquid welfare $\lw(y)$ is defined as \begin{align*} \lw(y) = \sum_{i=1}^n \min\left\{ \sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} \end{align*} \end{definition} \begin{remark} Liquid welfare is traditionally defined as the amount of revenue that can be extracted from budget-constrained buyers with full knowledge of their values. If buyer $i$ were assumed to have value $v_{ij}$ for good $j$, this is given by \begin{align*} \sum_{i=1}^n \min\left\{ \sum_{j=1}^m v_{ij} y_{ij}, B_i \right\} \,. \end{align*} However, since our model does not assume a valuation structure, we define $\lw(y)$ to capture the amount of revenue that can be extracted from budget-constrained buyers with full knowledge of their bids if no buyer could be charged more than her bid for any good. It reverts to the traditional definition when $b_{ij} = v_{ij}$. \end{remark} Let $y(\theta)$ denote the allocation when the buyers use the throttling parameters $\theta = (\theta_1, \dots, \theta_n)$, and let $\Theta^*$ be the set of all throttling equilibria. Price of Anarchy \citep{koutsoupias2009worst}, which we define next, is the ratio of the best-possible liquid welfare that can be attained by any allocation to the worst-case liquid welfare of throttling equilibria. It measures the worst-case loss in efficiency incurred due to the strategic behavior of agents when compared to the optimal outcome that could be achieved by a central planner.
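In code, the liquid welfare of Definition~\ref{definition:liquid-welfare} is a one-liner; the sketch below (our naming, not part of the formal development) computes $\lw(y)$ from an allocation matrix, the bid matrix, and the budgets.

```python
def liquid_welfare(y, bids, budgets):
    """Liquid welfare LW(y): each buyer contributes her bid-weighted
    allocation, capped at her budget."""
    return sum(
        min(sum(b * p for b, p in zip(bids[i], y[i])), budgets[i])
        for i in range(len(budgets))
    )
```

For instance, a buyer with bids $(4,4)$, budget $10$, and allocation $(1/2,1/2)$ contributes $4$; the cap at the budget is what distinguishes liquid welfare from welfare computed from bids alone.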
\begin{definition} The Price of Anarchy (PoA) of throttling equilibria for liquid welfare is given by \begin{align*} \text{PoA} = \frac{\sup_{y \in (\Delta^n)^m} \lw(y)}{\inf_{\theta \in \Theta^*} \lw(y(\theta))} \end{align*} \end{definition} We begin by establishing an upper bound on the Price of Anarchy for both first-price and second-price auctions. Its proof critically leverages the no-unnecessary-throttling condition of throttling equilibria, and is inspired by the Price of Anarchy result of \citet{balseiro2021contextual} for pacing equilibria. \begin{theorem}\label{thm:poa} For both first-price and second-price auctions, we have $\text{PoA} \leq 2$. \end{theorem} Next, we show that the upper bound on the PoA established in Theorem~\ref{thm:poa} is tight for both first-price and second-price auctions. We do so by demonstrating particular instances for which the bound is tight, starting with the second-price auction format. \begin{example}\label{example:SP-poa-tight} Consider a second-price auction with $m+1$ buyers and $m$ goods for some $m \in \mathbb{Z}_+$. Each of the first $m$ buyers bids $1$ for one of the $m$ goods respectively and has a budget of $\infty$, i.e., for $i \in [m]$, we have \begin{align*} b_{ij} = \begin{cases} 1 &\text{if } i = j\\ 0 &\text{if } i \neq j \end{cases}\,, \end{align*} and $B_i = \infty$ (any $B > 2m$ would suffice). The last buyer has bid $b_{m+1,j} = m$ for each of the goods $j \in [m]$ and has a budget of $B_{m+1} = m + \epsilon$ for some small $\epsilon > 0$. In any throttling equilibrium $\theta \in \Theta^*$, we have $\theta_i = 1$ for all $i \in [m]$ because of the no-unnecessary-throttling condition. Since the sum of all the second-highest bids is $m$ and buyer $m+1$ has the highest bid for every good, she cannot possibly spend her entire budget of $B_{m+1} = m + \epsilon$ and we must also have $\theta_{m+1} = 1$ by the no-unnecessary throttling condition.
Therefore, there is a unique throttling equilibrium $\theta$ such that $\theta_i = 1$ for all $i \in [m+1]$ and it has liquid welfare given by \begin{align*} \lw(y(\theta)) = \left( \sum_{i=1}^m \min\left\{ y_{ii}(\theta), B_i \right\} \right) + \min\left\{ \sum_{j=1}^m m \cdot y_{m+1,j}(\theta), m + \epsilon \right\} = m+ \epsilon \end{align*} because $y_{m+1,j}(\theta) = 1$ for all $j \in [m]$. On the other hand, consider the allocation $y$ such that $y_{ii} = 1$ for all $i \in [m-1]$ and $y_{m+1, m} = 1$. It has liquid welfare given by \begin{align*} \lw(y) = \left( \sum_{i=1}^{m-1} \min\left\{ y_{ii}, B_i \right\} \right) + \min\left\{ m \cdot y_{m+1,m}, m + \epsilon \right\} = m -1 + m = 2m-1 \,. \end{align*} Hence, the PoA is at least $(2m-1)/(m+\epsilon)$. As $m$ and $\epsilon$ were arbitrary, we can consider the limit when $m \to \infty$ and $\epsilon \to 0$, which yields the required lower bound of $\text{PoA} \geq 2$. \end{example} Observe that in the previous example none of the buyers were throttled ($\theta_i = 1$), which indicates that the lower bound is driven more by the second-price auction format than the specific budget management method, and applies to other methods like pacing. Next, we show that our bound is tight for first-price auctions. \begin{example}\label{example:FP-poa-tight} Consider a first-price auction with $m+1$ buyers and $m+1$ goods, for some $m \in \mathbb{Z}_+$. Each of the first $m$ buyers bids $1$ for one of the first $m$ goods respectively, bids $m$ on good $m+1$, and has a budget of $1$, i.e., for each $i \in [m]$, we have \begin{align*} b_{ij} = \begin{cases} 1 &\text{if } j = i\\ m &\text{if } j = m+1\\ 0 &\text{otherwise} \end{cases}\,, \end{align*} and $B_i = 1$. Moreover, buyer $m+1$ has bid $b_{m+1, m+1} = m$ for the $(m+1)$-th good and $b_{m+1, j} = 0$ for all $j \in [m]$, with $B_{m+1} = \infty$. Consider a throttling equilibrium $\theta \in \Theta^*$. We begin by showing that $\theta_i < 1$ for all $i \in [m]$.
For contradiction, suppose not. Let $i$ be the smallest index such that $\theta_i = 1$. Then, buyer $i$ spends 1 on good $i$ and spends $m \cdot \prod_{k=1}^{i-1} (1 - \theta_k) > 0$ on good $m+1$ (we use the lexicographic tie-breaking rule), which makes her total expenditure strictly greater than her budget of $B_i = 1$, thereby yielding the required contradiction. Hence, $\theta_i < 1$ for all buyers $i \in [m]$, and consequently, the no-unnecessary-throttling condition implies that their total expected expenditure is exactly 1, i.e., the following equivalent statements hold \begin{align}\label{eqn:SP-poa-inter-1} \theta_i \cdot 1 + \left( \prod_{k=1}^{i-1} (1 - \theta_k) \right) \cdot \theta_i \cdot m = 1 \quad \iff \quad \theta_i = \frac{1}{1 + \left( \prod_{k=1}^{i-1} (1 - \theta_k) \right) \cdot m} \,. \end{align} Moreover, since their expenditure is $B_i=1$, that is also their contribution towards the liquid welfare. Let $g(i) \coloneqq \prod_{k=1}^{i} (1 - \theta_k)$ denote the probability that the first $i$ buyers do not participate. Next, observe that $\theta_{m+1} = 1$ because of the no-unnecessary-throttling condition and $B_{m+1} = \infty$. Therefore, due to the lexicographic tie-breaking rule, buyer $m$ wins good $m+1$ with probability $g(m) = \prod_{k=1}^{m} (1 - \theta_k)$. Hence, the liquid welfare of $\theta$ is given by \begin{align*} \lw(y(\theta)) = m \cdot 1 + g(m) \cdot m \,. \end{align*} On the other hand, the allocation $y$ which awards good $i$ to buyer $i$ for all $i \in [m+1]$ has $\lw(y) = m + m = 2m$. Consequently, we have \begin{align*} \text{PoA} \geq \frac{2m}{m + g(m) \cdot m} = \frac{2}{1 + g(m)}\,. \end{align*} To show $\text{PoA} \geq 2$, it suffices to show $\lim_{m \to \infty} g(m) = 0$, which is what we do next.
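Before the formal argument, the limit can be checked numerically. The following sketch (illustrative only, not part of the proof) iterates the recursion implied by \eqref{eqn:SP-poa-inter-1}, namely $\theta_i = 1/(1 + g(i-1)\,m)$ and $g(i) = (1-\theta_i)\,g(i-1)$ with $g(0) = 1$:

```python
# Numerical check (illustrative only) of the recursion from eqn (SP-poa-inter-1):
# theta_i = 1 / (1 + g(i-1) * m), g(i) = (1 - theta_i) * g(i-1), with g(0) = 1,
# where g(i) is the probability that buyers 1..i all sit out.

def g_final(m: int) -> float:
    g = 1.0
    for _ in range(m):
        theta = 1.0 / (1.0 + g * m)
        g *= 1.0 - theta
    return g

for m in (10, 100, 1000):
    # Bound proved next: g(m) <= 1 - m / (m + sqrt(m)) = 1 / (1 + sqrt(m))
    assert g_final(m) <= 1.0 - m / (m + m ** 0.5) + 1e-12
```

Since $1 - m/(m+\sqrt{m}) = 1/(1+\sqrt{m})$, the bound checked above already forces $g(m) \to 0$.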
First, observe that \eqref{eqn:SP-poa-inter-1} implies the following recursion for $g(\cdot)$: \begin{align*} g(i) = (1 - \theta_i) g(i-1) = \frac{\left( \prod_{k=1}^{i-1} (1 - \theta_k) \right) \cdot m}{1 + \left( \prod_{k=1}^{i-1} (1 - \theta_k) \right) \cdot m} \cdot g(i-1) = \frac{g(i-1)^2 \cdot m}{1 + g(i-1) \cdot m}\,. \end{align*} We will inductively show that $g(i) \leq 1 - i/(m + \sqrt{m})$. Set $b = 1/(m+\sqrt{m})$. The base case $i = 1$ follows because $\theta_1 = 1/(1 + m)$ (see \eqref{eqn:SP-poa-inter-1}). Suppose $g(i-1) \leq 1 - (i-1)/(m + \sqrt{m})$ for some $i \in [m]$. Then, we have \begin{align*} g(i) = \frac{g(i-1)^2 \cdot m}{1 + g(i-1) \cdot m} = \frac{m}{\frac{1}{g(i-1)^2} + \frac{m}{g(i-1)}} \leq \frac{m}{\frac{1}{(1 - b(i-1))^2} + \frac{m}{1 - b(i-1)}} = \frac{m \cdot (1 - bi + b)^2}{1 + m (1 - bi + b)} \,. \end{align*} To complete the induction, it suffices to show: \begin{align*} & \frac{m \cdot (1 - bi + b)^2}{1 + m (1 - bi + b)} \leq 1 - bi\\ \iff & m(1 + b^2 i^2 + b^2 -2bi + 2b - 2b^2i) \leq 1 - bi + m(1 + b^2i^2 - 2bi + b - b^2i)\\ \iff & 1 - bi + m(b^2i - b - b^2) \geq 0\\ \iff & 1 - m \left( \frac{1}{m + \sqrt{m}} + \frac{1}{(m + \sqrt m)^2} \right) - i \left( \frac{1}{m+\sqrt m} - \frac{m}{(m + \sqrt m)^2} \right) \geq 0\\ \iff & 1 - m \left( \frac{1}{m + \sqrt{m}} + \frac{1}{(m + \sqrt m)^2} \right) - i \left( \frac{\sqrt m}{(m + \sqrt m)^2} \right) \geq 0 \end{align*} To see why the last inequality in the above equivalence chain holds, observe that: \begin{align*} &1 - m \left( \frac{1}{m + \sqrt{m}} + \frac{1}{(m + \sqrt m)^2} \right) - i \left( \frac{\sqrt m}{(m + \sqrt m)^2} \right)\\ \geq & 1 - m \left( \frac{1}{m + \sqrt{m}} + \frac{1}{(m + \sqrt m)^2} \right) - m \left( \frac{\sqrt m}{(m + \sqrt m)^2} \right)\\ = & \frac{(m + \sqrt m)^2 - m(m+\sqrt m) - m - m \sqrt m}{(m + \sqrt m)^2}\\ = & \frac{m^2 + m + 2 m\sqrt m - m^2 - m \sqrt m - m - m\sqrt m}{(m + \sqrt m)^2}\\ = & 0 \end{align*} which completes the induction.
Hence, $g(m) \leq 1 - m/(m + \sqrt m)$ and $\lim_{m \to \infty} g(m) = 0$, as required. \end{example} \section{Conclusion and Future Work} We defined the notion of a throttling equilibrium and studied its properties for both first-price and second-price auctions. Through our analysis of computational and structural properties, we found that throttling equilibria in first-price auctions satisfy the desirable properties of uniqueness and polynomial-time computability. In contrast, we showed that for second-price auctions, equilibrium multiplicity may occur, and computing a throttling equilibrium is PPAD-hard. This disparity between the two auction formats is reinforced when we compare throttling and pacing: our results show that the properties of throttling equilibrium across the two formats have a striking similarity to the properties of first-price versus second-price pacing equilibrium. Finally, we also showed that the Price of Anarchy of throttling equilibria for liquid welfare is bounded above by 2 for both first-price and second-price auctions, and that this bound is tight for both auction formats. Altogether, this provides a comprehensive analysis of the equilibria which arise from the use of throttling as a method of budget management. There are many interesting directions for future work, such as what happens when a combination of pacing and throttling-based buyers exist in the market, whether the combination of throttling and pacing behaves well for second-price auctions, whether second-price throttling equilibria can be computed efficiently under some natural assumptions on the bids, and whether the tractability of budget management in first-price auctions holds more generally beyond throttling and pacing.
\begin{center} \LARGE Electronic Companion:\\ Throttling Equilibria in Auction Markets\\ \large Xi Chen, Christian Kroer, Rachitesh Kumar \end{center} \appendix \section{Appendix: Examples of Irrational Throttling Equilibria}\label{appendix:irrationality} \paragraph{First-Price Auctions:} First, we give an example for which the unique \emph{first-price} throttling equilibrium is irrational. \begin{example}\label{example:first-price-irrational} Define a throttling game as follows: There are 2 goods and 2 buyers, i.e., $m = 2$ and $n = 2$; $b_{11} = b_{12} = 2$ and $b_{21} = 1, b_{22} = 3$; $B_1 = 2$ and $B_2 = 1$. Suppose, in equilibrium, the buyers use the throttling parameters $\theta_1$ and $\theta_2$. Then the payments of buyer 1 and buyer 2 are given by $2\theta_1 + 2(1 - \theta_2)\theta_1$ and $3\theta_2 + (1 - \theta_1) \theta_2$ respectively. Therefore, for this game, in any throttling equilibrium, we have $0 < \theta_1, \theta_2 < 1$, which implies $ 2\theta_1 + 2(1 - \theta_2)\theta_1 = 2$ and $3\theta_2 + (1 - \theta_1) \theta_2 = 1 $. Substituting $\theta_1 = 1/(2 - \theta_2)$ from the first equation into the second yields \begin{align*} 3\theta_2 + \theta_2 \cdot \frac{1 - \theta_2}{2 - \theta_2} = 1 \end{align*} which implies $4 \theta_2^2 - 8 \theta_2 + 2 = 0$. As $\theta_2 < 1$, solving the quadratic gives $\theta_2 = (8 - \sqrt{32})/8 = 1 - 1/\sqrt{2}$. \end{example} \paragraph{Second-Price Auctions:} Next, we give an example for which all \emph{second-price} throttling equilibria are irrational. \begin{example}\label{example:irrational_eq} Define a throttling game as follows: \begin{itemize} \item There are 4 goods and 3 buyers, i.e., $m = 4$ and $n = 3$ \item $b_{11} = b_{12} = 2$, $b_{14} = 1$, $b_{23} = b_{24} = 4$, $b_{22} = 1$, $b_{31} = 1$ and $b_{33} = 2$ \item $B_1 = B_2 = 1$ and $B_3 = \infty$ \end{itemize} For this game, in any throttling equilibrium, we have $0 < \theta_1, \theta_2 < 1$ and $\theta_3 = 1$.
Hence, if $\theta$ is a throttling equilibrium, then it satisfies $ \theta_1 + \theta_1 \theta_2 = 1$ and $2\theta_2 + \theta_2 \theta_1 = 1 $. Substituting $\theta_1 = 1/(1 + \theta_2)$ from the first equation into the second equation yields \begin{align*} 2\theta_2 + \theta_2 \cdot \frac{1}{1 + \theta_2} = 1 \end{align*} which further implies $2 \theta_2^2 + 2 \theta_2 - 1 = 0$. As $\theta_2 > 0$, solving the quadratic gives $\theta_2 = (\sqrt{3} - 1)/2$. \end{example} \section{Appendix: Missing Proofs} \subsection{Proof of Theorem~\ref{thm:second-price-PPAD-membership}}\label{appendix:PPAD-membership} Consider a throttling game $\left(n, m, (b_{ij}), (B_i)\right)$ and an approximation parameter $\delta \in (0,1/2)$. Define $f: [0,1]^n \to [0,1]^n$ as \begin{align*} f_i(\theta) = \min\left\{ \frac{(1 - \delta/2)B_i}{\sum_j p(1, \theta_{-i})_{ij}}, 1 \right\} = \min\left\{ \frac{(1 - \delta/2)B_i}{\max\{\sum_j p(1, \theta_{-i})_{ij}, B_i/2\} }, 1 \right\} \quad \forall \theta \in [0,1]^n \end{align*} First, we prove that $f$ is $L$-Lipschitz continuous with Lipschitz constant $L = 2mn \overline{B} \underline{B}^{-2} \overline{b}$, where $\overline{b} = \max_{i,j} b_{ij}$, $\overline{B} = \max_{i} B_i$ and $\underline{B} = \min_{i} B_i$. To achieve this, we will repeatedly use the following facts about Lipschitz functions.
For Lipschitz continuous functions $f$ and $g$ with Lipschitz constants $L_1$ and $L_2$ respectively, \begin{itemize} \item $f + g$ is $L_1 + L_2$-Lipschitz continuous \item If $f$ and $g$ are bounded above by $M$, then $f \cdot g$ is $M(L_1 + L_2)$-Lipschitz continuous \item If $f$ is bounded below by $c$, then $1/f$ is $L_1/c^2$-Lipschitz continuous \item For a constant $C$, $\max\{f, C\}$ and $\min\{f,C\}$ are both $L_1$-Lipschitz continuous \end{itemize} Observe that \begin{align*} p(1, \theta_{-i})_{ij} =\sum_{\ell: b_{\ell j} < b_{ij}} b_{\ell j} \theta_\ell \prod_{k \neq i: b_{kj} > b_{\ell j}} (1 - \theta_k) \end{align*} Therefore, for all $i \in [n]$, $\theta \mapsto p(1, \theta_{-i})_{ij}$ is $(2n\overline{b})$-Lipschitz continuous, which further implies that $\theta \mapsto \sum_{j} p(1, \theta_{-i})_{ij}$ is $2mn\overline{b}$-Lipschitz continuous. Finally, due to the second equality in the definition of $f$, we get that $f$ is $(2mn \overline{B} \underline{B}^{-2} \overline{b})$-Lipschitz continuous. Since BROUWER is in PPAD~\citep{chen2009settling}, to complete the proof, it suffices to show that a $(\delta \underline{B}/(4m \overline{b}))$-approximate fixed point $\theta^*$ of $f$, i.e., $\theta^*$ such that $\|f(\theta^*) - \theta^*\|_\infty \leq \delta \underline{B}/(4m \overline{b})$, is a $\delta$-approximate throttling equilibrium. First, note that $p(1, \theta_{-i})_{ij} \leq \overline{b}$ for all $i \in [n], j \in [m]$. Therefore, $f(\theta)_i \geq \underline{B}/(2m \overline{b})$ for all $i \in [n]$. Hence, for $i \in [n]$, we have \begin{align*} \biggl\lvert 1 - \frac{\theta_i^*}{f_i(\theta^*)} \biggr\rvert \leq \frac{\delta \underline{B}}{f_i(\theta^*) \cdot 4m \overline{b}} \leq \frac{\delta}{2} \end{align*} As a consequence, we get $\theta_i^* \leq (1 + \delta/2) f_i(\theta^*)$ and $\theta^*_i \geq (1- \delta/2) f_i(\theta^*)$.
The first inequality implies \begin{align*} \sum_j p(\theta^*)_{ij} = \theta^*_i \cdot \sum_j p(1, \theta^*_{-i})_{ij} \leq (1 + \delta/2)(1 - \delta/2)B_i \leq B_i \end{align*} and the second one implies that if $\theta^*_i < 1 - \delta/2$, then \begin{align*} \sum_j p(\theta^*)_{ij} = \theta^*_i \cdot \sum_j p(1, \theta^*_{-i})_{ij} \geq (1 - \delta/2)^2 B_i \geq (1 - \delta)B_i \end{align*} Hence, $\theta^*$ is a $\delta$-approximate throttling equilibrium, thereby completing the proof.\qed \subsection{Proof of Theorem~\ref{thm:revenue_np_hard}}\label{appendix:NP-hard} Consider an instance of 3-SAT with variables $\{x_1, \dots, x_n\}$ and clauses $\{C_1, \dots, C_m\}$. Our goal is to define an instance $\mathcal{I}$ of REV (a throttling game $G$ and a target revenue $R$) which always has the same solution (Yes or No) as the 3-SAT instance, and has a size of the order $\text{poly}(n,m)$. We do so next, starting with an informal description to build intuition. To better understand the core motivations behind the gadgets, we will restrict our attention to exact throttling equilibria ($\delta = 0$) in the informal discussion that follows. As we will see in the formal proof, the target revenue $R$ can be chosen carefully to ensure that only exact throttling equilibria can achieve the revenue $R$. \textbf{Reciprocal Gadget:} Fix $i \in [n]$. Corresponding to variable $x_i$, there are two goods $\mathbb{A}_i$ and $\mathbb{B}_i$, and two buyers $V_i^+$ and $V_i^-$ in the throttling game $G$. Each buyer bids 1 for one of the goods and bids 2 for the other, with the two buyers' bids on the two goods swapped. Furthermore, we set the budgets of both buyers to be 1/2, and ensure that they do not spend any non-zero amount on goods other than $\mathbb{A}_i$ and $\mathbb{B}_i$.
In equilibrium, this forces the throttling parameter of $V_i^+$ (which we denote by $\theta_i^+$) to be half of the reciprocal of the throttling parameter of $V_i^-$ (which we denote by $\theta_i^-$) and vice-versa. As a consequence, both throttling parameters lie in the interval $[1/2,1]$. \textbf{Binary Gadget:} For each variable $x_i$, there are two additional goods $\mathbb{S}_i$ and $\mathbb{T}_i$, which receive a bid of 1 from buyers $V_i^+$ and $V_i^-$ respectively. The throttling game $G$ also has one unbounded buyer $U$ who has an infinite budget, and bids 2 on both goods $\mathbb{S}_i$ and $\mathbb{T}_i$. By the definition of throttling equilibria (Definition~\ref{def:exact_throt_eq}), the throttling parameter of $U$ is always 1 in equilibrium. Therefore, buyer $U$ wins both $\mathbb{S}_i$ and $\mathbb{T}_i$ with probability one, and pays $\theta_i^+ + \theta_i^- = \theta_i^+ + 1/(2\theta_i^+)$ for it in expectation. Finally, observe that $t \mapsto t + 1/(2t)$, when restricted to $t \in [1/2,1]$, is maximized at $t = 1$ or $t = 1/2$. Therefore, by appropriately choosing the target revenue $R$, we can ensure that revenue $R$ is only achieved by throttling equilibria in which exactly one of the following holds: $(\theta_i^+ = 1, \theta_i^- = 1/2)$ or $(\theta_i^+ = 1/2, \theta_i^- = 1)$. This allows us to interpret $\theta_i^+ = 1$ as setting $x_i = 1$ and $\theta_i^- = 1$ as setting $x_i = 0$. \textbf{Clause Gadget:} For each clause $C_j$, there is a good $\mathbb{C}_j$. If $C_j$ contains a non-negated literal $x_i$, then buyer $V_i^+$ bids 1 on good $\mathbb{C}_j$, and if it contains a negated literal $\neg x_i$, then buyer $V_i^-$ bids 1 on good $\mathbb{C}_j$. Furthermore, the unbounded buyer $U$ bids 2 on good $\mathbb{C}_j$, thereby always winning it. Hence, the total payment on good $\mathbb{C}_j$ is 1 if some literal is satisfied (corresponding throttling parameter is 1), and is strictly less than 1 if no literal is satisfied (each corresponding throttling parameter is 1/2, so every literal buyer is absent with positive probability).
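The three gadgets above can be assembled programmatically. The following sketch is illustrative only; the function and the buyer/good names are hypothetical, and a clause is encoded as a list of (variable index, negated) pairs:

```python
# Sketch of the 3-SAT reduction gadgets (illustrative; names hypothetical).
# A clause is a list of (variable_index, negated) pairs.

def build_throttling_game(n_vars, clauses):
    bids = {}     # (buyer, good) -> bid; omitted pairs bid 0
    budgets = {}
    for i in range(n_vars):
        vp, vm = f"V{i}+", f"V{i}-"
        # Reciprocal gadget: forces theta_i^+ * theta_i^- = 1/2 in equilibrium.
        bids[(vp, f"A{i}")], bids[(vp, f"B{i}")] = 2, 1
        bids[(vm, f"A{i}")], bids[(vm, f"B{i}")] = 1, 2
        # Binary gadget: U pays theta_i^+ + theta_i^- on S_i and T_i.
        bids[(vp, f"S{i}")] = 1
        bids[(vm, f"T{i}")] = 1
        bids[("U", f"S{i}")] = bids[("U", f"T{i}")] = 2
        budgets[vp] = budgets[vm] = 0.5
    # Clause gadget: U always wins C_j; its expected payment is 1 exactly
    # when some literal buyer participates with probability 1.
    for j, clause in enumerate(clauses):
        bids[("U", f"C{j}")] = 2
        for var, negated in clause:
            bids[(f"V{var}-" if negated else f"V{var}+", f"C{j}")] = 1
    budgets["U"] = float("inf")
    target_revenue = n_vars + len(clauses) + 1.5 * n_vars
    return bids, budgets, target_revenue
```

The returned target revenue matches the choice $R = n + m + (3n/2)$ used in the formal proof.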
The rest of the reduction boils down to choosing $R$ appropriately. \begin{proof}[Proof of Theorem~\ref{thm:revenue_np_hard}] Guided by the informal intuition described above, we proceed with the formal definition of the instance $\mathcal{I}$, which involves specifying the throttling game $G$ and the target revenue $R$. The throttling game $G$ consists of the following goods: \begin{itemize} \item \textbf{Reciprocal Gadget:} For each variable $x_i$, there are two goods $\mathbb{A}_i$ and $\mathbb{B}_i$. \item \textbf{Binary Gadget:} For each variable $x_i$, there are two binary goods $\mathbb{S}_i$ and $\mathbb{T}_i$. \item \textbf{Clause Gadget:} For each clause $C_j$, there is a good $\mathbb{C}_j$. \end{itemize} Moreover, $G$ has the following set of buyers: \begin{itemize} \item Corresponding to each variable $x_i$, there are two buyers $V_i^+$ and $V_i^-$ with non-zero bids only for the following goods: \begin{itemize} \item $b(V_i^+, \mathbb{A}_i) = 2$ and $b(V_i^+, \mathbb{B}_i) = 1$ \item $b(V_i^-, \mathbb{A}_i) = 1$ and $b(V_i^-, \mathbb{B}_i) = 2$ \item $b(V_i^+, \mathbb{S}_i) = 1$ \item $b(V_i^-, \mathbb{T}_i) = 1$ \item $b(V_i^+, \mathbb{C}_j) = 1$ if $x_i$ is a literal in $C_j$ \item $b(V_i^-, \mathbb{C}_j) = 1$ if $\neg x_i$ is a literal in $C_j$ \end{itemize} Moreover, the budget of both $V_i^+$ and $V_i^-$ is $1/2$ for all $i \in [n]$. \item There is one unbounded buyer $U$ with $b(U, \mathbb{C}_j) = 2$ for all $j \in [m]$ and $b(U, \mathbb{S}_i) = b(U, \mathbb{T}_i) = 2$ for all $i \in [n]$. Moreover, $U$ has a budget of $\infty$. \end{itemize} Set the target revenue to be $R = n + m + (3n/2)$. Suppose there exists a $\delta$-approximate throttling equilibrium $\Theta$, for some $\delta \in [0,1)$, with revenue greater than or equal to $R$. Let $\theta_i^+$ and $\theta_i^-$ denote the throttling parameters of $V_i^+$ and $V_i^-$ in $\Theta$. Then, $\theta_i^+ \theta_i^- \leq 1/2$ by virtue of the budget constraints.
Therefore, the revenue from goods $\{\mathbb{A}_i\}_{i=1}^n \cup \{\mathbb{B}_i\}_{i=1}^n$ is at most $n$. Furthermore, it is easy to see that the revenue from goods $\{\mathbb{C}_j\}_{j=1}^m$ is at most $m$. Additionally, the total payment by buyer $U$ on goods $\mathbb{S}_i$ and $\mathbb{T}_i$ is at most $\theta_i^+ + \theta_i^- \leq \theta_i^+ + (1/(2\theta_i^+))$. Note that $\theta_i^+ + (1/(2\theta_i^+))$ is maximized at $\theta_i^+ = 1/2$ or $\theta_i^+ = 1$, with a value of $\theta_i^+ + (1/(2\theta_i^+)) = 3/2$. Therefore, the revenue from goods $\{\mathbb{S}_i\}_{i=1}^n \cup \{\mathbb{T}_i\}_{i=1}^n$ is at most $3n/2$. Hence, the total payment made on all the goods is at most $R$. For the total revenue under $\Theta$ to be greater than or equal to $R$, the revenue from $\{\mathbb{S}_i\}_{i=1}^n \cup \{\mathbb{T}_i\}_{i=1}^n$ must be at least $3n/2$ and the revenue from $\{\mathbb{C}_j\}_{j=1}^m$ must be at least $m$. Hence, under $\Theta$, buyer $U$ has a throttling parameter of $1$, and for each $i \in [n]$, either $(\theta_i^+ = 1, \theta_i^- = 1/2)$ or $(\theta_i^+ = 1/2, \theta_i^- = 1)$. Furthermore, the payment made by buyer $U$ on $\mathbb{C}_j$ is 1 for every $j \in [m]$. This allows us to assign values to the variables as follows: set $x_i = 1$ if $\theta_i^+ = 1$ and $x_i = 0$ if $\theta_i^- = 1$. With this assignment of the variables, each clause is satisfied since the payment made by buyer $U$ on $\mathbb{C}_j$ is 1 for all $j \in [m]$. Hence, we have shown that if there exists a $\delta$-approximate throttling equilibrium with revenue $R$ or greater, then there exists a satisfying assignment for the 3-SAT instance. Conversely, note that if there exists a satisfying assignment for the 3-SAT instance, then setting $\theta_i^+ = 1$, $\theta_i^- = 1/2$ if $x_i = 1$ and $\theta_i^+ = 1/2$, $\theta_i^- = 1$ if $x_i = 0$ yields a throttling equilibrium with revenue equal to $R$.
To complete the proof, observe that the size of the instance $|\mathcal{I}| = \text{poly}(n,m)$. \end{proof} \subsection{Proof of Theorem~\ref{thm:two-buyer}}\label{appendix:two-buyer} In this appendix, we analyze the correctness and runtime of Algorithm~\ref{alg:two_buyer}. To do so, we will make repeated use of the following crucial observation: \begin{align} \label{two_buyer_payment} p(\theta)_{ij} = \begin{cases} \theta_i \theta_k b_{kj} &\text{if } b_{ij} > b_{kj} > 0\ \text{for some } k \in [n]\\ 0 &\text{otherwise} \end{cases} \end{align} In particular, this observation implies that $p(1, \theta_{-i})$ is a linear function of $\theta$. The following lemma takes a step towards the proof of correctness of the algorithm by showing that the budget constraints are always satisfied. \begin{lemma}\label{lemma:alg_budget} At the start of each iteration of the while loop, we have $\sum_j p(\theta)_{ij} \leq B_i$ for all $i \in [n]$. \end{lemma} \begin{proof} We will use induction on the number of iterations of the while loop to prove this lemma. By our choice of initialization of $\theta$, the budget constraints are satisfied before the first iteration of the while loop. Suppose the constraints are satisfied before the start of the $t$-th iteration and the value of $\theta$ at that stage is $\theta^{(0)}$. We will use $\theta^{(1)}$ and $\theta^{(2)}$ to denote the value of $\theta$ after step 1 and step 2 of the $t$-th iteration respectively. Consider a buyer $i$ such that $\sum_j p(\theta^{(1)})_{ij} > B_i$. By \eqref{two_buyer_payment}, we get $$B_i < \sum_j p(\theta^{(1)})_{ij} \leq \left(\sum_j p(\theta^{(0)})_{ij}\right)/(1 - \gamma)^2$$ which further implies $\sum_j p(\theta^{(0)})_{ij} > (1 - \gamma)^2 B_i$. Therefore, the throttling parameter of buyer $i$ was not changed in step 1 of the $t$-th iteration, i.e., $\theta_i^{(0)} = \theta_i^{(1)}$.
As a consequence, we get \begin{align*} \sum_j p(\theta^{(1)})_{ij} \leq \left(\sum_j p(\theta^{(0)})_{ij}\right)/(1 - \gamma) \end{align*} After step 2 of the $t$-th iteration, we get $\theta^{(2)} = (1 - \gamma) \theta^{(1)}$. Hence, \begin{align*} \sum_j p(\theta^{(2)})_{ij} \leq (1 - \gamma)\sum_j p(\theta^{(1)})_{ij} \leq \left(\sum_j p(\theta^{(0)})_{ij}\right) \leq B_i \end{align*} where the last inequality follows from our inductive hypothesis. As $\theta^{(2)}$ is the value of $\theta$ after the $t$-th iteration, the lemma follows by induction. \end{proof} The next lemma establishes that the algorithm never loses any progress, i.e., any buyer who satisfies the `Not too much unnecessary throttling condition' of Definition~\ref{def:throt_eq} at the beginning of some iteration of the while loop continues to do so at the end of it. \begin{lemma}\label{lemma:alg_progress} If $\sum_j p(\theta)_{ij} \geq (1 - \gamma)^3 B_i$ or $\theta_i \geq 1 - \gamma$ at the start of some iteration of the while loop, then $\sum_j p(\theta)_{ij} \geq (1 - \gamma)^3 B_i$ or $\theta_i \geq 1 - \gamma$ at the end of that iteration. \end{lemma} \begin{proof} Consider an iteration of the while loop which starts with $\theta = \theta^{(0)}$. We will use $\theta^{(1)}$ and $\theta^{(2)}$ to denote the value of $\theta$ after step 1 and step 2 of this iteration. If $\sum_j p(\theta^{(0)})_{ij} \geq (1 - \gamma)^2 B_i$ at the beginning of the iteration, then $\sum_j p(\theta^{(2)})_{ij} \geq (1 - \gamma)^3 B_i$ because \begin{align*} \sum_j p(\theta^{(1)})_{ij} \geq (1 - \gamma)^2 B_i \quad \text{and} \quad \sum_j p(\theta^{(2)})_{ij} \geq (1 - \gamma) \sum_j p(\theta^{(1)})_{ij} \end{align*} Suppose $(1 - \gamma)^3 B_i \leq \sum_j p(\theta^{(0)})_{ij} < (1 - \gamma)^2 B_i$ and $\theta^{(0)}_i < 1 -\gamma$ at the start of the iteration. Then, after step 1, we have $(1 - \gamma)^2 B_i \leq \sum_j p(\theta^{(1)})_{ij} \leq B_i$.
Hence, after step 2, we get $(1 - \gamma)^3 B_i \leq \sum_j p(\theta^{(2)})_{ij}$. Finally, suppose $(1 - \gamma)^3 B_i \leq \sum_j p(\theta^{(0)})_{ij} < (1 - \gamma)^2 B_i$ and $\theta^{(0)}_i \geq 1 -\gamma$ at the start of the iteration. Then, after step 1, we have $\sum_j p(\theta^{(1)})_{ij} \leq B_i$. Hence, after step 2, we still have $\theta^{(2)}_i \geq (1 - \gamma)$. This completes the proof of the lemma. \end{proof} Finally, we combine the above lemmas to establish the correctness and polynomial-runtime of the algorithm. \begin{proof}[Proof of Theorem~\ref{thm:two-buyer}] Let $\theta^*$ be the vector of throttling parameters returned by the algorithm. Lemma~\ref{lemma:alg_budget} implies that $\theta^*$ satisfies the budget constraints of every buyer. Furthermore, upon combining $(1 - \gamma)^3 \geq 1 - 3 \gamma$ with the termination condition of the while loop, we get that either $\theta^*_i \geq 1 - \gamma$ or $\sum_j p(\theta^*)_{ij} \geq (1 - 3\gamma) B_i$ for all $i \in [n]$, which makes $\theta^*$ a $3\gamma$-approximate throttling equilibrium. Next, we bound the running time of the algorithm. Define $c = \min_i \min\{B_i/ (2\sum_j b_{ij}), 1\}$. Note that $c \leq \theta_i \leq 1$ for all $i \in [n]$ for the entire run of the algorithm. Based on Lemma~\ref{lemma:alg_progress}, we define \begin{align*} A(\theta) \coloneqq \{i \in [n] \mid \sum_j p(\theta)_{ij} \geq (1 - \gamma)^3 B_i \text{ or } \theta_i \geq 1 - \gamma \} \end{align*} Then Lemma~\ref{lemma:alg_progress} simply states that if $i \in A(\theta)$ at the start of iteration $T$ of the while loop, then $i \in A(\theta)$ at the start of all future iterations $t \geq T$. Moreover, recall that the while loop terminates when $A(\theta) = [n]$. Observe that, in each iteration of the while loop, $\theta_i \leftarrow \theta_i/(1- \gamma)$ for some $i \notin A(\theta)$.
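For intuition, the payment rule \eqref{two_buyer_payment} and the set $A(\theta)$ can be sketched as follows. This is a minimal illustration, not Algorithm~\ref{alg:two_buyer} itself; the helper names are hypothetical, and we assume each good receives at most two positive bids, all distinct:

```python
# Illustrative sketch (hypothetical names) of eqn (two_buyer_payment) and A(theta),
# assuming each good receives at most two positive bids, all distinct.

def payments(theta, bids):
    """Expected second-price payment of each buyer; bids[j] maps buyer -> bid > 0."""
    pay = {i: 0.0 for i in theta}
    for bid_j in bids.values():
        if len(bid_j) == 2:
            (b_lo, lo), (b_hi, hi) = sorted((b, i) for i, b in bid_j.items())
            # The high bidder wins whenever she participates and pays the low
            # bid exactly when the low bidder also participates.
            pay[hi] += theta[hi] * theta[lo] * b_lo
    return pay

def A(theta, bids, budgets, gamma):
    """Buyers already satisfying the approximate no-unnecessary-throttling condition."""
    pay = payments(theta, bids)
    return {i for i in theta
            if pay[i] >= (1 - gamma) ** 3 * budgets[i] or theta[i] >= 1 - gamma}
```

The while loop of Algorithm~\ref{alg:two_buyer} terminates precisely when $A(\theta) = [n]$.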
Hence, the total number $T$ of iterations of the while loop satisfies \begin{align*} \frac{c}{(1 -\gamma)^{T/n}} \leq 1 \iff T \leq \frac{n \log(1/c)}{\log(1/ (1-\gamma))} \leq \frac{n\log(1/c)}{\gamma} \end{align*} This completes the proof because each iteration takes polynomially many steps. \end{proof} \subsection{Proof of Theorem~\ref{thm:poa}} Fix a throttling equilibrium $\theta \in \Theta$. Recall that we use $X = (X_1, \dots, X_n)$ to capture the random profile of buyers who participate in the auctions, where $X_i = 1$ if and only if buyer $i$ participates in the auctions, and $\Pr(X_i = 1) = \theta_i$. Let $y_{ij}(X)$ be the indicator random variable which equals 1 if and only if good $j$ is allocated to buyer $i$ under the participation profile $X = (X_1, \dots, X_n)$, and is zero otherwise. Moreover, let $p_{j}(X)$ denote the price of item $j$ under the participation profile $X = (X_1, \dots, X_n)$. Here, the price is the highest/second-highest bid for first-price/second-price auctions respectively, and is interpreted to be 0 if no buyers bid in an auction. Observe that \begin{align*} p_{ij}(\theta) = \mathbb{E}\left[ p_{j}(X) y_{ij}(X) \right]\,. \end{align*} Fix a benchmark allocation $y = \{y_{ij}\}$. We begin by establishing the following lemma, which will play a critical role in the proof of the theorem. \begin{lemma}\label{lemma:poa} For all $i \in [n]$, we have \begin{align*} \min \left\{ \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} \geq \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij} \right]\,. \end{align*} \end{lemma} \begin{proof} We consider two cases. First assume that $\theta_i < 1$. Then, the no-unnecessary-throttling condition implies that $\sum_{j=1}^m p_{ij}(\theta) = B_i$. Now, observe that $y_{ij}(X) > 0$ only if $b_{ij} \geq p_j(X)$.
Consequently, we have \begin{align*} \mathbb{E}\left[ \sum_{j=1}^m b_{ij} y_{ij}(X) \right] \geq \mathbb{E}\left[ \sum_{j=1}^m p_j(X) y_{ij}(X) \right] = \sum_{j=1}^m p_{ij}(\theta) = B_i\,. \end{align*} Hence, we get \begin{align*} \min \left\{ \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} &= B_i\\ &\geq \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\}\\ &\geq \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij} \right]\,, \end{align*} thereby establishing the required lemma statement for a buyer $i$ such that $\theta_i < 1$. Next, consider a buyer $i$ such that $\theta_i = 1$, i.e., buyer $i$ always participates. Since $p_j(X) \geq b_{ij}$ whenever $y_{ij}(X) < 1$, we have \begin{align*} 0 \geq \mathbb{E}\left[ (b_{ij} - p_j(X)) (1 - y_{ij}(X)) y_{ij} \right]\,. \end{align*} Moreover, we also have \begin{align*} \mathbb{E}\left[ b_{ij} y_{ij}(X) \right] \geq \mathbb{E}\left[ (b_{ij} - p_j(X)) y_{ij}(X) y_{ij}\right] \end{align*} Adding the two inequalities, we get \begin{align*} \mathbb{E}\left[ b_{ij} y_{ij}(X) \right] &\geq \mathbb{E}\left[ (b_{ij} - p_j(X)) (1 - y_{ij}(X)) y_{ij} \right] + \mathbb{E}\left[ (b_{ij} - p_j(X)) y_{ij}(X) y_{ij}\right] = \mathbb{E}\left[ (b_{ij} - p_j(X)) y_{ij} \right]\,.
\end{align*} Summing over all goods $j \in [m]$ yields \begin{align*} \mathbb{E}\left[ \sum_{j=1}^m b_{ij} y_{ij}(X) \right] &\geq \sum_{j=1}^m b_{ij} y_{ij} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij} \right] \\ &\geq \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij} \right] \end{align*} Additionally, we also have \begin{align*} B_i \geq \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} \geq \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij} \right] \end{align*} Therefore, \begin{align*} \min \left\{ \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} \geq \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij} \right] \end{align*} This establishes the lemma for buyers $i$ with $\theta_i = 1$, completing the proof. \end{proof} With Lemma~\ref{lemma:poa} in hand, we are ready to prove the theorem. First, note that \begin{align*} \sum_{i=1}^n \min \left\{ \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} &\geq \sum_{i=1}^n \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \sum_{i=1}^n \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij} \right]\\ &= \sum_{i=1}^n \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) \sum_{i=1}^n y_{ij} \right]\\ &= \sum_{i=1}^n \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) \right]\\ &\geq \sum_{i=1}^n \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) \sum_{i=1}^n y_{ij}(X) \right]\\ &= \sum_{i=1}^n \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \sum_{i=1}^n \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij}(X) \right] \end{align*} where the second inequality follows from the observation that a good is always allocated whenever it has a positive bid, i.e., $\sum_{i=1}^n y_{ij}(X) = 1$ whenever $p_j(X) > 0$.
Hence, if we can show that \begin{align}\label{eqn:poa-required} \sum_{i=1}^n \min \left\{ \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} \geq \sum_{i=1}^n \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij}(X) \right] \,, \end{align} we will get \begin{align*} &\sum_{i=1}^n \min \left\{ \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} \geq \sum_{i=1}^n \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\} - \sum_{i=1}^n \min \left\{ \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} \\ \iff &\sum_{i=1}^n \min \left\{ 2 \cdot \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} \geq \sum_{i=1}^n \min \left\{\sum_{j=1}^m b_{ij} y_{ij}, B_i \right\}\, \end{align*} and thereby complete the proof, because the benchmark allocation $y$ and the throttling equilibrium $\theta$ are both arbitrary. In the remainder, we establish \eqref{eqn:poa-required}. Since $y_{ij}(X) > 0$ only when $b_{ij} \geq p_j(X)$, we have \begin{align*} \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right] \geq \mathbb{E}\left[\sum_{j=1}^m p_j(X) y_{ij}(X) \right]\,. \end{align*} Moreover, the budget constraint of buyer $i$ implies \begin{align*} B_i \geq \mathbb{E}\left[\sum_{j=1}^m p_j(X) y_{ij}(X) \right]\,. \end{align*} Combining the two inequalities, we get: \begin{align*} \min \left\{ \mathbb{E}\left[\sum_{j=1}^m b_{ij} y_{ij}(X) \right], B_i \right\} \geq \mathbb{E}\left[ \sum_{j=1}^m p_{j}(X) y_{ij}(X) \right]\,. \end{align*} Summing over all buyers $i \in [n]$ yields \eqref{eqn:poa-required}, as required. \section{Appendix: Examples for Section~\ref{sec:TE-PE}}\label{appendix:TE-PE} First, we provide an example to show that the inequality REV(TE) $\leq 2 \times \text{REV(PE)}$ is tight. \begin{example} Consider the throttling game in which there is 1 good and 2 buyers. The bids are given by $b_{11} = 1/\epsilon$, $b_{21} = 1 - \epsilon$ for $\epsilon > 0$ and the budgets are given by $B_1 = 1$, $B_2 = \infty$.
Then, in the unique pacing equilibrium, we have $\alpha_1 = \epsilon$ and $\alpha_2 = 1$, whereas in the unique throttling equilibrium, we have $\theta_1 = \epsilon$ and $\theta_2 = 1$. Hence, REV(PE) = 1 and REV(TE) = $1 + (1 - \epsilon)^2$. Since this is true for arbitrarily small $\epsilon$, we get that the inequality REV(TE) $\leq 2 \times \text{REV(PE)}$ established in Theorem~\ref{thm:revenue-comparison} is tight. \end{example} Next, we give a family of examples for which REV(PE) is arbitrarily close to $(4/3) \times \text{REV(TE)}$. \begin{example} Consider a throttling game with 2 goods and 2 buyers. Fix $\epsilon> 0$. The bids are given by $b_{11} = 1 + \epsilon$, $b_{12} = 1$ and $b_{21} = 1$. Moreover, the budgets are given by $B_1 = 1 - \epsilon$ and $B_2 = \infty$. Then, the unique pacing equilibrium is given by $\alpha_1 = 1 - \epsilon$, $\alpha_2 = 1$, and the unique throttling equilibrium is given by $\theta_1 = (1 - \epsilon)/(2 + \epsilon)$, $\theta_2 = 1$. Since $\epsilon$ was arbitrary, we can take it to be arbitrarily small, in which case we get REV(PE) $\simeq$ 2 and REV(TE) $\simeq$ 1.5, as desired. \end{example} \end{document}
\begin{document} \title{Betweenness Structures of Small Linear Co-Size - Appendix} \getcounter{thr}{L:thr} \getcounter{dfn}{L:dfn} \getcounter{lmm}{L:lmm} \getcounter{prp}{L:prp} \getcounter{crl}{L:crl} \getcounter{clm}{L:clm} \getcounter{fct}{L:fct} \getcounter{obs}{L:obs} \getcounter{cnj}{L:cnj} \getcounter{prb}{L:prb} \getcounter{figure}{L:figure} \section{Results} This is an appendix to \cite{gnszabo2018qusarxiv}, and contains the proofs of some technical results for small betweenness structures. We use definitions and references (e.g. theorem and figure numbering) from \cite{gnszabo2018qusarxiv} but we also repeat the results to be proved here along with the corresponding figures. We prove the following results. \setcounter{foo}{\thefigure} \setcounterref{figure}{Fsmallgr} \addtocounter{figure}{-1} \begin{figure} \caption{Spanner graphs of small exceptional quasilinear betweenness structures. Edges of weight different from $1$ are indicated by double-lines.} \end{figure} \setcounter{figure}{\thefoo} \setcounter{foo}{\thelmm} \setcounterref{lmm}{Lsmallgr} \addtocounter{lmm}{-1} \begin{lmm} Up to isomorphism, the quasilinear betweenness structures of order at most $7$ are the following: \begin{itemize} \item $\mathcal{R}_{n, i}^4$ for $5\leq n\leq 7$, $i\in I_n^4$; \item $\mathcal{S}_n^4$ for $5\leq n\leq 7$; \item $\mathcal{B}(G)$ where $G$ is one of the graphs in Figure \ref{Fsmallgr}. \end{itemize} \end{lmm} \setcounter{lmm}{\thefoo} \setcounter{foo}{\theclm} \setcounterref{clm}{Cn5} \addtocounter{clm}{-1} \begin{clm} Let $\mathcal{B}\in B(5, 2)$ be a regular betweenness structure. Then $\mathcal{H}(\mathcal{B})$ is a tight star.
\end{clm} \setcounter{clm}{\thefoo} \setcounter{foo}{\thefigure} \setcounterref{figure}{Fmetr7} \addtocounter{figure}{-1} \begin{figure} \caption{$\tilde{\mathcal{H}}_1$}\label{SFmetr71} \caption{$\tilde{\mathcal{H}}_2$}\label{SFmetr72} \caption{$\tilde{\mathcal{H}}_3$}\label{SFmetr73} \caption{Triangle hypergraphs in Claim \ref{Cmetr7}} \end{figure} \setcounter{figure}{\thefoo} \setcounter{foo}{\theclm} \setcounterref{clm}{Cmetr7} \addtocounter{clm}{-1} \begin{clm} The following statements hold for the hypergraphs $\tilde{\mathcal{H}}_1$, $\tilde{\mathcal{H}}_2$ and $\tilde{\mathcal{H}}_3$ in Figure \ref{Fmetr7}: \begin{enumerate} \item $\tilde{\mathcal{H}}_1$ is metrizable, and for every betweenness structure $\mathcal{A}$ with triangle hypergraph $\tilde{\mathcal{H}}_1$, $\mathcal{A}\simeq\mathcal{T}_{7, 1}$;\label{Emetr71} \item $\tilde{\mathcal{H}}_2$ is not metrizable;\label{Emetr72} \item $\tilde{\mathcal{H}}_3$ is not metrizable.\label{Emetr73} \end{enumerate} \end{clm} \setcounter{clm}{\thefoo} \section{Proof of Lemma \ref{Lsmallgr}} Let $\mathcal{B} = (X,\beta)$ be a quasilinear betweenness structure of order $n\leq 7$ and set $\mathcal{H} =\mathcal{H}(\mathcal{B})$. For the sake of this proof, we will denote the betweenness structure induced by the graph $H_n^i$ from Figure \ref{Fsmallgr} by $\mathcal{A}_n^i$. There are no quasilinear betweenness structures of order $n < 3$, and if $n = 3$, then $\mathcal{B}$ is obviously induced by a triangle. If $n = 4$, then $\mathcal{B}$ is of co-size $1$, so $\mathcal{H}$ is obviously a tight star. Thus, we can apply Lemma \ref{Lgen3} with $c = 3$ to obtain that $\mathcal{B}\simeq\mathcal{Q}_4^3\simeq\mathcal{A}_4^1$ or $\mathcal{B}\simeq\mathcal{R}_{4, 1}^3\simeq\mathcal{S}_4^3\simeq\mathcal{A}_4^2$. \begin{figure} \caption{Case 1} \label{SFthyp1} \caption{Case 2} \label{SFthyp2} \caption{Case 3} \label{SFthyp3} \caption{Case 4} \label{SFthyp4} \caption{Possible triangle hypergraphs of small exceptional quasilinear betweenness structures.
The hyperedges are represented by triangles.} \label{Fthyp} \end{figure} Next, suppose that $5\leq n\leq 7$. Since $$\tau(n, 0)= n - 4$$ for all $n\geq 3$ by Part \ref{Equs11} of Theorem \ref{Tqus1}, we obtain from Lemma \ref{Lgen2a} that $\mathcal{H}$ is a $\Delta$-star. If $n = 5$ and $\mathcal{B}$ is regular or $n\geq 6$ and $\mathcal{H}$ is a tight star, then $\mathcal{B}$ is isomorphic to either $\mathcal{R}_{n, i}^4$ for some $i\in I_n^4$ or to $\mathcal{S}_n^4$, as shown by Lemma \ref{Lgen3} with parameter $c = 4$. Therefore, either $n = 5$ and $\mathcal{B}$ is irregular or $n\geq 6$ and $\mathcal{H}$ is a non-tight $\Delta$-star. The four possible triangle hypergraphs are shown in Figure \ref{Fthyp}. \begin{figure} \caption{Case 1.1} \caption{Case 1.2} \caption{Case 1.3} \caption{Case 1.4} \caption{Cases 1.1--1.4 in the proof of Lemma \ref{Lsmallgr}} \label{Fc1ca-d} \end{figure} \begin{case}{1} In this case, $n = 5$, $X =\{u, v, x, y, z\}$ and $T =\{x, y, z\}$ is the sole triangle of $\mathcal{B}$ (see Figure \ref{SFthyp1}). Because of the case's assumption, there exists a subset $Y\subseteq X$ such that \begin{equation}\label{EQ19} \mathcal{B}\vert_Y\simeq\mathcal{C}_4. \end{equation} Points $u$ and $v$ partition $T$ into three parts: $(\cdot\ u\ v)_\mathcal{B} =\{p\in X : (p\ u\ v)_\mathcal{B}\}$, $(u\cdot v)_\mathcal{B} =\{p\in X : (u\ p\ v)_\mathcal{B}\}$ and $(u\ v\ \cdot)_\mathcal{B} =\{p\in X : (u\ v\ p)_\mathcal{B}\}$. Notice that $T\not\subseteq Y$, therefore, $u, v\in Y$. We obtain from (\ref{EQ19}) that \begin{equation}\label{EQ20} \emph{either }|(u\cdot v)_{\mathcal{B}\vert_Y}|\geq 2\emph{, or }|(\cdot\ u\ v)_{\mathcal{B}\vert_Y}|\geq 1\emph{ and }|(u\ v\ \cdot)_{\mathcal{B}\vert_Y}|\geq 1.
\end{equation} Note that we can replace $\mathcal{B}\vert_Y$ with $\mathcal{B}$ in (\ref{EQ20}), so the following cases are possible up to symmetry (see Figure \ref{Fc1ca-d}): \begin{itemize} \item\textsc{Case} 1.1: $(u\ x\ v)_\mathcal{B}\wedge (u\ y\ v)_\mathcal{B}\wedge (u\ z\ v)_\mathcal{B}$; \item\textsc{Case} 1.2: $(u\ x\ v)_\mathcal{B}\wedge (u\ y\ v)_\mathcal{B}\wedge (u\ v\ z)_\mathcal{B}$; \item\textsc{Case} 1.3: $(x\ u\ v)_\mathcal{B}\wedge (u\ v\ y)_\mathcal{B}\wedge (u\ z\ v)_\mathcal{B}$; \item\textsc{Case} 1.4: $(x\ u\ v)_\mathcal{B}\wedge (u\ v\ y)_\mathcal{B}\wedge (u\ v\ z)_\mathcal{B}$. \end{itemize} \begin{case}{1.1} We can suppose without loss of generality that $Y =\{u, v, x, y\}$. Now, because of (\ref{EQ19}), $(x\ u\ y)_\mathcal{B}$ and $(x\ v\ y)_\mathcal{B}$ hold. The betweenness structures $\mathcal{B} - x$ and $\mathcal{B} - y$ are linear, so they are isomorphic to either $\mathcal{P}_4$ or $\mathcal{C}_4$. We show that \begin{equation}\label{EQ5} \mathcal{B} - x\simeq\mathcal{B} - y\simeq\mathcal{C}_4. \end{equation} Relation $(x\ z\ u)_\mathcal{B}$ cannot hold, otherwise $(x\ z\ u)_\mathcal{B}$ and $(x\ u\ y)_\mathcal{B}$ would imply $(x\ z\ y)_\mathcal{B}$ by f.r.p., in contradiction with $T$ being a triangle. Similarly, we obtain that none of the betweennesses $(x\ z\ v)_\mathcal{B}$, $(y\ z\ u)_\mathcal{B}$ and $(y\ z\ v)_\mathcal{B}$ hold. As a consequence, neither $\mathcal{B} - x$ nor $\mathcal{B} - y$ is isomorphic to $\mathcal{P}_4$. For example, if $\mathcal{B} - x\simeq\mathcal{P}_4$, then because of $(u\ y\ v)_\mathcal{B}$ and $(u\ z\ v)_\mathcal{B}$, $(u\ z\ y)_\mathcal{B}$ or $(y\ z\ v)_\mathcal{B}$ would hold, in contradiction with the previous assertion.
Now, (\ref{EQ5}) yields $(x\ u\ z)_\mathcal{B}$, $(x\ v\ z)_\mathcal{B}$, $(y\ u\ z)_\mathcal{B}$ and $(y\ v\ z)_\mathcal{B}$, therefore, we have $9$ nontrivial betweennesses: $$(u\ x\ v)_\mathcal{B}, (u\ y\ v)_\mathcal{B}, (u\ z\ v)_\mathcal{B}, (x\ u\ y)_\mathcal{B}, (x\ v\ y)_\mathcal{B}$$ $$(x\ u\ z)_\mathcal{B}, (x\ v\ z)_\mathcal{B}, (y\ u\ z)_\mathcal{B}\text{ and } (y\ v\ z)_\mathcal{B}.$$ These betweennesses cover all collinear triples of $\mathcal{B}$ and show that $\mathcal{B}\simeq\mathcal{B}(K_{2, 3}) =\mathcal{A}_5^1$. \end{case} \begin{case}{1.2} It follows from (\ref{EQ20}) that $Y =\{u, v, x, y\}$ and so $(x\ u\ y)_\mathcal{B}$ and $(x\ v\ y)_\mathcal{B}$ hold by (\ref{EQ19}). Relations $(u\ x\ z)_\mathcal{B}$ and $(x\ v\ z)_\mathcal{B}$ follow from $(u\ x\ v)_\mathcal{B}$ and $(u\ v\ z)_\mathcal{B}$ by f.r.p. Similarly, $(u\ y\ z)_\mathcal{B}$ and $(y\ v\ z)_\mathcal{B}$ follow from $(u\ y\ v)_\mathcal{B}$ and $(u\ v\ z)_\mathcal{B}$, hence, considering (\ref{EQ19}) and the case's initial assumptions, we have $9$ non-trivial betweennesses: $$(x\ u\ y)_\mathcal{B}, (u\ y\ v)_\mathcal{B}, (y\ v\ x)_\mathcal{B}, (v\ x\ u)_\mathcal{B}, (u\ v\ z)_\mathcal{B},$$ $$(u\ x\ z)_\mathcal{B}, (x\ v\ z)_\mathcal{B}, (u\ y\ z)_\mathcal{B}\text{ and } (y\ v\ z)_\mathcal{B}.$$ These betweennesses cover all collinear triples of $\mathcal{B}$ and show that $\mathcal{B}\simeq\mathcal{R}_{5, 1}^4$. \end{case} \begin{case}{1.3} It follows from (\ref{EQ20}) that $Y =\{u, v, x, y\}$ and so $(v\ y\ x)_\mathcal{B}$ and $(y\ x\ u)_\mathcal{B}$ hold by (\ref{EQ19}). Further, $(x\ u\ v)_\mathcal{B}$ and $(u\ z\ v)_\mathcal{B}$ imply $(x\ u\ z)_\mathcal{B}$ by f.r.p. Similarly, $(u\ v\ y)_\mathcal{B}$ and $(u\ z\ v)_\mathcal{B}$ imply $(u\ z\ y)_\mathcal{B}$ and $(z\ v\ y)_\mathcal{B}$.
Now, considering the case's assumptions, we have $9$ non-trivial betweennesses: $$(x\ u\ v)_\mathcal{B}, (u\ v\ y)_\mathcal{B}, (u\ z\ v)_\mathcal{B}, (v\ y\ x)_\mathcal{B}, (y\ x\ u)_\mathcal{B}$$ $$(x\ u\ z)_\mathcal{B}, (x\ z\ v)_\mathcal{B}, (u\ z\ y)_\mathcal{B}\text{ and } (z\ v\ y)_\mathcal{B}.$$ These betweennesses cover all collinear triples of $\mathcal{B}$ and show that $\mathcal{B}\simeq\mathcal{S}_5^4$. \end{case} \begin{case}{1.4} It follows from (\ref{EQ20}) that $\mathcal{B} - x\not\simeq\mathcal{C}_4$, so we can assume without loss of generality that $Y =\{u, v, x, y\}$ and hence $(y\ x\ u)_\mathcal{B}$ and $(v\ y\ x)_\mathcal{B}$ hold by (\ref{EQ19}). Moreover, since $\mathcal{B} - x\simeq\mathcal{P}_4$, $(u\ v\ y\ z)_\mathcal{B}$ or $(u\ v\ z\ y)_\mathcal{B}$ is true. However, in case of $(u\ v\ z\ y)_\mathcal{B}$, $(v\ z\ y)_\mathcal{B}$ and $(v\ y\ x)_\mathcal{B}$ would lead to the contradiction $(z\ y\ x)_\mathcal{B}$ by f.r.p., hence, $(u\ v\ y\ z)_\mathcal{B}$ holds and in particular, \begin{equation}\label{EQ5a} (v\ y\ z)_\mathcal{B} \end{equation} must be true. Now, there are two cases to consider. \begin{wcase}{1.4.1}{$\mathcal{B} - y\simeq\mathcal{C}_4$} In this case, relations $(z\ x\ u)_\mathcal{B}$ and $(v\ z\ x)_\mathcal{B}$ hold. However, $(v\ z\ x)_\mathcal{B}$ and (\ref{EQ5a}) imply $(y\ z\ x)_\mathcal{B}$ by f.r.p., which is impossible for $T$ is a triangle. \end{wcase} \begin{wcase}{1.4.2}{$\mathcal{B} - y\simeq\mathcal{P}_4$} Now, $(x\ u\ v\ z)_\mathcal{B}$ holds. However, $(x\ v\ z)_\mathcal{B}$ and (\ref{EQ5a}) lead to a contradiction again. \end{wcase} \end{case} \begin{figure} \caption{Spanner graphs of $5$-point quasilinear betweenness structures} \label{Fqlinord5} \end{figure} In summary, we obtain that the quasilinear betweenness structures on $5$ points are exactly the ones induced by the graphs listed in Figure \ref{Fqlinord5}.
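The conclusion of Case 1.1 can also be double-checked mechanically: the nine betweennesses listed there are exactly the metric betweennesses of the graph $K_{2, 3}$. The following is a minimal sketch of such a check (not part of the original argument; the vertex labels follow Case 1.1, with vertex classes $\{u, v\}$ and $\{x, y, z\}$):

```python
from itertools import permutations

# K_{2,3} with parts {u, v} and {x, y, z}, labelled as in Case 1.1.
V = ['u', 'v', 'x', 'y', 'z']
edges = {frozenset((a, b)) for a in 'uv' for b in 'xyz'}

def d(a, b):
    """Shortest-path distance in K_{2,3}: 0, 1, or 2 (same part => 2)."""
    if a == b:
        return 0
    return 1 if frozenset((a, b)) in edges else 2

# (a b c) is a metric betweenness iff d(a,b) + d(b,c) = d(a,c);
# normalising with a < c counts (a b c) and (c b a) only once.
bet = sorted((a, b, c) for a, b, c in permutations(V, 3)
             if a < c and d(a, b) + d(b, c) == d(a, c))

# Exactly the 9 nontrivial betweennesses of Case 1.1.
assert len(bet) == 9
assert ('u', 'x', 'v') in bet and ('x', 'u', 'y') in bet
print(bet)
```

Running the sketch reproduces the triples $(u\ x\ v)$, $(u\ y\ v)$, $(u\ z\ v)$ together with the six triples whose middle point is $u$ or $v$, matching the list in Case 1.1.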
\end{case} \begin{case}{2} In this case, $n = 6$, $X =\{u, v, w, x, y, z\}$ and there are two triangles in $\mathcal{B}$ that intersect in one vertex: $T =\{x, y, z\}$ and $R =\{x, u, v\}$ (see Figure \ref{SFthyp2}). Now, $\mathcal{B} - x$ is a linear betweenness structure of order $5$, therefore, it is ordered by Proposition \ref{Plin}. Further, $\mathcal{B} - y$, $\mathcal{B} - z$, $\mathcal{B} - u$ and $\mathcal{B} - v$ are quasilinear betweenness structures of order $5$, hence, they are induced by one of the $5$-vertex graphs in Figure \ref{Fqlinord5}. However, the spanner graph cannot be isomorphic to $K_{2, 3}$ because each substructure in question intersects the ordered substructure $\mathcal{B} - x$ in $4$ points and thus has a $4$-point ordered substructure itself. \begin{figure} \caption{Case 2.1} \caption{Case 2.2} \caption{Case 2.3} \caption{Case 2.4} \caption{Cases 2.1--2.4 in the proof of Lemma \ref{Lsmallgr}} \label{Fc2} \end{figure} Consider now $\mathcal{B} - v$. Because of the argument above, it is isomorphic to either $\mathcal{R}_{5, 1}^4$ or $\mathcal{S}_5^4$. As $(\mathcal{B} - v) - x$ must be ordered, we can determine the position of $x$ in the spanner graph of $\mathcal{B} - v$ up to symmetry. After that, $y$ and $z$ must be the two uniquely determined vertices for which $\{x, y, z\}$ forms a triangle ($y$ and $z$ are in symmetric position in $\mathcal{B}$, so they are interchangeable). Finally, we can place $u$ and $w$ in two possible ways. There are four cases altogether shown in Figure \ref{Fc2}. \begin{case}{2.1} The ordering of $\mathcal{B} - x$ is an extension of the ordering of $(\mathcal{B} - x) - v = [u, y, w, z]$. Let $i$ denote the position of $v$ in this extended ordering. \begin{wcase}{2.1.1}{$i = 1$} $(v\ u\ w)_{\mathcal{B} - x}\wedge (u\ x\ w)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (v\ u\ x)_\mathcal{B}$, which is impossible since $R =\{x, u, v\}$ was a triangle.
\end{wcase} \begin{wcase}{2.1.2}{$i = 2$} $(x\ u\ y)_{\mathcal{B} - v}\wedge (u\ v\ y)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ u\ v)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.1.3}{$i = 3$} As the triple $\{v, x, z\}$ is collinear, one of the following cases holds: \begin{itemize} \item\textsc{Case} 2.1.3.1: $(v\ x\ z)_\mathcal{B}$; \item\textsc{Case} 2.1.3.2: $(x\ v\ z)_\mathcal{B}$; \item\textsc{Case} 2.1.3.3: $(x\ z\ v)_\mathcal{B}$. \end{itemize} \begin{case}{2.1.3.1} $(v\ x\ z)_\mathcal{B}\wedge (u\ v\ z)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (u\ v\ x)_\mathcal{B}\ \lightning$ \end{case} \begin{case}{2.1.3.2} $(x\ v\ z)_\mathcal{B}\wedge (u\ x\ z)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (u\ x\ v)_\mathcal{B}\ \lightning$ \end{case} \begin{case}{2.1.3.3} $(x\ z\ v)_\mathcal{B}\wedge (x\ w\ z)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (w\ z\ v)_\mathcal{B}$, contradicting $(v\ w\ z)_{\mathcal{B} - x}$. \end{case} \end{wcase} \begin{wcase}{2.1.4}{$i = 4$} $(u\ w\ v)_{\mathcal{B} - x}\wedge (u\ x\ w)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (u\ x\ v)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.1.5}{$i = 5$} $(u\ w\ v)_{\mathcal{B} - x}\wedge (u\ x\ w)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (u\ x\ v)_\mathcal{B}\ \lightning$ \end{wcase} \end{case} \begin{case}{2.2} Similarly to the previous case, the ordering of $\mathcal{B} - x$ is an extension of the ordering of $(\mathcal{B} - x) - v = [w, y, u, z]$. Let $i$ denote the position of $v$ in this extended order.
\begin{wcase}{2.2.1}{$i = 1$} $(v\ w\ u)_{\mathcal{B} - x}\wedge (w\ x\ u)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (v\ x\ u)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.2.2}{$i = 2$} As the triple $\{v, x, z\}$ is collinear, one of the following cases holds: \begin{itemize} \item\textsc{Case} 2.2.2.1: $(v\ x\ z)_\mathcal{B}$; \item\textsc{Case} 2.2.2.2: $(x\ v\ z)_\mathcal{B}$; \item\textsc{Case} 2.2.2.3: $(x\ z\ v)_\mathcal{B}$. \end{itemize} \begin{case}{2.2.2.1} $(v\ x\ z)_\mathcal{B}\wedge (x\ u\ z)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (v\ x\ u)_\mathcal{B}\ \lightning$ \end{case} \begin{case}{2.2.2.2} $(x\ v\ z)_\mathcal{B}\wedge (v\ u\ z)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ v\ u)_\mathcal{B}\ \lightning$ \end{case} \begin{case}{2.2.2.3} $(x\ z\ v)_\mathcal{B}\wedge (x\ u\ z)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (x\ u\ v)_\mathcal{B}\ \lightning$ \end{case} \end{wcase} \begin{wcase}{2.2.3}{$i = 3$} $(y\ u\ x)_{\mathcal{B} - v}\wedge (y\ v\ u)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (v\ u\ x)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.2.4}{$i = 4$} $(x\ u\ z)_{\mathcal{B} - v}\wedge (u\ v\ z)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ u\ v)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.2.5}{$i = 5$} $(w\ u\ v)_{\mathcal{B} - x}\wedge (w\ x\ u)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (x\ u\ v)_\mathcal{B}\ \lightning$ \end{wcase} \end{case} \begin{case}{2.3} Similarly to the previous cases, the ordering of $\mathcal{B} - x$ is an extension of the ordering of $(\mathcal{B} - x) - v = [u, y, w, z]$. Let $i$ denote the position of $v$ in this extended order.
\begin{wcase}{2.3.1}{$i = 1$} $(v\ u\ z)_{\mathcal{B} - x}\wedge (u\ x\ z)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (v\ u\ x)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.3.2}{$i = 2$} $(x\ u\ y)_{\mathcal{B} - v}\wedge (u\ v\ y)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ u\ v)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.3.3}{$i = 3$} $(x\ u\ w)_{\mathcal{B} - v}\wedge (u\ v\ w)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ u\ v)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.3.4}{$i = 4$} As the triple $\{v, x, y\}$ is collinear, one of the following cases holds: \begin{itemize} \item\textsc{Case} 2.3.4.1: $(v\ x\ y)_\mathcal{B}$; \item\textsc{Case} 2.3.4.2: $(x\ v\ y)_\mathcal{B}$; \item\textsc{Case} 2.3.4.3: $(x\ y\ v)_\mathcal{B}$. \end{itemize} \begin{case}{2.3.4.1} $(v\ x\ y)_\mathcal{B}\wedge (x\ u\ y)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (v\ x\ u)_\mathcal{B}\ \lightning$ \end{case} \begin{case}{2.3.4.2} $(x\ v\ y)_\mathcal{B}\wedge (x\ y\ w)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (v\ y\ w)_\mathcal{B}$, contradicting $(y\ w\ v)_{\mathcal{B} - x}$. \end{case} \begin{case}{2.3.4.3} $(x\ y\ v)_\mathcal{B}\wedge (x\ u\ y)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (x\ u\ v)_\mathcal{B}\ \lightning$ \end{case} \end{wcase} \begin{wcase}{2.3.5}{$i = 5$} $(u\ z\ v)_{\mathcal{B} - x}\wedge (u\ x\ z)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (u\ x\ v)_\mathcal{B}\ \lightning$ \end{wcase} \end{case} \begin{case}{2.4} Similarly to the previous cases, the ordering of $\mathcal{B} - x$ is an extension of the ordering of $(\mathcal{B} - x) - v = [w, y, u, z]$. Let $i$ denote the position of $v$ in this extended order.
\begin{wcase}{2.4.1}{$i = 1$} As the triple $\{v, x, y\}$ is collinear, one of the following cases holds: \begin{itemize} \item\textsc{Case} 2.4.1.1: $(v\ x\ y)_\mathcal{B}$; \item\textsc{Case} 2.4.1.2: $(x\ v\ y)_\mathcal{B}$; \item\textsc{Case} 2.4.1.3: $(x\ y\ v)_\mathcal{B}$. \end{itemize} \begin{case}{2.4.1.1} $(v\ x\ y)_\mathcal{B}\wedge (v\ y\ u)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (v\ x\ u)_\mathcal{B}\ \lightning$ \end{case} \begin{case}{2.4.1.2} $(x\ v\ y)_\mathcal{B}\wedge (x\ y\ u)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (x\ v\ u)_\mathcal{B}\ \lightning$ \end{case} \begin{case}{2.4.1.3} $(x\ y\ v)_\mathcal{B}\wedge (x\ w\ y)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (w\ y\ v)_\mathcal{B}$, contradicting $(v\ w\ y)_{\mathcal{B} - x}$. \end{case} \end{wcase} \begin{wcase}{2.4.2}{$i = 2$} $(x\ w\ u)_{\mathcal{B} - v}\wedge (w\ v\ u)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ v\ u)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.4.3}{$i = 3$} $(x\ y\ u)_{\mathcal{B} - v}\wedge (y\ v\ u)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ v\ u)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.4.4}{$i = 4$} $(u\ z\ x)_{\mathcal{B} - v}\wedge (u\ v\ z)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (u\ v\ x)_\mathcal{B}\ \lightning$ \end{wcase} \begin{wcase}{2.4.5}{$i = 5$} As the triple $\{v, x, y\}$ is collinear, one of the following cases holds: \begin{itemize} \item\textsc{Case} 2.4.5.1: $(v\ x\ y)_\mathcal{B}$; \item\textsc{Case} 2.4.5.2: $(x\ v\ y)_\mathcal{B}$; \item\textsc{Case} 2.4.5.3: $(x\ y\ v)_\mathcal{B}$. \end{itemize} \begin{case}{2.4.5.1} $(v\ x\ y)_\mathcal{B}\wedge (x\ w\ y)_{\mathcal{B} - v}\overset{\text{f.r.p.}}{\Rightarrow} (v\ w\ y)_\mathcal{B}$, contradicting $(w\ y\ v)_{\mathcal{B} - x}$.
\end{case} \begin{case}{2.4.5.2} $(x\ v\ y)_\mathcal{B}\wedge (y\ u\ v)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ v\ u)_\mathcal{B}\ \lightning$ \end{case} \begin{case}{2.4.5.3} $(x\ y\ v)_\mathcal{B}\wedge (y\ u\ v)_{\mathcal{B} - x}\overset{\text{f.r.p.}}{\Rightarrow} (x\ u\ v)_\mathcal{B}\ \lightning$ \end{case} \end{wcase} \end{case} \end{case} \begin{case}{3} In this case, $n = 6$, $X =\{u, v, w, x, y, z\}$ and there are two disjoint triangles in $\mathcal{B}$: $T =\{x, y, z\}$ and $R =\{u, v, w\}$ (see Figure \ref{SFthyp3}). Now, for any point $p\in X$, $\mathcal{B} - p$ is a quasilinear betweenness structure of order $5$. \begin{wcase}{3.1}{$\mathcal{B}$ contains an ordered substructure on $4$ points} We can suppose without loss of generality that $(\mathcal{B} - x) - u$ is an ordered substructure. We know from Case 1 that there are two betweenness structures, $\mathcal{R}_{5, 1}^4$ and $\mathcal{S}_5^4$, of order $5$ that contain an ordered substructure on $4$ points. Thus, considering $\mathcal{B} - x$ and $\mathcal{B} - u$, there are three cases up to symmetry: \begin{itemize} \item\textsc{Case} 3.1.1: $\mathcal{B} - x\simeq\mathcal{B} - u\simeq\mathcal{R}_{5, 1}^4$; \item\textsc{Case} 3.1.2: $\mathcal{B} - x\simeq\mathcal{R}_{5, 1}^4$, $\mathcal{B} - u\simeq\mathcal{S}_5^4$; \item\textsc{Case} 3.1.3: $\mathcal{B} - x\simeq\mathcal{B} - u\simeq \mathcal{S}_5^4$. \end{itemize} \begin{figure} \caption{Case 3.1.1} \caption{Case 3.1.2} \caption{Case 3.1.3} \caption{Cases 3.1.1--3.1.3 in the proof of Lemma \ref{Lsmallgr}} \label{SFc31} \label{SFc32} \label{SFc33} \end{figure} \begin{case}{3.1.1} Since $R$ is a triangle and $(\mathcal{B} - x) - u$ is ordered, we can suppose without loss of generality that $\mathcal{B} - x$ is induced by the corresponding graph shown in Figure \ref{SFc31}.
Similarly, as $(\mathcal{B} - u) - x$ is ordered, $T$ is a triangle and $(y\ v\ z\ w)_{\mathcal{B} - x}$ holds by the previous argument, $\mathcal{B} - u$ must be induced by the corresponding graph shown in Figure \ref{SFc31}. Next, observe the following. \begin{clm}\label{C4equ}$ $ \begin{enumerate} \item $(x\ y\ u)_\mathcal{B}\Leftrightarrow (x\ v\ u)_\mathcal{B}\Leftrightarrow (x\ z\ u)_\mathcal{B}\Leftrightarrow (x\ w\ u)_\mathcal{B}$;\label{E4equ1} \item $\neg (y\ x\ u)_\mathcal{B}\wedge\neg (x\ u\ v)_\mathcal{B}\wedge\neg (z\ x\ u)_\mathcal{B}\wedge\neg (x\ u\ w)_\mathcal{B}$.\label{E4equ2} \end{enumerate} \end{clm} \begin{prf} \ref{E4equ1}. We only show the first equivalence $(x\ y\ u)_\mathcal{B}\Leftrightarrow (x\ v\ u)_\mathcal{B}$. The equivalences $(x\ v\ u)_\mathcal{B}\Leftrightarrow (x\ z\ u)_\mathcal{B}$ and $(x\ z\ u)_\mathcal{B}\Leftrightarrow (x\ w\ u)_\mathcal{B}$ can be proved in a similar way. Relations $(x\ y\ u)_{\mathcal{B}}$ and $(x\ v\ y)_{\mathcal{B} - u}$ imply $(x\ v\ u)_\mathcal{B}$ by f.r.p. Conversely, $(x\ v\ u)_{\mathcal{B}}$ and $(v\ y\ u)_{\mathcal{B} - x}$ imply $(x\ y\ u)_\mathcal{B}$. \ref{E4equ2}. We prove $\neg (y\ x\ u)_\mathcal{B}$ as an example. Relation $(y\ u\ z)_{\mathcal{B} - x}$ holds, so $(y\ x\ u)_\mathcal{B}$ would imply $(y\ x\ z)_\mathcal{B}$ by f.r.p., a contradiction. $\square$ \end{prf} Now, if all of the four betweennesses in Part \ref{E4equ1} of Claim \ref{C4equ} hold, then with the $14$ betweennesses from $\mathcal{B} - x$ and $\mathcal{B} - u$, we have altogether $18$ non-trivial betweennesses that form a betweenness structure isomorphic to $\mathcal{A}_6^2$. Otherwise, Part \ref{E4equ2} of Claim \ref{C4equ} yields $(x\ u\ y)_\mathcal{B}$, $(u\ x\ v)_\mathcal{B}$, $(x\ u\ z)_\mathcal{B}$ and $(u\ x\ w)_\mathcal{B}$ because the underlying triples must be collinear.
Now, with the $14$ betweennesses from $\mathcal{B} - x$ and $\mathcal{B} - u$, we have altogether $18$ non-trivial betweennesses that form a betweenness structure isomorphic to $\mathcal{A}_6^3$. \end{case} \begin{case}{3.1.2} As $R =\{u, v, w\}$ is a triangle and $(\mathcal{B} - x) - u$ is ordered, we can assume without loss of generality that $\mathcal{B} - x$ is induced by the corresponding graph in Figure \ref{SFc32}. Similarly, as $(\mathcal{B} - u) - x$ is ordered and $(y\ v\ z\ w)_{\mathcal{B} - x}$ holds by the previous argument, $\mathcal{B} - u$ is induced by the corresponding graph in Figure \ref{SFc32}. Observe that $(\mathcal{B} - x) - y$ is induced by a star and $(\mathcal{B} - u) - y$ is induced by a path, hence, we obtain from $(x\ w\ z\ v)_{\mathcal{B} - u}$ that $$\mathcal{B} - y\simeq\mathcal{R}_{5, 1}^4$$ and its spanner graph is the one shown in Figure \ref{SFc32}. Finally, $(u\ x\ w)_{\mathcal{B} - y}$ and $(y\ u\ w)_{\mathcal{B} - x}$ imply $(y\ u\ x)_\mathcal{B}$ by f.r.p. Together with the $17$ betweennesses from $\mathcal{B} - x$, $\mathcal{B} - u$ and $\mathcal{B} - y$, we have altogether $18$ non-trivial betweennesses that form a betweenness structure isomorphic to $\mathcal{A}_6^2$. \end{case} \begin{case}{3.1.3} As $R$ is a triangle and $(\mathcal{B} - x) - u$ is ordered, we can suppose without loss of generality that the first graph in Figure \ref{SFc33} induces $\mathcal{B} - x$. Similarly, as $(\mathcal{B} - u) - x$ is ordered and $(y\ v\ z\ w)_{\mathcal{B} - x}$ holds by the previous argument, $\mathcal{B} - u$ is induced by the corresponding graph in Figure \ref{SFc33}. It is also true that $$\mathcal{B} - y\simeq\mathcal{S}_5^4,$$ otherwise we would be back to Case 3.1.2. Now, because of $(x\ w\ z\ v)_{\mathcal{B} - u}$, $\mathcal{B} - y$ is induced by the third graph in Figure \ref{SFc33}. Finally, $(y\ u\ w)_{\mathcal{B} - x}$ and $(u\ x\ w)_{\mathcal{B} - y}$ imply $(y\ u\ x)_\mathcal{B}$ by f.r.p.
Together with the $17$ betweennesses from $\mathcal{B} - x$, $\mathcal{B} - u$ and $\mathcal{B} - y$, we have $18$ non-trivial betweennesses that form a betweenness structure isomorphic to $\mathcal{A}_6^1\simeq\mathcal{C}_6$. \end{case} \end{wcase} \begin{wcase}{3.2}{$\mathcal{B}$ does not contain any ordered substructures on $4$ points} In this case, for all $p\in X$, $$\mathcal{B} - p\simeq\mathcal{B}(K_{2, 3}),$$ and the unique triangle must be the class of size $3$ of $K_{2, 3}$. Thus, we can determine the spanner graphs of all the substructures $\mathcal{B} - p$, $p\in X$, from which we obtain all betweennesses of $\mathcal{B}$. It follows that $\mathcal{B}$ is isomorphic to $\mathcal{A}_6^4\simeq\mathcal{B}(K_{3, 3})$. \end{wcase} To summarize Case 3, $\mathcal{B}$ is induced by a graph isomorphic to one of the $6$-vertex graphs in Figure \ref{Fsmallgr}. \end{case} \begin{case}{4} In this case, $n = 7$, $X =\{p, u, v, w, x, y, z\}$ and there are three triangles in $\mathcal{B}$ that intersect in one point: $\{x, y, z\},\{x, u, v\}$ and $\{x, p, w\}$ (see Figure \ref{SFthyp4}). Now, $\mathcal{B} - y$ is a quasilinear betweenness structure of order $6$ with two triangles intersecting in $1$ point, hence, it belongs to Case 2. However, we have already shown that Case 2 is impossible. $\square$ \end{case} \begin{figure} \caption{Case 1} \caption{Case 2} \caption{Case 3} \caption{Cases 1--3 in the proof of Claim \ref{Cn5}} \label{SFn5a} \label{SFn5b} \label{SFn5c} \end{figure} \section{Proof of Claim \ref{Cn5}} Suppose to the contrary that $\mathcal{H}(\mathcal{B})$ is not a tight star. Let $x, y, z, u$ and $v$ be the points of $\mathcal{B}$ such that $R =\{x, u, v\}$ and $T =\{x, y, z\}$ are the two triangles of $\mathcal{B}$; then clearly $|R\cap T| = 1$.
Since $\mathcal{B}$ is regular by assumption, $\mathcal{B} - x$ must be ordered, so one of the following cases holds up to symmetry: \begin{itemize} \item\textsc{Case} 1: $(u\ v\ y\ z)_\mathcal{B}$; \item\textsc{Case} 2: $(u\ y\ v\ z)_\mathcal{B}$; \item\textsc{Case} 3: $(u\ y\ z\ v)_\mathcal{B}$. \end{itemize} Before we analyze these cases, observe that for any point $p\neq x$, $\mathcal{B} - p$ is a quasilinear betweenness structure on $4$ points, thus, it is induced by one of the two $4$-vertex graphs in Figure \ref{Fsmallgr}. Further, notice that each betweenness of $\mathcal{A}_4^1$ has its middle point outside of the unique triangle. \begin{case}{1} Since $T$ is the only triangle of $\mathcal{B} - u$ and $(v\ y\ z)_\mathcal{B}$ holds by the case's assumption, $\mathcal{B} - u$ must be induced by the graph in Figure \ref{SFn5a} and consequently, $(v\ x\ z)_{\mathcal{B} - u}$ holds. However, this and $(u\ v\ z)_\mathcal{B}$ imply $(u\ v\ x)_\mathcal{B}$ by f.r.p., in contradiction with $R$ being a triangle. \end{case} \begin{case}{2} Similarly to the previous case, $\mathcal{B} - u$ is induced by the corresponding graph in Figure \ref{SFn5b}, so $(y\ x\ v)_{\mathcal{B} - u}$ holds. However, this and $(u\ y\ v)_\mathcal{B}$ imply $(u\ x\ v)_\mathcal{B}$ by f.r.p. in contradiction with $R$ being a triangle. \end{case} \begin{case}{3} As $(u\ z\ v)_\mathcal{B}$ holds by the case's assumption, $\mathcal{B} - y$ and $\mathcal{B} - v$ must be induced by the corresponding graphs in Figure \ref{SFn5c} and thus $(x\ z\ v)_{\mathcal{B} - y}$ and $(x\ u\ y)_{\mathcal{B} - v}$ hold. From these betweennesses and the case's assumption, we obtain that $\mathcal{B} - u$ and $\mathcal{B} - z$ are induced by the corresponding graphs in Figure \ref{SFn5c}. But now, $(x\ y\ v)_{\mathcal{B} - u}$ contradicts $(x\ v\ y)_{\mathcal{B} - z}$. $\square$ \end{case} \begin{figure} \caption{Vertex-labeled spanner graphs of quasilinear betweenness structures on $6$ points.
The kernel of each triangle hypergraph is encircled by a dashed line.} \label{Fapi} \end{figure} \section{Proof of Claim \ref{Cmetr7}} For six distinct points $p_1, p_2, p_3, p_4, p_5, p_6$, let $\mathcal{R}_{6, 1}^4(p_1, p_2; p_3, p_4, p_5, p_6)$, $\mathcal{R}_{6, 2}^4(p_1,\allowbreak p_2;\allowbreak p_3,\allowbreak p_4,\allowbreak p_5,\allowbreak p_6)$ and $\mathcal{S}_6^4(p_1, p_2; p_3, p_4, p_5, p_6)$ denote the betweenness structures induced by the graphs shown in Figure \ref{Fapi}. \begin{case}{\ref{Emetr71}} It is clear that $\tilde{\mathcal{H}}_1$ is metrizable as $\mathcal{H}(\mathcal{T}_{7, 1})\simeq\tilde{\mathcal{H}}_1$. Next, let $\mathcal{A}$ be a betweenness structure such that $\mathcal{H}(\mathcal{A}) =\tilde{\mathcal{H}}_1$ and label the points of $\tilde{\mathcal{H}}_1$ as in Figure \ref{SFmetr71}. We show that $\mathcal{A}\simeq\mathcal{T}_{7, 1}$. It can be easily seen that $\mathcal{A} - x$, $\mathcal{A} - w$, $\mathcal{A} - y$ and $\mathcal{A} - z$ are all quasilinear betweenness structures of order $6$ and their triangle hypergraphs are tight stars, hence, they are isomorphic to either $\mathcal{R}_{6, 1}^4$, $\mathcal{R}_{6, 2}^4$ or $\mathcal{S}_6^4$ by Lemma \ref{Lgen3}. Observe the following. \begin{obs}\label{O6qlin} Let $\mathcal{B}\in B(6, 2)$ be a betweenness structure on ground set $X =\{p_1, p_2, p_3, p_4, p_5, q\}$ such that $(p_1\ p_2\ p_3\ p_4\ p_5)_\mathcal{B}$ holds and $\mathcal{H}(\mathcal{B})$ is a tight star with kernel $K =\{p_i, q\}$. Then \begin{enumerate} \item $i = 1\Rightarrow\mathcal{B} =\mathcal{S}_6^4(p_1, q; p_2, p_3, p_4, p_5)$; \item $i = 2\Rightarrow\mathcal{B} =\mathcal{R}_{6, 1}^4(q, p_2; p_1, p_3, p_4, p_5)$; \item $i = 3\Rightarrow\mathcal{B} =\mathcal{R}_{6, 2}^4(q, p_3; p_1, p_2, p_4, p_5)$. \end{enumerate} (Cases $i = 4$ and $i = 5$ can be obtained by symmetry.)
\end{obs} Suppose first that one of the betweenness structures $\mathcal{A} - q'$, $q'\in\{x, y, z, w\}$ is isomorphic to $\mathcal{R}_{6, 1}^4$. We may assume that $q' = x$. Then $$\mathcal{A} - x =\mathcal{R}_{6, 1}^4(y, z; p_3, p_4, p_5, p_6)$$ where $$\{p_3, p_4\} =\{u, w\}\emph{ and }\{p_5, p_6\} =\{p, v\},$$ so there are four possibilities. If $\mathcal{A} - x =\mathcal{R}_{6, 1}^4(y, z; w, u, p, v)$, then $(w\ z\ u\ p\ v)_\mathcal{A}$ holds, hence, $\mathcal{A} - y =\mathcal{S}_6^4(w, x; z, u, p, v)$ by Observation \ref{O6qlin}. Further, since $\{y, z\} =\ker(\mathcal{H}(\mathcal{A} - w))$ and $(y\ u\ p\ v)_{\mathcal{A} - x}$ and $(z\ u\ p\ v)_{\mathcal{A} - x}$ hold, $\mathcal{A} - w$ must be $\mathcal{R}_{6, 1}^4(y, z; x, u, p, v).$ Now, $(x\ u\ p)_{\mathcal{A} - w}$ holds in contradiction with $(u\ p\ x)_{\mathcal{A} - y}$. If $\mathcal{A} - x =\mathcal{R}_{6, 1}^4(y, z; w, u, v, p)$, then similarly to the previous case, $\mathcal{A} - y =\mathcal{S}_6^4(w, x; z, u, v, p)$ and $\mathcal{A} - w =\mathcal{R}_{6, 1}^4(y, z; x, u, v, p)$, which contradict each other on the triple $\{u, p, x\}$. If $\mathcal{A} - x =\mathcal{R}_{6, 1}^4(y, z; u, w, p, v)$, then $\mathcal{A} - y$ must be $\mathcal{R}_{6, 2}^4(x, w; u, z, p, v)$, which contradicts the fact that $\{x, w, p\}$ is a triangle. Lastly, we show that if $\mathcal{A} - x =\mathcal{R}_{6, 1}^4(y, z; u, w, v, p)$, then $\mathcal{A}\simeq\mathcal{T}_{7, 1}$. It is easy to see by Observation \ref{O6qlin} that $\mathcal{A} - y =\mathcal{R}_{6, 2}^4(x, w; u, z, v, p)$ and $\mathcal{A} - z =\mathcal{R}_{6, 2}^4(x, w; u, y, v, p)$. Both imply that $\mathcal{A} - w =\mathcal{R}_{6, 1}^4(y, z; u, x, v, p)$. These substructures are consistent with one another and cover all collinear triples of $\mathcal{A}$. It is easy to see now that $\mathcal{A}$ is induced by the vertex-labeled $T_{7, 1}$ in Figure \ref{SFtpi1}.
Next, suppose that none of the betweenness structures $\mathcal{A} - q'$, $q'\in\{x, y, z, w\}$ is isomorphic to $\mathcal{R}_{6, 1}^4$, but one of them, for example $\mathcal{A} - x$, is isomorphic to $\mathcal{R}_{6, 2}^4$. Then $$\mathcal{A} - x =\mathcal{R}_{6, 2}^4(y, z; p_3, p_4, p_5, p_6)$$ where $$\{p_3, p_6\} =\{p, v\}\emph{ and }\{p_4, p_5\} =\{u, w\},$$ so there are two possibilities up to symmetry. If $\mathcal{A} - x =\mathcal{R}_{6, 2}^4(y, z; p, u, w, v)$, then $\mathcal{A} - y =\mathcal{R}_{6, 1}^4(x, w; v, z, u, p)$, while if $\mathcal{A} - x =\mathcal{R}_{6, 2}^4(y, z; p, w, u, v)$, then $\mathcal{A} - y =\mathcal{R}_{6, 1}^4(x, w; p, z, u, v)$ by Observation \ref{O6qlin}. In both cases, $\mathcal{A} - y$ contradicts our previous assumption on $\mathcal{A} - q'$. Finally, suppose that all of the betweenness structures $\mathcal{A} - q'$, $q'\in\{x, y, z, w\}$ are isomorphic to $\mathcal{S}_6^4$. Then $$\mathcal{A} - x =\mathcal{S}_6^4(y, z; p_3, p_4, p_5, p_6)$$ where $$\{p_3, p_6\} =\{u, w\}\emph{ and }\{p_4, p_5\} =\{p, v\},$$ so there are two possibilities up to symmetry. If $\mathcal{A} - x =\mathcal{S}_6^4(y, z; u, p, v, w)$, then $\mathcal{A} - y =\mathcal{R}_{6, 1}^4(x, w; z, v, p, u)$, while if $\mathcal{A} - x =\mathcal{S}_6^4(y, z; u, v, p, w)$, then $\mathcal{A} - y =\mathcal{R}_{6, 1}^4(x, w; z, p, v, u)$ by Observation \ref{O6qlin}. In both cases, $\mathcal{A} - y$ contradicts our previous assumption on $\mathcal{A} - q'$. In summary, we can conclude that $\mathcal{A}\simeq\mathcal{T}_{7, 1}$. \end{case} \begin{case}{\ref{Emetr72}} Suppose to the contrary that $\tilde{\mathcal{H}}_2$ is metrizable and let $\mathcal{A}$ be a betweenness structure such that $\mathcal{H}(\mathcal{A}) =\tilde{\mathcal{H}}_2$. Label the points of $\tilde{\mathcal{H}}_2$ as in Figure \ref{SFmetr72}. Notice that both $\mathcal{A} - x$ and $\mathcal{A} - q$ are quasilinear betweenness structures with $2$ triangles that form a tight star.
The next observation follows from Theorem \ref{Tqus1}. \begin{obs}\label{Okern} Let $\mathcal{A}$ be a quasilinear betweenness structure of order $n\geq 6$ such that $\mathcal{H}(\mathcal{A})$ is a tight star. Then there is exactly one cyclic line in $\mathcal{A}$ and it contains $\ker(\mathcal{H}(\mathcal{A}))$. \end{obs} We obtain from Observation \ref{Okern} that there is exactly one cyclic line $L_x$ in $\mathcal{A} - x$, and it contains the kernel $\{y, z\}$. Further, $L_x$ does not contain $p$ or $q$ because $\{y, z, p\}$ and $\{y, z, q\}$ are triangles. Hence, $L_x$ is a cyclic line in $\mathcal{A} - q$ as well that does not contain $p$, contradicting Observation \ref{Okern}. \end{case} \begin{case}{\ref{Emetr73}} Suppose to the contrary that $\tilde{\mathcal{H}}_3$ is metrizable and let $\mathcal{A}$ be a betweenness structure such that $\mathcal{H}(\mathcal{A}) =\tilde{\mathcal{H}}_3$. Label the points of $\tilde{\mathcal{H}}_3$ as in Figure \ref{SFmetr73}. Observe that $\mathcal{A} - x$ and $\mathcal{A} - z$ are quasilinear betweenness structures on $6$ points with $2$ triangles that form a tight star, hence, they are isomorphic to either $\mathcal{R}_{6, 1}^4$, $\mathcal{R}_{6, 2}^4$ or $\mathcal{S}_6^4$. If $\mathcal{A} - x\simeq\mathcal{R}_{6, 1}^4$, then we can assume by symmetry that $\mathcal{A} - x =\mathcal{R}_{6, 1}^4(y, z; u, v, p, q)$. It is also easy to see by Observation \ref{O6qlin} that $\mathcal{A} - z =\mathcal{R}_{6, 1}^4(y,\allowbreak x;\allowbreak u,\allowbreak v,\allowbreak p,\allowbreak q)$, from which $(y\ u\ x)_\mathcal{A}$ follows. This is, however, impossible as $\{u, x, y\}$ is a triangle. Next, if $\mathcal{A} - x\simeq\mathcal{R}_{6, 2}^4$, then we can assume by symmetry that $\mathcal{A} - x =\mathcal{R}_{6, 2}^4(y, z; p, u, v, q)$. It is easy to see that $\mathcal{A} - z =\mathcal{R}_{6, 2}^4(y, x; p, u, v, q)$ by Observation \ref{O6qlin}, from which the contradiction $(y\ u\ x)_{\mathcal{A}}$ follows again.
Finally, if $\mathcal{A} - x\simeq\mathcal{S}_6^4$, then we can assume by symmetry that $\mathcal{A} - x =\mathcal{S}_6^4(y, z; u, p, q, v)$. Now, we obtain from Observation \ref{O6qlin} that $\mathcal{A} - z =\mathcal{S}_6^4(y, x; u, p, q, v)$, from which $(u\ y\ x)_{\mathcal{A}}$ follows, leading to a contradiction again. $\square$ \end{case} \end{document}
\begin{document} \begin{titlepage} \title{On an Incompressible Navier-Stokes/Cahn-Hilliard System with Degenerate Mobility} \author{ Helmut Abels\footnote{Fakult\"at f\"ur Mathematik, Universit\"at Regensburg, 93040 Regensburg, Germany, e-mail: {\sf [email protected]}}, Daniel Depner\footnote{Fakult\"at f\"ur Mathematik, Universit\"at Regensburg, 93040 Regensburg, Germany, e-mail: {\sf [email protected]}}, and Harald Garcke\footnote{Fakult\"at f\"ur Mathematik, Universit\"at Regensburg, 93040 Regensburg, Germany, e-mail: {\sf [email protected]}}} \date{} \end{titlepage} \maketitle \begin{abstract} We prove existence of weak solutions for a diffuse interface model for the flow of two viscous incompressible Newtonian fluids in a bounded domain by allowing for a degenerate mobility. The model has been developed by Abels, Garcke and Gr\"un for fluids with different densities and leads to a solenoidal velocity field. It is given by a non-homogeneous Navier-Stokes system with a modified convective term coupled to a Cahn-Hilliard system, such that an energy estimate is fulfilled which follows from the fact that the model is thermodynamically consistent. \end{abstract} \noindent{\bf Key words:} Two-phase flow, Navier-Stokes equations, diffuse interface model, mixtures of viscous fluids, Cahn-Hilliard equation, degenerate mobility \noindent{\bf AMS-Classification:} Primary: 76T99; Secondary: 35Q30, 35Q35, 76D03, 76D05, 76D27, 76D45 \section{Introduction} \label{intro} Classically the interface between two immiscible, viscous fluids has been modelled in the context of sharp interface approaches, see e.g. \cite{Mue85}. But in the context of sharp interface models it is difficult to describe topological changes, such as pinch-off and situations where different interfaces or different parts of an interface connect.
In the last 20 years phase field approaches have become a promising approach to model interfacial evolution in situations where interfacial energy effects are important, see e.g. \cite{Che02}. In phase field approaches a phase field or order parameter is introduced which rapidly changes its value in the interfacial region and attains two prescribed values away from the interface. For two-phase flow of immiscible, viscous fluids a phase-field approach was first introduced by Hohenberg and Halperin \cite{HH77}, the so-called ``Model H''. In their work the Cahn-Hilliard equation was coupled to the Navier-Stokes system in such a way that capillary forces on the interface are modelled with the help of the phase field. The approach of Hohenberg and Halperin \cite{HH77} was restricted to the case where the densities of the two fluids are the same or at least very close (``matched densities''). It was later shown by Gurtin, Polignone and Vi\~{n}als \cite{GPV96} that the model can be derived in the context of rational thermodynamics. In particular, global and local energy inequalities hold. These global energy estimates can be used to derive a priori estimates, which has been exploited by Boyer~\cite{Boy99} and by Abels~\cite{Abe09b} for proofs of existence results. Often the densities in two-phase flow are quite different. Therefore, there have been several attempts to derive phase field models for two-phase flow with non-matched densities. Lowengrub and Truskinovsky \cite{LT98} derived a first thermodynamically consistent phase field model for the case of different densities. The model of Lowengrub and Truskinovsky is based on a barycentric velocity and hence the overall velocity field turns out not to be divergence free in general. In addition, the pressure enters the Cahn-Hilliard equation and as a result the coupling between the Cahn-Hilliard equation and the Navier-Stokes equations is quite strong.
This and the fact that the velocity field is not divergence free make numerical and analytical approaches quite difficult. To the authors' knowledge there have been so far no numerical simulations for the full Lowengrub-Truskinovsky model. With respect to analytical results we refer to the works of Abels~\cite{Abe09a, Abe12} for existence results. In a paper by Ding, Spelt and Shu \cite{DSS07} a generalization of Model H for non-matched densities and a divergence free velocity field has been derived. However, it is not known whether this model is thermodynamically consistent. A first phase field model for non-matched densities and a divergence free velocity field which in addition fulfills local and hence global free energy inequalities has been derived by Abels, Garcke and Gr\"un \cite{AGG12}. The model in \cite{AGG12} is given by the following system of Navier-Stokes/Cahn-Hilliard equations: \begin{align*} \partial_t (\rho(\varphi) \mathbf{v}) + \operatorname{div} ( \mathbf{v} \otimes(\rho(\varphi) \mathbf{v} + \widetilde{\mathbf{J}})) &- \operatorname{div} (2 \eta(\varphi) D \mathbf{v}) + \nabla p & \\ & = - \operatorname{div}(a(\varphi) \nabla \varphi \otimes \nabla \varphi)& \mbox{in } \, Q_T , \\ \operatorname{div} \, \mathbf{v} &= 0& \mbox{in } \, Q_T , \\ \partial_t \varphi + \mathbf{v} \cdot \nabla \varphi &= \operatorname{div}\left(m(\varphi) \nabla \mu \right)& \mbox{in } \, Q_T ,\\ \mu = \Psi'(\varphi) + a'(\varphi) \frac{|\nabla \varphi|^2}{2} &- \operatorname{div}\left( a(\varphi) \nabla \varphi \right)& \mbox{in } \, Q_T , \end{align*} where $\widetilde{\mathbf{J}} = -\frac{\tilde{\rho}_2 - \tilde{\rho}_1}{2} m(\varphi) \nabla \mu$, $Q_T=\Omega\times(0,T)$ for $0<T<\infty$, and $\Omega \subset \mathbb{R}^d$, $d=2,3$, is a sufficiently smooth bounded domain.
We close the system with the boundary and initial conditions \begin{alignat*}{2} \mathbf{v}|_{\partial \Omega} &= 0 &\qquad& \text{on}\ \partial\Omega\times (0,T), \\ \partial_n \varphi|_{\partial \Omega} = \partial_n \mu|_{\partial \Omega} &= 0&& \text{on}\ \partial\Omega\times (0,T), \\ \left(\mathbf{v} , \varphi \right)|_{t=0} &= \left( \mathbf{v}_0 , \varphi_0 \right) &&\text{in}\ \Omega, \end{alignat*} where $\partial_n \varphi = n\cdot \nabla \varphi$ and $n$ denotes the exterior normal at $\partial\Omega$. Here $\mathbf{v}$ is the volume averaged velocity, $\rho=\rho(\varphi)$ is the density of the mixture of the two fluids, $\varphi$ is the difference of the volume fractions of the two fluids and we assume a constitutive relation between $\rho$ and the order parameter $\varphi$ given by $\rho(\varphi) = \frac{1}{2} (\tilde{\rho}_1 + \tilde{\rho}_2) + \frac{1}{2} (\tilde{\rho}_2 - \tilde{\rho}_1) \varphi$, see \cite{ADG12} for details. In addition, $p$ is the pressure, $\mu$ is the chemical potential associated to $\varphi$ and $\tilde{\rho}_1$, $\tilde{\rho}_2$ are the specific constant mass densities of the unmixed fluids. Moreover, ${D}\mathbf{v}= \frac12(\nabla \mathbf{v} + \nabla \mathbf{v}^T)$, $\eta(\varphi)>0$ is a viscosity coefficient, and $m(\varphi) \geq 0$ is a degenerate mobility coefficient. Furthermore, $\Psi(\varphi)$ is the homogeneous free energy density for the mixture and the (total) free energy of the system is given by \begin{align*} E_{\mbox{\footnotesize free}}(\varphi) = \int_{\Omega} \left( \Psi(\varphi) + a(\varphi)\frac{|\nabla \varphi|^2}{2} \right)\, d x \end{align*} for some positive coefficient $a(\varphi)$.
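As a quick consistency check (ours, not part of the original text), the linear constitutive relation above interpolates exactly between the pure densities, which also explains why the bound $|\varphi|\leq 1$ established later keeps the density positive:

```latex
% Evaluating the constitutive relation at the pure phases phi = +1, -1:
\[
  \rho(\pm 1)
  = \tfrac{1}{2}(\tilde{\rho}_1 + \tilde{\rho}_2)
    \pm \tfrac{1}{2}(\tilde{\rho}_2 - \tilde{\rho}_1)
  = \begin{cases}
      \tilde{\rho}_2 & \text{for } \varphi = +1 , \\
      \tilde{\rho}_1 & \text{for } \varphi = -1 ,
    \end{cases}
\]
% and, since rho is affine in phi, for |phi| <= 1 one obtains
\[
  0 < \min(\tilde{\rho}_1, \tilde{\rho}_2)
    \;\leq\; \rho(\varphi)
    \;\leq\; \max(\tilde{\rho}_1, \tilde{\rho}_2) .
\]
```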
The kinetic energy is given by $E_{\mbox{\footnotesize kin}}(\varphi,\mathbf{v}) \hspace*{-1pt}= \int_\Omega \rho(\varphi) \frac{|\mathbf{v}|^2}{2} \, dx$ and the total energy, as the sum of the kinetic and free energy, is \begin{align} \begin{split} \label{totalenergy} E_{\mbox{\footnotesize tot}}(\varphi,\mathbf{v}) &= E_{\mbox{\footnotesize kin}}(\varphi,\mathbf{v}) + E_{\mbox{\footnotesize free}}(\varphi) \\ &= \int_\Omega \rho(\varphi)\frac{|\mathbf{v}|^2}{2} \, dx + \int_\Omega \left( \Psi(\varphi) + a(\varphi)\frac{|\nabla \varphi|^2}{2} \right) dx . \end{split} \end{align} In addition there have been further modelling attempts for two-phase flow with different densities. We refer to Boyer \cite{Boy02} and the recent work of Aki et al. \cite{ADGK12}. We remark that for the model of Boyer no energy inequalities are known and the model of Aki et al.\,does not lead to velocity fields which are divergence free. In \cite{ADG12} an existence result for the above Navier-Stokes/Cahn-Hil\-liard model has been shown in the case of a non-degenerate mobility $m(\varphi)$. As discussed in \cite{AGG12}, the case with non-degenerate mobility can lead to Ostwald ripening effects, i.e., in particular larger drops can grow at the expense of smaller ones. In many applications this is not reasonable and, as pointed out in \cite{AGG12}, degenerate mobilities avoid Ostwald ripening; hence the case of degenerate mobilities is very important in applications. In what follows we assume that $m(\varphi) = 1-\varphi^2$ for $|\varphi| \leq 1$ and extend this by zero to all of $\mathbb{R}$. In this way we do not allow for diffusion through the bulk, i.e., the regions where $\varphi = 1$ resp. $\varphi = -1$, but only in the interfacial region, where $|\varphi|<1$.
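The thermodynamic consistency invoked above can be made explicit by a standard formal computation (a sketch for smooth solutions, assuming sufficient regularity; not part of the original text): testing the momentum equation with $\mathbf{v}$ and the equation for $\varphi$ with $\mu$, the convective and capillary terms cancel after integration by parts, leaving

```latex
\[
  \frac{d}{dt}\, E_{\mbox{\footnotesize tot}}(\varphi,\mathbf{v})
  = - \int_\Omega 2\,\eta(\varphi)\, |D\mathbf{v}|^2 \, dx
    - \int_\Omega m(\varphi)\, |\nabla \mu|^2 \, dx
  \;\leq\; 0 .
\]
```

Its time-integrated version is precisely the energy inequality stated in Theorem \ref{theo:existenceweaksolution} below, where formally $|\widehat{\mathbf{J}}|^2 = m(\varphi)\,|\nabla \mu|^2$.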
The degenerate mobility leads to the physically reasonable bound $|\varphi| \leq 1$ for the order parameter $\varphi$, which is the difference of volume fractions, and therefore we can consider in this work a smooth homogeneous free energy density $\Psi$ in contrast to the previous work~\cite{ADG12}. For the Cahn-Hilliard equation without the coupling to the Navier-Stokes equations Elliott and Garcke~\cite{EG96} considered the case of a degenerate mobility, see also Gr\"un \cite{Gru95}. We will use a suitable testing procedure from the work \cite{EG96} to get a bound for the second derivatives of a function of $\varphi$ in the energy estimates of Lemma \ref{lem:energyestimate}. We point out that our result is also new for the case of Model H with degenerate mobility, i.e., $\tilde{\rho}_1 = \tilde{\rho}_2$, which implies $\widetilde{\mathbf{J}} =0$ in the above Navier-Stokes/Cahn-Hilliard system. The structure of the article is as follows: In Section 2 we summarize some notation and preliminary results. Then, in Section 3, we reformulate the Navier-Stokes/Cahn-Hilliard system suitably, define weak solutions and state our main result on existence of weak solutions. For the proof of the existence theorem in Subsections 3.2 and 3.3 we approximate the equations by a problem with positive mobility $m_\varepsilon$ and singular homogeneous free energy density $\Psi_\varepsilon$. For the solution $(\mathbf{v}_\varepsilon,\varphi_\varepsilon,\mathbf{J}_\varepsilon)$ of the approximation (with $\mathbf{J}_\varepsilon = - m_\varepsilon(\varphi_\varepsilon) \nabla \mu_\varepsilon$) we derive suitable energy estimates to get weak limits. Then we extend the weak convergences to strong ones by using methods similar to the previous work of the authors \cite{ADG12}, careful estimates of the additional singular free energy density and an additional subtle argument with the help of time differences and a theorem of Simon \cite{Sim87}.
We remark that this last point would be easier in the case of a constant coefficient $a(\varphi)$ in the free energy. Finally we can pass to the limit $\varepsilon \to 0$ in the equations for the weak solutions $(\mathbf{v}_\varepsilon,\varphi_\varepsilon,\mathbf{J}_\varepsilon)$ and recover the identities for the weak solution of the main problem. \section{Preliminaries and Notation} \label{prelimi} We denote $a\otimes b = (a_i b_j)_{i,j=1}^d$ for $a,b\in \mathbb{R}^d$ and $A_{\operatorname{sym}}= \frac12 (A+A^T)$ for a matrix $A\in \mathbb{R}^{d\times d}$. If $X$ is a Banach space and $X'$ is its dual, then \begin{equation*} \weight{f,g} \equiv \weight{f,g}_{X',X} = f(g), \qquad f\in X', \;g\in X, \end{equation*} denotes the duality product. We write $X\hookrightarrow \hookrightarrow Y$ if $X$ is compactly embedded into $Y$. Moreover, if $H$ is a Hilbert space, $(\cdot\,,\cdot )_H$ denotes its inner product. Furthermore, we use the abbreviation $(.\,,.)_{M}=(.\,,.)_{L^2(M)}$. \noindent {\bf Function spaces:} If $M\subseteq \mathbb{R}^d$ is measurable, $L^q(M)$, $1\leq q \leq \infty$, denotes the usual Lebesgue space and $\|.\|_q$ its norm. Moreover, $L^q(M;X)$ denotes the set of all strongly measurable $q$-integrable functions if $q \in [1,\infty)$ and of all essentially bounded strongly measurable functions if $q = \infty$, where $X$ is a Banach space. Recall that, if $X$ is a Banach space with the Radon-Nikodym property, then \begin{equation*} L^q(M;X)'= L^{q'}(M;X')\qquad \text{for every}\ 1\leq q < \infty \end{equation*} by means of the duality product $ \weight{f,g}= \int_{M} \weight{f(x),g(x)}_{X',X} dx $ for $f\in L^{q'}(M;X')$, $g\in L^{q}(M;X)$. If $X$ is reflexive or $X'$ is separable, then $X$ has the Radon-Nikodym property, cf. Diestel and Uhl~\cite{DU77}.
Moreover, we recall the Lemma of Aubin-Lions: If $X_0\hookrightarrow\hookrightarrow X_1 \hookrightarrow X_2$ are Banach spaces, $1<p<\infty$, $1\leq q <\infty$, and $I\subset \mathbb{R}$ is a bounded interval, then \begin{equation}\label{eq:AubinLions} \left\{v\in L^p( I; X_0): \frac{dv}{dt} \in L^q(I;X_2) \right\} \hookrightarrow\hookrightarrow L^p(I;X_1). \end{equation} See J.-L.~Lions~\cite{Lio69} for the case $q>1$ and Simon~\cite{Sim87} or Roub{\'{\i}}\-{\v{c}}ek~\cite{Rou90} for $q=1$. Let $\Omega \subset \mathbb{R}^d$ be a domain. Then $W^k_q(\Omega)$, $k\in \ensuremath{\mathbb{N}}_0$, $1\leq q\leq \infty$, denotes the usual $L^q$-Sobolev space, $W^k_{q,0}(\Omega)$ the closure of $C^\infty_0(\Omega)$ in $W^k_q(\Omega)$, $W^{-k}_q(\Omega)= (W^k_{q',0}(\Omega))'$, and $W^{-k}_{q,0}(\Omega)= (W^k_{q'}(\Omega))'$. We also use the abbreviation $H^k(\Omega) = W^k_2(\Omega)$. Given $f\in L^1(\Omega)$, we denote by $ f_\Omega = \frac1{|\Omega|}\int_\Omega f(x) \,dx $ its mean value. Moreover, for $m\in\mathbb{R}$ we set \begin{equation*} L^q_{(m)}(\Omega):=\{f\in L^q(\Omega):f_\Omega=m\}, \qquad 1\leq q\leq \infty. \end{equation*} Then for $f \in L^2(\Omega)$ we observe that \begin{align*} P_0 f:= f-f_\Omega= f-\frac1{|\Omega|}\int_\Omega f(x) \,dx \end{align*} is the orthogonal projection onto $L^2_{(0)}(\Omega)$. Furthermore, we define \begin{equation*} H^1_{(0)}\equiv H^1_{(0)} (\Omega)= H^1(\Omega)\cap L^2_{(0)}(\Omega), \qquad (c,d)_{H^1_{(0)}(\Omega)} := (\nabla c,\nabla d)_{L^2(\Omega)}. \end{equation*} Then $H^1_{(0)}(\Omega)$ is a Hilbert space due to Poincar\'e's inequality. \noindent {\bf Spaces of solenoidal vector-fields:} For a bounded domain $\Omega \subset \mathbb{R}^d$ we denote by $C^\infty_{0,\sigma}(\Omega)$ in the following the space of all divergence free vector fields in $C^\infty_0(\Omega)^d$ and by $L^2_\sigma(\Omega)$ its closure in the $L^2$-norm.
The corresponding Helmholtz projection is denoted by $P_\sigma$, cf. e.g. Sohr \cite{Soh01}. We note that $P_\sigma f = f- \nabla p$, where $p \in W^1_2(\Omega)\cap L^2_{(0)}(\Omega)$ is the solution of the weak Neumann problem \begin{equation}\label{eq:WeakHelmholtz} (\nabla p,\nabla \varphi)_{\Omega} = (f, \nabla \varphi)_{\Omega}\quad \text{for all}\ \varphi \in C^\infty(\ol{\Omega}). \end{equation} \noindent {\bf Spaces of continuous vector-fields:} In the following let $I=[0,T]$ with $0<T< \infty$ or let $I=[0,\infty)$ if $T=\infty$ and let $X$ be a Banach space. Then $BC(I;X)$ is the Banach space of all bounded and continuous $f\colon I\to X$ equipped with the supremum norm and $BUC(I;X)$ is the subspace of all bounded and uniformly continuous functions. Moreover, we define $BC_w(I;X)$ as the topological vector space of all bounded and weakly continuous functions $f\colon I\to X$. By $C^\infty_0(0,T;X)$ we denote the vector space of all smooth functions $f\colon (0,T)\to X$ with $\operatorname{supp} f\subset\subset (0,T)$. We say that $f\in W^1_p(0,T;X)$ for $1\leq p <\infty$ if and only if $f, \frac{df}{dt}\in L^p(0,T;X)$, where $\frac{df}{dt}$ denotes the vector-valued distributional derivative of $f$. Finally, we note: \begin{lemma}\label{lem:CwEmbedding} Let $X,Y$ be two Banach spaces such that $Y\hookrightarrow X$ and $X'\hookrightarrow Y'$ densely. Then $L^\infty(I;Y)\cap BUC(I;X) \hookrightarrow BC_w(I;Y)$. \end{lemma} \noindent For a proof, see e.g. Abels \cite{Abe09a}. \section{Existence of Weak Solutions} In this section we prove an existence result for the Navier-Stokes/Cahn-Hilliard system from the introduction in a situation with degenerate mobility. Since in this case we will not have control of the gradient of the chemical potential, we reformulate the equations by introducing a flux $\mathbf{J} = -m(\varphi) \nabla \mu$ consisting of the product of the mobility and the gradient of the chemical potential.
In this way, the complete system is given by: \begin{subequations} \begin{align} \partial_t (\rho \mathbf{v}) + \operatorname{div} (&\rho \mathbf{v} \otimes \mathbf{v}) - \operatorname{div} (2 \eta(\varphi) D \mathbf{v}) + \nabla p \nonumber \\ & + \operatorname{div}(\mathbf{v} \otimes \beta \mathbf{J}) = - \operatorname{div}(a(\varphi) \nabla \varphi \otimes \nabla \varphi) & \mbox{in } \, Q_T , \label{degequ1} \\ \operatorname{div} \, \mathbf{v} &= 0 & \mbox{in } \, Q_T , \label{degequ2} \\ \partial_t \varphi + \mathbf{v} \cdot \nabla \varphi &= -\operatorname{div} \mathbf{J} & \mbox{in } \, Q_T , \label{degequ3} \\ \mathbf{J} &= -m(\varphi) \nabla \left( \Psi'(\varphi) + a'(\varphi) \frac{|\nabla \varphi|^2}{2} \right.\nonumber \\ & \hspace*{60pt} - \operatorname{div}\left( a(\varphi) \nabla \varphi \right) \bigg) & \mbox{in } \, Q_T , \label{degequ4} \\ \mathbf{v}|_{\partial \Omega} &= 0 & \mbox{on } \, S_T , \label{degequ5} \\ \partial_n \varphi|_{\partial \Omega} &= (\mathbf{J} \cdot n)|_{\partial \Omega} = 0 & \mbox{on } \, S_T , \label{degequ6} \\ \left(\mathbf{v} , \varphi \right)|_{t=0} &= \left( \mathbf{v}_0 , \varphi_0 \right) & \mbox{in } \, \Omega , \label{degequ7} \end{align} \end{subequations} where we set $\beta = \frac{\tilde{\rho}_2 - \tilde{\rho}_1}{2}$ and $\mathbf{J} = -m(\varphi) \nabla \mu$ as indicated above. The constitutive relation between density and phase field is given by $\rho(\varphi) = \frac{1}{2}(\tilde{\rho}_1 + \tilde{\rho}_2) + \frac{1}{2} (\tilde{\rho}_2 - \tilde{\rho}_1) \varphi$ as derived in Abels, Garcke and Gr\"un \cite{AGG12}, where $\tilde{\rho}_i>0$ are the specific constant mass densities of the unmixed fluids and $\varphi$ is the difference of the volume fractions of the fluids. By introducing $\mathbf{J}$, we omitted the chemical potential $\mu$ in our equations and we search from now on for the unknowns $(\mathbf{v},\varphi,\mathbf{J})$.
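To compare with the formulation in the introduction, note that the two convective terms used there combine into the ones used here: since $\rho$ is scalar-valued and $\widetilde{\mathbf{J}} = -\frac{\tilde{\rho}_2 - \tilde{\rho}_1}{2} m(\varphi) \nabla \mu = \beta \mathbf{J}$, an elementary check (ours) gives

```latex
% v (x) (rho v) = rho (v (x) v) because rho is a scalar field, hence
\[
  \operatorname{div}\bigl( \mathbf{v} \otimes ( \rho(\varphi)\, \mathbf{v} + \widetilde{\mathbf{J}} ) \bigr)
  = \operatorname{div}( \rho\, \mathbf{v} \otimes \mathbf{v} )
  + \operatorname{div}( \mathbf{v} \otimes \beta \mathbf{J} ) ,
\]
```

so the momentum equations of the two formulations agree term by term.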
In the above formulation and in the following, we use the abbreviations $Q_{(s,t)}=\Omega \times (s,t)$ and $Q_t = Q_{(0,t)}$ for space-time cylinders and analogously $S_{(s,t)}=\partial \Omega \times (s,t)$ and $S_t = S_{(0,t)}$ for the boundary. Equation \eqref{degequ5} is the no-slip boundary condition for viscous fluids, $(\mathbf{J} \cdot n)|_{\partial \Omega} = 0$ resulting from $\partial_n \mu |_{\partial \Omega} = 0$ means that there is no mass flux of the components through the boundary, and $\partial_n \varphi |_{\partial \Omega} = 0$ describes a contact angle of $\pi/2$ between the diffuse interface and the boundary of the domain. \subsection{Assumptions and Existence Theorem for Weak Solutions} In the following we summarize the assumptions needed to formulate the notion of a weak solution of \eqref{degequ1}-\eqref{degequ7} and an existence result. \begin{assumption} \label{assumptions} We assume that $\Omega \subset \mathbb{R}^d$, $d=2,3$, is a bounded domain with smooth boundary and additionally we impose the following conditions. \begin{enumerate} \item We assume $a, \Psi \in C^1(\mathbb{R})$, $\eta \in C^0(\mathbb{R})$ and $0 < c_0 \leq a(s), \eta(s) \leq K$ for given constants $c_0,K > 0$. \item For the mobility $m$ we assume that \begin{align} \label{assumpdegmob} m(s) = \left\{ \begin{array}{cl} 1 - s^2 \,, & \mbox{if } \, |s| \leq 1 , \\ 0 \,, & \mbox{else} . \end{array} \right. \end{align} \end{enumerate} \end{assumption} We remark that other mobilities which degenerate linearly at $s = \pm 1$ are possible. The choice \eqref{assumpdegmob} typically appears in applications, see Cahn and Taylor \cite{CT94} and Hilliard \cite{Hil70}. Other degeneracies can be handled as well but some would need additional assumptions, see Elliott and Garcke \cite{EG96}.
We reformulate the model suitably due to the positive coefficient $a(\varphi)$ in the free energy, so that we can replace the two terms with $a(\varphi)$ in equation \eqref{degequ4} by a single one. To this end, we introduce the function $A(s) \mathrel{\mathop:\!\!=} \int_0^s \sqrt{a(\tau)} \, d\tau$. Then $A'(s) = \sqrt{a(s)}$ and \begin{align*} - \sqrt{a(\varphi)} \, \Delta A(\varphi) = a'(\varphi) \, \frac{|\nabla \varphi|^2}{2} - \operatorname{div} \left(a(\varphi) \, \nabla \varphi \right) \end{align*} by a straightforward calculation. Reparametrizing the potential $\Psi$ through $\widetilde{\Psi}: \mathbb{R} \to \mathbb{R}$, $\widetilde{\Psi}(r) := \Psi(A^{-1}(r))$, we see that $\Psi'(s) = \sqrt{a(s)} \widetilde{\Psi}'(A(s))$ and therefore we can replace line \eqref{degequ4} by the following one: \begin{align} \label{replacedegequ4} \mathbf{J} &= -m(\varphi) \nabla \left( \sqrt{a(\varphi)} \left( \widetilde{\Psi}'(A(\varphi)) - \Delta A(\varphi) \right) \right). \end{align} We also rewrite the free energy with the help of $A$ as \begin{align*} E_{\mbox{\footnotesize free}}(\varphi) = \int_\Omega \left( \widetilde{\Psi}(A(\varphi)) + \frac{|\nabla A(\varphi)|^2}{2} \right) dx \,.
\end{align*} \begin{remark} With the above notation and with the calculation \begin{align*} - &\operatorname{div} (a(\varphi) \nabla \varphi \otimes \nabla \varphi) \\ &= - \operatorname{div} (a(\varphi) \nabla \varphi) \nabla \varphi - a(\varphi) \nabla \left( \frac{|\nabla \varphi|^2}{2} \right) \\ &= - \operatorname{div} (a(\varphi) \nabla \varphi) \nabla \varphi + \nabla (a(\varphi)) \frac{|\nabla \varphi|^2}{2} - \nabla \left( a(\varphi) \frac{|\nabla \varphi|^2}{2} \right) \\ &= \left( - \operatorname{div} (a(\varphi) \nabla \varphi) + a'(\varphi) \frac{|\nabla \varphi|^2}{2} \right) \nabla \varphi - \nabla \left( a(\varphi) \frac{|\nabla \varphi|^2}{2} \right) \\ &= - \sqrt{a(\varphi)} \Delta A(\varphi) \nabla \varphi - \nabla \left( a(\varphi) \frac{|\nabla \varphi|^2}{2} \right) \end{align*} we rewrite line \eqref{degequ1} with a new pressure $g = p + a(\varphi) \frac{|\nabla \varphi|^2}{2}$ into: \begin{align} \begin{split} \label{replacedegequ1} \partial_t (\rho \mathbf{v}) & + \operatorname{div} (\rho \mathbf{v} \otimes \mathbf{v}) - \operatorname{div} (2 \eta(\varphi) D \mathbf{v}) + \nabla g + \operatorname{div}(\mathbf{v} \otimes \beta \mathbf{J}) \\ & = - \sqrt{a(\varphi)} \Delta A(\varphi) \nabla \varphi \,. \end{split} \end{align} We remark that in contrast to the formulation in \cite{ADG12} we do not use the equation for the chemical potential here. \end{remark} Now we can define a weak solution of problem \eqref{degequ1}-\eqref{degequ7}. \begin{definition} \label{defweaksolution} Let $T \in (0,\infty)$, $\mathbf{v}_0 \in L^2_\sigma(\Omega)$ and $\varphi_0 \in H^1(\Omega)$ with $|\varphi_0| \leq 1$ almost everywhere in $\Omega$.
If in addition Assumption \ref{assumptions} holds, we call the triple $(\mathbf{v},\varphi,\mathbf{J})$ with the properties \begin{align*} & \mathbf{v} \in BC_w([0,T];L^2_\sigma(\Omega)) \cap L^2(0,T;H_0^1(\Omega)^d) \,, \\ & \varphi \in BC_w([0,T];H^1(\Omega)) \cap L^2(0,T;H^2(\Omega)) \; \mbox{ with } \; |\varphi| \leq 1 \, \mbox{ a.e. in } \, Q_T \,, \\ & \mathbf{J} \in L^2(0,T;L^2(\Omega)^d) \, \mbox{ and} \\ & \left( \mathbf{v},\varphi \right)|_{t=0} = \left( \mathbf{v}_0 , \varphi_0 \right) \end{align*} a weak solution of \eqref{degequ1}-\eqref{degequ7} if the following conditions are satisfied: \begin{align} \begin{split} \label{weakline1} - \left(\rho \mathbf{v} , \partial_t \boldsymbol{\psi} \right)_{Q_T} &+ \left( \operatorname{div}(\rho \mathbf{v} \otimes \mathbf{v}) , \boldsymbol{\psi} \right)_{Q_T} + \left(2 \eta(\varphi) D\mathbf{v} , D\boldsymbol{\psi} \right)_{Q_T} \\ & - \left( (\mathbf{v} \otimes \beta \mathbf{J}) , \nabla \boldsymbol{\psi} \right)_{Q_T} = -\left( \sqrt{a(\varphi)} \Delta A(\varphi) \, \nabla \varphi , \boldsymbol{\psi} \right)_{Q_T} \end{split} \end{align} for all $\boldsymbol{\psi} \in \left[C_0^\infty(\Omega \times (0,T))\right]^d$ with $\operatorname{div} \boldsymbol{\psi} = 0$, \begin{align} -\int_{Q_T} \varphi \, \partial_t \zeta \, dx \, dt + \int_{Q_T} (\mathbf{v} \cdot \nabla \varphi) \, \zeta \, dx \, dt = \int_{Q_T} \mathbf{J} \cdot \nabla \zeta \, dx \, dt \label{weakline2} \end{align} for all $\zeta \in C_0^\infty(0,T;C^1(\overline{\Omega}))$ and \begin{align} \begin{split} \label{weakline3} \int_{Q_T} &\mathbf{J} \cdot \boldsymbol{\eta} \, dx \, dt \\ &= -\int_{Q_T} \left( \sqrt{a(\varphi)} \left( \widetilde{\Psi}'(A(\varphi)) - \Delta A(\varphi) \right) \right) \operatorname{div} (m(\varphi) \boldsymbol{\eta})\, dx \, dt \end{split} \end{align} for all $\boldsymbol{\eta} \in L^2(0,T;H^1(\Omega)^d)
\cap L^\infty(Q_T)^d$ which fulfill $\boldsymbol{\eta} \cdot n = 0$ on $S_T$. \end{definition} \begin{remark} The identity \eqref{weakline3} is a weak version of \begin{align*} \mathbf{J} = - m(\varphi) \, \nabla \left( \sqrt{a(\varphi)} \left( \widetilde{\Psi}'(A(\varphi)) - \Delta A(\varphi) \right) \right) \,. \end{align*} \end{remark} Our main result of this work is the following existence theorem for weak solutions on an arbitrary time interval $[0,T]$, where $T > 0$. \begin{theorem} \label{theo:existenceweaksolution} Let Assumption \ref{assumptions} hold, $\mathbf{v}_0 \in L^2_\sigma(\Omega)$ and $\varphi_0 \in H^1(\Omega)$ with $|\varphi_0| \leq 1$ almost everywhere in $\Omega$. Then there exists a weak solution $(\mathbf{v},\varphi,\mathbf{J})$ of \eqref{degequ1}-\eqref{degequ7} in the sense of Definition \ref{defweaksolution}. Moreover, for some $\widehat{\mathbf{J}} \in L^2(Q_T)$ it holds that $\mathbf{J} = \sqrt{m(\varphi)} \widehat{\mathbf{J}}$ and \begin{align} \begin{split} \label{weakline5} E_{\mbox{\footnotesize tot}}(\varphi(t),\mathbf{v}(t)) &+ \int_{Q_{(s,t)}} 2 \eta(\varphi) \, |D\mathbf{v}|^2 \, dx \, d\tau + \int_{Q_{(s,t)}} |\widehat{\mathbf{J}}|^2 \, dx \, d\tau \\ &\leq E_{\mbox{\footnotesize tot}}(\varphi(s),\mathbf{v}(s)) \end{split} \end{align} for all $t \in [s,T)$ and almost all $s \in [0,T)$ including $s=0$. The total energy $E_{\mbox{\footnotesize tot}}$ is the sum of the kinetic and the free energy, cf. \eqref{totalenergy}. In particular, $\mathbf{J} = 0$ a.e. on the set $\{|\varphi| = 1\}$. \end{theorem} The proof of the theorem will be given in the next two subsections. But first of all we consider a special case which can then be excluded in the following proof. Due to $|\varphi_0| \leq 1$ a.e. in $\Omega$ we note that $\int_\Omega \hspace*{-14pt}{-}\hspace*{5pt} \varphi_0 \, dx \in [-1,1]$.
In the situation where $\int_\Omega \hspace*{-14pt}{-}\hspace*{5pt} \varphi_0 \, dx = 1$ we can conclude that $\varphi_0 \equiv 1$ a.e. in $\Omega$ and can give the solution at once: we set $\varphi \equiv 1$, $\mathbf{J} \equiv 0$ and let $\mathbf{v}$ be the weak solution of the incompressible Navier-Stokes equations without coupling to the Cahn-Hilliard equation, where $\rho$ and $\eta$ are constants. The situation where $\int_\Omega \hspace*{-14pt}{-}\hspace*{5pt} \varphi_0 \, dx = -1$ can be handled analogously. With this observation we can assume in the following that
\begin{align*}
\int_\Omega \hspace*{-14pt}{-}\hspace*{5pt} \varphi_0 \, dx \in (-1,1) \,,
\end{align*}
which will be needed for the reference to the previous existence result of the authors \cite{ADG12} and for the proof of Lemma \ref{lem:energyestimate}, $(iii)$.
\subsection{Approximation and Energy Estimates}
In the following we replace problem \eqref{degequ1}-\eqref{degequ7} by an approximation with positive mobility and a singular homogeneous free energy density, which can be solved with the result of the authors in \cite{ADG12}. For the weak solutions of the approximation we then derive energy estimates. First we approximate the degenerate mobility $m$ by a strictly positive $m_\varepsilon$ defined as
\begin{eqnarray*}
m_\varepsilon(s) := \left\{ \begin{array}{cl} m(-1+\varepsilon) & \mbox{for } \, s \leq -1 + \varepsilon \,, \\ m(s) & \mbox{for } \, |s| < 1-\varepsilon \,, \\ m(1-\varepsilon) & \mbox{for } \, s \geq 1 - \varepsilon \,. \end{array} \right.
\end{eqnarray*}
In addition we use a singular homogeneous free energy density $\Psi_\varepsilon$ given by
\begin{align*}
\Psi_\varepsilon(s) &:= \Psi(s) + \varepsilon \Psi_{\mbox{\footnotesize ln}}(s) \,, \; \mbox{ where } \\
\Psi_{\mbox{\footnotesize ln}}(s) &:= (1+s)\ln(1+s) + (1-s)\ln(1-s)\,.
\end{align*}
Then $\Psi_\varepsilon \in C([-1,1]) \cap C^2((-1,1))$ fulfills the assumptions on the homogeneous free energy as in Abels, Depner and Garcke \cite{ADG12}, which were given by
\begin{align*}
\lim_{s \to \pm 1} \Psi_\varepsilon'(s) = \pm \infty \,, \quad \Psi_\varepsilon''(s) \geq \kappa \; \mbox{ for some } \; \kappa \in \mathbb{R} \; \mbox{ and } \; \lim_{s \to \pm 1} \frac{\Psi_\varepsilon''(s)}{\Psi_\varepsilon'(s)} = + \infty .
\end{align*}
To deal with the positive coefficient $a(\varphi)$, we set similarly as above $\widetilde{\Psi}_{\mbox{\footnotesize ln}}(r) := \Psi_{\mbox{\footnotesize ln}}(A^{-1}(r))$ and $\widetilde{\Psi}_\varepsilon(r) := \Psi_\varepsilon(A^{-1}(r))$ for $r \in [a,b] := A([-1,1])$. Now we replace $m$ by $m_\varepsilon$ and $\Psi$ by $\Psi_\varepsilon$ and consider the following approximate problem, this time for the unknowns $(\mathbf{v},\varphi,\mu)$:
\begin{subequations}
\begin{align}
\partial_t (\rho \mathbf{v}) &+ \operatorname{div} \left(\rho \mathbf{v} \otimes \mathbf{v} \right) - \operatorname{div} \left(2 \eta(\varphi) D\mathbf{v} \right) + \nabla g \nonumber \\
& + \operatorname{div} \left(\mathbf{v} \otimes \beta m_\varepsilon(\varphi) \nabla \mu \right) = - \sqrt{a(\varphi)} \Delta A(\varphi) \nabla \varphi & \mbox{in } \, Q_T , \label{approxequ1} \\
\operatorname{div} \mathbf{v} &= 0 & \mbox{in } \, Q_T , \label{approxequ2} \\
\partial_t \varphi + \mathbf{v} \cdot \nabla \varphi &= \operatorname{div} (m_\varepsilon(\varphi) \nabla \mu) & \mbox{in } \, Q_T , \label{approxequ3} \\
\mu &= \sqrt{a(\varphi)} \left( \widetilde{\Psi}_\varepsilon'(A(\varphi)) - \Delta A(\varphi) \right) & \mbox{in } \, Q_T , \label{approxequ5} \\
\mathbf{v}|_{\partial \Omega} &= 0 & \mbox{on } \, S_T , \label{approxequ6} \\
\partial_n \varphi|_{\partial \Omega} &= \partial_n \mu|_{\partial \Omega} = 0 & \mbox{on } \, S_T , \label{approxequ7} \\
\left(\mathbf{v} ,
\varphi \right)|_{t=0} &= \left( \mathbf{v}_0 , \varphi_0 \right) & \mbox{in } \, \Omega . \label{approxequ8}
\end{align}
\end{subequations}
From \cite{ADG12} we get the existence of a weak solution $(\mathbf{v}_\varepsilon,\varphi_\varepsilon,\mu_\varepsilon)$ with the properties
\begin{align*}
& \mathbf{v}_\varepsilon \in BC_w([0,T];L^2_\sigma(\Omega)) \cap L^2(0,T;H_0^1(\Omega)^d) \,, \\
& \varphi_\varepsilon \in BC_w([0,T];H^1(\Omega)) \cap L^2(0,T;H^2(\Omega)) \,, \; \Psi_\varepsilon'(\varphi_\varepsilon) \in L^2(0,T;L^2(\Omega)) \,, \\
& \mu_\varepsilon \in L^2(0,T;H^1(\Omega)) \, \mbox{ and} \\
& \left.\left( \mathbf{v}_\varepsilon,\varphi_\varepsilon \right)\right|_{t=0} = \left( \mathbf{v}_0 , \varphi_0 \right)
\end{align*}
in the following sense:
\begin{align}
\begin{split} \label{approxweak1}
- \left(\rho_\varepsilon \mathbf{v}_\varepsilon , \partial_t \boldsymbol{\psi} \right)_{Q_T} &+ \left( \operatorname{div}(\rho_\varepsilon \mathbf{v}_\varepsilon \otimes \mathbf{v}_\varepsilon) , \boldsymbol{\psi} \right)_{Q_T} + \left(2 \eta(\varphi_\varepsilon) D\mathbf{v}_\varepsilon , D\boldsymbol{\psi} \right)_{Q_T} \\
& - \left( (\mathbf{v}_\varepsilon \otimes \beta m_\varepsilon(\varphi_\varepsilon) \nabla \mu_\varepsilon ) , \nabla \boldsymbol{\psi} \right)_{Q_T} = \left( \mu_\varepsilon \nabla \varphi_\varepsilon , \boldsymbol{\psi} \right)_{Q_T}
\end{split}
\end{align}
for all $\boldsymbol{\psi} \in \left[C_0^\infty(\Omega \times (0,T))\right]^d$ with $\operatorname{div} \boldsymbol{\psi} = 0$,
\begin{align}
- \left(\varphi_\varepsilon , \partial_t \zeta \right)_{Q_T} + \left( \mathbf{v}_\varepsilon \cdot \nabla \varphi_\varepsilon , \zeta \right)_{Q_T} &= - \left(m_\varepsilon(\varphi_\varepsilon) \nabla \mu_\varepsilon , \nabla \zeta \right)_{Q_T} \label{approxweak2}
\end{align}
for all $\zeta \in C_0^\infty((0,T);C^1(\overline{\Omega}))$ and
\begin{align}
\mu_\varepsilon =
\sqrt{a(\varphi_\varepsilon)} \left( \widetilde{\Psi}_\varepsilon'(A(\varphi_\varepsilon)) - \Delta A(\varphi_\varepsilon) \right) \; \mbox{ almost everywhere in } \, Q_T . \label{approxweak3}
\end{align}
Moreover, the energy estimate
\begin{align}
\begin{split} \label{approxweak4}
E_{\mbox{\footnotesize tot}}(\varphi_\varepsilon(t),\mathbf{v}_\varepsilon(t)) &+ \int_{Q_{(s,t)}} 2 \eta(\varphi_\varepsilon) \, |D\mathbf{v}_\varepsilon|^2 \, dx \,d\tau \\
&+ \int_{Q_{(s,t)}} m_\varepsilon(\varphi_\varepsilon) |\nabla \mu_\varepsilon|^2 \, dx \, d\tau \,\leq\, E_{\mbox{\footnotesize tot}}(\varphi_\varepsilon(s),\mathbf{v}_\varepsilon(s))
\end{split}
\end{align}
holds for all $t \in [s,T)$ and almost all $s \in [0,T)$, including $s=0$. Herein $\rho_\varepsilon$ is given by $\rho_\varepsilon = \frac{1}{2}(\tilde{\rho}_1 + \tilde{\rho}_2) + \frac{1}{2} (\tilde{\rho}_2 - \tilde{\rho}_1) \varphi_\varepsilon$. Note that due to the singular homogeneous potential $\Psi_\varepsilon$ we have $|\varphi_\varepsilon| < 1$ almost everywhere.
\begin{remark} \label{rem:termsagree}
Note that equation \eqref{approxweak1} can be rewritten with the help of the identity
\begin{align*}
\left( \mu_\varepsilon \nabla \varphi_\varepsilon , \boldsymbol{\psi} \right)_{Q_T} &= - \left( \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) \nabla \varphi_\varepsilon , \boldsymbol{\psi} \right)_{Q_T}.
\end{align*}
This can be seen by testing \eqref{approxweak3} with $\nabla \varphi_\varepsilon \cdot \boldsymbol{\psi}$ and noting that $\boldsymbol{\psi}$ is divergence free.
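In more detail (a short computation spelled out here for convenience), multiplying \eqref{approxweak3} by $\nabla \varphi_\varepsilon \cdot \boldsymbol{\psi}$ and integrating over $Q_T$ gives
\begin{align*}
\left( \mu_\varepsilon \nabla \varphi_\varepsilon , \boldsymbol{\psi} \right)_{Q_T}
= \bigl( \nabla \bigl(\Psi_\varepsilon(\varphi_\varepsilon)\bigr) , \boldsymbol{\psi} \bigr)_{Q_T}
- \left( \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) \nabla \varphi_\varepsilon , \boldsymbol{\psi} \right)_{Q_T} \,,
\end{align*}
where we used $\sqrt{a(\varphi_\varepsilon)}\, \widetilde{\Psi}_\varepsilon'(A(\varphi_\varepsilon)) \nabla \varphi_\varepsilon = \Psi_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon = \nabla \bigl(\Psi_\varepsilon(\varphi_\varepsilon)\bigr)$; the first term on the right side vanishes after integration by parts, since $\operatorname{div} \boldsymbol{\psi} = 0$ and $\boldsymbol{\psi}$ has compact support.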
\end{remark}
For the weak solution $(\mathbf{v}_\varepsilon,\varphi_\varepsilon,\mu_\varepsilon)$ we get the following energy estimates:
\begin{lemma} \label{lem:energyestimate}
For a weak solution $(\mathbf{v}_\varepsilon,\varphi_\varepsilon,\mu_\varepsilon)$ of problem \eqref{approxequ1}-\eqref{approxequ8} we have the following energy estimates:
\begin{align*}
(i) & \quad \sup_{0 \leq t \leq T} \int_\Omega \left( \rho_\varepsilon(t) \frac{|\mathbf{v}_\varepsilon(t)|^2}{2} + \frac{1}{2} |\nabla \varphi_\varepsilon(t)|^2 + \Psi_\varepsilon(\varphi_\varepsilon(t)) \right) dx \\
& \quad \; + \int_{Q_T} 2 \eta(\varphi_\varepsilon) |D \mathbf{v}_\varepsilon|^2 \, dx \, dt + \int_{Q_T} m_\varepsilon(\varphi_\varepsilon) |\nabla \mu_\varepsilon|^2 \, dx \, dt \,\leq\, C \,, \\
(ii) & \quad \sup_{0 \leq t \leq T} \int_\Omega \hspace*{-2pt} G_\varepsilon(\varphi_\varepsilon(t)) \, dx + \int_{Q_T} |\Delta A(\varphi_\varepsilon)|^2 \, dx \, dt \,\leq\, C \,, \\
(iii) & \quad \varepsilon^3 \int_{Q_T} |\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)|^2 \, dx \, dt \,\leq\, C \,, \\
(iv) & \quad \int_{Q_T} |\widehat{\mathbf{J}}_\varepsilon|^2 \, dx \, dt \,\leq\, C \,, \; \mbox{ where } \, \widehat{\mathbf{J}}_\varepsilon = - \sqrt{m_\varepsilon(\varphi_\varepsilon)} \,\nabla \mu_\varepsilon \,.
\end{align*}
Here $G_\varepsilon$ is a non-negative function defined by $G_\varepsilon(0) = G_\varepsilon'(0) = 0$ and $G_\varepsilon''(s) = \frac{1}{m_\varepsilon(s)} \sqrt{a(s)}$ for $s \in [-1,1]$.
\end{lemma}
\begin{proof}
ad $(i)$: This follows directly from the estimate \eqref{approxweak4} derived in the work of Abels, Depner and Garcke \cite{ADG12}. We just note that for the estimate of $\nabla \varphi_\varepsilon$ we use $\nabla A(\varphi_\varepsilon) = \sqrt{a(\varphi_\varepsilon)} \nabla \varphi_\varepsilon$ and the fact that $a$ is bounded from below by a positive constant due to Assumption \ref{assumptions}.
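To spell out this step (a one-line computation, under the assumption, as in \cite{ADG12}, that the free energy controls $\nabla A(\varphi_\varepsilon)$; here $c_0 > 0$ denotes the lower bound of $a$ from Assumption \ref{assumptions}):
\begin{align*}
\int_\Omega |\nabla \varphi_\varepsilon(t)|^2 \, dx
= \int_\Omega \frac{|\nabla A(\varphi_\varepsilon(t))|^2}{a(\varphi_\varepsilon(t))} \, dx
\leq \frac{1}{c_0} \int_\Omega |\nabla A(\varphi_\varepsilon(t))|^2 \, dx \,.
\end{align*}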
ad $(ii)$: From line \eqref{approxweak2} we get that $\partial_t \varphi_\varepsilon \in L^2(0,T;\left( H^1(\Omega) \right)')$, since $\nabla \mu_\varepsilon \in L^2(Q_T)$ and $\mathbf{v}_\varepsilon \cdot \nabla \varphi_\varepsilon = \operatorname{div} (\mathbf{v}_\varepsilon \,\varphi_\varepsilon)$ with $\mathbf{v}_\varepsilon \, \varphi_\varepsilon \in L^2(Q_T)$. Then we derive for a function $\zeta \in L^2(0,T;H^2(\Omega))$ the weak formulation
\begin{align}
\begin{split} \label{lab1}
\int_0^t & \langle \partial_t \varphi_\varepsilon , \zeta \rangle \, d\tau + \int_{Q_t} \mathbf{v}_\varepsilon \cdot \nabla \varphi_\varepsilon \, \zeta \, dx \, d\tau \\
&= - \int_{Q_t} m_\varepsilon(\varphi_\varepsilon) \nabla \mu_\varepsilon \cdot \nabla \zeta \, dx \, d\tau \\
&= \int_{Q_t} \sqrt{a(\varphi_\varepsilon)} \left( \widetilde{\Psi}_\varepsilon'(A(\varphi_\varepsilon)) - \Delta A(\varphi_\varepsilon) \right) \operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \nabla \zeta) \, dx \, d\tau ,
\end{split}
\end{align}
where we additionally used \eqref{approxweak3} to express $\mu_\varepsilon$. Now we choose as test function $\zeta = G_\varepsilon'(\varphi_\varepsilon)$, where $G_\varepsilon$ is defined by $G_\varepsilon(0) = G_\varepsilon'(0) = 0$ and $G_\varepsilon''(s) = \frac{1}{m_\varepsilon(s)} A'(s)$ for $s \in [-1,1]$. Note that $G_\varepsilon$ is a non-negative function, which can be seen from the representation $G_\varepsilon(s) = \int_0^s \left( \int_0^r \frac{1}{m_\varepsilon(\tau)} A'(\tau) \, d\tau \right) dr$. With $\zeta = G_\varepsilon'(\varphi_\varepsilon)$ it holds that
\begin{align*}
\nabla \zeta &= G''_\varepsilon(\varphi_\varepsilon) \nabla \varphi_\varepsilon = \frac{1}{m_\varepsilon(\varphi_\varepsilon)} \nabla \left( A(\varphi_\varepsilon) \right) \; \mbox{ and therefore } \; \\
\operatorname{div} \left(m_\varepsilon(\varphi_\varepsilon) \nabla \zeta \right) &= \Delta \left( A(\varphi_\varepsilon) \right) .
\end{align*}
Hence we derive
\begin{align}
\begin{split} \label{lab2}
\int_0^t &\langle \partial_t \varphi_\varepsilon , G'_\varepsilon(\varphi_\varepsilon) \rangle \, d\tau + \int_{Q_t} \mathbf{v}_\varepsilon \cdot \nabla \varphi_\varepsilon \, G'_\varepsilon(\varphi_\varepsilon) \, dx \, d\tau \\
&= \int_{Q_t} \sqrt{a(\varphi_\varepsilon)}\left( \widetilde{\Psi}_\varepsilon'(A(\varphi_\varepsilon)) - \Delta A(\varphi_\varepsilon) \right) \Delta A(\varphi_\varepsilon) \, dx \, d\tau \\
&= \int_{Q_t} \Psi_\varepsilon'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau - \int_{Q_t} \sqrt{a(\varphi_\varepsilon)} \, |\Delta A(\varphi_\varepsilon)|^2 \, dx \, d\tau \,.
\end{split}
\end{align}
For the terms on the left side we deduce
\begin{align*}
\int_0^t \langle \partial_t \varphi_\varepsilon , G'_\varepsilon(\varphi_\varepsilon) \rangle \, d\tau &= \int_\Omega G_\varepsilon(\varphi_\varepsilon(t)) \, dx - \int_\Omega G_\varepsilon(\varphi_0) \, dx \quad \mbox{ and } \\
\int_{Q_t} \mathbf{v}_\varepsilon \cdot \nabla \varphi_\varepsilon \, G'_\varepsilon(\varphi_\varepsilon) \, dx \, d\tau &= \int_{Q_t} \mathbf{v}_\varepsilon \cdot \nabla \left(G_\varepsilon(\varphi_\varepsilon)\right) dx \, d\tau \\
&= -\int_{Q_t} \operatorname{div} \mathbf{v}_\varepsilon \, G_\varepsilon(\varphi_\varepsilon) \, dx \, d\tau = 0 \,.
\end{align*}
{\allowdisplaybreaks
For the first term on the right side of \eqref{lab2} we observe
\begin{align*}
\int_{Q_t} &\Psi_\varepsilon'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau \\
&= \int_{Q_t} \Psi'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau + \varepsilon \int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau \\
& \leq -\int_{Q_t} \Psi''(\varphi_\varepsilon) \nabla \varphi_\varepsilon \cdot \nabla A(\varphi_\varepsilon) \, dx \, d\tau \\
&= -\int_{Q_t} \Psi''(\varphi_\varepsilon) \sqrt{a(\varphi_\varepsilon)} |\nabla \varphi_\varepsilon|^2 \, dx \, d\tau .
\end{align*}}
Herein the estimate
\begin{align*}
\int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau &\leq 0
\end{align*}
for the logarithmic part of the homogeneous free energy density is derived as follows. With an approximation of $\varphi_\varepsilon$ by $\varphi_\varepsilon^\alpha = \alpha \varphi_\varepsilon$ for $0<\alpha<1$ we have that $|\varphi_\varepsilon^\alpha| < \alpha < 1$ and therefore
\begin{align*}
\int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon^\alpha) \Delta A(\varphi_\varepsilon^\alpha) \, dx \, d\tau &= -\int_{Q_t} \Psi_{\mbox{\footnotesize ln}}''(\varphi_\varepsilon^\alpha) \nabla \varphi_\varepsilon^\alpha \cdot \nabla A(\varphi_\varepsilon^\alpha) \, dx \, d\tau \leq 0 \,,
\end{align*}
where we used integration by parts. To pass to the limit for $\alpha \nearrow 1$ on the left side we observe that $\varphi_\varepsilon^\alpha \to \varphi_\varepsilon$ in $L^2(0,T;H^2(\Omega))$.
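For the reader's convenience we note that the sign in the last display is immediate: a direct computation from the definition of $\Psi_{\mbox{\footnotesize ln}}$ gives
\begin{align*}
\Psi_{\mbox{\footnotesize ln}}''(s) = \frac{1}{1+s} + \frac{1}{1-s} = \frac{2}{1-s^2} \geq 0
\quad \mbox{and} \quad
\nabla \varphi_\varepsilon^\alpha \cdot \nabla A(\varphi_\varepsilon^\alpha)
= \alpha^2 \sqrt{a(\varphi_\varepsilon^\alpha)} \, |\nabla \varphi_\varepsilon|^2 \geq 0 \,,
\end{align*}
so the integrand on the right side is non-negative.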
Hence together with the bound $|\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon^\alpha)| \leq |\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)|$ we can use Lebesgue's dominated convergence theorem to conclude
\begin{align*}
\int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon^\alpha) \Delta A(\varphi_\varepsilon^\alpha) \, dx \, d\tau \longrightarrow \int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau \; \mbox{ for } \; \alpha \nearrow 1.
\end{align*}
With the lower bound $a(s) \geq c_0 > 0$ from Assumption \ref{assumptions} we therefore derive
\begin{align*}
\int_\Omega &G_\varepsilon(\varphi_\varepsilon(t)) \, dx + \int_{Q_t} |\Delta A(\varphi_\varepsilon)|^2 \, dx \, d\tau \\
&\leq C \left( \int_\Omega G_\varepsilon(\varphi_0) \, dx + \int_{Q_t} \Psi''(\varphi_\varepsilon) \sqrt{a(\varphi_\varepsilon)} \, |\nabla \varphi_\varepsilon|^2 \, dx \, d\tau \right) .
\end{align*}
Now we use $m_\varepsilon(\tau) \geq m(\tau)$ to observe the inequality
\begin{align*}
G_\varepsilon(s) &= \int_0^s \bigg( \int_0^r \frac{1}{m_\varepsilon(\tau)} \underbrace{A'(\tau)}_{=\sqrt{a(\tau)}} \, d\tau \bigg) dr \\
&\leq \int_0^s \left( \int_0^r \frac{1}{m(\tau)} \sqrt{a(\tau)} \, d\tau \right) dr =: G(s) \; \mbox{ for } \; s \in (-1,1).
\end{align*}
Due to the special choice of the degenerate mobility $m$ in \eqref{assumpdegmob} we conclude that $G$ can be extended continuously to the closed interval $[-1,1]$ and that therefore the integral $\int_\Omega G(\varphi_0) \, dx$ and in particular the integral $\int_\Omega G_\varepsilon(\varphi_0) \, dx$ are bounded. Moreover, since $\Psi''(s)$ is bounded for $|s| \leq 1$ and since we estimated $\int_\Omega |\nabla \varphi_\varepsilon(t)|^2 \, dx$ in $(i)$, this proves $(ii)$.
ad $(iii)$: To show this estimate we argue similarly as in the time-discrete situation of Lemma 4.2 in Abels, Depner and Garcke \cite{ADG12}.
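Here and below, $P_0$ denotes the orthogonal projection onto functions with mean value zero and $(f)_\Omega$ the mean value of $f$; that is (our notational convention, following \cite{ADG12}),
\begin{align*}
P_0 f := f - (f)_\Omega \,, \qquad (f)_\Omega := \frac{1}{|\Omega|} \int_\Omega f \, dx \,.
\end{align*}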
We multiply equation \eqref{approxweak3} with $P_0 \varphi_\varepsilon$, integrate over $\Omega$ and get almost everywhere in $t$ the identity
\begin{align} \label{P0ident}
\begin{split}
\int_{\Omega} \mu_\varepsilon P_0 \varphi_\varepsilon \, dx &= \int_{\Omega} \Psi'(\varphi_\varepsilon) P_0 \varphi_\varepsilon \, dx + \varepsilon \int_{\Omega} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) P_0 \varphi_\varepsilon \, dx \\
& \quad - \int_{\Omega} \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) P_0 \varphi_\varepsilon \, dx .
\end{split}
\end{align}
By using in identity \eqref{approxweak2} a test function which depends only on time $t$ and not on $x \in \Omega$, we derive the fact that $(\varphi_\varepsilon)_\Omega = (\varphi_0)_\Omega$, and by assumption this number lies in $(-1+\alpha, 1-\alpha)$ for a small $\alpha > 0$. In addition, with the property $\lim_{s \to \pm 1} \Psi_{\mbox{\footnotesize ln}}'(s) = \pm \infty$ we can show the inequality $\Psi_{\mbox{\footnotesize ln}}'(s)(s - (\varphi_0)_\Omega) \geq C_\alpha |\Psi_{\mbox{\footnotesize ln}}'(s)| - c_\alpha$ in three steps in the intervals $[-1,-1+\tfrac{\alpha}{2}]$, $[-1+\tfrac{\alpha}{2}, 1-\tfrac{\alpha}{2}]$ and $[1-\tfrac{\alpha}{2},1]$ successively. Altogether this leads to the following estimate:
\begin{align} \label{P0est}
\varepsilon \int_{\Omega} |\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)| \, dx &\leq C \left( \varepsilon \int_{\Omega} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) P_0 \varphi_\varepsilon \, dx + 1 \right) .
\end{align}
We observe the fact that $\int_{\Omega} \mu_\varepsilon P_0 \varphi_\varepsilon \, dx = \int_{\Omega} (P_0 \mu_\varepsilon ) \varphi_\varepsilon \, dx$ and, due to integration by parts,
\begin{align*}
- \int_{\Omega} & \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) P_0 \varphi_\varepsilon \, dx \\
&= \int_\Omega \sqrt{a(\varphi_\varepsilon)} \nabla A(\varphi_\varepsilon) \cdot \nabla \varphi_\varepsilon \, dx + \int_\Omega \frac{a'(\varphi_\varepsilon)}{2 \sqrt{a(\varphi_\varepsilon)}} \nabla \varphi_\varepsilon \cdot \nabla A(\varphi_\varepsilon) \, P_0 \varphi_\varepsilon \, dx \\
&= \int_\Omega a(\varphi_\varepsilon) |\nabla \varphi_\varepsilon|^2 \, dx + \int_\Omega \frac{1}{2} a'(\varphi_\varepsilon) \, P_0 \varphi_\varepsilon \, |\nabla \varphi_\varepsilon|^2 \, dx \,.
\end{align*}
Combining estimate \eqref{P0est} with identity \eqref{P0ident} we are led to
\begin{align*}
\varepsilon \int_{\Omega} |\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)| \, dx &\leq C \left( \int_{\Omega} |(P_0 \mu_\varepsilon) \varphi_\varepsilon| \, dx + \int_{\Omega} |\Psi'(\varphi_\varepsilon) P_0 \varphi_\varepsilon |\, dx \right. \\
& \quad + \left. \int_{\Omega} |\sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) P_0 \varphi_\varepsilon| \, dx + 1 \right) \\
&\leq C \left( \|P_0 \mu_\varepsilon\|_{L^2(\Omega)} + \|\nabla \varphi_\varepsilon\|_{L^2(\Omega)} + 1 \right) \\
& \leq C \left( \|\nabla \mu_\varepsilon \|_{L^2(\Omega)} + 1 \right) .
\end{align*}
In the last two lines we have used in particular the facts that $\varphi_\varepsilon$ is bounded between $-1$ and $1$, that $\Psi'$ is continuous, the energy estimate from $(i)$ for $\sup_{0 \leq t \leq T} \|\nabla \varphi_\varepsilon\|_{L^2(\Omega)}$ and the Poincar\'{e} inequality for functions with mean value zero.
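The three-step inequality used in the derivation of \eqref{P0est} can be sketched as follows; we spell out only the interval next to $+1$, since the interval next to $-1$ is symmetric and on the middle interval both factors are bounded. For $s \in [1-\tfrac{\alpha}{2},1]$ we have
\begin{align*}
\Psi_{\mbox{\footnotesize ln}}'(s) = \ln \frac{1+s}{1-s} \geq 0
\quad \mbox{and} \quad
s - (\varphi_0)_\Omega \geq \bigl(1-\tfrac{\alpha}{2}\bigr) - (1-\alpha) = \tfrac{\alpha}{2} \,,
\end{align*}
and hence $\Psi_{\mbox{\footnotesize ln}}'(s) \left( s - (\varphi_0)_\Omega \right) \geq \tfrac{\alpha}{2}\, |\Psi_{\mbox{\footnotesize ln}}'(s)|$ on this interval.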
With this estimate for $\varepsilon \int_{\Omega} |\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)| \, dx$ we can bound the integral of $\mu_\varepsilon$ by simply integrating identity \eqref{approxweak3} over $\Omega$:
\begin{align*}
\left| \int_{\Omega} \mu_\varepsilon \, dx \right| &\leq \int_{\Omega} |\Psi'(\varphi_\varepsilon)| \,dx + \varepsilon \int_{\Omega} |\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)| \, dx + \left| \int_{\Omega} \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) \, dx \right| \\
&\leq C \left( \|\nabla \mu_\varepsilon \|_{L^2(\Omega)} + 1 \right) ,
\end{align*}
where we used similarly as above integration by parts for the integral over $\sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon)$. By the splitting of $\mu_\varepsilon$ into $\mu_\varepsilon = P_0 \mu_\varepsilon + (\mu_\varepsilon)_\Omega$ we arrive at
\begin{align*}
\|\mu_\varepsilon\|_{L^2(\Omega)}^2 &\leq C \left( \|\nabla \mu_\varepsilon\|_{L^2(\Omega)}^2 + 1 \right) .
\end{align*}
Then, again from identity \eqref{approxweak3}, we derive
\begin{align*}
\varepsilon^2 |\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)|^2 &\leq C \left( |\mu_\varepsilon|^2 + |\Delta A(\varphi_\varepsilon)|^2 + 1 \right)
\end{align*}
and together with the last estimates and an additional integration over time $t$ this leads to
\begin{align*}
\varepsilon^2 \|\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)\|_{L^2(Q_T)}^2 &\leq C \left( \|\nabla \mu_\varepsilon \|^2_{L^2(Q_T)} + 1 \right) .
\end{align*}
Note that we used the bound $\|\Delta A(\varphi_\varepsilon)\|_{L^2(Q_T)} \leq C$ from $(ii)$. Furthermore, due to the bounds in $(i)$, we see $\varepsilon \|\nabla \mu_\varepsilon\|^2_{L^2(Q_T)} \leq C$ since $m_\varepsilon(s) \geq \varepsilon$ for $|s| \leq 1$, and therefore we arrive at
\begin{align*}
\varepsilon^3 \|\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)\|_{L^2(Q_T)}^2 &\leq C \,.
\end{align*}
ad $(iv)$: This follows directly from $(i)$.
\end{proof}
\iffalse
\begin{remark} \label{rem:diffproof}
In the case of $a(\varphi) \equiv 1$, which is the standard situation in other works, we can give a different proof of part $(iii)$ of Lemma \ref{lem:energyestimate}. Since this proof could be useful for other applications, we give it in detail. So for this remark we assume that $a(\varphi) \equiv 1$. From line \eqref{approxequ5} for the chemical potential $\mu_\varepsilon$ we derive with the projection $P_0$ onto $L^2_{(0)}(\Omega)$ the identity
\begin{align} \label{eq:muprojected}
P_0 \mu_\varepsilon &= P_0 \Psi'(\varphi_\varepsilon) + \varepsilon P_0 \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) - \Delta \varphi_\varepsilon \,.
\end{align}
Now we use a result of Abels \cite{Abe07} about subgradients of a certain energy for a fixed $m \in (-1,1)$. Let $E_\varepsilon : L^2(\Omega) \to \mathbb{R}$ with domain $\operatorname{dom} E_\varepsilon = \{ \varphi \in H^1(\Omega) \cap L^2_{(m)}(\Omega) \,|\, |\varphi| \leq 1$ a.e.$\}$ be given by
\begin{align*}
E_\varepsilon(\varphi) = \left\{ \begin{array}{cl} \tfrac{1}{2} \int_\Omega |\nabla \varphi|^2 \, dx + \varepsilon \int_\Omega \Psi_{\mbox{\footnotesize ln}}(\varphi) \,dx & \mbox{for } \varphi \in \operatorname{dom} E_\varepsilon \,, \\ + \infty & \mbox{else}. \end{array} \right.
\end{align*}
From \cite[Prop. 3.12.5]{Abe07} we get the fact that the domain of definition of the subgradient $\partial E_\varepsilon$ is given by
\begin{align*}
\mathcal{D}(\partial E_\varepsilon) = \left\{ \varphi \in H^2(\Omega) \cap L^2_{(m)}(\Omega) \,|\, \Psi_{\mbox{\footnotesize ln}}'(\varphi) \in L^2(\Omega) \,, \; \Psi_{\mbox{\footnotesize ln}}'(\varphi) |\nabla \varphi|^2 \in L^1(\Omega) \,, \; \left. \partial_n \varphi \right|_{\partial \Omega} = 0 \right\}
\end{align*}
and that $\partial E_\varepsilon(\varphi) = \varepsilon P_0 \Psi_{\mbox{\footnotesize ln}}'(\varphi) - \Delta \varphi$. Furthermore for $\varphi \in \mathcal{D}(\partial E_\varepsilon)$ there holds the estimate
\begin{align*}
\|\varphi\|_{H^2(\Omega)} + \varepsilon \|\Psi_{\mbox{\footnotesize ln}}'(\varphi)\|_{L^2(\Omega)} \leq C \left( \|\partial E_\varepsilon(\varphi)\|_{L^2(\Omega)} + \|\varphi\|_{L^2(\Omega)} + 1 \right) \,.
\end{align*}
Now in our case we get from \eqref{eq:muprojected} the identity
\begin{align*}
P_0 \mu_\varepsilon = P_0 \Psi'(\varphi_\varepsilon) + \partial E_\varepsilon(\varphi_\varepsilon)
\end{align*}
and we note that $\int_\Omega \varphi_\varepsilon = \int_\Omega \varphi_0$ due to the weak formulation of equation \eqref{approxequ3}. This leads to the following estimate:
\begin{align*}
\varepsilon^2 \|\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)\|_{L^2(\Omega)}^2 & \leq C \left( \|\partial E_\varepsilon(\varphi_\varepsilon)\|_{L^2(\Omega)}^2 + \|\varphi_\varepsilon\|_{L^2(\Omega)}^2 + 1 \right) \\
&\leq C \left( \|P_0 \mu_\varepsilon\|_{L^2(\Omega)}^2 + \|P_0 \Psi'(\varphi_\varepsilon)\|_{L^2(\Omega)}^2 + \|\varphi_\varepsilon\|_{L^2(\Omega)}^2 + 1 \right) \\
&\leq C \left( \|\nabla \mu_\varepsilon\|_{L^2(\Omega)}^2 + \|P_0 \Psi'(\varphi_\varepsilon)\|_{L^2(\Omega)}^2 + \|\varphi_\varepsilon\|_{L^2(\Omega)}^2 + 1 \right) \,.
\end{align*}
Since $\Psi(s)$ is smooth for $|s| \leq 1$ and due to the bounds of Lemma \ref{lem:energyestimate}, $(i)$, in particular
\[ \varepsilon \int_{Q_T} |\nabla \mu_\varepsilon|^2 \, dx \, dt \leq C \]
due to $m_\varepsilon(s) \geq \varepsilon$ for $|s| \leq 1$, we deduce with an additional integration over time $t$ the bound
\begin{align*}
\varepsilon^3 \|\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)\|_{L^2(Q_T)}^2 & \leq C \,.
\end{align*}
\end{remark}
\fi
\iffalse
\begin{remark}
The estimate in the previous Lemma, part (ii), can be extended to
\begin{align*}
\sup_{0 \leq t \leq T} \int_\Omega \hspace*{-2pt} G_\varepsilon(\varphi_\varepsilon(t)) \, dx + \int_{Q_T} \hspace*{-2pt}\left( |\Delta A(\varphi_\varepsilon)|^2 + \Psi_\varepsilon''(\varphi_\varepsilon) |\nabla \varphi_\varepsilon|^2 \right) dx \, dt \,\leq\, C \,.
\end{align*}
\begin{proof}
First we remark that one can derive that $\Psi_\varepsilon''(\varphi_\varepsilon) |\nabla \varphi_\varepsilon|^2 \in L^1(Q_T)$ by using \cite[Lemma 4.2 and (3.21)]{ADG12} and the Lemma of Fatou. Then we refine the above proof by deriving an equality instead of an inequality for the first term on the right side of \eqref{lab2}:
\begin{align*}
& \;\int_{Q_t} \Psi_\varepsilon'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau \\
=& \int_{Q_t} \Psi'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau + \varepsilon \int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau \\
=& -\int_{Q_t} \Psi''(\varphi_\varepsilon) \nabla \varphi_\varepsilon \cdot \nabla A(\varphi_\varepsilon) \, dx \, d\tau - \varepsilon \int_{Q_t} \Psi_{\mbox{\footnotesize ln}}''(\varphi_\varepsilon) \nabla \varphi_\varepsilon \cdot \nabla A(\varphi_\varepsilon) \, dx \, d\tau \\
=& -\int_{Q_t} \Psi''(\varphi_\varepsilon) \sqrt{a(\varphi_\varepsilon)} |\nabla \varphi_\varepsilon|^2 \, dx \, d\tau - \varepsilon \int_{Q_t} \Psi_{\mbox{\footnotesize ln}}''(\varphi_\varepsilon) \sqrt{a(\varphi_\varepsilon)} |\nabla \varphi_\varepsilon|^2 \, dx \, d\tau.
\end{align*}
Herein the identity
\[ \int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau = -\int_{Q_t} \Psi_{\mbox{\footnotesize ln}}''(\varphi_\varepsilon) \nabla \varphi_\varepsilon \cdot \nabla A(\varphi_\varepsilon) \, dx \, d\tau \]
for the logarithmic part of the homogeneous free energy density is derived as follows. We know from the properties of a weak solution of \eqref{approxequ1}-\eqref{approxequ6} that both sides of the identity exist. With an approximation of $\varphi_\varepsilon$ by $\varphi_\varepsilon^\alpha = \alpha \varphi_\varepsilon$ for $0<\alpha<1$ we have that $|\varphi_\varepsilon^\alpha| < \alpha < 1$ and therefore
\begin{align*}
\int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon^\alpha) \Delta A(\varphi_\varepsilon^\alpha) \, dx \, d\tau = -\int_{Q_t} \Psi_{\mbox{\footnotesize ln}}''(\varphi_\varepsilon^\alpha) \nabla \varphi_\varepsilon^\alpha \cdot \nabla A(\varphi_\varepsilon^\alpha) \, dx \, d\tau
\end{align*}
with integration by parts. To pass to the limit for $\alpha \nearrow 1$ we observe that $\varphi_\varepsilon^\alpha \to \varphi_\varepsilon$ in $L^2(0,T;H^2(\Omega))$ and together with the bounds $|\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon^\alpha)| \leq |\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon)|$ and $0 \leq \Psi_{\mbox{\footnotesize ln}}''(\varphi_\varepsilon^\alpha) \leq \Psi_{\mbox{\footnotesize ln}}''(\varphi_\varepsilon)$ we can use Lebesgue's dominated convergence theorem. Now we proceed as in the above proof with the additional term
\[ \varepsilon \int_{Q_t} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \Delta A(\varphi_\varepsilon) \, dx \, d\tau \]
on the left side.
\end{proof}
\end{remark}
\fi
\subsection{Passing to the Limit in the Approximation}
In this subsection we use the energy estimates to get weak limits for the sequences $(\mathbf{v}_\varepsilon,\varphi_\varepsilon,\mathbf{J}_\varepsilon)$, where $\mathbf{J}_\varepsilon = \sqrt{m_\varepsilon(\varphi_\varepsilon)} \, \widehat{\mathbf{J}}_\varepsilon \left( = -m_\varepsilon(\varphi_\varepsilon) \nabla \mu_\varepsilon \right)$. With some subtle arguments we extend the weak convergences to strong ones, so that we are able to pass to the limit for $\varepsilon \to 0$ in the equations \eqref{approxweak1}-\eqref{approxweak3} to recover the identities \eqref{weakline1}-\eqref{weakline3} in the definition of a weak solution for the main problem \eqref{degequ1}-\eqref{degequ7}. Using the energy estimates in Lemma \ref{lem:energyestimate}, we can pass to a subsequence to get
\begin{align*}
\mathbf{v}_\varepsilon \rightharpoonup \mathbf{v} & \;\mbox{ in } \; L^2(0,T;H^1(\Omega)^d) , \\
\varphi_\varepsilon \rightharpoonup \varphi & \; \mbox{ in } \; L^2(0,T;H^1(\Omega)) ,\\
\widehat{\mathbf{J}}_\varepsilon \rightharpoonup \widehat{\mathbf{J}} & \; \mbox{ in } \; L^2(0,T;L^2(\Omega)^d) \;\mbox{ and} \\
\mathbf{J}_\varepsilon \rightharpoonup \mathbf{J} & \; \mbox{ in } \; L^2(0,T;L^2(\Omega)^d)
\end{align*}
for $\mathbf{v} \in L^2(0,T;H^1(\Omega)^d) \cap L^\infty(0,T;L^2_\sigma(\Omega))$, $\varphi \in L^\infty(0,T;H^1(\Omega))$ and $\widehat{\mathbf{J}}, \mathbf{J} \in L^2(0,T;L^2(\Omega)^d)$. Here and in the following all limits are meant to hold for suitable subsequences $\varepsilon_k \to 0$ as $k \to \infty$.
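The uniform bound behind the convergence of $\mathbf{J}_\varepsilon$ is a direct consequence of Lemma \ref{lem:energyestimate}, $(iv)$, together with $m_\varepsilon \leq \max_{[-1,1]} m$, which holds by the construction of $m_\varepsilon$:
\begin{align*}
\|\mathbf{J}_\varepsilon\|_{L^2(Q_T)}
= \bigl\| \sqrt{m_\varepsilon(\varphi_\varepsilon)} \, \widehat{\mathbf{J}}_\varepsilon \bigr\|_{L^2(Q_T)}
\leq \Bigl( \max_{[-1,1]} m \Bigr)^{\frac{1}{2}} \, \| \widehat{\mathbf{J}}_\varepsilon \|_{L^2(Q_T)}
\leq C \,.
\end{align*}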
With the notation $\mathbf{J}_\varepsilon = -m_\varepsilon(\varphi_\varepsilon) \nabla \mu_\varepsilon$ the weak solution of problem \eqref{approxequ1}-\eqref{approxequ8} fulfills the following equations:
\begin{align}
\begin{split} \label{approxweakline1}
- \big(\rho_\varepsilon \mathbf{v}_\varepsilon , &\partial_t \boldsymbol{\psi} \big)_{Q_T} + \left( \operatorname{div}(\rho_\varepsilon \mathbf{v}_\varepsilon \otimes \mathbf{v}_\varepsilon) , \boldsymbol{\psi} \right)_{Q_T} + \left(2 \eta(\varphi_\varepsilon) D\mathbf{v}_\varepsilon , D\boldsymbol{\psi} \right)_{Q_T} \\
& - \left( (\mathbf{v}_\varepsilon \otimes \beta \mathbf{J}_\varepsilon) , \nabla \boldsymbol{\psi} \right)_{Q_T} = -\left( \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) \, \nabla \varphi_\varepsilon , \boldsymbol{\psi} \right)_{Q_T}
\end{split}
\end{align}
for all $\boldsymbol{\psi} \in \left[C_0^\infty(\Omega \times (0,T))\right]^d$ with $\operatorname{div} \boldsymbol{\psi} = 0$,
\begin{align}
-\int_{Q_T} \varphi_\varepsilon \, \partial_t \zeta \, dx \, dt + \int_{Q_T} (\mathbf{v}_\varepsilon \cdot \nabla \varphi_\varepsilon) \, \zeta \, dx \, dt = \int_{Q_T} \mathbf{J}_\varepsilon \cdot \nabla \zeta \, dx \, dt \label{approxweakline2}
\end{align}
for all $\zeta \in C_0^\infty((0,T);C^1(\overline{\Omega}))$ and
\begin{align}
\begin{split} \label{approxweakline3}
\int_{Q_T} & \mathbf{J}_\varepsilon \cdot \boldsymbol{\eta} \, dx \, dt \\
&= \int_{Q_T} \left( \Psi_\varepsilon'(\varphi_\varepsilon) - \sqrt{a(\varphi_\varepsilon)}\Delta A(\varphi_\varepsilon) \right) \operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt
\end{split}
\end{align}
for all $\boldsymbol{\eta} \in L^2(0,T;H^1(\Omega)^d) \cap L^\infty(Q_T)^d$ with $\boldsymbol{\eta} \cdot n = 0$ on $S_T$.
For the last line we used that for functions $\boldsymbol{\eta}$ with $\boldsymbol{\eta} \cdot n = 0$ on $S_T$ it holds \begin{align*} \int_{Q_T} \mathbf{J}_\varepsilon \cdot \boldsymbol{\eta} \, dx \, dt &= \int_{Q_T} \nabla \mu_\varepsilon \cdot m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta} \, dx \, dt = -\int_{Q_T} \mu_\varepsilon \operatorname{div}(m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt \\ &= -\int_{Q_T} \left(\Psi_\varepsilon'(\varphi_\varepsilon) - \sqrt{a(\varphi_\varepsilon)}\Delta A(\varphi_\varepsilon) \right) \operatorname{div}(m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt \,. \end{align*} Now we want to pass to the limit $\varepsilon \to 0$ in the above equations to finally arrive at the weak formulation \eqref{weakline1}-\eqref{weakline3}. For the convergence in identity \eqref{approxweakline1} we first note that \begin{align*} \partial_t \varphi_\varepsilon &\; \mbox{ is bounded in } \; L^2(0,T;\left(H^1(\Omega)\right)') \; \mbox{ and} \\ \varphi_\varepsilon & \;\mbox{ is bounded in } \; L^\infty(0,T;H^1(\Omega)) \,. \end{align*} Therefore we can deduce from the Aubin-Lions lemma \eqref{eq:AubinLions} the strong convergence \begin{align*} \varphi_\varepsilon \to \varphi \; \mbox{ in } \; L^2(0,T;L^2(\Omega)) \end{align*} and $\varphi_\varepsilon \to \varphi$ pointwise almost everywhere in $Q_T$. From the bound of $\Delta A(\varphi_\varepsilon)$ in $L^2(Q_T)$ and from \begin{align*} \nabla A(\varphi_\varepsilon) \cdot n &= \sqrt{a(\varphi_\varepsilon)} \nabla \varphi_\varepsilon \cdot n = 0 \; \mbox{ on } \, S_T, \end{align*} we get from elliptic regularity theory the bound \begin{align*} \|A(\varphi_\varepsilon)\|_{L^2(0,T;H^2(\Omega))} & \leq C \,.
\end{align*} This yields \begin{align*} A(\varphi_\varepsilon) \rightharpoonup g \; \mbox{ in } \; L^2(0,T;H^2(\Omega)) \end{align*} at first for some $g \in L^2(0,T;H^2(\Omega))$, but then, due to the weak convergence $\nabla \varphi_\varepsilon \rightharpoonup \nabla \varphi$ in $L^2(0,T;L^2(\Omega))$ and due to the pointwise almost everywhere convergence $a(\varphi_\varepsilon) \to a(\varphi)$ in $Q_T$, we can identify $g$ with $A(\varphi)$ to get \begin{align*} A(\varphi_\varepsilon) \rightharpoonup A(\varphi) \; \mbox{ in } \; L^2(0,T;H^2(\Omega)) \,. \end{align*} The next step is to strengthen the convergence of $\nabla \varphi_\varepsilon$ in $L^2(Q_T)$. To this end, we remark that by definition $A$ is Lipschitz-continuous with \begin{align*} |A(r) - A(s)| \leq \left| \int_s^r \sqrt{a(\tau)} \, d\tau \right| \leq C |r-s| \,. \end{align*} Furthermore, from the bound of $\partial_t \varphi_\varepsilon$ in $L^2(0,T;\left(H^1(\Omega)\right)')$ we get with the notation $\varphi_\varepsilon(.+h)$ for a shift in time \begin{align*} \| \varphi_\varepsilon(.+h) - \varphi_\varepsilon \|_{L^2(0,T-h;\left(H^1(\Omega)\right)')} &\leq Ch \,, \end{align*} which leads to the estimate \begin{align*} \lefteqn{\| A(\varphi_\varepsilon(.+h)) - A(\varphi_\varepsilon) \|_{L^2(0,T-h;\left(H^1(\Omega)\right)')}} \\ & \; \leq C \| \varphi_\varepsilon(.+h) - \varphi_\varepsilon \|_{L^2(0,T-h;\left(H^1(\Omega)\right)')} \\ & \; \leq C h \longrightarrow 0 \quad \mbox{as } \; h \to 0 \,. \end{align*} Together with the bound of $A(\varphi_\varepsilon)$ in $L^2(0,T;H^2(\Omega))$ we can use a theorem of Simon \cite[Th. 5]{Sim87} to conclude the strong convergence \begin{align*} A(\varphi_\varepsilon) \to A(\varphi) \; \mbox{ in } \; L^2(0,T;H^1(\Omega)) \,.
\end{align*} From $\nabla A(\varphi_\varepsilon) = \sqrt{a(\varphi_\varepsilon)} \nabla \varphi_\varepsilon$ we then get in particular the strong convergence \begin{align*} \nabla \varphi_\varepsilon \to \nabla \varphi \; \mbox{ in } \; L^2(0,T;L^2(\Omega)) \,. \end{align*} In addition we want to use an argument of Abels, Depner and Garcke from \cite[Sec. 5.1]{ADG12}, which shows that due to the a priori estimate in Lemma \ref{lem:energyestimate} and the structure of equation \eqref{approxweakline1} we can deduce the strong convergence $\mathbf{v}_\varepsilon \to \mathbf{v}$ in $L^2(0,T;L^2(\Omega)^d)$. In a few words, we show with the help of some interpolation inequalities the bound of $\partial_t(P_\sigma(\rho_\varepsilon \mathbf{v}_\varepsilon))$ in the space $L^{\frac{8}{7}}(0,T;(W^1_\infty(\Omega))')$, and together with the bound of $P_\sigma(\rho_\varepsilon \mathbf{v}_\varepsilon)$ in $L^2(0,T;H^1(\Omega)^d)$ this is enough to conclude with the Aubin-Lions lemma the strong convergence \begin{align*} P_\sigma(\rho_\varepsilon \mathbf{v}_\varepsilon) \to P_\sigma(\rho \mathbf{v}) \; \mbox{ in } \; L^2(0,T;L^2(\Omega)^d) \,. \end{align*} From this we can derive $\mathbf{v}_\varepsilon \to \mathbf{v}$ in $L^2(0,T;L^2(\Omega)^d)$. For the details we refer to \cite[Sec. 5.1 and Appendix]{ADG12}. With the last convergences and the weak convergence $\mathbf{J}_\varepsilon \rightharpoonup \mathbf{J}$ in $L^2(Q_T)$ we can pass to the limit $\varepsilon \to 0$ in line \eqref{approxweakline1} to achieve \eqref{weakline1}. The convergence in line \eqref{approxweakline2} follows from the above weak limits of $\varphi_\varepsilon$ and $\mathbf{J}_\varepsilon$ in $L^2(Q_T)$ and the strong ones of $\mathbf{v}_\varepsilon$ and $\nabla \varphi_\varepsilon$ in $L^2(Q_T)$.
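The passage to the limit in the remaining nonlinear terms rests on the convergence of the truncated mobility $m_\varepsilon$, whose uniform bound $|m_\varepsilon(s)-m(s)| \le m(1-\varepsilon)$ is recorded next. The following numerical sanity check is a sketch only: it assumes the quadratic degenerate mobility $m(s)=1-s^2$ truncated at $|s|=1-\varepsilon$, which matches the value $m_\varepsilon = \varepsilon(2-\varepsilon)$ on $\{|\varphi_\varepsilon|>1-\varepsilon\}$ used below, but the actual $m_\varepsilon$ is the one fixed in the construction of the approximation.

```python
# Sketch: uniform convergence of the truncated mobility m_eps -> m.
# Assumption (hypothetical here): m(s) = 1 - s^2, truncated at |s| = 1 - eps,
# which reproduces m_eps = eps*(2 - eps) on {|s| > 1 - eps} as in the text.

def m(s):
    return 1.0 - s * s

def m_eps(s, eps):
    # cut the degenerate mobility off near s = +-1
    s_cut = max(-(1.0 - eps), min(1.0 - eps, s))
    return m(s_cut)

def sup_deviation(eps, n=2001):
    # max_s |m_eps(s) - m(s)| over a uniform grid on [-1, 1]
    grid = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    return max(abs(m_eps(s, eps) - m(s)) for s in grid)

if __name__ == "__main__":
    for eps in (0.1, 0.01, 0.001):
        # the deviation equals m(1 - eps) = eps*(2 - eps), attained at s = +-1
        print(eps, sup_deviation(eps), m(1.0 - eps))
```

The deviation is concentrated near the degenerate points $s=\pm 1$ and vanishes on $\{|s|\le 1-\varepsilon\}$, which is exactly why the uniform bound $m(1-\varepsilon)\to 0$ suffices for the limit passage.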
Finally, the convergence in line \eqref{approxweakline3} can be seen as follows: the left-hand side converges due to the weak convergence of $\mathbf{J}_\varepsilon$, and for the right-hand side we calculate \begin{align} \int_{Q_T} &\left(\Psi_\varepsilon'(\varphi_\varepsilon) - \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) \right) \operatorname{div}(m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt \nonumber \\ &= \int_{Q_T} \Psi'(\varphi_\varepsilon) \operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt + \varepsilon \int_{Q_T} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt \nonumber \\ & \quad - \int_{Q_T} \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) \operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt \,. \label{thirdconverg} \end{align} The first and the third term can be treated similarly as in Elliott and Garcke \cite{EG96}. For the convenience of the reader we give the details. First we observe that $m_\varepsilon \to m$ uniformly, since for all $s \in \mathbb{R}$ it holds \begin{align*} |m_\varepsilon(s) - m(s)| &\leq m(1-\varepsilon) \to 0 \quad \mbox{for } \varepsilon \to 0 \,. \end{align*} Hence we conclude with the pointwise convergence $\varphi_\varepsilon \to \varphi$ a.e. in $Q_T$ that \begin{align*} m_\varepsilon(\varphi_\varepsilon) \to m(\varphi) \quad \mbox{a.e. in } \; Q_T \,. \end{align*} In addition, with the convergences $\Psi'(\varphi_\varepsilon) \to \Psi'(\varphi)$, $a(\varphi_\varepsilon) \to a(\varphi)$ a.e.
in $Q_T$ and with the weak convergence $\Delta A(\varphi_\varepsilon) \rightharpoonup \Delta A(\varphi)$ in $L^2(Q_T)$ we are led to \begin{align*} \int_{Q_T} \Psi'(\varphi_\varepsilon) m_\varepsilon(\varphi_\varepsilon) \operatorname{div} \boldsymbol{\eta} \, dx\,dt &\longrightarrow \hspace*{-2pt} \int_{Q_T} \Psi'(\varphi) m(\varphi) \operatorname{div} \boldsymbol{\eta} \, dx\,dt \; \mbox{ and } \\ \int_{Q_T} \hspace*{-3pt} \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) m_\varepsilon(\varphi_\varepsilon) \operatorname{div} \boldsymbol{\eta} \, dx \hspace*{1pt} dt &\longrightarrow \hspace*{-2pt} \int_{Q_T} \hspace*{-3pt} \sqrt{a(\varphi)} \Delta A(\varphi) m(\varphi) \operatorname{div} \boldsymbol{\eta} \, dx \hspace*{1pt} dt . \end{align*} The next step is to show that $m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon \to m'(\varphi) \nabla \varphi$ in $L^2(Q_T)$. To this end we split the integral in the following way: \begin{align*} \int_{Q_T} & |m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon - m'(\varphi) \nabla \varphi|^2 \, dx\,dt \\ &= \int_{Q_T \cap \{|\varphi| < 1\}} |m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon - m'(\varphi) \nabla \varphi|^2 \, dx\,dt \\ & \quad + \int_{Q_T \cap \{|\varphi| = 1\}} |m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon - m'(\varphi) \nabla \varphi|^2 \, dx\,dt \,. \end{align*} Since $\nabla \varphi = 0$ a.e.
on the set $\{|\varphi| = 1\}$, see for example Gilbarg and Trudinger \cite[Lem.~7.7]{GT01}, we obtain \begin{align*} &\int_{Q_T \cap \{|\varphi| = 1\}} |m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon - m'(\varphi) \nabla \varphi|^2 \, dx\,dt \\ & \;\; = \int_{Q_T \cap \{|\varphi| = 1\}} |m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon|^2 \, dx\,dt \\ & \;\; \leq C \int_{Q_T \cap \{|\varphi| = 1\}} |\nabla \varphi_\varepsilon|^2 \, dx\,dt \longrightarrow C \int_{Q_T \cap \{|\varphi| = 1\}} |\nabla \varphi|^2 \, dx\,dt = 0 \,. \end{align*} Although $m_\varepsilon'$ is not continuous, we can conclude on the set $\{|\varphi| < 1\}$ the convergence $m_\varepsilon'(\varphi_\varepsilon) \to m'(\varphi)$ a.e. in $Q_T$. Indeed, for a point $(x,t) \in Q_T$ with $|\varphi(x,t)| < 1$ and $\varphi_\varepsilon(x,t) \to \varphi(x,t)$, it holds that $|\varphi_\varepsilon(x,t)| < 1 - \delta$ for some $\delta > 0$ and $\varepsilon$ small enough, and in that region $m_\varepsilon'$ and $m'$ are continuous. Hence we have \begin{align} m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon \longrightarrow m'(\varphi) \nabla \varphi \quad \mbox{ a.e. in } \; Q_T \label{convpointwiseme} \end{align} and the generalized Lebesgue dominated convergence theorem now gives \begin{align*} \int_{Q_T \cap \{|\varphi| < 1\}} |m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon - m'(\varphi) \nabla \varphi|^2 \, dx \,dt \longrightarrow 0 \,, \end{align*} which finally proves $m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon \to m'(\varphi) \nabla \varphi$ in $L^2(Q_T)$. Similarly as above, together with the convergences $\Psi'(\varphi_\varepsilon) \to \Psi'(\varphi)$, $a(\varphi_\varepsilon) \to a(\varphi)$ a.e.
in $Q_T$ and with the weak convergence $\Delta A(\varphi_\varepsilon) \rightharpoonup \Delta A(\varphi)$ in $L^2(Q_T)$ we are led to \begin{align*} \int_{Q_T} & \Psi'(\varphi_\varepsilon) m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon \cdot \boldsymbol{\eta} \, dx\,dt \\ &\longrightarrow \int_{Q_T} \Psi'(\varphi) m'(\varphi) \nabla \varphi \cdot \boldsymbol{\eta} \, dx\,dt \; \mbox{ and } \\ \int_{Q_T} & \sqrt{a(\varphi_\varepsilon)} \Delta A(\varphi_\varepsilon) m_\varepsilon'(\varphi_\varepsilon) \nabla \varphi_\varepsilon \cdot \boldsymbol{\eta} \, dx \, dt \\ &\longrightarrow \int_{Q_T} \sqrt{a(\varphi)} \Delta A(\varphi) m'(\varphi) \nabla \varphi \cdot \boldsymbol{\eta} \, dx \, dt . \end{align*} Now we are left to show that the second term on the right side of \eqref{thirdconverg} converges to zero. To this end, we split it in the following way: \begin{align*} & \varepsilon \int_{Q_T} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt \\ &= \varepsilon \int_{\{|\varphi_\varepsilon| \leq 1 - \varepsilon\}} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt \\ & \quad + \varepsilon \int_{\{|\varphi_\varepsilon| > 1 - \varepsilon\}} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) \, dx \, dt \\ &=: (I)_\varepsilon + (II)_\varepsilon \,.
\end{align*} On the set $\{|\varphi_\varepsilon| \leq 1 - \varepsilon\}$ we use that $\Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) = \ln(1+\varphi_\varepsilon) - \ln(1-\varphi_\varepsilon) + 2$ and therefore $\left| \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \right| \leq |\ln \varepsilon | + C$ to deduce that \begin{align*} |(I)_\varepsilon| & \leq \varepsilon (|\ln \varepsilon| + C) \int_{Q_T} |\operatorname{div} (m_\varepsilon(\varphi_\varepsilon) \boldsymbol{\eta}) | \, dx \, dt \longrightarrow 0 \,. \end{align*} On the set $\{|\varphi_\varepsilon| > 1 - \varepsilon\}$, we use that $m_\varepsilon(\varphi_\varepsilon) = \varepsilon (2-\varepsilon)$ to deduce \begin{align*} (II)_\varepsilon &= \varepsilon^2 (2-\varepsilon) \int_{\{|\varphi_\varepsilon| > 1 - \varepsilon\}} \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \operatorname{div} \boldsymbol{\eta} \, dx \, dt \\ & \leq C \varepsilon^2 \| \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \|_{L^2(Q_T)} \\ & = C \sqrt{\varepsilon} \left( \varepsilon^{\frac{3}{2}} \| \Psi_{\mbox{\footnotesize ln}}'(\varphi_\varepsilon) \|_{L^2(Q_T)} \right) \longrightarrow 0 \,, \end{align*} since the last term in brackets is bounded by the energy estimate from Lemma \ref{lem:energyestimate}. For the relation of $\widehat{\mathbf{J}}$ and $\mathbf{J}$ we note that due to $\widehat{\mathbf{J}}_\varepsilon \rightharpoonup \widehat{\mathbf{J}}$, $\mathbf{J}_\varepsilon \rightharpoonup \mathbf{J}$ in $L^2(Q_T)$, $\mathbf{J}_\varepsilon = \sqrt{m_\varepsilon(\varphi_\varepsilon)} \, \widehat{\mathbf{J}}_\varepsilon$ and $\sqrt{m_\varepsilon(\varphi_\varepsilon)} \to \sqrt{m(\varphi)}$ a.e. in $Q_T$ from \eqref{convpointwiseme} we can conclude \begin{align*} \mathbf{J} &= \sqrt{m(\varphi)} \, \widehat{\mathbf{J}} \,.
\end{align*} From the weak convergence $\widehat{\mathbf{J}}_\varepsilon \rightharpoonup \widehat{\mathbf{J}}$ in $L^2(Q_T)$ we can conclude that \begin{align*} \int_{Q_{(s,t)}} |\widehat{\mathbf{J}}|^2 \, dx \, d\tau & \leq \liminf_{\varepsilon \to 0} \int_{Q_{(s,t)}} m_\varepsilon(\varphi_\varepsilon) |\nabla \mu_\varepsilon|^2 \, dx \, d\tau \end{align*} for all $0 \leq s \leq t \leq T$, and this is enough to proceed as in Abels, Depner and Garcke \cite{ADG12} to show the energy estimate. Finally we just remark that the continuity properties and the initial conditions can be derived with the same arguments as in \cite[Sec. 5.2, 5.3]{ADG12}, so that altogether we have proved Theorem~\ref{theo:existenceweaksolution}. \begin{thebibliography}{ambreitest} \bibitem[Abe09a]{Abe09a}Abels H., \textit{Existence of weak solutions for a diffuse interface model for viscous, incompressible fluids with general densities}, Comm. Math. Phys., vol. 289 (2009), p.45-73. \bibitem[Abe09b]{Abe09b}Abels H., \textit{On a diffuse interface model for two-phase flows of viscous, incompressible fluids with matched densities}, Arch. Rat. Mech. Anal., vol. 194 (2009), p.463-506. \bibitem[Abe12]{Abe12}Abels H., \textit{Strong Well-Posedness of a Diffuse Interface Model for a Viscous, Quasi-Incompressible Two-Phase Flow}, SIAM J. Math. Anal., vol. 44, no. 1 (2012), p.316-340. \bibitem[ADG12]{ADG12}Abels H., Depner D., Garcke H., \textit{Existence of weak solutions for a diffuse interface model for two-phase flows of incompressible fluids with different densities}, arXiv:1111.2493, to appear in J. Math. Fluid Mech. (2012). \bibitem[AGG12]{AGG12}Abels H., Garcke H., Gr\"un G., \textit{Thermodynamically consistent, frame indifferent diffuse interface models for incompressible two-phase flows with different densities}, Math. Models Meth. Appl. Sci., vol. 22, no. 3 (2012).
\bibitem[ADGK12]{ADGK12}Aki G., Dreyer W., Giesselmann J., Kraus C., \textit{A quasi-incompressible diffuse interface model with phase transition}, WIAS preprint no. 1726, Berlin (2012). \bibitem[Boy99]{Boy99} Boyer~F., \textit{Mathematical study of multi-phase flow under shear through order parameter formulation}, Asymptot. Anal., vol. 20, no.2 (1999), p.175-212. \bibitem[Boy02]{Boy02}Boyer F., \textit{A theoretical and numerical model for the study of incompressible mixture flows}, Comput. Fluids, vol. 31 (2002), p.41-68. \bibitem[CT94]{CT94}Cahn J. W., Taylor J. E., \textit{Surface motion by surface diffusion}, Acta Metall., vol. 42 (1994), p.1045-1063. \bibitem[Che02]{Che02}Chen L.-Q., \textit{Phase-field models for microstructure evolution}, Annu. Rev. Mater. Res., vol. 32 (2002), p.113-140. \bibitem[DU77]{DU77}Diestel J., Uhl Jr. J.J., \textit{Vector Measures}, Amer. Math. Soc., Providence, RI, 1977. \bibitem[DSS07]{DSS07}Ding H., Spelt P.D.M., Shu C., \textit{Diffuse interface model for incompressible two-phase flows with large density ratios}, J. Comp. Phys., vol. 22 (2007), p.2078-2095. \bibitem[EG96]{EG96}Elliott C.M., Garcke H., \textit{On the Cahn-Hilliard equation with degenerate mobility}, SIAM J. Math. Anal., vol. 27, no. 2 (1996), p 404-423. \bibitem[Gru95]{Gru95}Gr\"un G., \textit{Degenerate parabolic equations of fourth order and a plasticity model with nonlocal hardening}, Z. Anal. Anwendungen, vol. 14 (1995), p.541-573. \bibitem[GPV96]{GPV96}Gurtin M.E., Polignone D., Vi\~{n}als J., \textit{Two-phase binary fluids and immiscible fluids described by an order parameter}, Math. Models Meth. Appl. Sci., vol. 6, no.6 (1996), p.815-831. \bibitem[GT01]{GT01}D.~Gilbarg, N.~S.~Trudinger, \textit{Elliptic Partial Differential Equations of Second Order}, Springer 2001. \bibitem[Hil70]{Hil70}Hilliard J. E., \textit{Spinodal decomposition}, in Phase Transformations, American Society for Metals, Cleveland, 1970, p.497-560. 
\bibitem[HH77]{HH77}Hohenberg P.C., Halperin B.I., \textit{Theory of dynamic critical phenomena}, Rev. Mod. Phys., vol. 49 (1977), p.435-479. \bibitem[Lio69]{Lio69}Lions J.-L., \textit{Quelques M\'{e}thodes de R\'{e}solution des Probl\`{e}mes aux Limites Non lin\'{e}aires}, Dunod, Paris, 1969. \bibitem[LT98]{LT98}Lowengrub J., Truskinovsky L., \textit{Quasi-incompressible Cahn-Hilliard fluids and topological transitions}, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci., vol.~454 (1998), p.2617-2654. \bibitem[Mue85]{Mue85}M\"uller I., \textit{Thermodynamics}, Pitman Advanced Publishing Program. XVII, Boston-London-Melbourne, 1985. \bibitem[Rou90]{Rou90}Roub\'{i}\v{c}ek T., \textit{A generalization of the Lions-Temam compact embedding theorem}, \v{C}asopis P\v{e}st. Mat., vol. 115, no. 4 (1990), p.338-342. \bibitem[Sim87]{Sim87}Simon J., \textit{Compact sets in the space $L^p(0,T;B)$}, Ann. Mat. Pura Appl. (4), vol.~146 (1987), p.65-96. \bibitem[Soh01]{Soh01} Sohr H., \textit{The {N}avier-{S}tokes Equations}, Birkh\"auser Advanced Texts: Basler Lehrb\"ucher, Birkh\"auser Verlag, Basel, 2001. \end{thebibliography} \end{document}
\begin{document} \title[Scattered Representations]{Scattered representations of $SL(n, \bC)$} \author{Chao-Ping Dong} \author{Kayue Daniel Wong} \address[Dong]{School of Mathematical Sciences, Soochow University, Suzhou 215006, P.~R.~China} \email{[email protected]} \address[Wong]{School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong 518172, P. R. China} \email{[email protected]} \begin{abstract} Let $G$ be $SL(n, \bC)$. The unitary dual $\widehat{G}$ was classified by Vogan in the 1980s. This paper aims to describe the Zhelobenko parameters and the spin-lowest $K$-types of the scattered representations of $G$, which lie at the heart of $\widehat{G}^d$---the set of all the equivalence classes of irreducible unitary representations of $G$ with non-vanishing Dirac cohomology. As a consequence, we will verify a couple of conjectures of the first-named author for $G$. \end{abstract} \maketitle \setcounter{tocdepth}{1} \section{Introduction}\label{sec:intro} \subsection{Preliminaries on complex simple Lie groups} Let $G$ be a complex connected simple Lie group, and $H$ be a Cartan subgroup of $G$. Let $\fg_0$ and $\fh_0$ be the Lie algebras of $G$ and $H$, respectively, and we drop the subscripts to stand for the complexified Lie algebras. We adopt a positive root system $\Delta^+(\fg_0, \fh_0)$, and let $\varpi_1, \dots, \varpi_{\mathrm{rank}(\fg_0)}$ be the corresponding fundamental weights, with $\rho=\varpi_1+\cdots+\varpi_{\mathrm{rank}(\fg_0)}$ being the half sum of positive roots. Fix a Cartan involution $\theta$ on $G$ such that its fixed points form a maximal compact subgroup $K$ of $G$. Then on the Lie algebra level, we have the Cartan decomposition $$ \fg_0=\frk_0+\fp_0. $$ We denote by $\langle\cdot, \cdot\rangle$ the Killing form on $\fg_0$. This form is negative definite on $\frk_0$ and positive definite on $\fp_0$. Moreover, $\frk_0$ and $\fp_0$ are orthogonal to each other under $\langle\cdot, \cdot\rangle$.
We shall denote by $\|\cdot\|$ the norm corresponding to the Killing form. Let $H=TA$ be the Cartan decomposition of $H$, with $\fh_0=\ft_0+\fa_0$. We make the following identifications: \begin{equation}\label{identifction} \fh\cong \fh_0\times \fh_0, \quad \ft=\{(x, -x): x\in\fh_0\}, \quad \fa\cong\{(x, x): x\in \fh_0\}. \end{equation} Take an arbitrary pair $(\lambda_L, \lambda_R)\in \fh_0^*\times \fh_0^*$ such that $\mu:=\lambda_L-\lambda_R$ is integral. Denote by $\{\mu\}$ the unique dominant weight to which $\mu$ is conjugate under the action of the Weyl group $W$. Write $\nu:=\lambda_L + \lambda_R$. We can view $\mu$ as a weight of $T$ and $\nu$ as a character of $A$. Put $$ I(\lambda_L, \lambda_R):={\rm Ind}_B^G(\bC_{\mu}\otimes \bC_{\nu}\otimes {\bf 1})_{K-{\rm finite}}, $$ where $B$ is the Borel subgroup of $G$ determined by $\Delta^+(\fg_0, \fh_0)$. It is not hard to show that $V_{\{\mu\}}$, the $K$-type with highest weight $\{\mu\}$, occurs exactly once in $I(\lambda_L, \lambda_R)$. Let $J(\lambda_L, \lambda_R)$ be the unique irreducible subquotient of $I(\lambda_L, \lambda_R)$ containing $V_{\{\mu\}}$. By \cite{Zh}, every irreducible admissible $(\fg, K)$-module has the form $J(\lambda_L, \lambda_R)$. Indeed, up to equivalence, $J(\lambda_L, \lambda_R)$ is the unique irreducible admissible $(\fg, K)$-module with infinitesimal character the $W \times W$ orbit of $(\lambda_L, \lambda_R)$ and lowest $K$-type $V_{\{\lambda_L-\lambda_R\}}$. We will refer to the pair $(\lambda_L, \lambda_R)$ as the {\it Zhelobenko parameter} for the module $J(\lambda_L, \lambda_R)$. \subsection{Dirac cohomology} Fix an orthonormal basis $Z_1, \dots, Z_l$ of $\fp_0$ with respect to the inner product on $\fp_0$ induced by $\langle\cdot, \cdot\rangle$. Let $U(\fg)$ be the universal enveloping algebra of $\fg$, and let $C(\fp)$ be the Clifford algebra of $\fp$.
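In type $A$, which is the case of interest in this paper, $W \cong S_n$ acts by permuting coordinates, so the dominant representative $\{\mu\}$, and hence the highest weight of the lowest $K$-type of $J(\lambda_L, \lambda_R)$, can be computed by sorting. A minimal illustrative sketch (the function names are ours, not the paper's; weights are written in the standard coordinates):

```python
# Sketch (type A only): W = S_n permutes coordinates, so the dominant
# representative {mu} of a weight mu is its decreasing rearrangement.

def dominant(mu):
    """The unique dominant W-conjugate {mu} of a weight, W = S_n."""
    return tuple(sorted(mu, reverse=True))

def lowest_k_type(lam_L, lam_R):
    """Highest weight {lam_L - lam_R} of the lowest K-type of J(lam_L, lam_R)."""
    mu = tuple(l - r for l, r in zip(lam_L, lam_R))
    return dominant(mu)

if __name__ == "__main__":
    # e.g. lam_L = (1, 0, -1), lam_R = (1, -1, 0) gives mu = (0, 1, -1)
    print(lowest_k_type((1, 0, -1), (1, -1, 0)))  # -> (1, 0, -1)
```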
One checks that \begin{equation}\label{Dirac-operator} D:=\sum_{i=1}^{l} Z_i\otimes Z_i \in U(\fg)\otimes C(\fp) \end{equation} is independent of the choice of the orthonormal basis $Z_1, \dots, Z_l$. The operator $D$, called the \textit{Dirac operator}, was introduced by Parthasarathy \cite{P1}. By construction, $D^2$ is a natural Laplacian on $G$, which gives rise to Parthasarathy's Dirac inequality (see \eqref{Dirac-inequality} below). The inequality is very effective for detecting non-unitarity of $(\fg,K)$-modules, but is by no means sufficient to classify all (non-)unitary modules. To sharpen the Dirac inequality, and to offer a better understanding of the unitary dual, Vogan formulated the notion of Dirac cohomology in 1997 \cite{V2}. Let ${\rm Ad}: K\rightarrow SO(\fp_0)$ be the adjoint map, ${\rm Spin}\ \fp_0$ be the spin group of $\fp_0$, and denote by $p: {\rm Spin}\ \fp_0\rightarrow SO(\fp_0)$ the spin double covering map. Put $$ \widetilde{K}:=\{(k,s)\in K\times {\rm Spin} \, \fp_0\mid {\rm Ad}(k)=p(s)\}. $$ As in the case of $K$-types, we will refer to an irreducible $\widetilde{K}$-type with highest weight $\delta$ as $V_{\delta}$. Let $\pi$ be any admissible $(\fg, K)$-module, and $S$ be the spin module of $C(\fp)$. Then $U(\fg)\otimes C(\fp)$, in particular the Dirac operator $D$, acts on $\pi\otimes S$. Now the \textit{Dirac cohomology} is defined as the $\widetilde{K}$-module \begin{equation}\label{Dirac-cohomology} H_D(\pi):={\rm Ker} D/({\rm Ker} D \cap {\rm Im} D). \end{equation} It is evident from the definition that Dirac cohomology is an invariant for admissible $(\fg, K)$-modules.
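The assertion that $D^2$ is a natural Laplacian can be made precise by Parthasarathy's formula, which we recall for orientation only (it is standard, cf. \cite{P1}, but not displayed in the text; here $\rho_c$ denotes the half sum of a compatible system of positive compact roots, $\mathrm{Cas}$ the Casimir elements, and $\frk_\Delta$ the diagonal copy of $\frk$ in $U(\fg)\otimes C(\fp)$ -- notation ours):

```latex
% Parthasarathy's formula (standard; stated here for orientation only):
D^2 \;=\; -\,\mathrm{Cas}_{\fg}\otimes 1 \;+\; \mathrm{Cas}_{\frk_{\Delta}}
\;+\; \left( \|\rho_c\|^2 - \|\rho\|^2 \right) 1 \,.
```

Consequently, on the $\widetilde{K}$-isotypic component of $\pi\otimes S$ with highest weight $\gamma$, for $\pi$ of infinitesimal character $\Lambda$, the operator $D^2$ acts by the scalar $\|\gamma+\rho_c\|^2-\|\Lambda\|^2$; when $\pi$ is unitary, $D$ is self-adjoint on $\pi\otimes S$, so this scalar is non-negative, which is the source of the Dirac inequality recalled below.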
To compute this invariant, the Vogan conjecture, proved by Huang and Pand\v{z}i\'{c} \cite{HP1}, says that whenever $H_D(\pi) \neq 0$, one has \begin{equation}\label{thm-HP} \gamma+\rho=w\Lambda, \end{equation} where $\Lambda$ is the infinitesimal character of $\pi$, $\gamma$ is the highest weight of any $\widetilde{K}$-type in $H_D(\pi)$, and $w$ is some element of $W$. It turns out that many interesting $(\fg,K)$-modules $\pi$, such as some $A_q(\lambda)$-modules and all the highest weight modules, have non-zero Dirac cohomology (see \cite{HKP}, \cite{HPP}). One would therefore like to classify all representations with non-zero Dirac cohomology. \subsection{Spin-lowest $K$-type} From now on, we set $\pi$ as an irreducible unitary $(\fg, K)$-module with infinitesimal character $\Lambda$. In order to get a clearer picture of $H_D(\pi)$, the first-named author introduced the notion of spin-lowest $K$-types. Given an arbitrary $K$-type $V_{\delta}$, its spin norm is defined as \begin{equation}\label{spin-norm} \|\delta\|_{\rm spin}:=\|\{\delta-\rho\}+\rho\|. \end{equation} Then a $K$-type $V_{\tau}$ occurring in $\pi$ is called a \textit{spin-lowest $K$-type} of $\pi$ if it achieves the minimum spin norm among all the $K$-types showing up in $\pi$. As an application of the spin norm, note that $D$ is self-adjoint on the unitarizable module $\pi\otimes S$. By writing out $D^2$ carefully, and by using the \emph{PRV-component} \cite{PRV}, we can rephrase \textit{Parthasarathy's Dirac operator inequality} \cite{P2} as follows: \begin{equation}\label{Dirac-inequality} \|\delta\|_{\rm spin}\geq \|\Lambda\|, \end{equation} where $V_{\delta}$ is any $K$-type. Moreover, one can deduce from \cite[Theorem 3.5.3]{HP2} that $H_D(\pi)\neq 0$ if and only if the spin-lowest $K$-types $V_{\tau}$ attain the lower bound in \eqref{Dirac-inequality}. In such cases, $V_{\{\tau - \rho\}}$ will show up in $H_D(\pi)$.
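In type $A$ the spin norm \eqref{spin-norm} is directly computable, since $\{\cdot\}$ is again a decreasing rearrangement. The following sketch (ours) uses the Euclidean norm on weight coordinates as a stand-in for the Killing-form norm; the two differ by a fixed positive factor, so comparisons such as \eqref{Dirac-inequality} are unaffected:

```python
# Sketch: the spin norm ||delta||_spin = ||{delta - rho} + rho|| in type A,
# with {.} = decreasing rearrangement and the Euclidean norm standing in
# for the Killing-form norm (a fixed positive multiple of it on weights).
import math

def rho(n):
    # half sum of positive roots in coordinates: ((n-1)/2, (n-3)/2, ..., -(n-1)/2)
    return tuple((n - 1 - 2 * i) / 2 for i in range(n))

def spin_norm(delta, rho_vec):
    # {delta - rho}: sort delta - rho into decreasing order, then add rho back
    shifted = sorted((d - r for d, r in zip(delta, rho_vec)), reverse=True)
    v = [s + r for s, r in zip(shifted, rho_vec)]
    return math.sqrt(sum(x * x for x in v))

if __name__ == "__main__":
    r = rho(3)  # (1.0, 0.0, -1.0)
    # trivial K-type: {0 - rho} = rho, so ||0||_spin = ||2 rho||
    print(spin_norm((0, 0, 0), r))
```

In particular the trivial $K$-type has spin norm $\|2\rho\|$, illustrating that the spin norm of a $K$-type can be much larger than its ordinary norm.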
Put in a different way, the spin-lowest $K$-types of $\pi$ are exactly the $K$-types contributing to $H_D(\pi)$ whenever the cohomology is non-vanishing (see Proposition 2.3 of \cite{D1} for more details). \subsection{Scattered representations} \label{scattered} Based on the studies \cite{BP,DD}, we are interested in the irreducible unitarizable $(\fg, K)$-modules $J(\lambda, -s\lambda)$ such that \begin{itemize} \item[(i)] the weight $2\lambda$ is dominant integral, i.e., $2\lambda=\sum_{i=1}^{\mathrm{rank}(\fg_0)}c_i\varpi_i$, where each $c_i$ is a positive integer; \item[(ii)] the element $s\in W$ is an involution such that each simple reflection $s_i$, $1\leq i\leq \mathrm{rank}(\fg_0)$, occurs in one (thus in each) reduced expression of $s$; \item[(iii)] the module has non-zero Dirac cohomology, i.e., $H_D(J(\lambda, -s\lambda))\neq 0$, or equivalently, there exists a $K$-type $V_{\tau}$ in $J(\lambda, -s\lambda)$ such that \begin{equation} \label{spin=2lambda} \|\tau\|_{\rm spin} = \|(\lambda, -s\lambda)\| = \|2\lambda\|. \end{equation} \end{itemize} According to \cite{DD}, there are only finitely many such representations, which are called the \textit{scattered representations}. These representations lie at the heart of $\widehat{G}^d$ --- the set of all the irreducible unitary $(\fg, K)$-modules of $G$ with non-zero Dirac cohomology up to equivalence. Namely, by Theorem A of \cite{DD}, any member of $\widehat{G}^d$ is either a scattered representation, or it is cohomologically induced from a scattered representation tensored with a suitable unitary character of the Levi factor of a certain proper $\theta$-stable parabolic subgroup. In the latter case, one can easily trace the spin-lowest $K$-types along with the Dirac cohomology of the modules before and after induction. It is therefore of interest to have a good understanding of scattered representations. \subsection{Overview} In this manuscript, we focus on Lie groups $G$ of Type $A$.
For convenience, we will start from the group $GL(n, \bC)$, written as $GL(n)$ for short. In this case, Vogan classified the unitary dual. The part that we need can be described as follows. \begin{theorem}[\cite{V1}] \label{unitary} All irreducible unitary representations of $GL(n)$ with regular half-integral infinitesimal characters are parabolically induced from a unitary character, i.e. they are of the form $${\rm Ind}_{(\prod_{i=0}^m GL(a_i))U}^{GL(n)}(\bigotimes_{i=0}^m {\det}^{p_i} \otimes \mathbf{1})$$ for some $a_i \in \mathbb{Z}_{>0}$ and $p_i \in \mathbb{Z}$. For simplicity, we will write the parabolically induced module ${\rm Ind}_{LU}^{G}(\pi \otimes \mathbf{1})$ as ${\rm Ind}_{L}^{G}(\pi)$ for the rest of the manuscript. \end{theorem} By \cite[Theorem 2.4]{BP}, all such $\pi$ have non-zero Dirac cohomology. Moreover, \cite{BDW} proved Conjecture 4.1 of \cite{BP}, which says $$H_D(\pi) = 2^{[\frac{\mathrm{rank}(\fg_0)}{2}]}V_{\{\tau - \rho\}},$$ where $V_{\tau}$ is the {\it unique} spin-lowest $K$-type appearing in $\pi$ {\it with multiplicity one}. However, it is not clear what $V_{\tau}$ is like from the calculations in \cite{BDW}. In Section 2, we will give an algorithm to compute $V_{\tau}$ for all such $\pi$ (see Proposition \ref{prop-spin-lowest}). In Section 3, we will see how the calculations for $GL(n)$ in Section 2 can be translated to $SL(n)$, which gives a combinatorial description of the scattered representations of $SL(n)$ (Proposition \ref{prop:scattered}). As a result, we prove the following: \begin{itemize} \item The spin-lowest $K$-type of each scattered representation of $SL(n)$ is {\it unitarily small} in the sense of Salamanca-Riba and Vogan \cite{SV} (Corollary \ref{cor-u-small}); and \item The number of scattered representations of $SL(n)$ is equal to $2^{n-2}$ (Corollary \ref{cor-number}). \end{itemize} This verifies Conjecture C of \cite{DD} in the case of $SL(n)$, and proves Conjecture 5.2 of \cite{D2}, respectively.
It is worth noting that for any non-trivial scattered representation, its spin-lowest $K$-type lives deeper than, and differs from, the lowest $K$-type. We hope the effort here will shed some light on the real case in the future. \section{An algorithm computing the spin-lowest $K$-types} In this section, we give an algorithm to find the spin-lowest $K$-types of the irreducible unitary modules of $GL(n)$ given by Theorem \ref{unitary}. We use a {\bf chain} $$\mathcal{C} := \{c, c-2, \dots, c-(2k-2), c-2k\},$$ where $c, k \in \mathbb{Z}$ with $k >0$, to denote the Zhelobenko parameter $$\begin{pmatrix} \lambda \\ -w_0\lambda \end{pmatrix} = \begin{pmatrix} \frac{c}{2} & \frac{c}{2} - 1& \dots & \frac{c}{2} - (k-1) & \frac{c}{2} - k \\ -\frac{c}{2} + k & -\frac{c}{2} + (k-1) & \dots & -\frac{c}{2} + 1 & -\frac{c}{2} \\ \end{pmatrix}.$$ Note that the entries of $\mathcal{C}$ are precisely the entries of $2\lambda$. Also, this parameter corresponds to the one-dimensional module ${\det}^{c-k}$ of $GL(k+1)$. Consequently, Theorem \ref{unitary} implies that the Zhelobenko parameters of all irreducible unitary modules with regular half-integral infinitesimal character can be expressed by the chains $$(\lambda,-s\lambda) = \bigcup_{i=0}^m \mathcal{C}_i,$$ where the entries of the chains $\mathcal{C}_i$ are pairwise disjoint. In order to understand the spin-lowest $K$-types of these modules of $GL(n)$, we make the following: \begin{definition} \begin{itemize} \item[(a)] Two chains $\mathcal{C}_1 = \{A, \dots, a\}$, $\mathcal{C}_2 = \{B, \dots, b\}$ are {\bf linked} if the entries of $\mathcal{C}_1$ and $\mathcal{C}_2$ are disjoint, satisfying $$A > B > a \ \ \ \ \text{or} \ \ \ \ B > A > b.$$ \item[(b)] We say a union of chains $\displaystyle \bigcup_{i \in I} \mathcal{C}_i$ is {\bf interlaced} if for all $i \neq j$ in $I$, there exist indices $i = i_0, i_1, \dots, i_m = j$ in $I$ such that $\mathcal{C}_{i_{l-1}}$ and $\mathcal{C}_{i_{l}}$ are linked for all $1 \leq l \leq m$.
(By convention, a single chain $\mathcal{C}_1$ is also interlaced.) \end{itemize} \end{definition} For example, the parameter $\{9,7,5\} \cup \{6,4,2\} \cup \{3,1\}$ is interlaced, while the parameter $\{10,8\} \cup \{9,7\} \cup \{6,4\} \cup \{5,3,1\}$ is not interlaced. We are now in a position to describe the spin-lowest $K$-types of the unitary modules in Theorem \ref{unitary} using chains. \begin{algorithm} \label{alg:spinlkt} Let $J(\lambda,-s\lambda)$ be an irreducible unitary module of $GL(n)$ in Theorem \ref{unitary} with $(\lambda,-s\lambda) = \bigcup_{i=0}^m \mathcal{C}_i$, where $$\mathcal{C}_i := \{k_i + (d_i -1), \dots, k_i - (d_i - 1)\} = \{C_{i,1},\dots,C_{i,d_i}\}$$ is a chain with average value $k_i$ and length $d_i$. Then the lowest $K$-type is equal to (a $W$-conjugate of) $(\mathcal{T}_0, \dots, \mathcal{T}_m)$, where $$\mathcal{T}_i := (\underbrace{k_i, \dots, k_i}_{d_i}).$$ By re-indexing the chains when necessary, we may and we will assume that \begin{equation} \label{eq:order} \mbox{ for any }\ 0\leq i<j\leq m, \quad k_i > k_j, \ \ \text{or}\ \ d_i < d_j\ \text{if}\ \ k_i = k_j. \end{equation} Let us change the coordinates of $\mathcal{T}_i$ and $\mathcal{T}_j$ for all pairs of linked chains $\mathcal{C}_i$ and $\mathcal{C}_j$ such that $i<j$ by the following rules: \begin{itemize} \item[(a)] If $C_{i,1} > C_{j,1} \geq C_{j,d_j} > C_{i,d_i}$, i.e.
\begin{align*} \{C_{i,1},\ \ \dots,\ \ C_{i,d_i-p}&,\ \ \overbrace{C_{i,d_i-p+1},\ \dots \dots,\ \ C_{i,d_i}}^{p} \} \\ &\{C_{j,1}, \ \ \dots,\ \ C_{j,d_j}\} \end{align*} with $C_{j,1} = C_{i,d_i} + 2p-1$ and $d_j \leq p$, then we change the coordinates of $\mathcal{T}_i$ and $\mathcal{T}_j$ into: \[\boxed{\begin{aligned} \mathcal{T}_i' &: (*,\ \ \dots,\ \ *,\ \overbrace{k_i+p,\ k_i+(p-1),\ \dots,\ k_i+(p - d_j + 1),\ * ,\ \dots,\ *}^{p} ) \\ \mathcal{T}_j' &: \ \ \ \ \ \ \ \ \ \ \ \ (k_j - p,\ k_j - (p-1),\ \dots,\ k_j-(p - d_j+1)), \end{aligned}}\] where the entries marked by $*$ remain unchanged. \item[(b)] If $C_{i,1} > C_{j,1} > C_{i,d_i} > C_{j,d_j}$, i.e. \begin{align*} \{C_{i,1},\ \ \dots,\ \ C_{i,d_i-p}&,\ \ \overbrace{C_{i,d_i-p+1},\ \ \dots,\ \ C_{i,d_i}}^{p} \} \\ &\{C_{j,1}, \ \ \ \dots\dots,\ \ \ C_{j,p},\ \ \ \ C_{j,p+1},\ \ \dots, \ \ C_{j,d_j}\} \end{align*} with $C_{j,1} = C_{i,d_i} + 2p-1$ and $d_j > p$, then we change the coordinates of $\mathcal{T}_i$ and $\mathcal{T}_j$ into: \[\boxed{\begin{aligned} \mathcal{T}_i' &: (*,\dots,\ *,\ \overbrace{k_i+1,\ \dots,\ k_i+p}^{p} ) \\ \mathcal{T}_j' &: \ \ \ \ \ \ \ \ \ \ (k_j-1, \ \dots,\ k_j-p,\ *,\ \ \dots, \ \ *), \end{aligned}}\] where the entries marked by $*$ remain unchanged. \item[(c)] If $C_{j,1} > C_{i,1} > C_{j,d_j}$, then since $k_i \geq k_j$ one also has $C_{j,1} > C_{i,1} \geq C_{i,d_i} > C_{j,d_j}$, i.e.
\begin{align*} \{C_{i,1}, \ \ \dots ,\ \ &C_{i,d_i}\} \\ \{\underbrace{C_{j,1},\ \ \ \ \ \ \dots, \ \ \ \ \ \ C_{j,q}}_{q},&\ \ \ \ \ \ C_{j,q+1}, \ \ \dots, \ \ C_{j,d_j}\} \end{align*} with $C_{j,1} = C_{i,d_i} + 2q-1$, then we change the coordinates of $\mathcal{T}_i$ and $\mathcal{T}_j$ into: \[\boxed {\begin{aligned} \mathcal{T}_i'&: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (k_i + (q-d_i+1),\ \dots,\ k_i + (q-1),\ k_i + q) \\ \mathcal{T}_j'&: (\underbrace{*,\ \dots,\ *,\ k_j - (q-d_i+1),\ \dots,\ k_j - (q-1),\ k_j -q}_{q},\ *, \ \ \dots, \ \ *), \end{aligned}}\] where the entries marked by $*$ remain unchanged. \end{itemize} In the above three cases, we have only demonstrated the situation where $\mathcal{C}_i$ is in the first row and $\mathcal{C}_j$ is in the second row. The rule is the same when $\mathcal{C}_j$ is in the first row while $\mathcal{C}_i$ is in the second row. After running through all pairs of linked chains, $V_{\tau}$ is defined as the $K$-type with highest weight $\tau$ given by (a $W$-conjugate of) $\bigcup_{i=0}^m \mathcal{T}_i'$. \end{algorithm} \begin{example} \label{eg:spinlowest} Consider $(\lambda,-s\lambda) = \begin{aligned} \{10 && && 8\} && && \{6\} && && \{4\} \\ && \{9 && && 7 && && 5 && && 3 && && 1\} \end{aligned}$. Then the lowest $K$-type of $J(\lambda,-s\lambda)$ is \begin{align*} (9 && && 9) && && (6) && && (4) \\ && (5 && && 5 && && 5 && && 5 && && 5) \end{align*} To compute $V_{\tau}$, let us label the chains so that \eqref{eq:order} holds: $$ \mathcal{T}_0=(9 \quad 9),\quad \mathcal{T}_1=(6), \quad \mathcal{T}_2=(5 \quad 5\quad 5 \quad 5 \quad 5),\quad \mathcal{T}_3=(4). $$ Then we apply (a) to the pair $\mathcal{T}_2$, $\mathcal{T}_3$, apply (b) to the pair $\mathcal{T}_0$, $\mathcal{T}_2$, and apply (c) to the pair $\mathcal{T}_1$, $\mathcal{T}_2$. This gives us \begin{align*} (9 && && 10) && && (8) && && (2) \\ && (4 && && 3 && && 5 && && 7 && && 5). \end{align*} Thus $\tau = (10,9,8,7,5,5,4,3,2)$.
\end{example} \begin{theorem} Let $J(\lambda,-s\lambda)$ be a unitary module of $GL(n)$ in Theorem \ref{unitary}, and $V_{\tau}$ be obtained by Algorithm \ref{alg:spinlkt}. Then $[J(\lambda,-s\lambda):V_{\tau}] > 0$. \end{theorem} \begin{proof} Let $\displaystyle J(\lambda,-s\lambda) = {\rm Ind}_{\prod_{i=0}^m GL(a_i)}^{GL(n)}(\bigotimes_{i=0}^m V_{(k_i,\dots,k_i)})$. By rearranging the Levi factors, one can assume the chains $\mathcal{C}_0$, $\dots$, $\mathcal{C}_m$ satisfy Equation \eqref{eq:order}. We are interested in studying \begin{align*} \left[{\rm Ind}_{\prod_{i=0}^m GL(a_i)}^{GL(n)}(\bigotimes_{i=0}^m V_{(k_i,\dots,k_i)}): V_{\tau}\right] &= \left[\bigotimes_{i=0}^m V_{(k_i,\dots,k_i)}: V_{\tau}|_{\prod_{i=0}^m GL(a_i)}\right] \\ &= \left[\bigotimes_{i=0}^m V_{(k_i+t,\dots,k_i+t)}: V_{\tau}|_{\prod_{i=0}^m GL(a_i)} \otimes \bigotimes_{i=0}^m V_{(t,\dots,t)}\right] \\ &= \left[\bigotimes_{i=0}^m V_{(k_i+t,\dots,k_i+t)}: V_{\tau}|_{\prod_{i=0}^m GL(a_i)} \otimes V_{(t,\dots,t)}|_{\prod_{i=0}^m GL(a_i)}\right] \\ &= \left[\bigotimes_{i=0}^m V_{(k_i+t,\dots,k_i+t)}: V_{\tau+(t,\dots,t)}|_{\prod_{i=0}^m GL(a_i)}\right] \\ &= \left[{\rm Ind}_{\prod_{i=0}^m GL(a_i)}^{GL(n)}(\bigotimes_{i=0}^m V_{(k_i+t,\dots,k_i+t)}): V_{\tau+(t,\dots,t)}\right]. \end{align*} So we can assume $k_i > 0$ for all $i$ without loss of generality. We prove the theorem by induction on the number of Levi components. The theorem obviously holds when there is only one Levi component -- the irreducible module is a unitary character of $GL(n)$. Now suppose that the hypothesis holds when there are $m$ Levi factors, i.e. $$\left[{\rm Ind}_{\prod_{i=0}^{m-1} GL(a_i)}^{GL(n')}(\bigotimes_{i=0}^{m-1} V_{(k_i,\dots,k_i)}) : V_{\tau_{m-1}}\right] > 0,$$ where $n' = n - a_m$, and $\tau_{m-1}$ is obtained by applying Algorithm \ref{alg:spinlkt} on $\bigcup_{i=0}^{m-1} \mathcal{C}_i$. Suppose now $\tau_{m}$ is obtained by applying Algorithm \ref{alg:spinlkt} on $\bigcup_{i=0}^m \mathcal{C}_i$.
Then \begin{align*} &\ \left[{\rm Ind}_{\prod_{i=0}^{m} GL(a_i)}^{GL(n)}(\bigotimes_{i=0}^{m} V_{(k_i,\dots,k_i)}) : V_{\tau_{m}}\right] \\ = &\ \left[{\rm Ind}_{GL(n') \times GL(a_m)}^{GL(n)}\left({\rm Ind}_{\prod_{i=0}^{m-1} GL(a_i)}^{GL(n')}(\bigotimes_{i=0}^{m-1} V_{(k_i,\dots,k_i)}) \otimes V_{(k_m,\dots,k_m)}\right) : V_{\tau_{m}}\right] \\ \geq &\ \left[{\rm Ind}_{GL(n') \times GL(a_m)}^{GL(n)}(V_{\tau_{m-1}} \otimes V_{(k_m,\dots,k_m)}) : V_{\tau_{m}}\right] \\ =&\ c_{\tau_{m-1}, (k_m,\dots,k_m)}^{\tau_{m}}. \end{align*} Here $c_{\mu,\nu}^{\lambda}$ is the Littlewood-Richardson coefficient, and the last step uses Theorem 9.2.3 of \cite{GW}. Suppose $\tau_{m-1} = \bigcup_{i=0}^{m-1} \mathcal{T}_i''$. Here these $\mathcal{T}_i''$ are obtained by applying Algorithm \ref{alg:spinlkt} on $\mathcal{C}_0$, $\dots$, $\mathcal{C}_{m-1}$. Then $\tau_m$ is obtained by applying Algorithm \ref{alg:spinlkt} to $\mathcal{T}_i''$ and $\mathcal{T}_m = (k_m,\dots,k_m)$ for all linked $\mathcal{C}_i$ and $\mathcal{C}_m$. More precisely, by applying Rules (a) -- (c) in Algorithm \ref{alg:spinlkt}, $\tau_m$ is obtained from $\tau_{m-1}$ by the following: \begin{itemize} \item[(i)] Construct a new partition $\tau_{m-1} \cup (k_m,\dots,k_m)$. \item[(ii)] For each linked $\mathcal{C}_i$ and $\mathcal{C}_m$, add $(0,\dots,0, A, A-1,\dots,a+1,a,0,\dots,0)$ to the rows of $\tau_{m-1}$ corresponding to $\mathcal{T}_i''$, and subtract $(0,\dots,0, A, A-1,\dots,a+1,a,0,\dots,0)$ from the corresponding rows of $(k_m,\dots,k_m)$. \item[(iii)] $\tau_m$ is obtained by going through (ii) for all $\mathcal{C}_i$ linked with $\mathcal{C}_m$. \end{itemize} By the above construction of $\tau_m$, it follows from the Littlewood-Richardson Rule as stated on page 420 of \cite{GW} that \begin{equation}\label{LR-geq1} c_{\tau_{m-1}, (k_m,\dots,k_m)}^{\tau_{m}} \geq 1.
\end{equation} Indeed, it suffices to find \emph{one} \emph{L-R skew tableau} of shape $\tau_m/\tau_{m-1}$ and weight $$ (\underbrace{k_m,\dots, k_m}_{d_m}) $$ in the sense of Definition 9.3.17 of \cite{GW}. Recall that $d_m$ is the number of entries of the chain $\mathcal{C}_m$. To do so, we first describe the Ferrers diagram $\tau_m/\tau_{m-1}$. Suppose $\mathcal{C}_{i_1}$, $\dots$, $\mathcal{C}_{i_l}$ are linked to $\mathcal{C}_m$ with $i_1 > \dots > i_l$. By Step (ii) of the above algorithm, we add $(A_j, A_j - 1, \dots, a_j +1, a_j)$ to the rows in $\tau_{m-1}$ corresponding to the chains $\mathcal{C}_{i_j}$. Note that by our ordering of the chains, we must have $$A_l > \dots > a_l > A_{l-1} > \dots > a_{l-1} > \dots > A_1 > \dots > a_1.$$ The rows of the Ferrers diagram $\tau_m/\tau_{m-1}$ have lengths \begin{equation} \label{eq-skew} \underbrace{A_1, \dots, a_1}_{:= \mathcal{R}_1}; \cdots; \underbrace{A_l, \dots, a_l}_{:= \mathcal{R}_l}; \underbrace{k_m,\dots, k_m; (k_m - a_1), \dots, (k_m - A_1); \cdots; (k_m - a_l), \dots, (k_m - A_l)}_{:= \mathcal{R}_{l+1}} \end{equation} with $\sum_{j=1}^{l+1} |\mathcal{R}_j| = d_m$, where $|\mathcal{R}_j|$ is the number of entries in $\mathcal{R}_j$. Now we fill in the entries on each row of $\tau_m/\tau_{m-1}$ as follows. Consider the Young tableau $T$ whose row sizes are $(\underbrace{k_m,\dots, k_m}_{d_m})$ and the entries of the $i$-th row of $T$ are all equal to $i$. Now take a sequence of subtableaux of $T$ given by $$T_1 \subset T_2 \subset \dots \subset T_l \subset T_{l+1} := T$$ such that for each $1 \leq j \leq l$, $T_j$ has shape of the form $$ A_j > \dots > a_j > \dots > A_1 > \dots > a_1. $$ Consider the skew tableau $T_j/T_{j-1}$ for $1 \leq j \leq l+1$ (where we take $T_0$ to be the empty tableau); then the column sizes of $T_j/T_{j-1}$ are the same as the row lengths $\mathcal{R}_{j}$ marked in \eqref{eq-skew}.
For each $1 \leq j \leq l+1$, fill in the rows of the Ferrers diagram $\tau_m/\tau_{m-1}$ corresponding to $\mathcal{R}_j$ in \eqref{eq-skew} by filling the $t$-th row of $\mathcal{R}_j$ with the $t$-th entries on each column of $T_j/T_{j-1}$ counting from the top in ascending order. This will give us a \emph{semi-standard skew tableau} of shape $\tau_m/\tau_{m-1}$ and weight $(\underbrace{k_m,\dots, k_m}_{d_m})$ (see Definition 9.3.16 of \cite{GW}), whose row word is a \emph{reverse lattice word} by Definition 9.3.17 of \cite{GW}. To sum up, it is a desired L-R tableau and \eqref{LR-geq1} follows. \end{proof} \begin{proposition}\label{prop-spin-lowest} Let $J(\lambda,-s\lambda)$ be a unitary module of $GL(n)$ in Theorem \ref{unitary}, and $V_{\tau}$ be the $K$-type obtained by Algorithm \ref{alg:spinlkt}. Then $\tau$ satisfies $$\{\tau - \rho\} = 2\lambda - \rho.$$ Consequently, $V_{\tau}$ is a spin-lowest $K$-type of $J(\lambda,-s\lambda)$ by Equation \eqref{spin=2lambda}. \end{proposition} \begin{proof} We prove the proposition by induction on the number of chains in $(\lambda,-s\lambda) = \bigcup_{i=0}^m \mathcal{C}_i$, where the chains are arranged so that Equation \eqref{eq:order} holds. Suppose that the proposition holds for $\bigcup_{i=0}^{m-1} \mathcal{C}_i$. 
There are two possibilities when adding $\mathcal{C}_m$: \begin{itemize} \item There exists $\mathcal{C}_i$ such that $\mathcal{C}_i$ and $\mathcal{C}_m$ are related by Rule (a) in Algorithm \ref{alg:spinlkt}: \begin{align*} \{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathcal{C}_i\ \ \ \ \ \ \ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \} \\ &\{\ \ \mathcal{C}_{m}\ \ \} \end{align*} \item There exist $\mathcal{C}_j$ and $\mathcal{C}_{r}$, $\dots$, $\mathcal{C}_{m-1}$, such that $\mathcal{C}_j$ and $\mathcal{C}_m$ are related by Rule (b), and each $\mathcal{C}_{l}$, $r \leq l \leq m-1$, is related to $\mathcal{C}_m$ by Rule (c) in Algorithm \ref{alg:spinlkt}: \begin{align*} \{\ \ \ \ \ &\mathcal{C}_j\ \ \ \ \ \} \ \ \ \ \ \{\ \ \mathcal{C}_{r}\ \ \} \ \ \ \dots \ \ \ \{\ \ \mathcal{C}_{m-1}\ \ \} \\ &\{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathcal{C}_m\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \} \end{align*} \end{itemize} We will only study the second case; the proof of the first case is simpler. Suppose the chains in the second case are interlaced in the following fashion: \begin{equation} \label{eq:interlaced} \begin{aligned} \{\ \ \ \ &\ \mathcal{C}_{j}\ \ \ \ \ \ \ \ \ \} &&\ \ \{\overbrace{\ \ \mathcal{C}_{r}\ \ }^{d_{r}}\} \ \ \ \ \ \ \ \ \ \ \cdots\cdots &&\ \ \{\overbrace{\ \ \ \mathcal{C}_{m-1}\ \ \ }^{d_{m-1}}\}\\ \{&\underbrace{C_{m,1}, \cdots}_{p}\ \ \underbrace{\cdots\cdots }_{a_r} &&\underbrace{\cdots\ \ \cdots }_{d_{r}} \ \ \underbrace{\cdots\cdots}_{a_{r+1}} \ \ \cdots\cdots &&\underbrace{\cdots \ \ \ \ \cdots }_{d_{m-1}},\ \underbrace{\cdots,\ C_{m,d_m}}_{a_m}\} \end{aligned} \end{equation} for some $j+1 \leq r \leq m-1$, and the chains $\mathcal{C}_{j+1}$, $\dots$, $\mathcal{C}_{r-1}$---which have not been shown in \eqref{eq:interlaced}---are linked with $\mathcal{C}_j$ under Rule (a) of Algorithm \ref{alg:spinlkt}.
To simplify the calculations below, we introduce the notation $$(a)^{\epsilon}_d:=\underbrace{a, a+\epsilon, \dots, a+(d-1)\epsilon}_d.$$ Then $2\lambda$ is equal to the entries in Equation \eqref{eq:interlaced}. Since the values of the adjacent entries within the same chain differ by $2$, and the values of the interlaced entries differ by $1$, one can calculate $2\lambda - \rho$ up to a translation by a constant on all coordinates as follows: \begin{equation} \label{2lambda} \begin{aligned} \{\cdots&\ (A_{r-1})^0_p\} &&\ \ \{(A_{r})^0_{d_{r}}\} \ \ \ \ \ \ \ \ \ \ \cdots\cdots &&\ \ \{(A_{m-1})^0_{d_{m-1}}\}\\ \dots\ \ \{&(A_{r-1})^0_p\ \ (A_r)^{-1}_{a_r} &&(A_{r})^0_{d_{r}}\ \ (A_{r})^{-1}_{a_{r+1}} \ \ \cdots\cdots &&(A_{m-1})^0_{d_{m-1}}\ \ (A_{m-1})^{-1}_{a_m}\} \end{aligned} \end{equation} where $\displaystyle A_x := \sum_{l=x}^{m-1} a_{l+1}$ for $r-1 \leq x\leq m-1$ (note that the smallest entry of \eqref{2lambda} is $1$, appearing at the rightmost entry of the bottom chain). On the other hand, the calculation in Algorithm \ref{alg:spinlkt} gives $\tau$ as follows: $$\begin{aligned} (\cdots & \ \ (k_j)^{0}_{p}) \ \ \ \ \ \ \ \ \ \ \ \ \ \ (k_{r})^{0}_{d_{r}} \ \ \ \ \ \ \ \cdots\ \ \ \ \ \ \ (k_{m-1})^{0}_{d_{m-1}}\\ \dots \ \ (&(k_m)^{0}_{p}\ \ (k_m)^{0}_{a_r}\ \ (k_m)^{0}_{d_{r}}\ (k_m)^{0}_{a_{r+1}} \ \cdots\ (k_m)^{0}_{d_{m-1}}(k_m)^{0}_{a_m}) \end{aligned} = \bigcup_{i=0}^m \mathcal{T}_i \ \longrightarrow\ \ \bigcup_{i=0}^m \mathcal{T}_i' = \tau,$$ where $\bigcup_{i=0}^m \mathcal{T}_i'$ is given by \begin{equation} \label{tau} \begin{aligned} (\ \cdots &\ (k_j+1)^{1}_{p}) \ \ \ \ \ \ \ \ \ \ \ (k_{r}+(q_r-d_{r}+1))^1_{d_{r}} \ \ \ \ \ \ \cdots\ \ \ \ \ \ \ (k_{m-1}+(q_{m-1}-d_{m-1}+1))^1_{d_{m-1}}\\ \dots \ \ (&(k_m-1)^{-1}_{p} (k_m)^{0}_{a_r} (k_m-(q_r-d_{r}+1))^{-1}_{d_{r}}(k_m)^{0}_{a_{r+1}} \cdots(k_m-(q_{m-1}-d_{m-1}+1))^{-1}_{d_{m-1}}(k_m)^{0}_{a_m}) \end{aligned} \end{equation} and $q_i$ are obtained by Rule (c) of Algorithm \ref{alg:spinlkt}. 
For instance, $q_r=p+a_r+d_{r}$. Note that $$ k_j-(d_j-1)=k_{r}+(d_{r}-1)+2a_r+2. $$ Therefore, $$ k_j-d_j=k_{r}+d_{r}+2a_r. $$ From this, one deduces easily that $k_j\geq k_{r}+q_r+1$. Thus it makes sense to talk about the interval $[k_{r}+q_r+1, k_j]$. Before we proceed, we pay closer attention to the coordinates of $\mathcal{T}_{j}'$, which is the leftmost chain on the top row of Equation \eqref{tau}. More precisely, it consists of three parts: \begin{itemize} \item[(i)] As mentioned in the paragraph after Equation \eqref{eq:interlaced}, by applying Rule (a) of Algorithm \ref{alg:spinlkt} between $\mathcal{C}_j$ and each of $\mathcal{C}_{j+1}$, $\dots$, $\mathcal{C}_{r-1}$, one can check that $$\bigcup_{i=j+1}^{r-1}\mathcal{T}_{i}' \subset [k_{r}+q_r+1, k_j].$$ Suppose there are $\delta \geq 0$ coordinates in $\bigcup_{i=j+1}^{r-1}\mathcal{T}_{i}'$; then exactly $\delta$ coordinates of $\mathcal{T}_{j}'$ are strictly greater than $k_j + p$. \item[(ii)] By applying Algorithm \ref{alg:spinlkt} to $\mathcal{C}_j$ and $\mathcal{C}_m$, we have $p$ coordinates $(k_j+1)^1_p$ in $\mathcal{T}_j'$ as in Equation \eqref{tau}. \item[(iii)] The other coordinates of $\mathcal{T}_j'$ are either equal to $k_j$, or smaller than $k_j$ if they come from linking with some $\mathcal{C}_t$ with $t < j$. \end{itemize} In conclusion, the coordinates of $\mathcal{T}_j'$ are given by $(\overbrace{\sharp\ \dots\ \sharp}^{\delta} ; (k_j+1)^1_p; \overbrace{\flat\ \dots\ \flat}^{d_j - \delta - p})$, where $\sharp\ \dots\ \sharp$ has coordinates greater than $k_j+p$, and $\flat\ \dots\ \flat$ has coordinates smaller than $k_j + 1$.
We now arrange the coordinates of $\bigcup_{i = j}^{m} \mathcal{T}_i'$ in Equation \eqref{tau} as follows: \begin{align*} &\overbrace{\sharp\ \dots\ \sharp}^{\delta} > \overbrace{(k_j+1)^{1}_{p}}^{p} > \overbrace{\flat \cdots \flat}^{d_j-p-\delta} > \overbrace{\bigcup_{i=j+1}^{r-1} \mathcal{T}_i'}^{\delta} > \mathcal{T}_r' > \dots > \mathcal{T}_{m-1}' > (k_m)^{0}_{a_r} = \dots = (k_m)^{0}_{a_m} \\ & > (k_m-1)^{-1}_{p} > (k_m-(q_r-d_{r}+1))^{-1}_{d_{r}} > \dots > (k_m-(q_{m-1}-d_{m-1}+1))^{-1}_{d_{m-1}} \end{align*} Here elements in the blocks $\mathcal{T}'_r, \dots, \mathcal{T}'_{m-1}$ are still kept in increasing order. Note that if $x < y$, then $\mathcal{T}_x' > \mathcal{T}_y'$ in terms of their coordinates. We index the coordinates of $\tau$ shown in Equation \eqref{tau} using the above ordering, with the smallest coordinate indexed by $1$: \begin{equation} \label{rho} \begin{aligned} (\dots &\ (d_m+D_r+ d_j - p+1)^1_p) \ \ ((d_m+D_{r+1}+1)^1_{d_{r}}) \ \ \ \ \ \cdots\ \ \ \ \ \ ((d_m+1)^1_{d_{m-1}})\\ (&(D_r+p)^{-1}_p (D_r + p + 1)^1_{a_r}(D_r)^{-1}_{d_{r}} (D_r + p + a_r + 1)^1_{a_{r+1}} \cdots (D_{m-1})^{-1}_{d_{m-1}} (D_r + p + \sum_{l=r}^{m-1} a_l + 1)^1_{a_m}), \end{aligned} \end{equation} where $\displaystyle D_x := \sum_{l = x}^{m-1} d_{l}$ for $r\leq x\leq m-1$. Note that the coordinates of the last row read as \begin{align*} (D_r + p, \dots, 2, 1)&=((D_r+p)^{-1}_p;\ (D_r)^{-1}_{d_{r}};\ \dots;\ (D_{m-1})^{-1}_{d_{m-1}}),\\ (D_r + p + 1, \dots, d_m-1, d_m)&=\\ ((D_r + p + 1)^1_{a_r};\ \dots ;&\ (D_r + p + \sum_{l=r}^{x-1} a_l + 1)^1_{a_x};\ \dots;\ (D_r + p + \sum_{l=r}^{m-1} a_l + 1)^1_{a_m}).
\end{align*} Up to a translation by a constant on all coordinates, the difference between Equations \eqref{tau} and \eqref{rho} gives (a $W$-conjugate of) $\{\tau - \rho\}$, which is of the form: \begin{equation} \label{taurho} \begin{aligned} (\cdots &\ (\beta_j)^0_p) \ \ \ \ \ \ \ \ \ \ \ (\beta_r)^0_{d_{r}} \ \ \ \ \ \ \cdots\ \ \ \ \ \ (\beta_{m-1})^0_{d_{m-1}}\\ (&(\alpha_j)^0_p\ \ {\bf *\ *\ *\ }(\alpha_r)^0_{d_{r}}\ {\bf *\ *\ *}\ \cdots\ (\alpha_{m-1})^0_{d_{m-1}}\ {\bf *\ *\ *}) \end{aligned} \end{equation} Our goal is to show that \eqref{2lambda} and \eqref{taurho} are equal up to a translation by a constant on all coordinates. So we need to show the following: \begin{itemize} \item[(i)] $\alpha_j = \beta_j$: We need to show $$k_m - 1 - (D_r + p) = k_j + 1 - (d_m + D_r + d_j-p + 1).$$ In fact, we have \begin{align*} C_{m,1} &= C_{j,d_j} + 2p - 1 \\ k_m + (d_m - 1) &= k_j - (d_j-1) + 2p - 1 \\ k_m - p -1 &= k_j - d_j + p - d_m \\ k_m - 1 - (D_r + p) &= k_j + 1 - (d_m + D_r + d_j -p + 1) \end{align*} as required.
\item[(ii)] $\alpha_x = \beta_x$ for all $r \leq x \leq m-1$: This is the same as showing $$k_m - (q_x - d_{x} + 1) - D_x = k_{x} + (q_x - d_{x} + 1) - (d_m + D_{x+1}+1).$$ As in (i), we consider \begin{align*} C_{m,1} &= C_{x,d_{x}} + 2q_x - 1 \\ k_m + (d_m - 1) &= k_{x} - (d_{x}-1) + 2q_x - 1 \\ k_m - q_x + d_{x} -1 &= k_{x} + q_x - d_m \\ k_m - q_x + d_{x} -1 - D_x + D_{x+1} + d_{x} &= k_{x} + (q_x+1) - (d_m + 1) \\ k_m - q_x + d_{x} -1 - D_x &= k_{x} + (q_x-d_{x}+1) - (d_m + D_{x+1} + 1) \end{align*} as we wish to show.\\ \item[(iii)] $\alpha_j - \alpha_x = A_{r-1} - A_{x}$ for all $r \leq x \leq m-1$: In other words, we need to show $$[(k_m - 1) - (D_r +p)] - [(k_m - (q_x-d_{x}+1)) - D_x] = A_{r-1} - A_{x} = a_r + \dots + a_x$$ Indeed, by looking at Equation \eqref{eq:interlaced} and applying Rule (c) of Algorithm \ref{alg:spinlkt}, one gets \begin{align*} p + (a_r + \dots + a_x) + (d_r + \dots + d_x) &= q_x \\ q_x - p &= (A_{r-1} - A_{x}) + (D_r - D_{x+1}) \\ (k_m - 1) - (k_m - 1) + q_x - p - D_r + D_{x+1} &= A_{r-1} - A_{x} \\ [(k_m - 1) - (D_r +p)] - (k_m -1) + q_x + (D_x - d_x) &= A_{r-1} - A_{x} \\ [(k_m - 1) - (D_r +p)] - [(k_m - (q_x-d_x+1)) - D_x] &= A_{r-1} - A_{x} \end{align*} so the result follows.\\ \item[(iv)] Collecting the $*\ *\ *$ entries of Equation \eqref{taurho} consecutively from left to right gives $$\underbrace{\alpha_j,\dots,\alpha_r+1}_{a_r};\ \cdots\cdots;\ \underbrace{\alpha_x,\dots,\alpha_{x+1}+1}_{a_{x+1}};\ \cdots\cdots;\ \underbrace{\alpha_{m-1},\dots,\alpha_{m-1} - (a_m-1)}_{a_m}$$ In order for the above expression to make sense, one needs $\alpha_x - \alpha_{x+1} = a_{x+1}$ for all $r\leq x \leq m-2$, for instance. This is indeed the case, since $\alpha_x - \alpha_{x+1} = A_{x} - A_{x+1}$ by (iii), and the latter is equal to $a_{x+1}$ by the definition of $A_x$ for $r-1 \leq x\leq m-1$.
So it suffices to check $\displaystyle k_m - (D_r + p + \sum_{l=r}^{x} a_l + 1) = \alpha_x.$ To see that this is the case, one can check that the leftmost entry of the second row of Equation \eqref{taurho} is equal to \begin{align*} \alpha_j &= k_m - 1 - (D_r + p) \\ \alpha_x + A_{r-1} - A_{x} &= k_m - (D_r + p + 1) \ \ \ \ \ \ \ \ \ \text{(by (iii))} \\ \alpha_x + \sum_{l = r}^{x} a_l &= k_m - (D_r + p + 1) \\ \alpha_x &= k_m - (D_r + p + \sum_{l=r}^{x} a_l + 1) \end{align*} as required. \end{itemize} Combining (i) -- (iv), Equation \eqref{taurho} can be rewritten as \begin{align*} (\cdots&\ (\alpha_j)^0_p) &&\ \ \ ((\alpha_r)^0_{d_r}) \ \ \ \ \ \ \ \ \ \ \cdots\cdots &&\ \ ((\alpha_{m-1})^0_{d_{m-1}})\\ (&(\alpha_j)^0_p\ \ \ \ \ \ \ (\alpha_j)^{-1}_{a_r} &&(\alpha_r)^0_{d_r}\ \ \ \ \ (\alpha_r)^{-1}_{a_{r+1}} \ \ \ \ \cdots\cdots &&(\alpha_{m-1})^0_{d_{m-1}}\ \ (\alpha_{m-1})^{-1}_{a_m}) ,\end{align*} whose coordinates are in descending order from left to right. So it is equal to $\{\tau - \rho\}$ up to a translation by a constant. Moreover, by comparing it with Equation \eqref{2lambda}, we have shown that all coordinates of $2\lambda - \rho$ and $\{\tau - \rho\}$ differ by a constant (note that the other coordinates on the left of $\mathcal{C}_j$ are taken care of by the induction hypothesis). To see that they are exactly equal to each other, we calculate the {\it true} values of $A_{m-1}$ and $\alpha_{m-1}$ in $2\lambda - \rho$ and $\{\tau - \rho\}$ respectively on the entry marked by $\circledast$ below: \begin{align*} \{\dots, &\ \ *, \dots ,* \} &&\ \ \ \{*,\dots,*\} \ \ \ \ \ \ \cdots\ &&\ \ \{*,\dots, *\}\\ \{&*,\dots, *;\ \ \ \ *,\dots,*;\ &&*, \dots,*;\ \ \ *,\dots,*;\ \ \ \cdots;\ &&*,\dots,\circledast;\ \ \underbrace{*,\dots,*}_{a_m}\} \end{align*} For $2\lambda - \rho$, $\circledast$ takes the value $$C_{m, d_m - a_m} - \rho_{a_m + 2},$$ where $\rho = (\rho_n, \dots, \rho_2, \rho_1)$ with $\rho_i = \rho_1 + (i-1)$.
So it can be simplified as \begin{align*} C_{m,d_m - a_m} - \rho_{a_m + 2} &= k_m - (d_m - 1) + 2a_m - \rho_{a_m + 2} \\ &= k_m - d_m + 1 + 2a_m - \rho_1 - (a_m +1)\\ &= k_m - d_m + a_m - \rho_1 \end{align*} On the other hand, for $\{\tau - \rho\}$, $\circledast$ takes the value $$k_m - q_{m-1} - \rho_{1}$$ (recall that we had $\alpha_{m-1} = k_m - q_{m-1} - 1$ for $\circledast$ in our previous calculation). By looking at Equation \eqref{eq:interlaced} and applying Rule (c) of Algorithm \ref{alg:spinlkt} again, one has $q_{m-1} = d_m - a_m$; hence $2\lambda - \rho$ and $\{\tau - \rho\}$ take the same value on the $\circledast$ coordinate. Since we have seen that their coordinates differ by the same constant, one can conclude that $2\lambda - \rho = \{\tau - \rho\}$. \end{proof} \begin{example} For the interlaced chain in Example \ref{eg:spinlowest}, the translate of $2\lambda - \rho$ in Equation \eqref{2lambda} is equal to \begin{align*} \ &\begin{aligned} \{10-8 && && 8-6\} && && \{6-4\} && && \{4-2\} \\ && \{9-7 && && 7-5 && && 5-3 && && 3-1 && && 1-0\} \end{aligned} \\ =\ &\begin{aligned} \{2 && && 2\} && && \{2\} && && \{2\} \\ && \{2 && && 2 && && 2 && && 2 && && 1\} \end{aligned}. \end{align*} Also, the translate of $\tau - \rho$ in Equation \eqref{taurho} is given by: \begin{align*} \ &\begin{aligned} (9-8 && && 10-9) && && (8-7) && && (2-1) \\ && (4-3 && && 3-2 && && 5-4 && && 7-6 && && 5-5) \end{aligned}\\ =\ &\begin{aligned} (1 && && 1) && && (1) && && (1) \\ && (1 && && 1 && && 1 && && 1 && && 0) \end{aligned} \end{align*} Hence their coordinates differ by the same constant $1$. To see that $2\lambda - \rho$ and $\{\tau - \rho\}$ are equal, where $\rho = (4,3,2,1,0,-1,-2,-3,-4)$, one can look at the {\it true} values of them for the rightmost entry of the bottom chain: \[ 2\lambda - \rho:\ 1 - \rho_1 = 1 - (-4) = 5;\ \ \ \ \ \tau - \rho:\ 5 - \rho_5 = 5 - 0 = 5.
\] Hence $2\lambda - \rho = \{\tau - \rho\} = (6,6,6,6,6,6,6,6,5)$, and the unique $\widetilde{K}$-type in the Dirac cohomology of the corresponding unitary module is $V_{(6,6,6,6,6,6,6,6,5)}$. \end{example} \section{Scattered Representations of $SL(n)$} It is easy to parametrize the irreducible unitary representations of $SL(n)$ using the parametrization for $GL(n)$. In this case, we impose the condition that the coordinates of $\lambda$ sum to $0$. In other words, for each possible regular, half-integral infinitesimal character $\lambda$ for $SL(n)$, one can shift the coordinates by a suitable scalar, so that it corresponds to an infinitesimal character $\lambda'$ of $GL(n)$ whose smallest coordinate is equal to $1/2$. Therefore, the irreducible unitary representations of $SL(n)$ are parametrized by chains with $n$ coordinates whose smallest coordinate is equal to $1$. The following proposition characterizes which of these representations are scattered in the sense of Section \ref{scattered}: \begin{proposition} \label{prop:scattered} Let $\pi := J(\lambda,-s\lambda)$ be an irreducible unitary representation of $SL(n)$ such that $\lambda$ is dominant and half-integral. Then $\pi$ is a scattered representation if and only if the translated Zhelobenko parameter $(\lambda',-s\lambda')$ can be expressed as a union of interlaced chains with smallest coordinate equal to $1$. \end{proposition} \begin{proof} By the arguments in Section \ref{scattered}, one only needs to check that $s \in W$ involves all simple reflections in its reduced expression if and only if $(\lambda',-s\lambda') = \bigcup_{i=0}^m \mathcal{C}_i$ is interlaced. Indeed, $s \in W$ can be read off from $\bigcup_{i=0}^m \mathcal{C}_i$ as follows: label the entries of $\bigcup_{i=0}^m \mathcal{C}_i$ in descending order, e.g.
$$\bigcup_{i=0}^m \mathcal{C}_i = \begin{aligned} &\ \ \{p_{k+1},\ \ \dots \}\ \ \cdots \\ \{p_1,\ \ p_2, \ \ \dots,\ \ &p_k, \ \ \ p_{k+2},\ \ \dots \}\ \ \cdots \end{aligned}$$ with $p_1 > p_2 > \dots > p_n$, then we `flip' the entries of each chain $\mathcal{C}_i$ by $\{C_{i,1},\dots,C_{i,d_i}\}$ $\rightarrow$ $\{C_{i,d_i},\dots,C_{i,1}\}$. Suppose we have $$\begin{aligned} \begin{aligned} &\ \ \{p_{s_{k+1}},\ \ \dots \}\ \ \cdots \\ \{p_{s_1},\ \ p_{s_2}, \ \ \dots,\ \ &p_{s_k}, \ \ \ p_{s_{k+2}},\ \ \dots \}\ \ \cdots \end{aligned} \end{aligned}$$ after flipping each chain, then $s \in S_n$ is obtained by $s = \begin{pmatrix}1 & 2 & \dots & n \\ s_1 & s_2 & \dots & s_n \end{pmatrix}$ (see Example \ref{eg:interlaced}). Define an equivalence relation on the chains by letting $\mathcal{C}_i \sim \mathcal{C}_j$ iff $i = j$, or $\mathcal{C}_i, \mathcal{C}_j$ are interlaced. So we have a partition of $\{p_1, \dots, p_n\}$ by the entries of chains in the same equivalence class. It is not hard to check that the entries in each equivalence class have consecutive indices, i.e. $$\mathcal{E}_i = \{p_{a_i}, p_{a_i + 1}, \dots, p_{b_i -1}, p_{b_i}\}$$ and $\bigcup_{i=0}^m \mathcal{C}_i$ is interlaced iff there is only one equivalence class. We now prove the proposition. Suppose there exists more than one equivalence class, i.e. we have $$\mathcal{E}_1 = \{p_1, \dots, p_a\};\ \ \ \mathcal{E}_2 = \{p_{a+1}, \dots, p_b\}$$ for some $1 \leq a < n$. Since the smallest element in any equivalence class must be the smallest element of a chain, and the largest element in a class must be the largest element of a chain, we have $$\mathcal{C}_i = \{\ \dots,\ p_{a}\}\ \ \{p_{a+1},\ \dots \} = \mathcal{C}_j.$$ By the above description of $s \in S_n$, it is obvious that $s \in S_a \times S_{n-a} \subset S_n$, which does not involve the simple reflection $s_a$.
Conversely, if there is only one equivalence class, we suppose on the contrary that there exists some $1\leq a < n$ such that $s \in S_a \times S_{n-a}$. Since $p_a, p_{a+1}$ are in the same equivalence class, at least one of the following $$\{p_a, p_{a+1}\},\ \ \ \ \ \{p_a, p_{a+2}\}, \ \ \ \ \ \{p_{a-1}, p_{a+1}\}$$ is in the same chain $\mathcal{C}_i$ for some $0 \leq i \leq m$. By `flipping' $\mathcal{C}_i$ in any of these cases, there must be some $u \leq a < a+1 \leq v$ such that $s = \begin{pmatrix} \dots & u & \dots & v & \dots \\ \dots & v & \dots & u & \dots \end{pmatrix}$. The reduced expression of such $s$ must involve the simple reflection $s_a$, hence we obtain a contradiction. Therefore, $s$ must involve all simple reflections in its reduced expression. \end{proof} \begin{example} \label{eg:interlaced} Consider the interlaced chain with smallest coordinate $1$ given in Example \ref{eg:spinlowest}: \begin{align*} \{10 && && 8\} && && \{6\} && && \{4\} \\ && \{9 && && 7 && && 5 && && 3 && && 1\} \end{align*} Its corresponding irreducible representation in $SL(9)$ has Langlands parameter $(\lambda',-s\lambda')$, where $s = \begin{pmatrix}1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ 3 & 9 & 1 & 8 & 5 & 6 & 7 & 4 & 2 \end{pmatrix}$, and $\lambda'$ $=$ $[1/2,1/2,1/2,1/2,1/2,1/2,1/2,1]$, where $[a_1,\dots,a_m]$ is defined by $$[a_1,\dots,a_m] := a_1\varpi_1 + \dots + a_m\varpi_m.$$ In fact, the coordinates of $\lambda'$ are simply obtained by taking the differences of the neighboring coordinates of $\lambda = \frac{1}{2}(10,9,8,7,6,5,4,3,1)$. The calculation in Example \ref{eg:spinlowest} implies that the spin-lowest $K$-type for $J(\lambda',-s\lambda')$ in $SL(9)$ is $V_{[1,1,1,2,0,1,1,1]}$. \end{example} \begin{example} \label{spherical} We explore the possibilities of chains $\bigcup_{i=0}^m \mathcal{C}_i$ whose corresponding Zhelobenko parameter $(\lambda',-s\lambda')$ gives a spherical representation.
In order for the lowest $K$-type to be trivial, we need the $\mathcal{T}_i$'s in Algorithm \ref{alg:spinlkt} to have the same average value $k_i$ for all $i$; that is, the mid-points of all the $\mathcal{C}_i$'s (if there is more than one) must coincide. This leaves the possibility of $\bigcup_{i=0}^m \mathcal{C}_i$ consisting of a single chain, which corresponds to the trivial representation, or of two chains of lengths $a > b > 0$ whose entries have different parities. Hence it must be of the form $$\{2a-1, 2a-3, \dots, 3, 1\} \cup \{a+(b-1), a+(b-3), \dots, a-(b-3), a-(b-1)\},$$ where $a$, $b$ are of different parities. In other words, such representations can only occur for $SL(n)$ with $n = a+b$ odd, and the representation is equal to ${\rm Ind}_{S(GL(a) \times GL(b))}^{SL(n)}(\mathrm{triv} \otimes \mathrm{triv})$, which is the unipotent representation corresponding to the nilpotent orbit with Jordan blocks $(2^b1^{a-b})$ (see \cite[Section 5.3]{BP}). Its Langlands parameter $(\lambda',-s\lambda')$ has $2\lambda' = [\underbrace{2,\dots,2}_{(a-b-1)/2},\underbrace{1,\dots,1}_{2b},\underbrace{2,\dots,2}_{(a-b-1)/2}]$ and $s = w_0$ (see \cite[Conjecture 5.6]{DD}). Moreover, its spin-lowest $K$-type is given by Equation (5.5) of \cite{BP}, which matches our calculations in Algorithm \ref{alg:spinlkt}. \end{example} For the rest of this section, we give two applications of Proposition \ref{prop:scattered}: \subsection{The spin-lowest $K$-type is unitarily small} To offer a unified conjectural description of the unitary dual, Salamanca-Riba and Vogan formulated the notion of unitarily small (\emph{u-small} for short) $K$-types in \cite{SV}. Here we only quote it for a complex connected simple Lie group $G$ -- using the setting in the introduction, a $K$-type $V_{\delta}$ is u-small if and only if $\langle \delta-2\rho, \varpi_i\rangle\leq 0$ for $1\leq i\leq \mathrm{rank}(\fg_0)$ (see Theorem 6.7 of \cite{SV}).
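Since the u-smallness criterion just quoted is a finite list of linear inequalities, it is straightforward to test numerically. As an illustration (not part of the original text, with helper names of our own choosing, and with the pairing $\langle \cdot, \varpi_i\rangle$ computed only up to a positive scalar in standard coordinates), the following Python sketch checks the criterion for $\delta = \tau = (10,9,8,7,5,5,4,3,2)$ of Example \ref{eg:spinlowest}, viewed as a $K$-type of complex $SL(9)$:

```python
from fractions import Fraction
from itertools import accumulate

def fundamental_pairings(mu):
    # <mu, varpi_i> for sl(n), up to a positive scalar: the i-th partial
    # sum of mu minus i/n times the total sum (projection to trace zero),
    # for i = 1, ..., n-1
    n = len(mu)
    total = sum(mu)
    return [s - Fraction(i * total, n)
            for i, s in enumerate(accumulate(mu[:-1]), start=1)]

# delta = tau from Example eg:spinlowest; 2*rho in the same coordinates
delta = [10, 9, 8, 7, 5, 5, 4, 3, 2]
two_rho = [8, 6, 4, 2, 0, -2, -4, -6, -8]
diff = [d - r for d, r in zip(delta, two_rho)]

# u-smallness criterion of [SV, Theorem 6.7] for a complex group:
# <delta - 2*rho, varpi_i> <= 0 for every fundamental weight varpi_i
assert all(p <= 0 for p in fundamental_pairings(diff))
```

All eight inequalities hold here, consistent with Corollary \ref{cor-u-small}.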
\begin{lemma}\label{lemma-u-small} Let $\lambda=\sum_{i=1}^{\mathrm{rank}(\fg_0)}\lambda_i\varpi_i \in\fh_0^*$ be a dominant weight such that $\lambda_i=\frac{1}{2}$ or $1$ for each $1\leq i\leq \mathrm{rank}(\fg_0)$, and $V_{\delta}$ be the $K$-type with highest weight $\delta$ such that $$ \{\delta-\rho\}=2\lambda-\rho. $$ Then $\langle\delta-2\rho, \varpi_i\rangle\leq 0$, $1\leq i\leq \mathrm{rank}(\fg_0)$. Therefore, the $K$-type $V_{\delta}$ is u-small. \end{lemma} \begin{proof} By assumption, there exists $w\in W$ such that $\delta=w^{-1}(2\lambda-\rho)+\rho$. Thus \begin{align*} \langle \delta-2\rho, \varpi_i\rangle &=\langle w^{-1}(2\lambda-\rho)-\rho, \varpi_i\rangle\\ &=\langle w^{-1}(2\lambda-\rho), \varpi_i\rangle-\langle\rho, \varpi_i\rangle\\ &=\langle 2\lambda-\rho, w(\varpi_i)\rangle -\langle\rho, \varpi_i\rangle. \end{align*} On the other hand, let $w=s_{\beta_1}s_{\beta_2}\cdots s_{\beta_p}$ be a reduced decomposition of $w$ into simple root reflections. Then by Lemma 5.5 of \cite{DH}, \begin{equation} \varpi_i-w(\varpi_i)=\sum_{k=1}^p \langle \varpi_i, \beta_k^\vee \rangle s_{\beta_1}s_{\beta_2}\cdots s_{\beta_{k-1}}(\beta_k). \end{equation} Note that $s_{\beta_1}s_{\beta_2}\cdots s_{\beta_{k-1}}(\beta_k)$ is a positive root for each $k$. Now we have that \begin{align*} \langle \delta-2\rho, \varpi_i\rangle &=\Big\langle 2\lambda-\rho,\varpi_i- \sum_{k=1}^p \langle \varpi_i, \beta_k^\vee \rangle s_{\beta_1}s_{\beta_2}\cdots s_{\beta_{k-1}}(\beta_k)\Big\rangle -\langle\rho, \varpi_i\rangle\\ &=2\langle\lambda -\rho, \varpi_i\rangle-\sum_{k=1}^p \langle \varpi_i, \beta_k^\vee \rangle \langle 2\lambda-\rho, s_{\beta_1}s_{\beta_2}\cdots s_{\beta_{k-1}}(\beta_k) \rangle \\ &\leq 2\langle \lambda-\rho, \varpi_i\rangle\\ &\leq 0. \end{align*} Here the first inequality holds since $2\lambda-\rho$ is dominant and each $\langle \varpi_i, \beta_k^\vee \rangle$ is nonnegative, while the second holds since each coefficient $\lambda_i-1$ of $\lambda-\rho$ in the fundamental weight basis is nonpositive. \end{proof} \begin{corollary} \label{cor-u-small} The unique spin-lowest $K$-type $V_{\tau}$ of any scattered representation of $SL(n)$ is u-small. Consequently, Conjecture C of \cite{DD} holds for $SL(n)$.
\end{corollary} \begin{proof} Let $(\lambda, -s\lambda)$ be the Zhelobenko parameter for a scattered representation of $SL(n)$. Write $\lambda=\sum_{i=1}^{n-1} \lambda_i \varpi_i$ in terms of the fundamental weights. Then it follows directly from our definition of the interlaced chains that each $\lambda_i$ is either $\frac{1}{2}$ or $1$ (recall Proposition \ref{prop:scattered} and Example \ref{eg:interlaced}). Let $V_{\tau}$ be the unique spin-lowest $K$-type of the scattered representation. Then $\{\tau-\rho\}=2\lambda-\rho$ (see Proposition \ref{prop-spin-lowest}). Thus the result follows from Lemma \ref{lemma-u-small}. \end{proof} \subsection{Number of scattered representations} As another application of Proposition \ref{prop:scattered}, we compute the number of scattered representations of $SL(n)$. By the proposition, it is equal to the number of interlaced chains with $n$ entries whose smallest entry is equal to $1$. We now give an algorithm for constructing new interlaced chains with smallest coordinate equal to $1$ from those with one less coordinate: \begin{algorithm} \label{interlaced} Let $\displaystyle \bigcup_{i=1}^p \{2A_i-1,\dots, 2a_i-1\} \cup \bigcup_{j=1}^q \{2B_j, \dots, 2b_j\}$ be a union of interlaced chains such that \begin{itemize} \item $A_{i'} > A_i$ if $i' > i$, and $B_{j'} > B_j$ if $j' > j$; and\\ \item $2a_p - 1 = 1$. \end{itemize} We construct two new interlaced chains with one extra coordinate as follows.
(When $q=0$, we adopt CASE I only.)\\ \noindent{\bf CASE I:} If $2A_p -1 > 2B_q + 1$, then the two new interlaced chains are \begin{align*} && && && && \{2B_q&& && \dots && && 2b_q\} && && \dots \\ \{{\bf 2A_p+1} && 2A_p-1&& && \dots && && 2a_p-1\} && && \dots \end{align*} and \begin{align*} && \{{\bf 2A_p-2}\} && && \{2B_q&& && \dots && && 2b_q\} && && \dots \\ \{2A_p-1&& \dots && \dots && && 2a_p-1\} && && \dots \end{align*} \noindent{\bf CASE II:} If $2A_p-1 = 2B_q+1$, then the two new interlaced chains are \begin{align*} && && \{2B_q&& && \dots && && 2b_q\} && && \dots \\ \{{\bf 2A_p+1} && 2A_p-1&& && \dots && && 2a_p-1\} && && \dots \end{align*} and \begin{align*} \{{\bf 2B_q+2} && 2B_q&& && \dots && && 2b_q\} && && \dots \\ & \{2A_p-1 && && \dots && && 2a_p-1\} && && \dots \end{align*} \noindent{\bf CASE III:} If $2A_p-1 = 2B_q - 1$, then the two new interlaced chains are \begin{align*} & \{2B_q && && \dots && && 2b_q\} && && \dots \\ \{{\bf 2A_p+1} && 2A_p-1&& && \dots && && 2a_p-1\} && && \dots \end{align*} and \begin{align*} \{{\bf 2B_q + 2} && 2B_q&& && \dots && && 2b_q\} && && \dots \\ && && \{2A_p-1&& && \dots && && 2a_p-1\} && && \dots \end{align*} \noindent{\bf CASE IV:} If $2A_p-1 < 2B_q - 1$, then the two new interlaced chains are \begin{align*} \{2B_q&& \dots && \dots && && 2b_q\} && && \dots \\ && \{{\bf 2B_q-1}\} && && \{2A_p-1&& && \dots && && 2a_p-1\} && && \dots \end{align*} and \begin{align*} \{{\bf 2B_q+2} && 2B_q&& && \dots && && 2b_q\} && && \dots \\ && && && && \{2A_p-1&& && \dots && && 2a_p-1\} && && \dots \end{align*} \end{algorithm} \begin{example} Suppose we begin with an interlaced chain $\{9,7,5,3,1\} \cup \{4,2\}$.
Then the new interlaced chains with one extra coordinate are $$\{11,9,7,5,3,1\} \cup \{4,2\} \ \ \text{and}\ \ \{9,7,5,3,1\} \cup \{8\} \cup \{4,2\}.$$ \end{example} \begin{proposition} All interlaced chains with $n \geq 2$ entries with smallest coordinate equal to $1$ can be obtained uniquely from the chain $\{3\ 1\}$ by inductively applying the above algorithm. \end{proposition} \begin{proof} Let $\bigcup_{i=0}^m \mathcal{C}_i$ be a union of interlaced chains with largest coordinate equal to $M \in \mathcal{C}_0$. We remove a coordinate from it by the following rule: If $\mathcal{C}_i \neq \{M-1\}$ for all $i$, remove the entry $M$ from $\mathcal{C}_0$. Otherwise, remove the whole chain $\{M-1\}$ from the original interlaced chains. One can easily check from the definition of interlaced chain that the reduced chains are still interlaced, and one can recover the original chain by applying Algorithm \ref{interlaced} to the reduced chain. Therefore, for all interlaced chains with smallest entry $1$, we can use the reduction above repeatedly to get an interlaced chain with only $2$ entries, which must be of the form $\{3\ 1\}$, and repeated applications of Algorithm \ref{interlaced} on $\{3\ 1\}$ will retrieve the original interlaced chains (along with other chains). In other words, all interlaced chains with smallest entry $1$ can be obtained by Algorithm \ref{interlaced} inductively on $\{3\ 1\}$. We are left to show that all interlaced chains are uniquely constructed using the algorithm. Suppose on the contrary that there are two different interlaced chains that give rise to the same $\bigcup_{i=0}^m \mathcal{C}_i$ after applying Algorithm \ref{interlaced}. By the algorithm, these two chains must be obtained from $\bigcup_{i=0}^m \mathcal{C}_i$ by removing its largest odd entry $M_o \in \mathcal{C}_p$ or largest even entry $M_e \in \mathcal{C}_q$.
So they must be equal to $$\bigcup_{i \neq p} \mathcal{C}_i \cup (\mathcal{C}_p \backslash \{M_o\})\ \ \ \text{and}\ \ \ \bigcup_{i \neq q} \mathcal{C}_i \cup (\mathcal{C}_q \backslash \{M_e\})$$ respectively. Assume $M_o > M_e$ for now (and the proof for $M_e > M_o$ is similar). By applying Algorithm \ref{interlaced} to $\bigcup_{i \neq q} \mathcal{C}_i \cup (\mathcal{C}_q \backslash \{M_e\})$, we obtain two interlaced chains $$\bigcup_{i \neq p,q} \mathcal{C}_i \cup \mathcal{C}_p' \cup (\mathcal{C}_q \backslash \{M_e\})\ \ \ \text{and}\ \ \ \bigcup_{i \neq q} \mathcal{C}_i \cup (\mathcal{C}_q \backslash \{M_e\}) \cup \{M_o -1\},$$ where $\mathcal{C}_p' := \{M_o +2, \overbrace{M_o, \dots, m_o}^{\mathcal{C}_p}\}$. Note that none of the above gives rise to the interlaced chains $\bigcup_{i=0}^m \mathcal{C}_i$: Even in the case when $M_o -1 = M_e$, $(\mathcal{C}_q \backslash \{M_e\}) \cup \{M_o -1\}$ and $\mathcal{C}_q$ are different -- although they have the same coordinates, the first consists of two chains while the second consists of one chain only. So we have a contradiction, and the result follows. \end{proof} \begin{corollary}\label{cor-number} The number of interlaced chains with $n$ coordinates and the smallest coordinate equal to $1$ is equal to $2^{n-2}$. \end{corollary} Since the scattered representations of $SL(n+1)$ are in one-to-one correspondence with interlaced chains with $n+1$ coordinates having smallest coordinate $1$, this corollary implies that the number of scattered representations of Type $A_n$ is equal to $2^{n-1}$. This verifies Conjecture 5.2 of \cite{D2}. Moreover, by using \texttt{atlas}, the spin-lowest $K$-types for all scattered representations of $SL(n)$ with $n \leq 6$ are given in Tables 1--3 of \cite{D2}. One can easily check that the results there match our $V_{\tau}$ in Algorithm \ref{alg:spinlkt}. \begin{example}\label{exam-scattered-small-rank} Let us start from $SL(2, \bC)$ and the chain $\{3\quad 1\}$.
This chain corresponds to the trivial representation. Now we consider $SL(3, \bC)$. By Algorithm \ref{interlaced}, the chain $\{3\quad 1\}$ for $SL(2)$ produces two chains $$\{5\quad 3 \quad 1\} \qquad\qquad\qquad\qquad \begin{aligned} \{ &2 \} \\ \{ 3 \ \ \ &\ \ \ \ 1\} \end{aligned}.$$ The first corresponds to the trivial representation, while the second gives the representation with $\lambda=[\frac{1}{2}, \frac{1}{2}]$ and $s = \begin{pmatrix}1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}$. One computes by Algorithm \ref{alg:spinlkt} that the spin-lowest $K$-type is $\tau=[1, 1]$. Now let us consider $SL(4)$. By Algorithm \ref{interlaced}, the chain $\{5\quad 3\quad 1\}$ for $SL(3)$ produces two chains $$\{7\quad 5\quad 3 \quad 1\} \ \qquad\qquad\qquad\qquad \begin{aligned} \{ &4 \} \\ \{5 \ \ \ &\ \ \ \ 3 \ \ \ \ \ \ \ 1\} \end{aligned}.$$ The first chain corresponds to the trivial representation, while the second one gives the representation with $\lambda=[\frac{1}{2}, \frac{1}{2}, 1]$ and $s = \begin{pmatrix}1 & 2 & 3 & 4 \\ 4 & 2 & 3 & 1 \end{pmatrix}$. One computes by Algorithm \ref{alg:spinlkt} that the spin-lowest $K$-type is $\tau=[2, 0, 1]$. The other chain of $SL(3)$ produces \begin{align*} \{ &2 \} &\ \qquad \{4 \ \ \ \ &\ \ \ 2\}\\ \{5 \ \ \ \ \ \ \ 3 \ \ \ &\ \ \ \ 1\} &\ \qquad \{&3 \ \ \ \ \ \ \ 1\} \end{align*} One computes that $\lambda=[1, \frac{1}{2}, \frac{1}{2}]$, $s = \begin{pmatrix}1 & 2 & 3 & 4 \\ 4 & 2 & 3 & 1 \end{pmatrix}$, $\tau=[1, 0, 2]$; and that $\lambda=[\frac{1}{2}, \frac{1}{2}, \frac{1}{2}]$, $s = \begin{pmatrix}1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{pmatrix}$, $\tau=[1, 1, 1]$, respectively. These four representations (and their spin-lowest $K$-types) match precisely with Table 1 of \cite{D2}. \end{example} \centerline{\scshape Acknowledgements} We thank the referee sincerely for very careful reading and nice suggestions.
\centerline{\scshape Funding} Dong was supported by the National Natural Science Foundation of China (grant 11571097, 2016--2019). Wong is supported by the National Natural Science Foundation of China (grant 11901491) and the Presidential Fund of CUHK(SZ). \end{document}
\begin{document} \title{Exponential improvement in precision \\ for simulating sparse Hamiltonians} \author{ \normalsize Dominic W.\ Berry\thanks{Department of Physics and Astronomy, Macquarie University} \enskip \normalsize Andrew M.\ Childs\thanks{Department of Combinatorics \& Optimization and Institute for Quantum Computing, University of Waterloo}\hspace*{1.1mm}$^{,}$\thanks{Canadian Institute for Advanced Research} \enskip \normalsize Richard Cleve\thanks{Cheriton School of Computer Science and Institute for Quantum Computing, University of Waterloo}\hspace*{1.3mm}$^{,\ddag}$ \enskip \normalsize Robin Kothari$^{\S}$ \enskip \normalsize Rolando D.\ Somma\thanks{Theory Division, Los Alamos National Laboratory} } \date{} \maketitle \begin{abstract} We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a $d$-sparse Hamiltonian $H$ acting on $n$ qubits can be simulated for time $t$ with precision $\epsilon$ using $O\big(\tau \frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\big)$ queries and $O\big(\tau \frac{\log^2(\tau/\epsilon)}{\log\log(\tau/\epsilon)}n\big)$ additional 2-qubit gates, where $\tau = d^2 \|{H}\|_{\max} t$. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous- and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure.
Our simplification relies on a new form of ``oblivious amplitude amplification'' that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error. \end{abstract} \section{Introduction} \label{sec:intro} Simulation of quantum mechanical systems is a major potential application of quantum computers. Indeed, the problem of simulating Hamiltonian dynamics was the original motivation for the idea of quantum computation \cite{Fey82}. Lloyd provided an explicit algorithm for simulating many realistic quantum systems, namely those whose Hamiltonian is a sum of interactions acting nontrivially on a small number of subsystems of limited dimension \cite{Llo96}. If the interactions act on at most $k$ subsystems, such a Hamiltonian is called \emph{$k$-local}. Here we consider the more general problem of simulating sparse Hamiltonians, a natural class of systems for which quantum simulation has been widely studied. Note that $k$-local Hamiltonians are sparse, so algorithms for simulating sparse Hamiltonians can be used to simulate many physical systems. Sparse Hamiltonian simulation is also useful in quantum algorithms~\cite{AT03,CCJY09,HHL09,CJS13}. A Hamiltonian is said to be \emph{$d$-sparse} if it has at most $d$ nonzero entries in any row or column. In the sparse Hamiltonian simulation problem, we are given access to a $d$-sparse Hamiltonian $H$ acting on $n$ qubits via a black box that accepts a row index $i$ and a number $j$ between $1$ and $d$, and returns the position and value of the $j$th nonzero entry of $H$ in row $i$. Given such a black box for $H$, a time $t>0$ (without loss of generality), and an error parameter $\epsilon>0$, our task is to construct a circuit that performs the unitary operation $e^{-iHt}$ with error at most $\epsilon$ using as few queries to $H$ as possible. 
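To make the access model concrete, a black box for a $d$-sparse matrix can be assembled from any row-wise listing of its nonzero entries; the following sketch (the function name and the padding convention for short rows are our own, not from the text) returns the position and value of the $j$th nonzero entry of row $i$:

```python
import numpy as np

def make_sparse_oracle(H):
    """Build a query oracle for a d-sparse Hermitian matrix H.
    oracle(i, j) returns (col, H[i, col]) for the j-th nonzero entry
    of row i (j is 1-indexed, as in the text); if row i has fewer
    than j nonzero entries, it returns (i, 0.0) as padding."""
    rows = [np.flatnonzero(row) for row in H]
    def oracle(i, j):
        if j <= len(rows[i]):
            col = int(rows[i][j - 1])
            return col, float(H[i, col])
        return i, 0.0
    return oracle

H = np.array([[0., 2., 0., 0.],
              [2., 1., 0., 3.],
              [0., 0., 0., 0.],
              [0., 3., 0., 4.]])   # a 3-sparse Hermitian matrix
oracle = make_sparse_oracle(H)
print(oracle(1, 3))   # (3, 3.0): third nonzero entry of row 1
```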
To develop practical algorithms, we would also like to upper bound the number of additional 2-qubit gates. The \textit{time complexity} of a simulation is the sum of the number of queries and additional 2-qubit gates. The first efficient algorithm for sparse Hamiltonian simulation was due to Aharonov and Ta-Shma \cite{AT03}. The key idea (also applied in \cite{CCDFGS03}) is to use edge coloring to decompose the Hamiltonian $H$ into a sum of Hamiltonians $\sum_{j=1}^\eta H_j$, where each $H_j$ is easy to simulate. These terms are then recombined using the Lie product formula, which states that $e^{-iHt} \approx (e^{-iH_1t/r} e^{-iH_2t/r} \cdots e^{-iH_\eta t/r})^r$ for large $r$. This method gives query complexity $O(\mathop{\mathrm{poly}}(n,d) (\norm{H}t)^2/\epsilon)$, where $\norm{\cdot}$ denotes the spectral norm. This was later improved using high-order product formulas and more efficient decompositions of the Hamiltonian \cite{Suz91,Chi04,BAC+07,CK11,ChildsWiebe}. The best algorithms of this type \cite{CK11,ChildsWiebe} have query complexity \begin{equation} d^2(d+\log^*n)\norm{H}t\,\exp\Bigl(O\bigl(\sqrt{\log(d\norm{H}t/\epsilon)}\bigr)\Bigr). \end{equation} This complexity is only slightly superlinear in $\norm{H}t$ in that $\exp(O(\sqrt{\log(d\norm{H}t/\epsilon)}))$ is asymptotically smaller than $(d\norm{H}t/\epsilon)^{\delta}$ for any constant $\delta > 0$; however, $\exp(O(\sqrt{\log(d\norm{H}t/\epsilon)}))$ is not polylogarithmic in $d\norm{H}t/\epsilon$. We show the following (where $\norm{H}_{\max}$ denotes the largest entry of $H$ in absolute value). 
\begin{restatable}[Sparse Hamiltonian simulation]{theorem}{SPARSE}\label{thm:sparse} A $d$-sparse Hamiltonian $H$ acting on $n$ qubits can be simulated for time $t$ within error $\epsilon$ with $O\big(\tau \frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\big)$ queries and $O\big(\tau \frac{\log^2(\tau/\epsilon)}{\log\log(\tau/\epsilon)}n\big)$ additional 2-qubit gates, where $\tau \colonequals d^2 \norm{H}_{\max} t \ge 1$. \end{restatable} \noindent Our algorithm has no query dependence on $n$, improved dependence on $d$ and $t$, and exponentially improved dependence on $1/\epsilon$. Our new approach to Hamiltonian simulation strictly improves all previous approaches based on product formulas (e.g., \cite{Llo96,AT03,Chi04,BAC+07,CK11}). An alternative Hamiltonian simulation method based on a quantum walk \cite{Chi10,BC12} is incomparable. That method has query complexity $O(d \norm{H}_{\max} t/\sqrt{\epsilon})$, so its performance is better in terms of $\norm{H}_{\max} t$ and $d$ but significantly worse in terms of $\epsilon$. Thus, while suboptimal for (say) constant-precision simulation, the results of \thm{sparse} currently give the best known Hamiltonian simulations as a function of $\epsilon$. Essentially the same approach used for \thm{sparse} can be applied even when the Hamiltonian is time dependent. The query complexity is unaffected by any such time dependence, except that we take the largest max-norm of the Hamiltonian over all times (i.e., $\tau$ is redefined as $\tau \colonequals d^2 h t$ with $h \colonequals \max_{s \in [0,t]} \norm{H(s)}_{\max}$). The number of additional 2-qubit gates is $O\big(\tau \frac{\log(\tau/\epsilon)\log((\tau+\tau')/\epsilon)}{\log\log(\tau/\epsilon)}n\big)$, where $\tau' \colonequals d^2 h' t$ with $h' \colonequals \max_{s \in [0,t]} \norm{\frac{\mathrm{d}}{\mathrm{d}s}H(s)}$.
This dependence on $h'$ is a dramatic improvement over previous methods for simulating time-dependent Hamiltonians using high-order product formulas \cite{WBHS11}. Another previous simulation method \cite{PQSV11} also improved the dependence on $h'$, but at the cost of substantially worse dependence on $t$ and $\epsilon$. While our approach applies to sparse Hamiltonians in general, it can sometimes be improved using additional structure. In particular, consider the case of a $k$-local Hamiltonian acting on a system of qubits. (A $k$-local Hamiltonian acting on subsystems of limited dimension is equivalent to a $k$-local Hamiltonian acting on qubits with an increased value of $k$.) Since a term acting only on $k$ qubits is $2^k$-sparse, we can apply \thm{sparse} with $d=2^k M$, where $M$ is the total number of local terms. However, by taking the structure of local Hamiltonians into account, we find an improved simulation with $\tau$ replaced by $\tilde\tau \colonequals 2^k M \norm{H}_{\max} t$. The performance of our algorithm is optimal or nearly optimal as a function of some of its parameters. A lower bound of $\Omega(\norm{H}_{\max}t)$ follows from the no-fast-forwarding theorem of \cite{BAC+07}, showing that our algorithm's dependence on $\norm{H}_{\max}t$ is almost optimal. However, prior to our work, there was no known $\epsilon$-dependent lower bound, not even one ruling out algorithms with no dependence on $\epsilon$. We show that, surprisingly, our query dependence on $\epsilon$ in \thm{sparse} is optimal. \begin{restatable}[$\epsilon$-dependent lower bound for Hamiltonian simulation]{theorem}{LOWERBOUND}\label{thm:hamsimlower} For any $\epsilon>0$, there exists a 2-sparse Hamiltonian $H$ with $\norm{H}_{\max}<1$ such that simulating $H$ with precision $\epsilon$ for constant time requires $\Omega\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ queries.
\end{restatable} Our Hamiltonian simulation algorithm is based on a connection to the so-called fractional quantum query model. A result of Cleve, Gottesman, Mosca, Somma, and Yonge-Mallo \cite{CGM+09} shows that this model can be simulated with only small overhead using standard, discrete quantum queries. While this can be seen as a kind of Hamiltonian simulation, simulating the dynamics of a sparse Hamiltonian appears \emph{a priori} unrelated. Here we relate these tasks, giving a simple reduction from Hamiltonian simulation to the problem of simulating (a slight generalization of) the fractional-query model, so that improved simulations of the fractional-query model directly yield improvements in Hamiltonian simulation. To introduce the notion of fractional queries, recall that in the usual model of quantum query complexity, we wish to solve a problem whose input $x \in \{0,1\}^N$ is given by an oracle (or black box) that can be queried to learn the bits of $x$. The measure of complexity, called the query complexity, is the number of times we query the oracle. More precisely, we are given access to a unitary gate $Q_x$ whose action on the basis states $|j\rangle|b\rangle$ for all $j \in [N] \colonequals \{1,2,\ldots,N\}$ and $b \in \{0,1\}$ is $ Q_x |j\rangle|b\rangle = (-1)^{b x_j}|j\rangle|b\rangle$. A quantum query algorithm is a quantum circuit consisting of arbitrary $x$-independent unitaries and $Q_x$ gates. The query complexity of such an algorithm is the total number of $Q_x$ gates used in the circuit. The query model is often used to study the complexity of evaluating a classical function of $x$. However, it is also natural to consider more general tasks. In order of increasing generality, such tasks include state generation \cite{AMRR11}, state conversion \cite{LMRSS11}, and implementing unitary operations \cite{BC12}. Here we focus on the last of these tasks, where for each possible input $x$ we must perform some unitary operation $U_x$. 
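The oracle $Q_x$ defined above is simply a diagonal sign matrix on the $2N$-dimensional space spanned by $|j\rangle|b\rangle$. A minimal numerical sketch (the basis ordering $|j\rangle|b\rangle \mapsto 2(j-1)+b$ is our own convention):

```python
import numpy as np

def Q(x):
    """Discrete query oracle: Q|j>|b> = (-1)^(b * x_j) |j>|b>,
    with basis order |j>|b> -> 2*(j-1) + b for j = 1..N, b in {0,1}."""
    phases = [(-1.0) ** (b * xj) for xj in x for b in (0, 1)]
    return np.diag(phases)

x = [1, 0, 1, 1]                     # an N = 4 input string
Qx = Q(x)
assert np.allclose(Qx.conj().T @ Qx, np.eye(8))   # Q_x is unitary
# |j=1>|b=1> picks up the phase (-1)^{x_1} = -1:
e = np.zeros(8); e[1] = 1.0
assert np.allclose(Qx @ e, -e)
```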
Considering this task leads to a strong notion of simulation: to simulate a given algorithm in the sense of unitary implementation, one must reproduce the entire correct output state for every possible input state, rather than simply (say) evaluating some predicate in one bit of the output with a fixed input state. Since quantum mechanics is fundamentally described by the continuous dynamics of the Schr{\"o}\-ding\-er equation, it is natural to ask if the query model can be made less discrete. In particular, instead of using the gate $Q_x$ for unit cost, what if we can make half a query for half the cost? This perspective is motivated by the idea that if $Q_x$ is performed by a Hamiltonian running for unit time, we can stop the evolution after half the time to obtain half a query. In general we could run this Hamiltonian for time $\alpha \in (0,1]$ at cost $\alpha$. This \emph{fractional-query model} is at least as powerful as the standard (\emph{discrete-query}) model. More formally, we define the model as follows. \begin{definition}[Fractional-query model] For an $N$-bit string $x$, let $Q^\alpha_x$ act as $Q^\alpha_x|j\rangle|b\rangle = e^{-i\pi\alpha b x_j}|j\rangle|b\rangle$ for all $j \in [N]$ and $b \in \{0,1\}$. An algorithm in the fractional-query model is a sequence of unitary gates $U_{m}Q_x^{\alpha_{m}}U_{m-1}\cdots U_{1}Q_x^{\alpha_{1}}U_{0}$, where $U_i$ are arbitrary unitaries and $\alpha_i \in (0,1]$ for all $i$. The fractional-query complexity of this algorithm is $\sum_{i=1}^{m} \alpha_i$ and the total number of fractional-query gates used is $m$. \end{definition} This idea can be taken further by taking the limit as the sizes of the fractional queries approach zero to obtain a continuous variant of the model, called the \emph{continuous-query model} \cite{FG98}. In this model, we have access to a query Hamiltonian $H_x$ acting as $H_x |j\rangle|b\rangle = \pi b x_j|j\rangle|b\rangle$.
Unlike the fractional- and discrete-query models, this is not a circuit-based model of computation. In this model we are allowed to evolve for time $T$ according to the Hamiltonian given by $H_x + H_D(t)$ for an arbitrary time-dependent driving Hamiltonian $H_D(t)$, at cost $T$. More precisely, the model is defined as follows. \begin{definition}[Continuous-query model] Let $H_x$ act as $H_x |j\rangle|b\rangle = \pi b x_j|j\rangle|b\rangle$ for all $j \in [N]$ and $b \in \{0,1\}$. An algorithm in the continuous-query model is specified by an arbitrary $x$-independent driving Hamiltonian $H_D(t)$ for $t \in [0,T]$. The algorithm implements the unitary operation $U(T)$ obtained by solving the Schr{\"o}dinger equation \begin{equation} \label{eq:schrodinger} i \frac{\mathrm{d}}{\mathrm{d}t} U(t) = \big( H_x + H_D(t) \big) U(t) \end{equation} with $U(0)=\mathbbm{1}$. The continuous-query complexity of this algorithm is the total evolution time, $T$. \end{definition} Because $e^{-i\alpha H_x} = Q^\alpha_x$, running the Hamiltonian $H_x$ with no driving Hamiltonian for time $T=\alpha$ is equivalent to an $\alpha$-fractional query. In the remainder of this work we omit the subscript $x$ on $Q$ for brevity. While initial work on the continuous-query model focused on finding analogues of known algorithms \cite{FG98,Moc07}, it has also been studied with the aim of proving lower bounds on the discrete-query model \cite{Moc07}. Furthermore, the model has led to the discovery of new quantum algorithms. In particular, Farhi, Goldstone, and Gutmann \cite{FGG08} discovered an algorithm with continuous-query complexity $O(\sqrt{n})$ for evaluating a balanced binary NAND tree with $n$ leaves, which is optimal. This result was later converted to the discrete-query model with the same query complexity \cite{CCJY09,ACR+10}. A similar conversion can be performed for any algorithm with a sufficiently well-behaved driving Hamiltonian \cite{Chi10}.
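The identity $e^{-i\alpha H_x}=Q^\alpha_x$, together with the composition law $Q^{\alpha}Q^{\beta}=Q^{\alpha+\beta}$ (so that $Q^1$ is the discrete query), can be checked directly on a small instance; a sketch using our own basis convention $|j\rangle|b\rangle\mapsto 2(j-1)+b$:

```python
import numpy as np
from scipy.linalg import expm

x = [1, 0, 1]                                # an N = 3 input string
diag = np.array([b * xj for xj in x for b in (0, 1)], dtype=float)
Hx = np.pi * np.diag(diag)                   # H_x |j>|b> = pi*b*x_j |j>|b>

def Qfrac(alpha):
    """Fractional query Q^alpha = exp(-i * alpha * H_x)."""
    return expm(-1j * alpha * Hx)

# half a query twice equals one full (discrete) query:
assert np.allclose(Qfrac(0.5) @ Qfrac(0.5), Qfrac(1.0))
# Q^1 is the usual +/-1 phase oracle:
assert np.allclose(Qfrac(1.0), np.diag((-1.0) ** diag))
```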
However, this leaves open the question of whether continuous-query algorithms can be generically converted to discrete-query algorithms with the same query complexity. This was almost resolved by \cite{CGM+09}, which gave an algorithm that approximates a $T$-query continuous-query algorithm to bounded error with $O\big(T \frac{\log T}{\log\log T}\big)$ discrete queries. This algorithm can be made time efficient \cite{BCG14} (informally, the number of additional 2-qubit gates is close to the query complexity). However, to approximate a continuous-query algorithm to precision $\epsilon$, the algorithm of \cite{CGM+09} uses $O\big(\frac{1}{\epsilon} \frac{T\log T}{\log\log T}\big)$ queries. Ideally we would like the dependence on $\epsilon$ to be polylogarithmic, instead of polynomial, in $1/\epsilon$. For example, such behavior would be desirable when using a fractional-query algorithm as a subroutine. Here we present a significantly improved and simplified simulation of the continuous- and fractional-query models. In particular, we show the following. \begin{restatable}[Continuous-query simulation]{theorem}{CQUERYSIM}\label{thm:cquerysim} An algorithm with continuous- or fractional-query complexity $T \ge 1$ can be simulated with error at most $\epsilon$ with $O\big(T\frac{\log(T/\epsilon)}{\log\log(T/\epsilon)}\big)$ queries. For continuous-query simulation, if there is a circuit using at most $g$ gates that implements the time evolution due to $H_D(t)$ between any two times $t_1$ and $t_2$ with precision $\epsilon/T$, then the number of additional 2-qubit gates for the simulation is $O\big(T\frac{\log(T/\epsilon)}{\log\log(T/\epsilon)}[g + \log(\bar h T/\epsilon)]\big)$, where $\bar h \colonequals \frac{1}{T}\int_{0}^{T}\norm{H_D(t)} \mathrm{d}t$. \end{restatable} Since the continuous-query model is at least as powerful as the discrete-query model, a discrete simulation must use $\Omega(T)$ queries, showing our dependence on $T$ is close to optimal.
However, as for the problem of Hamiltonian simulation, there was previously no $\epsilon$-dependent lower bound. Along the lines of \thm{hamsimlower}, we show a lower bound of $\Omega\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ queries for a continuous-query algorithm with $T = O(1)$ (\thm{fqsimlower}), so the dependence of our simulation on $\epsilon$ is optimal. For the problem of evaluating a classical function of a black-box input, an approach based on an invariant called the \textit{$\gamma_2$ norm} shows that the continuous-query complexity is at most a constant factor smaller than the discrete-query complexity for a bounded-error simulation \cite{LMRSS11}. However, it remains unclear whether the algorithm can be made time efficient and whether the unitary dynamics of a continuous-query algorithm can be simulated (even with bounded error) using $O(T)$ queries. Such a result does hold for state conversion, but its dependence on error is quadratic \cite{LMRSS11}. More generally, the optimal tradeoff between $T$ and $\epsilon$ for simulation of continuous-query algorithms using discrete queries---and for simulation of Hamiltonian dynamics---remains open (with or without conditions on the time complexity). The remainder of this article is organized as follows. In \sec{overview} we give a high-level overview of the techniques used in our results. In \sec{main} we describe our simulation of the continuous- and fractional-query models using discrete queries. In \sec{Ham} we apply these results to Hamiltonian simulation. In \sec{time} we analyze the time complexity of our algorithms, and in \sec{lb} we prove $\epsilon$-dependent lower bounds showing optimality of their error dependence. We conclude in \sec{conclusion} with a brief discussion of some open questions. In \app{proofs}, we provide some proofs of known results for the sake of completeness. 
\section{High-level overview of techniques} \label{sec:overview} We begin by proving \thm{cquerysim}, our improved simulation of continuous- and fractional-query algorithms. Then we prove \thm{sparse} by reducing an instance of a sparse Hamiltonian simulation problem to an instance of a fractional-query algorithm, which can then be simulated via \thm{cquerysim}. We prove \thm{hamsimlower} using ideas from the no-fast-forwarding theorem from~\cite{BAC+07} and properties of the unbounded-error quantum query complexity of the parity function. We now sketch the approach for each of the main theorems, highlighting the novel ideas. \subsection{Continuous-query simulation (\thm{cquerysim})} \label{sec:thm3} First consider the simulation of fractional queries using discrete queries. We show that an algorithm with constant fractional-query complexity can be simulated in the discrete-query model using $O\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ queries (\lem{main}). The claimed upper bound for simulating a fractional-query algorithm with query complexity $T$ follows easily by breaking the algorithm into pieces with constant fractional-query complexity. Since the continuous- and fractional-query models are equivalent (\thm{equiv}), the result for the continuous-query model (\thm{cquerysim}) follows. We prove \lem{main} in two steps. Let the unitary performed by the constant-query fractional-query algorithm be $V$ and let the (unknown) state it acts on be $\ket{\psi}$. We would like to create the state $V\ket{\psi}$ up to error $\epsilon$. First we construct a circuit $\tilde U$ that performs $V$ with amplitude $\sqrt{p}$ up to error $\epsilon$, in the sense that $\tilde U$ is within error $\epsilon$ of a unitary $U$ that maps $|0^m\rangle\ket{\psi}$ to $\sqrt{p}|0^m\rangle V|\psi\rangle + \sqrt{1-p} |\Phi^\perp\rangle$ for some constant $p$ and some state $|\Phi^\perp\rangle$ with $(|0^m\rangle\langle0^m|\otimes \mathbbm{1})|\Phi^\perp\rangle=0$.
The existence of such a $\tilde U$ that makes $O\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ queries was shown by \cite{CGM+09}. Their strategy is to measure the first $m$ qubits and obtain $V|\psi\rangle$ with constant probability. If the measurement fails, they recover the original state $|\psi\rangle$ from $|\Phi^\perp\rangle$ using a fault-correction procedure, which is itself probabilistic and occasionally fails, requiring a recursive correction algorithm to remove all faults. The time-efficient implementation of this recursive fault-correction procedure \cite{BCG14} is cumbersome. Our alternative approach uses $\tilde U$ to deterministically create $V|\psi\rangle$ without measurements. We show in general how to create $V|\psi\rangle$ with a constant number of applications of $\tilde U$ when $p$ is a constant. To do this, we introduce a notion of ``oblivious amplitude amplification'' that can have the same performance as standard amplitude amplification, but that can be applied even when the reflection about the input state is unavailable. This idea, which is inspired by the in-place QMA amplification procedure of Marriott and Watrous \cite{MW05}, is a general result that can potentially be applied in other contexts. Most of the algorithm is easily made time efficient, except the preparation of a certain quantum state. However, this state can be prepared efficiently \cite{BCG14} and the result follows. \subsection{Hamiltonian simulation reduction (\thm{sparse})} \label{sec:thm1} Next we describe the main ideas of our Hamiltonian simulation algorithm. We remove the dependence of the query cost on $n$ with a simple trick involving local edge coloring of bipartite graphs. This strategy is quite general and can be used to remove $n$-dependence from several known Hamiltonian simulation algorithms. The improved dependence on $\epsilon$ results from our algorithm for simulating the fractional-query model in the discrete-query model (\thm{cquerysim}). 
As mentioned previously, we reduce Hamiltonian simulation to a generalization of the task of simulating the fractional-query model. Examining the basic Lie product formula $e^{-iHt} \approx (e^{-iH_1t/r} e^{-iH_2t/r} \cdots e^{-iH_\eta t/r})^r$, we see that if $Q_j \colonequals e^{-iH_j}$ were query oracles, this would be a fractional-query algorithm using multiple oracles $Q_j$ for time $t$ each. (Note that because the query complexity of the simulation depends only on the total time over which fractional queries are applied rather than the total number of fractional queries, there is no advantage to using higher-order product formulas.) We reduce a fractional-query algorithm that calls each of $\eta$ different query oracles for time $t$ to a fractional-query algorithm that uses query time $\eta t$ with a single query oracle that can perform any $Q_j$. Thus it suffices to decompose the given Hamiltonian $H$ into a sum of Hamiltonians for which the matrices $Q_j$ can be viewed as query oracles in \thm{cquerysim}. We show such a decomposition (\lem{1sparse}) that yields the stated upper bound. This algorithm can be made time efficient since it is essentially a reduction to continuous-query simulation. \subsection{Lower bounds (\thm{hamsimlower} and \thm{fqsimlower})} \label{sec:thm2} Finally, we prove lower bounds showing optimality of our algorithms as a function of $\epsilon$ (\thm{hamsimlower} and \thm{fqsimlower}). The main idea behind both lower bounds is to exhibit a Hamiltonian whose exact simulation for any time $t>0$ allows us to compute the parity of a string with unbounded error, which is as hard as computing parity exactly, requiring $\Omega(n)$ queries \cite{BBC+01,FGGS98}.
Because one must apply the Hamiltonian $\Omega(n)$ times to have nonzero amplitude on a state that encodes the parity, the evolution for constant time only produces the answer at $n$th order in the Taylor series, so the parity is only successfully computed with probability $\Theta(1/n!)$. To obtain an unbounded-error algorithm for parity, one must simulate this evolution accurately enough to resolve such a small success probability. Thus we must have $\epsilon = O(1/n!)$, giving the lower bound of $\Omega\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$. \section{From continuous to discrete queries} \label{sec:main} In this section we present our improved simulation of continuous or fractional queries in the conventional discrete query model. The main result of this section is \lem{cquerysim}, which establishes the query complexity claimed in \thm{cquerysim}. The time-complexity part of \thm{cquerysim} is established in \sec{time}. For concreteness, we quantify the distance between unitaries $U$ and $V$ with the function $\norm{U-V}$ and the distance between states $|\psi\rangle$ and $|\phi\rangle$ with the function $\norm{|\psi\rangle-|\phi\rangle}$. As the error ultimately appears inside a logarithm, the precise choice of distance measure is not significant. We begin by recalling the equivalence of the continuous- and fractional-query models for any error $\epsilon>0$. An explicit simulation of the continuous-query model by the fractional-query model was provided by \cite{CGM+09}; the proof is a straightforward application of a result of \cite{HR90}. The other direction is apparently folklore (e.g., both directions are implicitly assumed in \cite{Moc07}); we provide a short proof in \app{equiv} for completeness. 
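Before converting fractional queries to discrete ones, it is worth checking the basic semantics of the fractional-query operation. The following sketch (our illustration, using the formula $Q^\alpha = \frac{1}{2}(\mathbbm{1}+Q) + e^{-i\pi\alpha}\frac{1}{2}(\mathbbm{1}-Q)$ stated later in the Gadget Lemma) verifies that fractional queries to a unitary $Q$ with eigenvalues $\pm 1$ compose additively, so splitting a query of cost 1 into pieces of cost $1/5$, as done in the proof of the main lemma, is exact.

```python
import numpy as np

def frac_query(Q, alpha):
    """Fractional power of a unitary Q with eigenvalues +-1:
    Q^alpha = (1/2)(I + Q) + e^{-i pi alpha} (1/2)(I - Q)."""
    I = np.eye(Q.shape[0])
    return 0.5 * (I + Q) + np.exp(-1j * np.pi * alpha) * 0.5 * (I - Q)

# A phase query oracle with eigenvalues +-1, here for the string x = (0, 1):
Q = np.diag([1.0, -1.0])

# Fractional queries compose additively: Q^a Q^b = Q^(a+b).
a, b = 0.2, 0.55
assert np.allclose(frac_query(Q, a) @ frac_query(Q, b), frac_query(Q, a + b))

# A full fractional query reproduces the discrete query: Q^1 = Q.
assert np.allclose(frac_query(Q, 1.0), Q)

# Five cost-1/5 pieces reproduce one full query, as in the proof of the main lemma.
piece = frac_query(Q, 1 / 5)
assert np.allclose(np.linalg.matrix_power(piece, 5), Q)
```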
\begin{restatable}[Equivalence of continuous- and fractional-query models]{theorem}{EQUIV} \label{thm:equiv} For any $\epsilon>0$, any algorithm with continuous-query complexity $T$ can be implemented with fractional-query complexity $T$ with error at most $\epsilon$ and $m = O(\bar h T^2/\epsilon)$ fractional-query gates, where $\bar h \colonequals \frac{1}{T} \int_0^T \norm{H_D(t)} \, \mathrm{d}t$ is the average norm of the driving Hamiltonian. Conversely, any algorithm with fractional-query complexity $T$ can be implemented with continuous-query complexity $T$ with error at most $\epsilon$. \end{restatable} Since the two models are equivalent, it suffices to convert a fractional-query algorithm to a discrete-query algorithm. We start with a fractional-query algorithm that makes at most 1 query. The result for multiple queries (\lem{cquerysim}) follows straightforwardly. \begin{restatable}{lemma}{MAIN} \label{lem:main} Any algorithm in the fractional-query model with query complexity at most 1 can be implemented with $O\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ queries in the discrete-query model with error at most $\epsilon$. \end{restatable} The construction of the algorithm in this main lemma can be viewed in two steps. First, we show how to unitarily construct a superposition of the required state along with a label in state $\ket{0^{m+1}}$ and another state whose label is orthogonal. The construction is similar to that in \cite{CGM+09,BCG14}; the main difference is that we do not measure the state of the label. (This step is shown in the sequence \lem{gadget}, \lem{segment}, and \lem{approxsegment}.) Then, in the second step, rather than performing a fault-correction procedure upon seeing a measurement outcome other than $0^{m+1}$, we perform the underlying unitary operation in the first step three times (one of which is backwards) in conjunction with certain reflections to arrive at the required state.
This step can be viewed as applying a generalization of amplitude amplification that is shown in \lem{oaa}. \begin{figure}[ht] \[ \Qcircuit @*=<0em> @R=0.4em @C=0.5em { \push{\ket{0}}&\gate{R_\alpha}&\ctrl{1}&\gate{P}&\gate{R_\alpha}&\qw\\ \push{\hphantom{|\psi\rangle}}& \qw &\multigate{3}{Q}&\qw &\qw &\qw\\ \push{\ket{\psi}}& \vdots & & & \vdots & \\ &&&&\\ \push{\hphantom{|\psi\rangle}}& \qw &\ghost{Q}&\qw &\qw &\qw } \] \caption{\label{fig:gadget}The fractional-query gadget. After performing the controlled-$Q$ operation on the target state $|\psi\rangle$, the operation $Q^\alpha$ is performed with amplitude depending on $\alpha$.} \end{figure} The first step of the construction uses the fractional-query gadget \cite[Section II.B]{CGM+09} shown in \fig{gadget}. This gadget behaves as follows, as we show in \app{approxsegment}. \begin{restatable}[Gadget Lemma \cite{CGM+09}]{lemma}{GADGET} \label{lem:gadget} Let $Q$ be a unitary matrix with eigenvalues $\pm 1$; let $\alpha \in [0,1]$. The circuit in \fig{gadget}, with $R_\alpha \colonequals \frac{1}{\sqrt{c+s}}\left(\begin{smallmatrix}\sqrt{c} & \sqrt{s} \\ \sqrt{s} & -\sqrt{c}\end{smallmatrix}\right)$ and $P \colonequals \left(\begin{smallmatrix}1 & 0 \\ 0 & i\end{smallmatrix}\right)$, performs the map \begin{equation} |0\rangle|\psi\rangle \mapsto \sqrt{q_\alpha}|0\rangle e^{-i\pi\alpha/2}Q^\alpha |\psi\rangle + \sqrt{1-q_\alpha}|1\rangle|\phi\rangle \end{equation} for some state $|\phi\rangle$, where $c \colonequals \cos(\pi \alpha/2)$, $s \colonequals \sin(\pi \alpha/2)$, $q_\alpha \colonequals 1/(c+s)^2 = 1/(1+\sin(\pi\alpha))$, and $Q^\alpha = \frac{1}{2}(\mathbbm{1}+Q) + e^{-i\pi\alpha} \frac{1}{2}(\mathbbm{1}-Q) = e^{-i \pi \alpha/2} (c \mathbbm{1} + i s Q)$.
\end{restatable} While the proof in \app{approxsegment} shows that $|\phi\rangle = e^{-i \pi/4} Q^{-1/2}|\psi\rangle$, we do not use this fact in our analysis, in contrast to previous approaches \cite{CGM+09,BCG14}. Note that while we have defined the fractional-query model to use fractions $\alpha \in (0,1]$, a similar simulation could be applied if we allowed negative fractional-time evolutions with $\alpha \in [-1,1]$. In particular, we could define $s = \sin(\pi|\alpha|/2)$, $P = \left(\begin{smallmatrix}1 & 0 \\ 0 & i \mathop{\mathrm{sgn}}(\alpha)\end{smallmatrix}\right)$ and carry through an analogous analysis. However, for simplicity, we restrict our attention to the model with only positive fractional-time evolutions. \begin{figure}[ht] \[ \Qcircuit @*=<0em> @R=.6em @C=0.65em { \push{|0\rangle} \gategroup{2}{2}{9}{11}{.8em}{--} \gategroup{2}{1}{5}{2}{.5em}{.} & \qw & \qw & \qw & \qw & \qw &\push{\cdots}& & \qw & \qw& \gate{\Upsilon} & \qw \\ \push{|0\rangle} & \gate{R_{\alpha_1}} & \qw & \ctrl{4} & \qw & \qw &{\cdots}& & \qw & \qw& \gate{R_{\alpha_1}P} & \qw \\ \push{\vdots} & & & & & &\ddots& & & & \vdots & \\ & & & & & & & & & & & \\ \push{|0\rangle} & \gate{R_{\alpha_{m}}} & \qw & \qw & \qw & \qw &{\cdots}& & \qw & \ctrl{1} & \gate{R_{\alpha_{m}}P} & \qw \\ \push{\hphantom{|\psi\rangle}}& \qw & \multigate{3}{U_0} & \multigate{3}{Q} & \multigate{3}{U_1} & \qw &{\cdots}& & \multigate{3}{U_{m-1}} & \multigate{3}{Q}& \multigate{3}{U_{m}} & \qw \\ \push{|\psi\rangle}& {\vdots} & & & & &\ddots& & & & & \\ & & & & & && & & & & \\ \push{\hphantom{|\psi\rangle}} & \qw & \ghost{U_0} & \ghost{Q} & \ghost{U_1} & \qw &{\cdots}& & \ghost{U_{m-1}} & \ghost{Q}& \ghost{U_{m}} & \qw \\ } \] \caption{\label{fig:segment}A segment to implement the fractional-query algorithm. The segment consists of many concatenated applications of the fractional-query gadget, interspersed with $x$-independent unitaries $U_i$.
The state preparation is indicated in the dotted box, and the main operation is performed by the circuit in the dashed box. The additional ancilla at the top is introduced to reduce the amplitude for performing the correct operation to exactly $1/2$.} \end{figure} We now collect the gadgets into segments as shown in \fig{segment} and show that, with an appropriate choice of parameters, a segment implements a fractional-query algorithm of constant query complexity with amplitude $1/2$. This specific choice facilitates one-step exact oblivious amplitude amplification. Other than this choice of constant, this lemma is the same as in \cite{CGM+09}. For completeness, we provide a proof in \app{approxsegment}. \begin{restatable}[Segment Lemma]{lemma}{SEGMENT} \label{lem:segment} Let $V$ be a unitary implementable by a fractional-query algorithm with query complexity at most $1/5$, i.e., there exists an $m$ such that $V=U_{m}Q^{\alpha_{m}}U_{m-1}\cdots U_{1}Q^{\alpha_{1}}U_{0}$ with $\alpha_i \geq 0$ for all $i$ and $\sum_{i=1}^{m} \alpha_i \leq 1/5$. Let $P$ and $R_\alpha$ be as in \lem{gadget}. Then there exists a unitary $\Upsilon$ on the additional ancilla such that the circuit in \fig{segment} performs the map \begin{equation} |0^{m+1}\rangle|\psi\rangle \mapsto \frac{1}{2}|0^{m+1}\rangle e^{i\vartheta}V|\psi\rangle + \frac{\sqrt{3}}{2}|\Phi^\perp\rangle \end{equation} for some state $|\Phi^\perp\rangle$ satisfying $(|0^{m+1}\rangle\langle0^{m+1}| \otimes \mathbbm{1})|\Phi^\perp\rangle=0$ and some $\vartheta \in [0,2\pi)$. \end{restatable} Although the segment in \fig{segment} makes $m$ queries, it is possible to approximate this segment within precision $\epsilon$ using only $O(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)})$ queries. To get some intuition for why this is possible, note that the state on the control registers decides how many queries are performed.
For example, if all the control registers were set to $|0\rangle$ when the controlled-$Q$ gates act, then no queries would be performed, even though the circuit contains $m$ query gates. In general, the number of queries performed when the control registers are set to $|b_1,b_2,\ldots,b_m\rangle$ is the Hamming weight of $b$. In \fig{segment}, the state of the control registers has very little overlap with high-weight states, so we can approximate that state with one that has no overlap with high-weight states. We then show how to rearrange such a circuit to obtain a new circuit that uses very few query gates. This lemma follows the same proof structure as Section II.C of \cite{CGM+09}, but is more general since we do not restrict all the fractional queries to have the same value of $\alpha$. This change requires us to use a version of the Chernoff bound for independent (but not necessarily identically distributed) random variables instead of the one used in \cite{CGM+09}. The lemma is proved in \app{approxsegment}. \begin{restatable}[Approximate Segment Lemma]{lemma}{APPROXSEGMENT} \label{lem:approxsegment} Let $V$ be a unitary implementable by a fractional-query algorithm with query complexity at most $1/5$. Then for any $\epsilon>0$, there exists a unitary quantum circuit that makes $O\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ discrete queries and, within error $\epsilon$, performs a unitary $U$ acting as \begin{equation} U |0^{m+1}\rangle|\psi\rangle = \frac{1}{2}|0^{m+1}\rangle e^{i\vartheta}V|\psi\rangle + \frac{\sqrt{3}}{2}|\Phi^\perp\rangle \end{equation} for some state $|\Phi^\perp\rangle$ satisfying $(|0^{m+1}\rangle\langle0^{m+1}| \otimes \mathbbm{1})|\Phi^\perp\rangle=0$ and some $\vartheta \in [0,2\pi)$. \end{restatable} Up to this point our proof is similar to previous approaches \cite{CGM+09,BCG14}.
In those previous approaches, the map of \lem{approxsegment} was used to probabilistically create the desired state by measuring the first $m+1$ qubits. With constant probability we obtain the desired state, but in the other case we have a fault and have to recover the original input state. This recovery stage required a fault-correction procedure that is difficult to analyze and considerably harder to make time efficient. We avoid these difficulties by introducing oblivious amplitude amplification. Given a unitary $U$ that implements another unitary $V$ with some amplitude (in a certain precise sense), this idea allows one to use a version of amplitude amplification to give a better implementation of $V$. In particular, as in amplitude amplification, if the amplitude for implementing $V$ is known, we can exactly perform $V$. In standard amplitude amplification, to amplify the ``good'' part of a state, we need to be able to reflect about the state itself and about the good subspace. While the latter is easy in our application, we cannot reflect about the unknown input state. Nevertheless, we show the following. \begin{lemma}[Oblivious amplitude amplification] \label{lem:oaa} Let $U$ and $V$ be unitary matrices on $\mu+n$ qubits and $n$ qubits, respectively, and let $\theta \in (0,\pi/2)$. Suppose that for any $n$-qubit state $|\psi\rangle$, \begin{equation} U|0^\mu\rangle|\psi\rangle = \sin(\theta) |0^\mu\rangle V|\psi\rangle + \cos(\theta) |\Phi^\perp\rangle, \end{equation} where $|\Phi^\perp\rangle$ is a $(\mu+n)$-qubit state that depends on $|\psi\rangle$ and satisfies $\Pi|\Phi^\perp\rangle=0$, where $\Pi \colonequals |0^{\mu}\rangle\langle0^{\mu}| \otimes \mathbbm{1}$. Let $R \colonequals 2\Pi-\mathbbm{1}$ and $S \colonequals -U R U^\dag R$.
Then for any $\ell \in {\mathbb{Z}}$, \begin{align} S^\ell U |0^\mu\rangle|\psi\rangle &= \sin\bigl((2\ell+1)\theta\bigr) |0^\mu\rangle V|\psi\rangle + \cos\bigl((2\ell+1)\theta\bigr) |\Phi^\perp\rangle. \end{align} \end{lemma} Note that $R$ is not the reflection about the initial state, so \lem{oaa} does not follow from amplitude amplification alone. However, in the context described in the lemma, it suffices to use a different reflection. The motivation for oblivious amplitude amplification comes from work of Marriott and Watrous on in-place amplification of QMA \cite{MW05} (see also related work on quantum rewinding for zero-knowledge proofs \cite{Wat09} and on using amplitude amplification to obtain a quadratic improvement \cite{NWZ09}). Specifically, the following technical lemma shows that amplitude amplification remains within a certain 2-dimensional subspace in which it is possible to perform the appropriate reflections. \begin{lemma}[2D Subspace Lemma] \label{lem:2d} Let $U$ and $V$ be unitary matrices on $\mu+n$ qubits and $n$ qubits, respectively, and let $p \in (0,1)$. Suppose that for any $n$-qubit state $|\psi\rangle$, \begin{equation} U|0^\mu\rangle|\psi\rangle = \sqrt{p}|0^\mu\rangle V|\psi\rangle + \sqrt{1-p}|\Phi^{\perp}\rangle, \end{equation} where $|\Phi^{\perp}\rangle$ is a $(\mu+n)$-qubit state that depends on $|\psi\rangle$ and satisfies $\Pi|\Phi^\perp\rangle=0$, where $\Pi \colonequals |0^{\mu}\rangle\langle0^{\mu}| \otimes \mathbbm{1}$. Then the state $|\Psi^\perp\rangle$ defined by the equation \begin{equation} U |\Psi^\perp\rangle\colonequals\sqrt{1-p}|0^\mu\rangle V|\psi\rangle - \sqrt{p}|\Phi^\perp\rangle \end{equation} is orthogonal to $|\Psi\rangle \colonequals |0^\mu\rangle|\psi\rangle$ and satisfies $\Pi|\Psi^\perp\rangle=0$. \end{lemma} \begin{proof} For any $|\psi\rangle$, let $|\Phi\rangle \colonequals |0^\mu\rangle V|\psi\rangle$.
Then for all $|\psi\rangle$, we have \begin{align} U|\Psi\rangle &= \sqrt{p}|\Phi\rangle + \sqrt{1-p}|\Phi^{\perp}\rangle \label{eq:psi} \\ U|\Psi^{\perp}\rangle &= \sqrt{1-p}|\Phi\rangle - \sqrt{p}|\Phi^{\perp}\rangle, \label{eq:psiperp} \end{align} where $\Pi|\Phi^{\perp}\rangle = 0$. By taking the inner product of these two equations, we get $\langle\Psi|\Psi^{\perp}\rangle = 0$. The lemma asserts that not only is $|\Psi^{\perp}\rangle$ orthogonal to $|\Psi\rangle$, but also $\Pi |\Psi^{\perp}\rangle = 0$. To show this, consider the operator \begin{equation} Q \colonequals (\langle0^\mu| \otimes \mathbbm{1})U^{\dag}\Pi U(|0^\mu\rangle \otimes \mathbbm{1}). \end{equation} For any state $|\psi\rangle$, \begin{equation} \langle\psi|Q|\psi\rangle = \norm{\Pi U |0^\mu\rangle|\psi\rangle}^2 = \norm{\Pi (\sqrt{p}|\Phi\rangle + \sqrt{1-p}|\Phi^{\perp}\rangle)}^2 = \norm{\sqrt{p}|\Phi\rangle}^2 = p. \end{equation} In particular, this holds for a basis of eigenvectors of $Q$, so $Q = p \mathbbm{1}$. Thus for any $|\psi\rangle$, we have \begin{equation} p|\psi\rangle = Q|\psi\rangle = (\langle0^\mu| \otimes \mathbbm{1})U^{\dag}\Pi U(|0^\mu\rangle \otimes \mathbbm{1})|\psi\rangle = (\langle0^\mu| \otimes \mathbbm{1})U^{\dag}\Pi U|\Psi\rangle = \sqrt{p} (\langle0^\mu| \otimes \mathbbm{1})U^{\dag}|\Phi\rangle. \end{equation} From \eq{psi} and \eq{psiperp} we get $U^{\dag} |\Phi\rangle = \sqrt{p} |\Psi\rangle +\sqrt{1-p} |\Psi^{\perp}\rangle$. Plugging this into the previous equation, we get \begin{align} p |\psi\rangle &= \sqrt{p}(\langle0^\mu|\otimes \mathbbm{1}) ( \sqrt{p} |\Psi\rangle + \sqrt{1-p} |\Psi^{\perp}\rangle) = p |\psi\rangle + \sqrt{p(1-p)} (\langle0^\mu|\otimes \mathbbm{1})|\Psi^{\perp}\rangle. \end{align} This gives us $\sqrt{p(1-p)}(\langle0^\mu|\otimes \mathbbm{1})|\Psi^{\perp}\rangle=0$. Since $p \in (0,1)$, this implies $\Pi |\Psi^{\perp}\rangle = 0$.
\end{proof} Note that this fact can also be viewed as a consequence of Jordan's Lemma \cite{Jor75}, which decomposes the space into a direct sum of 1- and 2-dimensional subspaces that are invariant under the projectors $\Pi$ and $U^\dagger \Pi U$. In this decomposition, $\Pi$ and $U^\dagger \Pi U$ are rank-1 projectors within each 2-dimensional subspace. Let $|0\rangle|\psi_i\rangle$ denote the eigenvalue-1 eigenvector of $\Pi$ within the $i$th 2-dimensional subspace $S_i$. Since $S_i$ is invariant under $U^\dagger \Pi U$, the state $U^\dagger \Pi U|0\rangle|\psi_i\rangle = \sqrt{p}U^\dagger|0\rangle V|\psi_i\rangle$ belongs to $S_i$. Let $|\Phi_i^{\perp}\rangle$ be such that $|0\rangle|\psi_i\rangle = U^{\dag}(\sqrt{p}|0\rangle V|\psi_i\rangle + \sqrt{1-p}|\Phi_i^{\perp}\rangle)$. Then $|\Psi_i^{\perp}\rangle \colonequals U^{\dag} (\sqrt{1-p}|0\rangle V|\psi_i\rangle - \sqrt{p}|\Phi_i^{\perp}\rangle)$ is in $S_i$, since it is a linear combination of $|0\rangle|\psi_i\rangle$ and $U^\dagger \Pi U|0\rangle|\psi_i\rangle$. However, $|\Psi_i^{\perp}\rangle$ is orthogonal to $|0\rangle|\psi_i\rangle$ and is therefore an eigenvalue-0 eigenvector of $\Pi$, since $\Pi$ is a rank-1 projector in $S_i$. Thus for each $i$, $|\psi_i\rangle$ and $|\Psi_i^\perp\rangle$ satisfy the conditions of the lemma. We claim that the number of 2-dimensional subspaces (and hence the number of states $|\psi_i\rangle$) is $2^n$. There are at most $2^n$ such subspaces since $\Pi$ has rank $2^n$ and is rank-1 in each subspace. There also must be at least $2^n$ 2-dimensional subspaces, since otherwise there would be a state $|0\rangle|\psi\rangle$ that is in a 1-dimensional subspace, i.e., is invariant under both $\Pi$ and $U^\dagger \Pi U$. This is not possible because $U^\dagger \Pi U$ acting on $|0\rangle|\psi\rangle$ yields $\sqrt{p}U^\dagger|0\rangle V|\psi\rangle$, which is a subnormalized state since $p<1$.
Finally, since there are $2^n$ linearly independent $|\psi_i\rangle$, an arbitrary state $|\psi\rangle$ can be written as a linear combination of $|\psi_i\rangle$, and the result follows. With the help of \lem{2d} we can prove \lem{oaa}. \begin{proof}[Proof of \protect{\lem{oaa}}] Since \lem{2d} shows that the evolution occurs within a two-dimensional subspace (or its image under $U$), the remaining analysis is essentially the same as in standard amplitude amplification. For any $|\psi\rangle$, we define $|\Psi\rangle\colonequals |0^\mu\rangle|\psi\rangle$ and $|\Phi\rangle\colonequals |0^\mu\rangle V|\psi\rangle$, so that \begin{align} U |\Psi\rangle &= \sin(\theta) |\Phi\rangle + \cos(\theta) |\Phi^\perp\rangle, \end{align} where $\theta\in(0,\pi/2)$ is such that $\sqrt{p}=\sin(\theta)$. We also define $|\Psi^\perp\rangle$ through the equation \begin{align} U |\Psi^\perp\rangle \colonequals \cos(\theta) |\Phi\rangle - \sin(\theta) |\Phi^\perp\rangle. \end{align} By \lem{2d}, we know that $\Pi|\Psi^\perp\rangle=0$. Using these two equations, we have \begin{align} U^\dag |\Phi\rangle &= \sin(\theta) |\Psi\rangle + \cos(\theta) |\Psi^\perp\rangle \\ U^\dag |\Phi^\perp\rangle &= \cos(\theta) |\Psi\rangle - \sin(\theta) |\Psi^\perp\rangle. \end{align} Then a straightforward calculation gives \begin{align} S |\Phi\rangle &= -U R U^\dag |\Phi\rangle \nonumber \\ &= -U R (\sin(\theta) |\Psi\rangle + \cos(\theta) |\Psi^\perp\rangle) \nonumber \\ &= -U (\sin(\theta) |\Psi\rangle - \cos(\theta) |\Psi^\perp\rangle) \nonumber \\ &= \bigl(\cos^2(\theta)-\sin^2(\theta)\bigr) |\Phi\rangle - 2\cos(\theta)\sin(\theta) |\Phi^\perp\rangle \nonumber \\ &= \cos(2\theta) |\Phi\rangle - \sin(2\theta) |\Phi^\perp\rangle.
\end{align} Similarly, \begin{align} S |\Phi^\perp\rangle &= U R U^\dag |\Phi^\perp\rangle \nonumber \\ &= U R (\cos(\theta) |\Psi\rangle - \sin(\theta) |\Psi^\perp\rangle) \nonumber \\ &= U (\cos(\theta) |\Psi\rangle + \sin(\theta) |\Psi^\perp\rangle) \nonumber \\ &= 2\cos(\theta)\sin(\theta) |\Phi\rangle + \bigl(\cos^2(\theta)-\sin^2(\theta)\bigr) |\Phi^\perp\rangle \nonumber \\ &= \sin(2\theta) |\Phi\rangle + \cos(2\theta) |\Phi^\perp\rangle. \end{align} Thus we see that $S$ acts as a rotation by $2\theta$ in the subspace $\spn\{|\Phi\rangle,|\Phi^\perp\rangle\}$, and the result follows. \end{proof} We are now ready to complete the proof of \lem{main} using \lem{approxsegment} and \lem{oaa}. \begin{proof}[Proof of \protect\lem{main}] We are given a fractional-query algorithm that makes at most 1 query. This can be split into 5 steps that make at most 1/5 queries each in the fractional-query model. We perform the analysis for these steps of size $1/5$; the difference is only a constant factor that does not affect the asymptotics. We convert this fractional-query algorithm into a discrete-query algorithm with some error. From \lem{approxsegment}, we know that for any such fractional-query algorithm $V$, there is an algorithm that makes $O\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ discrete queries and maps the state $|0^{m+1}\rangle|\psi\rangle$ to a state that is at most $\epsilon$ far from $\frac{1}{2}|0^{m+1}\rangle e^{i\vartheta}V|\psi\rangle + \frac{\sqrt{3}}{2}|\Phi\rangle$, for some state $|\Phi\rangle$ that satisfies $(|0^{m+1}\rangle\langle0^{m+1}| \otimes \mathbbm{1})|\Phi\rangle=0$ and some $\vartheta \in [0,2\pi)$. We wish to perform the unitary $V$ on the input state $|\psi\rangle$ approximately. The unitary operation $U$ defined in \lem{approxsegment} maps $|0^{m+1}\rangle|\psi\rangle \mapsto \frac{1}{2}|0^{m+1}\rangle e^{i\vartheta}V|\psi\rangle + \frac{\sqrt{3}}{2}|\Phi\rangle$.
The operation $U$ satisfies the conditions of \lem{oaa} with $\mu=m+1$ and $\sin^2(\theta)=1/4$. Thus a single application of $S$ (using three applications of $U$) would produce the state $V|\psi\rangle$ exactly. While we cannot necessarily perform $U$, using \lem{approxsegment} we can perform another unitary operation $\tilde U$ that is within error $\epsilon/3$ of $U$. Since we only perform the unitary three times, we obtain a state $\epsilon$-close to $V|\psi\rangle$ when we use $\tilde U$ instead of $U$. \end{proof} By straightforwardly concatenating such simulations with sufficiently small error, we obtain simulations for longer times. This establishes the following lemma, which is the query-complexity part of \thm{cquerysim}. \begin{lemma} \label{lem:cquerysim} An algorithm with continuous- or fractional-query complexity $T \ge 1$ can be simulated with error at most $\epsilon$ with $O\big(T\frac{\log(T/\epsilon)}{\log\log(T/\epsilon)}\big)$ queries. \end{lemma} \begin{proof} Given an algorithm that runs for time $T$ in the continuous-query model, we can convert it to an algorithm with fractional-query complexity $T$ with error at most $\epsilon/2$ using \thm{equiv}. Given a fractional-query algorithm that makes $T$ queries, we can divide it into $\ceil{T}$ pieces that make at most 1 query each and invoke \lem{main} with error $\epsilon/(2\ceil{T})$ to obtain $\ceil{T}$ discrete-query algorithms, each of which makes $O\big(\frac{\log(\ceil{T}/\epsilon)}{\log\log(\ceil{T}/\epsilon)}\big)$ queries. When run sequentially on the input state, they yield an output that is $\epsilon/2$-close to the correct output (by subadditivity of error). Thus the final state has error at most $\epsilon$. \end{proof} \section{Hamiltonian simulation} \label{sec:Ham} We now apply the results of the previous section to give improved algorithms for simulating sparse Hamiltonians.
The main result of this section is the reduction from an instance of the sparse Hamiltonian simulation problem to a fractional-query algorithm, which establishes \lem{sparse}, the query-complexity part of \thm{sparse}. The time-complexity part of \thm{sparse} is established in \sec{time}. To see the connection between the fractional-query model and Hamiltonian simulation, consider the example of a Hamiltonian $H = H_1 + H_2$, where $H_1$ and $H_2$ have eigenvalues $0$ and $\pi$, so that $e^{-iH_1}$ and $e^{-iH_2}$ have eigenvalues $\pm 1$. From the Lie product formula, we have $e^{-i(H_1+H_2)T} \approx (e^{-iH_1T/r}e^{-iH_2T/r})^r$ for large $r$. If we think of $H_1$ and $H_2$ as query Hamiltonians, this is a fractional-query algorithm that makes $T$ queries to each Hamiltonian. We might therefore expect that $O\big(T\frac{\log(T/\epsilon)}{\log\log(T/\epsilon)}\big)$ discrete queries to $e^{-iH_1}$ and $e^{-iH_2}$ suffice to implement $e^{-i(H_1+H_2)T}$ to precision $\epsilon$. Here we do this by generalizing the results of the previous section to allow multiple fractional-query oracles. For a set $\mathcal{Q} = \{Q_1, \ldots, Q_\eta\}$ of unitary matrices with eigenvalues $\pm 1$, we say $U$ is a fractional-query algorithm over $\mathcal{Q}$ with cost $T$ if $U$ can be written as $U_\lambda Q^{\alpha_\lambda}_{i_\lambda}U_{\lambda-1}\cdots U_{1}Q^{\alpha_1}_{i_1}U_0$, where $0<\alpha_i\leq 1$, $\sum_{i=1}^{\lambda} \alpha_i = T$, and $i_j \in [\eta]$ for all $j\in [\lambda]$. \begin{restatable}[Multiple-query model]{theorem}{MULTIQUERY} \label{thm:multiquery} Let $\mathcal{Q} = \{Q_1, \ldots, Q_\eta\}$ be a set of unitaries with eigenvalues $\pm 1$. Let $U$ be a fractional-query algorithm over $\mathcal{Q}$ with cost $T$. Let $Q \colonequals \sum_{j=1}^\eta |j\rangle\langle j| \otimes Q_j$. Then $U$ can be implemented by a circuit that makes $O\big(T\frac{\log(T/\epsilon)}{\log\log(T/\epsilon)}\big)$ queries to $Q$ with error at most $\epsilon$.
\end{restatable} \begin{proof} We prove this by reduction to \thm{cquerysim}. We know that $U$ can be written in the form $U = U_\lambda Q^{\alpha_\lambda}_{i_\lambda}U_{\lambda-1}\cdots U_{1}Q^{\alpha_1}_{i_1}U_0$, where $0<\alpha_i\leq 1$, $\sum_{i=1}^{\lambda} \alpha_i = T$, and $i_j \in [\eta]$ for all $j\in [\lambda]$. We first express $U$ as a fractional-query algorithm with cost $T$ that queries only the single oracle $Q$. To do this, we add an extra control register to the original circuit for $U$. This register holds the index $i_j$ of the next query to be performed. We start with this register initialized to $|0\rangle$. Let $V_0$ be any unitary that maps $|0\rangle$ to $|i_1\rangle$. The action of $Q^{\alpha_1}_{i_1}U_0$ on any state $|\psi\rangle$ is the same as the action of $Q^{\alpha_1}(V_0 \otimes U_0)$ on the second register of $|0\rangle|\psi\rangle$. Similarly, for all $j \in [\lambda]$, let $V_j$ be any unitary that maps $|i_j\rangle$ to $|i_{j+1}\rangle$, where $i_{\lambda+1} \colonequals 0$. Thus the circuit $(V_\lambda \otimes U_\lambda)Q^{\alpha_\lambda}(V_{\lambda-1} \otimes U_{\lambda-1})\cdots (V_1 \otimes U_{1})Q^{\alpha_1}(V_0 \otimes U_0)$ maps $|0\rangle|\psi\rangle$ to $|0\rangle U|\psi\rangle$. This construction gives a fractional-query algorithm with fractional-query complexity $T$ given oracle access to $Q$. Since $Q$ has eigenvalues $\pm 1$, we can invoke \thm{cquerysim} to give a discrete-query algorithm that makes $O\big(T\frac{\log(T/\epsilon)}{\log\log(T/\epsilon)}\big)$ queries to $Q$ and performs $U$ up to error $\epsilon$. \thm{cquerysim} assumes the queries are diagonal in the computational basis, whereas here we assume only that $Q$ has eigenvalues $\pm1$. However, these two scenarios are equivalent since the target system can be considered in a basis where $Q$ is diagonal. Therefore \thm{cquerysim} applies to the slightly more general scenario considered here.
\end{proof} This theorem allows us to simulate a Hamiltonian $H = H_1 + \cdots + H_\eta$ for time $t$ using resources that scale only slightly superlinearly in $\eta t$, provided each $H_j$ has eigenvalues $0$ and $\pi$ (or more generally, by rescaling, provided each $H_j$ has the same two eigenvalues). For any $\epsilon > 0$, there is a sufficiently large $r$ so that $e^{-iHt}$ is $\epsilon$-close to $(e^{-iH_1t/r} \cdots e^{-iH_\eta t/r})^r$, which is of the form required by \thm{multiquery} if $e^{-iH_j}$ has eigenvalues $\pm 1$. Since $\norm{e^{-iHt} - (e^{-iH_1t/r} \cdots e^{-iH_\eta t/r})^r} = O((\eta\bar{h}t)^2/r)$, where $\bar{h} \colonequals \max_j \norm{H_j}$ \cite{BAC+07}, choosing $r = \Omega((\eta\bar{h}t)^2/\epsilon)$ is sufficient to achieve an $\epsilon$-approximation. Since our Hamiltonians $H_j$ have constant norm, we have $\bar{h} = O(1)$ and get the following corollary. \begin{corollary} \label{cor:hamsim} For a Hamiltonian $H = \sum_{j=1}^\eta H_j$, where $H_j$ has eigenvalues $0$ and $\pi$ for all $j \in [\eta]$, define $Q \colonequals \sum_j |j\rangle\langle j| \otimes e^{-iH_j}$. The unitary $e^{-iHt}$ can be implemented by a fractional-query algorithm over $Q$, up to error $\epsilon$, with query complexity $\tau = \eta t$ and $O(\eta^3 t^2/\epsilon)$ fractional-query gates. Thus $e^{-iHt}$ can be implemented up to error $\epsilon$ by a circuit with $O\big(\tau\frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\big)$ invocations of $Q$. \end{corollary} To simulate arbitrary sparse Hamiltonians, we decompose them into Hamiltonians with this property. To do this we first decompose the Hamiltonian into a sum of 1-sparse Hamiltonians (with at most 1 nonzero entry in any row or column). Second, we decompose 1-sparse Hamiltonians into Hamiltonians of the required form.
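The product-formula approximation used here is easy to illustrate numerically. The following minimal sketch (in Python with NumPy; the random matrices, dimension, and seed are arbitrary choices, not from the text) checks that the error of the $r$-step Lie product formula shrinks as $r$ grows, for two Hamiltonians with eigenvalues $0$ and $\pi$:

```python
import numpy as np

def rand_h_with_eigs(eigs, rng):
    # Random Hermitian matrix with the prescribed eigenvalues.
    n = len(eigs)
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(g)  # Haar-ish random unitary
    return q @ np.diag(eigs) @ q.conj().T

def evolve(h, t):
    # e^{-i h t} for Hermitian h, via eigendecomposition.
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

rng = np.random.default_rng(0)
H1 = rand_h_with_eigs([0, np.pi, 0, np.pi], rng)
H2 = rand_h_with_eigs([0, np.pi, 0, np.pi], rng)
t = 1.0
exact = evolve(H1 + H2, t)

def trotter_err(r):
    # Spectral-norm error of (e^{-iH1 t/r} e^{-iH2 t/r})^r vs e^{-i(H1+H2)t}.
    step = evolve(H1, t / r) @ evolve(H2, t / r)
    return np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)
```

Increasing $r$ drives the error toward zero roughly like $1/r$, consistent with the $O((\eta\bar{h}t)^2/r)$ bound quoted above.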
\begin{restatable}{lemma}{ONESPARSE} \label{lem:1sparse} For any 1-sparse Hamiltonian $G$ and precision $\gamma>0$, there exist $O(\norm{G}_{\max}/\gamma)$ Hamiltonians $G_j$ with eigenvalues $\pm 1$ such that $\norm{G - \gamma \sum_{j} G_j}_{\max} \leq \sqrt{2}\gamma$. \end{restatable} \begin{proof} First we decompose the Hamiltonian $G$ as $G=G_X+iG_Y+G_Z$, where $G_X$ contains the off-diagonal real terms, $iG_Y$ contains the off-diagonal imaginary terms, and $G_Z$ contains the on-diagonal real terms. Next, for each $G_{bi}$ with $bi\in\{X,Y,Z\}$, we construct an approximation $\tilde G_{bi}$ with each entry rounded off to the closest multiple of $2\gamma$. Since each entry of $\tilde{G}_{bi}$ is at most $\gamma$ away from the corresponding entry of $G_{bi}$, we have $\norm{G_{bi}-\tilde{G}_{bi}}_{\max} \leq \gamma$. Denoting $\tilde G=\tilde G_X+i\tilde G_Y+\tilde G_Z$, this implies $\norm{G-\tilde{G}}_{\max} \leq \sqrt{2}\gamma$. Next, we take $C^{bi} \colonequals \tilde G_{bi} / \gamma$, whose entries are even integers satisfying $\norm{C^{bi}}_{\max}/2 \le \ceil{\norm{G_{bi}}_{\max}/\gamma} \le \ceil{\norm{G}_{\max}/\gamma}$. We can then decompose each 1-sparse matrix $C^{bi}$ into $\norm{C^{bi}}_{\max}/2$ matrices, each of which is 1-sparse and has entries from $\{-2,0,2\}$. If $C^{bi}_{jk}$ is $2p$, then the first $|p|$ matrices in the decomposition have a $2$ for $p>0$ (or $-2$ if $p<0$) at the $(j,k)$ entry, and the rest have $0$. More explicitly, we define \begin{equation} C^{bi,\ell}_{jk} \colonequals \begin{cases} 2 & \text{if } C^{bi}_{jk} \ge 2\ell >0 \\ -2 & \text{if } C^{bi}_{jk} \le -2\ell <0 \\ 0 & \text{otherwise} \end{cases} \end{equation} for $bi\in\{X,Y,Z\}$ and $\ell \in [\norm{C^{bi}}_{\max}/2]$. This gives a decomposition into at most $3 \ceil{\norm{G}_{\max}/\gamma}$ terms with eigenvalues in $\{-2,0,2\}$. To obtain matrices with eigenvalues $\pm 1$, we perform one more step to remove the $0$ eigenvalues.
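As a sanity check, the layer construction just defined can be run on a toy matrix. A minimal sketch in Python/NumPy (the example matrix is an arbitrary stand-in for one of the rounded, rescaled 1-sparse matrices, not from the text):

```python
import numpy as np

# Stand-in for one of the integer matrices C above: 1-sparse with even
# entries (rounded to multiples of 2*gamma, then divided by gamma).
# The particular values here are arbitrary.
C = np.diag([6, -4, 0, 2])

layers = []
for ell in range(1, int(np.abs(C).max()) // 2 + 1):
    layer = np.zeros_like(C)
    layer[C >= 2 * ell] = 2     # entries at least 2*ell contribute +2
    layer[C <= -2 * ell] = -2   # entries at most -2*ell contribute -2
    layers.append(layer)

recovered = sum(layers)  # summing the layers recovers C exactly
```

Each layer is again 1-sparse with entries in $\{-2,0,2\}$, and the layers sum to the original matrix, as the cases formula requires.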
We divide each $C^{bi,\ell}$ into two copies, $C^{bi,\ell,+}$ and $C^{bi,\ell,-}$. For any column where $C^{bi,\ell}$ is all zero, the corresponding diagonal element of $C^{bi,\ell,+}$ is $+1$ (if $bi \in \{X,Z\}$) or $+i$ (if $bi=Y$) and the diagonal element of $C^{bi,\ell,-}$ is $-1$ (if $bi \in \{X,Z\}$) or $-i$ (if $bi=Y$). Otherwise, we let $C^{bi,\ell,+}_{jk}=C^{bi,\ell,-}_{jk}=C^{bi,\ell}_{jk}/2$. Thus $C^{bi,\ell}=C^{bi,\ell,+}+C^{bi,\ell,-}$. Moreover, each column of $C^{bi,\ell,\pm}$ has exactly one nonzero entry, which is $\pm 1$ (or $\pm i$ on the diagonal of $C^{Y,\ell,\pm}$). This gives a decomposition $\tilde G/\gamma = \sum_{\ell,\pm} (C^{X,\ell,\pm}+iC^{Y,\ell,\pm}+C^{Z,\ell,\pm})$ in which each term has eigenvalues $\pm 1$. The decomposition contains at most $6\ceil{\norm{G}_{\max}/\gamma} = O(\norm{G}_{\max}/\gamma)$ terms. \end{proof} \lem{1sparse} gives a decomposition of the required form, as the eigenvalues can be adjusted to $0$ and $\pi$ by adding the identity matrix and multiplying by $\pi/2$. It remains to decompose a sparse Hamiltonian into 1-sparse Hamiltonians. Known results decompose a $d$-sparse Hamiltonian $H$ into a sum of $O(d^2)$ 1-sparse Hamiltonians~\cite{BAC+07}, but simulating one query to a $1$-sparse Hamiltonian requires $O(\log^* n)$ queries to the oracle for $H$. We present a simplified decomposition lemma that decomposes a $d$-sparse Hamiltonian into $d^2$ 1-sparse Hamiltonians. A query to the individual 1-sparse Hamiltonians can be performed using $O(1)$ queries to the original Hamiltonian, removing the $\log^* n$ factor. \begin{restatable}{lemma}{DECOMPOSITION} \label{lem:decomposition} If $H$ is a $d$-sparse Hamiltonian, there exists a decomposition $H = \sum_{j=1}^{d^2} H_j$ where each $H_j$ is 1-sparse and a query to any $H_j$ can be simulated with $O(1)$ queries to $H$. \end{restatable} \begin{proof} The new ingredient in our proof is to assume that the graph of $H$ is bipartite.
(Here the \emph{graph of $H$} has a vertex for each basis state and an edge between two vertices if the corresponding entry of $H$ is nonzero.) This is without loss of generality because we can simulate the Hamiltonian $\sigma_x \otimes H$ instead, which is indeed bipartite and has the same sparsity as $H$. From a simulation of $\sigma_x \otimes H$, we can recover a simulation of $H$ using the identity $e^{-i(\sigma_x \otimes H)t}|+\rangle|\psi\rangle = |+\rangle e^{-iHt}|\psi\rangle$. Now we decompose a bipartite $d$-sparse Hamiltonian into a sum of $d^2$ terms. To do this, we give an edge coloring of the graph of $H$ (i.e., an assignment of colors to the edges so that no two edges incident on the same vertex have the same color). Given such a coloring with $d^2$ colors, the Hamiltonian $H_j$ formed by only considering edges with color $j$ is 1-sparse. We use the following simple coloring. For any pair of adjacent vertices $u$ and $v$, let $r(u,v)$ denote the rank of $v$ in $u$'s neighbor list, i.e., the position occupied by $v$ in a sorted list of $u$'s neighbors. This is a number between $1$ and $d$. Let the color of the edge $(u,v)$, where $u$ comes from the left part of the bipartition and $v$ comes from the right, be the ordered pair $(r(u,v),r(v,u))$. This is a valid coloring since if $(u,v)$ and $(u,w)$ have the same color, then in particular the first component of the ordered pair is the same, so $r(u,v) = r(u,w)$ implies $v = w$. A similar argument handles the case where the common vertex is on the right. Given a color $(a,b)$, it is easy to simulate queries to the Hamiltonian corresponding to that color. To compute the nonzero entries of the $j$th row for this color, if $j$ is in the left partition, then we find the neighbor of $j$ that has rank $a$; let us call this $\ell$. Then we find the neighbor of $\ell$ that has rank $b$. If this neighbor is $j$, then $\ell$ is the position of the nonzero entry in row $j$; otherwise there is no nonzero entry.
If $j$ is in the right partition, the procedure is the same, except with the roles of $a$ and $b$ reversed. This procedure uses two queries. \end{proof} Observe that the simple trick of making the Hamiltonian bipartite suffices to remove the $O(\log^* n)$ term present in previous decompositions of this form. This trick is quite general and can be applied to remove a factor of $O(\log^* n)$ wherever such a factor appears in a known Hamiltonian simulation algorithm (e.g., \cite{BAC+07,CK11,WBHS11}). \lem{decomposition} decomposes our Hamiltonian $H$ into $d^2$ 1-sparse Hamiltonians. We further decompose $H$ using \lem{1sparse} into a sum of $\eta = O(d^2\norm{H}_{\max}/\gamma)$ Hamiltonians $G_j$ such that $\norm{H - \gamma \sum_{j=1}^{\eta} G_j}_{\max} \leq \sqrt{2}\gamma d^2$, since each 1-sparse Hamiltonian is approximated with precision $\sqrt{2}\gamma$ and there are $d^2$ approximations in this sum. To upper bound the simulation error, we have $\norm{e^{-iHt} - e^{-i\gamma \sum_j G_j t}} \le \norm{(H - \gamma \sum_{j=1}^{\eta} G_j)t} \le \sqrt{2} \gamma d^3 t$, where we used the fact that $\norm{e^{iA} - e^{iB}} \leq \norm{A-B}$ (as explained in the proof of \thm{equiv}) and $\norm{A} \leq d \norm{A}_{\max}$ for a $d$-sparse matrix $A$. Choosing $\gamma = \epsilon/(\sqrt{2} d^3 t)$ gives the required precision. We now invoke \cor{hamsim} with number of Hamiltonians $\eta = O(d^2\norm{H}_{\max}/\gamma)$ and simulation time $\gamma t$ to get $\tau = d^2\norm{H}_{\max}t$. Plugging this value of $\tau$ into \cor{hamsim} gives us the following lemma, which is the query-complexity part of \thm{sparse}. \begin{lemma} \label{lem:sparse} A $d$-sparse Hamiltonian $H$ can be simulated for time $t$ with error at most $\epsilon$ using $O\big(\tau \frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\big)$ queries, where $\tau \colonequals d^2 \norm{H}_{\max} t \ge 1$.
\end{lemma} Note that above we have determined the values of $r$ and $\gamma$ to use, but these values do not affect the query complexity (although they do affect the time complexity). This is because $r$ and $\gamma$ affect the value of $m$, but the analysis in \sec{main} is independent of $m$. This enables a simple generalization to time-dependent Hamiltonians. We can approximate the true evolution by a product of evolutions under time-independent Hamiltonians for each of the $r$ time intervals of length $t/r$. Provided the derivative of the Hamiltonian is bounded, this approximation can be made arbitrarily accurate by choosing $r$ large enough. As the query complexity does not depend on $r$, it is independent of $h'$, similar to \cite{PQSV11}. Finally, consider simulating a $k$-local Hamiltonian. A term acting nontrivially on at most $k$ qubits is $2^k$-sparse: two states $x,y \in \{0,1\}^n$ are adjacent if the only bits on which $x$ and $y$ differ are among the $k$ bits involved in the local term. Using this structure, we can give an explicit $2^k$-coloring, improving over the $4^k$-coloring provided by \lem{decomposition}: we simply color an edge between states $x$ and $y$ by indicating which of the $k$ bits are flipped. Thus we can decompose a $k$-local Hamiltonian with $M$ terms as a sum of $2^k M$ 1-sparse Hamiltonians. Using this decomposition in place of \lem{decomposition}, we find a simulation as in \thm{sparse} but with $\tau$ replaced by $\tilde\tau \colonequals 2^k M \norm{H}_{\max} t$. \section{Time complexity} \label{sec:time} We now consider the time complexities of the algorithms described in \thm{cquerysim} and \thm{sparse} (recall that time complexity refers to the sum of the number of queries and additional 2-qubit gates used in the algorithm). Our approach considerably simplifies this analysis over previous work and gives improved upper bounds. 
The basic algorithm as described in \sec{main} is inefficient as it relies on creating a state of $m = \mathop{\mathrm{poly}}(h,T,\frac{1}{\epsilon})$ qubits. Instead, as in previous work \cite{BCG14}, we create a compressed version of this state that allows us to perform the necessary controlled operations and to reflect about the zero state. Our simplified approach does not require measuring the control qubits, an operation that accounts for much of the technical complexity of \cite{BCG14}. We now prove \thm{cquerysim} from \sec{intro}, which we restate for convenience. \CQUERYSIM* \begin{proof} The query complexity of this theorem was established in \lem{cquerysim}. As in the analysis of query complexity, it suffices to simulate a segment implementing evolution for time $1/5$ with precision $\epsilon/5T$. To simulate the continuous-query model, we can assume without loss of generality that query evolutions are approximated (as in \thm{equiv}) by $m$ fractional evolutions of equal length $1/5m$. Thus we can assume that in each segment, as defined in \lem{segment}, $\alpha \colonequals \alpha_i=1/5m$ for all $i \in [m]$. Let $c \colonequals \cos(\pi/10m)$ and $s \colonequals \sin(\pi/10m)$. The idealized initial state of the ancilla qubits (i.e., the state in the dotted box of \fig{segment}) is \begin{equation} \left(\frac{\sqrt{c}|0\rangle + \sqrt{s}|1\rangle}{\sqrt{c+s}}\right)^{\otimes m} = \sum_{b \in \{0,1\}^m} \kappa^{m-|b|} \sigma^{|b|} |b\rangle, \end{equation} where $\kappa \colonequals \frac{\sqrt{c}}{\sqrt{c+s}}$ and $\sigma \colonequals \frac{\sqrt{s}}{\sqrt{c+s}}$. We truncate this state to the subspace of those $b$ with Hamming weight $|b| \le k$.
Specifically, we prepare the encoded state \begin{equation} \sum_{\ell \in L} \kappa^{m-|\ell|} \sigma^{|\ell|} |\ell\rangle + \delta |{\perp}\rangle, \label{eq:encodedstate} \end{equation} where $L \colonequals \{(\ell_1,\ldots,\ell_h)\colon 1 \le h \le k,\, \ell_1+\cdots+\ell_h \le m-h\}$, $|{\perp}\rangle$ is a special state orthogonal to all terms in the first sum, and the coefficient $\delta$ was shown to be small in \lem{approxsegment}. Observe that there is a natural bijection between $L$ and the set of strings $b$ with $|b| \le k$, given by $b \leftrightarrow 0^{\ell_1}10^{\ell_2}10^{\ell_3}\ldots0^{\ell_h}10^{m-h-\ell_1-\cdots-\ell_h}$. It is straightforward to perform the operation \eq{rearranged} from the proof of \lem{approxsegment}, conditioning on $b$ as represented by $\ell$. Recall that $W_i(b)$ represents the evolution under the driving Hamiltonian from time $\sum_{j=1}^i \ell_j/5m$ to time $\sum_{j=1}^{i+1}\ell_{j}/5m$ (where we define $\ell_{k+1}=m$). By assumption, any such evolution can be performed with precision $O(\epsilon/T)$ using $g$ gates. Also, recall that $Q_i(b)$ is simply $Q$ if $i \le |b|$ or $\mathbbm{1}$ otherwise, so it can be applied in time $O(\log k)$. Thus the operation \eq{rearranged} can be applied in time $O(k(g+\log k))$. At the end of the segment we must effectively apply the final $P$ and $R$ gates to the encoded state before reflecting about the encoding of $|0^m\rangle$. (That is, we jointly reflect about this state and $|0\rangle$ for the additional ancilla in \fig{segment}.) The $P$ gates are straightforward to apply in the given encoding. Rather than apply the encoded $R$ gates directly, reflect about the encoding of $|0^m\rangle$, and then apply the encoded $R$ gates for the next segment, it suffices to reflect about the encoding of $R_\alpha^{\otimes m}|0^m\rangle$ (note that $R_\alpha^\dag = R_\alpha$). 
This can be done by applying the inverse of the procedure for preparing \eq{encodedstate}, reflecting about the initial state, and applying the preparation procedure. Overall, we see that the segment can be applied to the encoded initial state with suitable accuracy using $O(k(g+\log m))$ gates, plus the cost of preparing the encoded ancillas. The encoded initial state \eq{encodedstate} can be prepared in time $O(k(\log m + \log\log(1/\epsilon))) = O(k \log m)$, as described in Sections 4.2--4.4 of \cite{BCG14} (see in particular equation (22)). Since $k = O\big(\frac{\log(T/\epsilon)}{\log\log(T/\epsilon)}\big)$ (from the proof of \lem{approxsegment} with error at most $\epsilon/5T$) and $m = \mathop{\mathrm{poly}}(T,\bar h,\frac{1}{\epsilon})$ (from \thm{equiv}), the overall complexity of making the encoded ancilla state is $O\big(\frac{\log(T/\epsilon) \log(\bar h T/\epsilon)}{\log\log(T/\epsilon)}\big)$. Thus the cost of implementing a constant-query algorithm to precision $\epsilon/5T$ is \begin{equation} O(k(g+\log m)) =O\left(\frac{\log(T/\epsilon)}{\log\log(T/\epsilon)} [g+\log(\bar h T/\epsilon)]\right). \end{equation} Implementing $O(T)$ segments, each with this complexity, gives the stated time complexity. With error bounded by $\epsilon/5T$ for each segment, the overall error is at most $\epsilon$. \end{proof} Using this approach we can similarly prove \thm{sparse} from \sec{intro}, which we restate for convenience. \SPARSE* \begin{proof} The query complexity of this theorem was established in \lem{sparse}. Since the query complexity of \thm{sparse} is proved by reduction to \thm{cquerysim}, a time-efficient version of \thm{sparse} can be obtained by essentially the same procedure as the time-efficient version of \thm{cquerysim}. In this reduction, $\tau$ plays the role of $T$.
Note that the reduction ultimately uses a fractional-query simulation, so we cannot directly use the result as stated in \thm{cquerysim}, where the time complexity is stated for the continuous-query case. Nevertheless, we can obtain a similar result if $g$ is taken to represent the cost of performing any sequence of consecutive non-query operations in the fractional-query algorithm. The term $\log(\bar h T/\epsilon)$ in \thm{cquerysim} results from discretizing a continuous-query algorithm with a driving Hamiltonian and does not arise here. The non-query operations $V_j$ for $j \in [m]$ described in the proof of \thm{multiquery} are straightforward to implement. In the application to Hamiltonian simulation, we simply cycle through all $\eta$ terms in order, so each $V_j$ can simply add $1$ modulo $\eta$, and a sequence $V_{j'}\cdots V_j$ adds $j'-j \bmod \eta$. Without loss of generality, we can assume $\eta$ is a power of $2$, so addition modulo $\eta$ can be performed by standard binary addition, keeping only the $\log_2\eta$ least significant bits. Thus any operation to be performed between queries can be applied using $g = O(\log \eta) = O(\log(d\norm{H}_{\max}t/\epsilon))$ operations (where the value of $\eta$ is discussed following the proof of \lem{decomposition}). Next, observe that it suffices to decompose the evolution into $m = \eta^3 t^2/\epsilon = \mathop{\mathrm{poly}}(t,\norm{H}_{\max},d,\frac{1}{\epsilon})$ terms (as stated in \cor{hamsim}). In the proof of \thm{cquerysim}, the time complexity for a constant-query algorithm is $O(k(g+\log m))$. This upper bounds the number of additional gates required to perform the non-query operations. Using $g= O(\log(d\norm{H}_{\max}t/\epsilon))$ and $\log m = O(\log(d\norm{H}_{\max}t/\epsilon))$, we see that this is $O\big(\tau \frac{\log^2(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\big)$. This only accounts for the operations performed between applications of the unitary $Q$ defined in \cor{hamsim}.
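The bookkeeping performed by the $V_j$s is elementary; as a minimal illustration (in Python, with an arbitrary register size), cycling through $\eta$ terms is just binary addition on $\log_2\eta$ bits:

```python
eta = 8  # number of terms; a power of 2, as assumed in the text
mask = eta - 1  # keeps only the log2(eta) least significant bits

def v_step(j):
    # One non-query operation: advance the control register by 1 mod eta.
    return (j + 1) & mask

# A run of unit increments from index j to index j' adds (j' - j) mod eta.
j = 5
for _ in range(6):
    j = v_step(j)
```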
It remains to implement $Q \colonequals \sum_{j=1}^\eta |j\rangle\langle j| \otimes e^{-iH_j}$ using the oracle, where $H = \sum_{j=1}^\eta H_j$ and $H_j$ are Hamiltonians with eigenvalues $0$ and $\pi$. To implement $Q$ we need to read the first register to learn which 1-sparse Hamiltonian is to be simulated and then simulate the 1-sparse Hamiltonian $H_j$. The first part is straightforward; from $j$ we can determine which 1-sparse Hamiltonian is to be simulated and whether it is an $X$, $Y$, or $Z$ term, in the notation of \lem{1sparse}. This can be done with $O(\log \eta)$ gates, which is linear in the size of the first register. Now we need to implement the 1-sparse Hamiltonian on an $n$-qubit register. This can be done with $O(n)$ gates using the constructions in \cite{AT03,CCDFGS03}. For example, to implement an $X$ Hamiltonian on a state $|v\rangle$, we can write down the index of $v$'s neighbor in another register, swap the two registers, and uncompute the second register. Thus we can implement $Q$ using $O(\log \eta + n)$ gates. Since the number of uses of $Q$ is the query complexity, the total number of gates used for all invocations of $Q$ and the non-query operations is $O\big(\tau \frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}[\log(\tau/\epsilon)+n]\big)$, which is $O\big(\tau \frac{\log^2(\tau/\epsilon)}{\log\log(\tau/\epsilon)}n\big)$. \end{proof} The same techniques can be straightforwardly applied to simulate time-dependent sparse Hamiltonians. We divide the evolution into intervals of length $t/r$, so the Hamiltonian can change by no more than $h't/r$ over such an interval, where $h' \colonequals \max_{s \in [0,t]} \norm{\frac{\mathrm{d}}{\mathrm{d}s} H(s)}$. Thus the error for each interval is $O(h't^2/r^2)$, and the error in the overall simulation is $O(h't^2/r)$. Therefore it suffices to take $r=\Omega(h't^2/\epsilon)$.
Then $m = \mathop{\mathrm{poly}}(t,h,h',d,\frac{1}{\epsilon})$, and the complexity is $O\big(\tau \frac{\log(\tau/\epsilon) \log((\tau + \tau')/\epsilon)}{\log\log(\tau/\epsilon)}n\big)$ as stated. \section{Lower bounds} \label{sec:lb} We now show that in general, any sparse Hamiltonian simulation method must use $\Omega\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ discrete queries to obtain error at most $\epsilon$, so the dependence of the query complexity in \thm{sparse} on $\epsilon$ is tight up to constant factors. To show this, we use ideas from the proof of the no-fast-forwarding theorem~\cite[Theorem 3]{BAC+07}, which says that generic Hamiltonians cannot be simulated in time sublinear in the evolution time. The Hamiltonian used in the proof of that theorem has the property that simulating it for time $t = \pi n/2$ determines the parity of $n$ bits exactly. We observe that simulating this Hamiltonian (with sufficiently high precision) for any time $t > 0$ gives an unbounded-error algorithm for the parity of $n$ bits, which also requires $\Omega(n)$ queries~\cite{FGGS98,BBC+01}. We now prove \thm{hamsimlower} from \sec{intro}, which we restate for convenience. \LOWERBOUND* \begin{proof} To construct the Hamiltonian, we begin with a simpler Hamiltonian $H'$ that acts on vectors $|i\rangle$ with $i \in \{0,1,\ldots,N\}$ \cite{CDEL04}. The nonzero matrix entries of $H'$ are $\langle i|H'|i+1\rangle=\langle i+1|H'|i\rangle=\sqrt{(N-i)(i+1)}/N$ for $i \in \{0,1,\ldots,N-1\}$. We have $\norm{H'}_{\max} < 1$, and simulating $H'$ for $t = \pi N/2$ starting with the state $|0\rangle$ gives the state $|N\rangle$ (i.e., $e^{-iH'\pi N/2}|0\rangle=|N\rangle$). More generally, for $t \in [0,\pi N/2]$, we claim that $|\langle N|e^{-iH't}|0\rangle| = |{\sin(t/N)}|^N$.
To see this, consider the Hamiltonian $\bar X \colonequals \sum_{j=1}^N X^{(j)}$, where $X \colonequals \left(\begin{smallmatrix}0 & 1 \\ 1 & 0\end{smallmatrix}\right)$ and the superscript $(j)$ indicates that the operator acts nontrivially on the $j$th qubit. Since $e^{-iXt} = \cos(t)\mathbbm{1} - i \sin(t) X$, we have $|\langle 11\ldots 1|e^{-i\bar Xt}|00\ldots 0\rangle| = |{\sin(t)}|^N$. Defining $|{\wt_k}\rangle \colonequals \binom{N}{k}^{-1/2} \sum_{|x|=k} |x\rangle$, we have \begin{equation} \bar X|{\wt_k}\rangle = \sqrt{(N-k+1)k} |{\wt_{k-1}}\rangle + \sqrt{(N-k)(k+1)} |{\wt_{k+1}}\rangle. \end{equation} This is precisely the behavior of $NH'$ with $|k\rangle$ playing the role of $|{\wt_k}\rangle$, so the claim follows. Now, as in \cite{BAC+07}, consider a Hamiltonian $H$ generated from an $N$-bit string $x_1 x_2 \ldots x_{N}$. $H$ acts on vertices $|i,j\rangle$ with $i\in \{0,\ldots,N\}$ and $j\in \{0,1\}$. The nonzero matrix entries of this Hamiltonian are \begin{equation} \langle i,j|H|i-1,j\oplus x_i\rangle=\langle i-1,j\oplus x_i|H|i,j\rangle=\sqrt{(N-i+1)i}/N \end{equation} for all $i$ and $j$. By construction, $|0,0\rangle$ is connected to either $|i,0\rangle$ or $|i,1\rangle$ (but not both) for any $i$; it is connected to $|i,j\rangle$ if and only if $j=x_1 \oplus x_2 \oplus \cdots \oplus x_{i}$. Thus $|0,0\rangle$ is connected to either $|N,0\rangle$ or $|N,1\rangle$, and determining which is the case determines the parity of $x$. The graph of this Hamiltonian contains two disjoint paths, one containing $|0,0\rangle$ and $|N,\textsc{parity}(x)\rangle$ and the other containing $|0,1\rangle$ and $|N,1 \oplus {\textsc{parity}(x)}\rangle$. Restricted to the connected component of $|0,0\rangle$, this Hamiltonian is the same as $H'$. Thus, starting with the state $|0,0\rangle$ and simulating $H$ for time $t$ gives $|\langle N,\textsc{parity}(x)|e^{-iHt}|0,0\rangle| = |{\sin(t/N)}|^N$.
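The amplitude formula for $H'$ can also be checked directly by exact diagonalization. A minimal sketch in Python/NumPy (the value of $N$ is an arbitrary choice for the check, not from the text):

```python
import numpy as np

N = 8
# The weighted path H' from the proof: <i|H'|i+1> = sqrt((N-i)(i+1))/N.
Hp = np.zeros((N + 1, N + 1))
for i in range(N):
    Hp[i, i + 1] = Hp[i + 1, i] = np.sqrt((N - i) * (i + 1)) / N

w, v = np.linalg.eigh(Hp)

def amp(t):
    # |<N| e^{-i H' t} |0>|, via the eigendecomposition of H'.
    return abs(v[N] @ (np.exp(-1j * w * t) * v[0]))
```

For every $t$, `amp(t)` matches $|\sin(t/N)|^N$ to machine precision, and at $t=\pi N/2$ the walk reaches $|N\rangle$ with certainty.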
Furthermore, for any $t$, we have $\langle N,1 \oplus {\textsc{parity}(x)}|e^{-iHt}|0,0\rangle = 0$ since the two states lie in disconnected components. Simulating this Hamiltonian exactly for any time $t>0$ starting with $|0,0\rangle$ yields an unbounded-error algorithm for computing the parity of $x$, as follows. First we measure $e^{-iHt}|0,0\rangle$ in the computational basis. We know that for any $t>0$, the state $e^{-iHt}|0,0\rangle$ has some nonzero overlap with $|N,\textsc{parity}(x)\rangle$ and zero overlap with $|N,1\oplus\textsc{parity}(x)\rangle$. If the first register is not $N$, we output 0 or 1 with equal probability. If the first register is $N$, we output the value of the second register. This is an unbounded-error algorithm for the parity of $x$, and the unbounded-error query complexity of parity is $\Omega(N)$ \cite{FGGS98,BBC+01}, so exactly simulating $H$ for any time $t>0$ needs $\Omega(N)$ queries. However, even if we only have an approximate simulation, the previous algorithm still works as long as the error in the output state is smaller than the overlap $|\langle N,\textsc{parity}(x)|e^{-iHt}|0,0\rangle|$. If we ensure that this overlap is larger than $\epsilon$ by a constant factor, then even with error $\epsilon$, the overlap with that state will remain larger than $\epsilon$. On the other hand, the overlap with $|N,1\oplus\textsc{parity}(x)\rangle$ is at most $\epsilon$, since the output state is $\epsilon$-close to the ideal output state, which has no overlap with it. To achieve an overlap much larger than $\epsilon$, we need $|{\sin(t/N)}|^{N}$ to be much larger than $\epsilon$. There is some value of $N$ in $\Theta\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ that achieves this.
\end{proof} A similar construction shows that any $\epsilon$-error simulation of the continuous-query model must use $\Omega\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ discrete queries, so \lem{main} is tight up to constant factors. Again we show that a sufficiently high-precision simulation of a certain Hamiltonian could be used to compute parity with unbounded error. However, in the fractional-query model, the form of the Hamiltonian is restricted and it is unclear how to implement the weights that simplify the analysis of the dynamics in \thm{hamsimlower}. Instead, we consider a quantum walk on an infinite unweighted path that also solves parity with unbounded error, and we show that this still holds if the path is long but finite. \begin{theorem}[$\epsilon$-dependent lower bound for continuous-query simulation] \label{thm:fqsimlower} For any $\epsilon>0$, given a query Hamiltonian $H_x$ for a string of $N = \Theta\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ bits, simulating $H_x + H_D(t)$ for constant time with precision $\epsilon$ requires $\Omega(N)$ queries. \end{theorem} \begin{proof} We prove a lower bound for simulating a Hamiltonian of the form $H'=\sum_{a=1}^\eta c_a U_a^\dag H_x U_a$ with coefficients $c_1,\ldots,c_\eta \in {\mathbb{R}}$. The Hamiltonian $H_x$ can be used to simulate $H'$ to any given accuracy with overhead $\sum_a |c_a|$, so this implies a lower bound for simulating $H_x$. In particular, by taking $r$ sufficiently large, the evolution under $H'$ can be approximated arbitrarily closely as \begin{equation} e^{-iH't} \approx \left(\prod_{a=1}^\eta U_a^\dag e^{-i H_x c_a t/r} U_a\right)^r. \end{equation} This corresponds to a fractional-query algorithm with cost $t \sum_{a=1}^\eta |c_a|$. By \thm{equiv}, this fractional-query algorithm can be simulated with arbitrarily small error by a continuous-query algorithm with the same cost.
This continuous-query algorithm uses the query Hamiltonian $H_x$, and its driving Hamiltonian $H_D(t)$ implements the unitaries $\{U_a,U_a^\dag\}_{a=1}^\eta$ at appropriate times. Viewing the Hamiltonian in terms of the graph of its nonzero entries, the oracle Hamiltonian $H_x$ provides input-dependent self-loops. First we modify it to give input-dependent edges. Observe that \begin{align} \mathrm{Had} \begin{pmatrix}1&0\\0&0\end{pmatrix} \mathrm{Had} &= \frac{1}{2}\begin{pmatrix}1&1\\1&1\end{pmatrix}, \end{align} where $\mathrm{Had} \colonequals \frac{1}{\sqrt2}(\begin{smallmatrix}1 & 1\\1 & -1\end{smallmatrix})$ is the Hadamard gate. Thus we can include a term in the Hamiltonian that has an edge between two vertices associated with the input index $i$ (and self-loops on those vertices) if $x_i=1$, and is zero otherwise. Now consider a space with basis states $|i,j,k\rangle$ where $i \in {\mathbb{Z}}$ and $j,k \in \{0,1\}$. The label $j$ plays the same role as in \thm{hamsimlower}, whereas the new label $k$ indexes two positions for each value of $i$. These new positions are needed because the pairs of vertices associated with each input index must be disjoint. To specify the Hamiltonian, we define unitaries $U_1,U_2,U_3,U_4$ so that the nonzero matrix elements of $U_a^\dag H_x U_a$ for $a \in \{1,2,3,4\}$ are \begin{align} \langle i,0,k|U_1^\dag H_x U_1|i,0,\bar k\rangle &= \langle i,0,k|U_1^\dag H_x U_1|i,0,k\rangle = x_i/2 \\ \langle i,1,k|U_2^\dag H_x U_2|i,1,\bar k\rangle &= \langle i,1,k|U_2^\dag H_x U_2|i,1,k\rangle = x_i/2 \\ \langle i,k,k|U_3^\dag H_x U_3|i,\bar k,\bar k\rangle &= \langle i,k,k|U_3^\dag H_x U_3|i,k,k\rangle = x_i/2 \\ \langle i,k,\bar k|U_4^\dag H_x U_4|i,\bar k,k\rangle &= \langle i,\bar k,k|U_4^\dag H_x U_4|i,\bar k,k\rangle = x_i/2 \end{align} for all $i \in [N]$ and $k \in \{0,1\}$.
Combining these four contributions to obtain a Hamiltonian $- U_1^\dag H_x U_1 - U_2^\dag H_x U_2 + U_3^\dag H_x U_3 + U_4^\dag H_x U_4$ and observing that the self-loops cancel, these matrix elements can be summarized in terms of the gadget shown in \fig{paritygadget}. \begin{figure}[ht] \begin{center} \begin{tikzpicture} \coordinate (tl) at (0,2); \coordinate (tr) at (2,2); \coordinate (bl) at (0,0); \coordinate (br) at (2,0); \filldraw (tl) circle (1.5pt) node [left] {$(i,0,0)$}; \filldraw (tr) circle (1.5pt) node [right] {$(i,0,1)$}; \filldraw (bl) circle (1.5pt) node [left] {$(i,1,0)$}; \filldraw (br) circle (1.5pt) node [right] {$(i,1,1)$}; \draw[dashed] (tl) -- (tr); \draw[dashed] (bl) -- (br); \draw (tl) -- (br); \draw (bl) -- (tr); \end{tikzpicture} \end{center} \caption{\label{fig:paritygadget} The gadget for querying $x_i$. If $x_i=0$, no edges are present. If $x_i=1$, the solid edges have weight $1/2$ and the dashed edges have weight $-1/2$.} \end{figure} We add a driving Hamiltonian to connect these gadgets to form two paths encoding the parity, similarly to \thm{hamsimlower}, and we extend the paths infinitely in both directions. Specifically, the driving Hamiltonian $H_D$ has nonzero matrix elements \begin{align} \langle i,j,k|H_D|i,j,\bar k\rangle &= 1/2 \end{align} for all $i \in {\mathbb{Z}}$ and $j,k \in \{0,1\}$ (corresponding to the dashed edges in \fig{paritygadget}, but with positive weight), and \begin{align} \langle i+1,j,0|H_D|i,j,1\rangle &= \langle i,j,1|H_D|i+1,j,0\rangle = 1/2 \end{align} for all $i \in {\mathbb{Z}}$ and $j \in \{0,1\}$ (corresponding to edges that join sectors with adjacent values of $i$).
Then the total Hamiltonian
\begin{equation}
H = - U_1^\dag H_x U_1 - U_2^\dag H_x U_2 + U_3^\dag H_x U_3 + U_4^\dag H_x U_4 + H_D
\label{eq:fraclbham}
\end{equation}
is $1/2$ times the adjacency matrix of the disjoint union of two infinite paths, one with vertices
\begin{equation}
\begin{aligned}
\ldots,&(0,0,0),(0,0,1),(1,0,0),(1,x_1,1),(2,x_1,0),(2,x_1 \oplus x_2,1),\ldots, \\
&(N,x_1 \oplus \cdots \oplus x_N,1),(N+1,x_1 \oplus \cdots \oplus x_N,0),(N+1,x_1 \oplus \cdots \oplus x_N,1),\ldots
\end{aligned}
\end{equation}
and the other with vertices
\begin{equation}
\begin{aligned}
\ldots,&(0,1,0),(0,1,1),(1,1,0),(1,1 \oplus x_1,1),(2,1 \oplus x_1,0),(2,1 \oplus x_1 \oplus x_2,1),\ldots, \\
&(N,1 \oplus x_1 \oplus \cdots \oplus x_N,1),(N+1,1 \oplus x_1 \oplus \cdots \oplus x_N,0),(N+1,1 \oplus x_1 \oplus \cdots \oplus x_N,1),\ldots.
\end{aligned}
\end{equation}
Analogous to the Hamiltonian $H$ in the proof of \thm{hamsimlower}, $(0,0,1)$ is in the same component as $(N,b,1)$ if and only if $b=\textsc{parity}(x)$. To compute the probability of reaching $(N,\textsc{parity}(x),1)$ starting from $(0,0,1)$ after evolving with the Hamiltonian \eq{fraclbham} for time $t$, we can use the expression for the propagator on an infinite path in terms of a Bessel function (see for example \cite{CCDFGS03}). Specifically, we have
\begin{equation}
|\langle N,\textsc{parity}(x),1|e^{-iHt}|0,0,1\rangle| = |J_{2N}(t)|.
\end{equation}
For large $N$ and for any fixed $t \ne 0$, we have $|J_N(t)| = e^{-\Theta(N\log N)}$ \cite[Section 8.1]{Wat22}. Thus, as in the proof of \thm{hamsimlower}, even a simulation with error $\epsilon$ gives the result with nonzero probability provided $N = \Theta\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$. The preceding argument uses a Hamiltonian acting on an infinite-dimensional space. However, we can truncate it to act on a finite space with essentially the same effect.
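The path structure and the Bessel-function amplitude can be checked numerically. The following self-contained sketch (the instance $x$, the time $t$, and the truncation range $M$ are arbitrary choices, not from the paper) builds a truncated version of the Hamiltonian \eq{fraclbham} and compares the transition amplitude with $|J_{2N}(t)|$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import jv

# Hypothetical small instance: N = 3 input bits, truncation range |i| <= M.
x = (1, 0, 1)
N, M, t = len(x), 25, 2.0

sites = [(i, j, k) for i in range(-M, M + 1) for j in (0, 1) for k in (0, 1)]
idx = {s: n for n, s in enumerate(sites)}
H = np.zeros((len(sites), len(sites)))

def add(u, v, w, loops=False):
    # symmetric edge of weight w, optionally with matching self-loops
    H[idx[u], idx[v]] += w
    H[idx[v], idx[u]] += w
    if loops:
        H[idx[u], idx[u]] += w
        H[idx[v], idx[v]] += w

# oracle gadgets: the -U_1, -U_2, +U_3, +U_4 contributions for each queried bit
for i in range(1, N + 1):
    if x[i - 1]:
        add((i, 0, 0), (i, 0, 1), -0.5, loops=True)   # U_1
        add((i, 1, 0), (i, 1, 1), -0.5, loops=True)   # U_2
        add((i, 0, 0), (i, 1, 1), +0.5, loops=True)   # U_3
        add((i, 0, 1), (i, 1, 0), +0.5, loops=True)   # U_4

# driving Hamiltonian: dashed edges plus edges joining adjacent sectors
for i in range(-M, M + 1):
    for j in (0, 1):
        add((i, j, 0), (i, j, 1), 0.5)
        if i < M:
            add((i + 1, j, 0), (i, j, 1), 0.5)

assert np.allclose(np.diag(H), 0)   # all self-loops cancel

psi = expm(-1j * H * t)[:, idx[(0, 0, 1)]]
parity = x[0] ^ x[1] ^ x[2]
amp_right = abs(psi[idx[(N, parity, 1)]])
amp_wrong = abs(psi[idx[(N, 1 - parity, 1)]])

# the endpoints are 2N steps apart on a path, so the amplitude is |J_{2N}(t)|
assert abs(amp_right - abs(jv(2 * N, t))) < 1e-6
assert amp_wrong < 1e-10   # the other path is disconnected
```

The truncation at $|i| \le M = 25$ makes the reflection off the boundary negligible for $t=2$, so the finite-matrix amplitude agrees with the infinite-path formula to high precision.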
Specifically, we apply the Truncation Lemma of \cite{CGW13} with $\mathcal{K} = \spn\{|i,j,k\rangle \colon -N^3-N^2 \le i \le N^3+N^2,\, j,k \in \{0,1\}\}$ and $W=H$. Let $P$ project onto $\mathcal{K}$ and let $P'$ project onto $\spn\{|i,j,k\rangle \colon -N^2 \le i \le N^2,\, j,k \in \{0,1\}\}$. Finally, let $|\gamma(t)\rangle = P'e^{-iHt}|0,0,1\rangle$. Then $\delta^2 \colonequals \norm{e^{-iHt}|0,0,1\rangle - |\gamma(t)\rangle}^2 = |J_{2N^2+1}(t)|^2+2\sum_{j=2}^\infty |J_{2N^2+j}(t)|^2 \le e^{-\Omega(N^2 \log N)}$. Furthermore, $(\mathbbm{1}-P)H^r|\gamma(t)\rangle=0$ for all $r \in \{0,1,\ldots,N^3\}$. Also observe that $\norm{H}=1$. Thus the Truncation Lemma shows that \begin{equation} \norm{(e^{-iHt} - e^{-iPHPt}) |0,0,1\rangle} \le \left( \frac{4et}{N^3}+2 \right)\bigl(\delta+2^{-N^3}(1+\delta)\bigr) \le e^{-\Omega(N^2 \log N)}, \end{equation} so the error incurred by truncating $H$ to the Hamiltonian $PHP$ acting on the finite-dimensional space $\mathcal{K}$ is asymptotically negligible compared to $\epsilon$. \end{proof} \section{Open questions} \label{sec:conclusion} While our algorithm for continuous-query simulation is optimal as a function of $\epsilon$ alone, it is suboptimal as a function of $T$, and it is unclear what tradeoffs might exist between these two parameters. The best known lower bound as a function of both $\epsilon$ and $T$ is $\Omega\big(T+\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$. It would be surprising if this bound were achievable, but it remains open to find such an algorithm or to prove a better lower bound. In general, any improvement to the tradeoff between $\epsilon$ and $T$ could be of interest. In the context of time-independent sparse Hamiltonian simulation, the quantum walk-based simulation of \cite{Chi10,BC12} achieves linear dependence on $t$, whereas our upper bound is superlinear in $t$. However, the dependence on $\epsilon$ is significantly worse in the walk-based approach. 
It would be desirable to combine the benefits of these two approaches into a single algorithm. Another open question is to better understand the dependence of our sparse Hamiltonian simulation method on the sparsity $d$. While we use $d^{2+o(1)}$ queries, the method of \cite{BC12} uses only $O(d)$ queries. Could the performance of the simulation based on fractional queries be improved by a different decomposition of the Hamiltonian?
\begin{thebibliography}{10}
\bibitem{AT03} Dorit Aharonov and Amnon Ta-Shma, \emph{Adiabatic quantum state generation and statistical zero knowledge}, Proceedings of the 35th ACM Symposium on Theory of Computing, pp.~20--29, 2003, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/0301023}{arXiv:quant-ph/0301023}}.
\bibitem{ACR+10} Andris Ambainis, Andrew~M. Childs, Ben~W. Reichardt, Robert {\v S}palek, and Shengyu Zhang, \emph{Any {AND}-{OR} formula of size {$N$} can be evaluated in time {$N^{1/2+o(1)}$} on a quantum computer}, SIAM Journal on Computing \textbf{39} (2010), no.~6, 2513--2530, \eprint{arXiv:quant-ph/0703015} and \eprint{arXiv:0704.3628}.
\bibitem{AMRR11} Andris Ambainis, Lo{\"i}ck Magnin, Martin Roetteler, and J{\'e}r{\'e}mie Roland, \emph{Symmetry-assisted adversaries for quantum state generation}, Proceedings of the 26th IEEE Conference on Computational Complexity, pp.~167--177, 2011, \mbox{\href{http://arxiv.org/abs/arXiv:1012.2112}{arXiv:1012.2112}}.
\bibitem{BBC+01} Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald de~Wolf, \emph{Quantum lower bounds by polynomials}, Journal of the ACM \textbf{48} (2001), no.~4, 778--797, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/9802049}{arXiv:quant-ph/9802049}}.
\bibitem{BAC+07} Dominic~W. Berry, Graeme Ahokas, Richard Cleve, and Barry~C.
Sanders, \emph{Efficient quantum algorithms for simulating sparse {H}amiltonians}, Communications in Mathematical Physics \textbf{270} (2007), no.~2, 359--371, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/0508139}{arXiv:quant-ph/0508139}}.
\bibitem{BC12} Dominic~W. Berry and Andrew~M. Childs, \emph{Black-box {H}amiltonian simulation and unitary implementation}, Quantum Information and Computation \textbf{12} (2012), no.~1--2, 29--62, \mbox{\href{http://arxiv.org/abs/arXiv:0910.4157}{arXiv:0910.4157}}.
\bibitem{BCG14} Dominic~W. Berry, Richard Cleve, and Sevag Gharibian, \emph{Gate-efficient discrete simulations of continuous-time quantum query algorithms}, Quantum Information and Computation \textbf{14} (2014), no.~1--2, 1--30, \mbox{\href{http://arxiv.org/abs/arXiv:1211.4637}{arXiv:1211.4637}}.
\bibitem{Chi04} Andrew~M. Childs, \emph{Quantum information processing in continuous time}, Ph.D. thesis, Massachusetts Institute of Technology, 2004.
\bibitem{Chi10} \bysame, \emph{On the relationship between continuous- and discrete-time quantum walk}, Communications in Mathematical Physics \textbf{294} (2010), no.~2, 581--603, \mbox{\href{http://arxiv.org/abs/arXiv:0810.0312}{arXiv:0810.0312}}.
\bibitem{CCDFGS03} Andrew~M. Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, and Daniel~A. Spielman, \emph{Exponential algorithmic speedup by quantum walk}, Proceedings of the 35th ACM Symposium on Theory of Computing, pp.~59--68, 2003, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/0209131}{arXiv:quant-ph/0209131}}.
\bibitem{CCJY09} Andrew~M. Childs, Richard Cleve, Stephen~P. Jordan, and David Yonge-Mallo, \emph{Discrete-query quantum algorithm for {NAND} trees}, Theory of Computing \textbf{5} (2009), no.~5, 119--123, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/0702160}{arXiv:quant-ph/0702160}}.
\bibitem{CGW13} Andrew~M.
Childs, David Gosset, and Zak Webb, \emph{Universal computation by multi-particle quantum walk}, Science \textbf{339} (2013), no.~6121, 791--794, \mbox{\href{http://arxiv.org/abs/arXiv:1205.3782}{arXiv:1205.3782}}.
\bibitem{CK11} Andrew~M. Childs and Robin Kothari, \emph{Simulating sparse {H}amiltonians with star decompositions}, Theory of Quantum Computation, Communication, and Cryptography (TQC 2010), Lecture Notes in Computer Science, vol. 6519, pp.~94--103, 2011, \mbox{\href{http://arxiv.org/abs/arXiv:1003.3683}{arXiv:1003.3683}}.
\bibitem{ChildsWiebe} Andrew~M. Childs and Nathan Wiebe, \emph{Hamiltonian simulation using linear combinations of unitary operations}, Quantum Information and Computation \textbf{12} (2012), 901--924, \mbox{\href{http://arxiv.org/abs/arXiv:1202.5822}{arXiv:1202.5822}}.
\bibitem{CDEL04} Matthias Christandl, Nilanjana Datta, Artur Ekert, and Andrew~J. Landahl, \emph{Perfect state transfer in quantum spin networks}, Physical Review Letters \textbf{92} (2004), no.~18, 187902, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/0309131}{arXiv:quant-ph/0309131}}.
\bibitem{CJS13} B.~David Clader, Bryan~C. Jacobs, and Chad~R. Sprouse, \emph{Preconditioned quantum linear system algorithm}, Physical Review Letters \textbf{110} (2013), no.~25, 250504, \mbox{\href{http://arxiv.org/abs/arXiv:1301.2340}{arXiv:1301.2340}}.
\bibitem{CGM+09} Richard Cleve, Daniel Gottesman, Michele Mosca, Rolando~D. Somma, and David Yonge-Mallo, \emph{Efficient discrete-time simulations of continuous-time quantum query algorithms}, Proceedings of the 41st ACM Symposium on Theory of Computing, pp.~409--416, 2009, \mbox{\href{http://arxiv.org/abs/arXiv:0811.4428}{arXiv:0811.4428}}.
\bibitem{FGG08} Edward Farhi, Jeffrey Goldstone, and Sam Gutmann, \emph{A quantum algorithm for the {H}amiltonian {NAND} tree}, Theory of Computing \textbf{4} (2008), no.~8, 169--190, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/0702144}{arXiv:quant-ph/0702144}}.
\bibitem{FGGS98} Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser, \emph{Limit on the speed of quantum computation in determining parity}, Physical Review Letters \textbf{81} (1998), no.~24, 5442--5444, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/9802045}{arXiv:quant-ph/9802045}}.
\bibitem{FG98} Edward Farhi and Sam Gutmann, \emph{Analog analogue of a digital quantum computation}, Physical Review A \textbf{57} (1998), no.~4, 2403--2406, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/9612026}{arXiv:quant-ph/9612026}}.
\bibitem{Fey82} Richard~P. Feynman, \emph{Simulating physics with computers}, International Journal of Theoretical Physics \textbf{21} (1982), no.~6--7, 467--488.
\bibitem{HHL09} Aram~W. Harrow, Avinatan Hassidim, and Seth Lloyd, \emph{Quantum algorithm for linear systems of equations}, Physical Review Letters \textbf{103} (2009), no.~15, 150502, \mbox{\href{http://arxiv.org/abs/arXiv:0811.3171}{arXiv:0811.3171}}.
\bibitem{HR90} Jacky Huyghebaert and Hans~De Raedt, \emph{Product formula methods for time-dependent {S}chr\"{o}dinger problems}, Journal of Physics A \textbf{23} (1990), no.~24, 5777.
\bibitem{Jor75} Camille Jordan, \emph{Essai sur la g{\'e}om{\'e}trie {\`a} $n$ dimensions}, Bulletin de la Soci{\'e}t{\'e} Math{\'e}matique de France \textbf{3} (1875), 103--174.
\bibitem{LMRSS11} Troy Lee, Rajat Mittal, Ben~W. Reichardt, Robert {\v S}palek, and Mario Szegedy, \emph{Quantum query complexity of state conversion}, Proceedings of the 52nd IEEE Symposium on Foundations of Computer Science, pp.~344--353, 2011, \mbox{\href{http://arxiv.org/abs/arXiv:1011.3020}{arXiv:1011.3020}}.
\bibitem{Llo96} Seth Lloyd, \emph{Universal quantum simulators}, Science \textbf{273} (1996), no.~5278, 1073--1078.
\bibitem{MW05} Chris Marriott and John Watrous, \emph{Quantum {A}rthur--{M}erlin games}, Computational Complexity \textbf{14} (2005), no.~2, 122--152, \mbox{\href{http://arxiv.org/abs/arXiv:cs/0506068}{arXiv:cs/0506068}}.
\bibitem{Moc07} Carlos Mochon, \emph{Hamiltonian oracles}, Physical Review A \textbf{75} (2007), no.~4, 042313, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/0602032}{arXiv:quant-ph/0602032}}.
\bibitem{MR95} Rajeev Motwani and Prabhakar Raghavan, \emph{Randomized algorithms}, Cambridge University Press, 1995.
\bibitem{NWZ09} Daniel Nagaj, Pawel Wocjan, and Yong Zhang, \emph{Fast amplification of {QMA}}, Quantum Information and Computation \textbf{9} (2009), no.~11--12, 1053--1068, \mbox{\href{http://arxiv.org/abs/arXiv:0904.1549}{arXiv:0904.1549}}.
\bibitem{PQSV11} David Poulin, Angie Qarry, Rolando~D. Somma, and Frank Verstraete, \emph{Quantum simulation of time-dependent {H}amiltonians and the convenient illusion of {H}ilbert space}, Physical Review Letters \textbf{106} (2011), no.~17, 170501, \mbox{\href{http://arxiv.org/abs/arXiv:1102.1360}{arXiv:1102.1360}}.
\bibitem{Suz91} Masuo Suzuki, \emph{General theory of fractal path integrals with applications to many-body theories and statistical physics}, Journal of Mathematical Physics \textbf{32} (1991), no.~2, 400--407.
\bibitem{Wat09} John Watrous, \emph{Zero-knowledge against quantum attacks}, SIAM Journal on Computing \textbf{39} (2009), no.~1, 296--305, \mbox{\href{http://arxiv.org/abs/arXiv:quant-ph/0511020}{arXiv:quant-ph/0511020}}.
\bibitem{Wat22} George~N. Watson, \emph{A treatise on the theory of {B}essel functions}, Cambridge University Press, 1922.
\bibitem{WBHS11} Nathan Wiebe, Dominic~W. Berry, Peter H{\o}yer, and Barry~C. Sanders, \emph{Simulating quantum dynamics on a quantum computer}, Journal of Physics A \textbf{44} (2011), no.~44, 445308, \mbox{\href{http://arxiv.org/abs/arXiv:1011.3489}{arXiv:1011.3489}}.
\end{thebibliography}
\appendix
\section{Proofs of known results} \label{app:proofs}
In this appendix, for the sake of completeness we provide proofs of claims that are known or essentially follow from known results.
\subsection{Equivalence of continuous- and fractional-query models} \label{app:equiv} \EQUIV* \begin{proof} A simulation of the continuous-query model by the fractional-query model with the stated properties appears in Section II.A of \cite{CGM+09}. We present their proof for completeness. We wish to implement the unitary $U(T)$ satisfying the Schr\"{o}dinger equation \eq{schrodinger} with $U(0)=\mathbbm{1}$. To refer to the solutions of this equation for arbitrary Hamiltonians and time intervals, we define $U_H(t_2,t_1)$ to be the solution of the Schr\"{o}dinger equation with Hamiltonian $H$ from time $t_1$ to time $t_2$ where $U(t_1) = \mathbbm{1}$. In this notation, $U(T) = U_{H_x + H_D}(T,0)$. Let $m$ be a positive integer and $\theta = T/m$. We have
\begin{equation}
U_{H_x + H_D}(T,0) = U_{H_x + H_D}(m\theta,(m-1)\theta) \cdots U_{H_x + H_D}(2\theta,\theta) U_{H_x + H_D}(\theta,0).
\end{equation}
If we can approximate each of these $m$ terms, we can use the subadditivity of error in implementing unitaries (i.e., $\norm{UV - \tilde{U}\tilde{V}} \leq \norm{U - \tilde{U}} + \norm{V - \tilde{V}}$ for unitaries $U, \tilde{U}, V, \tilde{V}$) to obtain an approximation of $U(T)$. Reference \cite{HR90} shows that for small $\theta$, the evolution according to Hamiltonians $A$ and $B$ over an interval of length $\theta$ approximates the evolution according to $A+B$ over the same interval. Specifically, from \cite[eq.~A8b]{HR90} we have
\begin{equation}
\norm{U_{A+B}((j+1)\theta,j\theta) - U_{A}((j+1)\theta,j\theta) U_{B}((j+1)\theta,j\theta)} \leq \int_{j\theta}^{(j+1)\theta} \mathrm{d}v \int_{j\theta}^{v}\mathrm{d}u \, \norm{[A(u),B(v)]}.
\end{equation}
In our application, $A(t) = H_D(t)$ and $B=H_x$.
Since $\norm{H_x}=1$, the right-hand side is at most \begin{equation} 2 \int_{j\theta}^{(j+1)\theta} \mathrm{d}v \int_{j\theta}^{v} \mathrm{d}u \, \norm{H_D(u)} \leq 2 \int_{j\theta}^{(j+1)\theta} \mathrm{d}v \int_{j\theta}^{(j+1)\theta} \mathrm{d}u \, \norm{H_D(u)} = 2\theta \int_{j\theta}^{(j+1)\theta} \norm{H_D(u)} \, \mathrm{d}u. \end{equation} By subadditivity, the error in implementing $U(T)$ is at most \begin{equation} 2\theta \sum_{j=0}^{m-1} \int_{j\theta}^{(j+1)\theta} \norm{H_D(u)} \, \mathrm{d}u = 2\theta \int_{0}^{T} \norm{H_D(u)} \mathrm{d}u = 2\theta \bar h T = \frac{2\bar h T^2}{m}. \end{equation} This error is smaller than $\epsilon$ when $m \geq 2\bar h T^2/\epsilon$, which proves this direction of the equivalence. For the other direction, consider a fractional-query algorithm \begin{equation} U_{\mathrm{fq}} \colonequals U_m Q^{\alpha_m} U_{m-1} \cdots Q^{\alpha_2} U_1 Q^{\alpha_1} U_0 \label{eq:fqcircuit} \end{equation} (recall that $Q$ depends on $x$), where $\alpha_i \in (0,1]$ for all $i \in [m]$, with complexity $T = \sum_{i=1}^m \alpha_i$. Let $A_i \colonequals \sum_{j=1}^i \alpha_j$ for all $i \in [m]$ and let $U_j \equalscolon e^{-i H_D^{(j)}}$ for all $j \in \{0,1,\ldots,m\}$. Consider the piecewise constant Hamiltonian \begin{equation} H(t) = H_x + \frac{1}{\epsilon_1} \left(\delta_{t \in [0,\epsilon_1]} H_D^{(0)} + \sum_{i=1}^m \delta_{t \in [A_i-\epsilon_1,A_i]} H_D^{(i)} \right), \end{equation} where $\delta_B$ is $0$ if $B$ is false and $1$ if $B$ is true. Provided $\epsilon_1 \le \min\{\alpha_1/2,\alpha_2,\ldots,\alpha_m\}$, evolving with $H(t)$ from $t=0$ to $T$ implements a unitary close to our fractional-query algorithm. 
More precisely, it implements
\begin{equation}
\begin{aligned}
U(T) &= e^{-i (H_D^{(m)} + \epsilon_1 H_x)} e^{-i(\alpha_m - \epsilon_1) H_x} e^{-i (H_D^{(m-1)} + \epsilon_1 H_x)} \cdots \\
&~\quad e^{-i(\alpha_2 - \epsilon_1) H_x} e^{-i (H_D^{(1)} + \epsilon_1 H_x)} e^{-i(\alpha_1 - 2\epsilon_1) H_x} e^{-i (H_D^{(0)} + \epsilon_1 H_x)},
\end{aligned}
\label{eq:piecewise}
\end{equation}
which satisfies $\norm{U(T) - U_{\mathrm{fq}}} = O(m\epsilon_1)$. This follows from the fact that each exponential in \eq{piecewise} approximates the corresponding unitary of \eq{fqcircuit} within error $\epsilon_1$ (e.g., $\norm{e^{-i (H_D^{(m)} + \epsilon_1 H_x)} - U_m} = O(\epsilon_1)$ and $\norm{e^{-i(\alpha_m - \epsilon_1) H_x} - Q^{\alpha_m}} = O(\epsilon_1)$) and the subadditivity of error when implementing unitaries. The fact that each exponential has error $O(\epsilon_1)$ follows from the inequality $\norm{e^{iA} - e^{iB}} \leq \norm{A-B}$. This can be proved by observing that $\norm{e^{iA} - e^{iB}} = \norm{(e^{iA/n})^n - (e^{iB/n})^n} \leq n\norm{e^{iA/n} - e^{iB/n}} \leq \norm{A-B} + O(1/n)$, where the first inequality uses subadditivity of error and the second inequality follows by Taylor expansion. Since the statement is true for all $n$, the claim follows. This simulation has continuous-query complexity $T$. Its error can be made less than $\epsilon$ by choosing $\epsilon_1$ sufficiently small (in particular, it suffices to take some $\epsilon_1 = \Theta(\epsilon/m)$). \end{proof}
\subsection{The Approximate Segment Lemma} \label{app:approxsegment}
In this section, we establish the Approximate Segment Lemma (\lem{approxsegment}). This lemma essentially follows from \cite{CGM+09} with minor modification. We start by proving the following Gadget Lemma, which follows from \cite[Section II.B]{CGM+09}.
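The inequality $\norm{e^{iA} - e^{iB}} \le \norm{A-B}$ invoked in the proof above is also easy to probe numerically; a quick sketch over random Hermitian matrices (the dimension and number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    # random Hermitian matrix
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

def expi(h):
    # e^{iH} for Hermitian H via eigendecomposition
    w, v = np.linalg.eigh(h)
    return (v * np.exp(1j * w)) @ v.conj().T

for _ in range(100):
    a, b = rand_herm(6), rand_herm(6)
    lhs = np.linalg.norm(expi(a) - expi(b), 2)   # spectral norm
    rhs = np.linalg.norm(a - b, 2)
    assert lhs <= rhs + 1e-9
```
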
\GADGET* \begin{proof} The input state evolves as follows:
\begin{align}
|0\rangle|\psi\rangle &\mapsto \frac{\sqrt{c} |0\rangle + \sqrt{s} |1\rangle}{\sqrt{c+s}} |\psi\rangle \nonumber \\
&\mapsto \frac{1}{\sqrt{c+s}}(\sqrt{c}|0\rangle|\psi\rangle + \sqrt{s}|1\rangle Q|\psi\rangle) \nonumber \\
&\mapsto \frac{1}{c+s}[|0\rangle(c |\psi\rangle + i s Q|\psi\rangle) + \sqrt{cs} |1\rangle (|\psi\rangle - i Q|\psi\rangle)] \nonumber \\
&= \sqrt{q_\alpha} (|0\rangle e^{i \pi \alpha/2}Q^\alpha|\psi\rangle + \sqrt{\sin(\pi\alpha)}|1\rangle e^{-i \pi/4} Q^{-1/2}|\psi\rangle).
\end{align}
Thus the output has the stated form. \end{proof}
We can now collect these gadgets into a segment, which implements a fractional-query algorithm of constant query complexity with amplitude $1/2$. \SEGMENT* \begin{proof} We first analyze the subcircuit in the dashed box in \fig{segment}, which is the entire circuit without the first qubit. The first qubit does not interact with the rest of the qubits and is only used at the end of the proof. This subcircuit is built by composing several fractional-query gadgets (as in \fig{gadget}) with a new control qubit for each gadget but with a common target. The $m$ gadgets correspond to making the fractional queries $Q^{\alpha_i}$. The first register of a gadget indicates whether it has applied the fractional query successfully, in which case the register is $\ket{0}$, or not, in which case it is $\ket{1}$. For the $i$th gadget, the output state has amplitude $\sqrt{q_{\alpha_i}}$ on the state $\ket{0}$ corresponding to the successful outcome, as shown in \lem{gadget}. The state of the control qubits on the output is $|0^{m}\rangle$ only when all the gadgets have successfully applied the fractional query. In this case, the target has been successfully transformed to $V|\psi\rangle$.
Thus the dashed subcircuit in \fig{segment} performs the map
\begin{equation}
|0^{m}\rangle|\psi\rangle \mapsto \sqrt{p}|0^{m}\rangle e^{i\vartheta}V|\psi\rangle + \sqrt{1-p}|\Phi^\perp\rangle
\end{equation}
for some $|\Phi^\perp\rangle$ satisfying $(|0^m\rangle\langle 0^m| \otimes \mathbbm{1})|\Phi^\perp\rangle=0$, where $p = \prod_{i=1}^{m} {q_{\alpha_i}}$ and $\vartheta = -\sum_{i=1}^{m} \pi\alpha_i/2 \bmod 2\pi$. This is similar to the desired statement, except that we want the amplitude in front of $|0^{m}\rangle$ to be $1/2$ instead of $\sqrt{p}$. We show that $p>1/4$ and then use the first qubit to decrease its value to exactly $1/4$. Since $\sum_{i=1}^{m} \alpha_i \leq 1/5$ by assumption, we can lower bound the value of $p$ as follows. Since $\alpha_i \geq 0$ for all $i$, using the inequalities $\sin x \leq x$ (for $x \geq 0$) and $1/(1+x) \geq 1-x$ (for $x \ge -1$) gives
\begin{equation}
p = \prod_{i=1}^{m} {q_{\alpha_i}} = \prod_{i=1}^{m} \frac{1}{1+\sin(\pi\alpha_i)} \geq \prod_{i=1}^{m} \frac{1}{1+\pi\alpha_i} \geq \prod_{i=1}^{m} (1-\pi\alpha_i) \geq 1 - \pi \sum_{i=1}^{m} \alpha_i \geq 1 - \frac{\pi}{5} > \frac{1}{4},
\end{equation}
where the third inequality uses the fact that for $x_i \in [0,1]$, $\prod_i (1-x_i) \geq 1 - \sum_i x_i$. Thus we have $\sqrt{p} > 1/2$. Now let $\Upsilon$ be any unitary that maps $\ket{0}$ to $\frac{1}{2\sqrt{p}}\ket{0} + (1-\frac{1}{4p})^{1/2}\ket{1}$. Since $\sqrt{p} > 1/2$, we have $\frac{1}{2\sqrt{p}} < 1$, so a unitary $\Upsilon$ exists. Then for the full circuit in \fig{segment}, the amplitude corresponding to the state $\ket{0^m}$ is $\sqrt{p} \cdot \frac{1}{2\sqrt{p}} = 1/2$. \end{proof}
Finally, we show that the map in the previous lemma can be performed to error $\epsilon$ using only $O\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ queries. \APPROXSEGMENT* \begin{proof} From \lem{segment} we know that the circuit in \fig{segment} performs the claimed map with no error.
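As an aside, the chain of inequalities bounding $p$ in the Segment Lemma above can be spot-checked numerically; a sketch (the random sampling of the $\alpha_i$ subject to $\sum_i \alpha_i \le 1/5$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(1000):
    m = rng.integers(1, 20)
    alpha = rng.random(m)
    alpha *= 0.2 / alpha.sum() * rng.random()   # enforce sum(alpha) <= 1/5
    p = np.prod(1.0 / (1.0 + np.sin(np.pi * alpha)))
    # each step of the chain from the proof, up to floating-point slack
    assert p >= np.prod(1.0 / (1.0 + np.pi * alpha)) - 1e-12
    assert np.prod(1.0 / (1.0 + np.pi * alpha)) >= np.prod(1 - np.pi * alpha) - 1e-12
    assert np.prod(1 - np.pi * alpha) >= 1 - np.pi * alpha.sum() - 1e-12
    assert p > 0.25
```
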
However, the circuit makes $m$ discrete queries, which can be arbitrarily large. We wish to construct a circuit with error at most $\epsilon$ that makes only $O\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$ queries, independent of $m$. We first analyze the subcircuit in the dotted box in \fig{segment}. The output of this subcircuit is $\ket{\zeta} = \bigotimes_{i=1}^{m} R_{\alpha_i} |0\rangle = \bigotimes_{i=1}^{m} \frac{1}{\sqrt{c_i+s_i}}(\sqrt{c_i}|0\rangle + \sqrt{s_i}|1\rangle)$, where $c_i \colonequals \cos(\pi \alpha_i/2)$ and $s_i \colonequals \sin(\pi \alpha_i/2)$. We also define $q_i \colonequals q_{\alpha_i} = 1/(c_i+s_i)^2 = 1/(1+\sin(\pi\alpha_i))$. We can write $\ket{\zeta} = \sum_{x \in \{0,1\}^{m}} w_x|x\rangle$ with $\sum_x |w_x|^2 = 1$. Now consider the subnormalized state $\ket{\zeta_k} \colonequals \sum_{|x| \leq k} w_x|x\rangle$, where $|x|$ denotes the Hamming weight of $x$ and $k \leq m$ is a positive integer. In the circuit, we approximate the state $|\zeta\rangle$ with some $|\zeta_k\rangle$. Clearly $|\zeta_m\rangle = |\zeta\rangle$, and the approximation becomes worse as $k$ decreases. To achieve a $1-\epsilon^2/2$ approximation, we claim it suffices to take $k = \Omega\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$. Since $1 - \langle\zeta|\zeta_k\rangle = \sum_{|x|>k} |w_x|^2$, we must upper bound $\sum_{|x|>k} |w_x|^2$ in terms of $k$. Consider $m$ independent random variables $X_i$ with $\Pr(X_i = 0) = \frac{c_i}{c_i+s_i}$ and $\Pr(X_i = 1) = \frac{s_i}{c_i+s_i}$. The probability that $\sum_i X_i > k$ is $\sum_{|x|>k} |w_x|^2$, since $|w_x|^2$ is the probability of the event $X_i = x_i$ for all $i$. For such events, the Chernoff bound (see for example \cite[Theorem 4.1]{MR95}) says that for any $\delta>0$, \begin{equation} \Pr \left(\sum_i X_i > (1+\delta)\mu\right) < \frac{e^{\delta\mu}}{(1+\delta)^{(1+\delta)\mu}}, \end{equation} where $\mu \colonequals \sum_i \Pr(X_i = 1) = \sum_i \frac{s_i}{c_i+s_i}$. 
Since $\alpha_i \geq 0$ and $\sum_i \alpha_i \leq 1/5$, we have $\mu \geq 0$ and $\mu = \sum_i \frac{s_i}{c_i+s_i} \leq \sum_i s_i = \sum_i \sin(\pi\alpha_i/2) \leq \sum_i \pi\alpha_i/2 \leq \pi/10 \leq 1$, where we used the facts that $\sin x \leq x$ for all $x>0$ and $\sin \theta + \cos \theta \geq 1$ for all $\theta \in [0,\pi/2]$. Setting $k = (1+\delta)\mu$, we get $\sum_{|x|>k} |w_x|^2 = \Pr (\sum_i X_i > k) < {e^{k-\mu}}/{(1+\delta)^{k}} = {e^{k-\mu}\mu^k}/{k^k} < e^k/k^k$. This is less than $\epsilon^2/2$ when $k = \Omega\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$. For such a value of $k$, the state $\ket{\zeta_k}$ has inner product at least $1-\epsilon^2/2$ with $\ket{\zeta}$. Let $|\tilde{\zeta}\rangle$ denote the normalized $\ket{\zeta_k}$ for some choice of $k = \Omega\big(\frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$. The state $|\tilde{\zeta}\rangle$ also has inner product at least $1-\epsilon^2/2$ with $\ket{\zeta}$. We replace the dotted box in \fig{segment} with $|\tilde{\zeta}\rangle$, a fixed state that requires no queries to create. With this modification, the control qubits are in a superposition over states with Hamming weight at most $k$, suggesting that this circuit can be performed with at most $k$ queries. We now show that this is possible. The control qubits are in a superposition over states $\ket{b}$ where $b \in \{0,1\}^{m}$. The value of $b_i$ decides whether the $i$th query occurs or not. The string $b$ therefore completely determines the product of unitary matrices that is applied to $|\psi\rangle$ when the control qubits are in the state $|b\rangle$. This product contains at most $k$ query gates, and thus may be written as
\begin{equation}
W_{|b|}(b) \, Q \, W_{{|b|}-1}(b) \cdots Q \, W_1(b) \, Q \, W_0(b).
\end{equation}
Note that the $W_i$ operators are functions of $b$.
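Returning briefly to the tail estimate above: the exact tail of the Poisson binomial distribution of $\sum_i X_i$ can be computed by convolution and compared against the simplified bound $e^k/k^k$; a numerical sketch (the sampling scheme and the range of $k$ tested are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

for _ in range(200):
    m = rng.integers(1, 30)
    alpha = rng.random(m)
    alpha *= 0.2 / alpha.sum() * rng.random()       # sum(alpha) <= 1/5
    c = np.cos(np.pi * alpha / 2)
    s = np.sin(np.pi * alpha / 2)
    p = s / (c + s)                                  # Pr(X_i = 1)
    # exact distribution of sum_i X_i by convolving the Bernoulli factors
    dist = np.array([1.0])
    for pi in p:
        dist = np.convolve(dist, [1 - pi, pi])
    for k in range(1, min(int(m), 8) + 1):
        tail = dist[k + 1:].sum()                    # Pr(sum_i X_i > k)
        assert tail <= np.e**k / k**k + 1e-12
```
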
We may also write this unitary as \begin{equation} W_k(b) \, Q_k(b) \, W_{k-1}(b) \cdots Q_2(b) \, W_1(b) \, Q_1(b) \, W_0(b), \label{eq:rearranged} \end{equation} where for $i \le |b|$ the $W_i$ operators are as before and for $i > |b|$, we have $W_i = \mathbbm{1}$. Here $Q_i(b)$ is defined to be $Q$ when $i \leq |b|$ and $\mathbbm{1}$ when $i>|b|$. We can now construct a circuit that performs the unitary in \eq{rearranged} controlled on the value of $b$. This circuit has at most $k$ query gates and performs the same unitary as the circuit in \fig{segment} with $\ket{\zeta}$ replaced with $|\tilde{\zeta}\rangle$. Finally, we show that the actual operation performed, denoted $\tilde U$, is within error $\epsilon$ of the ideal unitary $U$. The only difference between these operations is that $\tilde U$ prepares $|\tilde\zeta\rangle$ rather than $\ket{\zeta}$ in the initial step. Therefore the error between $\tilde U$ and $U$ is at most the error between an operation that prepares $|\tilde\zeta\rangle$ and an operation that prepares $\ket{\zeta}$. If we required $U$ to prepare $\ket{\zeta}$ using $\bigotimes_{i=1}^m R_{\alpha_i}$, it would be difficult to design a nearby unitary that prepares $|\tilde\zeta\rangle$. However, the lemma does not specify the action of $U$ on states not of the form $|0^{m+1}\rangle\ket{\psi}$, so we can make any convenient choice of the operation preparing $\ket{\zeta}$ that is close to the operation preparing $|\tilde\zeta\rangle$. Let $R:=\bigotimes_{i=1}^m R_{\alpha_i}$ and denote the unitary that prepares $|\tilde\zeta\rangle$ by $\tilde R$. In the computational basis, $R$ has first column $\zeta$ and $\tilde R$ has first column $\tilde\zeta$. We claim there is a unitary $R'$ that is within $\epsilon$ of $\tilde R$ but that has the same first column as $R$. To see this, let $\theta$ satisfy $\langle\tilde\zeta|\zeta\rangle=\cos\theta$. 
Consider the 2-dimensional subspace spanned by $|\zeta\rangle$ and $|\tilde\zeta\rangle$, and let $E$ be the unitary that rotates by angle $\theta$ in this subspace, but acts as the identity outside the subspace. In particular, $E|\tilde\zeta\rangle=|\zeta\rangle$. Taking $R' := E\tilde R$, we see that $R'$ has the first column $\zeta$ as required. The error is $\|R'-\tilde R\|=\|E\tilde R-\tilde R\|=\|E-\mathbbm{1}\|=\sqrt{2-2\cos\theta}$. Since $\langle\tilde\zeta|\zeta\rangle \ge 1-\epsilon^2/2$, we find $\|R'-\tilde R\|\le\epsilon$. Because the remainder of the circuit is identical, the overall error between $\tilde U$ and $U$ is at most $\epsilon$ as claimed. \end{proof} \end{document}
\begin{document} \title{On the subgroup structure of the hyperoctahedral group in six dimensions} \begin{abstract} We investigate the subgroup structure of the hyperoctahedral group in six dimensions. In particular, we study the subgroups isomorphic to the icosahedral group. We classify the orthogonal crystallographic representations of the icosahedral group and analyse their intersections and subgroups, using results from graph theory and the spectra of graphs. \end{abstract} \section{Introduction} The discovery of quasicrystals in 1984 by Shechtman et al. has spurred the mathematical and physical communities to develop mathematical tools for studying structures with non-crystallographic symmetry. Quasicrystals are alloys with five-fold or ten-fold symmetry in their atomic positions, and therefore they cannot be organised as (periodic) lattices. In crystallographic terms, their symmetry group $G$ is non-crystallographic. However, the non-crystallographic symmetry leaves a lattice invariant in higher dimensions, providing an integral representation of $G$, usually referred to as a \emph{crystallographic representation}. This representation is reducible and contains a two- or three-dimensional invariant subspace. This is the starting point for constructing quasicrystals via the \emph{cut-and-project} method, described, among others, by Senechal \cite{senechal}, or as a model set \cite{moody}. In this paper we are interested in icosahedral symmetry. The icosahedral group $\mathcal{I}$ consists of all the rotations that leave a regular icosahedron invariant, and it is the largest of the finite subgroups of $SO(3)$. $\mathcal{I}$ contains elements of order $5$, therefore it is non-crystallographic in 3D; its (minimal) crystallographic representation is six-dimensional \cite{levitov}. The full icosahedral group, denoted by $\mathcal{I}_h$, also contains the reflections and is equal to $\mathcal{I} \times C_2$, where $C_2$ denotes the cyclic group of order 2.
$\mathcal{I}_h$ is isomorphic to the Coxeter group $H_3$ and is made up of 120 elements. In this work we focus on the icosahedral group $\mathcal{I}$, because it plays a central role in applications in virology \cite{giuliana}. However, our considerations apply equally to the larger group $\mathcal{I}_h$. Levitov and Rhyner \cite{levitov} classified the lattices in $\mathbb{R}^6$ that are left invariant by $\mathcal{I}$: there are, up to equivalence, exactly three such lattices, usually referred to as \emph{icosahedral Bravais lattices}, namely the simple cubic (SC), body-centered cubic (BCC) and face-centered cubic (FCC) lattices. The point group of these lattices is the six-dimensional hyperoctahedral group, denoted by $B_6$, which is a subgroup of $O(6)$ and can be represented in the standard basis of $\mathbb{R}^6$ as the set of all $6 \times 6$ orthogonal and integral matrices. The subgroups of $B_6$ which are isomorphic to the icosahedral group constitute its integral representations; among them, the crystallographic ones (following the terminology of \cite{levitov}) are those which split, in $GL(6,\mathbb{R})$, into two three-dimensional irreducible representations of $\mathcal{I}$. Therefore they carry two three-dimensional subspaces which are invariant under the action of $\mathcal{I}$ and can be used to model the quasiperiodic structures. The embedding of the icosahedral group into $B_6$ has been used extensively in the crystallographic literature. Katz \cite{katz}, Senechal \cite{senechal}, Kramer and Zeidler \cite{zeidler}, and Grimm \cite{grimm}, among others, start from a six-dimensional crystallographic representation of $\mathcal{I}$ to construct three-dimensional Penrose tilings and icosahedral quasicrystals. Kramer \cite{kramer} and Indelicato et al. \cite{giuliana} also apply it to study structural transitions in quasicrystals.
In particular, Kramer considers in $B_6$ a representation of $\mathcal{I}$ and a representation of the octahedral group $\mathcal{O}$ which share a tetrahedral subgroup, and defines a continuous rotation (called the Schur rotation) between cubic and icosahedral symmetry which preserves intermediate tetrahedral symmetry. Indelicato et al. define a transition between two icosahedral lattices as a continuous path connecting the two lattice bases while keeping some symmetry preserved, described by a maximal subgroup of the icosahedral group. The rationale behind this approach is that the two corresponding lattice groups share a common subgroup. These two approaches are shown to be related \cite{paolo}, hence the idea is that it is possible to study the transitions between icosahedral quasicrystals by considering two distinct crystallographic representations of $\mathcal{I}$ in $B_6$ which share a common subgroup. These papers motivate the idea of studying in some detail the subgroup structure of $B_6$. In particular, we focus on the subgroups isomorphic to the icosahedral group and its subgroups. Since the group is quite large (it has $2^6 \, 6!$ elements), we use for computations the software \texttt{GAP} \cite{GAP}, which is designed to compute properties of finite groups. More precisely, based on \cite{baake}, we generate the elements of $B_6$ in \texttt{GAP} as a subgroup of the symmetric group $S_{12}$ and then find the classes of subgroups isomorphic to the icosahedral group. Among them we isolate, using results from character theory, the class of crystallographic representations of $\mathcal{I}$. In order to study the subgroup structure of this class, we propose a method based on graphs and their spectra. In particular, we treat the class of crystallographic representations of $\mathcal{I}$ as a graph: we fix a subgroup $\mathcal{G}$ of $\mathcal{I}$ and say that two elements in the class are adjacent if their intersection is equal to a subgroup isomorphic to $\mathcal{G}$.
We call the resulting graph a $\mathcal{G}$-graph. These graphs are quite large and difficult to visualise; however, by analysing their spectra \cite{doob} we can study in some detail their topology, hence describing the intersections and the subgroups shared by different representations. The paper is organised as follows. After recalling, in Section \ref{crystal_section}, the definitions of point group and lattice group, we define, in Section \ref{ico_section}, the crystallographic representations of the icosahedral group and the icosahedral lattices in six dimensions. We provide, following \cite{haase}, a method for the construction of the projection into 3D using tools from the representation theory of finite groups. In Section \ref{cube_section} we classify, with the help of \texttt{GAP}, the crystallographic representations of $\mathcal{I}$. In Section \ref{grafi_section} we study their subgroup structure, introducing the concept of a $\mathcal{G}$-graph, where $\mathcal{G}$ is a subgroup of $\mathcal{I}$. \section{Lattices and non-crystallographic groups}\label{crystal_section} Let $\boldsymbol{b}_i$, $i=1,\ldots,n$ be a basis of $\mathbb{R}^n$, and let $B \in GL(n, \mathbb{R})$ be the matrix whose columns are the components of $\boldsymbol{b}_i$ with respect to the canonical basis $\{\boldsymbol{e}_i, i=1,\ldots, n \}$ of $\mathbb{R}^n$. A \emph{lattice} in $\mathbb{R}^n$ is a free $\mathbb{Z}$-module of rank $n$ with basis $B$, i.e. \begin{equation*} \mathcal{L}(B) = \left\{ \boldsymbol{x} = \sum_{i=1}^n m_i \boldsymbol{b}_i : \; m_i \in \mathbb{Z} \right\}. \end{equation*} Any other lattice basis is given by $BM$, where $M \in GL(n,\mathbb{Z})$, the set of invertible matrices with integral entries (whose determinant is equal to $\pm1$) \cite{artin}.
The \emph{point group} of a lattice $\mathcal{L}$ is given by all the orthogonal transformations that leave the lattice invariant \cite{zanzotto}: \begin{equation*} \mathcal{P}(B) = \{ Q \in O(n) : QB = BM, \; \exists M \in GL(n,\mathbb{Z}) \}. \end{equation*} We notice that, if $Q \in \mathcal{P}(B)$, then $B^{-1}QB = M \in GL(n,\mathbb{Z})$. In other words, the point group consists of all the orthogonal matrices which can be represented in the basis $B$ as integral matrices. The set of all these matrices constitutes the \emph{lattice group} of the lattice: \begin{equation*} \Lambda(B) = \{ M \in GL(n,\mathbb{Z}): M = B^{-1}QB, \; \exists Q \in \mathcal{P}(B) \}. \end{equation*} The lattice group provides an \emph{integral representation} of the point group, and the two are related via the equation \begin{equation*} \Lambda(B) = B^{-1}\mathcal{P}(B)B, \end{equation*} and moreover the following hold \cite{zanzotto}: \begin{equation*} \mathcal{P}(BM) = \mathcal{P}(B), \quad \Lambda(BM) = M^{-1}\Lambda(B)M, \quad M \in GL(n,\mathbb{Z}). \end{equation*} We notice that a change of basis in the lattice leaves the point group invariant, whereas the corresponding lattice groups are conjugate in $GL(n,\mathbb{Z})$. Two lattices are \emph{inequivalent} if the corresponding lattice groups are not conjugate in $GL(n,\mathbb{Z})$ \cite{zanzotto}. As a consequence of the crystallographic restriction (see, for example, \cite{grimm}), five-fold symmetry is forbidden in dimensions 2 and 3, and therefore any group $G$ containing elements of order 5 cannot be the point group of a two- or three-dimensional lattice. We therefore call these groups \emph{non-crystallographic}. In particular, three-dimensional icosahedral lattices cannot exist. However, a non-crystallographic group leaves some lattices invariant in higher dimensions, and the smallest such dimension is called the minimal embedding dimension.
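As a low-dimensional illustration of these definitions, the following Python sketch brute-forces the lattice group $\Lambda(B)$ of the planar hexagonal lattice: it searches for integral matrices $M$ with $|\det M| = 1$ such that $Q = BMB^{-1}$ is orthogonal. The basis, the search bound and the helper names are our own choices, not taken from the text; the bound $1$ happens to suffice for this small example.

```python
import itertools
import numpy as np

# Hexagonal lattice basis in R^2 (columns are the lattice vectors b_1, b_2).
B = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])

def lattice_group(B, bound=1):
    """Brute-force Lambda(B): integral M with |det M| = 1 such that
    B M B^{-1} is orthogonal.  Entries are searched in [-bound, bound]."""
    Binv = np.linalg.inv(B)
    group = []
    for entries in itertools.product(range(-bound, bound + 1), repeat=4):
        M = np.array(entries, dtype=float).reshape(2, 2)
        if abs(abs(np.linalg.det(M)) - 1.0) > 1e-9:
            continue  # not in GL(2, Z)
        Q = B @ M @ Binv
        if np.allclose(Q @ Q.T, np.eye(2), atol=1e-9):
            group.append(M.astype(int))
    return group

Lam = lattice_group(B)
print(len(Lam))  # 12: the point group of the hexagonal lattice is D_6, of order 12
```

The same brute-force check applied to a sheared (non-hexagonal) basis returns a smaller lattice group, illustrating that $\Lambda(B)$ depends on the lattice, not just on the ambient dimension.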
Following \cite{levitov}, we introduce: \begin{defn}\label{cryst} Let $G$ be a non-crystallographic group. A \emph{crystallographic representation} $\rho$ of $G$ is a $D$-dimensional representation of $G$ such that: \begin{enumerate} \item the characters $\chi_{\rho}$ of $\rho$ are integers; \item $\rho$ is reducible and contains a 2- or 3-dimensional representation of $G$. \end{enumerate} \end{defn} We observe that the first condition implies that $G$ must be a subgroup of the point group of a $D$-dimensional lattice. The second condition tells us that $\rho$ contains either a two- or three-dimensional invariant subspace $E$ of $\mathbb{R}^D$, usually referred to as \emph{physical space} \cite{levitov}. \section{$6D$ icosahedral lattices}\label{ico_section} The icosahedral group $\mathcal{I}$ is generated by two elements, $g_2$ and $g_3$, such that $g_2^2 = g_3^3 = (g_2g_3)^5 = e$, where $e$ denotes the identity element. It has order 60 and it is isomorphic to $A_5$, the alternating group on five symbols. Its character table is as follows (note that $\tau = \frac{\sqrt{5}+1}{2}$, and that $\mathcal{C}_2$, $\mathcal{C}_3$, $\mathcal{C}_5$ and $\mathcal{C}_5^2$ denote the conjugacy classes in $\mathcal{I}$ of $g_2$, $g_3$, $g_2g_3$ and $(g_2g_3)^2$, respectively, and the numbers denote the sizes of the classes): \begin{center} \begin{tabular}{l|c c c c c} Irrep & $E$ & $12\mathcal{C}_5$ & $12\mathcal{C}_5^2$ & $15\mathcal{C}_2$ & $20\mathcal{C}_3$ \\ \hline $A$ & 1 & 1 & 1 & 1 & 1 \\ $T_1$ & 3 & $\tau$ & 1-$\tau$ & -1 & 0 \\ $T_2$ & 3 & 1-$\tau$ & $\tau$ & -1 & 0 \\ $G$ & 4 & -1 & -1 & 0 & 1 \\ $H$ & 5 & 0 & 0 & 1 & -1 \\ \end{tabular} \end{center} From the character table we see that the (minimal) crystallographic representation of $\mathcal{I}$ is 6-dimensional and is given by $T_1 \oplus T_2$. Therefore, $\mathcal{I}$ leaves a lattice in $\mathbb{R}^6$ invariant.
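The character table above can be checked numerically: the rows, weighted by the class sizes, must be orthonormal with respect to the standard character inner product $\langle \chi_1, \chi_2 \rangle = \frac{1}{|G|}\sum_g \chi_1(g)\chi_2(g)$. A small Python sanity check (the variable names are ours):

```python
import math

tau = (math.sqrt(5) + 1) / 2          # the golden ratio, as in the text

# Class sizes of I: E, 12 C5, 12 C5^2, 15 C2, 20 C3
sizes = [1, 12, 12, 15, 20]

# Rows of the character table
chars = {
    "A":  [1, 1, 1, 1, 1],
    "T1": [3, tau, 1 - tau, -1, 0],
    "T2": [3, 1 - tau, tau, -1, 0],
    "G":  [4, -1, -1, 0, 1],
    "H":  [5, 0, 0, 1, -1],
}

def inner(chi1, chi2):
    """Character inner product, weighted by class sizes, |I| = 60."""
    return sum(k * a * b for k, a, b in zip(sizes, chi1, chi2)) / 60

# Irreducible characters are orthonormal
for n1, c1 in chars.items():
    for n2, c2 in chars.items():
        expected = 1.0 if n1 == n2 else 0.0
        assert abs(inner(c1, c2) - expected) < 1e-12
```

For instance, $\langle \chi_{T_1}, \chi_{T_2}\rangle$ vanishes because $\tau(1-\tau) = -1$, so the cross terms give $9 - 24 + 15 = 0$.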
Levitov and Rhyner \cite{levitov} proved that the three inequivalent lattices of this type, mentioned in the introduction and referred to as \emph{icosahedral (Bravais) lattices}, are given, respectively, by: \begin{equation*} \mathcal{L}_{SC} = \left\{ \boldsymbol{x} =(x_1, \ldots, x_6) : x_i \in \mathbb{Z} \right\}, \end{equation*} \begin{equation*} \mathcal{L}_{BCC} = \left\{ \boldsymbol{x} = \frac{1}{2}(x_1, \ldots, x_6) : x_i \in \mathbb{Z}, \; x_i = x_j \; \text{mod}2, \forall i,j=1,\ldots,6 \right\}, \end{equation*} \begin{equation*} \mathcal{L}_{FCC} = \left\{ \boldsymbol{x} = \frac{1}{2}(x_1, \ldots, x_6) : x_i \in \mathbb{Z}, \; \sum_{i=1}^6 x_i = 0 \; \text{mod}2 \right\}. \end{equation*} We note that a basis of the SC lattice is the canonical basis of $\mathbb{R}^6$. Its point group is given by \begin{equation}\label{B_6} \mathcal{P}_{SC} = \{ Q \in O(6) : Q = M \in GL(6,\mathbb{Z}) \} = O(6) \cap GL(6,\mathbb{Z}) \simeq O(6,\mathbb{Z}), \end{equation} which is the \emph{hyperoctahedral group} in dimension $6$, denoted by $B_6$ \cite{baake}. All three lattices have point group $B_6$, whereas their lattice groups are different and, indeed, they are not conjugate in $GL(6,\mathbb{Z})$ \cite{levitov}. Let $\mathcal{H}$ be a subgroup of $B_6$ isomorphic to $\mathcal{I}$. $\mathcal{H}$ provides a (faithful) integral and orthogonal representation of $\mathcal{I}$. Moreover, if $\mathcal{H} \simeq T_1 \oplus T_2$ in $GL(6,\mathbb{R})$, then $\mathcal{H}$ is also crystallographic (in the sense of Definition \ref{cryst}). All the other crystallographic representations are given by $B^{-1}\mathcal{H} B$, where $B \in GL(6,\mathbb{R})$ is a basis of an icosahedral lattice in $\mathbb{R}^6$. Therefore we can focus our attention, without loss of generality, on the orthogonal crystallographic representations. \subsection{Projection operators}\label{proiezione} Let $\mathcal{H}$ be a crystallographic representation of the icosahedral group.
$\mathcal{H}$ splits into two 3-dimensional irreducible representations (IRs), $T_1$ and $T_2$, in $GL(6,\mathbb{R})$. This means that there exists a matrix $R \in GL(6,\mathbb{R})$ such that \begin{equation}\label{R} \mathcal{H}':= R^{-1}\mathcal{H} R = \left( \begin{array}{cc} T_1 & 0 \\ 0 & T_2 \end{array} \right). \end{equation} The two IRs $T_1$ and $T_2$ leave two three-dimensional subspaces invariant, which are usually referred to as the \emph{physical} (or parallel) space $E^{\parallel}$ and the \emph{orthogonal} space $E^{\perp}$ \cite{katz}. In order to find the matrix $R$ (which is not unique in general), we follow \cite{haase} and use results from the representation theory of finite groups (for proofs and further results see, for example, \cite{fulton}). In particular, let $\Gamma : G \rightarrow GL(n,F)$ be an $n$-dimensional representation of a finite group $G$ over a field $F$ ($F = \mathbb{R}, \mathbb{C}$). By Maschke's theorem, $\Gamma$ splits, in $GL(n,F)$, as $m_1 \Gamma_1 \oplus \ldots \oplus m_r \Gamma_r$, where $\Gamma_i : G \rightarrow GL(n_i, F)$ is an $n_i$-dimensional IR of $G$. Then the \emph{projection operator} $P_i : F^n \rightarrow F^{n_i}$ is given by \begin{equation}\label{proj} P_i:=\frac{n_i}{|G|} \sum_{g \in G} \chi^*_{\Gamma_i}(g) \Gamma(g), \end{equation} where $\chi_{\Gamma_i}^*$ denotes the complex conjugate of the character of the representation $\Gamma_i$. This operator is such that its image $\text{Im}(P_i)$ is equal to an $n_i$-dimensional subspace $V_i$ of $F^n$ invariant under $\Gamma_i$. In our case, we have two projection operators, $P_i : \mathbb{R}^6 \rightarrow \mathbb{R}^3$, $i = 1,2$, corresponding to the IRs $T_1$ and $T_2$, respectively. We assume the image of $P_1$, $\text{Im}(P_1)$, to be equal to $E^{\parallel}$, and $\text{Im}(P_2) = E^{\perp}$.
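A minimal illustration of the projection formula \eqref{proj}, for the smallest possible example rather than for $\mathcal{I}$ itself: the group $C_2$ acting on $\mathbb{R}^2$ by swapping coordinates, with its trivial and sign irreps. All names below are our own.

```python
import numpy as np

# Gamma: C2 = {e, g} acting on R^2 by swapping coordinates.
Gamma = {"e": np.eye(2), "g": np.array([[0.0, 1.0], [1.0, 0.0]])}

# Characters of the two 1-dimensional irreps of C2
chi_trivial = {"e": 1, "g": 1}
chi_sign    = {"e": 1, "g": -1}

def projector(chi, n_i=1, order=2):
    """P_i = (n_i/|G|) * sum_g  chi_i(g)^*  Gamma(g), as in (proj)."""
    return (n_i / order) * sum(chi[g] * Gamma[g] for g in Gamma)

P1 = projector(chi_trivial)   # projects onto span{(1, 1)}
P2 = projector(chi_sign)      # projects onto span{(1, -1)}

assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)  # idempotent
assert np.allclose(P1 + P2, np.eye(2))                        # resolution of the identity
```

The same pattern, with $n_i = 3$, $|G| = 60$ and the characters of $T_1$ and $T_2$, yields the two $6 \times 6$ projectors computed in the next section.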
If $\{\boldsymbol{e}_j, j=1,\ldots,6 \}$ is the canonical basis of $\mathbb{R}^6$, then a basis of $E^{\parallel}$ (respectively $E^{\perp}$) can be found by considering the set $\{\hat{\boldsymbol{e}}_j:=P_i\boldsymbol{e}_j, j=1,\ldots, 6\}$ for $i=1$ (respectively $i=2$) and then extracting a basis $\mathcal{B}_i$ from it. Since $\text{dim}E^{\parallel} = \text{dim}E^{\perp}=3$, we obtain $\mathcal{B}_i = \{\hat{\boldsymbol{e}}_{i,1},\hat{\boldsymbol{e}}_{i,2},\hat{\boldsymbol{e}}_{i,3} \}$, for $i=1,2$. The matrix $R$ can thus be written as \begin{equation}\label{matrice_proiezione} R = \left(\underbrace{\hat{\boldsymbol{e}}_{1,1},\hat{\boldsymbol{e}}_{1,2},\hat{\boldsymbol{e}}_{1,3}}_{\text{basis of} \; E^{\parallel}},\underbrace{\hat{\boldsymbol{e}}_{2,1},\hat{\boldsymbol{e}}_{2,2},\hat{\boldsymbol{e}}_{2,3}}_{\text{basis of} \; E^{\perp}}\right). \end{equation} Denoting by $\pi^{\parallel}$ and $\pi^{\perp}$ the $3 \times 6$ matrices which represent $P_1$ and $P_2$ in the bases $\mathcal{B}_1$ and $\mathcal{B}_2$, respectively, we have, by linear algebra, \begin{equation}\label{inv} R^{-1} = \left( \begin{array}{c} \pi^{\parallel} \\ \pi^{\perp} \end{array} \right). \end{equation} Since $R^{-1}\mathcal{H} = \mathcal{H}' R^{-1}$ (cf. \eqref{R}), we obtain \begin{equation}\label{comm} \pi^{\parallel}(\mathcal{H}(g)\boldsymbol{v}) = T_1(g)(\pi^{\parallel}(\boldsymbol{v})), \quad \pi^{\perp}(\mathcal{H}(g)\boldsymbol{v}) = T_2(g)(\pi^{\perp}(\boldsymbol{v})), \end{equation} for all $g \in \mathcal{I}$ and $\boldsymbol{v} \in \mathbb{R}^6$.
In particular, the following diagram commutes for all $g \in \mathcal{I}$: \begin{equation}\label{diagramma} \begin{CD} \mathbb{R}^6 @>\mathcal{H}(g)>> \mathbb{R}^6\\ @VV\pi^{\parallel}V @VV\pi^{\parallel}V\\ E^{\parallel} @>T_1(g)>> E^{\parallel} \end{CD} \end{equation} The pair $(\mathcal{H}, \pi^{\parallel})$ is the starting point for the construction of quasicrystals via the cut-and-project method (\cite{senechal}, \cite{paolo}). \section{Crystallographic representations of $\mathcal{I}$}\label{cube_section} From the previous section it follows that the six-dimensional hyperoctahedral group $B_6$ contains all the (minimal) orthogonal crystallographic representations of the icosahedral group. In this section we classify them, with the help of the computer algebra software \texttt{GAP} \cite{GAP}. \subsection{Representations of the hyperoctahedral group $B_6$} Permutation representations of the $n$-dimensional hyperoctahedral group $B_n$ in terms of elements of $S_{2n}$, the symmetric group of order $(2n)!$, have been described in \cite{baake}. In this subsection we review these results, since they allow us to generate $B_6$ in \texttt{GAP} and further study its subgroup structure. It follows from \eqref{B_6} that $B_6$ consists of all the orthogonal integral matrices. A matrix $A=(a_{ij})$ of this kind must satisfy $AA^T = I_6$, the identity matrix of order 6, and have integral entries only. It is easy to see that these conditions imply that $A$ has entries in $\{0,\pm 1\}$ and that each row and each column contains exactly one non-zero entry, equal to $1$ or $-1$. These matrices are called \emph{signed permutation matrices}. It is straightforward to see that any $A \in B_6$ can be written in the form $NQ$, where $Q$ is a $6 \times 6$ permutation matrix and $N$ is a diagonal matrix with each diagonal entry being either $1$ or $-1$.
We can thus associate with each matrix in $B_6$ a pair $(\boldsymbol{a}, \pi)$, where $\boldsymbol{a} \in \mathbb{Z}_2^6$ is a vector given by the diagonal elements of $N$, and $\pi \in S_6$ is the permutation associated with $Q$. The set of all these pairs constitutes a group (called the \emph{wreath product} of $\mathbb{Z}_2$ and $S_6$, and denoted by $\mathbb{Z}_2 \wr S_6$, \cite{humpreys}) with the multiplication rule given by \begin{equation*} (\boldsymbol{a}, \pi)(\boldsymbol{b}, \sigma):=(\boldsymbol{a}_{\sigma}+_2 \boldsymbol{b}, \pi\sigma), \end{equation*} where $+_2$ denotes addition modulo 2 and \begin{equation*} (\boldsymbol{a}_{\sigma})_k:=a_{\sigma(k)}, \quad \boldsymbol{a} = (a_1, \ldots, a_6). \end{equation*} $\mathbb{Z}_2 \wr S_6$ and $B_6$ are isomorphic, an isomorphism $T$ being the following: \begin{equation}\label{isomorfismo} [T(\boldsymbol{a},\pi)]_{ij}:=(-1)^{a_j}\delta_{i,\pi(j)}. \end{equation} It immediately follows that $|B_6| = 2^6 \, 6! = 46{,}080$. A set of generators is given by \begin{equation}\label{generatori} \alpha:=(\mathbf{0}, (1,2)), \quad \beta:=(\mathbf{0},(1,2,3,4,5,6)), \quad \gamma:=((0,0,0,0,0,1),\text{id}_{S_6}), \end{equation} which satisfy the relations \begin{equation*} \alpha^2 = \gamma^2 = \beta^6 = (\mathbf{0},\text{id}_{S_6}). \end{equation*} Finally, the function $\phi : \mathbb{Z}_2 \wr S_6 \rightarrow S_{12}$ defined by \begin{equation}\label{morfismo} \phi(\boldsymbol{a}, \pi)(k):= \left\{ \begin{aligned} &\pi(k)+6a_k \quad \text{if} \; 1\leq k \leq 6 \\ &\pi(k-6)+6(1-a_{k-6}) \quad \text{if} \; 7 \leq k \leq 12 \end{aligned} \right. \end{equation} is injective and maps any element of $\mathbb{Z}_2 \wr S_6$ to a permutation in $S_{12}$, thus providing a faithful permutation representation of $B_6$ as a subgroup of $S_{12}$.
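The multiplication rule of $\mathbb{Z}_2 \wr S_6$ and the isomorphism \eqref{isomorfismo} can be checked numerically. The following Python sketch (indices are 0-based and the helper names are ours) verifies on random elements that $T$ is a homomorphism whose image consists of signed permutation matrices:

```python
import random
import numpy as np

n = 6

def wreath_mult(x, y):
    """(a, pi)(b, sigma) = (a_sigma +_2 b, pi o sigma),
    with (a_sigma)_k = a_{sigma(k)} (0-indexed here)."""
    a, pi = x
    b, sigma = y
    a_sigma = tuple(a[sigma[k]] for k in range(n))
    return (tuple((a_sigma[k] + b[k]) % 2 for k in range(n)),
            tuple(pi[sigma[k]] for k in range(n)))

def T(x):
    """The isomorphism (isomorfismo): [T(a, pi)]_{ij} = (-1)^{a_j} delta_{i, pi(j)}."""
    a, pi = x
    M = np.zeros((n, n), dtype=int)
    for j in range(n):
        M[pi[j], j] = (-1) ** a[j]
    return M

random.seed(0)
def rand_elt():
    return (tuple(random.randint(0, 1) for _ in range(n)),
            tuple(random.sample(range(n), n)))

for _ in range(100):
    x, y = rand_elt(), rand_elt()
    # T is a homomorphism, and T(x) is an orthogonal signed permutation matrix
    assert (T(wreath_mult(x, y)) == T(x) @ T(y)).all()
    assert (T(x) @ T(x).T == np.eye(n, dtype=int)).all()
```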
Combining \eqref{isomorfismo} with the inverse of \eqref{morfismo} we get the function \begin{equation}\label{iso} \psi:= T \circ \phi^{-1} : S_{12} \rightarrow B_6 \end{equation} which can be used to map a permutation into an element of $B_6$. \subsection{Classification} \begin{figure} \caption{A planar representation of an icosahedral surface, showing our labelling convention for the vertices; the dots represent the locations of the symmetry axes corresponding to the generators of the icosahedral group and its subgroups. The kite highlighted is a fundamental domain of the icosahedral group.} \label{ico} \end{figure} In this subsection we classify the orthogonal crystallographic representations of the icosahedral group. We start by recalling a standard way to construct such a representation, following \cite{zappa}. We consider a regular icosahedron and we label each vertex by a number from 1 to 12, so that the vertex opposite to vertex $i$ is labelled by $i+6$ (see Figure \ref{ico}). This labelling induces a permutation representation $\sigma : \mathcal{I} \rightarrow S_{12}$ given by \begin{align*} \sigma(g_2) &= (1,6)(2,5)(3,9)(4,10)(7,12)(8,11), \\ \sigma(g_3) & = (1,5,6)(2,9,4)(7,11,12)(3,10,8). \end{align*} Using \eqref{iso} we obtain a representation $\hat{\mathcal{I}} : \mathcal{I} \rightarrow B_6$ given by \begin{equation}\label{gen} \hat{\mathcal{I}}(g_2) = \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{array} \right), \quad \hat{\mathcal{I}}(g_3) = \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{array} \right).
\end{equation} We see that $\chi_{\hat{\mathcal{I}}}(g_2) = -2$ and $\chi_{\hat{\mathcal{I}}}(g_3) = 0$, so that, by looking at the character table of $\mathcal{I}$, we have \begin{equation*} \chi_{\hat{\mathcal{I}}} = \chi_{T_1} + \chi_{T_2}, \end{equation*} which implies, using Maschke's theorem \cite{fulton}, that $\hat{\mathcal{I}} \simeq T_1 \oplus T_2$ in $GL(6,\mathbb{R})$. Therefore, the subgroup $\hat{\mathcal{I}}$ of $B_6$ is a crystallographic representation of $\mathcal{I}$. Before we continue, we recall the following \cite{humpreys}: \begin{defn}\label{conjugacy_class} Let $H$ be a subgroup of a group $G$. The \emph{conjugacy class of $H$ in $G$} is the set \begin{equation*} \mathcal{C}_G(H) := \{ gHg^{-1} : g \in G \}. \end{equation*} \end{defn} In order to find all the other crystallographic representations, we use the following scheme: \begin{enumerate} \item We generate $B_6$ as a subgroup of $S_{12}$ using \eqref{generatori} and \eqref{morfismo}; \item we list all the conjugacy classes of the subgroups of $B_6$ and find a representative for each class; \item we isolate the classes whose representatives have order $60$; \item we check if these representatives are isomorphic to $\mathcal{I}$; \item we map these subgroups of $S_{12}$ into $B_6$ using \eqref{iso} and isolate the crystallographic ones by checking the characters; denoting by $S$ the representative, we decompose $\chi_S$ as \begin{equation*} \chi_S = m_1 \chi_{A} + m_2 \chi_{T_1} + m_3 \chi_{T_2} + m_4 \chi_{G} + m_5 \chi_{H}, \quad m_i \in \mathbb{N}, \; i=1,\ldots,5. \end{equation*} Note that $S$ is crystallographic if and only if $m_2 = m_3 =1$ and $m_1=m_4=m_5 = 0$. \end{enumerate} We implemented steps 1-4 in \texttt{GAP} (see Appendix). There are three conjugacy classes of subgroups isomorphic to $\mathcal{I}$ in $B_6$.
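Step 5 amounts to computing the multiplicities $m_i = \langle \chi_S, \chi_i \rangle$ via the character inner product. A Python sketch of this computation for $\hat{\mathcal{I}}$ (all names are ours; the character of $\hat{\mathcal{I}}$ on the two five-fold classes is $\tau + (1-\tau) = 1$, since $\hat{\mathcal{I}} \simeq T_1 \oplus T_2$):

```python
import math

tau = (math.sqrt(5) + 1) / 2
sizes = [1, 12, 12, 15, 20]          # class sizes: E, 12 C5, 12 C5^2, 15 C2, 20 C3

irreps = {
    "A":  [1, 1, 1, 1, 1],
    "T1": [3, tau, 1 - tau, -1, 0],
    "T2": [3, 1 - tau, tau, -1, 0],
    "G":  [4, -1, -1, 0, 1],
    "H":  [5, 0, 0, 1, -1],
}

def multiplicities(chi):
    """m_i = <chi, chi_i> = (1/60) * sum over classes of size * chi * chi_i."""
    return {name: round(sum(k * a * b for k, a, b in zip(sizes, chi, row)) / 60)
            for name, row in irreps.items()}

# Character of \hat{I}: chi(E) = 6, chi(g_2) = -2, chi(g_3) = 0,
# and chi = 1 on both five-fold classes.
chi_I6 = [6, 1, 1, -2, 0]
print(multiplicities(chi_I6))  # {'A': 0, 'T1': 1, 'T2': 1, 'G': 0, 'H': 0}
```

The output confirms that $m_2 = m_3 = 1$ and $m_1 = m_4 = m_5 = 0$, i.e. that $\hat{\mathcal{I}}$ passes the crystallographic test of step 5.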
Denoting by $S_i = \langle g_{2,i},g_{3,i} \rangle$ the representatives of the classes returned by \texttt{GAP}, we have, using \eqref{iso}, \begin{equation*} \chi_{S_1}(g_{2,1}) = 2, \; \chi_{S_1}(g_{3,1}) = 3 \Rightarrow \chi_{S_1} = 2\chi_A + \chi_G \Rightarrow S_1 \simeq 2A \oplus G, \end{equation*} \begin{equation*} \chi_{S_2}(g_{2,2}) = -2, \; \chi_{S_2}(g_{3,2}) = 0 \Rightarrow \chi_{S_2} = \chi_{T_1} + \chi_{T_2} \Rightarrow S_2 \simeq T_1 \oplus T_2, \end{equation*} \begin{equation*} \chi_{S_3}(g_{2,3}) = 2, \; \chi_{S_3}(g_{3,3}) = 0 \Rightarrow \chi_{S_3} = \chi_A + \chi_H \Rightarrow S_3 \simeq A \oplus H. \end{equation*} Since $2A$ is decomposable into 2 one-dimensional representations, it is not, strictly speaking, 2D in the sense of Definition \ref{cryst}; as a consequence, only the second class contains the crystallographic representations of $\mathcal{I}$. A computation in \texttt{GAP} shows that the size of this class is 192. We thus have the following \begin{prop}\label{class} The crystallographic representations of $\mathcal{I}$ in $B_6$ form a unique conjugacy class in the set of all the classes of subgroups of $B_6$, and its size is equal to 192. \end{prop} We briefly point out that the other two classes of subgroups isomorphic to $\mathcal{I}$ in $B_6$ have an interesting algebraic interpretation. First of all, we observe that $B_6$ is an \emph{extension} of $S_6$, since according to \cite{humpreys}, \begin{equation*} B_6 / \mathbb{Z}_2^6 \simeq (\mathbb{Z}_2 \wr S_6) / \mathbb{Z}_2^6 \simeq S_6. \end{equation*} Following \cite{rotman}, it is possible to embed the symmetric group $S_5$ into $S_6$ in two different ways. The canonical embedding is achieved by fixing a point in $\{1, \ldots, 6 \}$ and permuting the other five, whereas the other embedding is by means of the so-called ``exotic map'' $\varphi : S_5 \rightarrow S_6$, which acts on the six 5-Sylow subgroups of $S_5$ by conjugation.
Since the icosahedral group is isomorphic to the alternating group $A_5$, which is a normal subgroup of $S_5$, the canonical embedding corresponds to the representation $2A\oplus G$ in $B_6$, while the exotic one corresponds to the representation $A \oplus H$. In what follows, we will consider the subgroup $\hat{\mathcal{I}}$ previously defined as a representative of the class of the crystallographic representations of $\mathcal{I}$, and denote this class by $\mathcal{C}_{B_6}(\hat{\mathcal{I}})$. Recalling that two representations $D^{(1)}$ and $D^{(2)}$ of a group $G$ are said to be \emph{equivalent} if they are related via a similarity transformation, i.e. there exists an invertible matrix $S$ such that \begin{equation*} D^{(1)} = S D^{(2)} S^{-1}, \end{equation*} an immediate consequence of Proposition \ref{class} is the following \begin{cor} Let $\mathcal{H}_1$ and $\mathcal{H}_2$ be two orthogonal crystallographic representations of $\mathcal{I}$. Then $\mathcal{H}_1$ and $\mathcal{H}_2$ are equivalent in $B_6$. \end{cor} We observe that the determinant of the generators of $\hat{\mathcal{I}}$ in \eqref{gen} is equal to 1, so that $\hat{\mathcal{I}} \in B_6^+:= \{ A \in B_6 : \text{det}A = 1 \}$. Proposition \ref{class} implies that all the crystallographic representations belong to $B_6^+$. The remarkable fact is that they split into \emph{two} different classes in $B_6^+$. To see this, we first need to generate $B_6^+$. In particular, with \texttt{GAP} we isolate the subgroups of index $2$ in $B_6$, which are normal in $B_6$, and then, using \eqref{iso}, we find the one whose generators have determinant equal to 1. In particular, we have \begin{align*} B_6^+ = & <(1,2,6,4,3)(7,8,12,10,9),(5,11)(6,12), \\ & (1,2,6,5,3)(7,8,12,11,9),(5,12,11,6)>.
\end{align*} We can then apply the same procedure to find the crystallographic representations of $\mathcal{I}$, and see that they split into two classes, each one of size 96. Again we can choose $\hat{\mathcal{I}}$ as a representative for one of these classes; a representative $\hat{\mathcal{K}}$ for the other one is given by \begin{equation}\label{class_2} \hat{\mathcal{K}} = \left< \left( \begin{array}{cccccc} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{array} \right), \left( \begin{array}{cccccc} 0 & 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{array} \right) \right>. \end{equation} We note that in the more general case of $\mathcal{I}_h$, we can construct the crystallographic representations of $\mathcal{I}_h$ starting from the crystallographic representations of $\mathcal{I}$. First of all, we recall that $\mathcal{I}_h = \mathcal{I} \times C_2$, where $C_2$ is the cyclic group of order 2. Let $\mathcal{H}$ be a crystallographic representation of $\mathcal{I}$ in $B_6$, and let $\Gamma = \{1,-1 \}$ be a one-dimensional representation of $C_2$. Then the representation $\hat{\mathcal{H}}$ given by \begin{equation*} \hat{\mathcal{H}} := \mathcal{H} \otimes \Gamma, \end{equation*} where $\otimes$ denotes the tensor product of matrices, is a representation of $\mathcal{I}_h$ in $B_6$ and it is crystallographic in the sense of Definition \ref{cryst} \cite{fulton}.
\subsection{Projection into the 3D space} \begin{table}[!t] \caption{Explicit forms of the IRs $T_1$ and $T_2$ with $\hat{\mathcal{I}} \simeq T_1 \oplus T_2$.} \begin{center} \begin{tabular}{c c c} Generator & Irrep $\Gamma_1$ & Irrep $\Gamma_2$ \\ \hline $g_2$ & $\frac{1}{2} \left( \begin{array}{ccc} \tau-1 & 1 & \tau \\ 1 & -\tau & \tau-1 \\ \tau & \tau-1 & -1 \end{array} \right)$ & $\frac{1}{2} \left( \begin{array}{ccc} \tau-1 & -\tau & -1 \\ -\tau & -1 & \tau-1 \\ -1 & \tau-1 & -\tau \end{array} \right)$ \\ $g_3$ & $\frac{1}{2} \left( \begin{array}{ccc} \tau & \tau-1 & 1 \\ 1-\tau & -1 & \tau \\ 1 & -\tau & 1-\tau \end{array} \right)$ & $\frac{1}{2} \left( \begin{array}{ccc} -1 & 1-\tau & -\tau \\ \tau-1 & \tau & -1 \\ \tau & -1 & 1-\tau \end{array} \right)$ \end{tabular} \end{center} \label{irreps} \end{table} We study in detail the projection into the physical space $E^{\parallel}$ using the methods described in Section \ref{proiezione}. Let $\hat{\mathcal{I}}$ be the crystallographic representation of $\mathcal{I}$ given in \eqref{gen}. Using \eqref{proj} with $n_i = 3$ and $|G| = |\mathcal{I}| = 60$ we obtain the following projection operators \begin{equation*} P_1 = \frac{1}{2\sqrt{5}}\left( \begin{array}{cccccc} \sqrt{5} & 1 & -1 & -1& 1 & 1 \\ 1 & \sqrt{5} &1 &-1 &-1 &1 \\ -1& 1 & \sqrt{5} & 1& -1& 1 \\ -1 &-1 &1 &\sqrt{5} & 1 &1 \\ 1& -1 & -1 & 1 & \sqrt{5} & 1 \\ 1 & 1 & 1 & 1 & 1 & \sqrt{5} \end{array} \right), \end{equation*} \begin{equation*} P_2 = \frac{1}{2\sqrt{5}}\left( \begin{array}{cccccc} \sqrt{5} & -1 & 1 & 1& -1 & -1 \\ -1 & \sqrt{5} &-1 & 1 & 1 & -1 \\ 1& -1 & \sqrt{5} & -1& 1& -1 \\ 1 & 1 &-1 &\sqrt{5} & -1 &-1 \\ -1 & 1 & 1 & -1 & \sqrt{5} & -1 \\ -1 & -1 & -1 & -1 & -1 & \sqrt{5} \end{array} \right) . \end{equation*} The rank of these operators is equal to 3.
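These properties can be verified numerically. Writing $P_1 = (\sqrt{5}\, I + S)/(2\sqrt{5})$, where $S$ is the sign matrix appearing off the diagonal above (so that $P_2 = (\sqrt{5}\, I - S)/(2\sqrt{5})$), the following Python sketch checks idempotency, rank and complementarity; the matrix name $S$ is ours.

```python
import numpy as np

s5 = np.sqrt(5)
# Off-diagonal sign pattern of P1 as displayed above
S = np.array([
    [ 0,  1, -1, -1,  1,  1],
    [ 1,  0,  1, -1, -1,  1],
    [-1,  1,  0,  1, -1,  1],
    [-1, -1,  1,  0,  1,  1],
    [ 1, -1, -1,  1,  0,  1],
    [ 1,  1,  1,  1,  1,  0],
], dtype=float)

P1 = (s5 * np.eye(6) + S) / (2 * s5)
P2 = (s5 * np.eye(6) - S) / (2 * s5)

assert np.allclose(P1 @ P1, P1)                 # P1 is idempotent (uses S^2 = 5 I)
assert np.allclose(P1 + P2, np.eye(6))          # the two projectors sum to the identity
assert np.linalg.matrix_rank(P1) == 3           # dim E_parallel = 3
assert np.allclose(P1 @ P2, np.zeros((6, 6)))   # complementary images
```

Idempotency reduces to the identity $S^2 = 5I$, which encodes the mutual angles of the six five-fold axes of the icosahedron.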
We choose as a basis of $E^{\parallel}$ and $E^{\perp}$ the following linear combinations of the columns $\mathbf{c}_{i,j}$ of the projection operators $P_i$, for $i = 1,2$ and $j =1, \ldots, 6$: \begin{equation*} \left( \underbrace{\frac{\mathbf{c}_{1,1}+\mathbf{c}_{1,5}}{2}, \frac{\mathbf{c}_{1,2}-\mathbf{c}_{1,4}}{2}, \frac{\mathbf{c}_{1,3}+\mathbf{c}_{1,6}}{2}}_{\text{basis of $E^{\parallel}$}}, \underbrace{\frac{\mathbf{c}_{2,1}-\mathbf{c}_{2,5}}{2}, \frac{\mathbf{c}_{2,2}+\mathbf{c}_{2,4}}{2},\frac{\mathbf{c}_{2,3}-\mathbf{c}_{2,6}}{2}}_ {\text{basis of $E^{\perp}$}} \right). \end{equation*} With a suitable rescaling, we obtain the matrix $R$ given by \begin{equation*} R = \frac{1}{\sqrt{2(2+\tau)}} \left( \begin{array}{cccccc} \tau & 1 & 0 & \tau & 0 & 1 \\ 0 & \tau & 1 & -1 & \tau & 0 \\ -1 & 0 & \tau & 0 & -1& \tau \\ 0 & -\tau & 1 & 1 & \tau & 0 \\ \tau & -1 & 0 & -\tau & 0 & 1 \\ 1 & 0 & \tau & 0 & -1 & -\tau \end{array} \right) . \end{equation*} The matrix $R$ is orthogonal and reduces $\hat{\mathcal{I}}$ as in \eqref{R}. In Table \ref{irreps} we give the explicit forms of the reduced representation. The matrix representation in $E^{\parallel}$ of $P_1$ is given by (see \eqref{inv}) \begin{equation*} \pi^{\parallel} = \frac{1}{\sqrt{2(2+\tau)}} \left( \begin{array}{cccccc} \tau & 0 & -1 & 0 & \tau & 1 \\ 1 & \tau & 0 & -\tau & -1 & 0 \\ 0 & 1 & \tau & 1 & 0 & \tau \end{array} \right). \end{equation*} The orbit $\{T_1(\pi^{\parallel}(\boldsymbol{e}_j)) \}$, where $\{\boldsymbol{e}_j, j = 1,\ldots, 6 \}$ is the canonical basis of $\mathbb{R}^6$, represents a regular icosahedron in 3D centered at the origin (\cite{senechal}, \cite{katz} and \cite{giuliana}). Let $\mathcal{K}$ be another crystallographic representation of $\mathcal{I}$ in $B_6$. By Proposition \ref{class}, $\mathcal{K}$ and $\hat{\mathcal{I}}$ are conjugate in $B_6$. Consider $M \in B_6$ such that $M \hat{\mathcal{I}} M^{-1} = \mathcal{K}$ and let $ S = MR$.
We have \begin{equation*} S^{-1} \mathcal{K} S = (MR)^{-1} \mathcal{K} (MR) = R^{-1} M^{-1} \mathcal{K} M R = R^{-1} \hat{\mathcal{I}} R = T_1 \oplus T_2. \end{equation*} Therefore it is possible, with a suitable choice of the reducing matrices, to reduce all the crystallographic representations of $\mathcal{I}$ in $B_6$ into the same irreps. \section{Subgroup structure}\label{grafi_section} \begin{table}[!t] \caption{Non-trivial subgroups of the icosahedral group: $\mathcal{T}$ stands for the tetrahedral group, $\mathcal{D}_{2n}$ for the dihedral group of order $2n$, and $C_n$ for the cyclic group of order $n$. } \begin{center} \begin{tabular}{c c c c} Subgroup & Generators & Relations & Order \\ \hline $\mathcal{T}$ & $g_2, g_{3d}$ & $g_2^2=g_{3d}^3=(g_2g_{3d})^3=e$ & 12 \\ $\mathcal{D}_{10}$ & $g_{2d},g_{5d}$ & $g_{2d}^2=g_{5d}^5=(g_{5d}g_{2d})^2=e$ & 10 \\ $\mathcal{D}_{6}$ & $g_{2d},g_3$ & $g_{2d}^2=g_3^3=(g_3g_{2d})^2=e$ & 6 \\ $C_5$ & $g_{5d}$ & $g_{5d}^5=e$ & 5 \\ $\mathcal{D}_4$ & $g_{2d},g_2$ & $g_{2d}^2=g_2^2=(g_2g_{2d})^2=e$ & 4 \\ $C_3$ & $g_3$ & $g_3^3=e$ & 3 \\ $C_2$ & $g_2$ & $g_2^2=e$ & 2 \\ \end{tabular} \end{center} \label{ico_sgp} \end{table} \begin{table}[!t] \caption{Permutation representations of the generators of the subgroups of the icosahedral group.} \begin{center} \begin{tabular}{l} \hline $\sigma(g_2) = (1,6)(2,5)(3,9)(4,10)(7,12)(8,11)$ \\ $\sigma(g_{2d}) = (1,12)(2,8)(3,4)(5,11)(6,7)(9,10)$ \\ $\sigma(g_3) = (1,5,6)(2,9,4)(7,11,12)(3,10,8)$ \\ $\sigma(g_{3d}) = (1,10,2)(3,5,12)(4,8,7)(6,9,11)$ \\ $\sigma(g_5) = (1,2,3,4,5)(7,8,9,10,11)$ \\ $\sigma(g_{5d}) = (1,10,11,3,6)(4,5,9,12,7)$ \\ \hline \end{tabular} \end{center} \label{perm_rep} \end{table} \begin{table}[!t] \caption{Order of the classes of subgroups of the icosahedral group in $\mathcal{I}$ and $B_6$.} \begin{center} \begin{tabular}{c c c} Subgroup & $|\mathcal{C}_{\mathcal{I}}(\mathcal{G})|$ &
$|\mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})|$ \\ \hline $\mathcal{T}$ & 5 & 480\\ $\mathcal{D}_{10}$ & 6 & 576\\ $\mathcal{D}_6$ & 10 & 960\\ $\mathcal{D}_4$ & 5 & 120\\ $C_5$ & 6 & 576\\ $C_3$ & 10 & 320\\ $C_2$ & 15 & 180\\ \hline \end{tabular} \end{center} \label{classi_sgp} \end{table} The non-trivial subgroups of $\mathcal{I}$ are listed in Table \ref{ico_sgp}, together with their generators \cite{hoyle}. Note that $\mathcal{T}$, $\mathcal{D}_{10}$ and $\mathcal{D}_6$ are maximal subgroups of $\mathcal{I}$, and that $\mathcal{D}_4$, $C_5$ and $C_3$ are normal subgroups of $\mathcal{T}$, $\mathcal{D}_{10}$ and $\mathcal{D}_6$, respectively (\cite{humpreys}, \cite{artin}). The permutation representations of the generators in $S_{12}$ are given in Table \ref{perm_rep} (see also Figure \ref{ico}). Since $\mathcal{I}$ is a small group, its subgroup structure can be easily obtained in \texttt{GAP} by computing explicitly all its conjugacy classes of subgroups. In particular, there are $7$ classes of non-trivial subgroups in $\mathcal{I}$: any subgroup $H$ of $\mathcal{I}$ has the property that, if $K$ is another subgroup of $\mathcal{I}$ isomorphic to $H$, then $H$ and $K$ are conjugate in $\mathcal{I}$ (this property is referred to as the ``friendliness'' of the subgroup $H$, \cite{soicher}). In other words, denoting by $n_{\mathcal{G}}$ the number of subgroups of $\mathcal{I}$ isomorphic to $\mathcal{G}$, i.e. \begin{equation}\label{ng} n_{\mathcal{G}} := | \{H < \mathcal{I} : H \simeq \mathcal{G} \}|, \end{equation} we have (cf. Definition \ref{conjugacy_class}) \begin{equation*} n_{\mathcal{G}} = |\mathcal{C}_{\mathcal{I}}(\mathcal{G})|. \end{equation*} In Table \ref{classi_sgp} we list the order of each class of subgroups in $\mathcal{I}$. Geometrically, different copies of $C_2$, $C_3$ and $C_5$ correspond to the different two-, three- and five-fold axes of the icosahedron, respectively.
In particular, each copy of $\mathcal{D}_{10}$ stabilises one of the 6 five-fold axes of the icosahedron, and each copy of $\mathcal{D}_6$ stabilises one of the 10 three-fold axes. Moreover, it is possible to inscribe 5 tetrahedra into a dodecahedron, and each copy of the tetrahedral group in $\mathcal{I}$ stabilises one of these tetrahedra.

\subsection{Subgroups of the crystallographic representations of $\mathcal{I}$}

Let $\mathcal{G}$ be a subgroup of $\mathcal{I}$. The function \eqref{iso} provides a representation of $\mathcal{G}$ in $B_6$, denoted by $\mathcal{K}_{\mathcal{G}}$, which is a subgroup of $\hat{\mathcal{I}}$. Let us denote by $\mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$ the conjugacy class of $\mathcal{K}_{\mathcal{G}}$ in $B_6$. The next lemma shows that this class contains all the subgroups of the crystallographic representations of $\mathcal{I}$ in $B_6$. \begin{lemma} Let $\mathcal{H}_i \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$ be a crystallographic representation of $\mathcal{I}$ in $B_6$, and let $\mathcal{K}_i \subseteq \mathcal{H}_i$ be a subgroup of $\mathcal{H}_i$ isomorphic to $\mathcal{G}$. Then $\mathcal{K}_i \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$. \end{lemma} \begin{proof} Since $\mathcal{H}_i \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$, there exists $g \in B_6$ such that $g\mathcal{H}_ig^{-1} = \hat{\mathcal{I}}$, and therefore $g\mathcal{K}_ig^{-1} = \mathcal{K}'$ is a subgroup of $\hat{\mathcal{I}}$ isomorphic to $\mathcal{G}$. Since all these subgroups are conjugate in $\hat{\mathcal{I}}$ (they are ``friendly'' in the sense of \cite{soicher}), there exists $h \in \hat{\mathcal{I}}$ such that $h\mathcal{K}'h^{-1} = \mathcal{K}_{\mathcal{G}}$. Thus $(hg)\mathcal{K}_i(hg)^{-1} = \mathcal{K}_{\mathcal{G}}$, implying that $\mathcal{K}_i \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$.
\end{proof} We next show that every element of $\mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$ is a subgroup of a crystallographic representation of $\mathcal{I}$. \begin{lemma}\label{lemma_sgp} Let $\mathcal{K}_i \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$. There exists $\mathcal{H}_i \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$ such that $\mathcal{K}_i$ is a subgroup of $\mathcal{H}_i$. \end{lemma} \begin{proof} Since $\mathcal{K}_i \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$, there exists $g \in B_6$ such that $g\mathcal{K}_ig^{-1} = \mathcal{K}_{\mathcal{G}}$. We define $\mathcal{H}_i :=g^{-1}\hat{\mathcal{I}} g$. It is immediate to see that $\mathcal{K}_i$ is a subgroup of $\mathcal{H}_i$. \end{proof} As a consequence of these Lemmata, $\mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$ contains all the subgroups of $B_6$ which are isomorphic to $\mathcal{G}$ \emph{and} are subgroups of a crystallographic representation of $\mathcal{I}$. The explicit forms of $\mathcal{K}_{\mathcal{G}}$ are given in the Appendix. We point out that it is possible to find subgroups of $B_6$ isomorphic to a subgroup $\mathcal{G}$ of $\mathcal{I}$ which are \emph{not} subgroups of any crystallographic representation of $\mathcal{I}$. For example, the following subgroup \begin{equation*} \hat{\mathcal{T}} = \left< \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right), \left( \begin{array}{cccccc} 0 & 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{array} \right) \right> \end{equation*} is isomorphic to the tetrahedral group $\mathcal{T}$; a computation in \texttt{GAP} shows that it is not a subgroup of any element of $\mathcal{C}_{B_6}(\hat{\mathcal{I}})$.
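One can also verify numerically that the two generators displayed above satisfy the defining relations of $\mathcal{T}$ from Table \ref{ico_sgp} and generate a group of order $12$. The following NumPy sketch (an illustration, not the \texttt{GAP} computation used in the text) enumerates the group by closure under multiplication:

```python
import numpy as np

# The two generators of the subgroup displayed above (exact integer matrices).
M1 = np.diag([1, -1, -1, -1, -1, 1])
M2 = np.array([[0, 0, -1, 0, 0, 0],
               [1, 0,  0, 0, 0, 0],
               [0, -1, 0, 0, 0, 0],
               [0, 0,  0, 0, 0, 1],
               [0, 0,  0, 1, 0, 0],
               [0, 0,  0, 0, 1, 0]])

# Defining relations of the tetrahedral group: g^2 = h^3 = (gh)^3 = e.
I6 = np.eye(6, dtype=int)
assert (M1 @ M1 == I6).all()
assert (np.linalg.matrix_power(M2, 3) == I6).all()
assert (np.linalg.matrix_power(M1 @ M2, 3) == I6).all()

# Generate the whole matrix group by closure and check that its order is 12.
elems = {I6.tobytes()}
frontier = [I6]
while frontier:
    new = []
    for A in frontier:
        for g in (M1, M2):
            B = A @ g
            if B.tobytes() not in elems:
                elems.add(B.tobytes())
                new.append(B)
    frontier = new
print(len(elems))  # 12, the order of the tetrahedral group
```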
Indeed, the two classes of subgroups, $\mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{T}})$ and $\mathcal{C}_{B_6}(\hat{\mathcal{T}})$, are disjoint. Using \texttt{GAP}, we compute the size of each $\mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$ (see Table \ref{classi_sgp}). We observe that $|\mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})| < |\mathcal{C}_{B_6}(\hat{\mathcal{I}})| \cdot n_{\mathcal{G}}$. This implies that crystallographic representations of $\mathcal{I}$ may share subgroups. In order to describe more precisely the subgroup structure of $\mathcal{C}_{B_6}(\hat{\mathcal{I}})$ we will use some basic results on graphs and their spectra, which we recall in the next section.

\subsection{Some basic results on graphs and their spectra}

In this section we recall, without proofs, some concepts and results from graph theory and spectral graph theory. Proofs and further results can be found, for example, in \cite{foulds} and \cite{doob}. Let $G$ be a graph with vertex set $V= \{v_1, \ldots, v_n \}$. The number of edges incident with a vertex $v$ is called the \emph{degree} of $v$. If all vertices have the same degree $d$, then the graph is called \emph{regular of degree $d$}. A \emph{walk of length $l$} is a sequence of $l$ consecutive edges, and it is called a \emph{path} if its edges are all distinct. A \emph{circuit} is a path starting and ending at the same vertex, and the \emph{girth} of the graph is the length of the shortest circuit. Two vertices $p$ and $q$ are \emph{connected} if there exists a path containing $p$ and $q$. The \emph{connected component} of a vertex $v$ is the set of all vertices connected to $v$. The \emph{adjacency matrix} $A$ of $G$ is the $n \times n$ matrix $A=(a_{ij})$ whose entries $a_{ij}$ are equal to $1$ if the vertex $v_i$ is adjacent to the vertex $v_j$, and $0$ otherwise.
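These notions, together with the walk-counting and regularity results recalled below, can be illustrated on a small example. In the following Python sketch the $4$-cycle is a toy graph, not one of the $\mathcal{G}$-graphs studied later:

```python
import numpy as np

# Adjacency matrix of the 4-cycle (vertices v1..v4, edges forming a square).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

assert (A == A.T).all() and np.trace(A) == 0   # symmetric, zero diagonal

# Walk counting: (A^k)_{ij} is the number of walks of length k from v_i to v_j.
A2 = np.linalg.matrix_power(A, 2)
assert A2[0, 0] == 2       # two closed walks of length 2 at each vertex

# Spectrum: the eigenvalues of A are real since A is real and symmetric.
eigs = np.linalg.eigvalsh(A)                    # [-2, 0, 0, 2] for the 4-cycle

# Regularity criterion: G is regular of degree r (its index) if and only if
# (1/n) * sum(lambda_i^2) = r; for a regular graph, the multiplicity of r
# equals the number of connected components.
n, r = A.shape[0], eigs.max()
assert np.isclose((eigs ** 2).sum() / n, r)     # the 4-cycle is 2-regular
assert np.sum(np.isclose(eigs, r)) == 1         # and connected
```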
It is immediate to see from its definition that $A$ is symmetric and $a_{ii}=0$ for all $i$, so that $\text{Tr}(A) = 0$. Since $A$ is real and symmetric, it is diagonalisable and all its eigenvalues are real. The \emph{spectrum of the graph} is the set of all the eigenvalues of its adjacency matrix $A$, usually denoted by $\sigma(A)$. \begin{teo}\label{walks} Let $A$ be the adjacency matrix of a graph $G$ with vertex set $V = \{v_1, \ldots, v_n \}$. Let $N_k(i,j)$ denote the number of walks of length $k$ starting at vertex $v_i$ and finishing at vertex $v_j$. We have \begin{equation*} N_k(i,j) = (A^k)_{ij}. \end{equation*} \end{teo} We recall that the \emph{spectral radius} of a matrix $A$ is defined by $\rho(A):= \max \{|\lambda| : \lambda \in \sigma(A) \}$. If $A$ is a non-negative matrix, i.e. if all its entries are non-negative, then $\rho(A) \in \sigma(A)$ \cite{johnson}. Since the adjacency matrix of a graph is non-negative, we have $|\lambda| \leq \rho(A) =: r$ for every $\lambda \in \sigma(A)$, so that $r$ is the largest eigenvalue. The number $r$ is called the \emph{index} of the graph $G$. \begin{teo}\label{regolare} Let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues of a graph $G$, and let $r$ denote its index. Then $G$ is regular of degree $r$ if and only if \begin{equation*} \frac{1}{n} \sum_{i=1}^n \lambda_i^2 = r. \end{equation*} Moreover, if $G$ is regular, the multiplicity of its index is equal to the number of its connected components. \end{teo}

\subsection{Applications to the subgroup structure}

Let $\mathcal{G}$ be a subgroup of $\mathcal{I}$. In the following we represent the subgroup structure of the class of crystallographic representations of $\mathcal{I}$ in $B_6$, $\mathcal{C}_{B_6}(\hat{\mathcal{I}})$, as a graph. We say that $\mathcal{H}_1, \mathcal{H}_2 \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$ are adjacent to each other (i.e.
connected by an edge) in the graph if there exists $P \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$ such that $P = \mathcal{H}_1 \cap \mathcal{H}_2$. We can therefore consider the graph $ G = (\mathcal{C}_{B_6}(\hat{\mathcal{I}}),E)$, where an edge $e \in E$ is of the form $(\mathcal{H}_1,\mathcal{H}_2)$. We call this graph the \emph{$\mathcal{G}$-graph}. Using \texttt{GAP}, we compute the adjacency matrices of the $\mathcal{G}$-graphs. The algorithms used are shown in the Appendix. The spectra of the $\mathcal{G}$-graphs are given in Table \ref{spectra_graphs}. We first notice that the adjacency matrix of the $C_5$-graph is the null matrix, implying that there are no two representations whose intersection is precisely a subgroup isomorphic to $C_5$ (rather than a subgroup merely containing $C_5$). We point out that, since the adjacency matrix of the $\mathcal{D}_{10}$-graph is not the null matrix, there exist crystallographic representations, say $\mathcal{H}_i$ and $\mathcal{H}_j$, sharing a maximal subgroup isomorphic to $\mathcal{D}_{10}$. Since $C_5$ is a (normal) subgroup of $\mathcal{D}_{10}$, $\mathcal{H}_i$ and $\mathcal{H}_j$ do share a $C_5$ subgroup, but also a $C_2$ subgroup. In other words, if two representations share a five-fold axis, then necessarily they also share a two-fold axis. \begin{table}[!t] \caption{Spectra of the $\mathcal{G}$-graphs for $\mathcal{G}$ a non-trivial subgroup of $\mathcal{I}$ and $\mathcal{G} = \{e\}$, the trivial subgroup consisting of only the identity element $e$. The numbers highlighted are the indices of the graphs, and correspond to their degrees $d_{\mathcal{G}}$.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{$\mathcal{T}$-Graph} & \multicolumn{2}{|c|}{$\mathcal{D}_{10}$-Graph} & \multicolumn{2}{|c|}{$\mathcal{D}_6$-Graph} & \multicolumn{2}{|c|}{$C_5$-graph} \\ \hline Eig. & Mult. & Eig. & Mult. & Eig. & Mult. & Eig. & Mult.
\\ $\mathbf{5}$ & 1 & $\mathbf{6}$ & 6 & $\mathbf{10}$ & 6 & $\mathbf{0}$ & 192\\ 3 & 45 & 2 & 90 & 2 & 90 & & \\ -3 & 45 & -2 & 90 & -2 & 90 & & \\ 1 & 50 & -6 & 6 & -10 & 6 & &\\ -1 & 50 & & & & & &\\ -5 & 1 & & & & & &\\ \hline \multicolumn{2}{|c|}{$\mathcal{D}_4$-graph} & \multicolumn{2}{|c|}{$C_3$-graph} & \multicolumn{2}{|c|}{$C_2$-graph} & \multicolumn{2}{|c|}{$\{ e \}$-graph} \\ \hline Eig. & Mult. & Eig. & Mult. & Eig. & Mult. & Eig. & Mult. \\ $\mathbf{30}$ & 1 & $\mathbf{20}$ & 2 & $\mathbf{60}$ & 2 & $\mathbf{60}$ & 1 \\ 18 & 5 & 4 & 90 & 4 & 90 & 12 & 5 \\ 12 & 5 & -4 & 100 & -4 & 90 & 4 & 90 \\ 6 & 15 & & & -12 & 10 & -4 & 90 \\ 2 & 45 & & & & & -12 & 5 \\ 0 & 31 & & & & & -60 & 1 \\ -2 & 30 & & & & & & \\ -4 & 45 & & & & & & \\ -8 & 15 & & & & & & \\ \hline \end{tabular} \end{center} \label{spectra_graphs} \end{table} A straightforward calculation based on Theorem \ref{regolare} leads to the following \begin{prop}\label{reg} Let $\mathcal{G}$ be a subgroup of $\mathcal{I}$. Then the corresponding $\mathcal{G}$-graph is regular. \end{prop} In particular, the degree $d_{\mathcal{G}}$ of each $\mathcal{G}$-graph is equal to the largest eigenvalue of the corresponding spectrum. As a consequence we have the following \begin{prop}\label{sgp_rep} Let $\mathcal{H}$ be a crystallographic representation of $\mathcal{I}$ in $B_6$. Then there are exactly $d_{\mathcal{G}}$ representations $\mathcal{K}_j \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$ such that \begin{equation*} \mathcal{H} \cap \mathcal{K}_j = P_j \quad \text{for some } P_j \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}}), \quad j = 1, \ldots, d_{\mathcal{G}}. \end{equation*} In particular, we have $d_{\mathcal{G}} = 5,6,10,0,30,20,60$ and $60$ for $\mathcal{G} = \mathcal{T},\mathcal{D}_{10},\mathcal{D}_6$, $C_5, \mathcal{D}_4, C_3, C_2$ and $\{e \}$, respectively.
\end{prop} In particular, this means that for any crystallographic representation of $\mathcal{I}$ there are precisely $d_{\mathcal{G}}$ other such representations which share a subgroup isomorphic to $\mathcal{G}$. In other words, we can associate to the class $\mathcal{C}_{B_6}(\hat{\mathcal{I}})$ the ``subgroup matrix'' $S$ whose entries are defined by \begin{equation*} S_{ij} = |\mathcal{H}_i \cap \mathcal{H}_j|, \qquad i,j =1,\ldots, 192. \end{equation*} The matrix $S$ is symmetric and $S_{ii} = 60$ for all $i$, since the order of $\mathcal{I}$ is 60. It follows from Proposition \ref{sgp_rep} that each row of $S$ contains $d_{\mathcal{G}}$ entries equal to $|\mathcal{G}|$. Moreover, a rearrangement of the columns of $S$ shows that the 192 crystallographic representations of $\mathcal{I}$ can be grouped into 12 sets of 16 such that any two representations in such a set of 16 share a $\mathcal{D}_4$-subgroup. This implies that the corresponding subgraph of the $\mathcal{D}_4$-graph is a \emph{complete graph}, i.e. every two distinct vertices are connected by an edge. From a geometric point of view, these 16 representations correspond to ``6-dimensional icosahedra''. This ensemble of 16 such icosahedra embedded into a six-dimensional hypercube can be viewed as a 6D analogue of the 3D ensemble of five tetrahedra inscribed into a dodecahedron, sharing pairwise a $C_3$-subgroup. Using Theorem \ref{regolare}, we notice that not all the graphs are connected. In particular, the $\mathcal{D}_{10}$- and the $\mathcal{D}_6$-graphs are made up of six connected components, whereas the $C_3$- and the $C_2$-graphs consist of two connected components. With \texttt{GAP}, we implemented a \emph{breadth-first search} (BFS) algorithm \cite{foulds}, which starts from a vertex $i$ and then ``scans'' all the vertices connected to it; this allows us to find the connected components of a given $\mathcal{G}$-graph (see Appendix).
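A minimal Python analogue of this BFS procedure may clarify the idea (the actual \texttt{GAP} implementation is Algorithm 4 in the Appendix; the adjacency matrix below is a toy example with two components, not a $\mathcal{G}$-graph):

```python
from collections import deque

def connected_component(adj, i):
    """Breadth-first search from vertex i on an adjacency matrix;
    returns the set of vertices connected to i."""
    n = len(adj)
    seen = {i}
    queue = deque([i])
    while queue:
        v = queue.popleft()
        for w in range(n):
            if adj[v][w] == 1 and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

# Toy graph with components {0, 1, 2} and {3, 4}.
adj = [[0, 1, 1, 0, 0],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 0, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 1, 0]]
print(sorted(connected_component(adj, 0)))  # [0, 1, 2]
print(sorted(connected_component(adj, 3)))  # [3, 4]
```

Repeating the search from a vertex not yet visited yields all components, exactly as in Algorithm 5 of the Appendix.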
We find that each connected component of the $\mathcal{D}_{10}$- and the $\mathcal{D}_6$-graphs is made up of $32$ vertices, while for the $C_3$- and $C_2$-graphs each component consists of $96$ vertices. For all the other subgroups, the corresponding $\mathcal{G}$-graph is connected, and its single connected component trivially contains all 192 vertices. We now consider in more detail the case when $\mathcal{G}$ is a maximal subgroup of $\mathcal{I}$. Let $\mathcal{H} \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$ and let us consider its vertex star in the corresponding $\mathcal{G}$-graph, i.e. \begin{equation}\label{vertex_star} V(\mathcal{H}) := \{ \mathcal{K} \in \mathcal{C}_{B_6}(\hat{\mathcal{I}}) : \mathcal{K} \; \text{is adjacent to} \; \mathcal{H} \}. \end{equation} A comparison of Tables \ref{ico_sgp} and \ref{spectra_graphs} shows that $d_{\mathcal{G}} = n_{\mathcal{G}}$ (i.e. the number of subgroups isomorphic to $\mathcal{G}$ in $\mathcal{I}$, cf. \eqref{ng}) and therefore, since the graph is regular, $|V(\mathcal{H})| = d_{\mathcal{G}} = n_{\mathcal{G}}$. This suggests that there is a one-to-one correspondence between elements of the vertex star of $\mathcal{H}$ and subgroups of $\mathcal{H}$ isomorphic to $\mathcal{G}$; in other words, if we fix any subgroup $P$ of $\mathcal{H}$ isomorphic to $\mathcal{G}$, then $P$ ``connects'' $\mathcal{H}$ with exactly one other representation $\mathcal{K}$. We thus have the following \begin{prop}\label{reps} Let $\mathcal{G}$ be a maximal subgroup of $\mathcal{I}$. Then for every $P \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$ there exist \emph{exactly} two crystallographic representations of $\mathcal{I}$, $\mathcal{H}_1, \mathcal{H}_2 \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$, such that $P = \mathcal{H}_1 \cap \mathcal{H}_2$. \end{prop} In order to prove it, we first need the following lemma: \begin{lemma}\label{triangle} Let $\mathcal{G}$ be a maximal subgroup of $\mathcal{I}$.
Then the corresponding $\mathcal{G}$-graph is triangle-free, i.e. it has no circuits of length three. \end{lemma} \begin{proof} Let $A_{\mathcal{G}}$ be the adjacency matrix of the $\mathcal{G}$-graph. By Theorem \ref{walks}, its third power $A_{\mathcal{G}}^3$ determines the number of walks of length 3, and in particular its diagonal entries, $(A^3_{\mathcal{G}})_{ii}$, for $i = 1, \ldots, 192$, correspond to the number of triangular circuits starting and ending in vertex $i$. A direct computation shows that $(A^3_{\mathcal{G}})_{ii} = 0$ for all $i$, thus implying the non-existence of triangular circuits in the graph. \end{proof} \begin{proof}[Proof of Proposition \ref{reps}] If $P \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$, then, using Lemma \ref{lemma_sgp}, there exists $\mathcal{H}_1 \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$ such that $P$ is a subgroup of $\mathcal{H}_1$. Let us consider the vertex star $V(\mathcal{H}_1)$. We have $|V(\mathcal{H}_1)| = d_{\mathcal{G}}$; we call its elements $\mathcal{H}_2, \ldots, \mathcal{H}_{d_{\mathcal{G}}+1}$. Let us suppose that $P$ is not a subgroup of any $\mathcal{H}_j$, for $j=2, \ldots, d_{\mathcal{G}}+1$. This implies that $P$ does not connect $\mathcal{H}_1$ with any of these $\mathcal{H}_j$. However, since $\mathcal{H}_1$ has exactly $n_{\mathcal{G}}$ different subgroups isomorphic to $\mathcal{G}$, at least two vertices in the vertex star, say $\mathcal{H}_2$ and $\mathcal{H}_3$, are connected to $\mathcal{H}_1$ by the same subgroup isomorphic to $\mathcal{G}$, which we denote by $Q$. Therefore we have \begin{equation*} Q = \mathcal{H}_1 \cap \mathcal{H}_2, \quad Q = \mathcal{H}_1 \cap \mathcal{H}_3 \Rightarrow Q = \mathcal{H}_2 \cap \mathcal{H}_3. \end{equation*} This implies that $\mathcal{H}_1$, $\mathcal{H}_2$ and $\mathcal{H}_3$ form a triangular circuit in the graph, which is a contradiction due to Lemma \ref{triangle}, hence the result is proved.
\end{proof} It is noteworthy that the situation in $B_6^+$ is different. If we denote by $X_1$ and $X_2$ the two disjoint classes of crystallographic representations of $\mathcal{I}$ in $B_6^+$ (cf. \eqref{class_2}), we can build, in the same way as described before, the $\mathcal{G}$-graphs for $X_1$ and $X_2$, for $\mathcal{G} = \mathcal{T}, \mathcal{D}_{10}$ and $\mathcal{D}_6$. The result is that the adjacency matrices of all these 6 graphs are the null matrix of dimension 96. This implies that these graphs have no edges, and so the representations in each class do not share any maximal subgroup of $\mathcal{I}$. As a consequence, we have the following: \begin{prop}\label{class2} Let $\mathcal{H},\mathcal{K} \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$ be two crystallographic representations of $\mathcal{I}$, and $P = \mathcal{H} \cap \mathcal{K}$, $P \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$, where $\mathcal{G}$ is a maximal subgroup of $\mathcal{I}$. Then $\mathcal{H}$ and $\mathcal{K}$ are not conjugate in $B_6^+$. In other words, the elements of $B_6$ which conjugate $\mathcal{H}$ with $\mathcal{K}$ are matrices with determinant equal to $-1$. \end{prop} We conclude by showing a computational method which combines the results of Propositions \ref{class} and \ref{sgp_rep}. We first recall the following \begin{defn} Let $H$ be a subgroup of a group $G$. The \emph{normaliser} of $H$ in $G$ is given by \begin{equation*} N_G(H) := \{ g \in G : gHg^{-1} = H \}. \end{equation*} \end{defn} \begin{cor} Let $\mathcal{H}$ and $\mathcal{K}$ be two crystallographic representations of $\mathcal{I}$ in $B_6$, and $P \in \mathcal{C}_{B_6}(\mathcal{K}_{\mathcal{G}})$ such that $P = \mathcal{H} \cap \mathcal{K}$.
Let \begin{equation*} A_{\mathcal{H},\mathcal{K}} = \{ M \in B_6 : M\mathcal{H} M^{-1} = \mathcal{K} \} \end{equation*} be the set of all the elements of $B_6$ which conjugate $\mathcal{H}$ with $\mathcal{K}$, and let $N_{B_6}(P)$ be the normaliser of $P$ in $B_6$. We have \begin{equation*} A_{\mathcal{H},\mathcal{K}} \cap N_{B_6}(P) \neq \varnothing. \end{equation*} In other words, it is possible to find a non-trivial element $M \in B_6$ in the normaliser of $P$ in $B_6$ which conjugates $\mathcal{H}$ with $\mathcal{K}$. \end{cor} \begin{proof} Let us suppose $A_{\mathcal{H},\mathcal{K}} \cap N_{B_6}(P) = \varnothing$. Then $M P M^{-1} \neq P$, for all $M \in A_{\mathcal{H},\mathcal{K}}$. This implies, since $M \mathcal{H} M^{-1} = \mathcal{K}$, that $P$ is not a subgroup of $\mathcal{K}$, which is a contradiction. \end{proof} We now give an explicit example. We consider the representation $\hat{\mathcal{I}}$ as in \eqref{gen}, and its subgroup $\mathcal{K}_{\mathcal{D}_{10}}$ (the explicit form is given in the Appendix). With \texttt{GAP}, we find the other representation $\mathcal{H}_0 \in \mathcal{C}_{B_6}(\hat{\mathcal{I}})$ such that $\mathcal{K}_{\mathcal{D}_{10}} = \hat{\mathcal{I}} \cap \mathcal{H}_0$. Its explicit form is given by \begin{equation*} \mathcal{H}_0 = \left< \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \end{array} \right), \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \end{array} \right) \right>.
\end{equation*} A direct computation shows that the matrix \begin{equation*} M = \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right) \end{equation*} belongs to $N_{B_6}(\mathcal{K}_{\mathcal{D}_{10}})$ and conjugates $\hat{\mathcal{I}}$ with $\mathcal{H}_0$. Note that $\det M = -1$.

\section{Conclusions}

In this work we explored the subgroup structure of the hyperoctahedral group. In particular we found the class of the crystallographic representations of the icosahedral group, whose order is 192. Any such representation, together with its corresponding projection operator $\pi^{\parallel}$, can be chosen to construct icosahedral quasicrystals via the cut-and-project method. We then studied in detail the subgroup structure of this class. For this, we proposed a method based on spectral graph theory and introduced the concept of the $\mathcal{G}$-graph, for a subgroup $\mathcal{G}$ of the icosahedral group. This allowed us to study the intersections and the subgroups shared by different representations. We have shown that, if we fix any representation $\mathcal{H}$ in the class and a maximal subgroup $P$ of $\mathcal{H}$, then there exists exactly one other representation $\mathcal{K}$ in the class such that $P = \mathcal{H} \cap \mathcal{K}$. As explained in the introduction, this can be used to describe transitions which keep the intermediate symmetry encoded by $P$. In particular, this result implies in this context that a transition from a structure arising from $\mathcal{H}$ via projection will result in a structure obtainable from $\mathcal{K}$ via projection if the transition has intermediate symmetry described by $P$.
Therefore, this setting is the starting point to analyse structural transitions between icosahedral quasicrystals, following the methods proposed in \cite{kramer}, \cite{katz} and \cite{paolo}, which we are planning to address in a forthcoming publication. These mathematical tools also have many applications in other areas. A prominent example is virology. Viruses package their genomic material into protein containers with regular structures that can be modelled via lattices and group theory. Structural transitions of these containers, which involve rearrangements of the protein lattices, are important in rendering certain classes of viruses infective. As shown in \cite{giuliana}, such structural transitions can be modelled using projections of 6D icosahedral lattices and their symmetry properties. The results derived here therefore have a direct application to this scenario, and the information on the subgroup structure of the class of crystallographic representations of the icosahedral group and their intersections provides information on the symmetries of the capsid during the transition.

\section{Appendix A}

In order to render this paper self-contained, we provide the character tables of the subgroups of the icosahedral group, following \cite{artin} and \cite{fulton}. \begin{itemize} \item Tetrahedral group $\mathcal{T}$: \begin{center} \begin{tabular}{l|c c c c } Irrep & $E$ & $4C_3$ & $4C_3^2$ & $3C_2$ \\ \hline A & 1 & 1 & 1 & 1 \\ \multirow{2}*{E} & 1 & $\omega$ & $\omega^2$ & 1 \\ & 1 & $\omega^2$ & $\omega$ & 1 \\ T & 3 & 0 & 0 & -1 \end{tabular} \end{center} where $\omega = e^{\frac{2\pi i}{3}}$.
\item Dihedral group $\mathcal{D}_{10}$: \begin{center} \begin{tabular}{l|c c c c } Irrep & $E$ & $2C_5$ & $2C_5^2$ & $5C_2$ \\ \hline $A_1$ & 1 & 1 & 1 & 1 \\ $A_2$ & 1 & 1 & 1 & -1 \\ $E_1$ & 2 & $\gamma$ & $\gamma'$ & 0 \\ $E_2$ & 2 & $\gamma'$ & $\gamma$ & 0 \end{tabular} \end{center} with $\gamma = 2\cos(\frac{2\pi}{5}) = \frac{\sqrt{5}-1}{2}$ and $\gamma' = 2\cos(\frac{4\pi}{5}) = -\frac{\sqrt{5}+1}{2}$. \item Dihedral group $\mathcal{D}_6$ (isomorphic to the symmetric group $S_3$): \begin{center} \begin{tabular}{l|c c c } Irrep & $E$ & $3C_2$ & $2C_3$ \\ \hline $A_1$ & 1 & 1 & 1 \\ $A_2$ & 1 & -1 & 1 \\ $E$ & 2 & 0 & -1 \end{tabular} \end{center} \item Cyclic group $C_5$: \begin{center} \begin{tabular}{l|c c c c c } Irrep & $e$ & $C_5$ & $C_5^2$ & $C_5^3$ & $C_5^4$ \\ \hline A & 1 & 1 & 1 & 1 & 1 \\ \multirow{2}*{$E_1$} & 1 & $\epsilon$ & $\epsilon^2$ & $\epsilon^{2*}$ & $\epsilon^*$ \\ & 1 & $\epsilon^*$ & $\epsilon^{2*}$ & $\epsilon^2$ & $\epsilon$ \\ \multirow{2}*{$E_2$} & 1 & $\epsilon^2$ & $\epsilon^*$ & $\epsilon$ & $\epsilon^{2*}$ \\ & 1 & $\epsilon^{2*}$ & $\epsilon$ & $\epsilon^*$ & $\epsilon^2$ \end{tabular} \end{center} where $\epsilon = e^{\frac{2\pi i}{5}}$. \item Dihedral group $\mathcal{D}_4$ (the Klein four-group): \begin{center} \begin{tabular}{l|c c c c} Irrep & $E$ & $C_{2x}$ & $C_{2y}$ & $C_{2z}$ \\ \hline $A$ & 1 & 1 & 1 & 1 \\ $B_1$ & 1 & 1 & -1 & -1 \\ $B_2$ & 1 & -1 & 1 & -1 \\ $B_3$ & 1 & -1 & -1 & 1 \end{tabular} \end{center} \item Cyclic group $C_3$: \begin{center} \begin{tabular}{l|c c c} Irrep & $E$ & $C_3$ & $C_3^2$ \\ \hline $A$ & 1 & 1 & 1 \\ \multirow{2}*{E} & 1 & $\eta$ & $\eta^2$ \\ & 1 & $\eta^2$ & $\eta$ \end{tabular} \end{center} where $\eta = e^{\frac{2\pi i}{3}}$.
\item Cyclic group $C_2$: \begin{center} \begin{tabular}{l|c c} Irrep & $E$ & $C_2$ \\ \hline $A$ & 1 & 1 \\ $B$ & 1 & -1 \end{tabular} \end{center} \end{itemize}

\section{Appendix B}

In this Appendix we show the explicit forms of $\mathcal{K}_{\mathcal{G}}$, the representations in $B_6$ of the subgroups of $\mathcal{I}$, together with their decompositions in $GL(6,\mathbb{R})$. \begin{equation*} \mathcal{K}_{\mathcal{T}} = \left< \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{array} \right), \left( \begin{array}{cccccc} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1\\ -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \end{array} \right) \right>, \end{equation*} \begin{equation*} \mathcal{K}_{\mathcal{D}_{10}} = \left< \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 \end{array} \right), \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0\\ -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{array} \right) \right>, \end{equation*} \begin{equation*} \mathcal{K}_{\mathcal{D}_{6}} = \left< \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 \end{array} \right), \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{array} \right) \right>, \end{equation*} \begin{equation*} \mathcal{K}_{C_5} = \left< \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0\\ -1 & 0 & 0 & 0 & 0 & 0
\\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{array} \right) \right>, \end{equation*} \begin{equation*} \mathcal{K}_{\mathcal{D}_{4}} = \left< \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 \end{array} \right), \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{array} \right) \right>. \end{equation*} \begin{equation*} \mathcal{K}_{C_3} = \left< \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{array} \right) \right>, \end{equation*} \begin{equation*} \mathcal{K}_{C_2} = \left< \left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{array} \right) \right>. \end{equation*} \begin{equation*} \mathcal{K}_{\mathcal{T}} \simeq 2T, \qquad \mathcal{K}_{\mathcal{D}_{10}} \simeq 2A_2 \oplus E_1 \oplus E_2, \qquad \mathcal{K}_{\mathcal{D}_6} \simeq 2A_2 \oplus 2E, \qquad \mathcal{K}_{\mathcal{C}_5} \simeq 2A \oplus E_1 \oplus E_2, \end{equation*} \begin{equation*} \mathcal{K}_{\mathcal{D}_4} \simeq 2B_1 \oplus 2B_2 \oplus 2B_3, \qquad \mathcal{K}_{\mathcal{C}_3} \simeq 2A \oplus 2E, \qquad \mathcal{K}_{\mathcal{C}_2} \simeq 2A \oplus 4B. \end{equation*} \section{Appendix C} In this Appendix we show our algorithms, which have been implemented in \texttt{GAP} and used in various sections of the paper. We list them with a number from 1 to 5. \begin{enumerate} \item Classification of the crystallographic representations of $\mathcal{I}$ (see Section \ref{cube_section}). The algorithm carries out steps 1-4 used to prove Proposition \ref{class}. 
\item Computation of the vertex star of a given vertex $i$ in the $\mathcal{G}$-graphs. In the following, $H$ stands for the class $\mathcal{C}_{B_6}(\hat{\mathcal{I}})$ of the crystallographic representations of $\mathcal{I}$, $i \in \{1, \ldots, 192 \}$ denotes a vertex in the $\mathcal{G}$-graph corresponding to the representation $H[i]$ and $n$ stands for the size of $\mathcal{G}$: we can use the size instead of the explicit form of the subgroup since, in the case of the icosahedral group, all the non-isomorphic subgroups have different sizes. \item Computation of the adjacency matrix of the $\mathcal{G}$-graph. \item BFS algorithm for the computation of the connected component of a given vertex $i$ of the $\mathcal{G}$-graph. \item Computation of \emph{all} connected components of a $\mathcal{G}$-graph. \end{enumerate} \noindent {\bf{Algorithm 1}}: \\ {\small{ \noindent \texttt{gap> B6:= Group([(1,2)(7,8),(1,2,3,4,5,6)(7,8,9,10,11,12),(6,12)]); \\ gap> C:= ConjugacyClassesSubgroups(B6); \\ gap> C60:= Filtered(C,x->Size(Representative(x))=60); \\ gap> Size(C60); \\ 3 \\ gap> s60:= List(C60,Representative); \\ gap> I:= AlternatingGroup(5); \\ gap> IsomorphismGroups(I,s60[1]); \\ $[ (2,4)(3,5), (1,2,3) ] -> [ (1,3)(2,4)(7,9)(8,10), (3,10,11)(4,5,9) ]$ \\ gap> IsomorphismGroups(I,s60[2]);\\ $[ (2,4)(3,5), (1,2,3) ] -> [ (1,2)(3,10)(4,9)(5,11)(6,12)(7,8), (1,2,4)(3,12,5)(6,11,9)(7,8,10) ]$ \\ gap> IsomorphismGroups(I,s60[3]); \\ $[ (2,4)(3,5), (1,2,3) ] -> [ (2,6)(4,11)(5,10)(8,12), (1,3,5)(2,4,6)(7,9,11)(8,10,12) ]$ \\ gap> CB6s60:= ConjugacyClassSubgroups(B6,s60[2]); \\ gap> Size(CB6s60); \\ 192}}}\\ \noindent {\bf{Algorithm 2}}: \\ {\small{ \noindent \texttt{gap> vertex\_star:=function(H,i,n) \\ > local j,R,S; \\ > R:=[]; \\ > for j in [1..Size(H)] do \\ > S:=Intersection(H[i],H[j]); \\ > if Size(S) = n then \\ > R:=Concatenation(R,[j]); \\ > fi; \\ > od; \\ > return R; \\ > end;}}} \\ \noindent {\bf{Algorithm 3}}: \\ {\small{ \noindent \texttt{
gap> adjacency\_matrix:=function(H,n) \\ > local i,j,C,A; \\ > A:=NullMat(Size(H),Size(H)); \\ > for i in [1..Size(H)] do \\ > C:=vertex\_star(H,i,n); \\ > for j in [1..Size(C)] do \\ > A[i][C[j]]:=1; \\ > od; \\ > od; \\ > return A; \\ > end;}}}\\ \noindent {\bf{Algorithm 4}}: \\ {\small{ \noindent \texttt{gap> connected\_component:=function(H,i,n) \\ > local R,S,T,j,k,C; \\ > R:=[i]; \\ > S:=[i]; \\ > while Size(S) <= Size(H) do \\ > T:=[]; \\ > for j in [1..Size(R)] do \\ > C:=vertex\_star(H,R[j],n); \\ > for k in [1..Size(C)] do \\ > if (C[k] in S) = false then \\ > Add(S,C[k]); \\ > T:=Concatenation(T,[C[k]]); \\ > fi; \\ > od; \\ > od; \\ > if T = [] then return S; \\ > else \\ > R:=T; \\ > fi; \\ > od; \\ > return S; \\ > end;}}} \\ \noindent {\bf{Algorithm 5}}: \\ {\small{ \noindent \texttt{gap> connected\_components:=function(H,n)\\ > local j,S,C;\\ > C:=[connected\_component(H,1,n)];\\ > S:=Flat(C);\\ > if Size(S) = Size(H) then return C;\\ > fi;\\ > for j in [1..Size(H)] do\\ > if (j in S) = false then\\ > C:=Concatenation(C,[connected\_component(H,j,n)]);\\ > S:=Flat(C);\\ > if Size(S) = Size(H) then return C;\\ > fi;\\ > fi;\\ > od;\\ > end;}}}\\ {\bf{Acknowledgements}}. We would like to thank Silvia Steila, Pierre-Philippe Dechant, Paolo Cermelli and Giuliana Indelicato for useful discussions, and Paolo Barbero and Alessandro Marchino for technical help. \end{document}
\begin{document} \title{On the Behavioral Consequences of Reverse Causality\thanks{ This paper cannibalizes and supersedes a previous paper titled \textquotedblleft A Simple Model of Monetary Policy under Phillips-Curve Causal Disagreements\textquotedblright . Financial support by ERC Advanced Investigator grant no. 692995 is gratefully acknowledged. I also thank Yair Antler, Tuval Danenberg, Nathan Hancart and Heidi Thysen for helpful comments.}} \author{Ran Spiegler\thanks{ Tel Aviv University, University College London and CFM. URL: http://www.tau.ac.il/\symbol{126}rani. E-mail: [email protected].}} \maketitle \begin{abstract} Reverse causality is a common causal misperception that distorts the evaluation of private actions and public policies. This paper explores the implications of this error when a decision maker acts on it and therefore affects the very statistical regularities from which he draws faulty inferences. Using a quadratic-normal parameterization and applying the Bayesian-network approach of Spiegler (2016), I demonstrate the subtle equilibrium effects of a certain class of reverse-causality errors, with illustrations in diverse areas: developmental psychology, social policy, monetary economics and IO. In particular, the decision context may protect the decision maker from his own reverse-causality error. That is, the cost of reverse-causality errors can be lower for everyday decision makers than for an outside observer who evaluates their choices. \pagebreak \end{abstract} \section{Introduction} Reverse causality is a ubiquitous error. Observing a correlation between two variables of interest, we often form an instinctive causal interpretation in one direction, yet true causation may go in the opposite direction. Reverse causality is often invoked as a warning to social scientists and other researchers, to beware of naive causal interpretations of correlational data.
For example, Harris (2011) famously criticized the developmental psychology literature for taking for granted that observed correlation between child personality and parental behavior reflected a causal link from the latter to the former. Harris argued that causality might go in the opposite direction: parental behavior could be a \textit{response} to the child's innate temperament. Clearly, when researchers and other outside observers interpret correlations under the sway of a reverse-causality error, their evaluation of private interventions or public policy may become distorted. But what happens when decision makers $act$ on a reverse-causality error, such that their resulting behavior affects the very statistical regularities from which they draw inferences through their faulty prism? For illustration, consider the developmental psychology example mentioned above, and embed it in the following decision context. A counselor observes a child's initial condition and chooses a therapy. These two factors cause the child's behavior. Parental behavior is a response to the child's behavior and the counselor's therapy. However, the counselor operates under the perception that parental behavior is an independent variable that joins the list of factors that cause the child's behavior. This difference can be represented by two directed acyclic graphs (DAGs), which conventionally represent causal models (Pearl (2009)): \begin{eqnarray} && \begin{array}{ccccc} \theta & \rightarrow & a & & \\ & \searrow & \downarrow & \searrow & \\ & & x & \rightarrow & y \end{array} \qquad \qquad \begin{array}{ccccc} \theta & \rightarrow & a & & \\ & \searrow & \downarrow & & \\ & & x & \leftarrow & y \end{array} \TCItag{Figure 1} \\ &&\quad \text{True model}\qquad \qquad \qquad \quad \text{Subjective model} \nonumber \end{eqnarray} In this diagram, $\theta $ represents initial conditions (e.g. 
family characteristics), $a$ represents the counselor's action, $x$ represents the child's behavior and $y$ represents the parent's behavior. The counselor's subjective causal model departs from the true model by inverting the link between $x$ and $y$ and severing the link from $a$ into $y$. That is, the counselor regards $y$ as an exogenous variable that causes $x$, whereas in fact it is an endogenous variable and $x$ is one of its direct causes. The diagram is paradigmatic, in the sense that it could fit many other real-life situations. In a monetary economics context, different specifications of the Phillips curve disagree over which of the two variables, inflation and employment, is a dependent variable and which is an explanatory variable. This can be viewed as a disagreement over the direction of causality (see Sargent (1999) and Cho and Kasa (2015) -- I will discuss this example in detail in Sections 2 and 3). In a social policy context, $\theta $ represents initial socioeconomic or demographic conditions, $a$ represents welfare policy, $x$ represents poverty levels or income inequality, and $y$ is a public-health indicator. According to this interpretation, public health is an objective consequence of income inequality or poverty, yet designers of social policy operate under the assumption that health is an exogenous, independent factor that affects the social outcome rather than being caused by it (for a survey of a literature that wrestles with the causal relation between income inequality and health, see Pickett and Wilkinson (2015)). I should emphasize that in neither of these examples do I take an empirical stand on whether the supposedly \textquotedblleft true model\textquotedblright\ is indeed true.
My only objective is to take familiar disagreements over the direction of causality in various contexts and place them in a $decision$ context, in which the objective data-generating process is affected by the decision maker's behavior, which in turn is a response to statistical inferences that reflect his causal error. In this way, reverse causality ceases to be the exclusive problem of outside observers who make scientific claims and becomes a payoff-relevant concern for decision makers with skin in the game. Is there a fundamental difference between the two scenarios? How does the decision context affect the magnitude of errors induced by reverse causality? To study these questions, I apply the modeling approach developed in Spiegler (2016,2020a), which borrows the formalism of Bayesian networks from the Statistics and AI literature (Cowell et al. (1999), Pearl (2009)) to analyze decision making under causal misperceptions. According to this approach, a decision maker (DM henceforth) fits a DAG-represented causal model to an objective joint distribution over the variables in his model. The DM best-replies to the possibly distorted belief that arises from this act of fitting a misspecified causal model to objective data. I apply this model to the specification of true and subjective causal models depicted in Figure 1. I employ a quadratic-normal parameterization, such that the DM's payoffs are given by a quadratic loss function of $a$ and $x$, and the joint distribution over the four variables is Gaussian. This parameterization helps in terms of analytic tractability and transparent interpretation of the results, which allows for crisp comparative statics. However, it is also appropriate because fitting a DAG-represented model to a Gaussian distribution is equivalent to OLS estimation of a recursive system of linear regressions (Koller and Friedman (2009, Ch. 7)). 
Causal interpretations of linear regressions permeate discussions of reverse causality in social-science settings. Assuming that the environment is Gaussian ensures that the linearity of the DM's model is not wrong per se -- the misspecification lies entirely in the underlying causal structure, which is represented by the DAG. The analysis of this simple model demonstrates the subtle equilibrium effects of decision making under reverse causality errors. One effect is that features of the model that would be irrelevant in a quadratic-normal model with rational expectations -- specifically, the variances of the noise terms in the equations for $x$ and $y$ -- play a key role when the DM exhibits the reverse-causality error given by Figure 1. The results also illuminate the difference between committing a reverse-causality error as a \textquotedblleft spectator\textquotedblright\ or as an \textquotedblleft actor\textquotedblright . A special case of the specification given by Figure 1 is that the action's objective effect on the outcome variables $x$ and $y$ is null. In this case, the DM can be viewed as a \textquotedblleft spectator\textquotedblright . This DM's reverse-causality error induces a wrong prediction of $x$ and he suffers a welfare loss as a result. Compare this with a DM who is an \textquotedblleft actor\textquotedblright , in the sense that his action has a non-null direct effect on $y$ (but still no effect on $x$). I show that when this effect $ fully$ offsets the direct effect of $x$ on $y$ (in the sense that $E(y\mid a,x)=x-a$), the DM's equilibrium strategy is as if he has rational expectations. This DM suffers no welfare loss as a result of his reverse-causality error. Thus, the decision context protects an \textquotedblleft actor\textquotedblright\ from his error in a way that it does not for a \textquotedblleft spectator\textquotedblright . 
The lesson is that in some situations, reverse causality may be less of a problem for DMs than for the scientists who analyze their behavior. \section{The Model} A DM observes an exogenous state of Nature $\theta \sim N(0,\sigma _{\theta }^{2})$ before taking a real-valued action $a$. The DM is an expected-utility maximizer with vNM utility function $u(a,x)=-(x-a)^{2}$, where $x$ is an outcome variable that is determined by $\theta $ and $a$ according to the equation \begin{equation} x=\theta -\gamma a+\varepsilon \label{eq x} \end{equation} Another outcome $y$ is then determined by $x$ and $a$ according to the equation \begin{equation} y=x-\lambda a+\eta \label{eq y} \end{equation} The parameters $\gamma ,\lambda $ are constants that capture the direct effects of $a$ on the outcome variables $x$ and $y$. Assume $\gamma ,\lambda \in \lbrack 0,1]$, such that these direct effects are of an \textquotedblleft \textit{offsetting}\textquotedblright\ nature. The terms $ \varepsilon \sim N(0,\sigma _{\varepsilon }^{2})$ and $\eta \sim N(0,\sigma _{\eta }^{2})$ are independently distributed noise variables. The ratio of the variances of these noise terms, denoted \begin{equation} \tau =\frac{\sigma _{\varepsilon }^{2}}{\sigma _{\eta }^{2}} \label{tau} \end{equation} will play an important role in the sequel. A (potentially mixed) strategy for the DM is a function that assigns a distribution over $a$ to every realization of $\theta $. Once we fix a strategy, we have a well-defined joint probability measure $p$ over all four variables $\theta ,a,x,y$, which can be written as a factorization of marginal and conditional distributions as follows:\footnote{ The factorization is invalid when some terms involve conditioning on zero probability events. 
Since this is not going to be a problem for us, I ignore the imprecision.} \begin{equation} p(\theta ,a,x,y)=p(\theta )p(a\mid \theta )p(x\mid \theta ,a)p(y\mid a,x) \label{BNfactorization} \end{equation} The term $p(a\mid \theta )$ represents the DM's strategy. This factorization reflects the causal structure that underlies $p$ and that can be described by the leftward DAG in Figure 1, which I denote $G^{\ast }$.\footnote{ Schumacher and Thysen (2020) study a principal-agent model in which the true causal model has the same form as $G^{\ast }$, although the nodes have different interpretations (in particular, the action variable corresponds to the initial node), and the agent's subjective causal model is quite different.} I interpret $p$ as a long-run distribution that results from many repetitions of the same decision problem. Our DM faces a one-shot decision problem, against the background of the long-run experience accumulated by many previous generations of DMs who faced the same one-shot problem (each time with a different, independent draw of the random variables $\theta ,\varepsilon ,\eta $). \noindent \textit{The correct-specification benchmark} \noindent Suppose the DM correctly perceives the true model -- i.e., he has \textquotedblleft rational expectations\textquotedblright . Then, he will choose $a$ to minimize \[ E[(x-a)^{2}\mid \theta ,a] \] where the expectation $E$ is taken w.r.t the objective conditional distribution $p_{G^{\ast }}(x\mid \theta ,a)\equiv p(x\mid \theta ,a)$. This means that we can plug (\ref{eq x}) into the objective function and use the fact that $\varepsilon $ is independently distributed with $E(\varepsilon )=0 $. Hence, the rational-expectations prediction is that the DM will choose \begin{equation} a_{G^{\ast }}(\theta )=\frac{\theta }{1+\gamma } \label{a_RE} \end{equation} Note that $y$ has no direct payoff relevance for the DM. 
Moreover, since $y$ does not affect the transmission from $\theta $ and $a$ to $x$, it is entirely irrelevant for the DM's decision, because $y$ is a consequence of $ x $ and $a$. Therefore, the constant $\lambda $ plays no role in the DM's rational action. Moreover, the variances of the noise terms $\varepsilon ,\eta $ are irrelevant for $a_{G^{\ast }}$. Now relax the assumption that the DM correctly perceives the true model. Instead, he believes that the distribution over $\theta ,a,x,y$ obeys a causal structure that is given by the rightward DAG in Figure 1, which I denote by $G$. Compared with $G^{\ast }$, the DAG $G$ inverts the causal link between $x$ and $y$, and also removes the link $a\rightarrow y$. This means that the DM regards $y$ as an exogenous direct cause of $x$, instead of the consequence of $x$ and $a$ that it actually is. The following are two examples of situations that fit this specification of the true and subjective DAGs $G^{\ast }$ and $G$. \noindent \textit{Example 2.1: Parenting} \noindent Here is a variant on the story presented in the Introduction. A parent observes characteristics of his child (possibly the child's home behavior in a previous period), captured by the state variable $\theta $. He then chooses how toughly to behave toward the child, as captured by the variable $a$. The child's resulting home behavior is captured by some index $ x$. The quality of the child's school interactions with teachers and peers, as measured by the index $y$, is a consequence of the child's and the parent's home behavior. However, the parent believes that school interactions are an independent driver of the child's home behavior, rather than a consequence of home interactions. \noindent \textit{Example 2.2: Quantity setting} \noindent A large firm observes a demand indicator $\theta $ before setting its production quantity $a$. The price $x$ is a function of demand and quantity. 
The variable $y$ represents a competitive fringe that reacts to the price and the production quantity. However, the firm believes that the market agents that constitute the competitive fringe are not price takers, but rather independent decision makers that affect the price. The quadratic loss function $u(x,a)=-(x-a)^{2}$ is not necessarily plausible in these examples. However, in both cases we can write down a plausible quadratic utility function (e.g. due to linear demand and constant marginal costs in Example 2.2), and then redefine $a,x,y$ via some linear transformation to obtain the quadratic loss specification, without any effect on our formal results (that is, the DM's action will be the same up to the above linear transformation). The reason is that under any quadratic utility function, the DM's optimal action is linear in $E_{G}(x\mid \theta ,a)$, which is his subjective expectation of $x$ conditional on $\theta $ and $a$. Following Spiegler (2016), I assume that given the long-run distribution $p$ , the DM forms a subjective belief, denoted $p_{G}$, by fitting his model $G$ to $p$ according to the Bayesian-network factorization formula. For an arbitrary DAG $R$ over some collection of variables $x_{1},...,x_{n}$, the formula is \begin{equation} p_{R}(x_{1},...,x_{n})=\dprod\limits_{i=1}^{n}p(x_{i}\mid x_{R(i)}) \label{bayesnetfactor} \end{equation} where $x_{M}=(x_{i})_{i\in M}$ and $R(i)$ represents the set of variables that are viewed as direct causes of $x_{i}$ according to $R$. Formula (\ref {BNfactorization}) was a special case of (\ref{bayesnetfactor}) for $ R=G^{\ast }$. Given our specification of the subjective DAG $G$, (\ref {bayesnetfactor}) reads as follows: \[ p_{G}(\theta ,a,x,y)=p(\theta )p(y)p(a\mid \theta )p(x\mid \theta ,a,y) \] The interpretation of this belief-formation model is as follows. The DM's DAG $G$ is a misspecified subjective model. 
The DM perceives the statistical regularities in his environment through the prism of his incorrect subjective model. In other words, he fits his model to the long-run statistical data (generated by the true distribution $p$), producing the subjective distribution $p_{G}$. A related interpretation is based on Esponda and Pouzo (2016). We can regard $G$ as a set of probability distributions -- i.e. all distributions $p^{\prime }$ for which $ p_{G}^{\prime }\equiv p^{\prime }$. The DM goes through a process of Bayesian learning, by observing a sequence of $i.i.d$ draws from $p$. The limit belief of this process is $p_{G}$ (see Spiegler (2020a)). The DM's subjective belief $p_{G}$ induces the following conditional distribution over $x$, given the DM's information and action: \begin{equation} p_{G}(x\mid \theta ,a)=\int_{y}p(y)p(x\mid \theta ,a,y) \label{p_G(x|theta,a)} \end{equation} The DM chooses $a$ to maximize his expected utility w.r.t this conditional belief, which means minimizing \begin{equation} \int_{x}p_{G}(x\mid \theta ,a)(x-a)^{2} \label{objective function} \end{equation} Spiegler (2016) demonstrates that in general, this behavioral model can be ill-defined because unlike $p(x\mid \theta ,a)$, the subjective conditional probability $p_{G}(x\mid \theta ,a)$ need not be invariant to the DM's strategy $(p(a\mid \theta ))$. To resolve this ambiguity, Spiegler (2016) introduces a notion of \textquotedblleft personal equilibrium\textquotedblright , such that the DM's subjective optimization is formally defined as an equilibrium concept (this is a hallmark of models of decision making under misspecified models -- see also Esponda and Pouzo (2016)). However, given our specification of $G$ and $G^{\ast }$, this subtlety is irrelevant.\footnote{ Specifically, the reason is that the payoff-relevant variables $\theta ,a,x$ form a clique in $G$, while $G$ treats $y$ as independent of $\theta ,a$. 
Spiegler (2016) provides a general condition for specifications of $G$ and $ G^{\ast }$ that allow the modeler to ignore personal-equilibrium effects.} Therefore, in what follows I analyze the DM's subjective optimization as such, without invoking the notion of personal equilibrium. The variation in Section 4.2 will force us to revisit this issue.$ $ \noindent \textit{Comment: The pure prediction case} \noindent When $\gamma =0$, the DM's action has no causal effect on $x$. In other words, the DM faces a pure prediction problem: his subjectively optimal action is equal to his subjective expectation of $x$ conditional on $ \theta $. Therefore, the link $a\rightarrow x$ can be omitted from $G^{\ast } $. This raises the following question: can we be equally cavalier about whether to include the link $a\rightarrow x$ in the subjective DAG $G$? Including the link means that the DM erroneously regards his action as a direct cause of $x$ (but he is open to the possibility that when estimated, the measured effect will be null). It turns out that the answer to our question is positive, in the following sense. When we remove the link $a\rightarrow x$ from $G$, we can no longer ignore the subtleties that called for treating subjective maximization as an equilibrium phenomenon. If we apply the concept of personal equilibrium, and impose in addition the requirement that the DM's equilibrium strategy is pure, then the model's prediction will be the same as when $G$ includes the link $a\rightarrow x$. If the relation between $a$ and $\theta $ were noisy -- e.g. by introducing a random shock to the DM's behavior, or an explicit preference shock -- this would cease to be true, and we would have to take a stand on whether $G$ includes the link $a\rightarrow x$ -- i.e. 
whether the DM understands that $a$ has no causal effect on $x$.$ $ \noindent \textit{Example 2.3: Inflation forecasting with a wrong Phillips curve} \noindent The following variation on Sargent's (1999) simplified version of the well-known monetary model due to Barro and Gordon (1983) fits the pure prediction case.\footnote{ This specification (with rational expectations) was also employed by Athey et al. (2005).} The variable $\theta $ represents a monetary quantity such as money growth, which determines inflation $x$ via (\ref{eq x}). Independently, the private sector (corresponding to the DM in our model) observes $\theta $ and forms an inflation forecast $a$. The variable $y$ represents employment, which is determined as a function of inflation and inflationary expectations via the Phillips curve (\ref{eq y}). The parameter $\lambda $ measures the extent to which anticipated inflation offsets the real effect of actual inflation; the case of $\lambda =1$ captures the \textquotedblleft new classical\textquotedblright\ assumption that only unanticipated inflation has real effects. The private sector's inflation forecast is based on a misspecified Phillips curve, which regards $y$ as an independent, explanatory variable and $x$ as the dependent variable. Sargent (1999) and Cho and Kasa (2015) refer to this inversion of the causal inflation-employment relation in terms of an econometric identification strategy, and dub it a \textquotedblleft Keynesian fit\textquotedblright . The key difference between these papers and the present example is that they assume that the private sector has rational expectations; it is the monetary authority (which chooses $\theta $ ) that operates under a misspecified model (Spiegler (2016) describes how to reformulate Sargent's example in the DAG language). In contrast, in the present example, it is the \textit{private sector} that bases its inflation forecasts on a wrong Phillips curve. 
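To make the belief-formation step concrete, here is a minimal numerical sketch (assuming \texttt{numpy}; the parameter values and the trembled strategy are illustrative and not taken from the paper). Since fitting a DAG to a Gaussian distribution amounts to OLS estimation of a recursive system of linear regressions, the DM's term $p(x\mid \theta ,a,y)$ can be estimated by a single regression, after which $y$ is integrated out as if it were exogenous:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300_000
gamma, lam = 0.5, 0.3            # illustrative direct effects
theta = rng.normal(0.0, 1.0, n)  # state of Nature
# An arbitrary strategy with a small tremble so that the OLS below is identified.
a = 0.4 * theta + 0.2 * rng.normal(0.0, 1.0, n)
x = theta - gamma * a + rng.normal(0.0, 1.0, n)   # equation for x (sigma_eps = 1)
y = x - lam * a + rng.normal(0.0, 1.0, n)         # equation for y (sigma_eta = 1)

# Fitting G to the long-run data: estimate p(x | theta, a, y) by OLS,
# then integrate y out using its marginal mean, as G prescribes.
X = np.column_stack([np.ones(n), theta, a, y])
c0, c1, c2, c3 = np.linalg.lstsq(X, x, rcond=None)[0]

def E_G_x(th, ac):
    """Subjective conditional expectation E_G(x | theta, a)."""
    return c0 + c1 * th + c2 * ac + c3 * y.mean()
```

With $\sigma _{\varepsilon }=\sigma _{\eta }$ (so $\tau =1$), the fitted coefficients on $\theta $ and $y$ come out close to $1/2$ each, in contrast to the true conditional expectation $E(x\mid \theta ,a)=\theta -\gamma a$, which puts full weight on $\theta $.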
\section{Analysis} The following is the main result of this paper.$ $ \begin{proposition} \label{mainresult}The DM's subjectively optimal strategy is \begin{equation} a_{G}(\theta )=\frac{\theta }{1+\gamma +\tau (1-\lambda )} \label{a(theta,u)} \end{equation} \end{proposition} \begin{proof} By assumption, the DM chooses $a$ to minimize (\ref{objective function}). Since $p_{G}$ is Gaussian, the conditional distribution $p_{G}(x\mid \theta ,a,y)$ can be written as a regression equation \begin{equation} x=c_{0}+c_{1}\theta +c_{2}a+c_{3}y+\psi \label{x_regression} \end{equation} where $c_{0},c_{1},c_{2},c_{3}$ are coefficients and $\psi \sim N(0,\sigma _{\psi }^{2})$ is an independent noise term. Since $G$ treats $y$ and $\psi $ as independently distributed error terms with $E(\psi )=0$, (\ref {x_regression}) implies \begin{equation} E_{G}(x\mid \theta ,a)=c_{0}+c_{1}\theta +c_{2}a+E(y) \label{EG(x|a,theta)} \end{equation} It also follows from (\ref{x_regression}) that (\ref{objective function}) can be rewritten as \begin{equation} \int_{y}p(y)E_{\psi }(c_{0}+c_{1}\theta +c_{2}a+c_{3}y-a+\psi )^{2} \label{objective_plug} \end{equation} where the expectation over $\psi $ is taken w.r.t the independent distribution $N(0,\sigma _{\psi }^{2})$. Note that the DM integrates over $y$ as if it is independent of $a$ because indeed, $G$ posits that $y\perp a$. This assumption is wrong, according to the true model $G^{\ast }$. 
Choosing $a$ to minimize (\ref{objective_plug}), and using the observation that $E(\psi )=0$, we obtain the following first-order condition: \[ (c_{2}-1)\cdot \lbrack c_{0}+c_{1}\theta +c_{2}a+c_{3}E(y)-a]=0 \] This can be written as \[ (c_{2}-1)\cdot \lbrack E_{G}(x\mid \theta ,a)-a]=0 \] Therefore, the DM's strategy in a linear personal equilibrium is: \begin{equation} a=E_{G}(x\mid \theta ,a)=\frac{c_{0}+c_{1}\theta +c_{3}E(y)}{1-c_{2}} \label{strategy} \end{equation} I will show that $c_{0}=E(y)=0$, derive $c_{1}$ and $c_{2}$ and show they are uniquely determined (and in particular that $c_{2}\neq 1$). To do so, let us derive \begin{equation} E_{G}(x\mid \theta ,a)=\int_{y}p(y)E(x\mid \theta ,a,y) \label{EG(x|theta,a)} \end{equation} Let us calculate $E(x\mid \theta ,a,y)$. Plugging (\ref{eq x}) in (\ref{eq y} ), we obtain \begin{equation} x=\theta -\gamma a+\varepsilon =y+\lambda a-\eta \label{x and y} \end{equation} Therefore, \begin{equation} E(x\mid \theta ,a,y)=E(\theta -\gamma a+\varepsilon \mid \theta ,a,y)=\theta -\gamma a+E(\varepsilon \mid \theta ,a,y) \label{E(x|theta,a,y)} \end{equation} To derive $E(\varepsilon \mid \theta ,a,y)$, rearrange (\ref{x and y}) to obtain \[ \varepsilon +\eta =y+(\lambda +\gamma )a-\theta \] Therefore, \[ E(\varepsilon \mid \theta ,a,y)=E(\varepsilon \mid \varepsilon +\eta =y+(\lambda +\gamma )a-\theta ) \] Since $\varepsilon \sim N(0,\sigma _{\varepsilon }^{2})$ and $\eta \sim N(0,\sigma _{\eta }^{2})$ are independent variables, we can use the standard signal extraction formula to obtain \begin{equation} E(\varepsilon \mid \theta ,a,y)=\beta \cdot (y+(\lambda +\gamma )a-\theta ) \label{E epsilon} \end{equation} where \begin{equation} \beta =\frac{\sigma _{\varepsilon }^{2}}{\sigma _{\varepsilon }^{2}+\sigma _{\eta }^{2}}=\frac{\tau }{1+\tau } \label{beta} \end{equation} Plugging (\ref{E epsilon}) in (\ref{E(x|theta,a,y)}), we obtain \[ E(x\mid \theta ,a,y)=(1-\beta )\theta +(\beta \lambda +\beta \gamma -\gamma )a+\beta 
y \] Plugging this expression in (\ref{EG(x|theta,a)}), we obtain \[ E_{G}(x\mid \theta ,a)=(1-\beta )\theta +(\beta \lambda +\beta \gamma -\gamma )a+\beta E(y) \] Plugging this expression in (\ref{strategy}), we obtain \[ a_{G}(\theta )=\frac{1-\beta }{1-\beta \lambda +\gamma (1-\beta )}\theta + \frac{\beta }{1-\beta \lambda +\gamma (1-\beta )}E(y) \] Plugging this into (\ref{eq x}) and then into (\ref{eq y}), and taking expectations, we obtain $E(y)=0$, such that $a_{G}(\theta )$ is given by (\ref {a(theta,u)}). We have thus pinned down $c_{0}=0$, $c_{1}=1-\beta $ and $ c_{2}=\beta \lambda -\gamma (1-\beta )$ in (\ref{strategy}). (The value of $ c_{3}$ is irrelevant because $E(y)=0$.) Since $\lambda ,\gamma \in \lbrack 0,1]$ and $\beta \in (0,1)$, $c_{2}\neq 1$, such that (\ref{strategy}) is well-defined and unique.$ $ \end{proof} The DM's subjectively optimal strategy $a_{G}(\theta )$ has several noteworthy features, in comparison with the correct-specification benchmark $ a_{G^{\ast }}(\theta )$.$ $ \noindent \textit{Sensitivity to the state of Nature }$\theta $ \noindent The coefficient of $\theta $ in the expression for $a_{G}(\theta )$ is lower than $1/(1+\gamma )$, which is the correct-specification benchmark value. This means that the DM's reverse-causality error results in diminished responsiveness to the state of Nature. The reason is that the DM erroneously believes that $\theta $ is not the only independent factor affecting $x$ (apart from $a$). The extent of this rigidity effect depends on $\tau $, as explained below.$ $ \noindent \textit{The relevance of }$\tau $ \noindent In the correct-specification benchmark, the variances of the error terms in the relations (\ref{eq x})-(\ref{eq y}) are irrelevant for the DM's behavior. This is a consequence of the quadratic-normal specification of the model. In contrast, the DM's reverse-causality error lends these variances a crucial role.
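As an independent check on Proposition \ref{mainresult} (a sketch, not the paper's code; parameter values, the tremble, and the function name are illustrative assumptions), one can simulate the environment, let the DM repeatedly refit his subjective model $G$ by OLS and best-respond via $a=E_{G}(x\mid \theta ,a)$, and compare the resulting slope of $a_{G}(\theta )$ with the closed form:

```python
import numpy as np

def subjective_slope(gamma, lam, sig_eps, sig_eta, n=500_000, rounds=5, seed=1):
    """Iterate: simulate long-run data under the current strategy, fit the DM's
    model G by regressing x on (theta, a, y), and best-respond via
    a = E_G(x | theta, a); return the limiting slope of a_G(theta)."""
    rng = np.random.default_rng(seed)
    k = 0.0                                            # strategy: a = k * theta
    for _ in range(rounds):
        theta = rng.normal(0.0, 1.0, n)
        a = k * theta + 0.2 * rng.normal(0.0, 1.0, n)  # tremble for identification
        x = theta - gamma * a + rng.normal(0.0, sig_eps, n)
        y = x - lam * a + rng.normal(0.0, sig_eta, n)
        X = np.column_stack([np.ones(n), theta, a, y])
        c0, c1, c2, c3 = np.linalg.lstsq(X, x, rcond=None)[0]
        k = c1 / (1.0 - c2)                            # slope of the best response
    return k

gamma, lam = 0.5, 0.3
tau = 1.0                                              # sigma_eps^2 / sigma_eta^2
k_sim = subjective_slope(gamma, lam, 1.0, 1.0)
k_prop = 1.0 / (1.0 + gamma + tau * (1.0 - lam))       # Proposition 1 slope
```

Because the fitted coefficients are invariant to the (trembled) strategy, the iteration settles essentially after one round, and the simulated slope matches $1/(1+\gamma +\tau (1-\lambda ))$ up to sampling error.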
When $\tau =\sigma _{\varepsilon }^{2}/\sigma _{\eta }^{2}$ is small -- corresponding to the case in which the equation for $x$ is precise relative to the equation for $y$ -- $a_{G}(\theta )$ is close to the correct-specification benchmark $a_{G^{\ast }}(\theta )$. In contrast, when $ \tau $ is large, the departure of $a_{G}(\theta )$ from $a_{G^{\ast }}(\theta )$ is large. In particular, in the $\tau \rightarrow \infty $ limit, $a_{G}(\theta )$ is entirely unresponsive to $\theta $. The intuition for this effect is as follows. When forming the belief $ E_{G}(x\mid \theta ,a)$, the DM integrates over $y$ as if it is an exogenous, independent variable (this is the DM's basic causal error) and estimates the conditional expectation $E(x\mid \theta ,a,y)$ for each value of $y$. This conditional expectation represents a signal extraction problem, and therefore takes into account the relative precision of the equations for $x$ and $y$. When $\tau $ is small, $y$ is uninformative of $x$ relative to $ \theta $, and therefore the conditional expectation $E(x\mid \theta ,a,y)$ places a low weight on $y$. As a result, the DM's erroneous treatment of $y$ does not matter much, such that the DM's belief ends up being approximately correct. In contrast, when $\tau $ is large, $\theta $ is uninformative of $ x $ relative to $y$, and therefore the conditional expectation $E(x\mid \theta ,a,y)$ places a large weight on $y$. Since the DM regards $y$ as an exogenous variable that is independent of $\theta $, the DM ends up attaching low weight to the realized value of $\theta $ when evaluating $ E_{G}(x\mid \theta ,a)$.$ $ \noindent \textit{The role of }$\lambda $ \noindent Recall that the parameter $\lambda $ measures the direct effect of $a$ on $y$, which is the outcome variable that the DM erroneously regards as an independent cause of $x$.
The case of $\lambda =0$ corresponds to a DM who is a \textquotedblleft spectator\textquotedblright\ as far as the outcome variable $y$ is concerned. At the other extreme, the case of $ \lambda =1$ corresponds to a DM whose action fully offsets the effect of $x$ on $y$. It is easy to see from (\ref{a(theta,u)}) that the departure of the DM's subjectively optimal action from the correct-specification benchmark becomes smaller as $\lambda $ grows larger. Accordingly, the DM's welfare loss due to his reverse-causality error becomes smaller. When $\lambda =1$, the DM's action $coincides$ with the correct-specification benchmark. In this case, the DM's reverse-causality error inflicts no welfare loss. In other words, the fact that the DM is not a spectator but an actor who influences $y$ protects him from the consequences of his erroneous view of $y$ as a cause of $x$. The intuition for this effect is as follows. When $\lambda =1$, $y=x-a+\eta $ . This means that given $\theta $, \[ y=x-E_{G}(x\mid a,\theta )+\eta \] In other words, $y$ is equal to the difference between the realization of $x$ and its point forecast, plus independent noise. The point forecast can be viewed as an OLS estimate based on some linear regression (where $x$ is regressed on $\theta $, $a$ and $y$, and then $y$ is integrated out). From this point of view, $y$ is an OLS residual (plus independent noise). By definition, this residual is independent of the regression's variables, and therefore its incorrect inclusion has no distorting effect on the estimation of $x$. Consider the pure-prediction case of $\gamma =0$, and let us revisit Example 2.3, which concerns inflation forecasts based on a wrong Phillips curve that inverts the direction of causality between inflation and employment. Recall that the case of $\lambda =1$ corresponds to the \textquotedblleft new classical\textquotedblright\ monetary theory that anticipated inflation has no real effects. 
As it turns out, in this case the private sector forms correct inflation forecasts despite its wrong model. A discrepancy with the rational-expectations forecast -- in the direction of more rigid forecasts that only partially react to fluctuations in $\theta $ -- arises when $\lambda <1$ -- i.e., when anticipated inflation has real effects.

\section{Two Variations}

This section explores two variations on the basic model of Section 2. Each variation relaxes one of the twin assumptions that $G$ inverts the link between $x$ and $y$ and treats $y$ as an exogenous variable.

\subsection{Belief in Exogeneity without Reverse Causality}

Let the DM's subjective DAG be the rightward DAG in Figure 1, and replace the true model $G^{\ast }$ (given by the leftward DAG in Figure 1) with the following DAG $G^{\ast \ast }$:
\[
\begin{array}{ccccc}
\theta & \rightarrow & a & & \\
& \searrow & \downarrow & \searrow & \\
& & x & \leftarrow & y
\end{array}
\]
Specifically, the equations for $x$ and $y$ are
\begin{eqnarray*}
y &=&\delta a+\eta \\
x &=&\theta -\kappa a+\alpha y+\varepsilon
\end{eqnarray*}
where $\alpha ,\delta ,\kappa $ are constants in $(0,1)$, and $\varepsilon \sim N(0,\sigma _{\varepsilon }^{2})$ and $\eta \sim N(0,\sigma _{\eta }^{2})$ are independently distributed noise variables. The DM's subjective model $G$ departs from $G^{\ast \ast }$ by removing the link $a\rightarrow y$, but otherwise the DAGs are identical. Thus, the only causal misperception that $G$ embodies relative to the true model $G^{\ast \ast }$ is a belief that $y$ is an exogenous, independent variable. The following example provides an illustration of this specification of $G$ and $G^{\ast \ast }$.
\noindent \textit{Example 4.1: Public health}

\noindent Suppose that $\theta $ represents initial conditions of a public health situation; the DM is a public-health authority and $a$ represents the intensity of some mitigating public-health measure; $y$ represents the population's behavioral personal-safety response (higher $y$ corresponds to more lax behavior); and $x$ represents the eventual public health outcome. The authority's error is that it takes the likelihood of various scenarios of the population's behavior as given, without taking into account the fact that this behavior responds to the intensity of the public-health measure. We will now see that although this error has non-null behavioral implications, those are quite different from what we saw in Section 3.

As in the main model, the DM chooses $a$ such that
\[
a=E_{G}(x\mid \theta ,a)=\int_{y}p(y)E(x\mid \theta ,a,y)
\]
The difference is that now, $E(x\mid \theta ,a,y)$ is more straightforward to calculate because $x$ is conditioned on its actual direct causes:
\[
E(x\mid \theta ,a,y)=\theta -\kappa a+\alpha y
\]
I take it for granted that $E(y)=0$ (as is the case when $\theta $ has zero mean, since then $E(a)=0$ and hence $E(y)=\delta E(a)=0$). Therefore,
\[
a=\theta -\kappa a
\]
such that
\[
a_{G}(\theta )=\frac{\theta }{1+\kappa }
\]
By comparison, the correct-specification action would satisfy
\[
a=E(x\mid \theta ,a)=\theta -\kappa a+\alpha \delta a
\]
such that
\[
a_{G^{\ast \ast }}(\theta )=\frac{\theta }{1+\kappa -\alpha \delta }
\]
We see that as in the model of Section 2, treating $y$ as an independent exogenous variable leads to a more rigid response to $\theta $. Here the reason is that since $E(y)=0$, the average unconditional effect of $y$ on $x$ is null, whereas conditional on $a>0$ it is positive. Therefore, failing to treat $y$ as an intermediate consequence of $a$ leads the DM to neglect a causal channel that affects $x$, which means that he ends up underestimating the total effect of $a$ on $x$.
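The two fixed points just derived can be checked numerically. The sketch below uses illustrative parameter values (the constants $\kappa ,\alpha ,\delta $ are not pinned down in the text) and iterates each best-response equation to its fixed point:

```python
# Numerical check of Example 4.1's two fixed points, with illustrative
# (assumed) parameter values for kappa, alpha and delta.
kappa, alpha, delta = 0.4, 0.5, 0.6
theta = 1.0

a = 0.0
for _ in range(200):
    a = theta - kappa * a                      # DM's equation: a = E_G(x | theta, a)

b = 0.0
for _ in range(200):
    b = theta - kappa * b + alpha * delta * b  # correct-specification equation

# a converges to theta/(1+kappa) and b to theta/(1+kappa-alpha*delta); a < b,
# i.e., the DM's response to theta is more rigid than the benchmark.
print(round(a, 6), round(b, 6))
```

Both iterations converge geometrically since the relevant contraction factors ($\kappa $ and $|\alpha \delta -\kappa |$) are below one.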
However, unlike the basic model of Section 2, the variances of the noise terms play no role in the solution: a change in the precision of the equations for $x$ or $y$ will have no effect on the outcome. Finally, unlike the basic model, the decision context does not protect the DM from his own error. Indeed, a larger $\delta $ only $magnifies$ the discrepancy between $a_{G}(\theta )$ and $a_{G^{\ast \ast }}(\theta )$. The conclusion is that the mere failure to recognize the endogeneity of $y$ leads to very different effects than when this failure is combined with the reverse-causality error.

\subsection{Reverse Causality without Belief in Exogeneity}

In this subsection I consider a variation on the model that retains the reverse-causality aspect of $G$, while relaxing the assumption that the DM regards $y$ as an independent exogenous variable. Thus, assume that the true DAG is $G^{\ast }$, as given by Figure 1, while the DM's subjective DAG $G$ is
\begin{equation}
\begin{array}{ccccc}
\theta & \rightarrow & a & & \\
& \searrow & \downarrow & \searrow & \\
& & x & \leftarrow & y
\end{array}
\tag{Figure 2}
\end{equation}
The subjective model departs from the true model by inverting the link between $x$ and $y$, but otherwise it is identical to $G^{\ast }$. The DM's misperception can be viewed in terms of the conditional-independence properties that it gets wrong. While $G$ postulates that $y\perp \theta \mid a$, $G^{\ast }$ violates this property because of the causal channel from $\theta $ to $y$ that passes through $x$.

\noindent \textit{Example 4.2: More parenting}

\noindent Suppose that $\theta $ represents an adolescent's initial conditions. The DM is the adolescent's parent. The variable $y$ represents the amount of time that the adolescent spends online, and $a$ represents the intensity of the parent's attempts to limit this online exposure. The variable $x$ represents the adolescent's mental health.
Thus, according to the true model, online exposure is not a cause but a consequence of mental health. The parent's intervention may also have a direct effect on mental health ($x$), possibly because it includes spending more time with the child, independently of the activity it substitutes away from. The parent, however, believes that online exposure has a direct causal effect on the adolescent's mental health.

Let us analyze the DM's subjectively optimal strategy under this specification of $G$. The DM chooses $a$ to minimize
\begin{equation}
\int_{x}p_{G}(x\mid \theta ,a)(x-a)^{2}=\int_{y}p(y\mid a)\int_{x}p(x\mid \theta ,a,y)(x-a)^{2}  \label{extended pG}
\end{equation}
Note that the conditional probability term $p(y\mid a)$ is not invariant to the DM's strategy $(p(a\mid \theta ))$. Therefore, unlike the previous specifications of the true and subjective DAGs, here we cannot define the DM's subjective optimization unambiguously. Let us apply the notion of personal equilibrium as in Spiegler (2016). A DM's strategy $(p(a\mid \theta ))$ is a personal equilibrium if it always prescribes subjectively optimal actions with respect to the belief $(p_{G}(x\mid \theta ,a))$, which in turn is calculated taking the DM's strategy as given. I now analyze personal equilibria for this specification.\footnote{In general, this definition requires trembles in order to be fully precise. This subtlety is irrelevant for our present purposes.}

Using essentially the same argument as in the proof of Proposition \ref{mainresult}, the DM's strategy satisfies:
\begin{equation}
a=E_{G}(x\mid \theta ,a)  \label{a=EG}
\end{equation}
where
\[
E_{G}(x\mid \theta ,a)=\int_{y}p(y\mid a)E(x\mid \theta ,a,y)
\]
Calculating $E(x\mid \theta ,a,y)$ proceeds exactly as in the proof of Proposition \ref{mainresult} because the true process that links these four variables is the same.
Therefore, \[ E(x\mid \theta ,a,y)=(1-\beta )\theta +(\beta \lambda +\beta \gamma -\gamma )a+\beta y \] where $\beta $ is given by (\ref{beta}). It follows that \begin{eqnarray*} E_{G}(x &\mid &\theta ,a)=\int_{y}p(y\mid a)[(1-\beta )\theta +(\beta \lambda +\beta \gamma -\gamma )a+\beta y] \\ &=&(1-\beta )\theta +(\beta \lambda +\beta \gamma -\gamma )a+\beta \int_{y}p(y\mid a)y \end{eqnarray*} Our task now is to calculate the last term of this expression: First, plugging (\ref{eq x})-(\ref{eq y}), we obtain \begin{eqnarray*} \beta \int_{y}p(y &\mid &a)y=\beta \int_{\theta ^{\prime }}p(\theta ^{\prime }\mid a)\int_{y}p(y\mid \theta ^{\prime },a)y \\ &=&\beta \int_{\theta ^{\prime }}p(\theta ^{\prime }\mid a)[\theta ^{\prime }-\gamma a-\lambda a] \\ &=&\beta E(\theta ^{\prime }\mid a)-\beta \left( \gamma +\lambda \right) a \end{eqnarray*} where $E(\theta ^{\prime }\mid a)$ is determined by the DM's equilibrium strategy, which we have yet to derive. Plugging this in the expression for $E_{G}(x\mid \theta ,a)$, we obtain \begin{eqnarray*} E_{G}(x &\mid &\theta ,a)=(1-\beta )\theta +(\beta \lambda +\beta \gamma -\gamma -\beta \gamma -\beta \lambda )a+\beta E(\theta ^{\prime }\mid a) \\ &=&(1-\beta )\theta -\gamma a+\beta E(\theta ^{\prime }\mid a) \end{eqnarray*} Plugging in (\ref{a=EG}) and rearranging, we obtain \[ \theta =\frac{(1+\gamma )a-\beta E(\theta ^{\prime }\mid a)}{1-\beta } \] This equation defines $\theta $ as a deterministic function $f$ of $a$, such that $E(\theta ^{\prime }\mid a)=f(a)$. It follows that \[ f(a)=\frac{(1+\gamma )a-\beta f(a)}{1-\beta } \] or \[ f(a)=(1+\gamma )a \] Inverting the function, we obtain \[ f^{-1}(\theta )=\frac{\theta }{1+\gamma } \] which is the DM's unique personal equilibrium strategy. Thus, when the DM correctly perceives that $a$ has a direct causal effect on $y$, the fact that he inverts the causal link between $x$ and $y$ is immaterial for his behavior, which coincides with the correct-specification benchmark. 
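As a sanity check on this derivation, the weight $\beta $ can be recovered by simulation. The sketch below takes the structural equations $x=\theta -\gamma a+\varepsilon $ and $y=x-\lambda a+\eta $ implied by the text, together with illustrative (assumed) parameter values, and regresses the surprise in $x$ given $(\theta ,a)$ on the surprise in $y$; the population coefficient of this regression is $\beta =\sigma _{\varepsilon }^{2}/(\sigma _{\varepsilon }^{2}+\sigma _{\eta }^{2})=\tau /(1+\tau )$:

```python
import random

# Monte Carlo recovery of the signal-extraction weight beta in
#   E(x | theta, a, y) = (1-beta)*theta + (beta*lam + beta*gamma - gamma)*a + beta*y,
# assuming x = theta - gamma*a + eps and y = x - lam*a + eta.
# All parameter values below are illustrative assumptions.
random.seed(1)
gamma, lam = 0.3, 0.5
sig_eps, sig_eta = 1.0, 2.0
n = 200_000

num = den = 0.0
for _ in range(n):
    theta = random.gauss(0, 1)
    a = random.gauss(0, 1)                 # vary a freely so beta is identified
    eps = random.gauss(0, sig_eps)
    eta = random.gauss(0, sig_eta)
    x = theta - gamma * a + eps
    y = x - lam * a + eta
    u = x - (theta - gamma * a)            # surprise in x given (theta, a): equals eps
    v = y - (theta - gamma * a - lam * a)  # surprise in y given (theta, a): equals eps + eta
    num += u * v
    den += v * v

beta_hat = num / den
tau = sig_eps**2 / sig_eta**2
beta_theory = tau / (1 + tau)              # = sig_eps^2 / (sig_eps^2 + sig_eta^2)
print(round(beta_hat, 3), round(beta_theory, 3))
```

With these values, $\beta $ is small when $\tau $ is small and approaches one as $\tau \rightarrow \infty $, matching the comparative statics discussed above.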
It follows that the DM's belief that $y$ is independent of $ \theta $ is essential for the anomalous effects of our main model. The same conclusion would be obtained if we assumed instead that the DM's subjective DAG does not contain a direct $a\rightarrow y$ link but instead contains the link $\theta \rightarrow y$ (or an indirect link $\theta \rightarrow \phi \rightarrow y$, where $\phi $ is a variable that is objectively correlated with $\theta $ but independent of all other variables conditional on $\theta $). \section{Conclusion} This paper explored behavioral consequences of a certain reverse-causality error -- inverting the causal link between two outcome variables and deeming one of them exogenous -- in quadratic-normal environments. Two novel qualitative effects emerge from our analysis. First, the DM's behavior is sensitive to the variances of the noise terms in the equations for the two outcome variables $x$ and $y$ -- something that would not arise in the quadratic-normal setting under a correctly specified subjective model. Second, the anomalous effects vanish when the DM's direct effect on the final outcome variable $y$ fully offsets the effect that the intermediate outcome variable $x$ has on $y$. In this sense, the decision context protects the DM from his reverse-causality error. As we saw in Section 4, these novel effects crucially rely on the combination of the two aspects of the DM's model misspecification -- namely, his inversion of the causal link between the two outcome variables and his belief in the exogeneity of $y$. When one of these assumptions is relaxed, the DM's behavior ceases to display these effects and can even revert to the correct-specification benchmark. This should not be construed as \textquotedblleft fragility\textquotedblright\ of our analysis, but rather as further demonstration that in Gaussian environments, rational-expectations predictions tend to be robust to many causal misspecifications (see Spiegler (2020b)). 
The same causal misperceptions would be less innocuous under a non-Gaussian objective data-generating process. And of course, the specification of the true and subjective DAGs in Figure 1 does not exhaust the range of relevant reverse-causality errors. Spiegler (2016) gives an example in which the true DAG is $a\rightarrow x\leftarrow y$ , the DM's wrong model is $a\rightarrow x\rightarrow y$, and $y$ (rather than $x$) is the payoff-relevant variable. That is, in reality $y$ is exogenous and the DM's reverse-causality error leads him to believe that $a$ causes $y$ -- the exact opposite of the situation in our main model. Eliaz et al. (2021) quantify the maximal possible distortion of the estimated correlation between $x$ and $y$ due to this error (as well as more general versions of it). Thus, I am not claiming that the results in this paper are universal features of situations in which the true and subjective DAGs disagree on the direction of a certain link. That would be analogous to claiming that subgame perfect equilibrium in an extensive-form game is robust to changes in the game form. Nevertheless, as the examples in this paper demonstrated, many real-life situations fit the mold of Figure 1 and its variations, such that hopefully the analysis in this paper has provided relevant insights into the consequences of reverse-causality errors for agents who act on them. \end{document}
\begin{document}

\title{Label Noise Detection under the Noise at Random Model with Ensemble Filters}

\begin{abstract}
Label noise detection has been widely studied in Machine Learning because of its importance in improving training data quality. Satisfactory noise detection has been achieved by adopting ensembles of classifiers. In this approach, an instance is marked as mislabeled if a high proportion of members in the pool misclassifies it. Previous authors have empirically evaluated this approach; nevertheless, they mostly assumed that label noise is generated completely at random in a dataset. This is a strong assumption since other types of label noise are feasible in practice and can influence noise detection results. This work investigates the performance of ensemble noise detection under two different noise models: the Noisy at Random (NAR) model, in which the probability of label noise depends on the instance class, and the Noisy Completely at Random (NCAR) model, in which the probability of label noise is independent of both the instance class and its attributes. In this setting, we investigate the effect of the class distribution on noise detection performance, since it changes the total noise level observed in a dataset under the NAR assumption. Further, an evaluation of the ensemble vote threshold is conducted to contrast with the most common approaches in the literature. In many of the performed experiments, choosing one noise generation model over another led to different results when aspects such as class imbalance and the noise level ratio among classes were considered.
\end{abstract}

\keywords{ Label noise \and Noise detection \and Ensemble methods \and Noise at Random \and Ensemble noise filtering}

\section{Introduction}
\label{sec:introduction}
Data quality is of great importance for {ML} applications and, in particular, for classification tasks.
Conventionally in these tasks, a training set of labeled instances is given as input to an {ML} algorithm, which will acquire useful knowledge to make predictions for new instances. In practice, real-world datasets frequently contain irregularities such as incompleteness, noise, and data inconsistencies that impact {ML} performance \citep{Han}. In this light, noise detection and filtering are quite relevant techniques for {ML} \citep{Zhu-attr}. According to the literature, noise may occur in both attributes and classes \citep{Zhu-attr}. This work focuses on the latter problem, in which an unknown proportion of instances in a dataset are mislabeled for various reasons. This is a relevant problem since label noise can harm the identification of the true class boundaries, increase the chance of overfitting, and affect learning performance in general \citep{survey}. Previous works successfully adopted the \textit{classification noise filtering} method \citep{Brodley}\citep{Sluban-diversity}\citep{unlabled2018} for label noise detection, a method widespread in the literature. In this approach, mislabeled instances in a dataset are identified according to the output results of a classifier or an ensemble of classifiers. For example, in the majority vote for ensemble noise detection, an instance is marked as mislabeled if most classifiers in a pool incorrectly classify it. In the consensus vote for ensemble noise detection, a record is considered noisy if all classifiers in the pool misclassify it. As with most ML tasks, empirical evaluation is also crucial in the context of noise detection techniques. Producing a ground-truth dataset for evaluation usually requires domain experts to decide which instances are mislabeled. This process can be costly, and experts are not always available. This problem is mitigated when artificial datasets are used or when simulated noise is injected into a dataset in a controlled way.
The investigation of how noise influences the learning process is simplified when a systematic addition of noise is performed \citep{GARCIA2019}. Label noise can be injected into a dataset by assuming three distinct models of noise \cite{survey}: (i) Noisy Completely at Random (NCAR), in which the probability of an instance being noisy is independent of both its label and its attributes, (ii) Noisy at Random (NAR), in which the probability of an instance being noisy depends on its label, and (iii) Noisy Not at Random (NNAR), in which the probability of an instance being noisy also depends on its attributes. In many previous works \citep{Sluban-diversity} \citep{Brodley} \citep{saesSMOTE} \citep{GARCIA2019}, a single noise model is chosen over another to perform experiments. Nevertheless, it is usually not clear how this choice can affect experimental results. Additionally, other aspects, like class distribution, can impact the distribution of noise differently in a dataset depending on the noise type considered. For instance, a human supervisor may find it more difficult to label records from the minority class than the majority class in some contexts. In this work, it is investigated how noise models can influence noise detection experiments under different aspects. In contrast to previous studies, the influence of distinct label noise models on ensemble noise detection is evaluated in this research under various contexts such as class imbalance, noise distribution, ensemble thresholds, and percentage of noise in data. It is shown that different results are achieved depending on the context. For instance, even under the same noise model (e.g., NAR), a detection technique may have quite distinct performance results if class imbalance changes (e.g., NAR with imbalanced vs. NAR with balanced class distributions). The remainder of this paper is organized as follows. In Section~\ref{sec:background}, an overview of label noise detection is presented. The proposed methodology is described in Section~\ref{sec:experiments}.
Experiments are presented in Section~\ref{sec:results}. Finally, Section~\ref{sec:conclusion} summarizes the paper and presents future work.

\section{Related Works}
\label{sec:background}
In \cite{Zhu-attr}, two types of noise are distinguished for supervised learning datasets: attribute (or feature) and class (or label) noise. The former is present in one or more features due to incorrect or missing values. In turn, label noise can be generated for many reasons, such as the low reliability of human experts during labeling, incomplete information, communication problems, among others \citep{Sluban-advances}. The presence of noise in the training dataset can lead to an increase in processing time, higher model complexity, and a higher chance of overfitting, which will then deteriorate the predictive performance \citep{LORENA2004}. According to the literature, removing examples with feature noise is not as beneficial as label noise detection. This occurs since the values of non-noisy features can be helpful in the classification process and because there is only one label, while there are many attributes \citep{survey}. Besides, feature noise can later give rise to label noise. Hence, this work will concentrate on the label noise problem. From now on, label noise is also referred to as noise. This section initially presents a brief review of standard label noise detection techniques in the literature (Section~\ref{sec:background_detection_techniques}). Previous works have commonly evaluated such methods by performing experiments in which noise is intentionally injected into a dataset. A model is required in these experiments to control how the label noise is distributed across the dataset instances. Different label noise models are presented in Section~\ref{sec:background_noise_taxonomy}. Finally, the scope of this paper and its contributions are presented in Section~\ref{sec:main_contribuitions}.
\subsection{Noise Detection Approaches}
\label{sec:background_detection_techniques}
Several techniques have already been developed for dealing with label noise. According to \cite{survey}, two broad strategies can be used: (1) the algorithm-level strategy, i.e., designing classifiers that are more robust and noise-tolerant, and (2) the data-level strategy, i.e., performing data cleaning by filtering noisy instances as a preprocessing step.

\subsubsection{Algorithm-level Approach}
Some learning algorithms are naturally more tolerant to noise, which is a benefit in the presence of label noise. Ensemble methods like \textit{bagging} show increased diversity when noise is present, which helps to cope with mislabeled examples. Decision tree pruning is also more robust to noisy data, as this technique has been shown to decrease the influence of label noise by preventing overfitting \citep{abellan-baggingDT}. Even more robust learning algorithms can be derived by including the noise information during the learning process. In \cite{B2016star}, for example, the \textit{generalized robust Logistic Regression} (gLR) is proposed, in which the exponential distribution is adopted to model noise in such a way that points closer to the decision boundary have a relatively higher chance of being mislabeled. In the \textit{robust Kernel Fisher Discriminant} \citep{FisherLawrence:2001}, a probability of the label being noisy is derived by applying an Expectation-Maximization algorithm. In the \textit{robust kernel logistic regression} \citep{BOOTKRAJANG2014}, the optimal hyperparameters of the method are automatically determined using Multiple Kernel Learning and Bayesian regularization techniques. In \cite{B2016star}, a logistic regression classifier is built by employing a noise model based on a mixture of Gaussians.
In \cite{biggio2011}, the \textit{Label Noise robust SVMs} deal with noise present in data by correcting the kernel matrix with a specially structured matrix based on the information regarding the level of noise in the dataset. The approaches above directly model label noise during the learning process. Although the advantage of those methods is to use prior knowledge regarding a noise model and its consequences \citep{survey}, it increases the complexity of learning algorithms and can lead to overfitting, because of the additional parameters of the training data model. \subsubsection{Data-level Approach} While the algorithm-level strategy aims to implement robust models using some available information related to the noise present in data, the data-level approach handles noisy data before the training process. The algorithm-level methods are less versatile, as not all algorithms have robust versions. In turn, the data-level strategy has the advantage of considering the noise filtering and the learning phase as distinct steps. Hence, it avoids using polluted instances during the learning process, improving both predictive performance and computational cost. Also, filtering approaches are usually cheap and easy to implement \citep{survey}. Thus, our work will be focused on the data-level category. Label noise filtering can be performed in a variety of ways. For example: by using complexity measures for the records \citep{sun-adHocMeasure} \citep{Smith2014}; partitioning approaches for removing mislabeled instances in large datasets \citep{zhu-large}\citep{GARCIAGIL2019bigData}; filtering noisy examples by verifying the impact of the removal on the learning process \citep{LOOPC}; using neighborhood-based algorithms to remove instances that are distant from the ones of the same class \citep{Wilson2000,Kanj2016ENN}, among others. 
In our work, we adopted the ensemble approach for noise filtering, in which instances are removed when a certain number of algorithms misclassifies them \citep{YUAN2018psma}. This approach has been widely chosen \citep{Brodley}\citep{zhu-large}\citep{comitee} \citep{Sluban-advances}\citep{Sluban2014}\citep{saesSMOTE}\citep{YUAN2018psma}\citep{unlabled2018}, as it overcomes the problem of relying on a single classifier for noise filtering. Using only one classifier for noise filtering can cause the removal of too many instances. The ensemble approach improves noise detection since an instance is likely to have been incorrectly labeled if several distinct classifiers disagree with its recorded label. The ensemble noise filtering applies $k$-fold cross-validation, i.e., in $k$ repetitions, $k-1$ folds of the dataset are used for training each algorithm in the ensemble, and the remaining fold is used for validation. Then, all records are classified by all algorithms in the pool, and the observed errors are taken into account to remove instances. An essential issue in ensemble-based noise filtering is how many misclassifications are assumed to consider an instance as noisy. There are two common choices in the literature: the \textit{consensus} and the \textit{majority} vote \citep{unlabled2018}\citep{Sluban-diversity}\citep{YUAN2018psma}. Whereas the majority vote classifies an instance as incorrectly labeled if a majority of the algorithms in the pool misclassifies it, the consensus vote requires that all classifiers have misclassified the record. These vote techniques can produce different results. As the consensus requires a higher agreement of classifiers, it tends to remove only a few instances. On the other hand, the majority vote may throw out too many instances, including noise-free ones that could be relevant. The trade-off between choosing the majority or the consensus approach can be replaced by the problem of selecting a vote threshold.
The majority and the consensus vote are special cases, with thresholds of 50\% and 100\%, respectively. The adequate vote threshold would be related to the expected proportion of noisy instances in a dataset. Nevertheless, few works have investigated the influence of varying the vote thresholds. For example, in \cite{threshold2005} and \cite{threshold2018}, the authors showed that selecting appropriate values of the vote threshold usually performed better than using the standard filtering approaches.

\subsection{Noise Models}
\label{sec:background_noise_taxonomy}
In real-world applications, evaluating whether an example is noisy or not generally requires examination by domain specialists. Nonetheless, this is not always feasible as they may not be available. Moreover, consulting a specialist tends to increase the duration and cost of the preprocessing step. This problem is mitigated when artificial datasets are used, or simulated noise is injected into a dataset in a controlled way. The study and further validation of noise detection techniques and noise models' influence on the learning process are simplified when a systematic addition of noise is performed. In order to do so, it is imperative to choose the method by which the noise will be inserted into a dataset. In \cite{survey}, the authors provided a taxonomy of label noise models, reflecting the distribution of noisy instances in a dataset. The three models are shown in Figure \ref{fig:survey_p}. Let \textit{X} be the vector of features, \textit{Y} the true class, $\hat{Y}$ the observed label, and \textit{E} a binary variable indicating if a labeling error occurred. Each model has a different assumption on how noise is generated.
\begin{figure}
\caption{Statistical taxonomy of label noise according to \cite{survey}.}
\label{fig:survey_p}
\end{figure}

\begin{enumerate}
\item Noisy Completely at Random ({NCAR}): the occurrence of a mislabeled instance is independent of the instance's attributes and class. Mislabeled records are uniformly present across the instance space. In a binary classification problem, for example, there will be the same proportion of mislabeled instances in both classes. In other words, as shown in Figure \ref{fig:survey_a}, the occurrence of an error \textit{E} is independent of the other random variables, including the true class itself (\textit{Y}). For this model, the mislabeled instance probability is given by $p_{e} = P(E = 1) = P(Y \neq \hat{Y})$.
\item Noisy at Random ({NAR}): the probability of labeling errors depends on the instance class, although it is not dependent on the records' attributes. Since mislabeling is conditional on the instance class, this model allows us to represent asymmetric label noise, i.e., when samples from certain classes are more prone to be mislabeled. This model could be applied, for example, to simulate the mislabeling that is often observed in medical case-control studies where the misclassification of disease outcome may be unrelated to risk factor exposure (non-differential) \citep{differencial}. As shown in Figure \ref{fig:survey_b}, \textit{E} is still independent of \textit{X} but it is conditioned on \textit{Y}. For this model, the mislabeled instance probability is given by $p_{e} = P(E = 1) = \sum_{y \in Y} P(Y = y)P(E = 1|Y = y)$.
\item Noisy not at Random ({NNAR}): the probability of an error occurrence depends not only on the instance class but also on the instance attributes. In this case, for example, samples are more likely to be mislabeled when they are similar to records of another class or when they are located in certain regions of the instance space.
By applying this model, it is possible to simulate mislabeling near classification boundaries or in low-density regions. It can also be used for medical case-control studies where the misclassification of disease outcome may be related to risk factor exposure (differential) \citep{differencial}. As can be seen in Figure \ref{fig:survey_c}, this is a more complex model, where \textit{E} depends on both \textit{X} and \textit{Y}, i.e., labeling errors are more likely for certain classes and in certain regions of the \textit{X} space.
\end{enumerate}

It is usually quite challenging to identify the kind of noise present in a dataset without any background knowledge. Nevertheless, it is crucial to evaluate how sensitive noise detection techniques are to the noise distribution in a dataset. In this work, analyses regarding the {NAR} and {NCAR} models were performed in different contexts. NNAR scenarios are equally relevant, although more diverse in terms of the assumptions relating the instances' attributes to the chance of label noise. This work focused on the {NAR} and {NCAR} models to provide controlled experimental scenarios and investigate pertinent aspects (e.g., class distribution, noise level per class) that can impact label noise detection. Once deeply studied, such contexts can be extended in future work to cover the NNAR assumption.

\subsection{Scope and Contribution}
\label{sec:main_contribuitions}
As detailed previously, several works have extensively studied approaches to better handle noise detection either by developing noise-tolerant algorithms or by identifying and filtering data irregularities in a preprocessing step. In most previous studies, artificial noise is randomly injected into data to evaluate proposed systems. This work delivers relevant findings on how different noise models affect noise detection, indicating that new noise-handling systems should be evaluated in a broader context.
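To make the contrast between the first two models of the taxonomy concrete, the sketch below (in Python, with purely illustrative flip probabilities and class distribution) injects label noise into a binary dataset under NCAR and under NAR. Under NAR, the total observed noise level $p_{e}=\sum_{y}P(Y=y)P(E=1|Y=y)$ depends on the class distribution, which is precisely why class imbalance matters in this setting:

```python
import random

# NCAR vs. NAR label-noise injection for a binary dataset.
# Flip probabilities and class distribution are illustrative assumptions.
random.seed(7)
n = 100_000
p_minority = 0.2
labels = [1 if random.random() < p_minority else 0 for _ in range(n)]

def inject_ncar(ys, p):
    # NCAR: every instance is flipped with the same probability p
    return [1 - y if random.random() < p else y for y in ys]

def inject_nar(ys, p_per_class):
    # NAR: the flip probability depends on the (true) class label
    return [1 - y if random.random() < p_per_class[y] else y for y in ys]

noisy_ncar = inject_ncar(labels, 0.10)
noisy_nar = inject_nar(labels, {0: 0.05, 1: 0.30})   # minority class noisier

rate_ncar = sum(a != b for a, b in zip(labels, noisy_ncar)) / n
rate_nar = sum(a != b for a, b in zip(labels, noisy_nar)) / n
# With P(Y=1)=0.2, the NAR total is p_e = 0.8*0.05 + 0.2*0.30 = 0.10, matching
# NCAR here; changing the class distribution would change p_e under NAR only.
print(round(rate_ncar, 3), round(rate_nar, 3))
```

With these particular probabilities the two total noise levels coincide; shifting mass toward the noisier class would raise $p_{e}$ under NAR while leaving it unchanged under NCAR.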
In contrast to many related studies, which usually focus on creating new detectors, this paper analyzes label noise detection under the NCAR and NAR models and its behavior in different settings, such as class-imbalanced data, the amount of noise, and the noise distribution per class. The current work focuses on evaluating ensemble filtering techniques with respect to different aspects that can impact noise detection performance, such as the noise model, the class imbalance ratio, and the noise level per class. Previous related works, such as \cite{Sluban-diversity} and \cite{unlabled2018}, are commonly limited to evaluating ensemble filtering techniques under the NCAR model. Our work, in turn, investigates the performance of noise filters under the NAR model, in which the noise level can vary depending on the class. Additionally, we address other aspects, like the class imbalance ratio and the noise level per class, to investigate their impact on noise detection performance depending on the noise model assumed in the experiments. Previous works on ensemble noise filtering are also commonly limited to evaluating and selecting between the majority and the consensus approach. This trade-off can be recast as the problem of choosing a vote threshold, of which the majority and the consensus votes are special cases (thresholds of 50\% and 100\%, respectively), where the adequate threshold would be related to the expected proportion of noisy instances in a dataset. Nevertheless, few works have investigated the influence of varying the vote threshold; in \cite{threshold2005} and \cite{threshold2018}, for instance, the authors showed that selecting adequate threshold values usually performed better than using the standard filtering approaches. The choice of the vote threshold is another important aspect addressed in the current work.
As will be seen, our work produced important findings which can be considered when developing new noise detection techniques, evaluating existing approaches, and modeling specific real-world problems. \section{Proposed Methodology} \label{sec:experiments} This section details the experimental setup adopted in our work to evaluate the ensemble noise detectors under the {NCAR} and {NAR} models. Different aspects, which have not yet been properly evaluated together in the literature, are jointly considered in the performed evaluation: (1) the class imbalance ratio in a dataset; (2) the noise ratio comparing the majority and minority classes; (3) the noise model itself. The experimental protocol adopted in this work is summarized in Figure~\ref{fig:graph-protocol}. The protocol starts with a real-world dataset given as input. Initially, a data cleaning process using the consensus vote is applied to remove possible noise from the input dataset. Then a new dataset is generated from the cleaned data with the desired class imbalance ratio ($IR$). The generated dataset is split into training (70\%) and testing (30\%) data. The training data is used to produce the pool of classifiers used for ensemble noise detection. For evaluating the noise detector, a percentage \emph{p} of noise is injected into the testing data according to the noise ratio \emph{M} and a chosen noise model. Each test instance is given as input to the pool of classifiers. With all predictions, the test instance is marked as noisy if at least \emph{L} classifiers misclassify it. Then, the evaluation measures described in this section are calculated and analyzed. The main purpose of the proposed experimental methodology is to derive insights into how to design ensemble noise detectors under specific conditions, like different class distributions and noise levels per class. It is important to highlight that domain knowledge and experts (when available) can be very helpful to estimate such specific conditions.
For instance, the noise level per class can be estimated by having an expert inspect a sample of data instances to identify possible labeling errors. Once such conditions are identified via auxiliary data and experts, the design of the noise detectors can be more adequately performed. \begin{figure} \caption{General experimental protocol.} \label{fig:graph-protocol} \end{figure} \FloatBarrier In Section~\ref{sec:setup_ensemble}, the algorithms for the ensemble filter and the vote scheme approach are presented. In Section~\ref{sec:setup_datasets}, the real-world datasets are detailed, and the methodology for generating data with specific settings is described. The procedure for noise injection regarding the noise model is explained in Section~\ref{sec:setup_noise_injection}. Depending on the label noise model adopted in the experiments, noise detection techniques can exhibit distinct performance behavior, usually measured by precision and recall, defined in Section~\ref{sec:background_performance_measure}. Lastly, the input variables and the experimental protocol are summarized in Section~\ref{sec:setup_protocol}. \subsection{Ensemble} \label{sec:setup_ensemble} The noise detection ensemble used in the experiments was generated from 10 algorithms adopted in the related work \citep{Sluban-diversity}. They were chosen from different families: decision trees, Bayesian models, neural networks, support vector machines, random forest, nearest neighbors, and rule-based methods, forming a diverse pool of classifiers. All of them are implemented in R using specific packages, as shown in Table \ref{tab:algorithms}. Default parameters suggested in the R packages were employed for all algorithms in the experiments. Parameters could be optimized for each algorithm, which would result in more accurate classifiers. However, as we are dealing with ensembles, the lack of parameter optimization is expected to be compensated by the diversity of the algorithms.
The impact of parameter optimization in ensemble noise detection will be better investigated in future work. \begin{table}[h] \setlength\extrarowheight{2pt} \caption{Learning algorithms for classification noise filtering.} \label{tab:algorithms} \begin{center} \begin{tabularx}{0.7\textwidth}{ll} \toprule \textbf{Algorithm} & \textbf{R Package} \\ \hline CN2 (rule learner) & RoughSets \\ kNN (nearest neighbor) & class \\ Naive Bayes & naivebayes \\ Random forest & randomForest \\ SVM (RBF Kernel) & e1071 \\ J48 & RWeka \\ JRip & RWeka \\ Multilayer perceptron & RSNNS \\ Decision tree & party \\ SMO (linear Kernel) & RWeka \\ \bottomrule \end{tabularx} \end{center} \end{table} \pagebreak Unlike previous work that adopted the majority and the consensus vote approaches for ensemble-based noise filtering, in our work, we evaluate different values for the decision threshold \emph{L} in the ensemble. If at least \emph{L}\% of the classifiers in the pool misclassify an instance, it is considered wrongly labeled. In our experiments, the threshold \emph{L} varies from 10\% to 100\%. The majority and the consensus approaches are special cases, corresponding to \emph{L} = 50\% (i.e., more than half of the classifiers) and \emph{L} = 100\% (i.e., all classifiers), respectively. Of course, very small values, like 10\%, would be reasonable only in contexts with almost noise-free datasets or when a very high level of recall in noise detection is desired. Nevertheless, we vary \emph{L} to check whether there is a threshold that maximizes the noise detection under specific contexts, i.e., considering the class imbalance ratio, noise distribution, and percentage of total noise. \subsection{Data} \label{sec:setup_datasets} Quantitative assessment of noise detection methods requires knowing beforehand which instances are noisy. In real-world datasets, this is achieved either by expert labeling or by randomly injecting artificial noise into a dataset.
While the former approach is not feasible for an extensive evaluation, the latter still has uncertainty about which instances are originally noisy when dealing with real-world datasets. \addtocounter{table}{-1} \begin{table}[h] \setlength\extrarowheight{2pt} \caption{Real-world data information. \textit{Missing} = instances with at least one missing value.} \label{tab:dataset_info} \begin{center} \begin{tabularx}{0.7\textwidth}{l|r|ccc|ccc} \toprule \multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{Attr.}} & \multicolumn{3}{c|}{\textbf{Original}} & \multicolumn{3}{c}{\textbf{After cleaning}} \\ \cline{3-8} & & \textbf{IR} & \textbf{Inst.} & \textbf{Missing} & \textbf{IR} & \textbf{Inst.} & \textbf{ Removal (\%)} \\ \hline arcene & 10001 & 44:56 & 200 & 0 & 44:56 & 199 & 0.50\\ breast-c & 10 & 34:66 & 699 & 16 & 35:65 & 673 & 3.72\\ column2C & 7 & 32:68 & 310 & 0 & 32:68 & 308 & 0.65\\ credit & 16 & 44:56 & 690 & 37 & 45:55 & 644 & 6.67\\ cylinder-bands & 40 & 42:58 & 540 & 263 & 36:64 & 276 & 48.89\\ diabetes & 9 & 35:65 & 768 & 0 & 31:69 & 720 & 6.25\\ eeg-eye-state & 15 & 45:55 & 14980 & 0 & 45:55 & 14979 & 0.01\\ glass0 & 10 & 33:67 & 214 & 0 & 33:67 & 214 & 0.00\\ glass1 & 10 & 36:64 & 214 & 0 & 35:65 & 212 & 0.93\\ heart-c & 14 & 46:54 & 303 & 7 & 45:55 & 289 & 4.62\\ heart-statlog & 14 & 44:56 & 270 & 0 & 44:56 & 262 & 2.96\\ hill-valley & 101 & 50:50 & 1212 & 0 & 48:52 & 1184 & 2.31\\ ionosphere & 35 & 36:64 & 351 & 0 & 35:65 & 345 & 1.71\\ kr-vs-kp & 37 & 48:52 & 3196 & 0 & 48:52 & 3194 & 0.06\\ mushroom & 23 & 48:52 & 8124 & 2480 & 38:62 & 5644 & 30.53\\ pima & 9 & 35:65 & 768 & 0 & 32:68 & 732 & 4.69\\ sonar & 61 & 47:53 & 208 & 0 & 46:54 & 206 & 0.96\\ steel-plates-fault & 34 & 35:65 & 1941 & 0 & 35:65 & 1941 & 0.00\\ tic-tac-toe & 10 & 35:65 & 958 & 0 & 35:65 & 958 & 0.00\\ voting & 17 & 39:61 & 435 & 203 & 47:53 & 228 & 47.59\\ \bottomrule \end{tabularx} \end{center} \end{table} In our experiments, we adopted 20 real-world binary datasets 
available at the KEEL-dataset repository \citep{KEEL}, UCI repository \citep{UCI:2017}, and OpenML repository \citep{OpenML2013}. Some multi-class datasets are modified to obtain two-class imbalanced problems, defining the union of one or more classes as positive and the remainder as negative. The list of datasets is presented in Table \ref{tab:dataset_info}. \begin{figure} \caption{Data generation process.} \label{fig:dataset_gen_process} \end{figure} \FloatBarrier For each dataset, a three-step process is adopted (see Figure \ref{fig:dataset_gen_process}) to generate a new dataset with controlled IR and injected label noise. The dataset generation process is described below: \begin{itemize} \item \textbf{Data cleaning:} As suggested in \cite{Sluban-diversity}, a data cleaning step is applied to reduce the inherent label noise that is possibly present in the dataset. Hence, the evaluation of noise detection will be mainly contingent on the artificial noise injected in a controlled manner. In this step, a 10-fold classification is employed, and the consensus method is used to remove noisy instances. The consensus vote was chosen for this step for being stricter, ensuring that only undoubtedly noisy instances are discarded, since it requires the agreement of all ensemble classifiers. \item \textbf{Undersampling:} In this step, three datasets are generated for each cleaned dataset, in order to obtain the following {IR} configurations: (1) 50:50, (2) 30:70, and (3) 20:80. For generating a dataset with a given IR, a random undersampling of the majority class is applied. \item \textbf{Noise injection:} In this step, noise is artificially injected according to a given label noise model. This step is detailed in Section \ref{sec:setup_noise_injection}. \end{itemize} \subsection{Noise Injection} \label{sec:setup_noise_injection} For noise detection evaluation, label noise is injected into the test set by changing the class label of a determined proportion $p$ of records.
In our experiments, the desired noise level \emph{p} assumed four different values: $5\%, 10\%, 15\%$, and $20\%$ of instances. For the NAR model, the noise is inserted so as to achieve a specific ratio $M$ of noisy instances per class. Two values of $M$ were chosen in such a way as to analyze the impact of a significant discrepancy in the noise level per class: \begin{itemize} \item $M = 9/1$: for every nine noisy instances in the minority class, there is one noisy instance in the majority class. It simulates an application scenario in which it is more difficult to label examples in the minority class. This chosen ratio is referred to as NAR (9:1) in the discussion of results. \item $M = 1/9$: in turn, for each noisy instance in the minority class, there are nine noisy instances in the majority class. It means that the majority class is more prone to label noise. This chosen ratio corresponds to NAR (1:9) in the discussion of results. \end{itemize} Notice that the NCAR model is a particular case of NAR obtained by setting $M = 1$, i.e., NCAR is equivalent to NAR (1:1). The exact number of noisy instances in each class is determined according to the desired noise level $p$ and the ratio $M$. Let $d_n$ be the number of instances in the test set. Let $n_1$ and $n_2$ be the number of noisy instances in each class. Then $n_1 + n_2 = p\times d_n$ and $M = n_1/n_2$. To satisfy these constraints, $n_1$ and $n_2$ are defined according to the following equations: \begin{equation} n_1 = \frac{M\times (d_n\times p)}{M+1} \end{equation} \begin{equation} n_2 = \frac{d_n\times p}{M+1} \end{equation} For example, suppose that we have $d_n = 1000$ records in the test set, and the desired noise level is $p=0.10$ (100 noisy instances). Then, consider that noise is injected according to three settings: NCAR, NAR (9:1), and NAR (1:9). In the first setting, $M=1$, and by applying the above equations, we would find $n_1 = 50$ and $n_2 = 50$.
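The computation of $n_1$ and $n_2$ from the equations above can be reproduced with a minimal Python sketch (the helper name \texttt{noisy\_counts} is hypothetical, introduced only for illustration):

```python
def noisy_counts(d_n, p, M):
    """Split the noise budget p * d_n between the two classes so that n1/n2 = M.

    n1 is the number of noisy instances in the minority class,
    n2 the number in the majority class.
    """
    total = round(d_n * p)           # total noisy instances to inject
    n1 = round(M * total / (M + 1))  # minority-class share of the budget
    n2 = total - n1                  # remainder goes to the majority class
    return n1, n2

# Worked example from the text: d_n = 1000 test records, p = 0.10 (100 noisy instances)
assert noisy_counts(1000, 0.10, 1) == (50, 50)      # NCAR, i.e., NAR (1:1)
assert noisy_counts(1000, 0.10, 9) == (90, 10)      # NAR (9:1)
assert noisy_counts(1000, 0.10, 1 / 9) == (10, 90)  # NAR (1:9)
```

Rounding to integers preserves the constraint $n_1 + n_2 = \lfloor p \times d_n \rceil$ exactly, while the ratio $M$ holds up to rounding.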
The number of noisy instances in the test set is the same for both classes. This example is illustrated in Figure~\ref{fig:noise_injection_NCAR}. \FloatBarrier \begin{figure} \caption{Noise injection with the NCAR model for different imbalance ratios (IR).} \label{fig:noise_injection_NCAR} \end{figure} \FloatBarrier In the NAR (9:1) setting, $M=9$, and then $n_1 = 90$ (90 noisy instances in the minority class) and $n_2 = 10$ (10 noisy instances in the majority class). In this case, the noise level in the minority class is higher in the test set. This example is illustrated in Figure~\ref{fig:noise_injection_NAR91}. \FloatBarrier \begin{figure} \caption{Noise injection with the NAR (9:1) model for different imbalance ratios (IR).} \label{fig:noise_injection_NAR91} \end{figure} \FloatBarrier Finally, for the NAR (1:9) setting, $M=1/9$; hence, $n_1 = 10$ (10 noisy instances in the minority class) and $n_2 = 90$ (90 noisy instances in the majority class). In this case, the number of noisy instances injected in the test set is greater for the majority class, as illustrated in Figure~\ref{fig:noise_injection_NAR19}. \FloatBarrier \begin{figure} \caption{Noise injection with the NAR (1:9) model for different imbalance ratios (IR).} \label{fig:noise_injection_NAR19} \end{figure} \FloatBarrier \subsection{Performance Measures} \label{sec:background_performance_measure} Most experiments in the literature assess the efficiency of noise detection methods in terms of accuracy \citep{survey}.
A primary measure to evaluate the performance of noise detection is \textit{precision}, which measures how many of the records identified as noisy by the detector are indeed noisy: \[ \mbox{Precision} = \frac{\mbox{number of noisy cases correctly identified}}{\mbox{number of all cases identified as noisy}} \] In addition to precision, another helpful measure is \textit{recall}, which measures how many of the noisy records inserted into the dataset were correctly identified by the detector: \[ \mbox{Recall} = \frac{\mbox{number of noisy cases correctly identified}}{\mbox{number of all noisy cases in the dataset}} \] Finally, a measure that trades off \textit{precision} against \textit{recall} is the \textit{F-score}, the weighted harmonic mean of \textit{precision} and \textit{recall}: \begin{equation} \textit{F-score} = \frac{(1+\beta^2)\times Precision \times Recall}{\beta^2\times Precision + Recall} \label{eq_precision} \end{equation} \noindent where $\beta^2 = \frac{1-\alpha}{\alpha}$, with $\alpha \in (0,1]$ and $\beta^2 \in [0, \infty)$. Setting the $\beta$ parameter makes it possible to assign more importance to either precision or recall when calculating the \textit{F-score}. In this work, the standard \textit{F-score} (also referred to as the $F_1$ score, with $\beta = 1$ and $\alpha = 1/2$) was used. It weights precision and recall equally. \subsection{Experimental Protocol Algorithm} \label{sec:setup_protocol} The experimental protocol illustrated in Figure~\ref{fig:graph-protocol} is also outlined in Algorithm~\ref{algo:exp_procedure_real}. The algorithm was executed for each input parameter combination shown in Table~\ref{tab:exp-strucuture}. Given the stochastic nature of noise injection, this insertion is usually repeated several times for each noise level \citep{Sluban2014, zhu-large}.
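The three measures can be computed directly from the sets of flagged and truly noisy instances. The sketch below uses the standard $F_\beta$ definition; the helper name \texttt{noise\_detection\_scores} is hypothetical:

```python
def noise_detection_scores(flagged, truly_noisy, beta=1.0):
    """Precision, recall and F-score of a noise detector."""
    flagged, truly_noisy = set(flagged), set(truly_noisy)
    tp = len(flagged & truly_noisy)  # noisy cases correctly identified
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(truly_noisy) if truly_noisy else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    b2 = beta ** 2
    fscore = (1 + b2) * precision * recall / (b2 * precision + recall)
    return precision, recall, fscore

# The detector flags 4 instances; 3 of them are among the 5 injected noisy ones.
p, r, f1 = noise_detection_scores({1, 2, 3, 9}, {1, 2, 3, 4, 5})
# precision = 3/4, recall = 3/5, F1 = 2 * 0.75 * 0.6 / (0.75 + 0.6) = 2/3
```

With $\beta = 1$ this reduces to the usual $F_1 = 2PR/(P+R)$; larger $\beta$ weights recall more heavily.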
In this work, for each parameter combination, Algorithm~\ref{algo:exp_procedure_real} was repeated 100 times, and the average results of \textit{Precision}, \textit{Recall}, and \textit{F-score} were employed to evaluate the ensemble noise detector. \addtocounter{table}{-1} \begin{table}[t] \setlength\extrarowheight{2pt} \caption{Experimental setup} \label{tab:exp-strucuture} \begin{center} \begin{tabularx}{0.7\textwidth}{c|c|c|c|c|l} \toprule Imbalance Ratio (IR) & \multicolumn{4}{c|}{Total percentage of} & Noise Ratio (M) \\ \cline{1-1} \cline{6-6} Class min : Class maj & \multicolumn{4}{c|}{noise in testing set (p)} & Class min : Class maj \\ \hline \multirow{3}{*}{50 : 50 } & \multirow{3}{*}{5\%} & \multirow{3}{*}{10\%} & \multirow{3}{*}{15\%} & \multirow{3}{*}{20\%} & NCAR \\ & & & & & NAR (1 : 9) \\ & & & & & NAR (9 : 1) \\ \hline \multirow{3}{*}{30 : 70} & \multirow{3}{*}{5\%} & \multirow{3}{*}{10\%} & \multirow{3}{*}{15\%} & \multirow{3}{*}{20\%} & NCAR \\ & & & & & NAR (1 : 9) \\ & & & & & NAR (9 : 1) \\ \hline \multirow{3}{*}{20 : 80} & \multirow{3}{*}{5\%} & \multirow{3}{*}{10\%} & \multirow{3}{*}{15\%} & \multirow{3}{*}{20\%} & NCAR \\ & & & & & NAR (1 : 9) \\ & & & & & NAR (9 : 1) \\ \bottomrule \end{tabularx} \end{center} \end{table} \begin{algorithm} \caption{Experimental procedure} \label{algo:exp_procedure_real} \begin{center} \begin{flushleft} \textbf{Input:} IR, M, p, dataset, classifiers, threshold (L) \\ \textbf{Output: } performance measures \end{flushleft} \begin{algorithmic}[1] \STATE $cleanData\gets dataCleaning(dataset,classifiers)$ \STATE $data\gets generateData(cleanData, IR)$ \STATE $training, testing\gets split(data, 70\%,30\%)$ \COMMENT{Split data into training and testing sets} \STATE $noisy\_testing\gets injectNoise(testing,M,p)$ \FOR{c in classifiers} \STATE $model\gets train(training,c)$ \STATE $predictions[c]\gets classify(model,noisy\_testing)$ \ENDFOR \STATE $ensemble\_prediction\gets voting(predictions,L)$ \STATE $measures\gets
calculate(ensemble\_prediction)$ \end{algorithmic} \end{center} \end{algorithm} \section{Results} \label{sec:results} In this section, the findings from the experiments described in Section~\ref{sec:setup_protocol} are presented and examined. The results shown in this section were obtained from the steps outlined in Algorithm~\ref{algo:exp_procedure_real}. The discussion is carried out by putting into perspective each input parameter that resulted in a specific scenario, facilitating the analysis and comparison. In this way, we first examine noise detection for balanced and imbalanced datasets, exploring the impact of different noise levels per class, in Section~\ref{sec:real_balancec_vs_imbalanced}. In Section~\ref{sec:real_different_thresholds}, we analyze noise detection under different ensemble vote thresholds. Lastly, statistical tests of the results are detailed in Section~\ref{sec:stat_tests}. \subsection{Imbalance Ratio and Noise Level per Class} \label{sec:real_balancec_vs_imbalanced} Figures~\ref{fig:results_real_fscore_comparison_1} and \ref{fig:results_real_fscore_comparison_2} show the {\it F-score} for eight datasets, varying the noise level and the imbalance ratio. The F-score tends to increase as the overall noise level ($p$) increases. Similar patterns of results were observed in the other datasets. This is expected, since the noise detection task becomes easier for higher amounts of noise in a dataset. \FloatBarrier \begin{figure} \caption{\textit{F-score} for different noise levels and imbalance ratios.} \label{fig:results_real_fscore_comparison_1} \end{figure} \FloatBarrier \FloatBarrier \begin{figure} \caption{\textit{F-score} for different noise levels and imbalance ratios.} \label{fig:results_real_fscore_comparison_2} \end{figure} \FloatBarrier The general behavior observed in the scenario of balanced data (IR 50:50 - first column) was that the NCAR and NAR models produced the same effect on noise detection regardless of the noise distribution (noise level per class).
In other words, when the dataset is balanced, the noise detection performance depends more on the percentage of noise in the dataset than on how the noise is distributed per class. For imbalanced datasets, noise detection is affected by the noise level per class, as revealed by the F-score discrepancy between the NCAR and NAR models shown in Figures~\ref{fig:results_real_fscore_comparison_1} and \ref{fig:results_real_fscore_comparison_2}. This difference increases when the noise level gets higher, as observed in a more pronounced way for the \textit{arcene} and \textit{cylinder-bands} datasets. Smaller differences between the NCAR and NAR model results were mainly observed when the noise level was 5\%. At such a low noise level, it is, in fact, more difficult to obtain good performance measures. \addtocounter{table}{-1} \begin{table}[h!] \setlength\extrarowheight{2pt} \caption{F-score variation vs class imbalance ratio.} \label{tab:fscoreVariationRealData} \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{l|rrr|rrr|rrr|rrr} \toprule \multirow{3}{*}{\textbf{Datasets}} & \multicolumn{12}{c}{\textbf{F-score variation when IR goes from 50:50 to 20:80}} \\ \cline{2-13} & \multicolumn{3}{c|}{\textbf{5\% of noise}} & \multicolumn{3}{c|}{\textbf{10\% of noise}} & \multicolumn{3}{c|}{\textbf{15\% of noise}} & \multicolumn{3}{c}{\textbf{20\% of noise}} \\ \cline{2-13} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (9:1)\end{tabular}} & \textbf{NCAR} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (1:9)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (9:1)\end{tabular}} & \textbf{NCAR} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (1:9)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (9:1)\end{tabular}} & \textbf{NCAR} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (1:9)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (9:1)\end{tabular}} & \textbf{NCAR} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (1:9)\end{tabular}} \\ \hline arcene & \textcolor{red}{-3.6} &
\textcolor{blue}{1.1} & \textcolor{blue}{2.8} & \textcolor{red}{-17.4} & \textcolor{red}{-5.4} & \textcolor{blue}{7.1} & \textcolor{red}{-19.2} & \textcolor{blue}{1.3} & \textcolor{blue}{7.6} & \textcolor{red}{-21.6} & \textcolor{blue}{1.7} & \textcolor{blue}{9.8} \\ breast-cancer-wisconsin & \textcolor{red}{-1.1} & \textcolor{red}{-1.4} & \textcolor{blue}{4.7} & \textcolor{red}{-0.4} & \textcolor{black}{0.0} & \textcolor{blue}{3.8} & \textcolor{red}{-0.9} & \textcolor{red}{-0.6} & \textcolor{blue}{2.2} & \textcolor{red}{-0.4} & \textcolor{red}{-0.1} & \textcolor{blue}{1.7} \\ column2C & \textcolor{red}{-13.6} & \textcolor{blue}{4.1} & \textcolor{red}{-2.8} & \textcolor{red}{-10.5} & \textcolor{red}{-2.1} & \textcolor{red}{-0.4} & \textcolor{red}{-15.6} & \textcolor{blue}{1.7} & \textcolor{blue}{1.8} & \textcolor{red}{-13.3} & \textcolor{red}{-2.4} & \textcolor{blue}{0.9} \\ credit & \textcolor{red}{-5.8} & \textcolor{red}{-1.2} & \textcolor{blue}{2.3} & \textcolor{red}{-8.6} & \textcolor{blue}{0.6} & \textcolor{blue}{1.6} & \textcolor{red}{-7.0} & \textcolor{black}{0.0} & \textcolor{blue}{1.4} & \textcolor{red}{-6.6} & \textcolor{red}{-0.3} & \textcolor{blue}{2.0} \\ cylinder-bands & \textcolor{red}{-13.8} & \textcolor{blue}{6.6} & \textcolor{blue}{5.4} & \textcolor{red}{-23.6} & \textcolor{red}{-5.1} & \textcolor{blue}{13.8} & \textcolor{red}{-33.0} & \textcolor{red}{-1.8} & \textcolor{blue}{16.2} & \textcolor{red}{-35.8} & \textcolor{red}{-2.6} & \textcolor{blue}{20.8} \\ diabetes & \textcolor{red}{-12.8} & \textcolor{blue}{3.0} & \textcolor{blue}{5.5} & \textcolor{red}{-10.0} & \textcolor{red}{-2.9} & \textcolor{blue}{7.2} & \textcolor{red}{-11.8} & \textcolor{red}{-2.6} & \textcolor{blue}{5.4} & \textcolor{red}{-11.1} & \textcolor{red}{-0.6} & \textcolor{blue}{5.4} \\ eeg-eye-state & \textcolor{red}{-27.2} & \textcolor{red}{-15.0} & \textcolor{red}{-6.3} & \textcolor{red}{-27.5} & \textcolor{red}{-14.0} & \textcolor{red}{-4.8} & \textcolor{red}{-27.0} & 
\textcolor{red}{-12.4} & \textcolor{red}{-3.4} & \textcolor{red}{-25.4} & \textcolor{red}{-10.9} & \textcolor{red}{-2.2} \\ glass0 & \textcolor{red}{-1.5} & \textcolor{blue}{1.9} & \textcolor{red}{-3.0} & \textcolor{red}{-0.5} & \textcolor{blue}{8.8} & \textcolor{blue}{7.9} & \textcolor{red}{-5.1} & \textcolor{blue}{3.5} & \textcolor{blue}{6.5} & \textcolor{red}{-4.5} & \textcolor{blue}{3.3} & \textcolor{blue}{8.9} \\ glass1 & \textcolor{red}{-7.6} & \textcolor{red}{-6.2} & \textcolor{red}{-0.8} & \textcolor{red}{-18.1} & \textcolor{blue}{3.1} & \textcolor{blue}{8.8} & \textcolor{red}{-21.6} & \textcolor{blue}{2.2} & \textcolor{blue}{8.9} & \textcolor{red}{-27.0} & \textcolor{red}{-1.8} & \textcolor{blue}{11.9} \\ heart-c & \textcolor{red}{-4.7} & \textcolor{blue}{11.1} & \textcolor{blue}{13.7} & \textcolor{red}{-8.8} & \textcolor{blue}{4.9} & \textcolor{blue}{11.4} & \textcolor{red}{-9.0} & \textcolor{blue}{1.3} & \textcolor{blue}{9.2} & \textcolor{red}{-8.8} & \textcolor{blue}{3.1} & \textcolor{blue}{9.6} \\ heart-statlog & \textcolor{red}{-8.2} & \textcolor{red}{-12.6} & \textcolor{red}{-7.0} & \textcolor{red}{-11.4} & \textcolor{blue}{2.3} & \textcolor{red}{-0.5} & \textcolor{red}{-11.4} & \textcolor{red}{-4.6} & \textcolor{blue}{0.4} & \textcolor{red}{-10.9} & \textcolor{red}{-0.3} & \textcolor{blue}{2.1} \\ hill-valley & \textcolor{red}{-8.8} & \textcolor{blue}{7.6} & \textcolor{blue}{17.4} & \textcolor{red}{-13.3} & \textcolor{blue}{12.1} & \textcolor{blue}{26.5} & \textcolor{red}{-18.1} & \textcolor{blue}{12.6} & \textcolor{blue}{29.0} & \textcolor{red}{-17.2} & \textcolor{blue}{15.7} & \textcolor{blue}{30.8} \\ ionosphere & \textcolor{red}{-4.6} & \textcolor{red}{-5.3} & \textcolor{blue}{3.6} & \textcolor{red}{-4.1} & \textcolor{red}{-2.8} & \textcolor{blue}{3.3} & \textcolor{red}{-6.4} & \textcolor{blue}{1.5} & \textcolor{blue}{1.1} & \textcolor{red}{-4.7} & \textcolor{blue}{0.1} & \textcolor{blue}{1.9} \\ kr-vs-kp & \textcolor{red}{-8.7} & 
\textcolor{red}{-6.8} & \textcolor{red}{-7.4} & \textcolor{red}{-5.9} & \textcolor{red}{-3.9} & \textcolor{red}{-3.6} & \textcolor{red}{-4.7} & \textcolor{red}{-3.0} & \textcolor{red}{-3.0} & \textcolor{red}{-3.9} & \textcolor{red}{-2.1} & \textcolor{red}{-2.0} \\ mushroom & \textcolor{red}{-0.1} & \textcolor{red}{-0.1} & \textcolor{red}{-0.1} & \textcolor{black}{-0.0} & \textcolor{black}{-0.0} & \textcolor{red}{-0.1} & \textcolor{black}{-0.0} & \textcolor{black}{-0.0} & \textcolor{black}{-0.0} & \textcolor{black}{-0.0} & \textcolor{black}{-0.0} & \textcolor{black}{-0.0} \\ pima & \textcolor{red}{-4.5} & \textcolor{blue}{0.9} & \textcolor{blue}{8.2} & \textcolor{red}{-6.1} & \textcolor{blue}{1.4} & \textcolor{blue}{9.9} & \textcolor{red}{-5.9} & \textcolor{blue}{5.0} & \textcolor{blue}{8.9} & \textcolor{red}{-5.7} & \textcolor{blue}{2.3} & \textcolor{blue}{8.5} \\ sonar & \textcolor{blue}{14.1} & \textcolor{blue}{18.6} & \textcolor{blue}{15.0} & \textcolor{blue}{13.4} & \textcolor{blue}{13.4} & \textcolor{blue}{19.4} & \textcolor{blue}{9.0} & \textcolor{blue}{13.7} & \textcolor{blue}{17.8} & \textcolor{blue}{6.3} & \textcolor{blue}{10.0} & \textcolor{blue}{16.5} \\ steel-plates-fault & \textcolor{black}{0.0} & \textcolor{blue}{1.0} & \textcolor{blue}{0.4} & \textcolor{black}{-0.0} & \textcolor{blue}{0.4} & \textcolor{blue}{0.3} & \textcolor{black}{0.0} & \textcolor{blue}{0.4} & \textcolor{blue}{0.2} & \textcolor{black}{-0.0} & \textcolor{blue}{0.1} & \textcolor{blue}{0.1} \\ tic-tac-toe & \textcolor{red}{-22.1} & \textcolor{red}{-4.9} & \textcolor{blue}{5.0} & \textcolor{red}{-26.1} & \textcolor{red}{-5.9} & \textcolor{blue}{5.4} & \textcolor{red}{-26.6} & \textcolor{red}{-5.2} & \textcolor{blue}{5.1} & \textcolor{red}{-27.1} & \textcolor{red}{-5.4} & \textcolor{blue}{5.4} \\ voting & \textcolor{red}{-6.8} & \textcolor{red}{-13.6} & \textcolor{red}{-4.1} & \textcolor{red}{-5.8} & \textcolor{red}{-7.6} & \textcolor{red}{-2.5} & \textcolor{red}{-4.4} & 
\textcolor{red}{-6.2} & \textcolor{red}{-2.0} & \textcolor{red}{-4.7} & \textcolor{red}{-4.8} & \textcolor{red}{-1.4} \\ \bottomrule \end{tabular} } \end{center} \end{table} Table~\ref{tab:fscoreVariationRealData} shows how the F-score changes when the scenario varies from a balanced dataset to an imbalanced dataset. Negative and positive numbers denote, respectively, a decrease and an increase in noise detection performance. As can be seen, the noise distribution is important when considered in combination with class imbalance. For instance, under the NAR (9:1) model (i.e., more noisy instances in the minority class than in the majority class), noise detection performance worsened when class imbalance was increased. This is the overall behavior confirmed by the negative F-score variations presented in Table~\ref{tab:fscoreVariationRealData} in the NAR (9:1) columns. On the other hand, an opposite pattern of results was observed for the NAR (1:9) model, i.e., noise detection improved when class imbalance increased. The greater number of positive F-score variations found in the NAR (1:9) columns endorses that, in imbalanced datasets, noisy instances in the majority class tend to be more easily detected than noisy instances in the minority class. No consistent pattern of performance was observed in Table~\ref{tab:fscoreVariationRealData} for the NCAR model. This indicates that, when the noise is evenly distributed in an imbalanced dataset, the particularities and difficulties of the problem itself may be more crucial to noise detection performance than the IR. \begin{figure} \caption{Variation in \textit{F-score} when the IR increases under the NAR model.} \label{fig:results_real_fscore_variation_15_1} \end{figure} Noise detection was impacted by class imbalance under the NAR model, as expected. For a better visualization of this result, Figures~\ref{fig:results_real_fscore_variation_15_1} and \ref{fig:results_real_fscore_variation_15_2} show the general behavior of noise detection under the NAR model when the IR increases.
This can be observed in the more prominent and negative bar on the left side of the graphs, in contrast to a smaller and positive bar on the right. This behavior was observed in all datasets, except for the \textit{sonar} dataset. This may imply that, for the majority of problems, the noise model characteristics and the class imbalance ratio have more influence on noise detection than the nature of the classification problem itself. \begin{figure} \caption{Variation in \textit{F-score} when the IR increases under the NAR model.} \label{fig:results_real_fscore_variation_15_2} \end{figure} \subsection{Noise Detection vs Ensemble Vote Thresholds} \label{sec:real_different_thresholds} Figures~\ref{fig:results_real_threshold_1} and \ref{fig:results_real_threshold_2} show the noise detection performance under different ensemble vote thresholds for each dataset at a 15\% noise level for the NAR and NCAR models. As can be seen, similar behavior was observed for most datasets: when the IR is increased, better noise detection is achieved with smaller threshold values under the NAR (9:1) model (i.e., more noisy instances in the minority class than in the majority class) and with higher threshold values under the NAR (1:9) model. Under the NCAR model, threshold values close to $L=5$ (majority vote) returned higher \textit{F-score} results. The above behavior is verified in a more or less pronounced way depending on the dataset. For example, in Figure~\ref{fig:results_real_threshold_1}, for the \textit{arcene} dataset, the best threshold under the NAR (9:1) model is $L = 7$ when the IR is 50:50, $L = 5$ when the IR is 30:70, and $L = 3$ for an IR of 20:80. Under the same settings, for the \textit{pima} dataset in Figure~\ref{fig:results_real_threshold_2}, the best threshold under the NAR (9:1) model is $L = 8$ when the IR is 50:50, $L = 6$ when the IR is 30:70, and $L = 4$ for an IR of 20:80. The optimal values for each dataset are different, but the general behavior is the same.
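The threshold-based decision discussed above can be sketched as follows, expressing $L$ as a percentage of the pool and flagging an instance when at least $L\%$ of the classifiers misclassify it (the helper name \texttt{flag\_noisy} is hypothetical):

```python
def flag_noisy(misclassified, L):
    """Flag an instance as noisy when at least L% of the pool misclassifies it.

    `misclassified` is one boolean per classifier in the pool
    (True if that classifier misclassified the instance).
    L = 50 corresponds to the majority-style vote, L = 100 to the consensus vote.
    """
    return 100.0 * sum(misclassified) / len(misclassified) >= L

# Pool of 10 classifiers; 7 of them misclassify the instance.
votes = [True] * 7 + [False] * 3
assert flag_noisy(votes, 50)       # flagged under the majority-style threshold
assert flag_noisy(votes, 70)       # flagged when exactly 70% misclassify
assert not flag_noisy(votes, 100)  # not flagged under the consensus threshold
```

Lowering $L$ trades precision for recall: a detector tuned for NAR (9:1) data under high imbalance would use a small $L$, while NAR (1:9) data favors a larger one.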
When varying the ensemble threshold, the experiments show that a smaller number of voting algorithms delivers better noise detection under the NAR (9:1) model, and that a greater threshold produces better performance under NAR (1:9). Finally, under the NCAR model, the majority vote performs better. Our results imply a change in the common practice adopted in the literature. The majority vote detection, widely used in related studies, corresponds to applying a default decision threshold. This is not the best option if a different noise level per class is expected. Better thresholds can be set to improve noise detection performance, taking into account aspects such as the class imbalance ratio and the noise model. \subsection{Statistical Tests} \label{sec:stat_tests} The Friedman test \citep{friedman1979} was performed in order to compare the impact of the three noise models over the 228 problems (19 datasets\footnote{The mushroom dataset was removed because of its 100\% precision.} $\times$ 3 IRs $\times$ 4 different percentages of noise). The level of significance was set to $\alpha = 0.05$, i.e., 95\% confidence. The Friedman test shows a significant difference in noise detection for the three models in certain contexts. As shown in Table~\ref{tab:results_friedmanRealSummary}, of the 152 imbalanced problems analyzed, 77.63\% (118/152) presented a significant difference in the detection results. When considering only the problems with a 20:80 IR, this number is equal to 88.16\%. On the other hand, when it comes to balanced datasets (76 cases), only 18.42\% are significantly different. These results are aligned with the hypothesis discussed in the previous section. The choice of a noise generation model is more likely to impact detection results in imbalanced problems.
\begin{figure} \caption{\textit{F-score} under different ensemble vote thresholds.\label{fig:results_real_threshold_1}} \end{figure} \begin{figure} \caption{\textit{F-score} under different ensemble vote thresholds.\label{fig:results_real_threshold_2}} \end{figure} The influence of the amount of noise on ensemble detection was also tested. Of the 57 problems for each percentage of noise, approximately half of the cases presented a significant difference (56.1\% for 5\% of noise, 54.4\% for 10\%, 63.2\% for 15\%, and 57.9\% for 20\%). Thus, the amount of noise in the data seems not to be as relevant as the IR for noise detection under different noise models. A second statistical analysis was conducted in a pairwise fashion in order to verify whether the noise models significantly improved or harmed noise detection under certain contexts. To do so, the Wilcoxon non-parametric signed-rank test with a significance level of $\alpha = 0.05$ was used on all problems. The tests were performed for each percentage of noise. As the results were equivalent (independently of the amount of noise), the following discussion refers to the noise percentage of 15\% shown in Table~\ref{tab:results_wilcoxonReal15}. Table~\ref{tab:results_wilcoxonReal15} presents a pairwise comparison of noise detection results for each noise model. W/T/L denotes the wins (better performance on noise detection), ties (equivalent performance), and losses (worse performance) produced by the noise models in the columns in comparison to the ones in the rows. For instance, the ensemble detector on problems under NAR(1:9) with 30:70 IR (column) performed 4 times better and 15 times worse (4/0/15) in comparison to the detection on problems under NAR(1:9) with 50:50 IR (row). For this same example, the $\textit{p-value} = 0.007$ implies there is a significant difference in the results. \begin{table}[!ht]
\setlength\extrarowheight{2pt} \begin{center} \caption{Summary of Friedman test results on each problem.\label{tab:results_friedmanRealSummary}} \begin{tabularx}{1\textwidth}{l|rr|rr|rr|rr|cc} \toprule \multirow{2}{*}{IR} & \multicolumn{8}{c|}{\textbf{Cases with significant difference}} & \multicolumn{2}{c}{\multirow{2}{*}{\textbf{Total per IR}}} \\ \cline{2-9} & \multicolumn{2}{c|}{\textbf{5\% of noise}} & \multicolumn{2}{c|}{\textbf{10\% of noise}} & \multicolumn{2}{c|}{\textbf{15\% of noise}} & \multicolumn{2}{c|}{\textbf{20\% of noise}} & \multicolumn{2}{c}{} \\ \hline \textbf{50:50} & 6/19 & 31.6\% & 1/19 & 5.3\% & 6/19 & 31.6\% & 1/19 & 5.3\% & 14/76 & 18.4\% \\ \textbf{30:70} & 11/19 & 57.9\% & 14/19 & 73.7\% & 12/19 & 63.2\% & 14/19 & 73.7\% & 51/76 & 67.1\% \\ \textbf{20:80} & 15/19 & 78.9\% & 16/19 & 84.2\% & 18/19 & 94.7\% & 18/19 & 94.7\% & 67/76 & 88.2\% \\ \cline{1-11} \textbf{Total} & 32/57 & 56.1\% & 31/57 & 54.4\% & 36/57 & 63.2\% & 33/57 & 57.9\% & \\ \bottomrule \end{tabularx} \end{center} \end{table} Focusing on problems under the same IR, only one case showed a significant difference, as discussed previously. For balanced data, the NAR(1:9) model (more noise in the majority class) produced a positive impact on noise detection, performing better than the detection under NCAR 15 times out of 19, albeit with a $\textit{p-value} = 0.038$. For the case of 30:70 IR, the NAR(9:1) model (more noise in the minority class) harmed detection, as its performance was worse 17 times when compared to the detection under the NCAR and NAR(1:9) models, with a very small $\textit{p-value}$. This was also verified when data was exposed to different percentages of noise. Lastly, when increasing the IR (20:80), the tests showed that noise detection is significantly improved under NAR(1:9) in comparison to the other noise models, as the ensemble detector performed better in all problems (19/0/0).
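A pairwise comparison of this kind can be carried out with SciPy's Wilcoxon signed-rank test; the paired scores below are illustrative placeholders, not the paper's data:

```python
from scipy.stats import wilcoxon

# Hypothetical paired F-scores of the detector on the same ten problems
# under two noise models.
f_ncar  = [0.70, 0.66, 0.81, 0.59, 0.73, 0.68, 0.75, 0.62, 0.71, 0.69]
f_nar19 = [0.78, 0.71, 0.86, 0.65, 0.80, 0.74, 0.82, 0.70, 0.77, 0.76]

stat, p = wilcoxon(f_ncar, f_nar19)
wins = sum(a > b for a, b in zip(f_nar19, f_ncar))
print(wins, p < 0.05)  # the second model wins on every problem (10), p < 0.05
```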
\addtocounter{table}{-1} \begin{table}[t] \setlength\extrarowheight{2pt} \begin{center} \caption{Wilcoxon test on real problems with 15\% noise in the data. W/T/L = wins/ties/losses. \textit{p-values} $< 0.05$ are highlighted.\label{tab:results_wilcoxonReal15}} \begin{footnotesize} \begin{tabular}{lll|ll|lll|lll} \toprule \multirow{2}{*}{\textbf{IR}} & \multicolumn{2}{c|}{\multirow{2}{*}{\textbf{Noise Model}}} & \multicolumn{2}{c|}{\textbf{50:50}} & \multicolumn{3}{c|}{\textbf{30:70}} & \multicolumn{3}{c}{\textbf{20:80}} \\ & \multicolumn{2}{l|}{} & \textbf{NCAR} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (1:9)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (9:1)\end{tabular}} & \textbf{NCAR} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (1:9)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (9:1)\end{tabular}} & \textbf{NCAR} & \textbf{\begin{tabular}[c]{@{}c@{}}NAR\\ (1:9)\end{tabular}} \\ \hline \multirow{6}{*}{\textbf{50:50}} & \multirow{2}{*}{\textbf{NAR(9:1)}} & \textbf{W/T/L} & 5/0/14 & 11/1/7 & 16/0/3 & 17/0/2 & 15/0/4 & 17/0/2 & 17/0/2 & 16/0/3 \\ & & \textbf{p-value} & 0.073 & 0.360 & \textbf{0.001} & \textbf{0.000} & \textbf{0.001} & \textbf{0.001} & \textbf{0.001} & \textbf{0.001} \\ & \multirow{2}{*}{\textbf{NCAR}} & \textbf{W/T/L} & & 15/0/4 & 11/0/8 & 10/0/9 & 9/0/10 & 13/0/6 & 8/0/11 & 12/0/7 \\ & & \textbf{p-value} & & \textbf{0.038} & 0.481 & 0.952 & 0.324 & 0.409 & 0.952 & 0.153 \\ & \multirow{2}{*}{\textbf{NAR(1:9)}} & \textbf{W/T/L} & & & 6/0/13 & 4/0/15 & 4/0/15 & 3/0/16 & 3/0/16 & 3/0/16 \\ & & \textbf{p-value} & & & \textbf{0.021} & \textbf{0.005} & \textbf{0.007} & \textbf{0.003} & \textbf{0.002} & \textbf{0.004} \\ \hline \multirow{6}{*}{\textbf{30:70}} & \multirow{2}{*}{\textbf{NAR(9:1)}} & \textbf{W/T/L} & & & & 17/0/2 & 17/0/2 & 16/0/3 & 16/0/3 & 17/0/2 \\ & & \textbf{p-value} & & & & \textbf{0.000} & \textbf{0.000} & \textbf{0.012} & \textbf{0.001} & \textbf{0.000} \\
& \multirow{2}{*}{\textbf{NCAR}} & \textbf{W/T/L} & & & & & 17/0/2 & 4/0/15 & 9/0/10 & 15/0/4 \\ & & \textbf{p-value} & & & & & \textbf{0.000} & \textbf{0.004} & 0.856 & \textbf{0.004} \\ & \multirow{2}{*}{\textbf{NAR(1:9)}} & \textbf{W/T/L} & & & & & & 1/0/18 & 4/0/15 & 7/0/12 \\ & & \textbf{p-value} & & & & & & \textbf{0.000} & \textbf{0.001} & \textbf{0.035} \\ \hline \multirow{6}{*}{\textbf{20:80}} & \multirow{2}{*}{\textbf{NAR(9:1)}} & \textbf{W/T/L} & & & & & & & 18/0/1 & 19/0/0 \\ & & \textbf{p-value} & & & & & & & \textbf{0.000} & \textbf{0.000} \\ & \multirow{2}{*}{\textbf{NCAR}} & \textbf{W/T/L} & & & & & & & & 19/0/0 \\ & & \textbf{p-value} & & & & & & & & \textbf{0.000} \\ \bottomrule \end{tabular} \end{footnotesize} \end{center} \end{table} \section{Conclusions} \label{sec:conclusion} Many studies have focused on data quality issues due to their importance in {ML} applications and the known fact that real-world datasets frequently contain noise~\citep{survey}. Noise can be present in a dataset's attributes and also in its classes \citep{Zhu-attr}. This work focused on class noise (also called label noise). For this type of problem, the \textit{classification noise filtering} approach is usually applied to remove data irregularities before the learning step. The most common filtering consists of using the predictions of an ensemble of algorithms, so that instances are removed upon a wrong classification \citep{Brodley}\citep{Sluban-diversity}\citep{unlabled2018}. To evaluate noise filters, simulated noise is usually injected into a dataset, and analyses can then be performed on the results \citep{GARCIA2019}.
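Such controlled noise injection can be sketched as follows for binary labels (a minimal illustration; the function and weights are hypothetical, not the study's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_noise(y, rate, weights=(0.5, 0.5)):
    """Flip a fraction `rate` of the labels of binary vector y (classes 0/1).
    weights[c] is the share of flipped labels drawn from class c:
    (0.5, 0.5) approximates NCAR; asymmetric weights approximate NAR."""
    y_out = y.copy()
    n_noisy = int(rate * len(y))
    for cls, w in zip((0, 1), weights):
        pool = np.flatnonzero(y == cls)          # index by the clean labels
        idx = rng.choice(pool, size=int(n_noisy * w), replace=False)
        y_out[idx] = 1 - y_out[idx]
    return y_out

y = np.array([0] * 80 + [1] * 20)                # imbalanced 20:80 problem
y_noisy = inject_noise(y, 0.15, (0.9, 0.1))      # most noise in majority class
print((y != y_noisy).sum())                      # number of flipped labels
```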
In \cite{survey}, three different label noise generation models for injecting noise are presented: (1) {NCAR}, in which the probability of an instance being noisy is independent of its label and attributes, (2) {NAR}, in which the probability of an instance being noisy depends on its label, and (3) {NNAR}, in which the probability of an instance being noisy also depends on its attributes. Although there are many approaches to model different noise behaviors, in many previous works \citep{Sluban-diversity} \citep{Brodley} \citep{saesSMOTE} \citep{GARCIA2019}, one type of noise is chosen over another without considering the different impacts of each. Also, despite the majority and consensus vote approaches being the most common ensemble voting schemes used for filtering noise, studies \citep{threshold2005}\citep{threshold2018} have shown that selecting adequate values for the ensemble threshold can lead to superior results. This work presented an empirical study focused on ensemble-based noise detection and its performance under three different noise generation models: NCAR, where noise is equally distributed among the classes; NAR with more noise in the majority class; and NAR with more noise in the minority class. The relation between detection performance and the injected noise model was assessed through performance measures (\textit{F-score}, Precision, Recall), considering different ratios of inserted noise and class imbalance configurations. The impact produced on filtering performance was also evaluated under different ensemble thresholds. In conclusion, we observed that noise detection was not affected by the noise model in balanced data. In contrast, when dealing with imbalanced data, we faced two different scenarios for noise detection under the NAR model. First, the addition of noise in the minority class harmed noise detection in all problems. Second, applying more noise in the majority class improved noise detection in most cases.
Furthermore, increasing the IR from 30:70 to 20:80 resulted in even worse noise detection when the minority class was the noisiest one, and the opposite was verified when the majority class had more noise. The experiments performed with different ensemble thresholds showed that \textit{majority} and \textit{consensus} voting are not always the best options. Better noise detection was achieved for the NCAR and NAR(9:1) models when the vote threshold was below 50\% of the algorithms, and for NAR(1:9) when it was above 50\%. As future work, we intend to investigate the NNAR model, evaluate other noise filtering techniques, and expand the work to multi-class problems. \section*{Acknowledgment} This research has been supported in part by the following Brazilian agencies: CAPES, CNPq and FACEPE. \end{document}
\begin{document} \begin{frontmatter} \thispagestyle{empty} \begin{abstract}[1]{Introduction to Experimental Quantum Measurement with Superconducting Qubits }{Mahdi Naghiloo, Murch Lab}{Physics }{2019}{Kater Murch} \textbf{Abstract}---Quantum technology has been rapidly growing due to its potential revolutionary applications. In particular, superconducting qubits provide a strong light-matter interaction as required for quantum computation and in principle can be scaled up to a high level of complexity. However, obtaining the full benefit of quantum mechanics in superconducting circuits requires a deep understanding of quantum physics in such systems in all aspects. One of the most crucial aspects is the concept of measurement and the dynamics of the quantum systems under the measurement process. This document is intended to be a pedagogical introduction to the concept of quantum measurement from an experimental perspective. We study the dynamics of a single superconducting qubit under continuous monitoring. We demonstrate that weak measurement is a versatile tool to investigate fundamental questions in quantum dynamics and quantum thermodynamics for open quantum systems. \end{abstract} \tables \end{frontmatter} \begin{main} \chapter{Introduction\label{ch1}} Quantum mechanics has revolutionized our understanding of nature since its development in the 20th century. Its prescription for the workings of nature is full of unexpected rules that remain counterintuitive even after over a century of confirmations. In the past decades, we have witnessed enormous progress in technology and control over quantum systems. These technologies aim to use the counterintuitive properties of quantum mechanics for real-life applications such as secure communication~\cite{gisin2007quantum}, high-precision sensing~\cite{degen2017quantum}, and information processing~\cite{wendin2017quantum}.
These ambitious and revolutionary goals have driven a tremendous effort in the implementation of quantum devices in a variety of platforms, ranging from photonics, atomic systems, and nano-mechanical structures to superconducting circuits. Each platform offers a unique capability over the others; photons are suited for transmitting quantum information, while atoms can serve as long-lived quantum memories. In this regard, superconducting circuits have gained a lot of attention for quantum computation owing to the strong light-matter interaction achievable in these circuits. Apart from computational goals, the superconducting circuit architecture is a powerful technology to explore quantum physics and can serve as a testbed for fundamental questions in science. In part, this is because the characteristics of quantum systems made of artificial atoms are rather easy to manipulate, which opens possibilities to explore non-trivial quantum systems through the versatile design and engineering of superconducting circuits. The pronounced interplay between science and engineering in superconducting circuit technology brings on active research from different perspectives, ranging from fundamental studies to practical applications. In particular, understanding the physics of open quantum systems and the concept of measurement is considered a core problem in modern physics~\cite{rotter2015review}. Open systems appear in many disciplines of science, from environmental science to social science and from atomic physics to biophysics. With recent progress in quantum technology and its applications, a deeper understanding of open quantum systems is required to face practical challenges. However, the importance of open quantum systems is not limited to practical applications.
From the fundamental point of view, many questions tie into open quantum systems in some way---questions such as how classical laws emerge from underlying quantum laws, the classical-quantum boundary~\cite{zurek1992environment,modi2012classical,tan2016quantum}, the arrow of time~\cite{dressel2017arrow,manikandan2018fluctuation,harrington2018characterizing}, and exploring quantum thermodynamics~\cite{vinjanampathy2016quantum}. The dynamics of open quantum systems cannot be described by the Schr\"odinger equation due to the interaction with the environment. This interaction results in dissipation and decoherence in quantum systems. Superconducting circuits naturally tend to interact with all available degrees of freedom, which makes them highly controllable systems yet presents a challenge to preserving quantum coherence. Therefore, one of the most active areas of research in quantum circuit technology is directed toward understanding and controlling decoherence channels and encoding quantum information in states that are protected from decoherence~\cite{houck2008controlling,gladchenko2009superconducting,gambetta2011superconducting,kockum2018decoherence,ningyuan2015time,dempster2014understanding}. Another approach to coping with dissipation and decoherence is to come up with clever designs and protocols built out of imperfect parts that make it possible to correct for imperfections and perform tasks reliably~\cite{kelly2015state,ofek2016extending,corcoles2015demonstration,reed2012realization}. From the quantum measurement point of view, if we are able to monitor the dissipation of a quantum system, we can maintain its coherence~\cite{korotkov2010decoherence,kim2012protecting}. In fact, measurement on a quantum system can be used as a resource for feedback to control dynamics~\cite{gillett2010experimental,vijay2012stabilizing}, to herald non-trivial states~\cite{sayrin2011real}, and to prepare entangled states~\cite{sorensen2003measurement,ruskov2003entanglement,roch2014observation}.
Therefore, the concept of measurement in open quantum systems is important in many ways. In particular, weak measurement enables one to continuously monitor a quantum system without destroying its quantum coherence~\cite{murch2013observing}. This provides a powerful tool to explore quantum dynamics at its most fundamental level~\cite{hacohen2016quantum, foroozani2016correlations, naghiloo2016mapping, weber2014mapping,naghiloo2017quantum}. Understanding the dynamics of continuously monitored systems in turn opens new ways for novel applications such as sensing~\cite{cujia2018watching,naghiloo2017achieving} and parameter estimation~\cite{kiilerich2016bayesian}. Also, superconducting circuits and quantum measurement techniques have a lot to offer to the newly emerging field of quantum thermodynamics~\cite{vinjanampathy2016quantum}. The hope is that understanding the dynamics of quantum systems will lead us to an understanding of the underlying thermodynamic laws in the quantum regime. In this context, the quantum system (e.g., a qubit) is in contact with the environment as a reservoir. By continuously monitoring the reservoir, we can learn about the energy exchange between the system and the reservoir. These observations are helpful for understanding the underlying thermodynamic laws and fluctuations in the system. This raises many new questions about the relevant thermodynamic quantities in the quantum regime, such as heat, work, and entropy~\cite{naghiloo2017thermodynamics}; the validity of the classical thermodynamic laws for quantum systems~\cite{brandao2015second, toyabe2010experimental}; the emergence of thermalization and irreversibility~\cite{gogolin2016equilibration,dressel2017arrow} from quantum mechanical principles; and the energy-information connection~\cite{parrondo2015thermodynamics,naghiloo2018information}. Many of these questions can be addressed by a deep understanding of open quantum system dynamics gained from quantum measurement techniques.
Finally, superconducting quantum systems can be engineered to realize non-trivial systems such as hybrid systems~\cite{kurizki2015quantum}, ``giant'' atoms~\cite{kockum2018decoherence}, engineered baths~\cite{murch2012cavity,harrington2018bath}, and non-Hermitian systems~\cite{el2018non,peng2014parity, chen2017exceptional}, where each of these systems opens new opportunities to explore unprecedented areas of physics. In particular, non-Hermitian systems which obey Parity-Time (PT) symmetry have gained a lot of attention from both theoretical \cite{bender2016pt,bender2017behavior,bender2016comment,bender2018series,bender2016analytic,bender2018p,bender2018pt} and experimental \cite{el2018non,naghiloo2019quantum} perspectives owing to their topological and nonreciprocal properties. \section{Overview} This document is intended to be a pedagogical introduction to quantum measurement with a focus on experiments in the superconducting qubit platform. A goal of this document is to provide a clear and simple picture of quantum measurement in superconducting qubit circuits for those who are new to the field. To this end, I will try to address questions I encountered when beginning this research and cover questions I have received from other students during my PhD studies. Chapter~2 provides a basic theoretical discussion of the light-matter interaction and preliminary theory for the measurement and characterization of superconducting circuits. Chapter~3 provides basic experimental knowledge about quantum measurement and superconducting circuits in close connection with the theoretical discussions of Chapter~2. Chapter~4 provides a pedagogical discussion of generalized measurements and continuous monitoring of a qubit and provides experimental procedures for two types of continuous measurements corresponding to the measurement operators $\sigma_z$ and $\sigma_-$.
Chapters~5 and 6 discuss two experiments in close connection with the pedagogical discussions of the previous chapters. In Chapter~5, we will study how measurement affects the dynamics of quantum systems. In particular, I discuss the situation where the spontaneous emission of a quantum emitter is measured by homodyne detection. Typically, spontaneous emission is associated with the sudden jump of an atom or molecule from an excited state to a lower energy state by emission of a photon. Spontaneous jump dynamics occur because most detectors are sensitive to energy quanta. However, light has both a wave and a particle nature, and here we explore how the spontaneous emission process is altered if we detect the wave rather than the particle nature of light. To do this, we interfere the spontaneously emitted light from a quantum emitter with another electromagnetic wave, measuring a specific amplitude of the emission. The dynamics of the quantum emitter under such a detection scheme are drastically different than what is observed when photons are detected, for the state of the quantum emitter can no longer simply jump between energy levels. Rather, the emitter's state takes on diffusive dynamics and follows a continuous quantum trajectory between its excited and ground state. \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\textwidth]{catcher} \caption[Photon detection vs. homodyne detection]{ \footnotesize \textbf{Photon detection vs. homodyne detection (discussed in Chapter~5):} The behavior of a quantum emitter depends on how we detect its emission. If we hire a catcher as a detector which is sensitive to the energy quanta (addressing the particle notion of light), the emitter behaves like a pitcher (spontaneous jump behavior).
However, if we instead ``listen'' to the emitter, it behaves according to the wave nature of its emitted energy (diffusive behavior).} \label{catcher} \end{center} \end{figure} Chapter~6 discusses quantum thermodynamics under the guise of Maxwell's demon. In Maxwell's thought experiment, a demon who knows the position and velocity of the molecules can sort hot and cold particles in a box, in apparent violation of the 2$^\text{nd}$ law of thermodynamics. This thought experiment revealed a profound connection between energy and information in thermodynamics and has driven a lot of theoretical and experimental studies aimed at understanding this connection in many different platforms. In Chapter~6, we study the experimental realization of Maxwell's demon in a quantum system using continuous monitoring. We show that the second law of thermodynamics can be violated by a quantum Maxwell's demon unless we consider the demon's information. In our case, this information is quantum information, which is susceptible to decoherence. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\textwidth]{maxwells_demon} \caption[Quantum Maxwell's demon]{ \footnotesize \textbf{ Quantum Maxwell's demon (discussed in Chapter~6):} We experimentally study a quantum version of Maxwell's demon who sorts particles that are in a quantum superposition of both hot and cold. We will see that the information obtained by the demon can be lost due to decoherence and inefficient detection. Image adapted from Li's group.}\label{fig:test2} \end{center} \end{figure} \chapter{The Light-Matter Interaction\label{ch2}} This chapter provides the basic theoretical concepts of the light-matter interaction. The aim of this chapter is to pedagogically introduce concepts related to the rest of this document, especially Chapter~3, where we experimentally discuss qubit-cavity characterization.
We consider the simplest example\footnote{One might think that the simplest situation is a qubit in free space. However, free space supports an infinite continuum of modes; in this regard, free space is not the simplest situation.} of the light-matter interaction, where a two-level quantum system (a qubit) interacts with only a single mode of light\footnote{A mode of light contains photons all of the same frequency, polarization, and spatial distribution.}. In practice, this situation can be achieved by placing the qubit inside a cavity that supports a discrete set of modes. By a proper choice of qubit and cavity frequencies, cavity mode geometry, and qubit placement and orientation, the qubit can effectively interact with only one of the modes of the cavity\footnote{Although this assumption works well in many practical situations, it may not be accurate enough in general. In fact, this is an issue of fundamental importance; see, for example,~\cite{malek17cutoff,gely17converg}.}\\ \section{One-dimensional cavity modes} The electromagnetic modes of a cavity can be described by Maxwell's equations in classical electrodynamics. In the next section, we discuss the proper description of an electromagnetic field in quantum mechanics. Here we focus on a one-dimensional (1D) cavity, but we will see that the result can easily be extended to higher dimensions. I follow the conventional quantization found in quantum optics textbooks (e.g., Refs.~\cite{gerr05,walls2007quantum}) and discuss the quantization of the electromagnetic field of an actual cavity (a volume bounded by perfect conductors), which is relevant to the three-dimensional (3D) architecture of cavity quantum electrodynamics (cQED)\footnote{Often in the quantum circuit literature, this discussion is introduced via quantization of an $LC$ circuit; we will discuss this when we study the qubit.
In this chapter we will see theoretically why a cavity bounded by superconducting walls is an $LC$ circuit, and later study this physically in Chapter~\ref{ch3} (see Fig.~\ref{fig:rect_cav}).}. In order to quantize the electromagnetic field, we may solve Maxwell's equations for a given set of boundary conditions and identify a corresponding canonical position $q$ and canonical momentum $p$. We then transition to the quantum case by promoting $q$ and $p$ to operators\footnote{This is a convenient way to quantize photons, which are massless particles. For ``massive'' particles (e.g., an electron in a box) one can solve the Schr\"odinger equation.}. \begin{figure}[ht] \centering \includegraphics[width = 0.7\textwidth]{cavity_1d.pdf} \caption[One-dimensional cavity]{ {\footnotesize \textbf{One-dimensional cavity:} Two infinite superconducting walls separated by a distance $L$ form a cavity that supports a discrete number of electromagnetic modes in the $z$-dimension (the second mode is shown). Due to the translational symmetry in the $x$ and $y$ directions, the electromagnetic fields are only functions of $z$. For simplicity, we assume the electric field (red lines) is polarized along the $x$-axis and consequently the magnetic field (blue lines) is along the $y$-axis.}} \label{fig:1Dcavity} \end{figure} For a one-dimensional cavity, consider a pair of infinite perfectly conducting walls separated by a distance $L$ along the $z$-direction, as depicted in Figure~\ref{fig:1Dcavity}. This configuration can be considered one-dimensional because we have continuous translational invariance in the $x$ and $y$ dimensions. Therefore the electric and magnetic fields only depend on the $z$-coordinate. For simplicity, we assume that the electric field is polarized along the $x$-axis, which implies that the magnetic field is only along the $y$-axis.
This is an empty cavity with no external current or charge source; therefore Maxwell's equations read \begin{subequations} \begin{eqnarray} \triangledown \times \vec{E} = - \frac{\partial \vec{B}}{\partial t } & \rightarrow & \frac{\partial E_x(z,t)}{\partial z } = - \frac{\partial B_y(z,t)}{\partial t },\\ \triangledown \times \vec{B} = \varepsilon_0 \mu_0 \frac{\partial \vec{E}}{\partial t } & \rightarrow & - \frac{\partial B_y(z,t)}{\partial z } = \varepsilon_0 \mu_0 \frac{\partial E_x(z,t)}{\partial t },\\ \triangledown \cdot \vec{E} = 0 & \rightarrow & \frac{\partial E_x(z,t)}{\partial x } = 0,\\ \triangledown \cdot \vec{B} = 0 & \rightarrow & \frac{\partial B_y(z,t)}{\partial y } = 0. \end{eqnarray} \label{eq:maxwells} \end{subequations} Given perfect conducting walls, the electric field is required to vanish at the boundaries: $E_x(z=0,t)=0$ and $E_x(z=L,t)=0$. One can show that the solutions for the electric and magnetic fields inside the cavity are \begin{subequations}\label{eq:EandB} \begin{eqnarray} E_x(z,t) &=& \mathcal{E} \ q(t) \sin(k z), \label{eq:Ex}\\ B_y(z,t) &=& \mathcal{E} \ \frac{\mu_0 \varepsilon_0}{k} \dot{q}(t) \cos(k z). \label{eq:By0} \end{eqnarray} \end{subequations} The normalization constant $\mathcal{E}$ is conveniently set to $\mathcal{E} =\sqrt{\frac{2 \omega_c^2}{V \epsilon_0 }}$, where $V$ is the effective volume of the cavity\footnote{Here the constant $\mathcal{E}$ is defined in a way that the total energy in the cavity takes the compact form of Equation~\eqref{eq:H_singmode}, which conveniently ensures that $\hat{q}$ and $\hat{p}$ obey the canonical commutation relation $[ \hat{q}, \hat{p}]=i \hbar$.}. The parameter $k=m \pi/L$, $m=1, 2, \dots$, is the wave number corresponding to the frequency $\omega_c=\frac{k}{\sqrt{\mu_0 \epsilon_0}}$.
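As a quick consistency check (a step the text leaves implicit), substituting Eq.~\eqref{eq:EandB} into Faraday's law fixes the time dependence of $q(t)$:
\begin{eqnarray*}
\frac{\partial E_x}{\partial z} = \mathcal{E}\, k\, q(t) \cos(k z), \qquad
-\frac{\partial B_y}{\partial t} = -\mathcal{E}\, \frac{\mu_0 \varepsilon_0}{k}\, \ddot{q}(t) \cos(k z),
\end{eqnarray*}
so equality of the two requires $\ddot{q} = -\frac{k^2}{\mu_0 \varepsilon_0}\, q = -\omega_c^2\, q$: the mode amplitude oscillates harmonically at $\omega_c$, consistent with $q(t)=\sin(\omega_c t+\phi)$.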
The function $q(t)$ describes the time evolution of the mode and has the dimension of length\footnote{The actual form of $q$ is $q(t)=\sin(\omega t +\phi)$. For now, we prefer to represent it implicitly by $q(t)$; we will see that it acts as the canonical position.}. Each integer value of $m$ corresponds to one mode of the cavity. Figure~\ref{fig:1Dcavity} shows the electric and magnetic fields for the second mode of the cavity ($m=2$). The total electromagnetic energy stored in one mode can be written as \begin{eqnarray} H = \int dV \left( \ \ \frac{\epsilon_0}{2} | E_x(z,t)|^2 + \frac{1}{2 \mu_0} |B_y(z,t)|^2 \ \ \right). \label{eq:H_singlemode} \end{eqnarray} By substituting Equation~\eqref{eq:EandB} into \eqref{eq:H_singlemode}, one can show that the total energy is equal to \begin{eqnarray} H = \frac{1}{2} \left[ p^2(t) + \omega^2_c q^2(t) \right], \label{eq:H_singmode} \end{eqnarray} where $p(t)=\dot{q}(t)$. From Eq.~\eqref{eq:H_singmode}, it is apparent that the energy of an electromagnetic mode is analogous to the energy of a classical harmonic oscillator if we consider $q(t)$ and $p(t)$ as the canonical position and momentum. Having identified the canonical position and momentum, the Hamiltonian may be treated quantum mechanically by promoting the canonical parameters to operators ($p,q \longrightarrow \hat{p}, \hat{q}$). This results in a \emph{quantum} Hamiltonian for a harmonic oscillator: \begin{eqnarray} \hat{H} = \frac{1}{2} \left[ \hat{p}^2(t) + \omega^2_c \hat{q}^2(t) \right]. \label{eq:H_singmodeq} \end{eqnarray} Therefore, we may conclude that each mode of the cavity acts as a quantum harmonic oscillator\footnote{In this transition, we may keep/drop the time dependence to work in the Heisenberg/Schr\"odinger picture.}.
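The substitution leading to Eq.~\eqref{eq:H_singmode} uses the spatial averages $\overline{\sin^2(kz)}=\overline{\cos^2(kz)}=\tfrac{1}{2}$ over the mode volume; the normalization $\mathcal{E}$ is chosen precisely so that the volume factors cancel:
\begin{eqnarray*}
\int dV\, \frac{\epsilon_0}{2}\,|E_x|^2 &=& \frac{\epsilon_0}{2}\,\mathcal{E}^2 q^2\,\frac{V}{2} = \frac{1}{2}\,\omega_c^2\, q^2,\\
\int dV\, \frac{1}{2\mu_0}\,|B_y|^2 &=& \frac{1}{2\mu_0}\,\mathcal{E}^2\,\frac{(\mu_0\varepsilon_0)^2}{k^2}\,\dot{q}^2\,\frac{V}{2} = \frac{1}{2}\,\dot{q}^2,
\end{eqnarray*}
where we used $\mathcal{E}^2 = 2\omega_c^2/(V\epsilon_0)$ and $\omega_c^2 = k^2/(\mu_0\epsilon_0)$. The sum of the two terms reproduces $H=\tfrac{1}{2}\left(p^2+\omega_c^2 q^2\right)$.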
Note that in the classical description of Equation~\eqref{eq:EandB}, we already found that the cavity has discrete modes. However, in that picture each mode could have a continuous amount of energy. The transition to a quantum mechanical description happens in Equation~\eqref{eq:H_singmode} $\to$~\eqref{eq:H_singmodeq}, which results in quantization of the energy spectrum of each mode. To see this, it is convenient to define the non-Hermitian operators\footnote{Here $\hbar$ is introduced, indicating that we have entered the quantum regime. However, we set $\hbar=1$ throughout this thesis except in a few potentially confusing situations.} \begin{subequations}\label{eq:aadag} \begin{eqnarray} \hat{a} &=& \frac{1}{\sqrt{2 \omega_c}} ( \omega_c \hat{q} + i \hat{p}), \\ \hat{a}^{\dagger} &=& \frac{1}{\sqrt{2 \omega_c}} ( \omega_c \hat{q} - i \hat{p}), \end{eqnarray} \end{subequations} which are the annihilation and creation operators for a photon in the corresponding mode of the cavity and obey the commutation relation $[\hat{a} ,\hat{a}^{\dagger} ]=1$. The electric and magnetic fields, which are now operators, can be represented in terms of $\hat{a}$ and $\hat{a}^{\dagger}$ as \begin{subequations}\label{eq:EBaadag} \begin{eqnarray} \hat{E}_x(z,t) &=& \mathcal{E}_0 (\hat{a} + \hat{a}^{\dagger} ) \sin(k z), \label{eq:Eaadag} \\ \hat{B}_y(z,t) &=& i \mathcal{B}_0 (\hat{a} - \hat{a}^{\dagger} ) \cos(k z). \label{eq:Baadag} \end{eqnarray} \end{subequations} The Hamiltonian of Eq.~\eqref{eq:H_singmodeq} also takes a compact form in terms of $\hat{a}$ and $\hat{a}^{\dagger}$, \begin{eqnarray} \hat{H}= \omega_c (\hat{a}^{\dagger}\hat{a} + \frac{1}{2} ) = \omega_c (\hat{n} + \frac{1}{2} ), \label{eq:Haadag} \end{eqnarray} where the operator $\hat{n}=\hat{a}^{\dagger}\hat{a} $ is the \textit{number} operator.
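The spectrum of this Hamiltonian can be checked numerically in a truncated Fock space (a small, self-contained sketch; the truncation size $N$ is an arbitrary choice):

```python
import numpy as np

N = 6                                         # Fock-space truncation (toy size)
omega_c = 1.0
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator, number basis
adag = a.T                                    # creation operator

# [a, a^dag] equals the identity, up to the unavoidable truncation
# error in the last diagonal entry.
comm = a @ adag - adag @ a

H = omega_c * (adag @ a + 0.5 * np.eye(N))
print(np.diag(H))  # -> [0.5 1.5 2.5 3.5 4.5 5.5], i.e. E_n = omega_c (n + 1/2)
```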
Knowing the Hamiltonian for the electromagnetic field of a single cavity mode, we can describe the state of the cavity by solving the corresponding eigenvalue problem. Considering the Hamiltonian \eqref{eq:Haadag} we have
\begin{eqnarray}
\hat{H} |n\rangle = E_n |n\rangle , \hspace{0.5cm} n=0,1,2,... \label{eq:hnen}
\end{eqnarray}
where $\{ |n \rangle \}$ are the \emph{photon number states} or \emph{Fock states}, the energy eigenstates of the single-mode cavity field with corresponding energies $E_n= \omega_c (n+\frac{1}{2})$. The photon-number states $\{|n\rangle \}$ form a complete basis for describing any arbitrary state of the cavity. That means at any given time, the cavity is either in one of the states $|n\rangle$ or in some linear superposition of them, $\sum_n c_n|n \rangle$. In general, however, the cavity can be in a mixed state, an incoherent superposition of Fock states (such as a thermal state), which is conveniently represented by the density matrix $\rho=\sum_n P_n |n \rangle \langle n|$.
\subsection{How to visualize the state of light}
We may describe the quantum state of the light inside the cavity by a wave function $|\psi\rangle$, which can be represented in any arbitrary basis, e.g. the photon-number basis, $|\psi\rangle=\sum_n c_n|n \rangle$. Now the question is: what is the best way to characterize and visualize $|\psi\rangle$? One way is to look at the expectation values of the electric and magnetic fields, $\langle \psi|E|\psi \rangle$ and $\langle \psi|B|\psi \rangle$, and their fluctuations.
In the previous section we learned that the electric and magnetic fields are quantum objects described by operators, Eq.~\eqref{eq:EBaadag}. From these equations, it is apparent that the electric and magnetic field operators are directly related to the canonical position and momentum, respectively:
\begin{subequations}\label{eq:EB_prop_xp}
\begin{eqnarray}
\hat{E} \propto (\hat{a} + \hat{a}^{\dagger} ) \propto \hat{q} \label{eq:E_prop_x} \\
\hat{B} \propto (\hat{a} - \hat{a}^{\dagger} ) \propto \hat{p} \label{eq:B_prop_p}
\end{eqnarray}
\end{subequations}
The only difference between the operators $\{\hat{E},\hat{B}\}$ and $\{\hat{q},\hat{p}\}$ is an extra spatially dependent factor in the electromagnetic operators, which depends on the details of the geometry of the system\footnote{The spatial dependence has to do with the geometry of the problem, which sets the spatial properties of all photons in the same way. Let me explain this with a question: what is the difference between a Fock state, say $|1\rangle$, of a cylindrical cavity and of a rectangular cavity? There is no difference. They both represent having one photon in a cavity. But if you were asked about the spatial probability distribution of that photon inside the cavity, then the answer indeed depends on the geometry of each cavity. Later, when we discuss the qubit placement inside the cavity, we will see that this spatial dependence comes into play implicitly in the coupling between cavity and qubit.}. Therefore, a general way to visualize the state of light, regardless of the geometry of the cavity, is to look at the probability distribution of photons in phase space, $W(q,p)$. This ``quasi-probability'' distribution\footnote{It is called ``quasi-probability'' because, unlike a normal probability distribution, the Wigner function may be negative for non-classical light.} is known as the \emph{Wigner function}.
There are several equivalent representations of the Wigner function in different bases. For example, in the canonical position basis $\{|q\rangle\}$, the Wigner function of a given pure state $|\psi\rangle$ is defined as
\begin{eqnarray}
W(q,p) =\frac{1}{2\pi } \int_{-\infty}^{+\infty} \langle q+x/2| \psi\rangle \langle \psi | q-x/2 \rangle e^{+ipx} dx. \label{eq:wigner}
\end{eqnarray}
In this section, we will see that the Wigner function has an intuitive distribution for classical light (e.g. coherent light, thermal light) but is somewhat nonintuitive for non-classical light (e.g. a single-photon state). We now briefly discuss a few common states of light for a single mode of a cavity.\\
\emph{\textbf{Fock state}}-- As introduced earlier, Fock states, or photon-number states, are the eigenstates of the quantum harmonic oscillator. Thus they have the simplest representation in the photon-number basis\footnote{They are simple in terms of representation, but experimentally the preparation of a cavity in a Fock state is not simple~\cite{houck2007generating}.} and describe the situation where exactly $n$ photons exist in the cavity,
\begin{eqnarray}
|\psi\rangle = |n\rangle \hspace{0.5cm}& &\hspace{0.5cm} \mbox{(Fock state)} \label{eq:fockstate}
\end{eqnarray}
including the \textit{vacuum} state $|0\rangle$, where there is no photon in the cavity\footnote{There is no clear spatial visualization of photon-number states inside a cavity. But for our purpose one may form some sort of visualization by combining both notions of light, wave and particle.
In that sense, one can imagine that each photon is a packet of energy extended throughout the cavity, so that its spatial probability distribution follows the distribution of the energy in that mode. A conventional way to characterize the state of the light is to calculate its \textit{Wigner function}, which is in effect a probability distribution as a function of the canonical position and momentum, but it does not give any visualization in real space.}. Photon-number states are orthogonal to each other, $\langle n | m \rangle = \delta_{n,m}$, which means that, experimentally, one should be able to distinguish $|n\rangle$ from $|m\rangle$ without any ambiguity. Considering this orthogonality, it is easy to check that the expectation values of the electromagnetic field operators (Eq.~\ref{eq:EBaadag}) for photon-number states are zero regardless of the number of photons. But the expectation value of the electromagnetic field squared (e.g. $\langle E^2 \rangle$) and the electromagnetic fluctuations (e.g. $\Delta E= \sqrt{\langle E^2 \rangle - \langle E \rangle^2}$) are nonzero even for the vacuum state\footnote{However, $\langle E \rangle$ and $\langle B \rangle$ for a superposition of two or more Fock states can be non-zero.}.\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~1:} Show that $\langle E \rangle=0$ and $\langle B \rangle=0$ for a Fock state $|n\rangle$, but $\langle E^2 \rangle\neq0$, $\langle B^2 \rangle\neq0$. What is the electric field uncertainty $\Delta E$ for the vacuum state?
}} For example, the Wigner functions for the states $|0\rangle$ and $|1\rangle$ have the following form:
\begin{eqnarray}
|0\rangle & \to & W_0(q,p) =\frac{1}{2\pi } e^{-(q^2+p^2)} \label{eq:W0}\\
|1\rangle & \to & W_1(q,p) =\frac{1}{2\pi } (2q^2+2p^2-1) e^{-(q^2+p^2)}. \label{eq:W1}
\end{eqnarray}
It is relatively easy to find a classical interpretation for a Wigner function when it is nowhere negative. For that, just recall that $q$ and $p$ are related to the electric and magnetic fields. For example, the vacuum state (Eq.~\eqref{eq:W0}), depicted in Fig.~\ref{fig:wigner_fock}a, shows that the probability is maximum at $q=0, p=0$, corresponding to zero electric and magnetic field. But there is some probability for a non-zero electromagnetic field around zero, which comes from vacuum fluctuations and accounts for the vacuum energy $\omega_c/2$. So even an empty cavity has some amount of energy, and the electric and magnetic fields fluctuate around zero\footnote{This makes the vacuum state non-classical.}.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.9\textwidth]{wigner0_1.pdf}
\caption[Wigner distribution for photon-number states]{ {\footnotesize \textbf{Wigner distribution for photon-number states:} \textbf{a}, The vacuum state $|0\rangle$ has a Gaussian distribution centered at the origin of phase space. \textbf{b}, The single-photon state $|1\rangle$ exhibits negative quasi-probabilities around the origin.}} \label{fig:wigner_fock}
\end{figure}
However, the Wigner distribution is not very intuitive for photon-number states other than the vacuum state. For example, as is apparent from Equation~\eqref{eq:W1} (also depicted in Fig.~\ref{fig:wigner_fock}b), the Wigner function is negative in some region for the state $|1\rangle$.
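The negativity of the single-photon Wigner function is easy to see by evaluating the two expressions above directly. A minimal sketch (using the same prefactor convention as the formulas above):

```python
import numpy as np

def W0(q, p):
    """Wigner function of the vacuum state |0>."""
    return np.exp(-(q**2 + p**2)) / (2 * np.pi)

def W1(q, p):
    """Wigner function of the single-photon state |1>."""
    return (2 * q**2 + 2 * p**2 - 1) * np.exp(-(q**2 + p**2)) / (2 * np.pi)

# The vacuum is positive everywhere and peaks at the origin
print(W0(0.0, 0.0))   # 1/(2*pi) ≈ 0.159

# The single-photon state dips negative at the origin: W1(0,0) = -1/(2*pi)
print(W1(0.0, 0.0))   # ≈ -0.159

# Away from the origin (2q² + 2p² > 1) the single-photon Wigner function is positive again
print(W1(2.0, 0.0) > 0)  # True
```

The sign change at $2q^2+2p^2=1$ is exactly the negative ring around the origin seen in the figure.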
It is hard to interpret a negative probability density; thus states with negative Wigner functions are called non-classical states. Note that the photon-number states are eigenstates of the harmonic oscillator Hamiltonian (Eq.~\ref{eq:Haadag}), so the Fock-state Wigner functions are stationary and do not evolve in time. It is worth mentioning here some common operational relations for Fock states:
\begin{subequations}\label{eq:aadag_fock}
\begin{eqnarray}
\hat{a} |n\rangle &=& \sqrt{n} |n-1\rangle \\
\hat{a}^{\dagger} |n\rangle &=& \sqrt{n+1} |n+1\rangle \\
\hat{n} |n\rangle &=& \hat{a}^{\dagger} \hat{a} |n\rangle = n |n\rangle
\end{eqnarray}
\end{subequations}
where $\hat{a}$ ($\hat{a}^{\dagger}$) annihilates (creates) a photon and $\hat{n}$ leaves the state intact and gives the number of photons. With that, let's finish the discussion of Fock states with a ``counterintuitive'' question.\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~2:} Consider a situation where the single cavity mode contains a superposition of two Fock states described by $|\psi\rangle = \sqrt{0.99} |0\rangle + \sqrt{0.01} |100\rangle$. What is the average number of photons in the cavity? If you annihilate a photon by acting with the annihilation operator $\hat{a}$ on this state, how many photons remain in the cavity? Interpret the result.
}}
\emph{\textbf{Coherent state}}-- One of the most common types of light is \textit{coherent} light, which is also known as classical light. In fact, the output of a laser or a signal generator is coherent light. Experimentally, one can simply send the output of a signal generator at the right frequency into a cavity to produce a coherent state in the cavity.
The coherent state can be represented in the photon-number basis as
\begin{eqnarray}
|\psi\rangle = |\alpha\rangle = \sum_n c_n |n\rangle \hspace{0.2cm},\hspace{0.2cm} c_n=e^{-|\alpha|^2/2} \frac{\alpha^n}{\sqrt{n!}}, \label{eq:coherent}
\end{eqnarray}
where $c_n$ indicates the contribution of each photon-number state to the coherent state. The parameter $\alpha$ is a constant\footnote{Note that $\alpha=|\alpha|e^{i\phi}$ can be any complex number. We will later see that the phase $\phi$ has a very simple meaning (the phase of the oscillations) when we discuss the coherent state in analogy with a classical oscillator.} whose magnitude is related to the average number of photons, $\langle \hat{n} \rangle =|\alpha|^2$, of the coherent state $|\alpha\rangle$. In Figure~\ref{fig:coh_c_n} we plot $c_n$ versus $n$ for two different values of $\alpha$.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.8\textwidth]{coherent_c_n2.pdf}
\caption[Photon number distributions for coherent states]{ {\footnotesize \textbf{Photon number distributions for coherent states:} The blue (red) distribution shows the photon number distribution for a coherent state with average photon number $\bar{n}=1/4\ (\bar{n}=16)$. The photon number distribution for a higher average number of photons looks more like a Gaussian distribution.}} \label{fig:coh_c_n}
\end{figure}\\
The blue distribution is for $\alpha=1/2$, which corresponds to an average number of photons $\bar{n}= \langle n \rangle =1/4$. That means if we measure the number of photons in the cavity, we mostly ($c_0^2=0.88^2 \sim 0.78$) find zero photons, but on average we get 1/4 photon.
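The Poissonian photon statistics quoted here are quick to reproduce numerically. A small sketch (the truncation at 60 photons is an arbitrary choice, more than enough for these amplitudes):

```python
import numpy as np
from math import factorial, exp

def photon_probs(alpha, nmax=60):
    """Photon-number probabilities |c_n|^2 of a coherent state (a Poisson distribution)."""
    return np.array([exp(-abs(alpha)**2) * abs(alpha)**(2 * n) / factorial(n)
                     for n in range(nmax + 1)])

P = photon_probs(0.5)                  # alpha = 1/2, i.e. n_bar = 1/4
print(P.sum())                         # ≈ 1.0 (normalized)
print(P[0])                            # ≈ 0.78, probability of finding zero photons
print((np.arange(len(P)) * P).sum())   # ≈ 0.25 = |alpha|^2
```

The printed values reproduce the numbers in the text: $c_0^2 = e^{-1/4} \approx 0.78$ and $\bar{n} = |\alpha|^2 = 1/4$.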
The red distribution shows the distribution of photon-number states for a coherent state that has 16 photons on average\footnote{It is important to distinguish coherent light from other, incoherent distributions of photons. It is possible that incoherent light gives the same photon distribution as coherent light does, but a coherent state requires a definite relative phase between the Fock states. For example, a qubit evolves quite differently when interacting with coherent light versus incoherent light, even if they have the same photon number distribution.}. The fact that the distribution for a higher average number of photons looks more like a Gaussian follows from the \textit{central limit theorem} for a Poisson distribution.\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~3:} Show that in the limit $\alpha\gg 1$ the photon distribution $c_n$ approaches a Gaussian distribution centered at $|\alpha|^2$ with variance $|\alpha|^2$.
}}
It is easy to show that the coherent state is an eigenstate of the annihilation operator,
\begin{eqnarray}
\hat{a} | \alpha \rangle =\alpha | \alpha \rangle. \label{eq:alpha_alpha_a_alpha}
\end{eqnarray}
However, since $\hat{a}$ is a non-Hermitian operator, the corresponding eigenstates $\{|\alpha\rangle\}$ do not form an orthogonal basis\footnote{Two coherent states $|\alpha\rangle$ and $|\beta\rangle$ are orthogonal only in the limit $|\alpha - \beta| \gg1$.}. Unlike the photon-number states, the coherent state is not an eigenstate of the Hamiltonian \eqref{eq:Haadag}; therefore it has a nontrivial time evolution\footnote{Here we assume that we are in the Schr\"odinger picture, which is more intuitive and convenient for discussing the evolution of the system. However, calculating expectation values is often more straightforward in the Heisenberg picture.}.
But it turns out that the time evolution of a coherent state is simply a rotation in phase space.\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~4:} Show that a coherent state remains a coherent state under time evolution but $\alpha$ acquires a phase: $|\alpha\rangle (t)= |\alpha_t\rangle$, where $\alpha_t=e^{-i \omega_c t} \alpha$.
}}
Now it is time to discuss why coherent light is often considered classical light. As we see in Equation~\eqref{eq:coherent}, a coherent state is indeed a superposition of quantized photon number states. But it turns out that most of its characteristics can be understood in close analogy with classical light. In other words, when a cavity is populated with coherent light, the behavior of the cavity corresponds to classical oscillatory motion. For example, by considering Equation~\eqref{eq:alpha_alpha_a_alpha} and the fact that $|\alpha\rangle (t)= |e^{-i \omega_c t} \alpha \rangle$, it is easy to show that the expectation values of the electromagnetic field (Eq.~\ref{eq:EBaadag}) for a coherent state are non-zero and oscillatory in time,
\begin{subequations}
\begin{eqnarray}
\langle \alpha_t| E |\alpha_t\rangle &=& 2 \mathcal{R}[\alpha_t]\, \mathcal{E}_0 \ \sin(k z) \nonumber\\
&=& 2|\alpha| \mathcal{E}_0 \ \sin(k z) \ \cos(\omega_c t-\phi), \label{eq:E_exp_coherent}\\
\langle \alpha_t| B |\alpha_t\rangle &=& -2 \mathcal{I}[\alpha_t]\, \mathcal{B}_0 \ \cos(k z) \nonumber \\
&=& 2|\alpha| \mathcal{B}_0 \ \cos(k z)\ \sin(\omega_c t-\phi), \label{eq:B_exp_coherent}
\end{eqnarray}
\end{subequations}
where we used $\alpha_t=e^{-i\omega_c t} \alpha$ and the fact that $\alpha$ itself is a complex number, $\alpha=|\alpha|e^{i\phi}$.
You may notice that the expectation values of the electric and magnetic fields are similar to the classical solutions of Maxwell's equations (Eq.~\ref{eq:EandB}). Therefore, the quantum description of the coherent state is consistent with our classical understanding of the oscillating electric and magnetic modes of a harmonic oscillator. In addition, one can show that the coherent state has minimum quantum fluctuations, equal to the vacuum fluctuations. This is the minimum uncertainty allowed by the Heisenberg uncertainty principle, assuming no squeezing. These fluctuations can be considered an intrinsic uncertainty in determining both the amplitude and the phase of the electromagnetic field.\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~5:} Show that the coherent state has minimum fluctuations (like a vacuum state) in each quadrature, $\langle ( \Delta I)^2 \rangle=\langle ( \Delta Q)^2 \rangle= 1/4$, where $I=(\hat{a} + \hat{a}^{\dagger})/2$, $Q=(\hat{a} - \hat{a}^{\dagger})/2i$.
}}
Thus the Wigner function of a coherent state is a vacuum Wigner function displaced in phase space (Fig.~\ref{fig:wigner_coherent}a) by an amount $\alpha$, which can be written in the form
\begin{eqnarray}
|\alpha \rangle & \to & W_{\alpha}(q,p) =\frac{1}{2\pi } \exp( -(q-\mathcal{R}[\alpha])^2-(p-\mathcal{I}[\alpha])^2), \label{eq:Wc}
\end{eqnarray}
where $\mathcal{R}[\alpha]$ ($\mathcal{I}[\alpha]$) is the real (imaginary) part of $\alpha$.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.9\textwidth]{wigner0_coh.pdf}
\caption[Wigner function for a coherent state]{ {\footnotesize \textbf{Wigner function for a coherent state:} \textbf{a}, The Wigner function for a coherent state is a Gaussian distribution displaced from the origin by an amount $\alpha$.
The coherent state has minimum uncertainty in each quadrature, like a vacuum state. \textbf{b}, The evolution of the coherent state under the harmonic oscillator Hamiltonian is simply a rotation around the origin.}} \label{fig:wigner_coherent}
\end{figure}
As illustrated in Figure~\ref{fig:wigner_coherent}b, the coherent state rotates around the origin of phase space at frequency $\omega_c$. That means the energy swings back and forth between electric (potential) and magnetic (kinetic). One may therefore notice that the coherent state's Wigner function is very similar to the classical ``phasor diagram'' of a noisy signal. The difference is that for classical signals, we assume one can in principle reduce the noise and make it arbitrarily small. But for the coherent state the ``noise'' in each quadrature is quantum noise, originating from vacuum fluctuations as described by the Heisenberg uncertainty principle. In the limit of a large average photon number, the noise (either classical or quantum) is negligible compared to the actual signal. Therefore, the classical picture and quantum picture completely overlap in that limit. It is worth mentioning here that the coherent state plays a very important role in quantum measurement. In particular, a precise measurement of the phase of a coherent signal is an essential component of most quantum measurement experiments. Usually, we are not interested in the natural oscillation frequency of a coherent signal. Therefore we go to a frame that rotates exactly at that frequency. In that \emph{rotating frame}, the coherent state no longer rotates in phase space. The coherent state Wigner distribution lies along $q$ or $p$ or somewhere in between, and remains steady. So in the rotating frame we freeze the time evolution of the oscillator.
For simplicity let us assume the oscillator state is along the $q$ axis, which means all the energy is potential (like a stretched spring or a pendulum at its turning point). In the rotating frame, the coherent state is stationary and the phase is fixed unless, for any reason, the coherent state experiences an external phase shift (a kick) on top of its normal phase evolution due to a perturbative interaction. In such a case, the coherent state rotates to a new place in phase space. We can easily detect that displacement in the rotating frame\footnote{Rotating frames are useful in many ways, both in theory and in experiment. Theoretically, it is sometimes easier to solve a problem in a rotating frame, or clearer to see the dynamics of a system. Experimentally, as we will see in the next chapter, it is very natural and easy to work in a rotating frame. Otherwise, it wouldn't be possible to precisely measure the phase shifts of a rapidly rotating signal (typically $\omega_c/2\pi \sim 5$ GHz).} (see Figure~\ref{fig:sig_kick_rf}). We will see in the next chapter that this type of phase detection is the essence of qubit readout measurement.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.9\textwidth]{signal_kick_phase_shift.pdf}
\caption[Phase shifts for coherent state in the rotating frame]{ {\footnotesize \textbf{Phase shifts for a coherent state in the rotating frame:} The phase shift of a coherent signal is easily detectable in the rotating frame.}} \label{fig:sig_kick_rf}
\end{figure}
\section{Qubit}
Experimentally there are many ways to realize a qubit. Here we discuss theoretically how to realize a qubit with a superconducting circuit. In our circuit toolbox we have only three elements to work with: capacitors, inductors, and Josephson junctions (JJs)\footnote{Of course, we would like to avoid resistors in our toolbox, but dissipation is something that comes for free.
Even in superconducting circuits, there are various ways that energy can dissipate, e.g. photon emission/radiation, or coupling to phonons.} (Fig.~\ref{fig:QED_toolbox}).
\begin{figure}[ht]
\centering
\includegraphics[width = 0.58\textwidth]{toolbox.pdf}
\caption[Circuit QED toolbox]{ {\footnotesize \textbf{Circuit QED toolbox:} Quantum circuit technology relies on these three elements. The required nonlinearity comes from the JJ, which is basically a dissipationless nonlinear inductor.}} \label{fig:QED_toolbox}
\end{figure}
The most important element is the Josephson junction, which introduces the circuit nonlinearity necessary to form a qubit. The idea is to make a nonlinear (anharmonic) oscillator out of a Josephson junction and use only the two lowest energy states as a qubit\footnote{Normally in the circuit QED literature the transmon discussion is introduced via a circuit called the Cooper pair box. The transmon is a Cooper pair box in the limit of a large shunt capacitance---see a nice discussion in Ref.~\cite{schu07thesis}. Here, I approach the discussion of the qubit by starting from the transmon as a nonlinear oscillator.}.
\subsection{Josephson junctions}
The Josephson junction (JJ) consists of a thin ($\sim 1$ nm) layer of insulator sandwiched between two superconducting slabs (Fig.~\ref{fig:jj}). The superconducting leads consist of many atoms, but owing to their superconducting state they can each be described by a single complex number, $\Psi_{1,2}= \sqrt{n_{1,2}} e^{i \theta_{1,2}}$, where $n_{1,2}$ and $\theta_{1,2}$ indicate the number of Cooper pairs and the phase of the superconducting order parameter on each side of the junction.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.4\textwidth]{jj.pdf}
\caption[Josephson junction]{ {\footnotesize \textbf{Josephson junction:} The JJ consists of two superconductors separated by a thin layer of insulator.
The Cooper pairs on each side can tunnel through the insulator and create a supercurrent $I$. Remarkably, the current can be non-zero even when $V=0$. The highly nonlinear $I$-$V$ characteristics of the JJ can be exploited for quantum circuits.}} \label{fig:jj}
\end{figure}
It has been shown\footnote{There is a straightforward derivation of the Josephson equations based on microscopic BCS theory; see for example Ref.~\cite{tinkham2004introduction}.} that, effectively, a JJ can be thought of as a dissipationless nonlinear inductor with the $I$-$V$ characteristics
\begin{subequations}\label{eq:jjI}
\begin{eqnarray}
I&=&I_0 \sin(\delta)\\
V&=&\frac{\Phi_0}{2 \pi} \dot{\delta},
\end{eqnarray}
\end{subequations}
where $\delta=\theta_2-\theta_1$ and $I_0$ is the critical current, above which the JJ becomes a normal dissipative junction. One can then infer the effective inductance of the Josephson junction,
\begin{eqnarray}
V=L \frac{d I}{d t} \to \frac{\Phi_0}{2 \pi} \dot{\delta} = L I_0 \dot{\delta} \cos(\delta) \to L=\frac{\Phi_0}{2 \pi I_0 \cos(\delta) } \to L=\frac{L_{J0}}{\cos(\delta) }, \label{eq:jjL}
\end{eqnarray}
where $\Phi_0=\frac{h}{2 e}$ is the flux quantum, and we define $L_{J0}=\frac{\Phi_0}{2 \pi I_0}$ as the Josephson inductance at zero current. It is apparent that the Josephson inductance is a function of the current, $L=L(I)$. This dependence can be shown explicitly by using Equation~\eqref{eq:jjI}a in~\eqref{eq:jjL},
\begin{eqnarray}
L=\frac{L_{J0}}{\sqrt{1-(\frac{I}{I_0})^2} } .
\label{eq:jjLI}
\end{eqnarray}
Moreover, one can use two (identical) JJs in a loop to effectively obtain a tunable JJ, whose critical current can be tuned by threading an external flux $\Phi_{ext}$ through the loop,
\begin{eqnarray}\label{eq:JJ_squid}
I_0^{\mathrm{SQUID}}= 2I_0 \left| \cos\left(\frac{\pi \Phi_{ext}}{\Phi_0}\right)\right|
\end{eqnarray}
where $I_0$ is the critical current of an individual junction. \\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~6:} Derive Equation~\eqref{eq:JJ_squid}. What is the effective critical current of an asymmetric SQUID, where two non-identical JJs are placed in a loop?
}}
The total energy stored\footnote{Naturally, a JJ also has some small capacitance, but for our purposes and for simplicity we ignore it, since we are eventually going to shunt the JJ with a much larger capacitor to make a transmon qubit.} in a JJ can be calculated by adding up the energy changes $dU/dt= VI$; assuming there was no current ($\delta=0$) at $t=-\infty$, one obtains\footnote{For a normal inductor the energy is simply $U=\int VI dt= \int L I dI = LI^2/2$. But for a JJ the inductance $L$ is a function of the current $I$.}
\begin{eqnarray}
U &=& \int_{-\infty}^t I(t')V dt' = \frac{I_0 \Phi_0}{2 \pi} \int_{-\infty}^t \sin(\delta) \dot{\delta} dt' \nonumber \\
&=& \frac{I_0 \Phi_0}{2 \pi} \int_0^{\delta} \sin(\delta') d\delta' = E_J[1-\cos(\delta)], \label{eq:jjE}
\end{eqnarray}
where we define the Josephson energy $E_J=\Phi_0 I_0/2 \pi= \hbar I_0/2e$. In the next subsection, we will shunt a JJ with a capacitor and quantize the resulting LC (or JJ-C) circuit. We will see that the parameter $\delta$ is the canonical position for that anharmonic oscillator. In the next chapter, we provide details from the experimental perspective, e.g. fabrication and characterization of a JJ.
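The current dependence of the Josephson inductance and the flux tunability of the SQUID are easy to tabulate numerically. A small sketch (the critical current $I_0 = 30$ nA is an arbitrary illustrative value):

```python
import numpy as np

PHI0 = 2.067833848e-15  # flux quantum h/2e, in Wb

def josephson_inductance(I, I0):
    """L = L_J0 / sqrt(1 - (I/I0)^2), valid for |I| < I0."""
    L_J0 = PHI0 / (2 * np.pi * I0)
    return L_J0 / np.sqrt(1 - (I / I0) ** 2)

def squid_critical_current(phi_ext, I0):
    """Flux-tunable critical current of a symmetric two-junction SQUID."""
    return 2 * I0 * np.abs(np.cos(np.pi * phi_ext / PHI0))

I0 = 30e-9  # critical current, arbitrary illustrative value (30 nA)
print(josephson_inductance(0.0, I0))      # L_J0 ≈ 11 nH at zero current
print(josephson_inductance(0.9 * I0, I0)
      / josephson_inductance(0.0, I0))    # ≈ 2.29, inductance grows near I0
print(squid_critical_current(0.5 * PHI0, I0))  # ~0: SQUID fully suppressed at Φ0/2
```

The divergence of $L$ as $I \to I_0$ is the nonlinearity that the transmon exploits, and the cosine modulation is what makes the SQUID a flux-tunable junction.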
\subsection{Transmon qubit}
The fact that the inductance of the JJ depends on the current passing through it makes it an interesting nonlinear element which can be leveraged for a qubit architecture. In particular, one can shunt the JJ with a capacitor to obtain the anharmonic oscillator depicted in Figure~\ref{fig:transmon_simple}.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.3\textwidth]{transmon_simple.pdf}
\caption[Transmon circuit]{ {\footnotesize \textbf{Transmon circuit:} The transmon circuit consists of a JJ shunted by a relatively large capacitor so that $E_J\gg E_C$.}} \label{fig:transmon_simple}
\end{figure}
The total energy of the circuit is
\begin{eqnarray}
H_{\mathrm{trans}}= \frac{Q^2}{2C} + E_J[1-\cos(\delta)],
\end{eqnarray}
where $Q$ is the total charge on the capacitor $C$ and we used Equation~\eqref{eq:jjE} for the JJ energy. It is convenient to express the total charge on the capacitor in terms of the number of Cooper pairs\footnote{When $I<I_0$, only pairs of electrons, called Cooper pairs, tunnel through the JJ insulating barrier. Thus in this case it makes sense to express the charge in terms of the number of Cooper pairs, $m$.}, $Q= 2e m$. Therefore the total energy can be written in the form
\begin{eqnarray}
H_{\mathrm{trans}}= 4 E_C m^2 + E_J[1-\cos(\delta)],
\end{eqnarray}
where we define the charging energy $E_C=e^2/2C$. The first term is the kinetic energy stored in the capacitor and the last term is the potential energy stored in the JJ (inductor). Similar to the quantization of the harmonic oscillator, here $m$ and $\delta$ are the canonical momentum and position of the transmon circuit. Therefore, we may transition to the quantum regime by promoting them to operators, arriving at the quantum Hamiltonian
\begin{eqnarray}
\hat{H}_{\mathrm{trans}}= 4 E_C \hat{m}^2 + E_J (1-\cos \hat{\delta}).
\label{eq:trans_H0}
\end{eqnarray}
Now we have a Hamiltonian for the transmon circuit. In order to find the energy transitions of the transmon, we need to find the eigenvalues and eigenstates of this Hamiltonian. The Hamiltonian has an analytic solution in the $\hat{\delta}$-basis\footnote{In the $\delta$-basis one has $\hat{m}=i\hbar \frac{\partial}{\partial \delta}$, which yields a solvable second-order differential equation.} in terms of Mathieu functions (see for example Ref.~\cite{devoret2004shortreviwe}). More conveniently, one can truncate the Hilbert space and perform numerical diagonalization\footnote{Numerical calculation in the number basis is convenient because the first term is diagonal and the second term is tri-diagonal, $ \langle m \pm 1 | \cos(\delta) | m \rangle =1/2$. Note that $e^{i\delta} |m\rangle = |m-1\rangle$.} in the $\hat{m}$-basis. In the limit $E_J/E_C \gg 1$, which implies $\delta\ll1$, one may expand the last term up to fourth order in $\delta$ and obtain the harmonic oscillator Hamiltonian plus a nonlinear term,
\begin{eqnarray}
\hat{H}_{\mathrm{trans}}= 4 E_C \hat{m}^2 + E_J \delta^2/2 - E_J \delta^4/24 + \cdots \label{eq:trans_H1}
\end{eqnarray}
This is a convenient approximation because we can follow the same procedure as for the harmonic oscillator quantization and use creation and annihilation operators.
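The charge-basis diagonalization mentioned in the footnote takes only a few lines. A sketch (the ratio $E_J/E_C = 40$ and the truncation are illustrative choices; the constant $E_J$ offset is dropped since it does not affect transition energies), which recovers $\omega_{01} \approx \sqrt{8E_JE_C} - E_C$ and an anharmonicity close to $-E_C$:

```python
import numpy as np

EC, EJ = 1.0, 40.0   # illustrative: E_J/E_C = 40, energies in units of E_C
M = 20               # keep charge states m = -M .. M (truncation)

m = np.arange(-M, M + 1)
# H = 4 E_C m^2 - (E_J/2)(|m><m+1| + |m+1><m|), using <m±1|cos δ|m> = 1/2
H = np.diag(4 * EC * m**2) - (EJ / 2) * (np.eye(2*M + 1, k=1) + np.eye(2*M + 1, k=-1))

E = np.sort(np.linalg.eigvalsh(H))
w01 = E[1] - E[0]
anharm = (E[2] - E[1]) - (E[1] - E[0])

print(w01)     # close to sqrt(8*EJ*EC) - EC ≈ 16.9
print(anharm)  # close to -EC = -1 (slightly more negative in the exact spectrum)
```

The exact numerics deviate from the asymptotic formulas only by small corrections of order $\sqrt{E_C/E_J}$, which is why the transmon regime $E_J/E_C \gg 1$ is so convenient.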
Looking at the first two terms of the Hamiltonian~\eqref{eq:trans_H1} in analogy with a harmonic LC circuit\footnote{For this analogy, consider the LC circuit energy $E=\frac{m^2}{2C} + \frac{\delta^2}{2 L}$, where $m$ and $\delta$ play the roles of charge and flux, respectively.}, we have
\begin{subequations}
\begin{eqnarray}
4E_C &\leftrightarrow& \frac{1}{2C}\\
\frac{E_J}{2} &\leftrightarrow& \frac{1}{2 L}\\
\omega_J= \sqrt{8E_J E_C} &\leftrightarrow& \omega_{LC}=\frac{1}{\sqrt{LC}}
\end{eqnarray}
\end{subequations}
Similarly, one can define creation and annihilation operators\footnote{ Here we have $\hat{\delta}= \sqrt{\frac{\hbar Z_R}{2}} (\hat{b} + \hat{b}^\dagger)$, $\hat{m}= i \sqrt{\frac{\hbar}{2 Z_R}} (\hat{b} - \hat{b}^\dagger)$, where $Z_R=\sqrt{ \frac{8 E_C}{E_J}}$.} and write the Hamiltonian~\eqref{eq:trans_H1} in terms of $\hat{b}$ and $\hat{b}^\dagger$,
\begin{eqnarray}
\hat{H}_{\mathrm{trans}}&=& \omega_J \hat{b}^\dagger \hat{b} - \frac{E_C}{12} ( \hat{b} + \hat{b}^\dagger )^4 + ... \nonumber \\
&=& \omega_J \hat{b}^\dagger \hat{b} - \frac{E_C}{2} ( \hat{b}^\dagger \hat{b}^\dagger \hat{b} \hat{b} + 2 \hat{b}^\dagger \hat{b} ) + ..., \label{eq:trans_H_bb}
\end{eqnarray}
where the first term comes from the first two terms of Equation~\eqref{eq:trans_H1} and the last terms come from the third term of Equation~\eqref{eq:trans_H1}.\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~7:} Derive Equation~\eqref{eq:trans_H_bb}. Note that you will need normal ordering and the rotating wave approximation to drop the terms that do not conserve energy.
}} \begin{figure}[ht] \centering \includegraphics[width = 0.9\textwidth]{anharm_energylevel.pdf} \caption[Transmon energy levels]{ {\footnotesize \textbf{Transmon energy levels:} A typical transmon ($E_J/E_C=40$) potential (Eq.~\eqref{eq:trans_H0}, solid black curve) in comparison with a nonlinear oscillator (Eq.~\eqref{eq:trans_H1}, solid red curve) and a harmonic oscillator (dashed parabola). The first three energy levels are also depicted for the transmon (nonlinear oscillator) in comparison to the harmonic oscillator. Typical values for the transition energies/frequencies are shown. Note $E_{01}=\hbar \omega_{01}$ and we set $\hbar=1$.}} \label{fig:anharm_energylevel} \end{figure} One can rearrange Equation~\eqref{eq:trans_H_bb} in this form, \begin{subequations} \begin{eqnarray} \hat{H}_{\mathrm{trans}}&=& ( \omega_J - E_C) \hat{b}^\dagger \hat{b} - \frac{E_C}{2} \hat{b}^\dagger \hat{b}^\dagger \hat{b} \hat{b} \\ &=& \omega_{01} \hat{b}^\dagger \hat{b} + \frac{\alpha}{2} \hat{b}^\dagger \hat{b}^\dagger \hat{b} \hat{b}, \label{eq:trans_H_bb2} \end{eqnarray} \end{subequations} where we arrive at a Hamiltonian for an anharmonic oscillator with a lowest energy transition $\omega_{01}= \sqrt{8E_J E_C} -E_C$ and an anharmonicity $\alpha=-E_C$, as shown in Figure~\ref{fig:anharm_energylevel}.\\ \noindent\fbox{\parbox{\textwidth}{ \textbf{Exercise~8:} Find the first three eigenvalues of the anharmonic oscillator Hamiltonian~\eqref{eq:trans_H_bb2}. You may use perturbation theory. In case you prefer to do this numerically, it makes sense to do it for the original Hamiltonian~\eqref{eq:trans_H0}.
}} With a reasonable anharmonicity $\alpha/2\pi=E_{12}-E_{01}$ (typically $\alpha/2\pi\sim -300$ MHz) we can individually address the lowest transition and leave the higher levels intact\footnote{This is true as long as the Rabi frequency we induce on the lower transition is much less than the anharmonicity, $\Omega_R \ll |\alpha|$.}. Therefore we may treat the transmon circuit as a two-level system, described as a pseudo-spin with the Pauli operator, \begin{eqnarray} \hat{H}_q=-\frac{ \omega_q}{2} \sigma_z, \label{eq:Hq} \end{eqnarray} where $\omega_q = \omega_{01}$ is the lowest transition frequency of the transmon circuit\footnote{The minus sign is because we use the NMR convention, in which $\langle \sigma_z \rangle =1$ for the ground state.}. \section{Qubit-cavity interaction} In the previous sections, we quantized a single mode of the electromagnetic field of a cavity and showed that it results in a harmonic oscillator Hamiltonian (Eq.~\ref{eq:Haadag}). In this section, we consider only the lowest mode of the cavity ($m=1$), which has the minimum frequency. This mode has the maximum electromagnetic field amplitude at the center of the cavity ($z=L/2$). Here, we study the interaction between this mode of the cavity (as a quantum harmonic oscillator) and a two-level quantum system (qubit) described by Hamiltonian~\eqref{eq:Hq}. Assume that we place the qubit right at the center of the cavity. The dimensions of the qubit are much smaller than those of the cavity, so to a good approximation the qubit only interacts with the electromagnetic field at $z=L/2$ as depicted\footnote{The assumption that the qubit interacts only with the electromagnetic field at the center of the cavity is a classical interpretation. In the quantum picture, each photon is a packet of energy extended over the entire cavity.
But this classical picture clearly conveys the fact that, by placing the qubit at the center of the cavity, the qubit statistically experiences a stronger electromagnetic field.} in Figure~\ref{fig:qubit_cavity}. \begin{figure}[ht] \centering \includegraphics[width = 0.68\textwidth]{qubit_cavity.pdf} \caption[The qubit-cavity interaction]{ {\footnotesize \textbf{The qubit-cavity interaction:} The qubit is placed at the center of the cavity, where the electromagnetic field of the first cavity mode is maximum. The qubit interacts with the electric field via its electric dipole $d$.}} \label{fig:qubit_cavity} \end{figure} The qubit couples via its electric dipole moment to the electric field of the cavity through the interaction Hamiltonian, \begin{eqnarray} H_{int}= - \hat{d} \ \cdot \ \hat{E}_x(\frac{L}{2},t) \hspace{0.1cm} , \mbox{where} \hspace{0.3cm} \hat{d}= \left( {\begin{array}{cc} 0 & d \\ d^{*} & 0 \\ \end{array} } \right). \end{eqnarray} The parameter $d$ is the magnitude of the dipole of the qubit, which can point in any direction. Let's define $d_x$ as the magnitude of the qubit dipole aligned with the electric field of the cavity. Then the effective dipole operator can be represented as $\hat{d}=d_x \sigma_x=d_x ( \sigma_+ + \sigma_-)$, where $\sigma_+$ ($\sigma_-$) is the raising (lowering) operator for the qubit. Without loss of generality, we can assume $d_x$ is real\footnote{A complex $d_x$ would mean that the electric dipole has a non-zero component along $\sigma_y$.}.
Then the interaction Hamiltonian reads, \begin{eqnarray} H_{int}= - g (\hat{a} + \hat{a}^{\dagger} )(\sigma_+ + \sigma_- ), \label{eq:Hint1} \end{eqnarray} where we use Equation~\eqref{eq:Eaadag} and define $g=d_x \mathcal{E}_0$ to quantify the interaction strength or qubit-cavity coupling energy\footnote{If we place the qubit off-center, the coupling $g$ is smaller. In fact, the placement of the qubit inside the cavity is, to some extent, a knob to adjust the qubit-cavity coupling.}. \subsection{Jaynes-Cummings model} Now we have all the pieces to describe the combined qubit-cavity system. Note that the qubit Hamiltonian (Eq.~\ref{eq:Hq}) by itself has two eigenstates $\{ |g\rangle, |e\rangle\}$ corresponding to the two eigenvalues (energies) $\{\mp \omega_q/2\}$. Similarly, a single cavity mode Hamiltonian (Eq.~\ref{eq:Haadag}) by itself has an infinite number of eigenstates $\{|n\rangle \}$ with eigenvalues $\{ \omega_c ( n+ 1/2) \}$ corresponding to $n$ photons in that mode. Here we are interested in the eigenstates and eigenvalues of the hybrid system of the cavity and qubit combined via the interaction Hamiltonian (Eq.~\ref{eq:Hint1}). The total Hamiltonian\footnote{Here we refer to it as the Rabi Hamiltonian; the JC Hamiltonian comes from the Rabi Hamiltonian once the RWA is taken.} has three parts, \begin{eqnarray} H_{\mathrm{Rabi}}= \omega_c (\hat{a}^{\dagger}\hat{a} + \frac{1}{2} ) -\frac{1}{2} \omega_q \sigma_z - g (\hat{a} + \hat{a}^{\dagger} )(\sigma_- + \sigma_+ ).
\label{eq:RabiH} \end{eqnarray} In the case of no interaction between the qubit and cavity ($g=0$), the eigenstates of the qubit-cavity system are simply the tensor products of the cavity and qubit eigenstates, $\{ |g\rangle|n\rangle , |e\rangle|n\rangle \}$, which are called the bare states or the bare basis and, obviously, have eigenvalues that are simply the sums of the individual qubit and cavity eigenvalues, $\{ \pm \omega_q/2 + \omega_c (n+1/2) \}$. \begin{eqnarray} |g\rangle|0\rangle &\rightarrow& \mbox{{\small qubit in ground state, no photons in the cavity}}\\ |g\rangle|n+1\rangle &\rightarrow& \mbox{{\small qubit in ground state, $n+1$ photons in the cavity}}\\ |e\rangle|n\rangle &\rightarrow& \mbox{{\small qubit in excited state, $n$ photons in the cavity}} \end{eqnarray} However, the bare states are no longer the energy eigenstates of the system when the qubit and cavity interact ($g\neq0$). Yet, we can represent the total Hamiltonian in the bare basis and attempt to diagonalize it to find its eigenstates and eigenvalues. Before we do this, we simplify the interaction Hamiltonian with the rotating wave approximation (RWA). This approximation is valid in most practical situations, where the coupling strength is much less than both the qubit and cavity frequencies, $g\ll \omega_q, \omega_c$, and also $|\omega_c - \omega_q| \ll | \omega_c+\omega_q|$.
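The energy bookkeeping behind this approximation can be made concrete with a few lines of arithmetic on the bare-state energies. The sketch below uses assumed sample frequencies; it compares the energy change of a process that exchanges an excitation between qubit and cavity with one that creates (or destroys) an excitation in both at once.

```python
wc, wq = 5.0, 5.3    # assumed cavity and qubit frequencies (GHz, hbar = 1)

def bare_energy(state, n):
    """Bare-state energy -/+ wq/2 + wc (n + 1/2) for state = 'g' / 'e'."""
    sign = -1.0 if state == 'g' else +1.0
    return sign * wq / 2 + wc * (n + 0.5)

# excitation-exchanging process, e.g. a† sigma-:  |e, n> -> |g, n+1>
co_rotating = bare_energy('g', 1) - bare_energy('e', 0)
# doubly exciting process, e.g. a† sigma+:  |g, n> -> |e, n+1>
counter_rotating = bare_energy('e', 1) - bare_energy('g', 0)

print(co_rotating, counter_rotating)   # wc - wq  versus  wc + wq
```

The first process costs only the small detuning $\omega_c-\omega_q$, while the second costs the large sum $\omega_c+\omega_q$; the RWA keeps the first kind and drops the second.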
Having this situation in mind, let's revisit the interaction Hamiltonian, where we have four terms, \begin{eqnarray} H_{int} \Rightarrow \hat{a}^{\dagger} \sigma_- + \hat{a} \sigma_+ + \hat{a}^{\dagger} \sigma_+ + \hat{a} \sigma_- \end{eqnarray} The first term describes `the decay of the qubit and creation of a photon in the cavity' and the second term accounts for `an excitation of the qubit and annihilation of a photon in the cavity'. These processes essentially ``conserve'' the total energy in the system, since the energy change would be $\pm (\omega_c - \omega_q)$, which is much less than the total energy in the system even in the few-photon regime, where $E_{tot} \sim \omega_c + \omega_q$. However, the last two terms correspond to `the excitation (decay) of the qubit and creation (annihilation) of a photon in the cavity', which requires a relatively substantial energy change $\pm (\omega_c + \omega_q)$, especially when we have only a few photons in the system. This means that the last two processes are much less likely to occur compared to the first two, so we can simply ignore those terms\footnote{One would expect the RWA to break down in the regime of many photons. See for example \cite{khezri26beyand,sank16beyond} for beyond-RWA treatments.}. This can also be understood from the energy-time uncertainty principle, which implies that the last two processes happen on much faster time-scales and are normally averaged out compared to the first two\footnote{For example, see Chapter~4 of Ref.~\cite{gerr05} for a more detailed discussion of the RWA.}. Therefore, with this rotating wave approximation (RWA), we obtain the Jaynes-Cummings Hamiltonian, \begin{eqnarray} H_{\mathrm{JC}}= \omega_c (\hat{a}^{\dagger}\hat{a} + \frac{1}{2} ) -\frac{1}{2} \omega_q \sigma_z - g ( \hat{a}^{\dagger} \sigma_- + \hat{a} \sigma_+ ).
\label{eq:JCH} \end{eqnarray} Although the RWA simplifies the Hamiltonian, we still have to deal with an infinite-dimensional Hilbert space (since the number of photons $n$ ranges from $0 \rightarrow \infty$), which means the Hamiltonian is a semi-infinite matrix and tricky to diagonalize. Normally in such a situation we truncate the Hilbert space at some point, but fortunately in this case we can get around this problem and diagonalize the Hamiltonian in the infinite-dimensional Hilbert space. If we use the bare basis, ordered as $\{|g\rangle|0\rangle, |g\rangle|1\rangle, |e\rangle|0\rangle, \ldots, |g\rangle|n+1\rangle, |e\rangle|n\rangle, \ldots\}$, to represent $H_{JC}$ in matrix form, we find, \begin{eqnarray} H_{JC} = \left( {\begin{array}{cccccc} \frac{1}{2} \omega_c- \frac{\omega_q}{2} & 0 & 0 & 0& 0 & 0\\ 0 & \frac{3}{2} \omega_c - \frac{\omega_q}{2} & g & 0 & 0 & 0\\ 0 & g & \frac{1}{2} \omega_c + \frac{\omega_q}{2} & 0 & 0 & 0\\ & & & \ddots & & \\ 0 & 0 & 0 & 0 & (n +\frac{3}{2}) \omega_c - \frac{\omega_q}{2}& \sqrt{n+1}g \\ 0 & 0 & 0&0 & \sqrt{n+1}g & (n +\frac{1}{2}) \omega_c + \frac{\omega_q}{2} \\ \end{array} } \right), \end{eqnarray} which shows that the Hamiltonian is block-diagonal: each block couples the nearly degenerate pair $|g\rangle|n+1\rangle$ and $|e\rangle|n\rangle$, and all blocks follow a general form (except the first block, which has only one element, $\frac{1}{2} \omega_c- \frac{\omega_q}{2}$, corresponding to the absolute ground state of the system). Having a block-diagonal Hamiltonian makes it easy to find its eigenvalues: we only need to diagonalize the individual blocks, and the resulting eigenvalues of each block are indeed eigenvalues of the entire Hamiltonian.
For each block $M_n$ we have, \begin{eqnarray} M_n = \left( {\begin{array}{cc} (n +\frac{3}{2}) \omega_c - \frac{\omega_q}{2} & \sqrt{n+1}g \\ \sqrt{n+1}g & (n +\frac{1}{2}) \omega_c + \frac{\omega_q}{2} \\ \end{array} } \right), \end{eqnarray} where $n=0,1,2,\ldots$. The eigenstates of the $M_n$, together with $|g\rangle |0\rangle$, form a complete set of eigenstates for the entire qubit-cavity system. For the eigenvalues we have, \begin{eqnarray} E_g &=& -\frac{\Delta}{2} \\ E_\mp &=& (n+1)\omega_c \mp \frac{1}{2} \sqrt{4g^2 (n+ 1) + \Delta^2}, \label{eq:dressEpm} \end{eqnarray} where $\Delta=\omega_q-\omega_c$. The eigenstates associated with these eigenvalues are called the \textit{dressed states} of the qubit and cavity, \begin{eqnarray} |0,-\rangle &=& |g\rangle |0\rangle \\ |n,-\rangle &=& \cos(\theta_n) |g\rangle |n +1\rangle -\sin(\theta_n) |e\rangle |n\rangle \label{eq:dressm}\\ |n,+\rangle &=& \sin(\theta_n) |g\rangle |n +1\rangle + \cos(\theta_n) |e\rangle |n\rangle \label{eq:dressp} \end{eqnarray} where $\theta_n= \frac{1}{2} \tan^{-1} (2g \sqrt{n+1} / \Delta)$ quantifies the ``level of hybridization''.
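As a quick sanity check, the closed-form eigenvalues in Eq.~\eqref{eq:dressEpm} can be compared with a direct diagonalization of the $2\times 2$ blocks. The short sketch below uses assumed sample parameters; the two routines should agree for every $n$, and on resonance the splitting of the lowest pair should be $2g$.

```python
import math

def block_eigvals(n, wc, wq, g):
    """Diagonalize the 2x2 block coupling |g, n+1> and |e, n> directly."""
    a = (n + 1.5) * wc - wq / 2        # bare energy of |g, n+1>
    b = (n + 0.5) * wc + wq / 2        # bare energy of |e, n>
    c = math.sqrt(n + 1) * g           # off-diagonal coupling
    mean, r = (a + b) / 2, math.hypot((a - b) / 2, c)
    return mean - r, mean + r

def dressed_energies(n, wc, wq, g):
    """Closed form: E_-/+ = (n+1) wc -/+ (1/2) sqrt(4 g^2 (n+1) + Delta^2)."""
    delta = wq - wc
    r = 0.5 * math.sqrt(4 * g ** 2 * (n + 1) + delta ** 2)
    return (n + 1) * wc - r, (n + 1) * wc + r

wc, wq, g = 5.0, 5.5, 0.1              # assumed sample values (GHz, hbar = 1)
```

Calling `block_eigvals(n, wc, wq, g)` and `dressed_energies(n, wc, wq, g)` for several $n$ gives identical pairs, which is just the statement that the trace and determinant of each block fix its spectrum.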
In the limit $\Delta \rightarrow 0$, where the qubit and cavity have the same energy, we have $\theta_n=\pi/4$ and the dressed states are maximally hybridized, \begin{eqnarray} |n, \mp \rangle &=& \frac{1}{\sqrt{2}} \left( |g\rangle |n +1\rangle \mp |e\rangle |n\rangle \right), \label{eq:polaritonmp} \end{eqnarray} which means each of the dressed states is an equal mixture of cavity photon and qubit excitation. These states are called \emph{polaritons}. The energy difference between the first two polariton states is $2 g$. A nice way to look at the dressed-state energy levels is to compare them to the corresponding uncoupled energy levels, the bare states. For that, consider Figure~\ref{fig:avoidcross2}, where we display the energy levels of an uncoupled qubit-cavity system compared to the dressed-state energy levels for different values of the qubit-cavity detuning. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{avoidcross3.pdf} \caption[Dressed states vs bare states]{ {\footnotesize \textbf{Dressed states vs bare states:} The panels illustrate the dressed states of the qubit-cavity system for different qubit-cavity detunings in comparison with the bare states (refer to the main text for a more detailed description). Note that this illustration is not exact and omits some details, which we avoid here for simplicity.} } \label{fig:avoidcross2} \end{figure} The bare-state energy levels are depicted by solid black lines. The dressed states are depicted by bars that are color-coded blue (red) for cavity-like (qubit-like) states. In the first panel, the qubit and cavity are far detuned ($\Delta \ll 0$), which means $\theta_n \simeq 0$ and the effective coupling is negligible.
Therefore the dressed-state energy levels almost overlap with the uncoupled cavity-qubit states, the bare states (as depicted in panel 1). In the second panel, we raise the energy level of the qubit. The detuning $\Delta$ is still negative, but its magnitude is getting smaller. The dressed states start pushing away from each other and deviate from the corresponding bare states. In this situation, $\theta_n \in(0, \pi/4)$, and the upper dressed state acquires some qubit character while, similarly, the lower dressed state acquires some photon character. In panel 3, $\Delta=0$ and the hybridization is at its maximum level, $\theta_n=\pi/4$: the dressed states (which we now call polaritons) push each other away and deviate maximally from the bare states. The separation between the two polaritons is $2 g$. Both polaritons have now acquired equal photon and qubit character, as depicted by the color-coded bars in panel 3. If we further increase the energy level of the qubit (see panel 4), we again get dressed states. Note that in panel 4, unlike in panel 3, the lower (upper) polariton has more photon (qubit) character. By increasing the detuning further, as in panel 5, we effectively decouple the qubit and cavity, and the dressed states again approach the bare states. If we keep increasing the qubit frequency even further, the qubit energy will approach the next cavity level and we would see another avoided crossing, corresponding to $n=1$. Every time the qubit level crosses one of the cavity levels, we may expect an avoided crossing and hybridization\footnote{Considering the higher energy levels of the cavity, one might think that the qubit level could also couple to two or more cavity energy levels at the same time. This is true, but usually this effect is only significant when the qubit-cavity coupling is so strong ($g\sim \omega_c,\omega_q$) that the qubit and cavity energy levels push each other even when they are far detuned.
This regime is known as ultrastrong coupling~\cite{niemczyk2010circuit,bosman2017multi}. But normally the coupling rate satisfies $g\ll \omega_c, \omega_q$. Therefore, in order to have hybridization, the qubit energy has to be very close to the cavity energy ($|\Delta|\ll \omega_c,\omega_q$). In our case, we can safely assume that the qubit effectively couples to only one cavity energy level at a time. However, I should warn you that in our description of the avoided crossing represented in Figure~\ref{fig:avoidcross2}, we have ignored the higher transmon energy levels, which would make the situation much more complicated. Considering the transmon as a two-level system is good for intuition, but to be accurate one must include more transmon levels.}. It is convenient to plot transition energy versus detuning since (as we will see in Chapter~3) we normally characterize the system by measuring the transition frequencies with spectroscopy. For example, when $n=0$ we have, \begin{eqnarray} E_\mp - E_g &=& \omega_c \mp \frac{1}{2} \sqrt{4g^2 + \Delta^2} +\Delta/2. \label{eq:dresstrans} \end{eqnarray} In Figure~\ref{fig:avoidcross1}, we plot the transition energy $E_{\pm} - E_g$ versus detuning, which clearly shows the avoided crossing. The transition energy levels are color-coded so that again red (blue) is the qubit-like (photon-like) transition. \begin{figure}[ht] \centering \includegraphics[width = 0.48\textwidth]{avoidcross1.pdf} \caption[Avoided crossing:]{ {\footnotesize \textbf{Avoided crossing:} The transition energy from the higher and lower dressed states to the ground state versus the detuning $\Delta$. The transition energy is scaled by the energy of the cavity $\omega_c$ and the detuning is scaled by the coupling rate $g$. The dashed lines indicate the bare states' transitions.}
Note that you can somewhat see a similar avoided-crossing curve in Figure~\ref{fig:avoidcross2} by connecting the upper (lower) dressed states at different detunings.} \label{fig:avoidcross1} \end{figure} In this section, we learned that if we put a qubit inside a cavity, the energy levels hybridize and we have dressed states. Yet, just as we considered the transmon as a two-level system (TLS) by addressing only its lowest transition, here we also consider the ground state and the lower dressed state as our new qubit. \subsection{Dispersive approximation} In this section, we perform another approximation on the interaction Hamiltonian. This approximation is valid in the regime where the cavity and qubit are far detuned, $|\Delta|\gg g$. In such situations, the interaction is relatively weak. In principle, in this regime, the cavity and qubit do not directly exchange energy, unlike what we explicitly have in the interaction term\footnote{Note that this doesn't mean that in this limit the JC interaction term is not valid. It means that the effect of the coupling is so weak that we can approximately represent the Hamiltonian in a simpler form.} of the JC Hamiltonian~\eqref{eq:JCH}. For that, consider the unitary transformation $$\hat{T}=e^{\lambda(\sigma_- a^{\dagger} - \sigma_+ a)},$$ where $\lambda=\frac{g}{\Delta}$. If we apply this transformation\footnote{Applying a unitary transformation is essentially a change of frame, so we do not add or remove any physics.} to the JC Hamiltonian~\eqref{eq:JCH} and use the Baker-Campbell-Hausdorff relation to evaluate all terms up to order $\lambda^2$, we have, \begin{eqnarray} \hat{T} \ \hat{H}_{JC} \ \hat{T}^{\dagger} = \omega_c (\hat{a}^{\dagger}\hat{a} + \frac{1}{2} ) -\frac{1}{2} \omega_q \sigma_z - \frac{g^2}{\Delta} \hat{a}^{\dagger} \hat{a}\sigma_z + \frac{g^2}{2 \Delta} \sigma_z.
\label{eq:dispersive_trans} \end{eqnarray} We may ignore the constant terms\footnote{The term $ \frac{g^2}{2 \Delta} \sigma_z$ (the Lamb shift) is also just a constant shift of the qubit frequency, which we can absorb into $\omega_q$.} since they do not affect the dynamics, and obtain the JC Hamiltonian in the dispersive limit, \begin{eqnarray} \hat{H}_{\mathrm{dis}} = \omega_c \hat{a}^{\dagger}\hat{a} - \frac{1}{2} \omega_q \sigma_z - \frac{g^2}{\Delta} \hat{a}^{\dagger} \hat{a}\sigma_z. \label{eq:H_dis} \end{eqnarray} \noindent\fbox{\parbox{\textwidth}{ \textbf{Exercise~9:} Show that Equation~\eqref{eq:dispersive_trans} is true by using the Baker-Campbell-Hausdorff relation, $$e^{\lambda \hat{B}} \hat{A} e^{-\lambda \hat{B}} = \hat{A} + \lambda [\hat{B},\hat{A}] + \frac{\lambda^2}{2!} [\hat{B}, [\hat{B},\hat{A}]] + \mathcal{O}[\lambda^3],$$ keeping the terms up to order $\lambda^2$. }} The dispersive Hamiltonian~\eqref{eq:H_dis} describes the situation where the cavity and qubit are far detuned, the coupling is weak, and the dressed states almost overlap with the bare states (see Figure~\ref{fig:avoidcross2}, panel 1). Yet, there is a very small interaction, described by the last term in Equation~\eqref{eq:H_dis}.
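The size of this residual term can be checked directly against the exact dressed-state energies: with the qubit in $|g\rangle$, the one-photon transition of the cavity is pulled away from $\omega_c$ by $g^2/\Delta$, up to corrections of order $g^4/\Delta^3$. A quick numerical sketch (parameter values assumed for illustration):

```python
import math

wc, wq, g = 5.0, 6.5, 0.05           # far-detuned regime, |Delta| >> g (assumed values)
delta = wq - wc
chi = g ** 2 / delta                 # dispersive shift

# Exact JC energies (hbar = 1): absolute ground state and lowest dressed state
E_ground = -delta / 2                                     # E(|g, 0>)
E_minus = wc - 0.5 * math.sqrt(4 * g ** 2 + delta ** 2)   # E(|0, ->) for n = 0

# Cavity transition frequency seen with the qubit in |g>
f_cavity_g = E_minus - E_ground

print(f_cavity_g, wc - chi)
```

For these numbers the exact pulled frequency and the dispersive prediction $\omega_c - g^2/\Delta$ agree to a few parts in $10^6$, while both differ from the bare $\omega_c$ by the full dispersive shift.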
In order to make better sense of this interaction, we rearrange the terms in Equation~\eqref{eq:H_dis} as follows, \begin{eqnarray} \hat{H}_{\mathrm{dis}} = ( \omega_c - \chi \sigma_z ) \hat{a}^{\dagger}\hat{a} - \frac{1}{2} \omega_q \sigma_z, \label{eq:H_dis_rearange} \end{eqnarray} where $\chi=\frac{g^2}{\Delta}$ is the dispersive shift or dispersive coupling rate\footnote{Note that we define $\Delta=\omega_q-\omega_c$ and usually we prefer to have $\omega_q<\omega_c$, because of the transmon higher levels and also to avoid coupling to higher-frequency cavity modes~\cite{koch2007charge}. Therefore the dispersive coupling is often negative.}. We see that the dispersive interaction is manifested as a qubit-state-dependent frequency shift of the cavity. If the qubit is in the ground (excited) state $|g\rangle$ ($|e\rangle$), then $\langle \sigma_z \rangle =1$ ($\langle \sigma_z \rangle =-1$), which means that the cavity frequency shifts by $-\chi$ ($+\chi$). Therefore one can detect this frequency shift of the cavity to determine the state of the qubit. Alternatively, one can rearrange the terms in Equation~\eqref{eq:H_dis} as, \begin{eqnarray} \hat{H}_{\mathrm{dis}} = \omega_c \hat{a}^{\dagger}\hat{a} - \frac{1}{2} ( \omega_q +2 \chi \bar{n}) \sigma_z, \label{eq:H_dis_rearange2} \end{eqnarray} and interpret the interaction as a shift in the qubit frequency due to the photon occupation ($\bar{n} =\hat{a}^{\dagger}\hat{a}$) of the cavity\footnote{In Chapter~4 we will use this interpretation to calibrate the dispersive shift and the average photon number in the system.}. \section{Dynamics of a driven qubit} In this section we discuss some of the most basic and important dynamics of the qubit. Essentially, we want to know what happens to the qubit if we continuously drive it with a coherent signal.
We may take two approaches to solve this problem. One approach is semi-classical, where we treat the coherent drive as a classical signal. The other approach is fully quantum, where we treat the drive as a coherent state of light. For most purposes, the semi-classical approach works perfectly well and captures almost all the physics we are interested in. Therefore, we discuss the semi-classical approach (for the fully quantum mechanical approach see Ref.~\cite{gerr05}). \subsection{Rabi oscillations: The semi-classical approach} We are interested in the qubit dynamics and we ignore the cavity for now\footnote{The qubit sits inside the cavity, the qubit and cavity are already hybridized, and we consider the lowest two levels of the system (the ground state and the lowest dressed state, or polariton state) as our new qubit. Moreover, we assume that the qubit drive is off-resonant with the cavity transition. Therefore, in this situation we effectively have just a qubit. Although experimentally the cavity still plays a crucial role in terms of noise protection and will be essential for qubit readout, this is not our focus in this section.}. With that, assume we have a qubit with Hamiltonian $\hat{H}_q = - \omega_q \sigma_z/2$ and electric dipole moment $\hat{d}= \vec{d} \sigma_x$. The qubit interacts with the electric field of the coherent light (a classical signal), $E(t) = E \cos(\omega_d t)$, via the interaction Hamiltonian $H_{int}= -\vec{E} \cdot \hat{d}$. Therefore, for the total Hamiltonian we have, \begin{eqnarray} H_{\mathrm{semi-classic}}= -\frac{1}{2} \omega_q \sigma_z - E(t) \cdot \hat{d}. \label{eq:RabiH2} \end{eqnarray} For simplicity we assume that the dipole moment of the qubit is aligned with the electric field.
Therefore we obtain, \begin{eqnarray} H_{\mathrm{semi-classic}}= -\frac{1}{2} \omega_q \sigma_z - A \cos(\omega_d t) \sigma_x, \label{eq:H_SC} \end{eqnarray} where $A=Ed$ quantifies how strong the interaction is. Now we want to know how the qubit evolves under this Hamiltonian. There are a couple of ways to solve this problem. The first is to solve the Schr\"odinger equation for this time-dependent Hamiltonian. We start with an \emph{ansatz} instead of starting from scratch. The idea is that if we have no electric field, or turn off the interaction, then the solution of Hamiltonian~\eqref{eq:H_SC} would be $|\psi \rangle = C_g |g\rangle + C_e |e\rangle$, whose time evolution is $|\psi (t) \rangle = C_g e^{+i\frac{\omega_q}{2}t}|g\rangle + C_e e^{-i\frac{\omega_q}{2}t} |e\rangle$. Now, we hope to find the solutions of \eqref{eq:H_SC} in the form, \begin{eqnarray} |\psi (t) \rangle = C_g(t) e^{+i\frac{\omega_q}{2}t}|g\rangle + C_e(t) e^{-i\frac{\omega_q}{2}t} |e\rangle, \label{eq:ansatz} \end{eqnarray} where we just let the coefficients also be time-dependent.
Now we plug this ansatz into the Schr\"odinger equation, \begin{eqnarray} i \frac{\partial |\psi (t) \rangle }{\partial t} = \left( -\frac{1}{2} \omega_q \sigma_z - A \cos(\omega_d t) \sigma_x \right) |\psi (t) \rangle \label{eq:H_ansatz} \end{eqnarray} By substituting Equation~\eqref{eq:ansatz} into~\eqref{eq:H_ansatz}, one obtains two coupled ordinary differential equations (ODEs) for $C_g(t)$ and $C_e(t)$, \begin{subequations}\label{eq:ODE1_rabi_classic_both} \begin{eqnarray} \dot{C}_g&=& i A \cos(\omega_d t) e^{-i \omega_q t} C_e, \label{eq:ODE1_rabi_classic}\\ \dot{C}_e&=&i A \cos(\omega_d t) e^{+i \omega_q t} C_g. \label{eq:ODE2_rabi_classic} \end{eqnarray} \end{subequations} In order to solve this analytically, we make a simplification which is nothing but the RWA. First, we expand $\cos(\omega_d t)e^{\pm i \omega_q t}= \frac{1}{2}\left[ e^{i (\omega_d \pm \omega_q) t} + e^{-i (\omega_d \mp \omega_q) t}\right]$, then argue that we are not interested in very short timescales in the dynamics. In fact, in practice, we are normally not sensitive to short timescales\footnote{Assuming that the qubit frequency and drive are both in the range of $5$ GHz, the fast oscillatory terms $e^{ \pm i (\omega_d + \omega_q) t}$ oscillate on the 100 picosecond timescale. We are normally interested in qubit dynamics on the microsecond timescale. Even for fast 5~ns rotation pulses, many of these fast oscillations are averaged out.}. Therefore we ignore the fast rotating terms $e^{ \pm i (\omega_d + \omega_q) t}$ compared to the slowly rotating terms $e^{ \pm i (\omega_d - \omega_q) t}$.
Then we have \begin{subequations}\label{eq:ODE1and2_rabi_c_RWA} \begin{eqnarray} \dot{C}_g&=&i \frac{A}{2} \ e^{-i (\omega_q -\omega_d) t} C_e, \label{eq:ODE1_rabi_c_RWA}\\ \dot{C}_e&=&i \frac{A}{2} \ e^{+i ( \omega_q - \omega_d) t} C_g . \label{eq:ODE2_rabi_c_RWA} \end{eqnarray} \end{subequations} For the qubit initially in the ground state (initial conditions $C_g(0)=1$, $C_e(0)=0$), one can show that the solutions are, \begin{subequations}\label{eq:ODE1_rabi_c_RWA2_both} \begin{eqnarray} C_g(t)&=& \frac{e^{-i \frac{\Delta_d}{2}t}}{\Omega_R} \left( \Omega_R \cos(\frac{\Omega_R}{2}t ) + i \Delta_d \sin(\frac{\Omega_R}{2}t) \right) \label{eq:ODE1_rabi_c_RWA2}\\ C_e(t)&=&i \frac{ A \ e^{+i \frac{\Delta_d}{2}t}}{\Omega_R} \sin(\frac{\Omega_R}{2}t), \label{eq:ODE2_rabi_c_RWA2} \end{eqnarray} \end{subequations} where $\Delta_d=\omega_q-\omega_d$ and $\Omega_R=\sqrt{A^2 + \Delta_d^2}$. Having the solution for $|\psi(t)\rangle$, we can obtain the evolution of any relevant observable. In order to see what the dynamics look like, we may look at the population of the qubit excited state, \begin{eqnarray} P_e(t)= |C_e(t)|^2&=& \frac{ A^2 \ }{\Omega_R^2} \sin^2(\frac{\Omega_R}{2}t). \label{eq:classic_pe} \end{eqnarray} As depicted in Figure~\ref{fig:pe_rabi_classic}, the qubit doesn't respond much to a far-detuned drive, but as the detuning gets smaller the oscillations grow. For an on-resonant drive ($\Delta_d=0$), we have the slowest but highest-contrast oscillations of the qubit population. \begin{figure}[ht] \centering \includegraphics[width = 0.9\textwidth]{pe_classic_shevron.pdf} \caption[Rabi oscillations]{ {\footnotesize \textbf{Rabi oscillations:} \textbf{a}, The chevron plot.
The excited state population $P_e$ versus time for different detunings $\Delta_d$. \textbf{b}, Three cuts from the chevron plot at different detuning values. The on-resonant drive gives the maximum contrast for the oscillations.}} \label{fig:pe_rabi_classic} \end{figure}\\ The fact that we can fully rotate the qubit from the ground to the excited state with an on-resonant drive is very practical. All we need to know is how strongly and how long to drive the qubit with light to put it in the excited state\footnote{$\pi$-pulse calibration! We will see in the next chapter how this is done in experiment.}. \textbf{\emph{Rotating frame--}} There is a rather easy way to solve the Hamiltonian~\eqref{eq:H_SC} by going to the rotating frame of the drive, which makes the Hamiltonian time-independent\footnote{This example is useful for seeing how a rotating frame works. Moreover, this solution gives a better picture of detuned Rabi oscillations on the Bloch sphere.}. For this, we transform the Hamiltonian by a unitary operator $U(t)$. The Hamiltonian in the new frame can be evaluated by the following relation, \begin{eqnarray} \hat{\mathcal{H} }=&{U}(t)\hat{H}{U}^{\dagger}(t) - i {U}\dot{U}^{\dagger}. \label{eq:RotatingFrame_Ut} \end{eqnarray} Now consider $U(t)=e^{ - i \frac{\omega_d}{2} \sigma_z t }$, which transforms the Hamiltonian to a frame that rotates at the frequency of the drive, $\omega_d$. One can show that the Hamiltonian~\eqref{eq:H_SC} in the rotating frame of the drive is \begin{eqnarray} \hat{\mathcal{H} }=&- \frac{1}{2} \Delta_d \sigma_z - \frac{A}{2} \sigma_x, \label{eq:RotatingFrame_H_sc} \end{eqnarray} which no longer has any time dependence.
Now we may write the Hamiltonian in the qubit energy basis, \begin{eqnarray} \hat{\mathcal{H} }&=& \frac{1}{2} \left( {\begin{array}{cc} -\Delta_d & -A \\ -A & +\Delta_d \\ \end{array} } \right), \label{eq:H_sc_Matrix} \end{eqnarray} and diagonalize it to obtain its eigenvalues, $E_{\pm} = \pm\frac{1}{2} \sqrt{A^2 + \Delta_d^2}$, and eigenstates, \begin{eqnarray} |V_-\rangle &=& \cos(\theta) |e\rangle - \sin(\theta) |g\rangle,\\ |V_+\rangle &=& \sin(\theta) |e\rangle + \cos(\theta) |g\rangle, \label{eq:eigen_Vpm} \end{eqnarray} where $\theta=\tan^{-1}(\frac{A}{\sqrt{A^2 + \Delta_d^2}-\Delta_d})$. Figure~\ref{fig:Vpm_bloch} demonstrates the eigenstates $|V_\pm\rangle$ in the Bloch sphere picture. \begin{figure} \centering \includegraphics[width = 0.9\textwidth]{Vpm_bloch_rabi.pdf} \caption[Eigenstates on the Bloch sphere for a driven qubit]{ {\footnotesize \textbf{Eigenstates on the Bloch sphere for a driven qubit:} The eigenstates $|V_\pm \rangle$ for a detuned driven qubit make an angle $\theta$ with respect to the equator of the Bloch sphere. This picture gives a better understanding of why the population doesn't reach its maximum value for a detuned drive.}} \label{fig:Vpm_bloch} \end{figure} The evolution of the system can be described by \begin{eqnarray} |\psi(t)\rangle= C_+ e^{-i E_+t}|V_+\rangle + C_- e^{-i E_-t}|V_-\rangle, \label{eq:psi_t_vpm} \end{eqnarray} where $C_\pm$ are constants determined by the initial condition.
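As a quick numerical sanity check (with arbitrary, hypothetical values of $A$ and $\Delta_d$), one can verify the eigenvalues of the matrix Hamiltonian above, together with a useful identity for the mixing angle $\theta$:

```python
import numpy as np

# arbitrary (hypothetical) drive amplitude and detuning, in the same units
A, delta_d = 1.3, 0.7

# rotating-frame Hamiltonian written in the qubit energy basis
H = 0.5 * np.array([[-delta_d, -A],
                    [-A, +delta_d]])

# eigenvalues should be E_- = -E_+ with E_+ = (1/2) sqrt(A^2 + delta_d^2)
E_plus = 0.5 * np.sqrt(A**2 + delta_d**2)
evals = np.linalg.eigvalsh(H)          # returned in ascending order
assert np.allclose(evals, [-E_plus, +E_plus])

# mixing angle theta; identity sin^2(2 theta) = A^2 / (A^2 + delta_d^2)
theta = np.arctan(A / (np.sqrt(A**2 + delta_d**2) - delta_d))
assert np.isclose(np.sin(2 * theta)**2, A**2 / (A**2 + delta_d**2))
```

The last assertion is the algebraic fact that makes the excited-state population saturate at $A^2/(A^2+\Delta_d^2)$ for a detuned drive.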
We may rewrite Equation~\eqref{eq:psi_t_vpm} in terms of $|g\rangle$ and $|e\rangle$ using~\eqref{eq:eigen_Vpm}, \begin{eqnarray} |\psi(t)\rangle &=& \left[ C_+ e^{-i E_+t} \sin(\theta) + C_- e^{-i E_-t} \cos(\theta) \right] |e\rangle \nonumber \\ &+& \left[ C_+ e^{-i E_+t} \cos(\theta) - C_- e^{-i E_-t} \sin(\theta) \right] |g\rangle. \label{eq:psi_t_ge} \end{eqnarray} For example, for a qubit starting in the ground state, $|\psi(0)\rangle = |g\rangle$, we have $C_+=\cos(\theta), C_-=-\sin(\theta)$, and the time evolution is \begin{eqnarray} |\psi(t)\rangle= -i \sin(E_+ t) \sin(2 \theta) |e\rangle + \left[ \cos(E_+ t) - i \cos(2\theta) \sin(E_+ t) \right] |g\rangle, \label{eq:psi_t_ge2} \end{eqnarray} where we used the fact that $E_+=-E_-$. Once again we can calculate the expectation value of any observable. For example, the probability of being in the excited state is\footnote{You may need to convince yourself that $\sin^2[ 2 \tan^{-1} ( \frac{A}{\sqrt{A^2 + \Delta_d^2}-\Delta_d})] = \frac{A^2}{A^2 + \Delta_d^2}$.} \begin{eqnarray} P_e=\sin^2(2\theta) \sin^2(E_+ t) = \frac{A^2}{A^2 + \Delta_d^2} \sin^2\bigg(\frac{\sqrt{A^2 + \Delta_d^2}}{2} t\bigg), \label{eq:psi_t_ge3} \end{eqnarray} which is consistent with the result we obtained in the lab frame, where the Hamiltonian was time-dependent (see Eq.~\ref{eq:classic_pe}). Moreover, this picture gives a visualization of why the population doesn't reach the maximum value for a detuned drive, as illustrated in Figure~\ref{fig:Vpm_bloch}. Generally, in the experiment we use an on-resonance drive to prepare the qubit states.
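The agreement between the two pictures can also be checked numerically. The sketch below (hypothetical drive parameters; coupling $A/2$ in the rotating-wave amplitude equations, which is the convention consistent with the quoted solutions) integrates the lab-frame equations and compares $|C_e(t)|^2$ with the closed form of Eq.~\eqref{eq:classic_pe}:

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical drive amplitude and detuning (angular frequencies, rad/s)
A, delta_d = 2 * np.pi * 5e6, 2 * np.pi * 3e6
omega_R = np.sqrt(A**2 + delta_d**2)

def rhs(t, C):
    """RWA amplitude equations for (C_g, C_e) with coupling A/2."""
    C_g, C_e = C
    return [1j * (A / 2) * np.exp(-1j * delta_d * t) * C_e,
            1j * (A / 2) * np.exp(+1j * delta_d * t) * C_g]

t = np.linspace(0.0, 1e-6, 400)
sol = solve_ivp(rhs, (0.0, t[-1]), [1.0 + 0j, 0.0 + 0j],
                t_eval=t, rtol=1e-10, atol=1e-12)

P_e_numeric = np.abs(sol.y[1])**2
P_e_analytic = (A**2 / omega_R**2) * np.sin(omega_R * t / 2)**2
assert np.allclose(P_e_numeric, P_e_analytic, atol=1e-6)
```

Sweeping `delta_d` in this script reproduces the chevron pattern of Figure~\ref{fig:pe_rabi_classic}.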
Figure~\ref{fig:sum_up_rabi} summarizes our discussion of a driven qubit in the lab and rotating frames by showing Rabi oscillations of a qubit initialized in the ground state for both on-resonant and detuned drives. \begin{figure}[ht] \centering \includegraphics[width = 0.69\textwidth]{sum_up_rabi.pdf} \caption[Driven qubit evolution in the Bloch sphere]{ {\footnotesize \textbf{Driven qubit state evolution in the Bloch sphere:} The red (blue) line shows the evolution of a driven qubit in the lab frame (rotating frame of the drive) \textbf{a}, for an on-resonant drive and \textbf{b}, for a detuned drive.}} \label{fig:sum_up_rabi} \end{figure} \subsection{Dynamics in the presence of dissipation} So far we have assumed that the qubit is an ideal closed system that undergoes unitary evolution given by the Schr\"odinger equation. However, in reality all systems, classical or quantum, are open systems, meaning they interact with their environment to some extent. For quantum systems, this interaction degrades the peculiarly quantum properties of the system (e.g. superposition and entanglement), resulting in energy dissipation and decoherence. Dissipation is a curse in many applications of quantum information. However, dissipation is believed to be the essential piece that allows the classical laws to emerge out of the underlying quantum laws. For our purposes, there are two main mechanisms which we need to consider to have a more realistic picture of qubit dynamics: \emph{relaxation} and \emph{dephasing}. \emph{\textbf{Relaxation}}-- Placing the qubit inside a cavity protects the qubit from environmental noise and limits the available modes the qubit can interact with. Still, the qubit finds some way to relax its energy and decay to the ground state\footnote{Also, sometimes we intentionally provide the qubit with a decay channel to relax its energy.}.
For example, when you prepare the qubit in the excited state, the qubit eventually decays to the ground state and releases its energy in the form of a photon into one of the known or unknown decay channels. This process of jumping\footnote{For now, we assume this process happens instantaneously, which is a very reasonable and valid assumption. Yet, we will see later that the decay of an atom is not always jumpy.} from $|e\rangle \to |g\rangle$ happens at a random time. This process is not included in the Hamiltonian, therefore we need to account for it somewhat phenomenologically\footnote{In principle one can build these processes into the Hamiltonian by writing down the Hamiltonian for the entire universe: qubit+cavity+environment.}. Let's say you prepare the qubit in the excited state or in some superposition state $|\psi\rangle$. Since the qubit may have relaxed after some time, you are not sure whether it is still in the state $|\psi\rangle$ or has decayed to the ground state. You might have a mixed feeling about the state of the qubit and, in quantum mechanics, this is an absolutely legitimate feeling because the qubit is indeed in a \emph{mixed state}, which can be described by a density matrix $\rho=a |\psi\rangle \langle \psi | + b |g\rangle \langle g | $, where $a$ is the probability that the qubit is still in the state $|\psi\rangle$ and $b$ is the probability that the qubit has decayed into the ground state $|g\rangle$\footnote{Here comes a lesson that I learned very late: in quantum mechanics the state of the system is nothing but your state of knowledge about that system. The reality happens in your mind!?}. If you wait longer, the probability $a$ becomes smaller and smaller while the probability $b$ approaches 1, meaning that you become certain that the qubit is in the ground state.
The time scale over which the qubit spontaneously relaxes its energy is called the \emph{relaxation time} and is denoted by $T_1$. Experimentally, measuring $T_1$ is a basic step of any qubit characterization process. As depicted in Figure~\ref{fig:T2_time}a, the $T_1$ time is the time by which the population of the excited state decays to $1/e$ of its initial value, $P_e(t)=P_e(0)e^{-t/T_1}$. We will see soon how we systematically account for this process in the qubit dynamics. \begin{figure}[ht] \centering \includegraphics[width = 0.8\textwidth]{t1_t2.pdf} \caption{ {\footnotesize Relaxation and dephasing of a qubit.}} \label{fig:T2_time} \end{figure} \emph{\textbf{Dephasing}}-- There is another imperfection that affects the dynamics of a qubit. In reality, due to the various noise sources in the system, the frequency of the qubit shifts around stochastically. This imperfection doesn't cause the qubit to relax its energy, but instead we lose track of the qubit resonance frequency and thus lose track of the phase of the qubit wavefunction. Considering the evolution of a superposition state in the lab frame, the qubit state rotates around the equator of the Bloch sphere. After a time $t$ the phase of the qubit would be $\phi=\omega_q t$; however, if the qubit frequency stochastically jitters around $\omega_q$, then the final phase at time $t$ would be $\phi=\omega_q t + \zeta(t)$, where $\zeta$ is our uncertainty about the phase of the qubit, which grows in time\footnote{$\zeta(t)$ can be considered as a 1D random walk.}. Therefore, after a time, we again have a mixed feeling about the state of the qubit and we lose the quantum coherences, as depicted in Figure~\ref{fig:T2_time}b. The time scale over which the qubit loses its coherence is usually called the dephasing time, or $T_2^*$, and it is characterized by a Ramsey measurement in experiment.
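The phase-random-walk picture can be made concrete with a small Monte Carlo sketch (hypothetical $T_2^*$; white frequency noise is assumed, so the accumulated phase $\zeta(t)$ is Gaussian with variance $2\gamma_\phi t$). The ensemble-averaged coherence $|\langle e^{i\zeta(t)}\rangle|$ then decays as $e^{-\gamma_\phi t}$:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_phi = 1.0 / 10e-6            # pure dephasing rate (hypothetical T2* = 10 us)
dt, steps, trajectories = 2e-8, 500, 10000

# white frequency noise: each time step kicks the phase by N(0, 2*gamma_phi*dt)
kicks = rng.normal(0.0, np.sqrt(2 * gamma_phi * dt), size=(trajectories, steps))
zeta = np.cumsum(kicks, axis=1)    # phase random walk zeta(t) per trajectory

t = dt * np.arange(1, steps + 1)
coherence = np.abs(np.exp(1j * zeta).mean(axis=0))   # ensemble average

# for a Gaussian phase, |<e^{i zeta}>| = exp(-var/2) = exp(-gamma_phi * t)
assert np.allclose(coherence, np.exp(-gamma_phi * t), atol=0.05)
```

Each single trajectory keeps $|e^{i\zeta}|=1$; it is only the average over many noise realizations that decays, which is exactly why dephasing turns a pure superposition into a mixed state.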
In the next section, we discuss how to systematically account for relaxation and dephasing in qubit dynamics. \subsection*{Lindblad master equation } In order to account for non-unitary and dissipative processes (e.g. dephasing and relaxation) in qubit dynamics, we switch to a density-matrix description, in which the unitary evolution of the density matrix $\rho$ is described by \begin{eqnarray} \dot{\rho}= -i [\hat{\mathcal{H} }, \rho], \label{eq:Heisen_lindblad} \end{eqnarray} where $\hat{\mathcal{H}}$ is the driven qubit Hamiltonian in the rotating frame---see Eq.~\eqref{eq:RotatingFrame_H_sc}. Equation~\eqref{eq:Heisen_lindblad} is equivalent to the Schr\"odinger equation, except that with the density-matrix approach we can also describe the unitary evolution of a mixed state. Moreover, this description allows us to add further terms to $\dot{\rho}$ that describe other, non-unitary processes of the system, like dephasing and relaxation\footnote{Normally these non-unitary processes are positive and trace preserving.}. In general, we have \begin{eqnarray} \dot{\rho}= -i [\hat{\mathcal{H} }, \rho] + \sum_i \left( L_i \rho L_i^{\dagger} -\frac{1}{2} \{ L_i^{\dagger} L_i , \rho \} \right), \label{eq:Lindblad_gen} \end{eqnarray} where each $L_i$ is a ``Lindblad'' operator describing a specific non-unitary process. For example, the Lindblad operator for the relaxation process of a qubit is $L_{\mathrm{relaxation}}=\sqrt{\gamma} \sigma_-$, where $\gamma=1/T_1$ is the rate at which the qubit decays.
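As a minimal check (hypothetical $T_1$; basis ordering $\{|g\rangle,|e\rangle\}$ and the standard dissipator $L\rho L^{\dagger} - \frac{1}{2}\{L^{\dagger}L,\rho\}$ assumed), integrating the master equation with only this relaxation operator reproduces the exponential decay $P_e(t)=P_e(0)e^{-t/T_1}$:

```python
import numpy as np

gamma = 1.0 / 20e-6                              # 1/T1 (hypothetical T1 = 20 us)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus = |g><e|
L = np.sqrt(gamma) * sm
LdL = L.conj().T @ L

def rhs(rho):
    # H = 0 here (pure relaxation): d(rho)/dt = L rho L^+ - (1/2){L^+ L, rho}
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in |e><e|
dt, steps = 1e-8, 4000                           # integrate to t = 40 us = 2*T1
for _ in range(steps):
    k1 = rhs(rho)                                # classical 4th-order Runge-Kutta
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

t_final = dt * steps
assert np.isclose(rho[1, 1].real, np.exp(-gamma * t_final), atol=1e-6)
assert np.isclose(np.trace(rho).real, 1.0)       # trace is preserved
```

Adding the rotating-frame Hamiltonian and the dephasing operator to `rhs` turns this into a solver for the full driven, dissipative dynamics.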
The dephasing Lindblad operator is $L_{\mathrm{dephasing}}=\sqrt{\gamma_\phi}\sigma_z$, where $\gamma_\phi=1/T_2^*$ quantifies the rate at which the qubit dephases and we lose coherence.\\ \noindent\fbox{\parbox{\textwidth}{ \textbf{Exercise~10:} Explicitly represent the Lindblad master equation \eqref{eq:Lindblad_gen} in terms of the Bloch components $x=\mathrm{Tr}(\rho \sigma_x), y=\mathrm{Tr}(\rho \sigma_y), z=\mathrm{Tr}(\rho \sigma_z)$ for $\hat{\mathcal{H}}= - \Omega_R \sigma_y/2$ in the presence of relaxation and dephasing. Now you may solve these equations to obtain the evolution of the qubit. }} In Figure~\ref{fig:sum_up_depha_relax} we plot the evolution of a driven qubit in the presence of dephasing and relaxation. We will return to Lindbladian and non-unitary evolution in Chapter~4 when we discuss continuous measurements. \begin{figure}[ht] \centering \includegraphics[width = 0.6\textwidth]{sumupdephasrelax.pdf} \caption[Dephasing and relaxation for the qubit]{ {\footnotesize \textbf{Dephasing and relaxation for the qubit:} The solution of the Lindblad equation for a driven qubit in the presence of dephasing and relaxation.}} \label{fig:sum_up_depha_relax} \end{figure} \chapter{Superconducting Quantum Circuits\label{ch3}} The aim of this chapter is to make a clear connection between the theoretical concepts introduced in the previous chapter and their experimental realization. I will discuss {measurements with} superconducting circuits, including transmon qubits, 3D cavities, and parametric amplifiers, from the experimental point of view. I will try to give a clear explanation of the basic procedures of fabrication and characterization. \section{Cavity \label{section:cavity}} In the previous chapter, we discussed a 1D cavity by considering two perfectly conducting walls separated by a distance $L$.
Aluminum is a good choice for these walls since it becomes superconducting below $1.2$ K and can be used to realize a high quality factor cavity. Copper can also be used when we need to thread external magnetic flux {through the cavity to tune the qubit resonance.} \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{rectang_cavity_TE.pdf} \caption[TE$_{101}$ mode in rectangular 3D cavity]{ {\footnotesize \textbf{TE$_{101}$ mode in rectangular 3D cavity:} \textbf{a}, The typical cavity geometry is shown. Red lines show the electric field profile along the $z$, $x$ spatial directions for the TE$_{101}$ mode. The electric field oscillations are maximum at the center of the cavity. \textbf{b}, The surface current oscillations (red arrows) are depicted for the TE$_{101}$ mode. The induced charges are maximum at the center (opposite charges at the top and bottom of the cavity). The equivalent circuit diagram is shown for a section of the cavity. The cut-line (magenta dashed line) indicates one of the planes where the surface current is tangential.}} \label{fig:rect_cav} \end{figure} Although we introduced the cavity in 1D, it is straightforward to extend the result to 3D.
One can show that for a 3D cavity with dimensions $L_x,L_y,L_z$, as depicted in Figure~\ref{fig:rect_cav}, Equation~\ref{eq:EandB} is simply generalized to \begin{subequations}\label{eq:EB3d} \begin{eqnarray} E(\vec{r},t) &=& \mathcal{E} \ q(t) \sin(\vec{k}\cdot \vec{r}) \label{eq:Ex3d}\\ B(\vec{r},t) &=& \mathcal{E} \ \frac{\mu_0 \varepsilon_0}{k} \dot{q}(t) \cos(\vec{k}\cdot \vec{r}), \label{eq:By3d} \end{eqnarray} \end{subequations} where $\vec{k}=(n_x \pi /L_x , n_y \pi /L_y, n_z \pi /L_z)$ and $\vec{r}=(x,y,z)$, and the corresponding resonance frequencies of the modes are \begin{eqnarray} f= \omega_c/2\pi = \frac{c}{2} \sqrt{ (\frac{n_x }{L_x})^2 + (\frac{n_y}{L_y})^2 + (\frac{n_z}{L_z})^2 } ,\label{eq:res_freq} \end{eqnarray} where $c$ is the speed of light inside the cavity. Each mode is described by a set of integers $\{n_x,n_y,n_z\}$; for example, TE$_{101}$ corresponds to a mode whose electric field profile has one anti-node in each of the $x$ and $z$ directions (depicted in Figure~\ref{fig:rect_cav}a~\cite{pozar}). Thus we have spatially distributed electromagnetic modes inside a cavity, and apart from that, all the quantum mechanical descriptions are essentially identical. Furthermore, we are still interested in only one of these modes. Therefore we consider only the lowest mode of the cavity and choose the dimensions so that the frequencies of the higher modes are far away from the lowest frequency. The dimensions that we normally use are {$L_z \sim L_x \sim 3.0$~cm} and $L_y \sim 0.8$~cm, which gives a cavity frequency $\omega_c/2\pi \sim$7 GHz for the TE$_{101}$ mode.
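The quoted numbers are easy to check from Eq.~\eqref{eq:res_freq} (a vacuum-filled cavity is assumed, so $c$ is the vacuum speed of light):

```python
import math

c = 2.998e8                         # speed of light (vacuum-filled cavity assumed)
Lx, Ly, Lz = 0.030, 0.008, 0.030    # ~3.0 cm x 0.8 cm x 3.0 cm, as in the text

def mode_freq(nx, ny, nz):
    """Resonance frequency (Hz) of the {nx, ny, nz} rectangular-cavity mode."""
    return (c / 2) * math.sqrt((nx / Lx)**2 + (ny / Ly)**2 + (nz / Lz)**2)

f_101 = mode_freq(1, 0, 1)                 # TE101, the lowest mode here
assert 6.9e9 < f_101 < 7.2e9               # ~7 GHz, consistent with the text
assert mode_freq(2, 0, 1) > 1.4 * f_101    # next modes are well separated
```

The second assertion illustrates the design point made above: with these dimensions the higher modes sit several gigahertz away from TE$_{101}$.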
Figure~\ref{fig:rect_cav}b shows the surface current and the equivalent circuit diagram for the TE$_{101}$ mode\footnote{For a better circuit diagram visualization, you can replace the inductor with a wire and imagine that the loop has some effective inductance. In this way, the loop magnetic field gives the correct direction for the cavity magnetic field at that cross-section.}. Although it is easy to analytically calculate the resonance frequency for simple geometries like a rectangular or cylindrical cavity, one can use {numerical simulations} to get more realistic predictions, {since they can account} for geometry imperfections, input-output connectors, and the qubit chip\footnote{Moreover, simulation gives you access to more detailed information about the electromagnetic field distribution inside the cavity.}. Figure~\ref{fig:cavity_3d} shows a simulation result for the cavity transmission, with the other components included, obtained using {Ansys} HFSS. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{cavity_sim.pdf} \caption[HFSS simulation for cavity transmission]{ {\footnotesize \textbf{HFSS simulation for cavity transmission:} The cavity transmission is simulated by HFSS (red curve), which is in agreement with the actual measurement (blue curve).}} \label{fig:cavity_3d} \end{figure} The simulation is in good agreement with the actual transmission measurement of a similar cavity\footnote{Although the details of the input/output connectors and pins (e.g. length, shape, soldering parts, ...) may not contribute significantly to the cavity frequency, they significantly affect the cavity quality factor because they can dramatically alter the characteristic impedance of the ports.}.
In order to fabricate a cavity, we literally machine a cavity into two chunks of aluminum as depicted in Figure~\ref{fig:cavity_3d}c\footnote{Symmetrical pieces are not only convenient in terms of fabrication; in this geometry, symmetrical pieces also minimize the adverse effect of imperfections where the two pieces are connected, since the surface current doesn't need to pass between the pieces at all. Note the cut-line (magenta dashed line) in Figure~\ref{fig:rect_cav}b.}. \emph{Cavity linewidth-} The cavity linewidth $\kappa$ can be determined by measuring the transmission through the cavity. As depicted in Figure~\ref{fig:cavity_kappa}a, we use a vector network analyzer (VNA) and record the S12 (or S21) transmitted power versus frequency. The parameter $\kappa$ is roughly the frequency bandwidth over which the transmitted power drops by 3 dB, as depicted in Figure~\ref{fig:cavity_kappa}b. More rigorously, one can fit the transmitted power (in linear scale) to a Lorentzian function $F(x)=\frac{A}{(x-f_c)^2 + \kappa^2/4}$ and extract the cavity frequency $f_c=\omega_c/2\pi$ and the cavity linewidth $\kappa$, which is the FWHM of the Lorentzian fit. We will see in Chapter 4 that the parameter $\kappa$ quantifies how much signal we get from the cavity, and this value is needed for the calibration of the quantum efficiency during measurement. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{cavity_kappa.pdf} \caption[The cavity linewidth characterization]{ {\footnotesize \textbf{The cavity linewidth characterization:} The cavity linewidth $\kappa$ can be quantified by measuring the scattering parameters of the cavity. \textbf{a},\textbf{b} In the transmission measurement S21, the cavity linewidth can be estimated by the bandwidth over which the transmission signal drops by 3 dB. \textbf{c}, More carefully, one can convert the transmission to linear scale and fit to the Lorentzian function.
The FWHM of the Lorentzian function is then the cavity linewidth.}} \label{fig:cavity_kappa} \end{figure} \emph{Cavity phase shift-} It is worth discussing the cavity phase shift across resonance. As depicted in Figure~\ref{fig:cavity_chi_shift}, the phase of the transmitted signal shifts by $\pi$ across the resonance of the cavity\footnote{The reflected signal acquires a $2\pi$ phase shift across the cavity resonance. Does this mean it is better to use reflection to detect the phase shift?}, which can be represented by \begin{eqnarray} \theta=\arctan\left[\frac{2}{\kappa} (\omega_c-\omega)\right] \xrightarrow{\omega \simeq \omega_c} \theta = \frac{2}{\kappa} (\omega_c - \omega). \label{eq:ref_phase_cavity} \end{eqnarray} The phase varies almost linearly around the cavity resonance frequency with slope $2/\kappa$, which quantifies the frequency sensitivity obtained by measuring the phase of the transmitted (or reflected) signal\footnote{We will see in Chapter~4 that the cavity phase shift and cavity linewidth come into play for describing {continuous measurement in terms of POVMs.}}. \begin{figure}[ht] \centering \includegraphics[width = 0.48\textwidth]{cavity_xhi_shift.pdf} \caption[The cavity phase shift across the resonance]{ {\footnotesize \textbf{The cavity phase shift across the resonance:} The transmitted signal through the cavity experiences a $\pi$ phase shift across the resonance of the cavity. Near resonance, the phase shift is approximately linear with a slope of $2/\kappa$.}} \label{fig:cavity_chi_shift} \end{figure} \emph{Cavity internal and external quality factors-} The external quality factor $Q_\mathrm{ext}$ can be adjusted by the length of {the input and output port pin antennas}. Normally, we have two pins corresponding to the weak and strong ports.
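The Lorentzian-fit step described above can be sketched with synthetic data (hypothetical cavity parameters; `scipy.optimize.curve_fit` is used for the least-squares fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, f_c, kappa):
    # F(f) = A / ((f - f_c)^2 + kappa^2/4); kappa is the FWHM
    return A / ((f - f_c)**2 + kappa**2 / 4)

# synthetic linear-scale |S21|^2 trace around a hypothetical 7 GHz cavity
rng = np.random.default_rng(0)
f = np.linspace(6.99e9, 7.01e9, 801)
data = lorentzian(f, 1e12, 7.0e9, 1.0e6)
data = data * (1 + 0.02 * rng.standard_normal(f.size))   # 2% amplitude noise

p0 = [data.max() * (1e6)**2 / 4, f[np.argmax(data)], 2e6]  # rough initial guess
popt, _ = curve_fit(lorentzian, f, data, p0=p0)
A_fit, fc_fit, kappa_fit = popt

assert abs(fc_fit - 7.0e9) < 1e5           # cavity frequency recovered
assert abs(abs(kappa_fit) - 1.0e6) < 5e4   # linewidth (FWHM) recovered
```

The same routine applied to a measured VNA trace (after converting dB to linear scale) yields $f_c$ and $\kappa$ directly.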
The weak port is used as an input (qubit manipulation and readout) and usually has $\sim$ 100 times weaker coupling to the cavity compared to the strong port. The lengths of the port antennas determine the coupling to a $50\,\Omega$ transmission line, which sets the external quality factor $Q_\mathrm{ext}$. The internal quality factor (often called the unloaded quality factor) has to do with the losses of the cavity itself, e.g. absorption of photons by the cavity. The total cavity quality factor is then \begin{eqnarray} \frac{1}{Q_{\mathrm{tot}}} = \frac{1}{Q_{\mathrm{int}}} + \frac{1}{Q_{\mathrm{ext}}}. \label{eq:Q_total} \end{eqnarray} Often, the deliberate coupling to the outside is dominant, $Q_{\mathrm{int}} \gg Q_{\mathrm{ext}}$, and therefore the total quality factor is almost equal to the external quality factor. Note that $Q_{\mathrm{tot}}=\omega_c/\kappa$. For a careful characterization of the internal quality factor and the input and output coupling strengths, one can perform reflection measurements on each port (while the other port is terminated by 50 $\Omega$). By analyzing the amplitude and phase of the reflected signals, one can obtain both the internal and external quality factors for the cavity and characterize the coupling strength of each port (see Refs.~\cite{megrant2012planar,khalil2012analysis}). \section{Qubit} In Chapter 2 we studied the Josephson junction and the transmon qubit from a theoretical perspective. Here we discuss the fabrication and characterization of Josephson junctions and transmon circuits. \subsection{Transmon fabrication:} Josephson junctions can be fabricated by evaporation of aluminum onto a silicon wafer using an electron beam evaporator, which allows for directional evaporation. The common technique for JJ fabrication is the double-angle evaporation technique, which exploits this evaporation directionality.
A typical procedure for JJ fabrication includes: spin-coating e-beam resists on a silicon wafer, e-beam lithography, development, pre-cleaning, double-angle evaporation, lift-off, and post-cleaning. \textbf{\emph{e-beam resist-}} We use a stack of two resists for junction fabrication. The bottom layer is normally a relatively thick ($\sim 1\mu$m) and soft resist (MMA). In contrast, the top layer is a relatively thin ($\sim 300$ nm) and hard resist (e.g. ZEP). The reason for this choice of resist stacking is to achieve a wide undercut, which is convenient for a clean lift-off as depicted in Figure~\ref{fig:liftoff}. It also enables a suspended bridge needed for junction fabrication (Fig.~\ref{fig:liftoff}c). \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{liftoff.pdf} \caption[Double stack e-beam resist]{ {\footnotesize \textbf{Double stack e-beam resist:} \textbf{a}, With a single layer of resist, it is often difficult to get small and clean patterns, mainly because the wall of the resist may also get deposited, which connects the top layers to the bottom layers and makes it difficult to properly lift off the resist without peeling off the actual pattern. \textbf{b}, This issue can be avoided by using two layers of resist. The top layer provides sharp edges as a mask and the bottom layer acts as a spacer. The proper amount of undercut aids the lift-off process. \textbf{c}, Moreover, the undercut of the lower resist can be used to create suspended resist (a free-standing bridge), which is used for JJ fabrication in double-angle evaporation.}} \label{fig:liftoff} \end{figure} The resist layers can be coated on the substrate by spin-coating. The thickness of the layers is controlled by {the spinning velocity, the total time of spinning, and the viscosity of the resist}.
A typical spin-coating recipe for an MMA/ZEP double stack resist is displayed in Table~\ref{table:resist}. \begin{table}[ht] \centering \begin{tabular}{|c|c|} \hline Step~1& MMA spin-coat, 3000 rpm, 60 seconds\\ Step~2& Soft bake for 5 minutes, 200$^\circ$C\\ Step~3& ZEP spin-coat, 3000 rpm, 60 seconds\\ Step~4& Soft bake for 3 minutes, 180$^\circ$C\\ \hline \end{tabular} \caption[Double-stack e-beam MMA/ZEP resist spin coating recipe ]{{\footnotesize \textbf{Double-stack e-beam MMA/ZEP resist spin coating recipe}}}\label{table:resist} \end{table} \emph{\textbf{Electron-beam lithography-}} We use a 30 keV focused beam of electrons in a scanning electron microscope (SEM) to pattern the resist. The SEM is controlled by the Nanometer Pattern Generation System (NPGS). For fine features, we need to have a good focus of the electron beam. {To achieve a good focus,} we use gold particles, which are easily detectable in the SEM, for in-situ focus calibration\footnote{We drop gold particles close to the edge of the sample and try to get the best focus at each point. NPGS uses the focus point data and extrapolates the focus settings for the entire chip. One can also use single point focus, move the beam by $\sim$1000 $\mu$m, and write the pattern with the same focus settings. Ideally, for the transmon junction, one should be able to distinguish $\sim$5 nm gold particles at each focus point.}. The transmon pattern is designed in `DesignCAD' software using polygons in different layers\footnote{NPGS allows for different exposure/focus settings for each layer. Therefore, with a multi-layer pattern, we can optimize the {exposure} time.}. A simple transmon pattern design in DesignCAD software is shown in Figure~\ref{fig:designcad}. Each layer is represented in a different color.
\begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{designcad.pdf} \caption[A simple design for transmon qubit]{ {\footnotesize \textbf{A simple design for transmon qubit:} A qubit design consists of a few polygons in different layers in DesignCAD software. The free-standing bridge design for a JJ and the few-micrometer extremities are shown in red (Layer~0). The connector lines in the two steps (Layers~1,2) are shown in blue and the capacitor pads (Layer~3) in green. The corresponding e-beam dosage for each layer is displayed in Table~\ref{table:SEMdose}.}} \label{fig:designcad} \end{figure} We use {a higher magnification} and lower dosage for finer features. Table~\ref{table:SEMdose} displays the typical magnification and dosage for each layer. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|} \hline Layer~\# & smallest feature size & SEM {magnification} & e-beam current \\ \hline Layer~0 & $<$ 200 nm & 1300X & 30 pA \\ Layer~1 & $\sim$ 1 $\mu$m & 600X & 220 pA \\ Layer~2 & $\sim$ 10 $\mu$m & 200X & 600 pA \\ Layer~3 & $>$100 $\mu$m & 50X & 10000 pA \\ \hline \end{tabular} \caption[The NPGS settings for a 30 keV electron beam]{{\footnotesize \textbf{The NPGS settings for a 30 keV electron beam}.}}\label{table:SEMdose} \end{table} \emph{\textbf{Resist development and pre-cleaning-}} The development recipe has three steps. We use an ice bath to bring the developer's temperature down to {$\sim 0^\circ$C} to slow down the development process. Figure~\ref{fig:icebath} demonstrates the development recipe. \begin{figure}[ht] \centering \includegraphics[width = 0.8\textwidth]{icebath.pdf} \caption[e-beam resist development recipe]{ {\footnotesize \textbf{e-beam resist development recipe:} \textbf{Step~1}, ZEP developer in an ice bath; wait for the developer to cool down to T$\sim 1^\circ$C.
Plunge the sample into the beaker for 30 seconds, then blow dry immediately. \textbf{Step~2}, MMA developer for 160 seconds, and blow dry afterward. \textbf{Step~3}, Rinse with IPA for 15 seconds. The left panel shows the JJ area in the simple transmon design (Fig.~\ref{fig:designcad}) before and after development. Note the undercut and the suspended bridge in the middle.}} \label{fig:icebath} \end{figure} After the development, we may use `oxygen plasma cleaning' to further remove resist residue from the substrate surface. \emph{\textbf{Electron-beam evaporation-}} We use the double-angle evaporation method to fabricate JJs, as depicted in Figure~\ref{fig:qubit_fab_2angle}. The {transmon capacitor pads are} also fabricated {during this} process. The thickness of the aluminum film is normally 30 nm for the lower layer and 60 nm for the top layer, and there is a $\sim1$ nm thick aluminum oxide layer grown between the two layers. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{jj_fab2.pdf} \caption[Double-angle evaporation and Josephson junction fabrication]{ {\footnotesize \textbf{Double-angle evaporation and Josephson junction fabrication:} Considering the indicated cross-section of the free-standing bridge in Figure~\ref{fig:liftoff}, we use double-angle evaporation to fabricate the JJ. \textbf{a}, The first layer of aluminum evaporation is about 30 nm. \textbf{b}, Introducing the oxygen mixture forms a thin layer of aluminum oxide, $\sim 1$ nm, as the insulator. \textbf{c}, The second layer of aluminum evaporation is about 60 nm, at the opposite angle {with respect to the substrate normal}.
\textbf{d}, Removing the resists and the {deposited aluminum} in a lift-off process.}} \label{fig:qubit_fab_2angle} \end{figure} \emph{\textbf{Lift-off and post-cleaning-}} We use acetone at a temperature of $T\sim 60^\circ$C for 40 minutes to dissolve the resist, which leaves behind the transmon circuit on the substrate. Figure~\ref{fig:qubit_fab} shows an SEM image of the final transmon circuit and the JJ. \begin{figure}[ht] \centering \includegraphics[width = 0.9\textwidth]{jj_patern.pdf} \caption[Qubit pattern SEM]{ {\footnotesize \textbf{Qubit pattern SEM}.}} \label{fig:qubit_fab} \end{figure} \subsection{JJ characterization} For the qubit design, we have a couple of considerations. First, the qubit frequency and its anharmonicity need to be in the proper range. We would like to have an anharmonicity somewhere in the range $200-300$ MHz. According to Equation~\ref{eq:trans_H_bb2}, the anharmonicity is determined by the energy associated with the shunt capacitor, $E_C = \frac{e^2}{2C} $. This capacitance mostly comes from the transmon pads. Therefore $E_C$ can be set by the design of the transmon pads (the size and separation of the pads).\\ \noindent\fbox{\parbox{\textwidth}{ \textbf{Exercise~1:} What is the capacitance between two sheets of perfect conductors separated in the horizontal orientation by $2a$ in a homogeneous medium? What if the medium has two different dielectric constants on each side, as depicted? \centering \includegraphics[width = 0.6\textwidth]{transmon_pad.pdf} }} Using HFSS simulation, the capacitance of our normal design (see Fig.~\ref{fig:qubit_fab}a, pad size $400\times400 \ \mu$m separated by $200\ \mu$m, with connection arms) is about $C=0.057$ pF. The contribution of $C_J$ is negligible (estimated to be about $0.35$ fF for a $200 \times 100$ nm JJ area, assuming an oxide layer thickness of $\sim 1$ nm).
\begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{qubit_cap1.pdf} \caption[The HFSS simulation for the transmon shunting capacitor]{{\footnotesize \textbf{The HFSS simulation for the transmon shunting capacitor:} Using HFSS, the shunting capacitance is calculated to be $C=0.057$ pF (for the simple design shown in Figure~\ref{fig:designcad}) which results in $E_C \sim 340$ MHz.}} \label{fig:td1} \end{figure} For a given qubit design $E_C$ is fixed, and for a given qubit frequency $\omega_{q}=\sqrt{8E_J E_C} - E_C$, the only knob is the Josephson energy $E_J=\frac{\hbar}{2e}I_c $, where the critical current is a function of the junction area and the thickness of the oxide layer, $I_c \propto \mathrm{area}/ \mathrm{oxide \ thickness}$. Therefore, obtaining the right critical current is essential. Fortunately, there is a very useful relationship between the resistance of a JJ at room temperature, $R_n$, and the JJ critical current, \begin{eqnarray} I_c=\frac{\pi\Delta(0)}{2 e R_n}, \end{eqnarray} where $\Delta(0) \sim170 \ \mu$eV is the aluminum superconducting energy gap at zero temperature\footnote{In general the gap energy depends on the temperature, $ \Delta(T)= \Delta(0) \tanh[\frac{\Delta(T)}{2k_B T}]$. However, since the qubit is operated at a temperature close to zero, $T \ll T_c$, with good approximation $\Delta(T) \simeq \Delta(0)$.}. The normal resistance of the junction $R_n$ can be measured by sending a probe current $I_\mathrm{probe}$ through the junction and reading the voltage $V_\mathrm{probe}$ across the junction. With this room temperature resistance measurement, and with our prior knowledge of $E_C$ (either from previous transmon measurements or simulation), we can estimate the frequency of the qubit before the cool-down.
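The critical-current relation above is easy to evaluate numerically (a sketch; the function name is ours):

```python
# Critical current from the room temperature junction resistance,
# I_c = pi*Delta(0)/(2*e*R_n), with the aluminum gap Delta(0) ~ 170 ueV.
import math

e = 1.602176634e-19          # elementary charge [C]
Delta0 = 170e-6 * e          # Al superconducting gap at T = 0 [J]

def critical_current_uA(Rn_ohm):
    """I_c in microamps for a junction with normal resistance R_n."""
    return math.pi * Delta0 / (2 * e * Rn_ohm) * 1e6

print(critical_current_uA(18e3))  # ~0.015 uA for R_n = 18 kOhm
```

This reproduces the $I_c \sim 0.015\ \mu$A quoted below for $R_n = 18$ k$\Omega$.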
The estimate for the transmon transition energy is \begin{eqnarray} E_{01}&=& \sqrt{\frac{h \Delta(0) }{C R_n}}-\frac{e^2}{2C} \nonumber\\ f=\omega_{01}/(2\pi) &=& \sqrt{\frac{\Delta(0) }{h C R_n}}-\frac{e^2}{2 h C} , \label{eq:jj_brob} \end{eqnarray} where $h$ is Planck's constant\footnote{Here we keep $h$ explicit and treat it carefully, since $E_C$ doesn't explicitly depend on $\hbar$.}. For example, in order to have a qubit frequency around $\omega_{01}/(2\pi) \sim 6$ GHz with our normal transmon geometry ($C \sim 0.057$ pF, see Fig.~\ref{fig:td1}), the critical current should be $I_c \sim 0.015 \ \mu$A ($R_n=18$ k$\Omega$). \section{Qubit-cavity system characterization} In this section, we discuss a typical qubit characterization procedure. Here are the typical steps before the cool-down: \begin{itemize} \item Josephson junction room temperature resistance probe. \item Choosing a proper cavity and weak/strong pin length adjustment. \item The qubit placement inside the cavity. \item Characterizing the cavity transmission (qubit chip included). \end{itemize} Then we put the cavity-qubit system inside the fridge and cool them down. The minimum circuitry inside the fridge is depicted in Figure~\ref{fig:exp_setup_char}. The main qubit characterization includes five basic experiments. \begin{itemize} \item One-tone spectroscopy, or ``punch-out". \item Two-tone spectroscopy. \item Rabi measurement. \item $T_1$ measurement. \item Ramsey measurement ($T_2^*$). \end{itemize} The first two experiments are in the frequency domain, which means we only look at the scattering parameters of the system for characterization. The last three experiments, however, are measured in the time domain and involve preparation and readout of the qubit state. In the following sections, we discuss how these are performed in the lab and what we learn from each experiment in more detail.
\begin{figure}[ht] \centering \includegraphics[width = 0.7\textwidth]{exp_setup_char.pdf} \caption[The minimum experimental setup for basic qubit characterization]{ {\footnotesize \textbf{The minimum experimental setup for basic qubit characterization:} The input lines can be used for qubit manipulation signals and cavity probe signals. Note that we don't get any signal back from the input lines (because of $\sim 2\times 50$ dB of attenuation). However, because we have a circulator connected to the strong port, the reflection off of the strong port can be measured by sending the signal from input~\#2 and receiving it back from the output. We will refer to the fridge circuitry (depicted on the left) by the short version (depicted on the right) throughout this document.}} \label{fig:exp_setup_char} \end{figure} \subsection{One-tone spectroscopy: ``punch-out"} The first step is to check whether the qubit is ``alive" or not. For that, we need to check whether the cavity frequency shifts based on the state of the qubit. Of course, at this point, we don't know the qubit frequency, so we cannot carefully manipulate it. Fortunately, we don't need to know the qubit frequency to check that the qubit is there. One way to think about this is that if the qubit is coupled to the cavity, the cavity becomes hybridized with the qubit and we should be able to detect a little bit of nonlinearity in the cavity. All we need to do is compare the transmission (or reflection) of the cavity at low power versus high power and see if the frequency of the cavity shifts. When we probe the cavity with very low power we are pretty sure that the qubit is in its ground state\footnote{If you are not convinced, consider that we only sweep the VNA frequency across the cavity resonance frequency, so the VNA span is $\sim \kappa$. Considering the avoided-crossing picture and the fact that $g\gg\kappa$, the qubit transition cannot be in this region.
So we are pretty much sure that we are not driving the qubit in this situation.}. Therefore we measure the resonance frequency of the cavity when the qubit is in the ground state. Next, we turn up the power of the VNA to a very high power. In this case, we send a huge number of photons into the cavity, which essentially overwhelms the qubit. Basically, the driving amplitude is so high that the induced current exceeds the critical current of the junction. Practically, in such a high power regime, we measure the bare cavity frequency. Now if there is a working qubit inside the cavity we can see the cavity frequency shift, and we say that the cavity ``punched out". \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{punch_out.pdf} \caption[The ``punch-out" measurement] {\footnotesize {\textbf{The ``punch-out" measurement:} The low (high) power transmission of the cavity is indicated by the blue (red) trace. For a low power probe signal, the qubit remains in the ground state and we essentially measure the dressed cavity transition, indicated by the blue double-sided arrow. In the high power case, the qubit is ``washed out" and we essentially measure the bare cavity transition, as depicted by the red double-sided arrow.}} \label{fig:punch_out} \end{figure} There is one more piece of information we can get from the punch-out measurement. If the high-power peak shifts to a lower frequency, we infer that the qubit frequency is below the cavity, and vice versa. A bigger shift means that the cavity and qubit are more strongly coupled\footnote{Note, the placement of the qubit inside the cavity also affects the coupling and consequently the punch-out shift.}. One can consider the frequency shift of the cavity resonance as a rough estimate of $\chi=g^2/\Delta$, but a careful characterization of $\chi$ can be done with time domain measurements.
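The order of magnitude of this shift is easy to estimate from $\chi = g^2/\Delta$. The numbers below are assumed example values, not measured ones:

```python
# Rough dispersive-shift estimate chi = g^2/Delta. The coupling g and
# detuning Delta below are assumed example values for illustration.
g = 100e6       # qubit-cavity coupling g/2pi [Hz], assumed
Delta = 1.5e9   # qubit-cavity detuning Delta/2pi [Hz], assumed

chi_MHz = g**2 / Delta / 1e6
print(chi_MHz)  # ~6.7 MHz
```

A shift of a few MHz is comparable to a typical cavity linewidth $\kappa$, which is why the punch-out is readily visible.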
If the qubit bare frequency happens to be very close to the cavity bare frequency, $\Delta<g$, then the qubit and cavity may enter the polariton regime and you may clearly see two peaks (separated by $\sim 2g$) in the low power cavity transmission\footnote{In this case you are directly resolving the polariton states.}. If the high power peak (bare cavity) is exactly in the middle of the low-power peaks, then the qubit and cavity are exactly on resonance and the separation is exactly\footnote{Therefore, one can use a tunable qubit to directly measure the effective coupling strength $g$.} $2g$. \subsection{Two-tone spectroscopy} Knowing that the qubit is working, the next step is to find the qubit frequency. The idea is to continuously send a weak microwave signal to the cavity at the low power cavity resonance---the cavity frequency when the qubit is in the ground state---and probe the cavity transmission. Therefore, we constantly receive a high transmission signal because we probe at the resonance of the cavity. While this first tone is on, we start sending another microwave signal (labeled as BNC\footnote{`BNC' is simply the name of the generator we normally use in the lab.}) into the cavity. We sweep the frequency of this second tone (BNC) and monitor the cavity transmission as depicted in Figure~\ref{fig:spectroscopy}. During the sweep, once the BNC frequency hits the qubit transition frequency (BNC$=\omega_q$), it excites the qubit\footnote{Actually the BNC signal drives Rabi oscillations on the qubit.
We saw in the previous chapter (Eq.~\ref{eq:psi_t_ge3}) that the qubit reaches maximum excitation ($P_e=1$) only if the drive is on-resonance with the qubit.}, therefore the state of the qubit is no longer the ground state (on average) and that causes a shift in the cavity frequency\footnote{Remember the interaction Hamiltonian in the dispersive regime~(Eq.~\ref{eq:H_dis_rearange}), which results in a qubit-state-dependent frequency for the cavity.}. Now, because the VNA frequency (which is fixed) is no longer resonant with the cavity, the transmitted power drops\footnote{If the BNC hits the cavity frequency (which is not shown here) we may also see a dip in transmission. However, we never mistake that for the qubit dip because it happens exactly at the VNA frequency. This dip could be due to some nonlinearity induced in the cavity, but it is more likely to be saturation of the amplification chain: when we add the relatively high power BNC signal on top of the VNA signal and both are highly transmitted to the amplifiers, the amplifiers may saturate, and this effect may show up as a dip in the VNA trace.} as depicted in Figure~\ref{fig:spectroscopy}b. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{spectroscopy2.pdf} \caption[Two-tone spectroscopy]{ {\footnotesize \textbf{Two-tone spectroscopy:} \textbf{a}, The schematic of the experimental setup for two-tone spectroscopy. A constant fixed-frequency signal from the VNA at the cavity low power resonance probes the cavity while a signal from the BNC sweeps across different frequencies from 4 to 6 GHz. \textbf{b}, The cavity transmission versus BNC frequency shows a dip at the qubit frequency. \textbf{c}, With higher power for the BNC, the two-photon transition is also detectable (red trace).
Note that the qubit transition is power broadened.}} \label{fig:spectroscopy} \end{figure} If we increase the BNC signal amplitude (by $\sim 10$ dBm), we can also see a second dip at a slightly lower frequency. This dip corresponds to the process by which two photons of the drive excite the qubit from the ground state to the second excited state\footnote{The transition $|g\rangle \to |f\rangle$ is a two-photon process, which is less probable compared to a one-photon process. Therefore a higher power is needed to drive that transition.}, $|g\rangle \to |f\rangle$, as depicted in Figure~\ref{fig:spectroscopy}c. The second dip gives a useful piece of information which allows us to simply calculate the transmon anharmonicity, $\omega_{eg} - \omega_{fe}= 2 (\omega_1- \omega_2)\simeq2(5.2-5.05)\,\text{GHz}=300$ MHz in this case. We just discussed spectroscopy in transmission mode. Equivalently, we may use reflection off the cavity for spectroscopy\footnote{There are a couple of reasons we may want to use reflection for spectroscopy. First, we might have a limited number of input lines, so we might not have a weak port for the cavity. Or, sometimes we have a low signal-to-noise ratio in transmission and we might have a better chance looking at the reflected phase. Also, we sometimes may not have enough power from the weak port, and we can use the reflection port which has a much stronger coupling to the cavity.}. In reflection, most of the signal is reflected from the cavity, therefore there is not much information in the magnitude of the signal. But we can look at the phase of the reflected signal, which is sensitive to the cavity resonance frequency\footnote{In spectroscopy by reflection you may get a dip or a peak depending on the delay offset of the VNA.}. \subsection{Time domain measurement: basics} In this section, we will discuss the time domain measurement of the qubit.
Time domain measurements require initialization, preparation and manipulation of the qubit state, and readout. In what follows, we briefly discuss these three steps. \emph{\textbf{Initialization--}} In our case the initialization is quite simple. The lifetime of the system is on the order of tens of microseconds, therefore all we need to do is leave the qubit alone for some amount of time ($\sim$100 microseconds) to make sure it is in the ground state with fairly high fidelity\footnote{In our case this fidelity is about 97\%, which depends on the effective temperature of the system. One may calculate the probability of thermal excitation for the qubit, $P_e^{\mathrm{thermal}} = \exp(-\hbar \omega_q/ k_B T)$, given the temperature of the fridge $T$ and the energy of the qubit $\hbar\omega_q$. However, the effective temperature of our system is slightly higher than the physical temperature of the fridge, as we normally measure $P_e^{\mathrm{thermal}} =0.03$ at 10 mK.}. \emph{\textbf{Preparation/Manipulation--}} Unlike the spectroscopy measurements where we constantly send signals and measure the scattering parameters, in time domain measurements we need to carefully send signals to the system with accurate timing and proper duration. This means we need to be able to switch the signals on and off with reasonable accuracy. We use analog RF $I/Q$ \textit{mixers} to perform the switching. Mixers can be used for modulating and demodulating. For switching purposes, we use mixers as modulators\footnote{We will see that for readout purposes we use mixers as demodulators.}. As depicted in Figure~\ref{fig:mixer}a, a typical $I/Q$ mixer, ideally, multiplies the LO signal by the signals in ports $I$ and $Q$ with a 90-degree phase difference\footnote{We will see later that this 90-degree phase difference has a very important role in the qubit preparation and tomography.}.
\begin{figure}[ht] \centering \includegraphics[width = 0.7\textwidth]{mixer.pdf} \caption[$I/Q$ mixer]{ {\footnotesize \textbf{$I/Q$ mixer:} \textbf{a}, An $I/Q$ mixer can be used as a modulator. Note that the outputs corresponding to the $I$ and $Q$ pulses are out of phase by 90 degrees. \textbf{b}, DC pulses on the $I/Q$ ports can be used to control (switch) a continuous signal. The blue (red) DC pulse is the input for port $I$ (port $Q$) and its corresponding output is in phase (90 degrees out of phase) with respect to the local oscillator.}} \label{fig:mixer} \end{figure} Therefore one can use an $I/Q$ mixer in modulation mode to switch a continuous signal. For that, we send the continuous signal to the LO port and we switch it by a DC pulse on port $I$ or $Q$. The RF port output is only a segment of the continuous LO signal, as gated by the DC pulses depicted in Figure~\ref{fig:mixer}b. This technique gives us enough control to prepare and manipulate the qubit via Rabi oscillations. For example, if we choose the LO frequency to be the qubit frequency, then by applying a DC pulse to port $I$, the pulse rotates the qubit. Thus by choosing a proper duration and amplitude of the DC pulse, we can prepare the qubit in the excited state or in a superposition state. Figure~\ref{fig:rabi_along_x_y} demonstrates qubit preparations for the excited state and superposition states, where we define pulses on port $I$ (port $Q$) to rotate the qubit along the $x$-axis ($y$-axis)\footnote{Of course there is no preferred direction for the qubit as $x$ and $y$. However, the first pulse (first rotation) in each run of the experiment sets a clock reference, determining the rotating frame of the coherent drive. A subsequent signal will rotate the qubit along the same axis if it is in phase with the first pulse and will rotate it along a different axis if it is out of phase with respect to the first rotation pulse.}.
\begin{figure}[ht] \centering \includegraphics[width = 0.9\textwidth]{mixer_rotation_x_y.pdf} \caption[Qubit rotation pulses]{ {\footnotesize \textbf{Qubit rotation pulses:} Pulses on the $I,Q$ ports result in rotations along the $x,y$ directions on the Bloch sphere. \textbf{a}, A $\pi/2$ pulse on port $I$ rotates the qubit along the $x$ direction and prepares a superposition state along the $y$ axis. \textbf{b}, A $\pi$ pulse prepares the qubit in the excited state. \textbf{c}, A $\pi/2$ pulse on $Q$ rotates the qubit around the $y$ axis and prepares the qubit in a superposition along the $x$ axis.}} \label{fig:rabi_along_x_y} \end{figure} \emph{Single sideband modulation--} In practice, using DC pulses on the $I/Q$ ports to manipulate the qubit has two drawbacks due to the mixer nonlinearity. First, the mixer may not provide exactly a 90-degree phase difference between the $I/Q$ ports, which is inconvenient and requires careful corrections for tomography results. The second drawback is that even when we do not apply a pulse to the $I/Q$ ports, there may be some signal leakage from the LO port to the RF port. This leakage can be minimized by adding DC offsets to the $I$ and $Q$ inputs. But even a very small leakage constantly drives the qubit and causes imperfections in the experiment. One way around this issue is to employ the single sideband modulation (SSB) technique for the qubit pulses. The idea is to set the LO frequency to be $\omega_q \pm \Omega_{\mathrm{SSB}}$, where $ \Omega_{\mathrm{SSB}} \sim 100$-$500$ MHz is the sideband frequency. Then, on the $I/Q$ ports we apply signals at the frequency $\Omega_{\mathrm{SSB}}$, $\pm$90 degrees out of phase, to up-convert (down-convert) the LO to the qubit frequency, as depicted in Figure~\ref{fig:SSB} for the up-converting case.
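The up-conversion is just a trigonometric identity, which can be checked numerically. The frequencies below are assumed example values for an ideal mixer:

```python
# Numerical check of single-sideband up-conversion: driving an ideal
# I/Q mixer with I(t) = cos(W t) and Q(t) = sin(W t) at the sideband
# frequency W produces a single tone at w_lo + W (the LO and image
# sidebands cancel). Frequencies are assumed example values.
import math

w_lo = 2 * math.pi * 5.5e9    # LO angular frequency [rad/s], assumed
w_ssb = 2 * math.pi * 0.5e9   # sideband angular frequency [rad/s], assumed

def rf_out(t):
    I, Q = math.cos(w_ssb * t), math.sin(w_ssb * t)
    # ideal I/Q mixer output: RF = I cos(w_lo t) - Q sin(w_lo t)
    return I * math.cos(w_lo * t) - Q * math.sin(w_lo * t)

ts = [i * 1e-12 for i in range(2000)]
err = max(abs(rf_out(t) - math.cos((w_lo + w_ssb) * t)) for t in ts)
print(err)  # ~1e-13: the output is a pure tone at w_lo + w_ssb
```

Flipping the sign of $Q$ selects the lower sideband instead, which is the down-converting case mentioned above.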
\begin{figure}[ht] \centering \includegraphics[width = 0.8\textwidth]{SSB.pdf} \caption[Single sideband modulation (SSB)]{{\footnotesize \textbf{Single Sideband Modulation (SSB):} By up-converting the LO signal by $\Omega_{\mathrm{SSB}}$, the mixer outputs at the qubit frequency. The relative phase of the SSB pulses determines the phase of the output signal and thus the direction of the rotation for the qubit.}} \label{fig:SSB} \end{figure} SSB solves both drawbacks we had with the DC pulsing technique. In this case, we don't need to worry about ``mixer non-orthogonality" because the phase of the output pulse is set by the phase of the $I/Q$ signal\footnote{The mixer non-orthogonality, in this case, may cause some issues with carrier leakage or leakage into the opposite sideband and higher harmonics. But that can be compensated by adjusting the phase of the AWG pulses.}. Therefore, the phase stability of the arbitrary waveform generator (AWG) sets our rotation axis accuracy, which is normally good enough for sideband frequencies of a few hundred MHz. As shown in Figure~\ref{fig:SSB}, the first pulses on the $I/Q$ ports define the preferred axis (we defined this to be the $x$ axis). The phase of subsequent pulses, referenced by the AWG, determines the rotation axis, as shown for the 90-degree out-of-phase pulse which results in a qubit rotation along the $y$ axis. Moreover, we don't need to worry about a small leakage of the LO to the RF port because the LO frequency is off-resonant by $\Omega_\mathrm{SSB}$ and won't disturb the qubit. \emph{\textbf{Readout--}} We discussed how to manipulate the qubit state by sending signals with frequency $\omega_q$ into the system to rotate the qubit. Here we discuss how to actually measure the qubit state by sending a signal at the cavity frequency $\omega_c$. As we discussed in the previous chapter, the frequency of the cavity depends on the state of the qubit.
Therefore, the natural way to detect the state of the qubit is to measure the phase shift across the cavity by using a homodyne measurement. We take two copies of a coherent signal that has the frequency of the cavity\footnote{This frequency is the cavity frequency at low power, meaning the dressed-cavity frequency. When we use the cavity dressed state frequency for readout we usually need to send it at low power, and this readout is called ``low-power readout''. Often the fidelity of low-power readout is not very good unless we use a parametric amplifier. However, one can also use the cavity bare frequency, again see a phase shift, and detect the state of the qubit. This method essentially uses the nonlinearity induced by the qubit-cavity interaction and requires more power. It is called high-power readout~\cite{reed2010} and does not require a parametric amplifier.}, and whenever we decide to read out the state of the qubit, we send one of the copies to the cavity, take the transmitted (reflected) signal back, and demodulate it with the other copy\footnote{Basically, the idea is to make a ``microwave interferometer", except here instead of adding the signals together we multiply them together.}. The demodulation results in a DC signal corresponding to the phase difference between the two copies. By reading the DC signal we can infer whether the cavity frequency shifted up or down. Figure~\ref{fig:readout} demonstrates the readout process.
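The multiply-and-average step can be sketched numerically. This is a minimal model, not our actual demodulation hardware, and the carrier frequency is an assumed value:

```python
# Sketch of homodyne demodulation: the signal returning from the cavity
# carries a qubit-state-dependent phase phi_q. Multiplying it by two
# quadrature copies of the reference and averaging over an integer
# number of carrier periods recovers I = cos(phi_q), Q = sin(phi_q).
import math

def demodulate(phi_q, w=2 * math.pi * 1e9, dt=1e-12, n=10000):
    # n*dt spans exactly 10 carrier periods, so the 2w terms average out
    I = 2 * sum(math.cos(w * i * dt + phi_q) * math.cos(w * i * dt)
                for i in range(n)) / n
    Q = -2 * sum(math.cos(w * i * dt + phi_q) * math.sin(w * i * dt)
                 for i in range(n)) / n
    return I, Q

I, Q = demodulate(0.1)
print(I, Q)  # ~cos(0.1), sin(0.1)
```

For a small phase shift this reproduces the statement below that the qubit information ends up almost entirely in $Q \simeq \phi_q$.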
\begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{readout.pdf} \caption[Qubit state readout, homodyne detection]{ {\footnotesize \textbf{Qubit state readout, homodyne detection:} The qubit-state-dependent phase shift is detected by comparing the signal which passes through the cavity with the reference signal.}} \label{fig:readout} \end{figure} The phase $\phi_0$ accounts for the fixed phase difference between the two signals due to the different path lengths and can be set to zero by adding a phase shifter ($\phi_0=0$). Therefore the demodulation gives $I=\cos(\phi_q)$ and $Q=\sin(\phi_q)$. For most practical situations the phase shift is small, $\cos(\phi_q) \simeq 1$, and therefore the phase shift (the information about the qubit state) is encoded in only one quadrature of the reflected signal, $Q=\sin(\phi_q) \simeq \phi_q$. The entire readout process can be simply represented in a phasor diagram (see Figure~\ref{fig:phasor_readout}). \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{phasor_readout.pdf} \caption[Readout in phase space representation]{ {\footnotesize \textbf{Readout in phase space representation:} A coherent signal with a minimum uncertainty in each quadrature probes the cavity, whose frequency shifts by $\pm \chi$ depending on the state of the qubit. The transmitted (reflected) signal acquires a state-dependent phase shift $\pm 2\chi/\kappa$ as discussed in Section~\ref{section:cavity}.}} \label{fig:phasor_readout} \end{figure} In order to measure the qubit state, we need to repeat the experiment $N$ times ($N>100$) and each time we detect the phase shift as a value in the $Q$ quadrature. For positive (negative) values we assign $+1$ $(-1)$, indicating that we have found the qubit in the ground (excited) state.
After repeating the experiment $N$ times, we gather the statistics and report the populations of the ground and excited states as \begin{eqnarray} P_g=\frac{N_+ }{N_+ + N_-} \pm \sqrt{\frac{N_+ N_-}{N^3}} \hspace{0.5cm} , \hspace{0.5cm} P_e=1-P_g, \end{eqnarray} where $N_+$ ($N_-$) is the number of experiments where we found the qubit in the ground (excited) state, inferred from positive (negative) values of $Q$. The error $\sqrt{(N_+ N_-)/N^3}$ is the binomial error. \subsection{Rabi measurements} Now we are ready to discuss Rabi measurements. Once we know the qubit frequency from spectroscopy, the first experiment in the time domain is to see Rabi oscillations, since this experiment doesn't require any pulse calibration. The idea is to send a qubit signal (with duration $t$) to the system, right after that send the cavity readout pulse, and read out the state of the qubit. Normally, we repeat this experiment for different durations\footnote{Normally we sweep $t$ from 0 to 100 ns in 100 steps. In each step the experiment is repeated $N \sim$ 200 times to have a reasonably small binomial error so we can see clear Rabi oscillations.} $t$, and for each time step we repeat the experiment $N$ times. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{rabi_exp_1.pdf} \caption[Rabi measurement]{ {\footnotesize \textbf{Rabi measurement:} The sequence for the measurement of Rabi oscillations and the typical room temperature circuitry are shown.}} \label{fig:rabi} \end{figure} Figure~\ref{fig:rabi} summarizes the Rabi experiment setup and procedure. As we discussed in the previous section, in order to control the qubit signal, we use a mixer as a switch which is controlled by DC pulses from an arbitrary waveform generator (AWG).
Another mixer is used to control the cavity signal with the same technique, to perform the homodyne measurement on the cavity and detect the phase shift to read out the state of the qubit. Figure~\ref{fig:rabi100ns} shows a typical Rabi oscillation. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{rabi100ns.pdf} \caption[Chevron plot]{ {\footnotesize \textbf{Chevron plot:} \textbf{a}, A typical result of Rabi oscillation measurements. \textbf{b}, By repeating the Rabi measurement for different qubit frequencies we obtain the ``chevron plot" which can be used to calibrate the qubit frequency. \textbf{c}, The Rabi oscillation frequency versus qubit pulse frequency. As we discussed in Chapter~2, the minimum oscillation rate corresponds to the maximum contrast for Rabi oscillations, which happens when the qubit pulse is on-resonance with the qubit frequency.}} \label{fig:rabi100ns} \end{figure} The reason we use a rather short Rabi time (100-200 ns) for the Rabi measurement is that we will use this Rabi sequence to calibrate the preparation pulses ($\pi$-pulse and $\pi/2$-pulse), which are normally short pulses\footnote{Moreover, it is wise to start off with a short sequence for the experiment because we might have a qubit with short decoherence times, or there might be some calibration issue in the system which could make it hard to see the qubit evolution at longer qubit evolution times.}. In this step, we may also tweak the readout power, frequency, and phase to maximize the oscillation contrast as depicted in Figure~\ref{fig:rabi100ns}a. In order to calibrate $\pi$-pulses, we first need to make sure that the qubit frequency is accurate and the oscillations are on-resonance with the qubit. One way to check this is to sweep the qubit pulse frequency while performing Rabi oscillation measurements. The resulting 2D color plot is called the ``chevron plot".
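The chevron shape follows from the generalized Rabi frequency, $\Omega_R = \sqrt{\Omega_0^2 + \Delta^2}$, which is minimal at zero detuning. A minimal sketch, with an assumed on-resonance Rabi rate $\Omega_0$:

```python
# Generalized Rabi frequency Omega_R = sqrt(Omega_0^2 + Delta^2): the
# oscillation rate is minimal (and the contrast maximal) at zero
# detuning, which is what the chevron plot is used to find.
# Omega_0 below is an assumed example value.
import math

def rabi_rate_MHz(delta_MHz, omega0_MHz=10.0):
    """Oscillation rate for detuning Delta, on-resonance rate Omega_0."""
    return math.hypot(omega0_MHz, delta_MHz)

rates = {d: rabi_rate_MHz(d) for d in (-20, -10, 0, 10, 20)}
print(rates)  # minimum at zero detuning
```

Fitting the measured oscillation rate versus pulse frequency to this hyperbola locates the qubit frequency at the vertex.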
By fitting a sine wave to the oscillations, one can find the best estimate of the minimum oscillation frequency, which occurs at the qubit frequency\footnote{Rabi spectroscopy is not super sensitive to the detuning in the regime of fast Rabi oscillations. Moreover, with a stronger drive we might also Stark shift the qubit. So, one would think that in order to find the qubit frequency it is better to have a longer Rabi sequence and slower Rabi oscillations to improve precision. But since our main concern is $\pi$-pulse calibration, it makes sense to do Rabi spectroscopy (chevron plot) with the actual power that we are going to use for the $\pi$-pulses.}. Later we will see that with a Ramsey measurement we can have a better estimate of the qubit frequency. After doing this calibration, we know the power, frequency and duration for the $\pi$-pulse and $\pi/2$-pulse. \subsection{$T_1$ Measurement} One of the main characteristics of a qubit is its relaxation time. In order to measure the lifetime of the qubit we use the following sequence: 1) We prepare the qubit in the excited state by sending a $\pi$-pulse to the qubit. 2) We wait for a time $t$, then 3) we measure the qubit state by sending a readout pulse to the cavity. Therefore we use the same setup that we had for the Rabi oscillation measurements (see Figure~\ref{fig:rabi} for schematics) but with a slightly different sequence from the AWG. Figure~\ref{fig:T1_measurement} summarizes the sequence and a result from a $T_1$ measurement. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{T1_exp_1.pdf} \caption[$T_1$ measurement]{ {\footnotesize \textbf{$T_1$ Measurement:} The $T_1$ measurement sequence (consider the experimental setup depicted in Figure~\ref{fig:rabi}) and a typical result.
The qubit lifetime (relaxation time $T_1$) can be measured by fitting the result to an exponential decay function.}} \label{fig:T1_measurement} \end{figure} \subsection{Ramsey Measurement ($T_2^*$)} With the Ramsey measurement, we characterize the dephasing time of the qubit, $T_2^*$. For that, we again use the same setup as for the measurements of Rabi oscillations (see Figure~\ref{fig:rabi}). The Ramsey sequence is as follows: 1) prepare the qubit in the superposition state $1/\sqrt{2}(|g\rangle + i | e\rangle)$ by applying a $\pi/2$-pulse to the qubit. 2) Wait for a time $t$, then 3) apply another $\pi/2$-pulse to bring the qubit back to the ground state\footnote{If the last $\pi/2$-pulse has the same phase as the first $\pi/2$-pulse we will put the qubit in the excited state. If we apply a negative pulse (opposite rotation) we bring the qubit back to the ground state. Either way is fine; all we need is to bring the qubit to an eigenstate.} and immediately 4) perform the readout. The sequence for the Ramsey measurement and the result are shown in Figure~\ref{fig:ramsey}. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{ramsey_exp_1.pdf} \caption[Ramsey measurement]{ {\footnotesize \textbf{Ramsey measurement:} The sequence for the Ramsey measurement (consider the experimental setup depicted in Figure~\ref{fig:rabi}) and a typical Ramsey result. The blue (red) curve is when the pulses are on-resonance (0.3 MHz off-resonance) with the qubit frequency. The dephasing time $T_2^*$ can be determined by fitting the data to an exponentially decaying sine function, $F(t)=A+B\sin(2 \pi \Delta_d t+\phi) \exp(-t/T_2^*)$.}} \label{fig:ramsey} \end{figure} \subsection{Full state tomography} The qubit readout projects the state of the qubit along the $z$ axis.
Therefore one can determine the expectation value of the $\sigma_z$ operator as \begin{equation} z=\langle \sigma_z \rangle = P_g-P_e = \frac{N_+ - N_-}{N_+ + N_-} \pm 2 \sqrt{ \frac{N_+ N_-}{N^3}}. \end{equation} Similarly, one can determine the expectation values of $\sigma_x$ and $\sigma_y$ as well by applying a 90-degree rotation pulse along $y$ or $x$, respectively, right before the readout\footnote{This is exactly what we do in Ramsey, where we prepare the qubit along $x$ and measure $\langle x\rangle$.}. Therefore, a full tomography sequence has three copies of each sequence, with no rotation ($z$), a $\pi/2$-rotation in phase ($x$), and a $\pi/2$-rotation 90 degrees out of phase ($y$) right before the readout, as depicted in Figure~\ref{fig:state_tomog}. \begin{figure}[ht] \centering \includegraphics[width = 0.9\textwidth]{state_tomog.pdf} \caption[Full state tomography readout pulses]{ {\footnotesize \textbf{Full state tomography readout pulses:} A readout pulse without any tomographic pulse gives the expectation value of $\sigma_z$ since it projects the qubit onto the ground or excited state. A readout pulse augmented by a tomographic pulse can be used to measure the expectation values of $\sigma_x, \sigma_y$ or any arbitrary basis on the Bloch sphere.}} \label{fig:state_tomog} \end{figure} The expectation values of $\sigma_x$ and $\sigma_y$ can be determined in the same way, \begin{eqnarray} x=\langle \sigma_x \rangle = P_g^{(x)}-P_e^{(x)} = \frac{N_+ - N_-}{N_+ + N_-}\\ y=\langle \sigma_y \rangle = P_g^{(y)}-P_e^{(y)} = \frac{N_+ - N_-}{N_+ + N_-}, \end{eqnarray} where the superscripts $^{(x)}$ and $^{(y)}$ indicate that the readout has an in-phase or out-of-phase rotation pulse, respectively.
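The counts-to-Bloch-vector bookkeeping above can be sketched as follows. The counts are made-up example numbers, and the sign convention (+1 for ground) follows the text:

```python
# Bloch-vector components from projective readout statistics.
# counts[b] = (N_plus, N_minus) for tomography basis b; +1 is assigned
# to the ground state as in the text. The counts are made-up examples.
import math

def expectation(n_plus, n_minus):
    """(N+ - N-)/N with the binomial error 2*sqrt(N+ N-/N^3)."""
    n = n_plus + n_minus
    val = (n_plus - n_minus) / n
    err = 2 * math.sqrt(n_plus * n_minus / n**3)
    return val, err

counts = {"x": (150, 50), "y": (100, 100), "z": (180, 20)}
bloch = {b: expectation(*c) for b, c in counts.items()}
print(bloch)  # x = 0.5, y = 0.0, z = 0.8 (each with its error bar)
```

The three values $(x, y, z)$ locate the measured state on (or, with decoherence, inside) the Bloch sphere.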
\section{Josephson Parametric Amplifier} The signals that carry quantum information at cryogenic temperature are feeble microwave signals. Especially for weak measurement, these signals often contain $\sim 1$ photon per microsecond or less on average. Therefore measurement signals need to be amplified before processing at room temperature, where they would otherwise be contaminated by thermal noise. Amplification is essential, and often multiple steps of amplification are needed. However, amplifiers add some noise to the signal at each step of amplification. The added noise is not just a technical subtlety; it is rather a fundamental property of quantum mechanics~\cite{slichterthesis}. In this section I follow the discussion in Ref.~\cite{slichterthesis} and briefly discuss the Josephson parametric amplifier and phase-sensitive amplification. I will try to connect the discussion to the previous chapters and add some points from the experimental perspective. For a detailed study of noise and amplification see the nice discussions in Ref.~\cite{slichterthesis}. \subsection{Classical nonlinear oscillators\label{Subsection:JPAtheory}} Similar to the transmon circuit discussed in Chapter~2, the Josephson parametric amplifier (JPA) is a nonlinear oscillator, except that the critical current is much higher for a JPA\footnote{Typically $I_0^{\mathrm{(JPA)}} \sim10 I_0^{\mathrm{(Transmon)}}, (E_J/E_c)^{\mathrm{(JPA)}}=100 (E_J/E_c)^{\mathrm{(Transmon)}}$.} than for the transmon. This means that for a JPA there are many more energy levels bound in the potential. Moreover, a higher critical current means a weaker nonlinearity. Therefore a JPA can be treated classically as an oscillator with a weak nonlinearity.
Figure~\ref{fig:JPA_simple} shows the schematic for a JPA, where we include the current source and corresponding impedance $Z_0$ to drive (pump) the JPA\footnote{Remember, we drive the transmon through the coupling between the transmon electric dipole and the electric field. Here we directly drive the JPA by connecting it to a current source via an effective impedance $Z_0$.}. \begin{figure}[ht] \centering \includegraphics[width = 0.69\textwidth]{LJPA_simple.pdf} \caption[JPA Schematic]{ {\footnotesize \textbf{JPA Schematic:} A Josephson parametric amplifier can be considered as a nonlinear LC oscillator (shaded region) connected to a current source via impedance $Z_0$.}} \label{fig:JPA_simple} \end{figure} For the currents flowing in the circuit we have \begin{subequations}\label{eq:LPA_ODE1} \begin{eqnarray} i_J + i_C + i_Z= I(t)\\ I_0 \sin(\delta) + C\dot{V} + \frac{V}{Z_0}= I(t)\\ \xrightarrow{V=V_J=\frac{\Phi_0}{2 \pi} \dot{\delta}} I_0 \sin(\delta) + C \frac{\Phi_0}{2\pi} \ddot{\delta} + \frac{\Phi_0}{2 \pi Z_0} \dot{\delta}= I(t), \end{eqnarray} \end{subequations} where we use the Josephson relations for $i_J$ and $V$, the voltage across the components\footnote{The voltages across the parallel components are equal; we substitute the voltage across the JJ.}. We drive the JPA with a coherent classical signal $I(t)=I_{p} \cos(\omega_{p} t +\phi_{p})$ and may assume $I_p<I_0$, which ensures that $i_J<I_0$.
However, we are interested in the regime where $i_J\ll I_0$, which means $\delta$ is small enough that we can expand $\sin(\delta)$ up to order $\delta^3$, \begin{subequations}\label{eq:Duffing_JPA1} \begin{eqnarray} I_0 (\delta - \delta^3/6) + C \frac{\Phi_0}{2\pi} \ddot{\delta} + \frac{\Phi_0}{2 \pi Z_0} \dot{\delta}&=& I_p \cos(\omega_p t + \phi_p)\\ \to \ddot{\delta} + 2\Gamma \dot{\delta} + \omega_0^2 (\delta - \delta^3/6) &=& \omega_0^2 \frac{I_p}{I_0} \cos(\omega_p t + \phi_p), \end{eqnarray} \end{subequations} where $\omega_0=\sqrt{\frac{2 \pi I_0}{C \Phi_0}}$ is the natural frequency\footnote{We may call this the ``unloaded" frequency of the JPA when it is disconnected from the load $Z_0$, i.e. $Z_0 \to \infty$.} of the JPA resonator (shaded region in Figure~\ref{fig:JPA_simple}). The parameter $\Gamma=1/(2CZ_0)$ accounts for damping of the resonator due to the coupling to $Z_0$. Equation~\eqref{eq:Duffing_JPA1}b is the well-known \emph{Duffing equation}, which appears in many nonlinear situations ranging from the pendulum to harmonic frequency generation in nonlinear optics. There are a variety of methods for solving Equation~\eqref{eq:Duffing_JPA1}b. Here I follow the method in Ref.~\cite{slichterthesis}. We set the phase of the pump as a reference, $\phi_p=0$, and consider the solution for $\delta$ to have two components, in-phase and out-of-phase with the pump. Therefore we use the following ansatz, \begin{eqnarray}\label{eq:ansatz_JPA} \delta=\delta_0 \cos(\omega_p t + \theta) = \delta_{\parallel} \cos(\omega_p t) + \delta_{\perp} \sin(\omega_p t), \end{eqnarray} where $\delta_{\parallel}$ ($\delta_{\perp}$) is the amplitude of the in-phase (out-of-phase) ``oscillations" in the JPA circuit\footnote{Note that $\delta$ is `the difference between the phases of the superconducting order parameter on each side of the junction'.
But it conveniently parameterizes the current and voltage in the circuit, as we saw in Equation~\eqref{eq:LPA_ODE1}. So we may simply refer to it as the ``oscillation" in the circuit for now.} By plugging the ansatz~\eqref{eq:ansatz_JPA} into \eqref{eq:Duffing_JPA1}b, we have \begin{eqnarray}\label{eq:Duffing_JPA2} (\omega_0^2-\omega_p^2) \left[ \delta_{\parallel} \cos(\omega_p t) + \delta_{\perp} \sin(\omega_p t) \right] \hspace{7cm} \nonumber \\ + 2\Gamma \left[ \omega_0 \delta_{\perp} \cos(\omega_p t) -\omega_0 \delta_{\parallel} \sin(\omega_p t) \right] \hspace{6cm} \nonumber \\ - \omega_0^2/6 \left[ \delta^3_{\parallel} \cos^3(\omega_p t) + \delta^3_{\perp} \sin^3(\omega_p t) \right] \hspace{5cm} \nonumber \\ - \omega_0^2/2 \left[ \delta^2_{\parallel} \delta_{\perp} \cos^2(\omega_p t) \sin(\omega_p t) + \delta_{\parallel} \delta^2_{\perp} \cos(\omega_p t) \sin^2(\omega_p t) \right] \hspace{0cm} \nonumber \\ = \omega_0^2 I_p/I_0 \cos(\omega_p t). \hspace{3cm} \end{eqnarray} Now we apply the RWA to the fast-oscillating terms\footnote{For example, $\cos^3(\omega_p t) \to 3/4 \cos(\omega_p t) \ \mathrm{and} \ \cos^2(\omega_p t)\sin(\omega_p t) \to 1/4 \sin(\omega_p t)$.} to obtain the following equations, \begin{eqnarray}\label{eq:Duffing_JPA3} (\omega_0^2-\omega_p^2) \delta_{\parallel} + 2\Gamma \omega_0 \delta_{\perp} - \omega_0^2/8 \left[ \delta^3_{\parallel} + \delta_{\parallel} \delta^2_{\perp} \right] &=& \omega_0^2 I_p/I_0 \\ (\omega_0^2-\omega_p^2) \delta_{\perp} - 2\Gamma \omega_0 \delta_{\parallel} - \omega_0^2/8 \left[ \delta^3_{\perp} + \delta_{\perp} \delta^2_{\parallel} \right] &=& 0.
\end{eqnarray} One can rearrange the above equations in terms of the quality factor of the resonator, $Q=\omega_0 Z_0 C$, and the dimensionless detuning $\tilde{\Delta}= 2Q (1-\omega_p/\omega_0)$, \begin{subequations}\label{eq:Duffing_JPA4} \begin{eqnarray} \delta_{\perp} + \delta_{\parallel} \left[ \tilde{\Delta} - \frac{Q}{8}( \delta^2_{\parallel} + \delta^2_{\perp}) \right] &=& Q I_p/I_0 \\ -\delta_{\parallel} + \delta_{\perp} \left[ \tilde{\Delta} - \frac{Q}{8}( \delta^2_{\parallel} + \delta^2_{\perp} )\right] &=& 0. \end{eqnarray} \end{subequations} Figure~\ref{fig:JPA_tilt2} shows the numerical solution to Equation~\ref{eq:Duffing_JPA4} for $\delta_0^2$ and $\theta$ versus the dimensionless detuning $\tilde{\Delta}$. Note, $\delta_0^2=\delta_{\perp}^2 + \delta_{\parallel}^2,\ \theta=\mathrm{atan}[\delta_{\perp}/\delta_{\parallel}]$. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{JPA_tilt2.pdf} \caption[Duffing resonator response]{{\footnotesize \textbf{Duffing resonator response:} Solutions to Equations~\ref{eq:Duffing_JPA4}. \textbf{a}, The resonance frequency decreases with increasing power, and ultimately the system enters the bi-stable regime, where there is more than one solution to Equation~\ref{eq:Duffing_JPA4}. \textbf{b}, This bifurcation behavior can also be seen in the phase response of the oscillator. Right before the bifurcation, the system exhibits a sharp response with respect to the detuning and power. The sensitivity to power is shown more clearly in Figure~\ref{fig:JPA_pup_transfer}.}} \label{fig:JPA_tilt2} \end{figure} Unlike in a linear resonator (e.g. 
a bare cavity\footnote{For a cavity hybridized with a qubit, you may see similar nonlinear behavior in the punch-out experiment during the transition from low to high power.}), where the frequency is independent of the power, for a nonlinear resonator the frequency decreases at higher driving power (Fig.~\ref{fig:JPA_tilt2}). Beyond a certain power, the JPA bifurcates, signaling that the system has more than one steady state. For amplification purposes, the ``sweet spot'' is right before this bifurcation, where the system exhibits maximum sensitivity, as manifested by a sharp slope in the phase. In order to understand how an amplifier works at the sweet spot, let's look at the phase $\theta$ versus driving power $I_p/I_0$, as depicted in Figure~\ref{fig:JPA_pup_transfer}. \begin{figure}[ht] \centering \includegraphics[width = 0.8\textwidth]{JPA_pup_transfer.pdf} \caption[JPA transfer function]{{\footnotesize \textbf{JPA transfer function:} The change in the phase of the oscillations in the JPA versus the power of the drive. This curve is a cross section of Figure~\ref{fig:JPA_tilt2}b at $\tilde{\Delta}=1.72$ (red dashed line), showing a sharp response of the phase to small changes in the power, which is the essence of JPA amplification.}} \label{fig:JPA_pup_transfer} \end{figure} Figure~\ref{fig:JPA_pup_transfer} is a cross section of Figure~\ref{fig:JPA_tilt2}b at $\tilde{\Delta}=1.72$ (red dashed line). The fact that the phase response is sharp with respect to small changes in the power right before the bifurcation is the essence of JPA amplification\footnote{Note that $\theta$ here is the phase of the oscillation inside the resonator. 
Using the input-output relations, one can show that a similar behavior is also manifested in the phase of the reflected signal from the resonator (e.g. see Chapter~3 in Ref.~\cite{drummond2013quantum}).}.\\ \noindent\fbox{\parbox{\textwidth}{\textbf{Exercise~2:} Use input-output theory to show that a similar sensitivity is manifested in the signal reflected off of the JPA. Show that the phase of the reflected signal changes dramatically with a small change in the signal power. }} \subsection{Paramp operation} Now we turn to connecting our understanding of the JPA and its transfer function to the actual situation in the experiment. Figure~\ref{fig:paramp_added} displays the minimum circuitry inside the fridge when we add the paramp to the line. \begin{figure}[ht] \centering \includegraphics[width = 0.9\textwidth]{paramp_added.pdf} \caption[The minimum experimental setup with paramp]{ {\footnotesize \textbf{The minimum experimental measurement setup with paramp:} The input lines can be used for qubit manipulation signals and cavity probe signals. An additional input line is dedicated to the paramp pump. The pump signal passes by a directional coupler and circulator to reach the paramp, and the reflected signal (which has acquired a phase shift) goes to the output line. A DC line is also used to flux bias the paramp.}} \label{fig:paramp_added} \end{figure} We use an input line to send the pump signal to the paramp. Ideally, the pump (which is a relatively strong coherent signal) should be isolated from the qubit system. A directional coupler and a circulator (the circulator C1 in Figure~\ref{fig:paramp_added}) prevent the pump signal from entering the cavity\footnote{In practice, sometimes multiple circulators are used for more isolation.}. 
The other circulator (C2 in Figure~\ref{fig:paramp_added}) directs the incoming pump signal to the paramp and sends the reflected signal to the output line. \subsubsection{Paramp calibration: single pump} The first step in the paramp setup is to find the paramp resonance frequency and tune it to the frequency where we want to operate the paramp. This can be done by looking at the phase of the tone reflected off of the paramp, as depicted in Figure~\ref{fig:single_pump}a\footnote{The nonlinear response of the paramp helps to find its resonance. Similar to the punch-out experiment, one sees a shift in the phase resonance with increasing pump power.}. Normally the phase response of the paramp shifts down as the probe tone power increases (the probe somewhat acts as a pump as well). After we convince ourselves that the resonance frequency of the paramp is at the right place, we start pumping it with a separate signal generator and use the VNA as a weak signal probe, as depicted in Figure~\ref{fig:single_pump}b. By increasing the power of the pump, adjusting the frequency of the pump, and several back-and-forths, we should be able to see the gain profile. Once we see some gain, we may tweak the pump power and frequency, and even the flux bias, to optimize the gain profile. Normally a symmetric Lorentzian gain profile (20 dB peak, 100 MHz bandwidth) is desirable, as depicted by the dashed line in Figure~\ref{fig:single_pump}b. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{single_pump.pdf} \caption[The paramp single pump operation]{ {\footnotesize \textbf{The paramp single pump operation:} \textbf{a}, By looking at the phase response of the JPA, we obtain a rough estimate of the proper power and flux bias to have sharp behavior at a desired frequency. \textbf{b}, Then we add a pump and set the power and frequency to the estimated values. 
We should be able to see a small amount of gain by adjusting the power. Then we fine-tune the power, flux bias, pump frequency, and pump phase to obtain a reasonable amount of gain $\sim 20$ dB and bandwidth $\sim 100$ MHz.}} \label{fig:single_pump} \end{figure} \subsubsection{Paramp calibration: double pump} The single pump operation is not the best way to pump the paramp, for practical reasons: mainly, the isolation provided by the directional coupler and circulator is not perfect. Therefore the pump signal can leak into the cavity. This issue is important when the pump frequency is the same as the cavity frequency, which happens in the situation of weak z-measurement, where the pump and cavity tones come from the same generator. In low-power measurement, any leakage of photons at the cavity frequency dephases the qubit. In order to get around this issue, we use a ``double-pump" technique. In this case, instead of pumping the paramp at the frequency $\omega_c$, we use two pumps at $\omega_c \pm \Omega_{SB}$, where $\Omega_{SB}$ is the sideband frequency, typically between 100 and 500 MHz. With the double pump technique, the paramp effectively works at the cavity frequency, but the pump signals are off-resonance with the cavity. For double pump operation, we first operate the paramp in single-pump mode and then simply modulate the pump tone at $\Omega_{SB}$. Normally we should see the gain profile after adjusting the pump power\footnote{Often the double pump needs higher power but gives more stable performance for the paramp.}. Figure~\ref{fig:double_pump} demonstrates the schematic for the double pump operation. 
\begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{double_pump.pdf} \caption[The paramp double pump operation]{ {\footnotesize \textbf{The paramp double pump operation.} }} \label{fig:double_pump} \end{figure} \subsection{Phase-sensitive amplification: phase vs amplitude} As we discussed in Subsection~\ref{Subsection:JPAtheory}, the essence of JPA amplification is that right before the bifurcation the reflected phase is very sensitive to the pump power, as depicted in Figure~\ref{fig:JPA_pup_transfer}. In the phase-sensitive mode of amplification, where the signal and pump have the same frequency\footnote{Not only do the signal and pump have the same frequency, but their phases are fully correlated. Practically, they come from the same generator.}, we can amplify one of the quadratures and de-amplify the other quadrature. In Figure~\ref{fig:amp_vs_phase_JPA} we demonstrate the situation for amplification of each quadrature. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{amp_vs_phase_JPA.pdf} \caption[Phase sensitive amplification]{ {\footnotesize \textbf{Phase sensitive amplification:} The upper (lower) panel demonstrates the paramp operation for amplification when the information is encoded in the amplitude (phase) of the signal.}} \label{fig:amp_vs_phase_JPA} \end{figure} Note that, in the case of qubit readout, or weak z-measurement, the information is encoded in the phase of the signal, while in the case of homodyne detection of qubit spontaneous emission, the information is encoded in the amplitude of the signal. We will discuss qubit measurement more fully in Chapter~4. We will also discuss more practical situations and some considerations for improving the performance of the paramp (e.g. ``dumb-signal" cancellation). 
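As a closing aside on the Duffing analysis, the steady-state amplitudes of Equations~\eqref{eq:Duffing_JPA4} are straightforward to obtain numerically. The sketch below uses a hand-rolled Newton iteration with an analytic Jacobian; the values of $Q$, $\tilde{\Delta}$, and $I_p/I_0$ are illustrative, not taken from a real device.

```python
# Newton solver for the steady-state Duffing relations (eq:Duffing_JPA4),
# yielding the in-phase and quadrature amplitudes (dp, dq).
import math

def duffing_residual(dp, dq, Q, det, drive):
    nl = det - (Q / 8.0) * (dp**2 + dq**2)   # shared nonlinear detuning term
    return (dq + dp * nl - Q * drive,        # first steady-state equation
            -dp + dq * nl)                   # second steady-state equation

def solve_duffing(Q, det, drive, dp=0.0, dq=0.0, steps=50):
    for _ in range(steps):
        f1, f2 = duffing_residual(dp, dq, Q, det, drive)
        # analytic 2x2 Jacobian of the residual
        a = det - (Q / 8.0) * (3 * dp**2 + dq**2)
        b = 1.0 - (Q / 4.0) * dp * dq
        c = -1.0 - (Q / 4.0) * dp * dq
        d = det - (Q / 8.0) * (dp**2 + 3 * dq**2)
        det_j = a * d - b * c
        dp -= (f1 * d - b * f2) / det_j      # Cramer's rule for the step
        dq -= (a * f2 - f1 * c) / det_j
    return dp, dq

dp, dq = solve_duffing(Q=20.0, det=0.5, drive=0.01)
amp2 = dp**2 + dq**2            # oscillation amplitude delta_0^2
theta = math.atan2(dq, dp)      # oscillation phase theta
```

Sweeping `det` at fixed `drive` (with continuation from the previous solution) traces out the tilted resonance of Figure~\ref{fig:JPA_tilt2}; near bifurcation the converged branch depends on the initial guess.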
\chapter{Quantum Measurement} \label{ch4} The concept of measurement is important in all disciplines of science and technology. However, measurement plays a particularly crucial role in quantum mechanics. This is not simply because quantum systems are small and delicate, but because measurement fundamentally disturbs the quantum system.\\ In this chapter, we will discuss the basics of quantum measurement in a pedagogical manner. This chapter includes the basic notion of projective measurement and more generalized types of measurement, including weak measurement. We will discuss continuous measurement and the stochastic master equation for the qubit-cavity system introduced in the previous chapters. \section{Projective measurement} Consider a quantum two-level system represented by the Hamiltonian $\hat{H} = -\omega_q \sigma_z/2$ with eigenstates $|\pm z\rangle$ and eigenvalues $\mp\frac{\omega_q}{2}$. The (pure) state of the system $|\psi \rangle$ is described by a normalized vector in Hilbert space, which can be visualized as a vector pointing from the center of the Bloch sphere to a certain point on its surface\footnote{For a qubit system the Hilbert space is 2D. Note that the surface of the Bloch sphere is a 2D manifold.} with unit radius (Fig.~\ref{fig:bloch}). \begin{figure} \centering \includegraphics[width = 0.35\textwidth]{bloch.pdf} \caption[Bloch sphere]{ {\footnotesize \textbf{Bloch sphere:} Projective measurement of the state $|\psi\rangle$ in the $i$-basis. The red and blue arrows indicate the backaction of the measurement. }} \label{fig:bloch} \end{figure} In this visualization, a projective measurement can be thought of as projecting the state $| \psi \rangle$ onto a specific basis (direction). 
A projective measurement along the $i$-basis (where $i$ can be any direction, but we mostly consider $i=x,y,z$) can be described by the two projection operators $\hat{\Pi}^{\pm i} =|\pm i\rangle \langle \pm i |$. Every time we perform a projective measurement in the $i$-basis, we collapse the state of the qubit and find it either in $|+ i \rangle$ or $|- i \rangle$ (Fig.~\ref{fig:bloch}). At this point, one may ask why we consider this destructive operation a measurement. The point is that the probability of the state being collapsed into $|\pm i \rangle$ is related to the state $| \psi \rangle$. To understand this, it is convenient to represent the qubit state in the measurement basis $\{ |\pm i \rangle \}$, \begin{eqnarray} |\psi \rangle = c_+ |+i \rangle + c_- |-i \rangle. \end{eqnarray} According to \emph{Born's rule}, the probabilities of finding the qubit in the states $ |\pm i \rangle $ are $P_{\pm}= \langle \psi| \Pi^{\pm i}| \psi \rangle= |\langle \pm i | \psi \rangle|^2 = |c_\pm|^2$. Therefore, if we perform a projective measurement on $N\gg1$ copies of $|\psi \rangle$ (or repeat the same experiment $N$ times), we would get the result $ \pm 1$ about $N_\pm \simeq P_\pm N$ times, indicating that we have collapsed the qubit state into $| \pm i \rangle$. Therefore we figure out the ratio $P_+/ P_-$ (note we also know that $P_+ + P_- =1$). Since the $c_\pm$ are complex numbers, we still need to figure out the relative phase between the coefficients of the eigenstates $|\pm i\rangle$. To find the relative phase we must perform another set of projective measurements along another basis $ j \neq i$. 
The best choice for the second basis is one where $|\langle \pm j | \pm i \rangle|^2=1/2 $. We now proceed with an example. Consider a projective measurement in the $z$-basis, $\Pi^{\pm z}=|\pm z \rangle \langle \pm z|$, on the qubit state \begin{eqnarray} | \psi \rangle= \frac{1}{\sqrt{2}}( | +z \rangle + | -z \rangle). \end{eqnarray} Assume that the measurement apparatus outputs a signal $V= \pm 1$ when the state is projected onto the state $|\pm z\rangle$\footnote{We discussed this in the previous chapter for qubit readout. In the next section, we will discuss the mechanism by which the measurement outcome is actually generated for general measurements.}. Since the qubit is initially prepared in an equal superposition of the states of the measurement basis, neither measurement outcome is more likely than the other. The qubit will be collapsed into $| \pm z \rangle$ with probability $P_\pm =\langle \psi| \Pi^{\pm z}| \psi \rangle=1/2$, and the apparatus outputs $\pm 1$. But after the measurement, we know with certainty that the qubit state is $|\pm z\rangle$. If we were to make another measurement, we would find the same result. This means we have gained information. However, we are now completely uncertain about the measurement result in the $x$-basis because $| \pm z \rangle= \frac{1}{\sqrt{2}}( | +x \rangle \pm | -x \rangle)$. Therefore we have gained full information in the $z$-basis but lost all information in the $x$-basis. 
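The statistics of this example are easy to simulate. The sketch below draws projective $z$-measurement outcomes for the equal-superposition state and recovers $P_+ \approx 1/2$ from the counts; it is a toy illustration, not tied to any apparatus.

```python
# Sampling projective z-measurements on |psi> = (|+z> + |-z>)/sqrt(2).
# Each shot collapses the state and the apparatus outputs V = +1 or -1,
# with Born-rule probability P+ = |c+|^2 for the +1 outcome.
import random

def count_plus(c_plus, c_minus, shots, seed=1):
    p_plus = abs(c_plus) ** 2 / (abs(c_plus) ** 2 + abs(c_minus) ** 2)
    rng = random.Random(seed)          # fixed seed for reproducibility
    return sum(1 for _ in range(shots) if rng.random() < p_plus)

shots = 10000
n_plus = count_plus(1 / 2**0.5, 1 / 2**0.5, shots)
p_est = n_plus / shots                 # ~ 0.5 for the equal superposition
```

Repeating with $c_+ \neq c_-$ recovers the ratio $P_+/P_-$ discussed above, but, as in the text, no choice of $z$-shots reveals the relative phase of the coefficients.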
This is a consequence of the Heisenberg uncertainty principle; we cannot be certain about two non-commuting observables at the same time\footnote{This doesn't mean we cannot perform measurements of two non-commuting observables at the same time, see Ref.~\cite{hacohen2016quantum}.}. In a more general case, the qubit state can be a mixture of states $|\psi_n \rangle$ with probabilities $P_n$, which can no longer be written as a single state vector $| \psi\rangle$. However, it can be represented by a density matrix, \begin{eqnarray} \rho = \sum_n P_n |\psi_n \rangle \langle \psi_n |. \end{eqnarray} The density matrix $\rho$ fully represents our state of knowledge about the system by accounting for both the quantum superposition and the classical uncertainty of the system. One can simply visualize the mixed state as a vector (norm $<$ 1) obtained by a weighted average over states with classical uncertainty. Similarly, projective measurement projects the mixed state along the measurement basis; \begin{eqnarray} \sum_n P_n |\psi_n \rangle \langle \psi_n | \to | \pm i \rangle \langle \pm i |. \end{eqnarray} Here we specifically focus on a projective measurement along the $z$-basis, which is the energy eigenbasis of the qubit. 
Representing the density matrix $\rho$ in the measurement basis $\{ | \pm z \rangle \}$, we have \begin{eqnarray} \rho &=& P_{++} |+z\rangle \langle +z | + P_{--} |-z \rangle \langle -z | \nonumber \\ &+& P_{+-} |+z \rangle \langle -z | + P_{-+}|-z \rangle \langle +z |\\ &=& \left( {\begin{array}{cc} P_{++} & P_{+-} \\ P_{-+} & P_{--} \\ \end{array} } \right). \end{eqnarray} The diagonal element $P_{++}$ $(P_{--})$ is a real number that represents the probability of projecting the qubit onto $|+z\rangle$ $(|-z\rangle)$, where $P_{++} + P_{--}=1$. The off-diagonal elements are complex numbers that represent the quantum coherences, and we have $P_{+-}^*=P_{-+}$. Therefore, a density matrix in general has three independent unknowns, which means three sets of projective measurements (e.g. in the $x$, $y$, and $z$ bases) are needed to fully characterize the state of the qubit\footnote{Recall the full state tomography discussion in the previous chapter. We will discuss full state tomography more fully in this chapter.}. Experimentally, we are usually able to project the qubit only along its energy eigenbasis. But we can add unitary rotations before the projection along $z$ to realize an effective projective measurement along any arbitrary basis, as discussed in the previous chapter. Projective measurements in the $z$-basis give us the ratio $P_{++}/P_{--}$, and since the probabilities must add to one, we obtain the diagonal elements. Projective measurements in the $x$-basis ($y$-basis) give us $\mathcal{R}e[ P_{+-} ]$ ($\mathcal{I}m[ P_{+-} ]$). 
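Putting the three measured expectation values together, the density matrix can be assembled as $\rho=\frac{1}{2}(\mathbb{1}+x\sigma_x+y\sigma_y+z\sigma_z)$, whose entries reproduce the $P_{\pm\pm}$ and $P_{+-}$ discussed above. A minimal sketch, with illustrative values of $x$, $y$, $z$:

```python
# Reconstructing the 2x2 density matrix from the tomography expectation
# values: rho = (I + x*sigma_x + y*sigma_y + z*sigma_z) / 2.
def rho_from_bloch(x, y, z):
    return [[(1 + z) / 2, (x - 1j * y) / 2],   # [P++, P+-]
            [(x + 1j * y) / 2, (1 - z) / 2]]   # [P-+, P--]

# illustrative values; a physical state needs x^2 + y^2 + z^2 <= 1
rho = rho_from_bloch(0.6, 0.0, 0.8)
```

Note that $P_{++}=(1+z)/2$ recovers $z=P_{++}-P_{--}$, and the off-diagonal entry satisfies $P_{+-}^*=P_{-+}$ by construction.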
\section{Generalized measurement in the $\sigma_z$ basis\label{section:generalized_m}} In this section, we discuss quantum measurement in a more detailed approach, allowing us to study generalized quantum measurement. Here we introduce the discussion in close relation to an actual lab experiment. \subsection{Simple Model\label{subsection:simple_model}} A quantum measurement is normally modeled by a system $\mathcal{S}$ with Hamiltonian $H_\mathcal{S}$ and a meter $\mathcal{M}$ with Hamiltonian $H_\mathcal{M}$. The measurement is performed by turning ``ON" the interaction between the meter and the system, $H_{\mathrm{int}}$, for a certain time $t$, which entangles the state of the system with the state of the meter. By performing a subsequent measurement on the meter, we collapse the entanglement and gain information. Let's first study this with a simple model\footnote{I follow Andrew Jordan's discussion from the KITP conference, 2018.}. Consider Figure~\ref{fig:QM_simple_model}, where the system is a qubit, $H_\mathcal{S}=-\omega_q \sigma_z/2$, probed by a free particle\footnote{A free particle described by a Gaussian wave packet, which has minimum uncertainty in both position and momentum.}, $H_\mathcal{M}=\hat{P}^2/2m$, passing by the qubit. Once the free particle is at a minimum distance from the qubit, they interact via $H_{\mathrm{int}}=-g \sigma_z \otimes \hat{P} \delta(t)$. Note that we assume the interaction is instantaneous and happens only at time $t=0$, when the particle has reached its minimum distance from the qubit. The parameter $g$ is the measurement strength, and in our model one can think of it as a measure of how close the particle passes by the qubit\footnote{A smaller distance causes a stronger push or pull (larger $g$), which results in a larger separation on the screen.}. 
Here we assume that we have minimum uncertainty in the position and momentum of the particle, which results in Gaussian distributions on the screen\footnote{More realistically, one can assume that the interaction happens on a time scale $T$ around $t=0$; in that case the effective coupling would be $g=\int_{-T/2}^{+T/2} g(t) dt$ and the separation is $\propto 2g$. Therefore the measurement strength depends on both the interaction strength and the interaction time. But here we assume that $g(t)=g \delta(t)$, meaning that the qubit and particle interact only at time $t=0$, when they have minimum distance.}. \begin{figure}[ht] \centering \includegraphics[width = 0.8\textwidth]{QM_simple_model.pdf} \caption[Quantum measurement: simple model]{ {\footnotesize \textbf{Quantum measurement, simple model:} A free particle passes by and interacts with a qubit. The interaction is in the form of a push or pull depending on the state of the qubit. The position of the particle when it hits the screen tells us about the state of the qubit. The free particle has its natural Gaussian distribution. The separation between the two distributions is proportional to the interaction strength and the interaction time.}} \label{fig:QM_simple_model} \end{figure} Now, assume the qubit is initially in the state $\psi=\alpha |0\rangle + \beta |1\rangle$ and the meter is in the state $\Phi$, which can be effectively represented as \begin{eqnarray} \Phi= N e^{-\frac{x^2}{4 \sigma^2}}, \end{eqnarray} where $\sigma$ quantifies the minimum fluctuation in position for the particle\footnote{Here we skip irrelevant details of the free-particle wave function and dynamics. Basically, the free particle is a wave packet moving along the $z$ direction, $\Phi (r,t)= N \exp[i(k\cdot r - \omega_p t)] \exp(-r^2/4 \sigma^2)$, where $k\cdot r=k_z z$. 
Upon the interaction with the qubit at $t=0, z=0$, the particle gets pulled or pushed and obtains a little bit of momentum along the $x$ direction as well, $k\cdot r=k_z z + k_x x $. We only care about the particle position at the screen, so we describe the particle state by its position in the $x$-direction at $z=L$, where the screen is located.}. The qubit and particle interact at $t=0$ and the total system (qubit+meter) evolves under the unitary evolution \begin{eqnarray} \mathcal{U}_{\mathrm{tot}}= e^{+i g \sigma_z \otimes \hat{P}}, \end{eqnarray} which entangles the qubit and particle states. Therefore, the state of the total system would be \begin{eqnarray} \Psi_{\mathrm{tot}} &=& \mathcal{U}_{\mathrm{tot}} \psi \otimes \Phi \nonumber \\ &=& \mathcal{N} \left[ \alpha |0\rangle \exp\left(- \frac{(x-g)^2}{4 \sigma^2}\right) + \beta |1\rangle \exp\left(- \frac{(x+g)^2}{4 \sigma^2}\right) \right]. \label{eq:qs_tildaO} \end{eqnarray} If we then measure the position at which the particle landed on the screen and find it to be $x=\tilde{x}$ (the wave function of the meter collapses), then our state of knowledge about the qubit would be \begin{eqnarray} \psi &=& \tilde{N} \left[ \alpha \exp\left(+ \frac{g \tilde{x}}{2 \sigma^2}\right) |0\rangle + \beta \exp\left(- \frac{g \tilde{x}}{2 \sigma^2}\right) |1\rangle \right], \label{eq:qs_tilda} \end{eqnarray} where $\tilde{N}$ is a normalization constant. Therefore we learn about the state of the qubit via an indirect measurement. This type of measurement is more general than projective measurement. As the measurement strength becomes stronger, we approach projective measurement; if the measurement strength is weak, we are in the weak measurement limit. Now we interpret the result of Equation~\eqref{eq:qs_tilda} in these two limits. 
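Equation~\eqref{eq:qs_tilda} is a simple Bayesian-style update rule and can be sketched directly. The sketch assumes real amplitudes and uses illustrative values of $\tilde{x}$, $g$, and $\sigma$.

```python
# State update after reading the meter value x_tilde (eq:qs_tilda):
# the amplitudes are reweighted by exp(+-g*x/(2*sigma^2)) and renormalized.
import math

def update_state(alpha, beta, x, g, sigma):
    w = g * x / (2 * sigma**2)
    a = alpha * math.exp(+w)          # weight toward |0>
    b = beta * math.exp(-w)           # weight toward |1>
    norm = math.hypot(a, b)           # valid since amplitudes are real here
    return a / norm, b / norm

# weak limit (g < sigma): a positive outcome nudges the state toward |0>
a1, b1 = update_state(2**-0.5, 2**-0.5, x=0.3, g=0.2, sigma=1.0)
```

With $g \gg \sigma$ the same update drives the state essentially all the way to $|0\rangle$ or $|1\rangle$, reproducing the projective limit discussed next.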
\emph{Projective measurement limit}- Now consider the situation where $g\gg \sigma$, which means the measurement is strong enough that the two distributions are well separated, with negligible overlap. That means we are quite sure about which distribution $\tilde{x}$ belongs to, $\tilde{x} \sim +g$ or $\tilde{x} \sim -g$, as depicted in Figure~\ref{fig:QM_simple_model_s}. \begin{figure}[ht] \centering \includegraphics[width = 0.8\textwidth]{QM_simple_model_s.pdf} \caption[Strong measurement]{ {\footnotesize \textbf{Strong measurement:} If the particle strongly interacts with the qubit, the separation between the two distributions will be large enough that we can read out the qubit state just by knowing in which distribution the particle has landed.}} \label{fig:QM_simple_model_s} \end{figure} Therefore one of the terms in \ref{eq:qs_tilda} is suppressed, \begin{subequations} \label{eq:qs_tilda_s} \begin{eqnarray} \tilde{x} \sim +g \xrightarrow{ g\gg \sigma} \psi = |0\rangle, \\ \tilde{x} \sim -g \xrightarrow{ g\gg \sigma} \psi = |1\rangle. \end{eqnarray} \end{subequations} This means that the qubit wave function is also projected onto one of its eigenstates in this limit. In this case it is easy to define a threshold at $x_{\mathrm{th}}=0$ by which the two histograms are completely separated. If $\tilde{x}>x_{\mathrm{th}}$ ($\tilde{x}<x_{\mathrm{th}}$) we conclude that the qubit has been projected into the ground (excited) state. \emph{Weak measurement limit}- Now consider the situation $g<\sigma$, meaning that the two distributions overlap significantly. Now if we obtain $\tilde{x}$, we are not sure which distribution $\tilde{x}$ belongs to. Yet, based on Equation~\eqref{eq:qs_tilda}, our knowledge about the qubit state is updated.
If $\tilde{x}$ is positive (negative), the qubit state shifts more toward the ground (excited) state because one of the terms dominates over the other in Equation~\eqref{eq:qs_tilda}. Therefore by this measurement we slightly disturb the qubit, but we still know the qubit state because we have measured that disturbance. \begin{figure}[ht] \centering \includegraphics[width = 0.8\textwidth]{QM_simple_model_w.pdf} \caption[Weak measurement]{ {\footnotesize \textbf{Weak measurement:} In the limit where the particle weakly interacts with the qubit, the separation between the two distributions is smaller. This gives partial information about the state of the qubit.}} \label{fig:QM_simple_model_w} \end{figure} In the next section we will discuss this type of measurement more rigorously. \subsection{POVM\label{subsection:POVMz}} In the previous section, we studied a general type of measurement which is indirect and applies to a wide range of measurements. Formally, this type of measurement can be described in terms of POVMs\footnote{POVM stands for `positive operator-valued measure'.}. For that, let's revisit the result we had in the previous subsection in terms of the density matrix.
For a projective measurement, the qubit state, which can be described by $\rho=\sum_n p_n |\psi_n\rangle \langle \psi_n|$, undergoes projection to one of the eigenstates, \begin{subequations}\label{eq:density_projection} \begin{eqnarray} \sum_n p_n |\psi_n\rangle \langle \psi_n| \xrightarrow{\tilde{x}>x_{\mathrm{th}}} |0\rangle \langle 0 | \\ \sum_n p_n |\psi_n\rangle \langle \psi_n| \xrightarrow{\tilde{x}<x_{\mathrm{th}}} |1\rangle \langle 1 |, \end{eqnarray} \end{subequations} which means the final density matrix is the result of acting with the projector $\Pi_n$ on the initial density matrix, \begin{subequations} \begin{eqnarray}\label{eq:density_projectors} \rho &\xrightarrow[\Pi_0 = |0 \rangle \langle 0|]{\tilde{x}>x_{\mathrm{th}}}& \frac{\Pi_0 \rho \Pi_0 }{\mathrm{Tr}[\Pi_0 \rho \Pi_0]}, \ \mathrm{with \ probability} \ P_0=\mathrm{Tr}[\Pi_0 \rho \Pi_0],\\ \rho &\xrightarrow[\Pi_1 = |1 \rangle \langle 1|]{\tilde{x}<x_{\mathrm{th}}}& \frac{\Pi_1 \rho \Pi_1 }{\mathrm{Tr}[\Pi_1 \rho \Pi_1]}, \ \mathrm{with \ probability} \ P_1=\mathrm{Tr}[\Pi_1 \rho \Pi_1], \end{eqnarray} \end{subequations} where $\mathrm{Tr}[\Pi_n \rho \Pi_n]$ in the denominator is a normalization factor. Note that we have $\sum_n \Pi_n=|0\rangle \langle 0| + |1\rangle \langle 1|=\mathbb{1}$.
In a more general manner, one can describe partial measurements (including weak and strong measurements) by a set of operators $\Omega_n$ which obey the constraint $\sum_n \Omega_n^{\dagger} \Omega_n = \mathbb{1}$. In this case we have similar operations, \begin{eqnarray} \rho &\xrightarrow[\Omega_n ]{n\mathrm{th\ outcome}}& \frac{\Omega_n \rho \Omega_n^{\dagger} }{\mathrm{Tr}[\Omega_n \rho \Omega_n^{\dagger}]}, \ \mathrm{with \ probability} \ P_n=\mathrm{Tr}[\Omega_n \rho \Omega_n^{\dagger}], \label{eq:density_partial} \end{eqnarray} except that now $\Omega_n$ is not necessarily a projector. Strictly speaking, the set $\{\Omega_n^{\dagger}\Omega_n\}$ forms the POVM, though we will loosely refer to the measurement operators $\Omega_n$ themselves as POVMs; in general, $\Omega_n$ can be described by a weighted sum over projection operators. For example, the measurement operator corresponding to the general measurement discussed in our model (Eq.~\ref{eq:qs_tildaO}) can be described by \begin{eqnarray} \Omega_{\tilde{x}}= \mathcal{N} \left[ \exp\left(-\frac{ (\tilde{x} -g)^2}{4 \sigma^2}\right) |0\rangle \langle 0| + \exp\left(- \frac{ (\tilde{x} + g)^2}{4 \sigma^2}\right) |1\rangle \langle 1| \right]. \label{eq:POVM0} \end{eqnarray} One can check that $\Omega_{\tilde{x}}$ acting on $\rho=|\psi \rangle \langle \psi |$, according to Equation~\eqref{eq:density_partial}, results in Equation~\eqref{eq:qs_tilda}. Note that the measurement outcome $\tilde{x}$ is a continuous variable, indicating the position at which the particle is detected on the screen in our model, but it can be a discrete value depending on the type of apparatus one uses for the meter\footnote{We will see later in this chapter that this value, in our experiment, is a ``semi-continuous" digitized homodyne voltage.}.
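Since the outcome $\tilde{x}$ is continuous, the completeness constraint becomes $\int \Omega_{\tilde{x}}^{\dagger} \Omega_{\tilde{x}}\, d\tilde{x} = \mathbb{1}$, which fixes $\mathcal{N}=(2\pi\sigma^2)^{-1/4}$ for Equation~\eqref{eq:POVM0}. A quick numerical sanity check of this (an illustrative sketch; the grid and parameter values are arbitrary):

```python
import numpy as np

g, sigma = 0.7, 1.3                     # illustrative separation and width
x = np.linspace(-30, 30, 200_001)       # fine grid, tails are negligible at +-30
dx = x[1] - x[0]

N = (2 * np.pi * sigma**2) ** -0.25     # normalization fixed by completeness
w0 = N * np.exp(-(x - g)**2 / (4 * sigma**2))   # <0|Omega_x|0>
w1 = N * np.exp(-(x + g)**2 / (4 * sigma**2))   # <1|Omega_x|1>

# Omega_x is diagonal, so the integral of Omega^dag Omega has diagonal entries:
c0 = np.sum(w0**2) * dx
c1 = np.sum(w1**2) * dx
print(c0, c1)   # both ~ 1.0: the measurement operators resolve the identity
```

The same normalization is implicit in all the $\Omega_{\tilde{V}}$ expressions that follow.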
\subsection{POVM in terms of physical parameters} Now we translate our model into the language of cavity QED and describe the actual weak measurement that we perform on the qubit in the experiment. In our case the qubit is probed by a microwave coherent signal, so essentially we need to replace the wave packet of the free particle with a coherent signal. There is, of course, an exact correspondence between a Gaussian wave packet and a coherent signal. As depicted in Figure~\ref{fig:povm_QED}, the coherent signal is initially prepared along quadrature $I$. It has minimum uncertainty along each canonical position and momentum\footnote{Remember that in our model, the wave packet also has minimum uncertainty in position and momentum.}. When the signal passes the cavity and interacts with the qubit, it acquires a phase shift which depends upon the state of the qubit. As we discussed in Chapter~2, the phase shift of the coherent signal can be translated to a displacement in the $IQ$ plane. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{POVM_cQED.pdf} \caption[Weak measurement cQED]{ {\footnotesize \textbf{Weak measurement cQED:} \textbf{a}, The qubit-state-dependent phase shift of the cavity probe signal. \textbf{b}, A more realistic schematic for the phase shift detection.}} \label{fig:povm_QED} \end{figure} Assuming the phase shift happens along the $Q$ quadrature\footnote{This can be done in the experiment by adjusting the phase of the probe signal.}, the measurement outcome is a signal $\tilde{V}$ that we obtain for the $Q$ quadrature of the homodyne measurement.
The corresponding POVM would be very similar to that of our model, \begin{eqnarray} \Omega_{\tilde{V}}= \mathcal{N} \left[ \exp\left(-\frac{ (\tilde{V} -g)^2}{4 \sigma^2}\right) |0\rangle \langle 0| + \exp\left(- \frac{ (\tilde{V} + g)^2}{4 \sigma^2}\right) |1\rangle \langle 1| \right]. \label{eq:POVM} \end{eqnarray} However, we need to figure out $g$ and $\sigma$ in terms of the actual parameters of the measurement~\cite{gambetta2008quantum}. As we discussed in Chapter~2, the (dimensionless) variance of a coherent state in each quadrature is $1/4$, which is the minimum fluctuation (see Exercise~7 of Chapter~2). However, in the actual experiment we collect the signal for a certain amount of time $\Delta t$ with collection efficiency $\eta$, which means the actual variance we have in the experiment is $\sigma^2=1/(4 \eta \kappa \Delta t)$, where $\kappa$ is the cavity linewidth\footnote{You may think of it as a shot-noise improvement in the variance. We get $ \eta \kappa \Delta t $ amount of signal from the cavity during the measurement, which improves the uncertainty by a factor of $1/\sqrt{\eta \kappa \Delta t}$.}. The separation between the two Gaussian distributions results from the qubit-state-dependent shift of the cavity frequency and also the number of photons inside the cavity, as depicted in Figure~\ref{fig:povm_QED}. We have a $2 \chi$ shift of the cavity resonance frequency which, in the limit of $\chi\ll \kappa$, results in a phase shift of $4 \chi/\kappa$ of the cavity probe (see the discussion in Chapter~3 for the cavity phase shift). Therefore the separation is $2g= 4 \chi \sqrt{\bar{n}}/\kappa$, where $\sqrt{\bar{n}}$ accounts for the phasor vector length in the $IQ$ plane as depicted in Figure~\ref{fig:povm_QED}.
So we have \begin{equation} \Omega_{\tilde{V}}= \mathcal{N} \left[ e^{-\eta \kappa\Delta t (\tilde{V} -2 \chi \sqrt{\bar{n}}/\kappa)^2} |0\rangle \langle 0| + e^{-\eta \kappa\Delta t (\tilde{V} +2 \chi \sqrt{\bar{n}}/\kappa)^2} |1\rangle \langle 1| \right]. \label{eq:POVM2} \end{equation} We also define the signal-to-noise ratio (SNR) to be \begin{eqnarray} S=\left(\frac{2g}{\sigma}\right)^2= \frac{64 \chi^2 \bar{n} \eta \Delta t}{\kappa }. \label{eq:SNR} \end{eqnarray} Normally in this experiment, the separation between the two Gaussian distributions is scaled\footnote{In our model this can be adjusted by the position of the screen; in the actual experiment we can simply amplify/attenuate the signal or just scale the signal after digitization. Note that scaling doesn't change the SNR.} to be $2 g=2$. This means one can rewrite the POVM as \begin{equation} \Omega_{\tilde{V}}= \mathcal{N} \left[ e^{- 4 \chi^2 \bar{n} \eta \Delta t / \kappa \, (\tilde{V} -1)^2} |0\rangle \langle 0| + e^{- 4 \chi^2 \bar{n} \eta \Delta t / \kappa \, (\tilde{V} +1)^2} |1\rangle \langle 1| \right], \label{eq:POVM3} \end{equation} where now $\tilde{V}$ is the scaled signal and the variance of the scaled signal is $\sigma^2=\kappa/(16 \chi^2 \bar{n} \eta \Delta t)$. It is convenient to define $k=4 \chi^2 \bar{n}/ \kappa$ as the measurement strength\footnote{The definition of the measurement strength $k$ seems to differ from the literature by a factor of two. This wouldn't be an issue as long as we scale the signal consistently.
Here I define $k$ such that the measurement operator in the Lindblad equation is exactly $\sqrt{k} \sigma_z$.}, which quantifies how strongly we are measuring the system regardless of the measurement time and efficiency: \begin{equation} \Omega_{\tilde{V}}= \mathcal{N} \left[ e^{- k \eta \Delta t (\tilde{V} -1)^2} |0\rangle \langle 0| + e^{- k \eta \Delta t (\tilde{V} +1)^2} |1\rangle \langle 1| \right]. \label{eq:POVM4} \end{equation} We also define a characteristic measurement time $\tau=1/(4 k \eta)$, which quantifies how long we should collect the signal to achieve $\sigma^2=1$ (SNR $=4$). To sum up this discussion, Equation~\eqref{eq:POVM4} describes a weak measurement on the qubit state, $|\psi\rangle \to \Omega_{\tilde{V}} |\psi \rangle$, or in a more general form, \begin{equation} \rho \to \frac{\Omega_{\tilde{V}} \rho \Omega_{\tilde{V}}^{\dagger}}{\mathrm{Tr}[\Omega_{\tilde{V}} \rho \Omega_{\tilde{V}}^{\dagger}]}, \label{eq:sum_up_POVM} \end{equation} and we obtain the signal $\tilde{V}$ with probability \begin{equation} P(\tilde{V})=\mathrm{Tr}[\Omega_{\tilde{V}} \rho \Omega_{\tilde{V}}^{\dagger}]=\rho_{00} e^{-2 k \eta \Delta t (\tilde{V} -1)^2} + \rho_{11} e^{-2 k \eta \Delta t (\tilde{V} +1)^2}, \label{eq:sum_up_POVM2} \end{equation} where $\rho_{00} \ (\rho_{11})$ is the probability for the qubit to be in the ground (excited) state before the measurement\footnote{Note that there is a factor of 2 difference between the exponents in Equation~\eqref{eq:POVM4}, which is an operator, and Equation~\eqref{eq:sum_up_POVM2}, which is a probability distribution.}.
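Equation~\eqref{eq:sum_up_POVM2} says the record $\tilde{V}$ is drawn from a two-Gaussian mixture whose mean is $\rho_{00}-\rho_{11}=\langle\sigma_z\rangle$. This is easy to confirm by Monte-Carlo sampling (an illustrative sketch; the parameter values are arbitrary, not from the experiment):

```python
import numpy as np

rng = np.random.default_rng(1)

k, eta, dt = 1.0, 1.0, 0.05            # illustrative measurement parameters
rho00, rho11 = 0.7, 0.3                # pre-measurement populations
sigma = 1 / np.sqrt(4 * k * eta * dt)  # width of each Gaussian in P(V~)

n = 200_000
# choose the +1 or -1 centered Gaussian with the Born probabilities ...
centers = np.where(rng.random(n) < rho00, 1.0, -1.0)
# ... and draw the record V~ from it, as in the probability distribution P(V~)
V = rng.normal(centers, sigma)

print(V.mean())   # ~ rho00 - rho11 = <sigma_z> = 0.4
```

Note that for weak measurement ($\sigma \gg 1$) the two peaks are completely buried in the noise, which is exactly the regime studied in the next section.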
\section{Continuous measurement in the $\sigma_z$ basis\label{section:CWM}} In the previous section we studied how the state of the qubit changes under a generalized measurement of duration $\Delta t$. In this section we are going to study continuous monitoring of the qubit state in the limit of very weak measurement. For that we start from the probability distribution of the signal $\tilde{V}$, Equation~\eqref{eq:sum_up_POVM2}. In the limit of very weak measurement, $\Delta t \to 0$, the variance of the distributions $\sigma^2= 1/(4 k\eta\Delta t) \gg 1$, which means that the two distributions almost overlap, as depicted in Figure~\ref{fig:overlap_PV}. In this limit one can show that \begin{equation} P(\tilde{V}) \simeq e^{-2 k \eta \Delta t (\tilde{V} - \rho_{00} + \rho_{11})^2} = e^{- 2k \eta \Delta t (\tilde{V} - \langle \sigma_z \rangle )^2}, \label{eq:POVM_sigz} \end{equation} which means we can replace the two distributions with a single distribution centered at $\langle \sigma_z \rangle$, as depicted in Figure~\ref{fig:overlap_PV}. \begin{figure}[ht] \centering \includegraphics[width = 0.9\textwidth]{overlap_PV.pdf} \caption[Measurement signal distribution in the weak limit]{ {\footnotesize \textbf{Measurement signal distribution in the weak limit:} In the limit of weak measurement, the separation between the two distributions is much smaller than the variance of the distributions.
In such a case, we can approximate the two distributions by a single distribution centered at $\langle \sigma_z \rangle$, as shown in Exercise~1.}} \label{fig:overlap_PV} \end{figure} Consequently the measurement operator $\Omega_{\tilde{V}}$ in Equation~\eqref{eq:POVM4} can be represented in a compact form, up to a normalization constant, \begin{equation} \Omega_{\tilde{V}} \simeq e^{- k \eta \Delta t (\tilde{V} -\hat{\sigma}_z)^2}. \label{eq:POVM_sigzO} \end{equation} \noindent\fbox{\parbox{\textwidth}{ \textbf{Exercise~1:} Verify Equation~\eqref{eq:POVM_sigz} by expanding Equation~\eqref{eq:sum_up_POVM2}. For that you need to show (up to a normalization constant) that $$p e^{-(x-1)^2/a^2} + q e^{-(x+1)^2/a^2} \xrightarrow[p+q=1]{a\gg 1} e^{-(x-p+q)^2/a^2}.$$ }} The fact that the measurement signal has a Gaussian distribution centered on $\langle\sigma_z \rangle$ means one can think of the measurement signal as a noisy estimate of $\langle\sigma_z \rangle$, which can be represented as\footnote{In fact, the interpretation in Equation~\eqref{eq:nosy_estimate} comes in very handy for simulating quantum trajectories.} \begin{equation} \tilde{V} = \langle \sigma_z \rangle + \frac{d\mathcal{W}}{\sqrt{4 k \eta}\, \Delta t}, \label{eq:nosy_estimate} \end{equation} where $d\mathcal{W}$ is a \emph{Wiener} increment, a zero-mean Gaussian random variable with variance $\Delta t$. \subsection{Stochastic Schr\"odinger equation\label{subsection:SSE}} Now we study the qubit evolution under the measurement operator $\Omega_{\tilde{V}}$; I follow the discussion in Ref.~\cite{jaco06}. For now we assume $\eta=1$ and consider a normalized qubit pure state $|\psi(t)\rangle$ at time $t$.
The qubit state at a later time $t+\Delta t$ would be \begin{subequations} \begin{eqnarray} |\psi(t+\Delta t)\rangle &=& \Omega_{\tilde{V}} |\psi(t)\rangle \\ &\propto& e^{- k \Delta t (\tilde{V} -\hat{\sigma}_z)^2} |\psi(t)\rangle\\ &\propto& e^{- k \Delta t (\hat{\sigma}_z^2 -2 \tilde{V} \hat{\sigma}_z)} |\psi(t)\rangle, \label{eq:Schro_SME} \end{eqnarray} \end{subequations} where we ignore the term proportional to $\tilde{V}^2$ in the exponent since we are eventually going to renormalize $|\psi(t+\Delta t)\rangle$. We now substitute $\tilde{V}$ from Equation~\eqref{eq:nosy_estimate} (with $\eta=1$ for now), \begin{eqnarray} |\psi(t+\Delta t)\rangle = \exp(- k \Delta t \hat{\sigma}_z^2 +2 k \Delta t \hat{\sigma}_z \langle \sigma_z \rangle +\sqrt{ k} \hat{\sigma}_z d \mathcal{W} ) |\psi(t)\rangle. \label{eq:Schro_SME2} \end{eqnarray} Now we replace $\Delta t \to dt$, implying the continuous limit, and expand the exponential, keeping terms up to first order in $dt$, \begin{eqnarray} \resizebox{.85\hsize}{!}{$|\psi(t+d t)\rangle = \left( 1 - k d t \hat{\sigma}_z^2 +2k d t \hat{\sigma}_z \langle \sigma_z \rangle +\sqrt{ k} \hat{\sigma}_z d \mathcal{W} + \frac{k}{2} \hat{\sigma}_z^2 (d\mathcal{W})^2 \right) |\psi(t)\rangle,$} \label{eq:Schro_SME25} \end{eqnarray} then we replace $(d \mathcal{W})^2 =dt$ according to stochastic calculus (the It\^o rule)\footnote{The Wiener increment $d\mathcal{W}$ has dimension of $\sqrt{t}$; see Ref.
\cite{jaco06} for more details.} and arrive at \begin{eqnarray} |\psi(t+d t)\rangle = \left( 1 - \frac{k}{2} \hat{\sigma}_z [ \hat{\sigma}_z - 4 \langle \sigma_z \rangle ] dt +\sqrt{ k} \hat{\sigma}_z d \mathcal{W} \right) |\psi(t)\rangle. \label{eq:Schro_SME3} \end{eqnarray} Now we need to normalize the state $|\psi(t+d t)\rangle$ because, so far, we have ignored normalization constants. One can show that \begin{eqnarray} \langle \psi(t+d t)|\psi(t+d t)\rangle = 1+4 k \langle \sigma_z \rangle^2 dt + \sqrt{4 k} \langle \sigma_z \rangle d \mathcal{W} + \mathcal{O}(dt^{3/2}), \label{eq:SSE_norm_cal} \end{eqnarray} where we keep terms only up to first order in $dt$ and second order in $d\mathcal{W}$. Using a binomial expansion, one can show that \begin{eqnarray} \left[ \langle \psi(t+d t)|\psi(t+d t)\rangle \right]^{-\frac{1}{2}}= 1- \frac{k}{2} \langle \sigma_z \rangle^2 dt - \sqrt{ k} \langle \sigma_z \rangle d \mathcal{W} + \mathcal{O}(dt^{3/2}).
\label{eq:SSE_norm_cal2} \end{eqnarray} Multiplying Equation~\eqref{eq:SSE_norm_cal2} by Equation~\eqref{eq:Schro_SME3} (and again keeping terms up to $dt$ and $(d\mathcal{W})^2$), we obtain the normalized stochastic Schr\"odinger equation (SSE), \begin{eqnarray} d |\psi(t) \rangle = \left( - \frac{k}{2} (\hat{\sigma}_z - \langle \sigma_z \rangle )^2 dt + \sqrt{ k} (\hat{\sigma}_z - \langle \sigma_z \rangle ) d \mathcal{W} \right) |\psi(t) \rangle, \label{eq:Schro_norm} \end{eqnarray} where we define $d |\psi(t) \rangle = |\psi(t + dt) \rangle - |\psi(t) \rangle$. For a given measurement record $\{\tilde{V}\}$ one can infer $d\mathcal{W}$ (see Equation~\ref{eq:nosy_estimate}) and integrate this equation to obtain the qubit pure-state evolution under measurement with perfect efficiency.
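The SSE can be integrated with a simple Euler--Maruyama scheme. The sketch below (assuming $\eta=1$ and a simulated noise record, since no experimental record is available here; the parameter values are illustrative) applies the unnormalized update of Equation~\eqref{eq:Schro_SME3} and renormalizes the state by hand at each step:

```python
import numpy as np

rng = np.random.default_rng(2)

k, dt, N = 1.0, 0.01, 1000          # measurement strength, step, number of steps
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # start in (|0> + |1>)/sqrt(2)

z_traj = []
for _ in range(N):
    ez = (psi.conj() @ sz @ psi).real                # <sigma_z>
    dW = rng.normal(0.0, np.sqrt(dt))                # Wiener increment, variance dt
    # unnormalized update, then normalization by hand
    psi = psi - 0.5 * k * dt * ((sz @ sz) @ psi - 4 * ez * (sz @ psi)) \
              + np.sqrt(k) * dW * (sz @ psi)
    psi = psi / np.linalg.norm(psi)
    z_traj.append((psi.conj() @ sz @ psi).real)

print(z_traj[-1])   # with measurement only, z diffuses toward +1 or -1
```

With no drive, the trajectory exhibits the stochastic collapse onto one of the $\sigma_z$ eigenstates discussed above.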
For example, the evolution of a qubit initialized in a pure state $\psi(0)=\alpha_0 |0\rangle + \beta_0 |1\rangle$ and subject to continuous measurement for a time $T$ can be obtained by integrating Equation~\eqref{eq:Schro_norm} as follows, \begin{eqnarray} \resizebox{.98\hsize}{!}{$ \left[ {\begin{array}{c} \alpha_{i+1} \\ \beta_{i+1} \\ \end{array} } \right] = \left[ {\begin{array}{c} \alpha_{i} \\ \beta_{i} \\ \end{array} } \right] - \frac{k}{2} dt \left[ {\begin{array}{cc} (1-z_i)^2 & 0\\ 0 & (1+z_i)^2\\ \end{array} } \right] \left[ {\begin{array}{c} \alpha_{i} \\ \beta_{i} \\ \end{array} } \right] + d\mathcal{W}_i \sqrt{ k} \left[ {\begin{array}{cc} 1-z_i & 0\\ 0 & -1-z_i\\ \end{array} } \right] \left[ {\begin{array}{c} \alpha_{i} \\ \beta_{i} \\ \end{array} } \right], $} \nonumber\\ \ \label{eq:SSE_update} \end{eqnarray} where $z_i=|\alpha_i|^2-|\beta_i|^2$ and $i=1,2,\ldots,N$ with $N=T/dt$. Therefore, given the initial values $\alpha_0, \beta_0$, one can update the state using the measurement record $d \mathcal{W}_i$ at each step\footnote{In the actual experiment we obtain $V_i= z_i + d\mathcal{W}_i/(\sqrt{4 k}\, dt)$ as the measurement signal (see Eq.~\ref{eq:nosy_estimate}), so one can rewrite \ref{eq:Schro_norm} in terms of $V_i$. But since perfectly efficient detection is not practical, we leave \ref{eq:Schro_norm} in this form, which is more convenient for simulating a quantum trajectory because one can simply use a Gaussian noise generator of variance $dt$ to generate $d\mathcal{W}$.
Although for simulation purposes, the unnormalized version of Equation~\eqref{eq:Schro_SME3} works even better, since one can manually normalize the state at each step.} and reconstruct the \emph{quantum trajectory}, as depicted in Figure~\ref{fig:SSE_update} for $N=101, dt=0.01, k=1, \alpha_0=\beta_0=1/\sqrt{2}$. \begin{figure}[ht] \centering \includegraphics[width = 0.9\textwidth]{SSE_update.pdf} \caption[SSE update trajectory]{ {\footnotesize \textbf{SSE update trajectory:} The measurement signal $V_i$ is used to infer $d\mathcal{W}_i$. The pure-state evolution can then be reconstructed with the SSE. In the right panel, the evolution is represented in terms of Bloch coordinates.}} \label{fig:SSE_update} \end{figure} One can also add unitary dynamics\footnote{For example, to add Rabi oscillations to the dynamics one also needs to consider terms like $\dot{\alpha} = i \Omega_R \beta$ and $\dot{\beta} = i \Omega_R \alpha$ in the state update, similar to Equation~\eqref{eq:ODE1_rabi_c_RWA} and Equation~\eqref{eq:ODE2_rabi_c_RWA}.}, so the SSE can be used to describe more general dynamics of the qubit state evolution. \subsection{Stochastic master equation} The SSE in the form of Eq.~\eqref{eq:Schro_norm} is only applicable to pure-state evolution. However, one can generalize this equation to describe mixed-state evolution as well. An easy way to obtain the generalized form is to represent Eq.~\eqref{eq:Schro_norm} in terms of the density matrix\footnote{The trick here is that we use a pure state $|\psi\rangle$ to obtain the SME; once we arrive at the equations for the SME in terms of the density matrix, these equations can be applied to any density matrix, either pure or mixed.
One can start from the beginning of Subsection~\ref{subsection:SSE} and follow the same steps in the density-matrix formalism to obtain the SME directly.} in this form, \begin{eqnarray} \rho=|\psi\rangle \langle \psi| \to d\rho = d|\psi\rangle \langle \psi| + |\psi\rangle d \langle \psi| + d|\psi\rangle d \langle \psi|, \label{eq:SME_psi_psi} \end{eqnarray} where by substituting $d |\psi\rangle$ from Equation~\eqref{eq:Schro_norm} we arrive at the stochastic master equation (SME)\footnote{Note that here we have a double commutator.}, \begin{eqnarray} d \rho= - \frac{k}{2} [ \sigma_z, [\sigma_z,\rho ]] dt + \sqrt{k} ( \sigma_z \rho + \rho \sigma_z - 2 \langle \sigma_z \rangle \rho ) d\mathcal{W}. \label{eq:SME0_rho} \end{eqnarray} \subsection{Inefficient measurement} In the last two subsections, we assumed that the measurement is perfectly efficient, $\eta=1$. In this section we relax this assumption and account for inefficient detection. Inefficient detection can be modeled by considering two concurrent independent measurements on the system and ignoring the measurement outcome of one of them. For that, we consider two measurement apparatuses performing measurements on the system. The measurement strength of the first (second) apparatus is $k^{(1)}$ ($k^{(2)}$), where $k^{(1)}=\eta k$ and $k^{(2)}=(1-\eta) k$, and the measurement outcome is $V^{(1)}$ ($V^{(2)}$), \begin{eqnarray} V^{(m)} = \langle \sigma_z \rangle + \frac{d\mathcal{W}^{(m)}}{\sqrt{4 k^{(m)}}\, \Delta t}, \label{eq:V_1V_2} \end{eqnarray} where $m=1,2$.
Considering both measurements, the qubit evolution would be \begin{eqnarray} d \rho &=& - \frac{k^{(1)} }{2}[ \sigma_z, [\sigma_z,\rho ]] dt + \sqrt{k^{(1)}} ( \sigma_z \rho + \rho \sigma_z - 2 \langle \sigma_z \rangle \rho ) d\mathcal{W}^{(1)} \nonumber \\ &-& \frac{k^{(2)} }{2} [ \sigma_z, [\sigma_z,\rho ]] dt + \sqrt{ k^{(2)}} ( \sigma_z \rho + \rho \sigma_z - 2 \langle \sigma_z \rangle \rho ) d\mathcal{W}^{(2)}. \label{eq:SME_rho1_2} \end{eqnarray} Now we ignore the second measurement outcome and average over all possible values of $d\mathcal{W}^{(2)}$. Since $d\mathcal{W}^{(2)}$ is a zero-mean Gaussian noise increment, the last term in Equation~\eqref{eq:SME_rho1_2} vanishes and we arrive at the SME for inefficient detection\footnote{Similarly, one can model other types of imperfections and sources of decoherence in the system (e.g.\ relaxation and dephasing) by considering that the environment performs measurements on the system (via a measurement operator $\sqrt{\gamma} \hat{X}$, which depends on the type of decoherence) to whose outcomes we do not have access.}, \begin{eqnarray} d \rho= - \dfrac{k}{2} [ \sigma_z, [\sigma_z,\rho ]] dt + \sqrt{ \eta k} ( \sigma_z \rho + \rho \sigma_z - 2 \langle \sigma_z \rangle \rho ) d\mathcal{W}, \label{eq:SME_rho} \end{eqnarray} where we substitute $k^{(1)}=\eta k$ and $k^{(1)} + k^{(2)}=k$, and simply drop the superscript on $d\mathcal{W}^{(1)}$.
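Averaging the SME~\eqref{eq:SME_rho} over the noise kills the $d\mathcal{W}$ term, so the ensemble-averaged coherence should decay at the Lindblad rate $2k$, independent of $\eta$. A sketch of this check (Euler--Maruyama with illustrative parameter values) integrates many trajectories and compares the averaged $x = 2\,\mathrm{Re}\,\rho_{01}$ with $e^{-2kt}$:

```python
import numpy as np

rng = np.random.default_rng(3)

k, eta = 1.0, 0.4                    # measurement strength and efficiency (illustrative)
dt, N, ntraj = 0.002, 500, 400       # step size, steps, number of trajectories
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho0 = 0.5 * np.ones((2, 2), dtype=complex)   # |+><+|: x(0) = 1, z(0) = 0

x_sum = np.zeros(N + 1)
for _ in range(ntraj):
    rho = rho0.copy()
    x_sum[0] += 2 * rho[0, 1].real
    for i in range(1, N + 1):
        ez = np.trace(sz @ rho).real
        dW = rng.normal(0.0, np.sqrt(dt))
        # deterministic double-commutator part + stochastic innovation part
        rho = rho + k * dt * (sz @ rho @ sz - rho) \
                  + np.sqrt(eta * k) * dW * (sz @ rho + rho @ sz - 2 * ez * rho)
        x_sum[i] += 2 * rho[0, 1].real

x_avg = x_sum / ntraj
print(x_avg[-1], np.exp(-2 * k * N * dt))   # ensemble average vs. Lindblad prediction
```

Note that both terms of the update are exactly trace-preserving, so no renormalization is needed here.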
For completeness, let's rewrite the SME in the form \begin{eqnarray} d \rho&=& k \left[ \sigma_z \rho \sigma_z - \frac{1}{2}( \rho \sigma_z^2 + \sigma_z^2 \rho ) \right] dt + \sqrt{ \eta k} ( \sigma_z \rho + \rho \sigma_z - 2 \langle \sigma_z \rangle \rho ) d\mathcal{W} \nonumber\\ \dot{\rho}&=& k \left[ \sigma_z \rho \sigma_z - \rho \right] + 2 \eta k ( \sigma_z \rho + \rho \sigma_z - 2 \langle \sigma_z \rangle \rho ) (V(t) - \langle \sigma_z \rangle), \label{eq:SME_rho2} \end{eqnarray} where in the last line we substitute $d \mathcal{W}$ in terms of the actual measurement signal $V(t)$ according to Equation~\eqref{eq:nosy_estimate}. The first term in \ref{eq:SME_rho2} is the Lindbladian term\footnote{The term $L \rho L^{\dagger} -\frac{1}{2} \{ L^{\dagger} L , \rho \} = \mathcal{D}[L]\rho$ is usually called the `dissipation superoperator'.} $L \rho L^{\dagger} -\frac{1}{2} \{ L^{\dagger} L , \rho \}$ with Lindblad operator $\hat{L}=\sqrt{k} \sigma_z$, as we introduced in Chapter~2 (see Eq.~\ref{eq:Lindblad_gen}). The second term, which includes the measurement record and depends on the quantum efficiency, is the state update due to the measurement (which is referred to as ``unraveling" in the literature\footnote{In the literature you might find the argument that ``unraveling is not unique" \cite{brun2000continuous,drummond2013quantum}. It is true that there are many ways to unravel the SME so that the average of many trajectories converges to the same Lindbladian evolution.
But once you choose your efficient detector, the unraveling is unique. What about an inefficient detector?}). In the last three subsections, we specifically discussed generalized measurements corresponding to the measurement operator $\sqrt{k}\sigma_z$ and found the resulting SME. This SME has a general form and can simply be extended to any relevant measurement operator $\sqrt{k}\hat{c}$, \begin{eqnarray} \dot{\rho}= k \left[ \hat{c} \rho \hat{c}^{\dagger} - \frac{1}{2}( \rho \hat{c}^{\dagger}\hat{c} + \hat{c}^{\dagger} \hat{c} \rho ) \right] + 2\eta k ( \hat{c} \rho + \rho \hat{c}^{\dagger} - \langle \hat{c} + \hat{c}^{\dagger} \rangle \rho )\left(V(t)- \left\langle \frac{\hat{c} + \hat{c}^{\dagger}}{2} \right\rangle \right), \nonumber\\ \label{eq:SME_c} \end{eqnarray} where $k$ still represents the measurement strength. We will see that the measurement operator can even be non-Hermitian. We can also add other types of imperfections to the dynamics. For example, qubit decoherence due to dephasing can be modeled by considering that the environment also measures the system with measurement operator $\sqrt{\frac{\gamma_2}{2}} \hat{\sigma}_z$, where $\gamma_2$ is the dephasing rate\footnote{This effect is significant if the total measurement time is comparable to the dephasing time, or the measurement strength $k$ is comparable to the dephasing rate $\gamma_2$. The dephasing rate is ideally $\gamma_2=\frac{\gamma_1}{2}=\frac{1}{2 T_1}$, where $T_1$ is the relaxation time of the qubit.}. However, we do not have access to that measurement record.
Therefore we sum over all possible outcomes for the environment (as we did in the inefficient-detection treatment) and obtain,
\begin{eqnarray}
\dot{\rho}&=& \left(k+\frac{\gamma_2}{2}\right) \left[ \sigma_z \rho \sigma_z - \rho \right] + 2 \eta k\, ( \sigma_z \rho + \rho \sigma_z - 2 \langle \sigma_z \rangle \rho ) (V(t) - \langle \sigma_z \rangle). \nonumber \\
\label{eq:SME_rho3}
\end{eqnarray}
One can show that Equation~(\ref{eq:SME_rho3}) has the following representation in terms of the Bloch components $x\equiv\langle \sigma_x \rangle$, $z\equiv\langle \sigma_z \rangle$,
\begin{subequations}\label{eq:SME_zx}
\begin{eqnarray}
\dot{z}&=&4 \eta k (1-z^2)(V(t)-z), \label{eq:SME_z}\\
\dot{x}&=&-2\left( k+\frac{\gamma_2}{2} \right)x -4 \eta k\, x z\, (V(t)-z). \label{eq:SME_x}
\end{eqnarray}
\end{subequations}
It is worth discussing the ensemble behavior of these equations, which is obtained when we average over all possible measurement signals (that is, we measure the system but disregard, or do not have access to, the measurement results). Let's consider the special case where the qubit is prepared in the superposition state $z=0, x=1$. It is apparent that in this case $\dot{z}=0$, but the quantum coherence $x$ decays at the rate $2k +\gamma_2$,
\begin{eqnarray}
x&=&e^{-(2 k+\gamma_2) t}.
\end{eqnarray}
Apart from the natural dephasing rate $\gamma_2$, which is ideally negligible, the qubit also dephases due to the unmonitored measurement photons,
\begin{eqnarray}
\Gamma=2k=\frac{8 \chi^2 \bar{n}}{\kappa}, \label{eq:MID}
\end{eqnarray}
which is called \emph{measurement-induced dephasing}\footnote{Later we will utilize this equation for calibration.}. One can also add unitary evolution to the SME~(\ref{eq:SME_rho2}) to account for a coherent drive on the qubit and obtain the full version of the SME,
\begin{eqnarray}
\dot{\rho}= -i [H_R, \rho ] &+& k \left[ \sigma_z \rho \sigma_z - \rho \right] \nonumber\\
&+& 2 \eta k\, ( \sigma_z \rho + \rho \sigma_z - 2 \langle \sigma_z \rangle \rho ) (V(t) - \langle \sigma_z \rangle), \label{eq:SME_rh0_full}
\end{eqnarray}
where $H_R$ represents the Hamiltonian for a drive on the qubit\footnote{Normally we consider $H_R=\frac{\Omega_R}{2}\sigma_x$ or $\frac{\Omega_R}{2}\sigma_y$, where we assume that the drive is resonant. In general, any coherent drive, detuned or along any axis, can be added to the SME. The coherent drive's Hamiltonian is conveniently represented in the rotating frame of the drive. Note that this is convenient because the experiment happens in the rotating frame of the drive: the preparation and tomography pulses come from the same generator that is used for the drive, therefore we pulse the qubit in the rotating frame of the drive. This should not be confused with the cavity homodyne measurement, which happens in the rotating frame of the cavity. Therefore, the experiment involves two independent rotating frames for two different purposes.}.
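As a concrete illustration, the Bloch-component SME above can be integrated with a simple Euler--Maruyama scheme. The sketch below (Python; the function name, defaults, and all parameter values are ours and purely illustrative) simulates individual $z$-measurement trajectories with the noisy signal $V = z + d\mathcal{W}/(\sqrt{4\eta k}\,dt)$; averaging many runs should reproduce the ensemble decay of $x$ at the rate $2k+\gamma_2$, while the trajectory-averaged $z$ stays near its initial value.

```python
import numpy as np

def simulate_z_measurement(k=1.0, gamma2=0.05, eta=1.0, dt=1e-3, T=1.0,
                           z0=0.0, x0=1.0, rng=None):
    """Euler-Maruyama integration of the Bloch-component SME:
    dz = 4 eta k (1 - z^2)(V - z) dt,
    dx = [-2(k + gamma2/2) x - 4 eta k x z (V - z)] dt,
    with the noisy record V = z + dW / (sqrt(4 eta k) dt)."""
    rng = rng or np.random.default_rng(0)
    z, x = z0, x0
    for _ in range(int(round(T / dt))):
        dW = rng.normal(0.0, np.sqrt(dt))
        V = z + dW / (np.sqrt(4 * eta * k) * dt)   # noisy homodyne signal
        dz = 4 * eta * k * (1 - z**2) * (V - z) * dt
        dx = (-2 * (k + gamma2 / 2) * x - 4 * eta * k * x * z * (V - z)) * dt
        z, x = z + dz, x + dx
    return z, x
```

Individual trajectories diffuse toward the measurement eigenstates $z=\pm 1$, but the ensemble average of $x$ follows $e^{-(2k+\gamma_2)t}$, as discussed above.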
In Sections~\ref{section:z-me} and \ref{section:x-me}, we will study the combined unitary and non-unitary evolution of the qubit in more detail, for both the $z$-measurement $\sqrt{k} \sigma_z$ and the $\sigma_-$-measurement\footnote{In this document, we interchangeably use `$\sigma_-$-measurement' and `$x$-measurement' for the measurement corresponding to the measurement operator $\sqrt{\gamma_1} \sigma_-$.} $\sqrt{\gamma_1} \sigma_-$.

\section{Bayesian update}
Although the SME (Eq.~\ref{eq:SME_rho2}) is a formal description of open quantum systems, the fact that it is a nonlinear equation makes it less convenient to work with. There is a fairly straightforward alternative method to reconstruct the qubit trajectory, based on Bayes' theorem,
\begin{equation}
P(A|B)=\frac{P(B|A) P(A)}{P(B)}, \label{eq:bayes}
\end{equation}
where $P(A|B)$ is the probability of event $A$ given that event $B$ has happened. In connection with quantum measurement one can assume that:
\begin{eqnarray}
\mbox{event} \ A & \to & \mbox{finding the qubit in the ground/excited state}, \nonumber \\
\mbox{event} \ B & \to & \mbox{obtaining the measurement signal} \ V. \nonumber
\end{eqnarray}
Therefore, one can use Bayes' rule to infer the qubit evolution conditioned on the measurement signal $V$. According to Bayes' rule we have,
\begin{subequations}\label{eq:bayes12}
\begin{eqnarray}
P_{i+1}(0) = P( 0 | V_i )= \frac{P(V_i|0) P_i(0)}{P(V_i)}, \label{eq:bayes1}\\
P_{i+1}(1) = P( 1 | V_i )= \frac{P(V_i|1) P_i(1)}{P(V_i)}, \label{eq:bayes2}
\end{eqnarray}
\end{subequations}
where $P_i(0)$ and $P_i(1)$ are the probabilities for the qubit to be in the ground and the excited state before the measurement---these are our prior knowledge in the $i$th step of the update.
Then we get the updated probabilities for the qubit state, $P_{i+1}(0)$ and $P_{i+1}(1)$, conditioned on the measurement outcome $V_i$. The updated probabilities serve as our prior knowledge for the next step of the state update. The probability $P(V_i)$ is the unconditioned probability of getting the signal $V_i$ based on our prior knowledge. The Bayesian approach is powerful because it connects the unknown conditional probability $P(0|V_i)$ to a well-known conditional probability $P(V_i|0)$. Note that $P(V_i|0)$ and $P(V_i|1)$ are nothing but Gaussian distributions separated by $\Delta V =2$, as we discussed in Equation~(\ref{eq:sum_up_POVM2}),
\begin{subequations}
\begin{eqnarray}
P(V_i) &\propto& \rho_{00} e^{-\frac{(V_i-1)^2}{2 \sigma^2}} + \rho_{11} e^{-\frac{(V_i+1)^2}{2 \sigma^2}},\\
P(V_i|0) &\propto& e^{-\frac{(V_i-1)^2}{2 \sigma^2}}, \label{eq:bayes_PV1}\\
P(V_i|1) &\propto& e^{-\frac{(V_i+1)^2}{2 \sigma^2}}, \label{eq:bayes_PV2}
\end{eqnarray}
\end{subequations}
where $\sigma^2=1/(4 k \eta \Delta t)$, as we discussed in Section~\ref{section:CWM}. Now, in order to clearly connect Bayes' theorem to the quantum trajectory, we proceed by dividing the two conditional probabilities in Equations~(\ref{eq:bayes1}) and (\ref{eq:bayes2}),
\begin{eqnarray}
\frac{P_{i+1}(0)}{P_{i+1}(1)} = \frac{P( 0 | V_i )}{ P( 1 | V_i )}= \frac{P_i(0)}{P_i(1)}\frac{P(V_i|0)}{P(V_i|1)},
\end{eqnarray}
and substitute the last term from Equations~(\ref{eq:bayes_PV1}) and (\ref{eq:bayes_PV2}),
\begin{eqnarray}
\frac{P_{i+1}(0)}{P_{i+1}(1)} = \frac{P_i(0)}{P_i(1)} \exp\left(+\frac{\Delta V}{\sigma^2}V_i\right),
\end{eqnarray}
where we prefer to have $\Delta V$ (which equals 2) explicitly in our representation\footnote{Note that the sign in the exponent depends on which way the Gaussian shifts for the ground and excited states.
The convenient choice is when the Gaussian shifts in the positive direction for the ground state, which is consistent with the interpretation in Equation~(\ref{eq:nosy_estimate}).}. Now, using the fact that $P_j(0)+P_j(1)=1$, one can calculate ${P_{i+1}(0)}$ and ${P_{i+1}(1)}$ given the prior knowledge ${P_{i}(0)}$ and ${P_{i}(1)}$ and the measurement outcome $V_i$. Before we proceed further, let's switch notation to the density matrix language, which later allows us to compare the Bayesian update with the SME update. For that we have $P_{i+1}(0) \to \rho_{00}(t+dt)$ and $P_{i}(0) \to \rho_{00}(t)$, and similarly $P_{i+1}(1) \to \rho_{11}(t+dt)$ and $P_{i}(1) \to \rho_{11}(t)$, and obtain,
\begin{eqnarray}
\frac{\rho_{00}(t+dt)}{\rho_{11}(t+dt)} = \frac{\rho_{00}(t)}{\rho_{11}(t)} \exp\left(+\frac{\Delta V}{\sigma^2}V_i\right). \label{eq:Bayesian_diagonal}
\end{eqnarray}
Equation~(\ref{eq:Bayesian_diagonal}) only allows us to calculate the evolution of the diagonal elements of the density matrix. In order to account for the off-diagonal elements\footnote{Here I follow Korotkov's discussion in \cite{koro11_bayes}.}, let's assume that the qubit at time $t$, before the $i$th measurement, was in the state $|\psi(t)\rangle = \sqrt{\rho_{00}(t)} |0\rangle + e^{i\phi} \sqrt{\rho_{11}(t)} |1\rangle$. After the measurement the state would be $|\psi(t+dt)\rangle = \sqrt{\rho_{00}(t+dt)} |0\rangle + e^{i\phi} \sqrt{\rho_{11}(t+dt)} |1\rangle$, where we assume that the measurement doesn't change the relative phase\footnote{In the Bloch sphere picture, this is to say that the measurement backaction only kicks the state up or down, but not to the sides.
In Korotkov's terminology there is only ``spooky'' backaction, no ``realistic'' backaction.} $\phi$. The density matrix before the measurement would be,
\begin{eqnarray}
\rho(t)=|\psi(t) \rangle \langle\psi(t)| = \rho_{00}(t) |0 \rangle \langle 0| + \rho_{11}(t) |1 \rangle \langle 1| \hspace{2cm} \nonumber \\
+ e^{-i\phi} \sqrt{\rho_{00}(t) \rho_{11}(t)} |0 \rangle \langle 1| +e^{i\phi} \sqrt{\rho_{11}(t) \rho_{00}(t)} |1 \rangle \langle 0|,
\end{eqnarray}
and similarly after the measurement, $\rho(t+dt)=|\psi(t+dt) \rangle \langle\psi(t+dt)|$. Therefore we arrive at a relation for the off-diagonal elements,
\begin{eqnarray}
\frac{\rho_{01}(t+dt)}{\rho_{01}(t)}= \frac{\sqrt{\rho_{00}(t+dt) \rho_{11}(t+dt)}}{\sqrt{\rho_{00}(t) \rho_{11}(t)}}.
\end{eqnarray}
One can add a damping term to this relation to phenomenologically account for additional dephasing (e.g.\ a finite $T_2^*$ time, or finite efficiency),
\begin{eqnarray}
\frac{\rho_{01}(t+dt)}{\rho_{01}(t)}= \frac{\sqrt{\rho_{00}(t+dt) \rho_{11}(t+dt)}}{\sqrt{\rho_{00}(t) \rho_{11}(t)}}e^{-\gamma\, dt}, \label{eq:Bayesian_off_diagonal}
\end{eqnarray}
where $\gamma =\frac{8 \chi^2 \bar{n}(1-\eta)}{\kappa} +1/T_2^*$ accounts for both dephasing due to imperfect detection and the finite qubit coherence time.
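Before turning to the Bloch-component form, the exponential odds factor in Eq.~(\ref{eq:Bayesian_diagonal}) can be sanity-checked numerically: the ratio of the two Gaussian likelihoods of Eqs.~(\ref{eq:bayes_PV1})--(\ref{eq:bayes_PV2}) collapses to $\exp(\Delta V\, V/\sigma^2)$. A minimal sketch (Python; the parameter values are ours, purely illustrative):

```python
import numpy as np

# Illustrative parameters: sigma^2 = 1/(4 k eta dt), Delta V = 2
k, eta, dt = 1.0, 0.8, 0.01
sigma2 = 1.0 / (4 * k * eta * dt)
dV = 2.0

V = np.linspace(-10.0, 10.0, 201)
P_V_given_0 = np.exp(-(V - 1)**2 / (2 * sigma2))   # ground-state likelihood
P_V_given_1 = np.exp(-(V + 1)**2 / (2 * sigma2))   # excited-state likelihood
ratio = P_V_given_0 / P_V_given_1                  # should equal exp(dV * V / sigma2)
```

The common Gaussian envelope cancels in the ratio, which is why the Bayesian update only ever needs this single exponential factor.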
\subsection{Bayesian update in terms of the Bloch components}
It is convenient to represent the Bayesian update in terms of $z=\langle \sigma_z\rangle$ and $x=\langle \sigma_x\rangle$. Considering that $z=2\rho_{00}-1$ and $x=2\rho_{01}$, and the fact that $\rho_{00}+\rho_{11}=1$, one can show that Equations~\ref{eq:Bayesian_diagonal} and \ref{eq:Bayesian_off_diagonal} can be represented in the following form\footnote{We define $z=\langle \sigma_z\rangle=\mathrm{Tr}(\rho \sigma_z)=\rho_{00}-\rho_{11}=2\rho_{00}-1$. Note that for the off-diagonal elements we have $\rho_{01}=\rho_{10}^*$, and here we assume that $\rho_{01}$ is real. Therefore $x=\langle \sigma_x\rangle=\mathrm{Tr}(\rho \sigma_x)=2\rho_{01}$ and $y=0$.},
\begin{subequations} \label{baysf12}
\begin{eqnarray}
z(t+dt)&=& \frac{1 + z(t) +(z(t)-1)e^{-V(t) S/\Delta V}}{1 + z(t)-(z(t)-1) e^{-V(t)S/\Delta V}}, \label{baysf1}\\
x(t+dt)&=& x(t) \frac{\sqrt{1-z(t+dt)^2}}{\sqrt{1-z(t)^2}} e^{-\gamma dt}, \label{baysf2}
\end{eqnarray}
\end{subequations}
where $S=(\Delta V/\sigma)^2$ is the signal-to-noise ratio. These equations, like the SME~(\ref{eq:SME_rho3}), can be used to update the qubit trajectory for continuous $z$-measurement. Note that, unlike for the SME, here we have not made any assumption about $dt$ or the measurement strength\footnote{We need to make that assumption if we add unitary dynamics to the Bayesian update.}. So $dt$ can in general be any duration, $dt \to \Delta t =T$, and in that case $V(t) \to V(T)= \frac{1}{T} \int_0^T V(t)\, dt$.
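A single step of this update is straightforward to implement. The sketch below (Python; the function name and default values are ours, not from the text) applies Eqs.~(\ref{baysf1})--(\ref{baysf2}) once, using $S/\Delta V = 8\eta k\, dt$:

```python
import numpy as np

def bayes_step(z, x, V, dt, k=1.0, eta=1.0, gamma=0.0):
    """One Bayesian update of the Bloch components (z, x) given the
    measurement outcome V over a step dt; uses S / Delta V = 8 eta k dt."""
    e = np.exp(-V * 8 * eta * k * dt)
    z_new = (1 + z + (z - 1) * e) / (1 + z - (z - 1) * e)
    x_new = x * np.sqrt((1 - z_new**2) / (1 - z**2)) * np.exp(-gamma * dt)
    return z_new, x_new
```

With $\gamma=0$ and a pure initial state ($x^2+z^2=1$) the update stays on the Bloch sphere, and its diagonal part reproduces the odds-ratio update of Eq.~(\ref{eq:Bayesian_diagonal}).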
Therefore, Equation~(\ref{baysf12}) can be used to obtain the final Bloch coordinates $z(T)$ and $x(T)$ without integration. For example, in the simple situation where the qubit starts in a superposition of the measurement operator eigenstates, $x(0)=1, z(0)=0$, we have,
\begin{eqnarray}
z(T)&=& \frac{1 - e^{-V(T) S/\Delta V}}{1 + e^{-V(T)S/\Delta V}}=\tanh\left(\frac{S}{2 \Delta V} V(T)\right), \label{baysf1T}\\
x(T)&=& \sqrt{1-z(T)^2}\, e^{-\gamma T}=\mathrm{sech}\left(\frac{S}{2 \Delta V} V(T)\right) e^{-\gamma T}. \label{bays2T}
\end{eqnarray}
Therefore the final state is determined only by the time-averaged signal $V(T)$. This is because all the measurements commute with one another and commute with the Hamiltonian\footnote{Note that if we add a Rabi drive, the Hamiltonian no longer commutes with the measurement operator, so we have to do step-wise integration similar to the SME.}.

\section{Bayesian vs SME \label{section:Bayes_vs_SME}}
We have introduced two approaches for the qubit state update: the SME approach (Eqs.~\ref{eq:SME_zx}) and the Bayesian update approach (Eqs.~\ref{baysf12}). Now the question is: ``What is the difference? What are the pros and cons of each approach? More importantly, do they even agree?'' We know that in order to arrive at the SME, we made a series of expansions and approximations valid in the weak measurement limit. For the Bayesian update, however, we did not make any assumption (except the assumption that Bayes' rule applies). So, in principle, one should arrive at the SME by expanding the Bayesian result.
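The commutation claim above is easy to verify numerically: applying the step-wise update of Eq.~(\ref{baysf1}) to a record of per-step outcomes gives exactly the same final $z$ as the single-shot formula of Eq.~(\ref{baysf1T}) applied to the time-averaged signal. A sketch (Python; the parameter values and the fake measurement record are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
eta, k, dt, n = 1.0, 1.0, 1e-3, 500
s = 8 * eta * k * dt                       # S / Delta V for one step

# a fake per-step measurement record (pure noise around z = 0, illustrative)
V = rng.normal(0.0, 1.0 / np.sqrt(4 * eta * k * dt), size=n)

z = 0.0                                    # step-wise update, Eq. (baysf1)
for Vi in V:
    e = np.exp(-Vi * s)
    z = (1 + z + (z - 1) * e) / (1 + z - (z - 1) * e)

# single-shot update with the time-averaged signal, Eq. (baysf1T)
z_single = np.tanh(8 * eta * k * (n * dt) * V.mean() / 2)
```

Both routes multiply the same exponential odds factors, so they agree to floating-point precision.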
For that, let's start from Equation~(\ref{baysf1}), substitute $S/\Delta V = 8 \eta k\, dt$, and calculate $dz=z(t+dt)-z(t)$ (we drop the explicit time dependence for compactness),
\begin{subequations}
\begin{eqnarray}
dz &=& \frac{(1-z^2)\sinh(4 \eta k V dt )}{\cosh(4 \eta k V dt )+ z \sinh(4 \eta k V dt )} \label{bays_prove1}\\
&=& 4 \eta k V (1-z^2)dt -(4 \eta k)^2 V^2 (z-z^3) dt^2+\mathcal{O}[dt]^{3/2}, \label{bays_prove2}
\end{eqnarray}
\end{subequations}
where we expand\footnote{We keep terms up to second order in $dt$, but remember that $V$ includes a term which is effectively of order $1/\sqrt{dt}$, Equation~(\ref{eq:nosy_estimate}).} $\sinh$ and $\cosh$ to arrive at Equation~(\ref{bays_prove2}). Now we substitute $V= z + \frac{d\mathcal{W}}{\sqrt{4 \eta k}\, dt}$ only in the second term of \ref{bays_prove2} and keep terms up to first order in $dt$ (remember $(d\mathcal{W})^2=dt$); therefore we have,
\begin{eqnarray}
dz&=& 4 \eta k V (1-z^2)dt - 4 \eta k (z-z^3) dt+\mathcal{O}[dt]^{3/2}\\
\to \dot{z}&=& 4 \eta k (1-z^2)(V -z) +\mathcal{O}[dt]^{1/2}, \label{bays_prove3}
\end{eqnarray}
where $\dot{z}=dz/dt$. Equation~\eqref{bays_prove3} is in agreement with the SME~(\ref{eq:SME_z}).\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~2:} By a procedure similar to the one in this subsection, show that the Bayesian equation for $\dot{x}$, Equation~(\ref{baysf2}), is also in agreement with the SME Equation~(\ref{eq:SME_x}) in the limit of weak measurement.
}}
Now the question is why we bother with the SME at all when we have the exact equations from the Bayesian update. The answer is that the SME has greater flexibility and can be used for any measurement operator.
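The same expansion can be checked numerically: for a small $dt$, the exact Bayesian increment of Eq.~(\ref{bays_prove1}) and the SME increment $\dot{z}\,dt$ from Eq.~(\ref{eq:SME_z}) agree up to the stated higher-order corrections. A sketch (Python; all values are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
eta, k, z, dt = 1.0, 1.0, 0.3, 1e-6
dW = rng.normal(0.0, np.sqrt(dt), size=1000)
V = z + dW / (np.sqrt(4 * eta * k) * dt)        # noisy measurement signal

a = 4 * eta * k * V * dt
dz_exact = (1 - z**2) * np.sinh(a) / (np.cosh(a) + z * np.sinh(a))  # exact Bayesian step
dz_sme = 4 * eta * k * (1 - z**2) * (V - z) * dt                    # SME (Ito) step
```

The residual difference is dominated by the $(d\mathcal{W})^2 - dt$ fluctuation, which vanishes relative to the increment itself as $dt \to 0$.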
We will see in the next section that for the $x$-measurement\footnote{By $x$-measurement, we mean the measurement with measurement operator $\sigma_-$. We refer to it as $x$-measurement because the measurement signal in that measurement is related to $\mathcal{R}e[\sigma_-]=\sigma_x$.} there is no Bayesian update equation.

\section{Generalized measurement in the $\sigma_x$ basis\label{section:generalized_mx}}
In this section, we study continuous measurement with the measurement operator $\sqrt{\gamma_1} \sigma_-$. This measurement operator occurs in homodyne detection of qubit emission. We may refer to this measurement as $x$-measurement since, as we will see later, we normally set the measurement phase so that the homodyne signal is related to $\mathcal{R}e[\sigma_-]=\sigma_x$.
\subsection{POVM\label{subsection:povm_sigmax}}
We follow a phenomenological approach to formulate the corresponding POVM. Consider a qubit which decays into a transmission line at a rate $\gamma_1$, as depicted in Figure~\ref{fig:x_measurement_simple}.
This configuration can be described by the interaction Hamiltonian\footnote{The interaction Hamiltonian before making the RWA is $H_{\mathrm{int}}= - \gamma_1 ( \sigma_- + \sigma_+)( \hat{a} + \hat{a}^{\dagger})$.}
\begin{eqnarray}
H_{\mathrm{int}}= - \gamma_1 ( \sigma_- \hat{a}^{\dagger} + \sigma_+ \hat{a} ), \label{eq:JCH_ch4}
\end{eqnarray}
\begin{figure}[ht]
\centering
\includegraphics[width = 0.48\textwidth]{x_measurement_simple.pdf}
\caption[$x$-measurement schematic]{ {\footnotesize \textbf{$x$-measurement schematic:} A qubit decays into a mode of the transmission line, where we perform homodyne measurement.}}
\label{fig:x_measurement_simple}
\end{figure}
where $\gamma_1$ is the relaxation rate of the qubit and $\hat{a}^{\dagger}$ ($\hat{a}$) is the creation (annihilation) operator for the corresponding electromagnetic mode of the transmission line\footnote{What happens to the cavity in this interpretation? One can think of the cavity as mediating the qubit emission. In this interpretation, the qubit relaxes faster into the transmission line when the qubit and cavity are closer in frequency. However, a more realistic interpretation considers that the qubit and cavity hybridize, so that the first two eigenstates of the combined qubit--cavity system act as an effective qubit, as discussed in Chapter~2. This interpretation is more accurate in the limit of strong hybridization, where the qubit state is a polariton state.}. Now assume that the qubit is initially in the state
\begin{equation}
\psi=\alpha_0|g\rangle + \beta_0|e\rangle, \label{eq:qu_init}
\end{equation}
and the transmission line is in the vacuum state $|\Phi\rangle=|0\rangle_{\mathrm{tr}}$, where we use the subscript $\mathrm{tr}$ for the transmission line.
After a time $dt$, the (unnormalized) state of the total system would be the entangled state,
\begin{eqnarray}
\Psi_{\mathrm{tot}} = \alpha_0 |g\rangle |0\rangle_{\mathrm{tr}} + \beta_0 \sqrt{1-\gamma_1 dt}\,|e\rangle |0\rangle_{\mathrm{tr}} + \beta_0\sqrt{\gamma_1 dt}\, |g\rangle |1\rangle_{\mathrm{tr}}. \label{eq:psi_tot_relax}
\end{eqnarray}
If we perform photon detection on the transmission line, the (unnormalized) state of the qubit would be,
\begin{subequations}
\begin{eqnarray}
\mathrm{detecting \ no \ photon \ } |0\rangle_{\mathrm{tr}} &\to& \psi = \alpha_0 |g\rangle + \beta_0 \sqrt{1-\gamma_1 dt}\,|e\rangle, \label{eq:psi_tot_relax1}\\
\mathrm{detecting \ a \ photon \ } |1\rangle_{\mathrm{tr}} &\to& \psi = |g\rangle, \label{eq:psi_tot_relax2}
\end{eqnarray}
\end{subequations}
where $\gamma_1 dt$ is the probability of a relaxation event when the qubit is excited.
However, if we perform homodyne measurement instead of photon detection, then the field of the transmission line collapses to a coherent state $|\alpha\rangle_{\mathrm{tr}}$ and we obtain a measurement outcome $\alpha$,
\begin{eqnarray}
\Psi_{\mathrm{tot}} &=& \left(\alpha_0 |g\rangle + \beta_0 \sqrt{1-\gamma_1 dt}\,|e\rangle \right) |\alpha\rangle_{\mathrm{tr}} \langle \alpha |0\rangle_{\mathrm{tr}} + \beta_0\sqrt{\gamma_1 dt}\, |g\rangle |\alpha\rangle_{\mathrm{tr}} \langle \alpha |1\rangle_{\mathrm{tr}} \nonumber\\
&=& e^{-|\alpha|^2/2} \left(\alpha_0 |g\rangle + \beta_0 \sqrt{1-\gamma_1 dt}\,|e\rangle + \alpha^* \beta_0\sqrt{\gamma_1 dt}\, |g\rangle \right) |\alpha\rangle_{\mathrm{tr}},
\end{eqnarray}
where we use $\langle \alpha | 0 \rangle_{\mathrm{tr}}=e^{-|\alpha|^2/2}$ and $\langle \alpha | 1 \rangle_{\mathrm{tr}}=\alpha^* e^{-|\alpha|^2/2}$, and we absorb constants into the normalization factor\footnote{Note that $|\alpha\rangle = e^{-|\alpha|^2/2} \sum_n \frac{\alpha^n}{\sqrt{n!} } |n\rangle$; therefore $\langle n |\alpha\rangle = \frac{\alpha^n}{\sqrt{n!}} \langle 0 |\alpha\rangle$, where $\langle 0 |\alpha\rangle=e^{-|\alpha|^2/2}$.}. We assume that $\alpha$ is real\footnote{This choice makes the measurement one of $\mathcal{R}e[\sigma_-]$, so we call it $x$-measurement.
Experimentally, the paramp phase (the phase of our phase-sensitive parametric amplifier) can be set so that all the information is encoded only in the ``real'' quadrature.} and define $V=\alpha \sqrt{ \gamma_1 dt}$, where $V$ is the homodyne signal. Therefore the qubit state after the measurement will be
\begin{eqnarray}
\psi &=& e^{-\frac{V^2}{2 \gamma_1 dt}} \left(\alpha_0 |g\rangle + \beta_0 \sqrt{1-\gamma_1 dt}\,|e\rangle + V \beta_0 |g\rangle \right). \label{eq:qu_after}
\end{eqnarray}
Note that $\psi$ is not normalized yet. One can show that the corresponding POVM connecting the qubit state before the measurement (Equation~\ref{eq:qu_init}) to the qubit state after the measurement (Equation~\ref{eq:qu_after}) has the form (up to a normalization factor),
\begin{eqnarray}
\Omega_V &=&e^{-\frac{V^2}{2 \gamma_1 dt}} \left( |g \rangle \langle g| + \sqrt{1-\gamma_1 dt } |e \rangle \langle e| + V |g \rangle \langle e| \right),\label{eq:POVM_xm_not_expanded}\\
&=& e^{-\frac{V^2}{2 \gamma_1 dt}} \left( 1- \frac{\gamma_1 dt }{2} \sigma_+\sigma_- + V \sigma_- \right), \label{eq:POVM_xm}
\end{eqnarray}
where we find Eq.~\ref{eq:POVM_xm} by expanding Eq.~\ref{eq:POVM_xm_not_expanded} up to first order in $dt$~\cite{tan2017homodyne}.\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~3:} Show that $\Omega_V$ is a POVM by verifying $\int \Omega_V^\dagger \Omega_V dV =\mathbb{1}$, and obtain the missing normalization factor in Equation~\eqref{eq:POVM_xm} (for the answer see Ref.~\cite{tan2017homodyne}).
}}
Now let's look at the probability of obtaining a measurement signal $V$,
\begin{eqnarray}
P(V) &=& | \Omega_V |\psi \rangle |^2= \langle \psi | \Omega_V^{\dagger} \Omega_V | \psi\rangle \\
&=& e^{-\frac{V^2}{ \gamma_1 dt}} \left( 1- \gamma_1 (dt-V^2) \langle \sigma_+\sigma_-\rangle + V \langle \sigma_+ + \sigma_-\rangle \right)\\
&=& e^{-\frac{V^2}{ \gamma_1 dt}} \left( 1- \frac{\gamma_1 dt }{2}(1-V^2)(1+z)+ V x \right), \label{eq:POVM_xm_P}
\end{eqnarray}
where $z=\langle \sigma_z \rangle$ and $x=\langle \sigma_x \rangle$. In the limit of continuous measurement, $dt \to 0$, we have,
\begin{eqnarray}
P(V) &\simeq& e^{-\frac{V^2}{ \gamma_1 dt}} \left( 1 + V x \right) \\
&\simeq& \exp \left[ -\frac{1 }{ \gamma_1 dt} (V^2 - \gamma_1 dt\, V x )\right]\\
&\simeq& \exp \left[ -\frac{(V - \gamma_1 x\, dt/2)^2}{ \gamma_1 dt }\right]. \label{eq:POVM_xm_Pv}
\end{eqnarray}
It is convenient to rescale the signal to have variance $\sigma^2 =\gamma_1 dt$, so we arrive at\footnote{We could do this rescaling right at the beginning by defining the homodyne signal as $V=\alpha\sqrt{\gamma_1 dt/2}$. This scaling may have to do with the fact that with homodyne measurement we only collect half of the signal on average.},
\begin{eqnarray}
P(V) &\simeq & \frac{1}{\sqrt{2 \pi \gamma_1 dt}} \exp \left[ -\frac{(V - \gamma_1 x\, dt)^2}{ 2 \gamma_1 dt }\right], \label{eq:POVM_x_scaled}
\end{eqnarray}
where we have also added the normalization factor.
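The completeness claim of Exercise~3 can be previewed numerically: integrating $\Omega_V^{\dagger} \Omega_V$ over $V$, with an assumed Gaussian normalization $1/\sqrt{\pi \gamma_1 dt}$ for the weight $e^{-V^2/\gamma_1 dt}$, gives the identity up to corrections of order $\gamma_1 dt$. A sketch (Python; the grid and parameter values are ours, purely illustrative):

```python
import numpy as np

gamma1, dt = 1.0, 1e-4
Vs = np.linspace(-0.05, 0.05, 4001)            # ~7-sigma-wide integration grid
dV = Vs[1] - Vs[0]

sm = np.array([[0.0, 1.0], [0.0, 0.0]])        # sigma_- in the (|g>, |e>) basis
pe = np.array([[0.0, 0.0], [0.0, 1.0]])        # sigma_+ sigma_- = |e><e|

total = np.zeros((2, 2))
for V in Vs:
    Omega = np.exp(-V**2 / (2 * gamma1 * dt)) * (np.eye(2) - 0.5 * gamma1 * dt * pe + V * sm)
    total += Omega.T @ Omega * dV              # Omega is real, so .T is the dagger
total /= np.sqrt(np.pi * gamma1 * dt)          # assumed normalization of the Gaussian weight
```

The residual deviation from the identity sits in the excited-state element and is of order $\gamma_1 dt$, consistent with having expanded the POVM only to first order in $dt$.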
Equation~\eqref{eq:POVM_x_scaled} is analogous to Equation~\eqref{eq:POVM_sigz}; however, this time the measurement signal distribution is shifted by $\gamma_1 \langle x \rangle dt$ and has a variance of $\gamma_1 dt$, as depicted in Figure~\ref{fig:overlap_PVx}.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.44\textwidth]{overlap_PVx.pdf}
\captionsetup{font=footnotesize}
\caption[Homodyne measurement signal distribution, $x$-measurement]{\textbf{Homodyne measurement signal distribution in the weak limit, $x$-measurement:} Note that $\gamma_1 dt \ll 1$, therefore $\sqrt{\gamma_1 dt} \gg \gamma_1 dt$.}
\label{fig:overlap_PVx}
\end{figure}
Therefore, the homodyne signal can be described in the form\footnote{It is worth mentioning that, in the case of inefficient detection, the signal would be $V = \sqrt{\eta} \gamma_1 \langle x \rangle dt + \sqrt{\gamma_1} d\mathcal{W}$. This intuitively makes sense because we always rescale the signal to have variance $\gamma_1 dt$ regardless of the efficiency $\eta$. Still, inefficient measurement decreases the SNR, since the mean separation of the homodyne signal conditioned on $\langle x\rangle$ scales with $\eta$.},
\begin{eqnarray}
V &= & \gamma_1 \langle x \rangle dt + \sqrt{\gamma_1}\, d\mathcal{W}= \gamma_1 \langle x \rangle dt + \sqrt{\gamma_1}\, d\xi\, dt, \label{eq:noisy_estimate_x}
\end{eqnarray}
where $d\mathcal{W}$ and $d\xi$ are zero-mean Gaussian random variables with variances $dt$ and $dt^{-1}$, respectively (therefore $\sqrt{\gamma_1}\, d\mathcal{W}$ has variance $\gamma_1 dt$).
\subsection{SME\label{Subsection:SME_xm_xz}}
Now we turn to the state evolution, calculating the SSE and the SME.
For the SSE we simply need to calculate the change in the state $|\psi\rangle$ during the measurement time $dt$,
\begin{eqnarray}
d|\psi \rangle &=& |\psi(t+dt)\rangle - |\psi(t)\rangle = (\Omega_V -1)|\psi\rangle \nonumber \\
&=&e^{-\frac{V^2}{2 \gamma_1 dt}} \left( - \frac{\gamma_1 dt }{2} \sigma_+\sigma_- + V \sigma_- \right) |\psi\rangle. \label{eq:SSE_x_m}
\end{eqnarray}
In order to obtain the SME we calculate $d\rho = d|\psi\rangle \langle \psi| + |\psi\rangle d \langle \psi| + d|\psi\rangle d \langle \psi|$,
\begin{eqnarray}
d\rho &=& \left( - \frac{\gamma_1 dt }{2} \sigma_+\sigma_- + V \sigma_- \right) \rho + \rho \left( - \frac{\gamma_1 dt }{2} \sigma_+\sigma_- + V \sigma_+ \right) \nonumber \\
&+& \left( - \frac{\gamma_1 dt }{2} \sigma_+\sigma_- + V \sigma_- \right) \rho \left( - \frac{\gamma_1 dt }{2} \sigma_+\sigma_- + V \sigma_+ \right), \label{eq:SME_x_m1}
\end{eqnarray}
where we used Equation~(\ref{eq:SSE_x_m}) and ignored normalization constants.
Again, we only keep terms up to first order in $dt$\footnote{Remember that $V$ has a term of order $\sqrt{dt}$, according to Equation~(\ref{eq:noisy_estimate_x}).},
\begin{eqnarray}
d\rho &=& - \frac{\gamma_1 dt }{2} ( \sigma_+\sigma_- \rho + \rho \sigma_+\sigma_- ) + V ( \sigma_-\rho + \rho \sigma_+ )\nonumber \\
&\ & + \gamma_1 \sigma_- \rho \sigma_+ dt, \label{eq:SME_x_m2}
\end{eqnarray}
where in the last term we substitute $V$ from Equation~(\ref{eq:noisy_estimate_x}), keep terms up to order $dt$, and use the It\^o rule $(d\mathcal{W})^2 =dt$. By rearranging terms we have,
\begin{eqnarray}
d\rho = \gamma_1 dt \left( \sigma_- \rho \sigma_+ - \frac{1}{2} (\sigma_+\sigma_- \rho + \rho \sigma_+\sigma_- ) \right) + V ( \sigma_-\rho + \rho \sigma_+ ). \label{eq:SME_x_m3}
\end{eqnarray}
Since we have ignored the normalization constants, we now need to normalize the result. One can show that the normalized SME has the form
\begin{eqnarray}
d\rho &=& \gamma_1 dt \left( \sigma_- \rho \sigma_+ - \frac{1}{2} (\sigma_+\sigma_- \rho + \rho \sigma_+\sigma_- ) \right) \label{eq:SME_x_m4} \\
&+& \left(V -\gamma_1 \mathrm{Tr}[ (\sigma_-+\sigma_+)\rho]\, dt \right) ( \sigma_-\rho + \rho \sigma_+ - \mathrm{Tr}[ (\sigma_-+\sigma_+)\rho]\,\rho).\nonumber
\end{eqnarray}\\
\noindent\fbox{\parbox{\textwidth}{
\textbf{Exercise~4:} Convince yourself of the normalization step, i.e.\ the transition from Equation~(\ref{eq:SME_x_m3})~$\to$~(\ref{eq:SME_x_m4}).
}}
Equation~(\ref{eq:SME_x_m4}) can be represented more compactly in terms of the dissipation superoperator $\mathcal{D}[L]\rho=L \rho L^{\dagger} - \frac{1}{2}\{L^{\dagger} L, \rho\}$ and the jump superoperator $\mathcal{H}[L]\rho= L \rho + \rho L^{\dagger} - \mathrm{Tr}[ (L+L^{\dagger})\rho] \rho$,
\begin{subequations}
\begin{eqnarray}
d\rho &=& \gamma_1 dt \mathcal{D}[\sigma_-]\rho + (V -\gamma_1 x dt ) \mathcal{H}[\sigma_-]\rho\\
&=& \gamma_1 dt \mathcal{D}[\sigma_-]\rho + \sqrt{\gamma_1} d\mathcal{W} \mathcal{H}[\sigma_-]\rho\\
\to \dot{\rho}=\frac{d\rho}{dt}&=&\gamma_1 \mathcal{D}[\sigma_-]\rho + \sqrt{\gamma_1} d\xi \mathcal{H}[\sigma_-]\rho \label{eq:SME_xm}
\end{eqnarray}
\end{subequations}
where we substitute $d\mathcal{W}$ and $d\xi$ as defined in Equation~(\ref{eq:noisy_estimate_x}). Equation~(\ref{eq:SME_xm}) describes the evolution of the qubit under radiative decay at rate $\gamma_1$ and continuous perfect monitoring of that radiation with homodyne detection. By comparing to the general form of the SME (Equation~\ref{eq:SME_rho3}), we see that homodyne measurement of the qubit radiation corresponds to the measurement operator $\sigma_-$, and the measurement strength $k=\gamma_1$ is the rate at which the detector receives the emission. In order to account for imperfect detection we can again use the technique of multiple detectors. Assume that the actual detector receives a proportion $\eta$ of the total emission at rate $\gamma_1$; the measurement strength of this detector is thus $\eta \gamma_1$.
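These superoperators are straightforward to implement numerically. The following minimal sketch (Python with NumPy; the parameter values are illustrative, not from the experiment) performs a single unnormalized Euler step of the compact SME above, using the convention $\mathcal{D}[L]\rho = L\rho L^{\dagger} - \frac{1}{2}\{L^{\dagger}L,\rho\}$, and then renormalizes:

```python
import numpy as np

# Lowering operator sigma_- = |g><e| in the {|g>, |e>} basis
sigma_m = np.array([[0, 1], [0, 0]], dtype=complex)

def D(L, rho):
    """Dissipation superoperator D[L]rho = L rho L^dag - {L^dag L, rho}/2."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def H(L, rho):
    """Jump superoperator H[L]rho = L rho + rho L^dag - Tr[(L + L^dag) rho] rho."""
    x = np.trace((L + L.conj().T) @ rho).real
    return L @ rho + rho @ L.conj().T - x * rho

# One Euler step of d rho = gamma_1 dt D[sigma_-] rho + sqrt(gamma_1) dW H[sigma_-] rho
gamma_1, dt = 1.0, 1e-3                               # illustrative values
rho = np.array([[0, 0], [0, 1]], dtype=complex)       # start in the excited state
dW = np.sqrt(dt) * np.random.default_rng(0).normal()  # Wiener increment, variance dt
rho = rho + gamma_1 * dt * D(sigma_m, rho) + np.sqrt(gamma_1) * dW * H(sigma_m, rho)
rho = rho / np.trace(rho).real                        # renormalize
```

The step preserves Hermiticity, and the final division mirrors the normalization step discussed above.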
The rest of the emission is then measured by a fictitious detector (the environment) with measurement strength $(1-\eta) \gamma_1$. Both detectors impose their own backaction on the qubit evolution,
\begin{eqnarray}
d\rho &=& \gamma_1 \mathcal{D}[\sigma_-]\rho + \sqrt{\eta\gamma_1} d\xi \mathcal{H}[\sigma_-]\rho + \sqrt{(1-\eta)\gamma_1} d\xi^{(f)} \mathcal{H}[\sigma_-]\rho, \label{eq:SME_xm_eta}
\end{eqnarray}
where $d\xi$ and $d\xi^{(f)}$ represent the homodyne signals collected by the actual detector and the fictitious detector, respectively. By averaging over all the fictitious detector outcomes we arrive at the SME for inefficient detection of the qubit emission,
\begin{eqnarray}
d\rho &=& \gamma_1 \mathcal{D}[\sigma_-]\rho + \sqrt{\eta \gamma_1} d\xi \mathcal{H}[\sigma_-]\rho, \label{eq:SME_xm_eta2}
\end{eqnarray}
where the corresponding inefficient homodyne signal can be described by,
\begin{eqnarray}
V &=& \sqrt{\eta}\gamma_1 \langle x \rangle dt + \sqrt{\gamma_1} d\mathcal{W} = \sqrt{\eta}\gamma_1 \langle x \rangle dt + \sqrt{\gamma_1} d\xi dt. \label{eq:noisy_estimate_x2}
\end{eqnarray}
Similar to the discussion we had for the SME~(\ref{eq:SME_rho3}), one can also add unitary evolution to the SME~(\ref{eq:SME_xm_eta2}) to account for a coherent drive on the qubit and obtain the full version of the SME,
\begin{eqnarray}
d\rho &=& -i[H_R, \rho] + \gamma_1 \mathcal{D}[\sigma_-]\rho + \sqrt{\eta \gamma_1} d\xi \mathcal{H}[\sigma_-]\rho.
\label{eq:SME_xm_eta3}
\end{eqnarray}
To sum up the discussion in this section, we may recast this stochastic master equation in terms of the Bloch vector components,
\begin{subequations}\label{SME_xm_eta_xz}
\begin{eqnarray}
\dot{z} &=& +\Omega x + \gamma_1 (1-z) + \sqrt{\eta \gamma_1} x (1-z) d\xi,\label{SME_xm_eta_z} \\
\dot{x} &=& - \Omega z - \frac{\gamma_1}{2} x + \sqrt{\eta \gamma_1} ( 1-z - x^2 )d\xi,\label{SME_xm_eta_x}
\end{eqnarray}
\end{subequations}
where we assume $H_R=\frac{\Omega}{2} \sigma_y$.
\section{$z$-measurement procedure\label{section:z-me}}
In this section, we utilize the basic techniques introduced in Chapter 3 to discuss how to actually perform a weak measurement and analyze the data to obtain quantum trajectories. A typical $z$-measurement includes:
\begin{itemize}
\item Qubit calibration and characterization, as discussed in Chapter 3.
\item Paramp calibration, dumb-signal cancellation, and readout calibration, as discussed in Chapter 3.
\item Calibration of $\chi, \eta, \bar{n}, k$.
\item Calibration of preparation and tomography pulses; Rabi tomography.
\item Data acquisition for quantum state tomography and the actual experiment.
\item Post-processing; verifying the measurement trajectory update method by quantum state tomography.
\end{itemize}
In the following subsections, we discuss each of these steps in greater detail.
\subsection{Basic characterization}
As discussed in Chapter 3, we first need to characterize the qubit-cavity system. The information we need to obtain in this step is the cavity frequency $\omega_c$, the cavity linewidth $\kappa$, the qubit frequency $\omega_q$, the qubit relaxation time $T_1$, and the qubit dephasing time $T_2^*$. In this stage we also find an initial calibration for the $\pi$ and $\pi/2$ pulses (usually $T_\pi=20$ ns and $T_{\pi/2}=10$ ns for a certain amplitude of the arbitrary waveform generator (AWG)).
More careful calibration should be performed after the paramp calibration and dumb-signal cancellation. See Chapter 3 for more details on basic experimental characterization.
\subsection{Paramp calibration}
As discussed in Chapter 3, we set up the paramp (preferably in the double-pump operation mode) at the cavity frequency (more precisely at $\omega_c -\chi$, so that we have an optimal and symmetric response for the states $|g\rangle$ and $|e\rangle$).

\emph{\textbf{``Dumb-signal'' cancellation---}} Beside the basic paramp setup and obtaining a proper gain profile, here we also need to consider some practical techniques to optimize the low-power readout fidelity. The point is that for weak measurement the paramp should be adjusted to have the best performance for weak signal detection\footnote{Moreover, the paramp normally works efficiently in the weak signal limit.}. However, during the readout we use a much stronger signal to project the qubit (basically, the readout is a very strong measurement). Having been calibrated for weak measurement, the paramp may not have the best performance for the readout, where we send a large number of photons. The trick to get around this issue is called ``dumb-signal cancellation''. The idea is the following: although we need a high number of photons inside the cavity during the readout, after the signal passes the cavity we can coherently cancel its unnecessary part, so that only the net phase shift is amplified by the paramp, as demonstrated in Figure~\ref{fig:dum_sig_cancelation}.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.98\textwidth]{dum_sig_cancelation.pdf}
\caption[Dumb-signal cancellation]{ {\footnotesize \textbf{Dumb-signal cancellation:} A copy of the readout pulse with the proper amplitude and phase cancels the readout signal in the $I$ quadrature while maintaining the information along the $Q$ quadrature (the separation $\mathrm{Amp}\times \delta \theta$ is preserved), since the paramp works efficiently in the weak signal regime.} }
\label{fig:dum_sig_cancelation}
\end{figure}
The dumb-signal cancellation is basically a copy of the readout pulse with the right amount of attenuation\footnote{One way to estimate the proper amplitude for the dumb signal in the experiment is to send a continuous readout pulse to the cavity (e.g., run a long readout sequence in continuous mode, or set the readout pulse always high at the mixer) and look at the output signal power with a spectrum analyzer. Now disconnect the readout input, send the dumb-signal cancellation through the pump port instead, and adjust its amplitude until the output power matches that of the continuous readout. This roughly calibrates the amplitude; you can then find the optimal phase by looking at the average homodyne signals in the IQ plane and comparing the output signal before and during the readout.} and the proper phase to cancel the readout signal before it reaches the paramp, while maintaining the qubit information. The room temperature circuitry for dumb-signal cancellation is depicted in Figure~\ref{fig:chi_nbar1}.
\subsection{Quantum efficiency calibration}
After the paramp is set up for optimal readout performance, we are ready to calibrate the quantum efficiency $\eta$. This includes calibration of the dispersive shift $\chi$, the average photon number $\bar{n}$, and the measurement strength $k$, followed by the measurement of the quantum efficiency itself.
In order to obtain values for $\chi$ and $\bar{n}$, we use a Ramsey measurement\footnote{Note that we might have a crude estimate of $\chi$ from the punch-out experiment, but that is not accurate enough for the quantum efficiency calibration.}. As discussed in Chapter 2 (Eq.~\ref{eq:H_dis_rearange2}), the qubit frequency is shifted by the average number of photons in the cavity, $\Delta \omega_q= 2 \chi \bar{n}$. Moreover, as discussed earlier in this chapter (Equation~\ref{eq:MID}), photons in the cavity also induce dephasing of the qubit coherence at a rate $\Gamma=8\chi^2 \bar{n}/\kappa$. We can observe these two effects by performing a Ramsey measurement over a range of average photon occupations of the cavity. Fortunately, the ratio $\Gamma/\Delta \omega_q=4 \chi/\kappa$ is independent of $\bar{n}$, which means we just need to sweep the average number of photons in the cavity (without knowing the actual $\bar{n}$ values) and calculate this ratio to obtain $\chi$ (the value of the cavity linewidth $\kappa$ is known independently from the basic characterization). For that, we start by running the Ramsey experiment (typically a 5~$\mu$s Ramsey sequence). We set the frequency slightly off-resonant\footnote{It is more convenient to avoid being on resonance with the qubit, so that there are always oscillations, which makes for an easier fitting procedure. Therefore we prefer to sit slightly above the actual qubit frequency (0.4 MHz for a 5~$\mu$s Ramsey sequence); by increasing $\bar{n}$, the qubit is pushed down in frequency (remember that $\chi$ is typically negative), so we never Stark shift the qubit into an on-resonance situation. Typically, we set the qubit drive frequency so that we have $\sim$ one oscillation in the limit $\bar{n}\to0$. This usually ensures that we sample enough to resolve the Ramsey oscillations at higher $\bar{n}$ in the cavity.}, as illustrated in Figure~\ref{fig:chi_nbar1}.
The qubit signal generator (BNC2) is set $0.4$ MHz above the qubit resonance frequency. By changing the DC offset values at the I/Q inputs of the cavity mixer, we let photons leak into the cavity, which shifts the qubit frequency and also dephases the qubit; both effects are measured by the Ramsey experiment.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.98\textwidth]{chi_nbar1.pdf}
\caption[Quantum efficiency calibration setup]{ {\footnotesize \textbf{Quantum efficiency calibration setup}. }}
\label{fig:chi_nbar1}
\end{figure}
The result of this sweep, labeled by the Ch3 offset (the DC offset voltage applied to the Q input of the cavity mixer), is shown in Figure~\ref{fig:chi_nbar2}a. The minimum oscillation frequency, $f_{\mathrm{min}}=0.4$ MHz, occurs at a Ch3 offset of about $55$ mV. As the Ch3 offset deviates from the minimum leakage value, the mixer lets photons populate the cavity and the oscillation frequency increases by $2\chi \bar{n}$. Moreover, the oscillations decay faster as the average number of photons in the cavity increases, as expected from the relation $\Gamma=8\chi^2 \bar{n}/\kappa$. Figure~\ref{fig:chi_nbar2}b shows Ramsey oscillation data both near and far from the minimum leakage. By fitting a decaying sinusoid to the data we obtain the oscillation frequency $f$ and the Ramsey decay time $1/\Gamma$ as functions of the Ch3 offset value, as depicted in Figure~\ref{fig:chi_nbar2}c. In order to obtain $\chi$ we plot $\Gamma$ versus $f$ and fit a line to the data, as depicted in Figure~\ref{fig:chi_nbar2}d. The slope is $4\chi/\kappa$ (the value of the cavity linewidth $\kappa$ is known from the low-power cavity transmission measurement).
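This linear fit is simple to reproduce numerically. The sketch below uses made-up numbers (not data from this experiment) to illustrate extracting $\chi$ from the slope of $\Gamma$ versus $2\pi(f-f_{\mathrm{min}})$; we fit the magnitude of $\chi$, its sign being known separately:

```python
import numpy as np

# Illustrative calibration values (rad/s for rates, Hz for Ramsey frequencies)
kappa = 2 * np.pi * 5e6       # cavity linewidth
chi_mag = 2 * np.pi * 0.4e6   # |chi| used to synthesize the fake data
f_min = 0.4e6                 # Ramsey frequency at minimum photon leakage

# Synthetic sweep results: Gamma = (4|chi|/kappa) * 2*pi*(f - f_min), plus noise
rng = np.random.default_rng(0)
f = f_min + np.linspace(0.0, 1.5e6, 20)
Gamma = (4 * chi_mag / kappa) * 2 * np.pi * (f - f_min) + rng.normal(0, 1e3, f.size)

# Linear fit: the slope of Gamma versus 2*pi*(f - f_min) is 4|chi|/kappa
slope = np.polyfit(2 * np.pi * (f - f_min), Gamma, 1)[0]
chi_fit = slope * kappa / 4   # recovers |chi| close to chi_mag
```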
\begin{figure}[ht]
\centering
\includegraphics[width = 0.8\textwidth]{chi_nbar2.pdf}
\caption[$\chi$ calibration result]{ {\footnotesize \textbf{The $\chi$ calibration result:} \textbf{a}, The Ramsey experiment result for different offset values of Ch3. \textbf{b}, Two cuts from the sweep data in \textbf{a}, indicated by dashed lines. \textbf{c}, The frequency and the damping rate for the Ramsey data of panel \textbf{a} versus the DC offset of Ch3. By fitting the red curve to a parabola we obtain the coefficients $K_0$, $K_1$, and $K_2$, which will be used to find the optimal quadrature for the measurement. \textbf{d}, The damping rate versus the frequency is ideally a line with a slope of $4\chi/\kappa$.}}
\label{fig:chi_nbar2}
\end{figure}
We need one more piece of information from these data. We fit the curve of $f$ (frequency versus Ch3 offset) to a polynomial (a parabola is enough) and record the fit parameters, as depicted in Figure~\ref{fig:chi_nbar2}. Later, these data will be used to find the optimal quadrature for the measurement\footnote{The relative phase of the cavity photons and the paramp pump is important for optimizing the amplification and hence the quantum efficiency.}. We repeat the experiment and apply the same analysis for the DC offset of Ch4, while keeping the offset of Ch3 fixed at its minimum leakage value\footnote{To be more accurate, after finishing the Ch4 offset sweep one can redo the Ch3 offset sweep with the Ch4 offset fixed at the corresponding minimum leakage value.}. Now, we use the parabolic fits to parametrize the mixer output power in terms of the Ramsey oscillation frequency $f$. Here we briefly discuss what this means.
Ideally, the mixer output power can be represented by,
\begin{eqnarray}
f_k = K_2^{\mathrm{(Ch3)}} (\mathrm{Ch3}-\mathrm{Ch3}_{\mathrm{min}})^2 + K_2^{\mathrm{(Ch4)}}(\mathrm{Ch4}-\mathrm{Ch4}_{\mathrm{min}})^2,
\end{eqnarray}
where we use the fact that Ch3 and Ch4 are orthogonal and $\mathrm{Ch3}_{\mathrm{min}}=-K_1^{\mathrm{(Ch3)}}/2K_2^{\mathrm{(Ch3)}}$. The phase of the output signal can also be represented as,
\begin{eqnarray}
\theta = \mathrm{atan} \left[ \sqrt{\frac{K_2^{\mathrm{(Ch4)}}}{K_2^{\mathrm{(Ch3)}}}} \frac{ (\mathrm{Ch4}-\mathrm{Ch4}_{\mathrm{min}}) }{(\mathrm{Ch3}-\mathrm{Ch3}_{\mathrm{min}})} \right].
\end{eqnarray}
As depicted in Figure~\ref{fig:Mixer_anngle}, the parameter $\theta$ sets the angle of the output signal in the $IQ$ plane (phasor) and $f_k$ parametrizes the length of the phasor, which is related to the number of photons $\bar{n}$, but we usually keep it in terms of frequency: $2\pi f_k=2\pi (f-f_{\mathrm{min}})=2\chi \bar{n}$. In fact, $k=4 \chi \cdot 2\pi (f-f_{\mathrm{min}}) /\kappa$, where $k$ is the measurement strength\footnote{Note that we explicitly express $f$ in MHz (Figure~\ref{fig:chi_nbar2}b). One needs to be careful about this factor of $2\pi$.}.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.35\textwidth]{Mixer_anngle.pdf}
\caption[Mixer output]{ {\footnotesize \textbf{Mixer output:} The output of the mixer is a coherent signal whose phase and amplitude depend on the Ch3/Ch4 amplitude and offset.
The parameter $f_k$ quantifies the strength of the measurement and $\theta$ is the measurement quadrature.}}
\label{fig:Mixer_anngle}
\end{figure}
One can show that for a given mixer output power $f_k$ and angle $\theta$, the values of Ch3 and Ch4 should be,
\begin{subequations}\label{eq:ch3_theta12}
\begin{eqnarray}
\mathrm{Ch3}(f_k, \theta) &=& \sqrt{ \frac{f_k}{K_2^{\mathrm{(Ch3)}}}} \cos\left(\frac{\pi}{180} \theta\right) - \frac{K_1^{\mathrm{(Ch3)}}}{2 K_2^{\mathrm{(Ch3)}}}, \label{eq:ch3_theta}\\
\mathrm{Ch4}(f_k, \theta) &=& \sqrt{ \frac{f_k}{K_2^{\mathrm{(Ch4)}}}} \sin\left(\frac{\pi}{180} \theta\right) - \frac{K_1^{\mathrm{(Ch4)}}}{2 K_2^{\mathrm{(Ch4)}}}, \label{eq:ch4_theta}
\end{eqnarray}
\end{subequations}
where we represent $\theta$ in degrees for convenience. Now we use Equation~(\ref{eq:ch3_theta12}) to once again sweep the Ramsey measurement, but this time we sweep the angle $\theta$ while keeping the frequency $f_k$ fixed at a certain value\footnote{It is convenient to set $f_k$ at or close to the value at which you will perform the actual experiment. Usually $f_k=0.1$ is a very weak measurement and $f_k=1$ is a relatively ``strong'' weak measurement.}. Figure~\ref{fig:sweep_theta_Ramsey} shows Ramsey oscillation measurements for different values of $\theta$ at $f_k=0.5$.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.9\textwidth]{sweep_theta_Ramsey.pdf}
\caption[Ramsey measurements for a sweep of different angles]{ {\footnotesize \textbf{Ramsey measurements for a sweep of different angles.}}}
\label{fig:sweep_theta_Ramsey}
\end{figure}
Ideally the Ramsey oscillation frequency should be fixed, but in practice the mixer may have some imperfections, and Equation~(\ref{eq:ch3_theta12}) does not perfectly predict the mixer output.
However, this is not a problem for the calibration, for a reason that will become clear shortly\footnote{Eventually, for the quantum efficiency calibration, we compare the results of two $\theta$ sweeps, so these imperfections do not contribute to the final result.}. Now we arrive at the last step of the quantum efficiency calibration. In this step we want to find the optimal angle, which gives the best signal-to-noise ratio for the measurement of the qubit state. For that, we compare the weak measurement signals integrated for a certain time $T \sim 100$ ns after preparing the qubit in the ground or the excited state\footnote{Note that no readout pulse is needed in this step.}. We repeat this measurement for different angles and compare the separation of the two histograms to find which angle gives the optimal SNR, as depicted in Figure~\ref{fig:sweep_theta_histo}.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.98\textwidth]{sweep_theta_histo.pdf}
\captionsetup{font=normalsize}
\caption[Calibration of $\eta$]{ {\footnotesize \textbf{Calibration of $\eta$:} Comparison of the measurement histograms for $|g\rangle$ and $|e\rangle$ for different $\theta$.}}
\label{fig:sweep_theta_histo}
\end{figure}
Once we find the SNR for different angles $\theta$, we have all the pieces we need to calculate the quantum efficiency,
\begin{eqnarray}
\eta= \frac{S \kappa} {64 \chi^2 \bar{n} T} = \frac{S \kappa} {64 \pi \chi (f - f_{\mathrm{min}}) T}. \label{eq:eta}
\end{eqnarray}
As depicted in Figure~\ref{fig:sweep_theta_histo}d, the quantum efficiency is maximal at a certain angle, which is ideally aligned with the paramp amplification quadrature.
\subsection{Tomography pulse calibration}
Before we collect data, it is good to fine-tune the preparation and tomographic pulses.
A short Rabi (100 ns) sequence with all three types of tomographic readout for $x,y,z$ (as discussed in Chapter 3) is a simple test to verify the preparation and tomographic pulses\footnote{We may have already calibrated the $\pi$- and $\pi/2$-pulses in the ``basic qubit characterization'' step, but note that we are now pumping the paramp and may need to revisit the qubit calibration. Moreover, we might need a more complicated preparation for the actual experiment, so it makes sense to specifically check the preparation pulses before starting the actual experiment.}. Figure~\ref{fig:rabi_tomog_diagnosis}a shows a Rabi tomography result corresponding to a perfect calibration of the preparation and tomography pulses. The fact that the oscillations for both $x$ and $y$ start from zero and that $y$ remains zero throughout means that, for the most part, the pulses are calibrated\footnote{One can use a longer Rabi sequence with lower amplitude, a $T_1$ sequence, or a Ramsey sequence to further tune the calibration.}. Figures~\ref{fig:rabi_tomog_diagnosis}b,c,d,e show some common imperfect calibrations.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.98\textwidth]{rabi_tomog_diagnosis.pdf}
\caption[Rabi tomography diagnosis]{ {\footnotesize \textbf{Rabi tomography diagnosis:} \textbf{a}, A perfect calibration. \textbf{b}, The $\pi/2$ pulses need to be weaker: either shorter pulses or lower amplitude. \textbf{c}, The $\pi/2$ pulse for $x$ ($y$) needs lower (higher) amplitude/duration. \textbf{d}, The mixer orthogonality is slightly greater than 90 degrees. \textbf{e}, The mixer orthogonality is slightly less than 90 degrees. } }
\label{fig:rabi_tomog_diagnosis}
\end{figure}
\subsection{Data acquisition}
After recalibrating the preparation and tomographic pulses, we are ready to run the experimental sequences (including the noise calibration and state tomography sequences).
The noise calibration measurement (depicted in Figure~\ref{fig:sweep_theta_histo}a,b) needs to be collected as a reference to scale the collected digitized weak signal\footnote{The noise calibration sequence has no drive or readout; it consists only of ground and excited state preparations and weak measurement for a certain time $\sim$ 1 $\mu$s.}. For example, the sequence for continuous monitoring of a driven qubit is depicted in Figure~\ref{fig:driven_z_measur_seq}a, which includes pulses for heralding, preparation, weak measurement, and readout. The obtained data are depicted as a color plot in Figure~\ref{fig:driven_z_measur_seq}b. Note that we perform the experiment for different durations $t$ (in this case we vary the measurement time from 0 to $2\ \mu$s)\footnote{One may think that repeating only the longest trajectory is enough, because then you can update trajectories for as long as you wish. However, in order to verify the validity of the trajectory update, you will need trajectories of different lengths, which provide readout measurements at different times. Later we will discuss how to use the trajectory measurements at different times to tomographically validate the trajectory update method.}.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.98\textwidth]{driven_z_measur_seq.pdf}
\caption[Driven $z$-measurement sequence]{ {\footnotesize \textbf{Driven $z$-measurement sequence}. }}
\label{fig:driven_z_measur_seq}
\end{figure}
\subsection{Post-processing: Quantum trajectory update\label{subsection:pst_pross}}
In this step we use the SME (Equations~\ref{eq:SME_zx}) or the Bayesian update (Equations~\ref{baysf12}) to reconstruct quantum trajectories. First, we need to properly scale the digitized measurement signal to obtain $V(t)$.
For that, we use the noise calibration data and subtract the overall offset\footnote{Note that the overall offset is determined by averaging both signals regardless of the preparation.}. Then we scale the signal so that the separation between the measurement signal histograms of the ground and excited state preparations is equal to two. Moreover, the sign of the scaling factor is chosen so that the histogram corresponding to the ground state preparation is centered at $V=+1$, as depicted in Figure~\ref{fig:scaled_v}b, consistent with our convention (for example, see Equation~\ref{eq:POVM4}).
\begin{figure}[ht]
\centering
\includegraphics[width = .9\textwidth]{scaled_v.pdf}
\caption[Digitized weak measurement signal scaling]{ {\footnotesize \textbf{Digitized weak measurement signal scaling}. }}
\label{fig:scaled_v}
\end{figure}
At this point one can check that the variance of the signal is consistent with the calibrated quantum efficiency.
\subsubsection{SME update}
For quantum trajectories, we use the scaled signal in the SME. In order to account for a coherent drive $H_R=-\Omega_R \sigma_y/2$, we use the full version of the SME (Equation~\ref{eq:SME_rh0_full}). We represent this in terms of Bloch components as,
\begin{subequations}\label{eq:SME_zx_i}
\begin{eqnarray}
z[i+1]&=&z[i] + \Omega_R x[i]dt + 4 \eta k (1-z[i]^2)(V[i]-z[i]) dt, \label{eq:SME_z_i}\\
x[i+1]&=&x[i] - \Omega_R z[i+1]dt -( 2k+\gamma_2)x[i]dt -4 \eta k x[i] z[i] (V[i]-z[i]) dt, \label{eq:SME_x_i}
\end{eqnarray}
\end{subequations}
where we have also discretized\footnote{Note that $x[i+1]$ uses $z[i+1]$ for the rotation term. Why?} the equations to be consistent with the digitized measurement signal with timestep $dt \sim 20$ ns.
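A minimal numerical sketch of this single-step update is given below; the parameter values are illustrative, and the fake measurement record stands in for real scaled data (assuming, purely for illustration, a per-step signal variance of $1/(4\eta k\, dt)$ and the ground state at $z=+1$ to match the $V=+1$ convention above):

```python
import numpy as np

def sme_update(z, x, V, dt, Omega_R, k, eta, gamma_2):
    """One discretized SME step for the z-measurement (single-step update above)."""
    z_new = z + Omega_R * x * dt + 4 * eta * k * (1 - z**2) * (V - z) * dt
    x_new = (x - Omega_R * z_new * dt - (2 * k + gamma_2) * x * dt
             - 4 * eta * k * x * z * (V - z) * dt)
    return z_new, x_new

# Illustrative parameters: rates in 1/us, dt in us (20 ns)
Omega_R, k, eta, gamma_2, dt = 2 * np.pi * 0.6, 1.0, 0.35, 0.1, 0.02

rng = np.random.default_rng(1)
z, x = 1.0, 0.0          # start in the ground state (z = +1 in this convention)
traj = [(z, x)]
for _ in range(100):     # 2 us of monitoring
    V = z + rng.normal(0.0, np.sqrt(1.0 / (4 * eta * k * dt)))  # fake scaled signal
    z, x = sme_update(z, x, V, dt, Omega_R, k, eta, gamma_2)
    traj.append((z, x))
```

Note that this naive Euler-style step can transiently leave the Bloch sphere when $dt$ is coarse.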
Equation~(\ref{eq:SME_zx_i}) may not be numerically stable or accurate when the timestep $dt$ in the experiment is not small enough. There is an alternative way to perform the SME update which involves two steps. In this method the unitary evolution is implemented separately by a geometric rotation,
\begin{subequations}\label{eq:SME_zx_ii}
\begin{eqnarray}
z_{\mathrm{d}}&=&z[i]\cos(\Omega_R dt) + x[i] \sin(\Omega_R dt),\\
x_{\mathrm{d}}&=&x[i]\cos(\Omega_R dt) - z[i] \sin(\Omega_R dt),\\
z[i+1]&=&z_{\mathrm{d}} + 4 \eta k (1-z_{\mathrm{d}}^2)(V[i]-z_{\mathrm{d}}) dt, \label{eq:SME_z_ii}\\
x[i+1]&=& x_{\mathrm{d}} -( 2k+\gamma_2)x_{\mathrm{d}}dt -4 \eta k x_{\mathrm{d}}z_{\mathrm{d}} (V[i]-z_{\mathrm{d}}) dt, \label{eq:SME_x_ii}
\end{eqnarray}
\end{subequations}
where $z_{\mathrm{d}}$ and $x_{\mathrm{d}}$ are dummy variables connecting the two steps. The two-step update performs better when $dt$ is not small enough to ensure the stability of the single-step update. In most practical situations $dt \Omega_R \ll 1$ and the two methods are almost the same (see Figure~\ref{fig:SME_traj_update}).
\begin{figure}[ht]
\centering
\includegraphics[width = 0.9\textwidth]{SME_traj_update.pdf}
\captionsetup{font=footnotesize}
\caption[Quantum trajectory updated by the SME]{ { \textbf{Quantum trajectory updated by the SME:} \textbf{a}, A typical homodyne measurement signal $V(t)$ corresponding to the $z$-measurement of a driven qubit, with $\Omega_R/2\pi=0.6$, $k=1$, $\eta=0.35$, $dt=20$ ns. \textbf{b}, The corresponding updated quantum trajectory using Equations~(\ref{eq:SME_zx_i}) for the solid lines and Equations~(\ref{eq:SME_zx_ii}) for the dashed lines.
As can be seen, for these drive parameters both methods are practically the same.}}
\label{fig:SME_traj_update}
\end{figure}
\subsubsection{Tomographic validation}
In order to verify that the updated trajectories accurately predict the state evolution of the qubit, we show that the qubit state predicted by a trajectory is consistent with measurements from quantum state tomography. The idea is to compare the expectation values of $x$, $y$, and $z$ predicted by the quantum trajectory to the expectation values obtained from the results of projective measurements (readouts). Of course, the readout is a destructive measurement with a binary outcome; therefore, in order to obtain expectation values, one needs to repeat the readout measurement on the same state many times. But it is not possible to perform many readouts on a single trajectory, hence it is impossible to obtain expectation values for a single trajectory from projective measurements. However, instead of using a single trajectory, we can use many different trajectories, as long as all of these trajectories have the same prediction for $\langle x \rangle, \langle y \rangle, \langle z \rangle$ at a given verification time $t_{\mathrm{v}}$. Therefore the tomographic verification at any given time $t_{\mathrm{v}}$ involves post-selection of the trajectories that agree at that time. A nice way to do this is to choose a random trajectory as a reference and, for each time step, post-select the trajectories that have the same prediction as the reference trajectory. We can then reconstruct the reference trajectory from the readout outcomes of the post-selected trajectories.
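The post-selection step can be sketched as follows; all names (`trajs_z`, `readouts`) and the synthetic data are illustrative, with binary readout outcomes drawn so that $\langle z \rangle = P(+1)-P(-1)$:

```python
import numpy as np

def reconstruct(ref_z, trajs_z, readouts, t_idx, tol=0.05):
    """Post-select trajectories whose z prediction at step t_idx matches the
    reference within tol, then average their binary readout outcomes."""
    mask = np.abs(trajs_z[:, t_idx] - ref_z[t_idx]) < tol
    return readouts[mask].mean(), int(mask.sum())

# Synthetic demonstration: 20000 trajectories, 50 time steps
rng = np.random.default_rng(2)
trajs_z = rng.uniform(-1, 1, size=(20000, 50))  # fake trajectory predictions
ref_z = trajs_z[0]                              # pick a reference trajectory
t_idx = 30
p_up = (1 + trajs_z[:, t_idx]) / 2              # P(+1) so that <z> = P(+1) - P(-1)
readouts = np.where(rng.random(20000) < p_up, 1.0, -1.0)

z_rec, n_post = reconstruct(ref_z, trajs_z, readouts, t_idx)
# z_rec approaches ref_z[t_idx] as the number of post-selected trajectories grows
```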
Figure~\ref{fig:tomog_verification}a shows a reference $z$-trajectory (black line) and a few post-selected trajectories that have the same prediction for $\langle z \rangle$ at $t_{\mathrm{v}}=0.8\ \mu$s within some tolerance, indicated by the red window. Note that these post-selected trajectories are from experiments of duration $t=t_{\mathrm{v}}$, so their readout outcomes at $t=t_{\mathrm{v}}$ are available. The average of the readout outcomes from the post-selected trajectories reconstructs the reference trajectory at that time step, as indicated by the green circular marker in the zoomed inset. The agreement between the green circle and the reference trajectory indicates that the quantum trajectory truly predicts the state of the qubit at that time step.
\begin{figure}[ht]
\centering
\includegraphics[width = .9\textwidth]{tomog_validation.pdf}
\caption[Tomographic reconstruction]{ {\footnotesize \textbf{Tomographic reconstruction}.}}
\label{fig:tomog_verification}
\end{figure}
By repeating this process for both $z$ and $x$ at all time steps, one can reconstruct the reference trajectory and validate the state update, as depicted in Figure~\ref{fig:tomog_verification}b. The shaded area indicates the binomial error from the readout outcomes of the post-selected trajectories at each time step. The binomial error can be calculated as,
\begin{eqnarray}
\mathrm{Binomial \ Error} = \sqrt{\frac{p \cdot q}{N}} = \sqrt{\frac{N_+ N_-}{(N_+ + N_-)^3}}, \label{eq:bionamial_error}
\end{eqnarray}
where $p=N_+/N$ and $q=N_-/N$ are the probabilities of the two possible outcomes $(p=1-q)$ and $N=N_+ + N_-$ is the total number of outcomes. In this case, the total number of outcomes equals the total number of post-selected trajectories at each verification time.
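This error estimate is a one-liner; the counts below are purely illustrative:

```python
import math

def binomial_error(n_plus, n_minus):
    """Binomial standard error sqrt(N+ N- / N^3) from the two outcome counts."""
    n = n_plus + n_minus
    return math.sqrt(n_plus * n_minus / n**3)

# Example: 600 outcomes of +1 and 400 of -1 among 1000 post-selected trajectories
err = binomial_error(600, 400)  # equals sqrt(0.6 * 0.4 / 1000)
```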
\section{$\sigma_x$ measurement procedure\label{section:x-me}}
In this section we discuss the experimental procedure for the $x$-measurement, following the theoretical discussion of Section~\ref{section:generalized_mx}. For the most part, the procedure is similar to the $z$-measurement discussed in Section~\ref{section:z-me}. Here we discuss only the two steps that are slightly different: the quantum efficiency calibration and the quantum trajectory update.
\subsection{Quantum efficiency calibration}
The paramp setup is slightly different from the $z$-measurement. Here, the paramp pump is near the qubit frequency; practically, the qubit pulses and the paramp pump have to come from the same generator. In an $x$-measurement the paramp is only used for state tracking, not for readout. Therefore there is no dumb-signal cancellation and no readout fidelity optimization. A high-power readout is used, and often the fidelity can be improved by transferring population to the higher excited states prior to the readout pulse. The quantum efficiency calibration is relatively easier for the $x$-measurement than for the $z$-measurement. We only need to run the noise calibration sequence with $\pm x$ state preparations, plot the histograms of the weak signal after a certain integration time, and scale the variance to be $\gamma_1 dt$. The separation is then $\Delta V = 2 \sqrt{\eta} \gamma_1$.
\subsection{State update and quantum trajectory}
As discussed in Subsection~\ref{Subsection:SME_xm_xz}, the SME for the $x$-measurement in terms of Bloch components is described by Equation~(\ref{SME_xm_eta_xz}). In order to calculate quantum trajectories, the digitized homodyne signal needs to be properly scaled. For that, we first subtract the offset (the offset can be determined by averaging the signal from the noise calibration sequence regardless of the preparation).
Then we scale the signal so that the variance of the histograms is $\gamma_1 dt$. The signal is then ready to be used in the discretized SME, \begin{subequations} \begin{eqnarray} \resizebox{.85\hsize}{!}{$ z[i+1] = z[i] +\Omega_R x[i] dt + \gamma_1 (1-z[i]) dt + \sqrt{\eta \gamma_1} x[i] (1-z[i]) (V[i]-\sqrt{\eta} \gamma_1 x[i] dt),\label{SME_xm_eta_z2}$} \\ \resizebox{.85\hsize}{!}{$x[i+1] = x[i] - \Omega_R z[i+1] dt - \frac{\gamma_1}{2} x[i] dt + \sqrt{\eta} ( 1-z[i] - x[i]^2 )(V[i]-\sqrt{\eta} \gamma_1 x[i] dt).\label{SME_xm_eta_x2}$} \end{eqnarray} \end{subequations} \chapter{Monitoring Spontaneous Emission of a Quantum Emitter \label{ch5}} In this chapter, I discuss the experimental study of a continuously monitored quantum system. We focus on the dynamics of a decaying emitter under homodyne detection of its radiation. The aim of this chapter is to connect this experiment with the discussions provided in the previous chapters. Unlike in classical mechanics, measurement inevitably disturbs a quantum system. This disturbance, known as measurement backaction, depends on the type of detector that we use for the measurement. Therefore, it is natural to ask how the same quantum system, with the same interaction Hamiltonian with the environment, behaves differently under different detection schemes on the environment. Although this doesn't make much sense in a classical framework, it is understandable in the quantum case, owing to the entanglement between the detector and the emitter, as we have already seen in the simple model in Chapter~4 (Section~\ref{section:generalized_m}). A prime example is the detection of the spontaneous emission of an excited emitter. How does the emitter decay under continuous monitoring? Does the decay dynamics depend on the type of the detector?
In other words, does an atom decay regardless of the detection, or does it decay because of the detection? Exploring these questions underpins the topic of our study in this chapter. \section{Spontaneous emission} Spontaneous emission is ubiquitous in nature and accounts for most of the light that we see around us~\cite{milonni1984spontaneous}. It is often an undesirable effect, but it is also essential for diverse applications ranging from fluorescence imaging to quantum encryption using single photons. In the spontaneous emission process, an excited emitter (excited atom) releases its energy in the form of photons into one of the available electromagnetic modes of the environment\footnote{Therefore the spontaneous emission rate can be altered by manipulating the electromagnetic modes that are available to the emitter via engineering the environment~\cite{houck2008controlling, gambetta2011superconducting}.}. From the quantum measurement point of view, spontaneous emission is due to the light-matter interaction and the entanglement of the state of the emitter with its electromagnetic environment~\cite{blinov2004observation,eichler2012observation}. In this picture, measurements on the environment (e.g.\ photon detection, homodyne detection) collapse the entangled wavefunction in a specific basis, convey information about the state of the emitter, and consequently cause backaction \cite{wiseman2009quantum}. Therefore, the choice of measurement may change the quantum evolution of the emitter~\cite{wiseman2012dynamical,bolund2014stochastic,jordan2016anatomy, campagne2016observing}. A goal in this chapter is to study the dynamics of spontaneous emission under continuous homodyne measurement. But before discussing homodyne measurement, it would be illuminating to discuss photon detection. This will help draw a connection between these two types of detection.
\section{Photon Detection} Consider a qubit (as a quantum emitter) interacting with an electromagnetic mode of the environment. Assume we use a photon detector to monitor the existence of a photon in that mode of the environment\footnote{In general, one can assume that the emitter is interacting with many modes. Then for our discussion, we should also assume that the detector is sensitive to all of the modes.} as depicted in Figure~\ref{fig:photon_detection}a. \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{photon_detection.pdf} \caption[Photon detection]{ { \footnotesize \textbf{Photon detection:} \textbf{a}, The qubit is initially prepared in the excited state and interacts with an electromagnetic mode. The qubit state and its emission to the mode are entangled via the interaction Hamiltonian~\eqref{eq:int_hamch5}. \textbf{b}, The detection of a photon results in a sudden jump of the emitter state. \textbf{c}, The average of many jump detections results in an exponential decay for the state of the qubit.} } \label{fig:photon_detection} \end{figure} The emitter, which is initially prepared in the excited state, interacts with the electromagnetic mode of the environment via the interaction Hamiltonian, \begin{equation} H_\mathrm{int} = \gamma (a^\dagger \sigma_- + a \sigma_+), \label{eq:int_hamch5} \end{equation} where $a$ and $a^\dagger$ are the annihilation and creation operators for a photon in that mode. The parameter $\gamma$ quantifies the interaction strength which, in this case, is related to the decay rate of the emitter to the environment\footnote{As discussed earlier, $\gamma$ is proportional to the density of states available for the emitter to decay into.}.
The interaction Hamiltonian entangles the state of the emitter and the electromagnetic mode, which can be represented as\footnote{Note that this is similar to our discussion in Chapter~4 and Equation~\eqref{eq:psi_tot_relax}, except that here the emitter is initially in the excited state, $\alpha_0=0, \beta_0=1$. This means that if we do not detect a photon, the emitter is still in the excited state with certainty, which is not the case when the emitter is prepared in a superposition state as we discussed in Equation~\eqref{eq:psi_tot_relax}.} (also depicted in Figure~\ref{fig:photon_detection}a), \begin{eqnarray} \Psi_{\mathrm{tot}}(0) = |e\rangle |0\rangle \to \Psi_{\mathrm{tot}}(t) = \beta |e\rangle |0\rangle + \alpha |g\rangle |1\rangle. \end{eqnarray} The photon detector monitors the state of the environment by performing measurements in the photon number basis. If we detect a ``click" we learn that the wave-function of the environment has collapsed to the state $|1 \rangle$. This means the state of the qubit must be in the ground state (measurement backaction). If we do not detect a click, then the emitter is still in the excited state. Therefore the detection of the spontaneous emission in the form of photons (energy quanta) results in an instantaneous jump of the emitter from the excited state to the ground state, as depicted in Figure~\ref{fig:photon_detection}b~\cite{dalibard1992wave, blocher2017many}. If we average over many jump detections (or equivalently, if we disregard the detection results), the state of the qubit exponentially decays from the excited to the ground state (Fig.~\ref{fig:photon_detection}c). Before we conclude this section, it is worth mentioning a key point.
You may notice that in the quantum measurement interpretation of spontaneous emission, the atom decays because a detector collapses the wave function. In other words, `the atom decays because the detector clicks'. This is counterintuitive to our classical understanding of detection, where we would say that the detector clicks because the atom has decayed\footnote{The argument `the atom decays because the detector clicks' is true when there is entanglement between the emitter and the photon.}. We will return to this point again in the discussion on homodyne measurement.\\ \noindent\fbox{\parbox{\textwidth}{ \textbf{Exercise~1:} Consider detecting a photon from a star lightyears away. Does this mean that our detection of that photon caused that atom to decay years ago? Explain this in terms of the quantum measurement interpretation. }} \section{Homodyne detection of spontaneous emission} In the previous section, we discussed a situation where spontaneous emission is measured by a photon detector. Now the question is, ``What if the emission is measured with a detector that is not sensitive to quanta, but rather to the amplitude of the field? In other words, what if we use a detector that addresses the wave notion of light, as opposed to a photon detector which addresses the particle notion of light? What would the backaction be in this case? How are measurement outcomes correlated with the state of the emitter?'' In this section, we experimentally explore these questions by performing homodyne measurement of the spontaneous emission of a qubit. As we discussed in Chapter~4 (Subsection~\ref{subsection:povm_sigmax}), homodyne measurement can be thought of as projections onto the coherent basis $|\alpha\rangle$, where $\alpha=|\alpha|e^{i\phi}$~\cite{jordan2016anatomy}.
The measurement outcome $\alpha$ corresponds to the amplitude of the field in the quadrature $a^\dagger e^{i \phi} + a e^{-i \phi}$, which also contains the fluctuations in that quadrature. In practice, when we perform homodyne measurement along a certain quadrature, we basically squeeze the outgoing emission along that quadrature, as depicted in Figure~\ref{fig:homodyne_detection}b. This means we amplify the signal along the $\phi$-axis and de-amplify it along the orthogonal axis. Therefore the measurement (or the collapse) happens only along the quadrature\footnote{Because we do not obtain any information along the other quadrature.} $\phi$. Returning to our discussion of `the atom decays because the detector clicks', this means that the emitter only ``decays''\footnote{The word decay is in quotes because, unlike photon detection, homodyne detection does not necessarily fully collapse the emitter state.} along the $\phi$-quadrature\footnote{The fact that collapse happens in a certain quadrature results in a certain type of backaction on the qubit, which may confine the qubit evolution to a certain subspace.}. \begin{figure} \centering \includegraphics[width = 0.88\textwidth]{homodyne_detection.pdf} \caption[Homodyne detection]{ {\footnotesize \textbf{Homodyne detection:} \textbf{a}, The spontaneous emission of the emitter is detected by homodyne measurement. The local oscillator has a well defined relative phase $\phi$ with respect to the qubit rotating frame which determines the amplification quadrature shown in \textbf{b}. The measurement happens only in one quadrature, along the $\phi$-axis. The fluctuations in the orthogonal quadrature are de-amplified, which means we do not learn about fluctuations in that quadrature.
This allows for noiseless amplification of the $\phi$-quadrature~\cite{caves1982quantum}.}} \label{fig:homodyne_detection} \end{figure} Therefore the idea of the experiment is 1) to study the spontaneous emission dynamics of a qubit by performing a homodyne measurement along the $\phi$-quadrature and 2) to explore the dynamics for different homodyne quadrature measurements. Note that the interaction Hamiltonian (Eq.~\ref{eq:int_hamch5}) connects the quadrature $a^\dagger e^{i \phi} + a e^{-i \phi}$ to the corresponding dipole moment of the qubit (emitter), $ \sigma_-e^{i \phi} + \sigma_+e^{-i \phi}$. For example, if we set the phase $\phi=0$, we showed in Chapter~4 that the homodyne measurement is actually a noisy estimate of $\langle \sigma_x \rangle$, which can be described by (see Equation~\ref{eq:noisy_estimate_x}) \begin{eqnarray} dV_t =\sqrt{\eta}\gamma\langle\sigma_x\rangle dt + \sqrt{\gamma} dW_t. \label{eq:noisy_estimate_x_ch5} \end{eqnarray} We are interested in what a detection of the homodyne signal $dV_t$ tells us about the state of the decaying qubit. For that, we use the experimental setup to perform the sequence depicted in Figure \ref{seq_spon}. For the experimental setup, note that the qubit pulse and the paramp pump share the same generator\footnote{This is a practical way to ensure that the paramp pump and the qubit pulse have a well defined and stable relative phase.} (BNC2) and the paramp is operated in a double-pump mode. Moreover, regarding the homodyne measurement of the emitter's emission, the demodulation should happen at the qubit frequency, but the high-power readout demodulation should be at the bare cavity frequency. Therefore we use an RF switch to toggle between the two demodulation frequencies.
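To build intuition for the noisy-estimate relation above, the sketch below generates a homodyne record for a frozen value of $\langle \sigma_x \rangle$ (ignoring decay and backaction, which are treated in the next section) and shows that averaging and rescaling the record recovers $\langle \sigma_x \rangle$. All parameter values here are illustrative, not the experimental ones:

```python
import math
import random

random.seed(1)

# Illustrative parameters (not the experimental values):
gamma, eta, dt = 1.0, 0.4, 1e-3    # decay rate, efficiency, time step
x_true = 0.8                       # a frozen <sigma_x>, for illustration only
n_steps = 200_000

# dV = sqrt(eta)*gamma*<sigma_x>*dt + sqrt(gamma)*dW, with dW ~ N(0, dt)
records = [math.sqrt(eta) * gamma * x_true * dt
           + math.sqrt(gamma) * random.gauss(0.0, math.sqrt(dt))
           for _ in range(n_steps)]

# Rescaling the time-averaged record gives a noisy estimate of <sigma_x>
x_est = sum(records) / (n_steps * math.sqrt(eta) * gamma * dt)
```

The estimate converges to the true value only slowly, at the $1/\sqrt{N}$ rate set by the $\sqrt{\gamma}\, dW_t$ shot noise.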
For the experimental sequence, we prepare the qubit in an initial state (in this case we prepared the qubit in the excited state, $+x$, and $+y$), then start collecting the homodyne signal for a variable time $t$ ($t$=40 ns, 80 ns,...). Finally, we perform a projective measurement to determine the final state of the qubit at that time. \begin{figure} \centering \includegraphics[width=0.98\textwidth]{seq_spon2.pdf} \caption[The experimental setup (spontaneous emission experiment)]{\footnotesize {\bf The experimental setup and sequence:} The emitter is initialized in the excited state or in a superposition state by a preparation pulse. Right after the preparation, the homodyne signal is collected. Finally, we apply tomographic rotation pulses along different axes followed by a high-power readout pulse to projectively measure the final state of the emitter.} \label{seq_spon} \end{figure} We characterize the correlation between the average of the collected homodyne signal and the final state (at time $t$). For that, we average the projective result conditioned on the average homodyne signal $\bar{V}$. Therefore we obtain the conditional expectation values $\langle \sigma_x \rangle |_{\bar{V}}$, $\langle \sigma_y \rangle |_{\bar{V}}$, and $\langle \sigma_z \rangle |_{\bar{V}}$. In Figure \ref{fig2_spon}a-c we plot $\langle \sigma_z \rangle |_{\bar{V}}$ and $\langle \sigma_x \rangle |_{\bar{V}}$ parametrically on the $X$--$Z$ plane of the Bloch sphere for different integration times.
\begin{figure} \begin{center} \includegraphics[width=0.98\textwidth]{result_1_spon.pdf} \caption[Conditional dynamics of spontaneous decay]{\footnotesize {\bf Conditional dynamics of spontaneous decay:} The tomographic result is the averaged tomographic readout conditioned on the outcome of the homodyne measurement to determine $x \equiv \langle \sigma_x \rangle |_{\bar{V}}$ and $z\equiv \langle \sigma_z \rangle |_{\bar{V}}$. These correlated tomography results are displayed on the $X$-$Z$ plane of the Bloch sphere for three different initial states: $-z$ ({\bf a}), $+x$ ({\bf b}), and $+y$ ({\bf c}). The gray scale indicates the relative occurrence of each measurement value. Note that the different backaction between ({\bf b}) and ({\bf c}) is the result of the phase-sensitive amplification of different quadratures of the homodyne signal. } \label{fig2_spon} \end{center} \end{figure} Looking at the experimental result in Figure~\ref{fig2_spon}, a few points are noticeable: \begin{itemize} \item When the qubit is prepared in the excited state, we see that the $x$-component of the state develops a correlation with the averaged homodyne signal. \item The emitter state evolves along a deterministic curve inside the Bloch sphere. Therefore one can use these ``smiley" curves for heralding the system at a nearly arbitrary point in the Bloch sphere. \item When the emitter is prepared in the $+x$ state, the qubit state sometimes gets more excited during the decay~\cite{bolund2014stochastic}. This stochastic excitation happens only in amplitude measurements of the field; such excitations are not possible in the case of photodetection~\cite{jordan2016anatomy}.
\item If we rotate the amplification phase by 90 degrees\footnote{Or, equivalently, prepare the system in $+y$.}, as depicted in Figure~\ref{fig2_spon}c, the state evolution of the qubit is totally different. This is because the backaction happens in a different quadrature. This demonstrates how the choice of homodyne measurement phase can be used to control the evolution of the emitter. \end{itemize} We can take advantage of the deterministic ``smiley" evolution of the qubit to characterize the backaction on the qubit at different points in the Bloch sphere. For that, we let the system evolve from the excited state to a nearly arbitrary place inside the Bloch sphere along a smiley curve. This acts as heralding of the qubit state to a specific point in the Bloch sphere, $(x_i, z_i)$. Then we collect the homodyne signal for an additional 40 ns. We use results from tomography to calculate the final position of the qubit, $(x_f, z_f)$. Therefore, for each point on the smiley curve, we obtain the conditioned evolution of the qubit based on the sign of the additionally collected homodyne signal. This method tells us about the measurement backaction for positive and negative homodyne signals at each point on the Bloch sphere. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{vectormap.pdf} \caption[Backaction vector maps]{\footnotesize {\bf Backaction vector maps~\cite{naghiloo2016mapping}.} {\bf a}, We use the deterministic relation between the average homodyne signal $\bar{V}$ and the emitter's state to herald a nearly arbitrary initial state in the $X$-$Z$ plane of the Bloch sphere. The conditional backaction is obtained by quantum state tomography based on a small portion of the signal $dV$. {\bf b}, Histogram of the signals $dV$, which we separate into positive or negative $dV$.
The corresponding backaction imparted on the emitter for negative ({\bf c}) or positive ({\bf d}) values of $dV$ is depicted by arrows at different locations in the $X$-$Z$ plane of the Bloch sphere.}\label{fig3_spon} \end{center} \end{figure} The results are summarized in Figure~\ref{fig3_spon}. The backaction at a specific location in state space, associated with the detection of a given value of $dV$, is represented by the vector connecting $(x_i, z_i)$ and $(x_f, z_f)$. The backaction vector maps demonstrate how positive (negative) measurement results push the state toward $+x$ $(-x)$. Furthermore, the maps show that the measurement backaction is stronger near the state $-z$, suggesting that the measurement strength is proportional to the emitter's excitation. Finally, we can look at the individual quantum trajectories of this process. As we discussed in Chapter~4, all we need is to properly scale the homodyne signal and use it in the SME~\eqref{SME_xm_eta_xz}, \begin{eqnarray} dx &=& - \frac{\gamma}{2} x dt + \sqrt{\eta} ( 1-z - x^2 )(dV_t - \gamma \sqrt{\eta} x dt), \label{smex}\\ dz &=&\gamma (1-z) dt + \sqrt{\eta} x(1-z)(dV_t - \gamma \sqrt{\eta} x dt ), \label{smez}\\ dy &=& - \frac{\gamma}{2} y dt - \sqrt{\eta} xy (dV_t - \gamma \sqrt{\eta} x dt). \label{smey} \end{eqnarray} Figure~\ref{fig4_map_spon} shows the result of the state update from the excited state and the $+x$ state for 2 $\mu$s of continuous measurement. As we see, the evolution of the qubit during the decay process is no longer jumpy, in contrast to the case of photon detection.
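These update equations can be integrated numerically with a simple Euler scheme. The sketch below generates the homodyne record self-consistently from the state itself, so the innovation $(dV_t - \gamma\sqrt{\eta}\, x\, dt)$ reduces to pure noise $\sqrt{\gamma}\, dW_t$; the function name and parameter values are ours, chosen only for illustration:

```python
import math
import random

random.seed(0)

def trajectory(gamma=1.0, eta=0.4, dt=1e-3, t_final=2.0,
               x0=0.0, y0=0.0, z0=-1.0):
    """Euler update of the Bloch components for homodyne-monitored decay.
    The record is simulated from the state: dV = sqrt(eta)*gamma*x*dt
    + sqrt(gamma)*dW, so the innovation is sqrt(gamma)*dW."""
    x, y, z = x0, y0, z0
    traj = [(x, y, z)]
    for _ in range(round(t_final / dt)):
        dW = random.gauss(0.0, math.sqrt(dt))
        dV = math.sqrt(eta) * gamma * x * dt + math.sqrt(gamma) * dW
        innov = dV - gamma * math.sqrt(eta) * x * dt
        dx = -0.5 * gamma * x * dt + math.sqrt(eta) * (1 - z - x**2) * innov
        dz = gamma * (1 - z) * dt + math.sqrt(eta) * x * (1 - z) * innov
        dy = -0.5 * gamma * y * dt - math.sqrt(eta) * x * y * innov
        x, y, z = x + dx, y + dy, z + dz
        traj.append((x, y, z))
    return traj
```

Starting from the excited state ($z_0=-1$), each run produces a smooth, stochastic relaxation toward the ground state rather than an instantaneous jump, reproducing the qualitative behavior described above.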
However, the average of many trajectories recovers the same exponentially damped behavior that we discussed in the previous section\footnote{Regardless of the type of the detector, we recover the Lindbladian evolution of the system if we average over many detection outcomes (this is equivalent to disregarding all measurement outcomes).}. Moreover, in Figure~\ref{fig4_map_spon}b the stochastic excitations of individual trajectories toward the excited state are clearly apparent. \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\textwidth]{trajectories_v2.pdf} \caption[Quantum trajectories for a decaying atom]{ \footnotesize \textbf{Quantum trajectories for a decaying atom:} {\bf a,b}, Quantum trajectories of spontaneous decay calculated by the stochastic master equation, initiated from $-z$ ({\bf a}) and $+x$ ({\bf b}). Several trajectories are depicted in gray, and a few individual trajectories are highlighted in black. {\bf c,d}, Individual trajectories ($\tilde{x}, \tilde{z}$) that originate from $-z$ ({\bf c}) and $+x$ ({\bf d}) are shown as dashed lines, and the tomographic reconstructions (see Chapter~4) based on projective measurements are shown as solid lines.}\label{fig4_map_spon} \end{center} \end{figure} One can quantify the stochastic excitation by extracting the probability of excitation above a certain threshold at different times. Looking at the measurement term (proportional to $\sqrt{\eta}$) in Equation~\eqref{smez}, it is clear that the state at $+x$ will be stochastically excited if the Wiener increment $dW_t$, obtained from the detected signal $dV_t$, is less than $- \sqrt{\gamma/\eta}\, dt$, predicting that $\sim 35\%$ of the trajectories should be excited in the first time step~\cite{naghiloo2016mapping}. Having access to the stochastic trajectories of a quantum system opens new doors to investigate the dynamics of open quantum systems.
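The excitation condition above amounts to a Gaussian tail probability: since $dW_t \sim \mathcal{N}(0, dt)$, the probability that $dW_t < -\sqrt{\gamma/\eta}\, dt$ is $\Phi(-\sqrt{\gamma\, dt/\eta})$, where $\Phi$ is the standard normal CDF. A minimal sketch (the parameter values below are illustrative; the quoted $\sim 35\%$ corresponds to the actual experimental $\gamma$, $\eta$, and time step):

```python
import math

def excitation_probability(gamma, eta, dt):
    """P(dW < -sqrt(gamma/eta)*dt) for dW ~ N(0, dt): the standard
    normal CDF evaluated at -sqrt(gamma*dt/eta)."""
    arg = -math.sqrt(gamma * dt / eta)
    return 0.5 * (1.0 + math.erf(arg / math.sqrt(2.0)))

p = excitation_probability(gamma=1.0, eta=0.4, dt=1e-3)
```

Note that the probability approaches $1/2$ as $dt \to 0$ and decreases for coarser time steps or lower efficiency.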
In particular, the stochastic and non-unitary dynamics of quantum systems combined with a unitary evolution exhibit rich dynamics which can be utilized for studying fundamental questions in quantum physics~\cite{jordan2016anatomy,foroozani2016correlations,naghiloo2017quantum,lewalle2017prediction}.\\ \chapter{Quantum Thermodynamics: Quantum Maxwell's Demon \label{ch6}} In this chapter we explore quantum thermodynamics at the extreme level of a single atom interacting with a bath. The atom is a two-level quantum system in contact with a detector which acts as the atom's environment. In this chapter we attempt to put our understanding of quantum dynamics into the language of quantum thermodynamics. In particular, we study the information-energy connection in quantum thermodynamics in the context of Maxwell's demon. \section{Fluctuation theorems: thermodynamics at the microscopic scale} Thermodynamics is normally considered a theory which describes systems in the limit of a large number of particles, $N \to \infty$. In this limit, often known as the thermodynamic limit, fluctuations of energy are absolutely negligible compared to the total energy in the system. Therefore, it makes sense to describe the state of the system by a few macroscopic parameters regardless of fluctuations in individual degrees of freedom. For example, we define an equilibrium state and characterize the total energy in terms of heat and work by only a few thermodynamic parameters (e.g.\ volume, pressure, temperature) for a gas inside a piston, regardless of the position and the velocity of individual gas molecules. As depicted in Figure~\ref{fig:fluctu}a, the work fluctuations in a thermodynamic process are negligible in the thermodynamic limit, so that the work distribution is effectively a delta function. However, for microscopic systems which have a finite number of degrees of freedom, the fluctuations are no longer negligible.
In this limit, fluctuations basically drive the system in a stochastic manner during the process\footnote{Similar to quantum trajectories, which are stochastic due to quantum fluctuations.}, as depicted in Figure~\ref{fig:fluctu}b. Therefore, the traditional thermodynamic laws need to be revisited for microscopic systems where thermal fluctuations are significant. \begin{figure}[ht] \centering \includegraphics[width = 0.78\textwidth]{fluctu.pdf} \caption[Work fluctuations in the macroscopic and microscopic limit]{ { \footnotesize \textbf{Work fluctuations in the macroscopic and microscopic limit:} \textbf{a}. The work distribution for a system (gas inside a cylinder) in the thermodynamic limit (number of particles $N \to \infty$). The relative fluctuations, $\propto N^{-\frac{1}{2}}$, are absolutely negligible in this limit. \textbf{b}. The corresponding system in the limit of a finite number of particles ($N \to 1$). The work distribution fluctuates substantially due to thermal fluctuations.} } \label{fig:fluctu} \end{figure} In the past decades, thermodynamics has been successfully extended to nonequilibrium microscopic systems to account for thermal fluctuations. In particular, the generalized second law of thermodynamics, in terms of a fluctuation theorem, has been experimentally verified for classical systems~\cite{collin2005verification}. For example, it has been shown that work fluctuations in a nonequilibrium process follow a fairly strong rule known as the Jarzynski equality (JE), \begin{equation}\label{eq:jar0} \langle e^{- \beta W} \rangle = e^{- \beta \Delta F}, \end{equation} which connects the work distribution $W$ of a nonequilibrium process to the equilibrium free energy difference $\Delta F$~\cite{jarzynski1997nonequilibrium}.
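As a toy check of the JE, note that a Gaussian work distribution $W \sim \mathcal{N}(\mu, \sigma^2)$ gives $\langle e^{-\beta W}\rangle = e^{-\beta\mu + \beta^2\sigma^2/2}$, so the equality holds exactly when $\Delta F = \mu - \beta\sigma^2/2$. A Monte Carlo sketch (all values illustrative):

```python
import math
import random

random.seed(2)

# Illustrative Gaussian work statistics: W ~ N(mu, sigma^2)
beta, mu, sigma = 1.0, 1.0, 0.5
delta_F = mu - beta * sigma**2 / 2        # value for which the JE holds exactly

samples = [random.gauss(mu, sigma) for _ in range(200_000)]
lhs = sum(math.exp(-beta * w) for w in samples) / len(samples)  # <exp(-beta W)>
rhs = math.exp(-beta * delta_F)                                 # exp(-beta dF)
```

Consistent with Jensen's inequality, the sampled mean work ($\approx \mu$) exceeds $\Delta F$, even though rare low-work realizations dominate the exponential average.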
One can recover the second law of thermodynamics from the JE by using Jensen's inequality, \begin{equation}\label{eq:jar1} \langle e^{- \beta W} \rangle = e^{- \beta \Delta F} \xrightarrow{\langle e^{x}\rangle \geq e^{\langle x \rangle}} \langle W \rangle \geq \Delta F. \end{equation} Therefore, the JE is considered the 2nd law of thermodynamics for microscopic systems. This equality has been verified experimentally for classical systems (see for example Ref.~\cite{liphardt2002equilibrium}). However, the extension of thermodynamics to include quantum fluctuations faces unique challenges, because quantum fluctuations and coherence do not have a clear role in thermodynamics. The newfound experimental capability to track single quantum trajectories adds to an intense endeavor to study and define thermodynamic quantities for individual quantum systems. \section{Maxwell's demon and the 2nd law} Consider the schematic in Figure~\ref{fig:fluctu}b in the limit of a few particles in the cylinder. If we are able to track the particles and react fast enough, we can basically displace the piston without doing any work! In this case, the work distribution is ideally a delta function at zero, yet the piston is displaced, so $\Delta F \neq 0$. Thus the JE is no longer valid. In fact, Maxwell came up with a similar idea, which was in apparent violation of the 2nd law, soon after the establishment of thermodynamics. Maxwell considered a box full of air molecules and an intelligent being who has access to the velocity and position of individual molecules. The demon can sort hot and cold particles to either side of the box without doing any work, as depicted in Figure~\ref{fig:demon}.
\begin{figure}[ht] \centering \includegraphics[width = 0.8\textwidth]{demon.pdf} \caption[Maxwell's demon]{ { \footnotesize \textbf{Maxwell's demon:} By knowing the position and the velocity of the particles, a demon sorts hot and cold particles in a box in apparent violation of the 2nd law.} } \label{fig:demon} \end{figure} The question of ``How can the demon make an oven next to a fridge without doing any work, in violation of the 2nd law?'' reveals a profound connection between energy and information in thermodynamics\footnote{This question of how the demon actually violates the 2nd law was unsolved for decades.}. Owing to the dominant contribution of fluctuations to the dynamics of microscopic systems, a lot of effort has been directed toward understanding the connection between energy and information in microscopic systems. In particular, the Jarzynski equality (2nd law) has been generalized to account for the demon's information, \begin{equation}\label{eq:jar2} \langle e^{- \beta W - I} \rangle = e^{- \beta \Delta F}, \end{equation} where $I$ is the mutual information between the demon's measurement outcome and the state of the system. The generalized Jarzynski equality (GJE) has been studied and verified for classical microscopic systems, in which the demon is realized by measuring the thermal fluctuations and by applying subsequent feedback on the system~\cite{serreli2007molecular,raizen2009comprehensive,toyabe2010experimental,koski2014experimental}. Recent advances in fabrication and control over quantum systems allow for unprecedented study of the concept of Maxwell's demon in quantum systems, where quantum fluctuations, instead of thermal fluctuations, are dominant.
For example, in the minimal quantum situation of a two-level quantum system, the generalized Jarzynski equality has been verified experimentally by considering the mutual information between projective measurement outcomes and the state of the qubit~\cite{camati2016experimental,cottet2017observing,ciampini2017experimental,masuyama2018information}. Although these experiments use quantum systems, their results can be interpreted as a classical mixture, either because the dynamics doesn't include quantum coherence or because the projective measurement destroys the quantum coherence. However, in an actual quantum situation, the demon can also gain information about the quantum coherences, the off-diagonal elements in the density matrix\footnote{One can think of it this way: a classical demon is able to identify particles as either hot or cold, but a quantum demon can in general also identify particles that are in a superposition of hot and cold.} (Fig.~\ref{fig:demon_c_q}). \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{demon_c_q.pdf} \caption[Classical demon vs. quantum demon]{ { \footnotesize \textbf{Classical demon vs. quantum demon:} \textbf{a}, A classical demon's knowledge about a quantum system is limited to the populations in the definite states. \textbf{b}, A quantum demon also has knowledge about the quantum coherence in the system.} } \label{fig:demon_c_q} \end{figure} In previous chapters, we studied continuous monitoring and weak measurement of a quantum system. Through weak measurement we can learn about the quantum state and quantum coherences without (completely) destroying them. Therefore in this chapter, we utilize continuous monitoring to study Maxwell's demon in the context of quantum measurement.
\section{Continuous monitoring: a quantum Maxwell's demon} The idea is to use our ability to track and manipulate the quantum state to realize a truly quantum Maxwell's demon. For that, consider the $z$-measurement setup (discussed in Chapter~4) and the experimental sequence demonstrated in Figure~\ref{fig:seq_demon}. The experimental protocol consists of five steps: \begin{figure}[ht] \centering \includegraphics[width = 0.98\textwidth]{seq_demon.pdf} \caption[Maxwell's demon experimental sequence]{ { \footnotesize \textbf{Experimental sequence}.} } \label{fig:seq_demon} \end{figure} \begin{itemize} \item In Step~1, the qubit is prepared in a thermal state characterized by an inverse temperature\footnote{Here we express $\beta$ in the qubit energy scale, so that initially the qubit populations satisfy $P_1/P_0=e^{-\beta}$.} $\beta$. Practically, this can be done by a proper rotation pulse followed by a projective measurement and by then disregarding the measurement outcome. \item In Step~2, a projective measurement is performed so that the qubit is projected to one of its eigenstates. The binary measurement outcome $X \in \{0,1\}$ is recorded. This result, along with the projective result in Step~5, will be used to calculate the transition probabilities and characterize the work distribution for the experiment. \item In Step~3, the demon, without knowing the projective measurement result $X$, starts monitoring the qubit state while an external drive also acts on the system. Note that the effective Hamiltonian for a resonantly driven qubit in the rotating frame is $H_t=- \Omega_R \sigma_y/2$, where $\Omega_R$ quantifies the drive strength, as discussed in Chapter~2.
\item In Step~4, at a certain time, the demon uses its knowledge about the state of the system to rotate the system back to the ground state and extract work\footnote{In the actual experiment, in order to avoid feedback delay, we perform a random rotation pulse and post-select the correct pulses in the data analysis.}.
\item In Step~5, the experiment is finished by a second projective measurement, which yields a binary measurement outcome $Z \in\{ 0, 1\}$.
\end{itemize}
We repeat this experimental protocol and gather measurement statistics to experimentally study the second law of thermodynamics. For example, Figure~\ref{fig:scatter_demon} shows the scatter plot of the final states of the qubit before and after the rotation feedback in Step~4 for 200 experimental runs.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.28\textwidth]{scatter_demon.pdf}
\caption[The act of the demon in Step~4]{ { \footnotesize \textbf{The act of the demon in Step~4:} The red (blue) dots are the states of the qubit before (after) the rotation feedback. The circular markers show the averages of the states before and after feedback. All data are from weak measurement and quantum trajectory reconstruction, except the black cross, which comes from the projective measurement after feedback. The agreement between the cross and the green circular marker indicates that the trajectory update and feedback rotations are faithfully executed.} }
\label{fig:scatter_demon}
\end{figure}
\subsection{Examining the Jarzynski equality}
Now, we examine the Jarzynski equality~\ref{eq:jar1} in the following form,
\begin{eqnarray}\label{eq:jar0_exp}
\langle e^{-\beta W } \rangle &=& \int P(W) e^{- \beta W} dW,
\end{eqnarray}
where we set $\Delta F=0$ since the initial and final Hamiltonians are practically the same in our experiment.
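The averages above are built from two-level transition probabilities. As a minimal numerical sketch (with illustrative parameter values, not the experimental analysis code), the feedback-free drive of Step~3 generates a Rabi rotation, and the transition probabilities between energy eigenstates follow from $P_{mn}=|\langle m|U(\tau)|n\rangle|^2$:

```python
import numpy as np

beta = 1.2                     # illustrative inverse temperature (qubit energy units)
rabi, tau = 2*np.pi*0.4, 0.3   # illustrative Rabi frequency and drive duration

# Step 1: thermal occupations with P1/P0 = exp(-beta)
p0 = 1.0/(1.0 + np.exp(-beta))
P_init = np.array([p0, 1.0 - p0])

# Step 3: rotating-frame drive H = -(Omega_R/2) sigma_y, so
# U = exp(-i H tau) = cos(theta/2) I + i sin(theta/2) sigma_y, theta = Omega_R*tau
theta = rabi*tau
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
U = np.cos(theta/2)*np.eye(2) + 1j*np.sin(theta/2)*sy

# Transition probabilities between energy eigenstates, P_trans[m, n] = |<m|U|n>|^2
P_trans = np.abs(U)**2

# Unitarity makes P_trans doubly stochastic: rows and columns each sum to one
assert np.allclose(P_trans.sum(axis=0), 1.0)
assert np.allclose(P_trans.sum(axis=1), 1.0)
```

Double stochasticity of $P_{mn}$ is exactly the property that forces the feedback-free Jarzynski average over these transition probabilities to equal unity.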
In order to obtain the work distribution $P(W)$ we use only the projective measurement results and calculate the transition probabilities $P_{m,n}$ as demonstrated in Figure~\ref{fig:trans_demon}. The work distribution can then be calculated in this form\footnote{Note that, because quantum systems do not necessarily occupy states with well-defined energy (only eigenstates of the Hamiltonian have a well-defined energy), the work distribution is described in terms of transition probabilities between energy eigenstates \cite{talkner2007fluctuation}.},
\begin{equation}\label{eq:pu}
P(W) = \sum_{m,n} P_{m,n}^{\tau} P_{n}^{0} \delta(W - (E^\tau_m-E^0_n)),
\end{equation}
where $P_{n}^{0}$ denote the initial occupation probabilities, $P_{m,n}^{\tau}$ are the transition probabilities between the initial and final eigenvalues $E^0_n$ and $E^\tau_m$ of the Hamiltonian $H_t$, and $\tau$ is the duration of the protocol.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.48\textwidth]{trans_demon.pdf}
\caption[Transition probabilities]{ { \footnotesize \textbf{Transition probabilities:} The projective measurement results are used to calculate the transition probabilities. The initial probabilities $P_0(0)$ and $P_1(0)$ are simply calculated based on the relative occurrence of outcome 0 or 1 in the first projective measurement.
For the transition probabilities $P_{nm}(\tau)$, we calculate the relative occurrence of the result $n \in \{0,1\}$ in the second projective measurement, conditioned on the result $m \in \{0,1\}$ having been obtained in the first projective measurement.} }
\label{fig:trans_demon}
\end{figure}
Therefore, we examine the Jarzynski equality using the transition probabilities as follows,
\begin{eqnarray}\label{eq:jar1_exp}
\int P(W) e^{- \beta W} dW &=& P_0(0)P_{00}(\tau)+ P_1(0)P_{11}(\tau)\nonumber \\
&& + P_0(0) P_{10}(\tau) e^{-\beta} + P_1(0) P_{01}(\tau) e^{+\beta} \nonumber\\
&=& 1.
\end{eqnarray}
Figure~\ref{fig:jar_demon} (square markers) shows the experimental result for the left-hand side of Equation~\eqref{eq:jar1_exp} for five different duration times. It is no surprise that the result deviates from unity, because in Equation~\eqref{eq:jar1_exp} we have ignored the act of the demon on the system. In other words, the demon violates the second law unless we account for the demon's information.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.48\textwidth]{jar_demon.pdf}
\caption[Violation of the 2nd law]{ { \footnotesize \textbf{Violation of the 2nd law:} The experimental results violate the Jarzynski equality. This violation occurs because we have ignored the demon's information.} }
\label{fig:jar_demon}
\end{figure}
\subsection{The demon's information}
In this section, we quantify the information that the demon obtains during the measurement. But what is information? One way to quantify information is to measure how much one learns that one did not already know~\cite{lutz2015maxwell}.
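As a sanity check, the transition-probability sum equals one identically for any feedback-free unitary drive, so the experimental deviation from unity can be attributed to the demon's feedback in Step~4. A sketch with illustrative numbers, using the convention that $W$ is the work done on the qubit (a $0\to1$ transition contributes $e^{-\beta}$):

```python
import numpy as np

beta = 0.8    # illustrative inverse temperature (qubit energy units)
theta = 1.1   # illustrative rotation angle Omega_R * tau of the drive

# Thermal initial occupations, P1/P0 = exp(-beta)
P0_i = 1.0/(1.0 + np.exp(-beta))
P1_i = 1.0 - P0_i

# Rabi-rotation transition probabilities |<m|U|n>|^2 (no feedback)
P00 = P11 = np.cos(theta/2)**2
P10 = P01 = np.sin(theta/2)**2

# Transition-probability sum for <exp(-beta W)> with Delta F = 0; the 0->1
# transition (positive work done on the qubit) contributes exp(-beta)
lhs = (P0_i*P00 + P1_i*P11
       + P0_i*P10*np.exp(-beta) + P1_i*P01*np.exp(+beta))

assert abs(lhs - 1.0) < 1e-12   # holds for any beta and theta without feedback
```

The identity follows because the unitary transition matrix is doubly stochastic; once the feedback rotation of Step~4 conditions the dynamics on the demon's record, this cancellation no longer occurs.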
If we already know that the qubit state is $\rho= 0.99 |0\rangle \langle 0 | + 0.01 |1\rangle \langle 1 | $ and someone measures the qubit and tells us that the qubit is in the ground state, we do not learn much. But if it turns out that the qubit is in the excited state, our state of knowledge about the qubit changes substantially. Therefore, the amount of information can be quantified by how unexpected the outcome is. For that, consider $I_z(\rho)=- \ln P_z(\rho)$ as the information content of $\rho$ along the $z$-basis, which quantifies how much we learn if we obtain the result $z=-1,1$ along the $z$-basis. Now we define the information exchange for the demon as the difference between the initial and final information contents,
\begin{eqnarray}
I_{z',z}(t)&=& \ln P_{z'}(\rho_{t|r}) - \ln P_z(\rho_0), \label{Mut_inf1}
\end{eqnarray}
where $P_{z'}$ represents the probability of obtaining the result $z'=-1,1$ in the $z'$-basis in which the system is diagonal\footnote{The initial state is always a thermal state, so the diagonal basis is initially the $z$-basis.}~\cite{funo2013integral}. We calculate the probabilities in the diagonal basis to account for the information encoded in the populations (diagonal elements of the density matrix) as well as in the coherences (off-diagonal elements), as depicted in Figure~\ref{fig:info_demon}a.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.78\textwidth]{info_demon.pdf}
\caption[Information dynamics for the quantum Maxwell's demon]{ { \footnotesize \textbf{Information dynamics for the quantum Maxwell's demon:} \textbf{a}. The information is quantified by considering the probabilities in the diagonal basis to account for information encoded in the coherences. \textbf{b}.
The information exchange dynamics along quantum trajectories; the dashed line shows a typical trace from the ensemble of data (shaded background color). The black solid line is the average of the information exchange.} }
\label{fig:info_demon}
\end{figure}
For example, the state of the qubit indicated by the blue arrow in Figure~\ref{fig:info_demon}a has the same amount of quantum information as the magenta arrow, provided that the probabilities are calculated in the diagonal basis of each state. The expectation value of the information exchange along a quantum trajectory is
\begin{eqnarray}
\tilde{I}_r = \sum_{z,z'=\pm 1} [P_{z'}(\rho_{t|r}) \ln P_{z'}(\rho_{t|r}) - P_z(\rho_0)\ln P_z(\rho_0)], \label{Mut_inf2}
\end{eqnarray}
where the conditional probabilities come from a single quantum trajectory. The subscript $r$ indicates conditioning on the measurement record; without it, $\rho_t$ denotes the unconditional evolution found by averaging over many trajectories. We obtain the average value of the information exchange as
\begin{eqnarray}
\langle I \rangle = \sum_r \tilde{I}_r = \sum_{z,z'=\pm1,r} P_{z'}(\rho_t) \ln P_{z'}(\rho_t) - P_z(\rho_0)\ln P_z(\rho_0). \label{Mut_inf3}
\end{eqnarray}
\subsection{Test of the generalized Jarzynski equality}
Now we attempt to verify the generalized Jarzynski equality, which includes the information term.
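Before turning to that test, note that since $\sum_z P_z \ln P_z = -S(\rho)$ when the probabilities are taken in the basis where $\rho$ is diagonal, the information-exchange averages above reduce to von Neumann entropy differences, $\tilde{I}_r = S(\rho_0) - S(\rho_{t|r})$. A minimal sketch with illustrative record weights and conditional states (not experimental data; the record probabilities $p(r)$ are written explicitly here):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy; equals -sum_z P_z ln P_z in the diagonal basis."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p*np.log(p)).sum())

# Illustrative thermal initial state and two conditional end states rho_{t|r}
rho0  = np.diag([0.7, 0.3])
p_r   = [0.6, 0.4]                                     # record probabilities p(r)
rho_r = [np.diag([0.95, 0.05]), np.diag([0.55, 0.45])]

# Per-record information exchange I_r = S(rho_0) - S(rho_{t|r}) and its average
I_r   = [vn_entropy(rho0) - vn_entropy(rho) for rho in rho_r]
avg_I = sum(p*i for p, i in zip(p_r, I_r))
# Here the first record gains information while the second loses it
```

In this sketch one record purifies the state (information gain) while the other leaves it more mixed (information loss); in the experiment, the ensemble average can even become negative once decoherence from inefficient detection dominates.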
For that, we represent Equation~\eqref{eq:jar2} as\footnote{The sign of $W$ in the GJE depends on our definition of the work, i.e., the work done by the system or the work done on the system: $e^{\pm\beta W - I + \Delta F}$.},
\begin{equation}
\begin{split}
\langle e^{-\beta W - I + \Delta F} \rangle =& P_0(0)P_{00}(t)e^{-I_{00}} + P_1(0)P_{11}(t)e^{-I_{11}} \\
+& P_0(0) P_{10}(t) e^{-\beta-I_{10}} + P_1(0) P_{01}(t) e^{+\beta-I_{01}},
\end{split}
\label{jarcq}
\end{equation}
where $I_{ij}= \ln P_{i}(\rho_t) - \ln P_j(\rho_0)$, as discussed in the previous subsection. Figure~\ref{fig:gen_jar_demon} shows the experimental result for Equation~\eqref{jarcq}, which indicates that the generalized Jarzynski equality is indeed verified.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.48\textwidth]{gen_jar_demon.pdf}
\caption[Generalized Jarzynski equality for the quantum Maxwell's demon]{ { \footnotesize \textbf{Generalized Jarzynski equality for the quantum Maxwell's demon:} The blue round markers are the experimental results for the generalized Jarzynski equality for different time durations. The agreement between the dashed line and the markers indicates that the GJE is verified, as opposed to the JE (red square markers).} }
\label{fig:gen_jar_demon}
\end{figure}
\section{Information gain and loss}
Now, we study the information dynamics at the ensemble level. In Figure~\ref{fig:info_demon}b (solid curve) we showed the average information change from many trajectories. One may notice that the average information is negative. This loss of information is due to decoherence, which is a uniquely quantum feature that only appears if coherences contribute to the dynamics, and is thus not possible in a classical situation (e.g.,
see Ref.~\cite{masuyama2018information}). One may separate the information change into two parts, distinguishing the contribution of information gain through measurement from the information loss due to imperfect detection \cite{funo2013integral}. In principle, imperfect detection arises because the state evolution of the detector is not exactly known, and we must average over the possible configurations of the detector, as illustrated in Figure~\ref{fig:info_transition_demon}a. If we consider the detector uncertainty as an average over inaccessible degrees of freedom, parameterized by a stochastic variable $a$, the exchanged information \eqref{Mut_inf3} can be written as the difference of an information gain and an information loss, $\langle I \rangle =I_{\rm gain} - I_{\rm loss}$, where~\cite{funo2013integral}
\begin{eqnarray}
I_{\rm gain} &=& S(\rho_0)- \sum_{a,r} p(a,r) S(\rho_{t|r,a}) \geqslant 0,\\
I_{\rm loss} &=& \sum_r p(r) S(\rho_{t|r})- \sum_{a,r} p(a,r) S(\rho_{t|r,a}) \geqslant 0.
\end{eqnarray}
\begin{figure}[ht]
\centering
\includegraphics[width = 0.78\textwidth]{info_transition_demon.pdf}
\caption[Information gain and loss for the quantum Maxwell's demon]{ { \footnotesize \textbf{Information gain and loss for the quantum Maxwell's demon:} \textbf{a}, Inefficient detection can be modeled by averaging over unknown degrees of freedom of the detector. This effectively lowers the signal-to-noise ratio. \textbf{b}, By adjusting the initial preparation of the qubit, we change the effective temperature of the system. The average information exchange is negative for lower temperatures (initially purer states) but positive for higher temperatures (initially more mixed states).
Considering that only the information encoded in coherences is susceptible to loss, the more coherence is involved in the dynamics, the more information will be lost.}}
\label{fig:info_transition_demon}
\end{figure}
However, we do not have access to $a$ in this experiment. Still, we can explore regimes in which quantum coherence contributes differently to the dynamics, meaning that the loss makes a different contribution to the total information change. To do this, we prepare the system in different initial thermal states and calculate the average information exchange for different initial temperatures. In Figure~\ref{fig:info_transition_demon}b we plot the final information exchange (at $2~\mu$s) versus different initial thermal states (characterized by $z_{in}= \langle z\rangle|_{t=0}$). The total information change is positive for higher temperatures (more mixed initial states) but negative for lower temperatures (more pure initial states). This transition from information gain to information loss can be understood by noting that the loss comes from decoherence of the lower-temperature (more pure) states. In our case, initially purer states acquire more coherence through the unitary drive, which turns the initial populations into coherences; these coherences are then lost due to inefficient detection, ultimately leading to a loss of information.
\begin{thebibliography}{100}
\bibitem{gisin2007quantum}
Nicolas Gisin and Rob Thew.
\newblock Quantum communication.
\newblock {\em Nature photonics}, 1(3):165, 2007.
\bibitem{degen2017quantum}
Christian~L Degen, F~Reinhard, and P~Cappellaro.
\newblock Quantum sensing.
\newblock {\em Reviews of modern physics}, 89(3):035002, 2017.
\bibitem{wendin2017quantum}
G~Wendin.
\newblock Quantum information processing with superconducting circuits: a review.
\newblock {\em Reports on Progress in Physics}, 80(10):106001, 2017.
\bibitem{rotter2015review} Ingrid Rotter and JP~Bird. \newblock A review of progress in the physics of open quantum systems: theory and experiment. \newblock {\em Reports on Progress in Physics}, 78(11):114001, 2015. \bibitem{zurek1992environment} Wojciech~H Zurek. \newblock The environment, decoherence, and the transition from quantum to classical. \newblock In {\em Quantum Gravity And Cosmology-Proceedings Of The Xxii Gift International Seminar On Theoretical Physics}, page 117. World Scientific, 1992. \bibitem{modi2012classical} Kavan Modi, Aharon Brodutch, Hugo Cable, Tomasz Paterek, and Vlatko Vedral. \newblock The classical-quantum boundary for correlations: discord and related measures. \newblock {\em Reviews of Modern Physics}, 84(4):1655, 2012. \bibitem{tan2016quantum} Dian Tan, Mahdi Naghiloo, Klaus M{\"o}lmer, and KW~Murch. \newblock Quantum smoothing for classical mixtures. \newblock {\em Physical Review A}, 94(5):050102, 2016. \bibitem{dressel2017arrow} Justin Dressel, Areeya Chantasri, Andrew~N Jordan, and Alexander~N Korotkov. \newblock Arrow of time for continuous quantum measurement. \newblock {\em Physical review letters}, 119(22):220507, 2017. \bibitem{manikandan2018fluctuation} Sreenath~K Manikandan, Cyril Elouard, and Andrew~N Jordan. \newblock Fluctuation theorems for continuous quantum measurement and absolute irreversibility. \newblock {\em arXiv preprint arXiv:1807.05575}, 2018. \bibitem{harrington2018characterizing} PM~Harrington, D~Tan, M~Naghiloo, and KW~Murch. \newblock Characterizing a statistical arrow of time in quantum measurement dynamics. \newblock {\em arXiv preprint arXiv:1811.07708}, 2018. \bibitem{vinjanampathy2016quantum} Sai Vinjanampathy and Janet Anders. \newblock Quantum thermodynamics. \newblock {\em Contemporary Physics}, 57(4):545--579, 2016. \bibitem{houck2008controlling} AA~Houck, JA~Schreier, BR~Johnson, JM~Chow, Jens Koch, JM~Gambetta, DI~Schuster, L~Frunzio, MH~Devoret, SM~Girvin, et~al. 
\newblock Controlling the spontaneous emission of a superconducting transmon qubit. \newblock {\em Physical review letters}, 101(8):080502, 2008. \bibitem{gladchenko2009superconducting} Sergey Gladchenko, David Olaya, Eva Dupont-Ferrier, Benoit Dou{\c{c}}ot, Lev~B Ioffe, and Michael~E Gershenson. \newblock Superconducting nanocircuits for topologically protected qubits. \newblock {\em Nature Physics}, 5(1):48, 2009. \bibitem{gambetta2011superconducting} JM~Gambetta, AA~Houck, and Alexandre Blais. \newblock Superconducting qubit with purcell protection and tunable coupling. \newblock {\em Physical review letters}, 106(3):030502, 2011. \bibitem{kockum2018decoherence} Anton~Frisk Kockum, G{\"o}ran Johansson, and Franco Nori. \newblock Decoherence-free interaction between giant atoms in waveguide quantum electrodynamics. \newblock {\em Physical review letters}, 120(14):140404, 2018. \bibitem{ningyuan2015time} Jia Ningyuan, Clai Owens, Ariel Sommer, David Schuster, and Jonathan Simon. \newblock Time-and site-resolved dynamics in a topological circuit. \newblock {\em Physical Review X}, 5(2):021031, 2015. \bibitem{dempster2014understanding} Joshua~M Dempster, Bo~Fu, David~G Ferguson, DI~Schuster, and Jens Koch. \newblock Understanding degenerate ground states of a protected quantum circuit in the presence of disorder. \newblock {\em Physical Review B}, 90(9):094518, 2014. \bibitem{kelly2015state} Julian Kelly, Rami Barends, Austin~G Fowler, Anthony Megrant, Evan Jeffrey, Theodore~C White, Daniel Sank, Josh~Y Mutus, Brooks Campbell, Yu~Chen, et~al. \newblock State preservation by repetitive error detection in a superconducting quantum circuit. \newblock {\em Nature}, 519(7541):66, 2015. \bibitem{ofek2016extending} Nissim Ofek, Andrei Petrenko, Reinier Heeres, Philip Reinhold, Zaki Leghtas, Brian Vlastakis, Yehan Liu, Luigi Frunzio, SM~Girvin, L~Jiang, et~al. \newblock Extending the lifetime of a quantum bit with error correction in superconducting circuits. 
\newblock {\em Nature}, 536(7617):441, 2016. \bibitem{corcoles2015demonstration} Antonio~D C{\'o}rcoles, Easwar Magesan, Srikanth~J Srinivasan, Andrew~W Cross, Matthias Steffen, Jay~M Gambetta, and Jerry~M Chow. \newblock Demonstration of a quantum error detection code using a square lattice of four superconducting qubits. \newblock {\em Nature communications}, 6:6979, 2015. \bibitem{reed2012realization} Matthew~D Reed, Leonardo DiCarlo, Simon~E Nigg, Luyan Sun, Luigi Frunzio, Steven~M Girvin, and Robert~J Schoelkopf. \newblock Realization of three-qubit quantum error correction with superconducting circuits. \newblock {\em Nature}, 482(7385):382, 2012. \bibitem{korotkov2010decoherence} Alexander~N Korotkov and Kyle Keane. \newblock Decoherence suppression by quantum measurement reversal. \newblock {\em Physical Review A}, 81(4):040103, 2010. \bibitem{kim2012protecting} Yong-Su Kim, Jong-Chan Lee, Osung Kwon, and Yoon-Ho Kim. \newblock Protecting entanglement from decoherence using weak measurement and quantum measurement reversal. \newblock {\em Nature Physics}, 8(2):117, 2012. \bibitem{gillett2010experimental} GG~Gillett, RB~Dalton, BP~Lanyon, MP~Almeida, Marco Barbieri, Geoff~J Pryde, JL~O’brien, KJ~Resch, SD~Bartlett, and AG~White. \newblock Experimental feedback control of quantum systems using weak measurements. \newblock {\em Physical review letters}, 104(8):080503, 2010. \bibitem{vijay2012stabilizing} R~Vijay, Chris Macklin, DH~Slichter, SJ~Weber, KW~Murch, Ravi Naik, Alexander~N Korotkov, and Irfan Siddiqi. \newblock Stabilizing rabi oscillations in a superconducting qubit using quantum feedback. \newblock {\em Nature}, 490(7418):77, 2012. \bibitem{sayrin2011real} Cl{\'e}ment Sayrin, Igor Dotsenko, Xingxing Zhou, Bruno Peaudecerf, Th{\'e}o Rybarczyk, S{\'e}bastien Gleyzes, Pierre Rouchon, Mazyar Mirrahimi, Hadis Amini, Michel Brune, et~al. \newblock Real-time quantum feedback prepares and stabilizes photon number states. 
\newblock {\em Nature}, 477(7362):73, 2011. \bibitem{sorensen2003measurement} Anders~S S{\o}rensen and Klaus M{\o}lmer. \newblock Measurement induced entanglement and quantum computation with atoms in optical cavities. \newblock {\em Physical review letters}, 91(9):097905, 2003. \bibitem{ruskov2003entanglement} Rusko Ruskov and Alexander~N Korotkov. \newblock Entanglement of solid-state qubits by measurement. \newblock {\em Physical Review B}, 67(24):241305, 2003. \bibitem{roch2014observation} Nicolas Roch, Mollie~E Schwartz, Felix Motzoi, Christopher Macklin, Rajamani Vijay, Andrew~W Eddins, Alexander~N Korotkov, K~Birgitta Whaley, Mohan Sarovar, and Irfan Siddiqi. \newblock Observation of measurement-induced entanglement and quantum trajectories of remote superconducting qubits. \newblock {\em Physical review letters}, 112(17):170501, 2014. \bibitem{murch2013observing} KW~Murch, SJ~Weber, Christopher Macklin, and Irfan Siddiqi. \newblock Observing single quantum trajectories of a superconducting quantum bit. \newblock {\em Nature}, 502(7470):211, 2013. \bibitem{hacohen2016quantum} Shay Hacohen-Gourgy, Leigh~S Martin, Emmanuel Flurin, Vinay~V Ramasesh, K~Birgitta Whaley, and Irfan Siddiqi. \newblock Quantum dynamics of simultaneously measured non-commuting observables. \newblock {\em Nature}, 538(7626):491, 2016. \bibitem{foroozani2016correlations} N~Foroozani, M~Naghiloo, D~Tan, K~M{\o}lmer, and KW~Murch. \newblock Correlations of the time dependent signal and the state of a continuously monitored quantum system. \newblock {\em Physical review letters}, 116(11):110401, 2016. \bibitem{naghiloo2016mapping} Mahdi Naghiloo, N~Foroozani, Dian Tan, A~Jadbabaie, and KW~Murch. \newblock Mapping quantum state dynamics in spontaneous emission. \newblock {\em Nature communications}, 7:11527, 2016. \bibitem{weber2014mapping} SJ~Weber, Areeya Chantasri, Justin Dressel, Andrew~N Jordan, KW~Murch, and Irfan Siddiqi. 
\newblock Mapping the optimal route between two quantum states. \newblock {\em Nature}, 511(7511):570, 2014. \bibitem{naghiloo2017quantum} M~Naghiloo, D~Tan, PM~Harrington, P~Lewalle, AN~Jordan, and KW~Murch. \newblock Quantum caustics in resonance-fluorescence trajectories. \newblock {\em Physical Review A}, 96(5):053807, 2017. \bibitem{cujia2018watching} KS~Cujia, JM~Boss, J~Zopes, and CL~Degen. \newblock Watching the precession of a single nuclear spin by weak measurements. \newblock {\em arXiv preprint arXiv:1806.08243}, 2018. \bibitem{naghiloo2017achieving} M~Naghiloo, AN~Jordan, and KW~Murch. \newblock Achieving optimal quantum acceleration of frequency estimation using adaptive coherent control. \newblock {\em Physical review letters}, 119(18):180801, 2017. \bibitem{kiilerich2016bayesian} Alexander~Holm Kiilerich and Klaus M{\o}lmer. \newblock Bayesian parameter estimation by continuous homodyne detection. \newblock {\em Physical Review A}, 94(3):032103, 2016. \bibitem{naghiloo2017thermodynamics} M~Naghiloo, D~Tan, PM~Harrington, JJ~Alonso, E~Lutz, A~Romito, and KW~Murch. \newblock Thermodynamics along individual trajectories of a quantum bit. \newblock {\em arXiv preprint arXiv:1703.05885}, 2017. \bibitem{brandao2015second} Fernando Brandao, Michal Horodecki, Nelly Ng, Jonathan Oppenheim, and Stephanie Wehner. \newblock The second laws of quantum thermodynamics. \newblock {\em Proceedings of the National Academy of Sciences}, 112(11):3275--3279, 2015. \bibitem{toyabe2010experimental} Shoichi Toyabe, Takahiro Sagawa, Masahito Ueda, Eiro Muneyuki, and Masaki Sano. \newblock Experimental demonstration of information-to-energy conversion and validation of the generalized jarzynski equality. \newblock {\em Nature physics}, 6(12):988, 2010. \bibitem{gogolin2016equilibration} Christian Gogolin and Jens Eisert. \newblock Equilibration, thermalisation, and the emergence of statistical mechanics in closed quantum systems. 
\newblock {\em Reports on Progress in Physics}, 79(5):056001, 2016. \bibitem{parrondo2015thermodynamics} Juan~MR Parrondo, Jordan~M Horowitz, and Takahiro Sagawa. \newblock Thermodynamics of information. \newblock {\em Nature physics}, 11(2):131, 2015. \bibitem{naghiloo2018information} M~Naghiloo, JJ~Alonso, A~Romito, E~Lutz, and KW~Murch. \newblock Information gain and loss for a quantum maxwell's demon. \newblock {\em arXiv preprint arXiv:1802.07205}, 2018. \bibitem{kurizki2015quantum} Gershon Kurizki, Patrice Bertet, Yuimaru Kubo, Klaus M{\o}lmer, David Petrosyan, Peter Rabl, and J{\"o}rg Schmiedmayer. \newblock Quantum technologies with hybrid systems. \newblock {\em Proceedings of the National Academy of Sciences}, page 201419326, 2015. \bibitem{murch2012cavity} KW~Murch, U~Vool, D~Zhou, SJ~Weber, SM~Girvin, and I~Siddiqi. \newblock Cavity-assisted quantum bath engineering. \newblock {\em Physical review letters}, 109(18):183602, 2012. \bibitem{harrington2018bath} PM~Harrington, Mahdi Naghiloo, D~Tan, and KW~Murch. \newblock Bath engineering of a fluorescing artificial atom with a photonic crystal. \newblock {\em arXiv preprint arXiv:1812.04205}, 2018. \bibitem{el2018non} Ramy El-Ganainy, Konstantinos~G Makris, Mercedeh Khajavikhan, Ziad~H Musslimani, Stefan Rotter, and Demetrios~N Christodoulides. \newblock Non-hermitian physics and pt symmetry. \newblock {\em Nature Physics}, 14(1):11, 2018. \bibitem{peng2014parity} Bo~Peng, {\c{S}}ahin~Kaya {\"O}zdemir, Fuchuan Lei, Faraz Monifi, Mariagiovanna Gianfreda, Gui~Lu Long, Shanhui Fan, Franco Nori, Carl~M Bender, and Lan Yang. \newblock Parity--time-symmetric whispering-gallery microcavities. \newblock {\em Nature Physics}, 10(5):394, 2014. \bibitem{chen2017exceptional} Weijian Chen, {\c{S}}ahin~Kaya {\"O}zdemir, Guangming Zhao, Jan Wiersig, and Lan Yang. \newblock Exceptional points enhance sensing in an optical microcavity. \newblock {\em Nature}, 548(7666):192, 2017. \bibitem{bender2016pt} Carl~M Bender. 
\newblock Pt symmetry in quantum physics: From a mathematical curiosity to optical experiments. \newblock {\em Europhysics News}, 47(2):17--20, 2016. \bibitem{bender2017behavior} Carl~M Bender, Nima Hassanpour, Daniel~W Hook, SP~Klevansky, Christoph S{\"u}nderhauf, and Zichao Wen. \newblock Behavior of eigenvalues in a region of broken pt symmetry. \newblock {\em Physical Review A}, 95(5):052113, 2017. \bibitem{bender2016comment} Carl~M Bender, Mariagiovanna Gianfreda, Nima Hassanpour, and Hugh~F Jones. \newblock Comment on “on the lagrangian and hamiltonian description of the damped linear harmonic oscillator”[j. math. phys. 48, 032701 (2007)]. \newblock {\em Journal of Mathematical Physics}, 57(8):084101, 2016. \bibitem{bender2018series} Carl~M Bender, C~Ford, Nima Hassanpour, and B~Xia. \newblock Series solutions of pt-symmetric schr{\"o}dinger equations. \newblock {\em Journal of Physics Communications}, 2(2), 2018. \bibitem{bender2016analytic} Carl~M Bender, Alexander Felski, Nima Hassanpour, SP~Klevansky, and Alireza Beygi. \newblock Analytic structure of eigenvalues of coupled quantum systems. \newblock {\em Physica Scripta}, 92(1):015201, 2016. \bibitem{bender2018p} Carl~M Bender, Nima Hassanpour, SP~Klevansky, and Sarben Sarkar. \newblock P t-symmetric quantum field theory in d dimensions. \newblock {\em Physical Review D}, 98(12):125003, 2018. \bibitem{bender2018pt} Carl~M Bender. \newblock {\em PT Symmetry: In Quantum and Classical Physics}. \newblock World Scientific Publishing, 2018. \bibitem{naghiloo2019quantum} M~Naghiloo, M~Abbasi, Yogesh~N Joglekar, and KW~Murch. \newblock Quantum state tomography across the exceptional point in a single dissipative qubit. \newblock {\em arXiv preprint arXiv:1901.07968}, 2019. \bibitem{malek17cutoff} Moein Malekakhlagh, Alexandru Petrescu, and Hakan~E T{\"u}reci. \newblock Cutoff-free circuit quantum electrodynamics. \newblock {\em Physical review letters}, 119(7):073601, 2017. 
\end{thebibliography}
\addcontentsline{toc}{chapter}{Bibliography}
\end{main}
\end{document}
\begin{document} \title[On Zeros and Growth of Solutions]{On Zeros and Growth of Solutions of Second Order Linear Differential Equation} \author[ S. Kumar and M. Saini]{Sanjay Kumar and Manisha Saini} \address{Sanjay Kumar \\ Department of Mathematics \\ Deen Dayal Upadhyaya College \\ University of Delhi \\ New Delhi--110 078, India } \email{[email protected]} \address{Manisha Saini\\ Department of Mathematics\\ University of Delhi\\ New Delhi--110 007, India} \email{[email protected] } \thanks {The research work of the second author is supported by a research fellowship from the University Grants Commission (UGC), New Delhi.} \keywords{entire function, meromorphic function, order of growth, exponent of convergence, complex differential equation} \subjclass[2010]{Primary 34M10, 30D35} \begin{abstract} For the second order linear differential equation $f''+A(z)f'+B(z)f=0$, with $A(z)$ and $B(z)$ transcendental entire functions satisfying certain conditions, we establish that all non-trivial solutions are of infinite order. In addition, we prove that these solutions have infinitely many zeros. We also extend these results to higher order linear differential equations. \end{abstract} \maketitle \section{Introduction} Consider a second order linear differential equation of the form \begin{equation}\label{sde} f''+A(z)f'+B(z)f=0, \quad B(z) \not \equiv 0, \end{equation} where $A(z)$ and $B(z)$ are entire functions. Throughout, we use standard notions from the value distribution theory of meromorphic functions, also known as Nevanlinna theory \cite{yang}.
For an entire function $f$, the order of $f$ and the exponent of convergence of the zeros of $f$ are defined, respectively, by $$ \rho(f) = \limsup_{r \rightarrow \infty} \frac{\log^+ \log^+ M(r, f)}{\log r} , \quad \lambda(f) =\limsup_{r\rightarrow \infty} \frac{\log^+ N(r,\frac{1}{f})}{\log r}, $$ where $ M(r,f)= \max\{\ |f(z)|:|z| =r \}\ $ is the maximum modulus of $f(z)$ on the circle $|z| =r$ and $N(r,\frac{1}{f})$ is the integrated counting function of the zeros of $f(z)$ in the disk $|z| \leq r$. It is well known that all solutions of equation (\ref{sde}) are entire functions. Using Wiman-Valiron theory, one can show that all solutions of equation (\ref{sde}) are of finite order if and only if both $A(z)$ and $B(z)$ are polynomials \cite{lainebook}. Therefore, if either $A(z)$ or $B(z)$ is a transcendental entire function, then almost all solutions of equation (\ref{sde}) are of infinite order. It is thus natural to find conditions on the coefficients of equation (\ref{sde}) under which all non-trivial solutions are of infinite order. Our aim in this paper is also to find such $A(z)$ and $B(z)$. It was Gundersen \cite{finitegg} who gave a necessary condition for equation (\ref{sde}) to have a solution of finite order. \begin{thm} A necessary condition for equation (\ref{sde}) to have a non-trivial solution $f$ of finite order is \begin{equation}\label{necc} \rho(B)\leq \rho(A). \end{equation} \end{thm} We illustrate this condition with the following examples. \begin{example}\label{eg1} $f(z)=e^{-z}$ satisfies $f''+e^{-z}f'+(e^{-z}-1)f=0,$ where $\rho(A)=\rho(B)=1$. \end{example} \begin{example}\label{eg2} With $A(z)=e^z+2$ and $B(z)=1$, equation (\ref{sde}) has the finite order solution $f(z)=e^{-z}+1$, where $\rho(B)<\rho(A).$ \end{example} Thus if $\rho(A)<\rho(B)$, then all non-trivial solutions of equation (\ref{sde}) are of infinite order.
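As a quick numerical sanity check (illustrative only, not part of the argument), the identity in Example \ref{eg2} can be verified at a few sample points:

```python
import cmath

# Check that f(z) = e^{-z} + 1 satisfies f'' + (e^z + 2) f' + f = 0.
f   = lambda z: cmath.exp(-z) + 1
fp  = lambda z: -cmath.exp(-z)   # f'
fpp = lambda z: cmath.exp(-z)    # f''
A   = lambda z: cmath.exp(z) + 2
B   = lambda z: 1

points = [0.3, -1.2, 0.5 + 0.7j, -0.4 - 1.1j]
residuals = [abs(fpp(z) + A(z) * fp(z) + B(z) * f(z)) for z in points]
print(max(residuals))  # vanishes up to rounding error
```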
However, this necessary condition is not sufficient, as the following example shows. \begin{example}\cite{heitt} If $A(z)=P(z)e^{z}+Q(z)e^{-z}+R(z)$, where $ P, Q$ and $ R$ are polynomials and $B(z)$ is an entire function with $\rho(B)<1$, then $\rho(f)$ is infinite for all non-trivial solutions $f$ of equation (\ref{sde}). \end{example} In the same paper \cite{finitegg}, Gundersen proved the following result: \begin{thm}\label{thm1} Let $f$ be a non-trivial solution of equation (\ref{sde}), where either \begin{enumerate}[(i)] \item $\rho(B)< \rho(A)<\frac{1}{2}$\\ or \item $A(z)$ is a transcendental entire function with $\rho(A)=0$ and $B(z)$ is a polynomial. \end{enumerate} Then $\rho(f)$ is infinite. \end{thm} Hellerstein, Miles and Rossi \cite{heller} proved Theorem [\ref{thm1}] for $\rho(B)<\rho(A)=\frac{1}{2}.$ In \cite{frei}, Frei showed that the second order differential equation \begin{equation}\label{Sde} f''+e^{-z}f'+B(z)f= 0 \end{equation} possesses a solution of finite order if and only if $B(z)=-n^2$ for some $n \in \mathbb{N}$. Ozawa \cite{ozawa} proved that equation (\ref{Sde}) possesses no solution of finite order when $B(z)=az+b, \quad a \neq0$. Amemiya and Ozawa \cite{ame} and Gundersen \cite{ggpol} studied equation (\ref{Sde}) for $B(z)$ being particular polynomials. Later, Langley \cite{lang} showed that the differential equation \begin{equation} f''+Ce^{-z}f'+B(z)f=0 \end{equation} has all non-trivial solutions of infinite order for any nonzero constant $C$ and any nonconstant polynomial $B(z)$. \\ J. R. Long introduced the notions of deficient value and Borel direction into the study of equation (\ref{sde}). For the definitions of deficient value, Borel direction and a function extremal for Yang's inequality one may refer to \cite{yang}. \\ In \cite{extremal}, J. R.
Long proved that if $A(z)$ is an entire function extremal for Yang's inequality and $B(z)$ is a transcendental entire function with $\rho(B)\neq \rho(A)$, then all non-trivial solutions of equation (\ref{sde}) are of infinite order. In \cite{jlongfab}, J. R. Long replaced the condition $\rho(B) \neq \rho(A)$ with the condition that $B(z)$ is an entire function with a \emph{Fabry gap}. \\ X. B. Wu \cite{wu} proved that if $A(z)$ is a non-trivial solution of $w''+Q(z)w=0$, where $Q(z)= b_mz^m+\ldots +b_0, \quad b_m \neq 0$, and $B(z)$ is an entire function with $\mu(B)<\frac{1}{2}+ \frac{1}{2(m+1)}$, then all non-trivial solutions of equation (\ref{sde}) are of infinite order. J. R. Long \cite{jlongfab} replaced the condition $\mu(B)< \frac{1}{2}+\frac{1}{2(m+1)}$ with the condition that $B(z)$ is an entire function with a \emph{Fabry gap} such that $\rho(B) \neq \rho(A)$. \\ A major source of problems in complex differential equations is Gundersen's paper \cite{problemgg}. J. R. Long \cite{jlong} gave a partial answer to a question asked by Gundersen in \cite{problemgg}. He proved the following: \begin{thm}\label{jrthm} Let $A(z)=v(z)e^{P(z)}$, where $v(z)( \not \equiv 0)$ is an entire function and $P(z)=a_nz^n +\ldots +a_0$ is a polynomial of degree $n$ such that $\rho(v)<n$. Let $B(z)=b_mz^m +\ldots +b_0$ be a non-constant polynomial of degree $m$. Then all non-trivial solutions of equation (\ref{sde}) have infinite order if one of the following conditions holds: \begin{enumerate}[(i)] \item $m+2 <2n$; \item $m+2>2n$ and $m+2 \neq 2kn$ for all integers $k$; \item $m+2=2n$ and $\frac{a_n^2}{b_m}$ is not a negative real number. \end{enumerate} \end{thm} In this paper, we replace the polynomial $B(z)$ in Theorem \ref{jrthm} by a transcendental entire function. We now recall the notions of \emph{critical rays} and \emph{Fabry gaps}: \begin{defn}\label{def1}\cite{jlong} Let $P(z)=a_{n}z^n+a_{n-1}z^{n-1}+\ldots +a_0$, $a_n\neq0$, and $\delta(P,\theta)=\RE(a_ne^{\iota n \theta})$.
A ray $\gamma = re^{\iota \theta}$ is called a \emph{critical ray} of $e^{P(z)}$ if $\delta(P,\theta)=0.$ \end{defn} It is easily seen that there are $2n$ distinct critical rays of $e^{P(z)}$, which divide the complex plane into $2n$ distinct sectors of equal opening $\frac{\pi}{n}.$ Moreover, $\delta(P,\theta)>0$ in $n$ of these sectors and $\delta(P,\theta)<0$ in the remaining $n$ sectors; that is, $\delta(P,\theta)$ is alternately positive and negative in the $2n$ sectors. \begin{defn}\cite{hayman} Let $g(z)=\sum_{n=0}^{\infty}a_{\lambda_n}z^{\lambda_n}$ be an entire function. If the sequence $(\lambda_n)$ satisfies $$ \frac{\lambda_n}{n} \rightarrow \infty $$ as $ n \rightarrow \infty$, then $g(z)$ is said to have a Fabry gap. \end{defn} An entire function with a Fabry gap has either positive or infinite order \cite{hayman}. We now fix some notation: \\ $ E^+ = \{ \theta \in [0,2\pi]: \delta(P,\theta)\geq 0\}$ and $E^- = \{ \theta \in [0,2\pi]: \delta(P,\theta)\leq 0 \}.$ \\ For $0<\alpha<\beta$, set \[\Omega(\alpha,\beta)= \{z\in \mathbb{C}: \alpha<\arg z <\beta \}.\] In this paper, we prove the following theorem: \begin{thm}\label{Main} Let $A(z)=v(z)e^{P(z)}$ be an entire function with $\lambda(A)<\rho(A)=n$, where $P(z)=a_nz^n+ \ldots +a_0$ is a polynomial of degree $n$. Suppose that \begin{enumerate} \item $B(z)$ is a transcendental entire function satisfying $\rho(B)\neq \rho(A)$, or \item $B(z)$ is a transcendental entire function with a Fabry gap. \end{enumerate} Then all non-trivial solutions of equation (\ref{sde}) are of infinite order. Moreover, all non-trivial solutions of equation (\ref{sde}) have infinitely many zeros. \end{thm} In part (2) of Theorem [\ref{Main}], $B(z)$ may be a transcendental entire function whose order equals the order of $A(z)$. J. R. Long has proved Theorem [\ref{Main}] for $A(z)$ an entire function extremal for Yang's inequality in \cite{jlongfab} and \cite{extremal}.
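To make Definition \ref{def1} concrete, here is a small numerical illustration (not part of the paper's argument): for $P(z)=z^2$ one has $\delta(P,\theta)=\cos 2\theta$, the four critical rays are $\theta=\frac{\pi}{4},\frac{3\pi}{4},\frac{5\pi}{4},\frac{7\pi}{4}$, and the sign of $\delta$ alternates between consecutive sectors.

```python
import math

# Illustration for P(z) = z^2, so a_n = 1, n = 2 and delta(P, theta) = cos(2*theta).
n = 2
delta = lambda theta: math.cos(n * theta)

# Critical rays solve cos(n*theta) = 0, i.e. theta = (pi/2 + k*pi)/n, k = 0, ..., 2n-1.
critical = [(math.pi / 2 + k * math.pi) / n for k in range(2 * n)]

# Sign of delta at the midpoint of each of the 2n sectors: it alternates.
midpoints = [(critical[i] + critical[i + 1]) / 2 for i in range(2 * n - 1)]
midpoints.append((critical[-1] + critical[0] + 2 * math.pi) / 2)
signs = [delta(t) > 0 for t in midpoints]
print(signs)  # alternating signs in consecutive sectors
```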
We illustrate our result with some examples. \begin{example} $$f''+Q(z)e^{P(z)}f'+B(z) f=0,$$ where $Q(z)$ and $P(z)$ are polynomials and $B(z)$ is any transcendental entire function with $\rho(B)\neq \deg P$. Then $\rho(f)=\infty$ for all non-trivial solutions $f$. \end{example} \begin{example} $$f''+\sin(z) e^{P(z)}f'+ \cos(z^{\frac{n}{2}}) f=0,$$ where $P(z)$ is a polynomial of degree $m>1$, $m\neq \frac{n}{2}$, and $n\in \mathbb{N}$; then all non-trivial solutions are of infinite order. \end{example} This paper is organised in the following manner: in Section 2 we collect results which will be useful in proving our main result; in Section 3 we prove our main theorem; in Section 4 we extend our result to higher order linear differential equations. \section{Auxiliary Results} In this section, we present some known results which will be useful in proving Theorem [\ref{Main}]. These results involve the logarithmic measure and logarithmic density of sets, so we first recall these concepts. \\ The Lebesgue linear measure of a set $E\subset [0,\infty)$ is defined as $m(E)= \int_{E} dt$. The logarithmic measure of a set $F \subset [1,\infty)$ is given by $m_1(F)= \int_{F}\frac{dt}{t}$. The upper and lower logarithmic densities of a set $F \subset [1,\infty)$ are given, respectively, by $$\overline{\log dens}(F) =\limsup_{r\rightarrow \infty}\frac{m_1(F\cap[1,r])}{\log r}, \qquad \underline{\log dens}(F) =\liminf_{r\rightarrow \infty}\frac{m_1(F\cap[1,r])}{\log r}.$$ When these two quantities coincide, their common value is called the logarithmic density of $F$ and is denoted by $\log dens(F)$. The next lemma is due to Gundersen \cite{log gg} and has been used extensively over the years.
\begin{lemma}\label{gglog} Let $f$ be a transcendental entire function of finite order $\rho$, let $\Gamma= \{\ (k_1,j_1), (k_2,j_2), \ldots, (k_m,j_m) \}\ $ denote a finite set of distinct pairs of integers satisfying $ k_i > j_ i \geq 0$ for $i=1,2, \ldots, m$, and let $\epsilon>0$ be a given constant. Then the following three statements hold: \begin{enumerate}[(i)] \item there exists a set $E_1 \subset[0,2\pi)$ of linear measure zero such that if $\psi_0 \in [0,2\pi)\setminus E_1$, then there is a constant $R_0=R_0(\psi_0)>0$ so that for all $z$ satisfying $\arg z =\psi_0$ and $|z| \geq R_0$ and for all $(k,j)\in \Gamma$, we have \begin{equation} \label{guneq} |f^{(k)}(z)/f^{(j)}(z)| \leq |z|^{(k-j)(\rho-1+\epsilon)}; \end{equation} \item there exists a set $E_2 \subset (1,\infty)$ of finite logarithmic measure such that for all $z$ satisfying $|z| \not \in E_2 \cup [0,1]$ and for all $(k,j) \in \Gamma$, the inequality (\ref{guneq}) holds; \item there exists a set $E_3\subset [0,\infty)$ of finite linear measure such that for all $z$ satisfying $|z|\not \in E_3$ and for all $(k,j) \in \Gamma$, we have \begin{equation} |f^{(k)}(z)/ f^{(j)}(z)| \leq |z|^{(k-j)(\rho+\epsilon)}. \end{equation} \end{enumerate} \end{lemma} The following result gives estimates for the modulus of $A(z)$ in the whole complex plane outside a negligible set. \begin{lemma}\label{implem} \cite{banklang} Let $A(z)=v(z)e^{P(z)}$ be an entire function with $\lambda(A)<\rho(A)=n$, where $P(z)$ is a polynomial of degree $n$.
Then for every $\epsilon>0$ there exists a set $E \subset [0,2\pi)$ of linear measure zero such that \begin{enumerate}[(i)] \item for $ \theta \in E^+ \setminus E $ there exists $ R>1 $ such that \begin{equation} |A(re^{\iota \theta})| \geq \exp \left( (1-\epsilon) \delta(P,\theta)r^n \right) \end{equation} for $r>R$; \item for $\theta \in E^-\setminus E$ there exists $R>1$ such that \begin{equation}\label{eq2le} |A(re^{\iota \theta})| \leq \exp \left( (1-\epsilon)\delta(P,\theta) r^n \right) \end{equation} for $r>R.$ \end{enumerate} \end{lemma} The next lemma is from \cite{besi} and gives a lower estimate for an entire function of order less than one half. \begin{lemma}\label{gglemma} Let $w(z)$ be an entire function of order $\rho$, where $0<\rho<\frac{1}{2}$, and let $\epsilon>0$ be a given constant. Then there exists a set $S \subset [0,\infty)$ that has upper logarithmic density at least $1-2\rho$ such that $|w(z)| >\exp (|z|^{\rho-\epsilon})$ for all $z$ satisfying $|z| \in S.$ \end{lemma} The following lemma is from \cite{lainebook}. \begin{lemma}\label{lainebook} Let $g: (0,\infty) \rightarrow \mathbb{R}$ and $h: (0,\infty)\rightarrow \mathbb{R}$ be monotone increasing functions such that $g(r) < h (r)$ outside of an exceptional set $E$ of finite logarithmic measure. Then, for any $\alpha > 1$, there exists $r_0 > 0$ such that $g(r) < h(\alpha r)$ holds for all $r > r_0$. \end{lemma} The next lemma gives a property of entire functions with Fabry gaps; it can be found in \cite{jlongfab} and \cite{zhe}. \begin{lemma}\label{fablemma} Let $g(z)=\sum_{n=0}^{\infty} a_{\lambda_n}z^{\lambda_n}$ be an entire function of finite order with a Fabry gap, and let $h(z)$ be an entire function with $\rho(h)=\sigma \in (0,\infty)$.
Then for any given $\epsilon\in (0,\sigma)$, there exists a set $H\subset (1,+\infty)$ satisfying $ \overline{\log dens}\,H \geq \xi $, where $\xi\in (0,1)$ is a constant, such that for all $|z| =r \in H$ one has $$ \log M(r,h) > r^{\sigma-\epsilon}, \quad \log m(r,g) > (1-\xi)\log M(r,g),$$ where $M(r,h)=\max \{\ |h(z)|: |z|=r\}\ $, $m(r,g)=\min \{\ |g(z)|: |z|=r\}\ $ and $M(r,g)= \max \{\ |g(z)|: |z|=r\}\ $. \end{lemma} The following remark is a consequence of the above lemma. \begin{remark} \label{fabremark} Suppose that $g(z)=\sum_{n=0}^{\infty} a_{\lambda_n}z^{\lambda_n}$ is an entire function of order $\sigma \in (0,\infty)$ with a Fabry gap. Then for any given $\epsilon >0$ with $0<2\epsilon <\sigma$, there exists a set $H\subset (1,+\infty)$ satisfying $\overline{\log dens}\,H \geq \xi$, where $\xi \in (0,1)$ is a constant, such that for all $|z| =r \in H$ one has $$ |g(z)|> M(r,g)^{(1-\xi)}> \exp{\left((1-\xi) r^{\sigma-\epsilon}\right)}>\exp{\left(r^{\sigma-2\epsilon}\right)}.$$ \end{remark} The next lemma can be found in \cite{lainebook} and can be proved by induction. \begin{lemma}\label{ind} Let $h(z)$ and $Q(z)$ be entire functions and define $f=he^{Q}$. Then $f^{(p)}$ may be represented, for each $p\in \mathbb{N}$, in the form \begin{equation}\notag f^{(p)}= \left( h^{(p)}+pQ'h^{(p-1)}+ \sum_{j=2}^{p} \left( {p \choose j} (Q')^j+H_{j-1}(Q') \right) h^{(p-j)} \right)e^Q \end{equation} where $H_{j-1}(Q')$ stands for a differential polynomial of total degree $\leq j-1$ in $Q'$ and its derivatives, with constant coefficients. \end{lemma} We are now able to prove our main result. \section{Proof of Theorem \ref{Main}} This section contains the proof of Theorem [\ref{Main}]. \begin{proof} If $\rho(A)< \rho(B)$, then by the necessary condition (\ref{necc}) all non-trivial solutions $f$ of equation (\ref{sde}) are of infinite order. Thus we may assume that $\rho(B)\leq\rho(A)< \infty$.
Let us suppose that there exists a non-trivial solution $f$ of equation (\ref{sde}) such that $\rho(f)<\infty$. Then by Lemma [\ref{gglog}] (i), there exists a set $E_1 \subset[0,2\pi)$ of linear measure zero such that if $\psi_0 \in [0,2\pi) \setminus E_1$, then there is a constant $R_0=R_0(\psi_0)>0$ so that for all $z$ satisfying $\arg z =\psi_0$ and $|z| \geq R_0$, we have \begin{equation} \label{guneq1} |f^{(k)}(z)/f(z)|\leq |z| ^{2\rho(f)}, \quad k=1, 2. \end{equation} \begin{enumerate} \item Let $B(z)$ be a transcendental entire function with $\rho(B)\neq \rho(A)$. In this case it remains to consider $\rho(B)<\rho(A)$. We distinguish the following cases according to $\rho(B)$. \begin{enumerate}[(a)] \item Suppose that $0<\rho(B)< \frac{1}{2}$. Then by Lemma [\ref{gglemma}], there exists a set $S \subset [0,\infty)$ that has upper logarithmic density at least $1-2\rho(B)$ such that \begin{equation}\label{eqB} |B(z)|>\exp(|z| ^{\rho(B)-\epsilon}) \end{equation} for all $z$ satisfying $|z|\in S.$ From equations (\ref{sde}), (\ref{eq2le}), (\ref{guneq1}) and (\ref{eqB}), for all $z$ satisfying $\arg z=\psi_0 \in E^- \setminus (E\cup E_1)$ and $|z|=r \in S$ with $r >R_0(\psi_0)$, we have \begin{align*} \exp{(r ^{\rho(B)-\epsilon})} &< |B(z)| \\ &\leq |f''(z)/f(z)|+ |A(z)| |f'(z)/f(z)| \\ &\leq r^{2\rho(f)}(1+o(1)), \end{align*} which is a contradiction for arbitrarily large $r$. \item When $\frac{1}{2}\leq \rho(B)< \infty $, then using the Phragm\'en-Lindel\"of principle, there exists a sector $\Omega(\alpha, \beta)$, $0\leq \alpha<\beta \leq 2\pi$, with $\beta-\alpha \geq \frac{\pi}{\rho(B)}$ such that \begin{equation}\label{Border} \limsup_{r\rightarrow \infty} \frac{\log^+ \log^+ |B(re^{\iota \theta})|}{\log r} =\rho(B) \end{equation} for all $\theta \in \Omega(\alpha, \beta)$. Since $\rho(B) < \rho(A)$, there exists $\theta_0 \in \Omega(\alpha, \beta) \cap \left(E^- \setminus E \right)$.
Thus from equations (\ref{eq2le}) and (\ref{Border}), for $\arg z =\theta_0$ we have \begin{equation}\label{eqAle} |A(re^{\iota \theta_0})| \leq \exp{\left( (1-\epsilon) \delta(P,\theta_0) r^n\right)} \end{equation} and \begin{equation}\label{eqB1} \exp{\left(r^{\rho(B)-\epsilon}\right)} \leq |B(re^{\iota \theta_0})| \end{equation} for sufficiently large $r$. Now from equations (\ref{sde}), (\ref{guneq1}), (\ref{eqAle}) and (\ref{eqB1}), for all $z=re^{\iota \theta_0}$ satisfying $\theta_0\in \Omega(\alpha, \beta)\cap \left( E^- \setminus(E\cup E_1) \right)$ and $|z|=r > R_0(\theta_0)$, we have \begin{align*} \exp{(r ^{\rho(B)-\epsilon})} &< |B(z)| \\ &\leq |f''(z)/f(z)| + |A(z)| | f'(z)/f(z)|\\ &\leq r^{2\rho(f)}(1+o(1)), \end{align*} which is a contradiction for arbitrarily large $r$. \item Now suppose that $B(z)$ is a transcendental entire function with $\rho(B)=0$. Then, using a result from \cite{pd}, for all $\theta \in [0,2\pi)$ one has \begin{equation}\label{eqB2} \limsup_{r\rightarrow \infty} \frac{\log |B(re^{\iota \theta})|}{\log r} =\infty. \end{equation} This implies that for any large $G>0$ there exists $R(G)>0$ such that \begin{equation}\label{eqB3} r^G \leq |B(re^{\iota \theta})| \end{equation} for all $\theta \in [0,2\pi)$ and for all $r>R(G)$. From equations (\ref{sde}), (\ref{eq2le}), (\ref{guneq1}) and (\ref{eqB3}), for all $z=re^{\iota \theta}$ satisfying $\arg z=\theta \in E^-\setminus \left( E \cup E_1 \right )$ and $|z| =r >R(G)$, we have \begin{align*} r ^G &< |B(z)| \\ &\leq |f''(z)/f(z)|+ |A(z)| |f'(z)/f(z)| \\ &\leq r^{2\rho(f)}(1+o(1)), \end{align*} which is a contradiction for arbitrarily large $r$. \end{enumerate} Thus all non-trivial solutions of equation (\ref{sde}) are of infinite order in this case. \item Let $B(z)$ be a transcendental entire function with a Fabry gap.
Then from Lemma [\ref{fablemma}], for any given $\epsilon >0$ with $0<2\epsilon <\rho(B)$, there exists a set $H\subset (1,+\infty)$ satisfying $\overline{\log dens}\,H \geq \xi$, where $\xi \in (0,1)$ is a constant, such that for all $|z| =r \in H$ one has \begin{equation}\label{eq2B} |B(z)| >\exp{\left(r^{\rho(B)-2\epsilon}\right)}. \end{equation} From equations (\ref{sde}), (\ref{eq2le}), (\ref{guneq1}) and (\ref{eq2B}), for all $z$ satisfying $\arg z = \psi_0 \in E^- \setminus (E\cup E_1)$ and $|z|=r \in H$ with $r >R_0(\psi_0)$, we have \begin{align*} \exp{\left( r^{\rho(B)-2\epsilon}\right)}&< |B(z)| \leq |f''(z)/f(z)| + |A(z)| |f'(z)/f(z)| \\ &\leq r^{2\rho(f)}(1+o(1)), \end{align*} which is a contradiction for arbitrarily large $r$. \end{enumerate} We thus conclude that all non-trivial solutions of equation (\ref{sde}) are of infinite order. Now suppose that $f(z)=h(z)e^{Q(z)}$, where $h(z)$ and $Q(z)$ are entire functions, is a non-trivial solution of equation (\ref{sde}); hence $\rho(f)=\infty$. First suppose that $\lambda(f)=\rho(h)< \rho(f)$. Substituting $f=he^{Q}$ into equation (\ref{sde}) gives \begin{equation} h''+ \left( A(z)+2 Q'(z) \right) h'+ \left( B(z) +Q''(z) +(Q'(z))^2 +A(z)Q'(z) \right) h=0, \end{equation} which implies that $\rho(h)\geq \max \{\ \rho(A), \rho(B), \rho(Q) \}\ >0$. As a consequence, $f$ has infinitely many zeros. If $\lambda(f)=\rho(f)=\infty$, then it is clear that $f$ has infinitely many zeros. \end{proof} \section{Further Results} In this section we extend our result to higher order linear differential equations. Consider the higher order linear differential equation \begin{equation}\label{sde1} f^{(m)}+A_{(m-1)}(z)f^{(m-1)}+\ldots +A_1(z)f'+A_0(z)f=0, \end{equation} where $m\geq 2$ and $A_0, A_1, \ldots, A_{(m-1)}$ are entire functions. It is well known that all solutions of equation (\ref{sde1}) are entire functions.
Moreover, all solutions of equation (\ref{sde1}) are of finite order if and only if all the coefficients $A_0, A_1, \ldots, A_{(m-1)}$ are polynomials \cite{lainebook}. Therefore, if any of the coefficients is a transcendental entire function, then almost all solutions of equation (\ref{sde1}) are of infinite order. Various conditions on the coefficients of equation (\ref{sde1}) have been found under which all solutions are of infinite order \cite{lainebook}. Here we give one such condition. \begin{thm} Suppose there exists an integer $j \in \{\ 1,2, \ldots , m-1 \}\ $ such that $ \lambda(A_j)<\rho(A_j)$. Suppose that $A_0$ is a transcendental entire function satisfying $ \rho(A_i) <\rho(A_0)$ for $i=1,2,\ldots, m-1$, $i\neq j$, such that either \begin{enumerate} \item $\rho(A_0)\neq \rho(A_j)$, or \item $A_0(z)$ is a transcendental entire function with a Fabry gap. \end{enumerate} Then every non-trivial solution of equation (\ref{sde1}) is of infinite order. In addition, all non-trivial solutions of equation (\ref{sde1}) have infinitely many zeros. \end{thm} \begin{proof} First suppose that $\rho(A_j)<\rho(A_0)$, and suppose that there exists a solution $f \not \equiv 0$ of equation (\ref{sde1}) such that $\rho(f) < \infty$. Then by Lemma [\ref{gglog}] (ii), there exists a set $ E_2 \subset (1,\infty)$ of finite logarithmic measure such that for all $z$ satisfying $|z|\not \in E_2\cup[0,1]$, \begin{equation} \label{guneqq} |f^{(k)}(z)/f(z)| \leq |z|^{m \rho(f)}, \quad k=1,2,\ldots, m. \end{equation} Using equations (\ref{sde1}) and (\ref{guneqq}), we have \begin{align*} |A_0(z)|&\leq \big|\frac{f^{(m)}(z)}{f(z)}\big| +|A_{(m-1)}(z)|\big|\frac{f^{(m-1)}(z)}{f(z)}\big|+\ldots +|A_1(z)|\big|\frac{f'(z)}{f(z)}\big| \\ &\leq |z|^{m\rho(f)} \{\ 1+|A_{(m-1)}(z)|+\dots +|A_1(z)| \}\ \end{align*} for all $z$ satisfying $|z| \not \in E_2\cup[0,1]$.
From here we get \begin{equation} T(r,A_0)\leq m\rho(f) \log r+(m-1) T(r,A_i) +O(1), \end{equation} where $T(r,A_i)=\max \{\ T(r,A_k): k=1,2,\ldots, m-1\}\ $ and $|z|=r \not \in E_2\cup[0,1]$. Using Lemma [\ref{lainebook}], this implies that $\rho(A_0) \leq \rho(A_i)$, which is a contradiction. Thus all non-trivial solutions of equation (\ref{sde1}) are of infinite order in this case. Now consider $\rho(A_0) \leq \rho(A_j)$, and suppose that there exists a non-trivial solution $f$ of finite order. Then by Lemma [\ref{gglog}] (i), there exists a set $E_1 \subset [0,2 \pi)$ of linear measure zero such that if $\psi_0 \in [0,2\pi)\setminus E_1$, then there is a constant $R_0=R_0(\psi_0)>0$ so that for all $z$ satisfying $\arg z =\psi_0$ and $|z| \geq R_0$, we have \begin{equation} \label{guneqqq} |f^{(k)}(z)/f(z)| \leq |z|^{m\rho(f)}, \qquad k=1,2, \ldots, m. \end{equation} Since $\rho(A_i)<\rho(A_0)$ for all $i=1,2, \ldots, m-1$, $i\neq j$, for any constant $\eta$ with $ \max \{\ \rho(A_i): i=1,2, \ldots, m-1, i\neq j \}\ <\eta< \rho(A_0)$ there exists $R_0>0$ such that \begin{equation}\label{Aieq} |A_i(z)|\leq \exp{|z|^\eta} \end{equation} for $i=1,2,\ldots, m-1$, $i\neq j$, and $|z|=r >R_0$. \\ Also, since $\lambda(A_j)<\rho(A_j)=n$, we may write $A_j(z)=v(z)e^{P(z)}$, where $v(z)$ is an entire function and $P(z)$ is a polynomial of degree $n$. \begin{enumerate} \item Let $A_0(z)$ be a transcendental entire function with $\rho(A_0)\neq \rho(A_j)$. In this case it remains to consider $\rho(A_0)<\rho(A_j)$.
We will discuss the following three cases:
\begin{enumerate}[(a)]
\item\label{casea} Suppose $0<\rho(A_0)<\frac{1}{2}$. Then by Lemma [\ref{gglemma}], for $0<\epsilon< \rho(A_0)-\eta$ there exists a set $S \subset [0,\infty)$ of upper logarithmic density at least $1-2\rho(A_0)$ such that
\begin{equation}\label{A0eq}
|A_0(z)| >\exp (|z|^{\rho(A_0)-\epsilon})
\end{equation}
for all $z$ satisfying $|z| \in S$. Now using equations (\ref{eq2le}), (\ref{sde1}), (\ref{guneqqq}), (\ref{Aieq}) and (\ref{A0eq}) we have
\begin{align*}
\exp{ (|z|^{\rho(A_0)-\epsilon})} &<|A_0(z)| \\
&\leq \big|\frac{f^{(m)}(z)}{f(z)}\big| +|A_{m-1}(z)|\big|\frac{f^{(m-1)}(z)}{f(z)}\big| \\
&\quad+\ldots +|A_1(z)|\big|\frac{f'(z)}{f(z)}\big| \\
&\leq r^{m\rho(f)} \left\{ 1+ \exp{r^\eta}+ \ldots + \exp \left( (1-\epsilon) \delta(P,\psi_0)r^n \right)+ \ldots + \exp{r^\eta} \right\} \\
&= r^{m\rho(f)}\left\{ 1+ (m-2) \exp{r^\eta} + o(1)\right\}
\end{align*}
for all $z$ satisfying $|z|=r \in S$ and $\arg z =\psi_0 \in E^- \setminus (E\cup E_1)$. From here we get a contradiction for sufficiently large $r$.
\item Now suppose that $\rho(A_0) \geq \frac{1}{2}$. Then, by the Phragm\'{e}n--Lindel\"{o}f principle, there exists a sector $\Omega(\alpha, \beta)$, $0\leq \alpha<\beta \leq 2\pi$, with $\beta-\alpha \geq \frac{\pi}{\rho(A_0)}$ such that
\begin{equation}\label{A0order}
\limsup_{r\rightarrow \infty} \frac{\log^+ \log^+ |A_0(re^{\iota \theta})|}{\log r} =\rho(A_0)
\end{equation}
for all $\theta \in \Omega(\alpha, \beta)$.
Since $\rho(A_0) < \rho(A_j)$, there exists $\theta_0 \in \Omega(\alpha, \beta) \cap \left(E^- \setminus E \right)$ such that
\begin{equation}\label{Ajeq}
|A_j(re^{\iota \theta_0})| \leq \exp{\left( (1-\epsilon)\delta(P,\theta_0)r^n \right)},
\end{equation}
and from equation (\ref{A0order}) we have
\begin{equation}\label{Aoeq}
|A_0(re^{\iota \theta_0})| \geq \exp{r^{\rho(A_0)-\epsilon}}.
\end{equation}
Thus, using equations (\ref{sde1}), (\ref{guneqqq}), (\ref{Aieq}), (\ref{Ajeq}) and (\ref{Aoeq}), we get a contradiction for sufficiently large $r$ by an argument similar to that of case (\ref{casea}).
\item Suppose that $A_0$ is a transcendental entire function with $\rho(A_0)=0$. Then, by a result from \cite{pd}, for all $\theta \in [0,2\pi)$ one has
\begin{equation}\label{eqA02}
\limsup_{r\rightarrow \infty} \frac{\log |A_0(re^{\iota \theta})|}{\log r} =\infty,
\end{equation}
which implies that for any large $G>0$ there exists $R(G)>0$ such that
\begin{equation}\label{eqA03}
r^G \leq |A_0(re^{\iota \theta})|
\end{equation}
for all $\theta \in [0,2\pi)$ and all $r>R(G)$. From equations (\ref{eq2le}), (\ref{sde1}), (\ref{guneqqq}), (\ref{Aieq}) and (\ref{eqA03}) we get a contradiction for sufficiently large $r$ by an argument similar to that of case (\ref{casea}).\\ Thus we conclude that all non-trivial solutions of the equation (\ref{sde1}) are of infinite order in this case.
\end{enumerate}
\item Suppose that $A_0(z)$ is a transcendental entire function with Fabry gaps. Then, using Lemma [\ref{fablemma}], for any given $\epsilon >0$ with $0<2\epsilon <\rho(A_0)-\eta$, there exists a set $H\subset (1,+\infty)$ satisfying $\overline{\log dens}\, H \geq \xi$, where $\xi \in (0,1)$ is a constant, such that for all $|z|=r \in H$ one has
\begin{equation}\label{eq2A0}
|A_0(z)| >\exp{\left(r^{\rho(A_0)-2\epsilon}\right)}.
\end{equation}
From equations (\ref{eq2le}), (\ref{sde1}), (\ref{guneqqq}), (\ref{Aieq}) and (\ref{eq2A0}), for all $z$ satisfying $\arg z = \psi_0 \in E^- \setminus (E\cup E_1)$ and $|z| =r \in H$, $r>R_0(\psi_0)$, we have
\begin{align*}
\exp{\left( r^{\rho(A_0)-2\epsilon}\right)}&< |A_0(z)| \\
&\leq \big|\frac{f^{(m)}(z)}{f(z)}\big| +|A_{m-1}(z)|\big|\frac{f^{(m-1)}(z)}{f(z)}\big| \\
&\quad+\ldots +|A_1(z)|\big|\frac{f'(z)}{f(z)}\big| \\
&\leq r^{m\rho(f)} \left\{ 1+ \exp{r^\eta} + \ldots + \exp \left( (1-\epsilon) \delta(P,\psi_0)r^n \right) + \ldots + \exp{r^\eta} \right\} \\
&= r^{m\rho(f)} \left\{ 1+ (m-2) \exp{r^\eta} + o(1) \right\},
\end{align*}
which is a contradiction for arbitrarily large $r$.
\end{enumerate}
Thus all solutions $f\not \equiv 0$ of the equation (\ref{sde1}) are of infinite order.

Now let $f(z)=h(z)e^{Q(z)}$, where $h(z)$ and $Q(z)$ are entire functions, be a non-trivial solution of the equation (\ref{sde1}); then $\rho(f)=\infty$. \\ Let us suppose that $\rho(h)=\lambda(f)<\rho(f)$. Then from equation (\ref{sde1}) and Lemma [\ref{ind}] we get
\begin{equation}\label{indu}
h^{(m)}+B_{m-1}(z)h^{(m-1)}+\ldots+ B_0(z)h=0,
\end{equation}
where
$$ B_{m-1}=A_{m-1}+mQ'$$
and
\begin{align*}
B_{m-j}&=A_{m-j}+(m-j+1)A_{m-j+1}Q' \\
&\quad+ \sum_{i=2}^{j} \left( {m-j+1 \choose i}(Q')^i+H_{i-1}(Q') \right) A_{m-j+i}
\end{align*}
for $j=2,3, \ldots, m$ and $A_m \equiv 1$. Equation (\ref{indu}) implies that $\rho(h)\geq \max \{ \rho(Q), \rho(A_0), \rho(A_1), \ldots, \rho(A_{m-1}) \} >0$.
Thus $f(z)$ has infinitely many zeros.\\ If $\lambda(f)=\rho(f)=\infty$, then $f(z)$ also has infinitely many zeros.
\end{proof}
\end{document}
\begin{document}
\title{Identities for the number of standard Young tableaux in some $(k,\ell)$ hooks}
\centerline{A. Regev}
{\bf Abstract}: Closed formulas are known for $S(k,0;n)$, the number of standard Young tableaux of size $n$ and with at most $k$ parts, where $1\le k\le 5$. Here we study the analogous problem for $S(k,\ell;n)$, the number of standard Young tableaux of size $n$ which are contained in the $(k,\ell)$ hook. We deduce some formulas for the cases $k+\ell\le 4$.

2010 Mathematics Subject Classification 05C30
\section{Introduction}
Given a partition $\lambda$ of $n$, $\lambda\vdash n$, let $\chi^\lambda$ denote the corresponding irreducible $S_n$ character. Its degree is denoted by $\deg \chi^\lambda=f^\lambda$ and is equal to the number of standard Young tableaux (SYT) of shape $\lambda$~\cite{kerber},~\cite{macdonald},~\cite{sagan},~\cite{stanley}. The number $f^\lambda$ can be calculated, for example, by the hook formula~\cite[Theorem 2.3.21]{kerber},~\cite[Section 3.10]{sagan},~\cite[Corollary 7.21.6]{stanley}. We consider the number of SYT in the $(k,\ell)$ hook. More precisely, given integers $k,\ell,n\ge 0$ we denote
\[
H(k,\ell;n)=\{\lambda=(\lambda_1,\lambda_2,\ldots)\mid \lambda\vdash n~\mbox{and}~ \lambda_{k+1}\le \ell\}\qquad\mbox{and}\qquad S(k,\ell;n)=\sum_{\lambda\in H(k,\ell;n)}f^\lambda.
\]
\subsection{The cases where $S(k,\ell;n)$ is known}\label{s1}
For the ``strip'' sums $S(k,0;n)$ it is known~\cite{regev1},~\cite{stanley} that
\[
S(2,0;n)={n\choose\lfloor\frac{n}{2}\rfloor}\quad\mbox{and}\quad S(3,0;n)=\sum_{j\ge 0}\frac{1}{j+1}{n\choose 2j}{2j\choose j}.
\]
Let $C_j=\frac{1}{j+1}{2j\choose j}$ be the Catalan numbers; then Gouyon-Beauchamps~\cite{gouyon},~\cite{stanley} proved that
\[
S(4,0;n)=C_{\lfloor\frac{n+1}{2}\rfloor}\cdot C_{\lceil\frac{n+1}{2}\rceil}\quad\mbox{and}\quad S(5,0;n)=6\sum_{j=0}^{\lfloor\frac{n}{2}\rfloor}{n\choose 2j}\cdot C_j\cdot\frac{(2j+2)!}{(j+2)!(j+3)!}.
\]
As for the ``hook'' sums, until recently only $S(1,1;n)$ and $S(2,1;n)=S(1,2;n)$ had been calculated:

1.~It easily follows that $S(1,1;n)=2^{n-1}$.

2.~The following identity was proved in~\cite[Theorem 8.1]{regev2}:
\begin{eqnarray}\label{motzkin.path.3}
S(2,1;n)=\frac{1}{4}\left(\sum_{r=0}^{n-1}{n-r\choose{\lfloor\frac{n-r}{2}\rfloor}} {n\choose r} +\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor-1}\frac{n!}{k!\cdot (k+1)!\cdot (n-2k-2)!\cdot (n-k-1)\cdot(n-k)}\right)+1.
\end{eqnarray}
\subsection{The main results}
In Section~\ref{s2} we prove Equation~\eqref{rewrite8}, which gives a (sort of) closed formula for $S(3,1;n)$ in terms of the Motzkin-sums function. For the Motzkin-sums function see~\cite[sequence A005043]{sloane}. Equation~\eqref{rewrite8} is in fact a ``degree'' consequence of a formula of $S_n$ characters, of interest on its own; see Equation~\eqref{rewrite3}. In Section~\ref{s3} we find some intriguing relations between the sums $S(4,0;n)$ and the ``rectangular'' sub-sums $S^*(2,2;n)$; see below identities~\eqref{b3} and~\eqref{b4}. Finally, in Section~\ref{s4} we review some cases where the hook-sums $S(k,\ell;n)$ are related, in some rather mysterious ways, to hump calculations on Dyck and on Motzkin paths; see~\eqref{eq1},~\eqref{eq2}, and Theorem~\ref{motzkin.humps.1}. As usual, for some of the above identities it would be of interest to find bijective proofs, which might explain these identities.

{\bf Acknowledgement.} We thank D.
Zeilberger for verifying some of the identities here by the WZ method.
\section{The sums $S(3,1;n)$ and the characters $\chi (3,1;n)$}\label{s2}
Define the $S_n$ character
\begin{eqnarray}\label{rewrite5}
\chi(k,\ell;n)=\sum_{\lambda\in H(k,\ell;n)} \chi^\lambda\qquad\mbox{so}\qquad\deg(\chi(k,\ell;n))=S(k,\ell;n).
\end{eqnarray}
\subsection{The Motzkin-sums function}
Define the $S_n$ character
\begin{eqnarray}\label{rewrite6}
\Psi(n)=\sum_{k=0}^{\lfloor n/2\rfloor}\chi^{(k,k,1^{n-2k})}\qquad\mbox{and denote}\qquad\deg\Psi(n)=a(n).
\end{eqnarray}
We call $\Psi(n)$ the {\it Motzkin-sums} character. Note that
\[
\deg\chi^{(k,k,1^{n-2k})}=f^{(k,k,1^{n-2k})}=\frac{n!}{(k-1)!\cdot k!\cdot (n-2k)!\cdot (n-k)\cdot (n-k+1)},
\]
hence
\begin{eqnarray}\label{a2}
a(n)=\sum_{k=1}^{\lfloor{n}/{2}\rfloor}\frac{n!}{(k-1)!\cdot k!\cdot (n-2k)!\cdot (n-k)\cdot (n-k+1)}.
\end{eqnarray}
By~\cite[sequence A005043]{sloane} it follows that $a(n)$ is the Motzkin-sums function. The reader is referred to~\cite{sloane} for various properties of $a(n)$. For example, $a(n)+a(n+1)=M_n$, where $M_n$ are the Motzkin numbers. Also $a(1)=0,~a(2)=1$, and $a(n)$ satisfies the recurrence
\begin{eqnarray}\label{a1}
a(n)=\frac{n-1}{n+1}\cdot(2\cdot a(n-1)+3\cdot a(n-2))\qquad \mbox{for $n\ge 3$.}
\end{eqnarray}
Note also that for $n\ge 2~$ Equation~\eqref{motzkin.path.3} can be written as
\begin{eqnarray}\label{a3}
S(2,1;n) =\frac{1}{4}\left(\sum_{r=0}^{n-1}{n-r\choose{\lfloor\frac{n-r}{2}\rfloor}} {n\choose r}+a(n)-1\right)+1.
\end{eqnarray}
The asymptotic behavior of $a(n)$ can be deduced from that of $M_n$. We deduce it here, even though it is not needed in the sequel.
\begin{remark}
As $n$ goes to infinity,
\[
a(n)\simeq \frac{\sqrt 3}{8\cdot\sqrt{2\pi}}\cdot\frac{1}{n\sqrt n}\cdot 3^n\qquad\mbox{and}\qquad a(n)\simeq\frac{1}{4} \cdot M_n.
\]
\begin{proof}
By standard techniques it can be shown that $a(n)$ has the asymptotic behavior $$a(n)\simeq c\cdot \left(\frac{1}{n}\right)^g\cdot \alpha^n$$ for some constants $c,g$ and $\alpha$, which we now determine. By~\cite{regev1},
\[
M_n\simeq \frac{\sqrt 3}{2\sqrt{2\pi}} \cdot\left(\frac{1}{n}\right)^{3/2}\cdot 3^n.
\]
With
\[
M_n=a(n)+a(n+1)\simeq c\cdot (1+\alpha)\cdot\left(\frac{1}{n}\right)^g\cdot\alpha^n
\]
this implies that $\alpha=3$, $g=3/2$ and $c=\frac{\sqrt 3}{8\cdot\sqrt{2\pi}}$.
\end{proof}
\end{remark}
\subsection{The outer product of $S_m$ and $S_n$ characters}
Given an $S_m$ character $\chi_m$ and an $S_n$ character $\chi_n$, we can form their {\it outer} product $\chi_m\hat\otimes \chi_n$. The exact decomposition of $\chi_m\hat\otimes \chi_n$ is given by the Littlewood-Richardson rule~\cite{kerber},~\cite{macdonald},~\cite{sagan},~\cite{stanley}. In the special case that $\chi_n=\chi^{(n)}$, this decomposition is given, below, by Young's rule. Also
\begin{eqnarray}\label{rewrite7}
\deg (\chi_m\hat\otimes \chi^{(n)})=\deg (\chi_m)\cdot{n+m\choose n}.
\end{eqnarray}
{\bf Young's Rule}~\cite{macdonald}: Let $\lambda=(\lambda_1,\lambda_2,\ldots)\vdash m$ and denote by $\lambda^{+n}$ the following set of partitions of $m+n$:
\[
\lambda^{+n}=\{\mu\vdash n+m\mid \mu_1\ge \lambda_1\ge \mu_2\ge \lambda_2\ge\cdots\}.
\]
Then
\[
\chi^\lambda\hat\otimes \chi^{(n)}=\sum_{\mu\in\lambda^{+n}}\chi^\mu.
\]
\begin{example}\label{rewrite4}~\cite{regev1},~\cite{stanley}
Given $n$, it follows that
\begin{eqnarray}\label{rewrite1}
\chi^{(\lfloor n/2\rfloor)}\hat\otimes \chi^{(\lceil n/2\rceil)}=\chi(2,0;n),\quad\mbox{and by taking degrees,}\quad S(2,0;n)={n\choose \lfloor n/2 \rfloor}.
\end{eqnarray}
\end{example}
\subsection{A character formula for $\chi(3,1;n)$}
\begin{prop}\label{rewrite2}
With the notations of~\eqref{rewrite5} and~\eqref{rewrite6},
\begin{eqnarray}\label{rewrite3}
\chi(3,1;n)=\frac{1}{2}\cdot\left[\chi(2,0;n)+\sum_{j=0}^n \Psi(j)\hat\otimes \chi^{(n-j)} \right].
\end{eqnarray}
By taking degrees, Example~\ref{rewrite4} together with~\eqref{rewrite6} and~\eqref{rewrite7} imply that
\begin{eqnarray}\label{rewrite8}
S(3,1;n)=\frac{1}{2}\cdot \left[{n\choose\lfloor\frac{n}{2}\rfloor}+\sum_{j=0}^n a(j)\cdot{n\choose j} \right].
\end{eqnarray}
\end{prop}
\begin{proof}
Denote
\[
\Omega(n)=\sum_{j=0}^n \Psi(j)\hat\otimes \chi^{(n-j)}
\]
and analyze this $S_n$ character. Young's rule implies the following: let $\mu\vdash n$; then $\chi^\mu$ has a positive coefficient in $\Omega(n)$ if and only if $\mu\in H(3,1;n)$. Moreover, all these coefficients are either $1$ or $2$, and such a coefficient equals $1$ if and only if $\mu$ is a partition with at most two rows, $\mu=(\mu_1,\mu_2)$. It follows that
\begin{eqnarray}\label{p3}
\chi(2,0;n)+\Omega(n)=2\cdot\sum_{\lambda\in H(3,1;n)}\chi^\lambda.
\end{eqnarray}
This implies~\eqref{rewrite3} and completes the proof of Proposition~\ref{rewrite2}.
\end{proof}
\section{The sums $S(4,0;n)$ and $S^*(2,2;n)$}\label{s3}
\begin{defn}
\begin{enumerate}
\item Let $n=2m$, $m\ge 2$, and let $H^*(2,2;2m)\subset H(2,2;2m)$ denote the set of partitions $H^*(2,2;2m)=\{(k+2,k+2,2^{m-2-k})\vdash 2m\mid k=0,\ldots, m-2\}$ (the partitions in the $(2,2)$ hook with both arm and leg being rectangular); then denote
\[
S^*(2,2;2m)=\sum _{\lambda\in H^*(2,2;2m)} f^\lambda.
\]
\item Let $n=2m+1$, $m\ge 2$, and let $H^*(2,2;2m+1)\subset H(2,2;2m+1)$ denote the set of partitions $H^*(2,2;2m+1)=\{(k+3,k+2,2^{m-2-k})\vdash 2m+1\mid k=0,\ldots, m-2\}$ (the partitions in the $(2,2)$ hook with arm nearly rectangular and leg rectangular); then denote
\[
S^*(2,2;2m+1)=\sum _{\lambda\in H^*(2,2;2m+1)} f^\lambda.
\]
\end{enumerate}
\end{defn}
Recall from Section~\ref{s1} that $S(4,0;2m-1)=C_m^2$ and $S(4,0;2m)=C_m\cdot C_{m+1}$. We have the following intriguing identities.
\begin{prop}\label{b1}
\begin{enumerate}
\item Let $n=2m$; then \[S(4,0;2m-2)=C_{m-1}\cdot C_{m}=S^*(2,2;2m).\] Explicitly, we have the following identity:
\begin{eqnarray}\label{b3}
C_{m-1}\cdot C_m=\frac{1}{m\cdot (m+1)}\cdot{2m-2\choose m-1}\cdot{2m\choose m}
=\sum_{k=0}^{m-2}\frac{(2m)!}{k!\cdot (k+1)!\cdot(m-k-2)!\cdot (m-k-1)! \cdot (m-1)\cdot m^2\cdot (m+1)}.
\end{eqnarray}
\item Let $n=2m+1$; then \[\frac{2m+1}{m+2}\cdot S(4,0;2m-1)=\frac{2m+1}{m+2}\cdot C_{m}^2=S^*(2,2;2m+1).\] Explicitly, we have the following identity:
\begin{eqnarray}\label{b4}
\frac{2m+1}{ m+2}\cdot C_{m}^2= \frac{1}{(m+1)\cdot(m+2)}\cdot{2m\choose m}{2m+1\choose m}
=\sum_{k=0}^{m-2}\frac{(2m+1)!\cdot 2}{k!\cdot (k+2)!\cdot(m-k-2)!\cdot (m-k-1)! \cdot (m-1)\cdot m\cdot (m+1)\cdot (m+2)}.
\end{eqnarray}
\end{enumerate}
\end{prop}
\begin{proof}
Equation~\eqref{b3} is the specialization of Gauss's ${}_2F_1(a,b;c;1)$ with $a=2-m$, $b=1-m$, $c=2$~\cite{askey}, and~\eqref{b4} is similar. Alternatively, the identities~\eqref{b3} and~\eqref{b4} can be verified by the WZ method~\cite{doron3},~\cite{doron2}.
\end{proof}
\section{Hook-sums and humps for paths}\label{s4}
A Dyck path of length $2n$ is a lattice path, in $\mathbb{Z}\times \mathbb{Z}$, from $(0,0)$ to $(2n,0)$, using up-steps $(1,1)$ and down-steps $(1,-1)$ and never going below the $x$-axis.
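Both identities in the proposition above can also be checked directly in exact rational arithmetic; a short sketch (all function names are ours):

```python
from fractions import Fraction
from math import comb, factorial

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def lhs_even(m):            # C_{m-1} * C_m
    return Fraction(catalan(m - 1) * catalan(m))

def rhs_even(m):            # the sum on the right of the first identity
    return sum(Fraction(factorial(2 * m),
                        factorial(k) * factorial(k + 1)
                        * factorial(m - k - 2) * factorial(m - k - 1)
                        * (m - 1) * m * m * (m + 1))
               for k in range(m - 1))

def lhs_odd(m):             # (2m+1)/(m+2) * C_m^2
    return Fraction(2 * m + 1, m + 2) * catalan(m) ** 2

def rhs_odd(m):             # the sum on the right of the second identity
    return sum(Fraction(factorial(2 * m + 1) * 2,
                        factorial(k) * factorial(k + 2)
                        * factorial(m - k - 2) * factorial(m - k - 1)
                        * (m - 1) * m * (m + 1) * (m + 2))
               for k in range(m - 1))

for m in range(2, 9):
    assert lhs_even(m) == rhs_even(m)
    assert lhs_odd(m) == rhs_odd(m)
```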
A {\it hump} in a Dyck path is an up-step followed by a down-step. A Motzkin path of length $n$ is a lattice path from $(0,0)$ to $(n,0)$, using flat-steps $(1,0)$, up-steps $(1,1)$ and down-steps $(1,-1)$, and never going below the $x$-axis. A hump in a Motzkin path is an up-step followed by zero or more flat-steps followed by a down-step. We now count {\it humps} for Dyck and for Motzkin paths and observe the following intriguing phenomena: the hump calculation in the Dyck case relates the $2\times n$ rectangular shape $\lambda=(n,n)$ to the $(1,1)$ hook shape $\mu=(n,1^n)$, and in the Motzkin case we show below that it relates the $(3,0)$ strip shape partitions $H(3,0;n)$ to the $(2,1)$ hook shape partitions $H(2,1;n)$.
\subsection{The Dyck case}
The Catalan number \[C_n=\frac{(2n)!}{n!(n+1)!}\] is the cardinality of a variety of sets~\cite{stanley}; here we are interested in two such sets. First, $C_n=f^{(n,n)}$, the number of SYT of shape $(n,n)$. Second, $C_n$ is the number of Dyck paths of length $2n$. Let ${\cal H} D_n$ denote the total number of humps in all the Dyck paths of length $2n$; then
\[
{\cal H} D_n={2n-1\choose n},\]
see~\cite{dershowitz1},~\cite{dershowitz2},~\cite{deutsch}. Since ${2n-1\choose n}=f^{(n,1^n)}$, we have
\[
C_n=f^{(n,n)}\qquad\mbox{and}\qquad{\cal H} D_n=f^{(n,1^n)},
\]
which we denote by
\begin{eqnarray}\label{eq1}
{\cal H}: (n,n)\longrightarrow (n,1^n).
\end{eqnarray}
\subsection{The Motzkin case}
Like the Catalan numbers, the Motzkin numbers $M_n$ are also the cardinality of a variety of sets; for example $M_n=S(3,0;n)$,~\cite{regev1},~\cite{stanley},~\cite [sequence A001006]{sloane}, which gives the Motzkin numbers an SYT interpretation. Also, $M_n$ is the number of Motzkin paths of length $n$.
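Both counts for the Dyck case are easy to confirm by brute-force enumeration; a short sketch (helper names are ours):

```python
from math import comb

def dyck_paths(n):
    """Generate all Dyck paths of length 2n as strings over {'U', 'D'}."""
    def gen(path, ups, height):
        if len(path) == 2 * n:
            yield path
            return
        if ups < n:
            yield from gen(path + "U", ups + 1, height + 1)
        if height > 0:
            yield from gen(path + "D", ups, height - 1)
    yield from gen("", 0, 0)

for n in range(1, 8):
    paths = list(dyck_paths(n))
    assert len(paths) == comb(2 * n, n) // (n + 1)     # C_n paths of length 2n
    humps = sum(p.count("UD") for p in paths)          # each hump is a "UD" factor
    assert humps == comb(2 * n - 1, n)                 # total humps = binom(2n-1, n)
```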
Let ${\cal H} M_n$ denote the total number of humps in all the Motzkin paths of length $n$; then by~\cite[sequence A097861]{sloane}
\begin{eqnarray}\label{motzkin.path.2}
{\cal H}M_n=\frac{1}{2}\sum_{j\ge 1}{n\choose j}{n-j\choose j}.
\end{eqnarray}
We show below that this implies the intriguing identity ${\cal H}M_n=S(2,1;n)-1,$ which gives an SYT interpretation of the numbers ${\cal H}M_n$. Thus the hump calculation in the Motzkin case relates the $(3,0)$ strip shape partitions $H(3,0;n)$ to the $(2,1)$ hook shape partitions $H(2,1;n)$. We denote this by
\begin{eqnarray}\label{eq2}
{\cal H}: H(3,0;n)\longrightarrow H(2,1;n).
\end{eqnarray}
\begin{thm}\label{motzkin.humps.1}
The number of humps for the Motzkin paths of length $n$ satisfies
\[
{\cal H}M_n=S(2,1;n)-1.
\]
\end{thm}
\begin{proof}
Combining Equations~\eqref{motzkin.path.3} and~\eqref{motzkin.path.2}, the proof of Theorem~\ref{motzkin.humps.1} will follow once the following binomial identity -- of interest on its own -- is proved.
\begin{lem}\label{motzkin.humps.11}
For $n\ge 2$,
\begin{eqnarray}\label{motzkin.path.222}
2\sum_{j=1}^{\lfloor n/2\rfloor}{n\choose j}{n-j\choose j}&=&\sum_{r=0}^{n-1}{n-r\choose{\lfloor\frac{n-r}{2}\rfloor}} {n\choose r}+a(n)-1\nonumber\\
&=&\sum_{r=0}^{n-1}{n-r\choose{\lfloor\frac{n-r}{2}\rfloor}} {n\choose r} +\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor-1}\frac{n!}{k!\cdot (k+1)!\cdot (n-2k-2)!\cdot (n-k-1)\cdot(n-k)}.
\end{eqnarray}
\end{lem}
Equation~\eqref{motzkin.path.222} was verified by the WZ method; about this method, see~\cite{doron3},~\cite{doron2}. We remark that it would be interesting to find an elementary proof of this identity. This completes the proof of Theorem~\ref{motzkin.humps.1}.
\end{proof}
A. Regev, Math. Dept., The Weizmann Institute, Rehovot 76100, Israel.
{\it Email address:} amitai.regev at weizmann.ac.il
\end{document}
\begin{document}
\pagestyle{myheadings} \markright{Bodnar, M. \& Piotrowska,~M.J.: Family of angiogenesis models with distributed delay}
\noindent\begin{tabular}{|p{\textwidth}} \Large\bf Analysis of the family of angiogenesis models with distributed time delays \\ \it Bodnar, M.$^{*,1}$ \& Piotrowska, M.J.$^{*,2}$\\ \it\small $^*$Institute of Applied Mathematics and Mechanics,\\ \it\small University of Warsaw, Banacha 2, 02-097 Warsaw, Poland\\ \small $^1$\texttt{[email protected]}, $^2$\texttt{[email protected]}\\ \multicolumn{1}{|r}{\large\color{orange} Research Article} \\ \\ \hline \end{tabular}
\thispagestyle{empty}
\tableofcontents
\noindent\begin{tabular}{p{\textwidth}} \\ \hline \end{tabular}
\begin{abstract}
In this paper a~family of angiogenesis models generalising the Hahnfeldt \textit{et al.}\xspace model is proposed. The considered family of models consists of two differential equations with distributed time delays. The global existence and the uniqueness of the solutions are proved. Moreover, the stability of the unique positive steady state is examined in the case of Erlang and piecewise linear delay distributions. Theorems guaranteeing the existence of stability switches and the occurrence of Hopf bifurcations are proved. Theoretical results are illustrated by numerical analysis performed for parameters estimated by Hahnfeldt \textit{et al.}\xspace (\textit{Cancer Res.}, 1999).
\end{abstract}
Keywords: delay differential equations, distributed delay, stability analysis, Hopf bifurcation, angiogenesis
\section{Introduction}
Angiogenesis is a~very complex process which accompanies us throughout our whole life, starting from the development of the embryo and ending with wound healing at an advanced age, since it is the process of blood vessel formation from pre-existing structures. It is thus often considered a~vital process involved in organism growth and development.
On the other hand, it can also be a~pathological process, since it may promote the growth of solid tumours. Indeed, in the first stage of solid tumour development the cells create a multicellular spheroid (MCS) --- a~small spherical aggregation. For a~fast growing tumour (compared to the healthy tissue), tumour cells located in the centre of the MCS receive less and less nutrients (such as glucose and oxygen), since these are only supplied through the diffusion of those substances from the external vessels. Hence, when the MCS reaches a~certain size (usually around 2--3 mm in diameter, \cite{folkman1971tumor,GIMBRONE1972}), two processes are observed: the saturation of the growing cellular mass, and the formation of a necrotic core in the centre of the MCS. The poorly nourished tumour cells secrete a number of angiogenic factors, e.g.\ FGF, VEGF, VEGFR, Ang1 and Ang2, which promote the proliferation and the differentiation of endothelial cells, smooth muscle cells and fibroblasts, and also stabilise newly created vessels, initiating the process of angiogenesis. From that point of view, angiogenesis can also be considered an essential step in the transition of tumours from avascular forms, which are less harmful for the host, to cancers that are able to metastasise and finally cause the lethal outcome of the disease. In~\cite{folkman1971tumor} Folkman showed that the growth of solid tumours strongly depends on the amount of blood vessels that are induced to grow by tumours. He concluded that, if the tumour could be cut off from its own blood supply, it would wither and die, and nowadays he is considered the father of anti-angiogenic therapy, the approach that aims to prevent the tumour from developing its own blood supply system. On the other hand, angiogenesis also plays an essential role in tumour treatment during chemotherapy, since anti-cancer drugs distributed with blood need to be delivered to the tumour, and an efficiently working vessel network allows the drugs to penetrate the tumour structure better.
One also needs to take into account the fact that the vessel network developed due to angiogenesis initiated by tumour cells is not as efficient as the network in healthy tissue, e.g.\ it contains loops. This causes great difficulties during the treatment of tumours, because drug transport is often ineffective. For the reasons explained above, the process of angiogenesis is very important for solid tumour growth and also for anti-tumour treatment. Hence, a~number of authors have described this process using various mathematical models: macroscopic models \cite{Anderson1998,Aubert2011,bodnar13mbe_angio,Chaplain1990, Chaplain1993,donofrio2004,donofrio2006, onf2009, Hahnfeldt1999}, individual-based models \cite{Chaplain2012,Watson2012} or hybrid models \cite{Chaplain2012a,McDougall2012}. One of the best recognised angiogenesis models was proposed by Hahnfeldt~\textit{et al.}\xspace~\cite{Hahnfeldt1999} and later studied in~\cite{donofrio2004}. Among others, models with discrete delays describing the same angiogenesis process were recently considered in~\cite{bodnar13mbe_angio} and~\cite{mbuf07jbs}. Models investigating tumour development and angiogenesis in the context of anti-angiogenic therapy, chemotherapy or radiotherapy were also considered in~\cite{mbuf09appl,Alberto2007prE,ergun03bmb,McDougall2002,Orme1997,angioMBE,Sachs2001}, in the context of optimal treatment schedules in~\cite{ledz2007,ledz2008, ledz2009, swier2006,swier1,Swierniak2012}, and in the context of vessel maturation in~\cite{Arakelyan2005, arakelyan}.
Let us consider the family of angiogenesis models that consists of two differential equations of the following form
\begin{subequations}\label{model}
\begin{align}
\frac{\dd}{\dd t}p(t)&=rp(t)h\left(\frac{p(t-\tau_1)}{q(t-\tau_1)}\right), \label{model:1}\\
\frac{\dd}{\dd t}q(t)&=q(t)\Bigg(b\left(\frac{p(t-\tau_2)}{q(t-\tau_2)}\right)^{\alpha} -a_Hp^{2/3}(t) - \mu \Bigg) ,\label{model:2}
\end{align}
\end{subequations}
where the variables $p(t)$ and $q(t)$ represent the tumour volume at time $t$ and the maximal tumour size for which tumour cells can be nourished by the present vasculature, respectively. Thus, the ratio $\frac{p}{q}$ is interpreted as a measure of vessel density. The function $h$ reflects the dependence of the tumour growth rate on the vascularisation; we assume that $h$ is decreasing. In the literature, in the context of model~\eqref{model}, it is usually assumed that the tumour growth is governed by the logistic, generalised logistic (see~\cite{donofrio2004, donofrio2006,onf2009}) or Gompertzian (see~\cite{Hahnfeldt1999,donofrio2004}) law. The parameter $r$ reflects the maximal tumour growth rate. The delays $\tau_1$ and $\tau_2$ present in the system represent time lags in the processes of tumour growth and vessel formation, respectively. Moreover, it is assumed that the population of endothelial cells depends on both the stimulation process initiated by poorly nourished tumour cells and the inhibiting factors secreted by tumour cells causing the loss of vessels. The parameter $\alpha$ reflects the strength of the dependence of the vessel dynamics on the ratio $\tfrac{p}{q}$. Parameters $a_H$ and $b$ are proportionality parameters of the inhibition and stimulation of the angiogenesis process, respectively.
We consider two processes, one of which occurs on the surface of a~sphere and the other inside the sphere; due to the spherical symmetry assumption we suppose that the rate of the surface process is proportional to the tumour surface, which is represented by the exponent~$2/3$. Thus, the term $-a_Hq(t)p^{\frac{2}{3}}(t)$ appears in Eq.~\eqref{model:2}. In model~\eqref{model} the spontaneous loss of functional vasculature is represented by the term $-\mu q(t)$. In~\cite{Hahnfeldt1999} the parameter $\mu$ was estimated to be equal to zero. On the other hand, the term $-\mu q(t)$ for $\mu>0$ can also be interpreted as a~constant continuous anti-angiogenic treatment; hence it is also considered in the present study. For a detailed derivation of system~\eqref{model} without delays and with logarithmic function $h$ from a reaction--diffusion system, see~\cite{Hahnfeldt1999}. Model~\eqref{model} with Gompertzian tumour growth, $\alpha=1$, and without time delays was first proposed by Hahnfeldt~\textit{et al.}\xspace in \cite{Hahnfeldt1999}. Modifications of the Hahnfeldt~\textit{et al.}\xspace model have been considered by various authors. d'Onofrio and Gandolfi~\cite{donofrio2004} proposed a system of ordinary differential equations (ODEs) based on the Hahnfeldt~\textit{et al.}\xspace idea but with $h$ being a linear or logarithmic function, and with $\alpha=0$, that is, when the dynamics of the second variable of the model does not depend on the vascularisation of the tumour. Later, in~\cite{mbuf07kkzmbm}, Bodnar and Fory\'{s} introduced discrete time delays into the model with the Gompertzian tumour growth, and the family of models with parameter $\alpha$ in the interval $[0,1]$ was considered. The analysis was later extended in~\cite{mbuf09appl}. At the same time, d'Onofrio and Gandolfi in~\cite{onf2009} studied the model for $\alpha=0$ with discrete time delays and different functions $h$.
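For orientation, with both delays equal to zero, $h(x)=-\ln x$ and $\alpha=1$, model~\eqref{model} reduces to the original Hahnfeldt~\textit{et al.}\xspace ODE model; a minimal forward-Euler sketch (the parameter values are the Hahnfeldt~\textit{et al.}\xspace estimates for Lewis lung carcinoma; step size and horizon are our choices):

```python
import math

# Euler integration of the model with zero delays, Gompertzian h(x) = -ln(x)
# and alpha = 1, i.e. the original Hahnfeldt et al. ODE system.
r, b, a_H, mu, alpha = 0.192, 5.85, 0.00873, 0.0, 1.0   # per-day units

def simulate(p0, q0, T=400.0, dt=0.01):
    p, q = p0, q0
    for _ in range(int(T / dt)):
        dp = r * p * (-math.log(p / q))
        dq = q * (b * (p / q) ** alpha - a_H * p ** (2.0 / 3.0) - mu)
        p, q = p + dt * dp, q + dt * dq
    return p, q

# The positive steady state satisfies p = q and b = a_H * p^{2/3} + mu,
# i.e. p_star = ((b - mu)/a_H)^{3/2}, roughly 1.73e4 mm^3 for these values.
p_inf, q_inf = simulate(200.0, 625.0)
p_star = ((b - mu) / a_H) ** 1.5
assert abs(p_inf - p_star) / p_star < 0.01
assert abs(q_inf - p_star) / p_star < 0.01
```

The trajectory converges to the carrying capacity of about $1.7\cdot 10^4$ mm$^3$ reported for this parameter set, which is the steady state whose stability is studied below for non-zero delays.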
The analysis of the family of models with discrete delays and the Gompertzian tumour growth was extended in~\cite{Piotrowska2011,Forys2013}, where the directions of the existing Hopf bifurcations were studied. However, in all these papers only discrete time delays were considered. One of the reasons is the fact that the mathematical analysis of such models is easier than for models with distributed delays. Of course, in reality the duration of a~process is never constant and usually fluctuates around some value. Thus, we believe that the delay is somehow distributed around some average value. This is the main reason why in this paper we study a modification of the family of models~\eqref{model} in which the delays are distributed around their average values $\tau_1$ and $\tau_2$ instead of being concentrated at these points. On the other hand, we should also point out that it might be difficult to estimate a~particular delay distribution from experimental data. To our knowledge, such a modification of the family of models has not been considered yet. We are also not aware of any results concerning linear systems similar to the linearisation of the model with distributed delays considered in this paper. In fact, known analytical results regarding the stability of the trivial steady state for linear equations with distributed delays are rather limited; see e.g.\ \cite{Kolmanovskii1992} and references therein. Most results concern a single equation (see~\cite{Bernard2001,Campbell2009mmnp,Huang2012dcds} and references therein). In some papers a~second order equation or a~system of equations with distributed delays is considered (see \cite{Kiss2010dcds} for the second order linear equation, and~\cite{Faria2008jde} for the Lotka--Volterra system).
However, in these two cases the time delays are finite and the delayed terms do not appear on the diagonal of the stability matrix; hence these results cannot be applied directly to the system considered in this paper. A single infinite distributed time delay is considered in~\cite{alber2010} for a model of immune system--tumour interactions; however, in that paper only an exponential distribution is considered and the linear chain trick, which reduces the system with infinite delay to a~larger system of ODEs, is used. Part of our results is based on an application of the Mikhailov criterion, a generalised version of which is formulated and proved in the Appendix. This generalisation plays an essential role in studying the local stability of the steady state for certain distributed delays. The paper is organised as follows: in Section~\ref{sec:model} the considered family of angiogenesis models with distributed delays is proposed, and a~proper phase space and initial conditions are defined. In Section~\ref{sec:matprop} the mathematical analysis, including the global existence and uniqueness of solutions and the stability of the existing steady state of the considered family of models for different types of delay distributions, is presented. In that section we also discuss possible stability changes of the positive steady state. Next, our stability results are illustrated and extended by numerical simulations. In Section~\ref{sec:dis} we discuss and summarise our results. Finally, for completeness, in~\ref{appen} we formulate the generalised Mikhailov criterion used in the paper and prove it.
\section{Family of angiogenesis models with distributed delays}\label{sec:model}
The effect of the tumour stimulation of vessel growth, as well as the positive influence of the vessel network on the tumour dynamics, is neither instantaneous nor delayed by some constant value.
Hence, to reflect reality better, instead of the constant delays considered earlier in~\cite{donofrio2004}, \cite{Piotrowska2011} or~\cite{Forys2013}, we consider delays that are distributed around some mean values, with distributions given by general probability densities $f_i$. We study the following system
\begin{subequations}\label{modeldis}
\begin{align}
\frac{\dd}{\dd t}p(t)&=rp(t)h\left(\int^{\infty}_0f_1(\tau)\frac{p(t-\tau)}{q(t-\tau)}\dd\tau\right), \label{modeldis:1}\\
\frac{\dd}{\dd t}q(t)&=q(t)\Bigg(b\int^{\infty}_{0} f_2(\tau)\left(\frac{p(t-\tau)}{q(t-\tau)}\right)^{\alpha}\dd\tau -a_Hp^{2/3}(t) - \mu \Bigg) ,\label{modeldis:2}
\end{align}
\end{subequations}
where $f_i:[0,\infty)\to \varmathbb{R}_{\ge}$ are delay distributions with the following properties:
\begin{enumerate}[\bf (H1)]
\item $\displaystyle\int^{\infty}_{0}f_i(s)\dd s=1$, $i=1,2$ and
\item $\displaystyle 0\le \int^{\infty}_{0}sf_i(s)\dd s<\infty$, $i=1,2$.
\end{enumerate}
Moreover, we assume that the function $h$ has the following properties:
\begin{enumerate}[\bf (P1)]
\item $h:(0,\infty) \to \varmathbb{R}$ is a~continuously differentiable decreasing function;
\item $h(1) = 0$;
\item $h'(1) = -1$.
\end{enumerate}
Note that properties \textbf{(P2)} and \textbf{(P3)} do not make our studies less general, as a~proper rescaling and a~suitable choice of the parameter $r$ always allow us to arrive at the case $h(1) = 0$ and $h'(1)=-1$. Note also that an arbitrary probability distribution defined on $[0,+\infty)$ with a~finite expectation fulfils assumptions \textbf{(H1)} and \textbf{(H2)}. To close model~\eqref{modeldis} we need to define initial conditions. We consider a~continuous initial function $\phi:(-\infty,0]\rightarrow\varmathbb{R}^2$. For infinite delays we need to control the behaviour of this function as $t$ tends to $-\infty$. To this end, we introduce a~suitable space.
Let us denote by $\varmathbb{C}b= \mathbf{C}((-\infty,0],\varmathbb{R}^2)$ the space of continuous functions defined on the interval $(-\infty,0]$ with values in $\varmathbb{R}^2$. For an arbitrarily chosen non-decreasing continuous function $\eta:(-\infty,0]\to\varmathbb{R}^+$ such that $\displaystyle\lim_{\theta\to-\infty}\eta(\theta)=0$, we define the Banach space
\[
\mathcal{K}_{\eta} = \left\{ \varphi \in \varmathbb{C}b : \lim_{\theta\to-\infty} \varphi(\theta)\eta(\theta) = 0\quad \text{ and } \quad \sup_{\theta \in (-\infty,0]} |\varphi(\theta)\eta(\theta)|<\infty\right\},
\]
with the norm
\[
\|\varphi\|_{\eta} = \sup_{\theta \in (-\infty,0]} |\varphi(\theta)\eta(\theta)|, \quad \text{ for any } \quad \varphi \in \mathcal{K}_{\eta}.
\]
We require initial functions $\phi$ to be in $\mathcal{K}_{\eta}$ for some non-decreasing continuous function $\eta$ that tends to $0$ as its argument tends to $-\infty$ (see~\cite{Hino1991}). If the delay is finite, that is, if the supports of $f_1$ and $f_2$ are compact, we assume that initial functions satisfy $\phi\in \mathbf{C}([-\tau_{\max{}},0],\varmathbb{R}^2)$, where the interval $[-\tau_{\max{}},0]$ contains the supports of $f_i$, $i=1,2$. However, this is equivalent to considering the space $\mathcal{K}_{\eta}$ with an arbitrary $\eta$ equal to $1$ on $[-\tau_{\max{}},0]$ and tending to $0$ at~$-\infty$. If the support of the probability density is unbounded and the initial function $\phi$ is also unbounded, then we need to choose an appropriate function $\eta$ (which in turn determines the choice of the phase space $\mathcal{K}_{\eta}$). The function $\eta$ must be chosen to control the behaviour of the initial function at $-\infty$. However, due to the biological interpretation of the model, it is reasonable to restrict our analysis to globally bounded initial functions.
Thus, we can choose an arbitrary positive continuous non-decreasing function $\eta$ such that $\eta(\theta)\rightarrow 0$ as $\theta\rightarrow-\infty$, for example $\eta(\theta)=\e^{\theta}$. Nevertheless, because such an assumption would not simplify the formulation of the theorems, we decided to present and prove the theorems for arbitrary initial functions and not to assume a~particular form of the function $\eta$. \section{Mathematical analysis of family of angiogenesis models with distributed delays}\label{sec:matprop} Due to properties \textbf{(P2)} and \textbf{(H1)} the steady states of system~\eqref{modeldis} are the same as for the Hahnfeldt~\textit{et al.}\xspace and d'Onofrio--Gandolfi models without delays or with discrete delays. Thus, system~\eqref{modeldis} has a~unique positive steady state $(p_e,q_e)$, where $p_e=q_e=\left(\frac{b-\mu}{a_H}\right)^{\frac{3}{2}}$ (compare e.g.~\cite{Piotrowska2011}), if and only if $b>\mu$. Hence, in the rest of the paper we assume that $b>\mu$ holds. Note that the solutions to the equations of~\eqref{modeldis} can be written in exponential form. Thus, using the same argument as in~\cite{onf2009} or in~\cite{Piotrowska2011} (for the discrete delay cases), we deduce that $\varmathbb{R}_+^2$ is an invariant set for system~\eqref{modeldis}. Note that in system~\eqref{modeldis} the delayed terms are of the form $p/q$. This, together with the invariance of $\varmathbb{R}_+^2$, suggests the following change of variables
\[
x=\ln\left(\frac{p}{p_e}\right), \quad\quad y=\ln\left(\frac{pq_e}{qp_e}\right)
\]
giving
\begin{equation}\label{model_resc_suma}
\begin{aligned}
\frac{\dd}{\dd t}x(t)&=rh\left( \int^{\infty}_{0} f_1(\tau)\e^{y(t-\tau)}\dd\tau\right), \\
\frac{\dd}{\dd t}y(t)&=rh\left( \int^{\infty}_{0} f_1(\tau)\e^{y(t-\tau)}\dd\tau\right) -b\int^{\infty}_{0}f_2(\tau)\e^{\alpha y(t-\tau)}\dd\tau+(b-\mu) \e^{\frac{2}{3}x(t)}+\mu.
\end{aligned}
\end{equation}
The newly introduced variable $y$ allows us to consider a~system in which only one variable ($y$) appears with delayed argument, whereas in system~\eqref{modeldis} both variables have delayed arguments. Clearly, the steady state of the re-scaled system~\eqref{model_resc_suma} is $(x_e,y_e)=(0,0)$. It should be mentioned here that the scaling procedure transforms $\varmathbb{R}_+^2$ into the whole $\varmathbb{R}^2$, so the space $\mathcal{K}_{\eta}$ is the appropriate phase space for model~\eqref{model_resc_suma}. \subsection{Existence and uniqueness} \begin{prop}\label{thm:ex} Let $\eta:(-\infty,0]\to\varmathbb{R}^+$ be a~continuous, non-decreasing function such that $\displaystyle\lim_{\theta\to-\infty}\eta(\theta)=0$, let the functions $f_i$ fulfil~\textbf{\upshape (H1)}--\textbf{\upshape (H2)}, and let the function $h$ fulfil \textbf{\upshape (P1)}--\textbf{\upshape (P3)}. For an arbitrary initial function $\phi=(\phi_1,\phi_2)\in\mathcal{K}_{\eta}$ there exists $t_{\phi}>0$ such that system~\eqref{model_resc_suma} with initial condition $x(t)=\phi_1(t)$, $y(t)=\phi_2(t)$ for $t\in (-\infty,0]$, has a~unique solution in $\mathcal{K}_{\eta}$ defined for $t\in[0, t_{\phi})$. \end{prop} \begin{proof} The right-hand side of system~\eqref{model_resc_suma} fulfils a~local Lipschitz condition. In fact, assumption \textbf{(P1)} implies that the derivative of $h$ is bounded on an arbitrarily chosen compact subset of $(0,+\infty)$. Thus, the function $h$ is locally Lipschitz continuous, as are all functions on the right-hand side of~\eqref{model_resc_suma}. This implies that the Lipschitz condition is fulfilled on any bounded set in $\mathcal{K}_\eta$. Hence, there exists a~unique solution to system~\eqref{model_resc_suma} defined for $t\in[0, t_{\phi})$ (see~\cite[Chapter 2, Theorem~1.2]{Hino1991}).
\end{proof} \begin{thm}[global existence] If the assumptions of Proposition~\ref{thm:ex} are fulfilled and the probability densities $f_i$ and the initial function are globally bounded, then solutions to~\eqref{model_resc_suma} are defined for all $t\ge 0$. \end{thm} \begin{proof} The right-hand side of~\eqref{model_resc_suma} can be written in a~functional form as
\[
\frac{\dd }{\dd t} x(t) = G_1(x_t,y_t)\,, \quad \frac{\dd }{\dd t} y(t) = G_2(x_t,y_t)\,,
\]
with
\[
\begin{split}
G_1(\phi_x,\phi_y)&=r h\left( \int^{\infty}_{0} f_1(\tau)\e^{\phi_y(-\tau)}\dd\tau\right), \\
G_2(\phi_x,\phi_y)&=r h\left( \int^{\infty}_{0} f_1(\tau)\e^{\phi_y(-\tau)}\dd\tau\right) -b\int^{\infty}_{0}f_2(\tau)\e^{\alpha \phi_y(-\tau)}\dd\tau+(b-\mu) \e^{\frac{2}{3}\phi_x(0)}+\mu,
\end{split}
\]
for $(\phi_x,\phi_y)\in \varmathbb{C}b$. Note that due to \textbf{(P1)} and the monotonicity of $h$ we have $|G_1(\phi_x,\phi_y)| \le r \max\bigl\{h\bigl(\e^{-\|\phi_y\|}\bigr), -h\bigl(\e^{\|\phi_y\|}\bigr)\bigr\}$, and a~similar inequality holds for $|G_2(\phi_x,\phi_y)|$. This implies that the function $(G_1,G_2)$ maps bounded sets of $\varmathbb{C}b$ into bounded sets of $\varmathbb{R}^2$ (so their closures are compact). Thus, if the solution to~\eqref{model_resc_suma} cannot be prolonged beyond the interval $[0,T)$ for some finite $T$, then $\displaystyle\lim_{t\to T} \bigl(\| x_t\|_{\eta} + \|y_t\|_{\eta}\bigr) = +\infty$ (see~\cite[Chapter~2, Theorem~2.7]{Hino1991}). In the following we show that $x$ and $y$ are bounded on every bounded interval; hence the solution exists for all $t \ge 0$. Let $\delta>0$ be an arbitrary number such that $\int_{\delta}^{\infty} f_i(\tau)\dd\tau>0$ for $i=1,2$. Using the step method with the time step equal to $\delta$, we show that the solution to~\eqref{model_resc_suma} can be prolonged onto the interval $[0,\delta]$, and hence onto the interval $[n\delta,(n+1)\delta]$ for any $n\in\varmathbb{N}$. In what follows, $C_i$ denote constants that are chosen in a~suitable way. First, we provide an upper bound for the solution.
Due to the boundedness of $y(t)$ for $t\le 0$, we obtain
\[
\int^{\infty}_{0} f_1(\tau)\e^{y(t-\tau)}\dd\tau = \int^{\delta}_{0} f_1(\tau)\e^{y(t-\tau)}\dd\tau + \int^{\infty}_{\delta} f_1(\tau)\e^{y(t-\tau)}\dd\tau \ge \int^{\infty}_{\delta} f_1(\tau)\e^{y(t-\tau)}\dd\tau \ge C_1,
\]
for all $t\in [0,\delta]$. This estimate leads to the conclusion
\[
\frac{\dd }{\dd t} x(t) \le r\, h\bigl(C_1\bigr) \; \Longrightarrow \; x(t) \le C_2 \quad \text{ for } t\in [0,\delta].
\]
From the second equation of~\eqref{model_resc_suma}, arguing in a~similar way, we have
\[
\frac{\dd }{\dd t} y(t) \le r\,h\bigl(C_1\bigr) +(b-\mu)\e^{C_3} +\mu \; \Longrightarrow \; y(t) \le C_4, \quad \text{ for } t\in [0,\delta].
\]
Now, we proceed with a~lower bound for the solutions. Splitting the integral over $(0,+\infty)$ into two integrals and using the boundedness of $y(t)$, we obtain
\[
\begin{split}
\int_0^{\infty}f_1(\tau) \e^{y(t-\tau)}\dd\tau &= \int_0^{\delta}f_1(\tau) \e^{y(t-\tau)}\dd\tau + \int_{\delta}^{\infty} f_1(\tau)\e^{y(t-\tau)}\dd\tau \le \int_0^{\delta}f_1(\tau) \e^{y_M}\dd\tau + C_5\\
& \le \int_0^{\infty}f_1(\tau) \e^{y_M}\dd\tau + C_5 \le \e^{y_M}+C_5 = C_6\,,
\end{split}
\]
where $y_M$ is an upper bound of $y(t)$ on the interval $[-\delta,\delta]$. This estimate, together with a~similar argument applied to the second integral in the second equation of~\eqref{model_resc_suma}, yields
\[
\int_0^{\infty}f_2(\tau) \e^{\alpha y(t-\tau)}\dd\tau \le C_7\,.
\]
Therefore, from the second equation of~\eqref{model_resc_suma} we have
\[
\frac{\dd }{\dd t} y(t) \ge r h\bigl(C_6\bigr) - b\, C_7\; \Longrightarrow \; y(t) \ge C_8, \; \text{ for } \; t\in [0,\delta].
\]
The boundedness of $y$, together with the form of the first equation of~\eqref{model_resc_suma}, implies the boundedness of $x(t)$ on $[0,\delta]$.
Now, knowing that $x(t)$ and $y(t)$ are bounded on the interval $(-\infty,\delta]$, an analogous reasoning allows us to deduce the boundedness of the solution on the interval $(-\infty,2\delta]$. Hence, mathematical induction yields the boundedness of the solutions to~\eqref{model_resc_suma} on each compact interval included in $[0,+\infty)$, which completes the proof. \end{proof} \subsection{Stability and Hopf bifurcations} In this section we study the local stability of the steady state $(0,0)$ using the standard linearisation technique. The linearisation of system~\eqref{model_resc_suma} around the steady state $(0,0)$ has the following form
\begin{equation}\label{sys:lin}
\begin{aligned}
\frac{\dd}{\dd t} x(t) &= r h'(1) \int_0^{\infty} f_1(\tau) y(t-\tau)\dd \tau \, ,\\
\frac{\dd}{\dd t} y(t) &= \frac{2}{3}(b-\mu)x(t) + \int_0^{\infty}\Bigl(r h'(1) f_1(\tau) -\alpha b f_2(\tau)\Bigr) y(t-\tau)\dd \tau \, ,
\end{aligned}
\end{equation}
and, due to the equality $h'(1)=-1$ (see property \textbf{(P3)}), the corresponding characteristic function is given by
\[
W(\lambda) = \det \begin{bmatrix} \displaystyle\lambda &\displaystyle r \int_0^{\infty} f_1(\tau) \e^{-\lambda\tau}\dd \tau \\ \displaystyle-\frac{2}{3}(b-\mu) & \displaystyle\lambda+ \int_0^{\infty}\Bigl(rf_1(\tau)+\alpha b f_2(\tau)\Bigr) \e^{-\lambda\tau}\dd \tau \end{bmatrix}.
\]
Thus,
\[
W(\lambda) = \lambda^2 +\lambda \int_0^{\infty}\Bigl(rf_1(\tau)+\alpha b f_2(\tau)\Bigr) \e^{-\lambda\tau}\dd \tau + \frac{2r}{3}(b-\mu)\int_0^{\infty} f_1(\tau) \e^{-\lambda\tau}\dd \tau.
\]
In general, one chooses as probability densities the specific distributions that describe the experimental data or the studied phenomena best. In the present paper, we consider two particular types of probability densities.
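The expansion of the determinant above can be cross-checked symbolically. The following sketch is an illustrative verification only; the symbols \texttt{F1} and \texttt{F2} stand for the (here abstract) Laplace transforms $\int_0^{\infty}f_i(\tau)\e^{-\lambda\tau}\dd\tau$.

```python
import sympy as sp

lam, r, b, mu, alpha = sp.symbols('lambda r b mu alpha')
# F1, F2 stand for the Laplace transforms of the densities f_1 and f_2
F1, F2 = sp.symbols('F1 F2')

# characteristic matrix of the linearised system, with h'(1) = -1
M = sp.Matrix([[lam, r*F1],
               [-sp.Rational(2, 3)*(b - mu), lam + r*F1 + alpha*b*F2]])

W = sp.expand(M.det())
W_expected = lam**2 + lam*(r*F1 + alpha*b*F2) + sp.Rational(2, 3)*r*(b - mu)*F1
assert sp.simplify(W - W_expected) == 0  # matches the displayed formula for W(lambda)
```

We now return to the particular choices of the densities $f_i$.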
One type is given by
\begin{equation}\label{eq:zabek}
f_i(\tau) = \frac{1}{\varepsilon^{2}}\begin{cases} \varepsilon -\sigma+\tau, & \tau \in [\sigma-\varepsilon,\sigma), \\ \varepsilon +\sigma -\tau, & \tau\in [\sigma,\sigma+\varepsilon], \\ 0, & \text{otherwise}, \end{cases} \qquad \sigma\geq\varepsilon,\quad i=1,2.
\end{equation}
We call the probability densities defined by~\eqref{eq:zabek} piecewise linear probability densities. Note that for $\varepsilon\to 0$ the functions $f_i$ converge to the Dirac delta concentrated at the point $\sigma$, and therefore system~\eqref{modeldis} becomes the system with discrete delays considered in~\cite{donofrio2004,mbuf09appl,Piotrowska2011,Forys2013}. Since we assume that all considered probability densities are defined on the interval $[0,\infty)$, the condition $\sigma\ge\varepsilon$ must be fulfilled. Note that for $\sigma\ge\varepsilon$ the average value of $f_i$ is equal to $\sigma$ and the standard deviation is equal to $\varepsilon/\sqrt{6}$. For $\sigma<\varepsilon$ one obtains so-called neutral equations; for details see~\cite{Hale93}. In this paper, we also consider a~second type of probability densities, the so-called Erlang probability densities, separated from zero by~$\sigma\geq0$ or not, i.e.\ we study system~\eqref{modeldis} with the functions given by
\begin{equation}\label{eq:Erlang}
f_1(\tau) = g_{m_1}(\tau-\sigma)\,, \qquad f_2(\tau) = g_{m_2}(\tau-\sigma)\,,
\end{equation}
where $g_{m_i}(\tau)$, $i=1,2$, are called non-shifted Erlang distributions and are defined by
\begin{equation}\label{eq:defE}
g_{m_i}(s)=\frac{a}{(m_i-1)!}(as)^{m_i-1}\e^{-as},
\end{equation}
with $a>0$, $s\geq 0$. The mean value of the~non-shifted Erlang distribution $g_{m_i}$ is given by $\frac{m_i}{a}$, while the variance is equal to $\frac{m_i}{a^2}$. Hence, the average delay is equal to this mean, and the standard deviation $\sqrt{m_i}/a$ measures the degree of concentration of the delays about the average delay.
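The stated means and variances of both families of densities can be verified by quadrature. The sketch below uses arbitrarily chosen illustrative values of $\sigma$, $\varepsilon$, $m$ and $a$ (these concrete numbers are assumptions, not values from the text):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

sigma, eps = 2.0, 0.5          # illustrative values with sigma >= eps
m, a = 3, 4.0                  # illustrative Erlang parameters

def f_tri(t):
    """Piecewise linear density from (eq:zabek)."""
    if sigma - eps <= t < sigma:
        return (eps - sigma + t) / eps**2
    if sigma <= t <= sigma + eps:
        return (eps + sigma - t) / eps**2
    return 0.0

def g_erl(s):
    """Non-shifted Erlang density g_m from (eq:defE)."""
    return a * (a*s)**(m - 1) * np.exp(-a*s) / factorial(m - 1)

# (density, lower limit, upper limit, expected mean, expected variance)
for f, lo, hi, mean_th, var_th in [
        (f_tri, sigma - eps, sigma + eps, sigma, eps**2 / 6),
        (g_erl, 0.0, 50.0, m / a, m / a**2)]:
    mass = quad(f, lo, hi)[0]
    mean = quad(lambda t: t * f(t), lo, hi)[0]
    var = quad(lambda t: (t - mean)**2 * f(t), lo, hi)[0]
    assert np.isclose(mass, 1.0)
    assert np.isclose(mean, mean_th) and np.isclose(var, var_th)
```

In particular, the variance $\varepsilon^2/6$ of the piecewise linear density corresponds to the standard deviation $\varepsilon/\sqrt{6}$ quoted above.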
Clearly, the non-shifted Erlang distribution is a~special case of the Gamma distribution in which the shape parameter $m_i$ is an integer. It is also easy to see that the non-shifted Erlang distribution is a~generalisation of the exponential distribution, which one obtains by taking $m_i=1$. On the other hand, for the non-shifted Erlang distributions with $m_i\rightarrow+\infty$ (and the mean $m_i/a$ kept fixed) the probability densities $g_{m_i}$ converge to a~Dirac distribution, and hence the system with discrete delays~\eqref{model} is recovered as a~limit of Erlang distributions for system~\eqref{modeldis}. For the shifted Erlang distribution the mean value is $\sigma+\frac{m}{a}$, while the variance stays the same as in the non-shifted case. \subsubsection{Erlang probability densities} The Erlang probability densities separated from zero by $\sigma$ read
\[
f_i(\tau) = \frac{a^{m_i}(\tau-\sigma)^{m_i-1}}{(m_i-1)!}\e^{-a(\tau-\sigma)},\;\; \text{ for } \; \tau\ge \sigma
\]
and $0$ otherwise, for $i=1,2$. Note that in this paper we consider only the case $a>0$. Then for the probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE} we have
\[
\int_0^{\infty} f_i(\tau) \e^{-\lambda\tau}\dd \tau= \frac{a^{m_i}}{(a+\lambda)^{m_i}} \e^{-\lambda\sigma}.
\]
Hence, the characteristic function has the following form
\begin{equation}\label{eq:erl:dow}
W(\lambda) = \lambda^2 +\lambda\left(r\frac{a^{m_1}}{(a+\lambda)^{m_1}}\e^{-\lambda\sigma}+\alpha b\frac{a^{m_2}}{(a+\lambda)^{m_2}}\e^{-\lambda\sigma} \right) + \frac{2r}{3}(b-\mu) \frac{a^{m_1}}{(a+\lambda)^{m_1}}\e^{-\lambda\sigma}.
\end{equation}
Define
\begin{equation}\label{def:betagamag}
\beta = r+\alpha b \quad \text{and} \quad \gamma = \frac{2 r\, (b-\mu)}{3}.
\end{equation}
\begin{prop}\label{thm:non-shiftE} Let $b>\mu$.
The trivial steady state of system~\eqref{model_resc_suma} with the non-shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE} ($\sigma=0$) is locally asymptotically stable if
\begin{enumerate}[\upshape (i)]
\item $a\beta>\gamma$ for $m_1=m_2=1$;
\item $a > \frac{1}{2}\beta + \frac{2\gamma}{\beta}$ for $m_1=m_2=2$;
\item $a > \frac{9}{8}\beta$ and $8\beta a^3-3(8\gamma+3\beta^2)a^2+3\gamma\beta a-\gamma^2>0$ for $m_1=m_2=3$;
\item $2a(a+r)>\Big(a(a+\alpha b)+\gamma\Big)+\frac{4a^2\gamma}{a(a+\alpha b)+\gamma}$ for $m_1=1$ and $m_2=2$;
\item $a > \frac{1}{2}\beta+ \frac{2\gamma}{\beta}-\alpha b$ for $m_1=2$ and $m_2=1$.
\end{enumerate}
\end{prop}
\begin{proof} For $\sigma=0$ we clearly have
\[
W(\lambda) = \lambda^2 +\lambda\left(r\frac{a^{m_1}}{(a+\lambda)^{m_1}}+\alpha b\frac{a^{m_2}}{(a+\lambda)^{m_2}} \right) + \frac{2r}{3}(b-\mu) \frac{a^{m_1}}{(a+\lambda)^{m_1}}\,.
\]
As can be seen, the advantage of using the Erlang distributions (not separated from zero) is that, instead of studying the zeros of the characteristic function, one can study the roots of a~polynomial, so the stability analysis is easier. First, consider $m_1=m_2=m$. We investigate the behaviour of the roots of the polynomial
\begin{equation}\label{eq:Erl:m=1}
\lambda^2(a+\lambda)^m + a^m (\lambda\,\beta + \gamma),
\end{equation}
where $\beta$ and $\gamma$ are given by~\eqref{def:betagamag}. The Routh--Hurwitz stability criterion (see e.g.~\cite{rh-criterion}) gives necessary and sufficient conditions for the stability of the trivial steady state of system~\eqref{sys:lin}. Clearly, the degree of polynomial~\eqref{eq:Erl:m=1} grows with $m$. However, in each case of equal $m$'s, (sometimes tedious) algebraic calculations lead to conditions that guarantee the stability of the considered steady state. For $m=1$ (i.e.
for the exponential distribution not separated from zero) we have a~polynomial of degree three
\[ \lambda^2(a+\lambda) + a(\lambda\,\beta + \gamma)\,,\]
while for $m=2$ we have
\[
\lambda^4+2a\,\lambda^3+a^2\,\lambda^2+\beta a^2\, \lambda+\gamma a^2\,.
\]
The case $m=3$ is the most complicated, since a~direct calculation shows that \eqref{eq:Erl:m=1} is a~polynomial of degree five,
\[
\lambda ^5 +3 a\lambda^4+3 a^2 \lambda ^3 +a^3 \lambda ^2+a^3 \beta \lambda+a^3 \gamma\,.
\]
For $m_1=1$ and $m_2=2$, polynomial~\eqref{eq:Erl:m=1} reads
\[
\lambda^4 + 2a\,\lambda^3+a(a+r)\,\lambda^2+\Bigl(a^2\beta+\gamma a\Bigr)\,\lambda+ \gamma a^2\,,
\]
while for $m_1=2$ and $m_2=1$ we have
\[
\lambda^4+ 2a\,\lambda^3 + a(a+\alpha b)\,\lambda^2+a^2\beta\,\lambda +\gamma a^2\,.
\]
\end{proof}
\begin{prop} For $b>\mu$, $\sigma=0$, $m_1=1$, $m_2=2$ and
\begin{equation}\label{cond:prop}
r<\alpha b \quad \text{ or } \quad r > \frac{\alpha b}{2} + \frac{2\gamma}{\alpha b},
\end{equation}
there exists $\bar a>0$ such that for $a>\bar a$ the trivial steady state of system~\eqref{model_resc_suma} with the non-shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE} is locally asymptotically stable, and it is unstable for $a\in(0,\bar a)$.
\end{prop}
\begin{proof} Since $a(a+\alpha b)+\gamma>0$ holds, the stability condition for the case $m_1=1$, $m_2=2$ (condition (iv) of Proposition~\ref{thm:non-shiftE}) is equivalent to
\[
W_a(a) = a^4+2r\, a^3+\Bigl(\alpha b(2r-\alpha b)-4\gamma\Bigr)\,a^2 +2\gamma(r-\alpha b)\,a - \gamma^2>0.
\]
Descartes' rule of signs implies that if one of conditions~\eqref{cond:prop} holds, then $W_a(a)$ has exactly one simple positive real root. We denote it by $\bar a$. Then the trivial steady state of system~\eqref{model_resc_suma} is locally asymptotically stable for $a>\bar a$ and unstable for $0<a<\bar a$.
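This root count can be confirmed numerically. In the sketch below the parameter values are assumed for illustration only; they satisfy the first condition of~\eqref{cond:prop}, $r<\alpha b$ (the product $\alpha b$ is written as \texttt{ab}):

```python
import numpy as np

# illustrative values with r < alpha*b (first condition of (cond:prop))
r, ab, gamma = 1.0, 2.0, 1.0

# coefficients of W_a(a) = a^4 + 2r a^3 + (ab(2r - ab) - 4 gamma) a^2
#                          + 2 gamma (r - ab) a - gamma^2
coeffs = [1.0, 2*r, ab*(2*r - ab) - 4*gamma, 2*gamma*(r - ab), -gamma**2]
roots = np.roots(coeffs)
positive_real = [z.real for z in roots
                 if abs(z.imag) < 1e-9 and z.real > 0]
# exactly one simple positive root, as Descartes' rule of signs predicts
assert len(positive_real) == 1
```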
\end{proof}
\begin{thm}\label{thm:Erlang} Consider system~\eqref{model_resc_suma} with the shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE}.
\begin{enumerate}[\upshape (a)]
\item Let $m_i=m$, $i=1,2$. If the trivial steady state is
\begin{enumerate}[\upshape (i)]
\item unstable for $\sigma=0$, then it is unstable for any $\sigma>0$,
\item locally asymptotically stable for $\sigma=0$, then there exists $\sigma_0>0$ such that it is locally asymptotically stable for all $\sigma\in[0,\sigma_0)$ and unstable for $\sigma>\sigma_0$. At $\sigma=\sigma_0$ the Hopf bifurcation occurs.
\end{enumerate}
\item Let $m_1=1$, $m_2=2$. If the trivial steady state is locally asymptotically stable for $\sigma=0$, then there exists exactly one $\sigma_0>0$ such that for $\sigma \in [0,\sigma_0)$ it is locally asymptotically stable and it is unstable for $\sigma>\sigma_0$. At $\sigma=\sigma_0$ the Hopf bifurcation occurs.
\item Let $m_1=2$, $m_2=1$.
\begin{enumerate}[\upshape (i)]
\item If the steady state is locally asymptotically stable for $\sigma=0$ and the function
\begin{equation}\label{eq:aux:sigma3}
F(u)=u^4 +2a^2u^3+ u^2a^2\bigl(a^2-\alpha^2 b^2\bigr)- u a^3\bigl( a\beta^2 - 2\alpha b\gamma\bigr) - \gamma^2 a^{4}
\end{equation}
has no positive multiple roots, then there exists $\sigma_0>0$ such that the steady state is locally asymptotically stable for $\sigma \in [0,\sigma_0)$ and it is unstable for $\sigma>\bar\sigma$, with $\sigma_0\le \bar\sigma$. At $\sigma=\sigma_0$ the Hopf bifurcation occurs.
\item If
\begin{equation}\label{cond:stab:m11m22}
a \ge \alpha b\min\left\{1,\frac{2 \gamma}{\beta^2}\right\},
\end{equation}
or
\begin{equation}\label{cond:stab:m11m22-verA}
a < \alpha b\min\left\{1,\frac{2 \gamma}{\beta^2}\right\} \; \text{ and } \; \Bigl(a^2+2\alpha^2 b^2\Bigr)^3 <27\Bigl(2\alpha b\gamma+ a\bigl(\alpha^2b^2-\beta^2\bigr)\Bigr)^2,
\end{equation}
then at most one stability switch of the steady state is possible.
Moreover, if the steady state is locally asymptotically stable for $\sigma=0$, then there exists $\sigma_0>0$ such that it is stable for $\sigma \in [0,\sigma_0)$ and it is unstable for $\sigma>\sigma_0$. For $\sigma=\sigma_0$ the Hopf bifurcation is observed.
\end{enumerate}
\end{enumerate}
\end{thm}
\begin{proof} Let $m_M = \max\{m_1,m_2\}$. Then the characteristic function~\eqref{eq:erl:dow} has the following form
\[
\begin{split}
W(\lambda) =& \frac{1}{(a+\lambda)^{m_M}}\Biggl( \lambda^2(a+\lambda)^{m_M} +\\
&\quad + \biggl(\lambda\bigl(ra^{m_1}(a+\lambda)^{m_M-m_1} +\alpha ba^{m_2}(a+\lambda)^{m_M-m_2}\bigr) + \gamma a^{m_1}(a+\lambda)^{m_M-m_1}\biggr)\e^{-\lambda\sigma}\Biggr).
\end{split}
\]
The zeros of $W(\lambda)$ are the same as the zeros of
\begin{equation}\label{eq:chr:max}
D(\lambda) = \lambda^2(a+\lambda)^{m_M} + \biggl(\lambda\bigl(ra^{m_1}(a+\lambda)^{m_M-m_1} +\alpha ba^{m_2}(a+\lambda)^{m_M-m_2}\bigr) + \gamma a^{m_1}(a+\lambda)^{m_M-m_1}\biggr)\e^{-\lambda\sigma}.
\end{equation}
For $\lambda=i\omega$ define an auxiliary function
\begin{equation}\label{eq:aux:ogolnie}
F(\omega) = \omega^4(a^2+\omega^2)^{m_M} - \left|i\omega\bigl(ra^{m_1}(a+i\omega)^{m_M-m_1} +\alpha ba^{m_2}(a+i\omega)^{m_M-m_2}\bigr) + \gamma a^{m_1}(a+i\omega)^{m_M-m_1} \right|^2.
\end{equation}
In the following we show, with one exception, that there exists a~unique simple positive zero $\omega_0=\sqrt{u_0}$ of the function $F$ such that $F'(\omega_0)>0$. This, together with Theorem~1 from~\cite{cook86ekvacioj}, allows us to deduce that the zeros of $D$ cross the imaginary axis from left to right with positive velocity. This yields the existence of stability switches and the occurrence of the Hopf bifurcation in the appropriate cases. For $m_1=m_2=m$ and $u=\omega^2$ the auxiliary function $F$ takes the form
\begin{equation}\label{eq:Fm}
F(u)=u^2\Bigl(a^2+u\Bigr)^{m}-a^{2m}(u\beta^2+\gamma^2)\,.
\end{equation}
We are interested in the existence of real positive roots of~\eqref{eq:Fm}.
Because the number of sign changes between consecutive non-zero coefficients equals one, Descartes' rule of signs indicates that polynomial~\eqref{eq:Fm} has exactly one simple real positive root, denoted by $u_0$. Clearly, the fact that the coefficient of $u^{2+m}$ is positive and $F(0)<0$ implies $F'(u_0)\ge 0$. Moreover,
\[
F'(u)=2u\Bigl(a^2+u\Bigr)^{m}+ mu^2(a^2+u)^{m-1}-a^{2m}\beta^2
\]
and thus $F''(u)>0$ for $u>0$. Hence, $F'(u_0)>0$. This completes the proof of part (a). For $m_1=1$, $m_2=2$ and $u=\omega^2$ the function $F$ given by \eqref{eq:aux:ogolnie} has the form
\begin{equation}\label{eq:aux:nierowne}
F(u) = u^4 + 2 a^2\, u^3 + a^2\Bigl(a^2-r^2\Bigr)\,u^2 - a^2\left(a^2\beta^2 + 2a\alpha b \gamma +\gamma^2 \right) \, u - a^4\gamma^2\,.
\end{equation}
We show that the coefficient of $u$ in~\eqref{eq:aux:nierowne} is negative, and hence Descartes' rule of signs implies that a~single stability switch for the trivial steady state of~\eqref{model_resc_suma} occurs. This is equivalent to the inequality
\[
G(a) = a^2\beta^2 + 2a\alpha b \gamma +\gamma^2>0,
\]
for $a\ge 0$. For $b>\mu$ (that is, the condition required for the existence of the trivial steady state of~\eqref{model_resc_suma}) the discriminant of $G$ is
\[
4\gamma^2\left(\alpha^2 b^2 - \beta^2\right) = 4\gamma^2\left(\alpha^2 b^2 - (r+\alpha b)^2\right)= -4r\gamma^2 \left(r+2\alpha b\right) < 0.
\]
Thus, $G(a)>0$, so $F$ has exactly one simple positive root $u_0$. Moreover, $F(0)<0$, hence $F'(u_0)>0$, and the proof of part (b) is completed. Consider the case $m_1=2$ and $m_2=1$. Then for $u=\omega^2$ Eq.~\eqref{eq:aux:ogolnie} takes the form~\eqref{eq:aux:sigma3}. First, note that for the auxiliary function given by \eqref{eq:aux:sigma3} the inequality $F(0)<0$ holds. Thus, $F$ has at least one real positive zero.
Hence, if $F$ has only simple positive zeros, then by Theorem~1 from~\cite{cook86ekvacioj} the steady state of system~\eqref{model_resc_suma} is unstable for sufficiently large $\sigma$, which completes the proof of part (c.i). Now, we prove statement (c.ii). Assume that~\eqref{cond:stab:m11m22} holds. If $a\ge \alpha b$, then the coefficient of $u^2$ is non-negative and, independently of the sign of the coefficient of $u$, Descartes' rule of signs indicates that there is exactly one simple positive real zero of $F(u)$. Thus, at most one stability switch of the steady state can occur. On the other hand, if the condition $a\ge2\alpha b\gamma/\beta^2$ holds, then the coefficient of $u$ in $F(u)$ is non-positive. Hence, independently of the sign of the coefficient of $u^2$, a~single sign change is observed. Thus, $F(u)$ has exactly one simple real positive zero. Now assume that~\eqref{cond:stab:m11m22-verA} holds. To shorten notation, denote
\begin{equation}\label{def:alpha12}
\alpha_1 = 2\alpha b\gamma- a\beta^2, \quad\quad \alpha_2 = \alpha^2 b^2-a^2.
\end{equation}
The first inequality of~\eqref{cond:stab:m11m22-verA} is equivalent to $\alpha_1>0$ and $\alpha_2>0$. Denote $F_1(\zeta) = F(a\zeta)/a^4$, with $\zeta=u/a$. Clearly, $F(u)=0$ is equivalent to $F_1(\zeta)=0$. The function $F_1$ reads
\[
F_1(\zeta) = \zeta^4 +2a \zeta^3-\alpha_2 \zeta^2 + \zeta \alpha_1 - \gamma^2.
\]
We show that under assumption~\eqref{cond:stab:m11m22-verA} the function $F_1$ is strictly increasing in $\zeta$ on $[0,+\infty)$, which implies that $F$ is strictly increasing in $u$. This, together with the fact that $F(0)<0$, implies that $F$ has exactly one simple positive zero, so the assertion of point (c.ii) is true. In order to show the monotonicity of $F_1$, we prove that its first derivative is positive. We have
\[
F_1'(\zeta) = 4\zeta^3 +6a \zeta^2-2\alpha_2 \zeta + \alpha_1.
\]
The assumption $\alpha_1>0$ implies $F_1'(0)>0$.
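The positivity of $F_1'$ on the whole half-line can also be illustrated numerically for a~concrete parameter set satisfying~\eqref{cond:stab:m11m22-verA}. The values below are assumed for illustration only (with $r=1$, so that $\beta=r+\alpha b$; the product $\alpha b$ is written as \texttt{ab}):

```python
import numpy as np

# illustrative parameters: r = 1, alpha*b = 2, hence beta = 3
ab, beta, gamma, a = 2.0, 3.0, 3.0, 0.5

alpha1 = 2*ab*gamma - a*beta**2          # alpha_1 from (def:alpha12)
alpha2 = ab**2 - a**2                    # alpha_2 from (def:alpha12)
assert alpha1 > 0 and alpha2 > 0         # first inequality of (cond:stab:m11m22-verA)
# second inequality of (cond:stab:m11m22-verA)
assert (a**2 + 2*ab**2)**3 < 27*(2*ab*gamma + a*(ab**2 - beta**2))**2

zeta = np.linspace(0.0, 50.0, 500001)
F1_prime = 4*zeta**3 + 6*a*zeta**2 - 2*alpha2*zeta + alpha1
assert F1_prime.min() > 0                # F_1 is strictly increasing on [0, +infinity)
```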
Now, we derive a~condition that guarantees the positivity of $F_1'$ for all $\zeta>0$. To this end, we calculate the minimum of $F_1'(\zeta)$ on the interval $[0,+\infty)$. Calculating the second derivative of $F_1$, we find that the minimum of $F_1'$ is attained at the point
\[
\bar \zeta = \frac{1}{2}\biggl(-a+\sqrt{a^2+\tfrac{2}{3}\alpha_2}\biggr).
\]
Using the fact that $F_1''(\bar\zeta) = 0$, we have
\[
F_1'(\bar \zeta) = \alpha_1 - 2\bar\zeta^3-\alpha_2\bar \zeta.
\]
Thus, $F_1$ is strictly increasing for all $\zeta>0$ if and only if
\begin{equation}\label{war:sigma3}
\bigl(2\bar \zeta^2+\alpha_2\bigr)\bar \zeta< \alpha_1.
\end{equation}
A~few algebraic manipulations show that~\eqref{war:sigma3} is equivalent to
\[
\left(a^2+\frac{2}{3}\alpha_2\right)^{3/2} - a\Bigl(a^2+\alpha_2\Bigr)<\alpha_1.
\]
Using the definition of~$\alpha_2$, the above inequality reads
\[
\frac{1}{3\sqrt{3}}\Bigl(a^2+2\alpha^2 b^2\Bigr)^{3/2} <\alpha_1+ a\alpha^2b^2.
\]
Due to the assumption $\alpha_1>0$ both sides of the above inequality are positive, so squaring it we obtain
\[
\Bigl(a^2+2\alpha^2 b^2\Bigr)^3 <27\Bigl(\alpha_1+ a\alpha^2b^2\Bigr)^2.
\]
Now, using the definition of $\alpha_1$, we get
\[
\Bigl(a^2+2\alpha^2 b^2\Bigr)^3 <27\Bigl(2\alpha b\gamma+ a\bigl(\alpha^2b^2-\beta^2\bigr)\Bigr)^2,
\]
which is fulfilled due to the second inequality of~\eqref{cond:stab:m11m22-verA}. This completes the proof of part (c). \end{proof} Note that there exists a~set of parameters of system~\eqref{model_resc_suma} with the shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE} with $m_1=2$, $m_2=1$ such that the steady state is locally asymptotically stable for $\sigma=0$ and conditions~\eqref{cond:stab:m11m22-verA} hold. As a~proper example we formulate the remark below.
\begin{rem} If
\begin{equation}\label{notcontra}
\alpha b<\beta \quad \text{ and } \quad 1<\frac{2\gamma}{\beta^2},
\end{equation}
then for all $a>\frac{1}{2}\beta+\frac{2\gamma}{\beta}-\alpha b$ the steady state of system~\eqref{model_resc_suma} with the shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE} with $m_1=2$, $m_2=1$ loses its stability at $\sigma=\bar\sigma$, and the Hopf bifurcation occurs at this point.
\end{rem}
\begin{proof} First, note that under the assumptions of the remark, if $a\ge\alpha b \min\left\{1,\frac{2\gamma}{\beta^2}\right\}$, then condition~\eqref{cond:stab:m11m22} holds and Theorem~\ref{thm:Erlang} implies the assertion of the remark. If $a>\frac{1}{2}\beta+\frac{2\gamma}{\beta}-\alpha b$ and conditions~\eqref{cond:stab:m11m22-verA} hold, then Theorem~\ref{thm:Erlang} also implies the assertion of the remark. We show that for some set of parameters these conditions can be fulfilled simultaneously. Consider $a<\alpha b \min\left\{1,\frac{2\gamma}{\beta^2}\right\}$. The second inequality of~\eqref{notcontra} implies that $\min\left\{1,\frac{2\gamma}{\beta^2}\right\}=1$. Hence, the stability condition of the steady state for $\sigma=0$ and the first condition of~\eqref{cond:stab:m11m22-verA} read
\begin{equation}\label{zakresa}
\frac{1}{2}\beta+\frac{2\gamma}{\beta}-\alpha b< a< \alpha b.
\end{equation}
First, we show that for some parameters inequalities~\eqref{zakresa} determine a~non-empty set of $a$. This is true if the inequalities
\begin{equation}\label{zakresalphab}
\frac{1}{2}\beta+\frac{2\gamma}{\beta} < 2\alpha b< 2\beta
\end{equation}
hold, where the second inequality in~\eqref{zakresalphab} is just the first one in~\eqref{notcontra}. Inequalities~\eqref{zakresalphab} do not contradict each other if and only if
\begin{equation}\label{zakresgamma}
\frac{1}{2}\beta+\frac{2\gamma}{\beta} < 2\beta.
\end{equation}
Thus, if $\frac{2\gamma}{\beta^2}<3/2$, then there is a~non-empty set of $\alpha b$ such that inequalities~\eqref{zakresalphab} hold, and for such $\alpha b$ inequalities~\eqref{zakresa} determine a~non-empty set of $a$. Now, we show that if~\eqref{notcontra} and the first inequality of~\eqref{cond:stab:m11m22-verA} hold, then the second inequality of~\eqref{cond:stab:m11m22-verA} also holds. Note that the assumption $\alpha b<\beta$ means that the right-hand side of the second inequality of~\eqref{cond:stab:m11m22-verA} is a~decreasing function of $a$ (of course, we need to use here the first inequality of~\eqref{cond:stab:m11m22-verA}, which guarantees the positivity of the expression under the second power), while the left-hand side is an increasing function of $a$. Since our assumptions imply that $a<\alpha b$, it is enough to check whether the second inequality of~\eqref{cond:stab:m11m22-verA} holds for $a=\alpha b$. Rearranging terms, we obtain
\[
27\alpha^6 b^6 <27\Bigl(\alpha^3b^3 + \alpha b\bigl(2\gamma-\beta^2\bigr)\Bigr)^2.
\]
The above inequality obviously holds, as $2\gamma>\beta^2$, which completes the proof. \end{proof} For the general case of system~\eqref{model_resc_suma} with the shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE}, the characteristic function $D(\lambda)$ is given by~\eqref{eq:chr:max} and the auxiliary function $F$ is given by~\eqref{eq:aux:ogolnie}. It is easy to see that the highest power of $\omega$ is $4+2m_M$ and its coefficient is $1$. Moreover, we have $F(0)<0$. Thus, there exists $\omega_0>0$ such that $F(\omega_0)=0$ and the roots cross the imaginary axis from the left to the right half-plane.
Thus, using Theorem~1 from~\cite{cook86ekvacioj}, we can deduce the following \begin{rem} If all roots of $F(\omega)$ given by~\eqref{eq:aux:ogolnie} are simple, and the trivial steady state of system~\eqref{model_resc_suma} with the shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE} is locally asymptotically stable for $\sigma =0$, then it loses its stability due to the Hopf bifurcation for some $\sigma_0>0$ and it is unstable for $\sigma>\sigma_{\infty}$ with some $\sigma_{\infty}\ge \sigma_0$. \end{rem} It seems that the case of multiple roots of $F(\omega)$ is non-generic; however, we will not prove it here. \subsubsection{Piecewise linear probability densities} For the function defined by~\eqref{eq:zabek} we have \[ \int_0^{\infty} f_i(\tau)\e^{-\lambda\tau}\dd \tau = \frac{\e^{-\lambda\sigma}}{\lambda^2\varepsilon^{2}} \left(\e^{\lambda\varepsilon}+\e^{-\lambda\varepsilon}-2\right) = \frac{2\e^{-\lambda\sigma}}{\lambda^2\varepsilon^{2}}\Bigl(\cosh(\lambda\varepsilon)-1\Bigr) \] and thus the characteristic function for the trivial steady state of system~\eqref{model_resc_suma} reads \[ W(\lambda) = \lambda^2 +\lambda \int_0^{\infty}\Bigl(rf_1(\tau)+\alpha b f_2(\tau)\Bigr) \e^{-\lambda\tau}\dd \tau +\gamma\int_0^{\infty} f_1(\tau) \e^{-\lambda\tau}\dd \tau\,, \] which is equivalent to \begin{equation}\label{eq:char:zabek:og} W(\lambda) = \lambda^2 +(\lambda r+\gamma)\frac{2\e^{-\lambda\sigma}}{\lambda^2\varepsilon^{2}}\Bigl(\cosh(\lambda\varepsilon)-1\Bigr)+\lambda\alpha b \frac{2\e^{-\lambda\sigma}}{\lambda^2\varepsilon^{2}}\Bigl(\cosh(\lambda\varepsilon)-1\Bigr). \end{equation} Finding zeros of function~\eqref{eq:char:zabek:og} is not a~trivial task.
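The closed-form transform above can be cross-checked numerically. The sketch below assumes that the piecewise linear density of~\eqref{eq:zabek} is the symmetric triangular (``tent'') density centred at $\sigma$ with support $[\sigma-\varepsilon,\sigma+\varepsilon]$ (the shape the derived formula suggests), and compares composite Simpson quadrature of the Laplace transform with the closed form; all parameter values are illustrative placeholders.

```python
import math

def tent_density(tau, sigma, eps):
    """Assumed symmetric triangular ('tent') shape of the piecewise linear
    density (eq:zabek): support [sigma - eps, sigma + eps], peak 1/eps."""
    u = abs(tau - sigma)
    return (eps - u) / eps**2 if u <= eps else 0.0

def laplace_numeric(lam, sigma, eps, n=2000):
    """Composite Simpson quadrature of int f(tau) e^{-lam*tau} dtau over the
    support; n is even and the kink at tau = sigma falls on a panel boundary,
    so the rule keeps its full accuracy."""
    a, b = sigma - eps, sigma + eps
    h = (b - a) / n
    s = 0.0
    for k in range(n + 1):
        tau = a + k * h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        s += w * tent_density(tau, sigma, eps) * math.exp(-lam * tau)
    return s * h / 3

def laplace_closed(lam, sigma, eps):
    """Closed form derived in the text:
    e^{-lam*sigma} (e^{lam*eps} + e^{-lam*eps} - 2) / (lam*eps)^2."""
    return (math.exp(-lam * sigma)
            * (math.exp(lam * eps) + math.exp(-lam * eps) - 2)
            / (lam * eps) ** 2)

print(laplace_numeric(0.7, 1.0, 0.4), laplace_closed(0.7, 1.0, 0.4))
```

For $\lambda=0.7$, $\sigma=1$, $\varepsilon=0.4$ the two values agree to high accuracy, confirming the computation above.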
If the characteristic function has the form $W(\lambda)=P(\lambda)+Q(\lambda)\e^{-\lambda\tau}$, where $P$ and $Q$ are polynomials, one could use the Mikhailov Criterion~\cite{uf04jbs} to estimate the number of zeros of the characteristic function lying in the right half of the complex plane. However, in the considered case we do not deal with polynomials. Therefore, we use a~generalised Mikhailov Criterion (for details see~\ref{appen}). Clearly, $\cosh(z)=\sum_{n=0}^{\infty}\frac{z^{2n}}{(2n)!}$, for $z\in\varmathbb{C}$, is an analytic function, hence $\frac{\cosh(z)-1}{z^2}$ is also analytic. Consequently, the function $W$ defined by~\eqref{eq:char:zabek:og} is also analytic. Moreover, condition~\eqref{charfun} obviously holds. Thus, we can apply the generalised Mikhailov Criterion. Considering $\sigma\geq\varepsilon>0$, we have \begin{equation}\label{char:zabek:sigmy} W(\lambda) = \lambda^2 +\Big(\lambda \beta + \gamma\Big)g\bigl(\lambda\varepsilon\bigr)\e^{-\lambda\sigma}\,, \end{equation} where \begin{equation}\label{def:betagamag1} g(x) = \frac{2(\cosh x-1)}{x^2}\,, \quad g_1(x) =g(ix) = \frac{2 - 2\cos x}{x^2} . \end{equation} Thus, \begin{equation}\label{reWzabek} \varmathbb{R}e (W(i\omega)) = -\omega^2 + g_1\bigl(\omega\varepsilon\bigr)\Big(\gamma\cos(\omega\sigma)+ \beta\omega\sin(\omega\sigma)\Big)\,, \end{equation} \begin{equation}\label{imWzabek} \textrm{Im}(W(i\omega)) = g_1\bigl(\omega\varepsilon\bigr)\Big( \beta \omega\cos(\omega\sigma) - \gamma\sin(\omega\sigma)\Big) \,. \end{equation} Here we formulate a~sufficient condition for the stability of the trivial steady state of system~\eqref{model_resc_suma} for the piecewise linear distributions defined by~\eqref{eq:zabek} with $\sigma\geq\varepsilon>0$. This condition is clearly not necessary, as we show numerically in the next section. \begin{prop}\label{prop:zabek:stab} Let $b>\mu$, $\beta$ and $\gamma$ be defined by~\eqref{def:betagamag}.
Then for \begin{enumerate}[\upshape (i)] \item \begin{equation}\label{cond:stab:zabek} \sigma< \frac{\pi}{\beta+\sqrt{\beta^2+4\gamma}+\frac{\gamma}{\beta}\pi} \end{equation} the trivial steady state of system~\eqref{model_resc_suma} with the piecewise linear distributions defined by~\eqref{eq:zabek} and $\sigma\ge\varepsilon>0$ is locally asymptotically stable. \item\begin{equation}\label{cond:nstab:zabeka} \frac{\beta}{\gamma}<\sigma < \frac{2\pi}{\beta+\sqrt{\beta^2+4\gamma}} \end{equation} the trivial steady state of system~\eqref{model_resc_suma} with the piecewise linear distributions defined by~\eqref{eq:zabek} with $\sigma\geq\varepsilon>0$ is unstable. \end{enumerate} \end{prop} \begin{proof} We use the generalised Mikhailov Criterion (for details see~\ref{appen}) to show that the change of the argument of $W(i\omega)$ as $\omega$ varies from 0 to $\infty$ is equal to $\pi$. In fact, we show that the behaviour of the hodograph is more restrictive, namely that for some $\bar\omega>0$ we have $\varmathbb{R}e(W(i\omega))<0$ for $\omega>\bar \omega$, while $\textrm{Im}(W(i\omega))>0$ for $\omega\in [0,\bar\omega]$. To this end, first note that if $g_1(\omega\varepsilon) = 0$ and $\omega>0$, then $\varmathbb{R}e(W(i\omega))<0$. Thus, without loss of generality we assume that $g_1(\omega\varepsilon) > 0$. Let us first estimate the real part of $W(i\omega)$ given by~\eqref{reWzabek}. Since $b>\mu$ and $g_1(\varepsilon\omega)\le 1$ we have \[ \varmathbb{R}e(W(i\omega)) \le -\omega^2 + \gamma +\omega\,\beta =: g_R(\omega). \] Now, let us consider the imaginary part of $W(i\omega)$. Since we are interested only in its sign and $g_1(\varepsilon\omega)\ge 0$, it is enough to consider the sign of \[ \omega\beta\cos(\omega\sigma)- \gamma\sin(\omega\sigma)=g_I(\omega\sigma), \] where \[ g_I(x) = \frac{\beta}{\sigma}\, x\cos x - \gamma \sin x. \] Note that~\eqref{cond:stab:zabek} implies $\beta>\gamma\sigma$, and thus $g_I(x)>0$ for $x$ close to 0.
On the other hand, $g_I(\pi/2)=-\gamma<0$. It is also easy to check that $g_I(x)$ has a~unique zero in $(0,\pi/2)$ for $\beta>\gamma\sigma$. Below, we give an estimate of this zero. Since $\tan x < \frac{\pi}{2} \frac{x}{\frac{\pi}{2}-x}$ for $x\in (0,\pi/2)$, we have \begin{equation}\label{nax0} \frac{\beta}{\gamma\sigma} x \ge \frac{\pi}{2}\frac{x}{\frac{\pi}{2}-x} > \tan x. \end{equation} A~simple algebraic manipulation yields that the first inequality of~\eqref{nax0} is fulfilled for all \[ 0< x \le \frac{\pi}{2}\left(1-\sigma\cdot\frac{\gamma}{\beta}\right)<\frac{\pi}{2}. \] Thus, as long as $\omega\sigma \le \frac{\pi}{2}\left(1-\sigma\cdot\frac{\gamma}{\beta}\right)$ we have $\textrm{Im}(W(i\omega))>0$. However, $g_R$ is a~quadratic polynomial with $g_R(0)>0$. Moreover, the coefficient of $\omega^2$ is negative and the positive root of $g_R$ is equal to $\frac{1}{2}\left(\beta+\sqrt{\beta^2+4\gamma}\right)$. An easy algebraic manipulation shows that condition~\eqref{cond:stab:zabek} is equivalent to \[ \frac{\sigma}{2}\left(\beta+\sqrt{\beta^2+4\gamma}\right) < \frac{\pi}{2}\left(1-\sigma\cdot\frac{\gamma}{\beta}\right) \] and thus $g_R(\omega)<0$ for all $\sigma\omega>\frac{\pi}{2}\left(1-\sigma\cdot\frac{\gamma}{\beta}\right)$, which completes the proof of part (i). The idea of the proof of part (ii) is similar to that of part (i). However, this time we want to show that for $\sigma\omega\in(0,\pi)$ the imaginary part $\textrm{Im}(W(i\omega))<0$, while for $\sigma\omega>\pi$ the real part $\varmathbb{R}e(W(i\omega))<0$. This indicates that the hodograph passes below the origin of the complex plane as $\omega$ tends to $+\infty$, and hence the change of the argument of $W(i\omega)$ differs from $\pi$. Hence, from the generalised Mikhailov Criterion it is clear that the trivial steady state of system~\eqref{model_resc_suma} is unstable.
Note that for $\omega\sigma\in(\pi/2,\pi)$ we have $\varmathbb{R}e(W(i\omega))<0$ due to the fact that on this interval cosine is negative and sine is positive. Consider $\omega\sigma\in(0,\pi/2)$. Then cosine is positive and the inequality $\beta \omega\cos(\omega\sigma) - \gamma\sin(\omega\sigma)<0$ is equivalent to \begin{equation}\label{tan} \frac{\beta}{\sigma\gamma} \omega\sigma < \tan(\omega\sigma). \end{equation} The right-hand side of~\eqref{tan} is a~strictly increasing convex function on $(0,\pi/2)$ and has first derivative at 0 equal to 1. Thus, the first inequality of \eqref{cond:nstab:zabeka} implies~\eqref{tan}. Now, it is enough to show that $g_R(\pi/\sigma) <0$. A~straightforward calculation yields \[ -\left(\frac{\pi}{\sigma}\right)^2+\beta \frac{\pi}{\sigma} +\gamma <0. \] Solving this quadratic inequality we get \[ \sigma < \frac{2\pi}{\beta+\sqrt{\beta^2+4\gamma}}\,, \] which is the second inequality of \eqref{cond:nstab:zabeka}. \end{proof} \begin{rem} Note that \eqref{cond:stab:zabek} implies $\sigma<\beta/\gamma$; consequently, conditions \eqref{cond:nstab:zabeka} and \eqref{cond:stab:zabek} are mutually exclusive. \end{rem} \begin{thm}\label{prop:zabek:bif} If the trivial steady state of system~\eqref{model_resc_suma} with the piecewise linear distributions defined by~\eqref{eq:zabek} and $\sigma\ge\varepsilon>0$ is locally asymptotically stable for $\sigma=\varepsilon$ and the function \[ F(\omega) = \omega^4 - \varepsilon^2\Bigl(\omega^2\beta^2+\gamma^2\Bigr) g_1^2(\varepsilon \omega), \] where $g_1$ is defined by~\eqref{def:betagamag1}, has no multiple positive zeros, then there exist $\sigma_0$ and $\bar\sigma$ with $\varepsilon<\sigma_0\le \bar\sigma$ such that the steady state is locally asymptotically stable for $\sigma \in [\varepsilon,\sigma_0)$ and unstable for $\sigma>\bar\sigma$. At $\sigma=\sigma_0$ the Hopf bifurcation occurs. \end{thm} \begin{proof} For the considered case the characteristic function is given by~\eqref{char:zabek:sigmy}.
It can be easily seen that if $W(i\omega)=0$, then \[ \bigl|(i\omega)^2\bigr|^2 = \varepsilon^2 \bigl|i\omega \beta +\gamma\bigr|^2 \Bigl|g\bigl(i\omega\varepsilon\bigr)\Bigr|^2, \] and thus \[ \omega^4 = \varepsilon^2\Bigl(\omega^2\beta^2+\gamma^2\Bigr) g_1^2(\varepsilon \omega)\,, \] where $g_1$ is defined by~\eqref{def:betagamag1}. The function $F$ has the following properties \[ F(0) = -\varepsilon^2\gamma^2<0\,, \quad \text{ and } \quad \lim_{\omega\to+\infty}F(\omega) = +\infty. \] Hence, we conclude that there exists $\omega_0>0$ such that $F(\omega_0)=0$ and $F'(\omega_0)\ge 0$. By assumption all positive zeros of $F(\omega)$ are simple, thus $F'(\omega_0)>0$ and by Theorem~1 from~\cite{cook86ekvacioj} the assertion of the theorem follows.\end{proof} \begin{rem} Note that the case where the function $F$ has multiple positive zeros is not generic. \end{rem} \begin{proof} Consider the case of multiple positive zero(s) of $F$. First, note the following facts: \begin{enumerate}[(a)] \item $F$ is an~analytic function, so it has isolated zeros; \item for $\omega>\max\{\varepsilon\sqrt{\beta^2+\gamma^2},1\}$ the inequality $F(\omega)>0$ holds; \item if $g_1(\varepsilon\omega)=0$, then $F(\omega)=(2k\pi/\varepsilon)^4>0$ for some $k\in\varmathbb{N}$, $k\ge 1$. \end{enumerate} Facts (a) and (b) imply that $F$ has a~finite number of zeros. Suppose that for some values $\beta_0$ and $\gamma_0$ the function $F$ has multiple zeros at $\omega_{m,j}$, $j=1,2,\ldots,j_M$, where $j_M\ge 1$ is a~natural number. Now, consider the interval $I=[0,\max\{\varepsilon\sqrt{\beta^2+\gamma^2},1\}+1]$. Clearly, $\partial F/\partial \gamma <0$ holds for all $\omega\neq 2k\pi/\varepsilon$. Moreover, $F'$ is also an analytic function, so it has a~finite number of zeros in the interval $I$.
Thus, we can choose $0<\epsilon_{\gamma}<1$ so small that for all $\gamma\in (\gamma_0-\epsilon_{\gamma},\gamma_0+\epsilon_{\gamma})\setminus\{\gamma_0\}$ the function $F$ has no multiple root in the interval $I$. This completes the proof. \end{proof} \section{Stability results for parameters estimated by Hahnfeldt~\textit{et al.}\xspace}\label{sec:num} In the previous section, we analytically investigated the stability of the trivial steady state of the family of angiogenesis models with distributed delays~\eqref{model_resc_suma} and distributions characterized by the probability densities $f_i$ given by~\eqref{eq:zabek} and~\eqref{eq:Erlang}--\eqref{eq:defE}. System~\eqref{model_resc_suma} is a~rescaled version of~\eqref{modeldis}, where the trivial steady state of~\eqref{model_resc_suma} corresponds to the positive steady state of~\eqref{modeldis}. In this section we illustrate our results for particular model parameters considered earlier in the literature, that is, the parameters estimated by Hahnfeldt~\textit{et al.}\xspace in the case of the model without treatment (see~\cite{hahnfeldt99cancer}). Namely, we consider the following set of parameters: \begin{equation}\label{params} \mu=0,\quad a_H=8.73\times 10^{-3},\quad b=5.85, \quad r=0.192, \end{equation} and we take the function $h(\theta) = -\ln \theta$. We present stability results for the distributed models for two different values of the parameter $\alpha$. To keep the description short, we refer to model~\eqref{model_resc_suma} with $\alpha=1$ as the Hahnfeldt~\textit{et al.}\xspace model, and to model~\eqref{model_resc_suma} with $\alpha=0$ as the d'Onofrio-Gandolfi model. However, in this paper we also consider positive values of $\mu$ that can be interpreted as a~constant anti-angiogenic treatment.
\subsection{Erlang probability densities} First, we focus on model~\eqref{model_resc_suma} with the non-shifted Erlang distributions given by~\eqref{eq:Erlang}--\eqref{eq:defE} with $\sigma=0$, $i=1,2$. For $m_1=m_2=m$ the expression $m/a$ describes the average delay. On the other hand, in the more general case of $m_1\neq m_2$ a~similar interpretation leads to the conclusion that the average delays are $m_1/a$ and $m_2/a$. One could also consider arbitrary values $a=a_1$ for $f_1$ and $a=a_2$ for $f_2$, but then the analytical expressions become very complicated and we decided not to study these cases here. \begin{figure} \caption{Dependence of the critical average delay value $\tau_{\kryt,0}$ on the treatment strength $\mu$ in the case of the non-shifted Erlang distributions given by~\eqref{eq:Erlang}--\eqref{eq:defE}.} \label{fig.zalodmu} \end{figure} \begin{figure} \caption{Dependence of the critical value $a_{\kryt,0}$ of the parameter $a$ on $\mu$ for the trivial steady state of system~\eqref{model_resc_suma} with the non-shifted Erlang distributions.} \label{fig.akr} \end{figure} In Fig.~\ref{fig.zalodmu} the dependence of the critical average delay value $\tau_{\kryt,0}$ with $m_i=m$ ($i=1,2$) on the constant treatment coefficient $\mu$ is presented. In the case of the non-shifted Erlang distributions the critical average delay is defined by $\tau_{\kryt,0} = m/a_{\kryt,0}$, and the left-hand and right-hand panels present results for the Hahnfeldt~\textit{et al.}\xspace model ($\alpha=1$) and the d'Onofrio-Gandolfi model ($\alpha=0$), respectively. The curves were calculated from the stability conditions given in Proposition~\ref{thm:non-shiftE}. The stability regions of the trivial steady state of~\eqref{model_resc_suma} with the non-shifted Erlang distributions are below the curves, while the instability regions are above them. It is worth emphasising that there is a~qualitative difference between the cases $m=1$ and $m=2,3$ for the Hahnfeldt~\textit{et al.}\xspace model, while there is no such difference for the d'Onofrio-Gandolfi model.
Clearly, for the Hahnfeldt~\textit{et al.}\xspace model and $m=1$ the critical average delay $\tau_{\kryt,0}$ tends to $+\infty$ as $\mu\to b=5.85$, while for $m=2$ and $m=3$ we have $\tau_{\kryt,0}\bigl|_{\mu=b}=4/\beta\approx 0.662$ and $\tau_{\kryt,0}\bigl|_{\mu=b}=8/(3\beta)\approx 0.441$, respectively. Nevertheless, for the d'Onofrio-Gandolfi model the critical average delay values are almost the same for $m=1,2$ and $m=3$ for $\mu\in [0,b)$, and the behaviour is qualitatively comparable with the result for the Hahnfeldt~\textit{et al.}\xspace model for $m=1$. Moreover, in the considered cases the critical average delay values in the case of the non-shifted Erlang distributions are increasing functions of $\mu$. This means that the increase of treatment increases the stability region. \begin{figure} \caption{Dependence of the critical average delay $\tau_{\kryt,\sigma}$ on the treatment strength $\mu$ in the case of the shifted Erlang distributions.} \label{fig.sigmakr} \end{figure} In Fig.~\ref{fig.akr} we plot the critical value of the parameter $a$, that is $a_{\kryt,0}$, for the trivial steady state of system~\eqref{model_resc_suma} with the non-shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE} (i.e. $\sigma=0$) calculated based on Proposition~\ref{thm:non-shiftE}. Here, the regions above the plotted curves correspond to the stability regions for the particular choices of the parameters $m_i$, $i=1,2$, while those below correspond to the instability regions. It should be clarified that for the case $\alpha=1$, $m_1=2$ and $m_2=1$ the steady state is stable for all values of $a$. We see that all plotted functions are decreasing as functions of the strength of the constant treatment $\mu$. However, already for the non-shifted Erlang probability densities we observe large differences between the two models regarding the model dynamics.
For the Hahnfeldt~\textit{et al.}\xspace model we have the largest stability region for $m_1=2$ and $m_2=1$, while in the case of the d'Onofrio-Gandolfi model for $m_1=1$ and $m_2=2$. Moreover, for the case $m_1=2$ and $m_2=1$ the steady state is stable independently of the parameter $a$ for the Hahnfeldt~\textit{et al.}\xspace model, while in the same case of $m_i$ for the d'Onofrio-Gandolfi model the steady state might change its stability with the change of $a$. Additionally, for $\alpha=0$ the sizes of the stability regions for $m_i=2$ and $m_1=2$, $m_2=1$ are the same, which is not the case for $\alpha=1$. The stability of the steady state for the non-shifted Erlang distributions can be determined by the Routh-Hurwitz criterion, although it may require tedious calculations, in particular for large values of the parameters $m_1$ and/or $m_2$. On the other hand, the analytical results we obtained for the shifted Erlang distributions are rather limited. The results presented in Theorem~\ref{thm:Erlang} are only existence results and give no information on the magnitude of the critical value of $\sigma$ for which stability is lost. Although it is possible to derive an expression for the critical value of $\sigma$, it would involve $\arccos$ of some algebraic function of $\omega_0$, whose value cannot, in general, be determined analytically, and the final result would not be informative. In consequence, we decided to calculate numerically the critical values of the average delay $\tau_{\kryt,\sigma}$ for the considered model with the shifted Erlang distributions for parameters given by~\eqref{params}. Note that the critical average delay for the model with the shifted Erlang distributions depends on $\sigma$ directly and is given by $\tau_{\kryt,\sigma}=m/a+\sigma_{\kryt}$, where $\sigma_{\kryt}$ is the critical value of $\sigma$ defined in Theorem~\ref{thm:Erlang}.
In Fig.~\ref{fig.sigmakr} we present the stability results for the trivial steady state of system~\eqref{model_resc_suma} with the shifted Erlang probability densities given by~\eqref{eq:Erlang}--\eqref{eq:defE}. In particular, in Fig.~\ref{fig.sigmakr} (left column) we plot the dependence of $\tau_{\kryt,\sigma}$ on the treatment strength $\mu$ for the particular choices of the parameters $m_i$ and the particular choices of the parameter $a$ for the model with $\alpha=1$. Similarly as in Fig.~\ref{fig.zalodmu}, the regions below the plotted curves correspond to the stability regions. Additionally, we plot the $\tau_{\kryt,0}(\mu)$ curves to indicate the thresholds for which the destabilizations occur in the case $\sigma=0$. If the curve for a~considered $a$ is above the $\tau_{\kryt,0}$ curve, then the steady state is unstable for $\sigma=0$ and remains unstable for all $\sigma>0$. In both cases $m_i=1$ and $m_i=2$, an increase of the value of the parameter $a$ defining the shape of the probability densities (which is equivalent to a~decrease of the average delay) implies a~decrease of the stability regions of the steady state. Since the corresponding curves for $\alpha=0$ are very close to each other, instead of plotting them directly we decided to plot the differences $\sigma_+=\tau_{\kryt,0}-\tau_{\kryt, \sigma}$ between the critical values of the average delays $\tau_{\kryt,0}$ in the case $\sigma=0$ and the critical average delays $\tau_{\kryt,\sigma}$ for the chosen values of $a$. Clearly, whenever a~plotted curve reaches the $\mu$ axis, the trivial steady state becomes unstable for all $\sigma\ge 0$. From Fig.~\ref{fig.sigmakr} (right column) we deduce that the stability areas again decrease with the increase of the parameter $a$. \subsection{Piecewise linear distributions} Let us focus on the stability and instability regions of the trivial steady state of system~\eqref{model_resc_suma} with the piecewise linear distributions defined by~\eqref{eq:zabek}.
Since in this case there is a~smaller number of parameters defining the shape of the probability density, we are able to plot the stability regions in the $(\sigma, \varepsilon)$ plane, see Fig.~\ref{fig.zabek}. Note that for $\varepsilon>\sigma$ system~\eqref{model_resc_suma} becomes a~neutral system, which is beyond the scope of our considerations; hence this region is greyed out. Clearly, the estimates obtained in Proposition~\ref{prop:zabek:stab} are rough. However, solving numerically the system $\varmathbb{R}e W(i\omega)=0$, $\textrm{Im} W(i\omega)=0$, where the real and imaginary parts of the characteristic function are given by~\eqref{reWzabek} and~\eqref{imWzabek}, respectively, we are able to calculate the stability region and the curve on which the stability change occurs. This curve is plotted in Fig.~\ref{fig.zabek} as a~thick solid blue line. Additionally, we plot (in both panels) the vertical dashed lines that denote the conditions guaranteeing stability (see Proposition~\ref{prop:zabek:stab}(i)) or instability (see Proposition~\ref{prop:zabek:stab}(ii)) of the trivial steady state. For the Hahnfeldt~\textit{et al.}\xspace model (left panel in Fig.~\ref{fig.zabek}) the condition~\eqref{cond:stab:zabek} from Proposition~\ref{prop:zabek:stab} is denoted by a~vertical dashed line. In that case, the condition \eqref{cond:nstab:zabeka} cannot be fulfilled, since the expression on the right-hand side of \eqref{cond:nstab:zabeka} is always smaller than $\beta/\gamma$. For the d'Onofrio-Gandolfi model the dashed vertical line to the left of the solid vertical line indicates the condition~\eqref{cond:stab:zabek}. The two dashed vertical lines to the right of the solid one indicate the region in which the condition~\eqref{cond:nstab:zabeka} is fulfilled. These numerical results show that the conditions from Proposition~\ref{prop:zabek:stab} are only sufficient but not necessary.
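The numerical procedure just described, i.e.\ solving the system $\varmathbb{R}e\, W(i\omega)=0$, $\textrm{Im}\, W(i\omega)=0$, can be sketched as follows. On the branch $\omega\sigma\in(0,\pi/2)$ the imaginary part~\eqref{imWzabek} vanishes when $\tan(\omega\sigma)=\beta\omega/\gamma$, which reduces the system to a single equation in $\omega$ solvable by bisection. The values of $\beta$, $\gamma$ and $\varepsilon$ below are hypothetical placeholders, not the values obtained from the estimated parameters.

```python
import math

# Hypothetical illustrative parameters (beta and gamma are defined by
# (def:betagamag) in the text; the numbers below are placeholders).
BETA, GAMMA, EPS = 1.0, 0.5, 0.3

def g1(x):
    """g1(x) = (2 - 2 cos x)/x^2, extended by continuity with g1(0) = 1."""
    return 1.0 if abs(x) < 1e-8 else (2.0 - 2.0 * math.cos(x)) / x**2

def re_w(om, sig):
    """Re W(i omega), cf. eq. (reWzabek)."""
    return -om**2 + g1(om * EPS) * (GAMMA * math.cos(om * sig)
                                    + BETA * om * math.sin(om * sig))

def im_w(om, sig):
    """Im W(i omega), cf. eq. (imWzabek)."""
    return g1(om * EPS) * (BETA * om * math.cos(om * sig)
                           - GAMMA * math.sin(om * sig))

def switch_point():
    """First purely imaginary root of W.  On the branch om*sig in (0, pi/2),
    Im W = 0 gives tan(om*sig) = BETA*om/GAMMA; substituting into Re W = 0
    leaves one equation h(om) = 0, solved here by scan plus bisection."""
    h = lambda om: om**2 - g1(om * EPS) * math.hypot(GAMMA, BETA * om)
    hi = 1e-6                       # h is negative near 0 (h(0+) = -GAMMA)
    while h(hi) < 0:                # coarse scan for the first sign change
        hi += 0.01
    lo = hi - 0.01
    for _ in range(80):             # bisection refinement
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    om = 0.5 * (lo + hi)
    sig = math.atan2(BETA * om, GAMMA) / om   # om*sig lies in (0, pi/2)
    return om, sig

om0, sig0 = switch_point()
print("stability switch point:", om0, sig0)
```

Repeating this for a range of $\varepsilon$ values traces out the stability switch curve in the $(\sigma,\varepsilon)$ plane.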
From the figure we also deduce that the stability region for the trivial steady state for the d'Onofrio-Gandolfi model is smaller than the one for the distributed Hahnfeldt~\textit{et al.}\xspace model. Moreover, the critical values of $\sigma$ computed by us for $\varepsilon\to 0$, for both $\alpha=1$ and $\alpha=0$, agree with those calculated by Piotrowska~and~Fory{\'s} in \cite{Piotrowska2011} for the models with discrete equal delays. That agrees with the intuition, since for $\varepsilon\to 0$ the models given by~\eqref{model_resc_suma} with the piecewise linear distributions defined by~\eqref{eq:zabek} reduce to the models with double discrete delay. \begin{figure} \caption{Stability and instability regions of the trivial steady state of system~\eqref{model_resc_suma} with the piecewise linear distributions defined by~\eqref{eq:zabek} in the $(\sigma,\varepsilon)$ plane.} \label{fig.zabek} \end{figure} \begin{figure} \caption{Stability and instability regions of the trivial steady state of system~\eqref{model_resc_suma} with the piecewise linear distributions defined by~\eqref{eq:zabek} for $\mu>0$.} \label{fig.zabekmu} \end{figure} For $\mu>0$ and the Hahnfeldt~\textit{et al.}\xspace model we observe a~shift of the stability switch curve to the right with hardly any change in its shape, indicating that the stability region increases with the increase of the parameter $\mu$, see the left panel in Fig.~\ref{fig.zabekmu}. This shift is very small. Moreover, for $\mu=b$ we obtain the limiting values: $0.26$ for $\varepsilon\to 0$ and $0.33$ for $\varepsilon\to \sigma$. On the other hand, for the d'Onofrio-Gandolfi model the change is more visible. The lines start to lean to the right. For small $\mu$, the change is quite small, compare with the right panel in Fig.~\ref{fig.zabek}. For example, for $\mu=3$ we obtain $\sigma=0.509$ for $\varepsilon\to 0$ and $\sigma = 0.59$ for $\varepsilon=\sigma$. As $\mu$ approaches $b$ the changes become more rapid. For $\mu=5.7$ we have $\sigma=5.326$ for $\varepsilon\to 0$ and $\sigma = 5.632$ for $\varepsilon=\sigma$, while for $\mu=b$ we have $\sigma=8.181$ and $\sigma =10.065$ for $\varepsilon\to 0$ and $\varepsilon=\sigma$, respectively.
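As a~numerical sanity check of Proposition~\ref{prop:zabek:stab}(i), the two hodograph claims used in its proof, namely $\textrm{Im}\, W(i\omega)>0$ for $\omega\in(0,\bar\omega]$ with $\bar\omega=\frac{\pi}{2\sigma}\bigl(1-\sigma\frac{\gamma}{\beta}\bigr)$ and $\varmathbb{R}e\, W(i\omega)<0$ for $\omega>\bar\omega$, can be verified on a~grid. The parameter values below are hypothetical and chosen only so that~\eqref{cond:stab:zabek} and $\sigma\ge\varepsilon$ hold.

```python
import math

# Hypothetical parameter values satisfying (cond:stab:zabek) and sigma >= eps;
# they are placeholders, NOT the estimated ones.
BETA, GAMMA, EPS, SIGMA = 1.0, 0.5, 0.3, 0.5

threshold = math.pi / (BETA + math.sqrt(BETA**2 + 4 * GAMMA)
                       + GAMMA * math.pi / BETA)
assert EPS <= SIGMA < threshold   # the sufficient condition (cond:stab:zabek)

def g1(x):
    """g1(x) = (2 - 2 cos x)/x^2 with g1(0) = 1 by continuity."""
    return 1.0 if abs(x) < 1e-8 else (2.0 - 2.0 * math.cos(x)) / x**2

def re_w(om):
    """Re W(i omega), eq. (reWzabek)."""
    return -om**2 + g1(om * EPS) * (GAMMA * math.cos(om * SIGMA)
                                    + BETA * om * math.sin(om * SIGMA))

def im_w(om):
    """Im W(i omega), eq. (imWzabek)."""
    return g1(om * EPS) * (BETA * om * math.cos(om * SIGMA)
                           - GAMMA * math.sin(om * SIGMA))

# om_bar corresponds to om*sigma = (pi/2)(1 - sigma*gamma/beta)
om_bar = (math.pi / (2 * SIGMA)) * (1 - SIGMA * GAMMA / BETA)

low = [0.001 + k * (om_bar - 0.001) / 2000 for k in range(2001)]
high = [om_bar + k * 0.05 for k in range(1, 2001)]
assert all(im_w(om) > 0 for om in low)    # hodograph stays above the real axis
assert all(re_w(om) < 0 for om in high)   # then moves into the left half-plane
print("hodograph claims hold on the test grid")
```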
\section{Discussion}\label{sec:dis} Although discrete delays often appear in models describing various biological phenomena, e.g.~\cite{bocharov2000,Erneux09,mbab12nonrwa,monk03currbiol,piotrowskaMBEmut,Piotrowska2012RWA,uf02jtb,mbufjp11jmaa,jmjpmbuf11bmb,cooke99jmb,enatsu11dcdsB} and references therein, it is obvious that in natural processes delay, if it exists, is usually somehow distributed around some average value due to differences between individuals and/or environmental noise. Thus, we believe that distributed delays are more realistic. However, in some cases, when the distribution has small variance and compact support, discrete delays may be treated as a~good approximation. Clearly, models with discrete and distributed delays might have different dynamics in some ranges of parameters. In this paper we have compared the behaviour of the two types of systems in the context of the stability of the steady state for a~particular family of models describing the tumour angiogenesis process. \begin{table}[!htb] \caption{The critical average delay value $\tau_{\kryt,0}$ in the case of non-shifted Erlang distributions given by~\eqref{eq:Erlang}--\eqref{eq:defE} with $m_1=m_2=m$ and $a_1=a_2=a$ for arbitrarily chosen values of $\mu$ for system~\eqref{model_resc_suma}. The critical average delay is defined by $\tau_{\kryt,0} = m/a_{\kryt,0}$, where $a_{\kryt,0}$ is the critical value of the parameter $a$ for which the stability change occurs (see Theorem~\ref{thm:non-shiftE}).
} \centerline{ \begin{tabular}{l|r|r|r|r||r|r|r|r|} &\multicolumn{4}{|c||}{Hahnfeldt~\textit{et al.}\xspace model ($\alpha=1$)} &\multicolumn{4}{c|}{d'Onofrio-Gandolfi model ($\alpha=0$) } \\ \hline $\mu$ & $0$ & $2$ & $4$ & $5.85$ & $0$ & $2$ & $4$ & $5.85$\\ \hline $m_1=m_2=1$ & 8.069 & 12.261 & 25.515 & $\infty$ & 0.256 & 0.390 & 0.811 & $\infty$ \\ \hline $m_1=m_2=2$ & 0.611 & 0.628 & 0.645 & 0.662 & 0.253 & 0.382 & 0.780 & 20.833 \\ \hline $m_1=m_2=3$ & 0.421 & 0.428 & 0.435 & 0.441 & 0.252 & 0.380 & 0.770 & 13.889 \\ \hline \end{tabular}}\label{tab:kr} \end{table} The presented analysis of the distributed models includes: the uniqueness, positivity and global existence of solutions, the existence of the steady state, and the possibility of the existence of stability switches. We have analytically derived conditions, involving the parameters defining the probability densities, guaranteeing the stability or instability of the steady state. We have also shown that in some cases a~single stability switch is observed and the Hopf bifurcation occurs. For the particular set of parameters, estimated by Hahnfeldt~\textit{et al.}\xspace, we have investigated the stability regions for the steady state. We compared the results for both the Hahnfeldt~\textit{et al.}\xspace ($\alpha=1$) and the d'Onofrio-Gandolfi ($\alpha=0$) models in the case of different probability densities. For both models we considered the shifted and non-shifted Erlang distributions as well as the piecewise linear distributions. We want to emphasise here that, in the general case, it is hard to say for which model, Hahnfeldt~\textit{et al.}\xspace or d'Onofrio-Gandolfi, the stability region is larger, since it strongly depends on the considered probability densities and their shapes. However, we observe certain similarities. First, for $\mu=0$, i.e.
the family of models without treatment, we see that for the Hahnfeldt~\textit{et al.}\xspace model with the non-shifted Erlang probability density the larger $m_i=m$ is, the smaller the stability region is, see Fig.~\ref{fig.zalodmu} and Table~\ref{tab:kr}. The same holds for the d'Onofrio-Gandolfi model with the non-shifted Erlang distributions, see Table~\ref{tab:kr}. Moreover, for all considered $m_i=m$, $i=1,2$ (and $\mu=0$), the stability region for the Hahnfeldt~\textit{et al.}\xspace model is larger than for the d'Onofrio-Gandolfi one. Similarly, for the models with the piecewise linear distributions the stability region for $\mu=0$ for the Hahnfeldt~\textit{et al.}\xspace model is larger than the one for the d'Onofrio-Gandolfi model, compare Fig.~\ref{fig.zabek}. If we consider a~positive parameter $\mu$ smaller than $b$ (to ensure the non-negativity of the steady state of model~\eqref{modeldis}, which corresponds to the trivial steady state of \eqref{model_resc_suma}), we observe a~similar tendency for the Erlang distributions in dependence on the parameters $m_1=m_2$ (compare Table~\ref{tab:kr} for the non-shifted distribution case). However, the dependence on $\mu$ depends strongly on the chosen model. For the Hahnfeldt~\textit{et al.}\xspace model and $m_1=m_2\ge 2$, this dependence is almost linear and very weak, while for the d'Onofrio-Gandolfi model it is much stronger. As a~result, for $m_1=m_2\ge 2$ and sufficiently large values of $\mu\in[0,b)$ the critical average delay for the Hahnfeldt~\textit{et al.}\xspace model becomes smaller than for the d'Onofrio-Gandolfi model. The case $m_1=m_2=1$ is different. The dependence on $\mu$ is similar for both models, and the critical average delay for the Hahnfeldt~\textit{et al.}\xspace model stays larger than that for the d'Onofrio-Gandolfi model for any given value of $\mu$.
Similarly, for different values of $m_i$, it seems that the dependence on $\mu$ is stronger for the d'Onofrio-Gandolfi model, see Fig.~\ref{fig.akr}. Nevertheless, for the d'Onofrio-Gandolfi model with the non-shifted Erlang distributions there is no difference regarding the stability results between the cases $m_1=m_2=2$ and $m_1=2$, $m_2=1$, which is not the case for the Hahnfeldt~\textit{et al.}\xspace model, where for $m_1=2$, $m_2=1$ we have stability independently of the value of the parameter $a$. For the shifted Erlang distributions we have investigated the changes of the size of the stability region also in the context of the change of the value of the parameter $a$. For both the Hahnfeldt~\textit{et al.}\xspace and d'Onofrio-Gandolfi models we see that for all considered $m=m_i$, $i=1,2$, the increase of the parameter $a$ decreases the stability region; however, in the case of the d'Onofrio-Gandolfi model we compared the differences $\tau_{\kryt,0}-\tau_{\kryt,\sigma}$. A~similar increase of the stability region with a~decrease of the concentration of the delay distribution was observed in~\cite{Bernard2001} for a~single linear equation with distributed delay and the Erlang probability density. Nevertheless, in all considered cases of the shifted and non-shifted Erlang distributed models, $\tau_{\kryt,\sigma}$ and $\tau_{\kryt,0}$, respectively, are increasing (sometimes only slightly) functions of $\mu$, implying that the increase of the constant treatment strength enlarges the stability regions for the positive steady state of model~\eqref{modeldis}. Our analysis also shows that the increase of the positive parameter $\mu$ enlarges the stability area for the steady state for both (Hahnfeldt~\textit{et al.}\xspace and d'Onofrio-Gandolfi) distributed models with the piecewise linear distributions, but this time for the d'Onofrio-Gandolfi model this increase is more pronounced than for the distributed Hahnfeldt~\textit{et al.}\xspace model, see Fig.~\ref{fig.zabekmu}.
Moreover, for small values of the parameter $\mu$ the stability region for the Hahnfeldt~\textit{et al.}\xspace model with the piecewise linear distributions is larger than the one for the d'Onofrio-Gandolfi model, while for larger values of $\mu$ the situation is the opposite. The performed simulations show that the change occurs before $\mu \approx 1.42$. The variances for the shifted and non-shifted Erlang distributions are given by $\frac{m_i}{a^2}$ and they give a~measure of the degree of the concentration of the delay around the mean. Actually, a~better measure of the spread of the distribution around the mean for our purposes is the coefficient of variation, i.e. the ratio of the standard deviation to the mean, that is $\sqrt{m_i}/(a\sigma+m_i)$. Clearly, for the non-shifted distributions it equals $1/\sqrt{m_i}$, which implies that a~larger parameter $m_i$ yields a~smaller ratio. Hence, the increase of $m_i$ decreases the percentage dispersion of the average delay. On the other hand, for the shifted Erlang distributions, the coefficient of variation is a~decreasing function of $\sigma$. This dependence is obvious, since the increase of $\sigma$ with fixed $m_i$ and $a$ means that the average delay is increased while the standard deviation remains constant. The coefficient of variation is also a~decreasing function of the parameter $a$. On the other hand, if we fix $a$ and $\sigma$, and study the influence of $m_i$, then the coefficient of variation increases if $m_i<a\sigma$ and decreases otherwise. For the piecewise linear distributions ($\sigma\geq\varepsilon$) the average value of $f_i$ (defined by \eqref{eq:zabek}) is equal to $\sigma$ and the standard deviation is given by $\varepsilon/\sqrt{6}$. Hence, the coefficient of variation equals $\varepsilon/(\sigma\sqrt{6})$. Thus, it is an~increasing function of $\varepsilon$ (for fixed $\sigma$) and a~decreasing function of $\sigma$ (for fixed $\varepsilon$).
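The mean $\sigma$ and standard deviation $\varepsilon/\sqrt{6}$ quoted above can be confirmed by quadrature; the sketch again assumes the symmetric triangular shape of~\eqref{eq:zabek} and uses illustrative values of $\sigma$ and $\varepsilon$.

```python
import math

def tent_density(tau, sigma, eps):
    """Assumed tent shape of the piecewise linear density (eq:zabek)."""
    u = abs(tau - sigma)
    return (eps - u) / eps**2 if u <= eps else 0.0

def moment(p, sigma, eps, n=2000):
    """Composite Simpson quadrature of the p-th raw moment over the support;
    the integrand is piecewise polynomial, so the rule is essentially exact."""
    a, b = sigma - eps, sigma + eps
    h = (b - a) / n
    s = 0.0
    for k in range(n + 1):
        tau = a + k * h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        s += w * tau**p * tent_density(tau, sigma, eps)
    return s * h / 3

sigma, eps = 1.2, 0.4        # illustrative values with sigma >= eps
mean = moment(1, sigma, eps)
var = moment(2, sigma, eps) - mean**2
print(mean, math.sqrt(var), eps / math.sqrt(6))
```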
Thus, we conclude that the percentage dispersion of the average delay is an increasing function of $\varepsilon$ and a decreasing function of $\sigma$, and is always smaller than 1. Clearly, all of this should be taken into account whenever the probability distributions describing the characteristics of delays are estimated. We believe that with the proposed type of models one can describe the process of angiogenesis in a~more realistic way. However, the considered model should be validated with experimental data. Our results show that the system behaviour in the case of distributed delays might differ from the behaviour of the system with discrete delays. Hence, to validate the model with experimental data it is important to choose the right type of the delay(s) in the model, preferably based on experimental data or hypotheses postulated by the experimentalists. Our study also shows that the shape of the probability density for the distributed delays has an essential influence on the model dynamics. On the other hand, it is not a~trivial task to choose the right distribution if one does not have a sufficiently large set of experimental data and does not assume any particular shape of the distribution. This problem needs to be addressed individually according to the available data, and it should be done for the considered model in the future. In our opinion, models with distributed delays could also be studied in the context of the efficiency of different types of tumour therapies. We would like to emphasise that oscillations in the experimental data for different human and animal cell lines in the context of neoplastic diseases, even without administration of any treatment, have already been widely observed for leukaemia (\cite{Mackey1978,Fortin1999}), B-cell lymphoma (\cite{Uhr1991}) and solid tumours (\cite{Chignola2000, Chignola2003}). To investigate the origin of this phenomenon a~number of mathematical and numerical models have been developed.
For example Kuznetsov et al., \cite{Kuznetsov1994}, studying the interactions between an immune system and a~tumour by a system of ODEs, pointed out that different local and global bifurcations (observed for the realistic parameter values they estimated) showed that there might be a~connection between the phenomena of immunostimulation of tumour growth, the formation of a ``dormant'' state and the ``sneaking through'' of tumours. Similarly, Kirschner and Panetta in~\cite{Kirschner1998} discussed, inter alia, the biological implications of the Hopf bifurcation in the case of tumour-immune system interactions. More recently, Pujo-Menjouet et al.~\cite{Pujo-Menjouet2005}, using a model with discrete delay for which the Hopf bifurcation is observed, explained why and how short cell cycle durations give rise to long-period oscillations, of the order of 40 to 80 days, for periodic chronic myelogenous leukaemia patients. At the same time, in \cite{Bernard2004} the bifurcation phenomenon and its role for a~white-blood-cell production model with discrete delay was studied. The experimentally observed oscillations in the tumour volume~\cite{Chignola2000, Chignola2003} can be explained by the analytic results from the earlier and more theoretical paper~\cite{Byrne1997} focussing on the growth of avascular multicellular tumour spheroids, where the delay in the process of proliferation was considered. Later on, this work was extended in \cite{mbuf03mcm_prol}, while the model with the delay only in the apoptosis process was considered in~\cite{mbuf03mcm_reg}. Next, the model that took into account delays in both processes was studied in~\cite{piotrowska2008a}. It is believed that the p53 protein plays an essential role in preventing certain types of cancers and is called the ``guardian of the genome''~\cite{Lane1992nature}; hence we also mention here the models of Hes1 and p53 gene expressions, both with negative feedback and with discrete time delays, proposed by Monk \cite{monk03currbiol}.
The model of the Hes1 protein was examined later by Bernard et al.~\cite{Bernard2006} and Bodnar and Bart\l{}omiejczyk~\cite{mbab12nonrwa}. In those cases the mathematical analysis showed that the experimentally observed oscillations can be explained by the time lag present in the system due to the DNA~transcription process as well as the diffusion time of mRNA~and the Hes1 proteins. Finally, we briefly discuss possible generalisations of the results obtained in this paper. The first equation of the considered model is quite general and can describe a~process with saturation. On the other hand, the second equation comes from a~quasi-stationary approximation of some reaction-diffusion equations under particular assumptions that are justified for the angiogenesis process. Thus, most probably it cannot be straightforwardly transformed to describe other biological or medical processes. However, the stability analysis performed in the paper is based on the local properties of the functions. Thus, we may assume a~more general form of the second equation, namely $q(t) G(p_t/q_t, p)$ (where $p_t$ and $q_t$, according to a~functional notation typical for DDEs, denote terms with delay), assuming that $G$ is an increasing function in the first variable and decreasing in the second one. Then, in our opinion, under some additional assumptions, one should obtain results similar to those presented in the paper. \appendix \section{Generalized Mikhailov Criterion}\label{appen} Here we formulate a~generalised version of the Mikhailov Criterion (see e.g.~\cite{uf04jbs}). The classical formulation of the Mikhailov Criterion is for characteristic functions that are sums of polynomials multiplied by exponential functions. However, it can be generalised to a wider class of functions. Below, we present a~detailed formulation.
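The zero-counting formula $n/2-\Delta/\pi$ of the criterion formulated below is easy to probe numerically. As an illustration (a sketch added here, with hypothetical function names; the limit $\omega\to\infty$ is approximated by a large cutoff), one can trace the change of argument of $W(i\omega)$ in Python:

```python
import cmath
import math

def rhp_zero_count(W, n, omega_max=1e3, steps=100000):
    """Numerical sketch of the generalised Mikhailov criterion: count
    the zeros of an analytic W in the right half-plane as n/2 - Delta/pi,
    where Delta is the change of arg W(i*omega) over [0, omega_max]
    (omega_max approximates the limit omega -> infinity)."""
    delta = 0.0
    prev = cmath.phase(W(0j))
    for k in range(1, steps + 1):
        cur = cmath.phase(W(1j * omega_max * k / steps))
        d = cur - prev
        if d > math.pi:        # unwrap the principal-value jump of arg
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        delta += d
        prev = cur
    return round(n / 2 - delta / math.pi)

# W(l) = l - 1 has one zero in the right half-plane, W(l) = l + 1 none
print(rhp_zero_count(lambda z: z - 1, n=1))  # 1
print(rhp_zero_count(lambda z: z + 1, n=1))  # 0
```

The same tracing applies to transcendental characteristic functions such as $W(\lambda)=\lambda+a\mathrm{e}^{-\lambda\tau}$, where no polynomial root count is available.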
\begin{thm}[Generalised Mikhailov Criterion] Let us assume that $W:\varmathbb{C}\to\varmathbb{C}$ is an analytic function that has no zeros on the imaginary axis and fulfils \begin{equation}\label{charfun} W(\lambda) = \lambda^{n} +o(\lambda^{n}), \quad W'(\lambda) = n\lambda^{n-1} + o(\lambda^{n-1}), \end{equation} for some natural number $n$. Then the number of zeros of $W$ in the right-hand complex half-plane is equal to $n/2-\Delta/\pi$, where \[ \Delta = \Delta_{\omega\in[0,+\infty)} \text{arg}\, W(i\omega). \] \end{thm} Note that $\Delta$ denotes the change of the argument of the vector $W(i\omega)$ in the positive direction of the complex plane as $\omega$ increases from $0$ to $+\infty$. \begin{proof} The proof is exactly the same as the proof of the Mikhailov Criterion in~\cite[Th.~1]{uf04jbs}. It is based on integration of the characteristic function along the following contour: the portion of the imaginary axis from $i\rho$ to $-i\rho$ and a~semi-circle of radius $\rho$ centred at zero and located in the right half of the complex plane. The fact that $W$ is an analytic function implies that its zeros are isolated, and all integrations can be done in the same way as in~\cite{uf04jbs}. Moreover, the assumption~\eqref{charfun} implies that all limits calculated in~\cite{uf04jbs} remain the same. \end{proof} \end{document}
\begin{document} \title{Implicit and Explicit Proof Management in \KeYmaeraX\thanks{This material is based upon work supported by the Air Force Office of Scientific Research under grant numbers FA9550-16-1-0288 and FA8750-18-C-0092. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force.}} \begin{abstract} Hybrid systems theorem proving provides strong correctness guarantees about the interacting discrete and continuous dynamics of cyber-physical systems. The trustworthiness of proofs rests on the soundness of the proof calculus and its correct implementation in a theorem prover. Correctness is easier to achieve with a soundness-critical core that is stripped to the bare minimum, but, as a consequence, proof convenience has to be regained outside the soundness-critical core with proof management techniques. We present modeling and proof management techniques that are built on top of the soundness-critical core of \KeYmaeraX to enable expanding definitions, parametric proofs, lemmas, and other useful proof techniques in hybrid systems proofs. Our techniques steer the uniform substitution implementation of the differential dynamic logic proof calculus in \KeYmaeraX to allow users to choose when and how in a proof abstract formulas, terms, or programs become expanded to their concrete definitions, and when and how lemmas and sub-proofs are combined into a full proof. The same techniques are exploited in implicit sub-proofs (without making such sub-proofs explicit to the user) to provide proof features, such as temporarily hiding formulas, which are notoriously difficult to get right when implemented in the prover core, but become trustworthy as proof management techniques outside the core. We illustrate our approach with several useful proof techniques and discuss their presentation on the \KeYmaeraX user interface.
\end{abstract} \irlabel{DUS|$\circlearrowright$US} \section{Introduction} \label{sec:introduction} Hybrid systems theorem proving provides strong correctness guarantees about the interacting discrete and continuous dynamics of cyber-physical systems. Theorem proving is most valuable early in the design of a system, since it is not merely a technique to prove the correctness of an already correct system, but also shines when analyzing a system in all its subtleties to discover unknown or only partially known properties of the system. The trustworthiness of proofs and analysis results, however, rests on the soundness of the proof calculus and its correct implementation in a theorem prover. Typical theorem prover implementations often opt for directly representing the rules of a proof calculus in the theorem prover, for instance, with axiom schemata in~\cite{KeY2005,DBLP:conf/cade/PlatzerQ08}, or with trusted implementations of rules~(e.g., KeYmaeraD~\cite{DBLP:conf/icfem/RenshawLP11}) or decision procedures~(e.g., invariant computation~\cite{DBLP:conf/emsoft/LiuZZ11} in the HHL prover~\cite{DBLP:conf/icfem/WangZZ15}). The downside of such an approach is not only that implementations of rules and their side conditions become soundness-critical, but also that additional features often result in increasing the size of the soundness-critical code base of the theorem prover. Correctness is easier to achieve with an LCF-style approach that strips the soundness-critical core to the bare minimum, but, as a consequence, proof convenience has to be regained outside the soundness-critical core with proof management techniques. 
The \KeYmaeraX~\cite{DBLP:conf/cade/FultonMQVP15} theorem prover for hybrid systems takes an LCF-style approach; previous techniques expanded the capabilities of \KeYmaeraX primarily by providing tactics~\cite{DBLP:conf/itp/FultonMBP17}, e.g., for certifying solutions of differential equations~\cite{DBLP:journals/jar/Platzer17}, for certifying safety and liveness properties of differential equations~\cite{DBLP:journals/jacm/PlatzerT20,DBLP:journals/fmsd/SogokonMTCP}, for stability proofs~\cite{DBLP:conf/tacas/TanP21}, for code synthesis~\cite{DBLP:conf/pldi/BohrerTMMP18}, for component-based modeling and verification~\cite{DBLP:journals/sttt/MullerMRSP18}, and for monitor synthesis~\cite{DBLP:journals/fmsd/MitschP16}. In this paper, we present modeling and proof management techniques that are built on top of the soundness-critical core of \KeYmaeraX to enable structuring and modularizing models with definitions and modularizing proofs with lemmas. These modeling and proof management techniques were developed primarily with interactive proofs in mind, but may also be beneficial for automation (e.g., hierarchical definitions may serve as proof hints). Useful proof techniques for explicit proof management include expanding definitions of the model during a proof, parametric proofs to make progress in proofs despite unknown system properties (e.g., loop invariants), and creating and applying lemmas. Our techniques steer the uniform substitution implementation of the differential dynamic logic proof calculus in \KeYmaeraX to allow users to choose when in a proof and how abstract formulas, terms, or programs become expanded to their concrete definitions, and when and how lemmas and sub-proofs are combined into a full proof.
The same techniques are exploited in implicit sub-proofs (without making such sub-proofs explicit to the user) to hide technicalities of the prover implementation whose details are irrelevant to the user, or to provide proof features, such as temporarily hiding formulas, which are notoriously difficult to get right when implemented in the prover core, but become trustworthy as proof management techniques outside the core. On the user interface, we attempt to make such proof features available as part of the usual user interactions: for example, when a tactic asks for input (e.g., a loop invariant), users start a parametric proof simply by using uninterpreted function and predicate symbols as tactic inputs, which then appear like elements of the input model whose concrete interpretations can be defined and expanded at a later point in the proof. That way, users can focus on exploring and understanding a system by way of formal proof to provide insight to the theorem prover when it becomes available during the proof. The remainder of this paper is structured as follows: \rref{sec:preliminaries} introduces differential dynamic logic and the relevant core and user interface features of \KeYmaeraX. \rref{sec:proofmanagementexample} gives an example proof that combines and illustrates several of the desired proof management techniques, \rref{sec:lemmas} and \rref{sec:prooftechniques} discuss the underlying lemma application and proof techniques and their appearance on the user interface, and \rref{sec:conclusion} concludes the paper with a discussion of related and future work. \section{Preliminaries} \label{sec:preliminaries} \paragraph{Differential Dynamic Logic by Example} Differential dynamic logic \dL~\cite{DBLP:journals/jar/Platzer17,Platzer18} is a specification language and verification calculus for hybrid systems written as hybrid programs. 
The syntax of \emph{hybrid programs} (HP) is described by the following grammar where $\asprg,\bsprg$ are hybrid programs, $x$ is a variable, $\astrm,\genDE{x}$ are terms, and $\ivr$ is a logical formula: \begin{equation*} \asprg,\bsprg ~\bebecomes~ \pupdate{\pumod{x}{\astrm}} \alternative \ptest{\ivr} \alternative \pevolvein{\D{x}=\genDE{x}}{\ivr} \alternative \pchoice{\asprg}{\bsprg} \alternative \asprg;\bsprg \alternative \prepeat{\asprg} \end{equation*} Assignments \(\pupdate{\pumod{x}{\astrm}}\) and tests \(\ptest{\ivr}\) (to abort execution and discard the run if $\ivr$ is not true) are as usual. Differential equations \(\pevolvein{\D{x}=\genDE{x}}{\ivr}\) are followed along a solution of \(\D{x}=\genDE{x}\) for any duration as long as the evolution domain constraint $\ivr$ is true at every moment along the solution. Nondeterministic choice \(\pchoice{\asprg}{\bsprg}\) runs either $\asprg$ or $\bsprg$, sequential composition \(\asprg;\bsprg\) first runs $\asprg$ and then $\bsprg$ on the resulting states of $\asprg$, and nondeterministic repetition \(\prepeat{\asprg}\) runs $\asprg$ any natural number of times. For example, the hybrid program below \[ \prepeat{\bigl(\underbrace{\text{if}~(x<1) \{\humod{y}{-x}\}~\text{else}~\{\humod{y}{*};\ptest{y>2}\}}_\textit{ctrl};~\underbrace{\pevolve{\D{x}=-xy}}_\textit{ode}\bigr)} \] repeats program $\textit{ctrl}$ followed by differential equation $\textit{ode}$ arbitrarily often; program $\textit{ctrl}$ is a choice between setting $y$ to the value of $-x$ when $x<1$ or else picking any $y>2$. The combined effect of $\textit{ctrl}$ and $\textit{ode}$ is an exponential increase/decay of $x$ with a rate depending on the choice of $y$. When programs become more complicated, it is useful to literally modularize hybrid programs into $\textit{ctrl}$, $\textit{ode}$ etc. using program symbols and use definitions as a structuring mechanism for models.
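As an informal aside, the operational reading of this example can be mimicked in a few lines of Python. This simulation sketch (hypothetical names, a fixed ODE duration, and a random resolution of the nondeterministic choice) is only an intuition aid and is unrelated to \KeYmaeraX's symbolic semantics:

```python
import math
import random

def simulate(x0, rounds=5, dt_total=1.0, seed=1):
    """Run the example hybrid program: repeat { ctrl; ode } where ctrl
    sets y := -x if x < 1 and otherwise picks some y > 2, and ode follows
    x' = -x*y (solved exactly) for a fixed duration dt_total."""
    rng = random.Random(seed)
    x = x0
    for _ in range(rounds):
        y = -x if x < 1 else 2 + rng.random()  # ctrl: deterministic/nondet. branch
        x = x * math.exp(-y * dt_total)        # ode: exact solution of x' = -x*y
    return x

print(simulate(10.0))
```

With $y>2$ the value of $x$ decays; once $x<1$, the choice $y=-x$ makes the exponent positive and $x$ grows again, matching the increase/decay description above.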
The formulas of \dL describe properties of hybrid programs, summarized by the following grammar where $\asfml,\bsfml$ are formulas, $\astrm,\bstrm$ are terms, $x$ is a variable and $\asprg$ is a hybrid program: \begin{equation*} \asfml,\bsfml ~\bebecomes~ \astrm\geq\bstrm \alternative \lnot \asfml \alternative \asfml \land \bsfml \alternative \asfml \lor \bsfml \alternative \asfml \limply \bsfml \alternative \asfml \lbisubjunct \bsfml \alternative \lforall{x}{\asfml} \alternative \lexists{x}{\asfml} \alternative \dbox{\asprg}{\asfml} \alternative \ddiamond{\asprg}{\asfml} \end{equation*} The operators of first-order real arithmetic are as usual with quantifiers ranging over the reals. Formula \(\dbox{\asprg}{\asfml}\) is true in a state iff formula $\asfml$ is true after all ways of running hybrid program $\asprg$, which is useful for expressing safety properties. Dually, liveness properties are expressed with \(\ddiamond{\asprg}{\asfml}\), which is true in a state iff $\asfml$ is true after at least one run of $\asprg$. \paragraph{Proofs in the \KeYmaeraX Core} \begin{figure} \caption{A \texttt{Provable}} \caption{User interface displaying the open subgoals of a proof with the tactics menu~$\tiny\circled{\bf A}$} \caption{Proof state data structure and rendering on the user interface} \label{fig:proofstate-provable} \label{fig:proofstate-ui} \end{figure} The \KeYmaeraX prover core represents proof state as derived rules called \texttt{Provables}, which list the conclusion to prove and the open subgoals, as illustrated in \rref{fig:proofstate-provable}. Conclusion and subgoals are each represented with a sequent of the form $\lsequent{\Gamma}{\Delta}$: assumptions are in $\Gamma$, and $\Delta$ lists the alternatives to prove. The meaning of sequent $\lsequent{\Gamma}{\Delta}$ is that of \dL formula $\bigwedge_{p \in \Gamma}p \limply \bigvee_{q\in\Delta}q$. Validity of the subgoals justifies validity of the conclusion; a proof is closed when there are no more open subgoals.
The user interface of \KeYmaeraX in \rref{fig:proofstate-ui} displays proof state in its deduction view $\tiny\circled{\bf B}$, provides automation and tactics $\tiny\circled{\bf A}$ to progress in the proof, and lists proof step explanations $\tiny\circled{\bf C}$ as well as a tactic summarizing the recorded proof history. Proofs can be started from an initial conjecture, as well as from a (partial) tactic that advances the proof state according to the tactic steps and displays the remaining proof goals (further manual steps are then recorded and extend the provided tactic). The deduction view strives for close mnemonic similarity to textbooks \cite{DBLP:conf/fide/MitschP16,DBLP:series/lncs/MitschP20} while maximizing screen real estate use (it displays open subgoals in tabs to utilize the full screen width for each subgoal). In principle, the user interface could use typesetting libraries such as MathJax to resemble textbook appearance even more closely, but such attempts were abandoned for rendering performance reasons. The \KeYmaeraX core is stateless; it does not keep track of proof state. Instead, tactics and proof management outside the core keep track of \texttt{Provables} and instruct the core to apply operations on \texttt{Provables} to transform proof state, see \cite{DBLP:series/lncs/MitschP20} for a description of how tactics combine axioms and a comparison to alternative implementation approaches.
Major core operations are to \begin{itemize} \item create a \texttt{Provable}, which is allowed only from a small number of sources, the most important ones being \(\begin{aligned}\linfer {\lsequent{\Gamma}{\Delta}} {\lsequent{\Gamma}{\Delta}} &&,\quad \linfer[qear] {\lclose} {\lsequent{\Gamma}{\Delta}} &&,\quad \linfer {\lclose} {\lsequent{}{\dL~\text{axiom}}}\end{aligned}\) (from left to right: starting a proof by justifying the conjecture from itself, real arithmetic facts, and \dL axioms); \item apply another \texttt{Provable}, whose conclusion matches a subgoal syntactically, to replace the existing subgoal with the subgoals of the other \texttt{Provable}, which we exploit for applying lemmas; \item apply uniform substitution to replace predicate symbols with formulas, function symbols with terms, and program symbols with hybrid programs, which is useful to support definitions. \end{itemize} \begin{figure} \caption{A simple proof instructing the core to create a new \texttt{Provable}} \label{fig:coreproof-internal} \caption{Presentation of step \irref{implyr} on the user interface} \label{fig:coreproof-ui} \caption{Steps in a proof in the core vs. presentation on the user interface} \label{fig:coreproof} \end{figure} \noindent A typical proof, illustrated in \rref{fig:coreproof-internal}, retrieves an initial \texttt{Provable} from the \KeYmaeraX core and then proceeds by handing back the \texttt{Provable} to the core together with a proof rule to retrieve a follow-up \texttt{Provable}. This process is repeated until all subgoals are either reduced to \dL axioms or valid formulas in real arithmetic, so that no more subgoals remain. At any point in this process, the proof state can be stored and used later as a lemma (even in other proofs). This entire process is hidden from the user, who instead is presented with the sequent proof in \rref{fig:coreproof-ui}.
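The shape of these core operations can be caricatured in a few lines of Python. This toy sketch (all names hypothetical; sequents are plain strings, and there are no substitutions or side conditions) only mirrors the LCF-style API described above, not \KeYmaeraX's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Provable:
    """A conclusion together with the open subgoals that justify it."""
    conclusion: str
    subgoals: list

def start(conjecture):
    """Start a proof: the conjecture is justified from itself."""
    return Provable(conjecture, [conjecture])

def apply_provable(p, i, other):
    """Replace subgoal i with the subgoals of `other`, provided its
    conclusion matches the subgoal syntactically (as in lemma use)."""
    if other.conclusion != p.subgoals[i]:
        raise ValueError("conclusion does not match subgoal")
    new_goals = p.subgoals[:i] + other.subgoals + p.subgoals[i + 1:]
    return Provable(p.conclusion, new_goals)

p = start("x=2 -> x>=0")
lemma = Provable("x=2 -> x>=0", [])  # e.g. closed by real arithmetic
p = apply_provable(p, 0, lemma)
print(p.subgoals)  # [] -- no open subgoals, the proof is closed
```

The key invariant is that only these operations change proof state, so trust reduces to the small set of ways a closed \texttt{Provable} can be produced.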
\KeYmaeraX proofs appeal to uniform substitution from \dL axioms \cite{DBLP:journals/jar/Platzer17}: for example, the test axiom $\dibox{\ptest{q}}p \lbisubjunct (q \limply p)$, which is an ordinary \dL formula, together with uniform substitution $\sigma=\usubstlist{\usubstmod{q}{x>0},\usubstmod{p}{\dbox{\pevolve{\D{x}=-x}}x\geq 0}}$ can be used to obtain a concrete instance of this axiom during a proof as follows: \irlabel{US|US} \[ \linfer[US] {\dbox{\ptest{q}}p \lbisubjunct (q \limply p)} {\dbox{\ptest{x>0}}\dbox{\pevolve{\D{x}=-x}}x\geq 0 \lbisubjunct (x>0 \limply \dbox{\pevolve{\D{x}=-x}}x\geq 0)} \enspace . \] \noindent Uniform substitution is mainly used as a mechanism to instantiate axioms soundly, but through \cite[Thm. 27]{DBLP:journals/jar/Platzer17} it is also useful to replace symbols in entire \texttt{Provables} soundly, as illustrated in \rref{fig:provablesubst}. \begin{figure} \caption{Uniform substitution $\sigma$ applied to an entire \texttt{Provable}} \label{fig:provablesubst} \end{figure} In this paper, we are going to exploit its application to entire \texttt{Provables} in order to implement proof features such as expanding definitions during a proof and an extended lemma mechanism that is able to bridge syntactic differences between the lemma conclusion and its application target. \section{Implicit and Explicit Proof Management by Example} \label{sec:proofmanagementexample} \begin{figure} \caption{Definitions in the \KeYmaeraX input syntax} \label{fig:defs-inputfile} \caption{Definitions menu} \label{fig:defs-menu} \caption{Predicate and program definitions in \KeYmaeraX} \label{fig:defs} \end{figure} The main motivation for proof management is to allow users to expand definitions and structure proofs at their discretion, as well as to enable future automated definition expansion and contraction~\cite{DBLP:journals/jar/Wos87c}.
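The substitution mechanism itself can be illustrated on a toy formula AST. The sketch below (hypothetical types and names) replaces the predicate symbols of the test axiom everywhere at once; it deliberately omits the admissibility (free-variable) checks that make the real uniform substitution calculus sound:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pred:            # uninterpreted predicate symbol, e.g. p or q
    name: str

@dataclass(frozen=True)
class Concrete:        # an opaque concrete formula, e.g. "x>0"
    text: str

@dataclass(frozen=True)
class Imply:           # left -> right
    left: object
    right: object

@dataclass(frozen=True)
class BoxTest:         # [?test]post
    test: object
    post: object

def usubst(f, sigma):
    """Apply sigma (dict: symbol name -> formula) uniformly to f."""
    if isinstance(f, Pred):
        return sigma.get(f.name, f)
    if isinstance(f, Imply):
        return Imply(usubst(f.left, sigma), usubst(f.right, sigma))
    if isinstance(f, BoxTest):
        return BoxTest(usubst(f.test, sigma), usubst(f.post, sigma))
    return f

# the two sides of the test axiom [?q]p <-> (q -> p)
lhs = BoxTest(Pred("q"), Pred("p"))
rhs = Imply(Pred("q"), Pred("p"))
sigma = {"q": Concrete("x>0"), "p": Concrete("[{x'=-x}]x>=0")}
print(usubst(lhs, sigma))
```

Because the same `sigma` is applied to both sides, the instantiated axiom remains an equivalence between matching instances, which is exactly the soundness argument the uniform substitution rule packages up.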
For example, consider the \KeYmaeraX input file in \rref{fig:defs-inputfile} that uses predicate definitions $A(x) \equiv x=2$ to capture assumptions about initial values of $x$ and $S(x) \equiv x \geq 0$ to describe the desired safety property, as well as program definitions $\textit{ctrl}$, which doubles the value of any non-negative $x$, and the differential equation $\textit{ode}$, which models exponential decay. The definitions populate the ``Defs'' menu in \KeYmaeraX that allows users to expand definitions collectively (\texttt{expandAllDefs}) or selectively (e.g., \texttt{expand "ctrl"}) during a proof, see \rref{fig:defs-menu}. The menu automatically adjusts to the symbols of the currently selected subgoal (in the background in \rref{fig:defs-menu}). As a safety question example, we want to answer whether repeated execution of $\textit{ctrl};\textit{ode}$ keeps the value of $x$ non-negative when started at $x=2$. In the proof, we want control over when to expand definitions, and we want to structure the proof into a main theorem and supporting lemmas. \rref{fig:expanddefs} illustrates the proof steps. \irlabel{expand|expand} \irlabel{fold|fold} \irlabel{ode|ode} \irlabel{ids|id using S(x)} \irlabel{by|by} \irlabel{auto|auto} \irlabel{MR|MR} \begin{figure} \caption{Exponential decay lemma} \label{fig:expanddefs-decaylemma} \caption{Unsatisfied control guard lemma} \label{fig:expanddefs-lemma} \caption{Induction step lemma, appeals to exponential decay lemma and unsatisfied guard lemma} \label{fig:expanddefs-step} \caption{Main theorem proves loop induction base case and use case, appeals to \rref{fig:expanddefs-step}} \label{fig:expanddefs-main} \caption{The substitutions collected during step~\texttt{expand}} \label{fig:expanddefs} \end{figure} The proof proceeds from the initial conjecture $\lsequent{A(x)}{\dibox{\prepeat{(\textit{ctrl};\textit{ode})}}S(x)}$ bottom-to-top, with proof step justifications annotated to the left of the horizontal bars.
Validity transfers top-to-bottom, so validity of the sequents (subgoals) above a horizontal bar justifies validity of the conclusion below the horizontal bar. The first step~\irref{implyr} makes the left-hand side of the implication available as assumption $A(x)$. Next, the loop induction step~\irref{loop} splits the proof into three subgoals: the base case $\lsequent{A(x)}{S(x)}$, which closes by real arithmetic~\irref{qear} after expanding the definitions, the use case $\lsequent{S(x)}{S(x)}$ that is trivially true by step~\irref{id}, and the induction step. The induction step in \rref{fig:expanddefs-step} first addresses the sequential composition with step~\irref{composeb} to isolate $\textit{ctrl}$ from $\textit{ode}$, then expands $\textit{ctrl}$ to split into its two cases: \begin{inparaenum}[(i)] \item on the left branch, the condition of $\text{if}~(S(x))$ is true (represented with $\ptest{S(x)}$) and preserved by the program $\ptest{S(x)};\humod{x}{2}$ as witnessed by a monotonicity step~\irref{MR} and, thus, the exponential decay lemma applies; \item on the right branch with its leading test $\ptest{\neg S(x)}$ the unsatisfied control guard lemma applies. \end{inparaenum} The main proof management features used in the proof are step~\irref{expand} to expand definitions, step~\irref{by} to apply a lemma, and \texttt{using} to temporarily restrict reasoning to certain formulas. To users, the proof in \rref{fig:expanddefs} appears as if they were working on a single \texttt{Provable} and the proof steps were combined immediately. Doing so, however, would require extensive changes to the soundness-critical core and violate the local nature of its reasoning. Behind the scenes, this proof therefore requires a shift from operating on a single \texttt{Provable} to keeping track of loosely connected sub-proofs outside the prover core; these sub-proofs fit together only after applying the substitutions collected during the proof.
In the following sections, we provide details on explicit proof management that structures proofs into lemmas and implicit proof management that delays merging \texttt{Provables} and applying uniform substitutions. \section{Explicit Proof Management with Lemmas} \label{sec:lemmas} The \KeYmaeraX input format allows explicit proof management by letting users structure their problem descriptions into lemmas that are shared between proofs, and theorems, which appeal to lemmas to show some of their subgoals. For example, the ``Exponential decay'' and ``Unsatisfied control guard'' lemmas from \rref{fig:expanddefs-decaylemma} and \rref{fig:expanddefs-lemma} are expressed in the \KeYmaeraX ASCII input syntax below, recorded from the steps of the interactive proofs in \rref{fig:expanddefs-decaylemma} and \rref{fig:expanddefs-lemma}. An optional ``/'' in lemma names structures the lemmas into folders on both the user interface and the file system. \begin{lstlisting}[language=KeYmaeraX] Lemma "FIDE21/Exponential decay" ProgramVariables Real x; End. Problem x>=0 -> [{x'=-x}]x>=0 End. Tactic "Recorded" implyR('R=="x>=0 -> [{x'=-x}]x>=0"); ODE('R=="[{x'=-x}]x>=0") End. Tactic "Automated proof" autoClose End. End. Lemma "FIDE21/Unsatisfied control guard" Definitions /* constants, functions, properties, programs */ Bool S(Real x); Bool P(Real x); End. ProgramVariables Real x; End. /* variables */ Problem S(x) -> [?!S(x);]P(x) End. /* specification in dL */ Tactic "Interactive proof" implyR('R=="S(x) -> [?!S(x);]P(x)"); testb('R=="[?!S(x);]P(x)"); implyR('R=="!S(x) -> P(x)"); notL('L=="!S(x)"); id using "S(x)" End. Tactic "Automated proof" autoClose End. End. \end{lstlisting} \label{lst:lemmaFig1a} The ``Induction step'' lemma below uses the earlier two lemmas in its proof. It follows the steps in \rref{fig:expanddefs-step} largely verbatim, but the specific lemma application steps are worth noting.
Applying the ``Exponential decay'' lemma is straightforward by \irref{auto}, since it uses $S(x)$ and $\textit{ode}$ in their expanded form as the only difference between the subgoal and the lemma conclusion. The ``Unsatisfied control guard'' lemma, however, introduces a new predicate symbol $P(x)$, which is neither present in the induction step nor in the original conjecture. We, therefore, use substitution $\sigma=\usubstlist{\usubstmod{P(x)}{\dbox{\humod{x}{x}}\dbox{\ptest{\ltrue}}\dbox{\pevolve{\D{x}=-x}}S(x)}}$ to tell the lemma application mechanism how to resolve $P(x)$.\footnote{The leading self-assignment $\humod{x}{x}$ is a necessary technicality to make variable $x$ must-bound because the differential equation $\D{x}=-x$ may run for duration $0$.} \begin{lstlisting}[language=KeYmaeraX] Lemma "FIDE21/Induction step" Definitions Bool S(Real x) <-> x>=0; HP ctrl ::= { if (S(x)) { x:=2*x; } }; HP ode ::= { {x'=-x} }; End. ProgramVariables Real x; End. Problem S(x) -> [ctrl;ode;]S(x) End. Tactic "Proof induction step" implyR('R=="S(x)->[ctrl;ode;]S(x)"); composeb('R=="[ctrl;ode;]S(x)"); expand "ctrl"; choiceb('R=="[?S(x);x:=2*x;++?!S(x);?true;][ode;]S(x)"); andR('R=="[?S(x);x:=2*x;][ode;]S(x) & [?!S(x);?true;][ode;]S(x)"); <( "[?S(x);x:=2*x;][ode;]S(x)": MR("S(x)", 'R=="[?S(x);x:=2*x;][ode;]S(x)"); <( "Use Q->P": expand "S"; autoClose, "Show [a]Q": useLemma("FIDE21/Exponential decay", "US({`S(x)~>x>=0 :: ode;~>{x'=-x} :: nil`});unfold;id") ), "[?!S(x);?true;][ode;]S(x)": composeb('R=="[?!S(x);?true;][ode{|^@|};]S(x)"); useLemma("FIDE21/Unsatisfied control guard", "US({`P(x)~>[x:=x;][?true;][{x'=-x}]S(x) :: nil`});unfold;id") ) End. End. \end{lstlisting} The main theorem of \rref{fig:expanddefs-main} is expressed in \KeYmaeraX ASCII syntax below. Its proof uses the ``Induction step'' lemma in a straightforward way. 
\begin{lstlisting}[language=KeYmaeraX] Theorem "FIDE21/Combine lemmas" Definitions Bool A(Real x) <-> x=2; Bool S(Real x) <-> x>=0; HP ctrl ::= { if (S(x)) { x:=2*x; } }; HP ode ::= { {x'=-x} }; End. ProgramVariables Real x; End. Problem A(x) -> [{ctrl;ode;}*]S(x) End. Tactic "Interactive proof" implyR('R=="A(x)->[{ctrl;ode;}*]S(x)"); loop("S(x)", 'R=="[{ctrl;ode;}*]S(x)"); <( "Init": expandAllDefs; QE, "Post": id, "Step": expandAllDefs; useLemma("FIDE21/Induction step", "prop") ) End. End. \end{lstlisting} \label{lst:theoremFig1} Structuring proofs into lemmas and theorems need not necessarily be done when creating the input file. As an alternative, the \KeYmaeraX user interface allows users to start lemmas from any proof state; lemma proofs remain linked from the tabs representing open subgoals in the main proof until finished. Other pre-existing lemmas can be searched and applied from the user interface as in \rref{fig:applylemma}. Techniques for implicit proof management and delayed substitution (used in the proofs above) are discussed next. \begin{figure} \caption{Applying a lemma from the lemma search dialog in \KeYmaeraX} \label{fig:applylemma} \end{figure} \section{Implicit Proof Management with Delayed Substitution} \label{sec:prooftechniques} In this section, we discuss the fundamental proof management technique of delayed proof composition and delayed uniform substitution, and then devise several applications of it for enabling parametric proofs and delayed modeling, and for temporary sub-proofs focusing on some select aspects of a subgoal. \subsection{Delayed Proof Composition and Delayed Uniform Substitution} As illustrated in \rref{sec:preliminaries}, the \KeYmaeraX core creates and modifies proof state without keeping track of the steps of the proof. \KeYmaeraX records proof steps outside the core with a separate \texttt{Provable} per proof step. 
This trace of \texttt{Provables} not only is the basis for rendering and navigating the sequent calculus proof on the user interface, but also gives us freedom to choose when to combine \texttt{Provables}. Instead of combining \texttt{Provables} and applying uniform substitutions immediately at every step in the proof, those separate \texttt{Provables} are combined into a single proof once all the steps are finished. The advantage of delayed merging is that we can postpone the uniform effect \cite[Thm. 27]{DBLP:journals/jar/Platzer17} of uniform substitution across subgoals of a \texttt{Provable}. Without delayed merging, symbols that are expanded on one branch would immediately be expanded uniformly across all other branches of the proof, even if those other branches would prefer to continue using symbols in their unexpanded form. The different points of expanding symbols are then reconciled in the final proof checking pass that combines \texttt{Provables}: uniform substitutions that originate from explicit user interactions (e.g., from expanding definitions) are combined with substitutions found through unification. \subsection{Parametric Proofs and Delayed Modeling} Theorem proving is not merely a tool to obtain a correctness proof of an already correct system; it is a tool to explore and understand a system in all its subtleties and with all its corner cases thoroughly, to discover properties of the system that are unknown or only partially known, and to discover and fix correctness bugs in the process. We, therefore, frequently want to make progress in a proof without yet committing to specific inputs or even without supplying a finished model and/or conjecture. For example, we may want to analyze a loop, but do not yet know a concrete loop invariant that we could use in the proof. An obvious technique is to use loop unrolling to debug the behavior of the loop body in an attempt to manually identify a loop invariant candidate.
However, this is usually not a suitable technique to prove safety of a loop, and so requires duplicate proof effort once a loop invariant candidate is identified through debugging (and perhaps several rounds of alternating debugging and proof attempts). \paragraph{Parametric Proofs} A powerful alternative technique is parametric proofs~\cite{DBLP:journals/jar/Platzer17}, which advance a proof without committing to concrete inputs early. \begin{figure} \caption{Parametric sequent proof} \caption{Starting a parametric proof in \KeYmaeraX by using an uninterpreted predicate symbol as tactic input; the proof parameter is then definable from a text field in the ``Defs'' menu.} \caption{Parametric loop induction with abstract invariant $J(x)$. The substitution $\sigma=\{J(\cdot)\mapsto \cdot \geq 1\}$ concludes the proof on all branches.} \label{fig:parametricloopproof} \end{figure} Parametric proofs allow users to proceed with abstract terms or formulas whose concrete shape is discovered later during the proof. Delayed merging of \texttt{Provables} and delayed uniform substitution allow users to supply concrete terms, formulas, and programs for function symbols, predicate symbols, and program symbols at any point in the proof, which then get automatically applied to all prior proof steps upon composition of the final \texttt{Provable}. In \rref{fig:parametricloopproof}, step~\irref{loop} uses an uninterpreted predicate symbol $J(x)$ instead of a concrete formula as a loop invariant. That way, we can advance the proof on all three branches until we find that $J(x)$ simultaneously has to fit $\lsequent{x=2}{J(x)}$ in the induction base case, $\lsequent{J(x)}{J(1+\frac{x-1}{2})}$ in the induction step, and $\lsequent{J(x)}{x \geq -1}$ in the induction use case. At this point, we can experiment with different choices of $J(x)$ and, ultimately, settle for $\sigma=\usubstlist{\usubstmod{J(\cdot)}{\cdot \geq 1}}$.
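Before committing to the substitution, the three sequent constraints on $J(x)$ can be spot-checked by machine. The following Python sketch is a sampled numeric sanity check (the sample grid is our own choice, and sampling is of course no substitute for the real-arithmetic proof step) that the candidate $J(x) := x\geq 1$ fits all three branches:

```python
# Numeric spot check (not a proof) that the invariant candidate
# J(x) := x >= 1 satisfies the three sequents constraining J(x):
#   base:  x = 2   |-  J(x)
#   step:  J(x)    |-  J(1 + (x - 1)/2)
#   use:   J(x)    |-  x >= -1

def J(x):
    return x >= 1

samples = [i / 10 for i in range(-50, 51)]  # sample grid, our own choice

assert J(2)                                              # base case
assert all(J(1 + (x - 1) / 2) for x in samples if J(x))  # induction step
assert all(x >= -1 for x in samples if J(x))             # use case
print("J(x) := x >= 1 fits all three branches on the sampled grid")
```

Only after such a cheap check succeeds is it worth substituting the candidate into the entire proof.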
Uniformly substituting this choice into the entire \texttt{Provable} concludes the proof by \irref{qear} on all branches. \paragraph{Delayed Modeling} Delayed merging of \texttt{Provables} and delayed substitution are also helpful to address a common nuisance in proofs: missing assumptions about model parameters (e.g., to avoid division by zero) are easily forgotten and their absence becomes apparent often only rather late in a proof. Using an arity 0 predicate symbol in the model allows users to supply missing assumptions during a proof as they are discovered, without having to redo the proof. For example, the proof in \rref{fig:delayedassumptions} uses an arity 0 predicate symbol $p_\text{\faShoppingCart}$ that can be used to collect missing assumptions as they become apparent in the proof. The effect of collecting assumptions during the proof is achieved by simply augmenting the concrete assumption with another fresh $p_{\text{\faShoppingCart}i}$. \begin{figure} \caption{Predicate $p_\text{\faShoppingCart}$ collects missing assumptions as they become apparent in the proof} \caption{Providing assumptions with the ``Defs'' menu} \caption{Delayed modeling by using uninterpreted predicate symbols in the input} \label{fig:delayedassumptions} \end{figure} Note that a proof parameter $J(x)$ cannot be used to introduce the missing assumption $y \neq 0$ because that fact was not even available in the original conjecture and therefore would not be provable in the base case of the induction proof. We use $p_\text{\faShoppingCart}$ to allow limited fixing of model mistakes during the proof; in the proof in \rref{fig:delayedassumptions} it is important that $p_\text{\faShoppingCart}$ is an arity 0 predicate symbol whose free variables do not overlap with the variables bound in the loop, so that it stays available in the induction step of the proof. The conjecture of \rref{fig:delayedassumptions} is expressed below in \KeYmaeraX ASCII input syntax.
\begin{lstlisting}[language=KeYmaeraX]
Problem A__0() -> x>=0 -> [x:=x/y()^2;]x>=0 End.
\end{lstlisting} \subsection{Temporary Implicit Sub-Proofs with Select Formulas} Applying tactics, axioms, and proof rules has permanent effect on the proof state. For example, weakening assumptions permanently removes formulas from the proof state; if weakening is done for convenience to focus on specific aspects of the proof, we cannot undo the effect of weakening when the hidden formulas become useful again later in the proof. Not focusing, however, is not an option either, because the mere presence of additional assumptions and formulas may result in duplicate proof effort or intractable proofs (e.g., when applying real arithmetic decision procedures with non-trivial complexity). We want to keep temporary operations separate from the soundness-critical prover core, because their effect is not compatible with its local isolated reasoning principles. It would be unsound to temporarily exclude formulas so that they are not affected by tactic applications. \rref{fig:temphide-ok} illustrates an example with the wanted effect of temporarily hiding formulas, but \rref{fig:temphide-assignunsound} shows that care needs to be taken to not unsoundly exclude formulas temporarily from being affected by, e.g., the assignment axiom. In order to fix \rref{fig:temphide-assignunsound}, the prover core would need to know which axiom or rule can soundly ignore temporarily hidden facts under what conditions. With an implicit sub-proof as in \rref{fig:temphide-assignok} we can temporarily focus tactic application on some proof aspects without extending the \KeYmaeraX core or sacrificing soundness. As an additional benefit, the abbreviations $P_\text{\faEyeSlash}(x)$ have simpler structure than the fully expanded formulas, which makes tactic applications less expensive even if they have to operate on $P_\text{\faEyeSlash}(x)$. 
\irlabel{weakenli|W\leftrule i} \irlabel{unsoundassignb|$\lightning$} \begin{figure} \caption{Wanted effect of temporarily hiding a formula: ignore for some proof steps, re-introduce when needed} \caption{Unsoundly ignoring temporarily hidden formula} \caption{Sub-proof with substitution $\sigma$ resolving the abbreviation $P_\text{\faEyeSlash}(x)$} \caption{Hiding formulas temporarily on the user interface} \caption{Sub-proof allows hiding explicit formula structure to temporarily focus tactic application on relevant formulas without sacrificing soundness.} \label{fig:temphide-ok} \label{fig:temphide-assignunsound} \label{fig:temphide-assignok} \label{fig:temphide} \end{figure} In tactics, temporarily focusing on a subset of the sequent formulas is supported with the notation \texttt{using}. For example, the proof of \rref{fig:temphide-ok} is expressed as follows:
\begin{lstlisting}[language=Bellerophon]
(orL('L)*; <(QE, skip)) using "y=x|y>0 :: x*y<=y^2 :: nil";
QE
\end{lstlisting}
This script advances the proof fully on the left branch, but postpones the final \irref{qear} (\texttt{QE}) step of the right branch using \texttt{skip} until after the \texttt{using} block. \section{Conclusion} \label{sec:conclusion} Uniform substitution in hybrid systems is a powerful technique for implementing hybrid systems theorem provers in an LCF-style approach, but it comes at the expense of proof convenience when sticking exclusively to core operations. We illustrated how proof convenience can be regained with proof management features that are implemented on top of uniform substitution outside the soundness-critical core of a theorem prover, and we complement those features with modeling conventions and corresponding treatment in the user interface. This approach of using modeling conventions and making proof steps implicit through other user interactions sits somewhat between auto-active verifiers and full interactive theorem proving.
Auto-active verifiers, such as Dafny \cite{DBLP:conf/icse/Leino04,DBLP:journals/corr/LeinoW14} and AutoProof \cite{DBLP:conf/tacas/TschannenFNP15}, hide verification and interaction with the verification tool entirely behind annotations in the analyzed code. Interactive theorem provers, such as Coq~\cite{DBLP:series/txtcs/BertotC04} and Isabelle/HOL~\cite{DBLP:books/sp/NipkowPW02}, primarily interact with users through scripts, such as structured proofs in Isabelle/Isar~\cite{DBLP:conf/types/Nipkow02}, and hide little of the proof complexity behind other means of presentation even in advanced editors \cite{DBLP:conf/aisc/Wenzel12} or when proofs are found automatically, e.g., with Sledgehammer~\cite{DBLP:conf/cade/Paulson10}. Many (hybrid systems) theorem provers (e.g., \cite{KeY2005,DBLP:conf/cade/PlatzerQ08,DBLP:conf/icfem/RenshawLP11,DBLP:conf/icfem/WangZZ15}) opt for implementing their proof calculus using axiom schemata or with trusted rules, which renders the features presented here soundness-critical. Our proof management techniques, in contrast, provide proof convenience without sacrificing soundness. For future work, we plan to automate unification steps in applying lemmas to bridge the syntactic differences between lemma conclusion and target subgoal, and seek to exploit uniform substitution for further proof techniques. \end{document}
\begin{document} \begin{frontmatter} \title{Spinor norm for skew-hermitian forms over quaternion algebras} \author[1]{Luis Arenas-Carmona} \author[2]{Patricio Quiroz} \address[1]{Universidad de Chile, Facultad de Ciencias, Casilla 653, Santiago, Chile} \address[2]{Universidad de Chile, Facultad de Ciencias, Casilla 653, Santiago, Chile} \begin{abstract} We complete all local spinor norm computations for quaternionic skew-hermitian forms over the field of rational numbers. Examples of class number computations are provided. \end{abstract} \begin{keyword} Skew-hermitian forms \sep spinor norm \sep quaternion algebras. \end{keyword} \end{frontmatter} \section{Introduction} Let $K$ be a number field and let $D$ be a quaternion algebra over $K$ with canonical involution $q \mapsto \bar{q}$. Let $V$ be a rank-$n$ free $D$-module. Let $h:V\times V\rightarrow D$ be a \textit{skew-hermitian form}, i.e., $h$ is $D$-linear in the first variable and satisfies $h(x,y)=-\overline{h(y,x)}$. A $D$-linear map $\phi:V\rightarrow V$ preserving $h$ is called an isometry. We denote by $\mathcal{U}_K$ (resp. $\mathcal{U}_K^+$) the unitary group of $h$ (resp. the special unitary group of $h$), i.e., the group of isometries (resp. isometries with trivial reduced norm) of $h$. Skew-hermitian forms share many properties with quadratic forms. In fact, if $D\cong \mathbb{M}_2(K)$, skew-hermitian forms in a rank-$n$ free $D$-module are naturally in correspondence with quadratic forms in the $2n$-dimensional $K$-vector space $PV$, for any idempotent matrix $P$ of rank 1 in $D$ \cite[\S 3]{A-C07}. In this case, the unitary group of $h$ is isomorphic to the orthogonal group of the corresponding quadratic form. On the other hand, $\mathcal{U}_K=\mathcal{U}_K^+$ when $D$ is a division algebra \cite[\S 2.6]{K}.
As in the quadratic case, the problem of determining if two skew-hermitian lattices in the same space are isometric or not can be approached by the theory of genera and spinor genera. A skew-hermitian lattice or $\mathcal{O}_D$-lattice in $V$, where $\mathcal{O}_D$ is a maximal order in $D,$ is a lattice $\Lambda$ in $V$ such that $\mathcal{O}_D\Lambda=\Lambda.$ The special unitary group $\mathcal{U}_K^+$ acts naturally on the set of $\mathcal{O}_D$-lattices and the $\mathcal{U}_K^+$-orbits are called (strict) classes. This action can be extended to the adelization $\mathcal{U}_\mathbb{A}^+$ of $\mathcal{U}_K^+$ \cite[\S 2]{A-C03}. The $\mathcal{U}_\mathbb{A}^+$-orbits are called genera. A genus of $\mathcal{O}_D$-lattices can be defined as a set of locally isometric lattices in the same space, since there is no Hasse principle for skew-hermitian spaces \cite{A-C07}. Between the class and the genus of a lattice lies its spinor genus. Two lattices $M$ and $\Lambda$ are in the same spinor genus if, replacing each by an isometric lattice if needed, we can find, for each place $\mathfrak{p}$, local isometries $\sigma_\mathfrak{p}\in \mathcal{U}_{K_\mathfrak{p}}^+$ with trivial spinor norm satisfying $\sigma_\mathfrak{p} M_\mathfrak{p}=\Lambda_\mathfrak{p}$. The spinor norm $\theta_\mathfrak{p}:\mathcal{U}_{K_\mathfrak{p}}^+\rightarrow K_\mathfrak{p}^*/K_\mathfrak{p}^{*2}$, or more generally, $\theta_L:\mathcal{U}_L^+\rightarrow L^*/L^{*2}\cong H^1(L,F)$ for any field $L$ containing $K$, is the coboundary map defined from the universal cover $F_{\bar{L}}\hookrightarrow\widetilde{\mathcal{U}}_{\bar{L}}^+\twoheadrightarrow\mathcal{U}_{\bar{L}}^+$ \cite[\S 2]{A-C03}.
This concept is important because both the class and the spinor genus of a given lattice coincide whenever $\mathcal{U}_{K_\mathfrak{p}}^+$ is non-compact for some archimedean place $\mathfrak{p}.$ The set of classes contained in the genus of a lattice $\Lambda$ is in one-to-one correspondence with the set of double cosets $$\mathcal{U}_K^+\setminus \mathcal{U}_\mathbb{A}^+/\mathcal{U}_\mathbb{A}^+(\Lambda),$$ where $\mathcal{U}_\mathbb{A}^+(\Lambda)$ is the adelic stabilizer of $\Lambda$. The cardinality of $\mathcal{U}_K^+\setminus \mathcal{U}_\mathbb{A}^+/\mathcal{U}_\mathbb{A}^+(\Lambda)$ is called the \textit{class number} of $\Lambda$ with respect to $\mathcal{U}_K^+$. This quantity is difficult to compute in general. An easier problem is to determine the number of spinor genera in a genus. In fact, this number is equal to the order of the finite abelian group $$\Theta_\mathbb{A}(\mathcal{U}_\mathbb{A}^+)/\Big(\theta(\mathcal{U}_K^+)\Theta_\mathbb{A}\big(\mathcal{U}_\mathbb{A}^+(\Lambda)\big)\Big),$$ where $\Theta_\mathbb{A}$ is the adelic spinor norm \cite[\S 2]{A-C03}. Moreover, if $P:J_K\rightarrow J_K/J_K^2$ is the natural projection, where $J_K$ is the idelic group of $K$, and $$H_\mathbb{A}(\Lambda):=P^{-1}\Big(\Theta_\mathbb{A}\big(\mathcal{U}_\mathbb{A}^+(\Lambda)\big)\Big),$$ we have the following group isomorphism \cite[\S 2]{A-C03}: $$\Theta_\mathbb{A}(\mathcal{U}_\mathbb{A}^+)/\Big(\theta(\mathcal{U}_K^+)\Theta_\mathbb{A}\big(\mathcal{U}_\mathbb{A}^+(\Lambda)\big)\Big)\cong J_K/K^*H_\mathbb{A}(\Lambda).$$ To compute the group on the right, we need to know the image of the local spinor norm $\theta_\mathfrak{p}:\mathcal{U}_{K_\mathfrak{p}}^+(\Lambda)\rightarrow K_\mathfrak{p}^*/K_\mathfrak{p}^{*2}$ at each place $\mathfrak{p}$ of the number field $K$. This is why we are interested in local spinor norm computations. Full computations exist for symmetric integral bilinear forms.
Non-dyadic cases can be found in \cite{K1} and dyadic cases in \cite{Beli}. For this reason we assume, from now on, that the quaternion algebra $D$ is a division algebra. Remember that in this case, we have $\mathcal{U}_K=\mathcal{U}_K^+.$ For skew-hermitian forms, non-dyadic places have been completely studied by B\"oge in \cite{B}. The dyadic case was studied by Arenas-Carmona in \cite{A-C04} and \cite{A-C10}, although not all cases were completed there; we complete them here when $K_\mathfrak{p}=\mathbb{Q}_2$. From now on $k=K_\mathfrak{p}$ denotes a dyadic local field of characteristic 0. If $D$ is a division algebra over $k$ we can define an absolute value $|\cdot|:D\rightarrow \mathbb{R}_{\geq 0}$ by $|q|=|Nq|_k$, where $N$ is the reduced norm and $|\cdot|_k$ is the absolute value of $k.$ The valuation on $D$ induced by $|\cdot|$ is denoted by $\nu$. Any skew-hermitian lattice $\Lambda$ has a decomposition of the type \begin{equation}\Lambda = \Lambda_1\bot \cdots \bot \Lambda_t, \label{dec}\end{equation} where each lattice $\Lambda_r$ has rank 1 or 2, and the scales satisfy $\textbf{s}(\Lambda_{r+1})\subset \textbf{s}(\Lambda_r)$ \cite[\S 5]{A-C04}. This is the skew-hermitian analogue of the Jordan decomposition for bilinear lattices in \cite[\S 91]{O}. If some $\Lambda_m$ in the decomposition of $\Lambda$ has rank 1, then $\Lambda_m=\mathcal{O}_Ds_m$ and $h(s_m,s_m)=a_m$. Define $A\subset k^*/k^{*2}$ by $A=\{N(a_m)k^{*2}\;|\;\textup{rank}(\Lambda_m)=1\}$. Following \cite{B}, we define $H(\Lambda)$ by the relation $H(\Lambda)/k^{*2}=\theta(\mathcal{U}_k^+(\Lambda)),$ where $\mathcal{U}_k^+(\Lambda)$ is the stabilizer of $\Lambda$ in $\mathcal{U}_k^+$, and $\theta:\mathcal{U}_k^+(\Lambda)\rightarrow k^*/k^{*2}$ denotes the spinor norm. Note that $k^{*2}\subset H(\Lambda)\subset k^*.$ By abuse of language we say that $H(\Lambda)$ is the image of the spinor norm.
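Since all the groups involved live between $k^{*2}$ and $k^*$, it is convenient to compute with explicit square classes. As an illustration (the helper below is our own and not part of the cited computations), square classes in $\mathbb{Q}_2^*$ can be represented by the pair (valuation mod 2, unit part mod 8):

```python
# Illustrative helper classifying nonzero rationals into the eight square
# classes of Q_2^*/Q_2^{*2} (representatives {1, 5, 2, 10, -1, -5, -2, -10}):
# a 2-adic unit is a square iff it is congruent to 1 mod 8, and a general
# element is a square iff its valuation is even and its unit part is a square.

from fractions import Fraction

def sqclass_q2(x):
    """Square class of x in Q_2^*: (2-adic valuation mod 2, unit part mod 8)."""
    x = Fraction(x)
    assert x != 0
    v, num, den = 0, x.numerator, x.denominator
    while num % 2 == 0:
        num //= 2; v += 1
    while den % 2 == 0:
        den //= 2; v -= 1
    inv8 = {1: 1, 3: 3, 5: 5, 7: 7}   # every odd residue is self-inverse mod 8
    u = (num * inv8[den % 8]) % 8     # unit part num/den reduced mod 8
    return (v % 2, u)

# Squares land in the trivial class (even valuation, unit = 1 mod 8).
assert sqclass_q2(Fraction(9, 4)) == (0, 1)
# 5 and 2 represent nontrivial classes, so k^{*2} is a proper subgroup of k^*.
assert sqclass_q2(5) != (0, 1) and sqclass_q2(2) != (0, 1)
# Square classes are multiplicative: class(2*5) == class(10).
assert sqclass_q2(2 * 5) == sqclass_q2(10)
print("square classes in Q_2^* computed via (valuation parity, unit mod 8)")
```

Subgroups such as $H(\Lambda)$ with $k^{*2}\subset H(\Lambda)\subset k^*$ then correspond to subsets of these eight classes that are closed under multiplication.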
It is clear that if $\Lambda=\Lambda_1\bot\Lambda_2$, then $H(\Lambda_1),H(\Lambda_2)\subset H(\Lambda)$. In addition, for binary indecomposable lattices, we know that $H(\Lambda)=k^*$ \cite{A-C10}. The lattices $\Lambda$ for which the value of $H(\Lambda)$ remains unknown to date are:\\ \noindent\textbf{Case I:} $\Lambda=\langle a_1\rangle\bot\cdots\bot\langle a_n\rangle$, where $A=\{-uk^{*2}\}$ and the minimal difference between the valuations of the scales of two consecutive components satisfies $0<\min\{\nu(a_{i+1})-\nu(a_i)\}\leq \nu(16).$ Here, $u\in\mathcal{O}_k^*$ denotes an arbitrary unit of non-minimal quadratic defect \cite[\S 63]{O}.\\ \textbf{Case II:} $\Lambda=\langle a_1\rangle\bot\cdots\bot\langle a_n\rangle$, where $A=\{ \pi k^{*2}\}$ and the minimal difference between the valuations of the scales of two consecutive components satisfies $\nu(4)\leq\min\{\nu(a_{i+1})-\nu(a_i)\}\leq \nu(16).$ Here, $\pi$ denotes a prime in $k$.\\ In this article, we compute $H(\Lambda)$ for cases I and II above, when $k=\mathbb{Q}_2$. Concretely, we have the following result: \begin{theorem} The following table contains all local spinor norm computations when the base field is $\mathbb{Q}_2:$ \begin{table}[h] \begin{center} $$\begin{array}{lllllp{5cm}} s & |A| & A & \mu & H(\Lambda) & \text{Reference}\\ \hline - & >1 & - & - & \mathbb{Q}_2^* & \text{Corollary } \ref{kest2}\\ 0 & 1 & \text{-}\Delta\mathbb{Q}_2^{*2} & - & \mathbb{Z}_2^*\mathbb{Q}_2^{*2} & \text{Table 2 in \cite{A-C04}}\\ 0 & 1 & \text{-}u\mathbb{Q}_2^{*2} & 0\leq\mu<\nu(8) & \mathbb{Q}_2^*& \text{Prop. } \ref{propcaso1k}+\text{Table 1 in \cite{A-C04}}\\ 0 & 1 & \text{-}u\mathbb{Q}_2^{*2} & \mu\geq\nu(8) & N\big(\mathbb{Q}_2(a_m)^*\big)& \text{Prop. } \ref{propcaso1}+\text{Table 2 in \cite{A-C04}}\\ 0 & 1 & \pi\mathbb{Q}_2^{*2} & 0\leq\mu\leq\nu(16) & \mathbb{Q}_2^*& \text{Prop. } \ref{propcaso2}+\text{Prop. } \ref{nu4}+\text{Tables 1, 2 in \cite{A-C04}}\\ 0 & 1 & \pi\mathbb{Q}_2^{*2} & \mu>\nu(16) & N\big(\mathbb{Q}_2(a_m)^*\big)& \text{Table 2 in \cite{A-C04}}\\ \neq 0 & - & - & - & \mathbb{Q}_2^*& \text{Theorem 2 in \cite{A-C10}}\\ \end{array}$$ \end{center} \caption{Spinor images for arbitrary lattices over $\mathbb{Q}_2$.} \label{completa} \end{table} Here, $s$ denotes the number of indecomposable components of rank 2 in the decomposition (\ref{dec}) of $\Lambda$, $\mu=\mu(\Lambda)$ denotes the minimal difference between the valuations of the scales of two consecutive components of rank 1, and $\Delta\in \mathcal{O}_k^*$ is a unit of minimal quadratic defect \cite[\S 63]{O}. Furthermore, $A$, $\pi$ and $u$ are as in the previous discussion. A dash means irrelevant information. \label{teo1} \end{theorem} Our (computer-assisted) proof of Theorem \ref{teo1} is based on the following scheme: \begin{picture}(380,80) \put(0,0){\framebox(111,27){\shortstack{Compute\\$H(\Lambda)$ for $\Lambda$ of rank $n$ }}} \put(111,12){\vector(1,0){35}} \put(111,14){{\footnotesize Thm. 2}} \put(147,0){\framebox(132,27){\shortstack{Compute\\$H(\Lambda)$ for $\Lambda$ of rank $n\leq 3$ }}} \put(280,12){\vector(1,0){30}} \put(280,14){{\footnotesize $n=2$}} \put(310,2){\framebox(115,20){ Thms. 3,4 + Sage [9]}} \put(280,27){\vector(1,1){27}} \put(280,0){\vector(1,-1){27}} \put(310, 40){\framebox(99,20){ Prop. 6.1 in [2]}} \put(310, -35){\framebox(99,20){ Prop. 5.3}} \put(270,40){{\footnotesize $n=1$}} \put(270,-22){{\footnotesize $n=3$}} \end{picture} \vspace*{1.5cm} The following result is useful to reduce the study of $H(\Lambda)$ to the case of low rank $\Lambda$ for arbitrary local fields. \begin{theorem} Let $\Lambda=\langle a_1\rangle\bot\cdots\bot\langle a_n\rangle=\mathcal{O}_Ds_1\bot\cdots\bot\mathcal{O}_Ds_n$ be a skew-hermitian lattice and let $\mu=\mu(\Lambda)$ be as above.
Assume $\;\mu>\nu(4)$ and $N(a_2),...,N(a_n)\in N(a_1) k^{*2}$. Let $(s;\sigma)\in\mathcal{B}(\Lambda)$, i.e., $s=(1-r)s_m-s_0$, where $s_0=\lambda_{m+1}s_{m+1}+\cdots+\lambda_ns_n\in \mathcal{O}_Ds_{m+1}\bot\cdots\bot\mathcal{O}_Ds_n$, $\sigma=a_m(1-\bar{r})$ and $|1-\bar{r}|\geq |2|$. If $|\lambda_{m+t}|\geq |\lambda_{m+t+l}|$, for some $t\in\{1,...,n-m\}$ and for all $l\in\{1,...,n-m-t\}$, then there exists $\Lambda'=\langle b_1\rangle\bot\cdots\bot\langle b_{t+1}\rangle\subset\Lambda$ satisfying the following conditions: \begin{enumerate} \item $(s;\sigma)\in\mathcal{U}_k^+(\Lambda').$ \item $\mu(\Lambda')\geq\mu(\Lambda)$. \item $N(b_i) \in N(a_1)k^{*2}$, for all $i=1,...,t+1$. \end{enumerate} \label{red} \end{theorem} The following theorems help us to develop an algorithm to compute $H(\Lambda)$ for lattices of rank 2 over unramified dyadic fields. Using these results, together with Theorem \ref{red}, we compute $H(\Lambda)$ for the unknown cases when the base field is $\mathbb{Q}_2$. Remember that if $\Lambda=\Lambda_1\bot\Lambda_2$, then $H(\Lambda_1),H(\Lambda_2)\subset H(\Lambda)$. So, if we want to prove that $H(\Lambda)=k^*$, it is enough to do it for binary lattices. \begin{theorem} Let $\Lambda=\langle a_1\rangle\bot\langle a_2\rangle$ be a skew-hermitian lattice such that $|2a_1|\geq |a_2|$ and $N(a_2)\in N(a_1)k^{*2}$. The following statements are equivalent: \begin{enumerate} \item $H(\Lambda)=k^*$.
\item There exists $(s;\sigma)\in\mathcal{B}(\Lambda)$ such that $N\sigma\notin N\big(k(a_1)^*\big).$ \item There exists $r\in\mathcal{O}_D$ such that: $\left(\frac{N(1-r)\,,\,-Na_1}{\mathfrak{p}}\right)=-1$, $NzNa_1\in k^{*2}$ and $NzN(\pi^ta_1)^{-1}\in \mathcal{O}_k$, where $z=a_1-ra_1\bar{r}$ and $\nu(\pi^t)=\mu(\Lambda).$ \end{enumerate} \label{eq} \end{theorem} Let $e=\nu(2)/2$ be the ramification index of $k/\mathbb{Q}_2$ and remember that $t$ in the previous theorem satisfies $\nu(\pi^t)=\mu(\Lambda)$. We call the conditions for $r$ in statement \textit{3} of Theorem \ref{eq} the \textit{$k$-star conditions}. \begin{theorem} There exists $r\in \mathcal{O}_D$ satisfying the $k$-star conditions if and only if there exists $\alpha\in \mathcal{S}\oplus\mathcal{S}\omega\oplus\mathcal{S} i\oplus\mathcal{S} i\omega\subset\mathcal{O}_D$ satisfying them, where $\mathcal{S}$ is any finite set of representatives of $\mathcal{O}_k/\pi^u\mathcal{O}_k$, with $u=t+6e$. \label{ssigen} \end{theorem} \section{Arithmetic of $D=\left(\frac{\pi,\Delta}{k}\right)$}\label{ad} From here on, we work with the unique quaternion division algebra, up to isomorphism, $D=\left(\frac{\pi,\Delta}{k}\right).$ A basis for $D$ is denoted by $\{1,i,j,ij\}$, where $$i^2=\pi,\;j^2=\Delta,\;ij=-ji.$$ Let $\mathcal{O}_D$ be the unique maximal order in $D.$ The ring $\mathcal{O}_D$ is the set of integral elements in $D$ \cite[\S 2]{V}, i.e., $\mathcal{O}_D=\{q\in D \;|\; |q|\leq 1\}$. Moreover, we have $\mathcal{O}_D=\mathcal{O}_{k(j)}\oplus i\mathcal{O}_{k(j)}$.
It is not difficult to prove that $\mathcal{O}_{k(j)}=\mathcal{O}_k[\omega],$ where $\omega=\frac{j+1}{2}.$ Hence, we can write $\mathcal{O}_D=\mathcal{O}_k\oplus\mathcal{O}_k\cdot\omega\oplus\mathcal{O}_k\cdot i\oplus\mathcal{O}_k\cdot i\omega,$ i.e., $\{1,\omega,i,i\omega\}$ is a basis for $\mathcal{O}_D.$ In general, if we denote by $\widetilde{\omega}$ and $\widetilde{i}$ the classes, modulo $i^t$, of $\omega$ and $i$ respectively, we have: {\footnotesize $$\mathcal{O}_D/i^t\mathcal{O}_D\cong \begin{cases} (\mathcal{O}_k/\pi^s\mathcal{O}_k) \oplus (\mathcal{O}_k/\pi^s\mathcal{O}_k)\widetilde{\omega} \oplus (\mathcal{O}_k/\pi^s\mathcal{O}_k) \widetilde{i} \oplus (\mathcal{O}_k/\pi^s\mathcal{O}_k) \widetilde{i}\widetilde{\omega} & , \text{ if }t=2s \textup{ is even,}\\ (\mathcal{O}_k/\pi^{s+1}\mathcal{O}_k) \oplus (\mathcal{O}_k/\pi^{s+1}\mathcal{O}_k)\widetilde{\omega} \oplus (\mathcal{O}_k/\pi^s\mathcal{O}_k) \widetilde{i} \oplus (\mathcal{O}_k/\pi^s\mathcal{O}_k) \widetilde{i}\widetilde{\omega} & , \text{ if }t=2s+1 \textup{ is odd.} \end{cases}$$} \begin{remark} If $\alpha=a+b\omega+ci+di\omega\in\mathcal{O}_D$, then the trace $T$ and the reduced norm $N$ of $D$ satisfy $T(\alpha)=2a+b$ and $N(\alpha)=a^2+ab-\delta b^2-\pi(c^2+cd-\delta d^2)$, where $\delta\in \mathcal{O}_k^*$ satisfies $\Delta=1+4\delta.$ \end{remark} We finish this section with a key result that we use frequently in this article to prove that certain norms of quaternions are in the same square class. \begin{lema} If $\alpha\in\mathcal{O}_D$ satisfies $|\alpha|<1$, then $N(1+4\alpha)$ is a square. \label{LTCL} \end{lema} \begin{dem} $N(1+4\alpha)=1+4T(\alpha)+16N(\alpha)=1+4(T(\alpha)+4N(\alpha))$. The condition $|\alpha|<1$ implies $\pi|T(\alpha)$. Hence, the result follows from the Local Square Theorem \cite[\S 63]{O}.
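The trace and norm formulas of the Remark, and the congruence behind Lemma \ref{LTCL}, can be spot-checked by machine. The following Python sketch uses the illustrative values $\pi=2$ and $\Delta=5$ (so $\delta=1$); these concrete values are our own choice and serve only to exercise the identities on a grid of integral quaternions:

```python
# Machine check (illustrative values pi = 2, Delta = 5, hence delta = 1).
# Quaternions are coefficient 4-tuples over the basis {1, i, j, ij} with
# i^2 = pi, j^2 = Delta, ij = -ji.

from fractions import Fraction
from itertools import product

PI, DELTA = Fraction(2), Fraction(5)
DELTA_SMALL = (DELTA - 1) / 4            # the unit delta with Delta = 1 + 4*delta

def mul(p, q):
    """Quaternion product in the {1, i, j, ij} basis."""
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 + PI*a1*b1 + DELTA*a2*b2 - PI*DELTA*a3*b3,
            a0*b1 + a1*b0 - DELTA*a2*b3 + DELTA*a3*b2,
            a0*b2 + a2*b0 + PI*a1*b3 - PI*a3*b1,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

def norm(q):
    """Reduced norm N(q) = q * conj(q)."""
    a0, a1, a2, a3 = q
    return a0*a0 - PI*a1*a1 - DELTA*a2*a2 + PI*DELTA*a3*a3

def from_omega_basis(a, b, c, d):
    """Rewrite a + b*omega + c*i + d*i*omega, omega = (1 + j)/2, over {1, i, j, ij}."""
    h = Fraction(1, 2)
    return (a + h*b, c + h*d, h*b, h*d)

one = (Fraction(1), Fraction(0), Fraction(0), Fraction(0))
for a, b, c, d in product([-2, -1, 0, 1, 3], repeat=4):
    alpha = from_omega_basis(a, b, c, d)
    # Remark: T(alpha) = 2a + b, N(alpha) = a^2+ab-delta*b^2 - pi*(c^2+cd-delta*d^2).
    assert 2 * alpha[0] == 2*a + b
    assert norm(alpha) == a*a + a*b - DELTA_SMALL*b*b - PI*(c*c + c*d - DELTA_SMALL*d*d)
    # Lemma: |alpha| < 1 (here: 2 | N(alpha)) forces N(1 + 4*alpha) = 1 mod 8,
    # hence a square in Q_2 by the Local Square Theorem.
    if norm(alpha) % 2 == 0:
        assert norm(tuple(x + 4*y for x, y in zip(one, alpha))) % 8 == 1

# The reduced norm is multiplicative: N(pq) = N(p) N(q).
p, q = (1, 2, -1, 3), (2, -1, 1, 1)
assert norm(mul(p, q)) == norm(p) * norm(q)
print("trace/norm formulas, Local Square congruence, and multiplicativity verified")
```

Exact rational arithmetic (rather than floats) is used so that the square-class congruences mod 8 are meaningful.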
\end{dem}\\ \section{Generators of $\mathcal{U}_k^+(\Lambda)$ and their spinor norm}\label{secgen} Let $(V,h)$ be a skew-hermitian space and let $s\in V,\;\sigma\in D^*$ be such that $\sigma-\bar{\sigma}=h(s,s).$ Define the map $(s;\sigma):V\rightarrow V$ by $$(s;\sigma)(x)=x-h(x,s)\sigma^{-1}s.$$ Following \cite{A-C04} we call such maps \textit{simple rotations} with axis of rotation $s$. Simple rotations generate $\mathcal{U}_k^+$ and satisfy $$\theta[(s;\sigma)]=N(\sigma)k^{*2} \;\text{ \cite{S-H}},$$ where $N:D^*\rightarrow k^*$ is the reduced norm. Note that $\sigma\in k(a)$, where $a=h(s,s).$\\ In \cite[\S 6]{A-C04} the following lemmas are proved. The first one tells us how to construct simple rotations and is used in the second to obtain a set of generators for $\mathcal{U}_k^+(\Lambda)$. \begin{lema}\cite[Lemma 6.3]{A-C04} Let $(V,h)$ be a skew-hermitian space, and let $t,u\in V$ be such that $h(u,u)=h(t,t)=a.$ Define $r$ and $t_0$ by $u=rt+t_0,$ where $t_0\in t^\bot.$ Let $s=t-u$ and $\sigma=h(t,s).$ Then the following identities hold: $$\sigma=a(1-\bar{r}), \hspace{0.3cm} h(t_0,t_0)=a-ra\bar{r},\hspace{0.3cm} \sigma-\bar{\sigma}=h(s,s).$$ In particular $(s;\sigma)$ is a well-defined simple rotation satisfying $(s;\sigma)(t)=u$. \label{gen0} \end{lema} If $\phi\in\mathcal{U}_k^+$, we have $h(\phi(t),\phi(t))=h(t,t)$, and hence, there exists a simple rotation $(s;\sigma)$ such that $(s;\sigma)(t)=\phi(t).$ This fact can be used to prove the following result by an induction process. \begin{lema}\cite[Lemma 6.7]{A-C04} Let $\Lambda$ be as in (\ref{dec}). Assume that $\Lambda_m=\mathcal{O}_Ds_m$, with $a_m=h(s_m,s_m)$, for $m=1,...,n$. Assume also $|2a_m|\geq |a_l|$ for $m<l$.
Then the unitary group $\mathcal{U}_k^+(\Lambda)$ of the lattice is generated by elements of the following types:\\ A) Simple rotations with axis $s_m, \,$ for some $m=1,...,n.$ \\ B) Simple rotations of the form $(s;\sigma)$, where we have $\sigma=a_m(1-\bar{r}),$ and $s=(1-r)s_m-s_0,$ for some $ s_0\in\mathcal{O}_Ds_{m+1}\bot\cdots\bot\mathcal{O}_Ds_n.$ Furthermore, we can assume $1-r\notin (2i).$ \label{gen} \end{lema} \begin{dfn} Let $\Lambda$ be as in (\ref{dec}). We denote by $\mathcal{A}(\Lambda)$ the set of simple rotations of type (A) and by $\mathcal{B}(\Lambda)$ the set of simple rotations of type (B) such that $|1-r|\geq |2|$, i.e., such that $1-r\notin (2i)$. It follows that $\mathcal{U}_k^+(\Lambda)$ is generated by $\mathcal{A}(\Lambda)\cup \mathcal{B}(\Lambda).$ \label{defAB} \end{dfn} If $\Lambda=\langle a_1\rangle \bot ...\bot\langle a_n\rangle$ is a skew-hermitian lattice, then $[k^*:N\big(k(a_1)^*\big)]=2$ \cite[\S 63]{O} and $N\big(k(a_1)^*\big)\subset H(\Lambda)$ \cite[Proposition 6.1]{A-C04}. A direct consequence of these facts is the following result: \begin{prop} Let $\Lambda=\langle a_1\rangle\bot ...\bot\langle a_n\rangle$ be a skew-hermitian lattice. Then $H(\Lambda)=N\big(k(a_1)^*\big)$ or $H(\Lambda)=k^*.$ \label{konok} \end{prop} \begin{cor} Let $\Lambda$ be as in the proposition. If there exists $b\in \mathcal{O}_D$ with $N(b)\notin N(k(a_1)^*)$ such that $\Lambda=\langle b\rangle\bot \Lambda'$, then $H(\Lambda)=k^*.$ \label{kest2} \end{cor} \begin{cor} Let $\Lambda$ be as in the proposition.
Then $H(\Lambda)=k^*$ if and only if there exists $\phi\in\mathcal{C}(\Lambda)$ such that $\theta(\phi)\notin N\big(k(a_1)^*\big)/k^{*2}$, where $\mathcal{C}(\Lambda)$ is a set of generators for $\mathcal{U}_k^+(\Lambda).$ \label{kest} \end{cor} \begin{remark} Simple rotations $(s_m;\sigma)\in\mathcal{A}(\Lambda)$ have spinor norm $\theta[(s;\sigma)]=N(\sigma)k^{*2}\in N\big(k(a_m)^*\big)/k^{*2}$, since $\sigma\in k(a_m).$ Hence, if $|2a_m|\geq |a_l|$ for $m<l,$ in the previous corollary we can set $\mathcal{C}(\Lambda)=\mathcal{B}(\Lambda)$. \label{obscorkest} \end{remark} The following result is key to prove Theorem \ref{ssigen}. Note that $H(\Lambda)$ depends on the existence of simple rotations with specific spinor norm (see Corollary \ref{kest}). \begin{lema} Let $\Lambda=\langle a_1\rangle\bot ...\bot\langle a_n\rangle=\mathcal{O}_Ds_1\bot\cdots\bot\mathcal{O}_Ds_n$ be a skew-hermitian lattice such that $|2a_m|\geq |a_l|$ for $m<l$. Take $(s;\sigma)\in\mathcal{B}(\Lambda)$, i.e., $s=(1-r)s_m-s_0$, where $$s_0=\lambda_{m+1}s_{m+1}+\cdots+\lambda_ns_n\in \mathcal{O}_Ds_{m+1}\bot\cdots\bot\mathcal{O}_Ds_n,\;\sigma=a_m(1-\bar{r}) \text{ and } |1-r|\geq|2|.$$ If any of the following conditions is satisfied: \begin{enumerate} \item $|1-r|>|2|$ and $|\lambda_{m+1}|<1$, for $\mu(\Lambda)\geq\nu(8),$ and the extension $k/\mathbb{Q}_2$ is unramified, \item $|1-r|=|2|$ and $|\lambda_{m+1}|\leq |2|,$ for $\mu(\Lambda)\geq\nu(4\pi),$ \item $|1-r|>|2|$ or $|\lambda_{m+1}|<1$, for $\mu(\Lambda)\geq\nu(16),$ \item $m=1$ and $|\lambda_2|\leq |4|,$ \end{enumerate} then $\theta[(s;\sigma)]\in N\big(k(a_m)^*\big)/k^{*2}.$ \label{v16} \end{lema} \begin{dem} It suffices to prove that if $a=h(s,s)$, then $N(a)\in N(a_m)k^{*2}$, since $\sigma\in k(a).$ In fact, we have $s=(1-r)s_m-s_0$, so that $a=(1-r)a_m(1-\bar{r})+a_0$, where $a_0=h(s_0,s_0).$ It follows that \begin{equation}
N(a)=N(a_m)N(1-r)^2N\big(1+(1-r)^{-1}a_0(1-\bar{r})^{-1}a_m^{-1}\big). \label{ecnorma} \end{equation} Now, $a_0=\lambda_{m+1}a_{m+1}\overline{\lambda_{m+1}}+\cdots+\lambda_na_n\overline{\lambda_n}$ and $|(1-r)^{-1}a_0(1-\bar{r})^{-1}a_m^{-1}|=|a_0||a_m^{-1}|/|1-r|^2<|4|$ if any of the conditions above is satisfied. This implies that the last norm in (\ref{ecnorma}) is a square in virtue of Lemma \ref{LTCL}. \end{dem} \section{Proof of Theorems 2, 3 and 4} \begin{dem1} Set $\Lambda'=\mathcal{O}_Ds_m\bot\cdots\bot\mathcal{O}_Ds_{m+t-1}\bot\mathcal{O}_Ds_0'=\langle b_1\rangle\bot\cdots\bot\langle b_{t+1}\rangle$, where $s_0'=\lambda_{m+t}s_{m+t}+\cdots+\lambda_ns_n$, $b_i=h(s_{m+i-1},s_{m+i-1})=a_{m+i-1},$ for $i=1,...,t$, and $b_{t+1}=h(s_0',s_0')$. It is clear that $\Lambda'\subset \Lambda$. To prove that condition \textit{1} in the theorem is satisfied, we note that $s_0=s_0'-(\lambda_{m+1}s_{m+1}+\cdots+\lambda_{m+t-1}s_{m+t-1})\in\Lambda'$. We compute $$(s;\sigma)(s_m)=rs_m+s_0\in\Lambda',$$ $$(s;\sigma)(s_i)=s_i-h(s_i,s)\sigma^{-1}s=s_i+h(s_i,s_0)\sigma^{-1}s, \text{ for } i=m+1,...,m+t-1,$$ and $$(s;\sigma)(s_0')=s_0'-h(s_0',s)\sigma^{-1}s=s_0'+h(s_0',s_0)\sigma^{-1}s.$$ Hence, $(s;\sigma)(s_i),(s;\sigma)(s_0')\in\Lambda'$ if $h(s_i,s_0)\sigma^{-1},h(s_0',s_0)\sigma^{-1}\in\mathcal{O}_D$. The latter holds since $|\sigma|=|a_m(1-\bar{r})|\geq |2a_m|\geq \mathbb{S}(s_0),\mathbb{S}(s_0')$, where $\mathbb{S}(v)=\max_{w\in\Lambda}|h(v,w)|$ is the height of a vector $v\in\Lambda$ \cite[\S 5]{A-C04}. Then, $(s;\sigma)\in\mathcal{U}_k^+(\Lambda')$. On the other hand, as $$b_{t+1}=h(s_0',s_0')=\sum_{u=m+t}^n \lambda_ua_u\overline{\lambda_u},$$ we have $|b_{t+1}|=|a_{m+t}||\lambda_{m+t}|^2$, since $|\lambda_{m+t}|\geq |\lambda_{m+t+l}|$ for all $l\in\{1,...,n-m-t\}$ and $\mu(\Lambda)>\nu(4)$. From here $\mu(\Lambda')\geq\mu(\Lambda),$ so condition \textit{2} in the theorem is satisfied.
Finally, to prove that condition \textit{3} in the theorem holds, we consider \begin{equation*} N(b_{t+1})=N(\lambda_{m+t})^2N(a_{m+t})N\left(1+ (\lambda_{m+t}a_{m+t}\overline{\lambda_{m+t}})^{-1}\sum_{u=m+t+1}^n \lambda_u a_u\overline{\lambda_u}\right), \end{equation*} where $|(\lambda_{m+t}a_{m+t}\overline{\lambda_{m+t}})^{-1}|=|a_{m+t}|^{-1}|\lambda_{m+t}|^{-2}$. Since $|a_{m+t+l}|<|4a_{m+t}|$ and $|\lambda_{m+t}|\geq |\lambda_{m+t+l}|$ for all $l\in\{1,...,n-m-t\}$, we obtain that $$N\left(1+ (\lambda_{m+t}a_{m+t}\overline{\lambda_{m+t}})^{-1}\sum_{u=m+t+1}^n \lambda_ua_u\overline{\lambda_u}\right)$$ is a square due to Lemma \ref{LTCL}. We conclude that $N(b_{t+1})\in N(a_{m+t})k^{*2}$, and the proof of condition \textit{3} is complete. \end{dem1} The following result, together with Lemma \ref{gen0}, gives us an easy method to construct simple rotations of type (B) for binary lattices, from a given quaternion $r\in\mathcal{O}_D$ as in Definition \ref{defAB}. \begin{lema} Let $r\in\mathcal{O}_D$ be a non-zero quaternion and let $a_1,a_2\in\mathcal{O}_D$ be non-zero pure quaternions. There exists a non-zero $\lambda\in\mathcal{O}_D$ such that $a_1=ra_1\bar{r}+\lambda a_2\bar{\lambda}$ if and only if $NzNa_2\in k^{*2}$ and $NzNa_2^{-1}\in\mathcal{O}_k$, where $z=a_1-ra_1\bar{r}.$ \label{ctsu} \end{lema} \begin{dem} The equation $a_1=ra_1\bar{r}+\lambda a_2\bar{\lambda}$ has a solution $\lambda\in D^*$ if and only if the skew-hermitian form whose Gram matrix is $\left(\begin{array}{cc} z & 0 \\ 0 & -a_2 \end{array}\right)$ is isotropic. In fact, for $x,y\in D^*$, $xz\bar{x}-ya_2\bar{y}=0$ if and only if $z=\lambda a_2\bar{\lambda}$, where $\lambda=x^{-1}y$. Now, such a skew-hermitian form is isotropic if and only if it has discriminant 1 \cite[Ch. 10, \S 3, Theorem 3.6]{Sch}. The discriminant is the reduced norm of its Gram matrix.
We conclude that there exists $\lambda\in D^*$ such that $a_1=ra_1\bar{r}+\lambda a_2\bar{\lambda}$ if and only if $NzNa_2\in k^{*2}.$ Finally, we have $Nz=Na_2N\lambda^2$ if $z=\lambda a_2\bar{\lambda}$. Hence, $\lambda\in\mathcal{O}_D$ if and only if $NzNa_2^{-1}\in\mathcal{O}_k.$ \end{dem} We are ready to prove Theorem \ref{eq}. \begin{dem2} If $H(\Lambda)=k^*$, then there exists a simple rotation $(s;\sigma)\in\mathcal{B}(\Lambda)$ such that $\theta[(s;\sigma)]\notin N\big(k(a_1)^*\big)/k^{*2}$ in virtue of Remark \ref{obscorkest}, so that 1) implies 2). To prove that 2) implies 3), let $(s;\sigma)$ be a simple rotation such that $\theta[(s;\sigma)]\notin N\big(k(a_1)^*\big)/k^{*2}$. As an isometry, $(s;\sigma)$ satisfies $a_1=h(s_1,s_1)=h((s;\sigma)(s_1),(s;\sigma)(s_1))=ra_1\bar{r}+\lambda a_2\bar{\lambda}$ for some $r,\lambda\in\mathcal{O}_D.$ Such an $r\in\mathcal{O}_D$ satisfies $\sigma=a_1(1-\bar{r})$ (Lemma \ref{gen0}). Hence, $\theta[(s;\sigma)]\notin N\big(k(a_1)^*\big)/k^{*2}$ if and only if $N(1-r)\notin N\big(k(a_1)^*\big)$, in virtue of the equation $\theta[(s;\sigma)]=N(\sigma)k^{*2}$. If we rewrite the condition $N(1-r)\notin N\big(k(a_1)^*\big)$ in terms of the Hilbert symbol, we have $\left(\frac{N(1-r)\,,\,-Na_1}{\mathfrak{p}}\right)=-1$ since $a_1^2=-Na_1$. On the other hand, Lemma \ref{ctsu} tells us that $NzNa_2\in k^{*2}$ and $NzNa_2^{-1}\in\mathcal{O}_k$, where $z=a_1-ra_1\bar{r}.$ The result follows since $Na_2\in N(a_1)k^{*2}$ and $\mu=\nu(a_2)-\nu(a_1)=\nu(\pi^t)$. Finally, if $r\in\mathcal{O}_D$ satisfies $NzNa_1\in k^{*2}$ and $NzN(\pi^ta_1)^{-1}\in \mathcal{O}_k$, where $z=a_1-ra_1\bar{r}$ and $\mu=\nu(\pi^t),$ then Lemma \ref{ctsu} and Lemma \ref{gen0} imply the existence of a simple rotation $(s;\sigma)\in\mathcal{U}_k^+(\Lambda)$ of type (B) such that $\sigma=a_1(1-\bar{r}).$ The condition on the Hilbert symbol implies that $\theta[(s;\sigma)]=N(\sigma)k^{*2}\notin N\big(k(a_1)^*\big)/k^{*2}$.
Therefore, $H(\Lambda)=k^*$ in virtue of Corollary \ref{kest}. This concludes the proof. \end{dem2} \begin{cor} Let $\Lambda$ be as in Theorem \ref{eq}. Let $t$ be such that $\mu=\nu(\pi^t)$. If $H(\Lambda)=k^*$, then $H(\Lambda')=k^*$ for every lattice $\Lambda'=\langle b_1\rangle\bot\langle b_2\rangle$ with $N(b_1),N(b_2)\in N(a_1)k^{*2}$ and $\mu(\Lambda')=\nu(\pi^s)$, for $s<t.$ \label{corteo} \end{cor} \begin{remark} Due to Lemma \ref{v16}, in condition \textit{2} of Theorem \ref{eq} it is enough to consider simple rotations $(s;\sigma)\in\mathcal{B}(\Lambda)$ with $|\lambda|>|4|$, where $s=(1-r)s_1-\lambda s_2.$ Remember that $|1-r|\geq |2|$ for $(s;\sigma)\in\mathcal{B}(\Lambda)$. Furthermore, we can assume that $|a_1|\geq |i|$ by rescaling. We use these facts in the proof of Theorem \ref{ssigen}. \label{rla4} \end{remark} \begin{dem3} If $r\in\mathcal{O}_D$ satisfies the $k$-star conditions, then there exists $(s;\sigma)\in\mathcal{B}(\Lambda)$, i.e., $s=(1-r_0)s_1-\lambda s_2$, with $|1-r_0|\geq |2|$ and $\sigma=a_1(1-\overline{r_0})$, such that $N(1-r_0)\notin N\big(k(a_1)^*\big)$. Such an $r_0\in\mathcal{O}_D$ also satisfies the $k$-star conditions. Let $\alpha\in\mathcal{O}_D$ be a representative of the class of $r_0$ modulo $\pi^u$ as in the statement. Then $r_0=\alpha+\pi^u\beta$, with $\beta\in\mathcal{O}_D$ and $\alpha\in\mathcal{S}\oplus\mathcal{S}\omega\oplus\mathcal{S} i\oplus\mathcal{S} i\omega\subset\mathcal{O}_D$. As $1-r_0=1-\alpha-\pi^u\beta$, we have $N(1-r_0)=N(1-\alpha)N(1-(1-\alpha)^{-1}\pi^u\beta)$. Now, $|1-r_0|\geq|2|$ implies $|1-\alpha|\geq|2|$. Therefore, $N(1-(1-\alpha)^{-1}\pi^u\beta)$ is a square in virtue of Lemma \ref{LTCL}.
Hence, $$\left(\frac{N(1-r_0)\,,\,-Na_1}{\mathfrak{p}}\right)=\left(\frac{N(1-\alpha)\,,\,-Na_1}{\mathfrak{p}}\right).$$ On the other hand, if $z=a_1-r_0a_1\overline{r_0}$ and $z'=a_1-\alpha a_1\overline{\alpha}$, then $z=z'-\pi^u\gamma$, with $\gamma=\alpha a_1\bar{\beta}+\beta a_1\bar{\alpha}+\pi^u\beta a_1\bar{\beta}\in\mathcal{O}_D.$ Note that $a_1^{-1}\gamma\in\mathcal{O}_D.$ We have $|z'|=|z|,$ since $|z|=|\pi^t\lambda a_1\bar{\lambda}|>|16\pi^ta_1|=|\pi^{4e+t}a_1|>|\pi^u\gamma|$, where we are assuming $|\lambda|>|4|$ (see Remark \ref{rla4}). Furthermore, we have that $Nz=Nz'N(1-z'^{-1}\pi^u\gamma)$ with $|z'^{-1}\pi^u\gamma|<|\pi^{-(4e+t)}a_1^{-1}\pi^{t+6e}\gamma|=|\pi^{2e}a_1^{-1}\gamma|\leq|4|.$ Hence, $NzNa_1$ is a square if and only if $Nz'Na_1$ is a square. Finally, from $|z'|=|z|$, i.e., $|Nz|_k=|Nz'|_k$, we obtain $|Nz/\pi^{2t}N(a_1)|_k\leq 1$ if and only if $|Nz'/\pi^{2t}N(a_1)|_k\leq 1.$ \end{dem3} \begin{remark} The number $u$ in Theorem \ref{ssigen} depends on $|\lambda|$. For example, if $\lambda$ satisfies $|\lambda|=1$, then we would have $|z'|=|\pi^t\lambda a_1\bar{\lambda}|=|\pi^ta_1|$ and so $|z'^{-1}\pi^u\gamma|=|\pi^{-t}a_1^{-1}\pi^u\gamma|\leq |\pi^{u-t}|<|4|$ if $u=t+3e.$ This holds in some cases when $k=\mathbb{Q}_2.$ \label{u} \end{remark} The following result lets us choose particular lattices to compute $H(\Lambda)$ for arbitrary lattices. Note that for either of the remaining cases I or II described in the introduction, the extension $k(a_1)/k$ is ramified. \begin{lema} Let $\Lambda=\langle a_1\rangle \bot \langle a_2\rangle$ be a skew-hermitian lattice such that $N(a_2)\in N(a_1)k^{*2}$ and the extension $k(a_1)/k$ is ramified. Then, there exists a skew-hermitian lattice $L=\langle q\rangle\bot\langle \epsilon q\rangle$, where $q\in D^*$ and $\epsilon\in k^*,$ such that $H(\Lambda)=H(L)$.
Moreover, we can assume that $q=q'$ for any quaternion $q'\in D^*$ with $N(q')\in N(a_1)k^{*2}.$ \label{ret} \end{lema} \begin{dem} If $N(a_2)=N(a_1)b^2=N(ba_1)$ for some $b\in k^*$ and $k(a_1)/k$ is ramified, we can use Lemma 4.3 in \cite{A-C04} to conclude that $\langle a_1\rangle \bot \langle a_2\rangle\cong \langle a_1\rangle \bot \langle \xi ba_1\rangle$, for some $\xi\in k^*.$ Therefore, it is enough to take $L=\langle q\rangle\bot\langle \epsilon q\rangle$, where $q=a_1$ and $\epsilon=\xi b$. Finally, if $q'\in D^*$ with $N(q')\in N(a_1)k^{*2}$, we have $N(q')=N(ca_1)$ for some $c\in k^*$. Applying Lemma 4.3 in \cite{A-C04} again, with $q=a_1$, we get $q'=\eta \varepsilon cq\bar{\eta}$, for some $\varepsilon\in k^*$ and $\eta\in D^*.$ Hence, $\Lambda'=\langle q'\rangle\bot\langle \epsilon q'\rangle$ is isometric to a rescaling of $L$. We conclude that $H(\Lambda')=H(L)$, as stated. \end{dem} \section{Algorithm for $k=\mathbb{Q}_2$ and proof of Theorem \ref{completa}}\label{algq2} By Theorems \ref{eq} and \ref{ssigen}, we are in a position to construct an algorithm to compute $H(\Lambda)$ for a binary lattice $\Lambda$, as follows. If $k=\mathbb{Q}_2$, we have $\mathcal{O}_k=\mathbb{Z}_2$, $\mathcal{O}_D=\mathbb{Z}_2\oplus\mathbb{Z}_2\omega\oplus\mathbb{Z}_2i\oplus\mathbb{Z}_2i\omega$ and $\mathcal{O}_k/\pi^u\mathcal{O}_k\cong \mathbb{Z}/2^u\mathbb{Z}$. \begin{enumerate} \item Replace $\Lambda$ by a suitable lattice $L=\langle q\rangle\bot\langle \epsilon q\rangle$ as in Lemma \ref{ret}. We have two possibilities for the square class of $Na_1$: \begin{enumerate} \item If $Na_1\in\alpha\mathbb{Q}_2^{*2}$, where $\alpha$ is a unit, we look for a pure quaternion $q\in\mathcal{O}_D$ such that $Nq\equiv \alpha\;(8)$. In this case, $Nq=\alpha(1+8\alpha^{-1}\beta)$, for some $\beta\in\mathcal{O}_D$. Hence $Nq\in N(a_1)\mathbb{Q}_2^{*2}$ and $|q|=1$.
\item If $Na_1\in\alpha\mathbb{Q}_2^{*2}$, where $\alpha$ is a prime, we look for a pure quaternion $q\in\mathcal{O}_D$ such that $Nq\equiv \alpha\;(16)$. In this case, $Nq=\alpha(1+16\alpha^{-1}\beta)$, for some $\beta\in\mathcal{O}_D$. Hence, $Nq\in N(a_1)\mathbb{Q}_2^{*2}$ and $|q|=|i|.$ \end{enumerate} In both cases we obtain a pure quaternion $q\in\mathcal{O}_D$ with $|q|\geq|i|.$ \item Fix a set of representatives $\mathcal{S}$ of the finite ring $\mathcal{O}_k/\pi^u\mathcal{O}_k$: we can choose $\mathcal{S}=\{0,1,...,2^u-1\}$ because $\mathcal{O}_k/\pi^u\mathcal{O}_k\cong \mathbb{Z}/2^u\mathbb{Z}.$ \item For $r=a+b\omega+ci+di\omega\in\mathcal{S}\oplus\mathcal{S}\omega\oplus\mathcal{S} i\oplus\mathcal{S} i\omega\subset\mathcal{O}_D$, check whether the $k$-star conditions are satisfied. This verification can be done by using Sage functions such as $\alpha\mapsto (\alpha).ordp()$, which gives us the $p$-adic valuation of $\alpha$ in $\mathbb{Q}_p$ \cite{S}. We know that $N(a,b,c,d,\pi)=a^2+ab-b^2-\pi(c^2+cd-d^2)$ is the norm of $r=a+b\omega+ci+di\omega\in\mathcal{O}_D$. Then, if we write $z=q-rq\bar{r}=z_0+z_1\omega+z_2i+z_3i\omega$, we get $Nz=N(z_0,z_1,z_2,z_3,\pi)$. Now, we write a little program using Sage, as shown in the following example. Note that $Nz.subs()$ lets us substitute values in the expression for $Nz$: \begin{algorithm} \caption{Case I with $\Lambda=\langle j+ij\rangle\bot\langle 4(j+ij)\rangle$} R=Qp(2); \begin{algorithmic} \FOR{$a,b,c,d\in\mathcal{S}$} \IF{hilbert\_symbol$(N(a-1,b,c,d,2),-5,2)==-1$ and R$(Nz.subs(a=a,b=b,c=c,d=d)*5)$.is\_square() and R$(Nz.subs(a=a,b=b,c=c,d=d)/2^4)$.ordp()$>=0$} \RETURN $H(\Lambda)=\mathbb{Q}_2^*$ \ENDIF \ENDFOR \RETURN $H(\Lambda)=N\big(\mathbb{Q}_2(q)^*\big)$ \end{algorithmic} \end{algorithm} \item Conclude that $H(\Lambda)=\mathbb{Q}_2^*$ if some $r$ in the last step satisfies the $k$-star conditions.
Otherwise, conclude that $H(\Lambda)=N\big(\mathbb{Q}_2(a_1)^*\big)$ in virtue of Theorem \ref{ssigen} and Proposition \ref{konok}. \end{enumerate} \begin{remark} The algorithm depends only on $a_1$ and $\mu(\Lambda)$. Then, for the lattice $L$ above, we are interested only in $q$ and the valuation of $\epsilon$, so we can multiply $\epsilon$ by any element of $\mathcal{O}_k^*$. \label{multk} \end{remark} \begin{remark} The algorithm can be extended to any unramified finite extension $k$ of $\mathbb{Q}_2$. The condition $|2a_1|\geq |a_2|$ in Theorems \ref{eq} and \ref{red} is essential. Hence, the algorithm does not work for $\mu<\nu(2)$ if the extension $k/\mathbb{Q}_2$ ramifies, unless the algorithm returns the value $k^*$ for $\mu<\nu(2).$ \end{remark} \subsection{Computations using Sage} To compute the spinor images in cases I and II when $k=\mathbb{Q}_2$, we use the algorithm above. The following results are obtained by computer search. When the algorithm finds solutions, we list them; otherwise, we simply state that no solutions were found. \begin{lema} For any $q\in\{j+ij,i+j\}$ and $t\in\{3,4\}$, there exist $r_1,r_2\in\mathcal{O}_D$ such that: \begin{enumerate} \item $|1-r_1|=|2|,$ $NzNq\in\mathbb{Q}_2^{*2}$ and $NzN(2^tq)^{-1}\in\mathbb{Z}_2^*,$ where $z=q-r_1q\overline{r_1}$. \item $|1-r_2|=|i|,$ $NzNq\in\mathbb{Q}_2^{*2}$ and $NzN(2^tq)^{-1}\in\mathbb{Z}_2^*,$ where $z=q-r_2q\overline{r_2}$. \end{enumerate} \label{v1} \end{lema} \begin{dem} It is a direct computation to verify that the elements $r_1,r_2\in\mathcal{O}_D$ in the table below satisfy conditions \textit{1-2} in the lemma.
\begin{table}[h] \begin{center} $$\begin{array}{|c|c|c|c|} \hline q & t &r_1 & r_2\\ \hline j+ij & 3 &-1-4i-4i\omega&1-14\omega-i-10i\omega\\ \hline j+ij & 4 &-1-8i-8i\omega&1-6\omega-13i-6i\omega\\ \hline i+j & 3 &-1-4i\omega& 1-2\omega-i\\ \hline i+j & 4 &-1-8i\omega& -1-6\omega-3i\\ \hline \end{array}$$ \end{center} \end{table} \end{dem} Using the Sage algorithm above, we find elements $r\in\mathcal{O}_D$ (see the two following lemmas) which satisfy the $k$-star conditions for particular lattices. This helps us to conclude, in the next section, that $H(\Lambda)=\mathbb{Q}_2^*$ in some cases. \begin{lema} Let $\Lambda=\langle a_1\rangle\bot\langle a_2\rangle$ be a skew-hermitian lattice satisfying the conditions in Theorem \ref{eq}. For $a_1\in\{j+ij,i+j\}$ and $t\in\{1,2\}$, there exists $r\in\mathcal{O}_D$ satisfying the $k$-star conditions. \label{kestu} \end{lema} \begin{dem} It is a direct computation to verify that the elements $r\in\mathcal{O}_D$ shown in the table below satisfy the required conditions for $t=2$, and then for $t=1$ in virtue of Corollary \ref{corteo}. Here, $i_{_2}=i$ satisfies $i_{_2}^2=2.$ \begin{center} \begin{footnotesize} \begin{tabular}{|c|c|c|c|c|}\hline $a_1$ & $r$ & $N(1-r)$ & $z$ & $NzNa_1$\\ \hline $j+ij$ & $1+2i_{_2}\omega$ & $2\cdot 2^2$ & $4(-1+2\omega-4i_{_2}-7i_{_2}\omega)$ & $2^4\cdot 5^2$ \\ \hline $i+j$ & $1+2i_{_2}+2i_{_2}\omega$ & $-2\cdot 2^2$ & $4(1-2\omega+3i_{_2}+3i_{_2}\omega)$ & $2^4(1+8\cdot 20)$ \\ \hline \end{tabular} \end{footnotesize} \end{center} \end{dem} Note that $D=\left(\frac{\pi,\Delta}{k}\right)$ for any prime $\pi$ of $k$. Hence, for every prime $\pi$, there exists a pure quaternion $i_{_\pi}\in ik(j)$ satisfying $i_{_\pi}^2=\pi$ and $i_{_\pi}j=-ji_{_\pi}.$ \begin{lema} Let $\Lambda=\langle a_1\rangle\bot\langle a_2\rangle$ be a skew-hermitian lattice satisfying the hypothesis in Theorem \ref{eq}.
For every $a_1\in\{i_{_{\pm 2}},i_{_{\pm 10}}\}$ as above and $t\in\{1,2,3,4\}$, there exists $r\in\mathcal{O}_D$ satisfying the $k$-star conditions. \label{kestpi} \end{lema} \begin{dem} As in Lemma \ref{kestu}, it is easy to see that the elements $r\in\mathcal{O}_D$ shown in the table below satisfy the required conditions for $t=4$, and then for $t<4$ in virtue of Corollary \ref{corteo}. \begin{center} \begin{footnotesize} \begin{tabular}{|c|c|c|c|c|}\hline $\pi$ & $r$ & $N(1-r)$ & $z$ & $NzNa_1$\\ \hline $\pm 2$ & $15+8\omega$ & $5\cdot 2^2(1+8\cdot 5^{-1}\cdot 7)$ & $-592i_{_\pi}+304i_{_\pi}\omega$ & $2^{10}(1+8\cdot 38)$\\ \hline $\pm 10$ & $15+8\omega$ & $5\cdot 2^2(1+8\cdot 5^{-1}\cdot 7)$ & $-592i_{_\pi}+304i_{_\pi}\omega$ & $2^{10}\cdot 5^2(1+8\cdot 38)$\\ \hline \end{tabular} \end{footnotesize} \end{center} \end{dem} The following result helps us to prove, in the next section, that $H(\Lambda)\neq \mathbb{Q}_2^*$ in some cases. It is proved by an explicit search using Sage as above. \begin{lema} There is no $r=a+b\omega+ci+di\omega\in \mathbb{Z}\oplus\mathbb{Z}\omega\oplus\mathbb{Z} i\oplus\mathbb{Z} i\omega\subset\mathcal{O}_D$, with $0\leq a,b,c,d<2^{t+3}$, satisfying the $k$-star conditions for $t\in\{3,4\}$ and $a_1\in\{j+ij,j+i\}.$ \label{noexistev8} \end{lema} \subsection{Proof of Theorem 1 in Cases I and II} \textbf{Proof in Case I.} Here we have $\Lambda=\langle a_1\rangle\bot ...\bot\langle a_n\rangle$, where $N(a_m)\in-u\mathbb{Q}_2^{*2}$ for each $m=1,...,n$, and $u\in\mathbb{Z}_2^*$ is a unit of non-minimal quadratic defect independent of $m$. As $\mathbb{Z}_2^*/\mathbb{Z}_2^{*2}=\{\overline{\pm 1},\overline{\pm 5}\}$ and a pure quaternion cannot have norm $-1$, we have two options for $u$: $u=-5$ or $u=-1$.
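The square-class bookkeeping in $\mathbb{Z}_2^*/\mathbb{Z}_2^{*2}=\{\overline{\pm 1},\overline{\pm 5}\}$ can be double-checked mechanically. The following is a minimal sketch in plain Python (the helper name is ours), based on the standard consequence of the Local Square Theorem that an odd $2$-adic integer is a square in $\mathbb{Z}_2^*$ if and only if it is congruent to $1$ modulo $8$:

```python
# A 2-adic unit u (odd integer) is a square in Z_2^* iff u = 1 (mod 8).
def is_square_unit_Q2(u):
    assert u % 2 == 1, "u must be a 2-adic unit (odd)"
    return u % 8 == 1

# Representatives of Z_2^*/Z_2^{*2}.  For odd u we have u^{-1} = u (mod 8),
# so u and v lie in the same square class iff u*v = 1 (mod 8).
reps = [1, -1, 5, -5]
classes_distinct = all(
    not is_square_unit_Q2(u * v)
    for idx, u in enumerate(reps)
    for v in reps[idx + 1:]
)
print(classes_distinct)  # prints True
```

Running the check confirms that the four classes are pairwise distinct, so $\{\overline{\pm 1},\overline{\pm 5}\}$ indeed exhausts $\mathbb{Z}_2^*/\mathbb{Z}_2^{*2}$.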
In virtue of Lemma \ref{ret}, we consider, for binary lattices, $\Lambda=\langle q\rangle\bot\langle \epsilon q\rangle$, where we can choose any pure quaternion $q\in\mathcal{O}_D^0$ satisfying $N(q)\in -u\mathbb{Q}_2^{*2}$, and $\epsilon=\varepsilon\pi^t$, with $\varepsilon\in\mathbb{Z}_2^*$ and $t\in\{1,2,3,4\}$. Here, $q=q_u$ runs over a system of representatives with $N(q)\in-u\mathbb{Q}_2^{*2}$, for $u$ running over the set $\{-5,-1\}$ of units of non-minimal quadratic defect. Moreover, we can assume $i=i_{_2}$, so $i^2=2$, and therefore we work with $\Lambda=\langle q_u\rangle\bot\langle 2^tq_u\rangle$, where $t\in\{1,2,3,4\},$ $q_{_{-5}}=j+ij$ and $q_{_{-1}}=i+j$. \begin{prop} Let $\Lambda=\langle a_1\rangle\bot ...\bot\langle a_n\rangle$ be a skew-hermitian lattice such that $N(a_1)$,..., $N(a_n)\in -u\mathbb{Q}_2^{*2}$ and $0<\mu(\Lambda)<\nu(8)$. Then $H(\Lambda)=\mathbb{Q}_2^*.$ \label{propcaso1k} \end{prop} \begin{dem} It is enough to consider the case $n=2$. So, by the discussion above, we can assume that $\Lambda=\langle q_u\rangle\bot\langle 2^tq_u\rangle,$ with $q_{_{-5}}=j+ij$ and $q_{_{-1}}=i+j$, where $t\in\{1,2\}.$ In virtue of Corollary \ref{corteo}, it suffices to prove the result for $t=2.$ Lemma \ref{kestu} tells us that there exists $r\in\mathcal{O}_D$ satisfying the $k$-star conditions. This is equivalent to $H(\Lambda)=\mathbb{Q}_2^*.$ \end{dem} To handle the cases where $\nu(8)\leq \mu\leq\nu(16)$, we use the following result, which improves the set of generators $\mathcal{B}(\Lambda).$ The proof is a routine calculation.
\begin{lema} If $r\in \mathcal{O}_D$ satisfies any of the equations \begin{eqnarray*} j+ij & = & r(j+ij)\bar{r}+\epsilon 2^t\lambda (j+ij)\bar{\lambda}, \\ i+j & = & r(i+j)\bar{r}+\epsilon 2^t\lambda (i+j)\bar{\lambda}, \label{rjkrc} \end{eqnarray*} where $\epsilon\in\mathbb{Z}_2^*,\;\lambda\in\mathcal{O}_D,\;t\geq 2$, then $1-r\in i\mathcal{O}_D.$ \label{lrjkrc} \end{lema} \begin{lema} Let $\Lambda=\mathcal{O}_Ds_1\bot\mathcal{O}_Ds_2=\langle a_1\rangle\bot\langle a_2\rangle$ be a skew-hermitian lattice such that $N(a_1),N(a_2)\in-u\mathbb{Q}_2^{*2}$ and $\nu(8)\leq \mu(\Lambda)\leq\nu(16)$. There exists a lattice $L$ of rank 2 such that $H(L)=H(\Lambda)$ and the subsets $\mathcal{B}_1(L),\mathcal{B}_2(L)\subset\mathcal{B}(L)$, defined by $\mathcal{B}_1(L)=\{(s;\sigma)\in\mathcal{B}(L)\;:\;|1-r|=|i|\}$ and $\mathcal{B}_2(L)=\{(s;\sigma)\in\mathcal{B}(L)\;:\;|\lambda|=1\}$, where $r$ is as in Lemma \ref{gen0} and $\lambda$ is as in Lemma \ref{ctsu}, satisfy that $\mathcal{A}(L)\cup \mathcal{B}_l(L)$ generates $\mathcal{U}_{_{\mathbb{Q}_2}}^+(L),$ for $l=1,2.$ \label{genq2} \end{lema} \begin{dem} Remember that $u\in\{-5,-1\}.$ By Lemma \ref{ret}, the lattice $L=\mathcal{O}_Ds_1\bot\mathcal{O}_Ds_2=\langle q_u\rangle\bot\langle \epsilon 2^tq_u\rangle,$ where $\epsilon\in\mathbb{Z}_2^*$ and $t\in\{3,4\}$, satisfies $H(L)=H(\Lambda)$, where $q_{_{-5}}=j+ij$ and $q_{_{-1}}=i+j$. Let $\phi\in\mathcal{B}(L)$ be such that $\phi(s_1)=rs_1+\lambda s_2$. We have $|1-r|\in\{|i|,|2|\}$ in virtue of Lemma \ref{lrjkrc}.
Hence, to prove that $\mathcal{B}_1(L)$ satisfies the required property, it suffices to prove\footnote{In this case, there exists a second element $(s';\sigma')\in\mathcal{B}_1(L)$ defined by $s'=s_1-(s;\sigma)\phi(s_1),\;\sigma'=q(1-\overline{r''})$ such that $(s';\sigma')(s;\sigma)\phi(s_1)=s_1$.} that, if $\phi$ satisfies $|1-r|=|2|$, then there exists $(s;\sigma)\in\mathcal{B}(L)$ such that $|1-r''|=|i|$ and $|1-r'|=|i|$, where $r',r''\in\mathcal{O}_D$ are defined by $(s;\sigma)(s_1)=r's_1+\lambda's_2$ and $(s;\sigma)\phi(s_1)=r''s_1+\lambda''s_2$, for some $\lambda',\lambda''\in\mathcal{O}_D$. In fact, computing $(s;\sigma)\phi(s_1)=(s;\sigma)(rs_1+\lambda s_2)$ we have \begin{eqnarray} 1-r'' & = & 1-r+[rq(1-\bar{r'})+\lambda \epsilon 2^tq\bar{\lambda'}](1-\bar{r'})^{-1}q^{-1}(1-r')\label{1-r''},\\ \lambda'' & = &\lambda+[rq(1-\bar{r'})+\lambda \epsilon 2^tq\bar{\lambda'}](1-\bar{r'})^{-1}q^{-1}\lambda'.\label{la''} \end{eqnarray} Lemma \ref{v1} implies the existence of an element $r'\in\mathcal{O}_D$ such that $$|1-r'|=|i|,\hspace{1cm} NzN(\epsilon 2^tq)\in\mathbb{Q}_2^{*2} \hspace{0.5cm}\text{ and }\hspace{0.5cm} NzN(\epsilon 2^tq)^{-1}\in\mathbb{Z}_2,$$ where $z=q-r'q\overline{r'}$. Hence, by Lemma \ref{ctsu}, there exists $\lambda'\in\mathcal{O}_D$ such that $q=r'q\overline{r'}+\epsilon 2^t\lambda'q\overline{\lambda'}$. Then $(s;\sigma)$, with $s=(1-r')s_1-\lambda's_2,\;\sigma=q(1-\bar{r'})$, defines a simple rotation of type (B) in virtue of Lemma \ref{gen0}. Note that $(s;\sigma)\in\mathcal{B}_1(L)$.
On the other hand, as \begin{eqnarray*} |[rq(1-\bar{r'})+\lambda \epsilon 2^tq\bar{\lambda'}](1-\bar{r'})^{-1}q^{-1}(1-r')|&=&|rq(1-\bar{r'})+\lambda \epsilon 2^tq\bar{\lambda'}|\\ &=&|1-\bar{r'}|=|i| \end{eqnarray*} and $|1-r|=|2|$, it follows that $|1-r''|=|i|.$ In particular, $\mathcal{A}(L)\cup \mathcal{B}_1(L)$ generates $\mathcal{U}_{_{\mathbb{Q}_2}}^+(L).$ Now, to prove that $\mathcal{A}(L)\cup\mathcal{B}_2(L)$ generates $\mathcal{U}^+(L)$, it suffices to prove\footnote{By an argument similar to the one used for $\mathcal{B}_1(L)$.} that, if $\phi\in\mathcal{B}(L)$ satisfies $|\lambda|<1$, then there exists $(s;\sigma)\in\mathcal{B}(L)$ such that $|\lambda'|=1$ and $|\lambda''|=1$, where $\lambda,\lambda',\lambda''$ are defined by $\phi,(s;\sigma)$ and $(s;\sigma)\phi$ respectively, as before. From equation (\ref{la''}) we see that $|\lambda''|=1$ if $|\lambda|<1$ and $|\lambda'|=1.$ By Lemma \ref{v1}, there exists $r'\in\mathcal{O}_D$ such that $$|1-r'|=|i| \text{ or } |2|, \hspace{0.5cm} NzN(\epsilon 2^tq)\in\mathbb{Q}_2^{*2} \hspace{0.5cm}\text{ and }\hspace{0.5cm} NzN(\epsilon2^tq)^{-1}\in\mathbb{Z}_2^*,$$ where $z=q-r'q\overline{r'}$. Hence, by Lemma \ref{ctsu}, there exists $\lambda'\in\mathcal{O}_D$ such that $q=r'q\overline{r'}+\epsilon 2^t\lambda'q\overline{\lambda'}$. Then $(s;\sigma)$, with $s=(1-r')s_1-\lambda's_2$ and $\sigma=q(1-\bar{r'})$, defines a simple rotation of type (B) in virtue of Lemma \ref{gen0}, and this rotation satisfies $|\lambda'|=1$ since $NzN(\epsilon2^tq)^{-1}\in\mathbb{Z}_2^*.$ Now, we take $|1-r'|=|i|$ if $|1-r|=|2|$, and $|1-r'|=|2|$ if $|1-r|=|i|$, so that $|1-r'|,|1-r''|\geq |2|$ in virtue of equation (\ref{1-r''}). The result follows. \end{dem} \begin{remark} Notice that, for a lattice $\Lambda$ as in the previous lemma, we can replace $\mathcal{B}(\Lambda)$ by $\mathcal{B}_l(\Lambda),$ for $l=1,2$, in Theorem \ref{eq}.
Hence, for any $(s;\sigma)$ in $\mathcal{B}_2(L)$ we have that, if $z=q-rq\bar{r}$, then $|z|=|2^t|$, since $|z|=|2^t\lambda q\bar{\lambda}|.$ This fact helps us to improve the number $u$ in Theorem \ref{ssigen}, in virtue of Remark \ref{u}. \label{BporB1B2} \end{remark} \begin{prop} There exists $r\in \mathcal{O}_D$ satisfying the $k$-star conditions for $t\in\{3,4\}$ and $Na_1\in -u\mathbb{Q}_2^{*2}$, with $u$ a unit of non-minimal quadratic defect, if and only if there exists $\alpha=a+b\omega+ci+di\omega\in \mathbb{Z}\oplus\mathbb{Z}\omega\oplus\mathbb{Z} i\oplus\mathbb{Z} i\omega\subset\mathcal{O}_D$, with $0\leq a,b,c,d<2^{t+3}$, satisfying them. \label{ssi} \end{prop} We have a direct consequence of Lemma \ref{genq2} and Lemmas \ref{ret}, \ref{noexistev8}. \begin{cor} Let $\Lambda=\langle a_1\rangle\bot\langle a_2\rangle$ be a skew-hermitian lattice such that $N(a_1),N(a_2)\in -u\mathbb{Q}_2^{*2}$, where $u$ is a unit of non-minimal quadratic defect and $\mu=\nu(a_2)-\nu(a_1)$ satisfies $\nu(8)\leq\mu\leq\nu(16)$. Then $H(\Lambda)=N\big(\mathbb{Q}_2(a_1)^*\big).$ \label{nq2v8} \end{cor} We need the following result to handle lattices $\Lambda$ with $\mu(\Lambda)=\nu(8).$ \begin{lema} If $|\eta|=|i|$ and $a_1$ is a pure unit, then $T(2(\eta a_1\bar{\eta})^{-1}a_1)\in\pi\mathcal{O}_k.$ \label{traza} \end{lema} \begin{dem} Set $\rho=i^{-1}\eta\in\mathcal{O}_D^*,$ so that $\eta=i\rho.$ Note that $a_1i\equiv i\bar{a_1}\text{ (mod }\pi),$ where $\rho$ and $\bar{a_1}$ commute modulo $i$.
We conclude that $$\eta a_1\bar{\eta}\equiv -N(\rho)\pi \bar{a_1}\text{ (mod }\pi i).$$ In other words, $\frac{1}{\pi}\eta a_1\bar{\eta}=-N(\rho)\bar{a_1}+\varepsilon,$ where $\varepsilon\in i\mathcal{O}_D,$ whence $\pi(\eta a_1\bar{\eta})^{-1}=-(N(\rho)\bar{a_1})^{-1}+\delta=\frac{-a_1}{N(\rho a_1)}+\delta$, for some $\delta\in i\mathcal{O}_D.$ We conclude that $$T\big(2(\eta a_1\bar{\eta})^{-1}a_1\big)\equiv \frac{-4a_1^2}{\pi N(\rho a_1)}+\frac{2}{\pi}T(\delta a_1)\text{ (mod }\pi)$$ and the result follows, since $\delta\in i\mathcal{O}_D$ implies $T(\delta)\in\pi\mathcal{O}_k.$ \end{dem} \begin{prop} Let $\Lambda=\langle a_1\rangle\bot ...\bot\langle a_n\rangle$ be a skew-hermitian lattice such that $N(a_1),...,$ $N(a_n)\in -u\mathbb{Q}_2^{*2}$, where $u$ is a unit of non-minimal quadratic defect. If $\mu=\mu(\Lambda)$ satisfies $\nu(8)\leq\mu\leq\nu(16)$, then $H(\Lambda)=N\big(\mathbb{Q}_2(a_1)^*\big).$ \label{propcaso1} \end{prop} \begin{dem} We consider separately the cases $\mu=\nu(16)$ and $\mu=\nu(8)$.\\ \underline{\textbf{Case 1:} $\mu=\nu(16).$}\\ In virtue of Lemma \ref{v16}, it suffices to consider rotations $(s;\sigma)\in\mathcal{B}(\Lambda)$ such that $|1-r|=|2|$ and $|\lambda_2|=1$. In this case, Theorem \ref{red} tells us that we can set $n=2$ in the statement of the proposition. For $n=2$, because of Lemma \ref{genq2}, we can replace $\Lambda$ by a lattice $L$ such that $H(L)=H(\Lambda)$ and a set of generators of $\mathcal{U}_{_{\mathbb{Q}_2}}^+(L)$ is $\mathcal{A}(L)\cup\mathcal{B}_1(L)$. It follows that $H(\Lambda)=N\big(\mathbb{Q}_2(a_1)^*\big)$, since rotations in $\mathcal{B}_1(L)$ have spinor norm belonging to $N\big(\mathbb{Q}_2(a_1)^*\big)$ in virtue of Lemma \ref{v16}.
\\ \underline{\textbf{Case 2:} $\mu=\nu(8).$} \\ In virtue of Lemma \ref{v16}, any rotation $(s;\sigma)\in\mathcal{B}(\Lambda)$ satisfies $\theta[(s;\sigma)]\in N\big(\mathbb{Q}_2(a_1)^*\big)/\mathbb{Q}_2^{*2}$ unless one of the following conditions is satisfied: \begin{enumerate} \item $|1-r|=|i|,\;|\lambda_2|=1$, \item $|1-r|=|2|,\;|\lambda_2|\in\{1,|i|\}.$ \end{enumerate} As in Case 1, we can reduce the cases for which $|\lambda_2|=1$ to the study of rank 2 lattices, and the case $|1-r|=|2|,\;|\lambda_2|=|i|$ to the study of rank 3 lattices with\footnote{If $|\lambda_3|<1$, then we can reduce the study to rank 2 lattices (see Theorem \ref{red}).} $|\lambda_3|=1$. For rank 2 lattices, Corollary \ref{nq2v8} tells us that $H(\Lambda)=N\big(\mathbb{Q}_2(a_1)^*\big).$ We prove that, for rank 3 lattices $\Lambda$ such that $(s;\sigma)\in\mathcal{B}(\Lambda)$ satisfies $|1-r|=|2|,\;|\lambda_2|=|i|,\;|\lambda_3|=1$, we also have $\theta[(s;\sigma)]\in N\big(\mathbb{Q}_2(a_1)^*\big)/\mathbb{Q}_2^{*2}.$ In fact, in virtue of Lemma \ref{ret}, we can assume that $\Lambda=\langle a_1\rangle\bot\langle 8\epsilon_2 a_1\rangle\bot\langle 64\epsilon_3 a_1\rangle$, with $\epsilon_2,\epsilon_3\in\mathbb{Z}_2^*$. Hence, Lemma \ref{gen0} tells us that $r,\lambda_2,\lambda_3\in\mathcal{O}_D$, with $|1-r|\geq |2|$, define an element $\phi\in\mathcal{B}(\Lambda)$ if and only if they satisfy the relation $$z=8\lambda_2 \epsilon_2a_1\overline{\lambda_2}+64\lambda_3\epsilon_3a_1\overline{\lambda_3},$$ where $z=a_1-ra_1\bar{r}$. We can rewrite this equation as follows: $$z=8\lambda_3 (\epsilon_2\eta a_1\overline{\eta}+8\epsilon_3a_1)\overline{\lambda_3},$$ where $\eta=\lambda_3^{-1}\lambda_2.$ Remember that, in this case, $|\lambda_2|=|i|$ and $|\lambda_3|=1$.
Hence, by Lemma \ref{ctsu}, the existence of $r,\lambda_2,\lambda_3$ satisfying the equation above is equivalent to the existence of $r,\eta\in\mathcal{O}_D$, with $|\eta|=|i|$, such that $NzN(\epsilon_2\eta a_1\overline{\eta}+8\epsilon_3 a_1)\in\mathbb{Q}_2^{*2}$ and $NzN\big(8(\epsilon_2\eta a_1\overline{\eta}+8\epsilon_3 a_1)\big)^{-1}\in\mathbb{Z}_2$. We know that $|\epsilon_2\eta a_1\overline{\eta}+8\epsilon_3 a_1|=|2|$, so $NzN\big(8(\epsilon_2\eta a_1\overline{\eta}+8\epsilon_3 a_1)\big)^{-1}\in\mathbb{Z}_2$ if and only if $\frac{Nz}{2^8}\in\mathbb{Z}_2.$ On the other hand, $N(\epsilon_2\eta a_1\overline{\eta}+8\epsilon_3 a_1)=N(\epsilon_2\eta a_1\overline{\eta})N(1+8\epsilon(\eta a_1\overline{\eta})^{-1}a_1)$, where $\epsilon=\epsilon_2^{-1}\epsilon_3\in\mathbb{Z}_2^*$. Here, as $|8\epsilon(\eta a_1\overline{\eta})^{-1}a_1|=|4|$, we can write $8\epsilon(\eta a_1\overline{\eta})^{-1}a_1=4\epsilon\xi$, with $\xi=2(\eta a_1\overline{\eta})^{-1}a_1\in\mathcal{O}_D^*$, and we have that $$N(1+4\epsilon\xi)=1+4\epsilon T(\xi)+16\epsilon^2N(\xi).$$ As $T(\xi)=2x+y$, with $x,y\in\mathbb{Z}_2$, it follows that $N(1+4\epsilon\xi)\in\{\mathbb{Q}_2^{*2},\;5\mathbb{Q}_2^{*2}\}.$ Hence, $N(\epsilon_2\eta a_1\overline{\eta}+8\epsilon_3 a_1)\in\{N(a_1)\mathbb{Q}_2^{*2},\;5N(a_1)\mathbb{Q}_2^{*2}\}$, where $N(\epsilon_2\eta a_1\overline{\eta}+8\epsilon_3 a_1)\in 5Na_1\mathbb{Q}_2^{*2}$ if and only if $T(\xi)\equiv 1$ (mod 2). The last condition is not satisfied, in virtue of Lemma \ref{traza}. Therefore, we are reduced to the following result, which is an analogue of Theorem \ref{eq}: \textit{$H(\Lambda)=\mathbb{Q}_2^*$ if and only if there exists $r\in\mathcal{O}_D$, with $|1-r|=|2|$, such that: \begin{enumerate} \item $\big(\frac{N(1-r),-Na_1}{\mathfrak{p}}\big)=-1$, \item $NzNa_1\in\mathbb{Q}_2^{*2}$, \item $|z|=|16|$. \end{enumerate}} Corollary \ref{nq2v8} implies that there is no $r\in\mathcal{O}_D$ satisfying the conditions above.
Hence, we conclude that $H(\Lambda)=N\big(\mathbb{Q}_2(a_1)^*\big).$ \end{dem} \textbf{Proof in Case II.} The following result is valid over an arbitrary dyadic local field $k$ and is an improvement of Proposition 6.9 in \cite{A-C04}. \begin{prop} Let $\Lambda=\langle a_1\rangle\bot ...\bot\langle a_n\rangle$ be a skew-hermitian lattice such that $N(a_1),...,$ $ N(a_n)\in\pi k^{*2}$, for any prime $\pi$, and $\mu=\nu(4)$. Then $H(\Lambda)=k^*.$ \label{nu4} \end{prop} \begin{dem} It suffices to prove the result when $n=2$. Hence, we can assume that $\nu(a_2)-\nu(a_1)=\nu(4)$. By Lemma \ref{ret}, we can suppose that the lattice $\langle a_1\rangle\bot\langle a_2\rangle$ has the form $L=\langle q\rangle\bot\langle\epsilon q\rangle$, where $\epsilon \in k^*$ and $\nu(\epsilon)=\nu(4)$. Moreover, we can assume that $q$ is prime and, since all units in $k$ are norms from $k(j)$, we can assume that $q\in ik(j)$. We prove that $H(L)=k^*$, from which the result follows. By Corollary \ref{kest2}, it suffices to prove that $L$ represents a prime element whose norm does not belong to the square class of $\pi.$ Recall that if $q\in ik(j)$, then $q\alpha=\bar{\alpha}q$ for any $\alpha\in k(j).$ We compute, for $\alpha\in \mathcal{O}_{k(j)}$: $$\left(\begin{array}{cc} 1 & \alpha \end{array}\right)\left(\begin{array}{cc} q & 0 \\ 0 & \epsilon q \end{array}\right)\left(\begin{array}{c} 1 \\ \bar{\alpha} \end{array}\right)=(1+\epsilon \alpha^2)q,$$ where $N(1+\epsilon \alpha^2)=1+\epsilon T(\alpha^2)+\epsilon^2N(\alpha^2)$. Now, since $\nu(\epsilon)=\nu(4)$ and $\epsilon\in k^*$, we have $\epsilon=4\varepsilon,$ with $\varepsilon \in \mathcal{O}_k^*$.
Hence, we conclude $N(1+\epsilon\alpha^2)=1+4\varepsilon T(\alpha^2)+16\varepsilon^2N(\alpha^2).$ Then, by the Local Square Theorem \cite[\S 63]{O}, it is enough to find $\alpha$ such that $1+4\varepsilon T(\alpha^2)$ is not a square. In fact, it is known \cite[\S 63]{O} that there exists a unit of minimal quadratic defect of the form $\Delta=1+4\beta\; (\beta \in \mathcal{O}_k^*)$. Hence, it suffices to choose $\alpha\in \mathcal{O}_{k(j)}$ such that $T(\alpha^2)=\varepsilon^{-1}\beta$. Since $\mathcal{O}_{k(j)}=\mathcal{O}_k[\omega],$ where $\omega=\frac{1+j}{2}$ (\S \ref{ad}), if $\eta\in\mathcal{O}_{k(j)}$, with $\eta=a+b\omega$ for $a,b\in\mathcal{O}_k$, then $T(\eta)=2a+b$. In particular, $T(\varepsilon^{-1}\beta\omega)=\varepsilon^{-1}\beta.$ Now, there exists $\alpha\in \mathcal{O}_{k(j)}$ such that $\alpha^2\equiv \varepsilon^{-1}\beta\omega$ (mod $\pi$), since the residue field $\mathcal{O}_{k(j)}/\pi\mathcal{O}_{k(j)}$ is perfect of characteristic 2. Hence $T(\alpha^2)\equiv \varepsilon^{-1}\beta$ (mod $\pi$) and $1+4\varepsilon T(\alpha^2)\equiv 1+4\beta$ (mod $4\pi$). By the Local Square Theorem, we have $1+4\varepsilon T(\alpha^2)= (1+4\beta)u^2=\Delta u^2$, for some $u\in \mathcal{O}_k^*$. Therefore, $1+4\varepsilon T(\alpha^2)$ is not a square. We conclude that $N(1+\epsilon\alpha^2)$ is not a square. Then $N\big(k(i)^*\big)\neq N\Big(k\big((1+\epsilon\alpha^2)q\big)^*\Big)$. We conclude that $H(\Lambda)=k^*$, as stated. \end{dem} The procedure above cannot be extended to the case $\mu>\nu(4)$ because, in that case, $N(1+\epsilon\alpha^2)$ is a square. These cases are treated only when the base field is $k=\mathbb{Q}_2$, by the methods used for Case I.
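The quaternionic identities used repeatedly in these proofs, namely $N(1+x)=1+T(x)+N(x)$ and its specializations such as $N(1+4\epsilon\xi)=1+4\epsilon T(\xi)+16\epsilon^2N(\xi)$, together with the trace formula $T(a+b\omega)=2a+b$, can be checked numerically. The sketch below uses exact rational arithmetic in the sample algebra $\left(\frac{2,5}{\mathbb{Q}}\right)$ from the Examples section; the identities themselves do not depend on this choice of algebra.

```python
from fractions import Fraction
import random

# Quaternion algebra (a, b / Q): basis 1, i, j, k with i^2 = a, j^2 = b,
# k = ij = -ji.  The sample values a = 2, b = 5 match D = (2,5 / Q) of the
# Examples section; the identities verified below hold for any a, b.
A, B = Fraction(2), Fraction(5)

def mul(p, q):
    x0, x1, x2, x3 = p
    y0, y1, y2, y3 = q
    return (x0*y0 + A*x1*y1 + B*x2*y2 - A*B*x3*y3,
            x0*y1 + x1*y0 - B*x2*y3 + B*x3*y2,
            x0*y2 + x2*y0 + A*x1*y3 - A*x3*y1,
            x0*y3 + x3*y0 + x1*y2 - x2*y1)

def conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def norm(q):   # reduced norm N(q) = q * conj(q), a scalar
    return mul(q, conj(q))[0]

def trace(q):  # reduced trace T(q) = q + conj(q), a scalar
    return 2 * q[0]

random.seed(0)
for _ in range(200):
    x = tuple(Fraction(random.randint(-9, 9)) for _ in range(4))
    eps = Fraction(random.choice([1, 3, 5, 7]))
    # N(1 + x) = 1 + T(x) + N(x) ...
    assert norm((1 + x[0],) + x[1:]) == 1 + trace(x) + norm(x)
    # ... and its specialization N(1+4*eps*xi) = 1 + 4*eps*T(xi) + 16*eps^2*N(xi)
    y = tuple(4 * eps * c for c in x)
    assert norm((1 + y[0],) + y[1:]) == 1 + 4*eps*trace(x) + 16*eps**2*norm(x)

# omega = (1 + j)/2 has T(omega) = 1, hence T(a + b*omega) = 2a + b
omega = (Fraction(1, 2), 0, Fraction(1, 2), 0)
assert trace(omega) == 1
print("norm and trace identities verified")
```

The first identity follows from $N(1+x)=(1+x)(1+\bar{x})=1+T(x)+N(x)$; the loop simply confirms the multiplication table is consistent with it.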
By the discussion at the beginning of the section, in the rank 2 case we consider lattices $\Lambda$ of the form $\langle i\rangle\bot\langle\epsilon i\rangle$, where $\epsilon\in\mathbb{Q}_2^*$ and $\nu(4)<\nu(\epsilon)\leq\nu(16).$ In this case, the computation depends on the uniformizing parameter. For every prime $\pi$, set $i_{_\pi}\in \mathcal{O}_{k(j)}i$ such that $i_{_\pi}^2=\pi.$ Recall that, if we prove that $H(\Lambda)=\mathbb{Q}_2^*$ for rank 2 lattices, then $H(\Lambda)=\mathbb{Q}_2^*$ for lattices $\Lambda$ of arbitrary rank. Hence, from Lemma \ref{kestpi} and Theorem \ref{eq} the next proposition follows. \begin{prop} Let $\Lambda=\langle a_1\rangle\bot ...\bot\langle a_n\rangle$ be a skew-hermitian lattice such that $N(a_1)$,..., $N(a_n)\in \pi\mathbb{Q}_2^{*2}$ and $\mu$ satisfies $\nu(4)<\mu\leq \nu(16)$. Then $H(\Lambda)=\mathbb{Q}_2^*.$ \label{propcaso2} \end{prop} \section{Examples} \begin{enumerate} \item Let $\Lambda=\langle i+j\rangle\bot\langle 8(i+j)\rangle$ be a skew-hermitian lattice, where $D=\left(\frac{2,5}{\mathbb{Q}}\right)$, i.e., $i^2=2$ and $j^2=5$. The algebra $D$ ramifies only at 2 and 5. Moreover, $\Lambda$ is unimodular for $p\neq 2,7$. Hence, $H(\Lambda_p)=\mathbb{Z}_p^*\mathbb{Q}_p^{*2}$ for $p\neq 2,5,7$ \cite{K1} and $H(\Lambda_5)=\mathbb{Z}_5^*\mathbb{Q}_5^{*2}$ \cite{B}. Then the spinor class field $\Sigma_\Lambda$ satisfies $\Sigma_\Lambda\subset\mathbb{Q}(\sqrt{-1},\sqrt{2},\sqrt{7})$. On the other hand, since the algebra decomposes at infinity and the associated quadratic form is indefinite, the class and the spinor genus coincide and $\Sigma_\Lambda$ satisfies $\Sigma_\Lambda\subset \mathbb{R}$. From this and the fact that $H(\Lambda_2)=N\big(\mathbb{Q}_2(i+j)^*\big)=N\big(\mathbb{Q}_2(\sqrt{7})^*\big)$ (see Table \ref{completa}), we conclude that $\Sigma_\Lambda\subset\mathbb{Q}(\sqrt{7})$.
For $p=7$ the algebra decomposes and the quadratic form associated to $\langle i+j\rangle$ has discriminant $N(i+j)=-7$. Then the quadratic lattice associated to $\Lambda$ has a decomposition of the type $$\langle a\rangle\bot\langle -7a\rangle\bot\langle 8a\rangle\bot\langle -56a\rangle,$$ where $a\in\mathbb{Z}_7^*$. Hence, if $\langle a\rangle\bot\langle 8a\rangle=\mathbb{Z}_7 e_1\bot\mathbb{Z}_7 e_2$, we consider the rotation $\tau_{e_1}\tau_{3e_1+e_2}$, for which $\theta(\tau_{e_1}\tau_{3e_1+e_2})=3\mathbb{Q}_7^{*2}.$ This tells us that $\Sigma_\Lambda$ decomposes at 7, because $3\notin N\big(\mathbb{Q}_7(\sqrt{7})^*\big).$ We conclude that $\Sigma_\Lambda=\mathbb{Q}$ and, therefore, the class number of $\Lambda$ is 1. \item Let us consider the family of lattices $\Lambda=\langle i\rangle\bot\langle 2^ti\rangle$, for $t>0$, where $D=\left(\frac{2,5}{\mathbb{Q}}\right)$. The algebra $D$ ramifies only at 2 and 5. The lattice $\Lambda$ is unimodular for $p\neq 2$. We have that $H(\Lambda_p)=\mathbb{Z}_p^*\mathbb{Q}_p^{*2}$ for $p\neq 2$, by virtue of the computations in \cite{K1} (for $p\neq 5$) and \cite[Theorem 4]{B} (for $p=5$). Hence, the spinor class field $\Sigma_\Lambda$ can ramify only at 2 and $\infty$. So $\Sigma_\Lambda\subset\mathbb{Q}(\sqrt{-1},\sqrt{2})$. Observe that the algebra $D$ decomposes at infinity and the quadratic form corresponding to $\Lambda$ is indefinite. Hence, class and spinor genus of $\Lambda$ coincide and $\Sigma_\Lambda\subset\mathbb{R}$. On the other hand, for $p=2$, Table \ref{completa} tells us that $H(\Lambda_2)=\mathbb{Q}_2^*$ if $t\leq 4$ and $H(\Lambda_2)=N\big(\mathbb{Q}_2(i)^*\big)$ if $t>4$, whence $\Sigma_\Lambda$ decomposes at 2 for $t\leq 4$ and ramifies at 2 for $t>4$. We conclude that $\Sigma_\Lambda=\mathbb{Q}$ for $t\leq 4$, while $\Sigma_\Lambda=\mathbb{Q}(\sqrt{2})$ for $t>4$. In the first case, the Hasse principle holds for $\Lambda$.
In the second case, the class number of $\Lambda$ is 2. \end{enumerate} \section*{Acknowledgement} The research was partly supported by Fondecyt, Project No. 1120565. \end{document}
\begin{document} \preprint{APS/123-QED} \title{Cold atom confinement in an all-optical dark ring trap} \author{Spencer E. Olson} \author{Matthew L. Terraciano} \author{Mark Bashkansky} \author{Fredrik K. Fatemi} \affiliation{Naval Research Laboratory, 4555 Overlook Ave. S.W., Washington, DC 20375} \date{\today} \begin{abstract} We demonstrate confinement of $^{85}$Rb atoms in a dark, toroidal optical trap. We use a spatial light modulator to convert a single blue-detuned Gaussian laser beam to a superposition of Laguerre-Gaussian modes that forms a ring-shaped intensity null bounded harmonically in all directions. We measure a 1/$e$ spin-relaxation lifetime of $\approx$1.5 seconds for a trap detuning of 4.0~nm. For smaller detunings, a time-dependent relaxation rate is observed. We use these relaxation rate measurements and imaging diagnostics to optimize trap alignment in a programmable manner with the modulator. The results are compared with numerical simulations. \end{abstract} \pacs{32.80.Pj, 39.25.+k, 03.75.Be} \maketitle \noindent Toroidal traps for cold atoms have recently been of interest for both fundamental and applied research. A toroidal geometry can enable studies of phenomena in non-simply connected or low dimensional topologies~\cite{helmerson, gupta, jain, bludov, jackson, lesanovsky, fernholz, morizot, dutta, arnold, wu, ruostekoski}, {\it e.g.} superfluid persistent circulation states of Bose-Einstein condensates (BECs)~\cite{helmerson}. A ring-shaped atom waveguide may also be suitable for inertial measurements~\cite{gustavson} and neutral atom storage~\cite{dutta, arnold, wu, sauer}. Several approaches for generating ring-shaped waveguides have been proposed and implemented. Magnetic fields have been used to create large ring traps for possible use as atom storage rings or Sagnac interferometry~\cite{gupta, sauer, arnold, wu}. 
Helmerson {\it{et al.}}~\cite{helmerson} used a combination of magnetic and optical fields to demonstrate persistent current flow of a BEC. Morizot {\it{et al.}}~\cite{morizot} proposed ring traps formed from the combination of an optical standing wave with rf-dressed atoms in a magnetic trap. All-optical approaches have also been considered for toroidal traps ~\cite{wright,courtade, freegarde}. Wright {\it{et~al.}}~\cite{wright} suggested the use of high-azimuthal-order Laguerre-Gaussian (LG) beams to confine atoms with red-detuning. Atoms in red-detuned optical traps seek high intensity, and with large detuning, spontaneous photon scattering can be negligible. Photon scattering can also be reduced by using blue-detuned optical traps. Such ``dark'' traps confine atoms to low intensity, allowing field-free measurements~\cite{ozeri, grimm, friedman, kaplan}, but are challenging to make because they require an intensity minimum bounded by higher intensity. This challenge is often overcome by crossing beams~\cite{kuga, fatemi_SLM, friedman} to plug a hollow optical potential, although dark point atom traps have been realized with a single laser beam containing a phase-engineered intensity null~\cite{ozeri}. The single beam approach has the advantage of alignment simplicity over crossed-beam configurations. Lattices of dark rings have been proposed~\cite{freegarde} and realized~\cite{courtade} using counterpropagating laser beams, but to the best of our knowledge, there have been no reports of atom confinement in a lone optical ring trap. In this paper, we report atom confinement within a different class of dark optical ring traps. We form a bounded, ring-shaped intensity null by converting a Gaussian laser beam to a dual-ringed beam with a programmable spatial light modulator (SLM). 
SLMs are of increasing value in cold atom manipulation experiments because of their ability to reconfigure trap parameters~\cite{fatemi_SLM, bergamini_SLM, chattrapiban_SLM, mcgloin_SLM, boyer_SLM}. We measure the spin-relaxation lifetime, observe atom dynamics within the traps, and compare the experimental results with numerical simulations. \begin{figure} \caption{\label{fig:beamprofiles}} \end{figure} We form the dual-ringed laser beam by modifying the spatial phase of a laser beam with an SLM, in a similar manner to that used for producing hollow laser beams~\cite{fatemi_SLM, chattrapiban_SLM, rhodes}. The latter can be created by imparting an azimuthal phase $\Phi(r, \phi) = \ell\phi$, with integer $\ell$, to a Gaussian laser beam $E(r)=|E_0|\exp\left(-r^2/w_0^2\right)$, where $w_0$ is the waist. The phase discontinuity at $r=0$ results in a hollow beam that, for low $\ell$, closely approximates a pure $LG_{p=0}^\ell$ mode, where $p$ and $\ell$ are the radial and azimuthal indices. As shown in Figs.~\ref{fig:beamprofiles}a-b, a dual ring is produced by introducing a $\pi$ phase discontinuity at $r = R_c > 0$ such that the resulting beam has large overlap with the $LG_{p=1}^\ell$ mode, which has two radial nodes. The parameter $R_c/w_0$ controls the modal composition and thus the propagation characteristics. In Ref.~\cite{arlt2}, $R_c/w_0$ was set to generate high-purity LG modes. Here, we adjust $R_c/w_0$ to create a superposition of $LG_p^\ell$ modes that produces, at the focus of a lens, a dark ring bounded in both the radial and longitudinal directions. Figure~\ref{fig:beamprofiles}c shows the calculated $r$-$z$ cross-section of a toroidal beam with $\ell=1$ as it propagates along $z$ through the focus of an $f$=215~mm focal length lens ($w_0=1.7$~mm). We have chosen values of $R_c$ such that the barrier heights in the longitudinal and transverse directions are equal.
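The phase-mask construction just described can be sketched numerically. The following is a schematic reconstruction, not the experimental control code: the grid size, pixel pitch, waist, and $R_c/w_0$ defaults are taken from the text, and the function name and its parameters are illustrative.

```python
import numpy as np

# Sketch of the SLM phase pattern described in the text: azimuthal phase
# l*phi, a pi step at r = R_c, and an optional lens term -pi*r^2/(f*lambda)
# for fine longitudinal positioning of the focus.
def ring_trap_phase(n=512, pitch=15e-6, w0=1.7e-3, ell=1, rc_over_w0=0.79,
                    f_lens=None, wavelength=780e-9):
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)
    phi = np.arctan2(Y, X)
    phase = ell * phi                        # azimuthal charge -> hollow beam
    phase += np.pi * (r > rc_over_w0 * w0)   # pi step at R_c -> LG p=1 overlap
    if f_lens is not None:                   # SLM-controlled focus shift
        phase += -np.pi * r**2 / (f_lens * wavelength)
    return np.mod(phase, 2 * np.pi)          # wrapped phase, one value per pixel

mask = ring_trap_phase(ell=1)
print(mask.shape)  # (512, 512)
```

Propagating a Gaussian beam through this mask (e.g. with an FFT-based Fresnel propagator) would reproduce the dual-ring profile of Fig.~\ref{fig:beamprofiles}; only the mask itself is sketched here.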
For $\ell$=0, 1, and 2, this condition is satisfied for $R_c/w_0 \approx$ 0.71, 0.79, and 0.85. The small numerical aperture (NA=$w_0/f$=0.008) leads to a long aspect ratio of $\approx$1:300 for $\ell=1$, defined as the ratio of the longitudinal trap frequency $\omega_\parallel$ to the transverse trap frequency $\omega_\perp$. The mode composition is dominated by $p=0$ (single-ringed) and $p=1$ (dual-ringed) modes. For $\ell=0$, \emph{e.g.}, the $p=0(1)$ fraction is 13\%(78\%). The potential is harmonic in all directions, as indicated in Fig.~\ref{fig:beamprofiles}c. Under these conditions, the ratio of the inner radial barrier height to the outer radial barrier height is $\approx$25-35\%. The radius of the trap depends linearly on $\ell$, as it does for hollow beams~\cite{curtis, fatemi_ao}. The trapping beam is derived from a 30~mW extended cavity diode laser tunable from 776-780~nm. The beam is amplified to $\approx$350~mW with a tapered amplifier of which $\approx$150~mW is coupled into polarization maintaining fiber. The linearly polarized fiber output is collimated with $w_0$=1.7~mm, and reshaped by a 512x512 reflective SLM (Boulder Nonlinear Systems) with 15~$\mu$m pixels and $\approx$90\% absolute diffraction efficiency. A 4-$f$ imaging setup relays this modified Gaussian beam to a magneto-optical trap (MOT). The 4-$f$ relay roughly positions the focus of the ring trap over the MOT, but fine longitudinal adjustments are controlled entirely by the SLM by adding a lens phase profile $\Phi_{\rm{lens}}(r, \phi) = -{\pi}r^2/f\lambda$. We compensate for wavefront errors imposed by the SLM by calibrating the programmed phase on a pixel-by-pixel basis. The experiment begins with a MOT containing $10^{7}$ $^{85}$Rb atoms. After a 1 second loading time, the MOT coils are shut off, and the atoms are cooled to 5~${\mu}K \approx\hbar\Gamma/60k_B$ during a 10~ms molasses cooling stage. 
All cooling and trapping beams are then extinguished, followed by a 100~$\mu$s pulse that optically pumps the atoms into the $F$=2 hyperfine level. The toroidal beam power is ramped to $\approx$150~mW over 5~ms during the molasses stage. This ramp adiabatically loads atoms into the trap and minimizes the energy gained in the loading process. The trap diameter is significantly smaller than the initial MOT size, so we typically load only a small fraction of atoms ($\approx$5$\times$10$^4$) into the traps. Collisions with background gas limit the trap 1/$e$ lifetimes to $\approx$1~s. After a variable delay, the trapped atoms are imaged onto an electron-multiplying CCD camera (Andor Luca) by a 500~$\mu$s pulse from the MOT and repump beams. Immediately prior to the imaging pulse, the trapping beam is switched off to avoid Stark shifting of the levels. For linear polarization, the optical potential is~\cite{grimm} \begin{equation} U(r) = \frac{\hbar{\Gamma}I(r)}{24I_s}\left(\frac{\Gamma}{\Delta + \Delta_{\rm{LS}}} + \frac{2\Gamma}{\Delta}\right) \end{equation} \noindent where $I_{\rm s}$=1.6~mW/cm$^2$ is the saturation intensity, $\Gamma$=2$\pi\times$6.1~MHz is the linewidth, and $\Delta_{\rm{LS}}$=2$\pi\times$7.1~THz (=15~nm) is the fine-structure splitting. The resulting trap depths for $\ell=1$ and $\Delta=0.5$~nm, 1.0~nm, 2.0~nm, and 4.0~nm are 0.26$\hbar\Gamma$, 0.13$\hbar\Gamma$, 0.065$\hbar\Gamma$, and 0.033$\hbar\Gamma$ (at 780~nm, 1~nm$\leftrightarrow$493~GHz). At $\Delta=1$~nm, $\omega_\perp{\approx}2\pi\times$800~Hz and $\omega_\parallel{\approx}2\pi\times$3~Hz. \begin{figure} \caption{\label{fig:atoms-and-beams}} \end{figure} We record images of the trapped atoms with the camera axis along $x$ and along $z$. Images along $x$ show the longitudinal trap extent (Fig.~\ref{fig:atoms-and-beams}a), while those along $z$ show the toroidal structure (Fig.~\ref{fig:atoms-and-beams}b).
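The detuning dependence of the quoted trap depths can be checked directly from Eq.~(1): holding $I(r)$ fixed, only the factor $\Gamma/(\Delta+\Delta_{\rm LS})+2\Gamma/\Delta$ varies, so each doubling of $\Delta$ should nearly halve the depth, matching the sequence 0.26, 0.13, 0.065, 0.033$\hbar\Gamma$. A minimal sketch (detunings in nm, prefactors cancelling in the ratios):

```python
# Detuning-dependent factor of Eq. (1), with Delta and Delta_LS both expressed
# in nm of detuning (Delta_LS = 15 nm from the text); the common prefactor
# hbar*Gamma*I/(24*I_s) cancels when taking ratios.
DELTA_LS = 15.0

def depth_factor(delta_nm):
    return 1.0 / (delta_nm + DELTA_LS) + 2.0 / delta_nm

detunings = [0.5, 1.0, 2.0, 4.0]
factors = [depth_factor(d) for d in detunings]
ratios = [factors[i] / factors[i + 1] for i in range(3)]
print([round(r, 2) for r in ratios])  # slightly under 2 because of the Delta_LS term
```

The ratios come out just below 2, consistent with the near-halving of the quoted depths.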
The head-on views in Fig.~\ref{fig:atoms-and-beams}b are taken after a trap time of 600~ms for $\ell = 0{\rm-}2$. Also shown are the azimuthally-averaged beam intensity profiles in the focal plane and atom distributions in the $x$ and $y$ (gravity) directions. Because the trapping beam is propagating horizontally, the potential is not azimuthally symmetric. The gravitational potential energy difference between the intensity nulls for $\ell=2$ is $\Gamma$/30 $\approx 2\pi\times$200 kHz for $^{85}$Rb, which is larger than the atom cloud temperature of 2$\pi\times$100~kHz. Thus, most atoms are found in the bottom portion of the trap. For $\ell\geq1$, atoms could initially be loaded on the axis of the beam, along which there is no barrier. This is seen for $\ell=2$ in Fig.~\ref{fig:atoms-and-beams}. In our configuration it takes a few seconds for these atoms to drift away. Although there should be little interaction between axial atoms and the ring-trapped atoms under adiabatic loading, the axial atoms can be reduced by several means, such as orienting the trapping beam vertically, or loading from an atom distribution that has been dimpled by a blue-detuned Gaussian beam, as in Ref.~\cite{helmerson}. A vertical propagation axis would permit a symmetric ring potential in a horizontal plane, but optical access in this direction was limited. Imaging constraints prevent high-contrast images of the toroidal atom distributions. We use an 85~mm Nikon f/1.4 lens, the front element of which is $\approx$250~mm away from the trap location. This lens collects the maximum fluorescence and achieves a peak resolution of $\approx$5~$\mu$m but suffers from spherical aberration, which causes the observed loss of contrast. \begin{figure} \caption{\label{fig:state-lifetimes}} \end{figure} One benefit of dark traps for coherent atom manipulation is the suppression of photon scattering events~\cite{ozeri, grimm, friedman, kaplan}.
We measure the spin-relaxation rate due to Raman scattering by measuring the fraction of atoms in the trap that transfer to $F$=3 as a function of trap time~\cite{MillerPRA1993}. The atoms are first pumped into the $F$=2 hyperfine level. After a variable trapping time, we image only the atoms that transfer to $F$=3 by using a 10~$\mu$s pulse of resonant cycling-transition light. Within 2~ms, both the repump and the cycling-transition beams are switched on to image the atoms in both the $F$=2 and $F$=3 states. For background subtraction, two images with the same pulse sequence are taken with no atoms present. This type of background subtraction is necessary to eliminate false counts due to CCD ghosting. By taking the images during a single loading cycle, the effect of atom number fluctuations is reduced. These images are recorded along $x$ (as in Fig.~\ref{fig:atoms-and-beams}a). Between the first two imaging pulses, the atom distribution expands slightly beyond the few integrated rows of pixels. This leads to a slightly low estimate of the total atom count, but the resulting $F$=3 normalized signal is proportional to the actual $F$=3 fraction. We record the $F$=3 signal fraction as a function of trap time for four different detunings (Fig.~\ref{fig:state-lifetimes}a). In the simplest approximation that all atoms have an equal scattering rate, each curve can be modeled by a single exponential $N_3(t) = C\left(1 - \exp(-t/\tau)\right)$, as was used in Ref.~\cite{ozeri}, where $\tau$ is the 1/$e$ decay time. For $\Delta \leq 1$~nm, however, a single relaxation rate was not observed (Fig.~\ref{fig:state-lifetimes}b). This difference between our results and those of Ref.~\cite{ozeri} is most likely due to differences in the trap loading technique, which we have found to affect the rate curves. We note that the $F$=3 fraction at long times should approach 7/12, but our measured values are higher due to the pixel integration described above.
Instead of modeling the spin relaxation with a single-parameter time constant, we phenomenologically ``chirp'' $\tau$ as $\tau(t) = \tau_0 + \beta t^{1/2}$ so that we can estimate the relaxation rate at different times. We choose a sublinear chirp rate so that the exponential will decay at long times, but the exact functional form will depend on trap geometry. A steadily increasing $\tau$ should be expected, since atoms initially loaded into locations of high intensity scatter photons more quickly than those loaded into the dark portions of the trap. Thus, a rapid increase in the $F$=3 fraction is observed for small $t$, followed by longer relaxation times for the atoms with the least total energy. Using this form for the $F$=3 fraction, the approximate spin-relaxation lifetimes at $t=0$ for $\Delta=0.5$~nm, 1.0~nm, 2.0~nm, and 4.0~nm are 35~ms, 115~ms, 460~ms, and 1440~ms; after 500~ms, these increase to 140~ms, 230~ms, 750~ms, and 1500~ms. The scattering time for atoms in a red-detuned trap of comparable depth at $\Delta=0.5$~nm would be $\approx$2.5~ms, which is 50 times shorter than our recorded value. In Ref.~\cite{ozeri}, the blue-detuned trap had a scattering lifetime 700 times longer than a comparable red-detuned trap at 0.5~nm. That work used significantly higher intensities, where the differences between red and blue detuning are more dramatic. Photon scattering may be reduced substantially by using commercially available lasers with higher power and larger detuning. For $\Delta > \Delta_{\rm{LS}}$, spin relaxation is further suppressed, asymptotically scaling as $\Delta^{-4}$~\cite{MillerPRA1993}. The time-dependent scattering rate is likely not limited to toroidal geometries, but to the best of our knowledge, it is observed here for the first time. Also, we point out that we did not directly measure the recoil scattering rate, but for our $\Delta$ it is on the same order as the spin-relaxation rate.
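The chirped-relaxation model above can be sketched as follows. The values of $\tau_0$ and $\beta$ here are illustrative, chosen to reproduce the quoted $\Delta=1$~nm lifetimes ($\tau(0)\approx115$~ms, $\tau(500~\mathrm{ms})\approx230$~ms); they are not fitted parameters from our data.

```python
import math

# Phenomenological model described above: N3(t) = C*(1 - exp(-t/tau(t))) with
# a chirped time constant tau(t) = tau0 + beta*sqrt(t).  tau0 and beta are
# illustrative values near the quoted Delta = 1 nm lifetimes; C = 7/12 is the
# expected asymptotic F=3 fraction.
def f3_fraction(t, tau0=0.115, beta=0.163, C=7.0/12.0):
    """F=3 fraction after trap time t (seconds)."""
    tau = tau0 + beta * math.sqrt(t)
    return C * (1.0 - math.exp(-t / tau))

# Fast early transfer (bright-loaded atoms), slow approach to 7/12 at late
# times (atoms loaded near the intensity null):
print(round(f3_fraction(0.1), 3), round(7.0/12.0 - f3_fraction(1.0), 3))
```

Because $\tau(t)$ grows only as $t^{1/2}$, the argument $t/\tau(t)$ still diverges, so the model saturates at $C$ while the instantaneous rate steadily decreases.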
A recoil scattering rate of 1~s$^{-1}$ corresponds to a heating rate of $\approx$400~nK/s. To demonstrate the time-dependent scattering rate numerically, we perform Monte Carlo simulations for $\Delta = 0.5$~nm. The simulated trap is ramped on over 5~ms. The atom cloud is initially in the $F =2$ state and normally distributed in position and velocity to match the MOT size and temperature. Molasses effects are ignored. Within each time step, each atom's hyperfine level is changed with a probability determined by the local scattering rate, as calculated from the Kramers-Heisenberg formula~\cite{MillerPRA1993}. The simulation results, compared to data in Fig.~\ref{fig:state-lifetimes}b, confirm the time-dependent relaxation rate described above. For comparison, the data have been renormalized to have an asymptotic value of $7/12$. \begin{figure} \caption{\label{fig:sloshing}} \end{figure} To demonstrate axial confinement and to quantify the dependence of the scattering rates on the starting position of the atoms in the trap, we displace the trap minimum from the MOT by adjusting the lens function written to the SLM by a few MOT radii (MOT radius $\approx250~\mu$m). Thus, most atoms are initially located in regions of high intensity, reducing the overall scattering lifetime. When the atom cloud is displaced 3~mm, 1.5~mm, and 0.0~mm away from the trap minimum, the single-parameter rate constants (for $\Delta=1$~nm) are 145~ms, 195~ms, and 230~ms (Fig.~\ref{fig:sloshing}a). For each displacement, we show a composite image of the side views of the trap (Fig.~\ref{fig:sloshing}b), where each row in the image is a different slice in time. These images show the atom cloud oscillating in the longitudinal direction when the trap is not well overlapped with the atom cloud. By displacing the trap focus, we can also estimate the longitudinal trap frequency. For $\Delta=1$~nm, we measure $\omega_\parallel\approx2\pi\times2$~Hz.
This agrees well with the estimate of $\omega_\parallel\approx2\pi\times3$~Hz from the calculated intensity profiles shown in Fig.~\ref{fig:beamprofiles}. The scattering rate data and the composite images can be used to optimize the location of the trap focus, which is done to $\approx$100~$\mu$m with the SLM. As with all single-beam traps, the aspect ratio scales with the inverse of the trapping beam NA. For similar beam parameters, an aspect ratio of $\approx$10 could be realized by using an $f=10$~mm lens. A crossed-beam geometry, in which additional beams cap the potential in the longitudinal direction, allows significantly tighter longitudinal confinement and larger diameter traps. In these cases, the ratio $R_c/w_0$ can be changed for optimal confinement. One possibility is to use values of $R_c/w_0$ such that the modified beam is primarily in a single $LG_{p=1}^{\ell}$ mode~\cite{arlt2}. Pure $LG_{p=1}^{\ell}$ modes have a radial intensity null that persists for all values of $z$. When $R_c/w_0$ is chosen such that the most pure $LG_{p=1}^\ell$ mode is formed, the inner radial barrier height is roughly 3$\times$ larger than the outer one, and the longitudinal barrier is minimized. Therefore, the crossing beam can be well outside the focal plane, where better beam quality is observed but the ring-shaped null remains dark. The reduction of aberration effects outside the focal plane was shown for hollow beams in Ref.~\cite{fatemi_ao}. We have used a spatial light modulator to generate superpositions of LG modes that form single-beam, dark ring traps for cold atoms. We have shown that the atoms can be held in these potentials with long state lifetimes. We have observed atom dynamics in the longitudinal direction and shown that by modifying the trap alignment with the SLM we can optimize the scattering lifetime. This work was funded by the Office of Naval Research and the Defense Advanced Research Projects Agency.
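The Monte Carlo procedure described earlier (each atom flips from $F$=2 to $F$=3 at each time step with a probability set by the local scattering rate) can be sketched in one dimension. All parameters below are illustrative, the atom positions are frozen, and the power ramp and molasses physics are omitted, unlike the full simulation; the point is only to reproduce the qualitative time-dependent relaxation rate.

```python
import random

# 1D sketch of the Monte Carlo scheme: atoms start in F=2, and at each time
# step an atom flips to F=3 with probability rate(x)*dt, where the Raman rate
# grows quadratically away from the intensity null at x = 0 (harmonic trap
# bottom).  rate_coeff and sigma are arbitrary illustrative scales.
random.seed(1)

def simulate(n_atoms=5000, dt=2e-3, t_end=1.0, rate_coeff=50.0, sigma=1.0):
    xs = [random.gauss(0.0, sigma) for _ in range(n_atoms)]  # frozen positions
    in_f2 = [True] * n_atoms
    f3_curve = []
    for _ in range(int(t_end / dt)):
        for i, x in enumerate(xs):
            if in_f2[i] and random.random() < rate_coeff * x * x * dt:
                in_f2[i] = False
        f3_curve.append(1.0 - sum(in_f2) / n_atoms)
    return f3_curve

curve = simulate()
# Transfer is rapid at first (atoms loaded in bright regions) and slows for
# atoms loaded near the dark ring -- a time-dependent relaxation rate.
```

Even with frozen positions, the inhomogeneous rate alone produces the fast-then-slow transfer curve; adding motion in the trap would wash this out only partially, since the least energetic atoms stay near the null.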
\begin{thebibliography}{32} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{\citenamefont{Helmerson et~al.}(2007)}]{helmerson} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Helmerson}} \emph{et~al.}, \bibinfo{journal}{Nuclear Physics A} \textbf{\bibinfo{volume}{790}}, \bibinfo{pages}{705c} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Gupta et~al.}(2005)}]{gupta} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Gupta}} \emph{et~al.}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{95}}, \bibinfo{pages}{143201} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Jain et~al.}(2007)\citenamefont{Jain, Bradley, and Gardiner}}]{jain} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Jain}}, \bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{Bradley}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.~W.} \bibnamefont{Gardiner}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{76}}, \bibinfo{pages}{023617} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Bludov and Konotop}(2007)}]{bludov} \bibinfo{author}{\bibfnamefont{Y.~V.} \bibnamefont{Bludov}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{V.~V.} \bibnamefont{Konotop}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{75}}, \bibinfo{pages}{053614} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Jackson and Kavoulakis}(2006)}]{jackson} \bibinfo{author}{\bibfnamefont{A.~D.} \bibnamefont{Jackson}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.~M.} \bibnamefont{Kavoulakis}}, \bibinfo{journal}{Phys. Rev. 
\end{thebibliography}

\end{document}
\begin{document} \title{Primitive tuning via quasiconformal surgery} \begin{abstract} Using quasiconformal surgery, we prove that any primitive, postcritically finite hyperbolic polynomial can be tuned with an arbitrary generalized polynomial with non-escaping critical points, generalizing a result of Douady-Hubbard for quadratic polynomials to the case of higher degree polynomials. This affirmatively solves a conjecture of Inou and Kiwi on the surjectivity of the renormalization operator on higher degree polynomials in one complex variable. \end{abstract} \section{Introduction} Quasiconformal surgery is a powerful technique in the theory of holomorphic dynamics in one variable. The process often consists of two steps. First, we construct a quasi-regular map with certain dynamical properties, by modifying part of a holomorphic dynamical system, or by extending the definition of a partially defined holomorphic dynamical system. Then we prove the existence of a rational map which is (essentially) conjugate to the quasi-regular mapping. Successful applications of this technique include Douady-Hubbard's theory of polynomial-like mappings (\cite{DH2}) and Shishikura's sharp bounds on the number of Fatou cycles (\cite{Shishi}), among others. Tuning is an operator introduced by Douady-Hubbard~\cite{DH2} to prove the existence of small copies of the Mandelbrot set inside itself. Recall that the Mandelbrot set $\mathcal{M}$ consists of all complex numbers $c$ for which $P_c(z)=z^2+c$ has a connected Julia set. Let $P_a(z)=z^2+a$ be a quadratic polynomial for which the critical point $0$ has period $p$. Douady-Hubbard proved that there is a homeomorphism $\tau=\tau_a$ from $\mathcal{M}$ into itself with $\tau_a(0)=a$ and with the property that the $p$-th iterate of $P_{\tau(c)}$ has a quadratic-like restriction which is hybrid equivalent to $P_c$.
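To make the quadratic setting concrete, here is a minimal Python sketch (ours, not part of the paper) checking the standard escape-radius criterion for membership in the Mandelbrot set and verifying that for $a=-1$ the critical point $0$ is periodic of period $p=2$:

```python
# Illustrative sketch (not from the paper): the Mandelbrot set consists of
# parameters a for which the orbit of the critical point 0 under
# P_a(z) = z^2 + a stays bounded; |z| > 2 guarantees escape to infinity.

def critical_orbit(a, n):
    """First n points of the forward orbit of the critical point 0 under P_a."""
    z, orbit = 0j, []
    for _ in range(n):
        z = z * z + a
        orbit.append(z)
    return orbit

def in_mandelbrot(a, max_iter=500, radius=2.0):
    """Escape-time test for membership of a in the Mandelbrot set (up to max_iter)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + a
        if abs(z) > radius:
            return False
    return True

# a = -1 is the "basilica" parameter: the critical orbit is -1, 0, -1, 0, ...
```

For $a=-1$ the orbit of $0$ alternates between $-1$ and $0$, so the critical point has period $p=2$, while for $a=1$ the orbit of $0$ escapes.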
Intuitively, the filled Julia set of $P_{\tau(c)}$ is obtained from that of $P_a$ by replacing each bounded Fatou component of $P_a$ with a copy of the filled Julia set of $P_c$. Douady-Hubbard's argument involved detailed knowledge of the combinatorics of the Mandelbrot set as well as continuity of the straightening map in the quadratic case, and thus breaks down for higher degree polynomials. In~\cite{IK}, Inou-Kiwi defined a natural analogue of the map $\chi=\tau^{-1}$ for higher degree polynomials, which we will call the {\em IK straightening map}. Given a postcritically finite hyperbolic polynomial $f_0$ of degree $d\ge 2$, together with an internal angle system, they defined a map $\chi$ from a certain space $\mathcal{R}(f_0)$ into a space $\mathcal{C}(T)$ which consists of generalized polynomials over the reduced mapping scheme $T=T(f_0)$ of $f_0$ with fiberwise connected Julia set. The map $\chi$ is injective and in general not continuous~\cite{I3}. The map $f\in \mathcal{R}(f_0)$ is the tuning of $f_0$ with $\chi(f)$ in the sense of Douady and Hubbard when the Julia set of $\chi(f)$ is fiberwise locally connected. In the case that $f_0$ is {\em primitive}, i.e. the bounded Fatou components have pairwise disjoint closures, the set $\mathcal{R}(f_0)$ coincides with a combinatorially defined set $\mathcal{C}(f_0)$ which is known to be compact. Inou-Kiwi's argument is mostly combinatorial, and they conjectured that in the case that $f_0$ is primitive, $\chi$ is surjective and has connected domain. The surjectivity of $\chi$ means that $f_0$ can be tuned by every $g\in \mathcal{C}(T)$. In this paper, we shall prove the conjecture of Inou and Kiwi using the quasiconformal surgery technique. In particular, this shows that a primitive postcritically finite hyperbolic map $f_0$ can be tuned by each $g\in \mathcal{C}(T(f_0))$.
\begin{maintheorem} Let $f_0$ be a postcritically finite hyperbolic polynomial that is primitive and fix an internal angle system for $f_0$. Then the IK straightening map $\chi:\mathcal{C}(f_0)\to \mathcal{C}(T)$ is bijective and $\mathcal{C}(f_0)$ is connected. \end{maintheorem} We shall recall the definition of Inou-Kiwi's straightening map later. Let us just mention now that if $f_0(z)=z^d+a$ for some $a\in \mathbb C$, then $\mathcal{C}(T)$ is the set of all monic centered polynomials of degree $d$ which have connected Julia sets. The main part of the proof is to show the surjectivity of the map $\chi$. It is fairly easy to construct a quasi-regular map $\widetilde{f}$ which has a generalized polynomial-like restriction hybrid equivalent to a given generalized polynomial in $\mathcal{C}(T)$, via qc surgery in a union of annular domains around the critical Fatou domains of $f_0$. In order to show that there is a polynomial map with essentially the same dynamics as $\widetilde{f}$, we apply Thurston's algorithm to $\widetilde{f}$. We modify an argument of Rivera-Letelier~\cite{R-L} to show the convergence. In order to control distortion, we use a result of Kahn~\cite{Kahn} on removability of certain Julia sets. After surjectivity is proved, we deduce the connectivity of $\mathcal{C}(f_0)$ from that of $\mathcal{C}(T)$, which is a theorem of Branner-Hubbard~\cite{BH} and Lavaurs~\cite{La}. In \cite{EY}, qc surgery was successfully applied to construct intertwining tuning. Our case is more complicated since the surgery involves the non-wandering set of the dynamics. The paper is organized as follows. In \S\ref{sec:IK}, we recall the definition of generalized polynomial-like maps and the IK straightening map. In \S\ref{sec:puzzle}, we construct a specific Yoccoz puzzle for postcritically finite hyperbolic primitive polynomials, which is used to construct renormalizations. In \S\ref{sec:Kahn}, we prove a technical distortion lemma.
In \S\ref{sec:Thurston}, we prove a convergence theorem for the Thurston algorithm. The proof of the main theorem is given in \S\ref{sec:surgery} using qc surgery. \noindent {\bf Acknowledgment.} We would like to thank the referee for carefully reading the paper and in particular, for pointing out an error in Section 3 of the first version. This work was supported by NSFC No. 11731003. \section{The IK straightening map}\label{sec:IK} We recall some basic notation and the definition of the IK straightening map. The multi-critical nature of the problem makes the definition a bit complicated. Let $\text{Poly}_d$ denote the set of all monic centered polynomials of degree $d$, i.e. polynomials of the form $z\mapsto z^d+ a_{d-2}z^{d-2}+\cdots+a_0$, with $a_0,a_1,\cdots, a_{d-2}\in \mathbb C$. For each $f\in \text{Poly}_d$, let $K(f)$ and $J(f)$ denote the filled Julia set and the Julia set respectively. Let $P(f)$ denote the postcritical set of $f$: $$P(f)=\overline{\bigcup_{f'(c)=0, c\in \mathbb C}\bigcup_{n=1}^\infty \{f^n(c)\}}.$$ Let $$\mathcal{C}_d=\{f\in \text{Poly}_d: K(f) \text{ is connected}\}.$$ \subsection{External Rays and Equipotential Curves} For $f\in \text{Poly}_d$, the {\em Green function} is defined as $$G_f(z)=\lim_{n\to\infty} \frac{1}{d^n}\log^+ |f^n(z)|,$$ where $\log^+=\max(\log, 0)$. The Green function is continuous and subharmonic in $\mathbb C$ and satisfies $G_f(f(z))=dG_f(z)$. It is nonnegative, vanishes exactly on $K(f)$, and is positive and harmonic precisely on the attracting basin of $\infty$ \[B_f(\infty):=\{z\in \mathbb C\mid f^n(z) \to \infty \text{ as } n \to \infty\}=\mathbb C\setminus K(f).\] Assume $f\in\mathcal{C}_d$. Then there exists a unique conformal map \[\phi_f:B_f(\infty) \to \{z\mid |z|>1\}\] such that $\phi_f(z)/z\to 1$ as $z\to\infty$ and such that $\phi_f\circ f(z)=(\phi_f(z))^d$ on $B_f(\infty)$.
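As a numerical aside (our illustration, with no claim beyond the definition above), the limit defining $G_f$ converges quickly once the orbit escapes; for $f(z)=z^d$ one has $G_f(z)=\log^+|z|$ exactly, which gives an easy sanity check for a truncated version of the limit:

```python
import math

def green(f, d, z, n=40):
    """Approximate G_f(z) = lim_k d^{-k} log^+ |f^k(z)| by truncating the limit.

    Stops as soon as the orbit is clearly escaping, where the truncation
    error is already negligible; returns 0 for orbits that stay bounded.
    """
    for k in range(1, n + 1):
        z = f(z)
        if abs(z) > 1e100:                 # orbit has escaped; truncate here
            return math.log(abs(z)) / d ** k
    # orbit stayed moderate: either z is in K(f) (G_f = 0) or n was too small
    return max(math.log(abs(z)), 0.0) / d ** n if abs(z) > 1 else 0.0

# For f(z) = z^2 the exact value is log^+ |z|, e.g. green(f, 2, 2.0) ~ log 2.
```

The equivariance $G_f(f(z))=dG_f(z)$ is what makes the truncated limit stabilize after finitely many steps once $|f^k(z)|$ is large.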
The {\em external ray of angle $t \in \mathbb{R}/\mathbb{Z}$} is defined as \[\mathcal R_{f}(t)=\{\phi_f^{-1}(re^{i2\pi t})\mid 1<r<\infty\},\] and the {\em equipotential curve of potential $l\in (0,\infty)$} as \[E_{f}(l)=\{\phi_f^{-1}(e^{l+i2\pi\theta})\mid 0\le\theta<1\}.\] We say the external ray $\mathcal R_f(t)$ {\em lands} at some point $z_0$ if $\lim_{r\to 1}\phi_f^{-1}(re^{i2\pi t})=z_0$. It is known that for each $t\in \mathbb{Q}/\mathbb{Z}$, $\mathcal{R}_f(t)$ lands at some point. On the other hand, if $z_0$ is a repelling or parabolic periodic point, then there exist at least one and at most finitely many external rays landing at $z_0$. See for example~\cite{Mil1}. The {\em rational lamination} of $f$, denoted by $\lambda(f)$, is the equivalence relation on $\mathbb{Q}/\mathbb{Z}$ such that $\theta_1\sim \theta_2$ if and only if $\mathcal{R}_f(\theta_1)$ and $\mathcal{R}_f(\theta_2)$ land at the same point.
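The landing statement for rational angles has a simple combinatorial counterpart: every $t\in\mathbb{Q}/\mathbb{Z}$ is eventually periodic under $m_d: t\mapsto dt \bmod 1$, the map induced by $f$ on angles of external rays. A small Python sketch (ours, not from the paper) computes the preperiod and period with exact arithmetic:

```python
# Hedged illustration (standard fact, not code from the paper): under
# m_d(t) = d*t mod 1, a rational angle p/q is eventually periodic, since its
# orbit stays in the finite set of fractions with denominator dividing q.

from fractions import Fraction

def angle_orbit(t, d):
    """Return (preperiod, period) of the angle t in Q/Z under m_d."""
    seen = {}
    step = 0
    while t not in seen:
        seen[t] = step
        t = (d * t) % 1          # m_d acting on Q/Z, computed exactly
        step += 1
    return seen[t], step - seen[t]

# e.g. 1/6 -> 1/3 -> 2/3 -> 1/3 under doubling: preperiod 1, period 2
```

Exact `Fraction` arithmetic matters here: floating-point angles would drift off the rational orbit after a few doublings.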
\end{definition} Denote by $P(T)$ the set of all generalized polynomials over the scheme $T$. Given $g \in P(T)$, the {\em filled Julia set} $K(g)$ of $g$ is the set of points in $|T|\times \mathbb C$ whose forward orbits are precompact. The boundary $\partial K(g)$ of the filled Julia set is called the {\em Julia set} $J(g)$ of $g$. The filled Julia set $K(g)$ is called {\em fiberwise connected} if $K(g,\mathbf{v}):=K(g)\cap (\{\mathbf{v}\}\times \mathbb C)$ is connected for each $\mathbf{v}\in |T|$. Let $\mathcal C(T)$ be the set of all the generalized polynomials with fiberwise connected filled Julia set over $T$. We shall need external rays and the Green function for $g\in \mathcal{C}(T)$, which can be defined similarly to the case of a single polynomial. Indeed, for each $\mathbf{v}\in |T|$ there exists a unique conformal map $\varphi_{\mathbf{v},g}:\mathbb C\setminus K(g,\mathbf{v})\to \mathbb C\setminus \overline{\mathbb{D}}$ such that $\varphi_{\mathbf{v},g}(z)/z\to 1$ as $z\to\infty$, and these maps satisfy $\varphi_{\sigma(\mathbf{v}),g}(g_\mathbf{v}(z))=\varphi_{\mathbf{v},g}(z)^{\delta(\mathbf{v})}$. For $t\in\mathbb{R}/\mathbb{Z}$, the {\em external ray} $\mathcal{R}_g(\mathbf{v},t)$ is defined as $\varphi_{\mathbf{v},g}^{-1}(\{re^{2\pi i t}: r>1\})$. The {\em Green function} of $g$ is defined as $$G_g(\mathbf{v},z)=\left\{\begin{array}{ll} \log |\varphi_{\mathbf{v},g}(z)|, & \mbox{ if } z\not\in K(g,\mathbf{v});\\ 0 &\mbox{ otherwise.} \end{array} \right. $$ \subsection{Generalized polynomial-like maps} We shall now recall the definition of generalized polynomial-like maps over the mapping scheme $T$. We say $U$ is a {\em topological multi-disk} in $|T|\times \mathbb C$ if there exist topological disks $\{U_\mathbf{v}\}_{\mathbf{v}\in|T|}$ in $\mathbb C$ such that $U \cap (\{\mathbf{v}\}\times \mathbb C)=\{\mathbf{v}\}\times U_\mathbf{v}$.
\begin{definition} An AGPL (almost generalized polynomial-like) map $g$ over the scheme $T$ is a map \[g:U \to U', (\mathbf{v},z)\mapsto (\sigma(\mathbf{v}),g_{\mathbf{v}}(z)),\] with the following properties: \begin{itemize} \item $U, U'$ are two topological multi-disks in $|T|\times \mathbb C$ and $U_\mathbf{v}\subsetneq U'_\mathbf{v}$ for each $\mathbf{v}\in |T|$; \item $g_{\mathbf{v}}: U_\mathbf{v} \to U'_{\sigma(\mathbf{v})}$ is a proper map of degree $\delta(\mathbf{v})$ for each $\mathbf{v}$; \item The set $K(g):=\bigcap\limits_{n=0}^{\infty} g^{-n}(U)$, called the {\em filled-Julia set of $g$}, is compactly contained in $U$. \end{itemize} If in addition $U\Subset U'$, then we say that $g$ is a GPL (generalized polynomial-like) map. \end{definition} It should be noted that every AGPL map has a GPL restriction with the same filled-Julia set. See \cite[Lemma 2.4]{LY}. Let $g_1, g_2$ be two AGPL maps over $T$. We say that they are {\em qc conjugate} if there is a fiberwise qc map $\varphi$ from a neighborhood of $K(g_1)$ onto a neighborhood of $K(g_2)$, sending the $\mathbf{v}$-fiber of $g_1$ to the $\mathbf{v}$-fiber of $g_2$, such that $\varphi\circ g_1=g_2\circ \varphi$ near $K(g_1)$. We say that they are {\em hybrid equivalent} if they are qc conjugate and we can choose $\varphi$ such that $\bar{\partial} \varphi=0$ a.e. on $K(g_1)$. The Douady-Hubbard straightening theorem~\cite{DH2} extends in a straightforward way: every AGPL map $g$ is hybrid equivalent to a generalized polynomial $G$. In the case that the filled Julia set of $g$ is fiberwise connected, $G$ is determined up to affine conjugacy. For monic centered quadratic polynomials, each affine conjugacy class consists of a single polynomial. For more general maps, it is convenient to introduce an (external) marking to uniquely determine $G$ for a given $g$.
\begin{definition}[Access and external marking]Let $f: U \to U'$ be an AGPL map over the mapping scheme $T$ with fiberwise connected filled Julia set. A path to $K(f)$ is a continuous map $\gamma: [0, 1] \to U'$ such that $\gamma ((0, 1]) \subset U' \backslash K(f)$ and $\gamma(0) \in J(f)$. We say two paths $\gamma_0$ and $\gamma_1$ to $K(f)$ are homotopic if there exists a continuous map $\tilde \gamma : [0, 1] \times [0, 1] \to U'$ such that \begin{enumerate} \item $t \mapsto \tilde \gamma (s, t)$ is a path to $K(f)$ for all $s \in [0, 1]$; \item $\tilde \gamma (0, t) = \gamma_0(t)$ and $\tilde\gamma (1, t) = \gamma_1(t)$ for all $t \in [0, 1]$; \item $\tilde \gamma (s, 0) = \gamma_0(0)$ for all $s \in [0, 1]$. \end{enumerate} An access to $K(f)$ is a homotopy class of paths to $K(f)$.\par An external marking of $f$ is a collection $\Gamma=(\Gamma_{\mathbf{v}})_{\mathbf{v} \in |T|}$ where each $\Gamma_{\mathbf{v}}$ is an access to $K(f)$, contained in $\{\mathbf{v}\}\times \mathbb C$, such that $\Gamma$ is forward invariant in the following sense. For every $\mathbf{v} \in |T|$ and every representative $\gamma_{\mathbf{v}} \subset (\{\mathbf{v}\}\times\mathbb C) \cap U$ of $\Gamma_{\mathbf{v}}$, the connected component of $f(\gamma_{\mathbf{v}})\cap U$ which intersects $J(f)$ is a representative of $\Gamma_{\sigma(\mathbf{v})}$. \end{definition} For a generalized polynomial $g \in \mathcal C(T)$, there is a {\em standard external marking of $g$}, given by the external rays $(\mathcal R_g(\mathbf{v},0))_{\mathbf{v} \in |T|}$ of angle $0$. \begin{Theorem}[Straightening] Let $g$ be an AGPL map over $T$ with fiberwise connected filled Julia set and let $\Gamma$ be an external marking of $g$. There is a unique $f\in \mathcal{C}(T)$ such that there is a hybrid conjugacy between $g$ and $f$ which sends the external marking $\Gamma$ to the standard marking of $f$. \end{Theorem} See \cite[Theorem A]{IK} for a proof.
\subsection{The Inou-Kiwi map} Let $f_0, T$ and $r:|T|\to\mathbb{N}$ be as in \S\ref{subsec:GPL} and assume $f_0$ is primitive. It is well-known that $\partial \mathbf{v}$ is a Jordan curve for each $\mathbf{v}\in |T|$. Let $$\mathcal{C}(f_0)=\{f\in \text{Poly}_d: \lambda(f)\supset \lambda(f_0)\},$$ where $\lambda(f)$ and $\lambda(f_0)$ are the rational laminations of $f$ and $f_0$ respectively. For each $f\in \mathcal{C}(f_0)$, Inou-Kiwi constructed an AGPL (in fact, GPL) map \begin{equation}\label{eqn:lambda0R} F:\bigcup_{\mathbf{v}\in |T|} \{\mathbf{v}\}\times U_\mathbf{v}\to \bigcup_{\mathbf{v}\in |T|} \{\mathbf{v}\}\times U'_\mathbf{v} \end{equation} over $T$ such that $F(\mathbf{v}, z)=(\sigma(\mathbf{v}), f^{r(\mathbf{v})}(z))$ for each $z\in U_\mathbf{v}$ and such that the filled Julia set of $F$ is the union of the sets $\{\mathbf{v}\}\times K(\mathbf{v},f)$, where $$K(\mathbf{v},f)=\bigcap_{\theta\sim_{\lambda(f_0)} \theta'} \overline{S(\theta, \theta'; \mathbf{v})}\cap K(f),$$ and $S(\theta,\theta';\mathbf{v})$ is the component of $\mathbb C\setminus (\overline{\mathcal{R}_f(\theta)\cup \mathcal{R}_f(\theta')})$ which contains the external rays $\mathcal{R}_f(t)$ for which $\mathcal{R}_{f_0}(t)$ lands on $\partial \mathbf{v}$. We shall call such an AGPL map $F$ as in (\ref{eqn:lambda0R}) a {\em $\lambda(f_0)$-renormalization} of $f$. While there are many choices of $U_\mathbf{v}$ and $U'_\mathbf{v}$, the hybrid class of $\lambda(f_0)$-renormalizations of $f$ is uniquely determined.
In order to choose an external marking for $F$, Inou-Kiwi first fixed an {\em internal angle system}, which is, by definition, a collection of homeomorphisms $\alpha=(\alpha_{\mathbf{v}}:\partial \mathbf{v} \to \mathbb R/\mathbb Z)_{\mathbf{v}\in |T|}$ such that: \[\delta(\mathbf{v})\alpha_{\mathbf{v}}=\alpha_{\sigma(\mathbf{v})}\circ f_0^{r(\mathbf{v})}\mod 1.\] An internal angle system is uniquely determined by the points $z_\mathbf{v}=\alpha_{\mathbf{v}}^{-1}(0)$, $\mathbf{v}\in |T|$, which are (pre-)periodic points of $f_0$. Choose for each $\mathbf{v}\in |T|$ an external angle $\theta_\mathbf{v}$ so that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ lands at $z_\mathbf{v}$. They observed that the external rays $\mathcal{R}_f(\theta_\mathbf{v})$ define an external marking of $F$ and that this external marking is independent of the choices of $\theta_\mathbf{v}$. Indeed, we can choose $(\theta_\mathbf{v})_{\mathbf{v}\in |T|}$ such that $\delta(\mathbf{v})\theta_\mathbf{v}=\theta_{\sigma(\mathbf{v})}\mod 1$ for each $\mathbf{v}\in |T|$, see Lemma~\ref{lem:externalangle}. The IK straightening map $\chi:\mathcal{C}(f_0)\to \mathcal{C}(T)$ is defined as follows. For each $f\in \mathcal{C}(f_0)$, $\chi(f)$ is the generalized polynomial in $\mathcal{C}(T)$ for which there is a hybrid conjugacy from a $\lambda(f_0)$-renormalization of $f$ to $\chi(f)$ sending the external marking determined by the internal angle system to the standard marking for $\chi(f)$. \begin{Lemma}\label{lem:externalangle} Let $f_0$ be as above and let $\alpha$ be an internal angle system. Then there exist $\theta_\mathbf{v}\in \mathbb{Q}/\mathbb{Z}$, $\mathbf{v}\in |T|$, such that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ lands at $\alpha_\mathbf{v}^{-1}(0)$ and such that $f_0^{r(\mathbf{v})}(\mathcal{R}_{f_0}(\theta_\mathbf{v}))=\mathcal{R}_{f_0}(\theta_{\sigma(\mathbf{v})})$.
\end{Lemma} \begin{proof} {\bf Claim.} Let $W$ be a fixed Fatou component of $f\in \mathcal{C}_d$ and let $p$ be a repelling fixed point of $f$ in $\partial W$. If the external ray $\mathcal{R}_f(t)$ lands at $p$, then $dt=t\mod 1$. Let $\Theta\subset \mathbb{R}/\mathbb{Z}$ denote the set of external angles $\theta$ for which $\mathcal{R}_f(\theta)$ lands at $p$. It is well known that $m_d: t\mapsto dt\mod 1$ maps $\Theta$ onto itself and preserves the cyclic order. So all the angles $\theta\in \Theta$ have the same period under $m_d$. We may assume that $\Theta$ contains at least two points; otherwise $\Theta=\{t\}$ and $m_d(\Theta)=\Theta$ gives $dt=t\mod 1$ directly. Let $\Gamma$ denote the union of these external rays and $\{p\}$. Let $V$ denote the component of $\mathbb C\setminus \Gamma$ which contains $W$ and let $V_1$ denote the component of $\mathbb C\setminus f^{-1}(\Gamma)$ which contains $W$. Then $\partial V\subset \partial V_1$ and $f:V_1\to V$ is a proper map. It follows that $f$ must fix both of the external rays in $\partial V$. Since all the angles in $\Theta$ have the same period under $m_d$, every $\theta\in\Theta$ satisfies $d\theta=\theta\mod 1$. This proves the claim. Now for each $\mathbf{v}\in |T|$, we have $f^{r(\mathbf{v})}(\alpha_{\mathbf{v}}^{-1}(0))=\alpha_{\sigma(\mathbf{v})}^{-1}(0)$. So each $\alpha_{\mathbf{v}}^{-1}(0)$ eventually falls into a repelling periodic orbit. The claim enables us to choose $\theta_\mathbf{v}\in \mathbb{Q}/\mathbb{Z}$ such that $\delta(\mathbf{v})\theta_\mathbf{v}=\theta_{\sigma(\mathbf{v})}\mod 1$ for each $\mathbf{v}\in |T|$. \end{proof} \section{Yoccoz puzzle}\label{sec:puzzle} Let $f_0$ be a monic centered postcritically finite hyperbolic and primitive polynomial of degree $d\ge 2$. In this section, we shall construct a specific Yoccoz puzzle for $f_0$ (Theorem~\ref{thm:puzzle}). Recall that $\text{Poly}_d$ denotes the collection of monic centered polynomials of degree $d$.
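The Claim in the proof of Lemma~\ref{lem:externalangle} produces rays with $dt=t \bmod 1$; the angles fixed by $m_d$ are exactly $t=k/(d-1)$ for $0\le k\le d-2$. The following exact-arithmetic sketch (ours, for illustration only) verifies this elementary fact:

```python
# Hedged illustration (standard fact, not from the paper): d*t = t (mod 1)
# is equivalent to (d-1)*t = 0 (mod 1), whose solutions in Q/Z are k/(d-1).

from fractions import Fraction

def fixed_angles(d):
    """All angles t in Q/Z fixed by m_d, namely t = k/(d-1) for 0 <= k <= d-2."""
    return [Fraction(k, d - 1) for k in range(d - 1)]

def is_fixed(t, d):
    """Check whether d*t = t modulo 1."""
    return (d * t) % 1 == t % 1

# d = 3: the m_3-fixed angles are 0 and 1/2.
```

In particular, a degree-$d$ polynomial has only $d-1$ fixed external rays, which is the counting that makes fixed rays a useful rigid landmark in puzzle constructions.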
We say that a finite set $Z$ is {\em $f_0$-admissible} if the following hold: \begin{itemize} \item $f_0(Z)\subset Z$; \item each periodic point in $Z$ is repelling; \item for each $z\in Z$, there exist at least two external rays landing at $z$. \end{itemize} Let $\Gamma_0=\Gamma^Z_0$ denote the union of all the external rays landing on $Z$, the set $Z$ itself and the equipotential $\{G_{f_0}(z)=1\}$. For each $n\ge 1$, define $\Gamma^Z_n=f_0^{-n}(\Gamma^Z_0)$. A bounded component of $\mathbb C\setminus \Gamma^Z_n$ is called a {\em $Z$-puzzle piece} of depth $n$. The aim of this section is to prove the following: \begin{Theorem}\label{thm:puzzle} Let $f_0$ be a monic centered polynomial of degree $d\ge 2$ which is postcritically finite, hyperbolic and primitive. Assume that $f_0(z)\not=z^d$. Then there exists an $f_0$-admissible finite set $Z$ such that for each (finite) critical point $c$ of $f_0$, if $Y_n(c)$ denotes the $Z$-puzzle piece of depth $n$ which contains $c$ and $U(c)$ denotes the Fatou component containing $c$, then $\bigcap_{n=0}^\infty Y_n(c)=\overline{U(c)}$. \end{Theorem} We say a point $a\in J(f_0)$ is {\em bi-accessible} if it is the common landing point of two distinct external rays. A point in $J(f_0)$ is called {\em buried} if it is not in the boundary of any bounded Fatou component. We shall need the following lemmas to find buried bi-accessible periodic points. \begin{Lemma}\label{lem:puuzle-1} Let $f_0\in \text{Poly}_d$ be a postcritically finite hyperbolic polynomial with $f_0(z)\not=z^d$, where $d\ge 2$. Then any bi-accessible point in the boundary of a bounded Fatou component is eventually periodic. \end{Lemma} \begin{proof} Arguing by contradiction, assume that there exists a bi-accessible point $a_0$ which is in the boundary of a bounded Fatou component $U$ and such that $a_0$ is not eventually periodic. By Sullivan's no-wandering-domains theorem, $U$ is eventually periodic. So it suffices to consider the case that $U$ is fixed by $f_0$.
Let $t, t'\in \mathbb{R}/\mathbb{Z}$, $t\not=t'$, be such that $\mathcal{R}_{f_0}(t)$ and $\mathcal{R}_{f_0}(t')$ land at $a_0$. For each $n\ge 1$, let $a_n:=f_0^n(a_0)$ and let $t_n=d^n t$, $t'_n=d^n t'$. Then $a_n$ are pairwise distinct, $t_n, t'_n\not\in \mathbb{Q}/\mathbb{Z}$ and $t_n\not=t'_n$ for all $n\ge 0$. Let $\Gamma_n=\mathcal{R}_{f_0}(t_n)\cup \mathcal{R}_{f_0}(t'_n)\cup \{a_n\}$ and let $W_n$ and $W'_n$ be the components of $\mathbb C\setminus\Gamma_n$ so that $W'_n\supset U$ and $W_n\cap U=\emptyset$. Note that $\overline{W_n}\cap \overline{U}=\{a_n\}$ for each $n\ge 0$. Since $\partial W_n$, $n\ge 0$, are pairwise disjoint, $W_n$, $n\ge 0$, are pairwise disjoint. So there exists $n_0$ such that for all $n\ge n_0$, $W_n$ contains no critical point. {\em Claim.} If $W_n$ contains no critical point, then $f_0(W_n)=W_{n+1}$. To see this, let $\widehat{\Gamma}_n=f_0^{-1}(\Gamma_{n+1})$, which is a finite union of simple curves stretching to infinity on both sides. Let $V_{n}$ (resp. $V'_n$) denote the component of $\mathbb C\setminus \widehat{\Gamma}_n$ which contains $a_n$ in its boundary and such that $V_n\subset W_n$ and $\partial W_n\subset \partial V_n $ (resp. $V'_n\subset W'_n$ and $\partial W'_n\subset \partial V'_n $). Then $f_0(V_n)$ and $f_0(V'_n)$ are distinct components of $\mathbb C\setminus \Gamma_{n+1}$. Since $V'_n\supset U$, we have $f_0(V'_n)=W'_{n+1}$ and hence $f_0(V_n)=W_{n+1}$. Since $W_n$ contains no critical point, $f_0:V_n\to W_{n+1}$ is a conformal map, which implies that $\partial V_{n}$ consists of one component of $\widehat{\Gamma}_n$. Thus $\partial V_n=\partial W_n$, hence $W_n=V_n$ and $f_0(W_n)=W_{n+1}$. Thus, for all $n\ge n_0$, $f_0(W_n)=W_{n+1}$. It follows that $f^n_0(W_{n_0})=W_{n+n_0}$ for all $n\ge 0$. So $W_{n_0}$ is a wandering domain, which contradicts Sullivan's no-wandering-domains theorem.
A more elementary argument is as follows: we can choose $\theta\in \mathbb Q/\mathbb Z$ such that $\mathcal R_{f_0}(\theta) \subset W_{n_0}$. As $\mathcal R_{f_0}(\theta)$ is eventually periodic, $W_{n_0}$ cannot be a wandering domain. \end{proof} \begin{Lemma}\label{lem:puzzle0} Let $f_0\in \text{Poly}_d$ be a postcritically finite hyperbolic polynomial with $f_0(z)\not=z^d$, where $d\ge 2$. Then $f_0$ has a bi-accessible repelling periodic point. \end{Lemma} \begin{proof} Without loss of generality, we assume that $f_0$ has a fixed bounded Fatou component $U$, and let $$\Lambda=\{t\in \mathbb{R}/\mathbb{Z}: \mathcal{R}_{f_0}(t) \textrm{ lands on } \partial U\}.$$ Since the Julia set of $f_0$ is locally connected, by the Carathéodory theorem, there is a continuous surjective map $\pi:\mathbb{R}/\mathbb{Z}\to J(f_0)$ such that the external ray $\mathcal{R}_{f_0}(t)$ lands at $\pi(t)$, and hence $\pi\circ m_d=f_0\circ \pi$, where $m_d: \mathbb{R}/\mathbb{Z}\to \mathbb{R}/\mathbb{Z}$, $t\mapsto d t\mod 1$. In particular, $\Lambda=\pi^{-1}(\partial U)$ is a non-empty compact subset of $\mathbb{R}/\mathbb{Z}$ which is forward invariant under the map $m_d$. Since $f_0(z)\not=z^d$, $J(f_0)$ is not a Jordan curve, so $\Lambda$ is a proper subset of $\mathbb{R}/\mathbb{Z}$. A proper compact subset of the circle is not homeomorphic to the Jordan curve $\partial U$, so $\pi: \Lambda\to \partial U$ is not a homeomorphism. Since a continuous bijection from a compact space onto a Hausdorff space is a homeomorphism, the continuous surjection $\pi: \Lambda\to \partial U$ cannot be injective. In other words, there exists a bi-accessible point $w\in \partial U$. By Lemma~\ref{lem:puuzle-1}, $w$ is eventually periodic. Any periodic point in the orbit of $w$ is a bi-accessible repelling periodic point. \end{proof} \begin{Lemma}\label{lem:puzzle} If $f_0\in \text{Poly}_d$ is postcritically finite, hyperbolic and primitive and $f_0(z)\not=z^d$, then $f_0$ has a buried bi-accessible repelling periodic point.
\end{Lemma} \begin{proof} In order to show that $f_0$ has a buried bi-accessible point, it is enough to show that for some $s\ge 1$, $f_0^s$ has a repelling fixed point which is the landing point of an external ray not fixed by $f_0^s$. Indeed, if a repelling fixed point $p$ of $f_0^s$ is in the boundary of a bounded Fatou component $V$, then by the assumption that $f_0$ is primitive, we have $f_0^s(V)=V$, hence by the Claim in the proof of Lemma~\ref{lem:externalangle}, any external ray landing at $p$ is fixed by $f_0^s$. Therefore, it is enough to prove the following Statement $N$ for each $N\ge 0$. {\bf Statement $N$.} Suppose that $f_0\in \text{Poly}_d$ is a postcritically finite, hyperbolic and primitive map and $f_0(z)\not=z^d$, where $d\ge 2$. If $f_0$ has at most $N$ attracting periodic points, then there exists $s\ge 1$ such that $f_0^s$ has a repelling fixed point $p$ which is the landing point of an external ray which is not fixed by $f_0^s$. We proceed by induction on $N$. Statement $0$ is vacuously true. Let $N\ge 1$ and assume that Statement $N'$ holds for all $0\le N'<N$. Let $f_0\in \text{Poly}_{d}$ be as in Statement $N$. By Lemma~\ref{lem:puzzle0}, $f_0$ has a bi-accessible repelling periodic point $p_0$. Let $\mathcal{R}_{f_0}(t_i)$, $1\le i\le q$, be the external rays landing at $p_0$, where $q\ge 2$. Replacing $f_0$ by an iterate of $f_0$, we may assume that \begin{itemize} \item all the external rays $\mathcal{R}_{f_0}(t_i)$ are fixed by $f_0$; \item all attracting periodic points of $f_0$ are fixed by $f_0$. \end{itemize} In particular, $f_0(p_0)=p_0$. (Note that $f_0^k$, $k\ge 1$, satisfies the assumption of Statement $N$.) Let us construct a Yoccoz puzzle using $Z=\{p_0\}$. Let $Y_0^j$, $1\le j\le q$, denote the puzzle pieces of depth zero and for each $n\ge 1$, let $Y_n^j$ denote the puzzle piece of depth $n$ which satisfies $Y_n^j\subset Y_0^j$ and $p_0\in \overline{Y_n^j}$.
Since all the external rays $\mathcal{R}_{f_0}(t_j)$ are fixed, $f_0: Y_n^j\to Y_{n-1}^j$ is a proper map. Let $d_{n,j}$ denote the degree of $f_0:Y_n^j\to Y_{n-1}^j$, let $D_{n,j}=d_{1,j}d_{2,j}\cdots d_{n,j}$ and let $d_j=\lim_{n\to\infty} d_{n,j}$. Let $\Delta_{n,j}=\{t\in\mathbb{R}/\mathbb{Z}: \mathcal{R}_{f_0}(t)\cap \overline{Y_n^j}\not=\emptyset\}$. Note that $\Delta_{0,j}$ is a closed interval and $\Delta_{n,j}$ is a disjoint union of $D_{n,j}$ closed intervals, each of which is mapped onto $\Delta_{0,j}$ diffeomorphically under $m_d^n$. Let us show that $d_{1,j}\ge 2$ for each $j$. Indeed, otherwise $f_0: Y_1^j\to Y_0^j$ is a conformal map, which implies that $m_d:\Delta_{1,j}\to \Delta_{0,j}$ is a homeomorphism; this is absurd, since $\Delta_{1,j}$ would then be a single interval of length $|\Delta_{0,j}|/d$ containing both endpoints of $\Delta_{0,j}$. {\em Case 1.} Suppose that $d_j=1$ holds for some $j$. Let $s_0$ be such that $d_{s,j}=1$ for all $s> s_0$. Then $f_0^{s-s_0}|Y_s^j$ is univalent for all $s> s_0$. Since $f_0$ has only finitely many attracting periodic points and every attracting cycle of $f_0$ contains a critical point, there exists $s_1>s_0$ such that $\overline{Y_{s_1}^j}$ does not contain any attracting periodic point of $f_0$. Consider the map $f_0^{s_1}: \overline{Y_{s_1}^j}\to \overline{Y_0^j}$, which has degree $D:=D_{s_1,j}\ge d_{1,j}\ge 2$. By the thickening technique (\cite{Mil4}), it extends to a polynomial-like map $f_0^{s_1}: U_j\to U_j'$ of degree $D$ in the sense of Douady and Hubbard, so it has $D$ fixed points (counted with multiplicity) which are contained in $\overline{Y_{s_1}^j}$. By our choice of $s_1$, none of these fixed points is attracting. Since $f_0$ is hyperbolic, it follows that all the $D$ fixed points of $f_0^{s_1}: U_j\to U_j'$ are repelling. The number of external rays of $f_0$ which intersect $\overline{Y_{s_1}^j}$ and are fixed by $f_0^{s_1}$ is exactly $D$, with two of them landing at the same point $p_0$.
It follows that one of the repelling fixed points $p$ of $f_0^{s_1}|\overline{Y_{s_1}^j}$ is not the landing point of an $f_0^{s_1}$-fixed external ray. {\em Case 2.} Assume that $d_j>1$ holds for all $j$. Take $n_j$ sufficiently large so that $d_{n,j}=d_j$ for all $n\ge n_j$. Then no critical point of $f_0|Y_{n_j}^j$ escapes from $Y_{n_j}^j$ under iteration, hence $f_0: \overline{Y_{n_j}^j}\to \overline{Y_{n_j-1}^j}$ is a proper map of degree $d_j$ with non-escaping critical points. Again by the thickening technique (\cite{Mil4}), it extends to a polynomial-like map $f_0: U_j\to U_j'$ of degree $d_j$ in the sense of Douady and Hubbard. Thus it is topologically conjugate to a polynomial $g_j\in \text{Poly}_{d_j}$ with connected Julia set. The polynomial $g_j$ is hyperbolic, postcritically finite and primitive. Since for any $j'\not=j$, $f_0$ has an attracting fixed point in $Y_{n_{j'}}^{j'}$, the number of attracting periodic points of $f_0: U_j\to U'_j$ is less than $N$. Thus the number of attracting periodic points of $g_j$ is less than $N$. {\em Subcase 2.1} Assume $g_j(z)\not=z^{d_j}$ for some $j$. Then by the induction hypothesis, there exists $s\ge 1$ such that $g_j^s$ has a repelling fixed point $\tilde{p}$ which is the landing point of an external ray not fixed by $g_j^s$. Taking the corresponding periodic point $p$ of $f_0: U_j\to U_j'$, we are done. {\em Subcase 2.2} Assume $g_j(z)=z^{d_j}$ for all $j$. This implies that the filled Julia set of $f_0: U_j\to U_j'$ is the closure of a Jordan disk $V_j$, and this closure contains $p_0$. Each $V_j$ is a bounded Fatou component of $f_0$. These bounded Fatou components $V_j$, $1\le j\le q$, have $p_0$ as a common point in their closures, contradicting the assumption that $f_0$ is primitive. \end{proof} \newcommand{\Crit}{\text{Crit}} \begin{proof}[Proof of Theorem~\ref{thm:puzzle}] Let $\text{Crit}_{per}$ denote the set of all periodic critical points of $f_0$ and for each $c\in \text{Crit}_{per}$, let $q(c)$ denote its period.
For each admissible set $Z$ and $c\in\text{Crit}_{per}$, let $s_Z(c)$ denote the minimal positive integer such that $f_0^{s_Z(c)}(c)\in \bigcap_{n=0}^\infty Y_n^Z(c)$ and let $d_Z(c)=\lim_{n\to\infty} \text{deg} (f_0^{s_Z(c)}|Y_n^Z(c))$. Of course $q(c)\ge s_Z(c)$. Note that if $Z\subset Z'$ then $s_Z(c)\le s_{Z'}(c)$ for all $c\in\text{Crit}_{per}$, and if $s_Z(c)=s_{Z'}(c)$ then $d_{Z}(c)\ge d_{Z'}(c)$. Given admissible sets $Z\subset Z'$, we say that $Z'$ is a {\em (proper) refinement} of $Z$ if one of the following holds: \begin{itemize} \item there exists $c_0\in \text{Crit}_{per}$ such that $s_{Z'}(c_0)>s_{Z}(c_0)$; \item $s_{Z'}(c)=s_Z(c)$ for all $c\in \text{Crit}_{per}$ and there exists $c_0\in \text{Crit}_{per}$ such that $d_{Z'}(c_0)< d_{Z}(c_0)$. \end{itemize} Clearly, there does not exist an infinite sequence of admissible sets $\{Z_n\}_{n=1}^\infty$ such that for all $n$, $Z_{n+1}$ is a refinement of $Z_n$. Let us say that an $f_0$-admissible set $Z$ is {\em buried} if $Z$ is disjoint from the boundary of every bounded Fatou component. A buried $f_0$-admissible set exists by Lemma~\ref{lem:puzzle}. It suffices to prove that if $Z$ is a buried $f_0$-admissible set for which the property required by the theorem does not hold, then there exists a buried $f_0$-admissible set $Z'$ which is a refinement of $Z$. To this end, assume that there exists $c_0\in \text{Crit}_{per}$ such that $\bigcap_{n=0}^\infty Y_n^Z(c_0)\supsetneq \overline{U(c_0)}$. Write $s=s_Z(c_0)$. When $N$ is large enough, the critical points of the proper map $g=f_0^s|Y_{N+s}(c_0)$ never escape from its domain. Using the thickening technique (\cite{Mil4}), $g$ extends to a Douady-Hubbard polynomial-like map with connected Julia set. Thus it is hybrid equivalent to a monic centered polynomial $P$, which is necessarily hyperbolic and postcritically finite. Let $D\ge 2$ denote the degree of $P$ and let $h$ denote a hybrid conjugacy.
As the filled Julia set of $P$ is not a topological disk, $P(z)\not=z^D$. So by Lemma~\ref{lem:puzzle}, $P$ has a repelling periodic point $\hat{z}_1$ which is bi-accessible and buried. By~\cite[Lemma 3.6]{I1} (see also~\cite[Theorem 7.11]{McM1}), $z_1=h^{-1}(\hat{z}_1)$ is a buried bi-accessible repelling periodic point of $f_0$. Let $Z'$ denote the union of $Z$ and the $f_0$-orbit of $z_1$. As $\bigcap_{n=0}^\infty Y_n^{Z'}(c_0)$ is a proper subset of $\bigcap_{n=0}^\infty Y_n^Z(c_0)$, either $s_{Z'}(c_0)>s_Z(c_0)$ or $d_{Z'}(c_0)< d_Z(c_0)$. This completes the proof. \end{proof} We shall need the following result later. \begin{Proposition}\label{prop:noncrpiece} Let $f_0$ and $Z$ be as in Theorem~\ref{thm:puzzle}. Then $$\sup\{\text{diam}(Y): Y \text{ is a puzzle piece of depth }n, \overline{Y}\cap Z\not=\emptyset\}\to 0\text{ as } n\to\infty.$$ \end{Proposition} \begin{proof} For each $n\ge 0$ and $z\in Z$, let $Y^*_n(z)$ denote the union of the closures of the puzzle pieces of depth $n$ which contain $z$ in their boundaries. Since $Z$ is finite, there exists $N$ such that $Y^*_N(z)\cap P(f_0)=\emptyset$ for all $z\in Z$. For each $n\ge 0$ and $z\in Z$, $f_0^n: Y_{n+N}^*(z)\to Y_N^*(f_0^n(z))$ is a conformal map which extends to a definite neighborhood of $f_0^n(z)$. It follows that $f_0^n|Y_{n+N}^*(z)$ has uniformly bounded distortion. Since $z\in J(f_0)$, this implies that $\text{diam}(Y_n^*(z))\to 0$ as $n\to\infty$. \end{proof} We shall construct $\lambda(f_0)$-renormalizations using the puzzle given by Theorem~\ref{thm:puzzle}. The following is a criterion which will be used in the proof of the surjectivity part of the main theorem.
\begin{Proposition}\label{prop:com} Let $N_0$ be a positive integer such that the puzzle pieces $f_0^j(Y_{N_0}(\mathbf{v}))$, $\mathbf{v}\in |T|$, $1\le j\le r(\mathbf{v})$, are pairwise disjoint, and such that for each $\mathbf{v}\in |T|$, $f_0^{r(\mathbf{v})}: Y_{N_0}(\mathbf{v})\to Y_{N_0-r(\mathbf{v})}(\sigma(\mathbf{v}))$ has degree $\delta(\mathbf{v})$. Assume that $f\in \text{Poly}_d$ satisfies the following: \begin{enumerate} \item there is a homeomorphism $\psi: \mathbb C\to \mathbb C$ with the following properties: \begin{itemize} \item $\phi_f\circ \psi=\phi_{f_0}$ holds near $\infty$, where $\phi_f$ and $\phi_{f_0}$ are the B\"ottcher maps for $f$ and $f_0$ respectively; \item $\psi \circ f_0(z)=f\circ \psi(z)$ for all $z\in \mathbb C\setminus \bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$. \end{itemize} \item The map $$F:\bigcup_{\mathbf{v}}\{\mathbf{v}\}\times \psi(Y_{N_0}(\mathbf{v}))\to \bigcup_{\mathbf{v}} \{\mathbf{v}\}\times \psi(Y_{N_0-r(\mathbf{v})}(\sigma(\mathbf{v}))),$$ defined by $F(\mathbf{v}, z)=(\sigma(\mathbf{v}),f^{r(\mathbf{v})}(z))$, is an AGPL map with fibrewise connected filled Julia set. \end{enumerate} Then $f\in \mathcal{C}(f_0)$ and $F$ is a $\lambda(f_0)$-renormalization of $f$. \end{Proposition} \begin{proof} First of all, note that the assumption implies that the filled Julia set $K(f)$ of $f$ is connected and that $\widehat{Z}=\psi(Z)$ is an admissible set for $f$. It suffices to show that $f\in \mathcal{C}(f_0)$. Once this is proved, the other statement follows from \cite[Proposition 3.13]{IK}. Let $L(\mathbf{v}, f)$ denote the filled Julia set of $F$ in the fiber $\{\mathbf{v}\}\times \mathbb C$. {\bf Step 1.} We show by induction that for each $k\ge N_0$, there is a homeomorphism $\psi_k:\mathbb C\to\mathbb C$ which coincides with $\phi_f^{-1}\circ \phi_{f_0}$ on $\Gamma_k\setminus J(f)$. For $k=N_0$, we choose $\psi_{N_0}=\psi$. Assume now that $\psi_k$ has been defined for some $k\ge N_0$ and let us construct $\psi_{k+1}$.
For each $Y\subset \mathbb C$, denote $Y'=\psi_k(Y)$. It suffices to construct, for each $Y\in \mathcal{Y}_k$, a homeomorphism $\psi_{k+1}: \overline{Y}\to \overline{Y'}$ so that $f\circ \psi_{k+1}(z)=\psi_k\circ f_0(z)$ for $z\in \overline{Y}\cap \Gamma_{k+1}$. If $Y$ does not contain a critical point of $f_0$, then $f_0:Y\to f_0(Y)$ is a conformal map, and so is $f: Y'\to f(Y')$. In this case, we define $\psi_{k+1}|\overline{Y}=(f|\overline{Y'})^{-1}\circ (\psi_k|f_0(\overline{Y})) \circ (f_0|\overline{Y})$. Assume now that $Y$ contains a critical point of $f_0$, so that $Y=Y_k(\mathbf{v})$ for some $\mathbf{v}\in |T|$ and hence $Y'\supset L(\mathbf{v}, f)$. Let $B=Y_{k-r(\mathbf{v})+1}(\sigma(\mathbf{v}))$, $A=Y_{k+1}(\mathbf{v})$ and $X=Y_{k-r(\mathbf{v})}(\sigma(\mathbf{v}))$. Then $B'\supset L(\sigma (\mathbf{v}), f)$, $A'\supset L(\mathbf{v}, f)$ and $X'\supset L(\sigma(\mathbf{v}), f)$. Since $f_0^{r(\mathbf{v})}: \overline{Y}\setminus A\to \overline{X}\setminus B$ and $f^{r(\mathbf{v})}: \overline{Y'}\setminus A'\to \overline{X'}\setminus B'$ are both coverings of degree $\delta(\mathbf{v})$, there is a homeomorphism $\psi_{k+1}: \overline{Y}\setminus A\to \overline{Y'}\setminus A'$ such that $f^{r(\mathbf{v})}\circ \psi_{k+1}=\psi_k\circ f_0^{r(\mathbf{v})}$ on $\overline{Y}\setminus A$ and $\psi_{k+1}=\psi_k$ on $\partial Y$. Extending the map $\psi_{k+1}$ in an arbitrary way to a homeomorphism from $\overline{Y}$ to $\overline{Y'}$, we obtain the desired map $\psi_{k+1}:\overline{Y}\to \overline{Y'}$. {\bf Step 2.} For each $k\ge N_0$, there is a qc map $\Psi_k$ such that $\Psi_k=\phi_f^{-1}\circ \phi_{f_0}$ near infinity and such that $f\circ \Psi_k(z)=\Psi_k\circ f_0(z)$ for all $z\not\in \bigcup_\mathbf{v} Y_k(\mathbf{v})$. This is well-known. See for example~\cite[Section 5]{KSS}.
This implies that if $\mathcal{R}_{f_0}(\theta_1)$ and $\mathcal{R}_{f_0}(\theta_2)$ ($\theta_1,\theta_2\in \mathbb{Q}/\mathbb{Z}$) land at a common point which is not in $\bigcup_{n=0}^\infty f_0^{-n}\left(\bigcup_{\mathbf{v}\in |T|}\partial \mathbf{v}\right)$, then $\mathcal{R}_f(\theta_1)$ and $\mathcal{R}_{f}(\theta_2)$ have a common landing point as well. Indeed, there exists $k$ such that the whole $f_0$-orbit of the rays $\mathcal{R}_{f_0}(\theta_1)$ and $\mathcal{R}_{f_0}(\theta_2)$ lies outside $\bigcup_\mathbf{v} Y_k(\mathbf{v})$, so $\mathcal{R}_f(\theta_i)=\Psi_k(\mathcal{R}_{f_0}(\theta_i))$, $i=1,2$. {\bf Step 3.} It remains to show that if $\mathcal{R}_{f_0}(\theta_1)$ and $\mathcal{R}_{f_0}(\theta_2)$, $\theta_1, \theta_2\in\mathbb{Q}/\mathbb{Z}$, land at a common point in $\bigcup_{n=0}^\infty f_0^{-n}\left(\bigcup_{\mathbf{v}\in |T|}\partial \mathbf{v}\right)$, then $\mathcal{R}_f(\theta_1)$ and $\mathcal{R}_{f}(\theta_2)$ have a common landing point. Let us first assume that the common landing point is in $\partial \mathbf{v}_0$ for some $\mathbf{v}_0\in |T|$. Let $\Psi=\Psi_{N_0}$ be given by Step 2. We define a new qc map $H: \mathbb{C}\setminus \bigcup_{\mathbf{v}}\overline{\mathbf{v}} \to \mathbb{C}\setminus \bigcup_{\mathbf{v}} L(\mathbf{v}, f)$ such that $H=\Psi$ outside $\bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$ and such that $H\circ f_0^{r(\mathbf{v})}=f^{r(\mathbf{v})}\circ H$ inside $\bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$. Note that $H$ maps $\mathcal{R}_{f_0}(\theta_i)$ onto $\mathcal{R}_f(\theta_i)$, $i=1,2.$ For each $\mathbf{v}\not=\mathbf{v}_0$, choose a quasidisk $\Omega_{\mathbf{v}}$ so that these quasidisks are pairwise disjoint and disjoint from $\mathbf{v}_0\cup \mathcal{R}_{f_0}(\theta_1)\cup\mathcal{R}_{f_0}(\theta_2)$.
Let $H_0=H$ on $\mathbb{C} \setminus ( \overline{\mathbf{v}_0}\cup\bigcup_{\mathbf{v}\ne \mathbf{v}_0}\Omega_{\mathbf{v}})$, and then extend $H_0$ quasiconformally to $\mathbb C\setminus \overline{\mathbf{v}_0}$ by the Beurling-Ahlfors extension. So we obtain a qc map $H_0: \mathbb{C}\setminus \overline{\mathbf{v}_0}\to \mathbb{C}\setminus L(\mathbf{v}_0,f)$ which again maps $\mathcal{R}_{f_0}(\theta_i)$ onto $\mathcal{R}_f(\theta_i)$, $i=1,2$. Let $\vartheta:\mathbb{C}\setminus \overline{\mathbb{D}}\to \mathbb{C}\setminus L(\mathbf{v}_0,f)$ denote a Riemann mapping. Since $\partial \mathbf{v}_0$ is a Jordan curve, $\vartheta^{-1}\circ H_0$ extends continuously to $\mathbb{C}\setminus \mathbf{v}_0$. Since $\mathcal{R}_f(\theta_i)$ both land and $\vartheta^{-1}(\mathcal{R}_{f} (\theta_1))$ and $\vartheta^{-1}(\mathcal{R}_f(\theta_2))$ have a common landing point, by Lindel\"of's theorem we conclude that $\mathcal{R}_f(\theta_i)$, $i=1,2$, land at the same point. For the general case, let $n\ge 1$ be minimal such that the common landing point of $\mathcal{R}_{f_0}(d^n \theta_i)$ ($i=1,2$) lies in $\bigcup_{\mathbf{v}} \partial \mathbf{v}$. As proved above, the external rays $\mathcal{R}_f(d^n \theta_i)$, $i=1,2$, have a common landing point $z$. Let $k$ be a large integer such that the external rays $\mathcal{R}_{f_0}(d^j \theta_i)$, $0\le j<n$, lie outside $\bigcup_\mathbf{v} Y_k(\mathbf{v})$, let $Y$ denote the $f_0$-puzzle piece of depth $k+n$ which contains the common landing point of $\mathcal{R}_{f_0}(\theta_1)$ and $\mathcal{R}_{f_0}(\theta_2)$, and let $Y'=\psi_{k+n}(Y)$. Then $f^n: Y'\to Y_k(\mathbf{v})$ is a conformal map for some $\mathbf{v}\in |T|$, and the rays $\mathcal{R}_f(\theta_1)$ and $\mathcal{R}_f(\theta_2)$ enter $Y'$. Thus both of them have to land at the unique point in $\overline{Y'}$ which is mapped to $z$ by $f^n$.
\end{proof} \section{Kahn's quasiconformal distortion bounds}\label{sec:Kahn} In this section, we will modify the argument in~\cite{Kahn}\footnote{According to Kahn, Yoccoz may have a similar result.} to obtain a $K$-qc extension principle. The main result is Theorem~\ref{thm:Kqcpuzzle}, which will be used later to show the convergence of the Thurston Algorithm in the proof of the Main Theorem. Throughout, we fix a monic centered, hyperbolic, postcritically finite and primitive polynomial $f_0$ of degree $d$ such that $f_0(z)\not=z^d$. Let $Z$ be an admissible set given by Theorem~\ref{thm:puzzle} and let $Y_n(z)=Y_n^Z(z)$. Let $\text{Crit}(f_0)=\{c\in \mathbb C: f_0'(c)=0\}$ and let $L_n$ denote the domain of the first landing map to $\bigcup_{c\in \text{Crit}(f_0)} Y_n(c)$: \begin{equation}\label{eqn:dfnLn} L_n=\left\{z\in \mathbb C: \exists k\ge 0 \text{ such that } f_0^k(z)\in \bigcup_{c\in \text{Crit}(f_0)} Y_n(c)\right\}. \end{equation} \begin{Theorem}\label{thm:Kqcpuzzle} There exists $N>0$ such that for any puzzle piece $Y$ of depth $m\ge 0$, there is a constant $C=C(Y)>1$ with the following property: if $Q:Y \to Q(Y)$ is a qc map which is conformal a.e. in $Y\setminus L_{m+N}$, then there exists a $C$-qc map $\widetilde Q :Y\to Q(Y)$ such that $\widetilde Q=Q$ on $\partial Y$. \end{Theorem} \subsection{Quasiconformal distortion bounds and a toy model} The difficulty in proving the theorem is that the landing domains $L_{m+N}$ may come arbitrarily close to the boundary of $Y$. To deal with this situation, we shall need a toy model developed by Kahn (\cite{Kahn}). Let us first recall some terminology from \cite{Kahn}. Let $U \subset \mathbb C$ be a Jordan domain and $A$ be a measurable subset of $U$. We say that $(A,U)$ has {\em bounded qc distortion} if there exists a constant $K\ge 1$ with the following property: if $Q:U\to Q(U)$ is a quasiconformal map and $\bar\partial Q=0$ a.e.
outside $A$, then there is a $K$-qc map $\tilde Q:U \to Q(U)$ such that $\tilde Q=Q$ on $\partial U$. Let $\mathcal{QD}(A,U)$ denote the smallest such $K$. Using this terminology, we can restate Theorem~\ref{thm:Kqcpuzzle} as follows: \begin{theorem3'} There exists $N>0$ such that if $Y$ is a puzzle piece of depth $m\ge 0$, then \[\mathcal{QD}(L_{m+N}\cap Y, Y)<\infty.\] \end{theorem3'} We shall need the following easy facts. \begin{Lemma}\label{compact}\cite[Fact~1.3.6]{Kahn} If $A \subset U$ is compact, then $\mathcal{QD}(A,U)<\infty$. \end{Lemma} \begin{Lemma}\label{qc1}\cite[Fact~1.3.4]{Kahn} Let $U$ and $V$ be Jordan domains in $\mathbb C$ and let $A$ be a measurable subset of $U$. If there exists an $L$-qc map $g:U\to V$ and $\mathcal{QD}(A,U)<\infty$, then \[\mathcal{QD}(g(A),V)\le L^2\mathcal{QD}(A,U).\] \end{Lemma} \begin{Lemma}\label{qc2} The following statements are equivalent: \begin{enumerate} \item [(i)] $\mathcal{QD}(A,U)=C<\infty$; \item [(ii)] for any qc map $Q:U \to Q(U)$, if $\mathrm{Dil}(Q)\le K$ a.e. outside $A$ for some $K\ge 1$, then there is a $KC$-qc map $\tilde Q:U \to Q(U)$ such that $\tilde Q=Q$ on $\partial U$. \end{enumerate} \end{Lemma} \begin{proof} It is obvious that (ii) implies (i). Let us show that (i) implies (ii). Let $\mu$ be the Beltrami differential such that $\mu =\bar\partial Q^{-1}/\partial Q^{-1}$ on $Q(U\backslash A)$ and $\mu=0$ otherwise. By the Measurable Riemann Mapping Theorem, there exists a quasiconformal map $g:\mathbb C \to \mathbb C$ with Beltrami differential $\mu$. Then $g\circ Q: U\to g\circ Q(U)$ is a quasiconformal map and $\bar\partial (g\circ Q)=0$ a.e. outside $A$. Thus there exists a $C$-qc map $G:U\to g\circ Q(U)$ such that $G=g\circ Q$ on $\partial U$, where $C=\mathcal{QD}(A,U)<\infty$. Finally, let $\tilde Q=g^{-1}\circ G$ and we are done. \end{proof} We shall now recall the {\em recursively notched square model} developed in~\cite{Kahn}. Let $S=(0,1)\times (-1/2,1/2)$.
Let $\mathcal{I}$ denote the collection of the components of $(0,1)\setminus \mathcal{C}$, where $\mathcal{C}$ is the ternary Cantor set. Let \begin{equation}\label{eqn:setN} \mathcal{N}=\bigcup_{I\in\mathcal{I}} \overline{I}\times [-|I|/2, |I|/2], \end{equation} which is a countable disjoint union of closed squares. The following is~\cite[Lemma 2.1.1]{Kahn}: \begin{Theorem}\label{thm:Kahn} $\mathcal{QD}(\mathcal{N}, S)<\infty.$ \end{Theorem} \subsection{Reduce to the toy model} We will work with the polynomial $f_0$ fixed at the beginning of this section. In the following we write $\mathcal{R}(\theta)$ for $\mathcal{R}_{f_0}(\theta)$. A {\em geometric ray-pair} is, by definition, a simple curve consisting of two distinct external rays together with their common landing point. A {\em slice} is an open set $U$ bounded by two disjoint ray-pairs $\mathcal{R}(\theta_i)\cup \mathcal{R}(\theta_i')\cup \{a_i\}$, $i=1,2$, such that no external ray lying inside $U$ lands at either $a_1$ or $a_2$. For every $z_0 \in Z$, the external rays landing at $z_0$ cut the complex plane $\mathbb C$ into finitely many sectors $S_1(z_0),\cdots,S_{n(z_0)}(z_0)$. Let $\mathcal S=\{S_j(z)\mid z\in Z,~1\le j\le n(z) \}$. We list the elements of $\mathcal S$ as $S_1, S_2,\cdots, S_{\nu}$, where $\nu=\# \mathcal S$.
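As an aside on the recursively notched square model of Theorem~\ref{thm:Kahn}: the notched set $\mathcal{N}$ fills a definite proportion of $S$, since level $n$ of the ternary Cantor construction contributes $2^{n-1}$ gaps $I$ with $|I|=3^{-n}$, hence squares of total area $2^{n-1}9^{-n}$, summing to $1/7$. A numerical sanity check of this count (our own illustration, not used in any proof):

```python
# Total area of N = union, over Cantor gaps I, of the square Ibar x [-|I|/2, |I|/2].
# Level n of the ternary construction has 2**(n-1) gaps of length 3**(-n),
# so it contributes 2**(n-1) squares of area 9**(-n) each.
def notched_area(levels):
    return sum(2 ** (n - 1) * 9.0 ** (-n) for n in range(1, levels + 1))

print(notched_area(60))   # converges to 1/7 = 0.142857...
```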
For each $j$, the boundary of $S_j$ is a geometric ray-pair: there exist $\alpha^j\in Z$ and $\theta^-_j,\theta^+_j\in\mathbb{R}/\mathbb{Z}$ such that $$\partial S_j=\mathcal R(\theta^-_j)\cup \{\alpha^j\}\cup \mathcal R(\theta^+_j).$$ We order $\theta^-_j, \theta^+_j$ in such a way that $$\{t\in \mathbb{R}/\mathbb{Z}: \mathcal{R}(t)\subset S_j\}=(\theta_j^-, \theta_j^+).$$ \begin{Proposition}\label{prop:slice} For each $j\in \{1,2,\ldots, \nu\}$ and each $n$ sufficiently large, there exists a geometric ray-pair $\gamma_n^j=\mathcal{R}(t_n^-(j))\cup \mathcal{R}(t_n^+(j))\cup \{\alpha_n^j\}$ contained in $S_j$ with the following properties: \begin{itemize} \item $\alpha_n^j\in f_0^{-n}(Z)$; \item $\theta^-_j, t_n^-(j), t_n^+(j), \theta^+_j$ lie in $\mathbb{R}/\mathbb{Z}$ in the anticlockwise order; \item $\partial S_j$ and $\gamma_n^j$ bound a slice; \item $t_n^-(j)\to \theta_j^-$, $t_n^+(j)\to \theta_j^+$ as $n\to\infty$. \end{itemize} \end{Proposition} We postpone the proof of this proposition to the end of this section and show now how it implies Theorem~\ref{thm:Kqcpuzzle}. \begin{proof}[Proof of Theorem 3'] For each large $n$, let $\widehat{S}_n^j$ be the slice bounded by $\gamma_n^j$ and $\partial S_j$ given by Proposition~\ref{prop:slice}, and let $S_n^j=\{z\in \widehat{S}_n^j: G_{f_0}(z)<1/d^n\}$, where $G_{f_0}$ is the Green function of $f_0$. So $\overline{S_n^j}$ is a finite union of closures of puzzle pieces of depth $n$. Choose $N_*$ sufficiently large so that the closure of $S^j:=S^j_{N_*}$ is disjoint from the post-critical set of $f_0$. Let $q\gg N_*$ be a positive integer so that for any $j,j'$, the diameter of each component of $f_0^{-q}(S^j)$ is much smaller than that of $S^{j'}$. For each $j\in \{1,2,\ldots,\nu\}$, $f_0^q$ maps a neighborhood $Q_j$ of $\alpha^j$ conformally onto the component of the interior of $\bigcup_{j'=1}^ {\nu} \overline{S^{j'}}$ which contains $f_0^q(\alpha^j)$. Let $A_j=Q_j\cap S^j$.
Then $A_j$ is a quasi-disk which contains $\alpha^j$ in its boundary, and there is $\sigma(j)\in \{1,2,\ldots, \nu\}$ such that $f_0^q(A_j)=S^{\sigma(j)}$. Similarly, there is a quasi-disk $B_j$ which is contained in $S^j$ and contains $\alpha_{N_*}^j$ in its boundary, and $\tau(j)\in \{1,2,\ldots, \nu\}$ such that $f_0^q(B_j)=S^{\tau(j)}$. Choosing $q$ large enough, we can ensure that $\overline{A_j}\cap\overline{B_j}=\emptyset$. Note that $m_d^q(\theta^+_j)=\theta^+_{\sigma(j)}$, $m_d^q(\theta^-_j)=\theta^-_{\sigma(j)}$, $m_d^q(t_{N_*}^-(j))=\theta^+_{\tau(j)}$ and $m_d^q(t_{N_*}^+(j))=\theta^-_{\tau(j)}$, where $m_d:\mathbb{R}/\mathbb{Z}\to \mathbb{R}/\mathbb{Z}$ denotes the map $t\mapsto dt \mod 1$. Let $$F:\bigcup_{j=1}^{\nu} \overline{A_j}\cup\overline{B_j}\to \bigcup_{j=1}^{\nu} \overline{S^j}$$ be the restriction of $f_0^q$. Let $A=(0,1/3)\times (-1/6, 1/6)$, $B=(2/3, 1)\times (-1/6,1/6)$ and $S=(0,1)\times (-1/2,1/2)$, and define a map $$G: \bigcup_{j=1}^\nu \{j\}\times (A\cup B) \to \bigcup_{j=1}^\nu \{j\}\times S,$$ as follows: $$G(j,z)=\left\{\begin{array}{ll} (\sigma(j), 3z), &\mbox{ if } z\in A;\\ (\tau(j), 3(1-z)), &\mbox{ if } z\in B. \end{array} \right. $$ Let $C_j=\overline{\{z\in S^j\setminus (\overline{A_j\cup B_j}): G_{f_0}(z)\le d^{-q-N_*}\}}$ and $C=[1/3,2/3]\times [-1/6, 1/6]$. {\bf Claim.} There is a qc homeomorphism $H:\bigcup_{j=1}^\nu S^j\to \bigcup_{j=1}^\nu \{j\}\times S$ such that \begin{enumerate} \item[(i)] $H(A_j)=\{j\}\times A$, $H(B_j)=\{j\}\times B$, $H(C_j)=\{j\}\times C$; \item[(ii)] $H\circ F=G\circ H$ holds on $\bigcup_{j=1}^\nu \overline {A_j\cup B_j}$. \end{enumerate} Indeed, it suffices to prove that there is a qc map $H_0:\bigcup_{j=1}^\nu S^j\to \bigcup_{j=1}^\nu \{j\}\times S$ such that (i) holds and (ii) holds on $\bigcup_{j} (\partial A_j\cup \partial B_j)$ (with $H$ replaced by $H_0$).
Indeed, once such an $H_0$ is constructed, we can construct inductively a sequence $\{H_n\}_{n=0}^\infty$ of qc maps by pull-back with the following properties: \begin{itemize} \item $H_{n+1}=H_n$ on $\bigcup_{j=1}^\nu S^j \setminus (A_j\cup B_j);$ \item $H_{n}\circ F=G\circ H_{n+1}$ holds on $\bigcup_{j=1}^\nu \overline {A_j\cup B_j}$. \end{itemize} These maps $H_n$ have the same maximal dilatation as $H_0$, and they eventually stabilize at every point of the set $X=\{z\in \bigcup_{j=1}^\nu S^j: F^n(z)\not\in \bigcup_j (A_j\cup B_j)\mbox{ for some }n\}$. Since $F$ is uniformly expanding, the set $X$ is dense in $\bigcup_j S^j$; it follows that $H_n$ converges to a qc map $H$ which satisfies the requirements. For the existence of $H_0$, a concrete construction of a homeomorphism with the desired properties can easily be done using the B\"ottcher coordinate, with the extra property that it is qc in $S^j\setminus \overline{A_j\cup B_j\cup C_j}$. It can be made globally qc since $S^j, A_j, B_j, C_j$ are all quasi-disks. Now, let $$\mathcal{N}_j=\{z\in S^j: \exists n\ge 1\text{ such that }F^n(z) \text{ is well-defined and belongs to } \bigcup_{j'} C_{j'}\}.$$ Note that for each $j$, $H(\mathcal{N}_j)=\{j\}\times\mathcal{N},$ where $\mathcal{N}$ is as in (\ref{eqn:setN}). Therefore, $$Q:=\max_{1\le j\le \nu} \mathcal{QD}(\mathcal{N}_j, S^j)<\infty.$$ Let $N=q+N_*$. Then any landing domain of $Y_{N}:=\bigcup_{c\in\text{Crit}(f_0)} Y_N(c)$ does not intersect $\bigcup_{k=0}^{q-1}\bigcup_{j=1}^\nu f_0^k(\partial A_j\cup\partial B_j)$. Therefore, $L_N\cap S^j\subset \mathcal{N}_j,$ so that $$\mathcal{QD}(L_N\cap S^j, S^j)\le \mathcal{QD}(\mathcal{N}_j, S^j)\le Q.$$ Now let $Y$ be an arbitrary Yoccoz puzzle piece of depth $m\ge 0$.
Similarly as in the construction of $A_j$ and $B_j$ above, for each $x\in \partial Y\cap J(f_0)$, there is a quasi-disk $V_x$ which is contained in $Y$ and contains $x$ in its closure such that $f_0^{m}$ maps $V_x$ onto $S^{j(x)}$ for some $j(x)\in \{1,2,\ldots, \nu\}$. Since $\overline{S^{j(x)}}$ is a finite union of closures of puzzle pieces of depth $N_*$ and is disjoint from the postcritical set of $f_0$, for each $k=0,1,\ldots, m-1$, $f_0^k(V_x)$ is a finite union of closures of puzzle pieces of depth $N_*+m-k$ ($< m+N$) and does not contain a critical point of $f_0$, so $f_0^k(V_x)\cap Y_{m+N}=\emptyset$. It follows that $f_0^{m}$ maps $V_x\cap L_{m+N}$ onto $S^{j(x)}\cap L_{m+N}$. Therefore $$\mathcal{QD}(L_{m+N}\cap V_x, V_x)= \mathcal{QD}(L_{m+N}\cap S^{j(x)}, S^{j(x)})\le \mathcal{QD}(L_N\cap S^{j(x)}, S^{j(x)})\le Q.$$ These $V_x$'s are pairwise disjoint, since each of them is mapped univalently onto a component of $\bigcup_j S^j$ under $f_0^{m}$. Noting that $(Y\cap L_{m+N})\setminus \bigcup_{x\in \partial Y\cap J(f_0)}V_x$ is compactly contained in $Y$, we conclude that $\mathcal{QD}(L_{m+N}\cap Y, Y)<\infty$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:slice}] We denote by $Y^j_n$ the unique puzzle piece of depth $n$ which has $\alpha^j$ on its boundary and is contained in the sector $S_j$. By Proposition~\ref{prop:noncrpiece}, when $n$ is sufficiently large, $\overline{Y_n^j}$ is disjoint from the postcritical set of $f_0$. Fix $n_0$ large so that for each $j$, there exists $\alpha^j_{n_0} \in \overline Y^j_{n_0}\cap J(f_0)$ with $\alpha^j_{n_0} \not\in Z$. Let $\mathcal R(s^-_j)$ and $\mathcal R(s^+_j)$ be the external rays landing at $\alpha^j_{n_0}$ which are the boundary curves of $Y^j_{n_0}$. We assume that $\theta^-_j, s^-_j, s^+_j, \theta^+_j$ lie in $\mathbb{R}/\mathbb{Z}$ in the anticlockwise cyclic order.
For $n\ge n_0$, we define two angles $t^-_n(j)$ and $t^+_n(j)$ as follows: \[t^-_n(j):=\sup\{\theta\in(\theta^-_j,s^-_j)\mid \mathcal R(\theta)\cap Y^j_{n}\ne \varnothing\}\] and \[t^+_n(j):=\inf\{\theta\in(s^+_j,\theta^+_{j})\mid \mathcal R(\theta)\cap Y^j_{n}\ne \varnothing\}.\] {\bf Claim 1.} {\em For every $1\le j\le \nu$ and any $n\ge n_0$, the two external rays $\mathcal R(t^-_n(j))$ and $\mathcal R(t^+_n(j))$ land at the same point, denoted by $\alpha_n^j$.} Let $\mathcal R^n(t)=\mathcal R(t)\cap \{z\mid G_{f_0}(z)<d^{-n}\}$. Clearly, $\mathcal R^n(t^-_n(j))$ and $\mathcal R^n(t^+_n(j))$ are on the boundary of $Y^j_n$ and are the closest rays (on the boundary of $Y^j_n$) to $\mathcal R(s^-_j)$ and $\mathcal R(s^+_j)$ respectively. First, we show that $\mathcal R(t^-_{n_0+1}(j))$ and $\mathcal R(t^+_{n_0+1}(j))$ land at the same point. {\em Case 1.} $\alpha_{n_0}^j\in \overline{Y_{n_0+1}^j}$. Then $t^{\pm}_{n_0+1}(j)=s^{\pm}_j$, and so the claim holds. {\em Case 2.} $\alpha_{n_0}^j\not\in \overline{Y_{n_0+1}^j}$. Then there exist $t^-$ and $t^+$ such that \begin{itemize} \item $\mathcal{R}(t^{\pm})$ intersects the boundary of $Y_{n_0+1}^j$, with a common landing point $\alpha\not\in \{\alpha_{n_0}^j,\alpha^j\}$; \item $\mathcal{R}(t^+)\cup \mathcal{R}(t^-)\cup\{\alpha\}$ separates $\alpha_{n_0}^j$ from $\alpha^j$. \end{itemize} Without loss of generality, assume $t^-\in (\theta_j^-, s_j^-)$. Then $t^+\in (s_j^+,\theta_j^+)$. So for any $t\in (t^-, s_j^-)$, $\mathcal{R}(t)$ is disjoint from $Y_{n_0+1}^j$, hence $t^-=t^-_{n_0+1}(j)$. Similarly, $t^+=t^+_{n_0+1}(j)$. Thus $t^-_{n_0+1}(j)\sim_{\lambda(f_0)} t^+_{n_0+1}(j)$. The general case can be proved similarly by induction. It is clear that $\partial S_j$ and $\gamma_n^j=\mathcal{R}(t^-_n(j))\cup\mathcal{R}(t^+_n(j))\cup \{\alpha_n^j\}$ bound a slice for each $n\ge n_0$.
So it remains to show {\bf Claim 2.} {\em The sequence $\{t^-_n(j)\}_{n>n_0}$ decreases monotonically to $\theta^-_j$ and $\{t^+_n(j)\}_{n>n_0}$ increases monotonically to $\theta^+_{j}$ for all $j$.} Since $\mathrm{diam}(Y^j_n)$ converges to $0$, $\alpha^j_n \to \alpha^j$. Obviously, $\{t^-_n(j)\}$ is monotonically decreasing, so it converges to some $\theta\in[\theta^-_j,s^-_j]$. If $\theta \ne \theta^-_j$, then $\alpha^j_n$ converges to the landing point of $\mathcal R(\theta)$, which is not equal to $\alpha^j$. This leads to a contradiction. \end{proof} \section{Thurston's Algorithm}\label{sec:Thurston} \subsection{Thurston's Algorithm} The Thurston algorithm was introduced by Thurston to construct a rational map that is combinatorially equivalent to a given branched covering of the 2-sphere. See \cite{DH3}. The algorithm goes as follows. Let $\widetilde{f}:\widehat{\mathbb C}\to \widehat{\mathbb C}$ be a quasi-regular map of degree $d>1$. Given a qc map $h_0:\widehat{\mathbb C}\to \widehat{\mathbb C}$, let $\sigma_0$ be the standard complex structure on $\widehat{\mathbb C}$ and let $\sigma=(h_0\circ\widetilde f)^*\sigma_0$. By the Measurable Riemann Mapping Theorem, there exists a qc map $h_1:\widehat{\mathbb C}\to \widehat{\mathbb C}$ with $h_1^*\sigma_0=\sigma$, so that $Q_0:=h_0\circ \widetilde{f}\circ h_1^{-1}$ is a rational map of degree $d$. The qc map $h_1$ is unique up to composition with a M\"obius transformation. Applying the same argument to $h_1$ instead of $h_0$, we obtain a qc map $h_2$ and a rational map $Q_1$ of degree $d$ such that $Q_1\circ h_2=h_1\circ \widetilde{f}$. Repeating the argument, we obtain a sequence of normalized qc maps $\{h_n\}_{n=0}^\infty$ and a sequence of rational maps $\{Q_n\}_{n=0}^\infty$ of degree $d$ such that $Q_n\circ h_{n+1}=h_{n}\circ \widetilde{f}$. The question is to study the convergence of $\{h_n\}_{n=1}^\infty$ and $\{Q_n\}_{n=1}^\infty$ after suitable normalization.
In \cite{R-L}, Rivera-Letelier applied the algorithm to a certain class of quasi-regular maps which may have non-recurrent branched points with infinite orbits. In this section, we shall modify his argument and prove a convergence theorem for a quasi-regular map $\widetilde{f}:\widehat{\mathbb C}\to \widehat{\mathbb C}$ whose irregular part has a nice Markov structure. For simplicity, we shall assume that $\widetilde{f}$ satisfies the following: $\widetilde{f}^{-1}(\infty)=\{\infty\}$ and $\widetilde{f}$ is holomorphic in a neighborhood of $\infty$. Below, we shall use the terminology {\em quasi-regular polynomial} for such a map. An open set $\mathcal{B}$ is called {\em nice} if each component of $\mathcal{B}$ is a Jordan disk and $\widetilde{f}^k(\partial \mathcal{B})\cap \mathcal{B}=\emptyset$ for each $k\ge 1$. Let $$D(\mathcal{B})=\{z\in \mathbb C: \exists n\ge 1\text{ such that } \widetilde{f}^n(z)\in\mathcal{B}\}$$ denote the domain of the first entry map to $\mathcal{B}$. We say that a nice open set $\mathcal{B}$ is {\em free} if $$P(\widetilde{f})\cap\overline{\mathcal{B}}=\emptyset,$$ where $$P(\widetilde{f})=\overline{\bigcup_{c\in \text{Crit}(\widetilde{f})}\bigcup_{n\ge 1} \{\widetilde{f}^n(c)\}}$$ and $\text{Crit}(\widetilde{f})$ denotes the set of ramification points of $\widetilde{f}$. We say that an open set $\mathcal{B}$ is {\em $M$-nice} if it is nice and for each component $B$ of $\mathcal{B}$, the following three conditions hold: \begin{equation}\label{eqn:shapeB} \text{diam}(B)^2\le M \text{area} (B); \end{equation} \begin{equation}\label{eqn:Bretsmall} \frac{\mathrm{area}(B\setminus D(\mathcal{B}))}{\mathrm{area}(B)}>M^{-1}; \end{equation} \begin{equation}\label{eqn:Bretqc} \mathcal{QD}(D(\mathcal B)\cap B, B)\le M, \end{equation} where $\mathcal{QD}$ is as defined in \S\ref{sec:Kahn}.
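To make the first two conditions concrete, here is a toy computation (our own illustration, with arbitrarily chosen shapes): suppose a component $B$ is a round disk of radius $r$ and $D(\mathcal B)\cap B$ is the concentric disk of radius $r/2$. Then

```latex
\mathrm{diam}(B)^2 = 4r^2 \le M\,\mathrm{area}(B) = M\pi r^2
\quad\Longleftrightarrow\quad M \ge \tfrac{4}{\pi},
\qquad
\frac{\mathrm{area}(B\setminus D(\mathcal B))}{\mathrm{area}(B)}
= 1-\tfrac{1}{4} = \tfrac{3}{4} > M^{-1}
\quad\Longleftrightarrow\quad M > \tfrac{4}{3},
```

so (\ref{eqn:shapeB}) and (\ref{eqn:Bretsmall}) hold with, say, $M=2$; the genuinely nontrivial requirement in practice is the distortion bound (\ref{eqn:Bretqc}).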
\begin{Theorem}\label{thm:Thurston} Let $\widetilde{f}:\widehat{\mathbb C}\to \widehat{\mathbb C}$ be a quasi-regular polynomial of degree $d\ge 2$ and let $A\subset \widehat{\mathbb C}$ be a Borel set such that $\bar\partial \tilde f=0$ a.e. outside $A$. Assume that there is a free open set $\mathcal{B}$ which is $M$-nice for some $M<\infty$ and a positive integer $T$ such that $$(\ast) \text{ for every }z\text{ and }n\ge 1, \text{ if }\widetilde{f}^j(z)\not\in\mathcal{B}\text{ for each }0\le j <n,\text{ then }\#\{0\le k<n: \widetilde f^k(z) \in A\}\le T.$$ Then there is a continuous surjection $h:\widehat{\mathbb C}\to \widehat{\mathbb C}$ and a rational map $f:\widehat{\mathbb C}\to \widehat{\mathbb C}$ of degree $d$ such that $f\circ h=h\circ \tilde f$. Moreover, there is a qc map $\lambda_0$ such that $\lambda_0(z)=h(z)$ whenever $z\not\in \bigcup_{n=0}^\infty \widetilde{f}^{-n}(\mathcal{B})$ and such that $\bar{\partial} \lambda_0=0$ holds a.e. on the set $\{z\in \widehat{\mathbb C}: \widetilde{f}^n(z)\not\in A, \forall n\ge 0\}$. \end{Theorem} The rest of this section is devoted to a proof of the theorem. Without loss of generality, we may assume that $\mathcal{B}\subset A$. Note that ${D}(\mathcal{B})\cup \mathcal{B}$ is compactly contained in $\mathbb C$. Starting with $h_0=\mathrm{id}$, we construct a sequence of qc maps $h_n$ as above, normalized so that $h_n(z)=z+o(1)$ near infinity. Note that there is a neighborhood $V$ of $\infty$ such that $\widetilde{f}^{-1}(V)\subset V$, so that all the maps $h_n$ are conformal in $V$. 
For a map $\varphi:\mathbb C\to \mathbb C$ and $z\in \mathbb C$, let $$H(\varphi;z)=\limsup_{r\searrow 0}\frac{\sup_{|w-z|=r} |\varphi(w)-\varphi(z)|}{\inf_{|w-z|=r}|\varphi(w)-\varphi(z)|}.$$ So if $\varphi$ is differentiable at $z$ with a positive Jacobian, then $$H(\varphi;z)=\frac{|\partial \varphi (z)|+|\overline{\partial} \varphi(z)|}{|\partial \varphi (z)|-|\overline{\partial} \varphi(z)|}.$$ Fix $K>1$ such that $\widetilde{f}$ is $K$-quasi-regular. Let us call a point $z\in \mathbb C$ {\em regular} if the following hold: \begin{enumerate} \item $\widetilde{f}^n(z)$ is not a ramification point of $\widetilde{f}$ for any $n\ge 0$; \item for any non-negative integers $n$ and $m$, the maps $h_n$ and $\widetilde{f}$ are differentiable at $\widetilde{f}^m(z)$ with a positive Jacobian. Moreover, $$H(\widetilde{f}; \widetilde{f}^m(z))\le K.$$ \end{enumerate} Note that Lebesgue almost every point in $\mathbb C$ is regular. \begin{Lemma}\label{lem:hkh-n1st} If $z$ is a regular point, then for any integers $k>n\ge 0$, the following hold: \begin{equation} H(h_k\circ h_n^{-1}; h_n(z))=H(\widetilde{f}^{k-n};\widetilde{f}^n(z))\le \prod_{j=n}^{k-1} H(\widetilde{f}; \widetilde{f}^j(z))\le K^{\#\{n\le j<k: \widetilde{f}^j(z)\in A\}}. \end{equation} \end{Lemma} \begin{proof} The equality follows from the identity $$\widetilde{f}^{k-n}\circ (Q_0\circ Q_1\circ \cdots \circ Q_{n-1})\circ h_n =Q_0\circ \cdots \circ Q_{k-1}\circ h_k=\widetilde{f}^k.$$ The first inequality follows from the definition of $H$ and the second inequality follows from the assumption that $z$ is regular. 
\end{proof} Define \[\widetilde{\mathcal{K}}(n):=\{z\mid \tilde f^k(z) \notin A\text{ for all } k \ge n\}\] and \[\widetilde{\mathcal{L}}(n):=\{z\mid \tilde f^k(z) \notin \mathcal B\text{ for all } k \ge n\} \supset \widetilde{\mathcal{K}}(n).\] As $\mathcal{B}$ is open, $\widetilde{\mathcal L}(n)$ is closed for each $n\ge 0$. \begin{Lemma}For every $k>n$, $\overline{\partial } (h_k\circ h^{-1}_n)=0$ a.e. on $h_n(\widetilde{\mathcal{K}}(n))$. Moreover, there exists a $K'$-qc map $h_{k,n}$ such that $h_{k,n}=h_k\circ h^{-1}_n$ on $h_n(\widetilde{\mathcal{L}}(n))$. \end{Lemma} \begin{proof} For each regular $z\in \widetilde{\mathcal{K}}(n)$, $\#\{n\le j<k: \widetilde{f}^j(z)\in A\}=0$. So by Lemma~\ref{lem:hkh-n1st}, $H(h_k\circ h_n^{-1}; h_n(z))=1$. Since a qc map is absolutely continuous, the first statement follows. Now let us turn to the second statement. Note that for each regular $z\in \widetilde{\mathcal{L}}(n)$, \begin{equation}\label{eqn:hkhn-1Ln} H(h_k\circ h_n^{-1}; h_n(z))\le K^T, \end{equation} since by assumption ($\ast$), $\#\{n\le j<k:\widetilde{f}^j(z)\in A\}\le T.$ Put $K'=K^{4T+1}M$. {\bf Claim.} For each component $W$ of $\mathbb C\setminus \widetilde{\mathcal{L}}(n)$, there is a $K'$-qc map $h^W_{k,n}$ from $h_n(W)$ onto its image such that $h_{k,n}^W|\partial h_n(W)=h_k\circ h_n^{-1}|h_n(\partial W)$. Once this claim is proved, we can obtain a homeomorphism $h_{k,n}:\widehat{\mathbb C}\to \widehat{\mathbb C}$ which coincides with $h_k\circ h_n^{-1}$ on $h_n(\widetilde{\mathcal{L}}(n))$ and coincides with $h_{k,n}^W$ on $h_n(W)$ for each component $W$ of $\mathbb C\setminus \widetilde{\mathcal{L}}(n)$. By \cite[Lemma 2 in Chapter 1]{DH2}, the map $h_{k,n}$ is $K'$-qc. To prove the claim, let $w$ be the smallest integer such that $w\ge n$ and $\widetilde{f}^w(W)\cap\mathcal{B}\not=\emptyset$. 
Since $\mathcal{B}$ is nice and disjoint from the postcritical set of $\widetilde{f}$, $\widetilde{f}^w$ maps $W$ homeomorphically onto a component $B$ of $\mathcal{B}$, and $\widetilde{f}^k(W)\cap\mathcal{B}=\emptyset$ for each $n\le k<w$. By the assumption ($\ast$), for any $z\in W$, $\#\{n\le j<w: \widetilde{f}^j(z)\in A\}\le T.$ By Lemma~\ref{lem:hkh-n1st}, if $z$ is regular, then for any $n<k\le w$, $$H(h_k\circ h_n^{-1}; h_n(z))=H(\widetilde{f}^{k-n}; \widetilde{f}^n(z))\le K^T.$$ In particular, if $n<k\le w$, then $h_k\circ h_n^{-1}$ is $K^T$-qc on $h_n(W)$. Assume now that $k>w$ and let $W'=(\widetilde{f}^{w}|W)^{-1}(D(\mathcal{B})\cap B)$. Since $\widetilde{f}^{n}\circ h_n^{-1}$ is conformal on $h_n(W)$ and $\widetilde{f}^{w-n}$ is a $K^T$-qc map in $\widetilde{f}^n(W)$, by Lemma~\ref{qc1}, we have $$\mathcal{QD}(h_n(W'), h_n(W))\le K^{2T} \mathcal{QD}(D(\mathcal{B})\cap B, B)\le K^{2T}M.$$ For each $z\in W\setminus W'$, $\widetilde{f}^w(z)\not\in D(\mathcal{B})$, so $\widetilde{f}^j(z)\not\in \mathcal{B}$ for all $j>w$. By assumption ($\ast$), it follows that $$\#\{n\le j<k: \widetilde{f}^j(z)\in A\}\le 2T+1.$$ Thus for a regular $z\in W\setminus W'$, $H(h_k\circ h_n^{-1};h_n(z))\le K^{2T+1}.$ It follows by Lemma~\ref{qc2} that there is a $K'$-qc map defined on $h_n(W)$ which has the same boundary values as $h_k\circ h_n^{-1}$. \end{proof} By compactness of normalized $K'$-qc maps and a diagonal argument, there exists a sequence $\{k_j\}_{j=1}^\infty$ of positive integers such that for each $n$, $h_{k_j,n}$ converges locally uniformly in $\mathbb C$ to a $K'$-qc map $\lambda_n:\mathbb C\to\mathbb C$. 
Since $\overline{\partial } h_{k_j,n}$ and $\overline{\partial }\lambda_n$ have bounded norm in $L^2(\mathbb C)$ and $\overline{\partial } h_{k_j,n}$ converges to $\overline{\partial }\lambda_n$ in the sense of distributions, it follows that \begin{equation}\label{eqn:dbarchi} \overline{\partial }\lambda_n=0 \text{ a.e. on } h_n(\widetilde{\mathcal{K}}(n)). \end{equation} \begin{Lemma}\label{lem:chinhn} For each $k>n\ge 0$, $$\lambda_n \circ h_n=\lambda_k \circ h_k\text{ on }\widetilde{\mathcal{L}}(n).$$ \end{Lemma} \begin{proof} For each $j$ large enough so that $k_j>k>n$, since $\widetilde{\mathcal{L}}(n)\subset \widetilde{\mathcal{L}}(k)$, \begin{eqnarray*} h_{k_j,n}\circ h_n=h_{k_j} = h_{k_j}\circ h^{-1}_k\circ h_k = h_{k_j,k}\circ h_k \end{eqnarray*} is valid on $\widetilde{\mathcal{L}}(n)$. Letting $j$ go to infinity, we obtain $\lambda_n\circ h_n=\lambda_k\circ h_k$ on $\widetilde{\mathcal{L}}(n)$. \end{proof} \subsection{Limit geometry} Let $\mathcal{L}(n)=\lambda_n\circ h_n(\widetilde{\mathcal{L}}(n))$ and $\mathcal{K}(n)=\lambda_n\circ h_n(\widetilde{\mathcal{K}}(n))$. Since we assume $A\supset \mathcal{B}$, $\mathcal{K}(n)\subset \mathcal{L}(n)$. By Lemma~\ref{lem:chinhn}, for $k\ge n$, we have \[\mathcal{L}(n)=\lambda_k\circ h_k(\widetilde{\mathcal{L}}(n))\subset \lambda_k\circ h_k(\widetilde{\mathcal{L}}(k))=\mathcal{L}(k)\] and \[\mathcal{K}(n)=\lambda_k\circ h_k(\widetilde{\mathcal{K}}(n))\subset \lambda_k\circ h_k(\widetilde{\mathcal{K}}(k))=\mathcal{K}(k).\] By (\ref{eqn:dbarchi}), $\lambda^{-1}_n$ is conformal a.e. on $\mathcal{K}(n)$. 
\begin{equation*} \xymatrix@C=4em@R=3em{ \cdots \ar[r] &\mathbb C\ar[r] &\mathbb C \ar[r] &\cdots \ar[r] &\mathbb C\ar[r] &\mathbb C\\ \cdots \ar[r] &\mathbb C \ar[u]^{\lambda_{n+1}}\ar[r]^{Q_n} &\mathbb C\ar[u]^{\lambda_{n}}\ar[r] &\cdots \ar[r] &\mathbb C \ar[u]^{\lambda_{1}}\ar[r]^{Q_0}&\mathbb C\ar[u]^{\lambda_{0}}\\ \cdots \ar[r] &\mathbb C \ar[u]^{h_{n+1}}\ar[r]^{\tilde f} &\mathbb C\ar[u]^{h_n}\ar[r] &\cdots \ar[r] &\mathbb C \ar[u]^{h_1}\ar[r]^{\tilde f} &\mathbb C \ar[u]^{\mathrm{id}}} \end{equation*} \begin{Lemma}\label{lem:shapeW} There exists $M'>0$ such that for any $n\ge 0$ and any component $W$ of $\mathbb C\setminus \mathcal{L}(n)$, $$\text{diam} (W)^2\le M'\text{area}(W).$$ \end{Lemma} \begin{proof} Let $\widetilde{W}=(\lambda_n\circ h_n)^{-1}(W)$. Let $w$ be the minimal integer such that $w\ge n$ and $\widetilde{f}^w(\widetilde{W})\cap \mathcal{B}\not=\emptyset$. Then $\widetilde{W}$ is a component of $\mathbb C\setminus \widetilde{\mathcal{L}}(w)$ and $\widetilde{f}^w$ maps $\widetilde{W}$ homeomorphically onto a component $B$ of $\mathcal{B}$. So $\varphi:=Q_0\circ Q_1\circ \cdots \circ Q_{w-1}$ maps $h_w(\widetilde{W})$ conformally onto $B$. Moreover, since $B$ has a definite neighborhood disjoint from $P(\widetilde{f})$, the conformal map $\varphi$ has bounded distortion. Thus $\text{diam}(h_w(\widetilde{W}))^2/\text{area}(h_w(\widetilde{W}))$ is bounded from above. Since the $\lambda_w$ are normalized $K'$-qc maps and $\lambda_w (h_w(\widetilde{W}))=W$, the statement follows. \end{proof} \begin{Lemma}\label{lem:Lnsmall} \begin{enumerate} \item The Lebesgue measure of the set $\mathbb C\setminus \mathcal{L}(n)$ tends to zero as $n\to\infty$. 
\item $$\lim_{n\to\infty} \sup\{\text{diam}(W): W\text{ is a component of }\mathbb C\setminus \mathcal{L}(n)\}=0.$$ \end{enumerate} \end{Lemma} \begin{proof} The second statement follows from the first, since all components of $\mathbb C\setminus \mathcal{L}(n)$ have uniformly bounded shape by Lemma~\ref{lem:shapeW}. To prove the first statement, we shall use a martingale-type argument. Let $\mathscr{W}$ denote the collection of components of $\mathbb C\setminus \mathcal{L}(n)$, where $n$ runs over all non-negative integers. Let $\mathscr{W}^0$ denote the maximal elements in $\mathscr{W}$, i.e., those that are not contained in any other. For each $k\ge 1$, define inductively $\mathscr{W}^k$ to be the maximal elements in $\mathscr{W}\setminus \bigcup_{0\le j<k} \mathscr{W}^j$. So $\mathscr{W}$ is the disjoint union of the $\mathscr{W}^k$, $k=0,1,\ldots$. Note that $$\mathbb C\setminus \bigcup_n \mathcal{L}(n)\subset \bigcap_{k=0}^\infty \bigcup_{W\in \mathscr{W}^k} W.$$ It suffices to show that there is a constant $\lambda\in (0,1)$ such that for $k\ge 0$ and each $W\in \mathscr{W}^k$, \begin{equation}\label{eqn:Wfollower} \frac{\text{area}(W\cap (\bigcup_{W'\in \mathscr{W}^{k+1}}W'))}{\text{area} (W)}\le \lambda. \end{equation} To this end, fix such a $W$. Note that there is $n$ such that $\widetilde{W}:=(\lambda_n\circ h_n)^{-1}(W)$ satisfies the following: $\widetilde{f}^n$ maps $\widetilde{W}$ homeomorphically onto a component $B$ of $\mathcal{B}$. So $\varphi:=Q_0\circ Q_1\circ \cdots\circ Q_{n-1}$ maps $\lambda_n^{-1}(W)$ conformally onto $B$ and maps $\lambda_n^{-1}(W\setminus (\bigcup_{W'\in \mathscr{W}^{k+1}}W'))$ onto $B\setminus D(\mathcal{B})$. Since $B$ has a definite neighborhood disjoint from $P(\widetilde{f})$, $\varphi|\lambda_n^{-1}(W)$ has bounded distortion. 
Thus $$\frac{\text{area}(\lambda_n^{-1}(W\setminus (\bigcup_{W'\in \mathscr{W}^{k+1}}W')))}{\text{area} (\lambda_n^{-1}(W))} \ge C\frac{\text{area}(B\setminus D(\mathcal{B}))}{\text{area}(B)}>\frac{C}{M},$$ where $C>0$ is independent of $W$. Since the $\lambda_n$ are normalized $K'$-qc maps, (\ref{eqn:Wfollower}) follows. \end{proof} \begin{Lemma} \label{lem:KnLn} $\bigcup_{n=0}^\infty \mathcal{K}(n)=\bigcup_{n=0}^\infty\mathcal{L}(n)$. \end{Lemma} \begin{proof} By the assumption ($\ast$), $\bigcup_n \widetilde{\mathcal{K}}(n)=\bigcup_n \widetilde{\mathcal{L}}(n)$. Arguing by contradiction, assume that there exists $z\in (\bigcup_n \mathcal{L}(n))\setminus (\bigcup_n \mathcal{K}(n))$. Then there exists $n_0$ such that for all $n\ge n_0,$ $z\in \mathcal{L}(n)\setminus \mathcal{K}(n)$. By definition, there exists $\tilde{z}_n\in \widetilde{\mathcal L}(n)\setminus \widetilde{\mathcal{K}}(n)$ such that $\lambda_n\circ h_n(\tilde{z}_n)=z$. Then $$\lambda_{n+1}\circ h_{n+1}(\tilde{z}_n)=\lambda_n\circ h_n(\tilde{z}_n)=z=\lambda_{n+1}\circ h_{n+1}(\tilde{z}_{n+1}),$$ so $\tilde{z}_{n+1}=\tilde{z}_n$. Therefore $\tilde{z}=\tilde{z}_{n_0}$ satisfies $\tilde{z}\in (\bigcup_{n=n_0}^\infty \widetilde{\mathcal{L}}(n))\setminus (\bigcup_{n=0}^\infty\widetilde{\mathcal{K}}(n))$. This is absurd. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Thurston}] By Lemma~\ref{lem:chinhn}, $\lambda_n\circ h_n=\lambda_k\circ h_k$ on $\widetilde{\mathcal{L}}(n)$ for all $k>n$, so we obtain $\lambda_n\circ h_n(\widetilde{W})=\lambda_k\circ h_k(\widetilde{W})$ for every component $\widetilde{W}$ of $\mathbb C\setminus \widetilde{\mathcal{L}}(n)$. 
By Lemma~\ref{lem:Lnsmall}, \[\sup\limits_{z\in \mathbb C}|\lambda_k\circ h_k(z)-\lambda_n\circ h_n(z)| \le 2\sup\limits_{\widetilde W}\mathrm{diam}~\lambda_n\circ h_n(\widetilde W) \to 0\] as $n$ tends to $\infty$, so $\lambda_n\circ h_n$ converges uniformly to a continuous function $h:\mathbb C\to \mathbb C$. Similarly, \begin{eqnarray*} & &\sup\limits_{z\in \mathbb C}|\lambda_{k}\circ Q_{k}\circ \lambda^{-1}_{k+1}(z)-\lambda_{n}\circ Q_{n}\circ \lambda^{-1}_{n+1}(z)|\\ &=& \sup\limits_{z\in \mathbb C}|\lambda_{k}\circ h_k\circ\tilde f\circ h^{-1}_{k+1}\circ \lambda^{-1}_{k+1}(z)-\lambda_{n}\circ h_n\circ \tilde f \circ h^{-1}_{n+1}\circ \lambda^{-1}_{n+1}(z)| \\ &\le& 2 \sup\limits_{W}\mathrm{diam}\, W \to 0 \end{eqnarray*} as $n$ tends to $\infty$, so $\lambda_{n}\circ Q_{n}\circ \lambda^{-1}_{n+1}$ converges uniformly to a proper map $f:\mathbb C\to \mathbb C$.\par Recall that $\lambda^{-1}_n$ is a normalized $K'$-qc map which is conformal Lebesgue a.e. on $\mathcal{K}(n)$. By Lemma~\ref{lem:Lnsmall} (1) and Lemma~\ref{lem:KnLn}, $\bigcup\limits_{n}\mathcal{K}(n)=\bigcup\limits_n \mathcal{L}(n)$ has full Lebesgue measure. By \cite[Lemma B.1]{R-L}, $\lambda^{-1}_n$ converges uniformly to the identity, hence so does $\lambda_n$. We conclude that $h_n$ converges uniformly to the continuous map $h$ and $Q_n$ converges uniformly to a rational map $f$ of degree $d$. Thus $h\circ \widetilde{f}=f\circ h$. On $\widetilde{\mathcal{L}}(0)$, $h(z)=\lim_{n\to\infty}\lambda_n\circ h_n(z)=\lambda_0\circ h_0(z)=\lambda_0(z)$. \end{proof} \section{Qc surgery and proof of the Main Theorem}\label{sec:surgery} As before, we fix $f_0\in \text{Poly}(d)$ which is postcritically finite, hyperbolic and primitive, let $T=(|T|, \sigma, \delta)$ denote the reduced mapping scheme of $f_0$ and let $r:|T|\to \mathbb{N}$ denote the return time function. 
We also fix a collection $\{\theta_\mathbf{v}\}_{\mathbf{v}\in |T|}$ of external angles such that $d^{r(\mathbf{v})}\theta_\mathbf{v}=\theta_{\sigma(\mathbf{v})}\mod 1$ and such that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ lands on the boundary of $\mathbf{v}$, for each $\mathbf{v}\in |T|$, according to Lemma~\ref{lem:externalangle}. Choose two large positive integers $N_0<N_1$ such that the following hold for each $\mathbf{v}\in |T|$: \begin{itemize} \item $f_0^j(Y_{N_0}(\mathbf{v}))$, $\mathbf{v}\in |T|$, $0\le j<r(\mathbf{v})$, are pairwise disjoint; \item $f_0^{r(\mathbf{v})}: Y_{N_0}(\mathbf{v})\to Y_{N_0-r(\mathbf{v})}(\sigma(\mathbf{v}))$ has degree $\delta(\mathbf{v})$; \item putting $N'_0=N_0+N+\max_{\mathbf{v}} r(\mathbf{v})$, $Y_{N_1}(\mathbf{v})\Subset Y_{N'_0}(\mathbf{v})$; \end{itemize} where $N$ is as in Theorem~\ref{thm:Kqcpuzzle}. Applying the `thickening' procedure (\cite{Mil4} and \cite[Lemma 5.13]{IK}), we obtain quasi-disks $U'_{\mathbf{v}}\Subset U''_{\mathbf{v}}$, with $U'_{\mathbf{v}}\supset Y_{N_1+r(\mathbf{v})}(\mathbf{v})$ and $Y_{N_0+N}(\mathbf{v})\Supset U''_{\mathbf{v}}\supset Y_{N_1}(\mathbf{v})$, such that $f_0^{r(\mathbf{v})}: U'_{\mathbf{v}}\to U''_{\mathbf{v}}$ again has degree $\delta(\mathbf{v})$. Then the map $$F_0:\bigcup_{\mathbf{v}\in |T|} \{\mathbf{v}\}\times U'_\mathbf{v}\to \bigcup_{\mathbf{v}} \{\mathbf{v}\}\times U''_\mathbf{v}$$ defined by $F_0|U'_\mathbf{v}=f_0^{r(\mathbf{v})} |U'_\mathbf{v}$ is a GPL map over $T$, with filled Julia set equal to $\bigcup_{\mathbf{v}}\{\mathbf{v}\}\times\overline{\mathbf{v}}$. We may choose these domains $U'_\mathbf{v}$ and $U''_\mathbf{v}$ so that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ intersects $\partial U'_\mathbf{v}$ (resp. $\partial U''_\mathbf{v}$) at a single point. Let $U_\mathbf{v}=\{z\in U'_\mathbf{v}: F_0(\mathbf{v}, z)\in \{\sigma(\mathbf{v})\}\times U'_{\sigma(\mathbf{v})}\}$. 
\begin{Theorem}\label{thm:tildef} Given $g\in\mathcal{C}(T)$, there exists a quasi-regular map $\widetilde{f}$ of degree $d$ with the following properties: \begin{enumerate} \item $f_0(z)=\widetilde{f}(z)$ for each $z\in \mathbb C\setminus (\bigcup_{\mathbf{v}} U'_\mathbf{v})$; \item There exist quasi-disks $U_{\mathbf{v}, g} \Subset U'_\mathbf{v}$ such that $\widetilde{f}$ is holomorphic in $U_{\mathbf{v},g}$ and the map $$\widetilde{F}:\bigcup_{\mathbf{v}} \{\mathbf{v}\}\times U_{\mathbf{v}, g} \to \bigcup_{\mathbf{v}} \{\mathbf{v}\}\times U'_\mathbf{v},\,\, (\mathbf{v}, z)\mapsto (\sigma(\mathbf{v}), \widetilde{f}^{r(\mathbf{v})}(z)),$$ is a GPL map over $T$ which is conformally conjugate to $g$ near their filled Julia sets. More precisely, there are quasi-disks $V_{\mathbf{v}, g}\Subset V'_{\mathbf{v}, g}$ such that $g:\bigcup_{\mathbf{v}} \{\mathbf{v}\}\times V_{\mathbf{v}, g} \to \bigcup_{\mathbf{v}} \{\mathbf{v}\}\times V'_{\mathbf{v},g}$ is a GPL map over $T$, and for each $\mathbf{v}\in |T|$ there is a conformal map $\varphi_\mathbf{v}: U'_\mathbf{v}\to V'_{\mathbf{v},g}$ such that $\varphi_{\mathbf{v}} (U_{\mathbf{v}, g})=V_{\mathbf{v},g}$ and $$\varphi_{\sigma(\mathbf{v})}\circ \widetilde{f}^{r(\mathbf{v})}=g\circ \varphi_\mathbf{v} \text{ holds on } U_{\mathbf{v},g}.$$ \item Furthermore, if $\ell_\mathbf{v}$ denotes the union of $\mathcal{R}_{f_0}(\theta_\mathbf{v})\setminus U'_\mathbf{v}$ and $\varphi_\mathbf{v}^{-1}(\mathcal{R}_g(\mathbf{v}, 0)\cap V'_{\mathbf{v},g})$, then $\ell_\mathbf{v}$ is a ray, that is, a simple curve starting from infinity, and $\widetilde{f}^{r(\mathbf{v})}(\ell_\mathbf{v})=\ell_{\sigma(\mathbf{v})}$. \end{enumerate} \end{Theorem} \begin{proof} Let $a'_\mathbf{v}$ (resp. $a_\mathbf{v}$) denote the unique intersection point of $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ with $\partial U'_\mathbf{v}$ (resp. $\partial U_\mathbf{v}$). 
Let $V'_{\mathbf{v},g}=\{z\in \mathbb C: |G_g(\mathbf{v},z)|<1\}$ and $V_{\mathbf{v},g}=\{z\in \mathbb C: |G_g(\mathbf{v},z)|<1/\delta(\mathbf{v})\}$, where $G_g$ is the Green function of $g$. Let $a'_{\mathbf{v},g}$ (resp. $a_{\mathbf{v}, g}$) denote the unique intersection point of the external ray $\mathcal{R}_g(\mathbf{v}, 0)$ with $\partial V'_{\mathbf{v}, g}$ (resp. $\partial V_{\mathbf{v}, g}$). Let $\varphi_\mathbf{v}$ denote the unique Riemann mapping from $U'_\mathbf{v}$ onto $V'_{\mathbf{v},g}$ such that $\varphi_\mathbf{v}(a'_{\mathbf{v}})=a'_{\mathbf{v}, g}$ and $\varphi_\mathbf{v} (a_{\mathbf{v}})=a_{\mathbf{v}, g}$. Define $U_{\mathbf{v},g}=\varphi_{\mathbf{v}}^{-1}(V_{\mathbf{v},g})$ and define $\widetilde{f}|U_{\mathbf{v},g}= (f_0^{r(\mathbf{v})-1}|f_0(U'_\mathbf{v}))^{-1}\circ (\varphi_{\sigma(\mathbf{v})}^{-1}\circ g\circ \varphi_\mathbf{v})$, which is a holomorphic proper map of degree $\delta(\mathbf{v})$. Finally, define $\widetilde{f}$ on each annulus $U'_{\mathbf{v}}\setminus U_{\mathbf{v}, g}$ so that $\widetilde{f}$ is a quasi-regular covering map from this annulus to $f_0(U'_{\mathbf{v}}\setminus U_{\mathbf{v}})$ of degree $\delta(\mathbf{v})$ and $\widetilde{f}$ maps the arc $\varphi^{-1}_{\mathbf{v}}(\mathcal{R}_g(\mathbf{v},0)\cap (V'_{\mathbf{v},g}\setminus V_{\mathbf{v},g}))$ onto the arc $f_0(\mathcal{R}_{f_0}(\theta_\mathbf{v})\cap (U'_\mathbf{v}\setminus U_\mathbf{v}))$. All the desired properties are easily checked. \end{proof} \begin{proof}[Proof of the Main Theorem (Surjectivity)] Let $\widetilde f$ be the quasi-regular map constructed in Theorem~\ref{thm:tildef}, and let $C\ge 1$ be the maximal dilatation of $\widetilde{f}$. Let us check that it satisfies the conditions of Theorem~\ref{thm:Thurston}. Firstly, it is a quasi-regular polynomial and $\bar{\partial} \widetilde{f}=0$ a.e. 
on $\mathbb C\setminus A$, where $$A:=\bigcup\limits_{\mathbf{v} \in |T|} U'_{\mathbf{v}}\backslash U_{\mathbf{v},g}\subset \bigcup_{\mathbf{v}} Y_{N_0+N}(\mathbf{v}).$$ To construct the set $\mathcal{B}$, let $$R(A)=\{x\in A:\exists n\ge 1\text{ such that } \widetilde{f}^n(x)\in A\}$$ be the return domain to $A$ under $\widetilde{f}$. For every $x \in R(A)$, there is a smallest integer $k=k(x)$ such that $\tilde f^k(x) \in \bigcup\limits_{\mathbf{v}} Y_{N_0}(\mathbf{v})\backslash \overline{Y_{N_0+r(\mathbf{v})}(\mathbf{v})}=:\Omega$. It is easy to see that $Q:=\sup\limits_{x \in R(A)}k(x)<\infty$. Let $$E=\left\{z\in \Omega: \exists n\ge 1 \text{ such that } \widetilde{f}^n(z)\in \bigcup_{\mathbf{v}}Y_{N_0}(\mathbf{v})\right\},$$ and let $\mathcal{B}$ be the union of the components of $D(E)$ which intersect $R(A)$. So $\mathcal{B}\supset R(A)$. Consequently, if $\widetilde{f}^j(z)\not\in \mathcal{B}$ for $0\le j<n$, then $\#\{0\le j<n: \widetilde{f}^j(z)\in A\}\le 1$. So the condition ($\ast$) in Theorem~\ref{thm:Thurston} holds with $T=1$. Let us show that $\mathcal{B}$ is a free $M$-nice set for some $M>0$. To this end, we first observe that $E$, and hence $\mathcal{B}$, is a nice set of $\widetilde{f}$. Since $\overline{\Omega}$ is disjoint from the post-critical set of $\widetilde{f}$ and $\mathcal{B}\subset \bigcup_{k=0}^Q \widetilde{f}^{-k}(\Omega)$, $\mathcal{B}$ is free. Fix a component $B$ of $\mathcal{B}$, let $s$ be the entry time of $B$ into $E$, and let $t$ denote the return time of $\widetilde{f}^{s}(B)$ into $\bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$. Then $\widetilde{f}^{s+t}$ maps $B$ homeomorphically onto a component of $\bigcup_{\mathbf{v}}Y_{N_0}(\mathbf{v})$ and the map $\widetilde{f}^{s+t}|B$ is $C$-qc. 
In fact, for each $x\in B$, $\#\{0\le j<s: \widetilde{f}^j(x)\in A\}\le 1$ and $\widetilde{f}^t|\widetilde{f}^s(B)=f_0^t|\widetilde{f}^s(B)$ is conformal. Our assumption on $N_1$ and $N_0$ ensures that $B\subset \mathcal{B}\subset \bigcup_{\mathbf{v}}Y_{N_0+N}(\mathbf{v})$, so that $\widetilde{f}^{s+t} (D(\mathcal{B})\cap B)\subset L_{N_0+N}$. It follows from Lemma~\ref{qc1} and Theorem~3' that \[\mathcal{QD}(D(\mathcal {B})\cap B, B) \le C^{2}\max_{\mathbf{v}\in |T|}\mathcal{QD}(L_{N_0+N}\cap Y_{N_0}(\mathbf{v}),Y_{N_0}(\mathbf{v}))=:M<\infty.\] Since $\widetilde{f}^{s+t}|B$ extends to a $C$-qc map onto a neighborhood of $\widetilde{f}^{s+t}(B)$, enlarging $M$ if necessary, we have that $M\text{area}(B)\ge \text{diam}(B)^2$ and $M\text{area}(B\setminus D(\mathcal{B}))>\text{area}(B)$. This proves that $\mathcal{B}$ is $M$-nice. So by Theorem~\ref{thm:Thurston}, there is a continuous surjective map $h$ and a map $f\in \text{Poly}(d)$ such that $f\circ h=h\circ \widetilde{f}$. The map $h$ is holomorphic near infinity and satisfies $h(z)=z+o(1)$ there. Near $\infty$, $\widetilde{f}=f_0$. Thus $h=\phi_f^{-1}\circ \phi_{f_0}$ near $\infty$, where $\phi_f$ and $\phi_{f_0}$ are the B\"ottcher maps for $f$ and $f_0$ respectively. It follows that $h(\ell_\mathbf{v})$ is the external ray $\mathcal{R}_f(\theta_\mathbf{v})$. By Proposition~\ref{prop:com}, $f\in \mathcal{C}(f_0)$ and $F:\bigcup_{\mathbf{v}}\{\mathbf{v}\}\times h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))\to \bigcup_{\mathbf{v}}\{\mathbf{v}\}\times h(Y_{N_0}(\mathbf{v}))$, $F|h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))=f^{r(\mathbf{v})}$, is a $\lambda(f_0)$-renormalization of $f$. In order to show that $\chi(f)=g$, we need to show that $F$ and $\widetilde{F}$ are hybrid equivalent. 
Let us consider the associated maps $\widetilde{\textbf{F}}: \bigcup_{\mathbf{v}} Y_{N_0+r(\mathbf{v})}(\mathbf{v})\to \bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$ and $\textbf{F}:\bigcup_{\mathbf{v}} h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))\to \bigcup_{\mathbf{v}}h(Y_{N_0}(\mathbf{v}))$, where $\widetilde{\textbf{F}}|Y_{N_0+r(\mathbf{v})}(\mathbf{v})=\widetilde{f}^{r(\mathbf{v})}|Y_{N_0+r(\mathbf{v})}(\mathbf{v})$ and $\textbf{F}|h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))=f^{r(\mathbf{v})}|h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))$. It suffices to show that there is a qc map $H:\bigcup_{\mathbf{v}} U'_\mathbf{v}\to \bigcup_{\mathbf{v}} h(U'_\mathbf{v})$ such that $H\circ \widetilde{\textbf{F}}=\textbf{F}\circ H$ and such that $\bar{\partial}H=0$ a.e. on the filled Julia set $K(\widetilde{\textbf{F}})$ of $\widetilde{\textbf{F}}$. Note that $h\circ \widetilde{\textbf{F}}= \textbf{F}\circ h$ and $K(\widetilde{\textbf{F}})$ contains the postcritical set of $\widetilde{\textbf{F}}$. By Theorem~\ref{thm:Thurston}, there is a qc map $\lambda_0$ such that $\lambda_0=h$ outside $\mathcal{B}':=\bigcup_{k=0}^\infty \widetilde{f}^{-k}(\mathcal{B})$ and such that $\bar{\partial } \lambda_0=0$ a.e. on $\{z: \widetilde{f}^n(z)\notin A, \forall n\ge 0\}$. In particular, $\bar{\partial} \lambda_0=0$ a.e. on $K(\widetilde{\textbf{F}})$. Since $\mathcal{B}'$ is a countable union of Jordan disks with pairwise disjoint closures and it is disjoint from $K(\widetilde{\textbf{F}})$, $\lambda_0$ is homotopic to $h$ rel $K(\widetilde{\textbf{F}})$. 
Therefore, there is a sequence of qc maps $\lambda_k:\bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})\to \bigcup_{\mathbf{v}} h(Y_{N_0}(\mathbf{v}))$, $k=1,2,\ldots$, all homotopic to $h$ rel $K(\widetilde{\textbf{F}})$, such that $\textbf{F}\circ \lambda_{k+1}=\lambda_k \circ \widetilde{\textbf{F}}$ and $\lambda_k=\lambda_0$ on $W:=\bigcup_{\mathbf{v}} (Y_{N_0}(\mathbf{v})\setminus Y_{N_0+r(\mathbf{v})}(\mathbf{v}))$ for $k=0,1,\ldots$. Since $\textbf{F}$ is holomorphic, $\widetilde{\textbf{F}}$ is holomorphic outside $A$, and each orbit of $\widetilde{\textbf{F}}$ passes through $A$ at most once, the maximal dilatation of $\lambda_k$ is uniformly bounded. Since $\lambda_k(z)$ eventually stabilizes for each $z$ in the domain of $\widetilde{\textbf{F}}$, $\lambda_k$ converges to a qc map $H$. Moreover, $\bar{\partial }H =0$ holds a.e. on $K(\widetilde{\textbf{F}})$ since $\bar{\partial }\lambda_k=0$ holds there for each $k$. \end{proof} In order to show that $\mathcal{C}(f_0)$ is connected, we shall make use of the following result. \begin{Theorem}[Branner-Hubbard-Lavaurs]\label{thm:bhl} The set $\mathcal{C}(T)$ is a connected compact set. \end{Theorem} \begin{proof} The proofs in the literature were stated for the case $\mathcal{C}(d)$. For $d=2$ this is due to Douady-Hubbard (\cite{DH1}), the case $d=3$ was proved by Branner-Hubbard (\cite{BH}), and for all $d\ge 3$ this was proved by Lavaurs (\cite{La}). The proof of Lavaurs generalizes to the case of $\mathcal{C}(T)$ in a straightforward way. See also \cite{DP} for a stronger result with a different proof. \end{proof} \begin{proof}[Proof of the Main Theorem (connectivity)] We shall show that if $E$ is a non-empty open and closed subset of $\mathcal{C}(f_0)$, then $\chi(E)$ is a closed subset of $\mathcal{C}(T)$. Together with the connectivity of $\mathcal{C}(T)$ and the bijectivity of the map $\chi$, this implies that $\mathcal{C}(f_0)$ is connected. 
Suppose that $g_n$ is a sequence in $\chi(E)$ and $g_n\to g$ in $\mathcal{C}(T)$. We need to show that $g\in \chi(E)$. Let $f_n=\chi^{-1}(g_n)\in E$. Since $E$ is compact, passing to a subsequence we may assume that $f_n\to f\in E$. As in \cite[Section 7]{DH2}, we may choose hybrid conjugacies $h_n$ between $\lambda(f_0)$-renormalization of $f_n$ and $g_n$ so that the maximal dilatation of $h_n$ is uniformly bounded. Passing to a further subsequence, we see that the $\lambda(f_0)$-renormalization of $f$ is qc conjugate to $g$ respecting the external markings. Thus $f$ is conjugate to $\chi^{-1}(g)$ via a qc map $h:\mathbb C\to \mathbb C$ which is conformal outside the filled Julia set of $f$ and satisfies $h(z)=z+o(1)$ near infinity. The Beltrami path connecting $f$ and $\chi^{-1}(g)$ is contained in $E$ and thus $g=\chi(\chi^{-1}(g))\in \chi(E)$. \end{proof} \end{document}
\begin{document} \title{Einstein-Podolsky-Rosen steering and the steering ellipsoid} \author{Sania Jevtic} \affiliation{Mathematical Sciences, John Crank 501, Brunel University, Uxbridge UB8 3PH, United Kingdom} \author{Michael J. W. Hall} \affiliation{Centre for Quantum Computation and Communication Technology (Australian Research Council), Centre for Quantum Dynamics, Griffith University, Brisbane, QLD 4111, Australia} \author{ Malcolm R. Anderson} \affiliation{Mathematical and Computing Sciences, Universiti Brunei Darussalam, Gadong BE 1410, Negara Brunei Darussalam} \author{Marcin Zwierz} \affiliation{Faculty of Physics, University of Warsaw, ul. Pasteura 5, PL-02-093 Warszawa, Poland} \affiliation{Centre for Quantum Computation and Communication Technology (Australian Research Council), Centre for Quantum Dynamics, Griffith University, Brisbane, QLD 4111, Australia} \author{Howard M. Wiseman} \affiliation{Centre for Quantum Computation and Communication Technology (Australian Research Council), Centre for Quantum Dynamics, Griffith University, Brisbane, QLD 4111, Australia} \begin{abstract} The question of which two-qubit states are steerable (i.e. permit a demonstration of EPR-steering) remains open. Here, a strong necessary condition is obtained for the steerability of two-qubit states having maximally-mixed reduced states, via the construction of local hidden state models. It is conjectured that this condition is in fact sufficient. Two provably sufficient conditions are also obtained, via asymmetric EPR-steering inequalities. Our work uses ideas from the quantum steering ellipsoid formalism, and explicitly evaluates the integral of $\boldsymbol n/(\boldsymbol n^\intercal A\boldsymbol n)^2$ over arbitrary unit hemispheres for any positive matrix $A$. \end{abstract} \maketitle \section{Introduction} Quantum systems can be correlated in ways that supersede classical descriptions. However, there are degrees of non-classicality for quantum correlations. 
For simplicity, we consider only bipartite correlations, with the two, spatially separated, parties being named Alice and Bob as usual. At the weaker end of the spectrum are quantum systems whose states cannot be expressed as a mixture of product-states of the constituents. These are called non-separable or entangled states. The product-states appearing in such a mixture comprise a local hidden state (LHS) model for any measurements undertaken by Alice and Bob. At the strongest end of the spectrum are quantum systems whose measurement correlations can violate a Bell inequality~\cite{bell64,chsh}, hence demonstrating (modulo loopholes~\cite{larsson14}) the violation of local causality~\cite{bell76}. This phenomenon---commonly known as Bell-nonlocality~\cite{wiseman14}---is the only way for two spatially separated parties to verify the existence of entanglement if either of them, or their detectors, cannot be trusted~\cite{terhal00}. We say that a bipartite state is Bell-local if and only if there is a local hidden variable (LHV) model for any measurements Alice and Bob perform. Here the `variables' are not restricted to be quantum states, hence the distinction between non-separability and Bell-nonlocality. In between these types of non-classical correlations lies EPR-steering. The name is inspired by the seminal paper of Einstein, Podolsky, and Rosen (EPR)~\cite{einstein35}, and the follow-up by Schr\"odinger~\cite{schrsteer35}, which coined the term ``steering'' for the phenomenon EPR had noticed. Although introduced eighty years ago, as this Special Issue celebrates, the notion of EPR-steering was only formalized eight years ago, by one of us and co-workers~\cite{wiseman07,jones07}. This formalization was that EPR-steering is the only way to verify the existence of entanglement if one of the parties --- conventionally Alice~\cite{wiseman07,jones07,cavalcanti09} --- or her detectors, cannot be trusted. 
We say that a bipartite state is EPR-steerable if and only if it allows a demonstration of EPR-steering. A state is not EPR-steerable if and only if there exists a hybrid LHV--LHS model explaining the Alice--Bob correlations. Since in this paper we are concerned with steering, when we refer to an LHS model we mean an LHS model for Bob only; it is implicit that Alice can have a completely general LHV model. The above three notions of non-locality for quantum states coincide for pure states: any non-product pure state is non-separable, EPR-steerable, and Bell-nonlocal. However, for mixed states, the interplay of quantum and classical correlations produces a far richer structure. For mixed states the logical hierarchy of the three concepts leads to a hierarchy for the bipartite states: the set of separable states is a strict subset of the set of non-EPR-steerable states, which is a strict subset of the set of Bell-local states~\cite{wiseman07,jones07}. Although the EPR-steerable set has been completely determined for certain classes of highly symmetric states (at least for the case where Alice and Bob perform projective measurements)~\cite{wiseman07,jones07}, until now very little was known about what types of states are steerable even for the simplest case of two qubits. In this simplest case, the phenomenon of steering in a more general sense --- i.e. within what set can Alice steer Bob's state by measurements on her system --- has been studied extensively using the so-called steering ellipsoid formalism \cite{verstraete2002,Shi,QSE}. However, no relation between the steering ellipsoid and EPR-steerability has been determined. In this manuscript, we investigate EPR-steerability of the class of two-qubit states whose reduced states are maximally mixed, the so-called T-states~\cite{Tstates}. We use the steering ellipsoid formalism to develop a deterministic LHS model for projective measurements on these states and we conjecture that this model is optimal.
Furthermore we obtain two sufficient conditions for T-states to be EPR-steerable, via suitable EPR-steering inequalities~\cite{cavalcanti09, jones11} (including a new asymmetric steering inequality for the spin covariance matrix). These sufficient conditions touch the necessary condition in some regions of the space of T-states, and everywhere else the gap between them is quite small. The paper is organised as follows. In section 2 we discuss in detail the three notions of non-locality, namely Bell-nonlocality, EPR-steerability and non-separability. Section 3 introduces the quantum steering ellipsoid formalism for a two-qubit state, and in section 4 we use the steering ellipsoid to develop a deterministic LHS model for projective measurements on T-states. In section 5, two asymmetric steering inequalities for arbitrary two-qubit states are derived. Finally in section 6 we conclude and discuss further work. \section{\label{sec_LHSmodel}EPR-steering and local hidden state models} Two separated observers, Alice and Bob, can use a shared quantum state to generate statistical correlations between local measurement outcomes. Each observer carries out a local measurement, labelled by $A$ and $B$ respectively, to obtain corresponding outcomes labelled by $a$ and $b$. The measurement correlations are described by some set of joint probability distributions, $\{p(a,b|A,B)\}$, with $A$ and $B$ ranging over the available measurements. The type of state shared by Alice and Bob may be classified via the properties of these joint distributions, for all possible measurement settings $A$ and $B$. The correlations of a {\it Bell-local} state have a local hidden variable (LHV) model \cite{bell64, chsh}, \begin{equation} \label{lhv} p(a,b|A,B) = \sum_\lambda P(\lambda) \,p(a|A,\lambda)\,p(b|B,\lambda) , \end{equation} for some `hidden' random variable $\lambda$ with probability distribution $P(\lambda)$. 
Hence, the measured correlations may be understood as arising from ignorance of the value of $\lambda$, where the latter locally determines the statistics of the outcomes $a$ and $b$ and is independent of the choice of $A$ and $B$. Conversely, a state is defined to be Bell{\it-nonlocal} if it has no LHV model. Such states allow, for example, the secure generation of a cryptographic key between Alice and Bob without trust in their devices~\cite{qkd,qkd2}. In this paper, we are concerned with whether the state is {\em steerable}; that is, whether it allows for correlations that demonstrate EPR-steering. As discussed in the introduction, EPR-steering by Alice is demonstrated if it is {\em not} the case that the correlations can be described by a hybrid LHV--LHS model, wherein, \begin{equation} \label{lhs} p(a,b|A,B) = \sum_\lambda P(\lambda) \,p(a|A,\lambda)\,p_Q(b|B,\lambda), \end{equation} where the local distributions $p_Q(b|B,\lambda)$ correspond to measurements on local quantum states $\rho_B(\lambda)$, i.e., \[ p_Q(b|B,\lambda) = {\rm tr}[\rho_B(\lambda) F^B_b ]. \] Here $\{ F^B_b\}$ denotes the positive operator valued measure (POVM) corresponding to measurement $B$. The state is said to be {\it steerable} by Alice if there is {\it no} such model. The roles of Alice and Bob may also be reversed in the above, to define steerability by Bob. Comparing Eqs.~(\ref{lhv}) and (\ref{lhs}), it is seen that all nonsteerable states are Bell-local. Hence, all Bell-nonlocal states are steerable, by both Alice and Bob. In fact, the class of steerable states is strictly larger \cite{wiseman07}. Moreover, while not as powerful as Bell-nonlocality in general, steerability is more robust to detection inefficiencies \cite{ineff}, and also enables the use of untrusted devices in quantum key distribution, albeit only on one side \cite{steeringqkd}. 
By a similar argument, a separable quantum state shared by Alice and Bob, $\rho=\sum_\lambda p(\lambda) \rho_A(\lambda) \otimes \rho_B(\lambda)$, is both Bell-local and nonsteerable. Moreover, the set of separable states is strictly smaller than the set of nonsteerable states~\cite{wiseman07}. It is important that EPR-steerability of a quantum state not be confused with merely the dependence of the reduced state of one observer on the choice of measurement made by another, which can occur even for separable states. The term `steering' has been used with reference to this phenomenon, in particular for the concept of `steering ellipsoid', which we will use in our analysis. EPR-steering, as defined above, is a special case of this phenomenon, and is only possible for a subset of nonseparable states. We are interested in the EPR-steerability of states for all possible \textit{projective} measurements. If Alice is doing the steering, then it is sufficient for Bob's measurements to comprise some tomographically complete set of projectors. It is straightforward to show in this case that the condition for Bob to have an LHS model, Eq.~(\ref{lhs}), reduces to the existence of a representation of the form \begin{subequations} \begin{align} \label{reduced} \hspace{-0.3cm} p_E\, \rho^E_B&:= {\rm tr}_A[\rho \,E\otimes \mathbbm{1} ] = \sum_\lambda P(\lambda)\, p(1|E,\lambda) \,\rho_B(\lambda) ,\\ \label{reduced_p} p_E &= \tr [ \rho\, E\otimes \mathbbm{1} ] = \sum_\lambda P(\lambda) p(1|E,\lambda). \end{align} \end{subequations} Here $E$ is any projector that can be measured by Alice; $p_E$ is the probability of result `$E=1$' and $p(1|E,\lambda)$ is the corresponding probability given $\lambda$; $\rho^E_B$ is the reduced state of Bob's component corresponding to this result; and ${\rm tr}_A[\cdot]$ denotes the partial trace over Alice's component. Note that this form, and hence EPR-steerability by Alice, is invariant under local unitary transformations on Bob's components.
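As a concrete illustration of Eq.~(\ref{reduced}), the following sketch (a NumPy example of our own; the particular product terms and measurement direction are arbitrary illustrative choices, not taken from the paper) builds a separable two-qubit state and checks that the trivial LHS model $p(1|E,\lambda)={\rm tr}[\rho_A(\lambda)E]$, with hidden states $\rho_B(\lambda)$, reproduces ${\rm tr}_A[\rho\, E\otimes\mathbbm{1}]$:

```python
import numpy as np

I2 = np.eye(2)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(e):
    """Projector E = (1 + e.sigma)/2; also the pure state with Bloch vector e."""
    return 0.5 * (I2 + e[0] * SX + e[1] * SY + e[2] * SZ)

def ptrace_A(M):
    """Partial trace over the first qubit of a 4x4 matrix."""
    return M.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# A separable state rho = sum_l P(l) rho_A(l) x rho_B(l) with two product terms
P = [0.3, 0.7]
nA = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
nB = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, -1.0])]
rho = sum(p * np.kron(proj(a), proj(b)) for p, a, b in zip(P, nA, nB))

# Trivial LHS model: p(1|E,l) = tr[rho_A(l) E], hidden states rho_B(l)
e = np.array([1.0, -2.0, 0.5]); e /= np.linalg.norm(e)
E = proj(e)
model = sum(p * np.trace(proj(a) @ E) * proj(b) for p, a, b in zip(P, nA, nB))
direct = ptrace_A(rho @ np.kron(E, I2))   # p_E rho_B^E = tr_A[rho (E x 1)]
assert np.allclose(model, direct)
```

Any convex decomposition into product states yields such a model, which is why separable states can never demonstrate EPR-steering.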
Determining EPR-steerability in this case, where Alice is permitted to measure any Hermitian observable, is surprisingly difficult, with the answer only known for certain special cases such as Werner states \cite{wiseman07}. However, in this paper we give a strong necessary condition for the EPR-steerability of a large class of two-qubit states, which we conjecture is also sufficient. This condition is obtained via the construction of a suitable LHS model, which is in turn motivated by properties of the `quantum steering ellipsoid'~\cite{verstraete2002, QSE}. Properties of this ellipsoid are therefore reviewed in the following section. \section{\label{sec_QSE}The quantum steering ellipsoid} An arbitrary two-qubit state may be written in the standard form \begin{align*} \rho = \frac{1}{4}\left( \mathbbm{1} \otimes \mathbbm{1} +\boldsymbol{a}\cdot \boldsymbol{\sigma}\otimes \mathbbm{1} + \mathbbm{1} \otimes \boldsymbol{b}\cdot\boldsymbol{\sigma} + \sum_{j,k} T_{jk} \,\sigma_j\otimes\sigma_k\right). \end{align*} Here $(\sigma_1,\sigma_2,\sigma_3)\equiv\boldsymbol{\sigma}$ denote the Pauli spin operators, and $$ a_j=\tr [\rho \,\sigma_j\otimes \mathbbm{1} ],~ b_j=\tr [\rho\, \mathbbm{1} \otimes\sigma_j],~T_{jk}=\tr [\rho \,\sigma_j\otimes\sigma_k] . $$ Thus, $\boldsymbol{a}$ and $\boldsymbol{b}$ are the Bloch vectors for Alice and Bob's qubits, and $T$ is the spin correlation matrix. If Alice makes a projective measurement on her qubit, and obtains an outcome corresponding to projector $E$, Bob's reduced state follows from Eq.~(\ref{reduced}) as \[ \rho^E_B =\frac{\tr_A [\rho\, E \otimes \mathbbm{1} ] }{\tr[\rho \,E \otimes \mathbbm{1} ] }. \] We will also refer to $\rho^E_B$ as Bob's `steered state'. To determine Bob's possible steered states, note that the projector $E$ may be expanded in the Pauli basis as $E = \frac12 \left( \mathbbm{1} + \boldsymbol{e}\cdot\boldsymbol{\sigma}\right)$, with $|\boldsymbol e|= 1$. 
This yields the corresponding steered state $ \rho^E_B = \frac 12\left( \mathbbm{1} + \boldsymbol{b}(\boldsymbol e)\cdot\boldsymbol{\sigma} \right)$, with associated Bloch vector \begin{equation} \label{qsteerB} \boldsymbol{b}(\boldsymbol e) = \frac{1}{2p_{\boldsymbol e}} (\boldsymbol b + T^\intercal\boldsymbol{e}), \end{equation} where $p_{\boldsymbol e}$ is the associated probability of result `$E=1$', \begin{equation} \label{pe} p_{\boldsymbol e} := \tr[\rho (E \otimes \mathbbm{1} )] = \frac 12 (1+\boldsymbol a\cdot\boldsymbol e), \end{equation} called $p_E$ previously. In what follows we will refer to the vector $\boldsymbol e$ rather than its corresponding operator $E$. The surface of the steering ellipsoid is defined to be the set of steered Bloch vectors, $\{ \boldsymbol{b}(\boldsymbol e): |\boldsymbol e|=1\}$, and in Ref.~\cite{QSE} it is shown that interior points can be obtained from positive operator-valued measures (POVMs). The ellipsoid has centre \begin{equation} \label{QSE_centre} \boldsymbol c = \frac{\boldsymbol b - T^\intercal\boldsymbol a}{1-a^2} , \end{equation} and the semiaxes $s_1,s_2, s_3$ are the square roots of the eigenvalues of the matrix \begin{equation} \label{QSE_mat} Q = \frac{1}{1-a^2}\left(T^\intercal - \boldsymbol b \boldsymbol a^\intercal \right)\left( \mathbbm{1} + \frac{\boldsymbol a \boldsymbol a^\intercal}{1-a^2} \right)\left(T - \boldsymbol a \boldsymbol b ^\intercal \right) . \end{equation} The eigenvectors of $Q$ give the orientation of the ellipsoid around its centre \cite{QSE}. Thus, the general equation of the steering ellipsoid surface is $\boldsymbol x^\intercal Q^{-1}\boldsymbol x=1$ with $\boldsymbol x \in \mathbb{R}^3$ being the displacement vector from the centre $\boldsymbol{c}$. Entangled states typically have large steering ellipsoids---the largest possible being the Bloch ball, which is generated by every pure entangled state \cite{QSE}.
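These formulas are easy to check numerically. The following sketch (assuming NumPy; the isotropic test state and measurement direction are illustrative choices of ours) computes $\boldsymbol a$, $\boldsymbol b$ and $T$ from Pauli traces, evaluates the centre and ellipsoid matrix of Eqs.~(\ref{QSE_centre}) and (\ref{QSE_mat}), and verifies that a steered Bloch vector lies on the surface $\boldsymbol x^\intercal Q^{-1}\boldsymbol x = 1$:

```python
import numpy as np

I2 = np.eye(2)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def bloch_data(rho):
    """Bloch vectors a, b and spin correlation matrix T of a two-qubit state."""
    a = np.array([np.trace(rho @ np.kron(s, I2)).real for s in PAULIS])
    b = np.array([np.trace(rho @ np.kron(I2, s)).real for s in PAULIS])
    T = np.array([[np.trace(rho @ np.kron(sj, sk)).real for sk in PAULIS]
                  for sj in PAULIS])
    return a, b, T

def steering_ellipsoid(a, b, T):
    """Centre c and matrix Q of Bob's steering ellipsoid (Sec. 3 formulas)."""
    a2 = a @ a
    c = (b - T.T @ a) / (1 - a2)
    M = np.eye(3) + np.outer(a, a) / (1 - a2)
    Q = (T.T - np.outer(b, a)) @ M @ (T - np.outer(a, b)) / (1 - a2)
    return c, Q

# Illustrative state: w |Phi+><Phi+| + (1-w) 1/4, a Bell-diagonal T-state
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
w = 0.6
rho = w * np.outer(phi, phi.conj()) + (1 - w) * np.eye(4) / 4

a, b, T = bloch_data(rho)
c, Q = steering_ellipsoid(a, b, T)
assert np.allclose(a, 0) and np.allclose(b, 0)      # maximally mixed marginals
assert np.allclose(c, 0) and np.allclose(Q, T.T @ T)

# A steered Bloch vector b(e) = T^t e lies on the surface x^T Q^{-1} x = 1
e = np.array([1.0, 2.0, -1.0]); e /= np.linalg.norm(e)
x = T.T @ e - c
assert np.isclose(x @ np.linalg.solve(Q, x), 1.0)
```

For this state the asserts also confirm the T-state simplifications used below: $\boldsymbol c=\boldsymbol 0$ and $Q=T^\intercal T$.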
In contrast, the volume of the steering ellipsoid is strictly bounded for separable states. Indeed, a two-qubit state is separable if and only if its steering ellipsoid is contained within a tetrahedron contained within the Bloch sphere \cite{QSE}. Thus, the separability of two-qubit states has a beautiful geometric characterisation in terms of the quantum steering ellipsoid. No similar characterisation has been found for EPR-steerability, to date. However, for non-separable states, knowledge of the steering ellipsoid matrix $Q$, its centre $\boldsymbol c$, and Bob's Bloch vector $\boldsymbol b$ uniquely determines the shared state $\rho$ up to a local unitary transformation on Alice's system \cite{QSE}, \cite{Note1} and so is sufficient, in principle, to determine the EPR-steerability of $\rho$. In this paper we find a direct connection between EPR-steerability and the quantum steering ellipsoid, for the case that the Bloch vectors $\boldsymbol a$ and $\boldsymbol b$ vanish. \section{\label{sec_Tstate}Necessary condition for EPR-steerability of T-states} \subsection{T-states} Let $T=O_A \widetilde D O_B^\intercal$ be a singular value decomposition of the spin correlation matrix $T$, for some diagonal matrix $\widetilde D\geq 0$ and orthogonal matrices $O_A,O_B \in \mathrm{O}(3)$. Noting that any $O\in \mathrm{O}(3)$ is either a rotation or the product of a rotation with the parity matrix $-I$, it follows that $T$ can always be represented in the form $T=R_A D R_B^\intercal$, for proper rotations $R_A, R_B \in \mathrm{SO}(3)$, where the diagonal matrix $D$ may now have negative entries. The rotations $R_A$ and $R_B$ may be implemented by local unitary operations on the shared state $\rho$, amounting to a local basis change. Hence, all properties of a shared two-qubit state, including steerability properties in particular, can be formulated in a representation in which the spin correlation matrix has the diagonal form $T \equiv D = {\rm diag}[t_1, t_2,t_3]$. 
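The decomposition $T=R_A D R_B^\intercal$ with proper rotations can be made concrete with a short sketch (the helper name \texttt{proper\_svd} and the test matrix are our own illustrative choices): any reflections from the singular value decomposition are absorbed into the diagonal factor, which may then carry negative entries.

```python
import numpy as np

def proper_svd(T):
    """Factor T = R_A @ D @ R_B.T with R_A, R_B in SO(3); reflection signs
    from the SVD are moved into the diagonal D, which may become negative."""
    U, s, Vt = np.linalg.svd(T)
    dU = np.sign(np.linalg.det(U))
    dV = np.sign(np.linalg.det(Vt))
    RA = U @ np.diag([1.0, 1.0, dU])          # det RA = dU^2 = +1
    RB = Vt.T @ np.diag([1.0, 1.0, dV])       # det RB = dV^2 = +1
    D = np.diag([s[0], s[1], dU * dV * s[2]])
    return RA, D, RB

# Illustrative spin correlation matrix with det T < 0
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T = np.diag([0.7, -0.6, 0.5]) @ Rz
RA, D, RB = proper_svd(T)
assert np.allclose(RA @ D @ RB.T, T)
assert np.isclose(np.linalg.det(RA), 1) and np.isclose(np.linalg.det(RB), 1)
assert np.diag(D).min() < 0                   # sign carried by D, not R_A, R_B
```

Since $\det R_A=\det R_B=1$, the product of the diagonal entries of $D$ equals $\det T$, which is why entangled T-states (with $t_1t_2t_3<0$) necessarily pick up a negative entry.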
It follows that if the shared state $\rho$ has maximally-mixed reduced states with $\boldsymbol a=\boldsymbol b=\boldsymbol 0$, then it is completely described, up to local unitaries, by a diagonal $T$, i.e. one may consider \begin{align} \rho = \frac{1}{4} \left( \mathbbm{1} \otimes \mathbbm{1} + \sum_j t_j \sigma_j\otimes \sigma_j \right) \end{align} without loss of generality. Such states are called T-states~\cite{Tstates}. They are equivalent to mixtures of Bell states, and hence form a tetrahedron in the space parameterised by $(t_1,t_2,t_3)$ \cite{Tstates}. Entangled T-states necessarily have $t_1 t_2 t_3 < 0$, and the set of separable T-states forms an octahedron within the tetrahedron \cite{Tstates}. The T-state steering ellipsoid is centred at the origin, $\boldsymbol c = \boldsymbol 0$, and the ellipsoid matrix is simply $Q = T^\intercal T$, as follows from Eqs.~(\ref{QSE_centre}) and (\ref{QSE_mat}) with $\boldsymbol a= \boldsymbol b= \boldsymbol 0$. The semiaxes are $s_i = |t_i|$ for $i=1,2,3$, and are aligned with the $x,y,z$-axes of the Bloch sphere. Thus, the equation of the ellipsoid surface in spherical coordinates $(r,\theta,\phi)$ is $r=1/f(\theta,\phi)$, with \begin{align} f(\theta,\phi)^2 := \frac{\sin^{2}\theta\cos ^{2}\phi}{s_1^2} + \frac{\sin ^{2}\theta\sin ^{2}\phi}{s_2^2} +\frac{\cos^{2}\theta}{s_3^2}. \label{ell_eqn} \end{align} We find a remarkable connection between this equation and the EPR-steerability of T-states in the following subsection. \subsection{\label{sec_Tstate_results}Deterministic LHS models for T-states} Without loss of generality, consider measurement by Alice of Hermitian observables on her qubit. Such observables can be equivalently represented via projections, $E=\frac12 ( \mathbbm{1} +\boldsymbol e\cdot\boldsymbol\sigma)$, with $|\boldsymbol e|=1$.
The probability of result `$E=1$' and the corresponding steered Bloch vector are given by Eqs.~(\ref{qsteerB}) and (\ref{pe}) with $\boldsymbol a=\boldsymbol b=\boldsymbol 0$, i.e., \[ p_{\boldsymbol e}=1/2,~~~~~~\boldsymbol b(\boldsymbol e) = T^\intercal\boldsymbol e=T\boldsymbol e. \] Hence, letting $\boldsymbol n(\lambda)$ denote the Bloch vector corresponding to $\rho_B(\lambda)$ in Eq.~(\ref{reduced}), then from Eqs.~(\ref{reduced}) and \eqref{reduced_p}, it follows that there is an LHS model for Bob if and only if there is a representation of the form \[ \sum_\lambda P(\lambda)\,p(1|\boldsymbol e, \lambda) = \frac12, \!~~~\sum_\lambda P(\lambda)\,p(1|\boldsymbol e, \lambda)\,\boldsymbol n(\lambda) = \frac12 T\boldsymbol e,\] for all unit vectors $\boldsymbol e$. Noting further that $\boldsymbol n(\lambda)$ can always be represented as some mixture of unit vectors, corresponding to pure $\rho_B(\lambda)$, these conditions are equivalent to the existence of a representation of the form \begin{align} \label{peint} \int P(\boldsymbol n) \,p(1|\boldsymbol e,\boldsymbol n)\mathop{}\!\mathrm{d}^2\boldsymbol n &= \frac12,\\ \int P(\boldsymbol n) \,p(1|\boldsymbol e, \boldsymbol n)\, \boldsymbol n\mathop{}\!\mathrm{d}^2\boldsymbol n &= \frac12 T\boldsymbol e, \label{beint} \end{align} with integration over the Bloch sphere. Thus, the unit Bloch vector $\boldsymbol n$ labels both the local hidden state and the hidden variable. Given LHS models for Bob for any two T-states, having spin correlation matrices $T_0$ and $T_1$, it is trivial to construct an LHS model for the T-state corresponding to $T_q=(1-q)T_0+qT_1$, for any $0\leq q\leq 1$, via the convexity property of nonsteerable states \cite{cavalcanti09}. Our strategy is to find {\it deterministic} LHS models for some set of T-states, for which the result `$E=1$' is fully determined by knowledge of $\boldsymbol n$, i.e., $p(1|\boldsymbol e, \boldsymbol n)\in \{0,1\}$.
LHS models can then be constructed for all convex combinations of T-states in this set. To find deterministic LHS models, we are guided by the fact that the steered Bloch vectors $\boldsymbol b(\boldsymbol e)=T\boldsymbol e$ are precisely those vectors that generate the surface of the quantum steering ellipsoid for the T-state \cite{QSE}. We make the ansatz that $P(\boldsymbol n)$ is proportional to some power of the function $f(\theta,\phi)$ in Eq.~(\ref{ell_eqn}) that defines this surface, i.e., \begin{equation} \label{pform} P(\boldsymbol n) = N_T \left[ f(\theta,\phi)\right]^m \equiv N_T\,\left[\boldsymbol n^\intercal T^{-2}\boldsymbol n\right]^{m/2} \end{equation} for $\boldsymbol n=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$, where $N_T$ is a normalisation constant. Further, denoting the region of the Bloch sphere for which $p(1|\boldsymbol e,\boldsymbol n)=1$ by $\mathcal{R}[\boldsymbol e]$, the condition in Eq.~(\ref{peint}) becomes $\int_{\mathcal{R}[\boldsymbol e]} P(\boldsymbol n) \mathop{}\!\mathrm{d}^2\boldsymbol n = \frac12$. We note this is automatically satisfied if $\mathcal{R}[\boldsymbol e]$ is a hemisphere, as a consequence of the symmetry $P(\boldsymbol n)=P(-\boldsymbol n)$ for the above form of $P(\boldsymbol n)$. Hence, under the assumptions that (i) $P(\boldsymbol n)$ is determined by the steering ellipsoid as per Eq.~(\ref{pform}), and (ii) $\mathcal{R}[\boldsymbol e]$ is a hemisphere for each unit vector $\boldsymbol e$, the only remaining constraint to be satisfied by a deterministic LHS model for a T-state is Eq.~(\ref{beint}), i.e., \begin{equation} \label{con} N_T \int_{\mathcal{R}[\boldsymbol e]} \left[ \boldsymbol n^\intercal T^{-2} \boldsymbol n\right]^{m/2}\,\boldsymbol n \mathop{}\!\mathrm{d}^2\boldsymbol n = \frac12 T\boldsymbol e, \end{equation} for some suitable mapping $\boldsymbol e\rightarrow \mathcal{R}[\boldsymbol e]$.
Extensive numerical testing, with different values of the exponent $m$, shows that this constraint can be satisfied by the choices \begin{equation} \label{re} m=-4,~~~~~~~~\mathcal{R}[\boldsymbol e]= \{\boldsymbol n: \boldsymbol n\cdot T^{-1} \boldsymbol e\geq 0\}, \end{equation} for a two-parameter family of T-states. Assuming the numerical results are correct, it is not difficult to show, using infinitesimal rotations of $\boldsymbol e$ about the $z$-axis, that this family corresponds to those T-states that satisfy \begin{equation} \label{surface} 2\pi N_T |\det T| = 1. \end{equation} Fortunately, we have been able to confirm these results analytically by explicitly evaluating the integral in Eq.~(\ref{con}) for $m=-4$ (see Appendix A). An explicit form for the corresponding normalisation constant $N_T$ is also given in Appendix A, and it is further shown that the family of T-states satisfying Eq.~(\ref{surface}) is equivalently defined by the condition \begin{equation} \label{surface2} \int \sqrt{\boldsymbol n^\intercal T^2 \,\boldsymbol n} \mathop{}\!\mathrm{d}^2\boldsymbol n = 2\pi. \end{equation} This may be interpreted geometrically in terms of the harmonic mean radius of the `inverse' ellipsoid $\boldsymbol x^\intercal\, T^2\boldsymbol x=1$ being equal to $2$. \subsection{Necessary EPR-steerability condition} Equation (\ref{surface}) defines a surface in the space of possible $T$ matrices, plotted in Fig.~1(a) as a function of the semiaxes $s_1, s_2$ and $s_3$. As a consequence of the convexity of nonsteerable states (see above), all T-states corresponding to the region defined by this surface and the positive octant have local hidden state models for Bob. Also shown is the boundary of the separable T-states ($s_1+s_2+s_3\leq 1$ \cite{Tstates}), in red, which is clearly a strict subset of the nonsteerable T-states. The green plane corresponds to the sufficient condition $s_1+s_2+s_3 > \frac{3}{2}$ for EPR-steerable states, derived in Sec.~5 below.
It follows that a necessary condition for a T-state to be EPR-steerable by Alice is that it corresponds to a point above the surface shown in Fig.~1(a). Note that this condition is in fact symmetric between Alice and Bob, since their steering ellipsoids are the same for T-states. Because of the elegant relation between our LHS model and the steering ellipsoid, and other evidence given below, we conjecture that this condition is also {\it sufficient} for EPR-steerability. \begin{figure} \caption{Correlation bounds for T-states, with $s_i = |t_i|$. \textbf{Top figure (a):} the surface defined by Eq.~(\ref{surface}), below which T-states admit an LHS model for Bob; the red surface bounds the separable T-states ($s_1+s_2+s_3\leq 1$), and the green plane marks the sufficient condition $s_1+s_2+s_3=\frac32$ of Sec.~5. \textbf{Bottom figure (b):} the corresponding boundaries in the subspace $s_1=s_2$.} \label{All_Tstate_Bounds} \end{figure} \subsection{Special cases} When $|t_1| = |t_2|$ we can solve Eq.~(\ref{surface}) explicitly, because the normalisation constant $N_T$ simplifies. The corresponding equation of the $s_3$ semiaxis, in terms of $u := s_3/s_1=s_3/s_2$, is given by \begin{align} \label{deg} \hspace{-0.2cm}s_3= \left\{\begin{array}{cc} \left[1 + \frac{\mathrm{arctan}(\sqrt{u^{-2} - 1})}{u^2\sqrt{u^{-2} - 1}} \right] ^{-1} & u < 1, \\ \left[1 - \frac {\sqrt{1-u^{-2}}}{2(u^2-1)}\ln\frac{|1-\sqrt{1-u^{-2}}|}{1+\sqrt{1-u^{-2}}} \right] ^{-1} & u > 1, \end{array} \right. \end{align} and $s_3=\frac12$ for $u=1$. Fig.~1(b) displays this analytic EPR-steerable curve through the T-state subspace $|t_1|=|t_2| \Leftrightarrow s_1 = s_2$, showing more clearly the different correlation regions. The symmetric situation $s_1=s_2=s_3$ corresponds to Werner states. Our deterministic LHS model is for $s_1=s_2=s_3=1/2$ in this case, which is known to represent the EPR-steerable boundary for Werner states \cite{jones07}. Thus, our model is certainly optimal for this class of states. \section{Sufficient conditions for EPR-steerability}\label{sec_sufficient} In the previous section a strong necessary condition for the EPR-steerability of T-states was obtained, corresponding to the boundary defined in Eq.~(\ref{surface}) and depicted in Fig.~1.
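The closed form in Eq.~(\ref{deg}) can be cross-checked against the equivalent integral condition (\ref{surface2}): on the boundary, $\int\sqrt{\boldsymbol n^\intercal T^2\boldsymbol n}\,\mathrm{d}^2\boldsymbol n$ should equal $2\pi$. A NumPy sketch (the quadrature sizes and test values of $u$ are arbitrary choices of this illustration):

```python
import numpy as np

def s3_boundary(u):
    """Boundary semiaxis s3 for s1 = s2 = s3/u, from the closed form above."""
    if np.isclose(u, 1.0):
        return 0.5                                     # Werner-state point
    if u < 1:
        r = np.sqrt(u**-2 - 1)
        return 1.0 / (1.0 + np.arctan(r) / (u**2 * r))
    r = np.sqrt(1.0 - u**-2)
    return 1.0 / (1.0 - r / (2 * (u**2 - 1)) * np.log((1 - r) / (1 + r)))

def sphere_integral(s1, s2, s3, N=200):
    """Evaluate \\int sqrt(n^T T^2 n) d^2 n by Gauss-Legendre in cos(theta)
    and a trapezoidal sum in phi."""
    x, w = np.polynomial.legendre.leggauss(N)          # x = cos(theta)
    phi = np.arange(N) * 2 * np.pi / N
    C, P = np.meshgrid(x, phi, indexing='ij')
    sin2 = 1.0 - C**2
    f = np.sqrt(s1**2 * sin2 * np.cos(P)**2
                + s2**2 * sin2 * np.sin(P)**2 + s3**2 * C**2)
    return (w @ f.sum(axis=1)) * 2 * np.pi / N

# On the boundary surface the integral must equal 2*pi
for u in [0.4, 1.0, 2.5]:
    s3 = s3_boundary(u)
    assert np.isclose(sphere_integral(s3 / u, s3 / u, s3), 2 * np.pi,
                      rtol=1e-6)
```

The check passes on both the oblate ($u<1$) and prolate ($u>1$) branches, and recovers $s_3=\frac12$ at the Werner point $u=1$.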
While we have conjectured that this condition is also sufficient, it is not actually known if all T-states above this boundary are EPR-steerable. Here we give two sufficient general conditions for EPR-steerability, and apply them to T-states. These conditions are examples of EPR-steering inequalities, i.e., statistical correlation inequalities that must be satisfied by any LHS model for Bob \cite{cavalcanti09}. Thus, violation of such an inequality immediately implies that Alice and Bob must share an EPR-steerable resource. Our first condition is based on a new EPR-steering inequality for the spin covariance matrix, and the second on a known nonlinear EPR-steering inequality \cite{jones11}. Both EPR-steering inequalities are further of interest in that they are asymmetric under the interchange of Alice and Bob's roles. \subsection{Linear asymmetric EPR-steering inequality} Suppose Alice and Bob share a two-qubit state with spin covariance matrix $C$ given by \begin{equation} \label{cjk} C_{jk}:= \langle \sigma_j\otimes\sigma_k\rangle - \langle \sigma_j\otimes \mathbbm{1} \rangle\,\langle \mathbbm{1} \otimes\sigma_k\rangle = T_{jk}-a_jb_k, \end{equation} and that each can measure any Hermitian observable on their qubit. We show in Appendix~\ref{app:cov} that, if there is an LHS model for Bob, then the singular values $c_1$, $c_2$, $c_3$ of the spin covariance matrix must satisfy the linear EPR-steering inequality \begin{equation} \label{lin} c_1 + c_2 + c_3 \leq \frac{3}{2}\sqrt{1 - b^2}. \end{equation} From $C=T-\boldsymbol a\boldsymbol b^\intercal$, and using $\boldsymbol a=\boldsymbol b=\boldsymbol 0$ and $s_j=|t_j|$ for T-states, it follows immediately that one has the simple {\it sufficient} condition \begin{equation} \label{linsuff} s_1 +s_2+s_3 > \frac{3}{2} \end{equation} for the EPR-steerability of T-states (by either Alice or Bob). 
The boundary of T-states satisfying this condition is plotted in Figs.~1 (a) and~(b), showing that the condition is relatively strong. In particular, it is a tangent plane to the necessary condition at the point corresponding to Werner states (which we already knew to be a point on the true boundary of EPR-steerable states). However, in some parameter regions a stronger condition can be obtained, as per below. \subsection{Nonlinear asymmetric EPR-steering inequality} Suppose Alice and Bob share a two-qubit state as before, where Bob can measure the observables $ \mathbbm{1} \otimes\sigma_3$, $ \mathbbm{1} \otimes\sigma_\phi$ on his qubit, with $\sigma_\phi:=\sigma_1\cos \phi + \sigma_2\sin\phi$, for any $\phi\in[0,2\pi]$, and Alice can measure corresponding Hermitian observables $A_3\otimes \mathbbm{1} , A_\phi\otimes \mathbbm{1} $ on her qubit, with outcomes labelled by $\pm1$. It may then be shown that any LHS model for Bob must satisfy the EPR-steering inequality~\cite{jones11} \begin{eqnarray*}\label{non-linear_inequality} &&\frac{1}{\pi} \int_{-\pi/2}^{\pi/2} \langle A_\phi \otimes\sigma_\phi \rangle \, d\phi \\ &~&~~~~\leq \frac{2}{\pi} \left[p_{+} \sqrt{1 - \langle { \mathbbm{1} \otimes\sigma}_{3} \rangle_{+}^{2}} + p_{-} \sqrt{1 - \langle { \mathbbm{1} \otimes\sigma}_{3} \rangle_{-}^{2}}\right], \end{eqnarray*} where $p_{\pm}$ denotes the probability that Alice obtains result $A_3=\pm 1$, and $\langle { \mathbbm{1} \otimes\sigma}_{3} \rangle_{\pm}$ is Bob's corresponding conditional expectation value for $ \mathbbm{1} \otimes\sigma_3$ for this result. As per the first part of Sec.~4A, we may always choose a representation in which the spin correlation matrix $T$ is diagonal, i.e., $T={\rm diag}[t_1,t_2,t_3]$, without loss of generality. 
With the choices $A_3=\sigma_3$ and $A_\phi=\sigma_1 ({\rm sign}\, t_1)\cos\phi+ \sigma_2 ({\rm sign}\, t_2)\sin\phi$ in this representation, $p_\pm$ and $\langle { \mathbbm{1} \otimes\sigma}_{3} \rangle_{\pm}$ are given by $p_{\boldsymbol e}$ and the third component of $\boldsymbol b(\boldsymbol e)$ in Eqs.~(\ref{pe}) and (\ref{qsteerB}), respectively, with $\boldsymbol e=(0,0,\pm1)^\intercal$. Hence, the above inequality simplifies to \begin{align} \label{nlin} |t_1|+|t_2| \leq \frac{2}{\pi}&\left[ \sqrt{(1+a_3)^2-(t_3+b_3)^2}\right. \nonumber\\ &~~~~ \left.+ \sqrt{(1-a_3)^2-(t_3-b_3)^2}\,\right], \end{align} where $a_3$ and $b_3$ are the third components of Alice and Bob's Bloch vectors $\boldsymbol a$ and $\boldsymbol b$. For T-states, recalling that $s_i\equiv |t_i|$, the above inequality simplifies further, to the nonlinear inequality \[ f(s_1,s_2,s_3):= s_1+s_2 -\frac{4}{\pi}\sqrt{1-s_3^2} \leq 0. \] Hence, since similar inequalities can be obtained by permuting $s_1, s_2,s_3$, we have the {\it sufficient} condition \begin{equation} \label{nonlinsuff} \max\{ f(s_1,s_2,s_3), f(s_2,s_3,s_1), f(s_3,s_1,s_2)\} > 0 \end{equation} for the EPR-steerability of T-states. The boundary of T-states satisfying this condition is plotted in Fig.~1(b) for the case $s_1=s_2$. It is seen to be stronger than the linear condition in Eq.~(\ref{linsuff}) if one semiaxis is sufficiently large. The region below both sufficient conditions is never far above the smooth curve of our necessary condition, supporting our conjecture that the latter is the true boundary. \section{Recapitulation and future directions} In this paper we have considered steering for the set of two-qubit states with maximally mixed marginals (`T-states'), where Alice is allowed to make arbitrary projective measurements on her qubit.
We have constructed an LHV--LHS model (LHV for Alice, LHS for Bob), which describes measurable quantum correlations for all separable, and a large portion of non-separable, T-states. That is, this model reproduces the steering scenario, by which Alice's measurement collapses Bob's state to a corresponding point on the surface of the quantum steering ellipsoid. Our model is constructed using the steering ellipsoid, and coincides with the optimal LHV--LHS model for the case of Werner states. Furthermore, only a small (and sometimes vanishing) gap remains between the set of T-states that are provably non-steerable by our LHV--LHS model, and the set that are provably steerable by the two steering inequalities that we derive. As such, we conjecture that this LHV--LHS model is in fact optimal for T-states. Proving this, however, remains an open question. A natural extension of this work is to consider LHV--LHS models for arbitrary two-qubit states. How can knowledge of their steering ellipsoids be incorporated into such LHV--LHS models? Investigations in this direction have already begun, but the situation is far more complex when Alice and Bob's Bloch vectors have nonzero magnitude and the phenomenon of ``one-way steering'' may arise \cite{1waysteer}. Finally, our LHV--LHS models apply to the case where Alice is restricted to measurements of Hermitian observables. It would be of great interest to generalize these to arbitrary POVM measurements. However, we note that this is a very difficult problem even for the case of two-qubit Werner states \cite{werner14}. Nevertheless, the steering ellipsoid is a depiction of all collapsed states, including those arising from POVMs (they give the interior points of the ellipsoid), and perhaps this can provide some intuition for how to proceed with this generalisation. \begin{acknowledgments} SJ would like to thank David Jennings for his early contributions to this project. SJ is funded by EPSRC grant EP/K022512/1.
This work was supported by the Australian Research Council Centre of Excellence CE110001027 and the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n$^{\circ}$ [316244]. \end{acknowledgments} \appendix \section{Details of the deterministic LHS model} The family of T-states described by our deterministic LHS model in Sec.~4B corresponds to the surface defined by either of Eqs.~(\ref{surface}) and (\ref{surface2}). This is a consequence of the following theorem, proved further below. \begin{thm} For any full-rank diagonal matrix $T$ and nonzero vector ${\boldsymbol v}$ one has \begin{align} \int_{\boldsymbol n\cdot\boldsymbol v\geq 0} \frac{\boldsymbol n\,\mathop{}\!\mathrm{d}^2\boldsymbol n}{(\boldsymbol n^\intercal T^{-2} \boldsymbol n)^{2}} = \frac{\pi |\!\det T|\, T^2 \boldsymbol v}{|T\boldsymbol v|}. \end{align} \end{thm} Note that substitution of Eq.~(\ref{re}) into constraint~(\ref{con}) immediately yields Eq.~(\ref{surface}) via the theorem (with $\boldsymbol v=T^{-1}\boldsymbol e$). Further, taking the dot product of the integral in the theorem with $\boldsymbol v$, multiplying by $N_T$, and integrating $\boldsymbol v$ over the unit sphere, yields (reversing the order of integration) \begin{align*} \int \mathop{}\!\mathrm{d}^2\boldsymbol n\,P(\boldsymbol n)\int_{\boldsymbol n\cdot\boldsymbol v\geq 0}\mathop{}\!\mathrm{d}^2\boldsymbol v\, \boldsymbol v\cdot\boldsymbol n= \pi, \end{align*} whereas carrying out the same operations on the righthand side of the theorem yields $\pi N_T|\det T|\int \sqrt{\boldsymbol v^\intercal T^2\boldsymbol v}\mathop{}\!\mathrm{d}^2\boldsymbol v$. Equating these immediately implies the equivalence of Eqs.~(\ref{surface}) and (\ref{surface2}) as desired. An explicit analytic formula for the normalisation constant $N_T$ is given at the end of this appendix. 
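Before the proof, the theorem can be sanity-checked numerically. The sketch below (NumPy-only; the quadrature scheme, the frame construction, and the test matrix are choices of this illustration, not part of the proof) integrates over the hemisphere $\boldsymbol n\cdot\boldsymbol v\ge 0$ in spherical coordinates aligned with $\boldsymbol v$ and compares the result with the closed form $\pi|\!\det T|\,T^2\boldsymbol v/|T\boldsymbol v|$:

```python
import numpy as np

def hemisphere_integral(T, v, N=160):
    """Quadrature for \\int_{n.v>=0} n / (n^T T^{-2} n)^2 d^2 n, using
    spherical coordinates aligned with the unit vector v."""
    v = v / np.linalg.norm(v)
    a = np.array([1.0, 0, 0]) if abs(v[0]) < 0.9 else np.array([0, 1.0, 0])
    e1 = np.cross(v, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(v, e1)                      # {e1, e2, v} orthonormal
    x, w = np.polynomial.legendre.leggauss(N)
    c, wc = 0.5 * (x + 1), 0.5 * w            # c = n.v ranges over [0, 1]
    phi = np.arange(N) * 2 * np.pi / N
    Tinv2 = np.linalg.inv(T @ T)
    total = np.zeros(3)
    for ci, wi in zip(c, wc):
        s = np.sqrt(1 - ci**2)
        n = ci * v + s * (np.outer(np.cos(phi), e1)
                          + np.outer(np.sin(phi), e2))
        f = 1.0 / np.einsum('ij,jk,ik->i', n, Tinv2, n)**2
        total += wi * (f[:, None] * n).sum(axis=0) * 2 * np.pi / N
    return total

T = np.diag([1.0, 0.8, -0.6])                 # full rank, det T < 0
v = np.array([0.3, -1.2, 0.5])
lhs = hemisphere_integral(T, v)
rhs = np.pi * abs(np.linalg.det(T)) * (T @ T @ v) / np.linalg.norm(T @ v)
assert np.allclose(lhs, rhs, rtol=1e-5)
```

For $T=\mathbbm{1}$ the routine reproduces the elementary result $\int_{\boldsymbol n\cdot\boldsymbol v\geq 0}\boldsymbol n\,\mathrm{d}^2\boldsymbol n=\pi\hat{\boldsymbol v}$, and the right-hand side is invariant under rescaling of $\boldsymbol v$, as the theorem requires.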
\begin{proof} First, define $Q = T^{-2} \in GL(3,\mathbb{R})$; that is, \begin{align} Q = \mbox{diag} (a,b,c) = \mbox{diag}(t_1^{-2}, t_2^{-2}, t_3^{-2}), \end{align} and \begin{align} \label{qv} \boldsymbol q (\boldsymbol v) := \int_{\boldsymbol n\cdot\boldsymbol v \geq 0} \frac{\boldsymbol n \mathop{}\!\mathrm{d}^2 \boldsymbol n}{(\boldsymbol n^\intercal Q \boldsymbol n)^{2}}. \end{align} Noting that $\boldsymbol v$ in the theorem may be taken to be a unit vector without loss of generality, we will parameterise the unit vectors $\boldsymbol n$ and $\boldsymbol v$ by \begin{align} \boldsymbol n &= (\sin \theta \cos \phi,\sin\theta\sin\phi,\cos\theta)^\intercal,\\ \boldsymbol v &= (\sin \alpha \cos \beta,\sin\alpha\sin\beta,\cos\alpha)^\intercal, \end{align} with $\theta, \alpha \in [0,\pi]$ and $\phi, \beta \in [0,2\pi)$. Thus, $\mathop{}\!\mathrm{d}^2 \boldsymbol n \equiv \sin\theta\mathop{}\!\mathrm{d}\theta\mathop{}\!\mathrm{d}\phi$. Further, without loss of generality, it will be assumed that $\boldsymbol{v}$ points into the northern hemisphere, so that $\cos \alpha\geq 0$. Then $\alpha\in \lbrack 0,\pi /2]$ and $\beta\in \lbrack 0,2\pi )$. The surface of integration is a hemisphere bounded by the great circle $\boldsymbol n\cdot \boldsymbol v=0$. In the simple case where $\boldsymbol{v} =(0,0,1)^\intercal$, the boundary curve has the parametric form $(x,y,z)=(\cos \gamma,\sin \gamma,0)$ for $\gamma\in [ 0,2\pi )$. Hence, the boundary curve in the generic case can be constructed by applying the orthogonal operator $R$, which rotates $(0,0,1)^\intercal$ to $\boldsymbol v = (\sin \alpha \cos \beta,\sin\alpha\sin\beta,\cos\alpha)^\intercal$, to the vector $(\cos \gamma,\sin \gamma,0)^\intercal$.
That is, \begin{align*} R&=\left( \begin{array}{ccc} \cos \beta & -\sin \beta & 0 \\ \sin \beta & \cos \beta & 0 \\ 0 & 0 & 1 \end{array} \right) \left( \begin{array}{ccc} \cos \alpha & 0 & \sin \alpha \\ 0 & 1 & 0 \\ -\sin \alpha & 0 & \cos \alpha \end{array} \right) \\ &=\left( \allowbreak \begin{array}{ccc} \cos \alpha\cos \beta & -\sin \beta & \sin \alpha\cos \beta \\ \cos \alpha\sin \beta & \cos \beta &\sin \alpha \sin \beta \\ -\sin \alpha& 0 & \cos \alpha \end{array} \right), \end{align*} and the boundary curve has the form \begin{align*} \left( \begin{array}{c} x \\ y \\ z \end{array} \right) \!=\!R\left( \begin{array}{c} \cos \gamma \\ \sin \gamma \\ 0 \end{array} \right)\! =\!\left( \allowbreak \begin{array}{c} \cos \alpha\cos \beta\cos \gamma-\sin \beta\sin \gamma \\ \cos \alpha\sin \beta\cos \gamma+\cos \beta\sin \gamma \\ -\sin \alpha\cos \gamma \end{array} \right). \end{align*} For the purposes of integrating over the hemisphere, it is convenient to vary $\phi $ from $0$ to $2\pi $ and $\theta $ from $0$ to its value $\chi (\phi )$ on the boundary curve. From the above expression for the boundary, and using $z=\cos \theta $ and $y/x=\tan \phi $, it follows that $\cos \chi =-\sin \alpha\cos \gamma$ and $(\cos \alpha\sin \beta\cos \gamma+\cos \beta\sin \gamma)\cos \phi =(\cos \alpha\cos \beta\cos \gamma-\sin \beta\sin \gamma)\sin \phi$. The last equation can be rearranged to read $\cos \alpha\sin (\phi -\beta)\cos \gamma=\cos (\phi -\beta)\sin \gamma$, and after squaring both sides this equation solves to give \[ \cos \gamma=\pm \frac{\cos (\phi -\beta)}{[\cos ^{2}(\phi -\beta)+\cos ^{2}\alpha\sin ^{2}(\phi -\beta)]^{1/2}}. \] Now, $\chi $ assumes its maximum value when $\phi =\beta$, which according to the relation $\cos \chi =-\sin \alpha\cos \gamma$ and the fact that $\alpha \in [0,\pi/2]$ should correspond to $\gamma=0$.
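As a quick numerical consistency check (with illustrative values of $\alpha$ and $\beta$): taking the upper sign for $\cos\gamma$ and using the relation $\cos\chi=-\sin\alpha\cos\gamma$, the boundary point at azimuth $\phi$ and polar angle $\chi(\phi)$ must lie on the great circle $\boldsymbol n\cdot\boldsymbol v=0$, as it does:

```python
import numpy as np

alpha, beta = 0.7, 1.9          # illustrative, alpha in [0, pi/2), beta in [0, 2*pi)
phi = np.linspace(0.0, 2.0 * np.pi, 1000)

# Upper sign for cos(gamma), then cos(chi) = -sin(alpha) cos(gamma)
cos_g = np.cos(phi - beta) / np.sqrt(np.cos(phi - beta) ** 2
                                     + np.cos(alpha) ** 2 * np.sin(phi - beta) ** 2)
cos_chi = -np.sin(alpha) * cos_g
sin_chi = np.sqrt(1.0 - cos_chi ** 2)            # sin(chi) >= 0 by convention

n = np.stack([sin_chi * np.cos(phi), sin_chi * np.sin(phi), cos_chi])
v = np.array([np.sin(alpha) * np.cos(beta),
              np.sin(alpha) * np.sin(beta), np.cos(alpha)])
assert np.allclose(v @ n, 0.0, atol=1e-12)       # boundary lies on n . v = 0
```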
So we take the upper sign in the last equation, yielding \begin{align} \cos \chi &=\frac{-\sin \alpha\cos (\phi -\beta)}{[\cos ^{2}(\phi -\beta)+\cos ^{2}\alpha\sin ^{2}(\phi -\beta)]^{1/2}}\nonumber\\ \label{coschi} &= \frac{-\sin \alpha\cos (\phi -\beta)}{[\cos ^{2}\alpha+\sin ^{2}\alpha\cos ^{2}(\phi -\beta)]^{1/2}} . \end{align} It follows immediately that \begin{align} \label{sinchi} \sin \chi =\frac{\cos \alpha}{[\cos ^{2}\alpha+\sin ^{2}\alpha\cos ^{2}(\phi -\beta)]^{1/2}}, \end{align} with the choice of sign fixed by the fact that $\sin \chi \geq 0$ and (by assumption) $\cos \alpha\geq 0$. The surface integral for $\boldsymbol q(\boldsymbol v)$ in Eq.~(\ref{qv}) can now be written in the form: \begin{equation} \label{qvnew} \int_{0}^{2\pi }\int_{0}^{\chi (\phi )}\frac{(\sin \theta \cos \phi ,\sin \theta \sin \phi ,\cos \theta )^\intercal\sin \theta \,d\theta \,d\phi}{(a\sin ^{2}\theta \cos ^{2}\phi +b\sin ^{2}\theta \sin ^{2}\phi +c\cos ^{2}\theta )^{2}} . \end{equation} To evaluate the third component of $\boldsymbol q(\boldsymbol v)$, note that the integral over $\theta $, \[ \int_{0}^{\chi (\phi )}\frac{\sin \theta \cos \theta \mathop{}\!\mathrm{d} \theta }{(a\sin ^{2}\theta \cos ^{2}\phi +b\sin ^{2}\theta \sin ^{2}\phi +c\cos ^{2}\theta )^{2}}, \] can be evaluated explicitly by making the substitution $w=\sin ^{2}\theta $, as $\int (A +B w)^{-2}dw=-B ^{-1}(A +B w)^{-1}$ for any $B \neq 0$, yielding \begin{align*} \frac{1}{2c}\frac{\sin ^{2}\chi }{a\sin ^{2}\chi \cos ^{2}\phi +b\sin ^{2}\chi \sin ^{2}\phi +c\cos ^{2}\chi }. \end{align*} After inserting the expressions for $\cos \chi $ and $\sin \chi $ derived earlier, we have \begin{eqnarray*} &&\int_{0}^{\chi (\phi )}\frac{\sin \theta \cos \theta }{(a\sin ^{2}\theta \cos ^{2}\phi +b\sin ^{2}\theta \sin ^{2}\phi +c\cos ^{2}\theta )^{2}} \,d\theta \\ &=&\frac{1}{2c}\frac{\cos ^{2}\alpha}{a\cos ^{2}\alpha\cos ^{2}\phi +b\cos ^{2}\alpha\sin ^{2}\phi +c\sin ^{2}\alpha\cos ^{2}(\phi -\beta)}.
\end{eqnarray*} We now need to integrate the last expression over $\phi $. Introducing new constants \begin{align*} l &=a\cos ^{2}\alpha+c\sin ^{2}\alpha\cos ^{2}\beta,\\ m &=b\cos^{2}\alpha+c\sin ^{2}\alpha\sin ^{2}\beta,\\ n &=c\sin ^{2}\alpha\sin \beta\cos \beta, \end{align*} the full surface integral simplifies to a form that may be evaluated by Mathematica (or by contour integration over the unit circle in the complex plane): \begin{eqnarray} \nonumber &&\int_{0}^{2\pi }\int_{0}^{\chi (\phi )}\frac{\sin \theta \cos \theta d\theta \,d\phi }{ (a\sin ^{2}\theta \cos ^{2}\phi +b\sin ^{2}\theta \sin ^{2}\phi +c\cos ^{2}\theta )^{2}} \\ \nonumber &=&\frac{\cos ^{2}\alpha}{2c}\int_{0}^{2\pi }\frac{d\phi }{l \cos ^{2}\phi +m \sin ^{2}\phi +2n \sin \phi \cos \phi }\\ \label{int1} &=& \pm\frac{\cos ^{2}\alpha}{2c} \frac{2\pi}{\sqrt{lm-n^2}}. \end{eqnarray} The indeterminate sign here is fixed by examining the case $\alpha=0$ and $ a=b=c$, for which $\chi (\phi )=\pi /2$ and the integrand reduces to $a^{-2}\sin \theta \cos \theta $, which integrates to give $\pi a^{-2}$. So, unsurprisingly, we choose the positive sign. This yields the third component of the surface integral to be \begin{align} \label{qv3} [\boldsymbol q(\boldsymbol v)]_3 = \frac{\pi \cos \alpha}{c[ab\cos ^{2}\alpha+c(a\sin ^{2}\beta+b\cos ^{2}\beta)\sin ^{2}\alpha]^{1/2}}. \end{align} The integrals over $\theta $ in the remaining two components of $\boldsymbol q(\boldsymbol v)$ in Eq.~(\ref{qvnew}) are unfortunately not so straightforward. However, there is a simple trick that allows us to calculate both surface integrals explicitly, and that is to differentiate the integrals with respect to the parameters $\alpha$ and $\beta$. Since the only dependence on $\alpha$ and $ \beta$ comes through the function $\chi (\phi)$, this eliminates the need to integrate over $\theta $. In fact we only need to differentiate with respect to one of these parameters, say $\alpha$.
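Both closed-form angular integrals invoked in this appendix, the scalar one in Eq.~(\ref{int1}) and the vector one used further below, $\int_0^{2\pi}(\sin^2\phi,\cos^2\phi,\sin\phi\cos\phi)\,\mathrm{d}\phi/D^2=\pi(l,m,-n)/(lm-n^2)^{3/2}$ with $D=l\cos^2\phi+m\sin^2\phi+2n\sin\phi\cos\phi$, are standard results (valid whenever the quadratic form is positive definite) and easy to spot-check numerically (illustrative $l,m,n$):

```python
import numpy as np

def angular_integrals(l, m, n, N=200000):
    """Midpoint rule for the angular integrals over phi in [0, 2*pi):
    1/D and (sin^2, cos^2, sin*cos)/D^2,
    where D = l cos^2(phi) + m sin^2(phi) + 2 n sin(phi) cos(phi)."""
    phi = (np.arange(N) + 0.5) * (2.0 * np.pi / N)
    s, c = np.sin(phi), np.cos(phi)
    D = l * c ** 2 + m * s ** 2 + 2.0 * n * s * c
    w = 2.0 * np.pi / N
    return ((1.0 / D).sum() * w,
            np.array([(s ** 2 / D ** 2).sum(),
                      (c ** 2 / D ** 2).sum(),
                      (s * c / D ** 2).sum()]) * w)

l, m, n = 2.0, 3.0, 0.5                              # need lm - n^2 > 0
scalar, vector = angular_integrals(l, m, n)
assert np.isclose(scalar, 2.0 * np.pi / np.sqrt(l * m - n ** 2))
assert np.allclose(vector, np.pi * np.array([l, m, -n]) / (l * m - n ** 2) ** 1.5)
```

The vector identity follows from the scalar one by differentiating $2\pi(lm-n^2)^{-1/2}$ with respect to $m$, $l$ and $n$, respectively.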
To see this, note that \begin{align*} &\frac{\partial }{\partial \alpha}\int_{0}^{2\pi }\int_{0}^{\chi (\phi )}\frac{ (\cos \phi ,\sin \phi )\sin ^{2}\theta \,d\theta \,d\phi}{(a\sin ^{2}\theta \cos ^{2}\phi +b\sin ^{2}\theta \sin ^{2}\phi +c\cos ^{2}\theta )^{2}} \\ &=\int_{0}^{2\pi }\frac{(\cos \phi ,\sin \phi )\sin ^{2}\chi }{(a\sin ^{2}\chi \cos ^{2}\phi +b\sin ^{2}\chi \sin ^{2}\phi +c\cos ^{2}\chi )^{2}}\, \frac{\partial \chi }{\partial \alpha}d\phi, \end{align*} where $\partial \chi /\partial \alpha$ can be evaluated by making use of Eqs.~\eqref{coschi} and \eqref{sinchi}. In fact, \begin{align*} -\sin \chi \,\frac{\partial \chi }{\partial \alpha}&=\,\frac{\partial }{\partial \alpha} \left(\frac{-\sin \alpha\cos (\phi -\beta)}{[\cos ^{2}\alpha+\sin ^{2}\alpha\cos ^{2}(\phi -\beta)]^{1/2} }\right)\\ &=\frac{-\cos \alpha\cos (\phi -\beta)}{[\cos ^{2}\alpha+\sin ^{2}\alpha\cos ^{2}(\phi -\beta)]^{3/2}}. \end{align*} Inserting the last two equations and the expressions for $\sin\chi$ and $\cos\chi$ into the integrals above, and using the constants $l, m$ and $n$ defined earlier, then gives: \begin{widetext} \begin{eqnarray} \nonumber &&\frac{\partial }{\partial \alpha}\int_{0}^{2\pi }\int_{0}^{\chi (\phi )}\frac{ (\cos \phi ,\sin \phi )\sin ^{2}\theta \mathop{}\!\mathrm{d}\theta \,\mathop{}\!\mathrm{d}\phi}{(a\sin ^{2}\theta \cos ^{2}\phi +b\sin ^{2}\theta \sin ^{2}\phi +c\cos ^{2}\theta )^{2}} \\ \nonumber &=&\cos ^{2}\alpha\int_{0}^{2\pi }\frac{(\cos \phi ,\sin \phi )\cos (\phi -\beta)}{ [a\cos ^{2}\phi \cos ^{2}\alpha+b\sin ^{2}\phi \cos ^{2}\alpha+c\sin ^{2}\alpha\cos ^{2}(\phi -\beta)]^{2}}\mathop{}\!\mathrm{d}\phi \\ \label{xy1} &=&\cos ^{2}\alpha\int_{0}^{2\pi }\frac{(\sin \beta\sin \phi \cos \phi +\cos \beta\cos ^{2}\phi ,\sin \beta\sin ^{2}\phi +\cos \beta\sin \phi \cos \phi )}{(l \cos ^{2}\phi +m \sin ^{2}\phi +2n \sin \phi \cos \phi )^{2}}\mathop{}\!\mathrm{d}\phi .
\end{eqnarray} \end{widetext} Consequently, there are three separate integrals we need to evaluate and these can be done in Mathematica (or by complex contour integration): \begin{eqnarray*} \label{int2} \int_{0}^{2\pi } \frac{(\sin^2\phi,\cos ^{2}\phi,\sin\phi\cos\phi) \, d\phi }{(l\cos ^{2}\phi +m \sin ^{2}\phi +2n \sin \phi \cos \phi )^{2}} = \frac{\pi(l,m,-n)}{(lm-n^2)^{3/2}} . \end{eqnarray*} Using the values we have for $l,m,n$, we substitute these back into Eq.~\eqref{xy1} and integrate over $\alpha$ to obtain \begin{align} \nonumber &[\boldsymbol q(\boldsymbol v)]_1 =\pi\int \frac{\cos ^{2}\alpha(m\cos \beta-n\sin \beta) }{(lm-n^2)^{3/2}}d\alpha\\ \label{qv1} &=\frac{a^{-1}\pi \sin\alpha\cos\beta}{[ab \cos^2\alpha + c\sin^2\alpha(b\cos^2\beta + a\sin^2\beta)]^{1/2}}, \end{align} and \begin{align} \nonumber &[\boldsymbol q(\boldsymbol v)]_2 =\pi\int \frac{\cos ^{2}\alpha(l\sin \beta -n\cos \beta) }{(lm-n^2)^{3/2}}d\alpha\\ \label{qv2} &=\frac{b^{-1}\pi \sin\alpha\sin\beta}{[ab \cos^2\alpha + c\sin^2\alpha(b\cos^2\beta + a\sin^2\beta)]^{1/2}}. \end{align} The absence of integration constants can be confirmed by noting that these expressions vanish for $\alpha=0$ -- i.e., when the vector $\boldsymbol v$ is aligned with the $z$-axis -- as they should by symmetry. Note that the denominators of Eqs.~\eqref{qv1} and \eqref{qv2} simplify to $\sqrt{abc\, (\boldsymbol v^\intercal Q^{-1} \boldsymbol v )}$. Combining this with Eqs.~(\ref{qv3}) and (\ref{qv1})-(\ref{qv2}), we have \begin{align} \boldsymbol q (\boldsymbol v) = \frac{ \pi Q^{-1} \boldsymbol v}{\sqrt{abc (\boldsymbol v^\intercal Q^{-1} \boldsymbol v)}}, \end{align} and so setting $Q = T^{-2}$, the theorem follows as desired. \end{proof} Finally, the normalisation constant $N_T$ in Eq.~(\ref{surface}) may be analytically evaluated using Mathematica. Under the assumption that $|t_3| > |t_2| > |t_1|$, denote $a=|t_1|, b=|t_2|, c=|t_3|$.
We find \begin{widetext} \begin{align} \nonumber N_T^{-1} &= \int_{\boldsymbol n\cdot\boldsymbol n=1} (\boldsymbol n^\intercal T^{-2} \boldsymbol n)^{-2} \mathop{}\!\mathrm{d}^2\boldsymbol n = \frac{2\pi}{abc(a+b)(b+c)(c^2-a^2)} \\ &\times \big(X+Y\left\{b(c-a)E[C]+a(b+c)K[C]+ib(c-a)(E[A_1,B]-E[A_2,B])+ic(a+b)(F[A_1,B]-F[A_2,B])\right\}\big), \end{align} \end{widetext} where $F[\cdot, \cdot]$ and $E[\cdot,\cdot]$ are the incomplete elliptic integrals of the first and second kind, $E[\cdot]$ is the complete elliptic integral of the second kind, $K[\cdot]$ is the complete elliptic integral of the first kind, and \[ A_1 = i \mathrm{arccsch}\left(\frac{a}{\sqrt{c^2-a^2}} \right),~~~~ A_2 = i\ln \left(\frac{b+c}{\sqrt{c^2-b^2}} \right), \] \[ B = \frac{a^2(c^2-b^2)}{b^2(c^2-a^2)} ,~~~~~~~~ C = \frac{c^2(b^2-a^2)}{b^2(c^2-a^2)}, \] \[ X =c(c-a)[(a+c)(b+c)+ab],~~~~ Y=(a+b+c)\sqrt{c^2-a^2}. \] Thus, the normalisation constant $N_T$ has a rather non-trivial form. It is highly unlikely that we can invert it to express the EPR-steerability condition $2\pi N_T|\det T| =1$ as $c = g(a,b)$ where $g$ is some function of $a,b$, other than in the special cases considered in Sec.~4D. In general, we must leave it as an implicit equation in $a,b,c$ (that is, of the $t_j$s). \section{\label{app:cov}EPR-Steering inequality for spin covariance matrix} To demonstrate the linear EPR-steering inequality in Eq.~(\ref{lin}), let $A_{\boldsymbol v}$ denote some dichotomic observable that Alice can measure on her qubit, with outcomes labelled by $\pm1$, where $\boldsymbol v$ is any unit vector. We will make a specific choice of $A_{\boldsymbol v}$ below. Define the corresponding covariance function \begin{equation} \label{cv} C(\boldsymbol v):=\langle A_{\boldsymbol v} \otimes \boldsymbol v\cdot\boldsymbol\sigma\rangle -\langle A_{\boldsymbol v}\rangle\,\langle \boldsymbol v\cdot\boldsymbol\sigma\rangle .
\end{equation} If there is an LHS model for Bob then, noting that one may take $p(a|x,\lambda)$ in Eq.~(\ref{lhs}) to be deterministic without loss of generality, there are functions $\alpha_{\boldsymbol v}(\lambda)=\pm 1$ such that $C(\boldsymbol v) =\sum_\lambda p(\lambda) [\alpha_{\boldsymbol v}(\lambda) - \bar{\alpha}_{\boldsymbol v}]\,[\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v$, where $\bar{\alpha}_{\boldsymbol v}=\sum_\lambda p(\lambda) \alpha_{\boldsymbol v}(\lambda)$, and the hidden state $\rho_B(\lambda)$ has corresponding Bloch vector $\boldsymbol n(\lambda)$. Now, the Bloch sphere can be partitioned into two sets, $S_+(\lambda)=\{\boldsymbol v : [\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v \geq 0\}$ and $S_-(\lambda)=\{\boldsymbol v : [\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v <0\}$, for each value of $\lambda$. Hence, noting $-1-\bar{\alpha}_{\boldsymbol v}\leq \alpha_{\boldsymbol v}(\lambda) - \bar{\alpha}_{\boldsymbol v} \leq 1- \bar{\alpha}_{\boldsymbol v}$, $\int C(\boldsymbol v)\,d^2\boldsymbol v$ is equal to \begin{eqnarray*} &~& \sum_\lambda p(\lambda)\left\{ \int_{S_+(\lambda)}\mathop{}\!\mathrm{d}^2\boldsymbol v\, [\alpha_{\boldsymbol v}(\lambda) - \bar{\alpha}_{\boldsymbol v}]\,[\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v \right.\\ &~& \left. + \int_{S_-(\lambda)} \mathop{}\!\mathrm{d}^2\boldsymbol v\, [\alpha_{\boldsymbol v}(\lambda) - \bar{\alpha}_{\boldsymbol v}]\,[\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v \right\} \end{eqnarray*} \begin{eqnarray*} &\leq& \sum_\lambda p(\lambda)\left\{ \int_{S_+(\lambda)} \mathop{}\!\mathrm{d}^2\boldsymbol v\, [1 - \bar{\alpha}_{\boldsymbol v}]\,[\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v \right.\\ &~& -\left.
\int_{S_-(\lambda)} \mathop{}\!\mathrm{d}^2\boldsymbol v\, [1+ \bar{\alpha}_{\boldsymbol v}]\,[\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v \right\}\\ &=& \sum_\lambda p(\lambda) \int \mathop{}\!\mathrm{d}^2\boldsymbol v\, |[\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v| \end{eqnarray*} \begin{eqnarray*} &~& - \sum_\lambda p(\lambda) \int \mathop{}\!\mathrm{d}^2\boldsymbol v\, \bar{\alpha}_{\boldsymbol v}\,[\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v\\ &=& \sum_\lambda p(\lambda) |\boldsymbol n(\lambda)-\boldsymbol b|\,\int \mathop{}\!\mathrm{d}^2\boldsymbol v\, |\boldsymbol v\cdot \boldsymbol w(\lambda)|, \end{eqnarray*} where $\boldsymbol w(\lambda)$ denotes the unit vector in the $\boldsymbol n(\lambda)-\boldsymbol b$ direction, and the last line follows by interchanging the summation and integration in the second term of the previous line. The integral in the last line can be evaluated for each value of $\lambda$ by rotating the coordinates such that $\boldsymbol w(\lambda)$ is aligned with the $z$-axis, yielding $\int \mathop{}\!\mathrm{d}^2\boldsymbol v\, |\boldsymbol v\cdot \boldsymbol w(\lambda)| = \int\mathop{}\!\mathrm{d}^2\boldsymbol v\,|v_3| = \int_0^{2\pi} \mathop{}\!\mathrm{d}\phi \int_0^\pi \mathop{}\!\mathrm{d}\theta \sin \theta\, |\cos\theta| =2\pi$. Hence, the above inequality can be rewritten as \begin{eqnarray} \nonumber \frac{1}{4\pi} \int \mathop{}\!\mathrm{d}^2\boldsymbol v\, C(\boldsymbol v) &\leq& \frac{1}{2} \sum_\lambda p(\lambda)\,|\boldsymbol n(\lambda) -\boldsymbol b| \\ \nonumber &\leq& \frac{1}{2}\left[\sum_\lambda p(\lambda)\,|\boldsymbol n(\lambda) -\boldsymbol b|^2\right]^{1/2} \\ \label{cav} &\leq& \frac{1}{2}\sqrt{1-\boldsymbol b\cdot\boldsymbol b}, \end{eqnarray} where the second and third lines follow using the Schwarz inequality and $|\boldsymbol n(\lambda)|\leq 1$, respectively.
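The two elementary spherical integrals used in this appendix, $\int\mathop{}\!\mathrm{d}^2\boldsymbol v\,|\boldsymbol v\cdot\boldsymbol w|=2\pi$ for any unit vector $\boldsymbol w$ and the second moment $\int\mathop{}\!\mathrm{d}^2\boldsymbol v\,v_jv_k=(4\pi/3)\delta_{jk}$ used just below, are easily confirmed numerically (a sketch with an arbitrary unit vector):

```python
import numpy as np

def sphere_grid(n_theta=800, n_phi=800):
    """Midpoint grid on the unit sphere with area weights sin(theta) dtheta dphi."""
    dth, dph = np.pi / n_theta, 2.0 * np.pi / n_phi
    th, ph = np.meshgrid((np.arange(n_theta) + 0.5) * dth,
                         (np.arange(n_phi) + 0.5) * dph, indexing="ij")
    v = np.stack([np.sin(th) * np.cos(ph),
                  np.sin(th) * np.sin(ph),
                  np.cos(th)]).reshape(3, -1)
    w = (np.sin(th) * dth * dph).ravel()             # area element per node
    return v, w

v, w = sphere_grid()
u = np.array([1.0, 2.0, 2.0]) / 3.0                  # arbitrary unit vector
assert np.isclose(np.abs(u @ v) @ w, 2.0 * np.pi, rtol=1e-3)
assert np.allclose((v[:, None, :] * v[None, :, :] * w).sum(axis=2),
                   (4.0 * np.pi / 3.0) * np.eye(3), atol=1e-3)
```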
Note, by the way, that the first inequality is tight for the case $\alpha_{\boldsymbol v}(\lambda)={\rm sign}\left( [\boldsymbol n(\lambda)-\boldsymbol b]\cdot\boldsymbol v\right)$. Now, making the choice $A_{\boldsymbol v}=\boldsymbol u\cdot\boldsymbol \sigma$ with $u_j:={\rm sign}(C_{jj}) v_j$, one has from Eqs.~(\ref{cjk}) and (\ref{cv}) that \begin{eqnarray*} \int \mathop{}\!\mathrm{d}^2\boldsymbol v\, C(\boldsymbol v) &=&\sum_{j,k} C_{jk}\,{\rm sign}(C_{jj}) \int \mathop{}\!\mathrm{d}^2\boldsymbol v\,v_j\,v_k \\ &=& \sum_{j,k} C_{jk}\,{\rm sign}(C_{jj}) \frac{4\pi}{3}\delta_{jk} = \frac{4\pi}{3}\sum_j |C_{jj}| . \end{eqnarray*} Combining with Eq.~(\ref{cav}) immediately yields the EPR-steering inequality \begin{equation} \sum_j |C_{jj}| \leq \frac{3}{2} \sqrt{1-\boldsymbol b\cdot\boldsymbol b}. \end{equation} Finally, this inequality may similarly be derived in a representation in which local rotations put the spin covariance matrix $C$ in diagonal form, with coefficients given up to a sign by the singular values of $C$ (similarly to the spin correlation matrix $T$ in Sec.~4A). Since $\boldsymbol b\cdot\boldsymbol b=b^2$ is invariant under such rotations, Eq.~(\ref{lin}) follows. \end{document}
\begin{document} \title{Conventional and unconventional photon statistics} \author{Eduardo {Zubizarreta Casalengua}} \affiliation{Departamento de F\'isica Te\'orica de la Materia Condensada, Universidad Aut\'onoma de Madrid, 28049 Madrid, Spain} \author{{Juan Camilo} {L\'{o}pez~Carre\~no}} \affiliation{Departamento de F\'isica Te\'orica de la Materia Condensada, Universidad Aut\'onoma de Madrid, 28049 Madrid, Spain} \affiliation{Faculty of Science and Engineering, University of Wolverhampton, Wulfruna St, Wolverhampton WV1 1LY, UK} \author{Fabrice P. Laussy} \affiliation{Faculty of Science and Engineering, University of Wolverhampton, Wulfruna St, Wolverhampton WV1 1LY, UK} \affiliation{Russian Quantum Center, Novaya 100, 143025 Skolkovo, Moscow Region, Russia} \author{Elena {del Valle}} \email{[email protected]} \affiliation{Departamento de F\'isica Te\'orica de la Materia Condensada, Universidad Aut\'onoma de Madrid, 28049 Madrid, Spain} \date{\today} \begin{abstract} We show how the photon statistics emitted by a large variety of light-matter systems under weak coherent driving can be understood, to lowest order in the driving, in the framework of an admixture of (or interference between) a squeezed state and a coherent state, with the resulting state accounting for all bunching and antibunching features. One can further identify two mechanisms that produce resonances for the photon correlations: i) \emph{conventional statistics} describes cases that involve a particular quantum level or set of levels in the excitation/emission processes with interferences occurring to all orders in the photon numbers, while ii) \emph{unconventional} statistics describes cases where the driving laser is far from resonance with any level and the interference occurs for a particular number of photons only, yielding stronger correlations but only for a definite number of photons.
Such an understanding and classification allows for a comprehensive and transparent description of the photon statistics from a wide range of disparate systems, where optimum conditions for various types of photon correlations can be found and realized. \end{abstract} \maketitle \tableofcontents \section{Introduction} Quantum optics was born with the study of photon statistics~\cite{glauber06a}. Following Hanbury Brown's discovery of photon bunching~\cite{hanburybrown52a,hanburybrown56c} and Kimble \emph{et al.}'s observation of antibunching~\cite{kimble77a}, there has been a burgeoning activity of tracking how pairs of detected photons are related to each other. Among the various mechanisms that generate correlated photons, one that turns an uncorrelated stream into antibunched photons has attracted much attention across different platforms for its practicality of operation (with a laser) and appealing underlying mechanism~\cite{imamoglu94a,werner99a,kim99a,michler00a,birnbaum05a,faraon08a,dayan08a,lang11a,hoffman11a}. This so-called ``blockade'' effect describes how the occupation of an energy level by a particle forbids another particle to occupy the same level. As such, it is reminiscent of Pauli's exclusion principle~\cite{kaplan_book16a,massimi_book05a} and indeed the first type of blockade involved electrons in the so-called ``Coulomb blockade''~\cite{averin86a,fulton87a,kastner08a}. However, while Pauli's principle relies on the antisymmetry of fermionic wavefunctions, one can also implement a ``bosonic blockade'' with photons exciting an anharmonic system~\cite{carusotto01a}, based on nonlinearities in the energy levels due to self-interactions. The idea is simple: when photons are resonant with the frequency of the oscillator, a first photon can excite the system, but due to photon-photon interactions, a subsequent photon is now detuned from the oscillator's frequency.
If its energy is not sufficient to climb the ladder of states, it cannot excite the system, which remains with one photon. In this way, one can turn a coherent---that is, uncorrelated---stream of photons, with its characteristic Poissonian fluctuations, into a more ordered stream of separated photons, effectively acting as a ``photon turnstile''. The quality of such a suppression of the Poisson bursts can be measured at the two-photon level with Glauber's second-order correlation function~$g^{(2)}(\tau)$, which compares the coincidences in time to those expected from a random process of the same intensity. Correlations decrease from~1, with no blockade, towards~0 as the nonlinearity~$U$ increases. In the limit where~$U$ becomes infinite, putting the second excited state arbitrarily far and realizing a two-level system (2LS), a second photon is strictly forbidden and $g^{(2)}$ becomes perfectly antibunched. For open bosonic systems, the ratio of the interaction to the decay rate is an important variable for the blocking to be effective. The ``blockade'' regime is reached when interactions start to dominate the dissipation. This can be marked as the onset of antibunching: one photon starts to suppress the next one~\cite{sanvitto19a}. The driven damped anharmonic system is an important model, not least because it is one of the few cases to enjoy an exact analytical solution~\cite{drummond80a}. While much of the mechanism is contained in this particular case, compound systems---where the anharmonic system is coupled to a single-mode cavity---have also attracted considerable attention. This describes for instance interacting quantum-well excitons coupled to a microcavity mode~\cite{kavokin_book17a}. The effect is then known as ``polariton blockade'', after the eponymous light-matter particles that constitute the elementary excitations of such systems.
This configuration was first addressed theoretically by Verger \emph{et al.}~\cite{verger06a} who studied the response of the cavity around the lower-polariton resonance, indeed predicting antibunching, although of too small a magnitude with the parameters of typical systems to be easily observed. This spurred interest in polariton boxes and other ways of confining polaritons to enhance their interactions~\cite{eldaif06a}. A few years later, Liew and Savona computed a much stronger antibunching from a seemingly distinct compound system with the same order of nonlinearity, namely, coupled cavities~\cite{liew10a}. This so-called ``unconventional polariton blockade'' was quickly understood as originating not from the particular configuration of coupled cavities with weak Kerr nonlinearities but from a subtler type of blocking, due to destructive interferences between probability amplitudes whenever there are two paths that can reach the excited state with two photons~\cite{bamba11a}. This result has generated considerable attention, although it was later remarked~\cite{lemonde14a} that it was a known effect~\cite{carmichael85a,carmichael91b}, observed decades earlier~\cite{foster00a}, where it received a much smaller follow-up. Besides, Lemonde \emph{et al.}~\cite{lemonde14a} further clarified how unconventional photon blockade is connected to squeezing rather than single-photon states, which had been presented as one of the main interests of the effect. Recently, both conventional~\cite{arXiv_delteil18a,arXiv_munozmatutano17a} and unconventional~\cite{vaneph18a,snijders18a} blockades have been reported in solid-state systems, where the 2010 revival of the idea had triggered intense activity. In this text, we provide a unifying picture of the two types of polariton blockades. We show how they typically sit next to each other in interacting coupled light-matter systems along with other phenomenologies that produce superbunching instead of the blockade antibunching.
In particular, we show that they are both rooted in the single-component system, either a 2LS or an anharmonic oscillator, with strong photon correlations produced by interfering the emitter's incoherent signal with a coherent fraction. We will nevertheless highlight how the two blockades are intrinsically different mechanisms with different characteristics. Most importantly, the conventional blockade, based on dressed-state blocking, yields photon antibunching at all orders in the number of photons, i.e., $g^{(N)}\rightarrow 0$ for all~$N\ge 2$, while the unconventional blockade can only target one~$N$ in isolation, producing bunching for the others. Another apparent similarity is that both types of blockades produce the same state as regards the population and the two-photon correlation~$g^{(2)}$ at the lowest order in the driving, but differences occur at higher orders, namely, at the second-order for~$g^{(2)}$ and at the third-order for the population. Differences exist already at the lowest order in the driving for $g^{(3)}$ and higher-order photon correlations, making it clear that the two mechanisms differ substantially and produce different states, despite strong resemblances in the quantities of easiest experimental reach. The state produced by both types of blockade at the two-photon level results from a simple interference between a squeezed state and a coherent state. While the squeezing is typically produced by the emitter, the coherent fraction can be either brought from outside, conveniently, as a fraction of the driving laser itself---a technique known as ``homodyning''---or be produced internally by the driven system itself, a concept introduced in the literature under the apt qualification of ``self-homodyning''~\cite{carmichael85a}.
We will thus highlight that, to this order, essentially the same physics---of tailoring two-photon statistics by admixing squeezed and coherent light, discussed in Section~\ref{sec:ThuFeb15174655CET2018}---takes place in a variety of platforms, overviewed in Section~\ref{sec:vieene25080609GMT2019}. We will further synthesize this picture by unifying the cases where the nonlinearity is i) strong, namely, provided by a 2LS (Section~\ref{sec:FriFeb23095933CET2018}) or on the contrary ii) weak, namely, provided by an anharmonic oscillator (Section~\ref{sec:jueene10122434CET2019}), and how these are further generalized in the presence of a cavity, where self-homodyning becomes a compelling picture since a cavity is an ideal receptacle for coherent states. This brings the 2LS into the Jaynes--Cummings model (Section~\ref{sec:FriFeb23100104CET2018}) and the anharmonic oscillator into microcavity polaritons (Section~\ref{sec:FriFeb23100219CET2018}), respectively. There are many variations in between all these configurations, that the literature has touched upon in many forms, as we briefly overview in the next Section. For our own discussion, while we have tried to retain as much generality as possible for the variables that play a significant role, we do not include, for the sake of brevity, all the possible combinations, which could of course be done should the need arise for a given platform. Section~\ref{sec:jueene24231247GMT2019} summarizes and concludes. \section{A short review of blockades and related effects} \label{sec:vieene25080609GMT2019} We will keep this overview very short since a thorough review would require a full work on its own. A good review is found in Ref.~\cite{flayac17b}. This Section will also allow us to introduce the formalism and notations. For details of the microscopic derivation, we refer to Ref.~\cite{verger06a}.
A fairly general type of photon blockade is described by the Hamiltonian \begin{subequations} \label{eq:Mon5Jun145532BST2017} \begin{align} H={}&\hbar\omega_a\ud{a}a+\hbar\omega_b\ud{b}b+\hbar g(\ud{a}b+a\ud{b})\label{eq:Mon5Jun150144BST2017}\\ &+\frac{U_a}{2}\ud{a}\ud{a}aa+\frac{U_b}{2}\ud{b}\ud{b}bb\\ &+\Omega_a e^{i\varpi_a t}a+\Omega_b e^{i\varpi_b t}b\\ &+\mathrm{h.c.}\nonumber \end{align} \end{subequations} where~$\hbar\omega_c$ is the free energy of the modes~$c=a,b$, both bosonic, $\hbar g$ describes their Rabi coupling, giving rise to polaritons as eigenstates of line~(\ref{eq:Mon5Jun150144BST2017}), $U_c$ are the nonlinearities of the respective modes, here again for~$c=a,b$, and~$\Omega_c$ describes resonant excitation at the energy~$\varpi_c$. This is brought to the dissipative regime through the standard techniques of open quantum systems, namely, with a master equation in the Lindblad form $\partial_t \rho = \frac{i}{\hbar} \left[ \rho ,H \right] + \sum_{c=a,b} (\gamma_c/2) \mathcal{L}_{c} \rho$, where the superoperator $\mathcal{L}_{c} \rho \equiv 2 c \rho \ud{c} - \ud{c}c \rho - \rho \ud{c}c$ describes the decay of mode~$c$ at rate~$\gamma_c$ (we do not include incoherent excitation nor dephasing, which are other parameters one could easily account for in the following analysis from a mere technical point of view). Particular cases or variations of Eq.~(\ref{eq:Mon5Jun145532BST2017}) have been studied in a myriad of works, even when restricting to those with a focus on the emitted photon statistics. This ranges from cases retaining one mode only~\cite{ferretti12a} to the most general form of Eq.~(\ref{eq:Mon5Jun145532BST2017})~\cite{ferretti10a, ferretti13a, flayac13a, flayac16a, flayac17a, arXiv_liang18a} with further variations (such as using pulsed excitation~\cite{flayac15a}).
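To make the formalism concrete, the single-mode particular case of Eq.~(\ref{eq:Mon5Jun145532BST2017}) (one driven Kerr mode, written in the frame rotating at the laser frequency) is small enough that the Lindblad steady state can be obtained by brute-force vectorization of the master equation in a truncated Fock space. The following Python sketch is only a numerical illustration under these assumptions (it is not the method used in the text, and the parameter values are illustrative); it recovers the blockade phenomenology discussed above: $g^{(2)}(0)=1$ for $U=0$ and strong antibunching once $U\gg\gamma$.

```python
import numpy as np

def kerr_g2(delta, U, gamma, Omega, N=12):
    """Steady-state population and g2(0) of a coherently driven, damped Kerr
    mode: H = delta*n + (U/2)*ad*ad*a*a + Omega*(a + ad), decay at rate gamma
    (hbar = 1), truncated at N Fock states; a minimal, unoptimized sketch."""
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator
    ad = a.T.copy()
    n_op = ad @ a
    H = delta * n_op + 0.5 * U * (ad @ ad @ a @ a) + Omega * (a + ad)
    I = np.eye(N)
    # Column-stacking vectorization: vec(A rho B) -> kron(B.T, A) vec(rho)
    L = (-1j * (np.kron(I, H) - np.kron(H.T, I))      # -i [H, rho]
         + 0.5 * gamma * (2 * np.kron(a, a)           # a rho ad  (a is real)
                          - np.kron(I, n_op)          # ad a rho
                          - np.kron(n_op, I)))        # rho ad a  (n_op diagonal)
    # Solve L vec(rho) = 0 together with Tr(rho) = 1 (least squares)
    tr = np.zeros(N * N); tr[np.arange(N) * (N + 1)] = 1.0
    M = np.vstack([L, tr])
    b = np.zeros(N * N + 1, dtype=complex); b[-1] = 1.0
    rho = np.linalg.lstsq(M, b, rcond=None)[0].reshape(N, N, order="F")
    nbar = np.trace(n_op @ rho).real
    g2 = np.trace(ad @ ad @ a @ a @ rho).real / nbar ** 2
    return nbar, g2
```

At resonance and for weak driving ($\delta=0$, $\Omega=0.05\gamma$) this is consistent with the lowest-order expectation $g^{(2)}(0)\simeq\gamma^2/(U^2+\gamma^2)$, i.e., $g^{(2)}(0)$ of order $10^{-2}$ for $U=10\gamma$.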
A first consideration of the effect of field admixture on the photon statistics was made by Flayac and Savona~\cite{flayac13a}, who found that the conditions for strong correlations are shifted rather than hampered. This touches upon, in the framework of input/output theory, the mechanisms of mixing fields that we will highlight in the following, where we will show that beyond being altered, correlations can be drastically optimized (becoming exactly zero to first order for antibunching and infinite for bunching). In a later work~\cite{flayac16a}, they further progressed towards fully exploiting homodyning by including a ``dissipative, one-directional coupling'' term, which allowed them to achieve a considerable improvement of the photon correlations, especially in time, with suppression of oscillations and the emergence of a plateau at small time delays. This is due to the same mechanism as the one we used, with identical consequences, but in a different context~\cite{lopezcarreno18b} (a two-level system admixed to an external laser). Emphasis should also be given to the bulk of work devoted to related ideas of mixing fields by the Vu\v{c}kovi\'c\xspace group, starting with their use of self-homodyning to study the Mollow triplet in a dynamical setting~\cite{fischer16a}. Initially used as a suppression technique to access the quantum emitter's dynamics by cancelling out the scattered coherent component from their driving laser~\cite{muller16a,dory17a}, they later appreciated the widespread application of their effect and its natural occurrence in other systems~\cite{fischer18a}, where it had passed unnoticed, as well as the benefits of a tunability of the interfering component~\cite{fischer17a}, which they proposed in the form of a partially transmitting element in an on-chip integrated architecture that combines a waveguide with a quantum-dot/photonic-crystal cavity QED platform.
As such, they have made a series of pioneering contributions to the effect of homodyning for quantum engineering and optimization, which promises a great follow-up~\cite{li18a}. In the following, we will provide a unified theory of the mixing of coherent and quantum light that has been developed and implemented throughout the recent years by Fischer \emph{et al.}~\cite{fischer16a,fischer18a}. The possibility and benefits of an external laser to optimize photon correlations also appeared in a work by Van Regemortel \emph{et al.}~\cite{vanregemortel18a}, with a foothold in the same ideas. The effect of tuning two types of driving was emphasized by Xu and Li~\cite{xu14a}, who reported among other notable results how changing their ratio can bring the system from strong antibunching to superbunching, an idea which we will revisit from the point of view of interfering fields through (controlled) homodyning or (self-consistent) self-homodyning. Similar ideas have then been explored and extended several times in many variations of the problem~\cite{zhang14e,tang15a,shen15a,shen15b, li15c, xu13a, xu16a, wang16b, wang17a, cheng17a, deng17a,zhou16a,liu16a,kryuchkyan16a,zhou16b,yu17a} which all fit nicely in the wider picture that we will present. The microcavity-polariton configuration with interactions in one mode only (describing quantum well excitons, the other being a cavity mode) has been studied mainly from the (conventional) polariton blockade point of view~\cite{verger06a,gerace09a}, in which case the (much stronger) unconventional antibunching has been typically overlooked. We will focus in the following on this case rather than on the possibly more popular two weakly-interacting sites: first, because this allows a direct comparison with the Jaynes--Cummings limit; second, because this configuration became timely following the recent experimental breakthrough with polariton blockade~\cite{arXiv_delteil18a,arXiv_munozmatutano17a}.
Whatever the platform, it also needs to be emphasized that while many works have focused on single-photon emission as the spotlight for the effect (which is dubious when antibunching is produced from the unconventional route), others have also stressed different applications or suggested different contextualization, such as phase transitions~\cite{ferretti10a} or entanglement~\cite{casteels17a}, and there is certainly much to exploit from one perspective or another. Among other configurations that cannot be accommodated by Eq.~(\ref{eq:Mon5Jun145532BST2017}) as they add even more components, one could mention examples from works that involve additional modes (three in Refs.~\cite{majumdar12e,kyriienko14c,kyriienko14a}), different types of nonlinearity (e.g., $a^2\ud{b}$ in Refs.~\cite{majumdar13a,gerace14a,zhou15a}), a four-level system in Ref.~\cite{bajcsy13a}, two two-level systems in a cavity in Ref.~\cite{radulaski17a} and up to the general Tavis--Cummings model~\cite{arXiv_trivedi19a}, pulsed coherent control of a two-level system in Ref.~\cite{arXiv_loredo18a}, and two coupled cavities each containing a two-level system~\cite{knap11a,schwendimann12a} up to a complete array~\cite{grujic13a}. It seems however clear to us that the phenomenology reported in each of these particular cases would fall within the classification that we will establish in the remainder of the text, i.e., they can be understood as a homodyning effect of some sort. \section{Homodyne and self-homodyne interferences} \label{sec:ThuFeb15174655CET2018} We will return in the rest of this text to such systems as those discussed in the previous Section---all a particular case or a variation of Eq.~(\ref{eq:Mon5Jun145532BST2017})---to show that the two-photon statistics of their emission can be described to lowest order in the driving by a simple process: the mixing of a squeezed and a coherent state. In this Section, we therefore study this configuration in detail.
We first consider the mixture of any two fields as obtained in one of the output arms of a balanced beam splitter (cf.~Fig.~\ref{fig:1}a), which is fed by a coherent state on one of its arms, with a complex amplitude~$\alpha=|\alpha|e^{i\phi}$, and another field of a general nature, described with annihilation operator $d$, on the other arm. The field that leaves the beam splitter is a mixture of the two input fields, whose annihilation operator can be written as~$s=\alpha+d$, where we are leaving out the normalization ($1/\sqrt{2}$) and $\frac{\pi}{2}$-phase shift in the reflected light as reasoned in Appendix~\ref{app:1}. Within this description, any normally-ordered correlator of the resulting field can be expressed in terms of the inputs as \begin{equation} \label{eq:homodynecorrelators} \corr{s}{n}{m} = \sum_{p=0}^{n} \sum_{q=0}^{m} \binom{n}{p}\binom{m}{q} \alpha^{* p} \alpha^{q} \corr{d}{n-p}{m-q} \,. \end{equation} From this expression, we can compute any relevant observable of the mixture. For instance, the total population is \begin{equation} \mean{n_s} \equiv \pop{s} = |\alpha|^2 + \mean{n_d} + 2 \Re [ \alpha^* \coh{d} ] \,, \end{equation} with $n_d\equiv d^\dagger d$. Apart from the sum of both input intensities, there is a contribution (last term) from the first-order interference between the coherent components of each of the fields or \emph{mean fields}. Similarly, the second-order coherence function, which is defined as \begin{subequations} \label{eq:MonNov27154304CET2017} \begin{align} \label{eq:MonNov27154304CET2017a} g_s^{(2)}(\tau)&=\lim_{t\rightarrow \infty}\frac{\mean{s^\dagger (t) (s^\dagger s )(t+\tau) s (t)}}{[\mean{s^\dagger s}(t)]^2}\,,\\ \label{eq:MonNov27154304CET2017b} & = \frac{\mean{s^\dagger (s^\dagger s )(\tau) s}}{\mean{n_s}^2} \,, \end{align} \end{subequations} can be readily obtained from the correlators in Eq.~(\ref{eq:homodynecorrelators}). 
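Since $\alpha$ enters $s=\alpha+d$ as a multiple of the identity, which commutes with $d$, Eq.~(\ref{eq:homodynecorrelators}) is an exact operator identity and can be checked with truncated matrices. A short numpy sketch (random pure state and an arbitrary $\alpha$, both chosen for illustration only):

```python
import numpy as np
from math import comb

N = 12
d = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # truncated annihilation operator
dd = d.conj().T
alpha = 0.4 - 0.3j

rng = np.random.default_rng(0)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)                   # arbitrary pure state of the field d

ev = lambda op: psi.conj() @ op @ psi
mpow = np.linalg.matrix_power

s = alpha * np.eye(N) + d                    # output field, s = alpha + d

n, m = 2, 3
lhs = ev(mpow(s.conj().T, n) @ mpow(s, m))
rhs = sum(comb(n, p) * comb(m, q) * np.conj(alpha)**p * alpha**q
          * ev(mpow(dd, n - p) @ mpow(d, m - q))
          for p in range(n + 1) for q in range(m + 1))
print(abs(lhs - rhs))                        # vanishes to machine precision
```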
In this text, we omit the time~$t$ in all expressions, as it is considered to be large enough for the system to have reached the steady state under the continuous drive, and will also set the delay~$\tau=0$, thus focusing on coincidences. This simplifies the notation $g_s^{(2)}=g_s^{(2)}(t\rightarrow\infty,\tau=0)$. We will also consider $N$-th order coherence functions, also at zero delay: $g_s^{(N)}\equiv\mean{s^{\dagger N} s^N}/\mean{s^\dagger s}^N$. These correlators can always be written as a polynomial series of powers of the amplitude of the coherent field~$\alpha$: \begin{equation} \label{eq:ThuFeb22113232CET2018} g^{(N)}_s=\frac{\sum_{k=0}^{2N} c_k(\phi) |\alpha|^k}{\mean{n_s}^{N}}\,, \end{equation} where~$c_k(\phi)$ are coefficients that depend on the phase of the coherent field~$\phi$, and mean values of the type~$\mean{d^{\dagger\mu} d^\nu}$ with $\mu+\nu\leq 2N$. In particular, the second-order correlation function, Eq.~(\ref{eq:MonNov27154304CET2017b}), can be rearranged as \begin{equation} \label{eq:g2mixdecomposition} g^{(2)}_{s} = 1 + \mathcal{I}_0 + \mathcal{I}_1 + \mathcal{I}_2\,, \end{equation} with $\mathcal{I}_m \sim |\alpha|^m$~\cite{mandel82a,carmichael85a,vogel91a,vogel95a}, where 1 represents the coherent contribution of the total signal, and the incoherent contributions read \begin{subequations} \label{eq:ThuFeb22121240CET2018} \begin{align} \label{eq:ThuFeb22121240CET2018a} \mathcal{I}_0 &= \frac{\corr{d}{2}{2} - \pop{d}^2}{\mean{n_s}^2}\,,\\ \label{eq:ThuFeb22121240CET2018b} \mathcal{I}_1 &=4\frac{ \Re[\alpha^{*} (\av{\ud{d} d^2}- \pop{d} \av{d})]}{\mean{n_s}^2}\,,\\ \label{eq:ThuFeb22121240CET2018c} \mathcal{I}_2 &= 2\frac{\Re[ {\alpha^*}^{2} \av{d^2}] - 2\Re[\alpha^* \av{d}]^2 + |\alpha|^2 \pop{d}}{\mean{n_s}^2}\\ & =4 \Big[ |\alpha|^2 \left( \cos^2 \phi \ \av{{:}X_{d}^2{:}} + \sin^2 \phi \ \av{{:}Y_{d}^2{:}} +{} \right. \nonumber \\ & \quad \left.
{}+ \cos \phi \sin \phi \ \av{\lbrace X_d , Y_d \rbrace} \right) - \Re[ \alpha^* \av{d}]^2 \Big]/\mean{n_s}^2\,.\nonumber \end{align} \end{subequations} Here, the notation ``$::$'' indicates normal ordering, $\lbrace X_d,Y_d \rbrace = X_dY_d+Y_dX_d$, and $X_d = \frac{1}{2} \left(\ud{d}+d\right) $, $Y_d = \frac{i}{2} \left(\ud{d}- d \right)$ are the quadratures of the field described with the annihilation operator~$d$. Note that there are no explicit terms $\mathcal{I}_3$ and $\mathcal{I}_4$ because, through simplifications, these get absorbed into the term~$\mathcal{I}_1$. \begin{figure} \caption{(Color online) Second-order coherence function $\g{2}$.} \label{fig:1} \end{figure} The decomposition was, to the best of our knowledge, first introduced by Carmichael~\cite{carmichael85a}, and in fact precisely to show that the same quantum-optical phenomenology observed in different systems had the same origin, namely, to root the nonclassical effects observed in optical bistability with many atoms in a cavity in the physics of a single coherently driven atom, i.e., resonance fluorescence. It was on this occasion of unifying squeezing and antibunching from two seemingly unrelated platforms under the same umbrella of self-homodyning that he introduced this terminology. These concepts have thus been invoked and explored to some extent from the earliest days of the field, but it is only recently that they seem to start being fully understood and exploited~\cite{fischer16a,flayac16a,fischer18a,vanregemortel18a,lopezcarreno18b}.
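The decomposition Eq.~(\ref{eq:g2mixdecomposition}) with the terms of Eqs.~(\ref{eq:ThuFeb22121240CET2018}) is an exact algebraic identity, holding for any state of the field $d$ and any $\alpha$, as a quick numerical check illustrates (numpy sketch, random pure state and illustrative $\alpha$ of our choosing):

```python
import numpy as np

N = 12
d = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # truncated annihilation operator
dd = d.conj().T
alpha = 0.35 * np.exp(0.6j)

rng = np.random.default_rng(1)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
ev = lambda op: psi.conj() @ op @ psi

s = alpha * np.eye(N) + d                    # mixed field
ns = ev(s.conj().T @ s).real
g2_s = ev(s.conj().T @ s.conj().T @ s @ s).real / ns**2

nd = ev(dd @ d).real
I0 = (ev(dd @ dd @ d @ d).real - nd**2) / ns**2
I1 = 4 * (np.conj(alpha) * (ev(dd @ d @ d) - nd * ev(d))).real / ns**2
I2 = 2 * ((np.conj(alpha)**2 * ev(d @ d)).real
          - 2 * (np.conj(alpha) * ev(d)).real**2
          + abs(alpha)**2 * nd) / ns**2

print(abs(g2_s - (1 + I0 + I1 + I2)))        # exact identity
```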
Using such interferences to analyse the squeezing properties of a signal of interest (here, the field with annihilation operator~$d$) through a controlled variation of a local oscillator (here, the coherent field~$\alpha$) was first suggested by W.~Vogel~\cite{vogel91a, vogel95a}, subsequently implemented to show squeezing in resonance fluorescence~\cite{schulte15a}, and recently propelled in a series of works from the Vu\v{c}kovi\'c\xspace group, as previously discussed. The physical interpretation of the contributions to the decomposition is as follows: \begin{itemize} \item The numerator of~$\mathcal{I}_0$ is the normally-ordered variance of the signal intensity, that is, $\mean{{:}(\Delta n_d)^2 {:}}=\mean{{:}n_d^2{:}}-\mean{n_d}^2$ with $n_d=d^\dagger d$ and $\Delta n_d=n_d-\mean{n_d}$. Therefore, $\mathcal{I}_0<0$ indicates that the field~$d$ has sub-Poissonian statistics, which in turn contributes to the sub-Poissonian statistics of the total field~$s$. \item The numerator of~$\mathcal{I}_1$ represents the normally-ordered correlation between the fluctuation-field strength and intensity, $\av{\ud{d} d^2}- \pop{d} \av{d}=\mean{{:}\Delta d \, \Delta n_d{:}}$, which has been referred to as \emph{anomalous moments}~\cite{vogel91a,vogel95a} and has recently been measured~\cite{kuhn17a}. A squeezed-coherent state has such correlations. \item The numerator of the last component, $\mathcal{I}_2$, can also be written in terms of the quadratures of the field~$d$. Having $\mathcal{I}_2<0$ necessarily implies that the state of light has a squeezing component. This can be proved by noting that ${:}X_d^2{:} = X_d^2 - 1/4$ (the same for ${:}Y_d^2{:}$).
Then, rearranging the numerator of Eq.~(\ref{eq:ThuFeb22121240CET2018c}) leads to: \begin{equation} \label{eq:I2transformation} 4|\alpha|^2\left[\av{{:}X_{d,\phi}^2{:}}- \av{X_{d,\phi}}^2\right] = 4|\alpha|^2\left[\left( \Delta X_{d,\phi} \right)^2 - \frac{1}{4}\right] \,, \end{equation} where $X_{d,\phi} = (e^{i \phi} \ud{d} + e^{-i \phi} d)/2$ is the quadrature with the same phase as the coherent field (given by the angle $\phi$). If $\mathcal{I}_2 < 0$, the dispersion of $X_{d,\phi}$ must be less than $1/2$, but since $X_{d,\phi}$ and its orthogonal quadrature, $X_{d,\phi+\pi/2}$, must fulfil the Heisenberg uncertainty relation, $\Delta X_{d,\phi} \Delta X_{d,\phi+ \pi/2} \geq 1/4$, then $\Delta X_{d,\phi+ \pi/2} > 1/2$. This necessarily implies that there is a certain degree of squeezing in $d$. Nevertheless, the converse statement is not true. A state with a non-zero degree of squeezing can have $\mathcal{I}_2 \geq 0$, for instance, if the relative direction between the coherent and squeezing contributions fulfils $\theta-2\phi = \pi/2$ (a straightforward example is provided by the displaced squeezed state). Furthermore, if $\mean{d}=0$, the numerator of $\mathcal{I}_2$ simplifies to $4 |\alpha|^2 \av{{:}X_{d,\phi}^2{:}}$. \end{itemize} An analogous procedure allows us to decompose the third-order coherence function, $\g{3}_s= 1+\sum_{m=0}^{4} \mathcal{J}_m$ (the expressions for $\mathcal{J}_m $ are given in Appendix~\ref{app:g3decomp}). Naturally, higher-order-correlator decompositions follow the same rules. As an illustration which will be relevant in what follows, let us consider the interference between a coherent and a squeezed state, as shown schematically in Fig.~\ref{fig:1}(a). The coherent state can be written as~$\ket{\alpha}=D_a(\alpha)\ket{0}$, where~$\alpha = |\alpha|e^{i\phi}$ is the amplitude of the coherent state as before, and $D_a(\alpha)=\exp(\alpha \ud{a}-\alpha^\ast a)$ is the displacement operator of the field with annihilation operator~$a$.
Similarly, the squeezed state may be written as~$\ket{\xi}=S_d(\xi)\ket{0}$, where~$\xi=re^{i\theta}$ is the \emph{squeezing parameter} and $S_d(\xi)=\exp[(\xi^\ast d^{2}-\xi d^{\dagger\,2})/2]$ is the squeezing operator of the field with annihilation operator~$d$. Thus, the state that feeds the beam splitter is~$\ket{\psi_\mathrm{in}}=\ket{\alpha\,,\xi}$. The interference at the beam splitter mixes these two states, and the state of the light that leaves the beam splitter is a two-mode squeezed state that is further squeezed and displaced (the detailed transformation is given in Appendix~\ref{app:1}). Since we are only interested in the output of one of the arms of the beam splitter, we take the partial trace over the other arm and we end up with a \emph{displaced squeezed thermal state}~\cite{lemonde14a}, \begin{equation} \label{eq:ThuFeb22151812CET2018} \rho_{s} = \mathcal{D}_s (\alpha) \mathcal{S}_s (\xi) \rho_{\mathrm{th}} \left(\mean{n_\mathrm{th}}\right) \ud{\mathcal{S}_s} \left(\xi \right) \ud{\mathcal{D}_s} \left(\alpha \right)\,, \end{equation} where now the displacement and squeezing operators correspond to the operator~$s=a+d$, and $\mean{n_\mathrm{th}}$, the thermal population, can be obtained from the population of the squeezed state,~$\mean{n_d}$, which for a balanced beam splitter follows the relation~$\mean{n_\mathrm{th}} = (\sqrt{1+\mean{n_d}}-1)/2$.
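The squeezed-vacuum moments used in the following, $\mean{n_d}=\sinh^2 r$ and $g^{(2)}_d=2+\coth^2 r$, can be checked by building $S_d(\xi)\ket{0}$ in a truncated Fock space. The sketch below (numpy only; truncation and $r$ are illustrative) adopts the convention $S_d(\xi)=\exp[(\xi^\ast d^{2}-\xi d^{\dagger\,2})/2]$, for which $\mean{n_d}=\sinh^2 r$ and $\mean{d^2}=-e^{i\theta}\sinh r\cosh r$; the exponent is anti-Hermitian, so the exponential is computed through the eigenbasis of its Hermitian counterpart:

```python
import numpy as np

N = 40                                      # Fock truncation, ample for r = 0.5
d = np.diag(np.sqrt(np.arange(1.0, N)), 1)
dd = d.conj().T

r, theta = 0.5, 0.0
xi = r * np.exp(1j * theta)

# A = (xi* d^2 - xi d^+2)/2 is anti-Hermitian, so iA is Hermitian and
# exp(A) = V exp(-i w) V^+ with (w, V) the eigensystem of iA
A = 0.5 * (np.conj(xi) * d @ d - xi * dd @ dd)
w, V = np.linalg.eigh(1j * A)
S = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

psi = S[:, 0]                               # squeezed vacuum S|0>
ev = lambda op: psi.conj() @ op @ psi

n_d = ev(dd @ d).real
g2_d = ev(dd @ dd @ d @ d).real / n_d**2
print(n_d, g2_d)                            # sinh^2(r) and 2 + coth^2(r)
```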
The second-order correlation for the total signal is computed as in Eq.~(\ref{eq:MonNov27154304CET2017}), taking the averages using the state in Eq.~(\ref{eq:ThuFeb22151812CET2018}), namely $ g^{(2)}_s = \Tr[\rho_s s^{\dagger\,2}s^2]/\Tr[\rho_s s^\dagger s]^2$, which can be decomposed as in Eq.~(\ref{eq:g2mixdecomposition}) into \begin{subequations} \label{eq:FriOct20184657CEST2017} \begin{align} \label{eq:FriOct20184657CEST2017a} \mathcal{I}_0&= \frac{ \sinh^4(r)}{\av{n_s}^2} [1+\coth^2(r)]\,,\\ \label{eq:FriOct20184657CEST2017b} \mathcal{I}_1&=0\,,\\ \label{eq:FriOct20184657CEST2017c} \mathcal{I}_2&=\frac{2 |\alpha|^2 \sinh^2(r)}{\av{n_s}^2} \left[ 1 - \cos(\theta - 2 \phi) \coth(r) \right] \,, \end{align} \end{subequations} where~$\mean{n_s}=|\alpha|^2 + \sinh^2(r)$. Here, $\mean{d}=0$ and, further, Eq.~(\ref{eq:FriOct20184657CEST2017b}) is exactly zero because, for a squeezed state, the correlators~$\corr{d}{\mu}{\nu}$ vanish when~$\mu+\nu$ is an odd number. Useful expressions for the decomposition of $\g{2}_s$ and $\g{3}_s$ in terms of the incoherent component and the two-photon coherence are given in Appendix~\ref{app:1}. Inspection of Eqs.~(\ref{eq:FriOct20184657CEST2017}) shows that the only way to have $\g{2}_s<1$, that is, sub-Poissonian statistics of the total signal, regardless of the value of the squeezing parameter, is for $\mathcal{I}_2$ to be negative, which implies that the phases of the displacement and the squeezing must be related by~$|\theta-2\phi|<\pi/2$. We take for simplicity the minimizing alignment, $\theta=2\phi$, which means that the coherent and squeezed excitations are driven with the same phase, since the phase of the squeezed state is $\theta/2$. Using such a relation, the interference yields the correlation map shown in Fig.~\ref{fig:1}(b) as a function of the amplitude of the coherent~$|\alpha|$ and squeezing~$r$ intensities.
The black dashed line shows the optimum amplitude of the coherent state that minimizes~$\g{2}_s$ for a given squeezing, which is given by \begin{equation} \label{eq:FriOct20154005CEST2017} |\alpha|_\mathrm{min}=e^{r}\sqrt{\cosh(r)\sinh(r)}\,. \end{equation} Replacing this condition in Eqs.~(\ref{eq:FriOct20184657CEST2017}) we obtain the minimum possible value of~$\g{2}_s$, \begin{equation} \label{eq:FriOct20172135CEST2017} g_{s,\,\mathrm{min}}^{(2)}=1-\frac{e^{-2r}}{1+\sinh(2r)}\leq 1\,. \end{equation} This goes to zero, although at the same time the population also goes to zero. Figure~\ref{fig:1}(d) shows a transverse cut of the correlation map in~(b) along the purple long-dashed line, which corresponds to~$|\alpha|=0.3$. The decomposition and total~$g_s^{(2)}$ are shown as a function of the squeezing parameter, with minimum~$g_s^{(2)}=0.26$ at $r\approx 0.078$. Without the interference with the coherent state, the squeezed state can never have sub-Poissonian statistics. In fact, in such a case the correlations become independent of the phase of the squeezing parameter: \begin{equation} \label{eq:ThuFeb22170058CET2018} g^{(2)}_{d}=g_s^{(2)}|_{\alpha\rightarrow 0}= 2+\coth^2(r)\geq 3\,, \end{equation} which diverges at vanishing squeezing $r\rightarrow 0$ (with also vanishing signal $\av{n_d}=\sinh^2(r)$), and is minimum when squeezing is infinite~$r\rightarrow \infty$. There is a great tunability from such a simple admixture since $g^{(2)}_s$ of the light at the output of the beam splitter can be varied between 0 and $\infty$ simply by adjusting the magnitudes of the coherent field and the squeezing parameter. In particular, the most sub-Poissonian statistics occurs when coherent light interferes with a small amount of squeezing~$r<|\alpha_\mathrm{min}|$, in the right intensity proportion, given by Eq.~(\ref{eq:FriOct20154005CEST2017}).
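The optimization over the coherent amplitude can be reproduced numerically from Eqs.~(\ref{eq:FriOct20184657CEST2017}). The numpy sketch below (phase-matched case $\theta=2\phi$, with $r=0.5$ and a grid of our choosing) recovers $|\alpha|_\mathrm{min}$ and $g^{(2)}_{s,\,\mathrm{min}}$:

```python
import numpy as np

def g2_mix(alpha_abs, r):
    # I_0 and I_2 at the phase-matched point theta = 2 phi (I_1 = 0)
    ns = alpha_abs**2 + np.sinh(r)**2
    I0 = np.sinh(r)**4 * (1 + 1/np.tanh(r)**2) / ns**2
    I2 = 2 * alpha_abs**2 * np.sinh(r)**2 * (1 - 1/np.tanh(r)) / ns**2
    return 1 + I0 + I2

r = 0.5
alphas = np.linspace(1e-3, 3.0, 20001)
g2_grid = g2_mix(alphas, r)

alpha_min = np.exp(r) * np.sqrt(np.cosh(r) * np.sinh(r))
g2_min = 1 - np.exp(-2*r) / (1 + np.sinh(2*r))
print(alphas[np.argmin(g2_grid)], alpha_min, g2_grid.min(), g2_min)
```

The grid minimum lands on the analytic optimum, confirming that the closed-form expressions are the true minimizers.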
Counter-intuitively, $g^{(2)}_s\ll 1$ occurs when the squeezed light itself is, on the opposite, super-Poissonian (even super-chaotic, $g_d^{(2)}> 2$)~\footnote{In fact, in order to have $g_{s,\,\mathrm{min}}^{(2)}<1/2$, it is required that $r<\log{(\sqrt{6}-1)}/2\approx 0.186$, which implies $g_d^{(2)}>31.7$ and $|\alpha_\mathrm{min}|>\sqrt{(3-\sqrt{6})/2}\approx 0.52>r$.}. This is a fundamental result that we will encounter throughout the text when establishing the conditions for, and manipulating, sub-Poissonian statistics and antibunching in various systems under weak coherent driving. An important fact for our classification of photon statistics is that, since the sub-Poissonian behaviour is here due to an interference effect, the set of parameters that suppresses the fluctuations at the two-photon level does not suppress them at all $N$-photon levels, which means that the multi-photon emission cannot be precluded simultaneously at all orders. In other words, the condition in Eq.~(\ref{eq:FriOct20154005CEST2017}) that minimizes $g_s^{(2)}$ also minimizes the two-photon probability in the interference density matrix,~$\bra{2}\rho_s\ket{2}$, at low intensities. But this is not the same condition that minimizes any other photon probability~$\bra{n}\rho_s\ket{n}$. This incompatibility is revealed by the \textit{$n$-norm}, as defined in Ref.~\cite{arXiv_lopezcarreno16c}, which is the distance in the correlation space between the signal $s$ and a perfect single-photon source: \begin{equation} \label{eq:crispsnorm} \norm{(\g{k}_s )}_n = \sqrt[n]{\sum_{k=2}^{n+1} [\g{k}_s]^{n}}\,. \end{equation} In Figure~\ref{fig:1}(c) we show the~$5$-norm for the same range of parameters as in panel~(b). The dashed black line indicates the minimum values of~$\g{2}_s$, which lie in a high-fluctuation region when the higher-order correlation functions are taken into account.
Further increasing~$n$ renders the correlation map completely red, which means that multiphoton emission is not suppressed even if we have $\g{2}_s$ close to zero. This is a feature typical of antibunching that arises from a two-photon interference only, and it suggests that the use of such sources as single-photon sources may be an issue in the context of applications for quantum technology where higher-photon correlations may jeopardize the two-photon suppression. This is related to the fact that this antibunching stems from a Gaussian state, which is the most classical of the quantum states. The discussion presented above and, in particular, the decomposition of the second-order correlation as in Eq.~(\ref{eq:g2mixdecomposition}), are not limited to the particular case of interfering pure states set as initial conditions. This can also be applied to the dynamical case of a single system which itself directly provides a coherent component~$\alpha$ along with another, and therefore quantum, type of component. Calling $s$ the annihilation operator for a particular emitter which has such a coherent---but not exclusively---component in its radiation, one can thus express its emission as the interference (or superposition) of a mean coherent field~$\mean{s}$ and its quantum fluctuations, with operator~$d=s-\mean{s}$. That is, one can always write \begin{equation} \label{eq:FriFeb23144144CET2018} s = \mean{s}+d\,. \end{equation} Following the terminology previously introduced in the literature for a similar purpose~\cite{carmichael85a}, we call this interpretation of the emission a \emph{self-homodyne} effect.
Since $g_s^{(2)}$ is also given by Eq.~(\ref{eq:ThuFeb22121240CET2018}) with the simplification brought by the fact that $\mean{d}=0$, by replacing~$\alpha\rightarrow \mean{s}$ and~$d \rightarrow s-\mean{s}$, we obtain the general expressions in terms of $\corr{s}{n}{m}$ for the emission of a single-emitter~$s$, interfering its own components: \begin{widetext} \begin{subequations} \label{eq:decompositiontermswhole} \begin{align} \mathcal{I}_0 &= \frac{\corr{s}{2}{2} - \av{\ud{s}s}^2 - 4 |\mean{s}|^4 + 6 |\mean{s}|^2 \pop{s}+2\Re[ {\mean{\ud{s}}}^2 \av{s^2} - 2 \mean{\ud{s}} \av{\ud{s} s^2} ] }{\pop{s}^2}\, ,\\ \mathcal{I}_1 &=4 \frac{\Re[ \mean{\ud{s}} \av{\ud{s} s^2} - \mean{\ud{s}}^2 \av{s^2}] + 2 |\mean{s}|^2 \left(|\mean{s}|^2 -\mean{\ud{s}s}\right) }{\pop{s}^2}\, , \\ \mathcal{I}_2 &= 2 \frac{ \Re[\mean{\ud{s}}^{2} \av{s^2}] + |\mean{s}|^2 \pop{s} - 2|\mean{s}|^4 }{\pop{s}^2}\, . \end{align} \end{subequations} \end{widetext} For completeness, in Appendix~\ref{ap:coh-sq} we present the possible models (Hamiltonians and Liouvillians) that produce a coherent state and a squeezed state in a cavity, and give the correspondence between the dynamical parameters (such as the coherent driving and the squeezing intensity) and the abstract quantities~$\alpha$ and~$r$. In the following Sections we will use the self-homodyne decomposition Eqs.~(\ref{eq:decompositiontermswhole}) and this understanding in terms of interferences between coherent and quantum components, which in our cases may or may not be of the squeezing type, to analyse some statistical properties (anti- and super-bunching) of systems in their low-driving regime, and contrast their statistics with conventional blockade effects. 
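Eqs.~(\ref{eq:decompositiontermswhole}) hold exactly for any single-emitter state, since they merely rewrite $g^{(2)}_s$ in terms of the mean field and its fluctuations. A numerical check with a random pure state (numpy sketch; state and truncation are illustrative):

```python
import numpy as np

N = 12
s = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # emitter field operator
sd = s.conj().T

rng = np.random.default_rng(2)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
ev = lambda op: psi.conj() @ op @ psi

m = ev(s)                                    # mean (coherent) field <s>
n = ev(sd @ s).real
s2, sds2 = ev(s @ s), ev(sd @ s @ s)
g2 = ev(sd @ sd @ s @ s).real / n**2

I0 = (ev(sd @ sd @ s @ s).real - n**2 - 4*abs(m)**4 + 6*abs(m)**2 * n
      + 2*(np.conj(m)**2 * s2 - 2*np.conj(m) * sds2).real) / n**2
I1 = 4*((np.conj(m)*sds2 - np.conj(m)**2 * s2).real
        + 2*abs(m)**2 * (abs(m)**2 - n)) / n**2
I2 = 2*((np.conj(m)**2 * s2).real + abs(m)**2 * n - 2*abs(m)**4) / n**2

print(abs(g2 - (1 + I0 + I1 + I2)))          # exact identity
```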
We will focus on cases that are both fundamental and tightly related to each other, namely, the 2LS (resonance fluorescence) in Section~\ref{sec:FriFeb23095933CET2018}, the anharmonic oscillator in Section~\ref{sec:jueene10122434CET2019}, the Jaynes--Cummings Hamiltonian in Section~\ref{sec:FriFeb23100104CET2018} and microcavity polaritons in Section~\ref{sec:FriFeb23100219CET2018}. \section{Resonance fluorescence in the Heitler regime} \label{sec:FriFeb23095933CET2018} \begin{figure} \caption{Second-order coherence function $\g{2}$.} \label{fig:02} \end{figure} We first consider the excitation of a two-level system (2LS) driven by a coherent source in the regime of low excitation---commonly referred to as the \textit{Heitler regime}. Such a system is modeled by the Hamiltonian ($\hbar=1$) \begin{equation} \label{eq:2LShamiltonian} H_\mathrm{rf} = (\omega_\sigma-\omega_\mathrm{L}) \ud{\sigma} \sigma + \Omega_{\sigma} \left(\ud{\sigma} + \sigma \right)\,. \end{equation} This is the particular case of the general Hamiltonian~(\ref{eq:Mon5Jun145532BST2017}) when only one mode is considered and~$U\rightarrow\infty$. Here, the 2LS has a frequency~$\omega_\sigma$ and is described with an annihilation operator~$\sigma$ that follows the pseudospin algebra, whereas the laser is treated classically, i.e., as a complex number, with intensity~$\Omega_\sigma$ (taken real without loss of generality) and frequency~$\omega_\mathrm{L}$. The dynamics only depends on the frequency difference, $\Delta_\sigma\equiv \omega_\sigma-\omega_\mathrm{L}$. The dissipative character of the system is included in the dynamics with a master equation $\partial_t \rho = i \left[ \rho ,H_\mathrm{rf} \right] + (\gamma_\sigma/2) \mathcal{L}_{\sigma} \rho$, where the Lindblad form $\mathcal{L}_{\sigma} \rho = 2 \sigma \rho \ud{\sigma} - \ud{\sigma} \sigma \rho - \rho \ud{\sigma} \sigma$ describes the decay of the 2LS at a rate~$\gamma_\sigma$.
The steady-state solution (computed as indicated in Appendix~\ref{app:2}) can be fully written in terms of two parameters: the 2LS population (or probability to be in the excited state)~$\mean{n_\sigma}\equiv \pop{\sigma}$, and the coherence or mean field~$\alpha \equiv \mean{\sigma}$~\cite{delvalle11a}: \begin{equation} \label{eq:2LSrhoSS} \rho = \begin{pmatrix} 1 - \mean{n_\sigma} & \alpha^\ast \\ \alpha & \mean{n_\sigma} \end{pmatrix}, \end{equation} where \begin{subequations} \label{eq:2LS_observables} \begin{equation} \mean{n_\sigma} = \frac{4 \Omega_{\sigma}^2}{\gamma_\sigma^2 + 4 \Delta_{\sigma}^2 + 8 \Omega_{\sigma}^2}\,, \end{equation} \begin{equation} \alpha = \frac{2 \Omega_{\sigma} (2 \Delta_{\sigma} + i \gamma_\sigma)}{\gamma_\sigma^2 + 4 \Delta_{\sigma}^2 + 8 \Omega_{\sigma}^2}\,. \end{equation} \end{subequations} As a consequence of the fermionic character of the 2LS, it can only sustain one excitation at a time. Therefore, all the correlators different from those in Eq.~(\ref{eq:2LS_observables}) vanish, and in particular the $N$-photon correlations of the two-level system are exactly zero, namely~$\g{N}_\sigma=0$ for~$N\geq 2$. We call this perfect cancellation of correlations to all orders \emph{conventional blockade} or \emph{conventional antibunching} (CA), as it arises from the natural Pauli blocking scenario. To investigate the components of the correlations that ultimately provide the perfect sub-Poissonian behaviour of the 2LS, we separate the mean field from the fluctuations of the signal ($\sigma=\alpha+\epsilon$), in analogy with Eq.~(\ref{eq:FriFeb23144144CET2018}). 
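The steady state Eq.~(\ref{eq:2LSrhoSS}) can be recovered numerically from the master equation. A minimal numpy sketch follows (illustrative parameter values; since the overall phase of $\mean{\sigma}$ depends on sign conventions, only its modulus is compared to the closed form):

```python
import numpy as np

def steady_state_2ls(delta, omega, gamma):
    sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma, basis {|g>, |e>}
    sp = sm.conj().T
    H = delta * sp @ sm + omega * (sp + sm)
    Id = np.eye(2)
    # drho/dt = i[rho,H] + (gamma/2)(2 s rho s+ - s+s rho - rho s+s),
    # vectorized row-major: vec(A rho B) = kron(A, B.T) vec(rho)
    L = (-1j * np.kron(H, Id) + 1j * np.kron(Id, H.T)
         + gamma * (np.kron(sm, sm.conj())
                    - 0.5 * np.kron(sp @ sm, Id)
                    - 0.5 * np.kron(Id, (sp @ sm).T)))
    w, v = np.linalg.eig(L)
    rho = v[:, np.argmin(np.abs(w))].reshape(2, 2)
    rho = rho / np.trace(rho)
    return (rho + rho.conj().T) / 2

delta, omega, gamma = 0.4, 0.3, 1.0
rho = steady_state_2ls(delta, omega, gamma)
denom = gamma**2 + 4*delta**2 + 8*omega**2
n_ana = 4*omega**2 / denom                           # population
alpha_abs_ana = 2*omega*np.sqrt(gamma**2 + 4*delta**2) / denom
print(rho[1, 1].real, n_ana, abs(rho[1, 0]), alpha_abs_ana)
```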
Following~Eqs.~(\ref{eq:decompositiontermswhole}), $\g{2}_\sigma$ can be decomposed as in Eq.~(\ref{eq:g2mixdecomposition}) with, \begin{subequations} \label{eq:FriFeb23150914CET2018} \begin{align} \mathcal{I}_0 &= \frac{|\alpha|^2 \left( 6 \mean{n_\sigma}- 4 |\alpha|^2 \right)}{\mean{n_\sigma}^2} - 1\,,\\ \mathcal{I}_1 &= -8 \frac{ |\alpha|^2 \left( \mean{n_\sigma}- |\alpha|^2 \right)}{\mean{n_\sigma}^2}\,, \\ \label{eq:2LSg2decompositiontermsI2} \mathcal{I}_2 &= 2 \frac{ |\alpha|^2 \left( \mean{n_\sigma}- 2 |\alpha|^2 \right)}{\mean{n_\sigma}^2}\,. \end{align} \end{subequations} These are presented in Fig.~\ref{fig:02}(a) as a function of the intensity of the driving laser. The decomposition shows that, although the photon correlations of the 2LS are always perfectly sub-Poissonian, or antibunched~\footnote{Equivalence of sub-Poissonianity with antibunching follows from $\lim_{\tau\rightarrow \infty}g_\sigma^{(2)}(\tau)=1$ for a continuously driven system, so that $g_\sigma^{(2)}=0$ also implies $g_\sigma^{(2)}<g_\sigma^{(2)}(\tau)$.}, the nature of their cancellation varies depending on the driving regime~\cite{lopezcarreno18b}. In the high-driving regime, the coherent component is compensated by the sub-Poissonian statistics of the quantum fluctuations ($\mathcal{I}_0<0$) since $\lim_{\Omega_\sigma \rightarrow \infty} \alpha =0$ and the fluctuations become the total field, $\epsilon\rightarrow \sigma$. In contrast, in the Heitler regime the coherent component is compensated by the super-Poissonian but also squeezed fluctuations~($\mathcal{I}_2<0$). The Heitler regime is, therefore, an example of the type of self-homodyne interference that we discussed in Sec.~\ref{sec:ThuFeb15174655CET2018}.
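That the three terms always conspire to give $g^{(2)}_\sigma=0$, with the compensating mechanism switching from $\mathcal{I}_2$ (Heitler regime) to $\mathcal{I}_0$ (high driving), can be made explicit numerically (numpy sketch, resonant case, driving strengths chosen for illustration):

```python
import numpy as np

def i_terms(delta, omega, gamma):
    # steady-state inputs of the 2LS: population and |<sigma>|^2
    D = gamma**2 + 4*delta**2 + 8*omega**2
    n = 4*omega**2 / D
    a2 = 4*omega**2 * (gamma**2 + 4*delta**2) / D**2
    I0 = a2*(6*n - 4*a2)/n**2 - 1
    I1 = -8*a2*(n - a2)/n**2
    I2 = 2*a2*(n - 2*a2)/n**2
    return I0, I1, I2

# the terms always sum to -1, enforcing g2 = 0 at any driving strength
sums = [1 + sum(i_terms(0.0, om, 1.0)) for om in (0.01, 0.3, 10.0)]

I0_low, _, I2_low = i_terms(0.0, 0.01, 1.0)     # Heitler regime: I2 ~ -2
I0_high, _, I2_high = i_terms(0.0, 10.0, 1.0)   # high driving: I0 ~ -1
print(sums, I2_low, I0_high)
```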
Let us then analyse more closely the fluctuations by looking into their correlation functions: \begin{equation} \label{eq:FriFeb23154045CET2018} \mean{\epsilon^{\dagger k}\epsilon^l} =(-1)^{k+l}\alpha^{\ast\,k-1}\alpha^{l-1}\big( |\alpha|^2 (1-k-l+kl) +kl\,\mean{n_\epsilon}\big)\,, \end{equation} where~$\mean{n_\epsilon} = \mean{n_\sigma}-|\alpha|^2$ is the contribution from the fluctuations to the total population of the 2LS. More details are given in Appendix~\ref{sec:FriFeb23221121CET2018}. In particular, the~$N$-photon correlations from the fluctuations alone are given by \begin{equation} \label{eq:FriFeb23155745CET2018} g^{(N)}_\epsilon =\frac{|\alpha|^{2(N-1)} \big(N^2\,\mean{n_\sigma}+(1-2N)|\alpha|^2\big)}{(\mean{n_\sigma}- |\alpha|^2)^N}\,, \end{equation} which in terms of the physical parameters reads \begin{equation} \label{eq:FriFeb23174312CET2018} \g{N}_\epsilon = \frac{ \left[\left(N-1\right)^2 \left(\gamma_\sigma^2 +4 \Delta_{\sigma}^2 \right) + 8 N^2 \Omega_{\sigma}^2\right] \left(\gamma_\sigma^2 +4 \Delta_{\sigma}^2 \right)^{N-1}}{8^{N}\, \Omega_{\sigma}^{2N}}\,. \end{equation} In Fig.~\ref{fig:02}(b) we plot $\g{2}_\epsilon$, confirming that fluctuations are sub-Poissonian or super-Poissonian depending on whether the effective driving defined as~$\Omega_\mathrm{eff}\equiv\Omega_\sigma/\sqrt{1+(2\Delta_\sigma/\gamma_\sigma)^2}$ is much larger or smaller than the system decay~$\gamma_\sigma$, respectively (the figure is for the resonant case). In the Heitler regime, we need to consider only the magnitudes up to leading order in the effective normalised driving~$p\equiv 2\Omega_\mathrm{eff}/\gamma_\sigma$. The main contribution to the intensity~$\mean{n_\sigma}=|\alpha|^2+\mean{n_\epsilon}$, in the absence of pure dephasing, comes from the coherent part~$|\alpha|^2$ of the signal.
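Both expressions for $g^{(N)}_\epsilon$ can be cross-checked numerically; note that the factor $(\gamma_\sigma^2+4\Delta_\sigma^2)^{N-1}$ must appear in the numerator of the parameter form for the expression to be dimensionless (numpy sketch, with illustrative parameter values):

```python
import numpy as np

def gN_eps_from_moments(Nc, delta, omega, gamma):
    # g^(N) of the fluctuations from the steady-state moments of the 2LS
    D = gamma**2 + 4*delta**2 + 8*omega**2
    n = 4*omega**2 / D                                # <n_sigma>
    a2 = 4*omega**2 * (gamma**2 + 4*delta**2) / D**2  # |<sigma>|^2
    return a2**(Nc - 1) * (Nc**2 * n + (1 - 2*Nc)*a2) / (n - a2)**Nc

def gN_eps_params(Nc, delta, omega, gamma):
    # same quantity in terms of the physical parameters; the factor
    # (gamma^2 + 4 delta^2)^(N-1) sits in the numerator
    D0 = gamma**2 + 4*delta**2
    return (((Nc - 1)**2 * D0 + 8*Nc**2*omega**2) * D0**(Nc - 1)
            / (8**Nc * omega**(2*Nc)))

vals = [(gN_eps_from_moments(Nc, 0.3, 0.2, 1.0),
         gN_eps_params(Nc, 0.3, 0.2, 1.0)) for Nc in (2, 3, 4)]
print(vals)   # the two forms agree identically
```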
Fluctuations only appear at the next order, having, up to fourth order in $p$: \begin{subequations} \begin{align} \mean{n_\sigma}&=p^2-2p^4 \,,\\ |\alpha|^2&=p^2-4p^4\,,\\ \mean{n_\epsilon}&=2p^4\,. \end{align} \end{subequations} The coherent contribution corresponds to the elastic (also known as ``Rayleigh'') scattering of the laser photons by the two-level system, while the fluctuations originate from the two-photon excitation and re-emission~\cite{dalibard83a}. In the spectrum of emission, this manifests as a superposition of a delta peak and a Lorentzian peak with exactly these weights, $|\alpha|^2$ and $\mean{n_\epsilon}$, both centered at the laser frequency, with no width (for an ideal laser) and width~$\gamma_\sigma$, respectively~\cite{mollow69a,loudon_book00a,lopezcarreno18b}. Fluctuations have no coherent intensity by construction, $\mean{\epsilon}=0$. At the same time, their second moment is not zero but exactly the opposite of that of the coherent field: $\mean{\epsilon^2}=-\alpha^2$, thanks to the fact that $\mean{\sigma^2}=\alpha^2+\mean{\epsilon^2}=0$. This means that both contributions, coherent and incoherent, are of the same order in the driving~$p$ when it comes to two-photon processes and can, therefore, interfere and even cancel each other. This is precisely what happens and is made explicit in the $g^{(2)}_\sigma$-decomposition above. The strong two-photon interference ($\mathcal{I}_2$) can compensate the Poissonian and super-Poissonian statistics of the coherent and incoherent parts of the signal ($1+\mathcal{I}_0$). Since quadrature squeezing is created by a displacement operator, or a Hamiltonian, based on the operator~$\epsilon^2$, this situation corresponds to a high degree of quadrature squeezing for the fluctuations.
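The fourth-order series can be checked against the exact resonant steady-state expressions, with residuals scaling as $p^6$ (numpy sketch, sample values of $p$ of our choosing):

```python
import numpy as np

gamma = 1.0
for p in (0.02, 0.05):
    omega = p * gamma / 2                    # resonant case, p = 2 Omega / gamma
    D = gamma**2 + 8*omega**2
    n_sigma = 4*omega**2 / D                 # exact steady-state population
    a2 = 4*omega**2 * gamma**2 / D**2        # exact |<sigma>|^2
    n_eps = n_sigma - a2
    # residuals of the fourth-order series, all O(p^6)
    print(p, n_sigma - (p**2 - 2*p**4), a2 - (p**2 - 4*p**4), n_eps - 2*p**4)
```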
Further analysis of the fluctuation quadratures (details of the calculation are in Appendix~\ref{sec:FriFeb23221121CET2018}) shows that their variances behave similarly to a \emph{squeezed thermal} state in the Heitler regime, which allows us to derive an effective squeezing parameter~$r_\mathrm{eff}=p^2$ to describe the state to lowest order in the driving. This is plotted in Fig.~\ref{fig:02}(b) as a function of the driving, with the line becoming dashed when the interference can no longer be described in terms of a squeezed thermal state. Note that the total signal has no squeezing at low driving, only fluctuations do, because the coherent contribution is much larger. \begin{figure} \caption{(Color online) Interference between the output of resonance fluorescence and an external laser of intensity proportional to~$\mathcal{F}$.} \label{fig:WedNov29150123CET2017} \end{figure} Resonance fluorescence by itself always provides antibunching, due to the perfect cancellation of the various components. However, one can disrupt this by manipulating the coherent fraction, simply by interfering the signal~$\sigma$ in a beam splitter with an external coherent state $\ket{\beta}$. This allows one to change the photon statistics of the total signal $s= \mathrm{T} \sigma + i \mathrm{R} \beta $, where $\mathrm{T}^2$ and $\mathrm{R}^2$ are the transmittance and reflectance of the beam splitter. Actually, since the decomposition affects correlators to all orders, Eq.~(\ref{eq:FriFeb23155745CET2018}), one can target the $N$-photon level instead of the 2-photon one. Namely, one can decide to set the $N$-photon coherence function to zero. As a particular case, the 1-photon case cancels the signal altogether, which is obtained by solving the condition $\mean{n_s}=\mathrm{T}^2|\alpha + i \mathrm{R} \beta_1/\mathrm{T}|^2 =0$ (because $\mean{n_\epsilon} =0$ to second order in $\Omega_\sigma$).
We will show that the possibility to target one~$N$ in isolation of the others introduces a separate regime from conventional blockade. Given their relationship and in line with the terminology found in the literature, we refer to this as \emph{unconventional blockade} and \emph{unconventional antibunching} (UA). With this objective of tuning $N$-photon statistics and in order to avoid referencing the specificities of the beam splitter which do not change the normalized observables, let us define~$\beta' \equiv \mathrm{R} \beta/\mathrm{T} \equiv |\beta'|e^{i\phi}$ and parametrise its amplitude as a fraction~$\mathcal{F}$ (always a positive number) of the laser field exciting the 2LS: \begin{equation} \label{eq:SatFeb24130524CET2018} |\beta'| = \frac{\Omega_\sigma}{\gamma_\sigma}\mathcal{F} \,. \end{equation} With this, the coherence function $g^{(N)}_s$ of the interfered field in the Heitler regime is given by: \begin{multline} \label{eq:jueene17193740GMT2019} g^{(N)}_s = \frac{\mathrm{T}^{2N}}{\av{n_s}^N}\frac{\mathcal{F}^{2 (N-1)} \Omega_{\sigma}^{2N}} {\gamma_{\sigma}^{2N}\left(\gamma_{\sigma}^2 + 4 \Delta_\sigma^2\right) } \times \\ \big[\mathcal{F}^2 \left(\gamma_{\sigma}^2 + 4 \Delta_{\sigma}^2 \right)+ 4 N \mathcal{F} \gamma_{\sigma} \left(\gamma_{\sigma} \cos \phi - 2 \Delta_\sigma \sin \phi\right) +{}\\{}+ 4 N^2 \gamma_{\sigma}^2\big] \,. \end{multline} Since $g_s^{(1)}=1$, this expression also provides the population~$\av{n_s}$ by considering the case~$N=1$. One can appreciate the considerable enrichment brought by the interfering laser by comparing $\av{n_s}$ (with the interfering laser, $\mathcal{F}\neq0$) to Eq.~(\ref{eq:2LS_observables}(a)) (without, $\mathcal{F}=0$) and even more so by comparing the $N$-photon correlation function, which is identically zero without the interfering laser, and that becomes Eq.~(\ref{eq:jueene17193740GMT2019}) with the interfering laser. 
Interestingly, there is now another condition that suppresses the correlations and yields perfect antibunching at a given $N$-photon order, in addition to the one obtained in the original system without the interfering laser~(CA). The new conditions exist for any detuning and are given by: \begin{equation} \label{eq:2LSconds} \tan \phi_N = - \frac{2\Delta_\sigma}{\gamma_\sigma} \quad \quad \mathrm{and} \quad\quad \mathcal{F}_N = -2 N \cos \phi_N\,. \end{equation} Focusing on the resonant case for simplicity, we have $\mathcal{F}_N=2N$ and always the same phase, $\phi_N=\pi$, which corresponds to the field $i\beta_N'=-iN|\alpha|$. The total coherent fraction changes phase for all $N$: $\alpha + i \beta_N' = -(N-1)\alpha$. The signal population ($\mean{n_s}=G^{(1)}_s$) vanishes due to a first-order (or one-photon) interference at the external laser parameter $\mathcal{F}_1 = 2 $, which translates into $i\beta_1'=-i|\alpha|$. The external laser completely compensates the coherent fraction of resonance fluorescence, in this case $\alpha=i|\alpha|$ (with $|\alpha|=2\Omega_\sigma/\gamma_\sigma$). This situation corresponds to a \emph{classical destructive interference}, which equally occurs between two fully classical laser beams with the same intensity and opposite phase. In the case of highest interest, that of two-photon correlations~$g^{(2)}_s$, we find a destructive two-photon interference at the intensity $\mathcal{F}_2=4$, which corresponds to an external laser $i\beta_2'=-2i|\alpha|$, that fully inverts the sign of the coherent fraction in the total signal: $\alpha + i \beta_2' = -\alpha$. This coherent contribution leads to perfect cancellation of the two-photon probability in a wavefunction approach~\cite{visser95a} (see the details in Appendix~\ref{sec:WedFeb28173330CET2018}). Note that this does not, however, satisfy all other $N$-photon interference conditions and $g^{(N)}_s$ with $N>2$ do not vanish.
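The conditions of Eq.~(\ref{eq:2LSconds}) can be verified directly (our check): substituting them into the square bracket of the $g^{(N)}_s$ expression above makes that bracket, and hence $g^{(N)}_s$, vanish for any detuning:

```python
# Check that (tan(phi_N), F_N) from Eq. (2LSconds) annihilate the square
# bracket of the g^(N)_s expression, for arbitrary detunings.
from math import atan, cos, sin, pi

def bracket(F, phi, N, gamma, Delta):
    """Square bracket of the g^(N)_s formula; g^(N)_s = 0 iff it vanishes."""
    G2 = gamma ** 2 + 4 * Delta ** 2
    return (F ** 2 * G2
            + 4 * N * F * gamma * (gamma * cos(phi) - 2 * Delta * sin(phi))
            + 4 * N ** 2 * gamma ** 2)

for gamma, Delta in [(1.0, 0.0), (1.0, 0.7), (2.0, -1.3)]:
    for N in range(1, 5):
        phi_N = atan(-2 * Delta / gamma) + pi   # branch with cos(phi_N) < 0
        F_N = -2 * N * cos(phi_N)               # positive by construction
        assert abs(bracket(F_N, phi_N, N, gamma, Delta)) < 1e-10
```

At resonance this reduces to $\mathcal{F}_N^2\gamma_\sigma^2-8N^2\gamma_\sigma^2+4N^2\gamma_\sigma^2=0$ for $\mathcal{F}_N=2N$, $\phi_N=\pi$.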
This is a very different situation from the original resonance fluorescence where $g^{(N)}_\sigma=0$ for all $N>1$. One, the conventional scenario, arises from an interference that takes place at all orders. The other, the unconventional scenario, results from an interference that is specific to a given number of photons. \begin{table*}[ht] \centering \begin{ruledtabular} \begin{tabular}{c|c|c||c|c} $\g{N}$ & Squeezed Thermal & Heitler fluctuations & Displaced Squeezed Thermal & Laser-corrected configuration \\ \hline & & & & \\ [-2ex] $n_a$ & $\frac{32\Omega_\sigma^4}{ \Gamma_\sigma^4} + \frac{1792 \Omega_\sigma^8}{3 \Gamma_\sigma^8} + O\left(\Omega_\sigma^{12}\right)$ & $\frac{32\Omega_\sigma^4}{ \Gamma_\sigma^4} - \frac{512 \Omega_\sigma^6}{ \Gamma_\sigma^6} + O\left(\Omega_\sigma^{8}\right)$ & $\frac{4 \Omega_\sigma^2}{\Gamma_\sigma^2} + \frac{32 \Omega_\sigma^4}{\Gamma_\sigma^4} + O\left(\Omega_\sigma^6\right)$ & $\frac{4 \Omega_\sigma^2}{\Gamma_\sigma^2} - \frac{32 \Omega_\sigma^4}{\Gamma_\sigma^4} + O\left(\Omega_\sigma^6\right)$ \\ $\g{2}$ & $\frac{\Gamma_\sigma^4}{64 \Omega_\sigma^4} + \frac{11}{4} + O\left(\Omega_\sigma^4\right)$ & $\frac{\Gamma_\sigma^4}{64 \Omega_\sigma^4} + \frac{\Gamma_\sigma^2}{2 \Omega_\sigma^2} + O\left(\Omega_\sigma^0\right) $ & $0 +\frac{32 \Omega_\sigma^2}{\Gamma_\sigma^2} + O\left(\Omega_\sigma^4\right)$ & $ 0 + \frac{128 \Omega_\sigma^2}{\Gamma_\sigma^2} + O\left(\Omega_\sigma^4\right) $ \\ [1.5ex] $\g{3}$ & $\frac{9 \Gamma_\sigma^4}{64 \Omega_\sigma^4} + \frac{51}{4} + O\left(\Omega_\sigma^4\right)$ & $\frac{\Gamma_\sigma^6}{128 \Omega_\sigma^6} + \frac{9 \Gamma_\sigma^4}{64 \Omega_\sigma^4} + O\left({1\over\Omega_\sigma^{2}}\right) $ & $16 +\frac{768 \Omega_\sigma^2}{\Gamma_\sigma^2} + O\left(\Omega_\sigma^{4}\right) $ & $ 4 - \frac{96 \Omega_\sigma^2}{\Gamma_\sigma^2} + O\left(\Omega_\sigma^4\right)$ \\ [1ex] \end{tabular} \end{ruledtabular} \caption{Two-level system.
Comparison of first- (population), second- and third-order photon correlations, i) between a Squeezed Thermal state and the fluctuations in the Heitler regime and ii) between a Displaced Squeezed Thermal state and the fluctuations in the laser-corrected Heitler regime, to various orders in the driving~$\Omega_\sigma$.} \label{tab:fluctuations} \end{table*} \begin{table*}[ht] \centering \begin{ruledtabular} \begin{tabular}{c|c|c||c|c} $\g{N}$ & Displaced Squeezed Thermal & AO antibunching & Displaced Squeezed Thermal & Laser-corrected configuration \\ \hline & & & & \\ [-2ex] $n_a$ & $ 2.89 \, \Omega_b^2 + 4.63 \, \Omega_b^4 + O \left(\Omega_b^6 \right) $ & $ 2.89 \, \Omega_b^2 - 10.36 \, \Omega_b^4 + O \left(\Omega_b^6\right)$ & $ 1.52 \, \Omega_b^2 + 4.63 \, \Omega_b^4 + O\left(\Omega_b^6\right)$ & $1.52 \, \Omega_b^2 - 3.25 \, \Omega_b^4 + O\left(\Omega_b^6\right)$ \\ $\g{2}$ & $ 0.38 + 5.18 \, \Omega_b^2 + O\left(\Omega_b^4\right)$ & $ 0.38 + 0.91 \, \Omega_b^2 + O\left(\Omega_b^4 \right) $ & $0 + 12.75 \, \Omega_b^2 + O\left(\Omega_b^4\right)$ & $ 0 + 47.84 \, \Omega_b^2 + O\left(\Omega_b^4\right) $ \\ [1.5ex] $\g{3}$ & $ 0.80 + 1.64 \, \Omega_b^2 + O\left(\Omega_b^4\right)$ & $ 0.06 + 0.37 \, \Omega_b^2 + O\left(\Omega_b^4\right)$ & $ 4 - 34.52 \, \Omega_b^2 + O\left(\Omega_b^{4}\right) $ & $ 0.71 + 0.78 \, \Omega_b^2 + O\left({\Omega_b^{4}}\right) $ \\ [1ex] \end{tabular} \end{ruledtabular} \caption{Anharmonic oscillator. Comparison of first- (population), second- and third-order photon correlations, i) between a Displaced Squeezed Thermal state and the anharmonic oscillator with $\Delta_b = \Delta_{-}$ (optimal antibunching) and ii) between a Displaced Squeezed Thermal state and the laser-corrected configuration for optimal $\g{2}$ ($\mathcal{F}_{2,2}$ and $\phi_{2,2}$). Selected parameters: $\gamma_b = U = 1$.} \label{tab:AOtable} \end{table*} All these interferences can be seen in Fig.~\ref{fig:WedNov29150123CET2017}(a) where we plot them up to $N=4$.
When there is no interference with the external laser, $\mathcal{F}=0$, antibunching is perfect to all orders, recovering resonance fluorescence. At the one-photon interference, the denominator of $g^{(N)}_s$ becomes zero and the functions, therefore, diverge. This produces a superbunching effect of a classical origin, as previously discussed: a destructive interference effect that brings the total intensity to zero. In this case, the external laser completely removes, by destructive interference, the coherent fraction of the total signal. Therefore, the statistics is that of the fluctuations alone, what we previously called $g^{(N)}_\epsilon$, given by Eq.~\eqref{eq:FriFeb23155745CET2018}. We have already discussed how, in the Heitler regime, fluctuations become super-chaotic and squeezed. We can see, on the left hand side of Fig.~\ref{fig:1}, that in the limit of $\Omega_\sigma \rightarrow 0$, they actually diverge. Such a superbunching is thus linked to noise. The resulting state is missing the one-photon component and, consequently, the next (dominating) component is the two-photon one. Nevertheless, there is no suppression mechanism for components with a higher number of photons, so the relevance of such a configuration for multiphoton (bundle) emission remains to be investigated; this is, however, better left for future work. We call this feature \emph{unconventional bunching} (UB) in contrast with bunching that results from a $N$-photon de-excitation process that excludes explicitly the emission of other photon-numbers. This superbunching, as well as the antibunching by destructive interferences, will reappear in the systems studied next. The Heitler regime is, therefore, a simple but rich system where all the squeezing-originated interferences already occur, although we need an external laser to have them manifest. We now turn to the subtle point of which quantum state is realized by the various scenarios.
To lowest-order in the driving, the dynamical state of the system can be described by a superposition of a coherent and a squeezed state, insofar as only the lower-order correlation functions (namely, population and~$\g{2}$) are considered. This is shown in Table~\ref{tab:fluctuations}, where we compare $g^{(N)}$ for $1\le N\le 3$ (with $N=1$ corresponding to the population) for the fluctuations in the Heitler regime vs the corresponding observables for a squeezed thermal state, on the one hand, and the laser-corrected configuration vs the displaced squeezed thermal state on the other hand. As explained in Appendix~\ref{sec:FriFeb23221121CET2018}, such a comparison can be made by identifying the squeezing parameter and thermal populations to various orders in a series expansion of the quantum states with the corresponding observables from the dynamical systems. One finds: \begin{align} \mathrm{r}_\mathrm{eff} = \frac{4 \Omega_\sigma^2}{\Gamma_\sigma^2}, && \mathrm{p}_\mathrm{eff} = \frac{16 \Omega_\sigma^4}{\Gamma_\sigma^4}, \end{align} where we have defined $\Gamma_{\sigma}^2 = \gamma_\sigma^2 + 4 \Delta_\sigma^2$. By definition, fluctuations have a vanishing mean, i.e., $\mean{\epsilon} = 0 $ so we must choose $\alpha = 0$. On the other hand, for the corrected emission, since one is blocking the two-photon contribution (at first order, this gives $\g{2} = 0 $), the comparison with a displaced thermal state is obtained by imposing the condition for $\g{2}$ to vanish at first order ($r = |\alpha|^2$ and $\theta = 2 \, \phi $). The results are compiled in the table up to the order at which the results differ. Through the typical observables that are the population and $g^{(2)}$, one can see how the system is indeed well described to lowest order in the driving by a coherent squeezed thermal state (displaced if there is a laser-correction). 
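The ``Squeezed Thermal'' column of Table~\ref{tab:fluctuations} can be reproduced from Gaussian (Wick) moments alone (our cross-check): for a zero-mean Gaussian state with population $n$ and anomalous moment $m=\mean{a^2}$, one has $g^{(2)}=2+|m|^2/n^2$ and $g^{(3)}=6+9|m|^2/n^2$, and inserting $r_\mathrm{eff}$ and the thermal population $p_\mathrm{eff}$ (written below in terms of $x\equiv\Omega_\sigma^2/\Gamma_\sigma^2$) recovers the tabulated series:

```python
# Squeezed thermal state correlations from Gaussian (Wick) moments.
from math import cosh, sinh

def squeezed_thermal_gN(r, n_th):
    """Population, g2 and g3 of a (zero-mean) squeezed thermal state."""
    n = n_th * cosh(2 * r) + sinh(r) ** 2        # <a†a>
    m2 = ((n_th + 0.5) * sinh(2 * r)) ** 2       # |<a a>|^2
    return n, 2 + m2 / n ** 2, 6 + 9 * m2 / n ** 2

x = 0.01                                         # x = (Omega_sigma/Gamma_sigma)^2
n, g2, g3 = squeezed_thermal_gN(4 * x, 16 * x ** 2)   # r_eff, p_eff
assert abs(n - 32 * x ** 2) < 1e-4               # 32 Omega^4 / Gamma^4
assert abs(g2 - (1 / (64 * x ** 2) + 11 / 4)) < 0.01
assert abs(g3 - (9 / (64 * x ** 2) + 51 / 4)) < 0.1
```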
However, to next order, there is a departure, showing that the Gaussian-state representation is an approximation valid up to second order only. In fact, for three-photon correlations, the disagreement occurs already at the lowest order in the driving, and is of a qualitative character, as is also shown in the table. Therefore, such a description is handy but breaks down if a high-enough number of photons or too high a pumping is considered. \section{Anharmonic blockade} \label{sec:jueene10122434CET2019} \begin{figure} \caption{Second-order coherence function $\g{2}_b$ of the anharmonic oscillator as a function of the detuning.} \label{fig:ao1} \end{figure} To show that the effects of conventional (self-homodyne interference at all~$N$) and unconventional (self-homodyne interference at a given $N$ only) blockades take place in a general setting and are not specific to strong quantum nonlinearities (such as a 2LS), we now address the case of a single anharmonic oscillator, which describes an interacting bosonic mode with a Kerr-type nonlinearity that can be very weak. With driving by a coherent source (a laser) at frequency $\omega_{\mathrm{L}}$, its Hamiltonian reads \begin{equation} H_\mathrm{ao} = \Delta_b \, \ud{b} b + \frac{U}{2} \, \ud{b} \ud{b} b b + \Omega_b (\ud{b} + b), \end{equation} where the cavity operators are represented by $\ud{b}$ and $b$, $\Delta_b = \omega_b - \omega_{\mathrm{L}}$ is the detuning between the cavity and the laser, $U$ denotes the particle interaction strength (that provides the nonlinearity) and the driving amplitude is given by $\Omega_b$. This is the particular case of the general Hamiltonian~(\ref{eq:Mon5Jun145532BST2017}) when only one mode is considered and~$U$ remains finite and, generally, small. The level structure of this system (at vanishing driving) is given by the simple expression~$E^{(N)}=N\omega_b+N(N-1)U/2$. The condition for the laser frequency to hit resonantly the $N$-photon level is $\omega_\mathrm{L}=E^{(N)}/N$ (or $\Delta_b=-(N-1)U/2$).
We restrict our analysis of the dynamics $\dot{\rho} = - i \, [H_\mathrm{ao}, \rho] + (\gamma_b/2) \mathcal{L}_b \rho$, with $\gamma_b$ the decay rate of the mode, to the case of vanishing pumping, i.e., $\Omega_b \ll \gamma_b$. Solving the correlator equations in this limit gives the population \begin{equation} \label{eq:jueene17201516GMT2019} \av{n_b} = \frac{4 \Omega_b^2}{\gamma_b^2 + 4 \Delta_b^2}\,, \end{equation} the 2nd-order Glauber correlator \begin{equation} \label{eq:jueene17201437GMT2019} \g{2}_b = \frac{\av{\ud{b} \ud{b} b b}}{\av{\ud{b} b}^2} = \frac{\big(\gamma_b^2 + 4 \Delta_b^2 \big)}{\gamma_b^2 + (U + 2 \Delta_b)^2}\,, \end{equation} as well as the higher-order correlators \begin{equation} \g{N}_b = \frac{\big( \gamma_b^2 + 4 \Delta_b^2 \big)^{N-1}}{\prod_{k = 1}^{N-1} \big[ \gamma_b^2 + \big(k U + 2 \Delta_b \big)^2 \big]}\,. \end{equation} This shows that, when scanning in frequency, $\g{2}_b$ has two extrema, one minimum and one maximum, as can be seen in Fig.~\ref{fig:ao1}, whose positions are given by \begin{equation} \Delta_{\pm} = - \frac{1}{4} \Big(U \pm \sqrt{U^2 + 4 \gamma_b^2} \, \Big), \end{equation} with respective optimum antibunching~($-$) and bunching~$(+)$ \begin{equation} \g{2}_b \big(\Delta_b = \Delta_{\pm}\big) = 1 + \frac{U \Big( U \pm \sqrt{U^2 + 4 \gamma_b^2} \Big)}{2 \gamma_b^2}. \end{equation} Both of these features are linked to the level structure: the antibunching condition is that of resonantly driving the first rung, $E^{(1)}$ (note that $\Delta_-\sim 0$, especially when $U\gg \gamma_b$), and the bunching condition, that of driving the second rung, $E^{(2)}$ ($\Delta_+\sim -U/2$). In both cases, all other rungs are off-resonance and will remain much less occupied. Therefore, these effects are of a \emph{conventional} nature, as we have defined it in the previous section: CA and \emph{conventional bunching} (CB), respectively.
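These closed forms are straightforward to confirm numerically. The sketch below (ours, not the paper's code; the Fock-space truncation and the least-squares steady-state solve are our implementation choices) builds the Liouvillian of the driven anharmonic oscillator, compares the resulting population and $\g{2}_b$ with the formulas, and locates the extrema $\Delta_\pm$ by scanning the analytic $\g{2}_b$:

```python
import numpy as np

def ao_g2(U, gamma, Delta, Omega, nmax=6):
    """Steady-state population and g2 of the driven anharmonic oscillator,
    from the Liouvillian truncated at nmax Fock states."""
    b = np.diag(np.sqrt(np.arange(1.0, nmax)), k=1).astype(complex)
    bd = b.conj().T
    num = bd @ b
    H = Delta * num + 0.5 * U * (bd @ bd @ b @ b) + Omega * (bd + b)
    I = np.eye(nmax, dtype=complex)
    # column-stacking convention: vec(A rho B) = kron(B.T, A) vec(rho)
    L = (-1j * (np.kron(I, H) - np.kron(H.T, I))
         + 0.5 * gamma * (2 * np.kron(bd.T, b)
                          - np.kron(I, num) - np.kron(num.T, I)))
    tr = np.zeros(nmax * nmax, dtype=complex)
    tr[::nmax + 1] = 1.0                       # Tr(rho) = 1 constraint
    M = np.vstack([L, tr[None, :]])
    rhs = np.zeros(nmax * nmax + 1, dtype=complex)
    rhs[-1] = 1.0
    rho = np.linalg.lstsq(M, rhs, rcond=None)[0].reshape((nmax, nmax), order='F')
    n_avg = np.trace(num @ rho).real
    g2 = np.trace(bd @ bd @ b @ b @ rho).real / n_avg ** 2
    return n_avg, g2

# low-driving check at resonance, U = gamma_b = 1: g2 = 1/(1+1) = 0.5
n0, g20 = ao_g2(1.0, 1.0, 0.0, 0.01)
assert abs(n0 - 4 * 0.01 ** 2) < 1e-6          # n_b ~ 4 Omega^2/(gamma^2+4 Delta^2)
assert abs(g20 - 0.5) < 0.01

# extrema of the analytic g2(Delta_b) reproduce Delta_+/- for U = gamma_b = 1
U = gb = 1.0
deltas = np.arange(-2.0, 1.0, 1e-3)
g2_formula = (gb ** 2 + 4 * deltas ** 2) / (gb ** 2 + (U + 2 * deltas) ** 2)
s = np.sqrt(U ** 2 + 4 * gb ** 2)
assert abs(deltas[np.argmin(g2_formula)] + (U - s) / 4) < 2e-3   # Delta_-
assert abs(deltas[np.argmax(g2_formula)] + (U + s) / 4) < 2e-3   # Delta_+
```

At $\Delta_b=\Delta_-$ and $U=\gamma_b=1$ this also reproduces the $0.38$ of Table~\ref{tab:AOtable}, since $1+U(U-\sqrt{U^2+4\gamma_b^2})/(2\gamma_b^2)\approx0.382$.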
The difference with resonance fluorescence is that here, CA is not a perfect interference at all orders ($g^{(N)}=0$ for $N>1$) but an approximate one. For instance, $g^{(2)}\approx(\gamma_b/U)^2$ (to leading order in $\gamma_b/U$), which is only zero in the limit $U\rightarrow \infty$, when the system converges to a 2LS. On the other hand, there was no CB in resonance fluorescence due to the lack of levels~$N>1$. Here, we see it appearing for the first time. \begin{figure} \caption{Second-order coherence function $\g{2}_b$ and the terms of its decomposition, as a function of~$U$, for $\Delta_b=\Delta_-(U)$.} \label{fig:ao2} \end{figure} The decomposition of~$\g{2}_b$ according to Eq.~(\ref{eq:g2mixdecomposition}) yields \begin{subequations} \begin{align} \mathcal{I}_0 = & \, \frac{U^2}{\gamma_b^2 + (U + 2 \Delta_b)^2} \,, \\ \mathcal{I}_1 = & \, 0 \,, \\ \mathcal{I}_2 = & \,-\frac{2 U \big(U + 2 \Delta_b \big)}{\gamma_b^2 + (U + 2 \Delta_b)^2} \,. \end{align} \end{subequations} $\mathcal{I}_0>0$ means that fluctuations are always super-Poissonian. $\mathcal{I}_1$ vanishes in the limit of low driving (as in the case of the Heitler regime), which means that there are no anomalous correlations to leading order in~$\Omega_b$. The remaining term $\mathcal{I}_2$ can take positive (for $\Delta_b > - U/2$) and negative (for $\Delta_b < - U/2$) values, resulting in super-Poissonian statistics or, on the contrary, favouring antibunching. The various terms and the total $\g{2}_b$ are shown in Fig.~\ref{fig:ao2} as a function of $U$, for the case $\Delta_b=\Delta_{-}(U)$ that maximizes antibunching, showing the evolution from Poissonian fluctuations in the linear regime of a driven harmonic mode to antibunching as the two-level limit is recovered with $\mathcal{I}_0 \rightarrow 1$ and $\mathcal{I}_2 \rightarrow -2$ (cf.~Fig.~\ref{fig:02}(a)).
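As a sanity check (ours), with $\mathcal{I}_1=0$ the terms above recombine exactly into $\g{2}_b$, since $\gamma_b^2+(U+2\Delta_b)^2+U^2-2U(U+2\Delta_b)=\gamma_b^2+4\Delta_b^2$:

```python
# Verify that 1 + I0 + I1 + I2 reproduces g2_b, and the two-level limit.
def g2_b(U, gb, D):
    return (gb ** 2 + 4 * D ** 2) / (gb ** 2 + (U + 2 * D) ** 2)

def I0(U, gb, D):
    return U ** 2 / (gb ** 2 + (U + 2 * D) ** 2)

def I2(U, gb, D):
    return -2 * U * (U + 2 * D) / (gb ** 2 + (U + 2 * D) ** 2)

for (U, gb, D) in [(1.0, 1.0, 0.3), (5.0, 0.7, -2.1), (0.2, 2.0, 1.0)]:
    assert abs(1 + I0(U, gb, D) + I2(U, gb, D) - g2_b(U, gb, D)) < 1e-12

# two-level limit U -> infinity at Delta_b ~ 0: I0 -> 1 and I2 -> -2
U = 1e6
assert abs(I0(U, 1.0, 0.0) - 1) < 1e-6 and abs(I2(U, 1.0, 0.0) + 2) < 1e-6
```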
\begin{figure*} \caption{(Color online) Interference between the output of a driven anharmonic system and an external laser of intensity proportional to~$\mathcal{F}$.} \label{fig:ao3} \end{figure*} As previously, the statistics can be modified by adjusting the coherent component of the original signal $b$ with an external laser $\beta = |\beta|e^{i \phi}$. The resulting signal is then described by the operator $s = \mathrm{T} b + i \, \mathrm{R} \beta$, with the coherent contribution now given by $\av{s} = \mathrm{T} \av{b} + i \, \mathrm{R} \beta $. To simplify the calculations further, we choose $\beta = \frac{\mathrm{T}}{\mathrm{R}} \beta'$, where $\beta'$ is also written in terms of the driving amplitude and a dimensionless amplitude $\mathcal{F}$: \begin{equation} \beta' = \frac{\Omega_b}{\gamma_b} \mathcal{F}\,. \end{equation} Additionally, we shift the phase $\phi \rightarrow \phi + \pi$ so that, in the limit of high $U$, all the results are consistent with the previous case. Then, the total population becomes: \begin{widetext} \begin{equation} \label{eq:jueene17201652GMT2019} \av{n_s} = \frac{\mathrm{T}^2 \Omega_b^2} {\gamma_b^2 \big(\gamma_b^2 + 4 \Delta_b^2\big)} \big[\mathcal{F}^2\big(\gamma_b^2 + 4 \Delta_b^2\big) + 4 \gamma_b \mathcal{F} \big( \gamma_b \cos \phi - 2 \Delta_b \sin \phi \big) + 4 \gamma_b^2 \big]\, , \end{equation} and the 2-photon correlations become \begin{equation} \label{eq:jueene17201723GMT2019} \begin{split} \g{2}_s = & \tilde{\Gamma}_b^2 \Big\lbrace \tilde{\Gamma}_b^2 \big[\gamma_b^2 + (U + \Delta_b)^2\big] \mathcal{F}^4 + 8 \gamma_b \big[\gamma_b^2 + (U + \Delta_b)^2\big] ( \gamma_b \cos \phi - 2 \Delta_b \sin \phi) \mathcal{F}^3 + \big[ 16 \gamma_b^2 \big(\gamma_b^2 + (U + 2 \Delta_b)^2\big) + \\ & 8 \gamma_b^2 \cos 2 \phi \, \big( \gamma_b^2 - 2 \Delta_b (U + 2 \Delta_b) \big) - 8 \gamma_b^3 \sin 2 \phi (U + 4 \Delta_b)\big] \mathcal{F}^2 + 32 \gamma_b^3 \big[\gamma_b \cos \phi - (U + 2 \Delta_b) \sin \phi \big] \mathcal{F} + 16 \gamma_b^2
\Big\rbrace \biggm/ \\ & \Big\lbrace \big[ \gamma_b^2 + (U + 2 \Delta_b)^2 \big] \big[ \mathcal{F}^2\big(\gamma_b^2 + 4 \Delta_b^2\big) + 4 \gamma_b \mathcal{F} \big( \gamma_b \cos \phi - 2 \Delta_b \sin \phi \big) + 4 \gamma_b^2\big]^2 \Big\rbrace, \end{split} \end{equation} \end{widetext} where we have used $\tilde{\Gamma}_b^2 = \gamma_b^2 + 4 \Delta_b^2$. Here as well, one can appreciate the enrichment brought by the interfering laser by comparing Eqs.~(\ref{eq:jueene17201516GMT2019}) and~(\ref{eq:jueene17201652GMT2019}) for the populations and Eqs.~(\ref{eq:jueene17201437GMT2019}) and (\ref{eq:jueene17201723GMT2019}) for the second-order correlations, without and with the interference, respectively. In this case, higher-order correlators could also be given in closed form but are too unwieldy to be written here. The cases $g_s^{(k)}$ for $2\le k\le 4$ are shown in Fig.~\ref{fig:ao3} as a function of the parameters of the interfering laser. By comparing this to Fig.~\ref{fig:WedNov29150123CET2017} for the 2LS, one can see that the anharmonic system is significantly more complex, with resonances for the correlations that occur for specific conditions of the phase for each~$\mathcal{F}$, leading to unconventional forms of antibunching or superbunching, rather than simply being out of phase as previously. This makes salient the punctual character of the unconventional mechanism: each strong correlation at any given order must be realized in a very particular way, namely, the one that matches the corresponding interference. The maximum bunching (UB) accessible with the interfering laser is reached when the coherent-fraction population goes to zero (1-photon suppression), for which the conditions on the phase and amplitude read \begin{equation} \tan \phi_1 = - \frac{2\Delta_b}{\gamma_b} \quad \quad \mathrm{and} \quad\quad \mathcal{F}_1 = -2 \cos \phi_1\,. \end{equation} Those conditions are exactly the same as Eq.~\eqref{eq:2LSconds} for $N = 1$.
Analogous conditions for the multi-photon cases can be found solving $\g{N}_s=0$. For the case $N=2$, we find four different roots: \begin{subequations} \begin{align} \mathcal{F}_{2,1/2} = & \frac{2i e^{i \phi} \gamma_b}{(U + 2 \Delta_b) + i \gamma_b} \bigg\lbrace 1 \pm \sqrt{\frac{U}{(U + 2 \Delta_b) + i \gamma_b}} \bigg\rbrace \,, \\ \mathcal{F}_{2,3/4} = & \frac{2i e^{-i \phi} \gamma_b}{(U + 2 \Delta_b) - i \gamma_b} \bigg\lbrace 1 \pm \sqrt{\frac{U}{(U + 2 \Delta_b) - i \gamma_b}} \bigg\rbrace^{-1} \,. \end{align} \end{subequations} Since these should be, by definition, real, this imposes another constraint on~$\phi$. Although the values of~$\phi$ that make~$\mathcal{F}$ real cannot be given in closed form, they are readily found numerically. It is possible to get four real solutions, which are, however, degenerate. There are only two different conditions for $\phi$ since the real part is the same for each pair of roots, i.e., $\Re(\mathcal{F}_{2,1}) = \Re(\mathcal{F}_{2,4})$ and $\Re(\mathcal{F}_{2,2}) = \Re(\mathcal{F}_{2,3})$. This yields two physical solutions. For instance, for $U = \gamma_b$ and $\Delta_b = \Delta_{-}$ (the case shown in Fig.~\ref{fig:ao3}), $\g{2}_s$ vanishes at $\mathcal{F}_{2,1} \approx 0.615$ and $\phi_{2,1} \approx 0.659 \, \pi$ for one solution and at $\mathcal{F}_{2,2} \approx 2.907 $ and $\phi_{2,2} \approx 0.860 \, \pi$ for the other one. Similar resonances in higher-order correlations could be found following the same procedure. Regarding the quantum state realized in the system, similar conclusions can be drawn for the anharmonic oscillator as for the two-level system (previous Section, cf.~Tables~\ref{tab:fluctuations} and~\ref{tab:AOtable}). Specifically, in this case, the system can be described by a displaced squeezed thermal state, properly parameterized, but to lowest order in the driving and for the population and the two-photon correlation only.
Departures arise at the next order in the pumping, or at any order for three-photon correlations and higher. The main difference is that the anharmonic-oscillator case has to be worked out numerically, so the prefactors are given by the solutions that optimize the antibunching, for the system parameters indicated in the caption. The same result otherwise holds: the Gaussian-state description is a low-driving approximation valid for the population and two-photon statistics. The same holds for the systems studied in the following Sections, although this point will not be stressed again. \section{Jaynes--Cummings blockade} \label{sec:FriFeb23100104CET2018} Now that we have considered the two-level system on the one hand (Section~\ref{sec:FriFeb23095933CET2018}) and the bosonic mode on the other hand (Section~\ref{sec:jueene10122434CET2019}), we turn to the richer and more intricate physics that arises from their coupling. We will show how the themes of the previous Sections allow us to unify in a fairly concise picture the great variety of phenomena observed and/or reported in isolation. We thus consider the case where a cavity mode, with bosonic annihilation operator~$a$ and frequency~$\omega_a$, is coupled to a 2LS, with operator~$\sigma$ and frequency~$\omega_\sigma$, with a strength given by~$g$. Such a system is described by the Jaynes--Cummings Hamiltonian~\cite{jaynes63a,shore93a}, \begin{multline} \label{eq:Thu31May103357CEST2018} H_\mathrm{jc} = \Delta_\sigma \ud{\sigma} \sigma + \Delta_a \ud{a} a + g \left(\ud{a} \sigma + \ud{\sigma} a \right) +{}\\ {}+\Omega_a \left(e^{i \phi} \ud{a} + e^{- i \phi} a\right) + \Omega_\sigma \left(\ud{\sigma} + \sigma\right), \end{multline} where we also include both a cavity and a 2LS driving term by a laser of frequency~$\omega_\mathrm{L}$, with respective intensities~$\Omega_a$ and~$\Omega_\sigma$ and relative phase~$\phi$.
We assume $g$ and $\Omega_\sigma$ to be real numbers, without loss of generality since the magnitudes of interest ($G_a^{(N)}$) are independent of their phases. The relative phase~$\phi$ between dot and cavity drivings is, on the other hand, important. We also limit ourselves in this text to the case where the frequencies of the dot and cavity drivings~$\omega_\mathrm{L}$ are identical; the analysis could be pushed further to the case where this limitation is lifted. The dissipation is taken into account through the master equation~$\partial_t \rho = i \left[ \rho ,H_\mathrm{jc} \right] + (\gamma_\sigma/2) \mathcal{L}_{\sigma} \rho+(\gamma_a/2)\mathcal{L}_a\rho$ where~$\gamma_a$ is the decay rate of the cavity. We solve for the steady state in the low-driving regime,~i.e., when $\Omega_a\ll \gamma_a\,,\gamma_\sigma$, as indicated in Appendix~\ref{app:2} and find the populations: \begin{widetext} \begin{equation} \label{eq:Tue29May182623CEST2018} \av{n_{\substack{a\\\sigma}}} = 4 \, \frac{4 g^2 \Omega_{\substack{\sigma\\a}}^2 + \tilde{\Gamma}_{\substack{\sigma\\a}}^2 \Omega_{\substack{a\\\sigma}}^2 - 4 g \Omega_a \Omega_\sigma \left(2 \Delta_{\substack{\sigma\\a}}\cos \phi \pm \gamma_{\substack{\sigma\\a}} \sin \phi \right)} {16 g^4 + 8 g^2 \left(\gamma_a \gamma_\sigma - 4 \Delta_a \Delta_\sigma\right) + \tilde{\Gamma}_a^2 \tilde{\Gamma}_\sigma^2 } \, , \end{equation} with matching upper/lower indices (including~$\pm$, with the upper sign for~$n_a$) and with $\tilde{\Gamma}^2_i = \gamma_i^2 + 4 \Delta_i^2$ (for $i = a, \sigma$).
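These populations can be cross-checked against a brute-force steady-state solve of the master equation (our sketch, not the paper's code; the Fock truncation and solver are our choices, and the formula is written out with $+$ for $n_a$ and $-$ for $n_\sigma$ as we rederived it):

```python
import numpy as np

def jc_populations(g, ga, gs, Da, Ds, Oa, Os, phi, nmax=4):
    """Numerical steady-state populations of the driven Jaynes-Cummings model."""
    af = np.diag(np.sqrt(np.arange(1.0, nmax)), k=1).astype(complex)
    I2, If = np.eye(2, dtype=complex), np.eye(nmax, dtype=complex)
    sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma = |g><e|
    a, s = np.kron(I2, af), np.kron(sm, If)
    ad, sd = a.conj().T, s.conj().T
    H = (Ds * sd @ s + Da * ad @ a + g * (ad @ s + sd @ a)
         + Oa * (np.exp(1j * phi) * ad + np.exp(-1j * phi) * a)
         + Os * (sd + s))
    d = 2 * nmax
    I = np.eye(d, dtype=complex)
    def diss(c, rate):   # (rate/2)(2 c rho c† - {c†c, rho}) as a superoperator
        cd = c.conj().T
        return 0.5 * rate * (2 * np.kron(cd.T, c)
                             - np.kron(I, cd @ c) - np.kron((cd @ c).T, I))
    L = -1j * (np.kron(I, H) - np.kron(H.T, I)) + diss(s, gs) + diss(a, ga)
    tr = np.zeros(d * d, dtype=complex)
    tr[::d + 1] = 1.0
    M = np.vstack([L, tr[None, :]])
    rhs = np.zeros(d * d + 1, dtype=complex)
    rhs[-1] = 1.0
    rho = np.linalg.lstsq(M, rhs, rcond=None)[0].reshape((d, d), order='F')
    return np.trace(ad @ a @ rho).real, np.trace(sd @ s @ rho).real

def jc_formula(g, ga, gs, Da, Ds, Oa, Os, phi):
    """Low-driving populations; upper signs/indices for n_a, lower for n_sigma."""
    Ga2, Gs2 = ga ** 2 + 4 * Da ** 2, gs ** 2 + 4 * Ds ** 2
    D = 16 * g ** 4 + 8 * g ** 2 * (ga * gs - 4 * Da * Ds) + Ga2 * Gs2
    na = 4 * (4 * g ** 2 * Os ** 2 + Gs2 * Oa ** 2
              - 4 * g * Oa * Os * (2 * Ds * np.cos(phi) + gs * np.sin(phi))) / D
    ns = 4 * (4 * g ** 2 * Oa ** 2 + Ga2 * Os ** 2
              - 4 * g * Oa * Os * (2 * Da * np.cos(phi) - ga * np.sin(phi))) / D
    return na, ns

args = (1.0, 0.8, 1.2, 0.4, -0.3, 0.005, 0.0035, 0.9)
na_num, ns_num = jc_populations(*args)
na_th, ns_th = jc_formula(*args)
assert abs(na_num / na_th - 1) < 1e-3
assert abs(ns_num / ns_th - 1) < 1e-3
```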
Similarly, we find the two-photon coherence function from the cavity: \begin{equation} \label{eq:Tue29May184632CEST2018} \begin{split} \g{2}_a = & \Big\lbrace \Big[ 16 g^4 + 8 g^2 \left(\gamma_a \gamma_\sigma -4 \Delta_a \Delta_\sigma \right) + \tilde{\Gamma}_a^2 \tilde{\Gamma}^2_\sigma \Big] \Big[16 g^4 \left(1 + \chi^2 \right) + 8 g^2 \big(2 \chi^2 \tilde{\Gamma}_{11}^2 + 4 \Delta_\sigma \tilde{\Delta}_{11}- \gamma_\sigma \tilde{\gamma}_{11}\big) + \tilde{\Gamma}^2_{\sigma} \tilde{\Gamma}_{11}^2 -\\ & 16 g \chi \big( \Delta_\sigma \tilde{\Gamma}_{11}^2 + 4 g^2 \tilde{\Delta}_{11} [1+\chi^2] \big) \cos \phi + 8 g^2 \chi^2 \big(4 g^2 - \gamma_\sigma \tilde{\gamma}_{11} + 4 \Delta_\sigma \tilde{\Delta}_{11}\big) \cos 2 \phi \, - \\ & 8 g \chi \big(\gamma_\sigma \tilde{\Gamma}_{11}^2 + 4 g^2 \tilde{\gamma}_{11} [\chi^2-1] \big) \sin \phi + 16 g^2 \chi^2 \big(\gamma_a \Delta_\sigma + \gamma_\sigma \tilde{\Delta}_{12}\big) \sin 2 \phi \Big] \Big\rbrace \Big/ \\ & \Big\lbrace \Big[16 g^4 + 8 g^2 \Big(\gamma_a \tilde{\gamma}_{11}- 4 \Delta_a \tilde{\Delta}_{11}\Big) + \tilde{\Gamma}_a^2 \tilde{\Gamma}_{11}^2\Big] \Big[4 g^2 \chi^2 + \tilde{\Gamma}_{\sigma}^2 - 4 g \chi \big(2 \Delta_\sigma \cos \phi + \gamma_\sigma \sin \phi\big)\Big]^2 \Big\rbrace , \end{split} \end{equation} \end{widetext} where $\tilde{\Delta}_{ij} \equiv i \Delta_a + j \Delta_\sigma$, $\tilde{\gamma}_{ij} = i \gamma_a + j \gamma_\sigma$, $\tilde{\Gamma}_{ij}^2 \equiv \tilde{\gamma}_{ij}^2 + 4 \tilde{\Delta}^2_{ij}$ and $\chi=\Omega_\sigma/\Omega_a$ is the ratio of the two driving amplitudes. The range of $\chi$ extends from 0 to $\infty$ so that it is convenient to use the derived quantity~$\tilde{\chi}=\frac{2}{\pi} \arctan(\chi)$, which varies between 0 and 1.
Equation~(\ref{eq:Tue29May184632CEST2018}) is admittedly not enlightening per se but it contains all the physics of conventional and unconventional photon statistics that arises from self-homodyning, including bunching and antibunching, for all the regimes of operation. It is quite remarkable that so much physics of dressed-state blockades and interferences can be packed up so concisely. We plot a particular case of this formula as a function of~$\omega_a$ and~$\omega_\mathrm{L}$ in Fig.~\ref{fig:JCantibunchingplat}(a), namely, only driving the cavity ($\Omega_\sigma=\chi=0$). Its reduced expression, Eq.~(\ref{eq:JCg2}), is given in Appendix~\ref{sec:Wed30May110352CEST2018}. The general case is available through an applet~\cite{wolfram_casalengua18a} and we will shortly discuss other cases as well. The structure that is thus revealed can be decomposed in two classes, as shown in panel~(b): the conventional statistics that originates from the nonlinear properties of the quantum levels, in solid lines, and the unconventional statistics that originates from interferences, in dashed lines. Both can give rise to bunching (in red) and antibunching (in blue). We now discuss them in detail. \begin{figure} \caption{(Color online) Jaynes--Cummings model. (a)~Photon statistics $g^{(2)}_a$ as a function of~$\omega_a$ and~$\omega_\mathrm{L}$.} \label{fig:JCantibunchingplat} \end{figure} \subsection{Conventional statistics} Conventional features arise from the laser entering in resonance with a dressed state of the dissipative JC ladder~\cite{delvalle09a,laussy12e}, whose energy is the real part of \begin{multline} \label{eq:eigenstates} E^{(N)}_{\pm}=N\omega_{a}+\frac{\omega_\sigma-\omega_a}{2}-i \frac{(2N-1)\gamma_a+\gamma_\sigma}{4}\\\pm\sqrt{(\sqrt{N}g)^2+\left( \frac{\omega_a-\omega_\sigma}{2}-i\frac{\gamma_a-\gamma_\sigma}{4}\right)^2 }\,. \end{multline} The first rung $E^{(1)}_{\pm}$ yields the CA lines in Fig.~\ref{fig:JCantibunchingplat}(b).
This corresponds to an increase in the cavity population, as shown in Fig.~\ref{fig:JCantibunchingplat}(c) as two white lines, corresponding to the familiar lower and upper branches of strong coupling. The system effectively gets excited, but through its first rung only. The second rung blocks further excitation according to the conventional antibunching (CA), or photon-blockade, scenario, so that with the increase of population goes a decrease of two-photon excitation, leading to antibunching. This is in complete analogy with the CA that appears in the Heitler regime of resonance fluorescence. This is not an exact zero in $g^{(2)}_a$ in the low driving regime (the imaginary part of the root does not vanish) because the conditions for perfect interference are no longer met when the strongly coupled cavity has a finite decay rate. We recently showed in Ref.~\cite{lopezcarreno18b} that even in the vanishing coupling regime, $g\rightarrow 0$, when the cavity acts as a mere detector of the 2LS emission, perfect antibunching is spoiled due to the finite decay rate ($\gamma_a$ representing the precision in frequency detection). This is due to the fact that the cavity is effectively filtering out some of the incoherent fraction of the emission while the coherent fraction is still fully collected. The interference condition in the $g^{(2)}_a$ decomposition, $1+\mathcal{I}_1=-\mathcal{I}_2=2$, is no longer satisfied (see Fig.~2 of Ref.~\cite{lopezcarreno18b}). On the other hand, driving resonantly the second rung, $E^{(2)}_{\pm}$, leads to conventional bunching (CB), shown as red solid lines in Fig.~\ref{fig:JCantibunchingplat}(b). These quantum features are well known and are also found with incoherent driving of the system in the spectrum of emission~\cite{laussy12e}; they are not conditional on the coherence of the driving.
This also corresponds to an increase in the cavity population although this is not visible in Fig.~\ref{fig:JCantibunchingplat}(c), where only first order effects appear. \subsection{Unconventional statistics} We now turn to the other features in $g^{(2)}_a$ that do not correspond to a resonant condition with a dressed state: these are, first, a superbunching line at $\omega_\mathrm{L}=0$ (dashed red in Fig.~\ref{fig:JCantibunchingplat}(b)) and second, two symmetric antibunched lines (dashed blue). All correspond to a self-homodyne interference that the coherent field driving the cavity can produce on its own, without the need of a second external laser. In this case, the interference also involves several modes (degrees of freedom) and more parameters than in resonance fluorescence, so the phenomenology is richer, but it can be traced back to the same physics. We call them again unconventional antibunching (UA) and unconventional bunching (UB) in full analogy with the Heitler regime of resonance fluorescence and in agreement with the literature that refers to particular cases of this phenomenology as ``unconventional blockade''~\cite{bamba11a} (the term ``tunnelling'' has also been employed but the underlying physical picture might be misleading~\cite{faraon08a}). We first address antibunching (UA). This is found by minimizing $\g{2}_a$ in regions where there is no CA, which yields (for the particular case $\chi = 0$): \begin{equation} \label{eq:UAcondition} \Delta_a = -\Delta_\sigma \left(1 + \frac{4 g^2}{\gamma_\sigma^2 + 4 \Delta_\sigma^2}\right) \,, \end{equation} which is the analytical expression for the dashed blue lines in Fig.~\ref{fig:JCantibunchingplat}(b) (we remind that $\Delta_i\equiv \omega_i-\omega_\mathrm{L}$ for~$i=a$, $\sigma$). The most general case when both the emitter and cavity are excited is given in Appendix \ref{app:perfectantibunching}.
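The $\chi=0$ reduction of Eq.~(\ref{eq:Tue29May184632CEST2018}) is compact enough to be checked numerically. The sketch below (plain Python, parameters purely illustrative) verifies that the exact-zero conditions derived next, Eqs.~(\ref{eq:FriMar2105208CET2018}), lie on the UA curve of Eq.~(\ref{eq:UAcondition}) and null $\g{2}_a$:

```python
import math

def g2_cavity_chi0(g, ga, gs, Da, Ds):
    # chi = 0 reduction of the two-photon correlation: all chi-dependent terms drop
    G2 = lambda gam, Det: gam**2 + 4 * Det**2      # the Gamma-tilde^2 shorthand
    D11, g11 = Da + Ds, ga + gs                    # Delta-tilde_11, gamma-tilde_11
    num = ((16*g**4 + 8*g**2*(ga*gs - 4*Da*Ds) + G2(ga, Da)*G2(gs, Ds))
           * (16*g**4 + 8*g**2*(4*Ds*D11 - gs*g11) + G2(gs, Ds)*G2(g11, D11)))
    den = ((16*g**4 + 8*g**2*(ga*g11 - 4*Da*D11) + G2(ga, Da)*G2(g11, D11))
           * G2(gs, Ds)**2)
    return num / den

# Exact-zero detunings; they require 4 g^2 > gamma_sigma (gamma_a + gamma_sigma)
g, ga, gs = 1.0, 0.5, 0.1
Ds = (gs / 2) * math.sqrt(4 * g**2 / (gs * (ga + gs)) - 1)
Da = -(2 + ga / gs) * Ds

# The zero sits on the UA curve ...
assert abs(Da + Ds * (1 + 4 * g**2 / (gs**2 + 4 * Ds**2))) < 1e-9
# ... and exactly cancels the two-photon correlation at vanishing driving
assert g2_cavity_chi0(g, ga, gs, Da, Ds) < 1e-9
```

Note that the exact zero only exists for sufficient cooperativity, $4g^2>\gamma_\sigma(\gamma_a+\gamma_\sigma)$, otherwise the square root above is imaginary.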
In the minimization process, we also find the condition for CA, due to the first-rung resonance, but this can be distinguished from UA, beyond the fact that CA is already identified, because UA also admits an exact zero, which is found by either solving $\g{2}_a=0$ or setting to zero the two-photon probability in the wavefunction approximation~\cite{visser95a} (see Appendix~\ref{sec:WedFeb28173330CET2018}). This gives the conditions on the detunings as a function of the system parameters~\footnote{Eq.~(\ref{eq:FriMar2105208CET2018}) generalizes the condition given in Ref.~\cite{carmichael85a} for the resonant case and coincides with it under the notation~$\gamma_a /2 \rightarrow \kappa$ and~$\mathscr{C} \rightarrow \mathscr{C}/2$.}: \begin{subequations} \label{eq:FriMar2105208CET2018} \begin{align} \label{eq:FriMar2105208CET2018a} \Delta_\sigma^2&=\frac{\gamma_\sigma^2}{4}\left(\frac{4g^2}{\gamma_\sigma(\gamma_{\sigma }+\gamma_a)}-1\right)\,,\\ \label{eq:FriMar2105208CET2018b} \Delta_{a} &= - \bigg (2+ \frac{\gamma_a}{\gamma_\sigma} \bigg) \Delta_\sigma\,. \end{align} \end{subequations} These conditions are met in Fig.~\ref{fig:JCantibunchingplat}(a) at the lowest point where the blue UA line intersects the (e) cut (and on the symmetric point $\omega_a < 0$). When the laser is at resonance with the 2LS ($\Delta_\sigma=0$) and cavity losses are large ($\gamma_a \gg \gamma_\sigma$), this occurs when the cooperativity $\mathscr{C}\equiv \frac{4 g^2}{\gamma_a \gamma_\sigma}=1$. This type of UA interference is second-order, so it is not apparent in the cavity population at low driving, Fig.~\ref{fig:JCantibunchingplat}(c). One has to turn to two-photon correlations instead. Note also that UA requires a cavity-emitter detuning that is of the order of~$g$. \begin{figure*} \caption{(Color online)~Effect of the type of driving on the two-photon statistics, for the Jaynes--Cummings model as measured in its cavity emission.
The upper row shows Eq.~(\ref{eq:Tue29May184632CEST2018}).} \label{fig:Wed30May111214CEST2018} \end{figure*} Since this is an interference effect, we perform the same decomposition of $\g{2}_a$ in terms of coherent and incoherent fractions, as in previous Sections, given by Eq.~(\ref{eq:g2mixdecomposition}), and show the terms that are not zero in Fig.~\ref{fig:JCantibunchingplat}(d-e). The full expressions are in Appendix~\ref{app:4}. The term $\mathcal{I}_1$ is exactly zero to lowest order in the driving and only the fluctuation-statistics~$\mathcal{I}_0$ and the two-photon interference~$\mathcal{I}_2$ play a role, like in the Heitler regime of resonance fluorescence. We observe that, in this decomposition, there is no difference between the CA and UA, since both occur approximately when the statistics of the laser and fluctuations, $1+\mathcal{I}_0=2$, are compensated by their two-photon interference, $\mathcal{I}_2=-2$, again as in the Heitler regime. The fundamental differences between these two types of antibunching will be discussed later on. Before that, we discuss the last feature: the unconventional bunching at~$\omega_\mathrm{L}=0$. The reason for the super-bunching peak labelled as UB in Fig.~\ref{fig:JCantibunchingplat}(b) is also the same as in resonance fluorescence: the cancellation of the coherent part, in this case, of the cavity emission, and the consequent dominance of the fluctuations only, which are super-Poissonian in this region. Therefore, contrary to the CB, this superbunched statistics is not directly linked to an enhanced $N$-photon (for any~$N$) emission and it does not appear one could harvest or Purcell-enhance it, for instance, by coupling the system to an auxiliary resonant cavity. Since it is pretty much wildly fluctuating noise, the actual prospects of multi-photon physics in this context remain to be investigated.
In any case, the conditions that yield the super-Poissonian correlations can thus be obtained by minimising the cavity population $\mean{n_a}$ or, from the wavefunction approximation detailed in Appendix~\ref{sec:WedFeb28173330CET2018}, by minimising the probability to have one photon, given by Eq.~(\ref{eq:WedFeb28145309CET2018}), which coincides with the coherent fraction to lowest order in $\Omega_a$. One cannot achieve an exact zero in this case but the cavity population is clearly undergoing a destructive interference, as shown by the black horizontal line in Fig.~\ref{fig:JCantibunchingplat}(c). The resulting condition links the laser frequency with the 2LS one: \begin{equation} \label{eq:FriFeb23173509CET2018} \Delta_\sigma = \chi g \cos \phi \,, \end{equation} which reduces to simply $\Delta_\sigma=0$ (laser in resonance with the 2LS) if i) the dot and cavity are driven with a $\pi/2$-phase difference or ii) the laser drives the cavity only ($\chi = 0$). So far, we have focused on the particular case of Eq.~(\ref{eq:Tue29May184632CEST2018}) where~$\Omega_\sigma=0$ (i.e., Eq.~(\ref{eq:JCg2})). This is the case dominantly studied in the literature and the one assumed to best reflect the experimental situation. It is also for our purpose a good choice to clarify the phenomenology that is taking place and how various types of statistics cohabit. It must be emphasised, however, that while the physics is essentially the same in the more general configuration, the results are, even qualitatively, significantly different in configurations where the two types of pumping are present. This is shown in Fig.~\ref{fig:Wed30May111214CEST2018}. While conventional features are stable, being pinned to the level structure, the unconventional ones that are due to interferences are very sensitive to the excitation conditions and get displaced or, in the case of QD excitation only, even completely suppressed.
If one is to regard conventional features as more desirable for applications, this figure is therefore again an exhortation to focus on the QD excitation configuration. \begin{figure} \caption{(Color online). Higher-order photon statistics, at (left column) three- and (right) four-photon level. Top row is for cavity excitation and bottom row for 2LS excitation. In the top row, we have superimposed the right-half of the conventional (solid) and unconventional (dashed) features, putting them in grey when not present for a given order of the correlations. The conventional features grow in numbers and stay pinned at the same positions while the unconventional ones remain in the same number but at different positions. Parameters: $\gamma_a = 0.1 \, g$, $\gamma_\sigma = 0.001 \, g$.} \label{fig:Tue29May164357CEST2018} \end{figure} While we have focused on the two-photon statistics, both the conventional and unconventional effects occur at the~$N$-photon level, in which case they manifest through higher-order coherence functions~$g^{(N)}$, and their~$N$th-order behaviour is one of the key differences between conventional and unconventional statistics. Regarding conventional features, resonances happen at the $N$-photon level when $N$ photons of the laser have the same energy as one of the dressed states (and only one, thanks to the JC nonlinearities): $\omega_\mathrm{L}=\mathrm{Re}\{E^{(N)}_{\pm}\}/N$. The blockade that is realised is a real blockade in the sense that all the correlation functions are then depleted simultaneously. In Fig.~\ref{fig:Tue29May164357CEST2018}, the counterpart of Fig.~\ref{fig:JCantibunchingplat}(a) is shown for~$\g{3}_a$ and~$\g{4}_a$, showing how more conventional features appear with increasing~$N$ but otherwise stay pinned to the same conditions, while the number of unconventional features stays the same, but their positions drift with~$N$, so that one cannot simultaneously realise $g^{(N)}_a<1$ for all~$N$.
This is an important difference between a convex mixture of Gaussian states, which is a semi-classical state, and a state beyond this class, which is genuinely quantum, as previously mentioned. The latter requires the ability to imprint strong correlations at several and possibly all photon-orders. This suggests that CA could be more suited than UA for quantum applications. Note how with the 2LS direct excitation, shown in the second row of Fig.~\ref{fig:Tue29May164357CEST2018}, one only finds conventional statistics, with magnified features such as broader antibunching in the photon-like branch and narrower one in the exciton-like branch. The $N$-photon resonances are neatly separated for large-enough detuning, which is the underlying principle to harness rich $N$-photon resources~\cite{sanchezmunoz14a}. \begin{figure} \caption{(Color online) Transition to weak coupling and non-vanishing pumping. (a) Evolution of $g^{(2)}_a$.} \label{fig:JCsuper} \end{figure} We now turn to another noteworthy regime, out of the many configurations of interest that are covered by Eq.~(\ref{eq:Tue29May184632CEST2018}), namely, the transition from weak to strong coupling. So-called strong coupling, when $g>|\gamma_a-\gamma_\sigma|/4$, is one of the coveted attributes of light-matter interactions, leading to the emergence of dressed states and to a new realm of physics. It is also, however, an ill-defined concept in the presence of detuning~\cite{laussy12e} and one would still find the dressed-state structure of Fig.~\ref{fig:Tue29May164357CEST2018} in the largely detuned regime when driving the 2LS, even up to large photon-order~\cite{kavokin_book17a}. The restructuring of the statistics when crossing over to the weak-coupling regime is explored in Fig.~\ref{fig:JCsuper}(a), where we track the impact on $g^{(2)}_a$ of changing the coupling~$g$, on the cut in Fig.~\ref{fig:JCantibunchingplat}(e) that intersects from top to bottom CB, UA (twice), UB and CA.
One can see how the features converge as the coupling is reduced, with the conventional ones disappearing first, which is expected from the disappearance of the underlying dressed states, that are responsible for the conventional effects. The unconventional antibunching, on the other hand, is more robust and can be tracked well into weak coupling where all effects ultimately vanish at the same time as they merge. Unconventional bunching is also a robust feature, as can be seen by tracking the UB peak at the point where it is the most isolated from the other features, namely, at resonance where~$\omega_\mathrm{L}=\omega_a=\omega_\sigma=0$. Spanning over the two main parameters that control strong coupling, the coupling strength~$g$ (in units of~$\gamma_\sigma$) and the ratio of dissipation rates~$\gamma_a/\gamma_\sigma$, one sees that the strong bunching is not always sustained but can be instead overtaken by unconventional antibunching, which is the well-defined blue line in the figure (given by Eq.~(\ref{eq:UAcondition})). The region where the UB peak is well-defined can be identified by inspecting the second derivative of $\g{2}_a$ as a function of the laser frequency, $\partial^2_{\omega_\mathrm{L}}\g{2}_a$ at $\omega_{\mathrm{L}} = 0$, and is shown in Fig.~\ref{fig:JCsuper}(b) as a dashed black line. The white line that separates the antibunching region from the bunching one corresponds to the critical coupling strength~$g_P$ between the cavity and the 2LS that leads to $\g{2}_a=1$ (its expression is given in the Appendix, Eq.~(\ref{eq:FriMar2104433CET2018})). The strong-weak coupling frontier $g/\gamma_\sigma<|\gamma_a/\gamma_\sigma-1|/4$ is indicated with a dotted green line as a reference, illustrating again the lack of close connection between strong coupling and the photon-statistics features.
We conclude our discussion of the Jaynes--Cummings system with the second main difference between conventional and unconventional statistics, namely their resilience to higher driving. All our results are exact in the limit of vanishing driving, that is to say, in the approximation of neglecting $\Omega$ terms of higher orders than the smallest contributing one. For non-vanishing driving, numerically exact results can be obtained instead (and can be made to agree with arbitrary precision with the analytical expressions, as long as the driving is taken low enough, which we have consistently checked). A characteristic of the unconventional features is that, being due to an interference effect for a given photon number only, they are fragile to driving, unlike the conventional features which display more robustness. This is illustrated in Fig.~\ref{fig:JCsuper}(c) for the case of cavity driving~$\Omega_a$, where we compare the analytical result from Eq.~(\ref{eq:Tue29May184632CEST2018}) or, in this case, Eq.~(\ref{eq:JCg2}), in black, with the numerical solution for~$\Omega_a=0.25\gamma_a$, so still fairly small. One can see how the conventional features are qualitatively preserved and quantitatively similar to the analytical result, while the unconventional antibunching has been completely washed out. One could consider still other aspects of the physics embedded in Eq.~(\ref{eq:Tue29May184632CEST2018}). We invite the inquisitive and/or interested readers to explore them through the applet~\cite{wolfram_casalengua18a} which is helpful to get a sense of the complexity of the problem. Instead of discussing these further, we now turn to another platform of interest that bears many similarities with the Jaynes--Cummings results.
\section{Microcavity-polariton blockade} \label{sec:FriFeb23100219CET2018} Microcavity polaritons~\cite{kavokin_book17a} arise from the strong coupling between a planar cavity photon and a quantum well exciton, both of which are bosonic fields with annihilation operators~$a$ and~$b$, respectively. These fields are coupled with strength~$g$ and have frequencies~$\omega_a$ and~$\omega_b$. Moreover, the excitons, being electron-hole pairs, have Coulomb interactions that we parametrise as~$U/2$. Thus, the Hamiltonian describing the polariton system is given by \begin{multline} \label{eq:FriMar2182347CET2018} H_\mathrm{pol} = \Delta_a \ud{a} a + \Delta_{b} \ud{b} b + g \left(\ud{a} b + \ud{b} a \right) +{}\\ {} \Omega_a \left(e^{i \phi}\ud{a} + e^{-i \phi} a\right) + \Omega_b (\ud{b} + b) + \frac{U}{2} \ud{b} \ud{b} b b\,, \end{multline} where~$\Delta_{a,b} = \omega_{a,b} - \omega_{\mathrm{L}}$ are the frequencies of cavity/exciton referred to $\omega_\mathrm{L}$, which is the frequency of the laser that drives the photonic/excitonic field with amplitudes~$\Omega_{a,b}$. The phase difference between them is represented by $\phi = \phi_a - \phi_b$ and since the absolute phase can be chosen freely, $\phi_{a,b}$ are fixed to be $\phi$ and $0$. The dissipative dynamics of the polaritons is given by a master equation $\partial_t \rho = i \left[ \rho ,H_\mathrm{pol} \right] + (\gamma_b/2) \mathcal{L}_{b} \rho+ (\gamma_a/2) \mathcal{L}_{a} \rho$, where~$\gamma_a$ and~$\gamma_b$ are the decay rates of the photon and the exciton, respectively. As compared to the Jaynes--Cummings Hamiltonian~(\ref{eq:Thu31May103357CEST2018}), the polariton Hamiltonian substitutes the 2LS with a weakly interacting bosonic mode, $\sigma\rightarrow b$, whose nonlinearity~$\ud{b}\ud{b}bb$ only slightly displaces the state with two excitations while the 2LS forbids it entirely.
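For concreteness, the Hamiltonian~(\ref{eq:FriMar2182347CET2018}) is straightforward to build numerically in a truncated Fock basis. The sketch below (plain NumPy, all parameter values illustrative) constructs it and recovers, in the linear limit $U=\Omega_{a,b}=0$, the polariton normal modes $(\Delta_a+\Delta_b)/2\pm\sqrt{g^2+(\Delta_a-\Delta_b)^2/4}$ in the one-excitation subspace:

```python
import numpy as np

def polariton_hamiltonian(Da, Db, g, U, Om_a=0.0, Om_b=0.0, phi=0.0, n=4):
    # Matrix of H_pol in a Fock basis truncated at n states per mode
    lower = np.diag(np.sqrt(np.arange(1, n)), 1)  # single-mode annihilation operator
    I = np.eye(n)
    a = np.kron(lower, I)    # photon mode
    b = np.kron(I, lower)    # exciton mode
    ad, bd = a.conj().T, b.conj().T
    return (Da * ad @ a + Db * bd @ b + g * (ad @ b + bd @ a)
            + Om_a * (np.exp(1j * phi) * ad + np.exp(-1j * phi) * a)
            + Om_b * (bd + b)
            + 0.5 * U * bd @ bd @ b @ b)

H = polariton_hamiltonian(0.3, -0.2, 1.0, 0.5, Om_a=0.1, Om_b=0.2, phi=0.7)
assert np.allclose(H, H.conj().T)   # Hermiticity

# Linear limit: the one-excitation block |1,0>, |0,1> gives the polariton doublet
H0 = polariton_hamiltonian(0.3, -0.2, 1.0, 0.0)
block = H0[np.ix_([4, 1], [4, 1])].real   # indices of |1,0> and |0,1> for n = 4
modes = np.linalg.eigvalsh(block)
assert np.allclose(modes, 0.05 + np.array([-1, 1]) * np.sqrt(1.0 + 0.25**2))
```

The truncation $n$ must of course be taken large enough for the driving strength considered; in the low-driving limit used throughout, a few Fock states per mode suffice.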
In the case where~$U\rightarrow\infty$, the Jaynes--Cummings limit is recovered, but in most experimental cases, $U/\gamma_a$ is very small. In all cases, in the low driving regime ($\Omega_a \rightarrow 0$) the steady-state populations of the photon and the exciton are given by the same expressions as in the Jaynes--Cummings model, Eq.~(\ref{eq:Tue29May182623CEST2018}) with~$\sigma\rightarrow b$, since the 2LS converges to a bosonic field in the linear regime. The differences arise in the two-particle magnitudes (cf.~Eq.~(\ref{eq:Tue29May184632CEST2018})): \begin{widetext} \begin{subequations} \label{eq:Thu31May102314CEST2018} \begin{equation} \begin{split} \g{2}_a = & \Big\lbrace \Big[16 g^4 + 8 g^2 \big(\gamma_a \gamma_b - 4 \Delta_a \Delta_b \big) + \tilde{\Gamma}_a^2 \tilde{\Gamma}_b^2 \Big] \Big[\tilde{\Gamma}_b^2 \tilde{\Gamma}_{11}^2 \big(\gamma_b^2 + \tilde{U}_{12}^2\big) + 8 g^2 \Big(U^2 [4 \Delta_b \tilde{\Delta}_{11}- \gamma_b \tilde{\gamma}_{11}]+ 2 \tilde{\Gamma}_{11}^2 [\gamma_b^2 + \tilde{U}_{12}^2] \chi^2 +\\ & 8 U \Delta_b^2 \tilde{\Delta}_{11} -2 U \gamma_b^2 \tilde{\Delta}_{13} - 4 U \gamma_a \gamma_b \Delta_b \Big) + 16 g^4 \Big(U^2 + [\tilde{\gamma}_{11}^2 + (U + 2 \tilde{\Delta}_{11})^2] \chi^4\Big) - 16 g \chi \cos \phi \, \Big( \Delta_b \tilde{\Gamma}_{11}^2 [\gamma_b^2 +4 \tilde{U}_{12}^2] \\ & + 2 g^2 [U(2 \tilde{\Delta}_{11} \tilde{U}_{12}- \gamma_b \tilde{\gamma}_{11}) + (2 U^2 \tilde{\Delta}_{11}+ 2 \Delta_b \tilde{\Gamma}_{11}^2+ U \lbrace \gamma_a \tilde{\gamma}_{11} + 4 \tilde{\Delta}_{11} \tilde{\Delta}_{12} \rbrace ) \chi^2 ]\Big) + 8 g^2 \chi^2 \cos 2 \phi \, \Big(4 g^2 U [U + 2 \tilde{\Delta}_{11}] - \\ & U^2 [\gamma_b \tilde{\gamma}_{11}- 4 \Delta_b \tilde{\Delta}_{11}] - [\gamma_b^2 - 4 \Delta_b^2] \tilde{\Gamma}_{11}^2 + 2 U [\gamma_a^2 \Delta_b + \tilde{\Delta}_{12} (4 \Delta_b \tilde{\Delta}_{11} - \gamma_b^2)] \Big) -8 g \chi \sin \phi \, \Big( \gamma_b \tilde{\Gamma}_{11}^2 [\gamma_b^2 + \tilde{U}_{12}^2] + \\ & 4 g^2 
[\gamma_b \tilde{\Gamma}_{11}^2 \chi^2 + U (\chi^2-1) (U \tilde{\gamma}_{11} + 2 \gamma_b \Delta_a +2 \tilde{\gamma}_{12} \Delta_b)] \Big) + 8 g^2 \chi^2 \sin 2 \phi \, \Big( -4 g^2 U \tilde{\gamma}_{11} + 4 \gamma_b \Delta_b \tilde{\Gamma}_{11}^2 + 2 U^2 [\gamma_a \Delta_b + \gamma_b \tilde{\Delta}_{12}] + \\ & U [\gamma_a^2 \gamma_b + 4 \gamma_b \tilde{\Delta}_{12}^2 + \gamma_a \tilde{\Gamma}_b^2] \Big) \Big] \Big\rbrace \Big/ \\ & \Big\lbrace \Big( \tilde{\Gamma}_a^2 \tilde{\Gamma}_{11}^2 \big[\gamma_b^2 + \tilde{U}_{12}^2 \big] + 16 g^4 \big[\tilde{\gamma}_{11}^2 + \big(U + 2 \tilde{\Delta}_{11}\big)^2\big] + 8 g^2 \big[U^2 \big(\gamma_a \tilde{\gamma}_{11} - 4 \Delta_a \tilde{\Delta}_{11}\big) + \tilde{\Gamma}_{11}^2 \big(\gamma_a \gamma_b - 4 \Delta_a \Delta_b\big) - \\ & 2U \big(\gamma_a^2 \tilde{\Delta}_{1 \bar{1}} - 2 \gamma_a \gamma_b \Delta_b + 4 \Delta_a \tilde{\Delta}_{11} \tilde{\Delta}_{12}\big) \big] \Big) \Big(4 g^2 \chi^2 + \tilde{\Gamma}_b^2 - 4 g \chi \big[2 \Delta_b \cos \phi + \gamma_b \sin \phi \big]\Big)^2 \Big\rbrace \, , \end{split} \end{equation} \begin{equation} \begin{split} \g{2}_b = & \big\lbrace \tilde{\Gamma}_{11}^2 \big[16 g^4 + 8 g^2 \big( \gamma_a \gamma_b - 4 \Delta_a \Delta_b \big) + \tilde{\Gamma}_a^2 \tilde{\Gamma}_b^2\big] \big\rbrace \Big/ \\ & \big\lbrace \tilde{\Gamma}_a^2 \tilde{\Gamma}_{11}^2 \big[\gamma_b^2 +\tilde{U}_{12}^2 \big] + 16 g^4 \big[\tilde{\gamma}_{11}^2 + \big(U + 2 \tilde{\Delta}_{11}\big)^2\big] + 8 g^2 \big[U^2 \big(\gamma_a \tilde{\gamma}_{11} - 4 \Delta_a \tilde{\Delta}_{11}\big) + \tilde{\Gamma}_{11}^2 \big(\gamma_a \gamma_b - 4 \Delta_a \Delta_b\big) - \\ & 2U \big(\gamma_a^2 \tilde{\Delta}_{1 \bar{1}} - 2 \gamma_a \gamma_b \Delta_b + 4 \Delta_a \tilde{\Delta}_{11} \tilde{\Delta}_{12}\big) \big] \big\rbrace \, , \end{split} \end{equation} \end{subequations} where we have used the short-hand notation~$\gamma_{+}=\gamma_a+\gamma_b$, $\Delta_{\pm}=\Delta_a \pm \Delta_b$ and~$\Gamma_{c}^2 = \gamma_c^2 + 
4\Delta_c^2$ for $c=a,b,+$ as well as $\tilde{\Delta}_{ij} \equiv i \Delta_a + j \Delta_b$, $\tilde{\gamma}_{ij} = i \gamma_a + j \gamma_b$, $\tilde{\Gamma}_{ij}^2 \equiv \tilde{\gamma}_{ij}^2 + 4 \tilde{\Delta}^2_{ij}$, $\tilde{U}_{ij} = i U + j \Delta_b$ and $\bar{j}$ denotes negative integer values~($\bar{j} = -j$). \end{widetext} Note that a major conceptual difference with the Jaynes--Cummings model is that it now becomes relevant to consider the emitter (in this case, excitonic) two-photon coherence function, $g^{(2)}_b$, while in the Jaynes--Cummings case one has the trivial result~$g^{(2)}_\sigma\equiv0$. The exciton statistics enjoys noteworthy characteristics, as we shall shortly see. We repeat in Fig.~\ref{fig:08} the same plots for the polariton system as for the Jaynes--Cummings case (Fig.~\ref{fig:JCantibunchingplat}). The applet~\cite{wolfram_casalengua18a} also covers this more general case. The cavity population is exactly the same, as already mentioned, and all other panels bear clear analogies. The two-photon coherence function converges to the Jaynes--Cummings one in the infinite interaction limit ($\lim_{U\rightarrow\infty}g^{(2)}_a$) but is distinctly distorted for high-energy laser driving in the positive photon-exciton detuning region, and features an additional pair of UA and CB lines in the negative detuning region. The decomposition of $\g{2}_a$ as in Eq.~(\ref{eq:g2mixdecomposition}) can be made (the expressions are however bulky and relegated to Appendix~\ref{app:4}) and is shown in Fig.~\ref{fig:08}(d) and~(e).
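The excitonic correlation function lends itself to a compact numerical check. The sketch below (plain Python, illustrative parameters) implements $g^{(2)}_b$ from Eqs.~(\ref{eq:Thu31May102314CEST2018}) and verifies its two limits: $g^{(2)}_b\rightarrow 1$ for $U\rightarrow 0$ (linear, hence coherent, regime) and $g^{(2)}_b\rightarrow 0$ for $U\rightarrow\infty$ (2LS limit, where $g^{(2)}_\sigma\equiv 0$):

```python
def g2_exciton(g, ga, gb, Da, Db, U):
    # Exciton two-photon correlation g2_b; note it is independent of the
    # driving ratio chi and of the phase phi, which do not appear at all.
    G2 = lambda gam, Det: gam**2 + 4 * Det**2      # the Gamma-tilde^2 shorthand
    D11, D12, g11 = Da + Db, Da + 2 * Db, ga + gb
    num = G2(g11, D11) * (16*g**4 + 8*g**2*(ga*gb - 4*Da*Db) + G2(ga, Da)*G2(gb, Db))
    den = (G2(ga, Da) * G2(g11, D11) * (gb**2 + (U + 2*Db)**2)
           + 16*g**4 * (g11**2 + (U + 2*D11)**2)
           + 8*g**2 * (U**2 * (ga*g11 - 4*Da*D11)
                       + G2(g11, D11) * (ga*gb - 4*Da*Db)
                       - 2*U * (ga**2*(Da - Db) - 2*ga*gb*Db + 4*Da*D11*D12)))
    return num / den

# Linear (coherent) limit: Poissonian statistics
assert abs(g2_exciton(1.0, 0.5, 0.1, 0.2, -0.3, 0.0) - 1.0) < 1e-12
# Hard-core limit: recovers the 2LS result g2 -> 0
assert g2_exciton(1.0, 0.5, 0.1, 0.2, -0.3, 1e7) < 1e-8
```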
\begin{figure} \caption{(Color online) Polaritonic counterpart of Fig.~\ref{fig:JCantibunchingplat}.} \label{fig:08} \end{figure} \subsection{Conventional statistics} As in the Jaynes--Cummings model, one can identify the conventional antibunching (CA) and bunching (CB) by mapping the observed features to an underlying blockade mechanism, namely, the positions at which $N$-photon excitation occurs, which is when the laser is resonant with one of the states in the $N$-photon rung. The first rung that provides CA is given by the same Eq.~(\ref{eq:eigenstates}), with $N=1$, since this corresponds to the linear regime where both systems converge. One finds, therefore, that the two CA blue lines in Fig.~\ref{fig:08}(a), marked in solid blue in~(b), are the same as in the Jaynes--Cummings model. They coincide as well with the white regions in Fig.~\ref{fig:08}(c) where the cavity emission is enhanced. This is the standard one-photon resonance, with a blockade of photons into higher rungs due to the non-linearity now introduced by the interactions (instead of the 2LS). Higher rungs are different from the Jaynes--Cummings model, but their effects otherwise follow from the same principle and they are similarly obtained by diagonalizing the effective Hamiltonian in the corresponding $N$-excitation Hilbert subspace, that is, in the basis $\{\ket{N,0},\, \ket{N-1,1},\,\hdots \ket{0,N} \}$ (where each state is characterised by the photon and exciton number). At the two-photon level, one is interested in the second rung, which contains three eigenstates.
The expressions for the general eigenenergies are rather large but we can provide them here to first order in the interaction~$U$ in the strong coupling limit~($g\gg \gamma_a$, $\gamma_b$): \begin{subequations} \label{eq:pb2ndman} \begin{align} E_0^{(2)}&=\omega_a + \omega_b +\frac{g^2}{2 R^2}U\,,\\ E^{(2)}_{\pm}&=\omega_a + \omega_b \pm 2R +\frac{2g^2+(\omega_a - \omega_b)[(\omega_a - \omega_b)\mp 2R]}{8 R^2}U\, , \end{align} \label{eq:eigenstatesPol} \end{subequations} with~$R=\sqrt{g^2+(\omega_a - \omega_b)^2/4}$ the normal mode splitting typical of strong coupling. In this limit, $E^{(1)}_{\pm}=(\omega_a + \omega_b)/2 \pm R $. The CB lines are positioned, therefore, according to the conditions for two-photon excitation by the laser: $\omega_\mathrm{L}=\Re{E_-^{(2)}}/2$, $\Re{E_0^{(2)}}/2$, $\Re{E_+^{(2)}}/2$, in increasing order, as they appear in Fig.~\ref{fig:08}(a), marked with solid red lines in Fig.~\ref{fig:08}(b). The upper CB line, corresponding to $E_+^{(2)}$, is the faintest one in the cavity emission due to the fact that it has the most excitonic component. It is monotonically blueshifted with increasing~$U$ and becomes linear in the density plot as $E_+^{(2)}\rightarrow U$. The other two levels converge to those in the Jaynes--Cummings model in such case: $E_-^{(2)} \rightarrow -\sqrt{2}g$ and $E_0^{(2)}\rightarrow \sqrt{2}g$. \subsection{Unconventional statistics} We now shift to the unconventional features in polariton blockade. Superbunching, or UB, is found by minimization of $\mean{n_a}$ and, therefore, also occurs for the same condition as in the Jaynes--Cummings model,~$\Delta_b = \chi g \cos \phi$. The maximum superbunching is obtained at one of the crossings of UB and CB. Now turning to the more interesting unconventional antibunching (UA) features, they are found, in the polariton case as well, from the minimization of $\g{2}_a$.
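Before writing them down, note a quick consistency check (a minimal sketch in plain Python, with illustrative parameters): the polariton UA curve of Eq.~(\ref{eq:UAcurve}) below must reduce to the Jaynes--Cummings condition, Eq.~(\ref{eq:UAcondition}), in the hard-core limit $U\rightarrow\infty$, since the interacting mode then behaves as a 2LS.

```python
def delta_a_ua_polariton(Db, g, gb, U):
    # Cavity detuning on the polariton UA curve (cavity driving only)
    return (-Db - 4 * g**2 * Db / (gb**2 + 4 * Db**2)
            + 2 * g**2 * (U + 2 * Db) / (gb**2 + (U + 2 * Db)**2))

def delta_a_ua_jc(Ds, g, gs):
    # Jaynes--Cummings UA condition
    return -Ds * (1 + 4 * g**2 / (gs**2 + 4 * Ds**2))

# Hard-core limit: the U-dependent term vanishes as ~ 2 g^2 / U
assert abs(delta_a_ua_polariton(0.3, 1.0, 0.1, 1e9) - delta_a_ua_jc(0.3, 1.0, 0.1)) < 1e-6
# At moderate U the two curves differ appreciably
assert abs(delta_a_ua_polariton(0.3, 1.0, 0.1, 1.0) - delta_a_ua_jc(0.3, 1.0, 0.1)) > 1e-3
```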
Since the equations are quite bulky, only the case of cavity excitation ($\Omega_b = 0$) is included here (the full expression can be found in Appendix~\ref{app:perfectantibunching}). The UA curve is given by the solution of \begin{equation} \label{eq:UAcurve} \Delta_a = - \Delta_b - \frac{4 g^2 \Delta_b}{\gamma_b^2 + 4 \Delta_b^2} + \frac{2 g^2 (U + 2 \Delta_b)}{\gamma_b^2 + (U+2 \Delta_b)^2}, \end{equation} and the conditions for perfect antibunching come from solving the equation \begin{equation} \gamma_b \left[1 + 4 g^2 \left( - \frac{4 g^2 \Delta_b}{\gamma_b^2 + 4 \Delta_b^2} + \frac{2 g^2 (U + 2 \Delta_b)}{\gamma_b^2 + (U+2 \Delta_b)^2}\right)\right] = - \gamma_a \end{equation} and subsequently imposing that every parameter must be real (or, in the more restrictive case, real and positive), which leads to additional restrictions. \begin{figure} \caption{(Color online) Effects on the polaritonic photon statistics. Same as Fig.~\ref{fig:08}.} \label{fig:Fri8Jun183859CEST2018} \end{figure} \begin{figure} \caption{Polariton blockade. (a) Same as Fig.~\ref{fig:08}.} \label{fig:Fri8Jun183957CEST2018} \end{figure} As already noted, the polariton case adds a third CA line as compared to its Jaynes--Cummings counterpart. The correspondence between both cases is still clear, but this is largely thanks to the large interaction strength chosen in Fig.~\ref{fig:08}, namely, $U/\gamma_a=10$. This choice will allow us to quickly survey the polaritonic phenomenology based on the more thoroughly discussed Jaynes--Cummings one. Figure~\ref{fig:Fri8Jun183859CEST2018}, for instance, shows the polaritonic counterpart of Fig.~\ref{fig:Wed30May111214CEST2018} on its left panel but for one case of mixed-pumping only, highlighting the considerable reshaping of the structure and the importance of controlling, or at least knowing, the ratio of exciton and photon driving.
The right panel of Fig.~\ref{fig:Fri8Jun183859CEST2018} provides $g^{(2)}_b$, which, if compared to Fig.~\ref{fig:08}, shows that the main result is to remove all the unconventional features and retain only the conventional ones. The peaks are also fewer in the excitonic emission, producing a smoother background. Another dramatic feature of the excitonic correlations, which is apparent from Eq.~(\ref{eq:Thu31May102314CEST2018}), is that they are independent of the ratio~$\chi$ of driving, i.e., the same result is obtained if driving the cavity alone, the exciton alone, or a mixture of both, in stark contrast to the cavity correlations (cf. Fig.~\ref{fig:08}(a) and \ref{fig:Fri8Jun183859CEST2018}(a) where the only difference is that half the excitation drives the exciton in the second case rather than going fully to the cavity in the first case). This could be of tremendous value for spectroscopic characterization of such systems since it is typically difficult to know the exact type of pumping, while experimental evidence shows that both fields are indeed being driven under coherent excitation~\cite{dominici14a}. If measuring the excitonic correlations, there is no dependence on the particular type of coupling of the laser to the system. On the other hand, excitonic emission is much less straightforward to access. Note, finally, that one could similarly consider the lower and upper polariton statistics, but they are even more featureless, with correlations of the signal that merely follow the polariton branches (their expression is given in the appendix for completeness). Finally, in Fig.~\ref{fig:Fri8Jun183957CEST2018}, we focus on the effect of the interaction strength and how to optimise the observation of antibunching.
We have already emphasized that, for clarity of the connection between the Jaynes--Cummings and the polariton case, we have considered a value of~$U/\gamma_a$ substantially in excess even of the few reported cases that are themselves largely in excess of the bulk of the literature~\cite{sun17b,rosenberg18a,togan18a}. While it is not excluded that such a regime will be available in the near future, it is naturally more relevant to turn to the most common experimental configuration where $U/\gamma_a\ll 1$. We show such a case in Fig.~\ref{fig:Fri8Jun183957CEST2018}(a), where $U/\gamma_a=0.1$. We see how, as a result, the CB and CA lines of the positively-detuned case collapse one onto the other. The UA line previously in between has, in the process, disappeared. The CA and CB however do not cancel each other but merge into a characteristic dispersive-like shape, shown in panel~(b), the observation of which, predicted over a decade ago~\cite{verger06a}, has been a long-awaited result for polaritons and has indeed just recently been reported by two independent groups~\cite{arXiv_delteil18a,arXiv_munozmatutano17a}. While this shape has been regarded as an intrinsic and fundamental profile of polariton blockade, our wider picture shows how it arises instead from different features brought to close proximity by the weak interactions. The difficulty in reporting polariton blockade lies in the weak value of antibunching, which is largely due to the fact that no optimisation over the full structure of photon correlations, which was unknown until this work, has been made to find the driving configuration that yields the best antibunching for given system parameters.
\section{Summary, Discussion and Conclusion} \label{sec:jueene24231247GMT2019} We have connected a hitherto disparate and voluminous phenomenology of photon statistics in the light emitted by a variety of optical systems into a unified picture that identifies two classes of conventional and unconventional features, covering both antibunching and bunching, which leads us to a classification into CA, UA, CB and UB. One class (conventional), linked to the repulsion of real states, occurs at all orders and for all photon numbers, while the other (unconventional) occurs for a given photon number with no a priori underlying level structure. To lowest order in the driving, the dynamical response can be described by an interference between a squeezed component and a coherent component, and thus, in this picture, one can understand the photon statistics emitted by many optical systems as simply arising from the particular way each implementation finds to produce some squeezing on the one hand and some coherent field on the other hand, and to interfere them during its emission. {In agreement with the previous literature, we call} this phenomenon ``self-homodyning''. With this understanding, one can achieve considerable tailoring of photon correlations by modifying the relative importance of coherence vs squeezing, which is conveniently done by superimposing a fraction of the driving laser on the output of the system {(``homodyning'')}. This Gaussian-state approximation holds to lowest orders in the driving for the first correlation functions only. For instance, for the case of the laser-corrected two-level system (Table~\ref{tab:fluctuations}), the squeezed-coherent Gaussian description holds up to second order in the driving for the population and to first order in the driving for~$\g{2}$. Deviations occur for these observables at higher orders in the driving, while higher-order correlation functions already differ at the lowest order in the driving.
Such deviations seem to arise from the non-Gaussian nature of the quantum fluctuations in these highly non-linear systems. This remains to be studied in detail. Such a general picture can explain under a unified mechanism a wealth of observations that could otherwise appear to be peculiarities specific to a particular configuration. To take one recent example from a group that has been leading the development and applications of the type of homodyning and self-homodyning that we have studied above, in Ref.~\cite{arXiv_trivedi19a}, Trivedi \emph{et al.} study the generalization of the Jaynes--Cummings system to~$N$ emitters: the so-called Tavis--Cummings Hamiltonian. There, it is found that resonantly driving the eigenstates~\cite{laussy11a} produces conventional antibunching, flanked by unconventional antibunching for laser frequencies detuned from the one- and two-photon resonances. This is the counterpart of the situation of Fig.~\ref{fig:JCantibunchingplat}(d) (resonance) and~(e) (detuning), both also shown in Panel~(a), where increasing~$N$ has the effect of bosonizing the interacting (matter-like) part of the system or decreasing the effective nonlinearity, similarly to decreasing~$g$ for~$N=1$. Interestingly, it is reported that while in the resonant case antibunching is spoiled by an increasing number of emitters~$N$, in the presence of a detuning, one of the antibunching peaks is, on the contrary, enhanced with increasing~$N$. This apparently puzzling behaviour is easily understood once the conventional and unconventional nature of the respective antibunching lines is recognized. In the resonant case, antibunching is always conventional, and as such it is spoiled by the bosonization of the system due to its increasing number of emitters~\cite{dicke54a}, or by reducing the coupling. Since both weaken the nonlinearity in the level structure, this destroys the conventional blockade that is based on it.
With detuning, on the other hand, one finds not only conventional but also unconventional antibunching, cf.~Fig.~\ref{fig:JCantibunchingplat}(b). Their CA is also spoiled with increasing~$N$, as reported, but their UA, however, increases, which can be expected since it is due to a self-homodyning interference between the coherent and incoherent parts of the emission at the two-photon level, as explained above, and this does not suffer from a reduced nonlinearity (or increasing~$N$). It can in fact also be optimized (i.e., reduced) like all types of UA and, as a result, should even reach $g^{(2)}=0$ to lowest order for a proper choice of the detuning, which will depend on~$N$ in a way that remains to be computed. Since we have shown, however, that the interference nature of UA makes it sensitive to dephasing, and that detuning~\cite{lopezcarreno19a} results in fast oscillations in autocorrelation times, with a narrowing plateau of antibunching, one can also expect this antibunching to be particularly fragile and difficult to resolve when including a realistic model for its detection. This is consistent with the authors' finding that inhomogeneous broadening quickly spoils UA. Finally, they also find, in both the detuned and resonant cases, unconventional bunching, in the form of the large central bunching peak that is a typical feature of the general mechanism (cf.~Fig.~\ref{fig:JCantibunchingplat}). This is therefore the super-chaotic noise due to self-homodyning stripping down the emission to its mere fluctuations. As such, the interpretation in terms of two-photon bound states that is offered in Ref.~\cite{arXiv_trivedi19a} and in other works~\cite{faraon08a,dory17a} should be further analyzed and quantified.
We suspect the emission in UB to be less efficient for multiphoton physics as compared to leapfrog emission~\cite{sanchezmunoz14a}, due to the lack of a suppression mechanism for higher photon-number processes, and despite the large values of the correlation functions that they produce. In conclusion, our picture brings considerable simplification in the interpretation and identification of the various phenomena observed in a plethora of systems, in particular with respect to connecting them to each other across platforms. It clarifies the value but also the limitations of a description in terms of Gaussian states. It also opens a new route to control and fine-tune such photon correlations and make a more informed and better use of them for quantum applications. \onecolumngrid \appendix \setcounter{equation}{0} \setcounter{figure}{0} \section*{Appendices} \section{Interference between a coherent and squeezed state.} \label{app:1} In a beam splitter (with transmittance $\mathrm{T}^2$ and reflectance $\mathrm{R}^2$), the relation between the input light fields ($a$, $d$) and the output fields ($o$, $s$) is a simple unitary transformation: \begin{equation} \label{eq:BStransformations} o=\mathrm{T} d +i\mathrm{R} a \, , \quad s = i\mathrm{R} d+\mathrm{T} a \, , \quad \text{and} \quad d = \mathrm{T} o - i \, \mathrm{R} s \, , \quad a = -i \, \mathrm{R} o + \mathrm{T} s \, , \end{equation} where the real coefficients $\mathrm{T}$ and $\mathrm{R}$ must fulfil $0 \leq \mathrm{T}, \mathrm{R} \leq 1$ and $\mathrm{T}^2 + \mathrm{R}^2 = 1$. In our case of interest, input $d$ is a squeezed state $\ket{\xi} = \mathcal{S}_d \left(\xi\right) \ket{0}$ and input $a$, a coherent state $\ket{\alpha} = \mathcal{D}_a \left(\alpha\right) \ket{0}$.
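As a minimal numerical sketch (our own check, not code from the paper), the beam-splitter relation of Eq.~(\ref{eq:BStransformations}) can be written as the matrix $U = \begin{psmallmatrix} \mathrm{T} & i\mathrm{R} \\ i\mathrm{R} & \mathrm{T} \end{psmallmatrix}$ acting on $(d,a)$, whose unitarity guarantees energy conservation and yields the quoted inverse relations as $U^\dagger$:

```python
import numpy as np

# Sanity check (our own sketch): the beam-splitter relation is the unitary
# U = [[T, iR], [iR, T]] acting on (d, a); the inverse relations quoted in
# the text are then simply U^dagger.
T, R = np.cos(0.3), np.sin(0.3)                   # any pair with T^2 + R^2 = 1
U = np.array([[T, 1j * R], [1j * R, T]])

assert np.allclose(U @ U.conj().T, np.eye(2))     # unitarity
assert np.allclose(np.linalg.inv(U), U.conj().T)  # inverse = Hermitian conjugate
```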
The squeezing operator is $\mathcal{S}_d \left(\xi\right) = \exp \left[\frac{1}{2} \left(\xi^* d^2 - \xi \ud{d}^2\right)\right]$ with squeezing parameter $\xi = r e^{i \theta}$ and the displacement operator is $\mathcal{D}_a \left( \alpha \right) = \exp \left( \alpha \ud{a} - \alpha^* a \right)$ with coherent parameter $\alpha = |\alpha| e^{i \phi}$. The total input state can be written as: \begin{equation} \ket{\psi_{\mathrm{in}}} = \mathcal{D}_a \left(\alpha\right) \mathcal{S}_d \left(\xi\right) \ket{0}_{da}, \end{equation} where the state subscript indicates the input/output subspaces that the operators act upon. These operators are written in the input basis. Now, applying the transformations of Eq.~\eqref{eq:BStransformations} and rearranging terms, we obtain, first, for the displacement operator: $\mathcal{D}_a \left(\alpha\right) = \exp \left( \alpha_o \ud{o} - \alpha_o^* o + \alpha_s \ud{s} - \alpha_s^* s \right)$, where $\alpha_o = i \mathrm{R} \alpha$ and $\alpha_s = \mathrm{T} \alpha$. The exponential can be factorized since both outputs are independent from each other and commute. This leads to the simple expression: $\mathcal{D}_a \left(\alpha \right) = \mathcal{D}_o \left(\alpha_o \right) \mathcal{D}_s \left(\alpha_s\right) = \mathcal{D}_s \left(\alpha_s\right) \mathcal{D}_o \left(\alpha_o \right)$, where each displacement operator $\mathcal{D}_j \left(\alpha_j\right)$ ($j=o,s$) only acts on the assigned output. Then, the squeezing operator in the output basis reads: \begin{equation} \mathcal{S}_d \left(\xi \right) = \exp \left[ \frac{1}{2} \left(\xi_o^* o^2 - \xi_o \ud{o}^2 \right) + \frac{1}{2} \left(\xi_s^* s^2 - \xi_s \ud{s}^2 \right) + \right. \left. \left(\xi_{os}^* o s - \xi_{os} \ud{o} \ud{s} \right) \right] =\exp(S_o+S_s+S_{os})\,, \end{equation} where $\xi_o = \mathrm{T}^2 \xi$, $\xi_s = - \mathrm{R}^2 \xi$ and $\xi_{os} = i \mathrm{R} \mathrm{T} \, \xi$.
This exponential can be split into two different contributions only if $\lbrack S_o + S_s , S_{os} \rbrack =0$, which is fulfilled only in the particular case $\mathrm{T} = \mathrm{R}$ (symmetrical BS). Although this constraint might seem a serious limitation, the first correction term grows proportionally to $r^2 \mathrm{T} \mathrm{R} \left(\mathrm{T}^2-\mathrm{R}^2 \right)$. Thus, for either a weakly squeezed signal ($r \ll 1$) or an almost symmetrical BS ($\mathrm{T}-\mathrm{R} \approx 0$), the output signal can be described as follows. The commutator $\lbrack S_o , S_s \rbrack$ vanishes for all values, so the exponential simplifies to $\mathcal{S}_d \left(\xi \right) = \mathcal{S}_o \left(\xi_o \right) \mathcal{S}_s \left(\xi_s \right) \mathcal{S}_{os} \left(\xi_{os} \right)$. With the previous results, the output state can be written as: \begin{equation} \ket{\psi_{\mathrm{out}}} = \mathcal{D}_o \left(\alpha_o\right) \mathcal{S}_o \left(\xi_o\right) \mathcal{D}_s \left(\alpha_s\right) \mathcal{S}_s \left(\xi_s\right) \mathcal{S}_{os} \left(\xi_{os}\right) \ket{0}_{os} \\ = \mathcal{D}_o \left(\alpha_o\right) \mathcal{S}_o \left(\xi_o\right) \mathcal{D}_s \left(\alpha_s\right) \mathcal{S}_s \left(\xi_s\right) \ket{\xi_{os}}_{os}, \end{equation} where $\ket{\xi_{os}}$ represents a two-mode squeezed state. In the Fock basis, this state can be written as (from Ref.~\cite{guerry_book05a}, Chapter 7): \begin{equation} \ket{\xi_{os}} = \frac{1}{\cosh r_{os}} \sum_{n=0}^{\infty} \left(\tanh r_{os}\right)^n \ket{n, n}_{os}, \end{equation} where $r_{os} = |\xi_{os}| = \mathrm{R} \mathrm{T} \, r$. The corresponding density matrix for this pure state reads $\rho_{\mathrm{out}} = \ket{\psi_{\mathrm{out}}} \bra{\psi_{\mathrm{out}}}$. Tracing out output $o$, we obtain the density matrix for output $s$ alone (our signal of interest): $\rho_s = \Tr_o\{ \rho_{\mathrm{out}}\}$.
For the next step, we will use the cyclic property of the trace to permute the operators acting on the output-$o$ subspace and the identities $\ud{\mathcal{D}_o} \left(\alpha_o\right) \mathcal{D}_o \left(\alpha_o\right) = \ud{\mathcal{S}_o} \left(\xi_o\right) \mathcal{S}_o \left(\xi_o\right) = \hat{\mathds{1}}_o $, where $\hat{\mathds{1}}_o $ is the identity operator. Furthermore, any operator that only acts on the $s$-subspace can be taken out of the trace. With this, \begin{equation} \rho_s= \mathcal{D}_s \left(\alpha_s\right) \mathcal{S}_s \left(\xi_s\right) \big(\Tr_o \lbrace \ket{\xi_{os}} \bra{\xi_{os}} \rbrace \big) \ud{\mathcal{S}_s} \left(\xi_s\right) \ud{\mathcal{D}_s} \left(\alpha_s\right) \, . \end{equation} Computing the partial trace: \begin{equation} \begin{split} \Tr_o \lbrace \ket{\xi_{os}} \bra{\xi_{os}} \rbrace = & \sum_{p=0}^{\infty} \bra{p}_o \left(\frac{1}{\cosh^2 r_{os}} \sum_{n,m=0}^{\infty} \left(\tanh r_{os}\right)^{n+m} \ket{n}_o \ket{n}_s \bra{m}_s \bra{m}_o\right) \ket{p}_o \\ = & \, \frac{1}{\cosh^2 r_{os}} \sum_{n=0}^{\infty} \left(\tanh r_{os}\right)^{2n} \ket{n}_s \bra{n}_s , \end{split} \end{equation} where in the second equality we have used $\braket{m}{p}_o = \delta_{m,p}$ and $\braket{p}{n}_o = \delta_{p,n}$. The resulting density matrix has the form of a thermal state $\rho_{\mathrm{th}}$ with mean population $\mathrm{p_{th}} \equiv \pop{s}_{\mathrm{th}} = \sinh^2 r_{os}$. To sum up, the output field detected at a single arm of the system corresponds to a \textit{displaced squeezed thermal} state. Explicitly, \begin{equation} \rho_s = \mathcal{D}_s \left(\alpha_s\right) \mathcal{S}_s \left(\xi_s\right) \rho_{\mathrm{th}} \ud{\mathcal{S}_s} \left(\xi_s\right) \ud{\mathcal{D}_s} \left(\alpha_s\right), \end{equation} with parameters $\alpha_s = \mathrm{T}|\alpha| e^{i \phi}$, $\xi_s = r_s e^{i \theta_s} = \mathrm{R}^2 r\, e^{i \left( \theta + \pi \right)}$ and $ \mathrm{p_{th}} = \sinh[2](\mathrm{R}\mathrm{T} \, r)$.
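The reduced state can be checked numerically (our own sketch, with an assumed Fock truncation $N$ and a 50:50 beam splitter): the Schmidt coefficients of $\ket{\xi_{os}}$ indeed reproduce the Bose--Einstein weights of a thermal state with $\mathrm{p_{th}} = \sinh^2 r_{os}$.

```python
import numpy as np

# Numerical sketch (assumed truncation N = 40): tracing out one arm of the
# two-mode squeezed state |xi_os> leaves a thermal state of mean population
# p_th = sinh^2(r_os).
N = 40
r, T, R = 0.5, 1 / np.sqrt(2), 1 / np.sqrt(2)    # 50:50 beam splitter
r_os = R * T * r
n = np.arange(N)
c = np.tanh(r_os) ** n / np.cosh(r_os)           # Schmidt coefficients of |xi_os>
weights = c ** 2                                 # diagonal of rho_s = Tr_o rho_out

p_th = np.sinh(r_os) ** 2
thermal = (p_th / (1 + p_th)) ** n / (1 + p_th)  # Bose--Einstein distribution
assert np.allclose(weights, thermal)
assert abs(n @ weights - p_th) < 1e-10           # mean photon number

# Identity used in the next paragraph: p_th + 1/2 = (1/2) sqrt(1 + <n_d>)
assert abs(p_th + 0.5 - 0.5 * np.sqrt(1 + np.sinh(r) ** 2)) < 1e-12
```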
Even though $\mathrm{T}$ and $\mathrm{R}$ appear as parameters, the last equation only holds for $\mathrm{R} \approx \mathrm{T}$. We restrict ourselves to the case of a \textit{50:50} beam splitter ($\mathrm{T}^2 = \mathrm{R}^2 = 1/2$). The thermal population can be expressed in terms of the squeezed population of the input signal, $\av{n_d} = \sinh^2 r$: \begin{equation} \mathrm{p_\mathrm{th}} + \frac{1}{2} = \frac{1}{2} \sqrt{1+ \av{n_d}}. \end{equation} From $\rho_s$ we can compute the observables for the mixed signal: \begin{subequations} \begin{equation} \av{n_s} = \frac{|\alpha|^2}{2} + \frac{\av{n_d}}{2} \, , \quad |\av{s^2}| = \left(\mathrm{p_{th}} + \frac{1}{2}\right) \sinh(r)\,, \end{equation} \begin{equation} \label{eq:DST_g2} \g{2}_s = 1 + \av{n_s}^{-2} \sinh^2 r \left[\cosh 2r + 2 |\alpha|^2 \left(1- \cos(\theta-2\phi) \coth r\right) \right] \, , \end{equation} \begin{multline} \g{3}_s = 1 + \av{n_s}^{-3} \sinh^2 r \, \Big\lbrace \sinh^2 2r + 5 \sinh^2 r \cosh 2r + 6 |\alpha|^4 \left(1- \cos(\theta-2\phi) \coth r \right) + \\ \: 3 |\alpha|^2 \left[3 \coth^2 r - 1 + 6 \left(1- \cos(\theta-2\phi) \coth r\right) \right] \Big\rbrace \,. \end{multline} \end{subequations} \section{Decomposition of $g_s^{(3)}$ in powers of $\alpha$} \label{app:g3decomp} We provide the third-order coherence function of the signal $s$, in terms of two interfering fields, $s=\alpha + d$.
As in the case of $\g{2}_s$, the highest-order contributions in powers of $\alpha$ can be gathered into the coherent term (given by 1), yielding: \begin{equation} \g{3}_s = 1 + \sum_{m=0}^{4} \mathcal{J}_m \, , \end{equation} with \begin{align} \mathcal{J}_0 &=\frac{\corr{d}{3}{3} - \pop{d}^3 }{\av{n_s}^3} \, ,\\ \mathcal{J}_1 &= 6 \frac{ \Re[ \alpha^* \left( \corr{d}{2}{3} - \coh{d} \pop{d} \right)] }{\av{n_s}^3} \, ,\\ \mathcal{J}_2 &= 3\Big[ 2\Re[{\alpha^*}^2 \av{\ud{d} d^3} ] + |\alpha|^2 \left(3\corr{d}{2}{2} +\pop{d}^2 \right) - 4 \Re[ \alpha^* \coh{d}]^2 \Big]/ \av{n_s}^3 \,,\\ \mathcal{J}_3 &= 2 \Big[ \Re[ {\alpha^*}^3 \av{d^3} ] + |\alpha|^2 \Re[ \alpha^* \left( 9 \av{\ud{d} d^2} - \coh{d} \pop{d} \right)] - 4 \Re[ \alpha^* \coh{d}]^3 \Big]/ \av{n_s}^3 \, ,\\ \mathcal{J}_4 &= 6|\alpha|^2 \frac{ \Re[ {\alpha^*}^2 \av{d^2} ] + |\alpha|^2 \pop{d} - 2 \Re[\alpha^* \coh{d}]^2}{\av{n_s}^3}\,. \end{align} \section{Coherent and squeezed steady states} \label{ap:coh-sq} The coherent and the squeezed state described in Sec.~(\ref{sec:ThuFeb15174655CET2018}) can be obtained as the steady-state solutions of a driven cavity, described by the master equation (hereafter $ \hbar = 1 $ is assumed) \begin{equation} \label{eq:FriOct20195045CEST2017} \partial_t \rho = i[\rho,H_c]+\frac{\gamma_c}{2}\mathcal{L}_c \rho\,. \end{equation} Here~$\mathcal{L}_c=(2c\rho \ud{c} - \ud{c}c\rho - \rho\ud{c}c)$ and~$c=a,d$ are the annihilation operators of the fields generating the coherent and squeezed states. The Hamiltonians are set as $H_a=\Delta_a\ud{a}a + \Omega_a (\ud{a}+a)$ for the coherent state and $H_d=\Delta_d \ud{d}d + \lambda_d(d^{\dagger 2}+d^2)$ for the squeezed state. In the former case, $\Delta_a$ is the detuning between the cavity and the laser that excites the cavity with intensity~$\Omega_a$ and, in the latter case, it is the detuning between the cavity and the driving mode that squeezes the cavity with intensity~$\lambda_d$.
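For the coherent case, the master equation above yields the mean-field equation $\partial_t\mean{a} = -(i\Delta_a + \gamma_a/2)\mean{a} - i\Omega_a$, whose fixed point can be checked numerically (our own sketch, with assumed parameter values; only the modulus is asserted, since the phase depends on the sign convention for $\Delta_a$):

```python
import numpy as np

# Mean-field sanity check (assumed parameters): the fixed point of
# d<a>/dt = -(i Delta_a + gamma_a/2) <a> - i Omega_a has modulus
# 2 Omega_a / sqrt(gamma_a^2 + 4 Delta_a^2).
gamma_a, Delta_a, Omega_a = 1.0, 0.8, 0.05
alpha = -1j * Omega_a / (1j * Delta_a + gamma_a / 2)   # steady-state <a>

assert abs(abs(alpha) - 2 * Omega_a / np.sqrt(gamma_a**2 + 4 * Delta_a**2)) < 1e-12

# Cross-check by explicit time integration of the same mean-field equation:
a, dt = 0.0, 1e-3
for _ in range(60000):
    a += dt * (-(1j * Delta_a + gamma_a / 2) * a - 1j * Omega_a)
assert abs(a - alpha) < 1e-6
```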
The relation between the dynamical quantities and the displacement and squeezing parameters is the following: $\alpha=\mean{a}$, $\sinh^2(r)=\mean{\ud{d}d}$ and $\tan\theta= i (\mean{d^{\dagger 2}}-\mean{d^2})/(\mean{d^{\dagger 2}}+\mean{d^2})$. These two systems can be solved exactly, so the steady-state solutions for the parameters defined above are: \begin{equation} \label{eq:FriOct20213722CEST2017a} |\alpha|= \frac{2\Omega_a}{\sqrt{\gamma_a^2+4\Delta_a^2}}\quad\text{and}\quad \tan\phi = -\frac{\gamma_a}{2\Delta_a}\,, \end{equation} for the coherent state and \begin{equation} \label{eq:FriOct20213722CEST2017b} \tanh(2r)=\frac{4\lambda_d}{\sqrt{\gamma_d^2+4 \Delta_d^2}}\quad\text{and}\quad\tan\theta=\frac{\gamma_d}{2\Delta_d}\,, \end{equation} for the squeezed state. In the latter case, there is also a thermal contribution given by $\mathrm{p_{th}} = \sinh[2](r)$, so the resulting state is actually a \textit{thermal squeezed} state. \section{Steady states of light matter coupling at vanishing laser driving} \label{app:2} In this work we need to solve the steady-state dynamics of light-matter interaction in the low coherent-driving regime. The light field is a cavity mode~$a$ and the matter field can be either a 2LS~$\sigma$ or another bosonic mode~$b$. We solve the dynamics in terms of general mean values, products of system operators, which in their most general normally ordered form read~$C_{\{m,n,\mu,\nu\}}=\mean{\sigma^{\dagger m}\sigma^n a^{\dagger \mu} a^\nu}$ (with~$m$, $n \in\{0,1\}$ and $\mu$, $\nu\in \mathbb{N}$) if the matter field is a 2LS or ~$C_{\{m,n,\mu,\nu\}}=\mean{b^{\dagger m}b^n a^{\dagger \mu} a^\nu}$ (with~$m$, $n$, $\mu$, $\nu\in \mathbb{N}$) if the matter field is bosonic.
This general element follows the master equation described in the main text, which can be expressed in matrix form: \begin{equation} \label{eq:TueMay5174356GMT2009} \partial_t C_{\{m,n,\mu,\nu\}}=\sum_{{m',n',\mu',\nu'}}\mathcal{M}_{\substack{m,n,\mu,\nu\\m',n',\mu',\nu'}}C_{\{m',n',\mu',\nu'\}}\,. \end{equation} The regression matrix elements~$\mathcal{M}_{\substack{m,n,\mu,\nu\\m',n',\mu',\nu'}}$, in the case of a coupled 2LS, are given by: \begin{subequations} \label{eq:TueDec23114907CET2008} \begin{align} &\mathcal{M}_{\substack{m,n,\mu,\nu\\m,n,\mu,\nu}}=-\frac{\gamma_a}2(\mu+\nu)-\frac{\gamma_\sigma}2(m+n) + i (\mu - \nu ) \Delta_{a} + i (m-n) \Delta_\sigma \end{align} \begin{align} &\mathcal{M}_{\substack{m,n,\mu,\nu\\1-m,n,\mu,\nu}}=i\Omega_\sigma[m+2n(1-m)]\,,\quad &&\mathcal{M}_{\substack{m,n,\mu,\nu\\m,1-n,\mu,\nu}}=-i\Omega_\sigma[n+2m(1-n)]\,\\ &\mathcal{M}_{\substack{m,n,\mu,\nu\\m,n,\mu-1,\nu}}=i \Omega_a\mu\,,\quad &&\mathcal{M}_{\substack{m,n,\mu,\nu\\m,n,\mu,\nu-1}}=- i\Omega_a\nu\,,\\ &\mathcal{M}_{\substack{m,n,\mu,\nu\\m,1-n,\mu,\nu-1}}=-i g(1-n)\nu\,,\quad &&\mathcal{M}_{\substack{m,n,\mu,\nu\\1-m,n,\mu-1,\nu}}=i g(1-m)\mu\,, \\ &\mathcal{M}_{\substack{m,n,\mu,\nu\\m,1-n,\mu,\nu+1}}=-i g n \,,\quad &&\mathcal{M}_{\substack{m,n,\mu,\nu\\1-m,n,\mu+1,\nu}}=i g m \,, \\ &\mathcal{M}_{\substack{m,n,\mu,\nu\\1-m,n,\mu,\nu+1}}=2 i n g(1-m) \,,\quad &&\mathcal{M}_{\substack{m,n,\mu,\nu\\m,1-n,\mu + 1,\nu}}= -2 i m g(1-n)\,, \end{align} \end{subequations} and zero everywhere else. In the main text, we discuss first the case of resonance fluorescence, which corresponds to having only the 2LS operator~$\sigma$ and no cavity mode~$a$ (taking $g=\Omega_a=0$ here). Second, we solve the Jaynes--Cummings model with both cavity and dot driving with a phase difference between the sources, which corresponds to setting $\Omega_\sigma / \Omega_a = \chi e^{-i \phi} $ here.
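The resonance-fluorescence limit can be cross-checked by brute force (our own sketch, with assumed parameter values): solving the full Liouvillian steady state of the driven 2LS alone and comparing with the exact saturated population $\pop{\sigma} = 4\Omega_\sigma^2/(\gamma_\sigma^2 + 4\Delta_\sigma^2 + 8\Omega_\sigma^2)$, whose denominator reappears in the squeezing expressions below.

```python
import numpy as np

# Brute-force check (assumed parameters) of the g = Omega_a = 0 limit: solve
# L rho = 0 for the coherently driven 2LS and compare with the exact
# population 4 Omega^2 / (gamma^2 + 4 Delta^2 + 8 Omega^2).
gamma, Delta, Omega = 1.0, 0.3, 0.02
sm = np.array([[0, 1], [0, 0]], dtype=complex)       # sigma (|e> has index 1)
num = sm.conj().T @ sm                               # sigma^dag sigma
H = Delta * num + Omega * (sm + sm.conj().T)
I2 = np.eye(2)

# Row-major vectorization: vec(A X B) = kron(A, B^T) vec(X)
L = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
     + gamma * (np.kron(sm, sm.conj())
                - 0.5 * np.kron(num, I2)
                - 0.5 * np.kron(I2, num.T)))

M = np.vstack([L, I2.reshape(1, 4)])                 # append Tr rho = 1
b = np.zeros(5, dtype=complex); b[-1] = 1
rho = np.linalg.lstsq(M, b, rcond=None)[0].reshape(2, 2)

pop = np.real(np.trace(num @ rho))
exact = 4 * Omega**2 / (gamma**2 + 4 * Delta**2 + 8 * Omega**2)
assert abs(pop - exact) < 1e-10
```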
Similarly, for the polariton model where the matter field is an exciton (boson), we have: \begin{subequations} \begin{align} &\mathcal{M}_{\substack{m,n,\mu,\nu\\m,n,\mu,\nu}}=-\frac{\gamma_a}2(\mu+\nu)-\frac{\gamma_b}2(m+n) + i (\mu - \nu ) \Delta_{a} + i (m-n) \Delta_b + i \frac{U}{2} \left[m (m-1) - n(n-1) \right], \end{align} \begin{align} &\mathcal{M}_{\substack{m,n,\mu,\nu\\m,n,\mu-1,\nu}}=i\Omega_a \mu , &&\mathcal{M}_{\substack{m,n,\mu,\nu\\m,n,\mu,\nu-1}}=-i\Omega_a \nu \,\\ &\mathcal{M}_{\substack{m,n,\mu,\nu\\m+1,n,\mu-1,\nu}}= i g \mu\,,\quad &&\mathcal{M}_{\substack{m,n,\mu,\nu\\m,n+1,\mu,\nu-1}}=-i g \nu\,,\\ &\mathcal{M}_{\substack{m,n,\mu,\nu\\m-1,n,\mu+1,\nu}}= i g m\,,\quad &&\mathcal{M}_{\substack{m,n,\mu,\nu\\m,n-1,\mu,\nu+1}}=-i g n\,,\\ &\mathcal{M}_{\substack{m,n,\mu,\nu\\m+1,n+1,\mu,\nu}}=i U (m-n) ,\quad \end{align} \end{subequations} and, again, the remaining matrix elements are zero. These equations can be solved numerically, by choosing a high enough truncation in the number of excitations, in order to obtain the steady state ($\partial_t C_{\{m,n,\mu,\nu\}}=0$) for any given pump. However, we are interested in an analytical solution in the vanishing-driving limit ($\Omega_\sigma \rightarrow 0$ or $\Omega_a\rightarrow 0$). In this case, it is enough to recursively solve sets of truncated equations. That is, we start with the lowest-order correlators, with only one operator, which we write in vectorial form for convenience (using the JC model as an example): $\mathbf{v}_1=( \mean{a}~\mean{a^\dagger}~\mean{\sigma}~\mean{\sigma^\dagger })^\mathrm{T}$. Its equation, $\partial_t \mathbf{v}_1= M_1 \mathbf{v}_1+A_1+~\mathrm{h.~o.~t.}$, provides the steady-state value~$\mathbf{v}_1= -M_1^{-1} A_1+~\mathrm{h.~o.~t.}$, to lowest order in $\Omega_a$ (with h.~o.~t. meaning \emph{higher order terms}).
We proceed in the same way with the two-operator correlators $\mathbf{v}_2=( \mean{a^2}~\mean{a^{\dagger 2}}~\mean{a^\dagger a}~\mean{\sigma^\dagger\sigma}~\mean{\sigma^\dagger a}\,\hdots)^\mathrm{T}$, only, in this case, we also need to include the steady-state value of the one-operator correlators as part of the independent term in the equation: $\partial_t \mathbf{v}_2= M_2 \mathbf{v}_2+A_2+X_{21} \mathbf{v}_1+~\mathrm{h.~o.~t.}$. The steady state reads~$\mathbf{v}_2= -M_2^{-1} (A_2+X_{21} \mathbf{v}_1)+~\mathrm{h.~o.~t.}$, with a straightforward generalization $\mathbf{v}_N= -M_N^{-1} (A_N+\sum_{j=1}^{N-1}X_{N j} \mathbf{v}_j)+~\mathrm{h.~o.~t.}$. We aim, in particular, at obtaining photon correlators of the type~$\mean{a^{\dagger N} a^N }$ that follow~$\mean{a^{\dagger N} a^N }\sim (\Omega_a)^{2N} $, to lowest order in the driving $\Omega_a$. The normalized correlation functions~$g_a^{(N)}=\mean{a^{\dagger N} a^N }/\mean{a^\dagger a}^N $ are independent of $\Omega_a$ to lowest order; their computation requires solving the $2N$ sets of recurrent equations and taking the limit~$\lim_{\Omega_a\rightarrow 0}g_a^{(N)}$. \section{Homodyne interference with resonance fluorescence: correlations and squeezing from the fluctuations and the total signal} \label{sec:FriFeb23221121CET2018} The correlations from the fluctuations of resonance fluorescence, with operator $\epsilon = \sigma -\alpha$, can be accessed using the technique of homodyne detection explained in Sec.~(\ref{sec:ThuFeb15174655CET2018}). In this case, we feed one of the beam-splitter arms with resonance fluorescence ($d\rightarrow \sigma$) and the other with a coherent field ($a \rightarrow \beta$).
The correlators of the output of the arms as defined in Appendix~\ref{app:1}, $s= i\mathrm{R} \beta + \mathrm{T} \sigma$, are given by: \begin{equation} \label{eq:2LScorrBS} \corr{s}{n}{m} = \sum_{p=0}^n \sum_{q=0}^m \binom{n}{p} \binom{m}{q} (-i \mathrm{R}\beta^*)^{n-p} (i \mathrm{R}\beta)^{m-q} \, \mathrm{T}^{p+q} \corr{\sigma}{p}{q} \, . \end{equation} Since $\corr{\sigma}{p}{q} =0$ for $p,q >1$, this simplifies to \begin{equation} \corr{s}{n}{m} = (-i \mathrm{R}\beta^*)^{n} (i \mathrm{R}\beta)^{m} - i \mathrm{R} \mathrm{T} \ (-i \mathrm{R}\beta^*)^{n-1} (i \mathrm{R}\beta)^{m-1} \left(m \beta^* \coh{\sigma} - n \beta \coh{\sigma}^* \right) + n m \ (-i \mathrm{R}\beta^*)^{n-1} (i \mathrm{R}\beta)^{m-1} \mathrm{T}^2 \av{n_{\sigma}}\,. \end{equation} For instance, the coherent fraction and total population of the output field are: \begin{subequations} \label{eq:FriFeb23210546CET2018} \begin{align} \label{eq:FriFeb23210546CET2018a} \coh{s} &= i \mathrm{R} \beta + \mathrm{T} \coh{\sigma}\,,\\ \label{eq:FriFeb23210546CET2018b} \pop{s} &= \mathrm{R}^2|\beta|^2 + \mathrm{T}^2 \av{n_\sigma} +2 \mathrm{R} \mathrm{T} \Re [ -i \beta^* \coh{\sigma}]\,. \end{align} \end{subequations} Therefore, we can choose the coherent field to exactly compensate the coherent component of the 2LS, $\alpha=\mean{\sigma}$, i.e., setting~$\beta= i \frac{\mathrm{T}}{\mathrm{R}}\mean{\sigma}$, so that we have only the transmitted fluctuations~$s=\mathrm{T}\epsilon$. In such a case the correlators simplify even further, \begin{equation} \label{eq:2LSincohCorrs} \corr{\epsilon}{n}{m}=\corr{s}{n}{m}/ \mathrm{T}^{n+m} = (-1)^{m+n} \alpha^{m-1} \alpha^{* (n-1)} \left[ nm \ \av{n_\sigma} - \left(n+m-1 \right) \big |\alpha \big |^2\right]\,. \end{equation} With this general expression, we obtain the population and coherence functions, Eq.~(\ref{eq:FriFeb23155745CET2018}) of the main text. We want to recover and analyse the light properties of the original 2LS (before the beam splitter).
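The binomial expansion of Eq.~(\ref{eq:2LScorrBS}) can be verified directly on a $2\times 2$ matrix representation (our own numerical sketch, with an arbitrary test state and assumed parameter values), since $\beta$ is a c-number that commutes with $\sigma$:

```python
import numpy as np
from math import comb

# Numeric check (our construction) of the binomial expansion for the output
# correlators with s = i R beta + T sigma; note the factor T^{p+q} on the
# 2LS correlators.
rng = np.random.default_rng(0)
sm = np.array([[0, 1], [0, 0]], dtype=complex)        # sigma
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)                                  # random qubit state

T, R, beta = np.cos(0.4), np.sin(0.4), 0.3 - 0.2j
s = 1j * R * beta * np.eye(2) + T * sm

def corr(op, n, m):                                   # <op^dag^n op^m>
    return np.trace(np.linalg.matrix_power(op.conj().T, n)
                    @ np.linalg.matrix_power(op, m) @ rho)

for n, m in [(1, 1), (2, 1), (2, 2), (3, 2)]:
    direct = corr(s, n, m)
    expansion = sum(comb(n, p) * comb(m, q)
                    * (-1j * R * np.conj(beta)) ** (n - p)
                    * (1j * R * beta) ** (m - q)
                    * T ** (p + q) * corr(sm, p, q)
                    for p in range(n + 1) for q in range(m + 1))
    assert abs(direct - expansion) < 1e-12
```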
To analyse the light of the original 2LS (before the beam splitter), the factor $\mathrm{T}^2$ on the population should be eliminated (making $\mathrm{T}^2 \av{n_\sigma} \rightarrow \av{n_\sigma}$, $\mathrm{T}\coh{\sigma} \rightarrow \coh{\sigma}$). Note that this change is not inconsistent, given that the beam splitter divides the incoming signal in two and thus merely attenuates it by a factor~$\mathrm{T}^2$. Moreover, since the photon correlations are normalized objects, a global attenuation of the \textit{unnormalized} correlators does not change them. However, suppressing the coherent contribution of the emission is not the only possibility. We can also tune the coherent contribution by choosing $\beta' = e^{i \phi} |\beta'|$, where the amplitude is parametrized as $|\beta'|= \frac{\mathrm{R}}{\mathrm{T}} |\beta|$. Thus, we broaden the range of possible output configurations~\cite{lopezcarreno18b}. So, for the most general case, the $N$-particle correlators have the following form: \begin{equation} \label{} \g{N}_s = \frac{|\beta'|^{2(N-1)} \left( |\beta'|^2 + N^2 \av{n_\sigma} +2 N \Im \lbrace \av{\sigma} |\beta'| e^{-i \phi} \rbrace \right)} {\left(|\beta'|^2 + \av{n_\sigma}+2 \Im \lbrace \av{\sigma} |\beta'| e^{-i \phi} \rbrace \right)^N}. \end{equation} From this general expression, the amplitude $|\beta'|$ is usually expressed in a more convenient way, referenced to the driving intensity of the laser: $|\beta'| = \frac{\Omega_{\sigma}}{\gamma_{\sigma}} \mathcal{F}$. Two other important quantities are the mean and the variance of the quadratures, $\av{X_{s,\chi}} = \frac{1}{2} (e^{i\chi} \av{\ud{s}} + \mathrm{c.c.})$ and $\av{\Delta X_{s,\chi}^2} = \av{X_{s,\chi}^2} - \av{X_{s,\chi}}^2$, respectively. Explicitly, \begin{equation} \av{X_{s,\chi}} = \frac{1}{2} \left(e^{i \chi} \av{s}^* + \mathrm{c.c} \,\right). \end{equation} The mean value only depends on the total coherent contribution $\av{s}= \mathrm{T} (i \beta' + \alpha)$.
The maximum and minimum of the (normal-ordered) quadrature variance for a single mode can be obtained independently of the specific nature of the field: \begin{equation} \label{eq:sQuadDisp} \av{:\Delta X_{s}^2:}_{\mathrm{max}/\mathrm{min}} = \av{\Delta X_{s}^2}_{\mathrm{max}/\mathrm{min}} - \frac{1}{4} = \frac{1}{2} \left[\pm |\av{s^2}-\av{s}^2| + \pop{s} - |\av{s}|^2\right], \end{equation} where the signs correspond to the maximum and minimum, respectively. While the variance is a strictly positive quantity, its normal-ordered counterpart is not. The latter indicates the deviation of the variance from the vacuum value (which is $\frac{1}{4}$), so values below 0 reveal some degree of quadrature squeezing. Likewise, the angle of squeezing is generically given by: \begin{equation} \theta = \mathrm{arg} \left[\av{s^2} - \av{s}^2\right]. \end{equation} After substituting the correlators \eqref{eq:2LSincohCorrs} into \eqref{eq:sQuadDisp} and then using the steady-state solution given in Eq.~(\ref{eq:2LSrhoSS}): \begin{align} \av{:\Delta X_{s}^2:}_\mathrm{min} = -\frac{2 \mathrm{T}^2 \Omega_{\sigma}^2 \left(\gamma_{\sigma}^2 + 4 \Delta_\sigma^2 - 8 \Omega_\sigma^2 \right)}{\left(\gamma_{\sigma}^2 + 4 \Delta_\sigma^2 + 8 \Omega_\sigma^2\right)^2}, & \ \ \av{:\Delta X_{s}^2:}_\mathrm{max} = \frac{2 \mathrm{T}^2 \Omega_{\sigma}^2}{\gamma_{\sigma}^2 + 4 \Delta_\sigma^2 + 8 \Omega_\sigma^2}, \end{align} and the angle of squeezing will be: \begin{equation} \theta = \mathrm{arg} \left[\left(\gamma_{\sigma}-2 i \Delta_\sigma \right)^2\right]\,. \end{equation} It is not surprising that the factor $\mathcal{F}$ does not appear, since all the squeezing properties come exclusively from the fluctuations. The strength of this effect is reduced by the factor $\mathrm{T}^2$, as the input signal $\sigma$ ($\alpha + \epsilon$) is split by the beam splitter; this factor can naturally be absorbed into $\Omega_\sigma^2$.
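Equation~(\ref{eq:sQuadDisp}) can be checked on a special case (an assumed textbook example, not from the paper): for a pure squeezed vacuum, $\av{s}=0$, $\av{s^2}=-e^{i\theta}\sinh r\cosh r$ and $\pop{s}=\sinh^2 r$, and the formula reproduces the standard values $(e^{\mp 2r}-1)/4$ for the minimum/maximum normal-ordered variances.

```python
import numpy as np

# Special-case check of the max/min normal-ordered quadrature variance on a
# pure squeezed vacuum (theta = 0 assumed for simplicity).
r = 0.6
s2 = -np.sinh(r) * np.cosh(r)       # <s^2> for theta = 0
ns = np.sinh(r) ** 2                # <n_s>
vmin = 0.5 * (-abs(s2) + ns)
vmax = 0.5 * (+abs(s2) + ns)

assert abs(vmin - (np.exp(-2 * r) - 1) / 4) < 1e-12
assert abs(vmax - (np.exp(+2 * r) - 1) / 4) < 1e-12
assert vmin < 0 < vmax              # squeezing below the vacuum level
```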
Now we are interested in the low-driving regime ($\Omega_{\sigma} \rightarrow 0$), where, at lowest order in $\Omega_\sigma$, the previous expressions simplify to \begin{equation} \label{eq:2LSlowpumpDisp} \av{:\Delta X^2_s:}_{\mathrm{max}/\mathrm{min}} \approx \pm \frac{2 \Omega_{\sigma}^2}{ \gamma_{\sigma}^2 + 4 \Delta^2_\sigma}. \end{equation} In this limit, both dispersions are symmetrical (with opposite signs). We can regard these expressions as the limit of low squeezing (and low coherent intensity $|\alpha|^2$) of a displaced squeezed thermal state (although the coherent part does not contribute to the variance). Such states have the following dispersion when $r \rightarrow 0$: \begin{equation} \label{eq:DSTlowsqueezedDispersion} \av{:\Delta X^2:}_{\mathrm{max}/\mathrm{min}}^{\mathrm{DST}} \approx \frac{1}{4} \left[\left(1 \pm 2 r\right) \left(1 + 2 \av{n_\mathrm{th}}\right)-1\right] \approx \pm \, \frac{r}{2}, \end{equation} where the superscript on the average indicates that the observable corresponds to a \textit{displaced squeezed thermal} state. We have approximated $1 + 2 \av{n_\mathrm{th}}$ by $1$ since the thermal population grows like $\Omega_{\sigma}^4$ (which comes from the first order of the incoherent population). Comparing \eqref{eq:2LSlowpumpDisp} with \eqref{eq:DSTlowsqueezedDispersion}, we find that the incoherent population in the Heitler regime behaves like a squeezed thermal state with squeezing parameter \begin{equation} \label{eq:2LSreff} r_\mathrm{eff} = \frac{4 \Omega_{\sigma}^2}{ \gamma_{\sigma}^2 + 4 \Delta_{\sigma}^2}, \end{equation} and \emph{effective} thermal population $\mathrm{p}_\mathrm{th}$: \begin{equation} \label{eq:2LSntheff} \mathrm{p}_\mathrm{th} \approx \frac{16 \Omega_\sigma^4}{\left( \gamma_{\sigma}^2 + 4 \Delta_{\sigma}^2\right)^2 }. \end{equation} From these two parameters an effective $\g{2}$, namely $\g{2}_\mathrm{eff}$, can be obtained for the fluctuations.
Supposing that, in the low-excitation regime, the fluctuations behave similarly to a squeezed thermal state, $\g{2}_\epsilon$ should have the same form. Fixing $|\alpha| = 0$ in Eq.~\eqref{eq:DST_g2} and taking the limit $r^2 \rightarrow 0$ and $\mathrm{p}_\mathrm{th} \rightarrow 0$ (both go to 0 with the same power dependence), we get \begin{equation} \g{2}_\mathrm{eff} \approx \frac{r_\mathrm{eff}^2}{\left(r_\mathrm{eff}^2 +\mathrm{p}_\mathrm{th}\right)^2}, \end{equation} which, after substituting Eqs.~\eqref{eq:2LSreff}-\eqref{eq:2LSntheff}, reads \begin{equation} \label{eq:2LSg2eff} \g{2}_\mathrm{eff} \approx \frac{\left(\gamma_{\sigma}^2 + 4 \Delta_{\sigma}^2\right)^2} {\Omega_{\sigma}^4}. \end{equation} \section{Wavefunction approximation method at vanishing pumping regime} \label{app:3} In the context of this paper, the wavefunction approximations~\cite{visser95a} consist of assuming that the state of the system, composed of two fields with annihilation operators~$\xi$ and~$c$ following either fermionic or bosonic algebra, can be approximated by a pure state, which in the Fock-state basis reads \begin{equation} \label{eq:WedFeb28112316CET2018} \ket{\psi} = \sum_{n,m} \mathcal{C}_{nm} \ket{n}_c\ket{m}_\xi \equiv \sum_{n,m} \mathcal{C}_{nm} \ket{n\,,m} \,, \end{equation} where~$\mathcal{C}_{nm}$ are the probability amplitudes of having~$n$ photons in the field described by operator~$c$ and~$m$ photons in the field described by operator~$\xi$; the summation runs up to the allowed number of excitations: one for a fermionic field and~$N$ for a bosonic one.
Given that the dynamics of the system is governed by the master equation \begin{equation} \label{eq:WedFeb28114318CET2018} \partial_t \rho = i[\rho,H] + \sum_k (\tilde \Gamma_k/2) \mathcal{L}_{j_k}\rho\,, \end{equation} where~$H$ is the Hamiltonian of the system and we have assumed that the dissipation is given by ``jump operators''~$j_k$ at rates~$\tilde\Gamma_k$, the dynamics of the wavefunction is given by the Schr\"odinger equation \begin{equation} \label{eq:WedFeb28113458CET2018} \partial_t \ket{\psi} = - i H_\mathrm{eff}\ket{\psi} \end{equation} where~$H_\mathrm{eff}$ is a non-Hermitian Hamiltonian constructed as~$ H_\mathrm{eff}=H-\frac{i}{2}\sum_k \tilde\Gamma_k\, \ud{j_k} j_k$, and the coefficients evolve as \begin{equation} \label{eq:coeffeqs} \partial_t \, \mathcal{C}_{nm} = -i \sum_{p,q} \bra{n\,,m}H_\mathrm{eff} \ket{p\,,q} \mathcal{C}_{pq}\,. \end{equation} In the following sections we make explicit both the effective Hamiltonians and the differential equations for the coefficients for all the systems considered in the main text. \subsection{Two-level system in the Heitler regime} The Hamiltonian describing the excitation of a sensor (a cavity) by the emission of a 2LS, which in turn is driven in the Heitler regime by a laser, is given in Eq.~\eqref{eq:Thu31May103357CEST2018}. To complete the analogy with the beam-splitter setting and be consistent with the main text, both the driving and the coupling of the sensor have to be defined in terms of the coherent source amplitude $|\beta|$ and the BS parameters $\mathrm{T}$ and $\mathrm{R}$: $\Omega_a \rightarrow i \mathrm{R}|\beta|$, $g \rightarrow \mathrm{T} g$. The system and the driving source are not necessarily at resonance, so we define the detuning $\Delta_{\sigma} = \omega_\sigma - \omega_{\mathrm{L}}$.
The effective Hamiltonian that describes the dynamics in the wavefunction approximation is \begin{equation} \label{eq:MonMar5160912CET2018} H_\mathrm{eff} = H -\frac{i}{2}\left ( \gamma_\sigma \ud{\sigma}\sigma + \Gamma \ud{a}a \right)\,, \end{equation} where~$H$ is the Hamiltonian in Eq.~\eqref{eq:Thu31May103357CEST2018}. Replacing the effective Hamiltonian of Eq.~(\ref{eq:MonMar5160912CET2018}) into the expression in Eq.~(\ref{eq:WedFeb28113458CET2018}), we obtain the differential equations for the coefficients of interest: \begin{subequations} \label{eq:MonMar5161258CET2018} \begin{align} i \partial_t \mathcal{C}_{01} &= \Omega_\sigma + \mathrm{T}\,g\, \mathcal{C}_{10} - i \mathrm{R}|\beta|e^{-i\phi} \mathcal{C}_{11}+\left(\Delta_{\sigma}- i\frac{\gamma_\sigma}{2} \right) \mathcal{C}_{01}\,,\\ i \partial_t \mathcal{C}_{10} &= i \mathrm{R} |\beta|e^{i\phi} + \Omega_\sigma \mathcal{C}_{11}+ \mathrm{T}\,g\, \mathcal{C}_{01} - i \sqrt{2} \mathrm{R}|\beta|e^{-i\phi} \mathcal{C}_{20}-i\frac{\Gamma}{2} \mathcal{C}_{10}\,,\\ i \partial_t \mathcal{C}_{11} &= \Omega_\sigma \mathcal{C}_{10} + i \mathrm{R}|\beta| e^{i\phi} \mathcal{C}_{01}+\sqrt{2} \mathrm{T}\,g\,\mathcal{C}_{20} + \left(\Delta_\sigma-i\frac{\gamma_\sigma+\Gamma}{2} \right) \mathcal{C}_{11}\,,\\ i \partial_t \mathcal{C}_{20} & = \sqrt{2}\mathrm{T}\,g\,\mathcal{C}_{11} + i \sqrt{2} \mathrm{R}|\beta|e^{i \phi}\mathcal{C}_{10} - i\Gamma \mathcal{C}_{20}\,, \end{align} \end{subequations} where we have assumed that the driving of the 2LS is low enough so that the states with three or more excitations can be safely neglected.
Assuming that the coherent field that drives the sensor can be written as a fraction of the field that drives the two-level system ($|\beta| = g \frac{\mathrm{T} \Omega_{\sigma}}{\mathrm{R} \gamma_\sigma} \mathcal{F} $), in close analogy with Eq.~(\ref{eq:SatFeb24130524CET2018}), and to leading order in the coupling and the driving intensity of the two-level system, the solution to Eq.~(\ref{eq:MonMar5161258CET2018}) is \begin{subequations} \label{eq:MonMar5165614CET2018} \begin{align} \mathcal{C}_{01} &= -\frac{2i\Omega_\sigma}{\gamma_\sigma + 2 i \Delta_\sigma}\,,\\ \mathcal{C}_{10} &= -\frac{2g\Omega_\sigma \mathrm{T}}{\Gamma} \left(\frac{2}{\gamma_\sigma + 2 i \Delta_\sigma}-\frac{\mathcal{F}e^{i\phi}}{\gamma_\sigma}\right)\,,\\ \mathcal{C}_{11} &= -\frac{4 i g \Omega_\sigma^2 \mathrm{T} }{\Gamma \gamma_\sigma \left(\gamma_\sigma + 2 i \Delta_{\sigma}\right) \left(\gamma_\sigma + \Gamma + 2 i \Delta_{\sigma}\right) } [-2\gamma_\sigma +\mathcal{F}(\gamma_\sigma +\Gamma+ 2 i \Delta_\sigma)e^{i\phi}]\,,\\ \mathcal{C}_{20}&= \frac{2\sqrt{2} g^2\Omega_\sigma^2 \mathrm{T}^2}{\gamma_\sigma^2 \Gamma^2 \left(\gamma_\sigma + 2 i \Delta_{\sigma}\right) \left(\gamma_\sigma + \Gamma + 2 i \Delta_{\sigma}\right)} \times \\ & \ \ [4 \gamma _{\sigma }^2+\mathcal{F}^2 e^{2 i \phi } \left(\gamma _{\sigma }+2 i \Delta _{\sigma }\right) \left(\gamma _{\sigma }+\Gamma +2 i \Delta _{\sigma }\right)-4 \mathcal{F} e^{i \phi } \gamma _{\sigma } \left(\gamma _{\sigma }+\Gamma +2 i \Delta _{\sigma }\right)]\,. \end{align} \end{subequations} The population of both the 2LS and the cavity, and the~$\g{2}_a$ can be obtained from the coefficients in Eq.~(\ref{eq:MonMar5165614CET2018}). However, to recover some information from the unfiltered signal ($\Gamma \rightarrow \infty$), this limit has to be performed carefully and the previous expressions need some manipulation first.
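Before taking that limit, the finite-$\Gamma$ solutions themselves can be checked. Keeping only the leading order in $g$ and $\Omega_\sigma$, Eqs.~(\ref{eq:MonMar5161258CET2018}a,b) become algebraic in the steady state (the higher-order feedback of $\mathcal{C}_{11}$ and $\mathcal{C}_{20}$ is dropped) and reproduce the closed forms above. A short Python sketch, with purely illustrative parameter values:

```python
import cmath

# hypothetical parameter values for this sketch
Om, gam, Det = 1e-2, 1.0, 0.3       # 2LS driving, decay and detuning
g, T, R, Gam = 0.05, 0.8, 0.6, 5.0  # coupling, BS parameters, sensor decay
F, phi = 0.7, 0.4
beta = g * T * Om * F / (R * gam)   # sensor driving amplitude |beta|
e = cmath.exp(1j * phi)

# leading-order steady state of Eqs. (MonMar5161258CET2018a,b):
# the feedback terms in C11 and C20 are of higher order and are dropped
C01 = -Om / (Det - 1j * gam / 2)
C10 = (1j * R * beta * e + T * g * C01) * 2 / (1j * Gam)

# closed forms of Eq. (MonMar5165614CET2018)
C01_cf = -2j * Om / (gam + 2j * Det)
C10_cf = -(2 * g * Om * T / Gam) * (2 / (gam + 2j * Det) - F * e / gam)
assert abs(C01 - C01_cf) < 1e-12 and abs(C10 - C10_cf) < 1e-12
```

The agreement is exact (up to floating-point rounding), since at this order the truncated equations are linear.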
A new set of coefficients is defined as $\mathcal{C}'_{ij} = \left(\frac{\Gamma}{2\mathrm{T} g} \right)^i \mathcal{C}_{ij}$, so that any explicit dependence on the sensor parameters disappears, resulting in a non-vanishing (finite) solution after the proper limit is taken. After the substitution and taking the limit, the new coefficients are \begin{subequations} \begin{align} \mathcal{C}'_{01} &= -\frac{2i\Omega_\sigma}{\gamma_\sigma + 2 i \Delta_\sigma} \,,\\ \mathcal{C}'_{10} &= \Omega_{\sigma} \left(\frac{e^{i \phi} \mathcal{F}}{\gamma_\sigma} - \frac{2}{\gamma_\sigma + 2 i \Delta_\sigma}\right) \,,\\ \mathcal{C}'_{11} &=- \frac{2 i e^{i \phi} \mathcal{F} \Omega_\sigma^2}{\gamma_\sigma \left(\gamma_\sigma + 2 i \Delta_{\sigma}\right)}\,,\\ \mathcal{C}'_{20}&= \frac{e^{i \phi} \mathcal{F} \Omega_\sigma^2}{\gamma_\sigma^2 \left(\gamma_\sigma + 2 i \Delta_{\sigma}\right)} \left[e^{i \phi} \mathcal{F} \left(\gamma_\sigma+ 2 i \Delta_\sigma \right) -4 \gamma_\sigma\right] \,. \end{align} \end{subequations} Now, these solutions provide useful information about the equivalent filtered signal: $\av{n_a} \approx |\mathcal{C}'_{10}|^2$, $\mathrm{P}_{20} = |\mathcal{C}'_{20}|^2$ (probability of the two-photon state) and~$\g{2}_a \approx 2|\mathcal{C}'_{20}|^2/ |\mathcal{C}'_{10}|^4$. The cancellation of the coefficient~$\mathcal{C}'_{20}$, and therefore of~$\g{2}_a$, yields the condition on the attenuation factor \begin{equation} \label{eq:TueMar6111957CET2018} \mathcal{F} = \frac{4 e^{-i \phi}\gamma_\sigma}{\gamma_\sigma + 2 i \Delta_{\sigma}}\,, \end{equation} which can only be satisfied---$\mathcal{F}$~is an attenuation factor, and thus a real number---when the relative phase between the driving field and the 2LS coherent contribution is either 0 or $\pi$ (opposite phase), in agreement with Fig.~\ref{fig:WedNov29150123CET2017}(a-d) in the main text.
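The condition of Eq.~(\ref{eq:TueMar6111957CET2018}) can be verified by substituting it back into the bracket of $\mathcal{C}'_{20}$, which then vanishes identically; a minimal Python sketch (all parameter values are arbitrary):

```python
import cmath

gam, Det, Om, phi = 1.0, 0.35, 1e-2, 0.4   # hypothetical 2LS parameters
e = cmath.exp(1j * phi)

# attenuation factor of Eq. (TueMar6111957CET2018):
# F = 4 exp(-i phi) gamma / (gamma + 2 i Delta)
F = 4 * gam / (e * (gam + 2j * Det))

# C'_20 from the closed form above: the square bracket cancels exactly
C20p = (e * F * Om**2 / (gam**2 * (gam + 2j * Det))
        * (e * F * (gam + 2j * Det) - 4 * gam))
assert abs(C20p) < 1e-12
```

The cancellation holds for any complex $\mathcal{F}$ of this form; the physical restriction to real $\mathcal{F}$ is what selects the phases $0$ and $\pi$.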
Note, as well, that the cancellation of the coefficient~$\mathcal{C}_{10}$, and therefore of the population of the cavity, is obtained when~$\mathcal{F} = \frac{2 e^{-i \phi}\gamma_\sigma}{\gamma_\sigma + 2 i \Delta_{\sigma}}$, which is a real number for the same phases for which Eq.~(\ref{eq:TueMar6111957CET2018}) is a real number. \subsection{Jaynes--Cummings blockade} \label{sec:WedFeb28173330CET2018} The Hamiltonian describing the Jaynes--Cummings model is given in Eq.~(\ref{eq:Thu31May103357CEST2018}), and the dynamics is complemented with a master equation that takes into account the decay of the 2LS with rate~$\gamma_\sigma$ and of the cavity with rate~$\gamma_a$. As such, the effective Hamiltonian that describes the dynamics in the wavefunction approximation is \begin{equation} \label{eq:WedFeb28144028CET2018} H_\mathrm{eff} = \left(\Delta_a - i \frac{\gamma_a}{2} \right) \ud{a} a \ + \left(\Delta_\sigma - i \frac{\gamma_\sigma}{2} \right) \ud{\sigma} \sigma + g (\ud{a} \sigma + \ud{\sigma} a ) + \Omega_a (e^{i \phi}\ud{a} + e^{-i \phi} a) + \Omega_{\sigma} (\ud{\sigma} + \sigma)\,.
\end{equation} Replacing the Hamiltonian of Eq.~(\ref{eq:WedFeb28144028CET2018}) into Eq.~(\ref{eq:coeffeqs}), we find that the differential equations for the relevant coefficients are as follows, \begin{subequations} \label{eq:WedFeb28144227CET2018} \begin{align} i \partial_t \,\mathcal{C}_{10}&= e^{i \phi} \Omega_a + \left(\Delta_a - i \frac{\gamma_a}{2}\right) \mathcal{C}_{10} + g\, \mathcal{C}_{01} + \Omega_{\sigma} \, \mathcal{C}_{11}\ + \sqrt{2} e^{-i \phi} \Omega_a \, \mathcal{C}_{20} \,, \\ i \partial_t \,\mathcal{C}_{01}&= \Omega_{\sigma} + g \, \mathcal{C}_{10} + \left(\Delta_\sigma - i \frac{\gamma_\sigma}{2}\right) \mathcal{C}_{01} + e^{-i \phi} \Omega_a \, \mathcal{C}_{11}\,, \\ i \partial_t \,\mathcal{C}_{11}&= e^{i \phi} \Omega_a \, \mathcal{C}_{01} + \Omega_{\sigma} \, \mathcal{C}_{10}+\left(\Delta_a +\Delta_\sigma - i \frac{\gamma_a + \gamma_\sigma}{2}\right) \mathcal{C}_{11} + \sqrt{2} g\, \mathcal{C}_{20}\,, \\ i \partial_t \,\mathcal{C}_{20}&= \sqrt{2} e^{i \phi} \Omega_a \, \mathcal{C}_{10} + \sqrt{2} g \,\mathcal{C}_{11}+ \left(2 \Delta_a - i \gamma_a \right) \mathcal{C}_{20}\,, \end{align} \end{subequations} where we have assumed that the driving is low enough for the states containing three or more photons to be neglected. The steady-state solution for the coefficients in Eq.~(\ref{eq:WedFeb28144227CET2018}) is obtained when the derivatives on the left-hand side of the equations vanish.
Thus, assuming that the coefficient of the vacuum dominates over all the others, i.e.,~$\mathcal{C}_{00}\approx 1$, and to leading order in the driving of the cavity, the coefficients are \begin{subequations} \label{eq:WedFeb28145309CET2018} \begin{align} \mathcal{C}_{10} &= \Omega_a \, \frac{2 e^{i \phi} \left( 2 \Delta_\sigma - i \gamma _{\sigma } \right) - 4 \chi g}{4 g^2+\left(\gamma _a+2 i \Delta_a\right) \left(\gamma _{\sigma }+2 i \Delta_\sigma \right)}, \\ \mathcal{C}_{01} &= \Omega_a \, \frac{2 \chi \left( 2 \Delta_a - i \gamma _a \right) - 4 e^{i \phi} g}{4 g^2+\left(\gamma _a+2 i \Delta_a\right) \left(\gamma _{\sigma }+2 i \Delta_\sigma \right)}, \\ \mathcal{C}_{11} & = 4 \Omega_a^2 \, \frac{[2 e^{i \phi} g - \chi (2 \Delta_a - i \gamma_a)] [2 g \chi + i e^{i \phi} (\tilde{\gamma}_{11} + 2 i \tilde{\Delta}_{11})]} { [4 g^2+\left(\gamma _a+2 i \Delta_a\right) \left(\gamma _{\sigma }+2 i \Delta_\sigma \right)] [4 g^2+\left(\gamma _a+2 i \Delta_a\right) (\tilde{\gamma}_{11} +2 i \tilde{\Delta}_{11})]},\\ \mathcal{C}_{20} &= \sqrt{8} \Omega _a^2 \, \frac{4 g^2 \chi^2 + 4 i e^{i \phi} g \chi (\tilde{\gamma}_{11} +2 i \tilde{\Delta}_{11}) +e^{i 2 \phi} [4 g^2 - (\gamma_\sigma + 2 i \Delta_\sigma) (\tilde{\gamma}_{11} +2 i \tilde{\Delta}_{11})] } { [4 g^2+\left(\gamma _a+2 i \Delta_a\right) \left(\gamma _{\sigma }+2 i \Delta_\sigma \right)] [4 g^2+\left(\gamma _a+2 i \Delta_a\right) (\tilde{\gamma}_{11} +2 i \tilde{\Delta}_{11})]}, \end{align} \end{subequations} where $\Delta_c = \omega_c - \omega_\mathrm{L}$ (for $c = a, \sigma$), $\chi = \Omega_{\sigma}/ \Omega_a$ is the ratio between the 2LS and cavity drivings, and $\tilde{\Delta}_{ij} = i \Delta_a + j \Delta_{\sigma}$ (with the same notation for $\tilde{\gamma}_{ij}$).
The population of both the 2LS and the cavity, and the~$\g{2}_a$, can be obtained from the coefficients in Eq.~(\ref{eq:WedFeb28145309CET2018}) as~$\mean{n_a} = |\mathcal{C}_{10}|^2$, $\mean{n_\sigma}= |\mathcal{C}_{01}|^2$ and~$\g{2}_a = 2|\mathcal{C}_{20}|^2/ |\mathcal{C}_{10}|^4$, respectively, which coincide with the expressions given in Eq.~\eqref{eq:Tue29May182623CEST2018} and Eq.~\eqref{eq:Tue29May184632CEST2018} of the main text. \subsection{Exciton-polaritons blockade} \label{sec:FriMar2181711CET2018} The Hamiltonian describing exciton-polaritons is given in Eq.~(\ref{eq:FriMar2182347CET2018}), and the dynamics is complemented with a master equation that takes into account the decay of the cavity with rate~$\gamma_a$ and of the excitons with rate~$\gamma_b$. Thus, the effective Hamiltonian that describes the dynamics in the wavefunction approximation is \begin{equation} \label{eq:FriMar2182643CET2018} H_{\mathrm{eff}} = \left(\Delta_a - i \frac{\gamma_a}{2}\right) \ud{a} a + \left(\Delta_{b}- i \frac{\gamma_b}{2}\right) \ud{b} b + g \left(\ud{a} b + \ud{b} a \right) + \Omega_a \left(e^{i\phi} \ud{a} + e^{-i \phi} a\right) + \Omega_b (\ud{b} + b) + \frac{U}{2} \ud{b} \ud{b} b b\,.
\end{equation} Replacing the Hamiltonian in Eq.~(\ref{eq:FriMar2182643CET2018}) in Eq.~(\ref{eq:coeffeqs}), we find that the differential equations for the relevant coefficients are \begin{subequations} \label{eq:polaritonSteadyState_wavefunc} \begin{align} i \partial_t\, \mathcal{C}_{10} &= e^{i\phi} \Omega_a + \left(\Delta_a - i \frac{\gamma_a}{2}\right) \mathcal{C}_{10} + g \ \mathcal{C}_{01} + \Omega_b \, \mathcal{C}_{11} + \sqrt{2} e^{-i\phi} \Omega_a \, \mathcal{C}_{20}\,, \\ i \partial_t \,\mathcal{C}_{01} &= \Omega_b + g \ \mathcal{C}_{10} + \left(\Delta_b - i \frac{\gamma_b}{2}\right) \mathcal{C}_{01} + e^{-i \phi} \Omega_a \, \mathcal{C}_{11} + \sqrt{2} \Omega_b \, \mathcal{C}_{02} \,, \\ i \partial_t \,\mathcal{C}_{11} &= e^{i \phi} \Omega_a \, \mathcal{C}_{01} + \Omega_b \, \mathcal{C}_{10} + \left(\Delta_a +\Delta_b - i \frac{\gamma_a + \gamma_b}{2}\right) \mathcal{C}_{11} + \sqrt{2} g \left( \mathcal{C}_{20} + \mathcal{C}_{02} \right)\,, \\ i \partial_t \,\mathcal{C}_{20} &= \sqrt{2} e^{i \phi} \Omega_a \, \mathcal{C}_{10} + \sqrt{2} g \, \mathcal{C}_{11}+ \left(2 \Delta_a - i \gamma_a \right) \mathcal{C}_{20}\,, \\ i \partial_t\, \mathcal{C}_{02} &= \sqrt{2} \Omega_b \, \mathcal{C}_{01} + \sqrt{2} g \, \mathcal{C}_{11}+ \left(2 \Delta_b + U - i \gamma_b \right) \mathcal{C}_{02}\,, \end{align} \end{subequations} where we have assumed that the driving is low enough for the states with three or more photons to be neglected. The steady-state solution of the coefficients in Eq.~(\ref{eq:polaritonSteadyState_wavefunc}) is obtained when the derivatives on the left-hand side of the equations vanish.
Thus, assuming that the coefficient of the vacuum dominates over all the others, i.e.,~$\mathcal{C}_{00}\approx 1$, and to leading order in the driving of the cavity, the coefficients are \begin{subequations} \label{eq:FriMar2190022CET2018} \begin{align} \mathcal{C}_{10} &= 2 \Omega_a \, \frac{e^{i \phi} (2 \Delta_b - i \gamma_b) - 2 g \chi} {4 g^2+(\gamma _a + 2 i \Delta_{a}) (\gamma_b + 2 i \Delta_{b})}\,, \\ \mathcal{C}_{01} &=2 \Omega_a \, \frac{\chi (2 \Delta_a - i \gamma_a) - 2 e^{i \phi} g } {4 g^2+(\gamma _a + 2 i \Delta_{a}) (\gamma_b + 2 i \Delta_{b})}\,, \\ \mathcal{C}_{11} &= 4 \Omega_a^2 \, [-2 i e^{i \phi} g + (\gamma_a + 2 i \Delta_a)] [e^{i \phi} (U + 2 \Delta_b - i \gamma_b)(\tilde{\gamma}_{11} + 2 i \tilde{\Delta}_{11}) - i \chi g (U + 2 \tilde{\Delta}_{11} - i \tilde{\gamma}_{11})] /\mathcal{N} \,, \\ \mathcal{C}_{20} & = \begin{aligned}[t] & i \sqrt{8} \Omega_a^2 \, \lbrace 4 \chi^2 g^2 (U + \tilde{\Delta}_{11} - i \tilde{\gamma}_{11})+ 4 i e^{i \phi} \chi g (U + 2 \Delta_b - i \gamma_b) (U + \tilde{\Delta}_{11} - i \tilde{\gamma}_{11}) + \\ & \, e^{i 2 \phi} [4 g^2 U - (\gamma_b + 2 i \Delta_b)(\tilde{\gamma}_{11} + 2 i \tilde{\Delta}_{11}) (U + \Delta_b - i \gamma_b)]\rbrace / \ \mathcal{N}, \end{aligned} \\ \mathcal{C}_{02}& = i \sqrt{8} \Omega_a^2 \, (\tilde{\gamma}_{11} + 2 i \tilde{\Delta}_{11}) (2 e^{i \phi} + i \chi \gamma_a - 2 \chi \Delta_a)^2 / \ \mathcal{N}\,, \end{align} \end{subequations} where we have used \begin{equation} \mathcal{N}= [4 g^2 + (\gamma_a + 2 i \Delta_a)(\gamma_b + 2 i \Delta_b)] [ (\gamma_a + 2 i \Delta_a) (\tilde{\gamma}_{11} + 2 i \tilde{\Delta}_{11}) (U + 2 \Delta_b - i \gamma_b) + 4 g^2 (U + 2 \tilde{\Delta}_{11} - i \tilde{\gamma}_{11})], \end{equation} and $\chi$, $\tilde{\gamma}_{ij}$ and $\tilde{\Delta}_{ij}$ share the same definitions as described above, with $\sigma$ replaced by $b$.
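The blockade mechanism behind these coefficients can be illustrated by solving the truncated system of Eq.~(\ref{eq:polaritonSteadyState_wavefunc}) numerically: for $U=0$ the system is linear and emits coherent light, $\g{2}_b = 1$, whereas a large $U$ suppresses the two-exciton amplitude. A Python sketch with hypothetical parameters at resonance and cavity driving only ($\chi = 0$):

```python
import numpy as np

def polariton_stationary(Om_a, g, Da, Db, ga, gb, U, phi, chi=0.0):
    """Steady state of the truncated Eqs. (eq:polaritonSteadyState_wavefunc)
    with C00 ~ 1 as the source; unknowns (C10, C01, C11, C20, C02)."""
    Om_b, e, r2 = chi * Om_a, np.exp(1j * phi), np.sqrt(2)
    M = np.array([
        [Da - 0.5j * ga, g,              Om_b,                       r2 * Om_a / e,    0],
        [g,              Db - 0.5j * gb, Om_a / e,                   0,                r2 * Om_b],
        [Om_b,           e * Om_a,       Da + Db - 0.5j * (ga + gb), r2 * g,           r2 * g],
        [r2 * e * Om_a,  0,              r2 * g,                     2 * Da - 1j * ga, 0],
        [0,              r2 * Om_b,      r2 * g,                     0,                2 * Db + U - 1j * gb],
    ], dtype=complex)
    b = np.array([-e * Om_a, -Om_b, 0, 0, 0], dtype=complex)
    return np.linalg.solve(M, b)

def g2_b(U):
    # resonance, cavity driving only (chi = 0); hypothetical rates
    C10, C01, C11, C20, C02 = polariton_stationary(
        Om_a=1e-5, g=2.0, Da=0.0, Db=0.0, ga=1.0, gb=1.0, U=U, phi=0.0)
    return 2 * abs(C02)**2 / abs(C01)**4

assert abs(g2_b(0.0) - 1) < 1e-4  # U = 0: linear system, coherent light
assert g2_b(20.0) < 0.1           # strong nonlinearity: exciton blockade
```

The same routine also reproduces the cavity statistics via $\g{2}_a = 2|\mathcal{C}_{20}|^2/|\mathcal{C}_{10}|^4$.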
The population of both the cavity and the excitons, and the~$\g{2}_{a,b}$, can be obtained from the coefficients in Eq.~(\ref{eq:FriMar2190022CET2018}) as~$ \mean{n_a} = |\mathcal{C}_{10}|^2$, $\mean{n_b}= |\mathcal{C}_{01}|^2$, $\g{2}_a = 2|\mathcal{C}_{20}|^2/ |\mathcal{C}_{10}|^4$ and~$\g{2}_b = 2|\mathcal{C}_{02}|^2/ |\mathcal{C}_{01}|^4$, respectively, which coincide with the expressions given in Eq.~(\ref{eq:Thu31May102314CEST2018}) of the main text. \section{$\g{2}$ decomposition} \label{app:4} \subsection{Jaynes--Cummings model} Given the expressions for the $\g{2}$ decomposition in Eq.~\eqref{eq:decompositiontermswhole} and the steady-state correlators obtained using the methods described above, we find the following expressions for the JC model when the cavity is driven ($\chi = 0$): \begin{subequations} \begin{align} \mathcal{I}_0 = & \ 256 g^8 \bigm/f_1\left(g, \Delta_a, \Delta_{\sigma}, \gamma_a,\gamma_\sigma\right)\,,\\ \mathcal{I}_1 = & \ 0\,,\\ \begin{split} \mathcal{I}_2 = & \ 32 g^4 \bigg[-\gamma _{\sigma }^2 \Big(4 g^2 + \gamma_a \left(\gamma_a +\gamma_{\sigma }\right) - 4 \Delta_{a}^2\Big)+ 4 \gamma_{\sigma } \left(4 \gamma_a + 3 \gamma_{\sigma }\right) \Delta_a \Delta_{\sigma} + 4 \Delta_{\sigma}^2 \Big(4 g^2 + \gamma_a \left( \gamma_a + \gamma_{\sigma }\right) - 4 \Delta_{a}^2\Big) - \\ & \ 16 \Delta_a \Delta_{\sigma}^3\bigg] \biggm/ f_1\left(g, \Delta_a, \Delta_{\sigma}, \gamma_a,\gamma_\sigma\right), \end{split} \end{align} \end{subequations} where the function $f_1\left(g,\Delta_a, \Delta_{\sigma}, \gamma_a,\gamma_\sigma\right)$ is defined as: \begin{multline} f_1\left(g, \Delta_a, \Delta_{\sigma}, \gamma_a,\gamma_\sigma\right) = \Big(\gamma_{\sigma }^2 + 4 \Delta_{\sigma}^2\Big)^2 \Big(16 g^4 +8 g^2 \big[\gamma_a \left(\gamma_a + \gamma_{\sigma }\right) - 4 \Delta_a \left(\Delta_{a}+\Delta_{\sigma}\right)\big] +\\{} \big[\gamma_a^2 +4 \Delta_{a}^2\big] \big[\left(\gamma_a + \gamma_\sigma\right)^2 + 4 \left(\Delta_a +
\Delta_{\sigma}\right)^2\big]\Big)\,. \end{multline} \subsection{Microcavity polariton} For the polariton system (also for $\chi = 0$), we can perform the same decomposition: \begin{subequations} \begin{align} \mathcal{I}_0 = & 256 U^2 \, g^8 / f_2\left(g, \Delta_a, \Delta_b,\gamma_a,\gamma_b\right)\,,\\ \mathcal{I}_1 = & 0,\\ \begin{split} \mathcal{I}_2= & -32 g^4 U \bigg[ \gamma_b^2 \Big(U \gamma_a \left[\gamma_a + \gamma_b\right] + 2 \gamma_b \left[2 \gamma_a + \gamma_b \Delta_a\right] - 4 U \Delta_a^2 + 4 g^2 \left[U+2 \Delta_a \right]\Big) + \\ & 2 \gamma_b \Delta_b \Big( 3 \gamma_a^2 \gamma_b + 4 g^2 \left[2 \gamma_a + 3 \gamma_b\right] - 6 \gamma_b \Delta_a \left[U + 2 \Delta_a\right] + 4 \gamma_a \left[\gamma_b^2 - 2 U \Delta_a \right]\Big) - 4 \Delta_b^2 \Big(U \gamma_a \left[\gamma_a + 3 \gamma_b \right]+ 12 \gamma_b\left[\gamma_a + \gamma_b\right] - \\ & 4 U \Delta_a^2 + 4 g^2 \left[U + 2 \Delta_a\right]\Big) - \Delta_b^3 \Big(4 g^2+ \gamma_a^2 + 4 \gamma_a \gamma_b - 2 \Delta_a \left[ U + \Delta_a\right] + 32 \Delta_a \Delta_b^4\Big) \bigg] \biggm/ f_2\left(g, \Delta_a, \Delta_b, \gamma_a,\gamma_b\right)\,, \end{split} \end{align} \end{subequations} where the auxiliary function $f_2$ has the following form: \begin{multline} f_2 \left(g, \Delta_a, \Delta_b,\gamma_a,\gamma_b\right) = \Big[\gamma _b^2+4 \Delta_b^2\Big]^2 \bigg[ \left(\gamma_a^2 + 4 \Delta_a^2\right) \left(\left(\gamma_a+\gamma_b\right)^2 + 4 \left(\Delta_a + \Delta_b\right)^2\right) \left(\gamma_b^2 + \left(U + 2\Delta_b\right)^2\right) + \\ 16 g^4 \left(\left(\gamma_a + \gamma_b\right)^2 + \left(U + 2 \left[ \Delta_a + \Delta_b\right]\right)^2\right) + 8 g^2 \Big(U^2 \left(\gamma_a \left[\gamma_a + \gamma_b\right]- 4 \Delta_a \left[\Delta_a + \Delta_b\right] \right) + \\ \big(\gamma_a \gamma_b - 4 \Delta_a \Delta_b\big) \big(\left[\gamma_a + \gamma_b\right]^2 + 4 \left[\Delta_a + \Delta_b\right]^2\big) - 2 U \left(\gamma_a^2 \left[\Delta_a -\Delta_b\right]- 2 \gamma_a \gamma_b \Delta_b + 4
\Delta_a \left[\Delta_a + \Delta_b\right] \left[\Delta_a + 2 \Delta_b\right] \right) \Big) \bigg]\,. \end{multline} \section{Unconventional antibunching conditions} \label{app:perfectantibunching} \subsection{JC Model} Starting from the equation $\mathcal{C}_{20} =0$ (which is directly linked to $\g{2}_a = 0$ in the vanishing driving limit): \begin{equation} \Delta_a = \frac{i \big(\gamma_\sigma + 2 i \Delta_\sigma \big) \big(\tilde{\gamma}_{11} + 2 i \Delta_\sigma \big) + 4 e^{-i \phi} g \, \chi \big(\tilde{\gamma}_{11} + 2 i \Delta_\sigma \big) - 4 i g^2 \big(1 + e^{-2 i \phi} \chi^2\big)} {2 \gamma_\sigma + 4 i \Delta_\sigma - 8 i e^{-i \phi} g \, \chi } \end{equation} Taking the real part gives the expression for the UA curve: \begin{equation} \Delta_a = \frac{4 g \chi \big\lbrace 2 \cos \phi \big[2 \Delta_\sigma^2 + g^2 \big(1+ \chi^2 \big)\big] - g \chi \Delta_\sigma \cos 2 \phi - \gamma_\sigma \sin \phi \big(g \chi \cos \phi - 2 \Delta_\sigma \big)\big\rbrace - \big[\tilde{\Gamma}^2_\sigma + 4 g^2 \big(1+ 4 \chi^2 \big)\big]} {\gamma_\sigma^2 + 4 \big(\Delta_\sigma^2 + 4 g^2 \chi^2 \big) - 8 g \chi \big( 2 \Delta_\sigma \cos \phi + \gamma_\sigma \sin \phi \big)} \end{equation} Imposing the imaginary part to vanish gives the second constraint: \begin{equation} \Delta_\sigma = \frac{4 g \chi \cos \phi \, (\tilde{\gamma}_{11} - g \chi \sin \phi) \pm \sqrt{-\tilde{\gamma}_{11} (\gamma_\sigma - 4 g \chi \sin \phi) [-4 g^2 + \gamma_\sigma \tilde{\gamma}_{11}- 4 g \chi (g \chi \cos 2 \phi + \tilde{\gamma}_{11} \sin \phi) ] + 4 g^2\chi^2 \sin^2 2 \phi } }{2 \tilde{\gamma}_{11}}, \end{equation} which must be a real quantity, so the radicand has to be non-negative. \subsection{Microcavity polariton} The perfect antibunching conditions can be derived straightaway from the equation $\g{2}_a=0$, where $\g{2}_a$ is given by the expression \eqref{eq:PBg2}.
Then, we solve the previous equation for $\Delta_a$, which leads to: \begin{multline} \label{eq:UApbcond} \Delta_a = \big\lbrace e^{i \phi} \big[4 g^2U - \big(\gamma_b + 2 i \Delta_b\big) \big(\tilde{\gamma}_{11} + 2 i \Delta_b\big) \big(U + 2 \Delta_b - i \gamma_b\big) \big] \\ + 4 i g \chi \big(U + 2 \Delta_b - i \gamma_b\big) \big(\tilde{\gamma}_{11} + 2 i \Delta_b\big) + 4 e^{-i \phi} g^2 \chi^2 \big(U + 2 \Delta_b - i \tilde{\gamma}_{11}\big) \big\rbrace / \mathcal{N}, \end{multline} where $\mathcal{N}$ is defined as: \begin{equation} \mathcal{N} = 2 \big[ e^{i \phi} \big(\gamma_b + 2 i \Delta_b\big) \big(\gamma_b + i U + 2 i \Delta_b\big) +4 g \chi \big(U + 2 \Delta_b - i \gamma_b \big) - 4 g^2 \chi^2 e^{-i \phi}\big]. \end{equation} Nevertheless, by definition $\Delta_a$ must be a real quantity. Taking the real part of this last expression leads to the equation for the curve along which the UA effect occurs. Moreover, the cancellation of its imaginary part (as in the JC model) provides a second condition to exactly reach $\g{2}_a = 0$, which can be solved for any chosen parameter. Further analysis shows that more restrictions emerge from requiring all parameters to be real-valued (physical). As an illustration, we present here the case of cavity excitation ($\chi = 0$). Splitting the real and imaginary parts of Eq.~\eqref{eq:UApbcond}: \begin{subequations} \begin{align} &\Delta_a = - \Delta_b - \frac{4 g^2 \Delta_b}{\gamma_b^2 + 4 \Delta_b^2} + \frac{2 g^2 (U + 2 \Delta_b)}{\gamma_b^2 + (U + 2 \Delta_b)^2}, \\ & 0 = \gamma_a + \gamma_b + 4 g^2 \gamma_b \left( - \frac{1}{\gamma_b^2 + 4 \Delta_b^2} + \frac{1}{\gamma_b^2 + (U + 2 \Delta_b)^2} \right) . \end{align} \end{subequations} The first expression provides an implicit equation for the three distinct curves of UA shown in Fig.~\ref{fig:08}, whereas the second gives the exact location where $\g{2}_a$ vanishes.
\section{Particular cases of special interest} \label{sec:Wed30May110352CEST2018} In this appendix, we list some expressions which can easily be derived from the general cases given in the text, but whose importance and popularity make it convenient for many readers to have them available explicitly. This is the two-photon coherence function in the presence of cavity pumping only (the case most discussed in the literature) for the JC system: \begin{multline} \label{eq:JCg2} \g{2}_a = \Big\lbrace \big[16 g^4 + 8 g^2 \left(\gamma_{\sigma } \gamma_a - 4 \Delta_a \Delta_{\sigma} \right) + \Gamma_a^2 \Gamma_\sigma^2\big] \big[ 16 g^4 -8 g^2 \big(\gamma_\sigma \gamma_{+} - 4 \Delta_\sigma \Delta_{+} \big) + \Gamma_{\sigma}^2 \Gamma_{+}^2 \big] \Big\rbrace \biggm/ \\ \bigg\lbrace \Gamma_\sigma ^4 \Big[ 16 g^4 +8 g^2 \big(\gamma_a \gamma_{+} - 4 \Delta_a \Delta_{+}\big) + \Gamma_a^2 \Gamma_{+}^2 \Big] \bigg\rbrace, \end{multline} and for the microcavity polaritons: \begin{multline} \label{eq:PBg2} \g{2}_a = [16g^4 + 8g^2( \gamma_a \gamma_b- 4\Delta_a \Delta_b)+\Gamma_a^2 \Gamma_b^2 ]\times{}\\ {}\times \lbrace 16g^4U^2 + \Gamma_b^2 \Gamma_{+}^2 [\gamma_b^2 + (U+2\Delta_b)^2] -8g^2U [4\gamma_a\gamma_b \Delta_b -8 \Delta_b^2 \Delta_{+} + 2\gamma_b^2 (\Delta_{+}+2\Delta_b) + U(\gamma_b \gamma_{+}- 4\Delta_b \Delta_{+})]\rbrace \bigm/ \\\big[ 8g^2 \Gamma_b^4\{ U^2 (\gamma_a\gamma_{+}- 4 \Delta_a \Delta_{+}) + \Gamma_{+}^2 (\gamma_a\gamma_b -4 \Delta_a \Delta_b) -2 U [ \gamma_a^2 \Delta_{-} - 2 \gamma_a \gamma_b \Delta_b + 4 \Delta_a \Delta_{+} ( \Delta_{+} + \Delta_b)]\} \times {}\\ \Gamma_b^4 \{\Gamma_a^2 \Gamma_{+}^2 [\gamma_b^2 +(U+2\Delta_b)^2] + 16g^4 [\gamma_{+}^2+(U+2\Delta_{+})^2] \} \big]\,.
\end{multline} This is the coupling strength between the cavity and the 2LS which, for given parameters, results in $\g{2}_a =1$: \begin{multline} \label{eq:FriMar2104433CET2018} g_\mathrm{P} = \frac{1}{2} \Big \lbrace \big [ 16 \Delta_\sigma^4 + 32 \Delta_a \Delta_\sigma^3 - 8(\gamma_a^2 +3\gamma_a \gamma_\sigma + \gamma_\sigma^2 - 4\Delta_a^2)\Delta_\sigma^2 - {}\\ {}-8 \gamma_\sigma (4\gamma_a+3\gamma_\sigma) \Delta_a \Delta_\sigma + \gamma_\sigma^2 (2\gamma_a^2 + 2\gamma_a \gamma_\sigma + \gamma_\sigma^2 - 8\Delta_a^2 ) \big]^{1/2} + \gamma_\sigma^2 - 4\Delta_\sigma^2 \Big \rbrace^{1/2}\,. \end{multline} A smaller coupling~$g<g_\mathrm{P}$ produces antibunched light while a larger coupling~$g>g_\mathrm{P}$ produces bunched light. \end{document}
\begin{document} \title[Gen. ineq. and shape op. ineq. of cont. CR-wp in cosympl. sp. form]{General inequalities and new shape operator inequality for contact CR-warped product submanifolds in cosymplectic space form} \author[A. Mustafa]{ABDULQADER MUSTAFA} \address{Department of Mathematics, Faculty of Arts and Science, Palestine Technical University, Kadoorei, Tulkarm, Palestine} \email{[email protected]} \author[A. Assad]{ATA ASSAD} \address{Department of Mathematics, Faculty of Arts and Science, Palestine Technical University, Kadoorei, Tulkarm, Palestine} \email{[email protected]} \author[C. \"Ozel]{CENAP \"OZEL} \address{Department of Mathematics, Faculty of Science, King Abdulaziz University, 21589 Jeddah, Saudi Arabia} \email{[email protected]} \author[A. Pigazzini]{ALEXANDER PIGAZZINI} \address{Mathematical and Physical Science Foundation, 4200 Slagelse, Denmark} \email{[email protected]} \maketitle \begin{abstract} We establish two main inequalities: one for the norm of the second fundamental form and the other for the matrix of the shape operator. The results obtained are for cosymplectic manifolds and, for these, we show that the contact warped product submanifolds naturally possess a geometric property, namely $\mathcal{D}_1$-minimality, which, by means of the Gauss equation, allows us to obtain an optimal general inequality. For the sake of generalization, we state our hypotheses for nearly cosymplectic manifolds, and then obtain the cosymplectic case as a particular one. \\ In the other part of the paper, we derive some inequalities and apply them to construct and introduce a shape operator inequality for cosymplectic manifolds involving the harmonic series. \\ As further research directions, we address a couple of open problems that arose naturally during this work and depend on its results.
\noindent{\it{AMS Subject Classification (2010)}}: {53C15; 53C40; 53C42; 53B25} \noindent{\it{Keywords}}:{ Contact CR-warped product submanifolds, cosymplectic manifolds, shape operator inequality.} \end{abstract} \sloppy \section{Introduction} Warped products are a very important mathematical tool in the theory of general relativity. These mathematical structures are still widely studied today, because they can provide the best mathematical models of our universe, as for example the Robertson-Walker models, the Friedmann cosmological models, or the relativistic model of the Schwarzschild spacetime, which admits a warped product construction \cite{iijj77}. As seen in \cite{genIneq}, this paper continues the search for control of the extrinsic quantities of Riemannian manifolds in relation to their intrinsic quantities, through the Nash theorem and its consequent applications (\cite{2233ee}, \cite{55kk99}). This motivation led Chen to face the following problem: \begin{problem}\label{prob3} \cite{aallr4} Establish simple relationships between the main extrinsic invariants and the main intrinsic invariants of a submanifold. \end{problem} Several famous results in differential geometry, such as the isoperimetric inequality, Chern-Lashof's inequality, and the Gauss-Bonnet theorem, among others, can be regarded as results in this respect. The current paper aims to continue this sequel of inequalities. \\ The paper is organized into six sections. Section four, based on the results already obtained in \cite{genIneq}, states a general inequality involving the scalar curvature and the squared norm of the second fundamental form for a contact CR-warped product submanifold in a cosymplectic space form (i.e., \textit{Theorem 4.2}). In section five, we introduce a new type of inequality for the shape operator matrix which involves the harmonic series (\textit{Theorem 5.2} and \textit{Theorem 5.5}).
In the final section, we pose two open problems that arose naturally from the results of this work. \section{Preliminaries} As a preliminary, we refer to what the authors have already explained in \cite{genIneq} (\textit{Section 2}). \section{The Existence of $\mathcal{D}_1$-Minimal Contact Warped Product Submanifolds in Cosymplectic Manifolds} Using the same calculations made in \cite{genIneq}, this section aims to prove the existence of $\mathcal{D}_1$-minimal contact warped product submanifolds in cosymplectic manifolds; namely, the contact $CR$-warped product submanifold of the type $M^n=N_T\times _fN_\perp$, where the characteristic vector field $\xi$ is tangent to $N_T$. It is well-known that the contact $CR$-warped product submanifold of the type $M^n=N_\perp\times _fN_T$ in nearly cosymplectic manifolds is a trivial warped product; reversing the factors, we consider the contact $CR$-warped product submanifold of the type $M^n=N_T\times _fN_\perp$ in nearly cosymplectic manifolds $\tilde {M}^{2l+1}$. For the purpose of generalization, we consider the contact CR-warped product submanifolds in a nearly cosymplectic manifold. The computations, already shown in \cite{genIneq} (i.e. \textit{Lemma 3.1, Lemma 3.3 and Corollary 3.2}), do not change in this case, and based on this, it is straightforward to further extend the same results to cosymplectic manifolds as a special case. Now, considering the above, as a particular case of \textit{Lemma 4.1} (present in \cite{genIneq}), we can finally state the following new corollary: \begin{corollary}\label{104} The contact $CR$-warped product submanifold of type $M^n=N_T\times _fN_\perp$ is $\mathcal{D}_T$-minimal in cosymplectic manifolds.
\end{corollary} \section{Special Inequalities and Applications} Considering \cite{genIneq} (\textit{Theorem 6.1}), we obtain: \begin{theorem}\label{299} Let $\varphi :M^n=N_1\times _fN_2 \longrightarrow \tilde M^{2m+1}(c_c)$ be a $\mathcal {D}_1$-minimal isometric immersion of a contact warped product submanifold $M^n$ into a cosymplectic space form $\tilde {M}(c_c)^{2m+1}$. Then, we have \begin{enumerate} \item[(i)]$||h||^2\ge 2n_2 \biggl(||\nabla (\ln f)||^2-\Delta (\ln f)+(n_1-1)\frac{c_{c}}{4}\biggr).$ \item[(ii)] The equality in (i) holds identically if and only if $N_1$, $N_2$ and $M^n$ are totally geodesic, totally umbilical and minimal submanifolds in $\tilde M^{2m+1}(c_c)$, respectively. \end{enumerate} \end{theorem} Since contact $CR$-warped product submanifolds are $\mathcal{D}_1$-minimal in a cosymplectic space form $\tilde {M}(c_c)^{2m+1}$, then: \begin{theorem}\label{300} Let $\varphi :M^n=N_T\times _fN_\perp \longrightarrow \tilde M^{2m+1}(c_c)$ be an isometric immersion of a contact $CR$-warped product submanifold $M^n$ into a cosymplectic space form $\tilde {M}(c_c)^{2m+1}$. Then, we have \begin{enumerate} \item[(i)]$||h||^2\ge 2n_2 \biggl(||\nabla (\ln f)||^2-\Delta (\ln f)+(n_1-1)\frac{c_{c}}{4}\biggr).$ \item[(ii)] The equality in (i) holds identically if and only if $N_T$, $N_\perp$ and $M^n$ are totally geodesic, totally umbilical and minimal submanifolds in $\tilde M^{2m+1}(c_c)$, respectively.
\end{enumerate} \end{theorem} \begin{remark} In Euclidean cosymplectic space forms, part (i) of the above two inequalities reduces to $$||h||^2\ge 2n_2 \biggl(||\nabla (\ln f)||^2-\Delta (\ln f)\biggr).$$ \end{remark} \section{Inequalities of the Shape Operator Matrix and Harmonic Series} In this section we consider two important relations. The first is $$(I)\; \; h(\xi, \xi)=0;$$ in fact, in \textit{Section 3} we specified that $\xi$ is tangent to the first factor, and from \cite{genIneq} (\textit{Corollary 3.2}(i)) (which is also trivially valid for contact $CR$-warped product submanifolds in a nearly cosymplectic manifold) we obtain $(I)$. The second relation is $$(II)\; \; <A_\zeta X, Y>=<h(X,Y), \zeta>,$$ where $A$ and $h$ are the shape operator and the second fundamental form, respectively. \\ We consider the matrix of the shape operator, putting $v=n-1$. Therefore, in the rest of this paper we demonstrate some geometric and arithmetic inequalities for such matrices of order $v\times v$. \\ Let $ \mathbb{M} _{v}( \mathbb{C} )$ be the algebra of all $v\times v$ complex matrices. The singular values $t_{1}(A),...,t_{v}(A)$ of a matrix $A\in \mathbb{M} _{v}( \mathbb{C} )$ are the eigenvalues of the matrix $\left( A^{\ast}A\right) ^{1/2}$ arranged in decreasing order and repeated according to multiplicity. A Hermitian matrix $A\in \mathbb{M} _{v}( \mathbb{C} )$ is said to be positive semidefinite, written as $A\geq0$, if $x^{\ast }Ax\geq0$ for all $x\in \mathbb{C} ^{v}$, and it is called positive definite, written as $A>0$, if $x^{\ast}Ax>0$ for all $x\in \mathbb{C} ^{v}$ with $x\neq0$. The Hilbert-Schmidt norm (or Frobenius norm) $\left\Vert \cdot\right\Vert _{2}$ is the norm defined on $ \mathbb{M} _{v}( \mathbb{C} )$ by $\left\Vert A\right\Vert _{2}=\left( {\displaystyle\sum\limits_{j=1}^{v}} t_{j}^{2}(A)\right) ^{1/2}$, $A\in \mathbb{M} _{v}( \mathbb{C} )$.
The Hilbert-Schmidt norm is unitarily invariant, that is, $\left\Vert UAV\right\Vert _{2}=\left\Vert A\right\Vert _{2}$ for all $A\in \mathbb{M} _{v}( \mathbb{C} )$ and all unitary matrices $U,V\in \mathbb{M} _{v}( \mathbb{C} )$. Another property of the Hilbert-Schmidt norm is that $\left\Vert A\right\Vert _{2}=\left( {\displaystyle\sum\limits_{i,j=1}^{v}} \left\vert a_{j}^{\ast}Ab_{i}\right\vert ^{2}\right) ^{1/2}$, where $\{b_{j}\}_{j=1}^{v}$ and $\{a_{j}\}_{j=1}^{v}$ are two orthonormal bases of $ \mathbb{C} ^{v}$. The spectral matrix norm, denoted by $\left\Vert \cdot\right\Vert $, of a matrix $A\in \mathbb{M} _{v}( \mathbb{C} )$ is the norm defined by $\left\Vert A\right\Vert =\sup\{\left\Vert Ax\right\Vert :x\in \mathbb{C} ^{v},\left\Vert x\right\Vert =1\}$, or equivalently $\left\Vert A\right\Vert =t_{1}\left( A\right) $. For further properties of these norms the reader is referred to \cite{44} or \cite{33}. A matrix $A\in \mathbb{M} _{v}( \mathbb{C} )$ is called a contraction if $\left\Vert A\right\Vert \leq1$, or equivalently, $A^{\ast}A\leq I_{v}$, where $I_{v}$ is the identity matrix in $ \mathbb{M} _{v}( \mathbb{C} )$. A $v\times v$ matrix $A=(\alpha_{ij})$ is called doubly stochastic if and only if $\alpha_{ij}\geq0$ for all $i,j=1,...,v$, and the entries of each row and each column sum to one \cite{55}. For a matrix $A\in \mathbb{M} _{v}( \mathbb{C} )$, let $\lambda_{1}(A),...,\lambda_{v}(A)$ be the eigenvalues of $A$ repeated according to multiplicity. The singular values of $A$, denoted by $t_{1}(A),...,t_{v}(A)$, are the eigenvalues of the matrix $\left\vert A\right\vert =\left( A^{\ast}A\right) ^{1/2}$ arranged in decreasing order and repeated according to multiplicity.
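These norm properties are easy to sanity-check numerically. The following short Python sketch (illustrative only, not part of the paper's argument; numpy is assumed) verifies the Hilbert-Schmidt formula, the identity $\Vert A\Vert = t_1(A)$, and unitary invariance for a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
v = 5
A = rng.standard_normal((v, v)) + 1j * rng.standard_normal((v, v))

# Singular values t_1(A) >= ... >= t_v(A): eigenvalues of (A*A)^(1/2).
t = np.linalg.svd(A, compute_uv=False)

# Hilbert-Schmidt norm ||A||_2 = (sum_j t_j(A)^2)^(1/2) is the Frobenius norm.
assert np.isclose(np.sqrt(np.sum(t**2)), np.linalg.norm(A, 'fro'))

# Spectral norm ||A|| = t_1(A).
assert np.isclose(np.linalg.norm(A, 2), t[0])

# Unitary invariance: ||UAV||_2 = ||A||_2 for unitary U, V.
U = np.linalg.qr(rng.standard_normal((v, v)) + 1j * rng.standard_normal((v, v)))[0]
V = np.linalg.qr(rng.standard_normal((v, v)) + 1j * rng.standard_normal((v, v)))[0]
assert np.isclose(np.linalg.norm(U @ A @ V, 'fro'), np.linalg.norm(A, 'fro'))
```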
A Hermitian matrix $A\in \mathbb{M} _{v}( \mathbb{C} )$ is said to be positive semidefinite if $x^{\ast}Ax\geq0$ for all $x\in \mathbb{C} ^{v}$, and it is called positive definite if $x^{\ast}Ax>0$ for all $x\in \mathbb{C} ^{v}$ with $x\neq0$. The direct sum of matrices $A_{1},...,A_{u}\in \mathbb{M} _{v}( \mathbb{C} )$ is the matrix $\oplus_{i=1}^{u}A_{i}=\left[ \begin{array} [c]{cccc} A_{1} & 0 & \cdots & 0\\ 0 & A_{2} & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & A_{u} \end{array} \right] $. For two matrices $A_{1},A_{2}\in \mathbb{M} _{v}( \mathbb{C} ),$ we write $A_{1}\oplus A_{2}$ instead of $\oplus_{i=1}^{2}A_{i}$. The usual matrix norm $\left\Vert \cdot\right\Vert ,$ the Schatten $p$-norm ($p\geq1$), and the Ky Fan $k$-norms $\left\Vert \cdot\right\Vert _{(k)}$ $\left( k=1,...,v\right) $ are the norms defined on $ \mathbb{M} _{v}( \mathbb{C} )$ by $\left\Vert A\right\Vert =\sup\{\left\Vert Ax\right\Vert :x\in \mathbb{C} ^{v},\left\Vert x\right\Vert =1\}$, $\left\Vert A\right\Vert _{p}=\left(\sum_{j=1} ^{v}t_{j}^{p}\left( A\right)\right)^{1/p} $, and $\left\Vert A\right\Vert _{(k)} =\sum_{j=1}^{k}t_{j}\left( A\right) ,$ $k=1,...,v$. It is known that (see, e.g., \cite[p. 76]{55}) for every $A\in \mathbb{M} _{v}( \mathbb{C} )$ we have \begin{equation} \left\Vert A\right\Vert =t_{1}\left( A\right) \label{id1} \end{equation} and for each $k=1,...,v,$ we have \begin{equation} \left\Vert A\right\Vert _{(k)}=\max\left\vert \sum_{j=1}^{k}y_{j}^{\ast} Ax_{j}\right\vert ,\label{id0} \end{equation} where the maximum is taken over all choices of orthonormal $k$-tuples $x_{1},...,x_{k}$ and $y_{1},...,y_{k}$.
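The maximum in (\ref{id0}) is attained when the $x_j$ and $y_j$ are taken to be right and left singular vectors of $A$, which gives the Ky Fan $k$-norm as the sum of the $k$ largest singular values. A quick numerical illustration (not part of the paper; numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
v, k = 5, 3
A = rng.standard_normal((v, v))

U, t, Vh = np.linalg.svd(A)   # t_1 >= ... >= t_v

# Ky Fan k-norm: sum of the k largest singular values.
kyfan = t[:k].sum()

# The maximum over orthonormal k-tuples is attained at singular vectors:
# sum_j |u_j^* A v_j| = t_1 + ... + t_k.
attained = sum(abs(U[:, j] @ A @ Vh[j, :]) for j in range(k))
assert np.isclose(attained, kyfan)
```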
In fact, replacing each $y_{j}$ by $z_{j}y_{j}$ for some suitable complex number $z_{j}$ of modulus $1$ for which $\bar{z}_{j}y_{j}^{\ast}Ax_{j}=\left\vert y_{j}^{\ast}Ax_{j}\right\vert $, the $k$-tuple $z_{1}y_{1},...,z_{k}y_{k}$ is still orthonormal, and so an identity equivalent to the identity (\ref{id0}) can be stated as follows: \begin{equation} \left\Vert A\right\Vert _{(k)}=\max\sum_{j=1}^{k}\left\vert y_{j}^{\ast} Ax_{j}\right\vert ,\label{id2} \end{equation} where the maximum is taken over all choices of orthonormal $k$-tuples $x_{1},...,x_{k}$ and $y_{1},...,y_{k}$. A unitarily invariant norm $\left\vert \left\vert \left\vert \cdot\right\vert \right\vert \right\vert $ is a norm defined on $ \mathbb{M} _{v}( \mathbb{C} )$ that satisfies the invariance property $\left\vert \left\vert \left\vert UAV\right\vert \right\vert \right\vert =\left\vert \left\vert \left\vert A\right\vert \right\vert \right\vert $ for every $A\in \mathbb{M} _{v}( \mathbb{C} )$ and every pair of unitary matrices $U,V\in \mathbb{M} _{v}( \mathbb{C} )$. It is known that \[ \left\vert \left\vert \left\vert A\oplus A\right\vert \right\vert \right\vert \geq\left\vert \left\vert \left\vert B\oplus B\right\vert \right\vert \right\vert \text{ \ \ for every unitarily invariant norm} \] if and only if \[ \left\vert \left\vert \left\vert A\right\vert \right\vert \right\vert \geq\left\vert \left\vert \left\vert B\right\vert \right\vert \right\vert \text{ \ \ for every unitarily invariant norm.} \] Also, \[ \left\vert \left\vert \left\vert A\oplus B\right\vert \right\vert \right\vert =\left\vert \left\vert \left\vert B\oplus A\right\vert \right\vert \right\vert =\left\vert \left\vert \left\vert \left[ \begin{array} [c]{cc} 0 & B\\ A^{\ast} & 0 \end{array} \right] \right\vert \right\vert \right\vert.
\] A Hermitian matrix with real entries is a symmetric matrix; if $A,B$ are Hermitian matrices and $A-B$ is positive semidefinite, we write $B\leq A$. Weyl's monotonicity theorem says that this relation implies $\lambda_{j}(B)\leq\lambda_{j}(A)$ for all $j=1,...,v$ \cite{55}. \\ Let $A,B$ be symmetric matrices with $\pm A\leq B$. Then $t_{j}(A)\leq t_{j}(B\oplus B)$. This leads to $t_{j}(AB+BA)\leq t_{j}((A^{2}+B^{2} )\oplus(A^{2}+B^{2}))$ \cite{55}. In the following we construct inequalities involving harmonic series. Consider the harmonic series $ {\displaystyle\sum\limits_{v=1}^{\infty}} \frac{1}{v}$, or in the general form $ {\displaystyle\sum\limits_{v=1}^{\infty}} \frac{1}{\alpha\text{ }v\text{ }+d}$, where $\alpha\neq0$ and $d$ are real numbers with $\frac {\alpha}{d}$ positive. A generalization of the harmonic series is the $p$-series (or hyperharmonic series), defined as $ {\displaystyle\sum\limits_{v=1}^{\infty}} \frac{1}{v^{p}}$. We have the following inequality (from \cite{11}, p. 202) concerning harmonic series. \begin{lemma}\label{X} Let $v>1$ be a positive integer. Then \begin{equation} 2\sqrt{v+1}-2< {\displaystyle\sum\limits_{k=1}^{v}} \frac{1}{\sqrt{k}}<2\sqrt{v}-1\label{01} \end{equation} \end{lemma} From this we can see that \begin{align} {\displaystyle\sum\limits_{k=1}^{v}} \sqrt{k} & = {\displaystyle\sum\limits_{k=1}^{v}} \frac{k}{\sqrt{k}}\label{1}\\ & \leq\left( {\displaystyle\sum\limits_{k=1}^{v}} k\right) \left( {\displaystyle\sum\limits_{k=1}^{v}} \frac{1}{\sqrt{k}}\right) \nonumber\\ & \leq\frac{v(v+1)}{2}\left( 2\sqrt{v}-1\right) \nonumber\\ & =v(v+1)(\sqrt{v}-0.5).\nonumber \end{align} A more general representation is given by the following theorem.
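Before stating it, note that the elementary bounds in (\ref{01}) are easy to confirm numerically; a small Python check (illustrative only, not part of the proof):

```python
from math import sqrt

# Check 2*sqrt(v+1) - 2 < sum_{k=1}^{v} 1/sqrt(k) < 2*sqrt(v) - 1 for v > 1.
for v in range(2, 2001):
    s = sum(1.0 / sqrt(k) for k in range(1, v + 1))
    assert 2 * sqrt(v + 1) - 2 < s < 2 * sqrt(v) - 1
```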
\begin{theorem} \label{T010}Let $A\in \mathbb{M} _{v}( \mathbb{C} )$ be a positive definite matrix with $\left\Vert A\right\Vert _{2}<1$. Then \[ \left\Vert {\displaystyle\sum\limits_{k=1}^{v}} \sqrt{k}A^{k}\right\Vert _{2}<\frac{v(v+1)(\sqrt{v}-0.5)(\left\Vert A\right\Vert _{2}-\left( \left\Vert A\right\Vert _{2}\right) ^{v+1} )}{1-\left\Vert A\right\Vert _{2}} \] \end{theorem} \begin{proof} Let $A$ have singular values $t_{1}(A)\geq...\geq t_{v}(A)$ and let $U$ be a unitary matrix such that $A=U\operatorname{diag}(t_{1}(A),...,t_{v}(A))U^{\ast}$. Then \begin{align*} \left\Vert {\displaystyle\sum\limits_{k=1}^{v}} \sqrt{k}A^{k}\right\Vert _{2} & =\left\Vert {\displaystyle\sum\limits_{k=1}^{v}} \operatorname{diag}(\sqrt{k}t_{1}^{k}(A),...,\sqrt{k}t_{v}^{k}(A))\right\Vert _{2}\\ & < {\displaystyle\sum\limits_{k=1}^{v}} \sqrt{k}\left\Vert \operatorname{diag}(t_{1}^{k}(A),...,t_{v}^{k}(A))\right\Vert _{2}\\ & \leq v(v+1)(\sqrt{v}-0.5) {\displaystyle\sum\limits_{k=1}^{v}} \left\Vert \operatorname{diag}(t_{1}^{k}(A),...,t_{v}^{k}(A))\right\Vert _{2}\\ & \leq v(v+1)(\sqrt{v}-0.5) {\displaystyle\sum\limits_{k=1}^{v}} (\left\Vert \operatorname{diag}(t_{1}(A),...,t_{v}(A))\right\Vert _{2})^{k}\\ & =v(v+1)(\sqrt{v}-0.5) {\displaystyle\sum\limits_{k=1}^{v}} (\left\Vert A\right\Vert _{2})^{k}\\ & =\frac{v(v+1)(\sqrt{v}-0.5)(\left\Vert A\right\Vert _{2}-\left(\left\Vert A\right\Vert _{2}\right)^{v+1})}{1-\left\Vert A\right\Vert _{2}}. \end{align*} \end{proof} From this theorem we can obtain a result for doubly stochastic matrices; we get the result immediately.
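The bound of Theorem \ref{T010} can be checked numerically on a random positive definite matrix, assuming $\left\Vert A\right\Vert _{2}<1$ so that the geometric series on the right-hand side converges. A Python sketch (illustrative only; numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
v = 4
B = rng.standard_normal((v, v))
A = B @ B.T + 0.1 * np.eye(v)          # positive definite
A *= 0.5 / np.linalg.norm(A, 'fro')    # scale so that ||A||_2 = 0.5 < 1

x = np.linalg.norm(A, 'fro')
lhs = np.linalg.norm(sum(np.sqrt(k) * np.linalg.matrix_power(A, k)
                         for k in range(1, v + 1)), 'fro')
rhs = v * (v + 1) * (np.sqrt(v) - 0.5) * (x - x**(v + 1)) / (1 - x)
assert lhs < rhs
```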
\begin{corollary} \label{C1}Let $A$ be a positive definite doubly stochastic matrix. Then \[ \left\Vert {\displaystyle\sum\limits_{k=1}^{v}} \sqrt{k}A^{k}\right\Vert _{2}\leq v^{2}(v+1)(\sqrt{v}-0.5) \] \end{corollary} \begin{proof} Since $A$ is a positive definite doubly stochastic matrix, we obtain $0<\left\Vert A\right\Vert _{2}\leq1$. \end{proof} \begin{lemma} Let $v>1$ be a positive integer and let $x_{k}$, $k=1,2,...,v$, be positive numbers. Then \begin{equation} \frac{\min(x_{k})_{1\leq k\leq v}}{(v+1)(\sqrt{v}-0.5)}< {\displaystyle\sum\limits_{k=1}^{v}} \frac{x_{k}}{\sqrt{k}}<v(2\sqrt{v}-1)\max(x_{k})_{1\leq k\leq v}\label{2} \end{equation} \end{lemma} \begin{proof} Since \begin{align*} {\displaystyle\sum\limits_{k=1}^{v}} \frac{x_{k}}{\sqrt{k}} & \leq\left( {\displaystyle\sum\limits_{k=1}^{v}} x_{k}\right) \left( {\displaystyle\sum\limits_{k=1}^{v}} \frac{1}{\sqrt{k}}\right) \\ & <v(2\sqrt{v}-1)\max(x_{k})_{1\leq k\leq v} \text{\ \ \ \ \ \ \ \ \ \ \ \ \ (by the inequality (\ref{01}))} \end{align*} the right-hand inequality holds. For the left-hand inequality we have \begin{align} {\displaystyle\sum\limits_{k=1}^{v}} \frac{x_{k}}{\sqrt{k}} & \geq\frac{ {\displaystyle\sum\limits_{k=1}^{v}} x_{k}}{ {\displaystyle\sum\limits_{k=1}^{v}} \sqrt{k}}\label{3}\\ & >\frac{v\min(x_{k})_{1\leq k\leq v}}{v(v+1)(\sqrt{v}-0.5)}\text{ \ (by the inequality (\ref{1}))}\nonumber\\ & =\frac{\min(x_{k})_{1\leq k\leq v}}{(v+1)(\sqrt{v}-0.5)}.\nonumber \end{align} \end{proof} Based on the inequality (\ref{2}) we have the following theorem. \begin{theorem} \label{T0}Let $A,X\in \mathbb{M} _{v}( \mathbb{C} )$ be positive definite matrices and let $\left\lceil t_{k}(A)\right\rceil ,\left\lfloor t_{k}(A)\right\rfloor \leq v$.
Then \[ t_{v}(X)(2\sqrt{\left\lfloor t_{k}(A)\right\rfloor +1}-2)< {\displaystyle\sum\limits_{k=1}^{\lfloor t_{k}(A)\rfloor}} t_{k}(X\text{ }A^{-0.5})<(2\sqrt{\left\lceil t_{k}(A)\right\rceil }-1)t_{1}(X) \] \end{theorem} \begin{proof} Since \begin{align*} {\displaystyle\sum\limits_{k=1}^{\lfloor t_{k}(A)\rfloor}} t_{k}(X\text{ }A^{-0.5}) & \leq {\displaystyle\sum\limits_{k=1}^{\lfloor t_{k}(A)\rfloor}} t_{1}(X)t_{k}(A^{-0.5})\\ & = {\displaystyle\sum\limits_{k=1}^{\lfloor t_{k}(A)\rfloor}} t_{1}(X)\left\lceil t_{k}(A)\right\rceil ^{-0.5}\\ & <t_{1}(X)(2\sqrt{\left\lceil t_{k}(A)\right\rceil }-1)\text{ \ \ by Lemma } \ref{X}, \end{align*} the right-hand inequality holds. For the left-hand inequality, since \begin{align*} {\displaystyle\sum\limits_{k=1}^{\lfloor t_{k}(A)\rfloor}} t_{k}(X\text{ }A^{-0.5}) & \geq {\displaystyle\sum\limits_{k=1}^{\lfloor t_{k}(A)\rfloor}} t_{v}(X)t_{k}(A^{-0.5})\\ & = {\displaystyle\sum\limits_{k=1}^{\lfloor t_{k}(A)\rfloor}} t_{v}(X)\left\lfloor t_{k}(A)\right\rfloor ^{-0.5}\\ & >t_{v}(X)(2\sqrt{\left\lfloor t_{k}(A)\right\rfloor +1}-2)\text{\ \ by Lemma } \ref{X}, \end{align*} the proof is complete. \end{proof} \section{Research Problems Based on the First Chen Inequality} Based on the results of this paper, we propose a pair of open problems. Firstly, we suggest the following: \begin{problem}\label{ama1} Prove the inequalities of the second fundamental form for $CR$-submanifolds in generalized Sasakian space forms. \end{problem} Secondly, we ask: \begin{problem}\label{pqm2} Prove the rest of the inequalities of this paper for the matrix of the second fundamental form obtained in the previous problem. \end{problem} \vskip.15in \begin{thebibliography}{110} \bibitem {55} R. Bhatia, F. Kittaneh, {\textit{The matrix arithmetic--geometric mean inequality revisited}}, Linear Algebra Appl. {\bf{428}} (2008), 2177--2191. \bibitem{aallr4} B.-Y.
Chen, {\textit{Relations between Ricci curvature and shape operator for submanifolds with arbitrary codimensions}}, Glasgow Math. J. {\bf{41}} (1999), 33-41. \bibitem{2233ee} B.-Y. Chen, {\textit{Geometry of warped products as Riemannian submanifolds and related problems}}, Soochow J. Math. {\bf{28}} (2002), 125-156. \bibitem{55kk99} B.-Y. Chen, {\textit{$\delta$-invariants, inequalities of submanifolds and their applications: in Topics in Differential Geometry}}, Editura Academiei Rom\^ane, Bucharest (2008), 29-155. \bibitem{11} D. S. Mitrinovi\'c, {\textit{Analytic Inequalities}}, Springer-Verlag, New York, (1970). \bibitem{genIneq} A. Mustafa, C. \"Ozel, P. Linker, M. Sati, A. Pigazzini, {\textit{A general inequality for warped product CR-submanifolds of K\"ahler manifolds}}, Hacet. J. Math. Stat., https://doi.org/10.15672/hujms.1018497, (2022). \bibitem{iijj77} B. O'Neill, {\textit{Semi-Riemannian geometry with applications to relativity}}, Academic Press, New York, (1983). \bibitem {33} J. Ringrose, {\textit{Compact Non-Self-Adjoint Operators}}, Van Nostrand Reinhold Co., London, (1971). \bibitem{44} B. Simon, {\textit{Trace Ideals and Their Applications}}, Cambridge University Press, (1979). \end{thebibliography} \end{document}
\begin{document} \title{A 3-Local Characterization of the Harada--Norton Sporadic Simple Group} \author{Sarah Astill} \maketitle \address{Department of Mathematics, University of Bristol, University Walk, Bristol, BS8 1TW} \begin{abstract} We provide $3$-local characterizations of the Harada--Norton sporadic simple group and its automorphism group. Both groups are examples of groups of parabolic characteristic three and we identify them from the structure of the normalizer of the centre of a Sylow $3$-subgroup. \end{abstract} \section{Introduction} In \cite{HaradaHN} in 1975, Harada introduced a new simple group. He proved that a group with an involution whose centralizer is a double cover of the automorphism group of the Higman--Sims sporadic simple group is simple of order $2^{14}.3^6.5^6.7.11.19$. In 1976, in his PhD thesis, Norton proved that such a group exists and thus we have the Harada--Norton sporadic simple group, $\mathrm{HN}$. The simple group was not proved to be unique until 1992. In \cite{SegevHN}, Segev proves that there is a unique group $G$ (up to isomorphism) with two involutions $u$ and $t$ such that $C_G(u)\sim (2 ^{\cdot} \mathrm{HS}) : 2$ and $C_G(t)\sim 2_+^{1+8}.(\operatorname{Alt}(5)\wr 2)$ with $C_G(O_2(C_G(t)))\leqslant O_2(C_G(t))$. We can therefore define the group $\mathrm{HN}$ by the structure of two involution centralizers in this way. In the ongoing project to understand groups of local and parabolic characteristic $p$ (see for example \cite{MSS-overview}) it has been observed that both $\mathrm{HN}$ and $\operatorname{Aut}(\mathrm{HN})$ are examples of groups of parabolic characteristic $3$. This is to say that every $3$-local subgroup, $H$, containing a Sylow $3$-subgroup satisfies $C_H(O_3(H))\leqslant O_3(H)$. The aim of this paper is therefore to characterize $\mathrm{HN}$ and $\mathrm{HN} :2$ in terms of their $3$-structure.
The hypothesis we consider and the theorem we prove are as follows. \begin{hyp}\label{mainhyp} Let $G$ be a group and let $Z$ be the centre of a Sylow $3$-subgroup of $G$ with $Q:=O_3(N_G(Z))$. Suppose that \begin{enumerate}[$(i)$] \item $Q\cong 3_+^{1+4}$; \item $C_G(Q)\leqslant Q$; \item $Z \neq Z^x \leqslant Q$ for some $x \in G$; and \item $N_G(Z)/Q $ has shape $4^{.}\operatorname{Alt}(5)$ or $4^{.}\operatorname{Sym}(5)$. \end{enumerate} \end{hyp} \begin{thm} If $G$ satisfies Hypothesis \ref{mainhyp} then $G \cong \mathrm{HN}$ or $G \cong \operatorname{Aut}(\mathrm{HN})$. \end{thm} For a large part of this proof we work under the hypothesis that $N_G(Z)/Q $ has shape either $4^{.}\operatorname{Alt}(5)$ or $4^{.}\operatorname{Sym}(5)$. We will refer to these two possibilities as Case I and Case II respectively. In Section \ref{HN-Section-3Local}, we determine the possibilities for the structure of certain $3$-local subgroups of $G$. This allows us to see the fusion of elements of order three in $G$. In particular, it allows us to identify a distinct conjugacy class of elements of order three. In $3$-local recognition results, it is often necessary to determine $C_G(x)$ for each element of order three in $G$. In this case, we have just one further centralizer to determine, which is isomorphic to $3 \times \operatorname{Alt}(9)$ or $3 \times \operatorname{Sym}(9)$, and we use a theorem due to Prince \cite{PrinceSym9} to recognize this centralizer. In Section \ref{HN-Section-CG(t)} we determine the structure of $C_G(t)$ where $t$ is a $2$-central involution. This requires a great deal of $2$-local analysis; in particular, we must take full advantage of our knowledge of the $2$-local subgroups in $\operatorname{Alt}(9)$ and use a theorem due to Goldschmidt about $2$-subgroups with a strongly closed abelian subgroup.
The determination of $C_G(t)$ is much more difficult than in similar recognition results (in \cite{AstillO8+2.3} for example). A reason for this may be that the $3$-rank of $C_G(t)/O_2(C_G(t))$ is just two whilst the $2$-rank is four. An easier example may have greater $3$-rank and lesser $2$-rank. We also show in Section \ref{HN-Section-CG(t)} that in Case II of the hypothesis, $G$ has a proper normal subgroup which satisfies Case I. Once we have made this observation, our calculations are simplified significantly as we can reduce to a Case I hypothesis only. One conjugacy class of involution centralizers is not enough to recognize $\mathrm{HN}$ and so in Section \ref{HN-Section-CG(u)} we prove that $G$ also has an involution centralizer which has shape $(2^{\cdot}\mathrm{HS}):2$ by making use of a theorem of Aschbacher. The results of Sections \ref{HN-Section-CG(t)} and \ref{HN-Section-CG(u)} allow us to apply the uniqueness theorem of Segev to prove that in Case I $G\cong \mathrm{HN}$. It then follows easily that in Case II, $G\cong \operatorname{Aut}(\mathrm{HN})$. All groups in this article are finite. We note that $\operatorname{Sym}(n)$ and $\operatorname{Alt}(n)$ denote the symmetric and alternating groups of degree $n$ and $\operatorname{Dih}(n)$ and $\mathrm{Q}(n)$ denote the dihedral and quaternion groups of order $n$. Notation for classical groups follows \cite{Aschbacher}. All other groups and notation for group extensions follow the {\sc Atlas} \cite{atlas} conventions. In particular we mention that the shape of a group is some description of its normal structure, and we use the symbol $\sim$ (for example, if $G\cong \operatorname{Sym}(4)$, we may choose to write $G\sim 2^2.\operatorname{Sym}(3)$). If $H$ is a group acting on a set containing $x$ then $x^H$ is the orbit of $x$ under $H$. If a group $A$ acts on a group $B$ and $a \in A$ and $b \in B$ then $[b,a]=b\inv b^a$.
Further group theory notation and terminology is standard, as in \cite{Aschbacher} and \cite{stellmacher}, except that $\mathbf{Z}(H)$ denotes the centre of a group $H$. \section{Preliminary Results} \begin{thm}[Aschbacher]\cite{AschbacherHS}\label{Aschbacher-HS} Let $G$ be a group with an involution $t$ and set $H:=C_G(t)$. Let $V \leqslant G$ such that $V\cong 2 \times 2 \times 2$ and set $M:=N_G(V)$. Suppose that \begin{enumerate}[$(i)$] \item $O_2(H)\cong 4 *2_+^{1+4}$ and $H/O_2(H) \cong \operatorname{Sym}(5)$; and \item $V \leqslant O_2(H)$, $O_2(M)\cong 4 \times 4 \times 4$ and $M/O_2(M)\cong \mathrm{GL}_3(2)$. \end{enumerate} Then $G\cong\mathrm{HS}$. \end{thm} \begin{thm}[Segev]\cite{SegevHN}\label{Segev-HN} Let $G$ be a finite group containing two involutions $u$ and $t$ such that $C_G(u)\cong (2^{\cdot} \mathrm{HS}) : 2$ and $C_G(t)\sim 2_+^{1+8}.(\operatorname{Alt}(5)\wr 2)$ with $C_G(O_2(C_G(t)))\leqslant O_2(C_G(t))$. Then $G \cong \mathrm{HN}$. \end{thm} Recall that given a $p$-group $S$, we set $\Omega(S)=\<x \mid x^p=1\>$. \begin{thm}[Goldschmidt]\cite[p370]{stellmacher} \label{goldschmidt} Let $S$ be a Sylow 2-subgroup of a group $G$ and let $A$ be an abelian subgroup of $S$ such that $A$ is strongly closed in $S$ with respect to $G$. Suppose that $G = \<A^G\>$ and $O_{2'}(G) = 1$. Then $G = F^*(G)$ and $A = O_2(G)\Omega(S)$. \end{thm} \begin{thm}[Hayden]\cite[3.3, p545]{HaydenPSp43}\label{Hayden} Let $G$ be a finite group and let $T \in \operatorname{Syl}_3(G)$ be elementary abelian of order nine. Assume that \begin{enumerate}[$(i)$] \item $N_G(T)/C_G(T)\cong 2\times 2$; \item $C_G(T)=T$; and \item $C_G(t)\leqslant N_G(T)$ for each $t \in T^\#$. \end{enumerate} Then $G=N_G(T)$. \end{thm} \begin{thm}[Feit--Thompson] \cite{FeitThompson}\label{Feit-Thompson} Let $G$ be a finite group containing a subgroup, $X$, of order three such that $C_G(X)=X$.
Then one of the following holds: \begin{enumerate}[$(i)$] \item $G$ contains a nilpotent normal subgroup, $N$, such that $G/N\cong \operatorname{Sym}(3)$ or $C_3$; \item $G$ contains an elementary abelian normal 2-subgroup, $N$, such that $G/N\cong \operatorname{Alt}(5)$; or \item $G\cong\mathrm{PSL}_2(7)$. \end{enumerate} \end{thm} The result can be found in \cite{FeitThompson}; however, the additional information in conclusion $(ii)$ that $N$ is elementary abelian uses a theorem of Higman \cite{Higman}. \begin{thm}[Prince]\cite{PrinceSym9}\label{princeSym9} Let $G$ be a group and suppose $x \in G$ has order 3 such that $C_G(x)\cong C_{\operatorname{Sym}(9)}(\hspace{0.5mm} (1,2,3)(4,5,6)(7,8,9) \hspace{0.5mm} )$ and there exists $J\leqslant C_G(x)$ which is elementary abelian of order 27 and normalizes no non-trivial $3'$-subgroup of $G$. Then either $J\vartriangleleft G$ or $G \cong \operatorname{Sym}(9)$. \end{thm} \begin{lemma}\label{Prelims-sym9} Let $G$ be a group of order $3^4\cdot 2$ with $S \in \operatorname{Syl}_3(G)$ and $T \in \operatorname{Syl}_2(G)$ and $J\vartriangleleft G$ elementary abelian of order 27. Suppose that $Z:=\mathbf{Z}(S)$ has order three and $Z \leqslant C_S(T)\neq S$. Then $G\cong C_{\operatorname{Sym}(9)}(\hspace{0.5mm} (1,2,3)(4,5,6)(7,8,9) \hspace{0.5mm} )$. \end{lemma} \begin{proof} We have that $T$ normalizes $Z$ and $J$ and so, by Maschke's Theorem, there exists a subgroup $K \leqslant J$ such that $K$ is a $T$-invariant complement to $Z$ in $J$. Set $L:=KT$; then $K\trianglelefteq L$ and $[G:L]=9$. Suppose that $N \leqslant L$ and that $N$ is normal in $G$. If $3\mid |N|$ then $N\cap \mathbf{Z}(S) \neq 1$, which is a contradiction since $Z \nleq K$. So $N$ is a $2$-group, which implies $N=1$, as otherwise $G$ has a central involution. Hence there is an injective homomorphism from $G$ into $\operatorname{Sym}(9)$.
Moreover, there is a map from $G$ into the centralizer in $\operatorname{Sym}(9)$ of the centre of a Sylow $3$-subgroup. Since $| C_{\operatorname{Sym}(9)}(\hspace{0.5mm} (1,2,3)(4,5,6)(7,8,9) \hspace{0.5mm} )|=|G|$, we have an isomorphism. \end{proof} \begin{thm}[Parker--Rowley]\cite{ParkerRowleyA8}\label{ParkerRowleyA8} Let $G$ be a finite group with $R :=\<a, b\>$ an elementary abelian Sylow 3-subgroup of $G$ of order nine. Assume the following hold. \begin{enumerate}[$(i)$] \item $C_G(R)=R$ and $N_G(R)/C_G(R)\cong \operatorname{Dih}(8)$. \item $C_G(a)\cong 3\times \operatorname{Alt}(5)$ and $N_G(\<a\>)$ is isomorphic to the diagonal subgroup of index two in $\operatorname{Sym}(3)\times \operatorname{Sym}(5)$. \item $C_G(b)\leqslant N_G(R)$, $C_G(b)/R\cong 2$ and $N_G(\<b\>)/R\cong 2\times 2$. \end{enumerate} Then $G$ is isomorphic to $\operatorname{Alt}(8)$. \end{thm} \begin{cor}\label{Cor-ParkerRowleyA8} Let $G$ be a group and $\operatorname{Alt}(8)\cong H\leqslant G$ such that for $R \in \operatorname{Syl}_3(H)$ and each $r \in R^\#$, $C_G(r) \leqslant H$. Then $G=H$. \end{cor} \begin{proof} Suppose $R$ is not a Sylow $3$-subgroup of $G$. Then there exists $R<S\in \operatorname{Syl}_3(G)$. Therefore $R<N_S(R)$ and $1 \neq r \in \mathbf{Z}(N_S(R))\cap R$. Therefore $N_S(R)\leqslant C_G(r)\leqslant H$, which is a contradiction. Thus $R \in \operatorname{Syl}_3(G)$. Pick $a,b\in R$ such that $C_H(a)\cong 3 \times \operatorname{Alt}(5)$ and $C_H(b)\leqslant N_H(R)$. Now we check the hypotheses of Theorem \ref{ParkerRowleyA8}. We have that for any $r \in R^\#$, $C_G(R)\leqslant C_G(r) \leqslant H$ and so $C_G(R)=C_H(R)=R$. So consider $N_G(R)/C_G(R)$, which is isomorphic to a subgroup of $\mathrm{GL}_2(3)$. Since $R \in \operatorname{Syl}_3(G)$, $N_G(R)/R$ is a $2$-group. Also $N_H(R)/R\cong \operatorname{Dih}(8)$. Suppose $N_G(R)/R \cong \operatorname{SDih}(16)$.
Then $N_G(R)$ is transitive on $R^\#$, which is a contradiction. Therefore $N_G(R)=N_H(R)$ and $N_G(R)/C_G(R)\cong \operatorname{Dih}(8)$, so $(i)$ is satisfied. Now $C_G(a)=C_H(a)$ and there exists some $x \in H$ that inverts $a$. Therefore $N_H(\<a\>)=C_H(a)\<x\>\leqslant H$. Similarly $C_G(b)=C_H(b)$ and there exists some $y \in H$ that inverts $b$. Therefore $N_H(\<b\>)=C_H(b)\<y\>\leqslant H$. Thus $(ii)$ and $(iii)$ are satisfied, so $G=H\cong \operatorname{Alt}(8)$. \end{proof} \begin{lemma}\cite[3.20 $(iii)$]{ParkerRowley-book}\label{Parker-Rowley-SL2(q)-splitting} Let $X\cong \mathrm{SL}_2(3)$ and $S \in \operatorname{Syl}_3(X)$. Suppose that $X$ acts on an elementary abelian $3$-group $V$ such that $V=\<C_V(S)^X\>$, $C_V(X)=1$ and $[V,S,S]=1$. Then $V$ is a direct product of natural modules for $X$. \end{lemma} \begin{lemma}\label{conjugation in thompson subgroup} Let $G$ be a group, $p$ be a prime and $S\in \operatorname{Syl}_p(G)$. Suppose $J(S)$ is abelian and suppose $a,b \in J(S)$ are conjugate in $G$. Then $a$ and $b$ are conjugate in $N_G(J(S))$. \end{lemma} \begin{proof} Suppose $a^g=b$ for some $g \in G$. Notice first that it follows immediately from the definition of the Thompson subgroup that $J(S)^g=J(S^g)$. Now $J(S),J(S^g)\leqslant C_G(b)$. Let $P,Q \in \operatorname{Syl}_p(C_G(b))$ such that $J(S)\leqslant P$ and $J(S^g)\leqslant Q$. Again, by the definition of the Thompson subgroup, it is clear that $J(S)\leqslant P$ implies $J(S)=J(P)$ and similarly $J(S^g)= J(Q)$. By Sylow's Theorem, there exists $x \in C_G(b)$ such that $Q^x=P$ and so $J(S)=J(P)= J(Q)^x=J(S)^{gx}$. Thus $gx \in N_G(J(S))$ and $a^{gx}=b^x=b$ as required. \end{proof} \begin{lemma}\label{Prelims 2^8 3^2 Dih(8)} Let $X$ be a group with an elementary abelian subgroup $E\vartriangleleft X$ of order $2^{2n}$ such that $C_X(E)=E$.
Let $S \in \operatorname{Syl}_2(X)$ and suppose that whenever $E<R\trianglelefteq S$ with $R/E$ elementary abelian and $|R/E|=2^a$ we have $|C_E(R)|\leqslant 2^{2n-a-1}$. Then $E$ is characteristic in $S$. \end{lemma} \begin{proof} First observe that since $C_X(E)=E$, $X/E$ is a group of outer automorphisms of $E$. Let $\alpha$ be an automorphism of $S$ such that $E^\alpha\neq E$. Then $R:=EE^\alpha\trianglelefteq S$. Since $E^\alpha$ is elementary abelian, we have that $E^\alpha/(E \cap E^\alpha)\cong EE^\alpha/E=R/E$ is elementary abelian and $E \cap E^\alpha$ is central in $R$. If $|R/E|=2^a$ then $|E \cap E^\alpha|=2^{2n-a}$, so $|C_E(R)|\geqslant |E \cap E^\alpha|=2^{2n-a}$, which is a contradiction. \end{proof} \begin{lemma}\label{prelims-extraspecial 2^5 in GL(4,3)} Let $E\leqslant \mathrm{GL}_4(3)$ such that $|E|=2^5$ and $|\Phi(E)|\leqslant 2$. Furthermore, let $S \leqslant \mathrm{GL}_4(3)$ be elementary abelian of order nine such that $S$ acts faithfully on $E$. If $\mathrm{Q}(8) \cong A \cong B$ with $A\neq B$ both $S$-invariant subgroups of $E$, then $E\cong 2_+^{1+4}$ and $E$ is uniquely determined up to conjugation in $\mathrm{GL}_4(3)$. \end{lemma} \begin{proof} Note that $E$ is non-abelian since $A,B \leqslant E$. Therefore $|E/\Phi(E)|=2^4$ is acted on faithfully by ${S}$. Hence, $S$ is isomorphic to a subgroup of $\mathrm{GL}_4(2)$. Now observe that $\mathrm{GL}_4(2)$ has Sylow $3$-subgroups of order nine which contain an element of order three which acts fixed-point-freely on the natural module. Thus any ${S}$-invariant subgroup of $E$ properly containing $\Phi(E)$ has order $2^3$ or $2^5$. Since $A$ and $B$ are distinct and normalized by $S$, we have $E=AB$. Suppose $|\mathbf{Z}(E)|>2$. Then $\mathbf{Z}(E)$ has order $8$ and is ${S}$-invariant. By coprime action, $\mathbf{Z}(E)=\<C_{\mathbf{Z}(E)}(s)\mid s \in S^\#\>$. Thus there exists $s \in S^\#$ such that $C_{\mathbf{Z}(E)}(s)>\Phi(E)$.
Since $E=AB$, we find $a\in A$ and $b\in B$ such that $ab \in C_{\mathbf{Z}(E)}(s)\bs \Phi(E)$. Then, as $s$ normalizes $A$ and $B$, $s$ must centralize $a$ and $b$. Now $C_E(s)$ is $S$-invariant with $|C_E(s) \cap A|\geqslant 4$ and $|C_E(s) \cap B|\geqslant 4$. It follows that $[E,s]=1$, which is a contradiction. Thus $\mathbf{Z}(E)=\Phi(E)$, so $E$ is extraspecial and $E\cong 2_+^{1+4}$. Since $E$ is extraspecial, $[E:E']=2^4$. Therefore there are sixteen 1-dimensional representations of $E$ over $\mathrm{GF}(3)$. Moreover, there is a $4$-dimensional representation of $E$ since $E \leqslant \mathrm{GL}_4(3)$. Since $16+4^2=32=|E|$, this accounts for all the irreducible representations of $E$ over $\mathrm{GF}(3)$. Hence there is a unique $4$-dimensional representation of $E$ and so there is one conjugacy class of such subgroups in $\mathrm{GL}_4(3)$. \end{proof} The following lemma is an application of Extremal Transfer (see \cite[15.15, p92]{GLS2}) that will be needed in Section \ref{HN-Section-CG(u)}. \begin{lemma}\label{Prelims-4*4*4 transfer} Let $G$ be a group and $4\times 4 \times 4 \cong A \leqslant G$ with $C_G(A)=A$. Set $X:=N_G(A)$ and assume that $X\sim 4^3: (2 \times \mathrm{GL}_3(2))$ contains a Sylow $2$-subgroup of $G$. Furthermore, suppose that there exists an involution $u \in X \bs O^2(X)$ such that $C_G(u)\cong 2 \times \operatorname{Sym}(8)$. Then $u \notin O^2(G)$. In particular, $O^2(G)\neq G$. \end{lemma} \begin{proof} Let $Y:=O^2(X)$; then $Y/A\cong \mathrm{GL}_3(2)$ and $u \notin Y$. We assume for a contradiction that for some $g \in G$, $r:=u^g \in Y$, and so we apply \cite[15.15, p92]{GLS2} to see that $C_X(r)$ contains a Sylow $2$-subgroup of $C_G(r)\cong 2 \times \operatorname{Sym}(8)$. Observe first that $r \notin A$ because no element of order four in $G$ squares to $r$. Set $V:=\Omega(A)\cong 2^3$ and let $S\in \operatorname{Syl}_2(C_X(r))$.
Then $|S|=2^8$ and therefore $|S \cap A|\geqslant 2^4$. It follows that $S \cap A\cong 4 \times 4$ since $Ar\in Y/A\cong \mathrm{GL}_3(2)$ acts faithfully on $V$ and therefore $|C_V(r)|\leqslant 2^2$. In particular, $|C_A(r)|=2^4$ and so $SA\in \operatorname{Syl}_2(X)$. Since $X/A\cong 2\times \mathrm{GL}_3(2)$, $2 \times \operatorname{Dih}(8)\cong SA/A \cong S /(A \cap S)=S/C_A(r)$. Set $S_0:=S \cap Y$; then $r \in S_0$ and we have that $\operatorname{Dih}(8)\cong S_0A/A \cong S_0 /(A \cap S_0)=S_0/C_A(r)$. Since $r \in \mathbf{Z}(S)$, $C_A(r)r\in \mathbf{Z}(S_0/C_A(r))$. Therefore $S_0/\<C_A(r),r\>\cong 2 \times 2$. Let $C_A(r)<R<S$ be such that $|R/C_A(r)|=2$, $S=S_0R$ and $[R,S_0]\leqslant C_A(r)$. This is possible as $S/C_A(r)\cong 2 \times \operatorname{Dih}(8)$. We have therefore that $[R,S_0]$, $S_0 \cap R \leqslant C_A(r) \leqslant \<C_A(r),r\>$ and so $S/\<C_A(r),r\> \cong 2 \times 2 \times 2$. Now $\<C_A(r),r\>/\<r\>\cong C_A(r)\cong 4 \times 4$. Hence, $S/\<r\>\sim (4 \times 4). (2 \times 2 \times 2)$, which is a subgroup of $C_G(r)/\<r\>\cong \operatorname{Sym}(8)$ of order $2^7$ and is therefore a Sylow $2$-subgroup of $\operatorname{Sym}(8)$. However, $S/\<r\>$ has abelian derived subgroup, whereas a Sylow $2$-subgroup of $\operatorname{Sym}(8)$ has non-abelian derived subgroup, which supplies us with a contradiction. Thus $u \notin O^2(G)$. \end{proof} \begin{lemma}\label{Prelims-centralizers of invs on a vspace which invert a 3} Let $G$ be a group with a normal $2$-subgroup $V$ which is elementary abelian of order $2^n$. Suppose $t$ and $w$ are in $G$ such that $Vt$ has order two, $Vw$ has order three and $Vt$ inverts $Vw$. If $|C_V(w)|=2^a$ then $|C_V(t)|\leqslant 2^{(n+a)/2}$. \end{lemma} \begin{proof} Since $Vt$ inverts $Vw$, we have that $Vw=Vtw^2t$ and so $Vw^2=Vtw^2tw=VtVt^w$. Therefore $C_V(t) \cap C_V(t^w) \leqslant C_V(w^2)=C_V(w)$. We have that $|V| \geqslant |C_V(t)C_V(t^w)|=|C_V(t)||C_V(t^w)|/|C_V(t) \cap C_V(t^w)|$ and so $2^n \geqslant |C_V(t)|^2/2^a$, which implies $|C_V(t)|\leqslant 2^{(n+a)/2}$.
\end{proof} \begin{lemma}\label{lem-conjinvos} Let $G$ be a finite group and $V\trianglelefteq G$ be an elementary abelian $2$-group. Suppose that $r\in G$ is an involution such that $C_{V}(r)=[V,r]$. Then \begin{enumerate}[$(i)$] \item every involution in $Vr$ is conjugate to $r$; and \item $|C_{G}(r)|=|C_{V}(r)||C_{G/V}(Vr)|$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ Let $t\in Vr$ be an involution. Then $t=qr$ for some $q\in V$. Since $t^2=1$, we have that $1=qrqr=[q,r]$ as $r$ and $q$ have order at most two. So $q\in C_{V}(r)=[V,r]$. So $q=q_{1}rq_{1}r$ for some $q_{1}\in V$, and therefore $t=q_{1}rq_{1}rr=r^{q_{1}}$ and so $t$ is conjugate to $r$ by an element of $V$. $(ii)$ Define a homomorphism $\phi:C_{G}(r)\rightarrow C_{G/V}(Vr)$ by $\phi(x)=Vx$. Then $\ker \phi=C_{V}(r)$. Moreover, if $Vy\in C_{G/V}(Vr)$ then $Vr^y=Vr$. Hence, using $(i)$, we see that there exists $q \in V$ such that $r^y=r^q$. Therefore $q\inv y \in C_G(r)$ and of course $Vq\inv y =Vy$ and so $\phi(q\inv y)=Vy$. Therefore $\phi$ is surjective. Thus, by an isomorphism theorem, $C_{G}(r)/C_{V}(r)\cong C_{G/V}(Vr)$ and $|C_{G}(r)|=|C_{V}(r)||C_{G/V}(Vr)|$, as required. \end{proof} \section{Determining the 3-Local Structure of $G$}\label{HN-Section-3Local} We assume Hypothesis \ref{mainhyp}. Let $x \in G\bs N_G(Z)$ be such that $Z^x \leqslant Q$ and set $Y:=ZZ^x \leqslant Q$. We begin by making some easy observations, in particular noting that $Z \leqslant Q^x$ and so our hypothesis is symmetric. For a large part of this proof we work under the hypothesis that $N_G(Z)/Q $ has shape either $4^{\cdot}\operatorname{Alt}(5)$ or $4^{\cdot}\operatorname{Sym}(5)$. We will refer to these two possibilities as Case I and Case II respectively. At the end of Section \ref{HN-Section-CG(t)} we are able to see that in Case II $G$ has a proper normal subgroup which satisfies the hypothesis of Case I, and so from that point we consider Case I only.
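For later order computations it is convenient to record the orders involved; this is a routine count, using that $Q$ is extraspecial with $|Q/Z|=3^4$ (see Lemma \ref{HN-AnotherEasyLemma}):
\[
|N_G(Z)| \;=\; |Q|\cdot|N_G(Z)/Q| \;=\;
\begin{cases}
3^5\cdot 2^4\cdot 3\cdot 5 \;=\; 2^4\cdot 3^6\cdot 5 & \text{in Case I,}\\[2pt]
3^5\cdot 2^5\cdot 3\cdot 5 \;=\; 2^5\cdot 3^6\cdot 5 & \text{in Case II.}
\end{cases}
\]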
\begin{lemma}\label{Prelims-EasyLemma} \begin{enumerate}[$(i)$] \item $|Z|=3$. \item $C_{C_G(Z)}(Q/Z)=Q$. \item $Z \leqslant Q^x$. \item $Q \cap Q^x$ is elementary abelian. \end{enumerate} \end{lemma} \begin{proof} $(i)$ By hypothesis, $Z$ is the centre of a Sylow $3$-subgroup, $S$ say, of $G$; therefore $\operatorname{Syl}_3(C_G(Z)) \subseteq \operatorname{Syl}_3(G)$. We have that $Q=O_3(C_G(Z))$ and $C_G(Q) \leqslant Q$. Therefore $[Q,Z]=1$ implies that $Z \leqslant Q$. Thus $Z \leqslant \mathbf{Z}(Q)$ and $|\mathbf{Z}(Q)|=3$ since $Q$ is extraspecial. Hence $Z=\mathbf{Z}(Q)=\mathbf{Z}(S)$ has order three. $(ii)$ Suppose that $p$ is a prime and $g \in C_G(Z)$ is a $p$-element such that $[Q/Z,g]=1$. If $p \neq 3$ we may apply coprime action to see that $Q/Z=C_{Q/Z}(g)=C_Q(g)Z/Z= C_Q(g)/Z$ and so $[Q,g]=1$, which is a contradiction as $C_G(Q) \leqslant Q$. Therefore $C_{C_G(Z)/Z}(Q/Z)$ is a $3$-group and its preimage in $C_G(Z)$ is a normal $3$-subgroup of $C_G(Z)$ and so must be contained in $O_3(C_G(Z))=Q$. Therefore $Q \leqslant C_{C_G(Z)}(Q/Z)\leqslant Q$. $(iii)$ Suppose $Z \nleq Q^x$. Notice that $C_Q(Y)$ normalizes $Q$ and $Q^x$ and therefore $Q \cap Q^x \trianglelefteq C_Q(Y)$. This implies that $Q \cap Q^x=Z^x$, for if we had $Q \cap Q^x>Z^x$ then $|C_Q(Y)/(Q\cap Q^x)|\leqslant 9$ and so $Z=C_Q(Y)'\leqslant Q \cap Q^x$. Therefore $Q^xC_Q(Y)/Q^x\cong C_Q(Y)/Z^x$, which must be non-abelian of exponent three and order $3^3$, and so $Q^xC_Q(Y)/Q^x$ is a subgroup of $C_G(Z^x)/Q^x$ of order $3^3$, which is a contradiction. $(iv)$ Since $[Q,Q]=Z \neq Z^x =[Q^x,Q^x]$, we immediately see that $[Q \cap Q^x,Q \cap Q^x] \leqslant Z\cap Z^x =1$. Therefore $Q \cap Q^x$ is abelian and, since $Q$ has exponent three, $Q \cap Q^x$ is elementary abelian. \end{proof} Let $t \in N_G(Z)$ be an involution such that $Qt \in \mathbf{Z}(N_G(Z)/Q)$.
\begin{lemma}\label{HN-AnotherEasyLemma} \begin{enumerate}[$(i)$] \item $N_G(Z)/Q$ acts irreducibly on $Q/Z$ and, in Case I, $C_G(Z)/Q\cong 2^{\cdot} \operatorname{Alt}(5)\cong \mathrm{SL}_2(5)$ whilst, in Case II, it has shape $2^{\cdot}\operatorname{Sym}(5)$. \item $C_Q(t)=Z=C_Q(f)$ for every element of order five $f \in C_G(Z)$. In particular, in Case I, $C_G(Y)$ is a $3$-group and, in Case II, $C_G(Y)$ is a $\{2,3\}$-group with $2$-part at most $2$. \item There exists a group $X<C_G(Z)$ with $X/Q\cong 2^{\cdot}\operatorname{Alt}(4)\cong \mathrm{SL}_2(3)$ and such that $X/Q$ has no central chief factors on $Q/Z$. \end{enumerate} \end{lemma} \begin{proof} It is clear that $N_G(Z)/Q$ acts irreducibly on $Q/Z$ (for example, from the fact that $5 \nmid |\mathrm{GL}_3(3)|$) and so has no non-trivial modules of dimension less than four over $\mathrm{GF}(3)$. Since $Qt \in \mathbf{Z}(N_G(Z)/Q)$, it follows from coprime action that $C_{Q/Z}(Qt)=C_Q(t)/Z$. Hence $C_Q(t)$ is a normal subgroup of $N_G(Z)$ which is not equal to $Q$ since $C_G(Q)\leqslant Q$. Since $C_Q(t)/Z$ is a proper $N_G(Z)/Q$-submodule of $Q/Z$, we have $C_Q(t)=Z$ and $[Q,t]=Q$. In particular, notice that this means that the normal subgroup of order four in $N_G(Z)/Q$ does not centralize $Z$. Thus $C_G(Z)/Q\cong 2^{\cdot} \operatorname{Alt}(5)\cong \mathrm{SL}_2(5)$ or has shape $2^{\cdot}\operatorname{Sym}(5)$, which proves $(i)$. Now, for $f \in N_G(Z)$ of order five, by coprime action, $Q/Z=C_{Q/Z}(f) \times [Q/Z,f]$ and $C_{Q/Z}(f)=C_Q(f)/Z\neq Q/Z$. Since $f$ acts fixed-point-freely on $[Q/Z,f]$, the number of non-trivial elements of $[Q/Z,f]$ is a multiple of five; as $5 \mid 3^k-1$ with $1\leqslant k\leqslant 4$ only when $k=4$, we conclude that $Q/Z=[Q/Z,f]$ and so $C_Q(f)=Z$. Now $C_G(Y) \leqslant C_G(Z)$ and $C_G(Y)$ contains no involution or element of order five.
In the case that $N_G(Z)/Q\cong 4^{\cdot} \operatorname{Alt}(5)$ (and so $C_G(Z)/Q\cong 2^{\cdot} \operatorname{Alt}(5)$), since the Sylow $2$-subgroups are quaternion, we see that $C_G(Y)$ is a $3$-group. Otherwise, $Y$ is centralized by a $2$-group of order at most $2$. This proves $(ii)$. $(iii)$ Observe (using \cite[33.15, p170]{Aschbacher} for example) that a group of shape $2^{\cdot}\operatorname{Alt}(5)$ is uniquely defined up to isomorphism and that in either case $C_G(Z)/Q$ has such a subgroup, which necessarily contains $Qt$. Moreover, $2^{\cdot}\operatorname{Alt}(5)$ has Sylow $2$-subgroups isomorphic to $\mathrm{Q}(8)$ with normalizer isomorphic to $\mathrm{SL}_2(3)$. Let $X\leqslant C_G(Z)$ be the preimage in $C_G(Z)$ of such a normalizer. There can be no central chief factor of $X$ on $Q/Z$ because $Qt\in \mathbf{Z}(X/Q)$ inverts $Q/Z$. \end{proof} \begin{lemma}\label{facts about W} \begin{enumerate}[$(i)$] \item $W:=C_Q(Y)C_{Q^x}(Y)=O_3(C_G(Y))$ is a group of order $3^5$ and there exists $S\in \operatorname{Syl}_3(N_G(Z))$ such that $Y\vartriangleleft S$. \item $L:=\<Q,Q^x\>\leqslant N_G(Y)$, $W=C_L(Y)$, $L/W \cong \mathrm{SL}_2(3)$ and $N_G(Y)/C_G(Y) \cong \mathrm{GL}_2(3)$. \item $Y$ and $W/(Q \cap Q^x)$ are natural $L/C_L(Y)$-modules. \item $\mathbf{Z}(W)=Y$. \item $W$ has exponent three. \end{enumerate} \end{lemma} \begin{proof} $(i)$ Notice that $C_Q(Y)$ normalizes $C_{Q^x}(Y)=Q^x \cap C_G(Z)$ and so $W$ is a $3$-group. Since $Z$ is the centre of a Sylow $3$-subgroup of $G$, we clearly have that $|W|<3^6$. Now, $Q$ is extraspecial and so $|C_Q(Y)|=3^4$ and, since $Y=ZZ^x\leqslant Q^x$, we similarly have $|C_{Q^x}(Y)|=3^4$. Of course, $C_{Q^x}(Y) \neq C_{Q}(Y)$ as their derived subgroups are not equal, and so we have that $|W|=3^5$. Moreover, $|C_G(Y):W| \leqslant 2$ and so $W=O_3(C_G(Y))$ and, since $Q\nleq W$, we have that $Y\lhd QW\in \operatorname{Syl}_3(N_G(Z))$.
$(ii)$ We have that $L:=\<Q,Q^x\>\leqslant N_G(Y)$ and $L/C_L(Y)$ is isomorphic to a subgroup of $\mathrm{GL}_2(3)$. Since $L$ is generated by the two distinct $3$-subgroups $Q$ and $Q^x$, it follows that $L/C_L(Y)$ is generated by two distinct subgroups of order three and so $L/C_L(Y)\cong \mathrm{SL}_2(3)$. Also, we have seen that $C_Q(t)=Z$; therefore $t$ inverts $Y/Z$ and of course centralizes $Z$. Hence $\<L,t\>/C_L(Y)\cong \mathrm{GL}_2(3)$ and in particular $N_G(Y)/C_G(Y) \cong \mathrm{GL}_2(3)$. If $W\neq C_L(Y)$ then $|C_L(Y)/W|=2$. Notice that $L/W$ then has a normal, non-abelian Sylow $2$-subgroup, $P$ say, of order $2^4$ and that any smaller normal $2$-subgroup of $L/W$ must centralize $QW/W$ (else they would together generate $L/W$). It follows therefore that $P$ is special with centre of order four; however, there can be no such group. Thus $W=C_L(Y)$. $(iii)$ We clearly have that $Y$ is a natural $L/W$-module. Now suppose that $Y=Q \cap Q^x=C_Q(Y) \cap C_{Q^x}(Y)$. Then $|W|=3^4\cdot 3^4/3^2=3^6$, which is a contradiction; hence $Y<Q \cap Q^x$ and, since $Q \cap Q^x$ is elementary abelian, $|Q \cap Q^x|=3^3$ and so $|W/(Q \cap Q^x)|=9$. Notice that $Q \cap Q^x$ is normalized by $L$ and that $L/W$ acts on $W/(Q \cap Q^x)$, which is the direct product of the groups $C_Q(Y)/(Q \cap Q^x)$ and $C_{Q^x}(Y)/(Q \cap Q^x)$. Therefore it must be a natural $L/W$-module. $(iv)$ Clearly $Y \leqslant \mathbf{Z}(W)$, so suppose $Y<\mathbf{Z}(W)$. Then $\mathbf{Z}(W)$ has index at most nine in $W$. Since the non-abelian group $C_Q(Y)$ is contained in $W$, $W$ is non-abelian. Therefore $[W:\mathbf{Z}(W)]=9$. Notice that $\mathbf{Z}(W)\neq Q \cap Q^x$, otherwise $C_Q(Y)$ would be abelian. So we have that $(Q\cap Q^x )\mathbf{Z}(W)/(Q \cap Q^x)$ is a proper and non-trivial $L/W$-invariant subgroup of the natural $L/W$-module, $W/(Q \cap Q^x)$. This is a contradiction. Thus $Y=\mathbf{Z}(W)$. $(v)$ Since $Q$ has exponent three, $Q \cap Q^x$ does also.
Choose $a \in C_Q(Y) \bs (Q \cap Q^x)$; then $(Q \cap Q^x) a$ is a non-identity element of the natural $N_G(Y)/W$-module, $W/(Q \cap Q^x)$. Moreover, every element in the coset has order dividing three since $Q$ has exponent three. Since $N_G(Y)/W$ is transitive on the non-identity elements of the natural module $W/(Q \cap Q^x)$, every element of $W$ has order dividing three. \end{proof} We continue to use the notation $W$ and $L$ as in the previous lemma. Furthermore, let $s$ be an involution such that $Ws \in \mathbf{Z}(L/W)$; then $s$ inverts $Y$ and so $s \in N_G(Z)$. Since $t \in N_G(Y)$, we may choose $s$ such that $s$ and $t$ are in a Sylow $2$-subgroup of $N_G(Y)$ and therefore $[s,t]=1$. Furthermore set $J:=[W,s]$, and we also now set $S:=QW \in \operatorname{Syl}_3(C_G(Z)) \cap \operatorname{Syl}_3(L)$ and let $Z_2$ be the second centre of $S$ (so $Z_2/Z=\mathbf{Z}(S/Z)$). \begin{lemma}\label{lemma structure of Y and W} \begin{enumerate}[$(i)$] \item We have that $Y \leqslant Z_2\leqslant C_G(Y)$ and $Z_2$ is abelian of order $3^3$ but distinct from $Q \cap Q^x$. \item $W'=Y$. \item $Q \cap Q^x=Y C_{C_S(Y)}(s)$ and $C_{C_S(Y)}(s)\lhd C_L(s)\cong 3. \mathrm{SL}_2(3)$. \item $J$ is an elementary abelian subgroup of $C_G(Y)$ of order $3^4$ that is inverted by $s$ and $Q \cap J=Z_2$. \item $J=J(S)=J(W)$ and $Y \leqslant S' \leqslant Q \cap J$. \item $t$ and $st$ are conjugate in $N_G(Y)$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ By Lemma \ref{Prelims-EasyLemma} $(ii)$, $C_G(Z)/Q$ acts faithfully on $Q/Z$. Therefore $S/Q$ is isomorphic to a cyclic subgroup of $\mathrm{GL}(Q/Z)\cong \mathrm{GL}_4(3)$ of order three. Considering the Jordan blocks of an element of order three, we see that any such cyclic subgroup centralizes a subgroup of $Q/Z$ of order at least $3^2$. Therefore $|Z_2/Z|\geqslant 9$ and so $|Z_2|\geqslant 27$. Since $[Q/Z,Z_2]=1$, $Z_2 \leqslant Q$ by Lemma \ref{Prelims-EasyLemma} $(ii)$.
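For the Jordan block computation just used: an element of order three acting on a four-dimensional $\mathrm{GF}(3)$-space has Jordan type
\[
(3,1), \qquad (2,2), \qquad (2,1,1) \qquad \text{or} \qquad (1,1,1,1),
\]
and in every case there are at least two Jordan blocks, so the fixed-point space has dimension at least two.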
Now suppose $Z_2 \nleq C_G(Y)$. Then $S=Z_2C_S(Y) \in \operatorname{Syl}_3(G)$. Since $Z_2/Z=\mathbf{Z}(S/Z)$, $[S,Z_2] \leqslant Z$ and so $[W,Z_2] \leqslant Z \leqslant Q \cap Q^x$. Therefore $[W/(Q \cap Q^x), Z_2]=1$; however, this implies that $S/W$ acts trivially on the natural $L/W$-module $W/(Q \cap Q^x)$, which is a contradiction. So $Z_2 \leqslant C_G(Y)\cap Q$. Suppose $Q \cap Q^x \leqslant Z_2$. Then $Z^x=[C_{Q^x}(Y), Q \cap Q^x]\leqslant [C_{Q^x}(Y),Z_2]\leqslant Z$, which is a contradiction. Therefore $|Z_2|=27$ and, in particular, $Z_2 \neq Q \cap Q^x$. Furthermore, $Y\vartriangleleft S$ and so $Y/Z$ is central in $S/Z$. Therefore $Y \leqslant Z_2$ and, since $Y$ is central in $Z_2 \leqslant C_G(Y)$, $Z_2$ is abelian. $(ii)$ Now $Z=C_Q(Y)'\leqslant W'$ and $Z^x=C_{Q^x}(Y)'\leqslant W'$ and so $Y \leqslant W'$. Moreover, we have just observed that $Z_2 \neq Q \cap Q^x$ and so $Q \cap Q^x$ and $Z_2$ are distinct normal subgroups of $W$, both of index nine. Thus $Y \leqslant W' \leqslant Q \cap Q^x \cap Z_2$. It follows from the group orders that $Y=W'=Q \cap Q^x \cap Z_2$. $(iii)$ By coprime action on the abelian group $W/Y$, we have $W/Y=C_{W/Y}(s) \times [W/Y,s]$. By Lemma \ref{facts about W} $(iii)$, $Y$ and $W/(Q \cap Q^x)$ are natural $L/W$-modules. Therefore $s$ inverts $Y$ and $W/(Q \cap Q^x)$. It follows from coprime action that $|C_{W}(s)|=3$ and that $C_{W}(s)\leqslant Q \cap Q^x$ with $Q \cap Q^x=YC_{W}(s)$. Furthermore, $L/W\cong \mathrm{SL}_2(3)$ and $Ws\in \mathbf{Z}(L/W)$, so it follows from coprime action that $C_{W}(s)\lhd C_L(s)\sim 3.\mathrm{SL}_2(3)$. $(iv)$ We have that $Y$ is inverted by $s$ and so $Y=[Y,s] \leqslant J$. We also have that $|[W/Y,s]|=9$ and, by coprime action, we see $[W/Y,s]=Y[W,s]/Y=YJ/Y \cong J/(J \cap Y)=J/Y$. This implies that $J$ has order $3^4$ and furthermore we have that $1=J\cap C_{W}(s)$. This implies that $s$ acts fixed-point-freely on $J$ and so $J$ is abelian and inverted by $s$.
By Lemma \ref{facts about W} $(v)$, $W$ has exponent three and so $J$ is elementary abelian. Now $Z_2$ is a characteristic subgroup of $S$ and so is normalized by $s$. Moreover $Y \leqslant Z_2$. If $C_{Z_2}(s)\neq 1$ then $Z_2=Q \cap Q^x=YC_{W}(s)$, which is a contradiction. Thus $Z_2=J \cap Q$. $(v)$ Suppose there were another abelian subgroup of $W$ of order $3^4$, $J_0$ say. Then $|J \cap J_0|=3^3$ and $J \cap J_0$ is central in $W=JJ_0$. This contradicts Lemma \ref{facts about W} $(iv)$, which says that $\mathbf{Z}(W)=Y$. It follows therefore that $J(W)=J$. Clearly $3^4$ is the largest possible order of an abelian subgroup of $S$ (else $Q$ would contain abelian subgroups of order $3^4$). So suppose $J_1$ is an abelian subgroup of $S$ of order $3^4$ distinct from $J$. Then $J_1 \nleq W$ and $J_1 \nleq Q$. Therefore, $S/Z$ contains three distinct abelian subgroups $Q/Z$, $J/Z$ and $J_1/Z$. We must have that $S=QJ=QJ_1$. Hence, $(Q/Z) \cap (J/Z)$ and $(Q/Z) \cap (J_1/Z)$ both have order nine and are both central in $S/Z$. We must have that $(Q/Z) \cap (J/Z) =(Q/Z) \cap (J_1/Z)=Z_2/Z$. Thus $Y \leqslant Z_2 \leqslant J_1$ and so $J_1\leqslant C_S(Y)$, which we have seen is not possible. Thus $J=J(S)$. In particular, $J$ is a normal subgroup of $S$ of index nine and so $Y=W' \leqslant S' \leqslant Q \cap J$. $(vi)$ Finally, we have seen that $L\<t\>/W\cong \mathrm{GL}_2(3)$ and so $Wt$ is conjugate to $Wst$; since $W$ is a $3$-group, any two involutions in the coset $Wst$ are $W$-conjugate, and so an element of $L\<t\>$ conjugates $t$ to $st$. \end{proof} We now choose a subgroup $S \leqslant X \leqslant C_G(Z)$ such that $X/Q\cong \mathrm{SL}_2(3)$ as in Lemma \ref{HN-AnotherEasyLemma}. \begin{lemma} $Q/Z=\<C_{Q/Z}(S)^{X/Q}\>$ and $S/Q$ acts quadratically on $Q/Z$. \end{lemma} \begin{proof} First observe that, since $X/Q\cong \mathrm{SL}_2(3)$ and there is no central chief factor of $X/Q$ on $Q/Z$, any non-trivial proper $X/Q$-submodule of $Q/Z$ is necessarily a natural $X/Q$-module.
Let $Z < V< Q$ be such that $V/Z$ is an $X/Q$-submodule and is therefore a natural module. Thus $S/Q$ acts non-trivially on $V/Z$. In particular, this means $V/Z\neq \mathbf{Z}(S/Z)=C_{Q/Z}(S)$. So $\mathbf{Z}(S/Z)$ is not contained in any proper $X$-invariant subgroup of $Q$. Thus $Q/Z=\<C_{Q/Z}(S)^{X/Q}\>$. Now, $J=J(S)$ is abelian and normalized by $S$ and so $[Q,J]\leqslant J$ and then $[Q,J,J]=1$. Now $J \nleq Q$ (as $Q$ has no abelian subgroups of order $3^4$) and so $S/Q=JQ/Q$ and therefore $[Q/Z,JQ/Q,JQ/Q]=1$ and so $S/Q$ acts quadratically on $Q/Z$. \end{proof} We have now satisfied the conditions of Lemma \ref{Parker-Rowley-SL2(q)-splitting} and so we have the following results. \begin{lemma}\label{Lemma N1-N4} \begin{enumerate}[$(i)$] \item $Q/Z$ is a direct product of natural $X/Q$-modules. \item There are exactly four $X$-invariant subgroups $N_1,N_2,N_3,N_4< Q$ properly containing $Z$ such that for $i \neq j$, $N_i \cap N_j=Z$. \item $N_i \cap J$ has order nine for each $i$ and $S'=J \cap Q=\<N_i \cap J|1\leqslant i \leqslant 4\>$. \item For some $i \in \{1,2,3,4\}$, $Y \leqslant N_i$ and $N_i$ is abelian. \item For each $i \in \{1,2,3,4\}$, $X$ is transitive on $N_i\bs Z$. \end{enumerate} \end{lemma} \begin{proof} Part $(i)$ follows immediately from Lemma \ref{Parker-Rowley-SL2(q)-splitting}, which says that $Q/Z$ is a direct product of natural $X/Q$-modules. Let $N_1$ and $N_2$ be the corresponding subgroups of $Q$. View $N_1/Z$ and $N_2/Z$ as vector spaces over $\mathrm{GF}(3)$. Since $N_1/Z$ and $N_2/Z$ are isomorphic as $X$-modules, it follows that there are two additional isomorphic submodules in $Q/Z$ (the images of the diagonal maps determined by a module isomorphism and its negative). Let $N_3$ and $N_4$ be the corresponding normal subgroups of $Q$. Then $N_3/Z$ and $N_4/Z$ are natural $X$-modules and, for $i \neq j$, $N_i \cap N_j=Z$. This proves $(ii)$.
By Lemma \ref{lemma structure of Y and W}, with $Z_2/Z=\mathbf{Z}(S/Z)$ we have $Y \leqslant Z_2$ and $Z_2=J \cap Q$ is elementary abelian of order $27$. Now, for each $i\in \{1,2,3,4\}$, $C_{N_i/Z}(S) \neq 1$ and so $Z_2 \cap N_i=J \cap N_i$ has order at least nine. In fact the order must be exactly nine: were it greater, then for some $i$ we would have $N_i=Z_2$, and then $N_i \cap N_j$ would have order at least nine for each $j \neq i$. Now, for each $i \neq j$, $N_i \cap N_j=Z$ and so $N_i \cap J \neq N_j \cap J$ and so $Z_2=\<N_i \cap J|1\leqslant i \leqslant 4\>$. In particular, since the four subgroups $N_i\cap J$ are precisely the subgroups of $Z_2$ of order nine containing $Z$, we must have (without loss of generality) that $N_1\cap J=Y$. By Lemma \ref{lemma structure of Y and W} $(v)$, $Y\leqslant S' \leqslant Q \cap J$. Suppose $S'=Y$. Then for any $2\leqslant i \leqslant 4$, $Y \nleq N_i$ and so $[N_i,S] \leqslant N_i \cap Y =Z$. Therefore $N_i \leqslant Z_2$, which is a contradiction. Thus $Y<S'=J \cap Q$, which proves $(iii)$. We already have that (without loss of generality) $N_1\cap J=Y$. Suppose that $N_1$ is non-abelian. Then $C_Q(N_1)\cong N_1\cong 3_+^{1+2}$. Since $N_1$ is $X$-invariant, $C_Q(N_1)$ is also $X$-invariant. Therefore, without loss of generality, we can assume that $N_2=C_Q(N_1)$ and so $N_2\leqslant C_S(Y)$ and $N_2$ is also non-abelian with $Y \nleq N_2$. Since $N_1 \nleq C_S(Y)$, $S=C_S(Y)N_1$ and so we have that $S'\leqslant [C_S(Y),N_1] C_S(Y)' N_1' \leqslant (C_S(Y) \cap N_1)Y Z=Y$ (using Lemma \ref{lemma structure of Y and W}), which is a contradiction since $S'>Y$. This proves $(iv)$. Finally, since each $N_i/Z$ is a natural $X/Q$-module, $X$ is transitive on the non-identity elements of $N_i/Z$. So let $Z \neq Zn \in N_i/Z$. Then $\<Z,n\>\vartriangleleft Q$ and $|C_Q(n)|=3^4$, so $n$ lies in a $Q$-orbit of length three inside the coset $Zn$. Hence every element in $Zn$ is conjugate to $n$ in $X$. Thus $X$ is transitive on $N_i \bs Z$, which completes the proof.
\end{proof} For the rest of this section we continue the notation from Lemma \ref{Lemma N1-N4}, with $N_1,N_2,N_3,N_4$ chosen such that $Y<N_1$ and satisfying the notation set in the following lemma also. \begin{lemma}\label{N_1, N_2 abelian, N_3, N_4 not} Without loss of generality we may assume that $N_1\cong N_2$ are elementary abelian and $N_3\cong N_4$ are extraspecial with $[N_3,N_4]=1$. \end{lemma} \begin{proof} By Lemma \ref{Lemma N1-N4}, $N_1$ is abelian. So suppose $N_i$ is non-abelian for some $i\in \{2,3,4\}$. Then $C_Q(N_i)\cong N_i\cong 3_+^{1+2}$ is $X$-invariant and we may assume $C_Q(N_i)=N_j$ for some $j\in \{2,3,4\}$ with $j \neq i$. Now it follows that either $N_i$ is abelian for every $i\in \{1,2,3,4\}$ or, without loss of generality, $N_1$ and $N_2$ are abelian whilst $N_3\cong N_4$ are non-abelian. So we assume for a contradiction that $N_2$, $N_3$ and $N_4$ are all abelian. Since $N_1/Z$ is isomorphic as a $\mathrm{GF}(3)X/Q$-module to $N_2/Z$, for any $m \in N_1\bs Z$ there is an $n \in N_2 \bs Z$ such that $Zn$ is the image of $Zm$ under a module isomorphism. It then follows (without loss of generality) that $Znm$ is an element of $N_3/Z$ and $Zn^2m$ is an element of $N_4/Z$. In particular, $x_1:=nm \in N_3$ and $x_2:=n^2m \in N_4$. Let $g \in X$ have order four; then $Qg^2=Qt$ inverts $Q/Z$ and so \begin{equation}\label{one}Zn^{g^2}=Zn^2 ~\mathrm{and}~ Zm^{g^2}=Zm^2.\end{equation} Also, if $Z\neq Za\in N_i/Z$ and $g$ and $h$ are elements of order four in $X$ such that $Q\<g\>\neq Q\<h\>$, then $N_i/Z=\<Za^g,Za^h\>$ and so $N_i=Z\<a^g,a^h\>$. So consider $[x_1,x_2^g]$. We calculate the following using commutator relations and using that all commutators are in $Z$ and therefore central.
\[\begin{array}{rclr} [x_1,x_2^g]&=&[nm,{(n^2)}^g m^g]&\;\\ \;&=&[n,m^g][m,m^g][n,{(n^2)}^g][m,{(n^2)}^g]&\;\\ \;&=&[n,m^g][m,{(n^2)}^g]& (\mathrm{since}~N_1~\mathrm{and}~N_2~ \mathrm{are~abelian})\\ \;&=&[n,m^g][m,{n}^g]^2&\;\\ \;&=&([n,m^g][m,{n}^g]^2)^g&(\mathrm{since~commutators~are~central~in~}X)\\ \;&=&[n^g,m^{g^2}][m^g,n^{g^2}]^2&\;\\ \;&=&[n^g,m^2][m^g,n^2]^2&(\mathrm{by~Equation~}\ref{one})\\ \;&=&[n^g,m]^2[m^g,n]&\;\\ \;&=&[{(n^2)}^g,m][m^g,n]&\;\\ \;&=&[{(n^2)}^g,m][m^g,m][{(n^2)}^g,n][m^g,n]&(\mathrm{since}~N_1~\mathrm{and}~N_2~ \mathrm{are~abelian})\\ \;&=&[{(n^2)}^g m^g,nm]&\;\\ \;&=&[x_2^g,x_1].&\;\\ \end{array}\]Thus $[x_1,x_2^g]=[x_1,x_2^g]\inv$ and so $[x_1,x_2^g]=1$. This holds for any element of order four in $X$. Thus $nm\in N_3$ commutes with $N_4=Z\<(n^2m)^g,(n^2m)^h\>$, where $g$ and $h$ are elements of order four as above. Furthermore, this argument works for any element of $N_3\bs Z$ and so $[N_3,N_4]=1$. However, as $N_3$ and $N_4$ are assumed abelian, this forces $Q=N_3N_4$ (which has order $3^3\cdot 3^3/3=3^5$) to be abelian, which is a contradiction. \end{proof} \begin{lemma}\label{HN-a new class of three} For $i\in \{3,4\}$, elements in $N_i\bs Z$ are not conjugate into $Z$. In particular, there are twelve elements of order three in $S'$ which are not $G$-conjugate into $Z$. \end{lemma} \begin{proof} Let $i \in \{3,4\}$ and let $a\in (N_i \cap J) \bs Z$; then $a \in Z_2$ and so $|C_S(a)|=3^5$. Suppose that $a \in Z^G$. Then we must have that $C_S(a)=O_3(C_G(\<a,Z\>))$ and we must similarly have that $\<a,Z\>=C_S(a)'$. Now $S=C_S(a)N_i$ and so $S' \leqslant C_S(a)' N_i' (C_S(a)\cap N_i)\leqslant \<a,Z\>$, which is a contradiction. Thus $a \notin Z^G$. Every element in $N_i \bs Z$ is conjugate to $a$ by Lemma \ref{Lemma N1-N4} $(v)$, and therefore no element in $N_i \bs Z$ is conjugate into $Z$. Furthermore, by Lemma \ref{Lemma N1-N4} $(iii)$, we see that $S'=J \cap Q$ contains twelve elements of order three which are not conjugate into $Z$. These are contained in $N_3 \cap J=N_3 \cap S'$ and $N_4 \cap J=N_4 \cap S'$.
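Indeed, each of $N_3\cap S'$ and $N_4 \cap S'$ contains $Z$ and has order nine, so together they contribute
\[
(9-3)+(9-3)=12
\]
elements of order three which are not conjugate into $Z$.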
\end{proof} \begin{lemma}\label{C-i's-orders and derived subgroups} \begin{enumerate}[$(i)$] \item Let $i \in \{1,2,3,4\}$ and set $S_i:=C_S(J \cap N_i)$; then $|S_i|=3^5$ and $|\mathbf{Z}(S_i)|=9$. \item $S_1'=\mathbf{Z}(S_1)=J\cap N_1=Y$, $S_2'=\mathbf{Z}(S_2)=J \cap N_2$, $S_3'=\mathbf{Z}(S_4)=J \cap N_4$ and $S_4'=\mathbf{Z}(S_3)=J \cap N_3$. \end{enumerate} In particular, $S_i \neq S_j$ for each $i \neq j$. \end{lemma} \begin{proof} By Lemma \ref{Lemma N1-N4}, $|J \cap N_i|=9$ for each $i\in\{1,2,3,4\}$ and, since $J$ is elementary abelian of order $81$, $J \leqslant S_i$. Hence $S_i\geqslant \<J,C_Q(J \cap N_i)\>$. Since $C_Q(J \cap N_i)$ has order $3^4$ and is non-abelian, $|\<J,C_Q(J \cap N_i)\>|\geqslant 3^5$. Moreover, since $|S|=3^6$ and $\mathbf{Z}(S)=Z$ has order three, it follows that $S_i=\<J,C_Q(J \cap N_i)\>$ has order $3^5$. Now, for each $i \in \{1,2,3,4\}$, $\mathbf{Z}(S_i)\leqslant Q$: otherwise $S=Q\mathbf{Z}(S_i)$ and then $[S_i \cap Q,S]\leqslant [S_i \cap Q,Q][S_i \cap Q,\mathbf{Z}(S_i)]\leqslant Z$, which implies that $S_i \cap Q\leqslant Z_2$, which is a contradiction. Thus $\mathbf{Z}(S_i)\leqslant Q$ has order nine and $\mathbf{Z}(S_i)=N_i \cap J$. Now, for $i \in \{1,2,3,4\}$, we have that $Z \leqslant S_i'$. If $S_i'=Z$ then $Q/Z$ and $S_i/Z$ are two distinct abelian subgroups of $S/Z$ of index three. This implies that $S/Z$ has centre of order at least $3^3$. However, by Lemma \ref{lemma structure of Y and W} $(i)$, $\mathbf{Z}(S/Z)$ has order nine. Thus $S_i'>Z$. Now, for $i=1$, by Lemma \ref{Lemma N1-N4}, $Y= N_1\cap J$ and so $\mathbf{Z}(S_1)=N_1 \cap J=Y$. Furthermore, for $i \in \{1,2\}$, $N_i$ is abelian and so $N_i \leqslant S_i$. Therefore $S_i' \leqslant S' \cap N_i= J\cap N_i$ since $N_i\vartriangleleft S_i$. For $\{i,j\} = \{3,4\}$, $[N_i,N_j]=1$ and so $N_j \leqslant S_i$. Therefore $S_i' \leqslant S' \cap N_j= J \cap N_j$ since $N_j\vartriangleleft S_i$.
\end{proof} \begin{lemma}\label{elements of order nine in S} Every element of order three in $S$ lies in the set $Q \cup S_1 \cup S_2$ and the cube of every element of order nine in $S$ is in $Z$. \end{lemma} \begin{proof} By hypothesis, $Q$ has exponent three and, by Lemma \ref{lemma structure of Y and W} $(iv)$, so does $J$. So let $ g\in S $ be such that $g \notin Q \cup J$. Then $g=cb$ for some $ c \in Q \bs J=Q \bs S'$ and some $b \in J \bs Q$. We calculate, using the equality $c[b,c][b,c,c]=[b,c]c$ and using that $b \in J$ and so commutes with all commutators in $S'\leqslant J$:\begin{eqnarray*} cbcbcb&=&c^2b[b,c]bcb \\ &=&c^2b^2[b,c]cb \\ &=&c^2b^2c[b,c][b,c,c]b \\ &=&c^2b^2cb[b,c][b,c,c] \\ &=&[c,b][b,c][b,c,c] \\ &=&[b,c,c]. \end{eqnarray*} Since $c \in Q\bs J=Q \bs S'$, $S'\<c\>$ is a proper subgroup of $Q$ properly containing $S'$. As $S'\cap N_i=J \cap N_i$ has order nine for each $i\in \{1,2,3,4\}$, $S'N_i$ has order $81$. Thus $S'\<c\>=S'N_i$ for some $i\in \{1,2,3,4\}$. If $S'\<c\>=S'N_1=C_Q(Y)$ then $cb\in C_S(Y)=W$ and $W$ has exponent three. Suppose $S'\<c\>=S'N_2$. Then $S_2=C_S(S' \cap N_2)=J\<c\>$ and $S_2'=S' \cap N_2$; therefore $[b,c]\in S' \cap N_2$ is central in $S'\<c\>=S'N_2$. Therefore $[b,c,c]=1$ and so $cb$ has order three. Now suppose $S'\<c\>=S'N_3$ (a similar argument holds if $S'\<c\>=S'N_4$). Then $S_4=C_S(S' \cap N_4)=J\<c\>$ and $[b,c] \in S_4'=S' \cap N_3$. Suppose $cbcbcb=[b,c,c]=1$. Then $[b,c]$ commutes with $J\<c\>=S_4$ and so $[b,c] \in S_4' \cap \mathbf{Z}(S_4)=Z$. Thus $S_4=J\<c\>=S'\<b,c\>$ and so $[S_4,S_4]=\<[S',c],[S',b],[c,b]\>$. However $[S',c]\leqslant Z$, $[S',b]=1$ and $[c,b]\in Z$, which is a contradiction since $[S_4,S_4]=N_3\cap S'>Z$. Thus $[b,c,c]\neq 1$ and $cb$ has order nine (no element can have order $27$ since $Q$ has exponent three). Furthermore, $(cb)^3=[b,c,c] \in [S' \cap N_3,c] \leqslant [Q,Q]=Z$ and so the cube of every such element of order nine is in $Z$.
\end{proof} \begin{lemma}\label{prelims-Centre of the C_i's} For each $i\in\{3,4\}$, if $a \in \mathbf{Z}(S_i) \bs Z$ then $\mathbf{Z}(S_i/\<a\>)=\mathbf{Z}(S_i)/\<a\>$. \end{lemma} \begin{proof} Let $\{i,j\}= \{3,4\}$; then, by Lemma \ref{C-i's-orders and derived subgroups}, we have that $S_i'=\mathbf{Z}(S_j)$ and $S_j'=\mathbf{Z}(S_i)$. So let $a \in \mathbf{Z}(S_i) \bs Z$ and suppose $\mathbf{Z}(S_i/\<a\>)>\mathbf{Z}(S_i)/\<a\>$. Let $V\leqslant S_i$ be such that $a \in V$ and $\mathbf{Z}(S_i/\<a\>)=V/\<a\>$; then $|V|\geqslant 3^3$. Therefore $S_i/V$ is abelian and so $S_i'\leqslant V$. Therefore $[S_i',S_i] \leqslant \<a\>$. However, $S_i$ normalizes $\mathbf{Z}(S_j)=S_i'$ and so $[S_i',S_i] \leqslant \<a\>\cap S_i'= \<a\>\cap \mathbf{Z}(S_j)=1$, since $\mathbf{Z}(S_i) \cap \mathbf{Z}(S_j)\leqslant N_i \cap N_j=Z$. However, this implies that $S_i' \leqslant \mathbf{Z}(S_i)$ and so $N_j \cap J\leqslant N_i \cap J$, which is a contradiction. Therefore $\mathbf{Z}(S_i/\<a\>)=\mathbf{Z}(S_i)/\<a\>$. \end{proof} We fix an element $a$ of order three in $Q$ such that $a \in (N_3 \cap J) \bs Z$, and therefore $a \notin Z^G$ by Lemma \ref{HN-a new class of three}. Let $3\mathcal{A}:=\{a^g|g \in G\}$ and $3\mathcal{B}:=\{z^g|g \in G\}$, where $Z=\<z\>$. We show in the rest of this section that these are the only conjugacy classes of elements of order three in $G$. \begin{lemma}\label{HN-prelims1} \begin{enumerate}[$(i)$] \item $|C_S(a)|=3^5$; \item $|a^{G} \cap Q|=|a^{C_G(Z)} \cap Q|=120$ and $|z^G \cap Q|=|z^{xC_G(Z)} \cap Q|+2=122$; and \item $Q^\# \subset 3\mathcal{A} \cup 3\mathcal{B}$. \end{enumerate} Furthermore, in Case II, $C_G(Z)/Q$ is isomorphic to the group of shape $2^{\cdot}\operatorname{Sym}(5)$ which has semi-dihedral Sylow $2$-subgroups, and in either case $\<s,C_G(Z)'\>/Q\sim 4^{\cdot}\operatorname{Alt}(5)$.
\end{lemma} \begin{proof} We have chosen $a \in N_3\cap J$ and so, by Lemma \ref{C-i's-orders and derived subgroups}, $C_S(a)=C_S(\<Z,a\>)=C_S(N_3 \cap J)=S_3$, which has order $3^5$. Now let $q \in Q\bs Z$ and consider $[{C_G(Z)}:C_{C_G(Z)}(q)]$. By Lemma \ref{HN-AnotherEasyLemma} $(ii)$, an element of order five acts fixed-point-freely on $Q/Z$, so we have that $5\mid [{C_G(Z)}:C_{C_G(Z)}(q)]$. Let $R$ be a Sylow $2$-subgroup of $C_{C_G(Z)}(q)$. Recall that $C_G(Z)/Q$ has subgroups isomorphic to $\mathrm{Q}(8)$ with $Qt$ in the centre. The preimage of any such subgroup in $C_G(Z)$ intersects $R$ trivially, as $Qt$ inverts $Q/Z$. So $8\mid [{C_G(Z)}:C_{C_G(Z)}(q)]$. Furthermore, $q$ is not $3$-central in $C_G(Z)$ and so $3\mid [{C_G(Z)}:C_{C_G(Z)}(q)]$. Therefore $[{C_G(Z)}:C_{C_G(Z)}(q)]$ is a multiple of $120$. Now $z^x \in Q \bs Z$ lies in a ${C_G(Z)}$-orbit in $Q$ of length at least $120$, and also there exists $a \in Q$ which is not conjugate to $z$ and lies in a ${C_G(Z)}$-orbit in $Q$ of length at least $120$. Since $a$ is not conjugate to $z^x$, these orbits are distinct; as $|Q\bs Z|=3^5-3=240$, each orbit has length exactly $120$. Thus $|a^{G} \cap Q|=|a^{C_G(Z)} \cap Q|=120$ and $|z^G \cap Q|=|z^{xC_G(Z)} \cap Q|+2=122$. We may now observe that in Case II, when $N_G(Z)/Q\sim 4^{\cdot}\operatorname{Sym}(5)$, every subgroup of index two must contain an involution centralizing $z^x \in Q\bs Z$. In particular, $t$ cannot be the unique involution in any such index two subgroup. It follows then that $C_G(Z)/Q$ is isomorphic to the group of shape $2^{\cdot}\operatorname{Sym}(5)$ which has semi-dihedral Sylow $2$-subgroups, as claimed. The final comment in the statement of this lemma is clear in Case I, so suppose we are in Case II. Recall that $Js$ lies in the centre of $N_G(J)/J$ and that $s$ centralizes $S/J$. We have that $\<s,C_G(Z)'\>/Q$ has shape $4^{\cdot}\operatorname{Alt}(5)$ or $2^{\cdot}\operatorname{Sym}(5)$.
Thus a Sylow $2$-subgroup of the normalizer of $S$ is isomorphic to $C_4 \times C_2$ or $\operatorname{Dih}(8)$ respectively. If $\<s,C_G(Z)'\>/Q \sim 2^{\cdot}\operatorname{Sym}(5)$ then a Sylow $2$-subgroup of the normalizer of $S$ must act faithfully on $S/J=QJ/J\cong Q/(Q \cap J)$ as the centre of the dihedral group is a conjugate of $t$ and so inverts $S/J$. Thus $s$ cannot lie in such a subgroup and we may conclude that $\<s,C_G(Z)'\>/Q\sim 4^{\cdot}\operatorname{Alt}(5)$. \end{proof} \begin{lemma}\label{HN-prelim-element of order four normalizing S} \begin{enumerate}[$(i)$] \item $|C_J(t)|=|C_S(t)|=3^2$ and $t$ inverts $S/J$. \item In Case I, we have that $|N_G(S) \cap C_G(Z)|=3^62^2$ and $|N_G(S)|=3^62^3$. \item In Case II, we have that $|N_G(S) \cap C_G(Z)|=3^62^3$ and $|N_G(S)|=3^62^4$. \item There exists an element of order four $e \in N_G(S) \cap C_G(Z)$ such that $e^2=t$ and $e$ does not normalize $Y$. \end{enumerate} \end{lemma} \begin{proof} We have that $C_Q(t)=Z$ and so $t$ inverts $Q/Z$. We also have that $t$ centralizes $S/Q=QJ/Q\cong J/(J \cap Q)$. Since $C_Q(t)=Z$, we see using coprime action and an isomorphism theorem that $C_{J/(J \cap Q)}(t)=C_J(t)(J \cap Q)/(J \cap Q)\cong C_J(t)/Z$ and so $|C_J(t)|=3^2$. We also see that $t$ inverts $Q/(Q \cap J)\cong QJ/J=S/J$ which proves $(i)$. Now, in Case I, the normalizer of a Sylow $3$-subgroup has order $2^23$ with a cyclic Sylow $2$-subgroup and in Case II, it has order $2^33$. Recall that $s\in N_G(Y)$ inverts $Z$ and normalizes $S \leqslant N_G(Y)$. Thus $(ii)$ and $(iii)$ follow immediately. Furthermore, in either case, we may choose an element of order four $e \in C_G(Z)$ that squares to $t$ and normalizes $S$. Suppose $e$ normalizes $Y$. Then $e^2=t$ centralizes $Y$ which is impossible. This completes the proof. \end{proof} \begin{lemma}\label{HN-prelims2} \begin{enumerate}[$(i)$] \item $J^\# \subseteq 3\mathcal{A} \cup 3\mathcal{B}$.
\item $N_2^\# \subseteq 3\mathcal{B}$, $|C_J(t) \cap 3\mathcal{A}|=|C_J(t) \cap 3\mathcal{B}|=4$ and $C_{C_S(Y)}(s)^\# \subseteq 3\mathcal{A}$. \item Every element of order three in $S$ is in the set $3\mathcal{A}\cup 3\mathcal{B}$. \item For every $q \in Q$ there exists $P \in \operatorname{Syl}_3(C_G(Z))$ such that $q \in J(P)$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ Since $N_G(Y)/C_G(Y)\cong \mathrm{GL}_2(3)$ and $J$ is characteristic in $C_G(Y)$ and inverted by $C_G(Y)s$, we have that $J/Y$ is a natural $N_G(Y)/C_G(Y)$-module. Hence there are four $N_G(Y)$-images of $S'$ in $J$ with pairwise intersection equal to $Y$. By Lemma \ref{HN-prelims1}, $Q^\#\subseteq 3\mathcal{A} \cup 3\mathcal{B}$. Therefore $[S,S]^\# \subseteq 3\mathcal{A} \cup 3\mathcal{B}$ which implies that $J\bs\{1\} \subseteq 3\mathcal{A} \cup 3\mathcal{B}$. $(ii)$ We have that for $i \in \{1,2,3,4\}$, by Lemma \ref{Lemma N1-N4} $(v)$, $X$ is transitive on $N_i\bs Z$ and so either $N_i\bs Z\subseteq 3\mathcal{A}$ or $N_i\bs Z \subseteq 3\mathcal{B}$. By Lemma \ref{HN-prelim-element of order four normalizing S} $(iv)$, there exists $e\in N_G(S)$ such that $Y^e\neq Y$. Since $e$ normalizes $S$, $e$ normalizes $Z_2=\<J \cap N_i \mid i \in \{1,2,3,4\}\>$. Therefore $Y^e=N_i \cap J$ for some $i \in \{2,3,4\}$. We have that $N_i\bs Z \subseteq 3\mathcal{A}$ for $i=3,4$ and so $Y^e=N_2 \cap J$. Thus $N_2^\# \subseteq 3 \mathcal{B}$. Notice now that $C_{Z_2}(st)$ has order $9$ and is a complement to $Z$ in $Z_2$. Therefore $|C_J(st) \cap 3\mathcal{A}|=|C_J(st) \cap 3\mathcal{B}|=4$. By Lemma \ref{lemma structure of Y and W} $(vi)$, $t$ is conjugate to $st$ by an element of $N_G(Y)\leqslant N_G(J)$ and so the same count holds for $t$.
Now there are five conjugates of $X$ in $C_G(Z)$ and therefore five images of $N_1$ and of $N_2$ in $C_G(Z)$ (since if $N_i$ was normal in two distinct conjugates of $X$ then $N_i$ would be normal in $C_G(Z)$). For each $i\in \{1,2\}$, $N_i\bs Z$ contains $24$ conjugates of $z$. Since $Q\bs Z$ contains $120$ conjugates of $z$, there exists $i \in \{1,2\}$ and $g \in C_G(Z)$ such that $Y\leqslant N_i^g\vartriangleleft X^g$ and $N_i^g\neq N_1$. Now consider $C_Q(Y)$ which is normalized by $s$ (as $s$ normalizes $Q$ and $Y$). By Lemma \ref{lemma structure of Y and W} $(iii)$, $C_{C_S(Y)}(s) \leqslant Q \cap Q^x$ has order three. Now there are four proper subgroups of $C_Q(Y)$ properly containing $Y$. These include $Q \cap Q^x$, $S'$, $N_1$ and $N_i^g$ (we cannot yet exclude the possibility that $Q \cap Q^x=N_1$ or $N_i^g$). We have that $s$ normalizes at least two of these subgroups, namely $S'$ and $Q \cap Q^x$, and $S'\neq Q \cap Q^x$ (since $S'=J \cap Q$ and using Lemma \ref{lemma structure of Y and W} $(i)$ and $(iv)$). Suppose that $s$ normalizes $N_1$ and $N_i^g$. If $s$ inverts $N_1$ then $N_1\leqslant [C_S(Y),s]=J$ which is a contradiction (as $|N_1 \cap J|=9$). Therefore $N_1=YC_{C_S(Y)}(s)=Q \cap Q^x$ and by the same argument $N_i^g =Q \cap Q^x$ which is a contradiction since $N_i^g \neq N_1$. Therefore at least one of $N_1$ and $N_i^g$ is not normalized by $s$. We assume that $N_1^s \neq N_1$ (and the same argument works if $N_i^{gs} \neq N_i^g$) and so the four proper subgroups of $C_Q(Y)$ properly containing $Y$ are $Q \cap Q^x$, $S'$, $N_1$ and $N_1^s$. Now consider $|C_Q(Y) \cap 3\mathcal{A}|$. Since $Q/N_1$ is a natural $X/Q$-module, there are four $X$-conjugates of $C_Q(Y)$ in $Q$ intersecting at $N_1$. Each must contain exactly $120/4=30$ conjugates of $a$. Thus $|C_Q(Y) \cap 3 \mathcal{A}|=30$. Clearly $N_1\cap 3 \mathcal{A}=N_1^s \cap 3\mathcal{A}=\emptyset$ and $|S'\cap 3 \mathcal{A}|=12$ by Lemma \ref{HN-a new class of three}.
Therefore we have $|Q \cap Q^x \cap 3\mathcal{A}|=18$. In particular this implies $C_{C_S(Y)}(s)^\# \subseteq 3 \mathcal{A}$. $(iii)$ By Lemma \ref{elements of order nine in S}, every element of order three in $S$ lies in $Q \cup C_S(N_1 \cap J) \cup C_S(N_2 \cap J)$ and the cube of every element of order nine is in $Z$. Recall that $N_1 \cap J=Y$ and since $N_2^\# \subseteq 3\mathcal{B}$ and $C_G(Z)$ is transitive on $(Q \cap 3\mathcal{B}) \bs Z$, $N_1 \cap J$ is conjugate in $C_G(Z)$ to $N_2 \cap J$. Therefore $S_2=C_S(N_2 \cap J)$ is conjugate to $C_S(Y)=S_1$. Now, by Lemma \ref{facts about W}, $C_S(Y)/(Q \cap Q^x)$ is a natural $\mathrm{SL}_2(3)$-module and so there are four $N_G(Y)$-conjugates of $C_Q(Y)$ in $C_S(Y)$ and this accounts for every element of $C_S(Y)$. Since $C_Q(Y)^\# \subseteq Q^\# \subseteq 3\mathcal{A} \cup 3 \mathcal{B}$, $C_S(Y)^\# \subseteq 3\mathcal{A} \cup 3 \mathcal{B}$ and therefore every element of order three in $S$ is in $3\mathcal{A} \cup 3 \mathcal{B}$. $(iv)$ Since $z^x,a \in J=J(S)$ and every element in $Q\bs Z$ is ${C_G(Z)}$-conjugate to one of these, every element in $Q$ lies in the Thompson subgroup of a Sylow $3$-subgroup of ${C_G(Z)}$. \end{proof} \begin{lemma}\label{HN-centralizer of a does not normalize J} $C_G(a)\nleq N_G(J)$. \end{lemma} \begin{proof} By Lemma \ref{HN-prelims2} $(ii)$ and $(iv)$, there exists $g \in Q \cap Q^x \cap 3\mathcal{A}$ and there exists $R\in \operatorname{Syl}_3(C_G(Z))$ such that $g \in J(R)$. The same lemma applied to $C_G(Z^x)$ says that there exists $P\in \operatorname{Syl}_3(C_G(Z^x))$ such that $g \in J(P)$. If $Q \cap Q^x \leqslant J(R)$ then $Y \leqslant J(R)\leqslant C_G(Y)$. Hence $J(R)=J({C_G(Y)})=J$ (see Lemma \ref{lemma structure of Y and W} $(v)$) however $Q \cap Q^x \nleq J$ (by Lemma \ref{lemma structure of Y and W} $(iii)$ since $1 \neq C_{C_G(Y)}(s)\leqslant Q \cap Q^x$ but $J$ is inverted by $s$).
Therefore $C_R(g)=J(R)(Q \cap Q^x)$ and similarly $C_P(g)=J(P)(Q \cap Q^x)$. Suppose $J(P)=J(R)$. Then $J(R)$ is normalized by $\<Q,Q^x\>=L$. However $O_3(L)={C_S(Y)}$ and so $g \in J(R)=J({C_S(Y)})=J$ which is a contradiction and so $J(P)\neq J(R)$. This implies that $C_G(g)$ has two distinct Sylow $3$-subgroups with distinct Thompson subgroups. Since $a$ is conjugate to $g$, it follows that $C_G(a) \nleq N_G(J)$. \end{proof} \begin{lemma}\label{HN-element of order four in CG(Z)} Let $ A \in \operatorname{Syl}_2(C_G(Z))$ such that $t \in A$ and suppose that $f \in A$ such that $f^2=t$. Then $Z \in \operatorname{Syl}_3(C_G(f))\cap \operatorname{Syl}_3(C_G(A))$. \end{lemma} \begin{proof} We have that $C_Q(A)\leqslant C_Q(f)=Z$ since $f^2=t$ and $C_Q(t)=Z$. In either case for the structure of $C_G(Z)$, we have that every element of order four in $A$ (which is isomorphic to either $\mathrm{SD}_{16}$ or $\mathrm{Q}(8)$) lies in the subgroup of $A$ isomorphic to $\mathrm{Q}(8)$ which in turn lies in $O^2(C_G(Z))$. Thus it follows from the structure of $2^{\cdot}\operatorname{Alt}(5)$ that no element of order three in $C_G(Z)/Q$ centralizes $Qf$. Therefore, by coprime action, we have that $Z \in \operatorname{Syl}_3(C_G(f))\cap \operatorname{Syl}_3(C_G(A))$. \end{proof} \begin{lemma}\label{HN-normalizer of J1} We have that $[N_G(J):C_{N_G(J)}(a)]=48$ and $[N_G(J):C_{N_G(J)}(Z)]=32$. Furthermore, $|N_G(J)|=3^62^7$ or $3^62^8$ in Case I and II respectively. \end{lemma} \begin{proof} Since $J/Y$ is a natural ${N_G(Y)}/{C_G(Y)}$-module, $J$ contains four $N_G(Y)$-conjugates of $S'$ with pairwise intersection $Y$. By Lemma \ref{Lemma N1-N4}, $S'=\<N_i \cap S'\mid 1 \leqslant i \leqslant 4\>$. Since the conjugates of $z$ in $S'$ lie in $(N_1 \cap S') \cup (N_2 \cap S')$, $|S' \cap 3\mathcal{B}|=8+6=14$ and so $|J\cap 3\mathcal{B}|=8+(4\times 6)=32$.
Therefore, by Lemma \ref{conjugation in thompson subgroup}, $[N_G(J):C_{N_G(J)}(z)]=32$. Now by Lemma \ref{HN-prelim-element of order four normalizing S}, $|C_{N_G(S)}(z)|=3^62^2$ in Case I and $|C_{N_G(S)}(z)|=3^62^3$ in Case II. Note that $C_{N_G(S)}(z)\leqslant C_{N_G(J)}(z)\leqslant C_G(z) \cap N_G(QJ)=C_{N_G(S)}(z)$ and so $C_{N_G(S)}(z)= C_{N_G(J)}(z)$. Thus $|N_G(J)|=3^62^7$ or $3^62^8$ respectively. Since $J^\# \subseteq 3\mathcal{A} \cup 3\mathcal{B}$, $|J \cap 3\mathcal{A}|=48$ and so $[N_G(J):C_{N_G(J)}(a)]=48$. \end{proof} \begin{lemma}\label{HN-J is self-centralizing} If $r$ is any involution in $C_G(Z) \bs Qt$, then $C_Q(r)\cong [Q,r] \cong 3_+^{1+2}$. Furthermore, we have that $C_G(J)=J$ and $J$ normalizes no non-trivial $3'$-subgroup of $G$. \end{lemma} \begin{proof} We may assume that $\<t,r\> \leqslant C_G(Z)$ is elementary abelian of order four. By coprime action, $Q=\<C_Q(t),C_Q(r),C_Q(tr)\>$. It follows from the fact that $C_Q(t)=Z$ that $C_Q(r),C_Q(tr)>Z$. By the three subgroup lemma we have that $[[Q,r],C_Q(r)]=1$ and it therefore follows that $C_Q(r)\cong [Q,r]\cong 3_+^{1+2}$ as claimed. So now suppose that an involution $r$ in $C_G(Z)$ centralizes $J$; then $Z_2=C_Q(r)$ is elementary abelian, which is a contradiction, and it follows that $C_G(J)=J$. If $N$ is a $3'$-subgroup of $G$ normalized by $J$ then by coprime action, $N=\<C_N(y): y \in Y^\#\>$. Since $Y \leqslant O_3(C_G(y))$ for each $y \in Y^\#$, we have that $[C_N(y),Y]\leqslant C_N(y) \cap O_3(C_G(y))=1$. Thus $N \leqslant C_G(Y)$ and in particular, $N$ normalizes $J(O_3(C_G(Y)))=J(W)=J$. Thus $[J,N] \leqslant N \cap J=1$. We thus have that $N=1$. \end{proof} Recall that a group $H$ is said to be $3$-soluble of length one if $H/O_{3'}(H)$ has a normal Sylow $3$-subgroup, which is to say that $H=O_{3',3,3'}(H)$.
\begin{lemma}\label{HN-normalier of J-order of the O2} We have that $O_2({N_G(Y)}/J) \leqslant O_2(N_G(J)/J) \cong 2_+^{1+4}$ and $N_G(J)/J$ is $3$-soluble of length one. \end{lemma} \begin{proof} Set $K:=N_G(J)$ and $\overline{K}=K/J$. Then $\overline{K}$ has order $3^22^7$ or $3^22^8$ and $\overline{S}\in \operatorname{Syl}_3(\overline{K})$. Clearly $O_3(K)\leqslant S$ so recall Lemma \ref{C-i's-orders and derived subgroups}. If $O_3(K)>J$ then it is clear from the order that $O_3(K)\neq S$ and so we must have $O_3(K)=S_i$ for some $i\in \{1,2,3,4\}$. Therefore $K$ normalizes $\mathbf{Z}(S_i)$. However, this leads to a contradiction since $K$ is transitive on $J \cap 3\mathcal{A}$ and on $J \cap 3\mathcal{B}$ and so we have that $O_3(K)=J$. By Burnside's $p^{\alpha}q^{\beta}$-Theorem \cite[4.3.3, p131]{Gorenstein}, $\overline{K}$ is solvable. Let $N$ be a subgroup of $K$ such that $J\leqslant N$ and $\overline{N}=O_2(\overline{K})$. Then $\overline{N} \neq 1$ since $\overline{K}$ is solvable and $O_3(\overline{K})=1$. Recall that $s$ inverts $J$ and so $\overline{s} \in \mathbf{Z}(\overline{K})$, in particular, $\overline{s} \in \overline{N}$. Moreover $\overline{N}$ is the Fitting subgroup of $\overline{K}$, $F(\overline{K})$, and so by \cite[6.5.8]{stellmacher}, $C_{\overline{K}}(\overline{N})\leqslant \overline{N}$. If any element in $\overline{S}$ centralizes $\overline{N}/\Phi(\overline{N})$ then by a theorem of Burnside \cite[5.1.4, p174]{Gorenstein}, such an element centralizes $\overline{N}$ and so is the identity. Therefore $\overline{S}$ acts faithfully on $\overline{N}/\Phi(\overline{N})$ and so by calculating the order of a Sylow $3$-subgroup in $\mathrm{GL}_n(2)$ for $n=1,2,3$ we see that $|\overline{N}/\Phi(\overline{N})|\geqslant 2^4$.
Moreover, since $\overline{s}$ is central in $\overline{K}$, we have that $\overline{s}\in \Phi(\overline{N})$ and so $|\overline{N}| \geqslant 2^5$. We use Lemma \ref{HN-prelim-element of order four normalizing S} $(iv)$ to find $e\in N_G(S)$ such that $e^2=t$ and $e$ does not normalize $Y$. Since $t$ inverts $\overline{S}$, by Lemma \ref{HN-prelim-element of order four normalizing S} $(i)$, $\<\overline{e}\>\cap \overline{N}=1$. So, when $|\overline{K}|=3^22^7$ we have that $|\overline{N}|\leqslant 2^5$ and so $|\overline{N}|=2^5$ and then $\overline{K}=\overline{N}\,\overline{S}\<\overline{e}\>$ is $3$-soluble of length one. So suppose instead that $|\overline{K}|=3^22^8$ and let $P \in \operatorname{Syl}_2(N_{C_G(Z)}(S))$ then $|P|=2^3$. We have seen that a subgroup of order four in $P$ intersects trivially with $N$ and so $|P\cap N| \leqslant 2$. Suppose for a contradiction that $R:=P \cap N$ has order two. We have that $[S/J, RJ/J]\leqslant S/J \cap N/J=1$ and so $R$ acts trivially on $S/J\cong Q/(Q \cap J)$. Clearly $QR/Q\neq Q\<t\>/Q=\mathbf{Z}(C_G(Z)/Q)$ and so by Lemma \ref{HN-J is self-centralizing}, $C_Q(R)\cong [Q,R]\cong 3_+^{1+2}$. However we have seen that $R$ centralizes $Q/Z_2$ and so $Z_2=[Q,R]$ which is a contradiction. Thus we again have that $|\overline{N}|=2^5$ and $\overline{K}=\overline{N}\,\overline{S}\,\overline{P}$ is $3$-soluble of length one. Recall that $L=\<Q,Q^x\>\leqslant N_G(Y)$ and $W=O_3(L)=C_L(Y)$ and $L/W\cong \mathrm{SL}_2(3)$. It follows then that $L/J \cong 3 \times \mathrm{SL}_2(3)$ and so there exists a group $J \leqslant A\leqslant K$ such that $\overline{A} \cong \mathrm{Q}(8)$ is normalized by $\overline{S}$ and thus necessarily contained in $\overline{N}$. Recall that we have $e\in N_G(S)\leqslant K$ such that $e^2=t$ and $e$ does not normalize $Y$.
If $\overline{A}^{\overline{e}}=\overline{A}$ then $\overline{L}^{\overline{e}}=\overline{L}$ and it would follow that $Y^e=Y$. Thus if we set $\overline{B}=\overline{A}^{\overline{e}}$ then we may apply Lemma \ref{prelims-extraspecial 2^5 in GL(4,3)} to see that $\overline{N}\cong 2_+^{1+4}$ which completes the proof. \end{proof} \begin{lemma}\label{HN-normalizer of J2} $C_S(s)=\<\alpha_1,\alpha_2\>\cong 3 \times 3$ where $\alpha_1,\alpha_2 \in 3\mathcal{A}$ and there exist $\<\alpha_1,\alpha_2\>$-invariant subgroups $\mathrm{Q}(8) \cong X_i\leqslant C_G(s) \cap C_G(\alpha_i)$ for $i \in\{1,2\}$ such that $s \in X_i$ and $[X_1,X_2]=1$. \end{lemma} \begin{proof} Consider $D:=C_{N_G(J)}(s)\cong C_{N_G(J)}(s)J/J=C_{N_G(J)/J}(s)=N_G(J)/J$. This is a group of order $2^73^2$ or $2^83^2$ in which $O_2(D)\cong 2_+^{1+4}$. Since $s$ centralizes $S/J\cong Q/(Q\cap J)$ (which is elementary abelian), we see that $P:=C_S(s)$ is elementary abelian of order nine. By Lemma \ref{HN-prelims2} $(ii)$, $C_{C_S(Y)}(s)^\# \subseteq 3 \mathcal{A}$ so let $\<\alpha_1\>:=C_{C_S(Y)}(s)\leqslant P$ and by Lemma \ref{lemma structure of Y and W} $(iii)$, $\<\alpha_1\>\lhd C_L(s)\cong 3.\mathrm{SL}_2(3)$. This extension is split and thus a direct product by Gasch\"{u}tz's Theorem as $P$ is elementary abelian. Thus $\alpha_1$ commutes with a group $X_1\leqslant C_L(s)$ isomorphic to $\mathrm{Q}(8)$ which is normalized by $P$. Recall that using Lemma \ref{HN-prelim-element of order four normalizing S} $(iv)$ there is an element of order four $e\in C_G(Z)$ which normalizes $S$ (and therefore $J$ and $J\<s\>$) but not $Y$ and so ${C_S(Y)} \neq {C_S(Y)}^e$ and $J\leqslant C_S(Y^e)\leqslant S$. Since $s$ centralizes $S/J$, we have $\alpha_1^e=:\alpha_2\in C_{{C_S(Y^e)}}(s)$ and $P=\<\alpha_1,\alpha_2\>$. Moreover, $\<\alpha_2\> \lhd C_{L^e}(s)\cong 3 \times \mathrm{SL}_2(3)$.
Let $X_2\leqslant C_{L^e}(s)$ be a subgroup isomorphic to $\mathrm{Q}(8)$ which is normalized by $P$. Now $D$ is $3$-soluble of length one, and since for $i \in \{1,2\}$, $\<X_i,P\>\cong 3 \times \mathrm{SL}_2(3)$, it follows that $X_i\leqslant O_2(D)\cong 2_+^{1+4}$. Since $2_+^{1+4}$ contains just two subgroups isomorphic to $\mathrm{Q}(8)$, we have that $X_1X_2\cong 2_+^{1+4}$ and $[X_1,X_2]=1$. \end{proof} \begin{lemma} \begin{enumerate}[$(i)$] \item In Case I, $C_G(a)\cong 3 \times \operatorname{Alt}(9)$ and $N_G(\<a\>)$ is isomorphic to the diagonal subgroup of index two in $\operatorname{Sym}(3)\times \operatorname{Sym}(9)$. \item In Case II, $C_G(a)\cong 3 \times \operatorname{Sym}(9)$ and $N_G(\<a\>)\cong\operatorname{Sym}(3)\times \operatorname{Sym}(9)$. \end{enumerate} \end{lemma} \begin{proof} We will apply Theorem \ref{princeSym9} in Case I to $N_G(\<a\>)/\<a\>$ and in Case II to $C_G(\<a\>)/\<a\>$ to see that each is isomorphic to $\operatorname{Sym}(9)$. Let $N_a:=N_G(\<a\>)$, $C_a:=C_G(\<a\>)$, $S_a:=C_S(a)\in \operatorname{Syl}_3(N_a)$ and $\overline{N_a}:=N_a/\<a\>$. We first restrict ourselves to Case I. Consider $C_{\overline{N_a}}(\overline{Z})$. If $g \in N_a$ and $\overline{Z}^{\overline{g}}=\overline{Z}$, then $[Z,g]=1$. This is clear since $\<Z,a\> \cap 3\mathcal{B}\subset Z$. Recall that $t$ inverts $Q/Z$ and so by swapping $a$ with some appropriate conjugate from $\<Z,a\>$, we may assume that $t$ inverts $a$. By Lemma \ref{HN-prelims1}, $|Q \cap 3\mathcal{A}|=120$ and $C_G(Z)$ is transitive on this set. We have therefore that $[C_G(Z):C_{C_G(Z)}(a)]=120$. Hence $|C_G(Z) \cap N_a|=3^52$. Therefore $C_{\overline{N_a}}(\overline{Z})$ has order $3^42$ with Sylow $3$-subgroup $\overline{S_a}$ and Sylow $2$-subgroup $\overline{\<t\>}$.
Now $\overline{J}\vartriangleleft C_{\overline{N_a}}(\overline{Z})$ and $\overline{J}$ is elementary abelian of order $27$. Consider $\mathbf{Z}(\overline{S_a})$. Since $a \in N_3 \cap J$, we may apply Lemma \ref{prelims-Centre of the C_i's} to say that $\mathbf{Z}(\overline{S_a})=\overline{\mathbf{Z}(S_a)}=\overline{Z}$ which has order three. Clearly $\overline{t}$ commutes with $\overline{Z}$ but not with $\overline{S_a}$. Thus we may apply Lemma \ref{Prelims-sym9} to see that $C_{\overline{N_a}}(\overline{Z})\cong C_{\operatorname{Sym}(9)}(\hspace{0.5mm} (1,2,3)(4,5,6)(7,8,9) \hspace{0.5mm} )$. So it remains to show that $\overline{J}$ normalizes no non-trivial $3'$-subgroup of $\overline{N_a}$. However this follows from Lemma \ref{HN-J is self-centralizing}. Hence we may apply Theorem \ref{princeSym9} to see that either $N_a\leqslant N_G(J)$ or $\overline{N_a} \cong \operatorname{Sym}(9)$. Thus we use Lemma \ref{HN-centralizer of a does not normalize J} to see that $\overline{N_a} \cong \operatorname{Sym}(9)$. It follows of course that $C_G(a)/\<a\>\cong \operatorname{Alt}(9)$ and using \cite[33.15, p170]{Aschbacher}, for example, we see that the Schur multiplier of $\operatorname{Alt}(9)$ has order two. Therefore $C_G(a)$ splits over $\<a\>$ and so $C_G(a)\cong 3 \times \operatorname{Alt}(9)$. To see the structure of the normalizer we need only observe that the involution $s$ inverts $J$ and therefore inverts $a$ whilst acting non-trivially on $O^3(C_G(a))$. Therefore since $\operatorname{Aut}(\operatorname{Alt}(9))\cong \operatorname{Sym}(9)$, we see that $N_G(\<a\>)$ is isomorphic to the diagonal subgroup of index two in $\operatorname{Sym}(3)\times \operatorname{Sym}(9)$. Now in Case II, we consider $C_{\overline{C_a}}(\overline{Z})$.
Arguing as before, if $g \in C_a$ and $\overline{Z}^{\overline{g}}=\overline{Z}$, then $[Z,g]=1$. In this case, we conclude from $[C_G(Z):C_{C_G(Z)}(a)]=120$ that $|C_G(Z) \cap C_a|=3^52$ and so $C_{\overline{C_a}}(\overline{Z})$ has order $3^42$ with Sylow $3$-subgroup $\overline{S_a}$ and a Sylow $2$-subgroup $\overline{\<r\>}$ say, where $r$ is an involution in $C_G(Z)$. If $[\overline{r},\overline{S_a}]=1$ then $[r,S_a]=1$. By Lemma \ref{HN-J is self-centralizing}, $C_Q(r)\cong 3_+^{1+2}$; however, $Q \cap J \leqslant Q \cap S_a\leqslant C_Q(r)$ gives us a contradiction. Thus $[\overline{r},\overline{S_a}]\neq 1$ and we may again apply Lemma \ref{Prelims-sym9} to see that $C_{\overline{C_a}}(\overline{Z})\cong C_{\operatorname{Sym}(9)}(\hspace{0.5mm} (1,2,3)(4,5,6)(7,8,9) \hspace{0.5mm} )$. Of course we again have that $\overline{J}$ normalizes no non-trivial $3'$-subgroup of $\overline{C_a}$ so we may apply Theorem \ref{princeSym9} to see that $\overline{C_a} \cong \operatorname{Sym}(9)$. It follows then that $C_G(a)\cong 3 \times \operatorname{Sym}(9)$ and $N_G(\<a\>)\cong \operatorname{Sym}(3)\times \operatorname{Sym}(9)$. \end{proof} \section{The Structure of the Centralizer of $t$}\label{HN-Section-CG(t)} We now have sufficient information concerning the $3$-local structure of $G$ to determine the centralizer of $t$ and to show that in one case $G$ has a non-trivial $2$-quotient. We set $H:=C_G(t)$, $P:=C_J(t)$ and $\overline{H}:=H/\<t\>$. We will show that $H$ has shape $2_+^{1+8}.(\operatorname{Alt}(5) \wr 2)$ (possibly extended by a cyclic group of order two) and so we must first show that $H$ has an extraspecial subgroup of order $2^9$. We then show that $H$ has a subgroup, $K$, of the required shape and then finally we apply a theorem of Goldschmidt to prove that $K=H$.
Along the way we gather several results which will be useful in Section \ref{HN-Section-CG(u)}. \begin{lemma}\label{HN-conjugates in P} In Case I, $C_H(Z)\cong 3 \times 2^{\cdot}\operatorname{Alt}(5)$ and $N_H(Z) \cong 3 : 4^{\cdot}\operatorname{Alt}(5)$ and $C_H(P)=P\<t\>$. In Case II, $C_H(Z) \cong 3 \times 2^{\cdot}\operatorname{Sym}(5)$ and $N_H(Z) \cong 3 : 4^{\cdot}\operatorname{Sym}(5)$ and $C_H(P)\cong 3^2 \times 2^2$. Furthermore, $|P \cap 3\mathcal{A}|=|P \cap 3\mathcal{B}|=4$ and $P \in \operatorname{Syl}_3(H)$. \end{lemma} \begin{proof} By coprime action and an isomorphism theorem, we have that $C_{C_G(Z)/Q}(t)\cong C_{C_G(Z)}(t)/C_Q(t)$ and $C_{N_G(Z)/Q}(t)\cong C_{N_G(Z)}(t)/C_Q(t)$. By Lemma \ref{HN-prelim-element of order four normalizing S}, $|P|=9$ and since $P\leqslant J$ is elementary abelian, $P$ splits over $Z$. Thus $C_{C_G(Z)}(t)$ splits over $Z$ by Gasch\"{u}tz's Theorem and so $C_{H}(Z)$ and $N_H(Z)$ are as claimed. Lemma \ref{HN-prelims2} gives us that $|P \cap 3\mathcal{A}|=|P \cap 3\mathcal{B}|=4$ and then it is immediate that $N_H(P)\leqslant N_H(Z)$ and so $P\in \operatorname{Syl}_3(H)$. \end{proof} We fix notation such that $P=\{1,z_1,z_1^2,z_2,z_2^2,a_1,a_1^2, a_2,a_2^2\}$ where $z_1 \in Z$, $P \cap 3\mathcal{B}=\{z_1,z_1^2,z_2,z_2^2\}$ and $P \cap 3\mathcal{A}=\{a_1,a_1^2, a_2,a_2^2\}$. \begin{lemma}\label{HN-Normalizer H of P} Let $\{i,j\}=\{1,2\}$ then $P \cap O^3(C_G(a_i))=\<a_j\>$ and $N_H(P)/C_H(P)\cong \mathrm{Dih}(8)$ acts transitively on $3\mathcal{A}\cap P$ and $3\mathcal{B}\cap P$. \end{lemma} \begin{proof} By Lemma \ref{HN-conjugates in P}, $|P\cap 3 \mathcal{A}|=|P\cap 3 \mathcal{B}|=4$ and so it is clear that $N_H(P)/C_H(P)$ is isomorphic to a subgroup of $\mathrm{Dih}(8)$. Observe that every element of order three in $O^3(C_G(a_i))\cong \operatorname{Alt}(9)$ or $\operatorname{Sym}(9)$ is conjugate to its inverse.
Therefore an element in $O^3(C_G(a_i))$ inverts $P \cap O^3(C_G(a_i))$ and so we must have that element inverting $a_j$ and permuting $\<z_1\>$ and $\<z_2\>$. Thus $P \cap O^3(C_G(a_i))=\<a_j\>$. Furthermore an element of order four in $N_H(Z)$ inverts $Z$ whilst centralizing $P/Z$. Hence an element in $N_H(Z)$ permutes $\<a_1\>$ and $\<a_2\>$. We have that $s$ inverts $P$ and so we have that $N_H(P)$ is transitive on $3\mathcal{A}\cap P$ and $3\mathcal{B}\cap P$. \end{proof} \begin{lemma}\label{HN-images in alt9} Let $\{i,j\}=\{1,2\}$. \begin{enumerate}[$(i)$] \item The images in $O^3(C_G(a_i))$ of elements of cycle type $3$ and $3^2$ are in $3\mathcal{A}$ and those of cycle type $3^3$ are in $3\mathcal{B}$. In particular, $a_j \in P \cap O^3(C_G(a_i))$ has cycle type $3^2$. \item $t$ has cycle type $2^4$ and is not $G$-conjugate to involutions of cycle type $2^2$ in $O^3(C_G(a_i))$. \item In Case II when $O^3(C_G(a_i))\cong \operatorname{Sym}(9)$, even involutions are not $G$-conjugate to odd involutions. \end{enumerate} \end{lemma} \begin{proof} We have that $C_G(a_i)\cong 3 \times \operatorname{Alt}(9)$ or $3 \times \operatorname{Sym}(9)$ and so $|P\cap O^3(C_G(a_i))|=3$. Consider representatives for the three conjugacy classes of elements of order three in $\operatorname{Alt}(9)$. An element of cycle type $3$ must clearly be in $3\mathcal{A}$ and an element of cycle type $3^3$ is the cube of an element of order nine and so by Lemma \ref{elements of order nine in S} must be in $3\mathcal{B}$. Consider now the image of $P \cap O^3(C_G(a_i))$ (which we have seen is equal to $\<a_j\>$) in $\operatorname{Alt}(9)$. If it is conjugate to $\<(1,2,3)\>$ then $P$ commutes with a subgroup isomorphic to $3\times 3 \times \operatorname{Alt}(6)$. However $z \in P$ and $C_G(z)$ has no such subgroup.
So, since elements of cycle type $3^3$ are in $3\mathcal{B}$, we must have that the image in $\operatorname{Alt}(9)$ of $P \cap O^3(C_G(a_i))$ is conjugate to $\<(1,2,3)(4,5,6)\>$. We see easily that the image of $t$ is an even permutation since when $O^3(C_G(a_i))\cong \operatorname{Sym}(9)$, the odd involutions commute with a group of order nine and so in $C_G(a_i)$ they centralize a group of order $27$. Let $r$ be an involution of cycle type $2^2$. Then $r$ commutes with a single $3$-cycle, $x$ say, in $O^3(C_G(a_i))$. Thus $r \in C_G(\<a_i,x\>)\cong 3^2 \times \operatorname{Alt}(6)$ or $3^2\times \operatorname{Sym}(6)$. Clearly then $\<a_i,x\>^\# \subset 3\mathcal{A}$ and so $r$ commutes with no conjugate of $Z$. Thus $r$ is not $G$-conjugate to $t$. Now suppose we are in Case II and so $O^3(C_G(a_i))\cong \operatorname{Sym}(9)$. Continue to let $r \in C_G(\<a_i,x\>)\cong 3^2 \times \operatorname{Sym}(6)$ have cycle type $2^2$. There is an element of order four $f$ in $C_G(\<a_i,x\>)$ which squares to $r$. Hence for every $y \in \<a_i,x\>$, $f \in C_G(y)$ and so the image of $r$ in $O^3(C_G(y))$ is an even involution and thus of type $2^2$. Thus $C_G(r) \cap C_G(y)$ has Sylow $3$-subgroups of order nine. It follows that $C_G(r)$ has Sylow $3$-subgroups of order nine. Now any odd involution in $O^3(C_G(a_i))$ commutes with a group of order $27$ and so $r$ is not $G$-conjugate to any odd involution in $O^3(C_G(a_i))$. \end{proof} Set $2\mathcal{A}$ to be all $G$-conjugates of an involution whose image in $O^3(C_G(a_1))$ has cycle type $2^2$ and $2\mathcal{B}=\{t^g\mid g \in G\}$. We now introduce some further notation by first fixing an injective homomorphism from $N_G(\<a_i\>)$ into $\operatorname{Sym}(12)$ such that $O^3(C_G(a_i))$ maps into $\operatorname{Sym}(\{1,\ldots,9\})$ and $a_i$ maps to $(10,11,12)$ ($\{i,j\}=\{1,2\}$). We define subgroups and elements of $G$ by their image.
\begin{notation1}\label{HN-Alt9notation} Let $\{i,j\}=\{1,2\}$. \begin{enumerate}[$\bullet$] \item $a_i\mapsto (10,11,12)$. \item $a_j \mapsto (1,3,5)(2,4,6)$. \item $t \mapsto (1,2)(3,4)(5,6)(7,8)$. \item $Q_i \mapsto \<(1,2)(3,4)(5,6)(7,8), (1,3)(2,4)(5,8)(6,7), (1,5)(3,8)(2,6)(7,4), (1,2)(3,4), (3,4)(5,6)\>$. \item $r_i \mapsto (1,3)(2,4)$. \item When $i=1$, $Q_1>E\mapsto \<(1,2)(3,4)(5,6)(7,8),(1,3)(2,4)(5,8)(6,7),(1,5)(3,8)(2,6)(7,4)\>$. \item When $i=2$, $Q_2\ni u\mapsto (1,2)(3,4)$ and $Q_2>F\mapsto \<(1,2)(3,4),(3,4)(5,6)\>$. \end{enumerate} \end{notation1} We observe the following by calculating directly in the image of $N_G(\<a_i\>)$ in $\operatorname{Sym}(12)$. \begin{lemma}\label{HN-alt9 observations} \begin{enumerate}[$(i)$] \item In Case I, $C_H(a_i)\sim 3 \times (2_+^{1+4}:\operatorname{Sym}(3))$. In Case II, $C_H(a_i)\sim 3 \times (2_+^{1+4}:(2 \times\operatorname{Sym}(3)))$. In either case, $Q_i\cong 2_+^{1+4}$ is normal in $C_H(a_i)$ with $r_i \in C_H(a_i)\bs Q_i$. \item $2 \times 2 \times 2\cong E\vartriangleleft H \cap O^2(C_G(a_1))$ and there exists $\mathrm{GL}_3(2)\cong C\leqslant C_G(a_1)$ such that $a_2 \in C$ and $C$ is a complement to $C_{C_G(a_1)}(E)$ in $N_{C_G(a_1)}(E)$. Furthermore, $N_G(E) \cap C_G(P)=\<t,P\>$. \item If $\<t\> <V <Q_i$ such that $V \vartriangleleft C_H(a_i)$ then $V$ is elementary abelian. \item $C_{C_H(a_i)}(Q_i)=\<t,a_i\>$. \item $C_{C_H(a_1)}(E)=\<E,a_1\>$. \item $s,t,st \in 2 \mathcal{B}$. \end{enumerate} \end{lemma} \begin{proof} These can mostly be checked by direct calculation in the permutation group. To prove $(vi)$ we observe that having fixed the image of $a_j$ we see that the image of $J\cap O^3(C_G(a_i))$ is $\<(1,3,5),(2,4,6),(7,8,9)\>$ and since $s$ inverts $J$ we may assume the image of $s$ is $C_G(a_i)$-conjugate to $(1,3)(2,4)(7,8)(10,11)$. Thus it becomes clear that $s$ is conjugate to $st$.
Now by Lemma \ref{lemma structure of Y and W} $(vi)$, $t$ is conjugate to $st$ and therefore to $s$ also. \end{proof} For the sake of simplifying language in the following lemma, we define the following for $g \in G$: ${\reflectbox{\rm{N}}_P}(g):=\{M\leqslant C_H(g) \mid (|M|,3)=1,~[M,P]\leqslant M,~ C_M(P)\leqslant \<t\> \}$. \begin{lemma}\label{HN-3'-subgroups of centralizers} Let $\{i,j\}=\{1,2\}$. Then $\reflectbox{\rm{N}}_P(z_i)=\{1,\<t\>, A_i,B_i\}$ where $A_i\cong B_i\cong \mathrm{Q}(8)$ are distinct $2$-subgroups of $C_G(z_i)$ with $z_j \in \<A_i,B_i\>=C_H(z_i)'\cong 2^{\cdot}\operatorname{Alt}(5)$. Meanwhile, $M \in \reflectbox{\rm{N}}_P(a_i)$ only if $M\leqslant Q_i$. \end{lemma} \begin{proof} We have that $C_H(a_i)\sim 3 \times 2_+^{1+4}:\operatorname{Sym}(3)$ or $3 \times 2_+^{1+4}:(2 \times \operatorname{Sym}(3))$ which are subgroups of $3 \times \operatorname{Alt}(9)$ and $3\times \operatorname{Sym}(9)$ respectively. It is clear in the first case that if $M$ is any normal $3'$-subgroup of $C_H(a_i)$ then $M\leqslant Q_i$. In the second case we must check within our permutation group that any such $M$ with $C_M(P)\leqslant \<t\>$ must satisfy $M \leqslant Q_i$. We have that $C_H(z_i)\cong 3 \times 2^{\cdot}\operatorname{Alt}(5)$ or $3 \times 2^{\cdot}\operatorname{Sym}(5)$. Let $M$ be a $3'$-subgroup of $C_H(z_i)$ that is normalized by $P$. It is clear that $5 \nmid |M|$ so $M$ must be a $2$-group. Now $C_H(z_i)$ has Sylow $2$-subgroups isomorphic to $\mathrm{Q}(8)$ or $\mathrm{SDih}(16)$. Since $C_M(P) \leqslant \<t\>$, we must have that $M\leqslant\<t\>$ or $M \cong \mathrm{Q}(8)$ and $MP\cong 3 \times \mathrm{SL}_2(3)$. Notice that $P$ is involved in precisely two subgroups of $C_H(z_i)$ isomorphic to $MP\cong 3 \times \mathrm{SL}_2(3)$. We define $A_i$ and $B_i$ to be the two distinct $2$-radical subgroups.
It then follows that $\<A_i,B_i\>=C_H(z_i)'\cong 2^{.}\operatorname{Alt}(5)$ and, since an element of order four centralizes $z_i$ and inverts $P \cap \<A_i,B_i\>$, we must have that $z_j \in P \cap \<A_i,B_i\>$. \end{proof} We continue the notation for the $P$-invariant subgroups from the previous lemma. The subgroups $\{A_i,B_i\}$ and $Q_j$ for $i,j \in \{1,2\}$ play key roles in this section as our building blocks for $H$. \begin{lemma}\label{HN-swapping a_i's and z_i's-2} Let $\{i,j\}=\{1,2\}$. The following hold. \begin{enumerate}[$(i)$] \item $N_H(P) \cap C_H(a_i)$ acts transitively on the set $\{\<z_1\>,\<z_2\>\}$. \item $N_H(P) \cap C_H(z_i)$ acts transitively on the set $\{\<a_1\>,\<a_2\>\}$. \item $N_H(P)$ acts as $\operatorname{Dih}(8)$ on $\{A_1,B_1, A_2,B_2\}$ with blocks of imprimitivity $\{A_i,B_i\}$. In particular, $N_H(P) \cap N_H(\<a_i\>)$ acts transitively. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{HN-Normalizer H of P}, $N_H(P)/C_H(P)\cong \mathrm{Dih}(8)$ and $N_H(P)$ is transitive on $P \cap 3\mathcal{A}$ and $P \cap 3\mathcal{B}$, which both have order four, and so $(i)$ and $(ii)$ are clear. Now by Lemma \ref{HN-3'-subgroups of centralizers}, $N_H(P)$ acts imprimitively on the set $\{A_1,B_1,A_2,B_2\}$, swapping $\{A_1,B_1\}$ with $\{A_2,B_2\}$. Recall that $N_H(\<z_2\>)\sim 3:4^{.}\operatorname{Alt}(5)$ or $3:4^{.}\operatorname{Sym}(5)$. In either case an element of order four, $g$ say, inverts $z_2$ whilst centralizing $\<A_2,B_2\>$. Therefore $g \in C_H(z_1)$. This element necessarily permutes $A_1$ and $B_1$, since Sylow $2$-subgroups of $C_H(Z)$ are either quaternion of order eight or semi-dihedral of order $16$. Thus $N_H(P)$ acts transitively and imprimitively on $\{A_1,B_1, A_2,B_2\}$ and contains a transposition, and so the first assertion of $(iii)$ follows.
If the subgroup $N_H(P) \cap N_H(\<a_i\>)$ acts as a non-transitive subgroup of $\operatorname{Dih}(8)$, then it preserves each $\{A_i,B_i\}$ and therefore normalizes $\<z_1\>$ and $\<z_2\>$, which we have seen is not the case, and so we have $(iii)$. \end{proof} The following lemma is a key step in determining the structure of $H$ since it proves that $H$ contains a subgroup which is extraspecial of order $2^9$. \begin{lemma}\label{HN-Q_i's} Let $\{i,j\}=\{1,2\}$. Then $Q_i \cap Q_j=\<t\>$ and $O_{3'}(C_G(Q_i))=Q_j$. In particular, $\<t\>$ is the centre of a Sylow $2$-subgroup of $G$ and $Q_1Q_2\cong 2_+^{1+8}$ with $C_G(Q_1Q_2)=\<t\>$ and $C_{Q_1Q_2}(z_i)=\<t\>$. \end{lemma} \begin{proof} Let $\{i,j\}=\{1,2\}$. Recall Lemma \ref{HN-alt9 observations} $(iv)$, which says that $C_{C_H(a_i)}(Q_i)=\<t,a_i\>$. Therefore $C_{\overline{H}}(\overline{Q_i})$ has a self-centralizing element of order three, and so we may use Theorem \ref{Feit-Thompson}. Notice also that $\<a_i\>\in \operatorname{Syl}_3(C_G(Q_i))$ and $N_H(P) \cap N_H(\<a_i\>)$ normalizes $C_G(Q_i)$. By Lemma \ref{HN-swapping a_i's and z_i's-2} $(iii)$, $N_H(P) \cap N_H(\<a_i\>)$ is transitive on $\{A_1,A_2,B_1,B_2\}$. Thus no group from this set commutes with $Q_i$, else we would have $z_1 \in \<A_2,B_2\>\leqslant C_G(Q_i)$, which is a contradiction. Set $N=O_{3'}(C_G(Q_i))$. Then $N$ is normalized by $P$ and by the comment above we see that $C_N(z_1)=C_N(z_2)=\<t\>$. Moreover $C_{N}(a_i)=\<t\>$ and so by coprime action we have \[N=\<C_N(z_1),C_N(z_2),C_N(a_1),C_N(a_2)\>=C_N(a_j).\] By Lemma \ref{HN-3'-subgroups of centralizers}, since $C_N(P) \leqslant \<t\>$, we have that $N\leqslant Q_j$. Suppose that $\<t\> < N< Q_j$. Since $a_i$ acts fixed-point-freely on $N/\<t\>$, $|N|=2^3$. By Lemma \ref{HN-alt9 observations} $(iii)$, $N$ is elementary abelian. Now by Lemma \ref{HN-alt9 observations} $(vi)$, $s$ is conjugate to $t$ in $G$. Recall Lemma \ref{HN-normalizer of J2}.
This, together with the fact that $P=\<a_1,a_2\> \in \operatorname{Syl}_3(H)$, implies that for $k\in\{1,2\}$ there exists a $P$-invariant subgroup $\mathrm{Q}(8)\cong X_k\leqslant C_H(a_k)$ with $[X_1,X_2]=1$. Now by Lemma \ref{HN-3'-subgroups of centralizers}, $X_i \leqslant Q_i$ and $X_j \leqslant Q_j$. We have that $X_j$ and $N$ are both $P$-invariant, and furthermore $X_j\cong \mathrm{Q}(8)$ whereas $N$ is elementary abelian. Therefore $|X_j \cap N|=2$ and so $Q_j=NX_j$. Similarly, $Q_i=O_2(C_G(Q_j))X_i$. Therefore $X_j$ commutes with $Q_i$, which is a contradiction. So we have that either $N=Q_j$ or $N=\<t\>$. Assume the latter for a contradiction. In Case I, we have that $N_G(\<a_i\>)$ is the diagonal subgroup of index two in $\operatorname{Sym}(3)\times \operatorname{Sym}(9)$. It follows that $C_G(Q_i)$ has a self-normalizing Sylow $3$-subgroup and so a normal $3$-complement. Therefore, $C_G(Q_i)=N \<a_i\>=\<t\>\times \<a_i\>$. Hence $N_G(Q_i) \leqslant N_G(\<a_i\>)$ has Sylow $2$-subgroups of order $2^7$ isomorphic to a Sylow $2$-subgroup of $\operatorname{Sym}(9)$. We check that $Q_i$ is characteristic in such a $2$-group to see that $N_G(\<a_i\>)$ contains a Sylow $2$-subgroup of $G$. Therefore there exists $g\in G$ such that $A_1^g \in N_G(\<a_i\>)$. However, using Lemma \ref{HN-element of order four in CG(Z)} we now get a contradiction, since no element of order four in $A_1$ commutes with an element of $3\mathcal{A}$. Thus in Case I we have that $O_{3'}(C_G(Q_i))=Q_j$. In Case II we instead have that $N_G(\<a_i\>)\cong \operatorname{Sym}(3)\times \operatorname{Sym}(9)$, and using Theorem \ref{Feit-Thompson} we see that $C_G(Q_i)/\<t\> \cong \operatorname{Alt}(5)$, $\mathrm{PSL}_2(7)$ or $\operatorname{Sym}(3)$. Notice that an element of order three in $P$ must therefore commute with $C_G(Q_i)$. That element must be in $\<a_j\>$.
However, $t\in C_G(a_j)$ and it follows from Lemma \ref{HN-images in alt9} that $t$ is $2$-central in $C_G(a_j)\cong 3 \times \operatorname{Sym}(9)$. A $2$-central involution in $\operatorname{Sym}(9)$ does not commute with subgroups isomorphic to $\operatorname{Alt}(5)$, $\mathrm{PSL}_2(7)$ or their double covers. It follows that we must have $C_G(Q_i)/\<t\>\cong \operatorname{Sym}(3)$. We now again have that $N_G(Q_i)$ is contained in $N_G(\<a_i\>)$, and we get a contradiction as before. Hence we have in both cases that $N=Q_j$ and also that $C_G(Q_i)/N$ acts faithfully on $N/\<t\>$. So $[Q_1,Q_2]=1$ and furthermore $Q_1 \cap Q_2 \leqslant C_{Q_1}(Q_1)=\<t\>$, and so we get that $Q_1Q_2\cong 2_+^{1+8}$ and clearly $C_{Q_1Q_2}(z_i)=\<t\>$. Now let $Q_1Q_2\leqslant T\in \operatorname{Syl}_2(G)$. Then $\mathbf{Z}(T)\leqslant C_T(Q_1Q_2)\leqslant C_T(Q_1) \cap C_T(Q_2)$. Since $C_G(Q_i)/N$ acts faithfully on $N/\<t\>$, it is clear that $C_T(Q_1) \cap C_T(Q_2)=\<t\>$. Hence $\mathbf{Z}(T)=C_G(Q_1Q_2)=\<t\>$. \end{proof} Set ${Q_{12}}:=Q_1Q_2\cong 2_+^{1+8}$ and recall that in Notation \ref{HN-Alt9notation} we defined $E\leqslant C_G(a_1)$ such that $t \in E\trianglelefteq C_H(a_1)$ is elementary abelian of order eight. We now consider $C_G(E)$ and $N_G(E)$. \begin{lemma}\label{HN-info on centralizer of E} \begin{enumerate}[$(i)$] \item We have that $C_G(E)/O_2(C_G(E))\cong C_3$ in Case I and $C_G(E)/O_2(C_G(E))\cong\operatorname{Sym}(3)$ in Case II. \item Without loss of generality (on choices of $A_i$) we may assume that $O_{2}(C_G(E))=\<E,Q_2,A_1,A_2\>$, which is normalized by $P$. \item $N_G(E)/C_G(E)\cong \mathrm{GL}_3(2)$, where the extension is split and there is a complement to $C_G(E)$ in $C_H(a_1)$ containing $a_2$. \item $\<Q_{12},A_1,A_2\>$ is a $2$-group which is normalized by $P$.
\end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{HN-alt9 observations} $(v)$, $C_{C_H(a_1)}(E)=\<E,a_1\>$ and so $C_G(E)/E$ satisfies Theorem \ref{Feit-Thompson}. Set $N=O_{3'}(C_G(E))$. Then $C_N(a_1)=E$ and $a_1$ acts fixed-point-freely on $N/E$. A theorem of Thompson says that $N/E$ is nilpotent and therefore $N$ is nilpotent. By Theorem \ref{Feit-Thompson}, $C_G(E)/N\cong C_3$, $\operatorname{Sym}(3)$, $\operatorname{Alt}(5)$ or $\mathrm{PSL}_2(7)$ (in which case $N=E$). Also by Lemma \ref{HN-alt9 observations} $(ii)$, there exists a complement to $C_G(E)$ in $N_G(E)$ which in particular is a subgroup of $C_H(a_1)$ containing $a_2$. Since $P$ normalizes $N$, we may apply coprime action to see that \[N=\<C_N(z_1),C_N(z_2),C_N(a_1),C_N(a_2)\>.\] Since $C_N(P)\leqslant C_E(a_2)=\<t\>$, we use Lemma \ref{HN-3'-subgroups of centralizers} to see that $N$ is generated by $2$-groups and, as $N$ is nilpotent, $N$ is a $2$-group. We have that $E\leqslant Q_1$ and, by Lemma \ref{HN-Q_i's}, $[Q_1,Q_2]=1$ and $Q_1 \cap Q_2=\<t\>$, so $Q_2 \cap E=\<t\>$ and $Q_2 \cap N >\<t\>$ and so has order $2^3$ or $2^5$. Notice also that this implies that $N$ does not split over $E$. Let $g\in N_G(E)\cap C_G(a_1)$ be an element of order seven. Then $g$ acts fixed-point-freely on $E$. If $[N/E,g]=1$ then $N=C_N(g)\times E$, which is a contradiction. Thus $[N/E,g]\neq 1$ and so $|N/E|\geqslant 2^3$. Since $a_1$ acts fixed-point-freely on $N/E$ and preserves $[N/E,g]$, we have $|[N/E,g]| \geqslant 2^6$. If $z_1$ and $z_2$ act fixed-point-freely on $N/E$ then $N=Q_2E$ and so $|N/E|=2^2$ or $2^4$, which we have seen is not the case. Therefore at least one of $C_{N/E}(z_1)$ and $C_{N/E}(z_2)$ is non-trivial. Since $E\trianglelefteq C_H(a_1)$, we may apply Lemma \ref{HN-swapping a_i's and z_i's-2} $(i)$, which says that $N_H(P) \cap C_H(a_1)$ acts transitively on the set $\{\<z_1\>,\<z_2\>\}$. Therefore $C_{N/E}(z_1)$ and $C_{N/E}(z_2)$ are both non-trivial.
So we may assume, without loss of generality, that $A_1\leqslant N$ and $A_2 \leqslant N$, and so $N=\<E,Q_2\cap N,A_1,A_2\>$. Now suppose that $C_G(E)/N\cong \operatorname{Alt}(5)$. Since $N_G(E)/C_G(E)\cong \mathrm{GL}_3(2)$, we have that $N_G(E)/N\cong \operatorname{Alt}(5) \times \mathrm{GL}_3(2)$. We have that $Na_2$ is an element of order three in $N_G(E)/N$ and $C_{N_G(E)/N}(Na_2)$ contains a subgroup isomorphic to $\operatorname{Alt}(5)$. By coprime action, \[C_{N_G(E)/N}(Na_2)= C_{N_G(E)}(a_2)N/N\cong C_{N_G(E)}(a_2)/C_N(a_2),\] which is a $2$-group of order 8 or 32 extended by $\operatorname{Alt}(5)$, which does not exist in $\operatorname{Sym}(9)$. Hence $C_G(E)/N\cong \operatorname{Sym}(3)$ or $C_3$, and it follows that $N=\<E,Q_2,A_1,A_2\>$. Additionally, since $Q_1$ normalizes $E$ and so normalizes $N$, we see that $\<Q_{12},A_1,A_2\>$ is a $2$-group which is clearly normalized by $P$. \end{proof} We continue the notation from this lemma for the rest of this section, such that $A_1$ and $A_2$ commute with $E$. Set $K:=N_G(Q_{12})\leqslant H$. We show in the rest of this section that $K=H$. \begin{lemma}\label{HN-three things about K} \begin{enumerate}[$(i)$] \item $N_H(P)\leqslant K$ and $|N_H(P)Q_{12}/Q_{12}|=3^22^3$ in Case I or $3^22^4$ in Case II. \item Suppose that $v\in K$ is such that $Q_{12}v$ is an involution which inverts ${Q_{12}}z_i$ for some $i \in \{1,2\}$. Then $C_{\overline{{Q_{12}}}}(v)=[\overline{Q_{12}},v]$ has order $2^4$. \item $C_{Q_{12}}(A_1)\neq C_{Q_{12}}(A_2)$. \item For $i \in \{1,2\}$, $C_H(a_i)\leqslant K$. \item For $i \in \{1,2\}$, $C_H(z_i)\leqslant K$. \item $\<A_1,B_1\>$ commutes with $\<A_2,B_2\>$ modulo $Q_{12}$ and, in particular, $\<A_1,A_2\>Q_{12}/Q_{12}\cong 2^4$.
\end{enumerate} \end{lemma} \begin{proof} $(i)$ First observe that $N_H(P)$ acts on the set $\{a_1,a_2,a_1^2,a_2^2\}=P \cap 3\mathcal{A}$ and therefore it preserves ${Q_{12}}=Q_1Q_2$, so $N_H(P) \leqslant K$. Clearly $N_{Q_{12}}(P)$ commutes with $P$ and $\<t\> \leqslant C_{Q_{12}}(P)\leqslant Q_1 \cap Q_2=\<t\>$, so the order of the quotient is clear given Lemmas \ref{HN-conjugates in P} and \ref{HN-Normalizer H of P}. $(ii)$ Observe that $\overline{Q_{12}}$ is elementary abelian and $\overline{Q_{12}}\overline{v}$ has order two and inverts $\overline{Q_{12}}\overline{z_i}$, which has order three. Therefore we may use Lemma \ref{Prelims-centralizers of invs on a vspace which invert a 3} and, since $|C_{\overline{Q_{12}}}(z_i)|=1$, we have that $|C_{\overline{{Q_{12}}}}(v)|\leqslant 2^4$. Of course we always have that $|C_{\overline{{Q_{12}}}}(v)|\geqslant 2^4$, and so we get equality. $(iii)$ Suppose that $C_{Q_{12}}(A_1)=C_{Q_{12}}(A_2)$. Recall that $N_H(P)$ acts as $\operatorname{Dih}(8)$ on $\{A_1,B_1,A_2,B_2\}$. Therefore there exists $g\in N_H(P)$ permuting $A_1$ and $B_1$ and fixing $A_2$ and $B_2$. Hence \[C_{Q_{12}}(A_1)=C_{Q_{12}}(A_2)=C_{Q_{12}}(A_2)^g=C_{Q_{12}}(A_1)^g=C_{Q_{12}}(B_1).\] Therefore $E \leqslant C_{Q_{12}}(A_1)=C_{Q_{12}}(\<A_1,B_1\>)\leqslant C_{Q_{12}}(z_2)$. This is a contradiction. $(iv)$ This is clear since $C_H(a_i)=Q_iN_{C_H(a_i)}(P)$ and so $C_H(a_i)\leqslant K$. $(v)$ By Lemma \ref{HN-info on centralizer of E}, $T:=\<Q_{12},A_1,A_2\>$ is a $2$-group which is normalized by $P$. We consider $N_T(Q_{12})\leqslant K$. Since $T$ is normalized by $P$, we apply coprime action to see that \[N_T(Q_{12})=\<C_{N_T(Q_{12})}(r)\mid r \in P^\#\>.\] We have that $T\leqslant N_G(E)$ and, by Lemma \ref{HN-alt9 observations} $(ii)$, $C_T(P)=\<t\>$, and so we may use Lemma \ref{HN-3'-subgroups of centralizers}.
Since $Q_{12}$ is normalized by $N_H(P)$, which is transitive on $\{A_1,B_1,A_2,B_2\}$ (by Lemma \ref{HN-swapping a_i's and z_i's-2} $(iii)$), it is clear that $A_1 \nleq Q_{12}$. Thus $N_T(Q_{12})>Q_{12}$. Now we use Lemma \ref{HN-3'-subgroups of centralizers} to see that, for $j \in \{1,2\}$, $C_{N_T(Q_{12})}(a_j)=Q_j$, and to see that, for some $i \in \{1,2\}$, $C_{N_T(Q_{12})}(z_i)\in \{A_i,B_i\}$. However, we again apply Lemma \ref{HN-swapping a_i's and z_i's-2} $(iii)$ to see that, since one of $A_i$ or $B_i$ is in $K$ and $N_H(P) \leqslant K$ is transitive on $\{A_1,B_1,A_2,B_2\}$, we have $\<A_1,B_1,A_2,B_2\> \leqslant K$. Moreover, $C_H(P)\leqslant K$ and so, for $i \in \{1,2\}$, $\<A_i,B_i,C_H(P)\>=C_H(z_i)\leqslant K$. $(vi)$ Let $\widehat{K}:=K/Q_{12}$. Then $\widehat{T}=\<\widehat{A_1},\widehat{A_2}\>$ is a $2$-subgroup of $\widehat{K}$. We apply the same coprime action arguments again to $\widehat{T}$ and $\mathbf{Z}(\widehat{T})$ to see that $\widehat{T}$ is elementary abelian of order 16. Thus $[\widehat{A_1},\widehat{A_2}]=1$. However, we have seen that we can choose an element of order four $g \in N_G(Z)$ that fixes $A_2$ whilst permuting $\{A_1,B_1\}$. Thus $1^g=[\widehat{A_1},\widehat{A_2}]^g=[\widehat{B_1},\widehat{A_2}]$. So $[\<\widehat{A_1},\widehat{B_1}\>,\widehat{A_2}]=1$. Repeating these arguments we get that $[\<\widehat{A_1},\widehat{B_1}\>,\<\widehat{A_2},\widehat{B_2}\>]=1$. \end{proof} \begin{lemma}\label{HN-alt5 and char 2} Set $D=\<N_H(P),C_H(p)\mid p \in P^\#\>$. \begin{enumerate}[$(i)$] \item In Case I, $D/Q_{12}$ is isomorphic to $\operatorname{Alt}(5)\wr 2$. In Case II, $D/Q_{12}$ is isomorphic to a subgroup of $\operatorname{Sym}(5) \wr 2$ of shape $(\operatorname{Alt}(5)\wr 2).2$. \item A Sylow $2$-subgroup of $D/Q_{12}$ has a unique elementary abelian subgroup of order 16.
\item If $Q_{12}v$ is an involution in $(DQ_{12})'/Q_{12}\cong \operatorname{Alt}(5)\times \operatorname{Alt}(5)$, then either $v$ has order four and squares to $t$, with $C_H(v)$ containing a conjugate of $Z$, or $Q_{12}v$ is diagonal, in which case $v\in 2\mathcal{B}$ and $[\overline{Q_{12}},v]=C_{\overline{Q_{12}}}(v)$ has order $2^4$. \end{enumerate} \end{lemma} \begin{proof} Set $\widehat{K}=K/Q_{12}$. Then $\widehat{D}=\<\widehat{A_1},\widehat{A_2},\widehat{B_1},\widehat{B_2}, \widehat{N_H(P)}\>$. We have seen that $\<\widehat{A_1},\widehat{A_2},\widehat{B_1},\widehat{B_2}\>$ is isomorphic to $\operatorname{Alt}(5)\times \operatorname{Alt}(5)$. It is normalized by $\widehat{N_H(P)}$, which has order $3^22^3$ or $3^22^4$ by Lemma \ref{HN-three things about K} $(i)$. Thus either $\widehat{D}\sim (\operatorname{Alt}(5)\times \operatorname{Alt}(5)).2$, which is in fact isomorphic to $\operatorname{Alt}(5) \wr 2$ since an element in $N_H(P)$ swaps $\<A_1,B_1\>$ and $\<A_2,B_2\>$, or $\widehat{D}\sim (\operatorname{Alt}(5)\times \operatorname{Alt}(5) ):(2 \times 2)$. Note that $\widehat{D}$ is a subgroup of the automorphism group of $\operatorname{Alt}(5)\times \operatorname{Alt}(5)$, for otherwise $\<C_{\widehat{D}}(\widehat{P}),\widehat{A_1}\>$ contains an abelian subgroup of $C_{\widehat{H}}(\widehat{Z})$ of order $2^3$, which is not possible. So $\widehat{D}$ is an index two subgroup of $\operatorname{Sym}(5)\wr 2$. It is not isomorphic to $\operatorname{Sym}(5) \times \operatorname{Sym}(5)$, as an element in $N_H(P)$ swaps $\<A_1,B_1\>$ and $\<A_2,B_2\>$. It remains to calculate in the remaining two groups that a Sylow $2$-subgroup has a unique elementary abelian subgroup of order 16. This proves $(i)$ and $(ii)$. Now $\widehat{D}'$ has two conjugacy classes of involutions: diagonal and non-diagonal.
If $\widehat{v}$ is an involution in $\widehat{D}'$ then it inverts a conjugate of $\widehat{z_1}$ and therefore, by Lemma \ref{HN-three things about K} $(ii)$, $C_{\overline{{Q_{12}}}}(v)=[\overline{Q_{12}},v]$ has order $2^4$. Moreover, $\overline{{Q_{12}}}\overline{v}$ is an involution and, by Lemma \ref{lem-conjinvos}, every involution in $\overline{{Q_{12}}}\overline{v}$ is conjugate to $\overline{v}$. Thus if $\widehat{w}=\widehat{v}$ then $v$ is conjugate to $w$ or to $tw$. We may choose an element of order four, $f_i\in \<A_i,B_i\>\cap N_H(P)$, with $f_i^2=t$. Then $\widehat{f_i}$ represents every non-diagonal involution in $\widehat{D}'$. Now $\widehat{f_1f_2}$ is a diagonal involution and $f_1f_2\in N_H(P)$. In fact $f_1f_2$ is in $N_H(Z)$ and inverts $P$. We can see in $N_H(\<a_1\>)$, for example, that no element of order four inverts $P$, and so $f_1f_2$ is an involution. Moreover, $\widehat{\<f_1f_2,A_1,B_1\>}=\widehat{\<f_2,A_1,B_1\>}\cong 2 \times \operatorname{Alt}(5)$, so $\<f_1f_2,A_1,B_1\>\cong 4^{.}\operatorname{Alt}(5)$. It therefore follows from Lemma \ref{HN-prelims1} that $f_1f_2$ is conjugate to $s$. Thus if $\widehat{v}$ is an involution in $\widehat{D}'$ then $v$ is either an element of order four conjugate to $f_i$ or $f_i^3$, or is conjugate to $s$ or $st$ and so lies in $2\mathcal{B}$, which proves $(iii)$. \end{proof} \begin{lemma}\label{HN-centralizer of F} Assume that we are in Case I. For $V<E$ such that $t \in V\cong 2 \times 2$, we have $C_G(V)\leqslant \<C_H(p)\mid p \in P^\#\>$. \end{lemma} \begin{proof} Notice that $E=\<t\>\times [E,P]$ and that the image of $[E,P]$ in $O^3(C_G(a_1))$ is $\<(1, 5)(2, 6)(3, 8)(4, 7),(1, 8)(2, 7)(3, 5)(4, 6)\>$. Since we are in Case I, we have that $N_G(\<a_1\>)$ is the diagonal subgroup of index two in $\operatorname{Sym}(3) \times \operatorname{Sym}(9)$.
We calculate the centralizer in $\operatorname{Sym}(9)$ and $\operatorname{Alt}(9)$ of such a fours group to see that it has order 32. Thus $C_G([E,P])$ has a Sylow $3$-subgroup $\<a_1\>$ with centralizer $\<E,a_1\>$ equal to its normalizer. Hence $C_G([E,P])$ has a normal $3$-complement, $N$ say. It is clear that $N$ is normalized by $P$ with $C_N(P)=\<t\>$ and contains $\<C_{Q_1}([E,P]),Q_2,A_1,A_2\>$, which has order $2^{12}$. Now by coprime action, $N=\<C_N(a_1),C_N(a_2),C_N(z_1),C_N(z_2)\>$. It is clear that for $i \in \{1,2\}$, $A_i$ is a maximal $3'$-subgroup of $C_G(z_i)$, so $C_N(z_i)=A_i$. Also $Q_2$ is a maximal $3'$-subgroup of $C_G(a_2)$ normalized by $P$, and we have calculated that $|C_N(a_1)|=32$. Thus $N=\<C_N(a_1),O_2(C_G(E))\>$. Now $N_G(E)$ is transitive on fours subgroups of $E$ but normalizes $O_2(C_G(E))$, and so there exists $g \in C_G(a_1)$ such that $t \in V=[E,P]^g$, and so $N^g=O_{3'}(C_G(V))$ with $N^g=\<C_{N^g}(a_1),O_2(C_G(E))\>$, and we see that $C_G(V)\leqslant \<C_H(p)\mid p \in P^\#\>$. \end{proof} \begin{lemma}\label{HN-InvOrbs} \begin{enumerate}[$(i)$] \item $K$ has a subgroup $K_0$ such that $K_0/Q_{12}\cong \operatorname{Alt}(5)\wr 2$ acts faithfully on $Q_{12}$. \item Every involution in $Q_{12}$ is in $2\mathcal{A}\cup 2 \mathcal{B}$ and $K_0$ is transitive on $Q_{12} \cap 2\mathcal{A}$ and on $(Q_{12}\bs \<t\>)\cap 2 \mathcal{B}$. The orbit lengths are 120 and 150 respectively. \item A diagonal subgroup of $K_0$ isomorphic to $\operatorname{Alt}(5)$ either centralizes a subgroup of $Q_{12}$ isomorphic to $C_4 \times C_2$ containing an involution in $2\mathcal{A}$, or contains an element of order five acting fixed-point-freely on $\overline{Q_{12}}$. \end{enumerate} In particular, in Case I, $K_0=K$. \end{lemma} \begin{proof} Again we set $\widehat{K}=K/Q_{12}$.
We have seen in Lemma \ref{HN-images in alt9} and Notation \ref{HN-Alt9notation} that $Q_1$ contains non-conjugate involutions from $2\mathcal{A}$ and from $2\mathcal{B}$. We have seen that $K$ has a subgroup, $K_0$ say, such that $\widehat{K_0}$ is isomorphic to $\operatorname{Alt}(5) \wr 2$. We consider the action of this group on $Q_{12}$. The action is clearly faithful as $C_G(Q_{12})=\<t\>$. Now for $\{i,j\} = \{1,2\}$, $\<\widehat{A_i},\widehat{B_i}\>\cong \operatorname{Alt}(5)$ acts on $\overline{Q_{12}}$ with an element of order three, $\widehat{z_j}$, acting fixed-point-freely. It therefore follows that $\overline{Q_{12}}$ is a sum of two natural $\mathrm{GF}(2)\operatorname{Alt}(5)$-modules. In particular, an element of order five in $\<\widehat{A_i},\widehat{B_i}\>$ acts fixed-point-freely on $\overline{Q_{12}}$. Now $\widehat{K_0}$ contains two further conjugacy classes of subgroups isomorphic to $\operatorname{Alt}(5)$: the diagonal subgroups. One of these lies in a $\operatorname{Sym}(5)$, the other in an $\operatorname{Alt}(5) \times 2$, and furthermore $\widehat{K_0}$ contains two further conjugacy classes of subgroups of order five. Let $F$ be a Sylow $5$-subgroup of $\widehat{K_0}$. Then by coprime action $Q_{12}=\<C_{Q_{12}}(f):f \in F^\#\>$. Since $C_G({Q_{12}})=\<t\>$, no element in $F$ acts trivially. However, $F$ has six subgroups of order five, and for one of these, $\<f\>$ say, we must have $\<t\> < C_{Q_{12}}(f) < Q_{12}$ and $Q_{12}=[Q_{12},f]C_{Q_{12}}(f)$, where $[Q_{12},f]$ and $C_{Q_{12}}(f)$ are both extraspecial and intersect in $\<t\>$. Moreover, for any other element $f'\in F\bs \<f\>$ we have that $C_{Q_{12}}(f')\cap C_{Q_{12}}(f)=\<t\>$, so $C_{Q_{12}}(f)$ admits an automorphism of order five. Of course $[Q_{12},f]$ admits the automorphism $f$ of order five also. It follows that $C_{Q_{12}}(f)\cong [Q_{12},f]\cong 2_-^{1+4}$.
Moreover, exactly two of the subgroups of $F$ of order five act on $\overline{Q_{12}}$ with non-trivial fixed points, and so it follows that one class of diagonal $\operatorname{Alt}(5)$ subgroups of $\widehat{K_0}$ contains such an element of order five and the other contains a fixed-point-free element of order five. Recall that $E\leqslant Q_1$ commutes with $\<A_1,A_2\>$, so consider an involution, $v$ say, in $E$ distinct from $t$. Suppose that $v$ is fixed by an element of order five as well as by $\<A_1,A_2\>$ in $\widehat{K_0}$. Then it follows that $C_{\widehat{K_0}}(v)$ contains $\widehat{K_0}'$, which is a contradiction. Thus $v$ lies in a $\widehat{K_0}$-orbit whose length is a multiple of 25. Clearly $v$ does not commute with $P$, and so $v$ lies in an orbit whose length is a multiple of 3. Also $v$ is conjugate to $vt$ in $Q_{12}$ and so $|v^{K_0}|\geqslant 150$. Now, let $D$ be a diagonal subgroup of $\widehat{K_0}$ isomorphic to $\operatorname{Alt}(5)$ and let $F$ be a Sylow $5$-subgroup of $D$. We choose $D$ such that $Q_{12}a_2$ generates a Sylow $3$-subgroup of $D$. Note that $F$ acts non-trivially on a subgroup of $\overline{Q_{12}}$ of order $2^4$ and that $[Q_{12},F]\cong 2_-^{1+4} \ncong 2_+^{1+4}\cong [Q_{12},a_2]=Q_1$. In fact $|[Q_{12},F]\cap Q_1|\leqslant 2^3$: since $[Q_{12},F]$ is centralized by an element of order five, if $|[Q_{12},F]\cap Q_1|=2^4$ then $|[Q_{12},F]\cap E|\geqslant 2^2$, but we have seen that no element of $E\bs\<t\>$ commutes with an element of order five. Now suppose that $|C_{Q_{12}}(F)\cap C_{Q_{12}}(a_2)|=|C_{Q_{12}}(F)\cap Q_2|=4$. Then, as a $D$-module, $\overline{Q_{12}}$ has no submodule which is a sum of two trivial modules. However, this implies that $\<[\overline{Q_{12}},F],\overline{Q_1}\>$ is a sum of a 4-dimensional and a trivial module and so has order $2^5$; however, that means that $|[Q_{12},F]\cap Q_1|\geqslant 2^4$, which is a contradiction.
Thus $|C_{Q_{12}}(F)\cap Q_2|= 8$ (it can be no larger without containing a conjugate of $v\in E\bs\<t\>$). Since $2_-^{1+4}$ has $2$-rank $2$, $C_{Q_{12}}(F)\cap Q_2$ has $2$-rank at most 2. If it had $2$-rank 1 then it would necessarily be isomorphic to $\mathrm{Q}(8)$. However, $Q_2$ has just two subgroups isomorphic to $\mathrm{Q}(8)$, both of which are normalized by $a_1$. Any subgroup of $C_{Q_{12}}(F)\cap Q_2$ normalized by $a_1$ is normalized by $\widehat{\<a_1,D\>}=\widehat{K_0}'$, which is again a contradiction. Thus $C_{Q_{12}}(F)\cap Q_2\cong 4 \times 2$. This implies that an involution in $Q_2\bs \<t\>$ is centralized by $D$. Call this involution $w$ and observe that $w \notin v^{K_0}$. Now $\widehat{K_0}$ acts faithfully on $Q_{12}$ and so $C_{\widehat{K_0}}(w)=D$ or is a maximal subgroup of $\widehat{K_0}$ of shape $2 \times \operatorname{Alt}(5)$ or $\operatorname{Sym}(5)$. In particular, $\overline{w}$ lies in a $\widehat{K_0}$-orbit of length a multiple of 60. Thus $|w^{K_0}|\geqslant 120$. Now $Q_{12}$ has 270 involutions distinct from $t$, so, as $|v^{K_0}|\geqslant 150$ and $|w^{K_0}|\geqslant 120$, the orbit lengths are exactly 150 and 120, and every such involution lies in $v^{K_0} \cup w^{K_0}$. Since $Q_{12}$ contains representatives from $2\mathcal{A}$ and $2 \mathcal{B}$, we must have that $w^{K_0}=Q_{12} \cap 2\mathcal{A}$ and $v^{K_0}=(Q_{12}\bs\<t\>)\cap 2 \mathcal{B}$. Finally, in Case I, by Lemma \ref{HN-alt5 and char 2}, $K_0$ contains $C_H(p)$ for each $p \in P^\#$ and, by Lemma \ref{HN-centralizer of F}, an involution in $Q_{12}\bs \<t\>$ has centralizer contained in $K_0$, so we may conclude that $K=K_0$. \end{proof} \begin{lemma}\label{HN-new-sylow2} $N_H(E) \leqslant K$ and contains a Sylow $2$-subgroup of $G$. In Case I, a Sylow $2$-subgroup has order $2^{14}$. In Case II, it has order $2^{15}$ and is self-normalizing with derived subgroup contained in $Q_{12}A_1A_2$. \end{lemma} \begin{proof} It follows from Lemma \ref{HN-info on centralizer of E} that $C_G(E) \leqslant K$. So we consider $N_H(E)$.
By Lemma \ref{HN-info on centralizer of E}, there exists a complement, $C$, to $C_G(E)$ in $N_G(E)$ such that $C \leqslant C_G(a_1)$. Now, by Dedekind's Modular Law, $N_G(E) \cap H=C_G(E)C \cap H=C_G(E)(C \cap H)$. Furthermore, $\operatorname{Sym}(4)\cong C \cap H \leqslant C_H(a_1)\leqslant K$ by Lemma \ref{HN-three things about K} $(iv)$. Thus $N_H(E) \leqslant K$. In Case I, we have seen that $K/Q_{12}\cong \operatorname{Alt}(5)\wr 2$ and so $K$ has Sylow $2$-subgroups of order $2^{14}$. To see that these are Sylow $2$-subgroups of $H$, we must show that $Q_{12}$ is characteristic in any such. First we consider Case II. We have seen in Lemma \ref{HN-info on centralizer of E} that $O_2(C_G(E))=\<E,Q_2,A_1,A_2\>$ and $C_G(E)/O_2(C_G(E))\cong\operatorname{Sym}(3)$. We have seen also that $A_1$ and $A_2$ commute modulo $Q_{12}$; however, it is clear that they must in fact commute modulo $EQ_2$. Thus $|O_2(C_G(E))|=2^{11}$ and $|N_H(E)|=2^{15}3^2$. Since $N_H(E)$ normalizes $Q_{12}$, it is clear that $Q_1Q_2A_1A_2\unlhd N_H(E)$ and has order $2^{13}$. It follows that $Q_{12}A_1A_2=O_2(N_H(E))$ and, since $C \cap H$ is a complement commuting with $a_1$, we additionally see that $N_H(E)/O_2(N_H(E))\cong \operatorname{Sym}(3) \times \operatorname{Sym}(3)$. Now set $\widehat{K}=K/Q_{12}$ and consider $M:=N_{K}({Q_{12}A_1A_2})\geqslant N_H(E)$. This has ${P}$ as a Sylow $3$-subgroup. We have seen in Lemma \ref{HN-swapping a_i's and z_i's-2} $(iii)$ that $N_H(P)$ acts as $\operatorname{Dih}(8)$ on $\{A_1,B_1,A_2,B_2\}$. Thus the subgroup of $N_H(P)$ which preserves $\{A_1,A_2\}$ has index four. Using Lemmas \ref{HN-conjugates in P} and \ref{HN-Normalizer H of P}, we therefore see that $|N_M(P)|=3^22^3$.
We have, using Lemma \ref{HN-conjugates in P}, that a Sylow $2$-subgroup of $C_H(P)$ is elementary abelian, and if an involution, $r$ say, in $C_H(P)$ normalizes $A_1$ and $A_2$ then it must lie in $A_1$ and $A_2$, for otherwise $\<A_1,r\>$ is a $2$-subgroup of $C_G(Z)$ normalized by $P$, which is not possible. Thus $C_M(P)=\<t\>$. Now we may apply Lemma \ref{HN-3'-subgroups of centralizers} together with coprime action to argue that $Q_{12}A_1A_2=O_{3'}(M)$ and so $\widehat{A_1A_2}=O_{3'}(\widehat{M})$. Now $|N_{\widehat{M}}(\widehat{P})|=3^22^2$ with elementary abelian Sylow $2$-subgroups (as seen in $N_H(E)$). We also see that $C_M(p)\leqslant N_M(P)$ for any $p \in P^\#$. Thus $\widehat{M}/\widehat{A_1A_2}$ satisfies Theorem \ref{Hayden} and we may conclude that $\widehat{M}/\widehat{A_1A_2}$ has a normal Sylow $3$-subgroup. Thus $M=Q_{12}A_1A_2N_M(P)=N_H(E)$. Let $T$ be a Sylow $2$-subgroup of $N_H(E)$. Then $\widehat{T}$ is a Sylow $2$-subgroup of the subgroup $\widehat{D}$ in Lemma \ref{HN-alt5 and char 2} and therefore $\widehat{A_1A_2}$ is characteristic in $\widehat{T}$. Hence $T$ is a Sylow $2$-subgroup of $K$ and has order $2^{15}$. We now show, in both Cases I and II, that if $T\in \operatorname{Syl}_2(N_H(E))$ then $Q_{12}$ is characteristic in $T$, to conclude that $T$ is a Sylow $2$-subgroup of $H$. We continue the notation $\widehat{K}=K/Q_{12}$ and use Lemma \ref{Prelims 2^8 3^2 Dih(8)} by considering the action of $\widehat{T}$ on $\overline{Q_{12}}$. Now any involution in $\widehat{A_1A_2}$ inverts a conjugate of $Z$ and so, by Lemma \ref{HN-three things about K} $(ii)$, has centralizer of order $2^4$ in $\overline{Q_{12}}$. Now if $R$ is any elementary abelian normal subgroup of $\widehat{T}$ then $R \cap \widehat{A_1A_2}$ has order at least two, and so for $R$ of order $2,4,8$ we have satisfied the requirements of Lemma \ref{Prelims 2^8 3^2 Dih(8)}.
It remains to check that if $|R|=2^4$ then $|C_{\overline{Q_{12}}}(R)|\leqslant 2^3$. However, we have seen in Lemma \ref{HN-alt5 and char 2} that such an $R$ must be conjugate to $\widehat{A_1A_2}$, and now we may use Lemma \ref{HN-three things about K} $(iii)$ to see that $C_{\overline{Q_{12}}}(A_1)\neq C_{\overline{Q_{12}}}(A_2)$ and, since each $A_i$ has centralizer in $\overline{Q_{12}}$ of order at most $2^4$, we can conclude that $|C_{\overline{Q_{12}}}(A_1A_2)|\leqslant 2^3$. Hence Lemma \ref{Prelims 2^8 3^2 Dih(8)} gives us that $Q_{12}$ is characteristic in $T$ and, since $T\in \operatorname{Syl}_2(K)$, we must have that $T \in \operatorname{Syl}_2(H)$, and then by Lemma \ref{HN-Q_i's}, $T \in \operatorname{Syl}_2(G)$. Finally, it is clear that $T' \leqslant Q_{12}A_1A_2$ and, since $Q_{12}$ is characteristic in $T$, $N_G(T)\leqslant K$ and therefore $\widehat{A_1A_2} ~\mathrm{char}~ \widehat{T} \unlhd \widehat{N_G(T)}$. Thus $N_G(T)\leqslant N_H(E)$ and so, in Case II, $T$ is self-normalizing as claimed. \end{proof} \begin{lemma}\label{HN-Indextwo} In Case II, $G$ has a proper normal subgroup $G_0$ which satisfies the hypotheses in Case I. \end{lemma} \begin{proof} In Case II we have that $O^3(C_G(a_1))\cong \operatorname{Sym}(9)$. We choose an involution, $r$ say, in $O^3(C_G(a_1))$ whose image is the transposition $(7,8)$, and so $r$ is in the subgroup of $K$ described in Lemma \ref{HN-alt5 and char 2}. Let $T$ be a Sylow $2$-subgroup of $N_H(E)$. Then by Lemma \ref{HN-new-sylow2}, $T \in \operatorname{Syl}_2(G)$ and $T'\leqslant Q_{12}A_1A_2$. Now, using Lemma \ref{HN-alt5 and char 2} $(iii)$ and Lemma \ref{HN-InvOrbs}, we see that every involution in $T'$ lies in $2\mathcal{A} \cup 2\mathcal{B}$. By Lemma \ref{HN-images in alt9}, $r$ is not in $2\mathcal{A}$ or $2\mathcal{B}$, and so no $G$-conjugate of $r$ lies in $T'$.
Now we may apply Gr\"{u}n's Theorem to see that a Sylow $2$-subgroup of $G'$ is equal to $\<N_G(T)',T \cap R' \mid R \in \operatorname{Syl}_2(G)\>$. Hence $r \notin G'$ and $G/G'$ has even order. Now by Lemma \ref{HN-prelims1}, it is clear that $\<A_1,A_2,s\>\leqslant Q'$ and so we see that $G$ has a proper normal subgroup $G_0$ such that $N_{G_0}(Z)\sim 3^{1+4}{:} 4^{.}\operatorname{Alt}(5)$. We must check that $Z$ is conjugate to $Z^x$ in $G_0$; however, this is clear as $Z$ and $Z^x$ are conjugate in $\<Q,Q^x\>\leqslant G_0$. \end{proof} In light of Lemma \ref{HN-Indextwo}, we may simplify our working significantly by assuming from now on that we are in Case I only, so that by Lemma \ref{HN-InvOrbs}, $K/Q_{12}\cong \operatorname{Alt}(5) \wr 2$. \begin{lemma}\label{HN-K is strongly 3 embedded} $K$ is strongly $3$-embedded in $H$. \end{lemma} \begin{proof} Let $h \in H$ and let $y\in K \cap K^h$ be an element of order three. By Lemma \ref{HN-three things about K}, the centralizer in $H$ of every element of order three in $K$ is contained in $K$. Thus $C_H(y) \leqslant K \cap K^h$. Therefore $K \cap K^h$ contains a Sylow $3$-subgroup of $H$. So assume $P\leqslant K \cap K^h$. Then ${Q_{12}}=O_2(K)=\prod_{p\in P^\#}O_2(C_H(p))=O_2(K^h)={Q_{12}}^h$. Therefore $h \in N_G({Q_{12}})=K$ and so $K=K^h$. \end{proof} Recall that we fixed an involution $r_1 \in C_H(a_1)$ in Notation \ref{HN-Alt9notation}. \begin{lemma}\label{HN-Transfer-O^2(H) is proper} $r_1$ is not in $O^2(H)$. In particular, $H\neq O^2(H)$ and $O^2(H) \cap K\sim 2_+^{1+8}.(\operatorname{Alt}(5)\times \operatorname{Alt}(5))$. \end{lemma} \begin{proof} Given the cycle type of the images of $r_1$ and $t$ in $\operatorname{Alt}(9)\cong O^3(C_G(a_1))$ and by Lemma \ref{HN-images in alt9}, we see that $r_1$ is not conjugate to $t$ in $G$; however, the product $r_1t$ is conjugate to $t$ in $O^3(C_G(a_1))$ and therefore $r_1$ is not conjugate to $r_1t$ in $G$.
Observe that $r_1$ inverts $a_2$ and therefore $r_1\notin {Q_{12}}$. Since $r_1$ centralizes $a_1$ whilst inverting $a_2$, we have that $r_1$ permutes $\<z_1\>$ and $\<z_2\>$ and therefore permutes $\<A_1,B_1\>$ and $\<A_2,B_2\>$, so $r_1\notin O^2(K)$. Let $T\in \operatorname{Syl}_2(K)$ be such that $r_1\in T$ and suppose that for some $h \in H$, $r_1^h\in O^2(K) \cap T$. Suppose that $r_1^h \in {Q_{12}}$. Then $\<r_1^h,t\>\vartriangleleft {Q_{12}}$ but is not central in ${Q_{12}}$, as $Q_{12}$ is extraspecial. Therefore $r_1^h$ is conjugate to $r_1^ht=(r_1t)^h$ in ${Q_{12}}$ and so $r_1$ is conjugate to $r_1t$, which is a contradiction. So $r_1^h \notin {Q_{12}}$ and we consider the non-trivial coset ${Q_{12}}r_1^h$. By Lemma \ref{HN-alt5 and char 2}, either $r_1^h \in 2 \mathcal{B}$ or $r_1^h$ has order four. However, $r_1$ is an involution and is not conjugate to $t$ in $G$, so we have a contradiction. Thus no $H$-conjugate of $r_1$ lies in $T \cap O^2(K)$, which is a maximal subgroup of $T\in \operatorname{Syl}_2(H)$. By Thompson Transfer, $r_1 \notin O^2(H)$ and so $H \neq O^2(H)$. Since $[K:O^2(K)]=2$, we must have $O^2(K)=O^2(H) \cap K\sim 2_+^{1+8}.(\operatorname{Alt}(5)\times \operatorname{Alt}(5))$. \end{proof} \begin{lemma}\label{HN-orbits of elements in Q} Let $f \in {Q_{12}}\bs \<t\>$. Then either $f$ has order four or one of the following occurs. \begin{enumerate}[$(i)$] \item $f\in 2\mathcal{B}$, $C_H(f)\leqslant K$ has order $2^{13}\cdot 3$ and $\overline{f}$ is $2$-central in $\overline{K}$. \item $f\in 2\mathcal{A}$, $|C_K(f)|=2^{11}\cdot 3\cdot 5$ and $C_{K}({f}){Q_{12}}/{Q_{12}}\cong \operatorname{Alt}(5) \times 2$ or $\operatorname{Sym}(5)$. \end{enumerate} In particular, $K$ acts irreducibly on $\overline{Q_{12}}$, $C_H(f) \cap 3\mathcal{A} \neq 1$ and if $\overline{f} \in \mathbf{Z}(\overline{T})$ then $f\in 2\mathcal{B}$ and $C_H(f) \leqslant K$. \end{lemma}
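As a consistency check on the centralizer orders just stated (using only facts already established here), note that in Case I we have $|K|=|Q_{12}|\,|\operatorname{Alt}(5)\wr 2|=2^9\cdot 7200=2^{14}3^25^2$, so the orbit--stabilizer theorem applied to the $K$-orbits of lengths $150$ and $120$ appearing in the proof below gives
\[
|C_K(f)|=\frac{2^{14}3^25^2}{150}=2^{13}\cdot 3 \qquad\text{and}\qquad |C_K(f)|=\frac{2^{14}3^25^2}{120}=2^{11}\cdot 3\cdot 5
\]
respectively.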
\begin{proof} Lemma \ref{HN-InvOrbs} tells us that every involution in $Q_{12}\bs \{t\}$ lies in one of two $K$-conjugacy classes. Meanwhile, using Lemma \ref{HN-centralizer of F} we see that if such an involution $f \in 2\mathcal{B}$ then $C_H(f)\leqslant K$ and $f$ lies in a $K$-orbit of length $150$, which means that $|C_H(f)|=2^{13}\cdot 3$ and so $\overline{f}$ is $2$-central in $\overline{K}$. If $f \in 2\mathcal{A}$ then $f$ commutes with a diagonal subgroup of $K/Q_{12}$ isomorphic to $\operatorname{Alt}(5)$ and lies in a $K$-orbit of length $120$. It follows from the structure of the maximal subgroups of $\operatorname{Alt}(5)\wr 2$ that $C_{K/Q_{12}}(f)\cong 2 \times \operatorname{Alt}(5)$ or $\operatorname{Sym}(5)$. We now suppose that $f \in Q_{12}$ has order four. In Lemma \ref{HN-InvOrbs} we saw that an element of order four in $Q_{12}$ also commutes with a diagonal subgroup of $K/Q_{12}$ isomorphic to $\operatorname{Alt}(5)$ and so lies in a $K$-orbit of length $120$ or $240$. Suppose there is more than one $K$-orbit of elements of order four. Any $K$-orbit has length a multiple of $30$ (because $Q_{12}$ is non-abelian and because there exist elements of order three and five which act fixed-point-freely on $\overline{Q_{12}}$). However, no orbit can have length $30$ or $60$ because $K/Q_{12}$ has no subgroups of order $2^5\cdot 3\cdot 5$ or $2^4\cdot 3\cdot 5$. Since $Q_{12}$ has $240$ elements of order four, we have either one orbit of length $240$ or two of length $120$. In particular, no element of order four is $2$-central in $\overline{K}$. Now if $f$ is any element in $Q_{12}$ then $1\neq \overline{f} \in \overline{{Q_{12}}}$ commutes with an element of order three in $\overline{K}$. Since each $z_i$ acts fixed-point-freely on $\overline{{Q_{12}}}$, we have that $\overline{f}$ is centralized by a conjugate of ${Q_{12}}a_i$. Therefore $f$ commutes with a conjugate of $a_i$.
Furthermore, we observe that if $f$ has order four or $f\in 2\mathcal{A}$ then $\overline{f}$ is not $2$-central in $\overline{K}$, whereas if $f\in 2\mathcal{B}$ then $\overline{f}$ is $2$-central in $\overline{K}$. Finally, suppose that $ W<Q_{12}$ with $t \in W\vartriangleleft K$. Then $W$ must be a union of $K$-orbits. However, the $K$-orbits on $Q_{12}$ have lengths in $\{1,120,150,240\}$ and no union of orbits has size a power of $2$ greater than $2$ and less than $2^9$. Thus $K$ acts irreducibly on $\overline{Q_{12}}$. \end{proof} \begin{lemma}\label{HN-Q=Q^h} Let $h \in H$. If $({Q_{12}} \cap {Q_{12}}^h)\bs \<t\>$ contains an involution in $2\mathcal{B}$ then ${Q_{12}}={Q_{12}}^h$. \end{lemma} \begin{proof} We may suppose that for some $1 \neq \overline{f} \in \mathbf{Z}(\overline{T})$, $\overline{f} \in \overline{{Q_{12}}} \cap \overline{{Q_{12}}}^h$. By Lemma \ref{HN-orbits of elements in Q}, $f\in 2\mathcal{B}$ and $C_H(f) \leqslant K$ and also $C_H(f) \leqslant K^h$. However, this implies that $3\mid |K \cap K^h|$ and so $K=K^h$ and ${Q_{12}}={Q_{12}}^h$ by Lemma \ref{HN-K is strongly 3 embedded}. \end{proof} \begin{lemma} Let $T \in \operatorname{Syl}_2(K)$. Then $\overline{{Q_{12}}}$ is strongly closed in $\overline{T}$ with respect to $\overline{H}$. \end{lemma} \begin{proof} Let $1\neq \overline{f}\in \overline{{Q_{12}}}$ be such that $\overline{f} \in \overline{T^h}\bs \overline{{Q_{12}^h}}$ for some $h \in H$. Since $f \in Q_{12}\leqslant O^2(K)\leqslant O^2(H)$, we must have that $f\in O^2(K^h)=O^2(H) \cap K^h$. By Lemma \ref{HN-alt5 and char 2} $(iii)$ applied to $K^h$, either $f$ is an element of order four squaring to $t$ and commuting with a conjugate of $Z$, or $f\in 2\mathcal{B}$ and $C_{\overline{{Q_{12}}}^h}({f})=[\overline{{Q_{12}}}^h,{f}]$ has order $2^4$. Suppose first that $f$ has order four.
Then $f^2=t$ and ${Q_{12}}^hf$ is an involution in $O^2(K/{Q_{12}})^h$. By Lemma \ref{HN-alt5 and char 2} $(iii)$, $C_G(f)$ contains a conjugate of $Z$ and then, by Lemma \ref{HN-element of order four in CG(Z)}, a Sylow $3$-subgroup of $C_G(f)$ is conjugate to $Z$. However, by Lemma \ref{HN-orbits of elements in Q}, since $f \in Q_{12}$, $C_H(f) \cap 3\mathcal{A} \neq 1$, which is a contradiction. So we suppose instead that $f$ is an involution. Then $Q_{12}^hf$ is a diagonal involution with $f \in 2 \mathcal{B}$ and it follows from Lemma \ref{HN-InvOrbs} $(iii)$ that $f$ commutes with an element of order four in $Q_{12}^h$. Now, by Lemma \ref{HN-orbits of elements in Q}, $C_H(f)\leqslant K$. The element of order four in $Q_{12}^h$ commuting with $f$ is necessarily in $Q_{12}$, else we have a contradiction as before. Let $D,V\leqslant K^h$ be such that $\overline{D}:=C_{\overline{K^h}}({f})$ and $\overline{V}:=C_{\overline{{Q_{12}^h}}}({f})$. Then we have seen that $V \cap Q_{12}>\<t\>$. By Lemma \ref{lem-conjinvos}, since $\overline{V}=[\overline{Q_{12}^h},f]$, $|C_{\overline{K}^h}({f})|=|\overline{V}||C_{\overline{K^h}/\overline{Q_{12}^h}}({f})|=2^9$. Thus $\overline{Q_{12}^h D}$ is a Sylow $2$-subgroup of $\overline{K}^h$. Since $1\neq \overline{V} \cap \overline{Q_{12}}\unlhd \overline{D}$, we get that $\overline{V}\cap \overline{Q_{12}}\cap \mathbf{Z}(\overline{D})\neq 1$. However, $\overline{V}\cap \mathbf{Z}(\overline{D})\leqslant \mathbf{Z}(\overline{Q_{12}^hD})$, the preimage of which contains only involutions in $2\mathcal{B}$. Therefore $V \cap Q_{12}\leqslant Q_{12}^h \cap Q_{12}$ contains involutions in $2\mathcal{B}$ distinct from $t$. This contradicts Lemma \ref{HN-Q=Q^h}. Thus $\overline{{Q_{12}}}$ is strongly closed in $\overline{T}$ with respect to $\overline{H}$. \end{proof} \begin{lemma} $K=H$.
\end{lemma} \begin{proof} Assume for a contradiction that $K<H$; then ${Q_{12}} \ntriangleleft H$. Consider $O_{3'}(H)$. By Lemma \ref{HN-orbits of elements in Q}, the only proper non-trivial subgroup of $Q_{12}$ which is normalized by $K$ is $\<t\>$. So we have that $O_{3'}(H)\cap K\leqslant O_{3'}(H)\cap Q_{12}=\<t\>$. Since $O_{3'}(H)$ is normalized by $P$, by coprime action $O_{3'}(H)$ is generated by the subgroups $C_{O_{3'}(H)}(p)$ for $p \in P^\#$. However, by Lemma \ref{HN-three things about K}, for every $p \in P^\#$, $C_H(p) \leqslant K$. Therefore $O_{3'}(H)\leqslant K$ and so $O_{3'}(H)=\<t\>$. Set $M:=\<{Q_{12}}^H\>\trianglelefteq H$; then $M \leqslant O^2(H)$. Moreover, $O_{3'}(M)\leqslant O_{3'}(H)$ and so $O_{3'}(M)=\<t\>$. Therefore we have $P \cap M\neq 1$. Now $M \cap K$ is a normal subgroup of $K$ contained in $O^2(H)\cap K=O^2(K)$. Hence $M \cap K=O^2(K)$. Set $N:=O_{2'}(M)$. If $N$ is a $3'$-group then $N \leqslant O_{3'}(H) =\<t\>$ and so $N=1$. Otherwise $P \cap N\neq 1$ and then $[P\cap N,{Q_{12}}] \leqslant N \cap {Q_{12}}=1$, which is a contradiction as $C_G(Q_{12})\leqslant Q_{12}$. Therefore $O_{2'}(M)=1$. Now, since $P \leqslant M\vartriangleleft H$, $H=MN_H(P)$ by a Frattini argument and so $M=\<Q_{12}^H\>=\<Q_{12}^{N_H(P)M}\>=\<Q_{12}^M\>$ since $N_H(P)\leqslant K=N_G(Q_{12})$. Finally, we may apply Theorem \ref{goldschmidt} to $\overline{M}=\<\overline{{Q_{12}}}^M\>$. As required, we have that for $T\in \operatorname{Syl}_2(K)$, $\overline{{Q_{12}}}$ is strongly closed in $\overline{T}$ with respect to $\overline{H}$. Hence $\overline{{Q_{12}}}$ is strongly closed in $\overline{M} \cap \overline{T}$ with respect to $\overline{M}$. We have also that $O_{2'}(\overline{M})=1$. Thus $\overline{{Q_{12}}}=O_2(\overline{M})\Omega(\overline{T} \cap \overline{M})$.
Since ${Q_{12}}$ is not a Sylow $2$-subgroup of $M \leqslant O^2(H)$, we may find $e \in (M \cap T)\bs {Q_{12}}$. Then by Lemma \ref{HN-alt5 and char 2} $(iii)$, ${Q_{12}}e$ contains either involutions or elements of order four squaring to $t$. In either case $\overline{{Q_{12}}}\overline{e} \cap \Omega(\overline{T} \cap \overline{M})\neq 1$ and so $\Omega(\overline{T} \cap \overline{M}) \nleq \overline{{Q_{12}}}$. This contradiction proves that $H=K$. \end{proof} \section{The Structure of the Centralizer of $u$}\label{HN-Section-CG(u)} We continue to assume that we are in Case I only. We now know the structure of the centralizer of an involution in the $G$-conjugacy class $2\mathcal{B}$ and so we must determine the structure of the centralizer of an involution in $2\mathcal{A}$. We continue the notation from Section \ref{HN-Section-CG(t)}. Recall that in Notation \ref{HN-Alt9notation} we fixed an involution $u \in Q_2\leqslant C_G(a_2)$ and we defined $2\mathcal{A}$ to be the conjugacy class of involutions in $G$ containing $u$. By Lemma \ref{HN-images in alt9}, $2\mathcal{A}\neq 2 \mathcal{B}$. Let $L:=C_G(u)$ and $\wt{L}:=L/\<u\>$, and we continue to set $H=C_G(t)$ and $\overline{H}=H/\<t\>$. We will show that $L\sim (2^{\cdot}\mathrm{HS}){:}2$ and so we must show that $\wt{L}$ has an index two subgroup isomorphic to the sporadic simple group $\mathrm{HS}$. We first show that $\wt{L}$ has a subgroup $2\times \operatorname{Sym}(8)$ and later that the centre of this subgroup does not lie in $O^2(\wt{L})$. We will use the information we have about $C_G(t)=H$ and $N_G(E)$ to determine the structure of some $2$-local subgroups of $\wt{L}$. Once we have used extremal transfer to find the index two subgroup of $\wt{L}$, we are then able to use this $2$-local information to apply a theorem due to Aschbacher \cite{AschbacherHS} to recognize $\mathrm{HS}$.
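For later reference we record the relevant order arithmetic: $|\mathrm{HS}|=2^93^25^37\cdot 11=44352000$, so a group of shape $(2^{\cdot}\mathrm{HS}){:}2$ has order
\[
4\,|\mathrm{HS}|=2^{11}3^25^37\cdot 11,
\]
and in particular its Sylow $2$-subgroups have order $2^{11}$, consistent with the computation in Lemma \ref{HN-HS-Sylow 2}.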
The Aschbacher result requires us to find $2$-local subgroups of shapes $(4 *2_+^{1+4}).\operatorname{Sym}(5)$ and $(4^3).\mathrm{GL}_3(2)$. Recall from Notation \ref{HN-Alt9notation} that $u \in F \leqslant Q_2 \leqslant C_G(a_2)$ and that $a_1$ normalizes $F$. \begin{lemma}\label{HN-fours groups centralize A8} $C_G(F)\cong 2\times 2 \times \operatorname{Alt}(8)$ with $C_G(F)>C_G(u) \cap C_G(a_1)\cong \operatorname{Alt}(8)$ and $C_{\wt{L}}(\wt{F})\cong 2 \times \operatorname{Sym}(8)$. Moreover, if $F_0$ is any fours subgroup of $C_G(a_2)$ such that $F_0^\# \subseteq 2\mathcal{A}$ then $C_G(F_0)\cong C_G(F)$. \end{lemma} \begin{proof} Set $M:=C_G(F)$. First observe that $F \leqslant O^3(C_G(a_2))\cong \operatorname{Alt}(9)$ and the image of $F^\#$ in $\operatorname{Alt}(9)$ consists of involutions of cycle type $2^2$. Notice also that $\operatorname{Alt}(9)$ has two classes of such fours groups, with representatives $\<(1,2)(3,4),(1,3)(2,4)\>$ and $\<(1,2)(3,4),(3,4)(5,6)\>$. These subgroups of $\operatorname{Alt}(9)$ have respective centralizers isomorphic to $2^2 \times \operatorname{Alt}(5)$ and $2^2 \times \operatorname{Sym}(3)$ and respective normalizers $(\operatorname{Alt}(4) \times \operatorname{Alt}(5)){:}2$ and $\operatorname{Sym}(4) \times \operatorname{Sym}(3)$. Given the image of $F$ in $O^3(C_G(a_2))$, we have that $M\cap C_G(a_2)\cong 3 \times 2^2 \times \operatorname{Sym}(3)$. Let $R \in \operatorname{Syl}_3(M \cap C_G(a_2))$ be such that $\<R,a_1\>$ is a Sylow $3$-subgroup of $N_G(F)$. Then $a_2 \in R$, $\<R,a_1\>$ is abelian and $R^\# \subseteq 3\mathcal{A}$ since no element of order three in $3\mathcal{B}$ commutes with a fours group. Therefore, by the earlier argument, for each $r \in R^\#$, $C_G(r) \cap M\cong 3\times 2^2 \times \operatorname{Alt}(5)$ or $3\times 2^2 \times \operatorname{Sym}(3)$.
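To illustrate the first of these centralizer computations: an element of $\operatorname{Sym}(9)$ commuting with both generators of $V:=\<(1,2)(3,4),(1,3)(2,4)\>$ must preserve $\{1,2,3,4\}$ and centralize the regular fours group induced there, so $C_{\operatorname{Sym}(9)}(V)=V\times\operatorname{Sym}(\{5,\dots,9\})$ and hence
\[
C_{\operatorname{Alt}(9)}(V)=V\times\operatorname{Alt}(\{5,\dots,9\})\cong 2^2\times\operatorname{Alt}(5),
\]
of order $240$. The computation for the second class of fours groups is similar.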
Consider $M \cap C_G(a_1)$, which is isomorphic to a subgroup of $\operatorname{Alt}(9)\cong O^3(C_G(a_1))$. Notice that $F$ does not commute with $O^3(C_G(a_1))$ (for we would then have $F$ commuting with an element of $3\mathcal{B}$, and such elements do not commute with a fours group) and so $M \cap C_G(a_1)$ is a proper subgroup of $O^3(C_G(a_1))$. By Lemma \ref{HN-Q_i's}, we have that $F\leqslant Q_2$ commutes with $Q_1\leqslant C_G(a_1)$. Also $F$ commutes with $R\leqslant C_G(a_1)$ and so $|M \cap C_G(a_1)|$ is a multiple of $2^53^2$. Moreover, $M \cap C_G(a_1)$ contains the subgroup $Q_1\<a_2\>\sim 2^{1+4}_+.3$. We check the maximal subgroups of $\operatorname{Alt}(9)$ (see \cite{atlas}) to see that $M \cap C_G(a_1)$ is isomorphic to either a subgroup of $\operatorname{Alt}(8)$ or the diagonal subgroup of index two in $\operatorname{Sym}(5) \times \operatorname{Sym}(4)$. The latter possibility leads to a Sylow $2$-subgroup of order $2^5$ with centre of order four, which is impossible as $2_+^{1+4}\cong Q_1\leqslant M\cap C_G(a_1)$. So $M\cap C_G(a_1)$ is isomorphic to a subgroup of $\operatorname{Alt}(8)$. Suppose it is isomorphic to a proper subgroup of $\operatorname{Alt}(8)$. We again check the maximal subgroups of $\operatorname{Alt}(8)$ (\cite{atlas}) to see that $M\cap C_G(a_1)$ is isomorphic to a subgroup of $N_{\operatorname{Alt}(8)}(\<(1,2)(3,4),(1,3)(2,4),(5,6)(7,8),(5,7)(6,8)\>)\sim 2^4{:}(\operatorname{Sym}(3) \times \operatorname{Sym}(3))$. This subgroup can be seen easily in $\mathrm{GL}_4(2)$ as the subgroup of matrices of shape \[\left(\begin{array}{cccc} \ast & \ast & 0 & 0 \\ \ast & \ast & 0 & 0 \\ \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast \end{array}\right).\] We calculate in this group that an extraspecial subgroup of order $2^5$ is not normalized by an element of order three. Therefore $M \cap C_G(a_1)$ is not isomorphic to a subgroup of this matrix group.
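For orientation, note that this matrix group has order
\[
2^4\,|\mathrm{GL}_2(2)|^2=2^4\cdot 6^2=2^63^2,
\]
in agreement with the shape $2^4{:}(\operatorname{Sym}(3)\times\operatorname{Sym}(3))$; in particular its Sylow $2$-subgroups have order $2^6$, so an extraspecial subgroup of order $2^5$ would have index two in a Sylow $2$-subgroup.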
Thus $M\cap C_G(a_1)\cong \operatorname{Alt}(8)$. In particular, $M$ has a subgroup isomorphic to $2^2 \times \operatorname{Alt}(8)$. Now we have that for every $r \in R^\#$, $C_M(r)\cong 3\times 2^2 \times \operatorname{Sym}(3)$ or $3\times 2^2 \times \operatorname{Alt}(5)$. Now $R\leqslant C_M(a_1)\cong \operatorname{Alt}(8)$ and so $R \in \operatorname{Syl}_3(C_M(a_1))$. Moreover, $\operatorname{Alt}(8)$ has two conjugacy classes of elements of order three. So we may set $R=\{1,a_2,a_2^2,a_3,a_3^2,b_1,b_1^2,b_2,b_2^2\}$, where $a_2$ is conjugate to $a_3$ in $C_M(a_1)$ and $b_1$ is conjugate to $b_2$ in $C_M(a_1)$, such that $C_{C_M(a_1)}(b_i)\cong 3 \times \operatorname{Alt}(5)$ ($i \in \{1,2\}$) and $C_{C_M(a_1)}(a_j)\cong 3 \times \operatorname{Sym}(3)$ ($j \in \{2,3\}$). Now we already have that $C_M(a_3) \cong C_M(a_2)\cong 3 \times 2^2 \times \operatorname{Sym}(3)$, and we have two possibilities for the structure of the other $3$-centralizers. Therefore we must have that $C_{M}(b_i)\cong 3 \times 2 \times 2 \times \operatorname{Alt}(5)$. Now by coprime action $C_{M/F}(Fb_i)\cong C_M(b_i)/F$ and $C_{M/F}(Fa_i)\cong C_M(a_i)/F$. Hence we may apply Corollary \ref{Cor-ParkerRowleyA8} to $M/F$ to conclude that $M/F\cong \operatorname{Alt}(8)$. Therefore $M\cong 2^2 \times \operatorname{Alt}(8)$. Consider $N_L(F)$. We have seen that $N_G(F)/M\cong \operatorname{Sym}(3)$ and so $[N_L(F):M]=2$. It follows that $N_L(F)/F\cong 2 \times \operatorname{Alt}(8)$ or $\operatorname{Sym}(8)$. For $b_1\in R$, $C_{M}(b_1)\cong 3 \times 2^2 \times \operatorname{Alt}(5)$ and so $C_{N_L(F)}(b_1)\sim 3 \times (2^2 \times \operatorname{Alt}(5)){:}2$ and $C_{N_L(F)}(b_1)/F\sim 3 \times \operatorname{Sym}(5)$, which is not a subgroup of $\operatorname{Alt}(8) \times 2$.
Thus we must have that $N_L(F)/F\cong \operatorname{Sym}(8)$ and so $C_{\wt{L}}(\wt{F})\cong 2 \times \operatorname{Sym}(8)$. Now let $F_0\leqslant C_G(a_2)$ have image $\<(1,2)(3,4),(1,3)(2,4)\>$ in $\operatorname{Alt}(9)\cong O^3(C_G(a_2))$. Then $C_G(a_2) \cap C_G(F_0)\cong 3 \times 2^2 \times \operatorname{Alt}(5)$. Now recall that $R\in \operatorname{Syl}_3(M)$ and $M=C_G(F)\cong 2^2 \times \operatorname{Alt}(8)$, and so there exists $r \in R^\#$ such that $M \cap C_G(r)\cong 3 \times 2^2 \times \operatorname{Alt}(5)$. Since all elements of $R^\#$ are conjugate in $G$, we have that $F_0$ is conjugate to $F$ in $G$. Thus $C_G(F_0)\cong C_G(F)$. \end{proof} Recall from Notation \ref{HN-Alt9notation} that $r_2$ is an involution in $O^3(C_G(a_2))$ which is conjugate to both $u$ and $r_2u$. In light of Lemma \ref{HN-fours groups centralize A8}, the following result is a calculation in a group isomorphic to $2 \times 2 \times \operatorname{Alt}(8)$. \begin{lemma}\label{HN-HS-centralizer of r,t,u} $O_2(C_{H \cap L}(r_2))$ has order $2^6$. \end{lemma} \begin{proof} It is clear from Notation \ref{HN-Alt9notation} that $\<r_2,u\>^\# \subseteq 2\mathcal{A}$. Set $F_0:=\<r_2,u\>$; then by Lemma \ref{HN-fours groups centralize A8}, $C_G(F_0)\cong 2\times 2 \times \operatorname{Alt}(8)$. Notice also from Notation \ref{HN-Alt9notation} that $t \in C_G(F_0) \cap C_G(a_2)\cong 3 \times 2^2 \times \operatorname{Alt}(5)$, which has an abelian subgroup containing $t$ isomorphic to $3 \times 2^4$. Consider $\<F_0,t\> \cap C_G(F_0)'$ (of course $C_G(F_0)'\cong \operatorname{Alt}(8)$), which has order two. If $\<F_0,t\> \cap C_G(F_0)'$ is $2$-central in $C_G(F_0)'$ then $C_G(F_0)' \cap C_G(t)$, which contains $a_2$, is isomorphic to the subgroup of $\operatorname{Alt}(8)$ of shape $2_+^{1+4}.\operatorname{Sym}(3)$.
However, this implies that $C_G(\<F_0,t\>) \cap C_G(a_2)\cong 2 \times 2 \times 2 \times 3$, which is not the case. Thus $\<F_0,t\> \cap C_G(F_0)'$ is not $2$-central in $C_G(F_0)'$ and so $C_G(F_0)' \cap C_G(t)$ is isomorphic to a subgroup of $\operatorname{Alt}(8)$ of shape $(2^2\times \operatorname{Alt}(4)){:}2$. Thus $C_{H \cap L}(r_2)\sim 2^2\times (2^2 \times \operatorname{Alt}(4)){:}2$ and so the order of the $2$-radical is clear. \end{proof} \begin{lemma}\label{HN-HS-Sylow 2} $H\cap L$ contains a Sylow $2$-subgroup of $L$, which has order $2^{11}$ and centre $\<t,u\>$. \end{lemma} \begin{proof} Let $S_u$ be a Sylow $2$-subgroup of $C_L(t)$. We have that $u \in Q_2 \leqslant Q_{12}$ and, since $u \in 2\mathcal{A}$, we may apply Lemma \ref{HN-orbits of elements in Q} to see that $|C_{{H}}({u})|=2^{11}\cdot 3\cdot 5$. Therefore $|S_u|=2^{11}$. Now, $u \in Q_2$ and $[Q_1,Q_2]=1$ (by Lemma \ref{HN-Q_i's}), so we have that $Q_1\leqslant C_{O_2(H)}(u) \leqslant S_u$. Moreover, $\mathbf{Z}(S_u)\leqslant C_{S_u}(Q_1) \leqslant Q_2$. Therefore $\mathbf{Z}(S_u)\leqslant \mathbf{Z}(C_{Q_2}(u))=\<t,u\>$ since $Q_2$ is extraspecial of order $2^5$. Hence $\mathbf{Z}(S_u)=\<t,u\>$. Since $\<t,u\>\leqslant Q_{12}$ and $Q_{12}$ is extraspecial, $u$ is conjugate to $ut$ in $Q_{12}$. Therefore $N_G(\<t,u\>)\leqslant C_G(t)$. So let $S_u \leqslant T_u\in \operatorname{Syl}_2(L)$; then $N_{T_u}(S_u) \leqslant N_{L}(\mathbf{Z}(S_u))=N_{L}(\<t,u\>) \leqslant H \cap L$, so that $N_{T_u}(S_u)=S_u$. Thus $S_u$ is a Sylow $2$-subgroup of $L$. \end{proof} \begin{lemma} $(H \cap L)/({Q_{12}} \cap L) \cong \operatorname{Sym}(5)$. \end{lemma} \begin{proof} Using Lemma \ref{HN-orbits of elements in Q}, we have that $C_H(u)/C_{Q_{12}}(u)\cong \operatorname{Alt}(5) \times 2$ or $\operatorname{Sym}(5)$. We suppose for a contradiction that $(H \cap L) /(Q_{12} \cap L)=C_H(u)/C_{Q_{12}}(u)\cong C_H(u){Q_{12}}/{Q_{12}}\cong 2 \times \operatorname{Alt}(5)$.
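Note that order considerations alone cannot distinguish the two candidate quotients: by Lemma \ref{HN-orbits of elements in Q} we have $|C_H(u)|=2^{11}\cdot 3\cdot 5$, while $|C_{Q_{12}}(u)|=2^8$ as $u$ is non-central in the extraspecial group $Q_{12}$ of order $2^9$, so that
\[
|C_H(u)/C_{Q_{12}}(u)|=\frac{2^{11}\cdot 3\cdot 5}{2^8}=120=|2\times\operatorname{Alt}(5)|=|\operatorname{Sym}(5)|.
\]
The two possibilities are instead separated by the module argument which follows.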
Now set $V:=C_{Q_{12}}(u)$; then $|V|=2^8$ and $\overline{V}$ is normalized by $C_H(u)/V\cong 2 \times \operatorname{Alt}(5)$. Recall from Notation \ref{HN-Alt9notation} that $r_2$ is an involution in $O^3(C_G(a_2))$ and from Lemma \ref{HN-alt9 observations} that $r_2\in C_H(a_2)\bs Q_2$. Since $[r_2,a_2]=1$, we have $[Vr_2,Va_2]=1$ and therefore $Vr_2\in \mathbf{Z}(C_H(u)/V)$. In particular, $C_{\overline{V}}(r_2)$ is preserved by $O^2(C_H(u)/V)\cong \operatorname{Alt}(5)$. Since $Va_2$ acts non-trivially on $\overline{V}$, $O^2(C_H(u)/V)$ acts non-trivially. That is to say, there exists a non-central $O^2(C_H(u)/V)$-chief factor of $V$. Moreover, this chief factor has order $2^4$. Now Lemma \ref{HN-HS-centralizer of r,t,u} gives us that $|O_2(C_{H \cap L}(r_2))|=2^6$. Clearly $C_V(r_2)$ is a normal $2$-subgroup of $C_{H \cap L}(r_2)$, as is $C_V(r_2)\<r_2\>$, and $\<r_2\>\nleq V$. Therefore $|C_V(r_2)|\leqslant 2^5$ and so $|C_{\overline{V}}(r_2)|=2^4$ or $2^5$. Suppose first that $|C_{\overline{V}}(r_2)|=2^4$; then $\overline{u} \in C_{\overline{V}}(r_2)$ is normalized by $O^2(C_H(u)/V)$ and so $C_{\overline{V}}(r_2)$ is necessarily a sum of trivial $O^2(C_H(u)/V)$-modules. Moreover, $\overline{V}/C_{\overline{V}}(r_2)$ has dimension three and is therefore also a sum of trivial $O^2(C_H(u)/V)$-modules. This is a contradiction. So suppose instead that $|C_{\overline{V}}(r_2)|=2^5$. Then $|[\overline{V},r_2]|=2^2$. Furthermore, $[\overline{V},r_2]$ is preserved by $O^2(C_H(u)/V)$. Thus $[\overline{V},r_2]$ is a sum of two trivial $O^2(C_H(u)/V)$-modules. Since $[\overline{V},r_2]\leqslant C_{\overline{V}}(r_2)$, it follows that $C_{\overline{V}}(r_2)$ is also a sum of trivial $O^2(C_H(u)/V)$-modules, as is $\overline{V}/C_{\overline{V}}(r_2)$. Again this gives us a contradiction. Hence we may conclude that $C_H(u)/C_{Q_{12}}(u)\cong C_H(u){Q_{12}}/{Q_{12}}\cong \operatorname{Sym}(5)$.
\end{proof} \begin{lemma}\label{HN-HS-1-finding 4*4*4} There exists an element of order four $d\in C_{Q_2}(u)$ such that $d^2=t$ and $4 \times 2\cong {\<d,u\>}\vartriangleleft {H \cap L}$. \end{lemma} \begin{proof} This is clear once we recall Lemma \ref{HN-InvOrbs} $(iii)$, which tells us that diagonal subgroups of $H/Q_{12}$ isomorphic to $\operatorname{Alt}(5)$ which centralize an involution in $\overline{Q_{12}}$ in fact centralize a subgroup of $Q_{12}$ isomorphic to $C_4 \times C_2$. We have that $(H\cap L)Q_{12}/Q_{12}\cong (H \cap L)/({Q_{12}} \cap L) \cong \operatorname{Sym}(5)$ contains such a subgroup. Finally, we must check that the element of order four is in $Q_2$; however, this is clear as $a_2\in H \cap L$ and $C_{Q_{12}}(a_2)=Q_2$. \end{proof} \begin{lemma}\label{HN-HS-complement to A} There exists a complement $C\cong \mathrm{GL}_3(2)$ to $C_L(E)$ in $N_L(E)$ such that $EC \leqslant C_G(F)$. \end{lemma} \begin{proof} Recall that $u \in F \leqslant Q_2$ and, by Lemma \ref{HN-fours groups centralize A8}, $2 \times 2 \times \operatorname{Alt}(8)\cong C_G(F)> C_G(u) \cap C_G(a_1)\cong \operatorname{Alt}(8)$. Notice that $E \leqslant Q_1 \leqslant C_G(F)$ since $[Q_1,Q_2]=1$. Notice also that $t\in C_G(u)\cap C_G(a_1)$. From Notation \ref{HN-Alt9notation}, the image of $t$ in $\operatorname{Alt}(9)\cong O^3(C_G(a_1))$ is $(1,2)(3,4)(5,6)(7,8)$ and so clearly $t$ lies in exactly one subgroup of $O^3(C_G(a_1))$ isomorphic to $\operatorname{Alt}(8)$. By Lemma \ref{HN-alt9 observations}, $O^3(C_G(a_1))$ contains a complement, $C$ say, to $C_G(E)$ in $N_G(E)$. Moreover, the image of $EC$ in $O^3(C_G(a_1))$ lies in a subgroup isomorphic to $\operatorname{Alt}(8)$ containing $t$. Therefore $EC\leqslant C_G(u)\cap C_G(a_1)\leqslant C_G(F)$.
\end{proof} \begin{lemma}\label{HN-HS-index two subgroup} We have that $L=FO^2(L)$ with $[L:O^2(L)]=2$, $\wt{N_{O^2(L)}(E)}\cong 4^3{:}\mathrm{GL}_3(2)$ and $\wt{C_{O^2(L)}(t)}\sim 2_+^{1+4}{*}4.\operatorname{Sym}(5)$. \end{lemma} \begin{proof} By Lemma \ref{HN-alt9 observations} $(v)$, a Sylow $3$-subgroup of $C_G(E)/E$ is self-centralizing in $G$ and therefore $3 \nmid |C_G(u) \cap C_G(E)|$. Hence $C_L(E)$ is a $2$-group. Notice that $|C_L(E)|\leqslant 2^8$ as, by Lemma \ref{HN-HS-complement to A}, a complement to $C_G(E)$ in $N_G(E)$ isomorphic to $\mathrm{GL}_3(2)$ is in $L$ and, by Lemma \ref{HN-HS-Sylow 2}, $|L|_2=2^{11}$. By Lemma \ref{HN-HS-1-finding 4*4*4}, there exists an element of order four $d\in C_{Q_2}(u)$ such that $d^2=t \in E$ and ${\<d,u\>}\vartriangleleft H \cap L$, which implies that $4\cong \<\wt{d}\>\vartriangleleft \wt{H \cap L}$. Moreover, $d \in Q_2\leqslant C_G(E)$ and so $\<\wt{d}\>\vartriangleleft \wt{N_L(E) \cap H}$. So consider $\<\wt{d}^{N_L(E)}\>\leqslant \wt{C_L(E)}$. Since $\wt{d}^2=\wt{t}\in \wt{E}$ and $N_L(E)$ is transitive on $E^\#$, we clearly have at least seven conjugates of $\<\wt{d}\>$ in $\wt{N_L(E)}$. Moreover, since $\<\wt{d}\>\vartriangleleft \wt{C_L(E)}$, the $N_L(E)$-conjugates of $\<\wt{d}\>$ in $\wt{C_L(E)}$ pairwise commute and generate a $2$-group of order at most $2^7$. Thus $\wt{N_L(E)}\vartriangleright \<\wt{d}^{{N_L(E)}}\>\cong 4 \times 4 \times 4$. Let $A$ be the preimage of $\<\wt{d}^{{N_L(E)}}\>$ in $L$, which contains $u$. Now by Lemma \ref{HN-HS-complement to A} there exists a complement, $C$, to $C_L(E)$ in $N_L(E)$. Moreover, $\wt{C}$ acts non-trivially on $\wt{A}$ and so $\wt{A}\wt{C}\sim 4^3{:}\mathrm{GL}_3(2)$. Recall that $u \in F \leqslant Q_2$ and, by Lemma \ref{HN-HS-complement to A}, $F \leqslant C_L(E)$. Therefore $\wt{F}$ normalizes $\wt{AC}$. Furthermore, by Lemma \ref{HN-fours groups centralize A8}, $C_{\wt{L}}(\wt{F})\cong 2 \times \operatorname{Sym}(8)$.
In particular, $\wt{F} \nleq \wt{A}$ and $[\wt{A}, \wt{F}] \neq 1$. By Lemma \ref{HN-HS-complement to A}, $[C,F]=1$. Thus $\wt{ACF}\sim 4^3{:}(2 \times \mathrm{GL}_3(2))$. Since $C_L(E)$ is a $2$-group, we must have that $\wt{ACF} = \wt{N_L(E)}$. We now apply Lemma \ref{Prelims-4*4*4 transfer} to $\wt{L}$ to say that $O^2(\wt{L})\neq \wt{L}$ and clearly $[\wt{L}:O^2(\wt{L})]=2$. Notice that $u \in O^2(L)$ because, recalling from Notation \ref{HN-Alt9notation} that the image of $u$ in $\operatorname{Alt}(9)\cong O^3(C_G(a_2))$ is $(1,2)(3,4)$, we see that an element of order four with image $(1,3,2,4)(5,6)$ squares to $u$. Thus $[{L}:O^2({L})]=2$. Let $L_0=O^2(L)$. It follows that $\wt{N_{L_0}(E)}\cong 4^3{:}\mathrm{GL}_3(2)$. Since $F\leqslant Q_{12} \cap L$ and $F \nleq L_0$, we have $Q_{12} \cap L_0<Q_{12} \cap L$. Since $[Q_1,a_2]=Q_1$, we have $Q_1 \leqslant L' \leqslant L_0$. Also $\<d,u\>\cong 4 \times 2$ is normal in $L \cap H$ and clearly $d\in L_0$. Thus $(2_+^{1+4} \ast 4) \times 2\sim Q_1\<d,u\>=Q_{12} \cap L_0$. Now $(H \cap L_0)/(Q_{12} \cap L_0)\cong \operatorname{Sym}(5)$ follows from an isomorphism theorem since \[\frac{H \cap L_0}{Q_{12} \cap L_0}=\frac{H \cap L_0}{(H \cap L_0) \cap (Q_{12} \cap L)}\cong \frac{(H \cap L_0)(Q_{12} \cap L)}{Q_{12} \cap L}=\frac{H \cap L}{Q_{12} \cap L}\cong \operatorname{Sym}(5).\] Thus $C_{L_0}(t)$ has $2$-radical $Q_{12} \cap L_0\cong (2_+^{1+4}*4)\times 2$ with quotient $\operatorname{Sym}(5)$. \end{proof} \begin{lemma} $L\cong 2^{\cdot}\mathrm{HS}{:}2$. \end{lemma} \begin{proof} We must prove that $O^2(\wt{L})$ satisfies the hypotheses of Theorem \ref{Aschbacher-HS} in order to recognize the sporadic simple group $\mathrm{HS}$. Now we have that $\wt{t}$ is an involution in $\wt{L_0}$ and, since $ut$ is not conjugate to $t$, we have that $C_{\wt{L_0}}(\wt{t})=\wt{(C_G(t) \cap L_0)} \sim 2_+^{1+4}{*}4.\operatorname{Sym}(5)$. Suppose that $g \in O^2(L)$ and $\wt{g}$ normalizes $\wt{E}$.
Then $g$ normalizes $E\<u\>$. Since $N_L(E)$ is transitive on $E^\#$ and we have seen that $tu \in 2 \mathcal{A}$, we have that $\{eu\mid e \in E^\#\}\subseteq 2\mathcal{A}$. Therefore $E\<u\> \cap 2\mathcal{B}=E^\#$. Hence $g$ normalizes $E$. Thus $N_{\wt{O^2(L)}}(\wt{E})=\wt{N_{O^2(L)}(E)}\cong 4^3{:}\mathrm{GL}_3(2)$. We therefore apply Theorem \ref{Aschbacher-HS} to $\wt{O^2(L)}$ to see that $\wt{O^2(L)}\cong \mathrm{HS}$ and so $O^2(L)\cong 2^{\cdot}\mathrm{HS}$ or $2\times \mathrm{HS}$. We have seen that $L$ does not split over $\<u\>$ and we have also seen that $L=FO^2(L)$. Thus we must have that $L\cong 2^{\cdot}\operatorname{Aut}(\mathrm{HS})\cong 2^{\cdot}\mathrm{HS}{:}2$. \end{proof} \begin{lemma} In Case I, $G \cong \mathrm{HN}$. \end{lemma} \begin{proof} We have that $G$ is a finite group with involutions $u$ and $t$ such that $L=C_G(u) \sim (2^{\cdot}\mathrm{HS}){:}2$ and $C_G(t)\sim 2_+^{1+8}.(\operatorname{Alt}(5)\wr 2)$, with $O_2(H)=Q_{12}$ and, by Lemma \ref{HN-Q_i's}, $C_G(Q_{12})\leqslant Q_{12}$. Thus, by Theorem \ref{Segev-HN}, $G \cong \mathrm{HN}$. \end{proof} Now we recall Case II and Lemma \ref{HN-Indextwo} to see that in Case II, $G$ has a proper normal subgroup $G_0$ of even index, and we have proved that $G_0\cong \mathrm{HN}$. By a Frattini argument, $G=G_0N_G(S)$ ($S \in \operatorname{Syl}_3(G)$) and so it follows using Lemma \ref{HN-Q_i's} that $[G:G_0]=2$. We check finally that $G\ncong 2\times \mathrm{HN}$; for this we can use, for example, that $C_G(J)\leqslant J$ (Lemma \ref{HN-J is self-centralizing}). Thus, in Case II, $G \cong \mathrm{Aut}(\mathrm{HN})$. \end{document}
\begin{document} \draft \title{Comparisons of spectra determined using detector atoms and spatial correlation functions} \author{M. Havukainen} \address{ Helsinki Institute of Physics, PL 9, FIN-00014 Helsingin yliopisto, Finland } \date{\today} \maketitle \begin{abstract} We show how two-level atoms can be used to determine the local time dependent spectrum. The method is applied to a one dimensional cavity. The spectrum obtained is compared with the mode spectrum determined using spatially filtered second order correlation functions. The spectra obtained using two-level atoms are identical to the mode spectrum. One benefit of the method is that only one-time averages are needed. It is also more closely related to a realistic measurement scheme than any other definition of a time dependent spectrum. \end{abstract} \begin{multicols}{2} \narrowtext \section{Introduction} \label{Introduction} It has turned out to be difficult to give a universally accepted definition of the time dependent spectrum. There are several definitions \cite{page,lampard,silverman} which have not become popular in optics. A definition which is connected to a realistic spectrum measurement was given by Eberly and W\'odkiewicz \cite{eberly}. Their 'physical spectrum' has become a kind of canonical definition of a time dependent spectrum in quantum optics. All the definitions mentioned above are based on integrals, typically Fourier transforms, over different two-time averages of classical stochastic variables or quantum mechanical operators. In our earlier paper \cite{analatompaper}, a totally new approach was introduced. Instead of calculating two-time averages, the radiation is guided to a group of two-level atoms. All the atoms have equal, very small decay constants, but their resonance frequencies are all different. The spectrum measured by the atoms can be read from the excitation probabilities of the atoms.
We have studied the spectrum of resonance fluorescence radiation from a laser driven three-level atom \cite{analatompaper}. The method was shown to give exactly the same result as the 'physical spectrum' defined by Eberly and W\'odkiewicz. In order to calculate the two-time averages needed to determine the 'physical spectrum', the Quantum Regression Theorem (QRT) \cite{lax} must be used. It has been shown that there are systems where the QRT cannot be used, i.e., the two-time averages it gives are incorrect \cite{oconnel}. This is the case if the interaction between the matter which emits the radiation and the radiation field is strong. The interaction can be strong, e.g., in microcavities. In the method of analyzer atoms, two-time averages are not needed. The method can therefore also be used in situations where the QRT is known to give incorrect results. It can be said that it is closer to a realistic spectrum measurement than any other definition. In our earlier paper, the three-level atom was coupled to the analyzer atoms using cascaded master equations \cite{cascadegardiner,cascadecarmichael}. Thus the quantum mechanical state of the field was not available. In this paper we connect the method of analyzer atoms to our cavity QED simulations in one dimension. In these simulations there are two-level atoms inside a one dimensional cavity. The field is quantized using canonical quantization. The state vector is restricted to have only one excitation, i.e., the field strength is very low. The model was introduced by V. Bu\v{z}ek et al. \cite{buzekczech,buzekkorean} and has been used in several different studies in one \cite{decay} and two dimensions \cite{cavity2dpaper}. The quantum mechanical state of the field is available, so the spectrum of the radiation can be read directly from the excitations of the modes. The spatial dependence of the spectrum is obtained by calculating the second order correlation function $g(r_1,r_2)$ and using appropriate filter functions.
The typical situation in the simulations is that the initial field intensity is split into left and right propagating parts using two-level atoms. The spectra of both parts are determined using both analyzer atoms and filtered correlation functions. In all cases the two different methods give identical results. In Sec. II we specify the model used in the simulations. Spatial correlation functions and their relation to the spectrum are explained in Sec. III. In Sec. IV we present the analyzer atom method for spectrum measurements. In Sec. V we show several results of our simulations and finally in Sec. VI we present our conclusions. \section{Hamiltonians and State vectors} \subsection{Hamiltonians} In our simulations the mode functions in the one dimensional cavity ($0\leq r\leq L$) are \begin{equation} \label{modefunctions} G_n(r)=\sin(k_nr), \ \ k_n=n\frac{\pi}{L}, \ \ n=1,2,3,\ldots \end{equation} All mode functions are zero at the edges of the cavity, $r=0$ and $r=L$. The electric and magnetic field operators can be expanded using the mode functions (\ref{modefunctions}) \begin{eqnarray} \label{EandB} \hat{E} & = & \sum\limits_{n=1}^{\infty}\left(\frac{\hbar\omega_n}{\epsilon_0L}\right)^{1/2}\sin(k_nr)(\hat{a}_n+\hat{a}_n^{\dagger}) \\ \hat{B} & = & i\sum\limits_{n=1}^{\infty}\left(\frac{\hbar\omega_n\mu_0}{L}\right)^{1/2}\sin(k_nr)(\hat{a}_n-\hat{a}_n^{\dagger}), \nonumber \end{eqnarray} where $\hat{a}_n$ and $\hat{a}_n^{\dagger}$ are the annihilation and creation operators. The operators satisfy the usual canonical commutation relations \begin{eqnarray} [\hat{a}_n,\hat{a}_m^{\dag}] & = & \delta_{nm}, \\ \left[\hat{a}_n,\hat{a}_m\right] & = & [\hat{a}_n^{\dag},\hat{a}_m^{\dag}]=0.
\end{eqnarray} For the energy density operator we get, using the expansions (\ref{EandB}), \begin{eqnarray} \hat{H}(r) & = & \frac{1}{2}\epsilon_0\hat{E}(r)^2+\frac{1}{2\mu_0}\hat{B}(r)^2 \\ & = & \frac{2\hbar}{L}\sum\limits_{n=1}^{\infty}\sum\limits_{n'=1}^{\infty}\sqrt{\omega_n\omega_{n'}}\sin(k_nr)\sin(k_{n'}r)(\hat{a}_{n'}^{\dagger}\hat{a}_n+\frac{1}{2}).\nonumber \end{eqnarray} Integration of $\hat{H}(r)$ over the whole cavity and use of the orthogonality integral \begin{equation} \label{orthogonalintegral} \int\limits_0^L\sin(k_nr)\sin(k_{n'}r)dr=\frac{L}{2}\delta_{nn'}, \ \ \ n,n'\geq 1 \end{equation} gives the familiar field Hamiltonian \begin{equation} \hat{H}_F=\sum\limits_{n=1}^{\infty}\hbar\omega_n(\hat{a}_n^{\dagger}\hat{a}_n + \frac{1}{2}). \end{equation} At fixed positions inside the cavity there are $N_A$ two-level atoms with resonance frequencies $\omega_j$ and dipole constants $D_j$. The atomic Hamiltonian is the sum over all one atom Hamiltonians \begin{equation} \hat{H}_A=\sum\limits_{j=1}^{N_A}\hbar\omega_j\hat{\sigma}_z^j, \end{equation} where $\hat{\sigma}_z$ is a Pauli spin matrix. The radiation field and the atoms are coupled through dipole coupling. The dipole operator of the $j$:th atom is \begin{equation} \label{dipoleoperator} \hat{D}_j=(D_j\hat{\sigma}_+^j + D_j^*\hat{\sigma}_-^j). \end{equation} For the interaction Hamiltonian we get \begin{equation} \hat{H}_I=-\sum\limits_{j=1}^{N_A}\hat{D}_j\cdot\hat{E}(r_j), \end{equation} where $\hat{E}(r_j)$ is the electric field operator (\ref{EandB}) at the atomic position $r_j$. Using the expansion (\ref{EandB}) for the field operator and (\ref{dipoleoperator}) for the dipole operator we get \begin{equation} \label{jaynescummings} \hat{H}_I=-\sum\limits_{j=1}^{N_A}\sum\limits_{n=1}^{\infty}\left(\frac{\hbar\omega_n}{\epsilon_0L}\right)^{1/2}\sin(k_nr_j)(D_j\hat{\sigma}_+^j\hat{a}_n + D_j^*\hat{\sigma}^j_-\hat{a}_n^{\dag}).
\end{equation} The terms $\hat{\sigma}_+^j\hat{a}_n^{\dag}$ and $\hat{\sigma}_-^j\hat{a}_n$, which do not affect the time evolution significantly, have been neglected. This approximation is called the rotating wave approximation (RWA). The Hamiltonian (\ref{jaynescummings}) has the familiar Jaynes-Cummings form. The total Hamiltonian is the sum of the field, atomic and interaction Hamiltonians: \begin{equation} \hat{H}=\hat{H}_F + \hat{H}_A + \hat{H}_I. \end{equation} \subsection{State vectors} The state vector is restricted to have only one excitation. The most general such state vector has the form \begin{eqnarray} \label{generalstatevector} |\Psi\rangle & = & \sum\limits_{k}\left( c_{k}|1\rangle_{k}\prod\limits_{{k'}\neq {k}}|0\rangle_{k'}\right) \otimes\prod\limits_{j=1}^{N_A}|0\rangle_j\nonumber\\ & & +\prod\limits_{k}|0\rangle_{k}\otimes\sum\limits_{j=1}^{N_A}\left( c_j|1\rangle_j\prod\limits_{j'=1,j'\neq j}^{N_A}|0\rangle_{j'}\right) \\ & \equiv & \sum\limits_{k}c_{k}|1_{k},0\rangle + \sum\limits_{j=1}^{N_A}c_j|0,1_j\rangle. \nonumber \end{eqnarray} In the first sum the excitation is in one of the field modes, and in the second it is in one of the atoms. If the initial state vector has only one excitation it never acquires more, because the RWA interaction Hamiltonian conserves the excitation number. The state vector at all times is therefore given by equation (\ref{generalstatevector}). The state vector used as the initial state in our simulations is of the form \begin{equation} \label{gaussianphoton} |\Psi\rangle=\sum\limits_{k}(2\pi\sigma_k^2)^{-1/4}\exp\left(-ikr_0 - \frac{(k-k_0)^2}{4\sigma_k^2}\right)|1_k,0\rangle. \end{equation} The mode distribution $|c_k|^2$ is a Gaussian centered at $k=k_0$ with variance $\sigma_k^2$. The phase factor $e^{-ikr_0}$ is important: it guarantees that the energy density distribution is also a Gaussian, centered at $r_0$.
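The Gaussian initial state (\ref{gaussianphoton}) is easy to construct numerically. The following sketch (not the paper's code; $L$, $k_0$, $\sigma_k$ and $r_0$ are illustrative values) builds the mode distribution on a finite mode grid, renormalizes it discretely, and checks that it is centered at $k_0$:

```python
import numpy as np

# Sketch: mode coefficients c_k of the Gaussian one-photon state,
# c_k = (2 pi sigma_k^2)^(-1/4) exp(-i k r0 - (k - k0)^2 / (4 sigma_k^2)),
# on a finite grid k_n = n pi / L. All parameter values are illustrative.
L = 2 * np.pi
n = np.arange(1, 401)                 # mode numbers n = 1..400
k = n * np.pi / L                     # k_n = n * pi / L
k0, sigma_k, r0 = 50.0, 2.0, 2.0

c = (2 * np.pi * sigma_k**2) ** (-0.25) * np.exp(
    -1j * k * r0 - (k - k0) ** 2 / (4 * sigma_k**2)
)
c /= np.linalg.norm(c)                # discrete normalization: sum |c_k|^2 = 1

prob = np.abs(c) ** 2                 # the mode distribution |c_k|^2
k_mean = np.sum(k * prob)             # should sit at k0 for this symmetric grid
```

The phase $e^{-ikr_0}$ does not affect $|c_k|^2$; it only shifts the position-space envelope to $r_0$, as noted above.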
\section{Correlation functions and the mode spectrum} \subsection{Correlation functions} The exact quantum mechanical state of the field is determined uniquely if all correlation functions of the operators $\hat{E}(r)$ and $\hat{B}(r)$ are known. In our simulations, because of the special form of the state vector (\ref{generalstatevector}), the second order correlation function determines the field state uniquely. In the following we set $\epsilon_0=\mu_0=1$. Using the expansions (\ref{EandB}) we get for the normally ordered correlation function \end{multicols} \widetext \begin{eqnarray} \label{correE} \lefteqn{\langle\Psi(t)|:\hat{E}(r_1)\hat{E}(r_2):|\Psi(t)\rangle} \nonumber \\ & & \hspace{1.5cm}=\sum\limits_{n=1}^{\infty}\sum\limits_{m=1}^{\infty}\frac{1}{L}\sqrt{\omega_n\omega_m}\sin(k_nr_1)\sin(k_mr_2)\langle\Psi(t)|\hat{a}_n^{\dagger}\hat{a}_m + \hat{a}_m^{\dagger}\hat{a}_n|\Psi(t)\rangle \\ & & \hspace{1.5cm}=\sum\limits_{n=1}^{\infty}\sum\limits_{m=1}^{\infty}\frac{1}{L}\sqrt{\omega_n\omega_m}\sin(k_nr_1)\sin(k_mr_2)(c_n^*(t)c_m(t) + c_m^*(t)c_n(t)) \nonumber \\ & & \hspace{1.5cm}=T^*(r_1,t)T(r_2,t) + T(r_1,t)T^*(r_2,t), \nonumber \end{eqnarray} \begin{multicols}{2} \narrowtext where \begin{equation} \label{T} T(r,t)=\sum\limits_{n=1}^{\infty}\sqrt{\frac{\omega_n}{L}}\sin(k_nr)c_n(t). \end{equation} A similar calculation gives \begin{eqnarray} \label{correB} \lefteqn{\langle\Psi(t)|:\hat{B}(r_1)\hat{B}(r_2):|\Psi(t)\rangle} \\ & & = T^*(r_1,t)T(r_2,t) - T^*(r_2,t)T(r_1,t).\nonumber \end{eqnarray} It is possible to invert the formula (\ref{T}) and obtain the mode coefficients in Eq. (\ref{generalstatevector}). Multiplying both sides by $\sin(k_mr)$ and using the orthogonality integral (\ref{orthogonalintegral}) we get \begin{equation} c_m(t)=\frac{2}{\sqrt{\omega_mL}}\int\limits_0^L\sin(k_mr)T(r,t)dr. \end{equation} So if $T(r,t)$ is known, the mode coefficients can be calculated.
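The inversion above is a straightforward sine projection. A minimal numerical sketch (with made-up coefficients, not data from the simulations): build $T(r)$ from a few mode coefficients and recover one of them by the projection integral, here approximated by a Riemann sum.

```python
import numpy as np

# Sketch: verify c_m = 2 / sqrt(omega_m L) * int_0^L sin(k_m r) T(r) dr
# for T(r) = sum_n sqrt(omega_n / L) sin(k_n r) c_n. Units with c = 1,
# so omega_n = k_n. Coefficients and grid sizes are illustrative.
L = np.pi
N = 8
k = np.arange(1, N + 1) * np.pi / L          # k_n = n * pi / L
omega = k                                     # omega_n = k_n (c = 1)

rng = np.random.default_rng(1)
c = rng.normal(size=N) + 1j * rng.normal(size=N)

r = np.linspace(0, L, 4001)
dr = r[1] - r[0]
T = (np.sqrt(omega / L)[:, None] * np.sin(np.outer(k, r)) * c[:, None]).sum(0)

m = 4                                         # recover c_4
c_m = 2 / np.sqrt(omega[m - 1] * L) * np.sum(np.sin(k[m - 1] * r) * T) * dr
# c_m should agree with c[m - 1] up to discretization error
```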
It is also possible to calculate the coefficients $c_n$ from the correlation functions (\ref{correE}) and (\ref{correB}). Let us define \begin{eqnarray} \label{w} \lefteqn{W(r_1,r_2)} \nonumber \\ & = &\frac{1}{2}\langle\Psi(t)|\left( :\hat{E}(r_1)\hat{E}(r_2): + :\hat{B}(r_1)\hat{B}(r_2):\right) |\Psi(t)\rangle \nonumber\\ & = & T^*(r_1)T(r_2). \end{eqnarray} Multiplying both sides by $\sin(k_1r_1)\sin(k_pr_2)$ and integrating over $r_1$ and $r_2$ we get \begin{equation} \label{modecoefficientsfromw} c_1^*c_p=\frac{4}{L\sqrt{\omega_1\omega_p}}\int\limits_0^Ldr_1\int\limits_0^Ldr_2\sin(k_1r_1)\sin(k_pr_2)W(r_1,r_2). \end{equation} This gives an equation for each $c_p$. For $p=1$ we get $|c_1|^2$; choosing $c_1$ to be real determines $c_1$, and the rest of the coefficients are then determined uniquely. Normalization can be used as a check of the calculation. \subsection{A local mode spectrum} \label{sectionlocalmodespectrum} The mode spectrum $|c_k|^2$ gives the time dependent spectrum of the radiation in the whole cavity. In order to get the spectrum of the radiation in only some part of the cavity, we use filtered correlation functions. The correlation function (\ref{w}) is replaced by a filtered correlation function \begin{equation} \label{filteredw} W_F(r_1,r_2;r_0) = g^F(r_1,r_2;r_0)W(r_1,r_2), \end{equation} where the filter function $g^F$ is a real-valued window function centered at $r_0$. The method is similar to the Windowed Fourier Transform (WFT), Ref. \cite{fundamentalsofwavelets,anintroductiontowavelets}, which is used to determine the time dependent frequency distribution of a time dependent signal. The mode spectrum is calculated using the filtered correlation function (\ref{filteredw}) in equation (\ref{modecoefficientsfromw}). The spectrum obtained is the mode spectrum of the radiation in the part of the cavity where the filter function is nonzero.
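The recovery of the quadratic combinations $c_1^*c_p$ from the equal-time correlation can be checked numerically. A sketch with illustrative coefficients (not the simulation data): build $W(r_1,r_2)=T^*(r_1)T(r_2)$ on a grid and evaluate the double sine integral of Eq. (\ref{modecoefficientsfromw}) by a Riemann sum.

```python
import numpy as np

# Sketch: c_1^* c_p = 4 / (L sqrt(omega_1 omega_p))
#                     * int int sin(k_1 r1) sin(k_p r2) W(r1, r2) dr1 dr2,
# with W(r1, r2) = T^*(r1) T(r2). Units with c = 1, illustrative values.
L = np.pi
N = 6
k = np.arange(1, N + 1) * np.pi / L
omega = k

rng = np.random.default_rng(0)
c = rng.normal(size=N) + 1j * rng.normal(size=N)
c[0] = abs(c[0])                          # convention: choose c_1 real
c /= np.linalg.norm(c)

r = np.linspace(0, L, 1001)
dr = r[1] - r[0]
T = (np.sqrt(omega / L)[:, None] * np.sin(np.outer(k, r)) * c[:, None]).sum(0)
W = np.conj(T)[:, None] * T[None, :]      # W(r1, r2) = T*(r1) T(r2)

p = 3
kernel = np.sin(k[0] * r)[:, None] * np.sin(k[p - 1] * r)[None, :]
integral = np.sum(kernel * W) * dr * dr
recovered = 4 / (L * np.sqrt(omega[0] * omega[p - 1])) * integral
# recovered approximates conj(c[0]) * c[p-1]
```

In the filtered version of the calculation one simply multiplies $W$ by the window $g^F$ before integrating, which restricts the recovered spectrum to the windowed region.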
A typical example of a filter function of one variable is the Gaussian filter \begin{equation} g^F(r;r_0)=(2\pi\sigma_r^2)^{-1/2}\exp\left(-\frac{(r-r_0)^2}{2\sigma_r^2}\right), \end{equation} where $\sigma_r^2$ is the variance of the window. A two-dimensional filter is a product of two one-dimensional ones. Another filter used in our simulations is a constant filter, which is unity in some region and zero everywhere else \begin{equation} g^F(r) = \left\{ \begin{array}{ll} 1\ \ \ \ \ \ \ & r_{min}\leq r\leq r_{max} \\ 0\ \ \ \ \ \ \ & r< r_{min},\ \ r> r_{max} \end{array} \right. . \end{equation} The general behavior of the spectrum with all filter functions is that the broader the filter function, the finer the details that can be resolved in the spectrum. \section{Traditional definitions of the spectrum and analyzer atoms} \subsection{Traditional definitions of the spectrum} In the previous section the spectrum was calculated using '{\it spatial correlation functions at a specific time}'. The traditional method to get the spectrum is to calculate the Fourier transform of two-time averages of field operators, i.e., to use '{\it time correlation functions at a specific point}'. For a stationary field the spectrum can be defined as \begin{equation} \label{fourierspectrum} S(\omega)=\int\limits_{-\infty}^{\infty}d\tau\langle\hat{E}(t)\hat{E}(t+\tau)\rangle e^{i\omega\tau}. \end{equation} This definition is valid only for stationary fields. There are many generalizations of the Fourier spectrum to nonstationary fields \cite{page,lampard,silverman}. A method which also takes the measurement scheme into account was presented by Eberly and W\'odkiewicz \cite{eberly}.
They define a 'physical spectrum' as a double Fourier transform of two-time averages multiplied by filter functions \end{multicols} \widetext \begin{equation} \label{physicalspectrum} S_{phys}(t,\omega_f,\Gamma_f) = \Gamma_f^2\int\limits_0^{\infty}d\tau_1\int\limits_0^{\infty}d\tau_2\exp(-(\Gamma_f-i\omega_f)\tau_1)\exp(-(\Gamma_f+i\omega_f)\tau_2)\langle\hat{E}^*(t-\tau_1)\hat{E}(t-\tau_2)\rangle . \end{equation} \begin{multicols}{2} \narrowtext The frequency filters are placed in front of the photodetector. They allow only radiation with a certain frequency to pass to the detector. The spectrum is obtained from the relative intensities measured by the detector when the filters are tuned to different frequencies. The spectrum obtained is time dependent, and the definition incorporates the measurement scheme. Our method of including time dependence in the spectrum is analogous to the method shown in the previous section. The definition (\ref{physicalspectrum}) uses '{\it time filtered}' correlation functions, whereas in the last section '{\it spatially filtered}' correlation functions were used. In many problems of quantum optics the radiation field is traced out of the Hilbert space. As a result the time evolution of the system is described by a master equation, which carries no information about the field degrees of freedom. Using this approach it is not possible to calculate the spatial correlation functions; in that case definitions based on time correlation functions, like (\ref{fourierspectrum}) and (\ref{physicalspectrum}), are the only useful ones. \subsection{Analyzer atoms} \label{sectionanalyzeratomspectrum} In our earlier paper \cite{analatompaper}, a totally different approach to the time dependent spectrum was introduced. The radiation from the system of interest is guided to a group of $N$ two-level atoms which all have the same very small decay constant $\Gamma$.
The resonance frequencies of the atoms are all different \begin{equation} \label{analatomfrequencies} \omega_n=n\cdot\Delta\omega, \ \ \ \ \Delta\omega=\frac{\omega_{max}-\omega_{min}}{N-1}, \ \ \ \ n=1,2,\ldots,N. \end{equation} Initially all atoms are in the ground state. The incoming radiation excites the atoms. Because the decay constants are equal and very small, the excitation probabilities of the atoms are directly proportional to the intensity of the incoming radiation at their resonance frequencies. The normalized excitation as a function of the resonance frequency of the atoms gives the time dependent spectrum of the radiation. A detailed description of the method and a comparison with a spectrum (\ref{physicalspectrum}) calculated using two-time averages can be found in Ref. \cite{analatompaper}. In those simulations cascaded master equations were used to describe a three-level atom and a group of analyzer atoms. The two spectra were shown to be identical for the resonance fluorescence radiation of a three-level atom. The analyzer atom method can be used in our cavity simulations. Now the atoms, with frequencies given by Eq. (\ref{analatomfrequencies}) and very small decay constants, are located at some point inside the cavity. As in the case of master equations, the excitations should give the local time dependent spectrum of the radiation which passes them. In the next section the spectrum measured using detector atoms is compared with the mode spectrum calculated using filtered spatial correlation functions. \section{Simulations} In this section we show results of spectrum measurements in three different cases. In all simulations there is one atom or several atoms at the center of the cavity. The initial field state is localized on the left and is moving to the right. The atom at the center splits the photon into left and right propagating parts.
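The detector grid of Eq. (\ref{analatomfrequencies}) is simple to write down. A sketch with illustrative values; we take the natural reading in which the grid spans $[\omega_{min},\omega_{max}]$, i.e. $\omega_n=\omega_{min}+(n-1)\Delta\omega$, which is an assumption about the intended offset of the formula:

```python
import numpy as np

# Sketch: analyzer-atom frequency grid. N atoms share one small decay
# constant; their resonances sample [omega_min, omega_max] uniformly with
# spacing d_omega = (omega_max - omega_min) / (N - 1). Values illustrative.
def analyzer_frequencies(omega_min, omega_max, N):
    d_omega = (omega_max - omega_min) / (N - 1)
    return omega_min + d_omega * np.arange(N)   # omega_n, n = 1..N

omega = analyzer_frequencies(80.0, 120.0, 201)
# a grid of 201 resonances between 80.0 and 120.0 with spacing 0.2
```

Reading the normalized excitation probability of atom $n$ against $\omega_n$ then gives the measured spectrum on this grid.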
The spectra of these two parts are measured separately using analyzer atoms and filtered correlation functions. In the first simulation the initial field state is the Gaussian (\ref{gaussianphoton}) and there is one atom at the center. In the second simulation there are three atoms at the center. In the last simulation there is only one center atom and the field state is a superposition of several random Gaussian states. The units in our simulations are chosen in such a way that $c=\epsilon_0=\mu_0=\hbar=1$. \subsection{A Gaussian photon and one atom} In the first simulation a Gaussian photon (\ref{gaussianphoton}) propagates towards the center of the cavity. The length of the cavity is $L=2\pi$. The mode distribution is a Gaussian (\ref{gaussianphoton}), centered at $k_0=100.0$ with the width $\Gamma_{ph}=4\pi$. The intensity profile is centered at $r_0=2.0$. The initial intensity is shown in Fig. \ref{intensityoneatom}. \pagebreak \begin{figure}[htp] \centerline{\psfig{file=figures/intensityoneatom.ps,width=10.0cm,bbllx=1cm,bblly=1cm,bburx=21cm,bbury=27cm,angle=90,clip=}} \caption{The energy density of the photon at $t=0.0$ and $t=3.8$. The initial energy density is a Gaussian (\protect\ref{gaussianphoton}) with parameters $r_0=2.0$, $k_0=100.0$ and $\Gamma_{ph}=4\pi$. At $t=3.8$ there is one peak propagating to the left and two to the right. The initial intensity is split as a result of the interaction with the center atom, marked by a cross. The circles show the positions of the analyzer atoms. The length of the cavity is $L=2\pi$ and the number of modes is $N_{mode}=400$.} \label{intensityoneatom} \end{figure} At the center, $r=\frac{L}{2}$, there is a two-level atom which is exactly on resonance with the photon, $\omega_0=100.0$. The decay constant of the atom is $\Gamma=\pi$, so the linewidth of the atom is narrower than the width of the mode distribution of the field. When the photon reaches the atom, the atom acquires a nonzero excitation.
Later the atom emits the energy back into the field modes via decay to the ground state. The population of the excited state as a function of time is shown in Fig. \ref{centeratomexcitation}. \begin{figure}[htp] \centerline{\psfig{file=figures/centeratomexcitation.ps,width=10.0cm,bbllx=1cm,bblly=1cm,bburx=21cm,bbury=27cm,angle=90,clip=}} \caption{The excitation of the center atom. The photon excites the atom, which decays exponentially to the ground state. The decay constant of the atom is $\Gamma=\pi$.} \label{centeratomexcitation} \end{figure} As a result of the interaction, part of the energy is reflected to the left and part is able to pass the atom. The intensity at $t=3.8$, after the interaction, is shown in Fig. \ref{intensityoneatom}. The intensity profile on the right has two peaks. The first peak has propagated to the right without interaction and the second is the result of the atomic decay. In addition to the center atom there are $600$ atoms which detect the spectrum of the radiation which passes them, as was explained in Sec. IV. At $r=1.8$, i.e., to the left of the center atom, there are $200$ atoms which measure the spectrum of the reflected radiation. The dipole couplings of these atoms are time dependent in such a way that they are zero while the initial Gaussian photon propagates to the right. At $t=1.5$ the atomic dipoles take the values determined by (\ref{analatomfrequencies}) and start to detect the spectrum of the radiation. On the right at $r=\frac{L}{2}+1.0$ there are two groups of analyzer atoms. The first group ($200$ atoms) measures the spectrum all the time. After the first peak on the right has passed them, its spectrum can be read from the excitations. After the second peak has passed them, the atoms give the spectrum of all the radiation to the right of the center atom, i.e., the spectrum of the first and second peaks combined. Another group at the same position $r=\frac{L}{2}+1.0$ detects the spectrum of the second peak on the right.
The dipole constants of these atoms get nonzero values (\ref{analatomfrequencies}) immediately after the first peak has passed them. The spectra of the total intensity on the right and left, measured using the atoms, are shown in Fig. \ref{analatomspecleftandrightoneatom}. \begin{figure}[htp] \centerline{\psfig{file=figures/analatomspecleftandrightoneatom.ps,width=10.0cm,bbllx=1cm,bblly=1cm,bburx=21cm,bbury=27cm,angle=90,clip=}} \caption{The normalized spectra of the radiation on the left and right measured by the analyzer atoms, Fig. \protect\ref{intensityoneatom}. The resonant radiation has been reflected to the left and the off-resonant radiation has propagated to the right. The spectrum has been read from the atoms after all the radiation has propagated through them. Both spectra have been normalized in such a way that the area under the curves is unity.} \label{analatomspecleftandrightoneatom} \end{figure} On the right the spectrum has a two-peak structure. It is interesting that on the right there is no intensity at the resonance frequency of the atom. The center atom has been able to reflect the resonant radiation to the left, and only off-resonant radiation passes the atom. The fact that the resonant radiation is missing is interesting because the second peak of the intensity profile is the result of the atomic decay. As was explained earlier, we also measured the spectra of the two peaks on the right separately. The result is shown in Fig. \ref{analatomspectwopeaksoneatom}. \begin{figure}[htp] \centerline{\psfig{file=figures/analatomspectwopeaksoneatom.ps,width=10.0cm,bbllx=1cm,bblly=1cm,bburx=21cm,bbury=27cm,angle=90,clip=}} \caption{The measured spectra of the two peaks on the right (Fig. \protect\ref{intensityoneatom}) separately. The spectrum of the first peak is broad because the intensity profile is narrow. The second peak has the typical Lorentzian spectrum of free decay. When the spectrum of both peaks is measured together, as in Fig.
\protect\ref{analatomspecleftandrightoneatom}, the second peak creates a stimulated-decay-type effect on the detector atoms and the measured spectrum has a dip in the middle. The normalization of the spectra is the same as in Fig. \protect\ref{analatomspecleftandrightoneatom}.} \label{analatomspectwopeaksoneatom} \end{figure} The spectrum of the first peak is quite close to the initial spectrum. The spectrum of the second peak is narrower and is similar to the spectrum of spontaneous decay. The fact that the spectrum of the second peak is narrower is understandable, because its intensity profile is wider and the interaction time with the detector atoms is longer. The combined spectrum of the two peaks has a dip in the middle, Fig. \ref{analatomspecleftandrightoneatom}, whereas the spectra of the peaks measured separately both have a single peak structure. This shows that the spectrum is not additive. The second peak causes a stimulated-decay-type effect, and the excitations of the detector atoms which are on resonance with the radiation get smaller. Next we calculate the mode spectrum using the filtered correlation functions, as was explained in Sec. III. The normally ordered correlation function $\langle:\hat{E}(r_1)\hat{E}(r_2):\rangle$ at $t=3.8$ is shown in Fig. \ref{corre1and2} (a). The diagonal elements $r_1=r_2$ give the energy density of the radiation. Off-diagonal elements show the coherence of the radiation. In order to calculate the spectrum of the radiation in the left part of the cavity, a filtered correlation function is used. The part $r_1, r_2 \geq \frac{L}{2}$ is replaced by zero. The filtered correlation function is shown in Fig. \ref{corre1and2} (b). \begin{figure}[htp] \centerline{\psfig{file=figures/corre1and2.ps,width=10.0cm,bbllx=1cm,bblly=1cm,bburx=21cm,bbury=27cm,angle=0,clip=}} \caption{The correlation function $\langle\Psi|:\hat{E}(r_1)\hat{E}(r_2):|\Psi\rangle$ of the radiation at $t=3.8$ (a).
The diagonal elements give the energy density of the radiation, Fig. \protect\ref{intensityoneatom}. The lower figure (b) shows the filtered correlation function used to determine the mode spectrum on the left.} \label{corre1and2} \end{figure} Similarly, in order to get the spectrum of the radiation on the right, the replacement $\langle:\hat{E}(r_1)\hat{E}(r_2):\rangle=0$ when $r_1, r_2 \leq \frac{L}{2}$ is used. The mode spectrum is calculated using the formula (\ref{modecoefficientsfromw}) with a filtered correlation function. The mode spectra obtained are identical to the spectra measured using the analyzer atoms, Fig. \ref{analatomspecleftandrightoneatom}. We also calculated the mode spectra of the two peaks on the right separately, using the appropriate filtered correlation functions. These spectra, too, were the same as those measured by the analyzer atoms, Fig. \ref{analatomspectwopeaksoneatom}. \subsection{A Gaussian photon and three atoms} Next we add two more atoms to the center, with resonance frequencies and decay constants $\omega_0=90.0$, $\Gamma=\pi$ and $\omega_0=110.0$, $\Gamma=\pi/4$ respectively. The third atom is the same as earlier, $\omega_0=100.0$ and $\Gamma=\pi$. The atom with resonance frequency $\omega_0=110.0$ has a smaller decay constant than the other two atoms. The width of the mode spectrum of the Gaussian initial photon is increased to $\Gamma_{ph}=8\pi$. Because the spectrum is very broad, the energy density profile of the photon is narrow. We have carried out exactly the same spectrum measurement as in the previous simulation, for the radiation which passes the atoms and for the reflected radiation. The results are shown in Fig. \ref{analatomspecleftandrightthreeatoms}.
\begin{figure}[htp] \centerline{\psfig{file=figures/analatomspecleftandrightthreeatoms.ps,width=10.0cm,bbllx=1cm,bblly=1cm,bburx=21cm,bbury=27cm,angle=90,clip=}} \caption{The measured spectra of the radiation on the left and right in the simulation with three atoms at the center of the cavity. The radiation on the left has three peaks at the resonance frequencies. On the right the broad spectrum has three holes. The peak (or hole) at $\omega_0=110.0$ is narrower than the two others. The length of the cavity is $L=8\pi$ and the number of modes is $N_{mode}=1600$. The normalization of the spectra is the same as in Fig. \protect\ref{analatomspecleftandrightoneatom}.} \label{analatomspecleftandrightthreeatoms} \end{figure} On the left the spectrum has three peaks at the atomic resonance frequencies. On the right the original Gaussian spectrum has three holes at the atomic frequencies. The peak and the hole at $\omega_0=110$ are narrower, as was expected from the atomic parameters. The result is qualitatively the same as in the simulation with one atom at the center. The resonant radiation is reflected and the off-resonant radiation is able to pass the atoms. As in the previous simulation, we also calculated the two spectra using the filtered correlation functions. The correlation function has a more complicated form than the one in Fig. \ref{corre1and2}. The result is again identical to the spectra given by the analyzer atoms. \subsection{A random photon and one atom} In the two previous simulations the initial state of the photon has been of the Gaussian form (\ref{gaussianphoton}). Next we choose the initial state to be rather exotic. The idea of this simulation is to test the method using a totally different type of field excitation than in the previous simulations. The initial photon is a superposition of ten random Gaussian states.
The parameters $k_0$, $\sigma^2_k$ and $r_0$ in equation (\ref{gaussianphoton}) are chosen randomly, in such a way that the initial energy density is on the left and propagating to the right. The frequency distribution is centered at $\omega=100.0$. Again at the center there is one atom which splits the intensity into two parts. The initial intensity profile of the photon is shown in Fig. \ref{intensityandinitialmodespectrumstrange}(a). \begin{figure}[htp] \centerline{\psfig{file=figures/intensityandinitialmodespectrumstrange.ps,width=10cm,bbllx=1cm,bblly=1cm,bburx=21cm,bbury=27cm,angle=90,clip=}} \caption{The intensity profile (a) and the mode spectrum (b) of the field state which is a sum of ten Gaussian distributions for the photon. The field is propagating to the right. Note that only a part of the cavity is shown. At the center, $r=\frac{L}{2}$, there is one atom with the resonance frequency $\omega_0=100.0$. The circles on the left and right are analyzer atoms which detect the spectrum. The initial mode spectrum (b) has a six-peak structure, since some of the Gaussians overlap. The normalization of the spectrum is the same as in Fig. \protect\ref{analatomspecleftandrightoneatom}.} \label{intensityandinitialmodespectrumstrange} \end{figure} Note that the length of the cavity is now $L=8\pi$. The field has a four-peak intensity structure and propagates to the right towards the center atom (thick circle). The atom has the resonance frequency $\omega_0=100.0$. Circles to the right and left of the center atom show the positions of the analyzer atoms which detect the spectrum. Figure \ref{intensityandinitialmodespectrumstrange}(b) shows the mode spectrum of the initial photon. As in the earlier simulations, the center atom splits the radiation into left and right propagating parts. The spectra of the two parts are shown in Fig. \ref{spectrumleftandrightstrange}.
\begin{figure}[htp] \centerline{\psfig{file=figures/spectrumleftandrightstrange.ps,width=10cm,bbllx=1cm,bblly=1cm,bburx=21cm,bbury=27cm,angle=90,clip=}} \caption{The spectra on the left and right measured using analyzer atoms. The initial state of the field is a sum of ten random Gaussian states, Fig. \protect\ref{intensityandinitialmodespectrumstrange}. The spectrum has been read from the atoms after all radiation has propagated through them. The normalization of the spectrum is the same as in Fig. \protect\ref{analatomspecleftandrightoneatom}.} \label{spectrumleftandrightstrange} \end{figure} Because there is not much intensity at resonance with the center atom, the spectrum on the right is similar to the original mode spectrum. Only a small fraction of the intensity is reflected to the left. Both spectra have several peaks. We calculated the mode spectrum on the left and right separately using filter functions as before. Again the mode spectrum gives exactly the same results as detected using the analyzer atoms. \section{Conclusion} \label{Conclusion} We have used two-level atoms to detect the time-dependent and local spectrum of radiation in a one-dimensional cavity. The spectrum is read from the excitation probabilities of atoms with very small decay constants. An alternative method used in this paper to determine the spectrum has been to calculate the mode coefficients using a filtered second-order correlation function. Different filter functions can be used to determine the spectrum at a specific part of the cavity. It might be possible to generalize the approach of filtered correlation functions using wavelet expansions \cite{fundamentalsofwavelets,anintroductiontowavelets,afriendlyguidetowavelets}. In all cases studied, the spectra determined using these two methods give identical results.
In our earlier paper we showed that the analyzer atom spectrum gives the same result for a resonance fluorescence spectrum of a laser-driven three-level atom as the 'physical spectrum' defined by Eberly and W\'odkiewicz. This paper gives further proof that the method really works. One benefit of the method is that only one-time averages of quantum mechanical operators are needed. Thus it is closer to a realistic spectrum measurement than the usual definitions of the time-dependent spectrum, which typically require two-time averages. It can also be used in situations where the usual method to calculate two-time averages in the Schr\"odinger picture, the quantum regression theorem, is known to give incorrect results. \begin{thebibliography}{99} \bibitem{page} {\sc Page C. H.}, 1952, {\em J. Appl. Phys.}, {\bf 23}, 103. \bibitem{lampard} {\sc Lampard D. G.}, 1954, {\em J. Appl. Phys.}, {\bf 25}, 803. \bibitem{silverman} {\sc Silverman R. A.}, 1957, {\em Proc. I.R.E. (Trans. Inf. Th)}, {\bf 3}, 182. \bibitem{eberly} {\sc Eberly J. H.} and {\sc W\'odkiewicz K.}, 1977, {\em J. Opt. Soc. Am.}, {\bf 67}, 1252. \bibitem{analatompaper} {\sc Havukainen M.} and {\sc Stenholm S.}, 1998, {\em J. Mod. Opt.}, {\bf 45}(8), 1699. \bibitem{lax} {\sc Lax M.}, 1963, {\em Phys. Rev. A}, {\bf 129}, 2342. \bibitem{oconnel} {\sc Ford G. W.} and {\sc O'Connell R. F.}, 1996, {\em Phys. Rev. Lett.}, {\bf 77}, 798. \bibitem{cascadegardiner} {\sc Gardiner C. W.}, 1993, {\em Phys. Rev. Lett.}, {\bf 70}, 2269. \bibitem{cascadecarmichael} {\sc Carmichael H. J.}, 1993, {\em Phys. Rev. Lett.}, {\bf 70}, 2273. \bibitem{buzekczech} {\sc Bu\v{z}ek V.}, 1989, {\em Czech. J. Phys. B}, {\bf 39}, 345. \bibitem{buzekkorean} {\sc Bu\v{z}ek V.} and {\sc Kim M. G.}, 1997, {\em J. Korean Phys. Soc.}, {\bf 30}, 413. \bibitem{decay} {\sc Bu\v{z}ek V., Drobn\'{y} G., Kim M. G., Havukainen M.} and {\sc Knight P. L.}, 1999, {\em Phys. Rev.
A}, {\bf 60}, 582. \bibitem{cavity2dpaper} {\sc Havukainen M., Drobn\'{y} G., Stenholm S.} and {\sc Bu\v{z}ek V.}, 1999, {\em J. Mod. Opt.}, {\bf 46}(9), 1343. \bibitem{fundamentalsofwavelets} {\sc Goswami J. C.} and {\sc Chan A. K.}, 1999, {\em Fundamentals of Wavelets} (Wiley, New York). \bibitem{anintroductiontowavelets} {\sc Chui C. K.}, 1992, {\em An Introduction to Wavelets} (Academic Press, Boston). \bibitem{afriendlyguidetowavelets} {\sc Kaiser G.}, 1994, {\em A Friendly Guide to Wavelets} (Birkh\"auser, Boston). \end{thebibliography} \end{multicols} \end{document}
\begin{document} \title{{ \bf Weakly mixing diffeomorphisms preserving a measurable Riemannian metric are dense in $\mathcal{A}_{\alpha}\left(M\right)$ for arbitrary Liouvillean number $\alpha$}} \author[1]{{\sc Roland Gunesch}} \author[2]{{\sc Philipp Kunde}} \affil[1]{University of Education Vorarlberg, Feldkirch, Austria,} \affil[2]{Department of Mathematics, University of Hamburg, Hamburg, Germany} \date{} \maketitle \begin{abstract} We show that on any smooth compact connected manifold of dimension $m\geq 2$ admitting a smooth non-trivial circle action $\mathcal{S} = \left\{S_t\right\}_{t \in \mathbb{R}}$, $S_{t+1}=S_t$, the set of weakly mixing $C^{\infty}$-diffeomorphisms which preserve both a smooth volume $\nu$ and a measurable Riemannian metric is dense in $\mathcal{A}_{\alpha} \left(M\right)= \overline{ \left\{h \circ S_{\alpha} \circ h^{-1} : h \in \text{Diff}^{\infty}\left(M, \nu\right) \right\}}^{C^{\infty}}$ for every Liouvillean number $\alpha$. The proof is based on a quantitative version of the Anosov-Katok-method with explicitly constructed conjugation maps and partitions. \end{abstract} \textbf{Key words: } Smooth Ergodic Theory; Conjugation-approximation-method; almost isometries; weakly mixing diffeomorphisms. \\ \textbf{AMS subject classification: } 37A05 (primary), 37C40, 57R50, 53C99 (secondary). \section{Introduction} To begin, recall that a dynamical system $\left(X,T,\nu\right)$ is ergodic if and only if every measurable complex-valued function $h$ on $\left(X, \nu\right)$ which is invariant (i.e. such that $h\left(Tx\right) = h\left(x\right)$ for every $x \in X$) must necessarily be constant. We define $\left(X,T,\nu\right)$ to be weakly mixing if it satisfies the stronger condition that there is no non-constant measurable complex valued function $h$ on $\left(X, \nu\right)$ such that $h\left(Tx\right) = \lambda \cdot h\left(x\right)$ for some $\lambda \in \mathbb{C}$. 
Equivalently, there is an increasing sequence $\left(m_n\right)_{n \in \mathbb{N}}$ of natural numbers such that $\lim_{n \rightarrow \infty} \left| \nu\left(B \cap T^{-m_n}\left(A\right)\right) - \nu\left(A\right) \cdot \nu\left(B\right) \right| = 0$ for every pair of measurable sets $A,B \subseteq X$ (see \cite{Skl} or \cite[Theorem 5.1]{AK}). We call a circle action $\left\{S_t\right\}_{t \in \mathbb{R}}$ on a manifold $M$ non-trivial if there exist $t \in \mathbb{R}$ and $x\in M$ with $S_t(x)\neq x$; in other words, not all orbits are fixed points (even though some may be). \\ Until 1970 it was an open question whether there exists an ergodic area-preserving smooth diffeomorphism on the disc $\mathbb{D}^2$. This problem was solved by the so-called ``approximation by conjugation''-method developed by D. Anosov and A. Katok in \cite{AK}. In fact, on every smooth compact connected manifold $M$ of dimension $m\geq 2$ admitting a non-trivial circle action $\mathcal{S} = \left\{S_t\right\}_{t \in \mathbb{S}^1}$ preserving a smooth volume $\nu$ this method enables the construction of smooth diffeomorphisms with specific ergodic properties (e.\,g.~weakly mixing ones in \cite[section 5]{AK}) or non-standard smooth realizations of measure-preserving systems (e.\,g.~\cite[section 6]{AK} and \cite{FSW}). These diffeomorphisms are constructed as limits of conjugates $f_n = H_n \circ S_{\alpha_{n+1}} \circ H^{-1}_n$, where $\alpha_{n+1} = \alpha_n + \frac{1}{k_n \cdot l_n \cdot q^2_n} \in \mathbb{Q}$, where $H_n = H_{n-1} \circ h_n$ and where the $h_n$ are measure-preserving diffeomorphisms satisfying $S_{\frac{1}{q_n}} \circ h_n = h_n \circ S_{\frac{1}{q_n}}$. In each step the conjugation map $h_n$ and the parameter $k_n$ are chosen such that the diffeomorphism $f_n$ imitates the desired property with a certain precision.
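The arithmetic backbone of this scheme is easy to sketch numerically. The fragment below iterates the recursion $\alpha_{n+1} = \alpha_n + \frac{1}{k_n \cdot l_n \cdot q_n^2}$ with placeholder choices of $k_n$ and $l_n$ (in the actual construction they are dictated by the approximation step and the convergence requirement); it only illustrates that the $\alpha_n$ stay rational while their denominators blow up super-exponentially.

```python
from fractions import Fraction

# Sketch of the rational parameter sequence used in the conjugation scheme:
# alpha_{n+1} = alpha_n + 1/(k_n * l_n * q_n^2), where q_n is the denominator
# of alpha_n.  The choices of k_n and l_n here are placeholders; in the
# construction, k_n realizes the desired property and l_n is chosen large
# enough to guarantee C^infinity-closeness of f_n to f_{n-1}.
alpha = Fraction(1, 2)                         # alpha_1 = p_1/q_1
denominators = [alpha.denominator]
for n in range(1, 6):
    q = alpha.denominator
    k_n, l_n = 2, q ** n                       # placeholder choices
    alpha += Fraction(1, k_n * l_n * q ** 2)   # still rational
    denominators.append(alpha.denominator)

# The denominators grow super-exponentially; this rapid growth is the
# mechanism behind the Liouville-type limits alpha = lim alpha_n.
assert all(b > a for a, b in zip(denominators, denominators[1:]))
```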
In a final step of the construction, the parameter $l_n$ is chosen large enough to guarantee closeness of $f_{n}$ to $f_{n-1}$ in the $C^{\infty}$-topology, thus ensuring the convergence of the sequence $\left(f_n\right)_{n \in \mathbb{N}}$ to a limit diffeomorphism. It is even possible to keep this limit diffeomorphism within any given $C^{\infty}$-neighbourhood of the initial element $S_{\alpha_1}$ or, by applying a fixed diffeomorphism $g$ first, of $g\circ S_{\alpha_1} \circ g^{-1}$. So the construction can be carried out in a neighbourhood of any diffeomorphism conjugate to an element of the action. Thus, $\mathcal{A}\left(M\right) = \overline{\left\{h \circ S_t \circ h^{-1} \ : \ t \in \mathbb{S}^1, h \in \text{Diff}^{\infty}\left(M, \nu \right)\right\}}^{C^{\infty}}$ is a natural space for the produced diffeomorphisms. Moreover, we will consider the restricted space $\mathcal{A}_{\alpha}\left(M\right) = \overline{\left\{h \circ S_{\alpha} \circ h^{-1} \ : h \in \text{Diff}^{\infty}\left(M, \nu \right)\right\}}^{C^{\infty}}$ for $\alpha \in \mathbb{S}^1$. \\ In the following let $M$ be a smooth compact connected manifold of dimension $m \geq 2$ admitting a non-trivial circle action $\mathcal{S} = \left\{S_t \right\}_{t \in \mathbb{R}}$, $S_{t+1} = S_t$. Note that any such action possesses a smooth invariant volume: Every smooth manifold carries a Riemannian metric and hence a smooth Riemannian volume form $\hat{\nu}$. Any smooth volume form is given by $f \cdot \hat{\nu}$, where $f$ is a positive scalar function. If $\bar{f}$ is the fiberwise average of $f$, then $\bar{f} \cdot \hat{\nu}$ is a smooth volume form which is invariant under $\mathcal{S}$. In the case of a manifold with boundary, by a smooth diffeomorphism we mean a map that is infinitely differentiable in the interior and all of whose derivatives extend continuously to the boundary.
\\ In their influential paper \cite{AK} Anosov and Katok proved, among other results, that in $\mathcal{A} \left(M\right)$ the set of weakly mixing diffeomorphisms is generic (i.\,e. it is a dense $G_{\delta}$-set) in the $C^{\infty}\left(M\right)$-topology. For this they used the ``approximation by conjugation''-method. In \cite{GK} the conjugation maps are constructed more explicitly such that they can be equipped with the additional structure of being locally very close to an isometry, thus showing that there exists a weakly mixing smooth diffeomorphism preserving a smooth measure and a measurable Riemannian metric on any manifold with non-trivial circle action. Actually, it follows from the respective proofs that both results are true in $\mathcal{A}_{\alpha}\left(M\right)$ for a $G_{\delta}$-set of $\alpha \in \mathbb{R}$. However, neither proof gives a full description of the set of $\alpha \in \mathbb{R}$ for which the particular result holds in $\mathcal{A}_{\alpha}\left(M\right)$. Such an investigation was started in \cite{FS}: B. Fayad and M. Saprykina showed, in the case of dimension $2$, that if $\alpha \in \mathbb{S}^1$ is Liouville, the set of weakly mixing diffeomorphisms is generic in the $C^{\infty}\left(M\right)$-topology in $\mathcal{A}_{\alpha}\left(M\right)$. Here an irrational number $\alpha$ is called Liouville if and only if for every $C \in \mathbb{R}_{>0}$ and for every $n \in \mathbb{N}$ there are infinitely many pairs of coprime integers $p,q$ such that $\left| \alpha - \frac{p}{q}\right| < \frac{C}{q^n}$. \\ In this article we prove the following theorem generalizing the results of \cite{GK} as well as \cite{FS}: \begin{theo} \label{theo:main} Let $M$ be a smooth compact and connected manifold of dimension $m\geq2$ with a non-trivial circle action $\mathcal{S} = \left\{S_t\right\}_{t \in \mathbb{R}}$, $S_{t+1}=S_t$.
For any $\mathcal{S}$-invariant smooth volume $\nu$ the following is true: If $\alpha \in \mathbb{R}$ is Liouville, then the set of volume-preserving diffeomorphisms that are weakly mixing and preserve a measurable Riemannian metric is dense in the $C^{\infty}$-topology in $\mathcal{A}_{\alpha}\left(M\right)$. \end{theo} See \cite[section 3]{GK} for a comprehensive consideration of IM-diffeomorphisms (i.\,e. diffeomorphisms preserving an absolutely continuous probability measure and a measurable Riemannian metric) and IM-group actions. In particular, the existence of a measurable invariant metric for a diffeomorphism is equivalent to the existence of an invariant measure for the projectivized derivative extension which is absolutely continuous in the fibers. It is a natural question to ask about the ergodic properties of the derivative extension with respect to such a measure. While in our construction the projectivized derivative extension is as non-ergodic as possible (in fact, the derivative cocycle is cohomologous to the identity), it is work in progress to realize ergodic behaviour. This would provide the only known examples of measure-preserving diffeomorphisms whose differential is ergodic with respect to a smooth measure in the projectivization of the tangent bundle.
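The Liouville condition appearing in the theorem can be verified exactly for the classical Liouville constant $\sum_{k \geq 1} 10^{-k!}$. The following sketch uses exact rational arithmetic; the truncation after six terms is our simplification, and only finitely many exponents $n = m$ are tested, whereas the definition requires infinitely many approximants for every $n$.

```python
from fractions import Fraction
from math import factorial

# Liouville's constant sum_{k>=1} 10^(-k!), truncated after six terms; the
# omitted tail is below 10^(-5040) and does not affect the inequalities below.
alpha = sum(Fraction(1, 10 ** factorial(k)) for k in range(1, 7))

# The partial sums p_m/q_m with q_m = 10^(m!) witness the Liouville property:
# |alpha - p_m/q_m| < 1/q_m^m, i.e. the definition with C = 1 and n = m.
for m in range(1, 6):
    approx = sum(Fraction(1, 10 ** factorial(k)) for k in range(1, m + 1))
    q_m = 10 ** factorial(m)
    assert approx.denominator == q_m          # p_m/q_m is already reduced
    assert abs(alpha - approx) < Fraction(1, q_m ** m)
```

The point is that the gaps between consecutive factorials make the approximation error $\sum_{k > m} 10^{-k!}$ far smaller than any fixed power $q_m^{-n}$ once $m \geq n$.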
Recently, it has been proven that for every $\rho >0$ and $m \geq 2$ there exists a weakly mixing real-analytic diffeomorphism $f \in \text{Diff}^{\omega}_{\rho} \left(\mathbb{T}^m, \mu \right)$ preserving a measurable Riemannian metric (\cite{Kana}).\\ We want to point out that Theorem \ref{theo:main} is in some sense the best we can obtain: \begin{itemize} \item By \cite[corollary 1.4]{FS}, whose proof uses Herman's last geometric result (\cite{FKr}), we have the following dichotomy in case of $M = \mathbb{S}^1 \times \left[0,1\right]$: A number $\alpha \in \mathbb{R} \setminus \mathbb{Q}$ is Diophantine if and only if there is no ergodic diffeomorphism of $M$ whose rotation number (on at least one of the boundaries) is equal to $\alpha$. Since weakly mixing diffeomorphisms are ergodic, there cannot be a weakly mixing $f \in \mathcal{A}_{\alpha}\left(\mathbb{S}^1 \times \left[0,1\right]\right)$ for $\alpha \in \mathbb{R} \setminus \mathbb{Q}$ Diophantine. \item By a result of A. Furman (appendix to \cite{GK}) a weakly mixing diffeomorphism cannot preserve a Riemannian metric with $L^{2}$-distortion (i.e. both the norm and its inverse are $L^2$-functions). Moreover, it is conjectured that a weakly mixing diffeomorphism cannot preserve a Riemannian metric with $L^{1}$-distortion (see \cite[Conjecture 3.7.]{GK}). \end{itemize} Using the standard techniques to prove genericity of the weak mixing-property and Theorem \ref{theo:main} we conclude in subsection \ref{subsection:first}: \begin{cor} \label{cor:A} Let $M$ be a smooth compact and connected manifold of dimension $m\geq2$ with a non-trivial circle action $\mathcal{S} = \left\{S_t\right\}_{t \in \mathbb{R}}$, $S_{t+1}=S_t$, preserving a smooth volume $\nu$. If $\alpha \in \mathbb{R}$ is Liouville, the set of volume-preserving weakly mixing diffeomorphisms is a dense $G_{\delta}$-set in the $C^{\infty}$-topology in $\mathcal{A}_{\alpha}\left(M\right)$. 
\end{cor} Hence, we obtain the result of \cite{FS} in arbitrary dimension at least $2$. \section{Preliminaries} \subsection{Definitions and notations} In this section we introduce some convenient definitions and notations. First, we discuss topologies on the space of smooth diffeomorphisms on the manifold $M = \mathbb{S}^1 \times \left[0,1\right]^{m-1}$. Note that for diffeomorphisms $f = \left(f_1,...,f_m\right): \mathbb{S}^1 \times \left[0,1\right]^{m-1} \rightarrow \mathbb{S}^1 \times \left[0,1\right]^{m-1}$ the coordinate function $f_1$, understood as a map $\mathbb{R}\times \left[0,1\right]^{m-1} \rightarrow \mathbb{R}$, has to satisfy the condition $f_1\left(\theta +n, r_1,...,r_{m-1}\right) = f_1 \left(\theta, r_1,...,r_{m-1}\right) + l$ for $n \in \mathbb{Z}$, where either $l =n$ or $l= -n$. Moreover, for $i \in \left\{2,...,m\right\}$ the coordinate function $f_i$ has to be $\mathbb{Z}$-periodic in the first component, i.e. $f_i \left( \theta + n, r_1,...,r_{m-1}\right) = f_i \left(\theta,r_1,...,r_{m-1}\right)$ for every $n \in \mathbb{Z}$. \\ To define explicit metrics on Diff$^k\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$ and in the following, the subsequent notations will be useful: \begin{dfn} \begin{enumerate} \item For a sufficiently differentiable function $f: \mathbb{R}^m \rightarrow \mathbb{R}$ and a multi-index $\vec{a} = \left(a_1,...,a_m\right) \in \mathbb{N}^m_0$ \begin{equation*} D_{\vec{a}}f := \frac{\partial^{\left|\vec{a}\right|}}{\partial x_1^{a_1}...\partial x_m^{a_m}} f, \end{equation*} where $\left|\vec{a}\right| = \sum^{m}_{i=1} a_i$ is the order of $\vec{a}$. \item For a continuous function $F: \left(0,1\right)^m \rightarrow \mathbb{R}$ \begin{equation*} \left\|F\right\|_0 := \sup_{z \in \left(0,1\right)^m} \left|F\left(z\right)\right|. \end{equation*} \end{enumerate} \end{dfn} Diffeomorphisms on $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ can be regarded as maps from $\left[0,1\right]^m$ to $\mathbb{R}^m$.
In this spirit the expressions $\left\|f_i\right\|_0$ as well as $\left\|D_{\vec{a}}f_i\right\|_0$ for any multi-index $\vec{a}$ with $\left| \vec{a}\right|\leq k$ have to be understood for $f=\left(f_1,...,f_m\right) \in$ Diff$^k\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$. Since such a diffeomorphism is a continuous map on the compact manifold and every partial derivative can be extended continuously to the boundary, all these expressions are finite. Thus the subsequent definition makes sense: \begin{dfn} \begin{enumerate} \item For $f,g \in$ Diff$^k\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$ with coordinate functions $f_i$ resp. $g_i$ we define \begin{equation*} \tilde{d}_0\left(f,g\right) = \max_{i=1,..,m} \left\{ \inf_{p \in \mathbb{Z}} \left\| \left(f - g\right)_i + p\right\|_0\right\} \end{equation*} as well as \begin{equation*} \tilde{d}_k\left(f,g\right) = \max \left\{ \tilde{d}_0\left(f,g\right), \left\|D_{\vec{a}}\left(f-g\right)_i\right\|_0 \ : \ i=1,...,m \ , \ 1\leq \left|\vec{a}\right| \leq k \right\}. \end{equation*} \item Using the definitions from 1. we define for $f,g \in$ Diff$^k\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$: \begin{equation*} d_k\left(f,g\right) = \max \left\{ \tilde{d}_k\left(f,g\right) \ , \ \tilde{d}_k\left(f^{-1},g^{-1}\right)\right\}. \end{equation*} \end{enumerate} \end{dfn} Obviously $d_k$ describes a metric on Diff$^k\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$ measuring the distance between the diffeomorphisms as well as their inverses. As in the case of a general compact manifold the following definition connects to it: \begin{dfn} \begin{enumerate} \item A sequence of Diff$^{\infty}\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$-diffeomorphisms is called convergent in Diff$^{\infty}\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$ if it converges in Diff$^k\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$ for every $k \in \mathbb{N}$. 
\item On Diff$^{\infty}\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$ we declare the following metric \begin{equation*} d_{\infty}\left(f,g\right) = \sum^{\infty}_{k=1} \frac{d_k\left(f,g\right)}{2^k \cdot \left(1 + d_k\left(f,g\right)\right)}. \end{equation*} \end{enumerate} \end{dfn} It is a general fact that Diff$^{\infty}\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$ is a complete metric space with respect to this metric $d_{\infty}$. \\ Again considering diffeomorphisms on $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ as maps from $\left[0,1\right]^m$ to $\mathbb{R}^m$ we add the adjacent notation: \begin{dfn} Let $f \in$ Diff$^k\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)$ with coordinate functions $f_i$ be given. Then \begin{equation*} \left\| Df \right\|_0 := \max_{i,j \in \left\{1,...,m\right\}} \left\| D_j f_i \right\|_0 \end{equation*} and \begin{equation*} ||| f ||| _k := \max \left\{ \left\|D_{\vec{a}} f_i \right\|_0 , \left\|D_{\vec{a}} \left(f^{-1}_{i}\right)\right\|_0 \ : \ i = 1,...,m, \ \vec{a} \text{ multi-index with } 0\leq \left| \vec{a}\right| \leq k \right\}. \end{equation*} \end{dfn} \begin{rem} By the above-mentioned observations for every multi-index $\vec{a}$ with $\left|\vec{a}\right|\geq 1$ and every $i \in \left\{1,...,m\right\}$ the derivative $D_{\vec{a}} h_i$ is $\mathbb{Z}$-periodic in the first variable. Since in case of a diffeomorphism $g= \left(g_1,...,g_m\right)$ on $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ regarded as a map $\left[0,1\right]^m \rightarrow \mathbb{R}^m$ the coordinate functions $g_j$ for $j \in \left\{2,...,m\right\}$ satisfy $g_j\left(\left[0,1\right]^m\right) \subseteq \left[0,1\right]$, it holds: \begin{equation*} \sup_{z \in \left(0,1\right)^m} \left| \left(D_{\vec{a}} h_i \right) \left(g\left(z\right)\right)\right| \leq |||h|||_{\left|\vec{a}\right|}. 
\end{equation*} \end{rem} Furthermore, we introduce the notion of a partial partition of a compact manifold $M$, which is a pairwise disjoint countable collection of measurable subsets of $M$. \begin{dfn} \begin{itemize} \item A sequence of partial partitions $\nu_n$ converges to the decomposition into points if and only if for every measurable set $A$ and for every $n \in \mathbb{N}$ there exists a measurable set $A_n$, which is a union of elements of $\nu_n$, such that $\lim_{n \rightarrow \infty} \mu \left( A \Delta A_n \right) = 0$. We often denote this by $\nu_n \rightarrow \varepsilon$. \item A partial partition $\nu$ is the image under a diffeomorphism $F:M \rightarrow M$ of a partial partition $\eta$ if and only if $\nu = \left\{ F \left(I\right) \: : \: I \in \eta \right\}$. We write this as $\nu = F\left(\eta\right)$. \end{itemize} \end{dfn} \subsection{First steps of the proof} \label{subsection:first} First of all we show how constructions on $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ can be transferred to a general compact connected smooth manifold $M$ with a non-trivial circle action $\mathcal{S} = \left\{S_{t}\right\}_{t \in \mathbb{R}}$, $S_{t+1} = S_t$. By \cite[Proposition 2.1.]{AK}, we can assume that $t=1$ is the smallest positive number satisfying $S_t = \text{id}$. Hence, we can assume $\mathcal{S}$ to be effective. We denote the set of fixed points of $\mathcal{S}$ by $F$, and for $q \in \mathbb{N}$ we denote by $F_q$ the set of fixed points of the map $S_{\frac{1}{q}}$. \\ On the other hand, we consider $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ with Lebesgue measure $\mu$. Furthermore, let $\mathcal{R} = \left\{R_{\alpha} \right\}_{\alpha \in \mathbb{S}^1}$ be the standard action of $\mathbb{S}^1$ on $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$, where the map $R_{\alpha}$ is given by $R_{\alpha}\left(\theta, r_1,...,r_{m-1}\right) = \left(\theta + \alpha, r_1,...,r_{m-1}\right)$.
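For two standard rotations $R_{\alpha}$ and $R_{\beta}$ the metrics of the previous subsection can be evaluated in closed form: the coordinate difference is the constant $\alpha - \beta$, and all partial derivatives agree, so $d_k\left(R_{\alpha}, R_{\beta}\right)$ equals the circle distance $d_0$ for every $k$, and the series defining $d_{\infty}$ sums to $\frac{d_0}{1+d_0}$. A small numerical check (the helper names are ours):

```python
# For rotations R_alpha(theta, r) = (theta + alpha, r), the coordinate
# difference is the constant alpha - beta and all derivatives of R_alpha and
# R_beta coincide, so d_k(R_alpha, R_beta) equals the circle distance d_0 for
# every k.  Summing the series defining d_infinity then gives d_0/(1 + d_0).

def circle_dist(a, b):
    """tilde-d_0 for two rotations: inf over p in Z of |a - b + p|."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def d_infinity(d0, terms=60):
    """d_infinity when d_k = d0 for all k (the case of two rotations)."""
    return sum(d0 / (2 ** k * (1 + d0)) for k in range(1, terms + 1))

d0 = circle_dist(0.9, 0.2)          # = 0.3, wrapping around the circle
assert abs(d_infinity(d0) - d0 / (1 + d0)) < 1e-12
```

This closed form also shows directly that $d_{\infty}\left(R_{\alpha_n}, R_{\alpha}\right) \rightarrow 0$ whenever $\alpha_n \rightarrow \alpha$ on the circle.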
Hereby, we can formulate the following result (see \cite[Proposition 1]{FSW}): \begin{prop} Let $M$ be an $m$-dimensional smooth, compact and connected manifold admitting an effective circle action $\mathcal{S} = \left\{S_{t}\right\}_{t \in \mathbb{R}}$, $S_{t+1} = S_t$, preserving a smooth volume $\nu$. Let $B \coloneqq \partial M \cup F \cup \left( \bigcup_{q\geq 1} F_q \right)$. There exists a continuous surjective map $G: \mathbb{S}^1 \times \left[0,1\right]^{m-1} \rightarrow M$ with the following properties: \begin{enumerate} \item The restriction of $G$ to $\mathbb{S}^{1} \times \left(0,1\right)^{m-1}$ is a $C^{\infty}$-diffeomorphic embedding. \item $\nu\left(G\left(\partial\left(\mathbb{S}^{1} \times \left[0,1\right]^{m-1}\right)\right)\right) = 0$ \item $G\left(\partial\left(\mathbb{S}^{1} \times \left[0,1\right]^{m-1}\right)\right) \supseteq B$ \item $G_{*}\left(\mu\right) = \nu$ \item $\mathcal{S} \circ G = G \circ \mathcal{R}$ \end{enumerate} \end{prop} By the same reasoning as in \cite[section 2.2.]{FSW}, this proposition allows us to carry a construction from $\left(\mathbb{S}^{1} \times \left[0,1\right]^{m-1}, \mathcal{R}, \mu\right)$ over to the general case $\left(M, \mathcal{S}, \nu\right)$: \\ Suppose $f: \mathbb{S}^{1} \times \left[0,1\right]^{m-1} \rightarrow \mathbb{S}^{1} \times \left[0,1\right]^{m-1}$ is a weakly mixing diffeomorphism sufficiently close to $R_{\alpha}$ in the $C^{\infty}$-topology with $f$-invariant measurable Riemannian metric $\omega$ obtained by $f = \lim_{n \rightarrow \infty} f_n$ with $f_n = H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_{n}$, where $f_n = R_{\alpha_{n+1}}$ in a neighbourhood of the boundary (in Proposition \ref{prop:satz2} we will see that these conditions can be satisfied in the constructions of this article).
Then we define a sequence of diffeomorphisms: \begin{equation*} \tilde{f}_n: M \rightarrow M \ \ \ \ \ \ \tilde{f}_n\left(x\right) = \begin{cases} G \circ f_n \circ G^{-1}\left(x\right)& \textit{if $x \in G\left(\mathbb{S}^1 \times \left(0,1\right)^{m-1}\right)$} \\ S_{\alpha_{n+1}}\left(x\right)& \textit{if $x \in G\left(\partial\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)\right)$} \end{cases} \end{equation*} As shown in \cite[section 5.1.]{FK}, this sequence converges in the $C^{\infty}$-topology to the diffeomorphism \begin{equation*} \tilde{f}: M \rightarrow M \ \ \ \ \ \ \tilde{f}\left(x\right) = \begin{cases} G \circ f \circ G^{-1}\left(x\right)& \textit{if $x \in G\left(\mathbb{S}^1 \times \left(0,1\right)^{m-1}\right)$} \\ S_{\alpha}\left(x\right)& \textit{if $x \in G\left(\partial\left(\mathbb{S}^1 \times \left[0,1\right]^{m-1}\right)\right)$} \end{cases} \end{equation*} provided $f$ is sufficiently close to $R_{\alpha}$ in the $C^{\infty}$-topology. \\ We observe that $f$ and $\tilde{f}$ are measure-theoretically isomorphic. Then $\tilde{f}$ is weakly mixing because the weak-mixing property is invariant under isomorphisms. \\ Moreover, we want to show how we can construct an $\tilde{f}$-invariant measurable Riemannian metric $\tilde{\omega}$ out of the $f$-invariant metric $\omega$. Since $\tilde{\omega}$ only needs to be a measurable metric and $\nu\left(G\left(\partial\left(\mathbb{S}^{1} \times \left[0,1\right]^{m-1}\right)\right)\right) = 0$, we only have to construct it on $G\left(\mathbb{S}^{1} \times \left(0,1\right)^{m-1}\right)$.
Using the diffeomorphic embedding $G$ we consider $\tilde{\omega} |_{G\left(\mathbb{S}^{1} \times \left(0,1\right)^{m-1}\right)} \coloneqq \left(G^{-1}\right)^{\ast} \omega |_{G\left(\mathbb{S}^{1} \times \left(0,1\right)^{m-1}\right)}$ and show that it is $\tilde{f}$-invariant: On $G\left(\mathbb{S}^{1} \times \left(0,1\right)^{m-1}\right)$ we have $\tilde{f}=G \circ f \circ G^{-1}$ and thus we can compute: \begin{equation*} \tilde{f}^{\ast} \tilde{\omega} = \left(G \circ f \circ G^{-1}\right)^{\ast} \left(\left(G^{-1}\right)^{\ast} \omega \right) = \left(G^{-1}\right)^{\ast} \circ f^{\ast} \circ G^{\ast} \circ \left(G^{-1}\right)^{\ast} \omega = \left(G^{-1}\right)^{\ast} \circ f^{\ast} \omega = \left(G^{-1}\right)^{\ast} \omega = \tilde{\omega} \end{equation*} Altogether, the construction carried out in the case of $\left(\mathbb{S}^{1} \times \left[0,1\right]^{m-1}, \mathcal{R}, \mu\right)$ is transferred to $\left(M, \mathcal{S}, \nu\right)$. Hence, it suffices to consider constructions on $M = \mathbb{S}^{1} \times \left[0,1\right]^{m-1}$ with circle action $\mathcal{R}$ subsequently. In this case we will prove the following result: \begin{prop} \label{prop:satz2} For every Liouvillean number $\alpha$ there is a sequence $\left(\alpha_n\right)_{n \in \mathbb{N}}$ of rational numbers $\alpha_n = \frac{p_n}{q_n}$ satisfying $\lim_{n\rightarrow \infty} \left| \alpha-\alpha_n\right|=0$ monotonically, and there are sequences $\left(g_n\right)_{n \in \mathbb{N}}$, $\left(\phi_n\right)_{n \in \mathbb{N}}$ of measure-preserving diffeomorphisms satisfying $g_n\circ R_{\frac{1}{q_n}}=R_{\frac{1}{q_n}}\circ g_n$ as well as $\phi_n \circ R_{\frac{1}{q_n}} = R_{\frac{1}{q_n}} \circ \phi_n$ such that the diffeomorphisms $f_n = H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_{n}$ with $H_n := h_1 \circ h_2 \circ ...
\circ h_n$, where $h_n := g_n \circ \phi_n$, coincide with $R_{\alpha_{n+1}}$ in a neighbourhood of the boundary, converge in the Diff$^{\infty}\left(M\right)$-topology, and the diffeomorphism $f = \lim_{n \rightarrow \infty} f_n$ is weakly mixing, has an invariant measurable Riemannian metric, and satisfies $f \in \mathcal{A}_{\alpha}\left(M\right)$. \\ Furthermore, for every $\varepsilon > 0$ the parameters in the construction can be chosen in such a way that $d_{\infty}\left(f, R_{\alpha}\right) < \varepsilon$. \end{prop} By this proposition, weakly mixing diffeomorphisms preserving a measurable Riemannian metric are dense in $\mathcal{A}_{\alpha}\left(M\right)$: \\ Because of $\mathcal{A}_{\alpha} \left(M\right) = \overline{\left\{ h \circ R_{\alpha} \circ h^{-1} : h \in \text{Diff}^{\infty}\left(M, \mu\right)\right\}}^{C^{\infty}}$ it is enough to show that for every diffeomorphism $h\in\text{Diff}^{\infty}\left(M,\mu\right)$ and every $\epsilon > 0$ there is a weakly mixing diffeomorphism $\tilde{f}$ preserving a measurable Riemannian metric such that $d_{\infty}\left(\tilde{f}, h \circ R_{\alpha} \circ h^{-1} \right) < \epsilon$. For this purpose, let $h \in \text{Diff}^{\infty}\left(M, \mu\right)$ and $\epsilon > 0$ be arbitrary. By \cite[p. 3]{Omori} and \cite[Theorem 43.1.]{KM}, Diff$^{\infty}\left(M\right)$ is a Lie group. In particular, the conjugating map $g \mapsto h \circ g \circ h^{-1}$ is continuous with respect to the metric $d_{\infty}$. Continuity at the point $R_{\alpha}$ yields the existence of $\delta >0$ such that $d_{\infty}\left(g, R_{\alpha}\right)<\delta$ implies $d_{\infty}\left(h \circ g \circ h^{-1}, h \circ R_{\alpha} \circ h^{-1}\right)< \epsilon$. By Proposition \ref{prop:satz2} we can find a weakly mixing diffeomorphism $f$ with $f$-invariant measurable Riemannian metric $\omega$ and $d_{\infty}(f,R_{\alpha}) < \delta$.
Hence $\tilde{f} := h \circ f \circ h^{-1}$ satisfies $d_{\infty}\left(\tilde{f}, h \circ R_{\alpha} \circ h^{-1} \right) < \epsilon$. Note that $\tilde{f}$ is weakly mixing and preserves the measurable Riemannian metric $\tilde{\omega} \coloneqq \left(h^{-1}\right)^{\ast}\omega$. \\ Hence, Theorem \ref{theo:main} is deduced from Proposition \ref{prop:satz2}. \begin{rem} Using Proposition \ref{prop:satz2} and the same technique as in \cite{Ha}, section \textit{Category}, we can show that the set of weakly mixing diffeomorphisms is generic in $\mathcal{A}_{\alpha}\left(M\right)$ (i.e. it is a dense $G_{\delta}$-set). To this end, we consider a countable dense set $\left\{\varphi_n\right\}_{n \in \mathbb{N}}$ in $L^2\left(M, \mu\right)$, which is a separable space, and define the sets: \begin{equation*} O\left(i,j,k,n\right) = \left\{ T \in \mathcal{A}_{\alpha}\left(M\right) \ : \ \left| \left(U^{n}_{T} \varphi_i , \varphi_j\right) - \left(\varphi_i,1\right) \cdot \left(1, \varphi_j\right) \right| < \frac{1}{k}\right\} \end{equation*} Since $\left(U_T \varphi, \psi\right)$ depends continuously on $T$, each $O\left(i,j,k,n\right)$ is open. Hence, \begin{equation*} K \coloneqq \bigcap_{i \in \mathbb{N}} \bigcap_{j \in \mathbb{N}} \bigcap_{k \in \mathbb{N}} \bigcup_{n \in \mathbb{N}} O\left(i,j,k,n\right) \end{equation*} is a $G_{\delta}$-set. \\ By another equivalent characterisation, a measure-preserving transformation $T$ is weakly mixing if and only if for every $\varphi,\psi \in L^2\left(M, \mu\right)$ there is a sequence $\left(m_n\right)_{n \in \mathbb{N}}$ of density one such that $\lim_{n \rightarrow \infty} \left(U^{m_n}_{T} \varphi , \psi \right) = \left( \varphi,1\right)\cdot\left(1,\psi\right)$. Thus, every weakly mixing diffeomorphism is contained in $K$.
On the other hand, we show that a transformation that is not weakly mixing does not belong to $K$: If $T$ is not weakly mixing, $U_T$ has a non-trivial eigenfunction. W.l.o.g. we can assume the existence of $f \in L^2\left(M, \mu\right)$ and $c \in \mathbb{C}$ of absolute value 1 satisfying $U_T f = c \cdot f$, $\left\|f\right\|_{L^2} = 1$ and $\left(1,f\right)=0$. Since $\left\{\varphi_n\right\}_{n \in \mathbb{N}}$ is dense in $L^2\left(M, \mu\right)$, there is an index $i$ such that $\left\|f-\varphi_i\right\|_{L^2} < 0.1$. Obviously $\left\|\varphi_i\right\|_{L^2} \leq \left\|f\right\|_{L^2} + \left\|f-\varphi_i\right\|_{L^2} < 1.1$ and $\left| \left(U^{n}_{T} f , f\right) - \left(f,1\right) \cdot \left(1, f\right) \right| = \left|\left(c^n \cdot f,f\right)\right|= \left|c^n\right| \cdot \left\|f\right\|^2_{L^2} = 1$. Consequently we can estimate: \begin{align*} 1 & = \left| \left(U^{n}_{T} f , f\right) - \left(f,1\right) \cdot \left(1, f\right) \right| \\ & \leq \left| \left(U^{n}_{T} f , f\right) - \left(U^{n}_{T}f, \varphi_i\right) \right|+\left| \left(U^{n}_{T} f, \varphi_i\right) - \left(U^{n}_{T} \varphi_i,\varphi_i\right)\right|+\left| \left(U^{n}_{T} \varphi_i , \varphi_i\right) - \left(\varphi_i,1\right) \cdot \left(1, \varphi_i\right) \right| \\ & \ \ \ \ + \left| \left(\varphi_i,1\right) \cdot \left(1 , \varphi_i\right) - \left(\varphi_i,1\right) \cdot \left(1, f\right) \right| + \left| \left(\varphi_i , 1\right) \cdot \left(1,f\right)- \left(f,1\right) \cdot \left(1, f\right) \right| \\ & \leq \left|c\right|^n \cdot \left\|f\right\|_{L^2} \cdot \left\|f-\varphi_i\right\|_{L^2} + \left\|f-\varphi_i\right\|_{L^2} \cdot \left\|\varphi_i\right\|_{L^2} + \left| \left(U^{n}_{T} \varphi_i, \varphi_i\right) - \left(\varphi_i,1\right) \cdot \left(1, \varphi_i\right) \right| \\ & \ \ \ \ + \left\|\varphi_i\right\|_{L^2} \cdot \left\|f-\varphi_i\right\|_{L^2} \\ & \leq 0.1 + 0.11 + \left| \left(U^{n}_{T} \varphi_i, \varphi_i\right) - \left(\varphi_i,1\right)
\cdot \left(1, \varphi_i\right) \right| + 0.11 \\ & < \left| \left(U^{n}_{T} \varphi_i, \varphi_i\right) - \left(\varphi_i,1\right) \cdot \left(1, \varphi_i\right) \right| + 0.5 \end{align*} Thus $\left| \left(U^{n}_{T} \varphi_i, \varphi_i\right) - \left(\varphi_i,1\right) \cdot \left(1, \varphi_i\right) \right|$ has to be larger than $\frac{1}{2}$. Hence $T$ does not belong to $O\left(i,i,2,n\right)$ for any value of $n$ and accordingly does not belong to $K$. So $K$ coincides with the set of weakly mixing diffeomorphisms in $\mathcal{A}_{\alpha}\left(M\right)$. By the observations above we know that this set is dense. In conclusion the set of weakly mixing diffeomorphisms is a dense $G_{\delta}$-set in $\mathcal{A}_{\alpha}\left(M\right)$. Thus Corollary \ref{cor:A} is proven. \end{rem} \subsection{Outline of the proof} The constructions are based on the ``approximation by conjugation''-method developed by D.V. Anosov and A. Katok in \cite{AK}. As indicated in the introduction, one constructs successively a sequence of measure preserving diffeomorphisms $f_n= H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_{n}$, where the conjugation maps $H_n = h_1 \circ ... \circ h_n$ and the rational numbers $\alpha_n = \frac{p_n}{q_n}$ are chosen in such a way that the functions $f_n$ converge to a diffeomorphism $f$ with the desired properties. \\ First of all we will define two sequences of partial partitions, which converge to the decomposition into points in each case. The first type of partial partition, called $\eta_n$, will satisfy the requirements in the proof of the weak mixing-property. On the partition elements of the even more refined second type, called $\zeta_n$, the conjugation map $h_n$ will act as an isometry, and this will enable us to construct an invariant measurable Riemannian metric. Afterwards we will construct these conjugating diffeomorphisms $h_n = g_n \circ \phi_n$, which are composed of two step-by-step defined smooth measure-preserving diffeomorphisms. 
In this construction the map $g_n$ should introduce shear in the $\theta$-direction as in \cite{FS}. So $\tilde{g}_{\left[n q^{\sigma}_{n}\right]} \left(\theta,r_1,..., r_{m-1}\right) = \left(\theta + \left[n \cdot q^{\sigma}_{n}\right] \cdot r_1, r_1,..., r_{m-1}\right)$ might seem an obvious candidate. Unfortunately, that map is not an isometry. Therefore, the map $g_n$ will be constructed in such a way that $g_n$ is an isometry on the image under $\phi_n$ of any partition element $\check{I}_n \in \zeta_n$, and $g_n \left(\hat{I}_n\right) = \tilde{g}_{\left[n q^{\sigma}_{n}\right]}\left(\hat{I}_n\right)$ as well as $g_n \left( \Phi_n \left(\hat{I}_n\right) \right) = \tilde{g}_{\left[n q^{\sigma}_{n}\right]} \left( \Phi_n \left(\hat{I}_n\right) \right)$ for every $\hat{I}_n \in \eta_n$. Here the map $\Phi_n = \phi_n \circ R^{m_n}_{\alpha_{n+1}} \circ \phi^{-1}_{n}$, built from a specific sequence $\left(m_n\right)_{n\in \mathbb{N}}$ of natural numbers (see section \ref{section:distri}), plays a central role in the proof of the weak mixing property. Likewise the conjugation map $\phi_n$ will be built such that it acts on the elements of $\zeta_n$ as an isometry and on the elements of $\eta_n$ in such a way that it satisfies the requirements of the desired criterion for weak mixing. This criterion is established in section \ref{section:crit}. It is similar to the criterion in \cite{FS} but modified in many places because of the new conjugation map $g_n$ and the new type of partitions. The construction presented here combines the advantages of shearing maps and local isometries, assembling the local maps in such a way that the derivatives of the resulting conjugation maps can be suitably bounded. Unfortunately, this requires a fairly elaborate and slightly technical construction.
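The obstruction just mentioned is elementary linear algebra: in the $(\theta, r_1)$-plane the linearization of the shear $\tilde{g}_b$ has determinant one, yet its singular values are far from $1$. A quick numerical sketch (Python with numpy, purely illustrative; the value $b = 5$ is an arbitrary stand-in for $\left[n q^{\sigma}_{n}\right]$):

```python
import numpy as np

b = 5  # arbitrary stand-in for the shear parameter [n * q_n^sigma]

# Jacobian of the shear (theta, r1) -> (theta + b * r1, r1)
J = np.array([[1.0, b],
              [0.0, 1.0]])

det = np.linalg.det(J)                   # 1: Lebesgue measure is preserved
sv = np.linalg.svd(J, compute_uv=False)  # singular values far from 1
print(det, sv)
```

The largest singular value grows roughly like $b$, so the length distortion worsens as $n$ increases; this is why $g_n$ is only required to agree with $\tilde{g}_{\left[n q^{\sigma}_{n}\right]}$ on the listed sets while acting isometrically on the $\zeta_n$-elements.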
\\ In section \ref{section:conv} we will show convergence of the sequence $\left(f_n\right)_{n \in \mathbb{N}}$ in $\mathcal{A}_{\alpha}\left(M\right)$ for a given Liouville number $\alpha$ by the same approach as in \cite{FS}. To do so, we have to estimate the norms $||| H_n |||_k$ very carefully. Furthermore, we will see at the end of section \ref{section:conv} that the criterion for weak mixing applies to the obtained diffeomorphism $f = \lim_{n\rightarrow \infty}f_n$. Finally, we will construct the desired $f$-invariant measurable Riemannian metric in section \ref{section:metric}. \section{Explicit constructions} \label{section:constr} \subsection{Sequences of partial partitions} In this subsection we define the two announced sequences of partial partitions $\left(\eta_n\right)_{n \in \mathbb{N}}$ and $\left(\zeta_n\right)_{n \in \mathbb{N}}$ of $M = \mathbb{S}^{1} \times \left[0,1\right]^{m-1}$. \subsubsection{Partial partition $\eta_n$} \label{subsubsection:eta} \begin{rem} For convenience we will use the notation $\prod^{m}_{i=2} \left[ a_i, b_i\right]$ for $\left[a_2, b_2\right] \times ... \times \left[a_m, b_m\right]$. \end{rem} Initially, $\eta_n$ will be constructed on the fundamental sector $\left[0, \frac{1}{q_n}\right] \times \left[0,1\right]^{m-1}$. 
For this purpose we divide the fundamental sector into $n$ sections: \begin{itemize} \item In case of $k \in \mathbb{N}$ and $2\leq k \leq n-1$ on $\left[\frac{k-1}{n \cdot q_n}, \frac{k}{n \cdot q_n} \right] \times \left[0,1\right]^{m-1}$ the partial partition $\eta_n$ consists of all multidimensional intervals of the following form: \begin{equation*} \begin{split} & \Bigg{[} \frac{k-1}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+ \frac{j^{\left((m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}\right)}_{1}}{n \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n} + \frac{1}{10 \cdot n^5 \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n}, \\ & \quad \frac{k-1}{n \cdot q_n}+ \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+\frac{j^{\left((m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}\right)}_{1} + 1}{n \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n} - \frac{1}{10 \cdot n^5 \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n} \Bigg{]} \\ \times & \prod^{m}_{i=2} \left[ \frac{j^{(1)}_i}{q_n} + ...+\frac{j^{(k+1)}_i}{q^{k+1}_n} + \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n}, \frac{j^{(1)}_i}{q_n} + ...+\frac{j^{(k+1)}_i +1}{q^{k+1}_n} - \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n}\right], \end{split} \end{equation*} where $j^{(l)}_1 \in \mathbb{Z}$ and $\left\lceil \frac{q_n}{10n^4} \right\rceil \leq j^{(l)}_1 \leq q_n - \left\lceil \frac{q_n}{10n^4} \right\rceil - 1$ for $l=1,...,(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}$ as well as $j^{(l)}_{i} \in \mathbb{Z}$ and $\left\lceil \frac{q_n}{10n^4} \right\rceil \leq j^{(l)}_{i} \leq q_n - \left\lceil \frac{q_n}{10n^4} \right\rceil - 1$ for $i=2,...,m$ and $l=1,...,k+1$. \item On $\left[0, \frac{1}{n \cdot q_n}\right] \times \left[0,1\right]^{m-1}$ as well as $\left[\frac{n-1}{n \cdot q_n}, \frac{1}{q_n} \right] \times \left[0,1\right]^{m-1}$ there are no elements of the partial partition $\eta_n$.
\end{itemize} By applying the map $R_{l/q_n}$ with $l \in \mathbb{Z}$, this partial partition of $\left[0, \frac{1}{q_n}\right] \times \left[0,1\right]^{m-1}$ is extended to a partial partition of $\mathbb{S}^{1} \times \left[0,1\right]^{m-1}$. \begin{rem} By construction this sequence of partial partitions converges to the decomposition into points. \end{rem} \subsubsection{Partial partition $\zeta_n$} \label{subsubsection:zeta} As in the previous case we will construct the partial partition $\zeta_n$ on the fundamental sector $\left[0, \frac{1}{q_n}\right] \times \left[0,1\right]^{m-1}$ initially and therefore divide this sector into $n$ sections: In case of $k \in \mathbb{N}$ and $1\leq k \leq n$ on $\left[\frac{k-1}{n \cdot q_n}, \frac{k}{n \cdot q_n} \right] \times \left[0,1\right]^{m-1}$ the partial partition $\zeta_n$ consists of all multidimensional intervals of the following form: \begin{equation*} \begin{split} & \Bigg{[}\frac{k-1}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^2_n}+...+ \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}\right)}_{1}}{n \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n}+\frac{1}{n^5 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n},\\ & \quad \frac{k-1}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^2_n}+...+ \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}\right)}_{1}+1}{n \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n}-\frac{1}{n^5 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n} \Bigg{]} \\ \times & \Bigg{[} \frac{j^{(1)}_2}{q_n}+ ... 
+\frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+1\right)}_2}{q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n}+ \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+2\right)}_2 }{8 n^5 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n \cdot \left[n q^{\sigma}_{n}\right]} + \frac{1}{8 n^9 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n \cdot \left[ n q^{\sigma}_{n}\right]}, \\ & \quad \frac{j^{(1)}_2}{q_n} + ... +\frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+1\right)}_2}{q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n}+ \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+2\right)}_2 + 1}{8 n^5 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n \cdot \left[n q^{\sigma}_{n}\right]} - \frac{1}{8 n^9 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n \cdot \left[ n q^{\sigma}_{n}\right]}\Bigg{]} \\ \times & \prod^{m}_{i=3} \left[ \frac{j^{(1)}_i}{q_n} + ... + \frac{j^{(k)}_i}{q^{k}_n} + \frac{1}{n^4 \cdot q^{k}_n}, \frac{j^{(1)}_i}{q_n} +...+ \frac{j^{(k)}_i +1}{q^{k}_n}- \frac{1}{n^4 \cdot q^{k}_n}\right], \end{split} \end{equation*} where $j^{(l)}_i \in \mathbb{Z}$ and $\left\lceil \frac{q_n}{n^4} \right\rceil \leq j^{(l)}_i \leq q_n - \left\lceil \frac{q_n}{n^4} \right\rceil - 1$ for $i=3,...,m$ and $l=1,..,k$ as well as $j^{(l)}_{1} \in \mathbb{Z}$, $\left\lceil \frac{q_n}{n^4} \right\rceil \leq j^{(l)}_{1} \leq q_n - \left\lceil \frac{q_n}{n^4} \right\rceil - 1$ for $l=1,...,(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}$ as well as $j^{(l)}_2 \in \mathbb{Z}$ and $\left\lceil \frac{q_n}{n^4} \right\rceil \leq j^{(l)}_2 \leq q_n - \left\lceil \frac{q_n}{n^4} \right\rceil - 1$ for $l=1,...,(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+1$ as well as $j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+2\right)}_2\in \mathbb{Z}$ and $8 \cdot n \cdot \left[n \cdot q^{\sigma}_{n}\right] \leq j^{\left((m-1) \cdot \frac{k\cdot \left(k+1\right)}{2}+2\right)}_2 
\leq 8 \cdot n^5 \cdot \left[n \cdot q^{\sigma}_{n}\right] - 8 \cdot n \cdot \left[n \cdot q^{\sigma}_{n}\right] - 1$. \begin{rem} \label{rem:überd} For every $n\geq 3$ the partial partition $\zeta_n$ consists of disjoint sets, covers a set of measure at least $1-\frac{4 \cdot m}{n^2}$, and the sequence $\left(\zeta_n\right)_{n \in \mathbb{N}}$ converges to the decomposition into points. \end{rem} \subsection{The conjugation map $g_n$} \label{subsection:g} Let $\sigma \in (0,1)$. As mentioned in the sketch of the proof we aim for a smooth measure-preserving diffeomorphism $g_n$ which satisfies $g_n \left(\hat{I}_n\right) = \tilde{g}_{\left[n q^{\sigma}_{n}\right]} \left(\hat{I}_n\right)$ as well as $g_n \left( \Phi_n \left(\hat{I}_n\right) \right) = \tilde{g}_{\left[n q^{\sigma}_{n}\right]} \left( \Phi_n \left(\hat{I}_n\right) \right)$ for every $\hat{I}_n \in \eta_n$ and is an isometry on the image under $\phi_n$ of any partition element $\check{I}_n \in \zeta_n$. \\ Let $a,b \in \mathbb{Z}$ and $\varepsilon \in \left(0, \frac{1}{16}\right]$ such that $\frac{1}{\varepsilon} \in \mathbb{Z}$. Moreover, we consider $\delta >0$ such that $\frac{1}{\delta} \in \mathbb{Z}$ and $\frac{a\cdot b \cdot \delta}{\varepsilon} \in \mathbb{Z}$. We denote $\left[0,1\right]^2$ by $\Delta$ and $\left[\varepsilon, 1- \varepsilon \right]^2$ by $\Delta\left(\varepsilon\right)$. \begin{lem} \label{lem:g} For every $\varepsilon \in \left(0, \frac{1}{16}\right]$ there exists a smooth measure-preserving diffeomorphism $g_{\varepsilon}: \left[0,1\right]^2 \rightarrow \left\{ \left(x + \varepsilon \cdot y, y \right) \ : \ x,y \in \left[0,1\right]\right\}$ that is the identity on $\Delta\left(4 \varepsilon \right)$ and coincides with the map $\left(x,y\right) \mapsto \left(x + \varepsilon \cdot y, y\right)$ on $\Delta \setminus \Delta\left(\varepsilon\right)$. 
\end{lem} \begin{pr} First of all let $\psi_{\varepsilon}:\mathbb{R}^2\to\mathbb{R}^2$ be a smooth diffeomorphism satisfying \begin{equation*} \psi_{\varepsilon}\left(x,y\right) = \begin{cases} \left(x,y\right)& \textit{on $\mathbb{R}^2 \setminus \Delta\left(2\varepsilon\right)$} \\ \left(\frac{1}{2}+\frac{1}{5} \cdot \left(x-\frac{1}{2}\right), \frac{1}{2}+\frac{1}{5} \cdot \left(y-\frac{1}{2}\right)\right)& \textit{on $\Delta\left(4 \varepsilon\right)$} \end{cases} \end{equation*} Furthermore, let $\tau_{\varepsilon}$ be a smooth diffeomorphism with the following properties \begin{equation*} \tau_{\varepsilon}\left(x,y\right) = \begin{cases} \left(x + \varepsilon \cdot y,y \right)& \textit{on $\left\{\left(x-\frac{1}{2}\right)^2+\left(y-\frac{1}{2}\right)^2 \geq \left(\frac{5}{16}\right)^2\right\}$} \\ \left(x,y\right)& \textit{on $\left\{\left(x-\frac{1}{2}\right)^2+\left(y-\frac{1}{2}\right)^2 \leq \frac{1}{50}\right\}$} \end{cases} \end{equation*} We define $\bar{g}_{\varepsilon} \coloneqq \psi^{-1}_{\varepsilon} \circ \tau_{\varepsilon} \circ \psi_{\varepsilon}$. Then the diffeomorphism $\bar{g}_{\varepsilon}$ coincides with the identity on $\Delta\left(4 \varepsilon\right)$ and with the map $\left(x,y\right) \mapsto \left(x + \varepsilon \cdot y, y\right)$ on $\mathbb{R}^2 \setminus \Delta\left(\varepsilon\right)$. From this we conclude that $\det \left(D\bar{g}_{\varepsilon}\right) > 0$. Moreover, $\bar{g}_{\varepsilon}$ is measure-preserving on $U_{\varepsilon} \coloneqq \left(\mathbb{R}^2 \setminus \Delta\left(\varepsilon\right)\right) \cup \Delta\left(4 \varepsilon \right)$. \\ With the aid of ``Moser's trick'' we want to construct a diffeomorphism $g_{\varepsilon}$ which is measure-preserving on the whole $\mathbb{R}^2$ and agrees with $\bar{g}_{\varepsilon}$ on $U_{\varepsilon}$. 
To do so, we consider the canonical volume form $\Omega_0$ on $\mathbb{R}^2$: $\Omega_0 = dx \wedge dy$; in other words, $\Omega_0 = d\omega_0$ using the 1-form $\omega_0 = \frac{1}{2} \cdot \left( x \cdot dy - y \cdot dx\right)$. Additionally we introduce the volume form $\Omega_1 \coloneqq \bar{g}^{*}_{\varepsilon} \Omega_0$. \\ At first we note that $\bar{g}_{\varepsilon}$ preserves the 1-form $\omega_0$ on $U_{\varepsilon}$: Clearly this holds on $\Delta\left(4 \varepsilon\right)$, where $\bar{g}_{\varepsilon}$ is the identity. On $\mathbb{R}^2 \setminus \Delta \left(\varepsilon \right)$ we have $\bar{g}_{\varepsilon} \left(x,y\right) = \left(x + \varepsilon \cdot y, y\right)$, and thus we get \begin{equation*} \bar{g}^{*}_{\varepsilon} \omega_0 = \omega_0 \left(x+ \varepsilon \cdot y, y\right) = \frac{1}{2} \cdot \left(\left(x+ \varepsilon \cdot y\right) dy - y \cdot d\left(x+\varepsilon \cdot y\right)\right) = \frac{1}{2} \cdot \left(x \cdot dy - y \cdot dx\right) = \omega_0\left(x,y\right). \end{equation*} Furthermore, we introduce $\Omega' \coloneqq \Omega_1 - \Omega_0$. Since the exterior derivative commutes with the pull-back, it holds that $\Omega' = d\left(\bar{g}^{*}_{\varepsilon} \omega_0 - \omega_0\right)$. In addition we consider the volume form $\Omega_t \coloneqq \Omega_0 + t \cdot \Omega'$ and note that $\Omega_t$ is non-degenerate for $t\in[0,1]$. Thus, we get a uniquely defined vector field $X_t$ such that $\Omega_t\left(X_t, \cdot \right) = \left(\omega_0 - \bar{g}^{*}_{\varepsilon} \omega_0\right) \left(\cdot\right)$. Since $\Delta$ is a compact manifold, the non-autonomous differential equation $\frac{d}{dt} u(t) = X_t \left(u(t)\right)$ with initial values in $\Delta$ has a solution defined on $\mathbb{R}$. Hence, we get a one-parameter family of diffeomorphisms $\left\{\nu_t\right\}_{t \in \left[0,1\right]}$ on $\Delta$ satisfying $\dot{\nu}_t = X_t\left(\nu_t\right)$, $\nu_0 = \text{id}$.
\\ Referring to \cite[Lemma 2.2]{Be}, it holds that \begin{equation*} \frac{d}{dt} \nu^{\ast}_t \Omega_t = d\left(\nu^{\ast}_t\left( i\left(X_t\right) \Omega_t \right)\right) + \nu^{\ast}_t\left(\frac{d}{dt} \Omega_t + i\left(X_t\right)d\Omega_t\right). \end{equation*} Because of $d\left(\nu^{\ast}_t\left( i\left(X_t\right) \Omega_t \right)\right) = \nu^{\ast}_t\left(d\left( i\left(X_t\right) \Omega_t \right)\right)$ and $d \Omega_t = d\left( d \omega_0 + t \cdot \left(d \left(\bar{g}^{\ast}_{\varepsilon} \omega_0\right) - d \omega_0\right)\right) = 0$ we compute: \begin{align*} \frac{d}{dt} \nu^{\ast}_t \Omega_t & = \nu^{\ast}_t\left(d\left( i\left(X_t\right) \Omega_t \right)\right) + \nu^{\ast}_t\left(\frac{d}{dt} \Omega_t\right) = \nu^{\ast}_t d \left( \Omega_t\left(X_t, \cdot \right) \right) + \nu^{\ast}_t \Omega' \\ & = \nu^{\ast}_t d\left(\omega_0 - \bar{g}^{\ast}_{\varepsilon}\omega_0\right) + \nu^{\ast}_t \Omega' = \nu^{\ast}_t \left( \Omega_0 - \Omega_1 \right) + \nu^{\ast}_t \left( \Omega_1 - \Omega_0 \right) = 0. \end{align*} Consequently $\nu^{\ast}_1 \Omega_1 = \nu^{\ast}_0 \Omega_0 = \Omega_0$ (using $\nu_0 = \text{id}$ in the last step). As we have seen, it holds that $\bar{g}^{*}_{\varepsilon} \omega_0 = \omega_0$ on $U_{\varepsilon}$. Therefore, on $U_{\varepsilon}$ it holds that $\Omega_t \left( X_t, \cdot \right) = 0$. Since $\Omega_t$ is non-degenerate, we conclude that $X_t = 0$ on $U_{\varepsilon}$ and hence $\nu_1 = \nu_0 = \text{id}$ on $U_{\varepsilon} \cap \Delta$. Now we can extend $\nu_1$ smoothly to $\mathbb{R}^2$ as the identity. \\ Denote $g_{\varepsilon} \coloneqq \bar{g}_{\varepsilon} \circ \nu_1$. Because of $\nu_1 = \text{id}$ on $U_{\varepsilon}$, the map $g_{\varepsilon}$ coincides with $\bar{g}_{\varepsilon}$ on $U_{\varepsilon}$ as announced. 
Furthermore we have \begin{equation*} g^{\ast}_{\varepsilon} \Omega_0 = \left(\bar{g}_{\varepsilon} \circ \nu_1 \right)^{\ast} \Omega_0 = \nu^{\ast}_1 \left( \bar{g}^{\ast}_{\varepsilon} \Omega_0\right)= \nu^{\ast}_1 \Omega_1 = \Omega_0. \end{equation*} Using the transformation formula we compute for an arbitrary measurable set $A \subseteq \mathbb{R}^2$: \begin{equation*} \mu\left( g_{\varepsilon}\left(A\right)\right) = \int_{g_{\varepsilon}\left(A\right)} \Omega_0 = \int_{A} \left|\det\left(Dg_{\varepsilon}\right)\right| \cdot \Omega_0. \end{equation*} We know $\det\left(D\nu_1\right) > 0$ (because $\nu_0 = \text{id}$ and all the maps $\nu_t$ are diffeomorphisms) as well as $\det\left(D \bar{g}_{\varepsilon}\right) > 0$, and thus $\left|\det\left(Dg_{\varepsilon}\right)\right| = \det\left(Dg_{\varepsilon}\right)$. Since $g^{\ast}_{\varepsilon} \Omega_0 = \left(\det\left(Dg_{\varepsilon}\right) \right) \cdot \Omega_0$ (compare with \cite[proposition 5.1.3.]{KH}) we finally conclude: \begin{equation*} \mu\left( g_{\varepsilon}\left(A\right)\right) = \int_{A} \left(\det\left(Dg_{\varepsilon}\right)\right) \cdot \Omega_0 = \int_{A} g^{\ast}_{\varepsilon} \Omega_0 = \int_{A} \Omega_0 = \mu\left(A\right). \end{equation*} Consequently $g_{\varepsilon}$ is a measure-preserving diffeomorphism on $\mathbb{R}^2$ satisfying the desired properties. \end{pr} Let $\tilde{g}_b: \mathbb{S}^1 \times \left[0,1\right]^{m-1} \rightarrow \mathbb{S}^1 \times \left[0,1\right]^{m-1}$ be the smooth measure-preserving diffeomorphism $\tilde{g}_b \left(\theta, r_1,...,r_{m-1}\right) = \left(\theta + b \cdot r_1, r_1,...,r_{m-1}\right)$ and denote $\left[0, \frac{1}{a}\right] \times \left[0, \frac{\varepsilon}{b \cdot a}\right] \times \left[\delta,1- \delta\right]^{m-2}$ by $\Delta_{a,b, \varepsilon, \delta}$. 
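The identity $\bar{g}^{*}_{\varepsilon} \omega_0 = \omega_0$ for the plain shear, which drives the Moser argument above, can be double-checked symbolically. In the following sketch (Python with sympy, not part of the construction) $dx$ and $dy$ are treated as formal symbols and the pullback is expanded by hand:

```python
import sympy as sp

x, y, eps, dx, dy = sp.symbols('x y epsilon dx dy')

# omega_0 = (1/2)(x dy - y dx), with dx and dy as formal symbols
omega0 = sp.Rational(1, 2) * (x * dy - y * dx)

# pullback under the shear (x, y) -> (x + eps*y, y):
# substitute x -> x + eps*y and dx -> d(x + eps*y) = dx + eps*dy
pullback = omega0.subs([(x, x + eps * y), (dx, dx + eps * dy)], simultaneous=True)

print(sp.expand(pullback - omega0))  # 0: the shear preserves omega_0
```

The $\varepsilon$-terms $\frac{\varepsilon}{2} y\, dy$ cancel in pairs, which is exactly the computation displayed in the proof.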
Using the map $D_{a,b,\varepsilon}: \mathbb{R}^m \rightarrow \mathbb{R}^m, \left(\theta, r_1,..., r_{m-1}\right) \mapsto \left(a \cdot \theta, \frac{b \cdot a}{\varepsilon} \cdot r_1, r_2,..., r_{m-1} \right)$ and $g_{\varepsilon}$ from Lemma \ref{lem:g} we define the measure-preserving diffeomorphism $g_{a,b, \varepsilon, \delta}: \Delta_{a,b, \varepsilon, \delta} \rightarrow \tilde{g}_b\left(\Delta_{a,b, \varepsilon, \delta}\right)$ by setting $g_{a,b, \varepsilon, \delta} = D^{-1}_{a,b, \varepsilon} \circ \left(g_{\varepsilon}, \text{id}_{\mathbb{R}^{m-2}}\right) \circ D_{a,b, \varepsilon}$. Using the fact that $\frac{a\cdot b \cdot \delta}{\varepsilon} \in \mathbb{Z}$ we extend it to a smooth diffeomorphism $g_{a,b, \varepsilon, \delta}: \left[0, \frac{1}{a}\right] \times \left[\delta,1-\delta\right]^{m-1} \rightarrow \tilde{g}_b \left( \left[0, \frac{1}{a}\right] \times \left[\delta,1-\delta\right]^{m-1} \right)$ by the description: \begin{equation*} g_{a,b, \varepsilon, \delta} \left(\theta , r_1 + l \cdot \frac{\varepsilon}{b \cdot a},r_2,..., r_{m-1} \right) = \left(l \cdot \frac{\varepsilon}{a}, l \cdot \frac{\varepsilon}{b \cdot a}, \vec{0}\right) + g_{a,b, \varepsilon, \delta} \left( \theta, r_1,...,r_{m-1}\right) \end{equation*} for $r_1 \in \left[0, \frac{\varepsilon}{b \cdot a}\right]$ and some $l \in \mathbb{Z}$ satisfying $\frac{b \cdot a \cdot \delta}{\varepsilon} \leq l \leq \frac{b \cdot a}{\varepsilon} - \frac{b \cdot a \cdot \delta}{\varepsilon} - 1$. Since this map coincides with the map $\tilde{g}_b$ in a neighbourhood of the boundary we can extend it to a map $g_{a,b,\varepsilon, \delta}: \left[0,\frac{1}{a}\right] \times \left[0,1\right]^{m-1} \rightarrow \tilde{g}_b\left(\left[0,\frac{1}{a}\right] \times \left[0,1\right]^{m-1}\right)$ by setting it equal to $\tilde{g}_b$ on $\left[0,\frac{1}{a}\right] \times \left(\left[0,1\right]^{m-1} \setminus \left[\delta,1-\delta\right]^{m-1}\right)$. 
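To see why the rescaling conjugation yields $\tilde{g}_b$ near the boundary, note that $D_{a,b,\varepsilon}$ stretches $r_1$ by $\frac{b \cdot a}{\varepsilon}$ while $D^{-1}_{a,b,\varepsilon}$ compresses $\theta$ by $a$, so the slope of the shear becomes $\varepsilon \cdot \frac{b \cdot a}{\varepsilon} \cdot \frac{1}{a} = b$. A numerical sketch (Python; the values of $a$, $b$, $\varepsilon$ are arbitrary stand-ins):

```python
# Hypothetical parameters standing in for the a, b, epsilon of the construction.
a, b, eps = 7, 5, 0.125

def D(theta, r1):
    # The rescaling D_{a,b,eps}, restricted to the (theta, r1)-coordinates.
    return a * theta, (b * a / eps) * r1

def D_inv(u, v):
    return u / a, (eps / (b * a)) * v

def g_eps_boundary(x, y):
    # Near the boundary of the unit square, g_eps is the plain shear.
    return x + eps * y, y

def g_conjugated(theta, r1):
    # Models g_{a,b,eps,delta} = D^{-1} o (g_eps, id) o D in the boundary region.
    return D_inv(*g_eps_boundary(*D(theta, r1)))

theta, r1 = 0.01, 0.001
out = g_conjugated(theta, r1)
print(out)  # agrees with tilde{g}_b: (theta + b * r1, r1)
```

This matches the extension step above, where $g_{a,b,\varepsilon,\delta}$ is glued to $\tilde{g}_b$ along the boundary region.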
\\ We initially construct the smooth measure-preserving diffeomorphism $g_n$ on the fundamental sector. For this, we divide the sector into $n$ sections: On $\left[\frac{k-1}{n \cdot q_n}, \frac{k}{n \cdot q_n}\right] \times \left[0,1\right]^{m-1}$ in case of $k \in \mathbb{Z}$ and $1 \leq k \leq n$: \begin{equation*} g_n = g_{n \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n, \left[n \cdot q^{\sigma}_{n}\right], \frac{1}{8n^4}, \frac{1}{32n^4}}. \end{equation*} Since $g_n$ coincides with the map $\tilde{g}_{\left[n \cdot q^{\sigma}_{n}\right]}$ in a neighbourhood of the boundary of the different sections on the $\theta$-axis, this yields a smooth map, and we can extend it to a smooth measure-preserving diffeomorphism on $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ using the description $g_n \circ R_{\frac{l}{q_n}} = R_{\frac{l}{q_n}} \circ g_n$ for $l \in \mathbb{Z}$. Furthermore, we note that the subsequent constructions are done in such a way that $260n^4$ divides $q_n$ (see Lemma \ref{lem:conv}) and so the assumption $\frac{a\cdot b \cdot \delta}{\varepsilon}=\frac{a \cdot b}{4} \in \mathbb{Z}$ is satisfied. Indeed, this map $g_n$ satisfies the following desired property: \begin{lem} \label{lem:outer} For every element $\hat{I}_n \in \eta_n$ we have $g_n\left(\hat{I}_n\right) = \tilde{g}_{\left[n q^{\sigma}_{n}\right]} \left(\hat{I}_n\right)$. \end{lem} \begin{pr} We consider a partition element $\hat{I}_{n,k} \in \eta_n$ on $\left[\frac{k-1}{n \cdot q_n}, \frac{k}{n \cdot q_n}\right] \times \left[0,1\right]^{m-1}$ in case of $k \in \mathbb{Z}$ and $2 \leq k \leq n-1$ and want to examine the effect of $g_n = g_{n \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n, \left[n \cdot q^{\sigma}_{n}\right], \frac{1}{8n^4}, \frac{1}{32n^4}}$ on it. 
\\ In the $r_1$-coordinate we use the fact that there is $u \in \mathbb{Z}$ such that \begin{equation*} \frac{1}{26n^4 q^{k+1}_n} =u \cdot \frac{\varepsilon}{b \cdot a} = u \cdot \frac{1}{8 n^4 \cdot \left[nq^{\sigma}_n\right] \cdot nq^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n}, \end{equation*} where we use the fact that $260n^4$ divides $q_n$ (Lemma \ref{lem:conv}). Also, with respect to the $\theta$-coordinate the boundary of this element lies in the domain where $g_{a,b, \varepsilon, \delta} = \tilde{g}_{\left[nq^{\sigma}_n\right]}$ because $\frac{1}{10 \cdot n^4} < \varepsilon =\frac{1}{8 \cdot n^4}$. \end{pr} \subsection{The conjugation map $\phi_n$} \label{subsection:phi} \begin{lem} \label{lem:varphi} For every $\varepsilon \in \left(0, \frac{1}{4}\right)$ and every $i,j \in \left\{1,...,m\right\}$ there exists a smooth measure-preserving diffeomorphism $\varphi_{\varepsilon, i, j}$ on $\mathbb{R}^m$ which is the rotation in the $x_i - x_{j}$-plane by $\pi/2$ about the point $\left(\frac{1}{2},..., \frac{1}{2}\right) \in \mathbb{R}^m$ on $\left[2 \varepsilon, 1-2 \varepsilon\right]^{m}$ and coincides with the identity outside of $\left[\varepsilon, 1-\varepsilon\right]^m$. \end{lem} \begin{pr} The proof is similar to the proof of Lemma \ref{lem:g}. (See also \cite[section 4.6]{GK} for a geometrical argument of the proof.) \end{pr} Furthermore, for $\lambda \in \mathbb{N}$ we define the maps $C_{\lambda} \left(x_1,x_2,...,x_m\right) = \left( \lambda \cdot x_1, x_2,..., x_m \right)$ and $D_{\lambda} \left(x_1,...,x_m \right) = \left( \lambda \cdot x_1, \lambda \cdot x_2,..., \lambda \cdot x_m \right)$. Let $\mu \in \mathbb{N}$, $\frac{1}{\delta} \in \mathbb{N}$ and assume $\frac{1}{\delta}$ divides $\mu$. 
We construct a diffeomorphism $\psi_{\mu, \delta, i, j, \varepsilon_2}$ in the following way: \begin{itemize} \item Consider $\left[0, 1-2 \cdot \delta\right]^m$: Since $\frac{1}{\delta}$ divides $\mu$, we can divide $\left[0, 1-2 \cdot \delta\right]^m$ into cubes of side length $\frac{1}{\mu}$. \item Under the map $D_{\mu}$ any of these cubes of the form $\prod^{m}_{i=1} \left[\frac{l_i}{\mu}, \frac{l_i +1}{\mu}\right]$ with $l_i \in \mathbb{N}$ is mapped onto $\prod^{m}_{i=1} \left[ l_i, l_i +1\right]$. \item On $\left[0,1\right]^{m}$ we will use the diffeomorphism $\varphi^{-1}_{\varepsilon_2, i, j}$ constructed in Lemma \ref{lem:varphi} . Since this is the identity outside of $\Delta\left(\varepsilon_2\right)$, we can extend it to a diffeomorphism $\bar{\varphi}^{-1}_{\varepsilon_2, i, j}$ on $\mathbb{R}^m$ using the instruction $\bar{\varphi}^{-1}_{\varepsilon_2, i, j} \left(x_1 +k_1, x_2 + k_2,...,x_m + k_m\right) = \left(k_1,...,k_m\right) + \varphi^{-1}_{\varepsilon_2, i, j}\left(x_1, x_2,...,x_m\right)$, where $k_i \in \mathbb{Z}$ and $x_i \in \left[0,1\right]$. 
\item Now we define the smooth measure-preserving diffeomorphism \begin{equation*} \tilde{\psi}_{\mu, \delta, i, j, \varepsilon_2} = D^{-1}_{\mu} \circ \bar{\varphi}^{-1}_{\varepsilon_2, i, j} \circ D_{\mu} \ \ \ :\ \ \ \left[0,1-2\delta\right]^{m} \rightarrow \left[0,1-2\delta\right]^{m} \end{equation*} \item With this we define \begin{align*} & \psi_{\mu, \delta, i, j, \varepsilon_2} \left(x_1,...,x_m \right) = \\ & \begin{cases} \left(\left[\tilde{\psi}_{\mu, \delta, i, j, \varepsilon_2}\left(x_1-\delta,...,x_m-\delta\right)\right]_1 + \delta,...,\left[\tilde{\psi}_{\mu, \delta, i, j, \varepsilon_2}\left(x_1-\delta,...,x_m-\delta\right)\right]_m + \delta \right)& \text{on $\left[\delta,1-\delta\right]^m$} \\ \left(x_1,...,x_m\right)& \text{otherwise} \end{cases} \end{align*} This is a smooth map because $\tilde{\psi}_{\mu, \delta, i, j, \varepsilon_2}$ is the identity in a neighbourhood of the boundary by the construction. \end{itemize} \begin{rem} \label{rem:W} For every set $W = \prod^{m}_{i=1} \left[\frac{l_i}{\mu} + r_i, \frac{l_i +1}{\mu} - r_i\right]$ where $l_i \in \mathbb{Z}$ and $r_i \in \mathbb{R}$ satisfies $\left|r_i \cdot \mu \right| \leq \varepsilon_2$ we have $\psi_{\mu, \delta, i, j, \varepsilon_2} \left(W\right) = W$. 
\end{rem} Using these maps we build the following smooth measure-preserving diffeomorphism: \begin{equation*} \tilde{\phi}_{\lambda, \varepsilon, i,j, \mu, \delta, \varepsilon_2} : \left[0, \frac{1}{\lambda} \right] \times \left[0,1\right]^{m-1} \rightarrow \left[0, \frac{1}{\lambda} \right] \times \left[0,1\right]^{m-1}, \ \ \ \tilde{\phi}_{\lambda, \varepsilon, i,j, \mu, \delta, \varepsilon_2} = C^{-1}_{\lambda} \circ \psi_{\mu, \delta, i, j, \varepsilon_2} \circ \varphi_{\varepsilon, i, j} \circ C_{\lambda} \end{equation*} Afterwards, $\tilde{\phi}_{\lambda, \varepsilon, i,j, \mu, \delta, \varepsilon_2}$ is extended to a diffeomorphism on $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ by the description $\tilde{\phi}_{\lambda, \varepsilon, i,j, \mu, \delta, \varepsilon_2}\left(x_1 + \frac{1}{\lambda}, x_2,..., x_m\right)= \left(\frac{1}{\lambda}, 0, ..., 0\right) + \tilde{\phi}_{\lambda, \varepsilon, i,j, \mu, \delta, \varepsilon_2}\left(x_1, x_2,..., x_m\right)$. For convenience we will use the notation $\phi^{(j)}_{\lambda, \mu} = \tilde{\phi}_{\lambda, \frac{1}{60n^4}, 1,j, \mu, \frac{1}{10n^4}, \frac{1}{22n^4}}$. With this we define the diffeomorphism $\phi_n$ on the fundamental sector: On $\left[\frac{k-1}{n \cdot q_n}, \frac{k}{n \cdot q_n}\right] \times \left[0,1\right]^{m-1}$ in case of $k \in \mathbb{N}$ and $1 \leq k \leq n$: \begin{equation*} \phi_n = \phi^{(m)}_{n \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k-1\right)}{2} + \left(m-2\right) \cdot k}_n, q^k_n} \circ \phi^{(m-1)}_{n \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k-1\right)}{2} + \left(m-3\right) \cdot k}_n, q^{k}_{n}} \circ ... \circ \phi^{(2)}_{n \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}}_n, q^{k}_{n}} \end{equation*} This is a smooth map because $\phi_n$ coincides with the identity in a neighbourhood of the different sections.
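The effect of $\tilde{\psi}_{\mu, \delta, i, j, \varepsilon_2} = D^{-1}_{\mu} \circ \bar{\varphi}^{-1}_{\varepsilon_2, i, j} \circ D_{\mu}$ on the grid of $\frac{1}{\mu}$-cubes can be illustrated in a toy model. The sketch below (Python) takes $m = 2$, ignores the smoothing collar of $\varphi_{\varepsilon_2, i, j}$, and replaces it by the exact $\pi/2$-rotation about the cube centre; it shows that each cube is mapped to itself, in the spirit of Remark \ref{rem:W}:

```python
import math

mu = 10  # hypothetical number of cubes per side; a stand-in for the mu of the text

def rotate_in_cube(x1, x2):
    """Toy model of D_mu^{-1} o (rotation by pi/2 about the cube centre) o D_mu
    for m = 2, without the smoothing collar of varphi."""
    u, v = mu * x1, mu * x2
    k1, k2 = math.floor(u), math.floor(v)  # which unit cube D_mu maps the point into
    fu, fv = u - k1, v - k2                # local coordinates inside [0,1]^2
    ru, rv = 1.0 - fv, fu                  # rotation by pi/2 about (1/2, 1/2)
    return (k1 + ru) / mu, (k2 + rv) / mu  # back via D_mu^{-1}

x = (0.234, 0.267)           # a point of the cube [0.2, 0.3] x [0.2, 0.3]
y = rotate_in_cube(*x)
print(y)                     # the image lies in the same cube
```

Four applications return to the starting point, reflecting that the local map is a genuine rotation; in the actual construction the collar of width $\varepsilon_2$ makes this rotation smooth up to the cube boundary.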
\\ Now we extend $\phi_n$ to a diffeomorphism on $\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ using the description $\phi_n \circ R_{\frac{1}{q_n}} = R_{\frac{1}{q_n}} \circ \phi_n$. \begin{figure} \caption{Qualitative shape of the action of $\phi_n$ on a partition element $\hat{I}_n$.} \end{figure} \section{$\left(\gamma, \delta, \epsilon\right)$-distribution} \label{section:distri} We introduce the central notion of the criterion for weak mixing deduced in the next section: \begin{dfn} Let $\Phi: M \rightarrow M$ be a diffeomorphism. We say $\Phi$ $\left(\gamma, \delta, \epsilon\right)$-distributes an element $\hat{I}$ of a partial partition, if the following properties are satisfied: \begin{itemize} \item $\pi_{\vec{r}}\left(\Phi\left(\hat{I}\right)\right)$ is an $\left(m-1\right)$-dimensional interval $J$, i.e. $J = I_1 \times ... \times I_{m-1}$ with intervals $I_k \subseteq \left[0,1\right]$, and $1-\delta \leq \lambda\left(I_k\right) \leq 1$ for $k=1,...,m-1$. Here $\pi_{\vec{r}}$ denotes the projection on the $\left(r_1,...,r_{m-1}\right)$-coordinates (i.e., the last $m-1$ coordinates; the first one is the $\theta$-coordinate). \item $\Phi\left(\hat{I}\right)$ is contained in a set of the form $\left[c,c+\gamma\right] \times J$ for some $c \in \mathbb{S}^{1}$. \item For every $\left(m-1\right)$-dimensional interval $\tilde{J} \subseteq J$ it holds: \begin{equation*} \left| \frac{\mu\left(\hat{I}\cap \Phi^{-1}\left(\mathbb{S}^{1}\times \tilde{J}\right)\right)}{\mu\left(\hat{I}\right)} - \frac{\mu^{(m-1)}\left(\tilde{J}\right)}{\mu^{(m-1)}\left(J\right)} \right| \leq \epsilon \cdot \frac{\mu^{(m-1)}\left(\tilde{J}\right)}{\mu^{(m-1)}\left(J\right)}, \end{equation*} where $\mu^{(m-1)}$ is the Lebesgue measure on $\left[0,1\right]^{m-1}$. \end{itemize} \end{dfn} \begin{rem} Analogous to \cite{FS} we will call the third property ``almost uniform distribution'' of $\hat{I}$ in the $r_1,...,r_{m-1}$-coordinates.
In the following we will often write it in the form of \begin{equation*} \left| \mu\left(\hat{I}\cap \Phi^{-1}\left(\mathbb{S}^{1}\times \tilde{J}\right)\right) \cdot \mu^{(m-1)}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu^{(m-1)}\left(\tilde{J}\right) \right| \leq \epsilon \cdot \mu \left(\hat{I} \right) \cdot \mu^{(m-1)}\left(\tilde{J}\right). \end{equation*} \end{rem} In the next step we define the sequence of natural numbers $\left(m_n\right)_{n \in \mathbb{N}}$: \begin{align*} m_n & = \min \left\{ m \leq q_{n+1} \ \ : \ \ m \in \mathbb{N},\ \ \inf_{k \in \mathbb{Z}} \left| m \cdot \frac{p_{n+1}}{q_{n+1}} - \frac{1}{n \cdot q_n} + \frac{k}{q_n}\right| \leq \frac{260 \cdot (n+1)^4}{q_{n+1}}\right\} \\ & = \min \left\{ m \leq q_{n+1} \ \ : \ \ m \in \mathbb{N},\ \ \inf_{k \in \mathbb{Z}} \left| m \cdot \frac{q_n \cdot p_{n+1}}{q_{n+1}} - \frac{1}{n} + k \right| \leq \frac{260 \cdot (n+1)^4 \cdot q_n}{q_{n+1}}\right\} \end{align*} \begin{lem} The set $\left\{ m \leq q_{n+1} \ \ : \ \ m \in \mathbb{N},\ \ \inf_{k \in \mathbb{Z}} \left| m \cdot \frac{q_n \cdot p_{n+1}}{q_{n+1}} - \frac{1}{n} + k \right| \leq \frac{260(n+1)^4 \cdot q_n}{q_{n+1}}\right\}$ is nonempty for every $n \in \mathbb{N}$, i.e., $m_n$ exists. \end{lem} \begin{pr} In Lemma \ref{lem:conv} we will construct the sequence $\alpha_n = \frac{p_n}{q_n}$ in such a way that $q_n = 260n^4 \cdot \tilde{q}_n$ and $p_n = 260n^4 \cdot \tilde{p}_n$ with $\tilde{p}_n, \tilde{q}_n$ relatively prime. Therefore, the set $\left\{ j \cdot \frac{q_n \cdot p_{n+1}}{q_{n+1}} \ : \ j=1,...,q_{n+1} \right\}$ contains $\frac{q_{n+1}}{260(n+1)^4 \cdot \text{gcd}\left(q_n, \tilde{q}_{n+1}\right)}$ different equally distributed points on $\mathbb{S}^1$. 
Hence there are at least $\frac{q_{n+1}}{260(n+1)^4 \cdot q_n}$ different such points and so for every $x \in \mathbb{S}^1$ there is a $j \in \left\{1,...,q_{n+1} \right\}$ such that $\inf_{k \in \mathbb{Z}} \left| x - j \cdot \frac{q_n \cdot p_{n+1}}{q_{n+1}} + k \right| \leq \frac{260(n+1)^4 \cdot q_n}{q_{n+1}}$. In particular, this is true for $x=\frac{1}{n}$. \end{pr} \begin{rem} \label{rem:an} We define \begin{equation*} a_n = \left(m_n \cdot \frac{p_{n+1}}{q_{n+1}} - \frac{1}{n \cdot q_n}\right) \text{ mod } \frac{1}{q_n}. \end{equation*} By the above construction of $m_n$ it holds that $\left|a_n \right| \leq \frac{260 \cdot (n+1)^4}{q_{n+1}}$. In Lemma \ref{lem:conv} we will see that it is possible to choose $q_{n+1} \geq 64 \cdot 260 \cdot \left(n+1\right)^4 \cdot n^{11} \cdot q^{(m-1) \cdot n^2+3}_n$. Thus, we get: \begin{equation*} \left|a_n \right| \leq \frac{1}{64 \cdot n^{11} \cdot q^{(m-1) \cdot n^2+3}_n}. \end{equation*} \end{rem} Our constructions are done in such a way that the following property is satisfied: \begin{lem} \label{lem:distri} The map $\Phi_n \coloneqq \phi_n \circ R^{m_n}_{\alpha_{n+1}} \circ \phi^{-1}_{n}$, with the conjugating maps $\phi_n$ defined in section \ref{subsection:phi}, $\left(\frac{1}{n \cdot q^{m}_{n}}, \frac{1}{n^4}, \frac{1}{n}\right)$-distributes the elements of the partition $\eta_n$. \end{lem} \begin{pr} We consider a partition element $\hat{I}_{n,k}$ on $\left[\frac{k-1}{n \cdot q_n}, \frac{k}{n \cdot q_n}\right] \times \left[0,1\right]^{m-1}$. When applying the map $\phi^{-1}_n$ we observe that this element is positioned in such a way that all the occurring maps $\varphi^{-1}_{\varepsilon,1,j}$ and $\varphi_{\varepsilon_2,1,j}$ act as the respective rotations.
Then we compute $\phi^{-1}_n\left(\hat{I}_{n,k}\right)$: \begin{equation*} \begin{split} & \Bigg{[} \frac{k-1}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+ \frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}\right)}_{1}}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+1}_n} + \frac{j^{(1)}_2}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+2}_n}+ ... + \frac{j^{(k)}_2}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k+1}_n} \\ & \quad + \frac{j^{(1)}_3}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k+2}_n}+...+\frac{j^{(k)}_m}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n}+\frac{1}{10 \cdot n^5 \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n}, \\ & \quad \frac{k-1}{n \cdot q_n}+ \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+\frac{j^{(k)}_m+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n}-\frac{1}{10 \cdot n^5 \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n} \Bigg{]} \\ \times & \prod^{m}_{i=2} \Bigg{[} 1-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-2) \cdot k +1\right)}_{1}}{q_n}-...-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-1) \cdot k \right)}_{1}+1}{q^k_n}+\frac{j^{(k+1)}_i}{q^{k+1}_n} + \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n}, \\ & \quad 1-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-2) \cdot k +1\right)}_{1}}{q_n}-...-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-1) \cdot k \right)}_{1}+1}{q^k_n}+\frac{j^{(k+1)}_i+1}{q^{k+1}_n} - \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n} \Bigg{]}. 
\end{split} \end{equation*} By our choice of the number $m_n$ the subsequent application of $R^{m_n}_{\alpha_{n+1}}$ yields modulo $\frac{1}{q_n}$: \begin{equation*} \begin{split} & \Bigg{[} \frac{k}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+ \frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}\right)}_{1}}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+1}_n} + \frac{j^{(1)}_2}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+2}_n}+ ... + \frac{j^{(k)}_2}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k+1}_n} \\ & \quad + \frac{j^{(1)}_3}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k+2}_n}+...+\frac{j^{(k)}_m}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n}+\frac{1}{10 \cdot n^5 \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n} + a_n, \\ & \quad \frac{k}{n \cdot q_n}+ \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+\frac{j^{(k)}_m+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n}-\frac{1}{10 \cdot n^5 \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n} + a_n \Bigg{]} \\ \times & \prod^{m}_{i=2} \Bigg{[} 1-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-2) \cdot k +1\right)}_{1}}{q_n}-...-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-1) \cdot k \right)}_{1}+1}{q^k_n} +\frac{j^{(k+1)}_i}{q^{k+1}_n} + \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n}, \\ & \quad 1-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-2) \cdot k +1\right)}_{1}}{q_n}-...-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-1) \cdot k \right)}_{1}+1}{q^k_n} +\frac{j^{(k+1)}_i+1}{q^{k+1}_n}- \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n} \Bigg{]}, \end{split} \end{equation*} where $a_n$ is the ``error term'' introduced in Remark \ref{rem:an}.
Under $\varphi_{\frac{1}{60n^4}, 1,2} \circ C_{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n}$ this is mapped to \begin{equation*} \begin{split} & \left(\frac{k}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+\frac{j^{(k)}_m}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n}, \vec{0}\right) + \\ & \Bigg{[}\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+1\right)}_1}{q_n}+ ... +\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k\right)}_1+1}{q^{k}_n}- \frac{j^{(k+1)}_2+1}{q^{k+1}_n}+ \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n}, \\ & \quad \frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+1\right)}_1}{q_n}+ ... +\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k\right)}_1+1}{q^{k}_n}- \frac{j^{(k+1)}_2}{q^{k+1}_n}- \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n}, \Bigg{]} \\ \times & \left[\frac{1}{10 \cdot n^4}+ n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n \cdot a_n, 1- \frac{1}{10 \cdot n^4}+ n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n \cdot a_n\right] \\ \times & \prod^{m}_{i=3} \Bigg{[} 1-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-2) \cdot k +1\right)}_{1}}{q_n}-...-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-1) \cdot k \right)}_{1}+1}{q^k_n} +\frac{j^{(k+1)}_i}{q^{k+1}_n} + \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n}, \\ & \quad 1-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-2) \cdot k +1\right)}_{1}}{q_n}-...-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-1) \cdot k \right)}_{1}+1}{q^k_n} +\frac{j^{(k+1)}_i+1}{q^{k+1}_n}- \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n} \Bigg{]} \end{split} \end{equation*} using the bound on $a_n$. 
With the aid of Remark \ref{rem:W}, the bound on $a_n$ from Remark \ref{rem:an} and the fact that $10n^4$ divides $q^{k+1}_n$ by Lemma \ref{lem:conv} we can compute the image of $\hat{I}_{n,k}$ under $\tilde{\phi}^{(2)}_{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n,q^{k+1}_n} \circ R^{m_n}_{\alpha_{n+1}} \circ \phi^{-1}_n$: \begin{equation*} \begin{split} & \Bigg{[} \frac{k}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+\frac{j^{(k)}_m}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n}+\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+1\right)}_1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+2}_n}+... \\ & \quad +\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k\right)}_1+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+k+1}_n}-\frac{j^{(k+1)}_2+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+k+2}_n}+ \frac{1}{26 \cdot n^5 \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+k+2}_n} , \\ & \quad \frac{k}{n \cdot q_n}+ \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...-\frac{j^{(k+1)}_2}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+k+2}_n}- \frac{1}{26 \cdot n^5 \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+k+2}_n} \Bigg{]} \\ \times & \left[\frac{1}{10 \cdot n^4}+ n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n \cdot a_n, 1- \frac{1}{10 \cdot n^4}+ n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n \cdot a_n\right] \\ \times & \prod^{m}_{i=3} \Bigg{[} 1-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-2) \cdot k +1\right)}_{1}}{q_n}-...-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-1) \cdot k \right)}_{1}+1}{q^k_n} +\frac{j^{(k+1)}_i}{q^{k+1}_n} + \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n}, \\ & \quad 1-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-2) \cdot k +1\right)}_{1}}{q_n}-...-\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+(i-1) \cdot k 
\right)}_{1}+1}{q^k_n} +\frac{j^{(k+1)}_i+1}{q^{k+1}_n}- \frac{1}{26 \cdot n^4 \cdot q^{k+1}_n} \Bigg{]}. \end{split} \end{equation*} Continuing in the same way we obtain that $\Phi_n\left(\hat{I}_{n,k}\right)$ is equal to \begin{equation*} \begin{split} & \Bigg{[} \frac{k}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...+ \frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}\right)}_{1}}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+1}_n} + \frac{j^{(1)}_2}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+2}_n}+ ...+ \frac{j^{(k)}_2}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k+1}_n} \\ & \quad + \frac{j^{(1)}_3}{n \cdot q^{(m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k+2}_n}+...+\frac{j^{(k)}_m}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n} \\ & \quad +\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+1\right)}_1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+2}_n}+...+\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k\right)}_1+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+k+1}_n}-\frac{j^{(k+1)}_2+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+k+2}_n} \\ & \quad +\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+k+1\right)}_1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+k+3}_n}+...+\frac{j^{\left((m-1) \cdot \frac{\left(k-1\right) \cdot k}{2}+2k\right)}_1+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+2k+2}_n}-\frac{j^{(k+1)}_3+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+2k+3}_n}+... 
\\ & \quad +\frac{j^{\left((m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}\right)}_1+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot \left(k+2\right)}{2}}_n}-\frac{j^{(k+1)}_m+1}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot \left(k+2\right)}{2}+1}_n}+\frac{1}{26 \cdot n^5 \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot \left(k+2\right)}{2}+1}_n} , \\ & \quad \frac{k}{n \cdot q_n}+ \frac{j^{(1)}_{1}}{n \cdot q^{2}_n}+...- \frac{j^{(k+1)}_m}{n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot \left(k+2\right)}{2}+1}_n}-\frac{1}{26 \cdot n^5 \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot \left(k+2\right)}{2}+1}_n} \Bigg{]} \\ \times & \left[\frac{1}{10 \cdot n^4}+ n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n \cdot a_n, 1- \frac{1}{10 \cdot n^4}+ n \cdot q^{(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}+1}_n \cdot a_n\right] \times \prod^{m}_{i=3} \left[\frac{1}{26n^4},1-\frac{1}{26n^4}\right]. \end{split} \end{equation*} Thus, such a set $\Phi_n\left(\hat{I}_{n}\right)$ with $\hat{I}_n \in \eta_n$ has a $\theta$-width of at most $\frac{1}{n \cdot q^{3m+1}_n}$.
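The bound on the error term $a_n$ used throughout this computation comes from the minimisation defining $m_n$, which is a plain brute-force search over $m \leq q_{n+1}$. The following sketch illustrates that search with toy values of $p$, $q$, $x$ and tolerance, standing in for $p_{n+1}$, $q_{n+1}$, $\frac{1}{n \cdot q_n}$ and $\frac{260 \cdot (n+1)^4}{q_{n+1}}$ (not the actual parameters of the construction):

```python
def find_m(p, q, x, tol, m_max):
    """Smallest m <= m_max with dist(m*p/q - x, Z) <= tol, or None if no
    such m exists.  Toy analogue of the minimisation defining m_n."""
    for m in range(1, m_max + 1):
        d = m * p / q - x
        d = abs(d - round(d))  # distance to the nearest integer
        if d <= tol:
            return m
    return None
```

For instance, with $\frac{p}{q} = \frac{3}{7}$, $x = 0.2$ and tolerance $0.06$ the search returns $m = 5$, and the distance of $5 \cdot \frac{3}{7} - 0.2$ to the nearest integer, roughly $0.057$, plays the role of the error $a_n$.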
\\ Moreover, we see that we can choose $\epsilon=0$ in the definition of a $\left(\gamma, \delta, \epsilon\right)$-distribution: With the notation $A_{\theta} \coloneqq \pi_{\theta}\left(\Phi_n\left(\hat{I}_n\right)\right)$ we have $\Phi_n\left(\hat{I}_n\right)= A_{\theta} \times J$ and so for every $(m-1)$-dimensional interval $\tilde{J}\subseteq J$: \begin{equation*} \frac{\mu\left(\hat{I}_n \cap \Phi^{-1}_n\left(\mathbb{S}^1 \times \tilde{J}\right)\right)}{\mu\left(\hat{I}_n\right)} = \frac{\mu\left(\Phi_n\left(\hat{I}_n\right) \cap \mathbb{S}^1 \times \tilde{J}\right)}{\mu\left(\Phi_n\left(\hat{I}_n\right)\right)} = \frac{\tilde{\lambda}\left(A_{\theta}\right) \cdot \mu^{(m-1)}\left(\tilde{J}\right)}{\tilde{\lambda}\left(A_{\theta}\right) \cdot \mu^{(m-1)}\left(J\right)} = \frac{\mu^{(m-1)}\left(\tilde{J}\right)}{\mu^{(m-1)}\left(J\right)} \end{equation*} because $\Phi_n$ is measure-preserving. \end{pr} Furthermore, we show the next property concerning the conjugating map $g_n$ constructed in section \ref{subsection:g}: \begin{lem} \label{lem:g-phi} For every $\hat{I}_n \in \eta_n$ we have: $g_n \left(\Phi_n\left(\hat{I}_n\right)\right) = \tilde{g}_{\left[n q^{\sigma}_{n}\right]} \left(\Phi_n\left(\hat{I}_n\right)\right)$. \end{lem} \begin{pr} In the proof of the preceding Lemma \ref{lem:distri} we computed $\Phi_n\left(\hat{I}_{n,k}\right)$ for a partition element $\hat{I}_{n,k}$. Now we have to examine the effect of $g_n = g_{n \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot \left(k+2\right)}{2}}_n, \left[n \cdot q^{\sigma}_{n}\right], \frac{1}{8n^4}, \frac{1}{32n^4}}$ on it. \\ Since $260n^4$ divides $q_n$ by Lemma \ref{lem:conv}, there is $u \in \mathbb{Z}$ such that \begin{equation*} \frac{1}{10n^4}=u \cdot \frac{\varepsilon}{b \cdot a} = u \cdot \frac{1}{8 n^4 \cdot \left[nq^{\sigma}_n\right] \cdot n q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot \left(k+2\right)}{2}}_n}.
\end{equation*} By $\frac{1}{26n^4} < \varepsilon = \frac{1}{8n^4}$ and the bound on $a_n$ the boundary of $\Phi_n\left(\hat{I}_{n,k}\right)$ lies in the domain where $g_{n \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot \left(k+2\right)}{2}}_n, \left[n \cdot q^{\sigma}_{n}\right], \frac{1}{8n^4}, \frac{1}{32n^4}} = \tilde{g}_{\left[nq^{\sigma}_n\right]}$. \end{pr} \section{Criterion for weak mixing} \label{section:crit} In this section we will prove a criterion for weak mixing on $M=\mathbb{S}^1 \times \left[0,1\right]^{m-1}$ in the setting of the preceding constructions. For the derivation we need a couple of lemmas. The first one expresses the weak mixing property in terms of the elements of the partial partitions: \begin{lem} \label{lem:app1} Let $f \in$ Diff$^{\infty}\left(M, \mu\right)$, $\left(m_n\right)_{n \in \mathbb{N}}$ be a sequence of natural numbers and $\left(\nu_n\right)_{n \in \mathbb{N}}$ be a sequence of partial partitions, where $\nu_n \rightarrow \varepsilon$ and, for every $n \in \mathbb{N}$, $\nu_n$ is the image of a partial partition $\eta_n$ under a measure-preserving diffeomorphism $F_n$, satisfying the following property: For every $m$-dimensional cube $A \subseteq \mathbb{S}^1 \times \left(0,1\right)^{m-1}$ and for every $\epsilon > 0$ there exists $N \in \mathbb{N}$ such that for every $n\geq N$ and for every $\Gamma_n \in \nu_n$ we have \begin{equation} \label{1} \left| \mu \left(\Gamma_n \cap f^{-m_n}\left(A\right)\right) - \mu\left(\Gamma_n\right) \cdot \mu\left(A\right) \right| \leq 3 \cdot \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A \right). \end{equation} Then $f$ is weakly mixing. \end{lem} \begin{pr} A diffeomorphism $f$ is weakly mixing if for all measurable sets $A,B \subseteq M$ it holds: \begin{equation*} \lim_{n\rightarrow \infty} \left| \mu\left(B\cap f^{-m_n}\left(A\right)\right) - \mu\left(B\right) \cdot \mu\left(A\right) \right| = 0.
\end{equation*} Since every measurable set in $M = \mathbb{S}^{1} \times \left[0,1\right]^{m-1}$ can be approximated by a countable disjoint union of $m$-dimensional cubes in $\mathbb{S}^{1} \times \left(0,1 \right)^{m-1}$ to arbitrary precision, we only have to prove the statement in the case that $A$ is an $m$-dimensional cube in $\mathbb{S}^{1} \times \left(0,1 \right)^{m-1}$. \\ Hence, we consider an arbitrary $m$-dimensional cube $A \subset \mathbb{S}^{1} \times \left(0,1 \right)^{m-1}$. Moreover, let $B \subseteq M$ be a measurable set. Since $\nu_n \rightarrow \varepsilon$, for every $\epsilon \in \left(0,1\right]$ there are $ n \in \mathbb{N}$ and a set $\hat{B} = \bigcup_{i \in \Lambda} \Gamma^{i}_{n}$, where $\Gamma^{i}_{n} \in \nu_n$ and $\Lambda$ is a countable set of indices, such that $\mu\left(B\triangle \hat{B}\right) < \epsilon \cdot \mu\left(B\right) \cdot \mu\left(A\right)$. We obtain for sufficiently large $n$: \begin{align*} & \left| \mu\left(B\cap f^{-m_n}\left(A\right)\right) - \mu\left(B\right) \cdot \mu\left(A\right) \right| \\ & \leq \left| \mu\left(B \cap f^{-m_n}\left(A\right)\right) - \mu\left(\hat{B} \cap f^{-m_n}\left(A\right)\right) \right| + \left| \mu\left(\hat{B} \cap f^{-m_n}\left(A\right)\right) - \mu\left(\hat{B}\right) \cdot \mu\left(A\right) \right|\\ & \quad + \left|\mu\left(\hat{B}\right) \cdot \mu\left(A\right) - \mu\left(B\right) \cdot \mu\left(A\right) \right| \\ \displaybreak[0] & = \left| \mu\left(B \cap f^{-m_n}\left(A\right)\right) - \mu\left(\hat{B} \cap f^{-m_n}\left(A\right)\right) \right| \\ & \quad + \left| \mu\left(\bigcup_{i \in \Lambda} \left(\Gamma^{i}_{n} \cap f^{-m_n}\left(A\right)\right)\right) - \mu\left(\bigcup_{i \in \Lambda} \Gamma^{i}_{n}\right) \cdot \mu\left(A\right) \right| + \mu\left(A\right) \cdot \left|\mu\left(\hat{B}\right) - \mu\left(B\right) \right| \\ \displaybreak[0] & \leq \mu\left(\hat{B} \triangle B\right) + \left| \sum_{i \in \Lambda} \mu\left(\Gamma^{i}_{n} \cap
f^{-m_n}\left(A\right)\right) - \mu\left(\Gamma^{i}_{n}\right) \cdot \mu\left(A\right) \right| + \mu\left(A\right) \cdot \mu\left(\hat{B} \triangle B\right) \\ \displaybreak[0] & \leq \epsilon \cdot \mu(B) \cdot \mu(A) + \sum_{i \in \Lambda} \left( \left| \mu\left(\Gamma^{i}_{n} \cap f^{-m_n}(A)\right) - \mu\left(\Gamma^{i}_{n}\right) \cdot \mu(A) \right| \right) + \epsilon \cdot \mu(A)^{2} \cdot \mu(B) \\ \displaybreak[0] & \leq \sum_{i \in \Lambda} \left( 3 \cdot \epsilon \cdot \mu\left(\Gamma^{i}_{n}\right) \cdot \mu(A) \right) + 2 \cdot \epsilon \cdot \mu(A) \cdot \mu(B) = 3 \cdot \epsilon \cdot \mu(A) \cdot \mu\left(\bigcup_{i \in \Lambda} \Gamma^{i}_{n}\right) + 2 \cdot \epsilon \cdot \mu(A) \cdot \mu(B) \\ \displaybreak[0] & = 3 \cdot \epsilon \cdot \mu(A) \cdot \mu\left(\hat{B}\right) + 2 \cdot \epsilon \cdot \mu(A) \cdot \mu(B)\leq 3 \cdot \epsilon \cdot \mu(A) \cdot \left( \mu(B) + \mu\left(\hat{B}\triangle B\right)\right) + 2 \cdot \epsilon \cdot \mu(A) \cdot \mu(B) \\ & \leq 5 \cdot \epsilon \cdot \mu(A) \cdot \mu(B) + 3 \cdot \epsilon^{2} \cdot \mu(A)^{2} \cdot \mu(B). \end{align*} This estimate shows $\lim_{n\rightarrow \infty} \left| \mu\left(B\cap f^{-m_n}\left(A\right)\right) - \mu\left(B\right) \cdot \mu\left(A\right) \right| = 0$, because $\epsilon$ can be chosen arbitrarily small. \end{pr} In property (\ref{1}) we want to replace $f$ by $f_n$: \begin{lem} \label{lem:app2} Let $f = \lim_{n \rightarrow \infty} f_n$ be a diffeomorphism obtained by the constructions in the preceding sections and $\left(m_n\right)_{n \in \mathbb{N}}$ be a sequence of natural numbers fulfilling $d_0\left(f^{m_n},f^{m_n}_{n}\right) < \frac{1}{2^{n}}$.
Furthermore, let $\left(\nu_n\right)_{n \in \mathbb{N}}$ be a sequence of partial partitions, where $\nu_n \rightarrow \varepsilon$ and, for every $n \in \mathbb{N}$, $\nu_n$ is the image of a partial partition $\eta_n$ under a measure-preserving diffeomorphism $F_n$, satisfying the following property: For every $m$-dimensional cube $A \subseteq \mathbb{S}^1 \times \left(0,1\right)^{m-1}$ and for every $\epsilon \in \left(0,1\right]$ there exists $N \in \mathbb{N}$ such that for every $n\geq N$ and for every $\Gamma_n \in \nu_n$ we have \begin{equation} \label{2} \left| \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A\right)\right) - \mu\left(\Gamma_n\right) \cdot \mu\left(A\right) \right| \leq \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right). \end{equation} Then $f$ is weakly mixing. \end{lem} \begin{pr} We want to show that the requirements of Lemma \ref{lem:app1} are fulfilled. This implies that $f$ is weakly mixing. \\ To this end, let $A \subseteq \mathbb{S}^{1} \times \left(0,1 \right)^{m-1}$ be an arbitrary $m$-dimensional cube and $\epsilon \in \left(0,1\right]$. \\ We consider two $m$-dimensional cubes $A_1, A_2 \subset \mathbb{S}^{1} \times \left(0,1 \right)^{m-1}$ with $A_1 \subset A \subset A_2$ as well as $\mu\left(A\triangle A_i\right) < \epsilon \cdot \mu\left(A\right)$ and, for sufficiently large $n$, dist$\left(\partial A, \partial A_i\right) > \frac{1}{2^{n}}$ for $i=1,2$. If $n$ is sufficiently large, we obtain for $\Gamma_n \in \nu_n$ and for $i=1,2$ by the assumptions of this lemma: \begin{equation*} \left| \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A_i\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A_i\right) \right| \leq \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A_i\right).
\end{equation*} From this we conclude $\left(1-\epsilon\right) \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A_1\right) \leq \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A_1\right)\right)$ on the one hand and $\mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A_2\right)\right) \leq \left(1+\epsilon\right) \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A_2\right)$ on the other hand. Because of $d_0\left(f^{m_n},f^{m_n}_{n}\right) < \frac{1}{2^{n}}$ the following relations are true: \begin{align*} f^{m_n}_{n}(x) \in A_1 & \Longrightarrow f^{m_n}(x) \in A, \\ f^{m_n}(x) \in A & \Longrightarrow f^{m_n}_{n}(x) \in A_2. \end{align*} Thus: $\mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A_1\right)\right) \leq \mu \left(\Gamma_n \cap f^{-m_n}\left(A\right)\right) \leq \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A_2\right)\right)$. \\ Altogether, it holds: $\left(1-\epsilon\right) \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A_1\right) \leq \mu \left(\Gamma_n \cap f^{-m_n}\left(A\right)\right) \leq \left(1+\epsilon\right) \cdot \mu \left(\Gamma_n \right) \cdot \mu\left(A_2\right)$.
With this, we obtain the following estimate from above: \begin{align*} & \mu \left(\Gamma_n \cap f^{-m_n}\left(A\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \\ & \leq \left(1+\epsilon \right) \cdot \mu \left(\Gamma_n \right) \cdot \mu \left(A_2\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A_2\right) + \mu \left(\Gamma_n \right) \cdot \left( \mu\left(A_2 \right) - \mu\left(A\right) \right) \\ & \leq \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A_2\right) + \mu \left(\Gamma_n\right) \cdot \mu\left(A_2 \triangle A\right) \leq \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \left(\mu(A) + \mu\left(A_2 \triangle A\right)\right) + \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \\ & \leq 2 \cdot \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) + \epsilon^{2} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \leq 3 \cdot \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right). \end{align*} Furthermore, we deduce the following estimate from below in an analogous way: \begin{equation*} \mu \left(\Gamma_n \cap f^{-m_n}\left(A\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \geq - 3 \cdot \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right). \end{equation*} Hence, we get: $\left| \mu \left(\Gamma_n \cap f^{-m_n}\left(A\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \right| \leq 3 \cdot \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right)$, i.e. the requirements of Lemma \ref{lem:app1} are met. \end{pr} Now we concentrate on the setting of our explicit constructions: \begin{lem} \label{lem:points} Consider the sequence of partial partitions $\left(\eta_n\right)_{n \in \mathbb{N}}$ constructed in section \ref{subsubsection:eta} and the diffeomorphisms $g_n$ from section \ref{subsection:g}.
Furthermore, let $\left(H_n\right)_{n \in \mathbb{N}}$ be a sequence of measure-preserving smooth diffeomorphisms satisfying $\left\|DH_{n-1}\right\|_0 \leq \frac{\ln\left(q_n\right)}{n}$ for every $n \in \mathbb{N}$ and define the partial partitions $\nu_n = \left\{ \Gamma_n = H_{n-1} \circ g_n \left( \hat{I}_n \right) \ : \ \hat{I}_n \in \eta_n \right\}$. \\ Then we get $\nu_n \rightarrow \varepsilon$. \end{lem} \begin{pr} By construction $\eta_n = \left\{ \hat{I}^{i}_{n} : i \in \Lambda_n \right\}$, where $\Lambda_n$ is a countable set of indices. Because of $\eta_n \rightarrow \varepsilon$ it holds $\lim_{n\rightarrow\infty} \mu \left(\bigcup_{i \in \Lambda_n} \hat{I}^{i}_{n} \right) = 1$. Since $H_{n-1} \circ g_n$ is measure-preserving, we conclude: \begin{equation*} \lim_{n\rightarrow\infty} \mu \left(\bigcup_{i \in \Lambda_n} \Gamma^{i}_{n}\right) = \lim_{n\rightarrow\infty} \mu \left(\bigcup_{i \in \Lambda_n} H_{n-1} \circ g_n\left(\hat{I}^{i}_{n} \right)\right) = \lim_{n\rightarrow\infty} \mu \left(H_{n-1} \circ g_n \left(\bigcup_{i \in \Lambda_n} \hat{I}^{i}_{n} \right)\right) = 1. \end{equation*} For any $m$-dimensional cube $W_n$ with side length $l_n$ it holds: diam$\left(W_n\right) = \sqrt{m} \cdot l_n$. Because every element of the partition $\eta_n$ is contained in a cube of side length $\frac{1}{q_n}$ it follows for every $i \in \Lambda_n$: diam$\left(\hat{I}^{i}_{n}\right) \leq \sqrt{m} \cdot \frac{1}{q_n}$. Furthermore, we saw in Lemma \ref{lem:outer}: $g_n\left(\hat{I}^i_n\right) = \tilde{g}_{\left[n q^{\sigma}_{n}\right]} \left(\hat{I}^i_n\right)$ for every $i \in \Lambda_n$.
Hence, for every $\Gamma^{i}_{n} = H_{n-1} \circ \tilde{g}_{\left[n q^{\sigma}_{n}\right]} \left(\hat{I}^{i}_{n}\right)$: \begin{equation*} \text{diam}\left(\Gamma^{i}_{n}\right) \leq \left\|DH_{n-1} \right\|_0 \cdot \left\| D\tilde{g}_{\left[n q^{\sigma}_{n}\right]} \right\|_0 \cdot \text{diam}\left(\hat{I}^{i}_{n}\right) \leq \frac{\ln\left(q_n\right)}{n} \cdot \left[n \cdot q^{\sigma}_{n} \right] \cdot \frac{\sqrt{m} }{q_n} \leq \sqrt{m} \cdot q^{\sigma-1}_{n} \cdot \ln\left(q_n\right). \end{equation*} Because of $\sigma < 1$ we conclude $\lim_{n\rightarrow\infty} $diam$\left(\Gamma^{i}_{n}\right) = 0$ and consequently $\nu_n \rightarrow \varepsilon$. \end{pr} In the following the Lebesgue measures on $\mathbb{S}^1$, $\left[0,1\right]^{m-2}$, $\left[0,1\right]^{m-1}$ are denoted by $\tilde{\lambda}$, $\mu^{(m-2)}$ and $\tilde{\mu}$ respectively. The next technical result is needed in the proof of Lemma \ref{lem:cube}. \begin{lem} \label{lem:help} Given an interval on the $r_1$-axis of the form $K = \bigcup_{k \in \mathbb{Z}, k_1 \leq k \leq k_2} \left[\frac{k \cdot \varepsilon}{b \cdot a}, \frac{(k+1) \cdot \varepsilon}{b \cdot a} \right]$, where $k_1, k_2 \in \mathbb{Z}$ with $\frac{b \cdot a}{\varepsilon} \cdot \delta \leq k_1 < k_2 \leq \frac{b \cdot a}{\varepsilon} - \frac{b \cdot a}{\varepsilon} \cdot \delta - 1$, and a $(m-2)$-dimensional interval $Z$ in $\left(r_2,...,r_{m-1}\right)$, let $K_{c, \gamma}$ denote the cuboid $\left[c, c + \gamma \right] \times K \times Z$ for some $\gamma > 0$. We consider the diffeomorphism $g_{a,b, \varepsilon,\delta}$ constructed in subsection \ref{subsection:g} and an interval $L=\left[l_1, l_2 \right]$ of $\mathbb{S}^1$ satisfying $\tilde{\lambda}\left(L\right)\geq 4 \cdot \frac{1-2 \varepsilon}{a} - \gamma$.
\\ If $b \cdot \lambda (K) > 2$, then for the set $Q:= \pi_{\vec{r}}\left(K_{c, \gamma} \cap g^{-1}_{a,b, \varepsilon,\delta}\left(L \times K \times Z\right)\right)$ we have: \begin{align*} & \left| \tilde{\mu}\left(Q\right) - \lambda \left(K\right) \cdot \tilde{\lambda} \left(L\right) \cdot \mu^{\left(m-2\right)}\left(Z\right) \right| \\ & \leq \left(\frac{2}{b}\cdot \tilde{\lambda} \left(L \right) + \frac{2 \cdot \gamma}{b} + \gamma \cdot \lambda \left(K\right) + 4 \cdot \frac{1-2\varepsilon}{a} \cdot \lambda (K) + 8 \cdot \frac{1-2\varepsilon}{b \cdot a}\right) \cdot \mu^{\left(m-2\right)}\left(Z\right). \end{align*} \end{lem} \begin{pr} We consider the diffeomorphism $\tilde{g}_b: M \rightarrow M$, $\left(\theta, r_1,...,r_{m-1}\right) \mapsto \left( \theta + b \cdot r_1, r_1,...,r_{m-1}\right)$, and the set \begin{align*} Q_b & \coloneqq \pi_{\vec{r}}\left( K_{c,\gamma} \cap \tilde{g}^{-1}_b\left(L \times K \times Z\right)\right) \\ & = \left\{ \left(r_1, r_2, ...,r_{m-1}\right) \in K \times Z \ : \ \left( \theta + b \cdot r_1, \vec{r}\right) \in L \times K \times Z \text{ for some } \theta \in \left[c, c+ \gamma \right] \right\} \\ & = \left\{ \left(r_1, r_2, ..., r_{m-1}\right) \in K \times Z \ : \ b \cdot r_1 \in \left[l_1 - c - \gamma , l_2 - c \right] \ \text{ mod } 1 \right\}. \end{align*} The interval $b \cdot K$, seen as an interval in $\mathbb{R}$, does not intersect more than $b \cdot \lambda(K) + 2$ and not less than $b \cdot \lambda \left(K\right) - 2$ intervals of the form $\left[ i, i+1 \right]$ with $i \in \mathbb{Z}$. By construction of the map $g_{a,b, \varepsilon,\delta}$ it holds for the blocks $\Delta_l \coloneqq \left[\frac{l \cdot \varepsilon}{b \cdot a}, \frac{(l+1) \cdot \varepsilon}{b \cdot a}\right]$ under consideration: $\pi_{\vec{r}} \left(g_{a,b, \varepsilon,\delta}\left(\left[c,c+ \gamma \right] \times \Delta_l \times Z\right)\right) = \Delta_l \times Z$.
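The interval count just used (an interval of length $\ell$ in $\mathbb{R}$ meets at least $\ell - 2$ and at most $\ell + 2$ of the intervals $\left[i, i+1\right]$, $i \in \mathbb{Z}$) is elementary arithmetic; a quick numerical sanity check, with toy values unrelated to the construction's parameters $a$, $b$:

```python
import math

def count_unit_intervals(start, length):
    """Number of integers i such that [i, i+1] meets [start, start+length]
    in a set of positive length."""
    end = start + length
    return sum(1 for i in range(math.floor(start), math.ceil(end) + 1)
               if min(end, i + 1) - max(start, i) > 0)
```

For example, an interval of length $3.5$ starting at $0.25$ meets $4$ unit intervals, which indeed lies between $3.5 - 2$ and $3.5 + 2$.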
\\ \textbf{Claim: } A resulting interval on the $r_1$-axis of $K_{c,\gamma} \cap \tilde{g}^{-1}_b\left(L \times K \times Z\right)$ and the corresponding $r_1$-projection of $K_{c,\gamma} \cap g^{-1}_{a,b,\varepsilon,\delta}\left(L \times K \times Z\right)$ can differ by a length of at most $4 \cdot \frac{1-2 \varepsilon}{b \cdot a}$. \\ \textbf{Proof: } If $\left\{c\right\} \times \Delta_l \times Z$ (resp. $\left\{c + \gamma \right\} \times \Delta_l \times Z$) is contained in the domain where $g_{a,b,\varepsilon,\delta} =\tilde{g}_b$, the left (resp. the right) boundaries of $\pi_{\theta}\left(g_{a,b, \varepsilon,\delta}\left(\left[c,c+ \gamma \right] \times \Delta_l \times Z\right)\right)$ and $\pi_{\theta}\left(\tilde{g}_b\left(\left[c,c+ \gamma \right] \times \Delta_l \times Z\right)\right)$ coincide. Otherwise, i.e. if $c \in \left(\frac{k}{a} + \varepsilon, \frac{k+1}{a} - \varepsilon\right)$ (resp. $c + \gamma \in \left(\frac{k}{a} + \varepsilon, \frac{k+1}{a} - \varepsilon\right)$), the sets $\pi_{\theta}\left(g_{a,b, \varepsilon,\delta}\left(\left\{c\right\} \times \Delta_l \times Z\right)\right)$ and $\pi_{\theta}\left(\tilde{g}_b\left(\left\{c\right\} \times \Delta_l \times Z\right)\right)$ (resp. $\pi_{\theta}\left(g_{a,b, \varepsilon,\delta}\left(\left\{c + \gamma \right\} \times \Delta_l \times Z\right)\right)$ and $\pi_{\theta}\left(\tilde{g}_b\left(\left\{c + \gamma \right\} \times \Delta_l \times Z\right)\right)$) differ by a length of at most $\frac{1-2\varepsilon}{a}$. Since $\pi_{\theta}\left(\tilde{g}_{b}\left(\left\{u\right\} \times \Delta_l \times Z\right)\right)$ for arbitrary $u \in \mathbb{S}^1$ has a length of $\frac{\varepsilon}{a}$ on the $\theta$-axis, this discrepancy will be equalised after at most $\frac{1-2\varepsilon}{a} : \frac{\varepsilon}{a} = \frac{1-2 \varepsilon}{\varepsilon}$ blocks $\Delta_l$ on the $r_1$-axis.
Thus, the resulting interval on the $r_1$-axis of $K_{c,\gamma} \cap \tilde{g}^{-1}_b\left(L \times K \times Z\right)$ and the corresponding $r_1$-projection of $K_{c,\gamma} \cap g^{-1}_{a,b,\varepsilon,\delta}\left(L \times K \times Z\right)$ can differ by a length of at most $4 \cdot \frac{1-2\varepsilon}{\varepsilon} \cdot \frac{\varepsilon}{b \cdot a} = 4 \cdot \left(1-2\varepsilon\right)\frac{1}{b \cdot a}$. \qed Therefore, we compute on the one hand: \begin{align*} & \tilde{\mu}\left(Q\right) \leq \left(b \cdot \lambda\left(K\right) + 2\right) \cdot \left(\frac{l_2 - \left(l_1 - \gamma\right)}{b} + 4 \cdot \frac{1-2 \varepsilon}{b \cdot a}\right) \cdot \mu^{(m-2)}\left(Z\right) \\ & = \left(\lambda\left(K\right) \cdot \tilde{\lambda} \left(L\right) + 2 \cdot \frac{\tilde{\lambda}\left(L\right)}{b} + \lambda\left(K\right) \cdot \gamma + \frac{2 \cdot \gamma}{b} + 4 \cdot \lambda(K) \cdot \frac{1-2 \varepsilon}{a} + 8 \cdot \frac{1-2 \varepsilon}{b \cdot a} \right) \cdot \mu^{(m-2)}\left(Z\right) \end{align*} and on the other hand \begin{align*} & \tilde{\mu}\left(Q\right) \geq \left(b \cdot \lambda\left(K\right) - 2\right) \cdot \left(\frac{l_2 - \left(l_1 - \gamma\right)}{b} - 4 \cdot \frac{1-2 \varepsilon}{b \cdot a} \right) \cdot \mu^{(m-2)}\left(Z\right) \\ & = \left(\lambda\left(K\right) \cdot \tilde{\lambda}\left(L\right) - 2 \cdot \frac{\tilde{\lambda}\left(L\right)}{b} + \lambda\left(K\right) \cdot \gamma - \frac{2 \cdot \gamma}{b} - 4 \cdot \lambda(K) \cdot \frac{1-2 \varepsilon}{a} + 8 \cdot \frac{1-2 \varepsilon}{b \cdot a} \right) \cdot \mu^{(m-2)}\left(Z\right).
\end{align*} Both estimates together yield: \begin{align*} & \left| \tilde{\mu}\left(Q\right) - \lambda\left(K\right) \cdot \tilde{\lambda} \left(L\right) \cdot \mu^{(m-2)}\left(Z\right) - \gamma \cdot \lambda\left(K\right) \cdot \mu^{(m-2)}\left(Z\right) - 8 \cdot \frac{1-2 \varepsilon}{b \cdot a}\cdot \mu^{(m-2)}\left(Z\right) \right| \\ & \leq \left(\frac{2}{b}\cdot \tilde{\lambda}\left(L\right) + \frac{2 \cdot \gamma}{b} + 4 \cdot \lambda(K) \cdot \frac{1-2 \varepsilon}{ a} \right) \cdot \mu^{(m-2)}\left(Z\right). \end{align*} The claim follows because \begin{align*} & \left| \tilde{\mu}\left(Q\right) - \lambda\left(K\right) \cdot \tilde{\lambda} \left(L\right) \cdot \mu^{(m-2)}\left(Z\right) \right| - \gamma \cdot \lambda\left(K\right) \cdot \mu^{(m-2)}\left(Z\right) - 8 \cdot \frac{1-2 \varepsilon}{b \cdot a} \cdot \mu^{(m-2)}\left(Z\right) \\ & \leq \left| \tilde{\mu}\left(Q\right) - \lambda\left(K\right) \cdot \tilde{\lambda} \left(L\right) \cdot \mu^{(m-2)}\left(Z\right) - \gamma \cdot \lambda\left(K\right) \cdot \mu^{(m-2)}\left(Z\right) - 8 \cdot \frac{1-2 \varepsilon}{b \cdot a} \cdot \mu^{(m-2)}\left(Z\right) \right|. \end{align*} \end{pr} \begin{lem} \label{lem:cube} Let $n\geq5$, $g_n$ as in section \ref{subsection:g} and $\hat{I}_n \in \eta_n$, where $\eta_n$ is the partial partition constructed in section \ref{subsubsection:eta}. For the diffeomorphism $\phi_n$ constructed in section \ref{subsection:phi} and $m_n$ as in chapter \ref{section:distri} we consider $\Phi_n = \phi_n \circ R^{m_n}_{\alpha_{n+1}} \circ \phi^{-1}_{n}$ and denote $\pi_{\vec{r}} \left( \Phi_n \left(\hat{I}_n\right)\right)$ by $J$.
\\ Then for every $m$-dimensional cube $S$ of side length $q^{-\sigma}_{n}$ lying in $\mathbb{S}^1 \times J$ we get \begin{equation} \left| \mu\left(\hat{I}_n \cap \Phi^{-1}_n \circ g^{-1}_{n}\left(S\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}_n\right) \cdot \mu\left(S\right) \right| \leq \frac{21}{n} \cdot \mu\left(\hat{I}_n\right) \cdot \mu\left(S\right). \end{equation} \end{lem} In other words, this lemma tells us that a partition element is ``almost uniformly distributed'' under $g_n \circ \Phi_n$ on the whole manifold $M= \mathbb{S}^1 \times \left[0,1\right]^{m-1}$. \begin{pr} Let $S$ be an $m$-dimensional cube of side length $q^{-\sigma}_{n}$ lying in $\mathbb{S}^{1} \times J$; in the following we abbreviate $\hat{I} \coloneqq \hat{I}_n$. Furthermore, we denote: \begin{equation*} S_\theta = \pi_{\theta}\left(S\right) \ \ \ \ \ \ \ \ \ \ S_{r_1} = \pi_{r_1}\left(S\right) \ \ \ \ \ \ \ \ \ \ S_{\tilde{\vec{r}}} = \pi_{\left(r_2,...,r_{m-1}\right)} \left(S\right) \ \ \ \ \ \ \ \ \ \ S_r = S_{r_1} \times S_{\tilde{\vec{r}}} = \pi_{\vec{r}}\left(S\right) \end{equation*} Obviously: $\tilde{\lambda}\left(S_{\theta}\right) = \lambda\left(S_{r_1}\right) = q^{-\sigma}_{n}$ and $\tilde{\lambda}\left(S_{\theta}\right) \cdot \lambda\left(S_{r_1}\right) \cdot \mu^{(m-2)}\left(S_{\tilde{\vec{r}}}\right)= \mu\left(S\right) = q^{-m \sigma}_{n}$. \\ According to Lemma \ref{lem:distri}, $\Phi_n$ $\left(\frac{1}{n \cdot q^{m}_{n}}, \frac{1}{n^4},\frac{1}{n}\right)$-distributes the partition element $\hat{I}_n \in \eta_n$, in particular $\Phi_n\left(\hat{I}_n\right) \subseteq \left[c,c+\gamma\right] \times J$ for some $c \in \mathbb{S}^1$ and some $\gamma \leq \frac{1}{n \cdot q^m_n}$. Furthermore, we saw in the proof of Lemma \ref{lem:g-phi} that $\left[c,c+\gamma\right] \times J$ is contained in the interior of the step-by-step domains of the map $g_n$ and that on its boundary $g_n = \tilde{g}_{\left[n q^{\sigma}_{n}\right]}$ holds.
In particular, it follows that $\gamma \geq \frac{1-2\varepsilon}{a}$ in the case $g_n = g_{a,b, \varepsilon,\delta}$. For $l \in \mathbb{Z}$, $0\leq l \leq \frac{b\cdot a}{\varepsilon}-1$ we introduce the set $\Delta_l = \left[\frac{l \varepsilon}{ba}, \frac{(l+1) \varepsilon}{ba}\right]$ and consider \begin{equation*} \tilde{S}_{r_1} \coloneqq \bigcup_{\Delta_l \subseteq S_{r_1}} \Delta_l; \ \ \ \ \tilde{S}_r \coloneqq \bigcup_{\Delta_l \subseteq S_{r_1}} \Delta_l \times S_{\tilde{\vec{r}}} \ \ \ \ \text{ as well as } \ \ \ \ \tilde{S} \coloneqq S_{\theta} \times \tilde{S}_r \subseteq S \end{equation*} Using the triangle inequality we obtain \begin{align*} & \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}(S)\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(S\right) \right| \\ \leq & \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}(S)\right)\right) - \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}\left(\tilde{S}\right)\right)\right) \right| \cdot \tilde{\mu}\left(J\right) \\ & + \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}\left(\tilde{S}\right)\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \mu\left(\tilde{S}\right)\right| + \mu\left(\hat{I}\right) \cdot \left|\mu\left(\tilde{S}\right) - \mu\left(S\right) \right| \end{align*} Here $\left|\mu\left(\tilde{S}\right) - \mu\left(S\right) \right| = \mu\left(S \setminus \tilde{S}\right) \leq 2 \cdot \frac{\varepsilon}{b \cdot a} \cdot \tilde{\lambda}\left(S_{\theta}\right) \cdot \mu^{(m-2)}\left(S_{\tilde{\vec{r}}}\right) \leq 2 \cdot \frac{\varepsilon}{a} \cdot \mu\left(S\right)$, where we used $b=\left[n \cdot q^{\sigma}_{n}\right] \geq q^{\sigma}_{n}$ for $n>4$.
Since $\Phi_n$ and $g_n$ are measure-preserving, we additionally obtain: $\left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}(S)\right)\right) - \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}\left(\tilde{S}\right)\right)\right) \right| \leq \mu\left(S \setminus \tilde{S}\right) \leq 2 \cdot \frac{\varepsilon}{a} \cdot \mu\left(S\right)$. \\ In the proof of Lemma \ref{lem:g-phi} we observed $\mu\left(\Phi_n\left(\hat{I}\right)\right)=\frac{1}{a} \cdot \left(1-\frac{2}{26n^4}\right) \cdot \tilde{\mu}\left(J\right)$. Hence: \begin{align*} & \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}(S)\right)\right) - \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}\left(\tilde{S}\right)\right)\right) \right| \cdot \tilde{\mu}\left(J\right) \leq 2 \cdot \frac{\varepsilon}{a} \cdot \mu\left(S\right) \cdot \tilde{\mu}\left(J\right) \\ &= 2 \cdot \frac{\varepsilon}{1-\frac{2}{26n^4}} \cdot \mu\left(S\right) \cdot \mu\left(\Phi_n\left(\hat{I}\right)\right) \leq 4 \cdot \varepsilon \cdot \mu\left(S\right) \cdot \mu\left(\Phi_n\left(\hat{I}\right)\right) = 4 \cdot \varepsilon \cdot \mu\left(S\right) \cdot \mu\left(\hat{I}\right) \end{align*} Thus, we obtain: \begin{equation} \label{triangle} \begin{split} & \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}(S)\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(S\right) \right| \\ & \leq \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}\left(\tilde{S}\right)\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \mu\left(\tilde{S}\right)\right| + 5 \cdot \varepsilon \cdot \mu\left(S\right) \cdot \mu\left(\hat{I}\right) \end{split} \end{equation} Next, we want to estimate the first summand. By construction of the map $g_n = g_{a,b, \varepsilon,\delta}$ and the definition of $\tilde{S}$ it holds: $\Phi_n\left(\hat{I}\right) \cap g^{-1}_{n} \left(\tilde{S}\right) \subseteq \left[c,c+\gamma\right] \times \tilde{S}_r \eqqcolon K_{c, \gamma}$. 
Considering the proof of Lemma \ref{lem:g-phi} again, we obtain $g_n\left(K_{c, \gamma}\right) = \tilde{g}_{\left[n q^{\sigma}_{n}\right]}\left(K_{c, \gamma}\right)$ (since $c$ and $c+\gamma$ are in the domain where $g_n = \tilde{g}_{\left[n q^{\sigma}_{n}\right]}$ holds). \\ Because of Lemma \ref{lem:distri} $2\gamma \leq \frac{2}{n \cdot q^{m}_{n}} < q^{-\sigma}_{n}$ for $n>2$. So we can define a cuboid $S_1 \subseteq \tilde{S}$, where $S_1 \coloneqq \left[ s_1 + \gamma, s_2 - \gamma \right] \times \tilde{S}_r$ using the notation $S_{\theta} = \left[s_1, s_2\right]$. We examine the two sets \begin{equation*} Q := \pi_{\vec{r}}\left( K_{c, \gamma} \cap g^{-1}_{n}\left(S_{\theta} \times \tilde{S}_{r} \right)\right) \ \ \ \ \ \ \ \ \ Q_1 := \pi_{\vec{r}}\left( K_{c, \gamma} \cap g^{-1}_{n}\left(\left[s_1 + \gamma, s_2 - \gamma \right] \times \tilde{S}_{r} \right)\right) \end{equation*} As seen above $\Phi_n\left(\hat{I}\right) \cap g^{-1}_{n} \left(\tilde{S}\right) \subseteq K_{c, \gamma}$. Hence $\Phi_n\left(\hat{I}\right) \cap g^{-1}_{n} \left(\tilde{S}\right) \subseteq \Phi_n\left(\hat{I}\right) \cap g^{-1}_{n} \left(\tilde{S}\right) \cap K_{c, \gamma}$, which implies $\Phi_n \left(\hat{I}\right) \cap g^{-1}_{n}\left(\tilde{S}\right) \subseteq \Phi_n \left(\hat{I}\right) \cap \left(\mathbb{S}^{1} \times Q\right)$. \\ \textbf{Claim: } On the other hand: $\Phi_n \left(\hat{I}\right) \cap \left(\mathbb{S}^{1} \times Q_1\right) \subseteq \Phi_n \left(\hat{I}\right) \cap g^{-1}_{n}\left(\tilde{S}\right)$. \\ \textbf{Proof of the claim: } For $\left(\theta, \vec{r}\right) \in \Phi_n \left(\hat{I}\right) \cap \left(\mathbb{S}^{1} \times Q_1\right)$ arbitrary it holds $\left(\theta, \vec{r}\right) \in \Phi_n \left(\hat{I}\right)$, i.e. $\theta \in \left[c, c + \gamma\right]$, and $\vec{r} \in \pi_{\vec{r}}\left( K_{c, \gamma} \cap g^{-1}_{n}\left(\left[s_1 + \gamma, s_2 - \gamma \right] \times \tilde{S}_{r} \right)\right)$, i.e. in particular $\vec{r} \in \tilde{S}_r$. 
This implies the existence of $\bar{\theta} \in \left[c, c + \gamma\right]$ satisfying $\left(\bar{\theta}, \vec{r}\right) \in K_{c,\gamma} \cap g^{-1}_{n} \left(S_1\right)$. Hence, there are $\beta \in \left[s_1 + \gamma, s_2 - \gamma \right]$ and $\vec{r}_1 \in \tilde{S}_r$, such that $g_n \left( \bar{\theta}, \vec{r} \right) = \left(\beta,\vec{r}_1\right)$. Because of $\bar{\theta} \in \left[c,c+\gamma\right]$ and $\vec{r} \in \tilde{S}_r$ the point $\left( \bar{\theta}, \vec{r} \right)$ is contained in one cuboid of the form $\Delta_{a,b, \varepsilon}$. Since $\theta \in \left[c,c+\gamma\right]$, $\left( \theta, \vec{r} \right)$ is contained in the same $\Delta_{a,b, \varepsilon}$. Thus, $\pi_{\vec{r}} \left(g_n\left(\theta, \vec{r}\right)\right) \in \tilde{S}_r$. Furthermore, $g_n\left(\theta, \vec{r}\right)$ and $g_n\left(\bar{\theta}, \vec{r}\right)$ are in a distance of at most $\gamma$ on the $\theta$-axis, because $\theta, \bar{\theta} \in \left[c,c+\gamma\right]$, i.e. $\left|\theta - \bar{\theta} \right| \leq \gamma$, $g_n \left(K_{c, \gamma}\right)= \tilde{g}_{\left[n q^{\sigma}_{n}\right]}\left(K_{c, \gamma}\right)$ and the map $\tilde{g}_{\left[n q^{\sigma}_{n}\right]}$ preserves the distances on the $\theta$-axis. Thus, there are $\bar{\beta} \in \left[s_1,s_2\right]$ and $\vec{r}_2 \in \tilde{S}_r$ such that $g_n\left(\theta, \vec{r}\right)= \left(\bar{\beta}, \vec{r}_2\right)$. So $\left(\theta, \vec{r}\right) \in \Phi_n\left(\hat{I}\right)\cap g^{-1}_{n}\left(\tilde{S}\right)$. \qed \\ Altogether, the following inclusions are true: \begin{equation*} \Phi_n \left(\hat{I}\right) \cap \left(\mathbb{S}^{1} \times Q_1\right) \subseteq \Phi_n \left(\hat{I}\right) \cap g^{-1}_{n}\left(\tilde{S}\right) \subseteq \Phi_n \left(\hat{I}\right) \cap \left(\mathbb{S}^{1} \times Q\right). 
\end{equation*} Thus, we obtain: \begin{equation} \label{big} \begin{split} & \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}(\tilde{S})\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(\tilde{S}\right) \right| \\ \leq \max \Bigg{(} & \left|\mu\left(\hat{I} \cap \Phi^{-1}_n\left(\mathbb{S}^{1} \times Q\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(\tilde{S}\right) \right| , \\ & \left|\mu\left(\hat{I} \cap \Phi^{-1}_n\left(\mathbb{S}^{1} \times Q_1\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(\tilde{S}\right) \right| \Bigg{)} \end{split} \end{equation} We want to apply Lemma \ref{lem:help} for $K = \tilde{S}_{r_1}$, $L=S_{\theta}$, $Z = S_{\tilde{\vec{r}}}$ and $b=\left[n\cdot q^{\sigma}_{n}\right]$ (note that $4 \cdot \frac{1-2\varepsilon}{a}-\gamma \leq 3 \cdot \frac{1-2\varepsilon}{a} \leq \frac{3}{n \cdot q^{m}_{n}} < \frac{1}{q^{\sigma}_{n}} = \tilde{\lambda}\left(L\right)$ because of the mentioned relation $\gamma\geq\frac{1-2\varepsilon}{a}$ and for $n > 4$: $b \cdot \lambda(K) = \left[n q^{\sigma}_{n}\right] \cdot q^{-\sigma}_{n} \geq \frac{1}{2} n q^{\sigma}_{n} \cdot q^{-\sigma}_{n} >2 $): \begin{align*} & \left| \tilde{\mu}\left(Q\right) - \mu\left(\tilde{S}\right) \right| \\ & \leq \left(\frac{2}{\left[n \cdot q^{\sigma}_{n}\right] } \cdot \tilde{\lambda}\left(S_{\theta}\right) + \frac{2 \gamma}{\left[n\cdot q^{\sigma}_{n}\right]} + \gamma \cdot \lambda\left(\tilde{S}_{r_1}\right) + 4 \cdot \frac{1-2\varepsilon}{a} \lambda\left(\tilde{S}_{r_1}\right) + 8 \cdot \frac{1-2 \varepsilon}{\left[n q^{\sigma}_{n}\right] \cdot a} \right) \cdot \mu^{(m-2)}\left(S_{\tilde{\vec{r}}}\right) \\ & \leq \left(\frac{4}{n \cdot q^{\sigma}_{n} } \tilde{\lambda}\left(S_{\theta}\right) + \frac{4}{n \cdot q^{\sigma}_{n} \cdot q^{\sigma}_{n}} + \frac{1}{n \cdot q^{\sigma}_{n}} \lambda\left(S_{r_1}\right) + 4 \cdot \frac{1-2\varepsilon}{n \cdot 
q^{m}_{n}} \lambda\left(S_{r_1}\right) + \frac{16 \cdot \left(1-2\varepsilon\right)}{n \cdot q^{\sigma}_{n} \cdot n \cdot q^{m}_{n}} \right) \cdot \mu^{(m-2)}\left(S_{\tilde{\vec{r}}}\right) \\ & \leq \frac{14}{n} \cdot \mu\left(S\right). \end{align*} In particular, we obtain from this estimate: $\frac{14}{n} \cdot \mu\left(S\right) \geq \tilde{\mu}\left(Q\right) - \mu\left( \tilde{S} \right) \geq \tilde{\mu}\left(Q\right) - \mu\left(S\right)$, hence: $\tilde{\mu}\left(Q\right) \leq \left(1 + \frac{14}{n} \right) \cdot \mu\left(S\right) \leq 4 \cdot \mu\left(S\right)$. \\ Analogously, we obtain: $\tilde{\mu}\left(Q_1\right) \leq 4 \cdot \mu \left(S\right)$ as well as $\left|\tilde{\mu}\left(Q_1\right) - \mu\left(S_1\right)\right| \leq \frac{14}{n} \cdot \mu\left(S\right)$. \\ Since $Q$ as well as $Q_1$ is a finite union of disjoint $\left(m-1\right)$-dimensional intervals contained in $J$ and $\Phi_n$ $\left(\frac{1}{n \cdot q^{m}_{n}}, \frac{1}{n^4}, \frac{1}{n}\right)$-distributes the interval $\hat{I}$, we get: \begin{equation*} \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(\mathbb{S}^{1} \times Q\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \tilde{\mu}\left(Q\right) \right| \leq \frac{1}{n} \cdot \mu\left(\hat{I}\right) \cdot \tilde{\mu}\left(Q\right) \leq \frac{4}{n} \cdot \mu\left(\hat{I}\right) \cdot \mu\left(S\right) \end{equation*} as well as \begin{equation*} \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(\mathbb{S}^{1} \times Q_1\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \tilde{\mu}\left(Q_1\right) \right| \leq \frac{1}{n} \cdot \mu\left(\hat{I}\right) \cdot \tilde{\mu}\left(Q_1\right) \leq \frac{4}{n} \cdot \mu\left(\hat{I}\right) \cdot \mu\left(S\right).
\end{equation*} Now we can proceed \begin{align*} & \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(\mathbb{S}^{1} \times Q\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(\tilde{S}\right) \right| \\ & \leq \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(\mathbb{S}^{1} \times Q\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \tilde{\mu}\left(Q\right) \right| + \mu\left(\hat{I}\right) \cdot \left| \tilde{\mu}\left(Q\right) - \mu\left(\tilde{S}\right) \right| \\ & \leq \frac{4}{n} \cdot \mu\left(\hat{I}\right) \cdot \mu\left(S\right) + \mu\left(\hat{I}\right) \cdot \frac{14}{n} \cdot \mu\left(S\right) = \frac{18}{n} \cdot \mu\left(\hat{I}\right) \cdot \mu\left(S\right). \end{align*} Noting that $\mu\left(S_1\right)= \mu\left(\tilde{S}\right) - 2 \gamma \cdot \tilde{\mu}\left(\tilde{S}_r\right)$ and so $\mu\left(\tilde{S}\right) - \mu\left(S_1\right) \leq 2 \cdot \frac{1}{n \cdot q^{\sigma}_{n}} \cdot \tilde{\mu}\left(\tilde{S}_r\right) \leq \frac{2}{n} \cdot \mu\left(S\right)$ we obtain in the same way as above: \begin{equation*} \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(\mathbb{S}^{1} \times Q_1\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(\tilde{S}\right) \right| \leq \frac{20}{n} \cdot \mu\left(\hat{I}\right) \cdot \mu\left(S\right). \end{equation*} Using equation (\ref{big}) this yields: \begin{equation*} \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}\left(\tilde{S}\right)\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(\tilde{S}\right) \right| \leq \frac{20}{n} \cdot \mu\left(\hat{I}\right) \cdot \mu\left(S\right). 
\end{equation*} Finally, we conclude with the aid of equation (\ref{triangle}) because of $\varepsilon = \frac{1}{8n^4}$: \begin{equation*} \left| \mu\left(\hat{I} \cap \Phi^{-1}_n\left(g^{-1}_{n}(S)\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}\right) \cdot \mu\left(S\right) \right| \leq \frac{21}{n} \cdot \mu\left(\hat{I}\right) \cdot \mu\left(S\right). \end{equation*} \end{pr} Now we are able to prove the desired criterion for weak mixing. \begin{prop}[Criterion for weak mixing] \label{prop:crit} Let $f_n = H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_{n}$ and the sequence $\left(m_n\right)_{n \in \mathbb{N}}$ be constructed as in the previous sections. Suppose additionally that $d_0 \left(f^{m_n}, f^{m_n}_{n} \right) < \frac{1}{2^n}$ for every $n \in \mathbb{N}$, $\left\| DH_{n-1} \right\|_0 \leq \frac{\ln\left(q_n\right)}{n}$ and that the limit $f= \lim_{n \rightarrow \infty} f_n$ exists. \\ Then $f$ is weakly mixing. \end{prop} \begin{pr} To apply Lemma \ref{lem:app2} we consider the partial partitions $\nu_n \coloneqq H_{n-1} \circ g_n \left(\eta_n\right)$. As proven in Lemma \ref{lem:points} these partial partitions satisfy $\nu_n \rightarrow \varepsilon$. We have to establish equation (\ref{2}). To do so, let $\epsilon >0$ and an $m$-dimensional cube $A\subseteq \mathbb{S}^1 \times \left(0,1\right)^{m-1}$ be given. There exists $N \in \mathbb{N}$ such that $A \subseteq \mathbb{S}^1 \times \left[\frac{1}{n^4}, 1-\frac{1}{n^4}\right]^{m-1}$ for every $n \geq N$. Because of Lemma \ref{lem:distri} and the properties of a $\left(\frac{1}{n \cdot q^{m}_{n}}, \frac{1}{n^4}, \frac{1}{n}\right)$-distribution we obtain for every $\hat{I}_n \in \eta_n$ that $\pi_{\vec{r}}\left(\Phi_n\left( \hat{I}_n\right)\right) \supseteq \left[\frac{1}{n^4}, 1-\frac{1}{n^4}\right]^{m-1}$. Furthermore, we note that $f^{m_n}_{n} = H_n \circ R^{m_n}_{\alpha_{n+1}} \circ H^{-1}_{n} = H_{n-1} \circ g_n \circ \Phi_n \circ g^{-1}_{n} \circ H^{-1}_{n-1}$.
\\ Let $S_n$ be an $m$-dimensional cube of side length $q^{-\sigma}_{n}$ contained in $\mathbb{S}^1 \times \left[\frac{1}{n^4}, 1-\frac{1}{n^4}\right]^{m-1}$. We look at $C_n \coloneqq H_{n-1}\left(S_n\right)$ and $\Gamma_n = H_{n-1} \circ g_n \left(\hat{I}_n\right) \in \nu_n$, and compute (since $g_n$ and $H_{n-1}$ are measure-preserving): \begin{align*} & \left| \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(C_n\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(C_n\right) \right| = \left| \mu\left(\hat{I}_n \cap \Phi^{-1}_n \circ g^{-1}_{n}\left(S_n\right)\right) - \mu\left(\hat{I}_n\right) \cdot \mu\left(S_n\right) \right| \\ & \leq \frac{1}{\tilde{\mu}\left(J\right)} \cdot \left| \mu\left(\hat{I}_n \cap \Phi^{-1}_n \circ g^{-1}_{n}\left(S_n\right)\right) \cdot \tilde{\mu}\left(J\right) - \mu\left(\hat{I}_n\right) \cdot \mu\left(S_n\right) \right| + \frac{1-\tilde{\mu}\left(J\right)}{\tilde{\mu}\left(J\right)} \cdot \mu\left(\hat{I}_n\right) \cdot \mu\left(S_n\right). \end{align*} Bernoulli's inequality yields: $\tilde{\mu}(J) \geq \left( 1-\frac{1}{n}\right)^{m-1} \geq 1 + \left(m-1\right) \cdot \left(-\frac{1}{n} \right) = 1 - \frac{m-1}{n}$. Hence we obtain for $n>2\cdot(m-1)$: $\tilde{\mu}\left(J\right)\geq\frac{1}{2}$ and so: $\frac{1- \tilde{\mu}\left(J\right)}{\tilde{\mu}\left(J\right)} \leq 2 \cdot \left(1- \tilde{\mu}\left(J\right) \right) \leq \frac{2 \cdot \left(m-1\right)}{n}$.
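The Bernoulli step above, $\left(1-\frac{1}{n}\right)^{m-1} \geq 1 - \frac{m-1}{n}$, together with the consequence $\tilde{\mu}\left(J\right) \geq \frac{1}{2}$ for $n > 2\cdot(m-1)$, can be sanity-checked numerically; the following sketch uses sample ranges of $m$ and $n$ chosen purely for illustration:

```python
def bernoulli_bound(n, m):
    # compare (1 - 1/n)^(m-1) with the Bernoulli lower bound 1 - (m-1)/n
    return (1.0 - 1.0 / n) ** (m - 1), 1.0 - (m - 1) / n

for n in range(2, 200):
    for m in range(2, 20):
        value, bound = bernoulli_bound(n, m)
        assert value >= bound                  # Bernoulli's inequality
        if n > 2 * (m - 1):
            assert value >= 0.5                # the lower bound used for mu(J)
```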
We continue by applying Lemma \ref{lem:cube}: \begin{align*} \left| \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(C_n\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(C_n\right) \right| & \leq 2 \cdot \frac{21}{n} \cdot \mu\left(\hat{I}_n\right) \cdot \mu\left(S_n\right) + \frac{2 \cdot \left(m-1\right)}{n} \cdot \mu\left(\hat{I}_n\right) \cdot \mu\left(S_n\right) \\ & = \frac{40+2\cdot m}{n} \cdot \mu\left(\hat{I}_n\right) \cdot \mu \left( S_n \right) \end{align*} Moreover, it holds that $\text{diam}\left(C_n\right) \leq \left\|DH_{n-1} \right\|_0 \cdot \text{diam}\left(S_n\right) \leq \sqrt{m} \cdot \frac{\ln\left(q_n\right)}{q^{\sigma}_{n}}$, i.e. $\text{diam}\left(C_n\right)\rightarrow0$ as $n\rightarrow \infty$. Thus, we can approximate $A$ by a countable disjoint union of sets $C_n=H_{n-1}\left(S_n\right)$ with $S_n \subseteq \mathbb{S}^1 \times \left[\frac{1}{n^4}, 1-\frac{1}{n^4}\right]^{m-1}$ an $m$-dimensional cube of side length $q^{-\sigma}_{n}$, to any given precision, provided that $n$ is chosen large enough. Consequently for sufficiently large $n$ there are sets $A_1=\dot{\bigcup}_{i \in \Sigma^{1}_{n}} C^{i}_{n}$ and $A_2=\dot{\bigcup}_{i \in \Sigma^{2}_{n}} C^{i}_{n}$ with countable sets $\Sigma^{1}_{n}$ and $\Sigma^{2}_{n}$ of indices satisfying $A_1 \subseteq A \subseteq A_2$ as well as $ \left| \mu(A) - \mu(A_i) \right| \leq \frac{\epsilon}{3} \cdot \mu(A)$ for $i=1,2$. \\ Additionally we choose $n$ such that $\frac{40 + 2 \cdot m}{n} < \frac{\epsilon}{3}$ holds.
It follows that \begin{align*} & \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \\ & \leq \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A_2\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A_2\right) + \mu \left(\Gamma_n\right) \cdot \left( \mu\left(A_2\right) - \mu\left(A\right) \right) \\ & \leq \sum_{i \in \Sigma^{2}_{n}} \left(\mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(C^{i}_{n}\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(C^{i}_{n}\right) \right) + \frac{\epsilon}{3} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \\ & \leq \sum_{i \in \Sigma^{2}_{n}} \left(\frac{40 + 2 \cdot m}{n} \cdot \mu\left(\hat{I}_n\right) \cdot \mu\left(S^{i}_{n}\right) \right) + \frac{\epsilon}{3} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \\ & = \frac{40 + 2 \cdot m}{n} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(\bigcup_{i \in \Sigma^{2}_{n}} C^{i}_{n}\right) + \frac{\epsilon}{3} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \leq \frac{\epsilon}{3} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A_2\right) + \frac{\epsilon}{3} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \\ & = \frac{\epsilon}{3} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) + \frac{\epsilon}{3} \cdot \mu \left(\Gamma_n\right) \cdot \left(\mu\left(A_2\right)-\mu\left(A\right)\right) + \frac{\epsilon}{3} \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \leq \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right). \end{align*} Analogously, we estimate that $\mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \geq - \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right)$. Both estimates enable us to conclude that $\left| \mu \left(\Gamma_n \cap f^{-m_n}_{n}\left(A\right)\right) - \mu \left(\Gamma_n\right) \cdot \mu\left(A\right) \right| \leq \epsilon \cdot \mu \left(\Gamma_n\right) \cdot \mu\left(A\right)$. 
\end{pr} \section{Convergence of $\left(f_n\right)_{n \in \mathbb{N}}$ in Diff$^{\infty}\left(M\right)$} \label{section:conv} In the following we show that the sequence of constructed measure-preserving smooth diffeomorphisms $f_n = H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_{n}$ converges. For this purpose, we need a couple of results concerning the conjugation maps. \subsection{Properties of the conjugation maps $\phi_n$ and $H_n$} In order to find estimates on the norms $\left|\left\| H_n \right\|\right|_k$ we will need the next technical result which is an application of the chain rule: \begin{lem} \label{lem:derphi} Let $\phi := \tilde{\phi}^{(m)}_{\lambda_m, \mu_m} \circ ... \circ \tilde{\phi}^{(2)}_{\lambda_2, \mu_2}$, $j \in \left\{1,...,m\right\}$ and $k \in \mathbb{N}$. For any multi-index $\vec{a}$ with $\left|\vec{a}\right|=k$ the partial derivative $D_{\vec{a}} \left[ \phi \right]_j$ consists of a sum of products of at most $(m-1) \cdot k$ terms of the form \begin{equation*} D_{\vec{b}} \left(\left[\tilde{\phi}^{(i)}_{\lambda_i, \mu_i}\right]_l\right) \circ \tilde{\phi}^{(i-1)}_{\lambda_{i-1}, \mu_{i-1}} \circ ... \circ \tilde{\phi}^{(2)}_{\lambda_2, \mu_2}, \end{equation*} where $l \in \left\{1,...,m\right\}$, $i \in \left\{2,...,m\right\}$ and $\vec{b}$ is a multi-index with $\left|\vec{b}\right|\leq k$. \end{lem} In the same way we obtain a similar statement holding for the inverses: \begin{lem} \label{lem:derphiinv} Let $\psi := \left(\tilde{\phi}^{(2)}_{\lambda_2, \mu_2}\right)^{-1} \circ ... \circ \left(\tilde{\phi}^{(m)}_{\lambda_m, \mu_m}\right)^{-1}$, $j \in \left\{1,...,m\right\}$ and $k \in \mathbb{N}$. 
For any multi-index $\vec{a}$ with $\left|\vec{a}\right|=k$ the partial derivative $D_{\vec{a}} \left[ \psi \right]_j$ consists of a sum of products of at most $(m-1) \cdot k$ terms of the following form \begin{equation*} D_{\vec{b}} \left(\left[\left(\tilde{\phi}^{(i)}_{\lambda_i, \mu_i}\right)^{-1}\right]_l\right) \circ \left(\tilde{\phi}^{(i+1)}_{\lambda_{i+1}, \mu_{i+1}}\right)^{-1} \circ ... \circ \left(\tilde{\phi}^{(m)}_{\lambda_m, \mu_m}\right)^{-1}, \end{equation*} where $l \in \left\{1,...,m\right\}$, $i \in \left\{2,...,m\right\}$ and $\vec{b}$ is a multi-index with $\left|\vec{b}\right|\leq k$. \end{lem} \begin{rem} \label{rem:faa} In the proof of the following lemmas we will use the formula of Faà di Bruno in several variables. It can be found in the paper \textit{``A multivariate Faà di Bruno formula with applications''} (\cite{Fa}) for example. \\ For this we introduce an ordering on $\mathbb{N}^d_0$: For multiindices $\vec{\mu} = \left(\mu_1,...,\mu_d\right)$ and $\vec{\nu} = \left(\nu_1,...,\nu_d\right)$ in $\mathbb{N}^d_0$ we will write $\vec{\mu} \prec \vec{\nu}$, if one of the following properties is satisfied: \begin{enumerate} \item $\left| \vec{\mu} \right| < \left| \vec{\nu} \right|$, where $\left| \vec{\mu} \right| = \sum^{d}_{i=1} \mu_i$. \item $\left| \vec{\mu} \right| = \left| \vec{\nu} \right|$ and $\mu_1 < \nu_1$. \item $\left| \vec{\mu} \right| = \left| \vec{\nu} \right|$, $\mu_i = \nu_i$ for $1\leq i \leq k$ and $\mu_{k+1} < \nu_{k+1}$ for some $1 \leq k < d$. \end{enumerate} In other words, we compare by order and then lexicographically. Additionally we will use these notations: \begin{itemize} \item For $\vec{\nu}=\left(\nu_1,...,\nu_d\right) \in \mathbb{N}^d_0$: \begin{equation*} \vec{\nu}! = \prod^{d}_{i=1} \nu_i ! 
\end{equation*} \item For $\vec{\nu}=\left(\nu_1,...,\nu_d\right) \in \mathbb{N}^d_0$ and $\vec{z} = \left(z_1,...,z_d\right) \in \mathbb{R}^d$: \begin{equation*} \vec{z}^{\ \vec{\nu}} = \prod^{d}_{i=1} z^{\nu_i}_{i} \end{equation*} \end{itemize} Then we get for the composition $h\left(x_1,...,x_d\right) := f\left(g^{(1)}\left(x_1,...,x_d\right),..., g^{(m)}\left(x_1,...,x_d\right)\right)$ with sufficiently differentiable functions $f: \mathbb{R}^m \rightarrow \mathbb{R}$, $g^{(i)}: \mathbb{R}^d \rightarrow \mathbb{R}$ and a multi-index $\vec{\nu} \in \mathbb{N}^d_0$ with $\left| \vec{\nu} \right| = n$: \begin{equation*} D_{\vec{\nu}}h = \sum_{\vec{\lambda} \in \mathbb{N}^m_0 \text{ with } 1\leq \left|\vec{\lambda} \right|\leq n} D_{\vec{\lambda}}f \cdot \sum^{n}_{s=1}\ \ \sum_{p_s\left(\vec{\nu},\vec{\lambda}\right)} \vec{\nu}! \cdot \prod^{s}_{j=1}\frac{\left[D_{\vec{l}_j} \vec{g}\right]^{\vec{k}_j}}{\vec{k}_j ! \cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \end{equation*} Here $\left[D_{\vec{l}_j} \vec{g}\right]$ denotes $\left(D_{\vec{l}_j} g^{(1)},...,D_{\vec{l}_j}g^{(m)}\right)$ and \begin{align*} & p_s\left(\vec{\nu}, \vec{\lambda}\right) := \\ & \left\{ \left(\vec{k}_1,...,\vec{k}_s, \vec{l}_1,...,\vec{l}_s\right) : \vec{k}_i \in \mathbb{N}^{m}_{0}, \left| \vec{k}_i \right| >0, \vec{l}_i \in \mathbb{N}^d_0, 0 \prec \vec{l}_1 \prec ... \prec \vec{l}_s, \sum^{s}_{i=1} \vec{k}_i = \vec{\lambda} \text{ and } \sum^{s}_{i=1} \left| \vec{k}_i \right| \cdot \vec{l}_i = \vec{\nu} \right\} \end{align*} \end{rem} With the aid of these technical results we can prove an estimate on the norms of the map $\phi_n$: \begin{lem} \label{lem:normphi} For every $k \in \mathbb{N}$ it holds that \begin{equation*} ||| \phi_n |||_k \leq C \cdot q^{\left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}, \end{equation*} where $C$ is a constant depending on $m$, $k$ and $n$, but is independent of $q_n$. 
\end{lem} \begin{pr} First of all we consider the map $\tilde{\phi}_{\lambda, \mu} \coloneqq \tilde{\phi}_{\lambda, \varepsilon, i,j,\mu, \delta, \varepsilon_2} = C^{-1}_{\lambda} \circ \psi_{\mu,\delta,i,j,\varepsilon_2} \circ \varphi_{\varepsilon, i, j} \circ C_{\lambda}$ introduced in subsection \ref{subsection:phi}: \begin{align*} & \tilde{\phi}_{\lambda, \mu}\left(x_1,...,x_m\right) = \\ & \left(\frac{1}{\lambda} \left[\psi_{\mu} \circ \varphi_{\varepsilon}\right]_1 \left(\lambda x_1,x_2,...,x_m\right), \left[\psi_{\mu} \circ \varphi_{\varepsilon}\right]_2\left(\lambda x_1,x_2,...,x_m\right),...,\left[\psi_{\mu} \circ \varphi_{\varepsilon}\right]_m\left(\lambda x_1,x_2,...,x_m\right) \right). \end{align*} Let $k \in \mathbb{N}$. We compute for a multi-index $\vec{a}$ with $0\leq \left| \vec{a}\right| \leq k$: $\left\|D_{\vec{a}}\left[\tilde{\phi}_{\lambda, \mu}\right]_1\right\|_0 \leq \lambda^{k-1} \cdot ||| \psi_{\mu} \circ \varphi_{\varepsilon} |||_k$ and for $r \in \left\{2,...,m\right\}$: $\left\|D_{\vec{a}}\left[\tilde{\phi}_{\lambda, \mu}\right]_r\right\|_0 \leq \lambda^{k} \cdot ||| \psi_{\mu} \circ \varphi_{\varepsilon} |||_k$. \\ To this end, we examine the map $\psi_{\mu}$. For any multi-index $\vec{a}$ with $0\leq \left| \vec{a}\right| \leq k$ and $r \in \left\{1,...,m\right\}$ we obtain: $\left\|D_{\vec{a}} \left[\psi_{\mu}\right]_r \right\|_0 \leq \mu^{k-1} \cdot ||| \varphi_{\varepsilon_2} |||_k = C_{k, \varepsilon_2} \cdot \mu^{k-1}$ and analogously $\left\|D_{\vec{a}} \left[\psi^{-1}_{\mu}\right]_r \right\|_0 \leq C_{k, \varepsilon_2} \cdot \mu^{k-1}$. Hence: $|||\psi_{\mu}|||_k \leq C \cdot \mu^{k-1}$. \\ In the next step we use the formula of Faà di Bruno mentioned in remark \ref{rem:faa}.
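Before doing so, it may help to see the formula from remark \ref{rem:faa} in its simplest instance: for $d = m = 1$ and $\left|\vec{\nu}\right| = 2$ the only admissible decompositions are $s=1$, $\vec{k}_1 = 2$, $\vec{l}_1 = 1$ and $s=1$, $\vec{k}_1 = 1$, $\vec{l}_1 = 2$, and they recover the classical second-order chain rule
\begin{equation*}
\left(f \circ g\right)'' = \left(f'' \circ g\right) \cdot \left(g'\right)^2 + \left(f' \circ g\right) \cdot g'',
\end{equation*}
with coefficients $\frac{2!}{2! \cdot \left(1!\right)^2} = 1$ and $\frac{2!}{1! \cdot \left(2!\right)^1} = 1$ respectively.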
With it we compute for any multi-index $\vec{\nu}$ with $\left|\vec{\nu}\right| = k$: \begin{align*} \left\|D_{\vec{\nu}} \left[\left(\psi_{\mu} \circ \varphi_{\varepsilon}\right)^{-1}\right]_r \right\|_0 & = \left\|D_{\vec{\nu}} \left[ \varphi^{-1}_{\varepsilon} \circ \psi^{-1}_{\mu}\right]_r \right\|_0 \\ & = \left\| \sum_{\vec{\lambda} \in \mathbb{N}^m_0 , 1\leq \left|\vec{\lambda} \right|\leq k} D_{\vec{\lambda}}\left[\varphi^{-1}_{\varepsilon}\right]_r \sum^{k}_{s=1}\ \ \sum_{\left(\vec{k}_1,...,\vec{k}_s, \vec{l}_1,...,\vec{l}_s\right) \in p_s\left(\vec{\nu},\vec{\lambda}\right)} \vec{\nu}! \prod^{s}_{j=1}\frac{\left[D_{\vec{l}_j} \psi^{-1}_{\mu} \right]^{\vec{k}_j}}{\vec{k}_j ! \cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \right\|_0 \\ & = \left\| \sum_{\vec{\lambda} \in \mathbb{N}^m_0, 1\leq \left|\vec{\lambda} \right|\leq k} D_{\vec{\lambda}}\left[\varphi^{-1}_{\varepsilon}\right]_r \cdot \sum^{k}_{s=1}\ \ \sum_{p_s\left(\vec{\nu},\vec{\lambda}\right)} \vec{\nu}! \cdot \prod^{s}_{j=1}\frac{\prod^{m}_{t=1}\left(D_{\vec{l}_j} \left[ \psi^{-1}_{\mu}\right]_t\right)^{\vec{k}_{j_t}}}{\vec{k}_j ! \cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \right\|_0 \\ \displaybreak[0] & \leq \sum_{\vec{\lambda} \in \mathbb{N}^m_0, 1\leq \left|\vec{\lambda} \right|\leq k} \left\|D_{\vec{\lambda}} \left[\varphi^{-1}_{\varepsilon}\right]_r \right\|_0 \cdot \sum^{k}_{s=1} \ \ \sum_{p_s\left(\vec{\nu},\vec{\lambda}\right)}\vec{\nu}! \cdot \prod^{s}_{j=1}\frac{\prod^{m}_{t=1}\left\| D_{\vec{l}_j} \left[\psi^{-1}_{\mu}\right]_t \right\|^{\vec{k}_{j_t}}_0}{\vec{k}_j ! \cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \\ \displaybreak[0] & \leq \sum_{\vec{\lambda} \in \mathbb{N}^m_0 \text{ with } 1\leq \left|\vec{\lambda} \right|\leq k} \left\|D_{\vec{\lambda}} \left[\varphi^{-1}_{\varepsilon}\right]_r \right\|_0 \cdot \sum^{k}_{s=1} \ \ \sum_{p_s\left(\vec{\nu},\vec{\lambda}\right)}\vec{\nu}! 
\cdot \prod^{s}_{j=1}\frac{|||\psi^{-1}_{\mu}|||^{\sum^{m}_{t=1} \vec{k}_{j_t}}_{\left|\vec{l}_j\right|}}{\vec{k}_j ! \cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \\ & = \sum_{\vec{\lambda} \in \mathbb{N}^m_0 \text{ with } 1\leq \left|\vec{\lambda} \right|\leq k} \left\|D_{\vec{\lambda}} \left[\varphi^{-1}_{\varepsilon}\right]_r \right\|_0 \cdot \sum^{k}_{s=1} \ \ \sum_{p_s\left(\vec{\nu},\vec{\lambda}\right)}\vec{\nu}! \cdot \prod^{s}_{j=1}\frac{|||\psi^{-1}_{\mu}|||^{\left|\vec{k}_j\right|}_{\left|\vec{l}_j\right|}}{\vec{k}_j ! \cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \end{align*} As seen above: $|||\psi^{-1}_{\mu}|||^{\left|\vec{k}_j\right|}_{\left|\vec{l}_j\right|} \leq C \cdot \mu^{\left|\vec{k}_j\right| \cdot \left|\vec{l}_j\right|}$. Hereby: $\prod^{s}_{j=1} |||\psi^{-1}_{\mu} |||^{\left| \vec{k}_j \right|}_{\left|\vec{l}_j\right|} \leq \hat{C} \cdot \mu^{\sum^{s}_{j=1} \left|\vec{l}_j\right| \cdot \left|\vec{k}_j\right|}$, where $\hat{C}$ is independent of $\mu$. By definition of the set $p_s\left(\vec{\nu}, \vec{\lambda}\right)$ we have $\sum^{s}_{i=1} \left| \vec{k}_i \right| \cdot \vec{l}_i = \vec{\nu}$. Hence: \begin{equation*} k = \left| \vec{\nu} \right| = \left| \sum^{s}_{i=1} \left| \vec{k}_i\right| \cdot \vec{l}_i \right| = \sum^{m}_{t=1} \left(\sum^{s}_{i=1} \left| \vec{k}_i\right| \cdot \vec{l}_i \right)_t = \sum^{m}_{t=1} \sum^{s}_{i=1} \left| \vec{k}_i \right| \cdot \vec{l}_{i_t} = \sum^{s}_{i=1} \left| \vec{k}_i \right| \cdot \left( \sum^{m}_{t=1} \vec{l}_{i_t} \right) = \sum^{s}_{i=1} \left| \vec{k}_i \right| \cdot \left| \vec{l}_i \right| \end{equation*} This shows $\prod^{s}_{j=1} |||\psi^{-1}_{\mu} |||^{\left| \vec{k}_j \right|}_{\left|\vec{l}_j\right|} \leq \hat{C} \cdot \mu^k$ and finally $\left\|D_{\vec{\nu}} \left[\left(\psi_{\mu} \circ \varphi_{\varepsilon}\right)^{-1}\right]_r \right\|_0 \leq C \cdot \mu^k$. 
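The identity $\sum^{s}_{i=1} \left| \vec{k}_i \right| \cdot \left| \vec{l}_i \right| = \left| \vec{\nu} \right|$ follows from the defining constraint of $p_s\left(\vec{\nu},\vec{\lambda}\right)$ alone. As a purely illustrative sanity check (not part of the proof), it can be verified by brute force for small parameters, say $d = m = 2$, $s = 2$ and $\vec{\nu} = (2,1)$; the strict ordering $0 \prec \vec{l}_1 \prec \vec{l}_2$ is crudely modeled here by the lexicographic order on tuples:

```python
from itertools import product

def norm(v):  # |v|: the order of a multi-index
    return sum(v)

d, m = 2, 2                     # small illustrative dimensions
nu = (2, 1)                     # target multi-index, |nu| = 3
multi = list(product(range(4), repeat=d))
kmulti = list(product(range(4), repeat=m))

found = 0
for l1, l2 in product(multi, repeat=2):
    if not (norm(l1) > 0 and l1 < l2):   # crude stand-in for 0 < l1 < l2
        continue
    for k1, k2 in product(kmulti, repeat=2):
        if norm(k1) == 0 or norm(k2) == 0:
            continue
        # defining constraint of p_s: sum_i |k_i| * l_i = nu (componentwise)
        if all(norm(k1) * l1[t] + norm(k2) * l2[t] == nu[t] for t in range(d)):
            found += 1
            # the identity derived above: sum_i |k_i| * |l_i| = |nu|
            assert norm(k1) * norm(l1) + norm(k2) * norm(l2) == norm(nu)
assert found > 0
```

Every tuple satisfying the componentwise constraint passes the assertion, exactly as the summation over components above shows.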
Analogously we compute $\left\|D_{\vec{\nu}} \left[\psi_{\mu} \circ \varphi_{\varepsilon}\right]_r \right\|_0 \leq C \cdot |||\psi_{\mu} |||_k \leq C \cdot \mu^{k-1}$. Altogether, we obtain $|||\psi_{\mu} \circ \varphi_{\varepsilon}|||_k \leq C \cdot \mu^k$. From this we estimate $\left\| D_{\vec{a}} \left[\tilde{\phi}_{\lambda, \mu} \right]_r \right\|_0 \leq C \cdot \lambda^k \cdot \mu^k$ and analogously $\left\| D_{\vec{a}} \left[\tilde{\phi}^{-1}_{\lambda, \mu} \right]_r \right\|_0 \leq C \cdot \lambda^k \cdot \mu^k$. In conclusion, this yields $||| \tilde{\phi}_{\lambda, \mu} |||_k \leq C \cdot \mu^k \cdot \lambda^k$. \\ In the next step we consider $\phi \coloneqq \tilde{\phi}^{(m)}_{\lambda_m, \mu_m} \circ ... \circ \tilde{\phi}^{(2)}_{\lambda_2, \mu_2}$. Let $\lambda_{\text{max}} \coloneqq \max \left\{ \lambda_2,..., \lambda_m \right\}$ as well as $\mu_{\text{max}} \coloneqq \max \left\{ \mu_2,..., \mu_m \right\}$. Inductively we will show $||| \phi |||_k \leq \tilde{C} \cdot \lambda^{(m-1) \cdot k}_{\text{max}} \cdot \mu^{(m-1) \cdot k}_{\text{max}}$ for every $k \in \mathbb{N}$, where $\tilde{C}$ is a constant independent of $\lambda_i$ and $\mu_i$. \\ \textit{Start: $k=1$ } \\ Let $l \in \left\{1,...,m\right\}$ be arbitrary. By Lemma \ref{lem:derphi} a partial derivative of $\left[ \phi \right]_l$ of first order consists of a sum of products of at most $m-1$ first-order partial derivatives of the functions $\tilde{\phi}^{(j)}_{\lambda_j, \mu_j}$. Using $|||\tilde{\phi}^{(j)}_{\lambda_j, \mu_j}|||_1 \leq C \cdot \lambda_{\text{max}} \cdot \mu_{\text{max}}$, we thus obtain the estimate $\left\| D_i \left[ \phi \right]_l \right\|_0 \leq C_1 \cdot \lambda^{m-1}_{\text{max}} \cdot \mu^{m-1}_{\text{max}}$ for every $i \in \left\{1,...,m\right\}$, where $C_1$ is a constant independent of $\lambda_i$ and $\mu_i$.\\ With the aid of Lemma \ref{lem:derphiinv} we obtain the same statement for $\phi^{-1} = \left(\tilde{\phi}^{(2)}_{\lambda_2, \mu_2}\right)^{-1} \circ ...
\circ \left(\tilde{\phi}^{(m)}_{\lambda_m, \mu_m} \right)^{-1}$. Hence, we conclude: $||| \phi |||_1 \leq \tilde{C}_1 \cdot \lambda^{m-1}_{\text{max}} \cdot \mu^{m-1}_{\text{max}}$. \\ \textit{Assumption: } The claim is true for $k \in \mathbb{N}$. \\ \textit{Induction step $k \rightarrow k+1$:}\\ In the proof of Lemma \ref{lem:derphi} one observes that, in the transition $k\rightarrow k+1$, one factor in the product of at most $(m-1) \cdot k$ terms of the form $D_{\vec{b}} \left(\left[\tilde{\phi}^{(i)}_{\lambda_i, \mu_i}\right]_l\right) \circ \tilde{\phi}^{(i-1)}_{\lambda_{i-1}, \mu_{i-1}} \circ ... \circ \tilde{\phi}^{(2)}_{\lambda_2, \mu_2}$ is replaced by the product of a term $\left(D_j D_{\vec{b}} \left[\tilde{\phi}^{(i)}_{\lambda_i, \mu_i}\right]_l \right) \circ \tilde{\phi}^{(i-1)}_{\lambda_{i-1}, \mu_{i-1}} \circ ... \circ \tilde{\phi}^{(2)}_{\lambda_2, \mu_2}$ with $j \in \left\{1,...,m\right\}$ and of at most $m-2$ partial derivatives of first order. Because of $||| \tilde{\phi}^{(i)}_{\lambda_i, \mu_i}|||_{k+1} \leq C \cdot \lambda^{k+1}_{\text{max}} \cdot \mu^{k+1}_{\text{max}}$ and $||| \tilde{\phi}^{(j)}_{\lambda_j, \mu_j}|||_1 \leq C \cdot \lambda_{\text{max}} \cdot \mu_{\text{max}}$ the $\lambda_{\text{max}}$-exponent as well as the $\mu_{\text{max}}$-exponent increase by at most $1+(m-2) \cdot 1= m-1$. \\ In the same spirit one uses the proof of Lemma \ref{lem:derphiinv} to show that also in the case of $\phi^{-1}$ the $\lambda_{\text{max}}$-exponent as well as the $\mu_{\text{max}}$-exponent increase by at most $m-1$. \\ Using the induction assumption we conclude \begin{equation*} ||| \phi |||_{k+1} \leq \hat{C} \cdot \lambda^{k \cdot (m-1) + m-1}_{\text{max}} \cdot \mu^{k \cdot (m-1) + m-1}_{\text{max}} = \hat{C}\cdot \lambda^{(k+1) \cdot (m-1)}_{\text{max}} \cdot \mu^{(k+1) \cdot (m-1)}_{\text{max}}. \end{equation*} This completes the proof by induction.
\\ In the setting of our explicit construction of the map $\phi_n$ in section \ref{subsection:phi} we have $\varepsilon_1 = \frac{1}{60 \cdot n^4}$, $\varepsilon_2 = \frac{1}{22 \cdot n^4}$, $\lambda_{\text{max}} =n \cdot q^{1+(m-1) \cdot \frac{n \cdot \left(n-1\right)}{2} + (m-2) \cdot n}_{n}$ and $\mu_{\text{max}} = q^{n}_n$. Thus: \begin{align*} ||| \phi_n |||_k & \leq \tilde{C}\left(m,k,n\right) \cdot \left(n \cdot q^{1+(m-1) \cdot \frac{n \cdot \left(n-1\right)}{2} + (m-2) \cdot n}_{n}\right)^{(m-1) \cdot k} \cdot \left(q^{n}_{n}\right)^{(m-1) \cdot k} \\ & \leq C\left(m,k,n\right) \cdot q^{\left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}, \end{align*} where $C\left(m,k,n\right)$ is a constant independent of $q_n$. \end{pr} In the next step we consider the map $h_n = g_n \circ \phi_n$, where $g_n$ is constructed in section \ref{subsection:g}: \begin{lem} \label{lem:normh} For every $k \in \mathbb{N}$ it holds: \begin{equation*} ||| h_n |||_k \leq \bar{C} \cdot q^{3 \cdot \left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}, \end{equation*} where $\bar{C}$ is a constant depending on $m$, $k$ and $n$, but is independent of $q_n$. \end{lem} \begin{pr} Outside of $\mathbb{S}^1 \times \left[\delta,1-\delta\right]^{m-1}$, i.e. 
$g_n=\tilde{g}_{\left[nq^{\sigma}_n\right]}$, we have: \begin{align*} & h_n \left(x_1,...,x_m\right) = g_n \circ \phi_n \left(x_1,...,x_m\right) \\ & = \left( \left[\phi_n \left(x_1,...,x_m\right)\right]_1 + \left[ n \cdot q^{\sigma}_n \right] \cdot \left[\phi_n \left(x_1,...,x_m\right)\right]_2, \left[\phi_n \left(x_1,...,x_m\right)\right]_2,...,\left[\phi_n \left(x_1,...,x_m\right)\right]_m \right) \end{align*} and \begin{align*} & h^{-1}_n \left(x_1,...,x_m\right) = \phi^{-1}_n \circ g^{-1}_n \left(x_1,...,x_m\right) \\ & = \left( \left[\phi^{-1}_n \left(x_1-\left[ n \cdot q^{\sigma}_n \right] \cdot x_2,x_2,...,x_m\right)\right]_1,...,\left[\phi^{-1}_n \left(x_1-\left[ n \cdot q^{\sigma}_n \right] \cdot x_2,x_2,...,x_m\right)\right]_m \right). \end{align*} Since $\sigma < 1$ we can estimate: \begin{equation*} ||| h_n |||_k \leq 2 \cdot \left[n \cdot q^{\sigma}_{n}\right]^k \cdot ||| \phi_n |||_k \leq \bar{C}\left(m,k,n\right) \cdot q^{\sigma \cdot k}_{n} \cdot q^{\left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n} \leq \bar{C}\left(m,k,n\right) \cdot q^{2 \cdot \left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n} \end{equation*} with a constant $\bar{C}\left(m,k,n\right)$ independent of $q_n$. \\ In the other case we have \begin{equation*} g_n \circ \phi_n \left(x_1,...,x_m\right) = \left( \left[g_{a,b,\varepsilon} \left(\left[\phi_n\right]_1,\left[\phi_n\right]_2\right)\right]_1, \left[g_{a,b,\varepsilon} \left(\left[\phi_n\right]_1,\left[\phi_n\right]_2\right)\right]_2,\left[\phi_n \right]_3,...,\left[\phi_n \right]_m \right).
\end{equation*} We will use the formula of Faà di Bruno as above for any multi-index $\vec{\nu}$ with $\left|\vec{\nu}\right|=k$ and $r\in\left\{1,...,m\right\}$: \begin{align*} \left\|D_{\vec{\nu}} \left[h_n\right]_r \right\|_0 & = \left\|D_{\vec{\nu}} \left[g_{a,b,\varepsilon} \circ \phi_n \right]_r \right\|_0 \\ & \leq \sum_{\vec{\lambda} \in \mathbb{N}^m_0 \text{ with } 1\leq \left|\vec{\lambda} \right|\leq k} \left\|D_{\vec{\lambda}} \left[g_{a,b,\varepsilon}\right]_r \right\|_0 \cdot \sum^{k}_{s=1} \ \ \sum_{p_s\left(\vec{\nu},\vec{\lambda}\right)}\vec{\nu}! \cdot \prod^{s}_{j=1}\frac{||| \phi_n |||^{\left|\vec{k}_j\right|}_{\left|\vec{l}_j\right|}}{\vec{k}_j ! \cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \end{align*} By Lemma \ref{lem:normphi} we have $||| \phi_n |||_k \leq C \cdot q^{\left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}$, where $C$ is a constant independent of $q_n$. As above we show $\prod^{s}_{j=1} ||| \phi_n |||^{\left| \vec{k}_j \right|}_{\left|\vec{l}_j\right|} \leq \hat{C} \cdot q^{\left(\sum^{s}_{j=1} \left|\vec{l}_j\right| \cdot \left|\vec{k}_j\right| \right) \cdot \left(m-1\right)^2 \cdot n \cdot \left(n+1\right)}_{n} = \hat{C} \cdot q^{\left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}$, where $\hat{C}$ is a constant independent of $q_n$.\\ Furthermore, we examine the map $g_{a,b, \varepsilon,\delta}=D^{-1}_{a,b, \varepsilon} \circ g_{\varepsilon} \circ D_{a,b,\varepsilon}$ for $a,b \in \mathbb{Z}$ and obtain \begin{equation*} ||| g_{a,b, \varepsilon,\delta} |||_k \leq \left(\frac{b \cdot a}{\varepsilon}\right)^{k} \cdot ||| g_{\varepsilon} |||_k = C_{\varepsilon,k} \cdot b^k \cdot a^{k}. \end{equation*} By our constructions in section \ref{subsection:g} we have $b = \left[ n \cdot q^{\sigma}_{n} \right] \leq n \cdot q^{\sigma}_{n}$, $a\leq n \cdot q^{1+(m-1) \cdot \frac{n \cdot \left(n+1\right)}{2}}_{n}$ and $\varepsilon = \frac{1}{8n^4}$. 
Hence: $||| g_n |||_k \leq C_{n,k}\cdot q^{\sigma \cdot k}_{n} \cdot q^{k+ k \cdot (m-1) \cdot \frac{n \cdot \left(n+1\right)}{2}}_{n} \leq C_{n,k} \cdot q^{2 \cdot k \cdot (m-1) \cdot n \cdot \left(n+1\right)}_{n}$. Finally, we conclude: $\left\|D_{\vec{\nu}} \left[h_n\right]_r \right\|_0 \leq C \cdot q^{2 \cdot k \cdot (m-1) \cdot n \cdot \left(n+1\right)}_{n} \cdot q^{k \cdot \left(m-1\right)^2 \cdot n \cdot \left(n+1\right)}_{n} \leq C \cdot q^{3 \cdot k \cdot \left(m-1\right)^2 \cdot n \cdot \left(n+1\right)}_{n}$. \\ In the next step we consider $h^{-1}_{n}=\phi^{-1}_{n} \circ g^{-1}_{a,b,\varepsilon}$. For $r \in \left\{1,...,m\right\}$ and any multi-index $\vec{\nu}$ with $\left| \vec{\nu} \right| = k$ we obtain using the formula of Faà di Bruno again: \begin{align*} \left\|D_{\vec{\nu}} \left[h^{-1}_n\right]_r \right\|_0 & = \left\|D_{\vec{\nu}} \left[\phi^{-1}_n \circ g^{-1}_n \right]_r \right\|_0 \\ & \leq \sum_{\vec{\lambda} \in \mathbb{N}^m_0 \text{ with } 1\leq \left|\vec{\lambda} \right|\leq k} \left\|D_{\vec{\lambda}} \left[\phi^{-1}_n\right]_r \right\|_0 \cdot \sum^{k}_{s=1} \ \ \sum_{p_s\left(\vec{\nu},\vec{\lambda}\right)}\vec{\nu}! \cdot \prod^{s}_{j=1}\frac{||| g_n |||^{\left|\vec{k}_j\right|}_{\left|\vec{l}_j\right|}}{\vec{k}_j ! \cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \end{align*} As above we show $\prod^{s}_{j=1} ||| g_n |||^{\left| \vec{k}_j \right|}_{\left|\vec{l}_j\right|} \leq \hat{C} \cdot q^{2 \cdot k \cdot (m-1) \cdot n \cdot \left(n+1\right)}_{n}$, where $\hat{C}$ is a constant independent of $q_n$. 
Since $||| \phi_n |||_k \leq C \cdot q^{k \cdot \left(m-1\right)^2 \cdot n \cdot \left(n+1\right)}_{n}$ we get \begin{equation*} \left\|D_{\vec{\nu}} \left[h^{-1}_n\right]_r \right\|_0 \leq \check{C} \cdot q^{2 \cdot k \cdot (m-1) \cdot n \cdot \left(n+1\right)}_{n} \cdot q^{k \cdot \left(m-1\right)^2 \cdot n \cdot \left(n+1\right)}_{n} \leq \check{C} \cdot q^{3 \cdot k \cdot \left(m-1\right)^2 \cdot n \cdot \left(n+1\right)}_{n}, \end{equation*} where $\check{C}$ is a constant independent of $q_n$. \\ Thus, we finally obtain $||| h_n |||_k \leq C(n,k,m) \cdot q^{3 \cdot \left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}$. \end{pr} Finally, we are able to prove an estimate on the norms of the map $H_n$: \begin{lem} \label{lem:normH} For every $k \in \mathbb{N}$ we get: \begin{equation*} ||| H_n |||_k \leq \breve{C} \cdot q^{3 \cdot \left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}, \end{equation*} where $\breve{C}$ is a constant depending solely on $m$, $k$, $n$ and $H_{n-1}$. Since $H_{n-1}$ is independent of $q_n$ in particular, the same is true for $\breve{C}$. \end{lem} \begin{pr} Let $k \in \mathbb{N}$, $r \in \left\{1,...,m\right\}$ and $\vec{\nu} \in \mathbb{N}^{m}_{0}$ be a multi-index with $\left| \vec{\nu} \right| =k$. As above we estimate: \begin{align*} \left\|D_{\vec{\nu}} \left[H_n\right]_r \right\|_0 & = \left\|D_{\vec{\nu}} \left[H_{n-1} \circ h_n \right]_r \right\|_0 \\ & \leq \sum_{\vec{\lambda} \in \mathbb{N}^m_0 \text{ with } 1\leq \left|\vec{\lambda} \right|\leq k} \left\|D_{\vec{\lambda}} \left[H_{n-1}\right]_r \right\|_0 \cdot \sum^{k}_{s=1} \ \ \sum_{p_s\left(\vec{\nu},\vec{\lambda}\right)}\vec{\nu}! \cdot \prod^{s}_{j=1}\frac{|||h_n|||^{\left|\vec{k}_j\right|}_{\left|\vec{l}_j\right|}}{\vec{k}_j ! 
\cdot \left(\vec{l}_j !\right)^{\left|\vec{k}_j\right|}} \end{align*} and compute using Lemma \ref{lem:normh}: $\prod^{s}_{j=1} ||| h_n |||^{\left| \vec{k}_j \right|}_{\left|\vec{l}_j\right|} \leq \hat{C} \cdot q^{3 \cdot \left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}$, where $\hat{C}$ is a constant independent of $q_n$. Since $H_{n-1}$ is independent of $q_n$ we conclude: \begin{equation*} \left\|D_{\vec{\nu}} \left[H_n\right]_r \right\|_0 \leq \check{C} \cdot q^{3 \cdot \left(m-1\right)^2 \cdot k \cdot n \cdot \left(n+1\right)}_{n}, \end{equation*} where $\check{C}$ is a constant independent of $q_n$. \\ In the same way we prove an analogous estimate for $\left\|D_{\vec{\nu}} \left[H^{-1}_n\right]_r \right\|_0$ and verify the claim. \end{pr} In particular, we see that this norm can be estimated by a power of $q_n$. \subsection{Proof of convergence} For the proof of convergence of the sequence $\left(f_n\right)_{n\in \mathbb{N}}$ in the Diff$^{\infty}\left(M\right)$-topology the following result, which can be found in \cite[Lemma 4]{FSW}, is very useful. \begin{lem} \label{lem:konj} Let $k \in \mathbb{N}_0$ and $h$ be a C$^{\infty}$-diffeomorphism on $M$. Then we get for every $\alpha,\beta \in \mathbb{R}$: \begin{equation*} d_k\left(h \circ R_{\alpha} \circ h^{-1}, h \circ R_{\beta} \circ h^{-1}\right) \leq C_k \cdot ||| h |||^{k+1}_{k+1} \cdot \left| \alpha - \beta \right|, \end{equation*} where the constant $C_k$ depends solely on $k$ and $m$. In particular $C_0 = 1$. \end{lem} In the following Lemma we show that under some assumptions on the sequence $\left(\alpha_n\right)_{n\in \mathbb{N}}$ the sequence $\left(f_n\right)_{n \in \mathbb{N}}$ converges to $f \in \mathcal{A}_{\alpha}\left(M\right)$ in the Diff$^{\infty}\left(M\right)$-topology. Afterwards, we will show that we can fulfil these conditions (see Lemma \ref{lem:conv}).
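A toy illustration of Lemma \ref{lem:konj} may be helpful at this point: for $k=0$ we have $C_0 = 1$, and on the circle the estimate reduces to the Lipschitz bound $\sup_y \left|h\left(y+\alpha\right)-h\left(y+\beta\right)\right| \leq \left\|Dh\right\|_0 \cdot \left|\alpha-\beta\right|$, since $h^{-1}$ is surjective. The following Python sketch checks this numerically for a concrete, hypothetical circle diffeomorphism; it is illustrative only and plays no role in the construction:

```python
import math

def h(x):
    # A concrete circle diffeomorphism (hypothetical choice for illustration):
    # h(x) = x + 0.1*sin(2*pi*x) mod 1; its derivative 1 + 0.2*pi*cos(2*pi*x)
    # is positive, so h is a diffeomorphism of S^1 = R/Z.
    return (x + 0.1 * math.sin(2.0 * math.pi * x)) % 1.0

def circle_dist(a, b):  # metric on S^1 = R/Z
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

alpha, beta = 0.30, 0.31
L = 1.0 + 0.2 * math.pi  # upper bound for ||Dh||_0, hence for |||h|||_1

# d_0(h R_alpha h^{-1}, h R_beta h^{-1}) = sup_y |h(y+alpha) - h(y+beta)|;
# approximate the sup on a grid of sample points.
d0 = max(circle_dist(h(y / 1000.0 + alpha), h(y / 1000.0 + beta))
         for y in range(1000))
assert d0 <= L * abs(alpha - beta) + 1e-12
```

The bound $C_k \cdot ||| h |||^{k+1}_{k+1}$ in the lemma generalizes this mean-value estimate to higher derivatives, where products of up to $k+1$ derivatives of $h$ and $h^{-1}$ appear.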
\begin{lem} \label{lem:convgen} Let $\varepsilon > 0$ be arbitrary and $\left(k_n\right)_{n \in \mathbb{N}}$ be a strictly increasing sequence of natural numbers satisfying $\sum^{\infty}_{n=1}\frac{1}{k_n} < \varepsilon$. Furthermore, we assume that in our constructions the following conditions are fulfilled: $$ \left| \alpha - \alpha_1 \right| < \varepsilon \quad\text{ and }\quad\left| \alpha - \alpha_n \right| \leq \frac{1}{2 \cdot k_n \cdot C_{k_n} \cdot ||| H_n |||^{k_n+1}_{k_n+1}}\text{ for every }n \in \mathbb{N}, $$ where $C_{k_n}$ are the constants from Lemma \ref{lem:konj}. \begin{enumerate} \item Then the sequence of diffeomorphisms $f_n = H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_{n}$ converges in the Diff$^{\infty}(M)$-topology to a measure-preserving smooth diffeomorphism $f$, for which $d_{\infty}\left(f,R_{\alpha}\right)< 3 \cdot \varepsilon$ holds. \item Also the sequence of diffeomorphisms $\hat{f}_n = H_n \circ R_{\alpha} \circ H^{-1}_{n} \in \mathcal{A}_{\alpha}\left(M\right)$ converges to $f$ in the Diff$^{\infty}(M)$-topology. Hence $f \in \mathcal{A}_{\alpha}\left(M\right)$. \end{enumerate} \end{lem} \begin{pr} \begin{enumerate} \item According to our construction it holds $h_n \circ R_{\alpha_n} = R_{\alpha_n} \circ h_n$ and hence \begin{align*} f_{n-1} & = H_{n-1} \circ R_{\alpha_n} \circ H^{-1}_{n-1} = H_{n-1} \circ R_{\alpha_n} \circ h_n \circ h^{-1}_{n} \circ H^{-1}_{n-1} \\ & = H_{n-1} \circ h_n \circ R_{\alpha_n} \circ h^{-1}_{n} \circ H^{-1}_{n-1} = H_n \circ R_{\alpha_n} \circ H^{-1}_{n}. 
\end{align*} Applying Lemma \ref{lem:konj} we obtain for every $k,n \in \mathbb{N}$: \begin{equation} \label{est1} d_k\left(f_n, f_{n-1}\right) = d_k\left(H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_{n}, H_n \circ R_{\alpha_n} \circ H^{-1}_{n}\right) \leq C_k \cdot ||| H_n |||^{k+1}_{k+1} \cdot \left| \alpha_{n+1} - \alpha_n \right| \end{equation} In section \ref{subsection:first} we assumed $\left|\alpha - \alpha_n \right| \stackrel{n \rightarrow \infty}{\longrightarrow} 0$ monotonically. Using the triangle inequality we obtain $\left| \alpha_{n+1} - \alpha_n \right| \leq \left| \alpha_{n+1} - \alpha \right| + \left| \alpha - \alpha_n \right| \leq 2 \cdot \left| \alpha - \alpha_n \right|$ and therefore equation (\ref{est1}) becomes: \begin{equation*} d_k\left(f_n, f_{n-1}\right) \leq C_k \cdot ||| H_n |||^{k+1}_{k+1} \cdot 2 \cdot \left| \alpha_{n} - \alpha \right|. \end{equation*} By the assumptions of this Lemma it follows for every $k \leq k_n$: \begin{equation} \label{est4} d_k\left(f_n,f_{n-1}\right) \leq d_{k_n}\left(f_n,f_{n-1}\right) \leq C_{k_n} \cdot ||| H_n |||^{k_n+1}_{k_n +1} \cdot 2 \cdot \frac{1}{2 \cdot k_n \cdot C_{k_n} \cdot ||| H_n |||^{k_n+1}_{k_n +1} } \leq \frac{1}{k_n} \end{equation} In the next step we show that for arbitrary $k \in \mathbb{N}$ the sequence $\left(f_n\right)_{n \in \mathbb{N}}$ is a Cauchy sequence in Diff$^k\left(M\right)$, i.e. $\lim_{n,m\rightarrow \infty} d_k\left(f_n,f_m\right) = 0$. For this purpose, we calculate: \begin{equation} \label{est2} \lim_{n \rightarrow \infty} d_k\left(f_n,f_m\right) \leq \lim_{n \rightarrow \infty} \sum^{n}_{i=m+1} d_k\left(f_i, f_{i-1}\right) = \sum^{\infty}_{i=m+1}d_k\left(f_i, f_{i-1}\right). \end{equation} In the limit process $ m \rightarrow \infty$ we may assume $k\leq k_m$ and obtain from equations (\ref{est4}) and (\ref{est2}): \begin{equation*} \lim_{n,m \rightarrow \infty} d_k\left(f_n,f_m\right) \leq \lim_{m \rightarrow \infty} \sum^{\infty}_{i=m+1} \frac{1}{k_i} = 0.
\end{equation*} Since Diff$^{k}\left(M\right)$ is complete, the sequence $\left(f_n\right)_{n \in \mathbb{N}}$ consequently converges in Diff$^k\left(M\right)$ for every $k \in \mathbb{N}$. Thus, the sequence converges in Diff$^{\infty}\left(M\right)$ by definition. \\ \\ Furthermore, we estimate: \begin{equation} \label{est3} d_{\infty}\left(R_{\alpha},f \right)= d_{\infty}\left(R_{\alpha}, \lim_{n\rightarrow \infty} f_n\right) \leq d_{\infty}\left(R_{\alpha}, R_{\alpha_1}\right) + \sum^{\infty}_{n=1} d_{\infty}\left(f_n, f_{n-1}\right), \end{equation} where we used the notation $f_0=R_{\alpha_1}$. \\ By explicit calculations we obtain $d_k\left(R_{\alpha}, R_{\alpha_1}\right) = d_0\left(R_{\alpha}, R_{\alpha_1}\right) = \left| \alpha - \alpha_1 \right|$ for every $k \in \mathbb{N}$, hence \begin{equation*} d_{\infty}\left(R_{\alpha}, R_{\alpha_1}\right) = \sum^{\infty}_{k=1} \frac{\left| \alpha - \alpha_1 \right|}{2^k \cdot \left(1+d_k\left(R_{\alpha}, R_{\alpha_1}\right)\right)} \leq \left| \alpha - \alpha_1 \right| \cdot \sum^{\infty}_{k=1} \frac{1}{2^k} = \left| \alpha - \alpha_1 \right|. \end{equation*} Additionally, we have: \begin{align*} \sum^{\infty}_{n=1} d_{\infty}\left(f_n, f_{n-1}\right) & = \sum^{\infty}_{n=1} \sum^{\infty}_{k=1} \frac{d_k\left(f_n, f_{n-1}\right)}{2^k \cdot \left(1+d_k\left(f_n,f_{n-1}\right)\right)} \\ &= \sum^{\infty}_{n=1}\left( \sum^{k_n}_{k=1}\frac{d_k\left(f_n, f_{n-1}\right)}{2^k \cdot \left(1+d_k\left(f_n,f_{n-1}\right)\right)} + \sum^{\infty}_{k=k_n +1}\frac{d_k\left(f_n, f_{n-1}\right)}{2^k \cdot \left(1+d_k\left(f_n,f_{n-1}\right)\right)}\right) \end{align*} As seen above $d_k\left(f_n,f_{n-1}\right)\leq \frac{1}{k_n}$ for every $k\leq k_n$.
From this it follows further: \begin{align*} \sum^{\infty}_{n=1} d_{\infty}\left(f_n, f_{n-1}\right) & \leq \sum^{\infty}_{n=1}\left( \frac{1}{k_n} \cdot \sum^{k_n}_{k=1}\frac{1}{2^k} + \sum^{\infty}_{k=k_n +1}\frac{d_k\left(f_n, f_{n-1}\right)}{2^k \cdot \left(1+d_k\left(f_n,f_{n-1}\right)\right)}\right) \\ & \leq \sum^{\infty}_{n=1} \frac{1}{k_n} + \sum^{\infty}_{n=1}\sum^{\infty}_{k=k_n +1} \frac{1}{2^k}. \end{align*} Because of $\sum^{\infty}_{k=k_n +1} \frac{1}{2^k} = 2 - \sum^{k_n}_{k=0} \frac{1}{2^k} = \left(\frac{1}{2}\right)^{k_n} \leq \frac{1}{k_n}$ we conclude: \begin{equation*} \sum^{\infty}_{n=1} d_{\infty}\left(f_n, f_{n-1}\right)\leq \sum^{\infty}_{n=1} \frac{1}{k_n} + \sum^{\infty}_{n=1}\frac{1}{k_n} < 2 \cdot \varepsilon. \end{equation*} Hence, using equation (\ref{est3}) we obtain the desired estimate $d_{\infty}\left(f, R_{\alpha}\right) < 3 \cdot \varepsilon$. \item We have to show: $\hat{f}_n \rightarrow f$ in Diff$^{\infty}\left(M\right)$. \\ To this end, we compute with the aid of Lemma \ref{lem:konj} for every $n \in \mathbb{N}$ and $k \leq k_n$: \begin{align*} d_k\left(f_n, \hat{f}_n\right) & \leq d_{k_n}\left(H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_n, H_n \circ R_{\alpha} \circ H^{-1}_n\right) \\ & \leq C_{k_n} \cdot ||| H_n |||^{k_n +1}_{k_n+1} \cdot \left| \alpha_{n+1} - \alpha \right| \leq C_{k_n} \cdot ||| H_n |||^{k_n +1}_{k_n+1} \cdot \left| \alpha_{n} - \alpha \right| \\ & \leq C_{k_n} \cdot ||| H_n |||^{k_n +1}_{k_n+1} \cdot \frac{1}{2 \cdot k_n \cdot C_{k_n} \cdot ||| H_n |||^{k_n +1}_{k_n+1}} = \frac{1}{2 \cdot k_n} \leq \frac{1}{k_n}. \end{align*} Fix some $k \in \mathbb{N}$. \\ \textbf{Claim: }$\forall \delta >0 \ \ \exists N \ \ \forall n \geq N: \ \ d_k\left(f, \hat{f}_{n}\right) < \delta$, i.e. $\hat{f}_n \rightarrow f$ in Diff$^k\left(M\right)$.\\ \textbf{Proof: } Let $\delta >0$ be given. Since $f_n \rightarrow f$ in Diff$^{\infty}\left(M\right)$ we have $f_n \rightarrow f$ in Diff$^{k}\left(M\right)$ in particular.
Hence, there is $n_1 \in \mathbb{N}$, such that $d_k\left(f,f_n\right)< \frac{\delta}{2}$ for every $n \geq n_1$. Because of $k_n \rightarrow \infty$ we conclude the existence of $n_2 \in \mathbb{N}$, such that $\frac{1}{k_n}<\frac{\delta}{2}$ for every $n\geq n_2$, as well as the existence of $n_3 \in \mathbb{N}$, such that $k_n\geq k$ for every $n\geq n_3$. Then we obtain for every $n\geq \max\left\{n_1,n_2,n_3\right\}$: \begin{equation*} d_k\left(f, \hat{f}_n\right) \leq d_k\left(f,f_n\right) + d_k\left(f_n, \hat{f}_n\right) < \frac{\delta}{2} + d_{k_n}\left(f_n, \hat{f}_n\right) \leq \frac{\delta}{2} + \frac{1}{k_n} < \frac{\delta}{2} + \frac{\delta}{2} = \delta. \end{equation*} Hence, the claim is proven.\\ \\ In the next step we show: $\lim_{n\rightarrow \infty}d_{\infty}\left(\hat{f}_n,f\right) = 0$. For this purpose, we examine: \begin{align*} d_{\infty}\left(f_n, \hat{f}_n\right) & = \sum^{k_n}_{k=1} \frac{d_k\left(f_n, \hat{f}_n\right)}{2^k \cdot \left(1+d_k\left(f_n, \hat{f}_n\right)\right)} + \sum^{\infty}_{k=k_n +1} \frac{d_k\left(f_n, \hat{f}_n\right)}{2^k \cdot \left(1+d_k\left(f_n, \hat{f}_n\right)\right)} \\ & \leq \frac{1}{k_n} \cdot \sum^{k_n}_{k=1} \frac{1}{2^k} + \sum^{\infty}_{k=k_n +1} \frac{1}{2^k} \leq \frac{1}{k_n} + \left(\frac{1}{2}\right)^{k_n}. \end{align*} Consequently $\lim_{n \rightarrow \infty} d_{\infty}\left(f_n, \hat{f}_n\right) = 0$. 
With it we compute: \begin{align*} \lim_{n \rightarrow \infty} d_{\infty}\left(f, \hat{f}_n\right) & = \lim_{n \rightarrow \infty} d_{\infty}\left(\lim_{m \rightarrow \infty}f_m, \hat{f}_n\right) = \lim_{n \rightarrow \infty} \lim_{m \rightarrow \infty} d_{\infty}\left(f_m, \hat{f}_n\right) \\ &\leq \lim_{n \rightarrow \infty} \lim_{m \rightarrow \infty} \left( \sum^{m}_{i=n+1} d_{\infty}\left(f_i,f_{i-1}\right) + d_{\infty}\left(f_n, \hat{f}_n\right)\right) \\ &= \lim_{n \rightarrow \infty} \sum^{\infty}_{i=n+1} d_{\infty}\left(f_i,f_{i-1}\right) + \lim_{n \rightarrow \infty} d_{\infty}\left(f_n, \hat{f}_n\right) = 0. \end{align*} As asserted we obtain: $\lim_{n\rightarrow \infty}d_{\infty}\left(\hat{f}_n,f\right) = 0$. \end{enumerate} \end{pr} As announced we show that we can satisfy the conditions from Lemma \ref{lem:convgen} in our constructions: \begin{lem} \label{lem:conv} Let $\left(k_n\right)_{n \in \mathbb{N}}$ be a strictly increasing sequence of natural numbers with $\sum^{\infty}_{n=1} \frac{1}{k_n} < \infty$ and $C_{k_n}$ be the constants from Lemma \ref{lem:konj}. For any Liouvillean number $\alpha$ there exists a sequence $\alpha_n = \frac{p_n}{q_n}$ of rational numbers with the property that $260n^4$ divides $q_n$, such that our conjugation maps $H_n$ constructed in section \ref{subsection:g} and \ref{subsection:phi} fulfil the following conditions: \begin{enumerate} \item For every $n \in \mathbb{N}$: \begin{equation*} \left| \alpha - \alpha_n \right| < \frac{1}{2 \cdot k_n \cdot C_{k_n} \cdot ||| H_n |||^{k_n +1}_{k_n +1}}. \end{equation*} \item For every $n \in \mathbb{N}$: \begin{equation*} \left| \alpha - \alpha_n \right| < \frac{1}{2^{n+1} \cdot q_n \cdot ||| H_n |||_1}. \end{equation*} \item For every $n \in \mathbb{N}$: \begin{equation*} \left\|DH_{n-1} \right\|_0 < \frac{\ln\left(q_n\right)}{n}. 
\end{equation*} \end{enumerate} \end{lem} \begin{pr} In Lemma \ref{lem:normH} we saw $||| H_n |||_{k_n+1} \leq \breve{C}_n \cdot q^{3 \cdot \left(m-1\right)^2 \cdot \left(k_n+1\right) \cdot n \cdot \left(n+1\right)}_{n}$, where the constant $\breve{C}_n$ was independent of $q_n$. Thus, we can choose $q_n \geq \breve{C}_n$ for every $n \in \mathbb{N}$. Hence, we obtain: $||| H_n |||_{k_n+1} \leq q^{4 \cdot \left(m-1\right)^2 \cdot \left(k_n+1\right) \cdot n \cdot \left(n+1\right)}_{n}$. \\ In addition to $q_n \geq \breve{C}_n$ we keep the previously mentioned condition $q_n \geq 64 \cdot 260 \cdot n^4 \cdot (n-1)^{11} \cdot q^{(m-1) \cdot (n-1)^2+3}_{n-1}$ in mind. Furthermore, we may impose the condition $\left\|DH_{n-1}\right\|_0 < \frac{\ln\left(q_n\right)}{n}$ on $q_n$, because $H_{n-1}$ is independent of $q_n$. Since $\alpha$ is a Liouvillean number, we find a sequence of rational numbers $\tilde{\alpha}_n = \frac{\tilde{p}_n}{\tilde{q}_n}$ with $\tilde{p}_n, \tilde{q}_n$ relatively prime, subject to the above restrictions (formulated for $\tilde{q}_n$), satisfying: \begin{equation*} \left| \alpha - \tilde{\alpha}_n \right| = \left| \alpha - \frac{\tilde{p}_n}{\tilde{q}_n} \right|< \frac{\left|\alpha-\alpha_{n-1}\right|}{2^{n+1} \cdot k_n \cdot C_{k_n} \cdot \left(260n^4\right)^{1+4 \cdot \left(m-1\right)^2 \cdot \left(k_n+1\right)^2 \cdot n \cdot \left(n+1\right)} \cdot \tilde{q}^{1+4 \cdot \left(m-1\right)^2 \cdot \left(k_n+1\right)^2 \cdot n \cdot \left(n+1\right)}_{n}} \end{equation*} Put $q_n \coloneqq 260 n^4 \cdot \tilde{q}_n$ and $p_n \coloneqq 260n^4 \cdot \tilde{p}_n$. Then we obtain: \begin{equation*} \left| \alpha - \alpha_n \right| < \frac{\left|\alpha-\alpha_{n-1}\right|}{2^{n+1} \cdot k_n \cdot C_{k_n} \cdot q^{1+4 \cdot \left(m-1\right)^2 \cdot \left(k_n+1\right)^2 \cdot n \cdot \left(n+1\right)}_{n}}. \end{equation*} So we have $\left|\alpha-\alpha_{n}\right| \stackrel{n\rightarrow \infty}{\longrightarrow} 0$ monotonically.
Because of $||| H_n |||^{k_n+1}_{k_n+1} \leq q^{4 \cdot \left(m-1\right)^2 \cdot \left(k_n+1\right)^2 \cdot n \cdot \left(n+1\right)}_{n}$ this yields: $\left| \alpha - \alpha_n \right| < \frac{1}{2^{n+1} \cdot q_n \cdot k_n \cdot C_{k_n} \cdot ||| H_n |||^{k_n+1}_{k_n +1}}$. Thus, the first property of this Lemma is fulfilled. \\ Furthermore, we note $k_n\geq1$ and $C_{k_n} \geq 1$ by Lemma \ref{lem:konj}. Thus, $q_n \cdot k_n \cdot C_{k_n} \geq q_n$. Moreover, $||| H_n |||_1 \geq \left\|H_n\right\|_0 =1$, because $H_n: \mathbb{S}^1 \times \left[0,1\right]^{m-1} \rightarrow \mathbb{S}^1 \times \left[0,1\right]^{m-1}$ is a diffeomorphism. Hence, $||| H_n |||^{k_n+1}_{k_n +1}\geq ||| H_n |||_1$. Altogether, we conclude $2^{n+1} \cdot q_n \cdot k_n \cdot C_{k_n} \cdot ||| H_n |||^{k_n+1}_{k_n +1} \geq 2^{n+1} \cdot q_n \cdot ||| H_n |||_1$ and so: \begin{equation} \label{alpha} \left| \alpha - \alpha_n \right| < \frac{1}{2^{n+1} \cdot q_n \cdot k_n \cdot C_{k_n} \cdot ||| H_n |||^{k_n+1}_{k_n +1}} \leq \frac{1}{2^{n+1} \cdot q_n \cdot ||| H_n |||_1}, \end{equation} i.e. we have verified the second property. \end{pr} \begin{rem} Lemma \ref{lem:conv} shows that the conditions of Lemma \ref{lem:convgen} are satisfied. Therefore, our sequence of constructed diffeomorphisms $f_n$ converges in the Diff$^{\infty}(M)$-topology to a diffeomorphism $f \in \mathcal{A}_{\alpha}(M)$. \end{rem} To apply Proposition \ref{prop:crit} we need another result: \begin{lem} \label{lem:m} Let $\left(\alpha_n \right)_{n \in \mathbb{N}}$ be constructed as in Lemma \ref{lem:conv}. Then it holds for every $n \in \mathbb{N}$ and for every $\tilde{m} \leq q_{n+1}$: \begin{equation*} d_0\left(f^{\tilde{m}},f^{\tilde{m}}_{n}\right)\leq\frac{1}{2^n}. \end{equation*} \end{lem} \begin{pr} In the proof of Lemma \ref{lem:convgen} we observed $f_{i-1}= H_{i} \circ R_{\alpha_{i}} \circ H^{-1}_{i}$ for every $i \in \mathbb{N}$.
Using this together with Lemma \ref{lem:konj}, we compute: \begin{equation*} d_0\left(f^{\tilde{m}}_{i},f^{\tilde{m}}_{i-1}\right) = d_0\left(H_{i} \circ R_{\tilde{m} \cdot \alpha_{i+1}} \circ H^{-1}_{i}, H_{i} \circ R_{\tilde{m} \cdot \alpha_{i}} \circ H^{-1}_{i}\right) \leq ||| H_{i} |||_1 \cdot \tilde{m} \cdot 2 \cdot \left|\alpha-\alpha_{i}\right|. \end{equation*} Since $\tilde{m} \leq q_{n+1} \leq q_i$ we conclude for every $i > n$ using equation (\ref{alpha}): \begin{equation*} d_0 \left( f^{\tilde{m}}_{i}, f^{\tilde{m}}_{i-1}\right) \leq ||| H_i |||_1 \cdot \tilde{m} \cdot 2 \cdot \left| \alpha - \alpha_i \right| \leq ||| H_i |||_1 \cdot \tilde{m} \cdot 2 \cdot \frac{1}{2^{i+1} \cdot q_i \cdot ||| H_i |||_1} \leq \frac{\tilde{m}}{q_i} \cdot \frac{1}{2^i} \leq \frac{1}{2^i}. \end{equation*} Thus, for every $\tilde{m}\leq q_{n+1}$ we get the claimed result: \begin{equation*} d_0\left(f^{\tilde{m}}, f^{\tilde{m}}_{n}\right) = \lim_{k \rightarrow \infty} d_0\left(f^{\tilde{m}}_{k}, f^{\tilde{m}}_{n}\right) \leq \lim_{k \rightarrow \infty} \sum^{k}_{i=n+1} d_0\left(f^{\tilde{m}}_{i}, f^{\tilde{m}}_{i-1}\right)\leq \sum^{\infty}_{i=n+1} \frac{1}{2^{i}} = \left(\frac{1}{2}\right)^n. \end{equation*} \end{pr} \begin{rem} Note that the sequence $\left(m_n\right)_{n \in \mathbb{N}}$ defined in section \ref{section:distri} meets the mentioned condition $m_{n} \leq q_{n+1}$ and hence Lemma \ref{lem:m} can be applied to it. \end{rem} In conclusion, we have checked that all the assumptions of Proposition \ref{prop:crit} are satisfied. Thus, this criterion guarantees that the constructed diffeomorphism $f \in \mathcal{A}_{\alpha}(M)$ is weakly mixing. In addition, for every $\varepsilon >0$ we can choose the parameters by Lemma \ref{lem:convgen} in such a way that $d_{\infty}\left(f, R_{\alpha}\right) < \varepsilon$ holds.
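The choice of the $\tilde{q}_n$ above rests on the defining property of Liouville numbers: they admit rational approximations of arbitrarily high polynomial quality. As a standard illustration (this particular number plays no role in the construction), consider the classical Liouville constant:

```latex
% Classical Liouville constant (illustration only, not used above):
%   \alpha = \sum_{k \ge 1} 10^{-k!}.
% Truncating the series after n terms yields p_n/q_n with q_n = 10^{n!}:
\[
\Bigl| \alpha - \frac{p_n}{q_n} \Bigr|
  = \sum_{k \ge n+1} 10^{-k!}
  \le 2 \cdot 10^{-(n+1)!}
  = 2\, q_n^{-(n+1)},
\]
% so for every fixed exponent s there are infinitely many n with
% |\alpha - p_n/q_n| < q_n^{-s}; this is exactly the freedom exploited
% when the \tilde{q}_n are required to beat the prescribed bounds.
```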
\section{Construction of the measurable $f$-invariant Riemannian metric} \label{section:metric} Let $\omega_0$ denote the standard Riemannian metric on $M = \mathbb{S}^1 \times \left[0,1\right]^{m-1}$. The following Lemma shows that the conjugation map $h_n = g_n \circ \phi_n$ constructed in section \ref{section:constr} is an isometry with respect to $\omega_0$ on the elements of the partial partition $\zeta_n$. \begin{lem} \label{lem:isom} Let $\check{I}_n \in \zeta_n$. Then $h_n |_{\check{I}_n}$ is an isometry with respect to $\omega_0$. \end{lem} \begin{pr} Let $\check{I}_{n,k} \in \zeta_n$ be a partition element on $\left[\frac{k-1}{n \cdot q_n}, \frac{k}{n \cdot q_n}\right] \times \left[0,1\right]^{m-1}$. This element $\check{I}_{n,k}$ is positioned in such a way that all the occurring maps $\varphi_{\varepsilon, 1,j}$ and $\varphi^{-1}_{\varepsilon_2,1,j}$ act as rotations on it. Thus, $\phi_n |_{\check{I}_{n,k}}$ is an isometry and $\phi_n\left(\check{I}_{n,k}\right)$ is equal to \begin{equation*} \begin{split} & \Bigg{[}\frac{k-1}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^2_n}+...+ \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}\right)}_{1}+1}{n \cdot q^{(m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+1}_n}-\frac{j^{(1)}_2}{n \cdot q^{(m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+2}_n}-...-\frac{j^{(k)}_2}{n \cdot q^{(m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+k+1}_n}\\ & \quad - \frac{j^{(1)}_3}{n \cdot q^{(m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+k+2}_n}-...-\frac{j^{(k)}_m+1}{n \cdot q^{(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+1}_n}+\frac{1}{n^5 \cdot q^{(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+1}_n},\\ & \quad \frac{k-1}{n \cdot q_n} + \frac{j^{(1)}_{1}}{n \cdot q^2_n}+...-\frac{j^{(k)}_m}{n \cdot q^{(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+1}_n}-\frac{1}{n^5 \cdot q^{(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+1}_n} \Bigg{]} \\ \times & \Bigg{[} \frac{j^{\left((m-1) \cdot \frac{k \cdot 
\left(k-1\right)}{2}+1\right)}_1}{q_n}+ ...\frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+k\right)}_1}{q^k_n}+\frac{j^{(k+1)}_2}{q^{k+1}_n} +...+ \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+1\right)}_2}{q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n}+ \\ & \quad \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+2\right)}_2 }{8 n^5 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n \cdot \left[n q^{\sigma}_{n}\right]} + \frac{1}{8 n^9 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n \cdot \left[ n q^{\sigma}_{n}\right]}, \\ & \quad \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+1\right)}_1}{q_n}+ ...+ \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}+2\right)}_2 + 1}{8 n^5 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n \cdot \left[n q^{\sigma}_{n}\right]} - \frac{1}{8 n^9 \cdot q^{1+(m-1) \cdot \frac{k \cdot \left(k+1\right)}{2}}_n \cdot \left[ n q^{\sigma}_{n}\right]}\Bigg{]} \\ \times & \prod^{m}_{i=3} \Bigg{[} \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+(i-2) \cdot k+1\right)}_1}{q_n} + ... + \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+(i-1) \cdot k\right)}_1}{q^k_n} + \frac{1}{n^4 \cdot q^{k}_n}, \\ & \quad \quad \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+(i-2) \cdot k+1\right)}_1}{q_n} + ... + \frac{j^{\left((m-1) \cdot \frac{k \cdot \left(k-1\right)}{2}+(i-1) \cdot k\right)}_1 + 1}{q^k_n} - \frac{1}{n^4 \cdot q^{k}_n}\Bigg{]}. \end{split} \end{equation*} Then we have to examine the application of $g_n = g_{n \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n, \left[n \cdot q^{\sigma}_{n}\right], \frac{1}{8n^4}, \frac{1}{32n^4}}$. In particular, we have $\frac{\varepsilon}{b \cdot a} = \frac{1}{8n^4 \cdot \left[n \cdot q^{\sigma}_{n}\right] \cdot n \cdot q^{1+(m-1) \cdot \frac{\left(k+1\right) \cdot k}{2}}_n}$. 
Since $4 \cdot \varepsilon = \frac{1}{2n^4}< \frac{1}{n^4}$, $g_n$ acts as a translation on $\phi_n\left(\check{I}_{n,k}\right)$. \end{pr} This Lemma implies that $h^{-1}_{n} |_{h_n\left(\check{I}_n\right)}$ is an isometry as well.\\ In the following we construct the $f$-invariant measurable Riemannian metric. This construction parallels the approach in \cite[section 4.8]{GK}. To this end we put $\omega_n \coloneqq \left(H^{-1}_n\right)^{\ast} \omega_0$. Each $\omega_n$ is a smooth Riemannian metric because it is the pullback of a smooth metric via a $C^{\infty}$-diffeomorphism of $M$. Since $R^{\ast}_{\alpha_{n+1}}\omega_0 = \omega_0$, the metric $\omega_n$ is $f_n$-invariant: \begin{align*} f^{\ast}_{n} \omega_n & = \left(H_n \circ R_{\alpha_{n+1}} \circ H^{-1}_{n}\right)^{\ast} \left(H^{-1}_{n}\right)^{\ast}\omega_0 = \left(H^{-1}_{n}\right)^{\ast} R^{\ast}_{\alpha_{n+1}} H^{\ast}_{n} \left(H^{-1}_{n}\right)^{\ast} \omega_0 = \left(H^{-1}_{n}\right)^{\ast} R^{\ast}_{\alpha_{n+1}} \omega_0 \\ & = \left(H^{-1}_{n}\right)^{\ast} \omega_0 = \omega_n. \end{align*} With the succeeding Lemmas we show that the limit $\omega_{\infty} \coloneqq \lim_{n \rightarrow \infty} \omega_n$ exists $\mu$-almost everywhere and is the desired $f$-invariant Riemannian metric. \begin{lem} \label{lem:ae} The sequence $\left(\omega_n\right)_{n \in \mathbb{N}}$ converges $\mu$-a.e. to a limit $\omega_{\infty}$. \end{lem} \begin{pr} For every $N \in \mathbb{N}$ we have for every $k>0$: \begin{equation*} \omega_{N+k} = \left(H^{-1}_{N+k}\right)^{\ast} \omega_0 = \left(h^{-1}_{N+k} \circ ... \circ h^{-1}_{N+1} \circ H^{-1}_{N}\right)^{\ast} \omega_0 = \left(H^{-1}_{N}\right)^{\ast} \left(h^{-1}_{N+k} \circ ... \circ h^{-1}_{N+1}\right)^{\ast} \omega_0.
\end{equation*} Since the elements of the partition $\zeta_n$ cover $M$ except for a set of measure at most $\frac{4m}{n^2}$ by Remark \ref{rem:überd}, Lemma \ref{lem:isom} shows that $\omega_{N+k}$ coincides with $\omega_N = \left( H^{-1}_{N}\right)^{\ast} \omega_0$ on a set of measure at least $1-\sum^{\infty}_{n= N+1} \frac{4m}{n^2}$. As this measure approaches $1$ as $N\rightarrow \infty$, the sequence $\left(\omega_n\right)_{n \in \mathbb{N}}$ converges on a set of full measure. \end{pr} \begin{lem} The limit $\omega_{\infty}$ is a measurable Riemannian metric. \end{lem} \begin{pr} The limit $\omega_{\infty}$ is a measurable map because it is the pointwise limit of the smooth metrics $\omega_n$, which in particular are measurable. By the same reasoning $\omega_{\infty} |_p$ is symmetric for $\mu$-almost every $p \in M$. Furthermore, $\omega_{\infty}$ is positive definite because $\omega_n$ is positive definite for every $n \in \mathbb{N}$ and $\omega_{\infty}$ coincides with $\omega_N$ on $T_1M \otimes T_1M$ minus a set of measure at most $\sum^{\infty}_{n= N+1} \frac{4m}{n^2}$. Since this is true for every $N \in \mathbb{N}$, $\omega_{\infty}$ is positive definite on a set of full measure. \end{pr} \begin{rem} \label{rem:Egoroff} In the proof of the subsequent Lemma we will need Egoroff's theorem (see for example \cite[§21, Theorem A]{Ha2}): Let $\left(N,d\right)$ denote a separable metric space, let $\left(\varphi_n\right)_{n \in \mathbb{N}}$ be a sequence of $N$-valued measurable functions on a measure space $\left(X, \Sigma, \mu\right)$, and let $A \subseteq X$ be a measurable subset with $\mu\left(A\right) < \infty$ such that $\left(\varphi_n\right)_{n \in \mathbb{N}}$ converges $\mu$-a.e. on $A$ to a limit function $\varphi$. Then for every $\varepsilon > 0$ there exists a measurable subset $B \subset A$ such that $\mu\left(B\right) < \varepsilon$ and $\left(\varphi_n\right)_{n \in \mathbb{N}}$ converges to $\varphi$ uniformly on $A \setminus B$.
\end{rem} \begin{lem} $\omega_{\infty}$ is $f$-invariant, i.e., $f^{\ast}\omega_{\infty} = \omega_{\infty}$ $\mu$-a.e. \end{lem} \begin{pr} By Lemma \ref{lem:ae} the sequence $\left(\omega_n\right)_{n \in \mathbb{N}}$ converges in the C$^{\infty}$-topology pointwise almost everywhere. Hence, Egoroff's theorem yields: for every $\delta > 0$ there is a set $C_{\delta} \subseteq M$ such that $\mu\left(M \setminus C_{\delta}\right) < \delta$ and the convergence $\omega_n \rightarrow \omega_{\infty}$ is uniform on $C_{\delta}$. \\ The function $f$ was constructed as the limit of the sequence $\left(f_n\right)_{n \in \mathbb{N}}$ in the C$^{\infty}$-topology. Thus, $\tilde{f}_n \coloneqq f^{-1}_{n} \circ f \rightarrow \text{id}$ in the C$^{\infty}$-topology. Since $M$ is compact, this convergence is uniform too. \\ Furthermore, the smoothness of $f$ implies $f^{\ast} \omega_{\infty} = f^{\ast}\lim_{n \rightarrow \infty} \omega_n = \lim_{n \rightarrow \infty} f^{\ast} \omega_n$. With this, we compute on $C_{\delta}$: $f^{\ast} \omega_{\infty} = \lim_{n \rightarrow \infty} \left( \left(f_n \tilde{f}_n \right)^{\ast} \omega_n \right) = \lim_{n \rightarrow \infty} \left( \tilde{f}^{\ast}_{n} f^{\ast}_{n} \omega_n \right) = \lim_{n \rightarrow \infty} \tilde{f}^{\ast}_{n} \omega_n = \omega_{\infty}$, where we used the uniform convergence on $C_{\delta}$ in the last step. As this holds on every set $C_{\delta}$ with $\delta > 0$, it also holds on the set $\bigcup_{\delta > 0} C_{\delta}$. This is a set of full measure and therefore the claim follows. \end{pr} Hence, the desired $f$-invariant measurable Riemannian metric $\omega_{\infty}$ is constructed and thus Proposition \ref{prop:satz2} is proven. \end{document}
\begin{document} \title[Bidemocratic bases]{Bidemocratic bases and their connections with other greedy-type bases} \author[Albiac]{Fernando Albiac} \address{Department of Mathematics, Statistics, and Computer Sciences--InaMat2\\ Universidad P\'ublica de Navarra\\ Campus de Arrosad\'{i}a\\ Pamplona\\ 31006 Spain} \email{[email protected]} \author[Ansorena]{Jos\'e L. Ansorena} \address{Department of Mathematics and Computer Sciences\\ Universidad de La Rioja\\ Logro\~no\\ 26004 Spain} \email{[email protected]} \author[Berasategui]{Miguel Berasategui} \address{Miguel Berasategui\\ IMAS - UBA - CONICET - Pab I, Facultad de Ciencias Exactas y Naturales\\ Universidad de Buenos Aires\\ (1428), Buenos Aires, Argentina} \email{[email protected]} \author[Bern\'a]{Pablo M. Bern\'a} \address{Pablo M. Bern\'a\\ Departamento de Matem\'atica Aplicada y Estad\'istica, Facultad de Ciencias Econ\'omicas y Empresariales, Universidad San Pablo-CEU, CEU Universities\\ Madrid, 28003 Spain.} \email{[email protected]} \author[Lassalle]{Silvia Lassalle} \address{Silvia Lassalle\\ Departamento de Matem\'atica\\ Universidad de San Andr\'es, Vito Dumas 284\\ (1644) Victoria, Buenos Aires, Argentina and\\ IMAS - CONICET} \email{[email protected]} \begin{abstract} In nonlinear greedy approximation theory, bidemocratic bases have traditionally played the role of dualizing democratic, greedy, quasi-greedy, or almost greedy bases. In this article we shift the viewpoint and study them for their own sake, just as we would with any other kind of greedy-type bases. In particular we show that bidemocratic bases need not be quasi-greedy, despite the fact that they retain a strong unconditionality flavor which brings them very close to being quasi-greedy. Our constructive approach yields that for each $1<p<\infty$ the space $\ell_p$ has a bidemocratic basis which is not quasi-greedy.
We also present a novel method for constructing conditional quasi-greedy bases which are bidemocratic, and provide a characterization of bidemocratic bases in terms of the new concepts of truncation quasi-greediness and partially democratic bases. \end{abstract} \subjclass[2010]{41A65, 41A46, 46B15, 46B45} \keywords{Nonlinear approximation, Thresholding greedy algorithm, quasi-greedy basis, democracy} \thanks{F. Albiac acknowledges the support of the Spanish Ministry for Science and Innovation under Grant PID2019-107701GB-I00 for \emph{Operators, lattices, and structure of Banach spaces}. F. Albiac and J.~L. Ansorena acknowledge the support of the Spanish Ministry for Science, Innovation, and Universities under Grant PGC2018-095366-B-I00 for \emph{An\'alisis Vectorial, Multilineal y Aproximaci\'on.} M. Berasategui and S. Lassalle were supported by ANPCyT PICT-2018-04104. P.~M. Bern\'a was supported by Grants PID2019-105599GB-I00 (Agencia Estatal de Investigaci\'on, Spain) and 20906/PI/18 from Fundaci\'on S\'eneca (Regi\'on de Murcia, Spain). S. Lassalle was also supported in part by CONICET PIP 0483 and PAI UdeSA 2020-2021. F. Albiac, J.~L. Ansorena and P.~M. Bern\'a would like to thank the Erwin Schr\"odinger International Institute for Mathematics and Physics, Vienna, for support and hospitality during the programme \emph{Applied Functional Analysis and High-Dimensional Approximation}, held in the Spring of 2021, where work on this paper was undertaken.} \maketitle \section{Introduction and background}\noindent Let $\ensuremath{\mathbb{X}}$ be an infinite-dimensional separable Banach space (or, more generally, a quasi-Banach space) over the real or complex field $\ensuremath{\mathbb{F}}$.
Throughout this paper, unless otherwise stated, by a \emph{basis} of $\ensuremath{\mathbb{X}}$ we mean a norm-bounded sequence $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ that generates the entire space, in the sense that \[ \overline{\spn}(\ensuremath{\bm{x}}_n \colon n\in\ensuremath{\mathbb{N}})=\ensuremath{\mathbb{X}}, \] and for which there is a (unique) norm-bounded sequence $\ensuremath{\mathcal{X}}^{\ast}=(\ensuremath{\bm{x}}_{n}^{\ast})_{n=1}^\infty$ in the dual space $\ensuremath{\mathbb{X}}^{\ast}$ such that $(\ensuremath{\bm{x}}_{n}, \ensuremath{\bm{x}}_{n}^{\ast})_{n=1}^{\infty}$ is a biorthogonal system. We will refer to the basic sequence $\ensuremath{\mathcal{X}}^{\ast}$ as the \emph{dual basis} of $\ensuremath{\mathcal{X}}$. We recall that the basis $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is called \emph{democratic} if there is a constant $\Delta$ such that \[ \left\Vert \sum_{k\in A}\ensuremath{\bm{x}}_k \right\Vert\le \Delta \left\Vert \sum_{k\in B}\ensuremath{\bm{x}}_k \right\Vert \] whenever $A$ and $B$ are finite subsets of $\ensuremath{\mathbb{N}}$ with $|A|\le |B|$. The \emph{fundamental function} $\varphi\colon\ensuremath{\mathbb{N}}\to[0,\infty)$ of $\ensuremath{\mathcal{X}}$ is then defined by \[ \varphi(m)=\sup\limits_{|A|\le m}\left\Vert \sum_{k\in A}\ensuremath{\bm{x}}_k \right\Vert,\quad m\in\ensuremath{\mathbb{N}}, \] while the \emph{dual fundamental function} of $\ensuremath{\mathcal{X}}$ is just the fundamental function of its dual basis, i.e., \[ \varphi^{\ast}(m)=\sup\limits_{|A|\le m}\left\Vert \sum_{k\in A}\ensuremath{\bm{x}}_k^{\ast} \right\Vert, \quad m\in \ensuremath{\mathbb{N}}. \] In general it is not true that if a basis $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is democratic, then its dual basis $\ensuremath{\mathcal{X}}^{\ast}$ is democratic as well.
For instance, the $L_1$-normalized Haar system is an unconditional democratic basis of the dyadic Hardy space $H_1$ \cites{Woj1982,Woj2000}, but the $L_\infty$-normalized Haar system is not democratic in the dyadic $\ensuremath{\mathrm{BMO}}$-space \cite{Oswald2001}. In order to understand better how certain greedy-like properties dualize, Dilworth et al.\ introduced in \cite{DKKT2003} a strengthened form of democracy. Notice that the elementary computation \[ m=\left(\sum_{k\in A} \ensuremath{\bm{x}}_k^{\ast}\right)\left(\sum_{k\in A} \ensuremath{\bm{x}}_k\right) \le \left\Vert\sum_{k\in A} \ensuremath{\bm{x}}_k^{\ast}\right\Vert \left\Vert\sum_{k\in A} \ensuremath{\bm{x}}_k\right\Vert \;\text{if}\; |A|=m, \] yields the estimate \[ m\le \varphi(m)\,\varphi^{\ast}(m),\quad m\in\ensuremath{\mathbb{N}}. \] A basis $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is then said to be \emph{bidemocratic} if the reverse inequality is fulfilled up to a constant, i.e., $\ensuremath{\mathcal{X}}$ is bidemocratic with constant $\Delta_{b}$ ($\Delta_{b}$-bidemocratic for short) if \[ \varphi(m)\, \varphi^{\ast}(m)\le \Delta_{b}\, m, \quad m\in \ensuremath{\mathbb{N}}. \] Amongst other relevant results relative to this kind of bases in Banach spaces, Dilworth et al.\ showed in \cite{DKKT2003} that being quasi-greedy passes to dual bases under the assumption of bidemocracy (see \cite{DKKT2003}*{Theorem 5.4}). Since the dual basis of a bidemocratic basis is democratic, it follows that the corresponding result also holds for almost greedy and greedy bases. That is, if a bidemocratic basis is almost greedy (respectively, greedy), then so is its dual basis. Despite the instrumental role played by bidemocratic bases as a key that permits dualizing some greedy-type properties, it is our contention in this paper that these bases are of interest by themselves and that they deserve to be studied as any other kind of greedy-like bases.
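To fix ideas, here is a sanity check of the definition in the simplest setting (a folklore computation, not needed in the sequel): the unit vector system of $\ell_p$ for $1<p<\infty$ is bidemocratic with the best possible constant.

```latex
% Unit vector system (e_n) of \ell_p, 1 < p < \infty; its dual system
% (e_n^*) lives in \ell_q with 1/p + 1/q = 1. For any A with |A| = m:
\[
\varphi(m) = \Bigl\Vert \sum_{k \in A} e_k \Bigr\Vert_p = m^{1/p},
\qquad
\varphi^{\ast}(m) = \Bigl\Vert \sum_{k \in A} e_k^{\ast} \Bigr\Vert_q = m^{1/q},
\]
\[
\varphi(m)\,\varphi^{\ast}(m) = m^{1/p + 1/q} = m,
\]
% so the general lower bound m \le \varphi(m)\varphi^{*}(m) is attained
% exactly, and the basis is \Delta_b-bidemocratic with \Delta_b = 1.
```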
For instance, the unconditionality constants of bidemocratic bases have been estimated (see Theorem~\ref{thm:BidCond} below), which sheds some light on the performance of the greedy algorithm when it is implemented specifically for these bases. To undertake our task we must first place bidemocratic bases on the map by relating them with other types of bases that are relevant in the theory. In this respect the most important open question is whether bidemocratic bases are quasi-greedy. This problem is motivated by recent results showing that bidemocratic bases have uniform boundedness properties of certain (nonlinear) truncation operators which make them very close to quasi-greedy bases (see \cite{AABW2021}*{Proposition 5.7}). In our language, bidemocratic bases are truncation quasi-greedy. In Section~\ref{sect:BDNonQG} we will settle this question in the negative by proving that bidemocracy is not in general strong enough to ensure quasi-greediness, and show that for $1<p<\infty$ the space $\ell_p$ has a bidemocratic basis which is not quasi-greedy. Before that, we will look for sufficient conditions for a basis to be bidemocratic. Here one must take into account that if $\ensuremath{\mathcal{X}}$ is bidemocratic then both $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ are democratic, but the converse fails. The only positive result we find in the literature in the reverse direction is the aforementioned Theorem 5.4 from \cite{DKKT2003}, which tells us that if $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ are quasi-greedy and democratic then $\ensuremath{\mathcal{X}}$ is bidemocratic. In Section~\ref{sect:truncation quasi-greedy} we extend this result by relaxing the conditions on the bases $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ while still attaining the bidemocracy of $\ensuremath{\mathcal{X}}$.
Turning to quasi-greedy bases, it is natural, and consistent with our discussion in this paper, to further the study of conditional quasi-greedy bases by looking for conditional bidemocratic quasi-greedy bases, i.e., conditional almost greedy bases whose dual bases are also almost greedy. The previous methods for building conditional almost greedy bases in Banach spaces yield either bases whose fundamental function coincides with the fundamental function of the canonical basis of $\ell_1$, or bases whose fundamental function increases steadily enough (formally, bases that have the upper regularity property and the lower regularity property). In the former case, the bases are not bidemocratic unless they are equivalent to the canonical $\ell_1$-basis; in the latter, the bases are always bidemocratic by \cite{DKKT2003}*{Proposition 4.4}. The existence of conditional bidemocratic quasi-greedy bases which do not have the upper regularity property seems to be an unexplored area. In Section~\ref{sect:NM} we contribute to this topic by developing a new method for building bidemocratic, conditional, quasi-greedy bases with arbitrary fundamental functions. Throughout this paper we will use standard notation and terminology from Banach spaces and greedy approximation theory, as can be found, e.g., in \cite{AlbiacKalton2016}. We also refer the reader to the recent article \cite{AABW2021} for other more specialized notation. We next single out, however, the most heavily used terminology. For broader applicability, whenever possible we will establish our results in the setting of quasi-Banach spaces.
Let us recall that a \emph{quasi-Banach space} is a vector space $\ensuremath{\mathbb{X}}$ over the real or complex field $\ensuremath{\mathbb{F}}$ equipped with a \emph{quasi-norm}, i.e., a map $\|\cdot\|\colon \ensuremath{\mathbb{X}}\to [0,\infty)$ that satisfies all the usual properties of a norm with the exception of the triangle law, which is replaced with the condition \begin{equation}\label{defquasinorm} \|f+g\|\leq \kappa( \| f\| + \|g\|),\quad f,g\in \ensuremath{\mathbb{X}}, \end{equation} for some $\kappa\ge 1$ independent of $f$ and $g$, and moreover $(\ensuremath{\mathbb{X}},\|\cdot\|)$ is complete. The \emph{modulus of concavity} of the quasi-norm is the smallest constant $\kappa\ge 1$ in \eqref{defquasinorm}. Given $0<p\le 1$, a \emph{$p$-Banach space} will be a quasi-Banach space whose quasi-norm is $p$-subadditive, i.e., \[ \Vert f+g\Vert^p \le \Vert f\Vert^p +\Vert g \Vert^p, \quad f,g\in\ensuremath{\mathbb{X}}. \] Some authors have studied the Thresholding Greedy Algorithm, or TGA for short, for more demanding types of bases that we will bring into play on occasion. A sequence $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ of $\ensuremath{\mathbb{X}}$ is said to be a \emph{Schauder basis} if for every $f\in\ensuremath{\mathbb{X}}$ there is a unique sequence $(a_n)_{n=1}^\infty$ in $\ensuremath{\mathbb{F}}$ such that $f= \sum_{n=1}^{\infty} a_n\, \ensuremath{\bm{x}}_{n}$, where the convergence of the series is understood in the topology induced by the quasi-norm. If $\ensuremath{\mathcal{X}}$ is a Schauder basis we define the biorthogonal functionals associated to $\ensuremath{\mathcal{X}}$ by $\ensuremath{\bm{x}}_k^*(f)=a_k$ for all $f=\sum_{n=1}^{\infty} a_n \, \ensuremath{\bm{x}}_{n}\in\ensuremath{\mathbb{X}}$ and $k\in\ensuremath{\mathbb{N}}$.
The \emph{partial-sum projections} $S_{m}\colon \ensuremath{\mathbb{X}}\to \ensuremath{\mathbb{X}}$ with respect to the Schauder basis $\ensuremath{\mathcal{X}}$, given by \[ f\mapsto S_{m}(f)= \sum_{n=1}^{m} \ensuremath{\bm{x}}_n^*(f)\, \ensuremath{\bm{x}}_{n}, \quad f\in\ensuremath{\mathbb{X}},\, m\in\ensuremath{\mathbb{N}}, \] are uniformly bounded, whence we infer that $\sup_n \Vert \ensuremath{\bm{x}}_n\Vert \, \Vert \ensuremath{\bm{x}}_n^*\Vert<\infty$. Hence, if a Schauder basis $\ensuremath{\mathcal{X}}$ is semi-normalized, i.e., \[ 0<\inf_n \Vert \ensuremath{\bm{x}}_n\Vert\le \sup_n \Vert \ensuremath{\bm{x}}_n\Vert<\infty, \] then $(\ensuremath{\bm{x}}_n^*)_{n=1}^\infty$ is norm-bounded and so $\ensuremath{\mathcal{X}}$ is a basis in the sense of this paper. If $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is a Schauder basis, then the \emph{coefficient transform} \[ f\mapsto (\ensuremath{\bm{x}}_n^{\ast}(f))_{n=1}^\infty, \quad f\in\ensuremath{\mathbb{X}}, \] is one-to-one, that is, the basis $\ensuremath{\mathcal{X}}$ is \emph{total}. In the case when $\Vert S_m\Vert \le 1$ for all $m\in\ensuremath{\mathbb{N}}$ the Schauder basis $\ensuremath{\mathcal{X}}$ is said to be \emph{monotone}. Given $A\subseteq \ensuremath{\mathbb{N}}$, we will use $\ensuremath{\mathcal{E}}_A$ to denote the set consisting of all families $(\varepsilon_n)_{n\in A}$ in $\ensuremath{\mathbb{F}}$ with $|\varepsilon_n|=1$ for all $n\in A$.
Given a basis $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ of $\ensuremath{\mathbb{X}}$, a finite set $A\subseteq\ensuremath{\mathbb{N}}$ and $\varepsilon=(\varepsilon_n)_{n\in A}\in\ensuremath{\mathcal{E}}_A$ it is by now customary to use \[ \textstyle \ensuremath{\mathbbm{1}}_{\varepsilon,A}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sum_{n\in A} \varepsilon_n\, \ensuremath{\bm{x}}_n \;(\text{resp.,}\; \ensuremath{\mathbbm{1}}^{\ast}_{\varepsilon,A}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sum_{n\in A} \varepsilon_n \, \ensuremath{\bm{x}}_n^{\ast} ). \] If the basis and the space are clear from context we simply put $\ensuremath{\mathbbm{1}}_{\varepsilon,A}$ (resp., $\ensuremath{\mathbbm{1}}^{\ast}_{\varepsilon,A}$), and if $\varepsilon_n=1$ for all $n\in A$ we put $\ensuremath{\mathbbm{1}}_A$ (resp., $\ensuremath{\mathbbm{1}}^{\ast}_{A}$). Associated with the fundamental function $\varphi$ of the basis are the \emph{upper super-democracy function} of $\ensuremath{\mathcal{X}}$, \[ \ensuremath{\bm{\varphi_u}}(m)=\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)=\sup\left\lbrace \left\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} \right\Vert \colon |A|\le m,\, \varepsilon\in\ensuremath{\mathcal{E}}_A \right\rbrace, \quad m\in\ensuremath{\mathbb{N}}, \] and the \emph{lower super-democracy function} of $\ensuremath{\mathcal{X}}$, \[ \ensuremath{\bm{\varphi_l}}(m)=\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)=\inf\left\lbrace \left\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} \right\Vert \colon |A|\ge m,\, \varepsilon\in\ensuremath{\mathcal{E}}_A \right\rbrace, \quad m\in\ensuremath{\mathbb{N}}.
\] The growth of $\ensuremath{\bm{\varphi_u}}$ is of the same order as $\varphi$ (see \cite{AABW2021}*{inequality (8.3)}), and so the basis $\ensuremath{\mathcal{X}}$ is bidemocratic if and only if \begin{equation*} \sup_{m\in\ensuremath{\mathbb{N}}} \frac{1}{m} \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m) \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}](m) <\infty \end{equation*} (see \cite{AABW2021}*{Lemma 5.5}). The symbol $\alpha_j\lesssim \beta_j$ for $j\in J$ means that there is a positive constant $C$ such that the families of nonnegative real numbers $(\alpha_j)_{j\in J}$ and $(\beta_j)_{j\in J}$ are related by the inequality $\alpha_j\le C\beta_j$ for all $j\in J$. If $\alpha_j\lesssim \beta_j$ and $\beta_j\lesssim \alpha_j$ for $j\in J$ we say that $(\alpha_j)_{j\in J}$ and $(\beta_j)_{j\in J}$ are equivalent, and write $\alpha_j\approx \beta_j$ for $j\in J$. We finally recall that two bases $(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $(\ensuremath{\bm{y}}_n)_{n=1}^\infty$ of quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ are said to be equivalent if there is an isomorphism $T$ from $\ensuremath{\mathbb{X}}$ onto $\ensuremath{\mathbb{Y}}$ with $T(\ensuremath{\bm{x}}_n)=\ensuremath{\bm{y}}_n$ for all $n\in\ensuremath{\mathbb{N}}$. \section{From truncation quasi-greedy to bidemocratic bases }\label{sect:truncation quasi-greedy}\noindent Let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_{n})_{n=1}^{\infty}$ be a semi-normalized basis for a quasi-Banach space $\ensuremath{\mathbb{X}}$ with dual basis $(\ensuremath{\bm{x}}_{n}^*)_{n=1}^{\infty}$. For each $f\in \ensuremath{\mathbb{X}}$ and each finite $B\subseteq\ensuremath{\mathbb{N}}$, put \[ \ensuremath{\mathcal{U}}(f,B) = \min_{n\in B} |\ensuremath{\bm{x}}_n^{\ast}(f)| \sum_{n\in B} \sgn (\ensuremath{\bm{x}}_n^{\ast}(f)) \, \ensuremath{\bm{x}}_n.
\] Given $m\in\ensuremath{\mathbb{N}}\cup\{0\}$, the $m$\emph{th-restricted truncation operator} $\ensuremath{\mathcal{U}}_m\colon \ensuremath{\mathbb{X}} \to \ensuremath{\mathbb{X}}$ is defined as \[ \ensuremath{\mathcal{U}}_m(f)=\ensuremath{\mathcal{U}}(f,A_m(f)), \quad f\in\ensuremath{\mathbb{X}}, \] where $A=A_m(f)\subseteq\ensuremath{\mathbb{N}}$ is a \emph{greedy set} of $f$ of cardinality $m$, i.e., $|\ensuremath{\bm{x}}_{n}^{\ast}(f)|\ge | \ensuremath{\bm{x}}_{k}^{\ast}(f)|$ whenever $n\in A$ and $k\notin A$. The set $A$ depends on $f$ and $m$, and may not be unique; if this happens we take any such set. We put \begin{equation*} \Lambda_u=\Lambda_u[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sup\{ \Vert \ensuremath{\mathcal{U}}(f,B)\Vert \colon B \;\text{greedy set of}\; f, \, \Vert f\Vert \le 1\}. \end{equation*} If the quasi-norm is continuous, applying a perturbation technique yields \[ \Lambda_u=\sup_m \Vert \ensuremath{\mathcal{U}}_m\Vert. \] Thus, the basis $\ensuremath{\mathcal{X}}$ is said to be \emph{truncation quasi-greedy} if $(\ensuremath{\mathcal{U}}_m)_{m=1}^\infty$ is a uniformly bounded family of (nonlinear) operators or, equivalently, if $\Lambda_u<\infty$. In this case we will refer to $\Lambda_u$ as the \emph{truncation quasi-greedy constant} of the basis. Quasi-greedy bases are truncation quasi-greedy (see \cite{DKKT2003}*{Lemma 2.2} and \cite{AABW2021}*{Theorem 4.13}), but the converse does not hold in general. The first case in point appeared in the proof of \cite{BBG2017}*{Proposition 5.6}, where the authors constructed a basis that dominates the unit vector system of $\ell_{1,\infty}$, hence is truncation quasi-greedy by \cite{AABW2021}*{Proposition 9.4}, but is not quasi-greedy. In spite of that, truncation quasi-greedy bases still enjoy most of the nice unconditionality-like properties of quasi-greedy bases.
For instance, they are quasi-greedy for large coefficients (QGLC for short), suppression unconditional for constant coefficients (SUCC for short), and lattice partially unconditional (LPU for short). See \cite{AABW2021}*{Sections 3 and 4} for the precise definitions and the proofs of these relations. In turn, if $\ensuremath{\mathcal{X}}$ is bidemocratic then both $\ensuremath{\mathcal{X}}$ and its dual basis $\ensuremath{\mathcal{X}}^{\ast}$ are truncation quasi-greedy (\cite{AABW2021}*{Proposition 5.7}). In this section we study the converse implication, i.e., we want to know which additional conditions make a truncation quasi-greedy basis bidemocratic. A good starting point is the following result, which uses the upper regularity property (URP for short) and which is valid only for Banach spaces. Following \cite{DKKT2003} we shall say that a basis has the URP if there is an integer $b\ge 3$ so that its fundamental function $\varphi$ satisfies \begin{equation}\label{URPdef} 2\varphi(b m)\le {b} \varphi(m),\quad m\in\ensuremath{\mathbb{N}}. \end{equation} \begin{theorem}[see \cite{AABW2021}*{Lemma 9.8 and Proposition 10.17(iii)}] Let $\ensuremath{\mathcal{X}}$ be a basis of a Banach space $\ensuremath{\mathbb{X}}$. Suppose that $\ensuremath{\mathcal{X}}$ is democratic, truncation quasi-greedy, and has the URP. Then $\ensuremath{\mathcal{X}}$ is bidemocratic (and so $\ensuremath{\mathcal{X}}^{\ast}$ is truncation quasi-greedy too). \end{theorem} Can we do any better? Dilworth et al.\ characterized bidemocratic bases as those quasi-greedy bases which fulfill an additional condition, weaker than democracy, which they named conservative (\cite{DKKT2003}*{Theorem 5.4}). Recall that a basis is said to be \emph{conservative} if there is a constant $C$ such that $\Vert \ensuremath{\mathbbm{1}}_{A} \Vert \le C \Vert \ensuremath{\mathbbm{1}}_{B} \Vert$ whenever $|A|\le |B|$ and $\max (A) \le \min (B)$.
Our objection to this concept is that it is not preserved under rearrangements of the basis. Thus, since the greedy algorithm is ``reordering invariant'' (i.e., if $\pi$ is a permutation of $\ensuremath{\mathbb{N}}$, the greedy algorithm with respect to the bases $(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $(\ensuremath{\bm{x}}_{\pi(n)})_{n=1}^\infty$ is the same), when working with conservative bases we bring an extraneous element into the theory. This is why we establish our characterization of bidemocratic bases below in terms of a new, reordering-invariant class of bases which is more general than the class of conservative bases and which we define next. \begin{definition} We say that a basis is \emph{partially democratic} if there is a constant $C$ such that for each finite $D\subseteq\ensuremath{\mathbb{N}}$ there is a finite set $E$ with $D\subseteq E\subseteq\ensuremath{\mathbb{N}}$ such that $\Vert \ensuremath{\mathbbm{1}}_A\Vert \le C \Vert \ensuremath{\mathbbm{1}}_B\Vert$ whenever $A\subseteq D$ and $B\subseteq \ensuremath{\mathbb{N}}\setminus E$ satisfy $|A|\le |B|$. \end{definition} The following lemma is well-known. \begin{lemma}[See \cite{AABW2021}*{Proposition 4.16} or \cite{AAW2021b}*{Lemma 5.2}]\label{lem:truncation quasi-greedyQU} Suppose that $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is a truncation quasi-greedy basis of a quasi-Banach space $\ensuremath{\mathbb{X}}$. Then there is a constant $C$, depending on the modulus of concavity of $\ensuremath{\mathbb{X}}$ and the truncation quasi-greedy constant of $\ensuremath{\mathcal{X}}$, such that \[ \left\Vert \sum_{n\in A} a_n\, \ensuremath{\bm{x}}_n\right\Vert \le C \Vert f\Vert \] for all $f\in \ensuremath{\mathbb{X}}$, all finite $A\subseteq\ensuremath{\mathbb{N}}$, and all scalars $(a_n)_{n\in A}$ such that $\max_{n\in A}|a_n|\le \min_{n\in A} |\ensuremath{\bm{x}}_n^{\ast}(f)|$.
\end{lemma} \begin{theorem}\label{thm:PDtruncation quasi-greedy} Let $\ensuremath{\mathcal{X}}$ be a basis of a Banach space $\ensuremath{\mathbb{X}}$. Suppose that both $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ are truncation quasi-greedy and partially democratic. Then $\ensuremath{\mathcal{X}}$ is bidemocratic. \end{theorem} \begin{proof} We will customize the proof of \cite{DKKT2003}*{Theorem 5.4} to suit our more general statement. By Lemma~\ref{lem:truncation quasi-greedyQU} there is a constant $\Lambda$ such that \begin{equation} \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} \Vert\le \Lambda \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} + f\Vert \label{anotherone} \end{equation} for every $A\subseteq \ensuremath{\mathbb{N}}$ finite, every $\varepsilon\in\ensuremath{\mathcal{E}}_A$, and every $f\in\ensuremath{\mathbb{X}}$ with $\supp(f)\cap A=\emptyset$. Applying the Hahn--Banach theorem to the equivalence class of $\ensuremath{\mathbbm{1}}_{\varepsilon,A}$ in the quotient space $\ensuremath{\mathbb{X}}/\overline{\spn}(\ensuremath{\bm{x}}_n\ensuremath{\mathtt{c}}lon n\ensuremath{\mathbf{n}}otin A)$ yields $f^*\in\spn(\ensuremath{\bm{x}}_n^* \ensuremath{\mathtt{c}}lon n\in A)$ with $\|f^{\ast}\|=1$ such that \[ {\|\ensuremath{\mathbbm{1}}_{\varepsilon, A}\|}\le {\Lambda} |f^{\ast}(\ensuremath{\mathbbm{1}}_{\varepsilon,A})|. \] Set $\ensuremath{\bm{\varphi_u}}=\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]$ and $\ensuremath{\bm{\varphi_u}}^{\ast}=\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]$. Let $\Delta_d$ and $\Delta_d^{\ast}$ be the partial democracy constants of $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ respectively, and let $\Lambda_u^{\ast}$ be the truncation quasi-greedy constant of $\ensuremath{\mathcal{X}}^{\ast}$. 
Given $m\in \ensuremath{\mathbb{N}}$, fix $0<\epsilon<1$ and choose sets $B_1, B_2$ and signs $\varepsilon\in \ensuremath{\mathcal{E}}_{B_1}$, $\varepsilon'\in \ensuremath{\mathcal{E}}_{B_2}$ so that $|B_1|\le m$, $|B_2|\le m$, \begin{align} \|\ensuremath{\mathbbm{1}}_{\varepsilon, B_1}\|\ge (1-\epsilon) \ensuremath{\bm{\varphi_u}}(m) \;\text{and}\; \|\ensuremath{\mathbbm{1}}_{\varepsilon', B_2}^{\ast}\|\ge (1-\epsilon)\ensuremath{\bm{\varphi_u}}^{\ast}(m). \label{two} \end{align} Use partial democracy to pick $D\subseteq\ensuremath{\mathbb{N}}$ disjoint from $B_1\cup B_2$ such that $|D|=2m$, $\Vert \ensuremath{\mathbbm{1}}_B\Vert \le \ensuremath{\mathbf{C}} \Vert \ensuremath{\mathbbm{1}}_A\Vert$, and $\Vert \ensuremath{\mathbbm{1}}_B^{\ast}\Vert \le \ensuremath{\mathbf{C}}^{\ast} \Vert \ensuremath{\mathbbm{1}}_A^{\ast}\Vert$ whenever $B\subseteq B_1\cup B_2$ and $A\subseteq D$ satisfy $|B|\le |A|$. It follows from \eqref{two} and partial democracy that, for every $A\subseteq D$ with $|A|\ge m$, \begin{align} (1-\epsilon)\ensuremath{\bm{\varphi_u}}(m)\le& \ensuremath{\mathbf{C}}\|\ensuremath{\mathbbm{1}}_{A}\|, \;\text{and}\; (1-\epsilon)\ensuremath{\bm{\varphi_u}}^{\ast}(m)\le \ensuremath{\mathbf{C}}^{\ast}\|\ensuremath{\mathbbm{1}}_{A}^{\ast}\|,\label{one3} \end{align} where $\ensuremath{\mathbf{C}}= 2\lambda \Delta_d$ and $\ensuremath{\mathbf{C}}^{\ast}=2 \lambda \Delta_d^{\ast}$ with $\lambda=1$ if $\ensuremath{\mathbb{F}}=\ensuremath{\mathbb{R}}$ or $\lambda=2$ if $\ensuremath{\mathbb{F}}=\ensuremath{\mathbb{C}}$. For such subsets $A$ of $\ensuremath{\mathbb{N}}$ the set \[ \ensuremath{\mathcal{K}}_A=\left\{f^{\ast}\in \spn(\ensuremath{\bm{x}}_n^{\ast} \colon n\in A) \colon \|f^{\ast}\|\le 1, \; f^{\ast}(\ensuremath{\mathbbm{1}}_{A})\ge \frac{ (1-\epsilon)\ensuremath{\varphi_u}(m)}{\ensuremath{\mathbf{C}}\Lambda}\right\} \] is convex and nonempty.
Note that $\ensuremath{\mathcal{K}}_A$ increases with $A$, and that \begin{equation}\label{eq:TrivialEst} \sum_{n\in A} |f^{\ast}(\ensuremath{\bm{x}}_n)| = f^{\ast}\left( \ensuremath{\mathbbm{1}}_{\overline{\varepsilon(f^{\ast})},A}\right) \le \Vert f^{\ast}\Vert \, \left\Vert \ensuremath{\mathbbm{1}}_{\overline{\varepsilon(f^{\ast})},A}\right\Vert \le \ensuremath{\varphi_u}(|A|), \; f^{\ast}\in \ensuremath{\mathcal{K}}_A. \end{equation} Pick $f^{\ast}\in\ensuremath{\mathcal{K}}_D$ that minimizes $\sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|^2$. The geometric properties of minimizing vectors on convex subsets of Hilbert spaces yield \begin{equation}\label{eq:GeoH} \sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|^2\le \Re\left( \sum_{n\in D} f^{\ast}(\ensuremath{\bm{x}}_n) g^{\ast}(\ensuremath{\bm{x}}_n)\right), \quad g^{\ast}\in \ensuremath{\mathcal{K}}_D. \end{equation} Let $E$ be a greedy set of $f^{\ast}$ with $|E|=m$, and put $A=D\setminus E$. Using that $\ensuremath{\mathcal{X}}^{\ast}$ is truncation quasi-greedy we obtain \begin{equation}\label{eq:truncation quasi-greedyD} \min_{n\in E}|f^{\ast}(\ensuremath{\bm{x}}_n)|\, \|\ensuremath{\mathbbm{1}}_{E}^{\ast}\|\le \Lambda_u^{\ast}\|f^{\ast}\|\le \Lambda_u^{\ast}. \end{equation} Pick $g^{\ast}\in \ensuremath{\mathcal{K}}_A$. By \eqref{eq:GeoH}, \eqref{eq:TrivialEst}, \eqref{eq:truncation quasi-greedyD} and \eqref{one3}, \begin{align*} \sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|^2&\le \sum_{n\in A}|f^{\ast}(\ensuremath{\bm{x}}_n)||g^{\ast}(\ensuremath{\bm{x}}_n)|\\ &\le \min_{n\in E}|f^{\ast}(\ensuremath{\bm{x}}_n)| \sum_{n\in A} |g^{\ast}(\ensuremath{\bm{x}}_n)|\\ &\le\frac{\Lambda_u^{\ast}}{\Vert \ensuremath{\mathbbm{1}}_E^{\ast}\Vert} \ensuremath{\bm{\varphi_u}}(m) \\ &\le\frac{\Lambda_u^{\ast}\ensuremath{\mathbf{C}}^{\ast}}{(1-\epsilon)\ensuremath{\bm{\varphi_u}}^{\ast}(m)}\ensuremath{\varphi_u}(m). 
\end{align*} Hence, by the Cauchy--Bunyakovsky--Schwarz inequality, \begin{align*} (1-\epsilon)^{2}(\ensuremath{\varphi_u}(m))^2&\le \ensuremath{\mathbf{C}}^2\Lambda^2 | f^{\ast}(\ensuremath{\mathbbm{1}}_D)|^2\\ &\le \ensuremath{\mathbf{C}}^2\Lambda^2\left(\sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|\right)^2\\ &\le 2 \ensuremath{\mathbf{C}}^2\Lambda^2 m \sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|^2\\ & \le 2m \frac{\ensuremath{\mathbf{C}}^2\ensuremath{\mathbf{C}}^{\ast} \Lambda^2\Lambda_u^{\ast}}{(1-\epsilon)} \frac{\ensuremath{\bm{\varphi_u}}(m)}{\ensuremath{\bm{\varphi_u}}^{\ast}(m)}. \end{align*} Since $\epsilon$ is arbitrary, we obtain \[ \ensuremath{\bm{\varphi_u}}(m)\ensuremath{\bm{\varphi_u}}^{\ast}(m)\le 2 \ensuremath{\mathbf{C}}^2\ensuremath{\mathbf{C}}^{\ast}\Lambda^2\Lambda_u^{\ast}m, \] and so the basis is bidemocratic. \end{proof} \begin{corollary} Let $\ensuremath{\mathcal{X}}$ be a basis of a Banach space $\ensuremath{\mathbb{X}}$. Suppose that both $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ are truncation quasi-greedy and conservative. Then $\ensuremath{\mathcal{X}}$ is bidemocratic. \end{corollary} \begin{proof} It follows readily from Theorem~\ref{thm:PDtruncation quasi-greedy} since conservative bases are partially democratic. \end{proof} \begin{remark} Note that Theorem~\ref{thm:PDtruncation quasi-greedy} makes sense only for Banach spaces, i.e., it cannot be extended to nonlocally convex quasi-Banach spaces. Indeed, for $0<p<1$ the unit vector system of $\ell_p$ is a democratic unconditional basis whose dual basis is the unit vector system of $c_0$, which is also democratic; but the unit vector system of $\ell_p$ is not bidemocratic: its fundamental function grows as $m^{1/p}$ while that of its dual basis is bounded, so the product of both fundamental functions is of order $m^{1/p}$, which is not $O(m)$ because $1/p>1$. \end{remark} \section{Existence of bidemocratic non-quasi-greedy bases}\label{sect:BDNonQG}\noindent This section is geared towards proving the existence of bidemocratic bases which are not quasi-greedy.
To that end, let us first set the minimum requirements on terminology we need for this section. Suppose $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_{n})_{n=1}^{\infty}$ is a democratic basis in a quasi-Banach space $\ensuremath{\mathbb{X}}$. We shall say that $\ensuremath{\mathcal{X}}$ has the \emph{lower regularity property} (LRP for short) if there is an integer $b\ge 2$ such that \begin{equation}\label{LRPdef} 2 \varphi(m) \le \varphi(bm), \quad m\in\ensuremath{\mathbb{N}}. \end{equation} In a sense, the LRP is the dual property of the URP. Abusing the language, we will say that a sequence has the URP (respectively, LRP) if its terms satisfy the condition \eqref{URPdef} (respectively, \eqref{LRPdef}). Note that $(\varphi(m))_{m=1}^\infty$ has the LRP if and only if $(m/\varphi(m))_{m=1}^\infty$ has the URP. If $(\varphi(m))_{m=1}^\infty$ has the LRP, then there are $a>0$ and $C\ge 1$ such that \begin{equation}\label{eq:LRP} \frac{m^a}{n^a}\le C \frac{\varphi(m)}{\varphi(n)}, \quad n\le m. \end{equation} In the case when $\varphi$ is non-decreasing and the sequence $(\varphi(m)/m)_{m=1}^\infty$ is non-increasing, $\varphi$ has the LRP if and only if the weight $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ defined by $w_n=\varphi(n)/n$ is a \emph{regular} weight, i.e., it satisfies the Dini condition \[ \sup_{n} \frac{1}{n w_n} \sum_{k=1}^n w_k <\infty \] (see \cite{AABW2021}*{Lemma 9.8}), in which case \begin{equation}\label{eq:LRPbis} \sum_{n=1}^m \frac{\varphi(n)}{n} \approx \varphi(m), \quad m\in\ensuremath{\mathbb{N}}. \end{equation} For instance, the power sequence $(m^{1/p})_{m=1}^{\infty}$ has the URP for $1<p<\infty$. In other words, the weight $\ensuremath{\bm{w}}= (n^{-a})_{n=1}^{\infty}$ is regular for $0<a<1$. We will need the following elementary lemma about the \emph{harmonic numbers} \[ H_m=\sum_{n=1}^m \frac{1}{n}, \quad m\in\ensuremath{\mathbb{N}}\cup\{0\}.
\] \begin{lemma}\label{lem:JarDif} For each $0<a<1$ there exists a constant $C(a)$ such that \begin{equation*} S(a,r,t):=\sum_{k=r+1}^t k^{-a}(k-r)^{a-1}\le C(a) (H_t-H_r), \quad t\ensuremath{\bm{g}}e 2r. \end{equation*} \end{lemma} \begin{proof} The inequality is trivial for $r=0$. So we assume that $r\ensuremath{\bm{g}}e 1$. If we define $f\ensuremath{\mathtt{c}}lon [1,\infty) \to [0,\infty)$ by $ f(u) = u^{-a} (u-1)^{a-1}, $ we have \[ k^{-a}(k-r)^{a-1} \le x^{-a} (x-r)^{a-1}= \frac{1}{r} f\left( \frac{x}{r}\right), \quad k\in \ensuremath{\mathbb{N}}, \; x\in [k-1,k]. \] Hence, \[ S(a,r,t)\le \int_{r}^t f\left( \frac{x}{r}\right) \frac{dx}{r}=\int_1^{t/r} f(u) \, du. \] Since $f$ is integrable on $[1,2]$ and $f(u) \lesssim 1/u$ for $u\in[2,\infty)$, there is a constant $C_1$ such that $S(a,r,t) \le C_1 \log(t/r)$. Taking into account that $H_t-H_r\ensuremath{\bm{g}}e (t-r)/t\ensuremath{\bm{g}}e 1/2$, and that there is a constant $C_2$ such that $ \log m\le H_m\le \log m+C_2$ for all $m\in\ensuremath{\mathbb{N}}$ we are done. \end{proof} For further reference, we record an easy lemma that we will use several times. Note that it applies in particular to the harmonic series. \begin{lemma}\label{lem:Jar} Let $\sum_{n=1}^\infty c_n$ be a divergent series of nonnegative terms. Suppose that $\lim_n c_n=0$. Then, for every $m\in\ensuremath{\mathbb{N}}\cup\{0\}$ and $0\le a<b$, there are $m\le r<s$ such that $a\le \sum_{n=r+1}^s c_n <b$. \end{lemma} We will also use the following well-known lemma. Note that it could be used to prove the divergence of the harmonic series. \begin{lemma}[See \cite{Rudin1976}*{Exercise 11, p.\ 84}]\label{lem:AlsoDiverges} Let $\sum_{n=1}^\infty c_n$ be a divergent series of nonnegative terms. Then the (smaller) series \[ \sum_{n=1}^\infty \frac{c_n}{\sum_{k=1}^n c_k} \] also diverges. \end{lemma} Lorentz sequence spaces $d_{1,q}(\ensuremath{\bm{w}})$ play a relevant role in the qualitative study of greedy-like bases. 
Let $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ be a weight (i.e., a sequence of nonnegative numbers with $w_1>0$) whose primitive weight $(s_m)_{m=1}^\infty$, defined by $s_m=\sum_{n=1}^m w_n$, is unbounded and \emph{doubling}, i.e., \[ \sup_m \frac{s_{2m}}{s_m} <\infty. \] Given $0<q\le \infty$, we will denote by $d_{1,q}(\ensuremath{\bm{w}})$ the quasi-Banach space of all $f\in c_0$ whose non-increasing rearrangement $(a_n)_{n=1}^\infty$ satisfies \[ \Vert f\Vert_{d_{1,q}(\ensuremath{\bm{w}})}=\left( \sum_{n=1}^\infty a_n^q s_n^{q-1}w_n\right)^{1/q}<\infty, \] with the usual modification if $q=\infty$. For power weights this definition yields the classical Lorentz sequence spaces $\ell_{p,q}$. To be precise, if $\ensuremath{\bm{w}}=(n^{1/p-1})_{n=1}^\infty$ for some $0<p<\infty$, then, up to an equivalent quasi-norm, $d_{1,q}(\ensuremath{\bm{w}}) =\ell_{p,q}$, and if $(a_n)_{n=1}^\infty$ is the non-increasing rearrangement of $f\in c_0$, \[ \Vert f \Vert_{\ell_{p,q}}=\left( \sum_{n=1}^\infty a_n^q n^{q/p-1}\right)^{1/q}. \] For a quick introduction to Lorentz sequence spaces, we refer the reader to \cite{AABW2021}*{Section 9.2}. Here we gather the properties of these spaces that are most pertinent for our purposes. Although it is customary to designate them after the weight $\ensuremath{\bm{w}}$, these spaces actually depend on the primitive weight $(s_m)_{m=1}^\infty$ rather than on $\ensuremath{\bm{w}}$ itself. That is, given weights $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ and $\ensuremath{\bm{w}}'=(w_n')_{n=1}^\infty$ with primitive weights $(s_m)_{m=1}^\infty$ and $(s_m')_{m=1}^\infty$, we have $d_{1,q}(\ensuremath{\bm{w}})=d_{1,q}(\ensuremath{\bm{w}}')$ (up to an equivalent quasi-norm) if and only if $s_m\approx s_m'$ for $m\in\ensuremath{\mathbb{N}}$. The fundamental function of the unit vector system of $d_{1,q}(\ensuremath{\bm{w}})$ is equivalent to $(s_m)_{m=1}^\infty$; thus, essentially, it does not depend on $q$.
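For the reader's convenience, here is a quick sketch, for $0<q<\infty$ and using only the definitions above, of why the fundamental function of the unit vector system of $d_{1,q}(\ensuremath{\bm{w}})$ is equivalent to $(s_m)_{m=1}^\infty$. If $A\subseteq\ensuremath{\mathbb{N}}$ has cardinality $m$ and $\varepsilon\in\ensuremath{\mathcal{E}}_A$, the non-increasing rearrangement of $\ensuremath{\mathbbm{1}}_{\varepsilon,A}$ takes the value $1$ exactly $m$ times, so \[ \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}\Vert_{d_{1,q}(\ensuremath{\bm{w}})}^q=\sum_{n=1}^m s_n^{q-1}w_n. \] If $q\ge 1$, combining the estimate $s_n^q-s_{n-1}^q\le q\, s_n^{q-1} w_n$ (with the convention $s_0=0$) with the trivial bound $s_n^{q-1}\le s_m^{q-1}$ for $n\le m$ gives \[ \frac{s_m^q}{q}\le \sum_{n=1}^m s_n^{q-1}w_n\le s_m^{q-1}\sum_{n=1}^m w_n=s_m^q, \] while for $0<q<1$ the reverse estimates yield $s_m^q\le \sum_{n=1}^m s_n^{q-1}w_n\le s_m^q/q$. In either case, $\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}\Vert_{d_{1,q}(\ensuremath{\bm{w}})}\approx s_m$.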
We have \[ d_{1,p}(\ensuremath{\bm{w}}) \subseteq d_{1,q}(\ensuremath{\bm{w}}), \quad 0<p<q\le \infty. \] To show that this inclusion is actually strict we can, for instance, use the sequence \[ H_m[\ensuremath{\bm{w}}]=\sum_{n=1}^m \frac{w_n}{s_n}, \quad m\in\ensuremath{\mathbb{N}}, \] and notice that $\lim_m H_m[\ensuremath{\bm{w}}]=\infty$ by Lemma~\ref{lem:AlsoDiverges}, and \begin{equation}\label{eq:NormLorentz} \left\Vert \sum_{n=1}^m \frac{1}{s_n}\, \ensuremath{\bm{e}}_n\right\Vert_{d_{1,q}(\ensuremath{\bm{w}})}=(H_m[\ensuremath{\bm{w}}])^{1/q}, \quad m\in\ensuremath{\mathbb{N}},\; 0<q<\infty. \end{equation} \begin{lemma}\label{lem:LorentzLRP} Let $0<q\le \infty$, and let $(s_m)_{m=1}^\infty$ be the primitive weight of a weight $\ensuremath{\bm{w}}$. Suppose that $(s_m)_{m=1}^\infty$ has the LRP and that the weight $\ensuremath{\bm{w}}'=(w_n')_{n=1}^\infty$ given by $w_n'=s_n/n$ is non-increasing. Then: \begin{enumerate}[label=(\roman*), leftmargin=*, widest=iii] \item\label{LorentzLRP:1} $d_{1,q}(\ensuremath{\bm{w}})=d_{1,q}(\ensuremath{\bm{w}}')$; \item\label{LorentzLRP:2} for $0\le r \le t<\infty$, $H_t[\ensuremath{\bm{w}}']-H_r[\ensuremath{\bm{w}}']\approx H_t-H_r$ and \item\label{LorentzLRP:3} $ A(r,t):=\left\Vert \sum_{n=r+1}^t s_n^{-1}\, \ensuremath{\bm{e}}_n \right\Vert _{d_{1,q}(\ensuremath{\bm{w}})} \lesssim \max\{1, (H_t-H_r)^{1/q}\}. $ \end{enumerate} \end{lemma} \begin{proof} The first part follows from \eqref{eq:LRPbis}. Let $(s_m')_{m=1}^\infty$ be the primitive weight of $\ensuremath{\bm{w}}'$. The equivalence \eqref{eq:LRPbis} also yields \[ \frac{w_n'}{s_n'}\approx \frac{1}{n}, \quad n\in\ensuremath{\mathbb{N}}. \] Hence, \ref{LorentzLRP:2} holds. Pick $0<a<1/q$ such that \eqref{eq:LRP} holds. On one hand, if $t\le 2r+1$, \[ A(r,t) \le \frac{1}{s_{r+1}} \left\Vert \sum_{n=r+1}^t \ensuremath{\bm{e}}_n\right\Vert_{d_{1,q}(\ensuremath{\bm{w}})}\lesssim \frac{s_{t-r}}{s_{r+1}}\le 1. 
\] On the other hand, if $t\ge 2r$, using again \ref{LorentzLRP:1} we obtain \[ A(r,t) \approx \left(\sum_{k=r+1}^t \frac{s_{k-r}^q}{s_k^q(k-r)} \right)^{1/q} \lesssim \left(\sum_{k=r+1}^t \frac{(k-r)^{aq}}{k^{aq}(k-r)} \right)^{1/q}. \] Hence, applying Lemma~\ref{lem:JarDif} yields the desired inequality. \end{proof} To contextualize the assumptions in Theorem~\ref{theoremLp} below we must take into account that any basis $\ensuremath{\mathcal{X}}$ of an $r$-Banach space $\ensuremath{\mathbb{X}}$, $0<r\le 1$, is dominated by the unit vector basis of the Lorentz sequence space $d_{1,r}(\ensuremath{\bm{w}})$, where the primitive weight of $\ensuremath{\bm{w}}$ is $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]$ (see \cite{AABW2021}*{Theorem 9.12}). Although it is not central in our study, in the proof of Theorem~\ref{theoremLp} we will keep track of the \emph{quasi-greedy parameters} of the basis, \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sup\{ \Vert S_A[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](f) \Vert \colon A \;\text{greedy set of}\; f\in B_\ensuremath{\mathbb{X}}, \, |A|= m\}, \] where for a finite subset $A\subseteq \ensuremath{\mathbb{N}}$, we let $S_A=S_A[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\colon \ensuremath{\mathbb{X}} \to\ensuremath{\mathbb{X}}$ denote the coordinate projection on $A$, i.e., \[ S_A(f)=\sum_{n\in A} \ensuremath{\bm{x}}_n^{\ast}(f)\, \ensuremath{\bm{x}}_n,\quad f\in \ensuremath{\mathbb{X}}. \] The quasi-greedy parameters are bounded above by the \emph{unconditionality parameters} \[ \ensuremath{\bm{k}}_m=\ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}] :=\sup_{|A|= m} \Vert S_A\Vert, \quad m\in\ensuremath{\mathbb{N}}, \] which are used to quantify how far the basis is from being unconditional.
Thus, the following result exhibits that bidemocratic bases are close to being quasi-greedy. \begin{theorem}\label{thm:BidCond} Let $\ensuremath{\mathcal{X}}$ be a bidemocratic basis of a $p$-Banach space $\ensuremath{\mathbb{X}}$, $0<p\le 1$. Then, \[ \ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\lesssim (\log m)^{1/p}, \quad m\ensuremath{\bm{g}}e 2. \] \end{theorem} \begin{proof} Just combine \cite{AABW2021}*{Proposition 5.7} with \cite{AAW2021b}*{Theorem 5.1}. \end{proof} Since $(\ensuremath{\overline{\bm{g}}}_m)_{m=1}^\infty$ need not be non-decreasing (see \cite{Oikhberg2018}*{Proposition 3.1}), we also set \[ \ensuremath{\bm{g}}_m=\ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sup_{k\le m} \ensuremath{\overline{\bm{g}}}_k. \] Of course, $\ensuremath{\mathcal{X}}$ is quasi-greedy if and only if $\sup_m \ensuremath{\bm{g}}_m=\sup_m \ensuremath{\overline{\bm{g}}}_m<\infty$, and $\ensuremath{\mathcal{X}}$ is unconditional if and only if $\sup_m \ensuremath{\bm{k}}_m<\infty$. We will use the fact that quasi-greedy bases are in particular total bases (see \cite{AABW2021}*{Corollary 4.5}) to prove the advertised existence of bidemocratic non-quasi-greedy bases. \begin{theorem}\label{theoremLp} Let $1<q<\infty$, and let $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ be a weight whose primitive weight $(s_m)_{m=1}^\infty$ is unbounded. Let $\ensuremath{\mathbb{X}}$ be a quasi-Banach space with a basis $\ensuremath{\mathcal{X}}$. Suppose that $\ensuremath{\mathcal{X}}$ is bidemocratic with $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)\approx s_m$ for $m\in\ensuremath{\mathbb{N}}$, and that $\ensuremath{\mathcal{X}}$ has a subsequence dominated by the unit vector basis of $d_{1,q}(\ensuremath{\bm{w}})$. 
Then $\ensuremath{\mathbb{X}}$ has a non-total bidemocratic basis $\ensuremath{\mathcal{Y}}$ with \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}](m)\approx s_m, \quad m\in \ensuremath{\mathbb{N}}. \] Moreover, if $(s_m)_{m=1}^\infty$ has the LRP and $(s_m/m)_{m=1}^\infty$ is non-increasing, \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}] \ensuremath{\bm{g}}trsim \left(\log m\right)^{{1/q'}}, \quad m\ensuremath{\bm{g}}e 2, \] where $1/q+1/q^{\prime}=1$. \end{theorem} \begin{proof} Choose a subsequence $\left(\ensuremath{\bm{x}}_{\eta(k)}\right)_{k=1}^{\infty}$ of $\ensuremath{\mathcal{X}}=\left(\ensuremath{\bm{x}}_n\right)_{n=1}^{\infty}$ so that $\eta(1)\ensuremath{\bm{g}}e 2$ and the linear operator $T\ensuremath{\mathtt{c}}lon d_{1,q}(\ensuremath{\bm{w}})\rightarrow \ensuremath{\mathbb{X}}$ given by \[ T\left(\ensuremath{\bm{e}}_k\right)= \ensuremath{\bm{x}}_{\eta(k)}, \quad k\in\ensuremath{\mathbb{N}}, \] is bounded. For each $n\in\ensuremath{\mathbb{N}}$, $n\ensuremath{\bm{g}}e 2$, define $\ensuremath{\bm{y}}_n=\ensuremath{\bm{x}}_n+\ensuremath{\bm{z}}_n$, where \[ \ensuremath{\bm{z}}_n =\begin{cases} w_k \, \ensuremath{\bm{x}}_1& \text{ if } n=\eta(k),\\ 0 & \text{otherwise}. \end{cases} \] It is clear that $(\ensuremath{\bm{y}}_n,\ensuremath{\bm{x}}_n^{\ast})_{n=2}^\infty$ is a biorthogonal system. Thus, in order to prove that $\ensuremath{\mathcal{Y}}:=\left(\ensuremath{\bm{y}}_n\right)_{n=2}^{\infty}$ is a basis of $\ensuremath{\mathbb{X}}$ with dual basis $\ensuremath{\mathcal{Y}}^*=(\ensuremath{\bm{x}}_n^*)_{n=2}^\infty$ it suffices to prove that $\ensuremath{\bm{x}}_1$ belongs to the closed linear span of $\ensuremath{\mathcal{Y}}$. 
For each $m\in\ensuremath{\mathbb{N}}$ we have \begin{align*} f_m&:=\frac{1}{H_m[\ensuremath{\bm{w}}]} \sum_{k=1}^m \frac{1}{s_k} \ensuremath{\bm{y}}_{\eta(k)}\\ &=\ensuremath{\bm{x}}_1+\frac{1}{H_m[\ensuremath{\bm{w}}]}\sum_{k=1}^m\frac{1}{s_k} \ensuremath{\bm{x}}_{\eta(k)}\\ &=\ensuremath{\bm{x}}_1+\frac{1}{H_m[\ensuremath{\bm{w}}]} T(g_m), \end{align*} where $g_m=\sum_{k=1}^m s_k^{-1} \ensuremath{\bm{e}}_k$. By \eqref{eq:NormLorentz}, \[ \Vert f_m-\ensuremath{\bm{x}}_1\Vert\le \Vert T \Vert(H_m[\ensuremath{\bm{w}}])^{-1/q'}, \quad m\in\ensuremath{\mathbb{N}}. \] Since $\lim_m H_m[\ensuremath{\bm{w}}]=\infty$ by Lemma~\ref{lem:AlsoDiverges}, we obtain $\lim_m f_m=\ensuremath{\bm{x}}_1$. Since $\ensuremath{\bm{x}}_n^{\ast}\left(\ensuremath{\bm{x}}_1\right)=0$ for all $n\ge 2$, $\ensuremath{\mathcal{Y}}$ is not a total basis. Put $\ensuremath{\mathcal{Z}}=(\ensuremath{\bm{z}}_n)_{n=2}^\infty$. Then, \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Z}},\ensuremath{\mathbb{X}}](m)=\Vert \ensuremath{\bm{x}}_1\Vert \sum_{k=1}^m w_k \approx s_m, \quad m\in\ensuremath{\mathbb{N}}. \] Hence, \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}](m)\lesssim \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)+ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Z}},\ensuremath{\mathbb{X}}](m)\lesssim s_m, \quad m\in\ensuremath{\mathbb{N}}, \] so that, since $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}}^*,\ensuremath{\mathbb{X}}^{\ast}](m)\lesssim m/s_m$ for $m\in\ensuremath{\mathbb{N}}$, $\ensuremath{\mathcal{Y}}$ is bidemocratic. In the case when $(s_m)_{m=1}^\infty$ has the LRP and $(s_m/m)_{m=1}^\infty$ is non-increasing, by parts \ref{LorentzLRP:1} and~\ref{LorentzLRP:2} of Lemma~\ref{lem:LorentzLRP} we can assume without loss of generality that \begin{equation}\label{eq:HarmonicEquivalence} H_t[\ensuremath{\bm{w}}]-H_r[\ensuremath{\bm{w}}]\approx H_t-H_r, \quad 0\le r \le t.
\end{equation} To estimate the quasi-greedy parameters we appeal to Lemma~\ref{lem:Jar} to pick for each $m\ensuremath{\bm{g}}e 2$, natural numbers $r=r(m)$ and $s=s(m)$ with $m\le r \le s$, and \begin{equation}\label{eq:HarmonicEstimates} H_m[\ensuremath{\bm{w}}] \le H_s[\ensuremath{\bm{w}}]-H_r[\ensuremath{\bm{w}}]\le (H_m[\ensuremath{\bm{w}}])^{1/q}+H_m[\ensuremath{\bm{w}}]. \end{equation} Set $h_m=\sum_{k=r+1}^s s_k^{-1} \ensuremath{\bm{e}}_k$ and \begin{align*} u_m&=\frac{1}{H_m[\ensuremath{\bm{w}}]} \left( \sum_{k=1}^{m} \frac{1}{s_k} \ensuremath{\bm{y}}_{\eta(k)} -\sum_{k=r+1}^{s} \frac{1}{s_k} \ensuremath{\bm{y}}_{\eta(k)}\right)\\ &=\frac{1}{H_m[\ensuremath{\bm{w}}]} \left( T(g_m)-T(h_m) + (H_m[\ensuremath{\bm{w}}]-H_s[\ensuremath{\bm{w}}]+H_r[\ensuremath{\bm{w}}]) \ensuremath{\bm{x}}_1 \right). \end{align*} By Lemma~\ref{lem:LorentzLRP}~\ref{LorentzLRP:3}, \eqref{eq:NormLorentz}, \eqref{eq:HarmonicEquivalence} and \eqref{eq:HarmonicEstimates}, \[ \max\{\Vert g_m\Vert, \Vert h_m\Vert, |H_s[\ensuremath{\bm{w}}]-H_r[\ensuremath{\bm{w}}]-H_m[\ensuremath{\bm{w}}]|\}\lesssim H_m^{1/q}, \quad m\in\ensuremath{\mathbb{N}}. \] Hence, $ \Vert u_m\Vert \lesssim H_m^{-1/q'} $ for $m\in\ensuremath{\mathbb{N}}$. Since $A_m:=\{\eta(1),\dots,\eta(m)\}$ is a greedy set of $u_m$ with respect to $\ensuremath{\mathcal{Y}}$, and \[ \Vert S_{A_m}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}](u_m)\Vert=\Vert f_m\Vert \approx 1,\quad m\in\ensuremath{\mathbb{N}}, \] we are done. \end{proof} \begin{corollary}\label{corbidem} Let $\ensuremath{\mathbb{X}}$ be a Banach space with a Schauder basis. Suppose that $\ensuremath{\mathbb{X}}$ has a complemented subspace isomorphic to $\ell_{p,q}$, where $p$, $q\in(1,\infty)$. 
Then $\ensuremath{\mathbb{X}}$ has a non-total bidemocratic basis $\ensuremath{\mathcal{Y}}$ with \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}](m)\approx m^{{1/p}}, \quad m\in \ensuremath{\mathbb{N}}, \] and \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}] \gtrsim \left(\log m\right)^{{1/q'}}, \quad m\ge 2. \] \end{corollary} \begin{proof} An application of the Dilworth--Kalton--Kutzarova method, or DKK-method for short (see \cites{AADK2019b,DKK2003}), yields a bidemocratic Schauder basis of $\ensuremath{\mathbb{X}}$ with fundamental function equivalent to $(m^{1/p})_{m=1}^\infty$ (see \cite{AADK2019b}). The direct sum of this basis with the unit vector system of $\ell_{p,q}$ is a bidemocratic Schauder basis of $ \ensuremath{\mathbb{X}}\oplus\ell_{p,q}\approx\ensuremath{\mathbb{X}}$ that possesses a subsequence equivalent to the unit vector basis of $\ell_{p,q}$. Applying Theorem~\ref{theoremLp} we are done. \end{proof} Note that Corollary~\ref{corbidem} can be applied with $1<p=q<\infty$, so that $\ell_{p,q}=\ell_p$. Hence, as a consequence, we obtain the result announced in the Introduction. \begin{theorem}\label{thm:BDNotTotal} Let $1<p<\infty$. Then $\ell_p$ has a bidemocratic non-total (hence, non-quasi-greedy) basis. \end{theorem} Theorem~\ref{thm:BDNotTotal} leads us naturally to the question of the existence of bidemocratic non-total bases in $\ell_1$ and $c_0$. We make a detour from our route to answer this question in the negative for both spaces. For that we will need to apply the arguments that follow, keeping in mind that $\ell_1=(c_0)^{\ast}$ is a GT-space (see \cite{LinPel1968}); recall that a Banach space is a \emph{GT-space} if every bounded linear operator from it into a Hilbert space is absolutely summing.
\begin{proposition} Let $\ensuremath{\mathbb{X}}$ be a quasi-Banach space, and let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{x}}_n^{\ast})_{n=1}^\infty$ be sequences in $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{X}}^{\ast}$, respectively. Suppose that $(\ensuremath{\bm{x}}_n,\ensuremath{\bm{x}}_n^{\ast})_{n=1}^\infty$ is a biorthogonal system and that \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m) \, \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}^{\ast}](m)\le C m, \quad m\in\ensuremath{\mathbb{N}}, \] for some constant $C$. Then $\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} [\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}] \Vert\le C \Vert f \Vert$ whenever $A\subseteq \ensuremath{\mathbb{N}}$, $\varepsilon\in\ensuremath{\mathcal{E}}_A$, and $f\in\ensuremath{\mathbb{X}}$ are such that $|\ensuremath{\bm{x}}_n^{\ast}(f)|\ge 1$ on a set of cardinality at least $|A|$. \end{proposition} \begin{proof} In the case when $\ensuremath{\mathcal{X}}$ spans the whole space $\ensuremath{\mathbb{X}}$, this proposition says that any bidemocratic basis is truncation quasi-greedy. In fact, the proof of \cite{AABW2021}*{Proposition 5.7} gives this slightly more general result. \end{proof} \begin{theorem}\label{thm:AAW} Let $\ensuremath{\mathbb{X}}$ be a GT-space and let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $(\ensuremath{\bm{x}}_n^{\ast})_{n=1}^\infty$ be sequences in $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{X}}^{\ast}$, respectively.
Suppose that $(\ensuremath{\bm{x}}_n,\ensuremath{\bm{x}}_n^{\ast})_{n=1}^\infty$ is a biorthogonal system and that there is a constant $C$ such that $\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} [\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}] \Vert\le C \Vert f \Vert$ whenever $A\subseteq \ensuremath{\mathbb{N}}$ and $f\in\ensuremath{\mathbb{X}}$ satisfy $|\ensuremath{\bm{x}}_n^{\ast}(f)|\ensuremath{\bm{g}}e 1 \ensuremath{\bm{g}}e |\ensuremath{\bm{x}}_k^{\ast}(f)|$ for $(n,k)\in A\times (\ensuremath{\mathbb{N}}\setminus A)$, and $\varepsilon=(\varepsilon_n)_{n\in A}\in\ensuremath{\mathcal{E}}_A$ is defined by $\ensuremath{\bm{x}}_n^{\ast}(f)=|\ensuremath{\bm{x}}_n^{\ast}(f)|\, \varepsilon_n$. Then, $\ensuremath{\bm{\varphi_l}}(m) \ensuremath{\bm{g}}trsim m$ for $m\in\ensuremath{\mathbb{N}}$. \end{theorem} \begin{proof} In the case when $\ensuremath{\mathcal{X}}$ spans the whole space $\ensuremath{\mathbb{X}}$, this theorem says that any truncation quasi-greedy basis of a GT-space is democratic with fundamental function equivalent to $(m)_{m=1}^\infty$ (see \cite{AAW2021}*{Theorem 4.3}). As a matter of fact, the proof of \cite{AAW2021}*{Theorem 4.3} gives this slightly more general result. \end{proof} \begin{theorem}Let $\ensuremath{\mathcal{X}}$ be a bidemocratic basis of a Banach space $\ensuremath{\mathbb{X}}$. \begin{enumerate}[label=(\roman*),leftmargin=*,widest=ii] \item\label{GTspace}If $\ensuremath{\mathbb{X}}$ is a GT-space, then $\ensuremath{\mathcal{X}}$ is equivalent to the canonical basis of $\ell_1$. \item\label{predualGTspace}If $\ensuremath{\mathbb{X}}^{\ast}$ is a GT-space, then $\ensuremath{\mathcal{X}}$ is equivalent to the canonical basis of $c_0$. \end{enumerate} \end{theorem} \begin{proof} Suppose that $\ensuremath{\mathbb{X}}$ (resp.,\ $\ensuremath{\mathbb{X}}^{\ast}$) is a GT-space. 
By Theorem~\ref{thm:AAW} $\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]$ (resp.,\ $\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]$) is equivalent to $(m)_{m=1}^\infty$. Hence, $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]$ (resp.,\ $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]$) is bounded. This readily gives that $\ensuremath{\mathcal{X}}^{\ast}$ (resp.\ $\ensuremath{\mathcal{X}}$) is equivalent to the canonical basis of $c_0$. To conclude the proof of \ref{GTspace}, we infer that $\ensuremath{\mathcal{X}}^{**}$ is equivalent to the canonical basis $\ensuremath{\mathcal{B}}_{\ell_1}$ of $\ell_1$. Since $\ensuremath{\mathcal{B}}_{\ell_1}$ dominates $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}$ dominates $\ensuremath{\mathcal{X}}^{**}$ we are done. \end{proof} It is known that some results involving the TGA work for total bases but break down if we drop this assumption (see, e.g., \cite{BL2020}*{Theorem 4.2 and Example 4.5}). In view of this, another question springing from Theorem~\ref{thm:BDNotTotal} is whether working with total bases makes a difference, i.e., whether bidemocratic total bases are quasi-greedy. We solve this question in the negative by proving the following theorem. \begin{theorem}\label{thm:BDTotalNotQG} Let $1<p<\infty$. Then any infinite-dimensional subspace of $\ell_p$ has a further subspace with a bidemocratic non-quasi-greedy total basis. \end{theorem} Theorem~\ref{thm:BDTotalNotQG} will follow as a consequence of the following general result. \begin{theorem}\label{theoremLp2} Let $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ be a weight, and suppose that its primitive weight $(s_m)_{m=1}^\infty$ has the LRP and that $(s_m/m)_{m=1}^\infty$ is non-increasing. Let $\ensuremath{\mathbb{X}}$ be a Banach space with a total basis $\ensuremath{\mathcal{X}}$. 
Suppose that $\ensuremath{\mathcal{X}}$ is bidemocratic with $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)\approx s_m$ for $m\in\ensuremath{\mathbb{N}}$, and that $\ensuremath{\mathcal{X}}$ has a subsequence dominated by the unit vector basis of $d_{1,q}(\ensuremath{\bm{w}})$ for some $q>1$. Then $\ensuremath{\mathbb{X}}$ has a subspace $\ensuremath{\mathbb{Y}}$ with a basis $\ensuremath{\mathcal{Y}}$ satisfying the following properties: \begin{enumerate}[label=(\roman*),leftmargin=*,widest=iii] \item\label{bidemocraticlp2}$\ensuremath{\mathcal{Y}}$ is bidemocratic with $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}](m)\approx s_m$ for $m\in\ensuremath{\mathbb{N}}$. \item\label{markulp2} $\ensuremath{\mathcal{Y}}$ is total. \item\label{notqglps}$\ensuremath{\mathcal{Y}}$ is not quasi-greedy. \item\label{notslp2}$\ensuremath{\mathcal{Y}}$ is not Schauder in any order. \end{enumerate} \end{theorem} \begin{proof} Choose a subsequence $\left(\ensuremath{\bm{x}}_{\eta(j)}\right)_{j=1}^{\infty}$ of $\ensuremath{\mathcal{X}}=\left(\ensuremath{\bm{x}}_n\right)_{n=1}^{\infty}$ so that $\ensuremath{\mathbb{N}}\setminus\eta(\ensuremath{\mathbb{N}})$ is infinite and the linear operator $T\ensuremath{\mathtt{c}}lon d_{1,q}(\ensuremath{\bm{w}})\rightarrow \ensuremath{\mathbb{X}}$ given by \[ T\left(\ensuremath{\bm{e}}_j\right)= \ensuremath{\bm{x}}_{\eta(j)}, \quad j\in\ensuremath{\mathbb{N}}, \] is bounded. Let $\psi\ensuremath{\mathtt{c}}lon\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$ be the increasing map defined by $\psi(\ensuremath{\mathbb{N}})=\ensuremath{\mathbb{N}}\setminus\eta(\ensuremath{\mathbb{N}})$. Since the harmonic series diverges, we can recursively construct an increasing sequence $(t_k)_{k=0}^\infty$ of natural numbers with $t_0=0$ such that, if we put \[ \Lambda_k=H_{t_k} -H_{t_{k-1}}, \] then $\lim_k \Lambda_k=\infty$.
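For concreteness (this particular choice plays no role in what follows), one admissible choice is to take each $t_k$ minimal with the property that the corresponding block of the harmonic series has sum at least $k$, that is, \[ t_k=\min\{ t>t_{k-1} \colon H_t-H_{t_{k-1}}\ge k\}, \quad k\in\ensuremath{\mathbb{N}}, \] which is well defined because the harmonic series diverges, and gives $\Lambda_k\ge k$ for all $k\in\ensuremath{\mathbb{N}}$.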
For each $j\in\ensuremath{\mathbb{N}}$ define $\ensuremath{\bm{y}}_j=\ensuremath{\bm{x}}_{\eta(j)}+\ensuremath{\bm{z}}_j$, where \[ \ensuremath{\bm{z}}_j = \frac{s_j}{j} \ensuremath{\bm{x}}_{\psi(k)}, \quad k\in\ensuremath{\mathbb{N}},\; t_{k-1} <j \le t_k. \] It is clear that $(\ensuremath{\bm{y}}_j,\ensuremath{\bm{x}}_{\eta(j)}^{\ast})_{j=1}^\infty$ is a biorthogonal system. Thus, to see that $\ensuremath{\mathcal{Y}}:=\left(\ensuremath{\bm{y}}_j\right)_{j=1}^{\infty}$ satisfies \ref{bidemocraticlp2} it suffices to prove that the sequence $\ensuremath{\mathcal{Z}}=(\ensuremath{\bm{z}}_j)_{j=1}^\infty$ satisfies $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Z}},\ensuremath{\mathbb{X}}](m) \lesssim s_m$ for $m\in\ensuremath{\mathbb{N}}$. Set $C_1=\sup_n \Vert \ensuremath{\bm{x}}_n\Vert$. For every $A\subseteq\ensuremath{\mathbb{N}}$ with $|A|=m<\infty$ and $\varepsilon\in\ensuremath{\mathcal{E}}_A$ we have \[ \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}[\ensuremath{\mathcal{Z}},\ensuremath{\mathbb{X}}]\Vert \le C_1 \sum_{j\in A} \frac{s_j}{j} \le C_1\sum_{j=1}^m \frac{s_j}{j} \lesssim s_m, \] where the last step uses the LRP of the primitive weight $(s_m)_{m=1}^\infty$. Let us see that $\ensuremath{\mathcal{Y}}$ is a total basis of $\ensuremath{\mathbb{Y}}=[\ensuremath{\mathcal{Y}}]$. Set \[ \ensuremath{\bm{z}}_k^{\ast}=\ensuremath{\bm{x}}_{\psi(k)}^{\ast} -\sum_{j=1+t_{k-1}}^{t_k} \frac{s_j}{j} \, \ensuremath{\bm{x}}^{\ast}_{\eta(j)}, \quad k\in\ensuremath{\mathbb{N}}. \] We have $\ensuremath{\bm{z}}_k^{\ast}(\ensuremath{\bm{y}}_j)=0$ for all $j$, $k\in\ensuremath{\mathbb{N}}$. Therefore $\ensuremath{\bm{z}}_k^{\ast}(f)=0$ for all $f\in\ensuremath{\mathbb{Y}}$ and $k\in\ensuremath{\mathbb{N}}$. Pick $f\in\ensuremath{\mathbb{Y}}$ and suppose that $\ensuremath{\bm{x}}_{\eta(j)}^{\ast}(f)=0$ for all $j\in\ensuremath{\mathbb{N}}$. We infer that $\ensuremath{\bm{x}}_{\psi(k)}^{\ast}(f)=0$ for all $k\in\ensuremath{\mathbb{N}}$. Since $\ensuremath{\mathcal{X}}$ is a total basis, $f=0$.
To prove that $\ensuremath{\mathcal{Y}}$ is neither a quasi-greedy basis nor a Schauder basis in any ordering, we pick a permutation $\pi$ of $\ensuremath{\mathbb{N}}$. For each $k\in\ensuremath{\mathbb{N}}$, choose $A_k\subseteq D_k:=[1+t_{k-1},t_k]\cap \ensuremath{\mathbb{N}}$ minimal with the properties \[ l:=\max( \pi^{-1}(A_k)) <\min (\pi^{-1}(D_k\setminus A_k)) \;\text{and}\; \Gamma_k:=\sum_{j\in A_k} \frac{1}{j} > \frac{\Lambda_k}{2}. \] By construction, \[ \frac{\Lambda_k}{2} \ensuremath{\bm{g}}e \Gamma_k-\frac{1}{\pi(l)} \ensuremath{\bm{g}}e \Gamma_k-1. \] Then, if we set \[ \Theta_k:=\sum_{j\in D_k\setminus A_k} \frac{1}{j}=\Lambda_k-\Gamma_k, \] we have $\Gamma_k-\Theta_k=-\Lambda_k+2\Gamma_k\in(0,2]$. Also by construction, if we set \[ g_k=\sum_{j\in A_k} \frac{1}{s_j} \ensuremath{\bm{y}}_j, \quad h_k=\sum_{j\in D_k\setminus A_k} \frac{1}{s_j} \ensuremath{\bm{y}}_j, \quad k\in\ensuremath{\mathbb{N}}, \] then $g_k$ is a partial-sum projection of $f_k:=g_k-h_k$ with respect to the rearranged basis $(\ensuremath{\bm{y}}_{\pi(i)})_{i=1}^\infty$. Moreover, in the case when $\pi$ is the identity map, $g_k$ is a greedy projection of $f_k$. On the one hand, if we set \[ f_k'= \sum_{j\in A_k} \frac{1}{s_j} \ensuremath{\bm{e}}_j- \sum_{j\in D_k\setminus A_k} \frac{1}{s_j} \ensuremath{\bm{e}}_j, \] we have $f_k=T(f_k') + (\Gamma_k-\Theta_k) \ensuremath{\bm{x}}_{\psi(k)}$ for all $k\in\ensuremath{\mathbb{N}}$. By Lemma~\ref{lem:LorentzLRP}~\ref{LorentzLRP:3}, \[ \Vert f_k'\Vert_{d_{1,q}(\ensuremath{\bm{w}})} = \left\Vert \sum_{j=1+t_{k-1}}^{t_k} \frac{1}{s_j} \ensuremath{\bm{e}}_j\right\Vert_{d_{1,q}(\ensuremath{\bm{w}})}\lesssim \max\{ 1, \Lambda_k^{1/q}\}\approx \Lambda_k^{1/q}. \] Hence, $\Vert f_k\Vert \lesssim \Lambda_k^{1/q}$ for $k\in\ensuremath{\mathbb{N}}$. On the other hand, since $\ensuremath{\bm{x}}_{\psi(k)}^{\ast} (g_k)=\Gamma_k$, we have \[ \Lambda_k<2\Gamma_k\le 2 C_2 \Vert g_k\Vert \] where $C_2=\sup_n \Vert \ensuremath{\bm{x}}_n^{\ast}\Vert$.
Summing up, \[ \frac{ \Vert g_k\Vert}{\Vert f_k\Vert} \ensuremath{\bm{g}}trsim \Lambda_k^{1/q'} \xrightarrow[k\to \infty]{}\infty. \qedhere \] \end{proof} \begin{corollary}\label{corollaryl2} There is a bidemocratic total basis of $\ell_2$ that is neither quasi-greedy nor a Schauder basis under any rearrangement of its terms. \end{corollary} Let us notice that the bases we construct to prove Theorem~\ref{thm:BDTotalNotQG} are \emph{not} Schauder bases. As the TGA does not depend on the particular way we reorder the basis, whereas being a Schauder basis does, studying the TGA within the framework of Schauder bases is somewhat unnatural. Nonetheless, Schauder bases have provided a friendly framework for developing greedy approximation theory since its beginnings at the turn of the century. In fact, it is still unknown whether certain results involving the TGA hold outside the framework of Schauder bases (see, e.g., \cite{Berna2020})! Hence, in connection with our discussion it is natural to wonder whether bidemocratic Schauder bases are quasi-greedy. We close this section by providing a negative answer to this question too. \begin{theorem}\label{thm:BDSchauderNotQG} There is a Banach space with a bidemocratic Schauder basis which is not quasi-greedy. \end{theorem} The proof of Theorem~\ref{thm:BDSchauderNotQG} relies on a construction that has its roots in \cite{KoTe1999}, where it was used to build a conditional quasi-greedy basis. Variants of the original idea of Konyagin and Temlyakov have appeared in several papers with different motivations (see \cites{GHO2013,BBGHO2018,AABW2021,Oikhberg2018}). Prior to tackling the proof we introduce a quantitative version of \cite{DKKT2003}*{Theorem 5.4}. \begin{theorem}\label{thm:dualQuantitative} Let $\ensuremath{\mathcal{X}}$ be a bidemocratic basis of a quasi-Banach space $\ensuremath{\mathbb{X}}$.
Then \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]\lesssim \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \quad m\in\ensuremath{\mathbb{N}}. \] In particular, if $\ensuremath{\mathcal{X}}$ is a Schauder basis of a Banach space, then \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]\approx \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \quad m\in\ensuremath{\mathbb{N}}. \] \end{theorem} \begin{proof} The proof of \cite{DKKT2003}*{Theorem 5.4} (see also \cite{AABW2021}*{Proof of Proposition 5.7}) yields the first estimate. To see the equivalence, we use that $\ensuremath{\mathcal{X}}^{**}$ is equivalent to $\ensuremath{\mathcal{X}}$ (see \cite{AlbiacKalton2016}*{Corollary 3.2.4}). \end{proof} \begin{proposition}\label{theorembidemocraticnotqG} Let $1<p<\infty$. There is a Banach space $\ensuremath{\mathbb{X}}$ with a monotone Schauder basis $\ensuremath{\mathcal{X}}$ with the following properties: \begin{enumerate}[label=(\roman*),leftmargin=*,widest=ii] \item\label{1bidem}For all finite sets $A\subseteq \ensuremath{\mathbb{N}}$ and all $\varepsilon\in \ensuremath{\mathcal{E}}_A$, \[ \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}\Vert=\left|A\right|^{1/p} \;\text{and}\; \Vert\ensuremath{\mathbbm{1}}_{\varepsilon,A}^{\ast}\Vert=\left|A\right|^{1/p'}, \] where $1/p+1/p^{\prime}=1$. Therefore, $\ensuremath{\mathcal{X}}$ is $1$-bidemocratic. \item\label{notnQG} Neither $\ensuremath{\mathcal{X}}$ nor $\ensuremath{\mathcal{X}}^{\ast}$ is quasi-greedy. Quantitatively, \[ \ensuremath{\overline{\bm{g}}}_m\approx\ensuremath{\overline{\bm{g}}}_m^{\ast}\approx \ensuremath{\bm{k}}_m\approx \ensuremath{\bm{k}}_m^{\ast}\approx \left(\log{m}\right)^{1/p'} , \quad m\in\ensuremath{\mathbb{N}},\; m\ensuremath{\bm{g}}e 2.
\] \end{enumerate} \end{proposition} \begin{proof} Put \[ \ensuremath{\mathcal{D}}:=\{(m,k)\in\ensuremath{\mathbb{N}}^2 \ensuremath{\mathtt{c}}lon 1\le k \le m\}, \] where the elements are taken in the lexicographical order. Appealing to Lemma~\ref{lem:Jar} we recursively construct a family $(r_{m,k},s_{m,k})_{(m,k)\in\ensuremath{\mathcal{D}}}$ in $\ensuremath{\mathbb{N}}^2$ such that \begin{align} m+1<r_{m,k}<s_{m,k},\quad&1\le k\le m,\label{movetotheright1}\\ s_{m,k}<r_{m,k+1}, \quad &1\le k<m, \;\text{and}\label{movetotheright2}\\ \frac{1}{k}- \frac{1}{m}\le T_{m,k}:= \sum_{j=r_{m,k}}^{s_{m,k}}\frac{1}{j}<\frac{1}{k},\quad &1\le k\le m. \label{conditionclosesums} \end{align} Next, we choose a sequence $( A_m)_{m=1}^\infty$ of integer intervals contained in $\ensuremath{\mathbb{N}}$ so that $\max(A_m)<\min(A_{m+1})$ for all $m\in\ensuremath{\mathbb{N}}$, and \begin{equation}\label{separatingthesets} |A_m|=2m+\sum_{k=1}^{m}\left(s_{m,k}-r_{m,k}\right). \end{equation} Let \[ i_{m,k}=\min A_m+\sum_{j=1}^{k-1}\left(s_{m,j}-r_{m,j}+2\right), \quad (m,k)\in\ensuremath{\mathcal{D}}. \] Fix $m\in\ensuremath{\mathbb{N}}$. For each $n\in A_m$ there are unique integers $1\le k \le m$ and $-1\le t \le s_{m,k}-r_{m,k}$ so that $n=i_{m,k}+1+t$. Let us set \[ (d_{m,n},\varepsilon_{m,n})= \begin{cases} (k,1) &\text{ if } t=-1,\\ (r_{m,k}+t,-1) &\text{otherwise.} \end{cases} \] Consider the subset of $\ensuremath{\mathbb{N}}$ given by \begin{equation*} B_m=\{ i_{m,k} \ensuremath{\mathtt{c}}lon 1\le k \le m\}. \end{equation*} The family $(d_{m,n})_{n\in B_m}$ is increasing. By \eqref{movetotheright2}, $(d_{m,n})_{n\in A_m\setminus B_m}$ is also increasing, and by \eqref{movetotheright1}, \begin{equation}\label{eq:BlGreedy} \max_{n\in B_m} d_{m,n} < \min_{n\in A_m\setminus B_m} d_{m,n}. \end{equation} Set $b_{m,n}=d_{m,n}^{-1/p'}$ for $m\in\ensuremath{\mathbb{N}}$ and $n\in A_m$.
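By way of illustration (these particular values are not used in the sequel), for $m=2$ one admissible choice in \eqref{movetotheright1}--\eqref{conditionclosesums} is $r_{2,1}=4$, $s_{2,1}=6$, $r_{2,2}=7$ and $s_{2,2}=8$, since \[ T_{2,1}=\frac{1}{4}+\frac{1}{5}+\frac{1}{6}=\frac{37}{60}\in\left[\frac{1}{2},1\right) \quad\text{and}\quad T_{2,2}=\frac{1}{7}+\frac{1}{8}=\frac{15}{56}\in\left[0,\frac{1}{2}\right).\]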
Since the family $(d_{m,n})_{n\in A_m}$ consists of distinct positive integers, for each $m\in \ensuremath{\mathbb{N}}$ and $A\subseteq A_m$ we have \begin{align} \sum_{n\in A}b_{m,n} &\le \sum_{n=1}^{\left|A\right|}n^{-1/p'} \le p \left|A\right|^{1/p}, \;\text{and}\label{firstboundforfundamentalfunction}\\ \sum_{n\in A}b_{m,n}^{p'} &\le H_{|A|}, \label{lastestimate} \end{align} where, as we said, $H_m$ denotes the $m$th harmonic number. Once the family $(b_{m,n})_{m\in\ensuremath{\mathbb{N}},n\in A_m}$ has been constructed, we define $\Vert\cdot\Vert_{\maltese}$ on $c_{00}$ by \[ \left\Vert(a_n)_{n=1}^\infty\right\Vert_{\maltese}=\frac{1}{p}\sup_{\substack{m\in\ensuremath{\mathbb{N}}\\ l\in A_m}}\left|\sum_{\substack{n\in A_m\\ n\le l}}a_{n}b_{m,n} \right|.\label{Snorm0} \] Since $\max(A_m)<\min(A_{m+1})$ for all $m\in \ensuremath{\mathbb{N}}$, we have that $\Vert f\Vert_{\maltese}<\infty$ for all $f\in c_{00}$, so that $\Vert\cdot\Vert_{\maltese}$ is a semi-norm. Let $\ensuremath{\mathbb{X}}$ be the Banach space obtained as the completion of $c_{00}$ endowed with the norm \[ \Vert f\Vert=\max\left\lbrace \Vert f\Vert_p,\Vert f\Vert_{\maltese}\right\rbrace.\label{norm} \] It is routine to check that the unit vector system $\ensuremath{\mathcal{X}}$ is a monotone normalized Schauder basis of $\ensuremath{\mathbb{X}}$ whose coordinate functionals $\ensuremath{\mathcal{X}}^{\ast}$ are the canonical projections on each coordinate. It follows from \eqref{firstboundforfundamentalfunction} that \[ \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}\Vert_{\maltese}\le \frac{1}{p}\sup_{m\in \ensuremath{\mathbb{N}}}\sum_{n\in A\cap A_m} b_{m,n}\le \left|A\right|^{1/p}, \quad |A|<\infty, \; \varepsilon\in\ensuremath{\mathcal{E}}_A. \] By definition, there is a norm-one linear map from $\ensuremath{\mathbb{X}}$ into $\ell_p$ which maps $\ensuremath{\mathcal{X}}$ to the unit vector system of $\ell_p$.
By duality, there is a norm-one map from $\ell_{p'}$ into $\ensuremath{\mathbb{X}}^{\ast}$ which maps the unit vector system of $\ell_{p'}$ to $\ensuremath{\mathcal{X}}^{\ast}$. In particular, \[ \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}^{\ast}\Vert \le |A|^{1/p'}, \quad |A|<\infty, \; \varepsilon\in\ensuremath{\mathcal{E}}_A. \] We infer that \ref{1bidem} holds. Define $a_{m,n}=\varepsilon_{m,n} d_{m,n}^{-1/p}$, so that $a_{m,n} b_{m,n}={\varepsilon_{m,n}}/{d_{m,n}}$ for $m\in\ensuremath{\mathbb{N}}$ and $n\in A_m$. For each $m\in\ensuremath{\mathbb{N}}$ set \[ f_m=\sum_{n\in A_m}a_{m,n}\, \ensuremath{\bm{x}}_n. \] Let $(m,k)\in\ensuremath{\mathcal{D}}$ and use the convention $i_{m,m+1}=1+\max(A_m)$. If $i_{m,k}\le l <i_{m,k+1}$, by construction we have \[ B_{m,k}(l):=\sum_{n=i_{m,k}}^{l} a_{m,n} b_{m,n}=\frac{1}{k}-\sum_{j=r_{m,k}}^{l-1+r_{m,k}-i_{m,k}} \frac{1}{j}. \] Thus, the maximum and minimum values of $B_{m,k}(l)$ on the interval $i_{m,k}\le l <i_{m,k+1}$ are $1/k$ and $1/k-T_{m,k}$, respectively. Since, by the right-hand side inequality in \eqref{conditionclosesums}, $1/j-T_{m,j}>0$ for all $1\le j \le m$, we infer that \[ \|f_m\|_{\maltese} =\frac{1}{p} \max_{l\in A_m} \sum_{\substack{n\in A_m \\ n\le l}} a_{m,n} b_{m,n} =\frac{1}{p}\max_{1\le k \le m} \left( \frac{1}{k}+ \sum_{j=1}^{k-1} \left( \frac{1}{j}-T_{m,j}\right)\right). \] Using the left-hand side inequality in \eqref{conditionclosesums} we obtain \[ \|f_m\|_{\maltese}\le \frac{1}{p} \max_{1\le k \le m} \left(\frac{1}{k}+\frac{k-1}{m}\right)=\frac{1}{p}. \] We also have \[ \Vert f_m\Vert_{p}^p=\sum_{k=1}^m\left( \frac{1}{k}+T_{m,k}\right) \le 2 H_m. \] Hence, $\Vert f_m\Vert\le 2^{1/p} H_m^{1/p}$ for all $m\in\ensuremath{\mathbb{N}}$. By \eqref{eq:BlGreedy}, $B_m$ is a greedy set of $f_m$. Since every coefficient of $f_m$ is positive on $B_m$, \[ \Vert S_{B_m}(f_m) \Vert\ensuremath{\bm{g}}e \Vert S_{B_m}(f_m) \Vert_{\maltese}=\frac{1}{p}\sum_{n\in B_m} \frac{1}{d_{m,n}}=\frac{1}{p}H_m.
\] Summing up, \[ \frac{\Vert S_{B_m}(f_m)\Vert} {\Vert f_m\Vert}\ensuremath{\bm{g}}e \frac{1}{p \, 2^{1/p}} H_m^{1/p'}, \quad m\in\ensuremath{\mathbb{N}}. \] Since $|B_m|=m$, this shows that $\ensuremath{\bm{g}}_m\ensuremath{\bm{g}}e p^{-1} 2^{-1/p} H_m^{1/p'}$ for all $m\in\ensuremath{\mathbb{N}}$. By Theorem~\ref{thm:dualQuantitative}, it only remains to obtain the upper estimate for the unconditionality constants of $\ensuremath{\mathcal{X}}$. By \eqref{lastestimate} and H\"older's inequality, for all $A\subseteq\ensuremath{\mathbb{N}}$ with $|A|\le m$ we have \[ \Vert S_A( f)\Vert_{\maltese}\le \frac{1}{p}\Vert f\Vert_p \sup_{r\in\ensuremath{\mathbb{N}}}\left( \sum_{n\in A\cap A_r} |b_{r,n}|^{p'}\right)^{1/p'} \le \frac{1}{p} H_m^{1/p'} \Vert f\Vert_p. \] Hence, $\ensuremath{\bm{k}}_m\le\max\{1, H_m^{1/p'}/p \}$ for all $m\in\ensuremath{\mathbb{N}}$. \end{proof} \begin{remark} Given a basis $\ensuremath{\mathcal{X}}$ and an infinite subset $\ensuremath{\mathbf{n}}$ of $\ensuremath{\mathbb{N}}$, we say that $\ensuremath{\mathcal{X}}$ is $\ensuremath{\mathbf{n}}$-quasi-greedy if \[ \sup\left\lbrace \dfrac{\Vert S_A(f)\Vert}{\Vert f\Vert} \ensuremath{\mathtt{c}}lon f\in\ensuremath{\mathbb{X}},\, A \;\text{greedy set of}\; f,\, |A|\in\ensuremath{\mathbf{n}}\right\rbrace<\infty \] (see \cite{Oikhberg2018}). Note that the basis constructed in Proposition~\ref{theorembidemocraticnotqG} is not $\ensuremath{\mathbf{n}}$-quasi-greedy for any increasing sequence $\ensuremath{\mathbf{n}}$. \end{remark} \begin{remark}\label{remarklp} The basis $\ensuremath{\mathcal{X}}$ in Proposition~\ref{theorembidemocraticnotqG} has a subbasis isometrically equivalent to the unit vector basis of $\ell_p$. Indeed, it is easy to check that $\left(\ensuremath{\bm{x}}_{i_{m,1}}\right)_{m=1}^\infty$ has this property. The basis $\ensuremath{\mathcal{X}}$ also has, as we next show, a block basis isometrically equivalent to the unit vector basis of $c_0$.
Let $(A_m)_{m=1}^\infty$, $(B_m)_{m=1}^\infty$ and $(f_m)_{m=1}^\infty$ be as in that proposition, and define \[ g_m:=S_{B_m}(f_m), \quad h_m=\frac{g_m}{\|g_m\|_{\maltese}}, \quad m\in\ensuremath{\mathbb{N}}. \] Pick positive scalars $(\varepsilon_k)_{k=1}^\infty$ with $\sum_{k=1}^\infty \varepsilon_k^p=1$. Since \[ \lim_{m} \frac{ \Vert g_m\Vert_p}{\Vert g_m\Vert_{\maltese}}=0, \] there is a subsequence $\left(g_{m_k}\right)_{k=1}^\infty$ with $\left\Vert g_{m_k}\right\Vert_p\le \varepsilon_k \Vert g_{m_k}\Vert_{\maltese}$ for all $k\in\ensuremath{\mathbb{N}}$. Let $f=(a_k)_{k=1}^\infty\in c_{00}$. Since $\supp(h_m)\subseteq A_{m}$ for all $m$, we have \[ \left\Vert \sum_{k=1}^{\infty}a_k h_{m_k} \right\Vert_{\maltese} =\max_{k\in\ensuremath{\mathbb{N}}} |a_k| \Vert h_{m_k}\Vert_{\maltese} =\max_{k\in\ensuremath{\mathbb{N}}}|a_k| \] and \[ \left\Vert \sum_{k=1}^{\infty}a_k h_{m_k} \right\Vert_p =\left( \sum_{k=1}^\infty |a_k|^p \Vert h_{m_k}\Vert_p^p\right)^{1/p} \le \left( \sum_{k=1}^\infty |a_k|^p \varepsilon_k^p \right)^{1/p} \le \max_{k\in\ensuremath{\mathbb{N}}} |a_k|. \] Consequently, $\Vert \sum_{k=1}^\infty a_k h_{m_k}\Vert= \max_{k\in\ensuremath{\mathbb{N}}} |a_k|$. \end{remark} \section{Building bidemocratic conditional quasi-greedy bases}\label{sect:NM}\noindent Probably the most versatile method for building conditional quasi-greedy bases is the previously mentioned DKK-method due to Dilworth, Kalton and Kutzarova, which works only in the locally convex setting (i.e., for Banach spaces). It produces conditional almost greedy bases whose fundamental function either is equivalent to $(m)_{m=1}^\infty$ or has both the LRP and the URP. Thus, the DKK-method serves as a tool for constructing Banach spaces with bidemocratic conditional quasi-greedy bases whose fundamental function has both the LRP and the URP.
In this section we develop a new method for building conditional bases that allows us to construct bidemocratic conditional quasi-greedy bases with an arbitrary fundamental function. We write $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ for the Cartesian product of the quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ endowed with the quasi-norm \[ \Vert (f,g)\Vert=\max\{ \Vert f\Vert, \Vert g\Vert\}, \quad f\in\ensuremath{\mathbb{X}},\, g\in\ensuremath{\mathbb{Y}}. \] Given sequences $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{y}}_n)_{n=1}^\infty$ in quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively, their direct sum is the sequence $\ensuremath{\mathcal{X}}\oplus\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{u}}_n)_{n=1}^\infty$ in $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ given by \[ \ensuremath{\bm{u}}_{2n-1}=(\ensuremath{\bm{x}}_n,0), \quad \ensuremath{\bm{u}}_{2n}=(0,\ensuremath{\bm{y}}_n), \quad n\in\ensuremath{\mathbb{N}}.
\] If $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are bidemocratic bases, and $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]$, then the basis $\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}}$ of $\ensuremath{\mathbb{X}}\oplus \ensuremath{\mathbb{Y}}$ is also bidemocratic with \begin{align*} \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus \ensuremath{\mathbb{Y}}]&\approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}],\\ \ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus \ensuremath{\mathbb{Y}}]&=\max\{\ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \ensuremath{\bm{g}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]\},\\ \ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus \ensuremath{\mathbb{Y}}]&=\max\{\ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \ensuremath{\bm{k}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]\}. \end{align*} Loosely speaking, we could say that $\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}}$ naturally inherits the properties of $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$. In contrast, `rotating' $\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}}$ gives rise to more interesting situations.
In this section we study the `rotated' sequence $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{z}}_n)_{n=1}^\infty$ in $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ given by \[ \ensuremath{\bm{z}}_{2n-1}=\frac{1}{\sqrt{2}}(\ensuremath{\bm{x}}_n,\ensuremath{\bm{y}}_n), \quad \ensuremath{\bm{z}}_{2n}=\frac{1}{\sqrt{2}}(\ensuremath{\bm{x}}_n,-\ensuremath{\bm{y}}_n), \quad n\in\ensuremath{\mathbb{N}}. \] Note that \[ \sum_{n=1}^\infty a_n\, \ensuremath{\bm{z}}_n =\frac{1}{\sqrt{2}}\left(\sum_{n=1}^\infty (a_{2n-1}+a_{2n})\ensuremath{\bm{x}}_n, \sum_{n=1}^\infty (a_{2n-1}-a_{2n})\ensuremath{\bm{y}}_n\right), \] whenever the series converges. To deal with bases built using this method, we introduce some notation. Given $A\subseteq\ensuremath{\mathbb{N}}$ we set \[ A^{\ensuremath{\bm{o}}}=\{2n-1\ensuremath{\mathtt{c}}lon n\in A\}, \quad A^{\ensuremath{\bm{e}}}=\{2n\ensuremath{\mathtt{c}}lon n\in A\}. \] Consider also the onto map $\eta\ensuremath{\mathtt{c}}lon\ensuremath{\mathbb{N}}\to\ensuremath{\mathbb{N}}$ given by $\eta(n)=\lceil n/2 \rceil$. Note that $\eta^{-1}(A)=A^{\ensuremath{\bm{o}}}\cup A^{\ensuremath{\bm{e}}}$ and $\eta(A^{\ensuremath{\bm{o}}})=\eta( A^{\ensuremath{\bm{e}}})=A$ for all $A\subseteq\ensuremath{\mathbb{N}}$. Our first auxiliary result is straightforward and well known. In its statement we implicitly use the natural identification of $(\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}})^{\ast}$ with $\ensuremath{\mathbb{X}}^{\ast}\oplus\ensuremath{\mathbb{Y}}^{\ast}$. \begin{lemma}[cf.\ \cite{AAW2019}*{Theorem 2.6}]\label{lem:dualdiamond} Suppose that $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are bases of $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively.
Then $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ is a basis of $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ whose dual basis is $\ensuremath{\mathcal{X}}^{\ast}\diamond\ensuremath{\mathcal{Y}}^{\ast}$. Moreover, if $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are Schauder bases, so is $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$. \end{lemma} \begin{lemma}\label{constantupperdemfunct} Let $\ensuremath{\mathcal{X}}$ be a basis of a quasi-Banach space. There is a constant $C$ such that \begin{equation*} \left\Vert \sum_{n=1}^\infty a_n\, \ensuremath{\bm{x}}_n \right\Vert \le C \ensuremath{\bm{\varphi_u}}(m) \end{equation*} whenever $|a_n|\le 1$ for all $n\in\ensuremath{\mathbb{N}}$ and $a_n\neq 0$ for at most $m$ indices. Moreover, if $\ensuremath{\mathbb{X}}$ is a $p$-Banach space, $0<p\le 1$, we can choose $C=(2^p-1)^{-1/p}$. \end{lemma} \begin{proof} It follows readily from \cite{AABW2021}*{Corollary 2.3}. \end{proof} \begin{lemma}\label{lem:SDDiamond} Suppose that $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are bases of $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Then \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}] \le C\max\{ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]\} \] for some constant $C$ that only depends on the spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ (and one can take $C=\sqrt{2}$ if $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ are Banach spaces). \end{lemma} \begin{proof} Let $m\in\ensuremath{\mathbb{N}}$, $A\subseteq\ensuremath{\mathbb{N}}$ with $|A|\le m$, and $\varepsilon=(\varepsilon_n)_{n\in A}\in\ensuremath{\mathcal{E}}_A$.
We extend $\varepsilon$ by setting $\varepsilon_n=0$ if $n\in\ensuremath{\mathbb{N}}\setminus A$. Put \[ B=\{ n\in\ensuremath{\mathbb{N}} \ensuremath{\mathtt{c}}lon 2n-1\in A\} \cup \{ n\in\ensuremath{\mathbb{N}} \ensuremath{\mathtt{c}}lon 2n\in A\}, \] that is, $B=\eta(A)$. We have $|\varepsilon_{2n-1} \pm \varepsilon_{2n}|\le 2 \chi_B(n)$ for all $n\in\ensuremath{\mathbb{N}}$, and $|B|\le |A|$. Thus, if $C$ is the constant in Lemma~\ref{constantupperdemfunct}, \begin{align*} \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}&[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}]\Vert\\ &= \frac{1}{\sqrt{2}} \max \left\{ \left\Vert \sum_{n=1}^\infty (\varepsilon_{2n-1}+\varepsilon_{2n})\ensuremath{\bm{x}}_n \right\Vert, \left\Vert \sum_{n=1}^\infty (\varepsilon_{2n-1}-\varepsilon_{2n})\ensuremath{\bm{y}}_n \right\Vert \right\}\\ &\le \frac{2C}{\sqrt{2}} \max \left\{ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m), \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}](m) \right\}.\qedhere \end{align*} \end{proof} \begin{proposition}\label{prop:bidem} Suppose that $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are bidemocratic bases of quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Suppose also that \[ s_m:= \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m) \approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}](m), \quad m\in\ensuremath{\mathbb{N}}. \] Then $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ is a bidemocratic basis of $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$. Moreover, \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}](m) \approx s_m, \quad m\in\ensuremath{\mathbb{N}}.
\] \end{proposition} \begin{proof}Since, by assumption, \[ \max\{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}](m) , \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}}^{\ast},\ensuremath{\mathbb{Y}}^{\ast}](m)\} \lesssim \frac{m}{s_m}, \quad m\in\ensuremath{\mathbb{N}}, \] applying Lemma~\ref{lem:SDDiamond} yields \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}](m) \lesssim s_m, \quad m\in\ensuremath{\mathbb{N}},\] and \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast}\diamond\ensuremath{\mathcal{Y}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}\oplus\ensuremath{\mathbb{Y}}^{\ast}](m) \lesssim \frac{m}{s_m}, \quad m\in\ensuremath{\mathbb{N}}. \] Using Lemma~\ref{lem:dualdiamond}, these inequalities readily give the desired result. \end{proof} \begin{proposition}\label{prop:diamondConditional} Let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{y}}_n)_{n=1}^\infty$ be non-equivalent bases of quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Then, $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ is a conditional basis of $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$.
Quantitatively, if \[ \textstyle \ensuremath{\mathtt{c}}_m=\{(a_n)_{n=1}^\infty\in\ensuremath{\mathbb{F}}^\ensuremath{\mathbb{N}} \ensuremath{\mathtt{c}}lon |\{n\in\ensuremath{\mathbb{N}}\ensuremath{\mathtt{c}}lon a_n\neq 0\}|\le m\} \] and \[ E_m[\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}]=\sup_{(a_n)_{n=1}^\infty\in \ensuremath{\mathtt{c}}_m } \frac{\Vert \sum_{n=1}^\infty a_n\, \ensuremath{\bm{x}}_n\Vert}{\Vert \sum_{n=1}^\infty a_n\, \ensuremath{\bm{y}}_n\Vert}, \quad m\in\ensuremath{\mathbb{N}}, \] then \[ \ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}]\ensuremath{\bm{g}}e \frac{1}{2}\max\{E_m[\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}],E_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathcal{X}}]\}, \quad m\in\ensuremath{\mathbb{N}}. \] \end{proposition} \begin{proof} Given an eventually null sequence $(a_n)_{n\in A}$, define $(b_n)_{n\in A^{\ensuremath{\bm{o}}}}$ and $(c_n)_{n\in A^{\ensuremath{\bm{e}}}}$ by $b_{2n-1}=c_{2n}=a_n$ for all $n\in A$. If \[ f_{\ensuremath{\bm{o}}}=\sum_{n\in A^{\ensuremath{\bm{o}}}} b_n\, \ensuremath{\bm{z}}_n \;\text{and}\; f_{\ensuremath{\bm{e}}}=\sum_{n\in A^{\ensuremath{\bm{e}}}} c_n\, \ensuremath{\bm{z}}_n \] we have \begin{align*} f_{\ensuremath{\bm{o}}} &=\frac{1}{\sqrt{2}}\left(\sum_{n\in A} a_n\, \ensuremath{\bm{x}}_n,\sum_{n\in A} a_n\, \ensuremath{\bm{y}}_n\right),\\ f_{\ensuremath{\bm{o}}}+ f_{\ensuremath{\bm{e}}} &=\sqrt{2} \left(\sum_{n\in A} a_n\, \ensuremath{\bm{x}}_n,0\right), \;\text{and}\\ f_{\ensuremath{\bm{o}}}- f_{\ensuremath{\bm{e}}}&=\sqrt{2} \left(0,\sum_{n\in A} a_n\, \ensuremath{\bm{y}}_n\right).
\end{align*} Therefore, \[ \frac{ \Vert f_{\ensuremath{\bm{o}}}\Vert}{ \Vert f_{\ensuremath{\bm{o}}}+f_{\ensuremath{\bm{e}}}\Vert }\ensuremath{\bm{g}}e\frac{1}{2}\frac{\|\sum_{n=1}^{\infty}a_n\ensuremath{\bm{y}}_n\|}{\| \sum_{n=1}^\infty a_n\, \ensuremath{\bm{x}}_n\|} \;\text{and}\; \frac{ \Vert f_{\ensuremath{\bm{o}}}\Vert}{ \Vert f_{\ensuremath{\bm{o}}}-f_{\ensuremath{\bm{e}}}\Vert }\ensuremath{\bm{g}}e\frac{1}{2}\frac{\|\sum_{n=1}^{\infty}a_n\ensuremath{\bm{x}}_n\|}{\| \sum_{n=1}^\infty a_n\, \ensuremath{\bm{y}}_n\|}.\qedhere \] \end{proof} Proposition~\ref{prop:diamondConditional} gives that the conditionality constants of $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ are bounded below by \[ \frac{1}{2}\max\left\{ \frac{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]}{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]}, \frac{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]}{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]}, \frac{\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]}{\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]}, \frac{\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]}{\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]} \right\}. \] Thus, applying our method to bases with non-equivalent fundamental functions yields `highly' conditional bases. In contrast, since bidemocratic bases are truncation quasi-greedy (see \cite{AABW2021}*{Proposition 5.7}), a combination of Proposition~\ref{prop:bidem} with Theorem~\ref{thm:BidCond} shows that we can apply the `rotation method' to bidemocratic bases with equivalent fundamental functions to obtain bases whose conditionality constants grow `slowly'.
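To illustrate the size of this lower bound with a simple example (not needed elsewhere), let $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ be the unit vector bases of $\ell_1$ and $\ell_2$. By the Cauchy--Schwarz inequality, \[ E_m[\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}]=\sup\left\{ \frac{\sum_{n=1}^\infty |a_n|}{\left(\sum_{n=1}^\infty |a_n|^2\right)^{1/2}} \colon |\{n\in\ensuremath{\mathbb{N}}\colon a_n\neq 0\}|\le m\right\}=m^{1/2}, \quad m\in\ensuremath{\mathbb{N}}, \] so Proposition~\ref{prop:diamondConditional} gives $\ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ell_1\oplus\ell_2]\ge \frac{1}{2} m^{1/2}$, in accordance with the ratio of the fundamental functions of the two bases.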
However, the basis $\mathcal{X}\diamond\mathcal{Y}$ is always conditional unless $\mathcal{X}$ and $\mathcal{Y}$ are equivalent. In this context, since quasi-greedy bases are truncation quasi-greedy (see \cite{AABW2021}*{Theorem 4.13}), we ask ourselves whether our construction preserves quasi-greediness. Our next result provides an affirmative answer to this question.
\begin{theorem}\label{thm:QGC}
Let $\mathcal{X}$ and $\mathcal{Y}$ be bidemocratic bases of quasi-Banach spaces $\mathbb{X}$ and $\mathbb{Y}$ respectively. Suppose that
\[
\bm{\varphi_u}[\mathcal{X},\mathbb{X}](m) \approx \bm{\varphi_u}[\mathcal{Y},\mathbb{Y}](m), \quad m\in\mathbb{N}.
\]
Then,
\[
\bm{g}_m[\mathcal{X}\diamond\mathcal{Y},\mathbb{X}\oplus\mathbb{Y}]\approx \max\{\bm{g}_m[\mathcal{X},\mathbb{X}], \bm{g}_m[\mathcal{Y},\mathbb{Y}]\}, \quad m\in\mathbb{N}.
\]
In particular, $\mathcal{X}\diamond\mathcal{Y}$ is quasi-greedy if and only if $\mathcal{X}$ and $\mathcal{Y}$ are quasi-greedy.
\end{theorem}
Before the proof of Theorem~\ref{thm:QGC} we give two auxiliary lemmas.
\begin{lemma}\label{lem:ShareBidem}
Let $\mathcal{X}=(\bm{x}_n)_{n=1}^\infty$ and $\mathcal{Y}=(\bm{y}_n)_{n=1}^\infty$ be bases of quasi-Banach spaces $\mathbb{X}$ and $\mathbb{Y}$ respectively.
Suppose that $\mathcal{Y}$ is truncation quasi-greedy and that
\[
\bm{\varphi_u}[\mathcal{X},\mathbb{X}](m) \lesssim \bm{\varphi_l}[\mathcal{Y},\mathbb{Y}](m), \quad m\in\mathbb{N}.
\]
Then, there is a constant $C_0$ such that
\[
\left\Vert \sum_{n\in E} c_n \, \bm{x}_n \right\Vert \le C_0 \left\Vert \sum_{n=1}^\infty d_n\, \bm{y}_n \right\Vert
\]
whenever $\max_{n\in E} |c_n|\le \min_{n\in E} |d_n|$.
\end{lemma}
\begin{proof}
It follows by combining Lemma~\ref{constantupperdemfunct} and Lemma~\ref{lem:truncation quasi-greedyQU}.
\end{proof}
\begin{lemma}\label{cor:doubling}
Let $\mathcal{X}$ be a basis of a quasi-Banach space $\mathbb{X}$. If $\mathcal{X}$ is truncation quasi-greedy, there is a constant $C$ such that
\[
\overline{\bm{g}}_m[\mathcal{X},\mathbb{X}]\le C\, \overline{\bm{g}}_k[\mathcal{X},\mathbb{X}], \quad k\le m \le 2k.
\]
In particular, the sequences $(\overline{\bm{g}}_m[\mathcal{X},\mathbb{X}])_{m=1}^\infty$ and $(\bm{g}_m[\mathcal{X},\mathbb{X}])_{m=1}^\infty$ are doubling.
\end{lemma}
\begin{proof}
Let $A$ be a greedy set of $f\in\mathbb{X}$ with $|A|=m$. Pick a greedy set $B$ of $f$ with $B\subseteq A$ and $|B|=k$. Since $|A\setminus B|\le |B|$, applying Lemma~\ref{lem:ShareBidem} with $\mathcal{X}$ and a permutation of $\mathcal{X}$ yields $\Vert S_{A\setminus B}(f) \Vert \le C_0 \Vert f\Vert$, where $C_0$ only depends on $\mathcal{X}$ and $\mathbb{X}$. Hence, if $\kappa$ denotes the modulus of concavity of $\mathbb{X}$, then $\Vert S_A(f)\Vert \le \kappa(C_0+\overline{\bm{g}}_k) \Vert f \Vert$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:QGC}]
Set $\mathcal{X}\diamond \mathcal{Y}=(\bm{z}_n)_{n=1}^\infty$. Choosing $f\in\mathbb{X}\oplus\mathbb{Y}$ with $\bm{z}_{2n-1}^{\ast}(f)=\pm \bm{z}_{2n}^{\ast}(f)$ yields
\[
\bm{h}_m:= \max\{\bm{g}_m[\mathcal{X},\mathbb{X}], \bm{g}_m[\mathcal{Y},\mathbb{Y}]\}\le \bm{g}_{2m}[\mathcal{X}\diamond\mathcal{Y},\mathbb{X}\oplus\mathbb{Y}], \quad m\in\mathbb{N}.
\]
Using Lemma~\ref{cor:doubling}, we obtain the desired upper estimate for $\bm{h}_m$. For $A\subseteq\mathbb{N}$, set $S_A=S_A[\mathcal{X}\diamond\mathcal{Y},\mathbb{X}\oplus\mathbb{Y}]$, $S_A^{\mathbb{X}}=S_A[\mathcal{X}, \mathbb{X}]$, and $S_A^{\mathbb{Y}}=S_A[\mathcal{Y},\mathbb{Y}]$. Given a greedy set $B$ of $f=(g,h)\in \mathbb{X}\oplus\mathbb{Y}$, let $A_1$, $A_2$ and $A_{12}$ be disjoint subsets of $\mathbb{N}$ such that
\[
B=(A_{12}\cup A_1)^{\bm{o}} \cup (A_{12}\cup A_2)^{\bm{e}}.
\]
We have $|B|=2|A_{12}|+|A_1|+|A_2|$. Set $A_0=\mathbb{N}\setminus (A_{12}\cup A_1\cup A_2)$. Let $(c_n)_{n=1}^\infty$ be the coefficients of $f$ relative to $\mathcal{X}\diamond\mathcal{Y}$, let $(a_n)_{n=1}^\infty$ be the coefficients of $g$ relative to $\mathcal{X}$, and let $(b_n)_{n=1}^\infty$ be the coefficients of $h$ relative to $\mathcal{Y}$.
If $c=\min\{|c_n| \colon n\in B\}$,
\[
\max_{n\in A_0} \left\{\frac{1}{\sqrt{2}} |a_n+b_n|, \frac{1}{\sqrt{2}} |a_n-b_n|\right\} = \max_{n\in A_0^{\bm{o}}\cup A_0^{\bm{e}}} |c_n| \le c.
\]
Hence, $|a_n|$, $|b_n|\le \sqrt{2}\, c$ for all $n\in A_0$, i.e., $A_3\cup A_4\subseteq \mathbb{N}\setminus A_0$, where
\[
A_3=\{n\in\mathbb{N} \colon |a_n|>\sqrt{2}\, c\}, \quad A_4=\{n\in\mathbb{N} \colon |b_n|>\sqrt{2}\, c\}.
\]
Note that $A_3$ is a greedy set of $g$, $A_4$ is a greedy set of $h$, and
\[
\max\{ |A_3|,|A_4|\}\le |\mathbb{N}\setminus A_0|=|A_{12}\cup A_1\cup A_2|\le |A_{12}|+|A_1|+|A_2|\le |B|.
\]
Set $A_5=\mathbb{N}\setminus (A_3\cup A_0)$ and $A_6=\mathbb{N}\setminus (A_4\cup A_0)$. Taking into account that, for any $D\subseteq\mathbb{N}$, the coordinate projection on $\eta^{-1}(D)$ with respect to $\mathcal{X}\diamond\mathcal{Y}$ coincides with that with respect to the direct sum $\mathcal{X}\oplus\mathcal{Y}$ of the bases $\mathcal{X}$ and $\mathcal{Y}$, we obtain
\[
(S_{A_3}^{\mathbb{X}}(g),S_{A_4}^{\mathbb{Y}}(h)) -S_B(f)=S_{A_1^{\bm{e}}}(f) +S_{A_2^{\bm{o}}}(f)-(S_{A_5}^{\mathbb{X}}(g),S_{A_6}^{\mathbb{Y}}(h)).
\]
Therefore, it suffices to prove that
\[
\max\{ \Vert S_{A_5}^{\mathbb{X}}(g)\Vert, \Vert S_{A_6}^{\mathbb{Y}}(h)\Vert, \Vert S_{A_1^{\bm{e}}}(f) \Vert , \Vert S_{A_2^{\bm{o}}}(f) \Vert\} \le C_1 \Vert f \Vert
\]
for some constant $C_1$.
Thus, the result would follow by applying the next two claims to the pairs of bases $(\mathcal{X},\mathcal{Y})$, $(\mathcal{X},\mathcal{Y}^-)$, $(\mathcal{Y},\mathcal{X})$, and $(\mathcal{Y}^-,\mathcal{X})$, where $\mathcal{X}=(\bm{x}_n)_{n=1}^\infty$, $\mathcal{Y}=(\bm{y}_n)_{n=1}^\infty$, and $\mathcal{Y}^-=(-\bm{y}_n)_{n=1}^\infty$.
\begin{claim}\label{claim1}
There is a constant $C$ such that
\[
\left\Vert \sum_{n\in A} a_n \, \bm{x}_n\right\Vert \le C \left\Vert \sum_{n=1}^\infty a_n \, (\bm{x}_n,\bm{y}_n) + \sum_{n=1}^\infty b_n \, (\bm{x}_n,-\bm{y}_n) \right\Vert
\]
whenever $A\subseteq\mathbb{N}$ is finite and $\max_{n\in A} |a_n|\le b:=\min_{n\in A} |b_n|$.
\end{claim}
\begin{claim}\label{claim2}
There is a constant $C$ such that
\[
\left\Vert \sum_{n\in A} a_n \, \bm{x}_n\right\Vert \le C \left\Vert\left( \sum_{n=1}^\infty a_n \, \bm{x}_n , \sum_{n=1}^\infty b_n \,\bm{y}_n\right) \right\Vert
\]
whenever $\max_{n\in A} |a_n|\le b:=\min_{n\in A} \max\{|a_n+b_n|,|a_n-b_n|\}$.
\end{claim}
Let us prove Claim~\ref{claim1}. Set $D_1=\{n\in A \colon |a_n-b_n|\ge b\}$. If $n\in D_2:=A\setminus D_1$, then
\[
|a_n+b_n|=|2b_n+(a_n-b_n)| \ge 2 |b_n|-|a_n-b_n|> 2b-b=b.
\]
Hence, if $\kappa$ is the modulus of concavity of $\mathbb{X}$, applying Lemma~\ref{lem:ShareBidem} we obtain
\begin{align*}
\left\Vert \sum_{n\in A} a_n \, \bm{x}_n\right\Vert &\le \kappa\left( \left\Vert \sum_{n\in D_1} a_n \, \bm{x}_n\right\Vert +\left\Vert \sum_{n\in D_2} a_n \, \bm{x}_n\right\Vert \right)\\
&\le \kappa C_0\left( \left\Vert \sum_{n=1}^\infty (a_n-b_n) \, \bm{y}_n\right\Vert + \left\Vert \sum_{n=1}^\infty (a_n+b_n)\, \bm{x}_n\right\Vert \right)\\
&\le 2 \kappa C_0 \max\left\{ \left\Vert \sum_{n=1}^\infty (a_n+b_n) \, \bm{x}_n\right\Vert , \left\Vert \sum_{n=1}^\infty (a_n-b_n)\, \bm{y}_n\right\Vert \right\}\\
&= 2 \kappa C_0\left\Vert \sum_{n=1}^\infty a_n \, (\bm{x}_n,\bm{y}_n) + \sum_{n=1}^\infty b_n \, (\bm{x}_n,-\bm{y}_n) \right\Vert.
\end{align*}
We conclude by proving Claim~\ref{claim2}. Set $D_1=\{n\in A \colon |a_n|\le |b_n| \}$ and $D_2=A\setminus D_1$. Since
\[
\max\{ |a_n|,|b_n|\}\ge \frac{b}{2}, \quad n\in A,
\]
we have $|b_n|\ge b/2$ for all $n\in D_1$ and $|a_n|\ge b/2$ for all $n\in D_2$. Therefore,
\[
\max_{n\in D_1} |a_n|\le 2 \min_{n\in D_1} |b_n|, \quad \max_{n\in D_2} |a_n|\le 2 \min_{n\in D_2} |a_n|.
\]
Applying Lemma~\ref{lem:ShareBidem} we obtain
\begin{align*}
\left\Vert \sum_{n\in A} a_n \, \bm{x}_n\right\Vert &\le \kappa \left(\left\Vert \sum_{n\in D_1} a_n \, \bm{x}_n\right\Vert +\left\Vert \sum_{n\in D_2} a_n \, \bm{x}_n\right\Vert \right)\\
&\le \kappa C_0\left(\left\Vert \sum_{n =1}^\infty 2 b_n \, \bm{y}_n\right\Vert +\left\Vert \sum_{n=1}^\infty 2 a_n \, \bm{x}_n\right\Vert \right)\\
&\le 4\kappa C_0 \left\Vert \left( \sum_{n =1}^\infty a_n \, \bm{x}_n, \sum_{n=1}^\infty b_n \, \bm{y}_n\right)\right\Vert.\qedhere
\end{align*}
\end{proof}

If $\varphi$ is the fundamental function of a basis of a Banach space, then $(\varphi(m))_{m=1}^\infty$ and $(m/\varphi(m))_{m=1}^\infty$ are non-decreasing sequences (see \cite{DKKT2003}). Our next result shows that, conversely, any such $\varphi$ is in fact the fundamental function of a bidemocratic basis of a Banach space.
\begin{theorem}\label{thm:BDQGAFF}
Let $(s_m)_{m=1}^\infty$ be a non-decreasing unbounded sequence of positive scalars. Suppose that $(m/s_m)_{m=1}^\infty$ is unbounded and non-decreasing. Then there is a Banach space $\mathbb{X}$ and a conditional bidemocratic quasi-greedy basis $\mathcal{X}$ of $\mathbb{X}$ whose fundamental function grows as $(s_m)_{m=1}^\infty$.
\end{theorem}
\begin{proof}
Let $\bm{w}=(w_n)_{n=1}^\infty$ denote the weight whose primitive weight is $(s_m)_{m=1}^\infty$. Then $d_{1,1}(\bm{w})$ is a Banach space whose dual space is the Marcinkiewicz space $m(\bm{w})$, consisting of all $f\in c_0$ whose non-increasing rearrangement $(a_n)_{n=1}^\infty$ satisfies
\[
\Vert f\Vert_{m(\bm{w})}=\sup_m \frac{1}{s_m}\sum_{n=1}^m a_n<\infty
\]
(see \cite{CRS2007}*{Theorems 2.4.14 and 2.5.10}).
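The left-hand inclusion in the chain of embeddings established next rests on the estimate $\frac{s_m}{m}\sum_{n=1}^m a_n \le \sum_{n=1}^m a_n w_n$ for non-increasing non-negative $(a_n)$. As a numerical sanity check (an illustration only; the concrete primitive weight $s_m=\sqrt{m}$ is a hypothetical choice satisfying the hypotheses of the theorem), one can test it on random non-increasing sequences:

```python
import random

random.seed(0)

M = 200
# primitive weight s_m = sqrt(m); then m/s_m = sqrt(m) is non-decreasing, as required
s = [m ** 0.5 for m in range(1, M + 1)]
# weight w_n = s_n - s_{n-1}
w = [s[0]] + [s[n] - s[n - 1] for n in range(1, M)]

for _ in range(50):
    # a random non-negative, non-increasing sequence (a non-increasing rearrangement)
    a = sorted((random.random() for _ in range(M)), reverse=True)
    lhs = max(s[m - 1] / m * sum(a[:m]) for m in range(1, M + 1))
    rhs = sum(an * wn for an, wn in zip(a, w))  # the d_{1,1}(w) norm of a
    assert lhs <= rhs + 1e-9

print("Abel-summation estimate holds on all samples")
```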
Let $m_0(\bm{w})$ be the separable part of $m(\bm{w})$, and let $\bm{w}^{\ast}$ be the weight whose primitive weight is $(m/s_m)_{m=1}^\infty$. We have the following chain of norm-one inclusions:
\begin{equation}\label{eq:embeddingsM}
d_{1,1}(\bm{w}) \subseteq m_0(\bm{w}^{\ast}) \subseteq m(\bm{w}^{\ast}) \subseteq d_{1,\infty}(\bm{w}).
\end{equation}
The right-hand inclusion is clear. Let us prove the left-hand one. Let $(a_n)_{n=1}^\infty$ be the non-increasing rearrangement of $f\in c_0$. Given $m\in\mathbb{N}$, define $(b_n)_{n=1}^\infty$ by $b_n=a_n$ if $n\le m$ and $b_n=0$ otherwise. Using Abel's summation formula we obtain
\begin{align*}
\frac{s_m}{m}\sum_{n=1}^m a_n &= \frac{s_m}{m} \sum_{n=1}^\infty (b_n-b_{n+1})\,n \\
&\le \sum_{n=1}^\infty (b_n-b_{n+1})\,s_n\\
&=\sum_{n=1}^m a_n w_n\le \Vert f\Vert_{1,\bm{w}}.
\end{align*}
We infer from \eqref{eq:embeddingsM} that $d_{1,1}(\bm{w})$ and $m_0(\bm{w}^{\ast})$ are Banach spaces for which the unit vector system is a symmetric basis with fundamental function $(s_m)_{m=1}^\infty$. Applying the rotation method with these bases yields a bidemocratic quasi-greedy basis of $d_{1,1}(\bm{w})\oplus m_0(\bm{w}^{\ast})$ with fundamental function equivalent to $(s_m)_{m=1}^\infty$. To show that this basis is conditional, by Proposition~\ref{prop:diamondConditional} it suffices to show that $d_{1,1}(\bm{w})$ and $m_0(\bm{w}^{\ast})$ are not isomorphic, so that $d_{1,1}(\bm{w})\subsetneq m_0(\bm{w}^{\ast})$. For that, we note that the unit vector system is a boundedly complete basis of $d_{1,1}(\bm{w})$ and that $\ell_1$ is a complemented subspace of $d_{1,1}(\bm{w}^{\ast})$. Indeed, the first fact is clear.
The second fact follows by noting that the proof in \cite{ACL1973} works without requiring $\bm{w}^*$ to be non-increasing. An appeal to \cite{AlbiacKalton2016}*{Theorem 3.2.15} and \cite{AlbiacKalton2016}*{Theorem 3.3.1} concludes the proof.
\end{proof}
\begin{remark}
Notice that in Theorem~\ref{thm:BDQGAFF} we can obtain that $\mathcal{X}$ is $1$-bidemocratic with $\bm{\varphi_u}[\mathcal{X},\mathbb{X}](m)=s_m$ for all $m\in\mathbb{N}$. Indeed, if $(s_m)_{m=1}^\infty$ is a non-decreasing sequence of positive scalars such that $(m/s_m)_{m=1}^\infty$ is non-decreasing, and $\mathbb{X}$ is a $p$-Banach space, $0<p\le 1$, with a bidemocratic basis $\mathcal{X}$ such that $\bm{\varphi_u}[\mathcal{X},\mathbb{X}](m)\approx s_m$ for $m\in\mathbb{N}$, then, arguing as in the proof of \cite{DOSZ2011}*{Theorem 2.1} (where unconditionality plays no role), we obtain an equivalent $p$-norm on $\mathbb{X}$ with respect to which $\bm{\varphi_u}[\mathcal{X},\mathbb{X}](m)=s_m$ and $\bm{\varphi_u}[\mathcal{X}^{\ast},\mathbb{X}^{\ast}](m)=m/s_m$ for all $m\in\mathbb{N}$.
\end{remark}
\begin{remark}
In the case when $(s_m)_{m=1}^\infty$ has the URP we can give a more quantitative approach to the proof of Theorem~\ref{thm:BDQGAFF}. In this particular case we have $m(\bm{w}^{\ast})=d_{1,\infty}(\bm{w})$. Moreover, by \cite{CRS2007}*{Theorem 2.5.10}, $d_{1,q}(\bm{w})$ is a Banach space for every $1<q<\infty$. Notice that, in general, $d_{1,q}(\bm{w})$ is $r$-Banach for all $r<1$ and $1<q\le \infty$, and that $d_{1,q}(\bm{w})$ is $q$-Banach for all $0<q<1$.
Applying the rotation method with the unit vector systems of $d_{1,p}(\bm{w})$ and $d_{1,q}(\bm{w})$, $0<p<q\le \infty$, yields a bidemocratic quasi-greedy basis (of a quasi-Banach space, which is locally convex if $p\ge 1$) whose fundamental function is equivalent to $(s_m)_{m=1}^\infty$. Combining \eqref{eq:NormLorentz} with Proposition~\ref{prop:diamondConditional} gives that the conditionality constants $(\bm{k}_m)_{m=1}^\infty$ of the basis we obtain satisfy
\[
\bm{k}_m\gtrsim (H_m[\bm{w}])^{1/p-1/q}, \quad m\in\mathbb{N}.
\]
In the particular case that $(s_m)_{m=1}^\infty$ has the LRP, by Lemma~\ref{lem:LorentzLRP},
\[
\bm{k}_m\gtrsim (\log m)^{1/p-1/q}, \quad m\in\mathbb{N},\; m\ge 2.
\]
Notice that, if $1<p<q<\infty$ and $(m/s_m^q)_{m=1}^\infty$ is non-decreasing, then $\mathbb{X}$ is superreflexive (see \cite{Altshuler1975}). In particular, we find a bidemocratic quasi-greedy basis of a Banach space with $\bm{k}_m\gtrsim \log m$ for $m\ge 2$; and, for each $0<s<1$, a bidemocratic quasi-greedy basis of a superreflexive Banach space with $\bm{k}_m\gtrsim (\log m)^s$ for $m\ge 2$. Thus, the rotation method serves to build `highly conditional' almost greedy bases (see \cite{AADK2019b} for background on this topic).
\end{remark}
\begin{example}
Let $\mathbb{X}$ be a Banach space with a greedy, non-symmetric basis $\mathcal{X}$ whose dual basis is also greedy. Then, if $\mathcal{X}_\pi$ is a permutation of $\mathcal{X}$ non-equivalent to $\mathcal{X}$, we have that $\mathcal{X}\diamond\mathcal{X}_\pi$ is a conditional quasi-greedy basis of $\mathbb{X}\oplus\mathbb{X}$.
For instance, in light of \cite{Temlyakov1998}*{Theorem 2.1}, this technique can be applied to the $L_p$-normalized Haar system to obtain a bidemocratic conditional quasi-greedy basis of $L_p([0,1])$, $p\in(1,2)\cup(2,\infty)$. Also, since, for the same values of $p$, the space $\ell_p$ has a greedy basis which is non-equivalent to the canonical basis (see \cite{DHK2006}*{Theorem 2.1}), this technique yields a bidemocratic conditional quasi-greedy basis of $\ell_p$.
\end{example}
\begin{bibdiv}
\begin{biblist}
\bib{AABW2021}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Bern\'{a}, Pablo~M.}, author={Wojtaszczyk, Przemys{\l}aw}, title={Greedy approximation for biorthogonal systems in quasi-Banach spaces}, date={2021}, journal={Dissertationes Math. (Rozprawy Mat.)}, volume={560}, pages={1\ndash 88}, }
\bib{AADK2019b}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Dilworth, Stephen~J.}, author={Kutzarova, Denka}, title={Building highly conditional almost greedy and quasi-greedy bases in {B}anach spaces}, date={2019}, ISSN={0022-1236}, journal={J. Funct. Anal.}, volume={276}, number={6}, pages={1893\ndash 1924}, url={https://doi.org/10.1016/j.jfa.2018.08.015}, review={\MR{3912795}}, }
\bib{AAW2019}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Wojtaszczyk, Przemys{\l}aw}, title={Conditional quasi-greedy bases in non-superreflexive {B}anach spaces}, date={2019}, ISSN={0176-4276}, journal={Constr. Approx.}, volume={49}, number={1}, pages={103\ndash 122}, url={https://doi.org/10.1007/s00365-017-9399-x}, review={\MR{3895765}}, }
\bib{AAW2021b}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Wojtaszczyk, Przemys{\l}aw}, title={On certain subspaces of {$\ell_p$} for {$0<p\leq1$} and their applications to conditional quasi-greedy bases in {$p$}-{B}anach spaces}, date={2021}, ISSN={0025-5831}, journal={Math.
Ann.}, volume={379}, number={1-2}, pages={465\ndash 502}, url={https://doi.org/10.1007/s00208-020-02069-3}, review={\MR{4211094}}, }
\bib{AAW2021}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Wojtaszczyk, Przemys{\l}aw}, title={Quasi-greedy bases in {$\ell_p$} {$(0<p<1)$} are democratic}, date={2021}, ISSN={0022-1236}, journal={J. Funct. Anal.}, volume={280}, number={7}, pages={108871, 21}, url={https://doi.org/10.1016/j.jfa.2020.108871}, review={\MR{4211033}}, }
\bib{AlbiacKalton2016}{book}{ author={Albiac, Fernando}, author={Kalton, Nigel~J.}, title={Topics in {B}anach space theory}, edition={Second}, series={Graduate Texts in Mathematics}, publisher={Springer, [Cham]}, date={2016}, volume={233}, ISBN={978-3-319-31555-3; 978-3-319-31557-7}, url={https://doi.org/10.1007/978-3-319-31557-7}, note={With a foreword by Gilles Godefroy}, review={\MR{3526021}}, }
\bib{Altshuler1975}{article}{ author={Altshuler, Zvi}, title={Uniform convexity in {L}orentz sequence spaces}, date={1975}, ISSN={0021-2172}, journal={Israel J. Math.}, volume={20}, number={3-4}, pages={260\ndash 274}, url={https://doi.org/10.1007/BF02760331}, review={\MR{385517}}, }
\bib{ACL1973}{article}{ author={Altshuler, Zvi}, author={Casazza, Peter~G.}, author={Lin, Bor~Luh}, title={On symmetric basic sequences in {L}orentz sequence spaces}, date={1973}, ISSN={0021-2172}, journal={Israel J.
Math.}, volume={15}, pages={140\ndash 155}, url={https://doi.org/10.1007/BF02764600}, review={\MR{328553}}, }
\bib{BL2020}{article}{ author={Berasategui, Miguel}, author={Lassalle, Silvia}, title={Weak semi-greedy bases and the equivalence between semi-greedy, branch semi-greedy and almost greedy {M}arkushevich bases in {B}anach spaces}, date={2020}, journal={arXiv e-prints}, eprint={2004.06849}, }
\bib{Berna2020}{article}{ author={Bern\'{a}, Pablo~M.}, title={Characterization of weight-semi-greedy bases}, date={2020}, ISSN={1069-5869}, journal={J. Fourier Anal. Appl.}, volume={26}, number={1}, pages={Paper No. 21, 21}, url={https://doi.org/10.1007/s00041-020-09727-9}, review={\MR{4056847}}, }
\bib{BBG2017}{article}{ author={Bern\'{a}, Pablo~M.}, author={Blasco, \'{O}scar}, author={Garrig\'{o}s, Gustavo}, title={Lebesgue inequalities for the greedy algorithm in general bases}, date={2017}, ISSN={1139-1138}, journal={Rev. Mat. Complut.}, volume={30}, number={2}, pages={369\ndash 392}, url={https://doi.org/10.1007/s13163-017-0221-x}, review={\MR{3642039}}, }
\bib{BBGHO2018}{article}{ author={Bern\'{a}, Pablo~M.}, author={Blasco, Oscar}, author={Garrig\'{o}s, Gustavo}, author={Hern\'{a}ndez, Eugenio}, author={Oikhberg, Timur}, title={Embeddings and {L}ebesgue-type inequalities for the greedy algorithm in {B}anach spaces}, date={2018}, ISSN={0176-4276}, journal={Constr. Approx.}, volume={48}, number={3}, pages={415\ndash 451}, url={https://doi.org/10.1007/s00365-018-9415-9}, review={\MR{3869447}}, }
\bib{CRS2007}{article}{ author={Carro, Mar\'{\i}a~J.}, author={Raposo, Jos\'{e}~A.}, author={Soria, Javier}, title={Recent developments in the theory of {L}orentz spaces and weighted inequalities}, date={2007}, ISSN={0065-9266}, journal={Mem. Amer. Math.
Soc.}, volume={187}, number={877}, pages={xii+128}, url={https://doi.org/10.1090/memo/0877}, review={\MR{2308059}}, }
\bib{DHK2006}{article}{ author={Dilworth, Stephen~J.}, author={Hoffmann, Mark}, author={Kutzarova, Denka}, title={Non-equivalent greedy and almost greedy bases in {$l_p$}}, date={2006}, ISSN={0972-6802}, journal={J. Funct. Spaces Appl.}, volume={4}, number={1}, pages={25\ndash 42}, url={https://doi.org/10.1155/2006/368648}, review={\MR{2194634}}, }
\bib{DKK2003}{article}{ author={Dilworth, Stephen~J.}, author={Kalton, Nigel~J.}, author={Kutzarova, Denka}, title={On the existence of almost greedy bases in {B}anach spaces}, date={2003}, ISSN={0039-3223}, journal={Studia Math.}, volume={159}, number={1}, pages={67\ndash 101}, url={https://doi.org/10.4064/sm159-1-4}, note={Dedicated to Professor Aleksander Pe{\l}czy\'nski on the occasion of his 70th birthday}, review={\MR{2030904}}, }
\bib{DKKT2003}{article}{ author={Dilworth, Stephen~J.}, author={Kalton, Nigel~J.}, author={Kutzarova, Denka}, author={Temlyakov, Vladimir~N.}, title={The thresholding greedy algorithm, greedy bases, and duality}, date={2003}, ISSN={0176-4276}, journal={Constr. Approx.}, volume={19}, number={4}, pages={575\ndash 597}, url={https://doi.org/10.1007/s00365-002-0525-y}, review={\MR{1998906}}, }
\bib{DOSZ2011}{article}{ author={Dilworth, Stephen~J.}, author={Odell, Edward~W.}, author={Schlumprecht, Thomas}, author={Zs\'{a}k, Andr\'{a}s}, title={Renormings and symmetry properties of $1$-greedy bases}, date={2011}, ISSN={0021-9045}, journal={J. Approx.
Theory}, volume={163}, number={9}, pages={1049\ndash 1075}, url={https://doi.org/10.1016/j.jat.2011.02.013}, review={\MR{2832742}}, }
\bib{GHO2013}{article}{ author={Garrig\'os, Gustavo}, author={Hern\'{a}ndez, Eugenio}, author={Oikhberg, Timur}, title={Lebesgue-type inequalities for quasi-greedy bases}, date={2013}, ISSN={0176-4276}, journal={Constr. Approx.}, volume={38}, number={3}, pages={447\ndash 470}, url={https://doi.org/10.1007/s00365-013-9209-z}, review={\MR{3122278}}, }
\bib{KoTe1999}{article}{ author={Konyagin, Sergei~V.}, author={Temlyakov, Vladimir~N.}, title={A remark on greedy approximation in {B}anach spaces}, date={1999}, ISSN={1310-6236}, journal={East J. Approx.}, volume={5}, number={3}, pages={365\ndash 379}, review={\MR{1716087}}, }
\bib{LinPel1968}{article}{ author={Lindenstrauss, Joram}, author={Pe{\l}czy\'{n}ski, Aleksander}, title={Absolutely summing operators in {$L_{p}$}-spaces and their applications}, date={1968}, ISSN={0039-3223}, journal={Studia Math.}, volume={29}, pages={275\ndash 326}, url={https://doi.org/10.4064/sm-29-3-275-326}, review={\MR{0231188}}, }
\bib{Oikhberg2018}{article}{ author={Oikhberg, Timur}, title={Greedy algorithm with gaps}, date={2018}, ISSN={0021-9045}, journal={J. Approx. Theory}, volume={225}, pages={176\ndash 190}, url={https://doi.org/10.1016/j.jat.2017.10.006}, review={\MR{3733255}}, }
\bib{Oswald2001}{article}{ author={Oswald, Peter}, title={Greedy algorithms and best {$m$}-term approximation with respect to biorthogonal systems}, date={2001}, ISSN={1069-5869}, journal={J. Fourier Anal.
Appl.}, volume={7}, number={4}, pages={325\ndash 341}, url={https://doi.org/10.1007/BF02514500}, review={\MR{1836816}}, }
\bib{Rudin1976}{book}{ author={Rudin, Walter}, title={Principles of mathematical analysis}, edition={Third}, publisher={McGraw-Hill Book Co., New York-Auckland-D\"{u}sseldorf}, date={1976}, note={International Series in Pure and Applied Mathematics}, review={\MR{0385023}}, }
\bib{Temlyakov1998}{article}{ author={Temlyakov, Vladimir~N.}, title={The best {$m$}-term approximation and greedy algorithms}, date={1998}, ISSN={1019-7168}, journal={Adv. Comput. Math.}, volume={8}, number={3}, pages={249\ndash 265}, url={https://doi.org/10.1023/A:1018900431309}, review={\MR{1628182}}, }
\bib{Woj1982}{article}{ author={Wojtaszczyk, Przemys{\l}aw}, title={The {F}ranklin system is an unconditional basis in {$H_{1}$}}, date={1982}, ISSN={0004-2080}, journal={Ark. Mat.}, volume={20}, number={2}, pages={293\ndash 300}, url={https://doi.org/10.1007/BF02390514}, review={\MR{686177}}, }
\bib{Woj2000}{article}{ author={Wojtaszczyk, Przemys{\l}aw}, title={Greedy algorithm for general biorthogonal systems}, date={2000}, ISSN={0021-9045}, journal={J. Approx. Theory}, volume={107}, number={2}, pages={293\ndash 314}, url={https://doi.org/10.1006/jath.2000.3512}, review={\MR{1806955}}, }
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title{Properties of distance functions on convex surfaces and applications}
\author{Jan Rataj}
\author{Lud\v ek Zaj\'\i\v cek}
\address{Charles University\\ Faculty of Mathematics and Physics\\ Sokolovsk\'a 83\\ 186 75 Praha 8\\ Czech Republic}
\email{[email protected]}
\email{[email protected]}
\subjclass{53C45, 52A20}
\keywords{distance function, convex surface, Alexandrov space, DC\ manifold, ambiguous locus, skeleton, $r$-boundary}
\thanks{The research was supported by the grant MSM 0021620839 from the Czech Ministry of Education. The second author was also supported by the grants GA\v CR 201/06/0198 and 201/09/0067.}
\begin{abstract}
If $X$ is a convex surface in a Euclidean space, then the squared intrinsic distance function $\mathrm{dist}^2(x,y)$ is DC (d.c., delta-convex) on $X\times X$ in the only natural extrinsic sense. An analogous result holds for the squared distance function $\mathrm{dist}^2(x,F)$ from a closed set $F \subset X$. Applications concerning $r$-boundaries (distance spheres) and ambiguous loci (exoskeletons) of closed subsets of a convex surface are given.
\end{abstract}
\maketitle
\markboth{J. Rataj and L.~Zaj\'{\i}\v{c}ek}{Properties of distance functions}
\section{Introduction}
The geometry of $2$-dimensional convex surfaces in $\mathbb{R}^3$ was thoroughly studied by A.D. Alexandrov \cite{Acon}. Important generalizations for $n$-dimensional convex surfaces in $\mathbb{R}^{n+1}$ are due to A.D. Milka (see, e.g., \cite{Mi}). Many (but not all) results on the geometry of convex surfaces are special cases of results from the theory of Alexandrov spaces with curvature bounded from below. Let $X \subset \mathbb{R}^{n+1}$ be an $n$-dimensional (closed bounded) convex surface and $\emptyset \neq F \subset X$ a closed set.
We will prove (Theorem \ref{distset}) that

(A)\ \ {\it the intrinsic distance $d_F(x):= \mathrm{dist}(x,F)$ is locally DC on $X \setminus F$ in the natural extrinsic sense (with respect to natural local charts).}

It is well-known that, in a Euclidean space, $d_F$ is not only locally DC but even locally semiconcave on the complement of $F$. This was generalized to smooth Riemannian manifolds in \cite{MM}. The result (A) can be applied to some problems from the geometry of convex surfaces that are formulated in the language of intrinsic distance functions. The reason for this is that DC functions (i.e., functions which are differences of two convex functions) have many nice properties which are close to those of $C^2$ functions. We present two applications. The first one (Theorem \ref{paralel}) concerns $r$-boundaries (distance spheres) of a closed set $F\subset X$ in the cases $\dim X =2,3$. It implies that, for almost all $r$, the $r$-boundary is a Lipschitz manifold, and so provides an analogue of well-known results proved (in Euclidean spaces) by Ferry \cite{Fe} and Fu \cite{Fu}. The second application (Theorem \ref{zam}) concerns the ambiguous locus (exoskeleton) of a closed subset of an $n$-dimensional ($n \in \mathbb{N}$) convex surface. This result is essentially stronger than the corresponding result of T. Zamfirescu in Alexandrov spaces of curvature bounded from below. It is not clear whether the results of these applications can be obtained as consequences of results in Alexandrov spaces (possibly with some additional properties). In any case, there are serious obstacles to obtaining such generalizations by our methods (see Remark \ref{obst}). To explain briefly the ``natural extrinsic sense'' from (A), consider for a while an unbounded convex surface $X \subset \mathbb{R}^{n+1}$ which is the graph of a convex function $f: \mathbb{R}^{n} \to \mathbb{R}$, and denote $x^*:= (x,f(x))$ for $x \in \mathbb{R}^n$.
Then (A) also holds (see Remark \ref{unbounded}) and is equivalent to the statement

(B)\ \ {\it the function $h(x):= \mathrm{dist}(x^*,F)$ is locally DC on $\{x \in \mathbb{R}^n:\ x^* \notin F\}$.}

Moreover, it is true that

(C)\ \ {\it $h^2(x):= \mathrm{dist}^2(x^*,F)$ is DC on the whole of $\mathbb{R}^n$, and}

(D)\ \ {\it the function $g(x,y):= \mathrm{dist}^2(x^*,y^*)$ is DC on $\mathbb{R}^{2n}= \mathbb{R}^n \times \mathbb{R}^n$.}

For a natural formulation of the corresponding results (Theorems \ref{distset} and \ref{main}) for a closed bounded convex surface $X$, we will define in a canonical way the structure of a DC manifold on $X$ and $X \times X$. A weaker version of the result (C) (in the case $n=2$) was known for a long time to the second author, who used a method similar to that of Alexandrov's proof (for two-dimensional convex surfaces) of the Alexandrov-Toponogov theorem, namely an approximation of a general convex surface by polyhedral convex surfaces and a developing of those polyhedral convex surfaces ``along geodesics''. However, it is not easy to formalize this geometrically transparent method (even for $n=2$). In the present article we use another method, suggested by the first author. Namely, we use well-known semiconcavity properties of distance functions on $X$ and $X\times X$ in an intrinsic sense (i.e., in the sense of the theory of length spaces). Using this method, we avoid the use of developings. However, our proof still needs approximation by polyhedral surfaces. Note that, in the case $n=1$, the above statements (A)-(D) have straightforward proofs, and an example (in which $F$ is a singleton) can easily be constructed in which the DC function $h^2$ from (C) is neither semiconcave nor semiconvex. The organization of the paper is as follows. In Section~2 (Preliminaries) we recall some facts concerning length spaces, semiconcave functions, DC functions, DC manifolds, and DC surfaces.
Further, we prove (by standard methods) two needed technical lemmas on the approximation of convex surfaces by polyhedral surfaces. In Section~3 we prove our main results on distance functions on closed bounded convex surfaces. Section~4 is devoted to the applications which we have already briefly described above. In the last short Section~5 we present several remarks and questions concerning DC structures on length spaces. \section{Preliminaries} In a metric space, $B(c,r)$ denotes the open ball with center $c$ and radius $r$. The symbol $\mathcal{H}^k$ stands for the $k$-dimensional Hausdorff measure. If $a, b \in \mathbb{R}^n$, then $[a,b]$ denotes the segment joining $a$ and $b$. If $F$ is a Lipschitz mapping, then $\mathrm{Lip}\, F$ stands for the least Lipschitz constant of $F$. If $W$ is a unitary space and $V$ is a subspace of $W$, then we denote by $V_W^{\perp}$ the orthogonal complement of $V$ in $W$. If $f$ is a mapping from a normed space $X$ to a normed space $Y$, then the symbol $df(a)$ stands for the (Fr\'echet) differential of $f$ at $a\in X$. If $df(a)$ exists and $$ \lim_{x,y \to a, x\neq y} \ \frac{f(y)-f(x)- df(a)(y-x)}{\|y-x\|} =0,$$ then we say that $f$ is {\em strictly differentiable} at $a$ (cf. \cite[p.~19]{Mor}). For the sake of brevity, we introduce the following notation (we use the symbol $\Delta^{2} $, though $\Delta^2f(x,y)$ is one half of a second difference). \begin{definition}\label{drudif} If $f$ is a real function defined on a subset $U$ of a vector space and $x,y, \frac{x+y}2 \in U$, we denote \begin{equation}\label{drd} \Delta^2f(x,y):=\frac{f(x)+f(y)}2 - f\left(\frac{x+y}2\right). \end{equation} \end{definition} Note that, if $f(y) = \|y\|^2,\ y \in \mathbb{R}^n$, then \begin{equation}\label{drddrm} \Delta^2f(x+h,x-h)= \frac{\|x+h\|^2 + \|x-h\|^2}{2} - \|x\|^2 = \|h\|^2. \end{equation} We shall need the following easy lemma.
Its first part is an obvious consequence of \cite[Lemma 1.16]{VeZa} (which works with convex functions). The second part clearly follows from the first one. \begin{lemma}\label{VZ} \begin{enumerate} \item Let $f: (a,b) \to \mathbb{R}$ be a continuous function. Suppose that for every $t \in (a,b)$ and $\delta > 0$ there exists $0<d < \delta$ such that $\Delta^2f(t+d,t-d)\leq 0$. Then $f$ is concave on $(a,b)$. \item Let $f$ be a continuous function on an open convex subset $C \subset \mathbb{R}^n$. Suppose that for every $x\in C$ there exists $\delta >0$ such that $\Delta^2f(x+h,x-h)\leq 0$ whenever $\|h\| < \delta$. Then $f$ is concave on $C$. \end{enumerate} \end{lemma} \subsection{Length spaces and semiconcave functions} A metric space $(X,d)$ is called a {\it length (or inner, or intrinsic) space} if, for each $x, y \in X$, $d(x,y)$ equals the infimum of lengths of curves joining $x$ and $y$ (see \cite[p. 38]{BBI} or \cite[p. 824]{Pl}). If $X$ is a length space, then a curve $\varphi: [a,b] \to X$ is called {\it minimal} if it is a shortest curve joining its endpoints $x=\varphi(a)$ and $y=\varphi(b)$, parametrized by the arc-length. A length space $X$ is called a {\it geodesic (or strictly intrinsic) space} if each pair of points in $X$ can be joined by a minimal curve. Note that any complete, locally compact length space is geodesic (see \cite[Theorem 8]{Pl}). Alexandrov spaces with curvature bounded from below are defined as length spaces which have a lower curvature bound in the sense of Alexandrov. The precise definition of these spaces can be found in \cite{BBI} or \cite{Pl}. (Frequently Alexandrov spaces are supposed to be complete and/or finite dimensional.) If $X$ is a length space and $\varphi: [a,b] \to X$ a minimal curve, then the point $s=\varphi((a+b)/2)$ is called {\it the midpoint of the minimal curve $\varphi$}.
A point $t$ is called {\it a midpoint of $x,y$} if it is the midpoint of a minimal curve $\varphi$ joining $x$ and $y$. If $\varphi$ as above can be chosen to lie in a set $G\subset X$, we will say that $t$ is {\it a $G$-midpoint of $x,y$}. One of several natural equivalent definitions (see \cite[Definition 1.1.1 and Proposition 1.1.3]{CaSi}) of semiconcavity in $\mathbb{R}^n$ reads as follows. \begin{definition}\label{semieuk} A function $u$ on an open set $A\subset \mathbb{R}^n$ is called {\it semiconcave} with a {\it semiconcavity constant} $c\geq 0$ if $u$ is continuous on $A$ and \begin{equation}\label{seeu} \Delta^2u(x+h,x-h)\leq (c/2) \|h\|^2, \end{equation} whenever $x, h \in \mathbb{R}^n$ and $[x-h,x+h] \subset A$. \end{definition} \begin{remark}\label{semieuk2} It is well-known and easy to see (cf. \cite[Proposition 1.1.3]{CaSi}) that $u$ is semiconcave on $A$ with semiconcavity constant $c$ if and only if the function $g(x) = u(x) - (c/2) \|x\|^2$ is locally concave on $A$. \end{remark} The notion of semiconcavity extends naturally to length spaces $X$. The authors working in the theory of length spaces use mostly the following terminology (cf. \cite[p.\ 5]{Pet} or \cite[p.\ 862]{Pl}). \begin{definition}\label{semilen} Let $X$ be a geodesic space. Let $G \subset X$ be open, $c\geq 0$, and $f: G \to \mathbb{R}$ be a locally Lipschitz function. \begin{enumerate} \item We say that $f$ is $c$-{\it concave} if, for each minimal curve $\gamma: [a,b] \to G$, the function $g(t) = f\circ \gamma(t) - (c/2)t^2$ is concave on $[a,b]$. \item We say that $f$ is {\it semiconcave} on $G$ if for each $x \in G$ there exists $c \geq 0$ such that $f$ is $c$-concave on an open neighbourhood of $x$. \end{enumerate} \end{definition} \begin{remark}\label{semlen} If $X=\mathbb{R}^n$, then $c$-concavity coincides with semiconcavity with constant $c$.
\end{remark} We will need the following simple well-known characterization of $c$-concavity. For lack of a reference, we give the proof. \begin{lemma}\label{loclen} Let $Y$ be a geodesic space. Let $M \subset Y$ be open, $c\geq 0$, and $f: M\to \mathbb{R}$ be a locally Lipschitz function. Then the following are equivalent. \begin{enumerate} \item $f$ is $c$-concave on $M$. \item If $x,y \in M$, and $s$ is an $M$-midpoint of $x,y$, then \begin{equation}\label{semmid} \frac{f(x)+f(y)}{2}- f(s) \leq (c/2) d^2, \end{equation} where $d:= (1/2)\ \mathrm{dist}(x,y)$. \end{enumerate} \end{lemma} \begin{proof} Suppose that (i) holds. To prove (ii), let $x, y, s, d$ be as in (ii). Choose a minimal curve $\gamma: [a,b] \to M$ with $\gamma(a) = x, \gamma(b)=y$ and $\gamma((1/2)(a+b)) = s$. By (i), the function $g(t) = f\circ \gamma(t) - (c/2)t^2$ is concave on $[a,b]$. So $\widetilde f := f \circ \gamma$ is semiconcave with semiconcavity constant $c$ on $(a,b)$ by Remark \ref{semieuk2}. Consequently, $ \Delta^2\widetilde f(b-h,a+h) \leq (c/2) |(1/2)(b-a)-h|^2$ for each $0 < h < (1/2)(b-a)$. By continuity of $\widetilde f$ we clearly obtain \eqref{semmid}, since $d= (1/2)(b-a)$. To prove (ii)$\Rightarrow$(i), consider a minimal curve $\gamma: [a,b] \to M$ and suppose that $f$ satisfies (ii). It is easy to see that then $\widetilde f := f \circ \gamma$ is semiconcave with semiconcavity constant $c$ on $(a,b)$. By Remark \ref{semieuk2}, $g(t) = f\circ \gamma(t) - (c/2)t^2$ is concave on $(a,b)$, and therefore (by continuity of $g$), also on $[a,b]$. \end{proof} \subsection{DC manifolds and DC surfaces} \begin{definition}\label{dcfce} Let $C$ be a nonempty convex set in a real normed linear space $X$. A function $f\colon C\to\mathbb{R}$ is called {\em DC}\ (or d.c., or delta-convex) if it can be represented as a difference of two continuous convex functions on $C$.
If $Y$ is a finite-dimensional normed linear space, then a mapping $F\colon C\to Y$ is called {\em DC} if $y^*\circ F$ is a DC function on $C$ for each linear functional $y^* \in Y^*$. \end{definition} \begin{remark}\label{podc} \begin{enumerate} \item To prove that $F$ is DC, it is clearly sufficient to show that $y^*\circ F$ is DC for each $y^*$ from a basis of $Y^*$. \item Each DC mapping is clearly locally Lipschitz. \item There are many works on optimization that deal with DC functions. A theory of DC (delta-convex) mappings in the case when $Y$ is a general normed linear space was built in \cite{VeZa}. \end{enumerate} \end{remark} Some basic properties of DC functions and mappings are contained in the following lemma. \begin{lemma}\label{zakldc} Let $X,Y,Z$ be finite-dimensional normed linear spaces, let $C\subset X$ be a nonempty convex set, and let $U \subset X$ and $V \subset Y$ be open sets. \begin{enumerate} \item[(a)]\ {\rm (\cite{A1})} If the derivative of a function $f$ on $C$ is Lipschitz, then $f$ is DC. In particular, each affine mapping is DC. \item[(b)]\ {\rm (\cite{Ha})} If a mapping $F: C \to Y$ is locally DC on $C$, then it is DC on $C$. \item[(c)]\ {\rm (\cite{Ha})} Let a mapping $F: U \to Y$ be locally DC, $F(U) \subset V$, and let $G:V\to Z$ be locally DC. Then $G \circ F$ is locally DC on $U$. \item[(d)]\ {\rm (\cite{VeZa})} Let $F: U \to V$ be a bilipschitz bijection which is locally DC on $U$. Then $F^{-1}$ is locally DC on $V$. \end{enumerate} \end{lemma} Since locally DC mappings are stable with respect to compositions \linebreak (Lemma~\ref{zakldc}(c)), the notion of an $n$-dimensional DC manifold can be defined in an obvious way, see \cite[\S\S2.6, 2.7]{Kuwae}. The importance of this notion was shown in Perelman's preprint \cite{Per}, cf.\ Section~\ref{Sec-Rem}. \begin{definition}\label{DCman} Let $X$ be a paracompact Hausdorff topological space and $n \in \mathbb{N}$.
\begin{enumerate} \item We say that $(U, \varphi)$ is an {\it $n$-dimensional chart on} $X$ if $U$ is a nonempty open subset of $X$ and $\varphi: U \to \mathbb{R}^n$ is a homeomorphism of $U$ onto an open set $\varphi(U)\subset\mathbb{R}^n$. \item We say that two $n$-dimensional charts $(U_1, \varphi_1)$ and $(U_2, \varphi_2)$ on $X$ are {\it DC-compatible} if either $U_1 \cap U_2 = \emptyset$, or $U_1 \cap U_2 \neq \emptyset$ and the {\it transition maps} $\varphi_2 \circ (\varphi_1)^{-1}$ and $\varphi_1 \circ (\varphi_2)^{-1}$ are locally DC (on their domains $\varphi_1(U_1 \cap U_2)$ and $\varphi_2(U_1 \cap U_2)$, respectively). \item We say that a system $\mathcal{A}$ of $n$-dimensional charts on $X$ is an {\it $n$-dimen\-si\-onal DC atlas on} $X$ if the domains of the charts from $\mathcal{A}$ cover $X$ and any two charts from $\mathcal{A}$ are DC-compatible. \end{enumerate} \end{definition} Obviously, each $n$-dimensional DC atlas $\mathcal{A}$ on $X$ can be extended to a uniquely determined maximal $n$-dimensional DC atlas (which consists of all $n$-dimensional charts on $X$ that are DC-compatible with all charts from $\mathcal{A}$). We will say that $X$ {\it is equipped with an ($n$-dimensional) DC structure} (or with a structure of an $n$-dimensional DC manifold) if a maximal $n$-dimensional DC atlas on $X$ is determined (e.g., by a choice of an $n$-dimensional DC atlas). Let $X$ be equipped with a DC structure and let $f$ be a function defined on an open set $G \subset X$. Then we say that $f$ is {\it DC} if $f\circ \varphi^{-1}$ is locally DC on $\varphi(U \cap G)$ for each chart $(U,\varphi)$ from the maximal DC atlas on $X$ such that $U\cap G\neq\emptyset$. Clearly, it is sufficient to check this condition for each chart from an arbitrary fixed DC atlas.
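As a small illustration of the DC property from Definition \ref{dcfce} (our own toy example; it is not used later): the function $f(x)=|x|-x^2$ on $\mathbb{R}$ is DC, being a difference of two convex functions, yet it is neither convex nor concave, as the second-difference test \eqref{drd} shows.

```latex
% f is DC: a difference of two convex functions.
f(x) \;=\; |x| - x^2 \;=\; g(x) - h(x), \qquad g(x) = |x|, \quad h(x) = x^2 .
% Concavity fails at 0: for 0 < t < 1,
\Delta^2 f(t,-t) \;=\; \frac{f(t)+f(-t)}{2} - f(0) \;=\; t - t^2 \;>\; 0 ;
% convexity fails on (0,1), where f(x) = x - x^2: for 0 < t < x < x+t < 1,
\Delta^2 f(x+t,\,x-t) \;=\; -\,t^2 \;<\; 0 .
```

On the other hand, this $f$ is semiconvex in the sense dual to Definition \ref{semieuk}, since $f(x) + x^2 = |x|$ is convex.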
\begin{remark}\label{hilbert} \begin{enumerate} \item If we consider, in the definition of the chart $(U,\varphi)$, a mapping $\varphi$ from $U$ to an $n$-dimensional unitary space $H_\varphi$, the whole Definition~\ref{DCman} does not change its sense. (Indeed, we can identify $H_\varphi$ with $\mathbb{R}^n$ by an isometry because of Lemma~\ref{zakldc}~(a), (c).) In the following, it will be convenient for us to use such (formally more general) charts with range in an $n$-dimensional linear subspace of a Euclidean space. \item If $X, Y$ are nonempty spaces equipped with $m$- and $n$-dimensional DC structures, respectively, then the Cartesian product $X\times Y$ is canonically equipped with an $(m+n)$-dimensional DC structure. Indeed, let $\mathcal{A}_X,\mathcal{A}_Y$ be $m$- and $n$-dimensional DC atlases on $X,Y$, respectively. Then $$\mathcal{A}=\{ (U_X\times U_Y,\varphi_X\otimes\varphi_Y):\, (U_X,\varphi_X)\in\mathcal{A}_X, (U_Y,\varphi_Y)\in\mathcal{A}_Y\}$$ is an $(m+n)$-dimensional DC atlas on $X\times Y$, if we define $(\varphi_X\otimes\varphi_Y)(x,y)=(\varphi_X(x),\varphi_Y(y))$. \item If $X, Y$ are equipped with $m$- and $n$-dimensional DC structures, respectively, and $f:X\times Y\to\mathbb{R}$ is DC, then the section $x\mapsto f(x,y)$ is DC on $X$ for any $y\in Y$, and the section $y\mapsto f(x,y)$ is DC on $Y$ for any $x\in X$. \end{enumerate} \end{remark} \begin{definition}\label{surfldc} Let $H$ be an $(n+k)$-dimensional unitary space ($n,k \in \mathbb{N}$). We say that a set $M \subset H$ is a {\it $k$-dimensional Lipschitz} (resp.\ {\it DC}) {\it surface} if it is nonempty and for each $x \in M$ there exist a $k$-dimensional linear space $Q \subset H$, an open neighbourhood $W$ of $x$, a set $G \subset Q$ open in $Q$ and a Lipschitz (resp.
locally DC) mapping $h: G \to Q^{\perp}$ such that $$ M \cap W = \{u + h(u): u \in G\}.$$ \end{definition} \begin{remark}\label{surf} \begin{enumerate} \item Lipschitz surfaces were considered e.g.\ by Whitehead \cite[p. 165]{Wh} or Walter \cite{Walter}, who called them strong Lipschitz submanifolds. Obviously, each DC surface is a Lipschitz surface. For some properties of DC surfaces see \cite{Zajploch}. \item If we suppose, in the above definition of a DC surface, that $G$ is convex and $h$ is DC and Lipschitz, we clearly obtain the same notion. \item Each Lipschitz (resp.\ DC) surface admits a natural structure of a Lipschitz (resp.\ DC) manifold that is given by the charts of the form $( W \cap M, \psi^{-1})$, where $\psi(u)= u + h(u),\ u \in G$ (cf. Remark \ref{hilbert}(i)). \end{enumerate} \end{remark} \begin{lemma}\label{aznapl} Let $H$ be an $n$-dimensional unitary space, $V \subset H$ an open convex set, and $f:V \to \mathbb{R}^m$ be a DC mapping. Then there exists a sequence $(T_i)$ of $(n-1)$-dimensional DC surfaces in $H$ such that $f$ is strictly differentiable at each point of $V \setminus \bigcup_{i=1}^{\infty} T_i$. \end{lemma} \begin{proof} Let $f=(f_1,\dots,f_m)$. By the definition of a DC mapping, $f_j = \alpha_j - \beta_j$, where $\alpha_j$ and $\beta_j$ are convex functions. By \cite{Zaj}, for each $j$ we can find a sequence $T^j_k$, $k \in \mathbb{N}$, of $(n-1)$-dimensional DC surfaces in $H$ such that both $\alpha_j$ and $\beta_j$ are differentiable at each point of $D_j:= H \setminus \bigcup_{k=1}^{\infty} T^j_k$. Since each convex function is strictly differentiable at each point at which it is (Fr\'echet) differentiable (see, e.g., \cite[Proposition 3.8]{VeZa} for a proof of this well-known fact), we conclude that each $f_j$ is strictly differentiable at each point of $D_j$.
Since strict differentiability of $f$ clearly follows from strict differentiability of all the $f_j$'s, the proof is finished after ordering all sets $T^j_k$, $k \in \mathbb{N}$, $j=1,\dots,m$, into a sequence $(T_i)$. \end{proof} \subsection{Convex surfaces} \begin{definition} \label{convex_surface} A {\it convex body} in $\mathbb{R}^n$ is a compact convex subset with non\-empty interior. By a {\it convex surface} in $\mathbb{R}^n$ we understand the boundary $X=\partial C$ of a convex body $C$. A convex surface $X$ is said to be {\it polyhedral} if it can be covered by finitely many hyperplanes. \end{definition} It is well-known that a convex surface in $\mathbb{R}^n$ with its intrinsic metric is a complete geodesic space with nonnegative curvature (see \cite{Buyalo} or \cite[\S10.2]{BBI}). Obviously, each convex surface $X$ is a DC surface (cf. Remark \ref{Rcomp}(iii)), and so has a canonical DC structure. In the following, we will work mainly with ``standard'' DC charts on $X$ (which are considered in the generalized sense of Remark \ref{hilbert}(i)). \begin{definition}\label{standard} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface and $U$ a nonempty, relatively open subset of $X$. We say that $(U, \varphi)$ is a {\it standard $n$-dimensional chart} on $X$ if there exist a unit vector $e\in\mathbb{R}^{n+1}$, a convex, relatively open subset $V$ of the hyperplane $e^\perp$, and a Lipschitz convex function $f: V \to \mathbb{R}$ such that, setting $F (x) := x + f(x) e,\ x \in V$, we have $U = F(V)$ and $\varphi = F^{-1}$. In this case we will say that $(U,\varphi)$ is an {\it $(e,V)$-standard chart} on $X$ and $f$ will be called {\it the convex function associated with the standard chart}.
\end{definition} \begin{remark}\label{Rcomp} \begin{enumerate} \item Clearly, if $(U,\varphi)$ is an $(e,V)$-standard chart on $X$ and $\pi$ denotes the orthogonal projection onto $e^\perp$, then $\varphi = \pi\restriction_U$. \item Let $(U_1, \varphi_1)$ and $(U_2, \varphi_2)$ be standard charts as in the above definition. Then these charts are DC-compatible. Indeed, $\varphi_1^{-1}$ is a DC mapping from $V_1$ to $\mathbb{R}^{n+1}$ and $\varphi_2$ is a restriction of a linear mapping $\pi$ (see (i)). So $\varphi_2 \circ (\varphi_1)^{-1}=\pi\circ (\varphi_1)^{-1}$ is locally DC by Lemma \ref{zakldc}(a),(c). \item Let $X \subset \mathbb{R}^{n+1}$ be a convex surface, $z \in X$, and let $C$ be the convex body for which $X = \partial C$. Choose $a \in {\rm int}\, C$, set $e:= \frac{a-z}{\|a-z\|}$ and $V:= \pi (B(a,\delta))$, where $\delta>0$ is sufficiently small and $\pi$ is the orthogonal projection of $\mathbb{R}^{n+1}$ onto $e^{\perp}$. Then it is easy to see that there exists an $(e,V)$-standard chart $(U,\varphi)$ on $X$ with $z \in U$. \end{enumerate} \end{remark} By (ii) and (iii) above, the following definition is correct. \begin{definition}\label{ststr} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface. Then the {\it standard DC structure} on $X$ is determined by the atlas of all standard $n$-dimensional charts on $X$. \end{definition} \begin{lemma}\label{obrdc} Let $X \subset \mathbb{R}^{n+1}$ ($n\geq 2$) be a convex surface and let $(U,\varphi)$ be an $(e,V)$-standard chart on $X$. Let $T \subset e^{\perp}$ be an $(n-1)$-dimensional DC surface in $e^{\perp}$ with $T\cap V\neq\emptyset$. Then $\varphi^{-1}(T \cap V)$ is an $(n-1)$-dimensional DC surface in $\mathbb{R}^{n+1}$. \end{lemma} \begin{proof} Let $f$ be the convex function associated with $(U,\varphi)$.
Let $z$ be an arbitrary point of $\varphi^{-1}(T \cap V)$. Denote $x:= \varphi(z)$. By Definition \ref{surfldc} there exist an $(n-1)$-dimensional linear space $Q \subset e^{\perp}$, a set $G \subset Q$ open in $Q$, an open neighbourhood $W$ of $x$ in $e^{\perp}$ and a locally DC mapping $h: G \to Q_{e^{\perp}}^{\perp}$ such that $T \cap W = \{u + h(u):\ u \in G\}$. We can and will suppose that $W \subset V$. Observing that $z \in \varphi^{-1}(T \cap W)$ and $\varphi^{-1}(T \cap W)$ is an open set in $\varphi^{-1}(T \cap V)$, $$\varphi^{-1}(T \cap W)= \{u + h(u)+ f(u+h(u)) e:\ u \in G\} $$ and $u \mapsto h(u)+ f(u+h(u)) e$ is a locally DC mapping $G \to Q^{\perp}_{\mathbb{R}^{n+1}}$, we finish the proof. \end{proof} \begin{lemma}\label{aprox} \begin{enumerate} \item[{\rm (i)}] Let $X$ be a convex surface in $\mathbb{R}^m$. Then there exists a sequence $(X_k)$ of polyhedral convex surfaces in $\mathbb{R}^m$ converging to $X$ in the Hausdorff distance. \item[{\rm (ii)}] Let convex surfaces $X_k$ converge in the Hausdorff distance to a convex surface $X$ in $\mathbb{R}^m$ and let $\mathrm{dist}_X$, $\mathrm{dist}_{X_k}$ denote the intrinsic distances on $X$, $X_k$, respectively. Assume that $a, b \in X$, $a_k, b_k \in X_k$, $a_k \to a$ and $b_k \to b$. Then $\mathrm{dist}_{X_k}(a_k,b_k) \to \mathrm{dist}_X(a,b)$. \item[{\rm (iii)}] If $X_k,X$ are as in (ii), then ${\rm diam}\, X_k\to{\rm diam}\, X$, where ${\rm diam}\, X_k$ and ${\rm diam}\, X$ are the intrinsic diameters of $X_k$ and $X$, respectively. \end{enumerate} \end{lemma} \begin{proof} (i) is well-known, see e.g.\ \cite[\S1.8.15]{Schneider}. (ii) can be proved as in \cite[Lemma~10.2.7]{BBI}, where a slightly different assertion is shown. We present here the proof for completeness.
Let $C,C_k$ be convex bodies in $\mathbb{R}^m$ such that $X=\partial C$, $X_k=\partial C_k$, $k\in\mathbb{N}$, and assume, without loss of generality, that the origin lies in the interior of $C$. It is easy to show that, since the Hausdorff distance of $X$ and $X_k$ tends to zero, there exist $k_0 \in \mathbb{N}$ and a sequence $\varepsilon_k \searrow 0$ such that $$(1-\varepsilon_k)C\subset C_k\subset (1+\varepsilon_k)C,\quad k \geq k_0.$$ For a convex body $D$ in $\mathbb{R}^m$ and the corresponding convex surface $Y=\partial D$, we shall denote by $\Pi_Y$ the metric projection of $\mathbb{R}^m$ onto $Y$, defined outside of the interior of $D$. The symbol $\mathrm{dist}_Y$ denotes the intrinsic distance on the convex surface $Y$. Let $a,b,a_k,b_k$ from the assumption be given, and (for $k \geq k_0$) denote $\widetilde{a}_k=\Pi_{X_k}((1+\varepsilon_k)a)$, $\widetilde{b}_k=\Pi_{X_k}((1+\varepsilon_k)b)$. Since $\Pi_{X_k}$ is a contraction (see e.g. \cite[Theorem~1.2.2]{Schneider}), we have \begin{eqnarray*} \mathrm{dist}_{X_k}(\widetilde{a}_k,\widetilde{b}_k)&\leq& \mathrm{dist}_{(1+\varepsilon_k)X}((1+\varepsilon_k)a,(1+\varepsilon_k)b)\\ &=&(1+\varepsilon_k)\mathrm{dist}_X(a,b). \end{eqnarray*} Further, clearly $\widetilde{a}_k\to a$ and $\widetilde{b}_k\to b$, which implies that $\mathrm{dist}_{X_k}(\widetilde{a}_k,a_k)\to 0$ and $\mathrm{dist}_{X_k}(\widetilde{b}_k,b_k)\to 0$. Consequently, $$\limsup_{k\to\infty}\mathrm{dist}_{X_k}(a_k,b_k)\leq\mathrm{dist}_X(a,b).$$ The inequality \ $\liminf_{k\to\infty}\mathrm{dist}_{X_k}(a_k,b_k)\geq\mathrm{dist}_X(a,b)$ \ is obtained in a similar way, considering the metric projections of $a_k$ and $b_k$ onto $(1-\varepsilon_k)X$. (iii) is a straightforward consequence of (ii) and the compactness of $X$.
\end{proof} \begin{lemma}\label{aprox2} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface, $(U,\varphi)$ an $(e,V)$-standard chart on $X$, and let $f$ be the associated convex function. Let $(X_k)$ be a sequence of convex surfaces which tends in the Hausdorff metric to $X$, and let $W \subset V$ be an open convex set such that $\overline{W} \subset V$. Then there exists $k_0 \in \mathbb{N}$ such that, for each $k \geq k_0$, the surface $X_k$ has an $(e,W)$-standard chart $(U_k,\varphi_k)$, and the associated convex functions $f_k$ satisfy \begin{equation}\label{vlfk} f_k(x) \to f(x),\ x \in W\ \ \ \ \text{and}\ \ \ \ \limsup_{k \to \infty}\, \operatorname{Lip} f_k \leq \operatorname{Lip} f. \end{equation} \end{lemma} \begin{proof} Denote by $C$ (resp.\ $C_k$) the convex body for which $X = \partial C$ (resp.\ $X_k = \partial C_k$). Clearly, the convex function $f$ has the form $$f(v)=\inf\{ t\in\mathbb{R}:\, v+te\in C\},\quad v\in V.$$ Let $\pi$ be the orthogonal projection onto $e^\perp$ and denote $$W_r:=\{v \in e^{\perp}:\ \mathrm{dist}(v, W) < r\},\quad r>0.$$ Let $\varepsilon,\delta>0$ be such that $W_{\varepsilon+\delta}\subset V$, and let $k_0=k_0(\delta)\in\mathbb{N}$ be such that the Hausdorff distance of $X$ and $X_k$ (and, hence, also of $C$ and $C_k$) is less than $\delta$ for all $k>k_0$. Fix $k>k_0$. It is easy to show that $$f_k^*(v)= \inf\{ t\in\mathbb{R}:\, v+te\in C_k\},\quad v\in W_\varepsilon$$ is a finite convex function. We shall show that \begin{equation} \label{bbb} |f_k^*(v)-f(v)|\leq (1+\mathrm{Lip}\, f)\delta,\quad v\in W_\varepsilon. \end{equation} Take a point $v\in W_\varepsilon$ and denote $x=v+f(v)e\in X$ and $y=v+f_k^*(v)e\in X_k$. From the definition of the Hausdorff distance, there must be a point $c\in C$ with $\|c-y\|<\delta$.
This implies that for $w:=\pi(c)$ we have $f(w)\leq c\cdot e$ and $$f_k^*(v)=y\cdot e\geq c\cdot e-\delta \geq f(w)-\delta\geq f(v)-\delta \operatorname{Lip} f-\delta.$$ For the other inequality, note that, since $f_k^*$ is convex, there exists a unit vector $u\in\mathbb{R}^{n+1}$ with $u\cdot e=:-\eta<0$ such that $(z-y)\cdot u\leq 0$ for all $z\in C_k$ (i.e., $u$ is a unit outer normal vector to $C_k$ at $y$). It is easy to see that $(z-y)\cdot u\leq\delta$ for all $z\in C$, since the Hausdorff distance of $C$ and $C_k$ is less than $\delta$. Consider the point $z=w+f(w)e\in C$ with $w=v+\delta u^*$, where $u^*=\pi(u)/\|\pi(u)\|$ if $\pi(u)\neq 0$ and $u^*$ is any unit vector in $e^\perp$ if $\pi(u)=0$. Then \begin{eqnarray*} \delta&\geq&(z-y)\cdot u=(w+f(w)e-v-f_k^*(v)e)\cdot u\\ &=&(w-v)\cdot u+(f(w)-f_k^*(v))(e\cdot u)\\ &=&\delta\sqrt{1-\eta^2}+(f(w)-f_k^*(v))(-\eta)\\ &\geq&\delta(1-\eta)+(f_k^*(v)-f(w))\eta, \end{eqnarray*} which implies that $$f_k^*(v)\leq f(w)+\delta\leq f(v)+\delta \operatorname{Lip} f+\delta$$ by the Lipschitz property of $f$, and \eqref{bbb} is verified. We shall show now that for $k>k_0$, $X_k$ has an $(e,W)$-standard chart with associated convex function $f_k:=f_k^*\restriction W$ (i.e., that $f_k$ is Lipschitz) and that \eqref{vlfk} holds. Given two different points $u,v\in W$, we define points $u^*, v^* \in W_\varepsilon$ as follows: we set $u^*=u-\varepsilon\frac{v-u}{\| v-u\|}$, $v^*=v$ if $f_k(u)\geq f_k(v)$, and $u^*=u$, $v^*=v+\varepsilon\frac{v-u}{\| v-u\|}$ if $f_k(u)\leq f_k(v)$. Then, using \eqref{bbb} and the convexity of $f_k^*$, we obtain $$\frac{|f_k(u) - f_k(v)|}{\|u-v\|} \leq \frac{|f_k^*(u^*) - f_k^*(v^*)|}{\|u^*-v^*\|}\leq \operatorname{Lip} f + \frac{(2+2\operatorname{Lip} f)\delta}{\varepsilon}$$ whenever $k>k_0(\delta)$.
Therefore, $\operatorname{Lip} f_k \leq \operatorname{Lip} f + \frac{(2+2\operatorname{Lip} f)\delta}{\varepsilon}$. Using this inequality, \eqref{bbb}, and the fact that $\delta>0$ can be arbitrarily small, we obtain \eqref{vlfk}. \end{proof} \section{Extrinsic properties of distance functions on convex surfaces} We will prove our results via the following proposition concerning intrinsic properties of distance functions on convex surfaces, which is an easy consequence of well-known results. \begin{proposition} \label{P1} Let $X$ be a complete geodesic (Alexandrov) space with nonnegative curvature. Then the Cartesian product $X^2$ with the product metric $$\mathrm{dist}_{X\times X} ((x_1,x_2),(y_1,y_2))=\sqrt{\mathrm{dist}^2(x_1,y_1)+\mathrm{dist}^2(x_2,y_2)}$$ is a complete geodesic space with nonnegative curvature as well, and the squared distance $g(x_1,x_2):=\mathrm{dist}^2(x_1,x_2)$ is $4$-concave on $X^2$. \end{proposition} \begin{proof} The assertion on the properties of $X^2$ is well-known, see e.g. \cite[\S3.6.1, \S10.2.1]{BBI}. In order to show the $4$-concavity of $g$, we shall use the fact that \begin{equation} \label{diag} g(x_1,x_2)=2\,\mathrm{dist}_{X\times X}^2((x_1,x_2),D),\quad x_1,x_2\in X, \end{equation} where $D$ is the diagonal in $X\times X$. To see that \eqref{diag} holds, note that \begin{eqnarray*} \mathrm{dist}_{X\times X}^2((x_1,x_2),D) &=&\inf_{y\in X}\mathrm{dist}_{X\times X}^2((x_1,x_2),(y,y))\\ &=&\inf_{y\in X}(\mathrm{dist}^2(x_1,y)+\mathrm{dist}^2(x_2,y)). \end{eqnarray*} Choosing a midpoint of $x_1$ and $x_2$ for $y$ in the last expression, we see that $\mathrm{dist}_{X\times X}^2((x_1,x_2),D)\leq \frac 12\mathrm{dist}^2(x_1,x_2)$.
On the other hand, if $y$ is an arbitrary point of $X$, we get by the triangle inequality $$\mathrm{dist}^2(x_1,x_2)\leq 2(\mathrm{dist}^2(x_1,y)+\mathrm{dist}^2(x_2,y))= 2\mathrm{dist}_{X\times X}^2((x_1,x_2),(y,y)),$$ and thus we get the other inequality, proving \eqref{diag}. To finish the proof, we use the following fact: {\em If $Y$ is a length space of nonnegative curvature and $\emptyset\neq F\subset Y$ a closed subset, then the squared distance function $d_F^2(\cdot)=\mathrm{dist}_Y^2(\cdot,F)$ is $2$-concave on $Y$.} This is well-known if $F$ is a singleton (see e.g.\ \cite[Proposition~116]{Pl}) and follows easily for a general nonempty closed set $F$ by the facts that $d^2_F(y)=\inf_{x\in F}d^2_{\{ x\}}(y)$ and that the infimum of concave functions is concave. If we apply this for $Y=X\times X$ and $F=D$, \eqref{diag} completes the proof. \end{proof} \begin{lemma} \label{L-f} Let $X$ be a polyhedral convex surface in $\mathbb{R}^{n+1}$, $T \in X$, and let $(U, \varphi)$ be an $(e,V)$-standard chart on $X$ such that $T \in U$. Let $f$ be the associated convex function and $t:= \varphi(T)$. Then there exists a $\delta>0$ such that for all $x,y\in V$ with $t=(x+y)/2$ and $\|x-t\|=\|y-t\|<\delta$ we have $$\mathrm{dist}(S,T)\leq 2\Delta^2f(x,y),$$ whenever $S$ is a midpoint of $\varphi^{-1}(x),\varphi^{-1}(y)$. \end{lemma} \begin{proof} Denoting $F:= \varphi^{-1}$, we have $F(u)= u + f(u)e$. Let $L$ be the Lipschitz constant of $f$. It is easy to see that we can choose $\delta_0>0$ such that for any $x\in V$ with $\|x-t\|<\delta_0$, the function $f$ is affine on the segment $[x,t]$. Then we take $\delta\leq\delta_0/L$ such that for any two points $x,y\in B(t,\delta)$, any minimal curve connecting $F(x)$ and $F(y)$ (and, hence, also any midpoint of $F(x),F(y)$) lies in $U$.
Let two points $x,y\in B(t,\delta)$ with $t = \frac{x+y}{2}$ be given and denote $\Delta=\Delta^2f(x,y)$. Let $S$ be a midpoint of $F(x),F(y)$ (lying necessarily in $U$) and set $s=\varphi(S)$. Note that $\Delta\leq L\delta$. From the parallelogram law, we obtain $$2\|F(x)-T\|^2+2\|F(y)-T\|^2=\|F(y)-F(x)\|^2+4\Delta^2,$$ since \begin{equation} \label{Eq-L-1} \Delta=\left\|\frac{F(x)+F(y)}2-T\right\|. \end{equation} Taking the square root, and using the inequality $a+b\leq\sqrt{2a^2+2b^2}$, we obtain $$\|F(x)-T\|+\|F(y)-T\|\leq\sqrt{\|F(y)-F(x)\|^2+4\Delta^2}.$$ It is clear that the geodesic distance of $F(x)$ and $F(y)$ is at most $\|F(x)-T\|+\|F(y)-T\|$ (which is the length of a curve in $X$ connecting $F(x)$ and $F(y)$). Thus, $$\|S-F(x)\|\leq\mathrm{dist}(S,F(x))=\frac 12\mathrm{dist} (F(x),F(y))\leq \sqrt{\left(\frac{\|F(y)-F(x)\|}2\right)^2+\Delta^2}$$ and the same upper bound applies to $\|S-F(y)\|$. Summing the squares of both distances, we obtain $$\|S-F(x)\|^2+\|S-F(y)\|^2\leq\frac 12 \|F(y)-F(x)\|^2+2\Delta^2$$ and, since the left hand side equals, again by the parallelogram law, $$\frac 12 \left(\|F(y)-F(x)\|^2+\|2S-(F(x)+F(y))\|^2\right),$$ we arrive at \begin{equation} \label{Eq-L-2} \left\|S-\frac{F(x)+F(y)}2\right\|\leq\Delta. \end{equation} Considering the orthogonal projections of $S$ and $\frac{F(x)+F(y)}2$ onto $e^{\perp}$, we obtain $$\|s-t\|\leq\Delta\leq L\delta\leq\delta_0$$ and, hence, we have $$\mathrm{dist} (S,T)=\|S-T\|,$$ since $f$ is affine on $[s,t]$. On the other hand, equations \eqref{Eq-L-1} and \eqref{Eq-L-2} imply $\|S-T\|\leq 2\Delta$, which completes the proof. \end{proof} \begin{proposition}\label{hlavni} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface and let $(U_i,\varphi_i)$ be $(e_i,V_i)$-standard charts, $i=1,2$. Let $f_1$, $f_2$ be the corresponding convex functions.
Set $$ g(x_1,x_2)=\mathrm{dist}^2(\varphi_1^{-1}(x_1), \varphi_2^{-1}(x_2)),\ \ \ \ x_1 \in V_1,\ x_2 \in V_2,$$ where $\mathrm{dist}$ is the intrinsic distance on $X$. Then the function $g-c-d$ is concave on $V_1\times V_2$, where \begin{eqnarray*} c(x_1,x_2)&=&4(1+L^2)(\|x_1\|^2+\|x_2\|^2),\\ d(x_1,x_2)&=&4M(f_1(x_1)+f_2(x_2)), \end{eqnarray*} $L=\max\{\mathrm{Lip}\, f_1,\mathrm{Lip}\, f_2\}$ and $M$ is the intrinsic diameter of $X$.
\end{proposition}

\begin{proof}
Assume first that the convex surface $X$ is polyhedral. We shall show that for any $t\in V_1\times V_2$ there exists $\delta>0$ such that \begin{equation} \label{Eq-T-1} \Delta^2g(x,y)\leq\Delta^2c(x,y)+\Delta^2d(x,y) \end{equation} for all $x,y\in B(t,\delta)\subset V_1\times V_2$ with $t=(x+y)/2$, which implies the assertion, see Lemma~\ref{VZ}. We have \begin{eqnarray*} \Delta^2g(x,y)&=&\frac{g(x)+g(y)}2-g(t)\\ &=&\left(\frac{g(x)+g(y)}2-g(s)\right)+\left(g(s)-g(t)\right), \end{eqnarray*} whenever $s=(s_1,s_2)\in V_1\times V_2$ is such that $(\varphi_1^{-1}(s_1),\varphi_2^{-1}(s_2))$ is a midpoint of $(\varphi_1^{-1}(x_1),\varphi_2^{-1}(x_2))$ and $(\varphi_1^{-1}(y_1),\varphi_2^{-1}(y_2))$ in $X^2$, where $x=(x_1,x_2)$ and $y=(y_1,y_2)$.
By Proposition~\ref{P1} and Lemma~\ref{loclen}(ii), the first summand is bounded from above by $$2\,\frac{\mathrm{dist}^2(\varphi_1^{-1}(x_1),\varphi_1^{-1}(y_1))+\mathrm{dist}^2(\varphi_2^{-1}(x_2),\varphi_2^{-1}(y_2))}4.$$ Since clearly $$\mathrm{dist} (\varphi_i^{-1}(x_i),\varphi_i^{-1}(y_i))\leq\sqrt{1+(\operatorname{Lip} f_i)^2}\|x_i-y_i\|, \quad i=1,2, $$ we get \begin{eqnarray*} \frac{g(x)+g(y)}2-g(s)&\leq&(2+(\operatorname{Lip} f_1)^2+(\operatorname{Lip} f_2)^2) \frac{\|x_1-y_1\|^2+\|x_2-y_2\|^2}2\\ &\leq& \Delta^2c(x,y) \end{eqnarray*} (we use the fact that $\Delta^2c(x,y)=4(1+L^2)(\|x-y\|/2)^2$, see \eqref{drddrm}). In order to verify \eqref{Eq-T-1}, it thus remains to show that \begin{equation} \label{Eq-T-2} |g(s)-g(t)|\leq \Delta^2d(x,y). \end{equation} Denote $t=(t_1,t_2)$, $s=(s_1,s_2)$, $T_i=\varphi_i^{-1}(t_i)$ and $S_i=\varphi_i^{-1}(s_i)$, $i=1,2$. We have \begin{eqnarray*} |g(s)-g(t)|&=&|\mathrm{dist}^2(S_1,S_2)-\mathrm{dist}^2(T_1,T_2)|\\ &\leq&2M|\mathrm{dist}(S_1,S_2)-\mathrm{dist}(T_1,T_2)|\\ &\leq&2M(\mathrm{dist}(S_1,T_1)+\mathrm{dist}(S_2,T_2)), \end{eqnarray*} where the last inequality follows from the (iterated) triangle inequality. Applying Lemma~\ref{L-f} and the fact that $S_i$ is a midpoint of $\varphi_i^{-1}(x_i),\varphi_i^{-1}(y_i)$ (see \cite[\S4.3]{Pl}), we get $\mathrm{dist}(S_i,T_i)\leq 2\Delta^2f_i(x_i,y_i)$, $i=1,2$, for $\delta$ sufficiently small. Since clearly $$\Delta^2d(x,y)=4M(\Delta^2f_1(x_1,y_1)+\Delta^2f_2(x_2,y_2)),$$ \eqref{Eq-T-2} follows.

Let now $X$ be an arbitrary convex surface and let $(X_k)$ be a sequence of polyhedral convex surfaces which tends in the Hausdorff metric to $X$. Consider arbitrary open convex sets $W_i \subset V_i$ with $\overline{W_i} \subset V_i$, $i=1,2$.
Applying Lemma~\ref{aprox2} (and passing to a subsequence of $(X_k)$ if necessary), we find $(e_i,W_i)$-standard charts $(U_{i,k},\varphi_{i,k})$ of $X_k$ such that the associated convex functions $f_{i,k}$ converge to $f_i\restriction_{W_i}$, $L^*_i:= \lim_{k\to \infty} \operatorname{Lip} f_{i,k}$ exists and $L^*_i \leq \operatorname{Lip} f_i$, $i=1,2$. By the first part of the proof we know that the function $$ \psi_k(x_1,x_2) := g_k(x_1,x_2) - 4(1 + L_k^2)(\|x_1\|^2+\|x_2\|^2)- 4M_k(f_{1,k}(x_1)+f_{2,k}(x_2)),$$ where $M_k$ is the intrinsic diameter of $X_k$ and $L_k = \max(\operatorname{Lip} f_{1,k}, \operatorname{Lip} f_{2,k})$, is concave on $W_1 \times W_2$. Obviously, $L_k \to L^*:= \max(L^*_1, L^*_2) \leq L$, and Lemma~\ref{aprox} implies that $g_k\to g$ and $M_k\to M$. Consequently, $$\lim_{k \to \infty} \psi_k(x_1,x_2) = g(x_1,x_2) - 4(1 + {L^*}^2)(\|x_1\|^2+\|x_2\|^2)- 4M(f_{1}(x_1)+f_{2}(x_2))$$ is concave on $W_1 \times W_2$. Since $L^* \leq L$, we obtain that $g-c-d$ is concave on $W_1 \times W_2$. Thus $g-c-d$ is locally concave, and so concave, on $V_1 \times V_2$.
\end{proof}

Proposition~\ref{hlavni} has the following immediate corollary (recall the definition of a DC function on a DC manifold, Definition~\ref{DCman}, and the definition of the DC structure on $X^2$, Remark~\ref{hilbert}~(ii)).

\begin{theorem} \label{main}
Let $X$ be a convex surface in $\mathbb{R}^{n+1}$. Then the squared distance function $(x,y)\mapsto\mathrm{dist}^2(x,y)$ is DC on $X^2$.
\end{theorem}

Using Remark~\ref{hilbert}~(iii), we obtain

\begin{corollary} \label{pevny_bod}
Let $X$ be a convex surface in $\mathbb{R}^{n+1}$ and let $x_0\in X$ be fixed. Then the squared distance from $x_0$, $x\mapsto\mathrm{dist}^2(x,x_0)$, is DC on $X$.
\end{corollary}

Since the function $g(z) = \sqrt z$ is DC on $(0,\infty)$, Lemma \ref{zakldc}(c) easily implies

\begin{corollary} \label{pevny_bod1}
Let $X$ be a convex surface in $\mathbb{R}^{n+1}$ and let $x_0\in X$ be fixed. Then the distance from $x_0$, $x\mapsto\mathrm{dist}(x,x_0)$, is DC on $X\setminus \{x_0\}$.
\end{corollary}

\begin{remark}\label{nacel}
If $n=1$, it is not difficult to show that the function $x\mapsto\mathrm{dist}(x,x_0)$ is DC on the whole $X$. On the other hand, we conjecture that this statement is not true in general for $n \geq 2$.
\end{remark}

\begin{theorem}\label{distset}
Let $X \subset \mathbb{R}^{n+1}$ be a convex surface and $\emptyset \neq F \subset X$ a closed set. Denoting $d_F := \mathrm{dist}(\cdot,F)$,
\begin{enumerate}
\item the function $(d_F)^2$ is DC on $X$, and
\item the function $d_F$ is DC on $X\setminus F$.
\end{enumerate}
\end{theorem}

\begin{proof}
Since $X$ is compact, we can choose a finite system $(U_i,\varphi_i)$, $i \in I$, of $(e_i,V_i)$-standard charts which forms a DC atlas on $X$. Let $f_i$, $i \in I$, be the corresponding convex functions. Choose $L>0$ such that $\mathrm{Lip}\, f_i\leq L$ for all $i \in I$ and let $M$ be the intrinsic diameter of $X$. To prove (i), it is sufficient to show that, for all $i \in I$, $(d_F)^2\circ (\varphi_i)^{-1}$ is DC on $V_i$. So fix $i \in I$ and consider an arbitrary $y \in F$. Choose $j \in I$ with $y \in U_j$. Set $$ \omega(x) : = 4(1+L^2)\|x\|^2 + 4Mf_i(x),\ \ \ \ \ x \in V_i.$$ Proposition \ref{hlavni} (used for $\varphi_1 = \varphi_i$ and $\varphi_2 = \varphi_j$) easily implies that the function $h_y(x) = \mathrm{dist}^2(\varphi_i^{-1}(x), y) - \omega(x)$ is concave on $V_i$. Consequently, the function $$ \psi(x) := (d_F)^2\circ (\varphi_i)^{-1}(x) - \omega(x) = \inf_{y \in F} h_y(x)$$ is concave on $V_i$.
So $(d_F)^2\circ (\varphi_i)^{-1} = \psi + \omega = \omega - (-\psi)$ is DC on $V_i$, and (i) is proved. Since the function $g(z) = \sqrt z$ is DC on $(0,\infty)$, Lemma~\ref{zakldc}(c) easily implies (ii).
\end{proof}

\begin{remark}\label{unbounded}
It is not difficult to show that Theorems \ref{distset} and \ref{main} imply the corresponding results for $n$-dimensional closed unbounded convex surfaces $X \subset \mathbb{R}^{n+1}$; in particular, the statements (B), (C) and (D) from the Introduction hold. To this end, it is sufficient to consider a bounded closed convex surface $\widetilde X$ which contains a sufficiently large part of $X$.
\end{remark}

\section{Applications}

Our results on distance functions can be applied to a number of problems from the geometry of convex surfaces that are formulated in the language of distance functions. We present below applications concerning $r$-boundaries (distance spheres), the multijoined locus, and the ambiguous locus (exoskeleton) of a closed subset of a convex surface. Recall that $r$-boundaries and ambiguous loci were studied (in Euclidean, Riemannian and Alexandrov spaces) in a number of articles (see, e.g., \cite{Fe}, \cite{ST}, \cite{Zamf}, \cite{HLW}).

The first application (Theorem \ref{paralel} below) concerning $r$-boundaries provides an analogue of well-known results proved (in Euclidean spaces) by Ferry \cite{Fe} and Fu \cite{Fu}. It is an easy consequence of Theorem \ref{distset} and the following general result on level sets of DC functions, which immediately follows from \cite[Theorem 3.4]{RaZa}.

\begin{theorem}\label{abst}
Let $n \in \{2,3\}$, let $E$ be an $n$-dimensional unitary space, and let $d$ be a locally DC function on an open set $G \subset E$. Suppose that $d$ has no stationary point. Then there exists a set $N \subset \mathbb{R}$ with $\mathcal{H}^{(n-1)/2}(N) =0$ such that, for every $r \in d(G) \setminus N$, the set $d^{-1}(r)$ is an $(n-1)$-dimensional DC surface in $E$.
Moreover, $N$ can be chosen so that $N= d(C)$, where $C$ is a closed set in $G$.
\end{theorem}

(Let us note that $C$ can be chosen to be the set of all critical points of $d$, but we will not need this fact.)

\begin{theorem}\label{paralel}
Let $n \in \{2,3\}$, let $X \subset \mathbb{R}^{n+1}$ be a convex surface and let $\emptyset \neq K \subset X$ be a closed set. For $r>0$, consider the $r$-boundary (distance sphere) $K_r := \{x \in X:\ \mathrm{dist}(x,K) =r\}$. Then there exists a compact set $N \subset [0,\infty)$ with $\mathcal{H}^{(n-1)/2}(N) =0$ such that, for every $r \in (0,\infty) \setminus N$, the $r$-boundary $K_r$ is either empty or an $(n-1)$-dimensional DC surface in $\mathbb{R}^{n+1}$.
\end{theorem}

\begin{proof}
Choose a system $(U_i,\varphi_i)$, $i \in \mathbb{N}$, of $(e_i,V_i)$-standard charts on $X$ such that $G:=X\setminus K = \bigcup_{i=1}^{\infty} U_i$. By Theorem \ref{distset}, we know that $d_i :=d_K \circ \varphi_i^{-1}$ is locally DC on $V_i$, where $d_K := \mathrm{dist}(\cdot,K)$. Moreover, no $t \in \varphi_i(U_i)$ is a stationary point of $d_i$ (i.e., the differential of $d_i$ at $t$ is nonzero). Indeed, otherwise there exists $\delta>0$ such that $|d_i(\tau) - d_i(t)| < \|\tau -t\|$ whenever $\|\tau -t\|<\delta$. Denote $x:= \varphi_i^{-1}(t)$ and choose a minimal curve $\gamma$ with endpoints $x$ and $u\in K$ and length $s = \mathrm{dist} (x,K)$. Choosing a point $x^*$ on the image of $\gamma$ which is sufficiently close to $x$ and putting $\tau := \varphi_i(x^*)$, we clearly have $\|\tau -t\|<\delta$ and $|d_i(\tau) - d_i(t)| = \mathrm{dist}(x,x^*) \geq \|\tau -t\|$, which is a contradiction.
Consequently, by Theorem \ref{abst} we can find for each $i$ a set $S_i \subset V_i$ closed in $V_i$ such that, for $N_i:= d_i(S_i)$, we have $\mathcal{H}^{(n-1)/2}(N_i) =0$ and, for each $r \in (0,\infty)\setminus N_i$, the set $d_i^{-1}(r)$ is either empty or an $(n-1)$-dimensional DC surface in $e_i^{\perp}$. Define $S$ as the set of all points $x \in G$ such that $\varphi_i(x) \in S_i$ whenever $x \in U_i$. Obviously, $S$ is closed in $G$. Set $N:= d_K(S) \cup \{0\}$. Since clearly $N \subset \bigcup_{i=1}^{\infty} N_i \cup \{0\}$, we have $\mathcal{H}^{(n-1)/2}(N) =0$. Since $K \cup S$ is compact, $N = d_K(K \cup S)$ and $d_K$ is continuous, we obtain that $N$ is compact. Let now $r \in (0,\infty) \setminus N$ and $x \in K_r$, and choose $i$ with $x \in U_i$. Then clearly $K_r \cap U_i = \varphi_i^{-1}(d_i^{-1}(r))$. Since $d_i^{-1}(r)$ is an $(n-1)$-dimensional DC surface in $e_i^{\perp}$, Lemma \ref{obrdc} implies that $K_r \cap U_i$ is an $(n-1)$-dimensional DC surface in $\mathbb{R}^{n+1}$. Since $x \in K_r$ was arbitrary, we obtain that $K_r$ is an $(n-1)$-dimensional DC surface in $\mathbb{R}^{n+1}$.
\end{proof}

\begin{remark}\label{obst}
Let $n=2$. Then the weaker version of Theorem \ref{paralel}, in which $\mathcal{H}^{1}(N) =0$ (instead of $\mathcal{H}^{1/2}(N) =0$) and the $K_r$ are $(n-1)$-dimensional Lipschitz manifolds, follows from \cite[Theorem~B]{ST}, proved in $2$-dimensional Alexandrov spaces without boundary. In such Alexandrov spaces even the version in which $\mathcal{H}^{1/2}(N) =0$ and the $K_r$ are $(n-1)$-dimensional Lipschitz manifolds holds; it is proved in \cite{RaZa} using Theorem \ref{abst} and Perelman's DC structure (cf.\ Section~\ref{Sec-Rem}). However, it seems to be impossible to deduce Theorem \ref{paralel} in its full strength by this method; any proof that the $K_r$ are DC surfaces probably needs the results of the present article.
If $X$ is a $3$-dimensional Alexandrov space without boundary, it is still possible that the version of Theorem \ref{paralel} in which the $K_r$ are Lipschitz manifolds holds. But it cannot be proved using only Theorem \ref{abst} and Perelman's DC structure, even if $X$ is a convex surface. The obstacle is that the set $X \setminus X^*$ of ``Perelman's singular'' points (cf.\ Section~\ref{Sec-Rem}) can have positive $1$-dimensional Hausdorff measure even if $X$ is a convex surface in $\mathbb{R}^4$ (see \cite[Example 6.5]{RaZa}).
\end{remark}

\begin{remark}\label{obecdim}
Examples due to Ferry \cite{Fe} show that Theorem \ref{paralel} cannot be generalized to $n \geq 4$. For an arbitrary $n$-dimensional convex surface $X$ we can, however, obtain (quite similarly as in \cite{RaZa} for Riemannian manifolds or Alexandrov spaces without Perelman singular points) that, for all $r>0$ except a countable set, each $K_r$ contains an $(n-1)$-dimensional DC surface $A_r$ such that $A_r$ is dense and open in $K_r$ and $\mathcal{H}^{n-1}(K_r \setminus A_r) =0$.
\end{remark}

If $K$ is a closed subset of a length space $X$, the {\it multijoined locus} $M(K)$ of $K$ is the set of all points $x\in X$ such that the distance from $x$ to $K$ is realized by at least two different minimal curves in $X$. If two such minimal curves exist that connect $x$ with two different points of $K$, then $x$ is said to belong to the {\it ambiguous locus} $A(K)$ of $K$. The ambiguous locus of $K$ is also called the skeleton of $X\setminus K$ (or the exoskeleton of $K$, \cite{HLW}). Zamfirescu \cite{Zamf} studied the multijoined locus in a complete geodesic (Alexandrov) space of curvature bounded from below and showed that it is $\sigma$-porous. An application of Theorem~\ref{distset} yields a stronger result for convex surfaces:

\begin{theorem}\label{zam}
Let $K$ be a closed subset of a convex surface $X\subset \mathbb{R}^{n+1}$ ($n\geq 2$).
Then $M(K)$ (and, hence, also $A(K)$) can be covered by countably many $(n-1)$-dimensional DC surfaces lying in $X$.
\end{theorem}

\begin{proof}
Let $(U,\varphi)$ be an $(e,V)$-standard chart on $X$. It is clearly sufficient to prove that $M(K)\cap U$ can be covered by countably many $(n-1)$-dimensional DC surfaces. Set $F: = \varphi^{-1}$ and denote by $d_K(z)$ the intrinsic distance of $z \in X$ from $K$. Since both the mapping $F$ and the function $d_K \circ F$ are DC on $V$ (see Theorem~\ref{distset} and Lemma~\ref{zakldc}), they are by Lemma~\ref{aznapl} strictly differentiable at all points of $V\setminus N$, where $N$ is a countable union of $(n-1)$-dimensional DC surfaces in $e^{\perp}$. By Lemma \ref{obrdc}, $F(N\cap V)$ is a countable union of $(n-1)$-dimensional DC surfaces in $\mathbb{R}^{n+1}$. So it is sufficient to prove that $M(K)\cap U\subset F(N)$. To prove this inclusion, suppose to the contrary that there exists a point $x \in M(K)\cap U$ such that both $F$ and $d_K \circ F$ are strictly differentiable at $x$. We can assume without loss of generality that $x=0$. Let $T:= (dF(0))(e^{\perp})$ be the vector tangent space to $X$ at $0$. Let $P$ be the projection of $\mathbb{R}^{n+1}$ onto $T$ in the direction of $e$ and define $Q:= (P\restriction_U)^{-1}$. It is easy to see that $Q=F\circ (d F(0))^{-1}$ and therefore $dQ(0) = (d F(0)) \circ (d F(0))^{-1} = {\rm id}_T$. Since $0 \in M(K)$, there exist two different minimal curves $\beta,\gamma: [0,r]\to X$ such that $r= d_K(0)$, $\beta(0)=\gamma(0)=0$, $\beta(r) \in K$, and $\gamma(r) \in K$. As any minimal curves on a convex surface, $\beta$ and $\gamma$ have right semitangents at $0$ (see \cite[Corollary~2]{Buyalo}); let $u,v\in\mathbb{R}^{n+1}$ be unit vectors from these semitangents. Further, \cite[Theorem 2]{Mi} easily implies that $u\neq v$. Clearly $d_K \circ \beta(t) = r-t$, $t \in [0,r]$, and $(P \circ \beta)'_+(0) = P(\beta'_+(0))=u$.
Further observe that $d_K \circ Q$ is differentiable at $0$, since $d_K \circ F$ is differentiable at $0= (dF(0))^{-1}(0)$. Using the above facts, we obtain \begin{eqnarray*} (d(d_K \circ Q)(0))(u)&=&(d(d_K \circ Q)(0))((P \circ \beta)'_+(0))= (d_K \circ Q \circ P \circ \beta)'_+(0)\\ &=& (d_K \circ \beta)'_+(0) = -1. \end{eqnarray*} In the same way we obtain $(d(d_K \circ Q)(0))(v) =-1$. Thus $u+v\neq 0$ and, by the linearity of the differential, $$(d(d_K \circ Q)(0))\left(\frac{u+v}{\| u+v\|}\right)=\frac{-2}{\| u+v\|}<-1.$$ Thus there exists $\varepsilon >0$ such that \begin{equation}\label{novel} \|d(d_K \circ Q)(0)\| > 1+ \varepsilon. \end{equation} Since $dQ(0) ={\rm id}_T$ and $Q = F\circ (d F(0))^{-1}$ is clearly strictly differentiable at $0$, there exists $\delta >0$ such that $$ \|Q(p)-Q(q) - (p-q)\| \leq \varepsilon \|p-q\|,\ \ \ \ \ p, q \in B(0,\delta) \cap T,$$ and consequently $Q$ is Lipschitz on $B(0,\delta) \cap T$ with constant $1+\varepsilon$. Let $p, q \in B(0,\delta) \cap T$ and consider the curve $\omega: [0,1] \to X$, $\omega(t) = Q (tp + (1-t)q)$. Then clearly $$ \mathrm{dist}(Q(p),Q(q)) \leq \mathrm{length}\ \omega \leq (1+\varepsilon) \|p-q\|.$$ Consequently, $$|d_K \circ Q(p) - d_K \circ Q (q)| \leq \mathrm{dist}(Q(p),Q(q)) \leq (1+\varepsilon) \|p-q\|.$$ Thus the function $d_K \circ Q$ is Lipschitz on $B(0,\delta) \cap T$ with constant $1+\varepsilon$, which contradicts \eqref{novel}.
\end{proof}

\begin{remark}
An analogous result on ambiguous loci in a Hilbert space was proved in \cite{Zadi}.
\end{remark}

\section{Remarks and questions} \label{Sec-Rem}

The results of \cite{Per} and Corollary \ref{pevny_bod1} suggest that the following definition is natural.

\begin{definition}\label{compat}
Let $X$ be a length space and let an open set $G \subset X$ be equipped with an $n$-dimensional DC structure.
We will say that this DC structure is {\it compatible} with the intrinsic metric on $X$ if the following hold.
\begin{enumerate}
\item For each DC chart $(U, \varphi)$, the map $\varphi: U \to \mathbb{R}^n$ is locally bilipschitz.
\item For each $x_0 \in X$, the distance function $\mathrm{dist}(x_0,\cdot)$ is DC (with respect to the DC structure) on $G \setminus \{x_0\}$.
\end{enumerate}
\end{definition}

If $M$ is an $n$-dimensional Alexandrov space with curvature bounded from below and without boundary, the results of \cite{Per} (cf.\ \cite[\S2.7]{Kuwae}) give that there exists an open dense set $M^* \subset M$ with $\dim_H(M \setminus M^*) \leq n-2$ and an $n$-dimensional DC structure on $M^*$ compatible with the intrinsic metric on $M$ (cf.\ \cite[p.~6, line 9 from below]{Per}). Since the components of each chart of this DC structure are formed by distance functions, Lemma \ref{zakldc}(d) easily implies that {\it no other DC structure on $M^*$ compatible with the intrinsic metric exists}.

Let $X \subset \mathbb{R}^{n+1}$ be a convex surface. Then Corollary \ref{pevny_bod1} gives that the standard DC structure on $X$ is compatible with the intrinsic metric on $X$. By the above observations, there is no other compatible DC structure on the (open dense) ``Perelman's set'' $X^*$. {\it We conjecture that this uniqueness holds also on the whole of $X$.}

Further note that the standard DC structure on $X$ has an atlas such that all corresponding transition maps are $C^{\infty}$. Indeed, let $C$ be the convex body for which $X = \partial C$. We can suppose $0 \in {\rm int}\, C$ and find $r>0$ such that $B(0,r) \subset {\rm int}\, C$. Now ``identify'' $X$ with the $C^{\infty}$ manifold $\partial B(0,r)$ via the radial projection of $X$ onto $\partial B(0,r)$. This bijection then transfers the $C^{\infty}$ structure of $\partial B(0,r)$ onto $X$.

We conclude with the following problem.
{\bf Problem.}\ Let $f: \mathbb{R}^n \to \mathbb{R}$ be a semiconcave (resp.\ DC) function. Consider the ``semiconcave surface'' (resp.\ DC surface) $X := {\rm graph}\, f$ equipped with the intrinsic metric, and let $x_0 \in X$. Is it true that the distance function $\mathrm{dist}(x_0,\cdot)$ is DC on $X \setminus \{x_0\}$ with respect to the natural DC structure (given by the projection onto $\mathbb{R}^n$)? In other words, is the natural DC structure on $X$ compatible with the intrinsic metric on $X$?

If $f$ is convex, then the answer is positive, see Remark \ref{unbounded}. If $f$ is semiconcave, then each minimal curve $\varphi$ on $X$ has bounded turn in $\mathbb{R}^{n+1}$ by \cite{Re}, so some interesting results on intrinsic properties extend from convex surfaces to semiconcave surfaces. There is thus a chance that the above problem has an affirmative answer in this case. However, we were not able to extend our proof to this setting.

\begin{thebibliography}{99}

\bibitem{Acon} A.D. Alexandrov, {\em Intrinsic Geometry of Convex Surfaces (Russian)}, OGIZ, Moscow-Leningrad, 1948.

\bibitem{A1} A.D. Alexandrov, {\em On surfaces represented as the difference of convex functions}, Izv. Akad. Nauk. Kaz. SSR 60, Ser. Math. Mekh. 3 (1949), 3--20 (in Russian).

\bibitem{BBI} D. Burago, Y. Burago, S. Ivanov, {\em A course in metric geometry}, Graduate Studies in Mathematics, Volume 33, Amer.\ Math.\ Soc., Providence, 2001.

\bibitem{Buyalo} S.V. Buyalo, {\em Shortest paths on convex hypersurfaces of Riemannian spaces}, Zap.\ Nau\v{c}n.\ Sem.\ Leningrad.\ Otdel.\ Mat.\ Inst.\ Steklov.\ (LOMI) 66 (1976), 114--132.

\bibitem{CaSi} P. Cannarsa, C. Sinestrari, {\em Semiconcave functions, Hamilton-Jacobi equations, and optimal control}, Progress in Nonlinear Differential Equations and their Applications, 58, Birkh\"auser Boston, Inc., Boston, MA, 2004.

\bibitem{Fe} S.
Ferry, {\em When $\varepsilon$-boundaries are manifolds}, Fund. Math. 90 (1976), 199--210.

\bibitem{Fu} J.H.G. Fu, {\em Tubular neighborhoods in Euclidean spaces}, Duke Math.\ J. 52 (1985), 1025--1046.

\bibitem{Ha} P. Hartman, {\em On functions representable as a difference of convex functions}, Pacific J. Math. 9 (1959), 707--713.

\bibitem{HLW} D. Hug, G. Last, W. Weil, {\em A local Steiner-type formula for general closed sets and applications}, Math. Z. 246 (2004), 237--272.

\bibitem{Kuwae} K. Kuwae, Y. Machigashira, T. Shioya, {\em Sobolev spaces, Laplacian, and heat kernel on Alexandrov spaces}, Math.\ Z. 238 (2001), 269--316.

\bibitem{Mi} A.D. Milka, {\em Shortest lines on convex surfaces (Russian)}, Dokl. Akad. Nauk SSSR 248 (1979), 34--36.

\bibitem{Mor} B.S. Mordukhovich, {\em Variational analysis and generalized differentiation I. Basic theory}, Grundlehren der Mathematischen Wissenschaften 330, Springer-Verlag, Berlin, 2006.

\bibitem{MM} C. Mantegazza, A.C. Mennucci, {\em Hamilton-Jacobi equations and distance functions on Riemannian manifolds}, Appl. Math. Optim. 47 (2003), 1--25.

\bibitem{OS} Y. Otsu, T. Shioya, {\em The Riemann structure of Alexandrov spaces}, J. Differential Geom. 39 (1994), 629--658.

\bibitem{Per} G. Perelman, {\em DC structure on Alexandrov space}, unpublished preprint (1995), available at http://www.math.psu.edu/petrunin/papers/papers.html.

\bibitem{Pet} A. Petrunin, {\em Semiconcave functions in Alexandrov's geometry}, in: Surveys in Differential Geometry, Vol.~XI, J. Cheeger and K. Grove Eds., Int.\ Press, Somerville, 2007, pp. 137--202.

\bibitem{Pl} C. Plaut, {\em Metric spaces of curvature $\geq k$}, Handbook of geometric topology, 819--898, North-Holland, Amsterdam, 2002.

\bibitem{RaZa} J. Rataj, L.
Zaj\'\i\v cek, {\em Critical values and level sets of distance functions in Riemannian, Alexandrov and Minkowski spaces}, arXiv:0911.4020.

\bibitem{Re} Yu.G. Reshetnyak, {\em On a generalization of convex surfaces (Russian)}, Mat. Sbornik 40 (82) (1956), 381--398.

\bibitem{Schneider} R. Schneider, {\em Convex bodies: the Brunn-Minkowski theory}, Cambridge University Press, Cambridge, 1993.

\bibitem{ST} K. Shiohama, M. Tanaka, {\em Cut loci and distance spheres on Alexandrov surfaces}, Actes de la Table Ronde de G\'eom\'etrie Diff\'erentielle (Luminy, 1992), 531--559, S\'emin. Congr., 1, Soc. Math. France, Paris, 1996.

\bibitem{VeZa} L. Vesel\'y, L. Zaj\'\i\v cek, {\em Delta-convex mappings between Banach spaces and applications}, Dissertationes Math. (Rozprawy Mat.) 289 (1989), 52 pp.

\bibitem{Walter} R. Walter, {\em Some analytical properties of geodesically convex sets}, Abh.\ Math.\ Sem.\ Univ.\ Hamburg 45 (1976), 263--282.

\bibitem{Wh} J.H.C. Whitehead, {\em Manifolds with transverse fields in Euclidean space}, Ann.\ Math.\ 73 (1961), 154--212.

\bibitem{Zaj} L. Zaj\'\i\v{c}ek, {\em On the differentiation of convex functions in finite and infinite dimensional spaces}, Czechoslovak Math. J. 29 (1979), 292--308.

\bibitem{Zadi} L. Zaj\'\i\v{c}ek, {\em Differentiability of the distance function and points of multi-valuedness of the metric projection in Banach space}, Czechoslovak Math. J. 33 (1983), 340--348.

\bibitem{Zajploch} L. Zaj\'\i\v{c}ek, {\em On Lipschitz and d.c.\ surfaces of finite codimension in a Banach space}, Czechoslovak Math. J. 58 (2008), 849--864.

\bibitem{Zamf} T. Zamfirescu, {\em On the cut locus in Alexandrov spaces and applications to convex surfaces}, Pacific J. Math.\ 217 (2004), 375--386.

\end{thebibliography}

\end{document}
\begin{document}

\title{Optimal Measures for Multivariate Geometric Potentials}

\begin{abstract}
We study measures and point configurations optimizing energies based on multivariate potentials. The emphasis is put on potentials defined by geometric characteristics of sets of points, which serve as multi-input generalizations of the well-known Riesz potentials for pairwise interaction. One such potential is the squared volume of the simplex with vertices at the $k \ge 3$ given points: we show that the arising energy is maximized by balanced isotropic measures, in contrast to the classical two-input energy. These results are used to obtain interesting geometric optimality properties of the regular simplex. As the main machinery, we adapt the semidefinite programming method to this context and establish relevant versions of the $k$-point bounds.
\end{abstract}

\tableofcontents

\section{Introduction}

A variety of problems in many areas of mathematics and science can be formulated as discrete or continuous energy optimization problems for two-point interaction potentials. The discrete energy and the continuous energy integral in this setup are defined as \begin{equation}\label{e.2ener} \frac{1}{N^2} \sum_{x,y \in \omega_N} K (x,y) \,\,\, \textup{ or } \int_\Omega \int_\Omega K(x,y) \, d\mu (x) \, d\mu (y), \end{equation} where $K: \Omega \times \Omega \rightarrow \mathbb R$ is a potential function. In the former case, the energy is evaluated over discrete sets $\omega_N$ of $N$ points in $\Omega$; in the latter, over probability measures $\mu$ on the domain $\Omega$. For $\Omega \subset \mathbb R^d$, among the most well-studied energies of this type are the Riesz energies with kernels $K(x,y) = \| x-y \|^s$ (diagonal terms need to be dropped in the discrete case for $s<0$). We refer the reader to \cite{BHS} for an excellent exposition of the subject.
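For concreteness, the discrete two-input energy in \eqref{e.2ener} is easy to evaluate numerically. The sketch below (the helper name \texttt{discrete\_energy} is ours, not from the paper) computes it for the Riesz-type kernel $K(x,y)=\|x-y\|^s$ and, for $s=2$, recovers the value $2$ for a balanced four-point configuration on the unit circle, consistent with $\mathbb E\|x-y\|^2 = 2$ for independent points drawn from any centered distribution on the unit sphere.

```python
import numpy as np

def discrete_energy(points, s):
    """Discrete two-input energy (1/N^2) * sum_{x,y} ||x - y||^s.

    For s < 0 the diagonal terms would have to be dropped, as noted in the text;
    here we only use s > 0.
    """
    pts = np.asarray(points, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]   # all pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)      # N x N distance matrix
    return float(np.mean(dists ** s))           # mean over ordered pairs = (1/N^2) * sum

# Vertices of a square inscribed in the unit circle: a centered ("balanced") configuration.
square = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
E2 = discrete_energy(square, s=2.0)             # expected: 2, since the barycenter is 0
```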
However, numerous applications (e.g.\ Menger curvature \cite{MMV}, $U$-statistics \cite{L,V}, $k$-point bounds \cite{BV, DMOV, Mu, CW}, the three-nucleon force in physics \cite{Z}, etc.) call for energies that depend on interactions of triples or $k$-tuples of particles, rather than just pairwise interactions, i.e.\ energies of the type \begin{align}\label{e.nener} E_K (\omega_N) & = \frac{1}{N^k} \sum_{ z_1,\dots,z_k \in \omega_N} K (z_1,\ldots,z_k), \\ \label{e.neneri} I_K (\mu) & = \int_\Omega \dots \int_\Omega K (x_1,\dots,x_k) \, d\mu (x_1) \,\dots \, d\mu (x_k), \end{align} with $k\ge 3$. The question of interest is finding point configurations and measures optimizing such energies.\\

Continuing the general study initiated in \cite{BFGMPV1}, in this paper we study multivariate potentials that are determined by geometric characteristics of sets of $k$ points in $\mathbb R^d$ and, at the same time, serve as generalizations of classical pairwise potentials ubiquitous in the literature, in particular, the aforementioned Riesz potentials. There are two main classes of such potentials which we investigate here. \\

\noindent {\bf The potential $V$: volume of the parallelepiped spanned by $k$-tuples of vectors.} Let $2\leq k\leq d$ and define the $k$-input kernel $V(x_1,\ldots, x_k)$ as the $k$-dimensional volume of the parallelepiped spanned by the \emph{vectors} $x_1$, $\ldots$, $x_k$ (equivalently, $V$ is the volume of the simplex with vertices at the origin and the points $x_1$, $\ldots$, $x_k$, scaled by a factor of $k!$). Note that $V^2$ is the determinant of the Gram matrix of the set of vectors $\{x_1,\ldots, x_k\}$. \\
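Since $V^2$ is a Gram determinant, it is straightforward to verify numerically; a small sketch (variable names ours), which also checks that in the full-dimensional case $k=d$ the Gram determinant agrees with $|\det[x_1 \cdots x_k]|^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3
X = rng.standard_normal((k, d))      # rows: the vectors x_1, ..., x_k in R^d, k <= d

gram = X @ X.T                       # Gram matrix (<x_i, x_j>)_{i,j}
V_squared = np.linalg.det(gram)      # = V(x_1,...,x_k)^2, the squared k-volume

# Full-dimensional cross-check (k = d): V = |det of the matrix with rows x_1,...,x_d|.
Y = rng.standard_normal((d, d))
V2_gram = np.linalg.det(Y @ Y.T)
V2_det = np.linalg.det(Y) ** 2       # mathematically equal to V2_gram
```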
As an analogue of the Riesz energies, we study energies with kernels $V^{\alpha}$.\\

\noindent {\bf The potential $A$: volume (area) of the simplex spanned by $k$-tuples of points.} For $2\leq k\leq d+1$, define the $k$-input kernel $A(x_1,\ldots, x_k)$ as the $(k-1)$-dimensional volume of the simplex whose vertices are the points $x_1$, $\ldots$, $x_k$. Similarly to $V^2$, the potential $A^2$ can be represented as a determinant of a matrix based on scalar products of the set of vectors $\{x_1,\ldots, x_k\}$, see Lemma \ref{lem:A-formula}.\\

Observe that in the case $k=3$, $V$ is simply the three-dimensional \emph{volume} of the parallelepiped spanned by the vectors $x_1$, $x_2$, $x_3$, while $A$ is the \emph{area} of the triangle with vertices at the points $x_1$, $x_2$, $x_3$. This explains the notation chosen for these potentials. The three-input case of the potentials will be the focus of Section \ref{sec:SD}. Note also that for $k=2$, $A(x,y) = \| x -y\|$, so the potentials $A^s$ are direct multivariate generalizations of the Riesz potentials. This was also remarked upon in \cite{KPS}, where the authors studied the gradient flow of $A^2$ as a generalization of the linear consensus model. The two-input case for both of these potentials is discussed in Section \ref{sec:k=2}. \\

In direct analogy to the Riesz energies, we shall study multi-input energies with kernels given by powers of these potentials: $A^{s}$ and $V^s$ (in this paper, the powers are mostly assumed to be positive, i.e.\ $s>0$). Due to the nature of these potentials for $s>0$, one is generally interested in measures and point configurations that \emph{maximize} (rather than minimize) the corresponding energies.
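One standard determinant expression for $A$ uses the Gram matrix of the edge vectors: $A = \frac{1}{(k-1)!}\sqrt{\det\big(\langle x_i - x_k,\, x_j - x_k\rangle\big)_{i,j<k}}$. (This is a hedged sketch: the precise formula of Lemma \ref{lem:A-formula} may be stated differently, and the helper name below is ours.) In particular, for $k=2$ it reduces to $A(x,y)=\|x-y\|$, as noted above:

```python
import numpy as np
from math import factorial

def simplex_volume(points):
    """(k-1)-dimensional volume of the simplex with the given k vertices.

    Uses the Gram determinant of the edge vectors from the last vertex;
    for k = 2 this reduces to A(x, y) = ||x - y||.
    """
    P = np.asarray(points, dtype=float)
    edges = P[:-1] - P[-1]                       # edge vectors from the last vertex
    gram = edges @ edges.T                       # Gram matrix of the edges
    k = len(P)
    return float(np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(k - 1))

A2 = simplex_volume([(3.0, 4.0), (0.0, 0.0)])              # k = 2: just the distance, 5
A3 = simplex_volume([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])  # right triangle, area 1/2
```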
The geometric setting will be primarily restricted to the case when the domain $\Omega$ is the unit sphere $\mathbb S^{d-1}$, as well as $\Omega= \mathbb R^d$ with certain moment restrictions \eqref{eq:moment} on the underlying probability measures. In the former case, one of the most natural questions is whether the normalized uniform surface measure $\sigma$ on the sphere $\mathbb S^{d-1}$ maximizes the energies $I_{A^s} (\mu)$ and $I_{V^s} (\mu)$. These questions can be reformulated in probabilistic terms as follows: assume that $k$ random points/vectors are chosen on the sphere $\mathbb S^{d-1}$ independently according to the probability distribution $\mu$. Which probability distribution $\mu$ maximizes the expected $s^{th}$ power of the volume of the parallelepiped spanned by the vectors (or, respectively, of the volume of the simplex spanned by the random points)? Is the uniform distribution $\sigma$ optimal? \\

The case $s=2$ appears to be more manageable than others, since, as mentioned above, both $V^2$ and $A^2$ can be expressed as polynomials. In fact, it has already been shown by Cahill and Casazza \cite{CC} (see also Theorem \ref{thm:volume-gen} below) that $I_{V^2}$ is maximized by isotropic measures on the sphere (see \eqref{eq:isotropic_init} for the definition), which include $\sigma$. Based on this result we show that, for $s>2$, $I_{V^s}$ is maximized by the discrete measure uniformly distributed over the vectors of an orthonormal basis (Corollary \ref{cor:V^s for s>2}). Other main results of the present paper concerning multivariate geometric potentials include:
\begin{itemize}
\item The maximizers of the energy $I_{A^2}(\mu)$ on the sphere $\mathbb S^{d-1}$ are exactly the balanced isotropic measures (which include the uniform surface measure $\sigma$; see Section \ref{sec:not} for the relevant definitions).
This is proved in Theorem \ref{thm:area-gen} in full generality (for all $d\ge 2$ and $3\le k \le d+1$), but separate proofs of special cases are also given in Theorem \ref{thm: triangle area squared max} (the case of $k=3$ inputs, i.e. area squared of a triangle) and Theorem \ref{thm:A^2maxGen} ($k=d+1$ inputs in dimension $d\ge 2$, i.e. a full-dimensional simplex; this theorem also applies to measures on $\mathbb R^d$ with unit second moment). \item When $k=d+1$ and $s>2$, the energy $I_{A^s}(\mu)$ is maximized by the discrete measure uniformly distributed over the vertices of a regular $d$-dimensional simplex (Corollary \ref{cor:A^s for s>2}). \item For $0<s\le 2$, the discrete energies $E_{V^s}$ and $E_{A^s}$ with $N=d+1$ points are maximized by the vertices of a regular $d$-dimensional simplex, see Corollary \ref{cor:Simplex Best A, V}. As a corollary, a regular $d$-dimensional simplex maximizes the sum of volumes of $j$-dimensional faces ($1\le j \le d$) among all simplices of a given circumradius (Corollary \ref{cor:Geometric Simplex is best}). \end{itemize} \noindent For more precise technical statements of these results the reader is directed to the theorems and corollaries referenced above. \\ The case $s=2$ is also special due to the fact that in the classical two-input setting this is exactly the phase transition for the Riesz energy on the sphere, which is maximized uniquely by the uniform surface measure $\sigma$ for $0<s<2$ and by discrete measures for $s>2$ \cite{Bj} (see also Proposition \ref{prop:k=2} below). Some of our main results suggest that similar behavior persists in the multivariate case, although the case $0<s<2$ (including the very natural $s=1$) remains out of reach. We conjecture that the uniform surface measure $\sigma$ maximizes both $I_{A^s} (\mu)$ and $I_{V^s} (\mu)$ when $0<s<2$. \\ The main machinery for our optimization results is a variant of the semidefinite programming method.
We adapt the method developed by Bachoc and Vallentin for finding three-point packing bounds for spherical codes \cite{BV}. Three-point bounds were also applied to energy optimization for pair potentials in \cite{CW} and for the multivariate $p$-frame energy in \cite{BFGMPV2}. The approach of Bachoc and Vallentin was later generalized by Musin \cite{Mu}, who established the $k$-point version of the packing bounds. This method is actively utilized for solving packing/energy problems (see, e.g., \cite{DMOV, DDM}), but its applicability is typically limited by the complexity of the resulting semidefinite programs. Our paper seems to be the first one where general $k$-point bounds are explicitly used for all positive integers $k$.\\ The paper is organized as follows. Section \ref{sec:not} describes the relevant background, definitions, notation, and covers the two-input case of the energies. Section \ref{sec:Discrete A and V} presents the applications of our main results to some geometric optimality properties of the regular simplex. In Section \ref{sec:SD} we discuss the semidefinite programming approach of \cite{BV} and demonstrate how it leads to optimization results for $3$-input energies with geometric kernels. Section \ref{sec:vol} shows how the known results about $I_{V^2}$ \cite{CC} can be used to obtain partial results for $I_{A^2}$, as well as the discreteness of maximizers for $I_{V^s}$ and $I_{A^s}$ with $s>2$. In Section \ref{sec:k-point} we provide a self-contained description of $k$-point semidefinite bounds for the sphere and give a general construction of $k$-positive definite multivariate functions based on these bounds. Finally, in the main result of Section \ref{sec:A^2 on Sphere}, we use multivariate functions from Section \ref{sec:k-point} to prove that the energy $I_{A^2}$ based on the squared volume of a simplex is maximized by balanced isotropic measures on the sphere.
In the Appendix (Section \ref{sec:Appen A^2}) we give an explicit expression for the potential $A^2$. \section{Background and notation}\label{sec:not} The notation in this paper generally follows \cite{BFGMPV1}. Most of the optimization problems, with a few exceptions, will be formulated for measures or finite configurations of points on the unit Euclidean sphere $\mathbb{S}^{d-1}$. Often the potentials will be invariant under changing an argument to its opposite. Essentially, this means that the underlying space is the real projective space $\mathbb{RP}^{d-1}$, but we will still formulate our results in terms of the unit sphere. In what follows, the domain $\Omega$ is either the sphere $\mathbb S^{d-1}$ or the Euclidean space $\mathbb R^d$. Assume $ k \in \mathbb{N}\setminus\{1\}$ is the number of inputs and the kernel $K: \Omega^k \rightarrow \mathbb{R}$ is continuous. We denote by $\mathcal{M}(\Omega)$ the set of finite signed Borel measures on $\Omega$, and by $\mathcal{P}(\Omega)$ the set of Borel probability measures on $\Omega$. If $\Omega = \mathbb{R}^d$, we define $\mathcal{P}^*(\mathbb R^d)$ to be the set of Borel probability measures $\mu$ on $\mathbb R^d$ satisfying \begin{equation}\label{eq:moment} \int_{\mathbb R^d} \| x \|^2 d \mu(x) = 1. \end{equation} Observe that, by a slight abuse of notation, $\mathcal{P}(\mathbb S^{d-1}) \subset \mathcal{P}^*(\mathbb R^d)$. Let $\omega_N = \{ z_1, z_2, \ldots, z_N\}$ be an $N$-point configuration (multiset) in $\Omega$, for $N \geq k$. Then the discrete $K$-energy of $\omega_N$ is defined to be \begin{equation*}\label{eq:DiscreteEnergyDef} E_K(\omega_N) := \frac{1}{N^k} \sum_{j_1=1}^{N} \cdots \sum_{j_k = 1}^{N} K(z_{j_1}, \ldots, z_{j_k}).
\end{equation*} Similarly, we define the energy integral for measures on $\Omega$: for $\mu \in \mathcal{M}(\Omega)$, \begin{equation*}\label{eq:ContEnergyDef} I_K(\mu ) = \int_{ \Omega} \cdots \int_{\Omega} K(x_1, \ldots, x_k) \,d\mu(x_1) \cdots d\mu(x_k), \end{equation*} when absolutely convergent, as will be the case in all of the contexts considered below. In the present paper we shall be interested in finding probability measures ($\mu \in \mathcal{P}(\mathbb S^{d-1} ) $ or $\mu \in \mathcal{P}^*(\mathbb R^d)$) which optimize (in most cases, maximize) the energy integrals $I_K$. \subsection{Isotropic measures and frame energy} The \textit{$p$-frame potential} is defined as $|\langle x,y \rangle|^p$. The notion of the $2$-frame potential, or simply \textit{frame potential}, was introduced by Benedetto and Fickus \cite{BF}, and later generalized to $p \in (0, \infty)$ by Ehler and Okoudjou \cite{EO}. Minimization of the frame energy is well understood: the following lemma is usually stated for $\mu \in \mathcal P (\mathbb S^{d-1})$, see e.g. Theorem 4.10 in \cite{BF}, but the extension to $ \mathcal{P}^*(\mathbb R^d)$ is straightforward (see also Remark \ref{rem:proj} below). \begin{lemma}\label{lem:frame} For any $\mu\in \mathcal{P}^*(\mathbb R^d)$, and hence also any $\mu \in \mathcal P (\mathbb S^{d-1})$, \begin{equation*} \int_{ \Omega} \int_{\Omega} \langle x,y \rangle^2\, d\mu(x) d\mu(y) \geq \frac 1 d. \end{equation*} \end{lemma} It is easy to see that the equality in the estimate above is achieved precisely for the measures which satisfy \begin{equation}\label{eq:isotropic_init} \int_{ \Omega} x x^T \, d\mu(x)=\frac 1 d I_d, \end{equation} where $I_d$ is the $d\times d$ identity matrix.
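To make the discrete and continuous energies above concrete, here is a small sketch (ours, with an assumed kernel): for the two-input kernel $K(x,y)=\|x-y\|^2 = 2-2\langle x,y\rangle$ on $\mathbb S^1$, every balanced measure has energy $2$, which both the discrete energy of the square's vertices and a Monte Carlo estimate of $I_K(\sigma)$ reproduce.

```python
import itertools
import math
import random

def E_K(K, points, k):
    """Discrete k-input energy: average of K over all N^k tuples (repeats included)."""
    N = len(points)
    return sum(K(*t) for t in itertools.product(points, repeat=k)) / N**k

def I_K_mc(K, sample, k, trials, seed=0):
    """Monte Carlo estimate of the energy integral I_K(mu); `sample` draws from mu."""
    rng = random.Random(seed)
    return sum(K(*[sample(rng) for _ in range(k)]) for _ in range(trials)) / trials

def uniform_circle(rng):
    """A point drawn from the uniform measure sigma on the unit circle."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return (math.cos(theta), math.sin(theta))

def dist2(x, y):  # K = A^2 = ||x - y||^2, the k = 2 case
    return (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2

square = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
```

Both $E_{A^2}(\text{square})$ and the Monte Carlo value of $I_{A^2}(\sigma)$ come out close to $2$, consistent with the $s=2$ discussion later in this section.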
It will be convenient for us to use this condition in the following form: for any $y\in\mathbb{S}^{d-1}$, \begin{equation}\label{eq:isotropic} \int_{ \Omega} \langle x, y\rangle^2\, d\mu(x)=\frac 1 d. \end{equation} Measures which satisfy \eqref{eq:isotropic_init} or, equivalently, \eqref{eq:isotropic}, are called \textit{isotropic}. We note that $\operatorname{Tr} (x x^T)=\|x\|^2$ and (\ref{eq:isotropic_init}) implies $\int_{\Omega} \| x \|^2 d \mu(x) = 1$, so, as a matter of fact, all isotropic measures on $\mathbb{R}^d$ automatically belong to $\mathcal{P}^*(\mathbb{R}^d)$. The discrete version of Lemma \ref{lem:frame} states that for $N \geq d$ and $\{ x_1,\dots,x_N\} \subset \mathbb S^{d-1}$, \begin{equation*} \sum_{i=1}^{N}\sum_{j = 1}^{N} \langle x_i, x_j \rangle^2\geq \frac {N^2} d. \end{equation*} Discrete sets for which this bound is sharp are known as \textit{unit norm tight frames}, which explains the term \emph{frame energy}. The lower bound for the discrete frame energy is a special case of bounds by Welch \cite{W} and Sidelnikov \cite{Sid}. There is a natural projection $\pi:\mathcal{P}^*(\mathbb{R}^d)\rightarrow \mathcal{P}(\mathbb{S}^{d-1})$ that maps isotropic measures in $\mathbb{R}^d$ onto isotropic measures in $\mathbb{S}^{d-1}$. First, we define the projection $\pi_0:\mathbb{R}^d\setminus\{0\}\rightarrow\mathbb{S}^{d-1}$ by $\pi_0(x)=x/\|x\|$. Now for any $\mu\in \mathcal{P}^*(\mathbb{R}^d)$, we define $\mu^*=\pi(\mu)$ as the pushforward measure $(\pi_0)_\# \big( \|x\|^2\, d\mu(x) \big)$, that is: for any Borel subset $B$ of $\mathbb{S}^{d-1}$, we set $$\mu^*(B)=\int_{\pi_0^{-1}(B)} \|x\|^2\, d\mu(x).$$ Clearly, $\mu^*$ is a Borel probability measure on $\mathbb{S}^{d-1}$. Checking (\ref{eq:isotropic}), we can also see that for an isotropic $\mu$, $\pi(\mu)$ is isotropic too.
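A quick numerical illustration (ours) of the discrete bound: for $N=3$ points on $\mathbb S^{1}$ the bound is $N^2/d = 9/2$, attained by the unit norm tight frame given by the vertices of a regular triangle, while a non-tight configuration exceeds it.

```python
import math

def frame_energy(points):
    """Discrete frame energy: sum of squared inner products over all ordered pairs."""
    return sum(sum(a * b for a, b in zip(x, y)) ** 2
               for x in points for y in points)

def regular_polygon(N):
    """Vertices of a regular N-gon on the unit circle."""
    return [(math.cos(2.0 * math.pi * j / N), math.sin(2.0 * math.pi * j / N))
            for j in range(N)]
```

For the regular triangle the ordered pairs contribute $3\cdot 1 + 6\cdot\cos^2(120^\circ) = 3 + 6/4 = 9/2$, matching $N^2/d$ exactly.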
\begin{remark}\label{rem:proj} For potentials $K$ that are homogeneous of degree $2$ in each variable, the energy $I_K(\mu)$ is invariant under the projection $\pi$. The kernel $V^2$ is such a function, since it is the determinant of the Gram matrix of $\{ x_1,\dots,x_k\}$. This property is also satisfied by the frame potential $K(x,y) = \langle x,y \rangle^2$. In such cases, it is sufficient to find optimizers for probability measures on the sphere in order to solve an optimization problem in $\mathcal{P}^*(\mathbb{R}^d)$. \end{remark} We call a measure $\mu$ \textit{balanced} if $\int_{\Omega} x\, d\mu(x) = 0$, i.e. the center of mass is at the origin. Balanced isotropic measures can be used to construct isotropic measures in higher dimensions, as will be seen in the proof of Theorem~\ref{thm:A^2maxGen}. \subsection{Linear programming and positive definite kernels} The linear programming method, developed for the spherical case in \cite{DGS}, has proved successful in finding optimizing measures and point configurations as well as in giving lower bounds for two-point interaction energies (see, e.g., \cite{BGMPV1,CK, Y}). Here we briefly describe how it works. In Sections \ref{sec:SD} and \ref{sec:k-point}, we explain in more detail how the method is extended to semidefinite bounds for $k$-point energies. A symmetric kernel $K: ({\mathbb S^{d-1}})^2 \rightarrow \mathbb{R}$ is called \textit{positive definite} if for every $\nu \in \mathcal{M}(\mathbb S^{d-1})$, the energy integral satisfies $I_K(\nu) \geq 0$. A classical theorem of Schoenberg characterizes positive definite kernels via Gegenbauer polynomials \cite{Sch}. The Gegenbauer polynomials $P_m^d$, $ m\geq 0 $, form an orthogonal basis on $[-1,1]$ with respect to the measure $(1-t^2)^{\frac{d-3}2}dt$. Here, $P_m^d$ is normalized so that $P_m^d(1) = 1$.
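For instance, $P_0^d = 1$, $P_1^d(t) = t$, and $P_2^d(t) = \frac{dt^2-1}{d-1}$; a crude midpoint-rule computation (our sketch) confirms the orthogonality of these three polynomials with respect to $(1-t^2)^{\frac{d-3}{2}}dt$ and the normalization $P_m^d(1)=1$.

```python
def weighted_inner(f, g, d, n=100001):
    """Midpoint-rule approximation of the integral of
    f(t) g(t) (1 - t^2)^((d-3)/2) over [-1, 1]."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        t = -1.0 + (i + 0.5) * h
        total += f(t) * g(t) * (1.0 - t * t) ** ((d - 3) / 2.0)
    return total * h

d = 4  # ambient dimension, so the sphere is S^3

P0 = lambda t: 1.0
P1 = lambda t: t
P2 = lambda t: (d * t * t - 1.0) / (d - 1.0)
```

The pairwise inner products vanish (up to quadrature error), while the squared norm of $P_2^d$ is strictly positive, as expected of an orthogonal basis.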
Every continuous function $f$ on $[-1,1]$ can be expanded as \begin{equation}\label{eq:GegenbauerExpansion} f(t)=\sum_{m=0}^{\infty} \hat{f}_m P_m^d(t), \end{equation} where the sum converges uniformly and absolutely if $K(x,y) = f( \langle x, y \rangle)$ is positive definite on $\mathbb{S}^{d-1}$ (due to Mercer's Theorem). Rotationally invariant positive definite kernels on the sphere are exactly characterized by the positivity of their Gegenbauer coefficients. \begin{theorem}[Schoenberg \cite{Sch}]\label{thm:sch} The kernel $K(x,y)=f(\langle x,y \rangle)$ is positive definite on $\mathbb{S}^{d-1}$ if and only if all coefficients $\hat{f}_m$ of the Gegenbauer expansion \eqref{eq:GegenbauerExpansion} are non-negative. \end{theorem} More background on Gegenbauer polynomials, energy, and positive definite kernels on the sphere can be found in \cite{AH, BHS}. If one can bound a given function $f$ from below by a positive definite (modulo a constant) function $h$, usually a polynomial, then the linear programming bounds on the energy of $f$ are essentially consequences of the inequalities $$\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}} P_m^d(\langle x,y \rangle) \,d\mu(x) d\mu(y)\geq 0.$$ For example, $P_2^d(t)=\frac {dt^2-1} {d-1}$ and the inequality above immediately implies the lower bound in Lemma \ref{lem:frame}. \\ \subsection{$k$-positive definite kernels} As an extension of the notion of positive definite kernels to the multivariate case, we define \textit{$k$-positive definite kernels}. Let $K: (\mathbb S^{d-1})^k \rightarrow \mathbb{R}$ be continuous and symmetric in the first two variables. We define the \textit{potential function of $K$ for fixed $z_3, \ldots, z_k$} as \begin{equation}\label{eq:PotentialFuncDef} U_{K}^{z_3, \ldots, z_{k}}(x,y):= K(x,y, z_3, \ldots, z_k).
\end{equation} We call $K$ $k$-positive definite if for any $z_3, \ldots, z_k \in \mathbb S^{d-1}$, the potential function $U_{K}^{z_3, \ldots, z_{k}}(x,y)$ is positive definite as a function of $x$ and $y$. For kernels symmetric in all variables, this definition is the same as the one given in \cite{BFGMPV1}. A kernel $Y$ is $k$-negative definite if $-Y$ is $k$-positive definite. In Section \ref{sec:k-point} we provide a self-contained construction of large classes of $k$-positive definite kernels for $\mathbb{S}^{d-1}$. Here we collect some general results about positive definiteness and energy minimization for multivariate kernels. \begin{lemma}\label{lem:Schur's Lemma} Suppose that $K_1, K_2, \ldots$ are $k$-positive definite. Then $K_1 + K_2$ and $K_1 K_2$ are $k$-positive definite. If the sequence of $K_j$'s converges (uniformly in the first two variables and pointwise in the others) to a kernel $K$, then $K$ is also $k$-positive definite. \end{lemma} This result follows immediately from the same results for two-input kernels. Similarly, we have the following: \begin{lemma}\label{lem:Schur's Lemma2} Suppose that $K_1, K_2, \ldots$ are kernels such that each $I_{K_j}$ is minimized by a common probability measure $\mu$. Then $I_{K_1 + K_2}$ is also minimized by $\mu$. If the sequence of $K_j$'s converges (uniformly in the first two variables and pointwise in the others) to a kernel $K$, then $I_K$ is also minimized by $\mu$. \end{lemma} As in the two-input case, multiplication does not generally preserve the minimizers of energies. \begin{proposition}\label{prop:kPosDef and Energy Zero is Min} Suppose that $Y$ is a $k$-positive definite kernel on $\mathbb S^{d-1}$ and $\mu \in \mathcal{P}(\mathbb S^{d-1})$ with $I_Y(\mu) = 0$. Then $\mu$ is a minimizer of $I_Y$. \end{proposition} \begin{proof} Let $\nu \in \mathcal{P}(\mathbb S^{d-1})$.
Then, since $Y$ is $k$-positive definite, \begin{equation*} I_Y(\nu) \geq \min_{z_3, \ldots, z_k \in \mathbb S^{d-1}} \int_{\mathbb S^{d-1}} \int_{\mathbb S^{d-1}} Y(x,y, z_3, \ldots, z_k)\, d\nu(x) d\nu(y) \geq 0 = I_Y(\mu). \end{equation*} \end{proof} We can create multivariate kernels from kernels with fewer inputs in a natural way that preserves minimizers of the energy. \begin{lemma}\label{lem:kPosDef to nPosDef} For some kernel $Y: (\mathbb S^{d-1})^k \rightarrow \mathbb{R}$ and $n > k$, let \begin{equation} K(x_1, x_2, \ldots, x_n) = \frac{1}{|S|} \sum_{\pi \in S} Y(x_1, x_2, x_{\pi(3)}, \ldots, x_{\pi(k)}), \end{equation} where $S$ is a nonempty set of permutations of the set $\{ 3, \ldots, n\}$. Then $I_K$ is minimized by $\mu \in \mathcal{P}(\mathbb S^{d-1})$ if and only if $I_Y$ is as well. In addition, if $Y$ is $k$-positive definite, then $K$ is $n$-positive definite. \end{lemma} Note that if $S$ is the set of all such permutations, then $K$ is symmetric in the last $n-2$ variables. \begin{proof} For any $\nu \in \mathcal{M}(\mathbb S^{d-1})$, we see that \begin{equation*} I_K(\nu) = I_Y(\nu), \end{equation*} meaning their minimizers must be the same, and for any $z_3, \ldots, z_n \in \mathbb S^{d-1}$, \begin{equation*} \int_{\mathbb S^{d-1}} \int_{\mathbb S^{d-1}} K(x,y, z_3, \ldots, z_n)\, d\nu(x) d\nu(y) = \frac{1}{|S|} \sum_{\pi \in S} \int_{\mathbb S^{d-1}} \int_{\mathbb S^{d-1}} Y(x, y, z_{\pi(3)}, \ldots, z_{\pi(k)})\, d\nu(x)\, d\nu(y), \end{equation*} which is non-negative if $Y$ is $k$-positive definite. \end{proof} \begin{proposition}\label{prop:Min is Min for Symmetrization} For some kernel $Y: (\mathbb S^{d-1})^k \rightarrow \mathbb{R}$ let $K$ be defined by $$K( x_1,\ldots, x_k) = \frac{1}{k!} \sum_{\pi} Y( x_{\pi(1)}, \ldots, x_{\pi(k)}),$$ where $\pi$ varies over all permutations of $\{1, \ldots, k\}$.
Then $K$ is a symmetric kernel, and $I_K$ is minimized by $\mu \in \mathcal{P}(\mathbb S^{d-1})$ if and only if $I_Y$ is as well. \end{proposition} The proof is identical to that of Lemma \ref{lem:kPosDef to nPosDef}. We note that, unlike in Lemma \ref{lem:kPosDef to nPosDef}, $k$-positive definiteness of $Y$ does not imply that $K$ is also $k$-positive definite. In fact, in the three-input case, $-V^2$ and $-A^2$ are examples of symmetric kernels that are not $3$-positive definite \cite[Propositions 6.9 and 6.10]{BFGMPV1} but are the symmetrizations of $3$-positive definite kernels (modulo a constant), see \eqref{eq:Volume^2 SDP decomposition} and \eqref{eq:Area^2 SDP Decomposition}. We finally remark that the discussion of this section generalizes to arbitrary compact metric spaces in place of the sphere $\mathbb S^{d-1}$. \subsection{Two-input volumes}\label{sec:k=2} Here, we address the two-input versions of the potentials $V$ and $A$ on the sphere. \begin{proposition}\label{prop:vsas} Let $k=2$. On the sphere $\mathbb{S}^{d-1}$, $\sigma$ is a maximizer of the two-input energies $I_{A^s}$ and $I_{V^s}$ for $0<s<2$. Moreover, in the case of $A^s$, $\sigma$ is the unique maximizer. \end{proposition} \begin{proof} It is well known, see e.g. \cite[Proposition 2.3]{BGMPV1}, that $\sigma$ is a minimizer of $I_K$, where $K(x,y)=f(\langle x,y\rangle)$, if and only if for the Gegenbauer expansion $f(t)=\sum_{m=0}^{\infty} \hat{f}_m P_m^d(t)$, $\hat{f}_m\geq 0$ for all $m\geq 1$ (which, according to Theorem \ref{thm:sch}, is equivalent to the fact that $f$ is positive definite on $\mathbb S^{d-1}$ modulo an additive constant). Moreover, $\sigma$ is the unique minimizer if $\hat{f}_m> 0$ for all $m\geq 1$. We note that it is sufficient to use a weaker condition based on Maclaurin expansions of $f$. Assume $f(t)=\sum_{m=0}^{\infty} f^*_m t^m$ for $t\in[-1,1]$, with the series converging uniformly and absolutely.
Each function $t^m$ is positive definite on $\mathbb{S}^{d-1}$ by Schur's product theorem, and, by Theorem \ref{thm:sch}, it can be represented as a non-negative combination of Gegenbauer polynomials with, in particular, a positive coefficient for $P_m^d(t)$. This means that, whenever all $f^*_m$ are non-negative (positive) for $m\geq 1$, then all Gegenbauer coefficients $\hat{f}_m$ for $m\geq 1$ are also non-negative (positive). We need to show that $\sigma$ is a maximizer, so it is sufficient to check that all coefficients, starting from $m=1$, of the Maclaurin expansions of $V^s$ and $A^s$ are nonpositive. Indeed, \begin{equation*} V^s(x,y) = (V^2)^{s/2} = \left( 1-\langle x,y\rangle^2 \right)^{s/2} = \sum_{m=0}^{\infty} (-1)^m \binom{s/2}{m} \langle x,y \rangle^{2m}. \end{equation*} Similarly, \begin{equation*} A^s(x,y) = (A^2)^{s/2} = (2-2\langle x,y\rangle)^{s/2}=2^{s/2} (1-\langle x,y\rangle)^{s/2} = 2^{s/2}\sum_{m=0}^{\infty} (-1)^m \binom{s/2}{m} \langle x,y \rangle^m. \end{equation*} In both cases, $(-1)^m \binom{s/2}{m}$ is negative for all $m\geq 1$, so $\sigma$ is a maximizer for $V^s$ and the unique maximizer for $A^s$. \end{proof} \begin{remark} Since $V$ is invariant under central symmetry, it would be natural to consider it as a potential on the projective space $\mathbb{RP}^{d-1}$. Under this setup the uniform distribution over $\mathbb{RP}^{d-1}$ is the unique maximizer of $I_{V^s}$. \end{remark} Since $A(x,y) = \| x-y \|$, the statements about $A^s$ in Proposition \ref{prop:vsas} can be viewed as a special case of a more general result of Bjorck \cite{Bj}. Below we collect his results specialized to the sphere. \begin{proposition}[Bjorck \cite{Bj}]\label{prop:k=2} Let $k=2$, i.e. $A (x,y) = \| x- y \|$.
For the two-input energy $I_{A^s}$ on the sphere $\mathbb{S}^{d-1}$, \begin{itemize} \item if $0<s<2$, then $\sigma$ is the unique maximizer of $I_{A^s}$; \item if $s = 2$, then $\mu$ is a maximizer of $I_{A^s}$ if and only if $\mu$ is balanced; \item if $s > 2$, then the maximizers of $I_{A^s}$ are exactly measures of the form $\frac{1}{2}( \delta_{p} + \delta_{-p})$, for some $p \in \mathbb{S}^{d-1}$. \end{itemize} \end{proposition} A similar proposition about the maximizers over $\mathcal P(\mathbb S^{d-1})$ can be formulated for powers of $V$ in the two-input case. \begin{proposition}\label{prop:Vk=2} Let $k=2$, i.e. $V (x,y) = \left( 1-\langle x,y\rangle^2 \right)^{1/2} $. For the two-input energy $I_{V^s}$ on the sphere $\mathbb{S}^{d-1}$, \begin{itemize} \item if $0<s<2$, then $\sigma$ is a maximizer of $I_{V^s}$; \item if $s = 2$, then $\mu$ is a maximizer of $I_{V^s}$ if and only if $\mu$ is isotropic; \item if $s > 2$, then the only maximizers (up to central symmetry and rotation) of $I_{V^s}$ are uniform measures on the elements of an orthonormal basis of $\mathbb R^d$, i.e. measures of the form $\frac{1}{d} \sum_{i=1}^d \delta_{e_i}$, where $\{e_i\}_{i=1}^d$ is an orthonormal basis of $\mathbb R^d$. \end{itemize} \end{proposition} The case $0<s<2$ is covered in Proposition \ref{prop:vsas} above. The phase transition case $s=2$ follows from the case of equality in Lemma \ref{lem:frame}. The case $s>2$ can be easily handled by the linear programming method, but we give the proof of a more general statement for all $2\le k\le d$ in Corollary \ref{cor:V^s for s>2}. An exposition on the logarithmic and singular energies ($s < 0$) can be found in \cite{BHS} (and the references therein) for $A^s$ and in \cite{CHS} for $V^s$. \subsection{Comparison of two-input and multi-input energies}\label{sec:compare} The multi-input, i.e.
$k\geq 3$, generalizations of Propositions \ref{prop:k=2} and \ref{prop:Vk=2}, which are naturally more complicated, require different methods and form the main purpose of this paper. As stated in the introduction, we believe that the uniform measure $\sigma$ still maximizes both $I_{A^s}$ and $I_{V^s}$ in the range $0<s<2$ for $k\ge 3$, but this remains a conjecture. When $s=2$ and $k\ge 3$, maximizers of $I_{V^2}$ are, as in Proposition \ref{prop:Vk=2}, exactly the isotropic measures on $\mathbb S^{d-1}$ \cite{CC} (see Theorem \ref{thm:volume-gen}). However, we shall show (see Theorem \ref{thm:area-gen}, as well as Theorems \ref{thm: triangle area squared max} and \ref{thm:A^2maxGen}) that the maximizers of $I_{A^2}$ for $3\le k \le d+1$ are exactly \emph{balanced isotropic measures} (and not just balanced as in Proposition \ref{prop:k=2} for $k=2$). The case $s>2$ of Proposition \ref{prop:Vk=2} for $I_{V^s}$ still holds for all $2\le k \le d$ (Corollary \ref{cor:V^s for s>2}). However, we are only able to prove an analogue of this case for $A^s$ when $k=d+1$ (Corollary \ref{cor:A^s for s>2}): the uniform measure on the vertices of a regular simplex replaces the two poles as the unique (up to rotations) maximizer of $I_{A^s}$ for $s>2$. We conjecture that maximizers of $I_{A^s}$ with $s>2$ are discrete for all $3\le k \le d+1$, but their exact structure remains elusive (see end of Section~\ref{sec:A^2 on Sphere}). This discussion shows that in the multi-input case $k\ge 3$ the behavior of $A^s$ is significantly more complicated than that of $V^s$, as is evidenced already by the fact that the polynomial representation of $A^2$ (Lemma \ref{lem:A-formula}) is more involved than that of $V^2$.
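Before moving on, the elementary sign fact driving the proof of Proposition \ref{prop:vsas}, namely that $(-1)^m\binom{s/2}{m}<0$ for all $m\ge 1$ when $0<s<2$, can be double-checked numerically together with the binomial series it comes from (our sketch):

```python
def binom(a, m):
    """Generalized binomial coefficient a(a-1)...(a-m+1)/m!."""
    c = 1.0
    for i in range(m):
        c *= (a - i) / (i + 1)
    return c

def binomial_series(a, x, terms=80):
    """Partial sum of the binomial series (1 + x)^a = sum_m binom(a, m) x^m, |x| < 1."""
    return sum(binom(a, m) * x ** m for m in range(terms))
```

For $0 < a < 1$ the factors $a, a-1, a-2, \ldots$ give $\binom{a}{m}$ the sign $(-1)^{m-1}$, so $(-1)^m\binom{a}{m}$ is always negative for $m \ge 1$, which is exactly the nonpositivity of the Maclaurin coefficients used above.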
\section{Discrete Energies and Optimality of the Regular Simplex}\label{sec:Discrete A and V} Before presenting the study of maximizers of continuous energies with kernels $V^s$ and $A^s$, we discuss their discrete analogues with $N=d+1$ points, and find that the regular simplex is a maximizer for $0 < s \le 2$. Consequently, we discover a new geometrically optimal property of the regular simplex. These statements use the results from Sections \ref{sec:vol} and \ref{sec:A^2 on Sphere} about continuous $k$-point energies as a tool. We chose to open with these discrete results since, in our opinion, they yield particularly elegant applications of the theory. We start with a general statement: \begin{theorem}\label{thm: simplex best, general} Let $2 \leq k \leq d+1$ and $B: ( \mathbb{S}^{d-1})^k \rightarrow [0, \infty)$ be a polynomial kernel of degree at most two in each variable, such that $\sigma$ maximizes $I_B$ and whenever $x_i = x_j$ for some $i \neq j$, then $B(x_1, x_2, \ldots, x_k) = 0$. Let $f: [0, \infty) \rightarrow \mathbb{R}$ be concave, increasing, and such that $f(0) = 0$, and define the kernel $K(x_1, \ldots, x_k) = f(B(x_1, \ldots, x_k))$. If $N= d+1$, then the set of vertices of a regular $(N-1)$-simplex inscribed in $\mathbb{S}^{d-1}$ maximizes the discrete energy $E_K(\omega_N)$ over all $N$-point configurations on the sphere. Moreover, if $f$ is strictly concave and strictly increasing, then the vertices of regular $(N-1)$-simplices are the only maximizers of the energy (if $B$ does not contain terms which are linear in some of the variables, the uniqueness is up to changing any individual vertex $x$ to its opposite $-x$). \end{theorem} \begin{proof} Let $\omega_N = \{ z_1, \ldots, z_N\}$ be an arbitrary point configuration on $\mathbb{S}^{d-1}$. Since $B$ is zero if two of its inputs are the same, we can restrict the sum to $k$-tuples with distinct entries.
Combining this with the fact that $f$ is increasing and concave, using Jensen's inequality, we have \begin{align*} E_K(\omega_N) & := \frac{1}{N^k} \sum_{z_1, \ldots, z_k \in \omega_N} f(B(z_1, \ldots, z_k)) \\ & \leq \frac{N (N-1) \cdots (N-k+1)}{N^k} f \left( \sum_{\substack{z_{j_1}, \ldots, z_{j_k} \in \omega_N \\ j_1, \ldots, j_k \text{ distinct}}} \frac{B(z_{j_1}, \ldots, z_{j_k})}{N (N-1) \cdots (N-k+1)}\right)\\ & = \frac{N (N-1) \cdots (N-k+1)}{N^k} f \left( \frac{ N^k E_B (\omega_N)}{N (N-1) \cdots (N-k+1)}\right)\\ & \leq \frac{N (N-1) \cdots (N-k+1)}{N^k} f \left( \frac{ N^k I_B(\sigma)}{N (N-1) \cdots (N-k+1)}\right). \end{align*} The first inequality becomes an equality if \begin{equation*} B(y_1,\ldots, y_k) = \frac{N^k E_B (\omega_N)}{N (N-1) \cdots (N-k+1)} \end{equation*} for all distinct $y_1, \ldots, y_k \in \omega_N$, while the second becomes an equality if the point configuration is a spherical $2$-design, in particular, if $\omega_N$ is a regular simplex. The case of uniqueness is similar. \end{proof} This generalizes some known results for $B(x,y) = \| x-y\|^2$ \cite{Y, CK}. Note that this proof also extends to provide an upper bound of the energy $E_{f \circ B}(\omega_N)$ for every $N \geq k$, and that this upper bound is achieved whenever $B(z_{j_1}, \ldots, z_{j_k})$ is constant for every $k$-tuple of distinct points, meaning that one may find additional optimizers of the energy. For instance, for $B=V^2$ and $N = d$, any orthonormal basis would be a maximizer (though this would not work for $B = A^2$, since an orthonormal basis is not balanced). An upper bound of this form was given for $E_{V}(\omega_N)$ in \cite[Corollary 5.2]{CC}. We will show in subsequent sections that $\sigma$ maximizes the continuous energies with kernels $V^2$ and $A^2$ (Theorems \ref{thm:area-gen} and \ref{thm:volume-gen}), both of which are polynomials of degree two.
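As a tiny numerical sanity check (ours) of the simplest instance of this discrete optimality, with $d=2$, $k=N=3$, $B=A^2$, and $f(t)=\sqrt{t}$: among triangles inscribed in the unit circle, the equilateral one, on which $B$ is constant over distinct triples, maximizes the area $A$ and hence the energy $E_A$.

```python
import math
import random

def triangle_area(p, q, r):
    """Area of the triangle with vertices p, q, r (shoelace formula)."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def circle_point(theta):
    return (math.cos(theta), math.sin(theta))

# Vertices of the equilateral triangle inscribed in the unit circle.
equilateral = [circle_point(2.0 * math.pi * j / 3.0) for j in range(3)]

def random_inscribed_areas(trials, seed=0):
    """Areas of random triangles inscribed in the unit circle."""
    rng = random.Random(seed)
    for _ in range(trials):
        yield triangle_area(*(circle_point(rng.uniform(0.0, 2.0 * math.pi))
                              for _ in range(3)))
```

The equilateral triangle attains the classical maximum area $3\sqrt{3}/4$, and no random inscribed triangle exceeds it.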
Hence Theorem \ref{thm: simplex best, general} applies, immediately yielding the following corollary: \begin{corollary}\label{cor:Simplex Best A, V} Assume that either $K(x_1, \ldots, x_k) = V(x_1, \ldots, x_k)^s$ with $2 \leq k \leq d$, or $K(x_1, \ldots, x_k) = A(x_1, \ldots, x_k)^s$ with $2 \leq k \leq d+1$. Let $0 < s \le 2$. For $N=d+1$ points, the discrete $k$-input energy $E_K$ on the sphere $\mathbb S^{d-1}$ is uniquely (up to rotations, and up to central symmetry in the case of $V^2$) maximized by the vertices of a regular simplex inscribed in $\mathbb{S}^{d-1}$. \end{corollary} \begin{proof} For $0<s<2$, the function $f(t) = t^{s/2}$ is strictly concave and strictly increasing, so Theorem \ref{thm: simplex best, general} immediately applies. The uniqueness in the case $s=2$ needs a separate discussion. By Theorems \ref{thm:area-gen} and \ref{thm:volume-gen}, $I_{V^2}$ is maximized by isotropic measures on $\mathbb S^{d-1}$, and $I_{A^2}$ -- by balanced isotropic measures. In the discrete case, isotropic measures on $\mathbb S^{d-1}$ are exactly unit norm tight frames. The only tight frames on $\mathbb S^{d-1}$ with $N=d+1$ elements (up to central symmetry and rotations) are the vertices of a regular simplex \cite[Theorem 2.6]{GK}. \end{proof} Taking $K = A^s$ in Corollary \ref{cor:Simplex Best A, V}, and setting $j=k-1$, we obtain an interesting geometric result: \begin{corollary}\label{cor:Geometric Simplex is best} Let $1 \leq j \leq d$, $0 < s \leq 2$, $S$ be a $d$-simplex inscribed in $\mathbb{S}^{d-1}$, $\mathcal{F}_{j}$ the set of $j$-dimensional faces of $S$, and $\mathrm{Vol}_{j}(C)$ the $j$-dimensional volume of a set $C$. Then \begin{equation}\label{eq:T-functional} \sum_{F \in \mathcal{F}_{j}} \mathrm{Vol}_{j}(F)^s \end{equation} achieves its maximum if and only if $S$ is a regular simplex. \end{corollary} In the case $s = 1$, this generalizes the known results for $j=1$, i.e. the sum of distances between vertices \cite{F1}, $j=d-1$, i.e.
the surface area \cite{Ta2}, and $j=d$, i.e. the volume \cite{Jo, Ta1, Ball2, HL}. We also note that \eqref{eq:T-functional} is a special case of the $T$-functional, which has received a fair amount of study, mostly in Stochastic Geometry (see e.g. \cite{A, GKT, HoLe, HMR, KMTT, KTT}). \begin{remark} We note that by adjusting the definition of the discrete energy $E_K$ to only include summands where all inputs are distinct, we can study lower-semicontinuous kernels $K: \Big(\mathbb{S}^{d-1} \Big)^k \rightarrow ( - \infty, \infty]$. In this case, if we define $f$ in the statement of Theorem \ref{thm: simplex best, general} as a decreasing, convex function $f: (0, \infty) \rightarrow \mathbb{R}$, with $f(0) = \lim_{x \rightarrow 0^+} f(x)$, an identical proof shows that the vertices of a regular simplex minimize $E_{f \circ B}$, as a generalization of \cite[Theorem 1.2]{CK}. In particular, as an extension of Corollary \ref{cor:Simplex Best A, V}, this shows that regular simplices are optimal for $-\log(A)$ and $-\log(V)$, as well as for $A^{s}$ and $V^{s}$ with $s < 0$. \end{remark} \section{Semidefinite Programming and Three-input Volumes}\label{sec:SD} In this section we recall the basics of semidefinite programming and apply it to the maximization of integral functionals with three-input kernels. It will be shown that isotropic probability measures on $ \mathbb S^{d-1} $ maximize $ I_{V^2} $, where $ V $ is the three-dimensional volume, among all probability measures, while for the triangle-area energy integral $ I_{A^2} $ the maximizers are the balanced isotropic probability measures. In Sections \ref{sec:Powers of V} and \ref{sec:A^2 on Sphere}, these results are generalized to larger numbers of inputs and to measures on $\mathbb{R}^d$. For brevity, we denote $u=\langle y,z \rangle$, $v=\langle x,z \rangle$, $t=\langle x,y \rangle$. We also take $\sigma$, as before, to be the uniform probability measure on the sphere $\mathbb{S}^{d-1}$.
In \cite{BV}, Bachoc and Vallentin produced a class of infinite matrices and associated polynomials of the form \begin{equation}\label{eq:SemDefProgYs} (Y_{m}^d)_{i+1,j+1} (x,y,z) := Y_{m,i,j}^d(x,y,z) := P_i^{d + 2m}(u) P_{j}^{d + 2m}(v) Q_m^d(u,v,t), \end{equation} where $m,i,j \in \mathbb{N}_0$, $P_m^h$ is the normalized Gegenbauer polynomial of degree $m$ on $\mathbb{S}^{h-1}$ and \begin{equation}\label{eq:BachocValQKernel} Q_m^d(u,v,t) = ((1-u^2)(1-v^2))^{\frac{m}{2}}P_{m}^{d-1}\left(\frac{t-uv}{\sqrt{(1-u^2)(1-v^2)}}\right). \end{equation} \begin{remark} Polynomials $Y_{m,i,j}^d$ in \cite{BV} were defined with certain coefficients which we omit here for the sake of simplicity. \end{remark} Here we provide the upper left $3\times 3$, $2 \times 2$, and $1 \times 1$ submatrices of the infinite matrices $Y_0^d$, $Y_1^d$, and $Y_2^d$, respectively, which is all that we need for the rest of this section: $$\begin{pmatrix} 1 & v & \frac {dv^2-1} {d-1}\\ u & uv & u \frac {dv^2-1} {d-1}\\ \frac {du^2-1} {d-1} & \frac {du^2-1} {d-1} v & \frac {du^2-1} {d-1} \frac {dv^2-1} {d-1}\end{pmatrix},$$ $$ \begin{pmatrix} t-uv & v (t-uv)\\ u(t-uv) & uv(t-uv)\end{pmatrix}, \begin{pmatrix} \frac {(d-1)(t-uv)^2-(1-u^2)(1-v^2)} {d-2}\end{pmatrix}.$$ By letting $\pi$ run through the group of all permutations of the variables $x$, $y$, and $z$, and averaging, they defined the following symmetric matrices and associated polynomials \begin{equation*} (S_{m}^d)_{i+1,j+1} (x,y,z) := S_{m,i,j}^d (x,y,z) := \frac{1}{6} \sum_{\pi} Y_{m,i,j}^d (\pi(x), \pi(y), \pi(z)).
\end{equation*} These polynomials and matrices have a variety of nice properties: \begin{enumerate} \item\label{i} For any $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$ and $e \in \mathbb{S}^{d-1}$, $$ \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} Y_{m}^d(x,y,e)\, d\mu(x) d \mu(y)$$ and $$ S_m^d(\mu) := \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} S_{m}^d(x,y,z)\, d\mu(x) d \mu(y) d \mu(z)$$ are positive semidefinite, i.e. all principal minors (formed by finite submatrices) are nonnegative. \item\label{ii} For $(m,i,j) \neq (0,0,0)$, $I_{S_{m,i,j}^d}(\sigma) = 0$, and for all $e \in \mathbb{S}^{d-1}$, $ I_{Y_{m,i,j}^d}(\sigma, \sigma, \delta_{e}) = 0$. \item\label{iii} For $m \geq 1$ and $e \in \mathbb{S}^{d-1}$, $I_{S_{m,i,j}^d}(\delta_e) = I_{Y_{m,i,j}^d}(\delta_e) = 0$. \end{enumerate} We note that the paper \cite{BV} was only concerned with finite point sets. However, the results naturally extend to the continuous setting, and \eqref{i} is simply the extension of Corollary 3.5 in \cite{BV}, while \eqref{ii} follows from the construction of the $Y_{m,i,j}$'s from spherical harmonics (see Theorem 3.1 and the preceding text, as well as equation (11), in \cite{BV}). Finally, \eqref{iii} follows from the fact that $Q_m^d(1,1,1) = 0$. Now consider an infinite, symmetric, positive semidefinite matrix $A$ with finitely many nonzero entries. Then for any $m \geq 1$ and $\mu \in \mathcal{P}( \mathbb{S}^{d-1})$, $\operatorname{Tr}( S_m^d(\mu) A) \geq 0$, with equality if $\mu = \sigma$. (Indeed, observe that, for two symmetric positive semidefinite matrices $B=(b_{ij})$, $C = (c_{ij})$, Schur's theorem implies that the Hadamard product $B\circ C = (b_{ij} c_{ij})$ is positive semidefinite, which leads to the inequality $\operatorname{Tr} (BC) = \sum_{i,j} b_{ij} c_{ij} \ge 0$.)
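The parenthetical Schur-product argument is easy to sanity-check numerically. A minimal sketch (our own illustration, not part of \cite{BV}) with random positive semidefinite matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

min_trace, max_diff = np.inf, 0.0
for _ in range(100):
    # random symmetric positive semidefinite matrices B = M M^T, C = N N^T
    M, N = rng.standard_normal((2, 5, 5))
    B, C = M @ M.T, N @ N.T
    # Tr(BC) equals the entrywise sum of B_ij C_ij (C is symmetric); it is
    # nonnegative because the Hadamard product of psd matrices is psd
    max_diff = max(max_diff, abs(np.trace(B @ C) - np.sum(B * C)))
    min_trace = min(min_trace, np.trace(B @ C))
```

Here `min_trace` stays nonnegative and `max_diff` stays at rounding level, matching the two claims in the parenthetical.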
Likewise, let $A_0$ be an infinite, symmetric, positive semidefinite matrix with finitely many nonzero entries and such that all entries in the first row and first column are zeros. Then for any probability measure $\mu$, $\operatorname{Tr}( S_0^d(\mu) A_0) \geq 0$, with equality if $\mu = \sigma$. In this case, we require zeros in the first row and column due to the fact that $S_{0,0,0}^d$ is a constant, so we would not get equality for $\sigma$ in the above inequality otherwise. This gives us the following: \begin{theorem}\label{thm:SemiDefMin} Let $n \in \mathbb{N}_0$. For each $m \leq n$, let $A_m$ be an infinite, symmetric, positive semidefinite matrix with finitely many nonzero entries, with the additional requirement that $A_0$ has only zeros in its first row and first column. Let $$K(x,y,z) = \sum_{m=0}^{n} \operatorname{Tr}( S_{m}^d(x,y,z)\, A_m).$$ Then $\sigma$ is a minimizer of $I_K$ over probability measures on the sphere $\mathbb{S}^{d-1}$. \end{theorem} \noindent Naturally, adding a constant to $K$ does not change this statement, and multiplying by $-1$ turns it into a maximization result. Observe also that, when $A$ is a diagonal matrix, $\operatorname{Tr}( S_m^d A) $ is simply a positive linear combination of the diagonal elements of $ S_m^d $. Theorem \ref{thm:SemiDefMin} is often applied in this way (see the proofs of Theorems \ref{thm:vol squared max} and \ref{thm: triangle area squared max} below), in close analogy to Theorem \ref{thm:sch}. \subsection{Volume of a parallelepiped} Maximizing the sum of distances between points on a space (or the corresponding distance integrals) is a very natural optimization problem for two-input kernels, and one which has garnered a fair amount of attention (see, e.g. \cite{B, BBS, BS, BD, BDM, Bj, F1, F2, Sk, St}); as mentioned in the introduction, higher-dimensional analogues, such as area and volume, yield natural extensions for kernels with more inputs.
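The decompositions below are phrased in terms of the polynomials $S_{m,i,j}^d$ built from $Q_m^d$. As a quick sanity check on \eqref{eq:BachocValQKernel}, the following sketch (the helper names and the hard-coded low-degree Gegenbauer polynomials $P_1^h(t)=t$, $P_2^h(t)=\frac{ht^2-1}{h-1}$ are ours) verifies numerically the closed forms $Q_1^d = t-uv$ and $Q_2^d = \frac{(d-1)(t-uv)^2-(1-u^2)(1-v^2)}{d-2}$ displayed in the submatrices above:

```python
import math
import random

random.seed(0)

def gegenbauer(m, h, t):
    # normalized Gegenbauer polynomial P_m^h with P_m^h(1) = 1, degrees 1 and 2
    return t if m == 1 else (h * t * t - 1) / (h - 1)

def Q(m, d, u, v, t):
    # the kernel Q_m^d(u, v, t) from the Bachoc-Vallentin construction
    w = (1 - u * u) * (1 - v * v)
    return w ** (m / 2) * gegenbauer(m, d - 1, (t - u * v) / math.sqrt(w))

d = 5
err1 = err2 = 0.0
for _ in range(200):
    u, v, t = (random.uniform(-0.9, 0.9) for _ in range(3))
    err1 = max(err1, abs(Q(1, d, u, v, t) - (t - u * v)))
    closed = ((d - 1) * (t - u * v) ** 2 - (1 - u * u) * (1 - v * v)) / (d - 2)
    err2 = max(err2, abs(Q(2, d, u, v, t) - closed))
```

Both errors stay at rounding level, since the square roots in $Q_1^d$ cancel exactly and the $m=2$ prefactor clears the denominator inside $P_2^{d-1}$.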
In this section we discuss such questions for $k=3$ inputs, focusing on volume squared and area squared, as these produce polynomials which are easier to work with. We first consider the kernel \begin{equation*} K(x,y,z) = V^2(x,y,z) = \det\begin{pmatrix} 1 & u & v \\ u & 1 & t \\ v & t & 1 \end{pmatrix}=1-u^2-v^2-t^2+2uvt \end{equation*} on the sphere $\mathbb{S}^{d-1}$, with $d > 2$, and where $V(x,y,z)$ is the volume of the parallelepiped formed by the vectors $x$, $y$, and $z$. As mentioned in \cite{BFGMPV1}, $-V^2$ is not 3-positive definite (modulo a constant), but as we show here, $\sigma$ is a minimizer of $I_{-V^2}$, i.e. a maximizer of $I_{V^2}$. Indeed, we see that \begin{equation}\label{eq:Volume^2 SDP decomposition} V^2(x,y,z) = \frac{(d-1)(d-2)}{d^2} - \frac{(d-1)(d-2)}{d^2} S_{0,2,2}^d - \frac{4(d-2)}{d} S_{1,1,1}^d - \frac{(3d-4)(d-2)}{d (d-1)} S_{2,0,0}^d, \end{equation} so Theorem \ref{thm:SemiDefMin} tells us that $\sigma$ is a maximizer. Moreover, since $V^2$ is a polynomial of degree two in every variable, and has no linear terms, any isotropic measure on the sphere is also a maximizer, and in fact this classifies all maximizers. \begin{theorem}\label{thm:vol squared max} Isotropic probability measures on the sphere are exactly the maximizers of $I_{V^2}$ over $\mathcal{P}(\mathbb{S}^{d-1})$. \end{theorem} \subsection{Area of a triangle} Using the same method as in Theorem \ref{thm:vol squared max}, we can show that $\sigma$ is a maximizer of $I_{A^2}$, where $A(x,y,z)$ is the area of a triangle, since \begin{equation}\label{eq:Area^2 SDP decomposition} A^2(x,y,z) = \frac{1}{4} \Big(3 \frac{d-1}{d} - 3 \frac{d-2}{d-1} S_{2,0,0}^d -6 S_{1,1,1}^d - 6S_{1,0,0}^d - 3 \frac{d-1}{d}S_{0,2,2}^d \Big). \end{equation} However, we can also prove this with a slightly different method, which is a special case of Theorem \ref{thm:area-gen}, a more general statement that we will prove by means of $k$-point bounds.
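Identity \eqref{eq:Volume^2 SDP decomposition} can be spot-checked numerically at random points. A minimal sketch (the helper names and the hard-coded degree-$2$ normalized Gegenbauer polynomial are ours):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
d = 5

def unit(v):
    return v / np.linalg.norm(v)

def P2(h, t):
    # normalized Gegenbauer polynomial of degree 2, with P_2^h(1) = 1
    return (h * t * t - 1) / (h - 1)

def Y(m, i, j, x, y, z):
    # the three polynomials Y_{m,i,j}^d appearing in the decomposition,
    # with u = <y,z>, v = <x,z>, t = <x,y>
    u, v, t = y @ z, x @ z, x @ y
    if (m, i, j) == (0, 2, 2):
        return P2(d, u) * P2(d, v)
    if (m, i, j) == (1, 1, 1):
        return u * v * (t - u * v)
    if (m, i, j) == (2, 0, 0):
        return ((d - 1) * (t - u * v) ** 2 - (1 - u * u) * (1 - v * v)) / (d - 2)
    raise ValueError((m, i, j))

def S(m, i, j, x, y, z):
    # symmetrization over all six permutations of the inputs
    return sum(Y(m, i, j, *p) for p in itertools.permutations((x, y, z))) / 6

err = 0.0
for _ in range(50):
    x, y, z = (unit(rng.standard_normal(d)) for _ in range(3))
    gram = np.array([[1.0, x @ y, x @ z],
                     [x @ y, 1.0, y @ z],
                     [x @ z, y @ z, 1.0]])
    V2 = np.linalg.det(gram)          # squared parallelepiped volume
    c = (d - 1) * (d - 2) / d**2
    rhs = (c - c * S(0, 2, 2, x, y, z)
           - 4 * (d - 2) / d * S(1, 1, 1, x, y, z)
           - (3 * d - 4) * (d - 2) / (d * (d - 1)) * S(2, 0, 0, x, y, z))
    err = max(err, abs(V2 - rhs))
```

Since both sides are polynomials in the inner products, agreement at random points on the sphere is strong evidence for the identity.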
\begin{theorem}\label{thm: triangle area squared max} Suppose $d \geq 2$, and let $ A^2(x,y,z) $ be the square of the area of the triangle with vertices at $x$, $y$, $z\in \mathbb S^{d-1}$. Then the uniform surface measure $\sigma$ maximizes $I_{A^2} (\mu)$ over $\mathcal{P} (\mathbb{S}^{d-1})$. Moreover, any balanced, isotropic measure $\mu \in \mathcal{P} (\mathbb{S}^{d-1})$ maximizes $I_{A^2}$. \end{theorem} \begin{proof} Using Heron's formula, we express $A^2(x,y,z)$ via the scalar products of $x,y,z$: \begin{equation}\label{eq:Area^2 SDP Decomposition} A^2 (x,y,z) = \frac34 -\frac12 (u+v+t) + \frac12 (uv + vt +tu) - \frac{1}4 ( u^2 + v^2 + t^2) = \frac {3} {4} - \frac 3 2 S_{1,0,0}^d - \frac{1}4 ( u^2 + v^2 + t^2). \end{equation} Note that $\sigma$ minimizes both $S_{1,0,0}^d$ and $u^2+v^2+t^2$ by Theorem \ref{thm:SemiDefMin} and Lemma \ref{lem:frame}, respectively. Therefore, for all $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$, $$I_{A^2}(\mu)\leq \frac 3 4 - \frac 1 4 \cdot \frac 3 d = \frac {3(d-1)} {4d} = I_{A^2}(\sigma).$$ More generally, maximizers of $I_{A^2}$ must be isotropic measures in order to achieve the sharp bound from Lemma \ref{lem:frame}, and must be balanced so that $S_{1,0,0}^d$ vanishes on them. \end{proof} An alternative proof of this result is also given in \cite[Theorem 6.7]{BFGMPV1}. We would also like to remark that, since \begin{equation*} 3 \frac{d-2}{d-1} S_{2,0,0}^d +6 S_{1,1,1}^d + 3 \frac{d-1}{d}S_{0,2,2}^d + \frac{3}{d} = u^2 + v^2 + t^2, \end{equation*} which follows from \cite[Proposition 3.6]{BV}, Lemma \ref{lem:frame} is a consequence of Theorem \ref{thm:SemiDefMin}; this demonstrates an instance of obtaining $2$-point bounds from $3$-point bounds. \section{Maximizing $k$-volumes}\label{sec:vol} { This section collects results on the maximization of volume integral functionals over probability measures with unit second moment in $ \mathbb R^d $, denoted as before by $ \mathcal P^*(\mathbb R^d) $, for $k\ge 3$ inputs.
In some cases we will further restrict the supports of such measures to the unit sphere, thereby optimizing over $ \mathcal P(\mathbb S^{d-1}) $. As in the rest of the paper, we are interested in powers of two kernels: the $k$-dimensional Euclidean volume of the parallelepiped $V(x_1,\ldots,x_k)$, and the $(k-1)$-dimensional volume of the simplex $A (x_1,\ldots,x_k)$.} \subsection{Maximizing the powers of $V$}\label{sec:Powers of V} As in the previous section, we start with $ V^2 $, i.e. the squared $k$-dimensional volume of the parallelepiped spanned by the vectors $x_1, \ldots, x_k$, which is equal to the determinant of the Gram matrix of the set of vectors $\{x_1,\ldots,x_k\}\subset \mathbb R^d$. Alternatively, $\frac 1 {(k!)^2} V^2(x_1, \ldots, x_k)$ can be seen as the square of the Euclidean volume of the simplex with vertices $0, x_1, \ldots, x_k$. The following theorem (in a slightly different form) can be found in the literature: \begin{theorem}\label{thm:volume-gen} Let $d\ge 3 $ and $3\le k \le d$. The set of maximizing measures of $I_{V^2}$ in $\mathcal{P}^*(\mathbb{R}^d)$ is the set of isotropic measures on $\mathbb{R}^d$. The value of the maximum is $\frac {k!}{d^k}\binom{d}{k}$. As a corollary, isotropic measures on $\mathbb S^{d-1}$ (which include the uniform surface measure $\sigma$) are exactly the maximizers of $I_{V^2}$ over $\mathcal{P}(\mathbb S^{d-1})$. \end{theorem} This theorem was proved by Rankin \cite{R} for $k=d$ and by Cahill and Casazza \cite{CC} in the general case. In both papers, the statements are for finite spherical sets, but the proofs work for measures in $\mathcal{P}^*(\mathbb{R}^d)$ with only minor adjustments. We also note that, due to Remark \ref{rem:proj}, it is sufficient to prove the result in the spherical case only, since $V^2$ is homogeneous of degree two in each variable. The equality case of Theorem \ref{thm:volume-gen} was also treated in a more general context in \cite{FNZ, P}.
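The maximal value in Theorem \ref{thm:volume-gen} is easy to check on one particular isotropic measure: for the uniform measure on an orthonormal basis, $\det \operatorname{Gram}(x_1,\ldots,x_k)$ equals $1$ exactly on injective index tuples, of which there are $d(d-1)\cdots(d-k+1)$. A numerical sketch:

```python
import itertools
import math
import numpy as np

d, k = 5, 3
basis = np.eye(d)

# I_{V^2} of the uniform measure on an orthonormal basis e_1, ..., e_d:
# average of det Gram(x_1, ..., x_k) over all d^k ordered index tuples;
# the determinant is 1 on injective tuples and 0 otherwise
total = 0.0
for idx in itertools.product(range(d), repeat=k):
    rows = basis[list(idx)]
    total += np.linalg.det(rows @ rows.T)
I = total / d**k

predicted = math.factorial(k) * math.comb(d, k) / d**k
```

For $d=5$, $k=3$ both quantities equal $\frac{5\cdot 4\cdot 3}{125} = 0.48$, in agreement with $\frac{k!}{d^k}\binom{d}{k}$.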
{Theorem \ref{thm:volume-gen} allows one to characterize the maximizers of $ I_{V^s} $ with $ s>2 $ as well.} \begin{corollary}\label{cor:V^s for s>2} For $s > 2$, the energy $I_{V^s}$ on $\mathbb{S}^{d-1}$ is uniquely (up to rotations and central symmetry) maximized by the uniform measure on an orthonormal basis. \end{corollary} \begin{proof} For $s > 2$, $V^2(x_1, \ldots, x_k) \geq V^s(x_1, \ldots, x_k)$ for all $x_1, \ldots, x_k \in \mathbb{S}^{d-1}$, with equality exactly when $x_1, \ldots, x_k$ is an orthonormal set (so that the volume is $1$) or when $x_1, \ldots, x_k$ are linearly dependent (so that the volume is $0$). Thus, for all $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$, \begin{equation} \frac {k!}{d^k}\binom{d}{k} \geq I_{V^2}(\mu) \geq I_{V^s}(\mu). \end{equation} The first inequality becomes an equality if $\mu$ is isotropic, and the second inequality becomes an equality if any $x_1, \ldots, x_k \in \operatorname{supp}(\mu)$ are either orthonormal or linearly dependent. Since the support of an isotropic measure must be full-dimensional, both of these conditions occur simultaneously if and only if $\mu(\{- e_j, e_j\}) = \frac{1}{d}$ for $j = 1, \ldots, d$ for some orthonormal basis $e_1, \ldots, e_d$. \end{proof} It is easy to see that the uniform distribution on an orthonormal basis is not a maximizer of $I_{V^s}$ for $0 < s < 2$. \subsection{Maximizing the powers of $A$}\label{sec:Powers of A} We now turn to the powers of $A (x_1,\ldots,x_k)$, the $(k-1)$-dimensional volume of the simplex with vertices $x_1, \ldots, x_k$, and again start by considering the kernel $A^2$. The result of Theorem \ref{thm:volume-gen} for $V^2$ can be used to obtain a similar statement for the measures in $\mathcal{P}^*(\mathbb{R}^d)$ maximizing $ I_{A^2} $, in the case of $k= d+1$ inputs, i.e. when the simplex is full-dimensional.
The main idea is to embed $ \mathbb R^d $ into $ \mathbb R^{d+1} $, treat the value of $ A^2(x_1,\ldots,x_{d+1}) $ as the value of $ V^2(y_1,\ldots,y_{d+1}) $ for suitable $ y_1,\ldots, y_{d+1} $, and then use Theorem \ref{thm:volume-gen}. We shall defer the calculation of the maximal value of $ I_{A^2} $ until Theorem~\ref{thm:area-gen}, which also gives an alternative proof of the characterization of maximizers on the sphere $\mathbb S^{d-1}$, moreover addressing the case of {\emph{any}} number of inputs $3\le k \le d+1$, rather than just $k=d+1$ as in the theorem below. However, the result of this section, Theorem \ref{thm:A^2maxGen}, applies to measures on $\mathbb R^d$, while Theorem~\ref{thm:area-gen} is restricted to the sphere. \begin{theorem}\label{thm:A^2maxGen} For $d \geq 2$ and $k = d+1$, the maximizers of $I_{A^2}$ in $\mathcal{P}^*(\mathbb{R}^d)$ are the balanced, isotropic probability measures on $\mathbb{R}^d$. \end{theorem} \begin{proof} In this proof, we say that a measure $ \mu $ on $ \mathbb R^d $ is $ d $-isotropic if equation \eqref{eq:isotropic_init} holds. Given a unit basis vector $ e_{d+1}\in \mathbb R^{d+1} $, we identify $ \mathbb R^d $ with the hyperplane in $ \mathbb R^{d+1} $ orthogonal to $ e_{d+1} $ and passing through the origin. To reduce the value of $ I_{A^2} $ to that of $ I_{V^2} $, given a measure $ \mu $ on $ \mathbb R^d $, denote its pushforward to $\mathbb R^{d+1}$ by $\hat\mu$: \begin{equation} \label{eq:muhat} \hat\mu:= \psi_\# \mu, \end{equation} where the map $ \psi: \mathbb R^d \to \mathbb R^{d+1} $ is \[ \psi(x) := \sqrt{\frac{d}{d+1}}x + \frac1{\sqrt{d+1}}e_{d+1}. \] It is understood here that $ x\in\mathbb R^d \subset \mathbb R^{d+1} $, so the addition in the right-hand side is in \(\mathbb R^{d+1}\). Recall that a $ (d+1) $-dimensional simplex with base of $ d $-dimensional volume $ S $ and height $ h $ has $ (d+1) $-dimensional volume $ {S h}/{(d+1)} $.
This gives, with $V^2 = V^2(x_1,\ldots,x_{d+1}) $ the square of the $ (d+1) $-dimensional volume of the parallelepiped spanned by its inputs, \begin{equation}\label{eq:AtoV} I_{V^2}(\hat\mu) = (d!)^2 \frac{d^{d}}{(d+1)^{d+1}}I_{A^2}(\mu), \end{equation} where we account for the fact that $ V $ includes a factor of $ (d+1)! $, whereas $ A $ does not. By Theorem \ref{thm:volume-gen}, the functional $I_{V^2}$ on the left-hand side of equation~\eqref{eq:AtoV} is maximized over $\mathcal{P}^*(\mathbb{R}^{d+1})$ exactly when $\hat{\mu}$ is isotropic. To finish the proof, it remains to observe that the pushforward $ \hat\mu = \psi_\#\mu $ is $ (d+1)$-isotropic if and only if $ \mu $ is $ d $-isotropic and balanced. Indeed, writing $ x = (x^{(1)}, \ldots, x^{(d+1)}) = (y, x^{(d+1)}) $, we have for $ 1\leq i \leq j \leq d+1 $ \begin{equation} \label{eq:isotropyGamma} \int_{\mathbb{R}^{d+1}} x^{(i)}x^{(j)}\, d\hat\mu(x) = \begin{cases} \frac{d}{d+1} \int_{\mathbb{R}^d} y^{(i)}y^{(j)}\, d\mu(y) & j \leq d,\\ \frac{\sqrt{d }}{d+1} \int_{\mathbb{R}^d} y^{(i)}\, d\mu(y) & i \leq d, \ j=d+1,\\ \frac{1}{d+1} & i=j=d+1. \end{cases} \end{equation} By \eqref{eq:isotropic_init}, $ (d+1) $-isotropy of $ \hat\mu $ means that the integral in the left-hand side of \eqref{eq:isotropyGamma} is equal to $ \delta_{ij}/(d+1) $, $ 1\leq i\leq j\leq d+1 $. In particular, the first integral in the right-hand side is then equal to $ \delta_{ij}/d $, which is precisely the condition for $ d $-isotropy of $ \mu $, and the integrals of $ y^{(i)} $ are all equal to zero, implying that $ \mu $ is balanced. The converse implication follows along the same lines. \end{proof} We can use the same methods as in Corollary \ref{cor:V^s for s>2} to find the maximizers of $ I_{A^s} $, for larger powers, on the sphere $\mathbb S^{d-1}$.
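The relation \eqref{eq:AtoV} follows from the pointwise identity $V^2(\psi(x_1),\ldots,\psi(x_{d+1})) = (d!)^2 \frac{d^d}{(d+1)^{d+1}} A^2(x_1,\ldots,x_{d+1})$, which can be checked numerically. A minimal sketch for $d=3$ (helper names are ours):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
d = 3

def simplex_area_sq(pts):
    # squared d-volume of the simplex with the given d+1 vertices in R^d,
    # via the Gram determinant of the edge vectors
    edges = np.array([p - pts[0] for p in pts[1:]])
    return np.linalg.det(edges @ edges.T) / math.factorial(len(pts) - 1) ** 2

err = 0.0
for _ in range(50):
    xs = rng.standard_normal((d + 1, d))
    A2 = simplex_area_sq(xs)
    # the lift psi(x) = sqrt(d/(d+1)) x + e_{d+1}/sqrt(d+1) into R^{d+1}
    ys = np.hstack((math.sqrt(d / (d + 1)) * xs,
                    np.full((d + 1, 1), 1 / math.sqrt(d + 1))))
    V2 = np.linalg.det(ys @ ys.T)   # squared (d+1)-dim parallelepiped volume
    pred = math.factorial(d) ** 2 * d**d / (d + 1) ** (d + 1) * A2
    err = max(err, abs(V2 - pred))
```

Integrating this pointwise identity $d\mu^{\otimes(d+1)}$ gives \eqref{eq:AtoV}.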
\begin{corollary}\label{cor:A^s for s>2} Let $s > 2$ and let $A(x_1, \ldots, x_{d+1})$ be the $d$-dimensional volume of a simplex with vertices $x_1, \ldots, x_{d+1} \in \mathbb{S}^{d-1}$. Then $I_{A^s}$ is uniquely (up to rotations) maximized by the uniform distribution on the vertices of a regular $d$-simplex. \end{corollary} \begin{proof} We know that $A(x_1, \ldots, x_{d+1})$ is maximized exactly when $x_1, \ldots, x_{d+1}$ are the vertices of a regular simplex (see, e.g. \cite{Jo, Ta1, Ball2, HL}; see also the case $j=d$ of Corollary \ref{cor:Geometric Simplex is best}). Let $\alpha$ be that maximum volume. We see that for $s > 2$, \begin{equation*} A^2(x_1, \ldots, x_{d+1}) \geq \alpha^{2-s} A^s(x_1, \ldots, x_{d+1}) \end{equation*} for all $x_1, \ldots, x_{d+1} \in \mathbb{S}^{d-1}$, with equality exactly when $A(x_1, \ldots, x_{d+1})$ is $0$ or $\alpha$. We know that, for all $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$, \begin{equation} \frac{d+1}{d! d^d} \geq I_{A^2}(\mu) \geq \alpha^{2-s} I_{A^s}(\mu). \end{equation} The first inequality becomes an equality when $\mu$ is balanced and isotropic, and the second becomes an equality when $A(x_1, \ldots, x_{d+1})$ is $0$ or $\alpha$ for all $x_1, \ldots, x_{d+1} \in \operatorname{supp}(\mu)$. These both occur exactly when $\mu$ is the uniform distribution on the vertices of a regular $d$-simplex. \end{proof} For $0 < s < 2$, the uniform distribution on the vertices of a regular simplex is not a maximizer of $I_{A^s}$. It is also not clear which measures maximize $I_{A^s}$ for $s>2$ and general $k< d+1$. We conjecture that the maximizers are again discrete in this case (see the discussion at the end of Section~\ref{sec:A^2 on Sphere}). \section{Kernels for \texorpdfstring{$k$}{k}-point bounds}\label{sec:k-point} In this section, we explain how to construct a class of continuous (in certain cases, polynomial) kernels which are $k$-positive definite and whose energy is minimized by $\sigma$.
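Before constructing these kernels, we note that the sharp constant $\frac{d+1}{d!\,d^d}$ in Corollary \ref{cor:A^s for s>2} can be verified directly in the smallest case $d=2$, where the uniform distribution on the vertices of an equilateral triangle (a balanced isotropic configuration) attains it. A quick numerical check:

```python
import itertools
import math

# vertices of an equilateral triangle inscribed in the unit circle S^1
verts = [(math.cos(2 * math.pi * j / 3), math.sin(2 * math.pi * j / 3))
         for j in range(3)]

def area(p, q, r):
    # triangle area via the cross product of two edge vectors
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1])
                     - (q[1] - p[1]) * (r[0] - p[0]))

# I_{A^2} of the uniform measure on the three vertices (d = 2, k = d+1 = 3):
# average A^2 over all 3^3 ordered triples of vertices
I = sum(area(*t) ** 2 for t in itertools.product(verts, repeat=3)) / 27

bound = (2 + 1) / (math.factorial(2) * 2**2)   # (d+1)/(d! d^d) for d = 2
```

Only the $3! = 6$ injective triples contribute, each with squared area $\frac{27}{16}$, so $I = \frac{6}{27}\cdot\frac{27}{16} = \frac{3}{8}$, matching the bound.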
These kernels are a generalization of those developed by Bachoc and Vallentin in \cite{BV}, and similar to the kernels given in \cite{Mu} and \cite{DMOV}, all of which were used for obtaining $k$-point semidefinite programming bounds. We provide a slight alteration to these kernels, so that the inputs are no longer restricted to being linearly independent, or constrained to some proper subset of the sphere. Consider the points $\{x_1,\ldots, x_k\}\subset \mathbb S^{d-1}$ with $k \leq d+1$. Suppose that $x_3,\ldots, x_k$ are linearly independent and $x_1, x_2 \not\in X = \operatorname{span}\{x_3, \ldots, x_k\}$, and denote the orthogonal projections of $x_1$ and $x_2$ onto $X^{\perp}$ by $y_1$ and $y_2$, respectively. Then the normalized vectors $\frac{y_1}{\|y_1\|}$ and $\frac{y_2}{\|y_2\|}$ belong to the unit sphere in the $(d-k+2)$-dimensional space $X^{\perp}$. If $k \leq d$, then on this unit sphere, the kernel given by $P_l^{d-k+2}( \langle x, y \rangle)$ is positive definite, suggesting that we may be able to build a $k$-positive definite kernel from \begin{equation}\label{eq:SDP Building Block} P_l^{d-k+2}\Big( \Big\langle \frac{y_1}{\|y_1\|}, \frac{y_2}{\|y_2\|} \Big\rangle \Big). \end{equation} If $k = d+1$, then $\frac{y_1}{\|y_1\|}, \frac{y_2}{\|y_2\|} \in \mathbb{S}^0 = \{ -1, 1 \}$, and we see that $1$ and $\frac{y_1}{\|y_1\|} \frac{y_2}{\|y_2\|}$ form a basis for the positive definite functions. Of course, for $l > 0$, $P_l^{d-k+2}\Big( \Big\langle \frac{y_1}{\|y_1\|}, \frac{y_2}{\|y_2\|} \Big\rangle \Big)$ is not well defined, as a function of $x_1, \ldots, x_k$, if $x_1$ or $x_2$ is in $X$, and may not be continuous whenever the dimension of $X$ changes. We can modify \eqref{eq:SDP Building Block} to account for these issues, arriving at the following polynomial kernel.
In what follows, we denote by $W$ the Gram matrix of $x_3, \ldots, x_k$, and we set $P^1_0 =1$ and $P^1_1(t) = t$ (these are the only cases in which $P^1_j$ is defined). \begin{theorem}\label{thm:Qdef} With the notation above, for any $l \in \mathbb{N}_0$, the function $Q_{k,l}^{d}: \Big( \mathbb{S}^{d-1} \Big)^k \rightarrow \mathbb{R}$ defined by \begin{equation}\label{eq:Qdef} Q_{k,l}^{d}( x_1, \ldots, x_k) = \det(W)^l \| y_1 \|^{l} \| y_2 \|^{l} P_l^{d-k+2}\Big( \Big\langle \frac{y_1}{\|y_1\|}, \frac{y_2}{\|y_2\|} \Big\rangle \Big) \end{equation} is a rotationally-invariant $k$-positive definite polynomial kernel and $I_{Q_{k,l}^d}$ is minimized by $\sigma$. \end{theorem} We note that these kernels are symmetric in the last $k-2$ variables as well as in the first two variables. \begin{proof} Note that $Q_{k,0}^d = 1$, so our claim holds in this case. Now, assume that $l \in \mathbb{N}$. Rotational invariance follows immediately from the rotational invariance of $W$, $\| y_1 \|$, $\|y_2\|$ and $\langle y_1, y_2 \rangle$. In what follows, we denote $u_{i,j} = \langle x_i, x_j \rangle$ for all $i$ and $j$, and for $h \in \{ 1, 2\}$, $w_h = (u_{h,3}, \ldots, u_{h,k})^T$, $y_h^{\perp} = x_h - y_h$, and $z_h = \frac{y_h}{\|y_h\|}$. We first must show that the kernel $Q_{k,l}^{d}$ is well-defined. Write $y_1^{\perp} = \sum_{j=3}^{k} \alpha_j x_j$ and $y_2^{\perp} = \sum_{j=3}^{k} \beta_j x_j$, and denote $\alpha = ( \alpha_3 , \ldots, \alpha_k)^T$ and $\beta = ( \beta_3, \ldots, \beta_k)^T$. Since for $3 \leq j \leq k$, \begin{equation*} u_{1,j} = \langle x_1, x_j \rangle = \langle y_1^{\perp}, x_j \rangle = \sum_{i=3}^{k} \alpha_i u_{i,j} \quad \text{ and } \quad u_{2,j} = \langle x_2, x_j \rangle = \langle y_2^{\perp}, x_j \rangle = \sum_{i=3}^{k} \beta_i u_{i,j}, \end{equation*} we conclude that $w_1 = W \alpha$ and $w_2 = W \beta$.
Now assume that $x_3, \ldots, x_k$ are linearly independent. Consequently, $\alpha = W^{-1} w_1$ and $\beta = W^{-1} w_2$, so \begin{equation} \langle y_1^{\perp}, y_2^{\perp} \rangle = \alpha^T W \beta = w_1^T W^{-1} W W^{-1} w_2 = w_1^T W^{-1} w_2, \end{equation} and similarly \begin{equation} \langle y_1^{\perp}, y_1^{\perp} \rangle = w_1^T W^{-1} w_1 \text{ and } \langle y_2^{\perp}, y_2^{\perp} \rangle = w_2^T W^{-1} w_2. \end{equation} We then see that \begin{equation} \langle y_1, y_2 \rangle = \langle x_1, x_2 \rangle - \langle y_1^{\perp}, y_2^{\perp} \rangle = u_{1,2} - w_1^T W^{-1} w_2, \end{equation} \begin{equation} \| y_1 \|^2 = 1 - w_1^T W^{-1} w_1 \text{ and } \| y_2 \|^2 = 1 - w_2^T W^{-1} w_2. \end{equation} Thus, if $x_1, x_2 \not\in X$, we can rewrite \eqref{eq:Qdef} as \begin{align} Q_{k,l}^d (x_1, \ldots, x_k) = \det(W)^l \Big( \big(1-w_1^T W^{-1} w_1 \big) & \big(1-w_2^T W^{-1} w_2 \big) \Big)^{l/2} \\ \nonumber & \times P_l^{d-k+2}\left(\frac {u_{1,2} - w_1^T W^{-1} w_2} {\sqrt{1-w_1^T W^{-1} w_1}\sqrt{1- w_2^T W^{-1} w_2}} \right). \end{align} Letting $P_l^{d-k+2}(t) = \sum_{m=0}^{ \lfloor \frac{l}{2} \rfloor} a_{l-2m} t^{l-2m}$, we see that \begin{align}\label{eq:Qexpansion1} Q_{k,l}^{d}( x_1, \ldots, x_k) = \sum_{m=0}^{ \lfloor \frac{l}{2} \rfloor} a_{l-2m} & \Big( \det(W) u_{1,2} -w_1^{T} \operatorname{adj}(W) w_2 \Big)^{l - 2m} \; \Big(\det(W) -w_1^{T} \operatorname{adj}(W) w_1 \Big)^{m} \; \\ \nonumber & \times \Big(\det(W) -w_2^{T} \operatorname{adj}(W) w_2 \Big)^{m}, \end{align} where $\operatorname{adj}(W)$ is the adjugate matrix of $W$. This is a polynomial in the inner products of $x_1,\ldots, x_k$, and so is well defined for all $x_1, \ldots, x_k \in \mathbb{S}^{d-1}$.
In addition, by rewriting \eqref{eq:Qdef} as \begin{equation}\label{eq:Qexpansion2} Q_{k,l}^{d}( x_1, \ldots, x_k) = \det(W)^l \sum_{m=0}^{ \lfloor \frac{l}{2} \rfloor} a_{l-2m} \langle y_1, y_2 \rangle^{l - 2m} \|y_1\|^{2m} \|y_2\|^{2m}, \end{equation} for $k \leq d$, and \begin{equation}\label{eq:Qexpansion2, k=d+1} Q_{d+1,1}^d(x_1, \ldots, x_k) = \det(W) \langle y_1, y_2 \rangle, \end{equation} we see that $Q_{k,l}^d$ is zero if $x_3, \ldots, x_k$ are linearly dependent. If $k = d+1$ and $l =1$, \eqref{eq:Qexpansion2, k=d+1} shows us that for any fixed $x_3, \ldots, x_{d+1} \in \mathbb{S}^{d-1}$ and $\mu \in \mathcal{M}(\mathbb{S}^{d-1})$, identifying $y_1$ and $y_2$ with scalars in the one-dimensional space $X^{\perp}$, \begin{equation}\label{eq:I_Q k = d+1} I_{U_{Q_{d+1,l}^d}^{x_3, \ldots, x_{d+1}}}(\mu) = \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} \det(W) y_1 y_2\, d\mu(x_1) d\mu( x_2) = \det(W) \Big( \int_{\mathbb{S}^{d-1}} y_1 d \mu(x_1) \Big)^2 \geq 0, \end{equation} so $Q_{d+1, 1}^d$ is $(d+1)$-positive definite. Note that since $W$ is the Gram matrix of $x_3, \ldots, x_k$, its determinant is nonnegative. We now want to show that this energy is zero when $\mu = \sigma$. We first note that this occurs if $x_3, \ldots, x_k$ are linearly dependent, so let us assume that $x_3,\ldots, x_k$ are linearly independent, and set $f( y_1^{\perp}, y_1) = f(x_1) = y_1$. Denoting the unit ball in $\mathbb{R}^n$ by $\mathbb{B}^n$, we have, by Lemma A.5.4 of \cite{DX}, that \begin{align*} \int_{\mathbb{S}^{d-1}} f(x_1)\, d\sigma(x_1) & = \int_{\mathbb{B}^{d-1}} \frac{f(y_1^{\perp}, \sqrt{ 1 - \| y_1^{\perp}\|^2}) + f(y_1^{\perp}, -\sqrt{ 1 - \| y_1^{\perp}\|^2})}{\sqrt{ 1 - \| y_1^{\perp}\|^2}} d y_1^{\perp} \\ & = \int_{\mathbb{B}^{d-1}} \frac{\sqrt{ 1 - \| y_1^{\perp}\|^2} + (-\sqrt{ 1 - \| y_1^{\perp}\|^2})}{\sqrt{ 1 - \| y_1^{\perp}\|^2}} d y_1^{\perp} = 0.
\end{align*} It now follows from \eqref{eq:I_Q k = d+1} that $I_{Q_{d+1,1}^d}(\sigma) = 0$, so $\sigma$ minimizes $I_{Q_{d+1,1}^d}$. When $k \leq d$, we need a bit more machinery. Let $Y_1,\ldots, Y_{\operatorname{dim}(\mathcal{H}_l^{d-k+1})}$ be an orthonormal basis of $\mathcal{H}_l^{d-k+1}$, the space of spherical harmonics of degree $l$ on $\mathbb{S}^{d-k+1}$. Then the addition formula (see \cite[Theorem 1.2.6]{DX}) tells us that \begin{equation}\label{eq:addition formula for Q} Q_{k,l}^{d}( x_1, \ldots, x_k) = \det(W)^l \| y_1 \|^{l} \| y_2 \|^{l} \frac{1}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} Y_j( z_1) Y_j(z_2). \end{equation} Thus for any fixed $x_3, \ldots, x_k$ and $\mu \in \mathcal{M}(\mathbb{S}^{d-1})$, \begin{align*} I_{U_{Q_{k,l}^d}^{x_3, \ldots, x_k}}(\mu) & = \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} Q_{k,l}^d( x_1, x_2, \ldots, x_k)\, d\mu(x_1) d\mu(x_2) \\ & = \frac{\det(W)^l}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} \| y_1 \|^{l} \| y_2 \|^{l} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} Y_j( z_1) Y_j(z_2) \,d\mu(x_1) d\mu(x_2) \\ & = \frac{\det(W)^l}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \Bigg( \int_{\mathbb{S}^{d-1}} Y_j( z_1) \| y_1 \|^{l} d \mu(x_1) \Bigg)^2 \geq 0. \end{align*} Note that since $W$ is the Gram matrix of $x_3, \ldots, x_k$, its determinant is nonnegative. Thus, $Q_{k,l}^d$ is indeed $k$-positive definite, so for any $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$, $I_{Q_{k,l}^d}(\mu) \geq 0$. We now show that $\sigma$ minimizes the energy $I_{Q_{k,l}^d}$.
For any fixed $x_3, \ldots, x_k \in \mathbb{S}^{d-1}$, we see that $$I_{U_{Q_{k,l}^d}^{x_3, \ldots, x_k}}(\sigma) = \frac{\det(W)^l}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \Bigg( \int_{\mathbb{S}^{d-1}} Y_j( z_1) \| y_1 \|^{l} d \sigma(x_1) \Bigg)^2.$$ If $x_3, \ldots, x_k$ are linearly dependent, we know this is zero. Assume that $x_3, \ldots, x_k$ are linearly independent, and for $1 \leq j \leq \operatorname{dim}(\mathcal{H}_l^{d-k+1})$, let $$f_j(x_1) = f_j( y_1^{\perp}, y_1) = Y_j( z_1) \| y_1 \|^{l} .$$ By Lemma A.5.4 of \cite{DX}, we have that \begin{align*} \int_{\mathbb{S}^{d-1}} f_j(x_1) d \sigma (x_1) & = \int_{\mathbb{B}^{k-2}} ( 1 - \|y_1^{\perp} \|^2 )^{\frac{d-k}{2}} \left[ \int_{\mathbb{S}^{d-k+1}} f_j( y_1^{\perp} , \sqrt{1 - \| y_1^{\perp} \|^2} \xi ) d \sigma (\xi) \right] dy_1^{\perp} \\ & = \int_{\mathbb{B}^{k-2}} ( 1 - \| y_1^{\perp} \|^2 )^{\frac{d-k}{2}} \left[ \int_{\mathbb{S}^{d-k+1}} Y_j(\xi) (1 - \| y_1^{\perp} \|^2)^{\frac{l}{2}} d \sigma (\xi) \right] dy_1^{\perp} \\ & = 0. \end{align*} Thus, for any fixed $x_3, \ldots, x_k \in \mathbb{S}^{d-1}$, $$ \int_{ \mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} Q_{k,l}^d(x_1, x_2, \ldots, x_k) d \sigma(x_1) d \sigma(x_2) = 0,$$ meaning that $$ I_{Q_{k,l}^d}(\sigma) = 0,$$ so $\sigma$ is indeed a minimizer of $I_{Q_{k,l}^d}$. \end{proof} We note here that $Q_{k,l}^d$ is zero if $x_3, \ldots, x_k$ are linearly dependent or if $x_1$ or $x_2$ is in $X$. For $k=3$, these kernels are essentially \eqref{eq:BachocValQKernel}, introduced by Bachoc and Vallentin \cite{BV}. In this instance, note that $\det(W) =1$ is constant. The general case was covered by Musin \cite{Mu}, who used the kernels to formulate general SDP bounds for spherical codes and, with some additional machinery, generalized the result of Schoenberg \cite{Sch} to characterize all positive definite kernels invariant under the stabilizer of $X$.
However, in that setting, it was assumed that $x_3, \ldots, x_k$ were fixed and linearly independent, so no factor such as $\det(W)^l$ was included and the functions were only really functions of two variables. Recently, similar kernels with $k\geq 4$ were used for finding new bounds on the sizes of equiangular sets of lines in \cite{DMOV}, where the kernels were constructed in a way that assumed that the distance set of the last $k-2$ inputs had finitely many values, making them multivariate functions, but not allowing the last $k-2$ inputs to take arbitrary values on the sphere. The authors of \cite{DMOV} even discuss the difficulty of such a task. Our inclusion of $\det(W)$ as a factor of $Q$ allows us to address this issue, though this alone would not allow us to construct functions which are not constant when $x_3, \ldots, x_k$ are linearly dependent, or more complicated positive definite functions, such as semidefinite combinations of the functions \eqref{eq:SemDefProgYs}. We discuss how to construct such functions later in this section. For the main result of Section \ref{sec:A^2 on Sphere}, it is sufficient to use the case $l=1$, so we formulate the relevant statement as a separate lemma. \begin{lemma}\label{lem:Musin-1} For any set of fixed vectors $x_3, \ldots, x_k\in\mathbb{S}^{d-1}$, the kernel $$Q_{k,1}^d (x_1, \ldots, x_k) = \det(W) \langle y_1, y_2 \rangle = \det(W)u_{1,2} - w_1^T \operatorname{adj}(W) w_2 $$ is $k$-positive definite, and $I_{Q_{k,1}^d}$ is minimized by $\sigma$. \end{lemma} For small values of $k$, these kernels take the form: \begin{align*} Q_{3,1}^d &= u_{1,2} - u_{1,3}u_{2,3},\\ Q_{4,1}^d &= u_{1,2} -u_{1,2}u_{3,4}^2 - u_{1,3}u_{2,3} - u_{1,4}u_{2,4} + u_{1,3}u_{2,4}u_{3,4} + u_{1,4}u_{2,3}u_{3,4}. \end{align*} We can use these kernels $Q_{k,l}^d$ to construct various other kernels which are $k$-positive definite and whose energies are minimized by $\sigma$.
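The explicit formulas for $Q_{3,1}^d$ and $Q_{4,1}^d$ can be checked against the geometric definition $\det(W)\langle y_1, y_2\rangle$, computing $y_1, y_2$ by explicit orthogonal projection. A minimal numerical sketch (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4

def unit(v):
    return v / np.linalg.norm(v)

def geometric_Q(xs):
    # det(W) <y1, y2>, with y_i the projection of x_i onto span{x3,...,xk}^perp
    X = np.array(xs[2:])                    # rows x3, ..., xk
    W = X @ X.T                             # Gram matrix of x3, ..., xk
    P = X.T @ np.linalg.inv(W) @ X          # orthogonal projector onto span X
    y1, y2 = xs[0] - P @ xs[0], xs[1] - P @ xs[1]
    return np.linalg.det(W) * (y1 @ y2)

err = 0.0
for _ in range(50):
    xs = [unit(rng.standard_normal(d)) for _ in range(4)]
    u = {(i, j): xs[i - 1] @ xs[j - 1] for i in range(1, 5) for j in range(1, 5)}
    q3 = u[1, 2] - u[1, 3] * u[2, 3]
    q4 = (u[1, 2] - u[1, 2] * u[3, 4] ** 2 - u[1, 3] * u[2, 3]
          - u[1, 4] * u[2, 4] + u[1, 3] * u[2, 4] * u[3, 4]
          + u[1, 4] * u[2, 3] * u[3, 4])
    err = max(err, abs(geometric_Q(xs[:3]) - q3), abs(geometric_Q(xs) - q4))
```

For $k=4$ this uses $\operatorname{adj}(W) = \det(W)\, W^{-1}$ implicitly: expanding $\det(W)\big(u_{1,2} - w_1^T W^{-1} w_2\big)$ for the $2\times 2$ Gram matrix of $x_3, x_4$ recovers exactly the six-term polynomial above.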
Similar objects were studied in \cite{KV}. \begin{corollary}\label{cor:MoreGenPosDef} Let $G: (\mathbb{S}^{d-1} )^{k-1} \rightarrow \mathbb{R}$ be a continuous function such that, for $\eta_1, \eta_2, \ldots, \eta_{k-1} \in \mathbb{S}^{d-1}$, $G(\eta_1, \ldots, \eta_{k-1})$ depends only on the inner products $\langle \eta_i, \eta_j \rangle$, $1 \leq i < j \leq k-1$. Then the kernel \begin{equation}\label{eq:MorGenPosDef} T( x_1, x_2, \ldots, x_k) = G(x_1, x_3, \ldots, x_k) G( x_2, x_3,\ldots, x_k) Q_{k,l}^d( x_1, x_2, \ldots, x_k ) \end{equation} is rotationally-invariant and $k$-positive definite. If $l \geq 1$, $T$ satisfies \begin{equation} \inf_{\mu \in \mathcal{P}(\mathbb{S}^{d-1})} I_{T}(\mu) = I_{T}(\sigma) = 0. \end{equation} \end{corollary} From the way we defined $T$, we can see that $T$ is indeed continuous and symmetric in the first two variables. \begin{proof} We will use the same notation as in the proof of Theorem \ref{thm:Qdef}. We see immediately that the rotational-invariance of $T$ follows from the rotational-invariance of $Q_{k,l}^d$ and the inner products $\langle x_i, x_j \rangle$. We also have that for fixed $x_3, \ldots, x_k$, $G(x_i, x_3, \ldots, x_k)$ depends only on $y_i^{\perp}$, the orthogonal projection of $x_i$ onto $X$.
For $k \leq d$, that $T$ is $k$-positive definite can be seen from the fact that for fixed $x_3, \ldots, x_k$ and $\mu \in \mathcal{M}(\mathbb{S}^{d-1})$, \eqref{eq:addition formula for Q} gives us $$I_{U_T^{x_3, \ldots, x_k}}(\mu) = \det(W)^l \sum_{j=1}^{\dim(\mathcal{H}_l^{d-k+1})} \Bigg( \int_{\mathbb{S}^{d-1}} Y_j( z_1) \| y_1 \|^{l} G( x_1, x_3, \ldots, x_k) d \mu(x_1) \Bigg)^2 \geq 0.$$ If $ x_3, \ldots, x_k$ are linearly dependent, then $T = 0$, so assume that $x_3, \ldots, x_k$ are linearly independent, and for $1 \leq j \leq \dim(\mathcal{H}_l^{d-k+1})$, let $$f_j(x_1) = f_j( y_1^{\perp}, y_1) = Y_j( z_1) \| y_1 \|^{l} G( x_1, x_3, \ldots, x_k).$$ By Lemma A.5.4 of \cite{DX}, and since $G$ does not depend on $y_1$, we have \begin{align*} \int_{\mathbb{S}^{d-1}} f_j(x_1) d \sigma (x_1) & = \int_{\mathbb{B}^{k-2}} ( 1 - \| y_1^{\perp} \|^2 )^{\frac{d-k}{2}} \left[ \int_{\mathbb{S}^{d-k+1}} f_j( y_1^{\perp} , \sqrt{1 - \| y_1^{\perp} \|^2} \xi ) d \sigma (\xi) \right] dy_1^{\perp} \\ & = \int_{\mathbb{B}^{k-2}} ( 1 - \| y_1^{\perp} \|^2 )^{\frac{d-k}{2}} (1 - \| y_1^{\perp} \|^2)^{\frac{l}{2}} G(x_1, x_3, \ldots, x_k) \left[ \int_{\mathbb{S}^{d-k+1}} Y_j(\xi) d \sigma (\xi) \right] d y_1^{\perp} \\ & = 0. \end{align*} Thus, for any fixed $x_3, \ldots, x_k \in \mathbb{S}^{d-1}$, $$ \int_{ \mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} T(x_1, x_2, \ldots, x_k) d \sigma(x_1) d \sigma(x_2) = 0,$$ meaning that $$ I_{T}(\sigma) = 0,$$ so $\sigma$ is indeed a minimizer of $I_{T}$. The case of $k = d+1$ is similar. \end{proof} \begin{lemma}\label{lem:l=0PosDef} Let $G: \big( \mathbb{S}^{d-1} \big)^{k-1} \rightarrow \mathbb{R}$ be continuous, depend only on the inner products of its inputs, and satisfy $\int_{\mathbb{S}^{d-1}} G( \eta_1, \ldots, \eta_{k-1}) d \sigma(\eta_1) = 0$.
Then the kernel \begin{equation}\label{eq:l=0PosDef} H(x_1, x_2, \ldots, x_k) = G(x_1, x_3, \ldots, x_k) G( x_2, x_3, \ldots, x_k) \end{equation} is rotationally-invariant, $k$-positive definite, and satisfies \begin{equation} \inf_{\mu \in \mathcal{P}(\mathbb{S}^{d-1})} I_H(\mu) = I_H(\sigma) = 0. \end{equation} \end{lemma} The formulation of $T$ and $H$ in the corollary and lemma, and the fact that the sum of $k$-positive definite kernels minimized by $\sigma$ is a $k$-positive definite kernel minimized by $\sigma$, allows us to recover Theorem \ref{thm:SemiDefMin}. In \cite{BV}, the authors created matrices $Y_l^d$ of polynomials, and then took the trace of the product of a positive semidefinite matrix and a $Y_l^d$. When $l = 0$, this leads to a sum of kernels of the form \eqref{eq:l=0PosDef}, and for $l > 0$ it leads to a sum of kernels of the form \eqref{eq:MorGenPosDef}. By combining Lemmas \ref{lem:Schur's Lemma}, \ref{lem:Schur's Lemma2}, \ref{lem:kPosDef to nPosDef}, and \ref{lem:l=0PosDef} with Corollary \ref{cor:MoreGenPosDef}, we can now construct a wide range of rotationally-invariant $k$-positive definite kernels whose energies are minimized by $\sigma$ from the kernels $Q_{n,l}^d$ for $n < k$. In particular, we can construct kernels which are not constant when $x_3, \ldots, x_n$ are linearly dependent, unlike $Q_{k,l}^d$. \section{Maximizing the integral of $A^2$ on the sphere}\label{sec:A^2 on Sphere} We now turn to the last main results of the paper. As an analogue of the result by Cahill and Casazza (Theorem \ref{thm:volume-gen}), we solve the optimization problem for $A^2$, the square of the $(k-1)$-dimensional volume of the simplex, for an arbitrary number of inputs $3\le k \le d+1$. We have already proved some partial cases of the theorem below: Theorem \ref{thm: triangle area squared max} (for the case $k=3$ and $d\ge 2$, i.e.
the area of the triangle) and Theorem \ref{thm:A^2maxGen} (for full-dimensional simplices, i.e. $k=d+1\ge 3$). We would like to point out that the latter theorem applies to measures on $\mathbb R^d$. The following theorem, while restricted to the sphere, covers the whole range $3\le k \le d+1$. \begin{theorem}\label{thm:area-gen} Let $d\ge 2$ and $3\leq k\leq d+1$. Let $A(x_1, \ldots, x_k)$ be the $(k-1)$-dimensional Euclidean volume of a simplex with vertices $x_1, \ldots, x_k \in \mathbb{S}^{d-1}$. Then the set of maximizing measures of $I_{A^2}$ in $\mathcal{P}(\mathbb S^{d-1})$ is the set of balanced isotropic measures on $\mathbb S^{d-1}$. In particular, the uniform surface measure $\sigma$ maximizes $I_{A^2}$. The value of the maximum is $\frac {k}{(k-1)! d^{k-1}}\binom{d}{k-1}$. \end{theorem} \begin{proof} Let $U$ be the Gram matrix of the vectors $\{x_1,\ldots,x_k\}\subset \mathbb S^{d-1}$ with entries $u_{i,j}$, i.e. $\langle x_i,x_j \rangle = u_{i,j}$ for $1\leq i,j\leq k$. For $I,J\subseteq \{1,\ldots,k\}$, we denote by $U_{I,J}$ the submatrix of $U$ obtained by deleting the rows with numbers from $I$ and the columns with numbers from $J$. By Lemma \ref{lem:A-formula}, whose proof is postponed to the Appendix, $$((k-1)!)^2 A^2 = - \det \begin{pmatrix}U&\mathbf{1}\\ \mathbf{1}^T&0\end{pmatrix}.$$ We expand the determinant along the last row and the last column: for each $i,j \in \{ 1,\ldots, k\}$, we take the element in the $i^{th}$ row of the last column and the element in the $j^{th}$ column of the last row. We treat the cases $i=j$ and $i\neq j$ separately.
\begin{align*} ((k-1)!)^2 A^2 & = - \sum\limits_{i=1}^k (-1)^{k+1+i+k+i} \det(U_{\{i\},\{i\}}) - \sum\limits_{i\neq j} (-1)^{k + 1 + i+ k +j} \det(U_{\{i\},\{j\}})\\ &=\sum\limits_{i=1}^k \det(U_{\{i\},\{i\}}) + \sum\limits_{i\neq j} (-1)^{i+j} \det(U_{\{i\},\{j\}}) \end{align*} For each $i \in \{ 1, \ldots, k \}$, $\det(U_{\{i\},\{i\}})$ is the $(k-1)$-point kernel $V^2 (x_1,\dots, x_{i-1},x_{i+1},\dots,x_k)$ from Theorem \ref{thm:volume-gen}. Consequently, Theorem \ref{thm:volume-gen} implies that the energy integral for the kernel defined by the first sum is not greater than $k \frac {(k-1)!} {d^{k-1}} \binom{d}{k-1}$. It is now sufficient to show that the contribution of the second sum is nonpositive. Let us fix $i,j \in \{1, \ldots, k\}$, with $i \neq j$, and denote $U_{\{i,j\},\{i,j\}}$ by $U'$. We expand $\det(U_{\{i\},\{j\}})$ along row $j$ of $U$ and column $i$ of $U$, taking an element $u_{j,m}$, $m\neq j$, from the row and $u_{n,i}$, $n\neq i$, from the column, respectively. If $m = i$ and $n = j$, then we take $u_{j,i}$ both for the row and the column expansion. The contribution of this case to $\det(U_{\{i \}, \{ j\}})$ is then $(-1)^{i+j -1} u_{j,i} \det(U')$. Let us now consider the case where $m\neq i$ and $n\neq j$. Without loss of generality, let us assume that $i < j$ (the case of $i > j$ is similar). Let $n'$ be the position of row $n$ of $U$ after rows $i$ and $j$ are deleted, i.e. $n'=n$ if $n<i$, $n'=n-1$ if $i < n < j$, and $n' = n-2$ if $j < n$. Similarly we define $m'=m$ if $m<i$, $m'=m-1$ if $i < m < j$, and $m' = m-2$ if $j < m$. This guarantees that $U_{\{i,j,n\},\{i,j,m\} } = U'_{ \{n'\},\{m'\}}$.
A careful examination of the signs shows that the contribution of this expansion to the sum is then \begin{align*} (-1)^{i +n' + j + m'} u_{n,i} u_{j, m} \det(U'_{\{n'\},\{m'\}}) &= (-1)^{p} u_{n,i} u_{j, m} \det(U'_{\{n'\},\{m'\}}) \\ &= (-1)^{ p} u_{n,i} u_{j, m} \det(U_{\{i, j, n\},\{i, j, m\}}) \end{align*} where $$p = i + n + j + m + \frac{\operatorname{sgn}(n-i) + \operatorname{sgn}(n-j) + \operatorname{sgn}(m-i) + \operatorname{sgn}(m-j)}{2} - 2.$$ Overall, we have \begin{align*} (-1)^{i+j}\det(U_{\{i\},\{j\}}) & = (-1)^{2i + 2j -1}u_{j,i} \det(U') + \sum_{\substack{1\leq m,n\leq k \\ m,n\notin \{i,j\}}} (-1)^{2i + 2j + m' + n'} u_{n,i} u_{j,m}\det(U'_{\{n'\},\{m'\}}) \\ & = -u_{j,i} \det(U') + \sum_{\substack{1\leq m,n\leq k\\ m,n\neq i,j}} (-1)^{m'+n'}u_{n,i} u_{j,m} \det(U'_{\{n'\},\{m'\}}) \\ & = -(u_{j,i} \det(U') - {u_i'}^T \operatorname{adj}(U') u_j')\\ & = - Q_{k,1}^d( x_i, x_j, x_{l_1}, \ldots, x_{l_{k-2}}), \end{align*} where $u_i'=(u_{1,i},\ldots,u_{k,i})^T$ with the first index running through all $n\neq i,j$, $u_j'=(u_{j,1},\ldots,u_{j,k})^T$ with the second index running through all $m\neq i,j$, and $\{l_1, \ldots, l_{k-2} \} = \{ 1, \ldots, k \} \setminus \{i, j \}$. For the last identity above, see Lemma \ref{lem:Musin-1}. From Theorem \ref{thm:Qdef} (or Lemma \ref{lem:Musin-1} specifically for this case), we know that $-Q_{k,1}^d$ is $k$-negative definite, so its contribution to $I_{A^2}$, and therefore the contribution of $$K (x_1, \ldots, x_k) = \sum_{i \neq j} (-1)^{i+j} \det(U_{\{i \}, \{j\}}) = - \frac{\binom{k}{2}}{k!} \sum_{\pi} Q_{k,1}^d( x_{\pi(1)}, \ldots, x_{\pi(k)}),$$ to $I_{A^2}$ is nonpositive, so $I_{A^2} \leq \frac{k}{(k-1)! d^{k-1}} \binom{d}{k-1}$.
It remains to determine which measures maximize $I_{A^2}$. Due to Theorem \ref{thm:volume-gen}, any maximizing measure $\mu$ must be isotropic. To find the measures on which the second part vanishes, we return to Lemma \ref{lem:Musin-1}. The necessary and sufficient condition for the vanishing of the energy of $Q_{k,1}^d(x_1, \ldots, x_k)$ is the following: for any linearly independent $x_3, \ldots, x_k$ from the support of $\mu$, with $X$ the linear space generated by them, the projection of $\mu$ onto $X^{\perp}$ must be balanced. In other words, the center of mass of $\mu$ must belong to $X$. An isotropic measure must be full-dimensional, so there must exist $d$ linearly independent vectors in $\operatorname{supp}(\mu)$. The intersection of all linear spaces generated by any $k-2$ of these $d$ vectors is only the origin, so the center of mass of $\mu$ must be at the origin. Clearly, balanced isotropic measures attain this maximum. \end{proof} \begin{remark} In the last part of the proof we could also have shown that $\sigma$ is a maximizer, then noted that the potential is a polynomial of degree at most two in any of its variables, with some parts being of degree one, meaning that balanced isotropic measures are the maximizers, since they yield the same value of energy as $\sigma$ (this is a direct analogy to spherical $2$-designs). \end{remark} We note that in the case $k = 2$, $A(x,y)$ is simply the Euclidean distance between $x$ and $y$. If we were to split $\| x-y\|^2$ into a linear part and a ``volume'' part as in the proof, then the volume is simply the distance from the origin to a point on the circle, which is always $1$. Thus, in that particular case, only the linear part matters, so the maximizers of $I_{A^2}$ on the sphere are all balanced measures, as shown in \cite{Bj}.
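The extremal value in Theorem \ref{thm:area-gen} can be sanity-checked on a concrete balanced isotropic measure. The sketch below (an illustration of ours, not part of the paper) takes the uniform measure on the cross-polytope vertices $\{\pm e_1, \ldots, \pm e_d\}$, computes $A^2$ through the bordered Gram determinant of Lemma \ref{lem:A-formula}, and compares the resulting energy with $\frac{k}{(k-1)!\,d^{k-1}}\binom{d}{k-1}$ for $d = k = 3$:

```python
import itertools
import math
import numpy as np

# The uniform measure on {+-e_1, ..., +-e_d} is balanced and isotropic, so its
# A^2-energy should equal k / ((k-1)! d^{k-1}) * C(d, k-1).  Here A^2 is
# computed via the bordered Gram determinant of Lemma A-formula.
def a2(points):
    k = len(points)
    U = np.array([[p @ q for q in points] for p in points], dtype=float)
    M = np.ones((k + 1, k + 1))   # border of ones
    M[:k, :k] = U
    M[k, k] = 0.0
    return -np.linalg.det(M) / math.factorial(k - 1) ** 2

d, k = 3, 3
verts = [s * e for e in np.eye(d) for s in (1.0, -1.0)]
energy = np.mean([a2(t) for t in itertools.product(verts, repeat=k)])
target = k / (math.factorial(k - 1) * d ** (k - 1)) * math.comb(d, k - 1)
print(round(energy, 10), round(target, 10))  # both 0.5
```

For $d = k = 3$ this can also be checked by hand: among the $6^3$ ordered triples of cross-polytope vertices, only triples of distinct points contribute, with $A^2 = 1$ for triples containing an antipodal pair and $A^2 = 3/4$ for three mutually orthogonal directions, averaging to $1/2$.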
The case of $k = 3$ was handled by Theorem \ref{thm: triangle area squared max}, where in this case $Q_{k,1}^d = Y_{1,0,0}$ and the $(k-1)$-volume squared function is $V^2(x,y) = 1- \langle x, y \rangle^2$, as discussed in Section \ref{sec:k=2}. Finally, we note that despite having this result for $A^2$ on the sphere, Corollary \ref{cor:A^s for s>2} does not hold if $A$ has $k < d+1$ inputs. As $s \rightarrow \infty$, we should expect the maximizer of $I_{A^s}$ to be supported on some set on which $A$ takes only its minimum (zero) and maximum values. This will be the set of vertices of a regular $(k-1)$-simplex on some $(k-2)$-dimensional subsphere. Such a measure is not isotropic on $\mathbb{S}^{d-1}$ for $k < d+1$, and so we can also not use the same proof method to determine maximizers. We do, however, conjecture that the maximizers of $I_{A^s}$ are discrete when $s>2$, for all $3\le k \le d+1$. \section{Acknowledgments} We would like to thank Danylo Radchenko and David de Laat for fruitful discussions and useful suggestions. All of the authors express gratitude to ICERM for hospitality and support during the Collaborate{@}ICERM program in 2021. D.~Bilyk has been supported by the NSF grant DMS-2054606 and Simons Collaboration Grant 712810. D.~Ferizovi\'{c} thankfully acknowledges support by the Methusalem grant of the Flemish Government. A.~Glazyrin was supported by the NSF grant DMS-2054536. R.W.~Matzke was supported by the Doctoral Dissertation Fellowship of the University of Minnesota, the Austrian Science Fund FWF project F5503, part of the Special Research Program (SFB) ``Quasi-Monte Carlo Methods: Theory and Applications'', and NSF Postdoctoral Fellowship Grant 2202877. O.~Vlasiuk was supported by an AMS-Simons Travel Grant. \section{Appendix: Expressing \texorpdfstring{$A^2$}{A2} through Gram determinants}\label{sec:Appen A^2} Let $U$ be the Gram matrix of the vectors $\{x_1,\ldots,x_k\}\subset \mathbb S^{d-1}$ with entries $u_{i,j}$, i.e.
$\langle x_i,x_j \rangle = u_{i,j}$ for $1\leq i,j\leq k$. The following lemma provides a linear-algebraic description of $A^2$. \begin{lemma}\label{lem:A-formula} $$A^2 (x_1,\ldots,x_k) = -\frac 1 {((k-1)!)^2} \det \begin{pmatrix}U&\mathbf{1}\\ \mathbf{1}^T&0\end{pmatrix},$$ where $\mathbf{1}$ is the column vector of $k$ ones. \end{lemma} \begin{proof} $A^2$ can be found from the Gram matrix of the vectors $x_2-x_1, \ldots, x_k-x_1$: \begin{align*} A^2 (x_1,\ldots,x_k) &= \frac 1 {((k-1)!)^2} \det\begin{pmatrix}\langle x_2-x_1, x_2-x_1\rangle&\ldots&\langle x_2-x_1, x_k-x_1\rangle \\ \vdots & \ddots& \vdots \\ \langle x_k-x_1, x_2-x_1\rangle & \ldots & \langle x_k-x_1, x_k-x_1\rangle\end{pmatrix} \\ & = \frac 1 {((k-1)!)^2} \det\begin{pmatrix}2-2u_{1,2}&\ldots&1+u_{2,k} - u_{1,2} - u_{1,k} \\ \vdots & \ddots& \vdots \\ 1+u_{k,2} - u_{1,k} - u_{1,2}& \ldots & 2-2u_{1,k}\end{pmatrix} \\ & = \frac {-1} {((k-1)!)^2} \det\begin{pmatrix}0 & 0 & \ldots & 0 &1\\0 & 2-2u_{1,2}&\ldots&1+u_{2,k} - u_{1,2} - u_{1,k} & 0\\ \vdots & \vdots & \ddots& \vdots &\vdots\\0& 1+u_{k,2} - u_{1,k} - u_{1,2}& \ldots & 2-2u_{1,k}&0\\ 1 & 0 & \ldots & 0 & 0\end{pmatrix}\\ & = \frac {-1} {((k-1)!)^2} \det\begin{pmatrix}0 & u_{1,2} & \ldots & u_{1,k} &1\\u_{1,2} & 2-2u_{1,2}&\ldots&1+u_{2,k} - u_{1,2} - u_{1,k} & 0\\ \vdots & \vdots & \ddots& \vdots &\vdots\\u_{1,k}& 1+u_{k,2} - u_{1,k} - u_{1,2}& \ldots & 2-2u_{1,k}&0\\ 1 & 0 & \ldots & 0 & 0\end{pmatrix}. \end{align*} Note that in the third equality, we created a $(k+1) \times (k+1)$ matrix whose determinant is the negative of our original matrix, due to the only nonzero entries in the last row and column being the ones in the upper right and lower left corners.
This also means that inserting the $u_{1,j}$'s into the first row and column doesn't affect the determinant. Now we add the first row and column to all rows and columns except for the last ones. $$A^2=-\frac 1 {((k-1)!)^2} \det\begin{pmatrix}0 & u_{1,2} & \ldots & u_{1,k} &1\\u_{1,2} & 2&\ldots&1+u_{2,k} & 1\\ \vdots & \vdots & \ddots& \vdots &\vdots\\u_{1,k}& 1+u_{k,2}& \ldots & 2&1\\ 1 & 1 & \ldots & 1 & 0\end{pmatrix}.$$ We subtract the last column from all columns except for the first one, then add the bottom row to the top row, and see that \begin{align*} A^2 & = -\frac 1 {((k-1)!)^2} \det\begin{pmatrix}0 & u_{1,2}-1 & \ldots & u_{1,k}-1 &1\\u_{1,2} & 1&\ldots&u_{2,k} & 1\\ \vdots & \vdots & \ddots& \vdots &\vdots\\u_{1,k}& u_{k,2}& \ldots & 1&1\\ 1 & 1 & \ldots & 1 & 0\end{pmatrix} \\ & = -\frac 1 {((k-1)!)^2} \det\begin{pmatrix}1 & u_{1,2}& \ldots & u_{1,k}&1\\u_{1,2} & 1&\ldots&u_{2,k} & 1\\ \vdots & \vdots & \ddots& \vdots &\vdots\\u_{1,k}& u_{k,2}& \ldots & 1&1\\ 1 & 1 & \ldots & 1 & 0\end{pmatrix}= -\frac 1 {((k-1)!)^2} \det\begin{pmatrix}U&\mathbf{1}\\ \mathbf{1}^T&0\end{pmatrix}. \end{align*} \end{proof} \begin{thebibliography}{10} \bibitem[A]{A} F. Affentranger, \emph{The convex hull of random points with spherically symmetric distribution}. Rendiconti del Seminario Matematico Universit\`{a} e Politecnico di Torino, \textbf{49}(3), 359--383 (1991). \bibitem[AH]{AH} K. Atkinson, W. Han, \emph{Spherical harmonics and approximations on the unit sphere: an introduction}. Springer Science \& Business Media, Vol. 2044 (2012). \bibitem[BV]{BV} C. Bachoc, F. Vallentin, \emph{New upper bounds for kissing numbers from semidefinite programming}. Journal of the American Mathematical Society, \textbf{21}(3), 909--924 (2008). \bibitem[Ba]{Ball2} K. Ball, \emph{Ellipsoids of maximal volume in convex bodies}. Geom.
Dedicata, \textbf{41}, 241--250 (1992). \bibitem[B]{B} A. Barg, \emph{Stolarsky's invariance principle for finite metric spaces}. Mathematika, \textbf{67}(1), 158--186 (2021). \bibitem[BS]{BS} A. Barg, M. Skriganov, \emph{Bounds for discrepancies in the Hamming space}. Journal of Complexity, \textbf{65}, 101552 (2021). \bibitem[BBS]{BBS} A. Barg, P. Boyvalenkov, M. Stoyanova, \emph{Bounds for the sum of distances of spherical sets of small size}. Discrete Mathematics, \textbf{346}(5), 113346 (2023). \bibitem[BF]{BF} J. Benedetto, M. Fickus, \emph{Finite normalized tight frames}. Advances in Computational Mathematics, \textbf{18}(2-4), 357--385 (2003). \bibitem[BD]{BD} D. Bilyk, F. Dai, \emph{Geodesic distance Riesz energy on the sphere}. Transactions of the AMS, \textbf{372}, 3141--3166 (2019). \bibitem[BDM]{BDM} D. Bilyk, F. Dai, R. Matzke, \emph{Stolarsky principle and energy optimization on the sphere}. Constructive Approximation, \textbf{48}(1), 31--60 (2018). \bibitem[BFGMPV1]{BFGMPV1} D. Bilyk, D. Ferizovi\'{c}, A. Glazyrin, R. Matzke, J. Park, O. Vlasiuk, \emph{Potential theory with multivariate kernels}. Mathematische Zeitschrift, \textbf{301}, 2907--2935 (2022). \bibitem[BFGMPV2]{BFGMPV2} D. Bilyk, D. Ferizovi\'{c}, A. Glazyrin, R. Matzke, J. Park, O. Vlasiuk, \emph{Optimizers of three-point energies and nearly orthogonal sets}. Preprint: https://arxiv.org/pdf/2303.12283. \bibitem[BGMPV]{BGMPV1} D. Bilyk, A. Glazyrin, R. Matzke, J. Park, O. Vlasiuk, \emph{Optimal measures for $p$-frame energies on spheres}. Revista Matem\'{a}tica Iberoamericana, \textbf{38}(4), 1129--1160 (2022). \bibitem[Bj]{Bj} G. Bj\"{o}rck, \emph{Distributions of positive mass, which maximize a certain generalized energy integral}. Arkiv f\"{o}r Matematik, \textbf{3}, 255--269 (1956). \bibitem[BHS]{BHS} S.V. Borodachov, D.P. Hardin, E.B. Saff, \emph{Discrete Energy on Rectifiable Sets}.
Springer Monographs in Mathematics (2019). \bibitem[CC]{CC} J. Cahill, P.G. Casazza, \emph{Optimal Parseval frames: total coherence and total volume}. Linear and Multilinear Algebra, 1--27 (2022). \bibitem[CHS]{CHS} X. Chen, D.P. Hardin, E.B. Saff, \emph{On the search for tight frames of low coherence}. Journal of Fourier Analysis and Applications, \textbf{27}, 2 (2021). \bibitem[CK]{CK} H. Cohn, A. Kumar, \emph{Universally optimal distribution of points on spheres}. Journal of the AMS, \textbf{20}(1), 99--148 (2006). \bibitem[CW]{CW} H. Cohn, J. Woo, \emph{Three-point bounds for energy minimization}. Journal of the AMS, \textbf{25}(4), 929--958 (2012). \bibitem[DX]{DX} F. Dai, Y. Xu, \emph{Approximation Theory and Harmonic Analysis on Spheres and Balls}. Springer Monographs in Mathematics, Springer, New York, NY (2013). \bibitem[DMOV]{DMOV} D. de Laat, F.C. Machado, F.M. de Oliveira Filho, F. Vallentin, \emph{$k$-point semidefinite programming bounds for equiangular lines}. Mathematical Programming, \textbf{194}(1), 533--567 (2022). \bibitem[DGS]{DGS} P. Delsarte, J. Goethals, J. Seidel, \emph{Spherical codes and designs}. Geometriae Dedicata, \textbf{6}, 363--388 (1977). \bibitem[DDM]{DDM} M. Dostert, D. de Laat, P. Moustrou, \emph{Exact semidefinite programming bounds for packing problems}. SIAM Journal on Optimization, \textbf{31}(2), 1433--1458 (2021). \bibitem[EO]{EO} M. Ehler, K.A. Okoudjou, \emph{Minimization of the probabilistic $p$-frame potential}. Journal of Statistical Planning and Inference, \textbf{142}, 645--659 (2012). \bibitem[F1]{F1} L. Fejes T\'{o}th, \emph{On the sum of distances determined by a point set}. Acta Mathematica Academiae Scientiarum Hungarica, \textbf{7}, 397--401 (1956). \bibitem[F2]{F2} L. Fejes T\'{o}th, \emph{\"{U}ber eine Punktverteilung auf der Kugel (in German)}. Acta Mathematica Academiae Scientiarum Hungarica, \textbf{10}, 13--19 (1959).
\bibitem[FNZ]{FNZ} F. Fodor, M. Nasz\'{o}di, T. Zarn\'{o}cz, \emph{On the volume bound in the Dvoretzky--Rogers lemma}. Pacific Journal of Mathematics, \textbf{301}(1), 89--99 (2019). \bibitem[GKT]{GKT} T. Godland, Z. Kabluchko, C. Th\"{a}le, \emph{Beta-star polytopes and hyperbolic stochastic geometry}. Advances in Mathematics, \textbf{404}(Part A), 108382 (2022). \bibitem[GK]{GK} V.K. Goyal, J. Kova\v{c}evi\'{c}, \emph{Quantized frame expansions with erasures}. Applied and Computational Harmonic Analysis, \textbf{10}, 203--233 (2001). \bibitem[HL]{HL} A.G. Horv\'{a}th, Z. L\'{a}ngi, \emph{Maximum volume polytopes inscribed in the unit sphere}. Monatshefte f\"{u}r Mathematik, \textbf{181}, 341--354 (2016). \bibitem[HoL]{HoLe} S. Hoehner, J. Ledford, \emph{Extremal arrangements of points on a sphere for weighted cone-volume functionals}. Preprint: https://arxiv.org/pdf/2205.09096. \bibitem[HMR]{HMR} D. Hug, G.O. Munsonius, M. Reitzner, \emph{Asymptotic mean values of Gaussian polytopes}. Beitr\"{a}ge zur Algebra und Geometrie, \textbf{45}(2), 531--548 (2004). \bibitem[J]{Jo} F. John, \emph{Extremum problems with inequalities as subsidiary conditions}. Studies and Essays Presented to R. Courant on his 60th Birthday, January 8, 1948, 187--204. Interscience Publishers, Inc., New York, N.Y. (1948). \bibitem[KMTT]{KMTT} Z. Kabluchko, A. Marynych, D. Temesvari, C. Th\"{a}le, \emph{Cones generated by random points on half-spheres and convex hulls of Poisson point processes}. Probability Theory and Related Fields, \textbf{175}, 1021--1061 (2019). \bibitem[KTT]{KTT} Z. Kabluchko, D. Temesvari, C. Th\"{a}le, \emph{Expected intrinsic volumes and facet numbers of random beta-polytopes}. Mathematische Nachrichten, \textbf{292}(1), 79--105 (2018). \bibitem[KPS]{KPS} D. Kim, H. Park, W. Shim, \emph{Higher-order interaction model from geometric measurements}. Preprint: https://arxiv.org/pdf/2211.13001.
\bibitem[KV]{KV} O. Kuryatnikova, J.C. Vera, \emph{Generalizations of Schoenberg's theorem on positive definite kernels}. Preprint: https://arxiv.org/pdf/1904.02538. \bibitem[L]{L} A.J. Lee, \emph{U-statistics. Theory and Practice}. Marcel Dekker, Inc., New York (1990). \bibitem[MMV]{MMV} P. Mattila, M.S. Melnikov, J. Verdera, \emph{The Cauchy integral, analytic capacity, and uniform rectifiability}. Annals of Mathematics (2), \textbf{144}(1), 127--136 (1996). \bibitem[M]{Mu} O. Musin, \emph{Multivariate positive definite functions on spheres}. Contemporary Mathematics, \textbf{625}, 177--190 (2014). \bibitem[P]{P} P. Pivovarov, \emph{On determinants and the volume of random polytopes in isotropic convex bodies}. Geometriae Dedicata, \textbf{149}, 45--58 (2010). \bibitem[R]{R} R.A. Rankin, \emph{On the minimal points of positive definite quadratic forms}. Mathematika, \textbf{3}(1), 15--24 (1956). \bibitem[Sc]{Sch} I.J. Schoenberg, \emph{Positive definite functions on spheres}. Duke Mathematical Journal, \textbf{9}, 96--108 (1941). \bibitem[Si]{Sid} V.M. Sidel'nikov, \emph{New estimates for the closest packing of spheres in $n$-dimensional Euclidean space}. Matematicheskii Sbornik, \textbf{24}, 148--158 (1974). \bibitem[Sk]{Sk} M. Skriganov, \emph{Stolarsky's invariance principle for projective spaces}. Journal of Complexity, \textbf{56}, 101428 (2020). \bibitem[St]{St} K.B. Stolarsky, \emph{Sums of distances between points on a sphere. II}. Proceedings of the AMS, \textbf{41}, 575--582 (1973). \bibitem[T1]{Ta1} R.M. Tanner, \emph{Contributions to the simplex code conjecture}. Tech. Report No. 6154-8, Information Systems Lab., Stanford University (1970). \bibitem[T2]{Ta2} R.M. Tanner, \emph{Some content maximizing properties of the regular simplex}. Pacific Journal of Mathematics, \textbf{52}, 611--616 (1974). \bibitem[V]{V} A.W.
van der Vaart, \emph{Asymptotic Statistics}. Cambridge University Press, Cambridge (2000). \bibitem[W]{W} L. Welch, \emph{Lower bounds on the maximum cross correlation of signals}. IEEE Transactions on Information Theory, \textbf{20}, 397--399 (1974). \bibitem[Y]{Y} V.A. Yudin, \emph{Minimum potential energy of a point system of charges}. Diskretnaya Matematika, \textbf{4}(2), 115--121 (1992) (Russian, with Russian summary); English translation, Discrete Mathematics and Applications, \textbf{3}(1), 75--81 (1993). \bibitem[Z]{Z} V.G. Zelevinsky, \emph{Three-body forces and many-body dynamics}. Physics of Atomic Nuclei, \textbf{72}, 1107--1115 (2009). \end{thebibliography} \end{document}
\begin{document} \title{Correlation complementarity yields Bell monogamy relations} \author{P. Kurzy\'nski} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore, Singapore} \affiliation{Faculty of Physics, Adam Mickiewicz University, Umultowska 85, 61-614 Pozna\'{n}, Poland} \author{T. Paterek} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore, Singapore} \author{R. Ramanathan} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore, Singapore} \author{W. Laskowski} \affiliation{Institute of Theoretical Physics and Astrophysics, University of Gda\'nsk, 80-952 Gda\'nsk, Poland} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians Universit\"at M\"unchen, 80755 M\"unchen, Germany} \affiliation{Max Planck Institut f\"ur Quantenoptik, 85748 Garching, Germany} \author{D. Kaszlikowski} \email{[email protected]} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore, Singapore} \affiliation{Department of Physics, National University of Singapore, 2 Science Drive 3, 117542 Singapore, Singapore} \begin{abstract} We present a method to derive Bell monogamy relations by connecting the complementarity principle with quantum non-locality. The resulting monogamy relations are stronger than those obtained from the no-signaling principle alone. In many cases, they yield tight quantum bounds on violation of single and multiple qubit correlation Bell inequalities. In contrast with the two-qubit case, a rich structure of possible violation patterns is shown to exist in the multipartite scenario. \end{abstract} \maketitle It is an experimentally confirmed fact, with the exception of certain experimental loopholes, that Bell inequalities are violated \cite{BELL}. 
In a typical Bell scenario a composite system is split between many parties and each party independently performs measurements on their corresponding subsystems. When all measurements are done, the parties meet and calculate a function (the Bell parameter) of their measurement outcomes in order to check whether they succeeded in violating local realism. An interesting phenomenon occurs when a subsystem is involved in more than one Bell experiment, i.e. when the measurement outcomes of one party are plugged into more than one Bell parameter involving different parties. In this case trade-offs exist between the strengths of violations of a Bell inequality by different sets of observers, known as monogamy relations \cite{SG2001,TV2006,BLMPSR2005,MAN2006,TONER2009,PB2009}. One of the origins of this monogamy is the principle of no-signaling, according to which information cannot be transmitted with infinite speed. If the violations were sufficiently strong, the possibility of superluminal communication between observers would arise, and consequently Bell monogamy is present in every no-signaling theory \cite{BLMPSR2005,MAN2006,TONER2009,PB2009}. However, the no-signaling principle alone does not identify the set of violations allowed by quantum theory. The monogamy relations derived within quantum theory, in the scenario where a Bell inequality is tested between parties $AB$ and $AC$, show even more stringent constraints on the allowed violations \cite{SG2001,TV2006}. Here we derive within quantum theory monogamy relations which involve violations of multi-partite Bell inequalities, and study their properties. The trade-offs obtained are stronger than those arising from no-signaling alone, and in most cases we show that they fully characterize the quantum set of allowed Bell violations. Our method uses the complementarity of the operators defining quantum values of Bell parameters and shows that Bell monogamy stems from quantum complementarity.
This sheds new light on the relation between complementarity (uncertainty) and quantum non-locality. Oppenheim and Wehner show that complementarity relations for single-party observables determine the strength of a single Bell inequality violation \cite{OW}. Here we show for qubit inequalities that the same can be achieved using complementarity between correlation observables and that this type of complementarity also determines the violation strength for several Bell inequalities (monogamy). We begin with the principle of complementarity, which forbids simultaneous knowledge of certain observables, and show that the only dichotomic complementary observables in the quantum formalism are those that anti-commute. Conversely, we demonstrate that there exists a bound for the sum of squared expectation values of anti-commuting operators in any physical state \cite{GEZA,WW2008}. This bound is subsequently used to derive quantum bounds on Bell inequality violations. For other applications of this bound see, for instance, Ref. \cite{MACRO}. Consider a set of dichotomic ($\pm 1$) complementary measurements. The complementarity is manifested in the fact that if the expectation value of one measurement is $\pm1$ then the expectation values of all other complementary measurements are zero. We show that the corresponding quantum mechanical operators anti-commute. Consider a pair of dichotomic operators $A$ and $B$ and put the expectation value $\langle A \rangle = 1$, i.e., the state being measured, say $\ket{a}$, is one of the $+1$ eigenstates. Complementarity requires $\bra{a} B \ket{a} = 0$, which implies $B \ket{a} = \ket{a_{\perp}}$, where $\perp$ denotes a state orthogonal to $\ket{a}$. Since $B^2 = \openone$, we also have $B \ket{a_{\perp}} = \ket{a}$ and therefore $\ket{b} = \tfrac{1}{\sqrt{2}}(\ket{a} + \ket{a_\perp})$ is a $+1$ eigenstate of $B$. For this state complementarity demands $\bra{b} A \ket{b} = 0$, i.e.
$A \ket{b}$ is orthogonal to $\ket{b}$, which is only satisfied if $\ket{a_\perp}$ is the $-1$ eigenstate of $A$. The same argument applies to all $+1$ eigenstates; therefore the two eigenspaces have equal dimension. As a consequence, $A = \sum_{a} (\ket{a} \bra{a} - \ket{a_\perp} \bra{a_\perp})$ and $B = \sum_{a} (\ket{a_{\perp}} \bra{a} + \ket{a} \bra{a_{\perp}})$. It is now easy to verify that $A$ and $B$ anti-commute. Conversely, consider a set of traceless and trace-orthogonal dichotomic hermitian operators $A_k$. We denote by $\alpha_k$ the expectation values of the measurements $A_k$ in some state $\rho$, which are real numbers in the range $[-1,1]$. Let us group the operators $A_k$ into disjoint sets $S_j$ of mutually anti-commuting operators, $S_j=\{A_1^{(j)},A_2^{(j)},\dots\}$. Next, consider an operator $F_j \equiv \sum_{k=1}^{|S_j|} \alpha_{kj} A_{k}^{(j)} = \vec \alpha_j \cdot \vec A_j$, whose variance in the same state $\rho$ is given by $\langle F_j^2\rangle-\langle F_j\rangle^2 = |\vec \alpha_j|^2 (1- |\vec \alpha_j|^2)$ due to the assumed anti-commutativity and because the square of each individual operator is the identity. Positivity of the variance, which stems from the positivity of $\rho$, implies that \begin{equation} |\vec\alpha_{j}|\leq 1. \label{INEQ} \end{equation} As a result, if the expectation value of one observable is $\pm 1$ then the expectation values of all other anti-commuting observables are necessarily zero. In this way anti-commuting operators are related to complementarity. In fact, the above inequality is more general, as it gives trade-offs between the squared expectation values of anti-commuting operators in any physical state. Here, we derive inequality (\ref{INEQ}) in the spirit of the Heisenberg uncertainty relation; see \cite{GEZA,WW2008} for alternative derivations. 
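As a numerical sanity check (ours, not needed for the proof), inequality~(\ref{INEQ}) can be sampled directly: for mutually anti-commuting dichotomic observables the squared expectation values sum to at most $1$ in any state. The sketch below uses the single-qubit triple $\{X,Y,Z\}$ and the anti-commuting two-qubit pair $\{X\otimes X, X\otimes Y\}$, for which the singlet saturates the bound.

```python
# Sketch: sample inequality (1) on random states.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def random_state(dim, rng):
    """Random mixed state rho = A A^dag / Tr(A A^dag)."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def squared_expectations(rho, observables):
    return sum(np.trace(rho @ op).real ** 2 for op in observables)

rng = np.random.default_rng(0)
worst = max(squared_expectations(random_state(2, rng), [X, Y, Z])
            for _ in range(1000))
assert worst <= 1 + 1e-9          # <X>^2 + <Y>^2 + <Z>^2 <= 1

# the pair X(x)X, X(x)Y anti-commutes; the singlet saturates the bound
XX, XY = np.kron(X, X), np.kron(X, Y)
assert np.allclose(XX @ XY + XY @ XX, 0)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
singlet = np.outer(psi, psi.conj())
assert abs(squared_expectations(singlet, [XX, XY]) - 1) < 1e-9
print("inequality (1) holds on all sampled states")
```

For the singlet, $\langle X\otimes X\rangle = -1$ and $\langle X\otimes Y\rangle = 0$, so the pair saturates $|\vec\alpha|\le 1$.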
For dichotomic observables the square of the expectation value is related to the Tsallis entropy as $S_2(A_j)=\frac{1}{2}(1-\langle A_j\rangle^2)$, therefore the inequality can be converted into an entropic uncertainty relation. Inequality (\ref{INEQ}) provides a powerful tool for the study of quantum non-locality. We show that it allows a derivation of the Tsirelson bound \cite{TSIRELSON} and of the monogamy of Bell inequality violations between many qubits. A general $N$-qubit density matrix can be decomposed into tensor products of Pauli operators \begin{equation} \rho = \frac{1}{2^N} \sum_{\mu_1,...,\mu_N =0}^3 T_{\mu_1 \dots \mu_N} \sigma_{\mu_1} \otimes \dots \otimes \sigma_{\mu_N}, \end{equation} where $\sigma_{\mu_n} \in \{\openone, \sigma_x,\sigma_y,\sigma_z\}$ is the $\mu_n$-th local Pauli operator for the $n$-th party and $T_{\mu_1 \dots \mu_N} = \mathrm{Tr}[\rho (\sigma_{\mu_1} \otimes \dots \otimes \sigma_{\mu_N}) ]$ are the components of the correlation tensor $\hat T$. The orthogonal basis of tensor products of Pauli operators has the property that its elements either commute or anti-commute. We study a complete collection of two-setting correlation Bell inequalities for $N$ qubits \cite{WZ2001,WW2001,ZB2002}. This collection can be condensed into a single general Bell inequality, whose classical bound is one \cite{ZB2002}. All correlations which satisfy this general inequality, and only such correlations, admit a local hidden variable (LHV) description of the Bell experiment. This is in contrast to a single inequality such as CHSH \cite{CHSH}, whose violation is only sufficient to rule out an LHV model. For two qubits, if the general inequality is satisfied then all CHSH inequalities are satisfied, and if the general inequality is violated then there exists a CHSH inequality (with a minus sign in a suitable place) which is violated. 
The quantum value of the general Bell parameter, denoted by $\mathcal{L}$, was shown to have an upper bound of \begin{equation} \mathcal{L}^2 \le \sum_{k_1, \dots, k_N=x,y} T_{k_1 \dots k_N}^2, \label{UP_BOUND} \end{equation} where the summation is over orthogonal local directions $x$ and $y$ which span the plane of the local settings \cite{ZB2002}. If the upper bound above is smaller than the classical limit of $1$, there exists an LHV model. Our method for finding quantum bounds for Bell violations is to use condition (\ref{UP_BOUND}) for combinations of Bell parameters and then identify sets of anti-commuting operators in order to utilize inequality (\ref{INEQ}) and obtain a bound on these combinations. We begin by showing an application of inequality (\ref{INEQ}) to a new derivation of the Tsirelson bound. For two qubits the general Bell parameter is upper bounded by $\mathcal{L}^2 \le T_{xx}^2 + T_{xy}^2 + T_{yx}^2 + T_{yy}^2$. One can identify here two vectors of averages of anti-commuting observables, e.g., $\vec \alpha_1 = (T_{xx},T_{xy})$ and $\vec \alpha_2 = (T_{yx},T_{yy})$. Due to (\ref{INEQ}) we obtain $\mathcal{L} \le \sqrt{2}$, which is exactly the Tsirelson bound. One can apply this method to look for the corresponding maximal quantum violations of other correlation inequalities; e.g., it is easy to verify that the ``Tsirelson bound'' of the multi-setting inequalities \cite{MULTISETTING} is just the same as the one for the two-setting inequalities. Our derivation shows that Tsirelson's bound is due to the complementarity of correlations $T_{ix}^2+T_{iy}^2 \leq 1$ with $i=x,y$. Any theory more non-local than quantum mechanics would have to violate this complementarity relation (compare with Ref. \cite{OW}). \begin{figure} \caption{The nodes of these graphs represent observers trying to violate Bell inequalities, which are denoted by colored edges. 
{\bf a)}--{\bf d)} depict the configurations analyzed in the text.} \label{FIG_GRAPHS} \end{figure} To describe how complementarity of correlations can be used to establish Bell monogamy, consider the simplest scenario of three particles, illustrated in Fig. \ref{FIG_GRAPHS}a. We show that if correlations obtained in a two-setting Bell experiment by $AB$ cannot be modeled by LHV, then correlations obtained by $AC$ admit an LHV model. We use condition (\ref{UP_BOUND}), which applied to the present bipartite scenario reads: $\mathcal{L}_{AB}^2 + \mathcal{L}_{AC}^2 \le \sum_{k,l=x,y} T_{kl0}^2 + \sum_{k,m=x,y} T_{k0m}^2$. It is important to note that the settings of $A$ are the same in both sums, and accordingly the orthogonal local directions $x$ and $y$ are the same for $A$ in both sums. We arrange the Pauli operators corresponding to the correlation tensor components entering the sums into the following two sets of anti-commuting operators: $\{XX\openone,XY\openone,Y\openone X,Y\openone Y\}$ and $\{YX\openone,YY\openone, X\openone X,X \openone Y\}$, where $X=\sigma_x$ and $Y=\sigma_y$. Note that the anti-commutation of any pair of operators within a set is solely due to the anti-commutativity of local Pauli operators. We obtain our result $\mathcal{L}_{AB}^2 + \mathcal{L}_{AC}^2 \le 2$. Once a CHSH inequality is violated between $AB$, all CHSH inequalities between $AC$ are satisfied; similar results were obtained in \cite{SG2001,TV2006}. Before we move to the general case of an arbitrary number of qubits, we present an explicit example of a multipartite monogamy relation. Consider parties $A$, $B$, $C$, $D$ trying to violate a correlation Bell inequality in the scenario depicted in Fig. \ref{FIG_GRAPHS}b. We show the new monogamy relation: $\mathcal{L}_{ABC}^2+\mathcal{L}_{ABD}^2+\mathcal{L}_{ACD}^2+\mathcal{L}_{BCD}^2 \leq 4$. Condition (\ref{UP_BOUND}) applied to these tripartite Bell parameters implies that the left-hand side is bounded by the sum of $32$ elements. 
The corresponding tensor products of Pauli operators can be grouped into four sets: \begin{eqnarray} \{XXY \openone,XY\openone X,X\openone XY, \openone YYY,\dots\}, \nonumber \\ \{XYX\openone,YY\openone Y,Y\openone XX,\openone XXY,\dots\}, \nonumber \\ \{YXX\openone,XX\openone Y,Y\openone YY,\openone XYX,\dots\}, \nonumber \\ \{YYY\openone,YX\openone X,X\openone YX,\openone YXX,\dots\}, \nonumber \end{eqnarray} where the dots denote four more operators, obtained from the previous four by interchanging $X$ and $Y$. All operators in each set anti-commute; therefore the bound is proved. To give a concrete example of monogamy of a well-known inequality we choose the inequality due to Mermin \cite{MERMIN}: $E_{112} + E_{121} + E_{211} - E_{222} \le 2$, where $E_{klm}$ denote the correlation functions. Since the classical bound of the Mermin inequality is $2$, and not $1$ as we have assumed in our derivation, the new ``Mermin monogamy'' is $\mathcal{M}_{ABC}^2+\mathcal{M}_{ABD}^2+\mathcal{M}_{ACD}^2+\mathcal{M}_{BCD}^2 \leq 16$, where $\mathcal{M}$ is the quantum value of the corresponding Mermin parameter. The bound of the new monogamy relation can be achieved in many ways. If a triple of observers share the GHZ state, they can obtain the maximal violation of $4$, while the remaining triples observe vanishing Mermin quantities $\mathcal{M}$. This can be attributed to the maximal entanglement of the GHZ state. It is also possible for two or three triples to violate the Mermin inequality non-maximally and at the same time achieve the bound. For example, the state $\frac{1}{2}\left(|0001\rangle+|0010\rangle +i\sqrt{2}|1111\rangle\right)$ allows $ABC$ and $ABD$ to obtain $\mathcal{M} = 2\sqrt{2}$, and the state $\frac{1}{\sqrt{6}}\left(|0001\rangle+|0010\rangle+|0100\rangle+i\sqrt{3}|1111\rangle\right)$ allows $ABC$, $ABD$ and $ACD$ to obtain $\mathcal{M} = \tfrac{4}{\sqrt{3}}$. Note that it is impossible to violate all four inequalities simultaneously. 
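The anti-commutation of the four sets above can be checked mechanically. The sketch below (ours, not part of the published derivation) fills in the dots with the stated $X\leftrightarrow Y$ rule and uses the parity criterion: two Pauli strings anti-commute iff they differ on an odd number of positions where both letters are non-identity. It also confirms that the $4\times 8 = 32$ operators exhaust all tensor components entering the bound.

```python
# Sketch: verify the four sets of eight mutually anti-commuting operators.
from itertools import combinations, product

def anticommute(p, q):
    """Pauli strings anti-commute iff they clash (two different
    non-identity letters) on an odd number of positions."""
    clashes = sum(1 for a, b in zip(p, q) if a != "1" and b != "1" and a != b)
    return clashes % 2 == 1

def swap_xy(op):
    return op.translate(str.maketrans("XY", "YX"))

# the four listed quadruples; '1' stands for the identity
SETS = [["XXY1", "XY1X", "X1XY", "1YYY"],
        ["XYX1", "YY1Y", "Y1XX", "1XXY"],
        ["YXX1", "XX1Y", "Y1YY", "1XYX"],
        ["YYY1", "YX1X", "X1YX", "1YXX"]]
full_sets = [s + [swap_xy(op) for op in s] for s in SETS]

for s in full_sets:
    assert all(anticommute(p, q) for p, q in combinations(s, 2))

# together the 32 operators exhaust every string with one identity and
# three letters from {X, Y}, i.e. every component entering the bound
all_ops = {"".join(p[:i]) + "1" + "".join(p[i:])
           for p in product("XY", repeat=3) for i in range(4)}
assert {op for s in full_sets for op in s} == all_ops
print("four sets of eight mutually anti-commuting operators")
```

Each set then yields $|\vec\alpha_j|\le 1$ via (\ref{INEQ}), and summing over the four sets gives the bound $4$.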
We now derive new monogamy relations for $N$ qubits. Consider the scenario of Fig. \ref{FIG_GRAPHS}c, in which $N$ is odd, $A$ is the fixed qubit, and the remaining $N-1$ qubits are split into two groups $\vec B = (B_1,...,B_M)$ and $\vec C = (C_1,...,C_M)$ each containing $M=\tfrac{1}{2}(N-1)$ qubits. We shall derive the trade-off relation between violation of the $(M+1)$-partite Bell inequality by parties $A \vec B$ and $A \vec C$. Using condition (\ref{UP_BOUND}), the elements of the correlation tensor which enter the bound of $\mathcal{L}_{A \vec B}^2 + \mathcal{L}_{A \vec C}^2$ are of the form $T_{k l_1 \dots l_M 0 \dots 0}$ and $T_{k 0 \dots 0 m_1 \dots m_M}$. The corresponding Pauli operators can be arranged into $2^{M}$ sets of four mutually anti-commuting operators each: $\vec A_{1S} = \{XXSI, XYSI, YIXS, YIYS\}$, $\vec A_{2S} = \{YXSI, YYSI, XIXS, XIYS\}$, where $S$ stands for all $2^{M-1}$ combinations of $X$'s and $Y$'s for $M-1$ parties, and $I = \openone^{\otimes M}$ is the identity operator on the $M$ neighboring qubits. Therefore, according to inequality (\ref{INEQ}), we arrive at the following trade-off: $\mathcal{L}_{A \vec B}^2 + \mathcal{L}_{A \vec C}^2 \le 2^M$. The bound of this inequality is tight in the sense that there exist quantum states achieving the bound for all allowed values of $\mathcal{L}_{A \vec B}$ and $\mathcal{L}_{A \vec C}$. This is a generalization of a similar property of the CHSH monogamy \cite{TV2006}. The state of interest can be chosen as \begin{equation} |\psi \rangle = \tfrac{1}{\sqrt{2}} \cos \alpha \left( | 0 \vec 0 \vec 0 \rangle + |1 \vec 0 \vec 1 \rangle \right) + \tfrac{1}{\sqrt{2}} \sin \alpha \left( | 1 \vec 1 \vec 0 \rangle + |0 \vec 1 \vec 1 \rangle \right), \label{PSI_MONO} \end{equation} where e.g. 
$|1 \vec 0 \vec 1 \rangle$ denotes a state in which qubit $A$ is in the $| 1 \rangle$ eigenstate of the local $Z$ basis, all qubits of $\vec B$ are in state $| 0 \rangle$ of their local $Z$ bases, and all qubits of $\vec C$ are in state $| 1 \rangle$ of their respective $Z$ bases. The non-vanishing correlation tensor components in the $xy$ plane, which involve only $(M+1)$-partite correlations, are $T_{x \vec w \vec 0} = \pm \sin 2\alpha$, $T_{x \vec 0 \vec w} = \pm 1$, and $T_{y \vec 0 \vec v} = - \cos 2\alpha$, where $\vec w$ contains an even number of $y$ indices, other indices being $x$, and $\vec v$ contains an odd number of $y$ indices, other indices again being $x$. There are $\sum_{k=0}^{\lfloor M/2 \rfloor} {M \choose 2 k} = 2^{M-1}$ correlation tensor elements of each type and consequently \begin{equation} \mathcal{L}_{A \vec B}^2 = 2^{M-1} \sin^2 2\alpha, \quad \mathcal{L}_{A \vec C}^2 = 2^{M-1}(1+ \cos^2 2\alpha). \label{TIGHT} \end{equation} Therefore, the bound is always achieved and all allowed values of $\mathcal{L}_{A \vec B}$ and $\mathcal{L}_{A \vec C}$ can be attained either by the state (\ref{PSI_MONO}) or by the state with the roles of the qubits $\vec B \leftrightarrow \vec C$ interchanged. The underlying reason why the above trade-off allows for violation by both $A \vec B$ and $A \vec C$ is the fact that the sets of anti-commuting operators of the Bell parameters can contain at most four elements. Now we present a much stronger new monogamy related to the graph in Fig. \ref{FIG_GRAPHS}d. Consider $M$-partite Bell inequalities corresponding to different paths from the root of the graph to its leaves ($M=3$ in Fig. \ref{FIG_GRAPHS}d). There are $2^{M-1}$ such inequalities and we shall prove that their quantum mechanical values obey \begin{equation} \mathcal{L}_1^2 + \dots + \mathcal{L}_{2^{M-1}}^2\le 2^{M-1}, \label{STRONG} \end{equation} where $\mathcal{L}_j$ is the quantum value of the $j$-th Bell parameter in the graph. 
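The counting and saturation claims above admit a quick arithmetic check (a sketch of ours): the number of index vectors with an even number of $y$'s among $M$ entries is $2^{M-1}$, and the two values in Eq.~(\ref{TIGHT}) sum to $2^M$ for every $\alpha$.

```python
# Sketch: check the counting identity and the saturation of the bound 2^M.
import math

def even_y_count(M):
    # number of M-letter x/y index vectors with an even number of y's
    return sum(math.comb(M, 2 * k) for k in range(M // 2 + 1))

for M in range(1, 10):
    assert even_y_count(M) == 2 ** (M - 1)

# Eq. (TIGHT): L_AB^2 + L_AC^2 = 2^(M-1)(sin^2 + 1 + cos^2) = 2^M
for M in (2, 3, 5):
    for i in range(200):
        two_alpha = math.pi * i / 100
        lab2 = 2 ** (M - 1) * math.sin(two_alpha) ** 2
        lac2 = 2 ** (M - 1) * (1 + math.cos(two_alpha) ** 2)
        assert abs(lab2 + lac2 - 2 ** M) < 1e-9
print("the bound 2^M is saturated for every alpha")
```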
To prove this, we construct $2^{M-1}$ sets of anti-commuting operators, each set containing $2^M$ elements, such that they exhaust all correlation tensor elements which enter the bound of the left-hand side of (\ref{STRONG}) after application of condition (\ref{UP_BOUND}). The construction also uses the graph of the binary tree. We begin at the root, to which we associate a set of two anti-commuting operators, $X$ and $Y$, for the corresponding qubit. The general rule is that if we move up in the graph from qubit $A$ to qubit $B$ we generate two new anti-commuting operators by appending $X$ or $Y$ at position $B$ to the operator which has $X$ at position $A$. Similarly, if we move down in the graph to qubit $C$ we generate two new anti-commuting operators by appending $X$ or $Y$ at position $C$ to the operator which contains $Y$ at position $A$. For example, starting from the set of operators $(X,Y)$ by moving up we obtain $(XX\openone,XY\openone)$, and by moving down we have $(Y \openone X ,Y \openone Y)$. The next sets of operators are $(XX\openone X\openone \openone \openone, XX\openone Y\openone \openone \openone)$, $( XY\openone\openone X \openone \openone, XY\openone \openone Y \openone \openone)$, $(Y\openone X\openone \openone X\openone, Y\openone X \openone \openone Y \openone)$ and $(Y \openone Y \openone \openone \openone X, Y \openone Y\openone \openone \openone Y)$ if we move from the root: up up, up down, down up and down down, respectively. By following this procedure in the whole graph we obtain a set of $2^M$ mutually anti-commuting operators. According to this algorithm the anti-commuting operators can be grouped in pairs having the same Pauli operators except for the qubits of the last step (the leaves of the graph). There are $2^{M-1}$ such pairs corresponding to distinct combinations of tensor products of $X$ and $Y$ operators on $M-1$ positions. 
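As an aside, the example operators listed above (the $M=3$ case on seven qubits) can be checked mechanically: all $2^3 = 8$ Pauli strings produced by the tree rule mutually anti-commute. The sketch below (ours) again uses the parity criterion that two Pauli strings anti-commute iff they clash on an odd number of positions.

```python
# Sketch: verify the eight tree-generated operators mutually anti-commute.
from itertools import combinations

def anticommute(p, q):
    """Pauli strings anti-commute iff they differ on an odd number of
    positions where both letters are non-identity ('1')."""
    clashes = sum(1 for a, b in zip(p, q) if a != "1" and b != "1" and a != b)
    return clashes % 2 == 1

tree_set = ["XX1X111", "XX1Y111",   # root -> up -> up
            "XY11X11", "XY11Y11",   # root -> up -> down
            "Y1X11X1", "Y1X11Y1",   # root -> down -> up
            "Y1Y111X", "Y1Y111Y"]   # root -> down -> down
assert all(anticommute(p, q) for p, q in combinations(tree_set, 2))
print("the 2^3 = 8 tree operators mutually anti-commute")
```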
Importantly, in different operators these positions are different, and to generate the whole set of operators entering the bound we have to perform suitable permutations of positions. Such permutations always exist and they do not affect anti-commutativity. Finally we end up with the promised $2^{M-1}$ sets of $2^{M}$ anti-commuting operators each, which according to Eq. (\ref{INEQ}) give the bound of (\ref{STRONG}). The inequality (\ref{STRONG}) is stronger than the previous trade-off relation in the sense that it does not allow simultaneous violation of all the inequalities on its left-hand side. All other patterns of violations are possible, as we now show. Choose any number, $m$, of Bell inequalities, i.e. paths in Fig. \ref{FIG_GRAPHS}d. Altogether they involve $n$ parties which share the following quantum state \begin{equation} \ket{\psi_n} = \frac{1}{\sqrt{2}} | \underbrace{0 \dots 0}_{n} \rangle + \frac{1}{\sqrt{2 m}} \sum_{j = 1}^m | 0 \dots 0 \underbrace{1 \dots 1}_{\mathcal{P}_j} 0 \dots 0 \rangle, \end{equation} where $\mathcal{P}_j$ denotes the parties involved in the $j$-th Bell inequality. Note that all states under the sum are orthogonal as they involve different parties. The only non-vanishing components of the correlation tensor of this state have an even number of $y$ indices for the parties involved in the Bell inequalities. The squares of all these components are equal to $\tfrac{1}{m}$, which gives $\mathcal{L}_j^2 = \tfrac{2^{M-1}}{m}$ for each Bell inequality $j=1,\dots, m$. Therefore, all $m$ Bell inequalities are violated as soon as $m < 2^{M-1}$. Moreover, the sum of these $m$ Bell parameters saturates the bound of (\ref{STRONG}) and therefore, independently of the state shared by the other parties, the remaining Bell parameters of (\ref{STRONG}) all vanish. In conclusion, we have derived monogamy relations for multipartite Bell inequality violations which are all quadratic functions of Bell parameters. 
As such, these relations are stronger than those following from the no-signaling principle alone, which are linear in the Bell parameters \cite{BLMPSR2005,MAN2006,TONER2009,PB2009}. Indeed, most of our monogamies are tight in the sense that they precisely identify the set of Bell violations allowed by quantum theory. Our proofs are within the quantum formalism and utilize the bounds imposed by the complementarity principle. These bounds were established for dichotomic observables and are applicable to any Bell inequality involving them; it would be useful to extend the formalism to measurements with more outcomes. It would also be interesting to see if the Bell violation trade-offs can be derived without using the quantum formalism; a candidate for this task is the principle of information causality \cite{IC}. \emph{Acknowledgements}. This research is supported by the National Research Foundation and Ministry of Education in Singapore. WL is supported by the EU program Q-ESSENCE (Contract No. 248095), the MNiSW Grant no. N202 208538 and by the Foundation for Polish Science. \end{document}
\begin{document} \title[Musicological aspects of counterpoint theory]{Musicological, computational, and conceptual aspects of first-species counterpoint theory} \author[J. S. Arias-Valero]{Juan Sebasti\'{a}n Arias-Valero} \address{Departamento de Matem\'{a}ticas, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Ciudad de M\'{e}xico, M\'{e}xico} \email{[email protected]} \author[O. A. Agustín-Aquino]{Octavio Alberto Agustín-Aquino} \address{Instituto de Física y Matemáticas, Universidad Tecnológica de la Mixteca, Huajuapan de León, México} \email{[email protected]} \author[E. Lluis-Puebla]{Emilio Lluis-Puebla} \address{Departamento de Matem\'{a}ticas, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Ciudad de M\'{e}xico, M\'{e}xico} \email{[email protected]} \begin{abstract} We re-create the essential results of a 1989 unpublished article by Mazzola and Muzzulini that contains the musicological aspects of a first-species counterpoint model. We include a summary of the mathematical counterpoint theory and several variations of the model that offer different perspectives on Mazzola's original principles. \end{abstract} \keywords{counterpoint; rings; modules; combinatorics} \maketitle \section{Introduction}\label{intro} The original idea of this article was to communicate the musicological results from \cite{MazzMuzz}, which presents the results of a model for \textit{first-species counterpoint} based on ring and module theory, but which was never published. This kind of counterpoint is the simplest one and the didactic basis of \textit{Renaissance counterpoint}, as taught by \cite{Fux}. The model was introduced by \cite{Inicount} for $\mathbb{Z}_{12}$, a ring that can be used to model the algebraic behavior of the twelve intervals between tones in the \textit{chromatic scale} of the Western musical tradition. Then, the model was presented again, with some additional computational results by Hichert, in \cite[Part~VII]{MazzolaTopos}. 
Further generalizations to the case when the rings are of the form $\mathbb{Z}_{n}$ were considered in \cite{OAAAthesis,Junod}, and then included in a collaborative compendium of mathematical counterpoint theory and its computational aspects \cite{Octavio}. The motivation for such a generalization to $\mathbb{Z}_n$ was the existence of \textit{microtonal scales} with more than twelve tones, which have been used for making real music \cite{octaviotod}. However, the understanding of \cite{MazzMuzz} is unavoidably linked to that of the model itself, so in this paper we present a summary of the model together with the musicological, computer-aided study of its results. We do not reduce this study to a mere exposition of the previous results, but propose several variations of the model with due justification. The intention of these variations is not to destroy the original model, but to delve deeper into some of its basic principles. Regarding other mathematical models of counterpoint, Mazzola's school has an important counterpart in D. Tymoczko. Tymoczko's model \cite[Appendix]{Tympolem} is based on orbifolds and is oriented towards voice leading. Moreover, it works by means of a geometric reading of the usual counterpoint rules, in contrast to the predictive character of Mazzola's model, which follows the principle that these rules obey mathematical relations based on musical symmetries. There has been a polemic around Mazzola's and Tymoczko's models. Tymoczko's initial critique can be found in \cite{Tympolem} and a response by Mazzola and the second author can be found in \cite{Mazzpolem}. Two of Tymoczko's concerns were simply that the musicological study \cite{MazzMuzz} was not available and that there are some progressions forbidden by the model but acceptable to Fux. This article can likely clarify those and other possible concerns. We organize this article as follows. First, in Section~\ref{strsty} we codify Fux's \textit{strict style} following two standard sources \cite{Fux,Jep}. 
This is a \textit{descriptive model}, which helps to make a mathematical taxonomy of progressions into inadmissible, bad, and good ones, but hardly explains anything about the nature of the rules. Then, based on \cite[Chapitre~III]{Beau}, in Sections \ref{modprin} and \ref{mod}, we expose Mazzola's model in both informal and formal synthetic ways. The quantitative computational results are also included. In Section~\ref{redstrsty}, we reduce, modulo $12$, the strict style to obtain the \textit{reduced strict style}. This procedure helps to compare the original phenomenon to the model, whose base is the integers modulo $12$. Here, we note that there are some new \textit{ambiguous} progressions that probably do not deserve the defined names inadmissible, bad, or good. In Section~\ref{comp}, we compare the reduced strict style to the model. Based on the conceptual and structural understanding of the model recorded in the previous sections, in Section~\ref{natvar} we propose an alternative model that replaces the dual-numbers product (the infinitesimal one) with one based on the integers modulo $12$. The alternative to the dual numbers ring is also isomorphic to the product $\mathbb{Z}_{12}\times \mathbb{Z}_{12}$, and therefore could be suitable for further categorical generalizations. The quantitative results of the alternative model are very similar to those of the original one, and slightly improve them regarding the predictions of inadmissible or bad progressions in the reduced strict style. Also, in Sections~\ref{2var} and \ref{3var}, we propose two variations of the model that delve deeper into the principle of local characterization of deformed consonances/dissonances. These variations turn out to coincide and, again, they slightly improve the prediction of inadmissible or bad progressions. Finally, we provide conclusions from our study and point out some directions for further research. 
\section{The strict style}\label{strsty} The simplest case of first-species counterpoint consists of two voices, \textit{cantus firmus} and \textit{discantus}, whose notes occur simultaneously and have the same duration. Thus, given a note in the cantus firmus, the corresponding one in the discantus is determined by the interval between them. Each interval is a consonance and the composition must satisfy certain rules. See Figure~\ref{Fux}. \begin{figure} \caption{Some features of a first-species counterpoint example given by \cite[p.~29]{Fux}.\label{Fux}} \end{figure} The following is the codification of the \textit{strict style} given in \cite{MazzMuzz}, which is based on \cite{Tittel}. Here, for accessibility of the bibliographical sources, we justify each rule following the standard counterpoint books \cite{Fux,Jep}. The strict style consists of the following \textit{data}: \begin{itemize} \item \textit{Pitch space}. We will work with the usual group $\mathbb{Z}_{12}$ of pitch classes of the equal-tempered scale\footnote{For simplicity, we assume that it extends endlessly in both directions. The choice of this particular scale is only illustrative; it could equally denote a scale in a different tuning.} $\mathbb{Z}$. We denote by $\pi$ the natural projection from $\mathbb{Z}$ to $\mathbb{Z}_{12}$. \item \textit{Intervals}. The intervals between pitch classes and pitches also correspond to the groups $\mathbb{Z}_{12}$ and $\mathbb{Z}$, since they are just differences. \item \textit{Diatonic scale}. The basic pattern of modes is the subset $X$ of $\mathbb{Z}_{12}$, where $X=\{0,2,4,5,7,9,11\}$, and its extension $\pi^{-1}(X)$ to $\mathbb{Z}$. \item \textit{Contrapuntal intervals}. Each interval between a given note $c$ in the cantus firmus and its discantus $d$ corresponds uniquely to a linear polynomial $c+(d-c)x$. 
This expression is equivalent to giving the two voices explicitly as a pair $(c,d)$, but our intention is \textit{to stress the role of intervals}. We denote by $P_1$ the additive Abelian \textit{group of contrapuntal intervals}. \item \textit{Consonances and dissonances}. We define consonances (unison, thirds, fifth, sixths) and dissonances (seconds, fourth, sevenths) by \[K=\{0,3,4,7,8,9\}\text{ and }D=\{1,2,5,6,10,11\},\] respectively. Thus, $\{K,D\}$ is a \textit{partition} of the intervals group $\mathbb{Z}_{12}$. The consonance/dissonance partition of $\mathbb{Z}$ is the one naturally induced by $\{K,D\}$, namely $\{\pi^{-1}(K),\pi^{-1}(D)\}$. In turn, by classifying contrapuntal intervals $c+(d-c)x$ according to whether $d-c$ is a consonance or a dissonance, we have a partition of the contrapuntal intervals group $P_1$ into \textit{contrapuntal consonances and dissonances}. \end{itemize} \textit{A composition of first-species counterpoint} is a finite sequence $\xi_1,\xi_2,\dots, \xi_n$ of contrapuntal intervals essentially satisfying the following \textit{rules}. \subsection{Preliminary rules}\label{prerul} \begin{itemize} \item Each contrapuntal interval is a contrapuntal consonance \cite[p.~27]{Fux}. \item We work with consonances up to the tenth \cite[p.~112, 5]{Jep}, that is, for each contrapuntal interval $c+kx$, $k\in\{0,3,4,7,8,9,12,15,16\}$. \item Both $c$ (cantus firmus) and $d$ (discantus $c+k$) of each contrapuntal interval $c+kx$ are in the diatonic scale.\footnote{Moreover, the choice of a distinguished element $t$ in $X$ determines a \textit{mode} with \textit{tonic} $t$, where the composition occurs. Dorian, Phrygian, Mixolydian, Eolian, and Ionian modes are determined by the tonics $2$, $4$, $7$, $9$, and $0$, respectively. 
The tonic is essentially stressed in the first and last contrapuntal intervals of the composition \cite[p.~31]{Fux}, but the role of modes in progressions is secondary since the latter do not depend on a particular note (Section~\ref{strtrans}). See \cite[pp.~59-82]{Jep} for a musical introduction to modes.} \item Given a \textit{progression} $(c+kx,c'+k'x)$, from a contrapuntal consonance to its successor in a composition, the maximum change of a voice is an octave \cite[p.~109]{Jep}. Formally, $|c'-c|\leq 12$ and $|d'-d|\leq 12$, where $d=c+k$ and $d'=c'+k'$. \end{itemize} \subsection{Progression rules}\label{prorul} According to \cite{MazzMuzz,Fux,Jep}, we divide progressions $(c+kx,c'+k'x)$ into three categories. \begin{itemize} \item \textit{Inadmissible}: \begin{itemize} \item[] \textit{Unison repetitions} \cite[p.~112, 3]{Jep}. \item[] \textit{Parallel perfect consonances} \cite[p.~22]{Fux}. Those progressions with $k\in \{0,7,12\}$, $k=k'$, and $c\neq c'$. \item[] \textit{Hidden parallel perfect consonances} \cite[p.~22]{Fux}. Those satisfying $k'\in\{0,7,12\}$, $(d'-d)(c'-c)>0$, and $k\neq k'$. \item[] \textit{Tritones} \cite[p.~35]{Fux}. They satisfy $|c'-c|=6$ or $|d'-d|=6$. \item[] \textit{Too large skips} \cite[p.~27, Footnote~1]{Fux}. If $7<|c'-c|<12$ or $7<|d'-d|<12$. The octave is regarded as a sort of repetition \cite[p.~112, 7]{Jep} and is accepted. \end{itemize} \item \textit{Bad}: Those that are not inadmissible but fall into some of the following cases. Bad progressions are not strictly avoided. \begin{itemize} \item[] \textit{Imperfect consonances by similar skips}\footnote{It is a weak form of the combination of the rules \textit{Parallel imperfect consonances} and \textit{Hidden parallel imperfect consonances}.} \cite[p.~112, 7]{Jep}. This means that $k'\in \{3,4,8,9,15,16\}$, $(d'-d)(c'-c)>0$, $|c'-c|>2$, $|d'-d|>2$, and $5<|c'-c|< 12$ or $5<|d'-d|<12$. \item[] \textit{Hidden tritones} \cite[p.~35, Footnote~9]{Fux}. 
If $d'-c\equiv 6 \pmod{12}$ or $d-c'\equiv 6 \pmod{12}$. \end{itemize} \item \textit{Good}: All other progressions. \end{itemize} \subsection{Strict style modulo translation}\label{strtrans} \textit{Since all rules can be expressed in terms of intervals}, they are invariant under translation of the progressions. Thus, to understand them, it is enough to study the representatives under translation of the form \[(0+kx,c'+k'x).\] See Table~\ref{tab:progtrans} for the counting of all progressions and their types. These outcomes were obtained with Code~1 in the Online Supplement. \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline \multirow{9}{*}{$1057$ prog.}&\multirow{6}{*}{$671$ inadmissible} & $22$ parallel fifths \\ \cline{3-3} & & $49$ parallel octaves and unisons \\ \cline{3-3} & & $88$ hidden fifths \\ \cline{3-3} & & $128$ hidden octaves and unisons \\ \cline{3-3} & & $170$ tritones \\ \cline{3-3} & & $434$ too large skips \\ \cline{2-3} & \multirow{2}{*}{$64$ bad} & $38$ imp. cons. by sim. skips \\ \cline{3-3}& & $26$ hidden tritones \\ \cline{2-3} & \multicolumn{1}{c}{$322$ good} &\\ \hline \end{tabular} \end{center} \caption{Counting of the strict style progressions modulo translation and their types.} \label{tab:progtrans} \end{table} \section{The principles of Mazzola's model}\label{modprin} The following are the conceptual principles of the model; the mathematical details are in Section~\ref{mod}. \subsection{Counterpoint rules obey mathematical laws based on symmetries} Rather than an acoustic or psychological theory, counterpoint is a \textit{composition theory} with its own \textit{logic}. The fourth is an acoustic consonance, but it is a dissonance in counterpoint. Similarly, to justify the tritone prohibitions by saying that these intervals are hard to sing is not satisfactory \cite[p.~75]{Beau}, given the versatility of instrumental counterpoint. 
On the other hand, \textit{symmetries} (for example, transpositions and inversions) are \textit{the natural operations that occur in counterpoint}, so they are a reasonable basis for a model. \subsection{We restrict to the intervals modulo octave} The model does not intend to explain Fux counterpoint rules, but their reduction modulo octave, in the sense of Section~\ref{redstrsty}. This choice is just a \textit{simplification procedure}. \subsection{The division of intervals into consonances and dissonances has a mathematical conceptual characterization} The partition of the twelve intervals modulo octave into consonances (unison, thirds, fifth, and sixths) and dissonances (seconds, fourth, sevenths) has a \textit{unique symmetry} that interchanges them. This fundamental property, discovered by Mazzola, \textit{characterizes} this partition together with a monoid property for consonances \cite[p.~76]{Beau}. \subsection{The discantus is a tangential alteration of the cantus firmus}\label{tang} First, there is an analogy of the intervals group $\mathbb{Z}_{12}$ with a \textit{differentiable manifold}, with associated tangent spaces, given the characterization of this Abelian group as the product $\mathbb{Z}_3\times \mathbb{Z}_4$, which can be interpreted as a \textit{torus of thirds} \cite[Section~12.1]{Beau}. On the other hand, \textit{the discantus is a variation or alteration of the cantus firmus}, according to \cite[p.~73]{Beau} and \cite[p.~549]{Inicount}, and \textit{alterations are tangents} \cite[Section~7.5]{MazzolaTopos}. Thus, \textit{dual numbers}, which are the natural tangents in algebraic geometry \cite[p.~80]{Hartshorne}, suitably model this situation. This point of view of alterations as tangents is likely inspired by infinitesimal deformations in algebraic geometry \cite[Example~9.13.1]{Hartshorne}, which are related to dual numbers. 
\subsection{Alternation and deformation} Contrapuntal tension is not only vertical (between cantus firmus and discantus) but horizontal (between an interval and its successor), as reflected by the division of consonances into perfect (unison/octave and fifth) and imperfect ones (thirds and sixths). This suggests a trace of dissonance inside consonance. These ideas are inspired by the work of Klaus-Jürgen Sachs as explained in \cite[Section~14.1]{Beau} and \cite[p.~646]{MazzolaTopos}. To translate the idea of horizontal tension into the model, we consider progressions of consonances (interval to successor) that can be regarded as symmetry-deformed progressions from a dissonance to a consonance \cite[Section~31.1]{MazzolaTopos}. This \textit{alternation} (or contrast) process resembles the musical resolution sense of dissonances into consonances. \section{Summary of mathematical counterpoint theory}\label{mod} The following is a synthesis of a more robust counterpoint theory \cite{Theo}. We start with the following basic data. \begin{itemize} \item The \textit{ring} $\mathbb{Z}_{12}$, which corresponds to the twelve intervals (up to the octave). \item The group of \textit{symmetries} of $\mathbb{Z}_{12}$, denoted by $\sym(\mathbb{Z}_{12})$, which consists of all affine automorphisms of the form $e^ab:\mathbb{Z}_{12}\longrightarrow \mathbb{Z}_{12}:r\mapsto br+a$, where $b\in \mathbb{Z}_{12}^*=\{1,5,7,11\}$. The symmetries of the form $e^a1$, or $e^a$ for short, called \textit{translations}, correspond to transposition in music. \item The \textit{partition} $\{K,D\}$ of $\mathbb{Z}_{12}$ into \textit{consonances} $K$ and \textit{dissonances} $D$, where $K=\{0,3,4,7,8,9\}$ and $D=\{1,2,5,6,10,11\}$. Mazzola's fundamental observation is that $e^2 5$ is the \textit{unique} $p\in \sym(\mathbb{Z}_{12})$ such that $p(K)=D$. \item The \textit{dual numbers ring} $\mathbb{Z}_{12}[\e]$, which models contrapuntal intervals modulo octave.
It consists of all linear polynomials $a+b\e$ subject to the relation $\e^2=0$. Formally, it is the quotient ring $\mathbb{Z}_{12}[x]/\left\langle x^2\right\rangle$, where $\e$ is the class of $x$. \item The group of \textit{symmetries} of $\mathbb{Z}_{12}[\e]$, denoted by $\sym(\mathbb{Z}_{12}[\e])$, which consists of all affine automorphisms of the form $e^{u+v\e}(c+d\e):\mathbb{Z}_{12}[\e]\longrightarrow \mathbb{Z}_{12}[\e]:x+y\e\mapsto (c+d\e)(x+y\e)+(u+v\e)$, where $c\in \mathbb{Z}_{12}^*$. \item The induced partition $\{K[\e],D[\e]\}$ of $\mathbb{Z}_{12}[\e]$ into \textit{contrapuntal consonances} $K[\e]$ and \textit{contrapuntal dissonances} $D[\e]$, where \[Y[\e]=\{r+k\e\ | \ r\in \mathbb{Z}_{12} \text{ and }k\in Y \}\] for $Y=K,D$. \end{itemize} We consider pairs $(\xi,\eta)$ of contrapuntal consonances in $K[\e]$, which we call \textbf{progressions}, and aim to determine when they are valid for counterpoint. The three principles in Sections~\ref{alternation}, \ref{locchar}, and \ref{variety} serve this purpose. \subsection{Alternation and repetition}\label{alternation} The model deals with \textbf{polarized progressions}, that is, progressions $(\xi,\eta)$ such that $\xi\in g(D[\e])$ and $\eta\in g(K[\e])$ for some $g\in\sym(\mathbb{Z}_{12}[\e])$. As proved in \cite{Theo}, \textit{polarized progressions are all but repetitions}. Since the model selects among them those that are optimal in a musical sense (Definition~\ref{def}), we say that \textit{the model does not decide on the nature of repetitions}. \subsection{Local characterization of consonances and dissonances}\label{locchar} Although alternation helps us regard $\xi$ as a deformed dissonance and $\eta$ as a deformed consonance, we should ensure that they resemble dissonances and consonances in an authentic way. Thus, we would want a property for the partition $\{K[\e],D[\e]\}$, analogous to the uniqueness property of $\{K,D\}$, that defines contrapuntal consonances and dissonances.
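The uniqueness property of $\{K,D\}$ just invoked is a finite statement: only $48$ affine symmetries of $\mathbb{Z}_{12}$ exist, so it can be verified by exhaustive search. A brute-force check (ours):

```python
# Brute-force check that e^2 5 (r -> 5r + 2) is the only affine symmetry
# of Z_12 exchanging the consonances K and the dissonances D.
K = {0, 3, 4, 7, 8, 9}
D = set(range(12)) - K
solutions = [(a, b) for b in (1, 5, 7, 11) for a in range(12)
             if {(b * r + a) % 12 for r in K} == D]
assert solutions == [(2, 5)]   # a = 2, b = 5, i.e. p = e^2 5
```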
The symmetry $e^{2\e} 5$ of $\mathbb{Z}_{12}[\e]$ is a natural extension of $e^2 5$ sending $K[\e]$ to $D[\e]$ since $e^{2\e}5$ is simple and acts on the interval part $b$ of a dual number $a+b\e$ just as $e^25$. But in this case \textit{it is not the unique symmetry} sending $K[\e]$ to $D[\e]$. For example, $e^{2\e+1} 5$ also does. However, we have the following \textit{local uniqueness property} \cite[Proposition~51]{MazzolaTopos}, which is a sort of \textit{characteristic property} \cite[Section~14.2]{Beau} of the consonance/dissonance partition $\{z+K\e,z+D\e\}$ of the \textit{fiber} $z+\mathbb{Z}_{12}\e$, thus offering a local definition of contrapuntal consonance and dissonance.\footnote{See \cite[Section~4.2]{Theo} for the emergence of a similar \textit{characterizing} property of the consonance/dissonance partition $\{z+K\e,z+D\e\}$. The two properties are related in \cite[Section~10]{Theo}, where there is a proof that they are equivalent for the deformed partition $\{g(K[\e]),g(D[\e])\}$ at a given fiber \cite[Theorem~10.6]{Theo}.} \begin{theorem}\label{indxdich} For each cantus firmus note $z\in \mathbb{Z}_{12}$, the symmetry $p^z[\e]$, defined by $p^z[\e]=e^z\circ e^{2\e}5\circ e^{-z}=e^{8z+2\e}5$, is the unique symmetry in $\sym(\mathbb{Z}_{12}[\e])$ that \begin{itemize} \item[1.] leaves invariant the fiber $z+\mathbb{Z}_{12}\e$ and \item[2.] sends $K[\e]$ to $D[\e]$. \end{itemize} In particular, \begin{equation*} p^z[\e](z+K\e)=z+D\e. \end{equation*} \end{theorem} If we apply this property to the deformed partition $\{g(K[\e]),g(D[\e])\}$, we obtain the following one. \begin{itemize} \item The symmetry $p^z[\e]$ is the unique $p'\in \sym(\mathbb{Z}_{12}[\e])$ that leaves invariant the fiber $z+\mathbb{Z}_{12}\e$ and sends $g(K[\e])$ to $g(D[\e])$. \end{itemize} By \cite[Proposition~10.5]{Theo}, this condition is equivalent to the following equation.
\begin{equation}\label{two} p^z[\e](g(K[\e]))=g(D[\e]) \end{equation} Mazzola requires it for the cantus firmus $z$ of $\xi$, which ensures that $\xi$ is a local dissonance. \subsection{Variety}\label{variety} In this model, the variety principle of counterpoint \cite[p.~21]{Fux} corresponds to the condition that there is a maximum of alternations from $\xi$, that is, the cardinality of $g(K[\e])\cap K[\e]$ is maximum among all $g\in \sym(\mathbb{Z}_{12}[\e])$ such that 1. $\xi \in g(D[\e])\cap K[\e]$ (alternation) and 2. Equation~\eqref{two} holds for the cantus firmus $z$ of $\xi$ (local dissonance). \subsection{Admitted successors} Now we can give the definition of an admitted successor of a contrapuntal interval. \begin{definition}\label{def} A \textbf{contrapuntal symmetry} for a consonance $\xi\in K[\e]$, where $\xi=z+k\e$, is a symmetry $g$ of $\mathbb{Z}_{12}[\e]$ such that \begin{enumerate} \item $\xi\in g(D[\e])$, \item the symmetry $p^z[\e]$ sends $g(K[\e])$ to $g(D[\e])$, and \item the cardinality of $g(K[\e])\cap K[\e]$ is maximum among all $g$ satisfying 1 and 2. \end{enumerate} Note that the contrapuntal symmetry for a given consonance is not required to be unique. An \textbf{admitted successor} of a consonance $\xi\in K[\e]$ is an element $\eta$ of $g(K[\e])\cap K[\e]$ for some contrapuntal symmetry $g$. See Figure~\ref{counterpoint}. If $\eta$ is an admitted successor of $\xi$, we say that the progression $(\xi,\eta)$ is \textbf{allowed}. If this does not happen and $(\xi,\eta)$ is polarized, we say that it is \textbf{forbidden}. \fini \end{definition} \begin{figure} \caption{Here, $g$ is a deformation symmetry and $\eta$ an admitted successor of $\xi$.} \label{counterpoint} \end{figure} \subsection{Admitted successors computation}\label{adsuccomp} Denote by $H$ the group\footnote{In fact, it is a group since it consists of all symmetries that leave invariant the fiber $\mathbb{Z}_{12}\e$ at $0$.} of all symmetries of $\mathbb{Z}_{12}[\e]$ of the form $e^{v\e}(c+d\e)$.
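The footnote's group claim can be spot-checked: a short computation (ours) shows that composing $e^{v\e}(c+d\e)$ with $e^{v'\e}(c'+d'\e)$ yields $e^{(cv'+v)\e}\bigl(cc'+(cd'+dc')\e\bigr)$, again of the same form. A numerical sketch:

```python
# Spot-check (ours) that H = { e^{v eps}(c + d eps) } is closed under
# composition; such a symmetry acts by x + y*eps |-> c*x + (c*y + d*x + v)*eps.
import random

UNITS = (1, 5, 7, 11)

def act(h, pt):
    v, c, d = h
    x, y = pt
    return (c * x % 12, (c * y + d * x + v) % 12)

def compose(g, h):
    """g∘h = e^{(c_g v_h + v_g) eps} (c_g c_h + (c_g d_h + d_g c_h) eps)."""
    vg, cg, dg = g
    vh, ch, dh = h
    return ((cg * vh + vg) % 12, cg * ch % 12, (cg * dh + dg * ch) % 12)

rnd = random.Random(0)
for _ in range(500):
    g = (rnd.randrange(12), rnd.choice(UNITS), rnd.randrange(12))
    h = (rnd.randrange(12), rnd.choice(UNITS), rnd.randrange(12))
    gh = compose(g, h)
    assert gh[1] in UNITS                      # linear part stays a unit
    for x in range(12):
        for y in range(12):
            assert act(g, act(h, (x, y))) == act(gh, (x, y))
```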
According to \cite{Theo}, after several simplifications, the admitted successors of $z+k\e\in K[\e]$ are the elements of the sets of the form \begin{equation}\label{transpose} e^z(h(K[\e])\cap K[\e]), \end{equation} where $h=e^{v\e}(c+d\e)\in H$ and \begin{enumerate} \item $v\in k-cD$, \item $5v+2=2c+v$, and \item the value $\rho\sum\limits_{i=0}^{\rho-1}|K_i||K_{e^vc(i)}|$ (the cardinality of $h(K[\e])\cap K[\e]$) is maximum among all $h=e^{v\e}(c+d\e)\in H$ satisfying 1 and 2, where $K_{i}=\{k\in K\ |\ k \equiv i\pmod{\rho}\}$, $\rho=\gcd (d,12)$, and $e^vc$ is reduced modulo $\rho$. \end{enumerate} The symmetries $h$ satisfying the previous conditions are also contrapuntal symmetries for $k\e$. This means that, to compute the admitted successors of a consonance $z+k\e$, it is enough to do so for $k\e$, and then apply the translation $e^z$ (Equation~\eqref{transpose}). This agrees with the previous result that counterpoint rules do not depend on particular notes; see Section~\ref{strtrans}. For these reasons, from now on \textit{we assume that all progressions are of the form $(k\e, c'+k'\e)$}. Finally, the counterpoint symmetries and admitted successors of $k\e\in K[\e]$ can be computed with \textit{Hichert's algorithm}. This algorithm ranges over all symmetries $h$ satisfying (1) and (2), and updates in each step the set of those whose associated cardinalities, according to (3), are maximum so far. Then, with the counterpoint symmetries at hand, we compute the associated successor sets according to the formula \begin{equation}\label{presuc} e^{v\e}(c+d\e)(K[\e])\cap K[\e]=\bigsqcup\limits_{r\in \mathbb{Z}_{12}}cr+((cK+v+dr)\cap K)\e. \end{equation} Table~1 in the Online Supplement shows all counterpoint symmetries and successors. It was obtained with Code~2 in the Online Supplement.
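Conditions (1)-(3) and Equation~\eqref{presuc} translate directly into a brute-force version of Hichert's algorithm. A sketch (ours; the function names are not taken from the cited implementation):

```python
# Brute-force version (ours) of Hichert's algorithm for the consonance k*eps,
# following conditions (1)-(3) and Equation (presuc).
K = {0, 3, 4, 7, 8, 9}
D = set(range(12)) - K
UNITS = (1, 5, 7, 11)

def cardinality(v, c, d):
    """|h(K[eps]) ∩ K[eps]| for h = e^{v eps}(c + d eps), via Eq. (presuc)."""
    return sum(len({(c * k + v + d * r) % 12 for k in K} & K) for r in range(12))

def contrapuntal_symmetries(k):
    """All h = (v, c, d) in H satisfying conditions (1)-(3) for k*eps."""
    candidates = [(v, c, d)
                  for c in UNITS for d in range(12)
                  for v in {(k - c * delta) % 12 for delta in D}      # (1)
                  if (5 * v + 2) % 12 == (2 * c + v) % 12]            # (2)
    best = max(cardinality(v, c, d) for v, c, d in candidates)
    return [h for h in candidates if cardinality(*h) == best]         # (3)

def successors(k):
    """Admitted successors of k*eps as pairs (cantus firmus note, eps-part)."""
    out = set()
    for v, c, d in contrapuntal_symmetries(k):
        for r in range(12):
            out |= {(c * r % 12, y)
                    for y in {(c * k0 + v + d * r) % 12 for k0 in K} & K}
    return out
```

For a general consonance $z+k\e$ one then translates the result by $e^z$, per Equation~\eqref{transpose}.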
According to Code~3 in the Online Supplement, out of the $287$ progressions that occur in the diatonic scale $X$, $6$ are repetitions (non-polarized), $250$ are allowed, and the remaining $31$ are forbidden. Next, we reduce the strict style so as to compare it with the model. \section{The reduced strict style}\label{redstrsty} Here we reduce, modulo $12$ (up to the octave), the strict style modulo translation studied in Section~\ref{strsty}. We use the projection that sends a progression $(kx,c'+k'x)$ in the strict style to $(\pi(k)\e,\pi(c')+\pi(k')\e)$, where $\pi:\mathbb{Z}\longrightarrow \mathbb{Z}_{12}$ is the natural projection. We first note that \textit{this projection covers all progressions in a diatonic scale}. In fact, each such progression $(k\e,c'+k'\e)$, regarded as $(kx,c'+k'x)$, satisfies all preliminary rules in Section~\ref{prerul}, except perhaps the condition that the maximum change between the discantus notes is an octave. In such a case, we transpose the second interval an eight downwards and obtain a strict style progression that projects onto $(k\e,c'+k'\e)$. Now we define the progression \textit{rules of the reduced strict style}. According to \cite{MazzMuzz}, a progression in a diatonic scale is \begin{itemize} \item[] \textbf{good}, if it is the projection of at least one good progression, \item[] \textbf{inadmissible}, if it is derived from nothing but inadmissible progressions, and \item[] \textbf{bad}, if it is the projection of at least one bad progression, but not of a good one. \end{itemize} This definition leads to the following characterization of these rules. Code~4 in the Online Supplement contains some computations involved.
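The three definitions above amount to a small decision rule on the set of types of a reduced progression's strict-style preimages. A sketch (ours):

```python
# Type of a reduced progression from the set of types of its strict-style
# preimages ('good', 'bad', 'inadmissible'), per the definitions above.
def reduced_type(preimage_types):
    if 'good' in preimage_types:
        return 'good'            # projection of at least one good progression
    if preimage_types == {'inadmissible'}:
        return 'inadmissible'    # derived from nothing but inadmissible ones
    return 'bad'                 # at least one bad preimage, no good one

assert reduced_type({'good', 'inadmissible'}) == 'good'
assert reduced_type({'inadmissible'}) == 'inadmissible'
assert reduced_type({'bad', 'inadmissible'}) == 'bad'
```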
\subsection*{Inadmissible progressions} Under projection, unison repetitions remain unchanged and they are good, parallel unisons or eights become parallel unisons or unison repetitions, parallel fifths become parallel fifths or fifth repetitions (good), and tritones, possibly greater than the eight, become tritones. \begin{itemize} \item[] \textbf{Parallel unisons and fifths} \textit{Some parallel unisons remain inadmissible}. If the skip is the tritone, then it is inadmissible. Now, parallel unisons come from three other kinds of progressions: 1. unison to octave, 2. parallel eights (inadmissible), and 3. octave to unison. Cases 1 and 3 are inadmissible if and only if $c'$ or $12-c'$ are greater than $7$, that is, $c'\in \{1,2,3,4,8,9,10,11\}$. Otherwise, we have good progressions by contrary motion and skips not too large. \textit{Parallel fifths are inadmissible}. They can only be the octave reduction of parallel fifths, which are inadmissible, because twelfths are not in the range of the strict style. \item[] \textbf{Tritones} \textit{Tritones are inadmissible}. Tritones only come from tritones, which are inadmissible. \item[] \textbf{Projected hidden parallelisms} \textit{Hidden parallel fifths from a sixth are inadmissible}. They are only the projection of progressions of the same kind. The remaining cases that involve no tritones are not inadmissible (Code~4). \item[] \textbf{Projected too large skips} All cases fall into the previous inadmissible ones or are not inadmissible (Code~4). \end{itemize} \subsection*{Bad progressions} Bad progressions in $\mathbb{Z}_{12}[\e]$ have to be projections of at least one bad progression in $\mathbb{Z}[\e]$. \begin{itemize} \item[] \textbf{Projected imperfect consonances by similar skips} \textit{Here, the unique bad progression is $(0+7\e,5+9\e)$}; see Code~4. The other ones are good. \item[] \textbf{Hidden tritones} \textit{Hidden tritones are bad}. They only come from hidden tritones, which are bad.
\end{itemize} To sum up, in the reduced strict style, the inadmissible progressions are all tritones and, excepting them, some parallel unisons, all parallel fifths, and all hidden parallel fifths from a sixth. Moreover, as noted in \cite{MazzMuzz}, \textit{only the parallel fifths and tritone rules preserve their generality} (unrestricted validity). \subsection*{On semantics} However, among the previous categories (\textit{inadmissible}, \textit{bad}, \textit{good}), the most appropriate ones for a quantitative assessment of the agreement between the model and the original rules seem to be \textit{inadmissible} and \textit{bad}. Certainly, an inadmissible progression only comes from inadmissible progressions, so it is \textit{unequivocally inadmissible}; bad progressions are optional and, in this case, unequivocally bad. In contrast, good progressions also come from inadmissible or bad progressions, except for the four imperfect consonance repetitions, which only come from good ones, as the following definition and Table~\ref{tab:progredu} show. We have a \textbf{refined semantics} of the reduced strict style. We preserve the definition of inadmissible and bad progressions. Regarding a remaining good progression, we sub-classify it as \begin{itemize} \item[] \textbf{good-good}, if it is derived from nothing but good progressions, \item[] \textbf{ambiguous}, if it also comes from at least one inadmissible progression, and \item[] \textbf{good-bad}, if it also comes from at least one bad progression but not from an inadmissible one. \end{itemize} This definition offers a fairer classification: ambiguous progressions do not have a defined value comparable to \textit{allowed} and \textit{forbidden}. Table~\ref{tab:progredu} contains the counting of all reduced strict style progressions and their types, according to Code~4 in the Online Supplement.
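In the same spirit, the refined semantics sub-classifies a good reduced progression by the types of its strict-style preimages. A sketch (ours):

```python
# Refined semantics (sketch): sub-classification of a reduced progression
# that has at least one good strict-style preimage.
def refined_type(preimage_types):
    assert 'good' in preimage_types
    if preimage_types == {'good'}:
        return 'good-good'       # derived from nothing but good progressions
    if 'inadmissible' in preimage_types:
        return 'ambiguous'       # also has an inadmissible preimage
    return 'good-bad'            # also has a bad preimage, no inadmissible one

assert refined_type({'good'}) == 'good-good'
assert refined_type({'good', 'inadmissible', 'bad'}) == 'ambiguous'
assert refined_type({'good', 'bad'}) == 'good-bad'
```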
\begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline \multirow{9}{*}{$287$ prog.}&\multirow{4}{*}{$74$ inadmissible} & $10$ parallel fifths* \\ \cline{3-3} & & $9$ parallel unisons \\ \cline{3-3} & & $13$ hidden fifths from a sixth \\ \cline{3-3} & & $45$ tritones* \\ \cline{2-3} & \multirow{2}{*}{$23$ bad} & $1$ proj. imp. cons. by sim. skips \\ \cline{3-3}& & $22$ hidden tritones \\ \cline{2-3} & \multirow{3}{*}{$190$ good} & $4$ good-good \\ \cline{3-3}& & $16$ good-bad \\ \cline{3-3}& & $170$ ambiguous \\ \hline \end{tabular} \end{center} \caption{Counting of the reduced strict style progressions and their types. The symbol * refers to rules whose generality remains under projection.} \label{tab:progredu} \end{table} \section{The reduced strict style and the model}\label{comp} Table~\ref{tab:comp} compares all types of reduced strict style progressions with allowed and forbidden ones, according to Code~5 in the Online Supplement. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & inad.& inad.* & bad & good & good* & good-good & good-bad & amb.\\ \hline allowed & $55$ & $36$& $17$ & $178$ & $197$& $0$ & $16$&$162$\\ \hline forbidden & $19$ & $19$ & $6$ & $6$ & $6$& $0$& $0$& $6$\\ \hline repetitions & $0$ & $0$ & $0$ & $6$ & $6$ & $4$& $0$& $2$\\ \hline \end{tabular} \end{center} \caption{Inadmissible, bad, and good progressions (reduced strict style) versus allowed and forbidden ones (model). The symbol * refers to rules whose generality remains under projection.} \label{tab:comp} \end{table} Table~\ref{tab:inadbad} shows all allowed and forbidden kinds of inadmissible and bad progressions, according to Code~6 in the Online Supplement. We observe that \textit{the model predicts the parallel fifths prohibition} and some tritone rules. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline &par. 5ths& par. un. & hid. 5ths& trit.& $(0+7\e,5+9\e)$& hid. trit.\\ \hline allow. & $0$& $8$ & $13$& $36$ & yes& $16$\\ \hline forb.
& $10$ & $1$ & $0$& $9$ & no & $6$ \\ \hline \end{tabular} \end{center} \caption{Allowed and forbidden kinds of inadmissible and bad progressions.} \label{tab:inadbad} \end{table} We can measure the agreement between the model and the reduced strict style by means of the number of matches and mismatches. The matches are all allowed good, forbidden inadmissible, and (possibly) forbidden bad progressions. The mismatches are all forbidden good and allowed inadmissible ones. From Table~\ref{tab:comp}, we observe that there are $203$ matches, including forbidden bad progressions, and $61$ mismatches. If we only take into account rules whose generality remains under projection, we obtain $222$ matches and $42$ mismatches. We define matches and mismatches in the refined semantics by replacing good progressions with good-good progressions in the previous paragraph. In the case of the classical model, there are $25$ matches and $55$ mismatches. But, since there are only four good-good progressions, these measures are not appropriate. For example, the trivial model that forbids all progressions, except the four imperfect consonance repetitions, has $101$ matches and no mismatches---the best possible result. This means that the ambivalence of $186$ good progressions (ambiguous and good-bad) does not allow an appropriate semantics on reduced progressions that captures the essence of the original good ones.\footnote{In other words, a strong attribute of being good is invisible or undetectable from the point of view of the reduced progressions.} However, we use these measures to discard possible ambiguities. Thus, the careful review of the semantics offers a different point of view on the results of the model. For instance, we observe that the twelve examples (other than repetitions) of progressions forbidden by Mazzola but not by Fux, according to \cite[Fig.~3b]{Tympolem}, are the six forbidden bad together with the six forbidden good progressions in Table~\ref{tab:comp}.
The last six progressions are ambiguous according to the refined semantics, so the twelve examples are not actually wrong predictions of the model. \section{Some questions on the model} The conceptual review of the principles in Sections~\ref{tang} and \ref{locchar} leads to the following questions. \begin{itemize} \item Why is the discantus necessarily a tangential alteration of the cantus firmus? \item Why do we not require the local characterization condition on $\{g(K[\e]),g(D[\e])\}$ for the cantus firmus of a possible successor $\eta$? \item More radically, why do we not require the local characterization on $\{g(K[\e]),g(D[\e])\}$ for all fibers? \end{itemize} These questions inspire the following three variations of the model. \section{First variation of the model}\label{natvar} Besides the uniqueness condition on the partition $\{K,D\}$, the dual numbers structure on counterpoint intervals is the most important structural feature of Mazzola's model. However, it is not entirely clear why we use the structure of the dual numbers ring to model intervals and not another one. Another simple choice for a ring structure on counterpoint intervals is that induced by the \textit{product} ring $\mathbb{Z}_{12}\times \mathbb{Z}_{12}$, whose elements $(c,d)$ can be regarded as a pair of cantus firmus and discantus notes that occur simultaneously. If we transfer its structure to contrapuntal intervals, under the map that sends $(c,d)$ to $(c,d-c)$ (cantus firmus and interval with the discantus), we obtain the structure of the quotient ring $\mathbb{Z}_{12}[x]/\left\langle x^2-x\right\rangle$---a variation of the dual numbers ring. In this case, we denote the class of $x$ by $\x$, and the ring by $\mathbb{Z}_{12}[\x]$. The latter consists of linear polynomials $a+b\x$, where $a,b\in\mathbb{Z}_{12}$ and $\x^2=\x$, that is, $\x$ is idempotent. The rings $\mathbb{Z}_{12}[\e]$ and $\mathbb{Z}_{12}[\x]$ are equal as Abelian groups but they differ in their products.
In $\mathbb{Z}_{12}[\e]$, $d\e d'\e=dd'\e^2=0$ for all $d,d'\in \mathbb{Z}_{12}$, so intervallic variations are regarded as \textit{infinitesimals}, whereas in $\mathbb{Z}_{12}[\x]$, $d\x d'\x=dd'\x^2=dd'\x$, so these variations are just \textit{integers modulo $12$}. According to \cite{Theo}, as in the case of dual numbers, symmetries of $\mathbb{Z}_{12}[\x]$ are of the form $e^{u+v\x}(c+d\x)$, but both $c$ and $c+d$ are in $\mathbb{Z}_{12}^*$ now. In the same way, the extension of consonances and dissonances $\{K[\x],D[\x]\}$ to counterpoint intervals can be defined, and Theorem~\ref{indxdich} remains valid after replacing $\e$ with $\x$. The non-polarized progressions, according to the definition in Section~\ref{alternation}, are all repetitions together with all parallelisms by tritone leaps in this case. Definition~\ref{def} is the same because the motivations remain valid, except that we replace $\e$ by $\x$. Regarding Section~\ref{adsuccomp}, now $H$ consists of all symmetries of $\mathbb{Z}_{12}[\x]$ of the form $e^{v\x}(c+d\x)$, which satisfy $c,c+d\in \mathbb{Z}_{12}^*$. However, conditions 1 and 2 become \begin{enumerate} \item $v\in k-(c+d)D$ and \item $5v+2=2(c+d)+v$, \end{enumerate} whereas 3 remains unchanged. We compute the successor sets with the formula: \begin{equation} e^{v\x}(c+d\x)(K[\x])\cap K[\x]=\bigsqcup\limits_{r\in \mathbb{Z}_{12}}cr+(((c+d)K+v+dr)\cap K)\x. \end{equation} The results of this model, and their comparison to the reduced strict style, are summarized in Tables \ref{tab:comp1} and \ref{tab:inadbad1}. Here, out of the $287$ progressions that occur in a diatonic scale, $7$ are non-polarized, $240$ are allowed, and $40$ are forbidden. These results correspond to Code~7 in the Online Supplement. We observe that the model predicts $21$ inadmissible progressions. Actually, it predicts $18$ out of the $19$ inadmissible progressions of the classical model together with three new hidden fifths from a sixth.
We lose the parallel unison by tritone skip, since it is non-polarized here. Regarding bad progressions, it predicts the same six hidden tritones. However, in this case we have $198$ matches and $65$ mismatches with the reduced strict style. In the refined semantics the respective measures are $27$ and $52$. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & inad.& bad & good & good-good & good-bad & amb.\\ \hline allowed & $52$ & $17$ & $171$ & $0$ & $15$&$156$\\ \hline forbidden & $21$ & $6$ & $13$ & $0$& $1$& $12$\\ \hline non-polarized & $1$ & $0$ & $6$ & $4$& $0$& $2$\\ \hline \end{tabular} \end{center} \caption{Inadmissible, bad, and good progressions (reduced strict style) versus allowed and forbidden ones (first variation).} \label{tab:comp1} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline &par. 5ths& par. un. & hid. 5ths & trit.& $(0+7\x,5+9\x)$& hid. trit.\\ \hline allow. & $0$& $8$ & $10$& $36$ & yes& $16$\\ \hline forb. & $10$ & $0$ & $3$& $8$ & no & $6$ \\ \hline non-pol. & $0$ & $1$ & $0$& $1$ & no & $0$ \\ \hline \end{tabular} \end{center} \caption{Allowed and forbidden kinds of inadmissible progressions (first variation).} \label{tab:inadbad1} \end{table} \section{Second variation of the model}\label{2var} If we also require the local characterization on the fibers of the possible successors of a consonance, then we replace condition (3) in Definition~\ref{def} with the following one, where $P(z)$ denotes the property (2) in Definition~\ref{def}. \begin{itemize} \item The cardinality of $\{z'+k'\e \in g(K[\e])\cap K[\e]\ |\ P(z')\}$ is maximum among all $g$ satisfying (1) and (2). \end{itemize} Remarkably, this variation is equivalent to the following one, as proved in \cite[Section~11.3]{Theo}.
\section{Third variation of the model}\label{3var} If we require the local characterization property on the deformed partition for all fibers, which amounts to a \textit{global condition}, then we replace condition (2) in Definition~\ref{def} with the following one. \begin{itemize} \item For all $z\in \mathbb{Z}_{12}$, $P(z)$ holds. \end{itemize} According to \cite[Section~11.3]{Theo}, to compute the admitted successors according to this definition, it is enough to follow the procedure in Section~\ref{adsuccomp} by adding to condition (2) the equation $5d=d$. The final results of the second and third variations coincide, and they are the same for the case of $\mathbb{Z}_{12}[\x]$ \cite[Sections~11.2-11.3]{Theo}, except for the difference that the inadmissible parallel unison by tritone skip is forbidden in the dual numbers case but non-polarized in the case of $\mathbb{Z}_{12}[\x]$. The comparison of these results with the reduced strict style is in Tables~\ref{tab:comp2} and \ref{tab:inadbad2}. They correspond to Code~8 in the Online Supplement. We only present the results for the case of $\mathbb{Z}_{12}[\e]$. Here, out of the $287$ progressions that occur in a diatonic scale, six are repetitions, $46$ are forbidden, and $235$ are allowed. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & inad.& bad & good & good-good & good-bad & amb.\\ \hline allowed & $53$ & $15$ & $167$ & $0$ & $12$&$155$\\ \hline forbidden & $21$ & $8$ & $17$ & $0$& $4$& $13$\\ \hline repetitions & $0$ & $0$ & $6$ & $4$& $0$& $2$\\ \hline \end{tabular} \end{center} \caption{Inadmissible, bad, and good progressions (reduced strict style) versus allowed and forbidden ones (final variations).} \label{tab:comp2} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline &par. 5ths& par. un. & hid. 5ths& trit.& $(0+7\e,5+9\e)$& hid. trit.\\ \hline allow. & $0$& $4$ & $13$& $38$ & yes& $14$\\ \hline forb.
& $10$ & $5$ & $0$& $7$ & no & $8$ \\ \hline \end{tabular} \end{center} \caption{Allowed and forbidden kinds of inadmissible and bad progressions (final variations).} \label{tab:inadbad2} \end{table} The most important feature of these results, regarding the original model, is the \textit{prediction of new parallel unisons}. However, we have $196$ matches and $70$ mismatches. In the refined semantics, the respective measures are $29$ and $53$. \section{Conclusions}\label{conc} \subsection*{Economy} Essentially, we deduce all mathematical results of the theory from just one fact: the uniqueness property of the consonance/dissonance partition. In particular, we deduce the parallel fifths prohibition. \subsection*{Generality and universality} The models have a generalization to \textit{any} (not necessarily commutative) ring \cite{Theo}, taking the place of $\mathbb{Z}_{12}$. This is just an expression of the original intention of establishing a universal counterpoint by detecting the essential features of the Renaissance incarnation. Thus, the model paves the way to many other forms of counterpoint, which opens up many fields of musical experimentation. \subsection*{Quantitative interpretation of the results} Table~\ref{tab:match} shows the matches and mismatches of all models with the reduced strict style. There are mismatches in all cases, which might be interpreted as a weakness of the model, but in the original semantics the number of matches is about three times the number of mismatches. The number of matches and mismatches can increase or decrease from the classical to the final variations according to the semantics used, but the differences do not exceed five units. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & matches & mismatches & matches (ref. sem.) & mismatches (ref. sem.)\\ \hline classical & $203$ & $61$ & $25$ & $55$ \\ \hline first var. & $198$ & $65$ & $27$ & $52$\\ \hline final var.
& $196$ & $70$ & $29$ & $53$\\ \hline \end{tabular} \end{center} \caption{Matches and mismatches of all models with the reduced strict style according to the original semantics and the refined semantics. Matches include all forbidden inadmissible, forbidden bad, and allowed good progressions. Mismatches include all forbidden good and allowed inadmissible progressions. In the refined semantics, we replace good progressions with good-good progressions, which do not come from inadmissible or bad progressions in the strict style.} \label{tab:match} \end{table} \subsection*{Qualitative interpretation of the results} Rather than a mere description of the Renaissance counterpoint rules (Section~\ref{strtrans}), we believe that the point is what mathematical counterpoint theory can explain about them. Mazzola's model explains the parallel fifths rule in its generality and some tritone prohibitions. Additionally, the first variation of the model explains some hidden fifths, and the final variations explain some parallel eights. \subsection{Recovery of the original phenomenon from the model}\label{recov} We can characterize perfect consonances as $0$ (which belongs to any ring) and the consonance with a general parallelism prohibition. The tritone can be characterized as the distinguished skip that makes a parallel unison forbidden. In the first variation, the tritone is the non-polarized skip. In this way, we can recover the original rules by establishing as prohibitions the parallel and hidden perfect consonances, the distinguished skip, and the too large skips. \section{Further developments}\label{further} \subsection*{Generalizations and structural approaches} Further generalizations to categories, beyond the one to arbitrary rings, could be developed.
The use of the product ring $\mathbb{Z}_{12}\times \mathbb{Z}_{12}$ to model contrapuntal intervals, as an alternative to the dual numbers, helps express the model in terms of universal constructions in the category of rings, so it could be translated to other categories. Also, the mathematical counterpoint theory is an independent field of study. There are several open questions, such as whether there is an entirely structural proof of the \textit{Counterpoint theorem}, which establishes all counterpoint symmetries and admitted successors of a given consonance, beyond algorithmic computations. Some recent advances in that direction, such as the counting formulas for the cardinalities of successor sets and a maximization criterion, can be found in \cite{Theo}. \subsection*{Study of new counterpoint worlds} The musical study of these generalizations is an open field of research. It can start by establishing rules based on the model, analogous to the original rules of counterpoint, following the procedure suggested in Section~\ref{recov}. Then, composition processes should generate new music in these worlds, as mentioned in Section~\ref{intro}. The counterpoint world induced by Scriabin's mystic chord has also been a subject of study \cite{Scriabinworld}. \subsection*{Musicology} An initial comparison of the model's results with the \textit{Missa Papae Marcelli} can be found in \cite{Nieto}. The analysis of other works of Renaissance polyphony is also a pending task. \subsection*{Remaining species} Although there are some advances regarding the second species \cite{secsp}, a mathematical theory of the remaining species, and of the cases of three or more voices, is not yet at hand. \section*{Funding} \addcontentsline{toc}{section}{Funding} This work was supported by Programa de Becas Posdoctorales en la UNAM 2019, which is coordinated by Direcci\'{o}n General de Asuntos del Personal Acad\'{e}mico (DGAPA) at Universidad Nacional Aut\'{o}noma de M\'{e}xico.
\section*{Supplemental online material} \addcontentsline{toc}{section}{Supplemental online material} Supplemental online material for this article can be accessed at \url{doi-provided-by-publisher} and/or \url{https://www.dropbox.com/s/1thilv2ji6cem0u/supplement.pdf?dl=0}. In the Online Supplement we include all \textit{Python} codes used in this paper. \section*{Disclosure statement} \addcontentsline{toc}{section}{Disclosure statement} No potential conflict of interest was reported by the authors. \\ \\ \addcontentsline{toc}{section}{References} \end{document}
\begin{document} \title[Quantum mechanical detector model for moving, spread-out particles] {Investigation of a quantum mechanical detector model for moving, spread-out particles} \author{G C Hegerfeldt$^1$, J T Neumann$^1$ and L S Schulman$^2$} \address{$^1$ Institut f\"ur Theoretische Physik, Universit\"at G\"ottingen, Friedrich-Hund-Platz 1, 37077 G\"ottingen, Germany} \address{$^2$ Physics Department, Clarkson University, Potsdam, New York 13699-5820, USA} \eads{\mailto{[email protected]}, \mailto{[email protected]}, \mailto{[email protected]}} \begin{abstract} We investigate a fully quantum mechanical spin model for the detection of a moving particle. This model, developed in earlier work, is based on a collection of spins at fixed locations and in a metastable state, with the particle locally enhancing the coupling of the spins to an environment of bosons. The appearance of bosons from particular spins signals the presence of the particle at the spin location, and the first boson indicates its arrival. The original model used discrete boson modes. Here we treat the continuum limit, under the assumption of the Markov property, and calculate the arrival-time distribution for a particle to reach a specific region. \end{abstract} \pacs{\textbf{03.65.Xp, 03.65.Ta, 05.50.+q}} \submitto{\JPA} \section{Introduction} Until recently, in time-of-flight measurements for particles or atoms the quantum nature of the center-of-mass motion usually played no role since the particles or atoms were very fast. However, the advance of cooling techniques has made it possible to create ultracold gases in a trap and produce very slow atoms, e.g., by opening the trap. For these low velocities the quantum nature of the center-of-mass motion of an atom can have noticeable effects, as the remarkable experiments of Szriftgiser et al.\ have shown \cite{sgad1996}.
In the simplest quantum mechanical formulation of a time-of-flight measurement one would create a particle at $t=0$ with a localized but extended wave function and then ask for the arrival time of the particle at some distant point. Repeating this one would get an arrival-time distribution that would depend on the particle's wave function. Similarly one might ask for passage or transit times through a region. Such questions, and more generally the role of time in quantum mechanics, have attracted much interest in recent years \cite{mme2002, lss1997}. But how should one measure the arrival time of a particle or atom at some particular point and what should the resulting distribution look like? Allcock \cite{ga1969} made an \emph{ad hoc} model of an arrival-time measurement using an imaginary step potential, which leads to an `absorption' of the wave packet; he then identified the absorption rate with the arrival-time distribution. In general, this distribution will not be normalized since part of the wave packet will be reflected from the imaginary step potential rather than being absorbed. Also, part of the wave packet may penetrate the step to some depth before being absorbed, thus causing a detection `delay.' As Allcock noticed, decreasing one effect will typically enlarge the other. Kijowski proposed physically motivated axioms from which he derived an `ideal' arrival-time distribution for a free quantum particle coming from one direction \cite{jk1974}. The resulting distribution agrees with an `approximate distribution' proposed heuristically by Allcock. This distribution has been related \cite{rg1997} to the arrival-time operator of Aharonov and Bohm \cite{ab_toa1961} (for more on the latter see \cite{oru1999} and references therein). No measurement procedure for the distribution was proposed, and its status, properties and generalizations are still being critically discussed in the literature; see e.g.\ \cite{crl2002, emnr2003, crl2005}. 
Halliwell \cite{jjh1999} employed a detection model based on a single spin coupled to a boson bath, a greatly simplified version of a general quantum mechanical detector model that was proposed in \cite{gs1990} and elaborated in \cite{lss1991, lss1997}. Working in one space dimension and using Bloch equations he arrived at a Schr\"odinger equation with an imaginary potential, thus giving a basis for Allcock's approach. An operational and realistic laser-based approach to the arrival-time problem was investigated in \cite{dambo2002, hsm2003, bn2003b, dambo2003, hsmn2004, rdnmh2004, hhm2005}. This approach proposes to measure the arrival time by means of laser-induced fluorescence \cite{dambo2002}. The idea is to consider a two-level atom with center-of-mass motion, to illuminate some region of space with a laser, and to take the detection time of the first fluorescence photon as the arrival time of the atom at the (sharp) onset of the laser. In this approach one has to deal with the typical problems of delay due to the time needed for pumping and decay of the excited state. There is also reflection without detection when the atom is reflected from the laser beam in the ground state without emitting a photon. Yet, interesting results could be derived. In the limit of a weak laser, there is almost no reflection but a strong delay due to the weak pumping to the upper energy level of the two-level system; dealing with this delay by means of a deconvolution, one recovers the flux at the position of the onset of the laser from the first-photon distribution \cite{dambo2002}. On the other hand, in the limit of strong pumping, reflection becomes dominant and the first-photon distribution is clearly not normalized; normalizing it via the operator normalization method of Brunetti and Fredenhagen, which preserves the bilinear structure of the distribution \cite{bf2002}, one recovers Kijowski's arrival-time distribution from the first-photon distribution \cite{hsm2003}.
In this way, Kijowski's axiomatic distribution can be related to a particular measuring process. Further, in a certain limit it is possible to derive a closed one-channel equation for the ground state governing the first-photon distribution \cite{bn2003b}. This equation contains an in general complex potential which becomes purely imaginary for zero laser detuning. In this way the fluorescence model makes a connection to Allcock's \emph{ad hoc} ansatz of an imaginary potential. In the fluorescence model there is a back-reaction of the measurement on the center-of-mass motion of the atom, and this might cause deviations from an ideal distribution. It thus seems a good idea to use a measurement procedure that does not interact directly with the particle through its internal degrees of freedom, but rather to regard the particle only as a catalyst for a transition in a detector or its associated environment. Just such a detection model was developed in \cite{lss1997, gs1990, lss1991}. The model consists of a three-dimensional array of $D$ spins (the `detector') with ferromagnetic interaction. In the presence of a homogeneous magnetic field, and for sufficiently low temperature, all spins are aligned with the field. Reversing the magnetic field suddenly, such that the spins cannot follow the reversal, one can produce a metastable state of this compound spin system. The spins are weakly coupled to a bath of bosons. There is a particle to be detected and its effect on the collection of spins is to strongly enhance the spin-boson coupling when the particle's wave function overlaps that of a detector spin. Thus when the particle is close to a spin this spin flips much faster by virtue of the increased coupling to the bath. By means of the ferromagnetic interaction, this in turn triggers the subsequent spontaneous flipping of all spins even in the absence of the particle. 
In this way, the single spin flip is amplified to a macroscopic event and the associated bosons can be measured. The details of the amplification process and the probability of false detection due to spontaneous spin flips were considered in \cite{lss1997, gs1990, lss1991}. The motion of the particle, whose presence induces the first spin flip, was treated classically in the calculations. In the following, we will concentrate on the full quantum description of this first spin flip and the quantum mechanical aspects of the particle's motion, and comment only briefly on processes internal to the detector. In this paper we investigate this detector model in the limit of continuous boson modes, under the condition that the spin-boson interaction satisfies the Markov property (see (\ref{markovianproperty})), and use it to determine the arrival-time distribution of a spatially spread-out particle. It turns out that one is again led to a Schr\"odinger equation with an imaginary potential and the corresponding arrival-time distribution is similar to that of the fluorescence model. In this detector model there is also a back-reaction on the particle of interest. In order to eliminate this back-reaction we discuss the idea of decreasing the spin-bath coupling while simultaneously increasing the number of spins. It is shown that even in the limit when the spin-bath coupling goes to zero and the number of spins to infinity, there remains a back-reaction. The plan of the paper is as follows. In Section \ref{model} the detector model is reviewed, and in Section \ref{direct_approach} the arrival-time distribution obtained from a simplified version of the model is calculated by means of standard quantum mechanics. Another approach to calculate the arrival-time distribution is presented in Section \ref{calc} and compared to the straightforward calculation.
The advantage of this second approach is that it is easily extended to the full model (the corresponding calculations are shown in \ref{full_model}) and that it allows, to some extent, for an analytical treatment of the arrival-time problem. In Section \ref{relation} we discuss the relation of the present detection scheme to the fluorescence model. Section \ref{discussion} deals with the limit of zero coupling and an infinity of spins, and remarks on possible schemes for the optimization of the model and on its application to passage-time measurements. \section{The detector model} \label{model} The detector model of \cite{lss1997, gs1990, lss1991} is based on the following Hamiltonian. The excited state of the $j^\mathrm{th}$ spin is denoted by $\ket{\uparrow}_j$ and its ground state by $\ket{\downarrow}_j$. Define \begin{equation} \hat{\sigma}_z^{(j)} \equiv \ket{\uparrow}_{j\;j}\!\bra{\uparrow} \, - \, \ket{\downarrow}_{j\;j} \! \bra{\downarrow} \,. \label{detector} \end{equation} The Hamiltonian for the detector alone is given by \begin{equation} H_\mathrm{det} = \frac{1}{2}\sum_j \hbar\omega_0^{(j)} \hat{\sigma}_z^{(j)} - \frac{1}{2}\sum_{j<k} \hbar\omega_J^{(jk)} \hat{\sigma}_z^{(j)} \otimes \hat{\sigma}_z^{(k)}\,, \label{hdet} \end{equation} where $\hbar \omega_0^{(j)}$ is the energy difference between the ground state and the excited state of the $j^\mathrm{th}$ spin, and $\hbar\omega_J^{(jk)}\ge 0$ is the coupling energy between the spins $j$ and $k$. In addition there is a bath of bosons (e.g.\ phonons or photons) with free Hamiltonian \begin{equation} H_\mathrm{bath} = \sum_{\bell} \hbar \omega_\ell \hat{a}_{\bell}^\dagger \hat{a}_{\bell}, \end{equation} where $\hat{a}_{\bell}$ is the annihilation operator for a boson with wave vector $\bell$. Later a continuum limit will be taken.
In general, the spins will be coupled to the bath, and there is the possibility of spontaneous spin flips due to \begin{equation} H_\mathrm{spon} = \sum_{j,\bell}\hbar \left( \gamma_{\bell}^{(j)} \rme^{\rmi f_{\bell}^{(j)}} \hat{a}_{\bell}^\dagger \hat{\sigma}_-^{(j)} + \mathrm{h.c.} \right), \label{spont} \end{equation} where \begin{equation} \hat{\sigma}_-^{(j)} \equiv \ket{\downarrow}_{j\;j} \! \bra{\uparrow}, \quad \hat{\sigma}_+^{(j)} \equiv \left( \hat{\sigma}_-^{(j)} \right)^\dagger = \ket{\uparrow}_{j\;j} \! \bra{\downarrow}, \end{equation} and the coupling constants $\gamma_{\bell}^{(j)}$ and the phases $f_{\bell}^{(j)}$ depend on the particular realization of the detector and the bath. The coupling between the $j^\mathrm{th}$ spin and the bath is assumed to be strongly enhanced when the particle is close to this spin. Let the $j^\mathrm{th}$ spin be located in a spatial region ${\mathcal G}_j$. The enhancement is taken to be proportional to a sensitivity function $\chi^{(j)} \left( \mathbf{x} \right)$ which vanishes outside ${\mathcal G}_j$, e.g.\ the characteristic function which is 1 on ${\mathcal G}_j$ and zero outside. The additional coupling depending on the particle's position is thus \begin{equation} H_\mathrm{coup} = \sum_j \chi^{(j)} \left( \hat{\mathbf{x}} \right) \sum_{\bell}\hbar \left( g_{\bell}^{(j)} \rme^{\rmi f_{\bell}^{(j)}} \hat{a}_{\bell}^\dagger \hat{\sigma}_-^{(j)} + \mathrm{h.c.} \right), \label{general_coupling} \end{equation} with $\left| g_{\bell}^{(j)} \right|^2 \gg \left| \gamma_{\bell}^{(j)} \right|^2$. The full Hamiltonian is \begin{equation} H = H_\mathrm{part} + H_\mathrm{det} + H_\mathrm{bath} + H_\mathrm{spon} + H_\mathrm{coup} \,, \label{hamiltonian} \end{equation} where $H_\mathrm{part}$ is the free Hamiltonian of the particle, \begin{equation}\label{free} H_\mathrm{part} = \hat{\mathbf{p}}^2/2m\,.
\end{equation} Note that the `excitation number', i.e., the sum of the number of bosons and the number of up-spins, is a conserved quantity. The detection process now starts with the bath in its ground state $\ket{0}$ (no bosons present) and all $D$ spins in the excited state $\ket{\uparrow \cdots \uparrow}$. As a consequence of the excitation number conservation, it is sufficient to measure the state $\ket{0}$ of the bath in order to check whether or not any spin has flipped. For $\hbar \omega_0^{(j)}$ only slightly above the energetic threshold set by the ferromagnetic spin-spin coupling, and $\gamma_{\bell}^{(j)}$ sufficiently small, the probability of a spontaneous spin flip (`false positive') is very small \cite{lss1997, gs1990, lss1991}. But when the particle is close to the $j^\mathrm{th}$ spin, the excited state $\ket{\uparrow}_j$ decays much more quickly, due to the enhanced coupling, `$g_{\bell}^{(j)}$', of the spin to the bath. Then, the ferromagnetic force experienced by its neighbors is strongly reduced, and thus these spins can flip rather quickly even in the absence of the particle by means of the $\gamma_{\bell}^{(j)}$; by a kind of `domino effect', the whole array of spins will eventually flip, amplifying the first spin flip to a macroscopic event \cite{lss1997, gs1990, lss1991}. \section{The direct approach in the one-spin case} \label{direct_approach} \subsection{A simplified model} We first consider a simplified model consisting of a particle in one dimension and only one spin. This simplification is reasonable if the radius of the region ${\mathcal G}_j$ is smaller than the distance between spins. (Our assumption of locality of the interaction is a bit stronger than this, however, since below in Section \ref{eigenstates}, for calculational convenience, we will extend the region ${\mathcal G}_j$ to a half-line, i.e., $\chi(x) \to \Theta (x)$.) The vectors $\mathbf{x}$ and $\bell$ are replaced by $x$ and $\ell$.
Also, we will temporarily neglect $H_\mathrm{spon}$ in view of the assumption $\left| \gamma_\ell^{(j)} \right|^2 \ll \left| g_\ell^{(j)} \right|^2$, and accordingly the possibility of spontaneous spin flips. The free Hamiltonian for the particle motion in one dimension is \begin{equation} H^\mathrm{1d}_\mathrm{part} = \hat{p}^2/2m, \label{part_1d} \end{equation} and the free detector Hamiltonian with only one spin simplifies to \begin{equation} H^1_\mathrm{det} = \frac{1}{2}\hbar \omega_0 \hat{\sigma}_z. \end{equation} The free bath Hamiltonian is given by \begin{equation} H_\mathrm{bath}^\mathrm{1d} = \sum_\ell \hbar \omega_\ell \hat{a}_\ell^\dagger \hat{a}_\ell. \label{bath_1d} \end{equation} Furthermore, let the spin be located in the interval ${{\mathcal I}_d} \equiv [0,d]$ so that \begin{equation} H^{1,\mathrm{1d}}_\mathrm{coup} = \chi_{{\mathcal I}_d} \left( \hat{x} \right) \sum_\ell \hbar \left( g_\ell \rme^{\rmi f_\ell} \hat{a}_\ell^\dagger \hat{\sigma}_- + \mathrm{h.c.} \right), \label{general_coupling_one} \end{equation} where the sensitivity function $\chi_{{\mathcal I}_d}( x)$ vanishes outside ${{\mathcal I}_d}$. The full Hamiltonian of the simplified model is then given by \begin{equation} H^{1,\mathrm{1d}} = H^\mathrm{1d}_\mathrm{part} + H^1_\mathrm{det} + H^\mathrm{1d}_\mathrm{bath} + H^{1,\mathrm{1d}}_\mathrm{coup}. \label{hamiltonian_one} \end{equation} This simplified model allows for a direct investigation by means of standard quantum mechanics. \subsection{Energy eigenstates} \label{eigenstates} To get a first idea of how the present detector model works for an arrival-time measurement, we simplify the model in this section a little further by assuming the detector to be semi-infinite, extended over the whole positive axis, and take momentarily $$ \chi_{{\mathcal I}_d} (x) = \Theta (x), $$ where $\Theta$ is Heaviside's step function. Also, we assume for the phases in the coupling Hamiltonian $f_\ell \equiv 0$ throughout this section.
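The conservation of the excitation number invoked above can be verified explicitly for a small truncated version of the spin-boson part of (\ref{hamiltonian_one}). The sketch below is our own illustration, not from the paper: it takes one spin and three boson modes, each truncated to occupation $0$ or $1$, treats the particle-dependent factor $\chi$ as a constant equal to $1$, uses arbitrary toy values for $\omega_0$, $\omega_\ell$ and $g_\ell$ in units $\hbar = 1$, and checks that the Hamiltonian commutes with the excitation-number operator.

```python
import numpy as np

def kron_all(ops):
    """Tensor product of a list of operators."""
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# One spin (2 levels) and L boson modes, each truncated to occupation 0 or 1.
L = 3
w0 = 1.0                                  # spin splitting (toy value, hbar = 1)
wl = np.array([0.7, 1.0, 1.3])            # boson mode frequencies (toy values)
g = np.array([0.2, 0.3, 0.1])             # couplings; chi is set to 1 here

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])                 # |up><up| - |down><down|
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma_- = |down><up|
a = np.array([[0.0, 1.0], [0.0, 0.0]])    # truncated annihilation operator

n = 1 + L                                 # tensor slots: 0 = spin, 1..L = modes

def embed(op, site):
    """Place `op` at tensor slot `site`, identity elsewhere."""
    return kron_all([op if k == site else I2 for k in range(n)])

H = 0.5 * w0 * embed(sz, 0)
N = 0.5 * (embed(sz, 0) + np.eye(2**n))   # number of up-spins
for l in range(L):
    ad_l, a_l = embed(a.T, 1 + l), embed(a, 1 + l)
    H += wl[l] * ad_l @ a_l
    H += g[l] * (ad_l @ embed(sm, 0) + embed(sm.T, 0) @ a_l)
    N += ad_l @ a_l

# The coupling creates a boson while de-exciting the spin, so N is conserved.
print(np.max(np.abs(H @ N - N @ H)))      # ~ 0 up to rounding
```

Each coupling term raises the boson number by one while lowering the spin, so the commutator vanishes identically; the numerical check confirms the bookkeeping of the truncated operators.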
The stationary Schr\"odinger equation with energy eigenvalue $E_k$ for a plane wave coming in from the left, initially no bosons present, and the spin in state $\ket{\uparrow}$, can be solved piecewise in position space. For $x<0$, the solution simply reads \begin{equation} \boldsymbol{\Phi}^<_k (x) =\sqrt{\frac{1}{2\pi}} \left( \left[ \rme^{\rmi kx} + R_0(k) \rme^{-\rmi kx} \right] \ket{\uparrow~0} + \sum_\ell R_\ell (k) \rme^{-\rmi k_\ell(k)x} \ket{\downarrow~1_\ell} \right), \end{equation} where the wave numbers $k,~k_\ell(k)$ are fixed by \begin{equation} \frac{\hbar^2 k^2}{2m} + \frac{\hbar \omega_0}{2} = E_k = \frac{\hbar^2 k_\ell(k)^2}{2m} - \frac{\hbar \omega_0}{2} + \hbar \omega_\ell, \label{energy_conservation} \end{equation} and where $\ket{\uparrow~0} \equiv \ket{\uparrow} \ket{0}$, $\ket{\downarrow~1_\ell}\equiv \ket{\downarrow} \ket{1_\ell}$. Note that there is the possibility that the particle is reflected from the detector. It may either be reflected after it has been detected and a boson of mode $\ell$ has been created, the coefficient for this event being $R_\ell(k)$, or it may even be reflected without being detected, the coefficient being $R_0(k)$. The latter will lead to a non-normalized arrival-time distribution. Since this no-detection probability is in general momentum dependent the momentum distribution of the actually detected part of the wave packet must be expected to differ from that of the originally prepared wave packet, hence leading to deviations of the `measured' arrival-time distribution from corresponding `ideal' quantities. For $x>0$, the operator $H^{1,\mathrm{1d}} - \hat{p}^2/2m$ is independent of $x$, because $\chi_{{\mathcal I}_d}sub (x) = \Theta (x)$ has been assumed, and it commutes with $\hat{p}^2/2m$. The eigenvalues of $H^{1,\mathrm{1d}} - \hat{p}^2/2m$ are real and denoted by $\hbar\Omega_\mu/2$. 
The corresponding eigenvectors are superpositions of $\ket{\uparrow~0}$ and $\ket{\downarrow~1_\ell}$ and denoted by $\ket{\boldsymbol\mu}$ so that \begin{equation} \left(H^{1,\mathrm{1d}} - \hat{p}^2/2m\right) \ket{\boldsymbol\mu} = \frac{\hbar \Omega_\mu}{2} \ket{\boldsymbol\mu}. \end{equation} To obtain an eigenvector of $H^{1,\mathrm{1d}}$ on $x>0$ for the eigenvalue $E_k$, one has to choose an eigenfunction $\rme^{\rmi q_\mu(k)x}$ of $\hat{p}^2/2m$ such that \begin{equation} E_k = (\hbar q_\mu(k) )^2/2m + \hbar\Omega_\mu/2. \end{equation} From (\ref{energy_conservation}) one has \begin{equation} q_\mu(k) = \sqrt{ k^2 + \frac{m}{\hbar} \left( \omega_0 - \Omega_\mu \right)}. \end{equation} Note that $q_\mu (k)$ is imaginary if $\Omega_\mu > \omega_0$ and \begin{equation} k^2 < \frac{m}{\hbar} \left( \Omega_\mu - \omega_0 \right), \end{equation} leading to exponential decay. Otherwise $q_\mu(k)$ is real. The solution of the stationary Schr\"odinger equation for $x>0$ belonging to the eigenvalue $E_k$ can then be written as \begin{equation} \boldsymbol{\Phi}_k^> (x) = \sqrt{\frac{1}{2\pi}} \sum_\mu \alpha_\mu(k) \rme^{\rmi q_\mu(k)x} \ket{\boldsymbol\mu}. \end{equation} The coefficients $\alpha_\mu(k),~R_0(k),~R_\ell(k)$ are obtained from the usual matching condition, i.e., both \begin{equation} \boldsymbol{\Phi}_k(x) := \left\{ \begin{array}{ccc} \boldsymbol{\Phi}_k^< (x) & \mathrm{if} & x<0 \\ \boldsymbol{\Phi}_k^> (x) & \mathrm{if} & x \geq 0 \end{array} \right. \end{equation} and its first derivative have to be continuous at $x=0$. The eigenvectors $\ket{\boldsymbol \mu}$ can be determined numerically.
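The matching procedure just outlined becomes fully explicit in the simplest nontrivial case of a single boson mode, where the spin-boson block on $x>0$ is a $2\times 2$ matrix in the basis $\{\ket{\uparrow~0}, \ket{\downarrow~1}\}$. The following sketch is our own illustration with toy parameter values (units $\hbar = m = 1$): it diagonalizes this block, solves the continuity conditions at $x=0$ as a linear system for $R_0$, $R_1$ and the $\alpha_\mu$, and checks conservation of the probability flux.

```python
import numpy as np

w0, w1, g = 1.0, 0.8, 0.4            # spin splitting, mode frequency, coupling (toy values)
k = 1.0                              # incident wave number (hbar = m = 1)

# Spin-boson block of H - p^2/2m on x > 0 in the basis {|up 0>, |down 1>}
M = np.array([[w0 / 2.0, g],
              [g, -w0 / 2.0 + w1]])
E, V = np.linalg.eigh(M)             # E[mu] = Omega_mu / 2, columns V[:, mu] = |mu>

# Wave numbers from energy conservation; complex sqrt covers evanescent channels
k1 = np.sqrt(k**2 + 2.0 * (w0 - w1) + 0j)     # boson channel on x < 0
q = np.sqrt(k**2 + (w0 - 2.0 * E) + 0j)       # channels |mu> on x > 0

# Continuity of Phi and Phi' at x = 0; unknowns (R0, R1, alpha_1, alpha_2)
Amat = np.array([
    [-1.0, 0.0, V[0, 0], V[0, 1]],
    [0.0, -1.0, V[1, 0], V[1, 1]],
    [k, 0.0, q[0] * V[0, 0], q[1] * V[0, 1]],
    [0.0, k1, q[0] * V[1, 0], q[1] * V[1, 1]],
], dtype=complex)
b = np.array([1.0, 0.0, k, 0.0], dtype=complex)
R0, R1, a1, a2 = np.linalg.solve(Amat, b)

# Probability flux balance: incoming = reflected (both channels) + transmitted;
# evanescent channels (Re q = 0) carry no flux.
flux_in = k
flux_out = (k * abs(R0)**2 + k1.real * abs(R1)**2
            + q[0].real * abs(a1)**2 + q[1].real * abs(a2)**2)
print(abs(flux_in - flux_out))       # ~ 0: flux is conserved
```

Because the eigenvectors $\ket{\boldsymbol\mu}$ are orthogonal, the channel currents simply add, and flux conservation is a nontrivial consistency check of the matching.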
\subsection{Detection of a wave packet} The probability of finding the detector spin in state $\ket{\downarrow}$ (and hence the bath in some boson state $|1_\ell\rangle$) at time $t$ is given by integration over the modulus square of the respective component of $\ket{\boldsymbol{\Psi}_t}$, \begin{eqnarray} P_1^\mathrm{disc} (t) &=& \sum_\ell\int_{-\infty}^\infty \rmd x \, \left| \left\langle x~\downarrow~1_\ell \left|\boldsymbol{\Psi}_t \right. \right\rangle \right|^2\\ &=& 1 - \int_{-\infty}^\infty \rmd x \, \left| \left\langle x~\uparrow~0 \left|\boldsymbol{\Psi}_t \right. \right\rangle \right|^2 \equiv 1 -P_0^\mathrm{disc}(t), \nonumber \end{eqnarray} where the superscript `$\mathrm{disc}$' distinguishes the discrete model from the continuum limit discussed in the next section. As long as no recurrences occur, i.e., no transitions $\ket{\downarrow~1_\ell} \mapsto \ket{\uparrow~0}$, one can regard \begin{equation} w_1^\mathrm{disc} (t) = \frac{\rmd}{\rmd t} P_1^\mathrm{disc}(t) = - \frac{\rmd}{\rmd t} P_0^\mathrm{disc}(t) \end{equation} as the probability density for a spin flip (i.e.\ for a detection) at time $t$. As an example we consider a maximal boson frequency $\omega_{_\mathrm{M}}$ and \begin{eqnarray}\label{ex} \omega_\ell &=& \omega_{_\mathrm{M}} n/N, \qquad n=1, \ldots, N, \nonumber \\ g_\ell &=& -\rmi G \sqrt{\omega_\ell/N}. \end{eqnarray} As the particle we consider a cesium atom, prepared in the remote past far away from the detector such that the corresponding free packet (i.e., in the absence of the detector) at $t=0$ would be a Gaussian minimal uncertainty packet around $x=0$ with momentum width $\Delta p$ and average velocity $v_0$. Decomposing this into the eigenstates of $H$, the wave packet at time $t$ is \begin{equation}\label{state} \left\langle x \left| \boldsymbol{\Psi}_t \right.
\right\rangle = \int_{-\infty}^{\infty} \rmd k \, \widetilde{\psi}(k) \boldsymbol{\Phi}_k(x) \rme^{-\rmi E_k t/\hbar} \end{equation} with \begin{equation}\label{Gauss} \widetilde{\psi} (k) = \left( \frac{\hbar }{\Delta p \sqrt{2 \pi}} \right)^{1/2} \exp \left(-\frac{\hbar^2}{4 (\Delta p)^2} \left( k - mv_0/\hbar \right)^2 \right). \end{equation} A numerical illustration of $w_1^\mathrm{disc}(t)$ for $N=40$ is given in figure \ref{w1discrete} (dots). The numerical calculation is time-consuming, while in the continuous case with the quantum jump approach it is much faster (see next section). \begin{figure} \caption{Dots: spin-flip probability density $w_1^\mathrm{disc}(t)$ for the discrete model with $N=40$ boson modes.} \label{w1discrete} \end{figure} \section{Continuum limit and quantum jump approach} \label{calc} \subsection{Basic ideas} In this section the detector model and its application to arrival times will be investigated in a continuum limit by means of the quantum jump approach \cite{qujump}. This approach uses continuous bath modes as a limit, so that there are no recurrences as in the discrete case. It is easily generalized to multiple spins and it is more accessible to analytic treatment. The bath modes are eliminated, but in contrast to Bloch equations one can work with a (conditional or effective) Hamiltonian and has reduced dimensions. It is based on watching for the {\em first} appearance of a boson. To do this one would have to observe the bath continuously. Since in standard quantum mechanics with the simple von Neumann measurement theory this would lead to difficulties associated with the quantum Zeno effect \cite{beskow, khalfin, Sudarshan}, the quantum jump approach circumvents this by temporally coarse-grained observations and a coarse-grained time scale. In the present situation, it reads as follows. Instead of continuous observation, one considers repeated instantaneous measurements, separated by a time $\Delta t$.
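The Gaussian momentum amplitude (\ref{Gauss}) is normalized so that $\int_{-\infty}^{\infty} |\widetilde{\psi}(k)|^2 \, \rmd k = 1$. As a quick sanity check (our own illustration, with toy values for $v_0$ and $\Delta p$; only the cesium mass is a physical constant):

```python
import numpy as np

hbar = 1.0545718e-34          # J s
m = 2.2069e-25                # cesium atom mass in kg (approximate)
v0 = 0.1                      # average velocity in m/s (toy value)
dp = 0.01 * m * v0            # momentum width Delta p (toy value)

# Momentum-space grid of +-10 standard deviations around k0 = m v0 / hbar
k0 = m * v0 / hbar
sig_k = dp / hbar
k = np.linspace(k0 - 10.0 * sig_k, k0 + 10.0 * sig_k, 20001)

# psi_tilde(k) of the Gaussian minimal-uncertainty packet
psi = (hbar / (dp * np.sqrt(2.0 * np.pi)))**0.5 \
      * np.exp(-hbar**2 / (4.0 * dp**2) * (k - k0)**2)

# Check normalization: integral of |psi|^2 dk = 1
dk = k[1] - k[0]
norm = np.sum(np.abs(psi)**2) * dk
print(norm)                   # ~ 1.0
```

The prefactor $(\hbar/(\Delta p \sqrt{2\pi}))^{1/2}$ exactly compensates the Gaussian integral $\sqrt{2\pi}\,\Delta p/\hbar$, which the numerical quadrature reproduces.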
For a Markovian system with correlation time $\tau_\mathrm{c}$, one takes $\Delta t \gg \tau_\mathrm{c}$ to avoid the quantum Zeno effect, but $\Delta t$ much shorter than the lifetime of the excited state $\ket{\uparrow \cdots \uparrow}$ in order to obtain a good time resolution. Typical numbers for quantum optical models are $\Delta t \simeq 10^{-13} \mathrm{s} \ldots 10^{-10} \mathrm{s}$. To find {\em no boson until $t = n \Delta t$}, no boson must have been found in the first $n$ measurements. The probability for this to happen will now be calculated. The detector interval ${{\mathcal I}_d}$ can now be finite or semi-infinite. Let the complete system (bath, detector, and particle) at $t_0=0$ be prepared in the state \begin{equation} \ket{\Psi_0} = \ket{0} \ket{\uparrow \cdots \uparrow} \ket{\psi_0}, \end{equation} where $\ket{\psi_0}$ denotes the spatial wave function of the particle. If no boson is found at the first measurement then, by the von Neumann-L\"uders reduction rule \cite{vN,Lueders}, the state (up to normalization) right after the measurement is given by projecting with $\ket{0}\bra{0}$, \begin{equation} \ket{\Psi_\mathrm{cond}^{\Delta t}} \equiv \ket{0}\bra{0}\, U(\Delta t, 0) \ket{0} \ket{\uparrow \cdots \uparrow} \ket{\psi_0}, \label{state1} \end{equation} where $U(t,t')$ denotes the time evolution operator of the complete system. The probability, $P_0(\Delta t)$, for no detection is the norm squared of the vector in (\ref{state1}), i.e. \begin{equation} P_0(\Delta t)= \parallel \ket{0}\bra{0}\, U(\Delta t, 0) \ket{0} \ket{\uparrow \cdots \uparrow} \ket{\psi_0}\parallel ^2. \end{equation} The state then evolves with $U(2\Delta t, \Delta t)$ until the next measurement, and so on.
The state after the $n^\mathrm{th}$ consecutive no-boson measurement, $\ket{\Psi_\mathrm{cond}^{n \Delta t}}$, is, up to normalization, \begin{eqnarray} \ket{\Psi_\mathrm{cond}^{n \Delta t}} & = & \ket{0}\bra{0}\, U(n \Delta t, [n-1] \Delta t) \ket{0} \cdots \nonumber \\ & & \quad \quad \cdots \bra{0} U(\Delta t, 0) \ket{0} \ket{\uparrow \cdots \uparrow} \ket{\psi_0}. \label{n} \end{eqnarray} The probability, $P_0(n \Delta t)$, of finding the bath in the state $\ket{0}$ in \emph{all} of the first $n$ measurements is given by its norm squared, \begin{equation} P_0(n \Delta t)= \left\langle \Psi_\mathrm{cond}^{n \Delta t} \left| \Psi_\mathrm{cond}^{n \Delta t} \right. \right\rangle. \end{equation} Note that $\bra{0} U(\nu\Delta t, [\nu-1] \Delta t) \ket{0}$ is an operator in the particle-detector Hilbert space which does not rotate $\ket{\uparrow \cdots \uparrow}$, by the excitation number conservation mentioned after (\ref{free}). Thus one can write \begin{equation} \ket{\Psi_\mathrm{cond}^{n \Delta t}}=\ket{\Psi_\mathrm{cond}^{t}} \equiv \ket{0} \ket{\uparrow \cdots \uparrow} \ket{\psi_\mathrm{cond}^t}, \end{equation} where $t = n \Delta t$, and hence \begin{equation} \label{P0} P_0(t) \equiv \left\langle \Psi_\mathrm{cond}^{n \Delta t} \right| \left. \! \Psi_\mathrm{cond}^{n \Delta t} \right\rangle = \left\langle \psi_\mathrm{cond}^t \right| \left. \! \psi_\mathrm{cond}^t \right\rangle \end{equation} is the probability that no transition $\ket{0} \longrightarrow \ket{1_\ell}$, i.e.\ that no detection occurs \emph{until} the time $t$. The probability for the first detection to occur at the next measurement is just given by \begin{equation}\label{w0} P_0(t)- P_0(t+\Delta t)\equiv w_1(t) \Delta t. \end{equation} The crucial point now is to calculate the `conditional time evolution' of $\ket{\psi_\mathrm{cond}^t}$, i.e., the time evolution `under the condition that no detection occurs', and for this one has to evaluate $\bra{0} U(\nu \Delta t, [\nu-1] \Delta t) \ket{0}\ket{\uparrow \cdots \uparrow}$.
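The no-detection probability $P_0(t)$ can be illustrated with the discrete-mode example (\ref{ex}) when the particle is held fixed inside the detector ($\chi \equiv 1$), where the single-excitation dynamics reduces to a Wigner--Weisskopf problem. The sketch below is our own toy calculation, not from the paper: it propagates the amplitude of $\ket{\uparrow~0}$ exactly by diagonalization and compares $P_0(t) = |c_0(t)|^2$ with exponential decay $\rme^{-At}$, where $A = 2\pi G^2 \omega_0/\omega_{_\mathrm{M}}$ is the golden-rule rate for this mode density.

```python
import numpy as np

w0, wM, G, N = 1.0, 2.0, 0.1, 1000   # toy parameters; w0 in the middle of the band
wl = wM * np.arange(1, N + 1) / N    # boson frequencies as in eq. (ex)
g = G * np.sqrt(wl / N)              # |g_l|; the phase -i drops out of |c_0|^2

# Single-excitation Hamiltonian in the basis {|up 0>, |down 1_1>, ..., |down 1_N>}
H = np.diag(np.concatenate(([w0 / 2.0], -w0 / 2.0 + wl)))
H[0, 1:] = g
H[1:, 0] = g

# Exact evolution of the initial state |up 0> via diagonalization
E, V = np.linalg.eigh(H)
t = 50.0
amp = (np.abs(V[0, :])**2 * np.exp(-1j * E * t)).sum()   # <up 0|exp(-iHt)|up 0>
P0 = abs(amp)**2

A = 2.0 * np.pi * G**2 * w0 / wM     # golden-rule decay rate
print(P0, np.exp(-A * t))            # close in the Markov regime
```

With $N = 1000$ modes the recurrence time $2\pi N/\omega_{_\mathrm{M}}$ far exceeds the chosen $t$, so the discrete model mimics irreversible decay and $P_0(t)$ tracks the exponential up to small short-time and line-shift corrections.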
\subsection{A simplified model} For greater clarity, the evaluation of $\bra{0} U(\nu \Delta t, [\nu-1] \Delta t) \ket{0}\ket{\uparrow \cdots \uparrow}$ will first be done for the simplified model introduced at the beginning of Section \ref{direct_approach}, while the generalization to the full model is deferred to \ref{full_model}. We use the interaction picture w.r.t.\ $H_0^{1,\mathrm{1d}} = H^{1,\mathrm{1d}} - H_\mathrm{coup}^{1,\mathrm{1d}}$ and $U_I(t,t') = \rme^{\idh H_0^{1,\mathrm{1d}} t} U(t,t') \rme^{- \idh H^{1,\mathrm{1d}}_0 t'}$. Using (\ref{general_coupling_one}) with still discrete, but possibly infinitely many, modes a simple calculation gives in second order perturbation theory \begin{eqnarray} \fl \bra{0} U_I (\nu \Delta t, [\nu-1] \Delta t) \ket{0} \ket{\uparrow} = \ket{\uparrow} \left( 1 \!\!\!\; \mathrm{l} - \phantom{\int\limits_{[\nu-1] \Delta t}^{\nu \Delta t} \rmd t_1} \right. \nonumber \\ \left. \int\limits_{[\nu-1] \Delta t}^{\nu \Delta t} \rmd t_1 \int\limits_{[\nu-1] \Delta t}^{t_1} \rmd t_2 \, \sum_\ell \chi_{{\mathcal I}_d} \left( \hat{x} \left( t_1 \right) \right) \chi_{{\mathcal I}_d} \left( \hat{x} \left( t_2 \right) \right) \left| g_\ell \right|^2 \rme^{\rmi (\omega_0 - \omega_\ell) (t_1 - t_2)} \right), \label{Dt_full} \end{eqnarray} where $\hat{x} (t) = \hat{x} + \hat{p}t/m$ is the time evolution of the operator $\hat{x}$ in the Heisenberg picture of the free particle. The phases in the coupling terms have canceled; even if one were to assume these phases to be dependent on the particle's position, $f_\ell(x)$, this would be the case to very good approximation since $\Delta t$ is very small and thus $\hat{x} (t_1) \approx \hat{x}(t_2)$ \cite{gch2003}. Consequently one obtains \begin{eqnarray} \fl \bra{0} U_I (\nu \Delta t, [\nu-1] \Delta t) \ket{0} \ket{\uparrow} = \ket{\uparrow} \left( 1 \!\!\!\; \mathrm{l} - \phantom{\int\limits_{[\nu-1] \Delta t}^{\nu \Delta t} \rmd t_1} \right. \nonumber \\ \left.
\int\limits_{[\nu-1] \Delta t}^{\nu \Delta t} \rmd t_1 \int\limits_{[\nu-1] \Delta t}^{t_1} \rmd t_2 \, \chi_{{\mathcal I}_d} \left( \hat{x} \left( t_1 \right) \right) \chi_{{\mathcal I}_d} \left( \hat{x} \left( t_2 \right) \right) \cdot \kappa \left( t_1 - t_2 \right) \right) \label{Dt} \end{eqnarray} with the correlation function \begin{equation} \kappa (\tau) \equiv \sum_\ell \left| g_\ell \right|^2 \rme^{-\rmi (\omega_\ell - \omega_0) \tau}. \end{equation} To have irreversible decay we go to the continuum limit as follows. At first the bath modes are indexed by `wave numbers' \begin{equation}\label{l} \ell=2 \pi n/L_\mathrm{bath}, ~~~ n = 1,~2, \cdots \end{equation} and $\omega_\ell$ is chosen as \begin{equation}\label{om} \omega_\ell = c \left( \omega_\ell \right) \ell, \end{equation} so that \begin{equation}\label{Delta} \Delta \omega = \frac{c(\omega)^2}{c(\omega) - \omega c'(\omega)} \Delta \ell = \frac{c(\omega)^2}{c(\omega) - \omega c'(\omega)} \cdot \frac{2\pi}{L_\mathrm{bath}}. \end{equation} The coupling constants are taken to be of the form \begin{equation} g_\ell =\left( \Gamma (\omega_\ell) + \mathcal{O}(L_\mathrm{bath}^{-1})\right) \cdot \sqrt{\frac{\omega_\ell}{L_\mathrm{bath}}} \label{coupling_constants} \end{equation} where $\Gamma(\omega_\ell)$ does not depend on $L_\mathrm{bath}$. Then one obtains in the continuum limit by (\ref{Delta}) \begin{equation}\label{kappa} \kappa (\tau) = \frac{1}{2\pi}\int_0^\infty \rmd\omega \, \frac{c(\omega) - \omega c'(\omega)}{c(\omega)^2} \omega |\Gamma (\omega)|^2 \rme^{-\rmi(\omega - \omega_0) \tau}. \end{equation} We assume $\Gamma(\omega)$ to be of such a form that the Markov property holds, i.e. \begin{equation} \kappa (\tau) \approx 0~~~ \mathrm{if} ~~~\tau > \tau_\mathrm{c} \label{markovianproperty} \end{equation} for some small correlation time $\tau_\mathrm{c}$. This is the case, e.g., for $\Gamma(\omega) \equiv \Gamma$ as in quantum optics.
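The Markov property (\ref{markovianproperty}) can be made concrete for the flat choice $\Gamma(\omega) \equiv \Gamma$ with constant $c(\omega)$: the correlation function (\ref{kappa}) is then, up to constants, the Fourier transform of $\omega$ over the available band, which decays on a time scale $\tau_\mathrm{c} \sim 1/\omega_{_\mathrm{M}}$. A minimal numerical sketch (our own, with a frequency cutoff $\omega_{_\mathrm{M}}$ standing in for the physical band edge and toy values for the constants):

```python
import numpy as np

w0, wM, Gamma, c = 1.0, 2.0, 1.0, 1.0     # toy values; c(omega) = c constant
w = np.linspace(0.0, wM, 200001)
dw = w[1] - w[0]

def kappa(tau):
    """Continuum correlation function, eq. (kappa), with a band cutoff wM."""
    integrand = (1.0 / c) * w * Gamma**2 * np.exp(-1j * (w - w0) * tau)
    return integrand.sum() * dw / (2.0 * np.pi)

tau_c = 1.0 / wM
print(abs(kappa(0.0)))            # ~ wM^2 Gamma^2 / (4 pi c)
print(abs(kappa(50.0 * tau_c)))   # much smaller: kappa has decayed
```

For $\tau \gg \tau_\mathrm{c}$ the oscillating phase averages the smooth spectral weight to a boundary contribution of order $1/\tau$, which is the quantitative content of the Markov assumption.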
In the double integral of (\ref{Dt}) then only times with $t_1 - t_2 \le \tau_\mathrm{c}$ contribute, and if $\tau_\mathrm{c}$ is small enough one can write \begin{equation} \chi_{{\mathcal I}_d} \left( \hat{x} \left( t_1 \right) \right) \chi_{{\mathcal I}_d} \left( \hat{x} \left( t_2 \right) \right) \approx \chi_{{\mathcal I}_d} \left( \hat{x} \left( t_1 \right) \right)^2. \label{approx} \end{equation} The double integral then becomes \begin{equation} \int_0^{\Delta t} \rmd t' \chi_{{\mathcal I}_d}(\hat{x}(t' + (\nu-1)\Delta t))^2 \int_0^{t'} \rmd\tau \, \kappa(\tau). \label{double} \end{equation} With $\Delta t \gg \tau_\mathrm{c}$ the second integral can be extended to infinity, by the Markov property. Putting \begin{eqnarray} A &\equiv& 2 \mathrm{Re} \int_0^\infty \rmd\tau \, \kappa(\tau) = \omega_0 \frac{c(\omega_0) - \omega_0 c'(\omega_0)}{c(\omega_0)^2} \cdot \left| \Gamma \left(\omega_0 \right) \right|^2 \\ \nonumber \delta_\mathrm{shift} &\equiv& 2 \mathrm{Im} \int_0^\infty \rmd\tau \, \kappa(\tau) \label{A} \end{eqnarray} one obtains \begin{eqnarray} \fl \bra{0} U_I(\nu \Delta t, [\nu-1] \Delta t) \ket{0} \ket{\uparrow } & = & \ket{\uparrow} \left( 1 \!\!\!\; \mathrm{l} - \frac{1}{2}(A + \rmi\delta_\mathrm{shift}) \int\limits_{[\nu-1] \Delta t} ^{\nu \Delta t} \!\! \rmd t_1 \, \chi_{{\mathcal I}_d} \! \left( \hat{x} \left(t_1\right) \right)^2 \right) \nonumber \\ & = & \ket{\uparrow} \exp \left( - \frac{1}{2}(A + \rmi\delta_\mathrm{shift}) \int\limits_{[\nu-1] \Delta t} ^{\nu \Delta t} \, \rmd t_1 \, \chi_{{\mathcal I}_d} \left( \hat{x} \left(t_1\right) \right)^2 \right), \end{eqnarray} up to higher orders in $\Delta t$. Note that $A$ is a decay rate of the upper spin level; in quantum optics $A$ and $\delta_\mathrm{shift}$ correspond to the Einstein coefficient and to a line shift.
Going back to the Schr\"odinger picture one then obtains by (\ref{n}) \begin{equation}\label{cond} \ket{\psi_\mathrm{cond}^t}= \rme^{- \idh H_{\mathrm{cond}} (t-t_0)} \ket{\psi_0} \end{equation} with the `conditional Hamiltonian' \begin{equation} H_{\mathrm{cond}} \equiv \frac{\hat{p}^2}{2m} + \frac{\hbar}{2}(\delta_\mathrm{shift} - \rmi A)\chi_{{\mathcal I}_d} (\hat{x})^2. \end{equation} Note that this result is independent of the particular choice of $\Delta t$ as long as $\Delta t$ satisfies the above requirements. As a consequence, on a coarse-grained time scale in which $\Delta t$ is small, $t$ can be regarded as continuous and $\ket{\psi_\mathrm{cond}^t}$ obeys a Schr\"odinger equation with a complex potential, \begin{equation} \rmi\hbar \frac{\partial}{\partial t} \ket{\psi_\mathrm{cond}^t} = \left( \frac{\hat{p}^2}{2m} + \frac{\hbar}{2} (\delta_\mathrm{shift} - \rmi A)\chi_{{\mathcal I}_d} (\hat{x})^2\right) \ket{\psi_\mathrm{cond}^t}. \label{absorbing_potential} \end{equation} On this continuous, coarse-grained time scale, (\ref{w0}) yields for the probability density, $w_1(t)$, for the first detection \begin{equation} w_1(t) = -\frac{\rmd P_0(t)}{\rmd t} \label{w1} \end{equation} and from (\ref{P0}) and (\ref{cond}) one easily finds \begin{eqnarray} w_1(t) & = & \frac{\rmi}{\hbar} \left\langle \psi_\mathrm{cond}^t \left|H_{\mathrm{cond}} - H_{\mathrm{cond}}^\dagger \right| \psi_\mathrm{cond}^t \right\rangle \nonumber \\ & = & A \int_0^d \rmd x \,\chi_{{\mathcal I}_d} (x)^2 \left| \left\langle x \left| \psi_\mathrm{cond}^t \right. \right\rangle \right|^2. \label{w1-1d} \end{eqnarray} If $\chi_{{\mathcal I}_d} (\hat{x})$ is the characteristic function of the interval $[0,d]$, this is just the decay rate of the excited state of the detector multiplied by the probability that the particle is inside the detector but not yet detected --- a very physical result.
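The conditional dynamics (\ref{absorbing_potential}) and the detection density (\ref{w1-1d}) can be illustrated numerically with a split-operator integration. The following sketch uses units $\hbar = m = 1$, sets $\delta_\mathrm{shift} = 0$, takes a rectangular detector on $[0,d]$, and chooses packet and coupling parameters purely for illustration (none of these numbers are taken from the text); it checks that the norm lost per unit time reproduces $w_1(t)$:

```python
import numpy as np

# Split-operator evolution under the conditional Hamiltonian of
# (absorbing_potential): H = p^2/2m - i*hbar*A/2 * chi(x)^2, delta_shift = 0.
# Units hbar = m = 1; all numerical values are illustrative choices.
hbar = m = 1.0
A = 2.0                                    # spin decay rate (illustrative)
L, N = 400.0, 4096
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
d = 20.0
chi = ((x >= 0) & (x <= d)).astype(float)  # detector on [0, d]

# incident Gaussian packet, mean velocity v0, centered left of the detector
x0, v0, sigma = -40.0, 2.0, 5.0
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2) + 1j * m * v0 * x / hbar)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

dt, nsteps = 0.02, 3000
kin = np.exp(-1j * hbar * k ** 2 * dt / (2 * m))  # free-particle propagator
pot = np.exp(-(A / 2) * chi ** 2 * dt)            # absorbing (non-unitary) step
P0 = [1.0]                                        # survival probability P0(t)
w1_direct = []                                    # right-hand side of (w1-1d)
for _ in range(nsteps):
    psi = np.fft.ifft(kin * np.fft.fft(pot * psi))
    P0.append(np.sum(np.abs(psi) ** 2) * dx)
    w1_direct.append(A * np.sum(chi ** 2 * np.abs(psi) ** 2) * dx)
w1_from_P0 = -np.diff(P0) / dt                    # left-hand side, eq. (w1)
```

First-order (Lie) splitting suffices here, since the comparison of the two expressions for $w_1(t)$ only needs $O(\Delta t)$ accuracy; with these parameters most of the packet is detected and a small remainder is reflected at the detector edge.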
\subsection{An example} As an example we consider the continuum limit of the discrete model of (\ref{ex}). In this case one has, with $\omega_{_\mathrm{M}}$ the maximal frequency, \begin{eqnarray}\label{excont} c(\omega) & \equiv & c_0 \nonumber \\ \omega_\ell &=& \omega_{_\mathrm{M}} n/N \equiv c_0 \, 2\pi n/L_\mathrm{bath}, ~~~n=1, \cdots, N \nonumber\\ L_\mathrm{bath} &\equiv& 2\pi c_0 N/\omega_{_\mathrm{M}} \nonumber\\ g_\ell &=& -\rmi G \sqrt{\omega_\ell/N} \equiv -\rmi G \sqrt{2\pi c_0/\omega_{_\mathrm{M}}}\sqrt{\frac{\omega_\ell}{L_\mathrm{bath}}}\nonumber\\ \Gamma\left( \omega \right) &=& \cases{ \begin{array}{cc} -\rmi G \sqrt{2\pi c_0/\omega_{_\mathrm{M}}}\equiv \Gamma & \mathrm{if} \;\; \omega \le \omega_{_\mathrm{M}} \\ 0 & \mathrm{else}. \end{array}} \end{eqnarray} In the continuum limit, $N$ or $L_\mathrm{bath}\to \infty$, one obtains in the case $\omega_{_\mathrm{M}} > \omega_0$ \begin{eqnarray} \kappa (\tau) & = & \frac{|G|^2}{\omega_{_\mathrm{M}}} \cdot \frac{ \left( 1 + \rmi \omega_{_\mathrm{M}} \tau \right) \rme^{-\rmi (\omega_{_\mathrm{M}}-\omega_0) \tau} - \rme^{\rmi \omega_0 \tau}}{\tau^2} \nonumber \\ A &=& 2\pi \left| G \right|^2 \,\frac{\omega_0}{\omega_{_\mathrm{M}}}\nonumber \\ \delta_\mathrm{shift}&=& 2 \left|G\right|^2 \left(\frac{\omega_0}{\omega_{_\mathrm{M}}} \ln \left[\frac{\omega_0}{\omega_{_\mathrm{M}} - \omega_0} \right] - 1\right) \end{eqnarray} and $\tau_\mathrm{c}$ is of the order of $\omega_0^{-1}$. In the integral for $w_1(t)$ in (\ref{w1-1d}) one has $\chi_{{\mathcal I}_d} (x) = \Theta (x)$. The resulting $w_1(t)$ is plotted in figure \ref{w1discrete} for the same wave function and parameters as for $w_1^\mathrm{disc}(t)$ in that figure. Both distributions are in good agreement up to the occurrence of recurrences $\ket{\downarrow~1_\ell} \mapsto \ket{\uparrow~0}$ in the discrete case. The agreement is also seen for other values of $\Delta p$.
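The closed form of $\kappa(\tau)$ above can be cross-checked by direct numerical quadrature of the continuum expression (\ref{kappa}), which for $c(\omega)\equiv c_0$ and constant $\Gamma$ up to $\omega_{_\mathrm{M}}$ reduces to an elementary integral. A short sketch, with arbitrary test values for $G$, $\omega_{_\mathrm{M}}$ and $\omega_0$ (satisfying $\omega_{_\mathrm{M}} > \omega_0$):

```python
import numpy as np

# Cross-check of the closed form of kappa(tau): with c(omega) = c0 and
# Gamma constant up to omega_M, eq. (kappa) reduces to
#   kappa(tau) = (|G|^2/omega_M) * Int_0^{omega_M} w * e^{-i(w-w0)tau} dw.
# G, wM and w0 are arbitrary test values (wM > w0).
G, wM, w0 = 0.3, 5.0, 2.0

def kappa_closed(tau):
    # closed form quoted in the text
    return (abs(G)**2 / wM) * ((1 + 1j * wM * tau) * np.exp(-1j * (wM - w0) * tau)
                               - np.exp(1j * w0 * tau)) / tau**2

def kappa_quad(tau, n=200000):
    # midpoint-rule quadrature of the frequency integral on [0, omega_M]
    dw = wM / n
    w = (np.arange(n) + 0.5) * dw
    return (abs(G)**2 / wM) * np.sum(w * np.exp(-1j * (w - w0) * tau)) * dw

# decay rate from the general relation A = w0 * |Gamma(w0)|^2 / c0,
# which here equals 2*pi*|G|^2 * w0/wM
A = 2 * np.pi * abs(G)**2 * w0 / wM
```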
If $\Delta x$ is the width of the wave packet in position space and $v_0$ is its average velocity, then the width of the probability density for detection is at least of the order of $\Delta x / v_0$ since it takes some time for the wave packet to enter the detection region. (Further broadening of the detection density arises from the width of the delay of the first spin flip once the particle is inside the detector.) Consequently, wave packets with small $\Delta p$ and thus large $\Delta x$ yield rather broad detection densities. On the other hand, as soon as a significant part of the wave function overlaps the detector, in the discrete case the time scale of the recurrences is essentially determined by the properties of the detector and the bath and by their coupling. In the case of wave packets with small $\Delta p$, long recurrence times are needed to obtain a good resolution of the typically broad detection densities. This requires a large number of bath modes in the discrete case. We further note that for more complicated incident wave packets as, e.g., the coherent superposition of several Gaussian wave packets with different mean velocities, the probability density exhibits a more complicated structure due to the self-interference of the wave function. \subsection{The general case} A procedure analogous to (\ref{Dt_full})--(\ref{w1-1d}) can be applied to the three-dimensional model with several spins, as explained in \ref{full_model}. The bosons are allowed to have a direction $\mathbf{e}$ which varies over the unit sphere.
In a continuum limit $\ket{\psi_\mathrm{cond}^t}$ obeys a Schr\"odinger equation with a complex potential \begin{equation} \rmi\hbar \frac{\partial}{\partial t} \ket{\psi_\mathrm{cond}^t} = H_{\mathrm{cond}} \ket{\psi_\mathrm{cond}^t} \label{three-d-schroedinger} \end{equation} where \begin{equation}\label{three-d-cond} H_{\mathrm{cond}} = \frac{\hat{\mathbf{p}}^2}{2m} + \frac{\hbar}{2}\{\delta_\mathrm{shift}(\hat{\mathbf{x}}) -\rmi A(\hat{\mathbf{x}})\}. \end{equation} $A(\mathbf{x})$ and $\delta_\mathrm{shift}({\mathbf{x}})$ are given in (\ref{Athree}) and (\ref{3d-shift}). The probability density for the first detection is again similar to (\ref{w1-1d}), \begin{eqnarray} w_1 (t) & = & \frac{\rmi}{\hbar} \left\langle \psi_\mathrm{cond}^t \left|H_{\mathrm{cond}} - H_{\mathrm{cond}}^\dagger \right| \psi_\mathrm{cond}^t \right\rangle \nonumber\\ & = & \int \rmd^3 x \; A \left( \mathbf{x} \right) \left| \left\langle \mathbf{x} \left| \psi_\mathrm{cond}^t \right. \right\rangle \right|^2, \label{w} \end{eqnarray} which is an average of the position dependent decay rate of the detector, weighted with the probability density for the particle to be at position $\mathbf{x}$ and yet undetected. \section{Relation of the present detector model to the fluorescence model} \label{relation} In the quantum optical fluorescence model \cite{dambo2002, hsm2003, bn2003b, dambo2003, hsmn2004, rdnmh2004, hhm2005} for arrival times one considers a two-level atom with ground state $\ket{1} \equiv {1 \choose 0}$ and excited state $\ket{2} \equiv {0 \choose 1}$, which enters a laser illuminated region. 
In the one-dimensional case one obtains a conditional Hamiltonian of the form \begin{equation} H_{\mathrm{cond}}^\mathrm{fl} = \frac{\hat{p}^2}{2m} + \frac{\hbar}{2} \left( \begin{array}{cc} 0 & \Omega(\hat{x}) \\ \Omega (\hat{x}) & -\rmi \gamma - 2\Delta \end{array} \right), \end{equation} where $\Omega (x)$ is the (position dependent) Rabi frequency of the laser, $\Delta$ the detuning (possibly also position dependent), and $\gamma$ the decay constant of the excited level. Note that in contrast to the present model this is a two-channel Hamiltonian. The reason for this is that the ground state of the quantized photon field is not related to a specific internal state of the atom due to the driving by the (classical) laser. In the limit \begin{equation} \frac{\hbar | 2 \Delta + \rmi \gamma|}{2} \gg \frac{\hbar}{2} \Omega,~ E \label{onechannel} \end{equation} where $E$ denotes the kinetic energy of the incident particle, the corresponding conditional Schr\"odinger equation reduces to a one-channel equation for the ground state amplitude with the complex potential \begin{equation} V(x) = \frac{\hbar \Delta \Omega(x)^2 - \rmi \hbar \gamma \Omega(x)^2 / 2}{4 \Delta^2 + \gamma^2}, \end{equation} and the excited state can be neglected in this limit \cite{bn2003b}. Physically, condition (\ref{onechannel}) means that the excited state decays very rapidly compared to the time-scales of the pumping and the center-of-mass motion. Thus, the first fluorescence photon is emitted, i.e., the particle is detected, when and where the excitation takes place. For $\Delta \equiv 0$ (laser in resonance) $V$ is a purely imaginary potential, similar to that for the detector model outlined above; only the physical interpretation of the height of this imaginary potential differs.
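The one-channel reduction can be checked on the potential matrix alone: dropping the kinetic term, the eigenvalue of the $2\times 2$ coupling block that connects to the ground state approaches $V$ when (\ref{onechannel}) holds. A sketch with $\hbar = 1$ and arbitrary test values for $\Omega$, $\Delta$ and $\gamma$:

```python
import numpy as np

# Adiabatic elimination of the excited state in the fluorescence model:
# for |2*Delta + i*gamma| >> Omega the small eigenvalue of the 2x2
# coupling block approaches the complex potential V.  hbar = 1;
# Omega, Delta, gamma are arbitrary test values.
hbar = 1.0
Omega, Delta, gamma = 0.1, 1.0, 2.0

Hmat = (hbar / 2) * np.array([[0.0, Omega],
                              [Omega, -1j * gamma - 2 * Delta]])
evals = np.linalg.eigvals(Hmat)
small = evals[np.argmin(np.abs(evals))]   # branch connected to the ground state

# complex potential of the one-channel limit
V = (hbar * Delta * Omega**2 - 1j * hbar * gamma * Omega**2 / 2) \
    / (4 * Delta**2 + gamma**2)
```

The relative deviation between `small` and `V` is of order $\Omega^2/|2\Delta+\rmi\gamma|^2$, consistent with the stated validity condition.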
In other words, the one-channel limit of the fluorescence model coincides with the full quantum mechanical model from Section \ref{model} when considering the conditional interaction for the particle until the first detection. In this way, the fully quantum mechanical detector model of Section \ref{model} not only justifies the fluorescence model for quantum arrival times, at least in the limit of (\ref{onechannel}), but one can conversely immediately carry over the results of the fluorescence model to the detector model. The investigation of the fluorescence model has shown that the essential features like reflection and delay \cite{dambo2002}, and main results like, e.g., linking Kijowski's arrival-time distribution to a particular measuring process \cite{hsm2003}, can be obtained from the full two-channel model as well as from its one-channel limit. Hence these results immediately carry over to the present detector model. Also, the derivation of a complex potential model for particle detection from two different physical models, viz.\ the fluorescence model and the present detector model, indicates the importance of the complex potentials and of Kijowski's arrival-time distribution, which in turn can be derived from the complex potentials approach. This connection is interesting since it can illuminate the physical background of otherwise heuristically introduced complex potentials. Differences, however, arise for example in applications to passage times since the reset state after a detection is not the same in the two models \cite{nhs}. \section{Discussion and extensions} \label{discussion} The model investigated in this paper has three ingredients, viz. a particle in whose spatial properties one is interested, a `detector' based on spins, and a bath of bosons, originally in the ground state. There is no direct measurement on either the particle of interest or the detector, but only on the bath, which is checked for bosons.
In this way one can hope to keep the disturbance of the particle by the measurement to a minimum. However, as in the fluorescence model, and as seen in \cite{jjh1999} for a simplified spin model, the present full model also yields a description by a complex potential and thus shows the typical unwanted features: There is a detection delay, due to the finite spin decay or flip rate, and there is also necessarily the possibility of reflection of the particle by the detector without the detection of a boson, due to the increased bath-detector coupling caused by the particle's wave function inside the detector. This reflection without boson detection causes a non-detection of the particle so that the probability density $w_1(t)$ in (\ref{w}) for the first detection is not normalized. A similar effect arises from the transmission of the particle without boson detection. In order to reduce the detection delay one may be tempted to increase the spin-bath coupling, which follows the particle's wave function inside the detector. As a by-product this would also decrease transmission without detection. However, the increase of this spatially dependent coupling means an increase of the absorbing potential $-\rmi \hbar A(\mathbf{x})/2$, and this will also increase the reflection without boson detection, so much so that in the limit of infinite coupling everything is reflected while nothing is detected. The same phenomenon occurs in the fluorescence model \cite{dambo2002} and is a typical feature of complex potentials, as already noted by Allcock~\cite{ga1969}. One can also try to reduce the influence of the spin-bath system on the particle and thus the latter's disturbance by decreasing the spin-bath coupling at a space point and simultaneously increasing the number of spins located there. This seems natural because it is the flip of a single spin which gives rise to the detection, and with a larger number of spins this can compensate for the weaker coupling.
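The reflection-versus-absorption trade-off noted above (everything reflected, nothing detected, in the infinite-coupling limit) can be made quantitative for a rectangular, purely imaginary potential $-\rmi\hbar A/2$ on an interval of length $d$, using the standard square-barrier matching formulas, which remain valid for complex potential heights. A sketch with $\hbar = m = 1$ and illustrative parameters, none taken from the text:

```python
import numpy as np

# Plane wave of energy E scattering off the complex square potential
# V = -i*hbar*A/2 on (0, d): standard barrier transmission/reflection
# amplitudes, valid for complex V.  hbar = m = 1, parameters illustrative.
hbar = m = 1.0
E, d = 2.0, 1.0
k = np.sqrt(2 * m * E) / hbar

def scatter(A):
    V = -1j * hbar * A / 2
    q = np.sqrt(2 * m * (E - V)) / hbar       # complex wave number inside
    den = np.cos(q * d) - 1j * (k**2 + q**2) / (2 * k * q) * np.sin(q * d)
    t = np.exp(-1j * k * d) / den
    r = -1j * (q**2 - k**2) / (2 * k * q) * np.sin(q * d) / den
    R, T = abs(r)**2, abs(t)**2
    return R, T, 1.0 - R - T                  # reflection, transmission, absorption

R1, T1, abs1 = scatter(2.0)     # moderate coupling: mostly absorbed (detected)
R2, T2, abs2 = scatter(1e5)     # very strong coupling: mostly reflected
```

For moderate $A$ the absorption (detection) probability dominates, while for very large $A$ the reflection probability approaches one and both transmission and absorption vanish, in line with Allcock's observation.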
To investigate this quantitatively we consider $N$ spins, later to be taken to $\infty$, in the same volume $V$ and $\chi^{(j)}(\mathbf{x})\equiv \chi_V(\mathbf{x})$ for all $j$. The coupling constants are taken in the form \begin{equation} g_{\bell}^{(j)} \equiv g_{\bell} = \frac{\Gamma \left(\omega_{\bell}, \mathbf{e}_{\bell} \right) + {\mathcal O} \left(L_\mathrm{bath}^{-1} \right)}{\sqrt{N}} \cdot \sqrt{\frac{\omega_\ell}{L_\mathrm{bath}^3}} \end{equation} and similarly for $\gamma_{\bell}^{(j)}$. Further, the ferromagnetic force experienced by an individual spin is assumed not to grow with increasing $N$, as is the case, e.g., for nearest-neighbor interaction. Then (\ref{Athree}) becomes \begin{eqnarray} \fl A \left( \mathbf{x} \right) & = & \sum_{j=1}^N \left( \tilde{\omega}_0 \right)^3 \left[\frac{c \left( \tilde{\omega}_0 \right) - \tilde{\omega}_0 c' \left( \tilde{\omega}_0 \right)}{c \left( \tilde{\omega}_0 \right)^4} \right] \int \frac{\rmd \Omega_\mathbf{e}}{(2 \pi)^2} \, \frac{\left| \Gamma \left( \tilde{\omega}_0 , \mathbf{e} \right) \right|^2 \chi_V ({\mathbf{x}})^2 + \left| \Gamma _\mathrm{spon} \left( \tilde{\omega}_0 , \mathbf{e} \right) \right|^2}{N} \nonumber \\ \fl & = & \left( \tilde{\omega}_0 \right)^3 \left[\frac{c \left( \tilde{\omega}_0 \right) - \tilde{\omega}_0 c' \left( \tilde{\omega}_0 \right)}{c \left( \tilde{\omega}_0 \right)^4} \right] \int \frac{\rmd \Omega_\mathbf{e}}{(2 \pi)^2} \left( \left| \Gamma \left( \tilde{\omega}_0 , \mathbf{e} \right) \right|^2 \chi_V ({\mathbf{x}})^2 + \left| \Gamma _\mathrm{spon} \left( \tilde{\omega}_0 , \mathbf{e} \right) \right|^2 \right) \end{eqnarray} which is just the decay rate for a single spin in $V$, with resonance frequency $\tilde{\omega}_0$ and the coupling as for $N=1$. A similar result holds for $\delta_\mathrm{shift} (\mathbf{x})$, defined in (\ref{3d-shift}).
Thus, simply increasing the number of spins $N$ and scaling the coupling constants with $\sqrt{1/N}$ leaves $A$ and $\delta_\mathrm{shift}$ invariant and thus does not change the dynamics until the first detection (spin flip), and in particular does not help to avoid reflection of undetected particles. Any other scaling power of $N$, however, would not lead to a reasonable detector model in the limit $N \rightarrow \infty$ since then either $A$ and $\delta_\mathrm{shift}$ would go to zero or to $\infty$. Similar results also hold for the quantum optical fluorescence model. It is interesting to note that, although it is the flip of one single spin which triggers the detection, it is the totality of all spins located in $V$ which determines the conditional time evolution. It has been shown in the context of complex potentials, however, that one can deal with the delay/transmission-versus-reflection problem by dropping the restriction to rectangular potentials \cite{mbm1995, pms1998}. In fact, it is possible to absorb nearly the complete wave packet in a very short spatial interval; given a wave packet with a specific energy range, an appropriate imaginary potential can be constructed by means of inverse scattering techniques. We stress that there is no such thing as \emph{the} optimal imaginary potential for all wave packets; rather, the construction of the optimized potential requires \emph{a priori} information about the energy range of the wave packet under consideration. The present detector model is applicable not only to arrival-time measurements, but also to more involved tasks like a measurement of passage times. A detailed analysis including numerical examples will appear elsewhere \cite{nhs}. It turns out that a too weak spin-bath coupling yields a broad passage-time distribution due to the slow response of the detector to the presence of the particle.
A too strong spin-bath coupling, on the other hand, also yields a broad passage-time distribution due to the strong distortion of the wave packet during the measurement process. This is a quantum effect. There is, however, an intermediate range for $A(\mathbf{x})$ yielding rather narrow passage-time distributions. Indeed, a rough estimate in \cite{nhs} shows that for an optimal choice of incident wave packet and decay rate $A (\mathbf{x})$ the precision of the measurement can be expected to behave like $E^{-3/4}$, where $E$ is the energy of the incident particle. For low velocities, this means some improvement as compared to the results of models coupling the particle continuously or semi-continuously to a clock, where one has $E^{-1}$-behavior \cite{ap1980, amm2003}. Thus, it appears that the latter $E^{-1}$ behavior of the precision is not due to a fundamental limitation related to a kind of time-energy uncertainty relation. \section*{Summary} We have investigated the continuum limit of a fully quantum mechanical spin-model for the detection of a moving particle when the spin-boson interaction satisfies the Markov property. In an example with a single spin and 40 boson modes it was shown numerically that the continuum limit gave a good approximation to the discrete model up to times of revivals. We have derived analytical expressions for the arrival-time distribution. The conditional Schr\"odinger equation governing the particle's time evolution before the detection has the same form as the one-channel limit of the fluorescence model, which is based on the use of a laser-illuminated region. The quantum spin detector model provides an easier way to obtain this one-channel equation, since no additional assumptions or limits are needed. \appendix \section{The quantum jump approach for several spins} \label{full_model} The continuum limit and quantum jump approach for the full model in Section \ref{model} is quite similar until (\ref{Dt_full}).
In second order perturbation theory w.r.t.\ $H_\mathrm{coup} + H_\mathrm{spon}$ one obtains \begin{eqnarray} \fl \bra{0} U_I (\nu \Delta t, [\nu-1] \Delta t) \ket{0} \ket{\uparrow}_\mathrm{all} = \ket{\uparrow}_\mathrm{all} \left( 1 \!\!\!\; \mathrm{l} - \sum_{j,\bell} \int\limits_{[\nu-1] \Delta t}^{\nu \Delta t} \rmd t_1 \int\limits_{[\nu-1] \Delta t}^{t_1} \rmd t_2 \, \right. \nonumber \\ \fl \left. \phantom{\int\limits_{[\nu-1] \Delta t}^{\nu \Delta t} \rmd t_1} \rme^{\rmi \left( \widetilde{\omega}_0^{(j)} - \omega_\ell \right) \left( t_1 - t_2 \right)} \overline{\left( \chi^{(j)} \left( \hat{\mathbf{x}} \left( t_1 \right) \right) g_{\bell}^{(j)} + \gamma_{\bell}^{(j)} \right)} \cdot \left( \chi^{(j)} \left( \hat{\mathbf{x}} \left( t_2 \right) \right) g_{\bell}^{(j)} + \gamma_{\bell}^{(j)} \right) \right) \label{Dt_mult_three} \end{eqnarray} where \begin{equation} \widetilde{\omega}_0^{(j)} \equiv \omega_0^{(j)} - \left(\sum_{k=1}^{j-1} \omega_J^{(kj)} + \sum_{k=j+1}^D \omega_J^{(jk)} \right) \end{equation} are modified resonance frequencies arising from the ferromagnetic spin-spin coupling. The phases $f_{\bell}^{(j)}$ have canceled, just as in the one-spin case, since only products of the form $\hat{a}_\ell \hat{\sigma}_+^{(j)} \hat{a}_\ell^\dagger \hat{\sigma}_-^{(j)}$ contribute to the second order, and consequently the contributions from different spins do not mix. Similar to (\ref{kappa}) one can define correlation functions $\kappa^{(j)}_{\overline{g}g}$, $\kappa^{(j)}_{\overline{g}\gamma}$, $\kappa^{(j)}_{\overline{\gamma}g}$ and $\kappa^{(j)}_{\overline{\gamma}\gamma}$ in an obvious way.
Before the continuum limit the bath modes are indexed by the wave vectors \begin{equation} \bell = \frac{2 \pi}{L_\mathrm{bath}} \left( \begin{array}{c} n_1 \\ n_2 \\ n_3 \end{array} \right), \;\; n_i =1,~2,~\cdots \end{equation} In analogy to (\ref{coupling_constants}), the coupling constants are taken in the form \begin{eqnarray}\label{coupling_constants_three} g_{\bell}^{(j)}&=& \left(\Gamma^{(j)} (\omega_\ell, \mathbf{e}_{\bell}) + \mathcal{O}(L_\mathrm{bath}^{-1})\right) \cdot \sqrt{\frac{\omega_\ell}{L_\mathrm{bath}^3}}\\ \gamma_{\bell}^{(j)} &=& \left(\Gamma_\mathrm{spon}^{(j)} (\omega_\ell, \mathbf{e}_{\bell}) + \mathcal{O}(L_\mathrm{bath}^{-1})\right) \cdot \sqrt{\frac{\omega_\ell}{L_\mathrm{bath}^3}} \end{eqnarray} with $\omega_\ell = c (\omega_\ell) \ell$, $\mathbf{e}_{\bell} = \bell/\ell$, and \begin{equation} \label{gg} \left| \Gamma^{(j)} (\omega_\ell, \mathbf{e}_{\bell}) \right|^2 \gg \left| \Gamma_\mathrm{spon}^{(j)} (\omega_\ell, \mathbf{e}_{\bell}) \right|^2. \end{equation} Again the Markov property is assumed to hold for the correlation functions in the continuum limit. 
The procedure is then analogous to the single spin case, and one obtains (\ref{cond}) with the conditional Hamiltonian \begin{equation} H_\mathrm{cond} = \frac{\hat{\mathbf{p}}^2}{2m} + \frac{\hbar}{2} \{\delta_\mathrm{shift} (\hat{\mathbf{x}} ) - \rmi A(\hat{\mathbf{x}} ) \} \end{equation} where $A(\mathbf{x})$ is given in analogy to (\ref{A}) by \begin{eqnarray}\label{Athree} A\left(\mathbf{x} \right) &=& 2\, \mathrm{Re}\, \sum_j \int_0^\infty \rmd\tau \, \{\kappa_{\overline{g}g}^{(j)}(\tau) \chi^{(j)}({\mathbf{x}})^2 + \kappa_{\overline{\gamma}\gamma}^{(j)}(\tau)\} \\ &=& \sum_j \left( \tilde{\omega}_0^{(j)} \right)^3 \left[\frac{c \left( \tilde{\omega}_0^{(j)} \right) - \tilde{\omega}_0^{(j)} c' \left( \tilde{\omega}_0^{(j)} \right)}{c \left( \tilde{\omega}_0^{(j)} \right)^4} \right] \int \frac{\rmd \Omega_\mathbf{e}}{(2 \pi)^2}\nonumber\\ && \phantom{\sum_j \left( \tilde{\omega}_0^{(j)} \right)^3} \times \left( \left| \Gamma^{(j)} \left( \tilde{\omega}_0^{(j)}, \mathbf{e} \right) \right|^2 \chi^{(j)}({\mathbf{x}})^2 + \left| \Gamma^{(j)}_\mathrm{spon} \left( \tilde{\omega}_0^{(j)}, \mathbf{e} \right) \right|^2 \right)\nonumber \end{eqnarray} where the $\rmd\Omega_\mathbf{e}$ integral is taken over the unit sphere and where the contributions from $\kappa^{(j)}_{\overline{g}\gamma}$, $\kappa^{(j)}_{\overline{\gamma}g}$ have been neglected, due to (\ref{gg}). The terms have the familiar form of the Einstein coefficients in quantum optics, where there would also be a sum over polarizations. $\delta_\mathrm{shift} \left(\mathbf{x}\right)$ is given by \begin{equation}\label{3d-shift} \delta_\mathrm{shift} \left(\mathbf{x}\right) = 2\, \mathrm{Im}\, \sum_j \int_0^\infty \rmd\tau \{\kappa_{\overline{g}g}^{(j)}(\tau) \chi^{(j)}({\mathbf{x}})^2 + \kappa_{\overline{\gamma}\gamma}^{(j)}(\tau)\}. \end{equation} Since the $\kappa_{\overline{\gamma}\gamma}$ term leads to a constant it just gives an overall phase factor and can therefore be omitted. 
\section*{References} \end{document}
\begin{document} \title[ON A GENERALIZATION FOR QUATERNION SEQUENCES] {ON A GENERALIZATION FOR \\QUATERNION SEQUENCES} \author[Serp\.{i}l HALICI]{Serp\.{i}l HALICI} \address{ Pamukkale University,\\ Faculty of Arts and Sciences,\\ Department of Mathematics,\\ Denizli/TURKEY} \email{[email protected]} \subjclass{11R52, 11B37, 11B39, 11B83} \keywords{Quaternion, Horadam Sequence, Generalized Fibonacci Sequence} \date{2016} \begin{abstract} In this study, we introduce a new class of quaternion numbers. We show that this new class includes all the quaternion numbers that have been studied by many authors, such as the Fibonacci, Lucas, Pell, Jacobsthal, Pell-Lucas and Jacobsthal-Lucas quaternions. Moreover, for these newly defined quaternion numbers we give the generating function, norm value, Cassini identity, summation formula and some of their properties. \end{abstract} \maketitle \section{Introduction} Quaternions play an important role in quantum physics, computer science and in many areas of mathematics. Also, quaternionic numbers are useful tools for the representation and generalization of quantities in high-dimensional physical theories. Classically, quaternions are represented in the form of hyper-complex numbers with three imaginary components; \begin{equation} q=(a_0, a_1, a_2, a_3)=a_0+a_1i+a_2j+a_3k \end{equation} where $i, j$ and $k$ are mutually orthogonal unit bivectors and $a_0, a_1, a_2, a_3$ are real numbers. The vectors $i, j, k$ obey the famous multiplication rules $i^2=j^2=k^2=ijk=-1$, discovered by Hamilton in $1843$. The conjugate of any quaternion $q$ is given by negating the three imaginary components [13]: \begin{equation} \overline{q}=(a_0,-a_1,-a_2,-a_3)=a_0-a_1i-a_2j-a_3k. \end{equation} For any quaternions $p$ and $q$, the identity $\overline{q}\,\overline{p}=\overline{pq}$ holds. The norm value of a quaternion $q$ is given by the following formula [13]: \begin{equation} ||q||=q\overline{q}=\overline{q}q.
\end{equation} The modulus of the quaternion $q$ is defined as the square root of its norm, that is, $|q|=\sqrt{||q||}.$ Every non-zero quaternion $q$ has a multiplicative inverse, which can be stated as follows: $$q^{-1}=\frac{\overline{q}}{||q||}.$$ Since every non-zero quaternion has a multiplicative inverse and the property $||pq||=||p||\,||q||$ is satisfied for any two quaternions $p$ and $q$, the quaternion algebra $\mathbb{H}$ is a normed division algebra. In addition, we note that in general $pq\neq qp$. More detailed and useful background can be found in [12, 13]. \\\\ Different types of quaternions have been studied by many mathematicians [3, 4, 5, 6, 14, 16, 17]. One of the earliest studies on this subject belongs to Horadam [8], who defined the Fibonacci quaternions and gave some recurrence relations for them. Swamy [15] defined the generalized Fibonacci sequence. In [5] Halici gave the Binet formula, generating function and some identities for the Fibonacci quaternions. In [14] Polatli and Kesim investigated the quaternions with generalized Fibonacci and Lucas number components. In [3] Bolat and Ipek studied the Pell and Pell-Lucas quaternions. In [16, 17] Szynal-Liana and Wloch introduced the Pell and Jacobsthal quaternions, respectively, but it should be noted that they did not give the Binet formulas of these quaternions. In [2] Catarino studied the modified Pell quaternions and gave their norm value, generating function, Binet formula and Cassini identity.\\\\ Inspired by these works, in this paper we define a new generalization of quaternion sequences, whose members we call Horadam quaternions. It should be noted that this new sequence generalizes the Fibonacci, Lucas, Pell, Pell-Lucas, Jacobsthal and Jacobsthal-Lucas quaternions, which have been studied separately by various authors [3, 5, 13, 16]. \section{\textbf{HORADAM SEQUENCE}} We start by recalling some fundamental properties of Horadam numbers.
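Before doing so, the quaternion facts recalled in the introduction ($\overline{pq}=\overline{q}\,\overline{p}$, the multiplicativity $||pq||=||p||\,||q||$, and the non-commutativity of $\mathbb{H}$) can be verified directly by a short computation; the component values below are arbitrary test choices:

```python
# Quaternions stored as 4-tuples (a0, a1, a2, a3) = a0 + a1*i + a2*j + a3*k.
def qmul(x, y):
    # Hamilton product, using i^2 = j^2 = k^2 = ijk = -1
    a0, a1, a2, a3 = x
    b0, b1, b2, b3 = y
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(x):
    # conjugate: negate the three imaginary components
    return (x[0], -x[1], -x[2], -x[3])

def qnorm(x):
    # ||q|| = q * conj(q), a real number
    return sum(c * c for c in x)

p_, q_ = (1, 2, -3, 4), (2, 0, 5, -1)        # arbitrary test quaternions
assert qconj(qmul(p_, q_)) == qmul(qconj(q_), qconj(p_))   # conj(pq) = conj(q)conj(p)
assert qnorm(qmul(p_, q_)) == qnorm(p_) * qnorm(q_)        # ||pq|| = ||p|| ||q||
assert qmul(p_, q_) != qmul(q_, p_)                        # H is not commutative
```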
Horadam defined the following sequence [7]: \begin{equation} W_n = W_n (a, b; p, q)=pW_{n-1}+qW_{n-2} ;\,\ n\geq 2,\,\ W_0 = a, \,\ W_1 = b. \end{equation} Here $W_n$ is called the $n$th Horadam number, a Fibonacci-type number defined recursively by a second-order linear recurrence relation. The characteristic equation is $x^2-px-q=0$, and hence its roots are $\alpha=\frac{p+\sqrt{p^2+4q}}{2}$ and $\beta=\frac{p-\sqrt{p^2+4q}}{2}$. It is well known that the Binet formula for the Horadam numbers is \begin{equation}W_n=\frac{A\alpha^n-B\beta^n}{\alpha-\beta}; \quad A=b-a\beta, \quad B=b-a\alpha. \end{equation} The sequence $(W_{n})$ is known as the Horadam sequence. In fact, the Horadam sequence yields the well-known special sequences such as the Fibonacci, Pell, Jacobsthal, Pell-Lucas, Jacobsthal-Lucas, Tagiuri, Fermat and Fermat-Lucas sequences. For example, \begin{equation} (W_n)={W_n(0,1;1,1)}_0^\infty, \,\ (W_n)={W_n(2,1;1,1)}_0^\infty \end{equation} are the Fibonacci and Lucas sequences, respectively.\\\\ Besides the formula $(2.2)$, another Binet formula for the Horadam sequence can be given as in the following lemma. \begin{lem} For the Horadam sequence, we have \begin{equation} W_n=bT_n+aqT_{n-1}\end{equation} where $a, b$ are the initial values, $p, q$ are the recurrence coefficients and \begin{equation} T_n=\frac{1}{\sqrt{p^2+4q}} \left \{ \Big( \frac{p+\sqrt{p^2+4q}}{2} \Big)^n- \Big( \frac{p-\sqrt{p^2+4q}}{2}\Big)^n \right \}. \end{equation} \end{lem} \begin{proof} The proof follows easily from the recurrence relation and the Binet formula of the Horadam sequence. \end{proof} It should be noted that choosing $(0, 1; p, q)$ and $(2, p; p, q)$ for the values $(a, b; p, q)$ in the formula $(2.4)$ yields the generalized Fibonacci and Lucas sequences, respectively.
That is, \begin{equation} W_n(0,1;p,q)=T_n=\Big( \frac{\alpha^n-\beta^n}{\alpha-\beta} \Big) \end{equation} and \begin{equation} W_n (2,p;p,q)=\alpha^n+\beta^n .\end{equation} In particular, if we write $p= q= 1$ in the equation $(2.5)$ and use the formula $(2.4)$, then we obtain the Binet formula of the Fibonacci sequence; \begin{equation} W_n(0,1;p,q)=T_n= \frac{1}{\sqrt{5}} \left \{ \Big( \frac{1+\sqrt{5}}{2} \Big)^n- \Big( \frac{1-\sqrt{5}}{2}\Big)^n \right \}. \end{equation} \section{\textbf{HORADAM QUATERNIONS}} In this section we define the Horadam quaternions, which generalize all the quaternion sequences mentioned above. Also, we give some properties related to these quaternions such as the Binet formula, the generating function, the Cassini formula and some summation formulas. \\\\ Firstly, for $n\geq 3$ and integers $p, q$ we recall the following definition of Swamy [15]. \begin{equation} H_n=H_{n-1}+H_{n-2};\,\ H_1=p, \,\ H_2= p+q .\end{equation} In fact, the formula $(3.1)$ is a generalization of the Fibonacci numbers. The author also gave the following recurrence relation for this sequence: \begin{equation} H_{n+1}=qF_{n}+pF_{n+1}.\end{equation} He then introduced a new generalization of the Fibonacci quaternions: \begin{equation} P_n = H_n + i H_{n+1} + j H_{n+2} + k H_{n+3}. \end{equation} Moreover, he obtained some useful equations involving these quaternions, as follows. \begin{equation} P_n \overline{P_n} = \overline{P_n} P_n = 3\left\{(2pq-q^2)F_{2n+2} + (p^2 + q^2) F_{2n+3}\right\}, \end{equation} \begin{equation} P_n^2 + P_{n-1}^2=2(H_n P_n+ H_{n-1}P_{n-1})-(P_n \overline{P_n} + P_{n-1} \overline{P_{n-1}}), \end{equation} \begin{equation} P_n\overline{P_n} + P_{n-1}\overline{P_{n-1}}=3(p^2 L_{2n+2}+2pq L_{2n+1} + q^2 L_{2n}). \end{equation} Now, in a manner similar to Horadam's definition, for nonnegative integers $n$ we define a new quaternion as follows.
\begin{equation} Q_{w,n}=W_{n}+W_{n+1}\, i +W_{n+2}\, j +W_{n+3}\, k \end{equation} where $W_n$ is the $n$th Horadam number. Because of its components, this new quaternion is called the Horadam quaternion. After some straightforward calculations we obtain the recurrence relation \begin{equation} Q_{w,n+2}=pQ_{w,n+1}+qQ_{w,n}. \end{equation} We will show that this new quaternionic sequence generalizes all of the quaternion sequences above. Let us calculate the initial values of the sequence in the equation $(3.7)$: \begin{equation} Q_{w,0}=(a, b, pb+qa, p^2b+pqa+qb), \end{equation} \begin{equation} Q_{w,1}=(b, pb+qa, p^2b+pqa+qb, p^3b+p^2qa+2pqb+q^2a). \end{equation}\\\\ It should be noted that substituting $a=0$, $b=1$ and $q=1$ in the equations $(3.9)$ and $(3.10)$ gives \begin{equation} Q_{w,0}=(0, 1, p, p^2+1) \,\ \mbox{and} \,\ Q_{w,1}=(1,p,p^2+1,p^3+2p), \end{equation} respectively. $Q_{w,0}$ and $Q_{w,1}$ are the initial values of the generalized Fibonacci and Lucas quaternions which are given by Polatli and Kesim in [14]. Since the characteristic equation of the recurrence relation $(3.8)$ is $\lambda^2-p\lambda-q=0,$ its roots are \begin{equation} \lambda_1=\alpha=\frac{p+\sqrt{p^2+4q}}{2}, \,\ \lambda_2=\beta=\frac{p-\sqrt{p^2+4q}}{2}. \end{equation} In [5], the Binet formula for the Fibonacci quaternions was given as \begin{equation} Q_n=\frac{1}{\sqrt{5}}(\underline{\alpha} \alpha^n - \underline{\beta} \beta^n) \end{equation} where $\alpha$ and $\beta$ are the roots of the characteristic equation of the Fibonacci sequence and \begin{equation} \underline{\alpha}=1+i\alpha+j\alpha^2+k\alpha^3 , \,\ \underline{\beta}=1+i\beta+j \beta^2+k\beta^3 . \end{equation} In the next theorem we give the Binet formula for the Horadam quaternions.
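Before that, the recurrence $(3.8)$ and the initial values $(3.9)$–$(3.10)$ can be verified componentwise; this numerical sketch (with arbitrarily chosen sample parameters) is not part of the original text.

```python
def horadam(a, b, p, q, length):
    """First `length` Horadam numbers W_n(a, b; p, q)."""
    W = [a, b]
    while len(W) < length:
        W.append(p * W[-1] + q * W[-2])
    return W

def Q(W, n):
    """Horadam quaternion Q_{w,n} as a 4-tuple of components (1, i, j, k)."""
    return (W[n], W[n + 1], W[n + 2], W[n + 3])

a, b, p, q = 3, 5, 2, 3                      # arbitrary sample values
W = horadam(a, b, p, q, 15)

# recurrence (3.8), componentwise
assert all(Q(W, n + 2)[m] == p * Q(W, n + 1)[m] + q * Q(W, n)[m]
           for n in range(10) for m in range(4))

# initial values (3.9) and (3.10)
assert Q(W, 0) == (a, b, p*b + q*a, p*p*b + p*q*a + q*b)
assert Q(W, 1) == (b, p*b + q*a, p*p*b + p*q*a + q*b,
                   p**3*b + p*p*q*a + 2*p*q*b + q*q*a)
```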
\begin{thm} The Binet formula for the Horadam quaternions is \begin{equation} Q_{w,n}=\frac{1}{\alpha - \beta} (A\underline{\alpha} \alpha^n - B\underline{\beta}\beta^n) =b\underline{T}_n+ aq\underline{T}_{n-1} \end{equation} where $A=b-a\beta, \,\ B=b-a \alpha$ and $ \underline{T}_{n}=\frac{\underline{\alpha}\alpha^{n}-\underline{\beta}\beta^{n}}{\alpha-\beta} $ is the quaternionic analogue of the sequence $T_n$ in the equation $(2.5)$. \end{thm} \begin{proof} Using the definition of the Horadam quaternions and the Binet formula of their components we write \scriptsize { $$Q_{w,n}=\frac{A\alpha^n-B\beta^n}{\alpha-\beta}+\frac{A\alpha^{n+1}-B\beta^{n+1}}{\alpha-\beta}i+\frac{A\alpha^{n+2}-B\beta^{n+2}}{\alpha-\beta}j+\frac{A\alpha^{n+3}-B\beta^{n+3}}{\alpha-\beta}k$$} \normalsize Then the equation $(3.15)$ follows by some direct computations.\end{proof} In the following remark we derive the Binet formulas for different types of quaternion sequences. \begin{rem} We can list consequences of Theorem $ 3.1 $ as follows:\\\\ i) If in the equation $(3.15)$ we choose $a = 0, b = 1$ and $ p= q =1$, then we get \begin{equation} \scriptsize Q_{w,n}=\frac{1}{\alpha - \beta} (\underline{\alpha} \alpha^n - \underline{\beta}\beta^n); \,\ \underline{\alpha}=1+i\alpha+j\alpha^2+k\alpha^3 , \,\ \underline{\beta}=1+i\beta+j \beta^2+k\beta^3 \end{equation} which is the Binet formula for the Fibonacci quaternions. The formula $(3.16)$ is given by Halici in [5].\\\\ ii) Taking $a=0, b=1, p=2 $ and $ q=1 $ in the equation $(3.15)$ we have \begin{equation} Q_{w,n}=\frac{1}{2\sqrt{2}} ( \underline{\alpha}(1+\sqrt{2})^n-\underline{\beta}(1-\sqrt{2})^n ) \end{equation} which is the Binet formula for the Pell quaternions. It should be noted that the Pell quaternions were studied and their Binet formula was given by Bolat and Ipek in [3], in which this formula is the same as $(3.
17)$.\\\\ iii) If in the equation $(3.15)$ we take $a=0, b=1, p=1$ and $ q=2$, then we have \begin{equation} Q_{w,n}=\frac{1}{\alpha-\beta} ( \underline{\alpha}\alpha^n-\underline{\beta}\beta^n)=\frac{1}{3} ( \underline{\alpha}2^n-\underline{\beta}(-1)^n ) \end{equation} where $\underline{\alpha}=1+2i+4j+8k$ and $\underline{\beta}=1-i+j-k$. The equation $(3.18)$ gives the $n$th Jacobsthal quaternion. We note that this quaternion type was considered by Szynal-Liana and Wloch in [17], but they did not give its Binet formula.\\\\ iv) If in the equation $(3.15)$ we choose $a=2, b=1, p=q=1$, then we get the Binet formula of the Lucas quaternions as follows: \begin{equation} Q_{w,n}=\underline{\alpha}\alpha^n+\underline{\beta}\beta^n=K_n \end{equation} Here \scriptsize { $$\underline{\alpha}=1+i\alpha+j\alpha^2+k\alpha^3;\,\ \underline{\beta}=1+i\beta+j\beta^2+k\beta^3 ; \,\ \alpha=\frac{1+\sqrt{5}}{2}, \,\ \beta=\frac{1-\sqrt{5}}{2}.$$ } \\\\ \normalsize v) If we choose $a=2, b=1, p=2, q=1$ in the equation $(3.15)$, then we have the following formula: \begin{equation} Q_{w,n}=\frac{1}{\alpha-\beta}(\underline{\alpha}(1-2\beta)\alpha^n-\underline{\beta}(1-2\alpha)\beta^n).\end{equation} The formula $(3.20)$ gives the $n$th Pell-Lucas quaternion.\\\\ vi) We define a new quaternion type with Jacobsthal-Lucas number components as follows: \begin{equation} Q_{j,n}=j_n+ij_{n+1}+jj_{n+2}+kj_{n+3} . \end{equation} Using the relations $ j_{n+2}=j_{n+1}+2j_n; \,\ j_0=2, j_1=1 $ and the closed form $j_n=2^n+(-1)^n$, we obtain \begin{equation} Q_{j,n}=(1+2i+4j+8k)\alpha^n + (1-i+j-k) \beta^n, \end{equation} where $\alpha=2$ and $\beta=-1$. The equation $(3.22)$ gives the $n$th Jacobsthal-Lucas quaternion. \end{rem} Generating functions provide a powerful tool for solving linear homogeneous recurrence relations with constant coefficients. These functions were first used by A. De Moivre, in order to solve the Fibonacci recurrence relation.
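Before turning to generating functions, the special cases in the remark above can be spot-checked numerically: the Jacobsthal–Lucas decomposition (with the closed form $j_n = 2^n + (-1)^n$) in exact integers, and Halici's Binet formula $(3.16)$ for the Fibonacci quaternions in floating point. This sketch is illustrative only.

```python
import math

# Jacobsthal-Lucas: j_{n+2} = j_{n+1} + 2 j_n, j_0 = 2, j_1 = 1, and j_n = 2^n + (-1)^n
j = [2, 1]
for _ in range(12):
    j.append(j[-1] + 2 * j[-2])
assert all(j[n] == 2**n + (-1)**n for n in range(len(j)))

# componentwise form (3.22): Q_{j,n} = (1+2i+4j+8k) 2^n + (1-i+j-k)(-1)^n
ua, ub = (1, 2, 4, 8), (1, -1, 1, -1)
for n in range(9):
    assert (j[n], j[n+1], j[n+2], j[n+3]) == tuple(
        A * 2**n + B * (-1)**n for A, B in zip(ua, ub))

# Fibonacci quaternions: each component of (3.16) equals F_{n+m}
alpha, beta = (1 + math.sqrt(5)) / 2, (1 - math.sqrt(5)) / 2
F = [0, 1]
for _ in range(12):
    F.append(F[-1] + F[-2])
for n in range(8):
    for m in range(4):   # component of 1, i, j, k
        binet = (alpha**m * alpha**n - beta**m * beta**n) / (alpha - beta)
        assert abs(binet - F[n + m]) < 1e-9
```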
More generally, in the following theorem we present the generating function for the Horadam quaternions. \begin{thm} The generating function of the Horadam quaternions is \begin{equation} g(t)=\frac{Q_{w,0}+(Q_{w,1}-pQ_{w,0})t}{1-pt-qt^2}. \end{equation} Here $Q_{w,0}$ and $Q_{w,1}$ are the initial values of the Horadam quaternions. \end{thm} \begin{proof} Let $g(t)$ be the generating function of the Horadam quaternions. Then we have \begin{equation} g(t)=\sum^\infty_{n=0}{Q_{w,n}t^n}. \end{equation} Multiplying both sides of the equation $(3.24)$ by $pt$ and $qt^2$ we get $$ptg(t)=\sum^\infty_{n=0}{pQ_{w,n}t^{n+1}} \mbox{ and } qt^2g(t)=\sum^\infty_{n=0}{qQ_{w,n}t^{n+2}}.$$ From the recurrence relation $Q_{w,n+2}=pQ_{w,n+1}+qQ_{w,n}$ we then get\\ \begin{equation} g(t)=\frac{Q_{w,0}+(Q_{w,1}-pQ_{w,0})t}{1-pt-qt^2}, \end{equation} as desired. \end{proof} In the following remark we investigate some special cases of the generating function given in $(3.23)$. \begin{rem} We can list consequences of Theorem $ 3.3 $ as follows:\\\\ i) If in the equation $(3.25)$ we write the values $ a=0, b=1, p = q= 1$, then we get \begin{equation} g(t)=\frac{(0,1,1,2)+(1,0,1,1)t}{1-t-t^2}=\frac{t+i+(1+t)j+(2+t)k}{1-t-t^2}. \end{equation} The equation $(3.26)$ gives the generating function of the Fibonacci quaternions. This function is given by Halici in [5]. \\\\ ii) If in the equation $(3.25)$ we write the values $a=2, b=1, p=q=1$, then we have \begin{equation}\scriptsize g(t)=\frac{(2,1,3,4)+(-1,2,1,3)t}{1-t-t^2}=\frac{(2-t)+(1+2t)i+(3+t)j+(4+3t)k}{1-t-t^2} \end{equation} which is the generating function for the Lucas quaternions. \\\\ iii) If we write $a=0, b=1, p=2, q=1$ in the equation $(3.25)$, then we have the following function: \begin{equation} \scriptsize g(t)=\frac{(0,1,2,5)+(1,0,1,2)t}{1-2t-t^2}=\frac{t+i+(2+t)j+(5+2t)k}{1-2t-t^2}. \end{equation} It should be noted that this quaternion type was studied by Bolat and Ipek in [3], but they did not give its generating function.
\\\\ iv) If in the equation $(3.25)$ we write the values $a=0, b=1, p=1, q=2$, then we have \begin{equation} \scriptsize g(t)=\frac{(0,1,1,3)+(1,0,2,2)t}{1-t-2t^2}=\frac{t+i+(1+2t)j+(3+2t)k}{1-t-2t^2} \end{equation} which is the generating function for the Jacobsthal quaternions.\\\\ v) If we write $a=2, b=1, p=2, q=1$ in the equation $(3.25)$, then we have \begin{equation} \scriptsize g(t)=\frac{(2,1,4,9)+(-3,2,1,4)t}{1-2t-t^2}=\frac{(2-3t)+(1+2t)i+(4+t)j+(9+4t)k}{1-2t-t^2} \end{equation} which is the generating function for the Pell-Lucas quaternions.\\\\ vi) If we write $a=2, b=1, p=1, q=2$ in the equation $(3.25)$, then we have \begin{equation} \scriptsize g(t)=\frac{(2,1,5,7)+(-1,4,2,10)t}{1-t-2t^2}=\frac{(2-t)+(1+4t)i+(5+2t)j+(7+10t)k}{1-t-2t^2} \end{equation} which is the generating function for the Jacobsthal-Lucas quaternions. \end{rem} It is well known that the Cassini formula is one of the oldest identities involving Fibonacci-type numbers. In the following theorem we give the Cassini identity for the Horadam quaternions. \begin{thm} The Cassini formula for the Horadam quaternions is as follows: \begin{equation} Q_{w,n-1} Q_{w,n+1} - Q_{w,n}^2=\frac{AB\alpha^{n-1}\beta^{n-1}}{\alpha-\beta}(\beta \underline{\alpha}\underline{\beta}-\alpha \underline{\beta}\underline{\alpha}).
\end{equation} \end{thm} \begin{proof} Using the Binet formula, it can be shown that \scriptsize $$\scriptsize Q_{w,n-1} Q_{w,n+1} - Q_{w,n}^2 =\frac{1}{(\alpha-\beta)^2} \big\{ \big( A\underline{\alpha} \alpha^{n-1} - B \underline{\beta} \beta^{n-1} \big) \big(A\underline{\alpha} \alpha^{n+1} - B \underline{\beta} \beta^{n+1} \big)-\big(A\underline{\alpha} \alpha^{n} - B \underline{\beta} \beta^{n} \big)^2 \big\} $$ \scriptsize $$ Q_{w,n-1} Q_{w,n+1} - Q_{w,n}^2 =\frac{1}{(\alpha-\beta)^2}\big(-AB\underline{\alpha}\underline{\beta}\alpha^{n-1}\beta^{n+1}-AB\underline{\beta}\underline{\alpha}\alpha^{n+1}\beta^{n-1} + AB \underline{\alpha}\underline{\beta}\alpha^n\beta^n+AB\underline{\beta}\underline{\alpha}\alpha^n\beta^n\big)$$ \normalsize Thus, we have the following equation: $$ Q_{w,n-1} Q_{w,n+1} - Q_{w,n}^2 =\frac{AB\alpha^{n-1}\beta^{n-1}}{\alpha-\beta}(\beta \underline{\alpha}\underline{\beta}-\alpha \underline{\beta}\underline{\alpha}) $$ \end{proof} Consequently, we can give the following remark, which covers some special cases of the Cassini formula for the Horadam quaternions. \begin{rem} We can interpret consequences of Theorem $ 3.5 $ as follows:\\\\ Writing in the equation $(3.32)$ the values of $A, B $ and $ \alpha, \beta $ for the Fibonacci quaternions, one gets \begin{equation}\scriptsize Q_{w,n-1} Q_{w,n+1} - Q_{w,n}^2=\frac{AB\alpha^{n-1}\beta^{n-1}}{\alpha-\beta}(\beta \underline{\alpha}\underline{\beta}-\alpha \underline{\beta}\underline{\alpha})=(-1)^n (2+2i+4j+3k). \end{equation} From [5] we know that \begin{equation}\scriptsize Q_{w,n-1} Q_{w,n+1} - Q_{w,n}^2=(-1)^n (2Q_1 -3 k) \end{equation} where $Q_n$ is the $n$th Fibonacci quaternion. After the necessary calculations one sees that the last two equations agree. Thus, $$ Q_{w,n-1} Q_{w,n+1} - Q_{w,n}^2=(-1)^n (2Q_1 -3 k)= (-1)^n (2+2i+4j+3k). $$
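This Fibonacci case of the Cassini identity can be verified with exact integer arithmetic; note that quaternion multiplication is noncommutative, so the order of factors matters. The helper names below are illustrative.

```python
def qmul(x, y):
    """Hamilton product of quaternions given as (1, i, j, k) component tuples."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

F = [0, 1]
for _ in range(14):
    F.append(F[-1] + F[-2])

def QF(n):
    """Fibonacci quaternion Q_n = F_n + F_{n+1} i + F_{n+2} j + F_{n+3} k."""
    return (F[n], F[n+1], F[n+2], F[n+3])

# Q_{n-1} Q_{n+1} - Q_n^2 = (-1)^n (2 + 2i + 4j + 3k), as in (3.33)
for n in range(1, 10):
    lhs = tuple(u - v for u, v in zip(qmul(QF(n-1), QF(n+1)), qmul(QF(n), QF(n))))
    assert lhs == tuple((-1)**n * c for c in (2, 2, 4, 3))
```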
\\\\ Similarly, writing the needed values for the Pell quaternions in the formula $(3.32)$, \begin{equation} Q_{w,n-1} Q_{w,n+1} - Q_{w,n}^2=\frac{(-1)^{n+1}}{4}(\underline{\alpha}\underline{\beta}(\alpha^2+2)-\underline{\beta}\underline{\alpha}\beta^2)\end{equation} is obtained. This last formula is the Cassini formula for the Pell quaternions, which is given by Bolat and Ipek in [3]. \end{rem} A summation formula for the Horadam quaternions $ Q_{w,n}$ can be given as follows. \begin{thm} A summation formula for the Horadam quaternions is \begin{equation} \sum_{k=0}^n{Q_{w,k}}=\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big) + K\end{equation} where $A=b-a\beta, B=b-a\alpha$ and \scriptsize $$ K=\frac{(a+b-ap)+i(b+aq)+j(bp+aq+bq)+k[b(p^2+q)+(a+b)pq+aq^2]}{1-p-q}.$$ \end{thm} \begin{proof} Using the Binet formula of the Horadam quaternions we can write $$\sum_{k=0}^n{Q_{w,k}}=\sum_{k=0}^n \frac{1}{\alpha-\beta} \big(A\underline{\alpha}\alpha^k - B\underline{\beta} \beta^k \big)=\frac{A\underline{\alpha}}{\alpha-\beta} \sum_{k=0}^n{\alpha^k}-\frac{B\underline{\beta}}{\alpha-\beta} \sum_{k=0}^n{\beta^k}.$$ With the values $A=b-a\beta, \,\ B=b-a\alpha$ and the formula for a finite geometric series we get $$\sum_{k=0}^n{Q_{w,k}}=\frac{(b-a\beta) \underline{\alpha}(1-\alpha^{n+1})}{(\alpha-\beta)(1-\alpha)}-\frac{(b-a\alpha) \underline{\beta}(1-\beta^{n+1})}{(\alpha-\beta)(1-\beta)}.$$ Some straightforward computations then give $$\sum_{k=0}^n{Q_{w,k}}=\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big) + K,$$ as desired. \end{proof} We will show that the sum formula in the equation $(3.36)$ generalizes the sum formulas for all the other quaternion types. For this purpose we give the next corollary. \begin{cor} The sum formula in the equation $(3.36)$ is a generalization of the sum formulas for all the quaternion types above.
\end{cor} \begin{proof} Firstly, we write the values $a=0, b=p=q=1$ in the equation $(3.36)$. Then we have $$ K=\frac{1+i+2j+3k}{-1}=-QF_1$$ where $QF_1$ is the $1$st Fibonacci quaternion. Moreover, using the formula $$\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big)$$ we get $$\frac{1}{\sqrt{5}}\big( \underline{\alpha} \alpha^{n+2} - \underline{\beta} \beta^{n+2} \big) =QF_{n+2}$$ where $QF_{n+2}$ is the $(n+2)$th Fibonacci quaternion. Thus, we have $$ \sum_{k=0}^n{Q_{w,k}}=\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big) + K=QF_{n+2}-QF_1.$$ The last equation is the summation formula for the Fibonacci quaternions, and this formula is given by Halici in [5]. Now consider the Pell quaternions, that is, the case $ a=0, b=1, p=2, q=1, \alpha-\beta =2\sqrt{2}$. Then we obtain $$K=\frac{1+i+3j+7k}{-2}=\frac{-1}{2}QPL_0, $$ where $QPL_0$ is the $0$th modified Pell quaternion. It should be noted that, in [3], Bolat and Ipek introduced this quaternion type as the Pell-Lucas quaternion, but because of the initial values it should be called the modified Pell quaternion (see [2]).\\\\ Using the formula $\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big)$ one obtains\\\\ $$\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big)=\frac{1}{2}\big( \frac{ \underline{\alpha} \alpha^{n+1} + \underline{\beta} \beta^{n+1}}{2} \big)=\frac{1}{2}QPL_{n+1},$$\\\\ where $QPL_{n+1}$ is the $(n+1)$th modified Pell quaternion.
As a result we obtain $$\sum_{k=0}^n{Q_{w,k}}=\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big) + K=\frac{1}{2}(QPL_{n+1}-QPL_0).$$ This result is the same as the one given by Bolat and Ipek in [3].\\\\ Now, if we write the values $a=0, b=1, p=1, q=2$ in $(3.36)$, then we have $$K=\frac{(1+i+3j+5k)}{-2}=\frac{-1}{2}QJ_1$$ where $QJ_1$ is the $1$st Jacobsthal quaternion.\\\\ From the formula $$\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big)$$ we obtain $$\frac{1}{\alpha-\beta}\big( \frac{B\underline{\beta}\beta^{n+1}}{1-\beta}-\frac{A\underline{\alpha}\alpha^{n+1}}{1-\alpha} \big)=\frac{1}{3}\Big(\underline{\alpha}\, 2^{n+1}+\frac{1}{2}\underline{\beta}\, (-1)^{n+1}\Big)=\frac{1}{2}QJ_{n+2}.$$\\\\ So, we obtain the following formula: $$\sum_{k=0}^n{Q_{w,k}}=\frac{1}{2}(QJ_{n+2}-QJ_1).$$ Likewise, using the equation $(3.36)$ one can easily prove the other summation formulas. So, we conclude that the formula $(3.36)$ is the general case of the other sum formulas.\end{proof} \begin{thm} The norm of the Horadam quaternions $Q_{w,n}$ is as follows: \begin{equation} Nr^2(Q_{w,n})=\frac{1}{p^2+4q}(b^2A+2abqB+a^2q^2C) \end{equation} where $A, B, C$ are as follows.
$$A=\alpha^{2n} (1+\alpha^2+\alpha^4+\alpha^6)+\beta^{2n}(1+\beta^2+\beta^4+\beta^6)-2(-q)^n(1-q+q^2-q^3),$$ $$B=\alpha^{2n-1} (1+\alpha^2+\alpha^4+\alpha^6)+\beta^{2n-1}(1+\beta^2+\beta^4+\beta^6)-p(-q)^{n-1}(1+q+q^2+q^3),$$ $$C=\alpha^{2n} (1+\alpha^2+\alpha^4+\alpha^{-2})+\beta^{2n}(1+\beta^2+\beta^4+\beta^{-2})-2(-q)^n(1-q+q^2-(-q)^{-1}).$$ \end{thm} \begin{proof} Using the definition of the norm and the Binet formula $W_n=bT_n+aqT_{n-1}$ one can write $$Nr^2(Q_{w,n})=W^2_n+W^2_{n+1}+W^2_{n+2}+W^2_{n+3}$$ \scriptsize {$$Nr^2(Q_{w,n})=(bT_n+aqT_{n-1})^2+(bT_{n+1}+aqT_{n})^2+(bT_{n+2}+aqT_{n+1})^2+(bT_{n+3}+aqT_{n+2})^2.$$ } \normalsize Also, using the values $\alpha+\beta=p, \alpha-\beta=\sqrt{p^2+4q}, \alpha\beta=-q$ and $$ T_n= \frac{1}{\sqrt{p^2+4q}} \big\{ \big(\frac{p+\sqrt{p^2+4q}}{2} \big)^n - \big(\frac{p-\sqrt{p^2+4q}}{2} \big)^n \big\}$$ we conclude that $$Nr^2(Q_{w,n})=\frac{1}{p^2+4q}(b^2A+2abqB+a^2q^2C),$$ which is the desired result. \end{proof} \begin{rem} Some special cases of Theorem $3.9$ can be listed as follows.\\\\ i) Writing the values $a=0, b=p=q=1, \alpha - \beta =\sqrt{5}$ in the equation $(3.37)$ we find $$Nr^2(QF_n)=\frac{1}{5} (\alpha^{2n} (15+6\sqrt{5})+\beta^{2n}(15-6\sqrt{5})),$$ which is the norm of the Fibonacci quaternions.\\\\ ii) If we write the values $a=0, b=q=1, p=2, \alpha=1+\sqrt{2}, \beta=1-\sqrt{2}$ in the equation $(3.37)$, then we get $$Nr^2(QP_n)=\frac{1}{8} (\alpha^{2n} (120+84\sqrt{2})+\beta^{2n}(120-84\sqrt{2}))$$ which is the norm of the Pell quaternions, given by Szynal-Liana and Wloch in [16].\\\\ iii) If we write the values $a=0, b=1, p=1, q=2, \alpha-\beta =3$ in the equation $(3.37)$, then we have $$Nr^2(QJ_n)=\frac{1}{9} (85(2)^{2n} +10(-1)^n (2)^{n} +4).$$ The last equation gives the norm of the Jacobsthal quaternions, and this value is calculated by Szynal-Liana and Wloch in [17].\\\\ In the same way, using the equation $(3.37)$ one can easily obtain the other norm values.
\end{rem} \section{\textbf{Conclusion}} In this paper, we first defined the sequence of Horadam quaternions, which generalizes all the other quaternion sequences above, by a second order recurrence relation, and then presented some properties of this sequence. Moreover, for these quaternions we gave the Binet formula, the generating function, the Cassini identity, the norm value and a sum formula. In the future, we intend to introduce Horadam octonions and give fundamental properties for octonions of this type. \end{document}
\begin{document} \author{Karl Heuer} \symbolfootnote[0]{\textcopyright 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license \url{http://creativecommons.org/licenses/by-nc-nd/4.0/}} \title[]{A sufficient local degree condition for Hamiltonicity in locally finite claw-free graphs} \begin{abstract} Among the well-known sufficient degree conditions for the Hamiltonicity of a finite graph, the condition of Asratian and Khachatrian is the weakest and thus gives the strongest result. Diestel conjectured that it should extend to locally finite infinite graphs~$G$, in that the same condition implies that the Freudenthal compactification of $G$ contains a circle through all its vertices and ends. We prove Diestel's conjecture for claw-free graphs. \end{abstract} \maketitle \section{Introduction} Problems concerning the existence of Hamilton cycles in finite graphs have been studied extensively, but deciding whether a given finite graph is Hamiltonian is NP-complete. Nevertheless, or even because of that, many sufficient or necessary conditions for Hamiltonicity have been found which are often easy to handle. One common class of sufficient conditions is that of degree conditions. An early result in this area is the following theorem of Dirac (1952). \begin{theorem}\label{dirac-theorem}\cite[Thm.\ 3]{dirac} Every finite graph with ${n \geq 3}$ vertices and minimum degree at least ${n / 2}$ is Hamiltonian. \end{theorem} The next result generalizes the theorem of Dirac and is due to Ore (1960). \begin{theorem}\label{ore-theorem}\cite[Thm.\ 2]{ore} Let $G$ be a finite graph with ${n \geq 3}$ vertices. If \linebreak ${d(u) + d(v) \geq n}$ for any two non-adjacent vertices $u$ and $v$ of $G$, then $G$ is Hamiltonian. \end{theorem} Both of these theorems state sufficient conditions for Hamiltonicity which involve the total number of vertices in the given graph. So we could say that both conditions do not have a local form.
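For a concrete illustration (not part of the original text), Ore's condition can be checked by brute force on a small graph; here $K_{3,3}$, where every pair of non-adjacent vertices has degree sum $6 = n$. The helper names are illustrative only.

```python
from itertools import permutations

# K_{3,3}: sides {0,1,2} and {3,4,5}, all cross edges present
n = 6
adj = {v: set() for v in range(n)}
for u in range(3):
    for v in range(3, 6):
        adj[u].add(v)
        adj[v].add(u)

# Ore's condition: d(u) + d(v) >= n for every pair of non-adjacent vertices
ore = all(len(adj[u]) + len(adj[v]) >= n
          for u in range(n) for v in range(u + 1, n) if v not in adj[u])

def hamilton_cycle():
    """Brute-force search for a Hamilton cycle through vertex 0."""
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n)):
            return cycle
    return None

assert ore and hamilton_cycle() is not None
```

As Ore's theorem guarantees, the condition holds and a Hamilton cycle (e.g. $0\,3\,1\,4\,2\,5$) is found.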
Furthermore, these conditions imply that the considered graphs have diameter at most $2$. In contrast to this, the next theorem, which is due to Asratian and Khachatrian \cite{asra-good}, generalizes both theorems above and allows graphs of arbitrary diameter. In order to state the theorem, we need the following local property of a graph: \[ d(u) + d(w) \geq |N(u) \cup N(v) \cup N(w)|\textit{ for every induced path } uvw. \tag{$\ast$} \] \\ The theorem of Asratian and Khachatrian can now be formulated as follows: \begin{theorem}\label{Asra-fin}\cite[Thm.\ 1]{asra-good} Every finite connected graph which satisfies $(\ast)$ and has at least three vertices is Hamiltonian. \end{theorem} The vast majority of Hamiltonicity results deal only with finite graphs, since it is not clear what a Hamilton cycle in an infinite graph should be. We follow the topological approach initiated by Diestel and K\"{u}hn \cite{diestel_kuehn_1, diestel_kuehn_2, diestel_kuehn_TST} and further outlined in \cite{diestel_buch, diestel_arx}, which solves this problem in a reasonable way by taking as infinite cycles of a graph $G$ the circles in its Freudenthal compactification $|G|$. Circles which use infinitely many vertices of $G$ may then exist. Now the notion of Hamiltonicity extends in an obvious way: call a locally finite connected graph $G$ \textit{Hamiltonian} if there is a circle in $|G|$ that contains all vertices of $G$. Some Hamiltonicity results for finite graphs have already been generalized to locally finite graphs using this notion, but not all of them are complete generalizations. Theorems that involve local conditions as in Theorem~\ref{Asra-fin} are more likely to generalize to locally finite graphs since they are still well-defined for infinite graphs and might allow compactness arguments. For results in this field, see \cite{brewster-funk, bruhn-HC, agelos-HC, Ha_Leh_Po, heuer_Inf_ObSu, lehner-HC}.
This paper deals with a conjecture of Diestel \cite[Conj.\ 4.13]{diestel_arx} about Hamiltonicity which says that Theorem~\ref{Asra-fin} can be generalized to locally finite graphs. The main result of this paper is the following theorem, which shows that the conjecture of Diestel holds for claw-free graphs, where we call a graph \textit{claw-free} if it does not contain the claw, i.e., the graph $K_{1, 3}$, as an induced subgraph. \begin{theorem}\label{Asra-loc-fin} Every locally finite, connected, claw-free graph which satisfies $(\ast)$ and has at least three vertices is Hamiltonian. \end{theorem} The rest of this paper is structured in the following way. In Section~2 we recall some basic definitions and introduce some notation we shall need in this paper. Section~3 contains some facts and lemmas which are needed in the proof of our main result. In the last section, Section~4, we consider locally finite graphs which satisfy condition~$(\ast)$. There we give two infinite classes of examples of locally finite graphs satisfying~$(\ast)$. In one class, all members are claw-free, while all elements of the other class have claws as induced subgraphs. The rest of Section~4 deals with the proof of Theorem~\ref{Asra-loc-fin}. At the very end of the paper we discuss where we need the assumption of being claw-free for the proof of our main theorem. \section{Basic definitions and notation} In general, we follow the graph-theoretic notation of \cite{diestel_buch} in this paper. For basic graph-theoretic facts, we refer the reader also to~\cite{diestel_buch}. Besides finite graph theory, a topological approach to infinite locally finite graphs is covered in \cite[Ch.\ 8.5]{diestel_buch}. For a survey in this field, we refer to \cite{diestel_arx}. All graphs considered in this paper are undirected and simple. Furthermore, a graph is not assumed to be finite. Now we fix an arbitrary graph $G = (V, E)$ for this section.
The graph $G$ is called \textit{locally finite} if every vertex of $G$ has only finitely many neighbours. For a vertex set $X$ of $G$, we denote by $G[X]$ the induced subgraph of $G$ whose vertex set is $X$. We write $G-X$ for the graph $G[V \setminus X]$, but for singleton sets, we omit the set brackets and write just $G-v$ instead of $G-\lbrace v \rbrace$ where $v \in V$. We denote the cut which consists of all edges of $G$ that have one endvertex in $X$ and the other endvertex in $V \setminus X$ by $\delta(X)$. Let $C$ be a cycle of $G$ and $u$ be a vertex of $C$. Then we write $u^+$ and $u^-$ for the neighbour of $u$ in $C$ in positive and negative, respectively, direction of $C$ given a fixed orientation of $C$. When using this notation we shall not explicitly mention the orientation; we implicitly fix an arbitrary orientation of the considered cycle. Let $P$ be a path in $G$ and $T$ a tree in $G$. We write $\mathring{P}$ for the subpath of $P$ which is obtained from $P$ by removing the endvertices of $P$. If $s$ and $t$ are vertices of $T$, we write $sTt$ for the unique $s$--$t$ path in $T$. Note that this covers also the case where $T$ is a path. If $P_v = v_0 \ldots v_n$ and $P_w = w_0 \ldots w_k$ are paths in $G$ with $n, k \in \mathbb{N}$ where $v_n$ and $w_0$ may be equal but apart from that these paths are disjoint and the vertices $v_n, w_0$ are the only vertices of $P_v$ and $P_w$ which lie in $T$, then we write $v_0 \ldots v_nTw_0 \ldots w_k$ for the path with vertex set ${V(P_v) \cup V(v_nTw_0) \cup V(P_w)}$ and edge set ${E(P_v) \cup E(v_nTw_0) \cup E(P_w)}$. For a vertex set $X \subseteq V$ and an integer $k \geq 1$, we denote by $N_k(X)$ the set of vertices in $G$ that have distance at least $1$ and at most $k$ to $X$ in $G$. For $k=1$ we just write $N(X)$ instead of $N_1(X)$, which denotes the usual neighbourhood of $X$ in $G$.
For a singleton set $\lbrace v \rbrace \subseteq V$, we omit the set brackets and write just $N_k(v)$ and $N(v)$ instead of $N_k(\lbrace v \rbrace)$ and $N(\lbrace v \rbrace)$, respectively. Given a subgraph $H$ of $G$, we just write $N_k(H)$ and $N(H)$ instead of $N_k(V(H))$ and $N(V(H))$, respectively. We denote the graph $K_{1, 3}$ also as \textit{claw}. The graph $G$ is called \textit{claw-free} if it does not contain the claw as an induced subgraph. A one-way infinite path in $G$ is called a \textit{ray} of $G$ and a two-way infinite path in $G$ is called a \textit{double ray} of $G$. An equivalence relation can be defined on the set of all rays of $G$ by saying that two rays in $G$ are equivalent if they cannot be separated by finitely many vertices. It is easy to check that this relation really defines an equivalence relation. The corresponding equivalence classes of this relation are called the \textit{ends} of $G$. For the rest of this section, we assume $G$ to be locally finite and connected. A topology can be defined on $G$ together with its ends to obtain the topological space $|G|$. For a precise definition of $|G|$, see \cite[Ch.\ 8.5]{diestel_buch}. An important fact about $|G|$ is that every ray of $G$ converges to the end of $G$ it is contained in. Apart from the definition of $|G|$ as in \cite[Ch.\ 8.5]{diestel_buch}, there is an equivalent way of defining the topological space $|G|$: Endow $G$ with the topology of a $1$-complex (also called CW complex of dimension $1$) and then build the Freudenthal compactification of $G$. This connection was examined in \cite{freud-equi}. For a point set $X$ in $|G|$, we denote its closure in $|G|$ by $\overline{X}$. We define a \textit{circle} in $|G|$ as the image of a homeomorphism which maps from the unit circle $S^1$ in $\mathbb{R}^2$ to $|G|$. The graph $G$ is called \textit{Hamiltonian} if there is a circle in $|G|$ containing all vertices of $G$. 
Such a circle is called a \textit{Hamilton circle} of $G$. For finite $G$, this coincides with the usual meaning, namely the existence of a cycle in $G$ which contains all vertices of $G$. Such cycles are called \textit{Hamilton cycles} of $G$. \section{Toolkit} In this section we collect some lemmas which we shall need later for the proof of the main result. The proof of each statement of this section can be found in \cite[Section 3]{heuer_Inf_ObSu}. We begin with two basic facts about minimal vertex separators in claw-free graphs. \begin{proposition}\label{2 comp} Let $G$ be a connected claw-free graph and $S$ be a minimal vertex separator in $G$. Then $G-S$ has exactly two components. \end{proposition} The following lemma together with Proposition~\ref{2 comp} are essential for the constructive proof of Lemma~\ref{Asra-cut-1}. \begin{lemma}\label{complete} Let $G$ be a connected claw-free graph and $S$ be a minimal vertex separator in $G$. For every vertex $s \in S$ and every component $K$ of $G-S$, the graph $G[N(s)\cap V(K)]$ is complete. \end{lemma} We proceed with a structural lemma on infinite, locally finite, connected, claw-free graphs (see Figure~1). \begin{lemma}\label{struct_toll}\cite[Lemma 3.10]{heuer_Inf_ObSu} Let $G$ be an infinite, locally finite, connected, claw-free graph and ${X}$ be a finite vertex set of $G$ such that $G[X]$ is connected. Furthermore, let $\mathscr{S} \subseteq V(G)$ be a finite minimal vertex set such that $\mathscr{S} \cap X = \emptyset$ and every ray starting in $X$ has to meet $\mathscr{S}$. Then the following holds: \begin{enumerate}[\normalfont(i)] \item $G-\mathscr{S}$ has $k \geq 1$ infinite components $K_1, \ldots, K_k$ and the set $\mathscr{S}$ is the disjoint union of minimal vertex separators $S_1, \ldots, S_k$ in $G$ such that for every $i$ with $1 \leq i \leq k$ each vertex in $S_i$ has a neighbour in $K_j$ if and only if $j=i$. \item $G-\mathscr{S}$ has precisely one finite component $K_0$. 
This component contains all vertices of $X$ and every vertex of $\mathscr{S}$ has a neighbour in $K_0$. \end{enumerate} \end{lemma} \begin{figure} \caption{The structure described in Lemma~\ref{struct_toll}} \label{lemma_3_3_k_3} \end{figure} To prove that an infinite, locally finite, connected graph $G$ is Hamiltonian, we use the following lemma. A sequence of cycles of $G$ and a set of vertex sets which fulfill five conditions are needed to be able to apply this lemma. While a Hamilton circle is built from the sequence of cycles as a limit object, the vertex sets help to control this process by witnessing that the limit does really become a circle. Since this lemma is our key to prove Hamiltonicity, let us consider the idea of the lemma more carefully before we state it. We define a limit object from a sequence of cycles of $G$ by saying that a vertex or an edge of $G$ is contained in the limit if it lies in all but finitely many cycles of the sequence. Of course we must be able to tell for each vertex and for each edge of $G$ whether it is in the limit or not. The conditions $(\text{i})$ and $(\text{iv})$ of the lemma ensure that we can do this. Furthermore, condition $(\text{i})$ forces every vertex of $G$ to be in the limit, which is necessary for the limit to be a Hamilton circle. Ensuring that the limit object becomes a circle consists of two parts. One thing is to guarantee that the limit is topologically connected and that the degree of each vertex is two in the limit. Condition $(\text{iv})$ and the definition of the limit take care of this such that both of these properties can easily be verified. The problematic part is to ensure that all ends have degree $2$ in the limit object and not higher. Both parts together are equivalent to the limit object being a circle by a result of Bruhn and Stein~\cite[Prop.~3]{circle}. In fact, without further conditions, there might be ends with degree higher than $2$ in the limit.
To prevent this problem, we use the conditions $(\text{ii})$, $(\text{iii})$ and $(\text{v})$. They guarantee the existence of a sequence of finite cuts for each end such that each sequence converges to its corresponding end and the limit object meets each of the cuts precisely twice. This prevents ends having a degree higher than $2$ in the limit object. Due to the five conditions of the lemma, it is not easy to find cycles and vertex sets which can be used for the application of the lemma. Indeed, the main work for the proof of Theorem~\ref{Asra-loc-fin} is to construct cycles and vertex sets which fulfill the required conditions. In particular, the structure of a graph as described in Lemma~\ref{struct_toll} will help us in the construction. \begin{lemma}\label{HC-extract}\cite[Lemma 3.11]{heuer_Inf_ObSu} Let $G$ be an infinite, locally finite, connected graph and $(C_i)_{i \in \mathbb{N}}$ be a sequence of cycles of $G$. Then $G$ is Hamiltonian if there exists an integer $k_i \geq 1$ for every $i \geq 1$ and vertex sets $M^i_j \subseteq V(G)$ for every $i \geq 1$ and $j$ with $1 \leq j \leq k_i$ such that the following is true: \textnormal{ \begin{enumerate}[\normalfont(i)] \item \textit{For every vertex $v$ of $G$, there exists an integer $j \geq 0$ such that $v \in V(C_i)$ holds for every $i \geq j$.} \item \textit{For every $i \geq 1$ and $j$ with $1 \leq j \leq k_i$, the cut $\delta(M^i_j)$ is finite.} \item \textit{For every end $\omega$ of $G$, there is a function $f : \mathbb{N} \setminus \lbrace 0 \rbrace \longrightarrow \mathbb{N}$ such that the inclusion ${M^{j}_{f(j)} \subseteq M^i_{f(i)}}$ holds for all integers $i, j$ with $1 \leq i \leq j$ and the equation ${M_{\omega}:= \bigcap^{\infty}_{i=1} \overline{M^i_{f(i)}} = \lbrace \omega \rbrace}$ is true.} \item \textit{$E(C_i) \cap E(C_j) \subseteq E(C_{j+1})$ holds for all integers $i$ and $j$ with $0 \leq i < j$.} \item \textit{The equations $E(C_i) \cap \delta(M^p_j) = E(C_p) \cap \delta(M^p_j)$ and
$|E(C_i) \cap \delta(M^p_j)| = 2$ hold for each triple $(i, p, j)$ which satisfies $1 \leq p \leq i$ and $1 \leq j \leq k_p$.} \end{enumerate} } \end{lemma} \section{A local degree condition} This section deals with locally finite graphs satisfying the following degree condition: \[ d(u) + d(w) \geq |N(u) \cup N(v) \cup N(w)|\textit{ for every induced path } uvw. \tag{$\ast$} \] Asratian and Khachatrian gave an example of a class of finite graphs which satisfy this condition and have arbitrarily high diameter (see \cite{asra-good}). Seizing on the idea of their examples, we show in the first part of this section that it is easy to construct locally finite infinite graphs satisfying $(\ast)$, even claw-free ones. In order to state their examples, we use the notion of lexicographic products of graphs. Let $G$ and $H$ be two graphs. Then the \textit{lexicographic product} $G \circ H$ of $G$ and $H$ is the graph on the vertex set ${V(G \circ H) = V(G) \times V(H)}$ where two vertices $(u_1, h_1)$ and $(u_2, h_2)$ are adjacent if and only if either $u_1u_2 \in E(G)$ or $u_1 = u_2$ and $h_1h_2 \in E(H)$. Now we can state the examples of Asratian and Khachatrian. Let $G_{q,n} = C_q \circ K_n$ where $C_q$ is the cycle of length $q \geq 3$ and $K_n$ is the complete graph on $n \geq 2$ vertices (see Figure~2). The definition of $G_{q,n}$ ensures that the diameter of $G_{q,n}$ is $\lfloor {q/2} \rfloor$. To see that $G_{q,n}$ satisfies $(\ast)$ for every $q \geq 3$ and every $n \geq 2$, note that for $q \geq 5$ the equations $${d(u) + d(w) = 6n-2} \; \textnormal{ and } \; |N(u) \cup N(v) \cup N(w)| = 5n$$ hold for each induced path $uvw$ of $G_{q,n}$; for $q = 4$ the union on the right-hand side consists of only $4n$ vertices, and $G_{3,n}$ is a complete graph and thus contains no induced path $uvw$ at all. Since we assume $n \geq 2$, the graph $G_{q,n}$ satisfies $(\ast)$. We can obtain infinite locally finite graphs satisfying $(\ast)$ in the same way. Let $G_{\mathbb{Z}, n} = D \circ K_n$ where $D$ is a double ray and $K_n$ is the complete graph on $n \geq 2$ vertices (see Figure~2).
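These neighbourhood counts are easy to verify mechanically. The following sketch (our own illustration, not part of the original examples; the function names are ours) builds $G_{q,n} = C_q \circ K_n$ from an adjacency predicate and checks condition $(\ast)$ over all induced paths for some small parameters:

```python
from itertools import product

def lex_cycle_complete(q, n):
    """G_{q,n} = C_q ∘ K_n: vertices (i, j), i a cycle position, j a K_n vertex."""
    vertices = list(product(range(q), range(n)))

    def adjacent(x, y):
        (a, _), (b, _) = x, y
        if x == y:
            return False
        # edge of C_q between the fibres, or same fibre (K_n is complete)
        return (b - a) % q in (1, q - 1) or a == b

    return vertices, adjacent

def satisfies_star(vertices, adjacent):
    """Check d(u) + d(w) >= |N(u) ∪ N(v) ∪ N(w)| for every induced path uvw."""
    nbr = {v: {u for u in vertices if adjacent(u, v)} for v in vertices}
    return all(
        len(nbr[u]) + len(nbr[w]) >= len(nbr[u] | nbr[v] | nbr[w])
        for v in vertices
        for u in nbr[v]
        for w in nbr[v]
        if u != w and not adjacent(u, w)   # u v w is an induced path
    )

for q in (3, 4, 5, 6, 7):
    for n in (2, 3):
        assert satisfies_star(*lex_cycle_complete(q, n))
```

The same check applies to any finite graph given by an adjacency predicate, which is convenient for experimenting with the claw-containing examples $H_{2q,n}$ below as well.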
By the same equations as above, the graph $G_{\mathbb{Z}, n}$ \linebreak satisfies $(\ast)$ for every $n \geq 2$. \begin{figure} \caption{The graph $G_{4,2}$.}\label{asra-beispiel} \end{figure} Note that the graphs $G_{q,n}$ and $G_{\mathbb{Z}, n}$ are claw-free. In general, being claw-free does not follow from $(\ast)$. To see this, we give examples of such graphs, whose structure is very similar to $G_{q,n}$ and $G_{\mathbb{Z}, n}$. We begin with the definition of the graph $H_{2q, n}$. Let ${V(H_{2q, n}) = A \times V(K_{1,3}) \cup B \times V(K_{n})}$ where ${A, B \subseteq V(C_{2q})}$ are the partition classes of a bipartition of the cycle $C_{2q}$ of length $2q$. Two vertices $(a, v)$ and $(b, w)$ of $H_{2q, n}$ are adjacent if ${ab \in E(C_{2q})}$ or if ${a = b \in A}$ and ${vw \in E(K_{1,3})}$ or if ${a = b \in B}$ and ${vw \in E(K_n)}$. The graph $H_{\mathbb{Z}, n}$ is defined analogously where the cycle $C_{2q}$ is replaced by a double ray $D$. The argumentation for checking that $H_{2q, n}$ and $H_{\mathbb{Z}, n}$ satisfy $(\ast)$ for ${q \geq 2}$ and ${n \geq 6}$ is the same. So we look only at $H_{2q, n}$. Let $(a, u)(b, v)(c, w)$ be an induced path in $H_{2q, n}$ or $H_{\mathbb{Z}, n}$. We distinguish four cases. If ${a = b = c}$ holds, then $uvw$ is an induced path in $K_{1, 3}$ and so we get $${d((a, u)) + d((c, w)) = 4n + 2 \geq 2n + 4 = |N((a, u)) \cup N((b, v)) \cup N((c, w))|}.$$ If $a = c \neq b$, then $u$ and $w$ are nonadjacent vertices of $K_{1,3}$ and $v$ lies in $K_n$. This gives $$d((a, u)) + d((c, w)) = 4n + 2 \geq 2n + 8 = |N((a, u)) \cup N((b, v)) \cup N((c, w))|$$ since $n \geq 3$. If $a$, $b$ and $c$ are pairwise distinct and $a, c \in B$, then we obtain $$d((a, u)) + d((c, w)) = 2n + 14 \geq 2n + 12 \geq |N((a, u)) \cup N((b, v)) \cup N((c, w))|.$$ Note that for $q \geq 3$ the second inequality becomes an equality. In the last case, the vertices $a$, $b$ and $c$ are pairwise distinct and $a, c \in A$.
We get the inequality chain $$d((a, u)) + d((c, w)) \geq 4n + 2 \geq 3n + 8 = |N((a, u)) \cup N((b, v)) \cup N((c, w))|$$ since $n \geq 6$. Note here that the first inequality becomes an equality if $u$ and $w$ are both vertices of degree $1$ in $K_{1, 3}$. These examples show that there are finite and infinite locally finite graphs which satisfy $(\ast)$, have arbitrarily high diameter and contain induced claws. The graphs $H_{\mathbb{Z}, n}$ show that even infinitely many induced claws can be contained in graphs satisfying $(\ast)$. Next we state a basic lemma about graphs having the property $(\ast)$. \begin{lemma}\label{ungl-kette}\cite{asra-good} Let $G$ be a graph which satisfies $(\ast)$. Then the following is true for every induced path $uvw$ of $G$: \[ |N(u) \cap N(w)| \geq |N(v) \setminus (N(u) \cup N(w))| \geq 2. \] \end{lemma} \begin{proof} Let $uvw$ be an induced path of $G$. We get the following inequality chain: \begin{align*} |N(u) \cap N(w)| &= d(u) + d(w) - |N(u) \cup N(w)| \\ &\geq |N(u) \cup N(v) \cup N(w)| - |N(u) \cup N(w)| \\ &= |N(v) \setminus (N(u) \cup N(w))| \\ &\geq |\lbrace u, w \rbrace| = 2 \end{align*} All equalities in this chain are obvious. The first inequality is valid due to $(\ast)$. The second inequality holds because $uvw$ is an induced path, which means that $u$ and $w$ are not adjacent. \end{proof} The proof of Theorem~\ref{Asra-fin} which Asratian and Khachatrian presented in \cite{asra-good} is basically the same as the one for the following lemma. For the sake of completeness we state the proof here. \begin{lemma}\label{Asra-enlarge} Let $G$ be a locally finite graph satisfying $(\ast)$, $C$ be a cycle of $G$ and $v$ be a vertex in $N(C)$.
Then there exists a vertex $u \in N(v) \cap V(C)$ such that one of the following is true: \textnormal{ \begin{enumerate}[\normalfont(i)] \item \textit{The vertex $v$ is adjacent with $u^+$.} \item \textit{There is a vertex $x \in N(v) - V(C)$ such that $x$ is adjacent with $u^+$.} \item \textit{There exists a vertex $y \neq u$ in $N(v) \cap V(C)$ such that $u^+ \neq y$ and $u^+$ and $y^+$ are adjacent.} \end{enumerate} } \end{lemma} \begin{proof} Since cycles are finite, we get that the set $N(v) \cap V(C)$ is finite. Let ${N(v) \cap V(C) = \lbrace u_1, \ldots, u_n \rbrace}$. We know that $n \geq 1$ holds since $v$ lies in the neighbourhood of $C$. We may assume that $vu^+_i \notin E(G)$ is true for all $i$ with $1 \leq i \leq n$ because otherwise statement~(i) of the lemma would hold and we would be done. This assumption implies ${\lbrace u_1, \ldots, u_n \rbrace \cap \lbrace u^+_1, \ldots, u^+_n \rbrace = \emptyset}$. So we may assume further that ${\lbrace u^+_1, \ldots, u^+_n \rbrace}$ is an independent set because otherwise statement~(iii) of the lemma would be true and the proof complete. For every ${i \in \lbrace 1, \ldots, n \rbrace}$ let $${I_i = N(v) \cap N(u^+_i)} \; \textnormal{ and } \; {M_i = N(u_i) \setminus (N(v) \cup N(u^+_i))}.$$ By our assumptions, we know that $vu_iu^+_i$ is an induced path for every ${i \in \lbrace 1, \ldots, n \rbrace}$. So Lemma~\ref{ungl-kette} implies that ${|I_i| \geq |M_i|}$ and ${|I_i| \geq 2}$ are true for each ${i \in \lbrace 1, \ldots, n \rbrace}$. Now we will show that there is a vertex ${x \in I_j \setminus V(C)}$ for some ${j \in \lbrace 1, \ldots, n \rbrace}$, which implies statement~(ii) of the lemma. 
For this, we construct a function $f(k)$ and sequences $(Z^k_i)$, $(Y^k_i)$ for each ${i \in \lbrace 1, \ldots, n \rbrace}$ where the relations ${1 \leq f(k) \leq n}$ and ${Z^k_i \subseteq \lbrace u^+_1, \ldots, u^+_n \rbrace \cup \lbrace v \rbrace}$ and ${Y^k_i \subseteq \lbrace u_1, \ldots, u_n \rbrace}$ are always valid. We begin by setting ${f(1) = 1}$, ${Z^1_i = \lbrace v, u^+_i \rbrace}$ and ${Y^1_i = \lbrace u_i \rbrace}$ for each ${i \in \lbrace 1, \ldots, n \rbrace}$. Now assume we have already defined $f(i)$ for every ${i \in \lbrace 1, \ldots, k \rbrace}$ and both sequences up to length $k$ for $k \geq 1$. If ${(I_{f(k)} \setminus Y^k_{f(k)}) \subseteq V(C)}$ and ${I_{f(k)} \setminus Y^k_{f(k)}}$ is nonempty, take a vertex $w$ of ${I_{f(k)} \setminus Y^k_{f(k)}}$. In this case, we know that ${w = u_r}$ for some $r$ with ${1 \leq r \leq n}$ and proceed with the construction as follows for every ${i \in \lbrace 1, \ldots, n \rbrace}$: \[ f(k+1) = r, \] \[ Y^{k+1}_i = \begin{cases} Y^{k}_{f(k)} \cup \lbrace u_r \rbrace & \mbox{if } i = f(k) \\ Y^{k}_r & \mbox{if } i = r \\ Y^{k}_i & \mbox{otherwise}, \end{cases}\] \[ Z^{k+1}_i = \begin{cases} Z^{k}_{f(k)} & \mbox{if } i = f(k) \\ Z^{k}_r \cup \lbrace u^+_{f(k)} \rbrace & \mbox{if } i = r \\ Z^{k}_i & \mbox{otherwise}. \end{cases} \] Next we gather some facts about the sequences. \setcounter{claim}{0} \begin{claim} If $f(i)$ is defined for every ${i \in \lbrace 1, \ldots, k \rbrace}$ and both sequences up to length $k$ for ${k \geq 1}$, then we get: \textnormal{ \begin{enumerate}[\normalfont(a)] \item \textit{The relations ${Z^k_i \subseteq M_i}$, ${Y^k_i \subseteq I_i}$ and ${|Z^k_i| \geq |Y^k_i|}$ hold for each ${i \in \lbrace 1, \ldots, n \rbrace}$}. \item \textit{${|Z^k_{f(k)}| > |Y^k_{f(k)}|}$}. \item \textit{${I_{f(k)} \setminus Y^k_{f(k)} \neq \emptyset}$}. \item \textit{${|Y^k_{f(k-1)}| = |Y^{k-1}_{f(k-1)}| + 1}$ if ${k \geq 2}$}. \end{enumerate} } \end{claim} We prove Claim~1 by induction on $k$. 
It is easily checked that for $k=1$ all statements are true. Now assume all statements are true for some $k \geq 1$. We show statement~(a) for $k+1$ first. The relations ${Z^{k+1}_i \subseteq M_i}$ and ${Y^{k+1}_i \subseteq I_i}$ follow by the induction hypothesis and construction. For $i \neq f(k)$, the relation ${|Z^{k+1}_i| \geq |Y^{k+1}_i|}$ follows using the induction hypothesis and by the definition of both sets. If $i = f(k)$, the inequality is true because ${|Z^k_{f(k)}| > |Y^k_{f(k)}|}$ holds by the induction hypothesis. Statement~(b) for $k+1$ follows by definition of both sets and by the inequality ${|Z^k_{f(k+1)}| \geq |Y^k_{f(k+1)}|}$ of the induction hypothesis. To prove statement~(c) for $k+1$, note that ${|M_{f(k+1)}| \geq |Z^{k+1}_{f(k+1)}| > |Y^{k+1}_{f(k+1)}|}$ is true by statement~(a) and (b) for $k+1$. Now the statement follows since the inequality ${|I_{f(k+1)}| \geq |M_{f(k+1)}|}$ holds as shown before. Statement~(d) is obviously true for $k=2$ and follows for arbitrary ${k+1 > 2}$ from the definition of ${Y^{k+1}_{f(k)}}$. This completes the proof of Claim~1. Now we see that there is some $j$ and a vertex ${{x \in I_{f(j)} \setminus (Y^{j}_{f(j)} \cup V(C))}}$. Otherwise, statement~(c) of Claim~1 implies that we do not stop constructing the sequences $(Z^k_i)$ and $(Y^k_i)$. This yields a contradiction because then it follows from statement~(d) of Claim~1 and the pigeonhole principle that there exists some ${p \in \lbrace 1, \ldots, n \rbrace}$ such that ${|Y^{k}_{p}| \rightarrow \infty}$ for ${k \rightarrow \infty}$ although ${Y^{k}_{p} \subseteq \lbrace u_1, \ldots, u_n \rbrace}$. \end{proof} To each case of Lemma~\ref{Asra-enlarge} corresponds a cycle that contains $v$ and all vertices of $C$. We get these cycles by taking $C$ and replacing some of its edges. For case~(i), we replace the edge $uu^+$ by the path $uvu^+$. For case~(ii), we take the path $uvxu^+$ instead of the edge $uu^+$. 
In case~(iii), where $u^+$ and $y^+$ are adjacent, we delete the edges $uu^+, yy^+$ and add the edges $uv, vy, u^+y^+$ to obtain the desired cycle. We call each such resulting cycle an \textit{extension of} $C$, or more precisely a (i)-, (ii)- \linebreak or (iii)-\textit{extension of} $C$ depending on from which of the three cases of the lemma we obtain the resulting cycle. A vertex which plays the role of $v$ above in an extension of a cycle is called the \textit{target} of the extension (see Figure~3). For a cycle $C$, we call a finite sequence of cycles $(C_i)$, where ${0 \leq i \leq n}$ for some ${n\in \mathbb{N}}$, an \textit{extension sequence of} $C$ if $C_0 = C$ and $C_i$ is an extension of $C_{i-1}$ for every $i$ satisfying ${1 \leq i \leq n}$. Note that Theorem~\ref{Asra-fin} follows easily from Lemma~\ref{Asra-enlarge} together with Lemma~\ref{ungl-kette}. \begin{figure} \caption{(i)-, (ii)- and (iii)-extension of a cycle $C$ with target $v$.} \label{extensions} \end{figure} Now we have nearly everything we need to prove Theorem~\ref{Asra-loc-fin}. Before we focus on its proof, we state the following lemma, which is the key to making Lemma~\ref{HC-extract} applicable: it helps us to define a suitable sequence of cycles together with vertex sets. To prove the following lemma we make much use of extensions of cycles, which we obtain as in Lemma~\ref{Asra-enlarge}. \begin{lemma}\label{Asra-cut-1} Let $G=(V,E)$ be an infinite, locally finite, connected, claw-free graph which satisfies $(\ast)$, $C$ be a cycle of $G$ with ${V(C) \setminus N(N(C)) \neq \emptyset}$ and ${\mathscr{S} \subseteq N(C)}$ be a minimal vertex set such that every ray starting in $C$ meets $\mathscr{S}$. Furthermore, let $k$, $S_j$ and $K_j$ be defined analogously to Lemma~\ref{struct_toll}.
Then there exists a cycle $C'$ with the properties: \textnormal{ \begin{enumerate}[\normalfont(i)] \item \textit{${V(K_0) \cup \mathscr{S} \cup N_3(\mathscr{S}) \subseteq V(C')}$.} \item \textit{For every $j$ with ${1 \leq j \leq k}$, there are vertex sets ${M_j \subseteq V}$ such that the inclusions ${V(K_j) \setminus N(S_j) \subseteq M_j \subseteq V(K_j) \cup \mathscr{S} \cup N(\mathscr{S})}$ as well as the equation ${|E(C') \cap \delta(M_j)| = 2}$ are true.} \item \textit{${E(C - N(N(C))) \subseteq E(C')}$ and for every edge ${e = uv \in E(C') \setminus E(C)}$ the inclusion ${\lbrace u, v \rbrace \subseteq (V \setminus V(C)) \cup N_2(N(C))}$ holds.} \end{enumerate} } \end{lemma} \begin{proof} First we construct an extension sequence $(C_i)$ of $C$ where the vertex set of the last element in this sequence consists precisely of $V(K_0)$. We begin by setting $C_0 = C$. Note that ${V(C) \subseteq V(K_0)}$ is true by choice of $K_0$ and Lemma~\ref{struct_toll}. Assume we have already constructed an extension sequence of $C$ of length $m+1$ for $m \geq 0$ such that ${V(C_m) \subseteq V(K_0)}$ holds. If ${V(C_{m}) = V(K_0)}$ holds, we are done. Otherwise ${V(C_{m}) \neq V(K_0)}$ is valid. Then there exists a vertex ${v \in N(C_m) \cap V(K_0)}$. Choose this vertex as the target of an extension $C'_m$ of $C_m$. We can find such an extension by Lemma~\ref{Asra-enlarge}. If $V(C'_m)$ is still a subset of $V(K_0)$, we set $C_{m+1} = C'_m$. Otherwise $C'_m$ is a (ii)-extension of $C_m$ and contains a vertex ${x \in \mathscr{S} \setminus V(C_m)}$. Say ${x \in S_j}$ for some $j$ with ${1 \leq j \leq k}$. We know that $C'_m-x$ lies entirely in one component of $G-S_j$ because $C'_m-x$ is connected and contains no vertex of the separator $S_j$. Since $G$ is a connected claw-free graph and $S_j$ is a minimal vertex separator in $G$, we obtain by Lemma~\ref{complete} that the neighbourhood of $S_j$ in each component of $G-S_j$ induces a complete graph. 
So $v$ and the other neighbour of $x$ in $C'_m$ are adjacent. Hence, there exists a (i)-extension of $C_m$ with target $v$ which we choose as $C_{m+1}$. Since $K_0$ is finite by Lemma~\ref{struct_toll}, there has to be an integer $n$ such that ${V(C_n) = V(K_0)}$. Now we construct a sequence $(\tilde{C}_{i})$ of cycles with ${0 \leq i \leq k}$ such that the following properties hold: \begin{itemize} \item ${V(K_0) \subseteq V(\tilde{C}_{i})}$ for every $i$ with ${0 \leq i \leq k}$. \item For every $j$ with ${1 \leq j \leq k}$ the cycle $\tilde{C}_{j}$ contains precisely two vertices ${s^{\ell}_1, s^{\ell}_2}$ from $S_\ell$ and an $s^{\ell}_1$--$s^{\ell}_2$ path $P_\ell$ which satisfies ${V(\mathring{P}_\ell) \neq \emptyset}$ and ${V(\mathring{P}_\ell) \subseteq V(K_{\ell})}$ for every $\ell$ with ${1 \leq \ell \leq j}$. \item $\tilde{C}_{j}$ contains no vertices from ${\bigcup^k_{p=j+1} (S_p \cup K_p)}$ for every $j$ with ${0 \leq j \leq k}$. \end{itemize} We start to build this sequence by setting ${\tilde{C}_{0} = C_n}$. For every $j$ with ${1 \leq j \leq k}$ let $T_j$ be a finite tree in $K_j$ which contains all vertices from ${N_3(S_j) \cap V(K_j)}$. Such trees exist because $K_j$ is connected and $N_3(S_j)$ is finite since $G$ is locally finite and $S_j$ is finite. We use these trees not only in the construction of the sequence $(\tilde{C}_{i})$ but also afterwards. Now assume we have already constructed such a sequence of length $m+1$ for $m \geq 0$. Let $D$ be an extension of $\tilde{C}_{m}$ with target ${s^{m+1}_1 \in S_{m+1}}$. We distinguish two cases: \setcounter{case}{0} \begin{case} \textnormal{$D$ is a (i)- or (iii)-extension of $\tilde{C}_{m}$.} \end{case} Note for this case that $D$ contains only one vertex of ${S_{m+1} \cup V(K_{m+1})}$ by definition of (i)- or (iii)-extension, respectively, and because $\tilde{C}_{m}$ contains no vertex of ${S_{m+1} \cup V(K_{m+1})}$ by construction. 
All paths $P_{i}$ of $\tilde{C}_{m}$ stay the same for $D$ for all $i$ with ${1 \leq i \leq m}$. To check that this is valid, suppose we lose an edge of some path $P_{i'}$ with ${1 \leq i' \leq m}$. If $D$ is a (i)-extension, there is precisely one edge in ${E(\tilde{C}_{m}) \setminus E(D)}$. The endvertices of this edge are endvertices of the two edges in ${E(D) \setminus E(\tilde{C}_{m})}$, which are both incident with $s^{m+1}_1$. Since each edge of $P_{i'}$ has at least one endvertex in $K_{i'}$, the cycle $D$ contains at least one edge which is incident with ${s^{m+1}_1 \in S_{m+1}}$ and a vertex $v$ in $K_{i'}$. This is a contradiction because ${i' \neq m+1}$ and so $s^{m+1}_1$ and $v$ lie in different components of $G - S_{i'}$ by definition of $S_{i'}$ and $K_{i'}$ together with Lemma~\ref{struct_toll}. If $D$ is a (iii)-extension, ${E(D) \setminus E(\tilde{C}_{m})}$ contains precisely three edges, of which two, say $f$ and $g$, are incident with $s^{m+1}_1$ by definition of (iii)-extension with target $s^{m+1}_1$. Let $e$ be the edge in ${E(D) \setminus E(\tilde{C}_{m})}$ which is different from $f$ and $g$. The set ${E(\tilde{C}_{m}) \setminus E(D)}$ contains precisely two edges, of which each has a common endvertex with $e$ and with either $f$ or $g$. So at least one of the edges in ${E(\tilde{C}_{m}) \setminus E(D)}$ lies on $P_{i'}$ and has therefore an endvertex in $K_{i'}$. With the same argument as for $D$ being a (i)-extension, we know that $f$ and $g$ cannot have endvertices in $K_{i'}$. So $e$ has one endvertex $y^+$ in $K_{i'}$ and since all vertices of ${V(\tilde{C}_{m}) \cap (S_{i'} \cup V(K_{i'}))}$ lie on the path $P_{i'}$ whose endvertices lie in $S_{i'}$, we get that $y$ is an endvertex of either $f$ or $g$ which lies in $S_{i'}$ by definition of (iii)-extension. So the other endvertex of $e$ lies in another component of $G - S_{i'}$ than $K_{i'}$. This contradicts the fact that $S_{i'}$ is a separator. 
Now we pick an extension $D_1$ of $D$ with target $a \in N(s^{m+1}_1) \cap V(K_{m+1})$ (see Figure~4). Since the cycle $D$ contains only one vertex of ${S_{m+1} \cup V(K_{m+1})}$ and $S_{m+1}$ is a separator, we know that $a$ has only one neighbour on $D$. Hence, $D_1$ must be a (ii)-extension of $D$ and the neighbour $s^{m+1}_2$ of $a$ in $D_1$ which does not lie in $D$ must be an element of $S_{m+1}$. Otherwise, $S_{m+1}$ would not separate ${a \in V(K_{m+1})}$ from $V(K_0)$, which is a contradiction. Now set $$\tilde{C}_{m+1} = D_1 \; \textnormal{ and } \; P_{m+1} = s^{m+1}_1as^{m+1}_2.$$ For every $i$ with $1 \leq i \leq m$, we take the same paths as for $\tilde{C}_m$. This setting fulfills the required conditions because $$V(\tilde{C}_{m+1}) \setminus V(D) \subseteq S_{m+1} \cup V(K_{m+1}) \; \textnormal{ and } \; E(D) \setminus E(\tilde{C}_{m+1}) = \lbrace e \rbrace$$ where the edge $e$ does not lie on any path $P_i$ for $1 \leq i \leq m$ since it is incident \linebreak with $s^{m+1}_1$. This completes the observation of Case~1. \begin{figure} \caption{Situation in Case~1.} \label{lemma_4_3_case_1} \end{figure} \begin{case} \textnormal{$D$ is a (ii)-extension of $\tilde{C}_{m}$.} \end{case} Let $x$ be the neighbour of $s^{m+1}_1$ in $D$ which does not lie in $V(\tilde{C}_{m})$ and let $ys^{m+1}_1xz$ be the path which replaces the edge $yz$ in $\tilde{C}_{m}$ to form $D$. We know by construction that $$V(K_0) \subseteq V(\tilde{C}_{m}) \; \textnormal{ and } \; V(\tilde{C}_{m}) \cap S_{m+1} = \emptyset$$ hold. Furthermore, ${x \in N(s^{m+1}_1) \cap N(\tilde{C}_{m})}$ is valid by definition of (ii)-extension. Hence, $x$ is an element of $S_{\ell}$ for some $\ell$ with ${1 \leq \ell \leq k}$. 
Here we distinguish three subcases: \begin{subcase} \textnormal{The equation $\ell = m+1$ holds.} \end{subcase} In this situation, we pick two vertices $a, b$ such that $$a \in N(s^{m+1}_1) \cap V(K_{m+1}) \; \textnormal{ and } \; {b \in N(x) \cap V(K_{m+1})}.$$ Now we obtain a cycle $D'$ by replacing the edge $s^{m+1}_1x$ of $D$ by the path $$P_{m+1} = s^{m+1}_1aT_{m+1}bx.$$ In this case, $x$ plays the role of $s^{m+1}_{2}$. By setting $\tilde{C}_{m+1} = D'$ and taking $P_{m+1}$ together with all paths $P_i$ of $\tilde{C}_{m}$ for every $i$ with ${1 \leq i \leq m}$, all required properties are fulfilled. \begin{subcase} \textnormal{The relations $\ell \leq m$ and $z \in S_{\ell}$ are valid.} \end{subcase} In this case, $y$ is not an element of $S_{\ell}$: by construction, $\tilde{C}_{m}$ contains only one vertex ${s^{\ell} \in S_{\ell}}$ different from $z$, and $s^{\ell}$, unlike $y$, is not a neighbour of $z$ in $\tilde{C}_{m}$, since the $s^{\ell}$--$z$ path $P_{\ell}$ of $\tilde{C}_{m}$ consists of more than one edge (see Figure~5). Furthermore, $y$ is not an element of $K_{\ell}$ because otherwise the edge $ys^{m+1}_1$ would show that $S_{\ell}$ does not separate $K_{\ell}$ from the component of $G - S_{\ell}$ which contains $s^{m+1}_1$. This contradicts the definition of $S_{\ell}$. By construction of $\tilde{C}_{m}$, we know that $\tilde{C}_{m}$ and, therefore, also $D$ contain only one $s^{\ell}$--$z$ path, namely $P_{\ell}$, whose interior vertices lie in $K_{\ell}$. Now we move on with a slightly different cycle. In $D$, we replace the path ${s^{\ell}P_{\ell}zx}$ by the path ${s^{\ell}a'T_{\ell}b'x}$ where $$a' \in N(s^{\ell}) \cap V(K_{\ell}) \; \textnormal{ and } \; b' \in N(x) \cap V(K_{\ell})$$ to obtain a new cycle $D''$.
Since $D''$ contains only one vertex of $S_{m+1}$ and still precisely two vertices of $S_i$ that are joined by a path through $K_i$ for every $i \leq m$, we can proceed as we did with $D$ in Case~1 and get the desired cycle $\tilde{C}_{m+1}$. \begin{figure} \caption{Situation in Subcase~2.2.} \label{lemma_4_3_case_2_2} \end{figure} \begin{subcase} \textnormal{The relations $\ell \neq m+1$ and $z \notin S_{\ell}$ are true.} \end{subcase} If $z$ does not lie in $K_{\ell}$, the vertices $s^{m+1}_1$ and $z$ are in the same component of $G-S_{\ell}$ and are neighbours of $x \in S_{\ell}$. So by Lemma~\ref{complete}, the vertices $s^{m+1}_1$ and $z$ are adjacent. Now we use the (i)-extension of $\tilde{C}_{m}$ which is formed by replacing the edge $yz$ of $\tilde{C}_{m}$ by the path ${ys^{m+1}_1z}$ instead of $D$ and proceed as in Case~1. If $z$ is a vertex of $K_{\ell}$, we get that $y$ lies in $S_{\ell}$: the vertices $y$ and $z$ are consecutive in $\tilde{C}_{m}$, and $y$ cannot be an element of $K_{\ell}$ since otherwise the edge $ys^{m+1}_1$ would connect $K_{\ell}$ with the component of $G - S_{\ell}$ which contains $s^{m+1}_1$ and contradict the definition of $S_{\ell}$ (see Figure~6). By construction, we know that the neighbour $w$ of $y$ in $\tilde{C}_{m}$ which is different from $z$ is not a vertex of ${V(K_{\ell}) \cup S_{\ell}}$. Hence, $s^{m+1}_1$ and $w$ lie in the same component of $G-S_{\ell}$ and are neighbours of $y$. As before, Lemma~\ref{complete} implies that $s^{m+1}_1$ and $w$ are adjacent. So we can use the (i)-extension of $\tilde{C}_{m}$ which arises by replacing the edge $wy$ of $\tilde{C}_{m}$ by the path $ws^{m+1}_1y$ instead of $D$ and proceed again as in Case~1. This completes Case~2 as well, and we get the desired sequence $(\tilde{C}_{i})$ of cycles with $0 \leq i \leq k$.
\begin{figure} \caption{Situation in Subcase~2.3 if $z$ lies in $K_{\ell}$.}\label{lemma_4_3_case_2_3} \end{figure} For the next step, let $(\hat{C}_{i})$ be an extension sequence of $\tilde{C}_{k}$ where the targets are always chosen from ${\bigcup^k_{p=1} T_{p}}$ until a cycle of this sequence contains all vertices of ${\bigcup^k_{p=1} T_{p}}$. The targets can always be chosen from $\bigcup^k_{p=1} T_{p}$ because each $T_p$ is connected and $\tilde{C}_{k}$ contains at least one vertex of each $T_p$ by construction. Furthermore, we build only finitely many extensions since each tree $T_p$ is finite. Now we prove the following claim: \setcounter{claim}{0} \begin{claim} Each cycle $\hat{C}_{i}$ of the extension sequence hits the cut ${\delta(S_{j} \cup V(K_{j}))}$ precisely twice for each $j$ with ${1 \leq j \leq k}$. \end{claim} We prove this statement inductively. We know that ${\hat{C}_{0} = \tilde{C}_{k}}$ fulfills the condition by its construction. So assume that $\hat{C}_{n}$ fulfills the condition for some $n$ with ${0 \leq n \leq k-1}$ and consider $\hat{C}_{n+1}$. Let ${t \in T_{j'}}$ be the target of the extension $\hat{C}_{n+1}$ for some $j'$ with ${1 \leq j' \leq k}$. Since $\hat{C}_{n+1}$ is a cycle that has vertices in ${S_{j'} \cup V(K_{j'})}$ and ${V \setminus (S_{j'} \cup V(K_{j'}))}$, the cycle $\hat{C}_{n+1}$ must hit the cut ${\delta(S_{j'} \cup V(K_{j'}))}$ at least twice and in an even number of edges. Furthermore, we know that both edges of $\hat{C}_{n+1}$ which are incident with $t$ are not elements of the cut ${\delta(S_{j'} \cup V(K_{j'}))}$ because all neighbours of $t$ lie in ${V(K_{j'}) \cup S_{j'}}$. Using the induction hypothesis, we get that $\hat{C}_{n}$ hits the cut ${\delta(S_{j'} \cup V(K_{j'}))}$ precisely twice. Then, by the definitions of the three types of extensions, $\hat{C}_{n+1}$ cannot meet the cut ${\delta(S_{j'} \cup V(K_{j'}))}$ four times, which implies that it hits the cut exactly twice.
This completes the induction and the proof of the claim. \newline Now we construct a sequence $(\hat{D}_i)$ of cycles where the vertex set of the last cycle of this sequence contains all elements of $\mathscr{S}$ and the following properties hold for each $i \geq 0$ if the corresponding cycles are defined: \begin{itemize} \setlength\itemsep{3pt} \item ${V(K_0) \cup N_3(\mathscr{S}) \subseteq V(\hat{D}_i)}$. \item ${V(\hat{D}_{i+1}) \setminus V(\hat{D}_{i}) \subseteq \mathscr{S}}$. \item $1 \leq |V(\hat{D}_{i+1}) \setminus V(\hat{D}_{i})| \leq 2$. \end{itemize} \noindent Furthermore, for every $i \geq 0$ such that $\hat{D}_{i}$ is defined there shall exist vertex sets $M^i_j$ for ${1 \leq j \leq k}$ such that the following properties are fulfilled: \begin{itemize} \setlength\itemsep{3pt} \item $V(K_j) \setminus N(S_j) \subseteq M^i_j \subseteq V(K_j) \cup \mathscr{S} \cup N(\mathscr{S})$. \item $|E(\hat{D}_{i}) \cap \delta(M^i_j)| = 2$. \end{itemize} We begin by setting ${\hat{D}_{0} = \hat{C}_{k}}$. We know that ${V(K_0) \cup N_3(\mathscr{S}) \subseteq V(\hat{D}_{0})}$ holds by construction. Additionally, Claim~1 implies that ${M^{0}_{j} = S_{j} \cup V(K_{j})}$ is a valid choice for every $j$ with ${1 \leq j \leq k}$. Now assume we have already constructed the sequence up to $\hat{D}_{m}$ and there is still a vertex ${u \in \mathscr{S} \setminus V(\hat{D}_{m})}$, say ${u \in S_{i'}}$ for some $i'$ with ${1 \leq i' \leq k}$. If $u$ has two neighbours $w_1, w_2$ which are adjacent in $\hat{D}_{m}$, we define $\hat{D}_{m+1}$ as the (i)-extension of $\hat{D}_{m}$ where the edge $w_1w_2$ is replaced by the path $w_1uw_2$. We define the sets $M^{m+1}_{j}$ as follows for every $j$ with ${1 \leq j \leq k}$: \[M^{m+1}_{j} = \begin{cases} M^{m}_{j} \cup \lbrace u \rbrace &\mbox{if } w_1 \in M^{m}_{j} \textnormal{ or } w_2 \in M^{m}_{j} \\ M^{m}_{j} \setminus \lbrace u \rbrace & \mbox{otherwise}. \end{cases} \] All required conditions are fulfilled by this definition. 
So let us assume that $u$ does not have two neighbours which are consecutive in $\hat{D}_{m}$. Since $G$ is claw-free, we know that for every vertex ${w \in N(u) \cap V(\hat{D}_{m})}$, the vertices $w^+$ and $w^-$ are adjacent in $G$. Now let ${w_1 \in N(u) \cap V(\hat{D}_{m})}$ be fixed and consider the cycle $\hat{D}^u_{m}$ which is formed by replacing the path $w^+ww^-$ in $\hat{D}_{m}$ by the edge $w^+w^-$ for every ${w \in (N(u) \cap V(\hat{D}_{m})) \setminus \lbrace w_1 \rbrace}$. By Lemma~\ref{Asra-enlarge}, there exists an extension of $\hat{D}^u_{m}$ with target $u$, but since $w_1$ is the only neighbour of $u$ on $\hat{D}^u_{m}$, all extensions of $\hat{D}^u_{m}$ with target $u$ must be (ii)-extensions. So let there be a (ii)-extension of $\hat{D}^u_{m}$ with target $u$ where the edge ${w_1w_2 \in E(\hat{D}^u_{m})}$ is replaced by the path $w_1uhw_2$. If ${h \notin V(\hat{D}_{m})}$ holds, then there is also a (ii)-extension of $\hat{D}_{m}$ which we set as $\hat{D}_{m+1}$. The sets $M^{m+1}_{j}$ are defined for every $j$ with ${1 \leq j \leq k}$ in the following way: \[M^{m+1}_{j} = \begin{cases} M^{m}_{j} \cup \lbrace u, h \rbrace &\mbox{if } w_1 \in M^{m}_{j} \textnormal{ or } w_2 \in M^{m}_{j} \\ M^{m}_{j} \setminus \lbrace u, h \rbrace & \mbox{otherwise}. \end{cases} \] Using this definition, all required conditions are again fulfilled. It remains to handle the case when ${h \in V(\hat{D}_{m})}$ is true. In this situation, we build $\hat{D}_{m+1}$ as follows. We take $\hat{D}_{m}$, replace the path $h^+hh^-$ by the edge $h^+h^-$ and the edge $w_1w_2$ by the path $w_1uhw_2$ (see Figure~7). Furthermore, we set $M^{m+1}_{j}$ as before in the case with ${h \notin V(\hat{D}_{m})}$ for every $j$ with ${1 \leq j \leq k}$. We make some remarks to see that $$|E(\hat{D}_{m+1}) \cap \delta(M^{m+1}_j)| = 2$$ holds for every $j$ with $1 \leq j \leq k$. 
Note at first that neither $$V(\hat{D}_{m+1}) \cap M^{m+1}_{j} = \emptyset \; \textnormal{ nor } \; V(\hat{D}_{m+1}) \setminus M^{m+1}_{j} = \emptyset$$ holds because ${V(\hat{D}_{m}) \cap M^{m}_{j}}$ as well as ${V(\hat{D}_{m}) \setminus M^{m}_{j}}$ contains vertices with distance at least $2$ to $\mathscr{S}$. These facts are due to the relation $$V(T_j) \subseteq V(\hat{D}_{m})$$ and our assumption on $C$ that $$V(C) \setminus N(N(C)) \neq \emptyset$$ combined with the relation $$V(C) \subseteq V(\hat{D}_{m}).$$ Next, note that if the path $w_1uhw_2$ meets some cut $\delta(M^{m+1}_{j})$, it can meet it only once by definition of $M^{m+1}_{j}$ and then the edge $w_1w_2$ must be an element of $\delta(M^{m}_{j})$. Similarly, if ${h^+h^- \in \delta(M^{m+1}_{j})}$ holds, the path $h^+hh^-$ meets the cut $\delta(M^{m}_{j})$ only once. Both edges of this path cannot lie in the cut $\delta(M^{m}_{j})$ because $$|E(\hat{D}_{m}) \cap \delta(M^{m}_{j})| = 2$$ holds and so $h$ would be the only vertex in $$V(\hat{D}_{m}) \cap M^{m}_{j} \; \textnormal{ or } \; V(\hat{D}_{m}) \setminus M^{m}_{j}.$$ This would be a contradiction since both of these sets contain vertices with distance at least $2$ to $\mathscr{S}$. We know that $\hat{D}_{m+1}$ meets the cut $\delta(M^{m+1}_{j})$ at least twice because $\hat{D}_{m+1}$ is a cycle that has vertices in $M^{m+1}_{j}$ and ${V \setminus M^{m+1}_{j}}$. Since $$|E(\hat{D}_{m}) \cap \delta(M^{m}_j)| = 2$$ holds by construction and $\hat{D}_{m+1}$ is formed from $\hat{D}_{m}$ by deleting the edges $w_1w_2$, $h^+h$ and $hh^-$ but adding $w_1u$, $uh$, $hw_2$ and $h^+h^-$, the equation $$|E(\hat{D}_{m+1}) \cap \delta(M^{m+1}_j)| = 2$$ is valid. So the cycle $\hat{D}_{m+1}$ and the sets $M^{m+1}_j$ fulfill all required conditions. Since $G$ is locally finite and $\mathscr{S}$ is a subset of $N(C)$, we know that $\mathscr{S}$ is finite.
Then there must exist an integer $M$ such that $\hat{D}_{M}$ contains all vertices of ${V(K_0) \cup \mathscr{S} \cup N_3(\mathscr{S})}$. We set $$C' = \hat{D}_{M} \; \textnormal{ and } \; M_j = M^{M}_j$$ for every $j$ with $1 \leq j \leq k$. The cycle $C'$ and the sets $M_j$ show that statements~(i) and (ii) of the lemma are true. \begin{figure} \caption{The cycle $\hat{D}_{m+1}$ and the sets $M^{m+1}_j$.} \label{lemma_4_3_M_j} \end{figure} It remains to check that statement~(iii) of the lemma holds for the cycle $C'$. Note for the inclusion $${E(C - N(N(C))) \subseteq E(C')}$$ that if we have lost edges of the cycle $C$, then they have at least one endvertex in $N(N(C))$ because of the definition of extension and the operation where we replaced a path $h^+hh^-$ by the edge $h^+h^-$, in which $h$ is a neighbour of some vertex in $\mathscr{S}$. This shows the first part of statement~(iii). Note for the other part of statement~(iii) that we obtained edges in ${E(C') \setminus E(C)}$ only by building extensions with targets in ${V \setminus V(C)}$, by taking paths whose vertices lie entirely in ${V \setminus V(C)}$ as in Case~2 and by replacing paths $h^+hh^-$ by the edge $h^+h^-$ where $h$ lies on some cycle and is a neighbour of some vertex in ${\mathscr{S} \subseteq N(C)}$. Since $h^+$ and $h^-$ are neighbours of $h$, we get that $$\lbrace h^+, h^- \rbrace \subseteq N_2(N(C)) \cup N(C).$$ Next let us check the location of the relevant edges of extensions. Let $Z'$ be an extension of some cycle $Z$ in $G$ with target in ${V \setminus V(C)}$. Then we know for each edge ${e = uv \in E(Z') \setminus E(Z)}$ that $$\lbrace u, v \rbrace \subseteq (V \setminus V(C)) \cup N_2(N(C))$$ holds by the definition of extension. Putting these observations together, statement~(iii) is completely proved. So the proof of the whole lemma is done. \end{proof} Having Lemma~\ref{Asra-cut-1} in our toolkit, we now prove Theorem~\ref{Asra-loc-fin}.
As mentioned before, the rough idea of the proof is to construct a sequence of cycles together with certain vertex sets such that we get a Hamilton circle as a limit object from the sequence of cycles using Lemma~\ref{HC-extract}. For the construction of these objects, we use Lemma~\ref{Asra-cut-1}. \begin{proof}[Proof of Theorem~\ref{Asra-loc-fin}] Let $G = (V, E)$ be a locally finite, connected, claw-free graph which satisfies $(\ast)$ and has at least three vertices. We may assume that $G$ is infinite because for finite $G$ the statement follows from Theorem~\ref{Asra-fin}. First we define a sequence $(C_i)_{i \in \mathbb{N}}$ of cycles of $G$ such that $$V(C_i) \setminus N(N(C_i)) \neq \emptyset$$ holds for every ${i \in \mathbb{N}}$. Additionally, we define an integer sequence ${(k_i)_{i \in \mathbb{N} \setminus \lbrace 0 \rbrace}}$ and vertex sets ${M^i_j \subseteq V(G)}$ where ${i \in \mathbb{N} \setminus \lbrace 0 \rbrace}$ and for every such $i$ the inequality chain ${1 \leq j \leq k_i}$ is satisfied. We start by taking an arbitrary cycle as $\tilde{C}$. Note that $G$ contains a cycle since $G$ is connected, has at least three vertices and satisfies $(\ast)$. The argumentation is the same as in the beginning of the proof of Theorem~\ref{Asra-fin}. Now take an extension sequence of $\tilde{C}$ where we choose the targets of the extensions always from $N(\tilde{C})$. Since $G$ is locally finite, $N(\tilde{C})$ is finite and the extension sequence ends after finitely many steps. We set $C_0$ as the last cycle of such an extension sequence of $\tilde{C}$. Note that $$V(\tilde{C}) \subseteq V(C_0) \setminus N(N(C_0)).$$ Now suppose we have already defined the sequence of cycles up to length $m+1$ for some ${m \geq 0}$ together with the integer sequence up to $k_m$ and the vertex sets $M^i_j$ for every ${i \leq m}$ where $j$ always satisfies ${1 \leq j \leq k_i}$.
Then let $$\mathscr{S}^{m+1} \subseteq N(C_{m})$$ be a finite minimal vertex set such that every ray which starts in $V(C_m)$ has to meet $\mathscr{S}^{m+1}$. Such a set exists because $G$ is locally finite, which implies that $N(C_{m})$ is finite. Hence, we get $\mathscr{S}^{m+1}$ by sorting out vertices from $N(C_m)$. Next we set $k_{m+1}$ as the integer we get from Lemma~\ref{struct_toll}. Furthermore, let $$S^{m+1}_1, \ldots, S^{m+1}_{k_{m+1}}$$ be the minimal separators and $$K^{m+1}_0, \ldots, K^{m+1}_{k_{m+1}}$$ be the components of ${G-\mathscr{S}^{m+1}}$ we get from Lemma~\ref{struct_toll}. Applying Lemma~\ref{Asra-cut-1} with these objects and the cycle $C_m$, we obtain a cycle which we set as $C_{m+1}$. We also get vertex sets for every $j$ with ${1 \leq j \leq k_{m+1}}$ which we choose for the sets $M^{m+1}_j$ (see Figure~8). We want to use Lemma~\ref{HC-extract} to prove that $G$ is Hamiltonian. The next claim ensures that all required conditions are fulfilled to apply Lemma~\ref{HC-extract}. 
\setcounter{claim}{0} \begin{claim} \textnormal{ \begin{enumerate}[\normalfont(a)] \item \textit{For every vertex $v$ of $G$, there exists an integer ${j \geq 0}$ such that ${v \in V(C_i)}$ holds for every ${i \geq j}$.} \item \textit{For every ${i \geq 1}$ and $j$ with ${1 \leq j \leq k_i}$, the cut $\delta(M^i_j)$ is finite.} \item \textit{For every end $\omega$ of $G$, there is a function ${f : \mathbb{N} \setminus \lbrace 0 \rbrace \longrightarrow \mathbb{N}}$ such that the inclusion ${{M^{j}_{f(j)} \subseteq M^i_{f(i)}}}$ holds for all integers $i, j$ with ${1 \leq i \leq j}$ and the equation ${M_{\omega}:= \bigcap^{\infty}_{i=1} \overline{M^i_{f(i)}} = \lbrace \omega \rbrace}$ is true.} \item \textit{${E(C_i) \cap E(C_j) \subseteq E(C_{j+1})}$ holds for all integers $i$ and $j$ with ${0 \leq i < j}$.} \item \textit{The equations ${E(C_i) \cap \delta(M^p_j) = E(C_p) \cap \delta(M^p_j)}$ and ${|E(C_i) \cap \delta(M^p_j)| = 2}$ hold for each triple $(i, p, j)$ which satisfies ${1 \leq p \leq i}$ and ${1 \leq j \leq k_p}$.} \end{enumerate} } \end{claim} \begin{figure} \caption{The cycle $C_{m+1}$.} \label{thm_1_4_C_m+1} \end{figure} Note that the inclusions $$V(K^{i}_0) \cup \mathscr{S}^{i} \cup N_3(\mathscr{S}^{i}) \subseteq V(C_{i}) \subseteq V(K^{i+1}_0)$$ and the equation ${N(K^{i}_0) = \mathscr{S}^{i}}$ hold for every ${i \geq 1}$ by definition of the cycles together with Lemma~\ref{Asra-cut-1}~(i) and by Lemma~\ref{struct_toll}. Since $G$ is connected, statement~(a) follows. We fix an arbitrary integer ${i \geq 1}$ and some $j$ with ${1 \leq j \leq k_i}$ for the proof of statement~(b). By definition of the set $M^i_j$ and Lemma~\ref{Asra-cut-1}~(ii), the inclusions $${V(K^i_j) \setminus N(S^i_j) \subseteq M^i_j \subseteq V(K^i_j) \cup \mathscr{S}^i \cup N(\mathscr{S}^i)}$$ are true. Now the definitions of $K^i_j$ and $\mathscr{S}^i$ imply that $N(M^i_j)$ is a subset of ${V(K^i_0) \cup \mathscr{S}^i \cup N_2(\mathscr{S}^i)}$.
Using that $V(K^i_0)$ and $\mathscr{S}^i$ are finite sets by definition and that $G$ is locally finite, we obtain that $\delta(M^i_j)$ is a finite cut. We fix an arbitrary end $\omega$ of $G$ for statement~(c). Now we use that for every $i \geq 1$ the end $\omega$ is contained in precisely one of the closures ${\overline{K^i_{1}}, \ldots, \overline{K^i_{k_i}}}$, say ${\omega \in \overline{K^i_{j}}}$ where ${1 \leq j \leq k_i}$. Then set ${f(i) = j}$. First we prove that $${M^{j}_{f(j)} \subseteq M^i_{f(i)}}$$ holds for all integers $i, j$ with ${1 \leq i \leq j}$. For this, it suffices to show that the inclusion $${M^{i+1}_{f(i+1)} \subseteq M^i_{f(i)}}$$ is true for every ${i \geq 1}$. We get that the inclusions $${V(K^{i}_{f(i)}) \setminus N(S^{i}_{f(i)}) \subseteq M^{i}_{f(i)} \subseteq V(K^{i}_{f(i)}) \cup \mathscr{S}^{i} \cup N(\mathscr{S}^{i})}$$ hold for every ${i \geq 1}$ by definition of the set $M^{i}_{f(i)}$ and Lemma~\ref{Asra-cut-1}~(ii). Note that ${\mathscr{S}^{i+1} \cup N(\mathscr{S}^{i+1})}$ is not necessarily a subset of $V(K^{i}_{f(i)})$. Because of this, we have to look a bit more carefully at the set $M^{i}_{f(i)}$. For our purpose, it suffices to prove that $M^{i+1}_{f(i+1)}$ is a subset of ${V(K^{i}_{f(i)}) \setminus N(S^{i}_{f(i)})}$ for every ${i \geq 1}$. Suppose this is not true. Then, using the definition of $K^{n}_{f(n)}$ and $S^{n+1}_{\ell}$ together with Lemma~\ref{struct_toll}, there exists an integer ${n \geq 1}$ and an integer ${\ell \neq f(n+1)}$ with ${1 \leq \ell \leq k_{n+1}}$ such that $${V(K^{n}_{f(n)}) \cap (S^{n+1}_{\ell} \cup N(S^{n+1}_{\ell})) = \emptyset}$$ and $${X = M^{n+1}_{f(n+1)} \cap (S^{n+1}_{\ell} \cup N(S^{n+1}_{\ell}))} \neq \emptyset.$$ Since the inclusion $${V(K^{n+1}_0) \cup \mathscr{S}^{n+1} \cup N_3(\mathscr{S}^{n+1}) \subseteq V(C_{n+1})}$$ holds by definition of $C_{n+1}$ and Lemma~\ref{Asra-cut-1}~(i), we get that $C_{n+1}$ has vertices in $X$ and ${V \setminus X}$. 
So $C_{n+1}$ hits the cut $\delta(X)$ at least twice. Furthermore, we know that no edge of $\delta(X)$ is an edge of $K^{n}_{f(n)}$. The inclusion $${M^{n+1}_{f(n+1)} \subseteq V(K^{n+1}_{f(n+1)}) \cup \mathscr{S}^{n+1} \cup N(\mathscr{S}^{n+1})}$$ and the definition of $X$ ensure that $$\delta(X) \subseteq \delta(M^{n+1}_{f(n+1)}).$$ Hence, $E(C_{n+1})$ contains no edges of ${\delta(M^{n+1}_{f(n+1)}) \setminus \delta(X)}$ because of Lemma~\ref{Asra-cut-1}~(ii). This yields a contradiction. In order to show that $E(C_{n+1})$ must use at least one edge of ${\delta(M^{n+1}_{f(n+1)}) \setminus \delta(X)}$, we define the set $${Y = M^{n+1}_{f(n+1)} \cap V(K^{n}_{f(n)})}.$$ Using the inclusions $${V(K^{n+1}_{f(n+1)}) \setminus N(S^{n+1}_{f(n+1)}) \subseteq M^{n+1}_{f(n+1)} \subseteq V(K^{n+1}_{f(n+1)}) \cup \mathscr{S}^{n+1} \cup N(\mathscr{S}^{n+1})}$$ and the definitions of $K^{n}_{f(n)}$ and $K^{n+1}_{f(n+1)}$ together with Lemma~\ref{struct_toll}, it is ensured that $Y$ is not empty since the inclusion $$V(K^{n+1}_{f(n+1)}) \subseteq V(K^{n}_{f(n)})$$ holds and that the inclusion $${\delta(Y) \subseteq \delta(M^{n+1}_{f(n+1)}) \cap E(K^{n}_{f(n)})}$$ is true. So the edge sets $\delta(X)$ and $\delta(Y)$ are disjoint. Using the inclusion $${V(K^{n+1}_0) \cup \mathscr{S}^{n+1} \cup N_3(\mathscr{S}^{n+1}) \subseteq V(C_{n+1})}$$ again, we obtain that $C_{n+1}$ contains vertices in $Y$ and ${V \setminus Y}$, which implies that $E(C_{n+1})$ contains at least two edges of $\delta(Y)$. Since $$\delta(Y) \subseteq \delta(M^{n+1}_{f(n+1)}) \setminus \delta(X),$$ we have the desired contradiction. So the inclusion ${M^{j}_{f(j)} \subseteq M^i_{f(i)}}$ holds for all integers $i, j$ with ${1 \leq i \leq j}$. It remains to show that ${M_{\omega} = \lbrace \omega \rbrace}$ is true. As noted above, the inclusions $${V(K^i_{f(i)}) \setminus N(S^i_{f(i)}) \subseteq M^i_{f(i)} \subseteq V(K^i_{f(i)}) \cup \mathscr{S}^i \cup N(\mathscr{S}^i)}$$ are true for every ${i \geq 1}$. 
So $\omega$ is an element of $M_{\omega}$ by definition of the function $f$. To show that $M_{\omega}$ contains no vertex of $G$ and no other end of $G$, fix some vertex $v \in V$ and some end ${\omega' \neq \omega}$ of $G$. Now let $F$ be a finite set of vertices such that $\omega$ and $\omega'$ lie in closures of different components of ${G-F}$. We take an integer ${p \geq 1}$ such that the following inclusion is fulfilled: $$F \cup \lbrace v \rbrace \subseteq V(K^{p}_0).$$ To see that it is possible to find such an integer, note that each vertex ${w \in F \cup \lbrace v \rbrace}$ lies in some cycle $C_{\ell_w}$ where ${\ell_w \geq 0}$ by statement~(a). The construction of the cycles and Lemma~\ref{Asra-cut-1}~(i) ensure that the inclusion $$V(C_i) \subseteq V(C_{i+1})$$ holds for every ${i \geq 0}$. Since ${F \cup \lbrace v \rbrace}$ is finite, we can set ${p-1}$ as the maximum of all integers $\ell_w$. Now the definition of $K^{p}_0$ and Lemma~\ref{struct_toll} imply that $V(K^{p}_0)$ contains all vertices of ${F \cup \lbrace v \rbrace}$. So $\omega$ and $\omega'$ are also in closures of different components of ${G-(V(K^{p}_0) \cup \mathscr{S}^p)}$. As we have proved already, the set $M^{i+1}_{f(i+1)}$ is a subset of ${V(K^i_{f(i)}) \setminus N(S^i_{f(i)})}$ for every ${i \geq 1}$. So $\omega'$ and $v$ do not lie in the set $\overline{M^{p+1}_{f(p+1)}}$, which implies that they cannot be elements of $M_{\omega}$. Since each set $M^{i}_{f(i)}$ is a set of vertices, the intersection $M_{\omega}$ cannot contain inner points of edges. Therefore, the equation ${M_{\omega} = \lbrace \omega \rbrace}$ is valid and statement~(c) is true. To prove statement~(d), take an edge ${e \in E(C_i) \cap E(C_j)}$ for arbitrary integers $i$ and $j$ that satisfy ${0 \leq i < j}$.
By definition of the cycles together with Lemma~\ref{Asra-cut-1}~(iii) and (i), we get that both endvertices of $e$ lie in ${V(C_i) \subseteq V(K^{i+1}_0)}$ and that the inclusions $${V(K^{i+1}_0) \cup \mathscr{S}^{i+1} \cup N_3(\mathscr{S}^{i+1}) \subseteq V(C_{i+1}) \subseteq V(C_j)}$$ hold. Using the equation $${N(K^{i+1}_0) = \mathscr{S}^{i+1}},$$ we obtain that $${e \in E(C_j - N(N(C_j)))}$$ is true. So $e$ is an element of $E(C_{j+1})$ by definition of the cycles and Lemma~\ref{Asra-cut-1}~(iii). Therefore, statement~(d) holds. Let us fix an arbitrary ${p \geq 1}$ and $j$ with ${1 \leq j \leq k_p}$ for statement~(e). The equation $$|E(C_p) \cap \delta(M^p_j)| = 2$$ is true by definition of the cycles and Lemma~\ref{Asra-cut-1}~(ii). So proving the equation $$E(C_p) \cap \delta(M^p_j) = E(C_i) \cap \delta(M^p_j)$$ for every $i \geq p$ suffices to show that statement~(e) holds. First we verify the inclusion $${E(C_p) \cap \delta(M^p_j) \subseteq E(C_i) \cap \delta(M^p_j)}.$$ We know that for every ${p \geq 1}$ and $j$ with ${1 \leq j \leq k_p}$ the inclusions $${V(K^p_{j}) \setminus N(S^p_j) \subseteq M^p_{j} \subseteq V(K^p_{j}) \cup \mathscr{S}^p \cup N(\mathscr{S}^p)}$$ are true by definition of the set $M^p_j$ and Lemma~\ref{Asra-cut-1}~(ii). So $$\lbrace u,v \rbrace \subseteq \mathscr{S}^p \cup N_2(\mathscr{S}^p)$$ holds for each edge ${uv \in \delta(M^p_j)}$. By definition of the cycles and Lemma~\ref{Asra-cut-1}~(i), we know further that the inclusion $${V(K^{p}_0) \cup \mathscr{S}^p \cup N_3(\mathscr{S}^p) \subseteq V(C_i)}$$ is true for every ${i \geq p \geq 1}$. Hence, if $uv$ is an edge of ${E(C_i) \cap \delta(M^p_j)}$, it lies also in ${E(C_i - N(N(C_i)))}$, which implies ${uv \in E(C_{i+1})}$ by Lemma~\ref{Asra-cut-1}~(iii). So we get inductively that $$E(C_p) \cap \delta(M^p_j) \subseteq E(C_i) \cap \delta(M^p_j)$$ holds for every ${i \geq p}$. It remains to check the reverse inclusion.
We do this by showing that for every ${i \geq p}$ the cycle $C_i$ contains no edges of $\delta(M^p_j)$ but the two which are also edges of $C_p$. For this, we use induction on $i$. The definition of $C_p$ and Lemma~\ref{Asra-cut-1}~(ii) ensure that the statement holds for $i=p$. Next we fix an arbitrary $i > p$ and an edge ${e = uv \in \delta(M^p_j) \setminus E(C_p)}$. Using the induction hypothesis, we get that $e$ is not an edge of $C_{i-1}$. Now suppose for a contradiction that $e$ is an edge of $C_i$. Then the definition of $C_i$ together with Lemma~\ref{Asra-cut-1}~(iii) implies that the inclusion $${\lbrace u, v \rbrace \subseteq (V \setminus V(C_{i-1})) \cup N_2(N(C_{i-1}))}$$ holds. This leads to a contradiction because we already know that the inclusion $$\lbrace u,v \rbrace \subseteq \mathscr{S}^p \cup N_2(\mathscr{S}^p)$$ is valid. Both inclusions cannot be true at the same time since $C_{i-1}$ contains all vertices of ${V(K^p_0) \cup \mathscr{S}^p \cup N_3(\mathscr{S}^{p})}$ by definition of the cycle and Lemma~\ref{Asra-cut-1}~(i). This completes the induction and, therefore, shows the equation $$E(C_p) \cap \delta(M^p_j) = E(C_i) \cap \delta(M^p_j)$$ for every ${i \geq p}$. Hence, statement~(e) is true and the proof of the claim is complete. \newline As mentioned before, we can now apply Lemma~\ref{HC-extract} using the sequence of cycles $(C_i)_{i \in \mathbb{N}}$, the integer sequence $(k_i)_{i \in \mathbb{N} \setminus \lbrace 0 \rbrace}$ and the vertex sets ${M^i_j}$ for every ${i \in \mathbb{N} \setminus \lbrace 0 \rbrace}$ and $j$ with ${1 \leq j \leq k_i}$ to obtain that $G$ is Hamiltonian. \end{proof} Now let us discuss how the proof of Theorem~\ref{Asra-loc-fin} depends on the assumption of being claw-free. Lemma~\ref{complete} and Lemma~\ref{struct_toll} do not have to be true if our graph contains claws.
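The claw-freeness assumption that the argument leans on is simply the exclusion of an induced $K_{1,3}$, and on finite graphs it can be tested directly. A minimal Python sketch (the adjacency-dictionary encoding and the function name are ours, not from the text):

```python
from itertools import combinations

def is_claw_free(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    for v, nbrs in adj.items():
        # an induced claw K_{1,3} centred at v consists of three
        # pairwise non-adjacent neighbours a, b, c of v
        for a, b, c in combinations(sorted(nbrs), 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return False
    return True

claw = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}      # K_{1,3} itself
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_claw_free(claw), is_claw_free(triangle))  # False True
```

In particular, for a vertex $u$ with a neighbour $w$ on a cycle, claw-freeness applied to $\lbrace u, w^+, w^- \rbrace \subseteq N(w)$ forces one of the edges $uw^+$, $uw^-$ or $w^+w^-$, which is exactly how it is used in the proofs above.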
Without the second of these lemmas, the structure of the graph is not so clear anymore, but of course we can still find for every end a sequence of separators and components which captures the end. The harder problem is that without Lemma~\ref{complete} it is not clear how to control the growth of the sequence of cycles through the separators, which we need in order to apply Lemma~\ref{HC-extract}. In the proof of Lemma~\ref{Asra-cut-1} we made heavy use of Lemma~\ref{complete}. So in order to make progress towards a version of Theorem~\ref{Asra-loc-fin} which does not depend on the assumption of being claw-free, we need to find a way to control the growth of sequences of cycles given by extensions (maybe just along separators) only using property $(\ast)$. \end{document}
\begin{document} \title{Fully non-positive-partial-transpose genuinely entangled subspaces} \author{Owidiusz Makuta} \affiliation{Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotnik\'{o}w 32/46, 02-668 Warsaw, Poland} \author{Błażej Kuzaka} \affiliation{Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotnik\'{o}w 32/46, 02-668 Warsaw, Poland} \author{Remigiusz Augusiak} \affiliation{Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotnik\'{o}w 32/46, 02-668 Warsaw, Poland} \begin{abstract} Genuinely entangled subspaces are a class of subspaces in multipartite Hilbert spaces that are composed of only genuinely entangled states. They are thus an interesting object of study in the context of multipartite entanglement. Here we provide a construction of multipartite subspaces that are not only genuinely entangled but also fully non-positive-partial-transpose (NPT) in the sense that any mixed state supported on them has non-positive partial transpose across any bipartition. Our construction originates from the stabilizer formalism known for its use in quantum error correction. To this end, we first introduce a couple of criteria allowing one to assess whether any state from a given non-trivial stabilizer subspace is genuinely multipartite entangled. We then use these criteria to construct genuinely entangled stabilizer subspaces for any number of parties and prime local dimension and conjecture them to be of maximal dimension achievable within the stabilizer formalism. At the same time, we prove that every genuinely entangled stabilizer subspace is fully NPT in the above sense, which implies a quite surprising fact that no genuinely entangled stabilizer subspace can support PPT entangled states. \end{abstract} \maketitle \section{Introduction} Entanglement is one of the most exhilarating features of quantum systems \cite{RevModPhys.81.865}.
Its almost magic-like properties defy our real-life intuitions and show that the laws of physics at small scales cannot be reliably modelled with methods of classical physics. Throughout the years, many have tried, and succeeded, to use entanglement as a resource for a seemingly infinite number of interesting applications that often have no classical analogue, such as the well-known quantum teleportation \cite{PhysRevLett.70.1895} or quantum cryptography \cite{Ekert91}. Entanglement is also vital for quantum steering \cite{PhysRevLett.98.140402} or Bell nonlocality \cite{Bell} which represent other forms of quantum correlations that have been turned into independent resources themselves (see, e.g., \cite{Steeringreview,Bellreview14}). In particular, Bell nonlocality lies at the heart of quantum information in the device-independent version \cite{PhysRevLett.98.230501}. It is thus not surprising that until now entanglement theory has been a very lively field of research. Obviously, in order to study entanglement, one has to identify which states are entangled in the first place. The separability problem, that is, the problem of deciding whether a given, in general mixed, quantum state is entangled, has thus been intensively studied over the years \cite{GUHNE20091}. At first, this problem was considered mostly for bipartite quantum states \cite{GURVITS2004448}, but it quickly gained importance in the many-body regime. This was driven by the development of experimental techniques that allow one to prepare and control interesting many-body states (see, e.g., Refs. \cite{JDScience,Kasevich,Experimental}) but also by the identification of certain applications of multipartite entanglement such as quantum metrology \cite{QuantumMetro11} or quantum computing \cite{QuantumComp01}. Entanglement has also turned out to be useful in the study of many-body phenomena \cite{RevModPhys.80.517}.
Among many forms of entanglement featured by the multipartite scenario, the strongest and at the same time most valuable from the application point of view is arguably the genuine multipartite entanglement \cite{G_hne_2010,PhysRevLett.106.190502,PhysRevA.83.040301}. Roughly speaking, a genuinely entangled multipartite quantum state is one that cannot be represented as a convex mixture of other states that are separable with respect to some bipartitions. A significant amount of attention has been devoted in the literature to characterise the properties of genuine entanglement and to provide methods of its detection \cite{PhysRevLett.106.190502,PhysRevA.98.062102,Micuda:19,Eltschka2020maximumnbody,PhysRevLett.128.080507}. Yet, the problem is tremendously difficult, certainly more difficult than in the bipartite case, and our understanding of genuine entanglement still remains incomplete. Our aim in this work is to join the effort of characterisation of genuine entanglement in multipartite systems, taking, however, a slightly different perspective. Instead of focusing on particular multipartite quantum states (in general mixed) we rather consider entangled subspaces of multipartite Hilbert spaces. In this way we gain generality: a statement made for a subspace applies to any mixed state supported on it. While genuinely entangled subspaces have recently been an object of intensive exploration \cite{PhysRevA.98.012313,2020,PhysRevA.99.032335,Demianowicz_2021,demianowicz2021universal,Antipin_2021}, there are actually not many schemes allowing one to judge whether a given subspace (and thus all states acting on it) is genuinely entangled. Here we will address this problem, concentrating on a particular class of multipartite subspaces that originate from the multiqudit stabilizer formalism \cite{GottesmanThesis}.
The latter, being known for its use in quantum error correction \cite{PhysRevLett.77.793,KITAEV20032,PhysRevA.103.042420,Huber2020quantumcodesof}, provides an easy-to-handle and convenient description of a certain class of quantum states. We first provide a simple necessary and sufficient criterion allowing one to decide whether a given stabilizer subspace in an $N$-qudit Hilbert space is genuinely entangled. Importantly, our criterion reduces to checking certain commutation relations for operators generating a given stabilizer subspace, which can be done in a finite number of steps, thus allowing us to avoid the tedious task of assessing whether every state belonging to that subspace is genuinely entangled. We then generalize the vector formalism introduced for the multiqubit case in Ref. \cite{Makuta_2021} that provides an efficient description of entanglement properties of stabilizer subspaces. This formalism is also particularly useful in constructing stabilizer subspaces that are genuinely entangled, and thus we employ it to provide a family of such subspaces which we believe to have the maximal dimension achievable within the stabilizer formalism. At the same time, we explore the use of partial transposition in deciding whether stabilizer subspaces are genuinely entangled and show that any mixed state supported on such a subspace must be NPT with respect to any bipartition. We thus provide a family of (possibly) maximally-dimensional genuinely entangled subspaces in $N$-qudit Hilbert spaces that are fully NPT, thus extending the results of Ref. \cite{PhysRevA.87.064302} to the multipartite regime. \section{Preliminaries} This section serves as an introduction of commonly used terms within the field of quantum information.
Readers already familiar with concepts of genuine entanglement, partial transpose and stabilizer formalism are advised to skip ahead to Section \ref{sec_ge_stab}.\\ \textit{(1) Genuine entanglement.} Let us consider a scenario where $N$ parties share a pure state $\ket{\psi}\in \mathcal{H}$. We can decompose the Hilbert space $\mathcal{H}$ in the following manner \begin{equation}\label{eq_hilbert_tensor} \mathcal{H}=\bigotimes_{i=1}^{N} \mathcal{H}_{i}, \end{equation} where $\mathcal{H}_{i}$ is the Hilbert space associated with the $i$th party. Let us then consider a partition of the set of parties $I_N:=\{1,\ldots,N\}$ into two disjoint subsets $Q$ and $\overline{Q}$ and call it a bipartition of $I_N$. We say that $\ket{\psi}$ is separable with respect to the bipartition $Q|\overline{Q}$ if \begin{equation} \ket{\psi}= \ket{\psi}_{Q}\otimes \ket{\psi}_{\overline{Q}}, \end{equation} for some $\ket{\psi}_{Q}\in \bigotimes_{i\in Q}\mathcal{H}_{i}$ and $\ket{\psi}_{\overline{Q}}\in \bigotimes_{i\in \overline{Q}}\mathcal{H}_{i}$. Otherwise, we call it entangled with respect to the bipartition $Q|\overline{Q}$. Then, we call $\ket{\psi}$ genuinely multipartite entangled iff it is entangled with respect to any nontrivial bipartition of the set $I_N$ \cite{GUHNE20091}. Of course, no state could be genuinely entangled if we were to consider trivial bipartitions, i.e., ones for which $Q= \emptyset$ or $Q=\{1,\dots,N\}$. Thus, whenever we say "all bipartitions" we only mean the nontrivial ones. One can also introduce the above notions in the mixed-state case. Precisely, we say that a density matrix $\rho$ is entangled with respect to a given bipartition $Q|\overline{Q}$ if it cannot be represented as a convex combination of pure states that are separable across this bipartition. Then, $\rho$ is called genuinely multipartite entangled if it cannot be represented as a convex combination of pure states in which each pure state is separable across possibly different bipartitions.
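For pure states these definitions can be tested numerically: $\ket{\psi}$ is entangled across $Q|\overline{Q}$ iff its Schmidt rank across that cut exceeds one, and genuine multipartite entanglement amounts to this holding for every nontrivial bipartition (it suffices to check one side of each complementary pair). A small sketch with numpy; the function names and encoding are ours, not from the paper:

```python
import numpy as np
from itertools import combinations

def entangled_across(psi, Q, dims):
    # Schmidt rank of psi across Q|Q-bar exceeds 1 iff psi is entangled there
    N = len(dims)
    Qbar = [i for i in range(N) if i not in Q]
    mat = psi.reshape(dims).transpose(list(Q) + Qbar)
    mat = mat.reshape(int(np.prod([dims[i] for i in Q])), -1)
    return np.linalg.matrix_rank(mat) > 1

def is_gme(psi, dims):
    # a pure state is GME iff it is entangled across every nontrivial bipartition
    N = len(dims)
    return all(entangled_across(psi, list(Q), dims)
               for r in range(1, N // 2 + 1)
               for Q in combinations(range(N), r))

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)   # (|000> + |111>)/sqrt(2)
print(is_gme(ghz, [2, 2, 2]))  # True
```

By contrast, $\ket{0}\otimes(\ket{00}+\ket{11})/\sqrt{2}$ has Schmidt rank one across the first-party cut and is therefore not genuinely entangled.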
In an analogous manner we can also extend these definitions to entire subspaces of the joint Hilbert space $\mathcal{H}$: a subspace is entangled with respect to some bipartition or it is genuinely entangled if every state from that subspace is of the corresponding type \cite{PhysRevA.98.012313}. In particular, a genuinely entangled subspace of $\mathcal{H}$ is one that consists of only genuinely entangled pure states. \textit{(2) Positive partial transpose states.} Consider a density matrix $\rho$ acting on $\mathcal{H}$. We call it a non-positive-partial-transpose (NPT) state with respect to a bipartition $Q|\overline{Q}$ if its partial transpose with respect to $Q|\overline{Q}$ is not a positive semi-definite matrix, \begin{equation}\label{eq_npt} \rho^{T_{Q}}\ngeqslant 0, \end{equation} where $T_{Q}$ denotes a transposition over subsystems from $Q$ defined as \begin{equation} \rho^{T_{Q}}=\left(\bigotimes_{i\in Q} T^{(i)}\otimes \bigotimes_{j\in\overline{Q}}I^{(j)}\right)[\rho], \end{equation} where $T^{(i)}$ is the transposition in the standard basis performed on the $i$th subsystem, whereas $I^{(j)}$ is the identity on the subsystem $j$. In other words, the partial transpose is a transposition carried out over only some of the parties. As an example, let us consider a matrix \begin{equation} C=\sum_{i,j,k,l}\gamma_{i,j,k,l}\ket{i}_{1}\bra{j}\otimes \ket{k}_{2}\bra{l}, \end{equation} where the indices $1,2$ indicate the party. Then the partial transpose over the first party of this matrix is defined as follows \begin{equation} C^{T_{1}}=\sum_{i,j,k,l}\gamma_{i,j,k,l}\ket{j}_{1}\bra{i}\otimes \ket{k}_{2}\bra{l}. \end{equation} Let us finally define fully NPT states to be those for which \eqref{eq_npt} holds true for any bipartition $Q|\overline{Q}$. We can also extend the above definitions to subspaces of $\mathcal{H}$.
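Numerically, the partial transpose amounts to reshaping $\rho$ into a tensor with one ket and one bra index per party and swapping the two indices of each transposed party. A short sketch (function name ours), showing that the two-qubit Bell state is NPT across the only nontrivial bipartition:

```python
import numpy as np

def partial_transpose(rho, dims, Q):
    # transpose the subsystems listed in Q (0-indexed) in the standard basis
    N = len(dims)
    t = rho.reshape(dims + dims)      # indices: i_1..i_N, j_1..j_N
    for q in Q:
        t = t.swapaxes(q, N + q)      # swap ket/bra index of subsystem q
    return t.reshape(int(np.prod(dims)), -1)

phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi)
pt = partial_transpose(rho, [2, 2], [0])
print(min(np.linalg.eigvalsh(pt)))    # -0.5 < 0, so the state is NPT
```

The spectrum of $\rho^{T_1}$ here is $\{1/2,1/2,1/2,-1/2\}$, so condition \eqref{eq_npt} is satisfied.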
Precisely, a subspace $V\subset \mathcal{H}$ is termed NPT with respect to a bipartition $Q|\overline{Q}$ if every mixed state supported on it is NPT with respect to $Q|\overline{Q}$. Analogously, $V$ is called fully NPT if any mixed state supported on it is fully NPT. \textit{(3) Qudit stabilizer formalism.} In this work we concentrate on a class of quantum states (pure or mixed) that originate from the multiqudit stabilizer formalism. Let us assume that $N$ parties share a pure state $\ket{\psi}\in \mathcal{H}$. The decomposition of Hilbert space (\ref{eq_hilbert_tensor}) also holds true in this scenario; however, in this case we additionally assume that for every $i\in\{1,\dots,N\}$ we have $\mathcal{H}_{i}=\mathbb{C}^{d}$, where $d$ is called the dimension of the local Hilbert space; in other words, each party holds one qudit. The most important objects in the qudit stabilizer formalism are the generalised Pauli operators defined as \begin{equation}\label{eq_xz_def} X=\sum_{i=0}^{d-1}\ket{i+1}\!\bra{i},\qquad Z=\sum_{i=0}^{d-1}\omega^{i}\ket{i}\!\bra{i}, \end{equation} where $\omega=\operatorname{exp}(2\pi \mathbb{i}/d)$ is a primitive $d$th root of unity, $\mathbb{i}$ denotes the imaginary unit, and the addition is modulo $d$, meaning that $\ket{d}\equiv\ket{0}$. With a bit of calculation one can verify that these matrices enjoy the following properties: \begin{equation} X^{\dagger}X=\mathbb{1},\qquad Z^{\dagger}Z=\mathbb{1}, \end{equation} \begin{equation}\label{eq_xz^d} X^{d}=\mathbb{1},\qquad Z^{d}=\mathbb{1}, \end{equation} meaning that they are unitary and their spectra are $\{1,\omega,\ldots,\omega^{d-1}\}$. Moreover, they obey the following commutation relations \begin{equation}\label{eq_xz_zx} X^{i}Z^{j}=\omega^{-ij}Z^{j}X^{i} \end{equation} for any $i,j=0,\ldots,d-1$.
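These defining relations are easy to verify numerically. A short check for $d=3$ (a sketch of ours, not from the paper):

```python
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)            # primitive d-th root of unity
X = np.roll(np.eye(d), 1, axis=0)     # X|i> = |i+1 mod d>
Z = np.diag(w ** np.arange(d))        # Z|i> = w^i |i>

# unitarity and order d, Eq. (eq_xz^d)
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))

# commutation relations X^i Z^j = w^{-ij} Z^j X^i, Eq. (eq_xz_zx)
for i in range(d):
    for j in range(d):
        lhs = np.linalg.matrix_power(X, i) @ np.linalg.matrix_power(Z, j)
        rhs = w ** (-i * j) * np.linalg.matrix_power(Z, j) @ np.linalg.matrix_power(X, i)
        assert np.allclose(lhs, rhs)
```

For $d=2$ these operators reduce to the familiar Pauli matrices $\sigma_x$ and $\sigma_z$, for which the relation becomes plain anticommutation.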
Let us now introduce the generalised Pauli group $\mathbbm{P}_{N,d}$ as a set of all $N$-fold tensor products of matrices $\omega^{r} X^{n}Z^{m}$, where $r,n,m\in \{0,\dots,d-1\}$, equipped with matrix multiplication as the group operation. Consider then a subgroup $\mathbb{S}\subset\mathbbm{P}_{N,d}$. It is called a stabilizer if \cite{PhysRevA.71.042315}: \begin{enumerate} \item all elements of $\mathbb{S}$ mutually commute, \item given $a\in\mathbb{C}$, we have $a\mathbb{1}\in\mathbb{S}$ if and only if $a=1$. \end{enumerate} We say that a state $\ket{\psi}$ is stabilized by $\mathbb{S}$ if \begin{equation}\label{eq_stab} G\ket{\psi}=\ket{\psi} \end{equation} holds true for any $G\in\mathbb{S}$. Then, a stabilizer subspace $V$ is a maximal subspace stabilized by a given stabilizer, i.e., a state $\ket{\psi}$ is stabilized by $\mathbb{S}$ if and only if $\ket{\psi}\in V$. We can also extend the definition of stabilisation to mixed states: we say that a state described by a density matrix $\rho$ is stabilized by $\mathbb{S}$ if for every operator $G\in \mathbb{S}$ we have \begin{equation} G\rho=\rho G=\rho. \end{equation} For a more in-depth discussion of the stabilizer formalism see \cite{GHEORGHIU2014505, nielsen00}. Given a stabilizer $\mathbb{S}$, it is often convenient to represent it in terms of the smallest set of independent elements of $\mathbb{S}$, called generators, using which one can represent any other element of $\mathbb{S}$. Here, 'independent' means that no generator can be represented as a product of the other generators, that is, \begin{equation} G_{i}\neq G_{1}^{\alpha_{1}}\dots G_{i-1}^{\alpha_{i-1}}G_{i+1}^{\alpha_{i+1}}\dots G_{k}^{\alpha_{k}} \end{equation} for all $\alpha_{i}\in\{0,\dots,d-1\}$. Of course, from the definition of a stabilizer it follows that \begin{equation} [G_i,G_j]=0\qquad (i\neq j).
\end{equation} For prime $d$, this representation of a stabilizer allows us to easily determine the dimension of the corresponding stabilizer subspace $V$, namely $\dim V=d^{N-k}$; for non-prime $d$ the situation is slightly more complicated (see Ref. \cite{GHEORGHIU2014505}). \section{Genuinely entangled subspaces in the qudit stabilizer formalism} \label{sec_ge_stab} In this section we aim to characterise genuine entanglement in the qudit stabilizer formalism and find a genuinely entangled subspace that is as large as possible. To this end, we generalise the results from \cite{Makuta_2021} that concern genuine entanglement in the stabilizer formalism for a local Hilbert space of dimension $d=2$. Let us consider a nontrivial bipartition $Q|\overline{Q}$ of $\{1,\dots,N\}$ and a stabilizer $\mathbb{S}=\langle G_{1},\dots,G_{k}\rangle$ with a corresponding stabilizer subspace $V\subset \mathcal{H}=(\mathbb{C}^{d})^{\otimes N}$. Each generator $G_{i}$ can be decomposed with respect to the above bipartition in the following way \begin{equation}\label{eq_g_bip} G_{i}=G_{i}^{(Q)}\otimes G_{i}^{(\overline{Q})}, \end{equation} where $G_{i}^{(Q)}$, $G_{i}^{(\overline{Q})}$ act on $\bigotimes_{i\in Q}\mathcal{H}_{i}$, $\bigotimes_{i\in \overline{Q}}\mathcal{H}_{i}$, respectively. With this we can formulate the following theorem. \begin{thm}\label{thm_sep} Given a stabilizer $\mathbb{S}=\langle G_{1},\dots, G_{k}\rangle$ of an arbitrary local dimension, the corresponding stabilizer subspace $V\neq\{0\}$ is entangled with respect to a bipartition $Q|\overline{Q}$ iff there exists a pair $i,j\in \{1,\dots,k\}$ for which the following holds true: \begin{equation}\label{eq_gigj_com} \left[G_{i}^{(Q)},G_{j}^{(Q)}\right]\neq 0. \end{equation} \end{thm} \begin{proof} We will begin by proving the "$\Rightarrow$" implication by contradiction.
Let us assume that $V$ is entangled with respect to a bipartition $Q|\overline{Q}$ and that for all $i,j\in\{1,\dots,k\}$ we have \begin{equation} \left[G_{i}^{(Q)},G_{j}^{(Q)}\right]=0. \end{equation} Since all generators commute, this implies that \begin{equation} \left[G_{i}^{(\overline{Q})},G_{j}^{(\overline{Q})}\right]=0 \end{equation} for all $i,j\in\{1,\dots,k\}$. Consequently, there exists a common eigenbasis for all $G_{i}^{(Q)}$ and another common eigenbasis for all $G_{i}^{(\overline{Q})}$: \begin{equation}\label{eq_g_bip_basis} G_{i}^{(Q)}\ket{\phi_{j}}=\lambda_{j}^{(i)}\ket{\phi_{j}}\qquad \textrm{and} \qquad G_{i}^{(\overline{Q})}\ket{\theta_{j}}=\overline{\lambda}_{j}^{(i)}\ket{\theta_{j}} \end{equation} for $\lambda_{j}^{(i)}, \overline{\lambda}_{j}^{(i)}\in \mathbb{C}$. Using this fact, we can construct a basis of the joint Hilbert space $\mathcal{H}$ by taking the tensor product of elements of both bases, that is, \begin{equation} \ket{\varphi_{ij}}=\ket{\phi_{i}}\otimes\ket{\theta_{j}}. \end{equation} Importantly, it follows from Eqs. (\ref{eq_g_bip}) and (\ref{eq_g_bip_basis}) that this is a common eigenbasis of the generators $G_{i}$, hence either some $\ket{\varphi_{ij}}$ corresponds to the eigenvalue one of all $G_{i}$, or $V=\{0\}$. The latter contradicts the assumption that $V\neq\{0\}$, while the former implies that $\ket{\varphi_{ij}}$ belongs to $V$. Since $\ket{\varphi_{ij}}$ is clearly separable with respect to the bipartition $Q|\overline{Q}$, this contradicts the assumption that $V$ is entangled across this bipartition. Let us now move on to the "$\Leftarrow$" implication, which we also prove by contradiction. Let us assume that $V$ is separable with respect to a bipartition $Q|\overline{Q}$ but there exists a pair of generators $G_{i},G_{j}$ for which \eqref{eq_gigj_com} holds true.
This means that there exists $\ket{\psi}\in V$ such that for the bipartition $Q|\overline{Q}$ we have \begin{equation} \ket{\psi}=\ket{\psi}_{Q}\otimes \ket{\psi}_{\overline{Q}}. \end{equation} This together with (\ref{eq_g_bip}) implies that the following holds true \begin{equation} G_{i}^{(Q)}\ket{\psi}_{Q}=e^{\mathbb{i} \phi_{i}}\ket{\psi}_{Q} \end{equation} for all $i$, where $\phi_{i}\in\mathbb{R}$. This formula allows us to write \begin{equation}\label{eq_g_anticom_1} \left\{G_{i}^{(Q)},G_{j}^{(Q)}\right\}\ket{\psi}_{Q}=2e^{\mathbb{i}(\phi_{i}+\phi_{j})}\ket{\psi}_{Q}, \end{equation} where $\{\cdot,\cdot\}$ is an anticommutator. On the other hand, from the definition of the Pauli group $\mathbbm{P}_{N,d}$, $G_{i}$ is a tensor product of matrices $\omega^{r}X^{n}Z^{m}$ for $r,n,m\in\{0,\dots,d-1\}$, which implies \begin{eqnarray}\label{eq_g_anticom_2} \left\{G_{i}^{(Q)},G_{j}^{(Q)}\right\}\ket{\psi}_{Q}&=&\left(1+\omega^{\tau_{i,j}^{(Q)}}\right)G_{j}^{(Q)}G_{i}^{(Q)}\ket{\psi}_{Q}\nonumber\\&=&\left(1+\omega^{\tau_{i,j}^{(Q)}}\right)\mathrm{e}^{\mathbb{i}(\phi_i+\phi_j)}\ket{\psi}_{Q},\nonumber\\ \end{eqnarray} where the value of $\tau_{i,j}^{(Q)}$ depends on the exact form of $G_{i}^{(Q)}$ and $G_{j}^{(Q)}$. After comparing Eqs. (\ref{eq_g_anticom_1}) and (\ref{eq_g_anticom_2}), one arrives at \begin{equation} 1+\omega^{\tau_{i,j}^{(Q)}}=2, \end{equation} which is fulfilled only if $\tau_{i,j}^{(Q)}=0$. This is to say that for the bipartition $Q|\overline{Q}$ and every $i,j\in\{1,\dots,k\}$ we have \begin{equation} \left[G_{i}^{(Q)},G_{j}^{(Q)}\right]=0, \end{equation} which contradicts the assumption. \end{proof} This theorem provides an alternative technique for determining whether a given subspace originating from the qudit stabilizer formalism is entangled; it requires far less effort than checking the same property directly on the explicit form of the stabilizer subspace, i.e., for every pure state belonging to it.
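As an illustration of how lightweight this check is, the commutation exponent $\tau_{i,j}^{(Q)}$ can be read off site by site from the $X$ and $Z$ exponents of the generators. The following Python sketch does so; the exponent-vector encoding, the helper names, and the single-site convention $ZX=\omega XZ$ are our own assumptions, not taken from the text.

```python
def tau_Q(gi, gj, Q, d):
    # A generator is encoded (up to phase) as (x, z): G = tensor_n X^{x[n]} Z^{z[n]}.
    # With Z X = w X Z on each site, the restrictions to the sites in Q satisfy
    # G_i^{(Q)} G_j^{(Q)} = w^{tau} G_j^{(Q)} G_i^{(Q)} with tau computed below.
    (xi, zi), (xj, zj) = gi, gj
    return sum(zi[n] * xj[n] - zj[n] * xi[n] for n in Q) % d

def entangled_across(gens, Q, d):
    # Theorem criterion: V is entangled across Q|Q^c iff some pair of
    # generators fails to commute when restricted to Q.
    return any(tau_Q(gi, gj, Q, d) != 0
               for i, gi in enumerate(gens) for gj in gens[i + 1:])

d = 3
XXX = ([1, 1, 1], [0, 0, 0])   # X tensor X tensor X
ZZZ = ([0, 0, 0], [1, 1, 1])   # Z tensor Z tensor Z

# The GHZ-like qutrit stabilizer <XXX, ZZZ> is entangled across 1|23:
print(entangled_across([XXX, ZZZ], [0], d))   # True
# A single generator yields no non-commuting pair, so nothing is detected
# (its stabilizer subspace indeed contains product states):
print(entangled_across([XXX], [0], d))        # False
```

The sign convention inside `tau_Q` is immaterial for the criterion, since only whether $\tau_{i,j}^{(Q)}\neq 0$ matters.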
Moreover, by applying Theorem \ref{thm_sep} to all nontrivial bipartitions we arrive at the following corollary. \begin{cor}\label{cor_ge} Given a stabilizer $\mathbb{S}=\langle G_{1},\dots, G_{k}\rangle$ of an arbitrary local dimension, the corresponding stabilizer subspace $V\neq\{0\}$ is genuinely entangled iff for each nontrivial bipartition $Q|\overline{Q}$ there exists a pair $i,j\in \{1,\dots,k\}$ for which the following holds true: \begin{equation} \left[G_{i}^{(Q)},G_{j}^{(Q)}\right]\neq 0. \end{equation} \end{cor} Let us notice that the above fact is a direct generalisation of Theorem 1 from Ref. \cite{Makuta_2021} and it allows us to check whether a stabilizer subspace is genuinely entangled just by examining the generators of the corresponding stabilizer. Having the above description of genuine entanglement within the qudit stabilizer formalism at hand, we now aim to find a genuinely entangled stabilizer subspace of the largest possible dimension. Luckily, we can once again refer to \cite{Makuta_2021}, in which the maximal dimension of a genuinely entangled stabilizer subspace for the case of $d=2$ was found. This was achieved via a certain novel vector formalism, and so our first step in finding the aforementioned largest subspace is to generalise this formalism to arbitrary $d$. In the remainder of this section we assume that $d$ is a prime number. Let us consider the set $\mathbb{Z}_{d}=\{0,\dots,d-1\}$ equipped with addition and multiplication modulo $d$. We denote by $F_{N,d}=\mathbb{Z}_{d}^{N}$ a vector space over $\mathbb{Z}_{d}$, i.e., every $f\in F_{N,d}$ is an $N$-dimensional vector whose entries are from $\{0,\dots,d-1\}$. We can take advantage of this vector space to find a representation of a stabilizer $\mathbb{S}=\langle G_{1},\dots,G_{k}\rangle$ that gives us a clearer view of the entanglement of the subspace $V$.
Let us consider a vector \begin{equation}\label{eq_vij_def} v_{i,j}=\sum_{n=1}^{N}\tau_{i,j}^{(n)}e_{n}, \end{equation} where $e_{n}$ is the unit vector that has entry $1$ at the $n$'th site and $0$ elsewhere, and $\tau_{i,j}^{(n)}\in\{0,\dots,d-1\}$ is defined as follows \begin{equation}\label{eq_g_com} G_{i}^{(n)}G_{j}^{(n)}=\omega^{\tau_{i,j}^{(n)}}G_{j}^{(n)}G_{i}^{(n)}. \end{equation} By the definition of a stabilizer, all generators $G_{1},\dots, G_{k}$ have to mutually commute, which implies that for all $i$, $j$ we have \begin{equation}\label{eq_vij_com} \sum_{n=1}^{N}v_{i,j}^{(n)}=0, \end{equation} where $v_{i,j}^{(n)}$ is the $n$'th entry of $v_{i,j}$. Let us denote by $K(\mathbb{S})$ the subspace of $F_{N,d}$ that is spanned by all the vectors $v_{i,j}$ (\ref{eq_vij_def}) corresponding to the stabilizer $\mathbb{S}$. It is not difficult to see that for any stabilizer $\mathbb{S}$, $K(\mathbb{S})$ is a proper subspace of $F_{N,d}$: there exist vectors in $F_{N,d}$ that do not satisfy the condition (\ref{eq_vij_com}), and, at the same time, any linear combination of vectors fulfilling (\ref{eq_vij_com}) fulfils it too. In fact, the condition (\ref{eq_vij_com}) implies that the maximal dimension of $K(\mathbb{S})$ is $N-1$. To make full use of the above formalism we need to reformulate Corollary \ref{cor_ge} in terms of the vectors $v_{i,j}$. To this end, we need to define a representation of a bipartition of the set $\{1,\dots,N\}$, and so let us consider a vector $\phi\in F_{N,2}$ (notice that this is $F_{N,2}$ and not $F_{N,d}$). We call $\phi$ a representation of a bipartition $Q|\overline{Q}$ if the following holds true \begin{equation} \phi^{(n)}=1 \Longleftrightarrow n\in Q. \end{equation} Notably, this is not an isomorphism, since every $Q|\overline{Q}$ has two equivalent representations: one where $\phi^{(n)}=1$ for $n\in Q$ and the other where $\phi^{(n)}=1$ for $n\in \overline{Q}$.
This representation also includes the trivial bipartitions, and since we usually want to exclude them, let us denote the representations of the trivial bipartitions as \begin{equation} \phi_{T_{0}}=\sum_{n=1}^{N}0 e_{n},\qquad \phi_{T_{1}}=\sum_{n=1}^{N} e_{n}, \end{equation} where $T_0$ and $T_1$ correspond to the cases $Q=\emptyset$ and $\overline{Q}=\emptyset$, respectively. The last ingredient needed to rephrase Corollary \ref{cor_ge} in this vector formalism is a function $h(\cdot,\cdot):$ $F_{N,d}\times F_{N,2}\rightarrow \mathbb{Z}_{d}$: \begin{equation}\label{eq_h_def} h(v,\phi)=\sum_{n=1}^{N}v^{(n)}\phi^{(n)}. \end{equation} This function resembles a scalar product in the sense that it is calculated similarly; however, $h(\cdot,\cdot)$ does not meet all the conditions that a scalar product has to obey. We are finally ready to reformulate Corollary \ref{cor_ge} in terms of vectors from $K(\mathbb{S})$. \begin{lem}\label{lem_ge} Let the local dimension $d$ be a prime number. Given a stabilizer $\mathbb{S}=\langle G_{1},\dots, G_{k}\rangle$, the corresponding stabilizer subspace $V\neq\{0\}$ is genuinely entangled iff for every $\phi\in F_{N,2}$ such that $\phi\neq \phi_{T_{0}},\phi_{T_{1}}$ there exists $v_{i,j}\in K(\mathbb{S})$ for $i\neq j$ such that \begin{equation} h(v_{i,j},\phi)\neq 0. \end{equation} \end{lem} The proof is very similar to the proof of Theorem \ref{thm_sep} and so we do not present it here. However, for completeness we provide it in Appendix \ref{app_lem}. The above lemma gives us a clearer picture of the relations that the generators $G_{i}$ have to fulfil in order for a stabilizer subspace to be genuinely entangled. Notably, in Ref. \cite{Makuta_2021} this lemma serves only as a stepping stone to the proof of the following theorem. \begin{thm}[\textbf{for} $\boldsymbol{d=2}$]\label{thm_d=2} A subspace $V$ stabilized by $\mathbb{S}$ is genuinely entangled iff $\dim K(\mathbb{S})=N-1$.
\end{thm} This theorem provides a more convenient criterion than Corollary \ref{cor_ge} for determining if a stabilizer subspace is genuinely entangled. In fact, checking whether a stabilizer subspace generated by a stabilizer $\mathbb{S}$ is genuinely entangled boils down to computing the dimension of the corresponding subspace $K(\mathbb{S})$, which can easily be implemented in standard mathematical packages. However, the above theorem does not directly generalize to the case of arbitrary prime $d>2$. More precisely, while the "$\Leftarrow$" implication remains true, the other one does not hold anymore, because there exist genuinely entangled subspaces for which the dimension of the corresponding subspaces $K(\mathbb{S})$ is lower than $N-1$. In what follows we provide an example of such a subspace. We also prove a sufficient condition for a stabilizer subspace to be genuinely entangled for any prime $d$, generalizing the "if part" of Theorem \ref{thm_d=2} to stabilizer subspaces of arbitrary prime local dimension. To demonstrate that Theorem \ref{thm_d=2} cannot be directly generalized to an arbitrary $d$, let us consider the following stabilizers for $d=3$ and $N=3$: \begin{equation} G_{1}=X\otimes X\otimes X, \quad G_{2}=Z\otimes Z\otimes Z, \end{equation} \begin{equation} G'_{1}=X\otimes Z\otimes \mathbb{1},\quad G'_{2}=Z\otimes X\otimes \mathbb{1}. \end{equation} It is easy to see that the corresponding stabilizers $\mathbb{S}=\langle G_{1},G_{2}\rangle$ and $\mathbb{S}'=\langle G'_{1}, G'_{2}\rangle$ stabilise three-dimensional subspaces in $\mathbb{C}_3^{\otimes 3}$, which we denote by $V$ and $V'$, respectively. Using Corollary \ref{cor_ge} we can show that $V$ is genuinely entangled whereas $V'$ is not. At the same time, $\dim K(\mathbb{S})=\dim K(\mathbb{S}')=1$.
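The claims above are easy to verify mechanically. The following Python sketch (our own encoding: each generator is represented, up to phase, by its $X$ and $Z$ exponent vectors over $\mathbb{Z}_d$, with the single-site convention $ZX=\omega XZ$; all helper names are ours) checks Corollary \ref{cor_ge} over all nontrivial bipartitions and computes $\dim K(\mathbb{S})$ by Gaussian elimination over $\mathbb{Z}_3$.

```python
from itertools import combinations

def tau_Q(gi, gj, Q, d):
    # Commutation exponent of the restrictions of gi, gj to the sites in Q:
    # G_i^{(Q)} G_j^{(Q)} = w^{tau} G_j^{(Q)} G_i^{(Q)}, with Z X = w X Z per site.
    (xi, zi), (xj, zj) = gi, gj
    return sum(zi[n] * xj[n] - zj[n] * xi[n] for n in Q) % d

def genuinely_entangled(gens, N, d):
    # Corollary criterion.  Q and its complement give the same test (the full
    # generators commute, so the exponents on Q and on Q^c sum to 0 mod d),
    # hence scanning |Q| <= N//2 suffices.
    return all(any(tau_Q(gi, gj, Q, d)
                   for i, gi in enumerate(gens) for gj in gens[i + 1:])
               for size in range(1, N // 2 + 1)
               for Q in combinations(range(N), size))

def rank_mod_p(rows, p):
    # Rank over the field Z_p (p prime), by Gaussian elimination.
    rows, rank, col = [r[:] for r in rows], 0, 0
    while rank < len(rows) and col < len(rows[0]):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)          # inverse via Fermat
        rows[rank] = [inv * a % p for a in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

def dim_K(gens, N, d):
    # dim K(S): rank of the span of the vectors v_{i,j}, whose n'th entry is
    # the per-site commutation exponent tau_{i,j}^{(n)}.
    vs = [[tau_Q(gi, gj, [n], d) for n in range(N)]
          for i, gi in enumerate(gens) for gj in gens[i + 1:]]
    return rank_mod_p(vs, d)

d, N = 3, 3
S  = [([1, 1, 1], [0, 0, 0]),   # G1  = X tensor X tensor X
      ([0, 0, 0], [1, 1, 1])]   # G2  = Z tensor Z tensor Z
Sp = [([1, 0, 0], [0, 1, 0]),   # G1' = X tensor Z tensor 1
      ([0, 1, 0], [1, 0, 0])]   # G2' = Z tensor X tensor 1
print(genuinely_entangled(S, N, d), genuinely_entangled(Sp, N, d))  # True False
print(dim_K(S, N, d), dim_K(Sp, N, d))                              # 1 1
```

The run reproduces the comparison from the text: $\mathbb{S}'$ fails already at the bipartition $\{1,2\}|\{3\}$ (both primed generators act trivially on the third site), while both stabilizers give a one-dimensional $K(\mathbb{S})$.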
This makes it clear that $\dim K(\mathbb{S})$ alone does not determine in general whether $V$ is genuinely entangled, as it does for $d=2$, and hence that no iff condition for genuine entanglement of a stabilizer subspace can be formulated based solely on the dimension of $K(\mathbb{S})$. Nevertheless, we believe that the latter can still be somewhat useful in assessing genuine entanglement of stabilizer subspaces. Actually, we conjecture that the following necessary condition holds true. \begin{con}\label{con_ge} Given a stabilizer $\mathbb{S}=\langle G_{1},\dots,G_{k}\rangle$, if the local dimension $d$ is a prime number and the corresponding stabilizer subspace $V$ is genuinely entangled, then \begin{equation}\label{eq_dim_ks} \dim K(\mathbb{S}) \geqslant \left\lceil \frac{N-1}{d-1} \right\rceil. \end{equation} \end{con} For the particular case of $k=2$ and any $N$ and $d$ the above statement can actually be rigorously proven. Precisely, the following theorem holds true. \begin{thm}\label{Raimat} Consider a stabilizer $\mathbb{S}=\langle G_1,G_2\rangle$. If the local dimension $d$ is a prime number and the corresponding subspace $V$ generated by $\mathbb{S}$ is genuinely entangled, then \begin{equation}\label{eq_dimks} \dim K(\mathbb{S})\geqslant \left\lceil \frac{N-1}{d-1} \right\rceil. \end{equation} \end{thm} \begin{proof} The proof is quite technical and therefore it is deferred to Appendix \ref{ap_con}.
\end{proof} Furthermore, in Appendix \ref{ap_gmax} we construct a stabilizer for which $V$ is genuinely entangled and $\dim K(\mathbb{S})=\lceil (N-1)/(d-1)\rceil$; we denote this stabilizer by $\mathbb{S}_{\max}$, where the subscript '$\max$' is added for reasons that will become clear later. This implies that if the inequality (\ref{eq_dim_ks}) turns out to be false in general, the true function $p(N,d)$ that bounds $\dim K(\mathbb{S})$ from below, that is, one for which $\dim K(\mathbb{S})\geqslant p(N,d)$ for any genuinely entangled subspace, must satisfy $p(N,d)\leqslant \lceil (N-1)/(d-1)\rceil$ for any $N$ and $d$. It is worth pointing out that the vector formalism introduced above turned out to be particularly useful in constructing the stabilizer $\mathbb{S}_{\max}$ presented in Appendix \ref{ap_gmax}. Precisely, the generators of this stabilizer are chosen so that the corresponding vectors $v_{i,j}$ give us the following basis in $K(\mathbb{S})$: \begin{equation}\label{eq_basis} u_{i}=\left\{ \begin{array}{ll} \displaystyle\sum_{j=(i-1)(d-1)+1}^{i(d-1)+1}e_{j}& \textrm{ for\;} i < \displaystyle\left\lceil \frac{N-1}{d-1} \right\rceil, \\[5ex] \displaystyle\sum_{j=(i-1)(d-1)+1}^{N-1}e_{j}+ a e_{N}& \textrm{ for\;}i = \displaystyle\left\lceil \frac{N-1}{d-1} \right\rceil, \end{array} \right. \end{equation} where $a=\lceil (N-1)/(d-1) \rceil(d-1)-(N-1)+1$. For example, for $d=3$ and $N=5$ our construction gives \begin{equation} u_{1}=(1,1,1,0,0),\qquad u_{2}=(0,0,1,1,1). \end{equation} To complete our characterization of genuine entanglement within the stabilizer formalism, let us finally demonstrate that the '$\Leftarrow$' implication of Theorem \ref{thm_d=2} generalizes to any prime $d$. To this end, let us first prove the following lemma. \begin{lem}\label{lem_dim} Assume $d$ to be prime and consider a stabilizer for which $\dim K(\mathbb{S})=N-1$.
Then, a vector $v\in F_{N,d}$ fulfils \begin{equation}\label{eq_sum_v_0} \sum_{i=1}^{N}v^{(i)}=0, \end{equation} if and only if $v\in K(\mathbb{S})$. \end{lem} The proof can be found in Appendix \ref{app_lem}. With this lemma at hand we can prove the aforementioned implication. \begin{thm} Given a prime $d$ and a stabilizer $\mathbb{S}=\langle G_{1},\dots, G_{k}\rangle$, if $\dim K(\mathbb{S})=N-1$ then the corresponding stabilizer subspace $V$ is genuinely entangled. \end{thm} \begin{proof} To prove that the stabilizer subspace is genuinely entangled we use the criterion from Lemma \ref{lem_ge}. Let us consider a representation $\phi$ of a nontrivial bipartition $Q|\overline{Q}$. Since the bipartition is nontrivial, there exist $n_{0}\in \overline{Q}$ and $n_{1}\in Q$. Consider then the vector $v=(d-1)e_{n_{0}}+e_{n_{1}}$, for which $h(v,\phi)=1\neq 0$ because $\phi^{(n_{1})}=1$ and $\phi^{(n_{0})}=0$. Moreover, this vector clearly satisfies the condition (\ref{eq_sum_v_0}), which together with the fact that $\dim K(\mathbb{S})=N-1$ allows one to deduce \textit{via} Lemma \ref{lem_dim} that $v\in K(\mathbb{S})$. Now, such a vector can be found for every nontrivial $\phi$ and every such vector is in $K(\mathbb{S})$, therefore by virtue of Lemma \ref{lem_ge} the stabilizer subspace $V$ is genuinely entangled. \end{proof} Let us conclude by summarizing the main statements of this section and picturing how whether $V$ is genuinely entangled depends on the dimension of $K(\mathbb{S})$.
Assuming that $d$ is prime, we can divide the possible values of $\dim K(\mathbb{S})$ into three regions: \begin{itemize} \item[(i)] for $\dim K(\mathbb{S})=N-1$ the corresponding stabilizer subspace is genuinely entangled, \item[(ii)] for $\dim K(\mathbb{S})<N-1$ but larger than or equal to some number $p(N,d)$, which we suspect to be $\lceil (N-1)/(d-1)\rceil$, we are unable to determine whether $V$ is genuinely entangled or not just by looking at the dimension of $K(\mathbb{S})$, \item[(iii)] for $\dim K(\mathbb{S})$ smaller than $p(N,d)$, $V$ cannot be genuinely entangled. However, since we were not able to prove Conjecture \ref{con_ge}, the actual value of $p(N,d)$ remains unknown. \end{itemize} \subsection{The maximal dimension of genuinely entangled stabilizer subspaces} Interestingly, the stabilizer $\mathbb{S}_{\max}$ introduced in Appendix \ref{ap_gmax} and discussed above stabilises a genuinely entangled subspace in $\mathbb{C}_d^{\otimes N}$ whose dimension seems to be the maximal achievable within the stabilizer formalism for prime $d$. While the maximal dimension of a genuinely entangled stabilizer subspace for any $N$ and $d$ is unknown, we can speculate on it based on Conjecture \ref{con_ge}. Indeed, we can formulate another conjecture towards this goal. \begin{con}\label{con_dim} If $d$ is prime, then the maximal dimension of a genuinely entangled stabilizer subspace is given by $d^{N-k_{\min}(N,d)}$, where \begin{equation}\label{eq_k_min} k_{\min}(N,d)=\left\lceil \frac{1+\sqrt{1+8\lceil (N-1)/(d-1)\rceil}}{2} \right\rceil. \end{equation} \end{con} Let us now provide a proof that Conjecture \ref{con_ge} implies Conjecture \ref{con_dim}. \begin{proof}Assume that Conjecture \ref{con_ge} is true. We first use the fact, proven in Ref. \cite{GHEORGHIU2014505}, that for a given stabilizer $\mathbb{S}=\langle G_{1},\dots,G_{k}\rangle$, the dimension of $V$ equals $d^{N-k}$ provided that $d$ is prime.
Let us then follow the argumentation of the proof of Theorem 3 in Ref. \cite{Makuta_2021}. Namely, $k_{\min}(N,d)$ in Eq. (\ref{eq_k_min}) is the smallest positive integer that fulfils the following inequality \begin{equation}\label{eq_n-1/d-1} \frac{1}{2}k(k-1)\geqslant \left\lceil \frac{N-1}{d-1} \right\rceil, \end{equation} which relates the lower bound on the dimension of $K(\mathbb{S})$ in (\ref{eq_dim_ks}) to the number of vectors $v_{i,j}$ corresponding to the stabilizer $\mathbb{S}=\langle G_{1},\dots,G_{k}\rangle$. Indeed, for a stabilizer subspace $V$ to be genuinely entangled, that number of vectors must be at least the minimal dimension of $K(\mathbb{S})$ in Conjecture \ref{con_ge}. \end{proof} Since this conjecture is a direct consequence of Conjecture \ref{con_ge}, it follows that, provided the latter holds true, the stabilizer constructed in Appendix \ref{ap_gmax} gives rise to a genuinely entangled subspace of the maximal dimension postulated in Conjecture \ref{con_dim}. Thus, the corresponding subspace $V_{\max}$ (see Appendix \ref{ap_gmax}) is the one that we were looking for. Let us also notice that if the true function $p(N,d)$ that bounds the dimension of $K(\mathbb{S})$ from below in (\ref{eq_dim_ks}) turns out to be different than $\lceil (N-1)/(d-1)\rceil$, the reasoning presented above remains valid; one only needs to replace the inequality (\ref{eq_n-1/d-1}) by $k(k-1)/2\geqslant p(N,d)$ and modify accordingly $k_{\min}(N,d)$ in Eq. (\ref{eq_k_min}). \section{Fully NPT subspaces and genuine entanglement} \label{NPT} Our aim in the last section of the paper is to explore the relation between genuine multipartite entanglement in the stabilizer formalism and the entanglement criterion based on partial transposition. We show that partial transposition can be used to formulate an iff criterion for stabilizer subspaces to be genuinely entangled.
Precisely, a mixed state supported on a stabilizer subspace is genuinely entangled if and only if all its partial transpositions are non-positive. This implies the quite unexpected fact that genuinely entangled stabilizer subspaces, independently of how large they are, do not support mixed states whose partial transpositions are all positive. In order to achieve this goal we exploit the very simple approach used in Ref. \cite{PhysRevA.61.062102} to prove that no bipartite PPT entangled states violate the Clauser-Horne-Shimony-Holt Bell inequality. Before stating our results let us recall that we say that a stabilizer subspace $V$ is entangled with respect to a bipartition $Q|\overline{Q}$ if every pure state belonging to it (and thus every density matrix supported on it) is entangled across $Q|\overline{Q}$. We then call $V$ a non-positive partial transpose (NPT) subspace with respect to $Q|\overline{Q}$ if $\rho^{T_Q}\ngeq 0$ for every mixed state $\rho$ supported on it (i.e., such that $\mathrm{supp}(\rho)\subseteq V$). \begin{thm}\label{thm_npt} Consider a stabilizer $\mathbb{S}=\langle G_{1},\dots, G_{k}\rangle$ of an arbitrary local dimension and a bipartition $Q|\overline{Q}$. The stabilizer subspace $V$ of $\mathbb{S}$ is entangled with respect to $Q|\overline{Q}$ iff it is NPT with respect to that bipartition. \end{thm} \begin{proof} The '$\Leftarrow$' implication simply follows from the well-known fact that non-positive partial transpose is a sufficient condition for entanglement \cite{PhysRevLett.77.1413}, and so we only need to prove the "$\Rightarrow$" implication. Let us assume that $V$ is entangled with respect to the bipartition $Q|\overline{Q}$, but it is not NPT. This implies that there exists a state $\rho$, where $\operatorname{supp}(\rho)\subset V$, such that \begin{equation}\label{PPTQ} \rho^{T_{Q}}\geqslant 0.
\end{equation} The fact that $V$ is entangled across $Q|\overline{Q}$ implies, by virtue of Theorem \ref{thm_sep}, that there exists a pair of generators of $\mathbb{S}$, $G_{i}$ and $G_{j}$, such that: \begin{equation}\label{eq_gq} G_{i}^{(Q)} G_{j}^{(Q)}=\omega^{\tau_{i,j}^{(Q)}}G_{j}^{(Q)}G_{i}^{(Q)}, \end{equation} where $\tau_{i,j}^{(Q)}\neq 0$. Let us introduce an operator $\mathcal{B}$ as the sum of these two generators of $\mathbb{S}$ and their Hermitian conjugates: \begin{equation}\label{eq_B_def} \mathcal{B}= G_{i}+G_{i}^{\dagger}+G_{j}+G_{j}^{\dagger}. \end{equation} Now, following the approach of Ref. \cite{PhysRevA.61.062102}, we derive an upper bound on the expectation value of the operator $\mathcal{B}$ for any state that obeys (\ref{PPTQ}): \begin{eqnarray}\label{eq_B_bound} [ \operatorname{tr}(\mathcal{B}\rho)]^{2}&=&\frac{1}{2}[\operatorname{tr}(\mathcal{B}\rho)]^{2}+\frac{1}{2}\left[\operatorname{tr}(\mathcal{B}^{T_{Q}}\rho^{T_{Q}})\right]^{2}\nonumber\\ &\leqslant& \frac{1}{2}\operatorname{tr}(\mathcal{B}^{2}\rho) +\frac{1}{2}\operatorname{tr}(\mathcal{B}_{Q}^2\rho^{T_Q})\nonumber\\ &=&\frac{1}{2}\operatorname{tr}\left\{\left[\mathcal{B}^{2}+\left(\mathcal{B}_{Q}^2\right)^{T_Q}\right]\rho\right\}, \end{eqnarray} where $\mathcal{B}_{Q}=\mathcal{B}^{T_{Q}}$ and to obtain the first and the third equalities we have used the fact that under the trace one can apply partial transposition to both operators. Then, to obtain the inequality we exploited the fact that the variance of a quantum observable is non-negative, together with the fact that $\rho^{T_{Q}}\geqslant 0$.
In order to determine the right-hand side of (\ref{eq_B_bound}) we use the explicit forms of the operators $\mathcal{B}^2$ and $\left(\mathcal{B}^2_{Q}\right)^{T_Q}$, \begin{eqnarray} \mathcal{B}^{2}&&=G_{i}^{2}+G_{i}^{\dagger 2}+G_{j}^{2}+G_{j}^{\dagger 2}+4\mathbb{1}+2g_{i,j}, \end{eqnarray} \begin{eqnarray} \left(\mathcal{B}^2_{Q}\right)^{T_Q}&&= G_{i}^{2}+ G_{i}^{\dagger 2}+G_{j}^{2}+G_{j}^{\dagger 2}+4\mathbb{1}+2\cos \left(\phi\right)g_{i,j},\nonumber\\ \end{eqnarray} where $\phi=2\pi\tau_{i,j}^{(Q)}/d$ and \begin{equation} g_{i,j}=G_{i}G_{j}+G_{i}G_{j}^{\dagger}+G_{i}^{\dagger}G_{j}+G_{i}^{\dagger}G_{j}^{\dagger}, \end{equation} and so we have: \begin{eqnarray} [\operatorname{tr}(\mathcal{B}\rho)]^{2}\leqslant&& \operatorname{tr}\Big[\rho\Big( G_{i}^{2}+G_{i}^{\dagger 2}+G_{j}^{2}+G_{j}^{\dagger 2}\nonumber\\ &&\hspace{1cm}+4\mathbb{1}+\left(1+\cos(\phi)\right)g_{i,j} \Big)\Big]. \end{eqnarray} Then, taking into account the fact that \begin{equation}\label{Gcond} G_i\rho=\rho G_i=\rho \end{equation} holds true for any $i$ (since $\operatorname{supp}(\rho)\subset V$), the above simplifies to \begin{eqnarray}\label{contradiction} \operatorname{tr}(\mathcal{B}\rho)&\leqslant& 2\sqrt{3+\cos(\phi)},\nonumber\\ &<&4, \end{eqnarray} where the last inequality is a consequence of the fact that $\tau_{i,j}^{(Q)}\neq 0$, which means that $\cos (\phi)<1$. Thus, for any mixed state supported on $V$ and obeying (\ref{PPTQ}), the expectation value of $\mathcal{B}$ is strictly smaller than four. On the other hand, due to Eq. (\ref{Gcond}), every density matrix $\rho$ for which $\operatorname{supp}(\rho)\subset V$ satisfies \begin{equation} \operatorname{tr}(\mathcal{B}\rho)=4, \end{equation} which clearly contradicts (\ref{contradiction}), completing the proof. \end{proof} This theorem shows that in the qudit stabilizer formalism, subspaces in which all states are entangled with respect to some bipartition cannot support entangled states that are PPT with respect to that bipartition.
Importantly, based on it one can formulate the following iff criterion for a stabilizer subspace to be genuinely entangled. \begin{cor}\label{cor_npt} A stabilizer subspace $V$ is genuinely entangled iff it is fully NPT. \end{cor} This statement follows from the fact that if $V$ is genuinely entangled then it is entangled with respect to any bipartition. Then, application of Theorem \ref{thm_npt} to every bipartition implies that $V$ is fully NPT. On the other hand, if every pure state from $V$ is fully NPT, each such state must also be genuinely entangled, and thus $V$ is genuinely entangled itself. This result provides us with a convenient way of constructing multipartite subspaces that are fully NPT: by virtue of Corollary \ref{cor_npt} any genuinely entangled stabilizer subspace does the job. At the same time, it implies the quite surprising fact that genuinely entangled stabilizer subspaces do not support genuinely entangled states that are PPT with respect to any bipartition, which makes us believe that there are actually no PPT genuinely entangled states in the stabilizer formalism. Let us notice, on the other hand, that while Theorem \ref{thm_npt} provides us with a method for detecting entanglement in the stabilizer formalism, the latter role is better served by Theorem \ref{thm_sep}. \section{Conclusions} We have shown that all genuinely entangled stabilizer subspaces are fully NPT, or, equivalently, that such subspaces do not support PPT entangled states. To this end, we have introduced a criterion allowing one to judge in a finite number of steps whether a given stabilizer subspace in an $N$-qudit Hilbert space is genuinely entangled. We have also generalized the vector formalism introduced in Ref. \cite{Makuta_2021} for the qubit stabilizer formalism to any prime local dimension and used it to construct genuinely entangled stabilizer subspaces which we conjecture to have the maximal dimension achievable within the stabilizer formalism.
We have thus provided examples of genuinely entangled fully NPT subspaces, extending in a way the results of Ref. \cite{PhysRevA.87.064302} to the multipartite scenario. Our work leaves many open questions that might serve as inspiring starting points for further research. The most obvious one is whether the first conjecture stated in our paper can be proven rigorously. On the other hand, if this conjecture fails to be true, an alternative question would be whether one can find the true function $p(N,d)$ that should appear in Ineq. (\ref{eq_dimks}) and that can later be used to determine the maximal dimension of genuinely entangled stabilizer subspaces. Another related question is whether one can provide a general construction of fully NPT genuinely entangled subspaces of maximal dimension beyond the stabilizer formalism; it was shown in Ref. \cite{PhysRevA.87.064302} that completely entangled subspaces that are NPT can be as large as any completely entangled subspace. While we believe our construction to be of maximal dimension achievable within the stabilizer formalism, at least for prime $d$, it is most probably suboptimal when one takes into account arbitrary genuinely entangled subspaces. On the other hand, it would be highly interesting to follow the results of Sec. \ref{NPT} and further explore the relation between entanglement in the stabilizer formalism and the separability criterion based on partial transposition. While we prove that genuinely entangled stabilizer subspaces do not support PPT states, it is at the moment unclear whether there are no PPT genuinely entangled stabilizer states at all. In fact, there might exist stabilizer subspaces that are not genuinely entangled and yet support PPT genuinely entangled states.
It would thus be highly interesting to explore whether there are any PPT entangled states in this formalism or, alternatively, whether partial transposition is a necessary and sufficient condition for separability of stabilizer states. The last avenue for further research is to construct Bell inequalities that are maximally violated by genuinely entangled stabilizer subspaces and see whether they can be used for self-testing, generalizing the results of Refs. \cite{PhysRevLett.125.260507,Makuta_2021}. \section{Proofs of Lemmas \ref{lem_ge} and \ref{lem_dim}}\label{app_lem} \setcounter{lem}{0} For completeness we provide here the proofs of Lemmas \ref{lem_ge} and \ref{lem_dim}. \begin{lem} Let the local dimension $d$ be prime. Given a stabilizer $\mathbb{S}=\langle G_{1},\dots, G_{k}\rangle$, the corresponding stabilizer subspace $V\neq\{0\}$ is genuinely entangled iff for every $\phi\in F_{N,2}$ such that $\phi\neq \phi_{T_{0}},\phi_{T_{1}}$ there exists $v_{i,j}\in K(\mathbb{S})$ for $i\neq j$ such that \begin{equation} h(v_{i,j},\phi)\neq 0. \end{equation} \end{lem} \begin{proof} Corollary \ref{cor_ge} states that $V$ is genuinely entangled iff for every nontrivial bipartition $Q|\overline{Q}$ of the set $\{1,\dots,N\}$ there exist $i,j\in \{1,\dots,k\}$ such that \begin{equation} \left[G_{i}^{(Q)},G_{j}^{(Q)}\right]\neq 0. \end{equation} This can be equivalently expressed as \begin{equation}\label{eq_tau} \tau_{i,j}^{(Q)}\neq 0, \end{equation} where \begin{equation} \tau_{i,j}^{(Q)}=\sum_{n\in Q}\tau_{i,j}^{(n)} \end{equation} and $\tau_{i,j}^{(n)}$ is defined in Eq. (\ref{eq_g_com}). Given the representation $\phi$ of a bipartition $Q|\overline{Q}$, the above quantity can be rewritten as \begin{equation} \tau_{i,j}^{(Q)}=\sum_{n\in Q}\tau_{i,j}^{(n)}=\sum_{n=1}^{N}\tau_{i,j}^{(n)}\phi^{(n)}. \end{equation} By the definitions of $v_{i,j}$ [cf. Eq.
\eqref{eq_vij_def}] and of $h(\cdot,\cdot)$ (\ref{eq_h_def}), the above rewrites as \begin{equation} \tau_{i,j}^{(Q)}=\sum_{n=1}^{N}\tau_{i,j}^{(n)}\phi^{(n)}=h(v_{i,j},\phi). \end{equation} Consequently, from \eqref{eq_tau} it follows that $h(v_{i,j},\phi)\neq 0$ for all nontrivial $\phi$ and some $v_{i,j}\in K(\mathbb{S})$ iff $V$ is genuinely entangled, which ends the proof. \end{proof} \begin{lem} Assume $d$ to be prime and consider a stabilizer for which $\dim K(\mathbb{S})=N-1$. Then, a vector $v\in F_{N,d}$ fulfils \begin{equation} \sum_{i=1}^{N}v^{(i)}=0, \end{equation} if and only if $v\in K(\mathbb{S})$. \end{lem} \begin{proof} Since the implication "$\Leftarrow$" follows from the definition of $K(\mathbb{S})$, in order to complete the proof it is sufficient to show that the number of vectors in $F_{N,d}$ fulfilling \eqref{eq_sum_v_0} equals the number of vectors in $K(\mathbb{S})$. Let us start by calculating how many vectors in $F_{N,d}$ satisfy \eqref{eq_sum_v_0}. Clearly, for any such vector we have \begin{equation} v^{(N)}=-\sum_{n=1}^{N-1}v^{(n)}. \end{equation} Notice that for any choice of $v^{(n)}$ for all $n\in\{1,\dots, N-1\}$ we can find exactly one $v^{(N)}$ that fulfils this equation. This implies that there are $d^{N-1}$ vectors $v\in F_{N,d}$ fulfilling \eqref{eq_sum_v_0}. Next, let us calculate the number of vectors in $K(\mathbb{S})$. To this end let us consider the following expression \begin{equation} \sum_{i=1}^{N-1} a_{i}u_{i}= \sum_{i=1}^{N-1}b_{i}u_{i}, \end{equation} where $\{u_{i}\}_{i=1}^{N-1}$ is a basis in $K(\mathbb{S})$ and $a_{i},b_{i}\in \mathbb{Z}_{d}$; notice that by definition each element of this basis must also satisfy (\ref{eq_sum_v_0}). Since $d$ is prime, the above is true only if $a_{i}=b_{i}$ for all $i\in \{1,\dots,N-1\}$. There are $d^{N-1}$ unique choices of $\{a_{i}\}_{i=1}^{N-1}$, hence there are $d^{N-1}$ vectors in $K(\mathbb{S})$, which ends the proof.
\end{proof} \section{A proof of Theorem \ref{Raimat}}\label{ap_con} Here we provide a proof of Theorem \ref{Raimat} or, equivalently, of Conjecture \ref{con_ge} for $k=2$ and arbitrary $d$ and $N$. Should one attempt to prove Conjecture \ref{con_ge} in its most general form, we hope that the following proof will serve as a guide for how the general proof could be formulated. The main idea of this proof is to treat bipartition representations $\phi$ as solutions to an equation: \begin{equation} h(u,\phi)=\beta, \end{equation} where $u\in F_{N,d}$ and $\beta\in\{0,\dots,d-1\}$. The first step in our procedure is to calculate the number of solutions $\phi$ to this equation depending on $d$ and $N$. \begin{lem}\label{lem_2^(-(d-1))} Let $u\in F_{N,d}$ and let us assume that the equation \begin{equation}\label{eq_h_beta_simple} h(u,\phi)=\beta, \end{equation} has at least one solution $\phi\in F_{N,2}$. Then the number of $\phi$ that fulfil Eq. (\ref{eq_h_beta_simple}) is at least $2^{N-(d-1)}$. \end{lem} \begin{proof} Let $\sigma (N,\beta)$ denote the number of $\phi\in F_{N,2}$ that fulfil: \begin{equation}\label{eq_h_beta_2} \sum_{i=1}^{N}u^{(i)}\phi^{(i)} \mod d=\beta, \end{equation} where the upper index denotes the $i$th entry. Rather than finding a lower bound on $\sigma(N,\beta)$ for each $\beta\in\{0,\dots,d-1\}$ separately, we will bound the minimum of $\sigma(N,\beta)$ over all $\beta\in\{0,\dots,d-1\}$. To this end, let us denote by $\sigma(n,\beta)$ the number of $\phi\in F_{n,2}$ that fulfil: \begin{equation}\label{eq_h_beta_4} \sum_{i=1}^{n}u^{(i)}\phi^{(i)} \mod d=\beta, \end{equation} where $n\in \{1,\dots,N\}$. We have introduced the new variable $n$ because, in order to bound the minimum of $\sigma(N,\beta)$ over $\beta$, we want to express $\sigma(n+1,\beta)$ in terms of $\sigma(n,\beta')$, namely \begin{equation}\label{eq_sigma_sum} \sigma (n+1,\beta)=\sigma(n,\beta)+\sigma (n,\beta-u^{(n+1)}), \end{equation} however at the end we will set $n=N$.
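Before proceeding, the recursion (\ref{eq_sigma_sum}) can be checked numerically. The following Python sketch (our own illustration, for an arbitrarily chosen vector $u$; it is not part of the argument) counts $\sigma(n,\beta)$ by brute force and verifies the identity.

```python
from itertools import product

def sigma(u, n, beta, d):
    """sigma(n, beta): number of phi in F_{n,2} with sum_i u^(i) phi^(i) = beta (mod d)."""
    return sum(1 for phi in product((0, 1), repeat=n)
               if sum(ui * pi for ui, pi in zip(u, phi)) % d == beta)

d = 5
u = (1, 2, 3, 4, 1)  # an arbitrary example vector in F_{5,5}
for n in range(1, len(u)):
    for beta in range(d):
        # the recursion: sigma(n+1, beta) = sigma(n, beta) + sigma(n, beta - u^(n+1))
        assert sigma(u, n + 1, beta, d) == (sigma(u, n, beta, d)
                                            + sigma(u, n, (beta - u[n]) % d, d))
```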
To understand why the above equality is true let us examine (\ref{eq_h_beta_4}), but for $\sigma(n+1,\beta)$: \begin{eqnarray}\label{eq_h_betax2} \sum_{i=1}^{n+1}u^{(i)}\phi^{(i)} \mod d&=&\beta,\nonumber\\ \Downarrow \nonumber\\ \sum_{i=1}^{n}u^{(i)}\phi^{(i)} \mod d&=&\beta-u^{(n+1)}\phi^{(n+1)}. \end{eqnarray} As we can see, for $\phi^{(n+1)}=0$ we get an equation identical to (\ref{eq_h_beta_4}), and for $\phi^{(n+1)}=1$ we get an equation equivalent to (\ref{eq_h_beta_4}), but with $\beta$ replaced by $\beta-u^{(n+1)}$. Thus $\sigma(n+1,\beta)$ is the sum of the numbers of solutions for those two cases, which gives (\ref{eq_sigma_sum}). Let us denote by $\sigma_{\min}(n)$ the minimal nonzero $\sigma(n,\beta)$ over all $\beta \in\{0,\dots,d-1\}$. From (\ref{eq_sigma_sum}) it follows that either \begin{equation}\label{eq_sigma_n+1_1} \sigma_{\min}(n+1)= \sigma_{\min}(n) \end{equation} or \begin{equation}\label{eq_sigma_n+1_2} \sigma_{\min}(n+1)\geqslant 2\sigma_{\min}(n). \end{equation} If (\ref{eq_sigma_n+1_1}) is true, then by virtue of (\ref{eq_sigma_sum}) for some $\beta\in\{0,\dots,d-1\}$ for which $\sigma(n,\beta)=\sigma_{\min}(n)$ we have $\sigma (n,\beta-u^{(n+1)})=0$. Let us examine that case further. We can once again use Eq. (\ref{eq_sigma_sum}) to get \begin{equation} \sigma(n+1,\beta-u^{(n+1)})=\sigma(n,\beta-2u^{(n+1)}). \end{equation} Now, let us assume that $\sigma(n,\beta-k u^{(n+1)})=0$ for all positive integers $k\leqslant \kappa$, where $\kappa\in \mathbb{Z}_{+}$. Then from Eq. (\ref{eq_sigma_sum}) we have \begin{equation} \sigma(n+1,\beta-k u^{(n+1)})=\sigma(n,\beta-(k+1)u^{(n+1)}). \end{equation} Notice that our assumption $\sigma(n,\beta-k u^{(n+1)})=0$ gives us a bound on the value of $\kappa$, since \begin{equation} \sigma(n,\beta-d u^{(n+1)})=\sigma(n,\beta)=\sigma_{\min}(n)\neq 0, \end{equation} and so $\kappa \in \{1,\dots, d-1\}$. Importantly, in the case of $k=\kappa$ it follows from Eq.
(\ref{eq_sigma_sum}) that \begin{equation}\label{eq_sigma_0} \sigma(n,\beta -\kappa u^{(n+1)})=0 \end{equation} and \begin{equation}\label{eq_sigma_neq0} \sigma(n+1,\beta -\kappa u^{(n+1)})\neq 0. \end{equation} In other words, every time (\ref{eq_sigma_n+1_1}) is true, there exists at least one $\beta'\in\{0,\dots,d-1\}$ for which the equation \begin{equation}\label{eq_tau_beta_cond} \sum_{i=1}^{n'} u^{(i)}\phi^{(i)} \mod d = \beta' \end{equation} has no solution when $n'=n$ [as in (\ref{eq_sigma_0})] and has at least one solution when $n'=n+1$ [as in (\ref{eq_sigma_neq0})]. Finally, we are ready to set a bound on $\sigma_{\min}(n)$. In the case of $n=1$ there are exactly two values of $\beta$ for which Eq. (\ref{eq_h_beta_2}) has a solution, so there are $d-2$ values of $\beta$ with no solution. If we assume that $\sigma_{\min}(2)=\sigma_{\min}(1)=1$, then there are $d-3$ values of $\beta'$ for which (\ref{eq_h_beta_2}) has no solutions. Following this pattern we conclude that if $\sigma_{\min}(n+1)=\sigma_{\min}(n)=1$ for all $n\in\{1,\dots,d-2\}$, then for $n=d-1$ and for all $\beta\in\{0,\dots,d-1\}$ the following holds true \begin{equation} \sigma(d-1,\beta)\neq 0. \end{equation} From this we can infer that for $n>d-1$, $\sigma_{\min}(n)$ is bounded according to (\ref{eq_sigma_n+1_2}). Moreover, since in general the bound from (\ref{eq_sigma_n+1_2}) for the case $n'=n+1$ is a linear function of the bound for the case $n'=n$, the exact order in which the bounds (\ref{eq_sigma_n+1_1}) and (\ref{eq_sigma_n+1_2}) are applied does not matter; only the total number of times that each bound is applied matters. Therefore, the assumption $\sigma_{\min}(n+1)=\sigma_{\min}(n)=1$ for $n\in\{1,\dots,d-2\}$ yields a lower bound on $\sigma_{\min}(n)$: \begin{equation}\label{eq_sigma_bound} \sigma_{\min}(n)\geqslant \begin{cases} 1 \quad \textrm{for} \quad n\leqslant d-1,\\ 2^{n-(d-1)} \quad \textrm{for} \quad n> d-1.
\end{cases} \end{equation} and so we can simply write \begin{equation} \sigma_{\min}(N)\geqslant 2^{N-(d-1)}, \end{equation} which ends the proof. \end{proof} This leads us straight to Theorem \ref{Raimat}, which for completeness we restate here. \setcounter{thm}{2} \begin{thm}\label{thm_ge_ks} Consider a stabilizer $\mathbb{S}=\langle G_{1},G_{2}\rangle$. If the local dimension $d$ is prime and the corresponding stabilizer subspace $V$ is genuinely entangled, then $\dim K(\mathbb{S})\geqslant \lceil \frac{N-1}{d-1} \rceil$. \end{thm} \begin{proof} From the number of generators it directly follows that $\dim K(\mathbb{S})\leqslant 1$, but since we assume that $V$ is genuinely entangled we know that $v_{1,2}\neq 0$, and so we can write $\dim K(\mathbb{S})= 1$. We thus need to show that \begin{equation}\label{eq_1_geq} 1\geqslant \left\lceil \frac{N-1}{d-1} \right\rceil. \end{equation} By virtue of Lemma \ref{lem_ge}, if $V$ is genuinely entangled, then \begin{equation}\label{eq_h=0} h(v_{1,2},\phi)=0 \end{equation} is only true for the representations of trivial bipartitions $\phi\in\{\phi_{T_{0}},\phi_{T_{1}}\}$, i.e., the number of $\phi\in F_{N,2}$ that fulfil (\ref{eq_h=0}) equals $2$. Then, from Lemma \ref{lem_2^(-(d-1))} we have \begin{equation} 2\geqslant 2^{N-(d-1)}, \end{equation} from which it easily follows that \begin{equation} 1\geqslant \frac{N-1}{d-1}. \end{equation} If this inequality is true, then (\ref{eq_1_geq}) is as well, which ends the proof. \end{proof} Now that this theorem has been proven, we can discuss the difficulties of proving Conjecture \ref{con_ge} in its most general form. The issue lies in the proof of a version of Lemma \ref{lem_2^(-(d-1))} for an arbitrary $k$.
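For small parameters, the bound of Lemma \ref{lem_2^(-(d-1))} can also be confirmed exhaustively. The following Python sketch (our own check, not part of the argument) enumerates all $u\in F_{N,d}$ for $d=3$, $N=5$ and verifies that whenever the equation has a solution it has at least $2^{N-(d-1)}$ of them.

```python
from itertools import product

def count_solutions(u, beta, d):
    """Number of phi in F_{N,2} with sum_i u^(i) phi^(i) = beta (mod d)."""
    return sum(1 for phi in product((0, 1), repeat=len(u))
               if sum(ui * pi for ui, pi in zip(u, phi)) % d == beta)

d, N = 3, 5
bound = 2 ** (N - (d - 1))  # the lower bound claimed by the lemma
for u in product(range(d), repeat=N):
    for beta in range(d):
        c = count_solutions(u, beta, d)
        # if the equation has any solution, it has at least 2^(N-(d-1)) of them
        assert c == 0 or c >= bound
```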
The number of solutions in the most general case would be $2^{N-(d-1)\cdot \dim K(\mathbb{S})}$, but trying to prove this using the same arguments runs into problems when discussing the number of $\sigma(n,\beta)$ that are zero for $n=n'$ and nonzero for $n=n'+1$. For arbitrary $k$, the procedure from Lemma \ref{lem_2^(-(d-1))} would only give the number of $\phi$ to be at least $2^{N-(d^{\dim K(\mathbb{S})}-1)}$. This is a result of the fact that for higher $\dim K(\mathbb{S})$ a situation in which an increase in $n$ makes only one $\sigma(n,\beta)$ nonzero is very rare. For example, if increasing $n$ from 0 to $d-1$ resulted in the change from zero to nonzero for $d-1$ sigmas, then each increase after that would make at least $d$ sigmas nonzero (this situation occurs when on the first $d-1$ sites one vector $v_{i,j}$ has ones and the other vectors have zeros). To conclude, in order to prove Conjecture \ref{con_ge} using the same methods as we did, one would need to find a better way of estimating how many $\sigma(n,\beta)$ become nonzero with each increase in $n$. \section{Genuinely entangled stabilizer subspace of maximal dimension}\label{ap_gmax} In this section we give an example of a genuinely entangled stabilizer subspace whose dimension we claim is maximal for such subspaces (see Conjecture \ref{con_dim}). More specifically, we construct the generators of the stabilizer of the aforementioned subspace. Since the expressions for the generators are recursive equations depending on the number of parties $N$, we will use the notation $G_{i}(N)$ for the $i$th generator for $N$ parties. Similarly, we denote by $v_{i,j}(N)$ the vector defined as in (\ref{eq_vij_def}) for generators $G_{i}(N)$ and $G_{j}(N)$. Moreover, the number of generators $k$ will also depend on $N$; we will use $k$ and $k(N)$ interchangeably, depending on whether the dependence on $N$ is relevant. Let $k=k_{min}(N)$ be defined by Eq.
(\ref{eq_k_min}) and let us consider a set of generators $\{G_{i}(N)\}_{i=1}^{k}$. For $N=2$ we construct the generators as \begin{eqnarray}\label{eq_gmax_2_d} G_{1}(2)&&=X\otimes X^{d-1},\nonumber\\ G_{2}(2)&&=Z\otimes Z. \end{eqnarray} For arbitrary $N$ we give a recurrence relation for the generators and we distinguish two cases: if $N$ fulfils $k(N-1)=k(N)-1$, then \begin{equation}\label{eq_gmax_N=_d} G_{i}(N)=\begin{cases} G_{i}(N-1)\otimes \mathbb{1} &i\notin \{k-1,k\},\\ G_{i}(N-1)\otimes P_{k-1}^{d-1} & i=k-1,\\ \mathbb{1}^{\otimes(N-2)}\otimes P_{k}\otimes P_{k} &i=k, \end{cases} \end{equation} and otherwise \begin{equation}\label{eq_gmax_Nneq_d} G_{i}(N)=\begin{cases} G_{i}(N-1)\otimes \mathbb{1} &i\notin \{l,k\},\\ \tilde{G}_{i}(N-1)\otimes P_{k-1}^{d-1-m} &i=l,\\ G_{i}(N-1)\otimes P_{k} &i=k, \end{cases} \end{equation} where we define $\tilde{G}_{i}(N-1)$ as \begin{equation}\label{eq2_gmax_Nneq_d} \tilde{G}_{i}(N-1)=G_{i}(N-1)\left(\mathbb{1}^{\otimes (N-2)}\otimes P_{k-1}^{m+1}\right), \end{equation} parameter $m$ is equal to \begin{eqnarray}\label{m_prop} m&=&\sup(\mathcal{M}_{N,d}), \nonumber\\ \mathcal{M}_{N,d}&=&\Bigg\{n\in\mathbb{N}:\left\lceil\frac{N-1-n}{d-1}\right\rceil=\left\lceil\frac{N-1}{d-1}\right\rceil\Bigg\}, \end{eqnarray} matrix $P_{i}$ equals \begin{equation}\label{Pi} P_{i}=\left\{ \begin{array}{ll} X & \textrm{ for odd\;} i, \\[1ex] Z & \textrm{ for even\;}i, \end{array} \right. \end{equation} and \begin{equation}\label{eq_l_def} l(N)=\frac{k(k-1)}{2}+1-\left\lceil \frac{N-1}{d-1} \right\rceil. \end{equation} The above function $l(N)$ is defined on the set of $N$ for which $k(N)=\operatorname{const}$. Let $\{N_{\min},\dots,N_{\max}\}$ be a set of $N$ for which $k(N)=\operatorname{const}$, i.e., \begin{eqnarray} k(N_{\min}-1)&=k(N_{\min})-1,\nonumber\\ k(N_{\max}+1)&=k(N_{\max})+1. 
\end{eqnarray} From (\ref{eq_n-1/d-1}) we have \begin{equation} \left\lceil \frac{N_{\min}-2}{d-1}\right\rceil = \frac{1}{2}[k(N_{\min})-1] [k(N_{\min})-2], \end{equation} \begin{equation} \left\lceil \frac{N_{\max}-1}{d-1}\right\rceil = \frac{1}{2}k(N_{\min})\; [k(N_{\min})-1]. \end{equation} Substituting them into (\ref{eq_l_def}) we get the set $\{1,\dots,k-1\}$, which is the set of all $l(N)$ for $k=\operatorname{const}$. Let us note here that for $d=2$ this set is effectively $\{1,\dots,k-2\}$, because in that case the equation $l=k-1$ is equivalent to $k(N-1)=k(N)-1$, which is the condition for (\ref{eq_gmax_N=_d}). \begin{thm} The stabilizer $\mathbb{S}_{\max}$ generated by the generators defined in (\ref{eq_gmax_2_d}), (\ref{eq_gmax_N=_d}) and (\ref{eq_gmax_Nneq_d}), with $k=k_{\min}(N)$ defined in (\ref{eq_k_min}), stabilizes a genuinely entangled stabilizer subspace. Moreover, if $d$ is prime then the dimension of this subspace equals $d^{N-k_{\min}(N)}$. \end{thm} Before we start the proof of the above theorem, let us introduce a convenient convention: for even $k$, instead of calculating the vectors $v_{i,k}$ we will calculate \begin{equation} v'_{i,k}=(d-1)v_{i,k}. \end{equation} This makes it so that the vast majority of vector elements equal $0$ or $1$, which simplifies our calculations. Moreover, this does not pose any problems, since in this proof we only care about the general properties of $K(\mathbb{S}_{\max})$, which both $v_{i,k}$ and $v'_{i,k}$ are a part of, and not about the vectors in particular. In practice, it is as if we assumed that $k$ is always odd while calculating the elements of $v_{i,k}$. With that, we can move to the main part of the proof. \begin{proof} To complete the proof we first need to show that $\mathbb{S}_{\max}=\langle G_{1}(N),\dots,G_{k}(N)\rangle$ is a stabilizer, after which we need to prove that the stabilizer subspace is genuinely entangled.
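The base case (\ref{eq_gmax_2_d}) can be checked directly. The following Python sketch (our own illustration, assuming the standard clock-and-shift convention for the generalized Pauli matrices $X$ and $Z$) verifies numerically that $G_{1}(2)=X\otimes X^{d-1}$ and $G_{2}(2)=Z\otimes Z$ commute and have order $d$.

```python
import numpy as np

def clock_shift(d):
    """Generalized Pauli matrices on a qudit: shift X and clock Z, with ZX = w XZ."""
    w = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)  # X|j> = |j+1 mod d>
    Z = np.diag(w ** np.arange(d))     # Z|j> = w^j |j>
    return X, Z

d = 5
X, Z = clock_shift(d)
G1 = np.kron(X, np.linalg.matrix_power(X, d - 1))  # G1(2) = X (x) X^(d-1)
G2 = np.kron(Z, Z)                                 # G2(2) = Z (x) Z
# commutation: the phase picked up is w^(1 + (d-1)) = w^d = 1
assert np.allclose(G1 @ G2, G2 @ G1)
assert np.allclose(np.linalg.matrix_power(G1, d), np.eye(d * d))  # G1^d = 1
assert np.allclose(np.linalg.matrix_power(G2, d), np.eye(d * d))  # G2^d = 1
```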
The subgroup $\mathbb{S}$ is a stabilizer if $a\mathbb{1}\notin\mathbb{S}$ for $a\neq 1$ and if all the elements of $\mathbb{S}$ commute. This can be shown by proving the following conditions \begin{equation}\label{eq_stab_nont_1} \forall_{i\in\{1,\dots,k\}}\quad G_{i}(N)^{d}=\mathbb{1}, \end{equation} \begin{equation}\label{eq_stab_nont_2} \forall_{i,j\in\{1,\dots,k\}}\quad \left[G_{i}(N),G_{j}(N)\right]=0, \end{equation} \begin{equation}\label{eq_stab_nont_3} \forall_{a\neq 1}\;\forall{\alpha_{i}\in\{0,\dots,d-1\}}\quad \prod_{i=1}^{k}G_{i}^{\alpha_{i}}(N)\neq a \mathbb{1}. \end{equation} A proof of (\ref{eq_stab_nont_1}) is trivial, since by definition the generators $G_{i}(N)$ are a tensor product of matrices \eqref{eq_xz_def}. Next, proving (\ref{eq_stab_nont_3}) is also fairly simple: from Eq. (\ref{eq_gmax_2_d}) and Eq. (\ref{eq_gmax_N=_d}) it follows that $G^{(1)}_{i}(N)=X$ implies $i=1$ and similarly $G^{(2)}_{i}(N)=Z$ implies $i=2$. Moreover, from Eq. (\ref{eq_gmax_N=_d}) and Eq. (\ref{eq_gmax_Nneq_d}) we can conclude that for $n$ for which $k(n)=k(n+1)-1$, $G_{i}^{(n)}(N)=P_{k(n)}$ implies $i=k(n)$. Consequently, for every generator $G_{i}(N)$ we can find $n$ such that $G_{i}^{(n)}(N)\neq G_{j}^{(n)}(N)$ for $i\neq j$, which in turn implies that \begin{equation} \prod_{i=1}^{k}G_{i}^{\alpha_{i}}(N)\sim \mathbb{1}, \end{equation} iff for all $i\in \{1,\dots, k\}$, $\alpha_{i}=0$, but for that case $\prod_{i=1}^{k}G_{i}^{0}(N)=\mathbb{1}$, which proves (\ref{eq_stab_nont_3}). Proving Eq. (\ref{eq_stab_nont_2}) is more complex, but thankfully we can make use of the vectors $v_{i,j}(N)$ defined in Eq. (\ref{eq_vij_def}). In this formalism, the condition (\ref{eq_stab_nont_2}) translates to \begin{equation}\label{eq_vij_sum_d} \sum_{n=1}^{N}v_{i,j}^{(n)}=0 \mod d. \end{equation} Let us begin with $N=2$. From Eq. (\ref{eq_gmax_2_d}) it is clear that $v_{1,2}(2)=(1,d-1)$ and so (\ref{eq_vij_sum_d}) is fulfilled. 
The proof for an arbitrary $N$ will be done by induction, i.e., we assume that (\ref{eq_vij_sum_d}) is fulfilled for $N'=N-1$ and we show that it is then fulfilled for $N'=N$. For clarity, we will consider two separate cases: first (\ref{eq_gmax_N=_d}) and then (\ref{eq_gmax_Nneq_d}). From (\ref{eq_gmax_N=_d}) we can easily conclude that for $i,j\neq k$ we have \begin{equation}\label{eq_vij_N=_d} v_{i,j}(N)=v_{i,j}(N-1)\oplus (0). \end{equation} As for the vectors $v_{i,k}(N)$, from (\ref{eq_gmax_N=_d}) it is clear that \begin{eqnarray} v_{i,k}^{(N)}(N)&=&0 \qquad i\neq k-1,\nonumber\\ v_{i,k}^{(N)}(N)&=&d-1\qquad i=k-1,\nonumber\\ v_{i,k}^{(n)}(N)&=&0\qquad n\in\{1,\dots,N-2\}. \end{eqnarray} The only unknown elements are $v_{i,k}^{(N-1)}$; however, they can also be easily determined by analysing the generators for $N'=N-1$. There are two subcases: for $N=3$ we have to refer to (\ref{eq_gmax_2_d}) and for $N>3$ to (\ref{eq_gmax_Nneq_d}). However, according to both (\ref{eq_gmax_2_d}) and (\ref{eq_gmax_Nneq_d}) we have \begin{eqnarray} G_{i}^{(N-1)}(N-1)&\in& \{\mathbb{1},P_{k'-1}^{q}\}\quad i\in\{1,\dots,k'-1\},\nonumber\\ G_{k'}^{(N-1)}(N-1)&=&P_{k'}, \end{eqnarray} where $k'=k(N-1)=k-1$ and $q\in \mathbb{Z}_{+}$. This implies that \begin{eqnarray}\label{eq_vik_d} v_{i,k}(N)&=&0 \qquad i\in\{1,\dots,k-2\},\nonumber\\ v_{k-1,k}(N)&=&e_{N-1}+(d-1)e_{N}. \end{eqnarray} Clearly, by induction all vectors $v_{i,j}(N)$ fulfil (\ref{eq_vij_sum_d}). Next, let us consider the case (\ref{eq_gmax_Nneq_d}). It is easy to see that \begin{equation}\label{eq_vij_+0_d} v_{i,j}(N)=v_{i,j}(N-1)\oplus (0) \end{equation} is true for $i,j\neq l$. Moreover, the above is also true for $i\neq l,k$ and $j=l$, since from (\ref{eq_gmax_N=_d}) and (\ref{eq_gmax_Nneq_d}) it follows that for such $i$ we have \begin{equation} G_{i}^{(N-1)}(N-1)\in \left\{\mathbb{1},P_{k-1}^{q}\right\} \end{equation} for some $q\in \mathbb{Z}_{+}$.
This means that we have one vector left to consider, namely for $i=l$ and $j=k$ we have \begin{equation}\label{eq_vij_eN_d} v_{l,k}(N)=v_{l,k}(N-1)\oplus (0) +(m+1)e_{N-1} +(d-1-m)e_{N}. \end{equation} Clearly, if (\ref{eq_vij_sum_d}) is fulfilled for $N'=N-1$, then all $v_{i,j}(N)$ also fulfil (\ref{eq_vij_sum_d}). This proves (\ref{eq_stab_nont_2}) for all $N$. To reiterate, we proved that \eqref{eq_stab_nont_1}, \eqref{eq_stab_nont_2} and \eqref{eq_stab_nont_3} are fulfilled by $G_{1},\dots,G_{k}$, hence $\mathbb{S}_{\max}$ is a stabilizer. Moreover, since we have shown that $\prod_{i=1}^{k}G_{i}^{\alpha_{i}}(N)=\mathbb{1}$ is only true if all $\alpha_{i}=0$, the generators are independent, and so if $d$ is prime then the dimension of the stabilizer subspace equals $d^{N-k_{\min}(N)}$ \cite{GHEORGHIU2014505}. This leaves only the question of genuine entanglement of the stabilizer subspace $V$. For $N=2$, i.e., for the generators (\ref{eq_gmax_2_d}), it is easy to see from Corollary \ref{cor_ge} that $V$ is genuinely entangled, since the only nontrivial bipartition is $\{1\}|\{2\}$. For arbitrary $N$ we again use a proof by induction, i.e., we assume that for $N'=N-1$ the subspace $V(N-1)$ is genuinely entangled and we show that this implies that $V(N)$ is also genuinely entangled. First, consider the case $k(N-1)=k(N)-1$, which corresponds to the generators (\ref{eq_gmax_N=_d}). From our assumption and from (\ref{eq_vij_+0_d}) it follows that if we only consider nontrivial bipartitions of $\{1,\dots,N-1\}$, then for all such bipartitions there exist $i,j\in\{1,\dots,k-1\}$ such that \begin{equation}\label{eq_h_neq0} h(v_{i,j}(N),\phi)\neq 0, \end{equation} where $\phi$ is a representation of a bipartition $Q|\overline{Q}$ and $h(\cdot,\cdot)$ is defined in Eq. (\ref{eq_h_def}).
Moreover, from (\ref{eq_vij_N=_d}) and (\ref{eq_vij_+0_d}) it follows that $v_{i,j}^{(N)}(N)=0$ for $i,j\neq k$, and so by the above argument, for all $Q\subset \{1,\dots,N\}$ such that $Q$ is nontrivial and $Q\neq \{N\},\{1,\dots,N-1\}$ there exist $i,j\in\{1,\dots,k-1\}$ for which (\ref{eq_h_neq0}) is fulfilled. As for $Q= \{N\},\{1,\dots,N-1\}$, from (\ref{eq_vik_d}) we have that \begin{equation} h(v_{k-1,k}(N),\phi)\neq 0, \end{equation} hence for every nontrivial bipartition of the set $\{1,\dots,N\}$ there exists a pair $i,j\in\{1,\dots,k\}$ such that \begin{equation} h(v_{i,j}(N),\phi)\neq 0, \end{equation} which by virtue of Lemma \ref{lem_ge} shows that $V(N)$ is genuinely entangled. Next, let us consider the case (\ref{eq_gmax_Nneq_d}) for $l(N)=l(N-1)+1$. Since $l(N)$ is a nonincreasing function, we know that $v_{l,k}(N-1)=0$. Then, we can use the same argumentation as in the previous case, but with the vector $v_{l,k}(N)$ instead of $v_{k-1,k}(N)$. Lastly, let us consider the case (\ref{eq_gmax_Nneq_d}) for $l(N)=l(N-1)$. By iterating (\ref{eq_vij_eN_d}) from $m'=m$ to $m'=0$ we can derive an explicit formula for $v_{l,k}(N)$: \begin{equation}\label{eq_vlk_sum} v_{l,k}(N)=\sum_{m'=0}^{m} e_{N-1-m'}+(d-1-m)e_{N}. \end{equation} Moreover, the same iteration of (\ref{eq_vij_+0_d}) implies that for all pairs $(i,j)\neq (l,k)$ and for all $n\in \{N-m,\dots,N\}$ \begin{equation} v_{i,j}^{(n)}=0. \end{equation} Therefore, our induction assumption implies that for every nontrivial subset $Q\subset \{1,\dots N-1-m\}$ there exists a pair $(i,j)\neq (l,k)$ for which (\ref{eq_h_neq0}) is fulfilled, which by virtue of Lemma \ref{lem_ge} implies that $V(N)$ is genuinely entangled. This leaves us with the bipartitions $Q|\overline{Q}$ for which $Q\subset \{N-m,\dots,N\}$ or $\overline{Q}\subset \{N-m,\dots,N\}$. However, from (\ref{eq_vlk_sum}) it follows that for every such $Q$ (or $\overline{Q}$), (\ref{eq_h_neq0}) is fulfilled by $v_{l,k}(N)$, which ends the proof.
\end{proof} \input{bibliography.bbl} \end{document}
\begin{document} \title{An upper bound on the $k$-modem illumination problem.} \newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]} \author{Frank Duque \thanks{Departamento de Matemáticas, Cinvestav, D.F. México, México. Partially supported by grant 153984 (CONACyT, Mexico). \texttt{[frduque, cmhidalgo]@math.cinvestav.mx}} \protect\quad Carlos Hidalgo-Toscano \samethanks} \maketitle \begin{abstract} A variation on the classical polygon illumination problem was introduced in [Aichholzer et al. EuroCG'09]. In this variant light sources are replaced by wireless devices called $k$-modems, which can penetrate a fixed number $k$ of ``walls''. A point in the interior of a polygon is ``illuminated'' by a $k$-modem if the line segment joining them intersects at most $k$ edges of the polygon. It is easy to construct polygons of $n$ vertices where the number of $k$-modems required to illuminate all interior points is $\Omega(n/k)$. However, no non-trivial upper bound is known. In this paper we prove that the number of $k$-modems required to illuminate any polygon of $n$ vertices is at most $O(n/k)$. For the cases of illuminating an orthogonal polygon or a set of disjoint orthogonal segments, we give a tighter bound of $6n/k+1$. Moreover, we present an $O(n \log n)$ time algorithm to achieve this bound. \end{abstract} \section{Introduction} The classical art gallery illumination problem consists in finding the minimum number of light sources needed to illuminate a simple polygon. There exist several variations on this problem; one such variation, introduced in \cite{monotone}, is known as the $k$-modem illumination problem. For a non-negative integer $\ensuremath{k}$, a $\emph{\ensuremath{k}-modem}$ is a wireless device that can penetrate $\ensuremath{k}$ ``walls''. Let $\ensuremath{\mathcal{L}}$ be a set of $\ensuremath{n}$ line segments (or lines) in the plane.
A $\ensuremath{k}$-modem $\emph{illuminates}$ all points $\ensuremath{p}$ of the plane such that the interior of the line segment joining $\ensuremath{p}$ and the $\ensuremath{k}$-modem intersects at most $\ensuremath{k}$ elements of $\ensuremath{\mathcal{L}}$. In general, $\ensuremath{k}$-modem illumination problems consist in finding the minimum number of $\ensuremath{k}$-modems necessary to illuminate a certain subset of the plane, for a given $\ensuremath{\mathcal{L}}$. Classical illumination \cite{sack1999handbook,o1987art} is just the case $\ensuremath{k=0}$. Several upper bounds have been obtained for various classes of $\mathcal{L}$. In \cite{monotone} the authors studied the case when $\ensuremath{\mathcal{L}}$ is the set of edges of a monotone polygon with $n$ vertices; they showed that the interior of the polygon can be illuminated with at most $\left \lceil\frac{n}{2k}\right \rceil$ $k$-modems ($\left \lceil\frac{n}{k+4}\right \rceil$ if $k=1,2,3$), and that if the polygon is orthogonal it can be illuminated with $\left \lceil \frac{n-2}{2k+4}\right \rceil $ $k$-modems. In \cite{ktransmiters} the authors studied the case when $\mathcal{L}$ is a set of $n$ disjoint orthogonal line segments; they showed that $\left \lceil \frac{n+1}{2(k+1)^{0.264}}\right \rceil $ $k$-modems are sufficient to illuminate the plane. In \cite{fabila2009modem} the authors studied the problem of illuminating the plane with few modems of high power; they showed that when $\mathcal{L}$ is an arrangement of lines, one $\left \lceil\frac{3n}{2}\right \rceil$-modem is sufficient; when $\mathcal{L}$ is the set of edges of an orthogonal polygon, one $\left \lceil\frac{n}{3}\right \rceil$-modem is sufficient; and when $\mathcal{L}$ is the set of edges of a simple polygon, one $\left \lceil\frac{2n+1}{3}\right \rceil$-modem is sufficient. It is worth noting that there are no published bounds for general polygons. There are also algorithmic results regarding simple polygons.
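The illumination condition above is straightforward to test computationally. The following Python sketch (our own illustration; function names are ours, and degenerate incidences such as a ray touching a segment endpoint are ignored) counts how many segments the open segment from the modem to a point properly crosses.

```python
def orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def crosses(p, q, s):
    """True if the open segment pq properly crosses the segment s = (a, b)."""
    a, b = s
    return (orient(p, q, a) * orient(p, q, b) < 0 and
            orient(a, b, p) * orient(a, b, q) < 0)

def illuminated(modem, p, segments, k):
    """p is illuminated by a k-modem if the segment to p crosses at most k walls."""
    return sum(crosses(modem, p, s) for s in segments) <= k

# three vertical "walls" between the modem at the origin and the point (4, 0)
walls = [((1, -1), (1, 1)), ((2, -1), (2, 1)), ((3, -1), (3, 1))]
assert illuminated((0, 0), (4, 0), walls, 3)
assert not illuminated((0, 0), (4, 0), walls, 2)
```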
In \cite{metaheuristic} the authors presented a hybrid metaheuristic strategy to find few $k$-modems that illuminate a simple polygon. In this case the $k$-modems are required to be placed at vertices of the polygon. They applied the hybrid metaheuristic to random sets of simple, monotone, orthogonal and grid monotone orthogonal polygons. Each set consisted of 40 polygons of 30, 50, 70, 100, 110, 130, 150 and 200 vertices. The average numbers of $k$-modems used by their strategy are shown in Table \ref{table:averages}. \begin{table}[h] \centering \begin{tabular}{ c c c c c } \toprule & Simple & Monotone & Orthogonal & Grid Monotone Orthogonal \\ \midrule $k=2$ & $\left \lceil \frac{n}{26} \right \rceil$ & $\left \lceil \frac{n}{15} \right \rceil$ & $\left \lceil \frac{n}{27} \right \rceil$ & $\left \lceil \frac{n}{18} \right \rceil$ \\ & & & & \\ $k=4$ & $\left \lceil \frac{n}{52} \right \rceil$ & $\left \lceil \frac{n}{26} \right \rceil$ & $\left \lceil \frac{n}{57} \right \rceil$ & $\left \lceil \frac{n}{35} \right \rceil$ \\ \bottomrule \end{tabular} \begin{centering} \protect\caption{Averages on the number of $k$-modems obtained by the strategy presented in \cite{metaheuristic}. } \label{table:averages} \par\end{centering} \end{table} It is known that the problem of finding the minimum number of 0-modems to illuminate a simple polygon is NP-hard \cite{Aggarwal:1984:AGT:911725}. It was recently proved that the same problem is also NP-hard for $k$-modems {[}reference to the paper of Christiane{]}. This paper is organized as follows. In Section \ref{SeccKmodemCutting} we present a new bound of $O(n/k)$ for the $k$-modem illumination problem through a variation of the Cutting Lemma. In Section \ref{sec:Orthogonal} we present a simple $O(n\log n)$ time algorithm that illuminates a set of $n$ disjoint orthogonal segments with at most $6\frac{n}{k}+1$ $k$-modems. This algorithm can be easily modified to obtain the same bound for orthogonal polygons. 
\section{\label{SeccKmodemCutting}Upper Bounds Using the Cutting Lemma} A generalized triangle is the intersection of three half planes. Note that a generalized triangle can be a point, a line, a bounded or an unbounded region. Given a set $\mathcal{L}$ of $n$ lines in the plane and $r>0$, a $\frac{1}{r} $-cutting is a partition of the plane into generalized triangles with disjoint interiors, such that each generalized triangle is intersected by at most $n/r$ lines. The Cutting Lemma gives an upper bound on the size of such a cutting. \begin{thm}[Cutting Lemma] Let $\mathcal{L}$ be a set of $n$ lines in the plane and $r>0$. Then there exists a $\frac{1}{r}$-cutting of size $O(r^{2})$. \end{thm} The Cutting Lemma was first proved in \cite{cuttingLemma1} and independently in \cite{CuttingLemma2}. It has become a classical tool in Computational Geometry, used mainly in divide-and-conquer algorithms. It can be used to obtain an upper bound on the number of $k$-modems necessary to illuminate the plane in the presence of $n$ lines. \begin{thm}\label{thm:Lines-Mdems-Required} Let $\mathcal{L}$ be a set of $n$ lines in the plane. The number of $k$-modems required to illuminate the plane in the presence of $\mathcal{L}$ is $O(n^2/k^2)$. \end{thm} \begin{proof} By the Cutting Lemma, there exists a $\frac{1}{(n/k)}$-cutting for $\mathcal{L}$ of size $O(n^2/k^2)$. Each generalized triangle of the cutting is intersected by at most $k$ lines, so a single $k$-modem placed inside it illuminates it. Thus the number of $k$-modems required to illuminate the plane is $O(n^2/k^2)$. \end{proof} Given a polygon $\mathcal{P}$ we can obtain the same bound on the number of $k$-modems needed to illuminate it: we first extend its edges to straight lines and then apply Theorem \ref{thm:Lines-Mdems-Required}. We can achieve a better upper bound of $O(n/k)$ by making use of a line segment version of the Cutting Lemma given in \cite{berg1995cuttings}.
In that paper the authors consider cuttings in a more general setting: they define a cutting as a subdivision of the plane in \emph{boxes}. A \emph{box} is a closed subset of the plane which has constant description (that is, it can be represented in a computer with $O(1)$ space, and it can be checked in constant time whether a point lies in a box or whether an object intersects (the interior of) a box). \begin{thm} \label{thm:CuttingLemmaSegments} \cite{berg1995cuttings} Let $\mathcal{L}$ be a set of $n$ line segments in the plane with a total of $A$ intersections and $r>0$. Then there exists a $\frac{1}{r}$-cutting for $\mathcal{L}$ of size $O\left(r +A \left(\frac{r}{n}\right)^2\right)$. \end{thm} If we consider a polygon as a set of $n$ line segments that intersect only at their endpoints, Theorem \ref{thm:CuttingLemmaSegments} gives us a $\frac{1}{(n/k)}$-cutting of size $O\left(\frac{n}{k}+n\left(\frac{n^2}{k^2}\frac{1}{n^2}\right)\right)=O\left(\frac{n}{k}\right)$. Taking into account that the boxes used in the proof of Theorem \ref{thm:CuttingLemmaSegments} are generalized trapezoids (that is, intersections of four half planes), and that it is possible to illuminate each trapezoid with a $k$-modem, we obtain Theorem \ref{thm:CuttingLemmaPolygons}. Note that the same reasoning applies to sets of segments in the plane, which gives a bound of $O(n/k +A/k^2)$ for that case. \begin{thm}\label{thm:CuttingLemmaPolygons} Let $\mathcal{P}$ be a polygon with $n$ vertices. The number of $k$-modems needed to illuminate $\mathcal{P}$ is at most $O\left(n/k\right)$. \end{thm} \section{\label{sec:Orthogonal}An Algorithm for Orthogonal Line Segments Illumination} In this section we present an $O(n \log n)$ time algorithm to illuminate the plane with $k$-modems in the presence of a set $\mathcal{L}$ of $n$ disjoint orthogonal segments. The number of $k$-modems used by our algorithm is at most $6\frac{n}{k}+1$. We assume that $\mathcal{L}$ is contained in a rectangle $R$. 
Our objective is to partition $R$ into a certain kind of polygons called staircases, in a similar fashion to the cuttings introduced in Section \ref{SeccKmodemCutting}. We do this in such a way that each staircase can be illuminated with one $k$-modem. A \emph{staircase} is an orthogonal polygon $P$ such that: $P$ is bounded from below by a single horizontal segment \emph{Floor(P)}; $P$ is bounded from the right by a single vertical segment \emph{Rise(P)}; the left endpoint of $Floor(P)$ and the upper endpoint of $Rise(P)$ are joined by a monotone polygonal chain \emph{Steps(P)}. See Figure \ref{fig:staircase}. In what follows let $P$ be a staircase. \begin{figure} \caption{An example of a staircase.} \label{fig:staircase} \end{figure} Note that an axis-parallel line intersecting the interior of $P$ splits it into two parts which are also staircases. Let $l$ be a horizontal line that intersects the interior of $P$ in a segment $s$. We denote by $Above(P,l)$ the staircase formed by $s$ and the part of $P$ above $s$, and by $Below(P,l)$ the staircase formed by $s$ and the part of $P$ below $s$. Likewise, let $l'$ be a vertical line that intersects the interior of $P$ in a segment $s$. We denote by $Left(P,l')$ the staircase formed by $s$ and the part of $P$ to the left of $s$, and by $Right(P,l')$ the staircase formed by $s$ and the part of $P$ to the right of $s$. See Figure \ref{fig:splits}. \begin{figure} \caption{An example of $Above(P,l)$, $Below(P,l)$, $Left(P,l')$ and $Right(P,l')$.} \label{fig:splits} \end{figure} \subsection{\label{sub:Partition-scheme-pseudocode}Illumination Algorithm} Let $\mathcal{P}$ be the set of endpoints of the segments in $\mathcal{L}$, excepting the rightmost points of the horizontal segments. The algorithm \emph{StaircasePartition($\mathcal{L})$} finds a rectangle $R$ that contains $\mathcal{L}$ and produces as output a partition of $R$ into staircases.
Initially, $R$ is the only staircase in the partition. We use a horizontal sweep line $l$ starting at the top of $R$ that stops at each point of $\mathcal{P}$ and checks whether it is necessary to refine the partition. Throughout the algorithm, the staircases that are not intersected by the sweep line need no further processing. The staircases that are intersected by the sweep line might be split later. There are two reasons to split a staircase: the number of segments of $\mathcal{L}$ that intersect it might be too high, or a horizontal segment of $\mathcal{L}$ might cross the staircase completely. These two cases are handled by procedures called \emph{OverflowCut} and \emph{CrossingCut}, respectively. \emph{OverflowCut} ensures that no staircase is intersected by more than $k$ segments; \emph{CrossingCut} ensures that no horizontal segment in $\mathcal{L}$ completely crosses a staircase. \emph{OverflowCut} is called whenever $k$ segments intersect the interior of a staircase $P$. \emph{OverflowCut} starts by replacing $P$ by $Above(P,l)$. If at most $\left \lfloor \frac{k}{2}\right \rfloor$ segments intersect both $P$ and $l$, $Below(P,l)$ is added to the partition. If more than $\left \lfloor \frac{k}{2}\right \rfloor$ segments intersect both $P$ and $l$, we search for a vertical line $l'$ that leaves at most $\left \lfloor \frac{k}{2}\right \rfloor$ of those segments on each side. The staircases $Left(Below(P,l), l')$ and $Right(Below(P,l),l')$ are then added to the partition. See Figure \ref{fig:Overflow}. \begin{figure} \caption{The two cases for an OverflowCut on a staircase with $k=18$.} \label{fig:Overflow} \end{figure} \emph{CrossingCut} is called whenever a horizontal segment $s$ of $\mathcal{L}$ intersects at least three staircases of the partition. Let $P_1, \ldots , P_m$ be these staircases ordered from left to right. \emph{CrossingCut} replaces each $P_i$ by $Above(P_i, l)$ for $2 \leq i \leq m-1$, and merges each $Below(P_i,l)$ with $P_m$.
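The only nonobvious step of \emph{OverflowCut} is locating the vertical line $l'$; since the segments intersecting both $P$ and $l$ are vertical, a line through the median of their $x$-coordinates works. A minimal sketch (the function name is ours; a cut coinciding with a segment would be resolved by a small perturbation in practice):

```python
def balanced_cut_x(xs, k):
    # xs: x-coordinates of the at most k vertical segments that intersect
    # both the staircase and the sweep line l.  A vertical line at the
    # median coordinate leaves at most floor(k/2) of the segments
    # strictly on each side.
    assert len(xs) <= k
    xs = sorted(xs)
    return xs[len(xs) // 2]
```

The left side then holds at most $\lfloor |xs|/2\rfloor\le\lfloor k/2\rfloor$ segments and the right side at most $\lceil |xs|/2\rceil-1\le\lfloor k/2\rfloor$.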
Note that the number of staircases in the partition does not grow. See Figure \ref{fig:Crossing}. \begin{figure} \caption{A CrossingCut.} \label{fig:Crossing} \end{figure} The algorithm begins by sorting the points in $\mathcal{P}$ by $y$ coordinate; these will be the stop points for the sweep line. After initializing the partition with $R$, the stop points are processed according to three cases. (As part of the steps taken in each case, we maintain the set of segments that intersect each staircase in the partition sorted from left to right, distinguishing the ones that also intersect $l$.) \begin{itemize} \item The sweep line stops at an upper endpoint. This means we have found a new vertical segment that intersects a staircase $P$. That staircase may now be intersected by $k$ segments; if this happens, we make an \emph{OverflowCut}. \item The sweep line stops at a lower endpoint. This means we have found the end of a segment that intersects $P$. In this case we only update the segment set of $P$. \item The sweep line stops at a left endpoint $p$ of a segment $s$. If the number of staircases that $s$ intersects is at least three, we perform a \emph{CrossingCut} on them. If necessary, an \emph{OverflowCut} is made on the staircases that contain the endpoints of $s$. \end{itemize} At the end of the algorithm, the interior of each staircase $P$ in the partition is intersected by at most $k$ segments. Thus, a $k$-modem placed at the intersection between \emph{Floor(P)} and \emph{Rise(P)} is enough to illuminate it. \begin{lem} \label{lem:num-modems} Let $\mathcal{L}$ be a set of $n$ disjoint orthogonal segments. Let $\mathcal{P}'$ be the set of endpoints of the segments in $\mathcal{L}$, excepting the lower endpoints of the vertical segments. Then, the total number of calls to \textbf{OverflowCut} made by \textbf{StaircasePartition($\mathcal{L}$)} is at most $|\mathcal{P}'|/ \left \lceil k/2 \right \rceil$.
\end{lem} \begin{proof} Let $P$ be a staircase in the partition that \emph{StaircasePartition($\mathcal{L})$} returns. $P$ was created by an execution of \emph{OverflowCut}, so $Steps(P)$ was initially intersected by at most $\left \lfloor k/2 \right \rfloor$ vertical segments. Further modifications to $P$ are done by \emph{CrossingCuts}, which add edges to $Steps(P)$. The horizontal edges added to $Steps(P)$ are parts of segments in $\mathcal{L}$, so they cannot be intersected by any other segment. Thus, at most $\left \lfloor k/2 \right \rfloor$ vertical segments of $\mathcal{L}$ completely cross $P$. If \emph{OverflowCut} is called on $P$, $Above(P,l)$ becomes a staircase of the partition and will not be modified anymore. \emph{OverflowCut} was called because $k$ segments from $\mathcal{L}$ intersected $P$. These same segments also intersect $Above(P,l)$; of these, at most $\left \lfloor k/2 \right \rfloor$ cross it completely. Therefore, at least $\left \lceil k/2 \right \rceil$ points of $\mathcal{P}'$ are contained in $Above(P,l)$. Thus, the total number of \emph{OverflowCut} calls is at most $|\mathcal{P}'|/ \left \lceil k/2 \right \rceil$. \end{proof} \begin{thm} Let $\mathcal{L}$ be a set of $n$ disjoint orthogonal segments. Then the number of $k$-modems needed to illuminate the plane in the presence of $\mathcal{L}$ is at most $6\frac{n}{k}+1$. The locations for those modems can be found in $O(n \log n)$ time. \end{thm} \begin{proof} We can assume that the number of vertical segments in $\mathcal{L}$ is at least $n/2$; otherwise we can rotate the plane. Let $\mathcal{P}'$ be the set of endpoints of the segments in $\mathcal{L}$, excepting the lower endpoints of the vertical segments. Therefore, $|\mathcal{P}'| \le 3n/2$. \emph{StaircasePartition($\mathcal{L}$)} adds staircases to the partition only when it makes a call to \emph{OverflowCut}, and each call adds at most two staircases.
Using Lemma \ref{lem:num-modems}, we obtain that there are at most $2|\mathcal{P}'|/ \left \lceil k/2 \right \rceil + 1 \le 6\frac{n}{k}+1$ staircases in the partition. Since each staircase can be illuminated with one $k$-modem, the bound follows. It remains to prove the $O(n \log n)$ bound on the time to find the locations of the $k$-modems. For each staircase $P$ in the partition, we maintain the horizontal segments of $Steps(P)$ sorted from left to right and the vertical segments of $Steps(P)$ sorted from bottom to top. Thus it is possible to find $Above(P,l)$ and $Below(P,l)$ in $O(\log n)$ time; the same holds for $Left(P,l')$ and $Right(P,l')$. An \emph{OverflowCut} call makes at most two splits, so it takes at most $O(\log n)$ time. Since the number of \emph{OverflowCut} calls is $O(n/k)$, the total time required by them is $O(\frac{n}{k} \log n)$. Given $m$ staircases $P_1, \ldots , P_m$ intersected by a segment $s$, the splits and merges done by a \emph{CrossingCut} can be carried out in $O((m-2)\log n)$ time. The $m-2$ upper parts of the staircases split horizontally become staircases that will not be modified again. Since there are at most $O(n/k)$ staircases, at most $O(n/k)$ splits and merges from \emph{CrossingCut} are done throughout the algorithm. Since each pair of split and merge operations takes $O(\log n)$ time, the total time required by the \emph{CrossingCut} calls is $O(\frac{n}{k} \log n)$. Besides the cost of the calls to \emph{OverflowCut} and \emph{CrossingCut}, at every stop point we must determine the staircase in which it is located. This can be done in $O(\log n)$ time by maintaining the order from left to right in which the sweep line intersects the staircases. Thus, the running time of the algorithm is $O(n \log n + \frac{n}{k} \log n)$. \end{proof} \section{Acknowledgements} We would like to thank Ruy Fabila-Monroy for introducing us to the $k$-modem problem and for his invaluable help in the development of this paper. \end{document}
\begin{document} \begin{center} {\bf OPTIMIZATION OF A PERTURBED SWEEPING PROCESS\\ BY CONSTRAINED DISCONTINUOUS CONTROLS}\\[3ex] GIOVANNI COLOMBO\footnote{Dipartimento di Matematica ``Tullio Levi-Civita", Universit\`a di Padova, via Trieste 63, 35121 Padova, Italy ([email protected]) and G.N.A.M.P.A. of INdAM.} \quad BORIS S. MORDUKHOVICH\footnote{Department of Mathematics, Wayne State University, Detroit, Michigan 48202, USA ([email protected]). Research of this author was partly supported by the USA National Science Foundation under grants DMS-1512846 and DMS-1808978, and by the USA Air Force Office of Scientific Research grant \#15RT0462.} \quad DAO NGUYEN\footnote{Department of Mathematics, Wayne State University, Detroit, Michigan 48202, USA ([email protected]). Research of this author was partly supported by the USA National Science Foundation under grant DMS-1808978 and by the USA Air Force Office of Scientific Research grant \#15RT0462.} \end{center} \small{\sc Abstract.} This paper deals with optimal control problems described by a controlled version of Moreau's sweeping process governed by convex polyhedra, where measurable control actions enter additive perturbations. This class of problems, which addresses unbounded discontinuous differential inclusions with intrinsic state constraints, is truly challenging and underinvestigated in control theory while being highly important for various applications. To attack such problems with constrained measurable controls, we develop a refined method of discrete approximations, establishing its well-posedness and strong convergence.
This approach, married to advanced tools of first-order and second-order variational analysis and generalized differentiation, allows us to derive adequate collections of necessary optimality conditions for local minimizers, first in discrete-time problems and then in the original continuous-time controlled sweeping process by passing to the limit. The new results include an appropriate maximum condition and significantly extend the previous ones obtained under essentially more restrictive assumptions. We compare them with other versions of the maximum principle for controlled sweeping processes that have been recently established for global minimizers in problems with smooth sweeping sets by using different techniques. The obtained necessary optimality conditions are illustrated by several examples.\\[1ex] {\em Key words.} Optimal control, sweeping process, variational analysis, discrete approximations, generalized differentiation, necessary optimality conditions.\\[1ex] {\em AMS Subject Classifications.} 49M25, 49J53, 90C30.\vspace*{-0.2in} \normalsize \section{Introduction and Problem Formulation}\label{assumptions} This paper addresses the following optimal control problem labeled as $(P)$:\\[1ex] Minimize the Mayer-type cost functional \begin{equation}\label{cost1} J[x,u]:=\varphi\big(x(T)\big) \end{equation} over the corresponding (described below) pairs $(x(\cdot),u(\cdot))$ satisfying \begin{equation}\label{Problem} \left\{\begin{matrix} \dot{x}(t)\in-N\big(x(t);C\big)+g\big(x(t),u(t)\big)\;\textrm{ a.e. }\;t\in[0,T],\;x(0)=x_0\in C\subset\mathbb{R}^n,\\ u(t)\in U\subset\mathbb{R}^d\;\textrm{ a.e. }\;t\in[0,T], \end{matrix}\right.
\end{equation} where the set $C$ is a convex {\em polyhedron} given by \begin{equation}\label{C} C:=\bigcap_{j=1}^{s}C^j\textrm{ with }C^j:=\big\{x\in\mathbb{R}^n\;\big|\;\langle x^j_*,x\rangle\le c_j\big\}, \end{equation} and where $N(x;C)$ stands for the normal cone of convex analysis defined by \begin{equation}\label{nc} N(x;C):=\big\{v\in\mathbb{R}^n\;\big|\;\langle v,y-x\rangle\le 0,\;y\in C\big\}\textrm{ if }x\in C\textrm{ and }N(x;C):=\emptyset\textrm{ if }x\notin C. \end{equation} Observe that the second part of definition \eqref{nc} yields the presence of the hidden {\em pointwise state constraints} on the trajectories of \eqref{Problem}: \begin{equation}\label{e:8} x(t)\in C,\textrm{ i.e. }\langle x^j_*,x(t)\rangle\le c_j\;\textrm{ for all }\;t\in [0,T]\;\textrm{ and }\;j=1,\ldots,s. \end{equation} Considering the differential inclusion in \eqref{Problem} without the additive perturbation term $g(x,u)$, we arrive at the framework of the {\em sweeping process} introduced by Jean-Jacques Moreau, who was motivated by applications to problems of elastoplasticity; see \cite{mor_frict}. It has been well recognized that the (uncontrolled) Moreau sweeping process has a {\em unique} absolutely continuous (or even Lipschitz continuous) solution for convex and mildly nonconvex sets $C$; see, e.g., \cite{CT} and the references therein. Thus there is no room for optimization of the sweeping process unless some additional functions or parameters of choice are inserted into its description. It is very different from control theory for Lipschitzian differential inclusions \begin{equation}\label{diff-inc} \dot x(t)\in F\big(x(t)\big)\textrm{ a.e. }t\in[0,T],\;x(0)=x_0\in\mathbb{R}^n, \end{equation} which have multiple solutions.
The latter type of dynamics extends the classical ODE control setting with $F(x):=f(x,U)$ in \eqref{diff-inc}, where the choice of measurable controls $u(t)\in U\subset\mathbb{R}^d$ a.e.\ $t\in[0,T]$ creates the possibility to find an optimal one with respect to a prescribed performance. The main issue here is that the normal cone mapping $N(\cdot;C)$ in the sweeping process is {\em highly non-Lipschitzian} (even discontinuous) while being {\em maximal monotone}. On the other hand, the well-developed optimal control theory for differential inclusions \eqref{diff-inc} strongly depends on Lipschitzian behavior of $F(\cdot)$; see, e.g., \cite{m-book2,v} with the references therein as well as more recent publications. Introducing controls into the perturbation term of \eqref{Problem} allows us to have multiple solutions $x(\cdot)$ of this system by the choice of feasible control functions $u(\cdot)$ and thus to minimize the cost functional \eqref{cost1} over feasible control-trajectory pairs. Problems of this type were considered in the literature from the viewpoint of the existence of optimal solutions and relaxation; see \cite{aht,cmf,et,Tol} among other publications. More recently, {\em necessary optimality conditions} for local minimizers were derived in \cite{cm1,cm2} by the {\em method of discrete approximations} for problems of type $(P)$ with smooth (in fact $W^{2,\infty}$) control functions without any constraints. Later on, these results were further extended in \cite{cm3} to nonconvex (and hence nonpolyhedral) problems with prox-regular sets $C$ in the same control setting. Note that both $C$ and $g$ in \eqref{Problem} may be time-dependent; we discuss the autonomous case just for simplicity. The discrete approximation approach implemented in \cite{cm1}--\cite{cm3} was based on the scheme from \cite{chhm1} developed for the unperturbed sweeping process with controls in the moving set.
The latter was in turn a sweeping control version of the original discrete approximation method to derive necessary optimality conditions for Lipschitzian differential inclusions \eqref{diff-inc} suggested and implemented in \cite{m95}; see also \cite{m-book2}. Quite recently, other approximation procedures were developed to derive necessary optimality conditions for global minimizers of $(P)$ in the class of measurable controls, albeit under rather strong assumptions. The first paper \cite{ac} assumes, among other requirements, that the boundary of the sweeping set $C$ in \eqref{Problem} is ${\cal C}^3$-smooth, the control set $U$ is compact and convex, and its image $g(x,U)$ under $g$ is convex as well. The ${\cal C}^3$-smoothness assumption on $C$ was relaxed in \cite{pfs}, by employing a smooth approximation procedure not relying on the distance function as in \cite{ac}, for the case of $C:=\{x\in\mathbb{R}^n\;|\;\psi(x)\le 0\}$ with $\psi$ being a ${\cal C}^2$-smooth convex function. The necessary optimality conditions obtained in both papers \cite{ac,pfs} can be treated as somewhat different counterparts of the celebrated Pontryagin Maximum Principle (PMP) for state-constrained controlled differential equations $\dot x=f(x,u)$. Note that necessary optimality conditions in some other classes of optimal control problems governed by various controlled versions of the sweeping process were developed in \cite{ao,bk,cm1,cm2,cm3,chhm,chhm1,hm18}. The main goal of this paper is to derive necessary optimality conditions for local minimizers (in the senses specified below) of the formulated problem $(P)$, with the constraint set $U$ in \eqref{Problem} given by an {\em arbitrary compact set} and with the (nonsmooth) {\em polyhedral} set $C$ from \eqref{C}, by significantly reducing regularity assumptions on the reference control.
Although problem \eqref{Problem} is stated in the class of {\em measurable} feasible control actions, we assume that the local {\em optimal} control under consideration is of {\em bounded variation}, hence allowing it to be discontinuous. Our approach is based on developing the {\em method of discrete approximations}, which is certainly of its own interest and has never been implemented before in control theory for sweeping processes with discontinuous controls. The novel results in this direction establish a {\em strong} approximation of {\em every feasible} control-state pair for $(P)$ in the sense of the $L^2$-norm convergence of discretized controls and the $W^{1,2}$-norm convergence of the corresponding piecewise linear trajectories. Furthermore, we justify such a strong convergence of {\em optimal} solutions for discrete problems to the given local minimizer of $(P)$. Dealing further with {\em intrinsically nonsmooth} and {\em nonconvex} discrete-time approximation problems, we derive for them necessary optimality conditions of the discrete Euler-Lagrange type by using appropriate unconvexified tools of first-order and second-order variational analysis and generalized differentiation. Employing these tools and passing to the limit from discrete approximations lead us to new nondegenerate necessary optimality conditions for local optimal solutions of the sweeping control problem $(P)$. The obtained results significantly extend those recently established in \cite{cm2} for unconstrained $W^{2,\infty}$ optimal controls in $(P)$ and contain a maximum condition, while being essentially different from the necessary optimality conditions derived in \cite{ac,pfs} for problems of type $(P)$ with smooth sets $C$ in addition to other assumptions. We present nontrivial examples that illustrate the efficiency of the new results.
Further applications to some practical models are considered in our subsequent paper \cite{cmn}. The rest of the paper is organized as follows. In Section~2 we formulate the standing assumptions, discuss the types of local minimizers under consideration, and present some preliminary results. Section~3 is devoted to the construction of discrete approximations of the controlled constrained sweeping dynamics \eqref{Problem} that allows us to deal with measurable controls (in fact of bounded variation) and to strongly approximate any feasible solution of $(P)$ as mentioned above. This result plays a major role in the justification of the developed version of the method of discrete approximations for problem $(P)$. In Section~4 we construct a sequence of discrete approximations of a given ``intermediate" local minimizer for $(P)$, which occupies an intermediate position between weak and strong minimizers in variational and control problems. The major result of this section justifies the strong $W^{1,2}\times L^2$ approximation of the given local minimum pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ by extended optimal solutions to the discretized problems. It builds a bridge between the continuous-time sweeping control problem $(P)$ and its discrete-time counterparts. It turns out that the discrete-time approximating problems are unavoidably nonsmooth and nonconvex, even when the initial data are differentiable. This is due to the presence of increasingly many geometric constraints generated by the normal cone graph. To deal with them, we need adequate tools of variational analysis involving not only first-order but also second-order generalized differentiation. The latter is because of the normal cone description of the sweeping process.
In Section~5 we present the corresponding definitions of the first-order and second-order generalized differential constructions taken from \cite{m-book1} together with the results of their computations entirely in terms of the given data of \eqref{Problem}. Section~6 provides the derivation of necessary optimality conditions for discrete-time problems by reducing them to problems of nondifferentiable programming with many geometric constraints, using necessary optimality conditions for them obtained via variational/extremal principles, and then expressing the latter in terms of the given data of $(P)$ by employing calculus rules of generalized differentiation. Section~7 is the culmination of the paper. We pass to the limit from the necessary optimality conditions for discrete-time problems by using the stability of discrete approximations, the robustness of the generalized differential constructions, and establishing an appropriate convergence of adjoint functions, which is the most difficult part. In this way we arrive at new necessary conditions for local minimizers of $(P)$ expressed in terms of the given data of the original problem. The usefulness of the nondegenerate optimality conditions obtained is illustrated in Section~8 by nontrivial examples. Throughout the paper we use standard notations of variational analysis and optimal control; see, e.g., \cite{m-book1,m-book2}. Recall that $\mathbb{N}:=\{1,2,\ldots\}$.\vspace*{-0.2in} \section{Standing Assumptions and Basic Notions}\label{stand} \setcounter{equation}{0}\vspace*{-0.1in} Dealing with the polyhedron $C$ from \eqref{C} and having $\bar{x}\in C$, consider the set of {\em active constraint indices} \begin{equation}\label{aci} I(\bar{x}):=\big\{j\in\{1,\ldots,s\}\;\big|\;\langle x^j_*,\bar{x}\rangle=c_j\big\}.
\end{equation} Recall that the {\em linear independence constraint qualification} (LICQ) holds at $\bar{x}$ if \begin{equation}\label{plicq} \Big[\sum_{j\in I(\bar{x})}\alpha_jx^j_*=0,\;\alpha_j\in\mathbb{R}\Big]\Longrightarrow\big[\alpha_j=0\textrm{ for all }j\in I(\bar{x})\big]. \end{equation} Our {\em standing assumptions} in this paper are as follows:\\[1ex] {\bf(H1)} The control region $U\ne\emptyset$ is a compact set in $\mathbb{R}^d$ (in fact it may be an arbitrary compact metric space).\\ {\bf(H2)} The perturbation mapping $g\colon\mathbb{R}^n\times U\to\mathbb{R}^n$ is continuous in $(x,u)$ while being also Lipschitz continuous with respect to $x$ uniformly on $U$ whenever $x$ belongs to a bounded subset of $\mathbb{R}^n$, and satisfies there the sublinear growth condition \begin{equation*} \|g(x,u)\|\le\beta\big(1+\|x\|\big)\;\mbox{ for all }\;u\in U \end{equation*} with some positive constant $\beta$.\\ {\bf(H3)} The LICQ condition \eqref{plicq} holds along the reference trajectory $\bar{x}(t)$ of \eqref{Problem} for all $t\in[0,T]$. It follows from \cite[Theorem~1]{et} that for each measurable control $u(\cdot)$ there is a unique solution $x(\cdot)\in W^{1,2}([0,T],\mathbb{R}^n)$ to the Cauchy problem in \eqref{Problem}. Thus by a {\em feasible process} for $(P)$ we understand a pair $(x(\cdot),u(\cdot))$ such that $u(\cdot)$ is measurable, $x(\cdot)\in W^{1,2}([0,T],\mathbb{R}^n)$, and all the constraints in \eqref{Problem} are satisfied. The above discussion tells us that the set of feasible pairs for $(P)$ is nonempty. Furthermore, it follows from \cite[Theorem~2]{et} that under the assumptions above the sweeping control problem $(P)$ admits an {\em optimal solution} provided that the image set \begin{equation*} g(x,U):=\big\{y\in\mathbb{R}^n\;\big|\;y=g(x,u)\textrm{ for some }u\in U\big\} \end{equation*} is convex.
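For a concrete polyhedron \eqref{C}, the active index set \eqref{aci} and the LICQ condition \eqref{plicq} are straightforward to check numerically. A minimal sketch (the helper names and the tolerance are our own assumptions):

```python
import numpy as np

def active_indices(x, X, c, tol=1e-9):
    # Indices j with <x_j*, x> = c_j, where the rows of X are the x_j*.
    return [j for j in range(len(c)) if abs(X[j] @ x - c[j]) <= tol]

def licq_holds(x, X, c, tol=1e-9):
    # LICQ at x: the active generating vectors x_j* are linearly independent.
    I = active_indices(x, X, c, tol)
    return len(I) == 0 or np.linalg.matrix_rank(np.array([X[j] for j in I])) == len(I)

# Unit square {x : x_1 <= 1, x_2 <= 1, -x_1 <= 0, -x_2 <= 0} in R^2.
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
c = np.array([1.0, 1.0, 0.0, 0.0])
```

At the corner $(1,1)$ the two active normals are independent, so LICQ holds; appending the redundant constraint $x_1+x_2\le 2$ makes three vectors active there and destroys LICQ.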
Since in this paper we are interested in deriving necessary optimality conditions for a given local minimizer of $(P)$, we do not impose the aforementioned convexity assumption. Let us now specify what we mean by a local minimizer of $(P)$.\vspace*{-0.07in} \begin{definition}\label{Def3.1} We say that a feasible pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ for $(P)$ is a {\sc $W^{1,2}\times L^2$-local minimizer} in this problem if $\bar{x}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)$ and there exists $\epsilon>0$ such that $J[\bar{x},\bar{u}]\le J[x,u]$ for all feasible pairs $(x(\cdot),u(\cdot))$ satisfying the condition \begin{equation*} \int_0^T\Big(\big\|\dot{x}(t)-\dot{\bar{x}}(t)\big\|^2+\big\|u(t)-\bar{u}(t)\big\|^2\Big)dt<\epsilon. \end{equation*} \end{definition}\vspace*{-0.05in} For the case of differential inclusions of type \eqref{diff-inc} with no explicit controls, this notion corresponds to {\em intermediate local minimizers} of rank two introduced in \cite{m95} and then studied there and in other publications; see, e.g., \cite{m-book2,v} and the references therein. Quite recently, such minimizers have been revisited in \cite{hm18} for controlled sweeping processes different from \eqref{Problem}; namely, for those where continuous control actions enter the moving set $C(t)=C(u(t))$. It is easy to see that {\em strong} ${\cal C}\times L^2$-local minimizers of $(P)$ with $\bar{x}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)$ fall into the category of Definition~\ref{Def3.1}, but not vice versa. In the general setting of $W^{1,2}\times L^2$-local minimizers we need to use a certain relaxation procedure in the line of Bogolyubov and Young that has been well understood in the calculus of variations and optimal control; see, e.g., \cite{dfm,et,m-book2,Tol,v} for more recent publications in the case of differential inclusions.
Taking into account the convexity and closedness of the normal cone $N(x;C)$ and the compactness of the set $g(x,U)$, the {\em relaxed} version $(R)$ of problem $(P)$ consists of minimizing the cost functional \eqref{cost1} over absolutely continuous trajectories of the convexified differential inclusion \begin{equation}\label{conv} \dot{x}(t)\in-N\big(x(t);C\big)+\mbox{\rm co}\, g\big(x(t),U\big)\;\textrm{ a.e. }\;t\in[0,T],\;x(0)=x_0\in C\subset\mathbb{R}^n, \end{equation} where `co' signifies the convex hull of the set. Then we come up with the following notion.\vspace*{-0.07in} \begin{definition}\label{relaxed} Let $(\bar{x}(\cdot),\bar{u}(\cdot))$ be a feasible pair for $(P)$. We say that it is a {\sc relaxed $W^{1,2}\times L^2$-local minimizer} for $(P)$ if $\bar{x}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)$ and there is $\epsilon>0$ such that \begin{equation*} \varphi\big(\bar{x}(T)\big)\le\varphi\big(x(T)\big)\;\textrm{ whenever }\;\int_0^T\Big(\big\|\dot x(t)-\dot{\bar{x}}(t)\big\|^2+\big\|u(t)-\bar{u}(t)\big\|^2\Big)dt<\epsilon, \end{equation*} where $u(\cdot)$ is a measurable control with $u(t)\in\mbox{\rm co}\, U$ a.e.\ on $[0,T]$, and where $x(\cdot)$ is a trajectory of the convexified inclusion \eqref{conv} that can be strongly approximated in $W^{1,2}([0,T];\mathbb{R}^n)$ by feasible trajectories of $(P)$ generated by piecewise constant controls $u_m(\cdot)$ on $[0,T]$ with \begin{equation*} \int_0^T\|u_m(t)-u(t)\|^2dt\to 0\;\textrm{ as }\;m\to\infty. \end{equation*} \end{definition}\vspace*{-0.05in} Since step functions are dense in the space $L^2([0,T];\mathbb{R}^d)$, we obviously have that there is no difference between $W^{1,2}\times L^2$-local minimizers for $(P)$ and their relaxed counterparts provided that the sets $g(x,U)$ and $U$ are {\em convex}, which is not assumed in what follows.
Moreover, it is possible to deduce from the proofs of \cite[Theorem~2]{et} and \cite[Theorem~4.2]{Tol} that any strong local minimizer for $(P)$ is automatically a relaxed one under the assumptions made, but we are not going to pursue this issue here. Consider further a set-valued mapping $F\colon C\times U\rightrightarrows\mathbb{R}^n$ defined by \begin{equation}\label{F0} F(x,u):=N(x;C)-g(x,u)\textrm{ for all }x\in C,\;u\in U, \end{equation} and deduce from Motzkin's theorem of the alternative the representation \begin{equation}\label{F} F(x,u)=\Big\{\sum_{j\in I(x)}\lambda^j x^j_*\;\Big|\;\lambda^j\ge 0\Big\}-g(x,u),\quad x\in C,\;u\in U. \end{equation}\vspace*{-0.35in} \section{Discrete Approximations of Feasible Solutions}\label{disc-giov} \setcounter{equation}{0}\vspace*{-0.1in} In this section we start developing the {\em method of discrete approximations} to study the sweeping control problem $(P)$ under our standing assumptions. For simplicity, consider the standard explicit Euler scheme for the replacement of the time derivative in \eqref{Problem} by \begin{equation*} \dot x(t)\approx\frac{x(t+h)-x(t)}{h}\textrm{ as }h\downarrow 0, \end{equation*} which we formalize as follows. For any $m\in\mathbb{N}$ denote by \begin{equation*} \Delta_m:=\big\{0=t^0_m<t^1_m<\ldots<t^{2^m}_m=T\big\}\textrm{ with }h_m:=t^{i+1}_m-t^i_m \end{equation*} the discrete mesh on $[0,T]$ and define the sequence of discrete-time systems \begin{equation}\label{e:3.4} x^{i+1}_m\in x^i_m-h_m F(x_m^{i},u_m^{i}),\;i=0,\ldots,2^m-1, \end{equation} where we have $u_m^{i}\in U$ due to the definition of $F$ in \eqref{F0}. Let $I_m^i:=[t_m^{i-1},t_m^i)$. The next result provides a constructive approximation of {\em any} feasible process for $(P)$ by feasible solutions to \eqref{e:3.4} that are appropriately extended to the continuous-time interval $[0,T]$.
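In computations, discretizations of the perturbed sweeping process such as \eqref{e:3.4} are typically resolved by Moreau's catching-up scheme, which projects the explicit Euler step back onto $C$ (an implicit variant in which the normal cone is evaluated at the new point). The following sketch is illustrative only: the box-shaped polyhedron, the drift $g$, and all names are our own assumptions, not data from the paper.

```python
import numpy as np

def catching_up(x0, u, g, proj_C, T, steps):
    # x_{i+1} = proj_C(x_i + h * g(x_i, u(t_i))): the sweeping term
    # -N(x;C) enters only through the projection onto C.
    h = T / steps
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for i in range(steps):
        x = proj_C(x + h * g(x, u(i * h)))
        traj.append(x.copy())
    return traj

# Illustrative data: C = [-1,1]^2 and a constant control pushing right.
proj_box = lambda x: np.clip(x, -1.0, 1.0)
g = lambda x, u: u
u = lambda t: np.array([1.0, 0.0])
traj = catching_up([0.0, 0.0], u, g, proj_box, T=2.0, steps=200)
# The state drifts right until it hits the facet x_1 = 1 and then stays
# on it: the hidden state constraint x(t) in C is enforced automatically.
```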
This result plays a major role in the entire subsequent procedure to derive necessary optimality conditions for $(P)$ while certainly being of its independent interest. Recall that a {\em representative} of a given measurable function on $[0,T]$ is a function that agrees with the given one for a.e.\ $t\in[0,T]$.\vspace*{-0.07in} \begin{theorem}\label{Thm3.1} Let $(\bar{x}(\cdot),\bar{u}(\cdot))$ be a feasible pair for problem $(P)$ such that $\bar{x}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)$ and that $\bar{u}(\cdot)$ is of bounded variation $($BV$)$ while admitting a right continuous representative on $[0,T]$, which we keep denoting by $\bar{u}(\cdot)$. In addition to {\rm(H1)--(H3)}, suppose that the mapping $g(x,u)$ is locally Lipschitzian in both variables around $(\bar{x}(t),\bar{u}(t))$ for all $t\in[0,T]$. Then for each $i=1,\ldots,2^m$ there exist sequences of unit vectors $z^{ji}_m\in\mathbb{R}^n$, real numbers $c^{ji}_m$, and state-control pairs $(x_m(t),u_m(t))$, $0\le t\le T$, such that \begin{equation}\label{3.1a} z^{ji}_m\to x^j_*\textrm{ and }c^{ji}_m\to c_j\textrm{ as }\;m\to\infty \end{equation} and the following properties are fulfilled:\\ {\bf (a)} The sequence of control mappings $u_m\colon[0,T]\to U$, which are constant on each interval $I_m^i$, converges to $\bar{u}(\cdot)$ strongly in $L^2([0,T];\mathbb{R}^d)$ and pointwise on $[0,T]$.\\ {\bf (b)} The sequence of continuous state mappings $x_m\colon[0,T]\to\mathbb{R}^n$, which are affine on each interval $I_m^i$, converges strongly in $W^{1,2}([0,T];\mathbb{R}^n)$ to $\bar{x}(\cdot)$ while satisfying the inclusions \begin{equation}\label{3.1b} x_m(t_m^i)=\bar{x}(t_m^i)\in C_m^i\textrm{ for each }i=1,\ldots,2^m\textrm{ with }x_m(0)=x_0, \end{equation} where the perturbed polyhedra $C_m^i$ are given by \begin{equation}\label{3.1C} C_m^i:=\bigcap_{j=1}^s\big\{x\in\mathbb{R}^n\;\big|\;\langle
z_m^{ji},x\rangle\le c_m^{ji}\big\}\;\textrm{ for }\;i=1,\ldots,2^m\;\textrm{ with }\;C_m^0:=C.
\end{equation}
{\bf (c)} For all $t\in(t_m^{i-1},t_m^i)$ and $i=1,\ldots,2^m$ we have the differential inclusions
\begin{equation}\label{3.1c}
\dot{x}_m(t)\in-N\big(x_m(t_m^{i});C_m^i\big)+g\big(x_m(t_m^{i}),u_m(t)\big).
\end{equation}
\end{theorem}\vspace*{-0.05in}

As a part of the proof of Theorem~\ref{Thm3.1}, we establish the following lemma, which is of its own interest.\vspace*{-0.07in}

\begin{lemma}\label{lemma1} Given a feasible solution $(\bar{x}(\cdot),\bar{u}(\cdot))$ to $(P)$ under the assumptions of Theorem~{\rm\ref{Thm3.1}}, we have:

{\bf (i)} $\bar{x}(\cdot)$ is Lipschitz continuous on $[0,T]$ and right differentiable for every $t\in[0,T]$, and its right derivative, denoted by $\dot{\bar{x}}(\cdot)$, is also right continuous on $[0,T]$.

{\bf (ii)} The sweeping differential inclusion
\begin{equation*}
\dot{\bar{x}}(t)\in-N\big(\bar{x}(t);C\big)+g\big(\bar{x}(t),\bar{u}(t)\big),
\end{equation*}
with $\dot{\bar{x}}(t)$ taken from {\rm(i)} and the right continuous representative of $\bar{u}(t)$, is satisfied for each $t\in[0,T]$.
\end{lemma}\vspace*{-0.07in}

{\bf Proof.} Considering the differential inclusion
\begin{equation*}
\dot{x}(t)\in-N\big(x(t);C\big)+g\big(x(t),\bar{u}(t)\big),\quad x(0)=x_0\in C,
\end{equation*}
we deduce from, e.g., \cite[Propositions~3.8 and 3.12]{Bre} that it admits one and only one Lipschitz continuous solution on $[0,T]$, which therefore agrees with the given trajectory $\bar{x}(\cdot)$. Furthermore, $\bar{x}(\cdot)$ is a unique solution of the differential inclusion
\begin{equation*}
\dot{x}(t)\in-N\big(x(t);C\big)+g\big(\bar{x}(t),\bar{u}(t)\big)\;\textrm{ a.e.\ }t\in[0,T],\quad x(0)=x_0\in C,
\end{equation*}
where the perturbation term depends only on $t$.
Then the assumptions imposed on $\bar{u}(\cdot)$ and $g$ ensure that the mapping $t\mapsto g(\bar{x}(t),\bar{u}(t))$ is BV on $[0,T]$. The result of \cite[Proposition~3.3]{Bre} tells us that $\bar{x}(\cdot)$ is right differentiable at each $t\in[0,T)$ and satisfies the equalities
\begin{equation}\label{i2}
\dot{\bar{x}}(t)=g\big(\bar{x}(t),\bar{u}(t)\big)-\mbox{\rm proj}\,_{N(\bar{x}(t);C)}\big(g(\bar{x}(t),\bar{u}(t))\big)=\mbox{\rm proj}\,_{T(\bar{x}(t);C)}\big(g(\bar{x}(t),\bar{u}(t))\big)\;\textrm{ a.e.\ }t\in[0,T]
\end{equation}
written via the (unique) projection onto the closed convex set $N(\bar{x}(t);C)$, where the second equality can be easily verified. Our goal is to show that $\dot{\bar{x}}(t)$ is right continuous on $[0,T]$ while satisfying \eqref{i2} {\em for each} $t\in[0,T]$. Observe first that, thanks to LICQ, the polyhedron $C$ has nonempty interior, and so the normal cone $N(x;C)$ is pointed at each $x\in\partial C$. Denote $v(t):=g(\bar{x}(t),\bar{u}(t))$, fix $0\le\bar{t}<T$, and let $t_k\to\bar{t}^+$ as $k\to\infty$. We need to verify that $v(t_k)\to v(\bar{t})$, which is equivalent by \eqref{i2} to
\begin{equation}\label{i3}
\mbox{\rm proj}\,_{N(\bar{x}(\bar{t});C)}\big(v(\bar{t})\big)=\underset{k\to\infty}{\lim}\mbox{\rm proj}\,_{N(\bar{x}(t_k);C)}\big(v(t_k)\big).
\end{equation}
Note that there is nothing to prove if $\bar{x}(\bar{t})\in{\rm int}\,C$, since $N(\bar{x}(t_k);C)=\{0\}$ for all $k$ sufficiently large. To proceed further, assume that $\bar{x}(\bar{t})\in{\rm bd}\,C$ and observe easily that $N(\bar{x}(t_k);C)\subset N(\bar{x}(\bar{t});C)$ for all large $k$. Consider now the following three possible cases:

{\bf(1)} If $v(\bar{t})\in T(\bar{x}(\bar{t});C)$, then $\langle v(\bar{t}),x_{\ast}^j\rangle\le 0$ for all $j\in I(\bar{x}(\bar{t}))$, and so $\lim_{k\to\infty}\langle v(t_k),x_{\ast}^j\rangle\le 0$ for all $j\in I(\bar{x}(\bar{t}))$.
Since $N(\bar{x}(t_k);C)\subset N(\bar{x}(\bar{t});C)$, we get \eqref{i3}.

{\bf(2)} If $v(\bar{t})\in N(\bar{x}(\bar{t});C)$, then arguing similarly to (1) and using the second equality in \eqref{i2} show that $\mathrm{proj}_{T(\bar{x}(t_k);C)}\big(v(t_k)\big)\to 0=\mathrm{proj}_{T(\bar{x}(\bar{t});C)}\big(v(\bar{t})\big)$ as $k\to\infty$, which verifies \eqref{i3} directly.

{\bf(3)} Let now $v(\bar{t})\notin T(\bar{x}(\bar{t});C)\cup N(\bar{x}(\bar{t});C)$. Then LICQ ensures the unique representation
\begin{equation}\label{stella}
\mathrm{proj}_{N(\bar{x}(\bar{t});C)}\big(v(\bar{t})\big)=\sum_{j\in I(\bar{x}(\bar{t}))}\lambda_j(\bar{t})x_{\ast}^j,
\end{equation}
where $\lambda_j(\bar{t})\ge 0$ for all $j\in I(\bar{x}(\bar{t}))$. If in this case $\lambda_j(\bar{t})=0$ for some $j\in I(\bar{x}(\bar{t}))$, then
$$
\big\langle\mathrm{proj}_{T(\bar{x}(\bar{t});C)}\big(v(\bar{t})\big),x_{\ast}^j\big\rangle<0
$$
due to the LICQ assumption. This implies that $\langle\bar{x}(t_k),x_{\ast}^j\rangle\le\langle\bar{x}(\bar{t}),x_{\ast}^j\rangle$ for all $k$ sufficiently large. Consequently, the corresponding vector $x_{\ast}^j$ appears being multiplied by zero in the representation
\begin{equation}\label{stella2}
\mathrm{proj}_{N(\bar{x}(t_k);C)}\big(v(t_k)\big)=\sum_{j\in I(\bar{x}(t_k))}\lambda_j(t_k)x_{\ast}^j.
\end{equation}
Recalling that $I(\bar{x}(t_k))\subset I(\bar{x}(\bar{t}))$ for all large $k$, it turns out that the set of active indices in \eqref{stella2} is the same as in \eqref{stella}. Finally, it follows from \cite[Theorems~2.1 and 4.1]{rob} under the imposed LICQ that the coefficients $\lambda_j(\cdot)$ are continuous with respect to $v(\cdot)$. This verifies \eqref{i3} in case (3). $\Box$\vspace*{0.02in}

Now we are ready to proceed with the proof of the major Theorem~\ref{Thm3.1}.\\[1ex]
{\bf Proof of Theorem~\ref{Thm3.1}}.
Fix $m\in\mathbb{N}$ and for all $t\in[t_m^i,t_m^{i+1})$ and $i=0,\ldots,2^m-1$ define
\begin{equation*}
u_m(t):=\bar{u}(t_m^{i+1}),\quad x_m(t):=\bar{x}(t^i_m)+(t-t_m^i)\frac{\bar{x}(t_m^{i+1})-\bar{x}(t_m^i)}{h_m}.
\end{equation*}
Then denote $\omega_m(t):=\dot{x}_m(t)$, for which we have the representation
\begin{equation*}
\omega_m(t)=\omega^i_m:=\frac{\bar{x}(t^{i+1}_m)-\bar{x}(t^i_m)}{h_m}\;\textrm{ whenever }\;t\in[t^i_m,t^{i+1}_m),\quad i=0,\ldots,2^m-1.
\end{equation*}
It follows from the right continuity of $\bar{u}$ that $u_m(t)\to\bar{u}(t)$ as $m\to\infty$ for all $t\in[0,T)$. Hence we get that $u_m(\cdot)\to\bar{u}(\cdot)$ strongly in $L^2(0,T)$ by the dominated convergence theorem, which verifies (a).

To prove (b) and (c), let $\bar{t}$ be a nodal point of the $m$-th mesh, which by construction remains a nodal point for every $m'$-mesh with $m'\ge m$. Denote by $i_m(\bar{t})$ the index $i$ such that $\bar{t}=i\frac{T}{2^m}$ and observe by Lemma~\ref{lemma1} that
\begin{equation}\label{lim3.1}
\lim_{m\to\infty}\omega^{i_m(\bar{t})}_m=\dot{\bar{x}}(\bar{t})\;\textrm{ and }\;\lim_{m\to\infty}\|\omega_m-\dot{\bar{x}}\|_{L^2(0,T)}=0.
\end{equation}
Indeed, the first equality in \eqref{lim3.1} is a consequence of the construction of $\omega_m(t)$ and the right continuity of the derivative $\dot{\bar{x}}(t)$ on $[0,T]$ by Lemma~\ref{lemma1}(i). This in turn yields the second equality therein by basic real analysis and thus justifies the strong $W^{1,2}$ convergence of $x_m(\cdot)$ to $\bar{x}(\cdot)$. To proceed further, for any $t\in[0,T)$ denote $\tau_m(t):=\min\{t_m^i\;|\;t\le t_m^i\}$ and get by assertion (a) that
\begin{eqnarray*}
\dot{x}_m(t)-g\big(x_m(\tau_m(t)),u_m(t)\big)=\omega_m\big(\tau_m(t)\big)-g\big(x_m(\tau_m(t)),u_m(t)\big)\rightarrow\dot{\bar{x}}(t)-g\big(\bar{x}(t),\bar{u}(t)\big)\in-N\big(\bar{x}(t);C\big)
\end{eqnarray*}
as $m\to\infty$.
Thus there are unit vectors $z_m^{ji}\in\mathbb{R}^n$ and constants $c_m^{ji}\in\mathbb{R}$ as $j=1,\ldots,s$ and $i=1,\ldots,2^m$ satisfying \eqref{3.1a} and the relationships in \eqref{3.1b}, \eqref{3.1c} with the sets $C_m^i$ defined in \eqref{3.1C}. $\Box$\vspace*{-0.2in}

\section{Discrete Approximations of Local Optimal Solutions}\label{dis-app-opt}
\setcounter{equation}{0}\vspace*{-0.1in}

As seen above, Theorem~\ref{Thm3.1} provides a constructive discrete approximation of {\em any} feasible solution to problem $(P)$ by feasible solutions to discrete-time problems, with no connections to optimization. The main goal here is to study a {\em given} local optimal solution to $(P)$ by using discrete approximations as a {\em vehicle} to derive further necessary optimality conditions for it. To proceed in this direction, we construct a sequence of discrete-time optimization problems such that their optimal solutions always exist and strongly converge, in the sense below, to the given local minimizer of the original sweeping control problem. Our main attention in this section is paid to {\em relaxed $W^{1,2}\times L^2$-local minimizers} $(\bar{x}(\cdot),\bar{u}(\cdot))$ for $(P)$ introduced in Definition~\ref{relaxed}, while recalling that the relaxation is not needed if either the set $g(x,U)$ is convex, or $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a {\em strong} local minimizer for $(P)$; see the discussions in Section~\ref{stand}.
Given a relaxed $W^{1,2}\times L^2$-local minimizer $(\bar{x}(\cdot),\bar{u}(\cdot))$, we construct the following family of discrete-time problems $(P_m)$, $m\in\mathbb{N}$, where $F$ is defined in \eqref{F0}, and where $z_m^{ji},c_m^{ji}$ are taken from Theorem~{\rm\ref{Thm3.1}}:
\begin{equation}\label{e:3.15}
\textrm{minimize }\;J_m[x_m,u_m]:=\varphi\big(x_m(T)\big)+\frac{1}{2}\sum_{i=0}^{2^m-1}\int_{t^i_m}^{t^{i+1}_m}\left(\left\|\frac{x^{i+1}_m-x^i_m}{h_m}-\dot{\bar{x}}(t)\right\|^2+\left\|u^{i}_m-\bar{u}^{i}\right\|^2\right)dt
\end{equation}
over discrete trajectories $(x_m,u_m)=(x^0_m,x^1_m,\ldots,x^{2^m}_m,u^0_m,u^1_m,\ldots,u^{2^m-1}_m)$ subject to the constraints
\begin{equation}\label{e:3.16}
x^{i+1}_m\in x^i_m-h_m F(x^i_m,u^i_m)\;\textrm{ for }\;i=0,\ldots,2^m-1,
\end{equation}
\begin{equation*}
\langle z_m^{ji},x^{i}_m\rangle\le c_m^{ji}\;\textrm{ for all }\;j=1,\ldots,s\;\textrm{ and }\;i=1,\ldots,2^m\;\text{ with }\;x^0_m:=x_0\in C,\;u^0_m:=\bar{u}(0),
\end{equation*}
\begin{equation}\label{e:3.18}
\sum_{i=0}^{2^m-1}\int_{t^i_m}^{t^{i+1}_m}\left(\left\|\frac{x^{i+1}_m-x^i_m}{h_m}-\dot{\bar{x}}(t)\right\|^2+\left\|u_m^i-\bar{u}^i\right\|^2\right)dt\le\frac{\epsilon}{2},
\end{equation}
\begin{equation}\label{e:3.18+}
u^i_m\in U\;\textrm{ for }\;i=0,\ldots,2^m-1.
\end{equation}
To implement the method of discrete approximations, we have to make sure that each problem $(P_m)$ admits an optimal solution. By taking into account Theorem~\ref{Thm3.1}, we deduce it from the classical Weierstrass existence theorem in finite dimensions due to the construction of $(P_m)$ and the assumptions made.\vspace*{-0.07in}

\begin{proposition}\label{ThmExis2} In addition to the assumptions of Theorem~{\rm\ref{Thm3.1}}, suppose that the cost function $\varphi$ is lower semicontinuous $($l.s.c.$)$ on bounded sets.
Then each problem $(P_m)$ admits an optimal solution provided that $m\in\mathbb{N}$ is sufficiently large.
\end{proposition}\vspace*{-0.07in}

{\bf Proof.} It follows from Theorem~\ref{Thm3.1} that the set of feasible solutions $(x_m,u_m)$ to $(P_m)$ is nonempty for any large $m$. It follows from the constraint structures in $(P_m)$ and the assumptions imposed on $U$ and $g$ that the feasible sets are closed. Furthermore, it is easy to deduce from the localization in \eqref{e:3.18} that the feasible sets are bounded as well. Thus the lower semicontinuity assumption on the cost function $\varphi$ ensures the existence of optimal solutions to $(P_m)$ by the Weierstrass theorem. $\Box$\vspace*{0.02in}

Now we are ready to derive the main result of this section, which establishes the strong $W^{1,2}$ convergence of any sequence $(\bar{x}_m(\cdot),\bar{u}_m(\cdot))$ of optimal solutions to $(P_m)$, extended to the entire interval $[0,T]$, to the given local minimizer $(\bar{x}(\cdot),\bar{u}(\cdot))$ for the original problem $(P)$.\vspace*{-0.07in}

\begin{theorem}\label{ThmStrong} Let $(\bar{x}(\cdot),\bar{u}(\cdot))$ be a relaxed $W^{1,2}\times L^2$-local minimizer for the sweeping control problem $(P)$, and let $\varphi$ be continuous around $\bar{x}(T)$ in addition to the assumptions of Theorem~{\rm\ref{Thm3.1}}. Consider any sequence of optimal solutions $(\bar{x}_m(\cdot),\bar{u}_m(\cdot))$ to problems $(P_m)$ and extend them to $[0,T]$ piecewise linearly for $\bar{x}_m(\cdot)$ and piecewise constantly for $\bar{u}_m(\cdot)$ without relabeling. Then we have the convergence
\begin{equation*}
\big(\bar{x}_m(\cdot),\bar{u}_m(\cdot)\big)\to\big(\bar{x}(\cdot),\bar{u}(\cdot)\big)\;\textrm{ as }\;m\to\infty
\end{equation*}
in the strong topology of $W^{1,2}([0,T];\mathbb{R}^n)\times L^2([0,T];\mathbb{R}^d)$.
\end{theorem}\vspace*{-0.07in}

{\bf Proof}.
It is sufficient to show that
\begin{equation}\label{e:6.10}
\underset{m\to\infty}{\lim}\int_0^T\left(\left\|\dot{\bar{x}}_m(t)-\dot{\bar{x}}(t)\right\|^2+\left\|\bar{u}_m(t)-\bar{u}(t)\right\|^2\right)dt=0.
\end{equation}
Arguing by contradiction, suppose that there exists a subsequence of the integral values $\gamma_m$ in \eqref{e:6.10} that converges, without relabeling, to some number $\gamma>0$. Due to \eqref{e:3.18}, the sequence of extended optimal solutions $\{(\dot{\bar{x}}_m(\cdot),\bar{u}_m(\cdot))\}$ to $(P_m)$ is bounded in the reflexive space $L^2([0,T];\mathbb{R}^n)\times L^2([0,T];\mathbb{R}^d)$, and thus it contains a weakly convergent subsequence in this product space, again without relabeling. Denote by $(\widetilde v(\cdot),\widetilde u(\cdot))$ the limit of the latter subsequence and then let
\begin{equation*}
\widetilde{x}(t):=x_0+\int_0^t\widetilde v(\tau)\,d\tau\;\textrm{ for all }\;t\in[0,T].
\end{equation*}
Since $\dot{\widetilde{x}}(t)=\widetilde v(t)$ for a.e.\ $t\in[0,T]$, we have that
\begin{equation*}
\big(\bar{x}_m(\cdot),\bar{u}_m(\cdot)\big)\to\big(\widetilde x(\cdot),\widetilde u(\cdot)\big)\;\textrm{ as }\;m\to\infty
\end{equation*}
weakly in $W^{1,2}([0,T];\mathbb{R}^n)\times L^2([0,T];\mathbb{R}^d)$. Invoking the Mazur weak closure theorem tells us that there is a sequence of convex combinations of $(\bar{x}_m(\cdot),\bar{u}_m(\cdot))$ which converges to $(\widetilde x(\cdot),\widetilde u(\cdot))$ strongly in $W^{1,2}([0,T];\mathbb{R}^n)\times L^2([0,T];\mathbb{R}^d)$, and thus $(\dot{\bar{x}}_m(t),\bar{u}_m(t))\to(\dot{\widetilde x}(t),\widetilde u(t))$ for a.e.\ $t\in[0,T]$ along a subsequence.
Furthermore, we can clearly replace above the piecewise linear extensions of the discrete trajectories $\bar{x}_m(\cdot)$ to the interval $[0,T]$ by the trajectories of \eqref{Problem} generated by the controls $\bar{u}_m(\cdot)$ piecewise constantly extended to $[0,T]$. The obtained pointwise convergence of convex combinations allows us to conclude that $\widetilde u(t)\in\mbox{\rm co}\, U$ for a.e.\ $t\in[0,T]$ and that $\widetilde x(\cdot)$ satisfies the convexified differential inclusion \eqref{conv}. Passing now to the limit as $m\to\infty$ in the cost functional and constraints \eqref{e:3.15}--\eqref{e:3.18+} of problem $(P_m)$, and taking into account the assumed local continuity of $\varphi$ and the constructions above, we conclude that the pair $(\widetilde x(\cdot),\widetilde u(\cdot))$ belongs to the prescribed $W^{1,2}\times L^2$-neighborhood of the given local minimizer $(\bar{x}(\cdot),\bar{u}(\cdot))$ and satisfies the inequality
\begin{equation}\label{contr}
J[\widetilde{x},\widetilde{u}]+\gamma/2\le J[\bar{x},\bar{u}]\Longrightarrow J[\widetilde{x},\widetilde{u}]<J[\bar{x},\bar{u}]
\end{equation}
due to the aforementioned strong convergence of $(\bar{x}_m(\cdot),\bar{u}_m(\cdot))$ to $(\widetilde x(\cdot),\widetilde u(\cdot))$ and the structure of \eqref{e:3.15}. Appealing to Definition~\ref{relaxed} tells us that \eqref{contr} contradicts the very fact that $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a relaxed $W^{1,2}\times L^2$-local minimizer of $(P)$. Thus we get \eqref{e:6.10} and complete the proof of the theorem.
$\Box$\vspace*{0.01in}

Recalling the discussion after Definition~\ref{relaxed} leads us to the following consequence of Theorem~\ref{ThmStrong}, which provides the strong approximation of local minimizers for $(P)$ without an explicit relaxation.\vspace*{-0.07in}

\begin{corollary}\label{ThmStrong1} In addition to the assumptions of Theorem~{\rm\ref{ThmStrong}}, suppose that the sets $g(x,U)$ and $U$ are convex. Then the convergence result of Theorem~{\rm\ref{ThmStrong}} holds true.
\end{corollary}\vspace*{-0.3in}

\section{Tools of Variational Analysis}\label{gen-diff}
\setcounter{equation}{0}\vspace*{-0.1in}

The results of Section~\ref{dis-app-opt} make a bridge between the given local minimizer $(\bar{x}(\cdot),\bar{u}(\cdot))$ of the original problem $(P)$ and (global) optimal solutions for the sequence of discrete approximations $(P_m)$ that exist by Proposition~\ref{ThmExis2} and strongly converge to $(\bar{x}(\cdot),\bar{u}(\cdot))$ by Theorem~\ref{ThmStrong}. This supports our approach to derive necessary optimality conditions for $(\bar{x}(\cdot),\bar{u}(\cdot))$ by establishing first necessary conditions for optimal solutions to the discrete-time problems $(P_m)$ and then passing to the limit in them as $m\to\infty$. Looking at the structure of each problem $(P_m)$ and of the equivalent problem of finite-dimensional mathematical programming defined in Section~\ref{nc-disc}, we observe that they are always {\em nonsmooth} and {\em nonconvex}, even when the initial data of $(P)$ enjoy smoothness and convexity properties. This is due to the {\em graphical} set constraints associated with the discrete-time inclusions \eqref{e:3.16} that are generated by the normal cone mapping in \eqref{F0}. To proceed with deriving necessary optimality conditions for $(P_m)$ and then for $(P)$ by passing to the limit, we have to employ appropriate generalized differential constructions of variational analysis.
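The intrinsic nonconvexity of these graphical constraints is visible already in the simplest scalar case. The toy check below (our own illustration, with names of our own choosing) tests membership in the graph of the normal cone mapping $x\mapsto N(x;[0,1])$ and exhibits two graph points whose midpoint leaves the graph.

```python
def normal_cone_contains(x, v, lo=0.0, hi=1.0):
    # Membership test v in N(x; C) for the interval C = [lo, hi] in R:
    # N = {0} in the interior, a half-line at each endpoint,
    # and the empty set off C.
    if x < lo or x > hi:
        return False
    if lo < x < hi:
        return v == 0.0
    return v <= 0.0 if x == lo else v >= 0.0

def in_graph(point):
    x, v = point
    return normal_cone_contains(x, v)

# two points of gph N(.;[0,1]) whose midpoint is not in the graph
a = (0.0, -1.0)   # v = -1 lies in N(0;[0,1]) = (-inf, 0]
b = (1.0,  0.0)   # v =  0 lies in N(1;[0,1]) = [0, +inf)
mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # = (0.5, -0.5)
```

Since $N(0.5;[0,1])=\{0\}$ while the midpoint carries the nonzero vertical component $-0.5$, the graph fails convexity, which is exactly why convex-analytic tools are insufficient here.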
These constructions should be {\em robust}, enjoy comprehensive {\em calculus rules}, and be such that the corresponding normal cone is {\em not too large} when applied, specifically, to graphical sets. The latter requirement fails for the Clarke normal cone $\overline N$, which is always a linear subspace of maximum dimension for sets homeomorphic to graphs of Lipschitzian functions; see \cite{m-book1,rw} for more details and references. For example, we have $\overline N\big((0,0);\mbox{\rm gph}\,|x|\big)=\mathbb{R}^2$ for the graph of the simplest nonsmooth convex function on $\mathbb{R}$. All the required properties are satisfied for the generalized differential constructions initiated by the second author. Elements of the first-order theory and various applications can be found by now in many books; see, e.g., \cite{m-book1}--\cite{m18}, \cite{rw}, \cite{v}. We refer the reader to \cite{m-book2,m18} and the bibliographies therein for the second-order constructions used in what follows.

To briefly overview the needed notions, recall first the (Painlev\'e-Kuratowski) {\em outer limit} of a set-valued mapping/multifunction $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ at $\bar{x}$ with $F(\bar{x})\ne\emptyset$ given by
\begin{equation}\label{c53}
\underset{x\to\bar{x}}{\textrm{Lim sup }}F(x):=\big\{y\in\mathbb{R}^m\;\big|\;\exists\textrm{ sequences }\;x_k\to\bar{x},\;y_k\to y\;\textrm{ such that }\;y_k\in F(x_k),\;k\in\mathbb{N}\big\}.
\end{equation}
Given now a set $\Omega\subset\mathbb{R}^n$ locally closed around $\bar{x}\in\Omega$, we define by using \eqref{c53} the (basic, limiting, Mordukhovich) {\em normal cone} to $\Omega$ at $\bar{x}$ by
\begin{equation}\label{c54}
N(\bar{x};\Omega)=N_\Omega(\bar{x}):=\underset{x\to\bar{x}}{\textrm{Lim sup}}\big\{\textrm{cone}[x-\Pi(x;\Omega)]\big\},
\end{equation}
where $\Pi(x;\Omega):=\big\{u\in\Omega\;\big|\;\|x-u\|=\mbox{\rm dist}(x;\Omega)\big\}$ is the Euclidean projection of $x$ onto $\Omega$, and where `cone' stands for the (nonconvex) conic hull of the set. When $\Omega$ is convex, \eqref{c54} reduces to the normal cone of convex analysis, but it is often nonconvex otherwise. Given further a set-valued mapping $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ with its domain and graph
\begin{equation*}
\mbox{\rm dom}\, F:=\big\{x\in\mathbb{R}^n\;\big|\;F(x)\ne\emptyset\big\}\;\textrm{ and }\;\mbox{\rm gph}\, F:=\big\{(x,y)\in\mathbb{R}^n\times\mathbb{R}^m\;\big|\;y\in F(x)\big\}
\end{equation*}
locally closed around $(\bar{x},\bar{y})\in\mbox{\rm gph}\, F$, the {\em coderivative} of $F$ at $(\bar{x},\bar{y})$ is generated by \eqref{c54} as
\begin{equation}\label{c55}
D^*F(\bar{x},\bar{y})(u):=\big\{v\in\mathbb{R}^n\;\big|\;(v,-u)\in N\big((\bar{x},\bar{y});\mbox{\rm gph}\, F\big)\big\},\quad u\in\mathbb{R}^m.
\end{equation}
When $F\colon\mathbb{R}^n\to\mathbb{R}^m$ is single-valued and continuously differentiable $({\cal C}^1$-smooth$)$ around $\bar{x}$, we have
\begin{equation*}
D^*F(\bar{x})(u)=\big\{\nabla F(\bar{x})^*u\big\}\;\textrm{ for all }\;u\in\mathbb{R}^m
\end{equation*}
via the adjoint/transposed Jacobian matrix $\nabla F(\bar{x})^*$, where $\bar{y}=F(\bar{x})$ is omitted. Let $\varphi\colon\mathbb{R}^n\to\overline{\mathbb{R}}:=(-\infty,\infty]$ be an extended-real-valued l.s.c.\ function with
\begin{equation*}
\mbox{\rm dom}\,\varphi:=\big\{x\in\mathbb{R}^n\;\big|\;\varphi(x)<\infty\big\}\;\textrm{ and }\;\mbox{\rm epi}\,\varphi:=\big\{(x,\alpha)\in\mathbb{R}^{n+1}\;\big|\;\alpha\ge\varphi(x)\big\}
\end{equation*}
standing for its domain and epigraph.
The (first-order) {\em subdifferential} of $\varphi$ at $\bar{x}\in\mbox{\rm dom}\,\varphi$ is defined geometrically via the normal cone \eqref{c54} by
\begin{equation}\label{sub1}
\partial\varphi(\bar{x}):=\big\{v\in\mathbb{R}^n\;\big|\;(v,-1)\in N\big((\bar{x},\varphi(\bar{x}));\mbox{\rm epi}\,\varphi\big)\big\}
\end{equation}
while admitting equivalent analytic representations; see, e.g., \cite{m-book1,rw}. Note that $N(\bar{x};\Omega)=\partial\delta(\bar{x};\Omega)$ for any $\bar{x}\in\Omega$, where $\delta(x;\Omega)$ denotes the indicator function of $\Omega$ equal to $0$ for $x\in\Omega$ and to $\infty$ otherwise. Then given a subgradient $\bar{v}\in\partial\varphi(\bar{x})$ and following \cite{m-book1,m18}, we define the {\em second-order subdifferential} (or {\em generalized Hessian}) of $\varphi$ at $\bar{x}$ relative to $\bar{v}$ by
\begin{equation*}
\partial^2\varphi(\bar{x},\bar{v})(u):=(D^*\partial\varphi)(\bar{x},\bar{v})(u),\quad u\in\mathbb{R}^n,
\end{equation*}
via the coderivative \eqref{c55} of the first-order subdifferential mapping $x\mapsto\partial\varphi(x)$ from \eqref{sub1}. If the function $\varphi$ is ${\cal C}^2$-smooth around $\bar{x}$, then we have the representation
\begin{equation*}
\partial^2\varphi(\bar{x},\bar{v})(u)=\big\{\nabla^2\varphi(\bar{x})u\big\}\;\textrm{ for all }\;u\in\mathbb{R}^n,
\end{equation*}
where $\nabla^2\varphi(\bar{x})$ stands for the classical (symmetric) Hessian of $\varphi$ at $\bar{x}$ with $\bar{v}=\nabla\varphi(\bar{x})$. If $\varphi(x):=\delta(x;\Omega)$, then $\partial^2\varphi(\bar{x},\bar{v})(u)=(D^*N_\Omega)(\bar{x},\bar{v})(u)$ for any $\bar{v}\in N(\bar{x};\Omega)$ and $u\in\mathbb{R}^n$. The latter second-order construction is evaluated below in the case of the polyhedral set $\Omega=C$ from \eqref{C}.
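In the ${\cal C}^2$-smooth case the second-order subdifferential thus acts as the ordinary Hessian applied to $u$, which is easy to confirm numerically. The sketch below is our own illustration, not part of the theory: it compares a central-difference approximation of $\nabla^2\varphi(\bar{x})u$ with the analytic Hessian for a model ${\cal C}^2$ function chosen for this example.

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # quadratic part of the model function

def grad_phi(x):
    # gradient of phi(x) = 0.5 * x^T A x + sin(x_0), a C^2 function
    g = A @ x
    g[0] += np.cos(x[0])
    return g

def second_order_fd(x, u, eps=1e-6):
    # central difference of the gradient in direction u approximates
    # the single-valued second-order subdifferential nabla^2 phi(x) u
    return (grad_phi(x + eps * u) - grad_phi(x - eps * u)) / (2 * eps)

xbar = np.array([0.3, -0.7])
u = np.array([1.0, 2.0])
H = A.copy()
H[0, 0] += -np.sin(xbar[0])   # analytic Hessian of phi at xbar
```

The agreement of `second_order_fd(xbar, u)` with `H @ u` reflects the smooth-case formula; for the indicator function of $C$, by contrast, no such pointwise formula is available, and the polyhedral calculus below takes its place.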
To proceed, define the index sets corresponding to the generating vectors $x^j_*$ in \eqref{C} by
\begin{equation}\label{c56}
I_0(w):=\big\{j\in I(x)\;\big|\;\langle x^j_*,w\rangle=c_j\big\}\;\textrm{ and }\;I_>(w):=\big\{j\in I(x)\;\big|\;\langle x^j_*,w\rangle>c_j\big\},\quad w\in\mathbb{R}^n,
\end{equation}
where $I(x)$ is taken from \eqref{aci} with $\bar{x}:=x\in C$. The next theorem provides an effective upper estimate of the coderivative of $F$ from \eqref{F0} with ensuring the equality under an additional assumption on the vectors $x^j_*$.\vspace*{-0.07in}

\begin{theorem}\label{Thm6.1} Given $F$ in \eqref{F0} with $C$ from \eqref{C}, denote $G(x):=N(x;C)$ and suppose, in addition to the standing assumptions, that $g$ is ${\cal C}^1$-smooth around the reference points. Then for any $(x,u)\in C\times U$ and $\omega+g(x,u)\in G(x)$ we have the coderivative upper estimate
\begin{equation}\label{c57}
D^*F(x,u,\omega)(w)\subset\Big\{z=\Big(-\nabla_x g(x,u)^*w+\sum_{j\in I_0(w)\cup I_>(w)}\gamma^j x^j_*,\;-\nabla_u g(x,u)^*w\Big)\Big\},
\end{equation}
where $w\in\mbox{\rm dom}\, D^*G(x,\omega+g(x,u))$, where $I_0(w)$ and $I_>(w)$ are taken from \eqref{c56}, and where $\gamma^j\in\mathbb{R}$ for $j\in I_0(w)$, while $\gamma^j\ge 0$ for $j\in I_>(w)$. Furthermore, \eqref{c57} holds as an equality, and the domain $\mbox{\rm dom}\, D^*G(x,\omega+g(x,u))$ can be computed by
\begin{equation}\label{c58}
\mbox{\rm dom}\, D^*G\big(x,\omega+g(x,u)\big)=\Big\{w\;\Big|\;\exists\,\lambda^j\ge 0\;\textrm{ with }\;\omega+g(x,u)=\sum_{j\in I(x)}\lambda^j x^j_*,\;\lambda^j>0\Longrightarrow\langle x^j_*,w\rangle=c_j\Big\}
\end{equation}
provided that the generating vectors $\{x^j_*\;|\;j\in I(x)\}$ of the polyhedron $C$ are linearly independent.
\end{theorem}\vspace*{-0.07in}

{\bf Proof}.
Picking any $w\in\mbox{\rm dom}\, D^*G(x,\omega+g(x,u))$ and $z\in D^*F(x,u,\omega)(w)$ and then denoting $\widetilde{G}(x,u):=G(x)$ and $\widetilde{f}(x,u):=-g(x,u)$, we deduce from \cite[Theorem~1.62]{m-book2} that
\begin{equation*}
z\in\nabla\widetilde{f}(x,u)^*w+D^*\widetilde{G}\big(x,u,\omega+g(x,u)\big)(w).
\end{equation*}
Observe then the obvious composition representation
\begin{equation*}
\widetilde{G}(x,u)=G\circ\widetilde{g}(x,u)\;\textrm{ with }\;\widetilde{g}(x,u):=x,
\end{equation*}
where the latter mapping has a surjective derivative. It follows from \cite[Theorem~1.66]{m-book2} that
\begin{equation}\label{59}
z\in\nabla\widetilde{f}(x,u)^*w+\nabla\widetilde{g}(x,u)^*D^*G\big(x,\omega+g(x,u)\big)(w).
\end{equation}
Employing now in \eqref{59} the coderivative estimate for the normal cone mapping $G$ obtained in \cite[Theorem~4.5]{hmn}, with the exact coderivative calculation given in \cite[Theorem~4.6]{hmn} under the linear independence of the generating vectors $x^j_*$, and also taking into account the structure of the mapping $\widetilde{f}$ in \eqref{59}, we arrive at \eqref{c57} and the equality therein under the aforementioned assumption. $\Box$\vspace*{-0.2in}

\section{Necessary Optimality Conditions for Discrete-Time Problems}\label{nc-disc}
\setcounter{equation}{0}\vspace*{-0.1in}

Here we derive necessary optimality conditions for solutions to each problem $(P_m)$, $m\in\mathbb{N}$, formulated in \eqref{e:3.15}--\eqref{e:3.18+}. It will be done by reducing each $(P_m)$ to a nondynamic problem of nondifferentiable programming with functional and many geometric constraints, then employing necessary optimality conditions for the latter problem obtained in terms of the generalized differential constructions of Section~\ref{gen-diff}, and finally expressing the obtained conditions in terms of the given data of $(P_m)$ by using calculus rules of generalized differentiation.
In this way we arrive at the following necessary conditions, which will be further specified below by applying the second-order calculations presented in Section~\ref{gen-diff}.\vspace*{-0.07in}

\begin{theorem}\label{Thm5.2*} Let $(\bar{x}_m,\bar{u}_m)=(\bar{x}^0_m,\ldots,\bar{x}^{2^m}_m,\bar{u}^0_m,\ldots,\bar{u}^{2^m-1}_m)$ be an optimal solution to problem $(P_m)$. Assume that $\mbox{\rm gph}\, F$ is closed and the function $\varphi$ is Lipschitz continuous around the point $\bar{x}_m(T)$. Then there are elements $\lambda_m\ge 0$, $\psi_m=(\psi^0_m,\ldots,\psi^{2^m-1}_m)$ with $\psi^i_m\in N(\bar{u}^i_m;U)$ as $i=0,\ldots,2^m-1$, $\xi_m=(\xi^{1}_m,\ldots,\xi^{s}_m)\in\mathbb{R}^s_+$, and $p^i_m\in\mathbb{R}^n$ as $i=0,\ldots,2^m$ satisfying the conditions
\begin{equation}\label{e:5.8*}
\lambda_m+\|\xi_m\|+\sum_{i=0}^{2^m-1}\|p^{i}_m\|+\|\psi_m\|\ne 0,
\end{equation}
\begin{equation}\label{xi}
\xi^{j}_m\big(\langle z^{j2^m}_m,x^{2^m}_m\rangle-c^{j2^m}_m\big)=0,\quad j=1,\ldots,s,
\end{equation}
\begin{equation}\label{mutx}
-p^{2^m}_m=\lambda_m\vartheta^{2^m}_m+\sum_{j=1}^s\xi^{j}_m z^{j2^m}_m\in\lambda_m\partial\varphi(\bar{x}^{2^m}_m)+\sum_{j=1}^s\xi^{j}_m z^{j2^m}_m,
\end{equation}
\begin{equation}\label{e:5.10*}
\begin{array}{ll}
&\displaystyle\Big(\frac{p^{i+1}_m-p^{i}_m}{h_m},-\frac{1}{h_m}\lambda_m\theta^{iu}_m,\frac{1}{h_m}\lambda_m\theta^{iy}_m-p^{i+1}_m\Big)\\[1ex]
&\in\displaystyle\Big(0,\frac{1}{h_m}\psi^i_m,0\Big)+N\Big(\Big(\bar{x}^i_m,\bar{u}^i_m,-\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}\Big);\mbox{\rm gph}\, F\Big)
\end{array}
\end{equation}
for $i=0,\ldots,2^m-1$, where we use the notation
\begin{equation}\label{theta}
\theta^i_m=\big(\theta^{iy}_m,\theta^{iu}_m\big):=\Big(\int_{t^i_{m}}^{t^{i+1}_{m}}\Big\|\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}-\dot{\bar{x}}(t)\Big\|
dt,\int_{t^i_{m}}^{t^{i+1}_{m}}\big\|\bar{u}^i_m-\bar{u}(t)\big\|dt\Big).
\end{equation}
\end{theorem}\vspace*{-0.07in}

{\bf Proof.} Denote $z:=(x^0_m,\ldots,x^{2^m}_m,u^0_m,\ldots,u^{2^m-1}_m,y^0_m,\ldots,y^{2^m-1}_m)\in\mathbb{R}^{(2\cdot 2^m+1)n+2^m\cdot d}$, where the starting point $x^0_m$ is fixed. Taking $\epsilon>0$ from $(P_m)$, consider the following problem of mathematical programming $(MP)$ with respect to the variable $z$:
\begin{equation*}
\textrm{minimize }\;\phi_0(z):=\varphi\big(x^{2^m}_m\big)+\frac{1}{2}\sum_{i=0}^{2^m-1}\int_{t^i_m}^{t^{i+1}_m}\left\|\big(y^i_m-\dot{\bar{x}}(t),u_m^i-\bar{u}(t)\big)\right\|^2dt
\end{equation*}
subject to finitely many equality, inequality, and geometric constraints given by
\begin{equation*}
\phi(z):=\sum_{i=0}^{2^m-1}\int_{t^i_m}^{t^{i+1}_m}\left\|\big(y^i_m,u^i_m\big)-\big(\dot{\bar{x}}(t),\bar{u}(t)\big)\right\|^2dt-\frac{\epsilon}{2}\le 0,
\end{equation*}
\begin{equation*}
g_i(z):=x^{i+1}_m-x^i_m-h_m y^i_m=0,\quad i=0,\ldots,2^m-1,
\end{equation*}
\begin{equation*}
h_{j}(z):=\langle z^{j2^m}_m,x^{2^m}_m\rangle-c^{j2^m}_m\le 0,\quad j=1,\ldots,s,
\end{equation*}
\begin{equation*}
z\in\Xi_i:=\big\{(x^0_m,\ldots,y^{2^m-1}_m)\in\mathbb{R}^{(2\cdot 2^m+1)n+2^m\cdot d}\;\big|\;-y^i_m\in F(x^i_m,u^i_m)\big\},\quad i=0,\ldots,2^m-1,
\end{equation*}
\begin{equation*}
z\in\Xi_{2^m}:=\big\{(x^0_m,\ldots,y^{2^m-1}_m)\in\mathbb{R}^{(2\cdot 2^m+1)n+2^m\cdot d}\;\big|\;x^0_m\;\textrm{ is fixed}\big\},
\end{equation*}
\begin{equation*}
z\in\Omega_i:=\big\{(x^0_m,\ldots,y^{2^m-1}_m)\in\mathbb{R}^{(2\cdot 2^m+1)n+2^m\cdot d}\;\big|\;u^i_m\in U\big\},\quad i=0,\ldots,2^m-1.
\end{equation*}
Necessary optimality conditions for problem $(MP)$ in terms of the generalized differential tools reviewed above can be deduced from \cite[Proposition~6.4 and Theorem~6.5]{m18}.
We specify them for the optimal solution $$ \bar{z}:=\left(\bar{x}^0_m,\ldots,\bar{x}^{2^m}_m,\bar{u}^0_m,\ldots,\bar{u}^{2^m-1}_m,\bar{y}^0_m,\ldots,\bar{y}^{2^m-1}_m\right) $$ to $(MP)$. It follows from Theorem~\ref{ThmStrong} that the inequality constraint in $(MP)$ defined by $\phi$ is inactive for large $m$, and so the corresponding multiplier does not appear in the optimality conditions. Thus we can find $\lambda_m\ge 0$, $\xi_m=(\xi^{1}_m,\ldots,\xi^{s}_m)\in\mathbb{R}^s_+$, $p^i_{m}\in\mathbb{R}^{n}$ as $i=1,\ldots,2^m$, and $$ z^*_i=\big(x^*_{0i},\ldots,x^*_{2^mi},u^*_{0i},\ldots,u^*_{(2^m-1)i},y^*_{0i},y^*_{1i},\ldots,y^*_{(2^m-1)i}\big),\quad i=0,\ldots,2^m, $$ which are not all zero simultaneously while satisfying the conditions \begin{equation}\label{69} z^*_i\in\left\{\begin{matrix} N(\bar{z};\Xi_i)+N(\bar{z};\Omega_i)\;\textrm{ if }\;i\in\big\{0,\ldots,2^m-1\big\},\\ N(\bar{z};\Xi_i)\;\textrm{ if }\;i=2^m, \end{matrix}\right. \end{equation} \begin{equation*} -z^*_0-\ldots-z^*_{2^m}\in\lambda_m\partial\phi_0(\bar{z})+\sum_{j=1}^{s}\xi^{j}_m\nabla h_{j}(\bar{z})+\sum_{i=0}^{2^m-1}\nabla g_i(\bar{z})^*p^{i+1}_m, \end{equation*} \begin{equation}\label{71+} \xi^{j}_m h_{j}(\bar{z})=0,\quad j=1,\ldots,s. \end{equation} Note that the first line in \eqref{69} comes by applying the normal cone intersection formula from \cite[Corollary~3.5]{m-book1} to $\bar{z}\in\Omega_i\cap\Xi_i$ for $i=0,\ldots,2^m-1$.
It follows from the structure of the sets $\Omega_i$ and $\Xi_i$ that the inclusions in \eqref{69} can be equivalently written as \begin{equation}\label{e:5.18*} \big(x^*_{ii},u^*_{ii}-\psi^{i}_m,-y^*_{ii}\big)\in N\Big(\Big(\bar{x}^i_m,\bar{u}^i_m,-\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}\Big);\mbox{\rm gph}\,F\Big)\;\textrm{ for }\;i=0,\ldots,2^m-1 \end{equation} with all other components of $z^*_i$ equal to zero, where $\psi^{i}_m\in N(\bar{u}^i_m;U)$ for all $i=0,\ldots,2^m-1$. Observe furthermore that $x^*_{02^m}$, determined by the normal cone to $\Xi_{2^m}$, is the only possibly nonzero component of $z^*_{2^m}$. This implies by using \eqref{69}--\eqref{71+} that $$ -z^*_0-\ldots-z^*_{2^m}\in\lambda_m\partial\phi_0(\bar{z})+\sum_{j=1}^{s}\xi^{j}_m\nabla h_{j}(\bar{z})+\sum_{i=0}^{2^m-1}\nabla g_i(\bar{z})^*p^{i+1}_m $$ with $\xi^{j}_m\left(\langle z^{j2^m}_m,x^{2^m}_m\rangle-c^{j2^m}_m\right)=0$, $j=1,\ldots,s$. Using the expressions for $\phi_0$, $g_i$, and $h_j$ above together with the elementary subdifferential sum rule from \cite[Proposition~1.107]{m-book1} gives the calculations $$ \Big(\sum_{j=1}^{s}\xi^{j}_m\nabla h_{j}(\bar{z})\Big)_{x^{2^m}_m}=\sum_{j=1}^{s}\xi^{j}_m z^{j2^m}_m, $$ $$ \Big(\sum_{i=0}^{2^m-1}\nabla g_i(\bar{z})^*p^{i+1}_m\Big)_{x^i_m}=\left\{\begin{matrix} -p^{1}_m\;\textrm{ if }\;i=0,\\ p^{i}_m-p^{i+1}_m\;\textrm{ if }\;i=1,\ldots,2^m-1,\\ p^{2^m}_m\;\textrm{ if }\;i=2^m, \end{matrix}\right.
$$ $$ \Big(\sum_{i=0}^{2^m-1}\nabla g_i(\bar{z})^*p^{i+1}_m\Big)_{y^i_m}=\left(-h_m p^{1}_m,-h_m p^{2}_m,\ldots,-h_m p^{2^m}_m\right), $$ $$ \partial\phi_0(\bar{z})=\partial\varphi(\bar{x}^{2^m}_m)+\frac{1}{2}\sum_{i=0}^{2^m-1}\nabla\rho_i(\bar{z})\;\textrm{ with }\;\rho_i(\bar{z}):=\int_{t^i_m}^{t^{i+1}_m}\Big\|\Big(\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}-\dot{\bar{x}}(t),\bar{u}^i_m-\bar{u}(t)\Big)\Big\|^2dt. $$ The set $\lambda_m\partial\phi_0(\bar{z})$ is represented as the collection of $$ \lambda_m\big(0,\ldots,0,\vartheta^{2^m}_m,\theta^{0u}_m,\ldots,\theta^{(2^m-1)u}_m,\theta^{0y}_m,\ldots,\theta^{(2^m-1)y}_{m}\big)\;\textrm{ with }\;\vartheta^{2^m}_m\in\partial\varphi(\bar{x}^{2^m}_m), $$ $$ (\theta^{iu}_m,\theta^{iy}_m)=\Big(\int_{t^i_m}^{t^{i+1}_m}\|\bar{u}^i_m-\bar{u}(t)\|dt,\;\int_{t^i_m}^{t^{i+1}_m}\Big\|\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}-\dot{\bar{x}}(t)\Big\|dt\Big),\quad i=0,\ldots,2^m-1. $$ Thus we obtain the following relationships \begin{equation}\label{daux} -x^*_{00}-x^*_{02^m}=-p^{1}_m, \end{equation} \begin{equation}\label{e:5.21*} -x^*_{ii}=p^{i}_m-p^{i+1}_m,\quad i=1,\ldots,2^m-1, \end{equation} \begin{equation}\label{e:5.24*} 0=\lambda_m\vartheta^{2^m}_m+p^{2^m}_m+\sum_{j=1}^{s}\xi^{j}_m z^{j2^m}_m\;\textrm{ with }\;\vartheta^{2^m}_m\in\partial\varphi(\bar{x}_m^{2^m}), \end{equation} \begin{equation}\label{e:5.22*} -u^*_{00}=\lambda_m\theta^{0u}_m\;\textrm{ and }\;-u^*_{ii}=\lambda_m\theta^{iu}_m,\quad i=1,\ldots,2^m-1, \end{equation} \begin{equation}\label{e:5.23*} -y^*_{ii}=\lambda_m\theta^{iy}_m-h_m p^{i+1}_m,\quad i=0,\ldots,2^m-1, \end{equation} which allow us to arrive at all the necessary optimality conditions claimed in the theorem. Indeed, observe first that \eqref{71+} yields \eqref{xi}.
Extending $p_m$ by $p^0_m:=x^*_{02^m}$ ensures that \eqref{mutx} follows from \eqref{e:5.24*}. Then we deduce from \eqref{e:5.21*}, \eqref{e:5.22*}, and \eqref{e:5.23*} that $$ \frac{x^*_{ii}}{h_m}=\frac{p^{i+1}_m-p^{i}_m}{h_m},\;\;\frac{u^*_{ii}}{h_m}=-\frac{1}{h_m}\lambda_m\theta^{iu}_m,\;\textrm{ and }\;\frac{y^*_{ii}}{h_m}=-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m. $$ Substituting this into the left-hand side of \eqref{e:5.18*} justifies the discrete-time adjoint inclusion \eqref{e:5.10*}. Finally, to verify \eqref{e:5.8*} we argue by contradiction and suppose that $\lambda_m=0$, $\xi_m=0$, $\psi_m=0$, and $p^{i}_m=0$ as $i=0,\ldots,2^m-1$, which yields $x^*_{02^m}=p^{0}_m=0$. Then it follows from \eqref{e:5.24*} that $p^{2^m}_m=0$, and so $p^{i}_m=0$ whenever $i=0,\ldots,2^m$. By \eqref{daux} and \eqref{e:5.21*} we get $x^*_{ii}=0$ for all $i=0,\ldots,2^m-1$. Using \eqref{e:5.22*} tells us that $u^*_{ii}=0$ as $i=1,\ldots,2^m-1$. Since the first condition in \eqref{e:5.22*} yields also $u^*_{00}=0$, it follows that $u^*_{ii}=0$ for $i=0,\ldots,2^m-1$. In addition, we have by \eqref{e:5.23*} that $y^*_{ii}=0$ for all $i=0,\ldots,2^m-1$. Remembering that the components of $z^*_i$ different from $(x^*_{ii},u^*_{ii},y^*_{ii})$ are zero for $i=0,\ldots,2^m-1$ ensures that $z^*_{i}=0$ for $i=0,\ldots,2^m-1$, and similarly $z^*_{2^m}=0$. Therefore $z^*_i=0$ for all $i=0,\ldots,2^m$, which violates the nontriviality condition for $(MP)$ and thus completes the proof.
$\Box$\vspace*{0.01in} The next theorem applies to \eqref{e:5.10*} the calculation result of Theorem~\ref{ThmStrong} and provides in this way necessary optimality conditions for problem $(P_m)$ expressed entirely via its initial data.\vspace*{-0.1in} \begin{theorem}\label{Thm6.2} Let $(\bar{x}_m,\bar{u}_m)$ be an optimal solution to problem $(P_m)$ formulated in \eqref{e:3.15}--\eqref{e:3.18+}, where the cost function $\varphi$ is locally Lipschitzian around $\bar{x}_m(T)$, and where the sweeping mapping $F$ is defined in \eqref{F0}. Using the notation and assumptions of Theorem~{\rm\ref{Thm6.1}}, take $(\theta^{iu}_m,\theta^{iy}_m)$ from \eqref{theta}. Then there exist dual elements $(\lambda_m,\psi_m,p_m)$ as in Theorem~{\rm\ref{Thm5.2*}} together with vectors $\eta^{i}_m\in\mathbb{R}^s_+$ for $i=0,\ldots,2^m$ and $\gamma^i_m\in\mathbb{R}^s$ for $i=0,\ldots,2^m-1$ satisfying the nontriviality condition \begin{equation}\label{ntc} \lambda_m+\|\eta^{2^m}_m\|+\sum_{i=0}^{2^m-1}\|p^{i}_m\|+\|\psi_m\|\ne 0, \end{equation} the primal-dual relationships given for all $i=0,\ldots,2^m-1$ and $j=1,\ldots,s$ by \begin{equation}\label{87} -\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}+g(\bar{x}^i_m,\bar{u}^i_m)=\sum_{j\in I(\bar{x}^i_m)}\eta^{ij}_m z^{ji}_m, \end{equation} \begin{equation}\label{conx} \begin{array}{ll} \displaystyle\frac{p^{i+1}_m-p^{i}_m}{h_m}&=-\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\Big(-\displaystyle\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m\Big)\\ &+\displaystyle\sum_{j\in I_0(p^{i+1}_m-\frac{1}{h_m}\lambda_m\theta^{iy}_m)\cup I_>(p^{i+1}_m-\frac{1}{h_m}\lambda_m\theta^{iy}_m)}\gamma^{ij}_m z^{ji}_m, \end{array} \end{equation} \begin{equation}\label{cony} -\frac{1}{h_m}\lambda_m\theta^{iu}_m-\frac{1}{h_m}\psi^i_m=-\nabla_u
g(\bar{x}^i_m,\bar{u}^i_m)^*\Big(-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m\Big) \end{equation} with $\psi^{i}_m\in N(\bar{u}^i_m;U)$ as $i=0,\ldots,2^m-1$ taken from Theorem~{\rm\ref{Thm5.2*}}, the transversality condition \begin{equation}\label{nmutx} -p^{2^m}_m=\lambda_m\vartheta^{2^m}_m+\sum_{j=1}^s\eta^{2^mj}_m z^{j2^m}_m\in\lambda_m\partial\varphi(\bar{x}^{2^m}_m)+\sum_{j=1}^s\eta^{2^mj}_m z^{j2^m}_m, \end{equation} and such that the following implications hold for $i=0,\ldots,2^m-1$ and $j=1,\ldots,s$: \begin{equation}\label{eta} \Big[\langle z^{ji}_m,\bar{x}^i_m\rangle<c^{ji}_m\Big]\Longrightarrow\eta^{ij}_m=0, \end{equation} \begin{equation}\label{93} \left\{\begin{matrix} \Big[j\in I_0(p^{i+1}_m-\displaystyle\frac{1}{h_m}\lambda_m\theta^{iy}_m)\Big]\Longrightarrow\gamma^{ij}_m\in\mathbb{R},\\ \Big[j\in I_>(p^{i+1}_m-\displaystyle\frac{1}{h_m}\lambda_m\theta^{iy}_m)\Big]\Longrightarrow\gamma^{ij}_m\ge 0,\\ \Big[j\notin I_0(p^{i+1}_m-\displaystyle\frac{1}{h_m}\lambda_m\theta^{iy}_m)\cup I_>(p^{i+1}_m-\frac{1}{h_m}\lambda_m\theta^{iy}_m)\Big]\Longrightarrow\gamma^{ij}_m=0. \end{matrix}\right.
\end{equation} We also have the complementary slackness condition \eqref{xi} together with \begin{equation}\label{94} \left[\langle z^{ji}_m,\bar{x}^i_m\rangle<c^{ji}_m\right]\Longrightarrow\gamma^{ij}_m=0\;\textrm{ for }\;i=0,\ldots,2^m-1\;\textrm{ and }\;j=1,\ldots,s, \end{equation} \begin{equation}\label{eta1} \big[\langle z^{j2^m}_m,\bar{x}^{2^m}_m\rangle<c^{j2^m}_m\big]\Longrightarrow\eta^{2^mj}_m=0\;\textrm{ for }\;j=1,\ldots,s. \end{equation} Furthermore, the linear independence of the vectors $\{z^{ji}_m\;|\;j\in I(\bar{x}^i_m)\}$ ensures the implication \begin{equation}\label{96} \eta^{ij}_m>0\Longrightarrow\Big[\Big\langle z^{ji}_m,p^{i+1}_m-\displaystyle\frac{1}{h_m}\lambda_m\theta^{iy}_m\Big\rangle=c^{ji}_m\Big]. \end{equation} Assuming in addition that the matrices $\nabla_u g(\bar{x}^i_m,\bar{u}^i_m)$ are of full rank for all $i=0,\ldots,2^m-1$ and $m\in\mathbb{N}$ sufficiently large, we get the enhanced nontriviality condition \begin{equation}\label{entc} \lambda_m+\|p^{0}_m\|+\|\psi_m\|\ne 0. \end{equation} \end{theorem}\vspace*{-0.07in} {\bf Proof.} Using the necessary optimality conditions of Theorem~\ref{Thm6.1}, we can rewrite \eqref{e:5.10*} as \begin{equation}\label{cod-disc} \Big(\frac{p^{i+1}_m-p^{i}_m}{h_m},-\frac{1}{h_m}\lambda_m\theta^{iu}_m-\frac{1}{h_m}\psi^i_m\Big)\in D^*F\Big(\bar{x}^i_m,\bar{u}^i_m,-\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}\Big)\left(-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m\right) \end{equation} for all $i=0,\ldots,2^m-1$ by the coderivative definition \eqref{c55}.
Taking into account that \begin{equation} -\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}+g(\bar{x}^i_m,\bar{u}^i_m)\in G(\bar{x}^i_m)\;\textrm{ for }\;i=0,\ldots,2^m-1 \end{equation} with $G(x)=N(x;C)$, we find vectors $\eta^{i}_m\in\mathbb{R}^s_+$ as $i=0,\ldots,2^m-1$ such that conditions \eqref{87} and \eqref{eta} hold. Employing now the coderivative evaluation \eqref{c57} from Theorem~\ref{Thm6.1} with $x:=\bar{x}^i_m$, $u:=\bar{u}^i_m$, $\omega:=-\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}$, and $w:=-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m$ for $i=0,\ldots,2^m-1$ gives us $\gamma^i_m\in\mathbb{R}^s$ and the relationships $$ \Big(\frac{p^{i+1}_m-p^{i}_m}{h_m},-\frac{1}{h_m}\lambda_m\theta^{iu}_m-\frac{\psi^{iu}_m}{h_m}\Big)=\left(\begin{matrix} \displaystyle-\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\Big(-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m\Big)+\sum_{j\in I_0(p^{i+1}_m-\frac{1}{h_m}\lambda_m\theta^{iy}_m)\cup I_>(p^{i+1}_m-\frac{1}{h_m}\lambda_m\theta^{iy}_m)}\gamma^{ij}_m z^{ji}_m,\\ -\nabla_u g(\bar{x}^i_m,\bar{u}^i_m)^*\Big(-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m\Big) \end{matrix}\right), $$ $$ \psi^{iu}_m\left(\nu-\bar{u}^i_m\right)\le 0\;\textrm{ for all }\;\nu\in U\;\textrm{ and }\;i=0,\ldots,2^m-1. $$ This ensures the validity of all the conditions in \eqref{conx}, \eqref{cony}, \eqref{93}, and \eqref{94}. Denoting $\eta^{2^m}_m:=\xi_m$ with $\xi_m$ taken from Theorem~\ref{Thm5.2*}, we get $\eta^{i}_m\in\mathbb{R}^s_+$ for all $i=0,\ldots,2^m$ and deduce \eqref{ntc} and \eqref{nmutx} from those in \eqref{e:5.8*} and \eqref{mutx}. Implication \eqref{eta1} follows directly from \eqref{xi} and the definition of $\eta^{2^m}_m$. Assume finally that the generating vectors $\{z^{ji}_m\;|\;j\in I(\bar{x}^i_m)\}$ are linearly independent.
In this case we deduce from \eqref{c58} and \eqref{cod-disc} that condition \eqref{96} is satisfied. It remains to verify the enhanced nontriviality \eqref{entc} under the additional assumption on the full rank of the matrices $\nabla_u g(\bar{x}^i_m,\bar{u}^i_m)$. Suppose on the contrary that $\lambda_m=0$, $p^{0}_m=0$, and $\psi_m=0$. Then $p^{i+1}_m=0$ as $i=0,\ldots,2^m-1$ by \eqref{cony}. It then follows from \eqref{conx} that \begin{equation*} \sum_{j\in I_0(p^{i+1}_m-\frac{1}{h_m}\lambda_m\theta^{iy}_m)\cup I_>(p^{i+1}_m-\frac{1}{h_m}\lambda_m\theta^{iy}_m)}\gamma^{ij}_m z^{ji}_m=0. \end{equation*} Invoking now \eqref{nmutx} and $p^{2^m}_m=0$ tells us that $\sum_{j=1}^s\eta^{2^mj}_m z^{j2^m}_m=0$. This implies by definition \eqref{aci} of the active constraint indices and the imposed linear independence of $z^{ji}_m$ over this index set that $\eta^{2^m}_m=0$. Thus \eqref{ntc} is violated, which verifies \eqref{entc} and completes the proof of the theorem. $\Box$\vspace*{-0.2in} \section{Optimality Conditions for the Controlled Sweeping Process}\label{nc-sweep} \setcounter{equation}{0}\vspace*{-0.1in} In this section we derive necessary optimality conditions for the local minimizer under consideration in the original problem $(P)$ by passing to the limit as $m\to\infty$ in the necessary optimality conditions of Theorem~\ref{Thm5.2*} for the discrete-time problems $(P_m)$. Furnishing the limiting procedure requires the usage of Theorem~\ref{ThmStrong} and the tools of generalized differentiation reviewed in Section~\ref{gen-diff}.\vspace*{-0.1in} \begin{theorem}\label{Thm6.1*} Let $(\bar{x}(\cdot),\bar{u}(\cdot))$ be a relaxed $W^{1,2}\times L^2$-local minimizer of problem $(P)$ such that $\bar{u}(\cdot)$ is of bounded variation and admits a right continuous representative on $[0,T]$.
In addition to $(H1)$ and $(H2)$, assume that LICQ holds along $\bar{x}(\cdot)$ on $[0,T]$, that $g(\cdot,\cdot)$ is ${\cal C}^1$-smooth around $(\bar{x}(t),\bar{u}(t))$ with the full rank of the matrices $\nabla_u g(\bar{x}(t),\bar{u}(t))$ on $[0,T]$, and that $\varphi$ is locally Lipschitzian around $\bar{x}(T)$. Then there exist a multiplier $\lambda\ge 0$, a signed vector measure $\gamma=(\gamma^1,\ldots,\gamma^n)\in C^*([0,T];\mathbb{R}^n)$, as well as adjoint arcs $p(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)$ and $q(\cdot)\in BV([0,T];\mathbb{R}^n)$ such that the following conditions are fulfilled:\vspace*{-0.1in} \begin{itemize} \item[\bf(i)] The {\sc primal-dual dynamic relationships:} \begin{equation}\label{37} -\dot{\bar{x}}(t)=\sum_{j=1}^s\eta^j(t)x^j_*-g\big(\bar{x}(t),\bar{u}(t)\big)\;\textrm{ for a.e. }\;t\in[0,T], \end{equation} where the functions $\eta^j(\cdot)\in L^2([0,T];\mathbb{R}_+)$ are well defined at $t=T$ while being uniquely determined by the representation in \eqref{37}; \begin{equation}\label{c:6.6} \dot{p}(t)=-\nabla_x g\big(\bar{x}(t),\bar{u}(t)\big)^*q(t)\;\textrm{ for a.e. }\;t\in[0,T], \end{equation} where the right continuous representative of $q(\cdot)$, with the same notation, satisfies \begin{equation}\label{c:6.9} q(t)=p(t)-\int_{(t,T]}d\gamma(\tau) \end{equation} for all $t\in[0,T]$ except at most a countable subset; \begin{equation}\label{c:6.6'} \psi(t):=\nabla_u g\big(\bar{x}(t),\bar{u}(t)\big)^*q(t)\in N\big(\bar{u}(t);U\big)\;\textrm{ for a.e. }\;t\in[0,T], \end{equation} which gives us the {\sc maximization condition} \begin{equation}\label{max} \big\langle\psi(t),\bar{u}(t)\big\rangle=\max_{u\in U}\big\langle\psi(t),u\big\rangle\;\textrm{ for a.e. }\;t\in[0,T] \end{equation} provided that the set $U$ is convex.
Furthermore, for a.e.\ $t\in[0,T]$ including $t=T$ and for all $j=1,\ldots,s$ we have the {\sc complementarity conditions} \begin{equation}\label{41} \big\langle x^j_*,\bar{x}(t)\big\rangle<c_j\Longrightarrow\eta^j(t)=0\;\textrm{ and }\;\eta^j(t)>0\Longrightarrow\big\langle x^j_*,q(t)\big\rangle=c_j. \end{equation} \item[\bf(ii)] The {\sc transversality conditions} at the right endpoint: \begin{equation}\label{42} -p(T)-\sum_{j\in I(\bar{x}(T))}\eta^j(T)x^j_*\in\lambda\partial\varphi\big(\bar{x}(T)\big)\;\textrm{ and }\;\sum_{j\in I(\bar{x}(T))}\eta^j(T)x^j_*\in N\big(\bar{x}(T);C\big). \end{equation} \item[\bf(iii)] The {\sc measure nonatomicity condition:} If $t\in[0,T)$ and $\langle x^j_*,\bar{x}(t)\rangle<c_j$ for all $j=1,\ldots,s$, then there is a neighborhood $V_t$ of $t$ in $[0,T]$ such that $\gamma(V)=0$ for all the Borel subsets $V$ of $V_t$. \item[\bf(iv)] {\sc Nontriviality conditions:} It always holds that \begin{equation}\label{e:83} \lambda+\|p(T)\|+\|q(0)\|>0. \end{equation} Assuming in addition that $\langle x^j_*,x_0\rangle<c_j$ for all $j=1,\ldots,s$, we have the {\sc enhanced nontriviality} \begin{equation}\label{enh1} \lambda+\|p(T)\|>0. \end{equation} \end{itemize} \end{theorem}\vspace*{-0.07in} {\bf Proof.} Given the local minimizer $(\bar{x}(\cdot),\bar{u}(\cdot))$ for $(P)$, construct the discrete-time problems $(P_m)$ for which optimal solutions $(\bar{x}_m(\cdot),\bar{u}_m(\cdot))$ exist by Proposition~\ref{Thm3.1} and converge to $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the sense of Theorem~\ref{ThmStrong}. We derive each of the claimed necessary conditions in $(P)$ by passing to the limit from those in Theorem~\ref{Thm5.2*}.
Let us split the derivation into several steps.\\[1ex] {\bf Step~1:} {\em Verifying the primal equation and complementarity condition.} First we prove \eqref{37} together with the first complementarity condition in \eqref{41}. Based on \eqref{theta}, define the functions $$ \theta_m(t):=\frac{\theta^{i}_m}{h_m}\;\textrm{ for }\;t\in[t^i_m,t^{i+1}_m)\;\textrm{ and }\;i=0,\ldots,2^m-1 $$ on $[0,T]$ whenever $m\in\mathbb{N}$. It is easy to see that \begin{eqnarray*} \int_0^T\|\theta^y_m(t)\|^2dt&=&\sum_{i=0}^{2^m-1}\frac{\|\theta^{iy}_m\|^2}{h_m}\le\frac{1}{h_m}\sum_{i=0}^{2^m-1}\Big(\int_{t^i_m}^{t^{i+1}_m}\Big\|\dot{\bar{x}}(t)-\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}\Big\|dt\Big)^2\\ &\le&\sum_{i=0}^{2^m-1}\int_{t^i_m}^{t^{i+1}_m}\Big\|\dot{\bar{x}}(t)-\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}\Big\|^2dt=\int_0^T\|\dot{\bar{x}}(t)-\dot{\bar{x}}_m(t)\|^2dt. \end{eqnarray*} Using the strong convergence $(\bar{x}_m(\cdot),\bar{u}_m(\cdot))\to(\bar{x}(\cdot),\bar{u}(\cdot))$ in Theorem~\ref{ThmStrong} ensures that \begin{equation}\label{c:6.14} \int_0^T\|\theta^y_m(t)\|^2dt\le\int_0^T\|\dot{\bar{x}}(t)-\dot{\bar{x}}_m(t)\|^2dt\to 0\;\textrm{ as }\;m\to\infty. \end{equation} This implies that a subsequence of $\{\theta^y_m(t)\}$ converges, without relabeling, to zero a.e.\ on $[0,T]$.
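Note that the first inequality in the above chain is just the Cauchy--Schwarz estimate on each subinterval of length $h_m$: for any $f\in L^2([t^i_m,t^{i+1}_m];\mathbb{R}^n)$ we have
\begin{equation*}
\Big(\int_{t^i_m}^{t^{i+1}_m}\|f(t)\|dt\Big)^2\le h_m\int_{t^i_m}^{t^{i+1}_m}\|f(t)\|^2dt,
\end{equation*}
applied here with $f(t):=\dot{\bar{x}}(t)-\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}$; after summation over $i$ the factor $h_m$ cancels the $1/h_m$ in front of the sum.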
Likewise, \begin{eqnarray*} \int_0^T\|\theta^u_m(t)\|^2dt&=&\sum_{i=0}^{2^m-1}\frac{\|\theta^{iu}_m\|^2}{h_m}\le\frac{1}{h_m}\sum_{i=0}^{2^m-1}\Big(\int_{t^i_m}^{t^{i+1}_m}\|\bar{u}^i_m-\bar{u}(t)\|dt\Big)^2\\ &\le&\sum_{i=0}^{2^m-1}\int_{t^i_m}^{t^{i+1}_m}\|\bar{u}^i_m-\bar{u}(t)\|^2dt=\int_0^T\|\bar{u}_m(t)-\bar{u}(t)\|^2dt, \end{eqnarray*} which tells us, again by using Theorem~\ref{ThmStrong}, that \begin{equation}\label{c:6.14'} \int_0^T\|\theta^u_m(t)\|^2dt\le\int_0^T\|\bar{u}_m(t)-\bar{u}(t)\|^2dt\to 0\;\textrm{ as }\;m\to\infty, \end{equation} and so $\theta^u_m(t)\to 0$ for a.e.\ $t\in[0,T]$ along a subsequence. The assumed LICQ along $\bar{x}(\cdot)$ and the robustness of this condition yield, by the choice of $z^{ji}_m$ and the convergence in Theorem~\ref{ThmStrong}, that the vectors $\{z^{ji}_m\;|\;j\in I(\bar{x}^i_m)\}$ are linearly independent for each $i=1,\ldots,2^m$ and $m\in\mathbb{N}$ sufficiently large. Taking $\eta^{i}_m\in\mathbb{R}^s_+$ from Theorem~\ref{Thm6.2}, we construct the piecewise constant functions $\eta_m(\cdot)$ on $[0,T]$ by $\eta_m(t):=\eta^{i}_m$ for $t\in[t^i_m,t^{i+1}_m)$ with $\eta_m(T):=\eta^{2^m}_m$. It follows from \eqref{87} that \begin{equation}\label{c:51} -\dot{\bar{x}}_m(t)=\sum_{j=1}^s\eta^j_m(t)z^{ji}_m-g\big(\bar{x}_m(t^i_m),\bar{u}_m(t^i_m)\big)\;\textrm{ whenever }\;t\in(t^i_m,t^{i+1}_m),\quad m\in\mathbb{N}. \end{equation} Furthermore, we get $-\dot{\bar{x}}(t)\in G(\bar{x}(t))-g(\bar{x}(t),\bar{u}(t))$ for a.e.\ $t\in[0,T]$ with the mapping $G(\cdot)=N(\cdot;C)$, which is measurable by \cite[Theorem~4.26]{rw}.
The well-known measurable selection result (see, e.g., \cite[Corollary~4.6]{rw}) allows us to find nonnegative measurable functions $\eta^j(\cdot)$ on $[0,T]$ for $j=1,\ldots,s$ such that equation \eqref{37} holds. Combining \eqref{c:51} and \eqref{37} implies that \begin{equation*} \dot{\bar{x}}(t)-\dot{\bar{x}}_m(t)=\sum_{j=1}^s\big[\eta^j_m(t)z^{ji}_m-\eta^j(t)x^j_*\big]+g\big(\bar{x}(t),\bar{u}(t)\big)-g\big(\bar{x}_m(t^i_m),\bar{u}_m(t^i_m)\big) \end{equation*} for $t\in(t^i_m,t^{i+1}_m)$ and $i=0,\ldots,2^m-1$. It follows from the imposed LICQ that the functions $\eta^j_m(t)$ and $\eta^j(t)$ are uniquely defined for a.e.\ $t\in[0,T]$ and belong to $L^2([0,T];\mathbb{R}_+)$. The constructions above yield the estimate \begin{equation*} \Big\|\sum_{j=1}^s\big[\eta^j(t)x^j_*-\eta^j_m(t)z^{ji}_m\big]\Big\|_{L^2}\le\|\dot{\bar{x}}_m(t)-\dot{\bar{x}}(t)\|_{L^2}+\|g\big(\bar{x}(t),\bar{u}(t)\big)-g\big(\bar{x}_m(t),\bar{u}_m(t)\big)\|_{L^2} \end{equation*} whenever $t\in(t^i_m,t^{i+1}_m)$. Passing to the limit therein with the usage of Theorem~\ref{ThmStrong} gives us \begin{equation*} \sum_{j\in I(\bar{x}(t))}\big[\eta^j(t)x^j_*-\eta^j_m(t)z^{ji}_m\big]\to 0\;\textrm{ as }\;m\to\infty\;\textrm{ for a.e. }\;t\in[0,T] \end{equation*} and ensures the a.e.\ convergence $\eta_m(t)\to\eta(t)$ on $[0,T]$ by the imposed LICQ. We also have that the sequence $\{\eta^{2^m}_m\}$ converges to the well-defined vector $(\eta^{1}(T),\ldots,\eta^{s}(T))$.
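To see the uniqueness claim in more detail (a standard linear-algebra observation under LICQ, recorded here only as a remark), note that for a.e.\ fixed $t$ the active generators $\{x^j_*\;|\;j\in I(\bar{x}(t))\}$ are linearly independent, so the Gram matrix $\big(\langle x^j_*,x^k_*\rangle\big)_{j,k\in I(\bar{x}(t))}$ is nonsingular, and the coefficients $\eta^j(t)$, $j\in I(\bar{x}(t))$, are uniquely recovered from \eqref{37} by solving the linear system
\begin{equation*}
\sum_{k\in I(\bar{x}(t))}\langle x^j_*,x^k_*\rangle\,\eta^k(t)=\big\langle x^j_*,g\big(\bar{x}(t),\bar{u}(t)\big)-\dot{\bar{x}}(t)\big\rangle,\quad j\in I(\bar{x}(t)),
\end{equation*}
which also yields the claimed $L^2$-bound on $\eta^j(\cdot)$ via those on $\dot{\bar{x}}(\cdot)$ and $g(\bar{x}(\cdot),\bar{u}(\cdot))$.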
Then the first complementarity condition in \eqref{41} follows from \eqref{eta} and \eqref{eta1}.\\[1ex] {\bf Step~2:} {\em Continuous-time extensions of approximating dual elements.} In the notation of Theorem~\ref{Thm5.2*}, define $q_m(t)$ by extending $p^i_m$ piecewise linearly on $[0,T]$ with $q_m(t^i_m):=p^i_m$ for $i=0,\ldots,2^m$. Construct further $\gamma_m(t)$ and $\psi_m(t)$ on $[0,T]$ by \begin{equation}\label{c:6.25} \gamma_m(t):=\gamma^i_m,\quad\psi_m(t):=\frac{1}{h_m}\psi^i_m\;\textrm{ for }\;t\in[t^i_m,t^{i+1}_m)\;\textrm{ and }\;i=0,\ldots,2^m-1 \end{equation} with $\gamma_m(T):=0$ and $\psi_m(T):=0$. Define now the function \begin{equation*} \nu_m(t):=\max\big\{t^i_m\;\big|\;t^i_m\le t,\;0\le i\le 2^m-1\big\}\;\textrm{ for all }\;t\in[0,T],\quad m\in\mathbb{N}, \end{equation*} and deduce respectively from \eqref{conx} and \eqref{cony} that \begin{equation}\label{conx'} \begin{array}{ll} \dot{q}_m(t)=&-\nabla_x g\big(\bar{x}_m(\nu_m(t)),\bar{u}_m(\nu_m(t))\big)^*\big(-\lambda_m\theta^{y}_m(t)+q_m(\nu_m(t)+h_m)\big)\\ &+\displaystyle\sum_{j\in I_0(-\lambda_m\theta^y_m(t)+q_m(\nu_m(t)+h_m))\cup I_>(-\lambda_m\theta^y_m(t)+q_m(\nu_m(t)+h_m))}\gamma^{j}_m(t)z^{ji}_m,\quad\textrm{ and } \end{array} \end{equation} \begin{equation}\label{cony1} -\lambda_m\theta^{u}_m(t)-\psi_m(t)=-\nabla_u g\big(\bar{x}_m(\nu_m(t)),\bar{u}_m(\nu_m(t))\big)^*\big(-\lambda_m\theta^{y}_m(t)+q_m(\nu_m(t)+h_m)\big) \end{equation} for every $t\in(t^i_m,t^{i+1}_m)$ and $i=0,\ldots,2^m-1$. Next we extend the adjoint arcs $p_m(\cdot)$ to $[0,T]$ by \begin{equation}\label{c:6.29} p_m(t):=q_m(t)+\int_t^T\Big(\sum_{j=1}^s\gamma^j_m(\tau)z^{ji}_m\Big)d\tau\;\textrm{ for every }\;t\in[0,T].
\end{equation} This shows that $p_m(T)=q_m(T)$ and that \begin{equation}\label{c:6.30} \dot{p}_m(t)=\dot{q}_m(t)-\sum_{j=1}^s\gamma^j_m(t)z^{ji}_m\;\textrm{ for a.e. }\;t\in[0,T]. \end{equation} The latter implies due to \eqref{conx'}, \eqref{cony1}, and the index definitions in \eqref{c56} that \begin{equation}\label{c:59} \dot{p}_m(t)=-\nabla_x g\big(\bar{x}_m(\nu_m(t)),\bar{u}_m(\nu_m(t))\big)^*\big(-\lambda_m\theta^{y}_m(t)+q_m(\nu_m(t)+h_m)\big) \end{equation} for every $t\in(t^i_m,t^{i+1}_m)$ and $i=0,\ldots,2^m-1$. Define now the vector measures $\gamma^{mes}_m$ on $[0,T]$ by \begin{equation}\label{c:6.34} \underset{B}{\int}d\gamma^{mes}_m:=\underset{B}{\int}\sum_{j=1}^s\gamma^j_m(t)z^{ji}_m dt \end{equation} for every Borel subset $B\subset[0,T]$, and then drop for simplicity the index ``$mes$'' in what follows if no confusion arises. Since all the expressions in the statement of Theorem~\ref{Thm5.2*} are positively homogeneous of degree one with respect to $\lambda_m$, $p_m$, $\gamma_m$, and $\psi_m$, the enhanced nontriviality condition \eqref{entc} and the constructions above allow us to normalize them by imposing the equality \begin{equation}\label{c:6.35} \lambda_m+\|p_m(T)\|+\|q_m(0)\|+\int_0^T\Big\|\sum_{j=1}^s\gamma^j_m(t)z^{ji}_m\Big\|dt+\int_0^T\|\psi_m(t)\|dt=1,\quad m\in\mathbb{N}, \end{equation} which tells us, in particular, that all the terms in \eqref{c:6.35} are uniformly bounded.\\[1ex] {\bf Step~3:} {\em Verifying the dual dynamic relationships and the maximization condition.} By \eqref{c:6.35}, suppose without loss of generality that $\lambda_m\to\lambda$ as $m\to\infty$ for some $\lambda\ge 0$.
To prove the uniform boundedness of the sequence $\{(p^{0}_m,\ldots,p^{2^m}_m)\}_{m\in\mathbb{N}}$, observe first from \eqref{conx} that \begin{equation*} p^{i+1}_m=p^{i}_m-h_m\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\Big(-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m\Big)+h_m\sum_{j=1}^{s}\gamma^{ij}_m z^{ji}_m \end{equation*} for all $i=0,\ldots,2^m-1$. This implies that \begin{equation*} \begin{aligned} \|p^{i}_m\|&\le\|p^{i+1}_m\|+h_m\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\|\cdot\Big\|-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m\Big\|+h_m\Big\|\sum_{j=1}^{s}\gamma^{ij}_m z^{ji}_m\Big\|\\ &\le\big(1+h_m\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\|\big)\|p^{i+1}_m\|+h_m\lambda_m\|\theta^{y}_m(t^i_m)\|\cdot\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\|+h_m\Big\|\sum_{j=1}^{s}\gamma^{ij}_m z^{ji}_m\Big\| \end{aligned} \end{equation*} whenever $i=0,\ldots,2^m-1$. It follows from \eqref{c:6.14} and \eqref{c:6.35} that the quantities $\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)$, $\lambda_m\theta^{iy}_m$, and $\sum_{j=1}^{s}\gamma^{ij}_m z^{ji}_m$ are uniformly bounded for $i=0,\ldots,2^m-1$. Thus we find a constant $M_1>0$ such that \begin{equation*} h_m\lambda_m\|\theta^{y}_m(t^i_m)\|\cdot\left\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\right\|\le M_1 h_m\|\theta^{y}_m(t^i_m)\|=M_1\sqrt{h_m\int_{t^i_m}^{t^{i+1}_m}\|\theta^y_m(t)\|^2dt} \end{equation*} for all $i=0,\ldots,2^m-1$ and $m\in\mathbb{N}$. It implies that $$ \sum_{i=0}^{2^m-1}h_m\lambda_m\|\theta^{y}_m(t^i_m)\|\cdot\left\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\right\|\le M_1\sqrt{\int_{0}^{T}\|\theta^y_m(t)\|^2dt}\to 0\;\textrm{ as }\;m\to\infty.
$$ On the other hand, we get due to \eqref{c:6.35} that \begin{equation}\label{2ndterm} \sum_{i=0}^{2^m-1}h_m\Big\|\sum_{j=1}^s\gamma^{ij}_m z^{ji}_m\Big\|=\int_0^T\Big\|\sum_{j=1}^s\gamma^j_m(t)z^{ji}_m\Big\|dt\le 1. \end{equation} Considering now the numbers \begin{equation*} A^i_m:=h_m\lambda_m\|\theta^{y}_m(t^i_m)\|\cdot\left\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\right\|+h_m\Big\|\sum_{j=1}^{s}\gamma^{ij}_m z^{ji}_m\Big\| \end{equation*} for $i=0,\ldots,2^m-1$ and using the aforementioned uniform boundedness, find a constant $M_2>0$ such that $\sum_{i=0}^{2^m-1}A^i_m\le M_2$. Combining the latter with the estimates above tells us that \begin{equation}\label{t:67} \|p^{i}_m\|\le\big(1+M_1h_m\big)\|p^{i+1}_m\|+A^i_m,\quad i=0,\ldots,2^m-1. \end{equation} Proceeding further by induction, we get the inequalities \begin{eqnarray*} \|p^{i}_m\|&\le&\big(1+M_1h_m\big)^{2^m-i}\|p^{2^m}_m\|+\sum_{j=i}^{2^m-1}A^j_m(1+M_1h_m)^{j-i}\\ &\le& e^{M_1}+e^{M_1}\sum_{i=0}^{2^m-1}A^i_m\le e^{M_1}(1+M_2)\;\textrm{ for }\;i=2,\ldots,2^m-1, \end{eqnarray*} which imply in turn the estimate \begin{equation*} \|p^{i}_m\|\le M_3\;\textrm{ for some }\;M_3>0\;\textrm{ and all }\;i=2,\ldots,2^m-1. \end{equation*} Hence the boundedness of $\{p^{0}_m\}$ and $\{p^{1}_m\}$ follows from \eqref{t:67} and the boundedness of $\{p^{i}_m\}_{2\le i\le 2^m}$, which thus justifies the boundedness of the whole collection $\{(p^{0}_m,\ldots,p^{2^m}_m)\}_{m\in\mathbb{N}}$.
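We remark that the induction above is a discrete Gronwall-type argument: with the uniform step $h_m=T/2^m$ one has $(1+M_1h_m)^{2^m-i}\le e^{M_1(2^m-i)h_m}\le e^{M_1T}$, so iterating \eqref{t:67} gives
\begin{equation*}
\|p^{i}_m\|\le e^{M_1T}\Big(\|p^{2^m}_m\|+\sum_{j=i}^{2^m-1}A^j_m\Big)\le e^{M_1T}\big(\|p^{2^m}_m\|+M_2\big),
\end{equation*}
where $\|p^{2^m}_m\|=\|p_m(T)\|\le 1$ due to the normalization \eqref{c:6.35}; the time horizon $T$ can be absorbed into the constant in the exponent.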
To verify the uniform boundedness of $q_m(\cdot)$, derive from their constructions and \eqref{conx} that \begin{equation}\label{t:68} \begin{array}{ll} \displaystyle\sum_{i=0}^{2^m-1}\|q_m(t^{i+1}_m)-q_m(t^i_m)\|&\displaystyle\le h_m\sum_{i=0}^{2^m-1}\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*(-\lambda_m\theta^{y}_m(t^i)+p^{i+1}_m)\|\\ &+\displaystyle\int_0^T\Big\|\sum_{j=1}^s\gamma^{j}_m(t)z^{ji}_m\Big\|dt \end{array} \end{equation} and observe furthermore that $$ h_m\sum_{i=0}^{2^m-1}\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*(-\lambda_m\theta^{y}_m(t^i)+p^{i+1}_m)\|\le T\underset{0\le i\le 2^m-1}{\max}\big\{\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*(-\lambda_m\theta^{y}_m(t^i)+p^{i+1}_m)\|\big\}. $$ The latter ensures the boundedness of the first term on the right-hand side of \eqref{t:68} due to the boundedness of $\{p^{i}_m\}_{m\in\mathbb{N}}$, while the boundedness of the second term therein follows from \eqref{2ndterm}. Thus we get from \eqref{t:68} that the functions $q_m(\cdot)$ are of uniformly bounded variation on $[0,T]$ and that \begin{equation*} 2\|q_m(t)\|-\|q_m(0)\|-\|q_m(T)\|\le\|q_m(t)-q_m(0)\|+\|q_m(T)-q_m(t)\|\le\textrm{var}(q_m;[0,T]) \end{equation*} for all $t\in[0,T]$. Thus the sequence $\{q_m(\cdot)\}$ is bounded on $[0,T]$, since the boundedness of $\{q_m(0)\}$ and $\{q_m(T)\}$ follows from \eqref{c:6.35}. Applying now Helly's selection theorem gives us a function of bounded variation $q(\cdot)$ such that $q_m(t)\to q(t)$ as $m\to\infty$ pointwise on $[0,T]$. We see from \eqref{c:6.34} and \eqref{c:6.35} that the measure sequence $\{\gamma_m\}$ is bounded in $C^*([0,T];\mathbb{R}^n)$.
Thus the weak$^*$ sequential compactness of bounded sets in this space allows us to find a measure $\gamma\in C^*([0,T];\mathbb{R}^n)$ such that $\{\gamma_m\}$ weak$^*$ converges to $\gamma$ in $C^*([0,T];\mathbb{R}^n)$ along a subsequence. It follows from \eqref{c:59}, \eqref{c:6.35}, and the uniform boundedness of $q_m(\cdot)$ on $[0,T]$ that the sequence $\{p_m(\cdot)\}$ is bounded in $W^{1,2}([0,T];\mathbb{R}^{n})$ and thus weakly compact in this space. By Mazur's theorem we conclude that a sequence of convex combinations of $\dot{p}_m(\cdot)$ converges a.e.\ pointwise on $[0,T]$ to the derivative $\dot{p}(\cdot)$ of some $p(\cdot)\in W^{1,2}([0,T];\mathbb{R}^{n})$. This gives us \eqref{c:6.6} by passing to the limit along \eqref{c:59} as $m\to\infty$ with the usage of \eqref{c:6.14} and \eqref{c:6.14'}. Note also that
\begin{equation}
\Big\|\int_t^T\sum_{j=1}^s\gamma^j_m(\tau)z^{ji}_m d\tau-\int_{(t,T]}d\gamma(\tau)\Big\|=\Big\|\int_t^T d\gamma_m(\tau)-\int_{(t,T]}d\gamma(\tau)\Big\|\to 0\;\textrm{ as }\;m\to\infty
\end{equation}
for all $t\in[0,T]$ except a countable subset of $[0,T]$ by the weak$^*$ convergence of the measures $\gamma_m$ to $\gamma$ in $C^*([0,T];\mathbb{R}^n)$; cf.\ \cite[p.\ 325]{v} for similar arguments. Hence we get the convergence
\begin{equation}
\int^T_t\sum_{j=1}^s\gamma^j_m(\tau)z^{ji}_m d\tau\to\int_{(t,T]}d\gamma(\tau)\;\textrm{ on }\;[0,T]\;\textrm{ as }\;m\to\infty
\end{equation}
and thus arrive at \eqref{c:6.9} by passing to the limit in \eqref{c:6.29}. The second (dual) complementarity condition in \eqref{41} follows from \eqref{96} while arguing by contradiction with the usage of the established a.e.\ pointwise convergence of the functions involved therein.
To finish the proof of (i), it remains to verify the validity of the inclusion in \eqref{c:6.6'} and the maximization condition \eqref{max}. Using the strong convergence of the discrete optimal solutions from Theorem~\ref{ThmStrong}, the convergence $(\theta^y_m(t),\theta^u_m(t))\to(0,0)$ for a.e.\ $t\in[0,T]$ obtained above, as well as the robustness of the normal cone \eqref{c54}, we arrive at \eqref{c:6.6'} by passing to the limit in \eqref{cony} and in the inclusions $\psi^{i}_m\in N(\bar{u}^i_m;U)$, $i=0,\ldots,2^m-1$, of Theorem~\ref{Thm6.2}. If $U$ is convex, the maximization condition \eqref{max} follows directly from \eqref{c:6.6'} due to the structure \eqref{nc} of the normal cone to convex sets.\\[1ex]
{\bf Step~4:} {\em Verifying transversality inclusions.} It follows from \eqref{nmutx} and representation \eqref{F} that
\begin{equation}\label{c:72}
-p^{2^m}_m-\lambda_m\vartheta^{2^m}_m=\sum_{j=1}^s\eta^{2^mj}_m z^{j2^{m}}_m=\sum_{j\in I(\bar{x}^{2^m}_m)}\eta^{2^mj}_m z^{j2^{m}}_m \in N(\bar{x}^{2^m}_m;C^{2^m}_m),
\end{equation}
where $\eta^{2^mj}_m=0$ for $j\in\{1,\ldots,s\}\setminus I(\bar{x}^{2^m}_m)$. Denoting $\zeta_m:=\sum_{j\in I(\bar{x}^{2^m}_m)}\eta^{2^mj}_m z^{j2^{m}}_m$, observe that a subsequence of $\{\zeta_m\}$ converges to some $\zeta\in\mathbb{R}^n$ due to the boundedness of $\lambda_m$ by \eqref{c:6.35} and the convergence of $\{p^{2^m}_m\}$ and $\{\bar{x}^{2^m}_m\}$, taking into account the robustness of the subdifferential. It follows from the robustness of the normal cone in \eqref{c:72}, the convergence $\bar{x}^{2^m}_m\to \bar{x}(T)$, and the inclusion $I(\bar{x}^{2^m}_m)\subset I(\bar{x}(T))$ for all $m$ sufficiently large that $\zeta\in N(\bar{x}(T);C)$.
Thus we get from \eqref{nmutx} that
\begin{equation*}
-p^{2^m}_m-\zeta_m\in\lambda_m\partial\varphi(\bar{x}^{2^m}_m)\;\textrm{ for all }\;m\in\mathbb{N}.
\end{equation*}
Passing now to the limit therein as $m\to\infty$ verifies both inclusions in \eqref{42}.\\[1ex]
{\bf Step~5:} {\em Verifying measure nonatomicity.} Take $t\in[0,T]$ with $\langle x^j_*,\bar{x}(t)\rangle<c_j$ for all $j=1,\ldots,s$ and by continuity of $\bar{x}(\cdot)$ find a neighborhood $V_t$ of $t$ such that $\langle x^j_*,\bar{x}(\tau)\rangle<c_j$ whenever $\tau\in V_t$ and $j=1,\ldots,s$. Invoking Theorem~\ref{ThmStrong} tells us that $\langle z^{ji}_m,\bar{x}_m(t^i_m)\rangle<c^{ji}_m$ if $t^i_m\in V_t$ for all $j=1,\ldots,s$ and $m\in\mathbb{N}$ sufficiently large. Then we deduce from \eqref{94} that $\gamma_m(t)=0$ on any Borel subset $V$ of $V_t$. Hence
\begin{equation}\label{nonatom}
\|\gamma_m\|(V)=\displaystyle\int_Vd\|\gamma_m\|=\int_V\|\gamma_m(t)\|dt=0
\end{equation}
by the construction of $\gamma_m$ in \eqref{c:6.34}. Passing now to the limit therein and taking into account the measure convergence established above, we get $\|\gamma\|(V)=0$, which justifies the claimed measure nonatomicity.\\[1ex]
{\bf Step~6:} {\em Verifying nontriviality conditions.} First we establish \eqref{e:83} under the general assumptions of the theorem. Arguing by contradiction, suppose that $\lambda=0$, $p(T)=0$, and $q(0)=0$. Thus $\lambda_m\to 0$, $p_m(T)\to 0$, and $q_m(0)\to 0$ as $m\to\infty$. It follows from \eqref{c:6.25} that
\begin{equation}\label{mx}
\int_0^T\left\|\gamma_m(t)\right\| dt=\sum_{i=0}^{2^m-1}h_m\left\|\gamma^i_m\right\|\;\textrm{ and }\;\int_0^T\left\|\psi_m(t)\right\| dt= \sum_{i=0}^{2^m-1}h_m\frac{\left\|\psi^i_m \right\|}{h_m}=\sum_{i=0}^{2^m-1}\left\|\psi^i_m\right\|.
\end{equation}
Let us now verify the limiting condition
\begin{equation}\label{gg-lim}
\int_0^T\Big\|\sum_{j=1}^s\gamma^j_m(t)z^{ji}_m\Big\|dt\to 0\;\textrm{ as }\;m\to\infty.
\end{equation}
Indeed, by $q_m(T)=p_m(T)$ and the assumption above we get $q_m(T)\to 0$ and thus deduce from \eqref{94} and \eqref{mx} that $\int_0^T\|\gamma_m(t)\|dt\to 0$ as $m\to\infty$. Recalling that $\gamma^{ij}_m=0$ for $i=0,\ldots,2^m-1$ and $j=1,\ldots,s$ by \eqref{94} and remembering the weak$^*$ convergence $\gamma_m(\cdot)\to\gamma(\cdot)$ in $C^*([0,T];\mathbb{R}^n)$ yield $\dot{p}(t)=\dot{q}(t)$ for a.e.\ $t\in[0,T]$ by passing to the limit in \eqref{c:6.30}. Thus \eqref{c:6.6} reduces in this case to the linear ODE
$$
\dot{q}(t)=-\nabla_x g\big(\bar{x}(t),\bar{u}(t)\big)^*q(t)\;\textrm{ with }\;q(0)=0,
$$
which has only the trivial solution $q(t)\equiv 0$ on $[0,T]$. This implies that
\begin{equation}\label{max-p}
\underset{i=0,\ldots,2^m}\max\big\{\|p^{i}_m\|\big\}=\underset{t\in[0,T]}\max\big\{\|q_m(t)\|\big\}\to 0\;\textrm{ as }\;m\to\infty.
\end{equation}
By the constructions above we can estimate the left-hand side of \eqref{gg-lim} by
\begin{eqnarray*}
&&\displaystyle\int_0^T\Big\|\sum_{j=1}^s\gamma^j_m(t)z^{ji}_m\Big\|dt=\sum_{i=0}^{2^m-1}\Big\|h_m\sum_{j=1}^s\gamma^{ij}_m z^{ji}_m\Big\|\\
&&\le\displaystyle\sum_{i=0}^{2^m-1}\Big\|p^{i+1}_m-p^{i}_m+ h_m\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\Big(-\frac{1}{h_m}\lambda_m\theta^{iy}_m+p^{i+1}_m\Big)\Big\|\\
&&\le\displaystyle\sum_{i=0}^{2^m-1}\big\|p^{i+1}_m\big\|\cdot\big\|\big(1+h_m\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\big)\big\|+\sum_{i=0}^{2^m-1}\left\| p^{i}_m \right\|+\displaystyle\sum_{i=0}^{2^m-1}\left\|\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)^*\lambda_m\theta^{iy}_m\right\|.
\end{eqnarray*}
Then \eqref{max-p} and the uniform boundedness of $\nabla_x g(\bar{x}^i_m,\bar{u}^i_m)$ ensure that the first two terms in the last line of the obtained estimate vanish as $m\to\infty$. To deal with the third term therein, we get by the definition of $\theta^{iy}_m$ and Theorem~\ref{ThmStrong} that
\begin{equation}\label{th-y}
\sum_{i=0}^{2^m-1}\big\|\theta^{iy}_m\big\|=\sum_{i=0}^{2^m-1}\int_{t^i_m}^{t^{i+1}_m}\Big\|\frac{\bar{x}^{i+1}_m-\bar{x}^i_m}{h_m}-\dot{\bar{x}}(t)\Big\|dt= \int_0^T\big\|\dot{\bar{x}}_m(t)-\dot{\bar{x}}(t)\big\|dt\to 0\;\textrm{ as }\;m\to\infty,
\end{equation}
and therefore \eqref{gg-lim} is justified. To proceed further with $\psi_m^i$ in \eqref{mx}, we get by \eqref{cony} that
\begin{equation*}
\sum_{i=0}^{2^m-1}\|\psi^i_m\|\le\sum_{i=0}^{2^m-1}\|\lambda_m\theta^{iu}_m\|+\sum_{i=0}^{2^m-1}\big\|\lambda_m\nabla_u g(\bar{x}^i_m,\bar{u}^i_m)^*\theta^{iy}_m \big\|+\sum_{i=0}^{2^m-1}\big\|\lambda_m\nabla_u g(\bar{x}^i_m,\bar{u}^i_m)^*p^{i+1}_m\big\|,
\end{equation*}
which yields $\sum_{i=0}^{2^m-1}\|\psi^i_m\|\to 0$ due to \eqref{max-p}, \eqref{th-y}, and
\begin{eqnarray*}
\sum_{i=0}^{2^m-1}\|\theta^{iu}_m\|=\int_0^T\big\|\bar{u}_m(t)-\bar{u}(t)\big\|dt\to 0\;\textrm{ as }\;m\to\infty
\end{eqnarray*}
by Theorem~\ref{ThmStrong}. This shows that the violation of \eqref{e:83} implies the failure of \eqref{c:6.35}, a contradiction. To complete the proof of the theorem, it remains to verify the validity of the {\em enhanced nontriviality condition} \eqref{enh1} under the additional assumption made. Suppose on the contrary that $(\lambda,p(T))=(0,0)$ while $\langle x^j_*,x_0\rangle<c_j$ for all $j=1,\ldots,s$.
It follows from the above arguments in this step, by using the complementarity conditions \eqref{94}, that $\dot{p}(t)=0$ for a.e.\ $t\in[0,T]$, which yields $p(t)=p(T)=0$ on $[0,T]$. Then we get by \eqref{c:6.9} and \eqref{nonatom} that
\begin{equation}\label{qA}
q(t)=\int_{(t,T]}d\gamma(\tau)=0\;\textrm{ for all }\;t\in[0,T]\setminus A,
\end{equation}
where $A\subset[0,T]$ is a countable set. Consider the two possible cases regarding \eqref{qA}:

$\bullet$ $0\notin A$, and thus $q(0)=0$.

$\bullet$ $0\in A$. In this case the measure nonatomicity condition and the fact that $A$ is at most countable allow us to find $\tau>0$, $\tau\not\in A$, with $\int_{(0,\tau]}d\gamma(t)=0$, and thus $q(0)=\int_{(\tau,T]}d\gamma(t)=0$.

Hence we always have $q(0)=0$ in \eqref{qA}, showing in this way that the failure of \eqref{enh1} contradicts the validity of \eqref{e:83} established above. $\Box$\vspace*{-0.2in}

\section{Numerical Examples}\label{exa}
\setcounter{equation}{0}\vspace*{-0.1in}

In this section we consider two examples illustrating some characteristic features and the strength of the necessary optimality conditions for the sweeping control problem $(P)$ obtained in Theorem~\ref{Thm6.1*}. Prior to dealing with specific examples, let us present the following useful assertion, which is a consequence of the measure nonatomicity condition.\vspace*{-0.07in}

\begin{proposition}\label{claim} Assume that $\langle x^*,\bar{x}(\tau)\rangle<c_j$ for all $\tau\in[t_1,t_2]$ with $t_1,t_2\in[0,T)$, where $x^*=x^j_*$ for some $j\in\{1,\ldots,s\}$, and that the measure nonatomicity condition of Theorem~{\rm\ref{Thm6.1*}} is satisfied with the measure $\gamma$.
Then we have $\gamma([t_1,t_2])=0$ and $\gamma(\{\tau\})=0$ whenever $\tau\in[t_1,t_2]$, and so $\gamma((t_1,t_2))= \gamma([t_1,t_2))=\gamma((t_1,t_2])=0$.
\end{proposition}\vspace*{-0.07in}

{\bf Proof.} Pick any $\tau\in[t_1,t_2]$, for which $\langle x^*,\bar{x}(\tau)\rangle<c_j$, and find by the measure nonatomicity condition a neighborhood $V_\tau$ of $\tau$ in $[0,T]$ such that $\gamma(V)=0$ for all Borel subsets $V$ of $V_\tau$; in particular, $\gamma(\{\tau\})=0$. By $[t_1,t_2]\subset \bigcup_{\tau\in[t_1,t_2]}V_\tau$ and the compactness of $[t_1,t_2]$ we find $\tau_1,\ldots,\tau_l\in[t_1,t_2]$ with $[t_1,t_2]\subset\bigcup_{i=1}^lV_{\tau_i}$, where $\tau_1:=t_1$ and $\tau_l:=t_2$. Fix $i=1,\ldots,l-1$ and take $\widetilde{\tau}_i\in V_{\tau_i}\cap V_{\tau_{i+1}}$ with $[\tau_i,\widetilde{\tau}_i]\subset V_{\tau_i}$ and $[\widetilde{\tau}_i,\tau_{i+1}]\subset V_{\tau_{i+1}}$. Then we arrive at the equalities
\begin{equation*}
\gamma([t_1,t_2])=\gamma\Big(\bigcup_{i=1}^{l-1}\big([\tau_i,\widetilde{\tau}_i)\cup[\widetilde{\tau}_i,\tau_{i+1})\big)\Big)=\sum_{i=1}^{l-1}\Big(\gamma([\tau_i,\widetilde{\tau}_i))+\gamma([\widetilde{\tau}_i,\tau_{i+1}))\Big)=0,
\end{equation*}
which verifies the claimed properties of the measure.
$\Box$\vspace*{-0.02in}

Our first example is two-dimensional with respect to both state and control variables.\vspace*{-0.07in}

\begin{example}\label{Ex-2d} Consider the sweeping control problem of minimizing the cost functional
$$
x_1(1)+x_2(1)\;\textrm{ subject to}
$$
$$
\left\{\begin{matrix}
\begin{pmatrix} \dot{x}_1\\ \dot{x}_2 \end{pmatrix}=\begin{pmatrix} u_1\\ u_2 \end{pmatrix}-N_C\begin{pmatrix} x_1\\ x_2 \end{pmatrix}\\
\textrm{with }\;\begin{pmatrix} x_1\\ x_2 \end{pmatrix}(0)=\begin{pmatrix} 0\\ x_2^0 \end{pmatrix},
\end{matrix}\right.
$$
where $C:=\{(x_1,x_2)\in\mathbb{R}^2\;|\;x_2\ge 0\}$ and $(u_1,u_2)\in U:=[-1,1]\times[-1,1]$. We rewrite the dynamics as
$$
\begin{pmatrix} \dot{x}_1\\ \dot{x}_2 \end{pmatrix}(t)=\begin{pmatrix} u_1\\ u_2 \end{pmatrix}(t)+\eta(t)\begin{pmatrix} 0\\ 1 \end{pmatrix},\quad\eta(t)\ge 0\;\textrm{ a.e. }\;t\in[0,1].
$$
A direct check shows that if $x_2^0\ge 1$, then the constraint is irrelevant and the optimal control is constant, equal to $(-1,-1)$. If instead $0\le x_2^0<1$, then the optimal pair consists of $\bar{u}_1(t)\equiv-1$ together with any measurable component $\bar{u}_2(t)$ such that $\bar{x}_2(1)=0$.
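As an illustration (ours, not part of the example's argument), the sweeping dynamics here can be simulated by the classical catching-up scheme: an explicit Euler step followed by projection onto $C$. The step count and the sample value $x_2^0=1/2$ below are our own choices:

```python
import numpy as np

def sweep(u, x0, N=1000, T=1.0):
    # Catching-up scheme for dx/dt in u(t) - N_C(x) with C = {x : x2 >= 0}:
    # an explicit Euler step, then projection back onto C; the projection
    # plays the role of the multiplier eta(t) >= 0 in the rewritten dynamics.
    h = T / N
    x = np.array(x0, dtype=float)
    for i in range(N):
        x = x + h * np.asarray(u(i * h))
        x[1] = max(x[1], 0.0)
    return x

# With x2^0 = 1/2 < 1 and the constant control (-1,-1), the constraint
# becomes active at t = 1/2 and the trajectory ends near x(1) = (-1, 0),
# giving the optimal cost x1(1) + x2(1) = -1.
print(sweep(lambda t: (-1.0, -1.0), [0.0, 0.5]))
```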
The conditions of Theorem~\ref{Thm6.1*} tell us that:\\[1ex]
(1) $p=\begin{pmatrix} p_1\\ p_2 \end{pmatrix}$ is constant on $[0,1]$ (by \eqref{c:6.6});\\[1ex]
(2) $\begin{pmatrix} -p_1\\ -p_2 \end{pmatrix}-\begin{pmatrix} 0\\ -\eta(1) \end{pmatrix}=\begin{pmatrix} \lambda\\ \lambda \end{pmatrix}$, $\lambda\ge 0$ (by \eqref{42});\\[1ex]
(3) $x^0_2>0\Longrightarrow\lambda+\|p\|>0$ (by \eqref{enh1});\\[1ex]
(4) $q(t)=p-\displaystyle\int_{[t,1]}d\gamma(\tau)=\psi(t)\in N_{[-1,1]^2}\left(\begin{matrix} \bar{u}_1\\ \bar{u}_2 \end{matrix}\right)$ (by \eqref{c:6.6'} and \eqref{c:6.9});\\[1ex]
(5) $\lambda+\|p\|+\|q(0)\|>0$ (by \eqref{e:83});\\[1ex]
(6) $\eta(t)=0$ for a.e.\ $t\in[0,1]$ with $\bar{x}_2(t)>0$, and $\Big[\eta(t)>0\Longrightarrow q(t)\left(\begin{matrix} 0\\ 1 \end{matrix}\right)=0\Big]$ for a.e.\ $t\in[0,1]$ (by \eqref{41});\\[1ex]
(7) $d\gamma\big|_{\{t\;|\;\bar{x}_2(t)>0\}}=0$ (by the measure nonatomicity condition).

To apply these conditions, consider first the case where $x_2^0>1$, in which the constraint is automatically satisfied for all trajectories. Since $\bar{x}_2(1)>0$, we get $\eta(1)=0$ from (6). If $\lambda=0$, then $p\equiv 0$ and the nontriviality condition (3) is violated. Thus we can suppose that $\lambda=1$, and so $p=\begin{pmatrix} -1\\ -1 \end{pmatrix}$. Condition (7) implies that $d\gamma=0$ on the set in question; hence $q\equiv p=\begin{pmatrix} -1\\ -1 \end{pmatrix}\equiv\psi$. Since $\psi\in N_{[-1,1]^2}\left(\begin{matrix} \bar{u}_1\\ \bar{u}_2 \end{matrix}\right)$, the optimal control is $\bar{u}(t)\equiv\begin{pmatrix} -1\\ -1 \end{pmatrix}$.
It confirms that in this case we do not lose information with respect to the classical PMP. Consider now the case where $0<x_2^0\le 1$. Assuming that $x_2(1)>0$ yields $\eta(1)=0$. Repeating the above arguments with the usage of (4) gives us the control $\begin{pmatrix} -1\\ -1 \end{pmatrix}$ on $[0,1]$, implying that $x_2(1)=0$, a contradiction. Thus we get $x_2(1)=0$, which tells us that $\bar{u}_2\equiv-1$ in the case where $x_2^0=1$. Let us now deal with the first component $u_1$. Again, $\eta(1)=0$ implies that $\lambda=1$ and that $\bar{u}_1\equiv-1$ on $[0,1]$. If $\eta(1)>0$, then $\lambda=0$ is forbidden by taking $p=-\eta(1)$ in (3), so $\bar{u}_1\equiv-1$ is obtained as well. The case where $x_2^0=0$ requires a longer discussion, which we omit here for the sake of brevity. This example was also treated in \cite{ac}, and the given discussion allows us to compare the two sets of necessary conditions obtained in \cite{ac} and in this paper. Actually most of them, including the adjoint equation and the transversality condition, are different. Those presented here deal only with reference trajectories along which the control has bounded variation, but they are more detailed and--at least in this example--more effective for the control $u_2$, while being more difficult to use for $u_1$. This difference can be explained by the methods used to obtain the necessary conditions. The argument presented here takes the constraint into account at all steps of the procedure. On the contrary, the method used in \cite{ac} is based on penalization, and so it does not see the hard constraint in the approximation steps. This explains why it behaves well with respect to $u_1$, which is not influenced by the constraint, while it is almost degenerate with respect to $u_2$.
\end{example}\vspace*{-0.05in}

The next example is also two-dimensional, while addressing a more complicated polyhedral set $C$ in comparison with the halfspace of Example~\ref{Ex-2d}.\vspace*{-0.1in}

\begin{example}\label{Ex:3} Consider problem $(P)$ with the following initial data:
$$
n=m=2,\;T=1,\;x_0:=\Big(-\frac{1}{2},-\frac{1}{2}\Big),\;x^1_*:=(1,0),\;x^2_*:=(0,1),\;c_1=c_2=0,\;\varphi(x):=\frac{\|x\|^2}{2},\;g(u)=u
$$
with feasible controls $u(t)=(u^1(t),u^2(t))\in U$ a.e.\ $t\in[0,1]$ taking values in the unit square $U\subset\mathbb{R}^2$ with respect to the maximum norm
$$
U:=\big\{(u^1,u^2)\in\mathbb{R}^2\;\big|\;\max\{|u^1|,|u^2|\}\le 1\big\}.
$$
Applying the necessary optimality conditions of Theorem~\ref{Thm6.1*}, we seek solutions to $(P)$ such that
\begin{equation}\label{ex4.3a}
\langle x^j_*,\bar{x}(t)\rangle<c_j=0\;\mbox{ for all }\;t\in[0,1),\;j=1,2,\;\mbox{ and }\;\bar{x}(1)\in{\rm bd}(C),
\end{equation}
and show that \eqref{ex4.3a} holds for the $\bar{x}(\cdot)$ found below.
In the case of $(P)$ under consideration these conditions say that there exist $\lambda\ge 0$ and $\eta(\cdot)=\big(\eta^1(\cdot),\eta^2(\cdot)\big)\in L^2([0,1];\mathbb{R}^2_+)$ well defined at $t=1$ such that:\\[1ex]
(1) $\quad\langle x^j_*,\bar{x}(t)\rangle<c_j\Longrightarrow\eta^j(t)=0$ for $j=1,2$ and a.e.\ $t\in[0,1]$ including $t=1$;\\[1ex]
(2) $\quad\eta^j(t)>0\Longrightarrow q^j(t)=c_j$ for $j=1,2$ and a.e.\ $t\in[0,1]$;\\[1ex]
(3) $\quad-\dot{\bar{x}}(t)=\big(-\dot{\bar{x}}^1(t),-\dot{\bar{x}}^2(t)\big)=\big(\eta^1(t),\eta^2(t)\big)-\big(\bar{u}^1(t),\bar{u}^2(t)\big)$ for a.e.\ $t\in[0,1]$;\\[1ex]
(4) $\quad\big(\dot{p}^{1}(t),\dot{p}^{2}(t)\big)=(0,0)$ for a.e.\ $t\in[0,1]$;\\[1ex]
(5) $\quad\big(q^{1}(t),q^{2}(t)\big)\in N\big(\bar{u}(t);U\big)$ for a.e.\ $t\in[0,1]$;\\[1ex]
(6) $\quad q(t)=p(t)-\gamma([t,1])$ for a.e.\ $t\in[0,1]$;\\[1ex]
(7) $\quad-p(1)=\lambda\big(\bar{x}^1(1),\bar{x}^2(1)\big)+\big(\eta^1(1),\eta^2(1)\big)$ with $\big(\eta^1(1),\eta^2(1)\big)\in N\big(\bar{x}(1);C\big)$;\\[1ex]
(8) $\quad\lambda+\|p(1)\|\ne 0$.

Employing the first condition in \eqref{ex4.3a} together with (1) and (3) gives us $\dot{\bar{x}}(t)=\bar{u}(t)$ for a.e.\ $t\in[0,1]$. It also follows from (5) and (6) that
\begin{equation*}
q(t)=p(t)-\gamma([t,1])\in N\big(\bar{u}(t);U\big)\;\textrm{ for a.e. }\;t\in[0,1],
\end{equation*}
which can be written in the maximization form \eqref{max}. It follows from (4) that $p(\cdot)$ is constant on $[0,1]$, i.e., $p(t)\equiv p(1)$. This allows us to deduce that
\begin{equation*}
q(t)=p(1)-\gamma([t,1])=p(1)-\gamma(\{1\})\;\textrm{ for a.e. }\;t\in[0,1]
\end{equation*}
by using the measure nonatomicity condition of Theorem~\ref{Thm6.1*} and Proposition~\ref{claim}.
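A brute-force sanity check (ours, not part of the example) is easy here: for a constant control $u=(\vartheta_1,\vartheta_2)$ the trajectory is $x(t)=x_0+tu$, so the state constraint only needs to be checked at $t=1$ (both components are affine in $t$ and negative at $t=0$), and the cost is $\varphi(x(1))$. A grid search then locates the minimizer:

```python
import numpy as np

# Grid search over constant controls u = (v1, v2) in the unit square:
# x(1) = x0 + u, feasibility means x(1) stays in C = {x : x1 <= 0, x2 <= 0},
# and the cost is phi(x(1)) = |x(1)|^2 / 2.
x0 = np.array([-0.5, -0.5])
grid = np.linspace(-1.0, 1.0, 201)
best_cost, best_u = np.inf, None
for v1 in grid:
    for v2 in grid:
        xT = x0 + np.array([v1, v2])
        if xT[0] <= 0.0 and xT[1] <= 0.0:   # x(1) in C suffices on [0,1]
            cost = 0.5 * float(xT @ xT)
            if cost < best_cost:
                best_cost, best_u = cost, (v1, v2)
print(best_u, best_cost)   # approximately (0.5, 0.5) with cost 0
```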
Considering constant control functions $\bar{u}(t)\equiv(\vartheta_1,\vartheta_2)$ on $[0,1]$ and remembering the control constraints, we have $|\vartheta_1|\le 1$ and $|\vartheta_2|\le 1$. Thus $\bar{x}(t)=(-\frac{1}{2}+\vartheta_1 t,-\frac{1}{2}+\vartheta_2 t)$ for all $t\in[0,1]$, and the second condition in \eqref{ex4.3a} provides the following two possibilities:

{\bf (1)} $\bar{x}^1(1)=0$. Then $\vartheta_1=\frac{1}{2}$ and the cost functional reduces to $J[\bar{x},\bar{u}]=\frac{1}{2}\left(\vartheta_2-\frac{1}{2}\right)^2$. It obviously achieves its absolute minimum value $\bar J=0$ at the point $\vartheta_2=\frac{1}{2}$.

{\bf (2)} $\bar{x}^2(1)=0$. Then $\vartheta_2=\frac{1}{2}$ and the minimum cost $\bar J=0$ is achieved at $\vartheta_1=\frac{1}{2}$.\\[1ex]
As a result, we arrive at a feasible solution giving the optimal value to the cost functional:
$$
\bar{u}(t)=\left(\frac{1}{2},\frac{1}{2}\right)\;\textrm{ and }\;\bar{x}(t)=\left(-\frac{1}{2}+\frac{1}{2}t,-\frac{1}{2}+\frac{1}{2}t\right),\quad t\in[0,1],
$$
satisfying all the assumptions above.
\end{example}\vspace*{-0.1in}

{\bf Acknowledgements.} The authors are grateful to Tan Cao for many useful discussions.\vspace*{-0.15in}

\begin{thebibliography}{99}

\bibitem{aht} S. Adly, T. Haddad and L. Thibault, Convex sweeping process in the framework of measure differential inclusions and evolution variational inequalities, {\em Math. Program.} {\bf 148} (2014), 5--47.

\bibitem{ao} L. Adam and J. V. Outrata, On optimal control of a sweeping process coupled with an ordinary differential equation, {\em Discrete Contin. Dyn. Syst. Ser. B} {\bf 19} (2014), 2709--2738.

\bibitem{ac} C. E. Arroud and G. Colombo, A maximum principle of the controlled sweeping process, {\em Set-Valued Var. Anal.} {\bf 26} (2018), DOI 10.1007/s11228-017-0400-4.

\bibitem{Bre} H.
Br\'ezis, {\em Op\'erateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert}, North-Holland, Amsterdam, 1973.

\bibitem{bk} M. Brokate and P. Krej\v{c}\'\i, Optimal control of ODE systems involving a rate independent variational inequality, {\em Discrete Contin. Dyn. Syst. Ser. B} {\bf 18} (2013), 331--348.

\bibitem{cm1} T. H. Cao and B. S. Mordukhovich, Optimal control of a perturbed sweeping process via discrete approximations, {\em Discrete Contin. Dyn. Syst. Ser. B} {\bf 21} (2016), 3331--3358.

\bibitem{cm2} T. H. Cao and B. S. Mordukhovich, Optimality conditions for a controlled sweeping process with applications to the crowd motion model, {\em Discrete Contin. Dyn. Syst. Ser. B} {\bf 21} (2017), 267--306.

\bibitem{cm3} T. H. Cao and B. S. Mordukhovich, Optimal control of a nonconvex perturbed sweeping process, to appear in {\em J. Diff. Eqs.}, https://arxiv.org/abs/1711.02267.

\bibitem{cmf} C. Castaing, M. D. P. Monteiro Marques and P. Raynaud de Fitte, Some problems in optimal control governed by the sweeping process, {\em J. Nonlinear Convex Anal.} {\bf 15} (2014), 1043--1070.

\bibitem{chhm} G. Colombo, R. Henrion, N. D. Hoang and B. S. Mordukhovich, Optimal control of the sweeping process, {\em Dyn. Contin. Discrete Impuls. Syst. Ser. B} {\bf 19} (2012), 117--159.

\bibitem{chhm1} G. Colombo, R. Henrion, N. D. Hoang and B. S. Mordukhovich, Optimal control of the sweeping process over polyhedral controlled sets, {\em J. Diff. Eqs.} {\bf 260} (2016), 3397--3447.

\bibitem{cmn} G. Colombo, B. S. Mordukhovich and D. Nguyen, Applications of controlled perturbed sweeping processes to practical modeling, in preparation.

\bibitem{CT} G. Colombo and L. Thibault, Prox-regular sets and applications, in: D. Y. Gao and D. Motreanu (Eds.), {\em Handbook of Nonconvex Analysis}, International Press, Boston, 2010, pp.\ 99--182.

\bibitem{pfs} M. d. R. de Pinho, M. M. A. Ferreira and G. V.
Smirnov, Optimal control involving sweeping processes, {\em Set-Valued Var. Anal.}, to appear.

\bibitem{et} J. F. Edmond and L. Thibault, Relaxation of an optimal control problem involving a perturbed sweeping process, {\em Math. Program.} {\bf 104} (2005), 347--373.

\bibitem{dfm} T. Donchev, E. Farkhi and B. S. Mordukhovich, Discrete approximations, relaxation, and optimization of one-sided Lipschitzian differential inclusions in Hilbert spaces, {\em J. Diff. Eqs.} {\bf 243} (2007), 301--328.

\bibitem{hmn} R. Henrion, B. S. Mordukhovich and N. M. Nam, Second-order analysis of polyhedral systems in finite and infinite dimensions with applications to robust stability of variational inequalities, {\em SIAM J. Optim.} {\bf 20} (2010), 2199--2227.

\bibitem{hm18} N. D. Hoang and B. S. Mordukhovich, Extended Euler-Lagrange and Hamiltonian formalisms in optimal control of sweeping processes with controlled sweeping sets, to appear in {\em J. Optim. Theory Appl.}, arXiv:1804.10635.

\bibitem{m95} B. S. Mordukhovich, Discrete approximations and refined Euler-Lagrange conditions for differential inclusions, {\em SIAM J. Control Optim.} {\bf 33} (1995), 882--915.

\bibitem{m-book1} B. S. Mordukhovich, {\em Variational Analysis and Generalized Differentiation, I: Basic Theory}, Springer, Berlin, 2006.

\bibitem{m-book2} B. S. Mordukhovich, {\em Variational Analysis and Generalized Differentiation, II: Applications}, Springer, Berlin, 2006.

\bibitem{m18} B. S. Mordukhovich, {\em Variational Analysis and Applications}, Springer, Cham, Switzerland, 2018.

\bibitem{mor_frict} J. J. Moreau, On unilateral constraints, friction and plasticity, in: G. Capriz and G. Stampacchia (Eds.), {\em New Variational Techniques in Mathematical Physics}, Proceedings of C.I.M.E.\ Summer Schools, Cremonese, Rome, 1974, pp.\ 173--322.

\bibitem{rob} S. M. Robinson, Strongly regular generalized equations, {\em Math. Oper.
Res.} {\bf 5} (1980), 43--62.

\bibitem{rw} R. T. Rockafellar and R. J-B. Wets, {\em Variational Analysis}, Springer, Berlin, 1998.

\bibitem{Tol} A. A. Tolstonogov, Control sweeping process, {\em J. Convex Anal.} {\bf 23} (2016), 1099--1123.

\bibitem{v} R. B. Vinter, {\em Optimal Control}, Birkh\"auser, Boston, 2000.

\end{thebibliography}
\end{document}
\begin{document} \title{Near-sunflowers and focal families} \begin{abstract} We present some problems and results about variants of sunflowers in families of sets. In particular, we improve an upper bound of the first author, K\"orner and Monti on the maximum number of binary vectors of length $n$ so that every four of them are split into two pairs by some coordinate. We also propose a weaker version of the Erd\H{o}s-Rado sunflower conjecture. \end{abstract} \section{Introduction} \label{sec:introduction} Introduced by Erd\H{o}s and Rado~\cite{ER}, sunflowers (also called strong $\Delta$-systems) have a long history of study and applications in extremal combinatorics and theoretical computer science. Recall that a family $\mathcal{H}$ of $r$ distinct subsets of $[n] = \{1, 2,\ldots, n\}$ is called a \emph{sunflower} of size $r$ if every $i \in [n]$ belongs to either $0$, $1$ or $r$ of the sets in $\mathcal{H}$. Erd\H{o}s and Rado famously conjectured that if $\mathcal{F}$ is a $k$-uniform family of sets (i.e., $|A| = k$ for every $A \in \mathcal{F}$) not containing a sunflower of size $r$, then $|\mathcal{F}| \le C^k$, where $C$ is a constant depending only on $r$. For many years, the best known upper bound was close to $k!$ for any fixed $r$. A recent breakthrough due to Alweiss, Lovett, Wu and Zhang~\cite{ALWZ} improved the bound to $(\log k)^{(1+o(1))k}$ for any fixed $r$, but the original conjecture is still open even for $r=3$. Seeking a bound that depends on the size $n$ of the ground set rather than the uniformity $k$, Erd\H{o}s and Szemer\'edi~\cite{ES} conjectured that if $\mathcal{F}$ is a family of subsets of $[n]$ not containing a sunflower of size $r$, then $|\mathcal{F}| \le c^n$, where $c < 2$ is a constant depending only on $r$. They showed (implicitly, made explicit by Deuber et al.~\cite{DEGKM}) that their conjecture would follow from the Erd\H{o}s-Rado conjecture. 
The recent solution by Ellenberg and Gijswijt~\cite{EG} of the cap set problem confirmed the $r=3$ case of the Erd\H{o}s-Szemer\'edi conjecture (via a reduction due to the first author, Shpilka and Umans~\cite{ASU}); see also Naslund and Sawin~\cite{NS} and Heged\H{u}s~\cite{H} for explicit bounds. But the conjecture is still open for $r \ge 4$. In this paper we introduce a weaker variant of sunflowers. \begin{definition} \label{def:ns} A family $\mathcal{H}$ of $r$ distinct subsets of $[n]$ is called a \emph{near-sunflower} of size $r$ if every $i \in [n]$ belongs to either $0$, $1$, $r-1$ or $r$ of the sets in $\mathcal{H}$. \end{definition} The weakening consists in adding the option of belonging to $r-1$ of the sets (this renders the property interesting only for $r \ge 4$). It is natural in that it makes the property symmetric: if $\mathcal{H}$ is a near-sunflower then so is $\{[n] \setminus A:\, A \in \mathcal{H}\}$. One may hope that when sunflowers are replaced by near-sunflowers, the notoriously difficult conjectures of Erd\H{o}s-Rado and Erd\H{o}s-Szemer\'edi will become easier. We show that this is indeed the case in the Erd\H{o}s-Szemer\'edi setting (bound depending on $n$), but leave the question in the Erd\H{o}s-Rado setting (bound depending on $k$) open. The following variant of near-sunflowers, in which one member of the family plays a distinguished role, will be of interest. It is convenient to define it for binary vectors of length $n$ instead of subsets of $[n]$ -- henceforth we will pass freely between these two equivalent formalisms. \begin{definition} \label{def:ff} A family $x^{(0)}, x^{(1)},\ldots, x^{(r-1)}$ of $r$ distinct vectors in $\{0,1\}^n$ is \emph{focal} with focus $x^{(0)}$ if for every coordinate $i \in [n]$ at least $r-2$ of the $r-1$ entries $x^{(1)}_i,\ldots, x^{(r-1)}_i$ are equal to $x^{(0)}_i$.
\end{definition} Thus, a focal family is a near-sunflower with the additional property that one of the vectors -- the focus -- is always in the majority. Unlike near-sunflowers, focal families are interesting already for $r=3$. While sunflowers and focal families are both special kinds of near-sunflowers, they are not logically comparable to each other. The two extremal functions corresponding to our definitions are: \begin{eqnarray*} g^{\mathrm{ns}}_r(n) & = & \max \{|\mathcal{F}|:\, \mathcal{F} \subseteq \{0,1\}^n \textrm{ contains no near-sunflower of size }r\}\\ g^{\mathrm{ff}}_r(n) & = & \max \{|\mathcal{F}|:\, \mathcal{F} \subseteq \{0,1\}^n \textrm{ contains no focal family of size }r\} \end{eqnarray*} It follows from the definitions that $g^{\mathrm{ns}}_r(n) \le g^{\mathrm{ff}}_r(n)$. Our main result gives upper and lower bounds for these functions. \begin{theorem} \label{thm:main} For $r \ge 3$ we have: \begin{itemize} \item[(a)] $g^{\mathrm{ns}}_r(n) \le g^{\mathrm{ff}}_r(n) \le (r-1) 2^{\lceil \frac{(r-2)n}{r-1} \rceil}$. \item[(b)] There exist positive constants $c^{\mathrm{ns}}_r$ and $c^{\mathrm{ff}}_r$ so that \begin{eqnarray*} g^{\mathrm{ns}}_r(n) & \ge & c^{\mathrm{ns}}_r (\frac{2}{(r+1)^{\frac{1}{r-1}}})^n,\\ g^{\mathrm{ff}}_r(n) & \ge & c^{\mathrm{ff}}_r (\frac{2}{r^{\frac{1}{r-1}}})^n. \end{eqnarray*} \end{itemize} \end{theorem} In particular, for $r=4$, our bounds (ignoring constants) are $2^{\frac{2n}{3}}$ from above and $(\frac{8}{5})^{\frac{n}{3}}$ and $2^{\frac{n}{3}}$ from below, for near-sunflowers and focal families, respectively. Families without near-sunflowers of size $4$ were previously studied (with different terminology) by the first author, K\"orner and Monti~\cite{AKM}, settling a problem suggested by S\'os in the late 80's. While their lower bound was the same as ours, their upper bound was roughly $2^{0.773n}$, with a proof based on Sauer's lemma. It is remarkable that our short and elementary proof improves their bound. 
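The two definitions translate directly into membership tests. The following sketch (ours, for illustration; vector distinctness is assumed rather than checked) encodes sets as 0/1 tuples:

```python
def is_near_sunflower(vectors):
    # r distinct 0/1 vectors form a near-sunflower iff every coordinate's
    # column sum lies in {0, 1, r-1, r} (the near-sunflower definition).
    r = len(vectors)
    return all(sum(col) in {0, 1, r - 1, r} for col in zip(*vectors))

def is_focal(focus, others):
    # Focal family: in every coordinate, at most one of the other vectors
    # disagrees with the focus (the focal-family definition).
    return all(sum(x[i] != focus[i] for x in others) <= 1
               for i in range(len(focus)))

petals = [(1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1), (1, 0, 0, 0)]
print(is_near_sunflower(petals))            # True
print(is_focal(petals[3], petals[:3]))      # True: each of the first three
                                            # differs from (1,0,0,0) in one
                                            # private coordinate
print(is_near_sunflower([(1, 0), (0, 1), (1, 1), (0, 0)]))  # False: both
                                            # column sums equal 2
```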
We note that they also extended their result to $r > 4$, but with a different definition. Whereas our near-sunflowers allow an element to belong to $0$, $1$, $r-1$ or $r$ of the sets, their definition allows everything except one forbidden value. We also remark that a concept analogous to our focal families, where instead of requiring ``at least $r-2$" in Definition~\ref{def:ff} one requires ``at least $1$," was studied in coding theory under the names separating codes (Cohen and Schaathun~\cite{CS}) and frameproof codes (Blackburn~\cite{B}). This case is somewhat simpler and the bounds obtained in those papers coincide with ours for the special case $r=3$. More generally, our paper follows a long line of literature in extremal combinatorics, information theory and coding theory. The common thread is bounding the largest possible cardinality of a family of vectors of length $n$, so that for any $r$ of them there exist coordinates displaying certain desirable patterns. For $r=2$, Sperner's~\cite{S} classical theorem on antichains is a prime example. For $r=3$ we mention the theorem of Erd\H{o}s, Frankl and F\"uredi~\cite{EFF} on families in which no set is covered by the union of two others; the problem of cancellative families solved by Tolhuizen~\cite{T}; and a variety of related problems described by K\"orner~\cite{K}. For $r=4$ there is Lindstr\"om's~\cite{L} theorem on determining two vectors from their sum modulo $2$, and K\"orner and Simonyi's~\cite{KS} bounds for two-different quadruples. For general $r$, we refer to the study of disjunctive codes (Dyachkov and Rykov~\cite{DR}). In all of these problems, and many others, the cardinality of the largest family grows exponentially in $n$, but (with few exceptions) the asymptotic growth rate is not known. Our problems are no exception. The proof of Theorem~\ref{thm:main} is given in the next section. 
In Section~\ref{sec:q} we adapt the definition and the bounds for focal families to vectors over larger alphabets, noting that the upper bound becomes essentially tight when the size of the alphabet exceeds $n$. We show in Section~\ref{sec:linear} that the upper bound in Theorem~\ref{thm:main} can be improved if the family of vectors is closed under addition modulo $2$ (i.e., forms a linear code). In Section~\ref{sec:one-sided} we consider one-sided focal families, where $0$ and $1$ entries are treated asymmetrically, and obtain corresponding bounds. Finally, in Section~\ref{sec:k} we discuss the challenge of obtaining an exponential upper bound in terms of the uniformity $k$, and prove such a bound under a stronger condition. \section{Proof of Theorem~\ref{thm:main}} \label{sec:proof} \paragraph{The upper bound.} Let $\mathcal{F} \subseteq \{0,1\}^n$ have cardinality $|\mathcal{F}| > (r-1) 2^{\lceil \frac{(r-2)n}{r-1} \rceil}$. We have to show that $\mathcal{F}$ contains a focal family of size $r$. Fix a partition $A_1,\ldots, A_{r-1}$ of $[n]$ into $r-1$ parts of size $|A_j| \ge \lfloor \frac{n}{r-1} \rfloor$ each. For a subset $S \subseteq [r-1]$ with $|S|=r-2$, say that a vector $x \in \mathcal{F}$ is $S$-\emph{unique} if there is no other vector in $\mathcal{F}$ with the same projection on $\bigcup_{j \in S} A_j$. Since $|\bigcup_{j \in S} A_j| \le \lceil \frac{(r-2)n}{r-1} \rceil$, for a given $S$ the number of $S$-unique vectors in $\mathcal{F}$ is at most $2^{\lceil \frac{(r-2)n}{r-1} \rceil}$. It follows from our assumption on $|\mathcal{F}|$ that there exists a vector $x^{(0)} \in \mathcal{F}$ which is not $S$-unique for any $S \subseteq [r-1]$ with $|S|=r-2$. This means that we can find vectors $x^{(1)},\ldots, x^{(r-1)} \in \mathcal{F} \setminus \{x^{(0)}\}$ so that each $x^{(j)}$ agrees with $x^{(0)}$ except possibly on coordinates in $A_j$. 
Note that $x^{(1)},\ldots, x^{(r-1)}$ are pairwise distinct, because if two of them were equal they would have to coincide with $x^{(0)}$. By construction, the subfamily $x^{(0)}, x^{(1)},\ldots, x^{(r-1)}$ is focal with focus $x^{(0)}$. \paragraph{The lower bounds.} As is common in such problems, we use random choice with alterations. We describe the argument for near-sunflowers, later pointing out how to adapt it to focal families. We start by forming a random family $\mathcal{G} \subseteq \{0,1\}^n$ to which each vector $x \in \{0,1\}^n$ belongs, independently, with probability $p$ (to be determined later). Then $\mathbb{E}(|\mathcal{G}|) = 2^n p$. Let $N_{\mathcal{G}}$ be a random variable counting the number of near-sunflowers of size $r$ contained in $\mathcal{G}$. By removing at most $N_{\mathcal{G}}$ vectors from $\mathcal{G}$, we obtain a family $\mathcal{F}$ of cardinality at least $|\mathcal{G}| - N_{\mathcal{G}}$ which contains no near-sunflower of size $r$. By linearity of expectation, $\mathbb{E}(|\mathcal{F}|) \ge 2^n p - N^{\mathrm{ns}}_r p^r$, where $N^{\mathrm{ns}}_r$ is the number of near-sunflowers of size $r$ in $\{0,1\}^n$. To estimate $N^{\mathrm{ns}}_r$, note that the number of $r \times n$ binary matrices so that the number of $1$ entries in each column is $0$, $1$, $r-1$ or $r$ is $(2r+2)^n$. Since near-sunflowers correspond to such matrices with distinct rows, and the order of the rows is immaterial, it follows that $N^{\mathrm{ns}}_r \le \frac{1}{r!} (2r+2)^n$. Thus, \[ \mathbb{E}(|\mathcal{F}|) \ge 2^n p - \frac{1}{r!} (2r+2)^n p^r,\] and choosing $p = c(\frac{1}{(r+1)^{\frac{1}{r-1}}})^n$ for a suitable $c=c(r) > 0$ yields $\mathbb{E}(|\mathcal{F}|) \ge c^{\mathrm{ns}}_r (\frac{2}{(r+1)^{\frac{1}{r-1}}})^n$ for some $c^{\mathrm{ns}}_r > 0$. Hence, there is a realization of $\mathcal{F}$ having at least this cardinality. 
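The column count used in this estimate is easy to confirm by exhaustive enumeration for small parameters; the sketch below (our own check, not part of the argument) counts the $r \times n$ binary matrices whose column sums lie in $\{0, 1, r-1, r\}$ and compares the result with $(2r+2)^n$.

```python
from itertools import product

def count_allowed_matrices(r, n):
    """Count r x n binary matrices in which every column has 0, 1, r-1 or r
    ones (rows need not be distinct); enumeration is column by column."""
    count = 0
    for matrix in product(product((0, 1), repeat=r), repeat=n):
        if all(sum(col) in {0, 1, r - 1, r} for col in matrix):
            count += 1
    return count

# Per column there are 1 + r + r + 1 = 2r + 2 allowed patterns, so the total
# is (2r+2)^n; e.g. for r = 4, n = 3 this gives 10^3 = 1000.
```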
Moving to focal families, the argument is similar, but now we have to estimate the number $N^{\mathrm{ff}}_r$ of focal families of size $r$ in $\{0,1\}^n$. The number of $r \times n$ binary matrices so that in each column the first entry is repeated at least $r-2$ times among the other entries is $(2r)^n$. Since focal families correspond to such matrices with distinct rows, and the order of the last $r-1$ rows is immaterial, it follows that $N^{\mathrm{ff}}_r \le \frac{1}{(r-1)!} (2r)^n$. Thus, \[ \mathbb{E}(|\mathcal{F}|) \ge 2^n p - \frac{1}{(r-1)!} (2r)^n p^r,\] and choosing $p = c(\frac{1}{r^{\frac{1}{r-1}}})^n$ for a suitable $c=c(r) > 0$ yields $\mathbb{E}(|\mathcal{F}|) \ge c^{\mathrm{ff}}_r (\frac{2}{r^{\frac{1}{r-1}}})^n$ for some $c^{\mathrm{ff}}_r > 0$, as required. \hspace{\stretch{1}}$\square$ \section{Focal families over larger alphabets} \label{sec:q} Given any integer $q \ge 2$, Definition~\ref{def:ff} can be applied verbatim to vectors in $[q]^n$ to define $q$-ary focal families. Let $g^{q\textrm{-}\mathrm{ff}}_r(n)$ be the corresponding extremal function. A straightforward adaptation of the proof above yields the following version of Theorem~\ref{thm:main} for $q$-ary focal families. \begin{theorem} \label{thm:q} For $q \ge 2$ and $r \ge 3$ we have \[ c^{q\textrm{-}\mathrm{ff}}_r (\frac{q}{((q-1)(r-1)+1)^{\frac{1}{r-1}}})^n \le g^{q\textrm{-}\mathrm{ff}}_r(n) \le (r-1) q^{\lceil \frac{(r-2)n}{r-1} \rceil} \] for some positive constant $c^{q\textrm{-}\mathrm{ff}}_r$. \end{theorem} When $q \ge n$ and $q$ is a prime power, we can replace the probabilistic lower bound by a constructive one which matches (up to a constant factor depending on $r$) the upper bound. \begin{proposition} \label{prop:q} If $q \ge n$ and $q$ is a prime power then \[ g^{q\textrm{-}\mathrm{ff}}_r(n) \ge q^{\lceil \frac{(r-2)n}{r-1} \rceil}. \] \end{proposition} \begin{proof} The Reed-Solomon code with suitable parameters gives the desired lower bound. 
For completeness, we describe the construction. We identify the elements of the finite field $\mathbb{F}_q$ with the $q$ symbols in our alphabet. We choose and fix $n$ distinct elements $a_1, a_2,\ldots, a_n \in \mathbb{F}_q$, and identify them with the coordinates $1, 2,\ldots, n$. There are $q^{\lceil \frac{(r-2)n}{r-1} \rceil}$ polynomials $p(x)$ of degree less than $\lceil \frac{(r-2)n}{r-1} \rceil$ over $\mathbb{F}_q$. With every such polynomial we associate the vector $(p(a_1), p(a_2),\ldots, p(a_n))$, which gives a family $\mathcal{F}$ of $q$-ary vectors of length $n$, with $|\mathcal{F}| = q^{\lceil \frac{(r-2)n}{r-1} \rceil}$. We claim that $\mathcal{F}$ contains no focal family of size $r$. Indeed, suppose that $x^{(0)}, x^{(1)}, \ldots, x^{(r-1)} \in \mathcal{F}$ form such a family with focus $x^{(0)}$. Then by the pigeonhole principle, some $x^{(j)}$, $j \in [r-1]$, has to agree with $x^{(0)}$ on at least $\lceil \frac{(r-2)n}{r-1} \rceil$ coordinates. This means that the corresponding polynomials $p^{(j)}$ and $p^{(0)}$ agree on at least $\lceil \frac{(r-2)n}{r-1} \rceil$ elements of $\mathbb{F}_q$. But this is impossible, as they are distinct polynomials of degree less than $\lceil \frac{(r-2)n}{r-1} \rceil$. \end{proof} \section{Improved upper bound in the linear case} \label{sec:linear} While Proposition~\ref{prop:q} shows that our upper bound is essentially tight when $q \ge n$, we believe that for $q=2$ and large $n$ it is not. To support this belief, we show here that the upper bound can be significantly improved if we restrict attention to families of binary vectors which are linear codes (i.e., closed under addition modulo $2$). This can be done for any value of $r$, but for simplicity and concreteness of the bound we do it for $r=4$. We are going to use a known bound on the tradeoff between cardinality and minimum Hamming distance in a family $\mathcal{F}$ of binary vectors of length $n$. 
Recall that, by the linear programming bound (McEliece, Rodemich, Rumsey and Welch~\cite{MRRW}), if the Hamming distance between any two distinct vectors in $\mathcal{F}$ is greater than $\delta n$, then \[ |\mathcal{F}| \le 2^{(h(\frac{1}{2} - \sqrt{\delta(1 - \delta)}) + o(1))n}, \] where $h(x)$ is the binary entropy function defined by \[ h(x) = -x \log_2 x -(1-x) \log _2 (1-x). \] We first prove the following theorem, which does not require linearity. \begin{theorem} \label{thm:3diff} Let $\mathcal{F}$ be a family of at least $2^{0.44n}$ subsets of $[n]$, where $n$ is large enough. Then there exist three pairs of distinct sets in $\mathcal{F}$ such that their symmetric differences $A \bigtriangleup B$, $C \bigtriangleup D$ and $E \bigtriangleup F$ are pairwise disjoint. \end{theorem} \begin{proof} We apply the above-mentioned bound repeatedly. First, a calculation shows that for $\delta = 0.213$, we have $h(\frac{1}{2} - \sqrt{\delta(1 - \delta)}) < 0.44$. As $|\mathcal{F}| \ge 2^{0.44n}$ and $n$ is large, the bound implies the existence of distinct sets $A, B \in \mathcal{F}$ with $|A \bigtriangleup B| \le 0.213n$. Next, by the pigeonhole principle, we can find at least $\frac{|\mathcal{F}|}{2^{|A \bigtriangleup B|}}$ sets in $\mathcal{F}$ having the same intersection with $A \bigtriangleup B$. Let $\mathcal{F}'$ be the family obtained by restricting these sets to $[n] \setminus (A \bigtriangleup B)$. A calculation shows that for $\delta' = 0.287$, we have $h(\frac{1}{2} - \sqrt{\delta'(1 - \delta')}) < 0.28$. As $|\mathcal{F}'| \ge 2^{0.44n - |A \bigtriangleup B|} > 2^{0.28(n - |A \bigtriangleup B|)}$ and $n$ is large, the bound implies the existence of distinct sets $C, D \in \mathcal{F}$ such that $(C \bigtriangleup D) \cap (A \bigtriangleup B) = \emptyset$ and $|C \bigtriangleup D| \le 0.287(n - |A \bigtriangleup B|)$. 
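Both of the entropy evaluations invoked above (``a calculation shows'') can be verified numerically; a quick sketch of the check (ours, for illustration only):

```python
from math import log2, sqrt

def h(x):
    """Binary entropy function."""
    return -x * log2(x) - (1 - x) * log2(1 - x)

def mrrw_exponent(delta):
    """Exponent h(1/2 - sqrt(delta(1-delta))) in the linear programming bound."""
    return h(0.5 - sqrt(delta * (1 - delta)))

# delta = 0.213 gives an exponent below 0.44, and delta' = 0.287 below 0.28.
```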
Now $|A \bigtriangleup B| + |C \bigtriangleup D| < 0.44n$, and again by the pigeonhole principle we can find two distinct sets $E, F \in \mathcal{F}$ having the same intersection with $(A \bigtriangleup B) \cup (C \bigtriangleup D)$, which completes the proof. \end{proof} \begin{corollary} \label{cor:linear} Let $\mathcal{F}$ be a linear subspace of $\{0,1\}^n$ of dimension at least $0.44n$, where $n$ is large enough. Then $\mathcal{F}$ contains a focal family of size $4$. \end{corollary} \begin{proof} Viewing $\mathcal{F}$ as a family of subsets of $[n]$, it is closed under symmetric difference. Hence the theorem yields three pairwise disjoint non-empty sets $X^{(1)}, X^{(2)}, X^{(3)} \in \mathcal{F}$. Taking the empty set as the focus $X^{(0)}$, we obtain a focal family of size $4$. \end{proof} \section{One-sided focal families} \label{sec:one-sided} The requirement defining a focal family may be separated into two one-sided requirements as follows. \begin{definition} \label{def:one-sided} Let $b \in \{0,1\}$. A family $x^{(0)}, x^{(1)},\ldots, x^{(r-1)}$ of $r$ distinct vectors in $\{0,1\}^n$ is $b$-\emph{focal} with focus $x^{(0)}$ if for every coordinate $i \in [n]$ such that $x^{(0)}_i = b$, at least $r-2$ of the $r-1$ entries $x^{(1)}_i,\ldots, x^{(r-1)}_i$ are equal to $b$. \end{definition} The corresponding extremal functions for $b = 0,1$ are: \[ g^{b\textrm{-}\mathrm{ff}}_r(n) = \max \{|\mathcal{F}|:\, \mathcal{F} \subseteq \{0,1\}^n \textrm{ contains no $b$-focal family of size }r\} \] It will be convenient to study the extremal questions first for $k$-uniform families. Let ${[n] \choose k}$ be the family of all $k$-element subsets of $[n]$. 
For $b = 0,1$ let: \[ g^{b\textrm{-}\mathrm{ff}}_r(n,k) = \max \{|\mathcal{F}|:\, \mathcal{F} \subseteq {[n] \choose k} \textrm{ contains no $b$-focal family of size }r\} \] Since $\mathcal{H}$ is $0$-focal if and only if $\{[n] \setminus A:\, A \in \mathcal{H}\}$ is $1$-focal, we have $g^{0\textrm{-}\mathrm{ff}}_r(n) = g^{1\textrm{-}\mathrm{ff}}_r(n)$ and $g^{0\textrm{-}\mathrm{ff}}_r(n,k) = g^{1\textrm{-}\mathrm{ff}}_r(n,n-k)$. So we only need to study these questions for one value of $b$. \begin{theorem} \label{thm:1-sided} For $r \ge 3$ and $0 \le k \le n$ we have \[ g^{1\textrm{-}\mathrm{ff}}_r(n,k) \le (r-1) \frac{{n \choose \lceil \frac{(r-2)k}{r-1} \rceil}}{{k \choose \lceil \frac{(r-2)k}{r-1} \rceil}}. \] \end{theorem} \begin{proof} Let $\mathcal{F}$ be a family of $k$-element subsets of $[n]$ containing no $1$-focal family of size $r$. For a set $A \in \mathcal{F}$, we say that a set $S$ is an \emph{own-subset} of $A$ if $S \subseteq A$ and $S \not\subseteq B$ for any $B \in \mathcal{F} \setminus \{A\}$. Consider an arbitrary $(r-1)$-tuple $A_1,\ldots, A_{r-1}$ of pairwise disjoint $\lfloor \frac{k}{r-1} \rfloor$-element subsets of a set $A \in \mathcal{F}$. If for every $j \in [r-1]$ there exists a set $B_j \in \mathcal{F} \setminus \{A\}$ such that $A \setminus A_j \subseteq B_j$, then the sets $A, B_1,\ldots, B_{r-1}$ form a $1$-focal family of size $r$ with focus $A$, contradicting our assumption on the family $\mathcal{F}$. Hence there exists $j \in [r-1]$ so that $A \setminus A_j$ is an own-subset of $A$. We claim that for a fixed set $A \in \mathcal{F}$, the probability that a uniformly random $\lceil \frac{(r-2)k}{r-1} \rceil$-element subset $S$ of $A$ is an own-subset is at least $\frac{1}{r-1}$. Indeed, consider the following two-step random process. First, choose uniformly at random an $(r-1)$-tuple $A_1,\ldots, A_{r-1}$ of pairwise disjoint $\lfloor \frac{k}{r-1} \rfloor$-element subsets of $A$.
Second, choose uniformly at random a value $j \in [r-1]$ and let $S = A \setminus A_j$. Clearly, the resulting $S$ is uniformly distributed over the $\lceil \frac{(r-2)k}{r-1} \rceil$-element subsets of $A$. Conditional on the choice in the first step, the argument in the previous paragraph implies that the probability that $S$ is an own-subset of $A$ is at least $\frac{1}{r-1}$. As this holds for any outcome of the first step, it also holds unconditionally. Thus, with each $A \in \mathcal{F}$ we can associate a family of at least $\frac{1}{r-1} {k \choose \lceil \frac{(r-2)k}{r-1} \rceil}$ own-subsets of $A$ of size $\lceil \frac{(r-2)k}{r-1} \rceil$. The disjoint union of these families over all $A \in \mathcal{F}$ is contained in ${[n] \choose \lceil \frac{(r-2)k}{r-1} \rceil}$, implying that $|\mathcal{F}| \cdot \frac{1}{r-1} {k \choose \lceil \frac{(r-2)k}{r-1} \rceil} \le {n \choose \lceil \frac{(r-2)k}{r-1} \rceil}$. It follows that $|\mathcal{F}| \le (r-1) \frac{{n \choose \lceil \frac{(r-2)k}{r-1} \rceil}}{{k \choose \lceil \frac{(r-2)k}{r-1} \rceil}}$, as claimed. \end{proof} \begin{corollary} \label{cor:one-sided} For $r \ge 3$ and $b = 0,1$ we have \[ g^{b\textrm{-}\mathrm{ff}}_r(n) \le (r-1) \sum_{k=0}^n \frac{{n \choose \lceil \frac{(r-2)k}{r-1} \rceil}}{{k \choose \lceil \frac{(r-2)k}{r-1} \rceil}} = (1 + \frac{r-2}{(r-1)^{\frac{r-1}{r-2}}} + o(1))^n. \] \end{corollary} \begin{proof} As pointed out above, it suffices to treat the case $b=1$. Let $\mathcal{F}$ be a family of subsets of $[n]$ containing no $1$-focal family of size $r$. Then $|\mathcal{F}| = \sum_{k=0}^n |\mathcal{F} \cap {[n] \choose k}|$, and applying the theorem to the families $\mathcal{F} \cap {[n] \choose k}$ yields the upper bound in summation form. To obtain the asymptotic expression for the sum, we first use Stirling's formula to approximate ${k \choose \lceil \frac{(r-2)k}{r-1} \rceil}$ up to a factor of order $\sqrt{k}$ by $(\frac{(r-1)^{r-1}}{(r-2)^{r-2}})^{\frac{k}{r-1}}$. 
Plugging this approximation in the sum gives \[ \sum_{k=0}^n {n \choose \lceil \frac{(r-2)k}{r-1} \rceil} (\frac{r-2}{(r-1)^{\frac{r-1}{r-2}}})^{\frac{(r-2)k}{r-1}} \] which, by the binomial formula, is $\Theta((1 + \frac{r-2}{(r-1)^{\frac{r-1}{r-2}}})^n)$. \end{proof} In the case $r=3$, a $1$-focal family is a triple of distinct sets satisfying $A \subseteq B \cup C$, and Corollary~\ref{cor:one-sided} reproduces the bound of $(\frac{5}{4}+o(1))^n$ obtained by Erd\H{o}s, Frankl and F\"uredi~\cite{EFF} for the maximum possible cardinality of families not containing such triples. In the case $r=4$, a $1$-focal family is a $4$-tuple of distinct sets satisfying $A \subseteq (B \cup C) \cap (B \cup D) \cap (C \cup D)$, and we get an upper bound of roughly $2^{0.47n}$ for the corresponding extremal problem. Lower bounds on the extremal functions $g^{b\textrm{-}\mathrm{ff}}_r(n,k)$ and $g^{b\textrm{-}\mathrm{ff}}_r(n)$ may be obtained, as above, by random choice with alterations. Here, however, one should start with a random subfamily of ${[n] \choose k}$ instead of $\{0,1\}^n$. Optimizing the bounds requires rather messy calculations, which we omit. \section{Bounds in terms of the set size} \label{sec:k} We turn our attention now to bounding the cardinality of a family of $k$-element sets (on a ground set of any size) not containing any near-sunflower of size $r$. Note that this question does not make sense for focal families, because we may take arbitrarily many pairwise disjoint $k$-element sets, avoiding focal families of size $3$. With respect to near-sunflowers, however, any upper bound on the cardinality of a $k$-uniform family not containing a sunflower of size $r$ automatically applies to our question, too; in particular, the recent bound of Alweiss, Lovett, Wu and Zhang~\cite{ALWZ} of order $(\log k)^{(1+o(1))k}$. Can this be improved for near-sunflowers? 
\begin{conjecture} \label{con:k} Let $r \ge 4$, and let $\mathcal{F}$ be a family of $k$-element sets which contains no near-sunflower of size $r$. Then $|\mathcal{F}| \le C^k$, where $C$ is a constant depending only on $r$. \end{conjecture} This is a weaker version of the Erd\H{o}s-Rado sunflower conjecture. In view of the fame and difficulty of the latter, this weakening may turn out to be a more accessible goal. But we have not been able to make progress, even for $r=4$. We do have an upper bound of the desired exponential form under a stronger condition. Saying that a $4$-tuple of distinct sets $A, B, C, D$ is not a near-sunflower can be expressed as follows: there is a way to partition $\{A, B, C, D\}$ into two pairs with intersecting symmetric differences. A natural strengthening is to require this for every pairing of $A, B, C, D$. Let $\mathcal{F}$ be a family of sets so that for any (ordered) four distinct sets $A, B, C, D \in \mathcal{F}$ we have $(A \bigtriangleup B) \cap (C \bigtriangleup D) \ne \emptyset$. K\"orner and Simonyi~\cite{KS} proved that if all sets are subsets of an $n$-element ground set then $|\mathcal{F}| \le 1.217^n$ for large $n$. But here we are interested in such families that are $k$-uniform on any ground set. Fixing $A, B$, the condition implies that any $C$ and $D$ must differ within $A \bigtriangleup B$, which has at most $2k$ elements, easily giving $|\mathcal{F}| \le 2^{2k}$. The following theorem improves this bound. \begin{theorem} \label{thm:k} Let $\mathcal{F}$ be a family of $k$-element sets so that for any (ordered) four distinct sets $A, B, C, D \in \mathcal{F}$ we have $(A \bigtriangleup B) \cap (C \bigtriangleup D) \ne \emptyset$. Then $|\mathcal{F}| \le 2.148^k$ for large enough $k$. \end{theorem} \begin{proof} Fix two sets $A, B \in \mathcal{F}$ so that $|A \cap B| = t$ maximizes the intersection size over all pairs of distinct sets in $\mathcal{F}$. 
For any set $E$, denote by $[E]_t$ a set which is $E$ itself if $|E| \le t$, and otherwise it is an arbitrarily chosen $(t+1)$-element subset of $E$. Let ${A \bigtriangleup B \choose \le t+1}$ be the family of all subsets of $A \bigtriangleup B$ of size at most $t+1$. Define a mapping $f : \mathcal{F} \setminus \{A,B\} \to {A \bigtriangleup B \choose \le t+1}$ by $f(C) = [C \cap (A \bigtriangleup B)]_t$. We check that $f$ is injective. Let $C$ and $D$ be two distinct sets in $\mathcal{F} \setminus \{A,B\}$. We have to show that $[C \cap (A \bigtriangleup B)]_t \ne [D \cap (A \bigtriangleup B)]_t$. If the sets on both sides have size at most $t$, then $C \cap (A \bigtriangleup B) \ne D \cap (A \bigtriangleup B)$ follows from $(A \bigtriangleup B) \cap (C \bigtriangleup D) \ne \emptyset$. If both $[C \cap (A \bigtriangleup B)]_t$ and $[D \cap (A \bigtriangleup B)]_t$ have size $t+1$, they cannot be equal since that would imply $|C \cap D| \ge t+1$, contradicting the maximality of $t$. Finally, if one of them has size at most $t$ and the other has size $t+1$, they are obviously not equal. This implies that \begin{equation*} |\mathcal{F}| - 2 \le \sum_{j=0}^{t+1} {2(k-t) \choose j}. \end{equation*} For large $k$, we want to bound the right-hand side from above by $C^k$ for some $C < 2.148$. Let us write $x = \frac{t}{2(k-t)}$. If $x \ge \frac{1}{2}$ then $2(k-t) \le k$ and the sum is bounded by $2^k$. Thus we may assume that $x < \frac{1}{2}$ and approximate the sum by $2^{2(k-t)h(x)} = 2^{\frac{2h(x)}{1+2x}k}$, where $h(x)$ is the binary entropy function. Routine calculations show that the maximum of $2^{\frac{2h(x)}{1+2x}}$ is attained when $x = (1-x)^3$ and its value is less than $2.148$. \end{proof} As in all these problems, the probabilistic method can be used to show the existence of a $k$-uniform family $\mathcal{F}$ with pairwise intersecting symmetric differences, so that $|\mathcal{F}|$ is exponential in $k$. 
Our argument gives $|\mathcal{F}| \approx 1.25^k$; we omit the details. \end{document}
\begin{document} \title{Learning Sets with Separating Kernels} \author{Ernesto De Vito\thanks{DIMA, Universit\`a di Genova, Genova, Italy. E-mail: {\em [email protected]}} \and Lorenzo Rosasco\thanks{DIBRIS, Universit\`a di Genova \& Istituto Italiano di Tecnologia, Italy, \&~Massachusetts Institute of Technology, U.S.A. E-mail: {\em [email protected]}} \and Alessandro Toigo\thanks{Dipartimento di Matematica, Politecnico di Milano, Milano, Italy \& I.N.F.N., Sezione di Milano, Milano, Italy. E-mail: {\em [email protected]}}} \date{\empty} \maketitle \begin{abstract} We consider the problem of learning a set from random samples. We show how relevant geometric and topological properties of a set can be studied analytically using concepts from the theory of reproducing kernel Hilbert spaces. A new kind of reproducing kernel, which we call separating kernel, plays a crucial role in our study and is analyzed in detail. We prove a new analytic characterization of the support of a distribution, {that naturally leads to a family of regularized learning algorithms which are provably universally consistent and stable with respect to random sampling.} Numerical experiments show that the proposed approach is competitive, and often better, than other state of the art techniques.\\ \end{abstract} \tableofcontents \section{Introduction} In this paper we study the problem of learning from data the set where the data probability distribution is concentrated. Our study is more broadly motivated by questions in unsupervised learning, such as the problem of inferring geometric properties of probability distributions from random samples. In recent years, there has been great progress in the theory and algorithms for supervised learning, i.e.~function approximation problems from random noisy data \cite{bouboulug04b,cucsma02,degylu96,PogSma03,vapnik98}. On the other hand, while there are a number of methods and studies in unsupervised learning, e.g.
algorithms for clustering, dimensionality reduction, dictionary learning (see Chapter~14 of~\cite{hatifr01}), many interesting problems remain largely unexplored. Our analysis starts with the observation that many studies in unsupervised learning hinge on at least one of the following two assumptions. The first is that the data are distributed according to a probability distribution which is absolutely continuous with respect to a reference measure, such as~the Lebesgue measure. In this case it is possible to define a density and the corresponding density level sets. Studies in this scenario include \cite{bibema09,dewi80,kots93,sthusc05} to name a few. Such an assumption prevents considering the case where the data are represented in a high dimensional Euclidean space but are concentrated on a Lebesgue negligible subset, such as a lower dimensional submanifold. This motivates the second assumption -- sometimes called {\em manifold assumption} -- postulating that the data lie on a low dimensional Riemannian manifold embedded in a Euclidean space. This latter idea has triggered a large number of different algorithmic and theoretical studies (see for example \cite{beni03,beni08,yale1,yale2,isomap,RSLLE}). Though the manifold assumption has proved useful in some applications, there are many practical scenarios where it might not be satisfied. This observation has motivated considering more general situations such as {\em manifold plus noise} models \cite{limaro11,NiSmWe08}, and models where the data are described by combinations of more than one manifold \cite{lete10,vimasa05}. Here we consider a different point of view and work in a setting where the data are described by an abstract probability space and a {\em similarity function} induced by a reproducing kernel \cite{smazho08}. In this framework, we consider the basic problem of estimating the set where the data distribution is concentrated (see Section \ref{PW} for a detailed discussion of related works).
A special class of reproducing kernels, which we call separating kernels, plays a central role in our study. First, it allows us to define a suitable metric on the probability space and makes the support of the distribution well defined; second, it leads to a new analytical characterization of the support in terms of the null space of the integral operator {associated to} the reproducing kernel. This last result is the key towards a new computational approach to learn the support from data, since the integral operator can be approximated with high probability from random samples \cite{RoBeDe10,smazho08}. {Estimation of the null space of the integral operator can be unstable, and regularization techniques can be used to obtain stable estimators.} In this paper we study a class of regularization techniques proposed to solve ill-posed problems \cite{enhane} and already studied in the context of supervised learning \cite{bapero07,logerfo08}. Regularization is achieved by {\em filtering} out the small eigenvalues of the empirical kernel matrix. Different algorithms are defined by different filter functions and have different computational properties. Consistency and stability properties for a large class of spectral filters {and of the corresponding algorithms are established} in a unified framework. Numerical experiments show that the proposed algorithms are competitive, and often better, than other state of the art techniques. The paper is divided into two parts. The first part includes Section \ref{sec:basic}, where we establish several mathematical results relating reproducing kernel Hilbert spaces {of functions on a set $X$ and the geometry of the set $X$ itself.} In particular, in this section we introduce the concept of separating kernel, which we further explore in Section \ref{RKHSCR}. These results are of interest in their own right, and are at the heart of our approach.
In the second {part of the} paper we discuss the problem of learning the support from data. {More precisely, in} Section \ref{sec:learn} we {illustrate some} algorithms for learning the support of a distribution from random {samples. In Section \ref{consistency} we establish universal consistency for the proposed methods and discuss stability to random sampling.} We conclude in Sections \ref{sec:disc} and \ref{sec:emp} with some further discussions and some numerical experiments, respectively. A conference version of this paper appeared in \cite{deroto10}. We now start by describing in some more detail our results and discussing some related works. \subsection{Summary of main results}\label{sec:summary} In this section we briefly describe the main ideas and results in the paper. The setting we consider is described by a probability space $(X,\rho)$ and a measurable reproducing kernel $K$ on the set $X$ \cite{aron50}. The data are independent and identically distributed (i.i.d.) samples $x_1, \dots, x_n$, each one drawn from $X$ according to $\rho$. The reproducing kernel $K$ reflects some prior information on the problem and, as we discuss in the following, will also define the geometry of $X$. The goal is to use the sample points $x_1, \dots, x_n$ to estimate the region where the probability measure $\rho$ is concentrated. To fix some ideas, the space $X$ can be thought of as a high-dimensional Euclidean space and the distribution $\rho$ as being concentrated on a region $X_\rho$, which is a smaller {-- and potentially lower dimensional -- subset of $X$ (e.g.~a linear} subspace or a manifold). In this example, the goal is to build from data an estimator $X_n$ which is, with high probability, close to $X_\rho$ with respect to a suitable metric. {We first note that a precise definition of $X_\rho$ requires some care.
If $\rho$ is assumed to have a {continuous} density with respect to some fixed reference measure (for example, the Lebesgue measure in the Euclidean space){, then} the region $X_\rho$ can be easily defined to be the closure of the set of points where the density function is non-zero. {Nevertheless, this assumption} would prevent considering the situation where the data are concentrated on a ``small'', possibly lower dimensional, subset of $X$. {Note that, if} the set $X$ {were} endowed with a topological structure and $\rho$ {were} defined on the corresponding Borel $\sigma$-algebra, it {would be} natural to define $X_\rho$ as the support of the measure $\rho$, i.e.~the smallest {\em closed} subset of $X$ having measure one. However, since the set $X$ is only assumed to be a measurable space, no a priori given topology is available. {Here we also remark} that the definition of $X_\rho$ is not the only point where some further structure on $X$ would be useful. Indeed, when defining a learning error, a notion of distance between the set $X_\rho$ and its estimator $X_n$ is also needed and hence some metric structure on $X$ is required.} {{The idea is to use} the properties of the reproducing kernel $K$ to induce a metric structure {-- and consequently a topology --} on $X$. Indeed, under some mild technical assumptions on $K$, the function \[ \dk(x,y)= \sqrt{K(x,x)+K(y,y)-{2}K(x,y)} \qquad \forall ~{x,y\in X} \] defines a metric on $X$, thus making $X$ a topological space. {Then, it is natural} to define $X_\rho$ to be the support of $\rho$ with respect to such metric topology. {Moreover, the Hausdorff distance $d_H$ induced by the metric $\dk$ provides {a notion} of distance between closed sets.} {The problem we consider can now be restated as follows:} we want to learn from data an {estimator} $X_n$ of $X_\rho$, such that {$\lim_{n\to\infty} d_H(X_n,X_\rho)=0$ almost surely.} While $X_\rho$ is now well defined, it is not clear how to build an estimator from data. 
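As a concrete illustration, the metric $\dk$ is immediate to compute for standard kernels. The sketch below (ours; the Gaussian kernel and its bandwidth are assumptions for illustration) evaluates $\dk$ for a kernel normalized so that $K(x,x)=1$, in which case $\dk$ is bounded by $\sqrt{2}$.

```python
from math import exp, sqrt

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel on R^d; note K(x, x) = 1 for every x."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return exp(-sq / (2 * sigma ** 2))

def d_K(x, y, kernel=gaussian_kernel):
    """Kernel-induced metric d_K(x, y) = sqrt(K(x,x) + K(y,y) - 2 K(x,y))."""
    return sqrt(kernel(x, x) + kernel(y, y) - 2 * kernel(x, y))
```

The resulting topology on $X$ is exactly the one with respect to which the support $X_\rho$ and the Hausdorff distance are taken.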
A main result in the paper, given in Theorem \ref{primo}, provides a new analytic characterization of $X_\rho$, which immediately suggests a new computational solution for the corresponding learning problem. To derive and state this result, we introduce a new notion of reproducing kernels, called separating kernels, that, roughly speaking, {captures} the sense in which the reproducing kernel and the probability distribution need to be related. We say that a reproducing kernel Hilbert space {$\hh$} (or equivalently its kernel) {\em separates} a subset $C\subset X$, if, for any $x\not\in C$, there exists $f\in\hh$ such that $$ f(x) \neq 0 \quad\text{ and }\quad f(y)= 0 \quad \forall y\in C . $$ If $K$ separates all possible closed subsets in $X$, we say that it is {\em completely separating}. Figure~\ref{separability} illustrates the notion of separating kernel in the simple example of the linear kernel in a Euclidean space. \begin{figure} \caption{The separating property illustrated in a simple situation where $X=\R^2$ and $\hh$ is induced by the linear kernel.} \label{separability} \end{figure} Now, Theorem \ref{primo} states that, if either $K$ is completely separating, or at least separates $X_\rho$, then $X_\rho$ is the level set of a suitable distribution dependent continuous function $F_\rho$. More precisely, let ${\hh}$ be the reproducing kernel Hilbert space associated to $K$ \cite{aron50}, $T:\hh\to\hh$ the integral operator with kernel $K$, and denote by $T^\dag$ its pseudo-inverse. If we consider the function $F_\rho$ on $X$, defined by \begin{equation*}\label{mainintro2} F_\rho(x)=\scal{T^\dagger T K_x}{K_x} \qquad \forall x\in X, \end{equation*} and $K$ separates $X_\rho$, then we prove that \begin{equation*} X_\rho=\set{x\in X\mid F_\rho(x)=1}, \end{equation*} (where for simplicity we are assuming $K(x,x)=1$ for all $x\in X$). The above result is crucial since the integral operator $T$ can be approximated with high probability from data (see \cite{RoBeDe10} and references therein).
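For a finite set $C=\{c_1,\dots,c_m\}$ the projection onto $\operatorname{span}\{K_{c_i}\}$ has a closed form in terms of the Gram matrix, $F_C(x)={\mathbf k}_x^{\top}{\mathbf G}^{\dagger}{\mathbf k}_x$ with ${\mathbf G}_{ij}=K(c_i,c_j)$ and $({\mathbf k}_x)_i=K(c_i,x)$, so the level-set characterization above can be checked numerically. A sketch (the kernel and the points are illustrative assumptions of ours):

```python
import numpy as np

def F_C(x, C, kernel):
    # F_C(x) = <P_C K_x, K_x>, P_C the projection onto span{K_c : c in C}.
    # For finite C this is k_x^T G^+ k_x with Gram matrix G_ij = K(c_i, c_j).
    G = np.array([[kernel(ci, cj) for cj in C] for ci in C])
    k_x = np.array([kernel(ci, x) for ci in C])
    return float(k_x @ np.linalg.pinv(G) @ k_x)

# Abel kernel with sigma = 1, so that K(x, x) = 1 (normalized).
kernel = lambda x, y: np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)))
C = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
# On C the projection acts as the identity, so F_C = K(x, x) = 1 there ...
assert abs(F_C(C[0], C, kernel) - 1.0) < 1e-8
# ... while off C the projection loses norm and F_C(x) < 1.
assert F_C([2.0, 2.0], C, kernel) < 1.0
```

The population quantity $F_\rho$ replaces the Gram pseudo-inverse by the operator pseudo-inverse $T^\dag$, which is what makes regularization necessary, as discussed next.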
However, since the definition of $F_\rho$ involves the pseudo-inverse of $T$, the support estimation problem can be unstable \cite{tikars77} and regularization techniques are needed to ensure stability. With this in mind, we propose and study a family of spectral regularization techniques which are classical in inverse problems \cite{enhane} and have been considered in supervised learning in \cite{bapero07,logerfo08}. We define an estimator by \[X_n=\{x\in X~|~ F_n(x)\geq 1-\tau_n\},\] where $F_n(x)= (1/n)\, {\mathbf K}_{{\mathbf x}}^{\top}\, g_{\la_n}({\mathbf K}_n/n)\,{\mathbf K}_{{\mathbf x}}$, with $({\mathbf K}_n)_{i,j}=K(x_i,x_j)$, ${\mathbf K}_{{\mathbf x}}$ is the column vector whose $i$-th entry is $K(x_i,x)$, and ${\mathbf K}_{{\mathbf x}}^{\top}$ is its {transpose}. Here $g_{\la_n}({\mathbf K}_n/n)$ is a matrix defined via spectral calculus by a spectral filter function $g_{\la_n}$ that suppresses the contribution of the eigenvalues smaller than $\la_n$. {Examples of spectral filters include Tikhonov regularization and truncated singular value decomposition \cite{logerfo08}, to name a few.} {This class of methods can be studied within a unified framework, and the error analysis in the paper establishes strong universal consistency if $X_\rho$ is separated by $K$. More precisely, under the latter assumption, we show in Theorem~\ref{thm:hauss} that \[ \lim_{n\to \infty} d_H (X_n, X_\rho)= 0 \qquad \text{almost surely},\] provided that $X$ is compact and the sequences $(\tau_n)_{n\geq 1},(\la_n)_{n\geq 1}$ are chosen so that $$ {\tau_n= 1-\min_{1\leq i \leq n}F_n(x_i),} \quad\quad \lim_{n\to\infty} \la_n=0,\quad\quad \sup_{n\geq 1} (L_{\la_n} \log n)/\sqrt{n}<+\infty, $$ where $L_{\la_n}$ is the Lipschitz constant of the function $r_{\la_n}(\sigma)=\sigma g_{\la_n}(\sigma)$. The above result is universal in the sense that consistency can be shown without assuming any regularity conditions on $\rho$ or $X_\rho$.
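A minimal implementation of this estimator with the Tikhonov filter $g_{\la}(\sigma)=(\sigma+\la)^{-1}$ and the data-driven threshold $\tau_n$ above can be sketched as follows (the kernel, the sampling distribution, and the value of $\la$ are illustrative choices of ours, not prescriptions of the paper):

```python
import numpy as np

def tikhonov_filter(M, lam):
    # Spectral filter g_lam(M) = (M + lam I)^{-1} (Tikhonov regularization).
    return np.linalg.inv(M + lam * np.eye(M.shape[0]))

def F_n(x, data, kernel, lam):
    # F_n(x) = (1/n) k_x^T g_lam(K_n / n) k_x, with (K_n)_ij = K(x_i, x_j).
    # (The Gram matrix is rebuilt at every call for clarity, not efficiency.)
    n = len(data)
    K_n = np.array([[kernel(xi, xj) for xj in data] for xi in data])
    k_x = np.array([kernel(xi, x) for xi in data])
    return float(k_x @ tikhonov_filter(K_n / n, lam) @ k_x) / n

kernel = lambda x, y: np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)))
rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, size=(50, 2)).tolist()   # samples from the support
lam = 0.01
# Data-driven threshold tau_n = 1 - min_i F_n(x_i), as in the consistency result.
tau = 1.0 - min(F_n(xi, data, kernel, lam) for xi in data)

def in_support(x):
    return F_n(x, data, kernel, lam) >= 1.0 - tau

assert in_support(data[0])              # samples always pass, by construction
assert not in_support([10.0, 10.0])     # a faraway point is rejected
```

The same skeleton accommodates any other spectral filter (e.g.~truncated singular value decomposition) by swapping `tikhonov_filter`.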
The proof of the above result crucially depends on estimating the deviation between $F_n$ and $F_\rho$. Indeed, for the above choice of the sequence $(\la_n)_{n\geq 1}$ we show that \[ \lim_{n\to \infty} \sup_{x\in X}\abs{F_\rho(x)-F_n(x)}=0 \qquad \text{almost surely}. \] Under suitable distribution dependent assumptions, the above result can be further developed to obtain finite sample bounds quantifying stability to random sampling. Indeed, if the pair $(\rho, K)$ is such that {$\sup_{x\in X}\nor{T^{-s/2} T^\dagger T K_x}<+\infty$}, with $0< s\leq 1$, and the eigenvalues of the (compact and positive) operator $T$ satisfy $\sigma_j\sim j^{-1/b}$ for some $0< b\leq 1$, then we prove in Theorem \ref{rates} that, for $n\geq 1$ and $\delta > 0$, we have $$ \sup_{x\in X} \abs{F_n(x)-F_\rho(x)} \leq C_{s,b,\delta} \left(\frac{1}{n}\right)^{\frac{s}{2s+b+1}} $$ with probability at least $1-2e^{-\delta}$, for $\la_n =n^{-1/(2s+b+1)}$ and a suitable constant $C_{s,b,\delta}$ which does not depend on $n$. } Finally, we remark that our construction relies on the assumption that the kernel $K$ separates the support $X_\rho$. The question then arises whether there exist kernels that can separate a large number of, and perhaps {\em all}, closed subsets, namely kernels that are \emph{completely separating}. {Indeed, a positive answer can be given and, for} translation invariant kernels on $\R^d$, Theorem~\ref{ale} actually gives a sufficient condition for a kernel to be completely separating in terms of its Fourier transform. As a consequence, the Abel kernel {$K(x,y)=e^{-\nor{x-y}/\sigma}$} on the Euclidean space $X=\R^d$ is completely separating. Interestingly, the Gaussian kernel {$K(x,y)=e^{-\nor{x-y}^2 /\sigma^2}$}, which is very popular in machine learning, is not.
\subsection{State of the art}\label{PW} The problem of building an estimator $X_n$ of a subset $X_\rho\subset X$ which is consistent with respect to some kind of metric among sets has been considered in seemingly diverse fields for different application purposes, from anomaly detection -- see \cite{chbaku09} for a review -- to surface estimation \cite{scgisp05}. We give a summary of the main approaches, with basic references for further details. \\ {\bf Support and Level Set Estimation}. Support estimation (also called set estimation) is a part of the theory of non-parametric statistics. We refer to \cite{cufr10,curo03} for a detailed review on this topic. Usually, the space $X$ is $\R^d$ with the Euclidean metric $d$, and $X_\rho$ is the corresponding support of $\rho$. If $X_\rho$ is convex, a natural estimator is the convex hull of the data $X_n=\operatorname{conv}\set{x_1,\ldots,x_n}$, for which convergence rates can be derived with respect to the Hausdorff distance \cite{duwa96,rei03}. If $X_\rho$ is not convex, Devroye and Wise \cite{dewi80} propose the estimator \[ X_n=\bigcup_{i=1}^n B(x_i,\eps_n),\] where $B(x,\eps)$ is the ball of center $x$ and radius $\eps$, and $\eps_n$ slowly goes to zero as $n$ tends to infinity. Consistency and minimax convergence rates are studied in \cite{dewi80,kots93} with respect to the distance \[ d_{\mu}(C_1,C_2)=\mu(C_1\triangle C_2),\] where $C_1\triangle C_2=(C_1\setminus C_2)\cup (C_2\setminus C_1)$ and $\mu$ is a suitable known measure.\\ If $\rho$ has a density $f$ with respect to some known measure $\mu$, a traditional approach is based on a non-parametric estimator $f_n$ of $f$, a so-called {\em plug-in} estimator. A kernel based class of plug-in estimators is proposed in \cite{cufr97}, namely \[ {X_n=\set{x \in X\mid f_n(x) \geq c_n} \quad \text{ with } \quad f_n(x)=\frac{1}{n h_n^d}\sum_{i=1}^n K\left(\frac{x-x_i}{h_n}\right),}\] where $h_n$ is a regularization parameter and $c_n$ is a suitable threshold.
Convergence rates with respect to $d_\mu$ are provided in \cite{cufr97}. \\ A related problem is level set estimation, where the goal is to detect the high density regions $\set{x\in X\mid f(x)\geq c}$. Consistency and optimal convergence rates for different plug-in estimators $$ X_n = \set{x\in X\mid f_n(x)\geq c} $$ have been studied with respect to both $d_H$ and $d_\mu$, see for example \cite{bibema09,scno06,tsy97} for a slightly different approach.\\ {\bf One class learning algorithm}. In machine learning, set estimation has been viewed as a classification problem where we have at our disposal only positive examples. An interesting discussion on the relation between density level set estimation, binary classification and anomaly detection is given in~\cite{sthusc05}. In this context, some algorithms inspired by Support Vector Machine (SVM) have been studied in \cite{shplshsmwi01,sthusc05,veve06}. A kernel method based on kernel principal component analysis is presented in \cite{hof07} and is essentially a special case of our framework.\\ {\bf Manifold Learning}. As we mentioned before, a setting which is of special interest is the one in which $X$ is $\R^d$ and $X_\rho$ is a low dimensional Riemannian submanifold. In this case, the error of an estimator is studied in terms of the error functional $$ d_\rho(X_\rho, X_n)=\int_{X_\rho} d(x, X_n)d\rho(x), $$ where $d$ is the Euclidean metric. Some results in this framework are given in {\cite{alchma11,mapo10,nami10}}.\\ {\bf Computational Geometry}. A classic situation, considered for example in image reconstruction problems, is when the set $X_\rho$ is a hyper-surface of $\R^d$ and the data $x_1,\ldots,x_n$ are either chosen deterministically or sampled uniformly. The goal in this case is to find a smooth function $f$ that gives the Cartesian equation of the hyper-surface, see for example {\cite{PoissonSurface,Eigencrust,mls}}. 
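Returning to the estimators recalled at the beginning of this survey, the Devroye--Wise construction is particularly simple to implement: membership in $X_n=\bigcup_i B(x_i,\eps_n)$ reduces to an $\eps_n$-neighborhood test against the sample. A minimal sketch (the radius and the sampling distribution are illustrative assumptions of ours):

```python
import numpy as np

def devroye_wise_member(x, data, eps):
    # x belongs to X_n = union of balls B(x_i, eps) iff some sample is eps-close.
    x = np.asarray(x, dtype=float)
    return any(np.linalg.norm(x - np.asarray(xi)) <= eps for xi in data)

rng = np.random.default_rng(1)
data = rng.uniform(0, 1, size=(200, 2))            # samples from the unit square
eps = 0.2                                          # stands in for eps_n -> 0
assert devroye_wise_member(data[0], data, eps)     # every sample lies in X_n
assert not devroye_wise_member([5.0, 5.0], data, eps)  # far points are excluded
```

The contrast with the spectral estimator of Section \ref{sec:summary} is that here the geometry is fixed by the Euclidean metric, whereas there it is induced by the kernel.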
\section{Kernels, Integral Operators and Geometry in Spaces of Probabilities}\label{sec:basic} In this section we establish the results that provide the foundations of our approach. The basic framework in this paper is described by a triple $(X,\rho,K)$, where \begin{itemize} \item[-] $X$ is a set (endowed with a $\sigma$-algebra $\A{X}$); \item[-] $\rho$ is a probability measure defined on $X$; \item[-] $K$ is a {(real)} reproducing kernel on $X$, i.e.~a {real} function on $X\times X$ of positive type. \end{itemize} We interpret $X$ as the data space and $\rho$ as the probability distribution generating the data. Roughly speaking, the kernel $K$ provides a natural {\em similarity measure} on $X$ and {it defines} its geometry. We denote by $\hh$ the reproducing kernel Hilbert {space} associated with the reproducing kernel $K$ (we refer to \cite{aron50,stch08} for an exhaustive review on the theory of reproducing kernel Hilbert spaces). The scalar product and norm in $\hh$ are denoted by $\scal{\cdot}{\cdot}$ and $\nor{\cdot}$, respectively. We recall that the elements of $\hh$ are {real} functions on $X$, and the reproducing property $f(x) = \scal{f}{K_x}$ holds true for all $x\in X$ and $f\in\hh$, where $K_x\in\hh$ is defined by $K_x(y)=K(y, x)$. In order to prove our results, we need some technical conditions on $K$. \begin{assm}\label{A} The kernel $K$ has the following properties: \begin{enumerate}[a)] \item\label{A3} for all $x,y\in X$ with $x\neq y$ we have $K_x\neq K_{y}$; \item\label{A1} the associated reproducing kernel Hilbert space $\hh$ is separable; \item\label{A2} the {real} function $K$ is measurable with respect to the product $\sigma$-algebra $\A{X}\otimes\A{X}$; \item\label{A4} for all $x\in X$, $K(x,x)=1$. \end{enumerate} \end{assm} Assumptions \ref{A}.\ref{A3}), \ref{A}.\ref{A1}) and \ref{A}.\ref{A2}) are minimal requirements. 
In particular, Assumptions \ref{A}.\ref{A3}) and \ref{A}.\ref{A1}) are needed in order to define a separable metric structure on $X$, while Assumption \ref{A}.\ref{A2}) ensures that such metric topology is compatible with the $\sigma$-algebra $\A{X}$ (see~Proposition~\ref{metrica} below). In Proposition~\ref{Prop.supp.}, the combination of~\ref{A}.\ref{A3}), \ref{A}.\ref{A1}) and \ref{A}.\ref{A2}) will allow us to define the support $X_\rho$ of the probability measure $\rho$, as anticipated in Section \ref{sec:summary}. Assumption~\ref{A}.\ref{A4}), instead, is a normalization requirement, and could be replaced by a suitable boundedness condition (in fact, even weaker integrability conditions could also be considered). We choose the normalization $K(x,x) = 1$ $\forall x\in X$ since it makes equations more readable, and it is not restrictive in view of {Proposition~\ref{l:normalization} in \ref{app:lemma}.} We now show how {the above assumptions} {allow} us to define a metric on $X$ and to characterize the corresponding support of $\rho$ in terms of the integral operator with kernel $K$. \subsection{Metric induced by a kernel} Our first result makes $X$ a separable metric space isometrically embedded in $\hh$. This point of view is developed in \cite{smazho08}. The relation between metric spaces isometrically embedded in Hilbert spaces and kernels of positive type was studied by Schoenberg around 1940. A recent discussion on this topic can be found in Chapter~2~\S~3 of \cite{BeChRe84}. \begin{prop}\label{metrica} Under~Assumption~\ref{A}.\ref{A3}), the map $\dk:X\times X\to [0,+\infty[$ defined by \begin{equation} \dk(x,y)=\nor{K_x-K_{y}}=\sqrt{K(x,x)+K(y,y)-{2}K(x,y)}\label{metric} \end{equation} is a metric on $X$. Furthermore \begin{enumerate}[i)] \item\label{P.1.i} the map {$x\mapsto K_x$ is an isometry from $X$ into $\hh$}; \item\label{P.1.ii} {the kernel} $K$ is a continuous function on $X\times X$, and each $f\in\hh$ is a continuous function. 
\end{enumerate} If {also} Assumption~\ref{A}.\ref{A1}) is satisfied, then \begin{enumerate}[i)] \setcounter{enumi}{2} \item\label{P.1.mio} the metric space $(X,\dk)$ is separable. \end{enumerate} Finally, if also Assumption~\ref{A}.\ref{A2}) holds true, then \begin{enumerate}[i)] \setcounter{enumi}{3} \item\label{P.1.iii} the closed subsets of $X$ are measurable (with respect to $\A{X}$); \item\label{P.1.iv} if $Y$ is a topological space endowed with its Borel $\sigma$-algebra and $f: X\to Y$ is continuous, then $f$ is measurable; in particular, the functions in $\hh$ are measurable. \end{enumerate} \end{prop} \begin{proof} Many of these properties are known in the literature, see for example~\cite{cadeto06,stch08} and references therein. For the reader's convenience, we give a self-contained short proof.\\ Assumption~\ref{A}.\ref{A3}) states that { the map $x\mapsto K_x$} is injective. Since {$\dk(x,y)=\nor{K_x-K_y}$} by definition, $\dk$ is the metric on $X$ making {$x\mapsto K_x$ an isometry}, as claimed in item \ref{P.1.i}). {Regarding item~\ref{P.1.ii}), the kernel $K$ is continuous since $K(x,y) = \scal{K_y}{K_x}$ and {the map $x\mapsto K_x$ is continuous by item \ref{P.1.i}); furthermore,} the elements of $\hh$ are continuous by the reproducing property $f(x) = \scal{f}{K_x}$.}\\ If also Assumption~\ref{A}.\ref{A1}) holds true, then the set ${\set{K_x\mid x\in X}}$ is separable {in $\hh$, and so is $X$ as the map $x\mapsto K_x$ is isometric from $X$ onto $\set{K_x\mid x\in X}$.} Item \ref{P.1.mio}) then follows.\\ Suppose now that also Assumption~\ref{A}.\ref{A2}) holds true. Then the map $\dk$ is measurable, so that the open balls of $X$ are measurable. Since $X$ is separable, any open set is a countable union of open balls, hence it is measurable. It follows that the closed subsets are measurable, too, hence item \ref{P.1.iii}).\\ Let $Y$ and $f$ be as in item \ref{P.1.iv}).
If $A\subset Y$ is closed, then $f^{-1} (A)$ is closed in $X$, hence measurable by item \ref{P.1.iii}). It follows that $f^{-1} (A)$ is measurable for all Borel sets $A\subset Y$, i.e.~$f$ is measurable. Since the elements of $\hh$ are continuous by \ref{P.1.ii}), they are measurable, and item \ref{P.1.iv}) is proved. \end{proof} In the rest of the paper we will always consider $X$ as a topological metric space with metric $\dk$. Note that $\dk$ is the metric induced on $X$ by the norm of $\hh$ through the embedding ${x\mapsto K_x}$. The next result shows that under our assumptions we can define the set $X_\rho$ as the smallest closed subset of $X$ having measure one. \begin{prop}\label{Prop.supp.} Under Assumptions~\ref{A}.\ref{A3}), \ref{A}.\ref{A1}) and \ref{A}.\ref{A2}), there exists a unique closed subset $X_\rho\subset X$ with $\rho(X_\rho)=1$ satisfying the following property: if $C$ is a closed subset of $X$ and $\rho(C)=1$, then $C\supset X_\rho$. \end{prop} \begin{proof} Define the measurable set $X_\rho$ as \begin{equation*} \label{support} X_\rho=\bigcap_{ \begin{smallmatrix} C\text{ \rm closed }\\\rho(C)=1 \end{smallmatrix}} C . \end{equation*} Clearly, $X_\rho$ is closed {and measurable by Proposition \ref{metrica}}. Since $X$ is separable, there exists a sequence of closed subsets $(C_j)_{j\geq 1}$ such that every closed subset $C$ can be written as $C=\bigcap_{k} C_{j_k}$ for some suitable subsequence $(C_{j_k})_{k\geq 1}$. Hence, $ X_\rho=\bigcap\limits_{ j\mid \rho(C_j)=1} C_j$ and, as a consequence, $\rho(X_\rho)=1$. \end{proof} We add one remark. The set $X_\rho$ is called {\em the support} of the measure $\rho$ and clearly depends both on the probability distribution and on the topology induced by the kernel $K$ through the metric $\dk$ on $X$. \subsection{Separating Kernels} The following definition of separating kernel plays a central role in our approach.
\begin{defn}\label{separated} We say that {the reproducing kernel Hilbert space $\hh$} separates a subset $C\subset X$, if, for all $x\not\in C$, there exists $f\in\hh$ such that \begin{equation} f(x) \neq 0 \quad\text{ and }\quad f(y)= 0 \quad \forall y\in C . \label{separa} \end{equation} In this case we also say that the corresponding reproducing kernel separates $C$. \end{defn} We add some comments. First, in~\eqref{separa} the function $f$ depends on $x$ and $C$. Second, the reproducing property and \eqref{separa} imply that $K_x\neq 0$ and $K_x\neq K_{y}$ for all $x\not\in C$ and $y\in C$ (compare with Assumption~\ref{A}.\ref{A3})). Finally, we stress that a different notion of {\em separating} property is given in \cite{stch08}. \begin{rem}\label{sep_lin} Given an arbitrary reproducing kernel Hilbert space $\hh$, there exist sets that are not separated by $\hh$. For example, if $X= \R^d$ and $\hh$ is the reproducing kernel Hilbert space with linear kernel $K(x,y)= x^T y$, the only sets separated by $\hh$ are the linear manifolds, that is, the sets of points defined by homogeneous linear equations (see Figure \ref{separability}). A natural question is then whether there exist kernels capable of separating large classes of subsets and in particular all the closed subsets. Section \ref{RKHSCR} answers this question positively by introducing the notion of completely separating kernels. \end{rem} Next, we provide an equivalent characterization of the separating property, which will be the key to a computational approach to support estimation. For any set $C$, let $P_C:\hh\to\hh$ be the orthogonal projection onto the closed subspace $$ \hh_C=\overline{\text{span}\set{ K_x\mid x\in C}}, $$ i.e.~the closure of the linear space generated by the family $\set{K_x\mid x\in C}$. Note that $P_C^2=P_C$, $P_C^{\top}=P_C$ and \begin{equation*}\label{proj_def} \ker{P_C}=\set{K_x\mid x\in C}^{\perp}=\{ f\in\hh \mid f(x) = 0 \ \forall x\in C \}.
\end{equation*} Moreover, define the function \begin{equation}\label{FC_def} F_C : X\to\R, \qquad F_C (x) = \scal{P_CK_x}{K_x}. \end{equation} {\begin{rem}\label{rem:Stein1} The Hilbert space $\hh_C$ is a closed subspace of the reproducing kernel Hilbert space $\hh$, and it is itself a reproducing kernel Hilbert space of functions on $X$ with reproducing kernel $K_C(x,y)=\scal{P_CK_y}{P_CK_x}=\scal{P_CK_y}{K_x}$. {Note that $K_C(x,y) = K(x,y)$ for all $x,y\in C$ by definition of $P_C$.} Clearly, the function $F_C$ corresponds to the value of $K_C$ on the diagonal. \end{rem} } Then, we have the following theorem. \begin{thm}\label{prop_proj} For any subset $C\subset X$, the following facts are equivalent: \begin{enumerate}[i)] \item $\hh$ separates the set $C$ ; \item for all $x\not\in C$, $K_x \notin \ran{P_C}$; \item $\displaystyle{ C=\set{x\in X \mid F_C(x)=K(x,x)}}$; \item $\displaystyle{ {\set{K_x\mid x\in C} = \set{K_x\mid x\in X}}\cap \ran{P_C}}$. \end{enumerate} Under Assumption~$\ref{A}.\ref{A3})$, if $C$ is separated by $\hh$, then $C$ is closed with respect to the metric~$\dk$. \end{thm} \begin{proof} We first prove that i) $\Rightarrow$ ii). Given $x\notin C$, by assumption there is $f\in \hh$ such that $\scal{f}{K_x}=f(x)\neq 0$, i.e.~$K_x\not\in \set{f}^\perp$, and $\scal{f}{K_{y}} =f(y)=0$ for all $y\in C$, i.e.~$f\in \ker{P_C} = \ran{P_C}^\perp$. It follows that $\ran{P_C} \subset \set{f}^\perp$, and then $K_x\notin \ran{P_C}$.\\ We prove ii) $\Rightarrow$ iii). {If $x\in C$, then $K_x\in\ran{P_C}$ by definition of $P_C$, so that $F_C(x)=K(x,x)$. Hence $C\subset \set{x\in X \mid F_C(x) = K(x,x)}$. If $x\not\in C$, then by assumption $P_C K_x \neq K_x$, i.e.~$(I-P_C)K_x\neq 0$. By the equality $$ \nor{(I-P_C)K_x}^2 = \scal{K_x}{K_x} - \scal{P_C K_x}{K_x} - \scal{K_x}{P_C K_x} + \scal{P_C K_x}{P_C K_x} = K(x,x)-F_C(x) , $$ this implies $F_C(x) \neq K(x,x)$. Hence $C\supset \set{x\in X \mid F_C(x) = K(x,x)}$.\\} We prove iii) $\Rightarrow$ i). 
If $x\not\in C$, define $f=(I-P_C)K_x\in\ker{P_C}$, so that $f(y)=0$ for all $y\in C$. Furthermore, $f(x)=K(x,x)-F_C(x)\neq 0$. Thus, $f$ separates the set $C$.\\ Finally, iv) is a restatement of~ii) taking into account that $K_x\in\ran{P_C}$ for all $x\in C$ by construction.\\ Under Assumption~$\ref{A}.\ref{A3})$, {the map $x\mapsto F_C(x)-K(x,x) = \scal{P_C K_x}{K_x} - K(x,x)$ is continuous by Proposition~\ref{metrica}.} By item iii), $C$ is the $0$-level set of this function, hence $C$ is closed. \end{proof} {Proposition \ref{l:normalization} in \ref{app:lemma} shows} that the reproducing kernel $K$ can be normalized under the mild assumption that $K(x,x)\neq 0$ for all $x\in X$, so that Assumption~\ref{A}.\ref{A4}) can be satisfied up to a rescaling of $K$. \subsubsection{A Special Case: Metric Spaces}\label{sec:special_case} It may be the case that the set $X$ has its own metric $d_X$, and the $\sigma$-algebra $\A{X}$ is the Borel $\sigma$-algebra associated with the topology induced by $d_X$. The following proposition shows that {the metrics $\dk$ and $d_X$ induce the same topology on $X$}, provided that $\hh$ separates all the $d_X$-closed subsets and the corresponding kernel is continuous. \begin{prop}\label{c:connection} Let $X$ be a separable metric space with respect to a metric $d_X$, and $\A{X}$ the corresponding Borel $\sigma$-algebra. Let $\hh$ be a reproducing kernel Hilbert space on $X$ with kernel $K$. Assume that the kernel $K$ is a continuous function with respect to $d_X$ and that the space $\hh$ separates every subset of $X$ which is closed with respect to $d_X$. Then \begin{enumerate}[i)] \item Assumptions~$\ref{A}.\ref{A3})$, $\ref{A}.\ref{A1})$ and $\ref{A}.\ref{A2})$ hold true, and $K(x,x)>0$ for all $x\in X$; \item a set is closed with respect to $\dk$ if and only if it is closed with respect to $d_X$. 
\end{enumerate} \end{prop} \begin{proof} The kernel is measurable and the space $\hh$ is separable by Proposition $5.1$ and Corollary $5.2$ in \cite{cadeto06}. Since the points are closed sets for $d_X$ and the $d_X$-closed sets are separated by $\hh$, we have $K_x\neq 0$ (i.e.~$K(x,x) > 0$) for all $x\in X$ and $K_x\neq K_{y}$ if $x\neq y$ by the discussion following Definition \ref{separated}. \\ We show that $d_X$ and $\dk$ are equivalent metrics. Take a sequence $(x_j)_{j\geq 1}$ such that for some $x\in X$ it holds that $\lim_{j\to\infty} d_X(x_j,x)=0$. Since $K$ is continuous with respect to $d_X$, we have $\lim_{j\to\infty} \dk(x_j,x)=0$. Hence, the $\dk$-closed sets are $d_X$-closed, too. Conversely, if the set $C$ is {$d_X$-closed}, since $\hh$ separates $C$, Theorem~\ref{prop_proj} implies that $C=\set{x\in X\mid K(x,x)-F_C(x)=0}$, which is a $\dk$-closed set by $\dk$-continuity of the map $x\mapsto K(x,x)-F_C(x)$. \end{proof} Item ii) of the above proposition states that the metrics $\dk$ and $d_X$ are equivalent and implies that the set $X_\rho$ defined in Proposition~\ref{Prop.supp.} coincides with the support of $\rho$ with respect to the topology induced by $d_X$. \subsection{{The} Integral Operator Defined by the Kernel}\label{sec:int} We denote by $\mathcal S_1$ the Banach space of the trace class operators on $\hh$, with trace class norm $$ \nor{A}_{\mathcal S_1} = \tr{(A^{\top} A)^{\frac{1}{2}}} = \sum_{i\in I} \scal{(A^{\top} A)^{\frac{1}{2}} e_i}{e_i} , $$ where $\set{e_i}_{i\in I}$ is any orthonormal basis of $\hh$. Furthermore, we let $\mathcal S_2$ be the separable Hilbert space of the Hilbert-Schmidt operators on $\hh$, with Hilbert-Schmidt norm \[ \nor{A}^2_{\mathcal S_2}=\tr{A^{\top}A} = \sum_{i\in I} \nor{A e_i}^2 . \] Finally, if $A$ is any bounded operator on $\hh$, we denote by $\nor{A}_\infty$ its uniform operator norm. It is standard that $\nor{A}_\infty\leq \nor{A}_{\mathcal S_2} \leq \nor{A}_{\mathcal S_1}$.
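In finite dimensions these are the familiar spectral, Frobenius (Hilbert-Schmidt), and nuclear (trace-class) norms of a matrix, all computable from its singular values, and the chain of inequalities can be verified directly; a quick numerical sanity check (the random matrix is an illustrative stand-in for a finite-rank operator):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))

# Singular values of A = eigenvalues of (A^T A)^{1/2}.
s = np.linalg.svd(A, compute_uv=False)
op_norm = s.max()                  # uniform (operator) norm
hs_norm = np.sqrt((s ** 2).sum())  # Hilbert-Schmidt (Frobenius) norm
tr_norm = s.sum()                  # trace-class (nuclear) norm

assert op_norm <= hs_norm <= tr_norm
```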
Moreover, for all functions $f_1,f_2\in \hh$, the rank-one operator $f_1\otimes f_2$ on $\hh$ defined by $$ (f_1\otimes f_2)(f) = \scal{f}{f_2}\, f_1 \qquad \forall f\in\hh $$ is trace class, and $\nor{f_1\otimes f_2}_{\mathcal S_1} = \nor{f_1\otimes f_2}_{\mathcal S_2} = \nor{f_1}\nor{f_2}$. We recall a few facts on integral operators with kernel $K$ (see \cite{cadeto06} for proofs and further discussions). Under Assumption~\ref{A}, the $\mathcal S_1$-valued map $x\mapsto K_x\otimes K_x$ is Bochner-integrable with respect to $\rho$, and its integral \begin{equation}\label{eq:2} T {=} \int_{X} K_x\otimes K_x d\rho (x) \end{equation} defines a positive trace class operator $T$ with $\nor{T}_{\mathcal S_1} = \tr{T} = 1$ (a short proof is given in Proposition \ref{Tprop} of the Appendix). Using the reproducing property of $\hh$, it is straightforward to see that $T$ is simply the integral operator with kernel $K$ acting on $\hh$, i.e. \[ (Tf)(x)=\int_X K(x,y) f(y) d\rho(y) \qquad \forall f\in\hh . \] The following is a key result in our approach. \begin{thm}\label{T} Under Assumption~\ref{A}, the null space of $T$ is \begin{equation}\label{nucleo} \ker{T}=\set{K_x\mid x\in X_\rho}^\perp =\ker{P_{X_\rho}}, \end{equation} where $X_\rho$ {is the support of $\rho$ as defined in Proposition \ref{Prop.supp.}}. \end{thm} \begin{proof} Note that, for all $f\in \hh$, the set \[ C_f=\set{x\in X\mid f(x)=0}= \set{x\in X\mid \scal{f}{K_x}=0} \] is closed since $f$ is continuous. We now prove Equation \eqref{nucleo}. Since $T$ is a positive operator, the spectral theorem gives that $Tf=0$ if and only if $\scal{Tf}{f}=0$. The definition of $T$ and the reproducing property give that \begin{align*} \scal{Tf}{f}=\int_X \scal{(K_x\otimes K_x)f}{f} d\rho (x) = \int_X \abs{\scal{ K_x}{f}}^2 d\rho (x) =\int_X |f(x)|^2 d\rho (x), \end{align*} hence the condition $\scal{Tf}{f}=0$ is equivalent to the fact that $f(x)=0$ for $\rho$-almost every $x\in X$.
Hence $f\in\ker{T}$ if and only if $\rho(C_f) = 1$, i.e.~$C_f\supset X_\rho$, or equivalently $\scal{f}{K_x}=0$ $\forall x\in X_\rho$. Equation \eqref{nucleo} then follows. \end{proof} In the following, we will use the abbreviated notation $P_\rho = P_{X_\rho}$. Note that the space $\hh$ splits into the direct sum $\hh=\hh_\rho\oplus \hh_\rho^\perp$, where \begin{align*} \hh_\rho & = {\ran{P_\rho} =\overline{\ran{T}} =\overline{\text{span}\{K_x \mid x\in X_\rho \}}} \\ \hh_\rho^\perp & = \ker{P_\rho} =\ker{T}=\{ f\in\hh \mid f(x) = 0 \ \forall x\in X_\rho \}. \end{align*} {\begin{rem}\label{rem:Stein2} The reproducing kernel Hilbert space $\hh_\rho$ (see Remark \ref{rem:Stein1}) has been considered before \cite{stsc12}, and in particular in the context of semi-supervised manifold regularization \cite{benisi06}, where $X_\rho$ is assumed to be an embedded manifold. The corresponding reproducing kernel is $K_\rho (x,y) = \scal{P_\rho K_y}{K_x}$ and $F_{X_\rho}(x) = K_\rho (x,x)$. See also the discussion in Section~6. \end{rem}} Under Assumption~\ref{A}, we also introduce the integral operator $L_K : L^2(X,\rho)\to L^2(X,\rho)$, \begin{equation*} (L_K \phi) (x) {=} \int_X K(x,y) \phi (y) d\rho (y) \qquad \forall \phi\in L^2(X,\rho),\label{eq:7} \end{equation*} which is a positive trace class operator, too. Note the difference between the operators $T$ and $L_K$: although their definitions are formally the same, the respective domains and images change. {Since $T$ and $L_K$ are positive trace class operators, by the Hilbert-Schmidt theorem each of them admits an orthonormal family of eigenvectors in $\hh$ and $L^2(X,\rho)$, respectively, with a corresponding family of positive eigenvalues. 
The two spectral decompositions are strongly related, as we now briefly recall (see also Proposition~8 of \cite{RoBeDe10} and Theorem 2.11 of \cite{stsc12}).} {Denote by $(\sigma_j)_{j\in J}$ the (finite or countable) family of strictly positive eigenvalues of $L_K$, where each eigenvalue is repeated according to its (finite) multiplicity. For each $j\in J$ select a corresponding eigenvector $\phi_j\in L^2(X,\rho)$ {in such a way that the sequence $(\phi_j)_{j\in J}$ is orthonormal in $L^2(X,\rho)$.} The Hilbert-Schmidt theorem provides that \begin{equation}\label{eq:3} L_K=\sum_{j\in J} \sigma_j \phi_j\otimes \phi_j , \end{equation} where the series converges in trace norm. In general, each element $\phi_j$ is an equivalence class of functions defined $\rho$-almost everywhere. In particular, the value of $\phi_j$ is not defined outside $X_\rho$. However, in each equivalence class we can choose a unique continuous function, denoted again by $\phi_j$, which is defined at every point of $X$ by means of the {\em extension equation} {\cite{coilaf06,RoBeDe10}} \begin{equation}\label{eq:outofsample} \phi_j(x) = \sigma_j^{-1} \int_X K(x,y) \phi_j (y) d\rho (y)\qquad \forall x\in X . \end{equation} With this choice, which will be implicitly assumed in the following, the family $(\sigma_j)_{j\in J}$ coincides with the family of strictly positive eigenvalues of $T$ (with the same multiplicities), $(\sqrt{\sigma_j} \phi_j)_{j\in J}$ is an orthonormal family in $\hh$ of eigenfunctions of $T$, and \begin{equation} T =\sum_{j\in J} \sigma_j\,\, (\sqrt{\sigma_j} \phi_j)\otimes (\sqrt{\sigma_j} \phi_j) = \sum_{j\in J} \sigma_j^2 \ \phi_j\otimes \phi_j,\label{eq:6} \end{equation} where the series converges in the Banach space $\mathcal S_1$ (hence in $\mathcal S_2$), see e.g. \cite{cadeto06,RoBeDe10,stsc12}. As $\nor{T}_{\mathcal S_1} = 1$, the positive sequence $(\sigma_j)_{j\in J}$ is summable and sums up to $1$.
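The summability statement has an exact empirical analogue: the nonzero eigenvalues of the empirical operator $T_n=\frac{1}{n}\sum_{i=1}^n K_{x_i}\otimes K_{x_i}$ coincide with those of the normalized Gram matrix ${\mathbf K}_n/n$, whose trace equals $1$ whenever $K(x,x)=1$. A quick numerical check (the Abel kernel and the sample are illustrative assumptions of ours):

```python
import numpy as np

# Abel kernel with sigma = 1, so K(x, x) = 1 (Assumption A.d).
kernel = lambda x, y: np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)))
rng = np.random.default_rng(3)
data = rng.standard_normal((40, 2))
K_n = np.array([[kernel(x, y) for y in data] for x in data])

# Eigenvalues of K_n / n = nonzero eigenvalues of the empirical operator T_n;
# they are nonnegative and sum to tr(K_n / n) = 1.
sigma = np.linalg.eigvalsh(K_n / len(data))
assert abs(sigma.sum() - 1.0) < 1e-8
assert sigma.min() > -1e-8   # positive semidefinite up to round-off
```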
It is clear that the family $(\sqrt{\sigma_j} \phi_j)_{j\in J}$ is an orthonormal basis of the Hilbert space $\hh_\rho$. Conversely, let $(f_j)_{j\in J}$ be an orthonormal basis of $\hh_\rho$ of eigenvectors of $T$ with corresponding eigenvalues $(\sigma_j)_{j\in J}$. Define \[ \phi_j (x) = \sigma_j^{-\frac{1}{2}} f_j (x) \qquad \forall x\in X {.} \] {Then, it is not difficult to show} that \eqref{eq:3}, \eqref{eq:outofsample} and \eqref{eq:6} hold true.} \subsection{An Analytic Characterization of the Support} Let Assumption~\ref{A} hold true. Collecting the previous results, if $\hh$ separates $X_\rho$, then Theorem \ref{prop_proj} gives that $$ X_{\rho}=\set{x\in X\mid F_{X_\rho}(x)=1 } . $$ The function {$F_\rho = F_{X_\rho}$} is defined by \eqref{FC_def} in terms of the projection $P_\rho$, which, in light of Theorem \ref{T}, can be characterized using the operator $T$. Indeed, from the definition of {$F_\rho$} and~\eqref{nucleo} we have \begin{equation}\label{Projection} {F_\rho(x)=} \scal{P_\rho K_x}{K_x}=\scal{ T^\dag TK_x}{K_x}=\scal{\theta(T)K_x}{K_x}=\sum_{j\in J} {\sigma_j\abs{\phi_j(x)}^2} \end{equation} where $T^\dag$ is the pseudo-inverse of $T$ and $\theta$ is the Heaviside function $\theta (\sigma) = \ind_{]0 , +\infty [} (\sigma)$ (note that with our definition $\theta(0)=0$). The above discussion is summarized in the following theorem. \begin{thm}\label{primo} {If $\hh$ satisfies Assumption~$\ref{A}$ and separates} the support $X_\rho$ of the measure $\rho$, then \[ X_{\rho}= \set{x\in X\mid F_\rho(x)=1}=\set{x\in X\mid \scal{T^\dag TK_x}{K_x}=1 }.\] \end{thm} As we discussed before, a natural question is whether there exist kernels capable of separating {\em all} possible closed subsets of $X$. In a learning scenario, this can be translated into a {\em universality} property, in the sense that it allows one to describe {\em any} probability distribution and to learn its support consistently \cite{degylu96}.
Note that in a supervised learning framework a similar role is played by the so-called universal kernels \cite{cadeto10,Steinwart02}. The following section answers the previous question affirmatively, introducing and studying the concept of completely separating kernels. Interestingly, there are universal kernels in the sense of \cite{cadeto10,Steinwart02} which do not separate all closed subsets of $X$, as for example the Gaussian kernel. \section{Completely separating reproducing kernel Hilbert spaces}\label{RKHSCR} The property defining the class of kernels we are interested {in} is captured by the following definition. \begin{defn}[Completely Separating Kernel]\label{ass1} A reproducing kernel Hilbert space $\hh$ satisfying Assumption~$\ref{A}.\ref{A3})$ is called {\em completely separating} if $\hh$ separates all the subsets $C\subset X$ which are closed with respect to the metric~$\dk$ defined by~\eqref{metric}. In this case, we also say that the corresponding reproducing kernel is completely separating. \end{defn} The definition of completely separating reproducing kernel Hilbert spaces should be compared with the analogous notion of complete regularity for topological spaces. Indeed, we recall that a topological space is called {\em completely regular} if, for any closed subset $C$ and any point $x\notin C$, there exists a continuous function $f$ such that $f(x)\neq 0$ and $f(y)=0$ for all $y\in C$. As we discuss below, completely separating reproducing kernels do exist. For example, for $X=\R^d$ both the Abel kernel $K(x,y)=e^{-\nor{x-y}/\sigma}$ and the $\ell_1$-exponential kernel $K(x,y)=e^{-\nor{x-y}_1/\sigma}$ are completely separating, where $\nor{x}$ is just the Euclidean norm of $x=(x_1, \dots, x_d)$ in $\R^d$ and $\nor{x}_1 = \sum_{j=1}^d |x_j|$ is the $\ell_1$-norm. Indeed this follows from Theorem~\ref{ale} and Proposition~\ref{prodotto} below, which give sufficient conditions for a kernel to be completely separating in the case $X=\R^d$.
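As an informal numerical aside (a sketch, not part of the formal development), the two kernels just introduced are straightforward to evaluate, together with the induced metric $\dk$, which by \eqref{metric} expands as $\dk(x,y)=\nor{K_x-K_y}=\sqrt{K(x,x)-2K(x,y)+K(y,y)}$:

```python
import math

def abel_kernel(x, y, sigma=1.0):
    """Abel kernel K(x, y) = exp(-||x - y|| / sigma), Euclidean norm."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-dist / sigma)

def l1_exp_kernel(x, y, sigma=1.0):
    """l1-exponential kernel K(x, y) = exp(-||x - y||_1 / sigma)."""
    return math.exp(-sum(abs(a - b) for a, b in zip(x, y)) / sigma)

def kernel_metric(K, x, y):
    """Induced metric d_K(x, y) = sqrt(K(x,x) - 2 K(x,y) + K(y,y))."""
    return math.sqrt(max(K(x, x) - 2.0 * K(x, y) + K(y, y), 0.0))

x, y = (0.0, 0.0), (1.0, 1.0)
print(kernel_metric(abel_kernel, x, y))    # positive, bounded by sqrt(2) since K <= 1
print(kernel_metric(l1_exp_kernel, x, x))  # 0.0 on the diagonal
```

Since both kernels equal $1$ on the diagonal, the induced metric is bounded by $\sqrt{2}$.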
Note that the Gaussian kernel $K(x,y)=e^{-\nor{x-y}^2/\sigma^2}$ on $\R^d$ is not completely separating. This is a consequence of the following fact. It is known that the elements of the corresponding reproducing kernel Hilbert space $\hh$ are analytic functions, see Corollary~4.44 in \cite{stch08}. If $C$ is a closed subset of $\R^d$ with non-empty interior and $f\in\hh$ is equal to zero on $C$, then a standard result in complex analysis implies that $f(x)=0$ for all $x\in\R^d$, hence $\hh$ does not separate $C$. We end this section with {Proposition \ref{prodotto}, which gives} a simple way to build completely separating kernels in high dimensional spaces from completely separating kernels in one dimension, {the latter usually being} easier to characterize. \subsection{Separating Properties of Translation Invariant Kernels} The first result studies translation invariant kernels on $\R^d$, {i.e.~}of the form $K(x,y) = K(x-y)$. We show that if the Fourier transform of the kernel satisfies a suitable growth condition, then the corresponding reproducing kernel Hilbert space is completely separating. {As usual, $C(\R^d)$ denotes the space of real continuous functions on $\R^d$ and, for any $p\in[1,+\infty\,[$, $L^p(\R^d)$ is the space of (equivalence classes of) real functions on $\R^d$ which are $p$-integrable with respect to the Lebesgue measure $dx$. We will consider the {\em real} spaces $L^p_h(\R^d)$ of hermitian complex functions, {i.e.} $$ L^p_h(\R^d) = \{\phi_1 + i\phi_2 \mid \phi_1,\phi_2\in L^p(\R^d) \mbox{ and } \phi_1(-x) = \phi_1(x) \, , \, \phi_2(-x) = -\phi_2(x)\} . $$ If $\phi\in L^1(\R^d)$, its Fourier transform is the complex hermitian bounded continuous function $\hat{\phi}$ on $\R^d$ given by $$ \hat{\phi} (z) = \int_{\R^d} e^{-2\pi i z\cdot x} \phi(x) d x . 
$$ If $\phi\in L^2(\R^d)$, we denote by {$\hat{\phi}\in L^2_h (\R^d)$} its Fourier-Plancherel transform, obtained by extending the above definition {from functions $\phi\in L^1(\R^d)\cap L^2(\R^d)$ to a unitary map $L^2(\R^d) \ni \phi \to \hat{\phi}\in L^2_h (\R^d)$.}} Throughout, we regard $\R^d$ as a metric space with respect to the standard metric $d_{\R^d}$ induced by the Euclidean norm. We need a preliminary result characterizing a reproducing kernel Hilbert space, whose reproducing kernel is continuous and integrable, as a suitable non-closed subspace of $L^2(\R^d)$. The first part is a converse of Bochner's theorem (Theorem 4.18 in \cite{fol95}). \begin{prop}\label{Prop. HK con K di tipo pos.} Let {$K$ be a} continuous function in $L^1(\R^d)$ such that its Fourier transform $\hat{K}$ is strictly positive. Then the kernel $K(x,y)=K(x-y)$ is positive definite and its corresponding {(real)} reproducing kernel Hilbert space $\hh$ is \begin{equation} {\hh = \left\{ \phi\in C(\R^d)\cap L^2(\R^d)\mid \int_{\R^d} \hat{K} (z)^{-1} |\hat{\phi}(z)|^2 dz < +\infty \right\} }\label{HK con K L1} \end{equation} with norm \begin{equation}\label{HK con K L1 bis} \nor{\phi}^2 = \int_{\R^d}\hat{K}(z)^{-1} |\hat{\phi}(z)|^2 dz \qquad \forall \phi\in \hh . \end{equation} \end{prop} \begin{proof} {The integral operator $$ ({L_K} \phi)(x)=\int_{\R^d}K(x-y) \phi(y)\,dy=(K*\phi)(x), $$ is well defined and bounded from $L^2(\R^d)$ into $L^2(\R^d)$} since $K\in L^1(\R^d)$. Since ${L_K}$ is a convolution operator, the Fourier transform turns it into the operator of multiplication by the bounded function $\hat{K}$, that is $\widehat{{L_K} \phi} = \hat{K} \hat{\phi}$ for all {$\phi\in L^2(\R ^d)$}. It follows that $$ \scal{{L_K} \phi}{\phi}_{L^2}= \scal{\hat{K}\hat{\phi}}{\hat{\phi}}_{L^2} {> 0\qquad \forall \phi\in L^2(\R^d)\setminus\{0\}} $$ since {$\hat{K}> 0$} by assumption, hence ${L_K}$ is a {strictly} positive operator.
In order to show that $K$ is positive definite, pick a Dirac sequence $( \pphi_n )_{n\geq 1}$ {as in Chapter VIII.3 of \cite{la93}}, and, for each $x\in \R^d$, define $\pphi_n^x$ by $\pphi_n^x(y)= \pphi_n (y- x)$. Fix $x_1,x_2,\ldots,x_N\in \R^d$ and $c_1,c_2,\ldots,c_N\in {\R}$, and set $\phi_n = \sum_{i=1}^N c_i \pphi_n^{x_i}$; then $$ 0 \leq\scal{{L_K} \phi_n}{\phi_n}_{L^2} = \sum_{i,j=1}^N c_i {c_j} \scal{{L_K} \pphi_n^{x_i}}{\pphi_n^{x_j}}_{L^2} \mathop{\frecc}_{n\to\infty} \, \sum_{i,j=1}^N c_i {c_j} K(x_j,x_i), $$ where the convergence follows from the continuity of $K$ and the usual properties of Dirac sequences. It follows that $\sum_{i,j=1}^N c_i {c_j} K(x_j,x_i)\geq 0$, i.e.~the kernel $K$ is positive definite.\\ Let $\hh$ be {the (real)} reproducing kernel Hilbert space associated to $K$. Since the support of the Lebesgue measure is $\R^d$, {Mercer theorem (as stated e.g.~in Proposition 6.1 of \cite{cadeto06} and the subsequent discussion, or Theorem 2.11 of \cite{stsc12}) shows that ${L_K}^{1/2}$ is a unitary isomorphism from $L^2 (\R^d)$ onto $\hh$. More precisely, for any $\psi\in L^2 (\R^d)$ there exists a unique function $\phi\in C(\R^d)$ such that its equivalence class coincides with ${L_K}^{1/2}\psi\in L^2(\R^d)$, and the correspondence $\psi\mapsto \phi$ is an {isometry} from $L^2 (\R^d)$ onto $\hh$. By further applying the Fourier-Plancherel transform and taking into account that $\hat{\phi}(z)= \sqrt{\hat{K}(z)}\, \hat{\psi}(z)$ for almost all $z\in\R^d$, one has $$ \nor{\phi}^2_\hh = \nor{\psi}^2_{L^2(\R^d)} = \nor{\hat{\psi}}^2_{L^2_h(\R^d)} = \int_{\R^d}\hat{K}(z)^{-1} |\hat{\phi}(z)|^2 dz <+\infty , $$ so that~\eqref{HK con K L1} and \eqref{HK con K L1 bis} follow.} \end{proof} We now state a sufficient condition on $K$ ensuring that $\hh$ is completely separating.
\begin{thm}\label{ale} Let {$K$} be a continuous function in $L^1(\R^d)$ such that \begin{equation} \hat{K}(z) \geq \frac{a}{\left(1+b\nor{z}^{\gamma_1}\right)^{\gamma_2}} \qquad \forall z\in\R^d \label{poli} \end{equation} for some $a,b,\gamma_1,\gamma_2 >0$. Then, \begin{enumerate}[i)] \item the translation invariant kernel $K(x,y)=K(x-y)$ is positive definite and continuous; \item the topologies induced by the metric $\dk$ and the Euclidean metric $d_{\R^d}$ coincide on $\R^d$; \item the kernel $K$ is completely separating. \end{enumerate} \end{thm} \begin{proof} Condition~\eqref{poli} implies that $\hat{K}$ is strictly positive, so item~i) follows from Proposition~\ref{Prop. HK con K di tipo pos.}. In particular, from~\eqref{HK con K L1} we see {that,} {if $\phi\in L^2 (\R^d)$ and $\int_{\R^d}\left(1+b\nor{z}^{\gamma_1}\right)^{\gamma_2} |\hat{\phi}(z)|^2 dz$ is finite, then $\phi\in\hh$. } This implies that $C^\infty_c (\R^d) \subset \hh$: indeed, if $\phi\in C^\infty_c (\R^d)$, then $\hat{\phi}$ is a Schwartz function on $\R^d$ {(Theorem 3.2 in \cite{stwe71})}, hence the last integral is convergent. Functions in $C^\infty_c (\R^d)$ separate every set $C$ which is closed with respect to the metric $d_{\R^d}$ {(as it easily follows by suitably translating and dilating the function $\psi\in C^\infty_c (\R^d)$ defined in item (b) p.~19 of \cite{stwe71}),} hence $\hh$ separates the $d_{\R^d}$-closed subsets. Items ii) and iii) then follow from Proposition \ref{c:connection}. \end{proof} As an application, we show that the Abel kernel is completely separating. \begin{prop}\label{Prop. exp sep.} Let \begin{equation}\label{AbelKer} K : \R^d \times \R^d \to \R , \qquad K(x,y) = e^{-\frac{\nor{x-y}}{\sigma}} , \end{equation} with $\sigma > 0$. Then $K$ is a positive definite kernel and the corresponding reproducing kernel Hilbert space $\hh$ is completely separating for all $d\geq 1$.
\end{prop} \begin{proof} A standard Fourier transform computation gives \begin{equation}\label{K^} \hat{K} (z) = \frac{1}{2\pi\sigma}\pi^{-\frac{d+1}{2}} \Gamma \left(\frac{d+1}{2}\right) \left( \frac {1} {4\pi^2\sigma^2}+ \nor{z}^2 \right)^{-\frac{d+1}{2}} , \end{equation} where $\Gamma$ is the Euler gamma function (Theorem~1.14 in \cite{stwe71}). The claim then follows from Theorem~\ref{ale}. \end{proof} Equations~\eqref{HK con K L1}, \eqref{HK con K L1 bis} and \eqref{K^} show that (up to a rescaling of the norm) the reproducing kernel Hilbert space associated to the Abel Kernel \eqref{AbelKer} is just $W^{(d+1)/2} (\R^d)$, the Sobolev space of order $(d+1)/2$. \subsection{Building Separating Kernels} The following result gives a way to construct completely separating reproducing kernel Hilbert spaces on high dimensional spaces. \begin{prop}\label{prodotto} If $X_i$, $i = 1,2,\ldots, d$, are sets and $K^{(i)}$ are completely separating reproducing kernels on $X_i$ for all $i = 1,2,\ldots, d$, then the product kernel \[K((x_1 , \ldots, x_d),(y_1 , \ldots, y_d)) {=} K^{(1)} (x_1,y_1) \cdots K^{(d)} (x_d,y_d)\] is completely separating on the set $X = X_1 \times X_2 \times\ldots\times X_d$. \end{prop} \begin{proof} The sets $X_i$ and $X$ are endowed with the metrics $d_{K^{(i)}}$ and $\dk$ induced by the corresponding kernels, and $\hh_i$ and $\hh$ denote the reproducing kernel Hilbert spaces with kernels $K^{(i)}$ and $K$, respectively. A standard result gives that $\hh = \hh_1\otimes \ldots \otimes \hh_d$ and $K_x = K^{(1)}_{x_1}\otimes \ldots \otimes K^{(d)}_{x_d}$ for all $x = (x_1 , \ldots, x_d) \in X$ \cite{aron50}. We claim that the $\dk$-topology on $X$ is contained in the product topology of the $d_{K^{(i)}}$-topologies on $X_i$ (actually, it is not difficult to show that the two topologies coincide).
Indeed, if $( x_{i,k} )_{k\geq 1}$ are sequences in $X_i$ such that $\lim_{k\to\infty} d_{K^{(i)}}(x_{i,k} , x_i) = 0$ for all $i=1,\ldots, d$, then \begin{eqnarray*} && \lim_{k\to\infty} \dk\left( (x_{1,k} ,\ldots , x_{d,k}) , (x_1 ,\ldots , x_d) \right)^2 = \lim_{k\to\infty} \nor{K_{(x_{1,k} ,\ldots , x_{d,k})} - K_{(x_1 ,\ldots , x_d)}}^2 \\ && \qquad \quad = \lim_{k\to\infty} [K^{(1)} (x_{1,k} , x_{1,k}) \cdots K^{(d)} (x_{d,k}, x_{d,k}) - {2} \, K^{(1)} (x_{1,k} , x_1) \cdots K^{(d)} (x_{d,k},x_d) \\ && \qquad \qquad + K^{(1)} (x_1 , x_1) \cdots K^{(d)} (x_d,x_d)] \\ && \qquad \quad =0 , \end{eqnarray*} since $\lim_{k\to\infty} K^{(i)}(x_{i,k} , x_{i,k}) = \lim_{k\to\infty} K^{(i)}(x_{i,k} , x_i) = K^{(i)}(x_i , x_i)$. We now prove that $\hh$ is completely separating. If $C\subset X$ is $\dk$-closed and $x = (x_1 , \ldots, x_d) \in X\setminus C$, since $C$ is also closed in the product topology, for all $i=1,\ldots, d$ there exists an open neighborhood $U_i$ of $x_i$ in $X_i$ such that $U = U_1 \times \ldots \times U_d\subset X\setminus C$. Since each $\hh_i$ is completely separating, for all $i=1,\ldots,d$ there exists $f_i \in \hh_i$ such that $f_i (x_i) \neq 0$ and $f_i (y_i) = 0$ for all $y_i\in X_i \setminus U_i$. Then the product function $f=f_1\otimes \ldots \otimes f_d$ is in $\hh$, and satisfies $f(x) \neq 0$ and $f (y) = 0$ for all $y\in C$. \end{proof} As a consequence, the Abel kernel defined by the $\ell_1$-norm \begin{align*} K (x,y) = e^{-\frac{\nor{x-y}_1}{\sigma}} =\prod_{i=1}^d\, e^{-\frac{\abs{x_i-y_i}}{\sigma}}, \qquad x=(x_1,\ldots,x_d), \, y=(y_1,\ldots,y_d) \end{align*} is completely separating since each kernel in the product is positive definite and completely separating by Proposition \ref{Prop. exp sep.}. \section{A Spectral Approach to Learning the Support}\label{sec:learn} In this section we study the set estimation problem in the context of learning theory. 
We fix a triple $(X,\rho,K)$ as in Section \ref{sec:basic}, and assume throughout that the reproducing kernel $K$ satisfies Assumption~\ref{A}. We regard $X$ as a metric space with respect to $\dk$, and continue to denote by $X_\rho$ the support of $\rho$ defined in Proposition \ref{Prop.supp.}. If $\hh$ separates $X_\rho$, Theorem~\ref{primo} shows that the support $X_\rho$ is the $1$-level set of a suitable function $F_\rho$ defined by the integral operator $T$, and therefore depending on $K$ and $\rho$. However, the probability distribution $\rho$ is unknown, as we only have a set of i.i.d.~points $x_1, \dots, x_n$ sampled from $\rho$ at our disposal. Our task is now to use our sample in order to estimate the set $X_\rho$. The definition of $T$ given by~\eqref{eq:2} suggests that it can be estimated by the data dependent operator \begin{equation}\label{eq:Tn} T_n = \frac{1}{n}\sum_{i=1}^n K_{x_i}\otimes K_{x_i}. \end{equation} The operator $T_n$ is positive and of finite rank; in particular, $T_n\in\mathcal S_1$ and $\nor{T_n}_{\mathcal S_1} = \tr{T_n} = 1$. We denote by $(\sigma^{(n)}_j)_{j\in J_n}$ the strictly positive eigenvalues of $T_n$ (each one repeated according to its multiplicity) and by {$\big(\sqrt{\sigma^{(n)}_j}\phi^{(n)}_j\big)_{j\in J_n}$} the corresponding eigenvectors; note that in the present case the index set $J_n$ is finite. However, though $T_n$ converges to $T$ in all relevant topologies (see Lemma \ref{concentration} and Remark \ref{rem:S1} below), in general $T^\dag_nT_n$ does not converge to $T^\dag T$ since $T^\dag$ may be {unbounded}, or, equivalently, since $0$ may be an accumulation point of the spectrum of $T$ when $\dim\hh = \infty$. Hence, the problem of support estimation is {ill-posed}, and regularization techniques are needed to restore well-posedness and ensure a stable solution.
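The eigenvalues $\sigma^{(n)}_j$ are computable in practice: as made explicit in the algorithmic section below, the nonzero eigenvalues of $T_n$ coincide with those of the normalized kernel matrix ${\mathbf K}_n/n$. The following sketch (the Abel kernel and the uniform distribution are our illustrative choices) also displays the source of ill-posedness, namely that the empirical eigenvalues accumulate near $0$:

```python
import numpy as np

def abel_kernel_matrix(points, sigma=1.0):
    """Empirical kernel matrix (K_n)_{ij} = exp(-|x_i - x_j| / sigma) on the line."""
    x = np.asarray(points, dtype=float)
    return np.exp(-np.abs(x[:, None] - x[None, :]) / sigma)

rng = np.random.default_rng(0)
sample = rng.uniform(0.0, 1.0, size=200)        # i.i.d. draws from rho = Unif[0, 1]
Kn = abel_kernel_matrix(sample)

# The nonzero eigenvalues of T_n coincide with the eigenvalues of K_n / n.
sigma_n = np.linalg.eigvalsh(Kn / len(sample))  # ascending order

print(sigma_n.sum())    # trace of T_n: equals 1, since K(x, x) = 1
print(sigma_n[:5])      # smallest eigenvalues: they accumulate near 0
```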
In the following sections, we will show that spectral regularization \cite{bapero07,enhane,logerfo08} can be used to learn the support efficiently from the data. \subsection{Regularized Estimators via Spectral Filtering}\label{algo} An approach which is classical in inverse problems (see \cite{enhane}, and also \cite{bapero07,logerfo08} for applications to learning) consists in replacing the pseudo-inverses $T^\dag_n$ and $T^\dag$ with some bounded approximations obtained by {\em filtering out} the components corresponding to the eigenvalues of $T_n$ and $T$ which are smaller than a fixed regularization parameter $\lambda$. This is achieved by introducing a suitable {\em filter function} $\G:[0,+\infty[\to [0,+\infty[$ and replacing $T^\dag_n$, $T^\dag$ with the bounded operators $\G(T_n)$, $\G(T)$ defined by spectral calculus. If the function $\G$ is sufficiently regular, then convergence of $T_n$ to $T$ implies convergence of $\G(T_n)$ to $\G(T)$ in the Hilbert-Schmidt norm. On the other hand, if the regularization parameter $\lambda$ goes to zero, then $\G(T)$ converges to $T^\dag$ in an appropriate sense. We are now going to apply the same idea to our setting. Since we are interested in approximating the orthogonal projection $P_\rho = T^\dag T = \theta(T)$ rather than the pseudo-inverse $T^\dag$, we introduce a low-pass filter $\RR$, in such a way that the bounded operator $\RR(T)$ is an approximation of $\theta(T)$. In terms of the previously defined function $\G$, this can be achieved by setting $\RR(\sigma){=} \G (\sigma) \sigma$ for all $\sigma\in\R$, so that $\RR(T) = \G (T)T$.
Explicitly, in terms of the spectral decompositions of $T_n$ and $T$ we have \[{ \RR(T_n)=\sum_{j\in J_n} \RR(\sigma_j^{(n)})\ {\big(\sqrt{\sigma_j^{(n)}} \phi^{(n)}_j\big) \otimes \big(\sqrt{\sigma_j^{(n)}} \phi^{(n)}_j\big)}, \qquad \RR(T)=\sum_{j\in J} \RR(\sigma_j)\ (\sqrt{\sigma_j} \phi_j)\otimes (\sqrt{\sigma_j} \phi_j) .} \] Note that, since the spectra of $T_n$ and $T$ are both contained in the interval $[0,1]$, we can assume that the functions $\G$ and $\RR$ are defined on $[0,1]$. Moreover, as the operators $\RR(T_n)$ and $\RR(T)$ approximate orthogonal projections, it is useful to have the bound $0\leq\RR(T_n),\RR(T)\leq {I}$ satisfied for all $T_n$ and $T$, and this can be achieved by choosing the function $\RR$ such that $0\leq\RR(\sigma)\leq 1$ for all $\sigma$. As a consequence of the above discussion, the characterization of filter functions giving rise to stable algorithms is captured by the following assumption. \begin{assm}\label{B} The family of functions $(\RR)_{\lambda>0}$, with $\RR:[0,1]\to [0,1]$ for all $\lambda>0$, has the following properties: \begin{enumerate}[a)] \item\label{B1} $\RR(0) = 0$ for all $\lambda>0$; \item\label{B2} for all $\sigma >0$, we have $\lim_{\la\to 0^+} \RR(\sigma)=1$; \item\label{B3} for all $\la>0$, there exists a positive constant $L_\la$ such that \[ |\RR(\sigma)-\RR(\tau)|\leq L_\la |\sigma-\tau| \qquad \forall \sigma,\tau \in [0,1] . \] \end{enumerate} \end{assm} By Assumption \ref{B}.\ref{B1}), there exists a function $\G:[0,1]\to[0,+\infty[$ such that $\RR(\sigma) = \G(\sigma) \sigma$. On the other hand, by Assumption \ref{B}.\ref{B2}) we have $\lim_{\lambda\to 0^+} \RR (\sigma) = \theta (\sigma)$ for all $\sigma\in [0,1]$. Assumption \ref{B}.\ref{B3}) is of a technical nature, and will become clear in Section \ref{subsec:cons}; here we note that in particular it implies that $\RR$ is a continuous function for all $\la >0$.
A few examples of filter functions $\RR$ satisfying Assumption \ref{B} and of corresponding functions $\G$ are given in Table \ref{table filter}. It is easy to check that for each of them $L_{\la}=1/\la$. See \cite{enhane} for further examples. \begin{table}[htp] \begin{center} \begin{tabular}[c]{|l|c|c|} \hline & & \\ Tikhonov regularization & $\displaystyle{\RR(\sigma)=\frac{\sigma}{\sigma+\lambda}}$ & $\displaystyle{\G(\sigma)=\frac{1}{\sigma+\lambda}}$ \\ & & \\ \hline & & \\ Spectral cut-off & $\displaystyle{\RR(\sigma)= \ind_{]\la,+\infty[}(\sigma)+ \frac{\sigma}{\lambda} \ind_{[0,\la]}(\sigma)}$ & $\displaystyle{\G(\sigma)= \frac{1}{\sigma}\ind_{]\la,+\infty[}(\sigma)+ \frac{1}{\lambda} \ind_{[0,\la]}(\sigma)}$\\ & & \\ \hline & & \\ Landweber filter & $\displaystyle{\RR(\sigma)=\sigma \sum_{k=0}^{{m_\la}}(1-\sigma)^k }$ & $\displaystyle{\G(\sigma)= \sum_{k=0}^{{m_\la}}(1-\sigma)^k }$ \\ & & \\ \hline \end{tabular} \end{center} \caption{Examples of filter functions satisfying Assumption \ref{B}. For the Landweber filter {$m_\la$ is an integer such that $\lim_{\la\to 0} m_\la= \infty$ }.}\label{table filter} \end{table} For a chosen filter, the corresponding regularized empirical estimator of $F_\rho$ is defined by \begin{equation}\label{estimator} F_n(x)=\scal{ \RRn (T_n)K_x}{K_x}= {\sum_{j\in J_n} \RRn (\sigma_j^{(n)}) \sigma^{(n)}_j \abs{\phi^{(n)}_j(x)}^2} \end{equation} where we allow the regularization parameter $\la_n$ to depend on the number of samples $n$. Note that the functions $F_n$ and $F_\rho$ are continuous on $X$ by continuity of the mapping $ x\mapsto K_x$ (see~\ref{P.1.i}) of Proposition \ref{metrica}). In Section~\ref{consistency} we will show that, for an appropriate choice of the sequence $(\la_n)_{n\geq 1}$, the estimator $F_n$ converges almost surely to $F_\rho$ uniformly on compact subsets of $X$.
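The filters of Table \ref{table filter} can be checked directly against Assumption \ref{B}; in the following sketch the choice $m_\la=\lceil 1/\la\rceil$ for the Landweber filter is ours, made only for illustration:

```python
import math

def tikhonov(sigma, lam):
    """Tikhonov filter r_lambda(sigma) = sigma / (sigma + lambda)."""
    return sigma / (sigma + lam)

def spectral_cutoff(sigma, lam):
    """Spectral cut-off: equals 1 above lambda, linear ramp sigma/lambda below."""
    return 1.0 if sigma > lam else sigma / lam

def landweber(sigma, lam):
    """Landweber filter r_lambda(sigma) = sigma * sum_{k=0}^{m}(1 - sigma)^k,
    with m = m_lambda; the choice m_lambda = ceil(1/lambda) is for illustration."""
    m = math.ceil(1.0 / lam)
    return sigma * sum((1.0 - sigma) ** k for k in range(m + 1))

for r in (tikhonov, spectral_cutoff, landweber):
    # r(0) = 0 (Assumption B.a); r(sigma) -> 1 as lambda -> 0 (Assumption B.b)
    print(r.__name__, r(0.0, 0.1), r(0.5, 0.1), r(0.5, 1e-4))
```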
Unfortunately, this does not imply convergence of the $1$-level sets of $F_n$ to the $1$-level set of $F_\rho$ in any sense (as, for example, with respect to the Hausdorff distance). However, an estimator of $X_\rho$ can be obtained by setting \begin{equation}\label{set_est} X_n=\{x\in X\mid F_n(x)\geq 1-\tau_n\}, \end{equation} where $\tau_n > 0$ is an offset parameter that depends on the sample size $n$ (recall that $F_n$ takes values in $[0,1]$). In Section~\ref{consistency} we show that, for a suitable choice of the sequence $(\tau_n)_{n\geq 1}$, the {closed} set $X_n$ is indeed a consistent estimator of the support with respect to the Hausdorff distance. In the following section we make some remarks on the computation of $F_n$. \subsection{Algorithmic and Computational Aspects}\label{complexity} We show that the computation of $F_n$ (hence of $X_n$) reduces to a finite dimensional problem involving the empirical kernel matrix defined by the data. To this purpose, it is useful to introduce the sampling operator \begin{equation}\label{eq:def_Sn} S_n:\hh\to {\R^n}\qquad S_n f = \col{f(x_1) \\ \vdots \\ f(x_n)}, \end{equation} which can be interpreted as the restriction operator evaluating functions in $\hh$ at the points of the training set. The {transpose} of $S_n$ is \[ S_n^{\top}: {\R^n} \to \hh\qquad S^{\top}_n \col{\alpha_1 \\ \vdots \\ \alpha_n} =\sum_{i=1}^n \alpha_i K_{x_i}, \] and $S_n^{\top}$ can be interpreted as the out-of-sample extension operator \cite{coilaf06,RoBeDe10}.
A simple computation shows that \[ T_n=\frac{1}{n} S_n^{\top} S_n\qquad S_nS_n^{\top}={\mathbf K}_n\qquad \bigl({\mathbf K}_n\bigr)_{ij}= K(x_i,x_j).\] Hence, considering the filter given in the form $\RR(T_n)=\G(T_n)T_n$, we have \[ \RR(T_n)= \G \left(\frac{S_n^{\top} S_n}{n}\right)\frac{S_n^{\top} S_n}{n}=\frac{1}{n} S_n^{\top}\, \G\left(\frac{S_nS^{\top}_n}{n}\right)\,S_n =\frac{1}{n} S_n^{\top}\, \G\left(\frac{{\mathbf K}_n}{n}\right)\,S_n, \] where the second equality follows from spectral calculus. Using the definition of the sampling operator, we can consider the $n$-dimensional vector ${\mathbf K}_x$ defined by \[ {\mathbf K}_x {=} S_n K_x=\col{K(x_1,x) \\ \vdots \\ K(x_n,x)}, \] {and} \eqref{estimator} can be written as \begin{equation}\label{eq:estimator2} F_n(x) = \scal{ \RRn (T_n)K_x}{K_x}= \scal{\frac{1}{n} \, \G\left(\frac{{\mathbf K}_n}{n}\right)\,S_nK_x}{S_n K_x} = \frac{1}{n} \, {\mathbf K}_x^\ast\, \Gn\left(\frac{{\mathbf K}_n}{n}\right)\,{\mathbf K}_x , \end{equation} where ${\mathbf K}_x^\ast$ is the conjugate transpose of ${\mathbf K}_x$. More explicitly we have \begin{equation}\label{rep} F_n(x) = \sum_{i=1}^n \alpha_i(x) K(x,x_i) \qquad \alpha_i(x) = \frac{1}{n}\sum_{j=1}^n \left(g_{\lambda_n}\left (\frac{\mathbf K_n}{n}\right)\right)_{ij} K(x_{j},x). \end{equation} The above equation shows that, while $\hh$ could be infinite dimensional, the computation of the estimator reduces to a finite dimensional problem. Further, though the mathematical definition of the filter is done through spectral calculus, the computations might not require performing an eigen-decomposition. As an example, for Tikhonov regularization { $g_{\la_n}(\sigma)=\frac{1}{\sigma+\la_n}$, so that $g_{\lambda_n}\left (\frac{\mathbf K_n}{n}\right)= (\frac{{\mathbf K}_n}{n}+\la_n)^{-1}$ and } the coefficient vector $\alpha(x)$ in \eqref{rep} is given by $$ \alpha(x)=({\mathbf K}_n+n\la_n)^{-1}{\mathbf K}_x. 
$$ In the case of the Landweber filter, it is possible to prove that the coefficient vector can be evaluated iteratively by setting $\alpha^0(x)=0$, and $$ \alpha^{t}(x)=\alpha^{t-1}(x) +\frac 1 n ({\mathbf K}_x-{\mathbf K}_n\alpha^{t-1}(x)) $$ for {$t=1, \dots, m_{\la_n}$.} {We refer to \cite{logerfo08} for the corresponding algorithm in a supervised framework; see also the discussion in Section~\ref{IPERM}}. We thus see that the estimator corresponding to Tikhonov regularization can be computed via Cholesky decomposition and has complexity of order $O(n^3)$. For Landweber iteration the complexity is $O(n^2m)$, where $m$ is the number of iterations. Finally, the spectral cut-off, or truncated SVD, requires $O(n^3)$ operations to compute the eigen-decomposition of the kernel matrix. Further discussions can be found in \cite{logerfo08} and references therein. We end by remarking that, in order to test whether $N$ points belong to the support or not, we simply have to repeat the above computation replacing ${\mathbf K}_x$ by an $n\times N$ matrix ${\mathbf K}_{x,N}$, in which each column is a vector ${\mathbf K}_x$ corresponding to a point $x$ in the test set. Note that in this case the coefficients $\alpha(x)$ will also form an $n\times N$ matrix. \section{Error Analysis: Convergence and Stability}\label{consistency} In this section we develop an error analysis for the proposed class of estimators. First, we discuss convergence (consistency) and then stability with respect to random sampling in terms of finite sample bounds. We continue to suppose throughout this section that Assumption \ref{A} holds true, and consider $X$ as a metric space with metric $\dk$. \subsection{Empirical data}\label{sec:misurismi} We recall that the empirical data are a set of i.i.d.~points $x_1, \dots, x_n$, each one drawn from $X$ with probability $\rho$.
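On such a sample, the finite-dimensional Tikhonov computation of the previous section, $F_n(x)={\mathbf K}_x^{\top}({\mathbf K}_n+n\la_n)^{-1}{\mathbf K}_x$, can be sketched as follows (the Abel kernel, $\rho$ uniform on $[0,1]$ and all parameter values are our illustrative choices):

```python
import numpy as np

def tikhonov_F_n(sample, x, lam, sigma=1.0):
    """Tikhonov support estimator F_n(x) = K_x^T (K_n + n*lam*I)^{-1} K_x
    for the Abel kernel K(x, y) = exp(-|x - y| / sigma) on the real line."""
    s = np.asarray(sample, dtype=float)
    n = len(s)
    Kn = np.exp(-np.abs(s[:, None] - s[None, :]) / sigma)
    Kx = np.exp(-np.abs(s - x) / sigma)
    alpha = np.linalg.solve(Kn + n * lam * np.eye(n), Kx)  # coefficient vector
    return float(Kx @ alpha)

rng = np.random.default_rng(1)
sample = rng.uniform(0.0, 1.0, size=300)   # rho = Unif[0, 1], so X_rho = [0, 1]

inside = tikhonov_F_n(sample, 0.5, lam=1e-3)
outside = tikhonov_F_n(sample, 10.0, lam=1e-3)
print(inside, outside)   # close to 1 inside the support, close to 0 far away
```

$F_n$ stays in $[0,1]$, is close to $1$ well inside the support and drops to essentially $0$ far from it, which is what the level-set estimator \eqref{set_est} exploits.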
Since we need to study asymptotic properties when the sample size $n$ goes to infinity, we introduce the following probability space \begin{equation} \Omega=\set{(x_i)_{i\geq 1}\mid x_i\in X\ \forall i\geq 1} ,\label{eq:5} \end{equation} endowed with the product $\sigma$-algebra $\A{\Omega} {=} \A{X} \otimes \A{X} \otimes \ldots $ and the product probability measure $\mathbb P {=} \rho\otimes\rho\otimes\ldots$. We recall that, given {an integer $n$ and a topological space $M$ endowed with the $\sigma$-algebra of its Borel subsets}, an {\em $M$-valued estimator} of size $n$ is a measurable map $\Xi_n:\Omega\to M$ depending only on the first $n$ variables, that is \[ \Xi_n(\omega)=\xi_n(x_1,\ldots,x_n)\qquad \omega =(x_i)_{i\geq 1} \] for some measurable map $\xi_n : X^n \to M$. The number $n$ is the cardinality of the sampled data. We then have the following facts. \begin{prop}\label{prop:misurabilita} For all $n\geq 1$ \begin{enumerate}[i)] \item $T_n$ is a $\mathcal S_k$-valued estimator for $k=1,2$; \item if $X$ is locally compact, then $F_n$ is a $C(X)$-valued estimator, where $C(X)$ is the space of continuous functions on $X$ with the topology of uniform convergence on compact subsets. \end{enumerate} \end{prop} The proof of the above proposition is rather technical, and we refer the interested reader to~\ref{maurer} for more details. \begin{rem} {In item ii) of Proposition \ref{prop:misurabilita}, the assumption that $X$ is locally compact is needed to ensure that the topology of uniform convergence on compact subsets is a {separable} metric topology on $C(X)$, which in turn is essential to prove measurability of the random variable $F_n$ (see the proof of Proposition \ref{prop:misurabilita2} in \ref{maurer}). In many examples, the set $X$ has its own locally compact separable metric $d_X$.
In this case, in order for $X$ to be a locally compact metric space also with respect to the metric $d_K$, it is enough that the kernel $K$ is a $d_X$-continuous function separating every subset of $X$ which is closed with respect to $d_X$, as the two topologies induced by $d_X$ and $d_K$ then coincide by item ii) of Proposition \ref{c:connection}. If $X$ is not locally compact (which we will regard as a pathological case), then, in order to have measurability of $F_n$, one needs to replace the probability measure $\mathbb P$ with the outer measure (see the discussion in Section 2 of \cite{kolt11} and in Section 1.7 of \cite{vawe96}).} \end{rem} \begin{rem} Statisticians adopt a different notation: the data are described by a family $Y_1,Y_2,\ldots$ of random variables taking values in $X$, each defined on the same probability space $(\Gamma,\A{\Gamma},\mathbb Q)$, which are {i.i.d.~according to} $\rho$. An $M$-valued estimator of size $n$ is then simply a random variable $\xi_n (Y_1 , \ldots , Y_n)$, where $\xi_n:X^n\to M$ is a measurable map. The equivalence between the two approaches is made clear by setting $(\Gamma,\A{\Gamma},\mathbb Q) \equiv (\Omega,\A{\Omega},{\mathbb P})$ and $Y_i(\omega) = x_i$ for all $\omega = (x_j)_{j\geq 1}$ and $i\geq 1$. \end{rem} Concentration of measure results for random variables in Hilbert spaces can be used to prove that $T_n$ concentrates around its mean $T$, as stated in the following lemma. \begin{lem}\label{concentration} For $n\geq 1$ and $\delta > 0$, \begin{equation}\label{eq:concentration_0} \nor{T-T_n}_{\mathcal S_2}\leq \frac{2(\delta \vee \sqrt{2\delta})}{\sqrt{n}} \end{equation} with probability at least $1-2e^{-\delta}$. Furthermore \begin{equation}\label{eq:concentration} \lim_{n\to \infty} \frac{\sqrt{n}}{\log n} \nor{T-T_n}_{\mathcal S_2}=0\qquad\text{almost surely}. \end{equation} \end{lem} \begin{proof} The result is known, but we report its short proof.
For all $i\geq 1$ define the random variables $Z_i:\Omega\to \mathcal S_2 $ as \[ Z_i(\omega)=K_{x_i}\otimes K_{x_i}\qquad \omega=(x_j)_{j\geq 1}\in\Omega.\] The fact that $Z_i$ is measurable follows from Lemma \ref{lem:app1} in~\ref{maurer}. Then, for all $i\geq 1$, we have $\nor{Z_i}_{\mathcal S_2}\leq 1$ almost surely, $\EE{Z_i}=T$, and clearly $\EE{\nor{Z_i}^2_{\mathcal S_2}}\leq 1$. The first result follows easily by applying Lemma \ref{conc_lemma} in~\ref{sec:conc} and simplifying the right hand side of \eqref{bernie}, and the second is a consequence of Lemma \ref{cor_conc} in~\ref{sec:conc}. \end{proof} \begin{rem}\label{rem:S1} Note that \eqref{eq:concentration} and Theorem 2.19 in \cite{Simon} imply that $$ \lim_{n\to \infty} \nor{T-T_n}_{\mathcal S_1}=0 \qquad \text{almost surely}. $$ \end{rem} \subsection{Consistency}\label{subsec:cons} We now choose a family of filter functions $(\RR)_{\lambda>0}$ and study the convergence of the associated estimators $F_n$ and $X_n$ introduced in Section \ref{sec:learn}. {\subsubsection{Consistency of $F_n$}} We begin by proving convergence of the functions $F_n$ defined in \eqref{estimator} to the function $F_\rho$ in \eqref{Projection}. We introduce the map $G_{\la}:X\to\R$ defined by \begin{equation*}\label{intermediate} G_{\la}(x)=\scal{ \RR(T)K_x}{K_x} \qquad \forall x\in X, \end{equation*} which can be seen as the {\em infinite sample} analogue of $F_n$. Clearly, $G_{\la}$ is a continuous function. For all sets $C\subset X$, we then have the following splitting of the error into two parts, the {\em sample error} and the {\em approximation error}: \begin{equation}\label{decomp} \sup_{x\in C} \abs{F_n(x)-F_\rho(x)} \leq \underbrace{ \sup_{x\in C} \abs{F_n(x)-G_{\la_n}(x)}}_{\text{sample error}} + \underbrace{ \sup_{x\in C} \abs{G_{\la_n}(x)-F_\rho(x)}}_{\text{approximation error}}.
\end{equation} In order to prove consistency, we need to show that the left hand side goes to $0$ as the sequence of regularization parameters $(\la_n)_{n\geq 1}$ tends to $0$. This will be done separately for the approximation and the sample errors in the next two propositions. \begin{prop}\label{appr_err} Under Assumption \ref{B}.\ref{B2}), if the sequence $(\la_n)_{n\geq 1}$ is such that $ \lim_{n\to \infty}\lambda_n=0$, then, for any compact subset $C\subset X$, \begin{equation*}\label{apprx_conv} \lim_{n\to \infty} \sup_{x\in C} \lvert G_{\la_n}(x)-F_\rho(x) \rvert=0. \end{equation*} \end{prop} \begin{proof} Assumption \ref{B}.\ref{B2}) and $\lim_{n\to \infty}\la_n=0$ imply that the sequence of non-negative functions $(\RRn)_{n\geq 1}$ is bounded by $1$ and converges pointwise to the Heaviside function $\theta$ on the interval $[0,1]$. The spectral theorem ensures that, for all $x\in C$, \begin{equation} \lim_{n\to\infty} \RRn (T)K_x=\theta(T)K_x.\label{dubbio} \end{equation} Given $\eps>0$, by compactness of $C$ there exists a finite covering of $C$ by balls of radius $\eps$, namely $C \subset \cup_{i=1}^m B(x_i,\eps)$.
By~\eqref{dubbio} there exists $n_0$ such that \[ \max_{i\in\{1,\ldots,m\}} \nor{r_{\la_n}(T)K_{x_i}-\theta(T)K_{x_i}}\leq \eps \qquad \forall n\geq n_0.\] Hence, for all $n\geq n_0$, we have \begin{align*} \sup_{x\in C} \abs{G_{\la_n}(x) - F_\rho(x)} & = \sup_{x\in C} \abs{\scal{(r_{\la_n}(T)-\theta(T))K_x }{K_x}} \\ & \leq \sup_{x\in C} \nor{K_x} \, \sup_{x\in C} \nor{(r_{\la_n}(T)-\theta(T))K_x } \\ & \leq \max_{i\in\{1,\ldots,m\}} \sup_{x\in B(x_i,\eps)} \nor{(r_{\la_n}(T)-\theta(T))K_{x_i} + (r_{\la_n}(T)-\theta(T))(K_x - K_{x_i}) } \\ & \leq \max_{i\in\{1,\ldots,m\}} \sup_{x\in B(x_i,\eps)} \bigl(\nor{ (r_{\la_n}(T)-\theta(T))K_{x_i}} + \nor{r_{\la_n}(T)-\theta(T)}_\infty \nor{K_x-K_{x_i}}\bigr) \\ & \leq \eps + \eps \sup_{\sigma\in[0,1]} \abs{r_{\la_n}(\sigma)-\theta(\sigma)} \leq 3\eps, \end{align*} where $\nor{K_x-K_{x_i}}<\eps$ for all $x\in B(x_i,\eps)$ since $\nor{K_x-K_{x_i}} = \dk (x,x_i)$, and, because $\abs{r_{\la_n}(\sigma)} \leq 1$, $\abs{\theta(\sigma)} \leq 1$, $\sup_{\sigma\in[0,1]} \abs{r_{\la_n}(\sigma)-\theta(\sigma)}\leq 2$. \end{proof} Convergence to zero of the sample error follows from \eqref{eq:concentration} and the next proposition. \begin{prop}\label{finite_bound} For all sets $C\subset X$ we have \begin{equation}\label{decomp2} \sup_{x\in C} \abs{F_n(x)-G_{\la_n}(x)} \leq \nor{r_{\la_n}(T_n)-r_{\la_n}(T)}_{\mathcal S_2} . \end{equation} In particular, if Assumption \ref{B}.\ref{B3}) holds, then \begin{equation}\label{eq:inpiu} \sup_{x\in C} \abs{F_n(x)-G_{\la_n}(x)} \leq L_{\la_n}\nor{T_n-T}_{{\cal S}_2} . \end{equation} \end{prop} \begin{proof} For all $x\in X$, we have the bound \begin{align*} \abs{F_n(x)-G_{\la_n}(x)} & = \abs{\scal{(\RRn (T_n)-\RRn (T))K_x}{K_x}} \\ & \leq \nor{r_{\la_n}(T_n)-r_{\la_n}(T)}_{\infty} \, \nor{K_x}^2 \\ & \leq \nor{r_{\la_n}(T_n)-r_{\la_n}(T)}_{\mathcal S_2} , \end{align*} which proves \eqref{decomp2}.
Assumption~\ref{B}.\ref{B3}) and Theorem 8.1 in \cite{biso03} (see also Lemma~\ref{lem:ineMaurer} in~\ref{maurer2} for a simple unpublished proof due to A.~Maurer) imply that \[ \nor{ r_{\la_n}(T_n)-r_{\la_n}(T)}_{\mathcal S_2}\leq L_{\la_n}\nor{T_n-T}_{\mathcal S_2} . \] Inequality \eqref{eq:inpiu} then follows. \end{proof} The above results can be combined in the following theorem, showing that, if the sequence $\la_n$ is suitably chosen, then $F_n$ converges almost surely to $F_\rho$ with respect to the topology of uniform convergence on compact subsets of $X$. \begin{thm}\label{thm:consistency} Under Assumption \ref{B}, if the sequence $(\la_n)_{n\geq 1}$ is such that \begin{equation}\label{ParChoice} \lim_{n\to\infty}\lambda_n=0 \quad \text{and}\quad \sup_{n\geq 1} \frac {L_{\la_n} \log n}{\sqrt{n}}<+\infty, \end{equation} then, for every compact subset $C\subset X$, \begin{equation}\label{Consistency} \lim_{n\to\infty} \sup_{x\in C} \lvert F_n(x)-F_\rho(x)\rvert=0 \qquad\text{almost surely}. \end{equation} \end{thm} \begin{proof} We show convergence to zero of both terms on the right hand side of inequality \eqref{decomp}, thus implying \eqref{Consistency}. By \eqref{eq:inpiu}, we have $$ \sup_{x\in C} \abs{F_n(x)-G_{\la_n}(x)} \leq L_{\la_n} \nor{T_n-T}_{\mathcal S_2} = \frac {L_{\la_n} \log n}{\sqrt{n}}\frac{\sqrt{n}\nor{T_n-T}_{\mathcal S_2}}{\log n } \leq M \frac{\sqrt{n}\nor{T_n-T}_{\mathcal S_2}}{\log n } , $$ where $M=\sup_{n\geq 1} (L_{\la_n} \log n)/\sqrt{n}$ is finite by~\eqref{ParChoice}. Then \eqref{eq:concentration} implies that the first term on the right hand side of inequality \eqref{decomp} converges to zero almost surely. Since the second term goes to zero by Proposition \ref{appr_err}, the claim follows.
\end{proof} {\subsubsection{Consistency of $X_n$}} { As already remarked above, uniform convergence of $F_n$ to $F_\rho$ on compact subsets {\em does not} imply convergence of the level sets of $F_n$ to the corresponding level sets of $F_\rho$ in any sense (as, for example, with respect to the Hausdorff distance among compact subsets). For this reason, we introduce a family of threshold parameters $(\tau_n)_{n\geq 1}$ and define the estimator $X_n$ of the set $X_\rho$ as in \eqref{set_est}. We define a data-dependent parameter $\tau_n$ as the function on $\Omega$ \begin{equation}\label{eq:deftaun} \tau_n(\omega)= 1-\min_{1\leq i \leq n}[F_n(\omega)](x_i) \qquad \omega=(x_i)_{i\geq 1}, \end{equation} where we wrote explicitly the dependence of $F_n$ on the training set $\omega\in\Omega$. Since $F_n$ takes values in $[0,1]$, clearly $\tau_n(\omega)\in [0,1]$. \begin{prop}\label{prop:limtaun} Suppose the metric space $X$ is compact. Then, under Assumption \ref{B}, the function $\tau_n$ is an $\R$-valued estimator. Moreover, if the sequence $(\la_n)_{n\geq 1}$ satisfies~\eqref{ParChoice}, we have $$ \lim_{n\to\infty} \tau_n = 0 \qquad \text{almost surely} . $$ \end{prop} \begin{proof} The proof that $\tau_n$ is an $\R$-valued estimator is of a technical nature, and we postpone it to Proposition \ref{prop:misurtaun} in \ref{maurer}.\\ Here we prove that $\lim_{n\to\infty} \tau_n = 0$ with probability $1$. By Theorem \ref{thm:consistency}, we can find an event $E_1\subset\Omega$ with $\PP{E_1}=1$ such that $\lim_{n\to\infty} \sup_{x\in X} \lvert F_n(x)-F_\rho(x)\rvert=0$ on $E_1$. Moreover, for the event $E_2=\{x_i \in X_\rho \mbox{ for all } i\geq 1\}$, we clearly have $\PP{E_2}=1$ by definition of $X_\rho$ and $\mathbb{P}$. If $\omega\in E_1\cap E_2$ and $\eps>0$ is fixed, then there exists $n_0 \geq 1$ (possibly depending on $\omega$ and $\eps$) such that for all $n\geq n_0$ $|[F_n(\omega)](x)-F_\rho(x)|\leq \eps$ for all $x\in X$.
Since $F_\rho(x)=1$ for all $x\in X_\rho$ by definition and $x_1,\ldots,x_n\in X_\rho$, it follows that $|[F_n(\omega)](x_i)-1| \leq \eps$ for all $1\leq i\leq n$, that is \[ 0\leq 1-[F_n(\omega)](x_i) \leq \eps \qquad \forall i\in\{1,2,\ldots,n\}, \] so that $0\leq \tau_n(\omega) \leq \eps$. Thus, $\lim_{n\to\infty} \tau_n(\omega) = 0$, and, since $\PP{E_1\cap E_2}=1$, the sequence $(\tau_n)_{n\geq 1}$ goes to zero with probability $1$. \end{proof} The following is the central result of this section. It shows that, if $X$ is compact and the sequence $(\tau_n)_{n\geq 1}$ is chosen as above, the Hausdorff distance between $X_n$ and $X_\rho$ goes to zero with probability $1$. Here we recall that the Hausdorff distance between two subsets $A,B\subset X$ is \[ d_H (A,B)=\max\set{\sup_{a\in A}\dk(a,B),\ \sup_{b\in B}\dk(b,A)}, \] where $\dk (x,Y) = \inf_{y\in Y} \dk (x,y)$. \begin{thm}\label{thm:hauss} Suppose the metric space $X$ is compact. Under Assumption \ref{B}, if $\hh$ separates the set $X_\rho$ and the sequence $(\la_n)_{n\geq 1}$ satisfies~\eqref{ParChoice}, for the choice of the threshold parameters $(\tau_n)_{n\geq 1}$ given in \eqref{eq:deftaun} we have \[ \lim_{n\to \infty} d_H (X_n, X_\rho)=0 \qquad \text{almost surely} . \] \end{thm} We devote the rest of this section to the proof of the above theorem. For simplicity, we split it into a few lemmas. \begin{lem}\label{lem:agg1} Under the hypotheses of Theorem \ref{thm:hauss}, we have \begin{equation}\label{eq:limXrho} \lim_{n\to\infty} \sup_{x\in X_n}d_K(x,X_\rho) =0 \qquad \mbox{almost surely}. \end{equation} \end{lem} \begin{proof} Let $E$ be the event $E = \{\lim_{n\to\infty}\tau_n = 0\}$. Then, $\PP{E} = 1$ by Proposition \ref{prop:limtaun}. We fix $\omega\in E$, and suppose by contradiction that at such $\omega$ the limit \eqref{eq:limXrho} does not hold.
Then (depending on $\omega$) there exists $\eps>0$ such that for all $k$ there is $n_k\geq k$ satisfying the inequality $\sup_{x\in X_{n_k}}\dk(x,X_\rho)\geq 2\eps$. Hence there is $z_k\in X_{n_k}$ such that \begin{equation} \dk(z_k, x)\geq \eps\qquad \text{for all }x\in X_\rho.\label{eq:1} \end{equation} Since $X$ is compact, possibly passing to a subsequence we can assume that the sequence $(z_k)_{k\geq 1}$ converges to a limit $z\in X$. We claim that $z\in X_\rho$. Indeed, if $k$ is sufficiently large, then we have \begin{align*} \abs{F_\rho(z) -1} & \leq \abs{F_\rho(z)-F_\rho(z_k)} + \abs{F_\rho(z_k) - F_{n_k}(z_k)} + \abs{F_{n_k}(z_k) -1} \\ & \leq \abs{F_\rho(z)-F_\rho(z_k)}+ \sup_{x\in X} \abs{F_\rho(x) - F_{n_k}(x)} + \tau_{n_k} , \end{align*} where $\abs{F_{n_k}(z_k) -1} \leq \tau_{n_k}$ is due to the fact that $z_k\in X_{n_k}$, so that \[ 1+\tau_{n_k}\geq 1 \geq F_{n_k}(z_k)\geq 1 -\tau_{n_k}.\] As $n_k$ goes to $\infty$, we have $\sup_{x\in X} \abs{F_\rho(x) - F_{n_k}(x)} \to 0$ by Theorem \ref{thm:consistency}; moreover, since $F_\rho$ is continuous in $z$ and $\tau_{n_k}$ goes to zero, the above inequality for $\abs{F_\rho(z)-1}$ gives $F_\rho(z)=1$. Since $\hh$ separates $X_\rho$, this implies $z\in X_\rho$. However, \eqref{eq:1} implies that $\dk(z,x)\geq \eps$ for all $x\in X_\rho$, which is the desired contradiction. \end{proof} The proof that $\sup_{x\in X_\rho}d_K(x,X_n)$ goes to zero as $n\to\infty$ requires a further technical lemma, see \cite[Lemma 6.1]{GY02}. In its statement, for all $n\geq 1$ and $x\in X$, we denote by $\xi_{1,n}(x)$ the nearest neighbour of $x$ in the training set $\set{x_1,\ldots,x_n}$, i.e. $$ \xi_{1,n}(x) = \mathop{\rm arg\,min}_{x'\in\set{x_1,\ldots,x_n}} d_K (x',x) . $$ \begin{lem}\label{lem:agg2} For all $x\in X_\rho$, \[ \lim_{n\to\infty} d_K(\xi_{1,n}(x),x)=0 \qquad \mbox{almost surely}.
\] \end{lem} \begin{proof} Given $x\in X_\rho$, fix $\eps>0$ and, denoting by $B(x,\eps)$ the closed ball with center $x$ and radius $\eps$, set $p=\rho(B(x,\eps))$. By definition of the support and the fact that $\rho$ is a probability measure, $0<p\leq 1$. Furthermore \begin{align*} \PP{ d_K(\xi_{1,n}(x),x)>\eps} & =\PP{x_i\not\in B(x,\eps)\, \forall i=1,\ldots,n} \\ \mbox{(by independence of the $x_i$'s)}& = \prod_{i=1}^n \PP{x_i\not\in B(x,\eps)} \\ \mbox{(since the $x_i$'s are identically distributed)} & = \prod_{i=1}^n (1-\rho(B(x,\eps)))\\ & = (1-p)^n . \end{align*} Since $0\leq 1-p<1$, the series $\sum_n(1-p)^n$ converges, so that the Borel--Cantelli lemma yields \[ \PP{\bigcup_{n=1}^\infty \bigcap_{m=n}^\infty\set{ d_K(\xi_{1,m}(x),x) \leq \eps}}= 1 . \] Since this holds for all $\eps>0$, we have \[ \PP{\bigcap_{k=1}^\infty \bigcup_{n=1}^\infty \bigcap_{m=n}^\infty\set{ d_K(\xi_{1,m}(x),x) \leq \frac{1}{k}}}= 1 , \] and the lemma follows. \end{proof} \begin{lem}\label{lem:agg3} Under the hypotheses of Theorem \ref{thm:consistency}, if the metric space $X$ is compact, then \begin{equation}\label{eq:limXn} \lim_{n\to \infty} \sup_{x\in X_\rho}\dk(x,X_n) =0 \qquad \mbox{almost surely} . \end{equation} \end{lem} \begin{proof} Choose a denumerable dense family $\set{z_j}_{j\in J}$ in $X_\rho$. By Lemma \ref{lem:agg2} there exists an event $E$ with probability $1$ such that \begin{equation}\label{eq:altrolim} \lim_{n\to+\infty} d_K(\xi_{1,n}(z_j),z_j)=0 \qquad \forall j\in J \end{equation} on $E$. We claim that the limit \eqref{eq:limXn} holds on $E$. Observe that, by definition of $\tau_n$, $x_i\in X_n$ for all $1\leq i \leq n$, and \[ \sup_{x\in X_\rho}d_K(x,X_n)\leq \sup_{x\in X_\rho}\min_{1\leq i\leq n} d_K(x,x_i) = \sup_{x\in X_\rho} d_K(\xi_{1,n}(x),x), \] so that it is enough to show that $ \lim_{n\to+\infty} \sup_{x\in X_\rho} d_K(\xi_{1,n}(x),x)=0$.\\ Fix $\eps>0$.
Since $X_\rho$ is compact, there is a finite subset $J_\eps\subset J$ such that $\set{B(z_j,\eps)}_{j\in J_\eps}$ is a finite covering of $X_\rho$. We claim that \begin{equation} \sup_{x\in X_\rho} d_K(\xi_{1,n}(x),x)\leq \max_{j\in J_\eps} d_K(\xi_{1,n}(z_j),z_j)+\eps.\label{goin} \end{equation} Indeed, given $x\in X_\rho$, there exists an index $j\in J_\eps$ such that $x\in B(z_j,\eps)$. By definition of $\xi_{1,n}$, clearly \[ d_K(\xi_{1,n}(x),x)\leq d_K(\xi_{1,n}(z_j),x),\] so that by the triangle inequality we get \begin{align*} d_K(\xi_{1,n}(x),x)& \leq d_K(\xi_{1,n}(z_j),x)\leq d_K(\xi_{1,n}(z_j),z_j)+ d_K(z_j,x) \\ & \leq d_K(\xi_{1,n}(z_j),z_j)+\eps \\ & \leq \max_{j\in J_\eps} d_K(\xi_{1,n}(z_j),z_j)+\eps. \end{align*} Taking the $\sup$ over $X_\rho$ we get the claim.\\ Since $J_\eps$ is finite, by \eqref{eq:altrolim} \[ \lim_{n\to+\infty} \max_{j\in J_\eps} d_K(\xi_{1,n}(z_j),z_j)=0, \] hence \eqref{goin} yields \[ \limsup_{n\to \infty} \sup_{x\in X_\rho} d_K(\xi_{1,n}(x),x) \leq \eps. \] Since $\eps$ is arbitrary, we get $\lim_{n\to+\infty} \sup_{x\in X_\rho} d_K(\xi_{1,n}(x),x)=0$, and this concludes the proof. \end{proof} The proof of Theorem \ref{thm:hauss} follows easily by combining the previous lemmas. \begin{proof}[Proof of Theorem \ref{thm:hauss}] As $d_H (X_n,X_\rho)=\max\{\sup_{x\in X_n}\dk(x,X_\rho),\ \sup_{x\in X_\rho}\dk(x,X_n)\}$, the theorem follows by combining Lemmas \ref{lem:agg1} and \ref{lem:agg3}. \end{proof} We conclude this section with some comments. First, if $\hh$ does not separate $X_\rho$, then the statement of Theorem \ref{thm:hauss} continues to be true provided that the support $X_\rho$ is replaced by the level set $\set{x\in X\mid F_\rho(x)=1}$.
Note that the Hausdorff distance $d_H$ has been defined with respect to the metric $\dk$ induced by the kernel. However, if the set $X$ has its own metric $d_X$ making it compact and the hypotheses of Proposition \ref{c:connection} are satisfied, then Theorem \ref{thm:hauss} implies convergence of $X_n$ to $X_\rho$ also with respect to the Hausdorff distance associated to $d_X$. Finally, we remark that in Theorem \ref{thm:hauss} convergence of $X_n$ to $X_\rho$ does not depend on any a priori assumption on the probability $\rho$.} \subsection{Finite Sample Bounds and Stability of Random Sampling} In order to prove stability of our algorithms under random sampling and determine their convergence rates, we need to specify suitable a priori assumptions on the class of problems to be considered. In the present section, a detailed analysis {of the convergence rates of $F_n$ to $F_\rho$} will be carried out for the case of the Tikhonov filter $\RR(\sigma) = \sigma/(\sigma+\la)$. The techniques in \cite{cap06} should allow one to derive similar results for filters other than Tikhonov. For all $\lambda>0$ we define \[ {\mathcal N}(\la)=\tr{(T+\la)^{-1}T}=\sum_{j\in J} \frac{\sigma_j}{\sigma_j+\la},\] which is finite since $T$ is a trace class operator. The above quantity is related to the degrees of freedom of the estimator \cite{hatifr01}. Here, we recall that ${\mathcal N}$ is a decreasing function of $\la$ and $\lim_{\la\to 0^+}{\mathcal N}(\la)=N$, where $N$ is the dimension of the range of $T$. The a priori conditions we consider in the present paper are given by the following two assumptions, which involve both the reproducing kernel $K$ and the probability measure $\rho$ (compare with {\cite{cap07,cade07}}).
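As a purely illustrative aside (our own numerical sketch with a made-up spectrum, not part of the analysis), the quantity ${\mathcal N}(\la)$ can be evaluated directly from the eigenvalues of $T$; the snippet below checks the properties just recalled: ${\mathcal N}$ is decreasing in $\la$, it tends to $\dim\ran{T}$ as $\la\to 0^+$, and $\la{\mathcal N}(\la)<\tr{T}=1$.

```python
import numpy as np

# Hypothetical eigenvalue sequence of a trace class operator T with tr(T) = 1.
sigma = np.array([0.5, 0.25, 0.125, 0.0625, 0.03125, 0.03125])

def effective_dimension(lam, sigma):
    """N(lam) = tr((T + lam)^{-1} T) = sum_j sigma_j / (sigma_j + lam)."""
    return float(np.sum(sigma / (sigma + lam)))

# N(lam) is decreasing in lam and tends to dim ran(T) = 6 as lam -> 0+.
lams = [1.0, 0.1, 0.01, 1e-6]
vals = [effective_dimension(lam, sigma) for lam in lams]
```

Here the spectrum is arbitrary; any summable non-negative sequence summing to $1$ would exhibit the same qualitative behaviour.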
\begin{assm}\label{C} We assume that \begin{enumerate}[a)] \item\label{C2} there exist $b\in [0,1]$ and $D_b\geq 1$ such that \begin{equation}\label{prior2} \sup_{\la>0} {\mathcal N}(\la)\la^b\leq D_b^2 ; \end{equation} \item\label{C1} there exist $ 0< s\leq 1$ and a constant $C_s>0$ such that $P_\rho K_x\in \ran{T^{s/2}}$ for all $x\in X$, and \begin{equation}\label{prior1} \sup_{x\in X}\nor{T^{-\frac{s}{2}} P_\rho K_x}^2\leq C_s . \end{equation} \end{enumerate} \end{assm} The above conditions are classical in the theory of inverse problems and have recently been considered in supervised learning. Before showing how they allow us to derive a finite sample bound on the error $\sup_{x\in X} \abs{F_n(x)-F_\rho(x)}$, we add some comments. First, Assumption \ref{C}.\ref{C2}) is related to the level of ill-posedness of the problem \cite{enhane} and can be interpreted as a condition specifying the {\em aspect ratio} of the range of $T$. Since $0<\la{\mathcal N}(\la) < \tr{T} = 1$, inequality~\eqref{prior2} is always satisfied with the choice $b=1$ and $D_1 = 1$, so that in this case we are not imposing any a priori assumption. If $\dim\ran{T} = N <\infty$, the best choice is $b=0$ and $D_0 = \sqrt{N}$; otherwise, if $\dim\ran{T} = \infty$, then necessarily $b>0$. In the latter case, a sufficient condition to have $b<1$ is to assume a decay rate $\sigma_j\sim j^{-1/b}$ on the eigenvalues of $T$ (see Proposition~3 of \cite{cade07}). Coming to Assumption \ref{C}.\ref{C1}), we first remark that it is always satisfied when $\dim\ran{T}$ is finite with the choice $s=1$ and $C_1 = \max_{j\in J} 1/\sigma_j$.
In the general case, {Assumption \ref{C}.\ref{C1})} can be expressed {by the following equivalent condition} \begin{equation}\label{apriori} {\sum_{j\in J} \sigma_j^{1-s} |\phi_j(x)|^2 \leq C_s \qquad \forall x\in X ,} \end{equation} where {$(\phi_j,\sigma_j)_{j\in J}$ are the eigenvectors and eigenvalues of $L_K$,} which {were} defined in Section \ref{sec:int} (see in particular \eqref{eq:outofsample} for the definition of the functions $\phi_j$ outside the set $X_\rho$). Clearly, the higher $s$, the stronger the assumption. Note that in particular inequality~\eqref{apriori} holds true if there exists a constant\footnote{As it happens for example for reproducing kernels on $X=[0,2\pi]^d$ which are invariant under translations, when $\rho$ is the Lebesgue measure on $[0,2\pi]^d$.} $\kappa>0$ such that $\sup_{x\in X}\abs{\phi_j(x)}\leq \kappa$ for all $j\in J$, and $s\in ]0,1]$ is chosen to make the series $\sum_{j\in J} \sigma_j^{1-s}$ finite. In this case, it is quite easy to give conditions on the eigenvalues $(\sigma_j)_{j\in J}$ ensuring that both Assumptions \ref{C}.\ref{C2}) and \ref{C}.\ref{C1}) are satisfied. For example, if $\sigma_j\sim j^{-1/b}$ for some $0<b<1$, then \eqref{prior2} holds true with this choice of $b$, and~\eqref{prior1} is satisfied for any $0<s<1-b$. {\begin{rem} Setting $\beta = 1-s\in [0,1[$, condition~\eqref{apriori} is equivalent to the fact that {for all $x,y\in X$ the series \begin{equation}\label{eq:Stein3} K^\beta_\rho(x,y)= \sum_{j\in J} \sigma_j^\beta \phi_j(y)\phi_j(x) \end{equation} converges absolutely to a bounded reproducing kernel $K^\beta_\rho$. Convergence of the series \eqref{eq:Stein3} was studied e.g.~in \cite{stsc12}, where it is proved that, if the sequence of powers $(\sigma^\beta_j)_{j\in J}$ is summable, there exists a $\rho$-null set $N$ such that \eqref{eq:Stein3} converges absolutely on $(X\setminus N) \times (X\setminus N)$ (see \cite[Proposition 4.4]{stsc12}).
We remark that this weaker fact is not sufficient in our setting: indeed, on the one hand it does not imply that the series \eqref{eq:Stein3} (or, equivalently, \eqref{apriori}) converges on all of $X$, and on the other hand it does not guarantee that such a series is uniformly bounded, two conditions which however are both needed in the proof of Theorem \ref{rates} below to get uniform estimates on the whole set $X$.} A direction of future work is to study the geometric nature of the above conditions when $X$ is a metric space or a Euclidean space and $X_\rho$ a Riemannian submanifold. \end{rem}} The following theorem provides the finite sample bound on the error $\sup_{x\in X} \abs{F_n(x)-F_\rho(x)}$. \begin{thm}\label{rates} Suppose $\RR(\sigma)=\sigma/(\sigma+\la)$. If Assumption \ref{C} holds and we choose $$ \la_n =\left(\frac{1}{n}\right)^{\frac{1}{2s+b+1}}, $$ then, for $n\geq 1$ and $\delta > 0$, we have \begin{equation}\label{eq:rates} \sup_{x\in X} \abs{F_n(x)-F_\rho(x)} \leq (C_s\vee( 2D_b (\delta\vee\sqrt{2\delta}))) \left(\frac{1}{n}\right)^{\frac{s}{2s+b+1}} \end{equation} with probability at least $1-2e^{-\delta}$. \end{thm} We postpone the proof to the end of the current section and add here some comments. The above finite sample bound quantifies the stability of the estimator with respect to random sampling. Equivalently, if we set the right hand side of the inequality to $\eps$ and solve for $n=n(\eps, \delta)$, we obtain the sample complexity of the problem, i.e.~how many samples are needed in order to achieve the maximum error $\eps$ with confidence $1-2e^{-\delta}$. As remarked before, Assumption \ref{C}.\ref{C2}) is verified for $b=1$ by any reproducing kernel.
In this limit case our result gives a rate $n^{-s/(2s+2)}$, comparable with the one that can be obtained by inserting \eqref{eq:inpiu} and \eqref{apprx_err} below into inequality \eqref{decomp}, {with $\nor{T_n - T}$ bounded by \eqref{eq:concentration_0}.} Note that, if $\dim\ran{T}=N<\infty$, choosing $b=0$, $D_0=\sqrt{N}$, $s=1$ and $C_1 = \max_{j\in J} 1/\sigma_j$, the rate in~\eqref{eq:rates} becomes $n^{-1/3}$. The proof of Theorem \ref{rates} follows the ideas in \cite{cade07} and is based on refined estimates of the sample and approximation errors. \begin{prop}\label{prop:sample_err2} If Assumption \ref{C}.\ref{C2}) holds true, then, for $n\geq 1$ and $\delta > 0$, we have $$ \sup_{x\in X} \abs{F_n(x)-G_{\la_n}(x)} \leq \left(\frac{\delta}{n\la_n}+\sqrt{\frac{2\delta{\cal N} (\la_n)}{n\la_n}}\right) $$ with probability at least $1-2e^{-\delta}$. \end{prop} \begin{proof} Consider the following decomposition \begin{eqnarray*} \RRn (T) - \RRn (T_n) &=& (T+\la_n)^{-1}T-(T_n+\la_n)^{-1}T_n\\ &=& (T+\la_n)^{-1}T-(T+\la_n)^{-1}T_n+(T+\la_n)^{-1}T_n-(T_n+\la_n)^{-1}T_n\\ &=& (T+\la_n)^{-1}(T-T_n)+ (T+\la_n)^{-1}[(T_n + \la_n) - (T+\la_n)] (T_n+\la_n)^{-1}T_n\\ &=& (T+\la_n)^{-1}(T-T_n)+ (T+\la_n)^{-1}(T_n-T) (T_n+\la_n)^{-1}T_n\\ &=& (T+\la_n)^{-1}(T-T_n)[I- (T_n+\la_n)^{-1}T_n] \\ &=& \la_n (T+\la_n)^{-1}(T-T_n) (T_n+\la_n)^{-1}. \end{eqnarray*} It is easy to see that {$\nor{(T_n+\la_n)^{-1}}_\infty\leq \la_n^{-1}$}, hence {\[ \nor{\RRn (T) - \RRn (T_n)}_{\mathcal S_2}\leq \la_n\nor{(T+\la_n)^{-1}(T-T_n)}_{\mathcal S_2}\nor{(T_n+\la_n)^{-1}}_\infty \leq \nor{(T+\la_n)^{-1}(T-T_n)}_{\mathcal S_2} .
\]} Then, from Lemma~\ref{conce2} in the Appendix we have that $$ \nor{(T+\la_n I)^{-1}(T-T_n)}_{\mathcal S_2}\leq \left( \frac{\delta}{n\la_n}+\sqrt{\frac{2\delta{\cal N}(\la_n)}{n\la_n}} \right), $$ with probability at least $1-2e^{-\delta}$, so that the result follows by \eqref{decomp2}. \end{proof} \begin{prop}\label{prop:apprx_err} If Assumption \ref{C}.\ref{C1}) holds true, then \begin{equation}\label{apprx_err} \sup_{x\in X}\abs{G_\la (x)-F_\rho(x)} \leq \la^{s}C_s . \end{equation} \end{prop} \begin{proof} Since $\theta(\sigma)-r_{\la}(\sigma)=\la/(\sigma+\la)$ for all $ \sigma >0$, we have \begin{align*} \abs{G_{\la}(x)-F_\rho(x)} & = \abs{\scal{(r_{\la}(T) -\theta(T))K_x}{K_x}} = \abs{\scal{(r_{\la}(T) -\theta(T))P_\rho K_x}{P_\rho K_x}} \\ & =\la \nor{ (T+\la)^{-\frac12}P_\rho K_x}^2 , \end{align*} as $P_{\rho}K_x \in \ker{T}^\perp$. Since by assumption $P_{\rho}K_x\in \ran{T^{s/2}}$ for some $0<s\leq 1$, spectral calculus and the bound $\sigma^s/(\sigma+\la) \leq \la^{s-1}$ give the inequality $$ \nor{ (T+\la)^{-\frac12}P_\rho K_x}^2 = \nor{ [(T+\la)^{-1} T^s]^{\frac 12} T^{-\frac s2} P_\rho K_x}^2 \leq \la^{s-1} \nor{T^{-\frac s2}P_\rho K_x}^2 , $$ so that \[ \abs{G_{\la}(x)-F_\rho(x)} \leq \la^{s} \nor{T^{-\frac s2}P_\rho K_x}^2\leq \la^sC_s \] for all $x\in X$. \end{proof} We are now ready to prove the main result. \begin{proof}[Proof of Theorem \ref{rates}] The choice $\la_n=n^{-1/(2s+b+1)}$ is the one that sets the contributions of the sample and approximation errors in \eqref{decomp} to be equal. Indeed, we begin by simplifying the bound on the sample error.
If $\la \geq n^{-1}$, then $n\la\geq\sqrt{n\la^{b+1}}$ for all $0<b\leq1$, so that $$ \frac{\delta}{n\la }+\sqrt{\frac{2\delta{\cal N}(\la)}{n\la }} = \frac{\delta}{n \la}+\sqrt{\frac{2\delta {\mathcal N}(\la)\la^b}{n\la^{b+1} }} \leq D_b(\delta\vee\sqrt{2\delta}) \left(\frac{1}{n\la}+\frac{1}{\sqrt{n\la^{b+1}}}\right) \leq \frac{2D_b(\delta\vee\sqrt{2\delta})}{\sqrt{n\la^{b+1}}}, $$ where we used the definition of $D_b$ (and the fact that $D_b\geq 1$). Then, by the above inequality and Propositions \ref{prop:sample_err2} and \ref{prop:apprx_err}, inequality \eqref{decomp} gives \begin{equation}\label{staqua} \sup_{x\in X} \abs{F_n(x)-F_\rho(x)} \leq C_s\la^s+ \frac{2D_b(\delta\vee\sqrt{2\delta})}{\sqrt{n\la^{b+1}}}. \end{equation} If we set the contributions of the sample and approximation errors to be equal, the choice for $\la$ is $$ \la =\left(\frac{1}{n}\right)^{\frac{1}{2s+b+1}}. $$ It is easy to see that $\la \geq n^{-1}$ for all values of $s,b$, so that from \eqref{staqua} we have $$ \sup_{x\in X} \abs{F_n(x)-F_\rho(x)} \leq (C_s\vee (2D_b (\delta\vee\sqrt{2\delta}))) \left(\frac{1}{n}\right)^{\frac{s}{2s+b+1}} . $$ \end{proof} \subsection{The kernel PCA filter} A natural choice for the spectral filter $\RR$ would be the regularization defined by kernel PCA \cite{scsmmu98}, that corresponds to truncating the generalized inverse of the kernel matrix at some cutoff parameter $\la$. The corresponding filter function is \[ \RR(\sigma)= \begin{cases} 1 & \sigma\geq \la \\ 0 & \sigma <\la \end{cases}.\] The above filter does not satisfy the Lipschitz condition~\ref{B}.\ref{B3}) in Assumption~\ref{B}, so that the bound \eqref{eq:inpiu} for the sample error $\sup_{x\in X} \abs{F_n (x) - G_{\la_n} (x)}$ does not hold in this case\footnote{Note that, by Proposition \ref{prop:misurabilita2} in~\ref{maurer}, if $X$ is locally compact, then $F_n$ defined in \eqref{estimator} still is a $C(X)$-valued estimator.}. 
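To make the failure of the Lipschitz condition concrete, here is a minimal sketch of our own (with an arbitrary cutoff value): the Tikhonov filter changes by an arbitrarily small amount across the cutoff, while the kernel PCA filter always jumps by $1$, so no finite Lipschitz constant $L_\la$ can exist for it.

```python
def tikhonov(sigma, lam):
    # Tikhonov filter r_lam(sigma) = sigma / (sigma + lam), Lipschitz with constant 1/lam.
    return sigma / (sigma + lam)

def pca_filter(sigma, lam):
    # Kernel PCA filter: hard threshold at the cutoff lam (discontinuous at sigma = lam).
    return 1.0 if sigma >= lam else 0.0

lam, eps = 0.1, 1e-9
jump_tik = tikhonov(lam + eps, lam) - tikhonov(lam - eps, lam)     # shrinks with eps
jump_pca = pca_filter(lam + eps, lam) - pca_filter(lam - eps, lam)  # always equal to 1
```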
However, {{we can still achieve an estimate by employing inequality \eqref{ineMaurer_bis} in~\ref{maurer2}. To this aim,} with a slight abuse of notation, here we count the eigenvalues of $T$ and $T_n$ without their multiplicities and we list them in decreasing order. Furthermore, for any $\la>0$ we set $\sigma_{j(\la)}$ and $\sigma_{k(\la)}^{(n)}$ as the smallest eigenvalues of $T$ and $T_n$ which are greater than or equal to $\lambda$, i.e. \[ \sigma_1>\sigma_2>\ldots>\sigma_{j(\la)}\geq\la>\sigma_{j(\la)+1}\qquad \sigma^{(n)}_1>\sigma^{(n)}_2>\ldots>\sigma_{k(\la)}^{(n)}\geq\la>\sigma_{k(\la) +1}^{(n)}. \] Inequality~\eqref{ineMaurer_bis} implies that $$ \nor{ r_\la (T_n)-r_\la (T)}_{\mathcal S_2} \leq \frac{\nor{T_n-T}_{\mathcal S_2}}{\min\set{\sigma_{j(\la)}-\sigma_{k(\la)+1}^{(n)}, \sigma_{k(\la)}^{(n)}-\sigma_{j(\la)+1}}} \leq \frac{\nor{T_n-T}_{\mathcal S_2}}{\min\set{\sigma_{j(\la)}-\lambda, \lambda-\sigma_{j(\la)+1}}} , $$ and} inequality \eqref{decomp2} for the sample error then reads $$ \sup_{x\in C} \abs{F_n(x)-G_{\la_n}(x)} \leq \frac{\nor{T_n-T}_{\mathcal S_2}}{\min\set{\sigma_{j(\la_n)}-\lambda_n , \lambda_n - \sigma_{j(\la_n)+1}}} . $$ By Lemma \ref{concentration}, in order to have convergence to $0$ of the right hand side of this expression we need to choose the sequence $(\la_n)_{n\geq 1}$ such that $$ \sup_{n\geq 1} \frac{\log n}{\sqrt{n} \min\set{\sigma_{j(\la_n)}-\lambda_n , \lambda_n - \sigma_{j(\la_n)+1}}} < \infty . $$ Since the gap $\sigma_{j(\la)}-\sigma_{j(\la)+1}$ can have an arbitrary rate of convergence to zero as $\la\to 0^+$, we thus see that there exists {\em no} distribution-independent choice of $(\lambda_n)_{n\geq 1}$ ensuring the convergence to zero of the above bound. Note that $r_\la (T)$ is the projection $P_{j(\la)}$ onto the sum of the eigenspaces of the first $j(\la)$ eigenvalues of $T$ and $r_\la(T_n)$ is the projection $P^{(n)}_{k(\la)}$ onto the sum of the eigenspaces of the first $k(\la)$ eigenvalues of $T_n$.
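The role of the spectral gap can be observed numerically. The following sketch (a finite-dimensional toy stand-in for $T$ and $T_n$, with made-up eigenvalues and a small random perturbation, not taken from the paper) compares the Hilbert--Schmidt distance between the spectral projections onto the top two eigenspaces with the gap-dependent bound $2\nor{T_n-T}_{\mathcal S_2}/(\sigma_2-\sigma_3)$.

```python
import numpy as np

def top_k_projection(A, k):
    # Orthogonal projection onto the span of the eigenvectors of the
    # k largest eigenvalues of the symmetric matrix A.
    w, V = np.linalg.eigh(A)            # eigenvalues in ascending order
    Vk = V[:, np.argsort(w)[::-1][:k]]
    return Vk @ Vk.T

T = np.diag([0.5, 0.3, 0.1, 0.05, 0.05])   # toy spectrum, gap sigma_2 - sigma_3 = 0.2
rng = np.random.default_rng(0)
E = rng.standard_normal((5, 5)) * 1e-3
E = (E + E.T) / 2                           # small symmetric perturbation
Tn = T + E

P, Pn = top_k_projection(T, 2), top_k_projection(Tn, 2)
lhs = np.linalg.norm(Pn - P, "fro")                      # Hilbert-Schmidt norm
rhs = 2 * np.linalg.norm(Tn - T, "fro") / (0.3 - 0.1)    # gap-dependent bound
```

Shrinking the gap while keeping the perturbation fixed makes the bound (and, for an adversarial perturbation, the left hand side) blow up, which is the numerical counterpart of the obstruction discussed above.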
If $(M_n)_{n\geq 1}$ is any strictly increasing sequence with $M_n \in \N$ for all $n$, we can consider the following distribution dependent choice $\lambda_n=(\sigma_{M_n}+\sigma_{M_n+1})/2$. Then we have \[ \nor{ P^{(n)}_{M_n}-P_{M_n}}_{\mathcal S_2}= \nor{r_{\la_n}(T_n)-r_{\la_n}(T)}_{\mathcal S_2} \leq \frac{2\nor{T_n-T}_{\mathcal S_2}}{\sigma_{M_n}-\sigma_{M_n+1}}, \] which recovers a known result about kernel PCA (see for example \cite{zwbl06}). Furthermore, if the bound $\nor{T_n-T}_{\mathcal S_2}<(\sigma_{M_n}-\sigma_{M_n+1})/2$ holds, then we obtain $\nor{P^{(n)}_{M_n}-P_{M_n}}_{\mathcal S_2}<1$, hence we have the equality $\dim\ran{P^{(n)}_{M_n}}=\dim\ran{P_{M_n}}$. The following result extends Theorem~\ref{thm:consistency} to the case of kernel PCA, at the price of having a distribution dependent choice of the cut-off sequence $(M_n)_{n\geq 1}$. \begin{thm}\label{thm:consistencyPCA} If the sequence of natural numbers $(M_n)_{n\geq 1}$ is strictly increasing and such that \begin{equation*} \sup_{n\geq 1} \frac {\log n}{\sqrt{n}(\sigma_{M_n}-\sigma_{M_n+1})}<+\infty \end{equation*} and we define the sequence $(\la_n)_{n\geq 1}$ as $$ \lambda_n=\frac{\sigma_{M_n}+\sigma_{M_n+1}}{2} , $$ then, for every compact subset $C\subset X$, \begin{equation*} \lim_{n\to\infty} \sup_{x\in C} \lvert F_n(x)-F_\rho(x)\rvert=0 \qquad\text{almost surely}. \end{equation*} \end{thm} \begin{proof} By the above discussion and inequality \eqref{decomp2}, $$ \sup_{x\in C} \abs{F_n(x)-G_{\la_n}(x)} \leq \frac{2\nor{T_n-T}_{\mathcal S_2}}{\sigma_{M_n}-\sigma_{M_n+1}} \leq \frac{\sqrt{n} \nor{T_n-T}_{\mathcal S_2}}{\log n} \, \sup_{n\geq 1} \frac {2\log n}{\sqrt{n}(\sigma_{M_n}-\sigma_{M_n+1})} . $$ Convergence to $0$ of the sample error then follows from \eqref{eq:concentration}. Combining this fact and Proposition \ref{appr_err} into inequality \eqref{decomp}, the claim then follows. 
\end{proof} \section{Some Perspectives}\label{sec:disc} In this section we discuss some different perspectives to our approach and suggest some possible extensions. \subsection{Connection to Mercer Theorem}\label{sec:Mercer} We start discussing some connections between our analytical characterization of the support of $\rho$ {and Mercer} theorem \cite{mer09}. With the notations of Section~\ref{sec:int}, the fact that the family {$(\sqrt{\sigma_j}\phi_j)_{j\in J}$ is an orthonormal basis of $P_\rho\hh$ and the reproducing property give the relation \begin{equation}\label{eq:mercer} \scal{P_\rho K_{y}}{K_x}= \sum_{j\in J} \sigma_j \phi_j (x) {\phi_j (y)} \qquad \forall x,y\in X , \end{equation}} where the series converges absolutely. Note that in this expression the eigenfunctions $\phi_j$ of $L_K$ are defined outside $X_\rho$ through the extension equation~\eqref{eq:outofsample}. Restricting \eqref{eq:mercer} to $x,y\in X_\rho$, we obtain $$ K(x,y)=\sum_{j\in J} \sigma_j \phi_j (x) {\phi_j (y)} \qquad \forall x,y\in X_\rho , $$ which is nothing else than Mercer theorem \cite{stch08}. In particular, taking $x=y$, this formula implies that $\sum_{j\in J} \sigma_j \abs{\phi_j (x)}^2 =K(x,x)$ for all $x\in X_\rho$. On the other hand, the assumption that the reproducing kernel separates $X_\rho$ precisely ensures that $$\sum_{j\in J} \sigma_j \abs{\phi_j (x)}^2 \neq K(x,x)\qquad \forall x\not\in X_\rho.$$ (Recall that, if $K$ separates $X_\rho$, then $X_\rho$ is the $1$-level set of the function $F_\rho= \sum_{j\in J} \sigma_j \abs{\phi_j}^2 $.{)} \subsection{A Feature Space Point of View} In machine learning, kernel methods are often described in terms of a corresponding feature map \cite{vapnik98}. This point of view highlights the linear structure of the Hilbert space and often provides a more geometric interpretation. 
We recall that a feature map associated to a reproducing kernel is a map $\Psi:X\to{\mathcal F}$, where ${\mathcal F}$ is a Hilbert space with inner product $\scal{\cdot}{\cdot}_{\mathcal F}$, satisfying $ K(x,y)=\scal{\Psi(y)}{\Psi(x)}_{\mathcal F}. $ While every map $\Psi$ from $X$ into a Hilbert space $\cal F$ defines a reproducing kernel, it is also possible to prove that each kernel has an associated feature map (and in fact many). Indeed, given $K$, the natural assignment is $\ff\equiv\hh$ and $\Psi(x)\equiv { K_x}$. Such a choice is also minimal, in the sense that, if we make a different choice of $\ff$ and $\Psi$, then there exists an isometry $W:\hh\to\ff$ such that $\Psi(x) = W {K_x}$ $\forall x\in X${ -- see for example Proposition 2.4 of \cite{cadeto06} or Theorem~4.21 of \cite{stch08}, noticing that both papers deal with the transpose $W^\top:\ff\to\hh$.} We next review some of the concepts introduced in Section \ref{sec:basic} in terms of feature maps. For the sake of comparison we assume that $\nor{\Psi(x)}_{\cal F}=1$ for all $x\in X$ (this corresponds to the normalization assumption \ref{A}.\ref{A4})), we let ${\cal F}_C$ be the closure of the linear span of the set $\set{\Psi(x)\mid x \in C}$, and define $$ d_{\cal F}(\Psi(x), {\cal F}_C) = \inf_{f\in {\cal F}_C} \nor{\Psi(x)-f}_{\cal F}. $$ It is easy to see that the definition of separating kernel has the following equivalent and natural analogue in the context of feature maps. \begin{defn}\label{Phiseparated} We say that a feature map $\Psi$ separates a subset $C\subset X$ if \begin{eqnarray*} d_{\cal F}(\Psi(x), {\cal F}_C) =0 \quad\Longleftrightarrow\quad x~\in ~C. \end{eqnarray*} \end{defn} The above definition is equivalent to Definition \ref{separated} since $d_{\cal F}(\Psi(x), {\cal F}_C)=\nor{\Psi(x)-Q_C\Psi(x)}_{\cal F}$, where $Q_C$ is the orthogonal projection onto ${\cal F}_C$. 
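For a finite subset $C=\set{c_1,\ldots,c_m}$, the distance $d_{\cal F}(\Psi(x),{\cal F}_C)$ can be computed from kernel evaluations alone, since $d_{\cal F}(\Psi(x),{\cal F}_C)^2=K(x,x)-\mathbf k_x^{\top}G^{+}\mathbf k_x$, where $G_{ij}=K(c_i,c_j)$ and $(\mathbf k_x)_i=K(c_i,x)$. The toy sketch below (our own illustration with a normalized Gaussian kernel, for which $K(x,x)=1$) shows the distance vanishing on $C$ and approaching $1$ far from it.

```python
import numpy as np

def gaussian_kernel(X, Y, s=1.0):
    # Normalized Gaussian kernel: K(x, x) = 1.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def dist2_to_span(x, C, s=1.0):
    # Squared distance of Psi(x) to span{Psi(c) : c in C} in the feature space:
    # K(x, x) - k_x^T G^+ k_x, i.e. 1 - F_C(x).
    G = gaussian_kernel(C, C, s)
    k = gaussian_kernel(C, x[None, :], s)[:, 0]
    return 1.0 - k @ np.linalg.pinv(G) @ k

C = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
on_C = dist2_to_span(C[0], C)                  # a point of C: distance ~ 0
far = dist2_to_span(np.array([5.0, 5.0]), C)   # far from C: distance ~ 1
```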
Then, according to Definition \ref{Phiseparated}, a point $x\in C$ if and only if $\nor{\Psi(x)-Q_C\Psi(x)}_{\cal F}^2=0$. Since $\Psi(x) = WK_x$ $\forall x\in X$ and $Q_C W = WP_C$, this is equivalent to $$ 0 = \nor{\Psi(x)-Q_C\Psi(x)}_{\cal F}^2 = \nor{K_x - P_C K_x}^2 = K(x,x) - F_C (x) . $$ Theorem \ref{prop_proj} then implies that Definitions \ref{separated} and \ref{Phiseparated} are equivalent. We thus see that the separating property has a clear geometric interpretation in the feature space: the set $\Psi(C)$ is the intersection of the closed subspace $\mathcal F_C$, i.e.~a linear manifold in $\cal F$, and $\Psi(X)$ -- see Figure~\ref{Fig:PhiSep}. In this interpretation, the estimator we propose for the support stems from the following observation: given a training set $\set{x_1,\ldots,x_n}$, we classify a new point $x$ as belonging to the estimator $X_n$ of $X_\rho$ if the distance of $\Psi(x)$ to the linear span of $\Psi(x_1),\ldots,\Psi(x_n)$ is sufficiently small. \begin{figure} \caption{The sets $X$ and the support $X_\rho$ are mapped into the feature space $\cal F$ by the feature map $\Psi$.} \label{Fig:PhiSep} \end{figure} \subsection{Inverse Problems and Empirical Risk Minimization}\label{IPERM} Here we suggest a simple interpretation of the estimator $F_n$ and stress the connection with the supervised setting. We regard the sampled data $x_1,\ldots,x_n$ as a training set of positive examples, so that each point $x_i\in X_\rho$ almost surely; the new datum is a point $x\in X$, at which we evaluate the estimator $F_n$. We label the examples according to the similarity function $K$ by setting \[ y_i(x)=K(x_i,x) \equiv ({\mathbf K}_x)_i \qquad i=1,\ldots,n .
\] If $K$ satisfies Assumption \ref{A}, then, since $K(x,x)=1$ and $K$ is $\dk$-continuous, the function $y_i$ is close to $1$ whenever $x_i$ is close to $x$. The interpolation problem \[ \text{find } f\in\hh \text{ such that } f(x_i)=y_i(x) \ \forall i\in\{1,\ldots, n\} \quad\Longleftrightarrow\quad S_n f = {\mathbf K}_x \] (where $S_n$ is defined in \eqref{eq:def_Sn}) is ill-posed. To restore well-posedness we can consider the corresponding least squares problem (empirical risk minimization problem) $$ \min_{f\in \hh}\frac 1 n \sum_{i=1}^n \abs{f(x_i)-y_i(x)}^2 \quad\Longleftrightarrow\quad \min_{f\in \hh} \frac 1 n \nor{S_n f-{\mathbf K}_x}^2_{{\R^n}} , $$ or in fact its regularized version $$ \min_{f\in \hh}\left(\frac 1 n \sum_{i=1}^n \abs{f(x_i)-y_i(x)}^2+\la\nor{f}^2\right) \quad\Longleftrightarrow\quad \min_{f\in \hh} \left(\frac 1 n \nor{S_n f-{\mathbf K}_x}^2_{{\R^n}}+\la\nor{f}^2\right) , $$ where $\la>0$ is the regularization parameter (Tikhonov regularization). It is known \cite{enhane} that the minimum of the above expression is achieved by $f\equiv f_n^\la$, with \[ f_n^\la= \frac{1}{n}\G\Big(\frac{S_n^{\top}S_n}{n}\Big)S^{\top}_n{\mathbf K}_x , \] where $\G$ is the function $\G(\sigma) = 1/(\sigma + \la)$.\\ More generally, Tikhonov regularization can be replaced by the spectral regularization induced by a different choice of the filter $\G$; the corresponding regularized solution $f_n^\la$ is still given by the previous equation, with $\G$ now the chosen filter function. Comparing with \eqref{eq:estimator2}, we see that $f^{\la_n}_n(x)=F_n(x)$. Equation \eqref{set_est} then has the following interpretation: a new point $x$ is estimated to be a positive example (that is, to belong to the support $X_\rho$) if and only if $f^{\la_n}_n(x) \geq 1-\tau$, where $\tau$ is a threshold parameter.
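The identity $f_n^{\la}(x)=F_n(x)$ can be made concrete: with labels $y_i(x)=K(x_i,x)$, the Tikhonov-regularized solution has, by the representer theorem, coefficients $\alpha=({\mathbf K}_n+n\la)^{-1}{\mathbf K}_x$, and evaluating it at $x$ returns ${\mathbf K}_x^\top\alpha=F_n(x)$. A minimal Python sketch (our own illustration; the Abel-type kernel and the parameter values are chosen only for this example):

```python
import numpy as np

rng = np.random.default_rng(0)
Xtr = rng.standard_normal((30, 2))        # training points x_1, ..., x_n
n = len(Xtr)

def K(a, b, width=1.0):
    # Abel (Laplacian) kernel, normalized: K(x, x) = 1
    return np.exp(-np.linalg.norm(a - b) / width)

Kn = np.array([[K(a, b) for b in Xtr] for a in Xtr])
lam = 1e-3                                # regularization parameter (illustrative)

x = np.array([0.3, -0.5])                 # new point
kx = np.array([K(xi, x) for xi in Xtr])   # labels y_i(x) = K(x_i, x)

# representer coefficients of the Tikhonov-regularized solution
alpha = np.linalg.solve(Kn + n * lam * np.eye(n), kx)

# alpha is a stationary point of (1/n)||Kn a - kx||^2 + lam * a^T Kn a
grad = (2 / n) * Kn @ (Kn @ alpha - kx) + 2 * lam * Kn @ alpha
assert np.allclose(grad, 0, atol=1e-8)

# evaluating f_n^lam at x gives F_n(x) = kx^T (Kn + n*lam)^{-1} kx
f_at_x = kx @ alpha
F_n_x = kx @ np.linalg.solve(Kn + n * lam * np.eye(n), kx)
assert np.isclose(f_at_x, F_n_x)
```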
The above discussion suggests several extensions and variations of our method, obtained by considering more general penalized empirical risk minimization functionals of the form $$ \min_{f\in \hh}\left(\frac 1 n \sum_{i=1}^n V(y_i(x), f(x_i))+\la R(f)\right) , $$ where: \begin{itemize} \item $V$ is a (regression) loss function measuring how well $f$ fits the labels, for example the logistic loss or a robust loss such as the one used in support vector machine regression. Our theoretical analysis does not carry over to other loss functions, and different mathematical tools from empirical process theory are probably needed; \item $R$ is a regularizer measuring the complexity of a function $f\in\hh$. For example, one can consider the case where the kernel is given by a dictionary of atoms $f_\gamma:X\to {\R}$, with $\gamma \in \Gamma$, such that $\sum_{\gamma\in \Gamma} \abs{f_\gamma(x)}^2=1$, so that we have $K(x,y)=\sum_{\gamma\in \Gamma} f_\gamma(x) {f_\gamma(y)}$ and, hence, ${f=\sum_{\gamma\in \Gamma} w_\gamma f_\gamma}$, with $w=(w_\gamma)_{\gamma \in \Gamma}\in \ell_2(\Gamma)$. In this setting, Tikhonov regularization corresponds to the choice $R(f)=\sum_{\gamma\in \Gamma} \abs{w_\gamma}^2$, but other norms, such as the $\ell_1$ norm $\sum_{\gamma\in \Gamma}|w_\gamma|$, can also be considered. \end{itemize} \section{Empirical Analysis}\label{sec:emp} In this section we describe some preliminary experiments aimed at testing the properties and the performance of the proposed methods on both simulated and real data. We only discuss the spectral algorithm induced by Tikhonov regularization, in order to contrast the general method with some current state-of-the-art algorithms. Note that while computations can be made more efficient in several ways, we consider a simple algorithmic protocol and leave a more refined computational study for future work.
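For concreteness, the Tikhonov-based support estimation protocol discussed next (train on points sampled from the support, then accept a new point $x$ when $F_n(x)\geq 1-\tau$) can be sketched in a few lines. The data (points on the unit circle) and all parameter values below are our own illustrative choices, not those of the experiments reported in this section:

```python
import numpy as np

rng = np.random.default_rng(0)

# training set: n points sampled from the support, here the unit circle
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
Xtr = np.stack([np.cos(theta), np.sin(theta)], axis=1)

width = 0.5
def gram(A, B):
    # Abel (Laplacian) kernel, normalized so that K(x, x) = 1
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / width)

Kn = gram(Xtr, Xtr)
lam, tau = 1e-4, 0.5                      # illustrative parameter values
W = np.linalg.inv(Kn + n * lam * np.eye(n))

def F_n(x):
    # F_n(x) = k_x^T (K_n + n*lam*I)^{-1} k_x
    kx = gram(np.atleast_2d(x), Xtr)[0]
    return kx @ W @ kx

# a training point passes the test F_n(x) >= 1 - tau,
# a point far from the circle fails it
assert F_n(Xtr[0]) >= 1 - tau
assert F_n(np.array([3.0, 3.0])) < 1 - tau
```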
Recall that Tikhonov regularization defines an estimator $F_n(x)={\mathbf K_x}^\ast ({\mathbf K}_n+n\lambda)^{-1}{\mathbf K}_x$, and a point $x$ is labeled as belonging to the support $X_\rho$ if $F_n(x)\geq 1-\tau$. The computational cost of the algorithm is, in the worst case, of order $n^3$ -- like standard regularized least squares -- for training, and of order $Nn^2$ if we have to predict the value of $F_n$ at $N$ test points. In practice, one has to choose a good value for the regularization parameter $\la$, and this requires computing multiple solutions, a so called {\em regularization path}. As noted in \cite{RifLip07}, if we form the inverse using the eigendecomposition of the kernel matrix, the price of computing the full regularization path is essentially the same as that of computing a single solution (note that the cost of the eigendecomposition of ${\mathbf K}_n$ is also of order $n^3$, though with a worse constant). This is the strategy that we consider in the following. In our experiments we considered two datasets: the MNIST\footnote{http://yann.lecun.com/exdb/mnist/} dataset and the CBCL\footnote{http://cbcl.mit.edu/} face database. For the digits we considered a reduced set consisting of a training set of 5000 images and a test set of 1000 images. In the first experiment we trained on $500$ images of the digit $3$ and tested on $200$ images of digits $3$ and $8$. Each experiment consists of training on one class and testing on two different classes, and was repeated for 20 trials over different training set choices. For all our experiments we considered the Abel kernel. Note that in this case the algorithm requires choosing three parameters: the regularization parameter $\la$, the kernel width $\sigma$ and the threshold $\tau$. In supervised learning, cross validation is typically used for parameter tuning, but it cannot be used in our setting since support estimation is an unsupervised problem. We therefore considered the following heuristics.
The kernel width is chosen as the median of the distribution of distances of the $k$-th nearest neighbor of each training set point, for $k=10$. Having fixed the kernel width, we choose the regularization parameter at the point of maximum curvature in the eigenvalue decay -- see Figure \ref{f:Eigen} -- the rationale being that beyond this value the eigenvalues are relatively small. \begin{figure} \caption{Decay of the eigenvalues of the kernel matrix ordered in decreasing magnitude, and the corresponding regularization parameter, in logarithmic scale.} \label{f:Eigen} \end{figure} \begin{figure} \caption{ROC curves for the different estimators in three different tasks: digit $9$ vs $4$ (Left), digit $1$ vs $7$ (Center), CBCL (Right).} \label{f:ROC} \end{figure} \begin{table}[t!] \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline &$3$ vs $8$&$8$ vs $3$&$1$ vs $7$&$9$ vs $4$&CBCL\\ \hline \textbf{Spectral}&$0.837\pm0.006$&$0.783\pm0.003$&$0.9921\pm 0.0005 $& $0.865\pm 0.002$&$ 0.868\pm 0.002$\\\hline \textbf{Parzen}&$0.784\pm0.007$&$0.766\pm0.003$&$0.9811\pm 0.0003 $& $0.724\pm 0.003$&$0.878\pm 0.002 $\\\hline \textbf{1CSVM}&$0.790\pm 0.006$&$0.764\pm0.003$&$0.9889\pm 0.0002$& $0.753\pm 0.004$ &$0.882\pm 0.002$\\ \hline \end{tabular} \end{center} \caption{Average and standard deviation of the AUC for the different estimators on the considered tasks.}\label{t:AUC} \end{table} For comparison we considered a Parzen window density estimator and one-class SVM (1CSVM) as implemented by \cite{SVM-KMToolbox}. For the Parzen window estimator we used the same kernel as the spectral algorithm, that is the Laplacian kernel, with the same width. Given a kernel width, an estimate of the probability density is computed and can be used to estimate the support by fixing a threshold $\tau'$. For the one-class SVM we considered the Gaussian kernel, so that we have to fix the kernel width and a regularization parameter $\nu$.
We fixed the kernel width to be the same as used by our estimator and set $\nu=0.9$. For the sake of comparison, we also considered a varying offset $\tau''$ for the one-class SVM. The performance is evaluated by computing the ROC curve (and the corresponding AUC value) for varying values of the thresholds $\tau,\tau',\tau''$. The ROC curves on the different tasks are reported (for one of the trials) in Figure \ref{f:ROC}. The mean and standard deviation of the AUC for the three methods are reported in Table \ref{t:AUC}. Similar experiments were repeated considering other pairs of digits, see Table \ref{t:AUC}. In the case of the CBCL dataset, too, we considered a reduced dataset, consisting of $472$ images for training and another $472$ for testing. On the different tests performed on the MNIST data the spectral algorithm always achieves results which are better -- and often substantially better -- than those of the other methods. On the CBCL dataset SVM provides the best result, but the spectral algorithm is still competitive. { \begin{rem} We remark that, although binary classification data sets are used in the experiments, the considered set-up is that of a one-class classification problem. Indeed, the training and tuning of the algorithms are performed using only examples of one class, and the other class is only considered for testing. Accordingly, the proposed methods are compared to state-of-the-art algorithms for one-class classification. \end{rem}} \appendix \section{Auxiliary Proofs} In this section we give the proofs of a few technical results needed in the paper. {\subsection{Normalizing a Kernel}\label{app:lemma}} {The next result shows that, if $K$ is a reproducing kernel which is nonzero on the diagonal, then it can be normalized, and its normalized version separates the same sets.
When $K(x,x) = 0$ for some $x\in X$, this result clearly still holds with the set $X$ replaced by $X\setminus X_0$ and with $K$ restricted to $(X\setminus X_0)\times (X\setminus X_0)$, where $X_0 = \set{x\in X\mid K(x,x) = 0}$. \begin{prop}\label{l:normalization} Assume that $K(x,x)>0$ for all $x\in X$. Then, the reproducing kernel $K'$ on $X$, given by \[ K'(x,y)=\dfrac{K(x,y)}{\sqrt{K(x,x)K(y,y)}} \qquad \forall x,y\in X , \] is normalized and separates the same sets as $K$. \end{prop} \begin{proof} Clearly $K'$ is a kernel of positive type. Denote by $\hh'$ the reproducing kernel Hilbert space with kernel $K'$, and define the feature map $\Psi: X\to \hh$, $\Psi(x)=K_x/\nor{K_x}$. It is simple to check that $\scal{\Psi(y)}{\Psi(x)}=K'(x,y)$ and $\Psi(X)^\perp=\{0\}$, so that the map $\Psi_*:\hh\to\hh'$ \[ (\Psi_*f)(x) =\scal{f}{\Psi(x)}\] is a unitary operator with $K'_x=\Psi_*( \Psi(x))$ \cite{cadeto06}. Clearly, for any $f\in \hh$ and $x\in X$ \[ \scal{\Psi_* f}{K'_x} = \scal{\Psi_* f}{\Psi_* \Psi(x)} = \frac{\scal{f}{K_x}}{\nor{K_x}}. \] The above equality shows that $\hh$ and $\hh'$ separate the same sets. \end{proof}} \subsection{Analytic Results}\label{maurer} In this section, we suppose that the kernel $K$ satisfies Assumption~\ref{A}, and endow the set $X$ with the metric $\dk$ induced by $K$. Measurability of a map taking values in a topological space will always be understood with respect to the Borel $\sigma$-algebra of that space. The next simple lemma will be used frequently. \begin{lem}\label{lem:app1} For all $k=1,2$, the map $$ \xi:X \to \mathcal S_k , \qquad \xi(x)=K_x\otimes K_x $$ is continuous and measurable. Moreover, if $Z_i : \Omega \to \mathcal S_k$ is given by $$ Z_i (\omega) = K_{x_i} \otimes K_{x_i} \qquad \omega = (x_j)_{j\geq 1} , $$ then $Z_i$ is measurable for all $i\geq 1$. \end{lem} \begin{proof} The map $x \mapsto K_x$ is continuous from $X$ into $\hh$ by item \ref{P.1.i}) in Proposition \ref{metrica}.
Since $\xi(x) = K_x\otimes K_x$, continuity of $\xi$ follows at once. By item \ref{P.1.iv}) in Proposition \ref{metrica}, $\xi$ is then a measurable map, hence so is $Z_i$. \end{proof} We recall some basic properties of the operator $T$ defined by the kernel. The next result is known (see for example \cite{dedero09}), but we report a short proof for completeness. \begin{prop}\label{Tprop} The $\mathcal S_1$-valued map $\xi$ defined in Lemma~\ref{lem:app1} is Bochner-integrable with respect to $\rho$, and its integral \[ T = \int_{X} K_x\otimes K_x d\rho(x) \] is a positive trace class operator on $\hh$, with $\nor{T}_{\mathcal S_1} = \tr{T} = 1$. \end{prop} \begin{proof} The map $\xi$ is bounded, because $\nor{K_x\otimes K_x}_{\mathcal S_1} = \tr{K_x\otimes K_x} = K(x,x) = 1$, and measurable by Lemma \ref{lem:app1}. Therefore, $\xi$ is a Bochner-integrable $\mathcal S_1$-valued map, and its integral $T$ is a trace class operator. As $\xi(x)$ is a positive operator for all $x$, so is $T$. In particular, $\nor{T}_{\mathcal S_1} = \tr{T}$, and $\tr{T} = \int_X \tr{K_x\otimes K_x} d\rho(x)=1$. \end{proof} Now, we come to the proof of Proposition \ref{prop:misurabilita}. We will split it into the proofs of Propositions \ref{prop:misurabilita1} and \ref{prop:misurabilita2} below. \begin{lem}\label{lem:app2} For all $k=1,2$, the map $$ \check{T}_n : X^n \to {\mathcal S}_k , \qquad \check{T}_n (x_1,\ldots,x_n) {=} \frac{1}{n} \sum_{i=1}^n K_{x_i} \otimes K_{x_i} $$ is continuous and measurable. \end{lem} \begin{proof} This is evident from Lemma \ref{lem:app1}. \end{proof} \begin{prop}\label{prop:misurabilita1} For all $n\geq 1$, the map $T_n$ defined in \eqref{eq:Tn} is a $\mathcal S_k$-valued estimator for $k=1,2$. \end{prop} \begin{proof} We have $$ T_n (\omega) = \check{T}_n (x_1,\ldots,x_n) \qquad \omega = (x_i)_{i\geq 1} , $$ hence $T_n$ is measurable by Lemma \ref{lem:app2}.
\end{proof} For the next proposition we recall that the topology of uniform convergence on compact subsets of $X$ is generated by the following basis of open sets $U_{f,\eps,C}\subset C(X)$ $$ U_{f,\eps,C} = \set{g\in C(X) \mid \sup_{x\in C} \abs{f(x) - g(x)} < \eps} \qquad f\in C(X) ,\, \eps > 0 , \, C\subset X \mbox{ compact} . $$ \begin{prop}\label{prop:misurabilita2} Suppose $X$ is locally compact. Let $(\RR)_{\lambda>0}$ be a family of functions $\RR:[0,1]\to [0,1]$ such that each $\RR$ is upper semicontinuous. Then, for any sequence of positive numbers $(\la_n)_{n\geq 1}$ and all $n\geq 1$, the map $F_n$ defined in \eqref{estimator} is a $C(X)$-valued estimator, where $C(X)$ is the space of continuous functions on $X$ with the topology of uniform convergence on compact subsets. \end{prop} \begin{proof} Throughout the proof, $n\geq 1$ will be fixed. Let $(\pphi_k)_{k\geq 1}$ be a decreasing sequence of continuous functions $\pphi_k : [0,1] \to [0,1]$ such that $\pphi_k (\sigma) \downarrow \RRn(\sigma)$ for all $\sigma\in [0,1]$ (such a sequence exists by (12.7.8) of \cite{Dieu2}). Then, by Lemma \ref{lem:app2} and continuity of the functional calculus (see e.g.~Problem 126 in \cite{HalmoProb}), for all $k\geq 1$ the map $$ \pphi_k (\check{T}_n) : X^n \to {\mathcal S}_0 , \qquad [\pphi_k (\check{T}_n)](x_1,\ldots,x_n) {=} \pphi_k (\check{T}_n (x_1,\ldots,x_n)) $$ is continuous from $X^n$ into the Banach space ${\mathcal S}_0$ of bounded operators on $\hh$ with the uniform operator norm. Thus, for all $x\in X$, the real function $(x_1,\ldots,x_n) \mapsto \scal{[\pphi_k (\check{T}_n)](x_1,\ldots,x_n)\, K_x}{K_x}$ is continuous on $X^n$, hence measurable by item \ref{P.1.iv}) of Proposition \ref{metrica}.
By spectral calculus and dominated convergence theorem, for all $\omega = (x_i)_{i\geq 1} $ \begin{align*} \scal{\RRn (T_n (\omega)) \, K_x}{K_x} & = \scal{\RRn (\check{T}_n (x_1,\ldots,x_n)) \, K_x}{K_x} = \lim_{k\to\infty} \scal{[\pphi_k (\check{T}_n)](x_1,\ldots,x_n)\, K_x}{K_x} \end{align*} It then follows that, for each $x\in X$, the real function $\omega \mapsto \scal{\RRn (T_n(\omega)) K_x}{K_x}$ is measurable on $\Omega$, being the pointwise limit of measurable functions.\\ We now prove that the map $F_n : \omega \mapsto (x\mapsto \scal{\RRn (T_n(\omega)) K_x}{K_x})$ is measurable from $\Omega$ into the space $C(X)$. {By M2, p.~115 in \cite{la93}, this is equivalent to the measurability of the subsets $F_n^{-1} (U)\subset\Omega$ for all open sets $U\subset C(X)$. Since $X$ is a locally compact separable metric space, the topology of uniform convergence on compact subsets is a separable metric topology on $C(X)$ by (12.14.6.2) in \cite{Dieu2}. By separability of $C(X)$, each open set $U\subset C(X)$ then is the denumerable union of sets of the neighborhood basis $\{U_{f,\eps,C} \mid f\in C(X) ,\, \eps > 0 , \, C\subset X \mbox{ compact}\}$. Hence, it is enough to show that $F_n^{-1} (U_{f,\eps,C})$ is measurable for all $f$, $\eps$ and $C$. We have} $$ F_n^{-1} (U_{f,\eps,C}) = \set{{\omega\in\Omega}\mid \sup_{x\in C} \abs{f(x) - \scal{\RRn (T_n ({\omega})) \, K_x}{K_x}} < \eps} . $$ By separability of $X$, there exists a countable set $C_0 \subset C$ such that $\overline{C_0} = C$. A continuity argument then shows that \begin{align*} F_n^{-1} (U_{f,\eps,C}) & = \bigcap_{k\geq 1} \set{{\omega\in\Omega}\mid \sup_{x\in C} \abs{f(x) - \scal{\RRn (T_n ({\omega})) \, K_x}{K_x}} \leq \eps-\frac 1k} \\ & = \bigcap_{k\geq 1} \bigcap_{x\in C_0} \set{{\omega\in\Omega}\mid \abs{f(x) - \scal{\RRn (T_n ({\omega})) \, K_x}{K_x}} \leq \eps-\frac 1k} . 
\end{align*} Since each set $\set{{\omega\in\Omega}\mid \abs{f(x) - \scal{\RRn (T_n ({\omega})) \, K_x}{K_x}} \leq \eps-1/k}$ is measurable in $\Omega$, measurability of the countable intersection $F_n^{-1} (U_{f,\eps,C})$ then follows. \end{proof} {We conclude this section with the proof of measurability of the threshold parameters $(\tau_n)_{n\geq 1}$ defined in \eqref{eq:deftaun}. \begin{prop}\label{prop:misurtaun} Suppose $X$ is locally compact. Let $(\RR)_{\lambda>0}$ be a family of functions $\RR:[0,1]\to [0,1]$ such that each $\RR$ is upper semicontinuous. Then, for any sequence of positive numbers $(\la_n)_{n\geq 1}$ and all $n\geq 1$, the map $\tau_n$ defined in \eqref{eq:deftaun} is a $\R$-valued estimator. \end{prop} \begin{proof} As $F_n$ depends only on $(x_1,\ldots,x_n)$, it is clear that so does $\tau_n$. It remains to show measurability of $\tau_n$.\\ Given $i\geq 1$, the map $\omega\mapsto x_i$ is measurable by definition of the product $\sigma$-algebra $\A{\Omega}$ on $\Omega$. Moreover, for any $n\geq 1$, the map $F_n$ is measurable from $\Omega$ into $C(X)$ by Proposition \ref{prop:misurabilita2}. Therefore, the map $\Theta_1 : \Omega \to C(X)\times X$, with $\Theta_1 (\omega) = (F_n (\omega) \, , \, x_i)$, is measurable when $C(X)\times X$ is endowed with the product $\sigma$-algebra of the Borel $\sigma$-algebras of $C(X)$ and $X$, respectively.\\ Since $X$ is locally compact, the map $\Theta_2 : C(X)\times X \to \R$, with $\Theta_2(f,x) = f(x)$, is jointly continuous by \cite[Theorem 5, p.~223]{Kelley} and the discussion following it. Thus, $\Theta_2$ is measurable with respect to the Borel $\sigma$-algebras of $C(X)\times X$ and $\R$.\\ The metric spaces $X$ and $C(X)$ are both separable (for $C(X)$, this is (12.14.6.2) in \cite{Dieu2}). By \cite[Proposition 4.1.7]{Dudley}, the product $\sigma$-algebra of the Borel $\sigma$-algebras of $C(X)$ and $X$ then coincides with the Borel $\sigma$-algebra of $C(X)\times X$. 
Thus, the composition map $\Phi_i = \Theta_2 \circ \Theta_1$, given by $\Phi_i(\omega) = [F_n(\omega)](x_i)$, is measurable.\\ Finally, the map $m:(t_1,\ldots,t_n)\mapsto \min_{1\leq i\leq n} t_i$ is continuous from $\R^n$ into $\R$, so that $\tau_n = 1-m(\Phi_1,\Phi_2,\ldots,\Phi_n)$ is measurable. \end{proof}} \subsection{{A Useful Inequality}}\label{maurer2} The following proof of inequality \eqref{ineMaurer} below is due to A.~Maurer\footnote{http://www.andreas-maurer.eu}. \begin{lem}\label{lem:ineMaurer} Suppose $S$ and $T$ are two symmetric Hilbert-Schmidt operators on $\hh$ with spectrum contained in the interval $[a,b]$, and let $(\sigma_j)_{j\in J}$ and $(\tau_k)_{k\in K}$ be the eigenvalues of $S$ and $T$, respectively. Given a function $r:[a,b]\to\R$, if the constant $$ L = \sup_{j\in J, k\in K} \abs{\frac{r(\sigma_j) - r(\tau_k)}{\sigma_j - \tau_k}} \qquad (\mbox{with } 0/0 \equiv 0) $$ is finite, then \begin{equation}\label{ineMaurer_bis} \nor{r(S)-r(T)}_{\mathcal S_2}\leq L \nor{S-T}_{\mathcal S_2}. \end{equation} In particular, if $r$ is a Lipschitz function with Lipschitz constant $L_r$, then \begin{equation}\label{ineMaurer} \nor{r(S)-r(T)}_{\mathcal S_2}\leq L_r \nor{S-T}_{\mathcal S_2}. \end{equation} \end{lem} \begin{proof} Let $(f_j)_{j\in J}$ and $(g_k)_{k\in K}$ be the orthonormal bases of eigenvectors of $S$ and $T$ corresponding to the eigenvalues $(\sigma_j)_{j\in J}$ and $(\tau_k)_{k\in K}$, respectively, which we list repeated according to their multiplicities. We have \begin{align*} \nor{r(S)-r(T)}_{\mathcal S_2}^2 & = \sum_{j,k} \abs{\scal{ (r(S)-r(T)) f_j}{g_k}}^2 = \sum_{j,k} \left(r(\sigma_j)-r(\tau_k)\right)^2 \abs{\scal{f_j}{g_k}}^2 \\ & \leq L^2 \sum_{j,k} \left(\sigma_j-\tau_k\right)^2 \abs{\scal{f_j}{g_k}}^2= L^2 \sum_{j,k} \abs{\scal{ (S-T) f_j}{g_k}}^2 \\ & = L^2 \nor{S-T}_{\mathcal S_2}^2 , \end{align*} which is \eqref{ineMaurer_bis}.
\end{proof} \subsection{Concentration of Measure Results}\label{sec:conc} We will use the following standard concentration inequality for Hilbert space random variables (see Theorem 8.6 in \cite{Pin94}, and \cite{pin99}). Let $\vv$ be a separable Hilbert space and $(\Omega,\A{\Omega},{\mathbb P})$ a probability space. Suppose that $Y_1, Y_2, \ldots$ is a sequence of independent $\vv$-valued random variables $Y_i: \Omega \to \vv$. If $\EE{\nor{Y_i}_\vv^m}\leq (1/2)m!B^2L^{m-2}$ $\forall m\geq 2$, then, for all $n\geq 1$ and $\eps >0$, \begin{equation}\label{ineq_prob} \PP{\nor{\frac{1}{n}\sum_{i=1}^n Y_i}_\vv > \eps}\leq 2e^{-\frac{n\eps^2}{B^2+L\eps+B\sqrt{B^2+2L\eps}}}. \end{equation} We will need in particular the next two straightforward consequences of this inequality. \begin{lem}\label{conc_lemma} If $Z_1, Z_2, \ldots$ is a sequence of i.i.d. $\vv$-valued random variables such that $\nor{Z_i}_\vv\leq M$ almost surely, $\EE{Z_i} =\mu$ and $\EE{\nor{Z_i}_\vv^2} \leq \sigma^2$ for all $i$, then, for all $n\geq 1$ and $\delta > 0$, \begin{equation}\label{bernie} \nor{\frac1 n \sum_{i=1}^n Z_i - \mu}_\vv \leq \frac{M\delta}{n}+ \sqrt{ \frac{2\sigma^2\delta}{n}} \end{equation} with probability at least $1-2 e^{-\delta}$. \end{lem} \begin{proof} Let $Y_i=Z_i-\mu$. Then $\nor{Y_i}_\vv\leq 2M$ and $\EE{\nor{Y_i}_\vv^2} \leq \EE{\nor{Z_i}_\vv^2}\leq\sigma^2$. Moreover, for all $i$ and $m\geq 2$, $\EE{\nor{Y_i}_\vv^m}\leq\sigma^2 (2M)^{m-2}\leq (1/2)m! \sigma^2 M^{m-2}$, where the last inequality follows since $2^{m-2}\leq m!/2$. Then, \[ \PP{\nor{\frac{1}{n}\sum_{i=1}^nZ_i-\mu}_\vv > \eps} = \PP{\nor{\frac{1}{n}\sum_{i=1}^n Y_i}_\vv > \eps} \leq 2e^{-\frac{n\eps^2}{\sigma^2+M\eps+\sigma\sqrt{\sigma^2+2M\eps}}}=2 e^{-\frac{\sigma^2n}{M^2}g(\frac{M\eps}{\sigma^2})}=2e^{-\delta},\] where $g(t)=t^2/(1+t+\sqrt{1+2t})$.
\\ Since $g^{-1}(t)=t+\sqrt{2t}$, by solving the equation $(\sigma^2 n /M^2) g(M\eps/\sigma^2)=\delta$ we have \[ \eps= \frac{\sigma^2}{M} \left(\frac{M^2\delta}{n\sigma^2}+ \sqrt{ \frac{2M^2\delta}{n\sigma^2}}\right)= \frac{M\delta}{n}+ \sqrt{ \frac{2\sigma^2\delta}{n}}. \] \end{proof} The above result and Borel-Cantelli lemma imply that $$ \lim_{n\to \infty} \nor{\frac1 n \sum_{i=1}^n Z_i - \mu}_\vv =0 $$ almost surely. In the paper we actually need a slightly stronger result which is given in the following lemma. \begin{lem}\label{cor_conc} If $Z_1, Z_2, \ldots$ is a sequence of {i.i.d.} $\vv$-valued random variables, such that $\nor{Z_i}_\vv\leq M$ almost surely, then we have \begin{equation*}\label{AS} \lim_{n\to \infty} \frac{\sqrt{n}}{\log n} \nor{\frac1 n \sum_{i=1}^n Z_i - \mu}_\vv = 0 \end{equation*} almost surely. \end{lem} \begin{proof} We continue with the notations in the proof of Lemma \ref{conc_lemma}. By \eqref{ineq_prob}, for all $\eps >0$ we have \[ \PP{\frac{\sqrt{n}}{\log n} \nor{\frac1 n \sum_{i=1}^n Z_i - \mu}_\vv > \eps} = \PP{\nor{\frac{1}{n}\sum_{i=1}^n Y_i}_\vv > \eps\,\frac{\log n}{\sqrt{n}}}\leq 2e^{-A(n,\eps)}=2\left(\frac{1}{n}\right)^{\frac{A(n,\eps)}{\log n}}, \] with \[ A(n,\eps) {=} \frac{\eps^2\log^2 n}{\sigma^2+M\eps\frac{\log n}{\sqrt{n}}+\sigma\sqrt{\sigma^2+2M\eps\frac{\log n}{\sqrt{n}}}}. \] It follows that \[ \sum_{n\geq 1} \PP{\frac{\sqrt{n}}{\log n} \nor{\frac1 n \sum_{i=1}^n Z_i - \mu}_\vv > \eps} \leq 2\,\sum_{n\geq 1}\left(\frac{1}{n}\right)^{\frac{A(n,\eps)}{\log n}}. \] For all $\eps>0$, $\lim_{n\to\infty} A(n,\eps)/\log n = +\infty$, so that the series $\sum_{n\geq 1} n^{-A(n,\eps)/\log n}$ is convergent, and Borel-Cantelli lemma gives the result. \end{proof} The following inequality is given in \cite{cade07} and we report its proof for completeness. 
\begin{lem}\label{conce2} If Assumption~\ref{A} holds true, then for all $\delta > 0$ we have \begin{equation*}\label{degree_conc} \nor{(T+\la)^{-1}(T-T_n)}_{\mathcal S_2}\leq \frac{\delta}{n\la}+\sqrt{\frac{2\delta{\cal N}(\la)}{n\la}} \end{equation*} with probability at least $1-2e^{-\delta}$. \end{lem} \begin{proof} Let $(\Omega,\A{\Omega},{\mathbb P})$ be the probability space defined at the beginning of Section \ref{sec:misurismi}. For all $i\geq 1$ we define the random variable $Y_i:\Omega\to \mathcal S_2 $ as \[Y_i(\omega)=(T+\la)^{-1} (K_{x_i}\otimes K_{x_i})\qquad \omega=(x_j)_{j\geq 1},\] which is measurable by Lemma \ref{lem:app1}. Then, we have $\nor{Y_i}_{\mathcal S_2}\leq 1 /\la$ almost surely, $\EE{Y_i}=(T+\la)^{-1}T$, $(1/n) \sum_{i=1}^n Y_i=(T+\la)^{-1}T_n$ and \begin{align*} \EE{\nor{Y_i}^2_{\mathcal S_2}} & = \int_\Omega \tr{Y_i(\omega)^\ast Y_i(\omega)} d\PP{\omega} = \int_X \tr{(T+\la)^{-2}(K_x\otimes K_x)} d\rho(x) \\ &=\tr{(T+\la)^{-2}T}\leq \nor{(T+\la)^{-1}}_{\infty}\tr{(T+\la)^{-1}T} \leq \frac{{\cal N}(\la)}{\la}, \end{align*} where we have bounded the operator norm $\nor{(T+\la)^{-1}}_{\infty}$ by $1/\la$. The result follows by applying Lemma \ref{conc_lemma}. \end{proof} \end{document}
\begin{document} \title{Degenerate Affine Hecke-Clifford Algebras and Type $Q$ Lie Superalgebras} \author{David Hill} \address{Department of Mathematics \\ University of California, Berkeley \\ Berkeley, CA 94720-3840} \email{[email protected]} \author{Jonathan R. Kujawa} \address{Department of Mathematics \\ University of Oklahoma \\ Norman, OK 73019} \email{[email protected]} \author{Joshua Sussan} \address{Department of Mathematics \\ University of California, Berkeley \\ Berkeley, CA 94720-3840} \email{[email protected]} \thanks{Research of the second author was partially supported by NSF grant DMS-0734226. Research of the first and third author was partially supported by NSF EMSW21-RTG grant DMS-0354321} \date{\today} \subjclass[2000]{Primary 20C08, 20C25; Secondary 17B60, 17B20, 17B37} \begin{abstract} We construct the finite dimensional simple integral modules for the (degenerate) affine Hecke-Clifford algebra (AHCA), ${\mathcal{A}}Se(d)$. Our construction includes an analogue of Zelevinsky's segment representations, a complete combinatorial description of the simple calibrated ${\mathcal{A}}Se(d)$-modules, and a classification of the simple integral ${\mathcal{A}}Se(d)$-modules. Our main tool is an analogue of the Arakawa-Suzuki functor for the Lie superalgebra ${\mathfrak{q}}(n)$. \end{abstract} \maketitle \section{Introduction}\label{S:Intro} \subsection{} Throughout this paper, we will work over the ground field ${\mathbb{C}}$. As is well known, the symmetric group, $S_d$, has a non-trivial \emph{central extension}: \[ \xymatrix{1\ar[r]&{\mathbb{Z}}/2{\mathbb{Z}}\ar[r]&\widehat{S}_d\ar[r]&S_d\ar[r]&1}.
\] The double cover $\widehat{S}_d$ is generated by elements $\zeta,\hat{s}_1,\ldots,\hat{s}_{d-1}$, where $\zeta$ is central, $\zeta^2=1$, and the $\hat{s}_i$ satisfy the relations $\hat{s}_i\hat{s}_{i+1}\hat{s}_i=\hat{s}_{i+1}\hat{s}_i\hat{s}_{i+1}$ and $\hat{s}_j\hat{s}_i=\zeta\hat{s}_i\hat{s}_j$ for admissible $i$ and $j$ satisfying $|i-j|>1$. The \emph{projective} or \emph{spin} representations of $S_d$ are the linear representations of $\widehat{S}_d$ which factor through ${\mathbb{C}}\widehat{S}_d/(\zeta+1)$. This paper is a study of some structures arising from the projective representation theory of symmetric groups. The double cover $\widehat{S}_d$ suffers a defect: it is difficult to define parabolic induction, see \cite[Section 4]{stem}. Since the inductive approach to the study of linear representations of the symmetric group is so effective, it is preferable to study the \emph{Sergeev algebra} ${\mathcal{S}}(d)$ introduced in \cite{s,n}, which provides a natural fix to this problem. As a vector space, ${\mathcal{S}}(d)={\mathbb{C}}l(d)\otimes{\mathbb{C}} S_d$, where ${\mathbb{C}}l(d)$ is the $2^d$-dimensional Clifford algebra with generators $c_1,\ldots,c_d$ subject to the relations $c_i^2=-1$ and $c_ic_j=-c_jc_i$ for $i\neq j$, and ${\mathbb{C}} S_d$ is the group algebra of $S_d$. Let $s_i=(i,i+1)\in S_d$ be the $i$th basic transposition, and identify ${\mathbb{C}}l(d)$ and ${\mathbb{C}} S_d$ with the subspaces ${\mathbb{C}}l(d)\otimes 1$ and $1\otimes{\mathbb{C}} S_d$ respectively. Multiplication is defined so that ${\mathbb{C}}l(d)$ and ${\mathbb{C}} S_d$ are subalgebras, and $wc_i=c_{w(i)}w$ for all $1\leq i\leq d$ and $w\in S_d$.
The Sergeev algebra admits a natural definition of parabolic induction, and the projective representation theory of the symmetric group can be recovered from that of ${\mathcal{S}}(d)$ \cite[Theorem 3.4]{bk1}. Additionally, the Sergeev algebra is a \emph{superalgebra}, and plays the role of the symmetric group for a super version of Schur-Weyl duality known as Sergeev duality, in honor of A.~N.~Sergeev, who extended the classical theorem of Schur and Weyl \cite{s}. If $V={\mathbb{C}}^{n|n}$ is the standard representation of the Lie superalgebra ${\mathfrak{q}}(n)$, then both ${\mathcal{S}}(d)$ and ${\mathfrak{q}}(n)$ act on the tensor product $V^{\otimes d}$ and each algebra is the commutant algebra of the other. In particular, there exists an isomorphism of superalgebras \[ {\mathcal{S}}(d)\rightarrow\operatorname{End}_{{\mathfrak{q}}(n)}(V^{\otimes d}). \] The algebra ${\mathcal{S}}(d)$ admits an affinization, ${\mathcal{A}}Se(d)$, called the (degenerate) affine Hecke-Clifford algebra (AHCA). The affine Hecke-Clifford algebra was introduced by Nazarov in \cite{n} and studied in \cite{n,bk2,kl,w}. As a vector space, ${\mathcal{A}}Se(d)={\mathcal{P}}_d[x]\otimes{\mathcal{S}}(d)$, where ${\mathcal{P}}_d[x]={\mathbb{C}}[x_1,\ldots,x_d]$. We identify ${\mathcal{P}}_d[x]$ and ${\mathcal{S}}(d)$ with the subspaces ${\mathcal{P}}_d[x]\otimes 1$ and $1\otimes{\mathcal{S}}(d)$. Multiplication is defined so that these are subalgebras, $c_ix_j=x_jc_i$ if $j\neq i$, $c_ix_i=-x_ic_i$, $s_ix_j=x_js_i$ if $j\neq i,i+1$, and \[ s_ix_i=x_{i+1}s_i-1+c_ic_{i+1}. \] In addition to ${\mathcal{S}}(d)$ being a subalgebra of ${\mathcal{A}}Se(d)$, there also exists a natural surjection ${\mathcal{A}}Se(d)\twoheadrightarrow{\mathcal{S}}(d)$ obtained by mapping $x_1\mapsto 0$, $c_i\mapsto c_i$ and $s_i\mapsto s_i$. Therefore, the representation theory of the AHCA contains that of the Sergeev algebra.
Surprisingly little is explicitly known about the representation theory of $\mathcal{S}^{\mathrm{aff}}(d)$, in contrast with its linear counterpart, the \emph{(degenerate) affine Hecke algebra} $\mathcal{H}^{\mathrm{aff}}(d)$. The most significant contribution to the projective theory is from \cite{bk2,kl}, which describe the Grothendieck group of the full subcategory of \emph{integral} $\mathcal{S}^{\mathrm{aff}}(d)$-modules in terms of the crystal graph associated to a maximal nilpotent subalgebra of $\mathfrak{b}_\infty$ (or, more generally, of $A_{2\ell}^{(2)}$ if working over a field of odd prime characteristic $2\ell-1$). We will return to this important topic later on. The algebra $\mathcal{H}^{\mathrm{aff}}(d)$ has been studied for many years. Of particular interest are those modules for $\mathcal{H}^{\mathrm{aff}}(d)$ which admit a generalized weight space decomposition with respect to the polynomial generators. It is known that among these modules it is enough to consider those for which the generalized eigenvalues of the polynomial generators are integers, cf.\ \cite[\S7.1]{kl}. These are known as \emph{integral modules}. As discovered in \cite{n}, the appropriate analogues of integral modules for $\mathcal{S}^{\mathrm{aff}}(d)$ are those which admit a generalized weight space decomposition with respect to the $x_i^2$, where the generalized eigenvalues of the $x_i^2$ are of the form $q(a):=a(a+1)$, $a\in\mathbb{Z}$. The finite dimensional, irreducible, integral modules for $\mathcal{H}^{\mathrm{aff}}(d)$ were classified by Zelevinsky in \cite{z} via combinatorial objects known as multisegments. A segment is an interval $[a,b]\subset\mathbb{Z}$. To each segment $[a,b]$ with $d=b-a+1$, Zelevinsky associates a $1$-dimensional $\mathcal{H}^{\mathrm{aff}}(d)$-module $\mathbb{C}_{[a,b]}$, defined from the trivial representation of $\mathbb{C} S_d$ by letting $x_1$ act by the scalar $a$.
A multisegment may be regarded as a pair of compositions $(\beta,\alpha)=((b_1,\ldots,b_n),(a_1,\ldots,a_n))\in\mathbb{Z}^n\times\mathbb{Z}^n$, with $d_i=b_i-a_i\geq 0$. If $d=d_1+\cdots+d_n$, Zelevinsky associates to the multisegment $(\beta,\alpha)$ a \emph{standard cyclic} $\mathcal{H}^{\mathrm{aff}}(d)$-module
\[
\mathcal{M}(\beta,\alpha)
=\operatorname{Ind}_{\mathcal{H}^{\mathrm{aff}}(d_1)\otimes\cdots\otimes\mathcal{H}^{\mathrm{aff}}(d_n)}^{\mathcal{H}^{\mathrm{aff}}(d)}
\mathbb{C}_{[a_1,b_1-1]}\boxtimes\cdots\boxtimes\mathbb{C}_{[a_n,b_n-1]}.
\]
To explain the classification, let $P=\mathbb{Z}^n$ be the weight lattice associated to $\mathfrak{gl}_n(\mathbb{C})$, $P^+$ the dominant weights, and $\rho=(n-1,\ldots,1,0)$. Additionally, define the weights
\[
P_{\mathrm{poly}}^+(d)=\{\mu\in\mathbb{Z}^n_{\geq0}\mid \mu_1+\cdots+\mu_n=d\}
\quad\text{and}\quad
P^+[\lambda]=\{\mu\in P\mid \mu_i\geq\mu_j\mbox{ whenever }\lambda_i=\lambda_j\}.
\]
Given $\lambda\in P^+$, let
\begin{align}\label{E:Bsubd}
\mathcal{B}_d[\lambda]=\{\mu\in P^+[\lambda]\mid \lambda-\mu\in P_{\mathrm{poly}}^+(d)\},
\end{align}
and
\[
\mathcal{A}_d=\{(\lambda,\mu)\mid \lambda\in P^+,\mbox{ and }\mu\in\mathcal{B}_d[\lambda+\rho]\}.
\]
Then, the set $\{\mathcal{L}(\beta,\alpha)\mid (\beta,\alpha)\in\mathcal{A}_d\}$, where $\mathcal{L}(\beta,\alpha)$ denotes the unique simple quotient of $\mathcal{M}(\beta,\alpha)$, is a complete list of irreducible integral $\mathcal{H}^{\mathrm{aff}}(d)$-modules. In the case of $\mathcal{S}^{\mathrm{aff}}(d)$, the situation is more subtle. To describe this, fix a segment $[a,b]$.
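To make the set $\mathcal{B}_d[\lambda]$ concrete, here is a small example of our own (not drawn from the sources cited above). Take $n=2$, $d=2$, and $\lambda=(2,0)\in P^+$. Since $\lambda_1\neq\lambda_2$, the condition defining $P^+[\lambda]$ is vacuous, while $\lambda-\mu\in P_{\mathrm{poly}}^+(2)$ forces $2-\mu_1\geq0$, $-\mu_2\geq0$, and $(2-\mu_1)+(-\mu_2)=2$, i.e.\ $\mu_2=-\mu_1$ with $0\leq\mu_1\leq 2$. Hence
\[
\mathcal{B}_2[(2,0)]=\{(0,0),\,(1,-1),\,(2,-2)\}.
\]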
The obvious analogue of the trivial representation of $\mathbb{C} S_d$ is the $2^d$-dimensional basic spin representation $\mathbb{C}l_d=\mathbb{C}l(d).1$ of $\mathcal{S}(d)$. If $a=0$, the action of $\mathcal{S}^{\mathrm{aff}}(d)$ factors through $\mathcal{S}(d)$ and it can be checked that $\mathbb{C}l_d$ is the desired segment representation. If $a\neq 0$, it is not immediately obvious how to proceed. Inspiration comes from a \emph{rank 1} application of the functor described below. We define a module structure on the \emph{double} of $\mathbb{C}l_d$: $\hat{\Phi}_{[a,b]}=\Phi_a\otimes\mathbb{C}l_d$, where $\Phi_a$ is a $2$-dimensional Clifford algebra. The module $\hat{\Phi}_{[a,b]}$ is not irreducible, but decomposes as a direct sum of irreducibles $\Phi_{[a,b]}^+\oplus\Phi_{[a,b]}^-$, where $\Phi_{[a,b]}^+$ and $\Phi_{[a,b]}^-$ are isomorphic via an \emph{odd} isomorphism. Let $\Phi_{[a,b]}$ denote one of these simple summands. Now, given a multisegment $(\lambda,\mu)$, with $\lambda_i-\mu_i=d_i$ and $d=d_1+\cdots+d_n$, we define the standard cyclic module
\[
\mathcal{M}(\lambda,\mu)=\operatorname{Ind}_{\mathcal{S}^{\mathrm{aff}}(d_1)\otimes\cdots\otimes\mathcal{S}^{\mathrm{aff}}(d_n)}^{\mathcal{S}^{\mathrm{aff}}(d)}
\Phi_{[\mu_1,\lambda_1-1]}\circledast\cdots\circledast\Phi_{[\mu_n,\lambda_n-1]},
\]
where $\circledast$ is an analogue of the outer tensor product adapted for superalgebras, see section~\ref{S:Prelim} below. A weight $\lambda\in P$ is called typical if $\lambda_i+\lambda_j\neq0$ for all $i\neq j$.
Let
\[
P^{++}=\{\lambda\in P^+\mid \lambda_{1}\geq \dotsb \geq \lambda_{n}, \text{ and } \lambda_i+\lambda_j\neq 0\mbox{ for all }i\neq j\}
\]
be the set of dominant typical weights. We prove
\begin{thme}
Assume that $\lambda\in P^{++}$ and $\mu\in\mathcal{B}_d[\lambda]$. Then, $\mathcal{M}(\lambda,\mu)$ has a unique simple quotient, denoted $\mathcal{L}(\lambda,\mu)$.
\end{thme}
In the special case where the multisegment $(\lambda,\mu)$ corresponds to a skew shape (i.e.\ $\lambda,\mu \in P^+$), the associated $\mathcal{H}^{\mathrm{aff}}(d)$-modules are called calibrated. The calibrated representations may also be characterized as those modules on which the polynomial generators act semisimply, and were originally classified by Cherednik in \cite{ch0}. In \cite{ram}, Ram gives a combinatorial description of the calibrated representations of $\mathcal{H}^{\mathrm{aff}}(d)$ in terms of skew shape tableaux and provides a complete classification (see also \cite{kr} for another combinatorial model). The projective analogues of the skew shapes are the shifted skew shapes, which have appeared already in \cite{s2,stem} and correspond to the case when $\lambda$ and $\mu$ are \emph{strict} partitions. As in the linear case, these are the modules for which the $x_i$ act semisimply. In the spirit of \cite{ram}, we prove
\begin{thme}
For each shifted skew shape $\lambda/\mu$, where $\lambda$ and $\mu$ are strict partitions such that $\lambda$ contains $\mu$, there is an irreducible $\mathcal{S}^{\mathrm{aff}}(d)$-module $H^{\lambda/\mu}$. Every irreducible, calibrated $\mathcal{S}^{\mathrm{aff}}(d)$-module is isomorphic to exactly one such $H^{\lambda/\mu}$.
\end{thme}
The $H^{\lambda/\mu}$ are constructed directly using the combinatorics of shifted skew shapes. Furthermore, we show that $H^{\lambda/\mu}\cong\mathcal{L}(\lambda,\mu)$. We would also like to point out that Wan, \cite{wan}, has recently obtained a classification of the calibrated representations for $\mathcal{S}^{\mathrm{aff}}(d)$ over an arbitrary algebraically closed field of characteristic not equal to $2$. The appearance of the weight lattice for $\mathfrak{gl}_n(\mathbb{C})$ in the representation theory of $\mathcal{H}^{\mathrm{aff}}(d)$ is explained by work of Arakawa and Suzuki, who introduced in \cite{as} a functor from the BGG category $\mathcal{O}(\mathfrak{gl}_n)$ to the category of finite dimensional representations of $\mathcal{H}^{\mathrm{aff}}(d)$. The authors proved that the functor maps Verma modules to standard modules or to zero. Using the Kazhdan-Lusztig conjecture together with the results of \cite{ginz}, they proved that simple objects in $\mathcal{O}(\mathfrak{gl}_n)$ are mapped by the functor to simple modules or to zero. In \cite{su1}, Suzuki avoided the Kazhdan-Lusztig conjecture and proved that the functor maps simples to simples, using Zelevinsky's classification together with the existence of a nonzero $\mathcal{H}^{\mathrm{aff}}(d)$-contravariant form on certain standard modules, see \cite{r}. In \cite{su2}, Suzuki was able to avoid the results of Zelevinsky and independently reproduce the classification via a careful analysis of the standard modules. For a complete explanation of the functor in type $A$, we refer the reader to \cite{or}. The functor and related constructions have had numerous applications in various areas of representation theory.
This includes the study of affine braid groups and Hecke algebras \cite{or}, Yangians \cite{KN}, the centers of parabolic category $\mathcal{O}$ for $\mathfrak{gl}_{n}$ \cite{b2}, finite $W$-algebras \cite{bk4}, and the proof of Brou\'e's abelian defect conjecture for symmetric groups by Chuang and Rouquier via $\mathfrak{sl}_{2}$-categorification \cite{CR}. We define an analogous functor from the category $\mathcal{O}(\mathfrak{q}(n))$ to the category of finite dimensional modules for $\mathcal{S}^{\mathrm{aff}}(d)$. The construction of this functor relies on the following key result:
\begin{thme}
Let $M$ be a $\mathfrak{q}(n)$-supermodule. Then, there exists a homomorphism
\[
\mathcal{S}^{\mathrm{aff}}(d)\rightarrow\operatorname{End}_{\mathfrak{q}(n)}(M\otimes V^{\otimes d}).
\]
\end{thme}
To define the functor, let $\mathfrak{q}(n)=\mathfrak{n}^+\oplus\mathfrak{h}\oplus\mathfrak{n}^-$ be the triangular decomposition of $\mathfrak{q}(n)$. For each $\lambda\in P$, the functor
\[
F_\lambda:\mathcal{O}(\mathfrak{q}(n))\rightarrow\mathcal{S}^{\mathrm{aff}}(d)\mbox{-mod}
\]
is defined by
\begin{equation*}
F_\lambda M=\{\,m\in M \mid \mathfrak{n}^+.m=0 \text{ and } h.m=\lambda(h)m \text{ for all } h\in \mathfrak{h}\,\}.
\end{equation*}
The functor $F_\lambda$ is exact when $\lambda\in P^{++}$. The dimension of the highest weight space of a Verma module in $\mathcal{O}(\mathfrak{q}(n))$ is generally greater than one. A consequence of this is that the functor maps a Verma module to a direct sum of copies of the same standard module. A simple object in $\mathcal{O}(\mathfrak{q}(n))$ is mapped to a direct sum of copies of the same simple module, or else to zero. Determining when a simple object is mapped to something nonzero is a more difficult question than in the non-super case, and we have only partial results in this direction.
The main difficulty is a lack of information about the category $\mathcal{O}(\mathfrak{q}(n))$. The category of finite dimensional representations of $\mathfrak{q}(n)$ has been studied by Penkov and Serganova \cite{p,ps,ps2}; they give a character formula for all finite dimensional simple $\mathfrak{q}(n)$-modules. Using other methods, Brundan \cite{b} has also studied this category, and has even obtained some (conjectural) information about the whole category $\mathcal{O}(\mathfrak{q}(n))$ via the theory of crystals. The most useful information, however, comes from Gorelik \cite{g}, who defines the Shapovalov form for Verma modules and calculates the linear factorization of its determinant. In various works by Ariki, Grojnowski, Vazirani, and Kleshchev \cite{ar,gr,v,kl} it was shown that there is an action of $U(\mathfrak{gl}_\infty)$ on the direct sum of the Grothendieck groups of the categories of integral $\mathcal{H}^{\mathrm{aff}}(d)$-modules, for all $d$. This gives another type of classification of the simple integral modules, as nodes of the crystal graph associated to a maximal nilpotent subalgebra of $\mathfrak{gl}_\infty$. In \cite{bk1}, Brundan and Kleshchev show there is a classification of the simple integral modules for $\mathcal{S}^{\mathrm{aff}}(d)$ parameterized by the nodes of the crystal graph associated to a maximal nilpotent subalgebra of $\mathfrak{b}_\infty$, see also \cite{kl}. In \cite{lec}, Leclerc studied dual canonical bases of the quantum group $\mathcal{U}_q(\mathfrak{g})$ for various finite dimensional simple Lie algebras $\mathfrak{g}$ via embeddings of the quantized enveloping algebra $\mathcal{U}_q(\mathfrak{n})$ of a maximal nilpotent subalgebra $\mathfrak{n}\subseteq\mathfrak{g}$ in the \emph{quantum shuffle algebra}.
To describe the quantum shuffle algebra associated to $\mathfrak{g}$ of rank $r$, let $\mathcal{F}$ be the free associative algebra on the letters $[0],\ldots,[r-1]$, and let $[i_1,i_2,\ldots,i_k]:=[i_1]\cdot[i_2]\cdots[i_k]$. Then, the quantum shuffle algebra is the algebra $(\mathcal{F},*)$, where
\[
[i_1,\ldots,i_k]*[i_{k+1},\ldots,i_{k+\ell}]=\sum_\sigma q^{-e(\sigma)}[i_{\sigma(1)},\ldots,i_{\sigma(k+\ell)}],
\]
where the sum is over all minimal length coset representatives in $S_{k+\ell}/(S_k\times S_\ell)$, and $e(\sigma)$ is an explicit function of $\sigma$. There exists an \emph{injective} homomorphism $\Psi:\mathcal{U}_q(\mathfrak{n})\hookrightarrow\mathcal{F}$ satisfying $\Psi(xy)=\Psi(x)*\Psi(y)$ for all $x,y\in\mathcal{U}_q(\mathfrak{n})$. Let $\mathcal{W}=\Psi(\mathcal{U}_q(\mathfrak{n}))$. The ordering $[0]<[1]<\cdots<[r-1]$ yields two total orderings on words in $\mathcal{F}$: one is the standard lexicographic ordering, reading from \emph{left to right}, and the other is the \emph{costandard} lexicographic ordering, reading from \emph{right to left}. These orderings give rise to special words in $\mathcal{F}$ called Lyndon words, and every word has a canonical factorization as a non-increasing product of Lyndon words. In \cite{lec}, Leclerc uses the standard ordering, while we use the costandard ordering. It is easy to translate between results using one ordering as opposed to the other. However, in our situation, choosing the costandard ordering leads to some significant differences in the \emph{shape} of Lyndon words. We will explain this shortly.
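As a quick illustration of the shuffle product (a small example of our own; the precise powers of $q$ depend on the function $e(\sigma)$, which we leave unspecified): at $q=1$ the product specializes to the classical shuffle of words, for instance
\[
\left.[0]*[1,2]\right|_{q=1}=[0,1,2]+[1,0,2]+[1,2,0],
\]
one summand for each of the three minimal length representatives of $S_3/(S_1\times S_2)$.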
Bases for $\mathcal{W}$ are parameterized by certain words called \emph{good words}. A \emph{good word} is a nonincreasing product of \emph{good Lyndon words}, which have been studied in \cite{lr,ro1,ro2,ro3}. The good Lyndon words are in 1-1 correspondence with the positive roots, $\Delta^+$, of $\mathfrak{g}$, and the (standard or costandard) lexicographic ordering on good Lyndon words gives rise to a convex ordering on $\Delta^+$. The convex ordering on $\Delta^+$ gives rise to a PBW basis for $\mathcal{U}_q(\mathfrak{n})$, which in turn gives a multiplicative basis $\{E^*_g=(E^*_{l_k})*\cdots*(E_{l_1}^*)\}$ for $\mathcal{W}$ labeled by good words $g=l_1\cdots l_k$, where $l_1\geq\cdots\geq l_k$ are good Lyndon words. Additionally, the bar involution on $\mathcal{U}_q(\mathfrak{n})$ gives rise to a bar involution on $\mathcal{W}$, and hence a \emph{dual canonical basis} $\{b^*_g\}$ labeled by good words. The transition matrix between the bases $\{E^*_g\}$ and $\{b^*_g\}$ is triangular and, in particular, $b^*_l=E^*_l$ for each good Lyndon word $l$. In what follows, let $\underline{w}$ denote the specialization at $q=1$ of an element $w\in\mathcal{W}$. For $\mathfrak{g}$ of type $A_\infty=\underrightarrow{\lim}A_r$, good Lyndon words are labeled by segments $[a,b]$, and there is no difference between the standard and costandard orderings. In this case, for a good Lyndon word $l$, $\underline{E^*_l}=l$. The Mackey theorem for $\mathcal{H}^{\mathrm{aff}}(d)$ (see section~\ref{SS:Mackey}) implies that the formal character of a standard module $\mathcal{M}(\beta,\alpha)$ is given by $\underline{E^*_g}$, where $g$ is the good word $[\alpha_1,\ldots,\beta_1-1,\ldots,\alpha_n,\ldots,\beta_n-1]$.
A much deeper fact, proved by Ariki in \cite{ar}, is that the character of the simple module $\mathcal{L}(\beta,\alpha)$ is given by the dual canonical basis element $\underline{b^*_g}$. Leclerc also studied the Lie algebra $\mathfrak{b}_r$ of type $B_r$, and hence that of type $B_\infty=\underrightarrow{\lim}B_r$. The good Lyndon words for $\mathfrak{b}_r$ with respect to the standard ordering are segments $[i,\ldots,j]$, $0\leq i\leq j<r$, and \emph{double segments} $[0,\ldots,j,0,\ldots,k]$, $0\leq j<k<r$ (cf.\ \cite[\S8.2]{lec}). In this case, when $l=[i,\ldots,j]$ is a segment, $\underline{b_l^*}=[i,\ldots,j]=\operatorname{ch}\Phi_{[i,j]}$. However, when $l=[0,\ldots,j,0,\ldots,k]$ is a double segment,
\begin{align}\label{E:StdDblSeg}
\underline{b^*_l}=2[0]\cdot([0,\ldots,j]*[1,\ldots,k]).
\end{align}
When we adopt the costandard ordering, the picture becomes much more familiar. Indeed, the good Lyndon words are of the form $[i,\ldots,j]$, $0\leq i<j<r$, and $[j,\ldots,0,0,\ldots,k]$, $0\leq j<k<r$! In particular, they correspond to weights of the segment representations $\Phi_{[i,j]}$ and $\Phi_{[-j-1,k]}$, respectively. Moreover, for $l=[j,\ldots,0,0,\ldots,k]$,
\[
\underline{b^*_{l}}=2[j,\ldots,0,0,\ldots,k]=\operatorname{ch} \Phi_{[-j-1,k]}.
\]
Leclerc conjectures \cite[Conjecture 52]{lec} that for each good word $g$ of \emph{principal degree} $d$, there exists a simple $\mathcal{S}^{\mathrm{aff}}(d)$-module with character given by $b^*_g$. We are not yet able to confirm the conjecture for general good words. However, the combinatorial construction of $H^{\lambda/\mu}$ immediately implies Leclerc's conjecture for calibrated representations (cf.
\cite[Proposition 51]{lec} and Corollary \ref{C:characters}). Additionally, for each good Lyndon word $l$ (with respect to the costandard ordering), there is a simple module with character $b^*_l$. Also, an application of the functor $F_\lambda$ gives a representation theoretic interpretation of \eqref{E:StdDblSeg} above. Indeed, let $\lambda=(k+1,j+1)$ and $\alpha=(1,-1)$. Then,
\[
\operatorname{ch}\mathcal{L}(\lambda,-\alpha)=2[0]\cdot([0,\ldots,j]*[1,\ldots,k]).
\]
Finally, the analysis of good Lyndon words leads to a classification of the simple integral modules for $\mathcal{S}^{\mathrm{aff}}(d)$. Indeed, recall the set \eqref{E:Bsubd}, and let
\[
\mathcal{B}_d=\{(\lambda,\mu)\mid \lambda\in P^{++},\mbox{ and }\mu\in\mathcal{B}_d[\lambda]\}.
\]
Then,
\begin{thme}
The following is a complete list of pairwise non-isomorphic simple modules for $\mathcal{S}^{\mathrm{aff}}(d)$:
\[
\{\,\mathcal{L}(\lambda,\mu) \mid (\lambda,\mu)\in \mathcal{B}_d\,\}.
\]
\end{thme}
We believe this paper may serve as a starting point for future investigations into categorification theories associated to non-simply-laced Dynkin diagrams. In particular, we hope that the functor introduced here will play a role in showing that the 2-category for $\mathfrak{b}_\infty$, introduced by Khovanov-Lauda and independently by Rouquier, acts on $\mathcal{O}(\mathfrak{q}(n))$, see \cite{khl1,khl2,khl3,rq}. Additionally, in \cite{wz}, Wang and Zhao initiated a study of super analogues of $W$-algebras. This functor should be useful for studying these $W$-superalgebras along the lines of \cite{bk3,bk4}. In \cite{b}, Brundan studied the category of finite dimensional modules for $\mathfrak{q}(n)$ via Kazhdan-Lusztig theory. Among the finite dimensional $\mathfrak{q}(n)$-modules are the polynomial representations, which correspond under our functor to calibrated representations.
Other modules in this category are those associated to \emph{rational} weights, i.e.\ strict partitions with negative parts allowed. The functor should map these modules to interesting $\mathcal{S}^{\mathrm{aff}}(d)$-modules, and these should be investigated. It would also be interesting to compare the Kazhdan-Lusztig polynomials in \cite{b} to those appearing in \cite{lec}. We now briefly outline the paper. In section~\ref{S:Prelim}, we review some basic notions of super representation theory. In section~\ref{S:ASA} we define the degenerate AHCA and review some of its properties, which may also be found in \cite{kl}. The standard modules and their irreducible quotients are introduced in section~\ref{S:standardreps}. The classification of the calibrated representations is given in section~\ref{S:Calibrated}. In section~\ref{S:Lie algebras} we review some basic notions about the category $\mathcal{O}(\mathfrak{q}(n))$ which may be found in \cite{b,g}. Next, in section~\ref{S:LieTheoreticConstr} the functor is developed along with its properties. Finally, in section~\ref{S:Classification} a classification of simple modules is obtained.
\subsection{Acknowledgments}\label{SS:acknowlegements}
The work presented in this paper was begun while the second author visited the Mathematical Sciences Research Institute in Berkeley, CA. He would like to thank the administration and staff of MSRI for their hospitality, and especially the organizers of the ``Combinatorial Representation Theory'' and ``Representation Theory of Finite Groups and Related Topics'' programs for providing an exceptionally stimulating semester. We would like to thank Mikhail Khovanov for suggesting we consider a super analogue of the Arakawa-Suzuki functor. We would also like to thank Bernard Leclerc for pointing out \cite{lec}, as well as Monica Vazirani and Weiqiang Wang for some useful comments.
\section{(Associative) Superalgebras and Their Modules}\label{S:Prelim}
We now review some basics of the theory of superalgebras, following \cite{bk1,bk2,kl}. The objects in this theory are $\mathbb{Z}_2$-graded. Throughout the exposition, we will make definitions for homogeneous elements in this grading; these definitions should always be extended by linearity. Also, we often choose not to write the prefix \emph{super}. As the paper progresses this term may be dropped; however, we will always point out when we are explicitly ignoring the $\mathbb{Z}_2$-grading. A vector superspace is a $\mathbb{Z}_2$-graded $\mathbb{C}$-vector space $V=V_{\bar{0}}\oplus V_{\bar{1}}$. Given a nonzero homogeneous vector $v\in V_{\bar{i}}$, let $p(v)=\bar{i}\in\mathbb{Z}_2$ be its \emph{parity}. Given a superspace $V$, let $\Pi V$ be the superspace obtained by reversing the parity; that is, $(\Pi V)_{\bar{i}}=V_{\bar{i}+\bar{1}}$. A supersubspace of $V$ is a \emph{graded} subspace $U\subseteq V$; that is, $U=(U\cap V_{\bar{0}})\oplus(U\cap V_{\bar{1}})$. Observe that $U$ is a supersubspace if, and only if, $U$ is stable under the map $v\mapsto(-1)^{p(v)}v$ for homogeneous vectors $v\in V$. Given two superspaces $V,W$, the direct sum $V\oplus W$ and tensor product $V\otimes W$ satisfy $(V\oplus W)_{\bar{i}}=V_{\bar{i}}\oplus W_{\bar{i}}$ and
\[
(V\otimes W)_{\bar{i}}=\bigoplus_{\bar{j}+\bar{k}=\bar{i}}V_{\bar{j}}\otimes W_{\bar{k}}.
\]
We may regard $\operatorname{Hom}_{\mathbb{C}}(V,W)$ as a superspace by setting $\operatorname{Hom}_{\mathbb{C}}(V,W)_{\bar{i}}$ to be the set of all homogeneous linear maps of degree $\bar{i}$; that is, linear maps $\varphi:V\rightarrow W$ such that $\varphi(V_{\bar{j}})\subseteq W_{\bar{j}+\bar{i}}$. Finally, $V^*=\operatorname{Hom}_{\mathbb{C}}(V,\mathbb{C})$ is a superspace, where $\mathbb{C}=\mathbb{C}_{\bar{0}}$. Now, a superalgebra is a vector superspace $A$ that has the structure of an associative, unital algebra such that $A_{\bar{i}} A_{\bar{j}}\subseteq A_{\bar{i}+\bar{j}}$. A superideal of $A$ is a two-sided ideal of $A$ that is also a supersubspace of $A$. A superalgebra homomorphism $\varphi:A\rightarrow B$ is an even (i.e.\ grading preserving) linear map which is also an algebra homomorphism. Observe that since $\varphi$ is even, its kernel, $\ker\varphi$, is a superideal of $A$. Finally, given superalgebras $A$ and $B$, their tensor product $A\otimes B$ is a superalgebra with product given by
\begin{equation}\label{tensor product rule-algebra}
(a\otimes b)(a'\otimes b')=(-1)^{p(a')p(b)}(aa'\otimes bb').
\end{equation}
We now turn our attention to supermodules. Given a superalgebra $A$, let $A$-smod denote the category of all finite dimensional $A$-supermodules, and $A$-mod the category of $A$-modules in the usual ungraded sense. An object in $A$-smod is a $\mathbb{Z}_2$-graded left $A$-module $M=M_{\bar{0}}\oplus M_{\bar{1}}$ such that $A_{\bar{i}} M_{\bar{j}}\subseteq M_{\bar{i}+\bar{j}}$.
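To see the sign rule at work, here is a small standard computation, included by way of example: the product just defined makes $\mathbb{C}l(1)\otimes\mathbb{C}l(1)\cong\mathbb{C}l(2)$. Writing $c$ for the odd generator of $\mathbb{C}l(1)$ and setting $C_1=c\otimes 1$, $C_2=1\otimes c$, one checks
\[
C_1^2=c^2\otimes 1=-1,\qquad C_2^2=1\otimes c^2=-1,\qquad
C_2C_1=(-1)^{p(c)p(c)}(c\otimes c)=-C_1C_2,
\]
so that $C_1,C_2$ satisfy precisely the Clifford relations $c_i^2=-1$, $c_ic_j=-c_jc_i$.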
A homomorphism of $A$-supermodules $M$ and $N$ is a map of vector superspaces $f:M\rightarrow N$ satisfying $f(am)=(-1)^{p(a)p(f)}af(m)$ when $f$ is homogeneous. A submodule of an $A$-supermodule $M$ will always be a supersubspace of $M$. An $A$-supermodule $M$ is called irreducible if it contains no proper nontrivial subsupermodules. The supermodule $M$ may or may not remain irreducible when regarded as an object in $A$-mod. If $M$ remains irreducible as an $A$-module, it is called \emph{absolutely irreducible}, and if it decomposes, it is called \emph{self associate}. Alternatively, absolutely irreducible supermodules are said to be irreducible of type \texttt{M}, while self associate supermodules are irreducible of type \texttt{Q}. When $M\in A$-smod is self associate, there exists an odd $A$-smod homomorphism $\theta_M$ which interchanges the two irreducible components of $M$ as an object in $A$-mod. Now, let $A$ and $B$ be superalgebras, $M\in A$-smod and $N\in B$-smod. The vector superspace $M\otimes N$ has the structure of an $A\otimes B$-supermodule via the action given by
\begin{eqnarray}\label{tensor product rule-module}
(a\otimes b)(m\otimes n)=(-1)^{p(b)p(m)}(am\otimes bn)
\end{eqnarray}
for homogeneous $b\in B$ and $m\in M$. This is called the outer tensor product of $M$ and $N$ and is denoted $M\boxtimes N$. Unlike the classical situation, it may happen that the outer tensor product of irreducible supermodules is no longer irreducible. This only happens when both modules are self associate. To see this, let $M\in A$-smod and $N\in B$-smod be self associate, and recall the odd homomorphisms $\theta_M$ and $\theta_N$. Then $\theta_M\otimes\theta_N:M\boxtimes N\rightarrow M\boxtimes N$ is an even automorphism of $M\boxtimes N$ that squares to $-1$.
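For the reader's convenience, we make the last claim explicit; this is a routine check, in which we normalize $\theta_M^2=\mathrm{id}_M$ and $\theta_N^2=\mathrm{id}_N$ (which can always be arranged by rescaling, since the square of an odd endomorphism of an irreducible supermodule is a nonzero scalar). Since $\theta_M$ and $\theta_N$ are both odd, composing two copies of $\theta_M\otimes\theta_N$ introduces the sign $(-1)^{p(\theta_M)p(\theta_N)}=-1$, so
\[
(\theta_M\otimes\theta_N)^2=-\,\theta_M^2\otimes\theta_N^2=-\mathrm{id}_{M\boxtimes N}.
\]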
Hence $M\boxtimes N$ decomposes as a direct sum of two $A\otimes B$-supermodules, namely the $(\pm\sqrt{-1})$-eigenspaces. These two summands are absolutely irreducible and isomorphic under the odd isomorphism $\tilde{\theta}_{M,N}:=\theta_M\otimes\mathrm{id}_N$, see \cite[Lemma 2.9]{bk1} and \cite[Section 2-b]{bk2}. When $M$ and $N$ are irreducible, define the (irreducible) $A\otimes B$-module $M\circledast N$ by the formula
\begin{equation}\label{E:startensor}
M\boxtimes N =
\begin{cases}
M\circledast N, & \text{if either $M$ or $N$ is of type \texttt{M};}\\
(M\circledast N)\oplus\tilde{\theta}_{M,N}(M\circledast N), & \text{if both $M$ and $N$ are of type \texttt{Q}.}
\end{cases}
\end{equation}
When $M=M'\oplus M''$, define $M\circledast N=(M'\circledast N)\oplus(M''\circledast N)$. Finally, let $A$-smod$_{\mathrm{ev}}$ be the abelian subcategory of $A$-smod with the same objects, but only \emph{even} morphisms. Then, the Grothendieck group $K(A\text{-smod})$ is the quotient of the Grothendieck group $K(A\text{-smod}_{\mathrm{ev}})$ by the relation $M-\Pi M$ for every $A$-supermodule $M$. We would like to emphasize again that we allow odd morphisms in $A$-smod and, therefore, $M\cong\Pi M$ in the original category.
\section{The Degenerate Affine Hecke-Clifford Algebra}\label{S:ASA}
In this section we define the algebra which is the principal object of study in this paper and summarize the results we will require in what follows. Many of the results may be found in \cite{kl}; however, we include them here in an effort to make this paper self-contained and readable to a wider audience.
\subsection{The Algebra}\label{SS:Saffdef}
Let $\mathbb{C}l(d)$ denote the Clifford algebra over $\mathbb{C}$ with generators $c_1,\ldots,c_d$ and relations
\begin{eqnarray}\label{c}
c_i^2=-1,\;\;\; c_ic_j=-c_jc_i,\;\;\; 1\leq i\neq j\leq d.
\end{eqnarray}
Then $\mathbb{C}l(d)$ is a superalgebra by declaring the generators $c_{1},\dotsc,c_{d}$ to all be of degree $\bar{1}$. Let $S_d$ be the symmetric group on $d$ letters with Coxeter generators $s_1,\ldots,s_{d-1}$ and relations
\begin{eqnarray}\label{s}
s_i^2=1,\;\;\; s_is_{i+1}s_i=s_{i+1}s_is_{i+1},\;\;\; s_is_j=s_js_i
\end{eqnarray}
for all admissible $i$ and $j$ such that $|i-j|>1$. The group algebra of the symmetric group, $\mathbb{C} S_{d}$, is a superalgebra by viewing it as concentrated in degree $\bar{0}$; that is, $(\mathbb{C} S_{d})_{\bar{0}}=\mathbb{C} S_{d}$. The \emph{Sergeev algebra} is given by setting
\[
\mathcal{S}(d)=\mathbb{C}l(d)\otimes\mathbb{C} S_d
\]
as a vector superspace and declaring $\mathbb{C}l(d)\cong\mathbb{C}l(d)\otimes 1$ and $\mathbb{C} S_d\cong 1\otimes\mathbb{C} S_d$ to be subsuperalgebras. The Clifford generators $c_1,\ldots,c_d$ and Coxeter generators $s_1,\ldots,s_{d-1}$ are subject to the mixed relations
\begin{eqnarray}\label{c&s}
s_ic_i=c_{i+1}s_i,\;\;\; s_ic_{i+1}=c_is_i,\;\;\; s_ic_j=c_js_i,
\end{eqnarray}
for all admissible $i$ and $j$ such that $j\neq i,i+1$. The algebra of primary interest in this paper is the \emph{(degenerate) affine Hecke-Clifford algebra}, AHCA.
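By way of illustration, here is a routine computation in $\mathcal{S}(2)$ (our own example, using only the relations above): the mixed relation $s_1c_1=c_2s_1$ gives
\[
(c_1s_1)^2=c_1(s_1c_1)s_1=c_1c_2s_1^2=c_1c_2,
\]
so the odd element $c_1s_1$ squares to the even element $c_1c_2$.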
It is given as
\[
\mathcal{S}^{\mathrm{aff}}(d) = \mathcal{P}_d[x]\otimes\mathcal{S}(d)
\]
as a vector superspace, where $\mathcal{P}_d[x]:=\mathbb{C}[x_1,\ldots,x_d]$ is the polynomial ring in $d$ variables, viewed as a superalgebra concentrated in degree $\bar{0}$. Multiplication is defined so that $\mathcal{S}(d)\cong 1\otimes\mathcal{S}(d)$ and $\mathcal{P}_{d}[x]\cong\mathcal{P}_{d}[x]\otimes 1$ are subsuperalgebras. The generators of these two subalgebras are subject to the mixed relations
\begin{eqnarray}\label{c&x}
c_ix_i=-x_ic_i,\;\;\; c_jx_i=x_ic_j,\;\;\; 1\leq i\neq j\leq d,
\end{eqnarray}
and
\begin{eqnarray}\label{s&x}
s_ix_i=x_{i+1}s_i-1+c_ic_{i+1},\;\;\; s_ix_j=x_js_i
\end{eqnarray}
for $1\leq i\leq d-1$, $1\leq j\leq d$, $j\neq i,i+1$. Note that relation \eqref{s&x} differs from the corresponding relation in \cite{bk2,kl}. This is because in \eqref{c} we choose $c_{i}^{2}=-1$, following \cite{o,s,s2}, whereas in \emph{loc.\ cit.} the authors take $c_{i}^{2}=1$. The resulting algebras are isomorphic; the only effect of this convention is a change of sign which must be taken into account when comparing formulae. It will be useful to consider another decomposition
\begin{equation}\label{E:AlternateDecomp}
\mathcal{S}^{\mathrm{aff}}(d) \cong\mathcal{A}(d)\otimes\mathbb{C} S_d,
\end{equation}
where $\mathcal{A}(d)$ is the subalgebra generated by $\mathbb{C}l(d)$ and $\mathcal{P}_{d}[x]$. As a superspace,
\begin{equation}\label{E:Adef}
\mathcal{A}(d) \cong \mathcal{P}_d[x]\otimes\mathbb{C}l(d).
\end{equation}
We have the following PBW-type theorem for $\mathcal{S}^{\mathrm{aff}}(d)$.
Given $\alpha=(\alpha_1,\ldots,\alpha_d)\in\mathbb{Z}_{\geq0}^d$ and $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_d)\in\mathbb{Z}_2^d$, set $x^\alpha=x_1^{\alpha_1}\cdots x_d^{\alpha_d}$ and $c^\varepsilon=c_1^{\varepsilon_1}\cdots c_d^{\varepsilon_d}$. Then,
\begin{thm}\cite[Theorem 14.2.2]{kl}
The set $\{\,x^\alpha c^\varepsilon w \mid \alpha\in\mathbb{Z}_{\geq0}^d,\,\varepsilon\in\mathbb{Z}_2^d,\,w\in S_d\,\}$ forms a basis for $\mathcal{A}Se(d)$.
\end{thm}

\subsection{Some (Anti)Automorphisms}\label{SS:alghomoms}
The superalgebra $\mathcal{A}Se(d)$ admits an automorphism $\sigma:\mathcal{A}Se(d)\rightarrow\mathcal{A}Se(d)$ given by
\begin{equation}\label{E:sigmadef}
\sigma(s_i)=-s_{d-i}, \hspace{.25in} \sigma(c_i)=c_{d+1-i}, \hspace{.25in} \sigma(x_i)=x_{d+1-i}.
\end{equation}
It also admits an antiautomorphism $\tau:\mathcal{A}Se(d)\rightarrow\mathcal{A}Se(d)$ given by
\[
\tau(s_i)=s_i, \hspace{.25in} \tau(c_i)=-c_i, \hspace{.25in} \tau(x_i)=x_i.
\]
Note that, for superalgebras, antiautomorphism means that, for any homogeneous $x,y \in \mathcal{A}Se(d)$,
\begin{equation}\label{E:taudef}
\tau(xy) = (-1)^{p(x)p(y)}\tau(y)\tau(x).
\end{equation}

\subsection{Weights and Integral Modules}\label{SS:weights}
We now introduce the class of integral $\mathcal{A}Se(d)$-modules. These modules are the main focus of the paper. To this end, for each $a\in\mathbb{C}$, define
\begin{equation}\label{E:qdef}
q(a)=a(a+1).
\end{equation}
By \cite[Theorem 14.3.1]{kl}, the center of $\mathcal{A}Se(d)$ consists of the symmetric polynomials in $x_1^2,\ldots,x_d^2$. Let $\mathcal{P}_{d}[x^{2}]=\mathbb{C}[x_1^2,\ldots,x_d^2]\subset\mathcal{P}_d[x]$. A \emph{weight} is an algebra homomorphism
\[
\zeta:\mathcal{P}_d[x^2]\rightarrow\mathbb{C}.
\]
It is often convenient to identify a weight $\zeta$ with the $d$-tuple of complex numbers $\zeta=(\zeta(x_1^2),\ldots,\zeta(x_d^2))\in\mathbb{C}^d$. Given an $\mathcal{A}Se(d)$-supermodule $M$ and a weight $\zeta$, define the \emph{$\zeta$ weight space},
\[
M_\zeta=\left\{ m\in M \mid x_i^2m =q\left(\zeta\left( x_i^2\right)\right)m \text{ for all $i=1,\ldots,d$} \right\},
\]
and the \emph{generalized $\zeta$ weight space},
\[
M_\zeta^{\mathrm{gen}} =\left\{ m\in M \mid \left( x_i^2-q\left(\zeta\left(x_i^2 \right)\right)\right)^km=0 \text{ for $k\gg 0$ and all $i=1,\ldots,d$} \right\}.
\]
Observe that if $M_\zeta^{\mathrm{gen}}\neq 0$, then $M_\zeta\neq0$. Following \cite{bk2}, say that an $\mathcal{A}Se(d)$-module $M$ is \emph{integral} if
\[
M=\bigoplus_\zeta M_\zeta^{\mathrm{gen}}
\]
and $M^{\mathrm{gen}}_\zeta\neq0$ implies $\zeta\left( x_i^2\right)\in\mathbb{Z}$ for $i=1,\ldots,d$. Let $\operatorname{Rep}\mathcal{A}Se(d)$ denote the full subcategory of $\mathcal{A}Se(d)$-smod consisting of the finite dimensional \emph{integral} modules for the degenerate AHCA. Unless stated otherwise, all $\mathcal{A}Se(d)$-modules will be assumed integral.
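For example, since $q(a)=a(a+1)$, we have
\[
q(-a-1)=(-a-1)(-a)=a(a+1)=q(a),
\]
so the integral tuples $(a_1,\ldots,a_d)$ and $(-a_1-1,\ldots,-a_d-1)$ determine the same eigenvalues for $x_1^2,\ldots,x_d^2$; in particular, $q(0)=q(-1)=0$. This symmetry reappears below in the isomorphism $\mathcal{L}(a)\cong\mathcal{L}(-a-1)$.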
\subsection{The Mackey Theorem}\label{SS:Mackey}
In this section we review the Mackey Theorem for integral $\mathcal{A}Se$-modules. Refer to \cite{kl} for details. Let $\mu=(\mu_1,\ldots,\mu_k)$ be a composition of $d$. Define the parabolic subgroup $S_\mu=S_{\mu_1}\times\cdots\times S_{\mu_k}\subseteq S_d$ and the parabolic subalgebra $\mathcal{A}Se(\mu):=\mathcal{A}Se(\mu_1)\otimes\cdots \otimes \mathcal{A}Se(\mu_k)\subseteq\mathcal{A}Se(d)$. Define the functor
\[
\operatorname{Ind}_\mu^d:\operatorname{Rep}\mathcal{A}Se(\mu)\rightarrow\operatorname{Rep}\mathcal{A}Se(d),\;\;\; \operatorname{Ind}_\mu^dM=\mathcal{A}Se(d)\otimes_{\mathcal{A}Se(\mu)}M.
\]
This functor is left adjoint to $\operatorname{Res}_\mu^d:\operatorname{Rep}\mathcal{A}Se(d)\rightarrow\operatorname{Rep}\mathcal{A}Se(\mu)$. Also, given a composition $\nu=(\nu_1,\ldots,\nu_\ell)$ of $d$ which is a refinement of $\mu$ (i.e.\ there exist $0=i_1\leq\cdots\leq i_{k+1}=\ell$ such that $\nu_{i_j+1}+\cdots+\nu_{i_{j+1}}=\mu_j$), define $\operatorname{Ind}_\nu^\mu$ and $\operatorname{Res}_\nu^\mu$ in the obvious way. Now, let $\mu$ and $\nu$ be compositions of $d$, let $D_{\mu,\nu}$ denote the set of minimal length $S_\mu\backslash S_d/S_\nu$-double coset representatives, and set $D_\nu=D_{(1^d),\nu}$. The following lemma is standard.
\begin{lem}\label{L:MinCosetReps}
Let $\nu=(\nu_1,\ldots,\nu_n)$ be a composition of $d$, and set $a_i=\nu_1+\cdots+\nu_{i-1}+1$ and $b_i=\nu_1+\cdots+\nu_i$. If $w\in D_\nu$ and $a_i\leq k<k'\leq b_i$ for some $i$, then $w(k)<w(k')$.
\end{lem}
It is known that $S_\mu\cap wS_\nu w^{-1}$ and $w^{-1}S_\mu w\cap S_\nu$ are parabolic subgroups of $S_d$. Hence we may define compositions $\mu\cap w\nu$ and $w^{-1}\mu\cap\nu$ by the formulae
\[
S_\mu\cap wS_\nu w^{-1}=S_{\mu\cap w\nu} \,\,\,\,\,\, \mbox{and} \,\,\,\,\,\, w^{-1}S_\mu w\cap S_\nu=S_{w^{-1}\mu\cap\nu}.
\]
Moreover, the map $\sigma\mapsto w^{-1}\sigma w$ induces a length preserving isomorphism $S_{\mu\cap w\nu}\rightarrow S_{w^{-1}\mu\cap\nu}$. Using this last fact, it can be proved that for each $w\in D_{\mu,\nu}$ there exists an algebra isomorphism
\[
\varphi_{w^{-1}}:\mathcal{A}Se(\mu\cap w\nu)\rightarrow\mathcal{A}Se(w^{-1}\mu\cap\nu)
\]
given by $\varphi_{w^{-1}}(\sigma)=w^{-1}\sigma w$, $\varphi_{w^{-1}}(c_i)=c_{w^{-1}(i)}$, and $\varphi_{w^{-1}}(x_i)=x_{w^{-1}(i)}$ for $1\leq i\leq d$ and $\sigma\in S_{\mu\cap w\nu}$. If $M$ is a left $\mathcal{A}Se(\mu\cap w\nu)$-supermodule, let $^wM$ denote the $\mathcal{A}Se(w^{-1}\mu\cap\nu)$-supermodule obtained by twisting the action with the isomorphism $\varphi_{w^{-1}}$. We have the following ``Mackey Theorem'':
\begin{thm}\label{Mackey}\cite[Theorem 14.2.5]{kl}
Let $M$ be an $\mathcal{A}Se(\nu)$-supermodule. Then $\operatorname{Res}_\mu^d\operatorname{Ind}_\nu^dM$ admits a filtration with subquotients isomorphic to
\[
\operatorname{Ind}_{\mu\cap w\nu}^\mu{}^w(\operatorname{Res}_{w^{-1}\mu\cap\nu}^\nu M),
\]
one for each $w\in D_{\mu,\nu}$. Moreover, the subquotients can be taken in any order refining the Bruhat order on $D_{\mu,\nu}$.
In particular, $\operatorname{Ind}_{\mu\cap\nu}^\mu\operatorname{Res}_{\mu\cap\nu}^\nu M$ appears as a subsupermodule.
\end{thm}

\subsection{Characters}\label{SS:characters}
Following \cite[Chapter 16]{kl}, we now describe the notion of characters for integral $\mathcal{A}Se(d)$-supermodules. Recall the subsuperalgebra $\mathcal{A}(d)\subseteq\mathcal{A}Se(d)$ defined in \eqref{E:Adef}. When $d=1$ and $a\in\mathbb{Z}$, there exists a $2$-dimensional simple $\mathcal{A}(1)$-module
\[
\mathcal{L}(a)=\mathcal{C}l(1)1_a=\mathbb{C}1_a\oplus\mathbb{C} c_1.1_a,
\]
which is free as a $\mathcal{C}l(1)$-module and satisfies
\[
x_1.1_a=\sqrt{q(a)}1_a.
\]
The $\mathbb{Z}_{2}$-grading on $\mathcal{L}(a)$ is given by setting $p(1_{a})=\bar{0}$. Observe that $\mathcal{L}(a)\cong\mathcal{L}(-a-1)$, and that replacing $\sqrt{q(a)}$ with $-\sqrt{q(a)}$ in the action of $x_1$ yields an isomorphic supermodule under the odd isomorphism $1_a\mapsto c_1.1_a$. A direct calculation verifies that this module is of type \texttt{M} if $a\neq 0$ and of type \texttt{Q} if $a=0$. Now, $\mathcal{A}(d) \cong \mathcal{A}(1)\otimes\cdots\otimes\mathcal{A}(1)$. Hence, applying \eqref{E:startensor} we obtain a simple $\mathcal{A}(d)$-module $\mathcal{L}(a_1)\circledast\cdots\circledast\mathcal{L}(a_d)$. Given $(a_{1}, \dotsc , a_{d})\in\mathbb{Z}^d_{\geq0}$, let
\begin{equation}\label{E:gammazerodef}
\gamma_{0}(a_{1}, \dotsc, a_{d})=|\{ i \mid a_i=0 \}|.
\end{equation}
We have
\begin{lem}\label{A(d) irreducibles}\cite[Lemma 16.1.1]{kl}
The set
\[
\left\{\mathcal{L}(a_1)\circledast\cdots\circledast \mathcal{L}(a_d) \mid (a_1,\ldots,a_d)\in\mathbb{Z}_{\geq0}^d \right\}
\]
is a complete set of pairwise non-isomorphic irreducible integral $\mathcal{A}(d)$-modules. The module $\mathcal{L}(a_1)\circledast\cdots\circledast\mathcal{L}(a_d)$ is of type \texttt{M} if $\gamma_0$ is even and of type \texttt{Q} if $\gamma_0$ is odd. Moreover,
\[
\dim\mathcal{L}(a_1)\circledast\cdots\circledast\mathcal{L}(a_d)=2^{d-\lfloor\gamma_0/2\rfloor},
\]
where $\gamma_0=\gamma_0(a_1,\ldots,a_d)$ as above.
\end{lem}
Restriction to the subalgebra $\mathcal{A}(d)=\mathcal{A}Se((1^d))\subseteq\mathcal{A}Se(d)$ defines a functor from $\operatorname{Rep}\mathcal{A}Se(d)$ to $\mathcal{A}(d)$-mod. Applying this functor and passing to the Grothendieck group of the category $\mathcal{A}(d)$-mod yields a map
\[
\operatorname{ch}:\operatorname{Rep}\mathcal{A}Se(d)\rightarrow K(\mathcal{A}(d)\mbox{-mod})
\]
defined by
\[
\operatorname{ch} M=\left[ \operatorname{Res}^{d}_{1^d}M \right],
\]
where $[X]$ denotes the image of an $\mathcal{A}(d)$-module $X$ in $K(\mathcal{A}(d)\mbox{-mod})$. The image $\operatorname{ch} M$ is called the \emph{formal character} of the $\mathcal{A}Se(d)$-module $M$. The following fundamental result is given in \cite[Theorem 17.3.1]{kl}.
\begin{lem}\label{L:independenceofcharacters}
The induced map on Grothendieck groups
\[
\operatorname{ch} : K(\operatorname{Rep}\mathcal{A}Se(d)) \to K(\mathcal{A}(d)\text{-mod})
\]
is injective.
\end{lem}
For convenience of notation, set
\[
[a_1,\ldots,a_d]=[\mathcal{L}(a_1)\circledast\cdots\circledast \mathcal{L}(a_d)].
\]
The following lemma describes how to calculate the character of $K\circledast M$ in terms of the characters of $K$ and $M$, and is a special case of the Mackey Theorem:
\begin{lem}\label{L:ShuffleLemma}\cite[Shuffle Lemma]{kl}
Let $K$ be a simple $\mathcal{A}Se(k)$-module and $M$ a simple $\mathcal{A}Se(m)$-module, and assume that
\[
\operatorname{ch} K=\sum_{\underline{i}\in\mathbb{Z}_{\geq0}^k}r_{\underline{i}}[i_1,\ldots,i_k] \,\,\,\,\,\, \mbox{and} \,\,\,\,\,\, \operatorname{ch} M=\sum_{\underline{j}\in\mathbb{Z}_{\geq0}^m}s_{\underline{j}}[j_1,\ldots,j_m].
\]
Then,
\begin{eqnarray*}
\operatorname{ch}\operatorname{Ind}_{k,m}^{k+m}K\circledast M =\sum_{\underline{i},\underline{j}}r_{\underline{i}}s_{\underline{j}}[i_1,\ldots,i_k]*[j_1,\ldots,j_m],
\end{eqnarray*}
where
\[
[i_1,\ldots,i_k]*[i_{k+1},\ldots,i_{k+m}] =\sum_{w\in D_{(k,m)}}[w(i_1),\ldots,w(i_{k+m})].
\]
\end{lem}

\subsection{Duality}\label{SS:duality}
Given an $\mathcal{A}Se(d)$-module $M$, we obtain a new module $M^\sigma$ by twisting the action of $\mathcal{A}Se(d)$ by $\sigma$. That is, define a new action, $*$, on $M$ by $x*m=\sigma(x).m$ for all $x\in\mathcal{A}Se(d)$. We have
\begin{lem}\label{sm twisted action}\cite[Lemma 14.6.1]{kl}
If $M$ is an $\mathcal{A}Se(k)$-module and $N$ is an $\mathcal{A}Se(\ell)$-module, then
\[
(\operatorname{Ind}_{k,\ell}^{k+\ell}M\circledast N)^\sigma \cong\operatorname{Ind}_{k,\ell}^{k+\ell}M^\sigma\circledast N^\sigma.
\]
\end{lem}
If $M$ is an $\mathcal{A}Se(d)$-module with character
\[
\operatorname{ch} M=\sum_{\underline{i}\in\mathbb{Z}_{\geq0}^d}r_{\underline{i}}[i_1,\ldots,i_d],
\]
then Lemma~\ref{sm twisted action} implies that
\[
\operatorname{ch} M^{\sigma}=\sum_{\underline{i}\in\mathbb{Z}_{\geq0}^d}r_{\underline{i}}[i_d,\ldots,i_1].
\]

\subsection{Contravariant Forms}\label{SS:contravariantforms}
Let $M$ be in $\operatorname{Rep}\mathcal{A}Se(d)$. A bilinear form $(\cdot,\cdot):M\otimes M\rightarrow\mathbb{C}$ is called a \emph{contravariant form} if
\[
(x.v,v')=(v,\tau(x).v')
\]
for all $x\in\mathcal{A}Se(d)$ and $v,v'\in M$.
\begin{lem}\label{L:ASeContraForm}
Let $M$ be in $\operatorname{Rep}\mathcal{A}Se(d)$, equipped with a contravariant form $(\cdot,\cdot)$. Then
\[
M_\eta\perp M_\zeta^{\mathrm{gen}}\;\;\;\mbox{unless}\;\;\;\eta=\zeta.
\]
\end{lem}
\begin{proof}
Assume $\eta\neq\zeta$, and let $v\in M_\eta$ and $v'\in M^{\mathrm{gen}}_\zeta$. Choose $i$ such that $q(\eta(x_i^2))\neq q(\zeta(x_i^2))$, and $N\gg0$ such that
\[
(x_i^2-q(\zeta(x_i^2)))^N.v'=0.
\]
Then
\begin{align*}
(q(\eta(x_i^2))-q(\zeta(x_i^2)))^N(v,v') =&((x_i^2-q(\zeta(x_i^2)))^N.v,v')\\
=&(v,\tau((x_i^2-q(\zeta(x_i^2)))^N).v')\\
=&(v,(x_i^2-q(\zeta(x_i^2)))^N.v')=0,
\end{align*}
showing that $(v,v')=0$.
\end{proof}

\subsection{Intertwiners}
Define the intertwiner
\begin{eqnarray}\label{E:intertwiner}
\phi_i=s_i(x_i^2-x_{i+1}^2)+(x_i+x_{i+1})-c_ic_{i+1}(x_i-x_{i+1}).
\end{eqnarray}
Given an $\mathcal{A}Se(d)$-supermodule $M$, we have $\phi_iM_{\zeta}^{\mathrm{gen}}\subseteq M_{s_i(\zeta)}^{\mathrm{gen}}$. Moreover, a straightforward calculation gives
\begin{eqnarray}\label{E:intertwinersquared}
\phi_i^2=2x_i^2+2x_{i+1}^2-(x_i^2-x_{i+1}^2)^2.
\end{eqnarray}
The following lemma is now a direct consequence, using the factorization $2q(a)+2q(b)-(q(a)-q(b))^2=-(q(a)-q(b+1))(q(a)-q(b-1))$ (see also \cite{kl}).
\begin{lem}\label{L:InvertibleIntertwiner}
Assume that $Y$ is in $\operatorname{Rep}\mathcal{A}Se(d)$, and $v\in Y$ satisfies $x_i.v=\sqrt{q(a)}v$ and $x_{i+1}.v=\sqrt{q(b)}v$ for some $a,b\in\mathbb{Z}$. Then $\phi_i^2.v\neq 0$ unless $q(a)=q(b+1)$ or $q(a)=q(b-1)$.
\end{lem}

\section{Standard Modules}\label{S:standardreps}
We construct a family of standard modules which are an analogue of Zelevinsky's construction for the degenerate affine Hecke algebra. The key ingredient is the definition of certain irreducible supermodules for a parabolic subalgebra of $\mathcal{A}Se(d)$, the so-called segment representations. The standard modules are then obtained by inducing from the outer tensor product of these modules.

\subsection{Segment Representations}\label{subsection irred modules}
We begin by constructing a family of irreducible $\mathcal{A}Se(d)$-supermodules that are analogues of Zelevinsky's segment representations for the degenerate affine Hecke algebra.
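For example, the segment $[1,3]$ has $d=3$, and the module $\hat{\Phi}_{[1,3]}$ constructed below realizes the eigenvalues
\[
q(1)=2,\qquad q(2)=6,\qquad q(3)=12
\]
for $x_1^2$, $x_2^2$, $x_3^2$, respectively.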
To begin, define the $2^d$-dimensional $\mathcal{S}(d)$-supermodule
\begin{equation}\label{E:Cldef}
\mathcal{C}l_{d}=\operatorname{Ind}_{S_d}^{\mathcal{S}(d)}\mathbb{C}\mathbf{1},
\end{equation}
where $\mathbb{C}\mathbf{1}$ is the trivial representation of $S_d$. That is, $\mathcal{C}l_{d}=\mathcal{C}l(d).\mathbf{1}$, where the cyclic vector $\mathbf{1}$ satisfies
\begin{eqnarray*}
w.\mathbf{1}=\mathbf{1},\;\;\;w\in S_d.
\end{eqnarray*}
This is often referred to as the \emph{basic spin representation} of $\mathcal{S}(d)$. Introduce algebra involutions $\epsilon_i:\mathcal{C}l(d)\rightarrow\mathcal{C}l(d)$ by $\epsilon_i(c_j)=(-1)^{\delta_{ij}}c_j$ for $1\leq i,j\leq d$. The elements $\epsilon_i$ act on $\mathcal{C}l_{d}$ by $\epsilon_i.\mathbf{1}=\mathbf{1}$ and, more generally, $\epsilon_i.s\mathbf{1}=\epsilon_{i}(s)\mathbf{1}$ for $s\in\mathcal{C}l(d)$ and $1\leq i\leq d$. Note that the operators $\epsilon_i$ commute with each other. For each $a\in\mathbb{Z}$, define the Clifford algebra
\begin{equation}\label{Pha}
\Phi_a=
\begin{cases}
\mathbb{C}\langle \varphi \rangle / (\varphi^2-a), &\text{if $a \neq 0$}; \\
\mathbb{C}\langle \varphi \rangle / (\varphi), & \text{if $a=0$}.
\end{cases}
\end{equation}
The $\mathbb{Z}_{2}$-grading on $\Phi_{a}$ is given by declaring $p(\varphi)=\bar{1}$. Given a pair of integers $a\leq b$, define the \emph{segment}
\begin{equation*}
[a,b]=\{a,a+1,\ldots,b\}.
\end{equation*}
Given a segment $[a,b]$ with $b-a+1=d\in\mathbb{Z}_{\geq0}$, define the $\Phi_a\otimes\mathcal{S}(d)$-module
\begin{equation}\label{E:segment}
\hat{\Phi}_{[a,b]}=\Phi_a\boxtimes\mathcal{C}l_{d}.
\end{equation}
Of course, when $d=0$ the segment $[a,a-1]=\emptyset$, and $\hat{\Phi}_{\emptyset}=\Phi_a\otimes\mathbb{C}$. For $1\leq j<i\leq d$, let $s_{ij}$ denote the transposition $(i\,j)$, and let
\begin{align}\label{E:JMelt}
\mathcal{L}_i=\sum_{j<i}(1-c_jc_i)s_{ij}
\end{align}
be the \emph{$i$th Jucys-Murphy element} (cf. \cite[(13.22)]{kl}).
\begin{prp}\label{segment representation}
Let $[a,b]$ be a segment with $b-a+1=d$. Then,
\begin{enumerate}
\item[(i)] The vector space $\hat{\Phi}_{[a,b]}$ is an $\mathcal{A}Se(d)$-module with $s_i.v=(1\otimes s_i).v$, $c_i.v=(1\otimes c_i).v$, and
\begin{align*}
x_i.v &= \left(a\otimes \epsilon_i+1\otimes \mathcal{L}_{i}-\varphi\otimes c_i\right).v \\
&=\left(a\otimes \epsilon_i+\sum_{k<i}1\otimes(1-c_kc_i)s_{ki}-\varphi\otimes c_i\right).v,
\end{align*}
for all $v\in\hat{\Phi}_{[a,b]}$.
\item[(ii)] The action of $\mathcal{P}_d[x^2]$ on $\hat{\Phi}_{[a,b]}$ is determined by
\[
x_i^2.(\varphi^{\delta}\otimes \mathbf{1})=q(a+i-1)\varphi^{\delta}\otimes\mathbf{1},\;\;\;\delta\in\{0,1\}, \;\;\;i=1,\ldots,d.
\]
\end{enumerate}
\end{prp}
\begin{proof}
(i) The fact that this is an $\mathcal{A}Se(d)$-module is an easy check which we leave to the reader.

(ii) To check the action of $x_i^2$, observe that
\[
x_i.1\otimes\mathbf{1}=\left(a+i-1-\sum_{j<i}c_jc_i\right).1\otimes\mathbf{1} +c_i.\varphi\otimes\mathbf{1}
\]
and
\[
x_i.\varphi\otimes\mathbf{1}=\left(a+i-1-\sum_{j<i}c_jc_i\right).\varphi\otimes\mathbf{1} +ac_i.1\otimes\mathbf{1}.
\]
Now, the result follows using the commutation relations for $\mathcal{A}Se(d)$.
\end{proof}
\begin{rmk}\label{R:Duality}
In fact, we need not consider all $a,b\in\mathbb{Z}$. Given any segment $[a,b]$, consider the module $\hat{\Phi}_{[a,b]}^\sigma$ obtained by twisting the action of $\mathcal{A}Se(d)$ by the automorphism $\sigma$ as described in Section~\ref{SS:duality}. Note that when $b\neq-1$,
\[
\hat{\Phi}_{[a,b]}^\sigma\cong\hat{\Phi}_{[-b-1,-a-1]}.
\]
When $b=-1$, $\hat{\Phi}_{[a,-1]}^\sigma\cong\hat{\Phi}_{[0,-a-1]}^{\oplus2}$. In particular, for $b\neq0$, $\hat{\Phi}_{[-(b+1),b-1]}^\sigma\cong\hat{\Phi}_{[-b,b]}$, and $\hat{\Phi}_{[-1,-1]}^\sigma\cong\hat{\Phi}_{[0,0]}^{\oplus2}$. Therefore, it is enough to describe the modules
\begin{enumerate}
\item $\hat{\Phi}_{[a,b]}$, $0\leq a\leq b$, and
\item $\hat{\Phi}_{[-a,b]}$, $0<a\leq b$.
\end{enumerate}
\end{rmk}
The following result describes $\hat{\Phi}_{[a,b]}$ at the level of characters.
\begin{prp}\label{character formula}
Let $a,b\in\mathbb{Z}_{\geq0}$. Then,
\begin{enumerate}
\item if $0\leq a\leq b$, then
\begin{equation*}
\operatorname{ch}\hat{\Phi}_{[a,b]}=\begin{cases}[a,\ldots,b], &\text{if $a=0$};\\ 2[a,\ldots,b], &\text{if $a \neq 0$};\end{cases}
\end{equation*}
\item if $0<a\leq b$, then
\[
\operatorname{ch}\hat{\Phi}_{[-a,b]}=4[a-1,\ldots,1,0,0,1,\ldots,b].
\]
\end{enumerate}
\end{prp}
\begin{proof}
The action of $x_i^2$ commutes with $\mathcal{C}l(d)$, and $\hat{\Phi}_{[a,b]}=\mathcal{C}l(d).(1\otimes\mathbf{1})+\mathcal{C}l(d).(\varphi\otimes\mathbf{1})$. Therefore, applying Proposition~\ref{segment representation}(ii), we deduce in both cases that the $x_i^2$ act by the prescribed eigenvalues. The result now follows from the dimension formula in Lemma~\ref{A(d) irreducibles}.
\end{proof}
Let $\varphi\hat{\mathbf{1}}_{[a,b]}=\varphi\otimes\mathbf{1}$ and $\hat{\mathbf{1}}_{[a,b]}=1\otimes\mathbf{1}$. Also, in what follows, we omit the tensor symbols. For example, we write
\[
a\epsilon_i+\mathcal{L}_i-\varphi c_i:=a\otimes \epsilon_i+1\otimes \mathcal{L}_{i}-\varphi\otimes c_i.
\]
\begin{dfn}\label{X}
Let $a\in\mathbb{Z}$ and $\kappa_1,\ldots,\kappa_d\in\mathbb{R}$ satisfy $\kappa_i^2=q(a+i-1)$, where $d=b-a+1$. Given a subset $S\subseteq\{1,\ldots,d\}$, define the element $X_{S} \in \mathcal{A}Se(d)$ by
\[
X_S=\prod_{i\notin S}(x_i+\kappa_i).
\]
Observe that $X_S$ is only defined up to the choices of sign for $\kappa_1,\ldots,\kappa_d$.
\end{dfn}
\begin{lem}\label{nonzero}
Let $[a,b]$ be a segment with $d=b-a+1$. Assume that either $-a\notin\{1,\ldots,d\}$ and $S$ is arbitrary, or that $-a\in\{1,\ldots,d\}$ and either $-a+1\in S$ or $-a\in S$. Then $X_S.\hat{\mathbf{1}}_{[a,b]}\neq0$.
\end{lem}
\begin{proof}
Let $\hat{\mathbf{1}}=\hat{\mathbf{1}}_{[a,b]}$. By Proposition~\ref{segment representation}(i),
\[
x_k.v=(a\epsilon_k+\mathcal{L}_k-\varphi c_k).v.
\]
Let $\{d_1>d_2>\ldots>d_\ell\}=\{1,\ldots,d\}\backslash S$. Since the $x_i$ mutually commute,
\begin{eqnarray*}
X_S.\hat{\mathbf{1}}&=&(x_{d_1}+\kappa_{d_1})\cdots(x_{d_\ell}+\kappa_{d_\ell}).\hat{\mathbf{1}}\\
&=&(a\epsilon_{d_1}+\kappa_{d_1}+\mathcal{L}_{d_1}-\varphi c_{d_1})\cdots (a\epsilon_{d_\ell}+\kappa_{d_\ell}+\mathcal{L}_{d_\ell}-\varphi c_{d_\ell}).\hat{\mathbf{1}}\\
&=&((a+\kappa_{d_1})+\mathcal{L}_{d_1}-\varphi c_{d_1})\cdots ((a+\kappa_{d_\ell})+\mathcal{L}_{d_\ell}-\varphi c_{d_\ell}).\hat{\mathbf{1}}.
\end{eqnarray*}
The last equality follows since $\epsilon_k\mathcal{L}_j=\mathcal{L}_j\epsilon_k$ if $k>j$.
Now,
\begin{eqnarray}\label{w 1}
X_S.\hat{\mathbf{1}} &=&\bigg(\bigg(a+\kappa_{d_1}+\sum_{j<d_1}s_{jd_1}\bigg)+ \bigg(\sum_{j<d_1}s_{jd_1}c_j-\varphi \bigg)c_{d_1}\bigg)\cdots\\
\nonumber&&\hspace{1.5in}\cdots \bigg(\bigg(a+\kappa_{d_\ell}+\sum_{j<d_\ell}s_{jd_\ell}\bigg) +\bigg(\sum_{j<d_\ell}s_{jd_\ell}c_j-\varphi \bigg)c_{d_\ell}\bigg).\hat{\mathbf{1}}\\
\nonumber &=&\prod_{i\notin S}(a+i-1+\kappa_i).\hat{\mathbf{1}}+(\bigstar).\hat{\mathbf{1}},
\end{eqnarray}
where $(\bigstar)=p'(c)-\varphi p''(c)$ with $p'(c)\in\mathcal{C}l(d)_{\bar{0}}$, $p''(c)\in\mathcal{C}l(d)_{\bar{1}}$, and $p'(c)$ has no constant term. Therefore, if either $a\geq 0$ or $-a+1\in S$, then $X_S.\hat{\mathbf{1}}\neq 0$. Now, assume $-a+1\in\{1,\ldots,d\}$ and $-a+1\notin S$, but $-a\in S$. Observe that $\kappa_{-a+1}=\kappa_{-a}=0$. Now,
\begin{eqnarray}\label{nonzero 2}
x_{-a}.\hat{\mathbf{1}}=\left(-1-\sum_{j<-a}c_jc_{-a}-\varphi c_{-a}\right).\hat{\mathbf{1}}=-c_{-a}c_{-a+1}x_{-a+1}.\hat{\mathbf{1}}.
\end{eqnarray}
Let $R=S\cup\{-a+1\}$ and $T=R\backslash\{-a\}$. Then,
\[
X_S.\hat{\mathbf{1}}=X_{R}x_{-a+1}.\hat{\mathbf{1}}=c_{-a}c_{-a+1}X_{R}x_{-a}.\hat{\mathbf{1}} =c_{-a}c_{-a+1}X_T.\hat{\mathbf{1}}\neq0.
\]
Finally, if $d=-a$, then in \eqref{w 1} we have $d_1=-a$, and it is clear that the coefficient of $c_{-a-1}c_{-a}$ is nonzero.
\end{proof}
\begin{lem}\label{A submodule}
If $i\notin S$, then $x_iX_S.\hat{\mathbf{1}}=\kappa_iX_S.\hat{\mathbf{1}}$.
\end{lem}
\begin{proof}
Since $x_{i}^{2}.\hat{\mathbf{1}} = q(a+i-1)\hat{\mathbf{1}}= \kappa_{i}^{2}\hat{\mathbf{1}}$,
\begin{eqnarray*}
x_i(x_i+\kappa_i).\hat{\mathbf{1}}=(x_i^2+\kappa_ix_i)\hat{\mathbf{1}}=\kappa_i(\kappa_i+x_i)\hat{\mathbf{1}},
\end{eqnarray*}
so the result follows because the $x_i$ commute.
\end{proof}
\begin{lem}\label{s_i action}
If $i,i+1\notin S$ and $i\neq-a$, then
\[
s_iX_S.\hat{\mathbf{1}}=\left(\frac{\kappa_{i+1}+\kappa_i}{2(a+i)}+ \frac{\kappa_{i+1}-\kappa_i}{2(a+i)}c_ic_{i+1}\right)X_S.\hat{\mathbf{1}}.
\]
\end{lem}
\begin{proof}
Let $w:=X_S.\hat{\mathbf{1}}$, and recall the intertwining element $\phi_i$. By character considerations, $\phi_i.\hat{\Phi}_{[a,b]}=\{0\}$. In particular,
\begin{eqnarray*}
0&=&\phi_i.w\\
&=&(s_i(x_i^2-x_{i+1}^2)+(x_i+x_{i+1})-c_ic_{i+1}(x_i-x_{i+1})).w\\
&=&-2(a+i)s_i.w+((\kappa_{i+1}+\kappa_i)+(\kappa_{i+1}-\kappa_i)c_ic_{i+1}).w.
\end{eqnarray*}
Hence, the result.
\end{proof}
We can now describe the irreducible segment representations of $\mathcal{A}Se(d)$.
\begin{thm}\label{module decomposition}
The following holds:
\begin{enumerate}
\item[(i)] The module $\hat{\Phi}_{[0,d-1]}$ is an irreducible $\mathcal{A}Se(d)$-module of type \texttt{Q}.
\item[(ii)] Assume $0<a\leq b$. The module $\hat{\Phi}_{[a,b]}$ has a submodule $\hat{\Phi}_{[a,b]}^+=\mathcal{C}l(d).w$, where $w=X_\emptyset.\hat{\mathbf{1}}$. Moreover, if $w'=(x_1-\kappa_1)X_{\{1\}}.\hat{\mathbf{1}}$ and $\hat{\Phi}_{[a,b]}^-=\mathcal{C}l(d).w'$, then
\[
\hat{\Phi}_{[a,b]}=\hat{\Phi}_{[a,b]}^+ \oplus \hat{\Phi}_{[a,b]}^-.
\]
The submodules $\hat{\Phi}_{[a,b]}^{\pm}$ are simple modules of type \texttt{M}.
\item[(iii)] If $0<a\leq b$, the module $\hat{\Phi}_{[-a,b]}$ has a submodule $\hat{\Phi}_{[-a,b]}^+=\mathcal{C}l(d)w\oplus\mathcal{C}l(d)\overline{w}$, where
\[
w=-(1+\sqrt{-1}c_ac_{a+1})X_{\{a+1\}}.\hat{\mathbf{1}}\,\,\,\,\,\, \mbox{and} \,\,\,\,\,\,\overline{w}=s_aw.
\]
Moreover, if
\[
w'=-(1-\sqrt{-1}c_ac_{a+1})X_{\{a+1\}}.\hat{\mathbf{1}},\;\;\;\overline{w}'=s_aw',
\]
and $\hat{\Phi}_{[-a,b]}^-=\mathcal{C}l(d)w'\oplus\mathcal{C}l(d)\overline{w}'$, then
\[
\hat{\Phi}_{[-a,b]}=\hat{\Phi}_{[-a,b]}^+\oplus \hat{\Phi}_{[-a,b]}^-.
\]
The submodules $\hat{\Phi}_{[-a,b]}^{\pm}$ are simple of type \texttt{M}.
\end{enumerate}
\end{thm}
\begin{proof}
(i) First, we deduce that $\hat{\Phi}_{[0,d-1]}$ is irreducible by character considerations. It has two \emph{non-homogeneous} submodules:
\[
\mathcal{C}l(d)(\sqrt{-d}+(c_1+\cdots+c_d)).\hat{\mathbf{1}}_{[0,d-1]}\,\,\,\,\,\, \mbox{and} \,\,\,\,\,\, \mathcal{C}l(d)(\sqrt{-d}-(c_1+\cdots+c_d)).\hat{\mathbf{1}}_{[0,d-1]}.
\]
These vector spaces are clearly stable under the action of $\mathcal{S}(d)$. Since $x_1$ acts by zero on these vector spaces, the action of $\mathcal{A}Se(d)$ factors through $\mathcal{S}(d)$, and thus these vector spaces are $\mathcal{A}Se(d)$-submodules. Therefore $\hat{\Phi}_{[0,d-1]}$ is of type \texttt{Q} (cf. Section~\ref{S:Prelim}).

(ii) Let $\hat{\mathbf{1}}=\hat{\mathbf{1}}_{[a,b]}$, $w=X_\emptyset.\hat{\mathbf{1}}$, and $\hat{\Phi}_{[a,b]}^+=\mathcal{C}l(d).w$. By Lemma~\ref{nonzero}, $w\neq 0$. Now, Lemmas~\ref{A submodule} and \ref{s_i action} together imply that $\hat{\Phi}_{[a,b]}^+$ is a submodule. It now remains to show that $\hat{\Phi}_{[a,b]}=\hat{\Phi}_{[a,b]}^+\oplus \hat{\Phi}_{[a,b]}^-$, where $\hat{\Phi}_{[a,b]}^-$ is as in the statement of the theorem. To this end, suppose for contradiction that $w'\in \hat{\Phi}_{[a,b]}^+$. That is, there exists $p(c)\in\mathcal{C}l(d)$ such that $p(c).w=w'$. Write
\[
p(c)=\sum_{\varepsilon}a_\varepsilon c^\varepsilon,
\]
where the sum is over $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_d)\in\mathbb{Z}_2^d$.
Then, for $1\leq i\leq d$,
\begin{eqnarray*}
(-1)^{\delta_{1i}}w'&=&\frac{1}{\kappa_i}x_i.w'
=\frac{1}{\kappa_i}x_i\left(\sum_{\varepsilon}a_{\varepsilon} c^{\varepsilon}\right).w
=\left(\sum_{\varepsilon}(-1)^{\varepsilon_i}a_{\varepsilon} c^{\varepsilon}\right).w,
\end{eqnarray*}
where (of course) the $\delta$ on the left of the equal sign is the Kronecker delta. This forces $p(c)=r c_1 + s$ for complex numbers $r$ and $s$. Since $w'$ is even, $r=0$, implying that $w'=s w$. This is impossible, since $x_1$ acts on $w$ by $\kappa_1$ but on $w'$ by $-\kappa_1$.

(iii) We deal with $\hat{\Phi}_{[-a,b]}^+$, the argument for the proposed submodule $\hat{\Phi}_{[-a,b]}^-$ being similar. Let $w=-(1+\sqrt{-1}c_ac_{a+1})X_{\{a+1\}}.\hat{\mathbf{1}}$, $\overline{w}=s_a.w$, and $\hat{\Phi}_{[-a,b]}^+=\mathcal{C}l(d).w+\mathcal{C}l(d).\overline{w}$. The proof of Lemma \ref{nonzero} shows that
\[
X_{\{a+1\}}.\hat{\mathbf{1}}=\prod_{\substack{1\leq i\leq d\\i\neq a+1}}(a+i-1+\kappa_i).\hat{\mathbf{1}}+(\bigstar).\hat{\mathbf{1}},
\]
where $(\bigstar)=p'(c)-\varphi p''(c)$ with $p'(c)\in\mathcal{C}l(d)_{\bar{0}}$, $p''(c)\in\mathcal{C}l(d)_{\bar{1}}$, and $p'(c)$ has no constant term. It is also easy to see that $p'(c)$ and $p''(c)$ have coefficients in $\mathbb{R}$. We conclude from this that $w\neq 0$. Note that by definition, $c_ac_{a+1}.w=-\sqrt{-1}w$. Lemma \ref{A submodule} shows that for $i\neq a,a+1$, $x_i.w=\kappa_iw$. Moreover,
\[
x_a.w=-(1-\sqrt{-1}c_ac_{a+1})x_aX_{\{a+1\}}.\hat{\mathbf{1}}=0.
\]
Also, $x_a.\hat{\mathbf{1}}=-c_ac_{a+1}x_{a+1}.\hat{\mathbf{1}}$ (see the computation \eqref{nonzero 2} for details). Thus,
\begin{eqnarray}\label{alternate w}
w=-\sqrt{-1}(1+\sqrt{-1}c_ac_{a+1})X_{\{a\}}.\hat{\mathbf{1}},
\end{eqnarray}
so $x_{a+1}.w=0$. As for $\overline{w}=s_aw$, we have $x_i.\overline{w}=\kappa_i\overline{w}$ for $i\neq a,a+1$. Using the commutation relations, we compute
\begin{eqnarray}\label{x_a}
x_a\overline{w}=x_as_a.w=(s_ax_{a+1}-1-c_ac_{a+1}).w=-(1+\sqrt{-1})w.
\end{eqnarray}
Similarly,
\begin{eqnarray}\label{x_{a+1}}
x_{a+1}.\overline{w}=(1+\sqrt{-1})w.
\end{eqnarray}
We now turn to the action of the symmetric group. First, for $i\neq a-1,a+1$, Lemma \ref{s_i action} shows that $s_i.w\in\hat{\Phi}_{[-a,b]}^+$. Also by Lemma \ref{s_i action},
\[
s_{a-1}X_{\{a+1\}}.\hat{\mathbf{1}}=\frac{\kappa_{a-1}}{2}(c_{a-1}c_a-1)X_{\{a+1\}}.\hat{\mathbf{1}}.
\]
Thus,
\begin{eqnarray*}
s_{a-1}.w&=&-\frac{\kappa_{a-1}}{2}(1+\sqrt{-1}c_{a-1}c_{a+1})(c_{a-1}c_a-1)X_{\{a+1\}}.\hat{\mathbf{1}}\\
&=&-\frac{\kappa_{a-1}}{2}(1+c_{a-1}c_a+\sqrt{-1}c_{a-1}c_{a+1}-\sqrt{-1}c_ac_{a+1})X_{\{a+1\}}.\hat{\mathbf{1}}\\
&=&\frac{\kappa_{a-1}}{2}(c_{a-1}c_a-1).w.
\end{eqnarray*}
Similarly, by \eqref{alternate w} and Lemma \ref{s_i action},
\[
s_{a+1}.w=\frac{\kappa_{a+2}}{2}(1+c_{a+1}c_{a+2}).w.
\]
Now, for $i\neq a-1,a+1$, $s_is_a=s_as_i$.
Hence, by Lemma \ref{s_i action},
\begin{eqnarray}\label{s_i.overline{w}}
s_i.\overline{w}=\left(\frac{\kappa_{i+1}+\kappa_i}{2(a+i)}
+\frac{\kappa_{i+1}-\kappa_i}{2(a+i)}c_ic_{i+1}\right).\overline{w}.
\end{eqnarray}
To deduce the action of $s_{a-1}$ and $s_{a+1}$ on $\overline{w}$, we proceed as in the proof of Lemma \ref{s_i action}. Recall again the intertwining elements $\varphi_{a-1}$ and $\varphi_{a+1}$. By character considerations, we deduce that $\varphi_{a-1}.\overline{w}=0=\varphi_{a+1}.\overline{w}$. Unlike in Lemma~\ref{A submodule}, in this case the action of $x_a$ (resp. $x_{a+1}$) is given by \eqref{x_a} (resp. \eqref{x_{a+1}}). Thus,
\begin{eqnarray}\label{s_{a-1}.overline{w}}
s_{a-1}.\overline{w}=\frac{(1+\sqrt{-1})}{2}(1+c_{a-1}c_a).w
-\frac{\kappa_{a-1}}{2}(1-c_{a-1}c_a).\overline{w}
\end{eqnarray}
and
\begin{eqnarray}\label{s_a.overline{w}}
s_{a+1}.\overline{w}=\frac{(1-\sqrt{-1})}{2}(1-c_{a+1}c_{a+2}).w
+\frac{\kappa_{a+2}}{2}(1+c_{a+1}c_{a+2}).\overline{w}.
\end{eqnarray}
It is easy to see that $\hat{\Phi}_{[-a,b]}=\hat{\Phi}_{[-a,b]}^+ + \hat{\Phi}_{[-a,b]}^-$, since $\frac{1}{2}(w+w')=X_{\{a\}}.\hat{\mathbf{1}}$ is a cyclic vector for $\hat{\Phi}_{[-a,b]}$.
As in part (ii), it is easy to see that if $w'=p(c)w+r(c)s_aw$, where $p(c)$ and $r(c)$ are polynomials in the Clifford generators, then $p(c)=\lambda_1+\lambda_2 c_ac_{a+1}$ and $r(c)=\lambda_3+\lambda_4 c_ac_{a+1}$ for some complex numbers $\lambda_1,\lambda_2,\lambda_3,\lambda_4$. Noting that $c_ac_{a+1}w=-\sqrt{-1}w$ gives that all the coefficients are zero. Therefore, we are left to show that $\hat{\Phi}_{[-a,b]}^+$ is simple. Indeed, assume $V\subseteq\hat{\Phi}_{[-a,b]}^+$ is a nonzero submodule. Then,
\[
\operatorname{ch} V=[a-1,\ldots,0,0,\ldots,b].
\]
Let $v=p_1(c).w+p_2(c).\overline{w}\in V$ be a vector satisfying $x_i.v=\kappa_iv$ for all $i$, where $p_1(c),p_2(c)\in\mathcal{C}l(d)$. For $i=1,2$, define $p_i'(c)$ by the formulae $x_ap_i(c)=p_i'(c)x_a$. Then,
\[
0=x_a.v=-(1+\sqrt{-1})p_2'(c).w,
\]
showing that $p_2'(c)=0$ (hence, $p_2(c)=0$). Now, arguing as above with the vector $s_a.v$ shows that $p_1(c)=0$.
\end{proof}

We can now define the irreducible segment representations, which are the key to defining the standard $\mathcal{A}Se(d)$-modules.

\begin{dfn}\label{segments} Let $a,b \in \mathbb{Z}_{\geq 0}$.
\begin{enumerate}
\item Let $\Phi_{[0,d-1]}=\hat{\Phi}_{[0,d-1]}$ and $\mathbf{1}:=X_{\{1\}}.\hat{\mathbf{1}}$, where $\kappa_i=\sqrt{q(i-1)}$.
\item If $0<a\leq b$, let $\Phi_{[a,b]}=\hat{\Phi}_{[a,b]}^+$ as in Theorem \ref{module decomposition}(ii), with $\kappa_i=+\sqrt{q(a+i-1)}$ for all $i$, and let $\mathbf{1}:=w$.
\item If $0<a\leq b$, let $\Phi_{[-a,b]}=\hat{\Phi}_{[-a,b]}^+$ with $\kappa_i=+\sqrt{q(-a+i-1)}$, $\mathbf{1}:=w$ and $\overline{\mathbf{1}}:=\overline{w}$.
\item If $0\leq a$, let $\Phi_{[a,a-1]}=\Phi_{\emptyset}=\mathbb{C}$.
\end{enumerate}
\end{dfn}

\subsection{Some Lie Theoretic Notation}\label{SS:LieThy}

It is convenient in this section to introduce some Lie theoretic notation. This section differs from \cite{kl} in that the notation defined here is associated to the Lie superalgebra $\mathfrak{q}(n)$ (as opposed to the Kac--Moody algebra $\mathfrak{b}_\infty$). Define the sets $P=\mathbb{Z}^n$, $P_{\geq0}=\mathbb{Z}^n_{\geq0}$, and
\begin{eqnarray}
\label{dom wt}P^+&=&\{\,\lambda=(\lambda_1,\ldots,\lambda_n)\in P\,|\,\lambda_i\geq\lambda_{i+1}\mbox{ for all }1\leq i<n\,\}\\
\label{dom typ wt}P^{++}&=&\{\,\lambda\in P^+\,|\,\lambda_i+\lambda_j\neq0\mbox{ for all } 1\leq i,j\leq n\,\}\\
\label{rat wt}P^+_{\mathrm{rat}}&=&\{\,\lambda\in P^+\,|\,\lambda_i=\lambda_{i+1}\mbox{ implies }\lambda_i=0\,\}\\
\label{poly wt}P_{\mathrm{poly}}^+&=&\{\,\lambda\in P^+_{\mathrm{rat}}\,|\,\lambda_n\geq 0\,\}\\
\label{pos wt}P_{\geq 0}&=&\{\,\lambda\in P\,|\,\lambda_i\geq0\mbox{ for all }i\,\}.
\end{eqnarray}
The weights \eqref{dom wt} are called dominant, and the weights \eqref{dom typ wt} are called dominant typical. A weight $\lambda\in P$ is simply \emph{typical} if $\lambda_i+\lambda_j\neq0$ for all $i,j$.
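For instance, when $n=2$ the weight $(2,-2)$ is dominant but not typical, since $2+(-2)=0$, whereas $(2,1)\in P^{++}$; and no weight with a repeated nonzero entry, such as $(1,1)$, satisfies the condition in \eqref{rat wt}.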
The weights \eqref{rat wt} are called rational, the weights \eqref{poly wt} are called polynomial, and the elements of the set \eqref{pos wt} are simply called compositions. For each of the sets $X=P^+,P^{++},P^+_{\mathrm{rat}},P_{\mathrm{poly}}^+,P_{\geq 0}$ above, define
\[
X(d)=\{\lambda\in X\mid\lambda_1+\cdots+\lambda_n=d\}.
\]
Let $R\subset P$ be the root system of type $A_{n-1}$. That is, $R=\{\alpha_{ij}\mid 1\leq i\neq j\leq n\}$, where $\alpha_{ij}$ is the $n$-tuple with $1$ in the $i$th coordinate and $-1$ in the $j$th coordinate. The positive roots are $R^+=\{\alpha_{ij}\in R \mid i<j\}$, the root lattice $Q$ is the $\mathbb{Z}$-span of $R$, and $Q^+$ is the $\mathbb{Z}_{\geq 0}$-span of $R^+$. The symmetric group $S_n$ acts on $P$ by place permutation. Define the length function $\ell:S_n\rightarrow\mathbb{Z}_{\geq0}$ in the usual way:
\[
\ell(w)=|\{\alpha\in R^+\mid w(\alpha)\in-R^+\}|.
\]
Equivalently, $\ell(w)$ is the number of simple transpositions occurring in a reduced expression for $w$. Write $w\rightarrow y$ if $y=s_\alpha w$ for some $\alpha\in R^+$ and $\ell(w)<\ell(y)$. Define the \emph{Bruhat} order on $S_n$ by $w<_by$ if there exists a sequence $w\rightarrow w_1\rightarrow\cdots\rightarrow y$. Also, for $\lambda\in P$, define
\[
S_n[\lambda]=\{\,w\in S_n \mid w(\lambda)=\lambda\,\}
\quad\mbox{and}\quad
R[\lambda]=\{\,\alpha_{ij}\in R\mid s_{ij}(\lambda)=\lambda\,\},
\]
and define
\[
P^+[\lambda]=\{\,\mu\in P\,|\,\mu_i\geq\mu_j\mbox{ if }s_{ij}\in S_n[\lambda]\,\}
\quad\mbox{and}\quad
P^-[\lambda]=\{\,\mu\in P\,|\,\mu_i\leq\mu_j\mbox{ if }s_{ij}\in S_n[\lambda]\,\},
\]
where $s_{ij}\in S_n$ denotes the transposition $(ij)$.
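To illustrate, in $S_3$ (with simple transpositions $s_1,s_2$) one has $\ell(s_1s_2)=2$ and $s_1<_b s_1s_2$, since $s_1s_2=s_{\alpha_{13}}s_1$ with $s_{\alpha_{13}}=s_1s_2s_1$ and $\ell(s_1)<\ell(s_1s_2)$. Likewise, for $\lambda=(1,1,0)$ one has $S_3[\lambda]=\{e,s_{12}\}$ and $R[\lambda]=\{\pm\alpha_{12}\}$.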
\subsection{Induced Modules}\label{SS:inducedmodules}

Using the irreducible segment representations defined above, we now define standard representations. Let $\lambda,\mu\in P$ satisfy $\lambda-\mu\in P_{\geq 0}(d)$. Define
\[
\widehat{\Phi}(\lambda,\mu)=\hat{\Phi}_{[\mu_1,\lambda_1-1]}\boxtimes\cdots\boxtimes\hat{\Phi}_{[\mu_n,\lambda_n-1]}
\]
and
\[
\Phi(\lambda,\mu)=\Phi_{[\mu_1,\lambda_1-1]}\circledast\cdots\circledast\Phi_{[\mu_n,\lambda_n-1]},
\]
and define \emph{standard (cyclic) modules} for $\mathcal{A}Se(d)$ by
\begin{equation}\label{E:Mhatdef}
\widehat{\mathcal{M}}(\lambda,\mu)=\operatorname{Ind}_{d_1,\ldots,d_n}^d\widehat{\Phi}(\lambda,\mu)
\end{equation}
and
\begin{equation}\label{E:Mdef}
\mathcal{M}(\lambda,\mu)=\operatorname{Ind}_{d_1,\ldots,d_n}^d\Phi(\lambda,\mu),
\end{equation}
where $d_i=\lambda_i-\mu_i$. We call the standard modules $\widehat{\mathcal{M}}(\lambda,\mu)$ and $\mathcal{M}(\lambda,\mu)$ \emph{big} and \emph{little}, respectively. Both the big and little standard modules are cyclic. Let
\begin{align}\label{E:hatcyclicvector}
\hat{\mathbf{1}}_{\lambda,\mu}=1\otimes(\hat{\mathbf{1}}\otimes\cdots\otimes\hat{\mathbf{1}})\in\widehat{\mathcal{M}}(\lambda,\mu)
\end{align}
be the distinguished cyclic generator of $\widehat{\mathcal{M}}(\lambda,\mu)$. Fix the following choice of distinguished cyclic generator $\mathbf{1}_{\lambda,\mu}\in\mathcal{M}(\lambda,\mu)$.
Let $i_1<\cdots<i_k$ be such that $\mu_{i_j}=0$ for all $j$, where $k=\gamma_0(\mu)$. Choose
\[
\mathbf{1}_{\lambda,\mu}=\prod_{j=1}^{\lfloor k/2\rfloor}(1-\sqrt{-1}c_{i_{2j-1}}c_{i_{2j}})\,1\otimes(\mathbf{1}\otimes\cdots\otimes\mathbf{1}).
\]

\begin{lem}\label{L:standard cyclic dim} Let $\lambda, \mu \in P$ be such that $\lambda - \mu \in P_{\geq 0}(d)$. Then,
\begin{enumerate}
\item[(i)] $\dim\widehat{\mathcal{M}}(\lambda,\mu)=\frac{d!}{d_1!\cdots d_n!}2^{d+n-\gamma_0(\mu)}$,
\item[(ii)] $\dim\mathcal{M}(\lambda,\mu)=\frac{d!}{d_1!\cdots d_n!}2^{d-\lfloor\frac{\gamma_0(\mu)}{2}\rfloor}$,
\item[(iii)] $\widehat{\mathcal{M}}(\lambda,\mu)\cong\mathcal{M}(\lambda,\mu)^{\oplus 2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor}}$.
\end{enumerate}
\end{lem}

\begin{proof}
(i) The dimension of $\widehat{\mathcal{M}}(\lambda,\mu)$ follows from the definition. (ii) Use Theorem \ref{module decomposition}. (iii) Since induction commutes with direct sums, $\widehat{\mathcal{M}}(\lambda,\mu)$ is a direct sum of copies of $\mathcal{M}(\lambda,\mu)$. A dimension count using (i) and (ii) yields (iii).
\end{proof}

We end this section by recording certain data about the weight spaces and generalized weight spaces of $\mathcal{M}(\lambda,\mu)$ which will be useful later. Define the weight $\zeta_{\lambda,\mu}:\mathcal{P}_d[x^2]\rightarrow\mathbb{C}$ by $f.\mathbf{1}_{\lambda,\mu}=\zeta_{\lambda,\mu}(f)\mathbf{1}_{\lambda,\mu}$ for all $f\in\mathcal{P}_d[x^2]$.
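For example, take $n=2$, $d=3$, $\lambda=(2,1)$ and $\mu=(0,0)$, so that $d_1=2$ and $d_2=1$. Then $\mathcal{M}(\lambda,\mu)=\operatorname{Ind}_{2,1}^3\left(\Phi_{[0,1]}\circledast\Phi_{[0,0]}\right)$, and $\zeta_{\lambda,\mu}$ is the weight $(0,1,0)$, in the sense that $\zeta_{\lambda,\mu}(x_1^2)=0$, $\zeta_{\lambda,\mu}(x_2^2)=1$ and $\zeta_{\lambda,\mu}(x_3^2)=0$. Moreover, $\gamma_0(\mu)=2$, so Lemma \ref{L:standard cyclic dim} gives
\[
\dim\mathcal{M}(\lambda,\mu)=\frac{3!}{2!\,1!}2^{3-1}=12
\quad\mbox{and}\quad
\dim\widehat{\mathcal{M}}(\lambda,\mu)=\frac{3!}{2!\,1!}2^{3+2-2}=24.
\]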
As in $\S$\ref{SS:LieThy}, the symmetric group $S_d$ acts on an integral weight $\zeta:\mathcal{P}_d[x^2]\rightarrow\mathbb{C}$ by $w(\zeta)(x_i^2)=\zeta(x_{w(i)}^2)$. Let
\[
S_d[\zeta]=\{\,w\in S_d\,|\,w(\zeta)=\zeta\,\}.
\]
Define $\ell(w)$ to be the length of $w$ (i.e.\ the number of simple transpositions occurring in a reduced expression of $w$) and recall the definition of the Bruhat order given in Section~\ref{SS:LieThy}.

\begin{lem}\label{L:weights of M} Given $\lambda,\mu\in P$ with $\lambda-\mu\in P_{\geq 0}(d)$,
\begin{enumerate}
\item[(i)] $P(\mathcal{M}(\lambda,\mu))=\{\,w(\zeta_{\lambda,\mu})\,|\,w\in D_{\lambda-\mu}\,\}$;
\item[(ii)] for any $\zeta\in P(\mathcal{M}(\lambda,\mu))$,
\[
\dim\mathcal{M}(\lambda,\mu)_{\zeta}^{\mathrm{gen}}=2^{d-\lfloor\frac{\gamma_0(\mu)}{2}\rfloor}|\{\,w\in D_{\lambda-\mu}\,|\,w(\zeta)=\zeta\,\}|.
\]
In particular,
\[
\dim\mathcal{M}(\lambda,\mu)_{\zeta_{\lambda,\mu}}^{\mathrm{gen}}=2^{d-\lfloor\frac{\gamma_0(\mu)}{2}\rfloor}\big|D_{\lambda-\mu}\cap S_d[\zeta_{\lambda,\mu}]\big|.
\]
\end{enumerate}
\end{lem}

\begin{proof}
(i) This follows directly upon applying the Mackey Theorem to the character map. (ii) Given $f\in\mathcal{P}_d[x^2]$ and $w\in S_d$, we have the relation
\[
fw=w\cdot w^{-1}(f)+\sum_{u<_{b}w}uC_uf_u,
\]
where the sum is over $u<_{b}w$ in the Bruhat order, $C_u\in\mathcal{C}l(d)$, $f_u\in\mathcal{P}_d[x]$ and $\deg f_u<\deg f$; see \cite[Lemma 14.2.1]{kl}.
Therefore, if $f\in\mathcal{P}_d[x^2]$, $C\in\mathcal{C}l(d)$ and $w\in D_{\lambda-\mu}$,
\begin{align}\label{E:lowerTriangular}
f(wC.\mathbf{1}_{\lambda,\mu})=w(\zeta_{\lambda,\mu})(f)wC.\mathbf{1}_{\lambda,\mu}+\sum_{u<_{b}w}uC_uf_u.\mathbf{1}_{\lambda,\mu},
\end{align}
where the sum is over $u\in D_{\lambda-\mu}$. In particular, $wC.\mathbf{1}_{\lambda,\mu}\in\mathcal{M}(\lambda,\mu)_{\zeta_{\lambda,\mu}}^{\mathrm{gen}}$ only if $w\in D_{\lambda-\mu}\cap S_d[\zeta_{\lambda,\mu}]$. Conversely, if $w\in D_{\lambda-\mu}\cap S_d[\zeta_{\lambda,\mu}]$, it is straightforward to see that all $u$ occurring on the right-hand side of \eqref{E:lowerTriangular} also belong to $D_{\lambda-\mu}\cap S_d[\zeta_{\lambda,\mu}]$. This gives the result.
\end{proof}

\subsection{Unique Simple Quotients}\label{unique simple quotient}

In general, the standard cyclic module $\mathcal{M}(\lambda,\mu)$ may not have a unique simple head. However, in this subsection we determine sufficient conditions for this to hold. Throughout this section, keep in mind that $q(a)=q(-a-1)$ for all $a\in\mathbb{Z}$. We follow closely the strategy in \cite{su2}. We begin with some preparatory lemmas.

\begin{lem}\label{L:x weights} Let $M$ be an $\mathcal{A}Se(d)$-module and let $\zeta$ be a weight of $M$. Then there exists $v\in M_{\zeta}$ such that
\[
x_i.v=\sqrt{q(\zeta(x_i^2))}\;v
\]
for all $i=1,\ldots,d$.
\end{lem}

\begin{proof}
Choose $0\neq v_0\in M_\zeta$. Recall the definition \ref{X}.
We adapt this to our current situation by setting $\kappa_i=\sqrt{q(\zeta(x_i^2))}$ and $S=\{\,i \mid x_i.v_0=-\kappa_iv_0\,\}$. Then $v_1:=X_S.v_0\in M_\zeta$ is nonzero and $x_i.v_1=\pm\kappa_iv_1$ for all $i$. Now, set
\[
v=\left(\prod_{i\in S}c_i\right)v_1.
\]
Then $v$ is nonzero and has the desired properties.
\end{proof}

Therefore, we may define the nonzero subspace
\[
M_{\sqrt{\zeta}}=\left\{\,m\in M_\zeta \mid x_i.m=\sqrt{q(\zeta(x_i^2))}\;m\mbox{ for }i=1,\ldots,d\,\right\}.
\]
We will use the following key lemma repeatedly in this section.

\begin{lem}\label{techlemma} Let $Y$ be in $\operatorname{Rep}\mathcal{A}Se(d)$ and let $v \in Y_{\sqrt{\zeta}}$ for some weight $\zeta$. Assume that for some $1\leq i<d-1$ we have $x_i.v=\sqrt{q(a)}\,v$ and $x_{i+1}.v=\sqrt{q(b)}\,v$, where $a,b\in\mathbb{Z}$ and either $q(a)\neq0$ or $q(b)\neq 0$. Further, if $q(a)=q(b\pm1)$, assume that
\begin{align}\label{E:techlemma}
s_{i+1}.v=(\kappa_1+\kappa_2c_{i+1}c_{i+2}).v
\end{align}
for some constants $\kappa_1,\kappa_2\in\mathbb{C}$, not both $0$. Then $v\in\mathcal{A}Se(d).\varphi_i.v$.
\end{lem}

\begin{proof}
First, if $q(a)=q(b)\neq0$, then using \eqref{E:intertwiner} and Lemma 14.8.1 of \cite{kl} we deduce that
\[
\varphi_i.v=2q(a)v\neq0,
\]
so the result is trivial. If $q(a)\neq q(b\pm1)$, then using \eqref{E:intertwinersquared} we deduce that
\[
\varphi_i^2.v=(2q(a)-2q(b)-(q(a)-q(b))^2)v\neq0,
\]
and again the result is trivial.
Now, let $\kappa_3=q(a)-q(b)\neq0$, $\kappa_4=\sqrt{q(a)}-\sqrt{q(b)}\neq0$ and $\kappa_5=\sqrt{q(a)}+\sqrt{q(b)}>0$. Then, appealing again to \eqref{E:intertwiner}, we have that
\[
\varphi_{i}v=(\kappa_3 s_{i}-\kappa_4 c_{i}c_{i+1}+\kappa_5)v.
\]
Let $\mathbf{c'}$ and $\mathbf{c''}$ be two elements of the Clifford algebra. Consider an expression of the form
\begin{align*}
(1+\mathbf{c'}s_{i+1}-\mathbf{c''}s_{i}s_{i+1})\varphi_iv
=&(\kappa_3 s_{i}-\kappa_4 c_{i}c_{i+1}+\kappa_5+\kappa_3\mathbf{c'}s_{i+1}s_{i}\\
&-\kappa_4\mathbf{c'}c_{i}c_{i+2}s_{i+1}+\kappa_5\mathbf{c'}s_{i+1}-\kappa_3\mathbf{c''}s_{i+1}s_{i}s_{i+1}\\
&+\kappa_4\mathbf{c''}c_{i+1}c_{i+2}s_{i}s_{i+1}-\kappa_5\mathbf{c''}s_{i}s_{i+1})v.
\end{align*}
By \eqref{E:techlemma}, this equals
\begin{align*}
(\kappa_3&s_{i}-\kappa_4 c_{i}c_{i+1}+\kappa_5+\kappa_3\mathbf{c'}s_{i+1}s_{i}-\kappa_1\kappa_4\mathbf{c'}c_{i}c_{i+2}\\
&-\kappa_2\kappa_4\mathbf{c'}c_{i}c_{i+1}+\kappa_1\kappa_5\mathbf{c'}+\kappa_2\kappa_5\mathbf{c'}c_{i+1}c_{i+2}-\kappa_1\kappa_3\mathbf{c''}s_{i+1}s_{i}\\
&-\kappa_2\kappa_3\mathbf{c''}c_{i}c_{i+1}s_{i+1}s_{i}+\kappa_1\kappa_4\mathbf{c''}c_{i+1}c_{i+2}s_{i}-\kappa_2\kappa_4\mathbf{c''}c_{i}c_{i+1}s_{i}\\
&-\kappa_1\kappa_5\mathbf{c''}s_{i}-\kappa_2\kappa_5\mathbf{c''}c_{i}c_{i+2}s_{i})v.
\end{align*}
The coefficient of $s_iv$ is
$$
\kappa_3+\kappa_1\kappa_4\mathbf{c''}c_{i+1}c_{i+2}-\kappa_2\kappa_4\mathbf{c''}c_{i}c_{i+1}-\kappa_1\kappa_5\mathbf{c''}-\kappa_2\kappa_5\mathbf{c''}c_{i}c_{i+2}.
$$
The coefficient of $s_{i+1}s_{i}v$ is
$$
\kappa_3\mathbf{c'}-\kappa_1\kappa_3\mathbf{c''}-\kappa_2\kappa_3\mathbf{c''}c_{i}c_{i+1}.
$$
In order to make both of these coefficients zero, set $\mathbf{c'}=\mathbf{c''}(\kappa_1+\kappa_2c_{i}c_{i+1})$ and
$$
\mathbf{c''}=\gamma(\kappa_1\kappa_5+\kappa_1\kappa_4 c_{i+1}c_{i+2}-\kappa_2\kappa_4 c_{i}c_{i+1}-\kappa_2\kappa_5 c_{i}c_{i+2}),
$$
where
$$
\gamma=\frac{-\kappa_3}{(\kappa_1^2+\kappa_2^2)(\kappa_4^2+\kappa_5^2)}.
$$
The coefficient of $v$ is
\begin{align*}
-\kappa_4 c_{i}c_{i+1}&+\kappa_5-\kappa_1\kappa_4\mathbf{c'}c_{i}c_{i+2}-\kappa_2\kappa_4\mathbf{c'}c_{i}c_{i+1}+\kappa_1\kappa_5\mathbf{c'}+\kappa_2\kappa_5\mathbf{c'}c_{i+1}c_{i+2}\\
=&-\kappa_4 c_{i}c_{i+1}+\kappa_5-\kappa_1\kappa_4\mathbf{c''}(\kappa_1c_{i}c_{i+2}+\kappa_2 c_{i+1}c_{i+2})-\kappa_2\kappa_4\mathbf{c''}(\kappa_1 c_{i}c_{i+1}-\kappa_2)\\
&+\kappa_1\kappa_5\mathbf{c''}(\kappa_1+\kappa_2c_{i}c_{i+1})+\kappa_2\kappa_4\mathbf{c''}(\kappa_1 c_{i+1}c_{i+2}-\kappa_2 c_{i}c_{i+2}).
\end{align*}
This is equal to
\begin{align*}
\kappa_5-\kappa_4&c_{i}c_{i+1}+(-\kappa_1\kappa_2\kappa_4+\kappa_1\kappa_2\kappa_5)\mathbf{c''}c_{i}c_{i+1}+(-\kappa_1^2\kappa_4-\kappa_2^2\kappa_5)\mathbf{c''}c_{i}c_{i+2}\\
&+(-\kappa_1\kappa_2\kappa_4+\kappa_1\kappa_2\kappa_5)\mathbf{c''}c_{i+1}c_{i+2}+(\kappa_2^2\kappa_4+\kappa_1^2\kappa_5)\mathbf{c''}\\
=&\kappa_5-\kappa_4 c_{i}c_{i+1}+(\kappa_1\kappa_2\kappa_5-\kappa_1\kappa_2\kappa_4)\gamma(-\kappa_1\kappa_5 c_{i}c_{i+1}-\kappa_1\kappa_4 c_{i}c_{i+2}-\kappa_2\kappa_4-\kappa_2\kappa_5 c_{i+1}c_{i+2})\\
&+(-\kappa_1^2\kappa_4-\kappa_2^2\kappa_5)\gamma(-\kappa_1\kappa_5 c_{i}c_{i+2}+\kappa_1\kappa_4 c_{i}c_{i+1}-\kappa_2\kappa_5+\kappa_2\kappa_4 c_{i+1}c_{i+2})\\
&+(-\kappa_1\kappa_2\kappa_4+\kappa_1\kappa_2\kappa_5)\gamma(-\kappa_1\kappa_5 c_{i+1}c_{i+2}-\kappa_2\kappa_4c_{i}c_{i+2}+\kappa_1\kappa_4+
\kappa_2\kappa_5 c_{i}c_{i+1})\\
&+(\kappa_2^2\kappa_4+\kappa_1^2\kappa_5)\gamma(-\kappa_1\kappa_4 c_{i+1}c_{i+2}+\kappa_2\kappa_4 c_{i}c_{i+1}-\kappa_1\kappa_5+\kappa_2\kappa_5 c_{i}c_{i+2})\\
=&\kappa_5+\delta_1 c_{i}c_{i+1}+\delta_2 c_{i+1}c_{i+2}+\delta_3 c_{i}c_{i+2}
\end{align*}
for some constants $\delta_1,\delta_2,\delta_3\in\mathbb{R}$. Thus,
\begin{align*}
(\kappa_5-\delta_1 c_{i}c_{i+1}-\delta_2 c_{i+1}c_{i+2}-\delta_3 c_{i}c_{i+2})&(1+\mathbf{c'}s_{i+1}-\mathbf{c''}s_{i}s_{i+1})\varphi_iv\\
=&(\kappa_5^2+\delta_1^2+\delta_2^2+\delta_3^2)v.
\end{align*}
Since $\delta_1^2,\delta_2^2,\delta_3^2\in\mathbb{R}_{\geq 0}$ and $\kappa_5>0$, the result follows.
\end{proof}

\begin{prp}\label{dominant wt space} Assume that $\lambda\in P^{++}$, $\mu\in P^+[\lambda]$, and $\lambda-\mu\in P_{\geq 0}(d)$. Then,
\[
\mathcal{M}(\lambda,\mu)_{\sqrt{\zeta_{\lambda,\mu}}}=\mathbb{C}\mathbf{1}_{\lambda,\mu}.
\]
\end{prp}

We begin by proving a special case of the proposition. Suppose $n$ divides $d$ and $d/n=b-a$ for some $a,b\in\mathbb{Z}$ with $b>0$. Let $\lambda=(b,\ldots,b)$ and $\mu=(a,\ldots,a)$ be weights of $\mathfrak{q}(n)$. Set $\mathcal{M}_{a,b,n}=\mathcal{M}(\lambda,\mu)$ and $\mathbf{1}_{a,b,n}=\mathbf{1}_{\lambda,\mu}$.
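For example, with $n=2$ and $d=4$ one may take $a=0$ and $b=2$, so that $\lambda=(2,2)$, $\mu=(0,0)$, and
\[
\mathcal{M}_{0,2,2}=\operatorname{Ind}_{2,2}^4\left(\Phi_{[0,1]}\circledast\Phi_{[0,1]}\right).
\]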
Let
$$
\zeta_{a,b,n}=(a,a+1,\ldots,b-1,\ldots,a,a+1,\ldots,b-1)
$$
be the weight for $\mathcal{A}Se(d)$ in which the sequence $a,a+1,\ldots,b-1$ appears $n$ times. The first goal is to compute the weight space $(\mathcal{M}_{a,b,n})_{\sqrt{\zeta_{a,b,n}}}$. Set $n=d$ in the definition above, so that $b=a+1$. The resulting module is the Kato module $K(a,\ldots,a)=K_a$, on which all the $x_i^2$ act by $q(a)$ on the vector $\mathbf{1}_{a,b,n}$. The following is \cite[Lemma 16.3.2, Theorem 16.3.3]{kl}.

\begin{lem}\label{katolemma}
\begin{enumerate}
\item If $a\neq-1,0$, the weight space of $K(a,\ldots,a)$ corresponding to $(a,\ldots,a)$ with respect to the operators $x_1^2,\ldots,x_n^2$ has dimension $2^n$. If $a=-1$ or $0$, then the weight space of $K(a,\ldots,a)$ corresponding to $(a,\ldots,a)$ with respect to the operators $x_1,\ldots,x_n$ has dimension $2^{\lfloor\frac{n+1}{2}\rfloor}$.
\item The module $K(a,\ldots,a)$ is equal to its generalized weight space for the weight $(a,\ldots,a)$.
\item The module $K(a,\ldots,a)$ is simple of type \texttt{Q} if $a=0$ and $d$ is odd, and is of type \texttt{M} otherwise.
\end{enumerate}
\end{lem}

Set $m=d/n$. In the set of weights of $\mathcal{M}_{a,b,n}$ there exists a unique anti-dominant weight $\zeta_{a,b,n}^{\circ}$, given by
$$
\zeta_{a,b,n}^{\circ}=(\underbrace{a,\ldots,a,}_n\underbrace{a+1,\ldots,a+1,}_n\ldots,\underbrace{b-1,\ldots,b-1}_n).
$$
Take an element $\tau\in D_{\lambda-\mu}$ such that $\tau(\zeta_{a,b,n})=\zeta_{a,b,n}^{\circ}.
$ If $a\geq 0$, it is given by $\tau=\omega^1\cdots\omega^{m-1}$, where $\omega^p=\rho_{n-1}^p\rho_{n-2}^p\cdots\rho_1^p$,
$$
\rho_k^p=\xi_{k(p+1)-(k-1)}^p\cdots\xi_{k(p+1)-1}^p\xi_{k(p+1)}^p,
$$
and, for $1\leq r\leq d-1$ and $1\leq p\leq d-r$, $\xi_r^p=s_{r+p-1}\cdots s_{r+1}s_r$. If $b\leq0$, then $\tau=\sigma(\omega^1\cdots\omega^{m-1})$, where $\sigma$ is the automorphism of $\mathcal{A}Se(d)$. Finally, if $a<0$ and $b>0$, then $\tau=\sigma_{(-a+1)n}(\omega^2\cdots\omega^{-a})\omega^{-a+1}\cdots\omega^{m-1}$, where $\sigma_{(-a+1)n}$ is the automorphism of $\mathcal{A}Se((-a+1)n)\subseteq\mathcal{A}Se(d)$ embedded on the left.

\begin{lem}\label{cyclicvectorlemma} The vector $\varphi_{\tau}\mathbf{1}_{a,b,n}$ is a cyclic vector of $\mathcal{M}_{a,b,n}$.
\end{lem}

\begin{proof}
This follows from iterated applications of Lemma~\ref{techlemma}.
\end{proof}

The proof of the following lemma is similar to \cite[Lemma A.7]{su2}, substituting Lemmas~\ref{katolemma} and~\ref{L:weights of M} appropriately into Suzuki's argument.

\begin{lem}\label{antidominantlemma}
$(\mathcal{M}_{a,b,n})_{\sqrt{\zeta_{a,b,n}^{\circ}}}\subseteq\varphi_{\tau}\mathcal{C}l(d)\mathbf{1}_{a,b,n}.
$
\end{lem}

\begin{proof}
By an argument similar to the proof of \cite[Lemma A.7]{su2}, we deduce that
\[
(\mathcal{M}_{a,b,n})_{\zeta_{a,b,n}^{\circ}}\cong(K_a)_{a^{(n)}}\circledast(K_{a+1})_{(a+1)^{(n)}}\circledast\cdots\circledast(K_{b-1})_{(b-1)^{(n)}}
\]
if $a\geq 0$, and
\[
(\mathcal{M}_{a,b,n})_{\zeta_{a,b,n}^{\circ}}\cong(K_{-a-1})_{(-a-1)^{(n)}}\circledast\cdots\circledast(K_1)_{1^{(n)}}\circledast(K_0)_{0^{(2n)}}\circledast(K_1)_{1^{(n)}}\circledast\cdots\circledast(K_{b-1})_{(b-1)^{(n)}}
\]
if $a<0$. Here, $(K_j)_{j^{(n)}}$ is the weight space $K(j,\ldots,j)_{(j,\ldots,j)}$ of a Kato module. Since
\[
(\mathcal{M}_{a,b,n})_{\sqrt{\zeta_{a,b,n}^{\circ}}}\subseteq(\mathcal{M}_{a,b,n})_{\zeta_{a,b,n}^{\circ}},
\]
we deduce that, if $a\geq 0$,
\[
(\mathcal{M}_{a,b,n})_{\sqrt{\zeta_{a,b,n}^{\circ}}}=(K_a)_{\sqrt{a^{(n)}}}\circledast(K_{a+1})_{\sqrt{(a+1)^{(n)}}}\circledast\cdots\circledast(K_{b-1})_{\sqrt{(b-1)^{(n)}}}\subseteq\mathcal{C}l(d)\varphi_{\tau}\mathbf{1}_{a,b,n}.
\]
Similarly, if $a<0$, $(\mathcal{M}_{a,b,n})_{\sqrt{\zeta_{a,b,n}^{\circ}}}\subseteq\mathcal{C}l(d)\varphi_{\tau}\mathbf{1}_{a,b,n}$.
\end{proof}

\begin{prp}\label{mainprop1} For the special standard module defined above,
$(\mathcal{M}_{a,b,n})_{\sqrt{\zeta_{a,b,n}}}\subseteq\mathcal{C}l(d)\mathbf{1}_{a,b,n}$.
\end{prp}

\begin{proof}
For $i=1,\ldots,d$, write $i=jm+r$, where $0\leq j<n$ and $0<r\leq m$. Take any $v\in(\mathcal{M}_{a,b,n})_{\sqrt{\zeta_{a,b,n}}}$.
Lemma~\ref{antidominantlemma} implies that $\varphi_{\tau}v=\varphi_{\tau}z\mathbf{1}$ for some $z\in\mathcal{C}l(d)$. Put $v_0=v-z\mathbf{1}$. Then $\varphi_{\tau}v_0=0$. Note that for $r\neq m$ we have $\varphi_iv_0=0$, since $s_i(\zeta_{a,b,n})$ is not a weight of $\mathcal{M}_{a,b,n}$. If $r\neq -a$, we can solve for $s_iv_0$ in the equation $\varphi_i.v_0=0$ to get
\[
s_i.v_0=\left(\frac{\kappa_r-\kappa_{r-1}}{-2(a+r)}+\frac{\kappa_r+\kappa_{r-1}}{-2(a+r)}c_ic_{i+1}\right)v_0,
\]
where $\kappa_r=\sqrt{q(a+r-1)}$. Similarly, if $r\neq -a$,
\[
s_i.\mathbf{1}_{a,b,n}=\left(\frac{\kappa_r-\kappa_{r-1}}{-2(a+r)}+\frac{\kappa_r+\kappa_{r-1}}{-2(a+r)}c_ic_{i+1}\right)\mathbf{1}_{a,b,n}.
\]
If $r=-a$, then the routine calculations from earlier give
\[
c_ic_{i+1}\mathbf{1}_{a,b,n}=-\sqrt{-1}\,\mathbf{1}_{a,b,n}.
\]
Hence there exists an $\mathcal{A}Se(d)$-homomorphism $\psi:\mathcal{M}_{a,b,n}\rightarrow\mathcal{M}_{a,b,n}$ such that $\psi(\mathbf{1}_{a,b,n})=v_0$ if $a\geq 0$ or $b\leq 0$. If $a<0<b$, then there is an $\mathcal{A}Se(d)$-homomorphism $\psi:\mathcal{M}_{a,b,n}\rightarrow\mathcal{M}_{a,b,n}$ such that $\psi(\mathbf{1}_{a,b,n})=\prod_{0\leq j<n}(1+\sqrt{-1}c_{jm-a}c_{jm-a+1})v_0$. Thus, by Lemma~\ref{cyclicvectorlemma}, the kernel of $\psi$ is equal to $\mathcal{M}_{a,b,n}$. Therefore $v_0=0$, and thus $v\in\mathcal{C}l(d)\mathbf{1}_{a,b,n}$.
\end{proof}

We now reduce the general case to the special case above.
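To illustrate the combinatorics of the special case, take $a=0$, $b=2$ and $n=2$, so that $d=4$ and $m=2$. Then $\zeta_{0,2,2}=(0,1,0,1)$ and $\zeta_{0,2,2}^{\circ}=(0,0,1,1)$, and the recipe above gives
\[
\tau=\omega^1=\rho_1^1=\xi_2^1=s_2,
\]
which indeed carries $(0,1,0,1)$ to $(0,0,1,1)$.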
To this end, fix $\lambda \in P^{++}$, $\mu \in P^+[\lambda]$, and $\lambda-\mu \in P^{+}_{\mathrm{poly}}(d)$. Set $d_i = \lambda_i - \mu_i$, and let $a_i = d_1 + \cdots + d_{i-1} + 1$ and $b_i = d_1 + \cdots + d_i$. Observe that
\begin{align}\label{E:Step2Formulae}
\zeta_{\lambda,\mu}(x^2_{a_i}) = \mu_i \,\,\,\,\,\, \mbox{and} \,\,\,\,\,\, \zeta_{\lambda,\mu}(x^2_{b_i}) = \lambda_i - 1.
\end{align}
Furthermore, observe that if $a_i \leq c \leq b_i$,
\begin{align}\label{E:Step2Formulae2}
\zeta_{\lambda,\mu}(x^2_{c}) = \zeta_{\lambda,\mu}(x^2_{b_i}) - (b_i - c) \,\,\,\,\,\, \mbox{and} \,\,\,\,\,\, \zeta_{\lambda,\mu}(x^2_{c}) = \zeta_{\lambda,\mu}(x^2_{a_i}) + (c - a_i).
\end{align}
Since $\lambda \in P^{++}$ and $\mu \in P^+[\lambda]$, we can find integers $0 = n'_0 < n'_1 < \cdots < n'_r = n$ and $0 = n_0 < n_1 < \cdots < n_s = n$ such that
\[
R[\lambda] = R \cap \sum_{i \neq n'_0, \ldots, n'_r} \mathbb{Z} \alpha_i \,\,\,\,\,\, \mbox{and} \,\,\,\,\,\, R[\lambda] \cap R[\mu] = R \cap \sum_{i \neq n_0, \ldots, n_s} \mathbb{Z} \alpha_i.
\]
Let
$$ I'_p = \{\, a_{n'_{p-1}+1},\, a_{n'_{p-1}+1}+1,\, \ldots,\, b_{n'_p}-1 \,\} \;\;\; (p = 1, \ldots, r), \;\;\; I' = I'_1 \cup \ldots \cup I'_r, $$
and
$$ I_p = \{\, a_{n_{p-1}+1},\, a_{n_{p-1}+1}+1,\, \ldots,\, b_{n_p}-1 \,\} \;\;\; (p = 1, \ldots, s), \;\;\; I = I_1 \cup \ldots \cup I_s. $$
Then $S_{\lambda-\mu} \subseteq S_I \subseteq S_{I'}$ and
\[
S_{I'}/S_{\lambda-\mu} \cong D_{\lambda-\mu} \cap S_{I'} \,\,\,\,\,\, \mbox{and} \,\,\,\,\,\, S_I/S_{\lambda-\mu} \cong D_{\lambda-\mu} \cap S_I, \;\;\; \mbox{(cf.
$\S$\ref{SS:Mackey}).}
\]

\begin{lem}\cite[Lemma A.9]{su2}
There is a containment of sets $D_{\lambda-\mu} \cap S_d[\zeta_{\lambda,\mu}] \subset D_{\lambda-\mu} \cap S_I$.
\end{lem}

Let $v \in \mathcal{M}(\lambda,\mu)_{\sqrt{\zeta_{\lambda,\mu}}}$. For each $p \in \lbrace 1, \ldots, s \rbrace$, we can write $v = \sum_j x_j^{(p)} z_j^{(p)} v_j$, where $v_j \in \Phi(\lambda,\mu)$, $\lbrace x_j^{(p)} \rbrace_j$ are linearly independent elements of $\mathbb{C}[D_{\lambda-\mu} \cap S_{I - I_p}]$, and $z_j^{(p)} \in \mathbb{C}[D_{\lambda-\mu} \cap S_{I_p}]$. Let $\mathcal{P}_d[x^2]_{I_p} = \mathbb{C}[x_i^2 \mid i \in I_p]$.

\begin{lem}\cite[Lemma A.10]{su2}
For $f \in \mathcal{P}_d[x^2]_{I_p}$, $f z_k^{(p)} v_j = \zeta_{\lambda,\mu}(f) z_k^{(p)} v_j$.
\end{lem}

\begin{proof}
Observe that
\[
0 = (f - \zeta_{\lambda,\mu}(f)) v = \sum_j x_j^{(p)} (f - \zeta_{\lambda,\mu}(f)) z_j^{(p)} \mathbf{1}_{\lambda,\mu}.
\]
Since $S_{I_p} \subset S_d$ is closed with respect to the Bruhat order, we have $f z_j^{(p)} \mathbf{1}_{\lambda,\mu} \in \mathbb{C}[D_{\lambda-\mu} \cap S_{I_p}]$. Since $\{x_j^{(p)}\}_j$ are linearly independent, each $(f - \zeta_{\lambda,\mu}(f)) z_j^{(p)} \mathbf{1}_{\lambda,\mu}$ must be $0$.
\end{proof}

\noindent\emph{Proof of Proposition \ref{dominant wt space}.} Let $\mathcal{A}Se(I_p)$ be the subalgebra corresponding to $I_p$. Note that $\mathcal{A}Se(I_p) \cong \mathcal{A}Se(|I_p|)$. First note that $\mathcal{A}Se(I_p) v_j \cong \mathcal{M}_{a,b,n_p - n_{p-1}}$ for some $a, b$.
Thus, by Proposition~\ref{mainprop1}, $z_k^{(p)} v_j \in \mathbb{C} \mathbf{1}_{\lambda,\mu}$. Thus, $v \in \mathbb{C}[D_{\lambda-\mu} \cap S_{I - I_p}]$ for any $p$. It now follows that $v \in \mathbb{C} \mathbf{1}_{\lambda,\mu}$. \qed

\begin{thm}\label{thm:unique irred quotient}
Assume that $\lambda \in P^{++}$, $\mu \in P^+[\lambda]$, and $\lambda-\mu \in P^{+}_{\mathrm{poly}}(d)$. Then $\mathcal{M}(\lambda,\mu)$ has a unique simple quotient module, denoted $\mathcal{L}(\lambda,\mu)$.
\end{thm}

\begin{proof}
Assume $N$ is a submodule of $\mathcal{M}(\lambda,\mu)$. If $N_{\zeta_{\lambda,\mu}}^{\mathrm{gen}} \neq 0$, then $N_{\sqrt{\zeta_{\lambda,\mu}}} \neq 0$. By the previous lemma, $N \cap \mathcal{C}l(d) \mathbf{1}_{\lambda,\mu} \neq \{0\}$, so $\mathbf{1}_{\lambda,\mu} \in N$ because $\mathcal{C}l(d) \mathbf{1}_{\lambda,\mu}$ is an irreducible $\mathcal{A}Se(\lambda-\mu)$-module. Hence, $N = \mathcal{M}(\lambda,\mu)$. It follows that any proper submodule $N$ satisfies
\[
N \subseteq \bigoplus_{\beta \neq \zeta_{\lambda,\mu}} \mathcal{M}(\lambda,\mu)^{\mathrm{gen}}_{\beta}.
\]
The sum of all proper submodules also satisfies this property. Therefore, $\mathcal{M}(\lambda,\mu)$ has a unique maximal proper submodule and a unique simple quotient.
\end{proof}

Let $\mathcal{R}(\lambda,\mu)$ denote the unique maximal submodule, and define $\mathcal{L}(\lambda,\mu) = \mathcal{M}(\lambda,\mu)/\mathcal{R}(\lambda,\mu)$.
\section{Classification of Calibrated Representations}\label{S:Calibrated}

A representation $M$ of the AHCA is called \emph{calibrated} if the polynomial subalgebra $\mathcal{P}_d[x] \subseteq \mathcal{A}Se(d)$ acts semisimply. The main combinatorial object associated to such a representation is the shifted skew shape. Calibrated representations of the affine Hecke algebra were studied and classified in \cite{ram}; the main combinatorial objects in that case were pairs of skew shapes and content functions. That construction, along with \cite[Conjecture 52]{lec}, motivated the construction given here, and we prove a slightly modified version of that conjecture. Leclerc defined a calibrated representation to be one on which $\mathcal{P}_d[x^2]$ acts semisimply. For example, the module $\Phi_{[-1,0]}$ is calibrated in the sense of \cite{lec}, but $x_1, x_2$ do not act diagonally in any basis.

\subsection{Construction of Calibrated Representations}\label{SS:Calibrated}

Let $\lambda = (\lambda_1, \ldots, \lambda_r)$ and $\mu = (\mu_1, \ldots, \mu_r)$ be two partitions with $\lambda_1 > \cdots > \lambda_r > 0$ and $\mu_1 \geq \cdots \geq \mu_r$ such that $\mu_i = \mu_{i+1}$ implies $\mu_i = 0$, and $\lambda_i \geq \mu_i$ for all $i$. To such data, associate a shifted skew shape of boxes in which row $i$ has $\lambda_i - \mu_i$ boxes and the leftmost box occurs in position $i$. Figure \ref{ex1} illustrates a skew shape for $\lambda = (5,2,1)$ and $\mu = (3,1,0)$.

\begin{figure}[ht]
\begin{picture}(420,85)
\put(0,70){$\lambda=$}
\put(35,70){$0$}\put(60,70){$1$}\put(85,70){2}\put(110,71){3} \put(135,70){4}
\put(60,45){0}\put(85,45){1}
\put(85,20){0}
\put(25,85){\line(1,0){125}}
\put(25,85){\line(0,-1){25}}\put(50,85){\line(0,-1){50}}
\put(75,85){\line(0,-1){75}}\put(100,85){\line(0,-1){75}}
\put(125,85){\line(0,-1){25}}\put(150,85){\line(0,-1){25}}
\put(25,60){\line(1,0){125}}
\put(50,35){\line(1,0){50}}
\put(75,10){\line(1,0){25}}
\put(175,70){$\mu=$}
\put(210,70){$0$}\put(235,70){$1$}\put(260,70){2}
\put(235,45){0}
\put(200,85){\line(1,0){75}}
\put(200,85){\line(0,-1){25}}\put(225,85){\line(0,-1){50}}
\put(250,85){\line(0,-1){50}}\put(275,85){\line(0,-1){25}}
\put(200,60){\line(1,0){75}}
\put(225,35){\line(1,0){25}}
\put(310,70){$\lambda/\mu=$}
\put(385,70){3} \put(410,71){4}
\put(360,45){1}
\put(360,20){0}
\put(375,85){\line(1,0){50}}
\put(375,85){\line(0,-1){75}}\put(400,85){\line(0,-1){25}}
\put(425,85){\line(0,-1){25}}
\put(350,60){\line(1,0){75}}
\put(350,60){\line(0,-1){50}}
\put(350,35){\line(1,0){25}}
\put(350,10){\line(1,0){25}}
\end{picture}
\caption{Skew shape filled with contents}\label{ex1}
\end{figure}

A standard filling of a skew shape $\lambda/\mu$ with a total of $d$ boxes is an insertion of the set $\{1, \ldots, d\}$ into the boxes of the skew shape such that each box gets exactly
one element, each element is used exactly once, and the rows increase from left to right and the columns increase from top to bottom. In a shifted shape $\lambda$, all the boxes lie above one main diagonal running from northwest to southeast. Each box on this main diagonal is assigned content $0$. The contents of the other boxes are constant along diagonals, and the content of each diagonal is one more than the content of the diagonal immediately southwest of it. In a shifted skew shape $\lambda/\mu$, the contents are defined as in Figure \ref{ex1}. Given a standard tableau $L$ for a shifted skew shape $\lambda/\mu$, let $c(L_i)$ be the content of the box labeled by $i$. Thus $L$ gives rise to a $d$-tuple $c(L) = (c(L_1), \ldots, c(L_d))$, called the content reading of $\lambda/\mu$ with respect to $L$.

Let $\lambda/\mu$ be a shifted skew shape with $d$ boxes. Set $\kappa_{i,L} = \sqrt{q(c(L_{i}))}$ and
$$ \mathcal{Y}_{i,L} = \sqrt{1 - \frac{1}{(\kappa_{i+1,L}-\kappa_{i,L})^2} - \frac{1}{(\kappa_{i+1,L}+\kappa_{i,L})^2}}. $$
Now, to a skew shape $\lambda/\mu$, associate the vector space $\widehat{H}^{\lambda/\mu} = \bigoplus_L \mathcal{C}l(d)\, v_L$, where $L$ ranges over all standard tableaux of shape $\lambda/\mu$ and $d$ is the number of boxes in the shifted skew shape. Define $x_i v_L = \kappa_{i,L} v_L$ and
$$ s_i v_L = \frac{1}{\kappa_{i+1,L}-\kappa_{i,L}} v_L + \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \mathcal{Y}_{i,L} v_{s_i L}, $$
where $v_{s_i L} = 0$ if $s_i L$ is not a standard tableau.

\begin{prp}
The action of the $x_i$ and $s_{i}$ given above endows $\widehat{H}^{\lambda/\mu}$ with the structure of an $\mathcal{A}Se(d)$-module.
\end{prp}

\begin{proof}
We have
\begin{align*}
s_i^2 v_L &= \frac{1}{\kappa_{i+1,L}-\kappa_{i,L}} s_i v_L - \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} s_i v_L + \mathcal{Y}_{i,L} s_i v_{s_i L}\\
&= \frac{1}{\kappa_{i+1,L}-\kappa_{i,L}}\left(\frac{1}{\kappa_{i+1,L}-\kappa_{i,L}} v_L + \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \mathcal{Y}_{i,L} v_{s_i L}\right)\\
&\quad-\frac{c_i c_{i+1}}{\kappa_{i+1,L}+\kappa_{i,L}}\left(\frac{1}{\kappa_{i+1,L}-\kappa_{i,L}} v_L + \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \mathcal{Y}_{i,L} v_{s_i L}\right)\\
&\quad+ \mathcal{Y}_{i,L}\left(\frac{1}{\kappa_{i,L}-\kappa_{i+1,L}} v_{s_i L} + \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_{s_i L} + \mathcal{Y}_{i,L} v_{L}\right)\\
&= \left(\frac{1}{(\kappa_{i+1,L}-\kappa_{i,L})^2}+\frac{1}{(\kappa_{i+1,L}+\kappa_{i,L})^2} + \mathcal{Y}_{i,L}^2\right) v_L = v_L.
\end{align*}
Note that if $v_{s_i L} = 0$, then $\frac{1}{(\kappa_{i+1,L}-\kappa_{i,L})^2} + \frac{1}{(\kappa_{i+1,L}+\kappa_{i,L})^2} = 1$. Next,
$$ s_i x_i v_L = \frac{\kappa_{i,L}}{\kappa_{i+1,L}-\kappa_{i,L}} v_L + \frac{\kappa_{i,L}}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \kappa_{i,L} \mathcal{Y}_{i,L} v_{s_i L}. $$
On the other hand,
$$ x_{i+1} s_i v_L - v_L + c_i c_{i+1} v_L = \frac{\kappa_{i+1,L}}{\kappa_{i+1,L}-\kappa_{i,L}} v_L - \frac{\kappa_{i+1,L}}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \kappa_{i,L} \mathcal{Y}_{i,L} v_{s_i L} - v_L + c_i c_{i+1} v_L. $$
Thus it is easily seen that
$$ s_i x_i v_L = x_{i+1} s_i v_L - v_L + c_i c_{i+1} v_L. $$
We now check the braid relations.
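Before the case analysis, the scalar identities used above can be sanity-checked numerically. The sketch below is an illustration, not part of the paper's argument; it assumes the concrete choice $q(c)=c(c+1)$ for the quadratic function $q$ (which is fixed elsewhere in the paper), and verifies the boundary identity $\frac{1}{(\kappa_{i+1,L}-\kappa_{i,L})^2}+\frac{1}{(\kappa_{i+1,L}+\kappa_{i,L})^2}=1$ for row-adjacent contents $c, c+1$ (the case $v_{s_iL}=0$), as well as $\mathcal{Y}_{i,L}^2>0$ when the two contents differ by at least $2$:

```python
import math

def q(c):
    # Assumed quadratic function q(c) = c(c+1); the paper fixes q elsewhere.
    return c * (c + 1)

def kappa(c):
    # kappa_{i,L} = sqrt(q(c)), where c is the content of the box labeled i in L.
    return math.sqrt(q(c))

def Y_squared(c1, c2):
    # Y_{i,L}^2 = 1 - 1/(k2 - k1)^2 - 1/(k2 + k1)^2 for contents c1, c2.
    k1, k2 = kappa(c1), kappa(c2)
    return 1 - 1 / (k2 - k1) ** 2 - 1 / (k2 + k1) ** 2

# Boundary case: contents c and c+1 (boxes adjacent in a row, so s_i L is not
# standard): the coefficient sum equals 1, forcing Y_{i,L} = 0 and s_i^2 v_L = v_L.
for c in range(6):
    k1, k2 = kappa(c), kappa(c + 1)
    total = 1 / (k2 - k1) ** 2 + 1 / (k2 + k1) ** 2
    assert abs(total - 1) < 1e-9

# Generic case: contents differing by at least 2, so s_i L can be standard;
# then Y_{i,L}^2 > 0, and the identity 1/(k2-k1)^2 + 1/(k2+k1)^2 + Y^2 = 1
# holds by the definition of Y_{i,L}.
for c1 in range(4):
    for c2 in range(c1 + 2, 7):
        assert Y_squared(c1, c2) > 0
```

With this choice of $q$ the boundary identity reduces to $2\,(q(c)+q(c+1))/(q(c+1)-q(c))^2 = 4(c+1)^2/4(c+1)^2 = 1$, matching the remark in the proof.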
To this end, fix $j \in \mathbb{N}$ and set $\kappa_i = \sqrt{j+i}$ for $i \geq 0$.

\begin{figure}[ht]
\begin{picture}(340,40)
\put(115,14){$L\;\;=$}
\put(160,14){$i$}\put(178,14){$i+1$}\put(203,14){$i+2$}
\put(150,5){\line(1,0){75}}
\put(150,5){\line(0,1){25}}\put(175,5){\line(0,1){25}}
\put(200,5){\line(0,1){25}}\put(225,5){\line(0,1){25}}
\put(150,30){\line(1,0){75}}
\end{picture}
\caption{Case 1}\label{F:Case 1}
\end{figure}

Case 1: Let $L$ be the standard tableau given in Figure \ref{F:Case 1}. A calculation gives
{\small
\begin{align*}
s_i s_{i+1} s_i v_L = s_{i+1} s_i s_{i+1} v_L =&\left(\frac{1}{(\kappa_3-\kappa_2)^2(\kappa_2-\kappa_1)} -\frac{1}{(\kappa_2+\kappa_3)^2(\kappa_1+\kappa_2)}\right)v_L\\
&+\left(\frac{1}{(\kappa_3^2-\kappa_2^2)(\kappa_2+\kappa_1)} +\frac{1}{(\kappa_3^2-\kappa_2^2)^2(\kappa_2-\kappa_1)}\right)c_ic_{i+1}v_L\\
&+\left(\frac{1}{(\kappa_3^2-\kappa_2^2)(\kappa_2-\kappa_1)} +\frac{1}{(\kappa_3^2-\kappa_2^2)^2(\kappa_2+\kappa_1)}\right)c_{i+1}c_{i+2}v_L\\
&+\left(\frac{1}{(\kappa_3-\kappa_2)^2(\kappa_2+\kappa_1)} -\frac{1}{(\kappa_2+\kappa_3)^2(\kappa_2-\kappa_1)}\right)c_ic_{i+2}v_L.
\end{align*}
}

\begin{figure}[ht]
\begin{picture}(340,70)
\put(50,27){$L_1=$}
\put(78,14){$i+2$}\put(85,39){$i$}\put(103,39){$i+1$}
\put(75,5){\line(1,0){25}}
\put(75,5){\line(0,1){50}}\put(100,5){\line(0,1){50}}
\put(75,30){\line(1,0){50}}
\put(100,30){\line(0,1){25}}\put(125,30){\line(0,1){25}}
\put(75,55){\line(1,0){50}}
\put(200,27){$L_2=$}
\put(228,14){$i+1$}\put(235,39){$i$}\put(253,39){$i+2$}
\put(225,5){\line(1,0){25}}
\put(225,5){\line(0,1){50}}\put(250,5){\line(0,1){50}}
\put(225,30){\line(1,0){50}}
\put(250,30){\line(0,1){25}}\put(275,30){\line(0,1){25}}
\put(225,55){\line(1,0){50}}
\end{picture}
\caption{Case 2}\label{F:Case 2}
\end{figure}

Case 2: Let $L_1$ and $L_2$ be the standard tableaux given in Figure \ref{F:Case 2}.
A calculation gives
{\small
\begin{align*}
s_i s_{i+1} s_i v_{L_1} = s_{i+1} s_i s_{i+1}v_{L_1} =&\left(\frac{1}{(\kappa_3-\kappa_2)^2(\kappa_1-\kappa_3)} +\frac{1}{(\kappa_2+\kappa_3)^2(\kappa_1+\kappa_3)}\right)v_{L_1}\\
&+\left(\frac{1}{(\kappa_3^2-\kappa_2^2)(\kappa_1-\kappa_3)} -\frac{1}{(\kappa_3^2-\kappa_2^2)^2(\kappa_1+\kappa_3)}\right)c_ic_{i+1}v_{L_1}\\
&+\left(\frac{1}{(\kappa_2^2-\kappa_3^2)(\kappa_1+\kappa_3)} +\frac{1}{(\kappa_3^2-\kappa_2^2)^2(\kappa_1-\kappa_3)}\right)c_{i+1}c_{i+2}v_{L_1}\\
&+\left(\frac{1}{(\kappa_3-\kappa_2)^2(\kappa_1+\kappa_3)} -\frac{1}{(\kappa_2+\kappa_3)^2(\kappa_1-\kappa_3)}\right)c_ic_{i+2}v_{L_1}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_3-\kappa_2)(\kappa_1-\kappa_2)}\right)v_{L_2} +\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_3-\kappa_2)(\kappa_1+\kappa_2)}\right) c_ic_{i+1}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_1-\kappa_2)(\kappa_2+\kappa_3)}\right) c_{i+1}c_{i+2}v_{L_2} +\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2+\kappa_3)(\kappa_1+\kappa_2)}\right) c_ic_{i+2}v_{L_2}.
\end{align*}
\begin{align*}
s_i s_{i+1} s_i v_{L_2} = s_{i+1} s_i s_{i+1}v_{L_2} =& \left(\frac{1}{(\kappa_1-\kappa_2)^2(\kappa_3-\kappa_1)} +\frac{1}{(\kappa_1+\kappa_2)^2(\kappa_1+\kappa_3)}\right)v_{L_2}\\
&+\left(\frac{1}{(\kappa_1^2-\kappa_2^2)(\kappa_3-\kappa_1)} -\frac{1}{(\kappa_1^2-\kappa_2^2)^2(\kappa_1+\kappa_3)}\right)c_ic_{i+1}v_{L_2}\\
&+\left(\frac{-1}{(\kappa_1^2-\kappa_2^2)(\kappa_1+\kappa_3)} +\frac{1}{(\kappa_1^2-\kappa_2^2)^2(\kappa_3-\kappa_1)}\right)c_{i+1}c_{i+2}v_{L_2}\\
&+\left(\frac{1}{(\kappa_1-\kappa_2)^2(\kappa_1+\kappa_3)} +\frac{1}{(\kappa_1+\kappa_2)^2(\kappa_3-\kappa_1)}\right)c_ic_{i+2}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_3-\kappa_2)(\kappa_1-\kappa_2)}\right)v_{L_1} +\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_1-\kappa_2)(\kappa_2+\kappa_3)}\right) c_ic_{i+1}v_{L_1}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_3-\kappa_2)(\kappa_1+\kappa_2)}\right) c_{i+1}c_{i+2}v_{L_1} +\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2+\kappa_3)(\kappa_1+\kappa_2)}\right)c_ic_{i+2}v_{L_1}.
\end{align*}
}

\begin{figure}[ht]
\begin{picture}(340,70)
\put(50,27){$L_1=$}
\put(80,14){$i$}\put(103,39){$i+1$}\put(103,14){$i+2$}
\put(75,5){\line(1,0){50}}
\put(75,5){\line(0,1){25}}\put(100,5){\line(0,1){50}}\put(125,5){\line(0,1){50}}
\put(75,30){\line(1,0){50}}
\put(100,55){\line(1,0){25}}
\put(200,27){$L_2=$}
\put(228,14){$i+1$}\put(252,14){$i+2$}\put(258,39){$i$}
\put(225,5){\line(1,0){50}}
\put(225,5){\line(0,1){25}}\put(250,5){\line(0,1){50}}\put(275,5){\line(0,1){50}}
\put(225,30){\line(1,0){50}}
\put(250,55){\line(1,0){25}}
\end{picture}
\caption{Case 3}\label{F:Case 3}
\end{figure}

Case 3: Let $L_1$ and $L_2$ be as in Figure \ref{F:Case 3}. Then a calculation analogous to Case 2 shows that $s_is_{i+1}s_iv_{L_1}=s_{i+1}s_is_{i+1}v_{L_1}$ and $s_is_{i+1}s_iv_{L_2}=s_{i+1}s_is_{i+1}v_{L_2}$.
\begin{figure}[ht]
\begin{picture}(420,60)
\put(0,27){$L_1=$}
\put(37,14){$i$}\put(53,14){$i+1$}\put(78,39){$i+2$}
\put(25,5){\line(1,0){50}}
\put(25,5){\line(0,1){25}}\put(50,5){\line(0,1){25}}
\put(75,5){\line(0,1){50}}
\put(25,30){\line(1,0){75}}
\put(100,30){\line(0,1){25}}
\put(75,55){\line(1,0){25}}
\put(150,27){$L_2=$}
\put(187,14){$i$}\put(203,14){$i+2$}\put(228,39){$i+1$}
\put(175,5){\line(1,0){50}}
\put(175,5){\line(0,1){25}}\put(200,5){\line(0,1){25}}
\put(225,5){\line(0,1){50}}
\put(175,30){\line(1,0){75}}
\put(250,30){\line(0,1){25}}
\put(225,55){\line(1,0){25}}
\put(300,27){$L_3=$}
\put(328,14){$i+1$}\put(353,14){$i+2$}\put(385,39){$i$}
\put(325,5){\line(1,0){50}}
\put(325,5){\line(0,1){25}}\put(350,5){\line(0,1){25}}
\put(375,5){\line(0,1){50}}
\put(325,30){\line(1,0){75}}
\put(400,30){\line(0,1){25}}
\put(375,55){\line(1,0){25}}
\end{picture}
\caption{Case 4}\label{F:Case 4}
\end{figure}

Case 4: Let $L_1$, $L_2$, and $L_3$ be the standard tableaux given in Figure \ref{F:Case 4}. A calculation gives
{\small
\begin{align*}
s_i s_{i+1} s_i v_{L_1} = s_{i+1} s_i s_{i+1}v_{L_1} =& \left(\frac{1}{(\kappa_1-\kappa_0)^2(\kappa_3-\kappa_1)} +\frac{1}{(\kappa_0+\kappa_1)^2(\kappa_1+\kappa_3)}\right)v_{L_1}\\
&+\left(\frac{1}{(\kappa_1^2-\kappa_0^2)(\kappa_3-\kappa_1)} -\frac{1}{(\kappa_1^2-\kappa_0^2)(\kappa_1+\kappa_3)}\right)c_ic_{i+1}v_{L_1}\\
&+\left(\frac{1}{(\kappa_0^2-\kappa_1^2)(\kappa_1+\kappa_3)} -\frac{1}{(\kappa_0^2-\kappa_1^2)(\kappa_3-\kappa_1)}\right)c_{i+1}c_{i+2}v_{L_1}\\
&+\left(\frac{1}{(\kappa_1-\kappa_0)^2(\kappa_1+\kappa_3)} -\frac{1}{(\kappa_0+\kappa_1)^2(\kappa_3-\kappa_1)}\right)c_ic_{i+2}v_{L_1}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_1-\kappa_0)(\kappa_3-\kappa_0)}\right)v_{L_2} +\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_1-\kappa_0)(\kappa_0+\kappa_3)}\right) c_ic_{i+1}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_0+\kappa_1)(\kappa_3-\kappa_0)}\right) c_{i+1}c_{i+2}v_{L_2} +\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_0+\kappa_1)(\kappa_0+\kappa_3)}\right) c_ic_{i+2}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_2}}{\kappa_1-\kappa_0}\right)v_{L_3} +\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_2}}{\kappa_0+\kappa_1}\right) c_{i+1}c_{i+2}v_{L_3}.
\end{align*}
\begin{align*}
s_i s_{i+1} s_i v_{L_2} = s_{i+1} s_i s_{i+1}v_{L_2} =& \left(\frac{1}{(\kappa_3-\kappa_0)^2(\kappa_1-\kappa_3)} +\frac{1}{(\kappa_0+\kappa_3)^2(\kappa_1+\kappa_3)}+\frac{\mathcal{Y}_{i,L_2} \mathcal{Y}_{i,L_3}}{\kappa_1-\kappa_0}\right)v_{L_2}\\
&+\left(\frac{1}{(\kappa_3^2-\kappa_0^2)(\kappa_1-\kappa_3)} -\frac{1}{(\kappa_3^2-\kappa_0^2)(\kappa_1+\kappa_3)}\right)c_ic_{i+1}v_{L_2}\\
&+\left(\frac{-1}{(\kappa_3^2-\kappa_0^2)(\kappa_1-\kappa_3)} -\frac{1}{(\kappa_3^2-\kappa_0^2)(\kappa_1-\kappa_3)}\right)c_{i+1}c_{i+2}v_{L_2}\\
&+\left(\frac{1}{(\kappa_3-\kappa_0)^2(\kappa_1+\kappa_3)} -\frac{1}{(\kappa_0+\kappa_3)^2(\kappa_1-\kappa_3)} +\frac{\mathcal{Y}_{i,L_2}\mathcal{Y}_{i,L_3}}{\kappa_0+\kappa_1}\right) c_ic_{i+2}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_3-\kappa_0)(\kappa_1-\kappa_3)} +\frac{\mathcal{Y}_{i,L_2}}{(\kappa_1-\kappa_0)(\kappa_0-\kappa_3)}\right)v_{L_3}\\
&+\left(\frac{-\mathcal{Y}_{i,L_2}}{(\kappa_3+\kappa_0)(\kappa_1+\kappa_3)} +\frac{\mathcal{Y}_{i,L_2}}{(\kappa_1-\kappa_0)(\kappa_0+\kappa_3)}\right) c_ic_{i+1}v_{L_3}\\
&+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_3+\kappa_0)(\kappa_1-\kappa_3)} -\frac{\mathcal{Y}_{i,L_2}}{(\kappa_1+\kappa_0)(\kappa_0+\kappa_3)}\right) c_{i+1}c_{i+2}v_{L_3}\\
&+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_3-\kappa_0)(\kappa_1+\kappa_3)} +\frac{\mathcal{Y}_{i,L_2}}{(\kappa_1+\kappa_0)(\kappa_0-\kappa_3)}\right) c_ic_{i+2}v_{L_3}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_1-\kappa_0)(\kappa_3-\kappa_0)}\right)v_{L_1}
+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_1+\kappa_0)(\kappa_3-\kappa_0)}\right) c_ic_{i+1}v_{L_1}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_3+\kappa_0)(\kappa_1-\kappa_0)}\right) c_{i+1}c_{i+2}v_{L_1} +\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_3+\kappa_0)(\kappa_1+\kappa_0)}\right) c_ic_{i+2}v_{L_1}.
\end{align*}
\begin{align*}
s_i s_{i+1} s_i v_{L_3} = s_{i+1} s_i s_{i+1}v_{L_3} =& \left(\frac{1}{(\kappa_3-\kappa_0)^2(\kappa_1-\kappa_0)} +\frac{1}{(\kappa_0+\kappa_3)^2(\kappa_0+\kappa_1)}+\frac{\mathcal{Y}_{i,L_2} \mathcal{Y}_{i,L_3}}{\kappa_1-\kappa_3}\right) v_{L_3} \\
&+\left(\frac{1}{(\kappa_0^2-\kappa_3^2)(\kappa_1-\kappa_0)} -\frac{1}{(\kappa_0^2-\kappa_3^2)(\kappa_0+\kappa_1)}\right)c_ic_{i+1}v_{L_3}\\
&+\left(\frac{-1}{(\kappa_0^2-\kappa_3^2)(\kappa_0+\kappa_1)} +\frac{1}{(\kappa_0^2-\kappa_3^2)(\kappa_1-\kappa_0)}\right)c_{i+1}c_{i+2}v_{L_3}\\
&+\left(\frac{1}{(\kappa_3-\kappa_0)^2(\kappa_0+\kappa_1)} +\frac{1}{(\kappa_0+\kappa_3)^2(\kappa_1-\kappa_0)} +\frac{\mathcal{Y}_{i,L_2}\mathcal{Y}_{i,L_3}}{\kappa_1+\kappa_3}\right) c_ic_{i+2}v_{L_3}\\
&+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_0-\kappa_3)(\kappa_1-\kappa_0)} +\frac{\mathcal{Y}_{i,L_3}}{(\kappa_1-\kappa_3)(\kappa_3-\kappa_0)}\right)v_{L_2}\\
&+\left(\frac{-\mathcal{Y}_{i,L_3}}{(\kappa_3+\kappa_0)(\kappa_0+\kappa_1)} +\frac{\mathcal{Y}_{i,L_3}}{(\kappa_1-\kappa_3)(\kappa_0+\kappa_3)}\right) c_ic_{i+1}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_3+\kappa_0)(\kappa_1-\kappa_0)} -\frac{\mathcal{Y}_{i,L_3}}{(\kappa_1+\kappa_3)(\kappa_0+\kappa_3)}\right) c_{i+1}c_{i+2}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_0-\kappa_3)(\kappa_0+\kappa_1)} +\frac{\mathcal{Y}_{i,L_3}}{(\kappa_1+\kappa_3)(\kappa_3-\kappa_0)}\right) c_ic_{i+2}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_2}}{\kappa_1-\kappa_0}\right)v_{L_1} +\left(\frac{\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_2}}{\kappa_1+\kappa_0}\right) c_ic_{i+1}v_{L_1}.
\end{align*}
}

\begin{figure}[ht]
\begin{picture}(420,185)
\put(0,148){$L_1=$}
\put(37,114){$i$}\put(53,139){$i+1$}\put(78,164){$i+2$}
\put(25,105){\line(1,0){25}}
\put(25,105){\line(0,1){25}}\put(50,105){\line(0,1){50}}
\put(25,130){\line(1,0){50}}
\put(75,130){\line(0,1){50}}
\put(50,155){\line(1,0){50}}
\put(100,155){\line(0,1){25}}
\put(75,180){\line(1,0){25}}
\put(150,148){$L_2=$}
\put(178,114){$i+1$}\put(212,139){$i$}\put(228,164){$i+2$}
\put(175,105){\line(1,0){25}}
\put(175,105){\line(0,1){25}}\put(200,105){\line(0,1){50}}
\put(175,130){\line(1,0){50}}
\put(225,130){\line(0,1){50}}
\put(200,155){\line(1,0){50}}
\put(250,155){\line(0,1){25}}
\put(225,180){\line(1,0){25}}
\put(300,148){$L_3=$}
\put(337,114){$i$}\put(353,139){$i+2$}\put(378,164){$i+1$}
\put(325,105){\line(1,0){25}}
\put(325,105){\line(0,1){25}}\put(350,105){\line(0,1){50}}
\put(325,130){\line(1,0){50}}
\put(375,130){\line(0,1){50}}
\put(350,155){\line(1,0){50}}
\put(400,155){\line(0,1){25}}
\put(375,180){\line(1,0){25}}
\put(0,48){$L_4=$}
\put(28,14){$i+2$}\put(62,39){$i$}\put(78,64){$i+1$}
\put(25,5){\line(1,0){25}}
\put(25,5){\line(0,1){25}}\put(50,5){\line(0,1){50}}
\put(25,30){\line(1,0){50}}
\put(75,30){\line(0,1){50}}
\put(50,55){\line(1,0){50}}
\put(100,55){\line(0,1){25}}
\put(75,80){\line(1,0){25}}
\put(150,48){$L_5=$}
\put(178,14){$i+2$}\put(203,39){$i+1$}\put(237,64){$i$}
\put(175,5){\line(1,0){25}}
\put(175,5){\line(0,1){25}}\put(200,5){\line(0,1){50}}
\put(175,30){\line(1,0){50}}
\put(225,30){\line(0,1){50}}
\put(200,55){\line(1,0){50}}
\put(250,55){\line(0,1){25}}
\put(225,80){\line(1,0){25}}
\put(300,48){$L_6=$}
\put(328,14){$i+1$}\put(353,39){$i+2$}\put(387,64){$i$}
\put(325,5){\line(1,0){25}}
\put(325,5){\line(0,1){25}}\put(350,5){\line(0,1){50}}
\put(325,30){\line(1,0){50}}
\put(375,30){\line(0,1){50}}
\put(350,55){\line(1,0){50}}
\put(400,55){\line(0,1){25}}
\put(375,80){\line(1,0){25}}
\end{picture}
\caption{Case 5}\label{F:Case 5}
\end{figure}

Case 5: Let $L_1, L_2, L_3, L_4, L_5,$ and $L_6$ be given as in Figure \ref{F:Case 5}. A calculation gives
{\small
\begin{align*}
s_i s_{i+1} s_i v_{L_1} = s_{i+1} s_i s_{i+1}v_{L_1} =& \left(\frac{1}{(\kappa_2-\kappa_0)^2(\kappa_4-\kappa_2)} +\frac{1}{(\kappa_2+\kappa_0)^2(\kappa_4+\kappa_2)}+\frac{\mathcal{Y}_{i,L_1} \mathcal{Y}_{i, L_2}}{\kappa_4-\kappa_0}\right)v_{L_1}\\
&+\left(\frac{1}{(\kappa_2^2-\kappa_0^2)(\kappa_4-\kappa_2)} -\frac{1}{(\kappa_2^2-\kappa_0^2)(\kappa_4+\kappa_2)}\right)c_ic_{i+1}v_{L_1}\\
&+\left(\frac{-1}{(\kappa_2^2-\kappa_0^2)(\kappa_4+\kappa_2)} +\frac{1}{(\kappa_2^2-\kappa_0^2)(\kappa_4-\kappa_2)}\right)c_{i+1}c_{i+2}v_{L_1}\\
&+\left(\frac{1}{(\kappa_2-\kappa_0)^2(\kappa_4+\kappa_2)} +\frac{1}{(\kappa_2+\kappa_0)^2(\kappa_4-\kappa_2)}+\frac{\mathcal{Y}_{i,L_1} \mathcal{Y}_{i, L_2}}{\kappa_4+\kappa_0}\right)c_ic_{i+2}v_{L_1}\\
&+\left(\frac{\mathcal{Y}_{i,L_1}}{(\kappa_2-\kappa_0)(\kappa_4-\kappa_2)} +\frac{\mathcal{Y}_{i,L_1}}{(\kappa_4-\kappa_0)(\kappa_0-\kappa_2)}\right)v_{L_2}\\
&+\left(\frac{-\mathcal{Y}_{i,L_1}}{(\kappa_2+\kappa_0)(\kappa_4+\kappa_2)} +\frac{\mathcal{Y}_{i,L_1}}{(\kappa_4-\kappa_0)(\kappa_0+\kappa_2)}\right)c_ic_{i+1}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i,L_1}}{(\kappa_2+\kappa_0)(\kappa_4-\kappa_2)} -\frac{\mathcal{Y}_{i,L_1}}{(\kappa_4+\kappa_0)(\kappa_0+\kappa_2)}\right) c_{i+1}c_{i+2}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i,L_1}}{(\kappa_2-\kappa_0)(\kappa_4+\kappa_2)} +\frac{\mathcal{Y}_{i,L_1}}{(\kappa_4+\kappa_0)(\kappa_0-\kappa_2)}\right)c_{i}c_{i+2}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2-\kappa_0)(\kappa_4-\kappa_0)}\right)v_{L_3}+
\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2-\kappa_0)(\kappa_4+\kappa_0)}\right) c_ic_{i+1}v_{L_3}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2+\kappa_0)(\kappa_4-\kappa_0)}\right) c_{i+1}c_{i+2}v_{L_3} +\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2+\kappa_0)(\kappa_4+\kappa_0)}\right) c_ic_{i+2}v_{L_3}\\
&+\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_3}}{\kappa_2-\kappa_0}\right)v_{L_6} +\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_3}}{\kappa_2+\kappa_0}\right) c_{i+1}c_{i+2}v_{L_6}\\
&+\left(\frac{\mathcal{Y}_{i,L_1}\mathcal{Y}_{i+1,L_2}}{\kappa_4-\kappa_2}\right)v_{L_4} +\left(\frac{\mathcal{Y}_{i,L_1}\mathcal{Y}_{i+1,L_2}}{\kappa_4+\kappa_2}\right) c_ic_{i+1}v_{L_4}+\left(\mathcal{Y}_{i,L_1}\mathcal{Y}_{i+1,L_2}\mathcal{Y}_{i,L_4}\right)v_{L_5}.
\end{align*}
\begin{align*}
s_i s_{i+1} s_i v_{L_2} = s_{i+1} s_i s_{i+1}v_{L_2} =& \left(\frac{1}{(\kappa_2-\kappa_0)^2(\kappa_4-\kappa_0)} +\frac{1}{(\kappa_2+\kappa_0)^2(\kappa_4+\kappa_0)}+\frac{\mathcal{Y}_{i,L_1} \mathcal{Y}_{i,L_2}}{\kappa_4-\kappa_2}\right)v_{L_2}\\
&+\left(\frac{1}{(\kappa_0^2-\kappa_2^2)(\kappa_4-\kappa_0)} -\frac{1}{(\kappa_0^2-\kappa_2^2)(\kappa_4+\kappa_0)}\right)c_ic_{i+1}v_{L_2}\\
&+\left(\frac{-1}{(\kappa_0^2-\kappa_2^2)(\kappa_4+\kappa_0)} +\frac{1}{(\kappa_0^2-\kappa_2^2)(\kappa_4-\kappa_0)}\right)c_{i+1}c_{i+2}v_{L_2}\\
&+\left(\frac{1}{(\kappa_0-\kappa_2)^2(\kappa_4+\kappa_0)} +\frac{1}{(\kappa_2+\kappa_0)^2(\kappa_4-\kappa_0)}+\frac{\mathcal{Y}_{i,L_1} \mathcal{Y}_{i, L_2}}{\kappa_4+\kappa_2}\right)c_ic_{i+2}v_{L_2}\\
&+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_0-\kappa_2)(\kappa_4-\kappa_0)}
+{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_2}}{(\kappa_4-\kappa_2)(\kappa_2-\kappa_0)}\right)v_{L_1}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-\mathcal{Y}_{i,L_2}}{(\kappa_2+\kappa_0)(\kappa_4+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_2}}{(\kappa_4-\kappa_2)(\kappa_0+\kappa_2)}\right)c_ic_{i+1}v_{L_1}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_2}}{(\kappa_2+\kappa_0)(\kappa_4-\kappa_0)} -{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_2}}{(\kappa_4+\kappa_2)(\kappa_0+\kappa_2)}\right) c_{i+1}c_{i+2}v_{L_1}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_2}}{(\kappa_0-\kappa_2)(\kappa_0+\kappa_4)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_2}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_2)}\right)c_{i}c_{i+2}v_{L_1}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_2}}{(\kappa_0-\kappa_2)(\kappa_4-\kappa_2)}\right)v_{L_4} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_2}}{(\kappa_0-\kappa_2)(\kappa_4+\kappa_2)}\right) c_ic_{i+1}v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_2}}{(\kappa_2+\kappa_0)(\kappa_4-\kappa_2)}\right) c_{i+1}c_{i+2}v_{L_4} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_2}}{(\kappa_2+\kappa_0)(\kappa_4+\kappa_2)}\right) c_ic_{i+2}v_{L_4}+\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_4}}{\kappa_0-\kappa_2}\right)v_{L_5} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_4}}{\kappa_2+\kappa_0}\right) c_{i+1}c_{i+2}v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_2}\mathcal{Y}_{i+1,L_1}}{\kappa_4-\kappa_0}\right)v_{L_3} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_2}\mathcal{Y}_{i+1,L_1}}{\kappa_4+\kappa_0}\right) c_ic_{i+1}v_{L_3} +\left(\mathcal{Y}_{i,L_2} \mathcal{Y}_{i+1, L_1} \mathcal{Y}_{i, L_3}\right)v_{L_6}. 
{{\mathfrak{b}}ar{e}}nd{align*} {\mathfrak{b}}egin{align*} s_i s_{i+1} s_i v_{L_3} = s_{i+1} s_i s_{i+1}v_{L_3} =& \left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4-\kappa_0)^2(\kappa_2-\kappa_4)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4+\kappa_0)^2(\kappa_4+\kappa_2)}+{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3} \mathcal{Y}_{i, L_6}}{\kappa_2-\kappa_0}\right)v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4^2-\kappa_0^2)(\kappa_2-\kappa_4)} -{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4^2-\kappa_0^2)(\kappa_4+\kappa_2)}\right)c_ic_{i+1}v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-1}{(\kappa_4^2-\kappa_0^2)(\kappa_4+\kappa_2)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4^2-\kappa_0^2)(\kappa_2-\kappa_4)}\right)c_{i+1}c_{i+2}v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4-\kappa_0)^2(\kappa_4+\kappa_2)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4+\kappa_0)^2(\kappa_2-\kappa_4)}+{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3} \mathcal{Y}_{i, L_6}}{\kappa_0+\kappa_2}\right)c_i c_{i+2}v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}}{(\kappa_4-\kappa_0)(\kappa_2-\kappa_4)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}}{(\kappa_2-\kappa_0)(\kappa_0-\kappa_4)}\right)v_{L_6}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-\mathcal{Y}_{i,L_3}}{(\kappa_4+\kappa_0)(\kappa_4+\kappa_2)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}}{(\kappa_2-\kappa_0)(\kappa_0+\kappa_4)}\right) c_i c_{i+1}v_{L_6}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}}{(\kappa_4+\kappa_0)(\kappa_2-\kappa_4)} -{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}}{(\kappa_0+\kappa_2)(\kappa_0+\kappa_4)}\right) c_{i+1} c_{i+2}v_{L_6}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}}{(\kappa_4-\kappa_0)(\kappa_2+\kappa_4)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}}{(\kappa_0+\kappa_2)(\kappa_0-\kappa_4)}\right)c_{i}c_{i+2}v_{L_6}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_3}}{(\kappa_4-\kappa_0)(\kappa_2-\kappa_0)}\right)v_{L_1} 
+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_3}}{(\kappa_4-\kappa_0)(\kappa_0+\kappa_2)}\right) c_i c_{i+1}v_{L_1}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_3}}{(\kappa_4+\kappa_0)(\kappa_2-\kappa_0)}\right) c_{i+1}c_{i+2}v_{L_1}+ \left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_3}}{(\kappa_4+\kappa_0)(\kappa_0+\kappa_2)}\right) c_i c_{i+2}v_{L_1}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_3}\mathcal{Y}_{i,L_1}}{\kappa_4-\kappa_0}\right)v_{L_2} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_3}\mathcal{Y}_{i,L_1}}{\kappa_4+\kappa_0}\right) c_{i+1}c_{i+2}v_{L_2}\\ &\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_6}}{\kappa_2-\kappa_4}\right)v_{L_5} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_6}}{\kappa_4+\kappa_2}\right) c_ic_{i+1}v_{L_5} +(\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_6}\mathcal{Y}_{i,L_3})v_{L_4}. {{\mathfrak{b}}ar{e}}nd{align*} {\mathfrak{b}}egin{align*} s_i s_{i+1} s_i v_{L_4} = s_{i+1} s_i s_{i+1}v_{L_4} =& \left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4-\kappa_2)^2(\kappa_0-\kappa_4)}+{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4+\kappa_2)^2(\kappa_4+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}\mathcal{Y}_{i,L_5}}{\kappa_0-\kappa_2}\right)v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4^2-\kappa_2^2)(\kappa_0-\kappa_4)} -{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4^2-\kappa_2^2)(\kappa_4+\kappa_0)}\right)c_ic_{i+1}v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-1}{(\kappa_4^2-\kappa_2^2)(\kappa_4+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4^2-\kappa_2^2)(\kappa_0-\kappa_4)}\right)c_{i+1}c_{i+2}v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4-\kappa_2)^2(\kappa_4+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4+\kappa_2)^2(\kappa_0-\kappa_4)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}\mathcal{Y}_{i,L_5}}{\kappa_0+\kappa_2}\right) c_ic_{i+2}v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}}{(\kappa_4-\kappa_2)(\kappa_0-\kappa_4)} 
+{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}}{(\kappa_0-\kappa_2)(\kappa_2-\kappa_4)}\right)v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-\mathcal{Y}_{i,L_4}}{(\kappa_4+\kappa_2)(\kappa_4+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}}{(\kappa_0-\kappa_2)(\kappa_2+\kappa_4)}\right)c_ic_{i+1}v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_4)} -{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}}{(\kappa_0+\kappa_2)(\kappa_2+\kappa_4)}\right) c_{i+1} c_{i+2}v_{L_5}\\ &\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}}{(\kappa_4-\kappa_2)(\kappa_0+\kappa_4)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}}{(\kappa_0+\kappa_2)(\kappa_2-\kappa_4)}\right) c_{i} c_{i+2}v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_4}}{(\kappa_4-\kappa_2)(\kappa_0-\kappa_2)}\right)v_{L_2} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_4}}{(\kappa_4-\kappa_2)(\kappa_0+\kappa_2)}\right) c_i c_{i+1}v_{L_2}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_4}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_2)}\right) c_{i+1}c_{i+2}v_{L_2} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_4}}{(\kappa_4+\kappa_2)(\kappa_0+\kappa_2)}\right) c_i c_{i+2}v_{L_2}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_4}\mathcal{Y}_{i,L_2}}{\kappa_4-\kappa_2}\right)v_{L_1} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_4}\mathcal{Y}_{i,L_2}}{\kappa_4+\kappa_2}\right) c_{i+1} c_{i+2}v_{L_1}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}\mathcal{Y}_{i+1,L_5}}{\kappa_0-\kappa_4}\right)v_{L_6} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_4}\mathcal{Y}_{i+1,L_5}}{\kappa_4+\kappa_0}\right) c_i c_{i+1}v_{L_6} +(\mathcal{Y}_{i,L_4} \mathcal{Y}_{i+1, L_5} \mathcal{Y}_{i, L_4})v_{L_3}. 
{{\mathfrak{b}}ar{e}}nd{align*} {\mathfrak{b}}egin{align*} s_i s_{i+1} s_i v_{L_5} = s_{i+1} s_i s_{i+1} v_{L_5} =& \left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4-\kappa_2)^2(\kappa_0-\kappa_2)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4+\kappa_2)^2(\kappa_2+\kappa_0)}+{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5} \mathcal{Y}_{i, L_4}}{\kappa_0-\kappa_4}\right)v_{L_5} \\ &+\left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_2^2-\kappa_4^2)(\kappa_0-\kappa_2)} -{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_2^2-\kappa_4^2)(\kappa_2+\kappa_0)}\right)c_ic_{i+1}v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-1}{(\kappa_2^2-\kappa_4^2)(\kappa_2+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_2^2-\kappa_4^2)(\kappa_0-\kappa_2)}\right)c_{i+1}c_{i+2}v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4-\kappa_2)^2(\kappa_2+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4+\kappa_2)^2(\kappa_0-\kappa_2)}+{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5} \mathcal{Y}_{i, L_4}}{\kappa_0+\kappa_4}\right)c_ic_{i+2}v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5}}{(\kappa_2-\kappa_4)(\kappa_0-\kappa_2)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5}}{(\kappa_0-\kappa_4)(\kappa_4-\kappa_2)}\right)v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-\mathcal{Y}_{i,L_5}}{(\kappa_4+\kappa_2)(\kappa_2+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5}}{(\kappa_0-\kappa_4)(\kappa_2+\kappa_4)}\right)c_ic_{i+1}v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_2)} -{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5}}{(\kappa_0+\kappa_4)(\kappa_2+\kappa_4)}\right) c_{i+1}c_{i+2}v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5}}{(\kappa_2-\kappa_4)(\kappa_0+\kappa_2)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_5}}{(\kappa_0+\kappa_4)(\kappa_4-\kappa_2)}\right) c_{i} c_{i+2}v_{L_4}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_5}}{(\kappa_2-\kappa_4)(\kappa_0-\kappa_4)}\right)v_{L_6} 
+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_5}}{(\kappa_4-\kappa_2)(\kappa_0+\kappa_2)}\right) c_ic_{i+1}v_{L_6}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_5}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_4)}\right) c_{i+1}c_{i+2}v_{L_6} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_5}}{(\kappa_4+\kappa_2)(\kappa_0+\kappa_4)}\right) c_i c_{i+2}v_{L_6}\\ &+\left({{\mathfrak{b}}ar{f}}rac{(\mathcal{Y}_{i+1,L_5})(\mathcal{Y}_{i,L_6})}{\kappa_2-\kappa_4}\right)v_{L_3} +\left({{\mathfrak{b}}ar{f}}rac{(\mathcal{Y}_{i+1,L_5})(\mathcal{Y}_{i,L_6})}{\kappa_4+\kappa_2}\right) c_{i+1} c_{i+2}v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{(\mathcal{Y}_{i,L_5})(\mathcal{Y}_{i+1,L_4})}{\kappa_0-\kappa_2}\right)v_{L_2} +\left({{\mathfrak{b}}ar{f}}rac{(\mathcal{Y}_{i,L_5})(\mathcal{Y}_{i+1,L_4})}{\kappa_2+\kappa_0}\right) c_i c_{i+1}v_{L_6} +\left(\mathcal{Y}_{i,L_5}\mathcal{Y}_{i+1,L_4}\mathcal{Y}_{i,L_2}\right)v_{L_1}. {{\mathfrak{b}}ar{e}}nd{align*} {\mathfrak{b}}egin{align*} s_i s_{i+1} s_i v_{L_6} = s_{i+1} s_i s_{i+1} v_{L_6} =& \left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_0-\kappa_4)^2(\kappa_2-\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_0+\kappa_4)^2(\kappa_2+\kappa_0)}+{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6} \mathcal{Y}_{i, L_3}}{\kappa_2-\kappa_4}\right) v_{L_6} \\ &+\left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_0^2-\kappa_4^2)(\kappa_2-\kappa_0)} -{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_0^2-\kappa_4^2)(\kappa_2+\kappa_0)}\right)c_ic_{i+1}v_{L_6}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-1}{(\kappa_0^2-\kappa_4^2)(\kappa_2+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_0^2-\kappa_4^2)(\kappa_2-\kappa_0)}\right)c_{i+1}c_{i+2}v_{L_6}\\ &+\left({{\mathfrak{b}}ar{f}}rac{1}{(\kappa_0-\kappa_4)^2(\kappa_2+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{1}{(\kappa_4+\kappa_0)^2(\kappa_2-\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}\mathcal{Y}_{i,L_3}}{\kappa_2+\kappa_4}\right) c_ic_{i+2}v_{L_6}\\ 
&+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}}{(\kappa_0-\kappa_4)(\kappa_2-\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}}{(\kappa_2-\kappa_4)(\kappa_4-\kappa_0)}\right)v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{-\mathcal{Y}_{i,L_6}}{(\kappa_4+\kappa_0)(\kappa_2+\kappa_0)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}}{(\kappa_2-\kappa_4)(\kappa_0+\kappa_4)}\right)c_ic_{i+1}v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}}{(\kappa_0+\kappa_4)(\kappa_2-\kappa_0)} -{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}}{(\kappa_2+\kappa_4)(\kappa_0+\kappa_4)}\right) c_{i+1}c_{i+2}v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}}{(\kappa_0-\kappa_4)(\kappa_0+\kappa_2)} +{{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}}{(\kappa_2+\kappa_4)(\kappa_4-\kappa_0)}\right) c_{i} c_{i+2}v_{L_3}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_6}}{(\kappa_0-\kappa_4)(\kappa_2-\kappa_4)}\right)v_{L_5} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_6}}{(\kappa_0-\kappa_4)(\kappa_2+\kappa_4)}\right) c_i c_{i+1} v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_6}}{(\kappa_0+\kappa_4)(\kappa_2-\kappa_4)}\right) c_{i+1}c_{i+2}v_{L_5} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_6}}{(\kappa_0+\kappa_4)(\kappa_2+\kappa_4)}\right) c_ic_{i+2}v_{L_5}\\ &+\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_6}\mathcal{Y}_{i,L_5}}{\kappa_0-\kappa_4}\right)v_{L_4} +\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i+1,L_6}\mathcal{Y}_{i,L_5}}{\kappa_0+\kappa_4}\right) c_{i+1} c_{i+2}v_{L_4}\\ &\left({{\mathfrak{b}}ar{f}}rac{\mathcal{Y}_{i,L_6}\mathcal{Y}_{i+1,L_3}}{\kappa_2-\kappa_0}\right)v_{L_1} +\left({{\mathfrak{b}}ar{f}}rac{(\mathcal{Y}_{i,L_6})(\mathcal{Y}_{i+1,L_3})}{\kappa_2+\kappa_0}\right) c_i c_{i+1}v_{L_1} +\left(\mathcal{Y}_{i,L_6}\mathcal{Y}_{i+1,L_3}\mathcal{Y}_{i,L_6}\right)v_{L_2}. 
{{\mathfrak{b}}ar{e}}nd{align*} } {\mathfrak{b}}egin{figure}[ht] {\mathfrak{b}}egin{picture}(340,60) {\mathfrak{p}}ut(125,27){$L=$} {\mathfrak{p}}ut(162,39){$i$}{\mathfrak{p}}ut(178,39){$i+1$}{\mathfrak{p}}ut(178,14){$i+2$} {\mathfrak{p}}ut(175,5){\line(1,0){25}} {\mathfrak{p}}ut(175,5){\line(0,1){50}}{\mathfrak{p}}ut(200,5){\line(0,1){50}} {\mathfrak{p}}ut(150,30){\line(1,0){50}} {\mathfrak{p}}ut(150,30){\line(0,1){25}} {\mathfrak{p}}ut(150,55){\line(1,0){50}} {{\mathfrak{b}}ar{e}}nd{picture} \caption{Case 6}{\langle}bel{F:Case 6} {{\mathfrak{b}}ar{e}}nd{figure} Case 6: Let $ L $ be as in Figure \ref{F:Case 6}. Then $$ s_i s_{i+1} s_i v_L = s_{i+1} s_i s_{i+1} v_L ={{\mathfrak{b}}ar{f}}rac{1}{{\mathfrak{s}}qrt{2}}(-c_i c_{i+1} v_L + c_i c_{i+2} v_L). $$ {{\mathfrak{b}}ar{e}}nd{proof} Now define an $ {\mathcal{A}}(d)$-module $ H^{{\langle}mbda / \mu} $ to be $ {\mathfrak{s}}um_{w\in S_n} {\mathbf{1}}rphii_w{\mathcal{L}}(c(L)) $ where $ L $ is a fixed standard filling of the shifted skew shape $ {\langle}mbda/ \mu $ and $ \mathcal{L}(c(L))={\mathcal{L}}(c(L_1))\circledast\cdots\circledast{\mathcal{L}}(c(L_d)) $ is an irreducible $ {\mathcal{A}}(d)$ submodule of $ Cl(d) v_L $ introduced in section \ref{SS:characters}. {\mathfrak{b}}egin{prp} The $ \mathcal{A}(d)$-module $ H^{{\langle}mbda / \mu} $ is a $ {\mathcal{A}}Se(d)$-module. {{\mathfrak{b}}ar{e}}nd{prp} {\mathfrak{b}}egin{proof} Let $ c v_L \in H^{{\langle}mbda / \mu}. $ Then $ ({\mathbf{1}}rphii_i - s_i(x_i^2-x_{i+1}^2)) c v_L \in H^{{\langle}mbda / \mu}. $ Note that $ {\mathbf{1}}rphii_i c v_L = {}^{s_i}c {\mathbf{1}}rphii_i v_L = k {}^{s_i}c v_{s_i L} $ where $ {}^{s_i}c=s_ics_i$ denotes the Clifford element twisted by $ s_i. $ This element is in $ H^{{\langle}mbda / \mu} $ because the twisting of the Clifford element $ c $ by $ s_i $ is compatible with the permutation of the zero eigenvalues of the $ x_j's $ by $ s_i. $ Thus $ s_i(x_i^2-x_{i+1}^2) c v_L = k' s_i c v_L \in H^{{\langle}mbda / \mu}. 
$ Since $ (x_i^2-x_{i+1}^2) v_L \neq 0 $ by construction, $ s_i c v_L \in H^{\lambda / \mu}. $
\end{proof}

\begin{thm}
For each shifted skew shape $ \lambda/\mu, $ $ H^{\lambda/\mu} $ is an irreducible $ \mathcal{A}Se(d)$-module. Every irreducible, calibrated $ \mathcal{A}Se(d)$-module is isomorphic to exactly one such $ H^{\lambda/\mu}. $
\end{thm}

\begin{proof}
First we show that $ H^{\lambda / \mu} $ is irreducible. Let $ L $ be a standard tableau of shape $ \lambda / \mu $ such that $ \mathcal{Y}_L \neq 0. $ Let $ N $ be a nonzero submodule of $ H^{\lambda / \mu} $ and let $ v = \sum_Q C_Q v_Q \in N $ be nonzero, where $ C_Q \in Cl(d). $ If $ P \neq L, $ then there exists an $ i $ such that $ x_i v_P \neq x_i v_L. $ Suppose $ \mathcal{Y}_P \neq 0. $ Then $ \frac{x_i-\kappa_{i,P}}{\kappa_{i,L}-\kappa_{i,P}} v $ no longer has a $ v_P $ term but still has a $ v_L $ term. This element is also in $ N. $ Iterating this process, it is clear that $ v_L \in N. $ The set of tableaux is identified with an interval of $ S_n $ under the Bruhat order. The minimal element is the column reading $ C. $ Thus there exists a chain $ C < s_{i_1} C < \cdots < s_{i_p} \cdots s_{i_1} C = L. $ Therefore $ \tau_{i_1} \cdots \tau_{i_p} v_L = \kappa v_C $ for some nonzero complex number $ \kappa. $ This implies $ v_C \in N. $ Now let $ Q $ be an arbitrary standard tableau of $ \lambda / \mu. $ There is a chain $ C < s_{j_1} C < \cdots < s_{j_p} \cdots s_{j_1} C = Q. $ Then $ \tau_{j_p} \cdots \tau_{j_1} v_C = \kappa' v_Q $ for some nonzero complex number $ \kappa'. $ Thus $ v_Q \in N, $ so $ N = H^{\lambda/ \mu}.
$ It is clear by looking at the eigenvalues that if $ \lambda / \mu \neq \lambda' / \mu', $ then $ H^{\lambda / \mu} \neq H^{\lambda' / \mu'}. $

Next, we show that the weight of a calibrated module $ M $ is obtained by reading the contents of a shifted skew shape via a standard filling. That is, if $ (t_1, \ldots, t_d) $ is such a weight, then it is necessary to show that it is equal to $ (c(L_1), \ldots, c(L_d)) $ for some standard tableau $ L. $ It will be shown that if $ t_i = t_j $ for some $ i<j, $ then there exist $ k,l $ with $ i<k<l<j $ such that $ t_k = t_i \pm 1 $ and $ t_l = t_i \mp 1, $ unless $ t_i =t_j= 0, $ in which case there is a $ k $ with $ i < k < j $ such that $ t_k =1. $ Let $ j>i $ be such that $ t_j=t_i $ and $ j-i $ is minimal, let $m_t\in M$ be a nonzero vector of weight $t=(t_1,\ldots,t_d)$, and let $\varrho_i=\sqrt{q(t_i)}$. The proof will be by induction on $ j-i. $

\noindent\textbf{Case 1:} Suppose $ j-i=1. $ First consider the case $ t_i = 0. $ If $ t_i =0, $ then $ t_{i+1}=0 $ by assumption, and then $ x_i s_i m_t = -m_t -c_i c_{i+1} m_t. $ It is clear that $ -m_t - c_i c_{i+1} m_t \neq 0. $ Otherwise, $ m_t = -c_i c_{i+1} m_t, $ which implies, after multiplying both sides by $ c_i c_{i+1}, $ that $ m_t = -m_t, $ giving $ m_t =0. $ Thus $ x_i^2 s_i m_t = 0 $ but $ x_i s_i m_t \neq 0. $ Similarly, $ x_{i+1}^2 s_i m_t = 0 $ but $ x_{i+1} s_i m_t \neq 0. $ Clearly $ (x_k - \varrho_k) s_i m_t = 0 $ for $ k \neq i, i+1. $ Thus if $ t_i = 0, $ then $ s_i m_t \in M^{\text{gen}}_t $ but not in $ M_t, $ contradicting the assumption that $ M $ is calibrated. Now assume $ t_i \neq 0. $ Then $ s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t \in M^{\text{gen}}_t $ but not in $ M_t. $ To see this, calculate:
$$ x_i\left(s_i m_t - \frac{1}{2\varrho_i}c_i c_{i+1} m_t\right) = \varrho_i s_i m_t - \frac{1}{2} c_i c_{i+1} m_t - m_t. $$
This implies $ (x_i - \varrho_i)(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = -m_t \neq 0 $ and $ (x_i - \varrho_i)^2(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = 0. $ Similarly, $ (x_{i+1} - \varrho_{i+1})(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = m_t \neq 0 $ and $ (x_{i+1} - \varrho_{i+1})^2(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = 0. $ If $ k \neq i, i+1, $ then $ (x_k -\varrho_k)(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = 0. $ Thus $ s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t \in M^{\text{gen}}_t $ but not in $ M_t, $ verifying Case 1.

\noindent\textbf{Case 2:} Suppose $ j-i =2. $ Since $ m_t $ is a weight vector, the vector
$$ m_{s_i t}=\varphi_i m_t =(\varrho_i - \varrho_{i+1}) s_i m_t -(\varrho_i-\varrho_{i+1})c_i c_{i+1} m_t + (\varrho_i + \varrho_{i+1}) m_t $$
is a weight vector of weight $ t' = s_i t. $ Then $ t_{i+1}' = t_{i+2}'. $ By Case 1, this is impossible, so $ m_{s_i t}=0. $ Note that $ \varrho_i + \varrho_{i+1} \neq 0: $ if it were zero, then $ m_{s_i t} = 0 $ would imply $ c_i c_{i+1} m_t = 0, $ which would imply $ m_t = 0. $ Thus, $ s_i m_t = \frac{m_t}{\varrho_{i+1}-\varrho_i} + \frac{c_i c_{i+1} m_t}{\varrho_{i+1}+\varrho_i}. $ Since $ s_i^2 m_t = m_t, $ it follows that $ m_t = \frac{2(\varrho_i^2+\varrho_{i+1}^2)}{(\varrho_i^2-\varrho_{i+1}^2)^2} m_t. $ This implies $ 2(\varrho_i^2 + \varrho_{i+1}^2) = (\varrho_i^2-\varrho_{i+1}^2)^2. $ The solutions of this equation are
$$ \varrho_{i+1} \in \lbrace \pm \sqrt{(t_i+1)(t_i+2)},\ \pm \sqrt{(t_i-1)t_i} \rbrace. $$
Since it is assumed that the positive square root is taken, there are only two subcases to investigate. For the first subcase, assume $ \varrho_{i+1}=\sqrt{q(t_i+1)}. $ A routine calculation gives
$$ s_i s_{i+1} s_i m_t = \frac{-s_i m_t}{(\varrho_i-\varrho_{i+1})^2} + \frac{c_i c_{i+2} s_i m_t}{\varrho_{i+1}-\varrho_i} + \frac{c_{i+1}c_{i+2} s_i m_t}{\varrho_i-\varrho_{i+1}} - \frac{c_i c_{i+1} s_i m_t}{(\varrho_i+\varrho_{i+1})^2}. $$
From this it follows that the coefficient of $ m_t $ is $ \frac{1}{(\varrho_i-\varrho_{i+1})^3}+\frac{1}{(\varrho_i+\varrho_{i+1})^3}. $ Similarly, from
$$ s_{i+1} s_{i} s_{i+1} m_t = \frac{-s_{i+1} m_t}{(\varrho_i-\varrho_{i+1})^2} + \frac{c_i c_{i+2} s_{i+1} m_t}{\varrho_i-\varrho_{i+1}} + \frac{c_{i}c_{i+1} s_{i+1} m_t}{\varrho_{i+1}-\varrho_i} - \frac{c_{i+1} c_{i+2} s_{i+1} m_t}{(\varrho_i+\varrho_{i+1})^2} $$
it follows that the coefficient of $ m_t $ is $ \frac{-1}{(\varrho_i-\varrho_{i+1})^3}+\frac{-1}{(\varrho_i+\varrho_{i+1})^3}. $ Therefore $ (\varrho_i-\varrho_{i+1})^3 + (\varrho_i+\varrho_{i+1})^3 =0. $ Recalling that $ \varrho_{i+1}=\sqrt{q(t_i+1)} $ in this subcase, it is clear that $ t_i = t_{i+2}=0 $ and $ t_{i+1} = 1. $ The other subcase is similar.

Now for the induction step. Assume $j-i>2$. If $ t_{j-1} \neq t_j \pm 1, $ then the vector $ \varphi_{j-1} m_t $ is a nonzero weight vector of weight $ t' = s_{j-1} t $ by \cite[Lemma 14.8.1]{kl}. Since $ t_i' = t_i = t_j = t_{j-1}', $ the induction hypothesis may be applied to conclude that there exist $ k $ and $ l $ with $ i < k < l < j-1 $ such that $ t_k' = t_j \pm 1 $ and $ t_l' = t_j \mp 1. $ (In the case $ t_i = t_j = 0, $ there exists $ k $ with $ t_k' =1. $) This implies $ t_k = t_j \pm 1 $ and $ t_l = t_j \mp 1. $ (In the case $ t_i = t_j =0, $ there exists $ k $ with $ t_k = 1. $) Similarly, if $ t_{i+1} \neq t_i \pm 1, $ consider $ \varphi_i m_t $ and proceed by induction. Otherwise, $ t_{i+1} = t_i \pm 1 $ and $ t_{j-1} = t_i \pm 1. $ Since $ i $ and $ j $ are chosen such that $ t_i = t_j $ and $ j-i $ is minimal, $ t_{i+1} \neq t_{j-1}. $ This then gives the conclusion. (If $ t_i = t_j = 0, $ then $ t_{i+1} = 1 $ or $ t_{j-1} = 1. $)

Suppose $ M $ is an irreducible, calibrated $ \mathcal{A}Se(d)$-module such that $ m_t $ is a weight vector with weight $ t = (t_1,\ldots,t_d) $ such that $ t_{i+1} = t_i \pm 1. $ Then $ \varphi_i m_t = 0. $ This follows exactly as in step 5 of \cite[Theorem 4.1]{ram}. Finally, let $ m_t $ be a nonzero weight vector of an irreducible, calibrated module $ M. $ By the above, $ t = (c(L_1), \ldots, c(L_d)) $ for some standard tableau $ L $ of shifted skew shape $ \lambda / \mu. $ The rest of the proof follows as in step 6 of \cite[Theorem 4.1]{ram}.
Choose a word $ w = s_{i_p} \cdots s_{i_1} $ such that $ w $ applied to the column-reading tableau of $ \lambda / \mu $ gives the tableau $ L. $ Then $ m_C = \varphi_{i_1} \cdots \varphi_{i_p} m_t $ is nonzero. Now to any other standard tableau $ Q $ of $ \lambda / \mu $ there is a nonzero weight vector obtained by applying a sequence of intertwiners to $ m_C. $ By the above, $ \varphi_i m_Q = 0 $ if $ s_i Q $ is not standard. Thus the span of the vectors $ \lbrace m_Q \rbrace $ over all standard tableaux of shape $ \lambda /\mu $ is a submodule of $ M. $ Since $ M $ is irreducible, this span must be the entire module. Thus there is an isomorphism $ M \cong H^{\lambda / \mu} $ defined by sending $ \varphi_w m_C $ to $ \varphi_w v_C. $
\end{proof}

\begin{cor}\label{C:CalibratedSimples}
Let $\lambda/\mu$ be a shifted skew shape. Then ${\mathcal{L}}(\lambda,\mu)\cong H^{\lambda/\mu}$.
\end{cor}

\begin{proof}
Let $T$ be the standard tableau obtained by filling in the numbers $1,\ldots,d$ along rows from top to bottom and left to right. Note that if $s_i\in S_{\lambda-\mu}$, then $v_{s_iT}=0$ because $s_iT$ is not standard. By Frobenius reciprocity, it follows that there exists a surjective $\mathcal{A}Se(d)$-homomorphism $f:{\mathcal{M}}(\lambda,\mu)\rightarrow H^{\lambda/\mu}$ given by $f({\mathbf{1}}_{\lambda-\mu})=v_T$.
\end{proof}

Furthermore, by construction we have the following result. Note that this agrees with Leclerc's conjectural formula for the calibrated simple modules of $\mathcal{A}Se(d)$ \cite[Proposition 51]{lec}.

\begin{cor}\label{C:characters}
Let $\lambda/\mu$ be a shifted skew shape.
Then,
\[
\operatorname{ch} {\mathcal{L}} (\lambda, \mu)= \sum_{L} \left[c(L_1),\ldots,c(L_d) \right],
\]
where the sum is over all standard fillings $L$ of the shape $\lambda / \mu$.
\end{cor}

\section{The Lie Superalgebras ${\mathfrak{gl}}(n|n)$ and ${\mathfrak{q}}(n)$}\label{S:Lie algebras}

\subsection{The Algebras}\label{SS:qndfn}
Let $I=\{-n,\ldots,-1,1,\ldots,n\}$, and $I^+=\{1,\ldots,n\}$. Let $V={\mathbb{C}}^{n|n}$ be the $2n$-dimensional vector superspace with standard basis $\{v_i\}_{i\in I}$. The standard basis for the superalgebra $\operatorname{End}(V)$ is the set of matrix units $\{E_{ij}\}_{i,j\in I}$, and the ${\mathbb{Z}}_2$-gradings on $\operatorname{End}(V)$ and $V$ are given by
\[
p(v_k)=\bar{0},\;\;\;p(v_{-k})=\bar{1}, \quad\text{and}\quad p(E_{ij})=p(v_i)+p(v_j)
\]
for $k\in I^+$ and $i,j\in I$. Let $C=\sum_{i\in I^+}(E_{-i,i}-E_{i,-i})$, and let $Q(V)\subset\operatorname{End}(V)$ be the supercentralizer of $C$. Then $Q(V)$ has basis given by the elements
\[
e_{ij}=E_{ij}+E_{-i,-j} \quad\text{and}\quad f_{ij}=E_{-i,j}+E_{i,-j},\;\;\;i,j\in I^+.
\]
When $Q(V)$ and $\operatorname{End}(V)$ are viewed as Lie superalgebras relative to the superbracket
\[
[x,y]=xy-(-1)^{p(x)p(y)}yx
\]
for homogeneous $x,y\in\operatorname{End}(V)$, we denote them by ${\mathfrak{q}}(n)$ and ${\mathfrak{gl}}(n|n)$, respectively. We end this section by introducing important elements of ${\mathfrak{gl}}(n|n)$ that will be needed later. Set
\begin{eqnarray}\label{bar-e/f}
\bar{e}_{ij}=E_{ij}-E_{-i,-j} \quad\text{and}\quad \bar{f}_{ij}=E_{-i,j}-E_{i,-j},\;\;\;i,j\in I^+.
\end{eqnarray}

\subsection{Root Data, Category $\mathcal{O}$, and Verma Modules}\label{SS:RootData}
Fix the triangular decomposition
\[
{\mathfrak{q}}(n)={\mathfrak{n}}^-\oplus{\mathfrak{h}}\oplus{\mathfrak{n}}^+,
\]
where ${\mathfrak{n}}^+_{\bar{0}}$ (resp.\ ${\mathfrak{n}}^-_{\bar{0}}$) is the subalgebra spanned by the $e_{ij}$ for $1\leq i<j\leq n$ (resp.\ $i>j$), ${\mathfrak{h}}_{\bar{0}}$ is spanned by the $e_{ii}$, $1\leq i\leq n$, ${\mathfrak{n}}^+_{\bar{1}}$ (resp.\ ${\mathfrak{n}}^-_{\bar{1}}$) is the subalgebra spanned by the $f_{ij}$ for $1\leq i<j\leq n$ (resp.\ $i>j$), and ${\mathfrak{h}}_{\bar{1}}$ is spanned by the $f_{ii}$, $1\leq i\leq n$. Let ${\mathfrak{b}}^+={\mathfrak{h}}\oplus{\mathfrak{n}}^+$ and let ${\mathfrak{b}}^-={\mathfrak{h}}\oplus{\mathfrak{n}}^-$. The isomorphism ${\mathfrak{q}}(n)_{\bar{0}}\rightarrow{\mathfrak{gl}}(n)$, $e_{ij}\mapsto E_{ij}$, identifies ${\mathfrak{h}}_{\bar{0}}$ with the standard torus of ${\mathfrak{gl}}(n)$. Let $\varepsilon_i\in{\mathfrak{h}}_{\bar{0}}^*$ denote the $i$th coordinate function. For $i\neq j$, define $\alpha_{ij}=\varepsilon_i-\varepsilon_j$, and fix the choice of simple roots $\Delta=\{\alpha_i=\alpha_{i,i+1}\mid 1\leq i<n\}$. The corresponding root system is $R=\{\alpha_{ij}\mid 1\leq i\neq j\leq n\}$, and the positive roots are $R^+=\{\alpha_{ij}\mid 1\leq i<j\leq n\}$. The root lattice is $Q=\sum_{i=1}^{n-1}{\mathbb{Z}}\alpha_i$ and the weight lattice is $P=\sum_{i=1}^n{\mathbb{Z}}\varepsilon_i$. We can, and will, identify $P={\mathbb{Z}}^n$ and $Q=\{\lambda\in P\mid\lambda_1+\cdots+\lambda_n=0\}$. Define the sets of weights $P^+$, $P^{++}$, $P^+_{\mathrm{rat}}$, $P^+_{\mathrm{poly}}$ and $P^+_{\mathrm{pos}}$ as in $\S$\ref{SS:LieThy}. We call these sets dominant, dominant-typical, rational, polynomial, and positive, respectively. Finally, let $P^{++}_{\mathrm{rat}} = P^+_{\mathrm{rat}} \cap P^{++}$ and $P^{++}_{\mathrm{poly}} = P^+_{\mathrm{poly}} \cap P^{++}$.

To begin, let $\mathcal{O}:=\mathcal{O}({\mathfrak{q}}(n))$ denote the category of all finitely generated ${\mathfrak{q}}(n)$-supermodules $M$ that are locally finite dimensional over ${\mathfrak{b}}^+$ and satisfy
\[
M={\bigoplus}_{\lambda\in P}M_\lambda,
\]
where $M_\lambda=\{\,v\in M \mid h.v=\lambda(h)v\mbox{ for all }h\in{\mathfrak{h}}_{\bar{0}}\,\}$ is the $\lambda$-weight space of $M$. We now define two classes of \emph{Verma modules}. To this end, given $\lambda\in P$, let ${\mathbb{C}}_\lambda$ be the $1$-dimensional ${\mathfrak{h}}_{\bar{0}}$-module associated to the weight $\lambda$. Let $\theta_\lambda:{\mathfrak{h}}_{\bar{1}}\rightarrow{\mathbb{C}}$ be given by $\theta_\lambda(k)=\lambda([k,k])$ for all $k\in{\mathfrak{h}}_{\bar{1}}$. Let ${\mathfrak{h}}_{\bar{1}}'=\ker\theta_\lambda$. Let $\overline{{\mathcal{U}}({\mathfrak{h}})}={\mathcal{U}}({\mathfrak{h}})/\mathfrak{i}$, where $\mathfrak{i}$ is the left ideal of ${\mathcal{U}}({\mathfrak{h}})$ generated by $\{\,h-\lambda(h) \mid h\in{\mathfrak{h}}_{\bar{0}}\,\}\cup{\mathfrak{h}}_{\bar{1}}'$. Recall that $\gamma_0(\lambda)=|\{\,i \mid \lambda_i=0\,\}|$. Since $\overline{{\mathcal{U}}({\mathfrak{h}})}$ is isomorphic to a Clifford algebra of rank $n-\gamma_0(\lambda)$, we can define the $\overline{{\mathcal{U}}({\mathfrak{h}})}$-modules $C(\lambda)$ and $E(\lambda)$, where $C(\lambda)$ is the regular representation of the resulting Clifford algebra and $E(\lambda)$ is its unique irreducible quotient.
Both $C(\lambda)$ and $E(\lambda)$ become modules for $\mathcal{U}(\mathfrak{h})$ via inflation through the canonical projection $\mathcal{U}(\mathfrak{h})\to\overline{\mathcal{U}(\mathfrak{h})}$. Note that as a $\mathcal{U}(\mathfrak{h})$-module, $C(\lambda)\cong\operatorname{Ind}_{\mathcal{U}(\mathfrak{h}_{\bar{0}}+\mathfrak{h}_{\bar{1}}')}^{\mathcal{U}(\mathfrak{h})}\mathbb{C}_\lambda$. Extend $C(\lambda)$ and $E(\lambda)$ to representations of $\mathcal{U}(\mathfrak{b}^+)$ by inflation, and define the \emph{Big Verma module} $\widehat{M}(\lambda)$ and the \emph{Little Verma module} $M(\lambda)$ by \[ \widehat{M}(\lambda)=\operatorname{Ind}_{\mathcal{U}(\mathfrak{b}^+)}^{\mathcal{U}(\mathfrak{q}(n))}C(\lambda) \qquad\mbox{and}\qquad M(\lambda)=\operatorname{Ind}_{\mathcal{U}(\mathfrak{b}^+)}^{\mathcal{U}(\mathfrak{q}(n))}E(\lambda). \] The following lemma is obtained from the standard decomposition of the Clifford algebra into irreducible modules. \begin{lem}\label{L:little verma in big verma} We have $\widehat{M}(\lambda)\cong M(\lambda)^{\oplus 2^{\lfloor\frac{n-\gamma_0(\lambda)}{2}\rfloor}}$. \end{lem} It is known that $M(\lambda)$ has a unique irreducible quotient $L(\lambda)$ (see, for example, \cite{g}). Moreover, it is known that $L(\lambda)$ is finite dimensional if, and only if, $\lambda\in P_{\mathrm{rat}}$ (see \cite{p}). The following lemma seems standard, but we cannot find it stated in the literature. See \cite[Corollary 7.1, 11.6]{g} for related statements. If $M$ is a $\mathcal{U}(\mathfrak{q}(n))$-module, then recall that a vector $m\in M$ is called \emph{primitive} if $\mathfrak{n}^+.m=0$.
\begin{lem}\label{L:InjHom} Let $\lambda\in P$, and assume that for some $\alpha\in R^+$ there exists $r>0$ such that $s_\alpha\lambda=\lambda-r\alpha$. Then there exists an injective homomorphism \[ M(s_\alpha\lambda)\rightarrow M(\lambda). \] \end{lem} \begin{proof} Let $\alpha=\alpha_{ij}$, and let $v_\lambda\in M(\lambda)_\lambda$ be an odd primitive vector. Then a direct calculation verifies that \[ v_{\lambda-r\alpha}:=e_{ji}^{r-1}(rf_{ji}-e_{ji}(f_{ii}-f_{jj})).v_\lambda \] is a primitive vector of weight $\lambda-r\alpha$ (see, for example, \cite[Corollary 7.1]{g}). This implies that there is an injective $\mathcal{U}(\mathfrak{b}^+)$-homomorphism \[ E(s_\alpha\lambda)\to\mathcal{U}(\mathfrak{h}).v_{\lambda-r\alpha}. \] Indeed, clearly every vector in $\mathcal{U}(\mathfrak{h}).v_{\lambda-r\alpha}$ has weight $\lambda-r\alpha$. Moreover, if $N\in\mathcal{U}(\mathfrak{n}^+)$ and $H\in\mathcal{U}(\mathfrak{h})$, then $[N,H]\in\mathcal{U}(\mathfrak{n}^+)$, so \[ N.(H.v_{\lambda-r\alpha})=(HN+[N,H]).v_{\lambda-r\alpha}=0. \] The result follows because, by our choice of primitive vector, a standard argument using the filtration of $\mathcal{U}(\mathfrak{q}(n))$ by total degree and a calculation in $\mathcal{U}(\mathfrak{q}(2))$ shows that $\mathcal{U}(\mathfrak{b}^-).v_{\lambda-r\alpha}$ is a free $\mathcal{U}(\mathfrak{n}^-)$-module. \end{proof} \subsection{The Shapovalov Form}\label{SS:ShapovalovForm} The Shapovalov map for $\mathfrak{q}(n)$ was constructed in \cite{g}. We briefly review this construction. Let $\mathcal{D}$ be the category of $Q^-=-Q^+$-graded $\mathfrak{q}(n)$-modules, with morphisms of degree $0$ with respect to this grading.
We regard the big and little Verma modules as objects in this category by declaring $\deg M(\lambda)_{\lambda-\nu}=-\nu$ for all $\nu\in Q^+$. Let $\mathcal{C}$ be the category of left $\mathfrak{h}$-modules. Let $\Psi_0:\mathcal{D}\rightarrow\mathcal{C}$ be the functor $\Psi_0(N)=N_0$ (i.e.\ the degree $0$ component). The functor $\Psi_0$ has a left adjoint $\operatorname{Ind}:\mathcal{C}\rightarrow\mathcal{D}$ given by $\operatorname{Ind} A=\operatorname{Ind}_{\mathfrak{b}^+}^{\mathfrak{q}(n)}A$, where we regard the $\mathfrak{h}$-module $A$ as a $\mathfrak{b}^+$-module by inflation. The functor $\Psi_0$ also has an exact right adjoint $\operatorname{Coind}$ (see \cite[Proposition 4.3]{g}). As in \cite{g}, let $\theta(A):\operatorname{Ind} A\rightarrow\operatorname{Coind} A$ be the morphism corresponding to the identity map $\mathrm{id}_A:A\rightarrow A$. This induces a morphism of functors $\theta:\operatorname{Ind}\rightarrow\operatorname{Coind}$. The main property we will use is the following. \begin{thm}\cite[Proposition 4.4]{g} The kernel $\ker\theta(A)$ is the maximal graded submodule of $\operatorname{Ind} A$ which intersects $A$ trivially. \end{thm} Define the Shapovalov map $S:=\theta(\mathcal{U}(\mathfrak{h})):\operatorname{Ind}(\mathcal{U}(\mathfrak{h}))\rightarrow\operatorname{Coind}(\mathcal{U}(\mathfrak{h}))$. Given an object $A$ in $\mathcal{C}$, Proposition 4.3 of \cite{g} shows there are canonical isomorphisms $\operatorname{Ind} A\cong\operatorname{Ind}\mathcal{U}(\mathfrak{h})\otimes_{\mathcal{U}(\mathfrak{h})}A$ and $\operatorname{Coind} A\cong\operatorname{Coind}\mathcal{U}(\mathfrak{h})\otimes_{\mathcal{U}(\mathfrak{h})}A$.
In this way, we may identify $\theta(A)$ with $\theta(\mathcal{U}(\mathfrak{h}))\otimes_{\mathcal{U}(\mathfrak{h})}\mathrm{id}_A$. It follows that the map $\theta(A)$ is completely determined by the Shapovalov map. In order to describe $S$ in more detail, we introduce some auxiliary data. Let $\varsigma:\mathcal{U}(\mathfrak{q}(n))\rightarrow\mathcal{U}(\mathfrak{q}(n))$ be the antiautomorphism defined by $\varsigma(x)=-x$ for all $x\in\mathfrak{q}(n)$ and extended to $\mathcal{U}(\mathfrak{q}(n))$ by the rule $\varsigma(xy)=(-1)^{p(x)p(y)}\varsigma(y)\varsigma(x)$ for $x,y\in\mathcal{U}(\mathfrak{q}(n))$. Also, define the Harish-Chandra projection $HC:\mathcal{U}(\mathfrak{q}(n))\rightarrow\mathcal{U}(\mathfrak{h})$ along the decomposition \[ \mathcal{U}(\mathfrak{q}(n))=\mathcal{U}(\mathfrak{h})\oplus(\mathcal{U}(\mathfrak{q}(n))\mathfrak{n}^++\mathfrak{n}^-\mathcal{U}(\mathfrak{q}(n))). \] Now, we may naturally identify $\operatorname{Ind}\mathcal{U}(\mathfrak{h})\cong\mathcal{U}(\mathfrak{b}^-)$ as $(\mathfrak{b}^-,\mathfrak{h})$-bimodules. The $Q^-$-grading on $\mathcal{U}(\mathfrak{b}^-)$ is given by \begin{eqnarray}\label{E:Q grading of bminus} \mathcal{U}(\mathfrak{b}^-)_{-\nu}=\{\,x\in\mathcal{U}(\mathfrak{b}^-)\mid[h,x]=-\nu(h)x\mbox{ for all }h\in\mathfrak{h}_{\bar{0}}\,\} \end{eqnarray} for all $\nu\in Q^+$. To describe $\operatorname{Coind}\mathcal{U}(\mathfrak{h})$, let $\mathcal{D}_+$ be the category of $Q^+$-graded modules and let $\operatorname{Ind}_+$ be the left adjoint to the functor $\Psi_0^+:\mathcal{D}_+\rightarrow\mathcal{C}$.
We may naturally identify $\operatorname{Ind}_+\mathcal{U}(\mathfrak{h})\cong\mathcal{U}(\mathfrak{b}^+)$ as $(\mathfrak{b}^+,\mathfrak{h})$-bimodules, and $\mathcal{U}(\mathfrak{b}^+)$ has a $Q^+$-grading analogous to \eqref{E:Q grading of bminus}. Now, let $\mathcal{U}(\mathfrak{h})^\varsigma$ be the $(\mathfrak{h},\mathfrak{h})$-bimodule obtained by twisting the action of $\mathfrak{h}$ with $\varsigma$. That is, $h.x=(-1)^{p(h)p(x)}\varsigma(h)x$ and $x.h=(-1)^{p(h)p(x)}x\varsigma(h)$ for all $x\in\mathcal{U}(\mathfrak{h})^\varsigma$ and $h\in\mathfrak{h}$. Then there is a natural identification of $\operatorname{Coind}\mathcal{U}(\mathfrak{h})$ with the graded dual of $\mathcal{U}(\mathfrak{b}^+)$ as $(\mathcal{U}(\mathfrak{q}(n)),\mathcal{U}(\mathfrak{h}))$-bimodules: \[ \operatorname{Coind}\mathcal{U}(\mathfrak{h})\cong\mathcal{U}(\mathfrak{b}^+)^{\#}:=\bigoplus_{\nu\in Q^+}\operatorname{Hom}_{\mathcal{C}}(\mathcal{U}(\mathfrak{b}^+)_\nu,\mathcal{U}(\mathfrak{h})^\varsigma), \] see \cite[Proposition 4.3(iii)]{g}. Observe that $\mathcal{U}(\mathfrak{b}^+)^{\#}$ has a $Q^-$-grading given by $\mathcal{U}(\mathfrak{b}^+)^{\#}_{-\nu}=\operatorname{Hom}_{\mathcal{C}}(\mathcal{U}(\mathfrak{b}^+)_\nu,\mathcal{U}(\mathfrak{h})^\varsigma)$. Using these identifications, we may realize the Shapovalov map via the formula \[ S(x)(y)=(-1)^{p(x)p(y)}HC(\varsigma(y)x) \] for $x\in\mathcal{U}(\mathfrak{b}^-)$ and $y\in\mathcal{U}(\mathfrak{b}^+)$; see \cite[$\S$4.2.4, Claim 3]{g}. The Shapovalov map is homogeneous of degree $0$.
Therefore $S=\sum_{\nu\in Q^+}S_\nu$, where $S_\nu:\mathcal{U}(\mathfrak{b}^-)_{-\nu}\rightarrow\mathcal{U}(\mathfrak{b}^+)^{\#}_{-\nu}$ is given by restriction. For our purposes, it is more convenient to introduce a bilinear form \[ (\cdot,\cdot)_S:\mathcal{U}(\mathfrak{q}(n))\otimes\mathcal{U}(\mathfrak{q}(n))\rightarrow\mathcal{U}(\mathfrak{h}) \] with the property that $\operatorname{Rad}(\cdot,\cdot)_S=\ker S$. To do this we introduce the (non-super) \emph{transpose} antiautomorphism $\tau:\mathcal{U}(\mathfrak{q}(n))\rightarrow\mathcal{U}(\mathfrak{q}(n))$ given by $\tau(x)=x^t$ if $x\in\mathfrak{q}(n)$ and extended to $\mathcal{U}(\mathfrak{q}(n))$ by $\tau(xy)=\tau(y)\tau(x)$. Note that this is the ``naive'' antiautomorphism introduced in \cite{g}. Define $(\cdot,\cdot)_S$ by \[ (u,v)_S=(-1)^{p(u)p(v)}S(v)(\varsigma\tau(u))=HC(\tau(u)v) \] for all $u,v\in\mathcal{U}(\mathfrak{q}(n))$. \begin{prp} The radical of the form may be identified as $\operatorname{Rad}(\cdot,\cdot)_S=\ker S$. \end{prp} \begin{proof} Assume $u\in\ker S$ and $v\in\mathcal{U}(\mathfrak{b}^-)$. Then $\tau(v)\in\mathcal{U}(\mathfrak{b}^+)$ and \[ (\tau\varsigma(v),u)_S=(-1)^{p(u)p(v)}S(u)(\varsigma\tau\tau\varsigma(v))=(-1)^{p(u)p(v)}S(u)(v)=0, \] showing that $u\in\operatorname{Rad}(\cdot,\cdot)_S$. Conversely, assume $u\in\operatorname{Rad}(\cdot,\cdot)_S$ and $v\in\mathcal{U}(\mathfrak{b}^+)$. Then $\tau\varsigma(v)\in\mathcal{U}(\mathfrak{b}^-)$ and \[ 0=(\tau\varsigma(v),u)_S=(-1)^{p(u)p(v)}S(u)(\varsigma\tau\tau\varsigma(v))=(-1)^{p(u)p(v)}S(u)(v). \] Hence $u\in\ker S$. \end{proof} \begin{rmk} We have already defined $\tau$ to be an antiautomorphism of the AHCA.
We will show the compatibility of the two antiautomorphisms in Proposition \ref{P:when tau's collide}. \end{rmk} \section{A Lie-Theoretic Construction of $\mathcal{AS}(d)$}\label{S:LieTheoreticConstr} Let $X$ be a $\mathfrak{q}(n)$-supermodule. In this section we construct a homomorphism of superalgebras \[ \mathcal{AS}(d)\rightarrow\operatorname{End}_{\mathfrak{q}(n)}(X\otimes V^{\otimes d}) \] along the lines of Arakawa and Suzuki \cite{as}. The main difficulty is the lack of an even invariant bilinear form and, consequently, the lack of a suitable Casimir element in $\mathfrak{q}(n)^{\otimes 2}$. However, we find inspiration for a suitable substitute in Olshanski's work in the quantum setting \cite{o}. \subsection{Lie Bialgebra Structures on $\mathfrak{q}(n)$} We begin by reviewing the construction of a Manin triple for $\mathfrak{q}(n)$ from \cite{o} (see also \cite{d1}). A Manin triple $(\mathfrak{p},\mathfrak{p}_1,\mathfrak{p}_2)$ consists of a Lie superalgebra $\mathfrak{p}$ equipped with a nondegenerate, even, invariant, symmetric bilinear form $B$, together with two subalgebras $\mathfrak{p}_1$ and $\mathfrak{p}_2$ which are $B$-isotropic transversal subspaces of $\mathfrak{p}$. Then $B$ defines a nondegenerate pairing between $\mathfrak{p}_1$ and $\mathfrak{p}_2$. Define a cobracket $\Delta:\mathfrak{p}_1\rightarrow\mathfrak{p}_1^{\otimes 2}$ by dualizing the bracket $\mathfrak{p}_2^{\otimes 2}\rightarrow\mathfrak{p}_2$: \[ B^{\otimes 2}(\Delta(X),Y_1\otimes Y_2)=B(X,[Y_1,Y_2]),\;\;\;(X\in\mathfrak{p}_1). \] The pair $(\mathfrak{p}_1,\Delta)$ is then called a Lie (super)bialgebra. Choose a basis $\{X_\alpha\}$ for $\mathfrak{p}_1$ and a basis $\{Y_\alpha\}$ for $\mathfrak{p}_2$ such that $B(X_\alpha,Y_\beta)=\delta_{\alpha\beta}$, and set $s=\sum_\alpha X_\alpha\otimes Y_\alpha$.
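As a sanity check on this recipe (an illustration not taken from the paper, and one that ignores the Koszul signs present in the super case), in the purely even setting of $\mathfrak{gl}(2)$ the analogous element is the classical $r$-matrix $s=E_{12}\otimes E_{21}+\frac{1}{2}(E_{11}\otimes E_{11}+E_{22}\otimes E_{22})$, and one can verify numerically that it satisfies the classical Yang-Baxter equation $[s^{12},s^{13}]+[s^{12},s^{23}]+[s^{13},s^{23}]=0$:

```python
import numpy as np

def E(i, j, n=2):
    """Matrix unit E_ij for gl(n)."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def embed(a, b, slots):
    """Place the 2-tensor a (x) b into the given two slots of a 3-fold tensor product."""
    mats = [np.eye(2)] * 3
    mats[slots[0]], mats[slots[1]] = a, b
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# Classical r-matrix for gl(2): s = E12 (x) E21 + (1/2)(E11 (x) E11 + E22 (x) E22).
terms = [(E(0, 1), E(1, 0), 1.0), (E(0, 0), E(0, 0), 0.5), (E(1, 1), E(1, 1), 0.5)]

def s_in(slots):
    return sum(c * embed(a, b, slots) for a, b, c in terms)

s12, s13, s23 = s_in((0, 1)), s_in((0, 2)), s_in((1, 2))
comm = lambda x, y: x @ y - y @ x
cybe = comm(s12, s13) + comm(s12, s23) + comm(s13, s23)
assert np.allclose(cybe, 0)  # the classical Yang-Baxter equation holds
```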
Then it turns out that $s$ satisfies the classical Yang-Baxter equation \[ [s^{12},s^{13}]+[s^{12},s^{23}]+[s^{13},s^{23}]=0 \] and $\Delta(X)=[1\otimes X+X\otimes 1,s]$ for $X\in\mathfrak{p}_1$. \subsection{The Super Casimir} Note that when $\mathfrak{p}=\mathfrak{g}$ is a simple Lie algebra, $\mathfrak{p}_1=\mathfrak{b}_+$ and $\mathfrak{p}_2=\mathfrak{b}_-$ are the positive and negative Borel subalgebras, and $B$ is the trace form, $s$ becomes the classical $r$-matrix, which we will denote $r^{12}$. We can repeat this construction with the roles of $\mathfrak{p}_1$ and $\mathfrak{p}_2$ reversed and obtain another classical $r$-matrix, which we denote $r^{21}$. Then the Casimir is simply $\Omega=r^{12}+r^{21}$; see \cite{as} $\S 1.2$. In \cite{o}, Olshanski constructs such an element $s$ for $\mathfrak{p}=\mathfrak{gl}(n|n)$, $\mathfrak{p}_1=\mathfrak{q}(n)$ and some fixed choice of $\mathfrak{p}_2$ analogous to a positive Borel. We will review this construction to obtain an element which we call $s_+$, then replace $\mathfrak{p}_2$ with an analogue of a negative Borel to obtain another element called $s_-$. Then we show that the element $\Omega=s_++s_-$ performs the role of the Casimir in our setting. \begin{dfn} Let $\mathfrak{p}=\mathfrak{gl}(n|n)$, $B(x,y)=\operatorname{str}(xy)$ (where $\operatorname{str}(E_{ij})=\delta_{ij}\mathrm{sgn}(i)$ for $i,j\in I$), and $\mathfrak{p}_1=\mathfrak{q}(n)$. \begin{enumerate} \item Let \[ \mathfrak{p}_2^+=\sum_{i\in I^+}\mathbb{C}(E_{ii}-E_{-i,-i})+\sum_{\substack{i,j\in I,\\ i<j}}\mathbb{C} E_{ij}.
\] Then the corresponding element $s_+$ is given by \[ s_+=\frac12\sum_{i\in I^+}e_{ii}\otimes\bar{e}_{ii}+\sum_{\substack{i,j\in I^+\\i>j}}e_{ij}\otimes E_{ji}-\sum_{\substack{i,j\in I^+\\i<j}}e_{ij}\otimes E_{-j,-i}-\sum_{i,j\in I^+}f_{ij}\otimes E_{-j,i}. \] \item Let \[ \mathfrak{p}_2^-=\sum_{i\in I^+}\mathbb{C}(E_{ii}-E_{-i,-i})+\sum_{\substack{i,j\in I,\\ i>j}}\mathbb{C} E_{ij}. \] Then the corresponding element $s_-$ is given by \[ s_-=\frac12\sum_{i\in I^+}e_{ii}\otimes\bar{e}_{ii}-\sum_{\substack{i,j\in I^+\\i>j}}e_{ij}\otimes E_{-j,-i}+\sum_{\substack{i,j\in I^+\\i<j}}e_{ij}\otimes E_{ji}+\sum_{i,j\in I^+}f_{ij}\otimes E_{j,-i}. \] \end{enumerate} \end{dfn} We now define our substitute Casimir: \begin{eqnarray}\label{casimir} \Omega=s_++s_-=\sum_{i,j\in I^+}e_{ij}\otimes\bar{e}_{ji}-\sum_{i,j\in I^+}f_{ij}\otimes\bar{f}_{ji}\in Q(V)\otimes\operatorname{End}(V), \end{eqnarray} where $\bar{e}_{ij}$ and $\bar{f}_{ij}$ are given in \eqref{bar-e/f}. \subsection{Classical Sergeev Duality}\label{SS:Sergeev Duality} We now recall Sergeev's duality between $\mathcal{S}(d)$ and $\mathfrak{q}(n)$. Recall the matrix $C=\sum_{i\in I^+}\bar{f}_{ii}$ from the previous section, and define the superpermutation operator \[ S=\sum_{i,j\in I}\mathrm{sgn}(j)E_{ij}\otimes E_{ji}\in\operatorname{End}(V)^{\otimes 2}, \] where $\mathrm{sgn}(j)$ is the sign of $j$.
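The operator $C$ squares to the identity, matching the Clifford relation satisfied by the generators $c_i$ of $\mathcal{S}(d)$. A quick numerical check (an illustration under the assumed convention $\bar{f}_{ii}=E_{i,-i}+E_{-i,i}$; the defining equation \eqref{bar-e/f} lies outside this excerpt, so the sign convention here is an assumption):

```python
import numpy as np

n = 3
# Basis of V = C^(n|n), ordered v_1,...,v_n (even), then v_{-1},...,v_{-n} (odd).
def E(i, j):
    """Matrix unit E_ij of gl(n|n); indices 1..n hit the even block, -1..-n the odd one."""
    idx = lambda k: k - 1 if k > 0 else n - k - 1
    m = np.zeros((2 * n, 2 * n))
    m[idx(i), idx(j)] = 1.0
    return m

# Assumed convention: f_bar_ii = E_{i,-i} + E_{-i,i}, so C swaps the even and odd copies.
C = sum(E(i, -i) + E(-i, i) for i in range(1, n + 1))
assert np.allclose(C @ C, np.eye(2 * n))  # C squared is the identity
```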
Let $\pi_i:\operatorname{End}(V)\rightarrow\operatorname{End}(V)^{\otimes d}$ be given by $\pi_i(x)=1^{\otimes i-1}\otimes x\otimes 1^{\otimes d-i}$ for all $x\in\operatorname{End}(V)$ and $i=1,\ldots,d$; similarly, define $\pi_{ij}:\operatorname{End}(V)^{\otimes 2}\rightarrow\operatorname{End}(V)^{\otimes d}$ by $\pi_{ij}(x\otimes y)=1^{\otimes i-1}\otimes x\otimes 1^{\otimes j-i-1}\otimes y\otimes 1^{\otimes d-j}$. Set $C_i=\pi_i(C)$ and, for $1\leq i<j\leq d$, set $S_{ij}=\pi_{ij}(S)$. Then: \begin{thm}\label{Sergeev Duality Theorem}\cite[Theorem 3]{s} The map which sends $c_i\mapsto C_i$ and $s_i\mapsto S_{i,i+1}$ is an isomorphism of superalgebras \[ \mathcal{S}(d)\rightarrow\operatorname{End}_{\mathfrak{q}(n)}(V^{\otimes d}). \] \end{thm} \subsection{$\mathcal{AS}(d)$-Action}\label{SS:action} Let $M$ be a $\mathfrak{q}(n)$-supermodule. In this section we construct an action of $\mathcal{AS}(d)$ on $M\otimes V^{\otimes d}$ that commutes with the action of $\mathfrak{q}(n)$. To this end, extend the map $\pi_i$ from $\S$\ref{SS:Sergeev Duality} to a map $\pi_i:\operatorname{End}(V)\rightarrow\operatorname{End}(V)^{\otimes d+1}$ so that $\pi_i(x)=1^{\otimes i}\otimes x\otimes 1^{\otimes d-i}$ for $x\in\operatorname{End}(V)$ and $i=0,\ldots,d$ (i.e.\ add a 0th tensor place); similarly, extend $\pi_{ij}$. Define $C_i$ and $S_{ij}$ as in $\S$\ref{SS:Sergeev Duality}. Define \[ \Omega_{ij}=\pi_{ij}(\Omega),\;\;\;0\leq i<j\leq d, \] and set $X_i=\Omega_{0i}+\sum_{1\leq j<i}(1-C_jC_i)S_{ji}$. \begin{thm}\label{Affine Sergeev Action} Let $M$ be a $\mathfrak{q}(n)$-supermodule.
Then the map which sends $c_i\mapsto C_i$, $s_i\mapsto S_{i,i+1}$ and $x_i\mapsto X_i$ defines a homomorphism \[ \mathcal{AS}(d)\rightarrow\operatorname{End}_{\mathfrak{q}(n)}(M\otimes V^{\otimes d}). \] \end{thm} \begin{proof} It is clear from Theorem \ref{Sergeev Duality Theorem} that the $C_i$ and $S_{i,i+1}$ form a copy of the Sergeev algebra $\mathcal{S}(d)$ inside $\operatorname{End}_{\mathfrak{q}(n)}(M\otimes V^{\otimes d})$ via the obvious embedding $\operatorname{End}_{\mathfrak{q}(n)}(V^{\otimes d})\hookrightarrow\operatorname{End}_{\mathfrak{q}(n)}(M\otimes V^{\otimes d})$, $A\mapsto\mathrm{id}_M\otimes A$. Moreover, for $i=1,\ldots,d$, $X_i\in\operatorname{End}(M\otimes V^{\otimes d})$, since $X_i\in Q(n)\otimes\operatorname{End}(V)^{\otimes d}$. Therefore it is enough to check the following properties: \begin{enumerate} \item[(a)] the $X_i$ satisfy the mixed relations \eqref{c&x} and \eqref{s&x}; \item[(b)] $X_iX_j-X_jX_i=0$; and \item[(c)] the $X_i$ commute with the action of $\mathfrak{q}(n)$ on $M\otimes V^{\otimes d}$. \end{enumerate} First, we check that $\Omega(1\otimes C)=-(1\otimes C)\Omega$. To do this, a calculation shows that $C\bar{e}_{ji}=-\bar{e}_{ji}C$ and $C\bar{f}_{ji}=\bar{f}_{ji}C$. Hence, \[ (1\otimes C)(e_{ij}\otimes\bar{e}_{ji})=-(e_{ij}\otimes\bar{e}_{ji})(1\otimes C) \] and \begin{eqnarray*} (1\otimes C)(f_{ij}\otimes\bar{f}_{ji})&=&(-1)^{p(f_{ij})p(C)}(f_{ij}\otimes C\bar{f}_{ji})\\ &=&(-1)^{p(\bar{f}_{ji})p(C)}(f_{ij}\otimes\bar{f}_{ji}C)\\ &=&(-1)^{p(\bar{f}_{ji})p(C)+p(1)p(\bar{f}_{ji})}(f_{ij}\otimes\bar{f}_{ji})(1\otimes C), \end{eqnarray*} so the result follows since $p(1)=\bar{0}$.
Next, it is easy to see that $S_{i,i+1}\Omega_{0i}S_{i,i+1}=\Omega_{0,i+1}$ using \eqref{tensor product rule-algebra}. Therefore, (a) follows from the definition of $X_i$. It is now easy to show that, for $i<j$, (b) is equivalent to \[ \Omega_{0i}\Omega_{0j}-\Omega_{0j}\Omega_{0i}=(\Omega_{0j}-\Omega_{0i})S_{ij}+(\Omega_{0j}+\Omega_{0i})C_iC_jS_{ij}. \] This equality is then a direct calculation. Finally, to verify (c), it is enough to show that for any $X\in\mathfrak{q}(n)$, \[ [1\otimes X+X\otimes 1,\Omega]=0. \] This is another routine calculation using \eqref{tensor product rule-algebra}. \end{proof} Now, recall the ``naive'' antiautomorphism $\tau:\mathcal{U}(\mathfrak{q}(n))\rightarrow\mathcal{U}(\mathfrak{q}(n))$. This extends to an antiautomorphism of $\mathcal{U}(\mathfrak{gl}(n|n))$. Extend $\tau$ to an antiautomorphism of $\mathcal{U}(\mathfrak{gl}(n|n))^{\otimes 2}$ by $\tau(x\otimes y)=(-1)^{p(x)}\tau(x)\otimes\tau(y)$. By induction, extend $\tau$ to an antiautomorphism of $\mathcal{U}(\mathfrak{gl}(n|n))^{\otimes k}$ by $\tau(x_1\otimes\cdots\otimes x_k)=(-1)^{p(x_1)}\tau(x_1)\otimes\tau(x_2\otimes\cdots\otimes x_k)$. A direct check verifies the following result. \begin{prp}\label{P:when tau's collide} We have $\tau(C_i)=-C_i$, $\tau(S_{i,i+1})=S_{i,i+1}$ and $\tau(X_i)=X_i$ for all admissible $i$. In particular, the antiautomorphism $\tau^{\otimes d+1}:\mathcal{U}(\mathfrak{gl}(n|n))^{\otimes d+1}\rightarrow\mathcal{U}(\mathfrak{gl}(n|n))^{\otimes d+1}$ coincides with the antiautomorphism $\tau:\mathcal{AS}(d)\rightarrow\mathcal{AS}(d)$. \end{prp} \subsection{The Functor $F_\lambda$}\label{SS:Flambda} In the previous section, we showed that there is a homomorphism from $\mathcal{AS}(d)$ to $\operatorname{End}_{\mathfrak{q}(n)}(M\otimes V^{\otimes d})$.
Since the action of $\mathcal{AS}(d)$ on $M\otimes V^{\otimes d}$ commutes with the action of $\mathfrak{q}(n)$, it preserves both primitive vectors and weight spaces. By a \emph{primitive vector} we mean an element of $M\otimes V^{\otimes d}$ which is annihilated by the subalgebra $\mathfrak{n}^+$ from the triangular decomposition of $\mathfrak{q}(n)$ in Section~\ref{SS:RootData}. Therefore, given a weight $\lambda\in P(M\otimes V^{\otimes d})$, we have an action of $\mathcal{AS}(d)$ on \begin{equation}\label{E:Flambdadef} F_\lambda M:=\left\{m\in M\otimes V^{\otimes d}\mid\mathfrak{n}^+.m=0\text{ and }m\in\left(M\otimes V^{\otimes d}\right)_\lambda\right\}. \end{equation} In the case when $\lambda\in P^{++}$ we can provide alternative descriptions of the functor $F_\lambda$. First we recall the following key result of Penkov \cite{p}. Given a weight $\lambda\in P$, we write $\chi_\lambda$ for the central character defined by the simple $\mathfrak{q}(n)$-module of highest weight $\lambda$. Then there is a block decomposition \begin{equation}\label{E:blockdecomp} \mathcal{O}(\mathfrak{q}(n))=\bigoplus_{\chi_\lambda}\mathcal{O}(\mathfrak{q}(n))^{[\lambda]}, \end{equation} where the sum is over all central characters $\chi_\lambda$ and $\mathcal{O}(\mathfrak{q}(n))^{[\lambda]}=\mathcal{O}(\mathfrak{q}(n))^{[\chi_\lambda]}$ denotes the block determined by the central character $\chi_\lambda$.
Given $N$ in $\mathcal{O}(\mathfrak{q}(n))$, let $N^{[\chi_\gamma]}=N^{[\gamma]}$ denote the projection of $N$ onto the direct summand which lies in $\mathcal{O}(\mathfrak{q}(n))^{[\chi_\gamma]}$. The question then becomes to describe when $\chi_\lambda=\chi_\mu$ for $\lambda,\mu\in P$. This is answered in the case when $\lambda$ is typical by the following result of Penkov \cite{p}. Recall that the symmetric group acts on $P$ by permutation of coordinates. \begin{prp}\label{P:penkov} Let $\lambda\in P^{++}$ be a typical weight and let $\mu\in P$. Then $\chi_\lambda=\chi_\mu$ if and only if $\mu=w(\lambda)$ for some $w\in S_n$. \end{prp} For short, we call a weight $\lambda\in P$ \emph{atypical} if it is not typical. By the description of the blocks $\mathcal{O}(\mathfrak{q}(n))^{[\lambda]}$, if $L(\mu)$ is an object of $\mathcal{O}(\mathfrak{q}(n))^{[\lambda]}$, then $\lambda$ is typical if and only if $\mu$ is typical (cf.\ \cite[Proposition 1.1]{ps2} and the remarks which follow it). We then have the following preparatory lemma. \begin{lem} Let $\lambda,\gamma\in P$. Then the following statements hold: \begin{enumerate} \item[(i)] Assume $\gamma$ is atypical and $\lambda$ is typical. If $N$ is an object of $\mathcal{O}^{[\gamma]}$, then $N_\lambda=(\mathfrak{n}^-N)_\lambda$. \item[(ii)] Assume $\lambda,\gamma\in P^{++}$ are typical and dominant with $\lambda\neq\gamma$. If $N$ is an object of $\mathcal{O}^{[\gamma]}$, then $N_\lambda=(\mathfrak{n}^-N)_\lambda$.
\end{enumerate} \end{lem} \begin{proof} By \cite[Lemma 4.5]{b}, every object of $\mathcal{O}(\mathfrak{q}(n))$ has a finite Jordan-H\"{o}lder series. The proof of (i) is by induction on the length of a composition series of $N$. The base case is when $N$ has length one (i.e.\ $N\cong L(\nu)$ is a simple module). In this case, for $N_\lambda$ to be nontrivial it must be that $\lambda\leq\nu$; since $\nu$ is atypical (because $L(\nu)$ is an object of $\mathcal{O}^{[\gamma]}$) while $\lambda$ is typical, in fact $\lambda<\nu$, and the claim follows. Now consider a composition series \[ 0=N_0\subset N_1\subset\dotsb\subset N_t=N. \] Let $v\in N_\lambda$ be such that $v+N_{t-1}\in N_t/N_{t-1}$ is nonzero. Since $N_t/N_{t-1}$ is a simple module in $\mathcal{O}^{[\gamma]}$, by the base case there exist $w\in N_t=N$ and $y\in\mathfrak{n}^-$ so that $yw+N_{t-1}=v+N_{t-1}$. Thus $v-yw\in N_{t-1}$ and is of weight $\lambda$. By the inductive assumption, there exist $w'\in N_{t-1}\subset N$ and $y'\in\mathfrak{n}^-$ such that $y'w'=v-yw$. That is, $v=yw+y'w'\in\mathfrak{n}^-N$. This proves the desired result. Now, (ii) follows by a similar argument by induction on the length of a composition series. If $N$ is simple and $N_\lambda\neq 0$, then $\lambda$ is not the highest weight of $N$ (as $\gamma$ is the unique dominant highest weight among the simple modules in $\mathcal{O}^{[\gamma]}$ by Proposition~\ref{P:penkov}). From this it immediately follows that $N_\lambda=(\mathfrak{n}^-N)_\lambda$. Now proceed by induction as in the previous paragraph.
\end{proof} \begin{lem}\label{L:TypicalFlambda} Let $\lambda\in P^{++}$ be typical and dominant, and let $M\in\mathcal{O}$. Then \[ F_\lambda(M)\cong\left((M\otimes V^{\otimes d})^{[\lambda]}\right)_\lambda\cong\left[M\otimes V^{\otimes d}/\mathfrak{n}^-(M\otimes V^{\otimes d})\right]_\lambda \] as $\mathcal{AS}(d)$-modules. \end{lem} \begin{proof} It should first be remarked that, since the action of $\mathcal{AS}(d)$ commutes with the action of $\mathfrak{q}(n)$, the action of $\mathcal{AS}(d)$ on $M\otimes V^{\otimes d}$ induces an action on each of the vector spaces given in the lemma. Now, by Proposition~\ref{P:penkov} and the assumption that $\lambda$ is dominant, it follows that for any module $N\in\mathcal{O}^{[\lambda]}$, $N_\nu\neq 0$ only if $\nu\leq\lambda$ in the dominance order. Thus any vector of weight $\lambda$ in $M\otimes V^{\otimes d}$ is necessarily a primitive vector. On the other hand, if there is a primitive vector of weight $\lambda$ in $M\otimes V^{\otimes d}$, then it must lie in the image of a nonzero homomorphism $M(\lambda)\to M\otimes V^{\otimes d}$. But as $M(\lambda)$ is an object of $\mathcal{O}^{[\lambda]}$, it follows that the primitive vector lies in $\left((M\otimes V^{\otimes d})^{[\lambda]}\right)_\lambda$. Thus, there exists a canonical projection map \[ F_\lambda(M)\to\left((M\otimes V^{\otimes d})^{[\lambda]}\right)_\lambda, \] and this map is necessarily a vector space isomorphism.
The fact that it is an $\mathcal{AS}(d)$-module homomorphism follows from the fact that the action of $\mathcal{AS}(d)$ on both vector spaces is induced by the action of $\mathcal{AS}(d)$ on $M\otimes V^{\otimes d}$. Now consider the block decomposition \begin{equation*} M\otimes V^{\otimes d}=\bigoplus_{\chi_\gamma}(M\otimes V^{\otimes d})^{[\chi_\gamma]}, \end{equation*} where the direct sum runs over dominant $\gamma\in\mathfrak{h}_{\bar{0}}^*$ such that the $\chi_\gamma$ are distinct central characters of $\mathcal{U}(\mathfrak{g})$. This induces the vector space direct sum decomposition \[ (M\otimes V^{\otimes d})/\mathfrak{n}^-(M\otimes V^{\otimes d})=\bigoplus_{\chi_\gamma}(M\otimes V^{\otimes d})^{[\chi_\gamma]}/\mathfrak{n}^-(M\otimes V^{\otimes d})^{[\chi_\gamma]}, \] where $(M\otimes V^{\otimes d})^{[\chi_\gamma]}$ denotes the direct summand of $M\otimes V^{\otimes d}$ which lies in the block $\mathcal{O}^{[\gamma]}$. By the previous lemma, if $\gamma$ is atypical, or if $\gamma$ is typical and $\gamma\neq\lambda$, then \[ \left[(M\otimes V^{\otimes d})^{[\chi_\gamma]}/\mathfrak{n}^-(M\otimes V^{\otimes d})^{[\chi_\gamma]}\right]_\lambda=0.
\] Therefore, \begin{equation}\label{E:functorvariations} \left[(M\otimes V^{\otimes d})/\mathfrak{n}^-(M\otimes V^{\otimes d})\right]_\lambda=\left[(M\otimes V^{\otimes d})^{[\chi_\lambda]}/\mathfrak{n}^-(M\otimes V^{\otimes d})^{[\chi_\lambda]}\right]_\lambda. \end{equation} Finally, if $N$ is an object of $\mathcal{O}^{[\lambda]}$, then $N_\mu\neq 0$ only if $\mu\leq\lambda$ in the dominance order. Thus weight considerations imply $\left[\mathfrak{n}^-(M\otimes V^{\otimes d})^{[\chi_\lambda]}\right]_\lambda=0$ which, in turn, implies that the canonical projection \[ \left((M\otimes V^{\otimes d})^{[\lambda]}\right)_\lambda\to\left[M\otimes V^{\otimes d}/\mathfrak{n}^-(M\otimes V^{\otimes d})\right]_\lambda \] is a vector space isomorphism. That it is an $\mathcal{AS}(d)$-module homomorphism follows from the fact that in both cases the action is induced from the $\mathcal{AS}(d)$-action on $M\otimes V^{\otimes d}$. \end{proof} \begin{cor}\label{C:Flambdaexactness} If $\lambda\in P^{++}$ is dominant and typical, then the functor $F_\lambda:\mathcal{O}\to\mathcal{AS}(d)\text{-mod}$ is exact. \end{cor} \begin{proof} This follows immediately from the first alternative description of $F_\lambda$ in Lemma~\ref{L:TypicalFlambda}, as it is the composition of the exact functors $-\otimes V^{\otimes d}$, projection onto the direct summand lying in the block $\mathcal{O}^{[\lambda]}$, and projection onto the $\lambda$-weight space. \end{proof} In what follows, when $\lambda$ is dominant and typical we use whichever description of $F_\lambda$ given in Lemma~\ref{L:TypicalFlambda} is most convenient.
\subsection{Image of the Functor}\label{SS:functorimage}
We can now describe the image of Verma modules under the functor.
\begin{lem}\label{L:description}
Let $M(\mu)$ be a Verma module in $\mathcal{O}$ and let $\lambda \in P^{++}$ be a dominant and typical weight. The natural inclusion
\[
E(\mu)\otimes(V^{\otimes d})_{\lambda-\mu}\hookrightarrow(M(\mu)\otimes V^{\otimes d})_{\lambda}
\]
induces an isomorphism of ${\mathcal{S}}(d)$-modules $E(\mu)\otimes(V^{\otimes d})_{\lambda-\mu}\cong F_{\lambda}(M(\mu))$. In particular, $F_{\lambda}(M(\mu))=0$ unless $\lambda-\mu\in P_{\geq 0}(d)$.
\end{lem}
\begin{proof}
This is proved exactly as in \cite[Lemma 3.3.2]{as}, except that now the highest weight space of $M(\mu)$ is $E(\mu)$. Namely, by the tensor identity and the PBW theorem,
\begin{equation}\label{E:tensoridentity}
M(\mu) \otimes V^{\otimes d} \cong U(\mathfrak{g}) \otimes_{U(\mathfrak{b})} \left(E(\mu) \otimes V^{\otimes d} \right) \cong U(\mathfrak{n}_-) \otimes E(\mu) \otimes V^{\otimes d},
\end{equation}
where the first isomorphism is as $\mathfrak{g}$-modules and the second is as $\mathfrak{h}_{\bar{0}}$-modules. Thus the canonical projection map induces the isomorphism of $\mathfrak{h}_{\bar{0}}$-modules given by
\begin{equation*}
1 \otimes E(\mu) \otimes V^{\otimes d} \cong M(\mu) \otimes V^{\otimes d} / \mathfrak{n}_- \left(M(\mu) \otimes V^{\otimes d} \right).
\end{equation*}
Taking $\lambda$ weight spaces on both sides yields the vector space isomorphism
\[
1 \otimes E(\mu) \otimes \left( V^{\otimes d}\right)_{\lambda-\mu}\cong \left[ M(\mu) \otimes V^{\otimes d} / \mathfrak{n}_- \left(M(\mu) \otimes V^{\otimes d} \right)\right]_{\lambda}.
\]
Composing the natural inclusion $E(\mu)\otimes(V^{\otimes d})_{\lambda-\mu}\hookrightarrow(M(\mu)\otimes V^{\otimes d})_{\lambda}$ with \eqref{E:tensoridentity} and the isomorphism above then gives
\[
E(\mu) \otimes \left( V^{\otimes d}\right)_{\lambda-\mu} \cong 1 \otimes E(\mu) \otimes \left( V^{\otimes d}\right)_{\lambda-\mu} \cong \left[ M(\mu) \otimes V^{\otimes d} / \mathfrak{n}_- \left(M(\mu) \otimes V^{\otimes d} \right)\right]_{\lambda} = F_{\lambda}\left(M(\mu) \right).
\]
That it is an isomorphism of ${\mathcal{S}}(d)$-modules follows from the fact that in each case the action of ${\mathcal{S}}(d)$ is induced from the action of ${\mathcal{S}}(d)$ on $M(\mu) \otimes V^{\otimes d}$.
\end{proof}
\begin{cor}\label{C:StandardDim}
Let $\lambda \in P^{++}$ be a dominant and typical weight and let $\mu \in P$ with $\lambda-\mu\in P_{\geq 0}(d)$. Set $d_{i}=\lambda_{i}-\mu_{i}$ for $i=1, \dotsc ,n$.
\begin{enumerate}
\item [(i)] Let $M(\mu)$ be the little Verma module of highest weight $\mu$. Then
\[
\dim F_{\lambda}(M(\mu)) = 2^{d+\lfloor(n-\gamma_0(\mu)+1)/2 \rfloor}\frac{d!}{d_{1}! \dotsb d_{n}!}.
\]
\item [(ii)] Let $\widehat{M}(\mu)$ be the big Verma module of highest weight $\mu$. Then
\[
\dim F_{\lambda}(\widehat{M}(\mu))=2^{d+n-\gamma_0(\mu)}\frac{d!}{d_1!\cdots d_n!}.
\]
\end{enumerate}
\end{cor}
\begin{proof}
We have $\dim E(\mu) = 2^{\lfloor(n-\gamma_0(\mu)+1)/2 \rfloor}$. For each $\varepsilon_{i}$ $(i=1, \dotsc , n)$, $\dim V_{\varepsilon_{i}}=2$. A combinatorial count shows that
\[
\dim \left(V^{\otimes d} \right)_{\lambda - \mu} = \frac{d!}{d_{1}! \dotsb d_{n}!}2^{d}.
\]
Statement (i) then follows by Lemma~\ref{L:description}. Statement (ii) follows from (i) and Lemma~\ref{L:little verma in big verma}.
\end{proof}
Fix $\lambda,\mu\in P$ such that $\lambda-\mu\in P_{\geq 0}(d)$, and let $d_i=\lambda_i-\mu_i$. Let $\{u_i,u_{\bar{i}}\}_{i=1,\ldots,n}$ be the standard basis for $V$, let $v_\mu\in E(\mu)$, and let $u_{\lambda-\mu}=u_1^{\otimes d_1}\otimes\cdots\otimes u_n^{\otimes d_n}\in (V^{\otimes d})_{\lambda-\mu}$. Finally, let
\[
m_k=\sum_{i=1}^k d_i,
\]
and define $F_k=\pi_0(f_{kk})$ (see Section~\ref{SS:action}).
\begin{lem}\label{X action}
Let $v_{\mu} \in M(\mu)_{\mu}$ be a primitive vector of weight $\mu$, and let $u=u_{\lambda - \mu}= u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}$. For each $1\leq k\leq n$ and $m_{k-1}<i\leq m_k$,
\[
X_i.v_\mu\otimes u_{\lambda-\mu}\equiv \left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i -F_{k}C_i\right)v_\mu\otimes u_{\lambda-\mu}
\]
modulo $\mathfrak{n}_-(M(\mu)\otimes V^{\otimes d})$. As a consequence,
\[
X_i^2 v_\mu\otimes u_{\lambda-\mu}\equiv(\mu_k+i-m_{k-1}-1)(\mu_k+i-m_{k-1})v_\mu\otimes u_{\lambda-\mu},
\]
again modulo $\mathfrak{n}_-(M(\mu)\otimes V^{\otimes d})$.
\end{lem}
\begin{proof}
We first do some preliminary calculations.
Let $1 \leq j < k \leq n$ and $m_{k-1} < i \leq m_{k}$ be fixed, and consider the vector
\[
v_{\mu}\otimes u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}},
\]
where the $u_{j}$ is the $i$th tensor factor and $a+b+1=d_{k}$ (i.e.\ among the $u_{k}$'s, the one in the $i$th position, recalling that $v_{\mu}$ is in the zeroth position, is replaced with $u_{j}$). For short, write $u = u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}$ and $\hat{u}=u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}$. Then,
\begin{eqnarray*}
e_{kj}(v_{\mu}\otimes\hat{u})&=&(e_{kj}v_{\mu})\otimes\hat{u}\\
&&+ \sum_{r = 1}^{d_{j}} v_{\mu}\otimes u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{j}^{\otimes r-1 } \otimes u_{k} \otimes u_{j}^{\otimes d_{j}-r} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}} + v_{\mu} \otimes u\\
&=&(e_{kj}v_{\mu})\otimes \hat{u} + \sum_{r = 1}^{d_{j}} S_{m_{j-1}+r, i} (v_{\mu}\otimes u) + v_{\mu} \otimes u.
\end{eqnarray*}
Similarly, if we write $\check{u}=C_{i}\hat{u}=u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{-j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}$, then
\begin{eqnarray*}
f_{kj} (v_{\mu}\otimes \check{u}) &=& (f_{kj}v_{\mu})\otimes \check{u} +(-1)^{p(v_{\mu})}\times\\
&&\times \sum_{r=1}^{d_{j}} v_{\mu} \otimes u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{j}^{\otimes r-1} \otimes u_{-k} \otimes u_{j}^{\otimes d_{j}-r} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{-j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}\\
&&+ (-1)^{p(v_{\mu})}v_{\mu} \otimes u\\
&=&(f_{kj}v_{\mu})\otimes \check{u} +(-1)^{p(v_{\mu})} \sum_{r = 1}^{d_{j}} C_{m_{j-1}+r}C_{i}S_{m_{j-1}+r, i} (v_{\mu}\otimes u) + (-1)^{p(v_{\mu})} v_{\mu} \otimes u.
\end{eqnarray*}
We can now prove the first statement of the lemma. Throughout, we write $\equiv$ for congruence modulo the subspace $\mathfrak{n}_-(M(\mu)\otimes V^{\otimes d})$. Let $1 \leq k \leq n$ be fixed so that $m_{k-1} < i \leq m_{k}$ (i.e.\ there is a $u_{k}$ in the $i$th position of $v_{\mu}\otimes u$).
Using that $v_{\mu}$ is a primitive vector and the equalities above, we deduce that
\begin{small}
\begin{eqnarray*}
X_i\left( v_\mu\otimes u_{\lambda-\mu}\right)&=&\sum_{\ell,j=1}^n e_{\ell j}v_\mu\otimes u_1^{\otimes d_1}\otimes\cdots\otimes u_k^{\otimes i-m_{k-1}-1}\otimes e_{j\ell}u_k\otimes u_k^{\otimes m_k-i}\otimes\cdots\otimes u_n^{\otimes d_n}\\
&&-(-1)^{p(v_{\mu})}\sum_{\ell,j=1}^n f_{\ell j}v_\mu\otimes u_1^{\otimes d_1}\otimes\cdots\otimes u_k^{\otimes i-m_{k-1}-1}\otimes f_{j\ell}u_k\otimes u_k^{\otimes m_k-i}\otimes\cdots\otimes u_n^{\otimes d_n}\\
&&+\sum_{\ell<i}(1-C_\ell C_i)S_{\ell i}(v_\mu\otimes u)\\
&=&\sum_{j\leq k} e_{kj}v_\mu\otimes u_1^{\otimes d_1}\otimes\cdots\otimes u_k^{\otimes i-m_{k-1}-1}\otimes u_j\otimes u_k^{\otimes m_k-i}\otimes\cdots\otimes u_n^{\otimes d_n}\\
&&-(-1)^{p(v_\mu)}\sum_{j\leq k}f_{kj}v_\mu\otimes u_1^{\otimes d_1}\otimes\cdots\otimes u_k^{\otimes i-m_{k-1}-1}\otimes u_{-j}\otimes u_k^{\otimes m_k-i}\otimes\cdots\otimes u_n^{\otimes d_n}\\
&&+\sum_{\ell<i}(1-C_\ell C_i)S_{\ell i}(v_\mu\otimes u)\\
&\equiv& - \sum_{j < k} \left[\sum_{a = 1}^{d_{j}} S_{m_{j-1}+a, i} (v_{\mu}\otimes u) + v_{\mu} \otimes u \right] \\
&&+ \sum_{j < k} \left[ \sum_{a = 1}^{d_{j}} C_{m_{j-1}+a}C_{i}S_{m_{j-1}+a, i} (v_{\mu}\otimes u) + v_{\mu} \otimes u \right] \\
&&+ \mu_{k}v_{\mu}\otimes u + C_{i} \left((f_{kk}v_{\mu})\otimes u \right) +\sum_{\ell<i}(1-C_\ell C_i)S_{\ell i} (v_\mu\otimes u)\\
&=& - \sum_{\ell \leq m_{k-1}}
S_{\ell,i}(v_{\mu}\otimes u) -(k-1)v_{\mu} \otimes u + \sum_{\ell \leq m_{k-1}} C_{\ell}C_{i}S_{\ell,i}(v_{\mu}\otimes u) +(k-1)v_{\mu}\otimes u \\
&&+ \mu_{k}v_{\mu}\otimes u + C_{i}\left((f_{kk}v_{\mu})\otimes u \right) +\sum_{\ell<i}(1-C_\ell C_i)S_{\ell i}(v_\mu\otimes u)\\
& = &\mu_k v_\mu\otimes u_{\lambda-\mu}+C_i((f_{kk}v_\mu)\otimes u_{\lambda-\mu})+\sum_{m_{k-1}<\ell<i} (1-C_\ell C_i)S_{\ell,i}(v_\mu\otimes u)\\
&=&\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_iS_{\ell,i}\right) (v_\mu\otimes u_{\lambda-\mu})+C_i((f_{kk}v_\mu)\otimes u)\\
&=&\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i -F_{k}C_i\right) (v_\mu \otimes u).
\end{eqnarray*}
\end{small}
Note that the last equality uses the fact that $S_{\ell,i}(v_{\mu}\otimes u) = v_{\mu}\otimes u$ for $m_{k-1}<\ell<i$ and that, as (odd) linear maps, $F_{k}C_{i}=-C_{i}F_{k}$. Now we consider the second statement of the lemma.
Using the previous calculation, the fact that $X_{i}$ and the $C_{j}$'s satisfy relation \eqref{c&x} of the degenerate AHCA, and the fact that $f_{kk}v_{\mu} \in M(\mu)_{\mu}$ is again a primitive vector of weight $\mu$, we obtain
\begin{eqnarray*}
X_i^2(v_\mu\otimes u_{\lambda-\mu}) &\equiv& X_i\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i-F_kC_i\right) (v_\mu\otimes u_{\lambda-\mu})\\
&=&\left(\mu_k+i-m_{k-1}-1+\sum_{m_{k-1}<\ell<i}C_\ell C_i \right)X_{i} (v_{\mu}\otimes u)- C_iX_{i}((f_{kk}v_\mu)\otimes u_{\lambda-\mu})\\
& \equiv & \left(\mu_k+i-m_{k-1}-1+\sum_{m_{k-1}<\ell<i}C_\ell C_i \right)\times\\
&&\times \left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i-F_kC_i\right) (v_\mu\otimes u_{\lambda-\mu}) \\
&&-C_{i}\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i-F_kC_i\right) ((f_{kk}v_\mu)\otimes u_{\lambda-\mu}) \\
&=&\left(\mu_k+i-m_{k-1}-1+\sum_{m_{k-1}<\ell<i}C_\ell C_i \right) \left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i \right) v_{\mu} \otimes u\\
&&+ C_{i}F_{k}C_{i}((f_{kk}v_{\mu}) \otimes u) \\
&=&\left( (\mu_k+i-m_{k-1}-1)^{2} - \left( \sum_{m_{k-1}<\ell<i}C_\ell C_i\right)^{2}\right) v_{\mu}\otimes u + (f^{2}_{kk}v_{\mu}) \otimes u \\
&=&\left( (\mu_k+i-m_{k-1}-1)^{2} +(\mu_k+i-m_{k-1}-1)\right) v_{\mu}\otimes u.
\end{eqnarray*}
The last equality follows from the fact that in the Clifford algebra
\[
\left( \sum_{m_{k-1}<\ell<i}C_\ell C_i\right)^{2} = \sum_{m_{k-1}<\ell<i}(C_\ell C_i)^{2} = \sum_{m_{k-1}<\ell<i} (-1) = -(i-m_{k-1} -1)
\]
(the cross terms cancel in pairs, and the sum has $i-m_{k-1}-1$ terms) and that, in $\mathfrak{q}(n)$, $f_{kk}^2=e_{kk}$.
\end{proof}
\begin{cor}\label{C:ImageisIntegral}
Let $\lambda \in P^{++}$ be a dominant typical weight, let $\mu\in P$, and let $M(\mu)$ be a Verma module in $\mathcal{O}(\mathfrak{q}(n))$. Then for $i=1, \dotsc , d$ the element $X_{i}^{2}$ acts on $F_{\lambda}(M(\mu))$ with generalized eigenvalues of the form $q(a)$ for various $a \in \mathbb{Z}$. Hence, $F_{\lambda}(M(\mu))$ is integral.
\end{cor}
As a consequence of the previous corollary we see that for $\lambda \in P^{++}$ the module $F_{\lambda}\left(L(\mu) \right)$ is integral for any simple module $L(\mu)$ in $\mathcal{O}$ and, therefore,
\[
F_{\lambda}:\mathcal{O}(\mathfrak{q}(n))\rightarrow\operatorname{Rep}{\mathcal{A}}Se(d).
\]
\begin{prp}
Let $\lambda\in P^{++}$ and $\mu\in\lambda-P_{\geq 0}(d)$. Then $F_{\lambda}(\widehat{M}(\mu))\cong\widehat{{\mathcal{M}}}(\lambda,\mu)$.
\end{prp}
\begin{proof}
Let $v_+\in\mathbb{C}_\mu$ be a nonzero vector in the one-dimensional $\mathfrak{h}_{\bar 0}$-module $\mathbb{C}_\mu$, let $v_\mu=1\otimes v_+\in C(\mu)_{\bar 0}$ be its image, and let $u_{\lambda-\mu}$ be as in the previous lemma. Then $v_\mu\otimes u_{\lambda-\mu}$ is a cyclic vector for $F_{\lambda}(\widehat{M}(\mu))$ as an ${\mathcal{A}}Se(d)$-module.
Recall the cyclic vector $\hat{\mathbf{1}}_{\lambda,\mu}\in\widehat{{\mathcal{M}}}(\lambda,\mu)$. For $\delta_1,\ldots,\delta_n\in\{0,1\}$, let $\varphi_1^{\delta_1}\cdots\varphi_n^{\delta_n}\hat{\mathbf{1}}_{\lambda,\mu}=1\otimes\varphi_1^{\delta_1}\hat{\mathbf{1}}\otimes\cdots\otimes\varphi_n^{\delta_n}\hat{\mathbf{1}}$, cf.\ \eqref{E:hatcyclicvector}. Note that $w.(v_\mu\otimes u_{\lambda-\mu})=v_\mu\otimes u_{\lambda-\mu}$ for all $w\in S_{\lambda-\mu}$. Comparing Lemma~\ref{X action} and Proposition~\ref{segment representation}, we deduce by Frobenius reciprocity that there exists a surjective ${\mathcal{A}}Se(d)$-homomorphism $\widehat{{\mathcal{M}}}(\lambda,\mu)\rightarrow F_{\lambda}(\widehat{M}(\mu))$ sending $\varphi_1^{\delta_1}\cdots\varphi_n^{\delta_n}\hat{\mathbf{1}}_{\lambda,\mu}\mapsto F_1^{\delta_1}\cdots F_n^{\delta_n}v_\mu\otimes u_{\lambda-\mu}$. That this is an isomorphism follows by comparing dimensions using Lemma~\ref{L:standard cyclic dim} and Corollary~\ref{C:StandardDim}.
\end{proof}
\begin{cor}\label{C:Image of the little verma}
We have
\[
F_{\lambda} M(\mu)\cong{\mathcal{M}}(\lambda,\mu)^{\oplus 2^{\varpi(\mu)}},
\]
where
\[
\varpi(\mu) =\begin{cases}\lfloor\frac{n+1}{2}\rfloor&\mbox{if }\gamma_0(\mu)\mbox{ is even,}\\ \lfloor\frac{n}{2}\rfloor&\mbox{if }\gamma_0(\mu)\mbox{ is odd.}\end{cases}
\]
\end{cor}
\begin{proof}
Using the additivity of the functor $F_{\lambda}$, the previous proposition, and Lemmas~\ref{L:little verma in big verma} and~\ref{L:standard cyclic dim}, we obtain $F_{\lambda} M(\mu)\cong{\mathcal{M}}(\lambda,\mu)^{\oplus 2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor -\lfloor\frac{n-\gamma_0(\mu)}{2}\rfloor}}$. It remains to observe that
\[
n-\lfloor\tfrac{\gamma_0(\mu)+1}{2}\rfloor -\lfloor\tfrac{n-\gamma_0(\mu)}{2}\rfloor=\varpi(\mu).
\]
\end{proof}
\begin{lem}\label{L:IsoStdMod}
Assume that $\lambda\in P^{++}$, $\mu\in P^+[\lambda]$, $\lambda-\mu\in P_{\geq 0}(d)$, and $\alpha\in R^+[\lambda]$. Then ${\mathcal{M}}(\lambda,\mu)\cong{\mathcal{M}}(\lambda,s_\alpha\mu)$.
\end{lem}
\begin{proof}
By Lemma~\ref{L:InjHom}, there exists an injective homomorphism $M(s_\alpha\mu)\rightarrow M(\mu)$. Since $\varpi(\mu)=\varpi(s_\alpha\mu)$, there exists an injective homomorphism
\[
{\mathcal{M}}(\lambda,s_\alpha\mu)^{\oplus 2^{\varpi(\mu)}}\cong F_{\lambda} M(s_\alpha\mu)\to F_{\lambda} M(\mu)\cong {\mathcal{M}}(\lambda,\mu)^{\oplus 2^{\varpi(\mu)}}.
\]
Since $\dim{\mathcal{M}}(\lambda,s_\alpha\mu)=\dim{\mathcal{M}}(\lambda,\mu)$ and, by Theorem~\ref{thm:unique irred quotient}, ${\mathcal{M}}(\lambda,\mu)$ is indecomposable, it follows that this map is an isomorphism.
\end{proof}
\begin{thm}\label{T:MaxSubmod}
Assume $\lambda\in P^{++}$ and $\mu\in\lambda-P_{\geq 0}(d)$. Then ${\mathcal{M}}(\lambda,\mu)$ has a unique maximal submodule $\mathcal{R}(\lambda,\mu)$ and a unique irreducible quotient ${\mathcal{L}}(\lambda,\mu)$.
\end{thm}
\begin{proof}
There exists $w\in S_d[\lambda]$ such that $w\mu\in P^+[\lambda]$. By Lemma~\ref{L:IsoStdMod}, ${\mathcal{M}}(\lambda,w\mu)\cong{\mathcal{M}}(\lambda,\mu)$. By Theorem~\ref{thm:unique irred quotient}, ${\mathcal{M}}(\lambda,w\mu)$ has a unique maximal submodule and a unique irreducible quotient, so the result follows.
\end{proof}
Given $\mu\in P$, the Shapovalov form on $M(\mu)$ induces a non-degenerate $\mathfrak{q}(n)$-contravariant form on $L(\mu)$, which we denote $(\cdot,\cdot)_\mu$. In turn, we have a non-degenerate $\mathfrak{q}(n)$-contravariant form on $L(\mu)\otimes V^{\otimes d}$ given by $(\cdot,\cdot)_\mu\otimes(\cdot,\cdot)_{\varepsilon_1}^{\otimes d}$. Observe that different weight spaces are orthogonal with respect to this form, and different blocks of $\mathcal{O}(\mathfrak{q}(n))$ given by central characters are also orthogonal. Therefore, when $\lambda\in P^{++}$ is dominant and typical, the bilinear form restricts to a form on $(L(\mu)\otimes V^{\otimes d})^{[\lambda]}_{\lambda}=F_{\lambda}(L(\mu))$, which is non-degenerate whenever it is nonzero. By Proposition~\ref{P:when tau's collide}, this form is ${\mathcal{A}}Se(d)$-contravariant. Similarly, Proposition~\ref{P:when tau's collide} implies that the Shapovalov form on $\widehat{M}(\mu)$ induces an ${\mathcal{A}}Se(d)$-contravariant form on $\widehat{{\mathcal{M}}}(\lambda,\mu)$. Now, if $\lambda\in P^{++}$ and $\mu\in\lambda-P_{\geq 0}(d)$, then by Theorem~\ref{T:MaxSubmod}, $\widehat{{\mathcal{M}}}(\lambda,\mu)$ possesses a unique submodule $\widehat{\mathcal{R}}(\lambda,\mu)$ which is maximal among those which avoid the generalized $\zeta_{\lambda,\mu}$ weight space.
Indeed,
\[
\widehat{\mathcal{R}}(\lambda,\mu)=\mathcal{R}(\lambda,\mu)^{\oplus 2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor}}.
\]
\begin{prp}\label{P:ASeRadical}
Assume that $\lambda\in P^{++}$, $\mu\in\lambda-P_{\geq 0}(d)$, and $\widehat{{\mathcal{M}}}(\lambda,\mu)$ possesses a nonzero contravariant form $(\cdot,\cdot)$. Let $\mathcal{R}$ denote the radical of this form. Then,
\[
\mathcal{R}\supseteq\widehat{\mathcal{R}}(\lambda,\mu).
\]
\end{prp}
\begin{proof}
First, recall that $\widehat{{\mathcal{M}}}(\lambda,\mu)$ is cyclically generated by $\hat{\mathbf{1}}_{\lambda,\mu}\in\widehat{{\mathcal{M}}}(\lambda,\mu)_{\zeta_{\lambda,\mu}}$. Now, assume $v\in\widehat{\mathcal{R}}(\lambda,\mu)$ and $v'\in\widehat{{\mathcal{M}}}(\lambda,\mu)$. Then $v'=X.\hat{\mathbf{1}}_{\lambda,\mu}$ for some $X\in{\mathcal{A}}Se(d)$. Moreover, $\tau(X).v\in\widehat{\mathcal{R}}(\lambda,\mu)$. Applying Lemma~\ref{L:ASeContraForm} and the definition of $\widehat{\mathcal{R}}(\lambda,\mu)$, we deduce that
\[
(v',v)=(X.\hat{\mathbf{1}}_{\lambda,\mu},v)=(\hat{\mathbf{1}}_{\lambda,\mu},\tau(X).v)=0.
\]
Hence, $v\in\mathcal{R}$.
\end{proof}
\begin{cor}\label{C:ASeRadical}
Given $\lambda\in P^{++}$ and $\mu\in\lambda-P_{\geq 0}(d)$,
\[
\mathcal{R}=\mathcal{R}(\lambda,\mu)^{\oplus k}\oplus{\mathcal{M}}(\lambda,\mu)^{\oplus 2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor}-k}
\]
for some $0\leq k\leq2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor}$.
\end{cor}
\begin{thm}\label{T:SimplesToSimples}
Assume $\lambda\in P^{++}$ and $\mu\in\lambda-P_{\geq 0}(d)$. If $F_{\lambda} L(\mu)$ is nonzero, then
\[
F_{\lambda} L(\mu)\cong{\mathcal{L}}(\lambda,\mu)^{\oplus\ell}
\]
for some $0<\ell\leq\varpi(\mu)$.
\end{thm}
\begin{proof}
Let $\widehat{L}(\mu)=L(\mu)^{\oplus 2^{\lfloor\frac{n-\gamma_0(\mu)+1}{2}\rfloor}}$, so that $\widehat{L}(\mu)=\widehat{M}(\mu)/\widehat{R}(\mu)$, where $\widehat{R}(\mu)$ is the radical of the Shapovalov form on $\widehat{M}(\mu)$. Applying the functor, we see that
\[
F_{\lambda} \widehat{L}(\mu)=\widehat{{\mathcal{M}}}(\lambda,\mu)/ F_{\lambda}\widehat{R}(\mu).
\]
Now, $F_{\lambda}\widehat{R}(\mu)=\mathcal{R}$. Hence, Corollary~\ref{C:ASeRadical} and a calculation similar to that of Corollary~\ref{C:Image of the little verma} give the result.
\end{proof}
\begin{prp}\cite[Proposition 18.18.1]{kl}
Any finite dimensional irreducible ${\mathcal{A}}Se(d)$-module is a composition factor of ${\mathcal{M}}(\lambda,\lambda-\varepsilon)$ for some $\lambda\in P^{++}$.
\end{prp}
\begin{thm}
Any finite dimensional simple module for ${\mathcal{A}}Se(d)$ is isomorphic to ${\mathcal{L}}(\lambda,\mu)$ for some $\lambda\in P^{++}$ and $\mu\in(\lambda-\varepsilon)-Q^+$.
\end{thm}
\begin{proof}
The functor $F_{\lambda}$ transforms a composition series for $M(\lambda-\varepsilon)$ into a composition series for ${\mathcal{M}}(\lambda,\lambda-\varepsilon)$. It now remains to observe that if $L(\mu)$ is a composition factor of $M(\lambda-\varepsilon)$, then $\mu\in(\lambda-\varepsilon)- Q^+$.
\end{proof}
\subsection{Calibrated Representations Revisited}
\begin{thm}
If $\lambda,\mu\in P_{\mathrm{poly}}^+$ satisfy $\lambda-\mu\in P_{\geq 0}(d)$, then $F_{\lambda}(L(\mu)) \neq 0$ and hence one has a simple module ${\mathcal{L}}(\lambda, \mu)$.
\end{thm}
\begin{proof}
The formal character of $L(\mu)$ for $\mu\in P_{\mathrm{poly}}^+$ is given by the Schur $Q$-function $Q_\mu$ (cf.\ \cite{s}). There is a nondegenerate bilinear form $(\cdot,\cdot)$ on the subring of symmetric functions spanned by Schur's $Q$-functions given by
\[
\left(Q_{\lambda}, Q_{\mu} \right) = \dim\operatorname{Hom}_{\mathfrak{q}(n)}\left(L(\lambda), L(\mu) \right).
\]
Furthermore, the basis $Q_\mu$ ($\mu\in P_{\mathrm{poly}}^+$) is an orthogonal basis. Within this subring are the skew $Q$-Schur functions $Q_{\lambda/\mu}$. We refer the reader to \cite{stem, m} for details. Under the hypotheses of the theorem, $\lambda/\mu$ is a skew shape. Moreover, $F_{\lambda} L(\mu)=0$ implies that
\begin{eqnarray}\label{E:PolynomialRep}
0=\operatorname{Hom}_{\mathfrak{q}(n)} \left(L(\lambda),L(\mu)\otimes V^{\otimes d} \right)=\bigoplus_{\nu\in P_{\mathrm{poly}}^+(d)}\operatorname{Hom}(L(\lambda),L(\mu)\otimes L(\nu))^{\oplus N_{\nu}}.
\end{eqnarray}
The second equality follows from Sergeev duality, which implies that as a $\mathfrak{q}(n)$-module
\[
V^{\otimes d}=\bigoplus_{\nu\in P_{\mathrm{poly}}^+(d)}L(\nu)^{\oplus N_{\nu}},
\]
where $N_{\nu}$ is the dimension of the Specht module of ${\mathcal{S}}(d)$ corresponding to $\nu$ \cite{s2}.
In terms of the bilinear form on symmetric functions, \eqref{E:PolynomialRep} implies
\begin{equation}\label{E:perp}
0 = \left(Q_{\lambda}, Q_{\mu}Q_{\nu} \right)
\end{equation}
for all $\nu \in P_{\mathrm{poly}}^+(d)$. In fact, \eqref{E:perp} holds for all $\nu \in P_{\mathrm{poly}}^+$ since different graded summands of the ring of symmetric functions are orthogonal. However,
\[
\left(Q_{\lambda}, Q_{\mu}Q_{\nu} \right) = \left(Q_{\mu}^{\bot}Q_{\lambda}, Q_{\nu} \right) = 2^{-\ell(\mu)}\left( Q_{\lambda/\mu}, Q_{\nu}\right),
\]
where $Q_{\mu}^{\bot}$ denotes the adjoint of multiplication by $Q_{\mu}$ with respect to the form, and the second equality follows from $Q_{\mu}^{\bot}Q_{\lambda}= 2^{-\ell(\mu)} Q_{\lambda/\mu}$ (cf.\ \cite[II.8]{m}). Thus, \eqref{E:PolynomialRep} implies that
\[
(Q_{\lambda/\mu},Q_{\nu})=0
\]
for all $\nu\in P_{\mathrm{poly}}^+$. But the $Q$-functions form an orthogonal basis for this subring, so this would force $Q_{\lambda/\mu}=0$, which is not true. Hence, $F_{\lambda}L(\mu) \neq 0$.
\end{proof}
Arguing as in Section 7 of \cite{su2}, using Sergeev duality \cite{s,s2}, we obtain the following result.
\begin{cor}
Let $\lambda,\mu\in P_{\mathrm{poly}}^+$ be such that $\lambda-\mu\in P_{\geq 0}(d)$. Then the group character of ${\mathcal{L}}(\lambda,\mu)\downarrow_{{\mathcal{S}}(d)}$ is a power of $2$ multiple of the skew $Q$-Schur function $Q_{\lambda/\mu}$.
\end{cor}
\section{A Classification of Simple Modules}\label{S:Classification}
In \cite{bk2,kl} it was shown that the Grothendieck group of finite dimensional integral representations of ${\mathcal{A}}Se(d)$ is a module for the Kostant--Tits $\mathbb{Z}$-form of the Kac--Moody Lie algebra $\mathfrak{b}_\infty$. Indeed, let $\mathfrak{n}_\infty$ be a maximal nilpotent subalgebra of $\mathfrak{b}_\infty$, and let ${\mathcal{U}}_{\mathbb{Z}}^*(\mathfrak{n}_\infty)$ be the \emph{minimal} admissible lattice inside the universal enveloping algebra of $\mathfrak{n}_\infty$. This lattice is spanned by Lusztig's dual canonical basis.
\begin{thm}\cite[Theorem 20.5.2]{kl}
There is an isomorphism of graded Hopf algebras
\[
{\mathcal{U}}_{\mathbb{Z}}^*(\mathfrak{n}^+_\infty)\cong\bigoplus_{d\geq 0}K(\operatorname{Rep}{\mathcal{A}}Se(d)).
\]
\end{thm}
and
\begin{thm}\cite[Theorem 21.0.4]{kl}
The set $B(\infty)$ of isomorphism classes of simple ${\mathcal{A}}Se(d)$-modules, for all $d$, can be given the structure of a crystal (in the sense of Kashiwara). Moreover, this crystal is isomorphic to Kashiwara's crystal associated to the crystal base of ${\mathcal{U}}_{\mathbb{Q}}(\mathfrak{n}_\infty)$.
\end{thm}
\subsection{Quantum Groups and Shuffle Algebras}\label{SS:ShuffleAlg}
Let $\mathfrak{b}_r$ be the simple finite dimensional Lie algebra of type $B_r$ over $\mathbb{C}$, and ${\mathcal{U}}_q(\mathfrak{b}_r)$ the associated quantum group with Chevalley generators $e_i,f_i$ ($i=0,\ldots,r-1$) corresponding to the labeling of the Dynkin diagram:
\begin{center}
\begin{picture}(340,30)
\put(100,15){\circle{4}}\put(99,0){$0$}
\put(100,17){\line(1,0){32}}\put(100,13){\line(1,0){32}}\put(113,12.5){$<$}
\put(133,15){\circle{4}}\put(132,0){$1$}
\put(135,15){\line(1,0){30}}
\put(167,15){\circle{4}}\put(166,0){$2$}
\put(169,15){\line(1,0){30}}
\put(201,15){\circle{4}}\put(200,0){$3$}
\put(210,12){$\cdots$}
\put(235,15){\circle{4}}\put(228,0){$r-2$}
\put(237,15){\line(1,0){30}}
\put(269,15){\circle{4}}\put(265,0){$r-1$}
\end{picture}
\end{center}
Fix a triangular decomposition $\mathfrak{b}_r=\mathfrak{n}^+_r\oplus\mathfrak{h}_r\oplus\mathfrak{n}^-_r$. Let $\Delta$ be the root system of $\mathfrak{b}_r$ relative to this decomposition, $\Delta^+$ the positive roots, and $\Pi=\{\beta_0,\ldots,\beta_{r-1}\}$ the simple roots. Let $\mathcal{Q}$ be the root lattice and $\mathcal{Q}^+=\sum_{i=0}^{r-1}\mathbb{Z}_{\geq 0}\beta_i$. Finally, let $(\cdot,\cdot)$ denote the trace form on $\mathfrak{h}_r^*$.
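The conventions fixed by this diagram can be made concrete with a small computational sketch. The explicit realization $\beta_0=\varepsilon_1$, $\beta_i=\varepsilon_{i+1}-\varepsilon_i$ in an orthogonal basis, and the normalization $(\varepsilon_i,\varepsilon_j)=2\delta_{ij}$ of the form, are assumptions chosen so that node $0$ is the short root and the squared lengths $(\beta_i,\beta_i)$ come out as $2$ and $4$; the Cartan matrix below is computed from the standard formula $a_{ij}=2(\beta_i,\beta_j)/(\beta_i,\beta_i)$.

```python
# Sketch, under the stated assumptions: beta_0 = eps_1 (short root, matching
# the arrow in the diagram), beta_i = eps_{i+1} - eps_i for 1 <= i < r, and
# the bilinear form normalized so that (eps_i, eps_j) = 2*delta_{ij}.

def simple_roots(r):
    """Simple roots of type B_r as integer coordinate vectors."""
    roots = []
    for i in range(r):
        v = [0] * r
        if i == 0:
            v[0] = 1              # beta_0 = eps_1
        else:
            v[i], v[i - 1] = 1, -1  # beta_i = eps_{i+1} - eps_i
        roots.append(v)
    return roots

def form(a, b):
    # bilinear form with (eps_i, eps_j) = 2*delta_{ij}
    return 2 * sum(x * y for x, y in zip(a, b))

def cartan_matrix(r):
    # a_{ij} = 2 (beta_i, beta_j) / (beta_i, beta_i); all entries are integers
    B = simple_roots(r)
    return [[2 * form(B[i], B[j]) // form(B[i], B[i]) for j in range(r)]
            for i in range(r)]
```

For $r=3$ this produces the familiar type $B_3$ Cartan matrix $\begin{pmatrix}2&-2&0\\-1&2&-1\\0&-1&2\end{pmatrix}$, with $d_i=(\beta_i,\beta_i)/2$ equal to $1$ for the short root and $2$ otherwise.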
The Cartan matrix of $\mathfrak{b}_r$ is then $A=(a_{ij})_{i,j=0}^{r-1}$, where
\[
a_{ij}=\frac{2(\beta_i,\beta_j)}{(\beta_i,\beta_i)},\qquad d_i=\frac{(\beta_i,\beta_i)}{2}\in\{1,2\}.
\]
Let $q_i=q^{d_i}$. To avoid confusion with notation we will use later, we adopt the following non-standard notation for $q$-integers and $q$-binomial coefficients:
\[
(k)_i=\frac{q_i^k-q_i^{-k}}{q_i-q_i^{-1}}.
\]
The algebra ${\mathcal{U}}_q={\mathcal{U}}_q(\mathfrak{n}^+_r)$ is naturally $\mathcal{Q}^+$-graded by assigning to $e_i$ the degree $\beta_i$. Let $|u|$ be the ${\mathcal{Q}}^+$-degree of a homogeneous element $u\in{\mathcal{U}}_q(\mathfrak{n}^+_r)$. There exist $q$-derivations $e_i'$, $i=0,\ldots,r-1$, given by
\[
e_i'(e_j)=\delta_{ij} \quad\mbox{and}\quad e_i'(uv)=e_i'(u)v+q^{(\beta_i,|u|)}ue_i'(v)
\]
for all homogeneous $u,v\in{\mathcal{U}}_q$. Now, let ${\mathcal{F}}$ be the free associative algebra over $\mathbb{Q}(q)$ generated by the set of letters $\{[0],\ldots,[r-1]\}$. Write $[i_1,\ldots,i_k]:=[i_1]\cdot[i_2]\cdots[i_k]$, and let $[]$ denote the empty word. The algebra ${\mathcal{F}}$ is ${\mathcal{Q}}^+$-graded by assigning the degree $\beta_i$ to $[i]$ (as before, let $|f|$ denote the ${\mathcal{Q}}^+$-degree of a homogeneous $f\in{\mathcal{F}}$). Notice that ${\mathcal{F}}$ also has a \emph{principal grading} obtained by setting the degree of a letter $[i]$ to be $1$; let ${\mathcal{F}}_d$ be the $d$th graded component in this grading. Now, define the (quantum) shuffle product $*$ on ${\mathcal{F}}$ inductively by
\begin{align}\label{E:inductiveqshuffle}
(x\cdot[i])*(y\cdot[j])=(x*(y\cdot[j]))\cdot[i]+q^{-(|x|+\beta_i,\beta_j)}((x\cdot[i])*y)\cdot[j],\qquad x*[]=[]*x=x.
{{\mathfrak{b}}ar{e}}nd{align} Iterating this formula yields \[ [i_1,{\langle}mbdaots,i_{{\mathfrak{b}}ar{e}}ll]*[i_{{{\mathfrak{b}}ar{e}}ll+1},{\langle}mbdaots,i_{{{\mathfrak{b}}ar{e}}ll+k}] ={\mathfrak{s}}um_{w\in D_{({{\mathfrak{b}}ar{e}}ll,k)}}q^{-e(w)}[i_{w(1)},{\langle}mbdaots,i_{w(k+{{\mathfrak{b}}ar{e}}ll)}] \] where \[ e(w)={\mathfrak{s}}um_{{\mathfrak{s}}ubstack{s\leq{{\mathfrak{b}}ar{e}}ll<t\\w(s)<w(t)}}({\mathfrak{b}}eta_{i_{w(s)}},{\mathfrak{b}}eta_{i_{w(t)}}), \] see \cite[$\S2.5$]{lec} for details. The product $*$ is associative and, \cite[Proposition 1]{lec}, {\mathfrak{b}}egin{eqnarray}{\langle}bel{E:qShuffle} x*y=q^{-(|x|,|y|)}y\overline{*}x {{\mathfrak{b}}ar{e}}nd{eqnarray} where $\overline{*}$ is obtained by replacing $q$ with $q^{-1}$ in the definition of $*$. Now, to $f=[i_1,{\langle}mbdaots,i_k]\in{\mathcal{F}}$, associate ${\mathfrak{p}}artial_f=e_{i_1}'\cdots e_{i_k}'\in\operatorname{End} {\mathcal{U}}_q$, and ${\mathfrak{p}}artial_{[]}=\operatorname{Id}_{{\mathcal{U}}_q}$. Then, {\mathfrak{b}}egin{prp}\cite{ro1,ro2,grn} There exists an injective ${\mathbb{Q}}(q)$-linear homomorphism \[ {\mathcal{P}}sii:{\mathcal{U}}_q\rightarrow({\mathcal{F}},*) \] defined on homogeneous $u\in{\mathcal{U}}_q$ by the formula ${\mathcal{P}}sii(u)={\mathfrak{s}}um{\mathfrak{p}}artial_f(u)f$, where the sum is over all monomials $f\in{\mathcal{F}}$ such that $|f|=|u|$. {{\mathfrak{b}}ar{e}}nd{prp} Therefore ${\mathcal{U}}_q^+$ is isomorphic to the subalgebra ${\mathcal{W}}{\mathfrak{s}}ubseteq({\mathcal{F}},*)$ generated by the letters $[i]$, $0\leq i<r$. Let ${\mathcal{A}}={\mathbb{Q}}[q,q^{-1}]$, and let ${\mathcal{U}}_{\mathcal{A}}$ denote the ${\mathcal{A}}$-subalgebra of ${\mathcal{U}}_q$ generated by the divided powers $e_i^k/(k)_i!$ ($0\leq i<r$, $k\in{\mathbb{Z}}_{{\mathfrak{g}}eq0}$). 
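The inductive shuffle rule above is easy to machine-check. The following is a minimal Python sketch (not part of the paper): words are tuples of letters, and a scalar in ${\mathbb{Q}}(q)$ is stored as a dictionary mapping $q$-exponents to integer coefficients. The pairing values are an assumption here (type $B_r$ with $\beta_0$ short, so $(\beta_0,\beta_0)=2$, $(\beta_i,\beta_i)=4$ for $i\geq1$, and adjacent simple roots pairing to $-2$), chosen to match the computations later in this section.

```python
def pair(i, j):
    # Assumed pairing (beta_i, beta_j) for type B_r with beta_0 short:
    # (beta_0, beta_0) = 2, (beta_i, beta_i) = 4 for i >= 1, adjacent pairs give -2.
    if i == j:
        return 2 if i == 0 else 4
    return -2 if abs(i - j) == 1 else 0

def _add(acc, word, exp, coeff):
    # acc maps word -> {q-exponent: coefficient}; cancelled terms are dropped.
    poly = acc.setdefault(word, {})
    poly[exp] = poly.get(exp, 0) + coeff
    if poly[exp] == 0:
        del poly[exp]
        if not poly:
            del acc[word]

def shuffle(x, y):
    # Quantum shuffle x * y of two words (tuples of letters), by the recursion
    # (x.[i]) * (y.[j]) = (x * (y.[j])).[i] + q^{-(|x|+b_i, b_j)} ((x.[i]) * y).[j].
    if not x:
        return {y: {0: 1}}
    if not y:
        return {x: {0: 1}}
    acc = {}
    i, j = x[-1], y[-1]
    for w, poly in shuffle(x[:-1], y).items():
        for e, c in poly.items():
            _add(acc, w + (i,), e, c)
    shift = -sum(pair(a, j) for a in x)  # -(|x'| + beta_i, beta_j), and |x'| + beta_i = |x|
    for w, poly in shuffle(x, y[:-1]).items():
        for e, c in poly.items():
            _add(acc, w + (j,), e + shift, c)
    return acc

def sub(a, b):
    # a - b for {word: {exponent: coeff}} dictionaries.
    acc = {}
    for src, sign in ((a, 1), (b, -1)):
        for w, poly in src.items():
            for e, c in poly.items():
                _add(acc, w, e, sign * c)
    return acc
```

As a check, `sub(shuffle((0,), (0, 1)), shuffle((0, 1), (0,)))` returns `{(0, 0, 1): {2: 1, -2: -1}}`, i.e. $(q^2-q^{-2})[0,0,1]$, in agreement with the computation appearing later in this section.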
Let $(\cdot,\cdot)_K:{\mathcal{U}}_q\times{\mathcal{U}}_q\rightarrow{\mathbb{Q}}(q)$ denote the unique symmetric bilinear form satisfying
\[
(1,1)_K=1\quad\mbox{and}\quad(e_i'(u),v)_K=(u,e_iv)_K
\]
for all $0\leq i<r$ and $u,v\in{\mathcal{U}}_q$. Let
\begin{align}\label{E:DualEnvelope}
{\mathcal{U}}_{\mathcal{A}}^*=\{\,u\in{\mathcal{U}}_q \mid (u,v)_K\in{\mathcal{A}}\mbox{ for all }v\in{\mathcal{U}}_{\mathcal{A}}\,\}
\end{align}
and let $u^*\in{\mathcal{U}}_{\mathcal{A}}^*$ denote the dual to $u\in{\mathcal{U}}_{\mathcal{A}}$ relative to $(\cdot,\cdot)_K$. Now, given a monomial
\[
[i_1^{a_1},i_2^{a_2},\ldots,i_k^{a_k}]
=[\underbrace{i_1,\ldots,i_1}_{a_1},\underbrace{i_2,\ldots,i_2}_{a_2},
\ldots,\underbrace{i_k,\ldots,i_k}_{a_k}]
\]
with $i_j\neq i_{j+1}$ for $1\leq j<k$, let $c_{i_1,\ldots,i_k}^{a_1,\ldots,a_k}=(a_1)_{i_1}!\cdots(a_k)_{i_k}!$, so that $(c_{i_1,\ldots,i_k}^{a_1,\ldots,a_k})^{-1}e_{i_1}^{a_1}\cdots e_{i_k}^{a_k}$ is a product of divided powers. Let
\[
{\mathcal{F}}_{\mathcal{A}}=\bigoplus{\mathcal{A}}\,c_{i_1,\ldots,i_k}^{a_1,\ldots,a_k}
[i_1^{a_1},i_2^{a_2},\ldots,i_k^{a_k}]
\]
and ${\mathcal{W}}^*_{\mathcal{A}}={\mathcal{W}}\cap{\mathcal{F}}_{\mathcal{A}}$. It is known that ${\mathcal{W}}_{\mathcal{A}}^*=\Psi({\mathcal{U}}_{\mathcal{A}}^*)$; see \cite[Lemma 8]{lec}. Define
\[
{\mathcal{F}}_{\mathbb{C}}={\mathbb{C}}\otimes_{\mathcal{A}}{\mathcal{F}}_{\mathcal{A}}\quad\mbox{and}\quad{\mathcal{W}}_{\mathbb{C}}^*={\mathbb{C}}\otimes_{\mathcal{A}}{\mathcal{W}}_{\mathcal{A}}^*,
\]
where ${\mathbb{C}}$ is an ${\mathcal{A}}$-module via $q\rightarrow 1$. Given an element $E\in{\mathcal{W}}_{\mathcal{A}}$ (resp.
${\mathcal{F}}_{\mathbb{C}}$), let $\underline{E}$ denote its image in ${\mathcal{W}}_{\mathbb{C}}$ (resp. ${\mathcal{F}}_{\mathbb{C}}$). Observe that $({\mathcal{F}}_{\mathbb{C}},*)$ is the classical shuffle algebra, and the shuffle product coincides with the formula for the characters associated to parabolic induction of ${\mathcal{A}}Se(d)$-modules (see Lemma \ref{L:ShuffleLemma}). We close this section by describing the bar involution on ${\mathcal{F}}$:
\begin{dfn}\label{D:BarInv}\cite[Proposition 6]{lec}
Let $-:{\mathcal{F}}\rightarrow{\mathcal{F}}$ be the ${\mathbb{Q}}$-linear automorphism of $({\mathcal{F}},*)$ defined by $\bar{q}=q^{-1}$ and
\[
\overline{[i_1,\ldots,i_k]}
=q^{-\sum_{1\leq s<t\leq k}(\beta_{i_s},\beta_{i_t})}[i_k,\ldots,i_1].
\]
\end{dfn}

\subsection{Good Words and Lyndon Words}\label{SS:LyndonWords}
In what follows, it is convenient to differ from the conventions in \cite{lec}. In particular, it is natural from our point of view to order monomials in ${\mathcal{F}}$ lexicographically reading from \emph{right to left}. Unlike the type $A$ case, this convention leads to some significant differences in the good Lyndon words that appear. This section contains a careful explanation of all the changes that occur.

Fix the ordering on the set of letters in ${\mathcal{F}}$ (resp. $\Pi$): $[0]<[1]<\cdots<[r-1]<[]$ (resp. $\beta_0<\beta_1<\cdots<\beta_{r-1}$). Give the set of monomials in ${\mathcal{F}}$ the associated lexicographic order read from right to left. That is,
\[
[i_1,\ldots,i_k]<[j_1,\ldots,j_{\ell}]\mbox{ if }i_k<j_{\ell},\mbox{ or for some }m,\ i_{k-m}<j_{\ell-m}\mbox{ and }i_{k-s}=j_{\ell-s}\mbox{ for all }s<m.
\]
Note that since the empty word is larger than any letter, every word is smaller than all of its right factors:
\begin{align}\label{E:rightfactors}
[i_1,\ldots,i_k]<[i_j,\ldots,i_k]\mbox{ for all }1<j\leq k.
\end{align}
(For those familiar with the theory, this definition is needed to ensure that the induced Lyndon ordering on positive roots is convex; cf. $\S$\ref{SS:PBWandCanonical} below.) For a homogeneous element $f\in{\mathcal{F}}$, let $\min(f)$ be the smallest monomial occurring in the expansion of $f$. A monomial $[i_1,\ldots,i_k]$ is called a \emph{good word} if there exists a homogeneous $w\in{\mathcal{W}}$ such that $[i_1,\ldots,i_k]=\min(w)$, and is called a \emph{Lyndon word} if it is larger than any of its proper left factors:
\[
[i_1,\ldots,i_j]<[i_1,\ldots,i_k]\mbox{ for any }1\leq j<k.
\]
Let $\mathcal{G}$ denote the set of good words, ${\mathcal{L}}$ the set of Lyndon words, and $\mathcal{GL}={\mathcal{L}}\cap\mathcal{G}\subset\mathcal{G}$ the set of good Lyndon words.
\begin{lem}\label{L:GoodFactors}\cite[Lemma 13]{lec}
Every factor of a good word is good.
\end{lem}
Because of our ordering conventions, \cite[Lemma 15, Proposition 16]{lec} become
\begin{lem}\cite[Lemma 15]{lec}
Let $l\in{\mathcal{L}}$ and let $w$ be a monomial such that $w\geq l$. Then, $\min(w*l)=wl$.
\end{lem}
\noindent and
\begin{prp}\label{P:GLproduct}\cite[Proposition 16]{lec}
Let $l\in\mathcal{GL}$, and $g\in\mathcal{G}$ with $g\geq l$. Then $gl\in\mathcal{G}$.
\end{prp}
Hence, we deduce from Lemma \ref{L:GoodFactors} and Proposition \ref{P:GLproduct} \cite[Proposition 17]{lec}:
\begin{prp}\cite{lr,lec}
A monomial $g$ is a good word if, and only if, there exist good Lyndon words $l_1\geq\ldots\geq l_k$ such that
\[
g=l_1l_2\cdots l_k.
\]
\end{prp}
As in \cite{lec}, we have
\begin{prp}\cite{lr,lec}
The map $l\rightarrow|l|$ is a bijection $\mathcal{GL}\rightarrow\Delta^+$.
\end{prp}
Given $\gamma\in\Delta^+$, let $\gamma\rightarrow l(\gamma)$ be the inverse of the above bijection (called the Lyndon covering of $\Delta^+$). We now define the \emph{bracketing} of Lyndon words, which gives rise to the \emph{Lyndon basis} of ${\mathcal{W}}$. To this end, given $l\in{\mathcal{L}}$ such that $l$ is not a letter, define the standard factorization of $l$ to be $l=l_1l_2$, where $l_2\in{\mathcal{L}}$ is a proper right factor of maximal length. Define the $q$-bracket
\begin{align}\label{E:qbracket}
[f_1,f_2]_q=f_1f_2-q^{(|f_1|,|f_2|)}f_2f_1
\end{align}
for $f_1,f_2\in{\mathcal{F}}$ homogeneous in the $\mathcal{Q}^+$-grading. Then, the bracketing $\langle l\rangle$ of $l\in{\mathcal{L}}$ is defined inductively by $\langle l\rangle=l$ if $l$ is a letter, and
\begin{align}\label{E:Lyndonbracketing}
\langle l\rangle=[\langle l_1\rangle,\langle l_2\rangle]_q
\end{align}
if $l=l_1l_2$ is the standard factorization of $l$.
\begin{exa}
(1) $\langle[0]\rangle=[0]$;

\noindent(2) $\langle[12]\rangle=[[1],[2]]_q=[12]-q^{-2}[21]$;

\noindent(3) $\langle[012]\rangle=[[0],[12]-q^{-2}[21]]_q=[012]-q^{-2}[021]-q^{-2}[120]+q^{-4}[210]$.
\end{exa}
As is suggested in this example, we have
\begin{prp}\label{P:bracketingtriangularity}\cite[Proposition 19]{lec}
For $l\in{\mathcal{L}}$, $\langle l\rangle=l+r$, where $r$ is a linear combination of words $w$ such that $|w|=|l|$ and $w<l$.
\end{prp}
Any word $w\in{\mathcal{F}}$ has a canonical factorization $w=l_1\cdots l_k$ such that $l_1,\ldots,l_k\in{\mathcal{L}}$ and $l_1\geq\cdots\geq l_k$. We define the bracketing of an arbitrary word $w$ in terms of this factorization: $\langle w\rangle=\langle l_1\rangle\cdots\langle l_k\rangle$. Define a homomorphism $\Xi:({\mathcal{F}},\cdot)\to({\mathcal{F}},*)$ by $\Xi([i])=[i]$. Then, $\Xi([i_1,\ldots,i_k])=[i_1]*\cdots*[i_k]=\Psi(e_{i_1}\cdots e_{i_k})$. In particular, $\Xi({\mathcal{F}})={\mathcal{W}}$. We have the following characterization of good words:
\begin{lem}\cite[Lemma 21]{lec}
The word $w$ is good if and only if it cannot be expressed modulo $\ker\Xi$ as a linear combination of words $v<w$.
\end{lem}
For $g\in\mathcal{G}$, set $r_g=\Xi(\langle g\rangle)$. Then, we have
\begin{thm}\label{T:Lyndonbasis}\cite[Proposition 22, Theorem 23]{lec}
Let $g\in\mathcal{G}$ and let $g=l_1\cdots l_k$ be the canonical factorization of $g$ as a nonincreasing product of good Lyndon words. Then
\begin{enumerate}
\item $r_g=r_{l_1}*\cdots*r_{l_k}$,
\item $r_g=\Psi(e_g)+\sum_{w<g}x_{gw}\Psi(e_w)$, where, for a word $v=[i_1,\ldots,i_k]$, $e_v=e_{i_1}\cdots e_{i_k}$, and
\item $\{r_g\mid g\in\mathcal{G}\}$ is a basis for ${\mathcal{W}}$.
\end{enumerate}
\end{thm}
The basis $\{r_g\mid g\in\mathcal{G}\}$ is called the Lyndon basis of ${\mathcal{W}}$.
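The bracketing of Lyndon words can be sketched in the same dictionary representation used earlier. This is an illustrative implementation, not the paper's, under two assumptions flagged in the comments: the type-$B$ pairing with $\beta_0$ short, and the reading of the standard factorization $l=l_1l_2$ in which $l_2$ is the longest proper right factor of $l$ that is Lyndon (the reading consistent with the examples).

```python
def pair(i, j):
    # Assumed type-B pairing with beta_0 short (see the earlier sketch).
    if i == j:
        return 2 if i == 0 else 4
    return -2 if abs(i - j) == 1 else 0

def less(u, v):
    # Right-to-left lexicographic order; the empty word is largest, so a
    # word is smaller than each of its proper right factors.
    for a, b in zip(u[::-1], v[::-1]):
        if a != b:
            return a < b
    return len(u) > len(v)

def is_lyndon(w):
    # Lyndon: larger than every proper left factor (prefix).
    return all(less(w[:j], w) for j in range(1, len(w)))

def concat(a, b, shift=0, sign=1):
    # Concatenation product in (F, .) on {word: {exponent: coeff}} elements.
    acc = {}
    for w1, p1 in a.items():
        for e1, c1 in p1.items():
            for w2, p2 in b.items():
                for e2, c2 in p2.items():
                    poly = acc.setdefault(w1 + w2, {})
                    e = e1 + e2 + shift
                    poly[e] = poly.get(e, 0) + sign * c1 * c2
    return acc

def merge(a, b):
    # Sum of two {word: poly} dictionaries, cancelling zero terms.
    acc = {}
    for src in (a, b):
        for w, p in src.items():
            poly = acc.setdefault(w, {})
            for e, c in p.items():
                poly[e] = poly.get(e, 0) + c
                if poly[e] == 0:
                    del poly[e]
            if not poly:
                del acc[w]
    return acc

def bracket(l):
    # <l>: standard factorization l = l1 l2, with l2 the longest proper right
    # factor that is Lyndon (an assumed reading), then [<l1>, <l2>]_q.
    if len(l) == 1:
        return {l: {0: 1}}
    split = next(s for s in range(1, len(l)) if is_lyndon(l[s:]))
    b1, b2 = bracket(l[:split]), bracket(l[split:])
    power = sum(pair(a, b) for a in l[:split] for b in l[split:])  # (|l1|, |l2|)
    return merge(concat(b1, b2), concat(b2, b1, shift=power, sign=-1))
```

Under these assumptions, `bracket((0, 1, 2))` expands $\langle[012]\rangle$, and `bracket((0, 0, 1))` expands the bracketing of the good Lyndon word attached to $2\beta_0+\beta_1$.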
An immediate consequence of Proposition \ref{P:bracketingtriangularity} and Theorem \ref{T:Lyndonbasis} is the following:
\begin{prp}\label{P:LyndonCoveringProperty}\cite[Proposition 24]{lec}
Assume $\gamma_1,\gamma_2\in\Delta^+$, $\gamma_1+\gamma_2=\gamma\in\Delta^+$, and $l(\gamma_1)<l(\gamma_2)$. Then, $l(\gamma_1)l(\gamma_2)\geq l(\gamma)$.
\end{prp}
This gives an inductive algorithm to determine $l(\gamma)$ for $\gamma\in\Delta^+$ (cf. \cite[$\S4.3$]{lec}): For $\beta_i\in\Pi\subset\Delta^+$, $l(\beta_i)=[i]$. If $\gamma$ is not a simple root, then there exists a factorization $l(\gamma)=l_1l_2$ with $l_1,l_2$ Lyndon words. By Lemma \ref{L:GoodFactors}, $l_1$ and $l_2$ are good, so $l_1=l(\gamma_1)$ and $l_2=l(\gamma_2)$ for some $\gamma_1,\gamma_2\in\Delta^+$ with $\gamma_1+\gamma_2=\gamma$. Assume that we know $l(\gamma_0)$ for all $\gamma_0\in\Delta^+$ satisfying $\operatorname{ht}(\gamma_0)<\operatorname{ht}(\gamma)$. Define
\[
C(\gamma)=\{\,(\gamma_1,\gamma_2)\in\Delta^+\times\Delta^+ \mid \gamma=\gamma_1+\gamma_2 \mbox{ and }l(\gamma_1)<l(\gamma_2)\,\}.
\]
Then, Proposition \ref{P:LyndonCoveringProperty} implies
\begin{prp}\cite[Proposition 25]{lec}
We have
\[
l(\gamma)=\min\{\,l(\gamma_1)l(\gamma_2) \mid (\gamma_1,\gamma_2)\in C(\gamma)\,\}.
\]
\end{prp}
In our situation,
\[
\Delta^+=\{\beta_i+\beta_{i+1}+\cdots+\beta_j\mid 0\leq i\leq j<r\}
\cup\{2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k\mid 0\leq j<k<r\}.
\]
A straightforward inductive argument shows that
\[
l(\beta_i+\beta_{i+1}+\cdots+\beta_j)=[i,i+1,\ldots,j]\quad\mbox{and}\quad
l(2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k)=[j,j-1,\ldots,0,0,\ldots,k-1,k].
\]
Remarkably,
\begin{prp}
In the notation of Lemma \ref{L:ShuffleLemma} we have
\[
l(\beta_i+\cdots+\beta_j)=\operatorname{ch}\Phi_{[i,j]}
\]
and
\[
2l(2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k)
=\operatorname{ch}\Phi_{[-j-1,k]}.
\]
\end{prp}
Observe that we may write any good Lyndon word uniquely in the form $l=[i,i+1,\ldots,j]$, where $i,j\in{\mathbb{Z}}$ and $0\leq|i|\leq j<r$. For example,
\begin{align}\label{E:GoodLyndonWordConvention}
l(2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k)=[-j-1,\ldots,k].
\end{align}
In the following definition, we mean for $n$ to vary.
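The inductive algorithm for the Lyndon covering can be run directly in small rank. The sketch below (illustrative helper names, rank $r=3$, not taken from the paper) builds the positive roots from the displayed description of $\Delta^+$, computes $l(\gamma)$ by taking the minimum over $C(\gamma)$ in the right-to-left lexicographic order, and recovers the closed formulas stated above.

```python
from functools import cmp_to_key

R = 3  # rank; a positive root is stored as its coefficient vector (c_0, ..., c_{R-1})

def less(u, v):
    # Right-to-left lexicographic order on words (the empty word is largest).
    for a, b in zip(u[::-1], v[::-1]):
        if a != b:
            return a < b
    return len(u) > len(v)

word_key = cmp_to_key(lambda u, v: -1 if less(u, v) else (1 if less(v, u) else 0))

def positive_roots(r):
    roots = []
    for i in range(r):
        for j in range(i, r):       # beta_i + ... + beta_j
            roots.append(tuple(1 if i <= t <= j else 0 for t in range(r)))
    for j in range(r):
        for k in range(j + 1, r):   # 2beta_0 + ... + 2beta_j + beta_{j+1} + ... + beta_k
            roots.append(tuple(2 if t <= j else (1 if t <= k else 0) for t in range(r)))
    return roots

roots = sorted(positive_roots(R), key=sum)  # process roots by height
lyn = {}
for g in roots:
    if sum(g) == 1:
        lyn[g] = (g.index(1),)       # l(beta_i) = [i]
        continue
    cands = [lyn[g1] + lyn[g2]
             for g1 in roots
             for g2 in [tuple(a - b for a, b in zip(g, g1))]
             if g1 in lyn and g2 in lyn and less(lyn[g1], lyn[g2])]
    lyn[g] = min(cands, key=word_key)
```

For example, `lyn[(2, 2, 1)]` comes out as `(1, 0, 0, 1, 2)`, matching $l(2\beta_0+2\beta_1+\beta_2)=[1,0,0,1,2]$.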
Given $\lambda\in P_{>0}^{++}$, let
\begin{align}\label{E:Bdld}
\mathcal{B}_d(\lambda)=\{\,\mu\in P^+[\lambda] \mid \lambda-\mu\in P_{\mathrm{pos}}(d)\mbox{ and }|\mu_i|<\lambda_i\mbox{ for all }i\,\}
\end{align}
and let
\begin{align}\label{E:Bd}
\mathcal{B}_d=\{\,(\lambda,\mu) \mid \lambda\in P_{>0}^{++}\mbox{ and }\mu\in\mathcal{B}_d(\lambda)\,\}.
\end{align}
Let $\mathcal{G}_d=\mathcal{G}\cap{\mathcal{F}}_d$ be the set of good words of principal degree $d$. We have
\begin{lem}\label{L:BdGd}
The map $(\lambda,\mu)\mapsto[\lambda-\mu]=[\mu_1,\ldots,\lambda_1-1,\ldots,\mu_n,\ldots,\lambda_n-1]$ induces a bijection $\mathcal{B}_d\rightarrow\mathcal{G}_d$.
\end{lem}
\begin{pff}
By \eqref{E:GoodLyndonWordConvention}, $[\lambda-\mu]$ is a well-defined element of ${\mathcal{F}}_d$. Since $\lambda\in P^{++}_{>0}$ and $\mu\in P^+[\lambda]$, the ordering convention and \eqref{E:rightfactors} imply that $[\lambda-\mu]\in\mathcal{G}_d$. This map is clearly bijective.
\end{pff}

\subsection{PBW and Canonical Bases}\label{SS:PBWandCanonical}
The lexicographic ordering on $\mathcal{GL}$ induces a total ordering on $\Delta^+$ which is \emph{convex}, meaning that if $\gamma_1,\gamma_2\in\Delta^+$ with $\gamma_1<\gamma_2$, and $\gamma=\gamma_1+\gamma_2\in\Delta^+$, then $\gamma_1<\gamma<\gamma_2$ (cf. \cite{ro3,lec}). Indeed, assume $\gamma_1,\gamma_2,\gamma=\gamma_1+\gamma_2\in\Delta^+$ and $\gamma_1<\gamma_2$.
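As an illustration of this bijection (a worked example with hypothetical data, not taken from the paper; it assumes $r\geq3$), take $n=2$, $\lambda=(3,2)$ and $\mu=(1,-1)$, so that $d=(3-1)+(2-(-1))=5$:

```latex
% Hypothetical data: n = 2, \lambda = (3,2), \mu = (1,-1), d = 5.
[\lambda-\mu] = [\mu_1,\ldots,\lambda_1-1]\cdot[\mu_2,\ldots,\lambda_2-1]
              = [1,2]\cdot[-1,0,1]
              = [1,2]\cdot[0,0,1] = [1,2,0,0,1].
```

Here $[-1,0,1]=[0,0,1]$ by the twisted-word convention $[-j-1,\ldots,k]=[j,\ldots,0,0,\ldots,k]$; since $[1,2]>[0,0,1]$ in the right-to-left order, $[1,2,0,0,1]$ is a nonincreasing product of good Lyndon words, i.e. a good word of principal degree $5$.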
Proposition \ref{P:LyndonCoveringProperty} and \eqref{E:rightfactors} imply that $l(\gamma)\leq l(\gamma_1)l(\gamma_2)<l(\gamma_2)$. If $l(\gamma)=l(\gamma_1)l(\gamma_2)$, then the definition of Lyndon words implies $l(\gamma_1)<l(\gamma)$. We are therefore left to prove that $l(\gamma_1)<l(\gamma)$ even if $l(\gamma)<l(\gamma_1)l(\gamma_2)$. This cannot happen if $\gamma=\beta_i+\cdots+\beta_j$. In the case $\gamma=2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k$, the possibilities for $\gamma_1<\gamma_2$ are $\gamma_1=\beta_i+\cdots+\beta_j$ and $\gamma_2=2\beta_0+\cdots+2\beta_{i-1}+\beta_i+\cdots+\beta_k$ for $0\leq i\leq j$. In any of these cases, $[i,\ldots,j]<[j,\ldots,0,0,\ldots,k]$. That is, $l(\gamma_1)<l(\gamma)<l(\gamma_2)$.

Each convex ordering, $\gamma_1<\cdots<\gamma_N$, on $\Delta^+$ arises from a unique decomposition $w_0=s_{i_1}s_{i_2}\cdots s_{i_N}$ of the longest element of the Weyl group of type $B_r$ via
\[
\gamma_1=\beta_{i_1},\;\gamma_2=s_{i_1}\beta_{i_2},\;\ldots,\;\gamma_N=s_{i_1}\cdots s_{i_{N-1}}\beta_{i_N}.
\]
Lusztig associates to this data a PBW basis of ${\mathcal{U}}_{\mathcal{A}}$ denoted
\[
E^{(a_1)}(\gamma_1)\cdots E^{(a_N)}(\gamma_N),\;\;\;(a_1,\ldots,a_N)\in{\mathbb{Z}}_{\geq0}^N.
\]
Leclerc \cite[$\S4.5$]{lec} describes the image in ${\mathcal{W}}$ of this basis for the convex Lyndon ordering.
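For instance (a worked check, not from the paper), in rank $r=2$ the good Lyndon words order the four positive roots as

```latex
% Convex Lyndon ordering on \Delta^+ for r = 2 (right-to-left lexicographic order):
l(\beta_0)=[0] \;<\; l(2\beta_0+\beta_1)=[0,0,1] \;<\; l(\beta_0+\beta_1)=[0,1] \;<\; l(\beta_1)=[1],
```

which is indeed convex: $\beta_0+\beta_1$ sits between $\beta_0$ and $\beta_1$, and $2\beta_0+\beta_1=\beta_0+(\beta_0+\beta_1)$ sits between $l(\beta_0)$ and $l(\beta_0+\beta_1)$.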
We use the same braid group action as Leclerc, and the results of \cite[$\S4.5$, $\S4.6$]{lec} carry over, making changes in the same manner indicated in the previous section. We describe the relevant facts below. For $g=l(\gamma_1)^{a_1}\cdots l(\gamma_k)^{a_k}$, where $\gamma_1>\cdots>\gamma_k$ and $a_1,\ldots,a_k\in{\mathbb{Z}}_{>0}$, set
\[
E_g=\Psi(E^{(a_k)}(\gamma_k)\cdots E^{(a_1)}(\gamma_1))\in{\mathcal{W}}_{\mathcal{A}}
\]
and let $E_g^*\in{\mathcal{W}}_{\mathcal{A}}^*$ be the image of $(E^{(a_k)}(\gamma_k)\cdots E^{(a_1)}(\gamma_1))^*\in{\mathcal{U}}_{\mathcal{A}}^*$. Observe that the order of the factors in the definition of $E_g$ above is increasing with respect to the Lyndon ordering. Leclerc shows that if $\gamma\in\Delta^+$, then
\begin{align}\label{E:Proportional}
\kappa_{l(\gamma)}E_{l(\gamma)}=r_{l(\gamma)}
\end{align}
for some $\kappa_{l(\gamma)}\in{\mathbb{Q}}(q)$; see \cite[Theorem 28]{lec} (the proof of this theorem in our case is obtained by reversing all the inequalities and using the standard factorization as opposed to the costandard factorization). More generally, let $f\mapsto f^t$ be the linear map defined by $[i_1,\ldots,i_k]^t=[i_k,\ldots,i_1]$ and $(x*y)^t=y^t*x^t$. Then, $E_g$ is proportional to $\overline{r_g^t}$ (cf. \cite[$\S4.6$, $\S5.5.2$--$5.5.3$]{lec}). As in \cite[$\S5.5.3$]{lec}, we see that there exists an explicit $c_g\in{\mathbb{Z}}$ such that
\[
E_g^*=q^{c_g}(E_{l_m}^*)*\cdots*(E_{l_1}^*)
\]
if $g=l_1\cdots l_m$ with $l_1>\cdots>l_m$. Using \eqref{E:qShuffle}, we deduce that
\[
E_g^*=q^{C_g}(E_{l_1}^*)\,\bar{*}\cdots\bar{*}\,(E_{l_m}^*),
\]
where $C_g=c_g-\sum_{1\leq i<j\leq m}(|l_i|,|l_j|)$.
In particular, {\mathfrak{b}}egin{align}{\langle}bel{E:Eshuffle} \underline{E_g^*}=\underline{(E_{l_1}^*)*\cdots*(E_{l_m}^*)}. {{\mathfrak{b}}ar{e}}nd{align} Using the bar involution (Definition \ref{D:BarInv}), Leclerc constructs the canonical basis, $\{b_g \mid g\in\mathcal{G}\}$ for ${\mathcal{W}}_{\mathcal{A}}$ via the PBW basis $\{E_g \mid g\in\mathcal{G}\}$. It has the form \[ b_g=E_g+{\mathfrak{s}}um_{{\mathfrak{s}}ubstack{h\in\mathcal{G}\{\mathfrak{h}}<g}}\operatorname{ch}i_{gh}E_h. \] The dual canonical basis then has the form \[ b_g^*=E_g^*+{\mathfrak{s}}um_{{\mathfrak{s}}ubstack{h\in\mathcal{G}\{\mathfrak{h}}>g}}\operatorname{ch}i_{gh}^*E_h^*. \] In particular, for good Lyndon words, \cite[Corollary 41]{lec}, $b^*_{l}=E^*_{l}$ for every $l\in\mathcal{GL}$. As in \cite[Lemma 8.2]{lec}, we see that $b^*_{[i,{\langle}mbdaots,j]}=[i,{\langle}mbdaots,j]$ for $0\leq i<j<r$. We now prove {\mathfrak{b}}egin{lem}{\langle}bel{L:DblSeg} For $0\leq j<k<r$, one has \[ b^*_{[j,{\langle}mbdaots,0,0,{\langle}mbdaots,k]}=(2)_0[j,{\langle}mbdaots,0,0,{\langle}mbdaots,k]. \] {{\mathfrak{b}}ar{e}}nd{lem} {\mathfrak{b}}egin{pff} We prove this by induction on $j$ and $k$ with $j<k$, using {{\mathfrak{b}}ar{e}}qref{E:inductiveqshuffle}, {{\mathfrak{b}}ar{e}}qref{E:qbracket}, and {{\mathfrak{b}}ar{e}}qref{E:Lyndonbracketing} for the computations. Observe that for $k{\mathfrak{g}}eq1$, $r_{[0,1,{\langle}mbdaots,k]}=(q^2-q^{-2})^k[0,1,{\langle}mbdaots,k]$, which can be proved easily by downward induction on $j$, $0\leq j<k$, using {{\mathfrak{b}}ar{e}}qref{E:inductiveqshuffle} and \[ r_{[j,{\langle}mbdaots,k]}=\Xi({\langle}[j,{\langle}mbdaots,k]{\rangle})=\Xi([[j],{\langle}[j+1,{\langle}mbdaots,k]]_q)=[j]*r_{[j+1,{\langle}mbdaots,k]}-q^{-2}r_{[j+1,{\langle}mbdaots,k]}*[j]. 
\] By {{\mathfrak{b}}ar{e}}qref{E:inductiveqshuffle}, we have {\mathfrak{b}}egin{align*} [0]*[0,1]-[0,1]*[0]&=[0,1,0]+q^2([0]*[0])[1]-([0]*[0])[1]-[0,1,0]\\ &=(q^2-1)([0,0]+q^{-2}[0,0])[1]=(q^2-q^{-2})[0,0,1] {{\mathfrak{b}}ar{e}}nd{align*} Therefore, applying {{\mathfrak{b}}ar{e}}qref{E:Lyndonbracketing} and the relevant definitions, we deduce that {\mathfrak{b}}egin{align*} r_{[0,0,1]}&=\Xi({\langle}[0,0,1]{\rangle})\\ &=\Xi([[0],{\langle}[0,1]{\rangle}]_q^2)\\ &=[0]*r_{[0,1]}-r_{[0,1]}*[0]\\ &=(q^2-q^{-2})([0]*[0,1]-[0,1]*[0])\\ &=(q^2-q^{-2})^2[0,0,1] {{\mathfrak{b}}ar{e}}nd{align*} Once again, using {{\mathfrak{b}}ar{e}}qref{E:inductiveqshuffle}, we deduce that for all $k{\mathfrak{g}}eq2$, {\mathfrak{b}}egin{align}{\langle}bel{E:DblSegReduction0} [0]*[0,{\langle}mbdaots,k]-[0,{\langle}mbdaots,k]*[0]=([0]*[0,{\langle}mbdaots,k-1]-[0,{\langle}mbdaots,k-1]*[0])[k]. {{\mathfrak{b}}ar{e}}nd{align} Assume $k{\mathfrak{g}}eq2$. Then, $({\mathfrak{b}}eta_0,{\mathfrak{b}}eta_0+\cdots+{\mathfrak{b}}eta_k)=0$, so iterated applications of {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction0} yields {\mathfrak{b}}egin{align*} r_{[0,0,{\langle}mbdaots,k]}&=[0]*r_{[0,{\langle}mbdaots,k]}-r_{[0,{\langle}mbdaots,k]}*[0]\\ &=(q^2-q^{-2})^k([0]*[0,{\langle}mbdaots,k]-[0,{\langle}mbdaots,k]*[0])\\ &=(q^2-q^{-2})^k([0]*[0,1]-[0,1]*[0])[2,{\langle}mbdaots, k]\\ &=(q^2-q^{-2})^{k+1}[0,0,{\langle}mbdaots, k] {{\mathfrak{b}}ar{e}}nd{align*} Now, assume that $k{\mathfrak{g}}eq 2$, and $0<j<k$. To compute $r_{[j,{\langle}mbdaots,0,0,{\langle}mbdaots,k]}$, we need the following. For $|j-k|>1$, {\mathfrak{b}}egin{align}{\langle}bel{E:DblSegReduction1} [j]*[j-1,{\langle}mbdaots,&k]-q^{-2}[j-1,{\langle}mbdaots,k]*[j]\\ {\mathfrak{n}}onumber&=([j]*[j-1,{\langle}mbdaots,k-1]-q^{-2}[j-1,{\langle}mbdaots,k-1]*[j])[k]. 
{{\mathfrak{b}}ar{e}}nd{align} For $j=k-1$, {\mathfrak{b}}egin{align}{\langle}bel{E:DblSegReduction2} [j]*[j-1,{\langle}mbdaots,0,&0,{\langle}mbdaots,j+1]-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j+1]*[j]\\ {\mathfrak{n}}onumber&=(q^2[j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j]-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j]*[j])[j+1]. {{\mathfrak{b}}ar{e}}nd{align} Finally, {\mathfrak{b}}egin{align}{\langle}bel{E:DblSegReduction3} q^2[j]*[j-1,{\langle}mbdaots,0,&0,{\langle}mbdaots,j]-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j]*[j]\\ {\mathfrak{n}}onumber&=([j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-2]-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-2]*[j])[j,j+1]. {{\mathfrak{b}}ar{e}}nd{align} Indeed, {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction1} and {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction2} are straightforward applications of {{\mathfrak{b}}ar{e}}qref{E:inductiveqshuffle}. Equation {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction3} involves a little more calculation: {\mathfrak{b}}egin{align*} q^2[j]*[j-1,{\langle}mbdaots,0,&0,{\langle}mbdaots,j]-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j]*[j]\\ =&q^2[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j,j]+q^{-2}([j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-1]\\ &-[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-1]*[j])[j]-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j,j]\\ =&(q^2-q^{-2})[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j,j]+q^{-2}([j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j]\\&+q^2([j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-2])[j-1] -([j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-2]*[j])[j-1]\\&-q^4[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j])[j]\\ =&([j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-2]-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-2]*[j])[j,j+1], {{\mathfrak{b}}ar{e}}nd{align*} Note that {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction1} holds for both $[j-1,j,{\langle}mbdaots,k]$ and $[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,k]$. 
Now, assume that we have shown that $r_{[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,k]}=(q^2-q^{-2})^{j+k}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,k]$. Then, since $({\mathfrak{b}}eta_j,2{\mathfrak{b}}eta_0+\cdots+2{\mathfrak{b}}eta_{j-1}+{\mathfrak{b}}eta_j+\cdots+{\mathfrak{b}}eta_k)=-2$, {\mathfrak{b}}egin{align*} r_{[j,{\langle}mbdaots,0,0,{\langle}mbdaots,k]}=&[j]*r_{[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,k]}-r_{[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,k]}*[j]\\ =&(q^2-q^{-2})^{j+k}[j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,k]-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,k]*[j]\\ =&(q^2-q^{-2})^{j+k}([j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j+1]\\&-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j+1]*[j])[j+2,{\langle}mbdaots,k] \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox{by {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction1}}\\ =&(q^2-q^{-2})^{j+k}(q^2[j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j]\\&-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j]*[j])[j+1,{\langle}mbdaots,k] \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox{by {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction2}}\\ =&(q^2-q^{-2})^{j+k}([j]*[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-2]\\&-q^{-2}[j-1,{\langle}mbdaots,0,0,{\langle}mbdaots,j-2]*[j])[j,{\langle}mbdaots,k] \,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox{by {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction3}}\\ =&(q^2-q^{-2})^{j+k}([j]*[j-1]-q^{-2}[j-1]*[j])[j-2,{\langle}mbdaots,0,0,{\langle}mbdaots,k]\;\;\;\;\mbox{by {{\mathfrak{b}}ar{e}}qref{E:DblSegReduction1}}\\ =&(q^2-q^{-2})^{j+k+1}[j,{\langle}mbdaots,0,0,{\langle}mbdaots,k]. {{\mathfrak{b}}ar{e}}nd{align*} Finally, the result follows after computing the normalizing coefficient {{\mathfrak{b}}ar{e}}qref{E:Proportional} using \cite[Equation (28)]{lec}. We leave the details to the reader. 
{{\mathfrak{b}}ar{e}}nd{pff} {\mathfrak{s}}ubsection{}In section we give a representation theoretic interpretation of the good Lyndon words associated to the root vectors $2{\mathfrak{b}}eta_0+\cdots+2{\mathfrak{b}}eta_j+{\mathfrak{b}}eta_{j+1}+\cdots+{\mathfrak{b}}eta_k$ ($0\leq j<k<r$) which appear in \cite[Lemma 53]{lec}. The corresponding dual canonical basis vectors are given by the formula \[ [0]\cdot([1,{\langle}mbdaots,j]*[0,{\langle}mbdaots,k]). \] {\mathfrak{b}}egin{lem} Let $0\leq a<b$, $d=b+a+2$, ${\langle}mbda=(b+1,a+1)$ and $\alpha=(1,-1)$. Then, for $1\leq k\leq a$, \[ \operatorname{ch}{\mathcal{L}}({\langle}mbda,-k\alpha)=2\underline{[k-1]\cdot([k-2,k-3,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]*[k,{\langle}mbdaots,a])} \] where if $ k=1$, we interpret \[[k-2,k-3,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]=[0,1,{\langle}mbdaots, b] \] {{\mathfrak{b}}ar{e}}nd{lem} {\mathfrak{b}}egin{proof} By \cite[Proposition 11.4]{g}, for each $k\in{\mathbb{Z}}_{{\mathfrak{g}}eq0}$, there exists a short exact sequence \[ \xymatrix{0\ar[r]&L(-(k+1)\alpha)\ar[r]&M(-k\alpha)\ar[r]&L(-k\alpha)\ar[r]&0}. \] For $k\leq a+1$, applying the functor $F_{\langle}mbda$ yields the exact sequence {\mathfrak{b}}egin{eqnarray}{\langle}bel{E:ShortExactSeq} \xymatrix@1{0\ar[r]&F_{\langle}mbda L(-(k+1)\alpha)\ar[r]&2{\mathcal{M}}({\langle}mbda,-k\alpha)\ar[r]&F_{\langle}mbda L(-k\alpha)\ar[r]&0}. {{\mathfrak{b}}ar{e}}nd{eqnarray} Therefore, \[ \operatorname{ch} F_{\langle}mbda L(-k\alpha)=4\underline{[k-1,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]*[k,{\langle}mbdaots,a]}-\operatorname{ch} F_{\langle}mbda L(-(k+1)\alpha). \] Note that when $k=a+1$, $F_{\langle}mbda L(-(k+1)\alpha)=0$ since ${\mathcal{M}}({\langle}mbda,-(a+2)\alpha)=0$. 
Therefore the sequence {{\mathfrak{b}}ar{e}}qref{E:ShortExactSeq} implies $F_{\langle}mbda L(-k\alpha)=2{\mathcal{L}}({\langle}mbda,-(a+1)\alpha)\cong2{\mathcal{M}}({\langle}mbda,-(a+1)\alpha)\cong 2{\mathcal{P}}hii_{[-a-1,b]}$, and \[ \operatorname{ch}{\mathcal{P}}hii_{[-a-1,b]}=2\underline{[a,a-1,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]}. \] We now prove the lemma by downward induction on $k\leq a$. We have {\mathfrak{b}}egin{align*} \operatorname{ch} F_{\langle}mbda L(-a\alpha)=&4\,\underline{[a-1,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]*[a]-4[a,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]}\\ =&4\,\underline{[a-1]\cdot([a-2,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]*[a])}. {{\mathfrak{b}}ar{e}}nd{align*} Hence, $F_{\langle}mbda L(-a\alpha)=2{\mathcal{L}}({\langle}mbda,-a\alpha)$ and the lemma holds for $k=a$. Now, assume $k<a$, $F_{\langle}mbda L(-(k+1)\alpha)=2{\mathcal{L}}({\langle}mbda,-(k+1)\alpha)$, and \[ \operatorname{ch}{\mathcal{L}}({\langle}mbda,-(k+1)\alpha)=2\underline{[k]\cdot([k-1,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]*[k+1,{\langle}mbdaots,a])}. \] Then, {\mathfrak{b}}egin{align*} \operatorname{ch} F_{\langle}mbda L(-k\alpha)=&4\underline{[k-1,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]*[k,{\langle}mbdaots,a]}- 4\underline{[k]\cdot([k-1,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]*[k+1,{\langle}mbdaots,a])}\\ =&4\underline{[k-1]\cdot([k-2,{\langle}mbdaots,1,0,0,1,{\langle}mbdaots,b]*[k,{\langle}mbdaots,a])}. {{\mathfrak{b}}ar{e}}nd{align*} Hence, $F_{\langle}mbda L(-k\alpha){\mathfrak{n}}eq 0$, so $F_{\langle}mbda L(-k\alpha)=2{\mathcal{L}}({\langle}mbda,-k\alpha)$ and the lemma holds. {{\mathfrak{b}}ar{e}}nd{proof} {\mathfrak{b}}egin{cor}{\langle}bel{C:LecDblSeg} Let $0\leq a<b$, $d=b+a+2$, ${\langle}mbda=(b+1,a+1)$ and $\mu=-\alpha=(-1,1)$. Then, \[ \operatorname{ch}{\mathcal{L}}({\langle}mbda,-\alpha)=2\,\underline{[0]\cdot[0,{\langle}mbdaots,b]*[1,{\langle}mbdaots,a]}. 
\] {{\mathfrak{b}}ar{e}}nd{cor} {\mathfrak{s}}ubsection{A Basis for the Grothendieck Group $K(\mbox{Rep} {\mathcal{A}}Se(d))$}{\langle}bel{SS:GrothendieckGroup} {\mathfrak{b}}egin{thm}{\langle}bel{T:GrothendieckBasis1} The set \[ \{ \left[ {\mathcal{M}}({\langle}mbda,\mu)\right]\mid ({\langle}mbda,\mu)\in\mathcal{B}_d\} \] forms a basis for $K({\mathbb{R}}ep{\mathcal{A}}Se(d))$. {{\mathfrak{b}}ar{e}}nd{thm} {\mathfrak{b}}egin{proof} By Lemma \ref{L:DblSeg} and {{\mathfrak{b}}ar{e}}qref{E:Eshuffle}, it follows that $\operatorname{ch}{\mathcal{M}}({\langle}mbda,\mu)=\underline{E^*_{[{\langle}mbda-\mu]}}$. The result now follows from Lemma \ref{L:BdGd} and the fact that the character map is injective. {{\mathfrak{b}}ar{e}}nd{proof} We will now describe a basis for $K({\mathbb{R}}ep{\mathcal{A}}Se(d))$ in terms of the simple modules ${\mathcal{L}}({\langle}mbda,\mu)$. {\mathfrak{b}}egin{prp}{\langle}bel{P:StandardSegmentForm} Let $b{\mathfrak{g}}eq0$, ${\langle}mbda=(b+1,b+1)$ and $\alpha=(1,-1)$. Then, \[ {\mathcal{P}}hii_{[-b-1,b]}\cong{\mathcal{L}}({\langle}mbda,b\alpha). \] {{\mathfrak{b}}ar{e}}nd{prp} {\mathfrak{b}}egin{proof} There is a surjective homomorphism ${\mathcal{M}}({\langle}mbda,b\alpha)\to{\mathcal{P}}hii_{[-b-1,b]}$. The result follows since ${\mathcal{P}}hii_{[-b-1,b]}$ is simple. {{\mathfrak{b}}ar{e}}nd{proof} {\mathfrak{b}}egin{cor}{\langle}bel{C:StandardizingWords} Assume that ${\langle}mbda\in{\mathcal{P}}_{>0}^{++}$, $\mu\in P^+[{\langle}mbda]$, ${\langle}mbda-\mu\in{P_{\mathrm{poly}}^+}os(d)$, and $|\mu_i|\leq{\langle}mbda_i$ for all $i$. Then, there exists $({{\mathfrak{b}}ar{e}}taa,{\mathfrak{n}}u)\in\mathcal{B}_d$ such that \[ {\mathcal{L}}({\langle}mbda,\mu)\cong{\mathcal{L}}({{\mathfrak{b}}ar{e}}taa,{\mathfrak{n}}u), \] and $[{\langle}mbda-\mu]\leq[{{\mathfrak{b}}ar{e}}taa-{\mathfrak{n}}u]$. 
\end{cor}

\begin{proof}
First, we may assume $\mu_i<\lambda_i$ for all $i$, since the terms for which $\lambda_i=\mu_i$ do not contribute to $\mathcal{L}(\lambda,\mu)$. Proceed by induction on $N(\lambda,\mu)=|\{i=1,\ldots,n \mid \mu_i=-\lambda_i\}|$. If $N(\lambda,\mu)=0$, then $(\lambda,\mu)\in\mathcal{B}^+_d$, so there is nothing to do. If $N(\lambda,\mu)>0$, let $j$ be the smallest index such that $\mu_j=-\lambda_j$. Set $\lambda^{(1)}=(\lambda_1,\ldots,\lambda_{j-1},\lambda_j,\lambda_j,\lambda_{j+1},\ldots,\lambda_n)$ and $\mu^{(1)}=(\mu_1,\ldots,\mu_{j-1},\lambda_j-1,\mu_j+1,\mu_{j+1},\ldots,\mu_n)$. Clearly, $\lambda^{(1)}\in P_{>0}^{++}$ and $\mu^{(1)}\in\lambda^{(1)}-P^{+}_{\mathrm{poly}}(d)$. We now show $\mu^{(1)}\in P^+[\lambda^{(1)}]$. Indeed, $\lambda_j>0$, so $\lambda_j-1>1-\lambda_j=\mu_j+1$; and $\mu_j\geq\mu_{j+1}$, so $\mu_j+1>\mu_{j+1}$. Since $\mu_j<\lambda_j-1$, the $j$th twisted good Lyndon word in $[\lambda^{(1)}-\mu^{(1)}]$ is greater than the $j$th twisted good Lyndon word in $[\lambda-\mu]$. Hence, $[\lambda-\mu]\leq[\lambda^{(1)}-\mu^{(1)}]$.
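To illustrate the construction on a small, hypothetical instance: if $n=1$, $\lambda=(2)$ and $\mu=(-2)$, then $j=1$ and
\[
\lambda^{(1)}=(2,2),\qquad \mu^{(1)}=(\lambda_1-1,\mu_1+1)=(1,-1),
\]
so $N(\lambda^{(1)},\mu^{(1)})=0$ and a single step of the induction suffices.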
Now, there exists a surjective homomorphism
\begin{align*}
\Phi_{[\mu_1,\lambda_1-1]}\circledast\cdots\circledast\mathcal{M}((\lambda_j,\lambda_j),(\lambda_j-1,\mu_j+1)) &\circledast\cdots\circledast\Phi_{[\mu_{n},\lambda_{n}-1]}\\
&\to\Phi_{[\mu_1,\lambda_1-1]}\circledast\cdots\circledast\Phi_{[\mu_j,\lambda_j-1]} \circledast\cdots\circledast\Phi_{[\mu_{n},\lambda_{n}-1]}.
\end{align*}
Hence, there is a surjective homomorphism $\mathcal{M}(\lambda^{(1)},\mu^{(1)})\to\mathcal{L}(\lambda,\mu)$. It follows that $\mathcal{L}(\lambda^{(1)},\mu^{(1)})\cong\mathcal{L}(\lambda,\mu)$. Since $N(\lambda^{(1)},\mu^{(1)})<N(\lambda,\mu)$, the result follows.
\end{proof}

Recall that given $\mu\in\lambda-P^{+}_{\mathrm{poly}}(d)$ there exists a unique $w\in S_d[\lambda]$ such that $w\mu\in P^+[\lambda]$. Let $\mu^+$ denote this element. Also, given $\lambda\in P^{++}$ and $\mu\in\lambda-P^{+}_{\mathrm{poly}}(d)$, let $[\lambda-\mu]^+=[\lambda-\mu^+]\in\mathcal{TG}$ be the associated twisted good word. The following lemma is straightforward.

\begin{lem}\label{L:WordTriangularity}
Assume that $\lambda\in P^{++}$, $\lambda-\mu\in P^{+}_{\mathrm{poly}}(d)$ and $\gamma\in Q^+$. Then, $[\lambda-\mu]\leq[\lambda-(\mu-\gamma)^+]$.
\end{lem}

\begin{thm}
The following is a complete list of pairwise non-isomorphic simple modules for $\ASe(d)$:
\[
\{\,\mathcal{L}(\lambda,\mu)\mid (\lambda,\mu)\in \mathcal{B}^+_d\,\}.
\]
\end{thm}

\begin{proof}
Every composition factor of $M(\mu)$ is of the form $L(\mu-\gamma)$ for some $\gamma \in Q^+$.
Applying the functor $F_\lambda$, we deduce that every composition factor of $\mathcal{M}(\lambda,\mu)$ is of the form $\mathcal{L}(\lambda,\mu-\gamma)\cong\mathcal{L}(\lambda,(\mu-\gamma)^+)$. Now, putting together Corollary \ref{C:StandardizingWords} and Lemma \ref{L:WordTriangularity}, we deduce that in the Grothendieck group
\[
[\mathcal{M}(\lambda,\mu)]=\sum_{\substack{\nu \in\mathcal{B}_d(\eta)\\ \eta \in P^{++}_{>0}\\ [\lambda-\mu]\leq[\eta-\nu]}}c_{\lambda,\mu,\eta,\nu}[\mathcal{L}(\eta,\nu)],
\]
where the $c_{\lambda,\mu,\eta,\nu}$ are integers and $c_{\lambda,\mu,\lambda,\mu}\neq 0$. Therefore, the transition matrix between the basis for $K(\Rep\ASe(d))$ given by standard modules and that given by simples is triangular with nonzero diagonal entries, hence invertible, and the result follows.
\end{proof}

\newpage

\section{Table of Notation}\label{SS:TableofNotation}

For the convenience of the reader we provide a table of notation with a reference to where the notation is first defined.
\bigskip
\begin{center}
\begin{tabular}{ccl}
Notation & & First Defined\\
\hline
$\mathcal{S}(d)$, $\ASe(d)$, $\mathcal{P}_{d}[x]$, $\mathcal{A}(d)$ & & Section~\ref{SS:Saffdef} \\
$q(a)$ & & Section~\ref{SS:weights}, \eqref{E:qdef}\\
$\mathcal{P}_{d}[x^{2}]$ & & Section~\ref{SS:weights} \\
$\operatorname{Ind}^{d}_{\mu}$ & & Section~\ref{SS:Mackey} \\
$D_\nu$, $D_{(m,k)}$ & & Section~\ref{SS:Mackey} \\
$\gamma_{0}=\gamma_{0}(a_{1}, \dots ,a_{d})$ & & Section~\ref{SS:characters}, \eqref{E:gammazerodef}\\
$[a_1,\ldots,a_d]$ & & Section~\ref{SS:characters} \\
$\Cl_{d}$ & & Section~\ref{subsection irred modules}, \eqref{E:Cldef}\\
$\mathcal{L}_i$, $s_{ij}$ & & Section~\ref{subsection irred modules}, \eqref{E:JMelt}\\
$[a,b]$ & & Section~\ref{subsection irred modules} \\
$\hat{\Phi}_{[a,b]}$, $\hat{\Phi}_{[a,b]}^{+}$, $\hat{\Phi}_{[a,b]}^{-}$ & & Section~\ref{subsection irred modules} \\
$\Phi_{[a,b]}$ & & Section~\ref{subsection irred modules}, Definition~\ref{segments} \\
$\hat{\mathbf{1}}_{[a,b]}$, $\varphi\hat{\mathbf{1}}_{[a,b]}$ & & Section~\ref{subsection irred modules} \\
$\mathbf{1}_{a,b,n}$ & & Section~\ref{unique simple quotient}\\
$R$, $R^{+}$, $Q$, $Q^{+}$ & & Section~\ref{SS:LieThy} \\
$P$, $P_{\geq 0}$, $P^{+}$, $P^{++}$, $P^{+}_{\text{rat}}$, $P^{+}_{\text{poly}}$ & & Section~\ref{SS:LieThy} \\
$P(d)$, $P_{\geq 0}(d)$, $P^{+}(d)$, $P^{++}(d)$, $P^{+}_{\text{rat}}(d)$, $P^{+}_{\text{poly}}(d)$ & & Section~\ref{SS:LieThy} \\
$S_{n}[\lambda]$, $R[\lambda]$, $P^{+}[\lambda]$, $P^-[\lambda]$ & & Section~\ref{SS:LieThy} \\
$\widehat{\Phi}(\lambda, \mu)$,
$\Phi(\lambda, \mu)$ & & Section~\ref{SS:inducedmodules} \\
$\widehat{\mathcal{M}}(\lambda, \mu)$, $\mathcal{M}(\lambda, \mu)$ & & Section~\ref{SS:inducedmodules}, \eqref{E:Mhatdef}, \eqref{E:Mdef}\\
$\mathcal{M}_{a,b,n}$ & & Section~\ref{unique simple quotient}\\
$S_{n}[\zeta]$ & & Section~\ref{SS:inducedmodules} \\
$\mathcal{R}(\lambda, \mu)$ & & Section~\ref{unique simple quotient} \\
$L(\lambda, \mu)$ & & Section~\ref{unique simple quotient}, Theorem~\ref{thm:unique irred quotient} \\
$\lambda/\mu$ & & Section~\ref{S:Calibrated}\\
$\mathcal{Y}_{i,L}$ & & Section~\ref{S:Calibrated}\\
$H^{\lambda/\mu}$ & & Section~\ref{S:Calibrated}\\
$e_{i,j}$, $f_{i,j}$, $\bar{e}_{i,j}$, $\bar{f}_{i,j}$ & & Section~\ref{SS:qndfn} \\
$\mathcal{O}$, $\mathcal{O}(\mathfrak{q}(n))$ & & Section~\ref{SS:RootData} \\
$\widehat{M}(\lambda)$, $M(\lambda)$ & & Section~\ref{SS:RootData} \\
$(\cdot, \cdot)_{S}$ & & Section~\ref{SS:ShapovalovForm}
\end{tabular}

\newpage

\begin{tabular}{ccl}
Notation & & First Defined\\
\hline
$C_{i}$, $S_{i,j}$, $F_i$ & & Section~\ref{SS:Sergeev Duality} \\
$\Omega_{i,j}$ & & Section~\ref{SS:action} \\
$F_{\lambda}$ & & Section~\ref{SS:Flambda}, \eqref{E:Flambdadef}\\
$(\cdot, \cdot)_{\mu}$ & & Section~\ref{SS:functorimage}\\
$\varpi(\mu)$ & & Section~\ref{SS:functorimage}\\
$\Delta^+$, $\Pi$, $\mathcal{Q}$, $\mathcal{Q}^+$ & & Section~\ref{SS:ShuffleAlg}\\
$(\mathcal{F},*)$, $\mathcal{W}$ & & Section~\ref{SS:ShuffleAlg}\\
$\mathcal{F}_{\mathcal{A}}$, $\mathcal{F}_{\mathbb{C}}$, $\mathcal{W}_{\mathcal{A}}$, $\mathcal{W}_{\mathbb{C}}$ & & Section~\ref{SS:ShuffleAlg}\\
$\underline{E}\in\mathcal{W}_{\mathbb{C}}$ & & Section~\ref{SS:ShuffleAlg}\\
$\mathcal{GL}$, $\mathcal{G}$ & &
Section~\ref{SS:LyndonWords}\\
$\mathcal{B}_d[\lambda]$, $\mathcal{B}_d$ & & Section~\ref{SS:LyndonWords}\\
$[\cdot,\cdot]_q$, $\Xi$, $r_g$ & & Section~\ref{SS:LyndonWords}\\
$E_g$, $E_g^*$, $b_g$, $b_g^*$ & & Section~\ref{SS:PBWandCanonical}
\end{tabular}
\end{center}

\pagebreak

\begin{thebibliography}{99}

\bibitem{as} T. Arakawa and T. Suzuki, Duality between $\mathfrak{sl}_n(\mathbb{C})$ and the degenerate affine Hecke algebra of type $A$, \emph{J. Algebra} \textbf{209} (1998), 288--304.

\bibitem{ar} S. Ariki, On the decomposition numbers of the Hecke algebra of type $G(m,1,n)$, \emph{J. Math. Kyoto Univ.} \textbf{36} (1996), 789--808.

\bibitem{b} J. Brundan, Kazhdan-Lusztig polynomials and character formulae for the Lie superalgebra $\mathfrak{q}(n)$, \emph{Adv. in Math.} \textbf{182} (2004), 28--77.

\bibitem{b2} \bysame, Centers of degenerate cyclotomic Hecke algebras and parabolic category $\mathcal{O}$, \emph{Represent. Theory} \textbf{12} (2008), 236--259.

\bibitem{bk1} J. Brundan and A. Kleshchev, Projective representations of the symmetric group via Sergeev duality, \emph{Math. Z.} \textbf{239} (2002), no. 1, 27--68.

\bibitem{bk2} \bysame, Hecke-Clifford superalgebras, crystals of type $A_{2l}^{(2)}$ and modular branching rules for $\widehat{S}_n$, \emph{Represent. Theory} \textbf{5} (2001), 317--403.

\bibitem{bk3} \bysame, Schur-Weyl duality for higher levels, \emph{Selecta Math.} \textbf{14} (2008), 1--57.

\bibitem{bk4} \bysame, Representations of shifted Yangians and finite $W$-algebras, \emph{Mem. Amer. Math. Soc.} \textbf{196} (2008), no. 918.

\bibitem{ch0} I.
Cherednik, Special bases of irreducible representations of a degenerate affine Hecke algebra, \emph{Functional Anal. Appl.} \textbf{20} (1986), no. 1, 76--78.

\bibitem{ch} \bysame, Double affine Hecke algebras, London Mathematical Society Lecture Note Series, 319, Cambridge University Press, Cambridge, 2005.

\bibitem{CR} J. Chuang and R. Rouquier, Derived equivalences for symmetric groups and $\mathfrak{sl}_2$-categorification, \emph{Ann. of Math.} (2) \textbf{167} (2008), no. 1, 245--298.

\bibitem{d1} V. G. Drinfeld, \emph{Proc. Intern. Cong. Math., Berkeley}, vol. 1, Academic Press, New York, 1987, pp. 798--820.

\bibitem{fr} A. Frisk, Typical blocks of the category $\mathcal{O}$ for the queer Lie superalgebra, \emph{J. Algebra Appl.} \textbf{6} (2007), no. 5, 731--778.

\bibitem{ginz} V. A. Ginzburg, Proof of the Deligne-Langlands conjecture, \emph{Soviet Math. Dokl.} \textbf{35} (1987), no. 2, 304--308.

\bibitem{g} M. Gorelik, Shapovalov determinants of $Q$-type Lie superalgebras, \emph{Int. Math. Res. Pap.} (2006), Art. ID 96895, 71 pp.

\bibitem{grn} J. A. Green, Quantum groups, Hall algebras and quantum shuffles, in: Finite Reductive Groups (Luminy, 1994), 273--290, Progr. Math. 141, Birkhäuser, 1997.

\bibitem{gr} I. Grojnowski, Affine $\widehat{\mathfrak{sl}}_p$ controls the representation theory of the symmetric group and related Hecke algebras, math.RT/9907129.

\bibitem{kds} A. De Sole and V. Kac, Finite vs affine $W$-algebras, \emph{Jpn. J. Math.} \textbf{1} (2006), no. 1, 137--261.

\bibitem{ka} M. Kashiwara, On crystal bases of the $q$-analogue of universal enveloping algebras, \emph{Duke Math. J.} \textbf{63} (1991), 465--516.

\bibitem{khl1} M. Khovanov and A.
Lauda, A diagrammatic approach to categorification of quantum groups I, arXiv:0803.4121.

\bibitem{khl2} \bysame, A diagrammatic approach to categorification of quantum groups II, arXiv:0804.2080.

\bibitem{khl3} \bysame, A diagrammatic approach to categorification of quantum groups III, arXiv:0807.3250.

\bibitem{KN} S. Khoroshkin and M. Nazarov, Yangians and Mickelsson algebras I, \emph{Transform. Groups} \textbf{11} (2006), no. 4, 625--658.

\bibitem{kl} A. Kleshchev, Linear and Projective Representations of Symmetric Groups, Cambridge University Press, 2005.

\bibitem{kr} A. Kleshchev and A. Ram, Homogeneous representations of Khovanov-Lauda algebras, arXiv:0809.0557.

\bibitem{lr} P. Lalonde and A. Ram, Standard Lyndon bases of Lie algebras and enveloping algebras, \emph{Trans. Amer. Math. Soc.} \textbf{347} (1995), 1821--1830.

\bibitem{lec} B. Leclerc, Dual canonical bases, quantum shuffles and $q$-characters, \emph{Math. Z.} \textbf{246} (2004), no. 4, 691--732.

\bibitem{m} I. G. Macdonald, Symmetric Functions and Hall Polynomials, second edition, Oxford University Press, Oxford, 1995.

\bibitem{n} M. Nazarov, Young's symmetrizers for projective representations of the symmetric group, \emph{Adv. Math.} \textbf{127} (1997), no. 2, 190--257.

\bibitem{o} G. I. Olshanski, Quantized universal enveloping superalgebra of type $Q$ and a super-extension of the Hecke algebra, \emph{Lett. Math. Phys.} \textbf{24} (1992), 93--102.

\bibitem{or} R. Orellana and A. Ram, Affine braids, Markov traces and the category $\mathcal{O}$, in: Algebraic Groups and Homogeneous Spaces, \emph{Tata Inst. Fund. Res. Stud. Math.}, Mumbai, 2007, 423--473.

\bibitem{p} I.
Penkov, Characters of typical irreducible finite-dimensional $\mathfrak{q}(n)$-modules (Russian), \emph{Funktsional. Anal. i Prilozhen.} \textbf{20} (1986), no. 1, 37--45, 96.

\bibitem{ps} I. Penkov and V. Serganova, Characters of finite-dimensional irreducible $\mathfrak{q}(n)$-modules, \emph{Lett. Math. Phys.} \textbf{40} (1997), no. 2, 147--158.

\bibitem{ps2} \bysame, Characters of irreducible $G$-modules and cohomology of $G/P$ for the Lie supergroup $G=Q(N)$, Algebraic geometry, 7, \emph{J. Math. Sci. (New York)} \textbf{84} (1997), no. 5, 1382--1412.

\bibitem{ram} A. Ram, Skew shape representations are irreducible, in: Combinatorial and Geometric Representation Theory (Seoul, 2001), \emph{Contemp. Math.} \textbf{325}, Amer. Math. Soc., Providence, RI, 2003, 161--189.

\bibitem{r} J. D. Rogawski, On modules over the Hecke algebra of a $p$-adic group, \emph{Invent. Math.} \textbf{79} (1985), no. 3, 443--465.

\bibitem{ro1} M. Rosso, Groupes quantiques et algèbres de battage quantiques, \emph{C. R. Acad. Sci. Paris} \textbf{320} (1995), 145--148.

\bibitem{ro2} \bysame, Quantum groups and quantum shuffles, \emph{Invent. Math.} \textbf{133} (1998), 399--416.

\bibitem{ro3} \bysame, Lyndon bases and the multiplicative formula for $R$-matrices, preprint, 2002.

\bibitem{rq} R. Rouquier, 2-Kac-Moody algebras, arXiv:0812.5023.

\bibitem{s} A. N. Sergeev, Tensor algebra of the identity representation as a module over the Lie superalgebras $GL(n,m)$ and $Q(n)$, \emph{Math. USSR Sbornik} \textbf{51} (1985), 419--427.

\bibitem{s2} \bysame, The Howe duality and the projective representations of symmetric groups, \emph{Represent.
Theory} \textbf{3} (1999), 416--434.

\bibitem{stem} J. R. Stembridge, Shifted tableaux and the projective representations of symmetric groups, \emph{Adv. Math.} \textbf{74} (1989), no. 1, 87--134.

\bibitem{su1} T. Suzuki, Rogawski's conjecture on the Jantzen filtration for the degenerate affine Hecke algebra of type $A$, \emph{Represent. Theory} \textbf{2} (1998), 393--409.

\bibitem{su2} \bysame, Representations of degenerate affine Hecke algebra and $\mathfrak{gl}_n$, in: Combinatorial Methods in Representation Theory, Adv. Stud. Pure Math., 28, Kinokuniya, Tokyo, 2000, 343--372.

\bibitem{sv} T. Suzuki and M. Vazirani, Tableaux on periodic skew diagrams and irreducible representations of the double affine Hecke algebra of type A, \emph{Int. Math. Res. Not.} (2005), no. 27, 1621--1656.

\bibitem{v} M. Vazirani, Irreducible Modules over the Affine Hecke Algebra: A Strong Multiplicity One Result, Ph.D. thesis, UC Berkeley, 1999.

\bibitem{wan} J. Wan, Completely splittable representations of affine Hecke-Clifford algebras, in preparation.

\bibitem{w} W. Wang, Spin Hecke algebras of finite and affine types, \emph{Adv. Math.} \textbf{212} (2007), 723--748.

\bibitem{wz} W. Wang and L. Zhao, Representations of Lie superalgebras in prime characteristic II: the queer series, preprint.

\bibitem{z} A. Zelevinsky, Induced representations of reductive $p$-adic groups II, \emph{Ann. Sci. \'E.N.S.} \textbf{13} (1980), 165--210.

\end{thebibliography}

\end{document}
\begin{document}

\maketitle

\begin{abstract}
Let \({\cal A}\) and \({\cal B}\) be two first order structures of the same vocabulary. We shall consider the {\em Ehrenfeucht-Fra\"\i ss\'e-game of length \(\omega_1\) of \({\cal A}\) and \({\cal B}\)}, which we denote by \(\EFG{\omega_1}\). This game is like the ordinary Ehrenfeucht-Fra\"\i ss\'e-game of \(L_{\omega\omega}\) except that there are \(\omega_1\) moves. It is clear that \(\EFG{\omega_1}\) is determined if \({\cal A}\) and \({\cal B}\) are of cardinality \(\le\aleph_1\). We prove the following results:
\begin{atheorem}
If V=L, then there are models \({\cal A}\) and \({\cal B}\) of cardinality \(\aleph_2\) such that the game \(\EFG{\omega_1}\) is non-determined.
\end{atheorem}
\begin{atheorem}
If it is consistent that there is a measurable cardinal, then it is consistent that \(\EFG{\omega_1}\) is determined for all \({\cal A}\) and \({\cal B}\) of cardinality \(\le\aleph_2\).
\end{atheorem}
\begin{atheorem}
For any \(\kappa\ge\aleph_3\) there are \({\cal A}\) and \({\cal B}\) of cardinality \(\kappa\) such that the game \(\EFG{\omega_1}\) is non-determined.
\end{atheorem}
\end{abstract}

\section{Introduction.}

Let \({\cal A}\) and \({\cal B}\) be two first order structures of the same vocabulary \(L\). We denote the domains of \({\cal A}\) and \({\cal B}\) by \(A\) and \(B\) respectively. All vocabularies are assumed to be relational. The {\em Ehrenfeucht-Fra\"\i ss\'e-game of length \(\gamma\) of \({\cal A}\) and \({\cal B}\)}, denoted by \(\EFG{\gamma}\), is defined as follows: There are two players called \(\forall\) and \(\exists\). First \(\forall\) plays \(x_0\) and then \(\exists\) plays \(y_0\). After this \(\forall\) plays \(x_1\), and \(\exists\) plays \(y_1\), and so on. If \(\langle(x_{\beta},y_{\beta}):\beta<\alpha\rangle\) has been played and \(\alpha<\gamma\), then \(\forall\) plays \(x_{\alpha}\), after which \(\exists\) plays \(y_{\alpha}\).
Eventually a sequence \(\langle(x_{\beta},y_{\beta}):\beta<\gamma\rangle\) has been played. The rules of the game say that both players have to play elements of \(A\cup B\). Moreover, if \(\forall\) plays his \(x_{\beta}\) in \(A\) (\(B\)), then \(\exists\) has to play his \(y_{\beta}\) in \(B\) (\(A\)). Thus the sequence \(\langle(x_{\beta},y_{\beta}):\beta<\gamma\rangle\) determines a relation \(\pi\subseteq A\times B\). Player \(\exists\) wins this round of the game if \(\pi\) is a partial isomorphism. Otherwise \(\forall\) wins. The notion of winning strategy is defined in the usual manner. We say that a player {\em wins} \(\EFG{\gamma}\) if he has a winning strategy in \(\EFG{\gamma}\). Recall that
\begin{eqnarray*}
{\cal A}\equiv_{\omega\omega}{\cal B} &\iff &\forall n<\omega(\exists\mbox{ wins }\EFG{n})\\
{\cal A}\equiv_{\infty\omega}{\cal B} &\iff &\exists\mbox{ wins }\EFG{\omega}.
\end{eqnarray*}
In particular, \(\EFG{\gamma}\) is determined for \(\gamma\le\omega\). The question whether \(\EFG{\gamma}\) is determined for \(\gamma>\omega\) is the subject of this paper. We shall concentrate on the case \(\gamma=\omega_1\). The notion
\begin{equation}
\label{IIwins}
\exists\mbox{ wins }\EFG{\gamma}
\end{equation}
can be viewed as a natural generalization of \({\cal A}\equiv_{\infty\omega}{\cal B}\). The latter implies isomorphism for countable models. Likewise (\ref{IIwins}) implies isomorphism for models of cardinality \(|\gamma|\):
\begin{proposition}
\label{first}
Suppose \({\cal A}\) and \({\cal B}\) have cardinality \(\le\kappa\). Then \(\EFG{\kappa}\) is determined: \(\exists\) wins if \({\cal A}\cong{\cal B}\), and \(\forall\) wins if \({\cal A}\not\cong{\cal B}\).
\end{proposition}
\noindent {\bf Proof.}\hspace{2mm} If \(f:{\cal A}\cong{\cal B}\), then the winning strategy of \(\exists\) in \(\EFG{\kappa}\) is to play in such a way that the resulting \(\pi\) satisfies \(\pi\subseteq f\).
On the other hand, if \({\cal A}\not\cong{\cal B}\), then the winning strategy of \(\forall\) is to systematically enumerate \(A\cup B\) so that the final \(\pi\) will satisfy \(A=\mbox{\rm dom}(\pi)\) and \(B=\mbox{\rm rng}(\pi)\). $\Box$\par

For models of arbitrary cardinality we have the following simple but useful criterion of (\ref{IIwins}), namely in the terminology of \cite{NS} that they are ``potentially isomorphic''. We use \(Col(\lambda,\kappa)\) to denote the notion of forcing which collapses \(|\lambda|\) to \(\kappa\) (with conditions of cardinality less than $\kappa$).

\begin{proposition}
\label{coll}
Suppose \({\cal A}\) and \({\cal B}\) have cardinality \(\le\lambda\) and \(\kappa\) is regular. Player \(\exists\) wins \(\EFG{\kappa}\) if and only if \(\Vdash_{Col(\lambda,\kappa)}{\cal A}\cong{\cal B}\).
\end{proposition}
\noindent {\bf Proof.}\hspace{2mm} Suppose \(\tau\) is a winning strategy of \(\exists\) in \(\EFG{\kappa}\). Since \(Col(\lambda,\kappa)\) is \(<\!\kappa\)--closed,
\[\Vdash_{Col(\lambda,\kappa)}``\tau\mbox{ is a winning strategy of }\exists\mbox{ in } G_{\kappa}({\cal A},{\cal B})".\]
Hence \(\Vdash_{Col(\lambda,\kappa)}{\cal A}\cong{\cal B}\) by Proposition~\ref{first}. Suppose then \(p\Vdash \tilde{f}:{\cal A}\cong{\cal B}\) for some \(p\in Col(\lambda,\kappa)\). While the game \(\EFG{\kappa}\) is played, \(\exists\) keeps extending the condition \(p\) further and further; at limit stages the conditions played so far have a lower bound because \(Col(\lambda,\kappa)\) is \(<\!\kappa\)--closed. Suppose he has extended \(p\) to \(q\) and \(\forall\) has played \(x\in A\). Then \(\exists\) finds \(r\le q\) and \(y\in B\) with \(r\Vdash \tilde{f}(x)=y\). Using this simple strategy \(\exists\) wins. $\Box$\par

\begin{proposition}
\label{stable}
Suppose \(T\) is an $\omega$-stable first order theory with NDOP. Then \(\EFG{\omega_1}\) is determined for all models \({\cal A}\) of \(T\) and all models \({\cal B}\).
\end{proposition}
\noindent {\bf Proof.}\hspace{2mm} Suppose \({\cal A}\) is a model of \(T\).
If \({\cal B}\) is not \(L_{\infty\omega_1}\)-equivalent to \({\cal A}\), then \(\forall\) wins \(\EFG{\omega_1}\) easily. So let us suppose \({\cal A}\equiv_{\infty\omega_1}{\cal B}\). We may assume \(A\) and \(B\) are of cardinality \(\ge\aleph_1\). If we collapse \(|A|\) and \(|B|\) to \(\aleph_1\), then \(T\) will remain $\omega$-stable with NDOP, and \({\cal A}\) and \({\cal B}\) will remain \(L_{\infty\omega_1}\)-equivalent. So \({\cal A}\) and \({\cal B}\) become isomorphic by \cite[Chapter XIII, Section 1]{Shelah}. Now Proposition~\ref{coll} implies that \(\exists\) wins \(\EFG{\omega_1}\). $\Box$\par

Hyttinen \cite{Hyt} showed that \(\EFG{\gamma}\) may be non-determined for all \(\gamma\) with \(\omega<\gamma<\omega_1\) and asked whether \(\EFG{\omega_1}\) may be non-determined. Our results show that \(\EFG{\omega_1}\) may be non-determined for \({\cal A}\) and \({\cal B}\) of cardinality \(\aleph_3\) (Theorem~\ref{aleph_3}), but for models of cardinality \(\aleph_2\) the answer is more complicated. Let \(F(\omega_1)\) be the free abelian group of cardinality \(\aleph_1\). Using the combinatorial principle \(\Box_{\omega_1}\) we construct an abelian group \(G\) of cardinality \(\aleph_2\) such that \({\cal G}_{\omega_1}(F(\omega_1),G)\) is non-determined (Theorem~\ref{aleph_2}). On the other hand, we show that starting with a model with a measurable cardinal one can build a forcing extension in which \(\EFG{\omega_1}\) is determined for all models \({\cal A}\) and \({\cal B}\) of cardinality \(\le\aleph_2\) (Theorem~\ref{determined}). Thus the free abelian group \(F(\omega_1)\) has the remarkable property that the question
\begin{quote}
Is \({\cal G}_{\omega_1}(F(\omega_1),G)\) determined for all \(G\)?
\end{quote}
cannot be answered in ZFC alone. Proposition~\ref{stable} shows that no model of an \(\aleph_1\)-categorical first order theory can have this property. We follow Jech \cite{Jech} in set theoretic notation.
We use \(S^m_n\) to denote the set \(\{\alpha<\omega_m : \mbox{\rm cf}(\alpha)=\omega_n\}\). Closed and unbounded sets are called cub sets. A set of ordinals is {\em \(\lambda\)-closed} if it is closed under supremums of ascending \(\lambda\)-sequences \(\langle \alpha_i : i<\lambda\rangle\) of its elements. A subset of a cardinal is {\em \(\lambda\)-stationary} if it meets every \(\lambda\)-closed unbounded subset of the cardinal. The closure of a set \(A\) of ordinals in the order topology of ordinals is denoted by \(\overline{A}\). The free abelian group of cardinality \(\kappa\) is denoted by \(F(\kappa)\).

\section{A non-determined \({\cal G}_{\omega_1}(F(\omega_1),G)\) with \(G\) a group of cardinality \(\aleph_2\).}

In this section we use \(\Box_{\omega_1}\) to construct a group \(G\) of cardinality \(\aleph_2\) such that the game \({\cal G}_{\omega_1}(F(\omega_1),G)\) is non-determined (Theorem~\ref{aleph_2}). For background on almost free groups the reader is referred to \cite{EM}. However, our presentation does not depend on special knowledge of almost free groups. All groups below are assumed to be abelian.

By \(\Box_{\omega_1}\) we mean the principle which says that there is a sequence \(\langle C_{\alpha} : \alpha<\omega_2, \alpha=\cup\alpha\rangle\) such that
\begin{enumerate}
\item \(C_{\alpha}\) is a cub subset of \(\alpha\).
\item If \(\mbox{\rm cf}(\alpha)=\omega\), then \(|C_{\alpha}|=\omega\).
\item If \(\gamma\) is a limit point of \(C_{\alpha}\), then \(C_{\gamma}=C_{\alpha}\cap\gamma\).
\end{enumerate}
Recall that \(\Box_{\omega_1}\) follows from \(V=L\) by a result of R. Jensen. For a sequence of sets \(C_{\alpha}\) as above we can let \(E_{\beta}=\{\alpha\in S^2_0 : \mbox{ the order type of }C_{\alpha}\mbox{ is }\beta\}\). For some \(\beta<\omega_1\) the set \(E_{\beta}\) has to be stationary, since \(S^2_0\) is stationary and is the union of the \(\omega_1\) many sets \(E_{\beta}\). Let us use \(E\) to denote this \(E_{\beta}\).
Then \(E\) is a so called {\em non-reflecting} stationary set, i.e., if \(\gamma=\cup\gamma\), then \(E\cap\gamma\) is non-stationary on \(\gamma\). Indeed, then some final segment \(D_{\gamma}\) of the set of limit points of \(C_{\gamma}\) is a cub subset of \(\gamma\) disjoint from \(E\). Moreover, \(\mbox{\rm cf}(\alpha)=\omega\) for all \(\alpha\in E\).

\begin{theorem}
\label{aleph_2}
Assuming \(\Box_{\omega_1}\), there is a group \(G\) of cardinality \(\aleph_2\) such that the game \({\cal G}_{\omega_1}(F(\omega_1),G)\) is non-determined.
\end{theorem}
\noindent {\bf Proof.}\hspace{2mm} Let \({\open Z}^{\omega_2}\) denote the direct product of \(\omega_2\) copies of the additive group \({\open Z}\) of the integers. Let \(x_{\alpha}\) be the element of \({\open Z}^{\omega_2}\) which is \(0\) on coordinates \(\not=\alpha\) and \(1\) on the coordinate \(\alpha\). Let us fix for each \(\delta\in S^2_0\) an ascending cofinal sequence \(\eta_{\delta}:\omega\to\delta\). For such \(\delta\), let
\[z_{\delta}=\sum_{n=0}^{\infty}2^nx_{\eta_{\delta}(n)}.\]
Let \(\langle C_{\alpha} : \alpha=\cup\alpha<\omega_2\rangle\), \(\langle D_{\alpha} : \alpha=\cup\alpha<\omega_2\rangle\) and \(E=E_{\beta}\) be obtained from \(\Box_{\omega_1}\) as above. We are ready to define the groups we need for the proof: Let \(G\) be the smallest pure subgroup of \({\open Z}^{\omega_2}\) which contains \(x_{\alpha}\) for \(\alpha<\omega_2\) and \(z_{\delta}\) for \(\delta\in E\); let \(G_{\alpha}\) be the smallest pure subgroup of \({\open Z}^{\omega_2}\) which contains \(x_{\gamma}\) for \(\gamma<\alpha\) and \(z_{\delta}\) for \(\delta\in E\cap\alpha\); let \(F\) \((=F(\omega_2))\) be the subgroup of \({\open Z}^{\omega_2}\) generated freely by \(x_{\alpha}\) for \(\alpha<\omega_2\); and finally, let \(F_{\alpha}\) be the subgroup of \({\open Z}^{\omega_2}\) generated freely by \(x_{\gamma}\) for \(\gamma<\alpha\).
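The coefficients \(2^n\) in the definition of \(z_{\delta}\) are what drive the non-freeness arguments below; the following routine computation, recorded here for convenience, makes this explicit. For every \(n<\omega\),
\[
z_{\delta}-\sum_{m=0}^{n-1}2^{m}x_{\eta_{\delta}(m)}
=2^{n}\sum_{m=n}^{\infty}2^{m-n}x_{\eta_{\delta}(m)},
\]
so in any quotient of a group containing \(z_{\delta}\) by a subgroup \(B\) with \(x_{\eta_{\delta}(m)}\in B\) for all \(m<\omega\), the coset \(z_{\delta}+B\) is divisible by \(2^{n}\) for every \(n\).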
The properties we shall want of $G_\alpha$ are standard, but for the sake of completeness we sketch proofs. We need that each $G_\alpha$ is free and that for any $\beta \notin E$ any free basis of $G_\beta$ can be extended to a free basis of $G_\alpha$ for all $\alpha > \beta$. The proof is by induction on $\alpha$. For limit ordinals we use the fact that $E$ is non-reflecting. The case of successors of ordinals not in $E$ is also easy. Assume now that $\delta \in E$ and the induction hypothesis has been verified up to $\delta$. By the induction hypothesis, for any $\beta < \delta$ such that $\beta \notin E$ there is $n_0$ so that
$$G_\delta = G_\beta \oplus H \oplus K$$
where $K$ is the group freely generated by $\{x_{\eta_\delta(n)} : n_0 \leq n\}$ and $x_{\eta_\delta(m)} \in G_\beta$ for all $m < n_0$. Then
$$G_{\delta +1} = G_\beta \oplus H \oplus K'$$
where $K'$ is freely generated by
$$\Bigl\{\sum_{m=n}^{\infty}2^{m - n}x_{\eta_{\delta}(m)} : n_0 \leq n\Bigr\}.$$
On the other hand, if \(\delta\in E\) and \(\{x_{\eta_{\delta}(n)} : n<\omega\}\subseteq B\), where \(B\) is a subgroup of \(G\) such that $z_\delta \notin B$, then \(G/B\) is non-free, as \(z_{\delta}+B\) is infinitely divisible by \(2\) in \(G/B\).

\begin{claim}
\(\exists\) does not win \({\cal G}_{\omega_1}(F,G)\).
\end{claim}
Suppose \(\tau\) is a winning strategy of \(\exists\). Let \(\alpha\in E\) be such that the pair \((G_{\alpha},F_{\alpha})\) is closed under the first \(\omega\) moves of \(\tau\), that is, if \(\forall\) plays his first \(\omega\) moves inside \(G_{\alpha}\cup F_{\alpha}\), then \(\tau\) orders \(\exists\) to do the same. We shall play \({\cal G}_{\omega_1}(F,G)\), pointing out the moves of \(\forall\) and letting \(\tau\) determine the moves of \(\exists\). On his move number \(2n\), \(\forall\) plays the element \(x_{\eta_{\alpha}(n)}\) of \(G_{\alpha}\). On his move number \(2n+1\), \(\forall\) plays some element of \(F_{\alpha}\).
Player \(\forall\) plays his moves in \(F_{\alpha}\) in such a way that during the first \(\omega\) moves eventually some countable direct summand \(K\) of \(F_{\alpha}\) as well as some countable \(B\subseteq G_{\alpha}\) are enumerated. Let \(J\) be the smallest pure subgroup of \(G\) containing \(B\cup\{z_{\alpha}\}\). During the next \(\omega\) moves of \({\cal G}_{\omega_1}(F,G)\) player \(\forall\) enumerates \(J\) and \(\exists\) responds by enumerating some \(H\subseteq F\). Since \(\tau\) is a winning strategy, \(H\) has to be a subgroup of \(F\). But now \(H/K\) is free, whereas \(J/B\) is non-free, so \(\forall\) will win the game, a contradiction.

\begin{claim}
\(\forall\) does not win \({\cal G}_{\omega_1}(F,G)\).
\end{claim}
Suppose \(\tau\) is a winning strategy of \(\forall\). If we were willing to use CH, we could just take \(\alpha\) of cofinality \(\omega_1\) such that \((F_{\alpha},G_{\alpha})\) is closed under \(\tau\), and derive a contradiction from the fact that \(F_{\alpha}\cong G_{\alpha}\). However, since we do not want to assume CH, we have to appeal to a longer argument. Let \(\kappa=(2^{\omega})^{++}\). Let \({\cal M}\) be the expansion of \(\langle H(\kappa),\in\rangle\) obtained by adding the following structure to it:
\begin{description}
\item[(H1)] The function \(\delta\mapsto\eta_{\delta}\).
\item[(H2)] The function \(\delta\mapsto z_{\delta}\).
\item[(H3)] The function \(\alpha\mapsto C_{\alpha}\).
\item[(H4)] A well-ordering \(<\) of the universe.
\item[(H5)] The winning strategy \(\tau\).
\end{description}
Let \({\cal N}=\langle N,\in,\ldots \rangle\) be an elementary submodel of \({\cal M}\) such that \(\omega_1\subseteq N\) and \(N\cap\omega_2\) is an ordinal \(\alpha\) of cofinality \(\omega_1\). Let \(D_{\alpha}=\{\beta_i : i<\omega_1\}\) in ascending order. Since \(C_{\beta_i}=C_{\alpha}\cap\beta_i\), every initial segment of \(C_{\alpha}\) is in \(N\). By elementarity, \(G_{\beta_i}\in N\) for all \(i<\omega_1\).
Let \(\phi\) be an isomorphism \(G_{\alpha}\to F_{\alpha}\) obtained as follows: \(\phi\) restricted to \(G_{\beta_0}\) is the \(<\)-least isomorphism between the free groups \(G_{\beta_0}\) and \(F_{\beta_0}\). If \(\phi\) is defined on all \(G_{\beta_j}\), \(j<i\), then \(\phi\) is defined on \(G_{\beta_i}\) as the \(<\)-least extension of \(\bigcup_{j<i}(\phi\mathord{\restriction} G_{\beta_j})\) to an isomorphism between \(G_{\beta_i}\) and \(F_{\beta_i}\). Recall that by our choice of \(D_{\alpha}\) the quotient \(G_{\beta_{i+1}}/G_{\beta_i}\) is free, so such extensions really exist. We derive a contradiction by showing that \(\exists\) can play \(\phi\) against \(\tau\) for the whole duration of the game \(\EFG{\omega_1}\). To achieve this we have to show that, when \(\exists\) plays his canonical strategy based on \(\phi\), the strategy \(\tau\) of \(\forall\) directs \(\forall\) to go on playing elements which are in \(N\), that is, elements of \(G_{\alpha}\cup F_{\alpha}\). Suppose a sequence \(s=\langle (x_{\gamma},y_{\gamma}) : \gamma<\mu\rangle\), \(\mu<\omega_1\), has been played. It suffices to show that \(s\in N\). Choose \(\beta_i\) so that the elements of \(s\) are in \(G_{\beta_i}\cup F_{\beta_i}\). Now \(s\) is uniquely determined by \(\phi\mathord{\restriction} G_{\beta_i}\) and \(\tau\). Note that because \(C_{\beta_i}=C_{\alpha}\cap\beta_i\), \(\phi\mathord{\restriction} G_{\beta_i}\) can be defined inside \(N\) similarly as \(\phi\) was defined above, using \(C_{\beta_i}\) instead of \(C_{\alpha}\). Thus \(s\in N\) and we are done. We have proved that \({\cal G}_{\omega_1}(F,G)\) is nondetermined. This clearly implies that \({\cal G}_{\omega_1}(F(\omega_1),G)\) is nondetermined. $\Box$\par {\bf Remark.} R. Jensen \cite[p. 286]{Jensen} showed that if \(\Box_{\omega_1}\) fails, then \(\omega_2\) is Mahlo in \(L\). Therefore, if \(\EFG{\omega_1}\) is determined for all almost free groups \({\cal A}\) and \({\cal B}\) of cardinality \(\aleph_2\), then \(\omega_2\) is Mahlo in \(L\).
If we start with \(\Box_{\kappa}\), we get an almost free group \(A\) of cardinality \(\kappa^+\) such that \({\cal G}_{\omega_1}(F(\omega_1),A)\) is nondetermined. \section{\({\cal G}_{\omega_1}(F(\omega_1),G)\) can be determined for all \(G\).} In this section all groups are assumed to be abelian. It is easy to see that \(\exists\) wins \({\cal G}_{\omega_1}(F(\omega_1),G)\) for any uncountable free group \(G\), so in this exposition \(F(\omega_1)\) is a suitable representative of all free groups. In the study of the determinacy of \({\cal G}_{\omega_1}(F(\omega_1),{\cal A})\) it suffices to study \(\aleph_2\)-free groups \({\cal A}\), since for other \({\cal A}\) player \(\forall\) easily wins the game. Starting from a model with a Mahlo cardinal we construct a forcing extension in which \({\cal G}_{\omega_1}(F(\omega_1),G)\) is determined whenever \(G\) is a group of cardinality \(\aleph_2\). This can be extended to groups \(G\) of any cardinality, if we start with a supercompact cardinal. In the proofs of the next results we shall make use of {\em stationary logic} \(L(\mbox{\sl aa\,})\). For the definition and basic facts about \(L(\mbox{\sl aa\,})\) the reader is referred to \cite{BKM}. This logic has a new quantifier \(\mbox{\sl aa\,} s\) quantifying over variables \(s\) ranging over countable subsets of the universe. A cub set of such \(s\) is any set which contains a superset of any countable subset of the universe and which is closed under unions of countable chains. The semantics of \(\mbox{\sl aa\,} s\) is defined as follows: \[\mbox{\sl aa\,} s\,\phi(s,\ldots )\iff \phi(s,\ldots ) \mbox{ holds for a cub set of }s.\] Note that a group of cardinality \(\aleph_1\) is free if and only if it satisfies \begin{equation} \label{aa} \mbox{\sl aa\,} s\:\mbox{\sl aa\,} s'(s\subseteq s'\rightarrow s'/s \mbox{ is free}). \end{equation} \begin{proposition} \label{aa-char} Let \(G\) be a group.
Then the following conditions are equivalent: \begin{description} \item[(1)] \(\exists\) wins \({\cal G}_{\omega_1}(F(\omega_1),G)\). \item[(2)] \(G\) satisfies (\ref{aa}). \item[(3)] \(G\) is the union of a continuous chain \(\langle G_{\alpha} : \alpha<\omega_2\rangle\) of free subgroups with \(G_{\alpha+1}/G_{\alpha}\) \(\aleph_1\)-free for all \(\alpha<\omega_2\). \end{description} \end{proposition} \noindent {\bf Proof.}\hspace{2mm} (1) implies (2): Suppose \(\exists\) wins \({\cal G}_{\omega_1}(F(\omega_1),G)\). By Proposition~\ref{coll} we have \(\Vdash_{Col(|G|,\omega_1)}``G\mbox{ is free}."\) Using the countable completeness of \(Col(|G|,\omega_1)\) it is now easy to construct a cub set \(S\) of countable subgroups of \(G\) such that if \(A\in S\), then for all \(B\in S\) with \(A\subseteq B\) we have \(B/A\) free. Thus \(G\) satisfies (\ref{aa}). (2) implies (3) quite trivially. (3) implies (1): Suppose a continuous chain as in (3) exists. If we collapse \(|G|\) to \(\aleph_1\), then in the extension the chain has length \(<\omega_2\). Now we use Theorem 1 of \cite{Hill}: \begin{quotation} \noindent If a group \(A\) is the union of a continuous chain of \(<\omega_2\) free subgroups \(\{A_{\alpha}:\alpha<\gamma\}\) of cardinality \(\le\aleph_1\) such that each \(A_{\alpha+1}/A_{\alpha}\) is \(\aleph_1\)-free, then \(A\) is free. \end{quotation} Thus \(G\) is free in the extension and (1) follows from Proposition~\ref{coll}. $\Box$\par Let us consider the following principle: \begin{description} \item[\((*)\)] For every stationary \(E\subseteq S^2_0\) and every sequence of countable sets \(a_{\alpha}\subseteq\alpha\), \(\alpha\in E\), such that each \(a_\alpha\) is cofinal in \(\alpha\) and of order type \(\omega\), there is a closed \(C\subseteq\omega_2\) of order type \(\omega_1\) such that \(\{\alpha\in E : a_{\alpha}\setminus C \mbox{ is finite}\}\) is stationary in \(C\).
\end{description} \begin{lemma} The principle \((*)\) implies that \({\cal G}_{\omega_1}(F(\omega_1),G)\) is determined for all groups \(G\) of cardinality \(\aleph_2\). \end{lemma} \noindent {\bf Proof.}\hspace{2mm} Suppose \(G\) is a group of cardinality \(\aleph_2\). We may assume the domain of \(G\) is \(\omega_2\). Let us assume \(G\) is \(\aleph_2\)-free, as otherwise \(\forall\) easily wins. If we prove that \(G\) satisfies (\ref{aa}), then Proposition~\ref{aa-char} implies that \(\exists\) wins \({\cal G}_{\omega_1}(F(\omega_1),G)\). To prove (\ref{aa}), assume the contrary. By Proposition~\ref{aa-char} we may assume that \(G\) can be expressed as the union of a continuous chain \(\langle G_{\alpha} : \alpha<\omega_2\rangle\) of free groups with \(G_{\alpha +1}/G_{\alpha}\) non-\(\aleph_1\)-free for \(\alpha\in E\), where \(E\subseteq\omega_2\) is stationary. By Fodor's Lemma, we may assume \(E\subseteq S^2_0\). Also we may assume that for all \(\alpha\), every ordinal in \(G_{\alpha +1} \setminus G_\alpha\) is greater than every ordinal in \(G_\alpha\). Finally, by intersecting with a closed unbounded set, we may assume that for all \(\alpha \in E\) the set underlying \(G_\alpha\) is \(\alpha\). Choose for each \(\alpha\in E\) some countable subgroup \(b_{\alpha}\) of \(G_{\alpha +1}\) with \((b_{\alpha}+G_{\alpha})/G_{\alpha}\) non-free. Let \(c_{\alpha}=b_{\alpha}\cap G_{\alpha}\). We will choose \(a_\alpha\) so that any final segment of it generates a subgroup containing \(c_\alpha\). Enumerate \(c_\alpha\) as \(\{g_n : n < \omega\}\) in such a way that each element is enumerated infinitely often. Choose an increasing sequence \((\alpha_n : n < \omega)\) cofinal in \(\alpha\) so that for all \(n\), \(g_n \in G_{\alpha_n}\). Finally, for each \(n\), choose \(h_n \in G_{\alpha_n +1} \setminus G_{\alpha_n}\). Let \(a_\alpha = \{h_n : n < \omega\} \cup \{h_n + g_n : n < \omega\}\).
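The role of pairing each \(h_n\) with \(h_n+g_n\) can be spelled out by a one-line computation: in any subgroup \(H\) of \(G\) containing both \(h_n\) and \(h_n+g_n\) we have \[g_n=(h_n+g_n)-h_n\in H,\] and since each element of \(c_{\alpha}\) is enumerated as \(g_n\) for infinitely many \(n\), a subgroup containing all but finitely many elements of \(a_{\alpha}\) contains every element of \(c_{\alpha}\).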
It is now easy to check that \(a_\alpha\) is a set of order type \(\omega\) which is cofinal in \(\alpha\), and that any subgroup of \(G\) which contains all but finitely many of the elements of \(a_\alpha\) contains \(c_\alpha\). By \((*)\) there is a closed \(C\) of order type \(\omega_1\) such that \(\{\alpha\in E : a_{\alpha}\setminus C \mbox{ is finite}\}\) is stationary in \(C\). Let \(D=\langle C \cup \bigcup_{\alpha\in C}b_{\alpha}\rangle\). Since \(|D|\le\aleph_1\), \(D\) is free. For any \(\alpha\in C\), let $$D_\alpha = \langle (C \cap \alpha) \cup \bigcup_{\beta \in C \cap \alpha} b_\beta\rangle.$$ Note that \(D = \bigcup_{\alpha \in C} D_\alpha\), each \(D_\alpha\) is countable, and for each limit point \(\delta\) of \(C\), \(D_\delta = \bigcup_{\alpha \in C \cap \delta} D_\alpha\). Hence there is an \(\alpha\in C\cap E\) such that \(a_{\alpha}\setminus C\) is finite and \(D/D_{\alpha}\) is free. Hence \((b_{\alpha} + D_\alpha)/D_\alpha\) is free. But \[(b_{\alpha} + D_\alpha)/D_\alpha \cong b_\alpha/(b_{\alpha}\cap D_{\alpha})=b_{\alpha}/(b_\alpha \cap G_{\alpha}),\] which is not free, a contradiction. $\Box$\par For the next theorem we need a lemma from \cite{GiSh}. A proof is included for the convenience of the reader. \begin{lemma} \label{stat} \cite{GiSh} Suppose \(\lambda\) is a regular cardinal and \({\open Q}\) is a notion of forcing which satisfies the \(\lambda\)-c.c. Suppose \({\cal I}\) is a normal \(\lambda\)-complete ideal on \(\lambda\) and \({\cal I}^+=\{S\subseteq\lambda : S\not\in {\cal I}\}\). For all sets \(S\in{\cal I}^+\) and sequences of conditions \(\langle p_\alpha : \alpha \in S\rangle\), there is a set \(C\) with \(\lambda\setminus C\in{\cal I}\) so that for all \(\alpha \in C \cap S\), \[p_\alpha \Vdash_{{\open Q}} ``\{\beta : p_\beta \in \til{G}\}\in {\cal J}^+ \mbox{, where } {\cal J} \mbox{ is the ideal generated by } {\cal I}".\] \end{lemma} \noindent {\bf Proof.}\hspace{2mm} Suppose the lemma is false.
So there is an \({\cal I}\)-positive set \(S' \subseteq S\) such that for all \(\alpha \in S'\) there are an extension \(r_\alpha\) of \(p_\alpha\) and a set \(I_\alpha \in {\cal I}\) (note: \(I_\alpha\) is in the ground model) so that \[r_\alpha \Vdash \{\beta : p_{\beta}\in\til{G}\} \subseteq I_\alpha.\] Let \(I\) be the diagonal union of \(\{I_\alpha : \alpha \in S'\}\). Suppose now that \(\alpha < \beta\) and \(\alpha, \beta \in S' \setminus I\). Since \(\beta \notin I\) and \(\alpha<\beta\), we have \(\beta\notin I_\alpha\), so \(r_\alpha \Vdash p_{\beta}\not\in\til{G}\). Hence \(r_\alpha \Vdash r_\beta \not\in \til{G}\). So \(r_\alpha\) and \(r_\beta\) are incompatible. Hence \(\{r_\alpha : \alpha \in S' \setminus I\}\) is an antichain which, since \(S'\) is \({\cal I}\)-positive, is of cardinality \(\lambda\). This is a contradiction. $\Box$\par \begin{theorem} \label{Mahlo} Assuming the consistency of a Mahlo cardinal, it is consistent that \((*)\) holds and hence that \({\cal G}_{\omega_1}(F(\omega_1),G)\) is determined for all groups \(G\) of cardinality \(\aleph_2\). \end{theorem} \noindent {\bf Proof.}\hspace{2mm} By a result of Harrington and Shelah \cite{HaSh} we may start with a Mahlo cardinal \(\kappa\) in which every stationary set of ordinals of cofinality \(\omega\) reflects, that is, if \(S\subseteq\kappa\) is stationary and \(\mbox{\rm cf} (\alpha)=\omega\) for all \(\alpha\in S\), then \(S\cap\lambda\) is stationary in \(\lambda\) for some inaccessible \(\lambda<\kappa\). For any inaccessible \(\lambda\) let \({\open P}_{\lambda}\) be the Levy forcing for collapsing \(\lambda\) to \(\omega_2\). The conditions of \({\open P}_{\lambda}\) are countable functions \(f:\lambda\times\omega_1\to\lambda\) such that \(f(\alpha,\beta)<\alpha\) for all \(\alpha\) and \(\beta\), and each \(f\) is increasing and continuous in the second coordinate. It is well known that \({\open P}_{\lambda}\) is countably closed and satisfies the \(\lambda\)-chain condition \cite[p. 191]{Jech}.
Let \({\open P}={\open P}_{\kappa}\). Suppose \(p\in{\open P}\) and \[p\Vdash``\til{E}\subseteq S^2_0 \mbox{ is stationary and } \forall\alpha\in\til{E}\,(\til{a}_{\alpha}\subseteq\alpha \mbox{ is cofinal in $\alpha$ and of order type $\omega$})."\] Let \[S=\{\alpha<\kappa : \exists q\le p\,(q\Vdash \alpha\in\til{E})\}.\] For any \(\alpha\in S\) let \(p_{\alpha}\le p\) be such that \(p_{\alpha}\Vdash\alpha\in\til{E}\). Since \({\open P}\) is countably closed, we can additionally require that for some countable \(a_{\alpha}\subseteq\alpha\) we have \(p_{\alpha}\Vdash\til{a}_{\alpha}=a_{\alpha}\). The set \(S\) is stationary in \(\kappa\), for if \(C\subseteq\kappa\) is cub, then \(p\Vdash C\cap\til{E}\not=\emptyset\), whence \(C\cap S\not=\emptyset\). Also \(\mbox{\rm cf} (\alpha)=\omega\) for \(\alpha\in S\). Let \(\lambda\) be inaccessible such that \(S\cap\lambda\) is stationary in \(\lambda\). We may choose \(\lambda\) in such a way that \(\alpha\in S\cap\lambda\) implies \(p_{\alpha}\in{\open P}_{\lambda}\). By Lemma~\ref{stat} there is a \(\delta\in S\cap\lambda\) such that \[p_{\delta}\Vdash_{{\open P}_{\lambda}}``\til{E}_1=\{\alpha<\lambda : p_{\alpha}\in\til{G}\} \mbox{ is stationary}."\] Let \({\open Q}\) be the set of conditions \(f\in {\open P}\) with \(\mbox{\rm dom} (f)\subseteq(\kappa\setminus\lambda)\times\omega_1\). Note that \({\open P}\cong{\open P}_{\lambda}\otimes{\open Q}\). Let \(G\) be \({\open P}\)-generic containing \(p_{\delta}\) and let \(G_{\lambda}=G\cap{\open P}_{\lambda}\) for any inaccessible \(\lambda\le\kappa\). Then \(G_{\lambda}\) is \({\open P}_{\lambda}\)-generic and the \(\omega_2\) of \(V[G_{\lambda}]\) is \(\lambda\). Let us work now in \(V[G_{\lambda}]\).
Thus \(\lambda\) is the current \(\omega_2\), \(E_1=\{\alpha<\lambda : p_{\alpha}\in G_{\lambda}\}\) is stationary, and we have the countable sets \(a_{\alpha}\subseteq\alpha\) for \(\alpha\in E_1\). Since \({\open Q}\) collapses \(\lambda\), there is a name \(\til{f}\) such that \[\Vdash_{{\open Q}}``\til{f}:\omega_1\to\lambda \mbox{ is continuous and cofinal}."\] More precisely, \(\til{f}\) is the name for the function \(f\) defined by \(f(\alpha) = \beta\) if and only if there is some \(g \in G\) so that \(g(\lambda, \alpha) = \beta\). Let \(\til{C}\) denote the range of \(\til{f}\). We shall prove the following statement: \noindent{\bf Claim:} \(\Vdash_{{\open Q}} \{\alpha\in E_1 : a_{\alpha}\setminus \til{C} \mbox{ is finite}\} \mbox{ is stationary in } \til{C}. \) Suppose \(q\in{\open Q}\) is such that \(q\Vdash``\til{D}\subseteq\omega_1 \mbox{ is a cub}."\) Let \({\cal M}\) be an appropriate expansion of \(\langle H(\kappa),\in\rangle\) and \(\langle {\cal N}_i : i<\lambda\rangle\), \({\cal N}_i=\langle N_i,\in,\ldots \rangle\), a sequence of elementary submodels of \({\cal M}\) such that: \begin{description} \item[(i)] Everything relevant is in \(N_0\). \item[(ii)] If \(\alpha_i=N_i\cap\lambda\), then \(\alpha_i<\alpha_j\) for \(i<j<\lambda\). \item[(iii)] \(N_{i+1}\) is closed under countable sequences. \item[(iv)] \(|N_i|=\omega_1\). \item[(v)] \(N_i=\bigcup_{j<i}N_j\) for \(i\) a limit ordinal. \end{description} Choose \(\gamma=\alpha_i\in E_1\) and let \(\langle i_n :n<\omega\rangle\) be a sequence of successor ordinals such that \(\gamma=\sup \{\alpha_{i_n}:n<\omega\}\). Let \(q_0\le q\) and \(\beta_0 \in\omega_1\) be such that \(q_0 , \beta_0 \in N_{i_0}\), \[q_0\Vdash ``\beta_0\in\til{D}"\] and \(q_0\) decides the value of \(\til{f}''\beta_0\) (which by elementarity is necessarily a subset of \(\alpha_{i_0}\)).
If \(q_n\) and \(\beta_n\) are defined, we choose \(q_{n+1}\le q_n\) and \(\beta_{n+1}\in\omega_1\) such that \(q_{n+1}, \beta_{n+1}\in N_{i_{n+1}}\), \[q_{n+1}\Vdash ``\beta_{n+1}\in\til{D} \mbox{ and } a_{\gamma}\cap(\alpha_{i_{n+1}}\setminus \alpha_{i_n})\subseteq\til{f}''\beta_{n+1}\subseteq \alpha_{i_{n+1}}"\] and \(q_{n+1}\) decides \(\til{f}''\beta_{n+1}\). Finally, let \(q_{\omega}=\bigcup\{q_n:n<\omega\}\) and \(\beta=\bigcup\{\beta_n:n<\omega\}\). Then \[q_{\omega}\Vdash``\beta\in\til{D} \mbox{ and } a_{\gamma}\setminus\til{f}''\beta \mbox{ is finite}."\] The claim, and thereby the theorem, is proved. $\Box$\par \begin{corollary} The statement that \(\EFG{\omega_1}\) is determined for every structure \({\cal A}\) of cardinality \(\aleph_2\) and every uncountable free group \({\cal B}\) is equiconsistent with the existence of a Mahlo cardinal. \end{corollary} \noindent{\bf Remark.} If \({\cal G}_{\omega_1}(F(\omega_1),A)\) is determined for all groups \(A\) of cardinality \(\kappa^+\), \(\kappa\) singular, then \(\Box_{\kappa}\) fails. This implies that the Covering Lemma fails for the Core Model, whence there is an inner model with a measurable cardinal. This shows that the conclusion of Theorem~\ref{Mahlo} cannot be strengthened to arbitrary \(G\). However, by starting with a larger cardinal we can make this extension: \begin{theorem} \label{succ} Assuming the consistency of a supercompact cardinal, it is consistent that \({\cal G}_{\omega_1}(F(\omega_1),G)\) is determined for all groups \(G\). \end{theorem} \noindent {\bf Proof.}\hspace{2mm} Let us assume that the stationary logic \(L_{\omega_1\omega}(\mbox{\sl aa\,})\) has the L\"owenheim-Skolem property down to \(\aleph_1\). This assumption is consistent relative to the consistency of a supercompact cardinal \cite{BD}. Let \(G\) be an arbitrary \(\aleph_2\)-free group. Let \(H\) be an \(L(\mbox{\sl aa\,})\)-elementary submodel of \(G\) of cardinality \(\aleph_1\).
Thus \(H\) is a free group. The group \(H\) satisfies the sentence (\ref{aa}), whence so does \(G\). Now the claim follows from Proposition~\ref{aa-char}. $\Box$\par \begin{corollary} Assuming the consistency of a supercompact cardinal, it is consistent that \(\EFG{\omega_1}\) is determined for every structure \({\cal A}\) and every uncountable free group \({\cal B}\). \end{corollary} \section{\(\EFG{\omega_1}\) can be determined for all \({\cal A}\) and \({\cal B}\) of cardinality \(\aleph_2\).} We prove the consistency of the statement that \(\EFG{\omega_1}\) is determined for all \({\cal A}\) and \({\cal B}\) of cardinality \(\le\aleph_2\), assuming the consistency of a measurable cardinal. Actually we make use of an assumption, which we call \(I^*(\omega)\), concerning stationary subsets of \(\omega_2\). This assumption is known to imply that \(\omega_2\) is measurable in an inner model. It follows from the previous section that some large cardinal axioms are needed to prove the stated determinacy. Let \(I^*(\omega)\) be the following assumption about \(\omega_1\)-stationary subsets of \(\omega_2\): \begin{description} \item[\(I^*(\omega)\)] Let \({\cal I}\) be the \(\omega_1\)-nonstationary ideal \(NS_{\omega_1}\) on \(\omega_2\). Then \({\cal I}^+\) has a \(\sigma\)-closed dense subset \(K\). \end{description} \noindent Hodges and Shelah \cite{HS} define a principle \(I(\omega)\), which is like \(I^*(\omega)\) except that \({\cal I}\) is not assumed to be the \(\omega_1\)-nonstationary ideal. They use \(I(\omega)\) to prove the determinacy of an Ehrenfeucht-Fra\"\i ss\'e-game played on several boards simultaneously. Note that \(I^*(\omega)\) implies that \({\cal I}\) is precipitous, so the consistency of \(I^*(\omega)\) implies the consistency of a measurable cardinal \cite{JMMP}. \begin{theorem} \label{I} (\cite{JMMP}) The assumption \(I^*(\omega)\) is consistent relative to the consistency of a measurable cardinal.
\end{theorem} We shall consider models \({\cal A},{\cal B}\) of cardinality \(\aleph_2\), so we may as well assume they have \(\omega_2\) as universe. For such \({\cal A}\) and \(\alpha<\omega_2\) we let \({\cal A}_{\alpha}\) denote the structure \({\cal A}\cap\alpha\); similarly \({\cal B}_{\alpha}\). \begin{lemma} \label{stationary} Suppose \({\cal A}\) and \({\cal B}\) are structures of cardinality \(\aleph_2\). If \(\forall\) does not have a winning strategy in \(\EFG{\omega_1}\), then \[S=\{\alpha : {\cal A}_{\alpha}\cong{\cal B}_{\alpha}\}\] is \(\omega_1\)-stationary. \end{lemma} \noindent {\bf Proof.}\hspace{2mm} Let \(C\subseteq\omega_2\) be \(\omega_1\)-closed and unbounded. Suppose \(S\cap C=\emptyset\). We derive a contradiction by describing a winning strategy of \(\forall\): Let \(\pi:\omega_1\to\omega_1\times\omega_1\times 2\) be onto such that \(\pi(\alpha)=(\beta,\gamma,d)\) implies \(\beta,\gamma\le\alpha\). If \(\alpha<\omega_2\), let \(\theta_{\alpha}:\omega_1\to\alpha\) be onto. Suppose the sequence \(\langle (x_i,y_i) : i<\alpha\rangle\) has been played. Here \(x_i\) denotes a move of \(\forall\) and \(y_i\) a move of \(\exists\). During the game \(\forall\) has built an ascending sequence \(\{c_i: i<\alpha\}\) of elements of \(C\). Now he lets \(c_{\alpha}\) be the smallest element of \(C\) greater than all the elements \(x_i,y_i\), \(i<\alpha\). Suppose \(\pi(\alpha)=(i,\gamma,d)\). Now \(\forall\) will play \(\theta_{c_i}(\gamma)\) as an element of \({\cal A}\) if \(d=0\), and as an element of \({\cal B}\) if \(d=1\). After all \(\omega_1\) moves of \(\EFG{\omega_1}\) have been played, some \({\cal A}_{\alpha}\) and \({\cal B}_{\alpha}\), where \(\alpha\in C\), have been enumerated. Since \(\alpha\not\in S\), \(\forall\) has won the game. $\Box$\par \begin{theorem} \label{determined} Assume \(I^*(\omega)\). The game \(\EFG{\omega_1}\) is determined for all \({\cal A}\) and \({\cal B}\) of cardinality \(\le\aleph_2\).
\end{theorem} \noindent {\bf Proof.}\hspace{2mm} Suppose \(\forall\) does not have a winning strategy. By Lemma~\ref{stationary} the set \(S=\{\alpha : {\cal A}_{\alpha}\cong{\cal B}_{\alpha}\}\) is \(\omega_1\)-stationary. Let \({\cal I}\) and \(K\) be as in \(I^*(\omega)\). If \(\alpha\in S\), let \(h_{\alpha}:{\cal A}_{\alpha}\cong{\cal B}_{\alpha}\). We describe a winning strategy of \(\exists\). The idea of this strategy is that \(\exists\) lets the isomorphisms \(h_{\alpha}\) determine his moves. Of course, different \(h_{\alpha}\) may give different information to \(\exists\), so he has to decide which \(h_{\alpha}\) to follow. The key point is that \(\exists\) lets some \(h_{\alpha}\) determine his move only if there are stationarily many other \(h_{\beta}\) that agree with \(h_{\alpha}\) on this move. Suppose the sequence \(\langle (x_i,y_i) : i<\alpha\rangle\) has been played. Again \(x_i\) denotes a move of \(\forall\) and \(y_i\) a move of \(\exists\). Suppose \(\forall\) plays next \(x_{\alpha}\), and this is (say) in \(A\). During the game \(\exists\) has built a descending sequence \(\{S_i: i<\alpha\}\) of elements of \(K\) with \(S_0\subseteq S\). The point of the sets \(S_i\) is that \(\exists\) has taken care that for all \(i<\alpha\) and \(\beta\in S_i\) we have \(y_i=h_{\beta}(x_i)\) or \(x_i=h_{\beta}(y_i)\), depending on whether \(\forall\) played \(x_i\) in \(A\) or \(B\). Now \(\exists\) chooses \(S'_{\alpha}\subseteq\bigcap_{i<\alpha}S_i\) so that \(S'_{\alpha}\in K\) and \(\forall i\in S'_{\alpha}\,(x_{\alpha}<i)\). For each \(i\in S'_{\alpha}\) we have \(h_i(x_{\alpha})<i\). By normality, there are an \(S_{\alpha}\subseteq S'_{\alpha}\) in \(K\) and a \(y_{\alpha}\) such that \(h_i(x_{\alpha})=y_{\alpha}\) for all \(i\in S_{\alpha}\). This element \(y_{\alpha}\) is the next move of \(\exists\). Using this strategy \(\exists\) wins.
$\Box$\par \section{A non-determined \(\EFG{\omega_1}\) with \({\cal A}\) and \({\cal B}\) of cardinality \(\aleph_3\).} We construct directly in ZFC two models \({\cal A}\) and \({\cal B}\) of cardinality \(\aleph_3\) with \(\EFG{\omega_1}\) non-determined. It readily follows that such models exist in all cardinalities \(\ge\aleph_3\). The construction uses a square-like principle (Lemma~\ref{comb}), which is provable in ZFC. \begin{lemma} \label{BM}\cite{Sh89,ShBook} There are a stationary \(X\subseteq S^3_1\) and a sequence \(\langle D_{\alpha} : \alpha\in X\rangle\) such that \begin{enumerate} \item \(D_{\alpha}\) is a cub subset of \(\alpha\) for all \(\alpha\in X\). \item The order type of \(D_{\alpha}\) is \(\omega_1\). \item If \(\alpha,\beta\in X\) and \(\gamma<\min\{\alpha,\beta\}\) is a limit of both \(D_{\alpha}\) and \(D_{\beta}\), then \(D_{\alpha}\cap\gamma = D_{\beta}\cap\gamma\). \item If \(\gamma\in D_{\alpha}\), then \(\gamma\) is a limit point of \(D_{\alpha}\) if and only if \(\gamma\) is a limit ordinal. \end{enumerate} \end{lemma} \noindent {\bf Proof.}\hspace{2mm} We shall sketch, for completeness, a proof of this given by Burke and Magidor~\cite[Lemma 7.7]{BM}. Let \(<^*\) be a well-ordering of \(H(\omega_3)\). For each \(\alpha\in S^3_1\), let \(\langle N^{\alpha}_{\delta} : \delta<\omega_2\rangle\) be a continuously increasing chain of elementary submodels of \(\langle H(\omega_3),\in,<^*\rangle\) such that \begin{description} \item[(N1)] \((\omega_1+1)\cup\{\omega_2,\alpha\}\subseteq N^{\alpha}_0\). \item[(N2)] \(|N^{\alpha}_{\delta}|\le\omega_1\). \item[(N3)] \(N^{\alpha}_{\delta}\cap\omega_2\in\omega_2\). \item[(N4)] \(\overline{N^{\alpha}_{\delta}\cap\omega_3}\in N^{\alpha}_{\delta+1}\). \end{description} Let \(A^{\alpha}_{\delta}=N^{\alpha}_{\delta}\cap\alpha\) for each \(\alpha\in S^3_1\). Since \(\alpha\in N^{\alpha}_{\delta}\), \(A^{\alpha}_{\delta}\) is cofinal in \(\alpha\).
Let \(X\subseteq S^3_1\) be stationary such that for some \(\delta,\rho<\omega_2\) and for all \(\alpha\in X\) we have \begin{enumerate} \item \(\delta\) is the least ordinal of cofinality \(\omega_1\) with \(N^{\alpha}_{\delta}\cap\omega_2=\delta\). \item The order type of \(\overline{A^{\alpha}_{\delta}}\) is \(\rho+1\). \end{enumerate} Let \(f:\omega_1\to\rho\) be cofinal and continuous. Let \(g:\rho+1\cong\overline{A^{\alpha}_{\delta}}\) be such that \(gf\) maps successors to successors. Let \(D_{\alpha}\) be the image of \(\omega_1\) under \(gf\). $\Box$\par \begin{lemma} \label{comb} There are sets \(S,T\) and \(C_{\alpha}\) for \(\alpha\in S\) such that the following hold: \begin{enumerate} \item \(S\subseteq S_0^3\cup S_1^3\) and \(S\cap S^3_1\) is stationary. \item \(T\subseteq S^3_0\) is stationary and \(S\cap T=\emptyset\). \item If \(\alpha\in S\), then \(C_{\alpha}\subseteq\alpha\cap S\) is closed and of order-type \(\le\omega_1\). \item If \(\alpha\in S\) and \(\beta\in C_{\alpha}\), then \(C_{\beta}=C_{\alpha}\cap\beta\). \item If \(\alpha\in S\cap S^3_1\), then \(C_{\alpha}\) is cub on \(\alpha\). \end{enumerate} \end{lemma} \noindent {\bf Proof.}\hspace{2mm} Let \(X\) and \(\langle D_{\alpha} : \alpha\in X\rangle\) be as in Lemma~\ref{BM}. Let \(S'=X\cup Y\), where \(Y\) consists of the ordinals which are limit points \(<\alpha\) of some \(D_{\alpha}\), \(\alpha\in X\). If \(\alpha\in X\), we let \(C_{\alpha}\) be the set of limit points \(<\alpha\) of \(D_{\alpha}\). If \(\alpha\in Y\), we let \(C_{\alpha}\) be the set of limit points \(<\alpha\) of \(D_{\beta}\cap\alpha\), where \(\beta>\alpha\) is chosen arbitrarily from \(X\); by clause 3 of Lemma~\ref{BM} this does not depend on the choice of \(\beta\). Now claims 1, 3, 4 and 5 are clearly satisfied. Let \(S^3_0=\bigcup_{i<\omega_2}T_i\) where the \(T_i\) are disjoint stationary sets.
Since \(|\overline{C_{\alpha}}|\le\omega_1\), there is \(i_{\alpha}<\omega_2\) such that \(i\ge i_{\alpha}\) implies \(\overline{C_{\alpha}}\cap T_i=\emptyset\). Let \(S''\subseteq S'\) be stationary such that \(i_{\alpha}\) has a constant value \(i\) for \(\alpha\in S''\). Let \(T=T_i\). Finally, let \(S=S''\cup\bigcup\{C_{\alpha} : \alpha\in S''\}\). Claim 2 is satisfied, and the Lemma is proved. $\Box$\par \begin{theorem} \label{aleph_3} There are structures \({\cal A}\) and \({\cal B}\) of cardinality \(\aleph_3\) with one binary predicate such that the game \(\EFG{\omega_1}\) is non-determined. \end{theorem} \noindent {\bf Proof.}\hspace{2mm} Let \(S,T\) and \(\langle C_{\alpha} : \alpha\in S\rangle\) be as in Lemma~\ref{comb}. We shall construct a sequence \(\{M_{\alpha} : \alpha<\omega_3\}\) of sets and a sequence \(\{G_{\alpha} : \alpha\in S\}\) of functions such that the conditions (M1)--(M5) below hold. Let \(W_{\alpha}\) be the set of all mappings \[G^{d_0}_{\gamma_0}\ldots G^{d_n}_{\gamma_n},\] where \(\gamma_0,\ldots ,\gamma_n\in S\cap\alpha\), \(d_i\in\{-1,1\}\), \(G^{1}_{\gamma}\) means \(G_{\gamma}\) and \(G^{-1}_{\gamma}\) means the inverse of \(G_{\gamma}\). Let \(W=W_{\omega_3}\). (Note that \(W\) is a set of partial functions.) The conditions on the \(M_{\alpha}\)'s and the \(G_{\alpha}\)'s are: \begin{description} \item[(M1)] \(M_{\alpha}\subseteq M_{\beta}\) if \(\alpha<\beta\), and \(M_{\alpha}\subset M_{\alpha+1}\) if \(\alpha\in S\). \item[(M2)] \(M_{\nu}=\bigcup_{\alpha<\nu}M_{\alpha}\) for limit \(\nu\). \item[(M3)] \(G_{\alpha}\) is a bijection of \(M_{\alpha+1}\) for \(\alpha\in S\). \item[(M4)] If \(\beta\in S\) and \(\alpha\in C_{\beta}\), then \(G_{\alpha}\subseteq G_{\beta}\). \item[(M5)] If for some \(\beta\), \(G_\beta(a) = b\) and for some \(w \in W\), \(w(a) = b\), then there is some \(\gamma\) so that \(w \subseteq G_\gamma\).
Furthermore, if \(\beta\) is the minimum ordinal so that \(G_\beta(a) = b\), then \(\gamma = \beta\) or \(\beta \in C_\gamma\). \end{description} In order to construct the set \(M=\bigcup_{\alpha<\omega_3}M_{\alpha}\) and the mappings \(G_{\alpha}\) we define an oriented graph with \(M\) as the set of vertices. We use the terminology of Serre~\cite{S} for graph-theoretic notions. If \(x\) is an edge, the origin of \(x\) is denoted by \(o(x)\) and the terminus by \(t(x)\). Our graph has an inverse edge \(\ol{x}\) for each edge \(x\). Thus \(o(\ol{x})=t(x)\) and \(t(\ol{x})=o(x)\). Some edges are called {\em positive}, the rest are called {\em negative}. An edge is positive if and only if its inverse is negative. For each edge \(x\) of \(M\) there is a set \(L(x)\) of labels. The set of possible labels for positive edges is \(\{g_{\alpha} : \alpha<\omega_3\}\). The negative edges can have elements of \(\{g^{-1}_{\alpha} : \alpha<\omega_3\}\) as labels. The labels are given in such a way that a positive edge gets \(g_{\alpha}\) as a label if and only if its inverse gets the label \(g^{-1}_{\alpha}\). During the construction the sets of labels will be extended step by step. The construction is analogous to building an acyclic graph on which a group acts freely; the graph then turns out to be the Cayley graph of the group. The labelled graph we build will be the ``Cayley graph'' of \(W\), which will be as free as possible given (M1)--(M4). Condition (M5) is a consequence of the freeness of the construction. Let us suppose the sets \(M_{\beta}\), \(\beta<\alpha\), of vertices have been defined. Let \(M_{<\alpha}=\bigcup_{\beta<\alpha}M_{\beta}\). Some vertices in \(M_{<\alpha}\) have edges between them, and a set \(L(x)\) of labels has been assigned to each such edge \(x\). If \(\alpha\) is a limit ordinal, we let \(M_{\alpha}=M_{<\alpha}\). So let us assume \(\alpha=\beta+1\). If \(\beta\not\in S\), we let \(M_{\alpha}=M_{\beta}\). So let us assume \(\beta\in S\).
Let \(\gamma=\sup(C_{\beta})\). Notice that since \(S\) consists entirely of limit ordinals and \(C_\beta \subseteq S\), either \(\gamma = \beta\) or \(\gamma +1 < \beta\). \noindent{\bf Case 1.} \(\gamma=\beta\): We extend \(M_{\beta}\) to \(M_{\alpha}\) by adding new vertices \(\{P_z : z\in {\open Z}\}\) and for each \(z\in{\open Z}\) a positive edge \(x^{P_z}_{\alpha}\) with \(o(x^{P_z}_{\alpha})=P_z\) and \(t(x^{P_z}_{\alpha})=P_{z+1}\). We also let \(L(x^{P_z}_{\alpha})= \{g_{\beta}\} \cup \{g_\delta : \beta \in C_\delta\}\). \noindent{\bf Case 2.} \(\gamma+1<\beta\): We extend \(M_{\beta}\) to \(M_{\alpha}\) by adding new vertices \(\{P'_z : z\in {\open Z} \setminus \{0\}\}\) for each \(P\in M_{\beta}\setminus M_{\gamma+1}\). For notational convenience let \(P'_0 = P\). Now we add for each \(P\in M_{\beta}\setminus M_{\gamma+1}\) new edges as follows. For each \(z\in{\open Z}\) we add a positive edge \(x^{P'_z}_{\alpha}\) with \[ o(x^{P'_z}_{\alpha}) = P'_z,\quad t(x^{P'_z}_{\alpha}) = P'_{z+1},\quad L(x^{P'_z}_{\alpha}) = \{g_{\beta}\} \cup \{g_\delta : \beta \in C_\delta\}. \] This determines completely the inverse of \(x^{P'_z}_{\alpha}\). This ends the construction of the graph. In the construction each vertex \(P\) in \(M_{\alpha+1}\), \(\alpha\in S\), is made the origin of a unique edge \(x^P_{\alpha}\) with \(g_{\alpha}\in L(x^P_{\alpha})\). We define \[G_{\alpha}(P)=t(x^P_{\alpha}).\] The construction of the sets \(M_{\alpha}\) and the mappings \(G_{\alpha}\) is now completed. It follows immediately from the construction that each \(G_{\alpha}\), \(\alpha\in S\), is a bijection of \(M_{\alpha+1}\). So (M1)--(M3) hold. (M4) holds, because \(g_{\alpha}\) is added to the labels of any edge with \(g_{\beta}\), where \(\beta\in C_{\alpha}\), as a label. Finally, (M5) is a consequence of the fact that the graph is circuit-free. Let us fix \(a_0\in M_1\) and \(b_0=G_{\beta_0}(a_0)\), where \(\beta_0\in C_{\alpha}\) for all \(\alpha\in S\).
Note that we may assume, without loss of generality, the existence of such a \(\beta_0\). If \(a_0, a_1\in M\), let \[R_{(a_0, a_1)}=\{(a'_0, a'_1)\in M^{2} : \exists w\in W (w(a_0)=a'_0\wedge w(a_1)=a'_1)\}.\] We let \[{\cal M}= \langle M, (R_{(a_0, a_1)})_{(a_0, a_1)\in M^{2}}\rangle\] \[{\cal A}= \langle{\cal M},a_0\rangle\] \[{\cal B}= \langle {\cal M},b_0\rangle\] and show that \(\EFG{\omegay}\) is non-determined. The reduction of the language of \({\cal A}\) and \({\cal B}\) to one binary predicate is easy. One just adds a copy of \(\omega_3\), together with its ordering, and a copy of $M\times M$ to the structures with the projection maps. Then fix a bijection $\phi$ from $\omega_3$ to $M^2$. Add a new binary predicate $R$ to the language and interpret $R$ to be contained in $\omega_3 \times M^2$ such that $R(\alpha, (a, b))$ holds if and only if $R_{\phi(\alpha)}(a, b)$ holds. We can now dispense with the old binary predicates. We have replaced our structure by one in a finite language without making any difference as to who wins the game \(\EFG{\omegay}\). The extra step of reducing to a single binary predicate is standard. An important property of these models is that if \(\alpha\in S\cap S^3_1\), then \(G_{\alpha}\mathord{\restriction} M_{\alpha}\) is an automorphism of the restriction of \({\cal M}\) to \(M_{\alpha}\) and takes \(a_0\) to \(b_0\). \begin{claim} \(\forall\) does not win \(\EFG{\omegay}\). \end{claim} Suppose \(\forall\) has a winning strategy \(\tau\). Again, there is a quick argument which uses CH: Find \(\alpha\in S\) such that \(M_{\alpha}\) is closed under \(\tau\) and \(\mbox{\rm cf} (\alpha)=\omegay\). Now \(C_{\alpha}\) is cub on \(\alpha\), whence \(G_{\alpha}\) maps \(M_{\alpha}\) onto itself. Using \(G_{\alpha}\) player \(\exists\) can easily beat \(\tau\), a contradiction. In the following longer argument we need not assume CH. Let \(\kappa\) be a large regular cardinal.
Let \({\cal H}\) be the expansion of \(\langle H(\kappa),\in\rangle\) obtained by adding the following structure to it: \begin{description} \item[(H1)] The function \(\alpha\mapsto M_{\alpha}\). \item[(H2)] The function \(\alpha\mapsto G_{\alpha}\). \item[(H3)] The function \(\alpha\mapsto C_{\alpha}\). \item[(H4)] A well-ordering \(<^*\) of the universe. \item[(H5)] The winning strategy \(\tau\). \item[(H6)] The sets \(S\) and \(T\). \end{description} Let \({\cal N}=\langle N,\in,\ldots \rangle\) be an elementary submodel of \({\cal H}\) such that \(\alpha=N\cap\omega_3\in S\cap S^3_1\). Now \(C_{\alpha}\) is a cub of order-type \(\omegay\) on \(\alpha\) and \(G_{\alpha}\) maps \(M_{\alpha}\) onto \(M_{\alpha}\). Moreover, \(G_{\alpha}\) is a partial isomorphism from \({\cal A}\) into \({\cal B}\). Provided that \(\tau\) does not lead \(\forall\) to play his moves outside \(M_{\alpha}\), \(\exists\) has an obvious strategy: he lets \(G_{\alpha}\) determine his moves. So let us assume a sequence \(\langle (x_{\xi},y_{\xi}) : \xi<\gamma\rangle\) has been played inside \(M_{\alpha}\) and \(\gamma<\omegay\). Let \(\beta\in C_{\alpha}\) be such that \(M_{\beta}\) contains the elements \(x_{\xi},y_{\xi}\) for \(\xi<\gamma\). The sequence \(\langle y_{\xi} : \xi<\gamma\rangle\) is totally determined by \(G_{\beta}\) and \(\tau\). Since \(G_{\beta}\in N\), \(\langle y_{\xi} : \xi<\gamma\rangle\in N\), and we are done. \begin{claim} \(\exists\) does not win \(\EFG{\omegay}\). \end{claim} Suppose \(\exists\) has a winning strategy \(\tau\). Let \({\cal H}\) be as above and \({\cal N}=\langle N,\in,\ldots \rangle\) be an elementary submodel of \({\cal H}\) such that \(\alpha=N\cap\omega_3\in T\).
We let \(\forall\) play during the first \(\omega\) moves of \(\EFG{\omegay}\) a sequence \(\langle a_n : n<\omega\rangle\) in \({\cal A}\) such that if \(\alpha_n\) is the least ordinal with \(a_n\in M_{\alpha_n}\), then the sequence \(\langle\alpha_n : n<\omega\rangle\) is ascending and \(\sup \{\alpha_n : n<\omega\}=\alpha\). Let \(\exists\) respond following \(\tau\) with \(\langle b_n : n<\omega\rangle\). As his move number \(\omega\) player \(\forall\) plays some element \(a_{\omega}\in M\setminus M_{\alpha}\) in \({\cal A}\) and \(\exists\) answers according to \(\tau\) with \(b_{\omega}\). For all $i \leq \omega$, $R_{(a_0, a_i)}(a_0, a_i)$ holds. Hence $R_{(a_0, a_i)}(b_0, b_i)$ holds. So there is $w_i$ such that $w_i(a_0) = b_0$ and $w_i(a_i) = b_i$. Since $G_{\beta_0}(a_0) = b_0$, by (M5), for each $i$ there is $\beta_i$ so that $G_{\beta_i}(a_i) = b_i$. We can assume that $\beta_i$ is chosen to be minimal. Notice that for all $i$, $\beta_i > \alpha_i$ and for $i < \omega$, $\beta_i \in {\cal N}$. So $\sup \{\beta_i : i < \omega\} = \alpha$. Also, by the same reasoning as above, for each $i < \omega$, $R_{(a_i, a_\omega)}(b_i, b_\omega)$ holds. Applying (M5), we get that $G_{\beta_\omega}(a_i) = b_i$. Using (M5) again and the minimality of $\beta_i$, for all $i < \omega$, $\beta_i \in C_{\beta_\omega}$. Thus \(\alpha\) is a limit of elements of \(C_{\beta_{\omega}}\), contradicting \(\alpha\in T\). $\Box$\par \end{document}
\begin{document} \title{Path integral for relativistic oscillators: model \\ of the Klein-Gordon particle in AdS space} \author{M. T. Chefrour \\ D\'epartement de Physique, Facult\'e des Sciences,\\ Universit\'e Badji Mokhtar, Annaba, Algeria. \and F. Benamira and L. Guechi \\ Laboratoire de Physique Th\'{e}orique,\\ D\'{e}partement de Physique, Facult\'{e} des Sciences,\\ Universit\'e Mentouri, Route d'Ain El Bey, \\ Constantine, Algeria.} \date{\today } \maketitle \begin{abstract} Explicit path integration is carried out for the Green's functions of special relativistic harmonic oscillators in (1+1)- and (3+1)-dimensional Minkowski space-time modeled by a Klein-Gordon particle in the universal covering space-time of the anti-de Sitter static space-time. The energy spectrum, together with the normalized wave functions, is obtained. In the non-relativistic limit, the bound states of the one- and three-dimensional ordinary oscillators are regained. PACS 03.65-Quantum theory, quantum mechanics. \end{abstract} \section{Introduction} Relativistic problems that can be solved exactly by the use of the path integral approach are very limited, essentially for two reasons: (1) For a relativistic particle with spin, the propagator cannot be described by a simple path integral based on any reasonable action. The fact that the spin has no classical origin makes it difficult to describe it by continuous paths\cite{FeyHib}. (2) If the particles interact with each other or with an external potential, they can produce quantum effects which cannot be described by path fluctuations alone. These effects can be handled by perturbation theory in the framework of quantum field theory\cite{Klei1}. However, in recent years, there have been a few successful examples in which the difficulty concerning the spin has been overcome.
The Dirac propagator for a free particle\cite{BarDur} has been derived in the framework of a model where the spin is classically described by internal variables. The path integral treatments of the Dirac-Coulomb problem\cite{KayIno} and a Dirac electron in a one-dimensional Coulomb potential on the half-line and in the presence of an external superstrong magnetic field\cite{BerCarLim} have been obtained via the Biedenharn transformation\cite{Bied}. The electron in the presence of a constant magnetic field\cite{PapDev} and the problem of charged particles in interaction with an electromagnetic plane wave alone\cite{BouCheGueHam} or plus a parallel magnetic field\cite{BenCheGueHam} have been studied by introducing a fifth parameter in order to bring the problem into a non-relativistic form. The relativistic spinless Coulomb system\cite{Klei2} and the Klein-Gordon particle in vector plus scalar Hulth\'{e}n-type potentials\cite{CheGueLecHamMes} have also been solved by path integration. Recently, from various aspects (\cite{DulVan,MosSze,BenMarRomNun,AldBisNav,NavNav} and references therein), there has been renewed interest in relativistic harmonic oscillators, for a crucial reason. Indeed, a simple replacement of the coordinates and generalized momenta in the corresponding classical Hamiltonian by their quantum mechanical counterparts is, in general, not correct since the ambiguity resulting from ordering the operators must be resolved. To parameterize the operator ordering ambiguity of the position- and the momentum-operators, we show that it is necessary to introduce two parameters $\alpha $ and $\beta $ which cannot be freely chosen. The problem of the quantum relativistic oscillators represented by quantum free relativistic particles on the universal covering space-time of the anti-de Sitter static space-time (CAdS) is a model characterized by a constraint on these parameters.
This model is called ``special quantum relativistic oscillators'' in the sense that $\alpha $ and $\beta $ are chosen to adjust the non-relativistic limit and to preserve the reality of the energy spectrum of the physical system. To our knowledge, there is no path integral discussion for the quantum relativistic harmonic oscillators. The purpose of the present paper is to fill this gap. The treatment will be restricted to spinless systems. Our study is organized in the following way: in sec. II, we construct the path integral associated with the $(1+1)$-dimensional special relativistic harmonic oscillator. The Green's function is derived in closed form, from which we obtain the energy spectrum and the normalized wave functions. In sec. III, we extend the discussion to the $(3+1)$-dimensional case. The radial Green's function is also given in closed form. The energy levels and the normalized wave functions are then deduced. Sec. IV is devoted to the conclusion. \section{The (1+1)-dimensional special relativistic oscillator} The relativistic harmonic oscillator interaction in $(1+1)$ Minkowski space-time is equivalent to a free relativistic particle in the universal covering space-time of the anti-de Sitter space-time (CAdS). For a static form of the anti-de Sitter space-time metric, the line element is given by \begin{equation} ds^2=\Lambda (x)c^2dt^2-\frac 1{\Lambda (x)}dx^2, \label{a.1} \end{equation} where \begin{equation} \Lambda (x)=1+\frac{\omega ^2}{c^2}x^2. \label{a.2} \end{equation} Classical mechanics is described in this space by the classical Lagrangian and Hamiltonian, respectively: \begin{equation} L=-Mc\sqrt{1-\frac 1{\Lambda (x)}\frac{v^2}{c^2}+\frac{\omega ^2}{c^2}x^2}, \label{a.3} \end{equation} \begin{equation} H^2=M^2c^4+p^2c^2+M^2\omega ^2c^2x^2+2\omega ^2x^2p^2+\frac{\omega ^4}{c^2} x^4p^2.
\label{a.4} \end{equation} If we proceed by adopting the substitutions $H\rightarrow \widehat{P} _{0}=i\hbar \frac{\partial }{\partial t},$ $p\rightarrow \widehat{P}_{x}= \frac{\hbar }{i}\frac{\partial }{\partial x},$ $x\rightarrow \widehat{x}$, there is an ambiguity which results from ordering the operators in the quantum mechanical counterpart of (\ref{a.4}). Since there exist different ways to put the terms $x^{4}p^{2}$ and $x^{2}p^{2}$ into symmetrically ordered forms, we can construct a number of Hermitian quantum mechanical counterparts of (\ref{a.4}). In order to avoid this ambiguity, we write all the Hermitian forms for each term as a linear combination. Whence, after calculation of all the commutators, we find the following replacements \begin{equation} \left\{ \begin{array}{c} x^{4}p^{2}\rightarrow -\hbar ^{2}\left( x^{4}\frac{d^{2}}{dx^{2}}+4x^{3} \frac{d}{dx}+\alpha x^{2}\right) , \\ x^{2}p^{2}\rightarrow -\hbar ^{2}\left( x^{2}\frac{d^{2}}{dx^{2}}+2x\frac{d}{ dx}+\beta \right) , \end{array} \right. \label{a.5} \end{equation} where the parameters $\alpha $ and $\beta $ will be fixed later. The Green's function $G(x^{\prime \prime },x^{\prime })$ that we consider obeys the Klein-Gordon equation \begin{equation} \left\{ \Box +\kappa \Lambda (x)+(1-\alpha +2\beta )\frac{\omega ^4}{c^4}x^2+ \frac{\omega ^2}{c^2}\right\} G(x^{\prime \prime },x^{\prime })=-\frac 1{\hbar ^2c^2}\delta \left( x^{\prime \prime }-x^{\prime }\right) , \label{a.6} \end{equation} where \begin{equation} \Box =\frac{1}{c^{2}}\frac{\partial ^{2}}{\partial t^{2}}-\Lambda (x)\frac{ \partial ^{2}}{\partial x^{2}}\Lambda (x), \label{a.7} \end{equation} and \begin{equation} \kappa =\left( \frac{Mc}{\hbar }\right) ^{2}+\left( 1-2\beta \right) \frac{ \omega ^{2}}{c^{2}}. \label{a.8} \end{equation} Note that, by choosing to work with the symmetrically ordered form (\ref{a.7}) of the D'Alembertian operator, the quantization of the original problem is not modified.
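To see concretely why a free parameter survives, it is enough to compare two particular Hermitian orderings of $x^{4}p^{2}$; this short check (ours, using $\widehat{P}_{x}=\frac{\hbar }{i}\frac{d}{dx}$ acting on a wave function $\psi (x)$) reproduces the template (\ref{a.5}) with two different values of $\alpha $:
\begin{eqnarray*}
\frac{1}{2}\left( \widehat{x}^{4}\widehat{P}_{x}^{2}+\widehat{P}_{x}^{2}
\widehat{x}^{4}\right) \psi &=&-\hbar ^{2}\left( x^{4}\psi ^{\prime \prime
}+4x^{3}\psi ^{\prime }+6x^{2}\psi \right) ,\qquad \mbox{i.e. }\alpha =6, \\
\widehat{x}^{2}\widehat{P}_{x}^{2}\widehat{x}^{2}\psi &=&-\hbar ^{2}\left(
x^{4}\psi ^{\prime \prime }+4x^{3}\psi ^{\prime }+2x^{2}\psi \right)
,\qquad \mbox{i.e. }\alpha =2,
\end{eqnarray*}
and similarly $\frac{1}{2}(\widehat{x}^{2}\widehat{P}_{x}^{2}+\widehat{P}_{x}^{2}\widehat{x}^{2})$ and $\widehat{x}\widehat{P}_{x}^{2}\widehat{x}$ give $\beta =1$ and $\beta =0$ respectively, so a general linear combination of Hermitian orderings leaves $\alpha $ and $\beta $ free.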
By using Schwinger's integral representation\cite{Schwinger}, the solution of the differential equation (\ref{a.6}) can be written as follows: \begin{equation} G(x^{\prime \prime },x^{\prime })=\frac 1{2i\hbar c^2}\int_0^\infty d\lambda \left\langle x^{\prime \prime },t^{\prime \prime }\right| \exp \left[ \frac i\hbar \widehat{H}\lambda \right] \left| x^{\prime },t^{\prime }\right\rangle , \label{a.9} \end{equation} where the integrand $\left\langle x^{\prime \prime },t^{\prime \prime }\right| \exp \left[ \frac i\hbar \widehat{H}\lambda \right] \left| x^{\prime },t^{\prime }\right\rangle $ is similar to the propagator of a quantum system evolving in the time $\lambda $ from $(x^{\prime },t^{\prime })$ to $(x^{\prime \prime },t^{\prime \prime })$ with the effective Hamiltonian, \begin{equation} \widehat{H}=\frac 12\left[ \!-\Lambda (x)\widehat{P}_x^2\Lambda (x)+\frac{ \widehat{P}_0^2}{c^2}-\hbar ^2\kappa \Lambda (x)-\hbar ^2(1\!-\!\alpha +\!2\beta )\frac{\omega ^4}{c^4}x^2-\frac{\hbar ^2\omega ^2}{c^2}\right] . \label{a.10} \end{equation} The integrand in Eq.
(\ref{a.9}) may be written as the path integral \cite {BouCheGueHam,BenCheGueHam,CheGueLecHamMes,Feyn,Schulm} \begin{eqnarray} P(x^{\prime \prime },t^{\prime \prime },x^{\prime },t^{\prime };\lambda ) &=&\left\langle x^{\prime \prime },t^{\prime \prime }\right| \exp \left[ \frac i\hbar \widehat{H}\lambda \right] \left| x^{\prime },t^{\prime }\right\rangle \nonumber \\ &=&\stackunder{N\rightarrow \infty }{\lim }\!\int \!\stackrel{N}{\stackunder{ n=1}{\prod }}\!dx_ndt_n\!\stackrel{N+1}{\stackunder{n=1}{\prod }}\!\frac{ d(P_x)_n}{2\pi \hbar }\frac{d(P_0)_n}{2\pi \hbar }\exp \left\{ \!\frac i\hbar \stackunder{n=1}{\!\stackrel{N+1}{\sum }\!}A_1^\varepsilon \right\} , \nonumber \\ && \label{a.11} \end{eqnarray} with the short-time action \begin{eqnarray} A_{1}^{\varepsilon } &=&(P_{0})_{n}\triangle t_{n}-(P_{x})_{n}\triangle x_{n}+\frac{\varepsilon }{2}\left( \frac{(P_{0})_{n}^{2}}{c^{2}}-\Lambda (x_{n})\Lambda (x_{n-1})(P_{x})_{n}^{2}\right. \nonumber \\ &&\left. -\hbar ^{2}\frac{\omega ^{2}}{c^{2}}-\hbar ^{2}\kappa \Lambda (x_{n})-\hbar ^{2}(1-\alpha +2\beta )\frac{\omega ^{4}}{c^{4}} x_{n}^{2}\right) , \label{a.12} \end{eqnarray} where \begin{equation} \varepsilon =\frac{\lambda }{N+1}=s_{n}-s_{n-1}, \label{a.13} \end{equation} and $s\in \left[ 0,\lambda \right] $ is a new time-like variable. 
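For reference, the Gaussian (Fresnel) momentum integral used to perform the $(P_{x})_{n}$ integrations reads, with the shorthand $a=\varepsilon \Lambda (x_{n})\Lambda (x_{n-1})$,
\[
\int_{-\infty }^{+\infty }\frac{dp}{2\pi \hbar }\exp \left[ \frac{i}{\hbar }
\left( -p\,\triangle x_{n}-\frac{a}{2}p^{2}\right) \right] =\left[ \frac{1}{
2i\pi \hbar a}\right] ^{\frac 12}\exp \left[ \frac{i}{\hbar }\frac{\triangle
x_{n}^{2}}{2a}\right] ,
\]
which produces both the prefactors $\left[ 2i\pi \hbar \varepsilon \right] ^{-\frac 12}\left[ \Lambda (x_{n})\Lambda (x_{n-1})\right] ^{-\frac 12}$ and the kinetic term of the short-time action in configuration space.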
Let us first notice that the integrations on the variables $t_{n}$ give $N$ Dirac distributions $\delta \left( (P_{0})_{n}-(P_{0})_{n+1}\right) .$ Thereafter the integrations on $(P_{0})_{n}$ give $ (P_{0})_{1}=(P_{0})_{2}=...=(P_{0})_{N+1}=E.$ The propagator (\ref{a.11}) then becomes \begin{equation} P(x^{\prime \prime },t^{\prime \prime },x^{\prime },t^{\prime };\lambda )=\int_{-\infty }^{+\infty }\frac{dE}{2\pi \hbar }\exp \left[ -\frac i\hbar E(t^{\prime \prime }-t^{\prime })\right] P_E(x^{\prime \prime },x^{\prime };\lambda ), \label{a.14} \end{equation} where the kernel $P_E(x^{\prime \prime },x^{\prime };\lambda )$ is given by \begin{equation} P_{E}(x^{\prime \prime },x^{\prime };\lambda )=\stackunder{N\rightarrow \infty }{\lim }\int \stackrel{N}{\stackunder{n=1}{\prod }}dx_{n}\stackrel{N+1 }{\stackunder{n=1}{\prod }}\frac{d(P_{x})_{n}}{2\pi \hbar }\exp \left[ \frac{ i}{\hbar }\stackunder{n=1}{\stackrel{N+1}{\sum }}A_{2}^{\varepsilon }\right] , \label{a.15} \end{equation} with the short-time action \begin{eqnarray} A_2^\varepsilon &=&-(P_x)_n\triangle x_n+\frac \varepsilon 2\left[ -\Lambda (x_n)\Lambda (x_{n-1})(P_x)_n^2+\frac{\hbar ^2\omega ^2}{c^2}\left( \frac{E^2 }{\hbar ^2\omega ^2}-1\right) \right. \nonumber \\ &&\left. -\hbar ^2\kappa \Lambda (x_n)-\hbar ^2(1-\alpha +2\beta )\frac{ \omega ^4}{c^4}x_n^2\right] . 
\label{a.16} \end{eqnarray} Note that (\ref{a.14}) is invariant under the change $E\rightarrow -E.$ Then, by integrating with respect to the variables $(P_x)_n$, we get \begin{eqnarray} P_E(x^{\prime \prime },x^{\prime };\lambda ) &=&\frac 1{\sqrt{\Lambda (x^{\prime })\Lambda (x^{\prime \prime })}}\stackunder{N\rightarrow \infty }{ \lim }\stackrel{N+1}{\stackunder{n=1}{\prod }}\left[ \frac 1{2i\pi \hbar \varepsilon }\right] ^{\frac 12} \nonumber \\ &&\times \stackrel{N}{\stackunder{n=1}{\prod }}\left[ \int \frac{dx_n}{ \Lambda (x_n)}\right] \exp \left[ \frac i\hbar \stackunder{n=1}{\stackrel{N+1 }{\sum }}A_3^\varepsilon \right] , \label{a.17} \end{eqnarray} with the short-time action in configuration space \begin{eqnarray} A_3^\varepsilon &=&\frac{\triangle x_n^2}{2\varepsilon \Lambda (x_n)\Lambda (x_{n-1})}+\frac \varepsilon 2\hbar ^2\left[ \left( \frac{E^2}{\hbar ^2\omega ^2}-1\right) \frac{\omega ^2}{c^2}-\kappa \Lambda (x_n)\right. \nonumber \\ &&\left. -(1-\alpha +2\beta )\frac{\omega ^4}{c^4}x_n^2\right] . \label{a.18} \end{eqnarray} Substituting (\ref{a.14}) into (\ref{a.9}), we can rewrite (\ref{a.9}) in the form: \begin{equation} G(x^{\prime \prime },x^{\prime })=\int_{-\infty }^{+\infty }\frac{dE}{2\pi \hbar }\exp \left[ -\frac i\hbar E(t^{\prime \prime }-t^{\prime })\right] G_E(x^{\prime \prime },x^{\prime }), \label{a.19} \end{equation} with \begin{equation} G_E(x^{\prime \prime },x^{\prime })=\frac 1{2i\hbar c^2}\int_0^\infty d\lambda P_E(x^{\prime \prime },x^{\prime };\lambda ). \label{a.20} \end{equation} If we now introduce a new variable $u_{n}$ together with a rescaling of time \cite{DurKlei} from $\varepsilon $ to $\sigma _{n\text{ }}$given by \begin{equation} \left\{ \begin{array}{c} x_n=\frac c\omega \sinh u_n, \\ \\ \varepsilon =\sigma _n\frac{c^2}{\omega ^2}\frac 1{\cosh u_n\cosh u_{n-1}}, \end{array} \right. 
\label{a.21} \end{equation} and incorporate the constraint \begin{equation} \lambda =\frac{c^2}{\omega ^2}\int_0^S\frac{ds}{\cosh ^2u}, \label{a.22} \end{equation} by using the identity \begin{equation} \frac{c^{2}/\omega ^{2}}{\cosh u^{\prime \prime }\cosh u^{\prime }} \int_{0}^{\infty }dS\delta \left( \lambda -\frac{c^{2}}{\omega ^{2}} \int_{0}^{S}\frac{ds}{\cosh ^{2}u}\right) =1, \label{a.23} \end{equation} the path integral (\ref{a.20}) can be written as: \begin{equation} G_E(x^{\prime \prime },x^{\prime })=\frac 1{2i\hbar \omega c\left( \cosh u^{\prime \prime }\cosh u^{\prime }\right) ^{\frac 32}}\int_0^\infty dSP(u^{\prime \prime },u^{\prime };S), \label{a.24} \end{equation} where \begin{eqnarray} P(u^{\prime \prime },u^{\prime };S) &=&\stackunder{N\rightarrow \infty }{ \lim }\int \stackrel{N+1}{\stackunder{n=1}{\prod }}\left[ \frac{1}{2i\pi \hbar \sigma _{n}}\right] ^{\frac{1}{2}}\stackrel{N}{\stackunder{n=1}{\prod } }du_{n}\exp \left\{ \frac{i}{\hbar }\stackunder{n=1}{\stackrel{N+1}{\sum }} \left[ \frac{\triangle u_{n}^{2}}{2\sigma _{n}}\right. \right. \nonumber \\ &&\ +\frac{\triangle u_{n}^{4}}{8\sigma _{n}}\left( \frac{1}{3}-\frac{1}{ \cosh ^{2}\widetilde{u}_{n}}\right) -\frac{\hbar ^{2}}{2}\left( \frac{c^{2}}{ \omega ^{2}}\kappa -\frac{E^{2}/\hbar ^{2}\omega ^{2}-1}{\cosh ^{2} \widetilde{u}_{n}}\right. \nonumber \\ &&\ \left. \left. \left. +(1-\alpha +2\beta )\tanh ^{2}\widetilde{u} _{n}\right) \sigma _{n}\right] \right\} . \label{a.25} \end{eqnarray} Here, we have used the usual abbreviations $\triangle u_{n}=u_{n}-u_{n-1},$ $ \widetilde{u}_{n}=\frac{u_{n}+u_{n-1}}{2},$ $u^{\prime }=u(0)$ and $ u^{\prime \prime }=u(S).$ Note that the term in $(\triangle u_{n})^{4}$ contributes significantly to the path integral. 
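The simplifications produced by the substitution (\ref{a.21}) rest on the elementary identities
\[
\Lambda (x)=1+\sinh ^{2}u=\cosh ^{2}u,\qquad dx=\frac{c}{\omega }\cosh
u\,du,\qquad \frac{\omega ^{2}x^{2}}{c^{2}\Lambda (x)}=\tanh ^{2}u ,
\]
so the $\kappa \Lambda (x)$ term becomes a constant after the time rescaling, while the quadratic term turns into the $\tanh ^{2}\widetilde{u}_{n}$ potential appearing above.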
It can be estimated by using the formula \cite{Dewitt} \begin{equation} \int_{-\infty }^{+\infty }\exp (-\alpha _{1}x^{2}+\alpha _{2}x^{4})dx=\int_{-\infty }^{+\infty }\exp \left( -\alpha _{1}x^{2}+\frac{ 3\alpha _{2}}{4\alpha _{1}^{2}}\right) dx, \label{a.26} \end{equation} valid for $\left| \alpha _{1}\right| $ large and Re$(\alpha _{1})>0.$ This leads to \begin{eqnarray} P(u^{\prime \prime },u^{\prime };S) &=&\int Du(s)\exp \left\{ \frac{i}{\hbar }\int_{0}^{S}\left[ \frac{\stackrel{.}{u}^{2}}{2}-\frac{\hbar ^{2}}{2}\left( \frac{c^{2}}{\omega ^{2}}\kappa +\frac{1}{4}\right) \right. \right. \nonumber \\ &&\left. \left. +\frac{\hbar ^{2}}{2}\frac{E^{2}/\hbar ^{2}\omega ^{2}-1/4}{ \cosh ^{2}u}-\frac{\hbar ^{2}}{2}(1-\alpha +2\beta )\tanh ^{2}u\right] ds\right\} . \nonumber \\ && \label{a.27} \end{eqnarray} By noting that $\tanh ^{2}u=1-\frac{1}{\cosh ^{2}u},$ this last path integral is identical in form with that of the symmetric Rosen-Morse potential \cite{RosMor} which has been studied recently \cite {JunIno,BohJun,Grosc,KleiMus,CheGueHamLec} , but in order to obtain the equivalent to the Klein-Gordon equation in the AdS space-time we impose a restriction on the parameters $\alpha $ and $\beta $ defined by the following two equations: \begin{equation} 1-\alpha +2\beta =0, \label{a.28} \end{equation} \begin{equation} (1-2\beta )\frac{\omega ^2}{c^2}=\xi R, \label{a.29} \end{equation} where $R=-2\frac{\omega ^2}{c^2}$ is the scalar curvature and $\xi $ is a numerical factor. Whence it follows that \begin{equation} \alpha =2\xi +2\text{\qquad and\qquad }\beta =\xi +\frac{1}{2}. \label{a.30} \end{equation} In this case, the propagator (\ref{a.27}) reduces to \begin{eqnarray} P(u^{\prime \prime },u^{\prime };S) &=&\int Du(s)\exp \left\{ \frac{i}{\hbar }\int_{0}^{S}\left[ \frac{\stackrel{.}{u}^{2}}{2}-\frac{\hbar ^{2}}{2}\left( \left( \frac{Mc^{2}}{\hbar \omega }\right) ^{2}-2\xi +\frac{1}{4}\right) \right. \right. \nonumber \\ &&\left. \left. 
+\frac{\hbar ^{2}}{2}\frac{E^{2}/\hbar ^{2}\omega ^{2}-1/4}{ \cosh ^{2}u}\right] ds\right\} , \label{a.31} \end{eqnarray} which is likewise the propagator relative to a symmetric Rosen-Morse potential. The Green's function associated with this potential has been evaluated through various techniques of path integration \cite {JunIno,BohJun,Grosc,KleiMus,CheGueHamLec} . The result is \begin{eqnarray} G(u^{\prime \prime },u^{\prime };E) &=&\int_{0}^{+\infty }dSP(u^{\prime \prime },u^{\prime };S) \nonumber \\ &=&-\frac{i}{\hbar }\Gamma (\gamma -l_{E})\Gamma (1+l_{E}+\gamma )P_{l_{E}}^{-\gamma }(\tanh u^{\prime \prime })P_{l_{E}}^{-\gamma }(-\tanh u^{\prime }), \nonumber \\ && \label{a.32} \end{eqnarray} where $P_{l_{E}}^{-\gamma }(\tanh u)$ is the associated Legendre function with \begin{equation} l_E=-\frac 12+\frac E{\hbar \omega }, \label{a.33} \end{equation} and \begin{equation} \gamma =\pm \frac 12\sqrt{1+4N^2-8\xi },\quad N=\frac{Mc^2}{\hbar \omega }. \label{a.34} \end{equation} If we take into account Eqs. (\ref{a.30}), insert (\ref{a.32}) into (\ref {a.24}), and remember the first equation of the transformation (\ref{a.21}), we obtain the Green's function for the one-dimensional special relativistic harmonic oscillator under consideration \begin{eqnarray} G_E(x^{\prime \prime },x^{\prime }) &=&-\frac{\Gamma (\gamma -l_E)\Gamma (1+l_E+\gamma )}{2\hbar ^2\omega c}\left[ \left( 1+\frac{\omega ^2}{c^2} x^{\prime \prime }{}^2\right) \left( 1+\frac{\omega ^2}{c^2}x^{\prime 2}\right) \right] ^{-\frac 34} \nonumber \\ &&\times P_{l_E}^{-\gamma }\left( \frac{\frac \omega cx^{\prime \prime }}{ \sqrt{1+\frac{\omega ^2}{c^2}x^{\prime \prime 2}}}\right) P_{l_E}^{-\gamma }\left( -\frac{\frac \omega cx^{\prime }}{\sqrt{1+\frac{\omega ^2}{c^2} x^{\prime 2}}}\right) . \label{a.35} \end{eqnarray} The poles of the Green's function yield the discrete energy spectrum. These are just the poles of $\Gamma (\gamma -l_E)$ which occur when $\gamma -l_E=-n $ for $n=0,1,2,...$. 
They are given through the equations \begin{equation} \frac{1}{2}-\frac{E}{\hbar \omega }\pm \frac{1}{2}\sqrt{1+4N^{2}-8\xi }=-n. \label{a.36} \end{equation} So, algebraically we obtain two distinct sets of energy levels according to the positive and negative signs of the parameter $\gamma $. But we have to check whether the corresponding wave functions, which will be expressed in terms of the Legendre functions of the first kind $P_{l_{E}}^{-\gamma }(y),$ satisfy the boundary conditions for $y=\frac{\omega }{c}x/\sqrt{1+\frac{ \omega ^{2}}{c^{2}}x^{2}}\rightarrow \pm 1.$ By inspecting their asymptotic behaviors\cite{Erdelyi} \begin{equation} P_{l_{E}}^{-\gamma }(y)\stackunder{y\rightarrow 1}{\simeq }\frac{(1-y)^{ \frac{\gamma }{2}}}{2^{\frac{\gamma }{2}}\Gamma (1+\gamma )};\qquad \gamma \neq 0,-1,-2,-3,..., \label{a.37} \end{equation} \begin{equation} P_{l_{E}}^{-\gamma }(y)\stackunder{y\rightarrow -1}{\simeq }\left\{ \begin{array}{c} -\frac{\Gamma (-\gamma )}{2^{\frac{\gamma }{2}}\pi }\sin (l_{E}\pi )(1+y)^{ \frac{\gamma }{2}}\qquad \text{for\quad Re}(\gamma )<0, \\ \\ \frac{2^{\frac{\gamma }{2}}\Gamma (\gamma )}{\Gamma (1+l_{E}+\gamma )\Gamma (\gamma -l_{E})}(1+y)^{-\frac{\gamma }{2}}\qquad \text{for\quad Re}(\gamma )>0, \end{array} \right. \label{a.38} \end{equation} we see that $P_{l_{E}}^{-\gamma }(y)$ diverges if Re$(\gamma )<0$. Therefore, we must choose the positive sign of $\gamma $ and hence the energy eigenvalues are \begin{equation} E_{n}=\left( n+\frac{1}{2}+\frac{1}{2}\sqrt{1+4N^{2}-8\xi }\right) \hbar \omega . \label{a.39} \end{equation} On the other hand, the reality of the parameter $\gamma $ implies the following range of the numerical factor $\xi <\frac{1}{8}\left( 1+4N^{2}\right) $. In the limit $c\rightarrow \infty $, the energy spectrum approaches \begin{equation} E_n^{NR}+Mc^2=\hbar \omega \left( n+\frac 12\right) +Mc^2. 
\label{a.40} \end{equation} The first term gives the energy levels in the non-relativistic case and the second term is the rest energy of the harmonic oscillator. The corresponding energy eigenfunctions can be found by approximation near the poles $\gamma -l_E\approx -n$ : \begin{equation} \Gamma \left( \gamma -l_{E}\right) \approx \frac{(-1)^{n}}{n!}\frac{1}{ \gamma -l_{E}+n}=\frac{(-1)^{n+1}}{n!}\frac{2\left( n+\gamma +\frac{1}{2} \right) \hbar ^{2}\omega ^{2}}{E^{2}-E_{n}^{2}}. \label{a.41} \end{equation} Using this behavior and the known property of the symmetry of the associated Legendre functions under spatial reflection, $x\rightarrow -x$, we get the contribution of the bound states to the spectral representation of the Green's function as \begin{eqnarray} G_{E}(x^{\prime \prime },x^{\prime }) &=&\stackunder{n=0}{\stackrel{\infty }{ \sum }}\frac{\omega }{c}\frac{n+\gamma +\frac{1}{2}}{n!\left( E^{2}-E_{n}^{2}\right) }\Gamma (2\gamma +n+1) \nonumber \\ &&\times \left( 1+\frac{\omega ^{2}}{c^{2}}x^{\prime \prime 2}\right) ^{- \frac{3}{4}}\left( 1+\frac{\omega ^{2}}{c^{2}}x^{\prime 2}\right) ^{-\frac{3 }{4}} \nonumber \\ &&\times P_{n+\gamma }^{-\gamma }\left( \frac{\frac{\omega }{c}x^{\prime \prime }}{\sqrt{1+\frac{\omega ^{2}}{c^{2}}x^{\prime \prime 2}}}\right) P_{n+\gamma }^{-\gamma }\left( \frac{\frac{\omega }{c}x^{\prime }}{\sqrt{1+ \frac{\omega ^{2}}{c^{2}}x^{\prime 2}}}\right) \nonumber \\ \ &=&\stackunder{n=0}{\stackrel{\infty }{\sum }}\frac{\Psi _{n}^{\gamma }(x^{\prime \prime })\Psi _{n}^{\gamma \ast }(x^{\prime })}{E^{2}-E_{n}^{2}}. \label{a.42} \end{eqnarray} The properly normalized wave functions are thus \begin{eqnarray} \Psi _{n}^{\gamma }(x) &=&\left[ \frac{\omega }{c}\frac{n+\gamma +\frac{1}{2} }{n!}\Gamma (2\gamma +n+1)\right] ^{\frac{1}{2}}\left( 1+\frac{\omega ^{2}}{ c^{2}}x^{2}\right) ^{-\frac{3}{4}} \nonumber \\ &&\times P_{n+\gamma }^{-\gamma }\left( \frac{\frac{\omega }{c}x}{\sqrt{1+ \frac{\omega ^{2}}{c^{2}}x^{2}}}\right) . 
\label{a.43} \end{eqnarray} Taking into account the relation between the Gegenbauer polynomials and the associated Legendre functions ( see formula (8.936) p. 1031 in Ref. \cite {GradRyz} ) \begin{equation} C_{n}^{\lambda }(t)=\frac{\Gamma (2\lambda +n)\Gamma (\lambda +\frac{1}{2})}{ \Gamma (2\lambda )\Gamma (n+1)}\left[ \frac{1}{4}\left( t^{2}-1\right) \right] ^{\frac{1}{4}-\frac{\lambda }{2}}P_{\lambda +n-\frac{1}{2}}^{\frac{1 }{2}-\lambda }(t), \label{a.44} \end{equation} and using the doubling formula (see Eq. (8.335.1), p. 938 in Ref. \cite {GradRyz} ) \begin{equation} \Gamma (2x)=\frac{2^{2x-1}}{\sqrt{\pi }}\Gamma (x)\Gamma \left( x+\frac{1}{2} \right) , \label{a.45} \end{equation} we can also express (\ref{a.43}) in the form: \begin{eqnarray} \Psi _{n}^{\gamma }(x) &=&\left[ \frac{\omega }{c}\frac{\left( \gamma +n+ \frac{1}{2}\right) n!}{\Gamma (2\gamma +n+1)}\right] ^{\frac{1}{2} }(2i)^{\gamma }\Gamma \left( \gamma +\frac{1}{2}\right) \left( 1+\frac{ \omega ^{2}}{c^{2}}x^{2}\right) ^{-\frac{1}{2}\left( \gamma +\frac{3}{2} \right) } \nonumber \\ &&\times C_{n}^{\gamma +\frac{1}{2}}\left( \frac{\frac{\omega }{c}x}{\sqrt{1+ \frac{\omega ^{2}}{c^{2}}x^{2}}}\right) . \label{a.46} \end{eqnarray} In the limit $c\rightarrow \infty $, $\gamma \rightarrow N=\frac{Mc^2}{\hbar \omega }$ and with the help of the formula ( see Eq. (8.328.1), p. 937 in Ref. 
\cite{GradRyz} ) \begin{equation} \stackunder{z\rightarrow \infty }{\lim }\frac{\Gamma (z+a)}{\Gamma (z)} e^{-a\ln z}=1, \label{a.47} \end{equation} we see that \begin{eqnarray} &&\stackunder{c\rightarrow \infty }{\lim }\gamma ^{\frac n2}\left[ \frac \omega c\frac{\left( \gamma +n+\frac 12\right) n!}{\Gamma (2\gamma +n+1)} \right] ^{\frac 12}(2)^\gamma \Gamma \left( \gamma +\frac 12\right) \nonumber \\ &=&\stackunder{c\rightarrow \infty }{\lim }\left[ \frac \omega {c\sqrt{\pi }} \frac{n!}{2^n}\right] ^{\frac 12}\left[ \frac{\Gamma \left( \gamma +\frac 12\right) }{\Gamma (\gamma )}\right] ^{\frac 12}=\left( \frac{M\omega }{\pi \hbar }\right) ^{\frac 14}\sqrt{\frac{n!}{2^n}}. \label{a.48} \end{eqnarray} By the use of the limit relation ( see Eq. (8.936.5), p. 1031 in Ref. \cite {GradRyz} ) \begin{equation} \stackunder{\lambda \rightarrow \infty }{\lim }\lambda ^{-\frac n2}C_n^{\frac \lambda 2}\left( t\sqrt{\frac 2\lambda }\right) =\frac{ 2^{-\frac n2}}{n!}H_n(t), \label{a.49} \end{equation} the wave functions of the harmonic oscillator in the non-relativistic approximation are naturally regained \begin{equation} \stackunder{c\rightarrow \infty }{\lim }\Psi _{n}^{\gamma }(x)=\left( \frac{ M\omega }{\pi \hbar }\right) ^{\frac{1}{4}}\frac{1}{\sqrt{2^{n}n!}}e^{-\frac{ M\omega }{2\hbar }x^{2}}H_{n}\left( \sqrt{\frac{M\omega }{\hbar }}x\right) , \label{a.50} \end{equation} where $H_{n}\left( \sqrt{\frac{M\omega }{\hbar }}x\right) $ is the Hermite polynomial of n$^{\text{th}}$ order. 
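The limit relation (\ref{a.49}) can also be checked numerically from the standard three-term recurrences of the Gegenbauer and Hermite polynomials; the following sketch (ours) does so at a large but finite $\lambda $:

```python
import math

def gegenbauer(n, lam, t):
    """C_n^lam(t) via k C_k = 2t(k + lam - 1) C_{k-1} - (k + 2 lam - 2) C_{k-2}."""
    c_prev, c = 1.0, 2.0 * lam * t
    if n == 0:
        return c_prev
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * t * (k + lam - 1.0) * c
                        - (k + 2.0 * lam - 2.0) * c_prev) / k
    return c

def hermite(n, t):
    """Physicists' Hermite polynomial via H_{k+1} = 2t H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * t
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * t * h - 2.0 * k * h_prev
    return h

# Check lambda^{-n/2} C_n^{lambda/2}(t sqrt(2/lambda)) ~ 2^{-n/2} H_n(t)/n!
lam, t = 1.0e6, 0.6
for n in range(5):
    lhs = gegenbauer(n, lam / 2.0, t * math.sqrt(2.0 / lam)) / lam ** (n / 2.0)
    rhs = hermite(n, t) / (2.0 ** (n / 2.0) * math.factorial(n))
    assert abs(lhs - rhs) < 1e-3 * max(1.0, abs(rhs))
```

The agreement improves as $\lambda $ grows, the error of the truncation being of order $1/\lambda $.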
\section{The (3+1)-dimensional special relativistic oscillator} The special relativistic harmonic oscillator in $(3+1)$ Minkowski space-time is simulated in the universal covering space-time (CAdS) of the anti-de Sitter space-time with a negative curvature $R=-12\frac{\omega ^{2}}{c^{2}}$ and a static metric of the form: \begin{equation} ds^2=\Lambda (r)c^2dt^2-\frac 1{\Lambda (r)}dr^2-r^2\left( d\theta ^2+\sin ^2\theta d\phi ^2\right) , \label{a.51} \end{equation} where \begin{equation} \Lambda (r)=1+\frac{\omega ^2}{c^2}r^2 \label{a.52} \end{equation} is chosen in order to impose the non-relativistic limit. The Lagrangian reads as: \begin{equation} L=-Mc\sqrt{\Lambda (r)-\frac{v^2}{c^2}+\frac{\omega ^2}{c^4}\frac{(\stackrel{ \rightarrow }{r}\stackrel{\rightarrow }{v})^2}{\Lambda (r)}} \label{a.53} \end{equation} and the classical Hamiltonian is given by \begin{equation} H^{2}=\Lambda (r)\left( M^{2}c^{4}+p^{2}c^{2}+\omega ^{2}(\stackrel{ \rightarrow }{r}\stackrel{\rightarrow }{p})^{2}\right) . \label{a.54} \end{equation} As in the one-dimensional case, to construct the quantum mechanical counterpart of (\ref{a.54}) we must take into account the ordering ambiguity of the position and momentum operators. Similarly to (\ref{a.5}), we are led to make the following substitutions: \begin{equation} \left\{ \begin{array}{c} x_{i}^{4}p_{i}^{2}\rightarrow -\hbar ^{2}\left( x_{i}^{4}\frac{\partial ^{2} }{\partial x_{i}^{2}}+4x_{i}^{3}\frac{\partial }{\partial x_{i}}+\alpha x_{i}^{2}\right) , \\ x_{i}^{2}p_{i}^{2}\rightarrow -\hbar ^{2}\left( x_{i}^{2}\frac{\partial ^{2} }{\partial x_{i}^{2}}+2x_{i}\frac{\partial }{\partial x_{i}}+\beta \right) , \\ x_{i}^{3}p_{i}\rightarrow -i\hbar \left( x_{i}^{3}\frac{\partial }{\partial x_{i}}+\frac{3}{2}x_{i}^{2}\right) , \\ x_{i}p_{i}\rightarrow -i\hbar \left( x_{i}\frac{\partial }{\partial x_{i}}+ \frac{1}{2}\right) . \end{array} \right.
\label{a.55} \end{equation} The Green's function $G(\stackrel{\rightarrow }{r^{\prime \prime }},t^{\prime \prime },\stackrel{\rightarrow }{r^{\prime }},t^{\prime })$ for the problem satisfies the Klein-Gordon equation \begin{equation} \left( \Box +U(r)\right) G(\stackrel{\rightarrow }{r}^{\prime \prime },t^{\prime \prime };\stackrel{\rightarrow }{r}^{\prime },t^{\prime })=-\frac 1{\hbar ^2c^2}\delta \left( \stackrel{\rightarrow }{r}^{\prime \prime }-\stackrel{\rightarrow }{r}^{\prime }\right) \delta \left( t^{\prime \prime }-t^{\prime }\right) , \label{a.56} \end{equation} where \begin{equation} \Box =\frac 1{c^2}\frac{\partial ^2}{\partial t^2}-\Lambda (r)\frac 1{r^2}\frac \partial {\partial r}r^2\frac \partial {\partial r}\Lambda (r)+\Lambda (r)\frac{\widehat{l}^2}{\hbar ^2r^2}, \label{a.57} \end{equation} ($\widehat{l}^2$ is the square of the orbital angular momentum operator) and $U(r)$ is the central potential \begin{equation} U(r)=\frac{\hbar ^2\omega ^2}{c^2}\left[ 4\beta -\alpha -\frac{M^2c^4}{\hbar ^2\omega ^2}-8+\left( \alpha +2\beta +\frac 72\right) \Lambda (r)\right] . 
\label{a.58} \end{equation} It follows that the Green's function $G(\stackrel{\rightarrow }{r^{\prime \prime }},t^{\prime \prime },\stackrel{\rightarrow }{r^{\prime }},t^{\prime })$ can be expanded into partial waves \cite{PeakIno} in spherical polar coordinates \begin{equation} G(\stackrel{\rightarrow }{r^{\prime \prime }},t^{\prime \prime },\stackrel{\rightarrow }{r^{\prime }},t^{\prime })=\frac 1{r^{\prime \prime }r^{\prime }}\stackunder{l=0}{\stackrel{\infty }{\sum }}G_l(r^{\prime \prime },t^{\prime \prime },r^{\prime },t^{\prime })Y_l^{m*}(\theta ^{\prime \prime },\phi ^{\prime \prime })Y_l^m(\theta ^{\prime },\phi ^{\prime }), \label{a.59} \end{equation} where the radial Green's function, expressed in Schwinger's integral representation \cite{Schwinger}, is \begin{equation} G_l(r^{\prime \prime },t^{\prime \prime },r^{\prime },t^{\prime })=\frac 1{2i\hbar c^2}\int_0^\infty d\lambda \left\langle r^{\prime \prime },t^{\prime \prime }\right| \exp \left[ \frac i\hbar \widehat{H}_l\lambda \right] \left| r^{\prime },t^{\prime }\right\rangle . \label{a.60} \end{equation} The integrand in Eq. (\ref{a.60}) is similar to the propagator of a harmonic oscillator which evolves in the time-like parameter $\lambda $ with the effective Hamiltonian \begin{equation} \widehat{H}_l=\frac 12\left[ -\Lambda (r)\widehat{P}_r^2\Lambda (r)+\frac{\widehat{P}_0^2}{c^2}-\hbar ^2l(l+1)\frac{\Lambda (r)}{r^2}+U(r)\right] . \label{a.61} \end{equation} To find the energy eigenvalues $E_{n_{r},l}$ and the wave functions $\Psi _{n_{r},l}(r)=r^{-1}\Phi _{n_{r},l}(r)$, we may evaluate (\ref{a.60}) by path integration. The effective Hamiltonian (\ref{a.61}) involves a centrifugal barrier which possesses a singularity at $r=0$, so that the discrete form of the expression (\ref{a.60}) is not defined due to a path collapse. 
To obtain a tractable and stable path integral, we introduce an appropriate regulating function ( following Kleinert \cite{Kleinert}) and write (\ref{a.60}) in the form: \begin{equation} G_l(r^{\prime \prime },t^{\prime \prime },r^{\prime },t^{\prime })=\frac 1{2i\hbar c^2}\int_0^\infty dSP_l(r^{\prime \prime },t^{\prime \prime },r^{\prime },t^{\prime };S), \label{a.62} \end{equation} where the transformed path integral is given in the canonical form by \begin{eqnarray} P_l(r^{\prime \prime },t^{\prime \prime },r^{\prime },t^{\prime };S) &=&f_R(r^{\prime \prime })f_L(r^{\prime })\left\langle r^{\prime \prime },t^{\prime \prime }\right| \exp \left[ \frac i\hbar Sf_L(r)\widehat{H} _lf_R(r)\right] \left| r^{\prime },t^{\prime }\right\rangle \nonumber \\ &=&f_R(r^{\prime \prime })f_L(r^{\prime })\int Dr(s)Dt(s)\int \frac{ DP_r(s)DP_0(s)}{(2\pi \hbar )^2} \nonumber \\ &&\times \exp \left\{ \frac i\hbar \int_0^Sds\left[ -P_r\stackrel{.}{r}+P_0 \stackrel{.}{t}+f_L(r)H_lf_R(r)\right] \right\} \nonumber \\ &=&f_R(r^{\prime \prime })f_L(r^{\prime })\stackunder{N\rightarrow \infty }{ \lim }\stackunder{n=1}{\stackrel{N}{\prod }}\left[ \int dr_ndt_n\right] \nonumber \\ &&\times \stackunder{n=1}{\stackrel{N+1}{\prod }}\left[ \int \frac{ d(P_r)_nd(P_0)_n}{(2\pi \hbar )^2}\right] \exp \left\{ \frac i\hbar \stackunder{n=1}{\stackrel{N+1}{\sum }}A_1^{\varepsilon _s}\right\} , \label{a.63} \end{eqnarray} with the short-time action \begin{eqnarray} A_1^{\varepsilon _s} &=&-(P_r)_n\triangle r_n+(P_0)_n\triangle t_n+\frac{ \varepsilon _s}2f_L(r_n)\left[ -\Lambda (r_n)\Lambda (r_{n-1})(P_r)_n^2+ \frac{(P_0)_n^2}{c^2}\right. \nonumber \\ &&\left. -\hbar ^2l(l+1)\frac{\Lambda (r_n)}{r_n^2}+U(r_n)\right] f_R(r_{n-1}), \label{a.64} \end{eqnarray} and \begin{equation} \varepsilon _{s}=\frac{S}{N+1}=\triangle s_{n}=\frac{\triangle \tau _{n}}{ f_{L}(r_{n})f_{R}(r_{n-1})};\qquad \triangle \tau _{n}=\varepsilon _{\tau }= \frac{\lambda }{N+1}. 
\label{a.65} \end{equation} The regulating function is defined as \cite{Kleinert} \begin{equation} f(r)=f_L(r)f_R(r)=f^{1-\lambda ^{\prime }}(r)f^{\lambda ^{\prime }}(r). \label{a.66} \end{equation} As in the $(1+1)$-dimensional case, by doing successively the $t_n$ and $ (P_0)_n$ integrations we arrive at \begin{equation} P_l(r^{\prime \prime },t^{\prime \prime },r^{\prime },t^{\prime };S)=\frac 1{2\pi \hbar }\int_{-\infty }^{+\infty }dE\exp \left[ -\frac i\hbar E(t^{\prime \prime }-t^{\prime })\right] P_l(r^{\prime \prime },r^{\prime };S), \label{a.67} \end{equation} where the invariant kernel $P_l(r^{\prime \prime },r^{\prime };S)$ under the change $E\rightarrow -E$ is given by \begin{eqnarray} P_{l}(r^{\prime \prime },r^{\prime };S) &=&f_{R}(r^{\prime \prime })f_{L}(r^{\prime })\stackunder{N\rightarrow \infty }{\lim }\stackunder{n=1}{ \stackrel{N}{\prod }}\left[ \int dr_{n}\right] \nonumber \\ &&\times \stackunder{n=1}{\stackrel{N+1}{\prod }}\left[ \int \frac{ d(P_{r})_{n}}{(2\pi \hbar )}\right] \exp \left\{ \frac{i}{\hbar }\stackunder{ n=1}{\stackrel{N+1}{\sum }}A_{2}^{\varepsilon _{s}}\right\} , \label{a.68} \end{eqnarray} with \begin{eqnarray} A_2^{\varepsilon _s} &=&-(P_r)_n\triangle r_n+\frac{\varepsilon _s}2f_L(r_n) \left[ -\Lambda (r_n)\Lambda (r_{n-1})(P_r)_n^2+\frac{E^2}{c^2}\right. \nonumber \\ &&\left. -\hbar ^2l(l+1)\frac{\Lambda (r_n)}{r_n^2}+U(r_n)\right] f_R(r_{n-1}). \label{a.69} \end{eqnarray} Substituting (\ref{a.67}) into (\ref{a.62}), we observe that the t-dependent term does not contain the variable $S$. 
Therefore, we can rewrite the partial Green's function (\ref{a.62}) in the form: \begin{equation} G_l(r^{\prime \prime },t^{\prime \prime },r^{\prime },t^{\prime })=\frac 1{2\pi \hbar }\int_{-\infty }^{+\infty }dE\exp \left[ -\frac i\hbar E(t^{\prime \prime }-t^{\prime })\right] G_l(r^{\prime \prime },r^{\prime }), \label{a.70} \end{equation} with \begin{equation} G_l(r^{\prime \prime },r^{\prime })=\frac 1{2i\hbar c^2}\int_0^\infty dSP_l(r^{\prime \prime },r^{\prime };S). \label{a.71} \end{equation} The path integration of the kernel $P_l(r^{\prime \prime },r^{\prime };S)$ can be performed for any splitting parameter $\lambda ^{\prime }$. However, to simplify the calculation, we prefer to work with the mid-point prescription by taking $\lambda ^{\prime }=\frac 12$. This can be justified by the fact that the final result is independent of this parameter. Then, by integrating with respect to the variables $(P_r)_n$, we find \begin{eqnarray} P_{l}(r^{\prime \prime },r^{\prime };S) &=&\frac{\left[ f(r^{\prime })f(r^{\prime \prime })\right] ^{\frac{1}{4}}}{\sqrt{\Lambda (r^{\prime })\Lambda (r^{\prime \prime })}}\stackunder{N\rightarrow \infty }{\lim } \stackunder{n=1}{\stackrel{N+1}{\prod }}\sqrt{\frac{1}{2i\pi \hbar \varepsilon _{s}}} \nonumber \\ &&\times \stackunder{n=1}{\stackrel{N}{\prod }}\left[ \int \frac{dr_{n}}{ \Lambda (r_{n})\sqrt{f(r_{n})}}\right] \exp \left\{ \frac{i}{\hbar } \stackunder{n=1}{\stackrel{N+1}{\sum }}A_{3}^{\varepsilon _{s}}\right\} \label{a.72} \end{eqnarray} with the short-time action in configuration space \begin{eqnarray} A_3^{\varepsilon _s} &=&\frac{\triangle r_n^2}{2\varepsilon _s\Lambda (r_n)\Lambda (r_{n-1})\sqrt{f(r_n)f(r_{n-1})}}+\frac{\varepsilon _s}2f(r_n) \left[ \frac{E^2}{c^2}\right. \nonumber \\ &&\left. -\hbar ^2l(l+1)\frac{\Lambda (r_n)}{r_n^2}+U(r_n)\right] . 
\label{a.73} \end{eqnarray} We now use the following space transformation: $r\rightarrow u,$ $r\in \left[ 0,\infty \right[ ,$ $u\in \left] -\infty ,\infty \right[ $ defined by \begin{equation} r=\frac{c}{\omega }e^{u}. \label{a.74} \end{equation} The appropriate regulating function is then defined by \begin{equation} f(r(u))=\frac{c^2}{4\omega ^2\cosh ^2u}. \label{a.75} \end{equation} By taking into account all the quantum corrections arising, of course, from the transformations (\ref{a.74}) and (\ref{a.75}), the Green's function (\ref{a.71}) can be written in a straightforward manner as follows: \begin{equation} G_l(r^{\prime \prime },r^{\prime })=\frac 1{4i\hbar \omega c\sqrt{\Lambda (r^{\prime \prime })\Lambda (r^{\prime })\cosh u^{\prime \prime }\cosh u^{\prime }}}\int_0^\infty dSP_l(u^{\prime \prime },u^{\prime };S), \label{a.76} \end{equation} with \begin{eqnarray} P_l(u^{\prime \prime },u^{\prime };S) &=&\int Du(s)\exp \left\{ \frac i\hbar \int_0^Sds\left[ \frac{\stackrel{.}{u}^2}2-\frac{\hbar ^2}4\left( \nu ^2+k^2\right. \right. \right. \nonumber \\ &&\left. \left. \left. +\left( \nu ^2-k^2\right) \tanh u\right) +\frac{\hbar ^2}8\frac{\frac{E^2}{\hbar ^2\omega ^2}+4\beta -\alpha -2}{\cosh ^2u}\right] \right\} , \label{a.77} \end{eqnarray} where $\nu ^2=N^2-2\beta -\alpha +\frac{11}4,$ $k=l+\frac 12$ and $N=Mc^2/\hbar \omega .$ This kernel is formally identical to that of the general Rosen-Morse (or general modified P\"oschl-Teller) potential studied recently \cite{JunIno,BohJun,Grosc,KleiMus}. The Green's function associated to this potential is \begin{equation} G(u^{\prime \prime },u^{\prime };E_{RM^{\prime }})=\int_{0}^{\infty }dSP_{l}(u^{\prime \prime },u^{\prime };S). 
\label{a.78} \end{equation} As is shown by Kleinert \cite{Kleinert} , the Green's function of the general Rosen-Morse potential is related to the fixed-energy amplitude for the mass point subjected to an angular barrier near the surface of a sphere in $D=4$ dimensions by \begin{eqnarray} G(u^{\prime \prime },u^{\prime };E_{RM^{\prime }}) &=&\frac{1}{\sqrt{\sin \theta ^{\prime \prime }\sin \theta ^{\prime }}}G(\theta ^{\prime \prime },\theta ^{\prime };E_{PT^{\prime }}) \nonumber \\ &=&-\frac{i}{\hbar }\frac{\Gamma (M_{1}-L_{E})\Gamma (L_{E}+M_{1}+1)}{\Gamma (M_{1}+M_{2}+1)\Gamma (M_{1}-M_{2}+1)} \nonumber \\ &&\times \left( \frac{1+\tanh u^{\prime }}{2}\right) ^{(M_{1}-M_{2})/2}\left( \frac{1-\tanh u^{\prime }}{2}\right) ^{(M_{1}+M_{2})/2} \nonumber \\ &&\times \left( \frac{1-\tanh u^{\prime \prime }}{2}\right) ^{(M_{1}+M_{2})/2}\left( \frac{1+\tanh u^{\prime \prime }}{2}\right) ^{(M_{1}-M_{2})/2} \nonumber \\ &&\times F\left( M_{1}\!-\!L_{E},L_{E}\!+\!M_{1}\!+\!1;M_{1}\!-\!M_{2}\!+\!1; \frac{1+\tanh u^{\prime }}{2}\right) \nonumber \\ &&\times F\left( M_{1}\!-\!L_{E},L_{E}\!+\!M_{1}\!+\!1;M_{1}\!+\!M_{2}\!+\!1; \frac{1-\tanh u^{\prime \prime }}{2}\right) , \nonumber \\ && \label{a.79} \end{eqnarray} with $\tanh u=-\cos \theta ,\theta \in \left( 0,\pi \right) ,u\in \left] -\infty ,+\infty \right[ $ and $u^{\prime \prime }>u^{\prime }.$ Here, the mass point is taken equal to unity. In addition, we set \begin{equation} \left\{ \begin{array}{c} L_E=-\frac 12+\left( \frac 1{16}+\frac{2E_{PT^{\prime }}}\hbar \right) ^{\frac 12}, \\ E_{PT^{\prime }}=\frac{\hbar ^2}8\left( \frac{E^2}{\hbar ^2\omega ^2}+4\beta -\alpha -\frac 54\right) , \end{array} \right. \label{a.80} \end{equation} and if we choose \begin{equation} \left\{ \begin{array}{c} M_1=\frac 12\left( \sqrt{N^2-2\beta -\alpha +\frac{11}4}+l+\frac 12\right) , \\ M_2=\frac 12\left( \sqrt{N^2-2\beta -\alpha +\frac{11}4}-l-\frac 12\right) , \end{array} \right. 
\label{a.81} \end{equation} the boundary conditions for the wave functions appearing in (\ref{a.79}) will be satisfied. The equivalence between the relativistic harmonic oscillator interaction in $ (3+1)$ Minkowski space-time and a free relativistic particle in CAdS is characterized by the following restriction on the parameters $\alpha $ and $ \beta $: \begin{equation} \alpha =8\xi ,\qquad \beta =2\xi +\frac{1}{4}. \label{a.82} \end{equation} Inserting (\ref{a.79}) into (\ref{a.76}), we get, for the radial Green's function, the closed form: \begin{eqnarray} G_{l}(r^{\prime \prime },r^{\prime }) &=&-\frac{\Gamma (M_{1}-L_{E})\Gamma (L_{E}+M_{1}+1)}{4\hbar ^{2}\omega c\Gamma (M_{1}+M_{2}+1)\Gamma (M_{1}-M_{2}+1)} \nonumber \\ &&\times \left( \Lambda (r^{\prime \prime })\Lambda (r^{\prime })\cosh u^{\prime \prime }\cosh u^{\prime }\right) ^{-\frac{1}{2}} \nonumber \\ &&\times \left( \frac{1+\tanh u^{\prime }}{2}\right) ^{(M_{1}-M_{2})/2}\left( \frac{1-\tanh u^{\prime }}{2}\right) ^{(M_{1}+M_{2})/2} \nonumber \\ &&\times \left( \frac{1-\tanh u^{\prime \prime }}{2}\right) ^{(M_{1}+M_{2})/2}\left( \frac{1+\tanh u^{\prime \prime }}{2}\right) ^{(M_{1}-M_{2})/2} \nonumber \\ &&\times F\left( M_{1}-L_{E},L_{E}+M_{1}+1;M_{1}-M_{2}+1;\frac{1+\tanh u^{\prime }}{2}\right) \nonumber \\ &&\times F\left( M_{1}-L_{E},L_{E}+M_{1}+1;M_{1}+M_{2}+1;\frac{1-\tanh u^{\prime \prime }}{2}\right) . \nonumber \\ && \label{a.83} \end{eqnarray} The poles of (\ref{a.83}) are all contained in the first $\Gamma $ function in the numerator, \begin{equation} M_1-L_E=-n_r. \label{a.84} \end{equation} Converting this into energy by using Eqs. (\ref{a.80}), (\ref{a.81}) and ( \ref{a.84}) yields \begin{equation} E_{n_{r},l}=\beta _{n_{r},l}\hbar \omega , \label{a.85} \end{equation} with \begin{equation} \beta _{n_{r},l}=2n_{r}+l+\frac{1}{2}\sqrt{9+4N^{2}-48\xi }+\frac{3}{2}, \label{a.86} \end{equation} where $n_{r}$ is the radial quantum number and $l$ the angular momentum. 
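The non-relativistic behaviour of the spectrum (\ref{a.85})--(\ref{a.86}) can be verified numerically. Below is a small sketch (our own illustration, in units $M=\hbar =\omega =1$ and with the illustrative choice $\xi =0$; the function name is ours):

```python
from math import sqrt

def energy(n_r, l, c, xi=0.0):
    """E_{n_r,l} = beta_{n_r,l} * hbar * omega, Eqs. (a.85)-(a.86), with M = hbar = omega = 1."""
    N = c ** 2                      # N = M c^2 / (hbar omega)
    beta = 2 * n_r + l + 0.5 * sqrt(9.0 + 4.0 * N ** 2 - 48.0 * xi) + 1.5
    return beta

# For large c the spectrum approaches (2 n_r + l + 3/2) + c^2,
# i.e. the non-relativistic harmonic-oscillator levels plus the rest energy:
c = 1e3
for n_r, l in [(0, 0), (1, 2), (3, 1)]:
    print(n_r, l, energy(n_r, l, c) - (2 * n_r + l + 1.5 + c ** 2))
```

The printed differences are of order $1/N$, in agreement with the expansion $\tfrac12\sqrt{4N^2+9-48\xi }=N+O(1/N)$.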
Here, the parameter $\xi $ is subject to the condition $\xi <\frac{1}{48} (9+4N^{2})$. In the non-relativistic approximation \begin{equation} E_{n_r,l}\stackunder{c\rightarrow \infty }{\rightarrow }\left( 2n_r+l+\frac 32\right) \hbar \omega +Mc^2, \label{a.87} \end{equation} where the first term represents the well-known energy spectrum of the three-dimensional non-relativistic harmonic oscillator. As in the one-dimensional case, the radial wave functions can be found by approximation near the poles $M_1-L_E\approx -n_r$: \begin{equation} \Gamma (M_{1}-L_{E})\approx \frac{(-1)^{n_{r}}}{n_{r}!}\frac{1}{M_{1}-L_{E}+n_{r}}=\frac{(-1)^{n_{r}+1}}{n_{r}!}\frac{4\hbar ^{2}\omega ^{2}\beta _{n_{r},l}}{E^{2}-E_{n_{r},l}^{2}}. \label{a.88} \end{equation} Using this behavior and taking into consideration the Gauss transformation formula (see Eq. (9.131.2), p. 1043 in Ref. \cite{GradRyz}) \begin{eqnarray} F(a,b,c;z) &=&\frac{\Gamma (c)\Gamma (c\!-\!a\!-\!b)}{\Gamma (c\!-\!a)\Gamma (c\!-\!b)}F(a,b,a\!+\!b\!-\!c\!+\!1;1\!-\!z)+\frac{\Gamma (c)\Gamma (a\!+\!b\!-\!c)}{\Gamma (a)\Gamma (b)} \nonumber \\ &&\times (1\!-\!z)^{c-a-b}F(c\!-\!a,c\!-\!b,c\!-\!a\!-\!b\!+\!1;1\!-\!z), \label{a.89} \end{eqnarray} and noting that the second term of the latter vanishes because the Euler function $\Gamma (a)$ is infinite ($a=-n_{r}\leq 0$), we can write Eq. 
( \ref{a.83}) as: \begin{equation} G_{l}(r^{\prime \prime },r^{\prime })=\stackunder{n_{r}=0}{\stackrel{\infty }{\sum }}\frac{\Phi _{n_{r},l}(r^{\prime \prime })\Phi _{n_{r},l}^{\ast }(r^{\prime })}{E^{2}-E_{n_{r},l}^{2}}, \label{a.90} \end{equation} where \begin{eqnarray} \Phi _{n_{r},l}(r) &=&\left[ \frac{2\beta _{n_{r},l}\Gamma \left( \beta _{n_{r},l}-n_{r}\right) \Gamma \left( n_{r}+l+\frac{3}{2}\right) }{ n_{r}!\Gamma \left( \beta _{n_{r},l}-n_{r}-\frac{1}{2}\right) }\right] ^{ \frac{1}{2}}\frac{1}{\Gamma \left( l+\frac{3}{2}\right) }\left( \frac{\omega }{c}\right) ^{l+\frac{3}{2}}r^{l} \nonumber \\ &&\!\!\!\!\times \left( 1\!+\!\frac{\omega ^{2}}{c^{2}}r^{2}\right) ^{n_{r}-\beta _{n_{r},l}/2}F\left( \!\!-n_{r},\beta _{n_{r},l}-n_{r};l+\frac{ 3}{2};\frac{\frac{\omega ^{2}}{c^{2}}r^{2}}{1+\frac{\omega ^{2}}{c^{2}}r^{2}} \!\!\right) \label{a.91} \end{eqnarray} are the radial wave functions. By substituting (see Eq. (9.131.1), p. 1043 in Ref.\cite{GradRyz} ) \begin{equation} F(\alpha ,\beta ;\gamma ;z)=(1-z)^{-\alpha }F(\alpha ,\gamma -\beta ;\gamma ;\frac z{z-1}) \label{a.92} \end{equation} into (\ref{a.91}) and using the connecting formula (see Eq. (8.962.1), p. 1036 in Ref.\cite{GradRyz} ) \begin{equation} P_n^{(\alpha ,\beta )}(x)=\frac{\Gamma (n+\alpha +1)}{n!\Gamma (\alpha +1)} F\left( -n,n+\alpha +\beta +1;\alpha +1;\frac{1-x}2\right) , \label{a.93} \end{equation} we can also express (\ref{a.91}) in the form \begin{eqnarray} \Phi _{n_r,l}(r) &=&\left[ \frac{2\beta _{n_r,l}n_r!\Gamma \left( \beta _{n_r,l}-n_r\right) }{\Gamma \left( n_r+\gamma +1\right) \Gamma \left( n_r+l+\frac 32\right) }\right] ^{\frac 12}\left( \frac \omega c\right) ^{l+\frac 32}r^l \nonumber \\ &&\times \left( 1+\frac{\omega ^2}{c^2}r^2\right) ^{-\frac 12\beta _{n_r,l}}P_{n_r}^{\left( l+\frac 12,-\beta _{n_r,l}\right) }\left( 1+2\frac{ \omega ^2}{c^2}r^2\right) . 
\label{a.94} \end{eqnarray} By using the following limiting relations: \begin{equation} \left\{ \begin{array}{c} \stackunder{\gamma \rightarrow \infty }{\lim }\Gamma (n+\gamma +1)= \stackunder{\gamma \rightarrow \infty }{\lim }\gamma ^{n+1}\Gamma (\gamma ), \\ \stackunder{\gamma \rightarrow \infty }{\lim }\left( 1+\frac{\omega ^2}{c^2} r^2\right) ^{-\frac 12\left( l+\gamma +\frac 32\right) }=e^{-\frac{M\omega }{2\hbar }r^2}, \\ \stackunder{\gamma \rightarrow \infty }{\lim }F\left( \alpha ,\gamma ;\nu ;\frac z\gamma \right) =F(\alpha ,\nu ;z), \end{array} \right. \label{a.95} \end{equation} we obtain the well-known radial wave functions of the non-relativistic harmonic oscillator \begin{eqnarray} \Phi _{n_r,l}(r) &=&\left[ \frac{2\Gamma \left( n_r+l+\frac 32\right) }{n_r!} \right] ^{\frac 12}\left( \frac{M\omega }\hbar \right) ^{\frac 12\left( l+\frac 32\right) }\frac{r^l}{\Gamma \left( l+\frac 32\right) } \nonumber \\ &&\times \exp \left[ -\frac{M\omega }{2\hbar }r^2\right] F\left( -n_r,l+\frac 32;\frac{M\omega }\hbar r^2\right) . \label{a.96} \end{eqnarray} \section{Conclusion} In this paper we have dealt with special relativistic harmonic oscillators in $(1+1)$- and $(3+1)$-dimensional Minkowski space-time modeled by a free relativistic particle in the universal covering space-time of the anti-de Sitter space-time. The explicit path integral solution, as presented above, provides a valuable alternative to the solution obtained through the Klein-Gordon equation. After formulating the problem in terms of symmetric and general Rosen-Morse potentials for the one- and three-dimensional relativistic oscillators, respectively, and by imposing a restriction on the parameters $\alpha $ and $\beta $ in such a way that the system under consideration is equivalent to a free relativistic particle in CAdS, the Green's functions are obtained in closed form. 
The energy spectrum and the properly normalized wave functions are extracted from the poles and the residues at the poles of the Green's function, respectively. In the flat-space limit $(R\rightarrow 0)$, that is to say in the non-relativistic approximation $(c\rightarrow \infty )$, the usual harmonic oscillator spectrum and the corresponding normalized wave functions are regained. \end{document}
\begin{document} \title{A Proof of the Conjecture by Carpentier-De Sole-Kac} \begin{center} 0. Introduction \end{center} Let $\mathcal{K}$ be a differential field with derivation $\partial$, and let $R$ be a differential subring of $\mathcal{K}$. We consider an $n \times n$ matrix $M$, whose elements lie in the ring $R[\partial]$. Several useful notions can be defined for such a matrix, including the following: \newline\newline For a matrix $A$ with entries in $\mathcal{K}[\partial]$, the Dieudonn\'e determinant has the form $det A = det_1 A\lambda^d$, where $det_1 A \in \mathcal{K}$, $\lambda$ is an indeterminate, and $d$ is an integer. Some of its characterizing properties are that $det A \cdot det B = det AB$, that $det A$ changes sign upon permuting two rows of $A$, and that subtracting $h$ times $A$'s $i$th row from its $j$th row leaves $det A$ unchanged, when $h \in \mathcal{K}[\partial]$ and $i \neq j$. Furthermore, if $A$ is upper triangular, then $det_1 A$ is the product of the leading coefficients of the diagonal entries, and $d$ is the sum of their orders. In such a case, if any of the diagonal entries are 0, then these evaluate to $det_1 A=0$ and $d=-\infty$. That a determinant with these properties exists is shown in a more general context in [Die43]. \newline\newline \textbf{Definition 0.1.} The total order of a matrix $A$ is $$ tord(A) = \max_{\sigma \in S_n}\sum_{i=1}^{n}ord(A_{i,\sigma (i)}) $$ where $S_n$ denotes the group of permutations of $\{1,2,\dots n\}$. We can then also define the degeneracy degree of $A$ by $$ dd(A) = tord(A) - d(A) $$ where $d(A)$ is the order of $det A$. 
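For small matrices, $tord(A)$ can be computed directly from Definition 0.1 by brute force over permutations. The sketch below is our own illustration (the encoding of zero entries by $-\infty$ follows the convention $ord(0)=-\infty$):

```python
from itertools import permutations

def tord(order):
    """Total order of Definition 0.1: order[i][j] = ord(A_ij),
    with float('-inf') encoding a zero entry."""
    n = len(order)
    return max(sum(order[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

# Example: A = [[d, d^2], [1, d^3]] has order matrix [[1, 2], [0, 3]]
print(tord([[1, 2], [0, 3]]))   # 4, attained by the identity permutation
```

This exhaustive maximum is exponential in $n$; it is only meant to make the definition concrete.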
\newline\newline \textbf{Definition 0.2.} A system of integers $(N_1,N_2,\dots N_n,h_1,h_2,\dots,h_n)$ is called a majorant of $A$ if for all $i,j \in \{1,2,\dots n\}$ $$ ord(A_{ij})\leq N_j - h_i $$ Given a matrix $A$ and a majorant of that matrix, one can associate a characteristic matrix $\bar{A}(\lambda)$ to the majorant by the equation $$ \bar{A}_{ij}(\lambda) = A_{ij;N_j-h_i} \lambda^{N_j - h_i} $$ where $A_{ij;N_j-h_i}$ is the coefficient of $\partial^{N_j-h_i}$ in $A_{ij}$. We also note that for any majorant $$ \sum_{i=1}^{n}(N_i-h_i) \geq tord(A) $$ We call a majorant optimal when equality holds in the above statement. The following theorem of [CDSK12] follows from the results in [Huf65]. \newline\newline \textbf{Theorem 0.3.} Let $A$ be a matrix with elements in $\mathcal{K}[\partial]$ and let $det A \neq 0$. Then: \begin{enumerate}[label = \roman*.] \item $dd(A) \geq 0$ \item there exists an optimal majorant of A \item if $dd(A) \geq 1$, then $det(\bar{A}(\lambda)) = 0$ for any majorant \item if $dd(A) = 0$, then $det(\bar{A}(\lambda)) = 0$ for any majorant which is not optimal, and $det(\bar{A}(\lambda)) = det A$ for any majorant which is optimal \end{enumerate} It is obvious that $det_1 A \in R$ if $dd(A) = 0$, and it was shown in [CDSK12] that $det_1 A$ always lies in the integral closure of $R$ in $\mathcal{K}$. However, in general $det_1 A \notin R$; there is a counterexample in [CDSK12] with $dd(A) = 2$. It was conjectured in [CDSK12] that $det_1 A \in R$ when $dd(A) = 1$. In the present paper this conjecture is proved. \begin{center} 1. Proof of the Carpentier-De Sole-Kac conjecture. \end{center} \textbf{Proposition 1.1.} Let $\mathcal{K}$ be a differential field, and let $R$ be a differential subring of $\mathcal{K}$. Now, let $M$ be a matrix whose elements lie in $R[\partial]$, and let $D = det_1 M$. If $dd(M) = 1$, then $D \in R$. \newline\newline Proof. 
First, note that we may increase the order of each term in a given row or column by either left or right multiplying by a matrix of the form $diag(1\: \ldots 1\: \partial \: 1\: \ldots 1)$. This operation results in a matrix whose entries are still in $R[\partial]$ and also leaves the coefficient of the determinant unchanged. Furthermore, it increases both the total order and the degree of the determinant by 1, leaving the degeneracy degree unchanged. Next, we would like to extract an optimal majorant $N_1, N_2,\ldots N_n, h_1,h_2,\ldots h_n$. Both the definition and some of its basic properties can be found in [CDSK12, Def. 4.6] and [CDSK12, Thm. 4.7]. A majorant exists as long as $D\neq 0$, which we may assume, since otherwise $D=0$ lies in $R$ anyway. Now, define $N$ to be the largest of all the $N_i$ and $h$ the minimum of the $h_i$. Note that when we use the process described earlier to increase the degrees of the elements of column $i$ by 1, we can get an optimal majorant for the new matrix by simply taking the old majorant and increasing $N_i$ by 1. Similarly, if we increase the degrees in row $i$ by 1, then the new majorant is just the old majorant, with $h_i$ decreased by 1. We can now use this to create a new matrix $M'$ as follows. Increase the degrees in column $i$ by 1 a total of $N - N_i$ times, and increase the degrees in row $i$ by 1 a total of $h_i - h$ times. Then, $M'$ will have a determinant coefficient of $D$, degeneracy degree 1, and an optimal majorant of $N,N,\ldots N,h,h,\ldots h$. From this optimal majorant, we may extract two properties of $M'$. First, every entry has degree less than or equal to $N-h$. Also, the total order is $n(N-h)$, found by summing over the majorant. These two facts together imply that every row must contain at least one term of degree exactly $N-h$. In fact, they show something stronger, but this is all we will need. We now know that the largest degree term has degree $N-h$. 
We can now express the matrix $M'$ in the form $A\partial^{N-h} + B\partial^{N-h-1} + \ldots$, where $A$ is the leading coefficient matrix of terms of degree $N-h$, $B$ contains the terms of degree $N-h-1$, and so on. Now, we know that since $M'$ is degenerate, its characteristic matrix has determinant 0 (see [CDSK12, Thm. 4.7]). In this case, the characteristic matrix is exactly $A$. Letting the rows of $A$ be called $A_1,A_2,\ldots A_n$, the fact that $A$ has determinant 0 implies a linear relation amongst the rows with coefficients in the fraction field of $R$. We may clear denominators to get a relation with coefficients strictly in the ring $R$. $$c_1A_1 + c_2A_2 + \ldots + c_nA_n = 0$$ Next, multiply the first row of $M'$ by $c_1$ via the use of an elementary matrix. From the total order and degeneracy of $M'$, we can see that the determinant of this product is $c_1D\lambda^{n(N-h)-1}$. We can further alter the matrix by next subtracting $c_i$ times the $i$th row from the first row, for each $i\neq 1$. This operation leaves the determinant unchanged, and yields a new matrix we call $M''$. We express $M''$ as a sum similar to the one we used before, so that $M'' = A'\partial^{N-h} + B'\partial^{N-h-1} + \ldots$ for new matrices $A'$ and $B'$. These matrices take on the following forms: $$ A'= \begin{bmatrix} 0\\ A_2\\ A_3\\ \vdots\\ A_n\\ \end{bmatrix} \qquad\qquad B'= \begin{bmatrix} c_1B_1 + c_2B_2 + \ldots + c_nB_n\\ B_2\\ B_3\\ \vdots\\ B_n\\ \end{bmatrix} $$ We now note that the total order of $M''$ is at least $n(N-h)-1$, the order of its determinant. On the other hand, $N,N,\ldots N,h+1,h,h,\ldots h$ is a majorant, which implies that not only is it an optimal majorant, but that $M''$ is non-degenerate, and thus that its determinant is the determinant of the characteristic matrix [CDSK12, Thm. 4.7]. To finish the proof, we now show that the determinant of the characteristic matrix is a multiple of $c_1$. 
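This divisibility claim can be sanity-checked on a small integer instance before it is proved; in the sketch below the rows $A_i$, $B_i$ and the coefficients $c_i$ are purely illustrative data satisfying the linear relation $c_1A_1+c_2A_2+c_3A_3=0$:

```python
def det3(m):
    """Exact determinant of a 3x3 integer matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Illustrative data with c1*A1 + c2*A2 + c3*A3 = 0 entrywise:
c1, c2, c3 = 2, 5, 7
A1, A2, A3 = (-6, -11, -16), (1, 3, 5), (1, 1, 1)
assert all(c1 * x + c2 * y + c3 * z == 0 for x, y, z in zip(A1, A2, A3))

B2, B3 = (4, 9, 2), (3, 0, 8)   # arbitrary "lower order" rows

# det([c_i B_i; A2; A3]) is a multiple of c1 for i = 2, 3:
for ci, Bi in ((c2, B2), (c3, B3)):
    top = tuple(ci * b for b in Bi)
    print(det3([top, A2, A3]) % c1)   # 0 in both cases
```

The row operations of the proof explain the observation: each such determinant equals $-c_1\det([B_i;A_1;A_3])$ up to sign, hence is divisible by $c_1$ over the integers.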
As this is equal to $c_1D$, we can then apply the cancellation law to show that $D$ lies in the ring $R$. The characteristic matrix is $$ M''_{char} = \begin{bmatrix} c_1B_1 + c_2B_2 + \ldots + c_nB_n\\ A_2\\ A_3\\ \vdots\\ A_n\\ \end{bmatrix} $$ Since the determinant is linear in the first row, we can set the determinant of $M''_{char}$ equal to $det(M''_1) + det(M''_2) + \ldots + det(M''_n)$, where $$ M''_i = \begin{bmatrix} c_iB_i\\ A_2\\ A_3\\ \vdots\\ A_n\\ \end{bmatrix} $$ $M''_1$ has a first row that is a multiple of $c_1$, so its determinant is already a multiple of $c_1$. We now look at the other $M''_i$: since we can multiply by elementary matrices, we can divide out the factor of $c_i$ from the top row and multiply the $i$th row by $c_i$, leaving the determinant unaffected. Next, for each $j\neq 1,i$, add $c_j$ times row $j$ to row $i$. The resulting row $i$ can then be simplified using our earlier linear relation and becomes a multiple of $c_1$. Thus, the determinant of $M''_i$ is a multiple of $c_1$, as desired. Below, these last steps are written out explicitly: $$ \begin{bmatrix} c_iB_i\\ A_2\\ A_3\\ \vdots\\ A_n\\ \end{bmatrix} \rightarrow \begin{bmatrix} B_i\\ A_2\\ A_3\\ \vdots\\ c_iA_i\\ \vdots\\ A_n\\ \end{bmatrix} \rightarrow \begin{bmatrix} B_i\\ A_2\\ A_3\\ \vdots\\ c_2A_2+c_3A_3+\ldots+c_nA_n\\ \vdots\\ A_n\\ \end{bmatrix} \rightarrow \begin{bmatrix} B_i\\ A_2\\ A_3\\ \vdots\\ -c_1A_1\\ \vdots\\ A_n\\ \end{bmatrix} $$ \newline\newline \begin{center} References \end{center} \begin{flushleft} [CDSK12] S. Carpentier, A. De Sole, V.G. Kac, \textit{Some Algebraic Properties of Differential Operators}, arXiv:1201.1992v1 [Die43] J. Dieudonn\'e, \textit{Les d\'eterminants sur un corps non commutatif}, Bull. Soc. Math. France \textbf{71}, (1943), 27-45. [Huf65] G. Hufford, \textit{On the characteristic matrix of a matrix of differential operators}, J. Differential Equations \textbf{1}, (1965), 27–38. \end{flushleft} \end{document}
\begin{document} \author{Luca Minotti \thanks{Universit\`a di Pavia. email: \textsf{[email protected]}. } } \title{Visco-Energetic solutions to one-dimensional rate-independent problems} \begin{abstract} Visco-Energetic solutions of rate-independent systems (recently introduced in \cite{MinSav16}) are obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation is reinforced by a viscous correction $\delta$, typically a quadratic perturbation of the dissipation distance. Like Energetic and Balanced Viscosity solutions, they provide a variational characterization of rate-independent evolutions, with an accurate description of their jump behaviour. \par In the present paper we study Visco-Energetic solutions in the one-dimensional case and we obtain a full characterization for a broad class of energy functionals. In particular, we prove that they exhibit a sort of intermediate behaviour between Energetic and Balanced Viscosity solutions, which can be finely tuned according to the choice of the viscous correction $\delta$. \end{abstract} \tableofcontents \section{Introduction} Rate-independent problems occur in several contexts. We refer the reader to the recent monograph \cite{Mielke-Roubicek15} for a survey of rate-independent modeling and analysis in a wide variety of applications. The analytical theory of rate-independent evolutions encounters some mathematical challenges, which are apparent even in the simplest example, the \emph{doubly nonlinear} differential inclusion \begin{equation} \label{DN} \partial \Psi(u'(t))+{\mathrm D} {\ensuremath{\mathcal E}}(t,u(t))\ni 0\quad \text{in $X^*$}\quad\text{ for a.a. $t\in(a,b)$}. 
\tag{DN} \end{equation} Here $X^*$ is the dual of a finite-dimensional linear space, ${\mathrm D}{\ensuremath{\mathcal E}}$ is the (space) differential of a time-dependent energy functional ${\ensuremath{\mathcal E}}\in {\mathrm C}^1([a,b]\times X;\mathbb{R})$ and $\Psi: X\rightarrow [0,+\infty)$ is a convex and nondegenerate dissipation potential, hereafter supposed \emph{positively homogeneous of degree 1}. \par It is well known that if the energy ${\ensuremath{\mathcal E}}(t,\cdot)$ is not strictly convex, one cannot expect the existence of an absolutely continuous solution to \eqref{DN}, so that the natural space for candidate solutions $u$ is $\mathrm{BV}([a,b];X)$. This fact has motivated the development of various weak formulations of \eqref{DN}, which should also take into account the behaviour of $u$ at jump points. \paragraph{Energetic solutions.} The first is the notion of \emph{Energetic solutions}, \cite{Mielke-Theil-Levitas02,Mielke-Theil04,Mainik-Mielke05}. For the simplified rate-independent evolution \eqref{DN}, Energetic solutions are curves $u:[a,b]\to X$ with bounded variation that are characterized by two variational conditions, called \emph{stability} (${\mathrm S}_\Psi$) and \emph{energy balance} (${\mathrm E}_\Psi$): \begin{equation} \label{en-stability} {\ensuremath{\mathcal E}}(t,u(t))\le {\ensuremath{\mathcal E}}(t,z)+\Psi(z-u(t))\quad\text{for every $z\in X$}, \tag{${\mathrm S}_\Psi$} \end{equation} \begin{equation} \label{en-energy-balance} {\ensuremath{\mathcal E}}(t,u(t))+\mathrm{Var}_\Psi(u;[a,t])={\ensuremath{\mathcal E}}(a,u(a))+\int_a^t\partial_t{\ensuremath{\mathcal E}}(s,u(s))\,{\mathrm d} s, \tag{${\mathrm E}_\Psi$} \end{equation} where $\mathrm{Var}_\Psi$ is the pointwise total variation with respect to $\Psi$ (see \eqref{eq:total-variation} in section \ref{sec:2} for the precise definition). 
\par One of the strongest features of the energetic approach is the possibility to construct Energetic solutions by solving the \emph{time Incremental Minimization Scheme} \begin{equation} \label{IMS} \min_{U\in X} \mathcal{E}(t_\tau^n,U)+\Psi\big(U-U_\tau^{n-1}\big). \tag{$\mathrm{IM}_\Psi$} \end{equation} If $\mathcal{E}$ has compact sublevels then for every ordered partition $\tau=\{t^0_\tau=a,t^1_\tau,\dots,t^{N-1}_\tau,t^N_\tau=b\}$ of the interval $[a,b]$ with variable time step $\tau^n:=t^n_\tau-t^{n-1}_\tau$ and for every initial choice $U^0_\tau= u(a)$ we can construct by induction an approximate sequence $(U^n_\tau)_{n=0}^N$ solving \eqref{IMS}. If $\overline U_\tau$ denotes the left-continuous piecewise constant interpolant of $(U^n_\tau)_n$, then the family of discrete solutions $\overline U_\tau$ has limit curves with respect to pointwise convergence as the maximum of the step sizes $|\tau|=\max_n \tau^n$ vanishes, and every limit curve $u$ is an Energetic solution. \par Consider for instance the $1$-dimensional example when the energy has the form \begin{equation} \label{1d-energy} \mathcal{E}(t,u):=W(u)-\ell(t)u\quad\text{for a double-well potential such as $W(u)=(u^2-1)^2$}. \end{equation} When the loading $\ell \in \mathrm{C}^1([a,b])$ is strictly increasing, $\Psi(v):=\alpha |v|$ with $\alpha>0$, and $u(a)$ is chosen carefully, it is possible to prove, \cite{Rossi-Savare13}, that an Energetic solution $u$ is an increasing selection of the differential inclusion \begin{equation} \label{eq:23} \alpha+\partial W^{**}(u(t))\ni\ell(t)\quad\text{for every $t\in [a,b]$}, \end{equation} where $W^{**}$ is the convex envelope $W^{**}(u)=((u^2-1)_+)^2$.
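The scheme \eqref{IMS} for the double-well example can be simulated by brute force. A minimal sketch (all names, the grid, and the parameter values are our own illustrative choices, not from the paper); the resulting discrete solution is nondecreasing and jumps ``early'', in accordance with \eqref{eq:23}:

```python
import numpy as np

# Minimal sketch (illustrative): solve the time Incremental Minimization
# Scheme IM_Psi for E(t,u) = W(u) - l(t)*u with W(u) = (u^2-1)^2 and
# Psi(v) = alpha*|v|, minimizing at each step over a fine grid.

def energetic_steps(load, ts, u0, alpha=0.1,
                    grid=np.linspace(-2.0, 2.0, 4001)):
    W = (grid**2 - 1.0)**2
    us = [u0]
    for t in ts[1:]:
        total = W - load(t)*grid + alpha*np.abs(grid - us[-1])  # E + Psi
        us.append(grid[np.argmin(total)])                       # global minimizer
    return np.array(us)
```

With an increasing load $\ell(t)=t$ and $u_0=-1$, the discrete solution stays near the left well and jumps to the right well as soon as the absolute minimum of $u\mapsto W(u)-(\ell(t)-\alpha)u$ moves there.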
\begin{figure} \caption{Energetic solution for a double-well energy $W$ with an increasing load $\ell$.} \end{figure} In this context, the solution $u$ jumps when the so-called \emph{Maxwell rule} is satisfied: \begin{equation} \int_{u(t-)}^{u(t+)}\Big( W'(r)-\ell(t)+\alpha \Big)\,\mathrm{d} r=0. \end{equation} The latter evolution mode prescribes that for all $t\in [a,b]$, the function $u(t)$ only attains \emph{absolute minima} of the function $u\mapsto W(u)-(\ell(t)-\alpha)u$. This corresponds to a convexification of $W$ and causes the system to jump ``early''. \paragraph{Balanced Viscosity (BV) solutions.} The global stability condition \eqref{en-stability} may lead the system to change instantaneously in a very drastic way, jumping into far apart energetic configurations. In order to obtain a formulation where local effects are more relevant (see \cite{DalMaso-Toader02,NegOrt07QSCP,Efendiev-Mielke06}), a natural idea is to consider rate-independent evolution as the limit of systems with smaller and smaller viscosity, namely to study the approximation of \eqref{DN} \begin{equation} \label{eq:DNepsilon} \partial\Psi_\varepsilon(u'(t))+\mathrm{D}\mathcal{E}(t,u(t))\ni 0 \quad\text{in $X^*$},\quad \Psi_\varepsilon(v):=\Psi(v)+\frac{\varepsilon}{2}\Psi^2(v), \tag{$\mathrm{DN}_\varepsilon$} \end{equation} which corresponds to introducing a quadratic (or even more general) perturbation in the time Incremental Minimization Scheme: \begin{equation} \label{eq:173} \min_{U\in X} \mathcal{E}(t_\tau^n,U)+\Psi\big(U-U_\tau^{n-1}\big)+\frac{\varepsilon^n}{2\tau^n}\Psi^2\big(U-U_\tau^{n-1}\big).
\tag{$\mathrm{IM}_{\Psi,\varepsilon}$} \end{equation} The choice $\varepsilon^n=\varepsilon^n(\tau)\downarrow 0$ with $\frac{\varepsilon^n(\tau)}{|\tau|}\uparrow+\infty$ leads to the notion of \emph{Balanced Viscosity} solutions \cite{Rossi-Mielke-Savare08,Mielke-Rossi-Savare09,Mielke-Rossi-Savare12,Mielke-Rossi-Savare13}. Under suitable smoothness and lower semicontinuity assumptions, it is possible to prove that all the limit curves satisfy a \emph{local stability} condition and a modified energy balance, involving an augmented total variation that encodes a more refined description of the jump behaviour of $u$: roughly speaking, a jump between $u(t-)$ and $u(t+)$ occurs only when these values can be connected by a rescaled solution $\vartheta$ of \eqref{eq:DNepsilon}, where the energy is frozen at the jump time $t$: \begin{equation} \label{eq:173bis} \partial\Psi(\vartheta'(s))+\vartheta'(s)+\mathrm{D}\mathcal{E}(t,\vartheta(s))\ni0. \end{equation} In the one-dimensional example \eqref{1d-energy}, with the loading $\ell$ strictly increasing and under suitable choices of the initial datum, it is possible to prove, \cite{Rossi-Savare13}, that $u$ is a BV solution if and only if it is nondecreasing and \begin{equation} \label{eq:24} \alpha+W'(u(t))=\ell(t)\quad \text{for all $t\in[a,b]\setminus J_u$}. \end{equation} \begin{figure} \caption{BV solution for a double-well energy $W$ with an increasing load $\ell$. The blue line denotes the path described by the optimal transition $\vartheta$ solving \eqref{eq:173bis}.} \end{figure} The evolution mode \eqref{eq:24} follows the so-called \emph{Delay rule}, related to hysteresis behaviour. The system also accepts \emph{relative minima} of $u\mapsto W(u)-(\ell(t)-\alpha)u$, and thus the function $t\mapsto u(t)$ tends to jump ``as late as possible''.
\paragraph{Visco-Energetic solutions and main results of the paper.} Recently, in \cite{MinSav16}, the new notion of Visco-Energetic (VE) solutions has been proposed. This is a sort of intermediate situation between Energetic and Balanced Viscosity solutions, since these solutions are obtained by studying the time Incremental Minimization Scheme \eqref{eq:173} when the ratio $\mu:=\varepsilon^n/\tau^n$ is kept constant. In this way the dissipation $\Psi$ is corrected by an extra viscous penalization term, for example of the form \begin{equation} \label{eq:169} \delta(u,v):=\frac{\mu}2\Psi^2(v-u)\quad \text{for every $u,v\in X,\quad\mu\ge0$,} \end{equation} which induces a stronger localization of the minimizers, according to the size of the parameter $\mu$. The new modified time Incremental Minimization Scheme is therefore \begin{equation} \label{eq:167mu} \min_{U\in X} \mathcal{E}(t_\tau^n,U)+\Psi\big(U-U_\tau^{n-1}\big)+\delta(U,U_\tau^{n-1}) \tag{$\mathrm{IM}_{\Psi,\mu}$}. \end{equation} \par As in the Energetic and BV cases, a variational characterization of the limits of the solutions of \eqref{eq:167mu} is possible, still involving a suitable stability condition and an energy balance. Concerning stability, we have a natural generalization of \eqref{en-stability}: \begin{equation} \label{eq:168} \mathcal{E}(t,u(t))\le \mathcal{E}(t,v)+\Psi(v-u(t))+\delta(u(t),v) \quad\text{for every }v\in X,\ t\in [a,b]\setminus J_u. \tag{S$_{\sf D}$} \end{equation} The right replacement of the energy balance condition is harder to formulate.
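The effect of the correction \eqref{eq:169} on the scheme \eqref{eq:167mu} can again be illustrated by brute force. In the following sketch (our own illustrative code and parameter values; for $\delta$ as in \eqref{eq:169} the strength of the localization scales like $\mu\alpha^2$) a large $\mu$ keeps the minimizers close to the previous state, so the discrete solution jumps much later than the energetic one:

```python
import numpy as np

# Minimal sketch (illustrative): the modified scheme IM_{Psi,mu} for
# E(t,u) = W(u) - l(t)*u, W(u) = (u^2-1)^2, Psi(v) = alpha*|v| and
# delta(u,v) = (mu/2)*Psi(v-u)^2, minimized over a fine grid.

def ve_steps(load, ts, u0, alpha=0.1, mu=0.0,
             grid=np.linspace(-2.0, 2.0, 4001)):
    W = (grid**2 - 1.0)**2
    us = [u0]
    for t in ts[1:]:
        psi = alpha*np.abs(grid - us[-1])
        total = W - load(t)*grid + psi + 0.5*mu*psi**2  # E + Psi + delta
        us.append(grid[np.argmin(total)])
    return np.array(us)
```

With $\mu=0$ the scheme coincides with $(\mathrm{IM}_\Psi)$ and the solution jumps early; with a large $\mu$ it tracks the delayed branch for much longer.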
A heuristic idea, which one can figure out by the direct analysis of \eqref{1d-energy}, is that jump transitions between $u(t-)$ and $u(t+)$ should be described by discrete trajectories $\vartheta:Z\to X$ defined in a subset $Z\subset \mathbb{Z}$ such that each value $\vartheta(n)$ is a minimizer of the incremental problem \eqref{eq:167mu}, with datum $\vartheta(n-1)$ and with the energy ``frozen'' at time $t$. In the simplest case $Z=\mathbb{Z}$, the left and right jump values are the limits of $\vartheta(n)$ as $n\to\pm\infty$, but more complicated situations can occur, when $Z$ is a proper subset of $\mathbb{Z}$ or one has to deal with concatenations of (even countably many) discrete transitions and sliding parts parametrized by a continuous variable, where the stability condition \eqref{eq:168} holds. \par In order to capture all of these possibilities, VE transitions are parametrized by continuous maps $\vartheta:E\to X$ defined in an \emph{arbitrary compact subset} of $\mathbb{R}$. We refer to Section \ref{subsec:Visco-Energetic-Euclidean} for the precise description of the new dissipation cost and the corresponding total variation. \par In the present paper we study Visco-Energetic solutions in the one-dimensional setting and we obtain a full characterization for the same broad class of energy functionals of \cite{Rossi-Savare13}. Compared to Energetic and BV solutions, the main difficulty here comes from the description of solutions at jumps: as we have mentioned, transitions are now defined in an arbitrary compact subset of $\mathbb{R}$, so that a wide range of possibilities can occur. For instance, the energetic case is a very particular situation, where (e.g.\ for an increasing jump) the transitions have the form \begin{equation} \vartheta: \{0,1\}\rightarrow\mathbb{R}\quad\text{such that $\vartheta(0)=u(t-),\quad\vartheta(1)=u(t+)$}, \end{equation} defined in a compact set that consists of just two points.
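The frozen-time iteration sketched above (each $\vartheta(n)$ a minimizer of \eqref{eq:167mu} with datum $\vartheta(n-1)$) is easy to mimic numerically. A minimal sketch with illustrative names and parameter values (not from the paper), for the double-well example and the quadratic correction \eqref{eq:169}:

```python
import numpy as np

# Minimal sketch (illustrative): generate a discrete jump transition at
# a frozen time t by iterating
#   v_n in argmin_v E(t,v) + Psi(v - v_{n-1}) + delta(v_{n-1}, v),
# for E(t,v) = (v^2-1)^2 - l(t)*v, Psi(v) = alpha*|v| and
# delta(u,v) = (mu/2)*Psi(v-u)^2, until a fixed point is reached.

def frozen_time_transition(t, u_minus, load, alpha=0.1, mu=1.0,
                           grid=np.linspace(-3.0, 3.0, 6001), n_max=100):
    W = (grid**2 - 1.0)**2
    thetas = [u_minus]
    for _ in range(n_max):
        psi = alpha*np.abs(grid - thetas[-1])
        v = grid[np.argmin(W - load(t)*grid + psi + 0.5*mu*psi**2)]
        if abs(v - thetas[-1]) < 1e-12:   # fixed point reached
            break
        thetas.append(v)
    return thetas
```

Starting from an unstable left state at a frozen time where the load is strong, the iteration jumps into the right well and stops there after finitely many steps.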
\par However, thanks to an accurate analysis of the VE dissipation cost, we are able to describe all these possibilities. Coming back to the standard example \eqref{1d-energy}, with the viscous correction $\delta$ of the form \eqref{eq:169}, the behaviour of VE solutions strongly depends on the parameter $\mu$. More precisely, the following situations can occur: \begin{itemize} \item The viscous correction term is ``strong'', for example $\mu \ge -\min W''$. In this case VE solutions exhibit a behaviour comparable to BV solutions: both satisfy the same local stability condition and equation \eqref{eq:24} holds, so that they follow a \emph{delay rule}. \item No viscous correction is added to the system, which corresponds to $\mu=0$. In this case VE solutions coincide with Energetic solutions, equation \eqref{eq:23} holds and they satisfy the \emph{Maxwell rule}. \item A ``weak'' viscous correction is added to the system, which corresponds to a small $\mu>0$. We have a sort of intermediate situation between the two previous cases: a jump can occur even before reaching a local extremum of $W'$. In particular, an increasing jump can occur when the \emph{modified Maxwell rule} is satisfied: \begin{equation} \label{eq:201} \int_{u(t-)}^{u_+}\Big( W'(r)-\ell(t)+\alpha+\mu (r-u(t-))\Big)\,\mathrm{d} r=0,\quad \text{for some $u_+>u(t-)$}. \end{equation} In this case $u(t+)$ may differ from $u_+$: see Figure \ref{fig:3} for more details. \end{itemize} \begin{figure} \caption{Visco-Energetic solutions for a double-well energy $W$ with an increasing load $\ell$. When $\mu>-\min W''$ (first picture) the solution jumps when it reaches the maximum of $W'$ and the transition is the ``double chain'' obtained by solving the Incremental Minimization Scheme with frozen time $t$.
When $\mu$ is small (second picture) the optimal transition $\vartheta$ makes a first jump connecting $u(t-)$ with $u_+$ according to the modified Maxwell rule \eqref{eq:201}.} \end{figure} \paragraph{Plan of the paper.} In the paper we will analyse VE solutions to one-dimensional rate-independent evolutions driven by general (nonconvex) potentials and we will assume that the viscous correction $\delta$ satisfies only the natural assumptions of the visco-energetic theory, including in particular the quadratic case \eqref{eq:169}. In the preliminary Section \ref{sec:2}, we recall the main definitions of Visco-Energetic solutions, their dissipation cost and the corresponding total variation, along with some useful properties and characterizations coming from the general theory; all the assumptions of the one-dimensional setting are collected in Section \ref{subsec:one-dimensional-setting}. \par In Section \ref{sec:3}, after a brief discussion about the stability conditions, we give a characterization of Visco-Energetic solutions with a general (i.e.\ non monotone) external loading. This characterization involves the \emph{one-sided global slopes} with a $\delta$ correction, which are defined in Section \ref{subsec:stability}. \par In Section \ref{sec:4} we analyse the case of a monotone loading $\ell$. We exhibit a more explicit characterization of Visco-Energetic solutions, in terms of the monotone envelopes of the one-sided global slopes. This characterization, in a suitable sense, generalizes \eqref{eq:23} and \eqref{eq:24}.
\section{Preliminaries} \label{sec:2} Throughout this section, $[a,b]\subseteq\mathbb{R}$ is a closed interval and \[ (X,\|\cdot\|_X)\quad\text{ will be a finite-dimensional normed vector space.} \] We first recall the key elements of the rate-independent system $(X,\mathcal{E},\Psi)$ along with the main definitions of Visco-Energetic solutions, their dissipation cost and some useful properties coming from the general theory, \cite{MinSav16}. \subsection{Rate-independent setting and BV functions} \label{subsec:setting} Hereafter we consider a rate-independent system $(X,\mathcal{E},\Psi)$, where the dissipation potential \[ \Psi: X \rightarrow [0,+\infty)\text{ is 1-positively homogeneous, convex, with $\Psi(v)>0$ if $v\neq 0$,} \] and $\mathcal{E}$ is a smooth, time-dependent energy functional, which we take of the form \begin{equation} \mathcal{E}(t,u):=W(u)-\langle\ell(t),u\rangle \end{equation} for some $W\in \mathrm{C}^1(X)$ bounded from below by a constant $-\lambda>-\infty$ and $\ell\in \mathrm{C}^1\left([a,b];X^*\right)$. We shall also use the notation $\mathcal{P}(t,u):=\partial_t\mathcal{E}(t,u)=-\langle\ell'(t),u\rangle$ for the partial time derivative of $\mathcal{E}$, and we set \begin{equation} \label{eq:K*} K^*:=\partial\Psi(0)=\{w\in X^*:\Psi_*(w)\le 1\}\subset X^*,\quad\text{where $\Psi_*(w):=\sup_{\Psi(v)\le 1}\langle w,v\rangle$}. \end{equation} \par The rate-independent system associated with the energy functional $\mathcal{E}$ and the dissipation potential $\Psi$ can be formally described by the \emph{rate-independent doubly nonlinear} differential inclusion \begin{equation} \partial \Psi(u'(t))+\mathrm{D} \mathcal{E}(t,u(t))\ni 0\quad \text{in $X^*$}\quad\text{ for a.a.\ $t\in(a,b)$}.
\tag{DN} \end{equation} \par It is well known that for nonconvex energies, solutions to \eqref{DN} may exhibit discontinuities in time. Therefore, we shall consider functions of bounded variation, pointwise defined at every $t\in[a,b]$ and such that the pointwise total variation $\mathrm{Var}_\Psi(u;[a,b])$ is finite, where \begin{equation} \label{eq:total-variation} \mathrm{Var}_\Psi(u;[a,b]):=\sup\left\{\sum_{m=1}^M\Psi(u(t_m)-u(t_{m-1})):a=t_0<t_1<\dots<t_M=b\right\}. \end{equation} Notice that a function $u\in \mathrm{BV}([a,b];X)$ admits left and right limits at every $t\in [a,b]$: \begin{equation} u(t-):=\lim_{s\uparrow t} u(s),\quad u(t+):=\lim_{s\downarrow t} u(s),\quad\text{with $u(a-):=u(a)$ and $u(b+):=u(b)$,} \end{equation} and its pointwise jump set $J_u$ is the at most countable set defined by \begin{equation} J_u:=\{t\in [a,b]: u(t-)\neq u(t)\text{ or }u(t)\neq u(t+)\}\supset\text{ess-}J_u:=\{t\in (a,b):u(t-)\neq u(t+)\}. \end{equation} \par We denote by $u'$ the distributional derivative of $u$ (extended by $u(a)$ in $(-\infty,a)$ and by $u(b)$ in $(b,+\infty)$): it is a Radon vector measure with finite total variation $|u'|$ supported in $[a,b]$. It is well known, \cite{Ambrosio-Fusco-Pallara00}, that $u'$ can be decomposed into the sum of its diffuse part $u'_{\mathrm{co}}$ and its jump part $u'_{\mathrm J}$: \[ u'=u'_{\mathrm{co}}+u'_{\mathrm J},\quad u'_{\mathrm J}=u'\llcorner \text{ess-}J_u,\quad\text{so that $u'_{\mathrm{co}}(\{t\})=0$ for every $t\in[a,b]$.} \] \subsection{Visco-Energetic (VE) solutions in the finite-dimensional case} \label{subsec:Visco-Energetic-Euclidean} We recall the notion of Visco-Energetic solutions for the rate-independent system $(X,\mathcal{E},\Psi)$ introduced in Section \ref{subsec:setting}.
The first ingredient we need is a \emph{viscous correction}, namely a continuous map $\delta:X\times X\rightarrow [0,+\infty)$, and its associated augmented dissipation \begin{equation} {\sf D}(u,v):=\Psi(v-u)+\delta(u,v)\quad \text{for every $u,v\in X$}. \end{equation} As in the energetic framework, \cite{Mielke-Theil-Levitas02,Mielke-Theil04,Mainik-Mielke05}, Visco-Energetic solutions to the rate-independent system $(X,\mathcal{E},\Psi)$ are curves $u:[a,b]\rightarrow X$ with bounded variation that are characterized by a \emph{stability condition} and an \emph{energy balance}. \par Concerning stability, we have a similar inequality, but we have to replace $\Psi$ with the augmented dissipation ${\sf D}$. More precisely, we will require that for every $t\notin J_u$ \begin{equation} \label{stability} \mathcal{E}(t,u(t))\le \mathcal{E}(t,v)+{\sf D}(u(t),v)\quad\text{for every $v\in X$}, \tag{$\mathrm{S}_{\sf D}$} \end{equation} which is naturally associated with the ${\sf D}$-stable set $\mathcal{S}_{\sf D}$. \begin{definition}[{\sf D}-stable set] The ${\sf D}$-stable set is the subset of $[a,b]\times X$ \begin{equation} \label{eq:SSD} \mathcal{S}_{\sf D}:=\left\{(t,u): \mathcal{E}(t,u)\le \mathcal{E}(t,v)+{\sf D}(u,v)\quad\text{for every $v\in X$}\right\}. \end{equation} Its section at time $t$ will be denoted by $\mathcal{S}_{\sf D}(t)$. \end{definition} \par As intuition suggests, not every viscous correction $\delta$ will be admissible for our purpose. A full description of Visco-Energetic solutions and admissible viscous corrections is discussed in \cite{MinSav16}, where the general metric-topological setting is considered.
For the sake of simplicity, in this section we will assume that $\delta$ satisfies the following condition: \begin{equation} \lim_{v\rightarrow u}\frac{\delta(u,v)}{\Psi(v-u)}=0 \quad\text{for every $u \in \mathcal{S}_{\sf D}(t)$,\quad $t\in [a,b]$.} \end{equation} \par The \emph{energy balance} is harder to formulate than stability and we first need to introduce the key concepts of transition cost and augmented total variation associated with the dissipation ${\sf D}$. \par Hereafter, for every subset $E\subset\mathbb{R}$ we set $E^-:=\inf E$, $E^+:=\sup E$; whenever $E$ is compact, we will denote by ${\mathfrak H}(E)$ the (at most) countable collection of the connected components of the open set $[E^-,E^+]\setminus E$. We also denote by ${\mathfrak P}_f(E)$ the collection of all finite subsets of $E$. \par Concerning the transition cost, the main point is to consider transitions parametrized by continuous maps $\vartheta:E\to X$ defined in arbitrary compact subsets of $\mathbb{R}$ such that $\vartheta(E^-)=u(t-)$ and $\vartheta(E^+)=u(t+)$. More precisely, the first ingredient will be a \emph{residual stability function}: \begin{definition}[Residual stability function] \label{def:res-stability} For every $t\in[a,b]$ and $u\in X$ the residual stability function is defined by \begin{align} \label{eq:108} \mathrm{Res}(t,u):&=\sup_{v\in X} \{\mathcal{E}(t,u)-\mathcal{E}(t,v)-{\sf D}(u,v)\} \\ \label{eq:109} &=\mathcal{E}(t,u)-\inf_{v\in X}\{\mathcal{E}(t,v)+{\sf D}(u,v)\}.
\end{align} \end{definition} $\mathrm{Res}$ provides a measure of the failure of the stability condition \eqref{stability}, since for every $u\in X$, $t\in[a,b]$ we get \begin{equation} \label{eq:62} \mathcal{E}(t,u)\le \mathcal{E}(t,v)+{\sf D}(u,v)+\mathrm{Res}(t,u) \end{equation} and \begin{equation} \label{eq:63} \mathrm{Res}(t,u)=0\quad \Longleftrightarrow \quad u\in \mathcal{S}_{\sf D}(t). \end{equation} The transition cost is the sum of three contributions, according to the following definition. \begin{definition}[Transition cost] \label{def:transition-cost} Let $E\subset \mathbb{R}$ be compact and $\vartheta\in \mathrm{C}(E;X)$. For every $t\in[a,b]$ we define the \emph{transition cost function} ${\sf C}(t,\vartheta,E)$ by \begin{equation} \label{eq:33} {\sf C}(t,\vartheta,E):=\mathrm{Var}_\Psi(\vartheta,E)+{\sf C}_\delta(\vartheta,E)+\sum_{s\in E\setminus \{E^+ \}}\mathrm{Res}(t,\vartheta(s)) \end{equation} where the first term is the usual total variation \eqref{eq:total-variation}, the second one is \[ {\sf C}_\delta(\vartheta,E):=\sum_{I\in \mathfrak H(E)}\delta(\vartheta(I^- ),\vartheta(I^+ )), \] and the third term is \[ \sum_{s\in E\setminus\{E^+ \}}\mathrm{Res}(t,\vartheta(s)):=\sup\left\{\sum_{s\in P}\mathrm{Res}(t,\vartheta(s)): P\in \mathfrak P_f(E\setminus \{E^+ \})\right\}, \] with the sum defined as $0$ if $E\setminus \{E^+ \}=\emptyset$. \end{definition} We adopt the convention ${\sf C}(t,\vartheta,\emptyset):=0$. It is not difficult to check that the transition cost ${\sf C}(t,\vartheta,E)$ is additive with respect to $E$: \begin{equation} \label{eq:195} {\sf C}(t,\vartheta,E\cap[r_0,r_2])= {\sf C}(t,\vartheta,E\cap[r_0,r_1])+ {\sf C}(t,\vartheta,E\cap[r_1,r_2])\quad \text{for every }r_0<r_1<r_2.
\end{equation} It has been proved, \cite[Theorem 6.3]{MinSav16}, that for every $t\in[a,b]$ and for every $\vartheta\in \mathrm{C}(E;X)$ \begin{equation}\label{eq:crinqualitytheta} \mathcal{E}(t,\vartheta(E^+ ))+{\sf C}(t,\vartheta,E)\geq \mathcal{E}(t,\vartheta(E^- )). \end{equation} The dissipation cost ${\sf c}(t,u_0,u_1)$ induced by the function ${\sf C}$ is defined by minimizing ${\sf C}(t,\vartheta,E)$ among all the transitions $\vartheta$ connecting $u_0$ to $u_1$: \begin{definition}[Jump dissipation cost and augmented total variation] \label{dissipationcost} Let $t\in[a,b]$ be fixed and let us consider $u_0, u_1\in X$. We set \begin{equation} \label{eq:dissipationcost} {\sf c}(t,u_0,u_1):=\inf\left\{{\sf C}(t, \vartheta,E): E\Subset \mathbb{R},\ \vartheta\in \mathrm{C}(E; X),\ \vartheta(E^- )=u_0,\ \vartheta(E^+ )=u_1\right\}, \end{equation} with the incremental dissipation cost $\Delta_{\sf c}(t, u_0,u_1):={\sf c}(t,u_0,u_1)-\Psi(u_1-u_0)$. We also define \begin{multline} \mathrm{Jmp}_{\Delta_{\sf c}}(u,[a,b]):=\Delta_{\sf c} (a,u(a),u(a+)) + \Delta_{\sf c} (b,u(b-),u(b)) \\ +\sum_{t \in J_u\cap (a,b)} \Big(\Delta_{\sf c} (t,u(t-),u(t))+\Delta_{\sf c} (t,u(t),u(t+))\Big), \end{multline} and the corresponding augmented total variation $\mathrm{Var}_{\Psi,{\sf c}}$ is then \begin{equation} \label{eq:varP} \mathrm{Var}_{\Psi,{\sf c}}(u,[a,b]):=\mathrm{Var}_\Psi(u,[a,b])+ \mathrm{Jmp}_{\Delta_{\sf c}}(u,[a,b]). \end{equation} \end{definition} The infimum in \eqref{eq:dissipationcost} is attained whenever there is at least one admissible transition $\vartheta$ with finite cost. In this case, we say that $\vartheta$ is an \emph{optimal transition}. \begin{definition}[Optimal transitions] Let $t\in[a,b]$ and $u_-$, $u_+\in X$.
We say that a curve $\vartheta\in \mathrm{C}(E;X)$, $E$ being a compact subset of $\mathbb{R}$, is an optimal transition between $u_-$ and $u_+$ if \begin{equation} u_-=\vartheta(E^- ),\quad u_+=\vartheta(E^+ ),\quad {\sf c}(t,u_-,u_+)={\sf C}(t,\vartheta,E). \end{equation} $\vartheta$ is \emph{tight} if $\vartheta(I^-)\neq \vartheta(I^+)$ for every $I\in \mathfrak H(E)$. $\vartheta$ is a \begin{align} \text{pure jump transition, if }&E\setminus \{E^-,E^+\}\text{ is discrete,}\\ \text{sliding transition, if } &\mathrm{Res}(t,\vartheta(r))=0\quad \text{for every $r\in E$}, \\ \text{viscous transition, if } &\mathrm{Res}(t,\vartheta(r))>0\quad \text{for every $r\in E\setminus \{E^{\pm}\}$} \label{eq:viscoustransition}. \end{align} \end{definition} Notice that if $\vartheta$ is a transition with finite cost ${\sf C}(t,\vartheta,E)<\infty$, then the set \begin{equation} E_{\mathrm{Res}}:=\{r\in E \setminus \{E^+\}: \mathrm{Res}(t, \vartheta(r))>0\}\quad\text{is discrete, i.e.\ all its points are isolated}. \end{equation} \par With these notions at our disposal, we can now give the precise definition of Visco-Energetic solutions to the rate-independent system $(X,\mathcal{E},\Psi,\delta)$. \begin{definition}[Visco-Energetic (VE) solutions] We say that a curve $u\in \mathrm{BV} ([a,b];X)$ is a \emph{Visco-Energetic (VE) solution} of the rate-independent system $(X,\mathcal{E},\Psi,\delta)$ if it satisfies the stability condition \begin{equation}\label{eq:stability} u(t)\in \mathcal{S}_{\sf D}(t)\quad\text{for every }t\in [a,b]\setminus J_u, \tag{S$_{\sf D}$} \end{equation} and the energy balance \begin{equation} \label{energybalance} \mathcal{E}(t,u(t))+\mathrm{Var}_{\Psi,{\sf c}}(u,[a,t])=\mathcal{E}(a,u(a))+\int_a^t\mathcal{P}(s,u(s))\,\mathrm{d} s \tag{$\mathrm{E}_{\Psi,{\sf c}}$} \end{equation} for every $t\in[a,b]$.
\end{definition} \par Existence of Visco-Energetic solutions in a much more general metric-topological setting is proved in \cite{MinSav16}. Solutions are obtained as limits of the piecewise constant interpolants of discrete solutions $U^n_\tau$ obtained by recursively solving the modified time \emph{Incremental Minimization Scheme} \begin{equation} \label{eq:167} \min_{U\in X} \mathcal{E}(t^n_\tau,U)+{\sf D}(U^{n-1}_\tau,U), \tag{IM$_{\sf D}$} \end{equation} starting from an initial datum $U_\tau^0\approx u_0$. \subsection{Some useful properties of VE solutions} \label{subsec:VE-Euclidean-properties} In this section we collect a list of useful properties of Visco-Energetic solutions and we prove an equivalent characterization in the finite-dimensional setting, involving a doubly nonlinear evolution equation. For more details about these results and their proofs we refer to \cite{MinSav16,Minotti16T}. \par To simplify the notation, we first introduce the \emph{Minimal set}, which is related to the connection of two points through a step of Minimizing Movements. \begin{definition}[Moreau-Yosida regularization and Minimal set] Suppose that $\mathcal{E}$ satisfies \eqref{eq:E-assumption} and \eqref{eq:W'-assumption}. The ${\sf D}$-Moreau-Yosida regularization $\mathcal{Y}:[a,b]\times \mathbb{R}\to \mathbb{R}$ of $\mathcal{E}$ is defined by \begin{equation} \label{eq:111} \mathcal{Y}(t,u):=\min_{v\in\mathbb{R}}\mathcal{E}(t,v)+{\sf D}(u,v). \end{equation} For every $t\in [a,b]$ and $u\in \mathbb{R}$ the minimal set is \begin{equation} \label{eq:107} \mathrm{M}(t,u):=\mathop{\rm argmin}\limits_{v\in\mathbb{R}}\ \mathcal{E}(t,v)+{\sf D}(u,v)= \Big\{v\in \mathbb{R}:\mathcal{E}(t,v)+{\sf D}(u,v)=\mathcal{Y}(t,u)\Big\}.
\end{equation} \end{definition} Notice that, by \eqref{eq:E-assumption} and \eqref{eq:W'-assumption}, $\mathrm{M}(t,u)\neq\emptyset$ for every $t,u$. It is also clear that $\mathrm{Res}(t,u)=\mathcal{E}(t,u)-\mathcal{Y}(t,u)$ and that \[ u\in \mathcal{S}_{\sf D}(t)\ \Longrightarrow\ u\in \mathrm{M}(t,u). \] \par As we have mentioned in the Introduction, when $t\in J_u$ and $\vartheta:E\rightarrow\mathbb{R}$ is an optimal transition between $u(t-)$ and $u(t+)$, $\vartheta$ ``keeps trace'' of the whole construction via \eqref{eq:167}. For instance, when $\vartheta(E)$ is discrete, every point is obtained by a step of Minimizing Movements from the previous one, with the energy frozen at the time $t$. The next result, \cite[Theorem 3.16]{MinSav16}, formalises this property and characterizes Visco-Energetic optimal transitions. Whenever a set $E\subset\mathbb{R}$ is given, we will use the notations \begin{equation} \label{eq:44} r_E^-:=\sup\big((E\cap (-\infty,r))\cup \{E^-\}\big),\quad r_E^+:=\inf\big((E\cap(r,+\infty))\cup\{E^+\}\big). \end{equation} \begin{theorem} \label{prop:2} A curve $\vartheta\in \mathrm{C}(E,\mathbb{R})$ with $\vartheta(E)\ni u(t)$ is an optimal transition between $u(t-)$ and $u(t+)$ satisfying \begin{equation} \label{eq:180} \mathcal{E}(t,u(t-))-\mathcal{E}(t,u(t+))={\sf C}(t,\vartheta,E) \end{equation} if and only if it satisfies \begin{equation} \label{eq:120} \mathrm{Var}_\Psi(\vartheta,E\cap[r_0,r_1])\le \mathcal{E}(t,\vartheta(r_0))-\mathcal{E}(t,\vartheta(r_1))\quad \text{for every }r_0,r_1\in E,\ r_0\le r_1, \end{equation} and \begin{equation} \label{eq:121} \vartheta(r)\in \mathrm{M}(t,\vartheta(r^-_E))\quad \text{for every }r\in E\setminus \{E^-\}.
\end{equation} \end{theorem} In some situations, the inequality ``$\geq$'' in \eqref{eq:180} can be proved thanks to the following elementary lemma, whose proof is analogous to \cite[Lemma 6.1]{MinSav16}. \begin{lemma} \label{le:elementary} Let $E\subset \mathbb{R}$ be a compact set with $E^- <E^+ $, and let $L(E)$ be the set of limit points of $E$. We consider a function $f:E\to \mathbb{R}$, lower semicontinuous and continuous on the left, and a function $g\in \mathrm{C}(E)$, strictly increasing, satisfying the following two conditions:\\ i) for every $I\in \mathfrak H(E)$ \begin{equation} \label{eq:65} \frac{f(I^+ )-f(I^- )} {g(I^+ )-g(I^- )}\ge 1; \end{equation} ii) for every $t\in L(E)$ which is an accumulation point of $L(E)\cap (-\infty,t)$ we have \begin{equation} \label{eq:22} \limsup_{s\uparrow t,\ s\in L(E)} \frac{f(t)-f(s)} {g(t)-g(s)}\ge 1. \end{equation} Then the map $s\mapsto f(s)-g(s)$ is nondecreasing in $E$; in particular \begin{equation} \label{eq:52} f(E^+ )-f(E^- )\ge g(E^+ )-g(E^- ). \end{equation} \end{lemma} \par The following proposition, a consequence of \eqref{eq:crinqualitytheta}, is useful to prove existence of VE solutions since it provides some sufficient conditions. \begin{proposition}[Sufficient criteria for VE solutions] \label{prop:leqinequality} Let $u\in \mathrm{BV}([a,b];X)$ be a curve satisfying the stability condition \eqref{stability}. Then $u$ is a VE solution of the rate-independent system $(X,\mathcal{E},\Psi,\delta)$ if and only if it satisfies one of the following equivalent characterizations: \begin{enumerate}[i)] \item $u$ satisfies the $(\Psi,{\sf c})$-energy-dissipation inequality \begin{equation} \label{leqinequality} \mathcal{E}(b,u(b))+\mathrm{Var}_{\Psi,{\sf c}}(u,[a,b])\leq\mathcal{E}(a,u(a))+\int_a^b\mathcal{P}(s,u(s))\,\mathrm{d} s.
\end{equation} \item $u$ satisfies the $\Psi$-energy-dissipation inequality \begin{equation} \label{eq:110} \mathcal{E}(t,u(t))+\mathrm{Var}_\Psi(u,[s,t])\leq \mathcal{E}(s,u(s))+\int_s^t\mathcal{P}(r,u(r))\,\mathrm{d} r\quad\text{for all $a\le s\le t\le b$} \end{equation} and the following jump conditions at each point $t\in J_u$: \begin{align}\label{Jve} \tag{$\mathrm{J_{VE}}$} \begin{split} \mathcal{E}(t,u(t- ))-\mathcal{E}(t,u(t))={\sf c}(t,u(t- ),u(t)), \\ \mathcal{E}(t,u(t))-\mathcal{E}(t,u(t+ ))={\sf c}(t,u(t),u(t+ )), \\ \mathcal{E}(t,u(t- ))- \mathcal{E}(t,u(t+ ))={\sf c}(t,u(t- ),u(t+ )). \end{split} \end{align} \end{enumerate} \end{proposition} \par Another simple property concerns the behaviour of Visco-Energetic solutions with respect to restriction and concatenation. The proof is trivial. \begin{proposition}[Restriction and concatenation principle] \label{lem:1} The following properties hold: \begin{enumerate} \item The restriction of a Visco-Energetic solution in $[a,b]$ to an interval $[\alpha,\beta]\subseteq [a,b]$ is a Visco-Energetic solution in $[\alpha,\beta]$; \item If $a=t_0<t_1<\dots<t_{m-1}<t_m=b$ is a subdivision of $[a,b]$ and $u:[a,b]\rightarrow\mathbb{R}$ is a Visco-Energetic solution on each one of the intervals $[t_{j-1},t_j]$, then $u$ is a Visco-Energetic solution in $[a,b]$. \end{enumerate} \end{proposition} \par In our finite-dimensional setting it is possible to give another sufficient criterion for Visco-Energetic solutions, more precisely a characterization through the stability condition \eqref{stability}, a doubly nonlinear differential inclusion, and the jump conditions \eqref{Jve}. This result will be the starting point for our discussion in the one-dimensional case.
\begin{theorem}[Characterization of VE solutions] \label{prop:differential-characterization} A curve $u\in \mathrm{BV}([a,b];X)$ is a Visco-Energetic solution of the rate-independent system $(X,\mathcal{E},\Psi,\delta)$ if and only if it satisfies the stability condition \eqref{stability}, the doubly nonlinear differential inclusion \begin{equation} \label{eq:DN0} \partial\Psi\left(\frac{\mathrm{d} u'_{\mathrm{co}}}{\mathrm{d}\mu}(t)\right)+\mathrm{D} W(u(t))\ni\ell(t)\quad\text{for $\mu$-a.e. $t\in(a,b)$,\quad $\mu:=\mathscr{L}^1+|u'_{\mathrm{co}}|$} \tag{$\mathrm{DN}_0$} \end{equation} and the jump conditions \eqref{Jve} at every $t\in\Ju$: \begin{align} \tag{$\mathrm{J_{VE}}$} \begin{split} \mathcal{E}(t,u(t-))-\mathcal{E}(t,u(t))={\sf c}(t,u(t-),u(t)), \\ \mathcal{E}(t,u(t))-\mathcal{E}(t,u(t+))={\sf c}(t,u(t),u(t+)), \\ \mathcal{E}(t,u(t-))- \mathcal{E}(t,u(t+))={\sf c}(t,u(t-),u(t+)). \end{split} \end{align} \end{theorem} \begin{proof} From the definition of the viscous dissipation cost ${\sf c}(t,\ul(t),\ur(t))$ in Definition \ref{def:transition-cost}, it is immediate to check that \[ \mathrm{Var}_\Psi\big(u,[a,b]\big)\le \mathrm{Var}_{\Psi,{\sf c}}\big(u,[a,b]\big), \] so that Visco-Energetic solutions are in particular \emph{local solutions}, in the sense of \cite{Mielke-Rossi-Savare12}. The differential characterization is therefore an immediate consequence of Proposition \ref{prop:leqinequality} and \cite[Proposition 2.7]{Mielke-Rossi-Savare12}. \end{proof} \subsection{The one-dimensional setting} \label{subsec:one-dimensional-setting} From now on we consider the particular case $X=\mathbb{R}$, which we also identify with $X^*$. We will denote by $v^+$ and $v^-$ the positive and the negative part of $v\in\mathbb{R}$.
\paragraph{Dissipation.} A dissipation potential is a function of the form \begin{equation} \Psi(v):=\alpha_+v^++\alpha_-v^-,\quad v\in\mathbb{R},\quad\text{for some $\alpha_\pm>0$.} \end{equation} Hence, we have \[ \partial\Psi(v)=\begin{cases} \alpha_+ \quad &\text{if $v>0$,} \\ [-\alpha_-,\alpha_+] \quad&\text{if $v=0$,} \\ -\alpha_- \quad&\text{if $v<0$,} \end{cases} \] and \begin{equation} \label{eq:40} K^*=[-\alpha_-,\alpha_+],\quad\Psi_*(w)=\frac{1}{\alpha_+}w^+ +\frac{1}{\alpha_-}w^-\quad\text{for all $w\in\mathbb{R}$}. \end{equation} \paragraph{Energy functional.} The energy is given by a function $\mathcal{E}:[a,b]\times\mathbb{R}\rightarrow\mathbb{R}$ of the form \begin{equation} \label{eq:E-assumption} \mathcal{E}(t,u):=W(u)-\ell(t)u \end{equation} with $\ell\in {\mathrm C}^1([a,b])$ and $W:\mathbb{R}\rightarrow\mathbb{R}$ such that \begin{equation} \label{eq:W'-assumption} W\in {\mathrm C}^1(\mathbb{R}),\quad\lim_{x\rightarrow-\infty}W'(x)=-\infty,\quad\lim_{x\rightarrow+\infty}W'(x)=+\infty. \end{equation} \paragraph{Viscous correction.} An admissible one-dimensional viscous correction is a continuous map $\delta:\mathbb{R}\times\mathbb{R}\rightarrow [0,+\infty)$ which satisfies \begin{equation}\label{eq:delta} \tag{$\delta$1} \lim_{v\rightarrow u}\frac{\delta(u,v)}{|v-u|}=0\quad\text{for every $u\in \mathcal{S}_{\sf D}(t)$},\quad t\in [a,b], \end{equation} and the reverse triangle inequality \begin{equation} \label{eq:delta2} \tag{$\delta$2} \delta(u_0,u_1)> \delta(u_0,v)+\delta(v,u_1)\quad \text{for every $u_0< v< u_1$}. \end{equation} We still use the notation ${\sf D}(u,v):=\Psi(v-u)+\delta(u,v)$ for the augmented dissipation.
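The one-dimensional ingredients just introduced are straightforward to encode. A minimal Python sketch (all names are ours; $W$ is the double-well energy used in the example later in this section, and the test below is the pointwise condition $\ell(t)-W'(u)\in K^*$, which is weaker than the global ${\sf D}$-stability):

```python
# Dissipation Psi, its subdifferential, K*, and a pointwise stability test
# for the energy E(t,u) = W(u) - ell(t) u.  Illustrative sketch only.
A_PLUS, A_MINUS = 1.0, 0.5               # alpha_+ , alpha_-

def psi(v):
    """Psi(v) = alpha_+ v^+ + alpha_- v^-  (v^- is the negative part)."""
    return A_PLUS * max(v, 0.0) + A_MINUS * max(-v, 0.0)

def d_psi(v):
    """Subdifferential of Psi, returned as an interval (lo, hi)."""
    if v > 0:
        return (A_PLUS, A_PLUS)
    if v < 0:
        return (-A_MINUS, -A_MINUS)
    return (-A_MINUS, A_PLUS)            # = K* at v = 0

def W(u):   return 0.25 * (u * u - 1.0) ** 2   # double-well potential
def dW(u):  return u ** 3 - u                  # satisfies W'(x) -> +/- infinity

def locally_stable(t, u, ell):
    """Pointwise test: ell(t) - W'(u) in K* = [-alpha_-, alpha_+]."""
    return -A_MINUS <= ell(t) - dW(u) <= A_PLUS

ell = lambda t: 0.0                      # zero loading, for illustration
assert psi(2.0) == 2.0 and psi(-2.0) == 1.0
assert d_psi(0.0) == (-A_MINUS, A_PLUS)
assert locally_stable(0.0, 1.0, ell)     # u = 1 is a well: W'(1) = 0
```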
\par \begin{remark}[Admissible viscous corrections]\upshape Assumption \eqref{eq:delta} is necessary for the general theory of Visco-Energetic solutions; \eqref{eq:delta2} will be crucial for our one-dimensional characterization (see Section \ref{subsec:main-theorem}). However, these assumptions are quite natural: they are satisfied, for example, if we choose $\delta$ of the form \[ \delta(u,v)=f(\Psi(u-v))\quad\text{with $f$ positive, strictly convex, and $\lim_{r\rightarrow 0}\frac{f(r)}{r}=0$}. \] For instance, the standard choice $\delta(u,v)=\frac{\mu}{2}(v-u)^2$, for some positive parameter $\mu$, is admissible. This particular case will be analysed through some examples in Sections \ref{sec:3} and \ref{sec:4}. \end{remark} \section{Visco-Energetic solutions of rate-independent systems in \texorpdfstring{$\mathbb{R}$}{R}} \label{sec:3} As we have underlined in the Introduction, Visco-Energetic solutions of the rate-independent system $(\mathbb{R},\mathcal{E},\Psi,\delta)$ are intermediate between Energetic solutions, which correspond to the choice $\delta\equiv 0$, and Balanced Viscosity solutions, which correspond to a choice $\delta=\delta_\tau$ in \eqref{eq:167}, depending on $\tau$, of the form \[ \delta_\tau(u,v):=\mu(|\tau|)\delta(u,v),\quad \mu:(0,+\infty)\rightarrow(0,+\infty), \quad\lim_{r\rightarrow 0}\mu(r)=+\infty. \] Guided by the characterizations of these two cases, given in \cite{Rossi-Savare13} in a similar one-dimensional setting and recalled in the Introduction, we obtain a full characterization for the Visco-Energetic case. In particular, the main results of \cite{Rossi-Savare13} can be recovered for suitable choices of $\delta$.
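For the quadratic correction mentioned in the remark above, the admissibility conditions \eqref{eq:delta} and \eqref{eq:delta2} can be verified directly: $\delta(u,v)/|v-u|=\frac{\mu}{2}|v-u|\to 0$, and the strict reverse triangle inequality reduces to $(a+b)^2>a^2+b^2$ for $a,b>0$. A small numerical probe (names ours):

```python
# Check (delta1)-(delta2) for the quadratic correction delta = (mu/2)(v-u)^2.
import random

MU = 2.0

def delta(u, v):
    return 0.5 * MU * (v - u) ** 2

# (delta1): delta(u,v)/|v-u| = (mu/2)|v-u| -> 0 as v -> u
for h in (1e-1, 1e-3, 1e-6):
    assert abs(delta(0.0, h) / h - 0.5 * MU * h) < 1e-12

# (delta2): strict reverse triangle inequality for u0 < v < u1
random.seed(0)
for _ in range(1000):
    u0, v, u1 = sorted(random.uniform(-5.0, 5.0) for _ in range(3))
    if u0 < v < u1:
        assert delta(u0, u1) > delta(u0, v) + delta(v, u1)
```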
\subsection{One-sided global slopes with a \texorpdfstring{$\delta$}{d} correction} \label{subsec:stability} One-sided global slopes are used in \cite{Rossi-Savare13} to give a one-dimensional characterization of Energetic solutions of the rate-independent system $(\mathbb{R},\mathcal{E},\Psi)$. We recall their definitions: \begin{equation}\label{eq:one-sided-slopes} W'_{\sf ir}(u):= \inf_{z>u} \frac{W(z)-W(u)}{z-u},\quad W'_{\sf sl}(u):= \sup_{z<u} \frac{W(z)-W(u)}{z-u}, \end{equation} where the subscripts $\mathsf{ir}$ and $\mathsf{sl}$ stand for \emph{inf-right} and \emph{sup-left} respectively. \par In this section we introduce a generalization of $W'_{\sf ir}$ and $W'_{\sf sl}$, and we prove some of their important properties. These slopes allow us to give an equivalent, one-dimensional characterization of the ${\sf D}$-stability \eqref{stability}. \begin{definition} For every $u\in \mathbb{R}$ we define the one-sided global slopes with a $\delta$ correction \begin{gather} W'_{{\sf ir},\delta}(u):=\inf_{z>u}\left\{\frac{1}{z-u}\Big(W(z)-W(u)+\delta(u,z)\Big)\right\}, \label{eq:Wir} \\ W'_{{\sf sl},\delta}(u):=\sup_{z<u}\left\{\frac{1}{z-u}\Big(W(z)-W(u)+\delta(u,z)\Big)\right\}. \label{eq:Wsl} \end{gather} \end{definition} For simplicity, we will still use the notations $W'_{\sf ir}$ and $W'_{\sf sl}$ instead of $W'_{{\sf ir},0}$ and $W'_{{\sf sl},0}$ when $\delta\equiv 0$. From \eqref{eq:delta} it follows that the modified global slopes satisfy \begin{equation} \label{eq:4} W'_{{\sf ir},\delta}(u)\le W'(u)\le W'_{{\sf sl},\delta}(u) \quad \text{for every $u \in \mathbb{R}$}, \end{equation} and it is not difficult to check that they are continuous.
Indeed, it is sufficient to introduce the continuous function $V: \mathbb{R}\times \mathbb{R}\rightarrow \mathbb{R}$, \[ V(u,z):=\begin{cases} W'(u) &\quad \text{if $z=u$}, \\ \frac{1}{z-u}\Big(W(z)-W(u)+\delta(u,z)\Big)&\quad \text{if $z\neq u$}, \end{cases} \] and observe, e.g.~for $W'_{{\sf ir},\delta}$, that \[ W'_{{\sf ir},\delta}(u) = \min \{V(u,z): z\ge u \}, \] where for $u$ in a bounded set the minimum is attained in a compact set thanks to \eqref{eq:W'-assumption}. \par If $\delta$ is large enough, in a suitable sense, equalities hold in \eqref{eq:4}, as stated in the following proposition. \begin{proposition} \label{prop:1} Suppose that $W$ satisfies the $\delta$-convexity assumption \begin{equation} \label{eq:41} W(v)\le (1-t)W(u)+tW(w)+t(1-t)\delta(u,w),\quad v=(1-t)u+tw,\quad t\in[0,1]. \end{equation} Then the one-sided slopes coincide with the usual derivative: \[ W'_{{\sf ir},\delta}(u)=W'(u)=W'_{{\sf sl},\delta}(u)\quad \text{for every $u\in\mathbb{R}$}. \] \end{proposition} \begin{proof} We prove the first equality, since the second one is analogous. Let us take $v,w\in\mathbb{R}$ with $u<v\le w$ and $t\in (0,1]$ such that $v=(1-t)u+tw$. Then \begin{multline*} \frac{W(v)-W(u)}{v-u}\le \frac{(1-t)W(u)+tW(w)+ t(1-t)\delta(u,w)-W(u)}{t(w-u)} = \\ \frac{W(w)-W(u)+\delta(u,w)}{w-u} -t\frac{\delta(u,w)}{w-u}\le \frac{W(w)-W(u)+\delta(u,w)}{w-u}. \end{multline*} Passing to the limit as $v\downarrow u$ we get \[ W'(u)\le \frac{W(w)-W(u)+\delta(u,w)}{w-u}\quad \text{for every $w>u$.} \] Now it is enough to take the infimum over $w>u$.
\end{proof} \begin{remark} \upshape An interesting consequence of Proposition \ref{prop:1} is that if $W$ satisfies the usual $\lambda$-convexity assumption \begin{equation} \label{eq:lambda-convex} W(v)\le (1-t)W(u)+tW(w)-\lambda t(1-t)(w-u)^2,\quad v=(1-t)u+tw,\quad t\in[0,1], \end{equation} for some $\lambda\in \mathbb{R}$, then for every $\mu\ge \max\{-\lambda, 0\}$ we can choose $\delta(u,w):=\mu(w-u)^2$ and \eqref{eq:41} holds. In particular, if $W$ is convex, the one-sided global slopes coincide with the usual derivative for every admissible viscous correction $\delta$. \end{remark} \par If $W'_{{\sf ir},\delta}(u)<W'(u)$ at a point $u\in\mathbb{R}$, then by \eqref{eq:W'-assumption} there exists $z>u$ which attains the infimum in \eqref{eq:Wir}. The same happens if $W'_{{\sf sl},\delta}(u)>W'(u)$. Moreover, by the continuity of $W$ and of the global slopes, there exists a neighborhood of $u$ in which the strict inequality holds. In this neighborhood $W'_{{\sf ir},\delta}$, or respectively $W'_{{\sf sl},\delta}$, is decreasing. \begin{proposition} \label{prop: Wir.prop} Let $I \subseteq \mathbb{R}$ be an open interval such that \[ W'_{{\sf ir},\delta}(v) < W'(v) \quad\text{(resp. $W'_{{\sf sl},\delta}(v)>W'(v)$)} \quad \text{ for every $v\in I$}. \] Then $W'_{{\sf ir},\delta}$ (resp. $W'_{{\sf sl},\delta}$) is decreasing on $I$. \end{proposition} \begin{proof} Let $v_1\in I$ and let $z>v_1$ be an element that attains the infimum in \eqref{eq:Wir}. Then for every $v_2<z$ we have the inequality \[ W'_{{\sf ir},\delta}(v_2)-W'_{{\sf ir},\delta}(v_1) \le \frac{W(z)-W(v_2)+\delta(v_2,z)}{z-v_2}- \left(\frac{W(z)-W(v_1)+\delta(v_1,z)}{z-v_1}\right).
\] From \eqref{eq:delta2}, $\delta(v_1,z)\ge\delta(v_2,z)$, so that \[ \frac{1}{v_2-v_1}\left[\frac{\delta(v_2,z)}{z-v_2}-\frac{\delta(v_1,z)}{z-v_1}\right]\le \frac{\delta(v_2,z)}{(z-v_2)(z-v_1)}. \] Combining this with the simple identity \[ \frac{W(z)-W(v_1)}{z-v_1}= \frac{W(z)-W(v_2)}{z-v_2}\left(1-\frac{v_2-v_1}{z-v_1}\right)+\frac{W(v_2)-W(v_1)}{v_2-v_1}\frac{v_2-v_1}{z-v_1}, \] after a simple computation we obtain \[ \frac{W'_{{\sf ir},\delta}(v_2)-W'_{{\sf ir},\delta}(v_1)}{v_2-v_1} \le \frac{1}{z-v_1}\left[\frac{W(z)-W(v_2)+\delta(v_2,z)}{z-v_2}- \frac{W(v_2)-W(v_1)}{v_2-v_1}\right]. \] Passing to the limsup as $v_2\downarrow v_1$ we get \[ \limsup_{v_2\downarrow v_1} \frac{W'_{{\sf ir},\delta}(v_2)-W'_{{\sf ir},\delta}(v_1)}{v_2-v_1}\le \frac{1}{z-v_1}\left(W'_{{\sf ir},\delta}(v_1) - W'(v_1)\right) < 0. \] The claim follows from a classical result concerning Dini derivatives, see \cite{Gal57}. \end{proof} \paragraph{Characterizations of ${\sf D}$-stability.} Taking \eqref{eq:Wir} and \eqref{eq:Wsl} into account, we can formulate a characterization of the global ${\sf D}$-stability \eqref{stability}. Since the energy is of the form $\mathcal{E}(t,u)=W(u)-\ell(t)u$, \eqref{stability} is equivalent to \[ W(u(t))-W(v)-\ell(t)(u(t)-v)\le \Psi(v-u(t))+\delta(u(t),v)\quad\text{for every $t\in[a,b]\setminus \Ju$, $v\in \mathbb{R}$}.
\] Dividing by $v-u(t)$ and taking the infimum over $v>u(t)$, or the supremum over $v<u(t)$, for every $t\in[a,b]\setminus \Ju$ we get the system of inequalities \begin{equation} \label{eq:1d-stability} -\alpha_- \le \ell(t)-W'_{{\sf sl},\delta}(u(t))\le \ell(t)-W'(u(t))\le \ell(t)-W'_{{\sf ir},\delta}(u(t))\le\alpha_+, \tag{${\mathrm S}_{{\sf D}, \mathbb{R}}$} \end{equation} which is the one-dimensional version of the global ${\sf D}$-stability. The continuity of the $\delta$-corrected one-sided slopes also yields, for every $t\in (a,b)$, \begin{gather} -\alpha_- \le \ell(t)-W'_{{\sf sl},\delta}(\ur(t))\le \ell(t)-W'(\ur(t))\le \ell(t)-W'_{{\sf ir},\delta}(\ur(t))\le\alpha_+, \label{eq:1d-stabilityright} \\ -\alpha_- \le \ell(t)-W'_{{\sf sl},\delta}(\ul(t))\le \ell(t)-W'(\ul(t))\le \ell(t)-W'_{{\sf ir},\delta}(\ul(t))\le\alpha_+. \label{eq:1d-stabilityleft} \end{gather} \begin{remark}\upshape The stability region $\mathcal{S}_{\sf D}$ becomes bigger when $\delta$ increases. If we denote by \begin{equation} \mathcal{S}_\infty:=\{(t,u)\in [a,b]\times \mathbb{R}: \mathrm{D}_u\mathcal{E}(t,u)\in K^*\}, \end{equation} where $K^*$ is defined in \eqref{eq:K*}, the set of points which satisfy the \emph{local stability} condition typical of BV solutions, \cite{Mielke-Rossi-Savare12,Mielke-Rossi-Savare13}, it is immediate to check that \[ \mathcal{S}_{\sf d}\subseteq\mathcal{S}_{\sf D} \subseteq \mathcal{S}_\infty\quad\text{for every admissible viscous correction}. \] The first inclusion is an equality if $\delta\equiv 0$. If the energy satisfies the $\delta$-convexity property \eqref{eq:41}, or, equivalently, if $\delta$ is chosen large enough, from Proposition \ref{prop:1} we get $\mathcal{S}_{\sf D}=\mathcal{S}_\infty$.
\end{remark} \subsection{Visco-Energetic Maxwell rule} After the brief discussion about stability in Section \ref{subsec:stability}, we now focus on jumps. In this section we show a relation between the minimal sets \eqref{eq:107} and the one-sided global slopes $W'_{{\sf ir},\delta}$ and $W'_{{\sf sl},\delta}$, along with some geometrical interpretations of the results. \begin{proposition} \label{prop:3} Let $t,u\in\mathbb{R}$. Suppose that $z\in {\mathrm M}(t,u)$. Then \begin{gather} W'_{{\sf ir},\delta}(v)\le\frac{W(z)-W(v)}{z-v}+\frac{\delta(v,z)}{z-v}< \ell(t)-\alpha_+ \quad \text{if $u< v<z$}, \label{eq:8} \\ W'_{{\sf sl},\delta}(v)\ge \frac{W(z)-W(v)}{z-v}+\frac{\delta(v,z)}{z-v}> \ell(t)+\alpha_- \quad \text{if $u> v>z$}. \label{eq:9} \end{gather} Moreover, if $u\in \mathcal{S}_{\sf D}(t)$ the following identities hold: \begin{gather} W'_{{\sf ir},\delta}(u)=\frac{W(z)-W(u)}{z-u}+\frac{\delta(u,z)}{z-u}=\ell(t)-\alpha_+ \quad \text{if $z>u$},\label{eq:10} \\ W'_{{\sf sl},\delta}(u)=\frac{W(z)-W(u)}{z-u}+\frac{\delta(u,z)}{z-u}= \ell(t)+\alpha_- \quad \text{if $z<u$}. \label{eq:11} \end{gather} \end{proposition} \begin{proof} Let us consider the case $z>u$. From the minimality of $z$, for every $v\in (u,z)$ we get \[ W(z)-W(v)-\ell(t)(z-v)\le -\alpha_+(z-v)+\delta(u,v)-\delta(u,z). \] Taking \eqref{eq:delta2} into account and dividing by $z-v$ we get \[ \frac{W(z)-W(v)}{z-v}-\ell(t)< -\alpha_+ -\frac{\delta(v,z)}{z-v}, \] which proves \eqref{eq:8}.
If $u\in \mathcal{S}_{\sf D}(t)$, we can combine the one-dimensional ${\sf D}$-stability condition \eqref{eq:1d-stability} with \eqref{eq:8}, where we pass to the limit as $v\downarrow u$, and we get \[ W'_{{\sf ir},\delta}(u)\le\frac{W(z)-W(u)}{z-u}+\frac{\delta(u,z)}{z-u}\le\ell(t)-\alpha_+\le W'_{{\sf ir},\delta}(u), \] so that all the previous inequalities are identities and \eqref{eq:10} is proved. The case $z<u$ can be proved in a similar way. \end{proof} \begin{remark} \label{rem:1} \upshape Notice that the strict inequality in \eqref{eq:delta2} implies \begin{align} \text{if $z\in {\mathrm M}(t,u)$, $z>u$, then}\quad W'_{{\sf ir},\delta}(v)<\ell(t)-\alpha_+\quad \forall v\in (u,z), \label{eq:16}\\ \text{if $z\in {\mathrm M}(t,u)$, $z<u$, then}\quad W'_{{\sf sl},\delta}(v)>\ell(t)+\alpha_-\quad \forall v\in (z,u). \label{eq:17} \end{align} In particular $v\not\in \mathcal{S}_{\sf D}(t)$, since \eqref{eq:16} and \eqref{eq:17} contradict the global stability \eqref{eq:1d-stability}. These inequalities will be one of the key ingredients for the characterization Theorem \ref{thm:main-theorem}. \end{remark} \paragraph{${\sf D}$-Maxwell rule.} The equalities \eqref{eq:10} and \eqref{eq:11} admit a nice geometrical interpretation. Suppose that $u$ is a Visco-Energetic solution, $t\in\Ju$, and that there exists $z\in {\mathrm M}(t,\ul(t))$ with $z>\ul(t)$. According to \eqref{eq:1d-stabilityleft}, $\ul(t)$ is stable, so that we can choose $u=\ul(t)$ in \eqref{eq:10} and we get \begin{equation} \label{eq:12} W(z)=W(\ul(t))+(\ell(t)-\alpha_+)(z-\ul(t))-\delta(\ul(t),z).
\end{equation} This identity is a generalization of the so-called \emph{Maxwell rule}: in the energetic case, combining global stability and energy balance, we easily get $z=\ur(t)$, so that \eqref{eq:12} assumes the classical formulation \begin{equation} \int_{\ul(t)}^{\ur(t)}\Big( W'(r)-\ell(t)+\alpha_+ \Big)\,\mathrm{d} r=0. \end{equation} \par Considering for simplicity the choice $\delta(u,v):=\frac{\mu}{2}(v-u)^2$, for some parameter $\mu>0$, when $W'(\ul(t))=\ell(t)-\alpha_+$ \eqref{eq:12} can be rewritten in the form \[ W(z)=W(\ul(t))+W'(\ul(t))(z-\ul(t))-\frac{\mu}{2}(z-\ul(t))^2. \] This means that we can have a jump only when the signed area between the graph of $W'$ and the straight line through $(\ul(t),W'(\ul(t)))$ with slope $-\mu$ vanishes. If $\mu$ is big enough, the area is always positive and ${\mathrm M}(t,u)=\{u\}$. In this case the description of the jump transition is more complicated (see Sections \ref{subsec:main-theorem} and \ref{sec:4} for more details). \subsection{Main characterization Theorem} \label{subsec:main-theorem} In this section we exhibit an explicit characterization of Visco-Energetic solutions for a general (i.e.~non monotone) external loading $\ell$. This result is the analogue of \cite[Theorem 3.1]{Rossi-Savare13} and \cite[Theorem 5.1]{Rossi-Savare13} for Energetic and BV solutions, respectively. \begin{theorem}[1d-characterization of VE solutions] \label{thm:main-theorem} Let $u\in \mathrm{BV}([a,b];\mathbb{R})$ be a Visco-Energetic solution of the rate-independent system $(\mathbb{R},\mathcal{E},\Psi,\delta)$.
Then the following properties hold: \begin{enumerate}[a)] \item $u$ satisfies the 1d-stability condition \eqref{eq:1d-stability} for every $t\in[a,b]\setminus \Ju$ (and therefore \eqref{eq:1d-stabilityright} and \eqref{eq:1d-stabilityleft} as well); \item $u$ satisfies the following precise formulation of the doubly nonlinear differential inclusion: \begin{gather} W'(\ur(t))=W'_{{\sf ir},\delta}(\ur(t))=\ell(t)-\alpha_+ \quad \text{ for every $t\in \mathrm{supp}\left((u')^+\right)\cap [a,b)$}, \label{eq:equation1} \\ W'(\ur(t))=W'_{{\sf sl},\delta}(\ur(t))=\ell(t)+\alpha_- \quad \text{ for every $t\in \mathrm{supp}\left((u')^-\right)\cap [a,b)$}; \label{eq:equation2} \end{gather} \item at each point $t\in \Ju$, $u$ fulfils the jump conditions \begin{equation} \label{eq:jump1} \min(\ul(t),\ur(t))\le u(t) \le \max(\ul(t),\ur(t)) \end{equation} and \begin{equation} \label{eq:jump2} W'_{{\sf ir},\delta}(v)\le \ell(t)-\alpha_+\quad \text{if $\ul(t)< \ur(t)$},\quad W'_{{\sf sl},\delta}(v)\ge \ell(t)+\alpha_-\quad \text{if $\ul(t)>\ur(t)$}, \end{equation} for every $v$ such that $\min\left(\ul(t),\ur(t)\right)\le v\le \max\left(\ul(t),\ur(t)\right)$. \end{enumerate} \par \noindent Conversely, let $u\in\mathrm{BV}([a,b],\mathbb{R})$ be a curve satisfying \eqref{eq:equation1}, \eqref{eq:equation2}, \eqref{eq:jump1}, \eqref{eq:jump2}, along with the following modified version of a): \begin{enumerate}[a)] \item[a')] $u$ satisfies the 1d-stability condition \eqref{eq:1d-stability} for every $t\in (a,b)$. \end{enumerate} Then $u$ is a Visco-Energetic solution of the rate-independent system $(\mathbb{R},\mathcal{E},\Psi,\delta)$.
\end{theorem} \par Since any jump point belongs either to the support of $\left(u'\right)^+$ or to that of $\left(u'\right)^-$, combining \eqref{eq:equation1} or \eqref{eq:equation2} with \eqref{eq:jump1} and \eqref{eq:jump2}, we also get at every $t\in \Ju\cap (a,b)$ \begin{gather} W'_{{\sf ir},\delta}(\ul)=W'_{{\sf ir},\delta}(\ur)=W'(\ur)=\ell(t)-\alpha_+\quad \text{ if $\ul<\ur$}, \label{eq:6}\\ W'_{{\sf sl},\delta}(\ul)=W'_{{\sf sl},\delta}(\ur)=W'(\ur)=\ell(t)+\alpha_-\quad \text{ if $\ul>\ur$}, \label{eq:7} \end{gather} and these identities still hold at $t=a$ or $t=b$ if $u(a)\in \mathcal{S}_{\sf D}(a)$ or $u(b)\in \mathcal{S}_{\sf D}(b)$. \begin{remark} \upshape For a full characterization of Visco-Energetic solutions we need \eqref{eq:1d-stability} to hold also when $t\in \Ju$. This condition is required just to recover the first and the second equalities in \eqref{Jve} from the third. However, it is quite natural: if $u$ is a Visco-Energetic solution, we can consider the left-continuous function \[ \tilde{u}\in \mathrm{BV}([a,b],\mathbb{R})\quad\text{such that $\tilde{u}(t):=\ul(t)$ for every $t\in[a,b]$}. \] Then $\tilde{u}$ is still a Visco-Energetic solution and $\tilde{u}(t)$ is stable for every $t\in(a,b]$. \end{remark} \begin{proof}[Proof of Theorem \ref{thm:main-theorem}] We split the argument into various steps. \par \underline{Claim 1}. \emph{${\sf D}$-stability \eqref{stability} is equivalent to \eqref{eq:1d-stability}}.\newline This is a consequence of the choice $\mathcal{E}(t,u)=W(u)-\ell(t)u$; see the discussion in Section \ref{subsec:stability}. \par \underline{Claim 2}.
\emph{\eqref{Jve} implies the jump conditions \eqref{eq:jump1} and \eqref{eq:jump2}}.\newline From the general properties of the viscous dissipation cost, there exists an optimal transition $\vartheta\in {\mathrm C}(E;\mathbb{R})$ connecting $\ul(t)$ and $\ur(t)$, namely \[ \vartheta(E^-)=\ul(t),\quad\vartheta(E^+)=\ur(t),\quad {\sf c}(t,\ul(t),\ur(t))={\sf C}(t,\vartheta,E). \] Since \eqref{Jve} holds, we can apply Theorem \ref{prop:2}. Let us start from the case $\ul(t)<\ur(t)$ and let $v\in [\ul(t),\ur(t)]$. If $v\notin \vartheta(E)$, which is compact, there exists an open interval $I\subset [\ul(t),\ur(t)]\setminus \vartheta(E)$ such that $v\in I$. From \eqref{eq:121} \[ \vartheta(I^+)\in{\mathrm M}(t,\vartheta(I^-)), \] so that, by Proposition \ref{prop:3}, we get $W'_{{\sf ir},\delta}(v)\le \ell(t)-\alpha_+$. By continuity, the inequality still holds if $v\in \vartheta(E)$ is isolated in $\vartheta(E)\cap[v,+\infty)$. Otherwise, $v\in {\mathrm L}\big(\vartheta(E)\cap [v,+\infty)\big)$, where ${\mathrm L}$ denotes the set of limit points. From \eqref{eq:120} we have \[ \mathcal{E}(t,v)\ge \mathcal{E}(t,v_1)+\alpha_+(v_1-v)\quad \text{for every $v_1\in \vartheta(E)$,\quad $v_1> v$}, \] which yields \[ W'_{{\sf ir},\delta}(v)\le \frac{W(v_1)-W(v)}{v_1-v}+\frac{\delta(v,v_1)}{v_1-v}\le \ell(t)-\alpha_+ +\frac{\delta(v,v_1)}{v_1-v}. \] Passing to the limit as $v_1\downarrow v$, we conclude that \eqref{eq:jump2} holds in $[\ul(t),\ur(t))$. By continuity, it still holds at $v=\ur(t)$. The case $\ul(t)>\ur(t)$ can be proved in a similar way. \par The property \eqref{eq:jump1} easily follows by summing the identities of the jump conditions \eqref{Jve}, thus obtaining \[ {\sf c}(t,\ul(t),\ur(t))={\sf c}(t,\ul(t),u(t))+{\sf c}(t,u(t),\ur(t)), \] and considering the additivity properties of the cost \eqref{eq:195}. \par \underline{Claim 3}.
\emph{The jump conditions \eqref{eq:jump1}, \eqref{eq:jump2} and a') imply \eqref{Jve}}.\newline Let us start again with $\ul(t)<\ur(t)$. We still want to apply Theorem \ref{prop:2}: we need to find an admissible transition $\vartheta\in {\mathrm C}(E;\mathbb{R})$ which satisfies \eqref{eq:120} and \eqref{eq:121}. To define such a transition, let us consider \[ S:=\{v\in [\ul(t),\ur(t)]: W'_{{\sf ir},\delta}(v)=\ell(t)-\alpha_+\text{ and } W'_{{\sf sl},\delta}(v)\le \ell(t)+\alpha_- \}. \] The set $S$ is compact; hence there exists a sequence of disjoint open intervals $I_k$ such that $[S^-,S^+]\setminus S=\bigcup_{k=0}^\infty I_k$. Let us fix for a moment one of these $I_k$. Taking into account assumption \eqref{eq:W'-assumption}, only two cases can occur. \par \begin{enumerate}[a)] \item[-] \emph{Case 1: ``the initial jump''}. The infimum in $W'_{{\sf ir},\delta}(I_k^-)$ is attained at a point $z>I_k^-$.\newline From $W'_{{\sf ir},\delta}(I_k^-)=\ell(t)-\alpha_+$ and \eqref{eq:10} we recover the energy balance \begin{equation} \mathcal{E}(t,z)+{\sf D}(I_k^-,z)=\mathcal{E}(t,I_k^-). \end{equation} Arguing as in Proposition \ref{prop:3}, $W'_{{\sf ir},\delta}(v)<\ell(t)-\alpha_+$ for every $v\in(I_k^-,z)$, so that $z\in \overline{I_k}$. We can thus define by induction the sequence $(u_n^k)$ such that \[ u^k_0:=z,\quad u_{n+1}^k=u_n^k\quad\text{if $u_n^k=I_k^+$},\quad u^k_{n+1}\in {\mathrm M}(t,u_n^k) \quad \text{otherwise}. \] Notice that from Proposition \ref{prop:3} and Remark \ref{rem:1}, by induction we easily get $u_n^k\in \overline{I_k}$ for every $n\in \mathbb{N}$. Moreover, \begin{equation} \Psi(u_{n+1}^k-u_n^k)\le \mathcal{E}(t,u_n^k)-\mathcal{E}(t,u_{n+1}^k), \end{equation} so that $(u_n^k)$ is a Cauchy sequence and hence it converges to some $\bar{u}^k\in \overline{I_k}$.
From the general properties of the residual stability function we have \begin{equation} \label{eq:19} \mathrm{Res}(t,u_n^k)=\mathcal{E}(t,u_n^k)-\mathcal{E}(t,u_{n+1}^k)-{\sf D}(u_n^k,u_{n+1}^k). \end{equation} By passing to the limit in \eqref{eq:19} we get \[ \mathrm{Res}(t,\bar{u}^k)=0,\quad \text{so that $\bar{u}^k\in S$}, \] which means $\bar{u}^k\in\{I_k^-,I_k^+\}$. In addition, $\bar{u}^k\neq I_k^-$, since $\mathcal{E}(t,u_{n+1}^k)<\mathcal{E}(t,u_n^k)$ every time that $u_{n+1}^k\neq u_n^k$, which implies $\mathcal{E}(t,\bar{u}^k)<\mathcal{E}(t,I_k^-)$. Finally, we conclude that $\bar{u}^k=I_k^+$ and we set $E_k:=\bigcup_{n=0}^\infty \{u_n^k\}$. \item[-] \emph{Case 2: ``the (double) chain''}. $W'(I_k^-)< \frac{W(z)-W(I_k^-)+\delta(I_k^-,z)}{z-I_k^-}$ for every $z>I_k^-$.\newline In this case $W'_{{\sf ir},\delta}(I_k^-)=W'(I_k^-)=\ell(t)-\alpha_+$. The energy $\mathcal{E}(t,u)=W(u)-(W'(I_k^-)+\alpha_+)u$ has negative derivative at $u=I_k^-$, so that it is decreasing in a neighborhood of $I_k^-$. Let us choose $\varepsilon>0$ such that $\mathcal{E}(t,I_k^-+\varepsilon)<\mathcal{E}(t,I_k^-)$. We can thus define by induction the following sequence $(u^k_{n,\varepsilon})$: \[ u_{0,\varepsilon}^k:=I_k^-+\varepsilon,\quad u_{n+1,\varepsilon}^k=u_{n,\varepsilon}^k\quad\text{if $u_{n,\varepsilon}^k=I_k^+$},\quad u_{n+1,\varepsilon}^k\in{\mathrm M}(t,u_{n,\varepsilon}^k)\quad\text{otherwise}. \] As in the previous case, this sequence is well defined and it converges to $I_k^+$. In order to pass to the limit as $\varepsilon\downarrow 0$, we apply a compactness argument: we consider the family of sets \[ E_{k,\varepsilon}:=\bigcup_{n=0}^\infty\{u_{n,\varepsilon}^k\}\cup \{I_k^+\}. \] The sets $E_{k,\varepsilon}$ are compact and contained in $\overline{I_k}$.
We can apply Kuratowski's compactness theorem (see e.g.~\cite{Kuratowski55}): there exists a compact subset $E_k\subseteq \overline{I_k}$ such that, up to a subsequence, $E_{k,\varepsilon}\rightarrow E_k$ in the Hausdorff metric. It is easy to check (see \cite[Lemma 3.11]{MinSav16}) that $E_k^-=I_k^-$, $E_k^+=I_k^+$ and \begin{equation} \label{eq:28} z\in {\mathrm M}(t,z_{E_k}^-)\quad\text{for every $z\in E_k$}, \end{equation} where $z_{E_k}^-$ is defined in \eqref{eq:44}. \end{enumerate} In conclusion, we repeat this construction for every open interval $I_k$ and we consider $E:=\bigcup_{k=0}^{\infty} E_k\cup S$. Notice that $E^-=\ul(t)$, $E^+=\ur(t)$ and $E$ is a compact subset of $\mathbb{R}$. Indeed, $E$ is bounded and if $(x_n)$ is a sequence in $E$ that accumulates at some point $\bar{x}$, by construction $x_n$ is eventually contained in one of the sets $E_k$ or in $S$, which are compact. \par We can thus consider the curve \[ \vartheta:E\rightarrow \mathbb{R} \quad\text{such that $\vartheta(z)=z$ for every $z\in E$}: \] it is an admissible transition connecting $\ul(t)$ and $\ur(t)$, with $\vartheta(E)\ni u(t)$ thanks to a'). It remains to prove that $\vartheta$ satisfies \eqref{eq:120} and \eqref{eq:121}. \par Concerning \eqref{eq:120}, for every $I\in {\mathfrak H}(E)$, by construction $\vartheta(I^+)\in{\mathrm M}(t,\vartheta(I^-))$, so that \begin{equation} \label{eq:45} \mathrm{Var}_\Psi(\vartheta, E\cap [I^-,I^+])=\Psi(\vartheta(I^+)-\vartheta(I^-))\le \mathcal{E}(t,\vartheta(I^-))-\mathcal{E}(t,\vartheta(I^+)). \end{equation} When $s\in {\mathrm L}\big(\vartheta(E)\cap (-\infty,s]\big)$ we get $\vartheta(s)\in S$, so that $W'_{{\sf sl},\delta}(\vartheta(s))\le \ell(t)+\alpha_-$.
In particular, since $\vartheta$ is increasing, \[ \frac{W(\vartheta(r))-W(\vartheta(s))+\delta(\vartheta(s),\vartheta(r))}{\vartheta(r)-\vartheta(s)}\le \ell(t)+\alpha_-\quad\text{for every $r<s$}. \] After a straightforward computation, using \eqref{eq:delta} and passing to the limit we get \begin{equation} \label{eq:46} \limsup_{r\uparrow s}\frac{\mathcal{E}(t,\vartheta(r))-\mathcal{E}(t,\vartheta(s))}{\mathrm{Var}_\Psi\big(\vartheta,E\cap [r,s]\big)}\ge 1. \end{equation} We can thus recover \eqref{eq:120} from \eqref{eq:45} and \eqref{eq:46} by using Lemma \ref{le:elementary}, where we set $f(s):=-\mathcal{E}(t,\vartheta(s))$ and $g(s):=\mathrm{Var}_\Psi(\vartheta,E\cap [E^-,s])$. \par Finally, \eqref{eq:121} holds by construction if $r$ is isolated in $E\cap (-\infty,r]$. Otherwise, $r_E^-=r$ and it is still satisfied. In conclusion, by Theorem \ref{prop:2}, $\vartheta$ is an optimal transition satisfying the third identity in \eqref{Jve}. Considering the restriction of $\vartheta$ to $E\cap [\ul(t),u(t)]$ and $E\cap [u(t),\ur(t)]$ we also get the first two identities of \eqref{Jve}. \par \underline{Claim 4}. \emph{b) is equivalent to the doubly nonlinear equation \eqref{eq:DN0}.} \newline We notice that \eqref{eq:DN0} yields \begin{equation} \label{eq:47} W'(u(t))= \ell(t)-\alpha_+\quad \text{for $\left(u'_{\mathrm{co}}\right)^+$-a.e. $t\in (a,b)$}, \end{equation} so that \eqref{eq:equation1} holds by continuity and by \eqref{eq:1d-stabilityright} in $\mathrm{supp}\left((u')^+\right)\setminus \Ju$. On the other hand, for every $t\in \Ju\cap \,\mathrm{supp}\left((u')^+\right)$ we have $\ul(t)<\ur(t)$. From \eqref{eq:1d-stabilityright} and \eqref{eq:jump2}, $\ell(t)-\alpha_+=W'_{{\sf ir},\delta}(\ur(t))$, and then combining Proposition \ref{prop: Wir.prop} and \eqref{eq:jump2} again we get \[ W'(\ur(t))=W'_{{\sf ir},\delta}(\ur(t))=\ell(t)-\alpha_+, \] which proves \eqref{eq:equation1}.
The identities in \eqref{eq:equation2} follow by the same argument. \par The converse implication is trivial since $\mu$ is diffuse and therefore $\ul(t)=\ur(t)=u(t)$ for $\mu$-a.e.\ $t\in (a,b)$. Then \eqref{eq:DN0} follows by combining \eqref{eq:equation1}, \eqref{eq:equation2} and \eqref{eq:1d-stability}. \end{proof} The previous general result has a simple consequence: a Visco-Energetic solution is locally constant in a neighborhood of a point where the stability condition \eqref{eq:1d-stability} holds with a strict inequality. \begin{corollary} \label{cor:main-theorem-corollary} Let $u\in \mathrm{BV}([a,b];\mathbb{R})$ be a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$. Then $u$ is locally constant in the open set \[ {\ensuremath{\mathcal I}}:=\left\{t\in [a,b]: -\alpha_-<\ell(t)-W'_{{\sf l},\delta}(u(t))\le \ell(t)-W'_{{\sf r},\delta}(u(t))<\alpha_+\right\}. \] \end{corollary} \begin{proof} By \eqref{eq:jump2} any $t\in {\ensuremath{\mathcal I}}$ is a continuity point for $u$; the continuity properties of $W'_{{\sf r},\delta}(\cdot)$ and $W'_{{\sf l},\delta}(\cdot)$ then show that a neighborhood of $t$ is also contained in ${\ensuremath{\mathcal I}}$, so that ${\ensuremath{\mathcal I}}$ is open and disjoint from $\Ju$. Relations \eqref{eq:equation1} and \eqref{eq:equation2} then yield that \[ u'=0 \quad \text{in the sense of distributions in ${\ensuremath{\mathcal I}}$}, \] so that $u$ is locally constant. \end{proof} \paragraph{Example.} We conclude this section with the classic example of the double-well potential energy $W(u)=\frac{1}{4}(u^2-1)^2$. \begin{figure} \caption{Visco-Energetic solution of a double-well potential energy with an oscillating external loading and a quadratic viscous correction $\delta$.} \end{figure} This energy clearly satisfies \eqref{eq:W'-assumption}.
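The qualitative picture of this example can be previewed with a short numerical experiment. The sketch below is ours, not part of the paper: it discretizes $u$ on a grid and updates it by inverting the running maximum (resp.\ minimum) of $W'(u)=u^3-u$ whenever the load reaches the activation thresholds $\ell(t)-\alpha_+$ (resp.\ $\ell(t)+\alpha_-$), with $\ell(t)=\sin(t)$ and $\alpha_\pm=1/2$; the grid and time resolutions are arbitrary illustrative choices.

```python
import numpy as np

# Double-well energy W(u) = (u^2 - 1)^2 / 4, so W'(u) = u^3 - u and min W'' = -1.
def dW(u):
    return u**3 - u

alpha = 0.5                                # alpha_+ = alpha_- = 1/2
ts = np.linspace(0.0, 2.0 * np.pi, 2001)   # one loading cycle
ell = np.sin(ts)

grid = np.linspace(-2.0, 2.0, 8001)
# initial datum chosen so that W'(u(a)) = ell(a) - alpha_+
u = grid[np.argmin(np.abs(dW(grid) - (ell[0] - alpha)))]

us = [u]
for l in ell[1:]:
    if l - alpha > dW(u):
        # slide/jump up along the upper monotone envelope of W'
        # until W'(u) reaches l - alpha_+
        right = grid[grid > u]
        env = np.maximum.accumulate(dW(right))   # running max = upper envelope
        idx = np.searchsorted(env, l - alpha)
        if idx < len(right):
            u = right[idx]
    elif l + alpha < dW(u):
        # slide/jump down along the lower monotone envelope of W'
        left = grid[grid < u][::-1]
        env = np.minimum.accumulate(dW(left))    # running min = lower envelope
        idx = np.searchsorted(-env, -(l + alpha))
        if idx < len(left):
            u = left[idx]
    us.append(u)
us = np.asarray(us)
```

Once $\ell(t)-\alpha_+$ exceeds the local maximum $2/(3\sqrt{3})$ of $W'$, the computed solution jumps across the unstable branch of $W'$, and it then stays frozen on the whole descending part of the load until $\ell(t)+\alpha_-$ drops below $W'(u)$: this is the hysteresis loop described in the example.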
Notice also that $W'(u)=u^3-u$ and $\min W''=-1$. Therefore, if we choose $\delta(u,v):=(v-u)^2$, according to Proposition \ref{prop:1} we have $W'_{{\sf r},\delta}=W'_{{\sf l},\delta}=W'$, and we expect a behaviour similar to that of BV solutions, with an optimal transition in the form of a ``double chain'' at every jump point. If the loading is oscillating, for example $\ell(t)=\sin(t)$, $\alpha_\pm=\frac{1}{2}$, and we choose the initial datum such that $W'(u(a))=\ell(a)-\alpha_+$, the result is a loop typical of hysteresis phenomena: the solution $u$ is locally constant when $\ell$ changes direction. \section{Visco-Energetic solutions with monotone loadings} \label{sec:4} Visco-Energetic solutions of rate-independent systems in $\mathbb{R}$, driven by monotone loadings, involve the notion of the \textit{upper and lower monotone} (i.e.\ nondecreasing) \textit{envelopes} of the graphs of $W'_{{\sf r},\delta}$ and $W'_{{\sf l},\delta}$. \par In this section we first focus on a few properties of these maps and their inverses, and then we exhibit the explicit formulae characterizing Visco-Energetic solutions when $\ell$ is increasing or decreasing. \subsection{Monotone envelopes of one-sided global slopes} \label{subsec:monotone-envelopes} \begin{definition}[Upper monotone envelope of $W'_{{\sf r},\delta}$] For every $\bar{u}$ in $\mathbb{R}$, we define the maximal monotone map $\textit{\textbf{m}}^{\bar{u}}_{\delta}(\cdot):\mathbb{R}\rightarrow\mathbb{R}$ by \begin{equation} \begin{split} \textit{\textbf{m}}^{\bar{u}}_{\delta}(u):=\max_{\bar{u}\le v\le u}W'_{{\sf r},\delta}(v)&\quad\text{if $u>\bar{u}$},\quad \textit{\textbf{m}}^{\bar{u}}_{\delta}(\bar{u}):=(-\infty,W'_{{\sf r},\delta}(\bar{u})], \\ &\textit{\textbf{m}}^{\bar{u}}_{\delta}(u)=\emptyset\quad \text{if $u<\bar{u}$}.
\end{split} \end{equation} \end{definition} We call $\textit{\textbf{m}}^{\bar{u}}_{\delta}(\cdot)$ the \textit{upper monotone envelope of $W'_{{\sf r},\delta}$} in the interval $(\bar{u},+\infty)$. The \emph{contact set} is defined by \[ C^{\bar{u}}:=\{\bar{u}\}\cup\{u>\bar{u}:W'_{{\sf r},\delta}(u)=\textit{\textbf{m}}^{\bar{u}}_{\delta}(u)\}. \] Thanks to \eqref{eq:W'-assumption}, it is easy to check that \begin{equation} \lim_{v\rightarrow -\infty}W'_{{\sf r},\delta}(v)=-\infty,\quad \lim_{v\rightarrow +\infty}W'_{{\sf r},\delta}(v)=+\infty, \end{equation} so that the map $\textit{\textbf{m}}^{\bar{u}}_{\delta}(\cdot)$ is monotone and surjective; it is also single-valued on $(\bar{u},+\infty)$ (where, with a slight abuse of notation, we identify the set $\textit{\textbf{m}}^{\bar{u}}_{\delta}(u)$ with its unique element). We can thus consider the inverse graph $\textit{\textbf{p}}^{\bar{u}}_{\delta}(\cdot):\mathbb{R}\rightarrow[\bar{u},+\infty)$ of $\textit{\textbf{m}}^{\bar{u}}_{\delta}(\cdot)$: it is defined by \[ u\in\textit{\textbf{p}}^{\bar{u}}_{\delta}(\ell)\quad\Leftrightarrow\quad\ell\in\textit{\textbf{m}}^{\bar{u}}_{\delta}(u)\quad\text{for $u$, $\ell\in\mathbb{R}$}.
\] Clearly, $\textit{\textbf{p}}^{\bar{u}}_{\delta}(\cdot)$ is a maximal monotone graph in $\mathbb{R}$ and it is uniquely characterized by a left-continuous monotone function $p_{{\sf l},\delta}^{\bar{u}}(\cdot)$ and a right-continuous monotone function $p_{{\sf r},\delta}^{\bar{u}}(\cdot)$ such that \[ \textit{\textbf{p}}^{\bar{u}}_{\delta}(\ell)=[p_{{\sf l},\delta}^{\bar{u}}(\ell),p_{{\sf r},\delta}^{\bar{u}}(\ell)],\quad \text{i.e.}\quad \ell\in\textit{\textbf{m}}^{\bar{u}}_{\delta}(u)\quad\Leftrightarrow\quad p_{{\sf l},\delta}^{\bar{u}}(\ell)\le u \le p_{{\sf r},\delta}^{\bar{u}}(\ell). \] We also consider a further selection in the graph of $\textit{\textbf{p}}^{\bar{u}}_{\delta}(\cdot)$: \[ \textit{\textbf{p}}^{\bar{u}}_{\delta,c}(\ell):=\{u\in \textit{\textbf{p}}^{\bar{u}}_{\delta}(\ell):W'_{{\sf r},\delta}(u)=\ell\}=\{u\in C^{\bar{u}}:\textit{\textbf{m}}^{\bar{u}}_{\delta}(u)\ni\ell\}=\textit{\textbf{p}}^{\bar{u}}_{\delta}(\ell)\cap C^{\bar{u}}. \] By introducing the set \[ A_{\delta}^{\bar{u}}:=\{f:(\bar{u},+\infty)\rightarrow\mathbb{R}:\text{ $f$ is nondecreasing and fulfills $f\ge W'_{{\sf r},\delta}$}\}, \] we have \begin{equation} \label{eq:13} \textit{\textbf{m}}^{\bar{u}}_{\delta}(\cdot)\restr{(\bar{u},+\infty)}\in A_{\delta}^{\bar{u}},\quad W'_{{\sf r},\delta}(u)\le \textit{\textbf{m}}^{\bar{u}}_{\delta}(u)\le f(u) \text{ for all $f\in A_{\delta}^{\bar{u}}$, $u\in(\bar{u},+\infty)$}, \end{equation} so that $\textit{\textbf{m}}^{\bar{u}}_{\delta}$ is the minimal nondecreasing map above the graph of $W'_{{\sf r},\delta}$ in $(\bar{u},+\infty)$. It immediately follows from \eqref{eq:13} that \[ \textit{\textbf{m}}^{\bar{u}}_{\delta}(u)=\inf\{f(u): f\in A_{\delta}^{\bar{u}}\}\quad \text{for all $u>\bar{u}$}.
\] The following result collects some simple properties of $p_{{\sf l},\delta}^{\bar{u}}(\cdot)$ and $p_{{\sf r},\delta}^{\bar{u}}(\cdot)$. \begin{proposition} Assume \eqref{eq:W'-assumption}. Then for every $\ell\ge W'_{{\sf r},\delta}(\bar{u})$ there holds \begin{equation} \label{eq:14} W'_{{\sf r},\delta}(u)\le \ell\quad \text{if $u\in [\bar{u},p_{{\sf r},\delta}^{\bar{u}}(\ell)]$}. \end{equation} Moreover, for every $\ell\in\mathbb{R}$ we have \begin{equation} \label{eq:15} p_{{\sf l},\delta}^{\bar{u}}(\ell)=\min \{u\ge \bar{u}:W'_{{\sf r},\delta}(u)\ge\ell\},\quad p_{{\sf r},\delta}^{\bar{u}}(\ell)=\inf\{u\ge \bar{u}: W'_{{\sf r},\delta}(u)>\ell\}. \end{equation} \end{proposition} \begin{proof} Property \eqref{eq:14} is an immediate consequence of the inequality $W'_{{\sf r},\delta}\le \textit{\textbf{m}}^{\bar{u}}_{\delta}(\cdot)$ in $[\bar{u},+\infty)$. \newline To prove the first identity in \eqref{eq:15} it is sufficient to notice that \[ W'_{{\sf r},\delta}(u)\le \textit{\textbf{m}}^{\bar{u}}_{\delta}(u)<\ell \quad \text{if }\bar{u}\le u< p_{{\sf l},\delta}^{\bar{u}}(\ell), \] and $\textit{\textbf{m}}^{\bar{u}}_{\delta}(u)=\ell$ if $u=p_{{\sf l},\delta}^{\bar{u}}(\ell)$. For the second identity in \eqref{eq:15}, we observe that, when $u>p_{{\sf r},\delta}^{\bar{u}}(\ell)$, we have $\textit{\textbf{m}}^{\bar{u}}_{\delta}(u)>\ell$, and we know that there exists $v\in [p_{{\sf r},\delta}^{\bar{u}}(\ell),u]$ such that $W'_{{\sf r},\delta}(v)>\ell$. Since $u$ is arbitrary we get \[ p_{{\sf r},\delta}^{\bar{u}}(\ell)\ge \inf\{u\ge \bar{u}: W'_{{\sf r},\delta}(u)>\ell\}. \] The converse inequality follows from \eqref{eq:14}.
\end{proof} \par In a completely similar way we can introduce the \textit{maximal monotone map} below the graph of $W'_{{\sf l},\delta}$ on the interval $(-\infty,\bar{u}]$. \begin{definition}[Lower monotone envelope of $W'_{{\sf l},\delta}$] For every $\bar{u}$ in $\mathbb{R}$, we define the maximal monotone map $\textit{\textbf{n}}^{\bar{u}}_{\delta}(\cdot):\mathbb{R}\rightarrow\mathbb{R}$ by \begin{equation} \begin{split} \textit{\textbf{n}}^{\bar{u}}_{\delta}(u):=\inf_{u\le v\le \bar{u}}W'_{{\sf l},\delta}(v)&\quad\text{if $u<\bar{u}$},\quad \textit{\textbf{n}}^{\bar{u}}_{\delta}(\bar{u}):=[W'_{{\sf l},\delta}(\bar{u}),+\infty), \\ &\textit{\textbf{n}}^{\bar{u}}_{\delta}(u)=\emptyset\quad \text{if $u>\bar{u}$}. \end{split} \end{equation} \end{definition} This map satisfies \[ \textit{\textbf{n}}^{\bar{u}}_{\delta}(u)=\sup \{f(u):f\in B_{\delta}^{\bar{u}}\}\quad\text{for $u<\bar{u}$}, \] where \[ B_{\delta}^{\bar{u}}:=\{f:(-\infty,\bar{u})\rightarrow\mathbb{R}: \text{ $f$ is nondecreasing and fulfills $f\le W'_{{\sf l},\delta}$} \}. \] As before, the inverse graph $\textit{\textbf{q}}^{\bar{u}}_{\delta}(\cdot):=(\textit{\textbf{n}}^{\bar{u}}_{\delta}(\cdot))^{-1}:\mathbb{R}\rightarrow (-\infty,\bar{u}]$ can be represented as $\textit{\textbf{q}}^{\bar{u}}_{\delta}(\ell)=[q_{{\sf l},\delta}^{\bar{u}}(\ell),q_{{\sf r},\delta}^{\bar{u}}(\ell)]$, where \[ q_{{\sf l},\delta}^{\bar{u}}(\ell)=\sup \{v\le \bar{u}: W'_{{\sf l},\delta}(v)<\ell\},\quad q_{{\sf r},\delta}^{\bar{u}}(\ell)=\max \{v\le \bar{u}: W'_{{\sf l},\delta}(v)\le\ell\}, \] and we set \[ \textit{\textbf{q}}^{\bar{u}}_{\delta,c}(\ell):=\{u\in \textit{\textbf{q}}^{\bar{u}}_{\delta}(\ell):W'_{{\sf l},\delta}(u)=\ell\}.
\] \subsection{Monotone loadings and Visco-Energetic solutions} We apply the notions introduced in the previous section to characterize Visco-Energetic solutions when $\ell$ is monotone. First of all, we provide an explicit formula yielding Visco-Energetic solutions for an increasing loading $\ell$. The cases of a decreasing and of a piecewise monotone loading can be treated in a similar way. \begin{theorem} \label{thm:monotonicity-theorem1} Let $\bar{u}\in \mathbb{R}$ and let $\ell\in {\mathrm C}^1([a,b])$ be a nondecreasing loading such that \begin{equation} \label{eq:monotone-loading-assumption1} \ell(a)\ge W'_{{\sf l},\delta}(\bar{u})-\alpha_-. \end{equation} Any nondecreasing map $u:[a,b]\rightarrow\mathbb{R}$, with $u(a)=\bar{u}$, such that for every $t\in(a,b]$ \begin{equation}\label{eq:monotone-loading-assumption2} W'_{{\sf l},\delta}(u(t))-\alpha_-\le W'(u(t)),\quad u(t)\in \textit{\textbf{p}}^{\bar{u}}_{\delta,c}(\ell(t)-\alpha_+) \end{equation} is a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$. In particular, \eqref{eq:monotone-loading-assumption2} yields \begin{equation}\label{eq:monotone-loading-characterization1} u(t)\in [p_{{\sf l},\delta}^{\bar{u}}(\ell(t)-\alpha_+), p_{{\sf r},\delta}^{\bar{u}}(\ell(t)-\alpha_+)]\quad\text{for every $t\in (a,b]$}. \end{equation} \end{theorem} \begin{proof} We apply Theorem \ref{thm:main-theorem}. Concerning the global stability condition, notice that \eqref{eq:monotone-loading-assumption2} yields \begin{equation} \label{eq:31} W'_{{\sf r},\delta}(u(t))=W'(u(t))\quad\text{for every $t\in(a,b]$}.
\end{equation} Indeed, if $W'_{{\sf r},\delta}(u(t))\neq W'(u(t))$, then by Proposition \ref{prop: Wir.prop} $W'_{{\sf r},\delta}$ is decreasing in a neighborhood of $u(t)$, which contradicts the second condition in \eqref{eq:monotone-loading-assumption2}. Therefore, the first condition in \eqref{eq:monotone-loading-assumption2}, combined with \eqref{eq:31}, gives \eqref{eq:1d-stability} for every $t\in(a,b]$. \par To check the equation \eqref{eq:equation1}, we set \[ \gamma:=\inf \{t>a: u(t)>u(a)\}. \] If $W'_{{\sf r},\delta}(u(a))<\ell(a)-\alpha_+$, then from \eqref{eq:monotone-loading-assumption2} $a\in \Ju$ and $W'_{{\sf r},\delta}(\ur(a))=\ell(a)-\alpha_+$. Otherwise, $u(a)$ satisfies the stability condition and $u$ is clearly a constant Visco-Energetic solution on $[a,\gamma]$. Thus, by Proposition \ref{lem:1}, it is not restrictive to assume that $\gamma=a$. In this case $\ur(t)>u(a)$ for every $t>a$ and by continuity \eqref{eq:monotone-loading-assumption2} yields \begin{equation} \label{eq:32} W'_{{\sf r},\delta}(\ul(t))=W'_{{\sf r},\delta}(\ur(t))=\ell(t)-\alpha_+\quad \text{for every $t\in (a,b)$}, \end{equation} and the second identity still holds at $t=a$. Thus, from \eqref{eq:31}, \eqref{eq:32} and the continuity of $W'$ we finally get \eqref{eq:equation1}. \par To check the jump conditions, let us first notice that combining \eqref{eq:32} and \eqref{eq:monotone-loading-assumption2} we obtain \[ W'_{{\sf r},\delta}(v)\le \ell(t)-\alpha_+\quad\text{ for every $t\in (a,b]$, $\bar{u}\le v\le \ur(t)$}. \] Then, from \eqref{eq:15} and the monotonicity of $u$ we get \begin{equation} \label{eq:35} p_{{\sf l},\delta}^{\bar{u}}(\ell(t)-\alpha_+)\le \ul(t)\le u(t)\le \ur(t)\le p_{{\sf r},\delta}^{\bar{u}}(\ell(t)-\alpha_+)\quad \text{for every $t\in(a,b]$}, \end{equation} which yields \eqref{eq:jump1} and \eqref{eq:jump2} by \eqref{eq:14}.
Moreover, the inequalities in \eqref{eq:35} also show that \eqref{eq:monotone-loading-assumption2} implies \eqref{eq:monotone-loading-characterization1}. \end{proof} \par If the loading is decreasing, a similar result still holds. It can be proved by adapting the proof of Theorem \ref{thm:monotonicity-theorem1}. \begin{theorem} \label{thm:45} Let $\bar{u}\in \mathbb{R}$ and let $\ell\in {\mathrm C}^1([a,b])$ be a nonincreasing loading such that \begin{equation} \ell(a)\le W'_{{\sf r},\delta}(\bar{u})+\alpha_+. \end{equation} Any nonincreasing map $u:[a,b]\rightarrow\mathbb{R}$, with $u(a)=\bar{u}$, such that \begin{equation} \label{eq:34} W'_{{\sf r},\delta}(u(t))+\alpha_+\ge W'(u(t)),\quad u(t)\in \textit{\textbf{q}}^{\bar{u}}_{\delta,c}(\ell(t)+\alpha_-)\quad\text{for every $t\in (a,b]$} \end{equation} is a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$. In particular, \eqref{eq:34} yields \begin{equation} u(t)\in [q_{{\sf l},\delta}^{\bar{u}}(\ell(t)+\alpha_-), q_{{\sf r},\delta}^{\bar{u}}(\ell(t)+\alpha_-)]\quad\text{for every $t\in [a,b]$}. \end{equation} \end{theorem} \begin{remark} The first condition in \eqref{eq:monotone-loading-assumption2} (resp.\ the first condition in \eqref{eq:34}) holds if the energy density $W$ satisfies the $\delta$-convexity assumption \eqref{eq:41}. In this case \[ W'_{{\sf r},\delta}(u)=W'(u)=W'_{{\sf l},\delta}(u)\quad\text{for every $u\in\mathbb{R}$}. \] In particular, it is satisfied if $W$ is $\lambda$-convex, see \eqref{eq:lambda-convex}, and we choose a quadratic $\delta$, tuned by a parameter $\mu\ge\max\{-\lambda,0\}$.
\end{remark} \par The next result shows that, under slightly stronger conditions on the initial datum, any Visco-Energetic solution driven by an increasing loading admits a representation similar to \eqref{eq:monotone-loading-assumption2}: the second inclusion holds for every $t\not\in \Ju$. \begin{theorem}[Nondecreasing loading] \label{thm:monotonicity-main-theorem} Let $\ell\in {\mathrm C}^1([a,b])$ be a nondecreasing loading and let $u\in \mathrm{BV}([a,b];\mathbb{R})$ be a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$ satisfying \begin{gather} \ell(a)\ge W'_{{\sf l},\delta}(u(a))-\alpha_-, \tag{IC1} \\ W'<W'(u(a)) \text{ in a left neighborhood of $u(a)$}\quad \text{if $W'(u(a))=\ell(a)+\alpha_-$}; \tag{IC2} \label{eq:monotonicity-theorem-assumption1} \end{gather} and, for every $z<u(a)$, \begin{equation} \frac{W(z)-W(u(a))+\delta(u(a),z)}{z-u(a)}<W'_{{\sf l},\delta}(u(a)) \quad \text{if $W'_{{\sf l},\delta}(u(a))=\ell(a)+\alpha_-$}.\tag{IC3} \label{eq:monotonicity-theorem-assumpion2} \end{equation} Then, similarly to Theorem \ref{thm:monotonicity-theorem1}, $u$ satisfies \begin{equation} \label{eq:monotonicity-theorem-thesis1} u\text{ is nondecreasing},\quad u(t)\in \textit{\textbf{p}}^{u(a)}_{\delta,c}(\ell(t)-\alpha_+)\quad\text{for every $t\in [a,b]\setminus \Ju$} \end{equation} and therefore \begin{equation} \label{eq:monotonicity-theorem-thesis2} u(t)\in [p_{{\sf l},\delta}^{u(a)}(\ell(t)-\alpha_+), p_{{\sf r},\delta}^{u(a)}(\ell(t)-\alpha_+)]\quad\text{for every $t\in [a,b]$}. \end{equation} \end{theorem} This result is a generalization to the visco-energetic framework of \cite[Theorem 6.3]{Rossi-Savare13}.
One of the technical points there is to avoid the extra assumption \begin{equation} \label{eq:monotonicity-extra-assumption} W'_{\sf l}(u)-\alpha_- \le \ell(t)< W'_{\sf r}(u)+\alpha_+\quad \text{for every $u \in\mathbb{R}$}, \end{equation} which is not immediately satisfied if $\alpha_\pm$ are very small. In our context, if $\delta$ is too small, \eqref{eq:monotonicity-extra-assumption} may still fail even if we replace the one-sided slopes with their $\delta$-corrected versions. \par The next technical lemma helps to solve this issue. Compared with the corresponding result in the energetic setting, we need a more refined analysis of the behaviour of $u$ at jumps. \begin{lemma} \label{lem:monotonicity-proof-lemma} Under the same assumptions of Theorem \ref{thm:monotonicity-main-theorem}, let $a<\sigma'<\sigma\le b$ be such that \begin{equation} \ell(t)-W'(\ur(t))>-\alpha_-=\ell(\sigma)-W'(\ur(\sigma))\quad\text{for every $t\in[\sigma',\sigma)$}. \end{equation} Then $\sigma\notin \Ju$. \end{lemma} \begin{proof} We argue by contradiction and assume that $\sigma\in \Ju$. In view of \eqref{eq:equation1}, necessarily \begin{equation} \label{eq:monotone-loading-ineq2} \ul(\sigma)>\ur(\sigma), \end{equation} and \eqref{eq:equation2} shows that $\ur$ is nondecreasing in $[\sigma',\sigma)$. Moreover, combining \eqref{eq:1d-stabilityleft} and \eqref{eq:jump2}, \[ \ell(\sigma)+\alpha_-=W'_{{\sf l},\delta}(\ul(\sigma))>W'(\ul(\sigma)), \] so that by Proposition \ref{prop: Wir.prop} there exist $\tilde{u}<\ul(\sigma)$ attaining the supremum in the definition of $W'_{{\sf l},\delta}(\ul(\sigma))$ and a neighborhood of $\ul(\sigma)$ in which $W'_{{\sf l},\delta}$ is decreasing. \par We want to prove that $\ul(\sigma)=u(a)$.
We consider the set \[ {\ensuremath{\mathcal P}}:=\{\rho\in [a,\sigma): \ur(t)\equiv \ul(\sigma),\ \ell(t)=\ell(\sigma)\quad \text{for all $t\in[\rho,\sigma)$}\}, \] and we prove that ${\ensuremath{\mathcal P}}=[a,\sigma)$. \underline{Claim 1}. ${\ensuremath{\mathcal P}}\neq \emptyset$ and ${\ensuremath{\mathcal P}}$ is closed in $[a,\sigma)$. \newline We need to show that $\ell$ and $u$ are constant in a left neighborhood of $\sigma$. We already know that they are nondecreasing in $[\sigma',\sigma)$. To show that they are also nonincreasing we argue by contradiction: assume that there exists a sequence $t_n<\sigma$ converging to $\sigma$ such that \[ u_n:=\ur(t_n)\uparrow \ul(\sigma), \quad\ell_n:=\ell(t_n)\uparrow\ell(\sigma) \quad\text{and $u_n+\ell_n<\ul(\sigma)+\ell(\sigma)$}. \] Then, for $n$ large enough, $u_n>\tilde{u}$. If $u_n<\ul(\sigma)$, the global stability \eqref{eq:1d-stabilityright} and the jump condition \eqref{eq:17} yield \[ \ell_n+\alpha_-\ge W'_{{\sf l},\delta}(u_n)>\ell(\sigma)+\alpha_-\ge \ell_n+\alpha_-, \] which is absurd. Similarly, if $\ell_n<\ell(\sigma)$, \[ \ell_n+\alpha_-\ge W'_{{\sf l},\delta}(u_n)\ge\ell(\sigma)+\alpha_-> \ell_n+\alpha_-. \] This proves that ${\ensuremath{\mathcal P}}$ contains a left neighborhood of $\sigma$ and hence is non-empty. Moreover, ${\ensuremath{\mathcal P}}$ is clearly closed in $[a,\sigma)$. \par \underline{Claim 2}. Suppose that $\rho\in {\ensuremath{\mathcal P}}$. Then $\rho\notin \Ju$. \newline By contradiction, suppose that $\rho\in \Ju$. From Proposition \ref{prop: Wir.prop}, $W'_{\sf l}$ is decreasing in an open set containing $\ul(\sigma)=\ur(\rho)$. From \eqref{eq:jump2} the only possibility is that $\ul(\rho)<\ur(\rho)$. \newline Suppose that $\ul(\rho)\le \ur(\sigma)$. Then we consider an optimal transition $\vartheta\in {\mathrm C}(E,\mathbb{R})$ connecting $\ul(\rho)$ and $\ur(\rho)$.
Clearly, $\ur(\sigma)\notin E$: indeed ${\ensuremath{\mathcal E}}(\rho,\ur(\sigma))={\ensuremath{\mathcal E}}(\sigma,\ur(\sigma))$, but by $\mathrm{(J_{VE})}$ the energy is decreasing along a transition: \[ {\ensuremath{\mathcal E}}(\sigma,\ur(\sigma))<{\ensuremath{\mathcal E}}(\sigma,\ul(\sigma))={\ensuremath{\mathcal E}}(\rho,\ur(\rho))<{\ensuremath{\mathcal E}}(\rho,\ur(\sigma)). \] Then $\ur(\sigma)$ must be in a hole of $E$. Combining Theorem \ref{prop:2} and Proposition \ref{prop:3}, \[ W'_{{\sf r},\delta}(\ur(\sigma))<\ell(\sigma)-\alpha_+, \] which contradicts the global stability \eqref{eq:1d-stabilityright}. In a similar way we can discuss the case $\ul(\rho)\ge \ur(\sigma)$. \par \underline{Claim 3}. If $a<\rho\in {\ensuremath{\mathcal P}}$, there exists $\varepsilon>0$ such that $\ell(t)\equiv \ell(\rho)\equiv \ell(\sigma)$ and $u(t)=u(\rho)=\ul(\sigma)$ for every $t\in [\rho-\varepsilon,\rho]$. \newline Thanks to Claim 2, $\rho\notin \Ju$. Then we can argue as in Claim 1, starting from $\ur(\rho)=\ul(\sigma)$ and $\ell(\rho)=\ell(\sigma)$. \par \underline{Conclusion}. ${\ensuremath{\mathcal P}}$ is also open in $[a,\sigma)$ since, for every $\rho\in {\ensuremath{\mathcal P}}\cap (a,\sigma)$, it contains a left neighborhood of $\rho$ (${\ensuremath{\mathcal P}}$ obviously also contains a right neighborhood of $\rho$). Since ${\ensuremath{\mathcal P}}$ is both open and closed, ${\ensuremath{\mathcal P}}=[a,\sigma)$. \par Another application of Claim 2, combined with \eqref{eq:monotonicity-theorem-assumpion2}, which prevents the case $u(a)>\ur(a)$, yields that $a\notin \Ju$, so that $\ul(\sigma)=u(a)$ and $\ell(\sigma)=\ell(a)$. Finally, another application of \eqref{eq:monotonicity-theorem-assumpion2} contradicts \eqref{eq:monotone-loading-ineq2}. \end{proof} With these tools at our disposal, the proof of Theorem \ref{thm:monotonicity-main-theorem} is a simple adaptation of \cite[Theorem 6.3]{Rossi-Savare13}.
For completeness, we report the steps in detail. \begin{proof}[Proof of Theorem \ref{thm:monotonicity-main-theorem}] We again split the argument into various steps. \par \underline{Claim 1}. There exists $\gamma\in [a,b]$ such that $\ell(t)-W'(\ur(t))>-\alpha_-$ for all $t\in (\gamma,b]$ and $u(t)\equiv u(a)$, $\ell(t)\equiv\ell(a)$ in $[a,\gamma]$. \newline Let us consider the set \[ \Sigma:=\{t\in [a,b]: W'(\ur(t))=\ell(t)+\alpha_-\} \] and observe that $t_n\in\Sigma$, $t_n\downarrow t$ imply $t\in \Sigma$. If $a\in \Sigma$, we denote by $\Sigma_a$ the connected component of $\Sigma$ containing $a$ and we set $\gamma:=\sup \Sigma_a$. If $\gamma>a$, then \[ W'(\ur(t))=\ell(t)+\alpha_-\quad\text{for every $t\in[a,\gamma]$}, \] so that by \eqref{eq:equation1} $u$ is nonincreasing in $[a,\gamma]$. Assumption \eqref{eq:monotonicity-theorem-assumpion2} implies that $a\notin \Ju$ and $\ur(a)=u(a)$. Since $\ell$ is also nondecreasing, we conclude by \eqref{eq:monotonicity-theorem-assumption1} and \eqref{eq:equation2} that $u(t)\equiv u(a)$ and $\ell(t)\equiv\ell(a)$ in $[a,\gamma]$; moreover, by the same argument, $\gamma\notin \Ju$, so that $\gamma\in \Sigma$. When $a\notin \Sigma$ we simply set $\gamma:=a$ and $\Sigma_a:=\emptyset$. \par The claim then follows if we show that $\Sigma\setminus\Sigma_a$ is empty. This is trivial if $\gamma=b$. If $\gamma<b$, we suppose $\Sigma\setminus \Sigma_a \neq \emptyset$ and we argue by contradiction. We can find points $\gamma_2>\gamma_1>\gamma$ such that $\gamma_1\notin \Sigma$ and $\gamma_2\in \Sigma$, and we consider $\sigma:=\min(\Sigma\cap [\gamma_1,b])>\gamma_1>\gamma$.
Lemma \ref{lem:monotonicity-proof-lemma} with $\sigma':=\gamma_1$ yields that $\sigma\notin \Ju$, so that we can find $\varepsilon>0$ such that \begin{equation} \label{eq:monotone-loading-ineq3} -\alpha_-<\ell(t)-W'(\ur(t))<\alpha_+\quad\text{for every $t\in(\sigma-\varepsilon,\sigma)$}. \end{equation} Point $b)$ of Theorem \ref{thm:main-theorem} implies that $\ur(t)=u(t)\equiv \ul(\sigma)=\ur(\sigma)$ is constant in $(\sigma-\varepsilon,\sigma)$. Hence, $W'(\ur(t))\equiv W'(\ur(\sigma))=\ell(\sigma)+\alpha_-\ge \ell(t)+\alpha_-$ for every $t\in (\sigma-\varepsilon,\sigma)$, since $\ell$ is nondecreasing and $\sigma\in \Sigma$. This contradicts \eqref{eq:monotone-loading-ineq3}. \par \underline{Claim 2}. $u$ is nondecreasing in $[a,b]$. \newline Relation \eqref{eq:equation2} and Claim 1 imply that $\left( u'\right)^-\left([a,b)\right)=0$, so that $u$ is nondecreasing in $[a,b)$. If $b$ is a jump point, then by \eqref{eq:jump1} $u(b)>\ul(b)$. \par \underline{Claim 3}. Let $B:=\{t\in [\gamma,b]:W'_{{\sf r},\delta}(u(a))+\alpha_+=\ell(t)\}$ and let $\beta:=\min B$ (with the convention $\beta=b$ if $B$ is empty). Then $u(t)\equiv u(a)$ in $[a,\beta)$ and \begin{equation} \label{eq:monotone-loading-identity1} W'_{{\sf r},\delta}(\ul(t))=W'_{{\sf r},\delta}(\ur(t))=\ell(t)-\alpha_+\quad\text{for all $t\in (\beta,b)$}. \end{equation} In particular, $\ul(t)\ge p_{{\sf l},\delta}^{u(a)}(\ell(t)-\alpha_+)$ for all $t\in [a,b]$. \newline The first statement follows from the previous claim and Corollary \ref{cor:main-theorem-corollary}. \newline To prove the second identity in \eqref{eq:monotone-loading-identity1} for $\ur(t)$, we argue by contradiction and suppose that there exists a point $s\in(\beta,b]$ such that $W'_{{\sf r},\delta}(\ur(s))+\alpha_+>\ell(s)$. Then, in view of Corollary \ref{cor:main-theorem-corollary}, $u$ is locally constant around $s$.
Since $\ell$ is nondecreasing, because of \eqref{eq:equation1} we conclude that $u(t)\equiv u(s)$ for every $t \in[\gamma,s]$, so that $s\le \beta$, a contradiction. The first identity in \eqref{eq:monotone-loading-identity1} follows by continuity and by \eqref{eq:equation1}. \newline The last statement is a consequence of \eqref{eq:15}. Notice that we can also take $t=b$, since $W'_{{\sf r},\delta}(\ul(t))=\ell(t)-\alpha_+$ still holds at $t=b$. \par \underline{Claim 4}. For all $t\in [a,b]$ we have $\ur(t)\le p_{{\sf r},\delta}^{u(a)}(\ell(t)-\alpha_+)$. \newline If $\ur(t)=u(a)$ there is nothing to prove. Otherwise, let $t\ge\beta$ and take $z\in (u(a),\ur(t))$. Since $u$ is nondecreasing, there exists $s\in [\beta,t]$ such that $\ul(s)\le z\le \ur(s)$, so that \eqref{eq:jump2} (in the case $s\in\Ju$) or \eqref{eq:equation1} (in the case $\ul(s)=\ur(s)$) yield \[ W'_{{\sf r},\delta}(z)\le\ell(s)-\alpha_+\le \ell(t)-\alpha_+, \] since $\ell$ is nondecreasing. Since $z<\ur(t)$ is arbitrary, the claim follows from the second identity in \eqref{eq:15}. \par \underline{Conclusion}. Combining Claims 2, 3 and 4, we get \[ p_{{\sf l},\delta}^{u(a)}(\ell(t)-\alpha_+)\le \ul(t)\le u(t)\le\ur(t) \le p_{{\sf r},\delta}^{u(a)}(\ell(t)-\alpha_+)\quad \text{for every $t\in [a,b]$}, \] which proves relation \eqref{eq:monotonicity-theorem-thesis2}. Finally, \eqref{eq:monotonicity-theorem-thesis1} follows from \eqref{eq:monotonicity-theorem-thesis2} and \eqref{eq:monotone-loading-identity1}. \end{proof} In a similar way, we can deduce the characterization of Visco-Energetic solutions in the case of a decreasing loading.
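Before stating the analogous result for a nonincreasing loading, we note that the inversion formulas \eqref{eq:15}, on which Claims 3 and 4 above rely, translate directly into a grid computation. The sketch below is illustrative only and not part of the paper: the double-well slope $W'(u)=u^3-u$, the grid and the test levels are our own choices, under the assumption that the $\delta$-corrected slope coincides with $W'$ (as for a quadratic $\delta$ with a large enough parameter).

```python
import numpy as np

# One-sided slope taken as W'(u) = u^3 - u; the delta-corrected slope equals W'
# for a quadratic delta with mu >= -min W'' = 1 (illustrative assumption).
def dW(u):
    return u**3 - u

ubar = -1.5
grid = np.linspace(ubar, 2.0, 20001)
vals = dW(grid)
env = np.maximum.accumulate(vals)   # upper monotone envelope m^{ubar} on the grid

def p_left(ell):
    """First formula of (15): min { u >= ubar : W'_r(u) >= ell }."""
    mask = vals >= ell
    return grid[np.argmax(mask)] if mask.any() else np.inf

def p_right(ell):
    """Second formula of (15): inf { u >= ubar : W'_r(u) > ell }."""
    mask = vals > ell
    return grid[np.argmax(mask)] if mask.any() else grid[-1]

# Critical level: the local maximum of W' at u = -1/sqrt(3).
crit = 2.0 / (3.0 * np.sqrt(3.0))
```

Both selections are nondecreasing in $\ell$ and satisfy $p_{\sf l}\le p_{\sf r}$; for levels just above the critical value $2/(3\sqrt{3})$ the inverse jumps from the left stable branch of $W'$ to the right one, mirroring the jump interval of the envelope at a critical level.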
\begin{theorem}[Nonincreasing loading] Let $\ell\in {\mathrm C}^1([a,b])$ be a nonincreasing loading and let $u\in \mathrm{BV}([a,b];\mathbb{R})$ be a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$ satisfying \begin{gather} \ell(a)\le W'_{{\sf r},\delta}(u(a))+\alpha_+, \\ W'>W'(u(a)) \text{ in a right neighborhood of $u(a)$}\quad \text{if $W'(u(a))=\ell(a)-\alpha_+$}; \end{gather} and, for every $z>u(a)$, \begin{equation} \frac{W(z)-W(u(a))+\delta(u(a),z)}{z-u(a)}>W'_{{\sf r},\delta}(u(a)) \quad \text{if $W'_{{\sf r},\delta}(u(a))=\ell(a)-\alpha_+$}. \end{equation} Then, similarly to Theorem \ref{thm:45}, $u$ satisfies \begin{equation} u\text{ is nonincreasing},\quad u(t)\in \textit{\textbf{q}}^{u(a)}_{\delta,c}(\ell(t)+\alpha_-)\quad\text{for every $t\in [a,b]\setminus \Ju$} \end{equation} and therefore \begin{equation} u(t)\in [q_{{\sf l},\delta}^{u(a)}(\ell(t)+\alpha_-), q_{{\sf r},\delta}^{u(a)}(\ell(t)+\alpha_-)]\quad\text{for every $t\in [a,b]$}. \end{equation} \end{theorem} \paragraph{Example.} We conclude with a final example, involving a more complex potential $W$ (see Figure \ref{fig:7}). When $W\in {\mathrm C}^2([a,b];\mathbb{R})$ and we choose $\delta(u,v):=\frac{\mu}{2}(v-u)^2$ with $\mu\ge-\min W''$, Visco-Energetic solutions follow the monotone envelope of $W'+\alpha_+$. \begin{figure} \caption{Visco-Energetic solutions of a nonconvex energy and an increasing loading. The optimal transition is a combination of sliding and viscous parts.} \label{fig:7} \end{figure} \end{document}
\begin{document} \author{Avraham Aizenbud} \address{Avraham Aizenbud, Faculty of Mathematics and Computer Science, Weizmann Institute of Science, POB 26, Rehovot 76100, Israel} \email{[email protected]} \urladdr{http://www.wisdom.weizmann.ac.il/~aizenr/} \author{Dmitry Gourevitch} \address{Dmitry Gourevitch, Faculty of Mathematics and Computer Science, Weizmann Institute of Science, POB 26, Rehovot 76100, Israel} \email{[email protected]} \urladdr{http://www.wisdom.weizmann.ac.il/~dimagur} \keywords{Vanishing of distributions, spherical spaces, multiplicity, Shalika, Bessel function} \subjclass[2010]{20G05, 22E45, 46F99} \date{\today} \title{Vanishing of certain equivariant distributions on spherical spaces} \maketitle \begin{abstract} We prove the vanishing of $\fz$-eigen distributions on a split real reductive group which change according to a non-degenerate character under the left action of the unipotent radical of the Borel subgroup, and are equivariant under the right action of a spherical subgroup. This is a generalization of a result by Shalika, which concerned the group case. Shalika's result was crucial in the proof of his multiplicity one theorem. We view our result as a step in the study of multiplicities of quasi-regular representations on spherical varieties. As an application we prove non-vanishing of spherical Bessel functions. \end{abstract} \section{Introduction}\label{sec:intro} \subsection{Main results} In this paper we prove the following generalization of Shalika's result \cite[\S 2]{Shal}. \begin{introtheorem}\label{thm:main} Let $G$ be a split real reductive group and let $H$ be a spherical subgroup of $G$. Let $U$ be the unipotent radical of a Borel subgroup $B$ of $G$. Let $\psi$ be a non-degenerate character of $U$ and let $\chi$ be a character of $H$. Let $Z$ be the complement of the union of the open $B\times H$-double cosets in $G$.
Let $\fz$ be the center of the universal enveloping algebra of the Lie algebra $\g$ of $G$. Then there are no non-zero $\fz$-eigen $(U\times H,\psi\times \chi)$-equivariant distributions supported on $Z.$} \end{introtheorem} This result in the group case (\cite[\S 2]{Shal}) was crucial in the proof of Shalika's multiplicity one theorem. Our proof begins by applying the technique used by Shalika. However, this technique was not enough for this generality and we had to complement it by using integrability of the singular support, as in \cite{AGAMOT}. Theorem \operatorname{Re}f{thm:main} provides a new tool for the study of the multiplicities of the irreducible quotients of the quasi-regular representation of $G$ on Schwartz functions on $G/H$, see \S\S \operatorname{Re}f{subsec:qr} below for more details. \subsection{Non-vanishing of spherical Bessel functions} Another application of Theorem \operatorname{Re}f{thm:main} is to the study of spherical Bessel distributions and functions. \begin{defn} Let $G$ be a split real reductive group, and $H \subset G$ be a spherical subgroup. Let $(\pi,V)$ be a (smooth) irreducible admissible representation of $G$. Let $varietyphii$ be an \DimaA{$(H,\chi)$-equivariant} continuous functional on $V$ and $v$ be a $(U,\psi)$-equivariant continuous functional on the contragredient representation $\tilde V$. Define the \emph{spherical Bessel distribution} by $$\xi_{v,varietyphii}(f):=\langle v, \pi^*(f) varietyphii \rangle. $$ Define the \emph{spherical Bessel function} to be the restriction $j_{v,varietyphii}:=\xi_{v,varietyphii}|_{\DimaA{G} - Z}$. \end{defn} It is well-known that $j_{v,varietyphii}$ is a smooth function. Theorem \operatorname{Re}f{thm:main} easily implies the following corollary. \begin{introcor}\label{cor:SpherChar} Suppose that \DimaA{$v$ and $varietyphii$ are non-zero.} Then $j_{v,varietyphii} \neq 0$. 
\end{introcor} \subsection{Non-archimedean analogs}$\,$\\ Over non-archimedean fields, the universal enveloping algebra does not act on distributions. However, the Bernstein center $\operatorname{End}_{G \times G}(\mathcal{S}(G))$ does act. In \cite{AGS_Z} we study this action in detail. In \cite{AGK} we prove, using \cite[Theorem A]{AGS_Z}, analogs of Theorem \ref{thm:main} and Corollary \ref{cor:SpherChar} for non-archimedean fields of characteristic zero. These analogs are somewhat weaker for general spherical pairs, but are of the same strength for the group case and for Galois symmetric pairs. The group case of the non-archimedean counterpart of Corollary \ref{cor:SpherChar} was proven before in \cite[Appendix B]{LM}. \subsection{Relation with multiplicities in regular representations of symmetric spaces}\label{subsec:qr} Let $(G,H)$ be a symmetric pair of real reductive groups. Suppose that $G$ is quasi-split and let $B\subset G$ be a Borel subgroup. Let $k$ be the number of open $B$-orbits on $G/H$. Theorem \ref{thm:main} can be used in order to study the following conjecture. \begin{introconj} Let $(\pi,V)$ be a (smooth) irreducible admissible representation of $G$. Then the dimension of the space $(V^*)^{H}$ of $H$-invariant continuous functionals on $V$ is at most $k$. In particular, any complex reductive symmetric pair is a Gelfand pair. \end{introconj} We suggest dividing this conjecture into two cases: \begin{itemize} \item $\pi$ is non-degenerate, i.e. $\pi$ has a non-zero continuous $(U,\psi)$-equivariant functional for some non-degenerate character $\psi$ of $U$; \item $\pi$ is degenerate. \end{itemize} In the first case, the last conjecture follows from the following one. \begin{introconj} Let $U$ be the unipotent radical of $B$ and let $\psi$ be a non-degenerate character of $U$. Let $\fz$ be the center of the universal enveloping algebra of the Lie algebra $\g$ of $G$.
Let $\lambda$ be a character of $\fz$. Then the dimension of the space of $(\fz,\lambda)$-eigen $(U,\psi)$-equivariant distributions on $G/H$ does not exceed $k$. \end{introconj} We believe that Theorem \ref{thm:main} can be useful in approaching this conjecture, since it allows one to reduce the study of distributions to the union of the open $B$-orbits. \section{Preliminaries} \subsection{Conventions} \begin{itemize} \item By an algebraic manifold we mean a smooth real algebraic variety. \item We will use capital Latin letters to denote Lie groups and the corresponding Gothic letters to denote their Lie algebras. \item Let a Lie group $G$ act on a smooth manifold $M$. For a vector $v\in \fg$ and a point $x \in M$ we will denote by $vx\in T_xM$ the image of $v$ under the differential of the action map $g \mapsto gx$. Similarly, we will use the notation $\fh x$ for any subspace $\fh \subset \fg$. \item We denote by $G_x$ the stabilizer of $x$ in $G$ and by $\fg_x$ its Lie algebra. \end{itemize} \subsection{Tangential and transversal differential operators}\label{subsec:trans} In this subsection we briefly review the method of \cite[\S 2]{Shal}. For a more detailed description see \cite[\S\S 2.1]{JSZ}. \begin{defn} Let $M$ be a smooth manifold and $N$ be a smooth submanifold. \begin{itemize} \item A vector field $v$ on $M$ is called \emph{tangential} to $N$ if $v_p\in T_pN$ for every point $p\in N$, and \emph{transversal} to $N$ if $v_p\notin T_pN$ for every point $p\in N$. \item A differential operator $D$ on $M$ is called \emph{tangential} to $N$ if every point $p\in N$ has an open neighborhood $U_p \subset M$ such that $D|_{U_p}$ is a finite sum of differential operators of the form $\varphi V_1\cdots V_r$, where $\varphi$ is a smooth function on $U_p$, $r\geq 0$, and the $V_i$ are vector fields on $U_p$ tangential to $U_p \cap N$. \end{itemize} \end{defn} \begin{lem}[cf.
the proof of {\cite[Proposition 2.10]{Shal}}]\label{lem:TransTan} Let $M$ be a smooth manifold and $N$ be a smooth submanifold. Let $D$ be a differential operator on $M$ tangential to $N$ and $V$ be a vector field on $M$ transversal to $N$. Let $\eta$ be a distribution on $M$ supported in $N$ such that $D\eta=V\eta$. Then $\eta=0$. \end{lem} \subsection{Singular support}\label{subsec:SS} Let $M$ be an algebraic manifold and $\eta$ be a distribution on $M$. The \emph{singular support} of $\eta$ is defined to be the singular support of the D-module generated by $\eta$, and is denoted by $SS(\eta) \subset T^*M$. We briefly review the properties of the singular support that are most important for this paper. For a more detailed overview we refer the reader to \cite[\S\S 2.3 and Appendix B]{AGAMOT}. \begin{notn} For a point $x\in M$ \begin{itemize} \item we denote by $SS_x(\eta)$ the fiber over $x$ under the natural projection $SS(\eta)\to M$; \item for a submanifold $N \subset M$ we denote by $CN_N^M\subset T^*M$ the conormal bundle to $N$ in $M$, and by $CN_{N,x}^M$ the conormal space at $x$ to $N$ in $M$. \end{itemize} \end{notn} \begin{lem}[{See e.g. \cite[Fact 2.3.9 and Appendix B]{AGAMOT}}] \label{Ginv} Let an algebraic group $G$ act on $M$. Suppose that $\eta$ is $G$-equivariant. Then $$SS(\eta) \subset \{(x,\varphi) \in T^*M \, | \, \forall \alpha \in \g, \, \varphi(\alpha x) =0\}=\bigcup_{x\in M}CN_{Gx}^M.$$ \end{lem} \begin{lem} \label{lem:nilp} Let $G$ be a real reductive group, $\cN \subset \fg^*$ be the nilpotent cone and $\fz$ be the center of the universal enveloping algebra $\cU(\g)$. Let $\xi$ be a $\fz$-eigen distribution on $G$. Identify $T^*G$ with $G\times \fg^*$ using the right action. Then $SS(\xi)\subset G \times \cN$. \end{lem} This lemma is well-known, but we prove it here for the benefit of the reader.
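Before the proof, a minimal illustration may help; the following computation for $G=SL_2(\mathbb{R})$ is an orientation added here, not part of the original argument.

```latex
% For G = SL_2(R), identify g^* with g = sl_2 via the trace form B(X,Y) = tr(XY).
% The nilpotent cone is cut out by the single basic invariant polynomial
p(X) \;=\; \det X \;=\; -\tfrac{1}{2}\operatorname{tr}(X^2),
\qquad
\cN \;=\; \{\, X \in \mathfrak{sl}_2 \mid \det X = 0 \,\}.
% Up to normalization and lower-order terms, p is the symbol of the Casimir
% element generating \fz, so for a Casimir-eigen distribution \xi the lemma gives
SS(\xi) \;\subset\; G \times \{\, X \mid \det X = 0 \,\} \;=\; G \times \cN .
```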
\begin{proof} Consider the standard filtrations on $\cU(\fg)$ and on the ring of differential operators $D(G)$. Consider $\fg$ as the space of left-invariant vector fields on $G$. Then the natural map $i:\cU(\fg)\to D(G)$ is a morphism of filtered algebras. We have a commutative diagram $$\xymatrix{\parbox{30pt}{$Gr\cU(\fg)$}\ar@{->}^{Gr(i)}[r]\ar@{<-}_{\pi_U}[d] & \parbox{20pt}{$Gr D(G)$}\ar@{<-}_{\pi_D}[d]\\ \parbox{20pt}{$S(\fg)$ }\ar@{->}^{\bar i}[r] & \parbox{40pt}{$\cO(T^*G)$,}} $$ where $\bar i$ and ${\pi_U}$ are the algebra homomorphisms which extend the natural embeddings $\g \to Gr\cU(\fg)$ and $\g \to \cO(T^*G)$, and $\pi_D$ is the algebra homomorphism which extends the natural embedding of vector fields on $G$ into $D(G)$. By the PBW theorem the vertical maps are isomorphisms. This implies that $Gr(i)$ is an embedding, and thus the filtration on $\cU(\g)$ is the one induced from $D(G)$ via the embedding $i$. Therefore we have the following commutative diagram $$ \xymatrix{ \cU(\fg)\ar@{->}^{i}[r]\ar@{->}^{\sigma_{U}}[d] & D(G)\ar@{->}^{\sigma_{D}}[d]\\ \parbox{30pt}{$Gr\cU(\fg)$}\ar@{->}^{Gr(i)}[r]\ar@{<-}_{\pi_U}[d] & \parbox{20pt}{$Gr D(G)$}\ar@{<-}_{\pi_D}[d]\\ S(\fg)\ar@{->}^{\bar i}[r] & \cO(T^*G),} $$ where ${\sigma_{U}}$ and $\sigma_D$ are the (nonlinear) symbol maps. Note that the map ${\bar i}$ is a section of the restriction map $r:\cO(T^*G)\to \cO(T_e^*G)\cong\cO(\g^*)\cong S(\g).$ In order to prove the lemma it is enough to show that $SS_{e}(\xi)\subset \cN$. Note that $\cN$ is the zero set of the ideal $I\subset S(\fg)=\cO(\fg^*)$ generated by all homogeneous non-constant $\fg$-invariant polynomials. We have to show that for any homogeneous $p \in S(\g)^\g$ of degree $d>0$ there exists a non-constant $u\in \fz$ such that $r(\pi_D^{-1}(\sigma_D(i(u))))=p$, or equivalently (in view of the above) such that $\sigma_{U}(u)=\pi_U(p)$. Let $s:S(\g)\to\cU(\g)$ be the symmetrization map. It is easy to see that $\sigma^{d}_{U}(s(p))=\pi_U(p)$, where $\sigma^{d}_{U}$ denotes the $d$-th symbol. Since $\pi_U$ is an isomorphism, this implies that $\sigma^{d}_{U}(s(p)) \neq 0$ and thus $\sigma_{U}(s(p))=\sigma^{d}_{U}(s(p))=\pi_U(p)$. This implies the assertion. \end{proof} \begin{thm}[Integrability theorem, cf. \cite{Gab,GQS,KKS, Mal}] $SS(\eta)$ is a coisotropic subvariety of $T^*M$. \end{thm} This theorem implies the following corollary (see \cite[\S 3]{Aiz} for more details). \begin{cor}\label{cor:WCI} Let $N\subset M$ be a closed algebraic submanifold. Suppose that $\eta$ is supported in $N$. Suppose further that for any $x \in N$ we have $CN_{N,x}^M \nsubseteq SS_x(\eta)$. Then $\eta=0$. \end{cor} \section{Proof of the main result} \subsection{Sketch of the proof} We decompose $G$ into $B\times H$-double cosets, and each double coset $\cO$ we decompose as $\cO=\cO_s \cup \cO_c$ in a certain way. We prove the required vanishing coset by coset, using Shalika's method (see \S\S\ref{subsec:trans}) for $\cO_s$ and singular support analysis (see \S\S\ref{subsec:SS}) for $\cO_c$. \subsection{Notation and lemmas} \begin{notation}$\,$ \begin{itemize} \item Fix a torus $T\subset B$ and let $\ft \subset \fb$ denote the corresponding Lie algebras. Let $\Phi$ denote the root system, $\Phi^+$ the set of positive roots and $\Delta\subset \Phi^+$ the set of simple roots. For $\alpha\in \Phi$ let $\g_{\alpha} \subset \g$ be the root space corresponding to ${\alpha}$. \item Let $C\in \fz$ denote the Casimir element. \item We choose $E_{\alpha}\in \fg_{\alpha}$, for any $\alpha \in \Phi$, such that $C=\sum_{\alpha \in \Phi^+} E_{-\alpha}E_{\alpha}+D$, where $D$ lies in the universal enveloping algebra of the Cartan subalgebra $\ft$. \item Let $\cO\subset G$ be a $B\times H$-double coset. Consider the left action of $\fu$ on $\cO$ and define $$\cO_c:=\left \{x \in \cO \, |\, \sum_{\alpha \in \Delta} d\psi(E_\alpha)E_{-\alpha}x \in T_x\cO \right \}$$ and $\cO_s:=\cO\setminus \cO_c$. \end{itemize} \end{notation} We will need the following lemmas, which will be proved in subsequent subsections. \begin{lemma}\label{lem:WF} Let $x\in G$. Let $\xi$ be a $\fz$-eigen $(U\times H,\psi \times \chi)$-equivariant distribution on $G$. Then $SS_x(\xi)\subset CN_{BxH,x}^G$. \end{lemma} \begin{lemma}\label{lem:cOProp} Let $\cO\subset Z$ be a $B\times H$-double coset. Then $\cO_s \neq \emptyset$. \end{lemma} \subsection{Proof of Theorem \ref{thm:main}} Suppose that there exists a non-zero $\fz$-eigen $(U,\psi)$-equivariant distribution $\xi$ supported on $Z$. For any $B\times H$-double coset $\cO \subset G$, stratify $\cO_{c}$ into a union of smooth locally closed varieties $\cO_c^i$. The collection $$\{\cO_c^i \, | \, \cO \text{ is a }B\times H\text{-double coset}\}\cup \{\cO_s \, | \,\cO\text{ is a } B\times H\text{-double coset}\}$$ is a stratification of $G$. Reorder this collection into a sequence $\{S_i\}_{i=1}^N$ of smooth locally closed subvarieties of $G$ such that $U_k:=\bigcup_{i=1}^k S_i$ is open in $G$ for every $1\leq k \leq N$. Let $k$ be the maximal integer such that $\xi|_{U_{k-1}}=0$. Let $\eta:=\xi|_{U_{k}}$. We will now show that $\eta=0$, which leads to a contradiction. \begin{enumerate}[{Case} 1.] \item $S_{k}=\cO_s$ for some double coset $\cO$.
Recall that we have the following decomposition of the Casimir element $$C=\sum_{\alpha \in \Phi^+} E_{-\alpha}E_{\alpha}+D.$$ Since $\eta$ is $\fz$-eigen and $(U,\psi)$-equivariant, we have, for some scalar $\lambda$, $$\lambda \eta=C\eta=\sum_{\alpha \in\Phi^+} E_{-\alpha}E_{\alpha}\eta+D\eta=\sum_{\alpha \in \Phi^+} E_{-\alpha}d\psi(E_\alpha)\eta +D\eta=\sum_{\alpha \in \Delta} E_{-\alpha}d\psi(E_\alpha)\eta +D\eta,$$ where the last equality holds since $d\psi(E_\alpha)=0$ for every non-simple $\alpha\in\Phi^+$. Let $V:= \sum_{\alpha \in \Delta} d\psi(E_\alpha)E_{-\alpha}$ and $D':=\lambda\,\mathrm{Id}-D$. We have $V\eta=D'\eta$, and it is easy to see that $D'$ is tangential to $\cO_s$ and $V$ is transversal to $\cO_s$. Now, Lemma \ref{lem:TransTan} implies $\eta=0$, which is a contradiction. \item $S_{k}\subset \cO_c$ for some double coset $\cO$. By Corollary \ref{cor:WCI} it is enough to show that for any $x \in S_k$ we have \begin{equation}\label{eq:NonCoIsot} CN_{S_k,x}^{G} \nsubseteq SS_x(\eta). \end{equation} By Lemma \ref{lem:WF}, $SS_x(\eta)\subset CN_{\cO,x}^{G}$. By Lemma \ref{lem:cOProp}, $S_{k}\subsetneq \cO$, thus $CN_{S_{k},x}^{G} \supsetneq CN_{\cO,x}^{G}$, which implies \eqref{eq:NonCoIsot}. \end{enumerate} $\Box$ \subsection{Proof of Lemma \ref{lem:WF}}\label{subsec:PfLemWF} \begin{proof} Let $\fh$ denote the Lie algebra of $H$ and $ad(x)\fh$ denote its conjugation by $x$. Identify $T_x^*G$ with $\fg^*$ using multiplication by $x^{-1}$ on the right. Then $$CN_{BxH,x}^G= (\ft+\mathfrak{u}+ad(x)\fh)^\bot.$$ Since $\xi$ is $\mathfrak{u}\times \fh$-equivariant, Lemma \ref{Ginv} implies that $SS_x(\xi)\subset (\mathfrak{u}+ad(x)\fh)^\bot$. Since $\xi$ is also $\fz$-eigen, Lemma \ref{lem:nilp} implies that $SS_x(\xi)\subset \cN$, where $\cN \subset \fg^*$ is the nilpotent cone. Now we have $$SS_x(\xi)\subset(\mathfrak{u}+ad(x)\fh)^\bot \cap \cN =(\ft+\mathfrak{u}+ad(x)\fh)^\bot=CN_{BxH,x}^G.$$ \end{proof} \subsection{Proof of Lemma \ref{lem:cOProp}}\label{subsec:cOProp} First we need the following lemmas and notation. \begin{lemma}\label{lem:dense} Let $K\subset K_{i} \subset G$, for $i=1,\dots,n$, be algebraic subgroups. Suppose that the $K_i$ generate $G$. Let $Y$ be a transitive $G$-space and let $y \in Y$. Assume that $Ky$ is Zariski dense in $K_i y$ for each $i$. Then $Ky$ is Zariski dense in $Y$. \end{lemma} \begin{proof} By induction we may assume that $n=2$. Let $$O_l:=\underbrace{K_1K_2\cdots K_1K_2 }_{l \text{ times}}y.$$ It is enough to prove that $Ky$ is dense in $O_l$ for every $l$. Let us prove this by induction on $l$. Suppose that we have already proven that $Ky$ is dense in $O_{l-1}$. Then $$\overline{K_2O_{l-1}}=\overline{K_2Ky}=\overline{K_2y}=\overline{Ky}.$$ Thus $Ky$ is dense in $K_2O_{l-1}$. Similarly, $K_2O_{l-1}$ is dense in $K_1K_2O_{l-1}=O_l$. \end{proof} \begin{notn}$\,$ \begin{itemize} \item Let $Y$ denote the spherical space $G/H$, and $Z'$ denote the image of $Z$ in $Y$. \item For a simple root $\alpha\in \Delta$, denote by $P_{\alpha}\subset G$ the parabolic subgroup whose Lie algebra is $\fg_{-\alpha}\oplus \fb$. \end{itemize} \end{notn} \begin{lemma}\label{lem:out} Let $x \in Z'$. Then there exists a simple root $\alpha \in \Delta$ such that $\g_{-\alpha}x \nsubseteq \fb x$. \end{lemma} \begin{proof} Assume the contrary. Then for any $\alpha \in \Delta$ we have $T_x P_{\alpha}x=T_xBx$. Thus $Bx$ is Zariski dense in $P_{\alpha}x$. By Lemma \ref{lem:dense} this implies that $Bx$ is dense in $Y$, which contradicts the condition $x \in Z'$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:cOProp}] Note that $\cO_c$ is invariant with respect to the right action of $H$. Let $\cO'$ denote the image of $\cO$ in $Y$, and let $\cO_c'$ denote the image of $\cO_c$ in $Y$. Choose $x\in \cO'$ and let $a:B \to \cO'$ denote the action map. It is enough to show that $a^{-1}(\cO_c')\neq B$. We have \begin{multline*} a^{-1}(\cO_c')=\{b \in B \, | \, \sum_{\alpha\in\Delta} d\psi(E_{\alpha})E_{-\alpha}\in \fg_{bx} + \fb\}= \{b \in B \, | \, \sum_{\alpha\in\Delta} d\psi(E_{\alpha})ad(b^{-1})E_{-\alpha} \in \fg_{x} + \fb\}=\\ \{tu \in B \, | \, \sum_{\alpha\in\Delta} d\psi(E_{\alpha})ad(t^{-1})E_{-\alpha} \in \fg_{x} + \fb\}= \{tu \in B \, | \, \sum_{\alpha\in\Delta} d\psi(E_{\alpha})\alpha(t)E_{-\alpha} \in \fg_{x} + \fb\}. \end{multline*} By Lemma \ref{lem:out} we can choose $\alpha \in \Delta$ such that $\g_{-\alpha}x \nsubseteq \fb x$. For any $\varepsilon>0$ there exists $t \in T$ such that $\alpha(t)=1$ and $|\beta(t)|<\varepsilon$ for every $\beta \in \Delta$ with $\beta \neq \alpha$. It is easy to see that for $\varepsilon$ small enough, $t \notin a^{-1}(\cO_c')$. \end{proof} \end{document}
\begin{document} \title{Symmetries of second order differential equations on Lie algebroids} \author{Liviu Popescu} \maketitle \begin{abstract} In this paper we investigate the relations between semisprays, nonlinear connections, the dynamical covariant derivative and the Jacobi endomorphism on Lie algebroids. Using these geometric structures, we study the symmetries of second order differential equations in the general framework of Lie algebroids. \end{abstract} MSC2010: 17B66, 34A26, 53C05, 70S10 Keywords: Lie algebroids, symmetries, semispray, nonlinear connection, dynamical covariant derivative, Jacobi endomorphism. \section{\textbf{Introduction}} The geometry of second order differential equations (SODE) on the tangent bundle $TM$ of a differentiable manifold $M$ is closely related to the geometry of nonlinear connections \cite{Cr1, Gr1}. A system of SODE can be represented using the notion of semispray, which together with a nonlinear connection induces two important concepts: the dynamical covariant derivative and the Jacobi endomorphism \cite{Bu2, Bu3, Cr2, Gr2, Ma, Sa}. The notion of dynamical covariant derivative was introduced for the first time, in the case of the tangent bundle, by J. Cari\~nena and E. Mart\'\i nez \cite{Ca0} as a derivation of degree $0$ along the tangent bundle projection. The notion of symmetry in field theory, in various geometric frameworks, has been intensively studied (see for instance \cite{Ab, Bua, Bu3, Ga, Le0, Le1, Pr1, Pr2, Ro}). The notion of Lie algebroid is a natural generalization of the tangent bundle and of a Lie algebra. In the last decades Lie algebroids \cite{Mk1, Mk2} have been the object of intensive study, with applications to mechanical systems and optimal control \cite{Ar2, Co, Fe, Le, Li, Ma2, Ma4, Me, Po1, Po2, Po2', Po3, Po4, We}, and they are a natural framework in which one can develop the theory of differential equations, where the notion of symmetry plays a very important role.
In this paper we study some properties of semisprays, generalize the notion of symmetry for second order differential equations to Lie algebroids, and characterize their properties using the dynamical covariant derivative and the Jacobi endomorphism. The paper is organized as follows. In section two the preliminary geometric structures on Lie algebroids are introduced and some relations between them are given. We present the Jacobi endomorphism on Lie algebroids and find its relation with the curvature tensor of the Ehresmann nonlinear connection. In section three we study the dynamical covariant derivative on Lie algebroids. Using a semispray and an arbitrary nonlinear connection, we introduce the dynamical covariant derivative on Lie algebroids as a tensor derivation and prove that the compatibility condition with the tangent structure fixes the canonical nonlinear connection. In the case of the canonical nonlinear connection induced by a semispray, further properties of the dynamical covariant derivative are obtained. In the case of homogeneous second order differential equations (sprays) the relation between the dynamical covariant derivative and the Berwald linear connection is given. In the last section we study dynamical symmetries, Lie symmetries, Newtonoid sections and Cartan symmetries on Lie algebroids and find the relations between them. These structures were first studied on the tangent bundle by G. Prince in \cite{Pr1, Pr2}. Also, we prove that an exact Cartan symmetry induces a conservation law and conversely, which extends the work developed in \cite{Mar}. Moreover, we find the invariant equations of dynamical symmetries, Lie symmetries and Newtonoid sections in terms of the dynamical covariant derivative and the Jacobi endomorphism, which generalize some results from \cite{Bu3, Pr1, Pr2}.
We have to mention that Noether-type theorems for Lagrangian systems on Lie algebroids can be found in \cite{Ca, Ma2}, and Jacobi sections for second order differential equations on Lie algebroids are studied in \cite{Ca1}. Finally, using an example from optimal control theory (driftless control affine systems), we show that the framework of Lie algebroids is more useful than the tangent bundle in order to find the symmetries of the dynamics induced by a Lagrangian function. Also, using the $k$-symplectic formalism on Lie algebroids developed in \cite{Le2}, one can study symmetries in this new framework, which generalizes the results from \cite{Bua}. \section{\textbf{Lie algebroids}} Let $M$ be a real, $C^\infty $-differentiable, $n$-dimensional manifold and $(TM,\pi _M,M)$ its tangent bundle. A Lie algebroid over a manifold $M$ is a triple $(E,[\cdot ,\cdot ]_E,\sigma )$, where $(E,\pi ,M)$ is a vector bundle of rank $m$ over $M$ which satisfies the following conditions: \\a) the $C^\infty (M)$-module of sections $\Gamma (E)$ is equipped with a Lie algebra structure $[\cdot ,\cdot ]_E$; \\b) $\sigma :E\rightarrow TM$ is a bundle map (called the anchor) which induces a Lie algebra homomorphism (also denoted $\sigma $) from the Lie algebra of sections $(\Gamma (E),[\cdot ,\cdot ]_E)$ to the Lie algebra of vector fields $(\mathcal{\chi }(M),[\cdot ,\cdot ])$ satisfying the Leibniz rule \begin{equation} \lbrack s_1,fs_2]_E=f[s_1,s_2]_E+(\sigma (s_1)f)s_2,\ \forall s_1,s_2\in \Gamma (E),\ f\in C^\infty (M). \end{equation} From the above definition it follows that: \\$1^{\circ }$ $[\cdot ,\cdot ]_E$ is an $\Bbb{R}$-bilinear operation, \\$2^{\circ }$ $[\cdot ,\cdot ]_E$ is skew-symmetric, i.e.
$[s_1,s_2]_E=-[s_2,s_1]_E,\quad \forall s_1,s_2\in \Gamma (E),$\\$3^{\circ }$ $[\cdot ,\cdot ]_E$ verifies the Jacobi identity \begin{equation*} \lbrack s_1,[s_2,s_3]_E]_E+[s_2,[s_3,s_1]_E]_E+[s_3,[s_1,s_2]_E]_E=0, \end{equation*} and $\sigma $ being a Lie algebra homomorphism means that $\sigma [s_1,s_2]_E=[\sigma (s_1),\sigma (s_2)].$ The existence of a Lie bracket on the space of sections of a Lie algebroid leads to a calculus on its sections analogous to the usual Cartan calculus on differential forms. If $f$ is a function on $M$, then $d^Ef(x)\in E_x^{*}$ is given by $\left\langle d^Ef(x),a\right\rangle =\sigma (a)f$, for all $a\in E_x$. For $\omega \in \bigwedge^k(E^{*})$ the \textit{exterior derivative} $d^E\omega \in \bigwedge^{k+1}(E^{*})$ is given by the formula \begin{eqnarray*} d^E\omega (s_1,...,s_{k+1}) &=&\overset{k+1}{\sum_{i=1}}(-1)^{i+1}\sigma (s_i)\omega (s_1,...,\hat{s}_i,...,s_{k+1})+ \\ &&\ \ \ \ \ \ \ +\sum_{1\leq i<j\leq k+1}(-1)^{i+j}\omega ([s_{i},s_j]_E,s_1,...,\hat{s}_i,...,\hat{s}_j,...,s_{k+1}), \end{eqnarray*} where $s_i\in \Gamma (E)$, $i=\overline{1,k+1}$, and the hat over an argument means the absence of that argument. It follows that \begin{equation*} (d^E)^2=0,\quad d^E(\omega _1\wedge \omega _2)=d^E\omega _1\wedge \omega _2+(-1)^{\deg \omega _1}\omega _1\wedge d^E\omega _2. \end{equation*} The cohomology associated with $d^E$ is called the \textit{Lie algebroid cohomology} of $E$. Also, for $\xi \in \Gamma (E)$ one can define the \textit{Lie derivative} with respect to $\xi$, given by $\mathcal{L}_\xi =i_\xi \circ d^E+d^E\circ i_\xi $, where $i_\xi $ is the contraction with $\xi $.
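As a sanity check of the formula for $d^E$, two standard extreme cases (stated here for orientation, not taken from the source):

```latex
% (1) E = TM, sigma = id, [.,.]_E the usual bracket of vector fields:
%     for a 1-form omega the formula above reduces to the Koszul formula
d^{TM}\omega(X,Y) \;=\; X(\omega(Y)) - Y(\omega(X)) - \omega([X,Y]),
% so d^E is the de Rham differential, and L_xi = i_xi d^E + d^E i_xi is the
% classical Cartan magic formula.
% (2) E = g a Lie algebra over a point, sigma = 0: the first sum vanishes and
d^{\mathfrak{g}}\omega(s_1,\dots,s_{k+1}) \;=\;
\sum_{1\le i<j\le k+1} (-1)^{i+j}\,
\omega([s_i,s_j]_{\mathfrak{g}}, s_1,\dots,\hat{s}_i,\dots,\hat{s}_j,\dots,s_{k+1}),
% the Chevalley-Eilenberg differential, whose cohomology is Lie algebra cohomology.
```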
We recall that if $L$ and $K$ are $(1,1)$-type tensor fields, the Fr\"olicher-Nijenhuis bracket $[L,K]_E$ is the vector-valued 2-form \cite{Fr} \begin{eqnarray*} \lbrack L,K]_E(X,Y) &=&[LX,KY]_E+[KX,LY]_E+(LK+KL)[X,Y]_E- \\ &&\ \ \ \ \ \ -L[X,KY]_E-K[X,LY]_E-L[KX,Y]_E-K[LX,Y]_E, \end{eqnarray*} and the Nijenhuis tensor of $L$ is given by \begin{equation*} \mathbf{N}_L(X,Y)=\frac 12[L,L]_E(X,Y)=[LX,LY]_E+L^2[X,Y]_E-L[X,LY]_E-L[LX,Y]_E. \end{equation*} For a vector field $X\in\mathcal{X}(E)$ and a $(1,1)$-type tensor field $L$ on $E$ the Fr\"olicher-Nijenhuis bracket $[X,L]_E=\mathcal{L}_XL$ is the $(1,1)$-type tensor field on $E$ given by \begin{equation*} \mathcal{L}_XL=\mathcal{L}_X\circ L-L\circ \mathcal{L}_X, \end{equation*} where $\mathcal{L}_X$ is the usual Lie derivative. If we take local coordinates $(x^i)$ on an open set $U\subset M$, a local basis $\{s_\alpha \}$ of the sections of the bundle $\pi ^{-1}(U)\rightarrow U$ generates local coordinates $(x^i,y^\alpha )$ on $E$. The local functions $\sigma _\alpha ^i(x)$, $L_{\alpha \beta }^\gamma (x)$ on $M$ given by \begin{equation} \sigma (s_\alpha )=\sigma _\alpha ^i\frac \partial {\partial x^i},\quad [s_\alpha ,s_\beta ]_E=L_{\alpha \beta }^\gamma s_\gamma ,\quad i=\overline{1,n},\quad \alpha ,\beta ,\gamma =\overline{1,m}, \end{equation} are called the \textit{structure functions of the Lie algebroid}, and satisfy the \textit{structure equations} on Lie algebroids \begin{equation*} \sum_{(\alpha ,\beta ,\gamma )}\left( \sigma _\alpha ^i\frac{\partial L_{\beta \gamma }^\delta }{\partial x^i}+L_{\alpha \eta }^\delta L_{\beta \gamma }^\eta \right) =0,\quad \sigma _\alpha ^j\frac{\partial \sigma _\beta ^i}{\partial x^j}-\sigma _\beta ^j\frac{\partial \sigma _\alpha ^i}{\partial x^j}=\sigma _\gamma ^iL_{\alpha \beta }^\gamma .
\end{equation*} Locally, if $f\in C^\infty (M)$ then $d^Ef=\frac{\partial f}{\partial x^i}\sigma _\alpha ^is^\alpha ,$ where $\{s^\alpha \}$ is the dual basis of $\{s_\alpha \}$, and if $\theta \in \Gamma (E^{*})$, $\theta =\theta _\alpha s^\alpha $, then \begin{equation*} d^E\theta =\left( \sigma _\alpha ^i\frac{\partial \theta _\beta }{\partial x^i}-\frac 12\theta _\gamma L_{\alpha \beta }^\gamma \right) s^\alpha \wedge s^\beta. \end{equation*} In particular, we get $d^Ex^i=\sigma _\alpha ^is^\alpha $ and $d^Es^\alpha =-\frac 12L_{\beta \gamma }^\alpha s^\beta \wedge s^\gamma .$ \subsection{\textbf{The prolongation of a Lie algebroid over the vector bundle projection}} Let $(E,\pi ,M)$ be a vector bundle. For the projection $\pi :E\rightarrow M$ we can construct the prolongation of $E$ (see \cite{Hi, Le, Ma2}). The associated vector bundle is $(\mathcal{T}E,\pi _2,E)$ where \begin{equation*} \mathcal{T}E=\underset{_{w\in E}}{\cup }\mathcal{T}_wE,\quad \mathcal{T}_wE=\{(u_x,v_w)\in E_x\times T_wE\mid \sigma (u_x)=T_w\pi (v_w),\quad \pi (w)=x\in M\}, \end{equation*} and the projection $\pi _2(u_x,v_w)=\pi _E(v_w)=w$, where $\pi _E:TE\rightarrow E$ is the tangent projection. We also have the canonical projection $\pi _1:\mathcal{T}E\rightarrow E$ given by $\pi _1(u,v)=u$. The projection onto the second factor $\sigma ^1:\mathcal{T}E\rightarrow TE$, $\sigma ^1(u,v)=v$, will be the anchor of a new Lie algebroid over the manifold $E$. An element of $\mathcal{T}E$ is said to be vertical if it is in the kernel of the projection $\pi _1$. We will denote by $(V\mathcal{T}E,\pi _{2\mid _{V\mathcal{T}E}},E)$ the vertical bundle of $(\mathcal{T}E,\pi _2,E)$; moreover, $\sigma ^1\left| _{V\mathcal{T}E}\right. :V\mathcal{T}E\rightarrow VTE$ is an isomorphism. If $f\in C^\infty (M)$ we will denote by $f^c$ and $f^v$ the \textit{complete and vertical lifts} to $E$ of $f$, defined by \begin{equation*} f^c(u)=\sigma (u)(f),\quad f^v(u)=f(\pi (u)),\quad u\in E.
\end{equation*} For $s\in \Gamma (E)$ we can consider the \textit{vertical lift} of $s$ given by $s^v(u)=s(\pi (u))_u^v,$ for $u\in E,$ where $_u^v:E_{\pi (u)}\rightarrow T_u(E_{\pi (u)})$ is the canonical isomorphism. There exists a unique vector field $s^c$ on $E$, the \textit{complete lift} of $s$, satisfying the following conditions: i) $s^c$ is $\pi $-projectable on $\sigma (s)$; ii) $s^c(\hat{\alpha })=\widehat{\mathcal{L}_s\alpha },$\\for all $\alpha \in \Gamma (E^{*})$, where $\hat{\alpha }(u)=\alpha (\pi (u))(u)$, $u\in E$ (see \cite{Gu1, Gu2}). \\Considering the prolongation $\mathcal{T}E$ of $E$ \cite{Ma2}, we may introduce the \textit{vertical lift} $s^{\mathrm{v}}$ and the \textit{complete lift} $s^{\mathrm{c}}$ of a section $s\in \Gamma (E)$ as the sections of $\mathcal{T}E\rightarrow E$ given by \begin{equation*} s^{\mathrm{v}}(u)=(0,s^v(u)),\quad s^{\mathrm{c}}(u)=(s(\pi (u)),s^c(u)),\quad u\in E. \end{equation*} Two other canonical objects on $\mathcal{T}E$ are the \textit{Euler section} $\Bbb{C}$ and the \textit{tangent structure} (\textit{vertical endomorphism}) $J$. The Euler section $\Bbb{C}$ is the section of $\mathcal{T}E\rightarrow E$ defined by $\Bbb{C}(u)=(0,u_u^v),\ \forall u\in E.$ The vertical endomorphism is the section of $(\mathcal{T}E)\otimes (\mathcal{T}E)^{*}\rightarrow E$ characterized by $J(s^{\mathrm{v}})=0,$ $J(s^{\mathrm{c}})=s^{\mathrm{v}}$, $s\in \Gamma (E)$, which satisfies \begin{equation*} J^2=0,\ \operatorname{Im}J=\ker J=V\mathcal{T}E,\quad \ [\Bbb{C},J]_{\mathcal{T}E}=-J. \end{equation*} A section $\mathcal{S}$ of $\mathcal{T}E\rightarrow E$ is called a \textit{semispray} (\textit{second order differential equation, SODE}) on $E$ if $J(\mathcal{S})=\Bbb{C}$. The local basis of $\Gamma (\mathcal{T}E)$ is given by $\{\mathcal{X}_\alpha ,\mathcal{V}_\alpha \}$, where \begin{equation} \mathcal{X}_\alpha (u)=\left( s_\alpha (\pi (u)),\left.
\sigma _\alpha ^i\frac \partial {\partial x^i}\right| _u\right) ,\quad \mathcal{V}_\alpha (u)=\left( 0,\left. \frac \partial {\partial y^\alpha }\right| _u\right), \end{equation} and $(\partial /\partial x^i,\partial /\partial y^\alpha )$ is the local basis on $TE$ (see \cite{Ma2}). The structure functions of $\mathcal{T}E$ are given by the following formulas \begin{equation} \sigma ^1(\mathcal{X}_\alpha )=\sigma _\alpha ^i\frac \partial {\partial x^i},\quad \sigma ^1(\mathcal{V}_\alpha )=\frac \partial {\partial y^\alpha }, \end{equation} \begin{equation} \lbrack \mathcal{X}_\alpha ,\mathcal{X}_\beta ]_{\mathcal{T}E}=L_{\alpha \beta }^\gamma \mathcal{X}_\gamma ,\quad [\mathcal{X}_\alpha ,\mathcal{V}_\beta ]_{\mathcal{T}E}=0,\quad [\mathcal{V}_\alpha ,\mathcal{V}_\beta ]_{\mathcal{T}E}=0. \end{equation} The vertical lift of a section $\rho =\rho ^\alpha s_\alpha $ is $\rho ^{\mathrm{v}}=\rho ^\alpha \mathcal{V}_\alpha $. The coordinate expression of the Euler section is $\Bbb{C}=y^\alpha \mathcal{V}_\alpha $ and the local expression of $J$ is $J=\mathcal{X}^\alpha \otimes \mathcal{V}_\alpha ,$ where $\{\mathcal{X}^\alpha ,\mathcal{V}^\alpha \}$ denotes the dual basis of $\{\mathcal{X}_\alpha ,\mathcal{V}_\alpha \}$. The Nijenhuis tensor of the vertical endomorphism vanishes, hence $J$ is integrable. The expression of the complete lift of a section $\rho =\rho ^\alpha s_\alpha $ is \begin{equation} \rho ^{\mathrm{c}}=\rho ^\alpha \mathcal{X}_\alpha +(\sigma _\varepsilon ^i\frac{\partial \rho ^\alpha }{\partial x^i}-L_{\beta \varepsilon }^\alpha \rho ^\beta )y^\varepsilon \mathcal{V}_\alpha .
\end{equation} In particular $s_\alpha ^{\mathrm{v}}=\mathcal{V}_\alpha $, $s_\alpha ^{\mathrm{c}}=\mathcal{X}_\alpha -L_{\alpha \varepsilon }^\beta y^\varepsilon \mathcal{V}_\beta .$ The local expression of the differential of a function $L$ on $\mathcal{T}E$ is $d^EL=\sigma _\alpha ^i\frac{\partial L}{\partial x^i}\mathcal{X}^\alpha +\frac{\partial L}{\partial y^\alpha }\mathcal{V}^\alpha $ and we have $d^Ex^i=\sigma _\alpha ^i\mathcal{X}^\alpha $, $\ d^Ey^\alpha =\mathcal{V}^\alpha$. The differential of sections of $(\mathcal{T}E)^{*}$ is determined by \begin{equation*} d^E\mathcal{X}^\alpha =-\frac 12L_{\beta \gamma }^\alpha \mathcal{X}^\beta \wedge \mathcal{X}^\gamma ,\quad d^E\mathcal{V}^\alpha =0. \end{equation*} In local coordinates a semispray has the expression \begin{equation} \mathcal{S}(x,y)=y^\alpha \mathcal{X}_\alpha +\mathcal{S}^\alpha (x,y)\mathcal{V}_\alpha \end{equation} and the following equality holds \begin{equation} J[\mathcal{S},JX]_{\mathcal{T}E}=-JX,\ X\in \Gamma (\mathcal{T}E). \end{equation} The integral curves of $\sigma ^1(\mathcal{S})$ satisfy the differential equations \begin{equation*} \frac{dx^i}{dt}=\sigma _\alpha ^i(x)y^\alpha ,\quad \frac{dy^\alpha }{dt}=\mathcal{S}^\alpha (x,y). \end{equation*} If the relation $[\Bbb{C},\mathcal{S}]_{\mathcal{T}E}=\mathcal{S}$ holds, then $\mathcal{S}$ is called a spray and the functions $\mathcal{S}^\alpha $ are homogeneous functions of degree $2$ in $y^\alpha .$ Let us consider a regular Lagrangian $L$ on $E$, that is, the matrix \begin{equation*} g_{\alpha \beta }=\frac{\partial ^2L}{\partial y^\alpha \partial y^\beta} \end{equation*} has constant rank $m$.
We have the Cartan 1-section $\theta _L=\frac{\partial L}{\partial y^\alpha }\mathcal{X}^\alpha$ and the Cartan 2-section $\omega _L=d^E\theta _L$, which is the symplectic structure induced by $L$, given by \cite{Ma2} \begin{equation*} \omega _L=g_{\alpha \beta }\mathcal{V}^\beta \wedge \mathcal{X}^\alpha +\frac 12\left( \sigma _\alpha ^i\frac{\partial ^2L}{\partial x^i\partial y^\beta }-\sigma _\beta ^i\frac{\partial ^2L}{\partial x^i\partial y^\alpha }-\frac{\partial L}{\partial y^\varepsilon }L_{\alpha \beta }^\varepsilon \right) \mathcal{X}^\alpha \wedge \mathcal{X}^\beta . \end{equation*} Considering the energy function $E_L=\Bbb{C}(L)-L$, with local expression \begin{equation*} E_L=y^\alpha \frac{\partial L}{\partial y^\alpha }-L, \end{equation*} the symplectic equation \begin{equation*} i_S\omega _L=-d^EE_L \end{equation*} determines the components of the canonical semispray \cite{Ma2} \begin{equation} S^\varepsilon =g^{\varepsilon \beta }\left( \sigma _\beta ^i\frac{\partial L}{\partial x^i}-\sigma _\alpha ^i\frac{\partial ^2L}{\partial x^i\partial y^\beta }y^\alpha -L_{\beta \alpha }^\gamma y^\alpha \frac{\partial L}{\partial y^\gamma }\right) , \end{equation} where $g_{\alpha \beta }g^{\beta \gamma }=\delta _\alpha ^\gamma $; the canonical semispray depends only on the regular Lagrangian and the structure functions of the Lie algebroid. \subsection{\textbf{Nonlinear connections on Lie algebroids}} A nonlinear connection is an important tool in the geometry of systems of second order differential equations. A system of SODE can be represented using the notion of semispray, which, together with a nonlinear connection, induces two important concepts (the dynamical covariant derivative and the Jacobi endomorphism) used to find the invariant equations of the symmetries of SODE. \begin{definition} A nonlinear connection on $\mathcal{T}E$ is an almost product structure $\mathcal{N}$ on $\pi _2:\mathcal{T}E\rightarrow E$ (i.e.
a bundle morphism $\mathcal{N}:\mathcal{T}E\rightarrow \mathcal{T}E$, such that $\mathcal{N}^2=Id$) smooth on $\mathcal{T}E\backslash \{0\}$ such that $V\mathcal{T}E=\ker (Id+\mathcal{N}).$ \end{definition} If $\mathcal{N}$ is a connection on $\mathcal{T}E$ then $H\mathcal{T}E=\ker (Id-\mathcal{N})$ is the horizontal subbundle associated to $\mathcal{N}$ and $\mathcal{T}E=V\mathcal{T}E\oplus H\mathcal{T}E.$ Each $\rho \in \Gamma (\mathcal{T}E)$ can be written as $\rho =\rho ^{\mathrm{h}}+\rho ^{\mathrm{v}},$ where $\rho ^{\mathrm{h}}$, $\rho ^{\mathrm{v}}$ are sections in the horizontal and vertical subbundles, respectively. If $\rho ^{\mathrm{h}}=0,$ then $\rho $ is called \textit{vertical} and if $\rho ^{\mathrm{v}}=0,$ then $\rho $ is called \textit{horizontal}. A connection $\mathcal{N}$ on $\mathcal{T}E$ induces two projectors $\mathrm{h},\mathrm{v}:\mathcal{T}E\rightarrow \mathcal{T}E$ such that $\mathrm{h}(\rho )=\rho ^{\mathrm{h}}$ and $\mathrm{v}(\rho )=\rho ^{\mathrm{v}}$ for every $\rho \in \Gamma (\mathcal{T}E)$. We have \begin{equation*} \mathrm{h}=\frac 12(Id+\mathcal{N}),\quad \mathrm{v}=\frac 12(Id-\mathcal{N}),\quad \ker \mathrm{h}=Im\mathrm{v}=V\mathcal{T}E,\quad Im\mathrm{h}=\ker \mathrm{v}=H\mathcal{T}E, \end{equation*} \begin{equation*} \mathrm{h}^2=\mathrm{h},\quad \mathrm{v}^2=\mathrm{v},\quad \mathrm{hv}=\mathrm{vh}=0,\quad \mathrm{h}+\mathrm{v}=Id,\quad \mathrm{h}-\mathrm{v}=\mathcal{N}, \end{equation*} \begin{equation*} J\mathrm{h}=J,\quad \mathrm{h}J=0,\quad J\mathrm{v}=0,\quad \mathrm{v}J=J. \end{equation*} Locally, a connection can be expressed as $\mathcal{N}(\mathcal{X}_\alpha )=\mathcal{X}_\alpha -2\mathcal{N}_\alpha ^\beta \mathcal{V}_\beta $, $\mathcal{N}(\mathcal{V}_\beta )=-\mathcal{V}_\beta ,$ where $\mathcal{N}_\alpha ^\beta =\mathcal{N}_\alpha ^\beta (x,y)$ are the local coefficients of $\mathcal{N}$.
The sections \begin{equation*} \delta _\alpha =\mathrm{h}(\mathcal{X}_\alpha )=\mathcal{X}_\alpha -\mathcal{N}_\alpha ^\beta \mathcal{V}_\beta \end{equation*} generate a basis of $H\mathcal{T}E$. The frame $\{\delta _\alpha ,\mathcal{V}_\alpha \}$ is a local basis of $\mathcal{T}E$ called the Berwald basis. The dual adapted basis is $\{\mathcal{X}^\alpha ,\delta \mathcal{V}^\alpha \}$, where $\delta \mathcal{V}^\alpha =\mathcal{V}^\alpha -\mathcal{N}_\beta ^\alpha \mathcal{X}^\beta .$ The Lie brackets of the adapted basis $\{\delta _\alpha ,\mathcal{V}_\alpha \}$ are \cite{Po2} \begin{equation} \lbrack \delta _\alpha ,\delta _\beta ]_{\mathcal{T}E}=L_{\alpha \beta }^\gamma \delta _\gamma +\mathcal{R}_{\alpha \beta }^\gamma \mathcal{V}_\gamma ,\quad [\delta _\alpha ,\mathcal{V}_\beta ]_{\mathcal{T}E}=\frac{\partial \mathcal{N}_\alpha ^\gamma }{\partial y^\beta }\mathcal{V}_\gamma ,\quad [\mathcal{V}_\alpha ,\mathcal{V}_\beta ]_{\mathcal{T}E}=0, \end{equation} where \begin{equation} \mathcal{R}_{\alpha \beta }^\gamma =\delta _\beta (\mathcal{N}_\alpha ^\gamma )-\delta _\alpha (\mathcal{N}_\beta ^\gamma )+L_{\alpha \beta }^\varepsilon \mathcal{N}_\varepsilon ^\gamma . \end{equation} \begin{definition} The curvature of the nonlinear connection $\mathcal{N}$ on $\mathcal{T}E$ is $\Omega =-\mathbf{N}_{\mathrm{h}}$, where $\mathrm{h}$ is the horizontal projector and $\mathbf{N}_{\mathrm{h}}$ is the Nijenhuis tensor of $\mathrm{h}$. \end{definition} In local coordinates we have \begin{equation*} \Omega =-\frac 12\mathcal{R}_{\alpha \beta }^\gamma \mathcal{X}^\alpha \wedge \mathcal{X}^\beta \otimes \mathcal{V}_\gamma , \end{equation*} where $\mathcal{R}_{\alpha \beta }^\gamma $ are given by (11) and represent the local coordinate functions of the curvature tensor.
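As an illustration (the particular case of the standard Lie algebroid $E=TM$, with anchor $\sigma =Id$, so that $\sigma _\alpha ^i=\delta _\alpha ^i$ and $L_{\alpha \beta }^\gamma =0$ in a coordinate basis), the adapted basis corresponds to the classical Berwald basis $\delta /\delta x^i=\partial /\partial x^i-\mathcal{N}_i^j\partial /\partial y^j$ and (11) reduces to
\begin{equation*}
\mathcal{R}_{ij}^k=\delta _j(\mathcal{N}_i^k)-\delta _i(\mathcal{N}_j^k),
\end{equation*}
the usual curvature of a nonlinear connection on the tangent bundle.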
The curvature of the nonlinear connection is an obstruction to the integrability of $H\mathcal{T}E$: the horizontal distribution $H\mathcal{T}E$ is integrable if and only if the curvature $\Omega $ of the nonlinear connection vanishes, that is, if and only if horizontal sections are closed under the Lie algebroid bracket of $\mathcal{T}E$. Also, from the Jacobi identity we obtain \begin{equation*} \lbrack \mathrm{h},\Omega ]_{\mathcal{T}E}=0. \end{equation*} Let us consider a semispray $\mathcal{S}$ and an arbitrary nonlinear connection $\mathcal{N}$ with induced $(\mathrm{h},\mathrm{v})$ projectors. Then we give the following definition (see also \cite{Po3}). \begin{definition} The vertically valued $(1,1)$-type tensor field on the Lie algebroid $\mathcal{T}E$ given by \begin{equation} \Phi =-\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{v}, \end{equation} will be called the Jacobi endomorphism. \end{definition} The Jacobi endomorphism $\Phi$ has been used in the study of Jacobi equations for SODE on Lie algebroids in \cite{Ca1} and to express one of the Helmholtz conditions of the inverse problem of the calculus of variations on Lie algebroids \cite{Po3} (see also \cite{Bar}). We obtain \begin{equation*} \Phi =-\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{v}=\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}=\mathrm{v}\circ (\mathcal{L}_{\mathcal{S}}\circ \mathrm{h}-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}})=\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}, \end{equation*} and in local coordinates the action of the Lie derivative on the Berwald basis is given by \begin{equation} \mathcal{L}_{\mathcal{S}}\delta _\beta =\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha +\mathcal{R}_\beta ^\gamma \mathcal{V}_\gamma ,\quad \mathcal{L}_{\mathcal{S}}\mathcal{V}_\beta =-\delta _\beta -\left( \mathcal{N}_\beta ^\alpha +\frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }\right) \mathcal{V}_\alpha.
\end{equation} The Jacobi endomorphism has the local form \begin{equation} \Phi =\mathcal{R}_\beta ^\alpha \mathcal{V}_\alpha \otimes \mathcal{X}^\beta ,\quad \mathcal{R}_\beta ^\gamma =-\sigma _\beta ^i\frac{\partial \mathcal{S}^\gamma }{\partial x^i}-\mathcal{S}(\mathcal{N}_\beta ^\gamma )+\mathcal{N}_\beta ^\alpha \mathcal{N}_\alpha ^\gamma +\mathcal{N}_\beta ^\alpha \frac{\partial \mathcal{S}^\gamma }{\partial y^\alpha }+\mathcal{N}_\varepsilon ^\gamma L_{\alpha \beta }^\varepsilon y^\alpha . \end{equation} \begin{proposition} The following formula holds \begin{equation} \Phi =i_{\mathcal{S}}\Omega +\mathrm{v}\circ \mathcal{L}_{\mathrm{v}\mathcal{S}}\mathrm{h}. \end{equation} \end{proposition} \textbf{Proof}. Indeed, $\Phi (\rho )=\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}\rho =\mathrm{v}\circ \mathcal{L}_{\mathrm{h}\mathcal{S}}\mathrm{h}\rho +\mathrm{v}\circ \mathcal{L}_{\mathrm{v}\mathcal{S}}\mathrm{h}\rho $ and $\Omega (\mathcal{S},\rho )=\mathrm{v}[\mathrm{h}\mathcal{S},\mathrm{h}\rho ]_{\mathcal{T}E}=\mathrm{v}\circ \mathcal{L}_{\mathrm{h}\mathcal{S}}\mathrm{h}\rho$, which yields $\Phi (\rho )=\Omega (\mathcal{S},\rho )+\mathrm{v}\circ \mathcal{L}_{\mathrm{v}\mathcal{S}}\mathrm{h}\rho .$ \hbox{\rlap{$\sqcap$}$\sqcup$} For a given semispray $\mathcal{S}$ on $\mathcal{T}E$, the Lie derivative $\mathcal{L}_\mathcal{S}$ defines a tensor derivation on $\mathcal{T}E$, but it does not preserve certain geometric structures, such as the tangent structure and the nonlinear connection. Next, using a nonlinear connection, we introduce a tensor derivation on $\mathcal{T}E$, called the dynamical covariant derivative, that preserves these geometric structures. \section{\textbf{Dynamical covariant derivative on Lie algebroids}} In the following we introduce the notion of dynamical covariant derivative on Lie algebroids as a tensor derivation and study its properties.
We will use the Jacobi endomorphism and the dynamical covariant derivative in the study of symmetries for SODE on Lie algebroids. \begin{definition} \cite{Po3} A map $\nabla :\frak{T}(\mathcal{T}E\backslash \{0\})\rightarrow \frak{T}(\mathcal{T}E\backslash \{0\})$ is said to be a tensor derivation on $\mathcal{T}E\backslash \{0\}$ if the following conditions are satisfied:\\ i) $\nabla $ is $\Bbb{R}$-linear;\\ii) $\nabla $ is type preserving, i.e. $\nabla (\frak{T}_s^r(\mathcal{T}E\backslash \{0\}))\subset \frak{T}_s^r(\mathcal{T}E\backslash \{0\})$, for each $(r,s)\in \Bbb{N}\times \Bbb{N};$\\ iii) $\nabla $ obeys the Leibniz rule $\nabla (P\otimes S)=\nabla P\otimes S+P\otimes \nabla S$, for any tensors $P,S$ on $\mathcal{T}E\backslash \{0\};$\\iv) $\nabla \,$commutes with any contraction. Here $\frak{T}_{\bullet }^{\bullet }(\mathcal{T}E\backslash \{0\})$ denotes the space of tensors on $\mathcal{T}E\backslash \{0\}.$ \end{definition} For a semispray $\mathcal{S}$ and an arbitrary nonlinear connection $\mathcal{N}$ we consider the $\Bbb{R}$-linear map $\nabla :\Gamma (\mathcal{T}E\backslash \{0\})\rightarrow \Gamma (\mathcal{T}E\backslash \{0\})$ given by \begin{equation} \nabla =\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v,} \end{equation} which will be called the dynamical covariant derivative induced by the semispray $\mathcal{S}$ and the nonlinear connection $\mathcal{N}$. By setting $\nabla f=\mathcal{S}(f)$ for $f\in C^\infty (E\backslash \{0\})$, and using the Leibniz rule and the requirement that $\nabla $ commutes with any contraction, we can extend the action of $\nabla $ to arbitrary tensors on $\mathcal{T}E\backslash \{0\}$.
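For instance (a routine verification, using only the Leibniz rule and commutation with contractions), for a section $\varphi $ of $(\mathcal{T}E)^{*}$ and $\rho \in \Gamma (\mathcal{T}E\backslash \{0\})$, applying $\nabla $ to the contraction $\varphi (\rho )$ gives
\begin{equation*}
\mathcal{S}(\varphi (\rho ))=\nabla (\varphi (\rho ))=(\nabla \varphi )(\rho )+\varphi (\nabla \rho ),
\end{equation*}
which determines the action of $\nabla $ on sections of $(\mathcal{T}E)^{*}$.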
For a section $\varphi $ of $(\mathcal{T}E)^{*}$ the dynamical covariant derivative is given by $(\nabla \varphi )(\rho )=\mathcal{S}(\varphi (\rho ))-\varphi (\nabla \rho ).$ For a $(1,1)$-type tensor field $T$ on $\mathcal{T}E\backslash \{0\}$ the dynamical covariant derivative has the form \begin{equation} \nabla T=\nabla \circ T-T\circ \nabla \end{equation} and by direct computation using (17) we obtain \begin{equation*} \nabla \mathrm{h}=\nabla \mathrm{v}=0, \end{equation*} which means that $\nabla$ preserves the horizontal and vertical sections. Also, we get \begin{equation*} \nabla \mathcal{V}_\beta =\mathrm{v}[\mathcal{S},\mathcal{V}_\beta ]_{\mathcal{T}E}=-\left( \mathcal{N}_\beta ^\alpha +\frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }\right) \mathcal{V}_\alpha ,\quad \nabla \delta \mathcal{V}^\beta =\left( \mathcal{N}_\alpha ^\beta +\frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }\right) \delta \mathcal{V}^\alpha, \end{equation*} \begin{equation*} \nabla \delta _\beta =\mathrm{h}[\mathcal{S},\delta _\beta ]_{\mathcal{T}E}=\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha,\quad \nabla \mathcal{X}^\beta =-\left( \mathcal{N}_\alpha ^\beta -L_{\alpha \varepsilon }^\beta y^\varepsilon \right) \mathcal{X}^\alpha . \end{equation*} The action of the dynamical covariant derivative on a horizontal section $X=\mathrm{h}X$ is given by the following relations \begin{equation} \nabla X=\nabla \left( X^\alpha \delta _\alpha \right) =\nabla X^\alpha \delta _\alpha ,\quad \nabla X^\alpha =\mathcal{S}(X^\alpha )+\left( \mathcal{N}_\beta ^\alpha +y^\varepsilon L_{\varepsilon \beta }^\alpha \right) X^\beta.
\end{equation} \begin{proposition} The following results hold \begin{equation} \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J=-\mathrm{h},\quad J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=-\mathrm{v}, \end{equation} \begin{equation} \nabla J=\mathcal{L}_{\mathcal{S}}J+\mathcal{N},\quad \nabla J=-\left( \frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }-y^\varepsilon L_{\alpha \varepsilon }^\beta +2\mathcal{N}_\alpha ^\beta \right) \mathcal{X}^\alpha \otimes \mathcal{V}_\beta. \end{equation} \end{proposition} \textbf{Proof}. From (8) we get \begin{equation*} J[\mathcal{S},JX]_{\mathcal{T}E}=-JX\Rightarrow J\left( [\mathcal{S},JX]_{\mathcal{T}E}+X\right) =0\Rightarrow [\mathcal{S},JX]_{\mathcal{T}E}+X\in V\mathcal{T}E, \end{equation*} \begin{equation*} \mathrm{h}\left( [\mathcal{S},JX]_{\mathcal{T}E}+X\right) =0\Rightarrow \mathrm{h}[\mathcal{S},JX]_{\mathcal{T}E}=-\mathrm{h}X\Leftrightarrow \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J=-\mathrm{h}. \end{equation*} Also, in $J[\mathcal{S},JX]_{\mathcal{T}E}+JX=0$, considering $JX=\mathrm{v}Z$, it results $J[\mathcal{S},\mathrm{v}Z]_{\mathcal{T}E}=-\mathrm{v}Z\Leftrightarrow J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=-\mathrm{v}.$ Next \begin{eqnarray*} \nabla \circ J &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}\circ J+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}\circ J=\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ J= \\ \ &=&(Id-\mathrm{h})\circ \mathcal{L}_{\mathcal{S}}\circ J=\mathcal{L}_S\circ J-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J=\mathcal{L}_S\circ J+\mathrm{h}. \end{eqnarray*} But, on the other hand \begin{equation*} J\circ \nabla =J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}=J\circ \mathcal{L}_{\mathcal{S}}\circ (Id-\mathrm{v})=J\circ \mathcal{L}_{\mathcal{S}}-J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=J\circ \mathcal{L}_{\mathcal{S}}+\mathrm{v}.
\end{equation*} and we obtain \begin{equation*} \nabla \circ J-J\circ \nabla =\mathcal{L}_S\circ J+\mathrm{h}-J\circ \mathcal{L}_{\mathcal{S}}-\mathrm{v}\Rightarrow \nabla J=\mathcal{L}_{ \mathcal{S}}J+\mathrm{h}-\mathrm{v}=\mathcal{L}_{\mathcal{S}}J+\mathcal{N}. \end{equation*} For the last relation, we have \begin{eqnarray*} \nabla J &=&\nabla \left( \mathcal{X}^\beta \otimes \mathcal{V}_\beta \right) =\nabla \mathcal{X}^\beta \otimes \mathcal{V}_\beta +\mathcal{X} ^\beta \otimes \nabla \mathcal{V}_\beta \\ \ &=&-\left( \mathcal{N}_\alpha ^\beta -L_{\alpha \varepsilon }^\beta y^\varepsilon \right) \mathcal{X}^\alpha \otimes \mathcal{V}_\beta +\mathcal{ X}^\beta \otimes \left( -\mathcal{N}_\beta ^\alpha -\frac{\partial \mathcal{S }^\alpha }{\partial y^\beta }\right) \mathcal{V}_\alpha \\ \ &=&-\mathcal{N}_\alpha ^\beta \mathcal{X}^\alpha \otimes \mathcal{V}_\beta +L_{\alpha \varepsilon }^\beta y^\varepsilon \mathcal{X}^\alpha \otimes \mathcal{V}_\beta -\mathcal{N}_\beta ^\alpha \mathcal{X}^\beta \otimes \mathcal{V}_\alpha -\frac{\partial \mathcal{S}^\alpha }{\partial y^\beta } \mathcal{X}^\beta \otimes \mathcal{V}_\alpha \\ \ &=&\left( L_{\alpha \varepsilon }^\beta y^\varepsilon -\frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }-2\mathcal{N}_\alpha ^\beta \right) \mathcal{X}^\alpha \otimes \mathcal{V}_\beta . \end{eqnarray*} \hbox{\rlap{$\sqcap$}$\sqcup$}. The above proposition leads to the following result: \begin{theorem} For a semispray $\mathcal{S}$, an arbitrary nonlinear connection $\mathcal{N} $ and $\nabla$ the dynamical covariant derivative induced by $\mathcal{S}$ and $ \mathcal{N}$, the following conditions are equivalent:\\ $i)$ $\nabla J=0,$\\ $ii)$ $\mathcal{L}_SJ+\mathcal{N}=0,$\\ $iii)$ $\mathcal{N}_\alpha ^\beta =\frac 12\left( -\frac{\partial \mathcal{S} ^\beta }{\partial y^\alpha }+y^\varepsilon L_{\alpha \varepsilon }^\beta \right)$. \end{theorem} \textbf{Proof}. The proof follows from the relations (20). 
\hbox{\rlap{$\sqcap$}$\sqcup$} This theorem shows that the compatibility condition $\nabla J=0$ of the dynamical covariant derivative with the tangent structure determines the nonlinear connection $\mathcal{N}=-\mathcal{L}_{\mathcal{S}}J$. For the particular case of the tangent bundle we obtain the results from \cite{Bu3}. In the following we deal with this nonlinear connection induced by the semispray. \subsection{The canonical nonlinear connection induced by a semispray} A semispray $\mathcal{S},$ together with the condition $\nabla J=0,$ determines the canonical nonlinear connection $\mathcal{N}=-\mathcal{L}_{\mathcal{S}}J$ with local coefficients \begin{equation*} \mathcal{N}_\alpha ^\beta =\frac 12\left( -\frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }+y^\varepsilon L_{\alpha \varepsilon }^\beta \right) . \end{equation*} In this case the following equations hold \begin{equation*} \lbrack \mathcal{S},\mathcal{V}_\beta ]_{\mathcal{T}E}=-\delta _\beta +\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \mathcal{V}_\alpha , \end{equation*} \begin{equation*} \lbrack \mathcal{S},\delta _\beta ]_{\mathcal{T}E}=\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha +\mathcal{R}_\beta ^\alpha \mathcal{V}_\alpha , \end{equation*} where \begin{equation} \mathcal{R}_\beta ^\alpha =-\sigma _\beta ^i\frac{\partial \mathcal{S}^\alpha }{\partial x^i}-\mathcal{S}(\mathcal{N}_\beta ^\alpha )-\mathcal{N}_\gamma ^\alpha \mathcal{N}_\beta ^\gamma +(L_{\varepsilon \beta }^\gamma \mathcal{N}_\gamma ^\alpha +L_{\gamma \varepsilon }^\alpha \mathcal{N}_\beta ^\gamma )y^\varepsilon \end{equation} are the local coefficients of the Jacobi endomorphism. \begin{proposition} If $\mathcal{S}$ is a spray, then the Jacobi endomorphism is the contraction with $\mathcal{S}$ of the curvature of the nonlinear connection \begin{equation*} \Phi =i_{\mathcal{S}}\Omega . \end{equation*} \end{proposition} \textbf{Proof}.
If $\mathcal{S}$ is a spray, then the coefficients $\mathcal{S}^\alpha $ are $2$-homogeneous with respect to the variables $y^\beta$ and it follows that \begin{equation*} 2\mathcal{S}^\alpha =\frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }y^\beta =-2\mathcal{N}_\beta ^\alpha y^\beta +L_{\beta \gamma }^\alpha y^\beta y^\gamma =-2\mathcal{N}_\beta ^\alpha y^\beta , \end{equation*} since $L_{\beta \gamma }^\alpha y^\beta y^\gamma =0$ by the skew-symmetry of the structure functions. Moreover, \begin{equation*} \mathcal{S}=\mathrm{h}\mathcal{S}=y^\alpha \delta _\alpha ,\quad \mathrm{v}\mathcal{S}=0,\quad \mathcal{N}_\beta ^\alpha =\frac{\partial \mathcal{N}_\varepsilon ^\alpha }{\partial y^\beta }y^\varepsilon +L_{\beta \varepsilon }^\alpha y^\varepsilon , \end{equation*} which together with (15) yields $\Phi =i_{\mathcal{S}}\Omega$. Locally, we get $\mathcal{R}_\beta ^\alpha =\mathcal{R}_{\varepsilon \beta }^\alpha y^\varepsilon $, which represents the local relation between the Jacobi endomorphism and the curvature of the nonlinear connection. Also, we have $\Phi(\mathcal{S})=0$. \hbox{\rlap{$\sqcap$}$\sqcup$}\\ Next, we introduce the almost complex structure in order to find the decomposition formula for the dynamical covariant derivative. \begin{definition} The almost complex structure is given by the formula \begin{equation*} \Bbb{F}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}-J. \end{equation*} \end{definition} We have to show that $\Bbb{F}^2=-Id$.
Indeed, from the relation $\mathcal{L}_{\mathcal{S}}\mathrm{h}=\mathcal{L}_{\mathcal{S}}\circ \mathrm{h}-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}$ we obtain $\Bbb{F}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}-J=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ (\mathrm{h}-Id)-J=-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J$ and $\Bbb{F}^2=\left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J\right) \circ \left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J\right) =\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}\circ \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}+\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}\circ J+$\\ $+J\circ \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}+J^2=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J+J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}$ $=-\mathrm{h}-\mathrm{v}=-Id,$ where we used $\mathrm{v}\circ \mathrm{h}=0$, $\mathrm{v}\circ J=J$, $J\circ \mathrm{h}=J$, $J^2=0$ and the relations (19). \begin{proposition} The following results hold \begin{equation*} \begin{array}{c} \Bbb{F}\circ J=\mathrm{h},\quad J\circ \Bbb{F}=\mathrm{v},\quad \mathrm{v}\circ \Bbb{F}=\Bbb{F}\circ \mathrm{h}=-J, \\ \mathrm{h}\circ \Bbb{F}=\Bbb{F}\circ \mathrm{v}=\Bbb{F}+J,\quad \mathcal{N}\circ \Bbb{F}=\Bbb{F}+2J,\quad \Phi =\mathcal{L}_{\mathcal{S}}\mathrm{h}-\Bbb{F}-J. \end{array} \end{equation*} \end{proposition} \textbf{Proof}.
Using the relations (19) we obtain\\ $\Bbb{F}\circ J=\left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J\right) \circ J=$ $-\mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}\circ J-J^2$ $=-\mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ J=\mathrm{h}$,\\ $J\circ \Bbb{F}=-J\circ \left( \mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}+J\right) =-J\circ \mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}-J^2=-J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \mathrm{v}$,\\ $\mathrm{v}\circ \Bbb{F}=\mathrm{v}\circ \left( \mathrm{h}\circ \mathcal{L}_{ \mathcal{S}}\mathrm{h}-J\right) =-\mathrm{v}\circ J=-J$, $\Bbb{F}\circ \mathrm{h}=\left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v} -J\right) \circ \mathrm{h}=-J\circ \mathrm{h}=-J,$ $\mathrm{h}\circ \Bbb{F}=\mathrm{h}\circ \left( \mathrm{h}\circ \mathcal{L}_{ \mathcal{S}}\mathrm{h}-J\right) =\mathrm{h}\circ \mathcal{L}_{\mathcal{S}} \mathrm{h}=\Bbb{F}+J$, $\Bbb{F}\circ \mathrm{v=}\left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J\right) \circ \mathrm{v}=-\mathrm{ h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=$ $\Bbb{F}+J$. In the same way, the other relations can be proved. \hbox{\rlap{$\sqcap$}$\sqcup$}\\ In local coordinates we have \begin{equation*} \Bbb{F}=-\mathcal{V}_\alpha \otimes \mathcal{X}^\alpha +\delta_\alpha \otimes \delta \mathcal{V}^\alpha . \end{equation*} For a semispray $\mathcal{S}$ and the associated nonlinear connection we consider the $\Bbb{R}$-linear map $\nabla _0:\Gamma (\mathcal{T}E\backslash \{0\})\rightarrow \Gamma (\mathcal{T}E\backslash \{0\})$ given by \begin{equation*} \nabla _0\rho =\mathrm{h}[\mathcal{S},\mathrm{h}\rho ]_{\mathcal{T}E}+ \mathrm{v}[\mathcal{S},\mathrm{v}\rho ]_{\mathcal{T}E},\quad \forall \rho \in \Gamma (\mathcal{T}E\backslash \{0\}). 
\end{equation*} It results that \begin{equation*} \nabla _0(f\rho )=\mathcal{S}(f)\rho +f\nabla _0\rho ,\quad \forall f\in C^\infty (E),\ \rho \in \Gamma (\mathcal{T}E\backslash \{0\}). \end{equation*} Any tensor derivation on $\mathcal{T}E\backslash \{0\}$ is completely determined by its actions on smooth functions and sections on $\mathcal{T}E\backslash \{0\}$ (see \cite{Sz2} for a generalized Willmore theorem). Therefore, there exists a unique tensor derivation $\nabla $ on $\mathcal{T}E\backslash \{0\}$ such that \begin{equation*} \nabla \mid _{C^\infty (E)}=\mathcal{S},\quad \nabla \mid _{\Gamma (\mathcal{T}E\backslash \{0\})}=\nabla _0. \end{equation*} We will call the tensor derivation $\nabla $ the \textit{dynamical covariant derivative} induced by the semispray $\mathcal{S}$ (see \cite{Bu2} for the tangent bundle case). \begin{proposition} The dynamical covariant derivative has the following decomposition \begin{equation} \nabla =\mathcal{L}_{\mathcal{S}}+\Bbb{F}+J-\Phi. \end{equation} \end{proposition} \textbf{Proof}. Using the formula (16) and the expressions of $\Bbb{F}$ and $\Phi $ we obtain \begin{eqnarray*} \nabla &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \\ &=&\mathrm{h}\circ \left( \mathcal{L}_{\mathcal{S}}\mathrm{h}+\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\right) +\mathrm{v}\circ \left( \mathcal{L}_{\mathcal{S}}\mathrm{v}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\right) = \\ &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{v}+(\mathrm{h}+\mathrm{v})\circ \mathcal{L}_{\mathcal{S}}=\mathcal{L}_{\mathcal{S}}+\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{v}= \\ &=&\mathcal{L}_{\mathcal{S}}+\Bbb{F}+J-\Phi .
\end{eqnarray*} \hbox{\rlap{$\sqcap$}$\sqcup$}\\ In this case the dynamical covariant derivative is characterized by the following formulas \begin{equation*} \nabla \mathcal{V}_\beta =\mathrm{v}[\mathcal{S},\mathcal{V}_\beta ]_{\mathcal{T}E}=\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \mathcal{V}_\alpha =-\frac 12\left( \frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }+L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \mathcal{V}_\alpha , \end{equation*} \begin{equation*} \nabla \delta _\beta =\mathrm{h}[\mathcal{S},\delta _\beta ]_{\mathcal{T}E}=\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha =-\frac 12\left( \frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }+L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha. \end{equation*} These formulas show that $\nabla$ acts in the same way on the vertical and horizontal distributions, so it is enough to know the action of $\nabla$ on one of the two distributions. \begin{proposition} The dynamical covariant derivative induced by the semispray $\mathcal{S}$ is compatible with $J$ and $\Bbb{F}$, that is \begin{equation*} \nabla J=0,\ \nabla \Bbb{F}=0. \end{equation*} \end{proposition} \textbf{Proof}. $\nabla J=0$ follows from (20).
Using the formula $\Bbb{F}=-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J$, the relation $\mathrm{v}-\mathrm{h}=-\mathcal{N}=\mathcal{L}_{\mathcal{S}}J$ and $\nabla \Bbb{F}=\nabla \circ \Bbb{F}-\Bbb{F}\circ \nabla$ we obtain \begin{eqnarray*} \nabla \Bbb{F} &=&(\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v})\circ (-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v})-(-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v})\circ (\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v})= \\ \ &=&-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}+\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \\ \ &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ (\mathrm{v}-\mathrm{h})\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S}}J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \\ \ &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ (\mathcal{L}_{\mathcal{S}}\circ J-J\circ \mathcal{L}_{\mathcal{S}})\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \\ \ &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S}}\circ (J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v})-(\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J)\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \\ \ &=&-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}+\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=0. \end{eqnarray*} \hbox{\rlap{$\sqcap$}$\sqcup$}\\ The next proposition proves that, in the case of a spray, $\nabla$ has additional properties. \begin{proposition} If the dynamical covariant derivative is induced by a spray $\mathcal{S}$ then \begin{equation*} \nabla \mathcal{S}=0,\ \nabla \Bbb{C}=0.
\end{equation*} \end{proposition} \textbf{Proof}. Indeed, if $\mathcal{S}$ is a spray then we have $\mathcal{S}=\mathrm{h}\mathcal{S}$ and $\mathrm{v}\mathcal{S}=0$, and it results $\nabla \mathcal{S}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}\mathcal{S}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}\mathcal{S}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{S}=0.$ Also $\nabla \Bbb{C}=0$ follows from $\mathrm{h}\Bbb{C}=0$, $\mathrm{v}\Bbb{C}=\Bbb{C}$ and $[\Bbb{C},\mathcal{S}]_{\mathcal{T}E}=\mathcal{S}$. \hbox{\rlap{$\sqcap$}$\sqcup$}\\ Next, we introduce the Berwald linear connection induced by a nonlinear connection and prove that, in the case of homogeneous second order differential equations (sprays), the derivative $\mathcal{D}_{\mathcal{S}}$ coincides with the dynamical covariant derivative. The Berwald linear connection is given by \begin{equation*} \mathcal{D}:\Gamma (\mathcal{T}E\backslash \{0\})\times \Gamma (\mathcal{T}E\backslash \{0\})\rightarrow \Gamma (\mathcal{T}E\backslash \{0\}), \end{equation*} \begin{equation*} \mathcal{D}_XY=\mathrm{v}[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T}E}+\mathrm{h}[\mathrm{v}X,\mathrm{h}Y]_{\mathcal{T}E}+J[\mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}+(\Bbb{F}+J)[\mathrm{h}X,JY]_{\mathcal{T}E}. \end{equation*} \begin{proposition} The Berwald linear connection has the following properties \begin{equation*} \mathcal{D}\mathrm{h}=0,\quad \mathcal{D}\mathrm{v}=0,\quad \mathcal{D}J=0,\quad \mathcal{D}\Bbb{F}=0. \end{equation*} \end{proposition} \textbf{Proof}. Using the properties of the vertical and horizontal projectors we obtain\\ $\mathcal{D}_X\mathrm{v}Y=\mathrm{v}[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T}E}+J[\mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}$ and\\ $\mathrm{v}(\mathcal{D}_XY)=\mathrm{v}[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T}E}+J[\mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}$, which yields $\mathcal{D}\mathrm{v}=0$.
Also,\\ $\mathcal{D}_X\mathrm{h}Y=\mathrm{h}[\mathrm{v}X,\mathrm{h}Y]_{\mathcal{T}E}+(\Bbb{F}+J)[\mathrm{h}X,JY]_{\mathcal{T}E}=\mathrm{h}(\mathcal{D}_XY)$ and it results $\mathcal{D}\mathrm{h}=0$. Moreover,\\ $\mathcal{D}_XJY=\mathrm{v}[\mathrm{h}X,JY]_{\mathcal{T}E}+J[\mathrm{v}X,\mathrm{h}Y]_{\mathcal{T}E}$ and $J(\mathcal{D}_XY)=J[\mathrm{v}X,\mathrm{h}Y]_{\mathcal{T}E}+\mathrm{v}[\mathrm{h}X,JY]_{\mathcal{T}E}$ and we obtain $\mathcal{D}J=0.$ From\\ $\mathcal{D}_X\Bbb{F}Y=\mathrm{v}[\mathrm{h}X,-JY]_{\mathcal{T}E}+\mathrm{h}[\mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}+J[\mathrm{v}X,-\mathrm{h}Y]_{\mathcal{T}E}+(\Bbb{F}+J)[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T}E}$ and\\ $\Bbb{F}(\mathcal{D}_XY)=(\Bbb{F}+J)[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T}E}-J[\mathrm{v}X,\mathrm{h}Y]_{\mathcal{T}E}+\mathrm{h}[\mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}-\mathrm{v}[\mathrm{h}X,JY]_{\mathcal{T}E}=$\\ $\mathcal{D}_X\Bbb{F}Y$ we get $\mathcal{D}\Bbb{F}=0.$ \hbox{\rlap{$\sqcap$}$\sqcup$}\\ It results that the Berwald connection preserves both horizontal and vertical sections. Moreover, $\mathcal{D}$ has the same action on the horizontal and vertical distributions, and locally we have the following formulas \begin{equation*} \mathcal{D}_{\delta _\alpha }\delta _\beta =\frac{\partial \mathcal{N}_\alpha ^\gamma }{\partial y^\beta }\delta _\gamma ,\quad \mathcal{D}_{\delta _\alpha }\mathcal{V}_\beta =\frac{\partial \mathcal{N}_\alpha ^\gamma }{\partial y^\beta }\mathcal{V}_\gamma ,\quad \mathcal{D}_{\mathcal{V}_\alpha }\delta _\beta =0,\quad \mathcal{D}_{\mathcal{V}_\alpha }\mathcal{V}_\beta =0. \end{equation*} We can see that the dynamical covariant derivative has the same properties and this leads to the next result. \begin{proposition} If $\mathcal{S}$ is a spray then the following equality holds \begin{equation*} \nabla =\mathcal{D}_{\mathcal{S}}. \end{equation*} \end{proposition} \textbf{Proof}.
If $\mathcal{S}$ is a spray then $\mathcal{S}=\mathrm{h} \mathcal{S}$ and $\mathrm{v}\mathcal{S}=0$ which implies \begin{equation*} \mathcal{D}_{\mathcal{S}}Y=\mathrm{v}[\mathcal{S},\mathrm{v}Y]_{\mathcal{T} E}+(\Bbb{F}+J)[\mathcal{S},JY]_{\mathcal{T}E}. \end{equation*} But $\nabla Y=\mathrm{h}[\mathcal{S},\mathrm{h}Y]_{\mathcal{T}E}+\mathrm{v}[ \mathcal{S},\mathrm{v}Y]_{\mathcal{T}E}$ and we will prove that $\mathrm{h}[ \mathcal{S},\mathrm{h}Y]_{\mathcal{T}E}=(\Bbb{F}+J)[\mathcal{S},JY]_{ \mathcal{T}E}$ using the computation in local coordinates. Let us consider $ Y=X^\alpha (x,y)\mathcal{X}_\alpha +Y^\beta (x,y)\mathcal{V}_\beta $ and using (10) we get \begin{equation*} \lbrack \mathcal{S},\mathrm{h}Y]_{\mathcal{T}E}=[y^\alpha \delta _\alpha ,X^\beta \delta _\beta ]_{\mathcal{T}E}=y^\alpha X^\beta \mathcal{R}_{\alpha \beta }^\varepsilon \mathcal{V}_\varepsilon +y^\alpha X^\beta L_{\alpha \beta }^\varepsilon \delta _\varepsilon +y^\alpha \delta _\alpha (X^\beta )\delta _\beta +X^\beta N_\beta ^\alpha \delta _\alpha, \end{equation*} \begin{equation*} \mathrm{h}[\mathcal{S},\mathrm{h}Y]_{\mathcal{T}E}=\left( y^\alpha \delta _\alpha (X^\beta )+X^\alpha N_\alpha ^\beta+y^\alpha X^\varepsilon L_{\alpha \varepsilon}^\beta \right) \delta _\beta. \end{equation*} Next \begin{equation*} \lbrack \mathcal{S},JY]_{\mathcal{T}E}=[y^\alpha \delta _\alpha ,X^\beta \mathcal{V}_\beta ]_{\mathcal{T}E}=y^\alpha X^\beta \frac{\partial N_\alpha ^\varepsilon }{\partial y^\beta }\mathcal{V}_\varepsilon +y^\alpha \delta _\alpha (X^\beta )\mathcal{V}_\beta -X^\beta \delta _\beta . \end{equation*} Also, we have \begin{equation*} y^\alpha X^\beta \frac{\partial N_\alpha ^\varepsilon }{\partial y^\beta }= \mathcal{N}_\beta ^\varepsilon X^\beta -L_{\beta \alpha }^\varepsilon y^\alpha X^\beta, \end{equation*} and using the relations $(\Bbb{F}+J)(\mathcal{V}_\alpha )=\delta _\alpha $, $ (\Bbb{F}+J)(\delta _\alpha )=0$ we obtain the result which ends the proof. 
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ Moreover, $\nabla{\mathcal{S}} =\mathcal{D}_{\mathcal{S}}{\mathcal{S}}=0$ and it results that the integral curves of the spray are geodesics of the Berwald linear connection. \section{\textbf{Symmetries for semispray}} In this section we study the symmetries of SODE on Lie algebroids and prove that the canonical nonlinear connection can be determined by these symmetries. We find the relations between dynamical symmetries, Lie symmetries, Newtonoid sections, Cartan symmetries and conservation laws, and show when one of them will imply the others. Also, we obtain the invariant equations of these symmetries, using the dynamical covariant derivative and Jacobi endomorphism. In the particular case of the tangent bundle some results from \cite{Bu3, Mar, Pr1, Pr2} are obtained. \begin{definition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a dynamical symmetry of semispray $\mathcal{S}$ if $[\mathcal{S},X]_{\mathcal{T}E}=0.$ \end{definition} In local coordinates for $X=X^\alpha (x,y)\mathcal{X}_\alpha +Y^\alpha (x,y) \mathcal{V}_\alpha $ we obtain \begin{equation*} \lbrack \mathcal{S},X]_{\mathcal{T}E}=\left( y^\alpha L_{\alpha \gamma }^\beta X^\gamma -Y^\beta +\mathcal{S}(X^\beta )\right) \mathcal{X}_\beta +\left( \mathcal{S}(Y^\beta )-X(\mathcal{S}^\beta )\right) \mathcal{V}_\beta , \end{equation*} and it results that the dynamical symmetry is characterized by the equations \begin{equation} Y^\alpha =\mathcal{S}(X^\alpha )+y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta , \end{equation} \begin{equation} \mathcal{S}(Y^\alpha )-X(\mathcal{S}^\alpha )=0. 
\end{equation} Introducing (23) into (24) we obtain \begin{equation*} \mathcal{S}^2(X^\alpha )-X(\mathcal{S}^\alpha )=\left( \sigma _\gamma ^i \frac{\partial L_{\varepsilon \beta }^\alpha }{\partial x^i}X^\beta +L_{\varepsilon \beta }^\alpha \sigma _\gamma ^i\frac{\partial X^\beta }{ \partial x^i}\right) y^\gamma y^\varepsilon +\mathcal{S}^\gamma \left( L_{\gamma \beta }^\alpha X^\beta +y^\varepsilon L_{\varepsilon \beta }^\alpha \frac{\partial X^\beta }{\partial y^\gamma }\right). \end{equation*} \begin{definition} A section $\widetilde{X}=\widetilde{X}^\alpha (x,y)s_\alpha $ on $E\backslash \{0\}$ is a Lie symmetry of a semispray if its complete lift $\widetilde{X}^c$ is a dynamical symmetry, that is $[\mathcal{S},\widetilde{X}^c]_{\mathcal{T}E}=0.$ \end{definition} \begin{proposition} The local expression of a Lie symmetry is given by \begin{eqnarray*} \mathcal{S}^\alpha \frac{\partial \widetilde{X}^\beta }{\partial y^\alpha } =0, \end{eqnarray*} \begin{equation*} \mathcal{S}^\alpha \widetilde{X}_{\mid _\alpha }^\beta +y^\alpha y^\varepsilon \sigma _\alpha ^i\frac{\partial \widetilde{X}_{\mid _\varepsilon }^\beta }{\partial x^i}-\widetilde{X}^\alpha \sigma _\alpha ^i \frac{\partial \mathcal{S}^\beta }{\partial x^i}-y^\varepsilon \widetilde{X} _{\mid _\varepsilon }^\alpha \frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }+S^\alpha y^\varepsilon \left( \sigma _\varepsilon ^i\frac{ \partial ^2\widetilde{X}^\beta }{\partial y^\alpha \partial x^i}-L_{\gamma \varepsilon }^\beta \frac{\partial \widetilde{X}^\gamma }{\partial y^\alpha } \right) =0. \end{equation*} where \begin{equation*} \widetilde{X}_{\mid_\varepsilon }^\alpha :=\sigma_\varepsilon ^i\frac{ \partial \widetilde{X}^\alpha }{\partial x^i}-L_{\beta \varepsilon}^\alpha \widetilde{X}^\beta , \end{equation*} \end{proposition} \textbf{Proof}. 
Considering $\widetilde{X}^c=\widetilde{X}^\alpha \mathcal{X} _\alpha +y^\varepsilon \widetilde{X}_{\mid \varepsilon }^\alpha \mathcal{V} _\alpha $ and using (1) we obtain \begin{equation*} \lbrack \mathcal{S},\widetilde{X}^c]_{\mathcal{T}E}=\left( \widetilde{X} ^\alpha y^\varepsilon L_{\varepsilon \alpha }^\beta +y^\alpha \sigma _\alpha ^i\frac{\partial \widetilde{X}^\beta }{\partial x^i}-y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\beta +\mathcal{S}^\alpha \frac{\partial \widetilde{X}^\beta }{\partial y^\alpha }\right) \mathcal{X}_\beta +\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \end{equation*} \begin{equation*} \ \ \ \ \left( y^\alpha y^\varepsilon \sigma _\alpha ^i\frac{\partial \widetilde{X}_{\mid _\varepsilon }^\beta }{\partial x^i}-\widetilde{X} ^\alpha \sigma _\alpha ^i\frac{\partial \mathcal{S}^\beta }{\partial x^i}+ \mathcal{S}^\alpha \widetilde{X}_{\mid _\alpha }^\beta -y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\alpha \frac{\partial \mathcal{S}^\beta }{ \partial y^\alpha }+S^\alpha y^\varepsilon \left( \sigma _\varepsilon ^i \frac{\partial ^2\widetilde{X}^\beta }{\partial y^\alpha \partial x^i} -L_{\gamma \varepsilon }^\beta \frac{\partial \widetilde{X}^\gamma }{ \partial y^\alpha }\right) \right) \mathcal{V}_\beta. \end{equation*} We deduce that $\widetilde{X}^\alpha y^\varepsilon L_{\varepsilon \alpha }^\beta +y^\alpha \sigma _\alpha ^i\frac{\partial \widetilde{X}^\beta }{ \partial x^i}-y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\beta =0$ and it results the local expression of a Lie symmetry. 
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ We have to remark that a section $\widetilde{X}=\widetilde{X}^\alpha (x)s_\alpha $ on $E\backslash \{0\}$ is a Lie symmetry if and only if (see also \cite{Pe}) \begin{equation*} y^\alpha y^\varepsilon \sigma _\alpha ^i\frac{\partial \widetilde{X}_{\mid _\varepsilon }^\beta }{\partial x^i}-\widetilde{X}^\alpha \sigma _\alpha ^i\frac{\partial \mathcal{S}^\beta }{\partial x^i}+\mathcal{S}^\alpha \widetilde{X}_{\mid _\alpha }^\beta -y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\alpha \frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }=0, \end{equation*} and it results, by direct computation, that the components $\widetilde{X}^\alpha (x)$ satisfy the equations (23), (24). \begin{definition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is called Newtonoid if $J[\mathcal{S},X]_{\mathcal{T}E}=0.$ \end{definition} In local coordinates we obtain \begin{equation*} J[\mathcal{S},X]_{\mathcal{T}E}=\left( \mathcal{S}(X^\alpha )-Y^\alpha +y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta \right) \mathcal{V}_\alpha , \end{equation*} which yields \begin{equation} Y^\alpha =\mathcal{S}(X^\alpha )+y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta ,\quad X=X^\alpha \mathcal{X}_\alpha +\left( \mathcal{S}(X^\alpha )+y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta \right) \mathcal{V}_\alpha . \end{equation} We remark that a section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a dynamical symmetry if and only if it is a Newtonoid and satisfies the equation (24). The set of Newtonoid sections, denoted $\frak{X}_{\mathcal{S}}$, is given by \begin{equation*} \frak{X}_{\mathcal{S}}=Ker(J\circ \mathcal{L}_{\mathcal{S}})=Im(Id+J\circ \mathcal{L}_{\mathcal{S}}). \end{equation*} In the following we will use the dynamical covariant derivative in order to find the invariant equations of Newtonoid sections and dynamical symmetries on Lie algebroids.
Let $\mathcal{S}$ be a semispray, $\mathcal{N}$ an arbitrary nonlinear connection and $\nabla $ the induced dynamical covariant derivative. We have: \begin{proposition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a Newtonoid if and only if \begin{equation} \mathrm{v}(X)=J(\nabla X), \end{equation} which locally yields \begin{equation*} X=X^\alpha \delta _\alpha +\nabla X^\alpha \mathcal{V}_\alpha, \end{equation*} with $\nabla X^\alpha $ given by formula (18). \end{proposition} \textbf{Proof}. We know that $J\circ \nabla =J\circ \mathcal{L}_{\mathcal{S}}+\mathrm{v}$ and it results $J[\mathcal{S},X]_{\mathcal{T}E}=0$ if and only if $\mathrm{v}(X)=J(\nabla X).$ In local coordinates we obtain \begin{eqnarray*} X &=&X^\alpha \left( \delta _\alpha +\mathcal{N}_\alpha ^\beta \mathcal{V}_\beta \right) +\left( \mathcal{S}(X^\alpha )+y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta \right) \mathcal{V}_\alpha \\ &=&X^\alpha \delta _\alpha +\left( \mathcal{S}(X^\alpha )+X^\beta (\mathcal{N}_\beta ^\alpha +y^\varepsilon L_{\varepsilon \beta }^\alpha )\right) \mathcal{V}_\alpha \\ &=&X^\alpha \delta _\alpha +\nabla X^\alpha \mathcal{V}_\alpha. \end{eqnarray*} \hbox{\rlap{$\sqcap$}$\sqcup$} \begin{proposition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a dynamical symmetry if and only if $X$ is a Newtonoid and \begin{equation} \nabla (J\nabla X)+\Phi (X)=0. \end{equation} \end{proposition} \textbf{Proof}. If $X$ is a dynamical symmetry then $\mathrm{h}[\mathcal{S},X]_{\mathcal{T}E}=\mathrm{v}[\mathcal{S},X]_{\mathcal{T}E}=0$ and composing by $J$ we get $J[\mathcal{S},X]_{\mathcal{T}E}=0$, which means that $X$ is a Newtonoid.
Therefore, $\mathrm{v}[\mathcal{S},X]_{\mathcal{T}E}=\mathrm{v}[\mathcal{S},\mathrm{v}X]_{\mathcal{T}E}+\mathrm{v}[\mathcal{S},\mathrm{h}X]_{\mathcal{T}E}=\nabla (\mathrm{v}X)+\Phi (X)$ and using (26) we get $\nabla (J\nabla X)+\Phi (X)=0.$ \hbox{\rlap{$\sqcap$}$\sqcup$}\\ For $f\in C^\infty (E)$ and $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ we define the product \begin{equation*} f*X=(Id+J\circ \mathcal{L}_{\mathcal{S}})(fX)=fX+fJ[\mathcal{S},X]_{\mathcal{T}E}+\mathcal{S}(f)JX, \end{equation*} and remark that a section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a Newtonoid if and only if \begin{equation*} X=X^\alpha (x,y)*\mathcal{X}_\alpha . \end{equation*} If $X\in \frak{X}_{\mathcal{S}}$ then \begin{equation*} f*X=fX+\mathcal{S}(f)JX. \end{equation*} The next result proves that the canonical nonlinear connection can be determined by these symmetries. \begin{proposition} Let us consider a semispray $\mathcal{S}$, an arbitrary nonlinear connection $\mathcal{N}$ and $\nabla $ the dynamical covariant derivative. The following conditions are equivalent:\\ $i)$ $\nabla $ restricts to $\nabla :\frak{X}_{\mathcal{S}}\rightarrow \frak{X}_{\mathcal{S}}$ and satisfies the Leibniz rule with respect to the $*$ product.\\ $ii)$ $\nabla J=0$,\\ $iii)$ $\mathcal{L}_{\mathcal{S}}J+\mathcal{N}=0,$\\ $iv)$ $\mathcal{N}_\alpha ^\beta =\frac 12\left( -\frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }+y^\varepsilon L_{\alpha \varepsilon }^\beta \right)$. \end{proposition} \textbf{Proof}. For $ii)\Rightarrow i)$ we consider $X\in \frak{X}_{\mathcal{S}}$ and using (26) we have $\mathrm{v}X=J(\nabla X)$ which leads to $\nabla (\mathrm{v}X)=\nabla (J\nabla X)$. It results $(\nabla \mathrm{v})X+\mathrm{v}(\nabla X)=(\nabla J)(\nabla X)+J\nabla (\nabla X)$ and using the relations $\nabla \mathrm{v}=0$ and $\nabla J=0$ we obtain $\mathrm{v}(\nabla X)=J\nabla (\nabla X)$ which implies $\nabla X\in \frak{X}_{\mathcal{S}}$.
For $X\in \frak{X}_{\mathcal{S}}$ we have \begin{equation*} \nabla \left( f*X\right) =\nabla (fX+\mathcal{S}(f)JX)=\mathcal{S}(f)X+f\nabla X+\mathcal{S}^2(f)JX+\mathcal{S}(f)\nabla (JX), \end{equation*} \begin{equation*} \nabla f*X+f*\nabla X=\mathcal{S}(f)X+\mathcal{S}^2(f)JX+f\nabla X+\mathcal{S}(f)J(\nabla X). \end{equation*} But $\nabla (JX)=(\nabla J)X+J(\nabla X)$ and from $\nabla J=0$ we obtain $\nabla (JX)=J(\nabla X)$ which leads to $\nabla \left( f*X\right) =\nabla f*X+f*\nabla X$. For $i)\Rightarrow ii)$ we consider the set $\frak{X}_{\mathcal{S}}\cup \Gamma ^{\mathrm{v}}(\mathcal{T}E\backslash \{0\})$ which is a set of generators for $\Gamma (\mathcal{T}E\backslash \{0\})$. We have $\nabla J(X)=0$ for $X\in \Gamma ^{\mathrm{v}}(\mathcal{T}E\backslash \{0\})$ and for $X\in \frak{X}_{\mathcal{S}}$ using $\nabla \left( f*X\right) =\nabla f*X+f*\nabla X$ it results $\mathcal{S}(f)\nabla (JX)=\mathcal{S}(f)J(\nabla X),$ which implies $\mathcal{S}(f)(\nabla J)X=0,$ for an arbitrary function $f\in C^\infty (E\backslash \{0\})$. Therefore, $\nabla J=0$ on $\frak{X}_{\mathcal{S}}$ which ends the proof. The equivalence of the conditions $ii)$, $iii)$, $iv)$ has been proved in Theorem 1. \hbox{\rlap{$\sqcap$}$\sqcup$} Next, we consider the dynamical covariant derivative $\nabla$ induced by the semispray $\mathcal{S}$, the canonical nonlinear connection $\mathcal{N}=-\mathcal{L}_{\mathcal{S}}J$ and find the invariant equations of dynamical and Lie symmetries. \begin{proposition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a dynamical symmetry if and only if $X$ is a Newtonoid and \begin{equation} \nabla ^2JX+\Phi (X)=0, \end{equation} which locally yields \begin{equation*} \nabla ^2X^\alpha +\mathcal{R}_\beta ^\alpha X^\beta =0. \end{equation*} \end{proposition} \textbf{Proof}. From (20) it results $\nabla J=0$ and using (27) and (17) we get (28).
Next, using (25) and (14), the local components of the vertical section $\nabla ^2JX+\Phi (X)$ are $\nabla ^2X^\alpha +\mathcal{R}_\beta ^\alpha X^\beta$. \hbox{\rlap{$\sqcap$}$\sqcup$} \begin{proposition} A section $\widetilde{X}\in \Gamma (E\backslash \{0\})$ is a Lie symmetry of $\mathcal{S}$ if and only if \begin{equation} \nabla ^2\widetilde{X}^v+\Phi (\widetilde{X}^c)=0. \end{equation} \end{proposition} \textbf{Proof}. Using (28) and the relation $J(\widetilde{X}^c)=\widetilde{X}^v$ we obtain (29). \hbox{\rlap{$\sqcap$}$\sqcup$}\\ Let us consider in the following a regular Lagrangian $L$ on $E$, the Cartan 1-section $\theta_L$, the symplectic structure $\omega_L=d^E\theta_L$, the energy function $E_L$ and the induced canonical semispray $\mathcal{S}$ with the components given by the relation (9). \begin{proposition} If $\widetilde{X}$ is a section on $E$ such that $\mathcal{L}_{\widetilde{X}^c}\theta _L$ is closed and $d^E(\widetilde{X}^cE_L)=0$, then $\widetilde{X}$ is a Lie symmetry of the canonical semispray $\mathcal{S}$ induced by $L$. \end{proposition} \textbf{Proof}. We have \begin{eqnarray*} i_{[\widetilde{X}^c,\mathcal{S}]}\omega _L &=&\mathcal{L}_{\widetilde{X}^c}(i_{\mathcal{S}}\omega _L)-i_{\mathcal{S}}(\mathcal{L}_{\widetilde{X}^c}\omega _L)=-\mathcal{L}_{\widetilde{X}^c}d^EE_L-i_{\mathcal{S}}(\mathcal{L}_{\widetilde{X}^c}d^E\theta _L) \\ &=&-d^E\mathcal{L}_{\widetilde{X}^c}E_L-i_{\mathcal{S}}d^E(\mathcal{L}_{\widetilde{X}^c}\theta _L)=-d^E(\widetilde{X}^cE_L)-i_{\mathcal{S}}d^E(\mathcal{L}_{\widetilde{X}^c}\theta _L)=0.\end{eqnarray*} But $\omega _L$ is a symplectic structure ($L$ is regular) and we get $[\widetilde{X}^c,\mathcal{S}]=0$ which ends the proof.
\hbox{\rlap{$\sqcap$}$\sqcup$} \begin{definition} a) A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is called a Cartan symmetry of the Lagrangian $L$, if $\mathcal{L}_X\omega _L=0$ and $\mathcal{L}_XE_L=0$.\\ b) A function $f\in C^\infty (E)$ is a constant of motion (or a conservation law) for the Lagrangian $L$ if $\mathcal{S}(f)=0$. \end{definition} \begin{proposition} The canonical semispray induced by the regular Lagrangian $L$ is a Cartan symmetry. \end{proposition} \textbf{Proof}. Using the relation $i_{\mathcal{S}}\omega_L=-d^EE_L$ and the skew-symmetry of the symplectic 2-section $\omega _L$ we obtain \[ 0=i_{\mathcal{S}}\omega _L(\mathcal{S})=-d^EE_L(\mathcal{S})=-\mathcal{S}(E_L)=-\mathcal{L}_{\mathcal{S}}E_L. \] Also, from $d^E\omega _L=0$ we get \[ \mathcal{L}_{\mathcal{S}}\omega _L=d^Ei_{\mathcal{S}}\omega _L+i_{\mathcal{S}}d^E\omega _L=-d^E(d^EE_L)=0, \] and it results that the semispray $\mathcal{S}$ is a Cartan symmetry. \hbox{\rlap{$\sqcap$}$\sqcup$} \begin{proposition} A Cartan symmetry $X$ of the Lagrangian $L$ is a dynamical symmetry for the canonical semispray $\mathcal{S}$. \end{proposition} \textbf{Proof}. From the symplectic equation $i_{\mathcal{S}}\omega _L=-d^EE_L$, applying the Lie derivative to both sides, we obtain \begin{equation*} \mathcal{L}_X(i_{\mathcal{S}}\omega _L)=-\mathcal{L}_Xd^EE_L=-d^E\mathcal{L}_XE_L=0. \end{equation*} Also, using the formula $i_{[X,Y]_{\mathcal{T}E}}=\mathcal{L}_X\circ i_Y-i_Y\circ \mathcal{L}_X$ it results \begin{equation*} \mathcal{L}_X(i_{\mathcal{S}}\omega _L)=-i_{[\mathcal{S},X]_{\mathcal{T}E}}\omega _L+i_{\mathcal{S}}\mathcal{L}_X\omega _L=-i_{[\mathcal{S},X]_{\mathcal{T}E}}\omega _L \end{equation*} which yields \begin{equation} i_{[\mathcal{S},X]_{\mathcal{T}E}}\omega _L=0. \end{equation} But $\omega _L$ is a symplectic 2-section and we conclude that $[\mathcal{S},X]_{\mathcal{T}E}=0$, so $X$ is a dynamical symmetry.
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ Since Lie and exterior derivatives commute, we obtain \[ d^E\mathcal{L}_X\theta _L=\mathcal{L}_Xd^E\theta _L=\mathcal{L}_X\omega _L=0. \] It results that, for a Cartan symmetry, the 1-section $\mathcal{L}_X\theta _L$ is a closed 1-section. \begin{definition} A Cartan symmetry $X$ is said to be an exact Cartan symmetry if the 1-section $\mathcal{L}_X\theta _L$ is exact. \end{definition} The next result proves that there is a one-to-one correspondence between exact Cartan symmetries and conservation laws. Note that if $X$ is an exact Cartan symmetry, then there is a function $f\in C^\infty (E)$ such that $\mathcal{L}_X\theta _L=d^{E}f$. \begin{proposition} If $X$ is an exact Cartan symmetry, then $f-\theta_L(X)$ is a conservation law for the Lagrangian $L$. Conversely, if $f\in C^\infty (E)$ is a conservation law for $L$, then the unique solution $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ of the equation $i_X\omega_L=-d^Ef$ is an exact Cartan symmetry. \end{proposition} \textbf{Proof}. We have $\mathcal{S}(f-\theta _L(X)) =d^E(f-\theta _L(X))(\mathcal{S})=\left( \mathcal{L}_X\theta _L-d^Ei_X(\theta _L)\right) (\mathcal{S})=i_Xd^E\theta _L(\mathcal{S}) =i_X\omega _L(\mathcal{S})=-i_{\mathcal{S}}\omega _L(X)=d^EE_L(X)=0,$ and it results that $f-\theta _L(X)$ is a conservation law for the dynamics associated to the regular Lagrangian $L$. Conversely, if $X$ is the solution of the equation $i_X\omega_L=-d^Ef$ then $\mathcal{L}_X\theta_L=i_Xd^E\theta_L+d^Ei_X\theta_L=d^E\left(\theta_L(X)-f\right)$ is an exact 1-section. Consequently, $0=d^E\mathcal{L}_X\theta _L=\mathcal{L}_Xd^E\theta _L=\mathcal{L}_X\omega _L.$ Also, $f$ is a conservation law, and we have $0=\mathcal{S}(f)=d^Ef(\mathcal{S})=-i_X\omega _L(\mathcal{S})=i_{\mathcal{S}}\omega _L(X)=-d^EE_L(X)=-X(E_L).$ Therefore, we obtain $\mathcal{L}_XE_L=0$ and $X$ is an exact Cartan symmetry.
\hbox{\rlap{$\sqcap$}$\sqcup$} \\We have to mention that the Noether type theorems for Lagrangian systems on Lie algebroids are studied in \cite{Ca, Ma2} and Jacobi sections for second order differential equations on Lie algebroids are investigated in \cite{Ca1}. \quad \subsection{Example} Next, we consider an example from optimal control theory and prove that the framework of Lie algebroids is more useful than the tangent bundle in order to calculate some symmetries of the dynamics induced by a Lagrangian function. Let us consider the following distributional system in $\Bbb{R}^3$ (driftless control affine system) \cite{Po2'}: \[ \left\{ \begin{array}{l} \dot x^1=u^1+u^2x^1 \\ \dot x^2=u^2x^2 \\ \dot x^3=u^2 \end{array} \right. \] Let $x_0$ and $x_1$ be two points in $\Bbb{R}^3$. The optimal control problem consists of finding the trajectories of our control system which connect $x_0$ and $x_1$ and minimize the cost \[ {\min }\int_0^T\mathcal{L}(u(t))dt,\ \mathcal{L} (u)=\frac 12\left( (u^1)^2+(u^2)^2\right) ,\quad x(0)=x_0,\ x(T)=x_1, \] where $\dot x^i=\frac{dx^i}{dt}$ and $u^1,u^2$ are control variables. From the system of differential equations we obtain $u^2=\dot x^3$, $u^1=\dot x^1-\dot x^3x^1$. The Lagrangian function on the tangent bundle $T\Bbb{R}^3$ has the form \[ \mathcal{L}=\frac 12\left( (\dot x^1-\dot x^3x^1)^2+(\dot x^3)^2\right) , \] with the constraint \[ \dot x^2=\dot x^3x^2. \] Then, using the Lagrange multiplier $\lambda =\lambda (t)$, we obtain the total Lagrangian (including the constraints) given by \[ L(x,\dot x)=\mathcal{L}(x,\dot x)+\lambda \left( \dot x^2-\dot x^3x^2\right) =\frac 12\left( (\dot x^1-\dot x^3x^1)^2+(\dot x^3)^2\right) +\lambda \left( \dot x^2-\dot x^3x^2\right) . \] We observe that the Hessian matrix of $L$ is singular, so $L$ is a degenerate (not regular) Lagrangian. The corresponding Euler-Lagrange equations lead to a complicated system of second order differential equations.
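The degeneracy of $L$ can also be checked directly: with the multiplier $\lambda$ treated as independent of the velocities, the row of the velocity Hessian of $L$ corresponding to $\dot x^2$ vanishes identically, so the determinant is zero. A minimal numerical sketch (pure Python; the sample points are arbitrary choices, not from the paper):

```python
def hessian_L(x1):
    # Hessian of L = (1/2)*((dx1 - dx3*x1)**2 + dx3**2) + lam*(dx2 - dx3*x2)
    # with respect to the velocities (dx1, dx2, dx3); the multiplier lam does
    # not depend on the velocities, so the dx2-row vanishes identically.
    return [[1.0, 0.0, -x1],
            [0.0, 0.0, 0.0],
            [-x1, 0.0, 1.0 + x1 * x1]]

def det3(m):
    # Laplace expansion of a 3x3 determinant along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# The determinant vanishes for every value of x1: L is degenerate.
for x1 in (-2.0, 0.0, 1.5, 7.0):
    assert det3(hessian_L(x1)) == 0.0
```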
Moreover, because the Lagrangian is not regular, we cannot obtain the explicit coefficients of the semispray $\mathcal{S}$ from the equation $i_{\mathcal{S}}\omega _L=-dE_L$ and it is difficult to study the symmetries of SODE in this case.\\ For this reason, we will use a different approach, considering the framework of Lie algebroids. The system can be written in the following form \begin{eqnarray*} \left. \dot x=u^1X_1+u^2X_2,\quad x=\left( \begin{array}{c} x^1 \\ x^2 \\ x^3 \end{array} \right) \in \Bbb{R}^3,\ X_1=\left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) ,\ X_2=\left( \begin{array}{c} x^1 \\ x^2 \\ 1 \end{array} \right) .\right. \end{eqnarray*} The associated distribution $\Delta =span\{X_1,X_2\}$ has constant rank $2$ and is holonomic, because \[ X_1=\frac \partial {\partial x^1},\quad X_2=x^1\frac \partial {\partial x^1}+x^2\frac \partial {\partial x^2}+\frac \partial {\partial x^3},\quad [X_1,X_2]=X_1. \] From the Frobenius theorem, the distribution $\Delta $ is integrable; it determines a foliation of $\Bbb{R}^3$ and two points can be joined by an optimal trajectory if and only if they are situated on the same leaf (see \cite{Po2'}). In order to apply the theory of Lie algebroids, we take the Lie algebroid to be the distribution itself, $E=\Delta $, with the anchor $\sigma :E\rightarrow T\Bbb{R}^3$ the inclusion, whose components are \[ \sigma _\alpha ^i=\left( \begin{array}{cc} 1 & x^1 \\ 0 & x^2 \\ 0 & 1 \end{array} \right) . \] From the relation \[ \lbrack X_\alpha ,X_\beta ]=L_{\alpha \beta }^\gamma X_\gamma ,\quad \alpha ,\beta ,\gamma =1,2, \] we obtain the non-zero structure functions \[ L_{12}^1=1,\ L_{21}^1=-1. \] The components of the semispray from (9) are given by \[ \mathcal{S}^1=-u^1u^2,\ \mathcal{S}^2=(u^1)^2. \] The functions $\mathcal{S}^\alpha $ are homogeneous of degree 2 in $u$ and it results that $\mathcal{S}$ is a spray.
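The structure relation $[X_1,X_2]=X_1$ can be verified numerically via the coordinate formula $[X,Y]^i=X^j\,\partial Y^i/\partial x^j-Y^j\,\partial X^i/\partial x^j$ with the partial derivatives approximated by central differences; this is a sketch, and the sample point and step size are arbitrary choices:

```python
def lie_bracket(X, Y, p, h=1e-6):
    # [X, Y]^i = X^j dY^i/dx^j - Y^j dX^i/dx^j, with the partial derivatives
    # approximated by central differences at the point p.
    n = len(p)
    Xp, Yp = X(p), Y(p)
    bracket = []
    for i in range(n):
        s = 0.0
        for j in range(n):
            pp = list(p); pm = list(p)
            pp[j] += h; pm[j] -= h
            dY = (Y(pp)[i] - Y(pm)[i]) / (2 * h)
            dX = (X(pp)[i] - X(pm)[i]) / (2 * h)
            s += Xp[j] * dY - Yp[j] * dX
        bracket.append(s)
    return bracket

# The two generators of the distribution Delta.
X1 = lambda p: [1.0, 0.0, 0.0]
X2 = lambda p: [p[0], p[1], 1.0]

# At an arbitrary point, [X1, X2] equals X1 = (1, 0, 0).
b = lie_bracket(X1, X2, [0.3, -1.2, 2.0])
assert max(abs(bi - ei) for bi, ei in zip(b, [1.0, 0.0, 0.0])) < 1e-6
```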
By straightforward computation we obtain the expression of the canonical spray induced by $\mathcal{L}$ \[ \mathcal{S}(x,u)=(u^1+u^2x^1)\frac \partial {\partial x^1}+u^2x^2\frac \partial {\partial x^2}+u^2\frac \partial {\partial x^3}-u^1u^2\frac \partial {\partial u^1}+(u^1)^2\frac \partial {\partial u^2}. \] From Proposition 17 it results that $\mathcal{S}(x,u)$ is a Cartan symmetry of the dynamics associated to the regular Lagrangian $\mathcal{L}$ on Lie algebroids.\\ The coefficients of the canonical nonlinear connection $\mathcal{N}=-\mathcal{L}_{\mathcal{S}}J$ are given by \[ \mathcal{N}_1^1=u^2,\ \mathcal{N}_2^1=0,\ \mathcal{N}_1^2=u^1,\ \mathcal{N}_2^2=0, \] and the components of the Jacobi endomorphism from (21) have the form \[ \mathcal{R}_1^1=-(u^2)^2,\ \mathcal{R}_1^2=-u^1u^2,\ \mathcal{R}_2^1=u^1u^2,\ \mathcal{R}_2^2=(u^1)^2. \] Also, the non-zero coefficients of the curvature (11) of $\mathcal{N}$ are \[ \mathcal{R}_{12}^1=u^2,\ \mathcal{R}_{12}^2=u^1,\ \mathcal{R}_{21}^1=-u^2,\ \mathcal{R}_{21}^2=-u^1, \] and we obtain that the Jacobi endomorphism is the contraction with $\mathcal{S}$ of the curvature of $\mathcal{N}$, or locally $\mathcal{R}_\beta ^\alpha =\mathcal{R}_{\varepsilon \beta }^\alpha u^\varepsilon $.\\ The Euler-Lagrange equations on Lie algebroids, given by (see \cite{We}) \[ \frac{dx^i}{dt}=\sigma _\alpha ^iu^\alpha ,\quad \frac d{dt}\left( \frac{\partial \mathcal{L}}{\partial u^\alpha }\right) =\sigma _\alpha ^i\frac{\partial \mathcal{L}}{\partial x^i}-L_{\alpha \beta }^\varepsilon u^\beta \frac{\partial \mathcal{L}}{\partial u^\varepsilon }, \] lead to the following differential equations \[ \dot u^1=-u^1u^2,\quad \dot u^2=(u^1)^2, \] which can be written in the form \[ \frac{dx^i}{dt}=\sigma _\alpha ^iu^\alpha ,\quad \frac{du^\alpha }{dt}=\mathcal{S}^\alpha (x,u), \] and give the integral curves of $\mathcal{S}$.
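Along the integral curves of $\mathcal{S}$ the energy $E_{\mathcal{L}}=\frac 12\left( (u^1)^2+(u^2)^2\right)$ is a conservation law, since $u^1\dot u^1+u^2\dot u^2=-(u^1)^2u^2+(u^1)^2u^2=0$. A sketch checking this numerically with a classical fourth-order Runge-Kutta step (the initial data and step size are arbitrary choices, not from the paper):

```python
def S(state):
    # State (x1, x2, x3, u1, u2); the integral curves of the spray satisfy
    # dx^i/dt = sigma^i_alpha u^alpha and du^alpha/dt = S^alpha(x, u),
    # with S^1 = -u1*u2 and S^2 = (u1)^2.
    x1, x2, x3, u1, u2 = state
    return [u1 + u2 * x1, u2 * x2, u2, -u1 * u2, u1 * u1]

def rk4_step(f, s, dt):
    # One classical Runge-Kutta step of size dt.
    k1 = f(s)
    k2 = f([si + 0.5 * dt * ki for si, ki in zip(s, k1)])
    k3 = f([si + 0.5 * dt * ki for si, ki in zip(s, k2)])
    k4 = f([si + dt * ki for si, ki in zip(s, k3)])
    return [si + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def energy(s):
    # E_L = (1/2)*((u1)^2 + (u2)^2), evaluated on the state.
    return 0.5 * (s[3] ** 2 + s[4] ** 2)

state = [0.1, 1.0, 0.0, 0.8, 0.6]   # arbitrary initial point (x, u)
E0 = energy(state)
for _ in range(1000):
    state = rk4_step(S, state, 0.01)
# The energy is conserved up to the integrator's truncation error.
assert abs(energy(state) - E0) < 1e-5
```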
The Cartan 1-section $\theta _{\mathcal{L}}$ has the form \[ \theta _{\mathcal{L}}=u^1dx^1+u^2(x^1dx^1+x^2dx^2+dx^3), \] and the symplectic structure is $\omega _{\mathcal{L}}=d^E\theta _{\mathcal{L}}$. The energy of the Lagrangian $\mathcal{L}$ is \[ E_{\mathcal{L}}=\frac 12\left( (u^1)^2+(u^2)^2\right). \] For the optimal solution of the control system (using the framework of Lie algebroids) see \cite{Po2'}.\\ \quad \\ \textbf{Conclusions}. The main purpose of this paper is to study the symmetries of SODE on Lie algebroids and the relations between them, using the dynamical covariant derivative and the Jacobi endomorphism. The existence of a semispray $\mathcal{S}$ together with an arbitrary nonlinear connection $\mathcal{N}$ defines a dynamical covariant derivative and the Jacobi endomorphism. Let us remark that at this point we do not have any relation between $\mathcal{S}$ and the nonlinear connection $\mathcal{N}.$ Such a relation is obtained by considering the compatibility condition between the dynamical covariant derivative and the tangent structure, $\nabla J=0$, which fixes the canonical nonlinear connection $\mathcal{N}=-\mathcal{L}_{\mathcal{S}}J$. This canonical nonlinear connection depends only on the semispray. In this case we have the decomposition $\nabla =\mathcal{L}_{\mathcal{S}}+\Bbb{F}+\mathcal{J}-\Phi $ which can be compared with the tangent case from \cite{Bu3, Ma}. Also, in the case of homogeneous SODE (sprays), the dynamical covariant derivative coincides with the Berwald linear connection and the Jacobi endomorphism is the contraction with $\mathcal{S}$ of the curvature of the nonlinear connection. We study the dynamical symmetries, Lie symmetries, Newtonoid sections and Cartan symmetries on Lie algebroids and find their invariant equations with the help of the dynamical covariant derivative and the Jacobi endomorphism.
Finally, we give an example from optimal control theory which proves that the framework of Lie algebroids is more useful than the tangent bundle in order to find the symmetries of the dynamics induced by a Lagrangian function. For further developments one can study the symmetries using the $k$-symplectic formalism on Lie algebroids given in \cite{Le2}.\\ \textbf{Acknowledgments}. The author wishes to express his thanks to the referees for useful comments and suggestions concerning this paper. Author's address: \\University of Craiova,\\Dept. of Statistics and Informatics,\\ 13, Al. I. Cuza, st., Craiova 200585, Romania\\e-mail: [email protected]; [email protected] \end{document}
\begin{document} \makeatletter \def$\Box${$\Box$} \begin{center} \vskip 1cm {\LARGE\bf Recurrence Divisibility Tests} \vskip 1cm {\large Mehdi Hassani} \vskip .5cm Department of Mathematics\\ Institute for Advanced Studies in Basic Sciences\\ Zanjan, Iran\\ \href{mailto:[email protected]}{\tt [email protected]} \end{center} \newtheorem{thm}{Theorem} \newtheorem{prop}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{cor}{Corollary} \newtheorem{prob}{Note and Problem} \def\framebox(5.2,6.2){}{\framebox(5.2,6.2){}} \def\dashbox{2.71}(3.5,9.00){}{\dashbox{2.71}(3.5,9.00){}} \def\rule{5.25\unitlength}{9.75\unitlength}{\rule{5.25\unitlength}{9.75\unitlength}} \def\rule{8.00\unitlength}{12.00\unitlength}{\rule{8.00\unitlength}{12.00\unitlength}} \def\qed{\hbox{\hskip 6pt\vrule width 7pt height11pt depth1pt\hskip 3pt} } \newenvironment{proof}{\trivlist\item[\hskip\labelsep{\bf Proof}:]}{ $\framebox(5.2,6.2){}$ \endtrivlist} \newcommand{\COM}[2]{{#1\choose#2}} \thispagestyle{empty} \null \addtolength{\textheight}{1cm} \begin{abstract} In this note, we introduce some recurrence divisibility tests for all primes other than 2 and 5. \end{abstract} \hrule \noindent 2000 {\it Mathematics Subject Classification}: 13A05, 11A41, 11B37, 11D04.\\ \noindent \emph{Keywords: divisibility, primes, recurrence, linear diophantine equation.} \hrule \vskip .2cm\hspace{-8mm} For $n\in{\mathbb N}$, we define $ud(n)$ to be the unit digit of $n$. As is well known, we have the following divisibility tests for 2 and 5: $$ 2|n\Leftrightarrow 2|ud(n), $$ and $$ 5|n\Leftrightarrow ud(n)\in\{0,5\}. $$ In this note, we are going to find some \textit{recurrence divisibility tests} for the primes other than 2 and 5. The reason for the name ``recurrence'' will soon become clear. \begin{thm} For every prime $p\neq 2, 5$, there exists a unique $t(p)\in\mathbb{Z}_p$ such that $$ p|n\Leftrightarrow p\Big|\Big\lfloor\frac{n}{10}\Big\rfloor-t(p)ud(n).
$$ Also, we have $$ t(p)=\left\{ \begin{array}{ll} \frac{p-1}{10} & {\rm if~} ud(p)=1,\\ \frac{7p-1}{10} & {\rm if~} ud(p)=3,\\ \frac{3p-1}{10} & {\rm if~} ud(p)=7,\\ \frac{9p-1}{10} & {\rm if~} ud(p)=9. \end{array} \right. $$ \end{thm} \begin{proof} Suppose $p\neq 2, 5$ is a prime. Since gcd$(10,p)=1$, the diophantine equation $10t+1=kp$ with $0\leq t<p$ has a unique solution modulo $p$, and clearly $kt>0$. Let $t=t(p)$. So, $p|10t(p)+1$ and $1\leq t(p)\leq p-1$.\\ Now, suppose $p|n$; consequently $p|t(p)n$. Also, we have $p|10t(p)+1$. Therefore, by considering $ud(n)=n-10\lfloor\frac{n}{10}\rfloor$, we have $p|\lfloor\frac{n}{10}\rfloor-t(p)ud(n)$.\\ Conversely, suppose $p|\lfloor\frac{n}{10}\rfloor-t(p)ud(n)$; equivalently, $p|(10t(p)+1)\lfloor\frac{n}{10}\rfloor-t(p)n$. Since $p|10t(p)+1$, we obtain $p|t(p)n$, and since $1\leq t(p)\leq p-1$, we have gcd$(p,t(p))=1$; therefore $p|n$.\\ Now, we compute the value of $t(p)$. To do this, we note that since $p\neq 2, 5$, we have $ud(p)\in\{1,3,7,9\}$. If we let $k=k(p)$, we have the relation $10t(p)+1=k(p)p$ with $1\leq t(p)\leq p-1$. So, $ud(pk(p))=1$ always holds. Also, the condition $1\leq t(p)\leq p-1$ yields $\frac{11}{p}\leq k(p)\leq 10-\frac{9}{p}$. So, for all primes $p\neq 2, 5$ we have $1\leq k(p)\leq 9$. If $ud(p)=1$, then since $ud(pk(p))=1$ and $1\leq k(p)\leq 9$, we obtain $k(p)=1$ and so $t(p)=\frac{p-1}{10}$. The other cases are similar. This completes the proof. \end{proof} \textbf{Note 1.} We call this divisibility test ``recurrence'' because for all primes $p\neq 2, 5$ we have $$ \lfloor\log_{10}n\rfloor= 1+\Big\lfloor\log_{10}\Big(\Big\lfloor\frac{n}{10}\Big\rfloor-t(p)ud(n)\Big)\Big\rfloor. 
$$ That is, the test works by cancelling the digits of the given number one at a time.\\\\ \textbf{Note 2.} Similar divisibility tests can be obtained which work by cancelling $m$ digits at a time.\\\\ \textbf{Note 3.} Analogous tests can be derived for every $n\in{\mathbb N}$ with $2\nmid n$ and $5\nmid n$.\\\\ \textbf{Acknowledgements.} I consider it my duty to thank Mr. H. Osanloo, who pointed out recurrence divisibility tests in some special cases to me. \end{document}
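The theorem yields a terminating digit-cancelling procedure. As an illustration, here is a minimal Python sketch (ours; the function names are hypothetical, and `pow(10, -1, p)` computes the inverse of 10 modulo $p$):

```python
def t_of(p):
    # t(p) is the unique residue with 10*t(p) + 1 ≡ 0 (mod p), 1 <= t(p) <= p-1
    return (-pow(10, -1, p)) % p

def divides(p, n):
    # Recurrence test: p | n  <=>  p | floor(n/10) - t(p)*ud(n).
    # Each step removes one digit and preserves divisibility by p,
    # so the final small remainder can be checked directly.
    t = t_of(p)
    while n >= 10:
        n = n // 10 - t * (n % 10)
    return n % p == 0
```

For example, $t(7)=2$, and testing $7\,|\,91$ reduces $91\to 9-2\cdot1=7$, which is divisible by 7.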
\begin{document} \title{Wave packet methods in cavity QED} \author{Jonas Larson} \address{ICFO-Institut de Ci\`{e}ncies Fot\`{o}niques, E-08860 Castelldefels, Barcelona, Spain} \ead{[email protected]} \begin{abstract} The Jaynes-Cummings model, with and without the rotating wave approximation, is expressed in the conjugate variable representation and solved numerically by wave packet propagation. Both cases are then cast into systems of two coupled harmonic oscillators, reminiscent of coupled bound electronic potential curves of diatomic molecules. Using the knowledge of such models, this approach to the problem gives new insight into the dynamics. The effect of the rotating wave approximation is discussed. The collapse-revival phenomenon, in particular, is analyzed in a non-standard manner. Extensions of the method are briefly mentioned in terms of a three-level atom and the Dicke model. \end{abstract} \section{Introduction} The main concepts of a wave packet approach to cavity quantum electrodynamics (QED) were put forward in \cite{oldwine}. The idea is to formulate the model Hamiltonian, either the {\it Jaynes-Cummings} (JC) model or the {\it Rabi} model \cite{jc,rabi}, in terms of the quadrature operators for the field rather than the commonly used boson ladder operators. The quadrature operators obey the regular conjugate commutator relations, like position and momentum of a particle. As such, the problem can be viewed as a wave packet evolving on two coupled and bound {\it potential curves} (arising due to the two-level structure of the considered atom). The state of the system is, of course, embedded in the wave packet: roughly speaking, the potential curves determine the internal state of the two-level atom and the vibrational states of the wave packet correspond to the field mode state. Thus, once the wave packet is obtained any quantity is easily calculated. 
The JC and Rabi models are related through the {\it rotating wave approximation} (RWA), in which rapidly oscillating terms are neglected in the Rabi model to give the JC one. Correspondingly, the JC model is exactly solvable and the physical quantities of interest are in general represented by infinite sums deriving from the quantized nature of the cavity field. These analytical results are valid only in the regime where the RWA is justified, and beyond this regime the Rabi model must be considered. In the conjugate variable representation, both models exhibit an {\it avoided crossing} between the two potential curves, crucially affecting their dynamics. Surprisingly, especially around the avoided crossing, the solvable JC model shows a non-intuitive wave packet dynamics, while the Rabi model displays a more expected evolution. In this paper we extend some of the results of \cite{oldwine} and describe how the method can be generalized to multi-level systems. Explicitly, we deepen the analysis of the phenomenon of collapse-revivals and discuss the differences in the evolution between the two models. The approach to collapse-revivals in the JC and Rabi models is somewhat different from the one typically used for wave packet evolution in molecular and chemical physics. This is thoroughly considered and the relation between the two viewpoints is sorted out. In particular, what is characterized by the {\it revival time} $T_{rev}$ for the JC model is labeled the {\it classical period} in wave packet dynamics. Thus, the time scale set by $T_{rev}$ can become very long compared to what one would expect from the classical period, which comes about due to the internal two-level structure of the model. Unlike \cite{oldwine}, this paper briefly considers multi-level systems, that is, when the number of internal states exceeds two. We first consider a simple example of a three-level $\Lambda$-atom coupled to a single cavity mode, which serves as a prototype of how to extend the theory. 
More interesting is the following system of $N$ two-level atoms coupled to a quantized field, namely the {\it Dicke} model \cite{dicke}. Here, the proper basis for the atomic subsystem is a collective one, reminiscent of angular momentum states. Within this basis, the adiabatic diagonalization is straightforward and the $2^N$ potential curves are regained. An alternative approach is to use the Holstein-Primakoff representation \cite{hp}, in which the $2^N$ atomic (fermionic) degrees of freedom are replaced by a single bosonic degree of freedom. In that case, the coupled potential curves are represented by one potential surface in 2-D. We proceed as follows. In the next section \ref{sec2} we introduce the model systems: the relation between the two models is discussed in \ref{ssec2a}, and both models are given in the conjugate variable representation in \ref{ssec2b}. In this subsection we also mention how one may derive a semi-classical model that describes the population transfer between the two potential curves as the wave packet traverses the crossing. A deeper insight into the coupled dynamics is gained from the adiabatic representation, presented in \ref{ssec2c}. The following section \ref{sec3} considers the non-intuitive evolution of the JC model compared to the Rabi model, while section \ref{sec4} studies the collapse-revival phenomenon in detail, emphasizing parallels between these two models and other wave packet systems. Section \ref{sec5} is devoted to generalizations of the method to multi-level systems. Firstly, in \ref{ssec5a} the idea is sketched using a simple three-level system, and in \ref{ssec5b} we consider the Dicke model. Finally, section \ref{sec6} gives a summary and outlook. 
\section{The model system}\label{sec2} \subsection{Relation between the Rabi and the Jaynes-Cummings models}\label{ssec2a} Most cavity QED experiments are well described by the Jaynes-Cummings model, in which single modes and atomic transitions are isolated and coherently coupled. This is achievable due to the strong atom-field coupling and the use of high-$Q$ cavities and long-lived atomic states, such that losses can be discarded over typical interaction periods. Within the dipole approximation, a microscopic derivation leads to the {\it Rabi model} defined by the Hamiltonian \begin{equation}\label{hrabi} H'_{Rabi}=\hbar\omega \left(a^\dagger a+\frac{1}{2}\right)+\frac{\hbar\Omega'}{2}\sigma_z+\hbar g'_0\left(\sigma^++\sigma^-\right)\left(a^\dagger+a\right). \end{equation} Here, $a^\dagger$ ($a$) is the creation (annihilation) operator for the field mode; $a^\dagger|n\rangle=\sqrt{n+1}|n+1\rangle$ ($a|n\rangle=\sqrt{n}|n-1\rangle$), the sigma operators are the standard Pauli matrices acting on the two-level atom; $\sigma_z|\pm\rangle=\pm|\pm\rangle$, $\sigma^\pm|\mp\rangle=|\pm\rangle$, $\sigma_x=\sigma^++\sigma^-$ and $\sigma_y=-i\left(\sigma^+-\sigma^-\right)$, $\omega$ ($\Omega'$) is the field (atomic transition) frequency and $g'_0$ is the effective atom-field coupling. Before proceeding, we define characteristic time and energy scales by $\omega^{-1}$ and $\hbar\omega$ respectively, and hence introduce the dimensionless variables $H_{Rabi}=H'_{Rabi}/\hbar\omega$, $\Omega=\Omega'/\omega$ and $g_0=g'_0/\omega$. The interaction in (\ref{hrabi}) is built up from four terms: $\sigma^+a^\dagger$ and $\sigma^-a$, corresponding to simultaneous excitation/deexcitation of the atom and the field, and $\sigma^+a$ and $a^\dagger\sigma^-$, originating from excitation of the atom by absorption of one photon and vice versa. While the first terms are appropriately called {\it non-energy conserving terms}, the latter are termed {\it energy conserving terms}. 
In a rotating frame with respect to the first two terms of (\ref{hrabi}), the interaction terms precess with either the frequency $|\Omega+1|$ or $|\Omega-1|$, and provided \begin{equation}\label{rwacon} |\Omega-1|\ll|\Omega+1| \end{equation} the rapidly oscillating terms may be left out. In this RWA limit, the Rabi Hamiltonian relaxes to the acclaimed Jaynes-Cummings model \begin{equation}\label{hjc0} H_{JC}=a^\dagger a+\frac{1}{2}+\frac{\Omega}{2}\sigma_z+g_0\left(\sigma^+a+\sigma^-a^\dagger\right). \end{equation} The applicability of the RWA is not solely determined by the requirement (\ref{rwacon}), but also by the ratio $g_0/\Omega$. For large values of $g_0/\Omega$, the typical time scale of the interaction exceeds the one of free atomic evolution and the time-accumulated error arising from neglecting the non-energy conserving terms becomes important. Thus, apart from the condition (\ref{rwacon}) between the atomic transition and field frequencies, the validity of the RWA necessitates that $g_0<\Omega$ \cite{com1}. Within the RWA, the number of excitations $N=a^\dagger a+\frac{1}{2}\sigma_z$ is a conserved quantity, and taking this symmetry into account the JC model is readily solvable with eigenstates \begin{equation}\label{eigs} \begin{array}{l} \displaystyle{|E_n\rangle_+=\cos\left(\frac{\theta}{2}\right)|+,n-1\rangle+\sin\left(\frac{\theta}{2}\right)|-,n\rangle,}\\ \\ \displaystyle{|E_n\rangle_-=\sin\left(\frac{\theta}{2}\right)|+,n-1\rangle-\cos\left(\frac{\theta}{2}\right)|-,n\rangle,} \end{array} \end{equation} where \begin{equation} \tan(2\theta)=\frac{2g_0\sqrt{n}}{\Delta} \end{equation} and $\Delta=\Omega-1$. The corresponding eigenvalues read \begin{equation} E_n^\pm=n\pm\sqrt{\frac{\Delta^2}{4}+g_0^2n}, \end{equation} together with the uncoupled ground state $|E_0\rangle=|-,0\rangle$ with $E_0=\frac{1-\Omega}{2}$. 
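The spectrum is easy to check numerically. The following sketch (ours; numpy assumed) diagonalizes the $2\times2$ block of $H_{JC}$ spanned by $\{|+,n-1\rangle,|-,n\rangle\}$ and verifies the level splitting $E_n^+-E_n^-=2\sqrt{\Delta^2/4+g_0^2n}$, a quantity independent of the zero-point convention:

```python
import numpy as np

def jc_block(n, Omega, g0):
    """2x2 block of H_JC on span{|+,n-1>, |-,n>} (units hbar*omega = 1)."""
    return np.array([[n - 0.5 + Omega / 2, g0 * np.sqrt(n)],
                     [g0 * np.sqrt(n),     n + 0.5 - Omega / 2]])

Omega, g0 = 1.2, 0.3
Delta = Omega - 1.0
for n in range(1, 6):
    e_minus, e_plus = np.linalg.eigvalsh(jc_block(n, Omega, g0))
    # exact splitting of the dressed-state doublet
    splitting = 2 * np.sqrt(Delta**2 / 4 + g0**2 * n)
    assert abs((e_plus - e_minus) - splitting) < 1e-12
    # the doublet is centered at the excitation number n
    assert abs((e_plus + e_minus) / 2 - n) < 1e-12
```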
\subsection{Conjugate variable representation}\label{ssec2b} The algebraic approach applied to the models presented in the previous section is, in many cases, preferable to other methods. In terms of the Rabi Hamiltonian, no exact analytical solutions exist and approximate concepts have been developed to find the required quantities in certain parameter regimes \cite{oldwine}. For example, this allows for perturbation expansions or truncation of continued fraction solutions. Nonetheless, the validity of the results is restricted to specific parameters, and to go beyond these one must consider numerical approaches. The procedure used here adopts the $x$-representation, in which the boson operators define the conjugate variables \begin{equation} p=i\frac{1}{\sqrt{2}}\left(a^\dagger-a\right),\hspace{1cm}x=\frac{1}{\sqrt{2}}\left(a^\dagger+a\right) \end{equation} obeying $[x,p]=i$. In this representation, the Hamiltonians (\ref{hrabi}) and (\ref{hjc0}) become \begin{equation}\label{conrabi} \displaystyle{H_{Rabi}=\frac{p^2}{2}+\frac{x^2}{2}+\left[\begin{array}{cc}\displaystyle{\frac{\Omega}{2}} & g_0\sqrt{2}x \\ g_0\sqrt{2}x & -\displaystyle{\frac{\Omega}{2}}\end{array}\right]}, \end{equation} \begin{equation}\label{conjc} \displaystyle{H_{JC}=\frac{p^2}{2}+\frac{x^2}{2}+\left[\begin{array}{cc}\displaystyle{\frac{\Omega}{2}} & \displaystyle{\frac{g_0}{\sqrt{2}}\left(x+ip\right)} \\ \displaystyle{\frac{g_0}{\sqrt{2}}\left(x-ip\right)} & -\displaystyle{\frac{\Omega}{2}}\end{array}\right]}. 
\end{equation} It is convenient to unitarily rotate $H_{Rabi}$ and $H_{JC}$ by $U=\frac{1}{\sqrt{2}}(\sigma_x+\sigma_z)$, which gives the transformed Hamiltonians \begin{equation} \tilde{H}_{Rabi}=\frac{p^2}{2}+\left[\begin{array}{cc}V_h\left(x+\sqrt{2}g_0\right) & \displaystyle{\frac{\Omega}{2}} \\ \displaystyle{\frac{\Omega}{2}} & V_h\left(x-\sqrt{2}g_0\right)\end{array}\right]-g_0^2, \end{equation} \begin{equation} \tilde{H}_{JC}=\frac{p^2}{2}+\left[\begin{array}{cc}V_h\left(x+\displaystyle{\frac{g_0}{\sqrt{2}}}\right) & \displaystyle{\frac{\Omega}{2}-i\frac{g_0}{\sqrt{2}}p} \\ \displaystyle{\frac{\Omega}{2}+i{\frac{g_0}{\sqrt{2}}p}} & V_h\left(x-\displaystyle{\frac{g_0}{\sqrt{2}}}\right)\end{array}\right]-\frac{g_0^2}{4}, \end{equation} where $V_h(x)=x^2/2$. In the current nomenclature, both models are identified as two displaced coupled harmonic oscillators, in which, however, the amount of displacement and the character of the couplings are markedly different. For $x=0$ the two potential curves possess an avoided crossing, and the dynamics around such near-degenerate points has been thoroughly analyzed in studies of excited diatomic molecules \cite{barry}. Typically, the crucial part of the evolution occurs when the wave packet traverses an avoided crossing. To a good approximation the potentials can be linearized in the vicinity of the crossing. Considering the wave packet as a classical point particle following the classical equations of motion, the population transfer between the two levels while passing the crossing in the Rabi model can be estimated by the {\it Landau-Zener formula} \cite{oldwine,LZ1,LZ2} \begin{equation}\label{lzform} P_{LZ}=1-\exp\left(-\frac{\sqrt{2}\pi\Omega^2}{8g_0v}\right). \end{equation} Here $v$ is the classical velocity of the wave packet at the crossing, which, of course, depends on the initial state. 
Equation (\ref{lzform}) directly gives some indications of the system behaviour: for a large velocity $v$ only a small fraction is transferred to the opposite state, and the same holds for a modest parameter $\Omega$, which couples the two levels. In order for the wave packet to traverse the crossing region, its initial mean position $x_i$ must satisfy $x_i<0$ or $x_i^2>2g_0^2$ for an initial state starting out ``on'' the right shifted oscillator, and $x_i>0$ or $x_i^2>2g_0^2$ for the left oscillator. For such a state, starting out on a single potential curve, a classical estimate of the velocity at the crossing gives $v\approx\sqrt{x_i^2-g_0^2/2}$. The evolution is said to be {\it adiabatic} if $P_{LZ}\approx1$, {\it diabatic} in the opposite limit and {\it mesobatic} in the intermediate regime. The same arguments applied to the JC model are considerably less direct, as the two potential curves are coupled by a ``momentum'' dependent term. This term is behind the discrepancy between the two models in a most unexpected way. For $|\Omega|\gg g_0\sqrt{\bar{n}}$, where $\bar{n}$ is the average number of photons (directly related to the velocity $v$), it follows that the equations can be decoupled, which corresponds to adiabatic elimination of the two atomic internal states. This will be confirmed in the next subsection. \subsection{Adiabatic diagonalization}\label{ssec2c} The previous section introduced the idea of potential curves associated with the Rabi and JC models, and the concept of adiabaticity became somewhat clear in this picture. Here we will stress this even further in order to gain a deeper understanding of the dynamics. As the JC model is exactly solvable, we only carry out the analysis for the Rabi model. The basis in which the Hamiltonian (\ref{conrabi}) is written might not be the most ``optimal'' one in the sense of decoupling the internal states. 
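The Landau-Zener estimate and the classical velocity estimate above are straightforward to evaluate; a small sketch (ours, with illustrative parameters) showing that fast crossings are nearly diabatic ($P_{LZ}$ small) and slow crossings nearly adiabatic:

```python
import math

def landau_zener(Omega, g0, v):
    # P_LZ = 1 - exp(-sqrt(2)*pi*Omega^2 / (8*g0*v)): population transfer
    # between the two levels; the evolution is adiabatic when P_LZ ~ 1
    return 1.0 - math.exp(-math.sqrt(2) * math.pi * Omega**2 / (8 * g0 * v))

def crossing_velocity(x_i, g0):
    # classical velocity estimate at the crossing for a wave packet
    # released at rest at x_i (requires x_i**2 > g0**2 / 2)
    return math.sqrt(x_i**2 - g0**2 / 2)

# fast crossing -> nearly diabatic; slow crossing -> nearly adiabatic
assert landau_zener(0.5, 1.0, 100.0) < 0.05
assert landau_zener(0.5, 1.0, 0.01) > 0.95
```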
In most cases, the {\it potential matrix} describing the coupled dynamics has a smooth $x$-dependence, such that the characteristic size of the derivative terms of the coupling elements decreases with the order of the derivative. A systematic way to remove the low-derivative terms from the off diagonals is by {\it adiabatic diagonalization}. The unitary matrix $U_1$ that diagonalizes the potential matrix for the Rabi model is obtained from (\ref{eigs}) by replacing the angle; $\tan\left(2\theta\right)=2\sqrt{2}g_0x/\Omega$. From the identity \begin{equation}\label{noncom} U_1^\dagger pU_1=p-\sigma_y\partial\theta, \end{equation} where \begin{equation} \partial\theta\equiv\frac{\partial\theta}{\partial x} \end{equation} it is clear that the transformed Hamiltonian is not diagonal. Explicitly we derive \begin{equation}\label{adham} \tilde{H}_{ad}=\frac{p^2}{2}+\frac{x^2}{2}+(\partial\theta)^2+\left[\begin{array}{cc}\lambda & \displaystyle{\frac{\partial^2\theta+2i(\partial\theta)p}{2}} \\ -\displaystyle{\frac{\partial^2\theta+2i(\partial\theta)p}{2}} & -\lambda\end{array}\right], \end{equation} where \begin{equation} \lambda=\sqrt{\frac{\Omega^2}{4}+2g_0^2x^2} \end{equation} and \begin{equation} \partial\theta=\frac{\sqrt{2}\Omega g_0}{\Omega^2+8g_0^2x^2},\hspace{1cm}\partial^2\theta=-\frac{16\sqrt{2}\Omega g_0^3x}{\left(\Omega^2+8g_0^2x^2\right)^2}. \end{equation} The sizes of $\partial\theta$ and $\partial^2\theta$ measure the amount of adiabaticity \cite{jonasad,com2}. From these we can draw several conclusions: a large $\Omega$ favors an adiabatic evolution, and so does a large $x$. The first is the regular adiabatic dispersive limit. The latter, however, is noticeably different from typical adiabaticity in the JC model, but it is nonetheless intuitive since only close to the curve crossing is the adiabaticity assumed to break down. 
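The closed forms for $\partial\theta$ and $\partial^2\theta$ follow from $\theta(x)=\frac{1}{2}\arctan(2\sqrt{2}g_0x/\Omega)$ and can be confirmed by finite differences; a quick numerical check (our own sketch, with illustrative parameters):

```python
import math

Omega, g0 = 1.0, 0.4

def theta(x):
    # mixing angle: tan(2*theta) = 2*sqrt(2)*g0*x / Omega
    return 0.5 * math.atan2(2 * math.sqrt(2) * g0 * x, Omega)

def dtheta(x):
    # d(theta)/dx = sqrt(2)*Omega*g0 / (Omega^2 + 8*g0^2*x^2)
    return math.sqrt(2) * Omega * g0 / (Omega**2 + 8 * g0**2 * x**2)

def d2theta(x):
    # second derivative; note the factor of x in the numerator
    return -16 * math.sqrt(2) * Omega * g0**3 * x / (Omega**2 + 8 * g0**2 * x**2)**2

h = 1e-6
for x in (-1.3, -0.2, 0.0, 0.7, 2.5):
    num1 = (theta(x + h) - theta(x - h)) / (2 * h)
    num2 = (dtheta(x + h) - dtheta(x - h)) / (2 * h)
    assert abs(num1 - dtheta(x)) < 1e-6
    assert abs(num2 - d2theta(x)) < 1e-5
```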
It seems that also $\Omega\rightarrow0$ would give adiabatic evolution, which, however, is false since this limit describes the diabatic evolution. We recognize the {\it adiabatic potentials} \begin{equation}\label{rabipot} V_{ad}^\pm(x)=\frac{x^2}{2}+(\partial\theta(x))^2\pm\lambda(x) \end{equation} and the internal {\it adiabatic basis states} as $|\uparrow\rangle=\cos(\theta)|+\rangle+\sin(\theta)|-\rangle$ and $|\downarrow\rangle=-\sin(\theta)|+\rangle+\cos(\theta)|-\rangle$. The internal {\it diabatic basis states} are termed $|u\rangle=\frac{1}{\sqrt{2}}(|+\rangle+|-\rangle)$ and $|d\rangle=\frac{1}{\sqrt{2}}(|+\rangle-|-\rangle)$, with corresponding {\it diabatic potentials} $V_d^\pm(x)=V_h(x\pm\sqrt{2}g_0)$. Thus, a general {\it adiabatic} or {\it diabatic state} is given by $\psi(x)|\uparrow(\downarrow)\rangle$ or $\psi(x)|u(d)\rangle$ respectively. Examples of the adiabatic (solid) and diabatic (dotted) potentials are presented in figure \ref{fig1}. \begin{figure} \caption{Adiabatic (solid) and diabatic (dotted) potential curves of the Rabi model.\label{fig1}} \end{figure} Labeling $H_{ad}^{(1)}=H_{ad}=U_1HU_1^\dagger$, the transformed Hamiltonian $H_{ad}^{(1)}$ defines a new adiabatic diagonalization matrix $U_2$ and so forth, leading to the expansion $H_{ad}^{(n)}=\Big[U_nU_{n-1}\cdots U_2U_1\Big]H\Big[U_1^\dagger U_2^\dagger\cdots U_{n-1}^\dagger U_n^\dagger\Big]$. We further note that the rather general form of the adiabatic Hamiltonian (\ref{adham}) shares great similarities with the JC one (\ref{conjc}) expressed in conjugate variables. \section{Regarding the $p$-dependent coupling}\label{sec3} The presence of the $p$-dependent term in the potential matrix of the JC model (\ref{conjc}) makes its evolution, in some sense, more complex (or less intuitive) than for its companion, the Rabi model. On the one hand, we know that the JC model is analytically solvable and any physical quantity can in principle be obtained from certain infinite sums. 
Nonetheless, the fact remains that the wave packet dynamics is still more involved due to this term. The analysis will be restricted to initial Fock (number) states or coherent states, which in the $x$-representation read, respectively, \begin{equation}\label{instate} \begin{array}{l} \displaystyle{\psi_n(x,0)=\frac{1}{\sqrt{2^nn!}}\left(\frac{1}{\pi}\right)^{1/4}\mathrm{H}_n(x)\mathrm{e}^{-\frac{x^2}{2}}},\\ \\ \displaystyle{\psi_\nu(x,0)=\left(\frac{1}{\pi}\right)^{1/4}\mathrm{e}^{-(\Im\nu)^2}\mathrm{e}^{-\frac{1}{2}\left(x-\sqrt{2}\nu\right)^2}}, \end{array} \end{equation} where $\mathrm{H}_n$ is the $n$th order Hermite polynomial and $\nu$ is the amplitude of the coherent state; $\bar{n}=|\nu|^2$. Thus, in the case of a coherent state, the classical velocity at the crossing is $v=\sqrt{2\bar{n}-g_0^2/2}$. For an initial Gaussian (\ref{instate}) centered at $x_i=0$, corresponding to field vacuum, one expects that the wave packet will split up and accelerate down towards the two potential minima \cite{com3}. This is indeed what we find for the Rabi model, while this cannot be the case for the JC model, since we know that for an initial Fock state the model exhibits Rabi oscillations. Thus, for the JC model the field wave packet will either Rabi oscillate between the two states $\psi_0(x)$ and $\psi_1(x)$ or remain in $\psi_0(x)$ throughout, depending on whether the initial atomic state is $|+\rangle$ or $|-\rangle$. This confinement of the wave packet to the origin is a direct effect of the $p$-dependent coupling, and holds for any Fock state $\psi_n(x)$ and therefore also for Fock states extending over large ranges of $x$. The wave packet must be considered in its entirety as a single object; in other words, the coherence extending over the whole wave packet must be taken into account and one cannot incoherently split up the wave packet into separate individual pieces. From this fact, understanding the influence of the $p$-dependent coupling becomes more challenging. 
Nonetheless, for an $n$-photon Fock state we have $\Delta p^2=\langle p^2\rangle-\langle p\rangle^2=n+1/2$, and it is reasonable to expect that the ``momentum'' coupling will affect the dynamics due to the large spread in $p$. These conclusions are visualized in figure \ref{fig2}, which displays the squared absolute amplitude of the wave packet in the two models, together with the atomic inversion $\langle\sigma_z\rangle$. We use the split operator method \cite{split} to obtain the wave packet \begin{equation} |\Psi(x,t)\rangle=\left[\begin{array}{c}\psi_+(x,t)\\ \psi_-(x,t)\end{array}\right], \end{equation} and its squared absolute amplitude $P(x,t)=|\psi_+(x,t)|^2+|\psi_-(x,t)|^2$. \begin{figure} \caption{Squared absolute amplitude of the wave packet and atomic inversion for the two models.\label{fig2}} \end{figure} \section{Collapse-revivals}\label{sec4} Collapses of physical variables are caused by dephasing between the constituent terms making up the quantum state. While the collapses are expected, more surprising is the phenomenon of revivals, occurring when the terms return back in phase. Hence, the evolution must be sufficiently coherent in order to be able to complete the rephasing. It is clear that revivals in the models considered in this paper are a direct outcome of the quantized `graininess' of the field and thus a genuinely quantum effect \cite{jcrev}. In other fields, the existence of collapse-revivals has been studied extensively, and probably the most significant contribution is the one of wave packet dynamics describing the vibrations in molecules \cite{wp}. Here the wave packet evolves in a bound electronic potential, where the discreteness derives from the vibrational eigen-modes in the particular electronic molecular state. For a harmonic potential, a wave packet bouncing back and forth in the potential will reshape after one period of oscillation, which defines the classical period of motion. 
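For completeness, a minimal split-operator propagator for the Rabi Hamiltonian (\ref{conrabi}) can be sketched as follows (our own illustration; grid size, time step and parameters are arbitrary choices, and the half-step potential propagator is the exact exponential of the $2\times2$ matrix $(x^2/2)\mathbb{1}+(\Omega/2)\sigma_z+\sqrt{2}g_0x\sigma_x$ at each grid point):

```python
import numpy as np

# Split-operator propagation of the Rabi model in conjugate variables:
# H = p^2/2 + x^2/2 + (Omega/2) sigma_z + sqrt(2) g0 x sigma_x
N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
Omega, g0, dt, steps = 1.0, 0.5, 0.005, 2000

# half-step potential propagator exp(-i dt/2 [(x^2/2) I + bz sigma_z + bx sigma_x])
bz, bx = Omega / 2, np.sqrt(2) * g0 * x
babs = np.hypot(bz, bx)
c, s = np.cos(babs * dt / 2), np.sin(babs * dt / 2) / babs
phase = np.exp(-1j * (x**2 / 2) * dt / 2)
U00 = phase * (c - 1j * s * bz)
U11 = phase * (c + 1j * s * bz)
U01 = phase * (-1j * s * bx)

Tk = np.exp(-1j * (k**2 / 2) * dt)   # full kinetic step, diagonal in k-space

# initial state: field vacuum Gaussian in the upper internal state
psi_up = np.pi**-0.25 * np.exp(-x**2 / 2).astype(complex)
psi_dn = np.zeros_like(psi_up)

for _ in range(steps):
    psi_up, psi_dn = U00 * psi_up + U01 * psi_dn, U01 * psi_up + U11 * psi_dn
    psi_up = np.fft.ifft(Tk * np.fft.fft(psi_up))
    psi_dn = np.fft.ifft(Tk * np.fft.fft(psi_dn))
    psi_up, psi_dn = U00 * psi_up + U01 * psi_dn, U01 * psi_up + U11 * psi_dn

dxg = L / N
norm = dxg * np.sum(np.abs(psi_up)**2 + np.abs(psi_dn)**2)
inversion = dxg * np.sum(np.abs(psi_up)**2 - np.abs(psi_dn)**2)
```

Since every factor in the Strang splitting is unitary, the norm is conserved to machine precision, which is a convenient sanity check on the propagator.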
If the potential, however, is anharmonic, the wave packet will not fully reshape after one classical period, bringing about the collapse. Depending on the degree of anharmonicity, the wave packet may reshape at later times, characterizing the revivals. For a fairly localized excited wave packet, with average quantum number $\bar{n}\gg1$, we assume $\Delta n/\bar{n}\ll1$ where $\Delta n$ is the spread of quantum numbers. In this case we expand the eigenenergies accordingly, \begin{equation}\label{eigexp} E(n)=E(n_0)+E'(n_0)(n-n_0)+\frac{E''(n_0)}{2}(n-n_0)^2+\frac{E'''(n_0)}{6}(n-n_0)^3+...\,, \end{equation} where $E'(n_0)=(dE(n)/dn)_{n=n_0}$ and so on. The various terms define different time scales according to \begin{equation}\label{timescales} \begin{array}{lll} T_{cl}=\frac{2\pi}{|E'(n_0)|}, & T_{rev}=\frac{2\pi}{|E''(n_0)|}, & T_{sup}=\frac{2\pi}{|E'''(n_0)|}, \end{array} \end{equation} characterizing the {\it classical}, {\it revival} and {\it superrevival} times respectively. For a harmonic oscillator only the first of these terms is non-zero, and it identifies the classical period $T_{cl}=2\pi/\omega$. We note that, assuming zero detuning, $\Delta=0$, the JC energies expand as $\sqrt{n}=\sqrt{\bar{n}}+\frac{1}{2\sqrt{\bar{n}}}(n-\bar{n})-\frac{1}{8\bar{n}^{3/2}}(n-\bar{n})^2+...\,$. This is not, however, the standard way of deriving the revival times in the JC model. Normally one solves for the time it takes for consecutive Rabi frequencies to differ by a multiple of $2\pi$, \begin{equation} \left(2\Omega_{\bar{n}+1}-2\Omega_{\bar{n}}\right)T'_{rev}=2\pi, \end{equation} where $\Omega_n=\sqrt{\frac{\Delta^2}{4}+g_0^2n}$. The revival time derived in this way is not identified with the previously defined $T_{rev}$ of equation (\ref{timescales}), but is rather associated with the classical time $T_{cl}$. In other words, $T_{cl}$ does {\it not} correspond to the time for the constituent wave packets to bounce back and forth in the potential. 
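The standard JC revival-time estimate is easily evaluated; in this sketch (ours) we also confirm the familiar zero-detuning asymptotics $T'_{rev}\approx2\pi\sqrt{\bar{n}}/g_0$ for $\bar{n}\gg1$:

```python
import math

def Omega_n(n, Delta, g0):
    # Rabi frequency of the n-th doublet
    return math.sqrt(Delta**2 / 4 + g0**2 * n)

def revival_time(nbar, Delta, g0):
    # standard JC estimate: consecutive Rabi frequencies rephase after T'_rev
    return 2 * math.pi / (2 * Omega_n(nbar + 1, Delta, g0) - 2 * Omega_n(nbar, Delta, g0))

# at zero detuning, T'_rev ~ 2*pi*sqrt(nbar)/g0 for nbar >> 1
nbar, g0 = 400.0, 0.1
approx = 2 * math.pi * math.sqrt(nbar) / g0
assert abs(revival_time(nbar, 0.0, g0) - approx) / approx < 0.01
```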
However, in the JC model, as well as in the Rabi model, one has two internal degrees of freedom, and the dynamics is obtained from both the internal wave packets and their coupled motion. In particular, reshaping of the wave packet means that both internal wave packets must reform simultaneously. Thus, using the definitions (\ref{timescales}), $T_{cl}$ is typically orders of magnitude larger in a multi-level system than for an internal structure-less wave packet evolution. Another way of picturing the phenomenon is that the constituent wave packets must overlap in phase space in order to give rise to interference, manifesting itself in the form of revivals. By introducing $\langle x\rangle_\pm$ as the mean position of the two wave packets $\psi_\pm(x,t)$, and similarly for the momentum $\langle p\rangle_\pm$, revivals occur when $\delta x\equiv\langle x\rangle_+-\langle x\rangle_-=0$ and $\delta p\equiv\langle p\rangle_+-\langle p\rangle_-=0$ simultaneously. In figure \ref{fig3} we show the atomic inversion and in figure \ref{fig4} the quantities $\delta x$ and $\delta p$ as functions of time $t$. In the $x$-direction, both models oscillate between $\pm10$, while in the momentum direction the Rabi model has $\langle p\rangle$ exceeding 10 on some occasions. \begin{figure} \caption{Atomic inversion as a function of time.\label{fig3}} \end{figure} \begin{figure} \caption{The quantities $\delta x$ and $\delta p$ as functions of time.\label{fig4}} \end{figure} As argued, the revivals obtained in the Rabi and JC models correspond to the classical period $T_{cl}$ in equation (\ref{timescales}), and not to $T_{rev}$ arising from the anharmonicities. Due to the internal two-level structure, the time $T_{cl}$ can become rather long provided the proper adiabatic, diabatic or mesobatic potentials differ in their corresponding harmonic frequencies. However, if the frequencies almost coincide, $T_{cl}$ need not be large, as seen in the example of figures \ref{fig5} (atomic inversion) and \ref{fig6} (wave packet amplitudes). 
Here the parameters are such that the evolution is approximately diabatic and the two Rabi diabatic potential curves share almost identical curvature, causing the two wave packets to oscillate with nearly the same classical frequencies. The JC model shows a more intricate dynamics, reminiscent of the one seen in figures \ref{fig3} and \ref{fig4}. \begin{figure} \caption{Atomic inversion for nearly coinciding diabatic frequencies.\label{fig5}} \end{figure} \begin{figure} \caption{Wave packet amplitudes for nearly coinciding diabatic frequencies.\label{fig6}} \end{figure} \section{Extension to multi-level internal structure}\label{sec5} In \cite{oldwine}, the analysis was restricted to two-level atoms interacting with a single cavity mode. The generalization to multi-level atoms and/or multi-mode fields is straightforward. However, the split operator method can only handle up to two dimensions, limiting the numerics to at most two modes, while the number of internal states may be as large as 20. However, other approximate wave packet methods exist for which the dimension might be considerably larger \cite{mch}. Here we only consider extensions in the direction of multi-level internal states; the many-mode situation will be dealt with in future projects. \subsection{Three-level $\Lambda$-atom}\label{ssec5a} In the regular two-level atom model, losses from the excited level often affect the dynamics in an undesired manner. This can be circumvented by coupling two metastable ``ground states'' in a three-level $\Lambda$-atom configuration, adiabatically eliminating the excited state. However, the full three-level dynamics shows interesting features beyond the two-level atom situation \cite{multi}. The model studied here is used mostly to present the method, as it can in fact be reduced to a two-level model. Additionally, generalizing this model to non-reducible three-level systems is straightforward. 
For simplicity we assume the two lower atomic states, $|g_1\rangle$ and $|g_2\rangle$, to be degenerate, and further that they dipole couple to the excited state $|e\rangle$ through couplings $\lambda_1$ and $\lambda_2$. The Hamiltonian (without the RWA) reads \begin{equation} H_\Lambda=\frac{p^2}{2}+\frac{x^2}{2}+\Omega\sigma_{ee}+\big[\lambda_1\left(\sigma_{g_1e}+\sigma_{eg_1}\right)+\lambda_2\left(\sigma_{g_2e}+\sigma_{eg_2}\right)\big]x, \end{equation} where $\sigma_{ij}=|i\rangle\langle j|$. It may be written in the atomic bare basis $\left\{|g_1\rangle,|e\rangle,|g_2\rangle\right\}$ as \begin{equation} H_\Lambda=\frac{p^2}{2}+\frac{x^2}{2}+\left[\begin{array}{ccc} 0 & \lambda_1x & 0 \\ \lambda_1x & \Omega & \lambda_2x \\ 0 & \lambda_2x & 0\end{array}\right]. \end{equation} Introducing the angles $\tan(\theta)=\lambda_1/\lambda_2$ and $\tan(2\phi)=2\lambda_0x/\Omega$, with $\lambda_0=\sqrt{\lambda_1^2+\lambda_2^2}$, the potential matrix is diagonalized by \begin{equation} U_2=\left[\begin{array}{ccc} \sin(\phi)\sin(\theta) & \cos(\theta) & \cos(\phi)\sin(\theta) \\ \cos(\phi) & 0 & -\sin(\phi) \\ \sin(\phi)\cos(\theta) & -\sin(\theta) & \cos(\phi)\cos(\theta)\end{array}\right]. \end{equation} As $\theta$ is $x$-independent we find \begin{equation} U_2pU_2^\dagger=p-iU_2\partial_\phi U_2^\dagger\partial_x\phi, \end{equation} and one derives that \begin{equation} U_2\partial_\phi U_2^\dagger=\left[\begin{array}{ccc}0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0\end{array}\right]. \end{equation} The adiabatic potentials are \begin{equation} V_\pm(x)=\frac{x^2}{2}+(\partial_x\phi)^2+\left(\frac{\Omega}{2}\pm\sqrt{\frac{\Omega^2}{4}+\lambda_0^2x^2}\right),\hspace{1.2cm}V_0(x)=\frac{x^2}{2}+(\partial_x\phi)^2, \end{equation} where the adiabatic state corresponding to $V_0(x)$ is called the {\it dark state}, which becomes degenerate with the state of $V_-(x)$ at $x=0$. This degeneracy is lifted, however, if a detuning is assumed between the two ground states. 
We note that the two adiabatic potentials $V_\pm(x)$ are similar to those of the Rabi model (\ref{rabipot}). Indeed, the absence of a second non-zero diagonal element in the potential matrix gives the system a symmetry such that it can be separated into a $2\times2$ problem. The unitary transformation \begin{equation} U_3=\frac{1}{\lambda_0}\left[\begin{array}{ccc} \lambda_1 & 0 & \lambda_2\\ 0 & \lambda_0 & 0 \\ \lambda_2 & 0 & -\lambda_1\end{array}\right] \end{equation} casts the potential matrix into the form \begin{equation} V'(x)=U_3V(x)U_3^\dagger=\left[\begin{array}{ccc} 0 & \lambda_0x & 0 \\ \lambda_0x & \Omega & 0\\ 0 & 0 & 0\end{array}\right]. \end{equation} We may remark that, as for the Rabi model, a semiclassical treatment of this $\Lambda$-system naturally leads to the solvable generalized three-level Landau-Zener model \cite{3lz}. Wave packet propagation methods in a related system have been considered in \cite{jonas2}. \subsection{Dicke model}\label{ssec5b} Extending the Rabi Hamiltonian to $N$ two-level atoms gives the Dicke model \cite{dicke} \begin{equation} H_D=\frac{p^2}{2}+\frac{x^2}{2}+\Omega S_x+\frac{g_0}{\sqrt{N}}S_zx \end{equation} with the total spin observables $S_k=\sum_{i=1}^N\sigma_k^{(i)}$, $k=x,z$, where $\sigma_k^{(i)}$ is the $k$th Pauli matrix for atom $i$. The scaling of the atom-field coupling by $\sqrt{N}$ is inserted to ensure a well defined thermodynamic limit as $N\rightarrow\infty$. Operators acting on different atoms commute, so that the adiabatic atomic ground state reads \begin{equation} |E_0\rangle=\big\{\cos(\theta)|+\rangle-\sin(\theta)|-\rangle\big\}^{\otimes N}, \end{equation} where $\otimes N$ denotes the $N$-fold tensor product and, as before, $\tan(2\theta)=2g_0x/\sqrt{N}\Omega$. Formally, the adiabatic diagonalization procedure proceeds as in the single-atom case, replacing Pauli matrices by total spin operators.
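The reduction to a $2\times2$ block is easy to verify directly. The following sketch (parameter values again arbitrary) checks that $U_3$ is orthogonal, the last row being $(\lambda_2,0,-\lambda_1)/\lambda_0$, and that $U_3V(x)U_3^\dagger$ decouples the dark state.

```python
import numpy as np

lam1, lam2, Omega, x = 0.3, 0.7, 1.0, 0.5
lam0 = np.hypot(lam1, lam2)

V = np.array([[0.0,      lam1 * x, 0.0],
              [lam1 * x, Omega,    lam2 * x],
              [0.0,      lam2 * x, 0.0]])

# Bright/dark rotation of |g1>, |g2>; the entry -lam1 in the last row
# is required for U3 @ U3.T to be the identity
U3 = np.array([[lam1, 0.0,  lam2],
               [0.0,  lam0, 0.0],
               [lam2, 0.0, -lam1]]) / lam0

# Expected: a 2x2 Rabi-like block in lam0*x plus a decoupled zero row/column
Vp = U3 @ V @ U3.T
```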
We unitarily transform the Hamiltonian by \begin{equation} U_4=\mathrm{e}^{-i\theta S_y} \end{equation} and the non-adiabatic corrections arise from equation (\ref{noncom}) upon substituting $\sigma_y$ by $S_y$. From the theory of angular momentum we directly find the adiabatic potentials \begin{equation} V_{m_s}(x)=\frac{x^2}{2}+m_s\sqrt{\frac{\Omega^2}{4}+\frac{g_0^2}{N}x^2},\hspace{1.2cm}m_s=-N,-N+1,...,N-1,N. \end{equation} Here $m_s$ is the quantum number for the total spin in the $z$-direction. It is convenient to introduce the {\it Dicke states} $|s,m_s\rangle$, being eigenstates of the total spin $S^2|s,m_s\rangle=s(s+1)|s,m_s\rangle$ and of the $z$-component $S_z|s,m_s\rangle=m_s|s,m_s\rangle$. The adiabatic ground state thus identifies with quantum numbers $s=N$ and $m_s=-N$, and applying the rotated raising operator $\tilde{S}_+=U_4^\dagger S_+U_4$ repeatedly to the ground state generates all the adiabatic states $|N,m_s\rangle$. The states with lower $s$ quantum number can be obtained by standard methods. In the large $N$ limit, quantum fluctuations can in general be regarded as small and one may linearize these terms. The Holstein-Primakoff representation of the spin operators \cite{hp} has turned out to be an efficient approach in analyzing the Dicke model \cite{eb1,eb2}. The spin operators are expressed in terms of boson operators as \begin{equation} S_z=b^\dagger b-\frac{N}{2},\hspace{1cm}S_+=b^\dagger\sqrt{N-b^\dagger b},\hspace{1cm}S_-=S_+^\dagger. \end{equation} Before linearizing, we note from figure \ref{fig1} that the low lying adiabatic potentials (we assume cold atoms and therefore a low temperature) either have one or two global minima, and this fact should be taken into account when expanding in $N^{-1}$.
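The Holstein-Primakoff representation can be sanity-checked on the finite-dimensional boson space $n=0,\dots,N$, on which it is exact. The sketch below (our own check, with $N=4$ chosen arbitrarily) builds $S_z=b^\dagger b-N/2$ and $S_+=b^\dagger\sqrt{N-b^\dagger b}$ as matrices and verifies the su(2) commutation relations in the standard spin-$N/2$ normalization.

```python
import numpy as np

N = 4                    # number of two-level atoms -> spin s = N/2
n = np.arange(N + 1)     # boson occupation numbers 0..N

# Boson annihilation operator on the truncated space: b|n> = sqrt(n)|n-1>
b = np.diag(np.sqrt(n[1:]), k=1)

Sz = np.diag(n - N / 2.0)                             # S_z = b^dag b - N/2
Sp = b.T @ np.diag(np.sqrt(np.maximum(N - n, 0.0)))   # S_+ = b^dag sqrt(N - b^dag b)
Sm = Sp.T                                             # S_- = S_+^dag (real matrices)
```

On this space $S_+|n\rangle=\sqrt{(n+1)(N-n)}|n+1\rangle$, which is exactly the angular-momentum raising operator for $s=N/2$, $m_s=n-N/2$, so $[S_+,S_-]=2S_z$ and $[S_z,S_\pm]=\pm S_\pm$ hold without truncation error.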
This is directly related to the presence of a quantum phase transition, with critical coupling $g_0^{(c)}=\sqrt{\Omega}/2$, between the {\it normal phase} of a vacuum field and all atoms de-excited and the {\it superradiant phase} of a macroscopic field and atomic excitation \cite{dickept1,dickept2}. Due to this it is convenient to coherently shift the boson operators as \cite{eb1,eb2} \begin{equation} a\rightarrow c+\alpha_s,\hspace{1.2cm}b\rightarrow d+\beta_s, \end{equation} where \begin{equation} \alpha_s=g_0\sqrt{N(1-\mu^2)},\hspace{1.2cm}\beta_s=\sqrt{\frac{N}{2}(1-\mu)} \end{equation} and \begin{equation} \mu=\left\{\begin{array}{ll} \left(\frac{g_0^{(c)}}{g_0}\right)^2, & g_0>g_0^{(c)},\\ 1, & g_0\leq g_0^{(c)}.\end{array}\right. \end{equation} Thus, we note that in the case of a single global minimum the shifts vanish. Here $c$ and $d$ represent quantum fluctuations around the classical values $\alpha_s$ and $\beta_s$. The resulting expanded Dicke Hamiltonian becomes \cite{eb1,eb2} \begin{equation} H_D'=c^\dagger c+\frac{\Omega(1+\mu)}{2\mu}d^\dagger d+\frac{\Omega(1-\mu)(3+\mu)}{8\mu(1+\mu)}(d^\dagger+d)^2+g_0\mu\sqrt{\frac{2}{1+\mu}}(c^\dagger+c)(d^\dagger+d). \end{equation} For $\mu=1$ (the ground adiabatic potential possesses only a single minimum) the Hamiltonian becomes bi-linear, and one may decouple the two boson modes into two disconnected harmonic oscillators. Using the Holstein-Primakoff representation we turn the fermionic degrees of freedom into a single bosonic degree of freedom. The cost of reducing the $2^N$ potential curves to a single one is that the wave packet now evolves in 2-D rather than 1-D. Nonetheless, wave packet propagation in 2-D is easily performed with the split operator method, and will be considered in future works.
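A small helper encoding these mean-field displacements (a sketch; the function name and sample values are ours, with $\mu=(g_0^{(c)}/g_0)^2$ above threshold as in the Emary-Brandes treatment) makes the phase structure explicit: the shifts vanish identically in the normal phase and switch on continuously at $g_0=g_0^{(c)}$.

```python
import numpy as np

def dicke_shifts(g0, Omega, N):
    """Coherent displacements (mu, alpha_s, beta_s) of the field and atomic
    boson modes for the Dicke model, with critical coupling g_c = sqrt(Omega)/2."""
    gc = np.sqrt(Omega) / 2.0
    mu = (gc / g0) ** 2 if g0 > gc else 1.0   # mu = 1 in the normal phase
    alpha_s = g0 * np.sqrt(N * (1.0 - mu ** 2))
    beta_s = np.sqrt(N * (1.0 - mu) / 2.0)
    return mu, alpha_s, beta_s
```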
Another algebraic method that could be considered is {\it Schwinger's oscillator model of angular momentum} \cite{schwinger}, in which, however, two bosonic degrees of freedom are introduced instead of the fermionic subsystem. \section{Conclusion}\label{sec6} The method of wave packet propagation, in which the cavity field's quadrature operators serve as conjugate variables, has been explored. It has been applied to the seminal JC model and its companion, the Rabi model, which is related to the JC one by the RWA. Various bases, different from the conventional bare and dressed bases, and their corresponding potential curves were considered. This numerical approach is more commonly used in chemical and molecular physics, where it has been utilized for more than three decades \cite{heller}. The effect of the RWA was discussed in this representation, and rather unexpected phenomena in the wave packet evolution appear once the RWA has been assumed. The main part of the analysis has been devoted to the collapse-revival effect of these models. Typically it is studied using algebraic methods, while we examine it from the shapes of the coupled potential curves. The terminology of collapse-revivals in wave packet models is not the same as the one for the JC model, and we have sorted this out. The internal two-level structure of the Rabi and JC models may cause very long characteristic time scales, compared to the ones of a wave packet in a single anharmonic potential. Finally, we have sketched how the method is extended to systems with more internal degrees of freedom, here especially the three-level $\Lambda$-atom and the Dicke model. For the Dicke model, we show how one may transform the internal degrees of freedom into a single ``external'' degree of freedom using the Holstein-Primakoff representation. More thorough research on multi-level and multi-mode systems is left for future work.
\ack The author acknowledges support from the Swedish government/Vetenskapsr{\aa}det and the European Commission (EMALI, MRTN-CT-2006-035369; SCALA, Contract No. 015714). \section*{References} \end{document}